Article

Gradient Learning under Tilted Empirical Risk Minimization

1 College of Science, Huazhong Agricultural University, Wuhan 430062, China
2 Hubei Key Laboratory of Applied Mathematics, Hubei University, Wuhan 430062, China
3 School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
4 Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2022, 24(7), 956; https://doi.org/10.3390/e24070956
Submission received: 16 June 2022 / Revised: 6 July 2022 / Accepted: 7 July 2022 / Published: 9 July 2022

Abstract:
Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, the majority of existing GL works are based on the empirical risk minimization (ERM) principle, which may suffer degraded performance in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model based on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint; the operator approximation technique plays the crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is proposed, and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.

1. Introduction

Data-driven variable selection aims to identify the informative features related to the response in high-dimensional statistics and plays a critical role in many areas. For example, if the milk production of dairy cows can be predicted from blood biochemical indexes, then practitioners are eager to know which indexes drive milk production, because each index must be measured separately at additional cost. An explainable and interpretable system for selecting the effective variables is therefore critical to convince domain experts. Existing variable selection methods can be roughly divided into three categories: linear models [1,2,3], nonlinear additive models [4,5,6], and partial linear models [7,8,9]. Although they achieve promising performance in some applications, these methods still suffer from two main limitations. Firstly, their target function is restricted by the assumption of a specific structure. Secondly, they cannot reveal how the coordinates vary with respect to each other. As an alternative, Mukherjee and Zhou [10] proposed the gradient learning (GL) model, which aims to learn the gradient functions and enjoys the model-free property.
Despite its empirical success [11,12,13], the GL model still has some limitations, such as high computational cost, a lack of sparsity in high-dimensional data, and a lack of robustness to complex noise. To this end, several variants of the GL model have been developed for individual purposes. For example, Dong and Zhou [14] proposed a stochastic gradient descent algorithm for learning the gradient and demonstrated that the gradient estimated by the algorithm converges to the true gradient. Mukherjee et al. [15] provided an algorithm for dimension reduction on manifolds for high-dimensional data with few observations. They obtained generalization error bounds for the gradient estimates and revealed that the convergence rate depends on the intrinsic dimension of the manifold. Borkar et al. [16] combined ideas from Spall's Simultaneous Perturbation Stochastic Approximation with compressive sensing and proposed to learn the gradient with few function evaluations. Ye et al. [17] proposed a sparse GL model to further address sparsity in high-dimensional variable selection via the estimated sparse gradients. He et al. [18] developed a three-step sparse GL method which allows efficient computation, admits general predictor effects, and attains desirable asymptotic sparsistency. Following the research direction of robustness, Guinney et al. [19] provided a multi-task model which is efficient and robust for high-dimensional data. In addition, Feng et al. [20] provided a robust gradient learning (RGL) framework by introducing a robust regression loss function, together with a simple computational algorithm based on gradient descent and an analysis of its convergence.
Despite rapid progress, the GL model and the extensions mentioned above are all established under the framework of empirical risk minimization (ERM). While enjoying nice statistical properties, ERM usually performs poorly in situations where average performance is not an appropriate surrogate for the problem of interest [21]. Recently, a novel framework named tilted empirical risk minimization (TERM) was proposed to flexibly address these deficiencies of ERM [21]. By using a new loss, the t-tilted loss, it has been shown that TERM (1) can increase or decrease the influence of outliers, respectively, to enable fairness or robustness; (2) has variance-reduction properties that can benefit generalization; and (3) can be viewed as a smooth approximation to a superquantile method. Considering these strengths, we propose to investigate GL under the framework of TERM. The main contributions of this paper can be summarized as follows:
  • New learning objective. We propose to learn the gradient function under the framework of TERM. Specifically, the t-tilted loss is embedded into the GL model. To the best of our knowledge, this is the first endeavor on this topic.
  • Theoretical guarantees. For the new learning objective, we estimate the generalization bound by error decomposition and an operator approximation technique, and further provide theoretical consistency and the convergence rate. In particular, the convergence rate recovers the result of traditional GL as t tends to 0 [10].
  • Efficient computation. A gradient descent method is provided to solve the proposed learning objective. By showing the smoothness and strong convexity of the learning objective, convergence to the optimal solution is proved.
The rest of this paper is organized as follows: Section 2 proposes GL with the t-tilted loss (TGL) and states the main theoretical results on the asymptotic estimation. Section 3 provides the computational algorithm and its convergence analysis. Numerical experiments on synthetic data sets are presented in Section 4. Finally, Section 5 concludes the paper.

2. Learning Objective

In this section, we introduce TGL and provide the main theoretical results on the asymptotic estimation.

2.1. Gradient Learning with t-Tilted Loss

Let $X$ be a compact subset of $\mathbb{R}^n$ and $Y \subseteq \mathbb{R}$. Assume that $\rho$ is a probability measure on $Z := X \times Y$. It induces the marginal distribution $\rho_X$ on $X$ and the conditional distributions $\rho(\cdot|x)$ at $x \in X$. Denote by $L^2_{\rho_X}$ the $L^2$ space with the norm $\|f\|_\rho = (\int_X |f(x)|^2 \, d\rho_X)^{1/2}$. In addition, the regression function $f_\rho: X \to Y$ associated with $\rho$ is defined as
$$f_\rho(x) = \int_Y y \, d\rho(y|x), \quad x \in X.$$
For $x = (x^1, x^2, \ldots, x^n)^T \in X$, the gradient of $f_\rho$ is the vector of functions (provided the partial derivatives exist)
$$\nabla f_\rho = \Big( \frac{\partial f_\rho}{\partial x^1}, \frac{\partial f_\rho}{\partial x^2}, \ldots, \frac{\partial f_\rho}{\partial x^n} \Big)^T.$$
The relevance between the $l$-th coordinate and $f_\rho$ can be evaluated via the norm of the partial derivative $\frac{\partial f_\rho}{\partial x^l}$, where a large value implies a large change in $f_\rho$ with respect to a small change in the $l$-th coordinate. This fact gives an intuitive motivation for GL. In terms of the Taylor series expansion, the following approximation holds:
$$f_\rho(x) \approx f_\rho(\tilde{x}) + \nabla f_\rho(\tilde{x}) \cdot (x - \tilde{x}),$$
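As a quick numerical sanity check of this first-order expansion (illustrative only, using an arbitrary test function of our choosing), the approximation error shrinks quadratically in $|x - \tilde{x}|$:

```python
import numpy as np

def f(x):
    return np.sin(x[0]) + x[1] ** 2

def grad_f(x):
    return np.array([np.cos(x[0]), 2 * x[1]])

x_tilde = np.array([0.3, -0.5])
errors = []
for h in (1e-1, 1e-2, 1e-3):
    x = x_tilde + h
    # first-order Taylor approximation around x_tilde
    approx = f(x_tilde) + grad_f(x_tilde) @ (x - x_tilde)
    errors.append(abs(f(x) - approx))

# shrinking |x - x_tilde| by a factor of 10 shrinks the error by ~100
```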
for $x \approx \tilde{x}$ and $x, \tilde{x} \in X$. Inspired by (1), we define the weighted square loss of $\vec{f}$ as
$$V(\vec{f}, z, \tilde{z}) = \omega(x, \tilde{x}) \big( \tilde{y} - y + \vec{f}(\tilde{x})^T (x - \tilde{x}) \big)^2, \quad \vec{f} \in (L^2_{\rho_X})^n, \ z, \tilde{z} \in Z,$$
where the restriction $x \approx \tilde{x}$ is enforced by the weights $\omega(x, \tilde{x}) = \frac{1}{s^{n+2}} e^{-|x - \tilde{x}|^2 / (2s^2)}$ with a constant $0 < s \le 1$; see, e.g., [10,11,19]. Then, the expected risk of $\vec{f}$ is given by
$$\mathcal{E}(\vec{f}) = \int_Z \int_Z V(\vec{f}, z, \tilde{z}) \, d\rho(z) \, d\rho(\tilde{z}).$$
As mentioned in [21], the minimizer of (3) usually performs poorly in situations where average performance is not an appropriate surrogate. Inspired by [21], for $t \in \mathbb{R} \setminus \{0\}$, we address these deficiencies by introducing the t-tilted loss and define the expected risk of $\vec{f}$ with the t-tilted loss as
$$\mathcal{E}(\vec{f}, t) = \frac{1}{t} \log \int_Z \int_Z e^{t V(\vec{f}, z, \tilde{z})} \, d\rho(z) \, d\rho(\tilde{z}).$$
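For intuition, the double integral can be estimated from a finite sample of pairwise losses; below is a minimal sketch (the `tilted_risk` helper is ours, not from the paper's code), computed with a log-sum-exp shift for numerical stability:

```python
import numpy as np

# Finite-sample estimate of the t-tilted risk: (1/t) log mean_i exp(t * V_i).
def tilted_risk(V, t):
    tV = t * np.asarray(V, dtype=float)
    m = tV.max()  # log-sum-exp shift for numerical stability
    return (m + np.log(np.mean(np.exp(tV - m)))) / t

rng = np.random.default_rng(0)
V = rng.exponential(size=1000)   # stand-in for sampled pairwise losses
# t > 0 tilts toward large losses (fairness); t < 0 toward small (robustness).
assert tilted_risk(V, 5.0) > V.mean() > tilted_risk(V, -5.0)
```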
Remark 1.
Note that $t \in \mathbb{R} \setminus \{0\}$ is a real-valued hyperparameter that yields a family of objectives addressing fairness ($t > 0$) or robustness ($t < 0$) by different choices. In particular, it recovers the expected risk (3) as $t \to 0$.
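The $t \to 0$ recovery can be checked numerically: by the expansion $\frac{1}{t}\log \mathbb{E}[e^{tV}] = \mathbb{E}[V] + \frac{t}{2}\mathrm{Var}(V) + O(t^2)$, the gap to the plain mean shrinks linearly in $t$ (an illustrative sketch with arbitrary loss values):

```python
import numpy as np

def tilted_mean(V, t):
    tV = t * np.asarray(V, dtype=float)
    m = tV.max()  # log-sum-exp shift
    return (m + np.log(np.mean(np.exp(tV - m)))) / t

V = np.array([0.1, 0.5, 2.0, 9.0])
gaps = [abs(tilted_mean(V, t) - V.mean()) for t in (1e-1, 1e-2, 1e-3)]
# the gap to the ordinary mean shrinks roughly by 10 per decade of t
```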
On this basis, GL with the t-tilted loss is formulated as the following regularization scheme:
$$\vec{f}_{\lambda, t} = \arg\min_{\vec{f} \in \mathcal{H}_K^n} \big\{ \mathcal{E}(\vec{f}, t) + \lambda \|\vec{f}\|_K^2 \big\},$$
where $\lambda > 0$ is a regularization parameter. Here, $K: X \times X \to \mathbb{R}$ is a Mercer kernel, i.e., continuous, symmetric, and positive semidefinite [22,23], and $\mathcal{H}_K$ is the RKHS induced by $K$, defined as the closure of the linear span of the set of functions $\{K_x := K(x, \cdot) : x \in X\}$ with the inner product $\langle \cdot, \cdot \rangle_K$ satisfying $\langle K_x, K_{\tilde{x}} \rangle_K = K(x, \tilde{x})$. The reproducing property takes the form $\langle K_x, f \rangle_K = f(x)$ for all $x \in X$, $f \in \mathcal{H}_K$. Then, we denote by $\mathcal{H}_K^n$ the $n$-fold RKHS with the inner product
$$\langle \vec{f}, \vec{h} \rangle_K = \sum_{l=1}^n \langle f_l, h_l \rangle_K, \quad \vec{f} = (f_1, f_2, \ldots, f_n)^T, \ \vec{h} = (h_1, h_2, \ldots, h_n)^T \in \mathcal{H}_K^n,$$
and norm $\|\vec{f}\|_K^2 = \langle \vec{f}, \vec{f} \rangle_K$.
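Concretely, for a finite expansion $f = \sum_i c_i K_{x_i}$, the RKHS norm reduces to the quadratic form $c^T \mathbf{K} c$ in the Gram matrix; a small sketch with a Gaussian kernel (our choice for illustration):

```python
import numpy as np

# For f = sum_i c_i K_{x_i}, the RKHS norm is ||f||_K^2 = c^T K c,
# where K_ij = K(x_i, x_j) is the Gram matrix of a Mercer kernel.
def gaussian_gram(X, s=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
K = gaussian_gram(X)

# Mercer kernel => positive semidefinite Gram matrix (up to round-off).
assert np.linalg.eigvalsh(K).min() > -1e-10

c = rng.normal(size=20)
norm_sq = c @ K @ c        # ||f||_K^2 >= 0
assert norm_sq >= 0
```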

2.2. Main Results

This subsection states our main theoretical results on the asymptotic estimation of $\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho$ on the space $(L^2_{\rho_X})^n$ with norm $\|\vec{f}\|_\rho = (\sum_{l=1}^n \|f_l\|_\rho^2)^{1/2}$. Before proceeding, we provide some necessary assumptions, which have been used extensively in the machine learning literature, e.g., [24,25].
Assumption 1.
Suppose that $\nabla f_\rho \in \mathcal{H}_K^n$ and the kernel $K$ is $C^3$. Then there exists a constant $c_\upsilon > 0$ such that
$$\big| f_\rho(x) - f_\rho(\tilde{x}) - \nabla f_\rho(\tilde{x})^T (x - \tilde{x}) \big| \le c_\upsilon \|x - \tilde{x}\|^2, \quad \forall x, \tilde{x} \in X.$$
Assumption 2.
Assume $|y| \le M$ and $|x| \le M_X$ almost surely. Suppose that, for some $\varsigma \in (0, \frac{2}{3})$ and $c_l, c_h > 0$, the marginal distribution $\rho_X$ satisfies
$$\rho_X\big( \{ x \in X : \inf_{\tilde{x} \in \mathbb{R}^n \setminus X} \|x - \tilde{x}\| \le s \} \big) \le c_h^2 s^{4\varsigma}, \quad \forall s > 0,$$
and that the density $p(z)$ of $d\rho(z)$ exists and satisfies
$$c_l \le p(z) \le c_h, \qquad |p(z) - p(\tilde{z})| \le c_h \|z - \tilde{z}\|^\varsigma, \quad \forall z, \tilde{z} \in Z.$$
Taking the functional derivative of (5), we find that $\vec{f}_{\lambda,t}$ can be expressed in terms of the following integral operator on the space $(L^2_{\rho_X})^n$.
Definition 1.
Let the integral operator $L_{K,s}: (L^2_{\rho_X})^n \to (L^2_{\rho_X})^n$ be defined by
$$L_{K,s} \vec{f} = \int_Z \int_Z \phi(z, \tilde{z}) \, \omega(x, \tilde{x}) \big( \vec{f}(\tilde{x})^T (x - \tilde{x}) \big) K_{\tilde{x}} (x - \tilde{x}) \, d\rho(\tilde{z}) \, d\rho(z),$$
where
$$\phi(z, \tilde{z}) = \Big( \int_Z \int_Z e^{t V(\vec{f}_{\lambda,t}, u, v)} \, d\rho(u) \, d\rho(v) \Big)^{-1} e^{t V(\vec{f}_{\lambda,t}, z, \tilde{z})}.$$
The operator $L_{K,s}$ has its range in $\mathcal{H}_K^n$. It can also be regarded as a positive operator on $\mathcal{H}_K^n$; we shall use the same notation for the operators on these two different domains. Given the definition of the integral operator $L_{K,s}$, we can express $\vec{f}_{\lambda,t}$ as follows.
Theorem 1.
Given the integral operator $L_{K,s}$, the following relationship holds:
$$\vec{f}_{\lambda,t} = (L_{K,s} + \lambda I)^{-1} \vec{f}_{\rho,s},$$
where $\vec{f}_{\rho,s} = \int_Z \int_Z \phi(z, \tilde{z}) \, \omega(x, \tilde{x}) \big( f_\rho(x) - f_\rho(\tilde{x}) \big) K_{\tilde{x}} (x - \tilde{x}) \, d\rho(\tilde{z}) \, d\rho(z)$, and $I$ is the identity operator.
Proof of Theorem 1.
To solve the scheme (5), we take the functional derivative with respect to $\vec{f}$, apply it to an element $\delta \vec{f}$ of $\mathcal{H}_K^n$, and set it equal to 0. We obtain
$$\int_Z \int_Z \phi(z, \tilde{z}) \, \omega(x, \tilde{x}) \big( \tilde{y} - y + \vec{f}_{\lambda,t}(\tilde{x})^T (x - \tilde{x}) \big) \, \delta \vec{f}(\tilde{x})^T (x - \tilde{x}) \, d\rho(\tilde{z}) \, d\rho(z) + \lambda \langle \vec{f}_{\lambda,t}, \delta \vec{f} \rangle_K = 0.$$
Since this holds for any $\delta \vec{f} \in \mathcal{H}_K^n$, it follows that
$$\int_Z \int_Z \phi(z, \tilde{z}) \, \omega(x, \tilde{x}) \big( \tilde{y} - y + \vec{f}_{\lambda,t}(\tilde{x})^T (x - \tilde{x}) \big) K_{\tilde{x}} (x - \tilde{x}) \, d\rho(\tilde{z}) \, d\rho(z) + \lambda \vec{f}_{\lambda,t} = 0,$$
and hence
$$\lambda \vec{f}_{\lambda,t} + L_{K,s} \vec{f}_{\lambda,t} = \vec{f}_{\rho,s}.$$
The desired result follows by rearranging terms.    □
On this basis, we bound the error $\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho$ by a functional analysis approach and present the error decomposition in the following proposition. The proof is straightforward and omitted for brevity.
Proposition 1.
For the $\vec{f}_{\lambda,t}$ defined in (5), it holds that
$$\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho \le \big\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda (L_{K,s} + \lambda I)^{-1} \nabla f_\rho\big\|_\rho + \lambda \big\|(L_{K,s} + \lambda I)^{-1} \nabla f_\rho\big\|_\rho.$$
In the sequel, we focus on bounding $\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda (L_{K,s} + \lambda I)^{-1} \nabla f_\rho\|_\rho$ and $\lambda \|(L_{K,s} + \lambda I)^{-1} \nabla f_\rho\|_\rho$, respectively. Before we embark on the proofs, we single out an important property of $\phi(z, \tilde{z})$ that will be useful later.
Lemma 1.
Under Assumptions 1 and 2, there exist constants $B_t$ and $A_t$ depending on $t$ such that
$$B_t = e^{-8|t|(M^2 + C_K M_X)} \le \phi(z, \tilde{z}) \le A_t = e^{8|t|(M^2 + C_K M_X)}.$$
Proof of Lemma 1.
Since the kernel $K$ is $C^3$ and $\vec{f}_{\lambda,t} \in \mathcal{H}_K^n$, we know from Zhou [26] that each component $f_{\lambda,t}^l$ is $C^1$, so there exists a constant $C_K$ satisfying $|\vec{f}_{\lambda,t}(x)|^2 \le C_K$ for all $x \in X$. Hence, using the Cauchy inequality, we have
$$V(\vec{f}_{\lambda,t}, z, \tilde{z}) = \omega(\tilde{x}, x) \big( \tilde{y} - y + \vec{f}_{\lambda,t}(\tilde{x})^T (x - \tilde{x}) \big)^2 \le 2 \big( 4M^2 + |\vec{f}_{\lambda,t}(\tilde{x})|^2 |x - \tilde{x}|^2 \big) \le 8 (M^2 + C_K M_X).$$
By a direct computation, we obtain
$$e^{-8|t|(M^2 + C_K M_X)} \le \Big( \int_Z \int_Z e^{t V(\vec{f}_{\lambda,t}, u, v)} \, d\rho(u) \, d\rho(v) \Big)^{-1} e^{t V(\vec{f}_{\lambda,t}, z, \tilde{z})} \le e^{8|t|(M^2 + C_K M_X)}.$$
The desired result follows.    □
Denote $\kappa = \sup_{x \in X} \sqrt{K(x, x)}$ and the moments of the Gaussian $J_p = \int_{\mathbb{R}^n} e^{-\frac{|x|^2}{2}} |x|^p \, dx$, $p = 1, 2, 3, \ldots$; we then establish the following lemma.
Lemma 2.
Under Assumptions 1 and 2, we have
$$\big\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda (L_{K,s} + \lambda I)^{-1} \nabla f_\rho\big\|_K \le \frac{2Ms}{\lambda} \kappa c_\upsilon c_h J_3 A_t.$$
Proof of Lemma 2.
In view of (10), it follows that
$$\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda (L_{K,s} + \lambda I)^{-1} \nabla f_\rho = (L_{K,s} + \lambda I)^{-1} \big( \vec{f}_{\rho,s} - L_{K,s} \nabla f_\rho \big).$$
Then, we have
$$\big\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda (L_{K,s} + \lambda I)^{-1} \nabla f_\rho\big\|_K \le \big\|(L_{K,s} + \lambda I)^{-1}\big\|_K \, \big\|\vec{f}_{\rho,s} - L_{K,s} \nabla f_\rho\big\|_K \le \frac{1}{\lambda} \big\|\vec{f}_{\rho,s} - L_{K,s} \nabla f_\rho\big\|_K.$$
We note that
$$J_p s^{p-2} = \int_{\mathbb{R}^n} \omega(x, \tilde{x}) |x - \tilde{x}|^p \, d\tilde{x} = \int_{\mathbb{R}^n} \frac{1}{s^{n+2}} e^{-\frac{|x - \tilde{x}|^2}{2s^2}} |x - \tilde{x}|^p \, d\tilde{x}, \quad p = 2, 3, \ldots.$$
From Assumptions 1 and 2, we have
$$\big\|\vec{f}_{\rho,s} - L_{K,s} \nabla f_\rho\big\|_K \le \int_Z \int_Z \omega(x, \tilde{x}) \|x - \tilde{x}\|^3 \phi(z, \tilde{z}) \|K_{\tilde{x}}\|_K c_\upsilon \, d\rho(z) \, d\rho(\tilde{z}) \le 2Ms \kappa c_\upsilon c_h J_3 A_t.$$
The desired result follows.    □
As for $\lambda \|(L_{K,s} + \lambda I)^{-1} \nabla f_\rho\|_\rho$, the multivariate mean value theorem ensures that there exists $R_t(\tilde{z}) = \phi(\tilde{z}, \eta_z)$, $\eta_z \in \mathbb{R}^n \times Y$, such that
$$\int_Z \int_{\mathbb{R}^n \times Y} \frac{e^{-\frac{|x - \tilde{x}|^2}{2s^2}} |x - \tilde{x}|^2}{s^{2+n}} \phi(z, \tilde{z}) K_{\tilde{x}} \vec{f}(\tilde{x}) p(\tilde{z}) \, dz \, d\rho(\tilde{z}) = \int_Z \int_{\mathbb{R}^n \times Y} \frac{e^{-\frac{|x - \tilde{x}|^2}{2s^2}} |x - \tilde{x}|^2}{s^{2+n}} R_t(\tilde{z}) K_{\tilde{x}} \vec{f}(\tilde{x}) p(\tilde{z}) \, dz \, d\rho(\tilde{z}).$$
From (14), we can define an integral operator associated with the Mercer kernel $K$ that is related to $L_{K,s}$. Using Lemmas 16 and 18 in [10], we establish the following lemma.
Lemma 3.
Under Assumption 2, denote $c_\rho = \big( 2M A_t \kappa^2 c_h (2J_{2+\varsigma} + J_4 + c_h J_2) \big)^{-\frac{1}{\varsigma}}$ and $V_p = \int_Z (p(z))^2 R_t(z) \, dz$. For any $0 < s \le \min\{ c_\rho \lambda^{\frac{1}{\varsigma}}, 1 \}$, we have
$$\lambda \big\|(L_{K,s} + \lambda I)^{-1} \nabla f_\rho\big\|_\rho \le 2\sqrt{\lambda} \, \big( V_p \, n (2\pi)^{\frac{n}{2}} M \big)^{-\frac{1}{2}} \, \big\|L_K^{-\frac{1}{2}} \nabla f_\rho\big\|_\rho,$$
where $L_K$ is the positive operator on $(L^2_{\rho_X})^n$ defined by
$$L_K \vec{f} = \int_Z K_x \vec{f}(x) \, \frac{p(z) R_t(z)}{V_p} \, d\rho(z), \quad \vec{f} \in (L^2_{\rho_X})^n.$$
Proof of Lemma 3.
To estimate (15), we need to consider the convergence of $L_{K,s}$ as $s \to 0$. Introduce the stepping stone
$$\vec{g} = \int_Z \int_Z \omega(x, \tilde{x}) \, R_t(\tilde{z}) \, K_{\tilde{x}} \, (x - \tilde{x})(x - \tilde{x})^T \vec{f}(\tilde{x}) \, p(\tilde{z}) \, dz \, d\rho(\tilde{z});$$
then, by the triangle inequality,
$$\Big\| L_{K,s}\vec{f} - 2MV_p n(2\pi)^{\frac{n}{2}} L_K \vec{f} \Big\|_K \le \big\| L_{K,s}\vec{f} - \vec{g} \big\|_K + \Big\| \vec{g} - 2MV_p n(2\pi)^{\frac{n}{2}} L_K \vec{f} \Big\|_K.$$
Using the multivariate mean value theorem (with intermediate points $z_\zeta, z_\sigma \in \mathbb{R}^n \times Y$) together with the Lipschitz condition on $p$ in Assumption 2, the first term is bounded by
$$\big\| L_{K,s}\vec{f} - \vec{g} \big\|_K \le 4Ms^\varsigma \kappa^2 c_h J_{2+\varsigma} A_t \|\vec{f}\|_K.$$
Noticing that $n(2\pi)^{\frac{n}{2}} = J_2$, we have
$$2MV_p n(2\pi)^{\frac{n}{2}} L_K \vec{f} = \int_Z \int_{\mathbb{R}^n \times Y} \omega(x, \tilde{x}) \, R_t(\tilde{z}) \, K_{\tilde{x}} \vec{f}(\tilde{x}) \, (x - \tilde{x})^T (x - \tilde{x}) \, p(\tilde{z}) \, dz \, d\rho(\tilde{z}).$$
Then, by (7) and Lemma 16 in [10], for $0 < s \le 1$ the second term satisfies
$$\Big\| \vec{g} - 2MV_p n(2\pi)^{\frac{n}{2}} L_K \vec{f} \Big\|_K \le 2Ms^\varsigma \kappa^2 c_h (J_4 + c_h J_2) A_t \|\vec{f}\|_K.$$
Combining the above two estimates, for any $0 < s \le 1$,
$$\Big\| L_{K,s} - 2MV_p n(2\pi)^{\frac{n}{2}} L_K \Big\|_K \le 2M A_t \kappa^2 c_h s^\varsigma \big( 2J_{2+\varsigma} + J_4 + c_h J_2 \big).$$
Using Lemma 18 in [10] and (16), the desired result follows.    □
Since the measure $d\tilde{\rho} = \int_Y \frac{p(z) R_t(z)}{V_p} \, d\rho$ is a probability measure on $X$, the operator $L_K$ can be used to define a reproducing kernel Hilbert space [22]. Let $L_K^{1/2}$ be the $\frac{1}{2}$-th power of the positive operator $L_K$ on $(L^2_{\tilde{\rho}})^n$ with norm $\|\vec{f}\|_{\tilde{\rho}} = (\sum_{l=1}^n \|f_l\|_{\tilde{\rho}}^2)^{1/2}$, where $\|f_l\|_{\tilde{\rho}} = (\int_X |f_l(x)|^2 \, d\tilde{\rho})^{1/2}$; its range lies in $\mathcal{H}_K^n$. Then, $\mathcal{H}_K^n$ is the range of $L_K^{1/2}$ and
$$\|\vec{f}\|_{\tilde{\rho}} = \|L_K^{1/2} \vec{f}\|_K, \quad \vec{f} \in (L^2_{\tilde{\rho}})^n.$$
The assumption we shall use is $\|L_K^{-1/2} \nabla f_\rho\|_{\tilde{\rho}} < \infty$; it means that $\nabla f_\rho$ lies in the range of $L_K^{1/2}$. Finally, we can give the upper bound of the error $\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho$.
Theorem 2.
Under Assumptions 1 and 2, choose $\lambda = m^{-\frac{\varsigma}{n+2+3\varsigma}}$ and $s = (\kappa c_h)^{-\frac{2}{\varsigma}} m^{-\frac{1}{n+2+3\varsigma}}$. For any $m \ge (\kappa c_h)^{2(n+2+3\varsigma)/\varsigma}$, there exists a constant $C_{\rho,K}$ such that
$$\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho \le C_{\rho,K} A_t B_t^{-1} m^{-\frac{\varsigma}{2n+4+6\varsigma}}.$$
Proof of Theorem 2.
Using the Cauchy inequality, for $\vec{f} = (f_1, f_2, \ldots, f_n)^T \in (L^2_{\rho_X})^n$, we have
$$\int_X f_l(x)^2 \, d\rho_X(x) \le \Big( \int_Z f_l(x)^2 \frac{p(z) R_t(z)}{V_p} \, d\rho(z) \Big)^{\frac{1}{2}} \Big( \int_Z f_l(x)^2 \frac{V_p}{p(z) R_t(z)} \, d\rho(z) \Big)^{\frac{1}{2}} \le \sqrt{\frac{V_p}{c_l B_t}} \Big( \int_Z f_l(x)^2 \frac{p(z) R_t(z)}{V_p} \, d\rho(z) \Big)^{\frac{1}{2}} \Big( \int_X f_l(x)^2 \, d\rho_X(x) \Big)^{\frac{1}{2}}.$$
It follows that
$$\Big( \int_X f_l(x)^2 \, d\rho_X(x) \Big)^{\frac{1}{2}} \le \sqrt{\frac{V_p}{c_l B_t}} \Big( \int_Z f_l(x)^2 \frac{p(z) R_t(z)}{V_p} \, d\rho(z) \Big)^{\frac{1}{2}}.$$
According to the definitions of $\|f_l\|_\rho$ and $\|f_l\|_{\tilde{\rho}}$, it is then immediate that
$$\|\vec{f}\|_\rho \le \sqrt{\frac{V_p}{c_l B_t}} \, \|\vec{f}\|_{\tilde{\rho}}.$$
Since $s = (\kappa c_h)^{-\frac{2}{\varsigma}} \lambda^{\frac{1}{\varsigma}}$ and $\lambda = (\frac{1}{m})^{\frac{\varsigma}{n+2+3\varsigma}}$, we see from the fact $J_2 > 1$ that the restriction $0 < s \le \min\{c_\rho \lambda^{\frac{1}{\varsigma}}, 1\}$ in Lemma 3 is satisfied for $m \ge (\kappa c_h)^{2(n+2+3\varsigma)/\varsigma}$. Then, combining Lemmas 2 and 3, Equation (17), and inequality (19), we have
$$\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho \le \big\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda(L_{K,s}+\lambda I)^{-1}\nabla f_\rho\big\|_\rho + \lambda\big\|(L_{K,s}+\lambda I)^{-1}\nabla f_\rho\big\|_\rho \le \kappa \big\|\vec{f}_{\lambda,t} - \nabla f_\rho + \lambda(L_{K,s}+\lambda I)^{-1}\nabla f_\rho\big\|_K + \lambda\big\|(L_{K,s}+\lambda I)^{-1}\nabla f_\rho\big\|_\rho \le \frac{2Ms}{\lambda}\kappa^2 c_\upsilon c_h J_3 A_t + 2\sqrt{\frac{V_p}{c_l B_t}}\,\sqrt{\lambda}\,\big(M V_p n(2\pi)^{\frac{n}{2}}\big)^{-\frac{1}{2}} \|\nabla f_\rho\|_K \le C_{\rho,K} A_t B_t^{-1} m^{-\frac{\varsigma}{2n+4+6\varsigma}},$$
where $C_{\rho,K} = (2\kappa c_h)^{\frac{2}{\varsigma}+2} \max\big\{ M\kappa^2 c_\upsilon c_h J_3, \ \sqrt{\frac{V_p}{c_l}}\,\big(M V_p n(2\pi)^{\frac{n}{2}}\big)^{-\frac{1}{2}} C_K \big\}$.    □
Remark 2.
Theorem 2 shows that $\|\vec{f}_{\lambda,t} - \nabla f_\rho\|_\rho \to 0$ as $m \to +\infty$, i.e., the scheme (5) is consistent. In addition, since $A_t$ and $B_t$ tend to 1 as $t$ tends to 0, the convergence rate of scheme (5) is $m^{-\frac{\varsigma}{2n+4+6\varsigma}}$, which is consistent with the previous result in [10]. This means that the proposed method can be regarded as an extension of traditional GL.

3. Computing Algorithm

In this section, we present the GL model under TERM and propose to use the gradient descent algorithm to find the minimizer. Finally, the convergence of the proposed algorithm is also guaranteed.
Given a set of observations $\mathbf{z} = \{z_i = (x_i, y_i)\}_{i=1}^m \in Z^m$ drawn independently according to $\rho$, assume that the RKHS is rich enough that the kernel matrix $\mathbf{K} = (K(x_i, x_j))_{i,j=1}^m$ is strictly positive definite [27]. According to the representer theorem for kernel methods [28], the approximation of $\vec{f}_{\lambda,t}$ has the form $\sum_{i=1}^m c_i K_{x_i}$ with $c_i = (c_i^1, \ldots, c_i^n)^T \in \mathbb{R}^n$. Letting $c = (c_1^T, \ldots, c_m^T)^T \in \mathbb{R}^{mn}$, the empirical version of (4) is formulated as follows:
c z , λ : = arg min c R m n E z ( c , t ) + λ i = 1 m c i K x i K 2 ,
where
E z ( c , t ) = 1 t log 1 m 2 i , j = 1 m exp t ω ( x i , x j ) ( y i y j + p = 1 m K ( x p , x i ) x ^ i j c p ) 2 ,
with x ^ i j = ( x j x i ) T . For simplicity, we denote
V z ( c , z i , z j ) = ω ( x i , x j ) ( y i y j + p = 1 m K ( x p , x i ) x ^ i j c p ) 2
and
ϕ z ( c , z i , z j ) = exp t V z ( c , z i , z j ) E z ( c , t ) .
The gradients of $\mathcal{E}_{\mathbf{z}}(c, t)$ and $\|\sum_{i=1}^m c_i K_{x_i}\|_K^2$ at $c$ are given by
$$\nabla_c \mathcal{E}_{\mathbf{z}}(c, t) = \frac{1}{m^2} \sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \, 2\, \omega(x_i, x_j) \Big( y_i - y_j + \sum_{p=1}^m K(x_p, x_i) \hat{x}_{ij} c_p \Big) \times \big( K(x_1, x_i) \hat{x}_{ij}, \ldots, K(x_m, x_i) \hat{x}_{ij} \big)^T$$
and
$$\nabla_c \Big\| \sum_{i=1}^m c_i K_{x_i} \Big\|_K^2 = 2 \Big( \sum_{i=1}^m K(x_i, x_1) c_i^T, \ldots, \sum_{i=1}^m K(x_i, x_m) c_i^T \Big)^T.$$
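As an illustrative implementation (all helper names and the Gaussian kernel bandwidth are ours, not from the paper's code), the pairwise losses $V_{\mathbf{z}}$, the empirical objective $\mathcal{E}_{\mathbf{z}}(c,t)$, and the tilted weights $\phi_{\mathbf{z}}$ can be computed as follows; the identity $\sum_{i,j}\phi_{\mathbf{z}} = m^2$ used later in the proof of Theorem 3 serves as a check:

```python
import numpy as np

# Illustrative-only implementation of the pairwise losses V_z and the
# empirical tilted objective E_z(c, t) with a Gaussian kernel.
def pairwise_losses(X, y, C, s=0.5):
    m, n = X.shape
    D = X[:, None, :] - X[None, :, :]            # D[i, j] = x_i - x_j
    w = np.exp(-(D ** 2).sum(-1) / (2 * s ** 2)) / s ** (n + 2)
    Kmat = np.exp(-(D ** 2).sum(-1) / 2.0)       # kernel matrix K(x_p, x_i)
    F = Kmat.T @ C                               # F[i] = sum_p K(x_p, x_i) c_p
    lin = np.einsum('id,ijd->ij', F, -D)         # F[i]^T (x_j - x_i)
    return w * (y[:, None] - y[None, :] + lin) ** 2

def tilted_objective(V, t):
    tV = t * V
    shift = tV.max()                             # log-sum-exp shift
    return (shift + np.log(np.mean(np.exp(tV - shift)))) / t

rng = np.random.default_rng(2)
m, n = 10, 3
X, y = rng.normal(size=(m, n)), rng.normal(size=m)
C = 0.1 * rng.normal(size=(m, n))
t = -1.0
V = pairwise_losses(X, y, C)
E = tilted_objective(V, t)
phi = np.exp(t * V - t * E)                      # tilted weights phi_z
assert abs(phi.mean() - 1.0) < 1e-10             # i.e., sum_{i,j} phi = m^2
```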
Correspondingly, scheme (20) can be solved via the following gradient method:
$$c_k = c_{k-1} - \alpha \Big( \nabla_c \mathcal{E}_{\mathbf{z}}(c_{k-1}, t) + \lambda \nabla_c \Big\| \sum_{i=1}^m c_{i,k-1} K_{x_i} \Big\|_K^2 \Big),$$
where $c_k = (c_{1,k}^T, \ldots, c_{m,k}^T)^T \in \mathbb{R}^{mn}$ is the solution at iteration $k$, and $\alpha$ is the step size. The detailed gradient descent scheme is stated in Algorithm 1. To prove convergence, we introduce the following lemma, derived from Theorem 1 in [29].
Lemma 4.
When $h(c)$ has a $\gamma$-Lipschitz continuous gradient ($\gamma$-smoothness) and is $\mu$-strongly convex, then for the basic unconstrained optimization problem $c^* = \arg\min h(c)$, the gradient descent algorithm $c_k = c_{k-1} - \frac{1}{\gamma} \nabla h(c_{k-1})$ with step size $1/\gamma$ has a global linear convergence rate:
$$h(c_k) - h(c^*) \le \Big( 1 - \frac{\mu}{\gamma} \Big)^k \big( h(c_0) - h(c^*) \big).$$
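Lemma 4 can be verified directly on a toy strongly convex quadratic, where $\mu$ and $\gamma$ are the extreme eigenvalues of the Hessian (a self-contained sketch, not the paper's objective):

```python
import numpy as np

# Lemma 4 on h(c) = 0.5 c^T A c, whose minimizer is c* = 0 with h(c*) = 0;
# mu and gamma are the smallest and largest eigenvalues of A.
rng = np.random.default_rng(4)
M = rng.normal(size=(8, 8))
A = M @ M.T + 0.5 * np.eye(8)
eigs = np.linalg.eigvalsh(A)
mu, gamma = eigs[0], eigs[-1]

h = lambda v: 0.5 * v @ A @ v
c = rng.normal(size=8)
gaps = []
for k in range(50):
    gaps.append(h(c))
    c = c - (1.0 / gamma) * (A @ c)     # gradient step with alpha = 1/gamma

rate = 1.0 - mu / gamma                 # linear convergence factor
assert gaps[10] <= rate ** 10 * gaps[0] + 1e-12
assert gaps[49] <= rate ** 49 * gaps[0] + 1e-12
```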
Algorithm 1 Gradient descent for the Gradient Learning under TERM
[The pseudocode is shown as a figure in the published version: initialize $c_0$, then iterate the gradient step above until convergence.]
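The loop of Algorithm 1 follows the standard gradient descent template. As an illustration of that template only (on a simplified stand-in objective, tilted least squares with a Euclidean penalty, not the full kernelized objective (20); all parameter values are ours):

```python
import numpy as np

# Stand-in objective: L(w) = (1/t) log mean_i exp(t r_i^2) + lam ||w||^2,
# with r = X w - y, minimized by fixed-step gradient descent.
def tilted_ls_grad(w, X, y, t, lam):
    r = X @ w - y
    tr2 = t * r ** 2
    phi = np.exp(tr2 - tr2.max())
    phi /= phi.sum()                    # normalized tilted weights
    return 2 * X.T @ (phi * r) + 2 * lam * w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
w_true = np.array([1.0, -1.0, 0.5, 0.0, 2.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(5)
t, lam, alpha = -0.1, 1e-3, 0.02        # t < 0 downweights large residuals
for _ in range(10000):
    w = w - alpha * tilted_ls_grad(w, X, y, t, lam)

assert np.max(np.abs(w - w_true)) < 0.2
```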
From Lemma 4, we obtain the following conclusion, which states that the proposed algorithm converges to the minimizer of (20) for a suitable step size $\alpha$.
Theorem 3.
Denote $\mathcal{L}(c, t) = \mathcal{E}_{\mathbf{z}}(c, t) + \lambda \|\sum_{i=1}^m c_i K_{x_i}\|_K^2$, and let $\beta_{max}, \beta_{min}$ be the maximum and minimum eigenvalues of the kernel matrix $\mathbf{K}$, respectively. There exist $\mu \in \mathbb{R}^+$ and $\gamma \in \mathbb{R}^+$ depending on $t$ such that $\mathcal{L}(c, t)$ is $\gamma$-smooth and $\mu$-strongly convex for any $t > -n\lambda\beta_{min} / \big( 64 (M^2 + C_K M_X) M_X^2 m \kappa^4 \big)$. In addition, let $c_{\mathbf{z},\lambda}$ be the minimizer defined in scheme (20) and $\{c_k\}$ the sequence generated by Algorithm 1 with $\alpha = 1/\gamma$; then we have
$$\mathcal{L}(c_k, t) - \mathcal{L}(c_{\mathbf{z},\lambda}, t) \le \Big( 1 - \frac{\mu}{\gamma} \Big)^k \big( \mathcal{L}(c_0, t) - \mathcal{L}(c_{\mathbf{z},\lambda}, t) \big).$$
Proof of Theorem 3.
Note that the strong convexity and the smoothness are determined by the Hessian matrix, and we provide the proof by splitting the Hessian into three parts:
$$\nabla^2_{cc^T} \mathcal{L}(c, t) = \underbrace{\frac{t}{m^2} \sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big) \nabla_c V_{\mathbf{z}}(c, z_i, z_j)^T}_{E_1} + \underbrace{\frac{1}{m^2} \sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \nabla^2_{cc^T} V_{\mathbf{z}}(c, z_i, z_j)}_{E_2} + \underbrace{\lambda \nabla^2_{cc^T} \Big\| \sum_{i=1}^m c_i K_{x_i} \Big\|_K^2}_{E_3}.$$
(1) Estimation of $E_1$: Note that $m^2 \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) = \sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \nabla_c V_{\mathbf{z}}(c, z_i, z_j)$ and $\sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) = m^2$. It follows that
$$\sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big) \nabla_{c^T} \mathcal{E}_{\mathbf{z}}(c, t) = 0.$$
Hence, we obtain
$$E_1 = \frac{t}{m^2} \sum_{i,j=1}^m \phi_{\mathbf{z}}(c, z_i, z_j) \big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big) \big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big)^T.$$
Similar to the proof of Lemma 1, for $i, j = 1, \ldots, m$, it directly follows that
$$\omega(x_i, x_j) \Big( y_i - y_j + \sum_{p=1}^m K(x_p, x_i) \hat{x}_{ij} c_p \Big)^2 \le 2 (M^2 + C_K M_X).$$
Note that, for $i, j = 1, \ldots, m$, $\nabla_c V_{\mathbf{z}}(c, z_i, z_j) \nabla_c V_{\mathbf{z}}(c, z_i, z_j)^T$ has a single nonzero eigenvalue, which implies
$$\nabla_c V_{\mathbf{z}}(c, z_i, z_j) \nabla_c V_{\mathbf{z}}(c, z_i, z_j)^T \preceq 32 (M^2 + C_K M_X) M_X^2 m \kappa^4 I_{mn},$$
and hence
$$\big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big)^T \big( \nabla_c V_{\mathbf{z}}(c, z_i, z_j) - \nabla_c \mathcal{E}_{\mathbf{z}}(c, t) \big) \le 128 (M^2 + C_K M_X) M_X^2 m \kappa^4.$$
It follows that the maximum eigenvalue of $E_1$ is at most $128 |t| (M^2 + C_K M_X) M_X^2 m \kappa^4$. Then, the following relations hold:
$$0_{mn} \preceq E_1 \preceq 128 t (M^2 + C_K M_X) M_X^2 m \kappa^4 I_{mn}, \ t > 0; \qquad 128 t (M^2 + C_K M_X) M_X^2 m \kappa^4 I_{mn} \preceq E_1 \preceq 0_{mn}, \ t < 0,$$
where $0_{mn}$ is the $mn \times mn$ matrix with all elements zero.
(2) Estimation of $E_2$: Note that $\nabla^2_{cc^T} V_{\mathbf{z}}(c, z_i, z_j)$ can be rewritten as
$$2\, \omega(x_i, x_j) \big( K(x_1, x_i) \hat{x}_{ij}, \ldots, K(x_m, x_i) \hat{x}_{ij} \big) \big( K(x_1, x_i) \hat{x}_{ij}, \ldots, K(x_m, x_i) \hat{x}_{ij} \big)^T.$$
Similar to (25), we have $\nabla^2_{cc^T} V_{\mathbf{z}}(c, z_i, z_j) \preceq 2 \kappa^4 M_X^2 I_{mn}$. It follows that
$$0_{mn} \preceq E_2 \preceq 2 \kappa^4 M_X^2 I_{mn}.$$
(3) Estimation of $E_3$: By a direct computation, we have
$$E_3 = 2\lambda \begin{pmatrix} I_n K(x_1, x_1) & I_n K(x_1, x_2) & \cdots & I_n K(x_1, x_m) \\ I_n K(x_2, x_1) & I_n K(x_2, x_2) & \cdots & I_n K(x_2, x_m) \\ \vdots & \vdots & \ddots & \vdots \\ I_n K(x_m, x_1) & I_n K(x_m, x_2) & \cdots & I_n K(x_m, x_m) \end{pmatrix}.$$
Setting $Q = (q_{11}, q_{21}, \ldots, q_{n1}, \ldots, q_{1m}, q_{2m}, \ldots, q_{nm})^T \in \mathbb{R}^{mn}$, we deduce that
$$Q^T E_3 Q = 2\lambda \sum_{l=1}^n \sum_{i=1}^m \sum_{j=1}^m K(x_i, x_j) q_{li} q_{lj}.$$
Since the matrix of the quadratic form $\sum_{i=1}^m \sum_{j=1}^m K(x_i, x_j) q_{li} q_{lj}$ is $\mathbf{K}$, we obtain
$$2\lambda n \beta_{min} I_{mn} \preceq E_3 \preceq 2\lambda n \beta_{max} I_{mn}.$$
Combining (26), (27), and (28), there exist two constants
$$\mu = \min\big\{ 2n\lambda\beta_{min} + 128 t (M^2 + C_K M_X) M_X^2 m \kappa^4, \ 2n\lambda\beta_{min} \big\}$$
and
$$\gamma = \max\big\{ 128 t (M^2 + C_K M_X) M_X^2 m \kappa^4 + 2n\lambda\beta_{max}, \ 2 \kappa^4 M_X^2 + 2n\lambda\beta_{max} \big\}$$
satisfying
$$\mu I_{mn} \preceq \nabla^2_{cc^T} \mathcal{L}(c, t) \preceq \gamma I_{mn}.$$
Note that $\mu > 0$ when $t > -n\lambda\beta_{min} / \big( 64 (M^2 + C_K M_X) M_X^2 m \kappa^4 \big)$, which means that $\mathcal{L}(c, t)$ is $\gamma$-smooth and $\mu$-strongly convex. The desired result follows from Lemma 4.    □

4. Simulation Experiments

In this section, we carry out simulation studies with the TGL model ($t < 0$ for robustness) on a synthetic data set for the robust variable selection problem. Let the observation data set $\mathbf{z} = \{z_i = (x_i, y_i)\}_{i=1}^m$ with $x_i = (x_i^1, \ldots, x_i^n)$ be generated by the following linear equation:
$$y_i = x_i \cdot w + \epsilon,$$
where $\epsilon$ represents the outliers or noise. Specifically, three different noises are used: Cauchy noise with location parameter $a = 2$ and scale parameter $b = 4$; Chi-square noise with 5 degrees of freedom scaled by 0.01; and Gaussian noise $N(0, 0.3)$. Three different proportions of outliers, $0\%$, $20\%$, and $40\%$, are drawn from the Gaussian noise $N(0, 100)$. Meanwhile, we consider two different cases with $(m, n) = (50, 50)$ and $(30, 80)$, corresponding to $m = n$ and $m < n$, respectively. The weight vector $w = (w_1, \ldots, w_n)$ over the dimensions is constructed as follows:
$$w_l = 2 + 0.5 \sin\Big( \frac{2\pi l}{10} \Big) \ \text{for } l = 1, \ldots, N_n, \quad \text{and } w_l = 0 \text{ otherwise}.$$
Here, $N_n = 30$ is the number of effective variables. Two situations, uncorrelated variables $x \sim N(0_n, I_n)$ and correlated variables $x \sim N(0_n, \Sigma_n)$, are implemented for $x$, where the covariance matrix $\Sigma_n$ has $(l, p)$-th entry $0.5^{|l-p|}$.
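The data-generating process above can be sketched as follows (a minimal illustration of one configuration; the RNG seed is ours, and we read $N(0, 0.3)$ and $N(0, 100)$ as standard deviation 0.3 and variance 100, respectively, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, N_n = 50, 50, 30

# Sparse sinusoidal weights on the first N_n coordinates.
w = np.zeros(n)
l = np.arange(1, N_n + 1)
w[:N_n] = 2 + 0.5 * np.sin(2 * np.pi * l / 10)

# Correlated inputs with covariance Sigma_{lp} = 0.5^{|l-p|}.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
X = rng.multivariate_normal(np.zeros(n), Sigma, size=m)

# Gaussian noise plus a 20% fraction of N(0, 100) outliers (std = 10).
eps = rng.normal(0.0, 0.3, size=m)
mask = rng.random(m) < 0.2
eps[mask] = rng.normal(0.0, 10.0, size=mask.sum())
y = X @ w + eps
```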
For the variable selection algorithms, we run TGL with $t = -6 \times 10^{-6}, -1, -10$ and compare with the traditional GL model [10] and the RGL model [20]. For the GL and TGL models, $N_n$ variables are selected by ranking
$$r_l = \frac{\|f_{\mathbf{z},\lambda}^l\|_K^2}{\sum_{p=1}^n \|f_{\mathbf{z},\lambda}^p\|_K^2}, \quad l = 1, \ldots, n.$$
For the RGL model, $N_n$ variables are selected by ranking
$$r_l = \frac{\sum_{i=1}^m (c_i^l)^2}{\sum_{q=1}^n \sum_{i=1}^m (c_i^q)^2}, \quad l = 1, \ldots, n.$$
A model that selects more of the $N_n$ effective variables is a better algorithm.
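This evaluation criterion reduces to counting how many of the top-$N_n$ ranked coordinates fall in the true support; a small sketch (the helper name is ours):

```python
import numpy as np

def effective_selected(scores, true_support, N_n):
    """Count how many of the top-N_n scored coordinates are truly effective."""
    top = np.argsort(scores)[::-1][:N_n]
    return len(set(top.tolist()) & set(true_support))

n, N_n = 80, 30
true_support = range(N_n)

# A perfect ranking recovers all N_n effective variables.
perfect = np.concatenate([np.ones(N_n), np.zeros(n - N_n)])
assert effective_selected(perfect, true_support, N_n) == N_n

# Noisy scores recover only part of the support, as in Table 1.
rng = np.random.default_rng(6)
noisy = rng.random(n)
noisy[:N_n] += 0.5   # informative coordinates tend to score higher
print(effective_selected(noisy, true_support, N_n))
```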
We repeat each experiment 30 times, with the observation set $\mathbf{z}$ newly generated in each circumstance. The average numbers of selected effective variables for the different circumstances are reported in Table 1, with the best results marked in bold. Several useful conclusions can be drawn from Table 1.
(1) When the input variables are uncorrelated, the three models show similar performance under the different noise conditions and provide satisfactory variable selection results (approaching $N_n$) without outliers. However, as the proportion of outliers increases, the performance degrades severely for GL and only slightly for TGL ($t < 0$ for robustness), especially in the case $(m, n) = (30, 80)$. In contrast, RGL always provides satisfactory performance, which is consistent with previous findings [20].
(2) When the input variables are correlated, the three models again show similar performance under the different noise conditions but can only select part of the effective variables, ranging from $N_n/3$ to $2N_n/3$. In general, they degrade slowly with increasing proportions of outliers and perform better in the case $(m, n) = (50, 50)$ than in $(30, 80)$. Specifically, the TGL model with $t = -1$ gives slightly better selection results than GL and RGL in the case $(m, n) = (50, 50)$, which supports the superiority of TGL to some extent.
(3) It is worth noting that the TGL model with $t = -6 \times 10^{-6}$ performs similarly to GL. This phenomenon supports the theoretical conclusion that TGL recovers GL as $t \to 0$, as well as the algorithmic effectiveness of the proposed gradient descent method in converging to the minimizer.
(4) Noting that the TGL model behaves quite differently for different parameters $t$, we further conduct simulation studies to investigate this influence. Figure 1 shows the variable selection results for $t$ ranging from $-100$ to $-0.1$. We can see that satisfactory performance is achieved when the parameter $t$ is near $-1$, whereas the results deteriorate when $|t|$ is too large. This coincides with our previous discussion that $\mathcal{L}(c, t)$ is strongly convex only for limited $|t|$.

5. Conclusions

In this paper, we have proposed a new learning objective, TGL, by embedding the t-tilted loss into the GL model. On the theoretical side, we have established its consistency and provided the convergence rate with the help of an error decomposition and an operator approximation technique. On the practical side, we have proposed a gradient descent method to solve the learning objective and provided its convergence analysis. Simulated experiments have verified the theoretical conclusion that TGL recovers GL as $t \to 0$ and the algorithmic effectiveness of the proposed gradient descent method in converging to the minimizer. In addition, they demonstrate the superiority of TGL when the input variables are correlated. Along the line of the present work, several open problems deserve further research, for example, using random feature approximation to scale up the kernel methods [30] and learning with a data-dependent hypothesis space to achieve tighter error bounds [31]. These problems are under our investigation.

Author Contributions

All authors have made a great contribution to the work. Methodology, L.L., C.Y., B.S. and C.X.; formal analysis, L.L. and C.X.; investigation, C.Y., Z.P. and C.X.; writing—original draft preparation, L.L., B.S. and W.L.; writing—review and editing, W.L. and C.X.; visualization, C.Y. and Z.P.; supervision, C.X.; project administration, B.S.; funding acquisition, B.S. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Central Universities of China (2662020LXQD002), the Natural Science Foundation of China (12001217), the Key Laboratory of Biomedical Engineering of Hainan Province (Opening Foundation 2022003), and the Hubei Key Laboratory of Applied Mathematics (HBAM 202004).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schwarz, G. Estimating the Dimension of a Model. Ann. Stat. 1978, 6, 461–464.
  2. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  3. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  4. Chen, H.; Guo, C.; Xiong, H.; Wang, Y. Sparse additive machine with ramp loss. Anal. Appl. 2021, 19, 509–528.
  5. Chen, H.; Wang, Y.; Zheng, F.; Deng, C.; Huang, H. Sparse Modal Additive Model. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2373–2387.
  6. Deng, H.; Chen, J.; Song, B.; Pan, Z. Error bound of mode-based additive models. Entropy 2021, 23, 651.
  7. Engle, R.F.; Granger, C.W.J.; Rice, J.; Weiss, A. Semiparametric Estimates of the Relation Between Weather and Electricity Sales. J. Am. Stat. Assoc. 1986, 81, 310–320.
  8. Zhang, H.; Cheng, G.; Liu, Y. Linear or Nonlinear? Automatic Structure Discovery for Partially Linear Models. J. Am. Stat. Assoc. 2011, 106, 1099–1112.
  9. Huang, J.; Wei, F.; Ma, S. Semiparametric Regression Pursuit. Stat. Sin. 2012, 22, 1403–1426.
  10. Mukherjee, S.; Zhou, D. Learning Coordinate Covariances via Gradients. J. Mach. Learn. Res. 2006, 7, 519–549.
  11. Mukherjee, S.; Wu, Q. Estimation of Gradients and Coordinate Covariation in Classification. J. Mach. Learn. Res. 2006, 7, 2481–2514.
  12. Jia, C.; Wang, H.; Zhou, D. Gradient learning in a classification setting by gradient descent. J. Approx. Theory 2009, 161, 674–692.
  13. He, X.; Lv, S.; Wang, J. Variable selection for classification with derivative-induced regularization. Stat. Sin. 2020, 30, 2075–2103.
  14. Dong, X.; Zhou, D.X. Learning gradients by a gradient descent algorithm. J. Math. Anal. Appl. 2008, 341, 1018–1027.
  15. Mukherjee, S.; Wu, Q.; Zhou, D. Learning gradients on manifolds. Bernoulli 2010, 16, 181–207.
  16. Borkar, V.S.; Dwaracherla, V.R.; Sahasrabudhe, N. Gradient Estimation with Simultaneous Perturbation and Compressive Sensing. J. Mach. Learn. Res. 2017, 18, 161:1–161:27.
  17. Ye, G.B.; Xie, X. Learning sparse gradients for variable selection and dimension reduction. Mach. Learn. 2012, 87, 303–355.
  18. He, X.; Wang, J.; Lv, S. Efficient kernel-based variable selection with sparsistency. arXiv 2018, arXiv:1802.09246.
  19. Guinney, J.; Wu, Q.; Mukherjee, S. Estimating variable structure and dependence in multitask learning via gradients. Mach. Learn. 2011, 83, 265–287.
  20. Feng, Y.; Yang, Y.; Suykens, J.A.K. Robust Gradient Learning with Applications. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 822–835.
  21. Li, T.; Beirami, A.; Sanjabi, M.; Smith, V. On tilted losses in machine learning: Theory and applications. arXiv 2021, arXiv:2109.06141.
  22. Cucker, F.; Smale, S. On the mathematical foundations of learning. Bull. Am. Math. Soc. 2002, 39, 1–49.
  23. Chen, H.; Wang, Y. Kernel-based sparse regression with the correntropy-induced loss. Appl. Comput. Harmon. Anal. 2018, 44, 144–164.
  24. Feng, Y.; Fan, J.; Suykens, J.A. A Statistical Learning Approach to Modal Regression. J. Mach. Learn. Res. 2020, 21, 1–35.
  25. Yang, L.; Lv, S.; Wang, J. Model-free variable selection in reproducing kernel Hilbert space. J. Mach. Learn. Res. 2016, 17, 2885–2908.
  26. Zhou, D.X. Capacity of reproducing kernel spaces in learning theory. IEEE Trans. Inf. Theory 2003, 49, 1743–1752.
  27. Belkin, M.; Niyogi, P.; Sindhwani, V. Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples. J. Mach. Learn. Res. 2006, 7, 2399–2434.
  28. Schölkopf, B.; Smola, A.J.; Bach, F. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002.
  29. Karimi, H.; Nutini, J.; Schmidt, M. Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Riva del Garda, Italy, 19–23 September 2016; Springer: Cham, Switzerland, 2016; pp. 795–811.
  30. Dai, B.; Xie, B.; He, N.; Liang, Y.; Raj, A.; Balcan, M.F.F.; Song, L. Scalable Kernel Methods via Doubly Stochastic Gradients. In Advances in Neural Information Processing Systems; Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2014; Volume 27.
  31. Wang, Y.; Chen, H.; Song, B.; Li, H. Regularized modal regression with data-dependent hypothesis spaces. Int. J. Wavelets Multiresolution Inf. Process. 2019, 17, 1950047.
Figure 1. The influence of different t on the variable selection results.
Table 1. Variable selection results for different circumstances.
| Noise | (m, n) | Method | Uncorrelated 0% | Uncorrelated 20% | Uncorrelated 40% | Correlated 0% | Correlated 20% | Correlated 40% |
|---|---|---|---|---|---|---|---|---|
| Cauchy | (50, 50) | GL | 28.70 | 24.27 | 19.03 | 20.27 | 17.53 | 16.53 |
| | | RGL | 29.00 | **26.57** | **27.70** | 20.80 | 15.40 | 14.16 |
| | | TGL (t = 6 × 10⁻⁶) | **29.63** | 24.06 | 18.04 | 20.67 | 17.00 | 16.23 |
| | | TGL (t = 1) | 29.53 | 26.07 | 26.00 | **21.07** | **17.60** | **17.13** |
| | | TGL (t = 10) | 29.53 | 24.23 | 24.03 | 16.93 | 15.78 | 15.67 |
| Chi-square | (50, 50) | GL | 29.40 | 24.73 | 20.37 | 18.40 | 17.93 | 16.03 |
| | | RGL | 29.63 | **26.90** | **27.60** | 19.90 | 16.10 | 14.67 |
| | | TGL (t = 6 × 10⁻⁶) | **29.84** | 24.40 | 20.90 | 18.20 | 17.30 | 17.20 |
| | | TGL (t = 1) | 29.14 | 24.56 | 25.18 | **21.10** | **18.77** | **17.93** |
| | | TGL (t = 10) | 25.13 | 24.10 | 24.93 | 20.83 | 17.10 | 16.60 |
| Gaussian | (50, 50) | GL | 28.83 | 25.16 | 20.13 | 18.04 | 16.70 | 15.93 |
| | | RGL | **29.40** | **26.70** | **27.20** | 19.87 | 16.40 | 14.36 |
| | | TGL (t = 6 × 10⁻⁶) | 29.23 | 25.23 | 20.20 | 18.37 | 17.76 | 16.30 |
| | | TGL (t = 1) | 27.63 | 26.20 | 25.90 | 21.06 | **18.40** | **17.90** |
| | | TGL (t = 10) | 22.90 | 25.23 | 25.06 | **21.43** | 17.13 | 16.23 |
| Cauchy | (30, 80) | GL | 29.60 | 11.33 | 12.30 | 11.93 | 11.57 | 10.97 |
| | | RGL | **29.87** | **29.97** | **29.93** | 16.50 | **16.97** | **15.20** |
| | | TGL (t = 6 × 10⁻⁶) | 28.47 | 10.67 | 10.49 | 11.13 | 11.03 | 10.93 |
| | | TGL (t = 1) | 27.06 | 20.67 | 11.30 | **17.08** | 14.40 | 11.56 |
| | | TGL (t = 10) | 16.66 | 16.23 | 15.12 | 13.97 | 13.92 | 13.54 |
| Chi-square | (30, 80) | GL | 29.83 | 11.47 | 12.57 | 12.57 | 11.67 | 11.33 |
| | | RGL | **29.93** | **29.93** | **29.71** | **19.87** | **18.80** | **17.50** |
| | | TGL (t = 6 × 10⁻⁶) | 29.03 | 11.10 | 12.90 | 12.50 | 10.87 | 11.43 |
| | | TGL (t = 1) | 29.37 | 23.60 | 23.53 | 16.08 | 14.40 | 11.40 |
| | | TGL (t = 10) | 28.17 | 23.33 | 23.23 | 13.97 | 13.92 | 13.54 |
| Gaussian | (30, 80) | GL | **29.77** | 11.83 | 12.27 | 12.92 | 12.44 | 11.54 |
| | | RGL | 29.70 | **29.93** | **29.93** | **19.73** | 13.67 | 9.83 |
| | | TGL (t = 6 × 10⁻⁶) | 28.47 | 10.67 | 10.49 | 13.06 | 9.79 | 8.73 |
| | | TGL (t = 1) | 27.06 | 20.67 | 11.30 | 16.08 | **14.40** | 11.90 |
| | | TGL (t = 10) | 16.66 | 16.23 | 15.12 | 13.97 | 13.92 | **13.54** |

Percentages denote the contamination level; bold marks the best result in each column within a block.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, L.; Song, B.; Pan, Z.; Yang, C.; Xiao, C.; Li, W. Gradient Learning under Tilted Empirical Risk Minimization. Entropy 2022, 24, 956. https://doi.org/10.3390/e24070956


