Gradient Learning under Tilted Empirical Risk Minimization

Gradient Learning (GL), which aims to estimate the gradient of the target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, most existing GL works are based on the empirical risk minimization (ERM) principle, which may suffer degraded performance in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model based on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint. Specifically, the operator approximation technique plays the crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is proposed, and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.


Introduction
Data-driven variable selection aims to select the informative features related to the response in high-dimensional statistics and plays a critical role in many areas. For example, if the milk production of dairy cows can be predicted from blood biochemical indexes, then doctors are eager to know which indexes drive the milk production, because each index is independently measured at additional cost. Therefore, an explainable and interpretable system for selecting the effective variables is critical to convince domain experts. Currently, variable selection methods can be roughly divided into three categories: linear models [1][2][3], nonlinear additive models [4][5][6], and partial linear models [7][8][9]. Although they achieve promising performance in some applications, these methods still suffer from two main limitations. Firstly, their target function is restricted by the assumption of a specific structure. Secondly, they cannot reveal how the coordinates vary with respect to each other. As an alternative, Mukherjee and Zhou [10] proposed the gradient learning (GL) model, which aims to learn the gradient functions and enjoys the model-free property.
Despite the empirical success [11][12][13], the GL model still has some limitations, such as high computational cost, a lack of sparsity in high-dimensional data, and a lack of robustness to complex noise. To this end, several variants of the GL model have been developed for individual purposes. For example, Dong and Zhou [14] proposed a stochastic gradient descent algorithm for learning the gradient and demonstrated that the gradient estimated by the algorithm converges to the true gradient. Mukherjee et al. [15] provided an algorithm for dimension reduction on manifolds for high-dimensional data with few observations. They obtained generalization error bounds for the gradient estimates and revealed that the convergence rate depends on the intrinsic dimension of the manifold. Borkar et al. [16] combined ideas from Spall's Simultaneous Perturbation Stochastic Approximation with compressive sensing and proposed to learn the gradient with few function evaluations. Ye et al. [17] originally proposed a sparse GL model to further address the sparsity of the estimated gradients for high-dimensional variable selection. He et al. [18] developed a three-step sparse GL method which allows for efficient computation, admits general predictor effects, and attains desirable asymptotic sparsistency. Following the research direction of robustness, Guinney et al. [19] provided a multi-task model which is efficient and robust for high-dimensional data. In addition, Feng et al. [20] provided a robust gradient learning (RGL) framework by introducing a robust regression loss function; a simple computational algorithm based on gradient descent was also provided, and its convergence was analyzed.
Despite rapid progress, the GL model and its extensions mentioned above are established under the framework of empirical risk minimization (ERM). While enjoying nice statistical properties, ERM usually performs poorly in situations where average performance is not an appropriate surrogate for the problem of interest [21]. Recently, a novel framework, named tilted empirical risk minimization (TERM), was proposed to flexibly address these deficiencies of ERM [21]. By using a new loss named the t-tilted loss, it has been shown that TERM (1) can increase or decrease the influence of outliers, respectively, to enable fairness or robustness; (2) has variance reduction properties that can benefit generalization; and (3) can be viewed as a smooth approximation to a superquantile method. Considering these strengths, we propose to investigate GL under the framework of TERM. The main contributions of this paper can be summarized as follows:
• New learning objective. We propose to learn the gradient function under the framework of TERM. Specifically, the t-tilted loss is embedded into the GL model. To the best of our knowledge, this may be the first endeavor on this topic.
• Theoretical guarantees. For the new learning objective, we estimate the generalization bound by an error decomposition and the operator approximation technique, and further provide the theoretical consistency and the convergence rate. Specifically, the convergence rate recovers the result of traditional GL as t tends to 0 [10].
• Efficient computation. A gradient descent method is provided to solve the proposed learning objective. By showing the smoothness and strong convexity of the learning objective, convergence to the optimal solution is proved.
The rest of this paper is organized as follows: Section 2 proposes GL with the t-tilted loss (TGL) and states the main theoretical results on the asymptotic estimation. Section 3 provides the computational algorithm and its convergence analysis. Numerical experiments on synthetic data sets are presented in Section 4. Finally, Section 5 concludes the paper.

Learning Objective
In this section, we introduce TGL and provide the main theoretical results on the asymptotic estimation.

Gradient Learning with t-Tilted Loss
Let X be a compact subset of R^n and Y ⊆ R. Assume that ρ is a probability measure on Z := X × Y. It induces the marginal distribution ρ_X on X and the conditional distributions ρ(·|x) at x ∈ X. Denote by L²_{ρ_X} the L² space with the metric ‖f‖_ρ = (∫_X |f(x)|² dρ_X)^{1/2}. In addition, the regression function f_ρ : X → Y associated with ρ is defined as

f_ρ(x) = ∫_Y y dρ(y|x).

For x = (x¹, x², . . . , xⁿ)^T ∈ X, the gradient of f_ρ is the vector of functions (if the partial derivatives exist)

∇f_ρ = (∂f_ρ/∂x¹, . . . , ∂f_ρ/∂xⁿ)^T.

The relevance between the l-th coordinate and f_ρ can be evaluated via the norm of the partial derivative ∂f_ρ/∂x^l, where a large value implies a large change in the function f_ρ with respect to a small change in the l-th coordinate. This fact gives an intuitive motivation for GL. In terms of the Taylor series expansion, the following approximation holds:

f_ρ(x) ≈ f_ρ(x̃) + ∇f_ρ(x̃) · (x − x̃),  for x ≈ x̃ and x, x̃ ∈ X,  (1)

Inspired by (1), we denote the weighted square loss of f = (f¹, . . . , fⁿ)^T at the pair z = (x, y), z̃ = (x̃, ỹ) as

ω(x, x̃) (y − ỹ + f(x) · (x̃ − x))²,  (2)

where the restriction x ≈ x̃ is enforced by the weights ω(x, x̃) = (1/s^{n+2}) e^{−|x−x̃|²/2s²} with a constant 0 < s ≤ 1; see, e.g., [10,11,19]. Then, the expected risk of f can be given by

E(f) = ∫_Z ∫_Z ω(x, x̃) (y − ỹ + f(x) · (x̃ − x))² dρ(z) dρ(z̃).  (3)

As mentioned in [21], the minimizer of (3) usually performs poorly in situations where average performance is not an appropriate surrogate. Inspired by [21], for t ∈ R\{0}, we address these deficiencies by introducing the t-tilted loss and define the expected risk of f with the t-tilted loss as

E_t(f) = (1/t) log ( ∫_Z ∫_Z e^{t ω(x,x̃)(y − ỹ + f(x)·(x̃ − x))²} dρ(z) dρ(z̃) ).  (4)

Remark 1. Note that t ∈ R\{0} is a real-valued hyperparameter, and it encompasses a family of objectives which can address fairness (t > 0) or robustness (t < 0) by different choices. In particular, it recovers the expected risk (3) as t → 0.
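To make the tilting operation concrete, the following sketch (our own illustration, not the paper's code; the helper names `gaussian_weight` and `tilted_mean` are ours) computes the Gaussian weight ω(x, x̃) and the t-tilted aggregation of a finite set of losses, using a log-sum-exp shift for numerical stability. As t → 0 the tilted value approaches the plain average, matching Remark 1; t < 0 downweights large losses, t > 0 emphasizes them.

```python
import numpy as np

def gaussian_weight(x, x_tilde, s=0.5):
    # omega(x, x~) = s^-(n+2) * exp(-|x - x~|^2 / (2 s^2)), as in the text
    n = len(x)
    return s ** -(n + 2) * np.exp(-np.sum((x - x_tilde) ** 2) / (2 * s ** 2))

def tilted_mean(losses, t):
    # t-tilted aggregation: (1/t) * log(mean(exp(t * loss))); t -> 0 recovers the mean
    losses = np.asarray(losses, dtype=float)
    if abs(t) < 1e-12:
        return losses.mean()
    m = (t * losses).max()  # log-sum-exp shift for numerical stability
    return (m + np.log(np.mean(np.exp(t * losses - m)))) / t
```

For instance, `tilted_mean([1, 2, 3], t)` lies below the arithmetic mean 2 for negative t (robustness) and above it for positive t (fairness).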
On this basis, GL with the t-tilted loss is formulated as the following regularization scheme:

f_{λ,t} = arg min_{f ∈ H_K^n} { E_t(f) + λ‖f‖_K² },  (5)

where λ > 0 is a regularization parameter. Here, K : X × X → R is a Mercer kernel, i.e., continuous, symmetric, and positive semidefinite [22,23], and H_K, induced by K, is the RKHS defined as the closure of the linear span of the set of functions {K_x := K(x, ·) : x ∈ X} with the inner product ⟨·, ·⟩_K satisfying ⟨K_x, K_x̃⟩_K = K(x, x̃). The reproducing property takes the form ⟨K_x, f⟩_K = f(x), ∀x ∈ X, ∀f ∈ H_K. Then, we denote by H_K^n the n-fold RKHS with the inner product ⟨f, g⟩_K = Σ_{l=1}^n ⟨f^l, g^l⟩_K.
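As a quick numerical illustration of the Mercer property (our own sketch; the Gaussian kernel is used as an example of a Mercer kernel, not prescribed by the paper), the kernel matrix built from any finite point set should be symmetric and positive semidefinite:

```python
import numpy as np

def gaussian_kernel(x, u, sigma=1.0):
    # K(x, u) = exp(-|x - u|^2 / (2 sigma^2)): continuous, symmetric, PSD
    return np.exp(-np.sum((np.asarray(x) - np.asarray(u)) ** 2) / (2 * sigma ** 2))

X = np.random.default_rng(1).normal(size=(20, 4))   # 20 sample points in R^4
K = np.array([[gaussian_kernel(a, b) for b in X] for a in X])
eigvals = np.linalg.eigvalsh(K)   # all eigenvalues should be >= 0 (up to round-off)
```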

Main Results
This subsection states our main theoretical results on the asymptotic estimation of ‖f_{λ,t} − ∇f_ρ‖_ρ on the space (L²_{ρ_X})^n with norm ‖f‖_ρ = (Σ_{l=1}^n ‖f^l‖_ρ²)^{1/2}. Before proceeding, we provide some necessary assumptions which have been used extensively in the machine learning literature, e.g., [24,25].
and the density p(z) of dρ(z) exists and satisfies the stated regularity conditions. Taking the functional derivative of (5), f_{λ,t} can be expressed in terms of the following integral operator on the space (L²_{ρ_X})^n. The operator L_{K,s} has its range in H_K^n. It can also be regarded as a positive operator on H_K^n; we use the same notation for the operators on these two different domains. Given the definition of the integral operator L_{K,s}, we can write f_{λ,t} in closed form as in the following theorem.

Theorem 1. Given the integral operator L_{K,s}, the minimizer f_{λ,t} admits a closed-form relationship in terms of (L_{K,s} + λI)^{−1}, where λ is the regularization parameter and I is the identity operator.
Proof of Theorem 1. To solve the scheme (5), we take the functional derivative with respect to f, apply it to an element δf of H_K^n, and set it equal to 0. Since the resulting identity holds for any δf ∈ H_K^n, the first-order optimality condition follows directly. The desired result is obtained by rearranging terms.
On this basis, we propose to bound the error ‖f_{λ,t} − ∇f_ρ‖_ρ by a functional analysis approach and present the error decomposition in the following proposition. The proof is straightforward and omitted for brevity.

Proposition 1. For the f_{λ,t} defined in (5), the stated error decomposition holds.
In the sequel, we focus on bounding the terms of this decomposition, respectively. Before we embark on the proof, we single out an important property regarding φ(z, z̃) that will be useful in later proofs.

Lemma 1.
Under Assumptions 1 and 2, there exist constants A_t and B_t depending on t that satisfy the stated bounds.

Proof of Lemma 1. Since the kernel K is C³ and f_{λ,t} ∈ H_K^n, we know the corresponding bound from Zhou [26]. Hence, using the Cauchy–Schwarz inequality and a direct computation, the desired result follows.

Lemma 2.
Under Assumptions 1 and 2, the stated bound holds.

Proof of Lemma 2. Taking notice of (10), and applying Assumptions 1 and 2 to the resulting estimates, the desired result follows.
As for ‖λ(L_{K,s} + λI)^{−1} ∇f_ρ‖_ρ, the multivariate mean value theorem ensures the estimate in (14). From (14), we can define the integral operator associated with the Mercer kernel K, which is related to L_{K,s}. Using Lemma 16 and Lemma 18 in [10], we establish the following lemma.
The stated convergence bound holds, where L_K is a positive operator on (L²_{ρ_X})^n defined by the corresponding integral expression.

Proof of Lemma 3. To estimate (15), we need to consider the convergence of L_{K,s} as s → 0. Denoting the stepping-stone operator, we deduce the first estimate. Using the multivariate mean value theorem, there exist z_ζ, z_σ ∈ R^n × Y such that the corresponding identity holds. Noticing that n(2π)^{n/2} = J₂, we obtain the second estimate. Then, by (7), we can obtain the corresponding conclusion from Lemma 16 in [10] when 0 < s ≤ 1. Combining the above two estimates, the bound holds for any 0 < s ≤ 1. Using Lemma 18 in [10] and (16), the desired result follows.
Since the measure dρ̄ = (p(z)R_t(z)/V_ρ) dρ is a probability measure on X, we know that the operator L_K can be used to define a reproducing kernel Hilbert space [22]. Let L_K^{1/2} be the square root of the positive operator L_K. The assumption we shall use is that ∇f_ρ lies in the range of L_K^{1/2}. Finally, we can give the upper bound of the error ‖f_{λ,t} − ∇f_ρ‖_ρ.
Proof of Theorem 2. Using the Cauchy–Schwarz inequality, for f = (f¹, f², . . . , fⁿ)^T ∈ (L²_{ρ_X})^n, we obtain the first estimate, which yields (17). According to the definitions of ‖f^l‖_ρ and ‖f^l‖_ρ̄, inequality (19) follows directly. Since J₂ > 1, the restriction 0 < s ≤ min{c_ρ λ^{1/ζ}, 1} in Lemma 3 is satisfied for m ≥ (κc_h)^{2(n+2+3τ)/τ}. Then, combining Lemma 2, Lemma 3, Equation (17), and inequality (19), the claimed bound follows.

Remark 2. Theorem 2 shows that ‖f_{λ,t} − ∇f_ρ‖_ρ → 0 as m → +∞. This means that the scheme (5) is consistent. In addition, since A_t and B_t tend to 1 as t tends to 0, the convergence rate of scheme (5) is of order m^{−ζ/(2n+4+6ζ)}, which is consistent with the previous result in [10]. It means that the proposed method can be regarded as an extension of traditional GL.

Computing Algorithm
In this section, we present the GL model under TERM and propose to use the gradient descent algorithm to find the minimizer. Finally, the convergence of the proposed algorithm is also guaranteed.
Given a set of observations z = {z_i = (x_i, y_i)}_{i=1}^m ∈ Z^m drawn independently according to ρ, assume that the RKHS is rich enough that the kernel matrix K = (K(x_i, x_j))_{i,j=1}^m is strictly positive definite [27]. According to the representer theorem of kernel methods [28], the approximation of f_{λ,t} has the following form:

f = Σ_{i=1}^m c_i K_{x_i},  with c_i = (c_i^1, . . . , c_i^n)^T ∈ R^n.

Let c = (c_1^T, . . . , c_m^T)^T ∈ R^{mn}; the empirical version of (4) is then formulated as a regularized objective L(c, t) consisting of the empirical tilted risk E_z(c, t) and the penalty λ‖Σ_{i=1}^m c_i K_{x_i}‖_K². Using the gradients of E_z(c, t) and ‖Σ_{i=1}^m c_i K_{x_i}‖_K² at c, scheme (20) can be solved via the following gradient method:

c_k = c_{k−1} − α ∇_c L(c_{k−1}, t),

where c_k = (c_{1,k}^T, . . . , c_{m,k}^T)^T ∈ R^{mn} is the calculated solution at iteration k, and α is the step size. The detailed gradient descent scheme is stated in Algorithm 1. To prove its convergence, we introduce the following lemma derived from Theorem 1 in [29].

Lemma 4.
When h(c) has a γ-Lipschitz continuous gradient (γ-smoothness) and is µ-strongly convex, then for the basic unconstrained optimization problem c* = arg min h(c), the gradient descent iteration c_k = c_{k−1} − (1/γ)∇h(c_{k−1}) with step size 1/γ has the global linear convergence rate

h(c_k) − h(c*) ≤ (1 − µ/γ)^k (h(c_0) − h(c*)).
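Lemma 4's rate can be checked numerically on a toy problem (our illustration on a simple quadratic, not the paper's objective): for h(c) = ½cᵀAc, µ and γ are the extreme eigenvalues of A, and the function values contract at least as fast as (1 − µ/γ)^k.

```python
import numpy as np

# Toy quadratic h(c) = 0.5 * c^T A c with A = diag(1, 10), so mu = 1 and gamma = 10
A = np.diag([1.0, 10.0])
h = lambda c: 0.5 * c @ A @ c
grad = lambda c: A @ c

mu, gamma = 1.0, 10.0
c = np.array([1.0, 1.0])
vals = [h(c)]
for _ in range(50):
    c = c - grad(c) / gamma          # gradient step with step size 1/gamma
    vals.append(h(c))
# Lemma 4 predicts h(c_k) - h(c*) <= (1 - mu/gamma)^k * (h(c_0) - h(c*)), here h(c*) = 0
```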

Algorithm 1 Gradient descent for Gradient Learning under TERM
• Compute the gradient of the loss for i, j = 1, . . . , m.
• Compute the descent step and set k = k + 1.
end
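The empirical objective and the descent loop of Algorithm 1 can be sketched as follows. This is a simplified illustration under our reconstruction of the tilted loss: `rbf_kernel`, the fixed step size, and the finite-difference gradient are our own simplifications, standing in for the paper's closed-form gradient expressions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (Mercer) kernel matrix over the sample points
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def tgl_objective(c, X, y, K, t, lam, s=0.5):
    # c: (m, n) coefficients; the estimated gradient at x_i is F[i] = sum_j c_j K(x_j, x_i)
    m, n = X.shape
    F = K @ c
    diff = X[None, :, :] - X[:, None, :]                 # diff[i, j] = x_j - x_i
    w = s ** -(n + 2) * np.exp(-(diff ** 2).sum(-1) / (2 * s ** 2))
    resid = y[:, None] - y[None, :] + np.einsum('id,ijd->ij', F, diff)
    losses = w * resid ** 2
    tilted = np.log(np.mean(np.exp(t * losses))) / t     # t-tilted empirical risk
    return tilted + lam * np.trace(c.T @ K @ c)          # plus the RKHS penalty

def gradient_descent(X, y, t=-1.0, lam=0.1, alpha=1e-3, iters=200, eps=1e-6):
    m, n = X.shape
    K = rbf_kernel(X)
    c = np.zeros((m, n))
    for _ in range(iters):
        # finite-difference gradient (for clarity only; slow but dependency-free)
        g = np.zeros_like(c)
        base = tgl_objective(c, X, y, K, t, lam)
        for idx in np.ndindex(c.shape):
            cp = c.copy(); cp[idx] += eps
            g[idx] = (tgl_objective(cp, X, y, K, t, lam) - base) / eps
        c -= alpha * g
    return c, K
```

With t < 0 (the robust regime used in the experiments), the exponential tilt is numerically benign since t·losses ≤ 0; a log-sum-exp shift would be needed for large positive t.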
From Lemma 4, we obtain the following conclusion, which states that the proposed algorithm converges to the minimizer of (20) for a suitable step size α.
Let β_max and β_min be the maximum and minimum eigenvalues of the kernel matrix K, respectively. There exist µ ∈ R₊ and γ ∈ R₊ depending on t such that L(c_k, t) is γ-smooth and µ-strongly convex for any t > −nλβ_min/(64(M² + C_K M_X)M_X² mκ⁴). In addition, let c_{z,λ} be the minimizer defined in scheme (20) and {c_k} be the sequence generated by Algorithm 1 with α = 1/γ; then the stated linear convergence holds.

Proof of Theorem 3. Note that the strong convexity and the smoothness are related to the Hessian matrix, and we provide the proof by dividing the Hessian matrix into three parts: E₁, E₂, and E₃.
(1) Estimation on E₁: Similar to the proof of Lemma 1, for i, j = 1, . . . , m, it directly follows that the maximum eigenvalue of E₁ is 128t(M² + C_K M_X)M_X² mκ⁴. Then, the corresponding inequalities are satisfied, where 0_{mn} is the mn × mn matrix with all elements zero.
(2) Estimation on E₂: Note that ∇²_{cc^T} V_z(c, z_i, z_j) can be rewritten in the stated form. Similar to (25), we have ∇²_{cc^T} V_z(c, z_i, z_j) ⪯ 2κ⁴ M_x² I_{mn}, and the bound on E₂ follows.
(3) Estimation on E₃: By a direct computation, and setting Q = (q_{11}, q_{21}, . . . , q_{n1}, . . . , q_{1m}, q_{2m}, . . . , q_{nm})^T ∈ R^{mn}, we deduce the quadratic form representation. Note that the matrix of the quadratic form Σ_{i=1}^m Σ_{j=1}^m K(x_i, x_j) q_{li} q_{lj} is K; then we can obtain 2λnβ_min I_{mn} ⪯ E₃ ⪯ 2λnβ_max I_{mn}.

Simulation Experiments
In this section, we carry out simulation studies with the TGL model (t < 0 for robustness) on a synthetic data set for the robust variable selection problem. Let the observation data set z = {z_i = (x_i, y_i)}_{i=1}^m with x_i = (x_i^1, · · · , x_i^n) be generated by the linear model y_i = w · x_i + ε_i, where the noise term ε_i represents the outliers or noise. To be specific, three different noises are used: Cauchy noise with location parameter a = 2 and scale parameter b = 4, Chi-square noise with 5 degrees of freedom scaled by 0.01, and Gaussian noise N(0, 0.3). Three different proportions of outliers, 0%, 20%, or 40%, are drawn from the Gaussian noise N(0, 100). Meanwhile, we consider two different cases with (m, n) = (50, 50) and (30, 80), corresponding to m = n and m < n, respectively. The weight vector w = (w_1, · · · , w_n) over the dimensions is constructed as follows: w_l = 2 + 0.5 sin(2πl/10) for l = 1, . . . , N_n, and w_l = 0 otherwise. Here, N_n = 30 is the number of effective variables. Two situations are implemented for x: uncorrelated variables x ∼ N(0_n, I_n) and correlated variables x ∼ N(0_n, Σ_n), where the covariance matrix Σ_n has (l, p)-th entry 0.5^{|l−p|}.
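The data-generating process above can be sketched as follows (our illustration; the function name `make_data` is ours, and only the Gaussian-noise case is shown, with the Cauchy and Chi-square cases substituting analogously for the noise term):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(m=50, n=50, N_n=30, outlier_frac=0.2, correlated=True):
    # Covariance with (l, p)-th entry 0.5^|l-p| for the correlated case
    idx = np.arange(n)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :]) if correlated else np.eye(n)
    X = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
    # Weight vector: w_l = 2 + 0.5 * sin(2*pi*l / 10) for l = 1..N_n, else 0
    w = np.zeros(n)
    l = np.arange(1, N_n + 1)
    w[:N_n] = 2 + 0.5 * np.sin(2 * np.pi * l / 10)
    y = X @ w + rng.normal(0, 0.3, size=m)            # Gaussian noise case
    k = int(outlier_frac * m)
    out = rng.choice(m, size=k, replace=False)
    y[out] += rng.normal(0, 10, size=k)               # outliers from N(0, 100)
    return X, y, w
```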
For the variable selection algorithms, we perform TGL with t = 6 × 10⁻⁶, −1, and −10, and compare with the traditional GL model [10] and the RGL model [20]. For the GL and TGL models, the top N_n variables are selected by ranking the norms of the estimated partial derivatives over l = 1, · · · , n; for the RGL model, the N_n variables are selected by ranking its corresponding criterion over l = 1, · · · , n.
A model that selects more of the effective variables (at most N_n) is a better algorithm.
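The selection rule and the evaluation metric can be written compactly (our sketch; `F` stands for the estimated gradient values at the sample points, however the particular model obtains them):

```python
import numpy as np

def select_variables(F, N_n):
    # F: (m, n) matrix of estimated gradient values at the sample points;
    # rank coordinates by the empirical norm of the l-th partial derivative
    scores = np.linalg.norm(F, axis=0)
    return np.argsort(scores)[::-1][:N_n]

def effective_selected(selected, true_support):
    # Number of selected variables that are truly effective (the metric in Table 1)
    return len(set(selected) & set(true_support))
```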
We repeat the experiments 30 times, with the observation set z regenerated in each circumstance. The average number of selected effective variables for the different circumstances is reported in Table 1, and the optimal results are marked in bold. Several useful conclusions can be drawn from Table 1. (1) When the input variables are uncorrelated, the three models have similar performance under different noise conditions and can provide satisfactory variable selection results (approaching N_n) without outliers. However, as the proportion of outliers increases, the performance degrades severely for GL and only slightly for TGL (t < 0 for robustness), especially in the case (m, n) = (30, 80). In contrast, RGL always provides satisfying performance, which is consistent with the previous phenomenon [20].
(2) When the input variables are correlated, the three models also have similar performance under different noise conditions but can only select a portion of the effective variables, ranging from N_n/3 to 2N_n/3. In general, they degrade slowly with increasing proportions of outliers and perform better in the case (m, n) = (50, 50) than in (m, n) = (30, 80). Specifically, the TGL model with t = −1 gives slightly better selection results than GL and RGL in the case (m, n) = (50, 50), which supports the superiority of TGL to some extent.
(3) It is worth noting that the TGL model with t = 6 × 10⁻⁶ has performance similar to GL. This phenomenon supports the theoretical conclusion that TGL recovers GL as t → 0, as well as the effectiveness of the proposed gradient descent method in converging to the minimizer.
(4) Noting that the TGL model with different parameters t yields quite different variable selection results, we further conduct simulation studies to investigate this influence. Figure 1 shows the variable selection results for parameters t ranging from −100 to −0.1. We can see that satisfying performance is achieved when the parameter t is near −1, while performance deteriorates when |t| is too large. This coincides with our previous discussion that L(c, t) is strongly convex only for a limited range of t.

Conclusions
In this paper, we have proposed a new learning objective, TGL, by embedding the t-tilted loss into the GL model. On the theoretical side, we have established its consistency and provided the convergence rate with the help of an error decomposition and the operator approximation technique. On the practical side, we have proposed a gradient descent method to solve the learning objective and provided its convergence analysis. Simulated experiments have verified the theoretical conclusion that TGL recovers GL as t → 0 and the effectiveness of the proposed gradient descent method in converging to the minimizer. In addition, they have demonstrated the superiority of TGL when the input variables are correlated. Along the line of the present work, several open problems deserve further research, for example, using random feature approximation to scale up kernel methods [30] and learning with a data-dependent hypothesis space to achieve a tighter error bound [31]. These problems are under our research.
Author Contributions: All authors have made a great contribution to the work. Methodology, L.L., C.Y., B.S. and C.X.; formal analysis, L.L. and C.X.; investigation, C.Y., Z.P. and C.X.; writing-original draft preparation, L.L., B.S. and W.L.; writing-review and editing, W.L. and C.X.; visualization, C.Y. and Z.P.; supervision, C.X.; project administration, B.S.; funding acquisition, B.S. and W.L. All authors have read and agreed to the published version of the manuscript.