Article

Adaptive Stochastic Gradient Descent Method for Convex and Non-Convex Optimization

Ruijuan Chen, Xiaoquan Tang and Xiuting Li

1 Research Center of Nonlinear Science, School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, China
2 School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China
3 College of Science, Huazhong Agricultural University, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2022, 6(12), 709; https://doi.org/10.3390/fractalfract6120709
Submission received: 10 August 2022 / Revised: 19 November 2022 / Accepted: 23 November 2022 / Published: 29 November 2022

Abstract: Stochastic gradient descent is the method of choice for solving large-scale optimization problems in machine learning. However, the question of how to effectively select the step sizes in stochastic gradient descent methods is challenging, and can greatly influence the performance of stochastic gradient descent algorithms. In this paper, we propose a class of faster adaptive gradient descent methods, named AdaSGD, for solving both convex and non-convex optimization problems. The novelty of this method is that it uses a new adaptive step size that depends on the expectation of the past stochastic gradient and its second moment, which makes it efficient and scalable for big data and high parameter dimensions. We show theoretically that the proposed AdaSGD algorithm has a convergence rate of $\mathcal{O}(1/T)$ in both convex and non-convex settings, where $T$ is the maximum number of iterations. In addition, we extend the proposed AdaSGD to the case of momentum and obtain the same convergence rate for AdaSGD with momentum. To illustrate our theoretical results, several numerical experiments on problems arising in machine learning are conducted to verify the promise of the proposed method.

1. Introduction

Optimization based on stochastic gradients is of central practical significance in many scientific and engineering fields. Many problems in these areas can be reduced to the optimization of some scalar parameterized objective function whose parameters must be maximized or minimized. Recent years have witnessed the great success of machine learning, especially deep learning, in many fields, including computer vision, speech processing, and natural language processing. For many machine learning tasks, a critical and challenging problem is to design optimization algorithms to train neural network models. If the objective function is differentiable, stochastic gradient descent (SGD) is an efficient and effective optimization method that plays a central role in many machine learning successes. The SGD algorithm can be traced back to Robbins and Monro [1], whose classical convergence analysis relies on a decreasing positive learning rate. Stochastic approximation methods have been widely studied in various areas of the literature [2,3,4], mainly focusing on the convergence of algorithms in different settings.
In recent years, the convergence speed of standard SGD has been greatly improved, and a number of methods have been proposed to reduce its variance; vanilla SGD itself has also been analyzed in the non-convex case [5]. However, vanilla SGD is too sensitive to the learning rate, making an appropriate learning rate difficult to tune, and its convergence performance is poor. There have been many attempts to achieve easily tunable learning rates and improve SGD performance. For example, in the case of smooth and strongly convex objective functions, variance reduction of the stochastic gradient [6,7,8,9], adaptive learning rates [10,11,12,13,14,15,16], averaging [17], momentum acceleration mechanisms [18,19,20,21], and the Powerball method [22] have been used, and a self-optimizing control method using fractional-order Gaussian noise has been proposed [23]. The most promising variance reduction technique is the stochastic variance reduced gradient (SVRG) method [8,9]. In fact, these stochastic methods need to store and use a full batch of past gradients in order to progressively reduce the variance of the stochastic gradient estimator. For stochastic optimization problems, the number of training samples is usually large; consequently, such algorithms can be difficult to implement if storage space is limited. Therefore, adaptive learning rates and momentum mechanisms are more suitable for stochastic optimization problems than variance reduction.
In addition to classical optimization algorithms, several other popular stochastic optimization algorithms can be found in the current literature, for example, genetic algorithms, which are inspired by biological evolution [24], particle swarm optimization, derived from the natural behavior of swarms [25,26], and the more recent dynamic stochastic fractal search optimization algorithm based on a fuzzy-logic adaptive strategy for the diffusion parameters [27]. However, because heuristic algorithms are proposed based on experience without a theoretical basis, they lack a unified and complete theoretical framework. In addition, because the underlying problems are in general non-deterministic polynomial-time hard, global optimality cannot be guaranteed when using heuristic approaches.
Adaptive step sizes have a long history in convex settings. They were first proposed in the online learning literature [28] and later applied in the stochastic optimization literature [12]. In a recent study, an adaptive projection gradient algorithm was proposed for a special nonlinear fractional optimization problem whose objective is smooth convex in the numerator and smooth concave in the denominator [29]. In [30], a very weak condition is proposed under which a non-convex function converges to the global optimum almost everywhere, and in [31], a new convergence analysis of SGD under a decreasing learning rate regime is proposed. In [16,32,33], the authors studied several classes of stochastic optimization algorithms enriched with heavy-ball momentum, showing a linear rate for the stochastic heavy-ball method (i.e., the stochastic gradient descent method with momentum (SGDM)). This does not require large memory, merely requiring slightly more computation per iteration than the vanilla SGD method. Therefore, both techniques have been widely used and demonstrated to be effective for training deep neural networks [10,13]. On the one hand, common SGD variants have been designed and analyzed in convex settings [12], and the results may not provide a relevant guarantee of convergence in other settings [13]. On the other hand, it is well known that linear convergence can be achieved even with constant step-size gradient descent under certain conditions. However, while most advanced SGD variants can achieve faster convergence rates by applying an adaptive step size, the resulting convergence rate is not yet ideal.
We summarize the main contributions of the present paper relative to the existing literature as follows:
  • For smooth and convex functions, a novel adaptive step-size stochastic gradient descent (AdaSGD) method is proposed, and a momentum-accelerated variant (AdaSGDM) is studied as well. It is proven that both have a convergence rate of $\mathcal{O}(1/T)$, where $T$ is the maximum number of iterations.
  • For smooth but non-convex functions, we show that both AdaSGD and AdaSGDM drive the expected squared gradient norm to zero at a rate of $\mathcal{O}(1/T)$.
The rest of this paper is organized as follows. In Section 2, we describe the optimization problem and present the AdaSGD and AdaSGDM methods along with details of the adaptive step sizes. In Section 3, we prove the convergence rates of the proposed AdaSGD and AdaSGDM theoretically. Section 4 presents a practical implementation and discusses the experimental results on problems arising from machine learning. Finally, a brief conclusion and discussion of possible future work is presented in Section 5.

2. Problem Statement

Consider the following unconstrained minimization problem:
$$\min_{x\in\mathbb{R}^d} f(x), \qquad (1)$$
where $f:\mathbb{R}^d\to\mathbb{R}$ is a differentiable function (though not necessarily convex). More concretely, we assume that $f(x)$ has a Lipschitz continuous gradient.
Assumption 1. 
The continuously differentiable function $f:\mathbb{R}^d\to\mathbb{R}$ is bounded below by $f^*:=\inf_{x\in\mathbb{R}^d}f(x)\in\mathbb{R}$, and its gradient $\nabla f(x)$ is L-Lipschitz; i.e., there exists a constant $L>0$ such that
$$\|\nabla f(x)-\nabla f(y)\| \le L\,\|x-y\|, \quad \forall x,y\in\mathbb{R}^d,$$
where $\|\cdot\|$ denotes the Euclidean norm.
Notice that this inequality does not imply the convexity of f. However, the assumption that f is L-smooth implies that, for any $x,y\in\mathbb{R}^d$ ([34], Lemma 1.2.3),
$$\big|f(y)-f(x)-\langle\nabla f(x),\,y-x\rangle\big| \le \frac{L}{2}\,\|y-x\|^2.$$
Because we are interested in solving (1) using stochastic gradient methods, we assume that at each $x\in\mathbb{R}^d$ we have access to an unbiased estimator of the true gradient $\nabla f(x)$, denoted by $g(x,\xi)$, where $\xi$ is a source of randomness. We thus analyze SGD under the following assumption, which requires that $f(x)$ is lower bounded and that the stochastic gradients $g(x,\xi)$ are unbiased with bounded variance [5].
Assumption 2. 
For any $k\ge 1$, upon receiving the query $x_k\in\mathbb{R}^d$, the stochastic gradient oracle provides us with an independent unbiased estimate $g(x_k,\xi_k)$ of $\nabla f(x_k)$:
$$\mathbb{E}[g(x_k,\xi_k)] = \nabla f(x_k),$$
where $\xi_k$ is a random variable satisfying certain specific distributions, and the variance of the stochastic gradient is bounded as follows:
$$\mathbb{E}\big[\|g(x_k,\xi_k)-\nabla f(x_k)\|^2\big] \le \sigma^2,$$
for some parameter $0\le\sigma<\infty$.
It is worth noting that in the standard setting for SGD, the random vectors $\xi_k$, $k=1,2,\ldots$, are independent of each other (and of $x_k$; see, e.g., [17]). Note that, due to unbiasedness, Assumption 2 is the standard stochastic gradient oracle assumption used for SGD analysis, and the standard variance bound is equivalent to $\mathbb{E}\|g(x,\xi)\|^2 \le \|\nabla f(x)\|^2+\sigma^2$. The classic convergence analysis of the SGD algorithm relies on placing conditions on the positive step size $\eta_k$ [1]. In particular, sufficient conditions are that
$$\sum_{k=1}^{\infty}\eta_k = \infty \quad\text{and}\quad \sum_{k=1}^{\infty}\eta_k^2 < \infty.$$
The first condition is both necessary and intuitive, as the algorithm must be able to travel an arbitrary distance in order to reach a stationary point from the initial point. However, the second condition is actually unnecessary; for instance, the decreasing step size $\eta_k = c/\sqrt{k}$ satisfies the first condition but not the second. Many popular step-size choices, such as that of Adagrad [12], do not satisfy this condition, even though such step sizes can still guarantee the convergence of Adagrad on convex sets.

2.1. Adaptive Step Size Stochastic Gradient Descent

More specifically, Adagrad [11] can be used to solve problem (1), as follows:
$$x_{k+1} = x_k - \frac{\eta}{\sqrt{G_k}+\epsilon}\odot g(x_k,\xi_k),$$
where $\odot$ denotes element-wise multiplication with $g(x_k,\xi_k)$; here, $G_k\in\mathbb{R}^{d\times d}$ is a diagonal matrix whose $i$-th diagonal element, $i=1,2,\ldots,d$, is the sum of the squares of the $i$-th gradient coordinates up to time step $k$ (the square root and division are applied element-wise to this diagonal), while $\epsilon$ is a smoothing term that avoids division by zero (usually on the order of $1\times 10^{-8}$). Interestingly, without the square root operation, the algorithm performs much more poorly.
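To make this update concrete, the following is a minimal Python sketch of one diagonal Adagrad step, storing only the diagonal of $G_k$ as a vector; the function name and default values are illustrative and are not prescribed by [11].

```python
import numpy as np

def adagrad_step(x, g, accum, eta=0.1, eps=1e-8):
    """One diagonal Adagrad step: accumulate squared gradient coordinates,
    then scale each coordinate of the update by 1 / (sqrt(accum_i) + eps)."""
    accum = accum + g ** 2                        # diagonal of G_k, kept as a vector
    x_new = x - eta / (np.sqrt(accum) + eps) * g  # element-wise scaling of the gradient
    return x_new, accum
```

A driver loop simply threads `accum` (initialized to zeros) through successive calls; dropping `np.sqrt` reproduces the much poorer variant noted above.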
In this work, we focus on SGD with an adaptive step size, which iteratively updates the solution via
$$x_{k+1} = x_k - \eta_k\,g(x_k,\xi_k),$$
with an arbitrary initial point $x_0$ and adaptive step size $\eta_k$, where $\xi_k$ is a random variable obeying some distribution. In the sequel, we let $g_k := g(x_k,\xi_k)$ denote a stochastic gradient and assume that we have access to a stochastic first-order black-box oracle that returns a noisy estimate of the gradient of $f$ at any point $x\in\mathbb{R}^d$. Unlike [11], in this paper we use the expectation of the stochastic gradient $g_k$ and its second moment to design a new adaptive step size, thereby obtaining a new kind of adaptive stochastic gradient descent method (AdaSGD).
The pseudo-code of our proposed AdaSGD algorithm is presented in Algorithm 1.
Algorithm 1 Adaptive Stochastic Gradient Descent (AdaSGD) Method
1: Initialization: initialize $x_0$ and the maximum number of iterations $T$
2: for $k = 0, 1, 2, \ldots, T$ do
3:   Compute the step size (i.e., learning rate) $\eta_k > 0$.
4:   Generate a random variable $\xi_k$.
5:   Compute a stochastic gradient $g(x_k, \xi_k)$.
6:   Update the new iterate $x_{k+1} = x_k - \eta_k\,g(x_k, \xi_k)$.
7: end for
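The step size in line 3 is specified later in (7) and involves the population quantities $\|\mathbb{E}[g_k]\|$ and $(\mathbb{E}\|g_k\|^2)^{1/2}$, which any implementation must estimate. The sketch below replaces these expectations with Monte Carlo averages over a few fresh stochastic gradients; this estimation scheme, the guard `eps`, and all names are our own illustrative assumptions rather than part of Algorithm 1 itself.

```python
import numpy as np

def adasgd(grad_fn, x0, T=1000, delta=0.1, n_est=10, eps=1e-12):
    """Sketch of Algorithm 1 (AdaSGD). grad_fn(x) returns one stochastic
    gradient g(x, xi); the expectations in step size (7) are replaced by
    sample averages over n_est fresh draws (an assumption, not the paper's
    prescription)."""
    x = x0.copy()
    for k in range(T + 1):
        G = np.array([grad_fn(x) for _ in range(n_est)])
        mean_norm = np.linalg.norm(G.mean(axis=0))            # ~ ||E[g_k]||
        second_moment = np.sqrt((G ** 2).sum(axis=1).mean())  # ~ (E||g_k||^2)^(1/2)
        eta = delta * mean_norm / max(second_moment - mean_norm, eps)  # step size (7)
        x = x - eta * grad_fn(x)                              # SGD update
    return x
```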

2.2. Adaptive Step Size Stochastic Gradient Descent with Momentum

In addition, we consider a momentum-accelerated variant of the proposed AdaSGD for practical applications of the algorithm. The difference from the stochastic heavy-ball method in [35] again lies in the selection of the adaptive step size. The AdaSGDM update is as follows:
$$x_{k+1} = x_k - \eta_k g_k + \beta\,(x_k-x_{k-1}), \qquad (2)$$
with $x_{-1}=x_0$, where $\beta\in[0,1)$ is the momentum constant. Equivalently, denoting $y_{k+1} := x_{k+1}-x_k$, AdaSGDM can be implemented in two steps for $k=0,1,2,\ldots$:
$$y_{k+1} = \beta y_k - \eta_k g_k, \qquad x_{k+1} = x_k + y_{k+1}, \qquad (3)$$
where $\eta_k>0$ and $\beta\in[0,1)$. Notably, when updating $x_{k+1}$, a momentum term is constructed based on the auxiliary sequence $\{y_k\}$. When $\beta=0$, the method reduces to AdaSGD. The pseudo-code of the AdaSGDM algorithm is presented in Algorithm 2.
Algorithm 2 Adaptive Stochastic Gradient Descent with Momentum (AdaSGDM) Method
1: Initialization: $\beta \neq 0$, initialize $x_{-1} = x_0$ (i.e., $y_0 = 0$) and the maximum number of iterations $T$
2: for $k = 0, 1, 2, \ldots, T$ do
3:   Compute the step size (i.e., learning rate) $\eta_k > 0$.
4:   Generate a random variable $\xi_k$.
5:   Compute a stochastic gradient $g(x_k, \xi_k)$.
6:   Update the new iterate:
7:     $y_{k+1} = \beta y_k - \eta_k g_k$,
8:     $x_{k+1} = x_k + y_{k+1}$.
9: end for
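A corresponding sketch of Algorithm 2 reuses the same Monte Carlo step-size estimate (again an illustrative assumption) and differs from the AdaSGD sketch only in the two-step momentum update (3).

```python
import numpy as np

def adasgdm(grad_fn, x0, T=1000, beta=0.8, delta=0.1, n_est=10, eps=1e-12):
    """Sketch of Algorithm 2 (AdaSGDM), with x_{-1} = x_0, i.e., y_0 = 0."""
    x, y = x0.copy(), np.zeros_like(x0)
    for k in range(T + 1):
        G = np.array([grad_fn(x) for _ in range(n_est)])
        mean_norm = np.linalg.norm(G.mean(axis=0))
        second_moment = np.sqrt((G ** 2).sum(axis=1).mean())
        eta = delta * mean_norm / max(second_moment - mean_norm, eps)  # step size (7)
        y = beta * y - eta * grad_fn(x)  # y_{k+1} = beta * y_k - eta_k * g_k
        x = x + y                        # x_{k+1} = x_k + y_{k+1}
    return x
```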
To facilitate the analysis of the stochastic momentum method, we note that (3) implies the following recursion, which is straightforward to verify:
$$x_{k+1}+p_{k+1} = x_k+p_k-\frac{\eta_k}{1-\beta}\,g_k, \qquad (4)$$
where $p_k$ is provided by
$$p_k = \frac{\beta}{1-\beta}\,(x_k-x_{k-1}), \quad k\ge 1, \qquad (5)$$
and $p_0=0$. Let $v_k=\frac{1-\beta}{\beta}\,p_k$; then,
$$v_{k+1} = \beta v_k - \eta_k g_k. \qquad (6)$$
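For completeness, (4) follows by one substitution of the update (2) into the definition (5):
$$\begin{aligned}
x_{k+1}+p_{k+1} &= x_{k+1}+\frac{\beta}{1-\beta}\,(x_{k+1}-x_k) = \frac{1}{1-\beta}\,x_{k+1}-\frac{\beta}{1-\beta}\,x_k\\
&= \frac{1}{1-\beta}\,\big(x_k-\eta_kg_k+\beta(x_k-x_{k-1})\big)-\frac{\beta}{1-\beta}\,x_k\\
&= x_k+\frac{\beta}{1-\beta}\,(x_k-x_{k-1})-\frac{\eta_k}{1-\beta}\,g_k = x_k+p_k-\frac{\eta_k}{1-\beta}\,g_k.
\end{aligned}$$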

3. Convergence Analysis

In this section, we state the convergence results of AdaSGD and AdaSGDM in the convex setting, without requiring knowledge of the noise level, in Section 3.1. The convergence of the two methods in the non-convex setting is then analyzed in Section 3.2.

3.1. Adaptive Convergence Rates for Convex Functions

In this section, the convergence of AdaSGD and AdaSGDM in the convex setting is established via a classical convergence analysis carried out under the specific adaptive step-size iteration. Before stating the convergence theorem, we first provide the following technical lemma used in its proof.
Lemma 1 
([15]). If f is L-smooth, then $\|\nabla f(x)\|^2 \le 2L\,\big(f(x)-f(x^*)\big)$ for all $x\in\mathbb{R}^d$, where $x^*=\arg\min_xf(x)$.
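The proof is a standard one-line application of the smoothness bound: evaluating it at $y=x-\frac{1}{L}\nabla f(x)$ gives
$$f(x^*) \le f\Big(x-\frac{1}{L}\nabla f(x)\Big) \le f(x)-\frac{1}{L}\,\|\nabla f(x)\|^2+\frac{L}{2}\cdot\frac{1}{L^2}\,\|\nabla f(x)\|^2 = f(x)-\frac{1}{2L}\,\|\nabla f(x)\|^2,$$
and rearranging yields the claim.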
Next, we provide the convergence results of AdaSGD and AdaSGDM in the case of convex functions.
Theorem 1. 
Let Assumptions 1 and 2 hold and let f be convex. Design the adaptive step size as follows:
$$\eta_k = \delta_k\cdot\frac{\|\mathbb{E}[g_k]\|}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|}, \qquad (7)$$
where $\delta_k>0$ is a parameter (note that $(\mathbb{E}\|g_k\|^2)^{1/2}\ge\|\mathbb{E}[g_k]\|$ by Jensen's inequality, so the denominator is positive whenever the gradient noise is nondegenerate). Then, the iterates of AdaSGD ($\beta=0$) and AdaSGDM ($\beta\neq 0$) satisfy the following bound:
$$f(\bar{x}_T)-f(x^*) \le \frac{1}{T+1}\cdot\frac{1-\beta}{2C}\,\|x_0-x^*\|^2, \qquad (8)$$
where $\bar{x}_T=\frac{1}{T+1}\sum_{k=0}^{T}x_k$, $x^*=\arg\min_xf(x)$, $x_{-1}=x_0$ is a random initial point, $C$ is a positive constant, and $T$ is the maximum number of iterations.
Proof. 
From the iterative format (4), we can obtain
$$\|x_{k+1}+p_{k+1}-x^*\|^2-\|x_k+p_k-x^*\|^2 = -\frac{2\eta_k}{1-\beta}\,\langle g_k,\,x_k+p_k-x^*\rangle+\frac{\eta_k^2}{(1-\beta)^2}\,\|g_k\|^2. \qquad (9)$$
The adaptive step size we analyze here is a generalization of step sizes widely used in the online and stochastic optimization literature, and its good performance has already been validated by numerous empirical results. In particular, in what follows we consider step sizes satisfying (7). In addition, for $\|\mathbb{E}[g_k]\|$ and $(\mathbb{E}\|g_k\|^2)^{1/2}$, there always exist $C_1^k\in(0,1)$ and $C_2^k>1$ such that
$$\frac{\|\mathbb{E}[g_k]\|}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|} \ge C_1^k \qquad (10)$$
and
$$1 < \frac{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|} \le C_2^k. \qquad (11)$$
Taking the conditional expectation with respect to $\xi_1,\ldots,\xi_{k-1}$, we can find that
$$\begin{aligned}
\mathbb{E}\big[\eta_k\,\langle g_k,\,x_k+p_k-x^*\rangle\big] &= \langle\mathbb{E}[g_k],\,x_k+p_k-x^*\rangle\cdot\frac{\|\mathbb{E}[g_k]\|}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|}\cdot\delta_k\\
&\ge \langle\nabla f(x_k),\,x_k+p_k-x^*\rangle\cdot C_1^k\,\delta_k\\
&= \delta_kC_1^k\,\langle\nabla f(x_k),\,x_k-x^*\rangle+\delta_kC_1^k\cdot\frac{\beta}{1-\beta}\,\langle\nabla f(x_k),\,x_k-x_{k-1}\rangle\\
&\ge \delta_kC_1^k\,\big(f(x_k)-f(x^*)\big)+\delta_kC_1^k\cdot\frac{\beta}{1-\beta}\,\big(f(x_k)-f(x_{k-1})\big)\\
&\ge \delta_kC_1^k\,\big(f(x_k)-f(x^*)\big)+\bar\delta C_0\cdot\frac{\beta}{1-\beta}\,\big(f(x_k)-f(x_{k-1})\big). \qquad (12)
\end{aligned}$$
The first inequality is provided by (10), and the second by the convexity of the function; the last inequality follows from the definitions $C_0:=\min_{k=0,\ldots,T}C_1^k$ and $\bar\delta:=\min_{k=0,\ldots,T}\delta_k$. Hence, by summing (9) over $k=0$ to $T$, taking expectations, and incorporating (12), we have
$$\begin{aligned}
\frac{2}{1-\beta}\sum_{k=0}^{T}\delta_kC_1^k\,\big(f(x_k)-f(x^*)\big) \le{}& -\frac{2\beta}{(1-\beta)^2}\,\bar\delta C_0\,\big(f(x_T)-f(x_{-1})\big)+\frac{1}{(1-\beta)^2}\,\mathbb{E}\Big[\sum_{k=0}^{T}\eta_k^2\,\|g_k\|^2\Big]\\
&+\big(\|x_0+p_0-x^*\|^2-\|x_{T+1}+p_{T+1}-x^*\|^2\big).
\end{aligned}$$
Noticing the initial conditions $x_{-1}=x_0$ and $p_0=0$, we then have
$$\frac{2}{1-\beta}\sum_{k=0}^{T}\delta_kC_1^k\,\big(f(x_k)-f(x^*)\big) \le -\frac{2\beta}{(1-\beta)^2}\,\bar\delta C_0\,\big(f(x_T)-f(x_0)\big)+\frac{1}{(1-\beta)^2}\,\mathbb{E}\Big[\sum_{k=0}^{T}\eta_k^2\,\|g_k\|^2\Big]+\|x_0-x^*\|^2. \qquad (13)$$
Next, we consider the boundedness of the second term on the right-hand side of (13):
$$\begin{aligned}
\mathbb{E}\Big[\sum_{k=0}^{T}\eta_k^2\,\|g_k\|^2\Big] &= \sum_{k=0}^{T}\mathbb{E}\big[\eta_k^2\,\|g_k\|^2\big] = \sum_{k=0}^{T}\mathbb{E}\|g_k\|^2\cdot\delta_k^2\cdot\frac{\|\mathbb{E}[g_k]\|^2}{\Big(\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|\Big)^2}\\
&\le \sum_{k=0}^{T}(C_2^k)^2\,\delta_k^2\,\|\nabla f(x_k)\|^2 \le \sum_{k=0}^{T}(C_2^k)^2\,\delta_k^2\cdot 2L\,\big(f(x_k)-f(x^*)\big), \qquad (14)
\end{aligned}$$
where the first inequality is provided by (11) and the second by Lemma 1. Substituting (14) into (13), we have
$$\frac{2}{1-\beta}\sum_{k=0}^{T}\delta_kC_1^k\,\big(f(x_k)-f(x^*)\big)-\frac{2L}{(1-\beta)^2}\sum_{k=0}^{T}(C_2^k)^2\,\delta_k^2\,\big(f(x_k)-f(x^*)\big) \le -\frac{2\beta\,\bar\delta C_0}{(1-\beta)^2}\,\big(f(x_T)-f(x_0)\big)+\|x_0-x^*\|^2. \qquad (15)$$
By recombining (15) and using the definition of $C_0$,
$$\frac{2}{(1-\beta)^2}\sum_{k=0}^{T}\big((1-\beta)\,\delta_kC_1^k-L\,(C_2^k)^2\,\delta_k^2\big)\,\big(f(x_k)-f(x^*)\big) \le \frac{2\beta\,\bar\delta C_0}{(1-\beta)^2}\,\big(f(x_0)-f(x_T)\big)+\|x_0-x^*\|^2 \le \frac{2\beta\,\bar\delta C_0}{(1-\beta)^2}\,\big(f(x_0)-f(x^*)\big)+\|x_0-x^*\|^2,$$
and we choose $\delta_k < \frac{(1-\beta)\,C_1^k}{L\,(C_2^k)^2}$ such that $(1-\beta)\,\delta_kC_1^k-L\,(C_2^k)^2\,\delta_k^2 > 0$.
Note that $0<C_1^k<1$ and $0<C_1^k+1\le C_2^k$ can be obtained from (10) and (11). Per the definitions of $C_0$ and $\bar\delta$, and without loss of generality, we can assume that $\bar\delta=\delta_{k_0}$. Then,
$$\bar\delta C_0 = \delta_{k_0}C_0 \le \frac{1-\beta}{L}\cdot\frac{C_1^{k_0}\,C_0}{(C_2^{k_0})^2} \le \frac{1-\beta}{L}\cdot\frac{(C_1^{k_0})^2}{(C_2^{k_0})^2} \le \frac{1-\beta}{L}.$$
Let $C := \min_{k=0,\ldots,T}\big\{(1-\beta)\,\delta_kC_1^k-L\,(C_2^k)^2\,\delta_k^2\big\}$; then,
$$\frac{2C}{(1-\beta)^2}\sum_{k=0}^{T}\big(f(x_k)-f(x^*)\big) \le \|x_0-x^*\|^2+\frac{2\beta}{1-\beta}\cdot\frac{1}{L}\,\big(f(x_0)-f(x^*)\big),$$
which means that
$$\sum_{k=0}^{T}\big(f(x_k)-f(x^*)\big) \le \frac{(1-\beta)^2}{2C}\,\|x_0-x^*\|^2+\frac{\beta(1-\beta)}{LC}\,\big(f(x_0)-f(x^*)\big) \le \frac{(1-\beta)^2}{2C}\,\|x_0-x^*\|^2+\frac{\beta(1-\beta)}{LC}\cdot\frac{L}{2}\,\|x_0-x^*\|^2 = \frac{1-\beta}{2C}\,\|x_0-x^*\|^2,$$
where the second inequality uses $f(x_0)-f(x^*)\le\frac{L}{2}\,\|x_0-x^*\|^2$, which follows from L-smoothness and $\nabla f(x^*)=0$.
Now, from Jensen's inequality, we have
$$f(\bar{x}_T)-f(x^*) \le \frac{1}{T+1}\sum_{k=0}^{T}\big(f(x_k)-f(x^*)\big) \le \frac{1}{T+1}\cdot\frac{1-\beta}{2C}\,\|x_0-x^*\|^2,$$
where $\bar{x}_T=\frac{1}{T+1}\sum_{k=0}^{T}x_k$. □
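As a quick numerical illustration of the ratio bounds (10) and (11) (not part of the proof), the quantities $\|\mathbb{E}[g]\|$ and $(\mathbb{E}\|g\|^2)^{1/2}$ can be estimated for a simple additive Gaussian noise model; the model and all constants below are assumptions made only for this check.

```python
import numpy as np

rng = np.random.default_rng(0)
true_grad, sigma = np.array([3.0, -1.0]), 0.5              # assumed gradient and noise level
G = true_grad + sigma * rng.standard_normal((100_000, 2))  # samples of g = grad + noise

mean_norm = np.linalg.norm(G.mean(axis=0))            # ~ ||E[g]||
second_moment = np.sqrt((G ** 2).sum(axis=1).mean())  # ~ (E||g||^2)^(1/2)
ratio1 = mean_norm / (second_moment - mean_norm)      # bounded below by C_1 as in (10)
ratio2 = second_moment / (second_moment - mean_norm)  # lies in (1, C_2] as in (11)
print(ratio1, ratio2)                                 # note: ratio2 = ratio1 + 1
```

Since the two ratios differ by exactly one, this also illustrates the relation $C_1^k+1\le C_2^k$ used in the proof.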

3.2. Adaptive Convergence for Non-Convex Optimization

We now turn to the case where f is non-convex; in practice, most loss functions are non-convex. Because the convexity of the function plays an important role in the preceding convergence analysis, the above conclusion is not valid in the non-convex case, and there are still few theoretical results on the convergence of stochastic optimization in non-convex settings. In this section, we analyze the convergence of AdaSGD and AdaSGDM in non-convex settings, again applying the expectation of the stochastic gradient and its second moment to the design of the adaptive step size.
Theorem 2. 
Let Assumptions 1 and 2 hold, with f possibly non-convex, and choose the step size as in (7). Then, the iterates of AdaSGD satisfy the following bound:
$$\min_{k=0,\ldots,T}\mathbb{E}\|\nabla f(x_k)\|^2 \le \frac{1}{T+1}\cdot\frac{1}{\hat{C}}\,\big(f(x_0)-f(x^*)\big),$$
where $x^*$ is one of the minimum points of the function $f(x)$ over $\mathbb{R}^d$, $x_0$ is a random initial point, $\hat{C}$ is a positive constant, and $T$ is the maximum number of iterations.
Proof. 
Because f(x) is an L-smooth function, we have
$$f(y) \le f(x)+\langle\nabla f(x),\,y-x\rangle+\frac{L}{2}\,\|y-x\|^2.$$
Then,
$$f(x_{k+1}) \le f(x_k)+\langle\nabla f(x_k),\,x_{k+1}-x_k\rangle+\frac{L}{2}\,\|x_{k+1}-x_k\|^2 = f(x_k)-\langle\nabla f(x_k),\,\eta_kg_k\rangle+\frac{L}{2}\,\eta_k^2\,\|g_k\|^2. \qquad (16)$$
Taking the expectation on both sides of (16),
$$\mathbb{E}\big[f(x_{k+1})-f(x_k)\big] \le -\mathbb{E}\big[\langle\nabla f(x_k),\,\eta_kg_k\rangle\big]+\frac{L}{2}\,\eta_k^2\,\mathbb{E}\big[\|g_k\|^2\big] = -\eta_k\,\|\nabla f(x_k)\|^2+\frac{L}{2}\,\eta_k^2\,\mathbb{E}\|g_k\|^2.$$
Now, by taking the adaptive step size as in (7), we have
$$\begin{aligned}
\mathbb{E}\big[f(x_{k+1})-f(x_k)\big] &\le -\delta_k\cdot\frac{\|\mathbb{E}[g_k]\|}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|}\cdot\|\nabla f(x_k)\|^2+\frac{L}{2}\,\delta_k^2\cdot\frac{\|\mathbb{E}[g_k]\|^2}{\Big(\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|\Big)^2}\cdot\mathbb{E}\|g_k\|^2\\
&= -\delta_k\cdot\frac{\|\mathbb{E}[g_k]\|}{\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|}\cdot\|\nabla f(x_k)\|^2+\frac{L}{2}\,\delta_k^2\cdot\frac{\mathbb{E}\|g_k\|^2}{\Big(\big(\mathbb{E}\|g_k\|^2\big)^{1/2}-\|\mathbb{E}[g_k]\|\Big)^2}\cdot\|\mathbb{E}[g_k]\|^2.
\end{aligned}$$
From (10) and (11),
$$\mathbb{E}\big[f(x_{k+1})-f(x_k)\big] \le -\delta_kC_1^k\,\|\nabla f(x_k)\|^2+\frac{L}{2}\,\delta_k^2\,(C_2^k)^2\,\|\nabla f(x_k)\|^2,$$
and we choose $\delta_k < \frac{2\,C_1^k}{L\,(C_2^k)^2}$ such that $\delta_kC_1^k-\frac{L}{2}\,\delta_k^2\,(C_2^k)^2 > 0$. Let $\hat{C} := \min_{k=0,\ldots,T}\big\{\delta_kC_1^k-\frac{L}{2}\,\delta_k^2\,(C_2^k)^2\big\}$; then,
$$\mathbb{E}\big[f(x_{k+1})-f(x_k)\big] \le -\hat{C}\,\mathbb{E}\|\nabla f(x_k)\|^2. \qquad (17)$$
By summing (17) for $k=0,\ldots,T$ and averaging,
$$\frac{\hat{C}}{T+1}\sum_{k=0}^{T}\mathbb{E}\|\nabla f(x_k)\|^2 \le \frac{1}{T+1}\,\big(f(x_0)-\mathbb{E}[f(x_{T+1})]\big) \le \frac{1}{T+1}\,\big(f(x_0)-f(x^*)\big);$$
then,
$$\min_{k=0,\ldots,T}\mathbb{E}\|\nabla f(x_k)\|^2 \le \frac{1}{T+1}\cdot\frac{1}{\hat{C}}\,\big(f(x_0)-f(x^*)\big),$$
where $x^*$ is one of the minimum points of the function $f(x)$ over $\mathbb{R}^d$. □
In order to prove the convergence of AdaSGDM in the non-convex setting, we first bound the expected one-step decrease of the function in terms of the local gradient variation and the gradient itself. Second, the local gradient variation is in turn bounded by past gradients. Finally, combining the two yields a bound on the gradients, that is, the convergence of AdaSGDM in the non-convex setting. Before stating the adaptive convergence of AdaSGDM for non-convex optimization, we first present the following two lemmas.
Lemma 2. 
Let $z_k = x_k+p_k$. For AdaSGDM, we have the following for any $k\ge 0$:
$$\mathbb{E}\big[f(z_{k+1})-f(z_k)\big] \le \frac{1}{2L}\,\mathbb{E}\|\nabla f(z_k)-\nabla f(x_k)\|^2-\Big(\frac{1}{1-\beta}\,C_1^k\delta_k-\frac{L}{(1-\beta)^2}\,(C_2^k)^2\,\delta_k^2\Big)\,\mathbb{E}\|\nabla f(x_k)\|^2,$$
where L is the Lipschitz constant of $\nabla f$, $\beta\in[0,1)$ is the momentum constant mentioned in (2), $C_1^k$ and $C_2^k$ are the parameters in (10) and (11), and $\delta_k$ is the parameter in (7).
Proof. 
Because f(x) is a smooth function, we have
$$f(y) \le f(x)+\langle\nabla f(x),\,y-x\rangle+\frac{L}{2}\,\|y-x\|^2.$$
We define $\omega_k = g_k-\nabla f(x_k)$; then, from Assumption 2, $\mathbb{E}[\omega_k]=0$ can be obtained. Then,
$$\begin{aligned}
f(z_{k+1}) &\le f(z_k)+\langle\nabla f(z_k),\,z_{k+1}-z_k\rangle+\frac{L}{2}\,\|z_{k+1}-z_k\|^2\\
&= f(z_k)-\frac{\eta_k}{1-\beta}\,\nabla f(z_k)^\top g_k+\frac{L}{2}\cdot\frac{\eta_k^2}{(1-\beta)^2}\,\|g_k\|^2\\
&= f(z_k)-\frac{\eta_k}{1-\beta}\,\nabla f(z_k)^\top\big(\omega_k+\nabla f(x_k)\big)+\frac{L}{2}\cdot\frac{\eta_k^2}{(1-\beta)^2}\,\|g_k\|^2\\
&= f(z_k)-\frac{\eta_k}{1-\beta}\,\nabla f(z_k)^\top\omega_k-\frac{\eta_k}{1-\beta}\,\nabla f(x_k)^\top\big(\nabla f(z_k)-\nabla f(x_k)\big)-\frac{\eta_k}{1-\beta}\,\|\nabla f(x_k)\|^2+\frac{L}{2}\cdot\frac{\eta_k^2}{(1-\beta)^2}\,\|g_k\|^2. \qquad (18)
\end{aligned}$$
Recombining (18) and taking the expectation of both sides,
$$\begin{aligned}
\mathbb{E}\big[f(z_{k+1})-f(z_k)\big] &\le -\frac{\eta_k}{1-\beta}\,\mathbb{E}\big[\nabla f(x_k)^\top\big(\nabla f(z_k)-\nabla f(x_k)\big)\big]-\frac{\eta_k}{1-\beta}\,\mathbb{E}\big[\|\nabla f(x_k)\|^2\big]+\frac{L}{2}\cdot\frac{\eta_k^2}{(1-\beta)^2}\,\mathbb{E}\big[\|g_k\|^2\big]\\
&\le \frac{1}{2}\,\mathbb{E}\Big[\frac{1}{L}\,\|\nabla f(z_k)-\nabla f(x_k)\|^2+\frac{L\,\eta_k^2}{(1-\beta)^2}\,\|\nabla f(x_k)\|^2\Big]-\frac{\eta_k}{1-\beta}\,\mathbb{E}\big[\|\nabla f(x_k)\|^2\big]+\frac{L}{2}\cdot\frac{\eta_k^2}{(1-\beta)^2}\,\mathbb{E}\big[\|g_k\|^2\big], \qquad (19)
\end{aligned}$$
where the first inequality uses $\mathbb{E}\big[\nabla f(z_k)^\top\omega_k\big]=0$ and the second uses the inequality of the arithmetic and geometric means ($ab\le\frac{1}{2}(a^2+b^2)$). By taking the adaptive step size as in (7), noting that $\eta_k\le\delta_kC_2^k$ by (11), and substituting into (19), we have
$$\mathbb{E}\big[f(z_{k+1})-f(z_k)\big] \le \frac{1}{2L}\,\mathbb{E}\|\nabla f(z_k)-\nabla f(x_k)\|^2-\frac{1}{1-\beta}\,C_1^k\delta_k\,\mathbb{E}\big[\|\nabla f(x_k)\|^2\big]+\frac{L}{(1-\beta)^2}\,(C_2^k)^2\,\delta_k^2\,\mathbb{E}\|\nabla f(x_k)\|^2,$$
using (10) and (11). □
Lemma 3. 
For AdaSGDM, for any $k\ge 1$, we have
$$\mathbb{E}\|\nabla f(z_k)-\nabla f(x_k)\|^2 \le \frac{L^2\beta^2}{(1-\beta)^2}\cdot\Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\delta_{k-1-i}^2\,(C_2^{k-1-i})^2\,\|\nabla f(x_{k-1-i})\|^2, \qquad (20)$$
where $\Gamma_{k-1}:=\sum_{i=0}^{k-1}\beta^i=\frac{1-\beta^k}{1-\beta}$, L is the Lipschitz constant of $\nabla f$, and $\beta\in[0,1)$ is the momentum constant mentioned in (2). For $k\ge 1$, $C_1^k$ and $C_2^k$ are the parameters in (10) and (11), respectively, and $\delta_k$ is the parameter in (7).
Proof. 
Because $\nabla f$ is L-Lipschitz and $z_k=x_k+p_k$ with $p_k$ as in (5), we have
$$\|\nabla f(z_k)-\nabla f(x_k)\|^2 \le L^2\,\|z_k-x_k\|^2 = L^2\,\|p_k\|^2 = \frac{L^2\beta^2}{(1-\beta)^2}\,\|x_k-x_{k-1}\|^2. \qquad (21)$$
Recall the recursion in (6), that is, $v_{k+1}=\beta v_k-\eta_kg_k$, and note that $v_0=0$. By induction, for $k\ge 1$,
$$v_k = -\sum_{i=0}^{k-1}\beta^i\,\eta_{k-1-i}\,g_{k-1-i}. \qquad (22)$$
Let $\Gamma_{k-1}=\sum_{i=0}^{k-1}\beta^i=\frac{1-\beta^k}{1-\beta}$; then, by the convexity of $\|\cdot\|^2$ applied to the weights $\beta^i/\Gamma_{k-1}$,
$$\|v_k\|^2 = \Big\|\sum_{i=0}^{k-1}\frac{\beta^i}{\Gamma_{k-1}}\,\eta_{k-1-i}\,g_{k-1-i}\Big\|^2\cdot\Gamma_{k-1}^2 \le \Gamma_{k-1}^2\sum_{i=0}^{k-1}\frac{\beta^i}{\Gamma_{k-1}}\,\eta_{k-1-i}^2\,\|g_{k-1-i}\|^2 = \Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\eta_{k-1-i}^2\,\|g_{k-1-i}\|^2. \qquad (23)$$
Taking the expectation over both sides of (23) and noting the step size (7), we have
$$\mathbb{E}\big[\|v_k\|^2\big] \le \Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\eta_{k-1-i}^2\,\mathbb{E}\big[\|g_{k-1-i}\|^2\big] \le \Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\delta_{k-1-i}^2\,(C_2^{k-1-i})^2\,\|\nabla f(x_{k-1-i})\|^2.$$
Then, taking the expectation of both sides of (21), noting that $x_k-x_{k-1}=v_k$, and substituting the above inequality into it, we have
$$\mathbb{E}\big[\|\nabla f(z_k)-\nabla f(x_k)\|^2\big] \le \frac{L^2\beta^2}{(1-\beta)^2}\,\mathbb{E}\big[\|x_k-x_{k-1}\|^2\big] = \frac{L^2\beta^2}{(1-\beta)^2}\,\mathbb{E}\big[\|v_k\|^2\big] \le \frac{L^2\beta^2}{(1-\beta)^2}\,\Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\delta_{k-1-i}^2\,(C_2^{k-1-i})^2\,\|\nabla f(x_{k-1-i})\|^2,$$
which means that (20) is established. □
Based on the previous Lemmas 2 and 3, we can now state the convergence analysis of AdaSGDM under non-convex settings.
Theorem 3. 
Let Assumptions 1 and 2 hold, and let f be a non-convex L-smooth function. Choosing the step size as in (7), the iteration sequence $\{x_k\}$ obtained by AdaSGDM satisfies the following bound:
$$\min_{k=0,\ldots,T-1}\mathbb{E}\|\nabla f(x_k)\|^2 \le \frac{1}{\bar{c}-\bar{d}}\cdot\frac{1}{T}\,\big(f(x_0)-f(x^*)\big),$$
where $x^*=\arg\min_xf(x)$, $\bar{c},\bar{d}\ge 0$ are constants with $\bar{c}>\bar{d}$, and $T$ is the maximum number of iterations.
Proof. 
From the initial conditions, it follows that $z_0=x_0$; thus, Lemmas 2 and 3 imply the following inequality:
$$\mathbb{E}\big[f(z_{k+1})-f(z_k)\big] \le \frac{L\beta^2}{2(1-\beta)^2}\,\Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\delta_{k-1-i}^2\,(C_2^{k-1-i})^2\,\mathbb{E}\big[\|\nabla f(x_{k-1-i})\|^2\big]-\Big(\frac{1}{1-\beta}\,C_1^k\delta_k-\frac{L}{(1-\beta)^2}\,(C_2^k)^2\,\delta_k^2\Big)\,\mathbb{E}\big[\|\nabla f(x_k)\|^2\big]. \qquad (24)$$
By summing (24) over $k=0,\ldots,T$ (the gradient-variation term vanishes at $k=0$ because $z_0=x_0$) and regrouping the double sum by the index of $\|\nabla f\|^2$,
$$\begin{aligned}
\mathbb{E}\big[f(z_{T+1})-f(z_0)\big] \le{}& -\sum_{k=0}^{T}\Big(\frac{1}{1-\beta}\,C_1^k\delta_k-\frac{L}{(1-\beta)^2}\,(C_2^k)^2\,\delta_k^2\Big)\,\mathbb{E}\big[\|\nabla f(x_k)\|^2\big]\\
&+\frac{L\beta^2}{2(1-\beta)^2}\sum_{k=1}^{T}\Gamma_{k-1}\sum_{i=0}^{k-1}\beta^i\,\delta_{k-1-i}^2\,(C_2^{k-1-i})^2\,\mathbb{E}\big[\|\nabla f(x_{k-1-i})\|^2\big]\\
={}& -\sum_{k=0}^{T-1}(c_k-d_k)\,\mathbb{E}\|\nabla f(x_k)\|^2-c_T\,\mathbb{E}\|\nabla f(x_T)\|^2,
\end{aligned}$$
where $c_k:=\frac{1}{1-\beta}\,C_1^k\delta_k-\frac{L}{(1-\beta)^2}\,(C_2^k)^2\,\delta_k^2$ and $d_k:=\frac{L\beta^2}{2(1-\beta)^2}\,\delta_k^2\,(C_2^k)^2\sum_{i=k}^{T-1}\Gamma_i\,\beta^{i-k}$. For $k=0,\ldots,T$, we choose
$$\delta_k < \frac{(1-\beta)\,C_1^k}{L\,(C_2^k)^2\Big(1+\frac{\beta^2}{2}\sum_{i=k}^{T-1}\Gamma_i\,\beta^{i-k}\Big)}.$$
Thus, it is true that $c_k>d_k$ for $k=0,\ldots,T-1$, as well as that $c_T>0$. Then,
$$\sum_{k=0}^{T-1}(c_k-d_k)\,\mathbb{E}\|\nabla f(x_k)\|^2 \le \mathbb{E}\big[f(z_0)-f(z_{T+1})\big]-c_T\,\mathbb{E}\|\nabla f(x_T)\|^2 \le \mathbb{E}\big[f(z_0)-f(z_{T+1})\big].$$
Furthermore, because $z_0=x_0$ and $x^*=\arg\min_xf(x)$, we have
$$\min_{k=0,\ldots,T-1}\mathbb{E}\big[\|\nabla f(x_k)\|^2\big] \le \frac{1}{\bar{c}-\bar{d}}\cdot\frac{1}{T}\,\mathbb{E}\big[f(z_0)-f(z_{T+1})\big] \le \frac{1}{\bar{c}-\bar{d}}\cdot\frac{1}{T}\,\big(f(x_0)-f(x^*)\big),$$
where $\bar{c}-\bar{d} := \min_{k=0,\ldots,T-1}\{c_k-d_k\}$. □

4. Experiments

In this section, we present experimental results of applying our adaptive schemes to several test problems. Section 4.1 focuses on a regularized linear regression problem and a regularized logistic regression problem, which are widely used in the machine learning community, while Section 4.2 considers a non-convex support vector machine (SVM) problem and a non-convex quadratic problem. In both, we report the performance of AdaSGD and AdaSGDM and compare them with SGDM, Adam, and Adagrad. In each instance, we set the step size for AdaSGD and AdaSGDM using the procedure in (7). To make the comparison equitable, the default parameter values for Adam are selected according to [10]; specifically, $\eta=0.001$, $\beta_1=0.9$, $\beta_2=0.999$, and $\epsilon=10^{-8}$. For Adagrad, the initial step size is $\eta_0=0.1$. Using random datasets, we demonstrate that the proposed adaptive SGD methods can effectively solve practical machine learning problems.
The parameters of SGDM are set to a step size of $\eta=0.001$ and a momentum coefficient of $\beta=0.8$ in the following applications. We repeated each experiment ten times and report the average results. All methods use the same random initialization, all figures in this section are on a log-log scale, and the maximum number of iterations is $T=10{,}000$. Finally, all algorithms involved in the experiments were implemented in MATLAB R2017a (9.2.0.538062, 64-bit) on Windows 10.

4.1. Convex Functions

Consider the following two convex optimization problems: an $\ell_2$-regularized quadratic function, $f_1(x) = \|Ax-b\|^2+\lambda\,\|x\|_2^2$, and an $\ell_2$-regularized logistic regression for binary classification, $f_2(x) = \sum_{i=1}^{m}\log\big(1+e^{-b_iA_ix}\big)+\lambda\,\|x\|_2^2$, with penalty parameter $\lambda=0.1$, where $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$. The entries of $b$ are randomly $-1$ or $1$, and the rows $A_i$ of $A$ are generated from an i.i.d. multivariate Gaussian distribution conditioned on $b_i$. We use a mini-batch of size $n$ to compute a stochastic gradient at each iteration. Note that the gradients of $f_1(x)$ and $f_2(x)$ are continuous; we assume that random sampling of small batches from the datasets satisfies Assumption 2.
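A minimal sketch of the data generation and mini-batch gradient oracles for $f_1$ and $f_2$ follows; the Gaussian mean shift used to condition the rows on $b_i$ and the unbiased rescaling by $m/\text{batch}$ are our own illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 60, 10, 0.1
b = rng.choice([-1.0, 1.0], size=m)                 # random +/-1 labels
A = rng.standard_normal((m, n)) + 0.1 * b[:, None]  # rows conditioned on b_i (assumed model)

def grad_f1(x, batch=n):
    """Unbiased mini-batch gradient of f1(x) = ||Ax - b||^2 + lam * ||x||^2."""
    idx = rng.choice(m, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return 2.0 * (m / batch) * Ai.T @ (Ai @ x - bi) + 2.0 * lam * x

def grad_f2(x, batch=n):
    """Unbiased mini-batch gradient of the l2-regularized logistic loss f2."""
    idx = rng.choice(m, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    s = 1.0 / (1.0 + np.exp(bi * (Ai @ x)))          # sigmoid(-b_i * A_i x)
    return (m / batch) * Ai.T @ (-bi * s) + 2.0 * lam * x
```

These oracles can be passed directly as `grad_fn` to the `adasgd` and `adasgdm` sketches of Section 2.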
When $m=60$ and $n=10$, the convergence paths of SGDM, Adam, Adagrad, and the proposed AdaSGD and AdaSGDM for minimizing the different convex functions are demonstrated in Figure 1, where the left subfigure, corresponding to $f_1(x)$, takes 10.337139 s and the right subfigure, corresponding to $f_2(x)$, takes 4.947887 s. When $m=10{,}000$ and $n=200$, the results are shown in Figure 2, where the left and right subfigures, corresponding to $f_1(x)$ and $f_2(x)$, take 2130.277402 s and 442.714215 s, respectively.
From the left and right subfigures of Figure 1 and Figure 2, it is not difficult to see that AdaSGD and AdaSGDM show better convergence than the existing stochastic optimization methods on convex optimization problems of different models. Observe that SGDM displays local acceleration close to the optimal point and attains a convergence rate of $\mathcal{O}(1/\sqrt{T})$, as shown in [36]. Adagrad shows a convergence rate of $\mathcal{O}(1/\sqrt{T})$, as mentioned in [11], and Adam eventually attains a rate of convergence of $\mathcal{O}(1/\sqrt{T})$, as shown in [10]. The proposed methods, AdaSGD and AdaSGDM, tend to converge faster than SGDM, Adam, and Adagrad, showing a convergence rate of $\mathcal{O}(1/T)$, which is consistent with our theoretical results in this paper.

4.2. Non-Convex Functions

Consider the following non-convex support vector machine (SVM) problem with a sigmoid loss function, which has previously been considered in [5] (the data points are generated in the same way as in Section 4.1): $\min_{x\in\mathbb{R}^n} f_3(x) := \sum_{i=1}^{m}\big[1-\tanh(b_i\langle x,a_i\rangle)\big]+\lambda\,\|x\|_2^2$, where $\lambda=0.1$ is a regularization parameter. In addition, consider the following non-convex optimization problem corresponding to the elastic net regression model [37]: $\min_{x\in\mathbb{R}^n} f_4(x) := \|Ax-b\|^2-\lambda_1\,\|x\|_2^2+\lambda_2\,\|x\|_1$, where $\lambda_1=0.001$ and $\lambda_2=0.01$. Here, we again use a mini-batch of size $n$ to compute a stochastic gradient at each iteration. Regarding the two non-convex functions $f_3(x)$ and $f_4(x)$, the gradient of $f_3(x)$ is obviously continuous. For $f_4(x)$, it is easy to see that the derivative does not exist at the point $x=0$; however, we can use a subgradient at this point, for example, $\nabla f_4(0)=0$. Although this gradient is discontinuous, it satisfies the Lipschitz condition, meaning that the conclusion of Theorem 3 holds. The convergence paths of the algorithms SGDM, Adam, Adagrad, AdaSGD, and AdaSGDM when $(m=60, n=10)$ and $(m=10{,}000, n=200)$ are shown in Figure 3 and Figure 4, respectively. The CPU times of the left and right subfigures, corresponding to $f_3(x)$ and $f_4(x)$, are 10.808409 s and 5.824761 s in Figure 3, and 2369.346180 s and 455.041079 s in Figure 4.
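Analogous mini-batch (sub)gradient oracles for $f_3$ and $f_4$ can be sketched as follows, reusing the data of the previous snippet; the form of $f_4$ follows the reconstruction given above, and `np.sign` (with `sign(0) = 0`) realizes the subgradient choice $\nabla f_4(0)=0$.

```python
def grad_f3(x, batch=n, lam=0.1):
    """Mini-batch gradient of f3(x) = sum_i [1 - tanh(b_i <x, a_i>)] + lam * ||x||^2."""
    idx = rng.choice(m, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    t = np.tanh(bi * (Ai @ x))
    return (m / batch) * Ai.T @ (-(1.0 - t ** 2) * bi) + 2.0 * lam * x

def subgrad_f4(x, batch=n, lam1=0.001, lam2=0.01):
    """Mini-batch subgradient of f4(x) = ||Ax - b||^2 - lam1 * ||x||^2 + lam2 * ||x||_1,
    using sign(0) = 0 for the l1 term."""
    idx = rng.choice(m, size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return 2.0 * (m / batch) * Ai.T @ (Ai @ x - bi) - 2.0 * lam1 * x + lam2 * np.sign(x)
```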
From Figure 3 and Figure 4, it can be seen that AdaSGD and AdaSGDM maintain good convergence on non-convex optimization problems of different models. For the different non-convex objective functions with Lipschitz continuous gradients, the gradient of SGDM converges in expectation at the order of $\mathcal{O}(1/\sqrt{T})$, as shown in [36]. As described in [10], the convergence analysis of Adam is not applicable to non-convex problems, and it is only empirically that Adam is likely to perform better than other methods. The Adagrad algorithm displays a convergence rate of $\mathcal{O}(\log T/\sqrt{T})$ in the non-convex setting, as shown in [38]. The proposed methods, AdaSGD and AdaSGDM, tend to converge faster than SGDM, Adam, and Adagrad under non-convex settings, showing a convergence rate of $\mathcal{O}(1/T)$, which is consistent with our theoretical results in this paper.

5. Conclusions and Future Work

In this paper, two shortcomings of adaptive stochastic gradient descent methods for stochastic optimization problems are studied. The first is the assumption of a convex setting, which is often too restrictive in many practical machine learning optimization problems. The second is slow convergence, which results from using adaptive step sizes based on past stochastic gradients and is generally of order $\mathcal{O}(1/\sqrt{T})$. Consequently, in this paper we first propose a new adaptive SGD in which the new step size is a function of the expectation of the past stochastic gradient and its second moment. In both convex and non-convex settings, adaptive SGD with the newly designed step size converges at the rate of $\mathcal{O}(1/T)$. Second, the new adaptive SGD is extended to the case with momentum, again achieving a convergence rate of $\mathcal{O}(1/T)$ in both convex and non-convex settings. In summary, our results indicate that the designed adaptive step size is able to alleviate, to a certain extent, the slow convergence caused by the inherent variance. The proposed approach achieves accelerated convergence in the convex setting and works in non-convex settings as well. Experimental results show that the proposed adaptive stochastic gradient descent methods, both with and without momentum, have better convergence performance than existing methods. In the future, we hope to apply this method to large datasets or to actual data collection in order to better analyze its effectiveness.

Author Contributions

Writing—original draft, R.C.; Writing—review & editing, X.T.; Supervision, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Fundamental Research Funds for the Central Universities No. 2662021LXQD001 and the National Natural Science Foundation of China with Grant No. 61903148.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Ye Yuan for his helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407.
  2. Chung, K.L. On a stochastic approximation method. Ann. Math. Stat. 1954, 25, 463–483.
  3. Polyak, B.T.; Juditsky, A.B. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim. 1992, 30, 838–855.
  4. Ruszczyński, A.; Syski, W. A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems. Math. Program. Stud. 1986, 28, 113–131.
  5. Ghadimi, S.; Lan, G. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim. 2013, 23, 2341–2368.
  6. Bach, F. Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression. J. Mach. Learn. Res. 2014, 15, 595–627.
  7. Xiao, L.; Zhang, T. A proximal stochastic gradient method with progressive variance reduction. SIAM J. Optim. 2014, 24, 2057–2075.
  8. Johnson, R.; Zhang, T. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013; pp. 315–323.
  9. Cutkosky, A.; Busa-Fekete, R. Distributed stochastic optimization via adaptive stochastic gradient descent. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 1910–1919.
  10. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
  11. Mustapha, A.; Mohamed, L.; Ali, K.; Hamlich, M.; Bellatreche, L.; Mondal, A. An overview of gradient descent algorithm optimization in machine learning: Application in the ophthalmology field. In Proceedings of Smart Applications and Data Analysis (SADASC 2020), Marrakesh, Morocco, 25–26 June 2020; pp. 349–359.
  12. Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159.
  13. Zeiler, M.D. Adadelta: An adaptive learning rate method. arXiv 2012, arXiv:1212.5701.
  14. Reddi, S.J.; Kale, S.; Kumar, S. On the convergence of Adam and beyond. arXiv 2019, arXiv:1904.09237.
  15. Li, X.; Orabona, F. On the convergence of stochastic gradient descent with adaptive stepsizes. arXiv 2018, arXiv:1805.08114.
  16. Yousefian, F.; Nedić, A.; Shanbhag, U.V. On stochastic gradient and subgradient methods with adaptive steplength sequences. Automatica 2012, 48, 56–67.
  17. Nemirovski, A.; Juditsky, A.; Lan, G.; Shapiro, A. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 2009, 19, 1574–1609.
  18. Qian, N. On the momentum term in gradient descent learning algorithms. Neural Netw. 1999, 12, 145–151.
  19. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  20. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 1983, 269, 543–547.
  21. Klein, S.; Pluim, J.P.W.; Staring, M.; Viergever, M.A. Adaptive stochastic gradient descent optimisation for image registration. Int. J. Comput. Vis. 2009, 81, 227.
  22. Yuan, Y.; Li, M.; Liu, J.; Tomlin, C.J. On the Powerball method for optimization. arXiv 2016, arXiv:1603.07421.
  23. Viola, J.; Chen, Y.Q. A fractional-order on-line self optimizing control framework and a benchmark control system accelerated using fractional-order stochasticity. Fractal Fract. 2022, 6, 549.
  24. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
  25. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  26. Xu, K.; Cheng, T.L.; Lopes, A.M.; Chen, L.P.; Zhu, X.X.; Wang, M.W. Fuzzy fractional-order PD vibration control of uncertain building structures. Fractal Fract. 2022, 6, 473.
  27. Lagunes, M.L.; Castillo, O.; Valdez, F.; Soria, J.; Melin, P. A new approach for dynamic stochastic fractal search with fuzzy logic for parameter adaptation. Fractal Fract. 2021, 5, 33.
  28. Auer, P.; Cesa-Bianchi, N.; Gentile, C. Adaptive and self-confident on-line learning algorithms. J. Comput. Syst. Sci. 2002, 64, 48–75.
  29. Prangprakhon, M.; Feesantia, T.; Nimana, N. An adaptive projection gradient method for solving nonlinear fractional programming. Fractal Fract. 2022, 6, 566.
  30. Bottou, L. Online learning and stochastic approximations. Online Learn. Neural Netw. 1998, 17, 142.
  31. Nguyen, L.M.; Nguyen, P.H.; Richtárik, P.; Scheinberg, K.; Takáč, M.; van Dijk, M. New convergence aspects of stochastic gradient algorithms. J. Mach. Learn. Res. 2019, 20, 1–49.
  32. Yan, Y.; Yang, T.; Li, Z.; Lin, Q.; Yang, Y. A unified analysis of stochastic momentum methods for deep learning. arXiv 2018, arXiv:1808.10396.
  33. Xu, P.; Wang, T.; Gu, Q. Continuous and discrete-time accelerated stochastic mirror descent for strongly convex functions. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5488–5497.
  34. Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer Science & Business Media: Berlin, Germany, 2013; Volume 87.
  35. Zou, F.; Shen, L. On the convergence of Adagrad with momentum for training deep neural networks. arXiv 2018, arXiv:1808.03408.
  36. Yang, T.; Lin, Q.; Li, Z. Unified convergence analysis of stochastic momentum methods for convex and non-convex optimization. arXiv 2016, arXiv:1604.03257.
  37. Facchinei, F.; Scutari, G.; Sagratella, S. Parallel selective algorithms for nonconvex big data optimization. IEEE Trans. Signal Process. 2015, 63, 1874–1889.
  38. Ward, R.; Wu, X.; Bottou, L. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6677–6686.
Figure 1. The convergence paths of the algorithms SGDM, Adam, and Adagrad and the proposed methods AdaSGD and AdaSGDM when $m=60$, $n=10$, and the objective functions are smooth convex functions. Left: the objective function $f_1(x)$; Right: the objective function $f_2(x)$.
Figure 2. The convergence paths of the algorithms SGDM, Adam, and Adagrad and the proposed methods AdaSGD and AdaSGDM when $m=10{,}000$, $n=200$, and the objective functions are smooth convex functions. Left: the objective function $f_1(x)$; Right: the objective function $f_2(x)$.
Figure 3. The convergence paths of the algorithms SGDM, Adam, and Adagrad and the proposed methods AdaSGD and AdaSGDM when $m=60$, $n=10$, and the objective functions are smooth non-convex functions. Left: the objective function $f_3(x)$; Right: the objective function $f_4(x)$.
Figure 4. The convergence paths of the algorithms SGDM, Adam, and Adagrad and the proposed methods AdaSGD and AdaSGDM when $m=10{,}000$, $n=200$, and the objective functions are smooth non-convex functions. Left: the objective function $f_3(x)$; Right: the objective function $f_4(x)$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

