Article

A Generalized Alternating Linearization Bundle Method for Structured Convex Optimization with Inexact First-Order Oracles

College of Mathematics and Information Science, Guangxi University, Nanning 540004, China
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(4), 91; https://doi.org/10.3390/a13040091
Submission received: 21 January 2020 / Revised: 9 April 2020 / Accepted: 9 April 2020 / Published: 14 April 2020
(This article belongs to the Special Issue Optimization Algorithms and Applications)

Abstract
In this paper, we consider a class of structured optimization problems whose objective function is the sum of two convex functions, f and h, neither of which is necessarily differentiable. We focus on the case where the function f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while the function h is relatively simple. We propose a generalized alternating linearization bundle method for solving this class of problems, which can handle inexact first-order information with on-demand accuracy. The inexactness can be very general, covering various oracles, such as inexact, partially inexact and asymptotically exact oracles. At each iteration, the algorithm solves two interrelated subproblems: one finds the proximal point of the polyhedral model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is encouraging.

1. Introduction

In this paper, we consider the following structured convex optimization problem
$$F^* := \min_{x \in \mathbb{R}^n} F(x) := f(x) + h(x), \qquad (1)$$
where $f : \operatorname{dom} h \to \mathbb{R}$ and $h : \mathbb{R}^n \to (-\infty, +\infty]$ are closed proper convex functions, not necessarily differentiable, and $\operatorname{dom} h := \{ x : h(x) < +\infty \}$ is the effective domain of h. Problems of this type frequently arise in practice, for example in compressed sensing [1], image reconstruction [2], machine learning [3], optimal control [4] and power systems [5,6,7,8]. The following are three interesting examples.
Example 1.
($\ell_1$ minimization in compressed sensing). The signal recovery problem in compressed sensing [1] usually takes the following form
$$\min_{x \in \mathbb{R}^n} \frac{1}{2} \| A x - b \|^2 + \lambda \| x \|_1, \qquad (2)$$
where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $\lambda > 0$ and $\| x \|_1 := \sum_{i=1}^n | x_i |$. The aim is to obtain a sparse solution x of the linear system $A x = b$. Note that by defining $f(x) = \frac{1}{2} \| A x - b \|^2$ and $h(x) = \lambda \| x \|_1$, (2) is of the form of (1).
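Because $h(x) = \lambda \| x \|_1$ is simple — its proximal operator is componentwise soft-thresholding — problem (2) is a standard target for splitting methods. The following minimal proximal-gradient (ISTA) sketch illustrates this structure; the matrix A, vector b and step size in the usage note are illustrative assumptions, not data from this paper:

```python
def soft_threshold(z, tau):
    # Proximal operator of tau*||.||_1, applied componentwise.
    return [(abs(zi) - tau) * (1.0 if zi >= 0 else -1.0) if abs(zi) > tau else 0.0
            for zi in z]

def ista(A, b, lam, step, iters=200):
    """Proximal gradient for min 0.5*||Ax - b||^2 + lam*||x||_1.
    Convergence requires step <= 1/||A^T A||; both inputs are assumptions here."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]  # Ax - b
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T(Ax - b)
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

For instance, with A the 2 × 2 identity, b = [1.0, 0.05] and λ = 0.1, the minimizer is the soft-thresholded b, namely [0.9, 0.0]: the small component is driven exactly to zero, which is the sparsity effect mentioned above.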
Example 2.
(Regularized risk minimization). At the core of many machine learning problems is the minimization of a regularized risk function [9,10]:
$$\min_{x \in \mathbb{R}^n} R_{\mathrm{emp}}(x) + \lambda \Omega(x), \qquad (3)$$
where $R_{\mathrm{emp}}(x) := \frac{1}{m} \sum_{i=1}^m l(u_i, v_i, x)$ is the empirical risk, $\{ (u_i, v_i), i = 1, \ldots, m \}$ is a training set, and l is a convex loss function measuring the gap between $v_i$ and the predicted value generated by using x. In general, $R_{\mathrm{emp}}(x)$ is a nondifferentiable and computationally expensive convex function, whereas the regularization term $\Omega(x)$ is a simple convex function, say $\Omega(x) = \frac{1}{2} \| x \|_2^2$ or $\Omega(x) = \| x \|_1$. By defining $f(x) = R_{\mathrm{emp}}(x)$ and $h(x) = \lambda \Omega(x)$, (3) is also of the form of (1).
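To make Example 2 concrete, consider the hinge loss $l(u, v, x) = \max(0, 1 - v \langle u, x \rangle)$ (a standard, here purely illustrative, choice). A subgradient of $R_{\mathrm{emp}}$ is obtained by summing $-v_i u_i$ over the violated samples; this value/subgradient pair is exactly the kind of oracle output a bundle method consumes. A minimal sketch:

```python
def emp_risk_oracle(data, x):
    """Value and one subgradient of R_emp(x) = (1/m) * sum_i max(0, 1 - v_i*<u_i, x>)
    for the hinge loss; data is a list of (u_i, v_i) with u_i a list and v_i in {-1, +1}."""
    m = len(data)
    val = 0.0
    g = [0.0] * len(x)
    for u, v in data:
        margin = 1.0 - v * sum(ui * xi for ui, xi in zip(u, x))
        if margin > 0.0:               # sample is violated: the loss piece is active
            val += margin
            for j in range(len(x)):
                g[j] -= v * u[j]       # subgradient of the active affine piece
    return val / m, [gj / m for gj in g]
```

At a kink (margin exactly zero) the zero vector is a valid choice from the subdifferential, so the code simply skips such samples.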
Example 3.
(Unconstrained transformation of a constrained problem). Consider a constrained problem
$$\min \{ f(x) : x \in X \}, \qquad (4)$$
where f is a convex function and $X \subseteq \mathbb{R}^n$ is a convex set. By introducing the indicator function $\delta_X$ of X, that is, $\delta_X(x)$ equals 0 on X and $+\infty$ elsewhere, problem (4) can be written equivalently as
$$\min_{x \in \mathbb{R}^n} f(x) + \delta_X(x). \qquad (5)$$
Clearly, by setting $h(x) = \delta_X(x)$, (5) becomes the form of (1). We note that such a transformation can be very efficient in practice if the set X has some special structure [11,12].
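The practical payoff of this transformation is that the proximal operator of $h = \delta_X$ is simply the Euclidean projection onto X, which is cheap when X has special structure. A small sketch for box constraints (the set $X = [lo, hi]^n$ is an illustrative choice of structured set, not one used in the paper):

```python
def proj_box(x, lo, hi):
    # Euclidean projection onto X = [lo, hi]^n, i.e. the prox of the indicator delta_X.
    return [min(max(xi, lo), hi) for xi in x]

def projected_gradient_step(x, grad, t, lo, hi):
    # One proximal-gradient step for min f(x) + delta_X(x): gradient step, then project.
    return proj_box([xi - t * gi for xi, gi in zip(x, grad)], lo, hi)
```

The same pattern applies to any X with an inexpensive projection (simplices, balls, affine sets), which is what "special structure" buys in practice.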
The design of methods for solving problems of the form (1) has attracted the attention of many researchers. We mention here four classes of these methods—operator splitting methods [13,14,15], alternating direction methods of multipliers [5,16,17,18,19], alternating linearization methods [20,21], and alternating linearization bundle method [22]. They all fall within the well-known class of first-order black-box methods, that is, it is assumed that there is an oracle that can return the (approximate) function value and one arbitrary (approximate) subgradient at any given point. Regarding the above methods, we are concerned about the following three points:
  • Smoothness of one or both functions in the objective has been assumed for many of the methods.
  • Except for Reference [22], they all require the exact computation of the function values and (sub)gradients.
  • The alternating linearization methods [20,21] essentially assume that both functions f and h are “simple” in the sense that minimizing the function plus a separable convex quadratic function is easy.
However, for some practical problems, the functions may be nondifferentiable (nonsmooth), not easy to handle, and computationally expensive in the sense that the exact first-order information may be impossible to calculate, or be computable but difficult to obtain. For example, if f has the form
$$f(x) := \sup \{ \phi_u(x) : u \in U \},$$
where U is an infinite set and $\phi_u : \mathbb{R}^n \to \mathbb{R}$ is convex for any $u \in U$, then it is often difficult to calculate the exact function value $f(x)$. But for any tolerance $\epsilon > 0$, we may usually find in finite time a lower approximation $f_x^\epsilon \le f(x)$ such that $f_x^\epsilon \in [f(x) - \epsilon, f(x)]$ and $f_x^\epsilon = \phi_{u_\epsilon}(x)$ for some $u_\epsilon \in U$. Then we can take a subgradient of $\phi_{u_\epsilon}$ at x as an approximate subgradient of f at x. Another example is two-stage stochastic programming (see, e.g., References [23,24]), in which the function value is obtained by solving a series of linear programs (details are given in the section on numerical experiments); its accuracy is therefore determined by the tolerance of the linear programming solver. Some other practical examples, such as Lagrangian relaxation, chance-constrained programs and convex composite functions, can be found in Reference [25].
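The sup-type mechanism can be made concrete with a toy oracle that maximizes over a finite sample of U, returning a lower estimate $f_x \le f(x)$ together with a subgradient of the attaining $\phi_u$ (hence an approximate subgradient of f). The particular choice $\phi_u(x) = u x - u^2/2$ with $U = [0, 1]$ is a hypothetical illustration, for which $f(x) = x^2/2$ on $[0, 1]$:

```python
def inexact_sup_oracle(x, n_samples=100):
    """Lower estimate of f(x) = sup_{u in [0,1]} (u*x - u^2/2) via a grid on U.
    Returns (f_x, g_x) with f_x <= f(x); g_x is the derivative of the attaining
    phi_u at x, hence an eps-subgradient of f for some grid-dependent eps."""
    best_val, best_u = float("-inf"), 0.0
    for i in range(n_samples + 1):
        u = i / n_samples
        val = u * x - 0.5 * u * u      # phi_u(x)
        if val > best_val:
            best_val, best_u = val, u
    return best_val, best_u            # d/dx phi_u(x) = u
```

Refining the grid drives the error to zero, which is exactly the mechanism behind the asymptotically exact oracles discussed in the next section.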
Based on the above observations, in this paper we focus on the case where the function f is general, possibly nonsmooth, and its exact function values and subgradients may be difficult to obtain, whereas the function h is assumed to be relatively simple. Our main goal is to provide an efficient method, namely the generalized alternating linearization bundle method, for this kind of structured convex optimization problem. The basic tool we use to handle nonsmoothness and inexactness is the bundle technique, since in the nonsmooth optimization community bundle methods [26,27,28,29] are regarded as among the most robust and reliable methods, and their variants have been well studied for handling inexact oracles [23,25,30,31,32,33].
Roughly speaking, our method generalizes the alternating linearization bundle method of Kiwiel [22] from exact and inexact oracles to various oracles, including exact, inexact, partially inexact, asymptotically exact and partially asymptotically exact oracles. These oracles are covered by the so-called on-demand accuracy oracles proposed by de Oliveira and Sagastizábal [23], whose accuracy is controlled by two parameters: a descent target and an error bound. More precisely, the error is restricted by the error bound whenever the function estimate reaches a certain descent target. The main advantage of oracles with on-demand accuracy is that the function and subgradient estimates can be rough, without accuracy control, at some "non-critical" iterates, so computational effort can be saved.
At each iteration, the proposed algorithm alternately solves two interrelated subproblems: one finds the proximal point of the polyhedral model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is encouraging.
This paper is organized as follows. In Section 2, we recall the condition on the inexact first-order oracles. In Section 3, we present the generalized alternating linearization bundle method for structured convex optimization with inexact first-order oracles and show some properties of the algorithm. In Section 4, we establish global convergence of the algorithm under different types of inexactness. In Section 5, we provide some numerical experiments on two-stage stochastic linear programming problems. The notation is standard: the Euclidean inner product in $\mathbb{R}^n$ is denoted by $\langle x, y \rangle := x^T y$, and the associated norm by $\| \cdot \|$.

2. Preliminaries

For a given constant $\epsilon \ge 0$, the $\epsilon$-subdifferential of a function f at x is defined by (see Reference [34])
$$\partial_\epsilon f(x) := \{ g \in \mathbb{R}^n : f(y) \ge f(x) + \langle g, y - x \rangle - \epsilon, \ \forall y \in \mathbb{R}^n \},$$
with $\partial f(x) := \partial_0 f(x)$ being the usual subdifferential in convex analysis [35]. Each element $g \in \partial f(x)$ is called a subgradient. For simplicity, we use the following notation:
$f_x$: the approximate value of f at x, that is, $f_x \approx f(x)$;
$g_x$: an approximate subgradient of f at x, that is, $g_x \approx g(x) \in \partial f(x)$;
$F_x$: the approximate value of F at x, that is, $F_x := f_x + h(x)$.
Aiming at the special structure of problem (1), we present a slight variant of the oracles with on-demand accuracy proposed in Reference [23] as follows: for a given $x \in \mathbb{R}^n$, a descent target $\gamma_x$ and an error bound $\varepsilon_x \ge 0$, the approximate values $f_x$, $g_x$ and $F_x$ satisfy the condition
$$f_x = f(x) - \eta(\gamma_x) \ \text{with unknown} \ \eta(\gamma_x) \ge 0, \quad g_x \in \partial_{\eta(\gamma_x)} f(x), \quad \text{and whenever} \ F_x \le \gamma_x \ (\textit{descent target reached}), \ \text{the relation} \ \eta(\gamma_x) \le \varepsilon_x \ \text{holds}. \qquad (6)$$
From the relations in (6), we see that although the error $\eta(\gamma_x)$ is unknown, it must lie within the error bound $\varepsilon_x$ whenever the descent target $F_x \le \gamma_x$ is reached. This ensures that the exact and inexact function values satisfy
$$f_x \in [f(x) - \varepsilon_x, f(x)] \quad \text{and} \quad f(x) \in [f_x, f_x + \varepsilon_x], \quad \text{whenever} \ F_x \le \gamma_x. \qquad (7)$$
The advantages of oracle (6) are as follows: (1) if the descent target is not reached, the oracle information can be computed roughly, without accuracy control, which can reduce the computational cost; (2) by properly choosing the parameters $\gamma_x$ and $\varepsilon_x$, oracle (6) covers various existing oracles:
  • Exact (Ex) [12,21]: set $\gamma_x = +\infty$ and $\varepsilon_x = 0$;
  • Partially Inexact (PI) [24]: set $\gamma_x < +\infty$ and $\varepsilon_x = 0$;
  • Inexact (IE) [11,25,32,36,37]: set $\gamma_x = +\infty$ and $\varepsilon_x \equiv \varepsilon > 0$ (possibly unknown);
  • Asymptotically Exact (AE) [38,39]: set $\gamma_x = +\infty$ and $\varepsilon_x \to 0$ along the iterative process;
  • Partially Asymptotically Exact (PAE) [23]: set $\gamma_x < +\infty$ and $\varepsilon_x \to 0$.
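The control logic of condition (6) can be sketched as a wrapper that re-evaluates with a tighter tolerance only while the descent target is reached but the error bound is not yet certified. Everything below is an illustrative assumption, not an interface from the paper: `estimate(x, tol)` stands for a user routine returning (f_x, g_x, eta) with f_x = f(x) − eta and 0 ≤ eta ≤ tol:

```python
def on_demand_oracle(estimate, x, h_x, target, err_bound, max_refine=50):
    """Return (f_x, g_x) satisfying the on-demand accuracy condition (6):
    if F_x = f_x + h(x) reaches the descent target, the (unknown) error eta
    is certified to be at most err_bound; otherwise a rough answer suffices."""
    tol = max(err_bound, 1.0)            # start with a loose tolerance
    f_x, g_x, eta = estimate(x, tol)
    for _ in range(max_refine):
        if f_x + h_x > target:           # target missed: rough information is acceptable
            return f_x, g_x
        if eta <= err_bound:             # target reached and error bound certified
            return f_x, g_x
        tol = max(err_bound, tol / 2.0)  # tighten and re-evaluate
        f_x, g_x, eta = estimate(x, tol)
    return f_x, g_x
```

Setting target = +∞ with a fixed err_bound = ε mimics the IE oracle, while a finite target with err_bound = 0 mimics the PI oracle; the saving comes from the early return when the target is missed.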

3. The Generalized Alternating Linearization Bundle Method

In this section, we present our generalized alternating linearization bundle method with inexact first-order oracles for solving (1).
Let k be the current iteration index, let $x_j$, $j \in J_k \subseteq \{ 1, \ldots, k \}$, be points generated at previous iterations, and let the corresponding approximate values $f_{x_j}/g_{x_j}$ be produced by oracle (6). For notational convenience, we denote
$$f_j := f_{x_j}, \quad g_j := g_{x_j}, \quad F_j := F_{x_j}, \quad \varepsilon_j := \varepsilon_{x_j}, \quad \gamma_j := \gamma_{x_j}.$$
The approximate linearizations of f at the points $x_j$ are given by
$$f_j(\cdot) := f_{x_j} + \langle g_{x_j}, \cdot - x_j \rangle, \quad j \in J_k.$$
From the second relation in (6), we have
$$f(\cdot) \ge f(x_j) + \langle g_{x_j}, \cdot - x_j \rangle - \eta(\gamma_{x_j}) = f_j(\cdot),$$
which implies that $f_j$ is a lower approximation of f. It is then natural to define the polyhedral inexact cutting-plane model of f by
$$\check{f}_k(\cdot) := \max_{j \in J_k} f_j(\cdot), \qquad (8)$$
which is obviously a lower polyhedral model of f, that is, $\check{f}_k(\cdot) \le f(\cdot)$.
Let $\hat{x}_k$ (called the stability center) be the "best" point obtained so far, which satisfies $\hat{x}_k = x_{k(l)}$ for some $k(l) \le k$. Frequently, it holds that $f_{x_{k(l)}} = \min_{j = 1, \ldots, k} f_{x_j}$. Thus, from (7), we have
$$f(\hat{x}_k) \in [f_{\hat{x}_k}, f_{\hat{x}_k} + \varepsilon_{x_{k(l)}}], \quad \text{whenever} \ F_{\hat{x}_k} \le \gamma_{\hat{x}_k}. \qquad (9)$$
By applying the bundle idea to the "complex" function f, and keeping the simple function h unchanged, similarly to traditional proximal bundle methods (see, e.g., Reference [28]), we could solve the following subproblem to obtain a new iterate $x_{k+1}$:
$$x_{k+1} := \arg\min \; \check{f}_k(\cdot) + h(\cdot) + \frac{1}{2 t_k} \| \cdot - \hat{x}_k \|^2, \qquad (10)$$
where $t_k > 0$ is a proximal parameter. However, subproblem (10) is generally not easy to solve, so, making use of the alternating linearization idea of Kiwiel [22], we solve two easier subproblems instead of (10). These two subproblems are interrelated: one finds the proximal point of the polyhedral model $\check{f}_k$ plus the linearization of h, aiming to generate an aggregate linear model of f for use in the second subproblem; the other finds the proximal point of the aggregate linear model of f plus h, aiming to obtain a new trial point.
Now we are ready to present the details of our algorithm, which generalizes the work of Kiwiel [22]. We note that the choice of the model function $\check{f}_k$ in the algorithm may differ from the form of (8), since the subgradient aggregation strategy [40] is used to compress the bundle. The algorithm generates three sequences of iterates: $\{ y_k \}$, the intermediate points, at which the aggregate linear models of f are generated; $\{ x_k \}$, the trial points; and $\{ \hat{x}_k \}$, the stability centers.
We make some comments about Algorithm 1 as follows.
Algorithm 1 Generalized alternating linearization bundle method
Step 0 (Initialization). Select an initial point $x_1 \in \mathbb{R}^n$, constants $\kappa \in (0, 1)$, $t_{\min} > 0$, and an initial stepsize $t_1 \ge t_{\min}$. Call oracle (6) at $x_1$ to compute the approximate values $f_{x_1}$ and $g_{x_1}$. Choose an initial error bound $\varepsilon_{x_1} \ge 0$ and a descent target $\gamma_{x_1} = +\infty$. Set $\hat{x}_1 := x_1$, $f_{\hat{x}_1} := f_{x_1}$, $F_{\hat{x}_1} := f_{\hat{x}_1} + h(\hat{x}_1)$, $\bar{f}_0 := f_1$, and $\bar{h}_0(\cdot) := h(x_1) + \langle p_h^0, \cdot - x_1 \rangle$ with $p_h^0 \in \partial h(x_1)$. Let $i_t^1 := 0$, $l := 1$, $k(l) := 1$ and $k := 1$.
Step 1 (Model selection). Choose a closed convex model $\check{f}_k : \mathbb{R}^n \to \mathbb{R}$ such that
$$\max \{ \bar{f}_{k-1}, f_k \} \le \check{f}_k \le f.$$

Step 2 (Solve the f-subproblem). Set
$$y_{k+1} := \arg\min \; \phi_f^k(\cdot) := \check{f}_k(\cdot) + \bar{h}_{k-1}(\cdot) + \frac{1}{2 t_k} \| \cdot - \hat{x}_k \|^2, \qquad (11)$$
$$\bar{f}_k(\cdot) := \check{f}_k(y_{k+1}) + \langle p_f^k, \cdot - y_{k+1} \rangle \quad \text{with} \quad p_f^k := \frac{1}{t_k} (\hat{x}_k - y_{k+1}) - p_h^{k-1}. \qquad (12)$$

Step 3 (Solve the h-subproblem). Set
$$x_{k+1} := \arg\min \; \phi_h^k(\cdot) := \bar{f}_k(\cdot) + h(\cdot) + \frac{1}{2 t_k} \| \cdot - \hat{x}_k \|^2, \qquad (13)$$
$$\bar{h}_k(\cdot) := h(x_{k+1}) + \langle p_h^k, \cdot - x_{k+1} \rangle \quad \text{with} \quad p_h^k := \frac{1}{t_k} (\hat{x}_k - x_{k+1}) - p_f^k. \qquad (14)$$

Step 4 (Stopping criterion). Compute
$$v_k := F_{\hat{x}_k} - [ \bar{f}_k(x_{k+1}) + h(x_{k+1}) ], \quad p_k := \frac{1}{t_k} (\hat{x}_k - x_{k+1}), \quad \epsilon_k := v_k - t_k \| p_k \|^2. \qquad (15)$$

If $\max \{ \| p_k \|, | \epsilon_k | \} = 0$, stop.
Step 5 (Noise attenuation). If $v_k < -\epsilon_k$, set $t_k := 10 t_k$, $i_t^k := k$, and go back to Step 2.
Step 6 (Call oracle). Select a new error bound $\varepsilon_{x_{k+1}} \ge 0$ and a new descent target $\gamma_{x_{k+1}} \in \mathbb{R} \cup \{ +\infty \}$. Call oracle (6) to compute $f_{x_{k+1}}$ and $g_{x_{k+1}}$.
Step 7 (Descent test). If the descent condition
$$F_{x_{k+1}} \le F_{\hat{x}_k} - \kappa v_k \qquad (16)$$
holds, set $\hat{x}_{k+1} := x_{k+1}$, $F_{\hat{x}_{k+1}} := F_{x_{k+1}}$, $i_t^{k+1} := 0$, $k(l+1) := k+1$, and $l := l+1$ (descent step); otherwise, set $\hat{x}_{k+1} := \hat{x}_k$, $F_{\hat{x}_{k+1}} := F_{\hat{x}_k}$, $i_t^{k+1} := i_t^k$, and $k(l+1) := k(l)$ (null step).
Step 8 (Stepsize updating). For a descent step, select $t_{k+1} \ge t_k$. For a null step, either set $t_{k+1} := t_k$, or choose $t_{k+1} \in [t_{\min}, t_k]$ if $i_t^{k+1} = 0$.
Step 9 (Loop). Let k : = k + 1 , and go to Step 1.
Remark 1.
(i) Theoretically, the model function $\check{f}_k$ can take the simplest form $\max \{ \bar{f}_{k-1}, f_k \}$, but to improve numerical stability it may additionally include some active linearizations.
(ii) Alternately solving subproblems (11) and (13) can be regarded as applying the proximal alternating linearization method (e.g., Reference [21]) to the function $\check{f}_k + h$.
(iii) If $\check{f}_k$ is a polyhedral function, then subproblem (11) is equivalent to a convex quadratic program and thus can be solved efficiently. In addition, if h is simple, subproblem (13) can also be solved easily, and may even have a closed-form solution (say, for $h(x) = \frac{1}{2} \| x \|^2$).
(iv) The role of Step 5 is to reduce the impact of inexactness. The algorithm loops between Steps 2 and 5, increasing the proximal parameter $t_k$, until $v_k \ge -\epsilon_k$.
(v) The stability center, descent target and error bound remain unchanged in the loop between Steps 2 and 5.
(vi) To establish global convergence of the algorithm, the descent target and error bound at Step 6 should be suitably updated; detailed rules are presented in the next section.
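As an illustration of Remark 1(iii), take $h(x) = \frac{1}{2} \| x \|^2$. Since $\bar{f}_k$ is affine with slope $p_f^k$, subproblem (13) reduces to minimizing $\langle p_f^k, x \rangle + \frac{1}{2} \| x \|^2 + \frac{1}{2 t_k} \| x - \hat{x}_k \|^2$ (affine constants do not affect the argmin), and setting the gradient to zero gives the closed form $x_{k+1} = (\hat{x}_k - t_k p_f^k)/(1 + t_k)$. A quick sketch (the variable names are ours, not the paper's):

```python
def solve_h_subproblem(x_hat, p_f, t):
    """Closed-form minimizer of  <p_f, x> + 0.5*||x||^2 + (1/(2t))*||x - x_hat||^2,
    i.e. subproblem (13) when h(x) = 0.5*||x||^2 and f_bar is affine with slope p_f.
    Stationarity: p_f + x + (x - x_hat)/t = 0  =>  x = (x_hat - t*p_f)/(1 + t)."""
    return [(xh - t * pf) / (1.0 + t) for xh, pf in zip(x_hat, p_f)]
```

One can verify optimality by checking that the gradient $p_f + x + (x - \hat{x})/t$ vanishes at the returned point.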
The following lemma summarizes some fundamental properties of Algorithm 1, whose proof is a slight modification of that in [22], Lemma 2.2.
Lemma 1.
(i) The vectors $p_f^k$ and $p_h^k$ of (12) and (14) satisfy
$$p_f^k \in \partial \check{f}_k(y_{k+1}) \quad \text{and} \quad p_h^k \in \partial h(x_{k+1}). \qquad (17)$$
The linearizations $\bar{f}_k$, $\bar{h}_k$, $\bar{F}_k$ satisfy the inequalities
$$\bar{f}_k \le \check{f}_k, \quad \bar{h}_k \le h \quad \text{and} \quad \bar{F}_k := \bar{f}_k + \bar{h}_k \le F. \qquad (18)$$
(ii) The aggregate subgradient $p_k$ defined in (15) and the above linearization $\bar{F}_k$ can be expressed as
$$p_k = p_f^k + p_h^k = \frac{1}{t_k} (\hat{x}_k - x_{k+1}), \qquad (19)$$
$$\bar{F}_k(\cdot) = \bar{F}_k(x_{k+1}) + \langle p_k, \cdot - x_{k+1} \rangle. \qquad (20)$$
(iii) The predicted descent $v_k$ and the aggregate linearization error $\epsilon_k$ of (15) satisfy
$$v_k = t_k \| p_k \|^2 + \epsilon_k \quad \text{and} \quad \epsilon_k = F_{\hat{x}_k} - \bar{F}_k(\hat{x}_k).$$
(iv) The aggregate linearization $\bar{F}_k$ can also be expressed as
$$F_{\hat{x}_k} - \epsilon_k + \langle p_k, \cdot - \hat{x}_k \rangle = \bar{F}_k(\cdot) \le F(\cdot). \qquad (21)$$
(v) The optimality measure
$$V_k := \max \{ \| p_k \|, \epsilon_k + \langle p_k, \hat{x}_k \rangle \} \qquad (22)$$
satisfies
$$V_k \le \max \{ \| p_k \|, | \epsilon_k | \} (1 + \| \hat{x}_k \|) \qquad (23)$$
and
$$F_{\hat{x}_k} \le F(x) + V_k (1 + \| x \|), \quad \forall x. \qquad (24)$$
(vi) We have the relations
$$v_k \ge \frac{t_k \| p_k \|^2}{2} \iff \epsilon_k \ge -\frac{t_k \| p_k \|^2}{2} \iff v_k \ge -\epsilon_k. \qquad (25)$$
Moreover, if $F_{\hat{x}_k} \le \gamma_{\hat{x}_k}$, then we have $\epsilon_k \ge -\varepsilon_{x_{k(l)}}$ and
$$v_k \ge \max \left\{ \frac{t_k \| p_k \|^2}{2}, | \epsilon_k | \right\} \quad \text{if} \ v_k \ge -\epsilon_k, \qquad (26)$$
$$V_k \le \max \left\{ \left( \frac{2 v_k}{t_k} \right)^{1/2}, v_k \right\} (1 + \| \hat{x}_k \|) \quad \text{if} \ v_k \ge -\epsilon_k, \qquad (27)$$
$$V_k < \left( \frac{2 \varepsilon_{x_{k(l)}}}{t_k} \right)^{1/2} (1 + \| \hat{x}_k \|) \quad \text{if} \ v_k < -\epsilon_k. \qquad (28)$$
Proof. 
(i) From the optimality condition of subproblem (11), we obtain
$$0 \in \partial \phi_f^k(y_{k+1}) = \partial \check{f}_k(y_{k+1}) + p_h^{k-1} + \frac{1}{t_k} (y_{k+1} - \hat{x}_k) = \partial \check{f}_k(y_{k+1}) - p_f^k,$$
which implies $p_f^k \in \partial \check{f}_k(y_{k+1})$. In addition, the fact that $\bar{f}_k(y_{k+1}) = \check{f}_k(y_{k+1})$ yields $\bar{f}_k \le \check{f}_k$. Similarly, by the optimality condition of subproblem (13), we have
$$0 \in \partial \phi_h^k(x_{k+1}) = p_f^k + \partial h(x_{k+1}) + \frac{1}{t_k} (x_{k+1} - \hat{x}_k) = \partial h(x_{k+1}) - p_h^k,$$
which shows $p_h^k \in \partial h(x_{k+1})$. Further, from $\bar{h}_k(x_{k+1}) = h(x_{k+1})$, we obtain $\bar{h}_k \le h$. Finally, it follows that
$$\bar{F}_k = \bar{f}_k + \bar{h}_k \le \check{f}_k + h \le F.$$
(ii) By (14), we obtain
$$p_f^k + p_h^k = p_f^k + \frac{1}{t_k} (\hat{x}_k - x_{k+1}) - p_f^k = \frac{1}{t_k} (\hat{x}_k - x_{k+1}) = p_k.$$
Utilizing the linearity of $\bar{F}_k(\cdot)$, (12) and (19), we derive
$$\bar{F}_k(\cdot) = \bar{f}_k(\cdot) + \bar{h}_k(\cdot) = \check{f}_k(y_{k+1}) + \langle p_f^k, \cdot - y_{k+1} \rangle + h(x_{k+1}) + \langle p_h^k, \cdot - x_{k+1} \rangle = \bar{f}_k(x_{k+1}) - \langle p_f^k, x_{k+1} - y_{k+1} \rangle + \langle p_f^k, \cdot - y_{k+1} \rangle + h(x_{k+1}) + \langle p_h^k, \cdot - x_{k+1} \rangle = \bar{f}_k(x_{k+1}) + \langle p_f^k, \cdot - x_{k+1} \rangle + h(x_{k+1}) + \langle p_h^k, \cdot - x_{k+1} \rangle = \bar{F}_k(x_{k+1}) + \langle p_k, \cdot - x_{k+1} \rangle.$$
(iii) We obtain $v_k = \epsilon_k + t_k \| p_k \|^2$ directly from (15). Combining (15) and (ii), we have
$$\epsilon_k = v_k - t_k \| p_k \|^2 = F_{\hat{x}_k} - [ \bar{f}_k(x_{k+1}) + h(x_{k+1}) ] - t_k \| p_k \|^2 = F_{\hat{x}_k} - \bar{F}_k(\hat{x}_k) + \langle p_k, \hat{x}_k - x_{k+1} \rangle - t_k \| p_k \|^2 = F_{\hat{x}_k} - \bar{F}_k(\hat{x}_k).$$
(iv) Since $\epsilon_k = v_k - t_k \| p_k \|^2 = F_{\hat{x}_k} - [ \bar{f}_k(x_{k+1}) + h(x_{k+1}) ] - t_k \| p_k \|^2$, the aggregate linearization $\bar{F}_k(\cdot)$ satisfies
$$F_{\hat{x}_k} - \epsilon_k + \langle p_k, \cdot - \hat{x}_k \rangle = \bar{F}_k(x_{k+1}) + \langle p_k, \cdot - x_{k+1} \rangle = \bar{F}_k(\cdot) \le F(\cdot).$$
(v) Using the Cauchy–Schwarz inequality in definition (22) gives
$$V_k = \max \{ \| p_k \|, \epsilon_k + \langle p_k, \hat{x}_k \rangle \} \le \max \{ \| p_k \|, | \epsilon_k | + \| p_k \| \| \hat{x}_k \| \} \le \max \{ \| p_k \|, | \epsilon_k | \} + \| p_k \| \| \hat{x}_k \| \le \max \{ \| p_k \|, | \epsilon_k | \} (1 + \| \hat{x}_k \|).$$
From (21), we have
$$F_{\hat{x}_k} \le F(x) + \epsilon_k - \langle p_k, x - \hat{x}_k \rangle = F(x) + \epsilon_k - \langle p_k, x \rangle + \langle p_k, \hat{x}_k \rangle \le F(x) + \| p_k \| \| x \| + \epsilon_k + \langle p_k, \hat{x}_k \rangle \le F(x) + \max \{ \| p_k \|, \epsilon_k + \langle p_k, \hat{x}_k \rangle \} (1 + \| x \|) = F(x) + V_k (1 + \| x \|), \quad \forall x.$$
(vi) By (iii), it is easy to obtain (25). Next, by (18), (20) and (9), we conclude that, if $F_{\hat{x}_k} \le \gamma_{\hat{x}_k}$,
$$-\epsilon_k = \bar{F}_k(\hat{x}_k) - F_{\hat{x}_k} \le F(\hat{x}_k) - F_{\hat{x}_k} = f(\hat{x}_k) - f_{\hat{x}_k} \le \varepsilon_{x_{k(l)}},$$
that is, $\epsilon_k \ge -\varepsilon_{x_{k(l)}}$. Relation (26) follows from the facts that $v_k \ge | \epsilon_k |$ and $v_k \ge t_k \| p_k \|^2 / 2$ when $v_k \ge -\epsilon_k$. Relation (27) follows from (23), $\| p_k \| \le (2 v_k / t_k)^{1/2}$ and $| \epsilon_k | \le v_k$. Finally, if $v_k < -\epsilon_k$, we obtain $\| p_k \|^2 < -2 \epsilon_k / t_k$, which together with $-\epsilon_k \le \varepsilon_{x_{k(l)}}$ shows that $\| p_k \| < ( 2 \varepsilon_{x_{k(l)}} / t_k )^{1/2}$, and therefore (28) holds. □
Remark 2.
Relation (17) shows that $p_f^k$ is a subgradient of the model function $\check{f}_k$ at $y_{k+1}$ and that $p_h^k$ is a subgradient of h at $x_{k+1}$. The quantity $V_k$ defined by (22) can be viewed as an optimality measure of the iterates, which will be proved to converge to zero in the next section. Relation (24) is also a test for optimality, in that $\hat{x}_k$ is an approximately optimal solution to problem (1) whenever $V_k$ is sufficiently small.

4. Global Convergence

This section establishes the global convergence of Algorithm 1 for various oracles. These oracles are controlled by two parameters: the error bound $\varepsilon_x$ and the descent target $\gamma_x$. In Table 1, we present the choices of these two parameters for the different types of instances described in Section 2, including the Exact (Ex), Partially Inexact (PI), Inexact (IE), Asymptotically Exact (AE) and Partially Asymptotically Exact (PAE) oracles, where the constants are selected as $\theta, \kappa \in (0, 1)$ and $\kappa_\epsilon \in (0, \kappa)$.
The following lemma is crucial to guarantee the global convergence of Algorithm 1.
Lemma 2.
The descent target is always reached at the stability centers, that is, $F_{\hat{x}_k} \le \gamma_{\hat{x}_k}$ for all $k \ge 1$.
Proof. 
For the instances Ex, IE and AE, since $\gamma_{x_k} = +\infty$, the claim holds immediately.
For the instances PI and PAE, we have $\gamma_{x_{k+1}} = F_{\hat{x}_k} - \theta \kappa v_k$. For $k = 1$, it follows from Step 0 that $\hat{x}_1 = x_1$, $f_{\hat{x}_1} = f_{x_1}$ and $\gamma_{\hat{x}_1} = \gamma_{x_1} = +\infty$. This implies $F_{\hat{x}_1} = f_{\hat{x}_1} + h(\hat{x}_1) \le \gamma_{\hat{x}_1}$. In addition, for $k \ge 2$, since $\theta \in (0, 1)$, once the descent test (16) is satisfied at iteration $k - 1$, we have
$$F_{\hat{x}_k} \le F_{\hat{x}_{k-1}} - \kappa v_{k-1} \le F_{\hat{x}_{k-1}} - \theta \kappa v_{k-1} = \gamma_{x_k} = \gamma_{\hat{x}_k}.$$
 □
The following lemma shows that an (approximate) optimal solution can be obtained whenever the algorithm terminates finitely or loops infinitely between Steps 2 and 5.
Lemma 3.
If Algorithm 1 either terminates at Step 4 at the kth iteration, or loops infinitely between Steps 2 and 5, then
(i) 
$\hat{x}_k$ is an optimal solution to problem (1) for the instances Ex and PI;
(ii) 
$\hat{x}_k$ is $\varepsilon$-optimal, that is, $F(\hat{x}_k) \le F^* + \varepsilon$, for the instance IE;
(iii) 
$\hat{x}_k$ is $\varepsilon_{x_{k(l)}}$-optimal, that is, $F(\hat{x}_k) \le F^* + \varepsilon_{x_{k(l)}}$, for the instances AE and PAE.
Proof. 
Firstly, suppose that Algorithm 1 terminates at Step 4 at iteration k. Then, from (23), we have $V_k = 0$. This together with (24) shows that
$$F_{\hat{x}_k} \le \inf \{ F(x) : x \in \mathbb{R}^n \} = F^*. \qquad (29)$$
Thus, from (7), we conclude that: $F(\hat{x}_k) = F_{\hat{x}_k} \le F^*$ for the instances Ex and PI; $F(\hat{x}_k) \le F_{\hat{x}_k} + \varepsilon \le F^* + \varepsilon$ for the instance IE; and $F(\hat{x}_k) \le F_{\hat{x}_k} + \varepsilon_{x_{k(l)}} \le F^* + \varepsilon_{x_{k(l)}}$ for the instances AE and PAE.
Secondly, suppose that Algorithm 1 loops infinitely between Steps 2 and 5. Then, from Lemma 2 and the condition at Step 5, it follows that (28) holds and $t_k \to \infty$. Thus, we obtain $V_k \to 0$. This along with (24) implies (29), and therefore the claims hold by repeating the corresponding arguments in the first case. □
From the above lemma, in what follows we may assume that Algorithm 1 neither terminates finitely nor loops infinitely between Steps 2 and 5. In addition, as in Reference [22], we assume that the model subgradients $p_f^k \in \partial \check{f}_k(y_{k+1})$ in (17) are bounded whenever $\{ y_k \}$ is bounded.
Exactly one of the following two cases occurs:
(i)
the algorithm generates finitely many descent steps;
(ii)
the algorithm generates infinitely many descent steps.
We first consider case (i), in which two subcases may occur: $t_\infty := \lim_{k \to \infty} t_k = \infty$ and $t_\infty < \infty$. The first subcase, $t_\infty = \infty$, is analyzed in the following lemma.
Lemma 4.
Suppose that Algorithm 1 generates finitely many descent steps, that is, there exists an index $\bar{k}$ such that only null steps occur for all $k \ge \bar{k}$, and that $t_\infty = \infty$. Denote $K := \{ k \ge \bar{k} : t_{k+1} > t_k \}$; then $V_k \to 0$ as $K \ni k \to \infty$.
Proof. 
Each time $t_k$ is increased at Step 5 with $k \in K$, the condition $v_k < -\epsilon_k$ holds, so (28) gives
$$V_k < \left( \frac{2 \varepsilon_{x_{\bar{k}}}}{t_k} \right)^{1/2} (1 + \| \hat{x}_{\bar{k}} \|),$$
which along with $t_k \to \infty$ proves the lemma. □
The following lemma analyzes the second subcase of t < .
Lemma 5.
Suppose that there exists $\bar{k}$ such that $\hat{x}_k = \hat{x}_{\bar{k}}$ and $t_{\min} \le t_{k+1} \le t_k$ for all $k \ge \bar{k}$. If the descent criterion (16) fails for all $k \ge \bar{k}$, then $V_k \to 0$.
Proof. 
In view of the facts that $t_{\min} \le t_{k+1} \le t_k$ and $\hat{x}_k = \hat{x}_{\bar{k}}$ for all $k \ge \bar{k}$, we know that only null steps occur and $t_k$ is not increased at Step 5. By Taylor's expansion, the Cauchy–Schwarz inequality, and the properties of subproblems (11) and (13), we can conclude that $v_k \to 0$, so the conclusion follows from (27). For more details, one can refer to [11], Lemma 3.2. □
By combining Lemmas 4 and 5, we have the following lemma.
Lemma 6.
Suppose that there exists $\bar{k}$ such that only null steps occur for all $k \ge \bar{k}$. Let $K := \{ k \ge \bar{k} : t_{k+1} > t_k \}$ if $t_k \to \infty$, and $K := \{ k : k \ge \bar{k} \}$ otherwise. Then $V_k \to 0$ as $K \ni k \to \infty$.
Now, we can present the main convergence result for the case where the algorithm generates finitely many descent steps.
Theorem 1.
Suppose that Algorithm 1 generates finitely many descent steps, and let $\hat{x}_{\bar{k}}$ be the last stability center. Then $\hat{x}_{\bar{k}}$ is an optimal solution to problem (1) for the instances Ex and PI; an $\varepsilon$-optimal solution for IE; and an $\varepsilon_{x_{\bar{k}(l)}}$-optimal solution for AE and PAE.
Proof. 
Under the stated assumption, we know that $\hat{x}_k = \hat{x}_{\bar{k}}$ and $f_{\hat{x}_k} = f_{\hat{x}_{\bar{k}}}$ for all $k \ge \bar{k}$. This together with (24) and Lemma 6 shows that
$$F_{\hat{x}_{\bar{k}}} \le \inf \{ F(x) : x \in \mathbb{R}^n \} = F^*.$$
Hence, similarly to the proof of Lemma 3, we obtain the results of the theorem. □
Next, we consider the second case where the algorithm generates infinitely many descent steps.
Lemma 7.
Suppose that Algorithm 1 generates infinitely many descent steps, and that $F_{\hat{x}} := \lim_{k \to \infty} F_{\hat{x}_k} > -\infty$. Let $K := \{ k : F_{\hat{x}_{k+1}} < F_{\hat{x}_k} \}$. Then $v_k \to 0$ as $K \ni k \to \infty$ and $\liminf_{K \ni k \to \infty} V_k = 0$. Moreover, if $\{ \hat{x}_k \}$ is bounded, then $V_k \to 0$ as $K \ni k \to \infty$.
Proof. 
From the descent test condition (16), we first obtain $v_k \to 0$ as $K \ni k \to \infty$; therefore $\epsilon_k, t_k \| p_k \|^2 \to 0$ on K from (26), and $p_k \to 0$ on K from the fact that $t_k \ge t_{\min}$. It can then be shown that $\liminf_{K \ni k \to \infty} V_k = 0$ from the definition of $V_k$. Moreover, under the condition that $\{ \hat{x}_k \}$ is bounded, we have $V_k \to 0$ on K by (v) of Lemma 1. For more details, one can refer to [11], Lemma 3.4. □
Finally, we present the convergence results for the second case.
Theorem 2.
Suppose that Algorithm 1 generates infinitely many descent steps, that $F_{\hat{x}} > -\infty$, and that the index set K is defined as in Lemma 7. Then
(i) 
$F^* \le \liminf_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le F_{\hat{x}} + \varepsilon$ for the instance IE in Table 1;
(ii) 
$F^* \le \liminf_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le F_{\hat{x}}$ for the remaining instances in Table 1;
(iii) 
$\liminf_{K \ni k \to \infty} V_k = 0$ and $F_{\hat{x}_k} \downarrow F_{\hat{x}} \le F^*$.
Proof. 
It is obvious that
$$F^* \le \liminf_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}). \qquad (30)$$
(i) For the instance IE, we have $\varepsilon_{\hat{x}_{k+1}} = \varepsilon$ and $F_{\hat{x}_{k+1}} \le \gamma_{\hat{x}_{k+1}} = +\infty$ for all $k \in K$. Then, from (7), we have $F(\hat{x}_{k+1}) \le F_{\hat{x}_{k+1}} + \varepsilon$, $k \in K$, which implies
$$\limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \lim_{K \ni k \to \infty} F_{\hat{x}_{k+1}} + \varepsilon = F_{\hat{x}} + \varepsilon.$$
This along with (30) shows part (i).
(ii) Next, the other four instances in Table 1 are considered separately.
For the instance Ex, we have $\varepsilon_{\hat{x}_{k+1}} = 0$, $F_{\hat{x}_{k+1}} \le \gamma_{\hat{x}_{k+1}} = +\infty$ and $F(\hat{x}_{k+1}) = F_{\hat{x}_{k+1}}$. This implies
$$\limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) = \lim_{K \ni k \to \infty} F_{\hat{x}_{k+1}} = F_{\hat{x}}.$$
For the instance PI, we have $\varepsilon_{\hat{x}_{k+1}} = 0$ and $\gamma_{\hat{x}_{k+1}} = F_{\hat{x}_k} - \theta \kappa v_k$ for all $k \in K$. Thus, we obtain
$$F_{\hat{x}_{k+1}} \le F_{\hat{x}_k} - \kappa v_k \le F_{\hat{x}_k} - \theta \kappa v_k = \gamma_{\hat{x}_{k+1}}.$$
This implies $F(\hat{x}_{k+1}) = F_{\hat{x}_{k+1}}$, and therefore
$$\limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) = \lim_{K \ni k \to \infty} F_{\hat{x}_{k+1}} = F_{\hat{x}}.$$
For the instance AE, we have $\varepsilon_{\hat{x}_{k+1}} = \kappa_\epsilon v_k$ and $F_{\hat{x}_{k+1}} \le \gamma_{\hat{x}_{k+1}} = +\infty$ for all $k \in K$, which implies
$$F(\hat{x}_{k+1}) \le F_{\hat{x}_{k+1}} + \varepsilon_{\hat{x}_{k+1}} \le F_{\hat{x}_{k+1}} + \kappa_\epsilon v_k.$$
This along with Lemma 7 ($v_k \to 0$ as $K \ni k \to \infty$) shows that
$$\limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \lim_{K \ni k \to \infty} F_{\hat{x}_{k+1}} = F_{\hat{x}}.$$
For the instance PAE, we have $\varepsilon_{\hat{x}_{k+1}} = \kappa_\epsilon v_k$ and $\gamma_{\hat{x}_{k+1}} = F_{\hat{x}_k} - \theta \kappa v_k$ for all $k \in K$. Then, it follows that
$$F_{\hat{x}_{k+1}} \le F_{\hat{x}_k} - \kappa v_k \le F_{\hat{x}_k} - \theta \kappa v_k = \gamma_{\hat{x}_{k+1}},$$
which implies
$$F(\hat{x}_{k+1}) \le F_{\hat{x}_{k+1}} + \kappa_\epsilon v_k.$$
Again from Lemma 7, we obtain
$$\limsup_{K \ni k \to \infty} F(\hat{x}_{k+1}) \le \lim_{K \ni k \to \infty} F_{\hat{x}_{k+1}} = F_{\hat{x}}.$$
Summarizing the above analysis and noticing (30), we complete the proof of part (ii).
(iii) From Lemma 7, we know that $\liminf_{K \ni k \to \infty} V_k = 0$. This together with (24) shows part (iii). □

5. Numerical Experiments

In this section, we test the numerical efficiency of the proposed algorithm. In the fields of production and transportation, finance and insurance, the power industry, and telecommunications, decision makers usually need to solve problems with uncertain information. As an effective tool for such problems, stochastic programming (SP) has attracted increasing attention, both in theory and in practical instances; see, for example, References [41,42]. We consider a class of two-stage SP problems with fixed recourse, whose discretization of the uncertainty into N scenarios has the form (see, e.g., References [23,43])
$$\min \; f(x) := \langle c, x \rangle + \sum_{i=1}^N p_i V_i(x) \quad \text{s.t.} \ x \in X := \{ x \in \mathbb{R}_+^{n_1} : A x = b \}, \qquad (31)$$
where x is the first-stage decision variable, $c \in \mathbb{R}^{n_1}$, $A \in \mathbb{R}^{m_1 \times n_1}$, and $b \in \mathbb{R}^{m_1}$. In addition, the recourse function is
$$V_i(x) := \min_{\pi \in \mathbb{R}_+^{n_2}} \{ \langle q, \pi \rangle : W \pi = h_i - T_i x \},$$
corresponding to the ith scenario $(h_i, T_i)$, which occurs with probability $p_i > 0$, where $h_i \in \mathbb{R}^{m_2}$ and $T_i \in \mathbb{R}^{m_2 \times n_1}$. Here, π is the second-stage decision variable.
Clearly, by introducing the indicator function δ X , problem (31) can be written as the form of (5), and then becomes the form of (1) by setting h ( x ) = δ X ( x ) .
The above recourse function can be written in its dual form:
$$V_i(x) = \max_{y \in \mathbb{R}^{m_2}} \; \langle h_i - T_i x, y \rangle \quad \text{s.t.} \ W^T y \le q,$$
where $q \in \mathbb{R}^{n_2}$ and $W \in \mathbb{R}^{m_2 \times n_2}$. By solving these linear programs to a given tolerance, one can build an inexact oracle of the form (6); see Reference [23] for a more detailed description.
The instances of SP problems were downloaded from http://pwp.gatech.edu/guanghui-lan/computer-codes/.
Four instances are tested, namely SSN(50), SSN(100), 20-term(50) and 20-term(100), where the integer in brackets is the number of scenarios N. The SSN instances come from telecommunications and were studied by Sen, Doverspike, and Cosares [44]; the 20-term instances come from a motor freight carrier's problem and were studied by Mak, Morton, and Wood [45]. The dimensions of these instances are listed in Table 2.
The parameters are selected as $\kappa = 0.04$, $t_{\min} = 0.1$ and $t_1 = 1.1$. The maximum bundle size is set to 35. All tests were performed in MATLAB (R2014a) on a PC with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 4 GB RAM. The quadratic and linear programming subproblems were solved by the MOSEK 8 toolbox for MATLAB; see http://www.mosek.com.
We first compare our algorithm (denoted by GALBM) with the accelerated prox-level method (APL) in Reference [43], where the tolerances of the MOSEK linear programming solver are left at their defaults. The results are listed in Table 3, which compares the number of iterations (NI), the consumed CPU time in seconds (Time), and the returned minimum values (f). We use the MATLAB commands tic and toc to measure CPU time; for each instance, we run the algorithm 10 times and report the average time. From Table 3, we see that, when similar solution quality is achieved, GALBM significantly outperforms APL in terms of both the number of iterations and CPU time.
In what follows, we evaluate the impact of inexact oracles on GALBM. In more detail, we carry out two groups of tests. The first group adopts fixed tolerances, that is, $\varepsilon_x^{k+1} \equiv \varepsilon$; the corresponding results are reported in Tables 4–7. The second group adopts dynamic tolerances with a safeguard parameter $\mu > 0$, that is, $\varepsilon_x^{k+1} = \min\{\mu, \kappa_\epsilon v_k\}$ with $\kappa_\epsilon = 0.7$; the corresponding results are reported in Tables 8–11. The symbol "-" in the tables means that the number of iterations for the corresponding instance exceeded 500.
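The dynamic rule can be expressed as a one-line schedule. The helper below is an illustrative sketch of ours (the function name `tolerance_schedule` is hypothetical): as the predicted decrease $v_k$ shrinks near a solution, the oracle is asked for increasingly accurate information, while the safeguard $\mu$ caps the tolerance early on.

```python
def tolerance_schedule(mu, kappa_eps=0.7):
    """On-demand accuracy rule: eps_x^{k+1} = min{mu, kappa_eps * v_k}.

    mu > 0 is the safeguard parameter and v_k is the predicted decrease
    at iteration k.
    """
    def next_eps(v_k):
        return min(mu, kappa_eps * v_k)
    return next_eps
```

With $\mu = 10^{-3}$, early iterations (large $v_k$) use the safeguard tolerance $10^{-3}$, while later iterations tighten it to $0.7\, v_k$.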

6. Conclusions

In this paper, we have proposed a generalized alternating linearization bundle method for solving structured convex optimization problems with inexact first-order oracles. Our method handles various types of inexact data by making use of so-called on-demand accuracy oracles. At each iteration, two interrelated subproblems are solved alternately, aiming to reduce the computational cost. Global convergence of the algorithm is established under different types of inexactness, and numerical results show that the proposed algorithm is promising.

Author Contributions

C.T. mainly contributed to the algorithm design and convergence analysis; Y.L. and X.D. mainly contributed to the convergence analysis and numerical results; and B.H. mainly contributed to the numerical results. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation (11761013) and Guangxi Natural Science Foundation (2018GXNSFFA281007) of China.

Acknowledgments

The authors gratefully acknowledge the above foundations for their financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inform. Theory 2006, 52, 1289–1306. [Google Scholar]
  2. Jung, M.; Kang, M. Efficient nonsmooth nonconvex optimization for image restoration and segmentation. J. Sci. Comput. 2015, 62, 336–370. [Google Scholar] [CrossRef]
  3. Sra, S.; Nowozin, S.; Wright, S.J. Optimization for Machine Learning; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  4. Clarke, F.H.; Ledyaev, Y.S.; Stern, R.J.; Wolenski, P.R. Nonsmooth Analysis and Control Theory; Springer: New York, NY, USA, 1998. [Google Scholar]
  5. Yang, L.F.; Luo, J.Y.; Jian, J.B.; Zhang, Z.R.; Dong, Z.Y. A distributed dual consensus ADMM based on partition for DC-DOPF with carbon emission trading. IEEE Trans. Ind. Inform. 2020, 16, 1858–1872. [Google Scholar] [CrossRef]
  6. Yang, L.F.; Zhang, C.; Jian, J.B.; Meng, K.; Xu, Y.; Dong, Z.Y. A novel projected two-binary-variable formulation for unit commitment in power systems. Appl. Energ. 2017, 187, 732–745. [Google Scholar] [CrossRef]
  7. Yang, L.F.; Jian, J.B.; Zhu, Y.N.; Dong, Z.Y. Tight relaxation method for unit commitment problem using reformulation and lift-and-project. IEEE Trans. Power. Syst. 2015, 30, 13–23. [Google Scholar] [CrossRef]
  8. Yang, L.F.; Jian, J.B.; Xu, Y.; Dong, Z.Y.; Ma, G.D. Multiple perspective-cuts outer approximation method for risk-averse operational planning of regional energy service providers. IEEE Trans. Ind. Inform. 2017, 13, 2606–2619. [Google Scholar] [CrossRef]
  9. Teo, C.H.; Vishwanathan, S.V.N.; Smola, A.J.; Le, Q.V. Bundle methods for regularized risk minimization. J. Mach. Learn. Res. 2010, 11, 311–365. [Google Scholar]
  10. Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization Methods for Machine Learning. SIAM Rev. 2018, 60, 223–311. [Google Scholar] [CrossRef]
  11. Kiwiel, K.C. A proximal-projection bundle method for Lagrangian relaxation, including semidefinite programming. SIAM J. Optim. 2006, 17, 1015–1034. [Google Scholar] [CrossRef]
  12. Tang, C.M.; Jian, J.B.; Li, G.Y. A proximal-projection partial bundle method for convex constrained minimax problems. J. Ind. Manag. Optim. 2019, 15, 757–774. [Google Scholar] [CrossRef] [Green Version]
  13. Tseng, P. Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 1991, 29, 119–138. [Google Scholar]
  14. Mahey, P.; Tao, P.D. Partial regularization of the sum of two maximal monotone operators. Math. Model. Numer. Anal. 1993, 27, 375–392. [Google Scholar] [CrossRef] [Green Version]
  15. Eckstein, J. Some saddle-function splitting methods for convex programming. Optim. Meth. Soft. 1994, 4, 75–83. [Google Scholar] [CrossRef]
  16. Fukushima, M. Application of the alternating direction method of multipliers to separable convex programming problems. Comput. Optim. Appl. 1992, 1, 93–111. [Google Scholar] [CrossRef]
  17. He, B.S.; Tao, M.; Yuan, X.M. Alternating direction method with Gaussian back substitution for separable convex programming. SIAM J. Optim. 2012, 22, 313–340. [Google Scholar] [CrossRef]
  18. He, B.S.; Tao, M.; Yuan, X.M. Convergence rate analysis for the alternating direction method of multipliers with a substitution procedure for separable convex programming. Math. Oper. Res. 2017, 42, 662–691. [Google Scholar] [CrossRef]
  19. Chao, M.; Cheng, C.; Zhang, H. A linearized alternating direction method of multipliers with substitution procedure. Asia-Pac. Opera. Res. 2015, 32, 1550011. [Google Scholar] [CrossRef]
  20. Goldfarb, D.; Ma, S.; Scheinberg, K. Fast alternating linearization methods for minimizing the sum of two convex functions. Math. Program. 2013, 141, 349–382. [Google Scholar] [CrossRef] [Green Version]
  21. Kiwiel, K.C.; Rosa, C.H.; Ruszczyński, A. Proximal decomposition via alternating linearization. SIAM J. Optim. 1999, 9, 668–689. [Google Scholar] [CrossRef]
  22. Kiwiel, K.C. An alternating linearization bundle method for convex optimization and nonlinear multicommodity flow problems. Math. Program. 2011, 130, 59–84. [Google Scholar] [CrossRef]
  23. De Oliveira, W.; Sagastizábal, C. Level bundle methods for oracles with on-demand accuracy. Optim. Method Softw. 2014, 29, 1180–1209. [Google Scholar] [CrossRef]
  24. Kiwiel, K.C. Bundle Methods for Convex Minimization with Partially Inexact Oracles; Technical Report; Systems Research Institute, Polish Academy of Sciences: Warsaw, Poland, 2009. [Google Scholar]
  25. De Oliveira, W.; Sagastizábal, C.; Lemaréchal, C. Convex proximal bundle methods in depth: A unified analysis for inexact oracles. Math. Program. 2014, 148, 241–277. [Google Scholar] [CrossRef]
  26. Wolfe, P. A method of conjugate subgradients for minimizing nondifferentiable functions. Math. Program. 1975, 3, 145–173. [Google Scholar]
  27. Mäkelä, M. Survey of bundle methods for nonsmooth optimization. Optim. Meth. Soft. 2002, 17, 1–29. [Google Scholar] [CrossRef]
  28. Bonnans, J.F.; Gilbert, J.C.; Lemaréchal, C.; Sagastizábal, C. Numerical Optimization: Theoretical and Practical Aspects, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  29. Lemaréchal, C. An extension of Davidon methods to nondifferentiable problems. Math. Program. 1975, 3, 95–109. [Google Scholar]
  30. Kiwiel, K.C. Approximations in proximal bundle methods and decomposition of convex programs. J. Optim. Theory. Appl. 1995, 84, 529–548. [Google Scholar] [CrossRef]
  31. Hintermuller, M. A proximal bundle method based on approximate subgradients. Comput. Optim. Appl. 2001, 20, 245–266. [Google Scholar] [CrossRef]
  32. Kiwiel, K.C. A proximal bundle method with approximate subgradient linearizations. SIAM J. Optim. 2006, 16, 1007–1023. [Google Scholar] [CrossRef] [Green Version]
  33. Kiwiel, K.C. An algorithm for nonsmooth convex minimization with errors. Math. Comput. 1985, 45, 173–180. [Google Scholar] [CrossRef]
  34. Hiriart-Urruty, J.B.; Lemaréchal, C. Convex Analysis and Minimization Algorithms. Number 305-306 in Grundlehren der mathematischen Wissenschaften; Springer: Berlin, Germany, 1993. [Google Scholar]
  35. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 2015. [Google Scholar]
  36. Malick, J.; De Oliveira, W.; Zaourar-Michel, S. Uncontrolled inexact information within bundle methods. EURO J. Comput. Optim. 2017, 5, 5–29. [Google Scholar] [CrossRef] [Green Version]
  37. De Oliveira, W.; Sagastizábal, C.; Scheimberg, S. Inexact bundle methods for two-stage stochastic programming. SIAM J. Optim. 2011, 21, 517–544. [Google Scholar] [CrossRef] [Green Version]
  38. Zakeri, G.; Philpott, A.B.; Ryan, D.M. Inexact cuts in benders decomposition. SIAM J. Optim. 2000, 10, 643–657. [Google Scholar] [CrossRef]
  39. Fábián, C.I. Bundle-type methods for inexact data. Cent. Eur. J. Oper. Res. 2000, 8, 35–55. [Google Scholar]
  40. Kiwiel, K.C. Methods of Descent for Nondifferentiable Optimization; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1985. [Google Scholar]
  41. Wallace, S.W.; Ziemba, W.T. Applications of Stochastic Programming; MPS-SIAM Ser. Optim.; SIAM: Philadelphia, PA, USA, 2005. [Google Scholar]
  42. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory; SIAM: Philadelphia, PA, USA, 2014. [Google Scholar]
  43. Lan, G. Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization. Math. Program. 2015, 149, 1–45. [Google Scholar] [CrossRef] [Green Version]
  44. Sen, S.; Doverspike, R.D.; Cosares, S. Network planning with random demand. Telecommun. Syst. 1994, 3, 11–30. [Google Scholar] [CrossRef]
  45. Mak, W.; Morton, D.P.; Wood, R.K. Monte Carlo bounding techniques for determining solution quality in stochastic programs. Oper. Res. Lett. 1999, 24, 47–56. [Google Scholar] [CrossRef]
Table 1. The choices for the error bound and the descent target.

Instance   ε_x^{k+1}    γ_x^{k+1}
Ex         0            +∞
PI         0            F_{x̂^k} − θκ v_k
IE         ε > 0        +∞
AE         κ_ε v_k      +∞
PAE        κ_ε v_k      F_{x̂^k} − θκ v_k
Table 2. Dimensions of the SP instances.

Name      n_1   m_1   n_2   m_2
SSN        89     1   706   175
20-term    63     3   764   124
Table 3. The comparisons between GALBM and accelerated prox-level (APL) for the stochastic programming (SP) instances.

Name           Algorithm   NI    Time     f
SSN(50)        GALBM       105    28.12   4.838238
               APL         147    72.40   4.838278
SSN(100)       GALBM        95    49.58   7.352609
               APL         155   156.53   7.352618
20-term(50)    GALBM       132    47.62   2.549453 × 10^5
               APL         156   106.51   2.549453 × 10^5
20-term(100)   GALBM       173   128.23   2.532875 × 10^5
               APL         261   364.93   2.532876 × 10^5
Table 4. Numerical results for SSN(50) with fixed tolerances.

No.   ε_x       NI    Time    f
1     10^-4     206   55.21   4.838157
2     10^-5     202   50.37   4.838163
3     10^-6     190   49.64   4.838247
4     10^-7     132   34.35   4.838156
5     10^-8     105   27.81   4.838238
6     10^-9      97   24.49   4.838188
7     10^-10     99   25.62   4.838136
8     10^-11     84   22.98   4.838191
9     10^-12     97   27.47   4.838128
10    10^-13     84   25.53   4.838231
Table 5. Numerical results for SSN(100) with fixed tolerances.

No.   ε_x       NI    Time     f
1     10^-4     -     -        -
2     10^-5     224   109.21   7.352932
3     10^-6     194    94.80   7.352854
4     10^-7     127    62.17   7.352937
5     10^-8      95    46.93   7.352758
6     10^-9      87    43.03   7.352750
7     10^-10     79    41.55   7.353074
8     10^-11     83    43.76   7.352939
9     10^-12     81    44.08   7.352748
10    10^-13     81    46.77   7.352734
Table 6. Numerical results for 20-term(50) with fixed tolerances.

No.   ε_x       NI    Time    f
1     10^-2     250   80.86   2.549490 × 10^5
2     10^-3     144   49.12   2.549466 × 10^5
3     10^-4     165   57.49   2.549463 × 10^5
4     10^-5     164   54.12   2.549460 × 10^5
5     10^-6     211   72.42   2.549461 × 10^5
6     10^-7     178   62.19   2.549459 × 10^5
7     10^-8     132   44.56   2.549457 × 10^5
8     10^-9     175   61.53   2.549461 × 10^5
9     10^-10    132   49.53   2.549457 × 10^5
10    10^-11    258   99.69   2.549457 × 10^5
11    10^-12    175   67.70   2.549461 × 10^5
12    10^-13    183   71.52   2.549461 × 10^5
Table 7. Numerical results for 20-term(100) with fixed tolerances.

No.   ε_x       NI    Time     f
1     10^-2     227   141.74   2.532914 × 10^5
2     10^-3     -     -        -
3     10^-4     140    96.02   2.532879 × 10^5
4     10^-5     179   117.71   2.532877 × 10^5
5     10^-6     152    99.29   2.532879 × 10^5
6     10^-7     139    95.07   2.532876 × 10^5
7     10^-8     173   128.44   2.532876 × 10^5
8     10^-9     143   103.51   2.532878 × 10^5
9     10^-10    -     -        -
10    10^-11    159   110.21   2.532877 × 10^5
11    10^-12    150   112.37   2.532878 × 10^5
12    10^-13    132    99.46   2.532878 × 10^5
Table 8. Numerical results for SSN(50) with dynamic tolerances.

No.   μ        NI    Time    f
1     10^-3    -     -       -
2     10^-4    201   50.54   4.838163
3     10^-5    202   49.07   4.838163
4     10^-6    190   49.21   4.838247
5     10^-7    132   32.26   4.838156
6     10^-8    105   27.10   4.838238
Table 9. Numerical results for SSN(100) with dynamic tolerances.

No.   μ        NI    Time     f
1     10^-3    -     -        -
2     10^-4    -     -        -
3     10^-5    284   141.85   7.353010
4     10^-6    194    97.01   7.352854
5     10^-7    127    63.83   7.352937
6     10^-8     95    47.97   7.352758
Table 10. Numerical results for 20-term(50) with dynamic tolerances.

No.   μ        NI    Time    f
1     10^-2    199   65.09   2.549485 × 10^5
2     10^-3    140   44.73   2.549462 × 10^5
3     10^-4    165   54.08   2.549463 × 10^5
4     10^-5    164   54.18   2.549460 × 10^5
5     10^-6    211   70.67   2.549461 × 10^5
6     10^-7    178   61.90   2.549459 × 10^5
7     10^-8    132   46.94   2.549457 × 10^5
Table 11. Numerical results for 20-term(100) with dynamic tolerances.

No.   μ        NI    Time     f
1     10^-2    191   119.40   2.532901 × 10^5
2     10^-3    143    92.32   2.532881 × 10^5
3     10^-4    140    91.68   2.532878 × 10^5
4     10^-5    179   121.12   2.532877 × 10^5
5     10^-6    146   104.27   2.532878 × 10^5
6     10^-7    170   120.29   2.532879 × 10^5
7     10^-8    173   126.57   2.532876 × 10^5
