Article

Stochastic Variance Reduced Primal–Dual Hybrid Gradient Methods for Saddle-Point Problems

1 The Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education, School of Artificial Intelligence, Xidian University, Xi’an 710126, China
2 The College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
3 Medical College, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(10), 1687; https://doi.org/10.3390/math13101687
Submission received: 24 March 2025 / Revised: 21 April 2025 / Accepted: 22 April 2025 / Published: 21 May 2025

Abstract:
Recently, many stochastic Alternating Direction Methods of Multipliers (ADMMs) have been proposed to solve large-scale machine learning problems. However, for large-scale saddle-point problems, the state-of-the-art (SOTA) stochastic ADMMs still have high per-iteration costs. On the other hand, the stochastic primal–dual hybrid gradient (SPDHG) method has a low per-iteration cost but only a suboptimal convergence rate of $\mathcal{O}(1/\sqrt{S})$. Thus, there remains a gap in convergence rates between SPDHG and SOTA ADMMs. Motivated by these two issues, we propose (accelerated) stochastic variance reduced primal–dual hybrid gradient ((A)SVR-PDHG) methods. We design a linear extrapolation step to improve the convergence rate and a new adaptive epoch length strategy to remove the extra boundedness assumption. Our algorithms have a simpler structure and lower per-iteration complexity than SOTA ADMMs. As a by-product, we present asynchronous parallel variants of our algorithms. In theory, we rigorously prove that our methods converge linearly for strongly convex problems and improve the convergence rate to $\mathcal{O}(1/S^2)$ for non-strongly convex problems, as opposed to the existing $\mathcal{O}(1/S)$ rate. Compared with SOTA algorithms, various experimental results demonstrate that ASVR-PDHG achieves an average speedup of 2× to 5×.

1. Introduction

In this paper, we mainly consider the following saddle-point problem:
$$\min_{x \in \mathbb{R}^d} \max_{y \in Y}\ H(x, y) := F(x) + \langle y, Ax \rangle - G^*(y), \tag{1}$$
where $F(x) := \frac{1}{n}\sum_{i=1}^n F_i(x)$ is a convex and lower semi-continuous (l.s.c.) function that is a finite average of $n$ convex, smooth and l.s.c. functions $F_i$, $A \in \mathbb{R}^{l \times d}$, $d$ is the feature dimension, $G^*(\cdot)$ is the Fenchel conjugate of a convex (but possibly non-smooth) function $G(\cdot)$, and $Y$ is a nonempty closed and convex set. Such a problem seeks a trade-off between minimizing the objective over the primal variable $x$ and maximizing it over the dual variable $y$. Many modern machine learning problems can be formulated in this way, such as total variation denoising [1,2,3], $\ell_1$-norm regularization problems [4], and image reconstruction [5].
The saddle-point problem above can equivalently be solved via its primal problem. By the definition of the conjugate function $G^*$, the primal problem can be written as $\min_x F(x) + G(Ax)$, which appears frequently in the machine learning community, e.g., graph Lasso (i.e., $G(Ax) = \lambda_1\|Ax\|_1$) and low-rank matrix recovery (i.e., $G(Ax) = \lambda_1\|Ax\|_*$, where $\|\cdot\|_*$ is the nuclear norm of a matrix). By introducing an auxiliary variable $y = Ax$, the primal problem can be formulated as an equality-constrained problem. Alternating direction methods of multipliers (ADMMs) [6,7,8,9] are common algorithms for solving such equality-constrained problems and have shown excellent advantages. For example, for solving large-scale equality-constrained problems ($n$ very large), Online and Stochastic Alternating Direction Methods of Multipliers (OADMM [8] and SADMM [10]) have been proposed, though they only attain suboptimal convergence rates. Thus, acceleration techniques such as those in [11,12,13,14,15,16] have been developed to successfully address the obstacle of the high variance of stochastic gradient estimators. Among them, stochastic variance reduced gradient (SVRG) methods such as [16,17] obtain a linear convergence rate for strongly convex (SC) problems. Katyusha [18], MIG [16], and ASVRG [19] further improved the convergence rates for non-strongly convex (non-SC) problems by designing different momentum tricks. Recently, some researchers have introduced these techniques into SADMM and proposed stochastic variants with faster convergence rates. SAG-ADMM [20] attains linear convergence for SC problems, but it requires $\mathcal{O}(nd)$ storage for the past gradients. Similarly, SDCA-ADMM [21] inherits the drawbacks of SDCA [22], which calls for $\mathcal{O}(n)$ extra storage. In contrast, stochastic variance reduced gradient ADMM (SVRG-ADMM) [23] requires no extra storage while ensuring linear convergence for SC problems and an $\mathcal{O}(1/S)$ convergence rate for non-SC problems, and ASVRG-ADMM [24] further achieves an $\mathcal{O}(1/S^2)$ convergence rate for non-SC problems. However, as indirect methods for solving saddle-point problems, ADMM-type methods usually require that the proximal mapping of the regularizer $G$ be easy to compute, and they need to update at least three vector variables in each iteration. Thus, when $G$ is complex [25] or when solving large-scale structural regularization problems, ADMM-type methods may not be the first choice, and primal–dual algorithms [4,26] are more efficient.
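To spell out the equivalence used in this paragraph, the following one-line derivation (a standard consequence of the Fenchel–Moreau theorem for convex, l.s.c. $G$, with $Y$ the effective domain of $G^*$ as in (1)) recovers the saddle-point form (1) from the primal composite problem:

```latex
% Since G is convex and l.s.c., the biconjugate satisfies G^{**} = G (Fenchel--Moreau), so
%   G(Ax) = \sup_{y \in Y} \{ \langle y, Ax \rangle - G^*(y) \}.
% Substituting into the primal problem gives the saddle-point form (1):
\min_{x \in \mathbb{R}^d} F(x) + G(Ax)
  \;=\; \min_{x \in \mathbb{R}^d} \max_{y \in Y}\; F(x) + \langle y, Ax \rangle - G^*(y).
```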
As another effective tool, primal–dual algorithms are prevalent for solving the saddle-point problem directly, such as Stochastic Primal–Dual Coordinate (SPDC) methods [26,27,28,29] and the primal–dual hybrid gradient (PDHG) [25,30,31,32,33]. These methods alternate between maximizing over the dual variable $y$ and minimizing over the primal variable $x$. Thus, primal–dual algorithms update at least one fewer vector variable than ADMMs when solving the saddle-point problem, resulting in a lower per-iteration complexity. Due to these properties, primal–dual algorithms have been widely used in various machine learning applications [34,35,36,37,38,39].
Many SPDC algorithms have obtained excellent performance when $d$ is very large. The work [26] proposed the SPDC method with a linear convergence rate when the loss function is smooth and SC. To further reduce its per-iteration complexity, the work [27] proposed a stochastic primal–dual method, called SPD1, with $\mathcal{O}(1)$ per-iteration complexity as opposed to $\mathcal{O}(d)$. By incorporating the variance reduction technique [13], SPD1-VR obtains linear convergence for SC problems. More generally, for empirical composition optimization problems, SVRPDA-I and SVRPDA-II in the work [28] both achieve linear convergence under the condition of smooth component functions and an SC regularization term.
In contrast, when $n$ is very large, stochastic PDHG methods become a more competitive alternative for solving the saddle-point problem. For such a large-scale optimization problem, the deterministic PDHG algorithm [25] incurs an extremely expensive per-iteration cost (i.e., $\mathcal{O}(nd)$), and thus its stochastic version (SPDHG) [4] was developed. SPDHG selects only one sample for updating the primal variable at each iteration, which achieves the best possible per-iteration sample complexity. However, SPDHG can only obtain the convergence rates of $\mathcal{O}(1/S)$ and $\mathcal{O}(1/\sqrt{S})$ for the SC and non-SC problems (1), respectively. Another work [40] proposed a stochastic accelerated primal–dual (APD) algorithm for solving bilinear saddle-point problems in the online setting, which achieves the convergence rate of $\mathcal{O}(1/S^2 + 1/S + 1/\sqrt{S})$ and matches the lower bound based on the primal–dual gap, i.e., $\max_y H(\hat{x}, y) - \min_x H(x, \hat{y})$ for a point $(\hat{x}, \hat{y})$. The works [41,42] further focused on the finite-sum setting and analyzed primal–dual algorithms for the non-smooth case. For example, Song et al. [42] considered the non-smooth saddle-point problem by using the convex conjugate of the data fidelity term, but their method is limited to primal problems with simple regularizers, whereas our algorithms can solve more general structural regularization problems; their algorithms are therefore orthogonal to our methods. Recently, several faster stochastic primal–dual methods have been proposed. The work [43] achieves a linear convergence rate for problems where $G^*$ is strongly convex instead of $F$. The algorithms proposed in [44] further achieve a complexity matching the lower bound when $F$ and $G^*$ are both SC. However, neither work considers the non-SC setting. To bridge this gap, Zhao et al. [45] proposed a restart scheme that handles general convex–concave saddle-point problems rather than just the bilinear structure. Very recently, SVRG-PDFP [46] integrated the variance reduction technique into the primal–dual fixed point method to solve the graph-guided logistic regression model and CT image reconstruction, and achieved an $\mathcal{O}(1/S)$ convergence rate for non-SC finite-sum problems. But there remains a gap between this rate and the lower bound $\mathcal{O}(1/(nS))$ [47]. Thus, it is essential to exploit the relative simplicity of primal–dual methods to design faster algorithms for solving the finite-sum problem (1).

1.1. Our Motivations

In this paper, we focus on the large sample regime (i.e., n is large). To solve the large-scale saddle-point problem more effectively, we mainly consider the following factors:
  • Computational cost per iteration: Although stochastic ADMMs can be used to solve the large-scale saddle-point problem, their per-iteration cost is still high. That is, stochastic ADMMs usually use a positive semi-definite matrix $Q$ and update at least three variables per iteration, which increases the computational cost.
  • Theoretical properties: SPDHG only employs a decaying step size to reduce the variance of the stochastic gradient estimator, which leads to a suboptimal convergence rate. Thus, there still exists a gap in the convergence rate between SPDHG and state-of-the-art stochastic methods. Recently, SVRG-PDFP has improved the convergence rate from $\mathcal{O}(1/\sqrt{S})$ to $\mathcal{O}(1/S)$ for non-SC objectives, which only partially closes this gap.
  • Applications: SVRG-PDFP requires both $F$ and $G^*$ to be SC functions in order to obtain a linear convergence rate, which limits its applications. SPDC can also reduce the number of updated variables and achieves a linear convergence rate for SC problems, as mentioned above. However, these methods require the regularizer to be SC; for common regularizers (e.g., the $\ell_1$-norm), this condition is not satisfied.
These facts motivate us to design a more efficient primal–dual algorithm for solving the large-scale saddle-point problem (1).

1.2. Our Contributions

We address the above tricky issues by proposing efficient stochastic variance reduced primal–dual hybrid gradient methods, which have the following advantages:
  • Accelerated primal–dual algorithms: We propose novel primal–dual hybrid gradient methods (SVR-PDHG and ASVR-PDHG) to solve SC and non-SC objectives by integrating variance reduction and acceleration techniques. In our ASVR-PDHG algorithm, we design a new momentum acceleration step and a linear extrapolation step to further improve our theoretical and practical convergence speeds.
  • Better convergence rates: For non-SC problems, we rigorously prove that SVR-PDHG achieves an $\mathcal{O}(1/S)$ convergence rate and ASVR-PDHG attains an $\mathcal{O}(1/S^2)$ convergence rate based on the convergence criterion $\mathcal{T}$. Moreover, our algorithms enjoy linear convergence rates for SC problems. As by-products, we also analyze their gradient complexity results.
  • Lower computation cost: Our SVR-PDHG and ASVR-PDHG have simpler structures than SVRG-ADMM and ASVRG-ADMM, respectively. Our algorithms update one less vector variable than stochastic ADMMs, which reduces the per-iteration cost. That is why our algorithms perform better than them in practice.
  • More general applications: Our algorithms require fewer assumptions, which significantly extends their applicability. Firstly, the boundedness assumptions (i.e., assuming $x \in X$ and $y \in Y$, where $X$ and $Y$ are convex compact sets with diameters $D_X = \sup_{x_1, x_2 \in X}\|x_1 - x_2\|$ and $D_Y = \sup_{y_1, y_2 \in Y}\|y_1 - y_2\|$) are removed. Secondly, unlike SVRPDA [28], our algorithms are also applicable to non-SC regularizers (e.g., $\ell_1$-regularization). Thirdly, our algorithms only require the strong convexity of $F$ to achieve a linear convergence rate for SC problems, while the SVRG-PDFP and LPD [44] algorithms call for the strong convexity of $G^*$.
  • Asynchronous parallel algorithms: We extend our SVR-PDHG and ASVR-PDHG algorithms to the asynchronous parallel setting. To the best of our knowledge, this is the first asynchronous parallel primal–dual algorithm. Our experiments show that the speedup of SVR-PDHG and ASVR-PDHG is proportional to the number of threads.
  • Superior empirical behavior: We conduct various experiments on non-SC graph-guided fused Lasso problems, SC graph-guided logistic regression, and multi-task learning problems from the machine learning community. Compared with SPDHG, our algorithms achieve much better performance for both SC and non-SC problems. Due to the low per-iteration cost and acceleration techniques, a 2× to 5× speedup can be obtained by our ASVR-PDHG compared with SVRG-PDFP, SVRG-ADMM, and ASVRG-ADMM.

2. Preliminaries and Related Work

In this section, we recall some necessary assumptions and introduce some related works that inspire our ideas.

2.1. Notations

We use lower-case letters $x, y$ to denote vectors. Given a vector $x \in \mathbb{R}^d$, $\|x\|$ is its $\ell_2$-norm, $\|x\|_1$ is its $\ell_1$-norm, and $\|x\|_Q = \sqrt{x^\top Q x}$. $\nabla F(x)$ denotes the full gradient of the function $F$ at $x$ when $F$ is differentiable. Given a matrix $A$, $\|A\|$ denotes the induced norm of $A$, i.e., $\|A\| = \max\{\|Ax\| : x \in \mathbb{R}^d\ \text{with}\ \|x\| \le 1\}$. $\sigma_{\min}(A)$ is the smallest eigenvalue of $A$, and $A^\dagger$ denotes its pseudo-inverse.

2.2. Basic Assumptions

Assumption 1 
(Smoothness). Each component function $F_i$, $i = 1, 2, \dots, n$, is smooth. That is, for all $x_1, x_2 \in \mathbb{R}^d$, there exists a constant $L_i > 0$ such that $\|\nabla F_i(x_1) - \nabla F_i(x_2)\| \le L_i\|x_1 - x_2\|$, which implies that the gradient of the average function $F$ is also Lipschitz continuous, i.e., $\|\nabla F(x_1) - \nabla F(x_2)\| \le L_F\|x_1 - x_2\|$, where $L_F \le L = \max_{i=1,\dots,n} L_i$.
Assumption 2 
(Strong Convexity). The function $F$ is $\mu$-strongly convex, i.e., for all $x_1, x_2 \in \mathbb{R}^d$, there exists a constant $\mu > 0$ such that $F(x_1) \ge F(x_2) + \langle\nabla F(x_2), x_1 - x_2\rangle + \frac{\mu}{2}\|x_1 - x_2\|^2$.
Assumption 3 
(Bounded Function Value). We assume that $F$ and $G^*$ admit finite lower bounds, i.e., $F^* = \inf_x F(x) > -\infty$ and $(G^*)^* = \inf_y G^*(y) > -\infty$.

2.3. Related Work

2.3.1. Stochastic ADMM

SADMM is one of the most effective tools for solving the primal problem of Problem (1). In particular, Problem (1) is a primal–dual formulation of the following nonlinear primal problem: $\min_{x \in \mathbb{R}^d} F(x) + G(Ax)$. One can reformulate this primal problem as a problem with an equality constraint by introducing an auxiliary variable $y$ as follows:
$$\min_{x \in \mathbb{R}^d,\ y \in Y}\ F(x) + G(y), \quad \mathrm{s.t.}\ y = Ax. \tag{2}$$
Thus, it can be solved by SADMM [10]. Together with the dual variable λ , the update steps of SADMM are
$$\begin{aligned}
y_t &= \arg\min_{y \in Y}\ G(y) + \frac{\zeta}{2}\|Ax_{t-1} - y + \lambda_{t-1}\|^2,\\
x_t &= \arg\min_{x \in \mathbb{R}^d}\ \langle x, \nabla F_{i_t}(x_{t-1})\rangle + \frac{1}{2\eta_t}\|x - x_{t-1}\|_Q^2 + \frac{\zeta}{2}\|Ax - y_t + \lambda_{t-1}\|^2,\\
\lambda_t &= \lambda_{t-1} + Ax_t - y_t,
\end{aligned} \tag{3}$$
where $i_t$ is drawn uniformly at random from $[n] := \{1, \dots, n\}$, and $\zeta > 0$ is a penalty parameter. $\eta_t \propto 1/\sqrt{t}$ is the step size of the $t$-th update, and $\|x\|_Q^2 = x^\top Q x$ with a given positive semi-definite matrix $Q = \delta I - \nu A^\top A$ as in the inexact Uzawa method [48]. Here, the constant $\nu$ depends on the step size, $I$ is the identity matrix, and the positive semi-definiteness of $Q$ is guaranteed by a suitable constant $\delta$. Due to the variance of random sampling, the algorithm requires the step size to decay asymptotically to ensure convergence [13].
Recently, the variance reduction technique has been applied to SADMM, yielding SVRG-ADMM [23] and its accelerated version ASVRG-ADMM [24] with improved theoretical and empirical results. Specifically, the SVRG estimator is defined as
$$\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}\left[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})\right] + \nabla F(\tilde{x}^{s-1}), \tag{4}$$
where $I_t \subseteq \{1, 2, \dots, n\}$ is a mini-batch set of size $b$, and $\nabla F(\tilde{x}^{s-1})$ is the full gradient at the snapshot point $\tilde{x}^{s-1}$. Since the estimator is unbiased with respect to the full gradient $\nabla F(x_{t-1}^s)$, it allows a constant step size and hence faster convergence.
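For concreteness, a minimal NumPy sketch of the estimator (4) might look as follows; the oracle `grad_i` and all variable names are our own illustration, not part of any released code:

```python
import numpy as np

def svrg_gradient(grad_i, x, x_snap, full_grad_snap, batch):
    """Mini-batch SVRG estimator (4): unbiased for the full gradient at x.

    grad_i(i, x) is an assumed component-gradient oracle returning grad F_i(x);
    full_grad_snap is the precomputed full gradient at the snapshot point x_snap.
    """
    g = np.zeros_like(x)
    for i in batch:
        g += grad_i(i, x) - grad_i(i, x_snap)   # per-component correction term
    return g / len(batch) + full_grad_snap      # add back the snapshot full gradient
```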

2.3.2. Stochastic PDHG

The PDHG-type methods update the primal variable $x$ and the dual variable $y$ alternately. For example, the main update rules of SPDHG for solving Problem (1) are written as follows:
$$\begin{aligned}
y_t &= \arg\max_{y \in Y}\ \langle y, Ax_{t-1}\rangle - G^*(y) - \frac{1}{2\rho_t}\|y - y_{t-1}\|^2,\\
x_t &= x_{t-1} - \eta_t\left(A^\top y_t + \nabla F_{i_t}(x_{t-1})\right),
\end{aligned} \tag{5}$$
where $i_t$ is chosen uniformly at random from $[n] := \{1, \dots, n\}$, and $\rho_t$ and $\eta_t$ are step sizes. The SPDHG algorithm uses a stochastic gradient $\nabla F_{i_t}(\cdot)$ to approximate the full gradient $\nabla F(\cdot)$. As before, the variance of $\nabla F_{i_t}(x_{t-1})$ is large due to random sampling, so a decaying step size has to be used; thus, SPDHG can only attain suboptimal convergence. Moreover, SPDHG requires boundedness assumptions, which limits its application scope. Inspired by the success of variance reduction and momentum acceleration in stochastic optimization, we propose more efficient stochastic PDHG algorithms to resolve these tricky issues.

3. Our Stochastic Primal–Dual Hybrid Gradient Algorithms

In this section, we integrate variance reduction and momentum acceleration techniques into SPDHG and propose two stochastic variance reduced primal–dual hybrid gradient methods, called SVR-PDHG and ASVR-PDHG, where we design key linear extrapolation and momentum acceleration steps to improve the convergence rate. Moreover, we design asynchronous parallel versions for the proposed algorithms to further accelerate solving non-SC problems.

3.1. Our SVR-PDHG Algorithm

We first propose a stochastic variance reduced primal–dual hybrid gradient (SVR-PDHG) method for solving SC and non-SC objectives, as shown in Algorithms 1 and 2. Our algorithms are divided into $S$ epochs, and each epoch includes $T$ updates, where $T$ is usually set to $T = 2n$ as in the works [13,23,24]. More specifically, SVR-PDHG mainly includes the following three steps:
Update Dual Variable. We add a proximal term $\frac{1}{2\rho}\|y - y_{t-1}^s\|^2$ to keep the next iterate close to the current iterate $y_{t-1}^s$. Specifically, the first-order surrogate function of the dual variable $y$ is defined as follows:
$$y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{x}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho}\|y - y_{t-1}^s\|^2, \tag{6}$$
where $\bar{x}_{t-1}^s$ is updated by the linear extrapolation step (8) below, and $G^*(\cdot)$ is the conjugate function of $G(\cdot)$; this subproblem is usually easy to solve. For example, for graph-guided fused Lasso problems, $G^*(y) \equiv 0$.
Update Primal Variable. Analogous to the subproblem for $y$, we also add $\frac{1}{2\eta}\|x - x_{t-1}^s\|^2$ to the subproblem for $x$. Thus, $x_t^s$ is updated as follows:
$$x_t^s = \arg\min_{x \in \mathbb{R}^d}\ \langle x, A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s)\rangle + \frac{\|x - x_{t-1}^s\|^2}{2\eta} = x_{t-1}^s - \eta\left(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s)\right), \tag{7}$$
where $\tilde{\nabla} F_{I_t}(x_{t-1}^s)$ is the variance reduced gradient estimator (4). Note that a constant step size $\eta$ is used instead of the decaying step size in the work [4].
Linear Extrapolation Step. To further improve the theoretical convergence rate, we design the key update rule of $\bar{x}_t^s$ as follows:
$$\bar{x}_t^s = x_t^s + \beta(x_t^s - x_{t-1}^s), \tag{8}$$
where $\beta \in (0, 1]$. If we chose $\beta = 0$, an extra inner product term would appear in our proofs, which results in an $\mathcal{O}(1/S)$ convergence rate only within a certain error range, like the Arrow–Hurwicz method [25]. When we choose $\beta = 1$ instead, this linear extrapolation step eliminates the extra inner product term, which ensures the $\mathcal{O}(1/S)$ convergence rate for non-SC problems.
Algorithm 1 SVR-PDHG for SC objectives.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0$.
  1: for $s = 1, 2, \dots, S$ do
  2:    $\bar{x}_0^s = x_0^s = \tilde{x}^{s-1}$, $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$; $y_0^s = -(A^\top)^\dagger \nabla F(\tilde{x}^{s-1})$;
  3:    for $t = 1, 2, \dots, T$ do
  4:       Choose $I_t \subseteq [n]$ of size $b$ uniformly at random;
  5:       $\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})] + \tilde{p}$;
  6:       $y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{x}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho}\|y - y_{t-1}^s\|^2$;
  7:       $x_t^s = x_{t-1}^s - \eta(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s))$; $\bar{x}_t^s = x_t^s + \beta(x_t^s - x_{t-1}^s)$;
  8:    end for
  9:    $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = \frac{1}{T}\sum_{t=1}^T y_t^s$;
10: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
Algorithm 2 SVR-PDHG for non-SC objectives.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0 = x_T^0$, $\hat{y}^0 = \tilde{y}^0 = y_T^0$.
  1: for $s = 1, 2, \dots, S$ do
  2:    $\bar{x}_0^s = x_0^s = x_T^{s-1}$, $y_0^s = y_T^{s-1}$, $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$;
  3:    for $t = 1, 2, \dots, T$ do
  4:       Choose $I_t \subseteq [n]$ of size $b$ uniformly at random;
  5:       $\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})] + \tilde{p}$;
  6:       $y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{x}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho}\|y - y_{t-1}^s\|^2$;
  7:       $x_t^s = x_{t-1}^s - \eta(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s))$; $\bar{x}_t^s = x_t^s + \beta(x_t^s - x_{t-1}^s)$;
  8:    end for
  9:    $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = \frac{1}{T}\sum_{t=1}^T y_t^s$;
10: end for
Output: $\hat{x}^S = \frac{1}{S}\sum_{s=1}^S \tilde{x}^s$, $\hat{y}^S = \frac{1}{S}\sum_{s=1}^S \tilde{y}^s$.
The remaining update rules of our SVR-PDHG algorithms for SC and non-SC objectives are outlined in Algorithms 1 and 2, respectively (note that the outputs of our algorithms are all denoted by $\hat{x}^S$ and $\hat{y}^S$). The main differences between SVR-PDHG for SC and for non-SC problems are as follows. For SC objectives, the initial dual variable at each epoch is set to $y_0^s = -(A^\top)^\dagger \nabla F(\tilde{x}^{s-1})$, which contributes to attaining a linear convergence rate. By contrast, for non-SC problems the initial variables are set to $y_0^s = y_T^{s-1}$ and $x_0^s = x_T^{s-1}$. Furthermore, the outputs of SVR-PDHG for SC problems are $\hat{x}^S = \tilde{x}^S$ and $\hat{y}^S = \tilde{y}^S$ in a non-ergodic sense (a convergence rate is ergodic if it measures the optimality at $(\hat{x}^S = \frac{1}{S}\sum_{s=1}^S \tilde{x}^s,\ \hat{y}^S = \frac{1}{S}\sum_{s=1}^S \tilde{y}^s)$, while it is non-ergodic if it considers the optimality at the point $(\tilde{x}^S, \tilde{y}^S)$ directly), whereas the outputs of SVR-PDHG for non-SC problems are $\hat{x}^S = \frac{1}{S}\sum_{s=1}^S \tilde{x}^s$ and $\hat{y}^S = \frac{1}{S}\sum_{s=1}^S \tilde{y}^s$.
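To make the three steps concrete, the following minimal NumPy sketch runs one epoch of Algorithm 2 for the graph-guided fused Lasso case, where $G^*(y) \equiv 0$ and $Y = \{y : \|y\|_\infty \le \lambda_1\}$, so the dual step (6) reduces to a clipped gradient ascent step. The oracle `grad_i` and all variable names are our own illustration, not taken from the paper's released code:

```python
import numpy as np

def svr_pdhg_epoch(x, y, x_snap, A, grad_i, n, T, b, eta, rho, lam1, beta=1.0, seed=None):
    """One epoch of SVR-PDHG (Algorithm 2), sketched for G*(y) = 0 and
    Y = {y : ||y||_inf <= lam1}. Here x is the incoming iterate x_T^{s-1} and
    x_snap is the previous snapshot x~^{s-1}; grad_i(i, x) returns grad F_i(x)."""
    rng = np.random.default_rng(seed)
    p = sum(grad_i(i, x_snap) for i in range(n)) / n          # full gradient at snapshot
    x_bar = x.copy()
    x_sum, y_sum = np.zeros_like(x), np.zeros_like(y)
    for _ in range(T):
        batch = rng.choice(n, size=b, replace=False)
        g = sum(grad_i(i, x) - grad_i(i, x_snap) for i in batch) / b + p  # estimator (4)
        y = np.clip(y + rho * (A @ x_bar), -lam1, lam1)       # dual update (6): clipped ascent
        x_new = x - eta * (A.T @ y + g)                       # primal update (7)
        x_bar = x_new + beta * (x_new - x)                    # linear extrapolation (8)
        x = x_new
        x_sum += x
        y_sum += y
    return x, y, x_sum / T, y_sum / T                         # last iterates and epoch averages
```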

3.2. Our ASVR-PDHG Algorithm

In this part, we design an accelerated stochastic variance reduced primal–dual hybrid gradient (ASVR-PDHG) method for solving SC and non-SC problems as shown in Algorithms 3 and 4, respectively. In particular, to eliminate the boundedness assumption for the non-SC case, we design a new adaptive epoch length strategy. More specifically, ASVR-PDHG mainly includes three steps:
Update Dual Variable. The optimization subproblem for $y$ has a similar proximal term, $\frac{1}{2\rho\theta_{s-1}}\|y - y_{t-1}^s\|^2$, to keep the next iterate close to the current iterate $y_{t-1}^s$. Different from existing works, the quadratic term is preceded by an acceleration factor $\frac{1}{\theta_{s-1}}$ (when solving SC problems, $\theta_{s-1} \equiv \theta$ for all $s$). Thus, our first-order surrogate function becomes
$$y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{z}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho\theta_{s-1}}\|y - y_{t-1}^s\|^2, \tag{9}$$
where $\bar{z}_{t-1}^s$ is updated by the linear extrapolation step (12) below.
Update Primal Variable. We introduce an auxiliary variable $z_t^s$ to accelerate the primal variable, which mainly involves the following two steps.
  • Gradient descent: We first update $z_t^s$ using $x_{t-1}^s$. In particular, $z_t^s$ is obtained by solving the following subproblem with step size $\eta/\theta_{s-1}$:
    $$z_t^s = \arg\min_{z \in \mathbb{R}^d}\ \langle z, A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s)\rangle + \frac{\theta_{s-1}\|z - z_{t-1}^s\|^2}{2\eta} = z_{t-1}^s - \frac{\eta}{\theta_{s-1}}\left(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s)\right). \tag{10}$$
  • Momentum acceleration: We design a momentum acceleration step that uses the snapshot point of the previous epoch, i.e., $\tilde{x}^{s-1}$. In particular,
    $$x_t^s = \tilde{x}^{s-1} + \theta_{s-1}(z_t^s - \tilde{x}^{s-1}), \tag{11}$$
    where $\theta_{s-1}$ is a momentum parameter. For non-SC objectives, it can be seen from the outer loop that $\theta_{s-1}$ is monotonically decreasing and satisfies the condition $\theta_s \le 1 - \frac{\alpha(b)L\eta}{1 - L\eta}$ for all $s$, where $b$ is the size of the mini-batch and $\alpha(b) = \frac{n-b}{b(n-1)}$.
Linear Extrapolation Step. We also design a key linear extrapolation step for $z_t^s$ in Algorithms 3 and 4. This step can likewise eliminate an extra inner product term by setting $\beta = 1$, which is one reason for the $\mathcal{O}(1/S^2)$ convergence rate of our ASVR-PDHG algorithm on non-SC problems. Specifically, $\bar{z}_t^s$ is updated by
$$\bar{z}_t^s = z_t^s + \beta(z_t^s - z_{t-1}^s). \tag{12}$$
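Putting (9)–(12) together, a single inner iteration of ASVR-PDHG can be sketched as follows for the same special case as before ($G^* \equiv 0$, $Y = \{y : \|y\|_\infty \le \lambda_1\}$). Here `g_est` stands for the variance reduced gradient (4) at the current $x_{t-1}^s$, and the decomposition into a function is our own illustration rather than the authors' implementation:

```python
import numpy as np

def asvr_pdhg_inner(x_snap, z, z_bar, y, A, g_est, eta, rho, theta, lam1, beta=1.0):
    """One inner iteration of ASVR-PDHG, steps (9)-(12), sketched for G* = 0 and
    Y = {y : ||y||_inf <= lam1}; g_est is assumed precomputed by the caller."""
    y = np.clip(y + rho * theta * (A @ z_bar), -lam1, lam1)   # dual step (9): clipped ascent
    z_new = z - (eta / theta) * (A.T @ y + g_est)             # gradient step (10)
    x = x_snap + theta * (z_new - x_snap)                     # momentum step (11)
    z_bar = z_new + beta * (z_new - z)                        # linear extrapolation (12)
    return x, z_new, z_bar, y
```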
Algorithm 3 ASVR-PDHG for SC objectives.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0 = \tilde{y}^0$, $0 \le \theta \le 1 - \frac{\alpha(b)L\eta}{1 - L\eta}$.
  1: for $s = 1, 2, \dots, S$ do
  2:    $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$, $x_0^s = \tilde{x}^{s-1}$, $\bar{z}_0^s = z_0^s = \tilde{x}^{s-1}$, $y_0^s = -(A^\top)^\dagger \nabla F(\tilde{x}^{s-1})$;
  3:    for $t = 1, 2, \dots, T$ do
  4:       Choose $I_t \subseteq [n]$ of size $b$ uniformly at random;
  5:       $\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})] + \tilde{p}$;
  6:       $y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{z}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho\theta}\|y - y_{t-1}^s\|^2$;
  7:       $z_t^s = z_{t-1}^s - \frac{\eta}{\theta}(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s))$;
  8:       $x_t^s = \tilde{x}^{s-1} + \theta(z_t^s - \tilde{x}^{s-1})$; $\bar{z}_t^s = z_t^s + \beta(z_t^s - z_{t-1}^s)$;
  9:    end for
10:    $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = (1-\theta)\tilde{y}^{s-1} + \frac{\theta}{T}\sum_{t=1}^T y_t^s$;
11: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
Algorithm 4 ASVR-PDHG for non-SC objectives.
Input: $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0 = z_{T_0}^0$, $\hat{y}^0 = \tilde{y}^0 = y_{T_0}^0$, $0 \le \theta_0 \le 1 - \frac{\alpha(b)L\eta}{1 - L\eta}$, $T_0$.
  1: for $s = 1, 2, \dots, S$ do
  2:    $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$, $x_0^s = \tilde{x}^{s-1}$, $\bar{z}_0^s = z_0^s = z_{T_{s-1}}^{s-1}$, $y_0^s = y_{T_{s-1}}^{s-1}$;
  3:    for $t = 1, 2, \dots, T_{s-1}$ do
  4:       Choose $I_t \subseteq [n]$ of size $b$ uniformly at random;
  5:       $\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})] + \tilde{p}$;
  6:       $y_t^s = \arg\max_{y \in Y}\ \langle y, A\bar{z}_{t-1}^s\rangle - G^*(y) - \frac{1}{2\rho\theta_{s-1}}\|y - y_{t-1}^s\|^2$;
  7:       $z_t^s = z_{t-1}^s - \frac{\eta}{\theta_{s-1}}(A^\top y_t^s + \tilde{\nabla} F_{I_t}(x_{t-1}^s))$;
  8:       $x_t^s = \tilde{x}^{s-1} + \theta_{s-1}(z_t^s - \tilde{x}^{s-1})$; $\bar{z}_t^s = z_t^s + \beta(z_t^s - z_{t-1}^s)$;
  9:    end for
10:    $\tilde{x}^s = \frac{1}{T_{s-1}}\sum_{t=1}^{T_{s-1}} x_t^s$, $\tilde{y}^s = (1-\theta_{s-1})\tilde{y}^{s-1} + \frac{\theta_{s-1}}{T_{s-1}}\sum_{t=1}^{T_{s-1}} y_t^s$;
11:    $\theta_s = \frac{\sqrt{\theta_{s-1}^4 + 4\theta_{s-1}^2} - \theta_{s-1}^2}{2}$, $T_s = T_{s-1}/(1 - \theta_s)$;
12: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
Moreover, we set $\tilde{y}^s = (1-\theta_{s-1})\tilde{y}^{s-1} + \frac{\theta_{s-1}}{T_{s-1}}\sum_{t=1}^{T_{s-1}} y_t^s$ (for SC problems, $T_{s-1} \equiv T$ and $\theta_{s-1} \equiv \theta$ for all $s$) to further accelerate our algorithms, where $T_{s-1}$ is the number of inner iterations at the $s$-th outer loop. The key differences between ASVR-PDHG for SC and for non-SC problems are as follows.
ASVR-PDHG for SC problems: The initial dual variable at each epoch is set to $y_0^s = -(A^\top)^\dagger \nabla F(\tilde{x}^{s-1})$, which contributes to attaining linear convergence for SC objectives. The momentum parameter $\theta_s$ and the inner-loop length $T_s$ are set to the constants $\theta$ and $T$, respectively.
ASVR-PDHG for non-SC problems: The initial variables are set to $y_0^s = y_{T_{s-1}}^{s-1}$ and $z_0^s = z_{T_{s-1}}^{s-1}$. Following ASVRG-ADMM [24], the sequence $\{\theta_s\}$ is monotonically decreasing and satisfies $\frac{1-\theta_s}{\theta_s^2} = \frac{1}{\theta_{s-1}^2}$. Different from ASVRG-ADMM, a new adaptive strategy for the epoch length $T_s$ is designed as follows:
$$T_s = \frac{1}{1-\theta_s}\,T_{s-1} \tag{13}$$
with an initial length $T_0$; this is what allows us to eliminate the boundedness assumption. Since $\theta_s$ is decreasing, the growth factor applied to $T_{s-1}$ is greater than 1 and decreases gradually, whereas a constant factor of 2 is used in SVRG++ [49].
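To make the schedule concrete, the following self-contained snippet (our own illustration, using $\theta_0 = 0.9$ as in the experiments and a hypothetical $T_0 = 100$; the ceiling is ours, to keep epoch lengths integral) iterates the updates in line 11 of Algorithm 4:

```python
import math

# theta_s solves (1 - theta_s)/theta_s^2 = 1/theta_{s-1}^2, i.e.
# theta_s = (sqrt(theta^4 + 4*theta^2) - theta^2) / 2, and T_s grows by 1/(1 - theta_s).
theta, T = 0.9, 100                       # theta_0 as in the experiments; T_0 hypothetical
for s in range(1, 6):
    theta = (math.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
    T = math.ceil(T / (1 - theta))
    print(f"epoch {s}: theta_s = {theta:.4f}, T_s = {T}")
# theta_s decreases toward 0, so the per-epoch growth factor 1/(1 - theta_s) shrinks
# toward 1, in contrast to the fixed doubling (factor 2) used in SVRG++.
```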

3.3. Our Asynchronous Parallel Algorithms

In this subsection, we extend our SVR-PDHG and ASVR-PDHG algorithms to the sparse and asynchronous parallel setting to further accelerate convergence for large-scale sparse and high-dimensional non-SC problems. To the best of our knowledge, this is the first asynchronous parallel stochastic primal–dual algorithm. Our parallel ASVR-PDHG algorithm is shown in Algorithm 5, and the parallel SVR-PDHG algorithm is shown in Algorithm A1 in Appendix A. Specifically, we consider $A = I$ and batch size $b = 1$ to facilitate parallelism. Taking Algorithm 5 as an example, there are three main differences compared with Algorithm 4.
Algorithm 5 ASVR-PDHG for non-SC objectives in the sparse and asynchronous parallel setting.
Input: $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0 = z_{T_0}^0$, $\hat{y}^0 = \tilde{y}^0 = y_{T_0}^0 = 0$, $0 \le \theta_0 \le 1 - \frac{\alpha(b)L\eta}{1 - L\eta}$, $T_0$, $p$ threads.
  1: for $s = 1, 2, \dots, S$ do
  2:    Read the current value of $\tilde{x}^{s-1}$ from shared memory and let all threads compute the full gradient $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$ in parallel; $x_0^s = \tilde{x}^{s-1}$, $\bar{z}_0^s = z_0^s = z_{T_{s-1}}^{s-1}$, $y_0^s = y_{T_{s-1}}^{s-1}$;
  3:    $t = 0$; // inner loop counter
  4:    while $t < T_{s-1}$ in parallel do
  5:       $t = t + 1$; // atomic increase of the counter
  6:       Choose $i_t$ uniformly at random from $\{1, 2, \dots, n\}$;
  7:       $s_{i_t} :=$ support of sample $i_t$;
  8:       Inconsistent read of $[x_{t-1}^s]_{s_{i_t}}$;
  9:       $[u]_{s_{i_t}} = \nabla F_{i_t}([x_{t-1}^s]_{s_{i_t}}) - \nabla F_{i_t}([\tilde{x}^{s-1}]_{s_{i_t}}) + [D_{i_t}\tilde{p}]_{s_{i_t}}$;
10:       $[y_t^s]_{s_{i_t}} = \arg\max_{[y]_{s_{i_t}}}\ \langle [y]_{s_{i_t}}, [\bar{z}_{t-1}^s]_{s_{i_t}}\rangle - G^*(y) - \frac{1}{2\rho\theta_{s-1}}\|[y]_{s_{i_t}} - [y_{t-1}^s]_{s_{i_t}}\|^2$;
11:       $[z_t^s]_{s_{i_t}} = [z_{t-1}^s]_{s_{i_t}} - \frac{\eta}{\theta_{s-1}}([y_t^s]_{s_{i_t}} + [u]_{s_{i_t}})$;
12:       $[x_t^s]_{s_{i_t}} = [\tilde{x}^{s-1}]_{s_{i_t}} + \theta_{s-1}([z_t^s]_{s_{i_t}} - [\tilde{x}^{s-1}]_{s_{i_t}})$;
13:       $[\bar{z}_t^s]_{s_{i_t}} = [z_t^s]_{s_{i_t}} + \beta([z_t^s]_{s_{i_t}} - [z_{t-1}^s]_{s_{i_t}})$;
14:    end while
15:    $\tilde{x}^s = \frac{1}{T_{s-1}}\sum_{t=1}^{T_{s-1}} x_t^s$, $\tilde{y}^s = (1-\theta_{s-1})\tilde{y}^{s-1} + \frac{\theta_{s-1}}{T_{s-1}}\sum_{t=1}^{T_{s-1}} y_t^s$;
16:    $\theta_s = \frac{\sqrt{\theta_{s-1}^4 + 4\theta_{s-1}^2} - \theta_{s-1}^2}{2}$, $T_s = T_{s-1}/(1 - \theta_s)$;
17: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
▸ The full gradient $\tilde{p}$ in Algorithm 5 is computed in parallel, while the full gradient in Algorithm 4 is computed serially.
▸ We adopt a sparse approximation technique [50] to decrease the chance of conflicts between multiple cores, which changes the SVRG estimator (4) as follows:
$$\tilde{\nabla} F_{i_t}(x_{t-1}^s) = \nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1}) + D_{i_t}\nabla F(\tilde{x}^{s-1}), \tag{14}$$
where $D_{i_t}$ is a diagonal matrix used to construct sparse iterates. The choice of $D_{i_t}$ must ensure $\mathbb{E}[D_{i_t}\nabla F(\tilde{x}^{s-1})] = \nabla F(\tilde{x}^{s-1})$; under this condition, the sparse approximated estimator (14) is still an unbiased estimator of $\nabla F(x_{t-1}^s)$. A minimal construction of $D_{i_t}$ is sketched after this list.
▸ The proposed asynchronous parallel algorithm updates only the coordinates $s_{i_t}$ of the variables, i.e., the support set of the chosen random sample, rather than the entire dense vector; see lines 9–13 of Algorithm 5. As long as these coordinates differ, Algorithm 5 can effectively avoid write conflicts. Thus, our parallel ASVR-PDHG can take advantage of the power of multi-core processor architectures and further accelerate Algorithm 4 on sparse datasets.
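One standard way to build such a $D_{i_t}$ (our own sketch following the sparse-update idea of [50]; the function name and data layout are illustrative) is a diagonal matrix whose $j$-th entry is $1/p_j$ on the support of sample $i_t$ and 0 elsewhere, where $p_j$ is the fraction of samples whose support contains coordinate $j$:

```python
import numpy as np

def make_D_diagonals(supports, d):
    """supports: list of index arrays, one per sample. Returns, for each sample i,
    the nonzero part of the diagonal of D_i as {i: (support, weights)}."""
    n = len(supports)
    counts = np.zeros(d)
    for sup in supports:
        counts[sup] += 1
    p = counts / n                              # coordinate appearance probabilities
    return {i: (sup, 1.0 / p[sup]) for i, sup in enumerate(supports)}

# Unbiasedness check: averaging D_i @ g over all samples recovers g on covered coords.
rng = np.random.default_rng(0)
d, n = 6, 4
supports = [rng.choice(d, size=3, replace=False) for _ in range(n)]
D = make_D_diagonals(supports, d)
g = rng.standard_normal(d)
avg = np.zeros(d)
for i in range(n):
    sup, w = D[i]
    avg[sup] += w * g[sup] / n
# avg now equals g on every coordinate that appears in at least one support
```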

4. Theoretical Analysis

This section provides the convergence analysis for SVR-PDHG and ASVR-PDHG (i.e., Algorithms 1–4) in SC and non-SC cases, respectively. We first introduce the convergence criterion and then give the key technical results (i.e., Lemmas 1 and 2) for SVR-PDHG and ASVR-PDHG, respectively. Finally, we prove the convergence rate of our algorithms as shown by the following Theorems 1–4.

4.1. Convergence Criterion

In Problem (1), $H(\cdot, y)$ is convex for each $y \in Y$ and $H(x, \cdot)$ is concave for each $x \in \mathbb{R}^d$. Under such conditions, Sion's minimax theorem [51] guarantees that $\max_{y \in Y}\min_{x \in \mathbb{R}^d} H(x, y) = \min_{x \in \mathbb{R}^d}\max_{y \in Y} H(x, y)$. In other words, there exists at least one saddle point $(x^*, y^*)$ such that
$$\min_{x \in \mathbb{R}^d} H(x, y^*) = H(x^*, y^*) = \max_{y \in Y} H(x^*, y). \tag{15}$$
That is, $H(x^*, y) \le H(x^*, y^*) \le H(x, y^*)$. This setting therefore helps establish the convergence criterion for our algorithms. Following the work [4], we first introduce the function $\mathcal{T}(x, y) = H(x, y^*) - H(x^*, y)$ as a convergence criterion. The criterion function $\mathcal{T}(\cdot, \cdot)$ has the following properties.
Property 1. 
For all $(x, y)$, if $\mathbb{R}^d \times Y$ contains a saddle point $(x^*, y^*)$, then $\mathcal{T}(x, y) = H(x, y^*) - H(x^*, y) \ge 0$, and it vanishes only if $(x, y)$ is itself a saddle point.
Property 2. 
According to the definition of $\mathcal{T}(\cdot, \cdot)$, for $x \in \mathbb{R}^d$ and $y \in Y$, the following inequality holds: $F(x) - F(x^*) - \langle\nabla F(x^*), x - x^*\rangle \le \mathcal{T}(x, y)$.
Other convergence criteria are also common, such as the primal–dual gap, i.e., $\max_y H(\hat{x}, y) - \min_x H(x, \hat{y})$ for a point $(\hat{x}, \hat{y})$. For example, Chen et al. [40] used the primal–dual gap as the measurement and achieved a complexity matching the lower bound for solving online bilinear saddle-point problems. Zhao et al. [41] proposed the OTPDHG algorithm, which also uses the primal–dual gap as the measurement, and achieved the optimal convergence rate for online bilinear saddle-point problems even when $A$ is unknown a priori. Zhao et al. [45] further considered the beyond-bilinear setting and, based on the primal–dual gap, still obtained the optimal convergence rate for online saddle-point problems. It is worth noting that the works mentioned above focus on the online setting, while we analyze the finite-sum setting; the lower bound for the online setting is usually higher than that for the finite-sum setting, as in [52,53]. Although Song et al. [42] also analyzed convergence results based on the primal–dual gap in the finite-sum setting, they focused on solving non-smooth problems, while our algorithms focus on improving the convergence rates for smooth problems; these analyses are thus orthogonal to ours. Thekumparampil et al. [44] considered the finite-sum setting, but their convergence rates are based on $\|x - x^*\| + \|y - y^*\|$ and require a strongly convex $G^*$. In this paper, we prove faster convergence rates based on $\mathcal{T}(x, y)$ in the non-SC finite-sum setting.
Using the convergence criterion $\mathcal{T}(x, y)$, we analyze our SVR-PDHG and ASVR-PDHG algorithms in the next two subsections. The detailed proofs of all theoretical results are provided in Appendix A. Here, we give a brief proof sketch: our main proofs start from a one-epoch analysis, i.e., Lemmas 1 and 2 below. Then, in Section 4.2, we prove the convergence results of SVR-PDHG in Theorems 1 and 2, which rely on the one-epoch inequality in Lemma 1; the gradient complexity results are also given as a by-product. In Section 4.3, we prove the convergence rate and gradient complexity of ASVR-PDHG in Theorems 3 and 4, which depend on the one-epoch upper bound in Lemma 2.

4.2. Convergence Analysis of SVR-PDHG

This subsection provides the convergence analysis for SVR-PDHG (i.e., Algorithms 1 and 2). Lemma 1 provides a one-epoch analysis for SVR-PDHG.
Key technical challenges for SVR-PDHG. Line 5 of our SVR-PDHG algorithm eases the computational burden but simultaneously complicates the convergence analysis, since it introduces a tricky inner product term $\langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^*\rangle$ into the bound for $y$. To address this challenge, we use the linear extrapolation step $\bar{x}_t^s = x_t^s + \beta(x_t^s - x_{t-1}^s)$ and establish an upper bound on $\langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^*\rangle$ in terms of $\|x_{t-1}^s - x_{t-2}^s\|^2$ and $\|y_t^s - y_{t-1}^s\|^2$, which eliminates this inner product term in Lemma 1.
Lemma 1 
(One-Epoch Analysis for SVR-PDHG). Suppose Assumption 1 holds. Consider the sequence $\{x_t^s, y_t^s, \tilde{x}^s, \tilde{y}^s\}$ generated by Algorithm 1 or 2 in one epoch, and let $(x^*, y^*)$ be an optimal solution of Problem (1). If $0 < \eta \le \frac{1}{9L}$ and $0 < \rho \le \frac{8L}{M^2}$, then the following inequality holds for all $s \in \{1, 2, \dots, S\}$:
$$\begin{aligned}
T(1 - 4L\eta\alpha(b))\,\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& \frac{\mathbb{E}[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2]}{2\eta} + \frac{\mathbb{E}[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2]}{2\rho}\\
&+ \frac{M\eta\gamma}{2\eta}\,\mathbb{E}\|x_0^s - x_1^s\|^2 - \frac{1 - L\eta}{2\eta}\,\mathbb{E}\|x_{T-1}^s - x_T^s\|^2\\
&+ \mathbb{E}\big[\langle A(x_0^s - x_1^s), y_0^s - y^*\rangle - \langle A(x_T^s - x_{T-1}^s), y_T^s - y^*\rangle\big]\\
&+ 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_0^s) - F(x^*) - \langle\nabla F(x^*), x_0^s - x^*\rangle\big]\\
&- 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_T^s) - F(x^*) - \langle\nabla F(x^*), x_T^s - x^*\rangle\big]\\
&+ 4TL\eta\alpha(b)\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle\nabla F(x^*), \tilde{x}^{s-1} - x^*\rangle\big],
\end{aligned} \tag{16}$$
where $\alpha(b) = \frac{n-b}{b(n-1)}$, $\gamma$ satisfies $M\rho \le \gamma \le \frac{1-L\eta}{M\eta}$, and $M = \|A\|$.
Lemma 1 provides an upper bound on the expectation of $\mathcal{T}(\tilde{x}^s, \tilde{y}^s)$ over one epoch of our SVR-PDHG. Based on this lemma, we are now ready to combine the analysis across epochs and derive our final Theorems 1 and 2 for SC and non-SC objectives, respectively. Lemma 1 also guides our analysis of ASVR-PDHG.
Theorem 1 
(SVR-PDHG for SC Objectives). Let $(\hat{x}^S, \hat{y}^S)$ be the output of Algorithm 1. Suppose Assumptions 1–3 hold and $A$ has full row rank. If $0 < \eta \le \min\{\frac{1}{9L}, \frac{1}{9L\alpha(b)}\}$ and $0 < \rho \le \frac{8L}{M^2}$, and we set $T > 4 + \frac{9}{\eta\mu} + \frac{9L_F}{\rho\,\sigma_{\min}(AA^\top)}$ such that
$$\phi_1 = \frac{4L\eta(T+1)\alpha(b)}{(1 - 4L\eta\alpha(b))T} + \frac{1}{T\eta\mu(1 - 4L\eta\alpha(b))} + \frac{L_F}{T\rho(1 - 4L\eta\alpha(b))\,\sigma_{\min}(AA^\top)} < 1$$
holds strictly, then
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \phi_1^S\,\mathcal{T}(\hat{x}^0, \hat{y}^0). \tag{17}$$
In other words, choosing $T = \mathcal{O}(L/\mu)$, the gradient complexity of SVR-PDHG to achieve an $\epsilon$-additive error (i.e., $\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \epsilon$) is $\mathcal{O}((n + L/\mu)\log(1/\epsilon))$.
Theorem 1 shows that SVR-PDHG obtains a linear convergence rate for SC objectives. Unlike SVRG-PDFP, SVR-PDHG does not require the strong convexity of $G^*(\cdot)$. Our SVR-PDHG algorithm achieves the same coefficient $\phi_1$ as the inexact Uzawa method [48] for solving SC objectives.
Theorem 2 
(SVR-PDHG for Non-SC Objectives). Suppose Assumptions 1 and 3 hold, and let $(\hat{x}^S, \hat{y}^S)$ be the output of Algorithm 2. If $0 < \eta \le \min\{\frac{1}{9L}, \frac{1}{9L\alpha(b)}\}$ and $0 < \rho \le \frac{8L}{M^2}$, then we have
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \frac{4L\eta(T+1)\alpha(b)}{(1 - 8L\eta\alpha(b))TS}\,\mathcal{T}(\hat{x}^0, \hat{y}^0) + \frac{\rho D_{x^*}^2 + \eta D_{y^*}^2}{2\eta\rho(1 - 8L\eta\alpha(b))TS}, \tag{18}$$
where $D_{x^*} = \|x^* - \hat{x}^0\|$ and $D_{y^*} = \|y^* - \hat{y}^0\|$. That is, if the output $(\hat{x}^S, \hat{y}^S)$ satisfies $\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \epsilon$, the gradient complexity of SVR-PDHG is $\mathcal{O}\big(\frac{n\,\mathcal{T}(\hat{x}^0, \hat{y}^0)}{\epsilon} + \frac{D_{x^*}^2 + D_{y^*}^2}{\epsilon}\big)$.
Theorem 2 shows that SVR-PDHG removes the boundedness assumption of SPDHG, depending only on the constants $D_{x^*}$ and $D_{y^*}$, and achieves an $\mathcal{O}(1/S)$ convergence rate for non-SC objectives. In addition, SVR-PDHG has simpler iteration rules than SVRG-ADMM. Thus, although SVR-PDHG and SVRG-ADMM have the same convergence rate, the former is faster in practice.

4.3. Convergence Analysis of ASVR-PDHG

This subsection provides the convergence analysis for ASVR-PDHG (i.e., Algorithms 3 and 4). Similarly, we first provide a one-epoch upper bound for our ASVR-PDHG.
Key technical challenges for ASVR-PDHG. In addition to the technical challenges already present in the analysis of SVR-PDHG, the momentum acceleration step further increases the difficulty of establishing faster convergence rates for ASVR-PDHG; clarifying the behavior of momentum acceleration is a key step. To address this challenge, we use the improved variance upper bound of [24] and design $v^* = (1-\theta_{s-1})\tilde{x}^{s-1} + \theta_{s-1}x^*$ to help clarify the behavior of the momentum acceleration step while bounding a new and tricky inner product term $\mathbb{E}\langle\tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x_{t-1}^s\rangle$ in Lemma 2. A fixed $T$ would also prevent the $\mathcal{O}(1/S^2)$ convergence rate and would leave the result dependent on the boundedness assumption, thereby limiting the applicability of our algorithms. Our new adaptive strategy for the epoch length $T_s$ addresses these issues.
Lemma 2 
(One-Epoch Analysis for ASVR-PDHG). Let the sequence $\{x_t^s, z_t^s, y_t^s, \tilde{x}^s, \tilde{y}^s\}$ be generated by Algorithm 3 or 4 under Assumption 1, and let $(x^*, y^*)$ denote an optimal solution of Problem (1). If $\eta < \frac{1}{2L}$ and $\rho \le \frac{L}{M^2}$, then the following inequality holds for all $s$:
$$\begin{aligned}
\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& (1-\theta_{s-1})\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \frac{\theta_{s-1}^2}{2T_{s-1}\eta}\,\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_{T_{s-1}}^s\|^2\big]\\
&- \frac{\theta_{s-1}}{T_{s-1}}\,\mathbb{E}\big\langle A(z_{T_{s-1}}^s - z_{T_{s-1}-1}^s),\ y_{T_{s-1}}^s - y^*\big\rangle + \frac{1}{2T_{s-1}\rho}\,\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_{T_{s-1}}^s - y^*\|^2\big]\\
&+ \frac{\theta_{s-1}M\gamma}{2T_{s-1}\eta}\,\mathbb{E}\|z_0^s - z_1^s\|^2 - \frac{L\eta\,\theta_{s-1}^2}{2T_{s-1}\eta}\,\mathbb{E}\|z_{T_{s-1}}^s - z_{T_{s-1}-1}^s\|^2 + \frac{\theta_{s-1}}{T_{s-1}}\,\mathbb{E}\big\langle A(z_0^s - z_1^s),\ y_0^s - y^*\big\rangle,
\end{aligned} \tag{19}$$
where $\theta_{s-1} \le 1 - \frac{\alpha(b)L\eta}{1 - 2L\eta}$, $\alpha(b) = \frac{n-b}{b(n-1)}$, and $\gamma$ satisfies $M\theta_{s-1}\eta\rho \le \gamma \le \frac{L\eta\theta_{s-1}}{M}$.
From Lemma 2, we can obtain the relationship between two consecutive epochs for our ASVR-PDHG algorithm. Based on this, we provide the convergence properties of ASVR-PDHG for both SC and non-SC objectives.
Theorem 3 
(ASVR-PDHG for SC Objectives). Suppose Assumptions 1–3 hold, $A$ has full row rank, and $\theta \le 1 - \frac{\alpha(b)L\eta}{1 - 2L\eta}$. Let $(\hat{x}^S, \hat{y}^S)$ be the output generated by Algorithm 3, and let $\phi_2 = 1 - \theta + \frac{\theta^2}{T\eta\mu} + \frac{L_F}{T\rho\,\sigma_{\min}(AA^\top)}$. If we set $\eta < \frac{1}{2L}$ and $\rho \le \frac{L}{M^2}$ and choose $T > \frac{\theta}{\eta\mu} + \frac{L_F}{\theta\rho\,\sigma_{\min}(AA^\top)}$ such that $\phi_2 < 1$, we obtain
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \phi_2^S\,\mathcal{T}(\hat{x}^0, \hat{y}^0). \tag{20}$$
Analogous to SVR-PDHG, choosing $T = \mathcal{O}(L/\mu)$, the gradient complexity of ASVR-PDHG to achieve an $\epsilon$-additive error (i.e., $\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \epsilon$) is also $\mathcal{O}((n + L/\mu)\log(1/\epsilon))$.
Theorem 3 indicates that ASVR-PDHG achieves a linear convergence rate for SC objectives. Note that $\phi_2$ is more concise than $\phi_1$, and ASVR-PDHG actually converges faster than SVR-PDHG for SC problems, as shown in our experiments, which implies the superiority of momentum acceleration.
Theorem 4 
(ASVR-PDHG for Non-SC Objectives). Suppose Assumptions 1 and 3 hold, and let $(\hat{x}^S, \hat{y}^S)$ be the output of Algorithm 4. If we set $\theta_0 = 1 - \frac{\alpha(b)L\eta}{1 - 2L\eta}$, $\eta < \frac{1}{2L}$, and $\rho \le \frac{L}{M^2}$, then ASVR-PDHG has the following convergence result for non-SC objectives:
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \frac{4\alpha(b)}{\theta_0^2(1 - 2L\eta)(S+1)^2}\,\mathcal{T}(\hat{x}^0, \hat{y}^0) + \frac{2T_0}{(S+1)^2}\left(\frac{1}{\eta}D_{x^*}^2 + \frac{1}{\theta_0^2\rho}D_{y^*}^2\right), \tag{21}$$
where $D_{x^*} = \|x^* - \hat{x}^0\|$ and $D_{y^*} = \|y^* - \hat{y}^0\|$. That is, if the output $(\hat{x}^S, \hat{y}^S)$ satisfies $\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \epsilon$, the gradient complexity of ASVR-PDHG is $\mathcal{O}\big(n\sqrt{\frac{\mathcal{T}(\hat{x}^0, \hat{y}^0)}{\epsilon}} + \sqrt{\frac{n}{\epsilon}}\,(D_{x^*} + D_{y^*})\big)$.
In light of Theorem 4, ASVR-PDHG achieves an $\mathcal{O}(1/S^2)$ convergence rate with $T_0 = \mathcal{O}(n/S)$ and $n \ge S$. Note that ASVR-PDHG removes the extra boundedness assumption of SPDHG and depends only on the constants $D_{x^*}$ and $D_{y^*}$. That is, ASVR-PDHG improves the convergence rate of variance reduction algorithms (e.g., SVRG-ADMM, SVRG-PDFP, and SVR-PDHG) from $\mathcal{O}(1/S)$ to $\mathcal{O}(1/S^2)$ through the adaptive epoch length strategy, the linear extrapolation step, and the momentum acceleration technique.
Remark 1. 
To further highlight the advantages of our algorithms, we compare their gradient complexity with those of other algorithms. When solving non-SC problems, the gradient complexity of SPDHG [4] is only $\mathcal{O}(\frac{1}{\epsilon^2})$, analogous to those of SGD [54] and SADMM [10]. Although SVR-PDHG has the same gradient complexity (i.e., $\mathcal{O}(\frac{n}{\epsilon} + \frac{1}{\epsilon})$) as SVRG-ADMM, SVR-PDHG has better practical performance. Theorem 4 implies that our ASVR-PDHG can effectively reduce the gradient complexity and does not require additional assumptions. We summarize the gradient complexity of several stochastic primal–dual methods and stochastic ADMMs for non-SC problems in Table 1. Note that we use the notation $\mathcal{O}(\cdot)$ to hide $D_{x^*}$, $D_{y^*}$, and other constants.

5. Experimental Results

This section evaluates the performance of our SVR-PDHG and ASVR-PDHG methods against several state-of-the-art algorithms for solving non-SC graph-guided fused Lasso problems, SC graph-guided logistic regression, and non-SC multi-task learning problems. Our source code is available at https://github.com/Weixin-An/ASVR-PDHG, accessed on 10 November 2021. The compared algorithms include SPDHG [55], SVRG-PDFP [46], SVRG-ADMM [23], and ASVRG-ADMM [24]. To alleviate statistical variability, each experiment is repeated 10 times and shaded figures are plotted: the shading represents the standard deviation, and the solid line in the middle represents the mean. All the experiments are carried out on an Intel Core i7-7700 3.6 GHz CPU (Intel Corporation, Santa Clara, CA, USA) with 32 GB RAM.
Hyper-parameter Selection. We perform hyper-parameter selection on the small-scale synthetic dataset described in Section 5.1. We use grid search to choose relatively good step sizes $\eta = \rho = 1$ for our algorithms in all cases unless otherwise specified. We choose $\beta = 1$ in all experiments, matching the setting in our theoretical analysis. We choose $T = \frac{N_{\mathrm{train}}}{b}$ for SVR-PDHG on both SC and non-SC problems, where $N_{\mathrm{train}}$ is the number of training samples. For ASVR-PDHG on SC problems, we choose the common momentum parameter $\theta = 0.9$ and the same number of inner iterations $T = \frac{N_{\mathrm{train}}}{b}$. For ASVR-PDHG on non-SC problems, we choose the initial momentum parameter $\theta_0 = 0.9$ and apply the adaptive epoch length strategy during the first 10 epochs. As for the compared algorithms, we use the same hyper-parameters as in the work [56] for ASVRG-ADMM, tune the parameters as in the work [20] for SVRG-ADMM, and likewise use grid search to choose the optimal step sizes $\eta = \rho = 1$ for SVRG-PDFP. Mini-batch sizes are chosen guided by theory and by the trade-off between time consumption and reasonably good performance. Specifically, we test the performance of our algorithms under different mini-batch sizes; the results are shown in Figures A1 and A2 in Appendix B. According to these figures, and considering the trade-off between time cost and loss, we set the mini-batch sizes to $b = 120$ for SC problems and $b = 15$ for non-SC problems.
We first solve the following non-SC graph-guided fused Lasso problem and SC graph-guided logistic regression problem in Sections 5.1–5.4:
$$\min_{x \in \mathbb{R}^d}\Big\{\frac{1}{n}\sum_{i=1}^n F_i(x) + \lambda_1\|Ax\|_1\Big\}, \qquad \min_{x \in \mathbb{R}^d}\Big\{\frac{1}{n}\sum_{i=1}^n F_i(x) + \frac{\lambda_2}{2}\|x\|^2 + \lambda_1\|Ax\|_1\Big\}, \tag{22}$$
where each $F_i(x) = \log(1 + \exp(-c_i a_i^\top x))$ is the logistic loss on the feature–label pair $(a_i, c_i)$, and $\lambda_1 \ge 0$ and $\lambda_2 \ge 0$ are two regularization parameters. Here, we set $A$ as described in the work [57]. The $\ell_1$-norm regularized minimization can be converted into Problem (1) by using $\lambda_1\|Ax\|_1 = \max_{\|y\|_\infty \le \lambda_1}\langle y, Ax\rangle$, where $\|\cdot\|_\infty$ is the maximum norm of a vector. Thus, the two problems in (22) can be converted into the following saddle-point problems, respectively:
$$\min_{x \in \mathbb{R}^d}\max_{\|y\|_\infty \le \lambda_1}\Big\{\frac{1}{n}\sum_{i=1}^n F_i(x) + \langle y, Ax\rangle\Big\}, \tag{23}$$
$$\min_{x \in \mathbb{R}^d}\max_{\|y\|_\infty \le \lambda_1}\Big\{\frac{1}{n}\sum_{i=1}^n F_i(x) + \frac{\lambda_2}{2}\|x\|^2 + \langle y, Ax\rangle\Big\}. \tag{24}$$
Here, the conjugate function $G^*(y) = 0$. We then consider the general case $G^*(y) \ne 0$ in Section 5.5. Lastly, we solve the non-SC multi-task learning problem in Section 5.6.
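As a quick numerical sanity check of the conjugacy identity $\lambda_1\|Ax\|_1 = \max_{\|y\|_\infty \le \lambda_1}\langle y, Ax\rangle$ used above (our own illustration; the maximizer is $y = \lambda_1\,\mathrm{sign}(Ax)$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
x = rng.standard_normal(5)
lam1 = 1e-5
Ax = A @ x
y_star = lam1 * np.sign(Ax)                       # maximizer on the l_inf ball
assert np.isclose(lam1 * np.abs(Ax).sum(), y_star @ Ax)
```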

5.1. Comparison of PDHG-Type Algorithms on Synthetic Datasets

In this subsection, to verify the advantages of our algorithms over PDHG-type algorithms, we first conduct experiments for solving Problems (23) and (24) on synthetic datasets. The synthetic datasets are generated as follows. Each sample $a_i$ is generated from i.i.d. standard Gaussian random variables and normalized according to $a_i = a_i/\max\{a_i\}$, and the corresponding label is obtained by $c_i = \mathrm{sign}(a_i^\top x_0 + \epsilon_i)$, where the vector $x_0$ is drawn from the $d$-dimensional standard normal distribution. The noise $\epsilon_i$ also comes from a normal distribution with mean 0 and standard deviation 0.01. For Problem (23), we set $\lambda_1 = 10^{-5}$; for Problem (24), we set $\lambda_1 = 10^{-5}$ and $\lambda_2 = 10^{-2}$.
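A minimal sketch of this generator (the shape parameters `n` and `d` are our own placeholders, not the paper's exact dataset sizes) might read:

```python
import numpy as np

def make_synthetic(n=1000, d=100, noise_std=0.01, seed=0):
    """Generate the synthetic dataset described above (a sketch under our assumptions)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    A /= A.max(axis=1, keepdims=True)          # a_i = a_i / max{a_i}, per sample
    x0 = rng.standard_normal(d)
    eps = rng.normal(0.0, noise_std, size=n)
    c = np.sign(A @ x0 + eps)                  # labels c_i = sign(a_i^T x0 + eps_i)
    return A, c
```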
Figure 1 shows the comparison between our algorithms and SPDHG for solving SC and non-SC problems on a small-scale synthetic dataset. The results show that the variance reduction methods, SVR-PDHG and ASVR-PDHG, converge noticeably faster than SPDHG, which is consistent with our improved theoretical convergence rates. ASVR-PDHG converges much faster than SVR-PDHG in the non-SC setting, which verifies that the momentum acceleration step significantly improves the convergence speed for non-SC problems. ASVR-PDHG also performs better than SVR-PDHG in the SC setting, which demonstrates the benefit of momentum acceleration for SC problems.

5.2. Comparison of PDHG-Type Algorithms on Real-World Datasets

To further verify the advantages of our algorithms compared with the SPDHG algorithm on real-world datasets, we also conduct experiments on bio, phy, w8a, epsilon, and epsilon_test datasets, which can be downloaded from the LIBSVM website (https://www.csie.ntu.edu.tw/~cjlin/libsvm/, accessed on 5 November 2021) and the KDDCUP website (http://osmot.cs.cornell.edu/kddcup/datasets.html, accessed on 5 November 2021). The detailed description of these datasets is shown in Table 2.
Due to similar experimental phenomena on the five real-world datasets in Table 2, we only report the results on the bio, phy, and epsilon datasets in this subsection. Figure 2 shows the experimental results of Algorithms 2 and 4 for solving the non-SC problem (23) with $\lambda_1 = 10^{-5}$. All the results show that our SVR-PDHG and ASVR-PDHG perform clearly better than their baseline, SPDHG. Moreover, our ASVR-PDHG consistently converges much faster than both SVR-PDHG and SPDHG in all cases, which verifies the effectiveness of our momentum trick for accelerating the variance reduced stochastic PDHG algorithm.
As for SC objectives, Figure 3 shows the experimental results of SVR-PDHG and ASVR-PDHG on the three real-world datasets, where $\lambda_1 = 10^{-5}$ and $\lambda_2 = 10^{-2}$. It can be observed that SVR-PDHG and ASVR-PDHG are superior to the baseline, SPDHG, by a significant margin in terms of both the number of passes through the data and CPU time, which also verifies our theoretical convergence results, i.e., the linear convergence rate. Note that the standard deviation of the results of our methods is relatively small, which implies that our algorithms are relatively stable.

5.3. Sparse and Asynchronous Parallel Setting

We also evaluate our algorithms in the sparse and asynchronous parallel setting. We consider $A = I$ in the non-SC Problem (23) to facilitate parallelism and select the regularization parameter $\lambda_1 = 10^{-5}$. The sparse datasets rcv1.small and real-sim are used to test our algorithms, and we choose $D_{i_t}$ as in the work [50]. We choose the single-thread algorithm as the baseline and compare the performance of all methods in terms of running time. We implement a parallel SPDHG ourselves by updating the support sets of the vectors $x_t$ and $y_t$ in parallel. All the algorithms in the asynchronous parallel setting are implemented in C++ with a Matlab interface, and the experimental results are shown in Figure 4.
All the experimental results in Figure 4 show that our SVR-PDHG and ASVR-PDHG significantly outperform SPDHG in terms of running time on both one thread and four threads. Our ASVR-PDHG method achieves more than 3× and 5× speedup over SVR-PDHG on one thread and four threads, respectively, which benefits from our momentum acceleration and adaptive epoch length strategy. Moreover, SVR-PDHG and ASVR-PDHG with four threads each achieve more than 3× speedup over their single-thread counterparts. These results indicate that the linear extrapolation step, the momentum acceleration trick, and the adaptive epoch length strategy are also suitable for large-scale machine learning problems in the sparse and asynchronous parallel setting, and that our asynchronous parallel algorithms achieve a speedup proportional to the number of threads.

5.4. Compared with State-of-the-Art Stochastic Methods

To illustrate the advantages of our methods over SOTA methods, we further conduct experiments on a large-scale synthetic dataset and real-world datasets. We again set $b = 120$ and $b = 15$ for SC and non-SC problems, respectively.
Figure 5 shows the experimental results of SPDHG, SVR-PDHG, ASVR-PDHG, SVRG-ADMM, and ASVRG-ADMM on a larger synthetic dataset. Our algorithms (SVR-PDHG and ASVR-PDHG) converge significantly faster than SPDHG. For SC problems, SVR-PDHG and ASVR-PDHG achieve an average speedup of 3× over SVRG-ADMM and ASVRG-ADMM, and 2× over SVRG-PDFP. For non-SC problems, ASVR-PDHG achieves an average speedup of at least 4× over the other algorithms, which benefits from updating fewer variables (no matrix Q) and from the momentum acceleration technique.
Due to limited space and similar experimental phenomena on the five real-world datasets in Table 2, we only report the results on the epsilon_test and w8a datasets. Figure 6 shows the comparison results. Our ASVR-PDHG almost always converges much faster than the stochastic ADMMs in all settings. For SC problems, although a linear convergence rate is obtained by all the algorithms, our algorithms achieve an average speedup of 2× over SVRG-ADMM and ASVRG-ADMM because we do not require the matrix Q of ADMM-type methods. Moreover, compared with SVRG-PDFP, our algorithms also converge significantly faster. For non-SC problems, our ASVR-PDHG achieves at least 3× speedup over the other stochastic algorithms.
We also compare our methods with well-known extragradient (EG) methods, including stochastic EG [58] (SEG), stochastic AG-EG [59] (SAG-EG), and stochastic variance reduced EG [60] (SEG-VR). We use the same initialization and choose the batch size and step sizes guided by theory, considering the trade-off between time consumption and accuracy while observing reasonably good performance. The experimental results on the phy, w8a, and epsilon datasets are shown in Figure 7. We observe that our proposed methods still converge faster than the EG-type methods. Especially for non-SC problems, our SVR-PDHG and ASVR-PDHG algorithms achieve at least 6× and 7× speedup over the EG-type methods, respectively, which benefits from variance reduction and our momentum acceleration technique.

5.5. Comparisons of Primal–Dual Algorithms When $G^*(y) \ne 0$

We further apply our algorithms in the general setting, i.e., $G^*(y) \ne 0$. We compare our methods with other primal–dual algorithms, namely the SEG [58], SAG-EG [59], SEG-VR [60], and SVRG-PDFP [46] methods, for solving the logistic regression problem with $\ell_2$ regularization:
$$\min_x\ \frac{1}{n}\sum_{i=1}^n F_i(x) + \frac{\lambda}{2}\|x\|^2. \tag{25}$$
Its primal–dual formulation is
$$\min_x\max_y\ \frac{1}{n}\sum_{i=1}^n F_i(x) + \lambda\langle x, y\rangle - \frac{\lambda}{2}\|y\|^2, \tag{26}$$
where $G^*(y) := \frac{\lambda}{2}\|y\|^2$ and $\lambda \ge 0$ is a regularization parameter. We use the same initialization $(x_0, y_0)$ and batch size for all the methods. The convergence results are shown in Figure 8.
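One practical consequence of this choice of $G^*$ is that the dual subproblem (6) admits a closed form. A small sketch we derived from (26) (our own illustration, not taken from the paper's code):

```python
import numpy as np

def dual_step_l2(y_prev, x_bar, lam, rho):
    """Closed-form dual update (6) for G*(y) = (lam/2)||y||^2 with coupling lam*<x, y>:
    maximizing lam*<y, x_bar> - (lam/2)||y||^2 - (1/(2*rho))||y - y_prev||^2 over y
    gives y = (rho*lam*x_bar + y_prev) / (rho*lam + 1)."""
    return (rho * lam * x_bar + y_prev) / (rho * lam + 1.0)
```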
From Figure 8, the experimental behavior is similar to the case of $G^*(y) = 0$. Specifically, our SVR-PDHG algorithm performs slightly better than the SEG and SVRG-PDFP methods and achieves a speedup of at least 6× over the other compared methods. Our ASVR-PDHG algorithm further improves on the convergence speed of SVR-PDHG, which again verifies the effectiveness of our momentum acceleration technique.

5.6. Multi-Task Learning

In this subsection, in order to verify the advantages of our algorithms for solving the matrix nuclear norm regularized problem, we conduct the following multi-task learning experiments. Here, the multi-task learning model can be described as follows:
$$\min_X\ \sum_{i=1}^N l_i(X) + \lambda_1\|X\|_*, \tag{27}$$
where $X \in \mathbb{R}^{d \times N}$, $N$ is the number of tasks, $l_i(X)$ is the logistic loss on the $i$-th task, and $\|\cdot\|_*$ is the nuclear norm.
To solve the above model, an auxiliary variable can be introduced: the original model is transformed into the equality-constrained problem $\min_X \sum_{i=1}^N l_i(X) + \lambda_1\|Y\|_*$, s.t. $X = Y$, which can be solved by stochastic ADMMs. To apply the primal–dual technique instead, the nuclear norm is rewritten as $\lambda_1\|X\|_* = \max_{\|Y\|_2 \le \lambda_1}\langle Y, X\rangle$, where $\|\cdot\|_2$ is the spectral norm of a matrix. In this way, the multi-task learning model can be formulated as the following saddle-point problem:
$$\min_X\max_{\|Y\|_2 \le \lambda_1}\ \sum_{i=1}^N l_i(X) + \langle Y, X\rangle. \tag{28}$$
Here, the conjugate function $G^*(Y) = 0$. This problem can also be solved by SPDHG, SVRG-PDFP, SVRG-ADMM, ASVRG-ADMM, and our algorithms.
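With $G^* \equiv 0$, the dual step (6) for Problem (28) amounts to a Euclidean projection onto the spectral-norm ball $\{Y : \|Y\|_2 \le \lambda_1\}$, which can be computed by clipping singular values. A minimal sketch under our notation above (our own illustration):

```python
import numpy as np

def project_spectral_ball(M, lam1):
    """Euclidean projection of M onto {Y : ||Y||_2 <= lam1} via singular value clipping.
    In the dual step (6), M would be Y_prev + rho * X_bar."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.clip(s, None, lam1)) @ Vt    # rescale columns of U by clipped s
```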
We compare the stochastic ADMMs, SVRG-PDFP, and our algorithms on the 20newsgroups dataset (available at https://github.com/jiayuzhou/MALSAR/tree/master/data, accessed on 5 November 2021), setting $\lambda_1 = 10^{-5}$ and the mini-batch size $b = 15$ for each task. The training loss (i.e., the training objective value minus the minimum value) and the test error are shown in Figure 9. It can be observed that ASVR-PDHG significantly outperforms the other algorithms in terms of both convergence speed and test error.

5.7. Non-Convex Support Vector Machines

We also compare the related methods on the Support Vector Machine (SVM) problem. Given a training set $S = \{(a_i, c_i)\}_{i=1}^n$, the non-convex $\ell_1$-norm penalized SVMs minimize the following penalized hinge loss:
$$\min_x \frac{1}{n}\sum_{i=1}^n \max\{0, 1 - c_i x^\top a_i\} + \lambda \|Ax\|_1 .$$
In the same way, the SVM problem can be formulated as a saddle-point problem:
$$\min_x \max_{\|y\|_\infty \le \lambda} \frac{1}{n}\sum_{i=1}^n \max\{0, 1 - c_i x^\top a_i\} + \langle y, Ax \rangle .$$
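The constraint set in this saddle-point form comes from the standard $\ell_1$–$\ell_\infty$ duality; a one-line check (ours) reads
$$\lambda \|Ax\|_1 = \max_{\|y\|_\infty \le \lambda} \langle y, Ax \rangle , \qquad \text{attained at } y_j = \lambda\,\mathrm{sign}\big((Ax)_j\big).$$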
We compare the ASVRG-ADMM, SVRG-PDFP, SAG-EG [59], and SPDM [61] algorithms on a synthetic dataset and the phy dataset, setting $\lambda = 10^{-5}$ and the mini-batch size $b = 15$. The remaining hyper-parameters are chosen as guided by theory, balancing runtime against accuracy. The experimental results are shown in Figure 10. When solving non-convex problems, our ASVR-PDHG algorithm still maintains a clear advantage.
Limitations. Our algorithms are proved to achieve the stated convergence rates only under convexity assumptions. For non-convex problems, although their convergence properties are unknown, they achieve better experimental performance than some state-of-the-art methods, as shown in Figure 10. For complex non-convex problems such as training deep networks, the gradient computation and acceleration steps may increase the computational cost; however, since the batch size can be chosen as b = 1 and only vector additions are performed, the extra cost remains modest and still below that of ADMM-type algorithms. We will study the convergence properties of non-convex primal–dual problems in future work.

6. Conclusions and Future Work

In this paper, we proposed a stochastic variance reduced primal–dual hybrid gradient method (SVR-PDHG) and an accelerated variant (ASVR-PDHG) for saddle-point problems. Compared with stochastic ADMMs, our algorithms have simpler structures. A new adaptive epoch length strategy was proposed to remove the extra boundedness assumption for non-SC problems. We theoretically established the linear convergence of our methods for SC problems without requiring the strong convexity of $G^*(\cdot)$. In particular, we rigorously proved that the convergence rates of SVR-PDHG and ASVR-PDHG are $\mathcal{O}(1/S)$ and $\mathcal{O}(1/S^2)$ for non-SC problems, respectively. As a by-product, we extended our algorithms to asynchronous parallel settings for non-SC problems, and experiments verified that our parallel algorithms are well suited to large-scale sparse non-SC machine learning problems. Finally, various experimental results verified that our ASVR-PDHG consistently converges much faster than the existing stochastic methods.
Beyond the machine learning problems discussed above, we can extend our algorithms to image processing problems [34] and differentially private problems [62,63] in future work. Another interesting direction is to study the theoretical properties of our asynchronous parallel primal–dual algorithms.

Author Contributions

Conceptualization, W.A., Y.L. and F.S.; methodology, W.A. and Y.L.; software, W.A.; validation, W.A.; formal analysis, W.A. and Y.L.; investigation, W.A.; resources, W.A. and Y.L.; data curation, W.A.; writing—original draft preparation, W.A.; writing—review and editing, W.A., Y.L., F.S. and H.L.; visualization, W.A.; supervision, Y.L., F.S. and H.L.; project administration, Y.L., F.S. and H.L.; funding acquisition, Y.L., F.S. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 62276182, Peng Cheng Lab Program grant number PCL2023A08, Tianjin Natural Science Foundation grant numbers 24JCYBJC01230, 24JCYBJC01460, and Tianjin Municipal Education Commission Research Plan grant number 2024ZX008.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Theoretical Analysis

In this appendix, we give the detailed proofs of the lemmas, properties, and theorems for our algorithms. In Appendix A.1, we provide our SVR-PDHG algorithm for solving non-SC objectives in the asynchronous parallel setting (Algorithm A1), as well as SVR-PDHG and ASVR-PDHG for solving SC objectives in this setting (Algorithms A2 and A3). In Appendix A.2, we give some useful lemmas (Lemmas A1–A3) and properties (Properties A1–A4) together with their detailed proofs. In Appendix A.3, we provide a one-epoch analysis for our SVR-PDHG and ASVR-PDHG methods via Lemmas 1 and 2, respectively. Based on Lemmas 1 and 2, we analyze the convergence properties (Theorems 1–4) of SVR-PDHG and ASVR-PDHG in Appendix A.4.1–Appendix A.4.4 of Appendix A.4.

Appendix A.1. Our Algorithms in the Sparse and Asynchronous Parallel Setting

According to our analysis, our SVR-PDHG and ASVR-PDHG can also be easily extended to asynchronous parallel settings for solving SC problems. Considering A = I and b = 1, the asynchronous parallel variants of our SVR-PDHG and ASVR-PDHG algorithms for solving SC problems are shown in Algorithms A2 and A3, respectively.

Appendix A.2. Some Key Properties and Lemmas

Proof of Property 2. 
For any $x \in \mathbb{R}^d$ and $y \in \mathcal{Y}$, we have
$$\begin{aligned} F(x) - F(x^*) - \langle \nabla F(x^*), x - x^* \rangle &\le F(x) - F(x^*) + \langle A^\top y^*, x - x^* \rangle \\ &= F(x) - F(x^*) - \langle y, Ax^* \rangle + \langle y^*, Ax \rangle - \langle y^* - y, Ax^* \rangle \\ &\le F(x) - F(x^*) - \langle y, Ax^* \rangle + \langle y^*, Ax \rangle + \langle y - y^*, \partial G^*(y^*) \rangle \\ &\le F(x) - F(x^*) - \langle y, Ax^* \rangle + \langle y^*, Ax \rangle + G^*(y) - G^*(y^*) = \mathcal{T}(x, y), \end{aligned}$$
where the first inequality holds due to the optimality condition of the primal variable, i.e., $\langle \nabla F(x^*) + A^\top y^*, x - x^* \rangle \ge 0$, the second inequality holds due to the optimality condition of the dual variable, i.e., $\langle \partial G^*(y^*) - Ax^*, y - y^* \rangle \ge 0$, and the last inequality holds due to the convexity of $G^*(\cdot)$, i.e., $\langle \partial G^*(y^*), y - y^* \rangle \le G^*(y) - G^*(y^*)$.    □
Algorithm A1 SVR-PDHG for non-SC objectives in the sparse and asynchronous parallel setting.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0 = x_T^0$, $\hat{y}^0 = \tilde{y}^0 = y_T^0 = 0$, p threads.
 1: for $s = 1, 2, \ldots, S$ do
 2:   Read the current value of $\tilde{x}^{s-1}$ from the shared memory, and let all threads compute the full gradient $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$ in parallel; $\bar{x}_0^s = x_0^s = x_T^{s-1}$, $y_0^s = y_T^{s-1}$;
 3:   $t = 0$; // inner loop counter
 4:   while $t < T$ in parallel do
 5:     $t = t + 1$; // atomic increase of the counter
 6:     Choose $i_t$ uniformly at random from $\{1, 2, \ldots, n\}$;
 7:     $s_{i_t} :=$ the support of sample $i_t$;
 8:     Inconsistent read of $[x_{t-1}^s]_{s_{i_t}}$;
 9:     $[u]_{s_{i_t}} = \nabla F_{i_t}([x_{t-1}^s]_{s_{i_t}}) - \nabla F_{i_t}([\tilde{x}^{s-1}]_{s_{i_t}}) + [D_{i_t}\tilde{p}]_{s_{i_t}}$;
10:     $[y_t^s]_{s_{i_t}} = \arg\max_{[y]_{s_{i_t}}} \{ \langle [y]_{s_{i_t}}, [\bar{x}_{t-1}^s]_{s_{i_t}} \rangle - G^*(y) - \frac{1}{2\rho}\|[y]_{s_{i_t}} - [y_{t-1}^s]_{s_{i_t}}\|^2 \}$;
11:     $[x_t^s]_{s_{i_t}} = [x_{t-1}^s]_{s_{i_t}} - \eta([y_t^s]_{s_{i_t}} + [u]_{s_{i_t}})$; $[\bar{x}_t^s]_{s_{i_t}} = [x_t^s]_{s_{i_t}} + \beta([x_t^s]_{s_{i_t}} - [x_{t-1}^s]_{s_{i_t}})$;
12:   end while
13:   $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = \frac{1}{T}\sum_{t=1}^T y_t^s$;
14: end for
Output: $\hat{x}^S = \frac{1}{S}\sum_{s=1}^S \tilde{x}^s$, $\hat{y}^S = \frac{1}{S}\sum_{s=1}^S \tilde{y}^s$.
Algorithm A2 SVR-PDHG for SC objectives in the sparse and asynchronous parallel setting.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0$, p threads.
 1: for $s = 1, 2, \ldots, S$ do
 2:   Read the current value of $\tilde{x}^{s-1}$ from the shared memory, and let all threads compute the full gradient $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$ in parallel; $y_0^s = -\nabla F(\tilde{x}^{s-1})$, $\bar{x}_0^s = x_0^s = \tilde{x}^{s-1}$;
 3:   $t = 0$; // inner loop counter
 4:   while $t < T$ in parallel do
 5:     $t = t + 1$; // atomic increase of the counter
 6:     Choose $i_t$ uniformly at random from $\{1, 2, \ldots, n\}$;
 7:     $s_{i_t} :=$ the support of sample $i_t$;
 8:     Inconsistent read of $[x_{t-1}^s]_{s_{i_t}}$;
 9:     $[u]_{s_{i_t}} = \nabla F_{i_t}([x_{t-1}^s]_{s_{i_t}}) - \nabla F_{i_t}([\tilde{x}^{s-1}]_{s_{i_t}}) + [D_{i_t}\tilde{p}]_{s_{i_t}}$;
10:     $[y_t^s]_{s_{i_t}} = \arg\max_{[y]_{s_{i_t}}} \{ \langle [y]_{s_{i_t}}, [\bar{x}_{t-1}^s]_{s_{i_t}} \rangle - G^*(y) - \frac{1}{2\rho}\|[y]_{s_{i_t}} - [y_{t-1}^s]_{s_{i_t}}\|^2 \}$;
11:     $[x_t^s]_{s_{i_t}} = [x_{t-1}^s]_{s_{i_t}} - \eta([y_t^s]_{s_{i_t}} + [u]_{s_{i_t}})$; $[\bar{x}_t^s]_{s_{i_t}} = [x_t^s]_{s_{i_t}} + \beta([x_t^s]_{s_{i_t}} - [x_{t-1}^s]_{s_{i_t}})$;
12:   end while
13:   $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = \frac{1}{T}\sum_{t=1}^T y_t^s$;
14: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
Algorithm A3 ASVR-PDHG for SC objectives in the sparse and asynchronous parallel setting.
Input: $T$, $\rho$, $\eta$, $1 \le b \le n$, $0 < \beta \le 1$.
Initialize: $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0 = \tilde{y}^0$, $0 < \theta \le 1 - \frac{\alpha(b)L\eta}{1-L\eta}$, p threads.
 1: for $s = 1, 2, \ldots, S$ do
 2:   Read the current value of $\tilde{x}^{s-1}$ from the shared memory, and let all threads compute the full gradient $\tilde{p} = \frac{1}{n}\sum_{i=1}^n \nabla F_i(\tilde{x}^{s-1})$ in parallel; $x_0^s = \tilde{x}^{s-1}$, $\bar{z}_0^s = z_0^s = \tilde{x}^{s-1}$, $y_0^s = -\nabla F(\tilde{x}^{s-1})$;
 3:   $t = 0$; // inner loop counter
 4:   while $t < T$ in parallel do
 5:     $t = t + 1$; // atomic increase of the counter
 6:     Choose $i_t$ uniformly at random from $\{1, 2, \ldots, n\}$;
 7:     $s_{i_t} :=$ the support of sample $i_t$;
 8:     Inconsistent read of $[x_{t-1}^s]_{s_{i_t}}$;
 9:     $[u]_{s_{i_t}} = \nabla F_{i_t}([x_{t-1}^s]_{s_{i_t}}) - \nabla F_{i_t}([\tilde{x}^{s-1}]_{s_{i_t}}) + [D_{i_t}\tilde{p}]_{s_{i_t}}$;
10:     $[y_t^s]_{s_{i_t}} = \arg\max_{[y]_{s_{i_t}}} \{ \langle [y]_{s_{i_t}}, [\bar{z}_{t-1}^s]_{s_{i_t}} \rangle - G^*(y) - \frac{1}{2\rho\theta}\|[y]_{s_{i_t}} - [y_{t-1}^s]_{s_{i_t}}\|^2 \}$;
11:     $[z_t^s]_{s_{i_t}} = [z_{t-1}^s]_{s_{i_t}} - \frac{\eta}{\theta}([y_t^s]_{s_{i_t}} + [u]_{s_{i_t}})$; $[x_t^s]_{s_{i_t}} = [\tilde{x}^{s-1}]_{s_{i_t}} + \theta([z_t^s]_{s_{i_t}} - [\tilde{x}^{s-1}]_{s_{i_t}})$; $[\bar{z}_t^s]_{s_{i_t}} = [z_t^s]_{s_{i_t}} + \beta([z_t^s]_{s_{i_t}} - [z_{t-1}^s]_{s_{i_t}})$;
12:   end while
13:   $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, $\tilde{y}^s = (1-\theta)\tilde{y}^{s-1} + \frac{\theta}{T}\sum_{t=1}^T y_t^s$;
14: end for
Output: $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$.
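For concreteness, the following is a minimal serial Python sketch (our illustration only, not the parallel implementation used in the experiments) of one epoch of the SVR-PDHG update, specialized to $A = I$, $b = 1$, and $G(z) = \lambda_1\|z\|_1$, so that the y-subproblem reduces to a clipped ascent step onto $\{y : \|y\|_\infty \le \lambda_1\}$; the per-sample gradient oracle `grad_i` is a hypothetical placeholder supplied by the user.

```python
import numpy as np

def svr_pdhg_epoch(x_tilde, y, grad_i, n, T, eta, rho, lam1, beta=1.0, rng=None):
    """One outer epoch of a serial SVR-PDHG sketch with A = I, b = 1,
    and G(z) = lam1 * ||z||_1, so G* is the indicator of ||y||_inf <= lam1."""
    rng = rng if rng is not None else np.random.default_rng()
    # Full gradient at the snapshot point (the variance-reduction anchor p~).
    p_tilde = sum(grad_i(i, x_tilde) for i in range(n)) / n
    x = x_tilde.copy()
    x_bar = x.copy()
    x_sum = np.zeros_like(x)
    y_sum = np.zeros_like(y)
    for _ in range(T):
        i = int(rng.integers(n))
        # Semi-stochastic (variance-reduced) gradient estimate u.
        u = grad_i(i, x) - grad_i(i, x_tilde) + p_tilde
        # y-subproblem: argmax_y <y, x_bar> - (1/(2*rho))||y - y_old||^2
        # over ||y||_inf <= lam1, i.e., a clipped ascent step.
        y = np.clip(y + rho * x_bar, -lam1, lam1)
        # Primal step followed by the linear extrapolation step.
        x_new = x - eta * (y + u)
        x_bar = x_new + beta * (x_new - x)
        x = x_new
        x_sum += x
        y_sum += y
    # Snapshot averages (x_tilde^s, y_tilde^s) for the next epoch.
    return x_sum / T, y_sum / T
```

Under these assumptions, each inner iteration costs only two component-gradient evaluations and a few vector operations, which matches the low per-iteration complexity discussed in the main text.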
Property A1. 
Since the matrix A has full row rank, we have
$$y^* = -(A^\top)^\dagger \nabla F(x^*).$$
Property A2. 
For the y-subproblem in Algorithm 1 or 2 and its optimal solution $y_t^s$, we have
$$\Big\langle -A\bar{x}_{t-1}^s + \partial G^*(y_t^s) + \tfrac{1}{\rho}(y_t^s - y_{t-1}^s),\, y - y_t^s \Big\rangle \ge 0, \quad \text{for any } y \in \mathcal{Y},$$
where $\mathcal{Y}$ is a convex compact set.
Property A3. 
For the y-subproblem in Algorithm 3 or 4 and its optimal solution $y_t^s$, we have
$$\Big\langle -A\bar{z}_{t-1}^s + \partial G^*(y_t^s) + \tfrac{1}{\rho\theta_{s-1}}(y_t^s - y_{t-1}^s),\, y - y_t^s \Big\rangle \ge 0, \quad \text{for any } y \in \mathcal{Y},$$
where $\mathcal{Y}$ is a convex compact set.
Property A4. 
Given any $a, b, c \in \mathbb{R}^d$, we have
$$\langle a - b, a - c \rangle = \tfrac{1}{2}\big(\|a - b\|^2 + \|a - c\|^2 - \|b - c\|^2\big).$$
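This three-point identity can be verified by direct expansion (included here for completeness):
$$\|b - c\|^2 = \|(a - c) - (a - b)\|^2 = \|a - c\|^2 + \|a - b\|^2 - 2\langle a - b, a - c \rangle.$$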
Lemma A1 
(Variance Upper Bound [23]). For the mini-batch semi-stochastic gradient
$$\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \tfrac{1}{b}\sum_{i_t \in I_t}\big[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})\big] + \nabla F(\tilde{x}^{s-1}),$$
we have
$$\mathbb{E}\big\|\tilde{\nabla} F_{I_t}(x_{t-1}^s) - \nabla F(x_{t-1}^s)\big\|^2 \le 4L\alpha(b)\big[F(x_{t-1}^s) - F(x^*) - \langle \nabla F(x^*), x_{t-1}^s - x^* \rangle + F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big],$$
where $\alpha(b) = \frac{n-b}{b(n-1)} \le 1$, $1 \le b \le n$, and $L := \max_i L_i$.
Lemma A2 
(Improved Variance Upper Bound [24]). For the mini-batch semi-stochastic gradient $\tilde{\nabla} F_{I_t}(x_{t-1}^s) = \frac{1}{b}\sum_{i_t \in I_t}[\nabla F_{i_t}(x_{t-1}^s) - \nabla F_{i_t}(\tilde{x}^{s-1})] + \nabla F(\tilde{x}^{s-1})$, we have
$$\mathbb{E}\big\|\tilde{\nabla} F_{I_t}(x_{t-1}^s) - \nabla F(x_{t-1}^s)\big\|^2 \le 2L\alpha(b)\big[F(\tilde{x}^{s-1}) - F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_{t-1}^s - \tilde{x}^{s-1} \rangle\big],$$
where $\alpha(b) = \frac{n-b}{b(n-1)} \le 1$, $1 \le b \le n$, and $L := \max_i L_i$.
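As a quick numerical illustration (ours, not part of the original analysis), the snippet below checks on a random least-squares instance that the semi-stochastic gradient is unbiased, i.e., its average over the sampled index equals the full gradient; all problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10                           # arbitrary problem sizes
A_data = rng.standard_normal((n, d))
b_data = rng.standard_normal(n)

def grad_i(i, x):                        # gradient of F_i(x) = 0.5*(a_i^T x - b_i)^2
    a = A_data[i]
    return (a @ x - b_data[i]) * a

def full_grad(x):                        # gradient of F(x) = (1/n) sum_i F_i(x)
    return A_data.T @ (A_data @ x - b_data) / n

x_snap = rng.standard_normal(d)                  # snapshot point x_tilde^{s-1}
x_cur = x_snap + 0.1 * rng.standard_normal(d)    # current iterate x_{t-1}^s
p_tilde = full_grad(x_snap)

# Enumerate all n possible draws of the index to compute the exact expectation.
estimates = np.array([grad_i(i, x_cur) - grad_i(i, x_snap) + p_tilde
                      for i in range(n)])
print(np.allclose(estimates.mean(axis=0), full_grad(x_cur)))          # True: unbiased
print(np.mean(np.sum((estimates - full_grad(x_cur)) ** 2, axis=1)))   # variance term
```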
Lemma A3. 
Let $y_0^s = -(A^\top)^\dagger \nabla F(\tilde{x}^{s-1})$ and $y^* = -(A^\top)^\dagger \nabla F(x^*)$; then
$$\|y_0^s - y^*\|^2 \le \tfrac{2L_F}{\sigma_{\min}(AA^\top)}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big].$$
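A short justification (our sketch): since $y_0^s - y^* = -(A^\top)^\dagger\big(\nabla F(\tilde{x}^{s-1}) - \nabla F(x^*)\big)$ and $\|(A^\top)^\dagger v\|^2 \le \|v\|^2 / \sigma_{\min}(AA^\top)$ for A with full row rank, the claim follows from the co-coercivity of the $L_F$-smooth convex function F:
$$\|\nabla F(\tilde{x}^{s-1}) - \nabla F(x^*)\|^2 \le 2L_F\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big].$$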

Appendix A.3. Proofs of Lemmas 1 and 2

Appendix A.3.1. Proof of Lemma 1

Proof. 
Since the function F is convex and differentiable with an $L_F$-Lipschitz-continuous gradient, where $L_F \le L = \max_{i=1,\ldots,n} L_i$, we have
$$\begin{aligned} F(x_t^s) &\le F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{L}{2}\|x_{t-1}^s - x_t^s\|^2 \\ &\le F(x^*) - \langle \nabla F(x_{t-1}^s), x^* - x_{t-1}^s \rangle + \langle \nabla F(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{L}{2}\|x_{t-1}^s - x_t^s\|^2 \\ &= F(x^*) - \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s) - \nabla F(x_{t-1}^s), x_t^s - x^* \rangle - \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x^* - x_t^s \rangle + \tfrac{L}{2}\|x_{t-1}^s - x_t^s\|^2, \end{aligned}$$
where the second inequality holds due to the convexity of the function F.
For the optimal solution $x_t^s$ of the x-subproblem in SVR-PDHG, the first-order optimality condition is
$$\Big\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s) + A^\top y_t^s + \tfrac{1}{\eta}(x_t^s - x_{t-1}^s),\, x - x_t^s \Big\rangle \ge 0, \quad \text{for any } x \in \mathbb{R}^d.$$
Setting $x = x^*$ in the above inequality, we obtain
$$\begin{aligned} \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x^* \rangle &\le \Big\langle A^\top y_t^s + \tfrac{1}{\eta}(x_t^s - x_{t-1}^s), x^* - x_t^s \Big\rangle = \langle A^\top y_t^s, x^* - x_t^s \rangle + \tfrac{1}{\eta}\langle x_t^s - x_{t-1}^s, x^* - x_t^s \rangle \\ &\overset{(a)}{=} \langle A^\top y_t^s, x^* - x_t^s \rangle + \tfrac{1}{2\eta}\big[\|x^* - x_{t-1}^s\|^2 - \|x^* - x_t^s\|^2 - \|x_{t-1}^s - x_t^s\|^2\big], \end{aligned}$$
where the equality $\overset{(a)}{=}$ holds due to Property A4. Taking the expectation over the random choice of $I_t$ and substituting inequality (A10) into inequality (A9) with $\eta \le \frac{1}{9L} < \frac{1}{L}$, we have
$$\mathbb{E}[F(x_t^s)] - F(x^*) \le \mathbb{E}\langle \nabla F(x_{t-1}^s) - \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x^* \rangle + \mathbb{E}\langle A^\top y_t^s, x^* - x_t^s \rangle + \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_{t-1}^s\|^2 - \|x^* - x_t^s\|^2\big] - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{t-1}^s - x_t^s\|^2.$$
According to the bound on the last term in Equation (4) in the appendix of the work [23], we obtain
$$\mathbb{E}\langle \nabla F(x_{t-1}^s) - \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x^* \rangle \le \eta\,\mathbb{E}\big\|\tilde{\nabla} F_{I_t}(x_{t-1}^s) - \nabla F(x_{t-1}^s)\big\|^2.$$
Substituting inequality (A12) into inequality (A11) and taking the expectation over $I_t$ for $t = 1, \ldots, T$ at the s-th epoch, we obtain
$$\mathbb{E}\big[F(x_t^s) - F(x^*) - \langle A^\top y_t^s, x^* - x_t^s \rangle\big] \le \tfrac{1}{2\eta}\mathbb{E}\|x_{t-1}^s - x^*\|^2 - \tfrac{1}{2\eta}\mathbb{E}\|x_t^s - x^*\|^2 + \eta\,\mathbb{E}\big\|\tilde{\nabla} F_{I_t}(x_{t-1}^s) - \nabla F(x_{t-1}^s)\big\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{t-1}^s - x_t^s\|^2.$$
Using Lemma A1, we have
$$\begin{aligned} \mathbb{E}\big[F(x_t^s) - F(x^*) - \langle A^\top y_t^s, x^* - x_t^s \rangle\big] \le{}& \tfrac{1}{2\eta}\mathbb{E}\|x_{t-1}^s - x^*\|^2 - \tfrac{1}{2\eta}\mathbb{E}\|x_t^s - x^*\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{t-1}^s - x_t^s\|^2 \\ & + 4L\eta\alpha(b)\big[F(x_{t-1}^s) - F(x^*) - \langle \nabla F(x^*), x_{t-1}^s - x^* \rangle\big] \\ & + 4L\eta\alpha(b)\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big]. \end{aligned}$$
Setting $y = y^*$ in Property A2, we obtain
$$\langle -A\bar{x}_{t-1}^s + \partial G^*(y_t^s), y_t^s - y^* \rangle \le \tfrac{1}{\rho}\langle y_{t-1}^s - y_t^s, y_t^s - y^* \rangle.$$
Then, we have
$$\begin{aligned} \langle -Ax_t^s, y_t^s - y^* \rangle + G^*(y_t^s) - G^*(y^*) &\le \langle -Ax_t^s, y_t^s - y^* \rangle + \langle \partial G^*(y_t^s), y_t^s - y^* \rangle \\ &= \langle -A\bar{x}_{t-1}^s, y_t^s - y^* \rangle + \langle \partial G^*(y_t^s), y_t^s - y^* \rangle + \langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^* \rangle \\ &\le \tfrac{1}{\rho}\langle y_{t-1}^s - y_t^s, y_t^s - y^* \rangle + \langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^* \rangle \\ &= \tfrac{1}{2\rho}\big[\|y_{t-1}^s - y^*\|^2 - \|y_t^s - y^*\|^2 - \|y_t^s - y_{t-1}^s\|^2\big] + \langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^* \rangle, \end{aligned}$$
where the first inequality follows from the convexity of the function $G^*$, the second inequality holds due to Property A2, and the last equality holds due to Property A4.
According to the iteration $\bar{x}_t^s = x_t^s + \beta(x_t^s - x_{t-1}^s)$ with $\beta = 1$, we have
$$\begin{aligned} \langle A(\bar{x}_{t-1}^s - x_t^s), y_t^s - y^* \rangle &= \langle A(x_{t-1}^s + (x_{t-1}^s - x_{t-2}^s) - x_t^s), y_t^s - y^* \rangle \\ &= \langle A(x_{t-1}^s - x_t^s), y_t^s - y^* \rangle + \langle A(x_{t-1}^s - x_{t-2}^s), y_{t-1}^s - y^* \rangle + \langle A(x_{t-1}^s - x_{t-2}^s), y_t^s - y_{t-1}^s \rangle \\ &\le \langle A(x_{t-1}^s - x_t^s), y_t^s - y^* \rangle + \langle A(x_{t-1}^s - x_{t-2}^s), y_{t-1}^s - y^* \rangle + \tfrac{M\gamma\eta}{2\eta}\|x_{t-1}^s - x_{t-2}^s\|^2 + \tfrac{M\rho}{2\gamma\rho}\|y_t^s - y_{t-1}^s\|^2, \end{aligned}$$
where $M = \|A\|$ and the last inequality holds due to Young's inequality. Choosing $M\rho \le \gamma \le \frac{1-L\eta}{M\eta}$ so that $\frac{M\eta\gamma}{2\eta} \le \frac{1-L\eta}{2\eta}$ and $\frac{M\rho}{2\gamma\rho} \le \frac{1}{2\rho}$, and then combining inequalities (A16), (A17), and (A14), we obtain
$$\begin{aligned} &\mathbb{E}\big[F(x_t^s) - F(x^*) - \langle A^\top y_t^s, x^* - x_t^s \rangle - \langle Ax_t^s, y_t^s - y^* \rangle + G^*(y_t^s) - G^*(y^*)\big] \\ &\quad = \mathbb{E}\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle + G^*(y_t^s) - G^*(y^*)\big] \\ &\quad \le \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_{t-1}^s\|^2 - \|x^* - x_t^s\|^2\big] + \tfrac{M\eta\gamma}{2\eta}\mathbb{E}\|x_{t-1}^s - x_{t-2}^s\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{t-1}^s - x_t^s\|^2 \\ &\qquad + 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_{t-1}^s) - F(x^*) - \langle \nabla F(x^*), x_{t-1}^s - x^* \rangle\big] + 4L\eta\alpha(b)\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] \\ &\qquad + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_{t-1}^s - y^*\|^2 - \|y_t^s - y^*\|^2\big] + \tfrac{M\rho}{2\gamma\rho}\mathbb{E}\|y_t^s - y_{t-1}^s\|^2 - \tfrac{1}{2\rho}\mathbb{E}\|y_t^s - y_{t-1}^s\|^2 \\ &\qquad + \mathbb{E}\langle A(x_{t-1}^s - x_{t-2}^s), y_{t-1}^s - y^* \rangle - \mathbb{E}\langle A(x_t^s - x_{t-1}^s), y_t^s - y^* \rangle. \end{aligned}$$
Summing the above inequality over $t = 1, 2, \ldots, T$ at the s-th epoch, we have
$$\begin{aligned} &\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle + G^*(y_t^s) - G^*(y^*)\big] \\ &\quad \le \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2\big] + \tfrac{M\eta\gamma}{2\eta}\mathbb{E}\|x_0^s - x_{-1}^s\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{T-1}^s - x_T^s\|^2 \\ &\qquad + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big] + \mathbb{E}\langle A(x_0^s - x_{-1}^s), y_0^s - y^* \rangle - \mathbb{E}\langle A(x_T^s - x_{T-1}^s), y_T^s - y^* \rangle \\ &\qquad + 4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T\big[F(x_{t-1}^s) - F(x^*) - \langle \nabla F(x^*), x_{t-1}^s - x^* \rangle\big] + 4L\eta T\alpha(b)\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big]. \end{aligned}$$
Subtracting $4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T[F(x_t^s) - F(x^*) - \langle \nabla F(x^*), x_t^s - x^* \rangle]$ from both sides of the above inequality, we obtain
$$\begin{aligned} &\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle + G^*(y_t^s) - G^*(y^*)\big] - 4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle \nabla F(x^*), x_t^s - x^* \rangle\big] \\ &\quad \le \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2\big] + \tfrac{M\eta\gamma}{2\eta}\mathbb{E}\|x_0^s - x_{-1}^s\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{T-1}^s - x_T^s\|^2 \\ &\qquad + \mathbb{E}\langle A(x_0^s - x_{-1}^s), y_0^s - y^* \rangle - \mathbb{E}\langle A(x_T^s - x_{T-1}^s), y_T^s - y^* \rangle \\ &\qquad + 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_0^s) - F(x^*) - \langle \nabla F(x^*), x_0^s - x^* \rangle\big] - 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_T^s) - F(x^*) - \langle \nabla F(x^*), x_T^s - x^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)T\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big]. \end{aligned}$$
By the convexity of F, i.e., $F(\sum_{i=1}^n a_i x_i) \le \sum_{i=1}^n a_i F(x_i)$ with $a_i \ge 0$ and $\sum_{i=1}^n a_i = 1$, and the update rules $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$ and $\tilde{y}^s = \frac{1}{T}\sum_{t=1}^T y_t^s$, the left-hand side of the above inequality satisfies
$$\begin{aligned} &\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle + G^*(y_t^s) - G^*(y^*)\big] - 4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle \nabla F(x^*), x_t^s - x^* \rangle\big] \\ &\quad = (1 - 4L\eta\alpha(b))\,\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*)\big] + \mathbb{E}\sum_{t=1}^T\big[\langle y^*, Ax_t^s \rangle - \langle y_t^s, Ax^* \rangle + G^*(y_t^s) - G^*(y^*)\big] - 4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T\langle A^\top y^*, x_t^s - x^* \rangle \\ &\quad = (1 - 4L\eta\alpha(b))\,\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle\big] + \mathbb{E}\sum_{t=1}^T\big[G^*(y_t^s) - G^*(y^*)\big] + 4L\eta\alpha(b)\,\mathbb{E}\sum_{t=1}^T\langle Ax^*, y^* - y_t^s \rangle \\ &\quad \ge (1 - 4L\eta\alpha(b))\,\mathbb{E}\sum_{t=1}^T\big[F(x_t^s) - F(x^*) - \langle y_t^s, Ax^* \rangle + \langle y^*, Ax_t^s \rangle + G^*(y_t^s) - G^*(y^*)\big] \\ &\quad \ge (1 - 4L\eta\alpha(b))\,T\,\mathbb{E}\big[F(\tilde{x}^s) - F(x^*) - \langle \tilde{y}^s, Ax^* \rangle + \langle y^*, A\tilde{x}^s \rangle + G^*(\tilde{y}^s) - G^*(y^*)\big], \end{aligned}$$
where the first equality holds due to the optimality condition $\nabla F(x^*) + A^\top y^* = 0$, and the first inequality holds due to the optimality condition $Ax^* \in \partial G^*(y^*)$ and the convexity of $G^*$ on the convex set $\mathcal{Y}$ with $y_t^s, y^* \in \mathcal{Y}$; thus, $\langle Ax^*, y^* - y_t^s \rangle = \langle \partial G^*(y^*), y^* - y_t^s \rangle \ge G^*(y^*) - G^*(y_t^s)$. The last inequality holds due to the convexity of $G^*$ and F. Then, we obtain
$$\begin{aligned} &(1 - 4L\eta\alpha(b))\,T\,\mathbb{E}\big[F(\tilde{x}^s) - F(x^*) - \langle \tilde{y}^s, Ax^* \rangle + \langle y^*, A\tilde{x}^s \rangle + G^*(\tilde{y}^s) - G^*(y^*)\big] \\ &\quad \le \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2\big] + \tfrac{M\eta\gamma}{2\eta}\mathbb{E}\|x_0^s - x_{-1}^s\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{T-1}^s - x_T^s\|^2 \\ &\qquad + \mathbb{E}\big[\langle A(x_0^s - x_{-1}^s), y_0^s - y^* \rangle - \langle A(x_T^s - x_{T-1}^s), y_T^s - y^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_0^s) - F(x^*) - \langle \nabla F(x^*), x_0^s - x^* \rangle\big] - 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_T^s) - F(x^*) - \langle \nabla F(x^*), x_T^s - x^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)T\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big]. \end{aligned}$$
This completes the proof. □

Appendix A.3.2. Proof of Lemma 2

Proof. 
Since the function F is convex and differentiable with an $L_F$-Lipschitz-continuous gradient, where $L_F \le L = \max_{i=1,\ldots,n} L_i$, we have
$$F(x_t^s) \le F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{L}{2}\|x_t^s - x_{t-1}^s\|^2 = F(x_{t-1}^s) + \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \langle \nabla F(x_{t-1}^s) - \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{L}{2}\|x_t^s - x_{t-1}^s\|^2.$$
Using Lemma A2, we obtain
$$\begin{aligned} &\mathbb{E}\Big[\langle \nabla F(x_{t-1}^s) - \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{L}{2}\|x_t^s - x_{t-1}^s\|^2\Big] \\ &\quad \le \mathbb{E}\Big[\tfrac{L\eta}{2L(1-2L\eta)}\big\|\nabla F(x_{t-1}^s) - \tilde{\nabla} F_{I_t}(x_{t-1}^s)\big\|^2 + \tfrac{L(1-2L\eta)}{2L\eta}\|x_t^s - x_{t-1}^s\|^2 + \tfrac{L}{2}\|x_t^s - x_{t-1}^s\|^2\Big] \\ &\quad \le \tfrac{\alpha(b)L\eta}{1-2L\eta}\big[F(\tilde{x}^{s-1}) - F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_{t-1}^s - \tilde{x}^{s-1} \rangle\big] + \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_t^s - x_{t-1}^s\|^2, \end{aligned}$$
where the first inequality holds due to Young's inequality, and the second inequality follows from Lemma A2. Taking the expectation over the random choice of $I_t$ and substituting inequality (A24) into inequality (A23) with $\eta < \frac{1}{2L}$, we have
$$\begin{aligned} \mathbb{E}[F(x_t^s)] \le{}& F(x_{t-1}^s) + \mathbb{E}\Big[\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - x_{t-1}^s \rangle + \tfrac{1-L\eta}{2\eta}\|x_t^s - x_{t-1}^s\|^2\Big] + \tfrac{\alpha(b)L\eta}{1-2L\eta}\big[F(\tilde{x}^{s-1}) - F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_{t-1}^s - \tilde{x}^{s-1} \rangle\big] \\ ={}& F(x_{t-1}^s) + \mathbb{E}\Big[\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - v^* \rangle + \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), v^* - x_{t-1}^s \rangle + \tfrac{1-L\eta}{2\eta}\|x_t^s - x_{t-1}^s\|^2\Big] \\ & + \tfrac{\alpha(b)L\eta}{1-2L\eta}\big[F(\tilde{x}^{s-1}) - F(x_{t-1}^s) + \langle \nabla F(x_{t-1}^s), x_{t-1}^s - \tilde{x}^{s-1} \rangle\big] \\ ={}& F(x_{t-1}^s) + \mathbb{E}\Big[\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - v^* \rangle + \tfrac{1-L\eta}{2\eta}\|x_t^s - x_{t-1}^s\|^2\Big] + \tfrac{\alpha(b)L\eta}{1-2L\eta}\big[F(\tilde{x}^{s-1}) - F(x_{t-1}^s)\big] \\ & + \Big\langle \nabla F(x_{t-1}^s),\, \tfrac{\alpha(b)L\eta}{1-2L\eta}(x_{t-1}^s - \tilde{x}^{s-1}) + v^* - x_{t-1}^s \Big\rangle, \end{aligned}$$
where $v^* = (1-\theta_{s-1})\tilde{x}^{s-1} + \theta_{s-1}x^*$; the last equality holds due to $\mathbb{E}[\frac{1}{b}\sum_{i_t \in I_t}\nabla F_{i_t}(x_{t-1}^s)] = \nabla F(x_{t-1}^s)$ and the fact that $\mathbb{E}\langle -\frac{1}{b}\sum_{i_t \in I_t}\nabla F_{i_t}(\tilde{x}^{s-1}) + \nabla F(\tilde{x}^{s-1}), v^* - x_{t-1}^s \rangle = 0$. Furthermore,
$$\begin{aligned} &\Big\langle \nabla F(x_{t-1}^s),\, v^* - x_{t-1}^s + \tfrac{\alpha(b)L\eta}{1-2L\eta}(x_{t-1}^s - \tilde{x}^{s-1}) \Big\rangle \\ &\quad = \Big\langle \nabla F(x_{t-1}^s),\, \theta_{s-1}x^* + \Big(1 - \theta_{s-1} - \tfrac{\alpha(b)L\eta}{1-2L\eta}\Big)\tilde{x}^{s-1} + \tfrac{\alpha(b)L\eta}{1-2L\eta}x_{t-1}^s - x_{t-1}^s \Big\rangle \\ &\quad \le F\Big(\theta_{s-1}x^* + \Big(1 - \theta_{s-1} - \tfrac{\alpha(b)L\eta}{1-2L\eta}\Big)\tilde{x}^{s-1} + \tfrac{\alpha(b)L\eta}{1-2L\eta}x_{t-1}^s\Big) - F(x_{t-1}^s) \\ &\quad \le \theta_{s-1}F(x^*) + \Big(1 - \theta_{s-1} - \tfrac{\alpha(b)L\eta}{1-2L\eta}\Big)F(\tilde{x}^{s-1}) + \tfrac{\alpha(b)L\eta}{1-2L\eta}F(x_{t-1}^s) - F(x_{t-1}^s), \end{aligned}$$
where the first inequality holds due to the property of the convex function F, i.e., $\langle \nabla F(x), y - x \rangle \le F(y) - F(x)$, and the last inequality follows from the convexity of F together with the assumption $1 - \theta_{s-1} - \frac{\alpha(b)L\eta}{1-2L\eta} \ge 0$. Substituting inequality (A26) into inequality (A25), we have
$$\mathbb{E}[F(x_t^s)] \le \theta_{s-1}F(x^*) + (1-\theta_{s-1})F(\tilde{x}^{s-1}) + \mathbb{E}\Big[\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - v^* \rangle + \tfrac{1-L\eta}{2\eta}\|x_t^s - x_{t-1}^s\|^2\Big].$$
For the optimal solution $z_t^s$ of the z-subproblem, the first-order optimality condition is
$$\Big\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s) + A^\top y_t^s + \tfrac{\theta_{s-1}}{\eta}(z_t^s - z_{t-1}^s),\, z - z_t^s \Big\rangle \ge 0, \quad \text{for any } z \in \mathbb{R}^d.$$
Since $x_t^s = \theta_{s-1}z_t^s + (1-\theta_{s-1})\tilde{x}^{s-1}$ and $v^* = \theta_{s-1}x^* + (1-\theta_{s-1})\tilde{x}^{s-1}$, setting $z = x^*$ in the above inequality yields
$$\begin{aligned} \langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), x_t^s - v^* \rangle &= \theta_{s-1}\langle \tilde{\nabla} F_{I_t}(x_{t-1}^s), z_t^s - x^* \rangle \le \theta_{s-1}\Big\langle A^\top y_t^s + \tfrac{\theta_{s-1}}{\eta}(z_t^s - z_{t-1}^s),\, x^* - z_t^s \Big\rangle \\ &= \theta_{s-1}\langle A^\top y_t^s, x^* - z_t^s \rangle + \tfrac{\theta_{s-1}^2}{\eta}\langle z_t^s - z_{t-1}^s, x^* - z_t^s \rangle \\ &\overset{(a)}{=} \theta_{s-1}\langle A^\top y_t^s, x^* - z_t^s \rangle + \tfrac{\theta_{s-1}^2}{2\eta}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2 - \|z_{t-1}^s - z_t^s\|^2\big], \end{aligned}$$
where the equality $\overset{(a)}{=}$ holds due to Property A4. Substituting inequality (A28) into inequality (A27) and using $x_t^s - x_{t-1}^s = (1-\theta_{s-1})\tilde{x}^{s-1} + \theta_{s-1}z_t^s - (1-\theta_{s-1})\tilde{x}^{s-1} - \theta_{s-1}z_{t-1}^s = \theta_{s-1}(z_t^s - z_{t-1}^s)$, we have
$$\begin{aligned} \mathbb{E}\big[F(x_t^s) - F(x^*) - \theta_{s-1}\langle A^\top y_t^s, x^* - z_t^s \rangle\big] &\le (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_t^s - x_{t-1}^s\|^2 + \tfrac{\theta_{s-1}^2}{2\eta}\mathbb{E}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2 - \|z_{t-1}^s - z_t^s\|^2\big] \\ &= (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{\theta_{s-1}^2(1-L\eta)}{2\eta}\mathbb{E}\|z_t^s - z_{t-1}^s\|^2 + \tfrac{\theta_{s-1}^2}{2\eta}\mathbb{E}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2 - \|z_{t-1}^s - z_t^s\|^2\big] \\ &= (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{\theta_{s-1}^2}{2\eta}\mathbb{E}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2\big] - \tfrac{L\eta\theta_{s-1}^2}{2\eta}\mathbb{E}\|z_t^s - z_{t-1}^s\|^2. \end{aligned}$$
Setting $y = y^*$ in Property A3, we obtain
$$\begin{aligned} \theta_{s-1}\langle -Az_t^s, y_t^s - y^* \rangle + \theta_{s-1}\big[G^*(y_t^s) - G^*(y^*)\big] &\le \theta_{s-1}\langle -A\bar{z}_{t-1}^s, y_t^s - y^* \rangle + \theta_{s-1}\langle \partial G^*(y_t^s), y_t^s - y^* \rangle + \theta_{s-1}\langle A(\bar{z}_{t-1}^s - z_t^s), y_t^s - y^* \rangle \\ &\le \tfrac{1}{\rho}\langle y_{t-1}^s - y_t^s, y_t^s - y^* \rangle + \theta_{s-1}\langle A(\bar{z}_{t-1}^s - z_t^s), y_t^s - y^* \rangle \\ &= \tfrac{1}{\rho}\langle y_{t-1}^s - y_t^s, y_t^s - y^* \rangle + \theta_{s-1}\langle A(z_{t-1}^s - z_{t-2}^s), y_t^s - y^* \rangle - \theta_{s-1}\langle A(z_t^s - z_{t-1}^s), y_t^s - y^* \rangle \\ &= \tfrac{1}{2\rho}\big[\|y_{t-1}^s - y^*\|^2 - \|y_t^s - y^*\|^2 - \|y_t^s - y_{t-1}^s\|^2\big] + \theta_{s-1}\langle A(z_{t-1}^s - z_{t-2}^s), y_{t-1}^s - y^* \rangle \\ &\quad + \theta_{s-1}\langle A(z_{t-1}^s - z_{t-2}^s), y_t^s - y_{t-1}^s \rangle - \theta_{s-1}\langle A(z_t^s - z_{t-1}^s), y_t^s - y^* \rangle, \end{aligned}$$
where the first inequality follows from the convexity of the function $G^*$, the first equality holds due to $\bar{z}_t^s = z_t^s + \beta(z_t^s - z_{t-1}^s)$ with $\beta = 1$, and the last equality holds due to Property A4.
Combining inequality (A30) with inequality (A29), we obtain
$$\begin{aligned} &\mathbb{E}\big[F(x_t^s) - F(x^*) - \theta_{s-1}\langle A^\top y_t^s, x^* - z_t^s \rangle - \theta_{s-1}\langle Az_t^s, y_t^s - y^* \rangle + \theta_{s-1}G^*(y_t^s) - \theta_{s-1}G^*(y^*)\big] \\ &\quad = \mathbb{E}\big[F(x_t^s) - F(x^*) - \theta_{s-1}\langle y_t^s, Ax^* \rangle + \theta_{s-1}\langle y^*, Az_t^s \rangle + \theta_{s-1}G^*(y_t^s) - \theta_{s-1}G^*(y^*)\big] \\ &\quad \le (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{\theta_{s-1}^2}{2\eta}\mathbb{E}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2\big] - \tfrac{L\eta\theta_{s-1}^2}{2\eta}\mathbb{E}\|z_t^s - z_{t-1}^s\|^2 \\ &\qquad + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_{t-1}^s - y^*\|^2 - \|y_t^s - y^*\|^2\big] - \tfrac{1}{2\rho}\mathbb{E}\|y_t^s - y_{t-1}^s\|^2 \\ &\qquad + \theta_{s-1}\mathbb{E}\langle A(z_{t-1}^s - z_{t-2}^s), y_{t-1}^s - y^* \rangle - \theta_{s-1}\mathbb{E}\langle A(z_t^s - z_{t-1}^s), y_t^s - y^* \rangle + \theta_{s-1}\mathbb{E}\langle A(z_{t-1}^s - z_{t-2}^s), y_t^s - y_{t-1}^s \rangle \\ &\quad \le (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{\theta_{s-1}^2}{2\eta}\mathbb{E}\big[\|x^* - z_{t-1}^s\|^2 - \|x^* - z_t^s\|^2\big] - \tfrac{L\eta\theta_{s-1}^2}{2\eta}\mathbb{E}\|z_t^s - z_{t-1}^s\|^2 \\ &\qquad + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_{t-1}^s - y^*\|^2 - \|y_t^s - y^*\|^2\big] - \tfrac{1}{2\rho}\mathbb{E}\|y_t^s - y_{t-1}^s\|^2 + \tfrac{\theta_{s-1}M\gamma}{2\eta}\mathbb{E}\|z_{t-1}^s - z_{t-2}^s\|^2 + \tfrac{\theta_{s-1}M\eta\rho}{2\gamma\rho}\mathbb{E}\|y_t^s - y_{t-1}^s\|^2 \\ &\qquad + \theta_{s-1}\mathbb{E}\langle A(z_{t-1}^s - z_{t-2}^s), y_{t-1}^s - y^* \rangle - \theta_{s-1}\mathbb{E}\langle A(z_t^s - z_{t-1}^s), y_t^s - y^* \rangle. \end{aligned}$$
To simplify the proof, we write T for $T_{s-1}$ in the remainder of the proof of Lemma 2 and replace T with $T_{s-1}$ in the final result, which does not affect the argument. Taking the expectation over the history of the random variables $I_1, \ldots, I_T$, summing inequality (A31) over $t = 1, \ldots, T$ at the s-th epoch, using the convexity of F, i.e., $F(\sum_{i=1}^n a_i x_i) \le \sum_{i=1}^n a_i F(x_i)$ with $a_i \ge 0$ and $\sum_{i=1}^n a_i = 1$, and the update rule $\tilde{x}^s = \frac{1}{T}\sum_{t=1}^T x_t^s$, and setting $M\theta_{s-1}\eta\rho \le \gamma \le \frac{L\eta\theta_{s-1}}{M}$, which ensures that $\frac{M\theta_{s-1}\gamma}{2T\eta} \le \frac{L\eta\theta_{s-1}^2}{2T\eta}$ and $\frac{M\theta_{s-1}\eta\rho}{2T\gamma\rho} \le \frac{1}{2T\rho}$, we have
$$\begin{aligned} &\mathbb{E}\Big[F(\tilde{x}^s) - F(x^*) - \theta_{s-1}\Big\langle \tfrac{1}{T}\textstyle\sum_{t=1}^T y_t^s, Ax^* \Big\rangle + \theta_{s-1}\Big\langle y^*, A\Big(\tfrac{1}{T}\textstyle\sum_{t=1}^T z_t^s\Big) \Big\rangle + \theta_{s-1}G^*\Big(\tfrac{1}{T}\textstyle\sum_{t=1}^T y_t^s\Big) - \theta_{s-1}G^*(y^*)\Big] \\ &\quad \le \mathbb{E}\Big[\tfrac{1}{T}\textstyle\sum_{t=1}^T F(x_t^s) - F(x^*) - \theta_{s-1}\Big\langle \tfrac{1}{T}\textstyle\sum_{t=1}^T y_t^s, Ax^* \Big\rangle + \theta_{s-1}\Big\langle y^*, A\Big(\tfrac{1}{T}\textstyle\sum_{t=1}^T z_t^s\Big) \Big\rangle + \tfrac{\theta_{s-1}}{T}\textstyle\sum_{t=1}^T G^*(y_t^s) - \theta_{s-1}G^*(y^*)\Big] \\ &\quad \le (1-\theta_{s-1})\big[F(\tilde{x}^{s-1}) - F(x^*)\big] + \tfrac{\theta_{s-1}^2}{2T\eta}\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_T^s\|^2\big] + \tfrac{1}{2T\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big] \\ &\qquad + \tfrac{M\theta_{s-1}\gamma}{2T\eta}\mathbb{E}\|z_0^s - z_{-1}^s\|^2 - \tfrac{L\eta\theta_{s-1}^2}{2T\eta}\mathbb{E}\|z_T^s - z_{T-1}^s\|^2 + \tfrac{\theta_{s-1}}{T}\mathbb{E}\langle A(z_0^s - z_{-1}^s), y_0^s - y^* \rangle - \tfrac{\theta_{s-1}}{T}\mathbb{E}\langle A(z_T^s - z_{T-1}^s), y_T^s - y^* \rangle. \end{aligned}$$
Using the update rule $x_t^s = (1-\theta_{s-1})\tilde{x}^{s-1} + \theta_{s-1}z_t^s$, we have
$$\tilde{x}^s = \tfrac{1}{T}\sum_{t=1}^T x_t^s = \tfrac{1}{T}\sum_{t=1}^T \big[(1-\theta_{s-1})\tilde{x}^{s-1} + \theta_{s-1}z_t^s\big] = (1-\theta_{s-1})\tilde{x}^{s-1} + \tfrac{\theta_{s-1}}{T}\sum_{t=1}^T z_t^s.$$
Subtracting $(1-\theta_{s-1})\,\mathbb{E}[\langle \tilde{y}^{s-1}, Ax^* \rangle - \langle y^*, A\tilde{x}^{s-1} \rangle + G^*(y^*) - G^*(\tilde{y}^{s-1})]$ from both sides of inequality (A32), we obtain
$$\begin{aligned} &\mathbb{E}\Big[F(\tilde{x}^s) - F(x^*) - \underbrace{\langle \tilde{y}^s, Ax^* \rangle}_{J_1} + \underbrace{\langle y^*, A\tilde{x}^s \rangle}_{J_2} + \underbrace{G^*(\tilde{y}^s)}_{J_3} - \underbrace{G^*(y^*)}_{J_4}\Big] \\ &\quad \le \mathbb{E}\Big[F(\tilde{x}^s) - F(x^*) - \underbrace{\theta_{s-1}\Big\langle \tfrac{1}{T}\textstyle\sum_{t=1}^T y_t^s, Ax^* \Big\rangle}_{J_1^{(a)}} + \underbrace{\theta_{s-1}\Big\langle y^*, A\Big(\tfrac{1}{T}\textstyle\sum_{t=1}^T z_t^s\Big) \Big\rangle}_{J_2^{(a)}} + \underbrace{\theta_{s-1}G^*\Big(\tfrac{1}{T}\textstyle\sum_{t=1}^T y_t^s\Big)}_{J_3^{(a)}} - \underbrace{\theta_{s-1}G^*(y^*)}_{J_4^{(a)}}\Big] \\ &\qquad + \mathbb{E}\Big[-\underbrace{(1-\theta_{s-1})\langle \tilde{y}^{s-1}, Ax^* \rangle}_{J_1^{(b)}} + \underbrace{(1-\theta_{s-1})\langle y^*, A\tilde{x}^{s-1} \rangle}_{J_2^{(b)}} + \underbrace{(1-\theta_{s-1})G^*(\tilde{y}^{s-1})}_{J_3^{(b)}} - \underbrace{(1-\theta_{s-1})G^*(y^*)}_{J_4^{(b)}}\Big] \\ &\quad \le (1-\theta_{s-1})\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \tilde{y}^{s-1}, Ax^* \rangle + \langle y^*, A\tilde{x}^{s-1} \rangle + G^*(\tilde{y}^{s-1}) - G^*(y^*)\big] \\ &\qquad + \tfrac{\theta_{s-1}^2}{2T\eta}\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_T^s\|^2\big] + \tfrac{1}{2T\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big] + \tfrac{M\gamma\theta_{s-1}}{2T\eta}\mathbb{E}\|z_0^s - z_{-1}^s\|^2 \\ &\qquad - \tfrac{L\eta\theta_{s-1}^2}{2T\eta}\mathbb{E}\|z_T^s - z_{T-1}^s\|^2 + \tfrac{\theta_{s-1}}{T}\mathbb{E}\langle A(z_0^s - z_{-1}^s), y_0^s - y^* \rangle - \tfrac{\theta_{s-1}}{T}\mathbb{E}\langle A(z_T^s - z_{T-1}^s), y_T^s - y^* \rangle, \end{aligned}$$
where $J_1 = J_1^{(a)} + J_1^{(b)}$ due to $\tilde{y}^s = (1-\theta_{s-1})\tilde{y}^{s-1} + \frac{\theta_{s-1}}{T}\sum_{t=1}^T y_t^s$, $J_2 = J_2^{(a)} + J_2^{(b)}$, $J_3 \le J_3^{(a)} + J_3^{(b)}$ by the convexity of $G^*$, and $J_4 = J_4^{(a)} + J_4^{(b)}$. Then, replacing T with $T_{s-1}$ for all s, we obtain
$$\begin{aligned} \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& (1-\theta_{s-1})\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{\theta_{s-1}^2}{2T_{s-1}\eta}\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_{T_{s-1}}^s\|^2\big] + \tfrac{1}{2T_{s-1}\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_{T_{s-1}}^s - y^*\|^2\big] \\ & + \tfrac{M\gamma\theta_{s-1}}{2T_{s-1}\eta}\mathbb{E}\|z_0^s - z_{-1}^s\|^2 - \tfrac{L\eta\theta_{s-1}^2}{2T_{s-1}\eta}\mathbb{E}\|z_{T_{s-1}}^s - z_{T_{s-1}-1}^s\|^2 \\ & + \tfrac{\theta_{s-1}}{T_{s-1}}\mathbb{E}\langle A(z_0^s - z_{-1}^s), y_0^s - y^* \rangle - \tfrac{\theta_{s-1}}{T_{s-1}}\mathbb{E}\langle A(z_{T_{s-1}}^s - z_{T_{s-1}-1}^s), y_{T_{s-1}}^s - y^* \rangle. \end{aligned}$$
This completes the proof. □

Appendix A.4. Convergence Analyses of SVR-PDHG and ASVR-PDHG

In this section, we give the convergence rate results of our algorithms in detail. Appendix A.4.1 and Appendix A.4.2 prove the convergence rate of our SVR-PDHG algorithm. Appendix A.4.3 and Appendix A.4.4 prove the convergence rate of our ASVR-PDHG algorithm.

Appendix A.4.1. Proof of Theorem 1 (SVR-PDHG for Strongly Convex Objectives)

Proof. 
For the strongly convex function F, according to the update rule $x_0^s = \tilde{x}^{s-1}$ and Lemma 1, and setting $x_{-1}^s = x_0^s$, we have
$$\begin{aligned} (1 - 4L\eta\alpha(b))\,T\,\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2\big] - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{T-1}^s - x_T^s\|^2 - \mathbb{E}\langle A(x_T^s - x_{T-1}^s), y_T^s - y^* \rangle \\ & + 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_0^s) - F(x^*) - \langle \nabla F(x^*), x_0^s - x^* \rangle\big] - 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_T^s) - F(x^*) - \langle \nabla F(x^*), x_T^s - x^* \rangle\big] \\ & + 4L\eta\alpha(b)T\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big] \\ \le{}& \tfrac{1}{2\eta}\mathbb{E}\|x^* - \tilde{x}^{s-1}\|^2 + 4L\eta\alpha(b)(T+1)\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] + \tfrac{1}{2\rho}\mathbb{E}\|y_0^s - y^*\|^2 \\ & + \mathbb{E}\Big[\langle A(x_{T-1}^s - x_T^s), y_T^s - y^* \rangle - \tfrac{1-L\eta}{2\eta}\|x_{T-1}^s - x_T^s\|^2 - \tfrac{1}{2\rho}\|y_T^s - y^*\|^2\Big]. \end{aligned}$$
Since $M\rho \le \frac{1-L\eta}{M\eta}$ in Lemma 1, we have
$$\begin{aligned} &\langle A(x_{T-1}^s - x_T^s), y_T^s - y^* \rangle - \tfrac{1-L\eta}{2\eta}\|x_{T-1}^s - x_T^s\|^2 - \tfrac{1}{2\rho}\|y_T^s - y^*\|^2 \\ &\quad \le \tfrac{1-L\eta}{2\eta}\|x_{T-1}^s - x_T^s\|^2 + \tfrac{M^2\rho\eta}{2\rho(1-L\eta)}\|y_T^s - y^*\|^2 - \tfrac{1-L\eta}{2\eta}\|x_{T-1}^s - x_T^s\|^2 - \tfrac{1}{2\rho}\|y_T^s - y^*\|^2 \le 0. \end{aligned}$$
Using Property 2 and the definition of $\mathcal{T}(\cdot, \cdot)$, we have
$$\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le \tfrac{1}{2T\eta(1-4L\eta\alpha(b))}\mathbb{E}\|x^* - \tilde{x}^{s-1}\|^2 + \tfrac{4L\eta(T+1)\alpha(b)}{(1-4L\eta\alpha(b))T}\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{1}{2T\rho(1-4L\eta\alpha(b))}\mathbb{E}\|y_0^s - y^*\|^2.$$
By the strong convexity of the function F, the update rule $y_0^s = -(A^\top)^\dagger\nabla F(\tilde{x}^{s-1})$, and Lemma A3, we have
$$\begin{aligned} \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] &\le \tfrac{4L\eta(T+1)\alpha(b)}{(1-4L\eta\alpha(b))T}\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{1}{T\eta\mu(1-4L\eta\alpha(b))}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] \\ &\quad + \tfrac{L_F}{T\rho(1-4L\eta\alpha(b))\sigma_{\min}(AA^\top)}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big]. \end{aligned}$$
Using Property 2 again, we obtain
$$\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le \Big[\tfrac{4L\eta(T+1)\alpha(b)}{(1-4L\eta\alpha(b))T} + \tfrac{1}{T\eta\mu(1-4L\eta\alpha(b))} + \tfrac{L_F}{T\rho(1-4L\eta\alpha(b))\sigma_{\min}(AA^\top)}\Big]\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})].$$
Setting $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0 = \tilde{y}^0$, $\hat{x}^S = \tilde{x}^S$, and $\hat{y}^S = \tilde{y}^S$, and letting $\phi_1 = \frac{4L\eta(T+1)\alpha(b)}{(1-4L\eta\alpha(b))T} + \frac{1}{T\eta\mu(1-4L\eta\alpha(b))} + \frac{L_F}{T\rho(1-4L\eta\alpha(b))\sigma_{\min}(AA^\top)}$, we have $\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \phi_1^S\,\mathcal{T}(\hat{x}^0, \hat{y}^0)$, where S is the number of outer iterations. This implies that SVR-PDHG converges linearly for strongly convex problems. This completes the proof. □

Appendix A.4.2. Proof of Theorem 2 (SVR-PDHG for Non-Strongly Convex Objectives)

Proof. 
Recall Lemma 1:
$$\begin{aligned} &(1 - 4L\eta\alpha(b))\,T\,\mathbb{E}\big[F(\tilde{x}^s) - F(x^*) - \langle \tilde{y}^s, Ax^* \rangle + \langle y^*, A\tilde{x}^s \rangle + G^*(\tilde{y}^s) - G^*(y^*)\big] \\ &\quad \le \tfrac{1}{2\eta}\mathbb{E}\big[\|x^* - x_0^s\|^2 - \|x^* - x_T^s\|^2\big] + \tfrac{M\eta\gamma}{2\eta}\mathbb{E}\|x_0^s - x_{-1}^s\|^2 - \tfrac{1-L\eta}{2\eta}\mathbb{E}\|x_{T-1}^s - x_T^s\|^2 \\ &\qquad + \mathbb{E}\big[\langle A(x_0^s - x_{-1}^s), y_0^s - y^* \rangle - \langle A(x_T^s - x_{T-1}^s), y_T^s - y^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_0^s) - F(x^*) - \langle \nabla F(x^*), x_0^s - x^* \rangle\big] - 4L\eta\alpha(b)\,\mathbb{E}\big[F(x_T^s) - F(x^*) - \langle \nabla F(x^*), x_T^s - x^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)T\,\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] + \tfrac{1}{2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big]. \end{aligned}$$
Using Property 2 and the definition of $\mathcal{T}(\cdot,\cdot)$, setting $x_0^s = x_T^{s-1}$, $x_{-1}^s = x_{T-1}^{s-1}$, $x_{-1}^1 = x_0^1$, $x_0^1 = \bar{x}^0 = \tilde{x}^0$, and $y_0^s = y_T^{s-1}$, taking the expectation, and summing over all epochs $s = 1, \ldots, S$, we have
$$\begin{aligned} &(1-4L\eta\alpha(b))\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \\ &\quad \le \tfrac{1}{2\eta S}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_T^S\|^2\big] + \tfrac{M\eta\gamma}{2S\eta}\mathbb{E}\|x_0^1 - x_{-1}^1\|^2 - \tfrac{1-L\eta}{2S\eta}\mathbb{E}\|x_{T-1}^S - x_T^S\|^2 \\ &\qquad + \tfrac{1}{S}\mathbb{E}\big[\langle A(x_0^1 - x_{-1}^1), y_0^1 - y^* \rangle - \langle A(x_T^S - x_{T-1}^S), y_T^S - y^* \rangle\big] \\ &\qquad + \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(\bar{x}^0) - F(x^*) - \langle \nabla F(x^*), \bar{x}^0 - x^* \rangle\big] - \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(x_T^S) - F(x^*) - \langle \nabla F(x^*), x_T^S - x^* \rangle\big] \\ &\qquad + 4L\eta\alpha(b)\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{1}{2S\rho}\mathbb{E}\big[\|y_0^1 - y^*\|^2 - \|y_T^S - y^*\|^2\big] \\ &\quad \le \tfrac{1}{2\eta S}\mathbb{E}\big[\|x^* - x_0^1\|^2 - \|x^* - x_T^S\|^2\big] + \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(\bar{x}^0) - F(x^*) - \langle \nabla F(x^*), \bar{x}^0 - x^* \rangle\big] \\ &\qquad - \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(x_T^S) - F(x^*) - \langle \nabla F(x^*), x_T^S - x^* \rangle\big] + 4L\eta\alpha(b)\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{1}{2S\rho}\mathbb{E}\|y_0^1 - y^*\|^2 \\ &\qquad + \tfrac{1}{S}\mathbb{E}\langle A(x_{T-1}^S - x_T^S), y_T^S - y^* \rangle - \tfrac{1-L\eta}{2S\eta}\mathbb{E}\|x_{T-1}^S - x_T^S\|^2 - \tfrac{1}{2S\rho}\mathbb{E}\|y_T^S - y^*\|^2. \end{aligned}$$
Since $M\rho \le \frac{1-L\eta}{M\eta}$ in Lemma 1, we have
$$\begin{aligned} &\tfrac{1}{S}\mathbb{E}\langle A(x_{T-1}^S - x_T^S), y_T^S - y^* \rangle - \tfrac{1-L\eta}{2S\eta}\mathbb{E}\|x_{T-1}^S - x_T^S\|^2 - \tfrac{1}{2S\rho}\mathbb{E}\|y_T^S - y^*\|^2 \\ &\quad \le \tfrac{1-L\eta}{2S\eta}\mathbb{E}\|x_{T-1}^S - x_T^S\|^2 + \tfrac{M^2\rho\eta}{2S\rho(1-L\eta)}\mathbb{E}\|y_T^S - y^*\|^2 - \tfrac{1-L\eta}{2S\eta}\mathbb{E}\|x_{T-1}^S - x_T^S\|^2 - \tfrac{1}{2S\rho}\mathbb{E}\|y_T^S - y^*\|^2 \le 0. \end{aligned}$$
Then, we have
$$\begin{aligned} (1-4L\eta\alpha(b))\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& 4L\eta\alpha(b)\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(\bar{x}^0) - F(x^*) - \langle \nabla F(x^*), \bar{x}^0 - x^* \rangle\big] \\ & - \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(x_T^S) - F(x^*) - \langle \nabla F(x^*), x_T^S - x^* \rangle\big] + \tfrac{1}{2S\eta}\mathbb{E}\|x^* - x_0^1\|^2 + \tfrac{1}{2S\rho}\mathbb{E}\|y_0^1 - y^*\|^2. \end{aligned}$$
Subtracting $4L\eta\alpha(b)T\frac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)]$ from both sides of the above inequality, setting $\hat{x}^0 = \bar{x}^0 = \tilde{x}^0$ and $\hat{y}^0 = \bar{y}^0 = \tilde{y}^0$, and using the update rule $y_0^s = y_T^{s-1}$, we obtain
$$\begin{aligned} (1-8L\eta\alpha(b))\,T\,\tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] &\le \tfrac{4L\eta(T+1)\alpha(b)}{S}\mathcal{T}(\tilde{x}^0, \tilde{y}^0) - \tfrac{4L\eta\alpha(b)T}{S}\mathcal{T}(\tilde{x}^S, \tilde{y}^S) - \tfrac{4L\eta\alpha(b)}{S}\mathbb{E}\big[F(x_T^S) - F(x^*) - \langle \nabla F(x^*), x_T^S - x^* \rangle\big] \\ &\quad + \tfrac{1}{2S\eta}\mathbb{E}\|x^* - x_0^1\|^2 + \tfrac{1}{2S\rho}\mathbb{E}\|y_0^1 - y^*\|^2 \\ &\le \tfrac{4L\eta(T+1)\alpha(b)}{S}\mathcal{T}(\tilde{x}^0, \tilde{y}^0) + \tfrac{1}{2S\eta}D_{x^*}^2 + \tfrac{1}{2S\rho}D_{y^*}^2, \end{aligned}$$
where $D_{x^*} = \|x^* - \hat{x}^0\| = \|x^* - \tilde{x}^0\|$, $y_0^1 = \tilde{y}^0$, and $D_{y^*} = \|y^* - \hat{y}^0\| = \|y^* - \tilde{y}^0\|$; the last inequality holds due to the definition of $\mathcal{T}(\cdot,\cdot)$ and the convexity of $F(\cdot)$. Since $\hat{x}^S = \frac{1}{S}\sum_{s=1}^S \tilde{x}^s$ and $\hat{y}^S = \frac{1}{S}\sum_{s=1}^S \tilde{y}^s$, we have
$$\begin{aligned} \tfrac{1}{S}\sum_{s=1}^S \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] &= \tfrac{1}{S}\sum_{s=1}^S \mathbb{E}\big[F(\tilde{x}^s) - F(x^*) - \langle \tilde{y}^s, Ax^* \rangle + \langle y^*, A\tilde{x}^s \rangle + G^*(\tilde{y}^s) - G^*(y^*)\big] \\ &\ge \mathbb{E}\big[F(\hat{x}^S) - F(x^*) - \langle \hat{y}^S, Ax^* \rangle + \langle y^*, A\hat{x}^S \rangle + G^*(\hat{y}^S) - G^*(y^*)\big] = \mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)], \end{aligned}$$
by the convexity of F and $G^*$. Combining the above results, we obtain
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \tfrac{4L\eta(T+1)\alpha(b)}{(1-8L\eta\alpha(b))TS}\mathcal{T}(\hat{x}^0, \hat{y}^0) + \tfrac{1}{2(1-8L\eta\alpha(b))TS\eta}D_{x^*}^2 + \tfrac{1}{2(1-8L\eta\alpha(b))TS\rho}D_{y^*}^2 = \tfrac{1}{S}\Big[\tfrac{4L\eta(T+1)\alpha(b)}{(1-8L\eta\alpha(b))T}\mathcal{T}(\hat{x}^0, \hat{y}^0) + \tfrac{\rho D_{x^*}^2 + \eta D_{y^*}^2}{2\eta\rho(1-8L\eta\alpha(b))T}\Big].$$
Hence, SVR-PDHG enjoys a convergence rate of $\mathcal{O}(1/S)$ for non-SC problems. This completes the proof. □

Appendix A.4.3. Proof of Theorem 3 (ASVR-PDHG for Strongly Convex Objectives)

Proof. 
According to Lemma 2 with $\theta_{s-1} = \theta$ and $T_{s-1} = T$ at all epochs, $z_0^s = \tilde{x}^{s-1}$, $y_0^s = -(A^\top)^\dagger\nabla F(\tilde{x}^{s-1})$, and $\eta < \frac{1}{2L}$, and setting $z_{-1}^s = z_0^s$, we have
$$\begin{aligned} \mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& (1-\theta)\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{\theta^2}{2T\eta}\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_T^s\|^2\big] + \tfrac{1}{2T\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_T^s - y^*\|^2\big] \\ & + \tfrac{M\gamma\theta}{2T\eta}\mathbb{E}\|z_0^s - z_{-1}^s\|^2 - \tfrac{L\eta\theta^2}{2T\eta}\mathbb{E}\|z_T^s - z_{T-1}^s\|^2 + \tfrac{\theta}{T}\mathbb{E}\langle A(z_0^s - z_{-1}^s), y_0^s - y^* \rangle - \tfrac{\theta}{T}\mathbb{E}\langle A(z_T^s - z_{T-1}^s), y_T^s - y^* \rangle \\ \le{}& (1-\theta)\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{\theta^2}{2T\eta}\mathbb{E}\|x^* - z_0^s\|^2 + \tfrac{1}{2T\rho}\mathbb{E}\|y_0^s - y^*\|^2 \\ & + \tfrac{\theta}{T}\mathbb{E}\Big[\langle A(z_{T-1}^s - z_T^s), y_T^s - y^* \rangle - \tfrac{L\eta\theta}{2\eta}\|z_T^s - z_{T-1}^s\|^2 - \tfrac{1}{2\theta\rho}\|y_T^s - y^*\|^2\Big] \\ \overset{(a)}{\le}{}& (1-\theta)\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{\theta^2}{2T\eta}\mathbb{E}\|x^* - \tilde{x}^{s-1}\|^2 + \tfrac{1}{2T\rho}\mathbb{E}\|y_0^s - y^*\|^2 \\ \overset{(b)}{\le}{}& (1-\theta)\,\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{\theta^2}{T\eta\mu}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big] \\ & + \tfrac{L_F}{T\rho\,\sigma_{\min}(AA^\top)}\mathbb{E}\big[F(\tilde{x}^{s-1}) - F(x^*) - \langle \nabla F(x^*), \tilde{x}^{s-1} - x^* \rangle\big], \end{aligned}$$
where inequality (b) holds due to the strong convexity of the function F and Lemma A3, and inequality (a) holds for a similar reason, with (A43) as follows:
$$\langle A(z_{T-1}^s - z_T^s), y_T^s - y^* \rangle - \tfrac{L\eta\theta}{2\eta}\|z_T^s - z_{T-1}^s\|^2 - \tfrac{1}{2\theta\rho}\|y_T^s - y^*\|^2 \le \tfrac{1}{2\theta\rho}\|y_T^s - y^*\|^2 + \tfrac{M^2\theta\rho}{2}\|z_T^s - z_{T-1}^s\|^2 - \tfrac{L\eta\theta}{2\eta}\|z_T^s - z_{T-1}^s\|^2 - \tfrac{1}{2\theta\rho}\|y_T^s - y^*\|^2 \le 0,$$
where $\rho \le \frac{L}{M^2}$ as in Lemma 2. Using Property 2, we obtain
$$\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le \Big[(1-\theta) + \tfrac{\theta^2}{T\eta\mu} + \tfrac{L_F}{T\rho\,\sigma_{\min}(AA^\top)}\Big]\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})].$$
Letting $\hat{x}^0 = \tilde{x}^0$, $\hat{y}^0 = \tilde{y}^0$, $\hat{x}^S = \tilde{x}^S$, $\hat{y}^S = \tilde{y}^S$, and $\phi_2 = (1-\theta) + \frac{\theta^2}{T\eta\mu} + \frac{L_F}{T\rho\,\sigma_{\min}(AA^\top)}$, we have
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \phi_2^S\,\mathcal{T}(\hat{x}^0, \hat{y}^0).$$
This completes the proof. □

Appendix A.4.4. Proof of Theorem 4 (ASVR-PDHG for Non-Strongly Convex Objectives)

Proof. 
Recall Lemma 2; dividing both sides of inequality (19) by $\theta_{s-1}^2$, we have
$$\begin{aligned} \tfrac{1}{\theta_{s-1}^2}\mathbb{E}[\mathcal{T}(\tilde{x}^s, \tilde{y}^s)] \le{}& \tfrac{1-\theta_{s-1}}{\theta_{s-1}^2}\mathbb{E}[\mathcal{T}(\tilde{x}^{s-1}, \tilde{y}^{s-1})] + \tfrac{1}{2T_{s-1}\eta}\mathbb{E}\big[\|x^* - z_0^s\|^2 - \|x^* - z_{T_{s-1}}^s\|^2\big] + \tfrac{1}{2T_{s-1}\theta_{s-1}^2\rho}\mathbb{E}\big[\|y_0^s - y^*\|^2 - \|y_{T_{s-1}}^s - y^*\|^2\big] \\ & + \tfrac{M\gamma}{2T_{s-1}\eta\theta_{s-1}}\mathbb{E}\|z_0^s - z_{-1}^s\|^2 - \tfrac{L\eta}{2T_{s-1}\eta}\mathbb{E}\|z_{T_{s-1}}^s - z_{T_{s-1}-1}^s\|^2 \\ & + \tfrac{1}{T_{s-1}\theta_{s-1}}\mathbb{E}\langle A(z_0^s - z_{-1}^s), y_0^s - y^* \rangle - \tfrac{1}{T_{s-1}\theta_{s-1}}\mathbb{E}\langle A(z_{T_{s-1}}^s - z_{T_{s-1}-1}^s), y_{T_{s-1}}^s - y^* \rangle. \end{aligned}$$
By the update rule of $\theta_s$, i.e., $\theta_s = \frac{\sqrt{\theta_{s-1}^4 + 4\theta_{s-1}^2} - \theta_{s-1}^2}{2}$ with $\theta_0 = 1 - \frac{\alpha(b)L\eta}{1-L\eta}$, we have $\frac{1-\theta_s}{\theta_s^2} = \frac{1}{\theta_{s-1}^2}$. Moreover, by the update rule of $T_s$, i.e., $T_s = \frac{1}{1-\theta_s}T_{s-1}$, we have
$$\tfrac{1}{T_s} - \tfrac{1}{T_{s-1}} \le 0, \qquad \tfrac{1}{T_s\theta_s} - \tfrac{1}{T_{s-1}\theta_{s-1}} \le 0, \qquad \tfrac{1}{T_s\theta_s^2} - \tfrac{1}{T_{s-1}\theta_{s-1}^2} \le 0, \qquad \tfrac{M\gamma}{2\eta T_s\theta_s} - \tfrac{L\eta}{2T_{s-1}\eta} \le 0.$$
Summing over all epochs $s = 1, \ldots, S$, using $z_0^s = z_{T_{s-1}}^{s-1}$, $z_{-1}^s = z_{T_{s-1}-1}^{s-1}$, and $y_0^s = y_{T_{s-1}}^{s-1}$, and setting $z_{-1}^1 = z_0^1$, we have
$$\begin{aligned} \tfrac{1}{\theta_{S-1}^2}\mathbb{E}[\mathcal{T}(\tilde{x}^S, \tilde{y}^S)] \le{}& \tfrac{1-\theta_0}{\theta_0^2}\mathcal{T}(\tilde{x}^0, \tilde{y}^0) + \tfrac{1}{2T_0\eta}\mathbb{E}\|x^* - z_0^1\|^2 + \tfrac{1}{2T_0\theta_0^2\rho}\mathbb{E}\|y_0^1 - y^*\|^2 - \tfrac{1}{2T_{S-1}\theta_{S-1}^2\rho}\mathbb{E}\|y_{T_{S-1}}^S - y^*\|^2 \\ & + \tfrac{M\gamma}{2\eta T_0\theta_0}\|z_0^1 - z_{-1}^1\|^2 - \tfrac{L\eta}{2T_0\eta}\mathbb{E}\|z_{T_{S-1}}^S - z_{T_{S-1}-1}^S\|^2 \\ & + \tfrac{1}{T_0\theta_0}\mathbb{E}\langle A(z_0^1 - z_{-1}^1), y_0^1 - y^* \rangle - \tfrac{1}{T_{S-1}\theta_{S-1}}\mathbb{E}\langle A(z_{T_{S-1}}^S - z_{T_{S-1}-1}^S), y_{T_{S-1}}^S - y^* \rangle \\ ={}& \tfrac{1-\theta_0}{\theta_0^2}\mathcal{T}(\tilde{x}^0, \tilde{y}^0) + \tfrac{1}{2T_0\eta}\mathbb{E}\|x^* - z_0^1\|^2 + \tfrac{1}{2T_0\theta_0^2\rho}\mathbb{E}\|y_0^1 - y^*\|^2 \\ & + \tfrac{1}{T_{S-1}\theta_{S-1}}\mathbb{E}\langle A(z_{T_{S-1}-1}^S - z_{T_{S-1}}^S), y_{T_{S-1}}^S - y^* \rangle - \tfrac{L\eta}{2T_0\eta}\mathbb{E}\|z_{T_{S-1}-1}^S - z_{T_{S-1}}^S\|^2 - \tfrac{1}{2T_{S-1}\theta_{S-1}^2\rho}\mathbb{E}\|y_{T_{S-1}}^S - y^*\|^2. \end{aligned}$$
Here, set $\frac{T_0}{T_{S-1}} < \frac{L\theta_{S-1}}{M^2\rho}$, i.e., $\rho < \frac{L\theta_{S-1}T_{S-1}}{T_0 M^2}$; then we have
$$\begin{aligned} &\tfrac{1}{T_{S-1}\theta_{S-1}}\langle A(z_{T_{S-1}-1}^S - z_{T_{S-1}}^S), y_{T_{S-1}}^S - y^* \rangle - \tfrac{L\eta}{2T_0\eta}\|z_{T_{S-1}-1}^S - z_{T_{S-1}}^S\|^2 - \tfrac{1}{2T_{S-1}\theta_{S-1}^2\rho}\|y_{T_{S-1}}^S - y^*\|^2 \\ &\quad \le \tfrac{1}{T_{S-1}\theta_{S-1}}\Big[\tfrac{M^2\rho}{2}\|z_{T_{S-1}-1}^S - z_{T_{S-1}}^S\|^2 + \tfrac{1}{2\rho}\|y_{T_{S-1}}^S - y^*\|^2\Big] - \tfrac{L\eta}{2T_0\eta}\|z_{T_{S-1}-1}^S - z_{T_{S-1}}^S\|^2 - \tfrac{1}{2T_{S-1}\theta_{S-1}\rho}\|y_{T_{S-1}}^S - y^*\|^2 \le 0. \end{aligned}$$
Furthermore, since $z_0^1 = \tilde{x}^0 = \hat{x}^0$, $y_0^1 = \tilde{y}^0 = \hat{y}^0$, $\hat{x}^S = \tilde{x}^S$, and $\hat{y}^S = \tilde{y}^S$, we obtain
$$\tfrac{1}{\theta_{S-1}^2}\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \tfrac{1-\theta_0}{\theta_0^2}\mathcal{T}(\hat{x}^0, \hat{y}^0) + \tfrac{1}{2T_0\eta}\mathbb{E}\|x^* - z_0^1\|^2 + \tfrac{1}{2T_0\theta_0^2\rho}\mathbb{E}\|y_0^1 - y^*\|^2,$$
where $D_{x^*} = \|x^* - \tilde{x}^0\| = \|x^* - \hat{x}^0\|$ and $D_{y^*} = \|\tilde{y}^0 - y^*\| = \|\hat{y}^0 - y^*\|$. Combining the above results with $\theta_s \le \frac{2}{s+2}$ for all s and $\theta_0 = 1 - \frac{\alpha(b)L\eta}{1-2L\eta}$, we have
$$\mathbb{E}[\mathcal{T}(\hat{x}^S, \hat{y}^S)] \le \tfrac{(1-\theta_0)\theta_{S-1}^2}{\theta_0^2}\mathcal{T}(\hat{x}^0, \hat{y}^0) + \tfrac{\theta_{S-1}^2}{2T_0\eta}D_{x^*}^2 + \tfrac{\theta_{S-1}^2}{2T_0\theta_0^2\rho}D_{y^*}^2 \le \tfrac{4\alpha(b)L\eta}{\theta_0^2(1-2L\eta)(S+1)^2}\mathcal{T}(\hat{x}^0, \hat{y}^0) + \tfrac{2}{T_0\eta(S+1)^2}D_{x^*}^2 + \tfrac{2}{T_0\theta_0^2\rho(S+1)^2}D_{y^*}^2,$$
where $\rho \le \min\{\frac{L}{M^2}, \frac{L\theta_{S-1}T_{S-1}}{T_0M^2}\}$. Note that the growth of $T_s$ increases the per-epoch complexity. To keep the total complexity of our algorithms similar to that of algorithms with a fixed inner-loop length, we usually choose $T_0 = \mathcal{O}(n/S)$, where n is the number of samples and S is the number of outer loops ($n \gg S$). Thus, our ASVR-PDHG attains a convergence rate of $\mathcal{O}\big(\frac{1}{S^2} + \frac{1}{nS}\big)$ for non-strongly convex objectives. This completes the proof. □
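As a numerical sanity check (ours, not part of the original analysis) of the bound $\theta_s \le \frac{2}{s+2}$ and of the adaptive epoch lengths used above, the following snippet iterates the two update rules from a representative (assumed) $\theta_0$ and $T_0$:

```python
import math

theta, T = 0.9, 100.0        # representative (assumed) theta_0 and T_0
for s in range(1, 21):
    # theta_s solves (1 - theta_s)/theta_s^2 = 1/theta_{s-1}^2.
    theta = (math.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
    T = T / (1 - theta)      # adaptive epoch length T_s = T_{s-1}/(1 - theta_s)
    assert theta <= 2 / (s + 2)          # the rate bound used in the proof
    print(f"s={s:2d}  theta_s={theta:.4f}  bound={2/(s+2):.4f}  T_s={T:.1f}")
```

The assertion never fires for any $\theta_0 \le 1$, which is consistent with the $\mathcal{O}(1/S^2)$ rate derived above.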

Appendix B. More Experimental Details and Results

In this appendix, we provide more experimental details and results.

Appendix B.1. Hyper-Parameter Selection

We choose the mini-batch sizes according to Figure A1 and Figure A2.
Figure A1. Comparison of the methods for solving SC problems on the small-scale synthetic dataset with different mini-batch sizes. The vertical axis is the objective value minus minimum value, and the horizontal axis is the CPU time or the number of passes through data.
Figure A2. Comparison of the methods for solving non-SC problems on the small-scale synthetic dataset with different mini-batch sizes. The vertical axis is the objective value minus minimum value, and the horizontal axis is the CPU time or the number of passes through data.

Appendix B.2. Experimental Results in Sparse and Asynchronous Parallel Setting

We also run our asynchronous parallel Algorithms A2 and A3 on SC problems. The experimental results are shown in Figure A3. It can be observed that our ASVR-PDHG always outperforms SVR-PDHG when solving SC problems under different thread counts. Moreover, SVR-PDHG and ASVR-PDHG with four threads each achieve more than a 2× speedup over their single-thread counterparts.
Figure A3. Comparison of the stochastic asynchronous parallel PDHG methods for solving SC problems with $A = I$, batch size $b = 1$, and the regularization parameters $\lambda_1 = 10^{-5}$ and $\lambda_2 = 10^{-2}$ on the sparse datasets rcv1.small and real-sim. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time (seconds).

References

  1. Esser, E.; Zhang, X.; Chan, T.F. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imag. Sci. 2010, 3, 1015–1046. [Google Scholar] [CrossRef]
  2. Goldstein, T.; Li, M.; Yuan, X. Adaptive primal-dual splitting methods for statistical learning and image processing. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 2089–2097. [Google Scholar]
  3. Zhang, N.; Fang, C. Saddle point approximation approaches for two-stage robust optimization problems. J. Glob. Optim. 2020, 78, 651–670. [Google Scholar] [CrossRef]
  4. Qiao, L.; Lin, T.; Jiang, Y.G.; Yang, F.; Liu, W.; Lu, X. On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization. In Proceedings of the Twenty-second European Conference on Artificial Intelligence, The Hague, The Netherlands, 29 August–2 September 2016; pp. 167–174. [Google Scholar]
  5. Delplancke, C.; Gurnell, M.; Latz, J.; Markiewicz, P.J.; Schönlieb, C.B.; Ehrhardt, M.J. Improving a stochastic algorithm for regularized PET image reconstruction. In Proceedings of the 2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Boston, MA, USA, 31 October–7 November 2020; pp. 1–3. [Google Scholar]
  6. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  7. Suzuki, T. Dual averaging and proximal gradient descent for online alternating direction multiplier method. In Proceedings of the International Conference on Machine Learning, PMLR, Atlanta, GA, USA, 17–19 June 2013; pp. 392–400. [Google Scholar]
  8. Wang, H.; Banerjee, A. Online alternating direction method. In Proceedings of the International Conference on Machine Learning, Edinburgh, Scotland, 26 June–1 July 2012; pp. 1699–1706. [Google Scholar]
  9. Zhao, S.Y.; Li, W.J.; Zhou, Z.H. Scalable stochastic alternating direction method of multipliers. arXiv 2015, arXiv:1502.03529. [Google Scholar]
  10. Ouyang, H.; He, N.; Tran, L.; Gray, A. Stochastic alternating direction method of multipliers. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 17–19 June 2013; pp. 80–88. [Google Scholar]
  11. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k²). Proc. USSR Acad. Sci. 1983, 269, 543–547. [Google Scholar]
  12. Xiao, L. Dual averaging method for regularized stochastic learning and online optimization. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; Volume 22. [Google Scholar]
  13. Johnson, R.; Zhang, T. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; Volume 26, pp. 315–323. [Google Scholar]
  14. Defazio, A.; Bach, F.; Lacoste-Julien, S. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 1646–1654. [Google Scholar]
  15. Mu, Y.; Liu, W.; Liu, X.; Fan, W. Stochastic Gradient Made Stable: A Manifold Propagation Approach for Large-Scale Optimization. IEEE Trans. Knowl. Data Eng. 2017, 29, 458–471. [Google Scholar] [CrossRef]
  16. Zhou, K.; Shang, F.; Cheng, J. A simple stochastic variance reduced algorithm with fast convergence rates. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5980–5989. [Google Scholar]
  17. Shang, F.; Zhou, K.; Liu, H.; Cheng, J.; Tsang, I.W.; Zhang, L.; Tao, D.; Jiao, L. VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning. IEEE Trans. Knowl. Data Eng. 2020, 32, 188–202. [Google Scholar] [CrossRef]
  18. Allen-Zhu, Z. Katyusha: The first direct acceleration of stochastic gradient methods. J. Mach. Learn. Res. 2017, 18, 8194–8244. [Google Scholar]
  19. Shang, F.; Jiao, L.; Zhou, K.; Cheng, J.; Ren, Y.; Jin, Y. ASVRG: Accelerated proximal SVRG. In Proceedings of the Asian Conference on Machine Learning, Beijing, China, 14–16 November 2018; pp. 815–830. [Google Scholar]
  20. Zhong, W.; Kwok, J. Fast stochastic alternating direction method of multipliers. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 46–54. [Google Scholar]
  21. Suzuki, T. Stochastic dual coordinate ascent with alternating direction method of multipliers. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 736–744. [Google Scholar]
  22. Shalev-Shwartz, S.; Zhang, T. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. J. Mach. Learn. Res. 2013, 14, 567–599. [Google Scholar]
  23. Zheng, S.; Kwok, J.T. Fast-and-Light Stochastic ADMM. In Proceedings of the International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 2407–2413. [Google Scholar]
  24. Liu, Y.; Shang, F.; Cheng, J. Accelerated variance reduced stochastic ADMM. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
  25. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 2011, 40, 120–145. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Lin, X. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 353–361. [Google Scholar]
  27. Tan, C.; Zhang, T.; Ma, S.; Liu, J. Stochastic primal-dual method for empirical risk minimization with O(1) per-iteration complexity. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 8376–8385. [Google Scholar]
  28. Devraj, A.M.; Chen, J. Stochastic Variance Reduced Primal Dual Algorithms for Empirical Composition Optimization. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2019; Volume 32, pp. 9882–9892. [Google Scholar]
  29. Tran-Dinh, Q.; Liu, D. Faster Randomized Primal-Dual Algorithms For Nonsmooth Composite Convex Minimization. arXiv 2020, arXiv:2003.01322. [Google Scholar]
  30. Zhu, M.; Chan, T. An efficient primal-dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Rep. 2008, 34, 8–34. [Google Scholar]
  31. He, B.; Yuan, X. Convergence analysis of primal-dual algorithms for a saddle-point problem: From contraction perspective. SIAM J. Imag. Sci. 2012, 5, 119–149. [Google Scholar] [CrossRef]
  32. Palaniappan, B.; Bach, F. Stochastic variance reduction methods for saddle-point problems. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 1416–1424. [Google Scholar]
  33. Jiang, F.; Zhang, Z.; He, H. Solving saddle point problems: A landscape of primal-dual algorithm with larger stepsizes. J. Glob. Optim. 2022, 85, 821–846. [Google Scholar] [CrossRef]
  34. Chambolle, A.; Ehrhardt, M.J.; Richtárik, P.; Schönlieb, C.B. Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM J. Optim. 2018, 28, 2783–2808. [Google Scholar] [CrossRef]
  35. Rizkinia, M.; Okuda, M. Evaluation of primal-dual splitting algorithm for MRI reconstruction using spatio-temporal structure Tensor and L1-2 norm. Makara J. Technol. 2020, 23, 126–130. [Google Scholar] [CrossRef]
  36. Xu, S. A search direction inspired primal-dual method for saddle point problems. Optim. Online 2019, 559. [Google Scholar]
  37. Jiu, M.; Pustelnik, N. A deep primal-dual proximal network for image restoration. IEEE J. Sel. Top. Signal Process. 2021, 15, 190–203. [Google Scholar] [CrossRef]
  38. Baguer, D.O.; Leuschner, J.; Schmidt, M. Computed tomography reconstruction using deep image prior and learned reconstruction methods. Inverse Probl. 2020, 36, 094004. [Google Scholar] [CrossRef]
  39. Rahman Chowdhury, M.; Zhang, J.; Qin, J.; Lou, Y. Poisson image denoising based on fractional-order total variation. Inverse Probl. Imaging 2020, 14, 77–96. [Google Scholar] [CrossRef]
  40. Chen, Y.; Lan, G.; Ouyang, Y. Optimal primal-dual methods for a class of saddle point problems. SIAM J. Optim. 2014, 24, 1779–1814. [Google Scholar] [CrossRef]
  41. Zhao, R.; Haskell, W.B.; Tan, V.Y. An optimal algorithm for stochastic three-composite optimization. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Naha, Japan, 16–18 April 2019; pp. 428–437. [Google Scholar]
  42. Song, C.; Wright, S.J.; Diakonikolas, J. Variance reduction via primal-dual accelerated dual averaging for nonsmooth convex finite-sums. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 9824–9834. [Google Scholar]
  43. Du, S.S.; Hu, W. Linear convergence of the primal-dual gradient method for convex-concave saddle point problems without strong convexity. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Naha, Japan, 16–18 April 2019; pp. 196–205. [Google Scholar]
  44. Thekumparampil, K.K.; He, N.; Oh, S. Lifted primal-dual method for bilinearly coupled smooth minimax optimization. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Virtual, 28–30 March 2022; pp. 4281–4308. [Google Scholar]
  45. Zhao, R. Accelerated stochastic algorithms for convex-concave saddle-point problems. Math. Oper. Res. 2022, 47, 1443–1473. [Google Scholar] [CrossRef]
  46. Zhu, Y.N.; Zhang, X. A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization. SIAM J. Imag. Sci. 2021, 14, 1326–1353. [Google Scholar] [CrossRef]
  47. Xie, G.; Luo, L.; Lian, Y.; Zhang, Z. Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems. In Proceedings of the 37th International Conference on Machine Learning, Online, 13–18 July 2020; pp. 10504–10513. [Google Scholar]
  48. Zhang, X.; Burger, M.; Osher, S. A unified primal-dual algorithm framework based on Bregman iteration. J. Sci. Comput. 2011, 46, 20–46. [Google Scholar] [CrossRef]
  49. Allen-Zhu, Z.; Yuan, Y. Improved SVRG for non-strongly-convex or sum-of-non-convex objectives. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 1080–1089. [Google Scholar]
  50. Mania, H.; Pan, X.; Papailiopoulos, D.; Recht, B.; Ramchandran, K.; Jordan, M.I. Perturbed iterate analysis for asynchronous stochastic optimization. SIAM J. Optim. 2017, 27, 2202–2229. [Google Scholar] [CrossRef]
  51. Sion, M. On general minimax theorems. Pac. J. Math. 1958, 8, 171–176. [Google Scholar] [CrossRef]
  52. Arjevani, Y.; Carmon, Y.; Duchi, J.C.; Foster, D.J.; Srebro, N.; Woodworth, B. Lower bounds for non-convex stochastic optimization. Math. Program. 2023, 199, 165–214. [Google Scholar] [CrossRef]
  53. Han, Y.; Xie, G.; Zhang, Z. Lower complexity bounds of finite-sum optimization problems: The results and construction. J. Mach. Learn. Res. 2024, 25, 1–86. [Google Scholar]
  54. Robbins, H.; Monro, S. A stochastic approximation method. Ann. Math. Stat. 1951, 22, 400–407. [Google Scholar] [CrossRef]
  55. Qiao, L.; Lin, T.; Qin, Q.; Lu, X. On the iteration complexity analysis of Stochastic Primal-Dual Hybrid Gradient approach with high probability. Neurocomputing 2018, 307, 78–90. [Google Scholar] [CrossRef]
  56. Liu, Y.; Shang, F.; Liu, H.; Kong, L.; Jiao, L.; Lin, Z. Accelerated variance reduction stochastic ADMM for large-scale machine learning. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 4242–4255. [Google Scholar] [CrossRef]
  57. Banerjee, O.; El Ghaoui, L.; d’Aspremont, A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 2008, 9, 485–516. [Google Scholar]
  58. Iusem, A.N.; Jofré, A.; Oliveira, R.I.; Thompson, P. Extragradient method with variance reduction for stochastic variational inequalities. SIAM J. Optim. 2017, 27, 686–724. [Google Scholar] [CrossRef]
  59. Du, S.S.; Gidel, G.; Jordan, M.I.; Li, C.J. Optimal extragradient-based bilinearly-coupled saddle-point optimization. arXiv 2022, arXiv:2206.08573. [Google Scholar]
  60. Alacaoglu, A.; Malitsky, Y. Stochastic variance reduction for variational inequality methods. In Proceedings of the Conference on Learning Theory, PMLR, London, UK, 2–5 July 2022; pp. 778–816. [Google Scholar]
  61. Boroun, M. Projection-Free and Accelerated Methods for Constrained Optimization and Saddle-Points Problems. Ph.D. Thesis, The University of Arizona, Tucson, AZ, USA, 2025. [Google Scholar]
  62. Wang, D.; Ye, M.; Xu, J. Differentially private empirical risk minimization revisited: Faster and more general. In Proceedings of the Advances in Neural Information Processing Systems, Los Angeles, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  63. Han, B.; Tsang, I.W.; Xiao, X.; Chen, L.; Fung, S.F.; Yu, C.P. Privacy-Preserving Stochastic Gradual Learning. IEEE Trans. Knowl. Data Eng. 2021, 33, 3129–3140. [Google Scholar] [CrossRef]
Figure 1. Comparison of the stochastic PDHG methods for solving SC (24) (left) and non-SC (23) (right) problems on small-scale synthetic datasets. The vertical axis is the objective value minus minimum value, and the horizontal axis is the number of passes through data.
Figure 2. Comparison of the stochastic PDHG methods for solving non-SC Problem (23) with the regularization parameter $\lambda_1 = 10^{-5}$ on the bio, phy, and epsilon datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time (top) or the number of passes through data (bottom).
Figure 3. Comparison of the three stochastic PDHG methods for solving the SC Problem (24) with the regularization parameters λ₂ = 10⁻² and λ₁ = 10⁻⁵ on the bio, phy, and epsilon datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time (top) or the number of passes through data (bottom).
Figure 4. Comparison of the stochastic asynchronous parallel PDHG methods for solving the non-SC Problem (23) with A = I, batch size b = 1, and the regularization parameter λ₁ = 10⁻⁵ on the sparse datasets rcv1.small and real-sim. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time (seconds).
Figure 5. Comparison of the stochastic methods for solving the SC (left) and non-SC (right) problems on large-scale synthetic datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time.
Figure 6. Comparison of all the stochastic methods for solving the SC (24) and non-SC (23) problems on the epsilon_test (left) and w8a (right) datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the CPU time (seconds).
Figure 7. Comparison of the EG-type stochastic methods for solving the non-SC Problem (23) on the phy (left), w8a (middle), and epsilon (right) datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the number of passes through data or CPU time (seconds).
Figure 8. Comparison of the stochastic primal–dual methods for solving min_x max_y (1/n) ∑_{i=1}^{n} F_i(x) + λ⟨x, y⟩ − (λ/2)‖y‖² on the phy (left) and w8a (right) datasets. The vertical axis is the objective value minus the minimum value, and the horizontal axis is the number of passes through data or CPU time (seconds).
Figure 9. Comparison of all the methods for solving the non-SC multi-task Problem (28) with the regularization parameter λ₁ = 10⁻⁵ on the 20newsgroups dataset. The vertical axis is the objective value minus the minimum value or the test error.
Figure 10. Comparison of the stochastic methods for solving non-convex SVMs on a synthetic dataset (left) and the phy dataset (right). The vertical axis is the objective value minus the minimum value or the test error, and the horizontal axis is the CPU time (seconds) or the number of passes through data.
Table 1. Gradient complexity of some stochastic methods.

Algorithm | Gradient Complexity
SPDHG [4], SGD [54], SADMM [10] | 𝒪(1/ϵ²)
SVR-PDHG (Ours), SVRG-ADMM [23] | 𝒪(n/ϵ + 1/ϵ)
ASVR-PDHG (Ours), ASVRG-ADMM [24] | 𝒪(n/√ϵ + √(n/ϵ))
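
As a reading aid for Table 1, the complexity column follows from the convergence rates: a minimal back-of-the-envelope sketch, assuming (as we do here for illustration) that each epoch costs on the order of n stochastic-gradient evaluations and writing c for the unspecified rate constant:

% Hedged sanity check for Table 1 (illustrative, not taken from the source article):
% assume each epoch costs O(n) stochastic-gradient evaluations; c is the rate constant.
\[
  \frac{c}{S} \le \epsilon
  \;\Longrightarrow\; S = \mathcal{O}\!\Big(\frac{1}{\epsilon}\Big)
  \;\Longrightarrow\; \text{total gradient cost} = \mathcal{O}\!\Big(\frac{n}{\epsilon}\Big),
\]
\[
  \frac{c}{S^{2}} \le \epsilon
  \;\Longrightarrow\; S = \mathcal{O}\!\Big(\frac{1}{\sqrt{\epsilon}}\Big)
  \;\Longrightarrow\; \text{total gradient cost} = \mathcal{O}\!\Big(\frac{n}{\sqrt{\epsilon}}\Big),
\]

which matches the dominant n-dependent terms for the 𝒪(1/S) methods (SVR-PDHG, SVRG-ADMM) and the 𝒪(1/S²) methods (ASVR-PDHG, ASVRG-ADMM), respectively.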
Table 2. Some real-world datasets and their mini-batch sizes used in our experiments.

Datasets | # Training Samples | # Testing Samples | # Dimension | b (SC) | b (Non-SC)
bio | 116,600 | 29,151 | 74 | 120 | 15
phy | 40,000 | 10,000 | 78 | 120 | 15
epsilon | 320,000 | 80,000 | 2000 | 120 | 15
w8a | 39,800 | 9949 | 300 | 120 | 15
epsilon_test | 80,000 | 20,000 | 2000 | 120 | 15