Article

Stochastic Variance Reduced Primal–Dual Hybrid Gradient Methods for Saddle-Point Problems

1 The Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an 710126, China
2 The College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
3 Medical College, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(10), 1687; https://doi.org/10.3390/math13101687
Submission received: 24 March 2025 / Revised: 21 April 2025 / Accepted: 22 April 2025 / Published: 21 May 2025

Abstract

Recently, many stochastic Alternating Direction Methods of Multipliers (ADMMs) have been proposed to solve large-scale machine learning problems. However, for large-scale saddle-point problems, the state-of-the-art (SOTA) stochastic ADMMs still have high per-iteration costs. On the other hand, the stochastic primal–dual hybrid gradient (SPDHG) method has a low per-iteration cost but only a suboptimal convergence rate of O(1/S). Thus, a gap remains between the convergence rates of SPDHG and SOTA ADMMs. Motivated by these two issues, we propose (accelerated) stochastic variance reduced primal–dual hybrid gradient ((A)SVR-PDHG) methods. We design a linear extrapolation step to improve the convergence rate and a new adaptive epoch length strategy to remove the extra boundedness assumption. Our algorithms have a simpler structure and lower per-iteration complexity than SOTA ADMMs. As a by-product, we present asynchronous parallel variants of our algorithms. In theory, we rigorously prove that our methods converge linearly for strongly convex problems and improve the convergence rate to O(1/S^2) for non-strongly convex problems, as opposed to the existing O(1/S) rate. Compared with SOTA algorithms, various experimental results demonstrate that ASVR-PDHG achieves an average speedup of 2× to 5×.
Keywords: saddle-point problem; stochastic optimization; variance reduction; asynchronous parallelism
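
To make the algorithmic idea concrete, the sketch below combines an SVRG-style variance-reduced gradient estimator with primal–dual hybrid gradient updates and a linear extrapolation step. It is an illustrative outline only, not the paper's exact (A)SVR-PDHG algorithm: the step sizes tau and sigma, the extrapolation parameter theta, the fixed inner-loop length epoch_len, and the callables grad_f_i and prox_g_conj are assumptions for exposition, and the sketch omits the paper's adaptive epoch length strategy and acceleration scheme.

```python
import numpy as np

def svr_pdhg_sketch(K, grad_f_i, prox_g_conj, x0, y0, n,
                    tau=0.01, sigma=0.01, theta=1.0,
                    epochs=30, epoch_len=None, rng=None):
    """Illustrative variance-reduced PDHG loop (not the paper's exact method).

    Targets the saddle-point form of min_x (1/n) * sum_i f_i(x) + g(K x),
    using a full-gradient snapshot per epoch to reduce the variance of the
    stochastic primal gradient (SVRG-style estimator).
    """
    rng = np.random.default_rng() if rng is None else rng
    m = epoch_len or n              # inner-loop length (the paper uses an adaptive rule)
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()                # extrapolated primal point fed to the dual step

    for _ in range(epochs):
        x_snap = x.copy()           # snapshot point for variance reduction
        full_grad = np.mean([grad_f_i(x_snap, i) for i in range(n)], axis=0)

        for _ in range(m):
            i = rng.integers(n)
            # SVRG-style variance-reduced gradient estimator.
            v = grad_f_i(x, i) - grad_f_i(x_snap, i) + full_grad

            # Dual ascent step on the extrapolated primal point.
            y = prox_g_conj(y + sigma * (K @ x_bar), sigma)

            # Primal descent step with the variance-reduced gradient.
            x_new = x - tau * (v + K.T @ y)

            # Linear extrapolation; the accelerated variant in the paper
            # updates this parameter in a more refined way.
            x_bar = x_new + theta * (x_new - x)
            x = x_new

    return x, y
```

Here grad_f_i(x, i) is assumed to return the gradient of the i-th component function at x, and prox_g_conj(z, sigma) the proximal operator of sigma times the convex conjugate of g; both are problem-specific and supplied by the user.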

Share and Cite

MDPI and ACS Style

An, W.; Liu, Y.; Shang, F.; Liu, H. Stochastic Variance Reduced Primal–Dual Hybrid Gradient Methods for Saddle-Point Problems. Mathematics 2025, 13, 1687. https://doi.org/10.3390/math13101687

