Article

DARN: Distributed Adaptive Regularized Optimization with Consensus for Non-Convex Non-Smooth Composite Problems

School of Mathematics and Information Science, Northern Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2025, 17(7), 1159; https://doi.org/10.3390/sym17071159
Submission received: 7 June 2025 / Revised: 9 July 2025 / Accepted: 14 July 2025 / Published: 20 July 2025
(This article belongs to the Section Mathematics)

Abstract

This paper proposes a Distributed Adaptive Regularization Algorithm (DARN) for solving composite non-convex and non-smooth optimization problems in multi-agent systems. The algorithm employs a three-phase iterative framework to achieve efficient collaborative optimization: (1) a local regularized optimization step, which utilizes proximal mappings to enforce strong convexity of weakly convex objectives and ensure subproblem well-posedness; (2) a consensus update based on doubly stochastic matrices, guaranteeing asymptotic convergence of agent states to a global consensus point; and (3) an innovative adaptive regularization mechanism that dynamically adjusts regularization strength using local function value variations to balance stability and convergence speed. Theoretical analysis demonstrates that the algorithm maintains strict monotonic descent under non-convex and non-smooth conditions by constructing a mixed time-scale Lyapunov function, achieving a sublinear convergence rate. Notably, we prove that the projection-based update rule for regularization parameters preserves lower-bound constraints, while spectral decay properties of consensus errors and perturbations from local updates are globally governed by the Lyapunov function. Numerical experiments validate the algorithm’s superiority in sparse principal component analysis and robust matrix completion tasks, showing a 6.6% improvement in convergence speed and a 51.7% reduction in consensus error compared to fixed-regularization methods. This work provides theoretical guarantees and an efficient framework for distributed non-convex optimization in heterogeneous networks.
Keywords: distributed optimization; adaptive regularization; non-convex and non-smooth optimization; consensus algorithms; Lyapunov convergence; proximal gradient methods
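The three-phase iteration described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the quadratic local losses, the l1 non-smooth term, the step-size choice, and the adaptive rule driving the regularization update are all assumptions introduced here for concreteness.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal mapping of the l1 norm, used here as an example of the
    # non-smooth component of the composite objective.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def darn_step(X, W, grads, rho, lam, eta):
    """One DARN-style iteration (illustrative).

    X     : (n_agents, d) current local states
    W     : (n_agents, n_agents) doubly stochastic mixing matrix
    grads : list of local smooth-gradient callables, one per agent
    rho   : (n_agents,) per-agent regularization strengths
    """
    n, d = X.shape
    X_half = np.empty_like(X)
    # Phase 1: local regularized proximal step; rho[i] adds strong
    # convexity, making each weakly convex subproblem well-posed.
    for i in range(n):
        step = 1.0 / (1.0 + rho[i])
        X_half[i] = soft_threshold(X[i] - step * grads[i](X[i]), step * lam)
    # Phase 2: consensus update with the doubly stochastic matrix W,
    # driving all agent states toward a common point.
    X_new = W @ X_half
    # Phase 3: adaptive regularization via a projected update; here the
    # local state change stands in for the function-value variation, and
    # the projection enforces the lower-bound constraint on rho.
    rho_new = np.maximum(
        rho + eta * (np.linalg.norm(X_new - X, axis=1) - 0.1), 1e-3)
    return X_new, rho_new
```

On a toy problem where each agent holds a quadratic loss and W is uniform averaging, repeated calls drive the consensus error toward zero while every rho stays above its floor.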

Share and Cite

MDPI and ACS Style

Li, C.; Ma, Y. DARN: Distributed Adaptive Regularized Optimization with Consensus for Non-Convex Non-Smooth Composite Problems. Symmetry 2025, 17, 1159. https://doi.org/10.3390/sym17071159


