Article

Saddle Points of Partial Augmented Lagrangian Functions

1 School of Mathematics and Statistics, Shandong University of Technology, Zibo 255000, China
2 School of Mathematics and Statistics, Xinyang Normal University, Xinyang 464000, China
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2025, 30(5), 110; https://doi.org/10.3390/mca30050110
Submission received: 2 September 2025 / Revised: 23 September 2025 / Accepted: 6 October 2025 / Published: 8 October 2025

Abstract

In this paper, we study a class of optimization problems with separable constraint structures, characterized by a combination of convex and nonconvex constraints. To handle these two distinct types of constraints, we introduce a partial augmented Lagrangian function by retaining nonconvex constraints while relaxing convex constraints into the objective function. Specifically, we employ the Moreau envelope for the convex term and apply second-order variational geometry to analyze the nonconvex term. For this partial augmented Lagrangian function, we study its saddle points and establish their relationship with KKT conditions. Furthermore, second-order optimality conditions are developed by employing tools such as second-order subdifferentials, asymptotic second-order tangent cones, and second-order tangent sets.

1. Introduction

Optimization problems have attracted significant attention due to their wide range of practical applications. In real-world scenarios, many problems exhibit special structures, for example,
\[ \min\; f(x) := f_1(x) + f_2(x), \tag{1} \]
where $f_1, f_2 : \mathbb{R}^n \to \mathbb{R}$ and $f_2$ is regarded as a regularization term that imposes certain structure on $f_1$ to yield a desired solution; see [1,2,3,4,5] for more information. In particular, the functions $f_1$ and $f_2$ often exhibit distinct properties and thus require different numerical treatment strategies. For example, in the LASSO problem, $f_1$ is typically a smooth function, whereas $f_2$ is nonsmooth. Given a constrained optimization problem,
\[ \min\; f(x), \quad \text{s.t.}\;\; g(x) \in \Gamma, \tag{2} \]
where $g : \mathbb{R}^n \to \mathbb{R}^m$ and $\Gamma$ is a closed subset of $\mathbb{R}^m$, the classical Lagrangian function provides a relaxation of the primal problem (2). However, if either $g$ or $\Gamma$ is nonconvex, the presence of nonconvex constraints typically leads to a nonzero duality gap, so alternative strategies must be considered. In this case, we can employ the augmented Lagrangian function [6,7,8]. However, this approach requires the penalty parameter $\tau$ to be sufficiently large to achieve the zero duality gap property between the primal and dual problems, which inevitably introduces computational difficulties such as numerical instability.
Note that separable structures appear not only in the objective function as shown in (1) but also in the constraint system. In fact, in some applications, the constraint system can also be categorized into different types [9,10,11,12,13,14]. For example, the author in [14] considers the following optimization problem:
\[ \min\; f(x), \quad \text{s.t.}\;\; g(x) \le 0, \;\; h(x) = 0, \;\; G_i(x)\, H_i(x) = 0, \; i = 1, \dots, l, \]
where $g : \mathbb{R}^n \to \mathbb{R}^m$, $h : \mathbb{R}^n \to \mathbb{R}^p$, $G, H : \mathbb{R}^n \to \mathbb{R}^l$. The first two constraints are of the classical nonlinear programming form. The third constraint, however, requires at least one of the corresponding component functions of $G$ and $H$ to be zero, and hence, the problem is referred to as a mathematical program with switching constraints. If the third constraint is replaced by the complementarity constraint $G(x) \ge 0$, $H(x) \ge 0$, $G(x) \perp H(x)$, or equivalently $(G(x), H(x)) \in \Gamma := \{(u, v) \mid u \ge 0, \; v \ge 0, \; u \perp v\}$, then the corresponding problem is termed a mathematical program with complementarity constraints. Due to the special structure of the third constraint, we need to apply distinct analytical approaches to the first two constraints and the third constraint separately. In this paper, we consider the following structured constrained optimization problem:
\[ (P) \qquad \min\; f(x) \quad \text{s.t.} \quad G(x) \in K, \tag{3a} \]
\[ \phantom{(P) \qquad \min\; f(x) \quad \text{s.t.} \quad} H(x) \in D, \tag{3b} \]
which corresponds to problem (2) by setting $g(x) := (G(x), H(x))$ and $\Gamma := K \times D$. The main reason for partitioning the constraint system into two components stems from their fundamentally different characteristics: in (3a), $G : \mathbb{R}^n \to \mathbb{R}^m$ is continuously differentiable with a Lipschitz-continuous derivative (i.e., $G \in C^{1,1}$), and $K \subseteq \mathbb{R}^m$ is a convex set; meanwhile, in (3b), $H : \mathbb{R}^n \to \mathbb{R}^p$ is twice continuously differentiable (i.e., $H \in C^2$), but $D \subseteq \mathbb{R}^p$ is a closed (but not necessarily convex) set. The objective function $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable with a Lipschitz gradient, i.e., $f \in C^{1,1}$.
We adopt different strategies to handle these two distinct types of constraints in (3). For the convex constraint (3a), since $K$ is convex, we utilize the Moreau envelope for regularization. Recall that for a convex function $g : \mathbb{R}^m \to \overline{\mathbb{R}}$, the Moreau envelope with parameter $\nu > 0$ is defined as follows:
\[ e_{\nu} g(y) := \inf_{w \in \mathbb{R}^m} \Big\{ g(w) + \frac{1}{2\nu} \| y - w \|^2 \Big\}. \tag{4} \]
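As a quick illustration (our sketch, not part of the paper), the Moreau envelope of $g(w) = |w|$ is the classical Huber function, which a brute-force minimization of (4) over a fine grid reproduces:

```python
# Brute-force check (our sketch) that the Moreau envelope of g(w) = |w|
# equals the Huber function:
#   e_nu g(y) = y^2 / (2 nu)    if |y| <= nu,
#   e_nu g(y) = |y| - nu / 2    otherwise.
def moreau_envelope(g, y, nu, grid):
    # e_nu g(y) = inf_w { g(w) + (y - w)^2 / (2 nu) }, approximated over a grid
    return min(g(w) + (y - w) ** 2 / (2 * nu) for w in grid)

def huber(y, nu):
    return y * y / (2 * nu) if abs(y) <= nu else abs(y) - nu / 2

nu = 0.5
grid = [i / 1000.0 for i in range(-4000, 4001)]  # fine grid on [-4, 4]
for y in (-2.0, -0.3, 0.0, 0.25, 1.7):
    assert abs(moreau_envelope(abs, y, nu, grid) - huber(y, nu)) < 1e-4
```

The smoothing effect mentioned below is visible here: $|\cdot|$ has a kink at the origin, while its envelope is differentiable everywhere.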
A key advantage of the Moreau envelope lies in its smoothing property: even if $g$ is nonsmooth, $e_{\nu} g$ remains smooth, and hence, $e_{\nu} g$ serves as a smooth approximation of $g$. For the nonconvex constraint (3b), as mentioned above, since $D$ is nonconvex, the augmented Lagrangian function requires sufficiently large penalty parameters, which in turn leads to poor numerical performance. To overcome this drawback, we choose to retain this constraint explicitly rather than relaxing it into the objective function. Based on these considerations, in this paper we mainly study the following partial augmented Lagrangian function:
\[ L(x, u, \tau) := f(x) + e_{\tau^{-1}} \delta_K \big( \tau^{-1} u + G(x) \big) - \frac{1}{2\tau} \| u \|^2, \tag{5} \]
where $u \in \mathbb{R}^m$ is the multiplier and $\tau > 0$ is a penalty parameter. The key difference between the proximal Lagrangian and the partial augmented Lagrangian lies in their treatment of nonconvex constraints: the former generally relaxes or regularizes nonconvex constraints by introducing a proximal term, whereas the latter retains nonconvex constraints directly within the constraint set and applies relaxation only to the convex part. A modified Lagrangian of this form has been studied in [8] for sparse optimization problems, where $D$ denotes the sparse constraint, i.e., $D := \{ x \mid \| x \|_0 \le s \}$. In that case, $D$ is a union of polyhedra, but in this paper, we require $D$ to be merely a closed set, without any additional structure. In addition, when analyzing the second-order variational geometry of a set, one typically requires the set to be second-order regular or its second-order tangent set to be nonempty [15,16,17]. Our approach eliminates this requirement.
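To make (5) concrete, note that in the special case $K = \mathbb{R}^m_-$ (inequality constraints $G(x) \le 0$) the envelope term has the closed form $e_{\tau^{-1}} \delta_K(z) = \tfrac{\tau}{2} \| \max(0, z) \|^2$, so (5) recovers the classical augmented Lagrangian for inequalities. The following sketch (ours; the toy $f$ and $G$ are made up) evaluates $L$ in this case:

```python
# Sketch (our illustration, not code from the paper): the partial augmented
# Lagrangian L(x,u,tau) = f(x) + e_{1/tau} delta_K(u/tau + G(x)) - ||u||^2/(2 tau)
# for K = R^m_-, where e_{1/tau} delta_K(z) = (tau/2) * ||max(0, z)||^2.
def partial_aug_lagrangian(f, G, x, u, tau):
    z = [u_i / tau + g_i for u_i, g_i in zip(u, G(x))]
    dist_sq = sum(max(0.0, z_i) ** 2 for z_i in z)   # dist(z, K)^2 for K = R^m_-
    return f(x) + 0.5 * tau * dist_sq - sum(u_i ** 2 for u_i in u) / (2 * tau)

# Toy problem: f(x) = x^2 with the single constraint G(x) = x - 1 <= 0.
f = lambda x: x * x
G = lambda x: [x - 1.0]

# At a feasible point with u in N_K(G(x)) (here u = 0), L reduces to f(x).
assert partial_aug_lagrangian(f, G, 0.5, [0.0], 2.0) == f(0.5)
# At an infeasible point the quadratic penalty term is active.
assert partial_aug_lagrangian(f, G, 2.0, [0.0], 2.0) > f(2.0)
```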
The main work of this paper is summarized in the following three aspects:
(i)
Saddle points and Karush–Kuhn–Tucker (KKT) conditions. We first analyze the local/global saddle points of the partial augmented Lagrangian function (5). The relationships between saddle points, minimizers, and KKT points of problem (3) are established. In particular, if $(x^*, u^*)$ is a local/global saddle point of $L$, then $x^*$ is a local/global minimizer of problem (3). Furthermore, $(x^*, u^*, v^*)$ with $v^* \in N_D(H(x^*))$ becomes a KKT point, provided that the metric subregularity constraint qualification (MSCQ) holds. Conversely, if $(x^*, u^*, v^*)$ is a KKT point, then $(x^*, u^*)$ is a global saddle point of $L(x, u, \tau)$ for all $\tau > 0$, provided that problem (3) is convex, where we only require the set $K$ to be closed under addition, not necessarily to be a cone. The relationship between saddle points and the dual problem associated with the partial augmented Lagrangian is also discussed.
(ii)
Second-order analysis for $C^{1,1}$ functions. According to the definition of saddle points (15) below, $x^*$ is a minimizer of the following problem:
\[ \min\; L(x, u, \tau) \quad \text{s.t.} \quad x \in C, \tag{6} \]
where $C := H^{-1}(D)$. Note that since $f$ and $G$ are $C^{1,1}$ functions, $L$ also belongs to the $C^{1,1}$ class, rather than being twice continuously differentiable. To establish second-order optimality conditions for problem (6), we need to study the second-order approximation of $L$. Toward this end, we employ the second-order subdifferential $\partial_x^2 L(x, u, \tau)$, defined as the coderivative of the gradient $\nabla_x L(x, u, \tau)$. It enables us to obtain upper and lower bounds for the first-order Taylor expansion of $L(x, u, \tau)$ with respect to $x$. In the theoretical analysis, the outer semicontinuity and local boundedness properties of $\partial_x^2 L$ for $C^{1,1}$ functions play an important role. Many works on second-order optimality conditions traditionally require the functions to be twice continuously differentiable; see [7,17,18,19,20,21]. Our work relaxes this requirement from twice continuous differentiability to first-order continuous differentiability with Lipschitzian gradients, i.e., from $C^2$ to $C^{1,1}$.
(iii)
Second-order variational geometry of the constraint set $C = H^{-1}(D)$. Since the constraint $H(x) \in D$ is explicitly retained in the constraint system, it is necessary to study its variational geometric properties. The traditional approach for describing second-order geometric information of sets utilizes the concept of second-order tangent sets. However, this set may be empty even for convex sets. To overcome this limitation, we further study the asymptotic second-order tangent cone. A key theoretical result is that the second-order tangent set and the asymptotic second-order tangent cone cannot be empty simultaneously (Proposition 2.1 in [22]). Therefore, the asymptotic second-order tangent cone serves as a complementary tool to second-order tangent sets, and their combination provides a complete characterization of the set's second-order geometric information. By combining the geometric analysis of the constraint system with the aforementioned second-order analysis of the objective function, we establish second-order optimality conditions for problem (6).
The structure of this paper is organized as follows. Section 2 presents fundamental notations and related results in the field of variational analysis. Section 3 introduces the partial augmented Lagrangian function and investigates its saddle point properties. Section 4 develops second-order optimality conditions for the partial augmented Lagrangian. Conclusions are drawn in Section 5.

2. Basic Notations and Tools in Variational Analysis

In this section, we first recall some notations and fundamental results in variational analysis which are used throughout the paper.
Let $B$ denote the closed unit ball in $\mathbb{R}^n$. For a nonempty set $S$, the support function is defined by $\sigma(x \mid S) := \sup_{u \in S} \langle x, u \rangle$. The indicator function is defined as follows: $\delta_S(x) = 0$ if $x \in S$, and $\delta_S(x) = +\infty$ otherwise. The metric projection of a point $x$ onto the set $S$ is denoted by $P_S(x)$. For a set-valued mapping $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^p$, its graph and inverse mapping are $\operatorname{gph} F := \{(x, y) \in \mathbb{R}^n \times \mathbb{R}^p \mid y \in F(x)\}$ and $F^{-1}(y) := \{x \in \mathbb{R}^n \mid y \in F(x)\}$. The Painlevé–Kuratowski upper/outer limit of a set-valued mapping $F$ at a point $x$ is defined as
\[ \limsup_{x' \to x} F(x') := \big\{ y \in \mathbb{R}^p \mid \exists\, x_k \to x, \; y_k \to y \text{ with } y_k \in F(x_k) \big\}. \]
The Bouligand–Severi tangent/contingent cone to a closed set $S$ at a point $\bar{x} \in S$ is
\[ T_S(\bar{x}) := \limsup_{t \downarrow 0} \frac{S - \bar{x}}{t} = \big\{ d \in \mathbb{R}^n \mid \exists\, t_k \downarrow 0, \; d_k \to d \text{ with } \bar{x} + t_k d_k \in S \big\}. \]
The Fréchet normal cone and the limiting/Mordukhovich/basic normal cone of $S$ at $\bar{x}$ are given by
\[ \widehat{N}_S(\bar{x}) := \Big\{ v \in \mathbb{R}^n \;\Big|\; \limsup_{x \xrightarrow{S} \bar{x}} \frac{\langle v, x - \bar{x} \rangle}{\| x - \bar{x} \|} \le 0 \Big\}, \qquad N_S(\bar{x}) := \limsup_{x \to \bar{x}} \widehat{N}_S(x), \]
where $x \xrightarrow{S} \bar{x}$ represents the convergence of $x$ to $\bar{x}$ with $x \in S$. If $S$ is convex, the Fréchet and limiting normal cones coincide. For a given direction $d \in \mathbb{R}^n$, the limiting normal cone to $S$ in direction $d$ at $\bar{x}$ is defined as
\[ N_S(\bar{x}; d) := \limsup_{t \downarrow 0, \, d' \to d} \widehat{N}_S(\bar{x} + t d') = \big\{ v \in \mathbb{R}^n \mid \exists\, t_k \downarrow 0, \; d_k \to d, \; v_k \to v \text{ with } v_k \in \widehat{N}_S(\bar{x} + t_k d_k) \big\}. \]
If, in particular, $d = 0$, then $N_S(\bar{x}; 0)$ coincides with $N_S(\bar{x})$.
Now we are ready to review two kinds of second-order tangent sets, both of which play fundamental roles in the second-order analysis later.
Definition 1
([16,22]). Let $S \subseteq \mathbb{R}^n$, $\bar{x} \in S$ and $d \in T_S(\bar{x})$.
(i) 
The outer second-order tangent set to $S$ at $\bar{x}$ in direction $d$ is defined by
\[ T_S^2(\bar{x}; d) := \big\{ w \in \mathbb{R}^n \mid \exists\, t_k \downarrow 0, \; w_k \to w \text{ with } \bar{x} + t_k d + \tfrac{1}{2} t_k^2 w_k \in S \big\}. \]
(ii) 
The asymptotic second-order tangent cone to $S$ at $\bar{x}$ in direction $d$ is defined by
\[ T_S''(\bar{x}; d) := \big\{ w \in \mathbb{R}^n \mid \exists\, (t_k, r_k) \downarrow (0, 0), \; w_k \to w \text{ with } t_k / r_k \to 0 \text{ and } \bar{x} + t_k d + \tfrac{1}{2} t_k r_k w_k \in S \big\}. \]
The asymptotic second-order tangent cone was first introduced by Penot [22] in the study of optimality conditions for scalar optimization. Note that the asymptotic second-order tangent cone is indeed a cone, while the second-order tangent set may fail to be a cone and may even be empty; see, e.g., Bonnans and Shapiro (Example 3.29 in [16]). An important fact is that both sets cannot be empty simultaneously (Proposition 2.1 in [22]), i.e., $T_S^2(\bar{x}; d) \cup \big( T_S''(\bar{x}; d) \setminus \{0\} \big) \neq \emptyset$. In this sense, these two sets together sufficiently describe the second-order information of the involved set. In addition, according to the definitions, it is easy to see that
\[ T_S^2(\bar{x}; t d) = t^2 \, T_S^2(\bar{x}; d) \quad \text{and} \quad T_S''(\bar{x}; t d) = t^2 \, T_S''(\bar{x}; d), \qquad \forall\, t > 0. \]
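To make Definition 1 concrete, here is a small worked example (ours, not from the paper), writing $T_S^2$ for the outer second-order tangent set and $T_S''$ for the asymptotic second-order tangent cone. Consider the epigraph of a parabola in the plane:

\[
S = \{ (x_1, x_2) \in \mathbb{R}^2 \mid x_2 \ge x_1^2 \}, \qquad \bar{x} = (0, 0), \qquad d = (1, 0) \in T_S(\bar{x}).
\]
A point $\bar{x} + t d + \tfrac{1}{2} t^2 w$ lies in $S$ if and only if $\tfrac{1}{2} t^2 w_2 \ge \big( t + \tfrac{1}{2} t^2 w_1 \big)^2 = t^2 \big( 1 + O(t) \big)$, which as $t \downarrow 0$ forces $w_2 \ge 2$; hence
\[
T_S^2(\bar{x}; d) = \{ w \in \mathbb{R}^2 \mid w_2 \ge 2 \},
\]
a nonempty set that is not a cone. For the asymptotic cone, the membership $\bar{x} + t_k d + \tfrac{1}{2} t_k r_k w_k \in S$ with $t_k / r_k \to 0$ requires only $\tfrac{1}{2} r_k w_{k,2} \ge t_k \big( 1 + \tfrac{1}{2} r_k w_{k,1} \big)^2$, which can be satisfied with $w_k \to w$ for any $w_2 \ge 0$, giving
\[
T_S''(\bar{x}; d) = \{ w \in \mathbb{R}^2 \mid w_2 \ge 0 \},
\]
which is indeed a cone. Replacing $d$ by $t d$ with $t > 0$ rescales both sets by $t^2$, in line with the scaling identities of this section.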
We next recall the concept of a directional neighborhood.
Definition 2
([23]). Given a direction $d \in \mathbb{R}^n$ and positive numbers $\rho, \delta > 0$, the directional neighborhood of direction $d$ is defined as follows:
\[ V_{\rho, \delta}(d) := \begin{cases} \delta B, & \text{if } d = 0, \\[4pt] \Big\{ w \in \delta B \setminus \{0\} \;\Big|\; \Big\| \dfrac{w}{\|w\|} - \dfrac{d}{\|d\|} \Big\| \le \rho \Big\} \cup \{0\}, & \text{if } d \neq 0. \end{cases} \]
The concept of directional metric subregularity, as a directional version of constraint qualifications, plays an important role in developing variational geometric properties of constraint systems.
Definition 3
(Directional Metric Subregularity, [23]). Given a multifunction $M : \mathbb{R}^n \rightrightarrows \mathbb{R}^p$ and $(\bar{x}, \bar{y}) \in \operatorname{gph} M$, the mapping $M$ is said to be metrically subregular at $(\bar{x}, \bar{y})$ in direction $d \in \mathbb{R}^n$ if there are positive numbers $\rho, \delta, \kappa > 0$ such that
\[ d\big( x, M^{-1}(\bar{y}) \big) \le \kappa \, d\big( \bar{y}, M(x) \big), \qquad \forall\, x \in \bar{x} + V_{\rho, \delta}(d). \]
The infimum of $\kappa$ over all combinations of $\rho$, $\delta$, and $\kappa$ satisfying the above relation is called the modulus of directional metric subregularity. In the case of $d = 0$, we simply say that $M$ is metrically subregular at $(\bar{x}, \bar{y})$.
For the constraint system $H(x) \in D$, we say that the metric subregularity constraint qualification (MSCQ) holds at $\bar{x}$ in direction $d$ if the set-valued mapping $M(x) := H(x) - D$ is metrically subregular at $(\bar{x}, 0)$ in direction $d$.
Lemma 1
(Lemma 2.2, [21]). Let $\bar{x} \in C := H^{-1}(D)$ and assume that MSCQ holds in direction $d \in \mathbb{R}^n$ for the constraint system $H(x) \in D$ with modulus $\kappa > 0$. Then,
\[ N_C(\bar{x}; d) \subseteq \big\{ v \in \mathbb{R}^n \mid \exists\, \lambda \in N_D\big( H(\bar{x}); \nabla H(\bar{x}) d \big) \cap \kappa \| v \| B \text{ with } v = \nabla H(\bar{x})^{\top} \lambda \big\}. \tag{8} \]
The relations between the second-order tangent sets and asymptotic second-order tangent cones of the sets $C := H^{-1}(D)$ and $D$ under directional MSCQ are given below.
Lemma 2
(Proposition 2.2, [21]). Let $\bar{x} \in C := H^{-1}(D)$ and $d \in T_C(\bar{x})$. Then
\[ T_C^2(\bar{x}; d) \subseteq \big\{ w \in \mathbb{R}^n \mid \nabla H(\bar{x}) w + \nabla^2 H(\bar{x})(d, d) \in T_D^2\big( H(\bar{x}); \nabla H(\bar{x}) d \big) \big\} \tag{9} \]
and
\[ T_C''(\bar{x}; d) \subseteq \big\{ w \in \mathbb{R}^n \mid \nabla H(\bar{x}) w \in T_D''\big( H(\bar{x}); \nabla H(\bar{x}) d \big) \big\}. \tag{10} \]
If, in addition, MSCQ holds in direction $d \in \mathbb{R}^n$ for the constraint system $H(x) \in D$ with modulus $\kappa > 0$, then (9) and (10) hold as equalities and the following estimates hold:
\[ d\big( w, T_C^2(\bar{x}; d) \big) \le \kappa \, d\Big( \nabla H(\bar{x}) w + \nabla^2 H(\bar{x})(d, d), \; T_D^2\big( H(\bar{x}); \nabla H(\bar{x}) d \big) \Big), \qquad \forall\, w \in \mathbb{R}^n, \]
and
\[ d\big( w, T_C''(\bar{x}; d) \big) \le \kappa \, d\Big( \nabla H(\bar{x}) w, \; T_D''\big( H(\bar{x}); \nabla H(\bar{x}) d \big) \Big), \qquad \forall\, w \in \mathbb{R}^n. \]
Let $\varphi : \mathbb{R}^n \to \overline{\mathbb{R}}$ be an extended-real-valued function, and suppose that $\bar{x} \in \mathbb{R}^n$ satisfies $| \varphi(\bar{x}) | < +\infty$. The limiting (Mordukhovich) subdifferential of $\varphi$ at $\bar{x}$ is defined as
\[ \partial \varphi(\bar{x}) := \big\{ v \in \mathbb{R}^n \mid (v, -1) \in N_{\operatorname{epi} \varphi}\big( \bar{x}, \varphi(\bar{x}) \big) \big\}, \]
where $\operatorname{epi} \varphi$ stands for the epigraph of $\varphi$.
Definition 4
([24]). Let $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^p$ be a multifunction and $(\bar{x}, \bar{y}) \in \operatorname{gph} F$. The limiting (Mordukhovich) coderivative of $F$ at $(\bar{x}, \bar{y})$ is the multifunction $D^* F(\bar{x}, \bar{y}) : \mathbb{R}^p \rightrightarrows \mathbb{R}^n$ with the values
\[ D^* F(\bar{x}, \bar{y})(v) := \big\{ u \in \mathbb{R}^n \mid (u, -v) \in N_{\operatorname{gph} F}(\bar{x}, \bar{y}) \big\}, \qquad v \in \mathbb{R}^p. \]
If $(\bar{x}, \bar{y}) \notin \operatorname{gph} F$, one puts $D^* F(\bar{x}, \bar{y})(v) = \emptyset$ for any $v \in \mathbb{R}^p$. We simply write $D^* F(\bar{x})$ when $F$ is single-valued at $\bar{x}$ and $\bar{y} = F(\bar{x})$.
By employing the notion of coderivative, we can establish the second-order generalized differential theory for extended-real-valued functions. This theoretical framework has become increasingly significant in areas such as second-order conditions, stability theory, and algorithmic analysis [25,26,27].
Definition 5
([24]). Let $\varphi : \mathbb{R}^n \to \overline{\mathbb{R}}$ be a function with a finite value at $\bar{x}$. For any $\bar{y} \in \partial \varphi(\bar{x})$, the map $\partial^2 \varphi(\bar{x}, \bar{y}) : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ with the values
\[ \partial^2 \varphi(\bar{x}, \bar{y})(v) := D^* (\partial \varphi)(\bar{x}, \bar{y})(v) = \big\{ u \in \mathbb{R}^n \mid (u, -v) \in N_{\operatorname{gph} \partial \varphi}(\bar{x}, \bar{y}) \big\} \]
is said to be the limiting (Mordukhovich) second-order subdifferential of $\varphi$ at $\bar{x}$ relative to $\bar{y}$. If $\partial \varphi(\bar{x})$ is a singleton, we simply write $\partial^2 \varphi(\bar{x})(v)$ for convenience.
If $\varphi$ is twice continuously differentiable in a neighborhood of $\bar{x}$, by Proposition 1.119 in [28],
\[ \partial^2 \varphi(\bar{x})(v) = \big\{ \nabla^2 \varphi(\bar{x})^{\top} v \big\} = \big\{ \nabla^2 \varphi(\bar{x}) v \big\}, \qquad \forall\, v \in \mathbb{R}^n, \]
where $\nabla^2 \varphi(\bar{x})$ denotes the Hessian matrix of $\varphi$ at $\bar{x}$. Denote by $C^{1,1}$ the class of real-valued functions $\varphi$ which are Fréchet differentiable and whose gradient mapping $\nabla \varphi(\cdot)$ is locally Lipschitz. According to Proposition 1.120 in [28], if $\varphi \in C^{1,1}$, one has
\[ \partial^2 \varphi(\bar{x})(v) = \partial^2 \varphi\big( \bar{x}, \nabla \varphi(\bar{x}) \big)(v) = \partial \langle v, \nabla \varphi(\cdot) \rangle(\bar{x}), \qquad \forall\, v \in \mathbb{R}^n. \]
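As a simple illustration (our example, not taken from the paper), consider the one-dimensional $C^{1,1}$ function

\[
\varphi(x) = \tfrac{1}{2}\, x \, |x|, \qquad \nabla \varphi(x) = |x|,
\]
whose gradient is globally Lipschitz but not differentiable at the origin, so $\varphi \in C^{1,1} \setminus C^2$. The last formula above gives $\partial^2 \varphi(0)(v) = \partial \big( v \, |\cdot| \big)(0)$, and computing the limiting subdifferential of $x \mapsto v|x|$ at $0$ yields
\[
\partial^2 \varphi(0)(v) = \begin{cases} [-v, v], & v \ge 0, \\ \{v, -v\}, & v < 0, \end{cases}
\]
which is genuinely set-valued at the origin and positively homogeneous in $v$, consistent with Lemma 3 (i) below.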
Lemma 3
(Proposition 2.6, [29]). Let $\varphi \in C^{1,1}$. The following assertions hold:
(i) 
For any $\lambda > 0$, one has $\partial^2 \varphi(\bar{x})(\lambda v) = \lambda \, \partial^2 \varphi(\bar{x})(v)$, $\forall\, v \in \mathbb{R}^n$.
(ii) 
For any $v \in \mathbb{R}^n$ the mapping $x \mapsto \partial^2 \varphi(x)(v)$ is locally bounded. Moreover, if $x_k \to \bar{x}$, $v_k \to v$, $x_k^* \to x^*$ with $x_k^* \in \partial^2 \varphi(x_k)(v_k)$ for all $k \in \mathbb{N}$, then $x^* \in \partial^2 \varphi(\bar{x})(v)$.
The following result establishes upper and lower bounds for the first-order Taylor approximation of $C^{1,1}$ functions by employing the limiting second-order subdifferential.
Lemma 4
(Theorem 3.1, [30]). Let $\varphi \in C^{1,1}$ and $[a, b] := \{ x \mid x = \lambda a + (1 - \lambda) b, \; \lambda \in [0, 1] \}$. Then, there exist $z_1 \in \partial^2 \varphi(\xi_1)(b - a)$ with $\xi_1 \in [a, b]$ and $z_2 \in \partial^2 \varphi(\xi_2)(b - a)$ with $\xi_2 \in [a, b]$ such that
\[ \tfrac{1}{2} \langle z_2, b - a \rangle \le \varphi(b) - \varphi(a) - \langle \nabla \varphi(a), b - a \rangle \le \tfrac{1}{2} \langle z_1, b - a \rangle. \]

3. Saddle Points and KKT Conditions

The set of minimum points corresponding to the Moreau envelope $e_{\nu} g$ (4) is defined as
\[ P_{\nu} g(y) := \operatorname*{argmin}_{w \in \mathbb{R}^m} \Big\{ g(w) + \frac{1}{2\nu} \| w - y \|^2 \Big\} = (I + \nu \partial g)^{-1}(y). \tag{11} \]
In particular, if $g$ is convex, then $e_{\nu} g(y)$ is Fréchet differentiable and its gradient
\[ \nabla e_{\nu} g(y) = \nu^{-1} \big( y - P_{\nu} g(y) \big) \tag{12} \]
is $\nu^{-1}$-Lipschitz continuous. Because the set $K$ considered in this paper is convex, $L(x, u, \tau)$ is continuously differentiable with respect to $x$ and the derivative takes the form
\[ \nabla_x L(x, u, \tau) = \nabla f(x) + \tau \nabla G(x)^{\top} \big( \tau^{-1} u + G(x) - P_K( \tau^{-1} u + G(x) ) \big). \tag{13} \]
Since the projection operator $P_K$ is Lipschitz with constant 1, $\nabla_x L(x, u, \tau)$ is Lipschitz as well. Hence the partial augmented Lagrangian function $L(x, u, \tau)$ belongs to $C^{1,1}$.
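As a sanity check on the gradient formula (13), the following sketch (our construction; the data `A`, `u`, `tau` are made-up toys) compares it with central finite differences for $f(x) = \tfrac{1}{2}\|x\|^2$, a linear map $G(x) = Ax$, and the box $K = [-1,1]^2$, whose projection is a componentwise clip:

```python
# Finite-difference check (our toy sketch) of
#   grad_x L = grad f(x) + tau * A^T (z - P_K(z)),  z = u / tau + A x,
# with f(x) = 0.5 ||x||^2, G(x) = A x, and K = [-1, 1]^2.
tau = 2.0
A = [[1.0, 2.0], [0.0, 1.0]]
u = [0.3, -0.7]

def mat_vec(M, x):
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def proj_box(z):
    # projection onto K = [-1, 1]^2: componentwise clipping
    return [min(1.0, max(-1.0, z_i)) for z_i in z]

def L(x):
    z = [u_i / tau + g_i for u_i, g_i in zip(u, mat_vec(A, x))]
    r = [z_i - p_i for z_i, p_i in zip(z, proj_box(z))]
    return (0.5 * sum(x_i ** 2 for x_i in x)
            + 0.5 * tau * sum(r_i ** 2 for r_i in r)
            - sum(u_i ** 2 for u_i in u) / (2 * tau))

def grad_L(x):
    z = [u_i / tau + g_i for u_i, g_i in zip(u, mat_vec(A, x))]
    r = [z_i - p_i for z_i, p_i in zip(z, proj_box(z))]
    A_t = list(map(list, zip(*A)))          # A transpose
    return [x_i + tau * s_i for x_i, s_i in zip(x, mat_vec(A_t, r))]

x0 = [1.5, 0.4]   # here G(x0) violates the box constraint: the penalty is active
h = 1e-6
for i in range(2):
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    fd = (L(xp) - L(xm)) / (2 * h)
    assert abs(fd - grad_L(x0)[i]) < 1e-5
```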
For simplicity, we say that $x^*$ is a Karush–Kuhn–Tucker (KKT) point of problem (3) if there exists $(u^*, v^*) \in \mathbb{R}^m \times \mathbb{R}^p$ satisfying the following condition:
\[ \nabla f(x^*) + \nabla G(x^*)^{\top} u^* + \nabla H(x^*)^{\top} v^* = 0, \qquad u^* \in N_K\big( G(x^*) \big), \qquad v^* \in N_D\big( H(x^*) \big). \tag{14} \]
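For orientation, here is the classical nonlinear programming specialization (our illustration, not spelled out in the paper). Taking $K = \mathbb{R}^m_-$ and $D = \{0\}^p$ in (3), the normal cone at a feasible point is $N_K(G(x^*)) = \{ u \ge 0 \mid \langle u, G(x^*) \rangle = 0 \}$, while $N_{\{0\}^p}(H(x^*)) = \mathbb{R}^p$, so the KKT condition (14) reduces to the familiar system

\[
\nabla f(x^*) + \nabla G(x^*)^{\top} u^* + \nabla H(x^*)^{\top} v^* = 0, \qquad u^* \ge 0, \quad G(x^*) \le 0, \quad \langle u^*, G(x^*) \rangle = 0, \qquad v^* \in \mathbb{R}^p,
\]
i.e., stationarity, dual feasibility, primal feasibility, and complementary slackness for $\min f(x)$ subject to $G(x) \le 0$, $H(x) = 0$.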
The definition of local and global saddle points is given below.
Definition 6
(Local/Global saddle point). A pair $(x^*, u^*) \in C \times \mathbb{R}^m$ with $C := H^{-1}(D)$ is called a local saddle point of the partial augmented Lagrangian $L(x, u, \tau)$ if there exist $\tau_0 > 0$ and a neighborhood $U$ of $x^*$ such that
\[ \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau) \le \inf_{x \in U \cap C} L(x, u^*, \tau), \qquad \forall\, \tau \ge \tau_0. \tag{15} \]
If the restriction to $U$ is omitted, then the pair $(x^*, u^*) \in C \times \mathbb{R}^m$ is called a global saddle point, and the infimum of all such $\tau_0$ is denoted by $\tau^*(x^*, u^*)$.
Notice that there are two inequalities in the definition of saddle points. We begin by analyzing the first inequality.
Lemma 5.
For a given $(x^*, u^*) \in C \times \mathbb{R}^m$, suppose that there exists $\tau_0 > 0$ such that
\[ \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau), \qquad \forall\, \tau \ge \tau_0. \tag{16} \]
Then the following results hold.
(i) 
$x^*$ is a feasible solution of problem (3);
(ii) 
$L(x^*, u^*, \tau) = f(x^*)$, $\forall\, \tau \ge \tau_0$;
(iii) 
$u^* \in N_K(G(x^*))$.
Proof. 
(i) Suppose on the contrary that $x^*$ is not a feasible point of problem (3). Since $x^* \in C$ by assumption, i.e., the constraint $H(x^*) \in D$ holds, it remains to consider the case $G(x^*) \notin K$.
Let
\[ u := G(x^*) - P_K\big( G(x^*) \big). \tag{17} \]
Then $u \neq 0$ since $G(x^*) \notin K$. It follows from Example 6.16 in [31] that $G(x^*) - P_K(G(x^*)) \in N_K\big( P_K(G(x^*)) \big)$, further implying
\[ (1 + t \tau^{-1}) \big( G(x^*) - P_K(G(x^*)) \big) \in N_K\big( P_K(G(x^*)) \big) \tag{18} \]
for all $t > 0$, since $N_K\big( P_K(G(x^*)) \big)$ is a cone. Note that $u - G(x^*) + P_K(G(x^*)) = 0$ by (17). This together with (18) implies
\[ \begin{aligned} & t \tau^{-1} \big( u - G(x^*) + P_K(G(x^*)) \big) + (1 + t \tau^{-1}) \big( G(x^*) - P_K(G(x^*)) \big) \in N_K\big( P_K(G(x^*)) \big) \\ \Longrightarrow\;& t \tau^{-1} u + G(x^*) - P_K(G(x^*)) \in N_K\big( P_K(G(x^*)) \big) \\ \Longrightarrow\;& P_K\Big( P_K(G(x^*)) + t \tau^{-1} u + G(x^*) - P_K(G(x^*)) \Big) = P_K\big( G(x^*) \big) \\ \Longrightarrow\;& P_K\big( t \tau^{-1} u + G(x^*) \big) = P_K\big( G(x^*) \big), \end{aligned} \tag{19} \]
where the second implication above comes from the fact that $b \in N_K(a) \Longleftrightarrow P_K(a + b) = a$. Thus, we obtain from (17) and (19) that
\[ \begin{aligned} & \frac{\tau}{2} \big\| t \tau^{-1} u + G(x^*) - P_K\big( t \tau^{-1} u + G(x^*) \big) \big\|^2 - \frac{1}{2\tau} \| t u \|^2 \\ = \;& \frac{\tau}{2} \big\| G(x^*) - P_K\big( t \tau^{-1} u + G(x^*) \big) \big\|^2 + t \big\langle u, \, G(x^*) - P_K\big( t \tau^{-1} u + G(x^*) \big) \big\rangle \\ = \;& \frac{\tau}{2} \big\| G(x^*) - P_K\big( G(x^*) \big) \big\|^2 + t \big\langle u, \, G(x^*) - P_K\big( G(x^*) \big) \big\rangle = \Big( \frac{\tau}{2} + t \Big) \| u \|^2. \end{aligned} \tag{20} \]
Furthermore, for $\tau \ge \tau_0$, it follows from (16) and (20) that
\[ \begin{aligned} L(x^*, u^*, \tau) \ge L(x^*, t u, \tau) & = f(x^*) + e_{\tau^{-1}} \delta_K\big( t \tau^{-1} u + G(x^*) \big) - \frac{1}{2\tau} \| t u \|^2 \\ & = f(x^*) + \frac{\tau}{2} \big\| t \tau^{-1} u + G(x^*) - P_K\big( t \tau^{-1} u + G(x^*) \big) \big\|^2 - \frac{1}{2\tau} \| t u \|^2 \\ & = f(x^*) + \Big( \frac{\tau}{2} + t \Big) \| u \|^2. \end{aligned} \tag{21} \]
Taking the limit as $t \to +\infty$ in (21) and using the fact that $u \neq 0$ leads to a contradiction with the finiteness of $L(x^*, u^*, \tau)$ guaranteed by (16). Thus, $G(x^*) \in K$.
(ii) If $x$ satisfies $G(x) \in K$ and $u \in \mathbb{R}^m$, $\tau > 0$, then
\[ e_{\tau^{-1}} \delta_K\big( \tau^{-1} u + G(x) \big) - \frac{1}{2\tau} \| u \|^2 = \inf_{w \in K} \frac{\tau}{2} \big\| \tau^{-1} u + G(x) - w \big\|^2 - \frac{1}{2\tau} \| u \|^2 \le \frac{\tau}{2} \big\| \tau^{-1} u \big\|^2 - \frac{1}{2\tau} \| u \|^2 = 0. \tag{22} \]
Hence,
\[ L(x, u, \tau) = f(x) + e_{\tau^{-1}} \delta_K\big( \tau^{-1} u + G(x) \big) - \frac{1}{2\tau} \| u \|^2 \le f(x). \]
In addition,
\[ L(x, 0, \tau) = f(x) + e_{\tau^{-1}} \delta_K\big( G(x) \big) = f(x) + \inf_{w \in K} \frac{\tau}{2} \| G(x) - w \|^2 = f(x). \tag{23} \]
Since $x^*$ is a feasible point by (i), according to (22) and (23), we have
\[ L(x^*, u^*, \tau) \le f(x^*) = L(x^*, 0, \tau), \qquad \forall\, \tau > 0. \tag{24} \]
Combining (16) and (24) yields
\[ f(x^*) = L(x^*, 0, \tau) \le \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau) \le f(x^*), \qquad \forall\, \tau \ge \tau_0. \tag{25} \]
Thus, $L(x^*, u^*, \tau) = f(x^*)$ for all $\tau \ge \tau_0$.
(iii) According to (25), we have
\[ \begin{aligned} f(x^*) = L(x^*, u^*, \tau) \;\Longrightarrow\;& \inf_{w \in \mathbb{R}^m} \Big\{ \delta_K(w) + \frac{\tau}{2} \big\| \tau^{-1} u^* + G(x^*) - w \big\|^2 \Big\} = \frac{1}{2\tau} \| u^* \|^2 \\ \Longrightarrow\;& \frac{\tau}{2} \big\| \tau^{-1} u^* + G(x^*) - P_K\big( \tau^{-1} u^* + G(x^*) \big) \big\|^2 = \frac{1}{2\tau} \| u^* \|^2 \\ \Longrightarrow\;& \big\| \tau^{-1} u^* + G(x^*) - P_K\big( \tau^{-1} u^* + G(x^*) \big) \big\| = \big\| \tau^{-1} u^* \big\| = \big\| \tau^{-1} u^* + G(x^*) - G(x^*) \big\|, \end{aligned} \]
which implies $G(x^*) = P_K\big( \tau^{-1} u^* + G(x^*) \big)$, because both $P_K\big( \tau^{-1} u^* + G(x^*) \big)$ and $G(x^*)$ belong to $K$ and the projection onto a convex set is unique. Hence,
\[ G(x^*) = P_K\big( \tau^{-1} u^* + G(x^*) \big) \;\Longleftrightarrow\; \tau^{-1} u^* \in N_K\big( G(x^*) \big) \;\Longleftrightarrow\; u^* \in N_K\big( G(x^*) \big). \]
This completes the proof. □
From the above proof, it can be seen that
\[ L(x, u, \tau) = f(x), \qquad \forall\, u \in N_K\big( G(x) \big), \; \forall\, \tau > 0, \]
i.e., (22) holds as an equality whenever $u \in N_K(G(x))$. In fact, since $u \in N_K(G(x))$, then $G(x) = P_K\big( \tau^{-1} u + G(x) \big)$, and hence,
\[ \begin{aligned} L(x, u, \tau) & = f(x) + \inf_{w \in \mathbb{R}^m} \Big\{ \delta_K(w) + \frac{\tau}{2} \big\| \tau^{-1} u + G(x) - w \big\|^2 \Big\} - \frac{1}{2\tau} \| u \|^2 \\ & = f(x) + \frac{\tau}{2} \big\| \tau^{-1} u + G(x) - P_K\big( \tau^{-1} u + G(x) \big) \big\|^2 - \frac{1}{2\tau} \| u \|^2 \\ & = f(x) + \frac{\tau}{2} \big\| \tau^{-1} u + G(x) - G(x) \big\|^2 - \frac{1}{2\tau} \| u \|^2 = f(x). \end{aligned} \]
Lemma 6.
If ( x * , u * ) C × R m is a local/global saddle point of L ( x , u , τ ) , then x * is a locally/globally optimal solution of problem (3).
Proof. 
The whole proof is divided into two parts. The first part proves that $x^*$ is a feasible point of problem (3), and the second part proves that $x^*$ is a local/global minimizer. Here we only consider the case of local saddle points, because the case of global saddle points can be proved by replacing the neighborhood $U$ appearing in the analysis below with the whole space $\mathbb{R}^n$.
(i). Since $(x^*, u^*)$ is a local saddle point of the partial augmented Lagrangian function $L$, there exist $\tau_0 > 0$ and a neighborhood $U$ of $x^*$ such that
\[ \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau) \le \inf_{x \in U \cap C} L(x, u^*, \tau), \qquad \forall\, \tau \ge \tau_0. \tag{27} \]
According to Lemma 5 (i), we know that x * is a feasible solution of problem (3).
(ii). Since $(x^*, u^*)$ is a local saddle point of the partial augmented Lagrangian function $L(x, u, \tau)$, for a feasible point $x$ of problem (3) satisfying $x \in U$ and $\tau \ge \tau_0$, one has
\[ f(x^*) = L(x^*, u^*, \tau) \le \inf_{x' \in U \cap C} L(x', u^*, \tau) \le L(x, u^*, \tau) \le f(x), \]
where the first equality comes from Lemma 5 (ii), the first inequality follows from (27), and the last step is due to (22). Thus $x^*$ is a locally optimal solution to problem (3). □
The dual problem associated with the partial augmented Lagrangian L ( x , u , τ ) for problem (3) is defined as
\[ \max_{u, \tau} \; \varpi(u, \tau) \quad \text{s.t.} \;\; u \in \mathbb{R}^m, \; \tau > 0, \tag{28} \]
where $\varpi(u, \tau) := \inf_{x \in C} L(x, u, \tau)$ and $C := H^{-1}(D)$. From (22), the weak duality property between the primal problem (3) and its dual problem holds, i.e.,
\[ \varpi(u, \tau) \le L(x, u, \tau) \le f(x), \qquad \forall\, (u, \tau) \in \mathbb{R}^m \times (0, +\infty), \; \forall\, x \in \Omega, \tag{29} \]
where Ω denotes the feasible set of problem (3).
We say that the zero duality gap property holds for the partial augmented Lagrangian L ( x , u , τ ) , if
\[ \inf \big\{ f(x) \mid x \in \Omega \big\} = \sup \big\{ \varpi(u, \tau) \mid u \in \mathbb{R}^m, \; \tau > 0 \big\}. \]
The relationship between saddle points and the zero duality gap property is given below.
Theorem 1.
(i) If $(x^*, u^*)$ is a saddle point of the partial augmented Lagrangian $L(x, u, \tau)$, then for any $\tau \ge \tau^*(x^*, u^*)$, the pair $(u^*, \tau)$ is an optimal solution of the dual problem (28), and the zero duality gap property holds. (ii) If $(u^*, \tau^*)$ is an optimal solution of the dual problem (28), $x^*$ is an optimal solution of problem (3), and the zero duality gap property holds, then the pair $(x^*, u^*)$ is a global saddle point of $L(x, u, \tau)$ and $\tau^* \ge \tau^*(x^*, u^*)$.
Proof. 
(i) Let $(x^*, u^*)$ be a saddle point of $L(x, u, \tau)$. For $\tau \ge \tau^*(x^*, u^*)$, applying Lemma 5 yields
\[ \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau) = f(x^*) \le \inf_{x \in C} L(x, u^*, \tau). \]
By the definition of $\varpi$, it follows that for all $u \in \mathbb{R}^m$ and $\tau \ge \tau^*(x^*, u^*)$,
\[ \varpi(u, \tau) = \inf_{x \in C} L(x, u, \tau) \le L(x^*, u, \tau) \le \sup_{u' \in \mathbb{R}^m} L(x^*, u', \tau) \le L(x^*, u^*, \tau) = f(x^*) \le \inf_{x \in C} L(x, u^*, \tau) = \varpi(u^*, \tau). \tag{30} \]
Taking the supremum over all $(u, \tau') \in \mathbb{R}^m \times \mathbb{R}_+$, we obtain from (30) that for all $\tau \ge \tau^*(x^*, u^*)$,
\[ \sup_{(u, \tau') \in \mathbb{R}^m \times \mathbb{R}_+} \varpi(u, \tau') \le f(x^*) = \varpi(u^*, \tau) \le \sup_{(u, \tau') \in \mathbb{R}^m \times \mathbb{R}_+} \varpi(u, \tau'), \]
which implies
\[ \sup_{(u, \tau') \in \mathbb{R}^m \times \mathbb{R}_+} \varpi(u, \tau') = \varpi(u^*, \tau) = f(x^*). \]
Therefore, ( u * , τ ) is an optimal solution of the dual problem (28), and the zero duality gap property holds, since x * is an optimal solution of problem (3) by Lemma 6.
(ii) Let $(u^*, \tau^*)$ be an optimal solution of the dual problem (28), let $x^*$ be an optimal solution of problem (3), and suppose that the zero duality gap property holds, i.e.,
\[ \varpi(u^*, \tau^*) = \sup \big\{ \varpi(u, \tau) \mid u \in \mathbb{R}^m, \; \tau > 0 \big\} = \inf \big\{ f(x) \mid x \in \Omega \big\} = f(x^*). \]
Note that $\varpi(u, \tau)$ is nondecreasing in $\tau$ for all $u \in \mathbb{R}^m$. It follows that $\varpi(u^*, \tau) = f(x^*)$ for all $\tau \ge \tau^*$. Since $x^*$ is feasible, $L(x^*, u^*, \tau) \le f(x^*)$ for any $\tau > 0$. Therefore, by the definition of $\varpi$, we have
\[ L(x^*, u^*, \tau) \le f(x^*) = \varpi(u^*, \tau) = \inf_{x \in C} L(x, u^*, \tau), \qquad \forall\, \tau \ge \tau^*, \tag{31} \]
which implies $L(x^*, u^*, \tau) = f(x^*)$. Since $x^*$ is feasible for problem (3), it follows from (29) that for all $(u, \tau) \in \mathbb{R}^m \times [\tau^*, +\infty)$,
\[ L(x^*, u, \tau) \le f(x^*) = L(x^*, u^*, \tau). \tag{32} \]
Combining (31) and (32) yields
\[ \sup_{u \in \mathbb{R}^m} L(x^*, u, \tau) \le L(x^*, u^*, \tau) \le \inf_{x \in C} L(x, u^*, \tau), \qquad \forall\, \tau \ge \tau^*. \]
This means that $(x^*, u^*)$ is a global saddle point of $L(x, u, \tau)$ and $\tau^* \ge \tau^*(x^*, u^*)$. □
Under the metric subregularity constraint qualification, saddle points necessarily satisfy the KKT conditions.
Theorem 2.
If ( x * , u * ) is a local/global saddle point of L ( x , u , τ ) , and the metric subregularity constraint qualification (MSCQ) holds at x * for the system H ( x ) D , then there exists v * N D ( H ( x * ) ) such that ( x * , u * , v * ) satisfies the KKT conditions for problem (3).
Proof. 
Since $(x^*, u^*)$ is a local/global saddle point of $L(x, u, \tau)$, we have $u^* \in N_K(G(x^*))$ by Lemma 5 (iii), which further implies $G(x^*) = P_K\big( \tau^{-1} u^* + G(x^*) \big)$. Thus, according to (13), we have
\[ \nabla_x L(x^*, u^*, \tau) = \nabla f(x^*) + \tau \nabla G(x^*)^{\top} \big( \tau^{-1} u^* + G(x^*) - P_K( \tau^{-1} u^* + G(x^*) ) \big) = \nabla f(x^*) + \nabla G(x^*)^{\top} u^*. \tag{33} \]
Under the MSCQ condition, it follows from (8) that
\[ N_C(x^*) = N_C(x^*; 0) \subseteq \big\{ v \in \mathbb{R}^n \mid \exists\, \lambda \in N_D\big( H(x^*); 0 \big) \cap \kappa \| v \| B \text{ with } v = \nabla H(x^*)^{\top} \lambda \big\} \subseteq \nabla H(x^*)^{\top} N_D\big( H(x^*) \big). \tag{34} \]
Since $L(\cdot, u^*, \tau)$ is continuously differentiable, we have
\[ \partial \big( L(\cdot, u^*, \tau) + \delta_C(\cdot) \big)(x^*) = \nabla_x L(x^*, u^*, \tau) + \partial \delta_C(x^*) = \nabla f(x^*) + \nabla G(x^*)^{\top} u^* + N_C(x^*), \tag{35} \]
where the second equality follows from (33) and $\partial \delta_C(x^*) = N_C(x^*)$ by Exercise 8.14 in [31].
Since $(x^*, u^*)$ is a local/global saddle point of $L(x, u, \tau)$, it follows from the definition that $x^*$ is a local/global minimizer of $L(x, u^*, \tau)$ over $H(x) \in D$, i.e., $x^*$ is a local/global minimizer of $L(x, u^*, \tau) + \delta_D(H(x))$. By applying the generalized Fermat rule (see Theorem 10.1 in [31]), we have
\[ \begin{aligned} 0 \in \partial \big( L(\cdot, u^*, \tau) + \delta_D(H(\cdot)) \big)(x^*) & = \partial \big( L(\cdot, u^*, \tau) + \delta_C(\cdot) \big)(x^*) \\ & = \nabla f(x^*) + \nabla G(x^*)^{\top} u^* + N_C(x^*) \\ & \subseteq \nabla f(x^*) + \nabla G(x^*)^{\top} u^* + \nabla H(x^*)^{\top} N_D\big( H(x^*) \big), \end{aligned} \tag{36} \]
where the second step follows from the fact that $\delta_D(H(\cdot)) = \delta_C(\cdot)$ since $C = H^{-1}(D)$, the third step comes from (35), and the last step is due to (34).
The formula (36) ensures the existence of $v^* \in N_D\big( H(x^*) \big)$ such that
\[ \nabla f(x^*) + \nabla G(x^*)^{\top} u^* + \nabla H(x^*)^{\top} v^* = 0. \]
This together with the fact that $u^* \in N_K(G(x^*))$, as shown in Lemma 5 (iii), yields that $(x^*, u^*, v^*)$ satisfies the KKT condition (14) for problem (3). □
In the remainder of this section, we show that the converse of Theorem 2 is valid when problem (3) is convex.
Definition 7.
We say that problem (3) is convex if $f$ is convex, the sets $K$ and $D$ are closed convex sets satisfying the additive closure properties
\[ K + K \subseteq K \quad \text{and} \quad D + D \subseteq D, \]
and the mappings $G$ and $H$ are convex with respect to $K$ and $D$, respectively, i.e., for all $x, y \in \mathbb{R}^n$ and $t \in [0, 1]$, we have
\[ G\big( t x + (1 - t) y \big) \preceq_K t G(x) + (1 - t) G(y), \qquad H\big( t x + (1 - t) y \big) \preceq_D t H(x) + (1 - t) H(y), \tag{37} \]
where $u \preceq_C v$ means $u - v \in C$.
Here are some examples of convex sets that satisfy the additivity property.
Example 1.
(i). The set K is a convex cone. It is well-known that for any convex cone K, the property K + K = K holds, meaning that additivity is satisfied.
(ii). 
The set K : = x R n | x i a i , i = 1 , 2 , , n , where a i > 0 for i = 1 , 2 , , n , is a closed convex set that satisfies K + K K , but it is not a convex cone.
(iii). 
Let n 2 and γ > 0 . Define the set
K = x R n : x i 0 for all i , i = 1 n x i 1 / n γ .
Then K is a closed convex set such that K + K K , but K is not a convex cone.
Let f ( x ) : = i = 1 n x i 1 / n . Clearly f is continuous on the nonnegative orthant R + n , which implies that K is closed. Note first that for any x K , we must have x i > 0 for all i, due to γ > 0 . Thus, K is a subset of the positive orthant R + + n . The function f is concave on R + + n , as it is the geometric mean function (Examples 3.1.5 in [32]). Hence, K is convex, since it is a superlevel set of a concave function. Now, take x , y K and consider their sum x + y . Clearly, ( x + y ) i 0 for all i. It remains to show that f ( x + y ) = i = 1 n ( x i + y i ) 1 / n γ . By the inequality of arithmetic and geometric means, we have x i + y i 2 x i y i for each i. Therefore,
i = 1 n ( x i + y i ) i = 1 n 2 x i y i = 2 n i = 1 n x i i = 1 n y i = 2 n i = 1 n x i i = 1 n y i 1 / 2 .
Taking the 1 / n th power on both sides yields
f ( x + y ) 2 i = 1 n x i i = 1 n y i 1 / ( 2 n ) = 2 f ( x ) f ( y ) 1 / 2 2 ( γ · γ ) 1 / 2 = 2 γ > γ .
Thus, x + y K , which implies K + K K . Note that K is not a cone. In fact, take any x K such that f ( x ) = γ (for example, x = ( γ , γ , , γ ) ). For λ ( 0 , 1 ) , we have f ( λ x ) = λ f ( x ) = λ γ < γ , so λ x K . Thus, K is not closed under scalar multiplication and hence is not a cone.
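The additivity and non-cone claims in Example 1 (iii) can be verified numerically. The following sketch (the values n = 3, γ = 2 and the sampling scheme are arbitrary choices of ours) checks K + K ⊆ K on random pairs and exhibits a point of K whose scalar multiple leaves K.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 3, 2.0

def f(x):
    # geometric mean: (prod x_i)^(1/n)
    return np.prod(x) ** (1.0 / n)

def in_K(x):
    # K = {x >= 0 : f(x) >= gamma}
    return np.all(x >= 0) and f(x) >= gamma - 1e-12

def sample_K():
    # scale a random positive vector onto {f >= gamma}; f(c x) = c f(x)
    x = rng.uniform(0.5, 2.0, size=n)
    return x * (gamma / f(x)) * rng.uniform(1.0, 1.5)

for _ in range(1000):
    x, y = sample_K(), sample_K()
    assert in_K(x + y)            # K + K is contained in K

# K is not a cone: shrinking a boundary point leaves K
x0 = np.full(n, gamma)            # f(x0) = gamma
assert in_K(x0) and not in_K(0.5 * x0)
```

The boundary point x0 = (γ, …, γ) is exactly the witness used in the text above.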
The following results show that the Moreau envelope preserves convexity properties under certain conditions.
Lemma 7.
Let u R m and K be a closed convex set. Assume that G is a convex mapping with respect to K with K + K K . Then δ K ( G ( x ) ) and
( g G ) ( x ) : = e τ 1 δ K τ 1 u + G ( x )
are convex, where g ( v ) : = e τ 1 δ K τ 1 u + v .
Proof. 
Since G is a convex mapping with respect to K, it follows from (37) that for any x , y R n and t [ 0 , 1 ] , we have
G t x + ( 1 t ) y t G ( x ) + ( 1 t ) G ( y ) K .
This ensures that
δ K G ( t x + ( 1 t ) y ) δ K t G ( x ) + ( 1 t ) G ( y ) ,
which is equivalent to the statement that if G ( t x + ( 1 t ) y ) ∉ K , then t G ( x ) + ( 1 t ) G ( y ) ∉ K as well. Suppose on the contrary that t G ( x ) + ( 1 t ) G ( y ) ∈ K , then
G ( t x + ( 1 t ) y ) = G ( t x + ( 1 t ) y ) t G ( x ) + ( 1 t ) G ( y ) + t G ( x ) + ( 1 t ) G ( y ) K ,
where the last step is due to the fact that K + K ⊆ K . This leads to a contradiction with G ( t x + ( 1 t ) y ) ∉ K . Note that δ K is convex, since K is convex. This together with (39) yields
δ K G ( t x + ( 1 t ) y ) δ K t G ( x ) + ( 1 t ) G ( y ) t δ K ( G ( x ) ) + ( 1 t ) δ K ( G ( y ) ) ,
which means that δ K ( G ( x ) ) is convex.
Since K is convex, it follows from (12) that g ( v ) is differentiable with the gradient
g ( v ) = v e τ 1 δ K τ 1 u + v = τ τ 1 u + v P K τ 1 u + v = u + τ v τ P K τ 1 u + v .
Hence, for a , b R m , we have
g ( a ) g ( b ) , a b = u + τ a τ P K τ 1 u + a u + τ b τ P K τ 1 u + b , a b = τ a τ b , a b τ P K τ 1 u + a τ P K τ 1 u + b , a b = τ a b 2 τ P K τ 1 u + a P K τ 1 u + b , a b τ a b 2 τ P K τ 1 u + a P K τ 1 u + b · a b τ a b 2 τ a b 2 = 0 ,
where the fifth step follows from the fact that the metric projection is a non-expansive mapping, i.e.,
P K τ 1 u + a P K τ 1 u + b τ 1 u + a ( τ 1 u + b ) = a b .
Thus, we show that g ( v ) is monotone by (40). Taking into account Theorem 12.17 in [31], the function g ( v ) = e τ 1 δ K τ 1 u + v is convex.
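The gradient formula and the monotonicity argument above admit a quick finite-difference check. The sketch below takes K to be the nonpositive orthant (an illustrative choice of ours, so that P_K is the componentwise negative part) with hypothetical data u and τ; it verifies ∇g(v) = u + τv − τP_K(τ⁻¹u + v) against central differences and tests the monotonicity inequality ⟨∇g(a) − ∇g(b), a − b⟩ ≥ 0 on random pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
m, tau = 4, 1.7
u = rng.normal(size=m)

def proj_K(z):
    # projection onto K = nonpositive orthant (illustrative choice of K)
    return np.minimum(z, 0.0)

def g(v):
    # Moreau envelope of delta_K at tau^{-1} u + v:
    # (tau/2) * dist(tau^{-1} u + v, K)^2
    z = u / tau + v
    return 0.5 * tau * np.sum((z - proj_K(z)) ** 2)

def grad_g(v):
    # closed form from the proof: u + tau v - tau P_K(tau^{-1} u + v)
    return u + tau * v - tau * proj_K(u / tau + v)

# central-difference check of the gradient formula
v = rng.normal(size=m)
eps = 1e-6
fd = np.array([(g(v + eps * e) - g(v - eps * e)) / (2 * eps)
               for e in np.eye(m)])
assert np.allclose(fd, grad_g(v), atol=1e-4)

# monotonicity of the gradient (hence convexity of g)
for _ in range(200):
    a, b = rng.normal(size=m), rng.normal(size=m)
    assert np.dot(grad_g(a) - grad_g(b), a - b) >= -1e-12
```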
Note that
e τ 1 δ K τ 1 u + G ( t x + ( 1 t ) y ) = inf w R m δ K ( w ) + τ 2 τ 1 u + G ( t x + ( 1 t ) y ) w 2 = inf w K τ 2 w τ 1 u + G ( t x + ( 1 t ) y ) 2 inf v K τ 2 v τ 1 u + t G ( x ) + ( 1 t ) G ( y ) 2 = inf v R m δ K ( v ) + τ 2 v τ 1 u + t G ( x ) + ( 1 t ) G ( y ) 2 = e τ 1 δ K τ 1 u + t G ( x ) + ( 1 t ) G ( y ) ,
where the inequality comes from the fact that
K + G ( t x + ( 1 t ) y ) t G ( x ) + ( 1 t ) G ( y ) K
by (38) and K + K K by assumption.
Since g ( v ) = e τ 1 δ K τ 1 u + v is convex as shown above, we have
e τ 1 δ K τ 1 u + t G ( x ) + ( 1 t ) G ( y ) t e τ 1 δ K τ 1 u + G ( x ) + ( 1 t ) e τ 1 δ K τ 1 u + G ( y ) .
This together with (41) yields
e τ 1 δ K τ 1 u + G ( t x + ( 1 t ) y ) t e τ 1 δ K τ 1 u + G ( x ) + ( 1 t ) e τ 1 δ K τ 1 u + G ( y ) ,
i.e., ( g G ) ( t x + ( 1 t ) y ) t ( g G ) ( x ) + ( 1 t ) ( g G ) ( y ) . This completes the proof. □
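Lemma 7 can be illustrated in the classical inequality-constraint setting. The sketch below assumes K is the nonpositive orthant (a convex cone, so K + K ⊆ K) and reads convexity of G with respect to K as componentwise convexity of G, which is the classical case of a convex system G(x) ≤ 0; the mapping G and the data u, τ are hypothetical. It checks the convexity inequality for g ∘ G along random segments.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 0.8
u = np.array([0.3, -0.5])

def G(x):
    # componentwise convex mapping (illustrative); with K the nonpositive
    # orthant this models the convex inequality system G(x) <= 0
    return np.array([x[0] ** 2 + x[1] - 1.0, np.exp(x[0]) - 2.0])

def gG(x):
    # (g o G)(x) = Moreau envelope of delta_K at tau^{-1} u + G(x),
    # K = nonpositive orthant, so dist(z, K)^2 = sum max(z_i, 0)^2
    z = u / tau + G(x)
    return 0.5 * tau * np.sum(np.maximum(z, 0.0) ** 2)

for _ in range(500):
    x, y = rng.normal(size=2), rng.normal(size=2)
    t = rng.uniform()
    lhs = gG(t * x + (1 - t) * y)
    rhs = t * gG(x) + (1 - t) * gG(y)
    assert lhs <= rhs + 1e-10   # convexity inequality from Lemma 7
```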
Theorem 3.
Suppose that problem (3) is convex. If ( x * , u * , v * ) satisfies the KKT conditions, then ( x * , u * ) is a global saddle point of L ( x , u , τ ) for all τ > 0 .
Proof. 
Taking into account the convexity of problem (3) and Lemma 7, we know that the function T ( x ) : = δ D ( H ( x ) ) is convex. Hence,
H ( x ) N D ( H ( x ) ) = H ( x ) N ^ D ( H ( x ) ) ^ T ( x ) = T ( x ) ,
where the first step follows from the fact N D ( H ( x ) ) = N ^ D ( H ( x ) ) since D is convex, and the second step comes from ^ T ( x ) H ( x ) ^ δ D ( H ( x ) ) = H ( x ) N ^ D ( H ( x ) ) by Exercise 8.14 and Theorem 10.6 in [31].
Define Φ ( x ) : = L ( x , u * , τ ) + T ( x ) . Note that the function Φ ( x ) is convex, since Φ ( x ) is composed of f ( x ) , T ( x ) , and e τ 1 δ K τ 1 u * + G ( x ) , and these functions are convex with respect to x by Lemma 7. Since ( x * , u * , v * ) satisfies the KKT conditions (14), then u * N K G ( x * ) and v * N D H ( x * ) . It then follows from (33) that
0 = f ( x * ) + G ( x * ) u * + H ( x * ) v * = x L ( x * , u * , τ ) + H ( x * ) v * x L ( x * , u * , τ ) + H ( x * ) N D ( H ( x * ) ) x L ( x * , u * , τ ) + T ( x * ) = Φ ( x * ) ,
where the fourth step is ensured by (42).
Since Φ ( x ) is convex and 0 Φ ( x * ) by (43), we know that x * is a global optimal solution of Φ ( x ) . Therefore,
Φ ( x * ) Φ ( x ) , x R n L ( x * , u * , τ ) + δ D ( H ( x * ) ) L ( x , u * , τ ) + δ D ( H ( x ) ) , x R n L ( x * , u * , τ ) L ( x , u * , τ ) , H ( x ) D .
On the other hand, for any u R m and τ > 0 , since x * is feasible and u * N K ( G ( x * ) ) , it follows from (22) and (26) that
L ( x * , u , τ ) f ( x * ) = L ( x * , u * , τ ) .
Therefore, putting (44) and (45) together yields
L ( x * , u , τ ) L ( x * , u * , τ ) L ( x , u * , τ ) , H ( x ) D , u R m , τ > 0 ,
which means that ( x * , u * ) is a global saddle point of L ( x , u , τ ) . □
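Theorem 3 can be spot-checked on a one-dimensional convex instance with hypothetical data and no nonconvex constraint (so the condition H(x) ∈ D is vacuous): min (x − 1)² subject to G(x) = x ∈ K = (−∞, 0], whose KKT pair is x* = 0, u* = 2, since f′(0) + u* = −2 + 2 = 0 and u* ∈ N_K(0) = [0, ∞). The sketch samples both saddle-point inequalities for several τ > 0.

```python
import numpy as np

rng = np.random.default_rng(3)

def L(x, u, tau):
    # partial augmented Lagrangian for min (x-1)^2 s.t. G(x)=x in K=(-inf,0]:
    # f(x) + Moreau envelope of delta_K at u/tau + x, minus u^2/(2 tau);
    # dist(z, K)^2 = max(z, 0)^2 for K = (-inf, 0]
    return (x - 1) ** 2 + 0.5 * tau * max(u / tau + x, 0.0) ** 2 \
           - u ** 2 / (2 * tau)

x_star, u_star = 0.0, 2.0    # KKT pair of the toy problem

for tau in (0.5, 1.0, 3.0):
    Lss = L(x_star, u_star, tau)
    assert abs(Lss - 1.0) < 1e-12              # L(x*,u*,tau) = f(x*) = 1
    for _ in range(500):
        u = rng.uniform(-5, 5)
        x = rng.uniform(-5, 5)
        assert L(x_star, u, tau) <= Lss + 1e-10   # first saddle inequality
        assert L(x, u_star, tau) >= Lss - 1e-10   # second saddle inequality
```

Note that the sampled saddle value equals f(x*) for every τ, as the proof of Theorem 3 predicts.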

4. Optimality Conditions for Partial Augmented Lagrangian

In the previous section, we have discussed the first inequality in the definition of saddle points (15). Now, we turn our attention to the second inequality that appears in (15). This inequality indicates that x * is a local optimal solution of the following optimization problem
min L ( x , u , τ ) s . t . H ( x ) D .
The concept of directional local minimizer is given below.
Definition 8.
A point x ¯ C : = H 1 ( D ) is said to be a local optimal solution of (46) in direction d R n , if there exist ρ , δ > 0 such that
L ( x , u , τ ) L ( x ¯ , u , τ ) , x C x ¯ + V ρ , δ ( d ) .
The following result establishes the first-order optimality conditions for problem (46).
Theorem 4.
Let x ¯ C be a local optimal solution of problem (46) in direction d T C ( x ¯ ) . For the constraint system H ( x ) D , suppose that MSCQ holds at x ¯ in direction d R n with modulus κ > 0 . Then
(i) 
f ( x ¯ ) + G ( x ¯ ) u + τ G ( x ¯ ) G ( x ¯ ) P K τ 1 u + G ( x ¯ ) d 0 ;
(ii) 
if f ( x ¯ ) + G ( x ¯ ) u + τ G ( x ¯ ) G ( x ¯ ) P K τ 1 u + G ( x ¯ ) d = 0 , then there exists v N D H ( x ¯ ) ; H ( x ¯ ) d such that
v κ f ( x ¯ ) + G ( x ¯ ) u + τ G ( x ¯ ) G ( x ¯ ) P K τ 1 u + G ( x ¯ ) ,
and
f ( x ¯ ) + G ( x ¯ ) u + τ G ( x ¯ ) G ( x ¯ ) P K τ 1 u + G ( x ¯ ) + H ( x ¯ ) v = 0 .
Proof. 
Since L ( x , u , τ ) is continuously differentiable with respect to x, applying Proposition 3.1 in [21] yields
(a)
x L ( x ¯ , u , τ ) d 0 ;
(b)
if x L ( x ¯ , u , τ ) d = 0 , then 0 x L ( x ¯ , u , τ ) + N C ( x ¯ ; d ) .
By (13), we know
x L ( x ¯ , u , τ ) = f ( x ¯ ) + G ( x ¯ ) u + τ G ( x ¯ ) G ( x ¯ ) P K τ 1 u + G ( x ¯ ) ,
which together with (a) yields the desired conclusion (i).
According to (b) and Lemma 1, there exists v N D ( H ( x ¯ ) ; H ( x ¯ ) d ) such that x L ( x ¯ , u , τ ) + H ( x ¯ ) v = 0 and v κ H ( x ¯ ) v = κ x L ( x ¯ , u , τ ) . The desired results hold by further utilizing the formula of x L ( x ¯ , u , τ ) given in (47). □
The second-order necessary condition is obtained using second-order tangent sets and asymptotic second-order tangent cones.
Theorem 5.
Let x ¯ be a local optimal solution of problem (46) in direction d T C ( x ¯ ) with x L ( x ¯ , u , τ ) d = 0 . Then,
(i) 
x L ( x ¯ , u , τ ) v 0 , v T C ( x ¯ ; d ) ;
(ii) 
for any w T C 2 ( x ¯ ; d ) , there exists z x 2 L ( x ¯ , u , τ ) ( d ) such that
x L ( x ¯ , u , τ ) , w + z , d 0 .
Proof. 
(i) Pick v T C ( x ¯ ; d ) . Then, there exist ( t k , r k ) ( 0 , 0 ) and v k v such that t k / r k 0 and x k : = x ¯ + t k d + 1 2 t k r k v k C for all k N . Since
x k x ¯ x k x ¯ = t k d + 1 2 t k r k v k t k d + 1 2 t k r k v k d d ,
then x k x ¯ + V ρ , δ ( d ) whenever k is sufficiently large, and hence L ( x k , u , τ ) L ( x ¯ , u , τ ) . Note that x L ( x ¯ , u , τ ) d = 0 . Then
L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , x k x ¯ = L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , t k d + 1 2 t k r k v k x L ( x ¯ , u , τ ) , t k d + 1 2 t k r k v k = t k x L ( x ¯ , u , τ ) , d 1 2 t k r k x L ( x ¯ , u , τ ) , v k = 1 2 t k r k x L ( x ¯ , u , τ ) , v k .
Since L ( x , u , τ ) C 1 , 1 , it follows from Lemma 4 that there exists z k x 2 L ( ξ k , u , τ ) ( x k x ¯ ) , where ξ k [ x ¯ , x k ] , such that
L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , x k x ¯ 1 2 z k , t k d + 1 2 t k r k v k .
According to Lemma 3, we have
z k x 2 L ( ξ k , u , τ ) ( x k x ¯ ) = x 2 L ( ξ k , u , τ ) ( t k d + 1 2 t k r k v k ) = x 2 L ( ξ k , u , τ ) t k ( d + 1 2 r k v k ) = t k x 2 L ( ξ k , u , τ ) d + 1 2 r k v k ,
which implies z k = t k z k for some
z k x 2 L ( ξ k , u , τ ) d + 1 2 r k v k .
This together with (48) and (49) yields
1 2 t k r k x L ( x ¯ , u , τ ) , v k L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , x k x ¯ 1 2 z k , t k d + 1 2 t k r k v k = 1 2 t k z k , t k d + 1 2 t k r k v k = 1 2 t k 2 z k , d + 1 2 r k v k .
By dividing both sides of the above inequality by ( t k r k ) / 2 , we get
x L ( x ¯ , u , τ ) , v k t k r k z k , d + 1 2 r k v k .
We claim that { z k } is bounded. Let g k ( x ) : = d + 1 2 r k v k , x L ( x , u , τ ) . According to (11) and (50), we have
z k x 2 L ( ξ k , u , τ ) d + 1 2 r k v k = d + 1 2 r k v k , x L ( · , u , τ ) ( ξ k ) = g k ( ξ k ) .
Note that
g k ( x ) g k ( y ) = d + 1 2 r k v k , x L ( x , u , τ ) x L ( y , u , τ ) d + 1 2 r k v k · x L ( x , u , τ ) x L ( y , u , τ ) ( d + 1 ) L x y ,
where L is the Lipschitz constant of x L . This implies that the subdifferential g k is included in ( d + 1 ) L B . Since z k g k ( ξ k ) by (52), then { z k } is bounded. We can assume without loss of generality that z k z . Since ξ k x ¯ , d + 1 2 r k v k d , then z x 2 L ( x ¯ , u , τ ) ( d ) by Lemma 3. Using t k / r k 0 and taking the limit on both sides of (51) yields x L ( x ¯ , u , τ ) , v 0 .
(ii) Pick w T C 2 ( x ¯ ; d ) . Then, there exist t k 0 and w k w such that x k : = x ¯ + t k d + 1 2 t k 2 w k C for all k N . By an argument similar to that used for (51) in case (i), we can obtain
x L ( x ¯ , u , τ ) , w k z k , d + 1 2 t k w k ,
where z k x 2 L ( ξ k , u , τ ) ( d + 1 2 t k w k ) and ξ k [ x ¯ , x k ] . Taking limits yields
z , d + x L ( x ¯ , u , τ ) , w 0 .
This completes the proof. □
Corollary 1.
Let x ¯ be a local optimal solution of problem (46) in direction d T C ( x ¯ ) with x L ( x ¯ , u , τ ) d = 0 . For the constraint system H ( x ) D , suppose that MSCQ holds at x ¯ in direction d. Then,
(i) 
for every v R n satisfying H ( x ¯ ) v T D ( H ( x ¯ ) ; H ( x ¯ ) d ) , we have x L ( x ¯ , u , τ ) v 0 .
(ii) 
for every w R n satisfying H ( x ¯ ) w + 2 H ( x ¯ ) ( d , d ) T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) , there exists z x 2 L ( x ¯ , u , τ ) ( d ) such that x L ( x ¯ , u , τ ) , w + z , d 0 .
Proof. 
The results follow immediately by applying Lemma 2 and Theorem 5. □
The following result develops second-order necessary conditions in terms of support functions.
Theorem 6.
Let x ¯ be a local optimal solution of problem (46) in direction d T C ( x ¯ ) with x L ( x ¯ , u , τ ) d = 0 . If λ R p satisfies x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 , then
(i) 
σ λ | H ( x ¯ ) T C ( x ¯ ; d ) 0 ;
(ii) 
for each γ > 0 , there exists z x 2 L ( x ¯ , u , τ ) ( d ) such that
z , d + λ , 2 H ( x ¯ ) ( d , d ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ) γ B + 2 H ( x ¯ ) ( d , d ) 0 .
Proof. 
(i) Pick ξ H ( x ¯ ) T C ( x ¯ ; d ) , i.e., there exists v T C ( x ¯ ; d ) such that ξ = H ( x ¯ ) v . Hence,
λ , ξ = λ , H ( x ¯ ) v = H ( x ¯ ) λ , v = x L ( x ¯ , u , τ ) , v 0 ,
where the third step comes from the fact x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 by assumption and the last step follows from Theorem 5 (i).
(ii) If T C 2 ( x ¯ ; d ) γ B = ∅ , then the corresponding support function takes the value − ∞ , and in this case, (53) holds trivially. If T C 2 ( x ¯ ; d ) γ B ≠ ∅ , then there exists w γ T C 2 ( x ¯ ; d ) γ B such that
min w T C 2 ( x ¯ ; d ) γ B x L ( x ¯ , u , τ ) , w = x L ( x ¯ , u , τ ) , w γ .
According to Theorem 5, for the above w γ , there exists z γ x 2 L ( x ¯ , u , τ ) ( d ) such that
x L ( x ¯ , u , τ ) , w γ + z γ , d 0 .
Therefore,
σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ) γ B + 2 H ( x ¯ ) ( d , d ) = max λ , H ( x ¯ ) w + 2 H ( x ¯ ) ( d , d ) | w T C 2 ( x ¯ ; d ) γ B = max H ( x ¯ ) λ , w | w T C 2 ( x ¯ ; d ) γ B + λ , 2 H ( x ¯ ) ( d , d ) = max x L ( x ¯ , u , τ ) , w | w T C 2 ( x ¯ ; d ) γ B + λ , 2 H ( x ¯ ) ( d , d ) = min x L ( x ¯ , u , τ ) , w | w T C 2 ( x ¯ ; d ) γ B + λ , 2 H ( x ¯ ) ( d , d ) = x L ( x ¯ , u , τ ) , w γ + λ , 2 H ( x ¯ ) ( d , d ) = x L ( x ¯ , u , τ ) , w γ z γ , d + z γ , d + λ , 2 H ( x ¯ ) ( d , d ) z γ , d + λ , 2 H ( x ¯ ) ( d , d ) ,
where the third equality comes from the fact x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 by assumption, the fifth equality follows from (54), and the last step is due to (55). □
Lemma 8
(Lemma 3.4, [33]). Let x ¯ F R n . If the sequence x k F { x ¯ } converges to x ¯ such that ( x k x ¯ ) / t k converges to some nonzero vector d T F ( x ¯ ) , where t k : = x k x ¯ , then either ( x k x ¯ t k d ) / 1 2 t k 2 converges to some vector w T F 2 ( x ¯ ; d ) { d } , or there exists a sequence r k 0 such that t k / r k 0 and ( x k x ¯ t k d ) / 1 2 t k r k converges to some vector w T F ( x ¯ ; d ) { d } { 0 } , where { d } denotes the orthogonal subspace to d.
The second-order sufficient condition is derived below.
Theorem 7.
Let x ¯ be a feasible solution of problem (46), i.e., x ¯ C . Let d T C ( x ¯ ) { 0 } with x L ( x ¯ , u , τ ) d = 0 . Suppose that there exists λ R p such that x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 and
(i) 
λ , v < 0 , v H ( x ¯ ) T C ( x ¯ ; d ) { d } { 0 } ;
(ii) 
z , d + λ , 2 H ( x ¯ ) ( d , d ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ) { d } + 2 H ( x ¯ ) ( d , d ) > 0 , z x 2 L ( x ¯ , u , τ ) ( d ) .
Then, the second-order growth condition holds at x ¯ in direction d, i.e., there exist κ , ρ , δ > 0 such that
L ( x , u , τ ) L ( x ¯ , u , τ ) + κ x x ¯ 2 , x C x ¯ + V ρ , δ ( d ) .
Proof. 
Suppose on the contrary that the second-order growth condition in direction d does not hold at x ¯ . This means that there exists a sequence { x k } x ¯ + V ( 1 / k , 1 / k ) ( d ) such that x k C and
L ( x k , u , τ ) < L ( x ¯ , u , τ ) + 1 k x k x ¯ 2 .
Pick λ R p with x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 satisfying (i) and (ii). Let t k : = x k x ¯ and d k : = ( x k x ¯ ) / t k . Then t k 0 + and we can assume without loss of generality that d k d ¯ : = d / d T C ( x ¯ ) . Lemma 8 ensures that one of the following conditions holds:
(a)
w k : = ( x k x ¯ t k d ¯ ) / 1 2 t k 2 converges to some vector w T C 2 ( x ¯ ; d ¯ ) { d ¯ } ;
(b)
there exists a sequence r k 0 such that t k / r k 0 and w ˜ k : = ( x k x ¯ t k d ¯ ) / 1 2 t k r k converges to some vector w ˜ T C ( x ¯ ; d ¯ ) { d ¯ } { 0 } .
Case (1). If condition (a) holds, then x k = x ¯ + t k d ¯ + 1 2 t k 2 w k . Since L ( · , u , τ ) C 1 , 1 , it follows from Lemma 4 that there exists z k x 2 L ( ξ k , u , τ ) ( x k x ¯ ) , where ξ k [ x ¯ , x k ] , such that
L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , x k x ¯ 1 2 z k , x k x ¯ .
Note that
L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , x k x ¯ = L ( x k , u , τ ) L ( x ¯ , u , τ ) x L ( x ¯ , u , τ ) , t k d ¯ + 1 2 t k 2 w k = L ( x k , u , τ ) L ( x ¯ , u , τ ) t k x L ( x ¯ , u , τ ) , d ¯ 1 2 t k 2 x L ( x ¯ , u , τ ) , w k = L ( x k , u , τ ) L ( x ¯ , u , τ ) 1 2 t k 2 x L ( x ¯ , u , τ ) , w k ,
where the last step is due to x L ( x ¯ , u , τ ) d = 0 by assumption. Hence, it follows from (57) and (58) that
L ( x k , u , τ ) L ( x ¯ , u , τ ) 1 2 z k , t k d ¯ + 1 2 t k 2 w k + 1 2 t k 2 x L ( x ¯ , u , τ ) , w k .
According to Lemma 3, we have
z k x 2 L ( ξ k , u , τ ) ( x k x ¯ ) = t k x 2 L ( ξ k , u , τ ) d ¯ + 1 2 t k w k .
Hence, z k = t k z k with z k x 2 L ( ξ k , u , τ ) ( d ¯ + 1 2 t k w k ) . Following a similar argument as given for (52), we can assume without loss of generality that z k converges to some z x 2 L ( x ¯ , u , τ ) ( d ¯ ) = 1 d x 2 L ( x ¯ , u , τ ) ( d ) by Lemma 3 (i). Hence z = 1 d z ˜ for some z ˜ x 2 L ( x ¯ , u , τ ) ( d ) .
It follows from (56) and (59) that
1 2 z k , t k d ¯ + 1 2 t k 2 w k + 1 2 t k 2 x L ( x ¯ , u , τ ) , w k L ( x k , u , τ ) L ( x ¯ , u , τ ) < 1 k t k 2 .
Since z k = t k z k , then
1 2 z k , t k d ¯ + 1 2 t k 2 w k = 1 2 t k z k , t k d ¯ + 1 2 t k 2 w k = 1 2 t k 2 z k , d ¯ + 1 4 t k 3 z k , w k .
Putting (60) and (61) together gives
1 2 t k 2 z k , d ¯ + 1 4 t k 3 z k , w k + 1 2 t k 2 x L ( x ¯ , u , τ ) , w k < 1 k t k 2 .
Dividing both sides of the above inequality by 1 2 t k 2 and letting k → ∞ yields
z , d ¯ + x L ( x ¯ , u , τ ) , w 0 .
Since w T C 2 ( x ¯ ; d ¯ ) { d ¯ } , then
H ( x ¯ ) w + 2 H ( x ¯ ) ( d ¯ , d ¯ ) H ( x ¯ ) T C 2 ( x ¯ ; d ¯ ) { d ¯ } + 2 H ( x ¯ ) ( d ¯ , d ¯ ) ,
and hence,
λ , H ( x ¯ ) w + 2 H ( x ¯ ) ( d ¯ , d ¯ ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ¯ ) { d ¯ } + 2 H ( x ¯ ) ( d ¯ , d ¯ ) .
Note that
z , d ¯ + λ , 2 H ( x ¯ ) ( d ¯ , d ¯ ) = x L ( x ¯ , u , τ ) + H ( x ¯ ) λ , w + z , d ¯ + λ , 2 H ( x ¯ ) ( d ¯ , d ¯ ) = z , d ¯ + x L ( x ¯ , u , τ ) , w + λ , H ( x ¯ ) w + 2 H ( x ¯ ) ( d ¯ , d ¯ ) z , d ¯ + x L ( x ¯ , u , τ ) , w + σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ¯ ) { d ¯ } + 2 H ( x ¯ ) ( d ¯ , d ¯ ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ¯ ) { d ¯ } + 2 H ( x ¯ ) ( d ¯ , d ¯ ) ,
where the first step is due to x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 by assumption, the third step follows from (64), and the last step comes from (63). Hence, it follows from (65) that
z , d ¯ + λ , 2 H ( x ¯ ) ( d ¯ , d ¯ ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ¯ ) { d ¯ } + 2 H ( x ¯ ) ( d ¯ , d ¯ ) 0 .
Recall that z = z ˜ / d and d ¯ = d / d . Hence applying (7), the above formula can be rewritten as
z ˜ , d + λ , 2 H ( x ¯ ) ( d , d ) σ λ | H ( x ¯ ) T C 2 ( x ¯ ; d ) { d } + 2 H ( x ¯ ) ( d , d ) 0 ,
which is a contradiction to assumption (ii).
Case (2). If condition (b) holds, then x k = x ¯ + t k d ¯ + 1 2 t k r k w ˜ k . By an argument similar to that leading to (62) in Case (1), we can obtain
1 2 t k 2 z ˜ k , d ¯ + 1 4 t k 2 r k z ˜ k , w ˜ k + 1 2 t k r k x L ( x ¯ , u , τ ) , w ˜ k < 1 k t k 2 ,
where z ˜ k x 2 L ( ξ k , u , τ ) ( d ¯ + 1 2 r k w ˜ k ) . Dividing both sides of the above inequality by 1 2 t k r k yields
t k r k z ˜ k , d ¯ + 1 2 t k z ˜ k , w ˜ k + x L ( x ¯ , u , τ ) , w ˜ k < 2 k · t k r k .
Taking the limit as k → ∞ and noting that t k / r k 0 yields
x L ( x ¯ , u , τ ) , w ˜ 0 .
This together with the fact x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 by assumption implies
λ , H ( x ¯ ) w ˜ = H ( x ¯ ) λ , w ˜ = x L ( x ¯ , u , τ ) , w ˜ 0 ,
which is a contradiction to condition (i). □
Corollary 2.
Let x ¯ be a feasible solution of problem (46), i.e., x ¯ C , and let d T C ( x ¯ ) { 0 } such that x L ( x ¯ , u , τ ) d = 0 . Suppose that there exists λ R p satisfying x L ( x ¯ , u , τ ) + H ( x ¯ ) λ = 0 and
(i) 
λ , v < 0 , v T D ( H ( x ¯ ) ; H ( x ¯ ) d ) H ( x ¯ ) { d } { 0 } ;
(ii) 
z , d + λ , 2 H ( x ¯ ) ( d , d ) σ T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) ( λ ) > 0 , z x 2 L ( x ¯ , u , τ ) ( d ) .
Then, the second-order growth condition holds at x ¯ in direction d.
Proof. 
The results follow immediately by applying Lemma 2 and Theorem 7. □
Example 2.
Consider the following optimization problem:
min f ( x 1 , x 2 ) = x 1 2 + x 1 + 0 x 2 | t | d t , s . t . G ( x 1 , x 2 ) = x 1 x 2 K = ( y 1 , y 2 ) y 1 0 , y 2 0 , H ( x 1 , x 2 ) = e x 1 x 2 2 1 2 x 2 D ,
where
D = { ( y 1 , y 2 ) R 2 y 1 y 2 2 / 2 } { ( y 1 , y 2 ) R 2 y 1 y 2 2 / 2 , y 1 1 } .
For τ > 0 , the partial augmented Lagrangian is
L ( x , u , τ ) = f ( x ) + e τ 1 δ K τ 1 u + G ( x ) 1 2 τ u 2 = x 1 2 + x 1 + 0 x 2 | t | d t + e τ 1 δ K ( τ 1 u + x ) 1 2 τ u 2 ,
and its gradient is
x L ( x , u , τ ) = 2 x 1 + 1 | x 2 | + τ τ 1 u + x P K τ 1 u + x .
Let x ¯ = u * = ( 0 , 0 ) R 2 . Note that C : = H 1 ( D ) = { ( x 1 , x 2 ) R 2 | e x 1 1 3 x 2 2 } { ( x 1 , x 2 ) R 2 | e x 1 x 2 2 1 e x 1 } and T C ( x ¯ ) = R + × R . Hence, if x C and lies in some neighborhood of x ¯ , then e x 1 1 3 x 2 2 0 , which implies x 1 0 . So,
x L ( x , u * , τ ) = 2 x 1 + 1 x 2 , x 2 0 , 2 x 1 + 1 ( τ 1 ) x 2 , x 2 < 0 .
For any nonzero direction d T C ( x ¯ ) = R + × R , we show that either x L ( x ¯ , u * , τ ) d > 0 or x L ( x ¯ , u * , τ ) d = 0 and the second-order condition in Corollary 2 holds. In fact, since d T C ( x ¯ ) , then d 1 0 . Consider the following two cases. If d 1 > 0 , then x L ( x ¯ , u * , τ ) d = d 1 > 0 . If d 1 = 0 , then by direct calculation, we obtain H ( x ¯ ) = 1 0 0 2 , T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) = { ( w 1 , w 2 ) R 2 w 1 4 d 2 2 } , and T D ( H ( x ¯ ) ; H ( x ¯ ) d ) H ( x ¯ ) { d } { 0 } = { ( w 1 , w 2 ) R 2 w 1 > 0 , w 2 = 0 } .
Let λ = ( 1 , 0 ) . Then, x L ( x ¯ , u * , τ ) + H ( x ¯ ) λ = 0 and λ , v = w 1 < 0 for all v : = ( w 1 , w 2 ) T D ( H ( x ¯ ) ; H ( x ¯ ) d ) H ( x ¯ ) { d } { 0 } . Hence, the condition (i) in Corollary 2 holds.
Define ϕ ( x ) : = d , L ( x , u * , τ ) . Then,
ϕ ( x ) = x 2 d 2 , x 2 0 , ( τ 1 ) x 2 d 2 , x 2 < 0 .
For x 2 > 0 , the gradient of ϕ is ( 0 , d 2 ) , and for x 2 < 0 , it is ( 0 , ( τ 1 ) d 2 ) . Therefore, the subdifferential of ϕ at x ¯ = ( 0 , 0 ) is given by
x 2 L ( x ¯ , u * , τ ) ( d ) = ϕ ( x ¯ ) = ( 0 , d 2 ) , ( 0 , ( τ 1 ) d 2 ) , τ 2 , co { ( 0 , d 2 ) , ( 0 , ( τ 1 ) d 2 ) } , τ < 2 .
It is straightforward to compute that
λ , 2 H ( x ¯ ) ( d , d ) = ( 1 , 0 ) , ( 2 d 2 2 , 0 ) = 2 d 2 2 .
For any z x 2 L ( x ¯ , u * , τ ) ( d ) , we have z = ( 0 , θ d 2 ) with θ { 1 , τ 1 } if τ 2 , or θ co { 1 , τ 1 } if τ < 2 . So,
z , d = ( 0 , θ d 2 ) , ( 0 , d 2 ) = θ d 2 2 .
The support function of the second-order tangent set T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) at λ = ( 1 , 0 ) is
σ T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) ( λ ) = sup w T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) λ , w = sup w 1 4 d 2 2 ( w 1 ) = 4 d 2 2 .
Combining (66)–(68) yields
z , d + λ , 2 H ( x ¯ ) ( d , d ) σ T D 2 ( H ( x ¯ ) ; H ( x ¯ ) d ) ( λ ) = θ d 2 2 + 2 d 2 2 ( 4 d 2 2 ) = ( θ + 6 ) d 2 2 min { 7 , τ + 5 } d 2 2 > 0 .
So, the condition (ii) in Corollary 2 holds.
According to the above analysis, we can conclude that x ¯ is a local minimizer of L ( x , u , τ ) . This establishes the second inequality in (15). The first inequality in (15) follows from the fact that L ( x ¯ , u * , τ ) = f ( x ¯ ) by (26) since u * = 0 N K ( G ( x ¯ ) ) , and that L ( x ¯ , u , τ ) f ( x ¯ ) for all u R m by (22) due to x ¯ being feasible. This shows that ( x ¯ , u * ) is a local saddle point of the problem.
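The conclusion of Example 2 can also be checked by sampling. In the sketch below, the set K is read as the nonnegative orthant (consistent with the gradient formula displayed above, whose second component is (τ − 1)x₂ for x₂ < 0), membership in C is tested through the branch of D active near the origin, and u* = 0; these readings are assumptions made for this illustration. The code verifies L(x, u*, τ) ≥ L(x̄, u*, τ) = 0 on random points of C near x̄ for several τ > 0.

```python
import numpy as np

rng = np.random.default_rng(4)

def L(x, tau):
    # L(x, u*, tau) with u* = 0 and K read as the nonnegative orthant:
    # f(x) = x1^2 + x1 + int_0^{x2} |t| dt = x1^2 + x1 + x2 |x2| / 2,
    # envelope term = (tau/2) dist(x, K)^2
    f = x[0] ** 2 + x[0] + 0.5 * x[1] * abs(x[1])
    env = 0.5 * tau * (min(x[0], 0.0) ** 2 + min(x[1], 0.0) ** 2)
    return f + env

def in_C_near_origin(x):
    # branch of D active near the origin: y1 >= y2^2 / 2,
    # with H(x) = (exp(x1) - x2^2 - 1, 2 x2)
    y1, y2 = np.exp(x[0]) - x[1] ** 2 - 1.0, 2.0 * x[1]
    return y1 >= y2 ** 2 / 2.0 - 1e-12

x_bar = np.zeros(2)
for tau in (0.5, 2.0, 5.0):
    assert L(x_bar, tau) == 0.0
    hits = 0
    for _ in range(20000):
        x = rng.uniform(-0.3, 0.3, size=2)
        if in_C_near_origin(x):
            hits += 1
            assert L(x, tau) >= L(x_bar, tau) - 1e-12   # local optimality
    assert hits > 100   # enough feasible samples were actually tested
```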

5. Conclusions

In this paper, we focus on optimization problems with a separable structure, where the constraint system is categorized into two types based on their properties: convex constraints and nonconvex constraints. To handle this particular structure, we propose a partial augmented Lagrangian function that adopts distinct analytical strategies for each type of constraint. We mainly investigate the properties of saddle points and second-order optimality conditions for this partial augmented Lagrangian function. In particular, (i) the convex set is only required to be closed under addition, without assuming it to be a cone; (ii) the differentiability requirements are weakened from twice continuous differentiability to gradient Lipschitz continuity; and (iii) asymptotic second-order tangent cones and second-order tangent sets are utilized to capture the geometric properties of the nonconvex constraints. It is worth further investigating the development of duality theory and algorithmic designs based on this partial augmented Lagrangian function.

Author Contributions

Conceptualization, L.H. and J.Z.; Methodology, L.H. and Y.W.; Validation, J.T., Y.W. and J.Z.; Formal analysis, L.H. and J.T.; Writing—original draft preparation, L.H. and Y.W.; Writing—review and editing, J.T. and J.Z.; Supervision, J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (12371305, 12571321), the Shandong Provincial Natural Science Foundation (ZR2023MA020, ZR2025QC11), and the Key scientific research projects of Higher Education of Henan Province (26A110018).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors are gratefully indebted to the anonymous referees for their valuable suggestions that helped us greatly improve the original presentation of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Bai, J.C.; Hager, W.W.; Zhang, H.C. An inexact accelerated stochastic ADMM for separable convex optimization. Comput. Optim. Appl. 2022, 81, 479–518.
2. Bai, J.C.; Li, J.C.; Xu, F.M.; Zhang, H.C. Generalized symmetric ADMM for separable convex optimization. Comput. Optim. Appl. 2018, 70, 129–170.
3. Hager, W.W.; Zhang, H.C. Convergence rates for an inexact ADMM applied to separable convex optimization. Comput. Optim. Appl. 2020, 77, 729–754.
4. Luke, D.R. Convergence in distribution of randomized algorithms: The case of partially separable optimization. Math. Program. 2025, 212, 763–798.
5. Stefanov, S.M. Separable Optimization: Theory and Methods; Springer: Cham, Switzerland, 2013.
6. Bai, X.D.; Sun, J.; Zheng, X.J. An augmented Lagrangian decomposition method for chance-constrained optimization problems. INFORMS J. Comput. 2021, 33, 1056–1069.
7. Kan, C.; Song, W. Second-order conditions for existence of augmented Lagrange multipliers for eigenvalue composite optimization problems. J. Glob. Optim. 2015, 63, 77–97.
8. Kan, C.; Song, W. Second-order conditions for the existence of augmented Lagrange multipliers for sparse optimization. J. Optim. Theory Appl. 2024, 201, 103–129.
9. Bai, K.; Ye, J.J.; Zeng, S.Z. Optimality conditions for bilevel programmes via Moreau envelope reformulation. Optimization 2024, 74, 2685–2719.
10. Ma, X.; Yao, W.; Ye, J.J.; Zhang, J. Combined approach with second-order optimality conditions for bilevel programming problems. arXiv 2021, arXiv:2108.00179.
11. Chen, J.; Liu, L.; Lv, Y.; Ghosh, D.; Yao, J.C. Second-order strong optimality and duality for nonsmooth multiobjective fractional programming with constraints. Positivity 2024, 28, 36.
12. Chen, J.; Su, H.; Ou, X.; Lv, Y. First- and second-order optimality conditions of nonsmooth sparsity multiobjective optimization via variational analysis. J. Glob. Optim. 2024, 89, 303–325.
13. Kien, B.T.; Qin, X.; Wen, C.F.; Yao, J.C. Second-order optimality conditions for multiobjective optimal control problems with mixed pointwise constraints and free right end point. SIAM J. Control Optim. 2020, 58, 2658–2677.
14. Mehlitz, P. Stationarity conditions and constraint qualifications for mathematical programs with switching constraints. Math. Program. 2020, 181, 149–186.
15. Bonnans, J.F.; Cominetti, R.; Shapiro, A. Second order optimality conditions based on parabolic second order tangent sets. SIAM J. Optim. 1999, 9, 466–492.
16. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000.
17. Gfrerer, H.; Ye, J.J.; Zhou, J.C. Second-order optimality conditions for nonconvex set-constrained optimization problems. Math. Oper. Res. 2022, 47, 2344–2365.
18. Bai, K.; Song, Y.; Zhang, J. Second-order enhanced optimality conditions and constraint qualifications. J. Optim. Theory Appl. 2023, 198, 1264–1284.
19. Andreani, R.; Haeser, G.; Mito, L.M.; Ramirez, H.; Silveira, T.P. First- and second-order optimality conditions for second-order cone and semidefinite programming under a constant rank condition. Math. Program. 2023, 202, 473–513.
20. Medeiros, J.C.A.; Ribeiro, A.A.; Sachine, M.; Secchin, L.D. A practical second-order optimality condition for cardinality-constrained problems with application to an augmented Lagrangian method. J. Optim. Theory Appl. 2025, 206, 22.
21. Ouyang, W.; Ye, J.J.; Zhang, B. New second-order optimality conditions for directional optimality of a general set-constrained optimization problem. SIAM J. Optim. 2025, 35, 1274–1299.
22. Penot, J.P. Second-order conditions for optimization problems with constraints. SIAM J. Control Optim. 1998, 37, 303–318.
23. Gfrerer, H. On directional metric regularity, subregularity and optimality conditions for nonsmooth mathematical programs. Set-Valued Var. Anal. 2013, 21, 151–176.
24. Mordukhovich, B.S. Variational Analysis and Applications; Springer: Cham, Switzerland, 2018.
25. An, D.T.V.; Xu, H.K.; Yen, N.D. Fréchet second-order subdifferentials of Lagrangian functions and optimality conditions. SIAM J. Optim. 2023, 33, 766–784.
26. Khanh, P.D.; Khoa, V.V.H.; Mordukhovich, B.S.; Phat, V.T. Second-order subdifferential optimality conditions in nonsmooth optimization. SIAM J. Optim. 2025, 35, 678–711.
27. Mordukhovich, B.S. Second-Order Variational Analysis in Optimization, Variational Stability, and Control: Theory, Algorithms, Applications; Springer: Cham, Switzerland, 2024.
28. Mordukhovich, B.S. Variational Analysis and Generalized Differentiation I: Basic Theory; Springer: Berlin/Heidelberg, Germany, 2006.
29. An, D.T.V.; Tuyen, N.V. On second-order optimality conditions for C1,1 optimization problems via Lagrangian functions. Appl. Anal. 2025, 1, 17.
30. Feng, M.; Li, S.J. On second-order Fritz John type optimality conditions for a class of differentiable optimization problems. Appl. Anal. 2020, 99, 2594–2608.
31. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer: Berlin/Heidelberg, Germany, 1998.
32. Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
33. Jiménez, B.; Novo, V. Optimality conditions in differentiable vector optimization via second-order tangent sets. Appl. Math. Optim. 2004, 49, 123–144.