Article

Optimality and Duality for DC Programming with DC Inequality and DC Equality Constraints

College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(4), 601; https://doi.org/10.3390/math10040601
Submission received: 20 December 2021 / Revised: 30 January 2022 / Accepted: 14 February 2022 / Published: 16 February 2022

Abstract

In this paper, a class of nondifferentiable DC programs with DC inequality and DC equality constraints is considered. Firstly, for this special nondifferentiable DC constraint system, an appropriate relaxed constant rank constraint qualification is proposed and used to derive one necessary optimality condition. Then, by adopting a convexification technique, another necessary optimality condition is obtained. Further, combined with conjugate theory, the zero duality gap properties between the pairs of Wolfe and Mond-Weir type primal-dual problems are characterized, respectively.

1. Introduction

DC (difference of two convex functions) programming has been one of the most active areas in nonconvex optimization. It arises in various applications, such as digital communication systems [1], assignment and power allocation [2], and compressed sensing [3]. Up to now, most work has concentrated on DC programming with convex inequality constraints or convex cone constraints. By virtue of conjugate theory, epigraph techniques, subdifferentials and perturbation approaches, several duality results and optimality conditions for DC or composite DC programming have been established; see [4,5,6,7,8,9,10,11,12] and the references therein. In these fruitful developments, the convexity of the constraint system is vital. However, many practical problems take the form of DC problems with nonconvex constraints. For instance, the problem involving DC inequality and DC equality constraints is an interesting subject whose constraint system is nonconvex and has a special structure; it is the focus of our research.
Over the years, various local search and global search methods for solving smooth and nonsmooth DC programming have been proposed; see, for example, [13,14]. It is worth noting that these methods are often based on global necessary optimality conditions. Therefore, how to establish optimality conditions that are easy to verify provides one of the motivations for this work. As is known, the relaxed constant rank constraint qualification (RCRCQ for short) was presented by Minchenko and Stakhovski [15] for continuously differentiable inequality and equality constraint functions. It has proved to be a suitable assumption for discussing optimality conditions of optimization problems involving inequality and equality constraints; see, for example, [16,17,18]. Hence, we extend the original RCRCQ to the nondifferentiable DC constraint system and use it to carry out our research.
Wolfe and Mond-Weir dualities, which are closely related to the optimality of the primal problem, are highly important in duality theory. To the best of our knowledge, Wolfe and Mond-Weir dualities have been discussed intensively for convex optimization problems; see [19] for more details. However, there are few papers on Wolfe and Mond-Weir type dual problems for DC programming with DC inequality and DC equality constraints, since neither the objective nor the constraint functions are convex. How to characterize the Wolfe and Mond-Weir dualities in this setting provides another motivation.
In this paper, we mainly investigate a class of nondifferentiable DC programs with DC inequality and DC equality constraints. We first extend the RCRCQ by taking points from the subdifferentials of the two convex functions in each DC constraint function. Then, we use the extended RCRCQ to derive one necessary optimality condition. Moreover, we adopt a convexification technique to translate the DC constraints into convex constraints and establish another necessary condition. For the purpose of constructing Wolfe and Mond-Weir dual problems, if we directly use the first necessary optimality condition as a constraint and define the objective function in the traditional way, even weak duality may fail. For this reason, we apply the convexification technique again, together with the Fenchel-Moreau theorem, to formulate an inner convex problem. By constructing the Wolfe and Mond-Weir dual problems of this inner convex problem, we obtain the corresponding Wolfe and Mond-Weir type dual problems of the primal problem and characterize the zero duality gap properties, respectively.
Compared with [8], DC equality constraints are added to our model. Moreover, the constraint systems considered in [4,7,10,11,12] are convex, whereas our model focuses on a DC constraint system, which is nonconvex. Finally, to our knowledge, the Wolfe and Mond-Weir dualities for DC programming with a DC constraint system have not been discussed in existing studies.
The rest of the paper is organized as follows. In Section 2, we recall some basic concepts and properties. In Section 3, we deduce two necessary optimality conditions for the DC programming. In Section 4, we characterize the zero duality gap properties between the pairs of Wolfe and Mond-Weir type primal-dual problems, respectively.

2. Preliminaries

Let R^d be the d-dimensional Euclidean space and B_δ(x) the closed ball centered at x with radius δ > 0. For any two vectors x, y ∈ R^d, we denote by ⟨x, y⟩ their inner product and by ‖·‖ the Euclidean norm. For a nonempty set A ⊆ R^d, |A| and cone A denote the cardinality and the conical hull of A, respectively. If A is a convex set, the normal cone of A at x ∈ A is given by
N(A, x) := { x* ∈ R^d : ⟨x*, y − x⟩ ≤ 0, ∀ y ∈ A }.
For a real-valued function f : R^d → R, the conjugate function of f is defined by f* : R^d → R ∪ {+∞} with f*(x*) := sup_{x ∈ R^d} { ⟨x*, x⟩ − f(x) }. Similarly, the biconjugate function of f is defined by f** : R^d → R ∪ {+∞} with f**(x) := sup_{x* ∈ R^d} { ⟨x*, x⟩ − f*(x*) }. We denote dom f* := { x* ∈ R^d : f*(x*) < +∞ }.
For a multifunction F : R^d ⇒ R^d, the expression
Lim sup_{x → x̄} F(x) := { x* ∈ R^d : ∃ x_k → x̄, x_k* → x* s.t. x_k* ∈ F(x_k), ∀ k ∈ N }
signifies the sequential Painlevé-Kuratowski upper limit.
Here, we also recall the concepts of local Lipschitz continuity and lower semicontinuity of a real-valued function. f : R^d → R is said to be lower semicontinuous at x̄ if for any ε > 0 there exists an open neighborhood U of x̄ such that
f(x) > f(x̄) − ε, ∀ x ∈ U,
or, equivalently,
lim inf_{x → x̄} f(x) ≥ f(x̄).
f : R^d → R is said to be locally Lipschitz continuous around x̄ if there exist an open neighborhood U of x̄ and L > 0 such that
|f(x) − f(y)| ≤ L ‖x − y‖, ∀ x, y ∈ U.
Definition 1.
[20] Let f : R^d → R be a lower semicontinuous function.
(i)
The Fréchet subdifferential of f at x̄ is defined by
∂̂f(x̄) := { x* ∈ R^d : lim inf_{x → x̄} [ f(x) − f(x̄) − ⟨x*, x − x̄⟩ ] / ‖x − x̄‖ ≥ 0 }.
(ii)
The limiting subdifferential of f at x̄ is defined by
∂̄f(x̄) := Lim sup_{x →_f x̄} ∂̂f(x),
where x →_f x̄ means that x → x̄ with f(x) → f(x̄).
Remark 1.
Note that ∂̂f(x̄) ⊆ ∂̄f(x̄) and ∂̂f(x̄) ≠ ∅ if f is locally Lipschitz continuous at x̄. Moreover,
∂̄f(x) = ∂̂f(x) = { x* ∈ R^d : ⟨x*, y − x⟩ ≤ f(y) − f(x), ∀ y ∈ R^d }    (1)
if f is a convex function. In this case, we denote the rightmost set in (1) by ∂f(x). The Young inequality and Young equality are expressed as follows, respectively:
f(x) + f*(x*) ≥ ⟨x*, x⟩, ∀ x, x* ∈ R^d,
f(x) + f*(x*) = ⟨x*, x⟩ ⟺ x* ∈ ∂f(x).
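As a quick illustration of the conjugate function and the Young inequality, the following small numerical sketch (not taken from the paper) approximates f* on a grid for the simple choice f(x) = x²; the function and the grid bounds are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch: approximate the conjugate f*(s) = sup_x { s*x - f(x) }
# of f(x) = x**2 on a grid and check the Young inequality f(x) + f*(s) >= s*x.
# For f(x) = x**2 the exact conjugate is f*(s) = s**2 / 4, and the Young
# equality holds when s = f'(x) = 2x, i.e. when s is a subgradient of f at x.

xs = np.linspace(-10.0, 10.0, 20001)          # grid over which the sup is taken
f = lambda x: x**2

def conjugate(s):
    return np.max(s * xs - f(xs))             # grid approximation of f*(s)

for x, s in [(1.0, 3.0), (-2.0, 0.5), (0.7, 1.4)]:
    lhs, rhs = f(x) + conjugate(s), s * x
    print(f"x={x:+.1f}, s={s:+.1f}: f(x)+f*(s)={lhs:.4f} >= <s,x>={rhs:.4f}")

# Young equality: s = 2x is a subgradient of f at x, so the gap closes.
x = 0.7; s = 2 * x
print("gap at s=2x:", f(x) + conjugate(s) - s * x)   # ~0 up to grid error
```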
Before proceeding further, let us specify the functions involved in the following discussion. We denote by I := {1, …, m} and I_0 := {m + 1, …, n} the index sets. Suppose that φ, ψ, f_i and g_i, i ∈ I ∪ I_0, are convex functions from R^d to R. Now, we consider a DC problem with DC inequality and DC equality constraints:
(DC)  inf φ(x) − ψ(x)  s.t.  f_i(x) − g_i(x) ≤ 0, i ∈ I,  f_i(x) − g_i(x) = 0, i ∈ I_0.
Set Ω := { x ∈ R^d : f_i(x) − g_i(x) ≤ 0, i ∈ I, f_i(x) − g_i(x) = 0, i ∈ I_0 }. For x ∈ Ω, set I(x) := { i ∈ I : f_i(x) = g_i(x) }; I(x) is usually called the active index set. We denote by v(DC) the optimal value of (DC).
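To make the model concrete, here is a small computational sketch of a toy instance of (DC) in R²; the specific functions are hypothetical choices made only for illustration and do not come from the paper.

```python
import numpy as np

# A toy instance of (DC) in R^2, chosen purely for illustration:
# phi(x) = x1^2 + x2^2, psi(x) = |x1|, one DC inequality f1 - g1 <= 0
# and one DC equality f2 - g2 = 0.
phi = lambda x: x[0]**2 + x[1]**2
psi = lambda x: abs(x[0])
f1, g1 = lambda x: x[0] + x[1], lambda x: abs(x[1])   # f1 - g1 <= 0
f2, g2 = lambda x: x[0],        lambda x: abs(x[0])   # f2 - g2  = 0 (forces x1 >= 0)

def in_Omega(x, tol=1e-9):
    return f1(x) - g1(x) <= tol and abs(f2(x) - g2(x)) <= tol

# crude grid search for the optimal value v(DC) of the toy instance
grid = np.linspace(-2, 2, 401)
best = min((phi((a, b)) - psi((a, b)), (a, b))
           for a in grid for b in grid if in_Omega((a, b)))
print("approximate v(DC) and minimizer:", best)
```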
Remark 2.
The DC problems with convex inequality constraints and convex cone constraints are studied in [4,5,6,7,10]. Our model (DC) focuses on DC inequality and DC equality constraints and can therefore be regarded as a generalization of the convex case.
Let us close this section by recalling the classical relaxed constant rank constraint qualification, which will be used in the next section. Let h_i, i ∈ I ∪ I_0, be continuously differentiable and C := { x ∈ R^d : h_i(x) ≤ 0, i ∈ I, h_i(x) = 0, i ∈ I_0 }.
Definition 2.
[15] We say that C satisfies the relaxed constant rank constraint qualification at x̄ ∈ C if there exists a neighborhood V(x̄) of x̄ such that, for any index subset J = K ∪ I_0 with K ⊆ I(x̄) := { i ∈ I : h_i(x̄) = 0 }, the family of gradient vectors { ∇h_i(x), i ∈ J } has the same rank for all x ∈ V(x̄).

3. Necessary Optimality Conditions for (DC)

In this section, our main goal is to establish necessary optimality conditions for (DC). We first extend the RCRCQ given in Definition 2 to the special constraint system Ω.
Definition 3.
We say that Ω satisfies RCRCQ at x̄ ∈ Ω if there exist ū_i ∈ ∂f_i(x̄), û_i ∈ ∂g_i(x̄), i ∈ I, and v̄_i ∈ ∂f_i(x̄), v̂_i ∈ ∂g_i(x̄), i ∈ I_0, such that for any index set J ⊆ I(x̄) and all sufficiently large k ∈ N,
rank { ū_i − û_i, i ∈ J, v̄_i − v̂_i, i ∈ I_0 } = rank { ū_i^k − û_i^k, i ∈ J, v̄_i^k − v̂_i^k, i ∈ I_0 },
where ū_i^k ∈ ∂f_i(x_k), û_i^k ∈ ∂g_i(x_k), i ∈ I, and v̄_i^k ∈ ∂f_i(x_k), v̂_i^k ∈ ∂g_i(x_k), i ∈ I_0, for all sequences {x_k}, {ū^k}, {û^k}, {v̄^k} and {v̂^k} satisfying x_k → x̄, ū_i^k → ū_i, û_i^k → û_i, i ∈ I, and v̄_i^k → v̄_i, v̂_i^k → v̂_i, i ∈ I_0, as k → ∞.
Remark 3.
When g_i = 0 and the f_i are continuously differentiable for i ∈ I ∪ I_0, Definition 3 collapses into Definition 1 of [15]. So, Definition 3 can be regarded as a nonsmooth version of the RCRCQ for the special constraint system Ω.
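For intuition, the following sketch checks the rank condition of Definition 3 numerically for a smooth toy system in R², where every subdifferential reduces to a singleton gradient; the constraint functions are hypothetical and chosen only so that the rank computation is easy to follow.

```python
import numpy as np

# Smooth toy system (hypothetical data): one inequality with
# f1(x) = x1 + x2**2, g1(x) = x2**2 and one equality with
# f2(x) = x1**2 + x2, g2(x) = x1**2.  The relevant vectors in Definition 3
# are the differences grad f_i(x) - grad g_i(x); their rank is checked on a
# neighborhood of x_bar = (0, 0), where both constraints are active.
def diff_grads(x):
    d1 = np.array([1.0, 2 * x[1]]) - np.array([0.0, 2 * x[1]])   # = (1, 0)
    d2 = np.array([2 * x[0], 1.0]) - np.array([2 * x[0], 0.0])   # = (0, 1)
    return np.vstack([d1, d2])

x_bar = np.array([0.0, 0.0])
ranks = {int(np.linalg.matrix_rank(diff_grads(x_bar + 0.1 * np.random.randn(2))))
         for _ in range(100)}
print(ranks)   # {2}: the rank is constant near x_bar, so the RCRCQ holds here
```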
The following observation can be regarded as an extension of [18] (Proposition 3.4) to the nonsmooth case; it is useful for the derivation of the necessary conditions for (DC).
Lemma 1.
Suppose that Ω satisfies RCRCQ at x̄ ∈ Ω. Then, there exist v̄_i ∈ ∂f_i(x̄), v̂_i ∈ ∂g_i(x̄), i ∈ I_0, and Ĩ_0 ⊆ I_0 with |Ĩ_0| = rank { v̄_i − v̂_i, i ∈ I_0 } such that { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent, where v̄_i^k ∈ ∂f_i(x_k), v̂_i^k ∈ ∂g_i(x_k), i ∈ I_0, for all sequences {x_k}, {v̄^k} and {v̂^k} satisfying x_k → x̄, v̄_i^k → v̄_i, v̂_i^k → v̂_i, i ∈ I_0, as k → ∞.
Proof. 
If { v̄_i − v̂_i, i ∈ I_0 } is linearly independent, the assertion is evidently true.
If { v̄_i − v̂_i, i ∈ I_0 } is linearly dependent, then there exists an index set I_0′ := { i_1, …, i_j } ⊂ I_0 such that { v̄_i − v̂_i }_{i ∈ I_0′} is linearly independent. From the fact that Ω satisfies RCRCQ at x̄ ∈ Ω, we have
rank { v̄_i − v̂_i, i ∈ I_0 } = rank { v̄_i^k − v̂_i^k, i ∈ I_0 } = rank { v̄_i − v̂_i, i ∈ I_0′ } = |I_0′|
for J = ∅, where v̄_i^k ∈ ∂f_i(x_k), v̂_i^k ∈ ∂g_i(x_k), i ∈ I_0, for all sequences {x_k}, {v̄^k} and {v̂^k} satisfying x_k → x̄, v̄_i^k → v̄_i, v̂_i^k → v̂_i, i ∈ I_0, as k → ∞. Thus, there exists an index set Ĩ_0 with |Ĩ_0| = |I_0′| such that { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent. The proof is complete. □
Further, we introduce a function ϕ : R^d → R:
ϕ(x) := Σ_{i ∈ I} max { 0, f_i(x) − g_i(x) } + Σ_{i ∈ I_0} | f_i(x) − g_i(x) |.
Obviously, ϕ is locally Lipschitz continuous on R^d, since convex functions are locally Lipschitz continuous on the Euclidean space (see [21] (Theorem 1.4.1)).
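The following tiny sketch evaluates this residual ϕ for a hypothetical one-dimensional pair of DC constraints (data chosen only for illustration): ϕ vanishes exactly on the feasible set.

```python
# Sketch (toy data, not from the paper): the residual phi defined above for
# one DC inequality f1 - g1 <= 0 with f1(x) = x, g1(x) = |x| and one DC
# equality f2 - g2 = 0 with f2(x) = x**2, g2(x) = x.
def phi_residual(x):
    ineq = max(0.0, x - abs(x))       # f1(x) - g1(x) = x - |x|
    eq   = abs(x**2 - x)              # f2(x) - g2(x) = x^2 - x
    return ineq + eq

for x in (-0.5, 0.0, 0.5, 1.0):
    print(x, phi_residual(x))         # zero only at the feasible points 0 and 1
```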
Lemma 2.
For arbitrary x ∈ R^d, there exist λ_i ≥ 0, i ∈ A(x), and λ_i ∈ R, i ∈ I_0, such that
∂̄ϕ(x) ⊆ Σ_{i ∈ A(x) ∪ I_0} λ_i ∂̄( f_i − g_i )(x) ⊆ Σ_{i ∈ A(x) ∪ I_0} λ_i ∂f_i(x) − Σ_{i ∈ A(x) ∪ I_0} λ_i ∂g_i(x),
where A(x) := { i ∈ I : f_i(x) − g_i(x) ≥ 0 }.
Proof. 
This conclusion is obtained immediately by [22] (Lemma 2.1) and [20] (Corollary 3.3). We omit the proof here. □
Theorem 1.
Let x̄ ∈ Ω. Suppose that Ω satisfies RCRCQ at x̄. If x̄ is an optimal solution of (DC), then there exist λ_i ≥ 0, i ∈ I(x̄), and λ_i ∈ R, i ∈ I_0, such that
0 ∈ ∂φ(x̄) − ∂ψ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂f_i(x̄) − Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂g_i(x̄).    (2)
Proof. 
Step 1. For every k ∈ N, let us first consider the penalty problem (P_k):
inf F_k(x) := φ(x) − ψ(x) + (k/2) ϕ²(x) + (1/2) ‖x − x̄‖²  s.t.  x ∈ B_ε(x̄),
where ε > 0 is such that φ(x̄) − ψ(x̄) ≤ φ(x) − ψ(x) for every x ∈ B_ε(x̄) ∩ Ω.
Without loss of generality, we may assume that x_k is an optimal solution of (P_k) and that x_k → x̃ ∈ B_ε(x̄) as k → ∞, since B_ε(x̄) is a compact set and F_k is continuous on R^d. Then, one has
φ(x_k) − ψ(x_k) + (k/2) ϕ²(x_k) + (1/2) ‖x_k − x̄‖² ≤ φ(x̄) − ψ(x̄).    (3)
If lim_{k→∞} ϕ(x_k) ≠ 0, then lim_{k→∞} k ϕ²(x_k) = +∞, which contradicts (3). Thus, lim_{k→∞} ϕ(x_k) = ϕ(x̃) = 0 and x̃ ∈ Ω. Further, we have
lim inf_{k→∞} [ φ(x_k) − ψ(x_k) + (k/2) ϕ²(x_k) + (1/2) ‖x_k − x̄‖² ] ≥ φ(x̃) − ψ(x̃) + (1/2) ‖x̃ − x̄‖², and hence, by (3), φ(x̃) − ψ(x̃) + (1/2) ‖x̃ − x̄‖² ≤ φ(x̄) − ψ(x̄),
which implies that x̃ = x̄. Indeed, if x̃ ≠ x̄, then φ(x̃) − ψ(x̃) < φ(x̄) − ψ(x̄), which contradicts φ(x̄) − ψ(x̄) ≤ φ(x) − ψ(x) for every x ∈ B_ε(x̄) ∩ Ω. Consequently, x̃ = x̄ and x_k → x̄ as k → ∞.
Step 2. By the necessary optimality condition in terms of the limiting subdifferential and nonsmooth calculus rules, it follows that
0 ∈ ∂̄( φ − ψ )(x_k) + k ϕ(x_k) ∂̄ϕ(x_k) + x_k − x̄.
According to [20] (Corollary 3.3), Lemma 2 and k ϕ(x_k) ≥ 0, there exist λ_i^k ≥ 0, i ∈ A(x_k), and λ_i^k ∈ R, i ∈ I_0, such that
0 ∈ ∂φ(x_k) − ∂ψ(x_k) + Σ_{i ∈ A(x_k) ∪ I_0} λ_i^k ∂f_i(x_k) − Σ_{i ∈ A(x_k) ∪ I_0} λ_i^k ∂g_i(x_k) + x_k − x̄.
Further, there exist ξ̄^k ∈ ∂φ(x_k), η̄^k ∈ ∂ψ(x_k), ū_i^k ∈ ∂f_i(x_k), û_i^k ∈ ∂g_i(x_k), i ∈ A(x_k), and v̄_i^k ∈ ∂f_i(x_k), v̂_i^k ∈ ∂g_i(x_k), i ∈ I_0, such that
0 = ξ̄^k − η̄^k + Σ_{i ∈ A(x_k)} λ_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ I_0} λ_i^k ( v̄_i^k − v̂_i^k ) + x_k − x̄.    (4)
It is easy to verify that A(x_k) ⊆ I(x̄) for sufficiently large k. Let λ_i^k = 0 for i ∈ I(x̄) ∖ A(x_k). Then, (4) becomes
0 = ξ̄^k − η̄^k + Σ_{i ∈ I(x̄)} λ_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ I_0} λ_i^k ( v̄_i^k − v̂_i^k ) + x_k − x̄,    (5)
where λ_i^k ≥ 0, i ∈ I(x̄).
Since Ω satisfies RCRCQ at x̄ ∈ Ω, by Lemma 1 there exist v̄_i ∈ ∂f_i(x̄), v̂_i ∈ ∂g_i(x̄), i ∈ I_0, and Ĩ_0 ⊆ I_0 with |Ĩ_0| = rank { v̄_i − v̂_i, i ∈ I_0 } such that { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent, where v̄_i^k ∈ ∂f_i(x_k), v̂_i^k ∈ ∂g_i(x_k), i ∈ I_0, satisfy v̄_i^k → v̄_i, v̂_i^k → v̂_i, i ∈ I_0, as k → ∞. Then, there exist λ̃_i^k, i ∈ Ĩ_0, such that
Σ_{i ∈ I_0} λ_i^k ( v̄_i^k − v̂_i^k ) = Σ_{i ∈ Ĩ_0} λ̃_i^k ( v̄_i^k − v̂_i^k ).    (6)
Let supp(λ^k) := { i ∈ I : λ_i^k ≠ 0 }. Combining (5) with (6), one has
0 = ξ̄^k − η̄^k + Σ_{i ∈ I(x̄) ∩ supp(λ^k)} λ_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ Ĩ_0} λ̃_i^k ( v̄_i^k − v̂_i^k ) + x_k − x̄.    (7)
Moreover, by [22] (Lemma 2.2), there exist I_k ⊆ I(x̄) ∩ supp(λ^k) and λ̄_i^k, i ∈ I_k ∪ Ĩ_0, such that λ_i^k · λ̄_i^k > 0 for every i ∈ I_k, the family { ū_i^k − û_i^k }_{i ∈ I_k} ∪ { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent and
Σ_{i ∈ I(x̄) ∩ supp(λ^k)} λ_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ Ĩ_0} λ̃_i^k ( v̄_i^k − v̂_i^k ) = Σ_{i ∈ I_k} λ̄_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ Ĩ_0} λ̄_i^k ( v̄_i^k − v̂_i^k ).
Together with (7), we obtain that
0 = ξ̄^k − η̄^k + Σ_{i ∈ I_k} λ̄_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ Ĩ_0} λ̄_i^k ( v̄_i^k − v̂_i^k ) + x_k − x̄,  λ̄_i^k > 0, i ∈ I_k.    (8)
Since the index sets I_k are drawn from a finite collection, we may assume (taking a subsequence) that I_k ≡ Ĩ for all large k. Hence, { ū_i^k − û_i^k }_{i ∈ Ĩ} ∪ { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent, and (8) can be rewritten as
0 = ξ̄^k − η̄^k + Σ_{i ∈ Ĩ} λ̄_i^k ( ū_i^k − û_i^k ) + Σ_{i ∈ Ĩ_0} λ̄_i^k ( v̄_i^k − v̂_i^k ) + x_k − x̄.    (9)
Step 3. Finally, we show that { (λ̄_i^k)_{i ∈ Ĩ ∪ Ĩ_0} } is bounded. Let M_k := ‖ (λ̄_i^k)_{i ∈ Ĩ ∪ Ĩ_0} ‖. Suppose, to the contrary, that it is unbounded, so that (along a subsequence) M_k → ∞ as k → ∞ and
lim_{k→∞} (λ̄_i^k)_{i ∈ Ĩ ∪ Ĩ_0} / M_k = (λ̄_i)_{i ∈ Ĩ ∪ Ĩ_0}.
It is easy to see that λ̄_i ≥ 0, i ∈ Ĩ. Without loss of generality, we assume that ξ̄^k → ξ̄, η̄^k → η̄, ū_i^k → ū_i, û_i^k → û_i, i ∈ Ĩ. Then ξ̄ ∈ ∂φ(x̄), η̄ ∈ ∂ψ(x̄), ū_i ∈ ∂f_i(x̄), û_i ∈ ∂g_i(x̄), i ∈ Ĩ, by [23] (Proposition 2.1.5). Dividing both sides of (9) by M_k and taking limits as k → ∞, one has
0 = Σ_{i ∈ Ĩ} λ̄_i ( ū_i − û_i ) + Σ_{i ∈ Ĩ_0} λ̄_i ( v̄_i − v̂_i ).
This means that { ū_i − û_i }_{i ∈ Ĩ} ∪ { v̄_i − v̂_i }_{i ∈ Ĩ_0} is linearly dependent, since ‖ (λ̄_i)_{i ∈ Ĩ ∪ Ĩ_0} ‖ = 1.
On the other hand, because Ω satisfies RCRCQ at x̄ ∈ Ω and { ū_i^k − û_i^k }_{i ∈ Ĩ} ∪ { v̄_i^k − v̂_i^k }_{i ∈ Ĩ_0} is linearly independent, we have
rank { ū_i − û_i, i ∈ Ĩ, v̄_i − v̂_i, i ∈ I_0 } = rank { ū_i^k − û_i^k, i ∈ Ĩ, v̄_i^k − v̂_i^k, i ∈ I_0 } = rank { ū_i^k − û_i^k, i ∈ Ĩ, v̄_i^k − v̂_i^k, i ∈ Ĩ_0 } = |Ĩ| + |Ĩ_0|,
so { ū_i − û_i }_{i ∈ Ĩ} ∪ { v̄_i − v̂_i }_{i ∈ Ĩ_0} is linearly independent. This contradicts the fact that it is linearly dependent. Thus, { (λ̄_i^k)_{i ∈ Ĩ ∪ Ĩ_0} } is bounded, and we may assume that (λ̄_i^k)_{i ∈ Ĩ ∪ Ĩ_0} → (λ_i)_{i ∈ Ĩ ∪ Ĩ_0}. Taking limits on both sides of (9), we get
0 = ξ̄ − η̄ + Σ_{i ∈ Ĩ} λ_i ( ū_i − û_i ) + Σ_{i ∈ Ĩ_0} λ_i ( v̄_i − v̂_i ).
Let λ_i = 0 for i ∈ ( I(x̄) ∖ Ĩ ) ∪ ( I_0 ∖ Ĩ_0 ). We then conclude that
0 ∈ ∂φ(x̄) − ∂ψ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂f_i(x̄) − Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂g_i(x̄),
where λ_i ≥ 0, i ∈ I(x̄), which is exactly (2). The proof is complete. □
Remark 4.
As shown in [13], necessary optimality conditions for an unconstrained DC problem are significant for algorithm design and convergence analysis. Our result (2) is expressed in terms of classical convex subdifferentials, which are easier to compute than Fréchet or limiting subdifferentials, and it may therefore be helpful for research on DC algorithms with nonconvex constraints; a generic sketch of such an iteration is given below. Example 1 then verifies the conclusion of Theorem 1.
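For orientation only, here is a minimal DCA-style iteration for an unconstrained DC function, in the spirit of [13,14]; the objective x² − |x| and the update rule are illustrative assumptions, and this is not the method developed in this paper.

```python
# Generic DCA-style sketch (not the paper's method): minimize
# phi(x) - psi(x) with phi(x) = x**2 and psi(x) = |x|.  At each step a
# subgradient y_k of psi at x_k is fixed and the convex upper model
# phi(x) - y_k * x is minimized exactly, which here gives x_{k+1} = y_k / 2.
def subgrad_psi(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 0.3
for _ in range(20):
    y = subgrad_psi(x)        # y_k in the subdifferential of psi at x_k
    x = y / 2.0               # argmin_x  x**2 - y*x
print("DCA limit point:", x)  # 0.5, a critical point: 0 is in 2x - d|x|(x)
```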
Example 1.
Suppose that d = 2, m = 1, n = 2, φ(x) = x_1, ψ(x) = |x_2|, f_1(x) = x_2 − x_1 − 1, f_2(x) = −x_1, g_1(x) = x_2 and g_2(x) = ‖x‖. Then,
(DC)  inf ( x_1 − |x_2| )  s.t.  ( x_2 − x_1 − 1 ) − x_2 ≤ 0,  −x_1 − ‖x‖ = 0.
The feasible set is Ω = { (x_1, x_2) ∈ R² : −1 ≤ x_1 ≤ 0, x_2 = 0 }. It is easy to see that the optimal solution of (DC) is x̄ = (−1, 0) and the optimal value is v(DC) = −1.
It is not difficult to verify that Ω satisfies RCRCQ at x̄. Moreover, by direct calculation, we have
∂φ(x̄) = { (1, 0) }, ∂ψ(x̄) = {0} × [−1, 1], ∂f_1(x̄) = { (−1, 1) }, ∂f_2(x̄) = { (−1, 0) }, ∂g_1(x̄) = { (0, 1) }, ∂g_2(x̄) = { (−1, 0) }.
Taking λ_1 = 1 and λ_2 = 1/4, we obtain that
∂φ(x̄) − ∂ψ(x̄) + ( λ_1 ∂f_1(x̄) + λ_2 ∂f_2(x̄) ) − ( λ_1 ∂g_1(x̄) + λ_2 ∂g_2(x̄) ) = {0} × [−1, 1]
and (0, 0) ∈ {0} × [−1, 1], so the conclusion of Theorem 1 holds at x̄.
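As a sanity check, the following sketch replays Example 1 numerically with the data written above; since the example's signs are reconstructed here, treat the script as illustrative rather than authoritative.

```python
import numpy as np

# Numerical check of Example 1 as written above (reconstructed data).
phi = lambda x: x[0]
psi = lambda x: abs(x[1])
f1, g1 = lambda x: x[1] - x[0] - 1, lambda x: x[1]
f2, g2 = lambda x: -x[0],           lambda x: np.hypot(x[0], x[1])

def feasible(x, tol=1e-9):
    return f1(x) - g1(x) <= tol and abs(f2(x) - g2(x)) <= tol

grid = np.linspace(-2, 2, 401)
vals = [(phi((a, b)) - psi((a, b)), (a, b))
        for a in grid for b in grid if feasible((a, b))]
print(min(vals))   # -> (-1.0, (-1.0, 0.0)), matching x_bar and v(DC)

# Multiplier check of Theorem 1 at x_bar = (-1, 0) with lam1 = 1, lam2 = 1/4:
lam1, lam2 = 1.0, 0.25
point = (np.array([1, 0]) + lam1 * np.array([-1, 1]) + lam2 * np.array([-1, 0])
         - lam1 * np.array([0, 1]) - lam2 * np.array([-1, 0]))
print(point)       # [0. 0.]; subtracting d(psi)(x_bar) = {0} x [-1,1] keeps 0 inside
```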
Next, we establish another necessary optimality condition by translating (DC) into a problem whose objective function is the same as that of (DC) but whose constraint functions are convex. Let x̄ ∈ R^d. We consider the following DC problem:
(DC(β,γ))  inf φ(x) − ψ(x)  s.t.  f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I,  f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I_0,  g_i(x) − ⟨γ_i, x⟩ + f_i*(γ_i) ≤ 0, i ∈ I_0,
where β_i ∈ ∂g_i(x̄), i ∈ I ∪ I_0, γ_i ∈ ∂f_i(x̄), i ∈ I_0, β := (β_i)_{i ∈ I ∪ I_0} ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) and γ := (γ_i)_{i ∈ I_0} ∈ ∏_{i ∈ I_0} ∂f_i(x̄). Denote Ω(β,γ) := { x ∈ R^d : f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I ∪ I_0, g_i(x) − ⟨γ_i, x⟩ + f_i*(γ_i) ≤ 0, i ∈ I_0 }.
Remark 5.
In fact, if we directly convexify Ω to obtain the problem
(DC_β)  inf φ(x) − ψ(x)  s.t.  f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I,  f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) = 0, i ∈ I_0,
where β_i ∈ ∂g_i(x̄), i ∈ I ∪ I_0, and β := (β_i)_{i ∈ I ∪ I_0} ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄), then we observe that (DC_β) has, in general, no relation to (DC). For this reason, we first translate the DC equality constraints of (DC) into pairs of DC inequality constraints and then handle these DC inequality constraints via the convexification technique to formulate (DC(β,γ)) and obtain the next proposition.
Proposition 1.
If x̄ ∈ Ω is an optimal solution of (DC), then x̄ is an optimal solution of (DC(β,γ)) for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄).
Proof. 
For each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄) and x ∈ Ω(β,γ), by the Young inequality, we have
f_i(x) − g_i(x) ≤ f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I,
f_i(x) − g_i(x) ≤ f_i(x) − ⟨β_i, x⟩ + g_i*(β_i) ≤ 0, i ∈ I_0,
g_i(x) − f_i(x) ≤ g_i(x) − ⟨γ_i, x⟩ + f_i*(γ_i) ≤ 0, i ∈ I_0.
The last two inequalities imply that f_i(x) − g_i(x) = 0 for all i ∈ I_0. Together with the first inequality, we obtain that Ω(β,γ) ⊆ Ω. Moreover, for x̄ ∈ Ω, it follows from the Young equality that x̄ ∈ Ω(β,γ). Since x̄ is an optimal solution of (DC), φ(x) − ψ(x) ≥ φ(x̄) − ψ(x̄) for any x ∈ Ω. By the inclusion Ω(β,γ) ⊆ Ω, x̄ is also an optimal solution of (DC(β,γ)). □
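The inclusion Ω(β,γ) ⊆ Ω and the fact that x̄ survives the convexification can be seen numerically on a one-dimensional toy equality constraint; the data below (f(x) = x², g(x) = x, x̄ = 1) are hypothetical and only illustrate the mechanism.

```python
import numpy as np

# Toy sketch (not from the paper): convexifying one DC equality
# f(x) - g(x) = 0 with f(x) = x**2, g(x) = x at the reference point x_bar = 1.
# Here beta = g'(x_bar) = 1 with g*(beta) = 0, and gamma = f'(x_bar) = 2 with
# f*(gamma) = 1, so the two convex inequalities of Omega(beta, gamma) read
#   f(x) - beta*x + g*(beta)  = x**2 - x <= 0   (i.e. 0 <= x <= 1)
#   g(x) - gamma*x + f*(gamma) = 1 - x   <= 0   (i.e. x >= 1)
f, g = lambda x: x**2, lambda x: x
beta, gamma, g_conj, f_conj = 1.0, 2.0, 0.0, 1.0

xs = np.linspace(-2, 2, 4001)
in_conv  = (f(xs) - beta * xs + g_conj <= 1e-12) & (g(xs) - gamma * xs + f_conj <= 1e-12)
in_exact = np.isclose(f(xs) - g(xs), 0.0)
print("Omega(beta,gamma) on the grid:", xs[in_conv])              # ~ [1.], i.e. {x_bar}
print("contained in {f - g = 0}:", bool(np.all(in_exact[in_conv])))  # True
```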
Here, we recall the basic constraint qualification (BCQ for short) of a constraint set at a given point.
Definition 4.
[8] Let h_j : R^d → R be convex for j ∈ J := { 1, …, p }. For the set A := { x ∈ R^d : h_j(x) ≤ 0, j ∈ J }, we say that A satisfies the BCQ at x̄ ∈ A if
N(A, x̄) ⊆ cone ∪_{j ∈ J(x̄)} ∂h_j(x̄),
where J(x̄) := { j ∈ J : h_j(x̄) = 0 }.
Finally, we derive the necessary optimality condition of ( DC ) via ( DC ( β , γ ) ) .
Theorem 2.
Let x̄ ∈ Ω. Suppose that x̄ is an optimal solution of (DC) and that Ω(β,γ) satisfies the BCQ at x̄ for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄). Then, for each x* ∈ ∂ψ(x̄) and each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄), there exist λ_i ≥ 0, i ∈ I(x̄) ∪ I_0, and μ_i ≥ 0, i ∈ I_0, such that
x* + Σ_{i ∈ I(x̄) ∪ I_0} λ_i β_i + Σ_{i ∈ I_0} μ_i γ_i ∈ ∂φ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂f_i(x̄) + Σ_{i ∈ I_0} μ_i ∂g_i(x̄).
Proof. 
For each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄), it follows from Proposition 1 that x̄ is an optimal solution of (DC(β,γ)). Since φ and δ_{Ω(β,γ)} are convex, by virtue of [20] (Theorem 3.1), one has
0 ∈ ∂̂( φ − ψ + δ_{Ω(β,γ)} )(x̄) ⊆ ∩_{x* ∈ ∂ψ(x̄)} [ ∂( φ + δ_{Ω(β,γ)} )(x̄) − x* ],
which means that 0 ∈ ∂φ(x̄) + N( Ω(β,γ), x̄ ) − x* for any x* ∈ ∂ψ(x̄). Because Ω(β,γ) satisfies the BCQ at x̄, for each x* ∈ ∂ψ(x̄), (β_i)_{i ∈ I ∪ I_0} ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) and (γ_i)_{i ∈ I_0} ∈ ∏_{i ∈ I_0} ∂f_i(x̄), we have
0 ∈ ∂φ(x̄) + cone [ ∪_{i ∈ I(x̄) ∪ I_0′(x̄)} ( ∂f_i(x̄) − β_i ) ∪ ∪_{i ∈ I_0″(x̄)} ( ∂g_i(x̄) − γ_i ) ] − x*,
where I_0′(x̄) := { i ∈ I_0 : f_i(x̄) − g_i(x̄) = 0 } = I_0 and I_0″(x̄) := { i ∈ I_0 : g_i(x̄) − f_i(x̄) = 0 } = I_0. That is, for each x* ∈ ∂ψ(x̄), (β_i)_{i ∈ I ∪ I_0} ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) and (γ_i)_{i ∈ I_0} ∈ ∏_{i ∈ I_0} ∂f_i(x̄), there exist λ_i ≥ 0, i ∈ I(x̄) ∪ I_0, and μ_i ≥ 0, i ∈ I_0, such that
0 ∈ ∂φ(x̄) − x* + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ( ∂f_i(x̄) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(x̄) − γ_i ),
or, equivalently,
x* + Σ_{i ∈ I(x̄) ∪ I_0} λ_i β_i + Σ_{i ∈ I_0} μ_i γ_i ∈ ∂φ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ∂f_i(x̄) + Σ_{i ∈ I_0} μ_i ∂g_i(x̄). □
The following Example 2 shows that the necessary optimality condition of Theorem 2 may fail if the BCQ of Ω(β,γ) at x̄ is not satisfied. In addition, Example 3 is given to verify the result of Theorem 2.
Example 2.
Let us reconsider the problem of Example 1. From the calculations in Example 1, we have β_1 = (0, 1), β_2 = (−1, 0) and γ_2 = (−1, 0). We also calculate g_1*(β_1) = g_2*(β_2) = f_2*(γ_2) = 0. Then, the feasible set Ω(β,γ) = [−1, 0] × {0}. Further, we obtain N( Ω(β,γ), x̄ ) = (−∞, 0] × R and
cone [ ( ∂f_1(x̄) − β_1 ) ∪ ( ∂f_2(x̄) − β_2 ) ∪ ( ∂g_2(x̄) − γ_2 ) ] = (−∞, 0] × {0}.
Obviously, N( Ω(β,γ), x̄ ) ⊄ cone [ ( ∂f_1(x̄) − β_1 ) ∪ ( ∂f_2(x̄) − β_2 ) ∪ ( ∂g_2(x̄) − γ_2 ) ], so the BCQ fails at x̄. Indeed, taking x* = (0, 1) ∈ ∂ψ(x̄) = {0} × [−1, 1], we observe that for any λ_1, λ_2, μ_2 ≥ 0,
(0, 1) ∉ ∂φ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ( ∂f_i(x̄) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(x̄) − γ_i ) = { ( 1 − λ_1, 0 ) },
so the conclusion of Theorem 2 does not hold at x̄.
Example 3.
Suppose that d = 1, m = 1, n = 2, φ(x) = 2x, ψ(x) = |x|, f_1(x) = 3 − x, f_2(x) = x, g_1(x) = 2|x| and g_2(x) = |x|. Then, (DC) is given as:
(DC)  inf ( 2x − |x| )  s.t.  3 − x − 2|x| ≤ 0,  x − |x| = 0.
The feasible set is Ω = [1, +∞). It is easy to see that the optimal solution of (DC) is x̄ = 1 and the optimal value is v(DC) = 1. By a simple analysis, we have
∂φ(x̄) = {2}, ∂ψ(x̄) = {1}, ∂f_1(x̄) = {−1}, γ_2 = f_2′(x̄) = 1, β_1 = g_1′(x̄) = 2, β_2 = g_2′(x̄) = 1, g_1*(β_1) = 0, g_2*(β_2) = 0, f_2*(γ_2) = 0.
Then, the feasible set Ω(β,γ) = [1, +∞). Obviously,
N( Ω(β,γ), x̄ ) = (−∞, 0] = cone [ ( ∂f_1(x̄) − β_1 ) ∪ ( ∂f_2(x̄) − β_2 ) ∪ ( ∂g_2(x̄) − γ_2 ) ],
namely, the BCQ is satisfied. Indeed, for x* = 1 ∈ ∂ψ(x̄), taking λ_1 = 1/3, λ_2 = 1 and μ_2 = 1, we have
1 ∈ ∂φ(x̄) + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ( ∂f_i(x̄) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(x̄) − γ_i ) = { 2 + (1/3)·(−1 − 2) + 1·(1 − 1) + 1·(1 − 1) } = {1}.
The conclusion of Theorem 2 is thus verified. Simultaneously, we observe that the conclusion of Theorem 1 is also true. That is,
0 ∈ ∂φ(x̄) − ∂ψ(x̄) + ( λ_1 ∂f_1(x̄) + λ_2 ∂f_2(x̄) ) − ( λ_1 ∂g_1(x̄) + λ_2 ∂g_2(x̄) ) = { 2 − 1 + ( (1/3)·(−1) + 1·1 ) − ( (1/3)·2 + 1·1 ) } = {0}.
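The arithmetic of Example 3 can be replayed mechanically; the short script below is an illustrative check (not part of the paper) of the feasible set, the optimal value and the two multiplier identities.

```python
import numpy as np

# Numerical check of Example 3: phi(x)=2x, psi(x)=|x|, f1(x)=3-x, g1(x)=2|x|,
# f2(x)=x, g2(x)=|x|.  The feasible set should be [1, +inf) and v(DC) = 1
# should be attained at x_bar = 1.
phi, psi = lambda x: 2 * x, lambda x: abs(x)
f1, g1 = lambda x: 3 - x, lambda x: 2 * abs(x)
f2, g2 = lambda x: x,     lambda x: abs(x)

xs = np.linspace(-5, 5, 100001)
feas = (f1(xs) - g1(xs) <= 1e-9) & (np.abs(f2(xs) - g2(xs)) <= 1e-9)
print(xs[feas].min())                    # ~ 1.0: left endpoint of Omega
print((phi(xs) - psi(xs))[feas].min())   # ~ 1.0 = v(DC), attained at x_bar = 1

# Multiplier checks at x_bar = 1 (Theorem 2 with lam1=1/3, lam2=1, mu2=1,
# and Theorem 1 with lam1=1/3, lam2=1):
print(2 + (1/3) * (-1 - 2) + 1 * (1 - 1) + 1 * (1 - 1))       # ~ 1.0 = x*
print(2 - 1 + ((1/3) * (-1) + 1 * 1) - ((1/3) * 2 + 1 * 1))   # ~ 0.0
```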
Remark 6.
Examples 1, 2 and 3 also illustrate that the results of Theorems 1 and 2 are, in general, independent of each other. The necessary optimality condition established in Theorem 2 will be important for the discussion in the next section.

4. Wolfe and Mond-Weir Type Dualities for (DC)

By the Fenchel-Moreau theorem, (DC) can be equivalently translated into
inf_{α ∈ dom ψ*} { ψ*(α) + inf_{x ∈ Ω} ( φ(x) − ⟨α, x⟩ ) }.    (10)
Moreover, by the convexification technique, (DC) can be associated with
inf_{α ∈ dom ψ*} { ψ*(α) + inf_{x ∈ Ω(β,γ)} ( φ(x) − ⟨α, x⟩ ) },    (11)
where (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄) for some x̄ ∈ R^d and Ω(β,γ) is the same as in Section 3. Motivated by this idea, for α ∈ dom ψ* and (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄), we consider the two problems below:
(P_α)  inf_{x ∈ Ω} φ(x) − ⟨α, x⟩,
(P(α,β,γ))  inf_{x ∈ Ω(β,γ)} φ(x) − ⟨α, x⟩.
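The reformulation (10) can be checked numerically on a one-dimensional toy instance; the data below (φ(x) = x², ψ(x) = |x|, Ω = [1, 3]) are assumptions made for illustration, and the inner problem for each α is exactly (P_α).

```python
import numpy as np

# Numerical sketch of the reformulation (10) for a toy instance (not from the
# paper): phi(x) = x**2, psi(x) = |x|, Omega = [1, 3].  Since psi*(alpha) = 0
# for |alpha| <= 1 and +inf otherwise, the outer infimum runs over [-1, 1].
xs     = np.linspace(1.0, 3.0, 2001)    # grid for Omega
alphas = np.linspace(-1.0, 1.0, 201)    # grid for dom psi*

direct = np.min(xs**2 - np.abs(xs))                              # inf over Omega of phi - psi
via_10 = np.min([0.0 + np.min(xs**2 - a * xs) for a in alphas])  # right-hand side of (10)
print(direct, via_10)                   # both ~ 0.0, attained at x = 1
```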
Firstly, we give the following corollary, which is helpful for the discussion of Wolfe and Mond-Weir type dualities of (DC) in the sequel.
Corollary 1.
For some α ∈ dom ψ* and x̄ ∈ Ω, suppose that x̄ is an optimal solution of (P_α) and that Ω(β,γ) satisfies the BCQ at x̄ for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄) × ∏_{i ∈ I_0} ∂f_i(x̄). Then, there exist λ_i ≥ 0, i ∈ I(x̄) ∪ I_0, and μ_i ≥ 0, i ∈ I_0, such that
0 ∈ ∂φ(x̄) − α + Σ_{i ∈ I(x̄) ∪ I_0} λ_i ( ∂f_i(x̄) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(x̄) − γ_i ).    (12)
Proof. 
Applying Proposition 1 and Theorem 2 with ψ(·) = ⟨α, ·⟩, (12) is obtained immediately. □
Now, we construct the Wolfe and Mond-Weir type dual problems of (DC) via (P(α,β,γ)). For λ ∈ R^n and μ ∈ R^{n−m}, set
L_{(α,β,γ)}(y, λ, μ) := φ(y) − ⟨α, y⟩ + Σ_{i ∈ I ∪ I_0} λ_i ( f_i(y) − ⟨β_i, y⟩ + g_i*(β_i) ) + Σ_{i ∈ I_0} μ_i ( g_i(y) − ⟨γ_i, y⟩ + f_i*(γ_i) ),
Ω^W_{(α,β,γ)} := { (y, λ, μ) ∈ R^d × R^n × R^{n−m} : 0 ∈ ∂φ(y) − α + Σ_{i ∈ I(y) ∪ I_0} λ_i ( ∂f_i(y) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(y) − γ_i ), λ_i ≥ 0, i ∈ I(y) ∪ I_0, μ_i ≥ 0, i ∈ I_0 },
Ω^MW_{(α,β,γ)} := { (y, λ, μ) ∈ R^d × R^n × R^{n−m} : (y, λ, μ) ∈ Ω^W_{(α,β,γ)}, Σ_{i ∈ I ∪ I_0} λ_i ( f_i(y) − ⟨β_i, y⟩ + g_i*(β_i) ) + Σ_{i ∈ I_0} μ_i ( g_i(y) − ⟨γ_i, y⟩ + f_i*(γ_i) ) ≥ 0 }.
The Wolfe and Mond-Weir dual problems of (P(α,β,γ)) are constructed as, respectively,
(WD(α,β,γ))  sup_{(y,λ,μ) ∈ Ω^W_{(α,β,γ)}} L_{(α,β,γ)}(y, λ, μ),
(MWD(α,β,γ))  sup_{(y,λ,μ) ∈ Ω^MW_{(α,β,γ)}} φ(y) − ⟨α, y⟩.
Based on these, the Wolfe type and Mond-Weir type dual problems of (DC) are defined as, respectively,
(WD(β,γ))  inf_{α ∈ dom ψ*} sup_{(y,λ,μ) ∈ Ω^W_{(α,β,γ)}} { ψ*(α) + L_{(α,β,γ)}(y, λ, μ) },
(MWD(β,γ))  inf_{α ∈ dom ψ*} sup_{(y,λ,μ) ∈ Ω^MW_{(α,β,γ)}} { ψ*(α) + φ(y) − ⟨α, y⟩ }.
Denote by v(WD(β,γ)) (resp. v(MWD(β,γ))) the optimal value of (WD(β,γ)) (resp. (MWD(β,γ))). The zero duality gap property holds between (DC) and (WD(β,γ)) (resp. (MWD(β,γ))) if and only if v(DC) = v(WD(β,γ)) (resp. v(DC) = v(MWD(β,γ))).
Remark 7.
If we define
L̃(y, λ) := φ(y) − ψ(y) + Σ_{i ∈ I ∪ I_0} λ_i ( f_i(y) − g_i(y) ),
Ω^W := { (y, λ) ∈ R^d × R^n : 0 ∈ ∂φ(y) − ∂ψ(y) + Σ_{i ∈ I(y) ∪ I_0} λ_i ( ∂f_i(y) − ∂g_i(y) ), λ_i ≥ 0, i ∈ I(y) ∪ I_0 },
Ω^MW := { (y, λ) ∈ R^d × R^n : (y, λ) ∈ Ω^W, Σ_{i ∈ I ∪ I_0} λ_i ( f_i(y) − g_i(y) ) ≥ 0 }
and present the Wolfe and Mond-Weir dual problems of (DC) in the traditional way:
(W̃D)  sup_{(y,λ) ∈ Ω^W} L̃(y, λ),
(M̃WD)  sup_{(y,λ) ∈ Ω^MW} φ(y) − ψ(y),
then it is not easy to characterize the zero duality gap properties, since some subdifferential calculus rules do not work for DC functions. Here, we observe that (WD(β,γ)) and (MWD(β,γ)) are slightly different from the traditional Wolfe and Mond-Weir dual problems (W̃D) and (M̃WD); this is why we call them the Wolfe type and Mond-Weir type dual problems, respectively.
We close this section by characterizing the zero duality gap properties between (DC) and ( WD ( β , γ ) ) , ( MWD ( β , γ ) ) .
Theorem 3.
For any α ∈ dom ψ*, if (P_α) has an optimal solution x̄_α ∈ Ω, then for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄_α) × ∏_{i ∈ I_0} ∂f_i(x̄_α), v(MWD(β,γ)) ≤ v(WD(β,γ)) ≤ v(DC). Furthermore, if Ω(β,γ) satisfies the BCQ at x̄_α for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄_α) × ∏_{i ∈ I_0} ∂f_i(x̄_α), then v(MWD(β,γ)) = v(WD(β,γ)) = v(DC).
Proof. 
If there exists α ∈ dom ψ* such that Ω^W_{(α,β,γ)} is empty, then Ω^MW_{(α,β,γ)} is also empty. In this case, v(MWD(β,γ)) = v(WD(β,γ)) = −∞ ≤ v(DC). So, we suppose that Ω^W_{(α,β,γ)} is nonempty for every α ∈ dom ψ*. Then, for each α ∈ dom ψ* and (y, λ, μ) ∈ Ω^W_{(α,β,γ)}, we have λ_i ≥ 0, i ∈ I(y) ∪ I_0, μ_i ≥ 0, i ∈ I_0, and
0 ∈ ∂φ(y) − α + Σ_{i ∈ I(y) ∪ I_0} λ_i ( ∂f_i(y) − β_i ) + Σ_{i ∈ I_0} μ_i ( ∂g_i(y) − γ_i ).    (13)
Since Σ_{i ∈ I ∪ I_0} λ_i ( f_i(y) − ⟨β_i, y⟩ + g_i*(β_i) ) + Σ_{i ∈ I_0} μ_i ( g_i(y) − ⟨γ_i, y⟩ + f_i*(γ_i) ) ≥ 0 for every (y, λ, μ) ∈ Ω^MW_{(α,β,γ)}, we have φ(y) − ⟨α, y⟩ ≤ L_{(α,β,γ)}(y, λ, μ) on Ω^MW_{(α,β,γ)} and
sup_{(y,λ,μ) ∈ Ω^MW_{(α,β,γ)}} φ(y) − ⟨α, y⟩ ≤ sup_{(y,λ,μ) ∈ Ω^MW_{(α,β,γ)}} L_{(α,β,γ)}(y, λ, μ) ≤ sup_{(y,λ,μ) ∈ Ω^W_{(α,β,γ)}} L_{(α,β,γ)}(y, λ, μ).
That is, v(MWD(α,β,γ)) ≤ v(WD(α,β,γ)). Due to the arbitrariness of α ∈ dom ψ*, we get v(MWD(β,γ)) ≤ v(WD(β,γ)). Next, let λ_i = 0 for i ∈ I ∖ I(y); then (13) implies that
L_{(α,β,γ)}(y, λ, μ) ≤ L_{(α,β,γ)}(x, λ, μ), ∀ x ∈ R^d.    (14)
In particular, for x̄_α ∈ Ω, (14) also holds and means that
ψ*(α) + L_{(α,β,γ)}(y, λ, μ) ≤ ψ*(α) + L_{(α,β,γ)}(x̄_α, λ, μ) ≤ ψ*(α) + φ(x̄_α) − ⟨α, x̄_α⟩.
Considering the arbitrariness of α and (y, λ, μ), we obtain that
v(WD(β,γ)) = inf_{α ∈ dom ψ*} sup_{(y,λ,μ) ∈ Ω^W_{(α,β,γ)}} { ψ*(α) + L_{(α,β,γ)}(y, λ, μ) } ≤ inf_{α ∈ dom ψ*} { ψ*(α) + φ(x̄_α) − ⟨α, x̄_α⟩ } = inf_{α ∈ dom ψ*} { ψ*(α) + inf_{x ∈ Ω} ( φ(x) − ⟨α, x⟩ ) } = v(DC).
Furthermore, if Ω(β,γ) satisfies the BCQ at x̄_α for each (β, γ) ∈ ∏_{i ∈ I ∪ I_0} ∂g_i(x̄_α) × ∏_{i ∈ I_0} ∂f_i(x̄_α), then by Corollary 1, there exist λ̄_i ≥ 0, i ∈ I(x̄_α) ∪ I_0, and μ̄_i ≥ 0, i ∈ I_0, such that
0 ∈ ∂φ(x̄_α) − α + Σ_{i ∈ I(x̄_α) ∪ I_0} λ̄_i ( ∂f_i(x̄_α) − β_i ) + Σ_{i ∈ I_0} μ̄_i ( ∂g_i(x̄_α) − γ_i ).
Then, we see that (x̄_α, λ̄, μ̄) ∈ Ω^W_{(α,β,γ)}. Letting λ̄_i = 0 for i ∈ I ∖ I(x̄_α), we get (x̄_α, λ̄, μ̄) ∈ Ω^MW_{(α,β,γ)}. Hence, one has
ψ*(α) + inf_{x ∈ Ω} ( φ(x) − ⟨α, x⟩ ) = ψ*(α) + φ(x̄_α) − ⟨α, x̄_α⟩ ≤ ψ*(α) + sup_{(y,λ,μ) ∈ Ω^MW_{(α,β,γ)}} ( φ(y) − ⟨α, y⟩ ).
Considering the arbitrariness of α ∈ dom ψ*, we get v(DC) ≤ v(MWD(β,γ)). In conclusion, v(MWD(β,γ)) = v(WD(β,γ)) = v(DC). □
Remark 8.
There are no convexity or generalized convexity assumptions imposed on φ − ψ or on f_i − g_i, i ∈ I ∪ I_0, in Theorem 3, yet the duality results still hold. This is due to the particular structure of DC programming itself.

5. Conclusions

In this paper, under the extended RCRCQ given in Definition 3, we directly established one necessary optimality condition for (DC); see Theorem 1. Then, we derived another necessary optimality condition by virtue of convexification and the BCQ; see Theorem 2. Finally, we constructed the Wolfe type and Mond-Weir type dual problems and characterized the zero duality gap properties between them and (DC); see Theorem 3. Possible future research directions are as follows: (i) If the objective function of the DC program considered in this paper is replaced by a vector-valued function, namely, one whose components are all DC functions, how can the necessary optimality conditions and the Wolfe and Mond-Weir dualities be studied? (ii) As is known, when studying DC programming in Banach spaces, the lower semicontinuity of the DC functions plays a key role. In the absence of lower semicontinuity, or when the DC functions are replaced by differences of two quasiconvex functions, how can the optimality conditions and various types of dualities (Wolfe, Mond-Weir, Lagrange, Fenchel-Lagrange) be deduced?

Author Contributions

Conceptualization, Y.X.; formal analysis, Y.X.; investigation, Y.X.; methodology, Y.X.; writing—original draft, Y.X.; writing—review and editing, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China, grant number: 11971078 and the Fundamental Research Funds for the Central Universities, grant number: 106112017CDJZRPY0020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Alvarado, A.; Scutari, G.; Pang, J.S. A new decomposition method for multiuser DC-programming and its applications. IEEE Trans. Signal Process. 2014, 62, 2984–2998.
2. Sanjabi, M.; Razaviyayn, M.; Luo, Z.-Q. Optimal joint base station assignment and beamforming for heterogeneous networks. IEEE Trans. Signal Process. 2014, 62, 1950–1961.
3. Yin, P.; Lou, Y.; He, Q.; Xin, J. Minimization of l1-2 for compressed sensing. SIAM J. Sci. Comput. 2015, 37, A536–A563.
4. Dinh, N.; Nghia, T.T.A.; Vallet, G. A closedness condition and its applications to DC programs with convex constraints. Optimization 2010, 59, 541–560.
5. Dinh, N.; Mordukhovich, B.; Nghia, T.T.A. Subdifferentials of value functions and optimality conditions for DC and bilevel infinite and semi-infinite programs. Math. Program. 2010, 123, 101–138.
6. Dinh, N.; Strodiot, J.J.; Nguyen, V.H. Duality and optimality conditions for generalized equilibrium problems involving DC functions. J. Glob. Optim. 2010, 48, 183–208.
7. Sun, X.K. Regularity conditions characterizing Fenchel-Lagrange duality and Farkas-type results in DC infinite programming. J. Math. Anal. Appl. 2014, 414, 590–611.
8. Fang, D.H.; Zhao, X.P. Local and global optimality conditions for DC infinite optimization. Taiwan J. Math. 2014, 18, 817–834.
9. Dolgopolik, M.V. New global optimality conditions for nonsmooth DC optimization problems. J. Glob. Optim. 2020, 76, 25–55.
10. Fang, D.H.; Zhang, Y. Optimality conditions and total dualities for conic programming involving composite function. Optimization 2020, 69, 305–327.
11. Fang, D.H.; Gong, X. Extended Farkas lemma and strong duality for composite optimization problems with DC functions. Optimization 2017, 66, 179–196.
12. Fang, D.H.; Zhang, Y. Extended Farkas's lemmas and strong dualities for conic programming involving composite functions. J. Optim. Theory Appl. 2018, 176, 351–376.
13. Wen, B.; Chen, X.J.; Pong, T.K. A proximal difference-of-convex algorithm with extrapolation. Comput. Optim. Appl. 2017, 69, 297–324.
14. Le Thi, H.A.; Pham Dinh, T. DC programming and DCA: Thirty years of developments. Math. Program. 2018, 169, 5–68.
15. Minchenko, L.; Stakhovski, S. On relaxed constant rank regularity condition in mathematical programming. Optimization 2011, 60, 429–440.
16. Minchenko, L.; Stakhovski, S. Parametric nonlinear programming problems under the relaxed constant rank condition. SIAM J. Optim. 2011, 21, 314–332.
17. Bednarczuk, E.M.; Rutkowski, K.E. On Lipschitz-like property for polyhedral moving sets. SIAM J. Optim. 2019, 29, 2504–2516.
18. Bednarczuk, E.M.; Minchenko, L.I.; Rutkowski, K.E. On Lipschitz-like continuity of a class of set-valued mappings. Optimization 2020, 69, 2535–2549.
19. Bot, R.I.; Grad, S.M.; Wanka, G. Wolfe and Mond-Weir duality concepts. In Duality in Vector Optimization; Springer: Berlin, Germany, 2009; pp. 249–278.
20. Mordukhovich, B.S.; Nam, N.M.; Yen, N.D. Fréchet subdifferential calculus and optimality conditions in nondifferentiable programming. Optimization 2006, 55, 685–708.
21. Schirotzek, W. Continuity of Convex Functionals. In Nonsmooth Analysis; Springer: Berlin, Germany, 2007; pp. 11–13.
22. Xu, M.; Ye, J.J. Relaxed constant positive linear dependence constraint qualification and its application to bilevel programs. J. Glob. Optim. 2020, 78, 181–205.
23. Clarke, F.H. Generalized gradients. In Optimization and Nonsmooth Analysis; SIAM: Philadelphia, PA, USA, 1990; pp. 25–30.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
