Article

Approximate Subdifferential of the Difference of Two Vector Convex Mappings

by Abdelghali Ammar 1,†, Mohamed Laghdir 2,†, Ahmed Ed-dahdah 2,† and Mohamed Hanine 3,*,†

1 Department of Computer Engineering, Networks and Telecommunications, National School of Applied Sciences, Cadi Ayyad University, BP. 63, Safi 46000, Morocco
2 Department of Mathematics, Faculty of Sciences, Chouaib Doukkali University, BP. 20, El Jadida 24000, Morocco
3 Department of Telecommunications, Networks, and Informatics, National School of Applied Sciences, Chouaib Doukkali University, El Jadida 24000, Morocco
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.

Mathematics 2023, 11(12), 2718; https://doi.org/10.3390/math11122718
Submission received: 20 April 2023 / Revised: 6 June 2023 / Accepted: 13 June 2023 / Published: 15 June 2023
(This article belongs to the Special Issue New Advance in Operations Research and Analytics)

Abstract: This paper deals with the strong approximate subdifferential formula for the difference of two vector convex mappings in terms of the star difference. This formula is obtained via a scalarization process by using the approximate subdifferential of the difference of two real convex functions established by Martinez-Legaz and Seeger, and the concept of regular subdifferentiability. This formula allows us to establish approximate optimality conditions characterizing the approximate strong efficient solution for a general DC problem and for a multiobjective fractional programming problem.

1. Introduction

It is well known that the theory of DC mathematical programming, dealing with functions expressed as a difference of two convex functions, is now very well developed, owing both to its theoretical aspects and to its extensive range of practical applications in optimal control, mechanics, operations research, and other areas (see [1,2,3,4,5,6,7,8] and references therein). This theory constitutes an important approach to nonconvex optimization problems. In machine learning, many important learning problems, such as training Boltzmann machines, can be formulated as DC programs (see [9]).
The overview paper [4] presents essential results on theory, applications, and solution methods for DC programming in the sense of global optimization. Significant advances have been made in the study of duality theory associated with constrained DC optimization problems (see [10,11,12,13,14,15]).
The motivation for this paper stems from the significant contributions of Martinez-Legaz and Seeger [16], who established a formula for the approximate subdifferential of the difference of two convex functions over a locally convex topological vector space. This formula is expressed in terms of the star difference of two subsets, and the authors provided an application for DC programming.
The aim of this work is to show how the formula established by Martinez-Legaz and Seeger can be used to obtain the approximate subdifferential of the difference of two vector convex mappings by using the vector strong subdifferential, the concept of subdifferential regularity [17], and a scalarization process. Two illustrations are given: the first deals with a constrained DC vector programming problem, and the second deals with a constrained multiobjective fractional programming problem. The rest of the work is organized as follows. In Section 2, we present some basic definitions and preliminary material. In Section 3, we recall the formula established by Martinez-Legaz and Seeger [16] and show how this formula can be used to obtain the approximate subdifferential of the difference of two vector convex mappings. In Section 4 and Section 5, we derive, from the obtained formula, optimality conditions for two vector cone-constrained programming problems. Finally, the paper ends with a conclusion and future work.

2. Preliminaries

In this paper, let $E$, $F$, and $G$ be three real Hausdorff locally convex topological vector spaces. The space $F$ (respectively, $G$) is endowed with a nonempty convex cone $F_+ \subseteq F$ (respectively, $G_+ \subseteq G$) introducing a partial preorder on $F$ (respectively, on $G$) defined, for $y, y' \in F$, by
$$y \leq_{F_+} y' \iff y' - y \in F_+.$$
We adjoin to $F$ (respectively, to $G$) two abstract elements $+\infty_F$ and $-\infty_F$ such that
$$-\infty_F \leq_{F_+} y \leq_{F_+} +\infty_F, \quad \forall y \in F,$$
$$y + (+\infty_F) = (+\infty_F) + y = +\infty_F, \quad \forall y \in F \cup \{+\infty_F\}, \qquad \beta \cdot (+\infty_F) = +\infty_F, \quad \forall \beta \geq 0.$$
The topological dual spaces of $E$ and $G$ are denoted by $E^*$ and $G^*$, respectively, and the duality pairing in $G$ is denoted by $\langle g^*, z \rangle$, with $g^* \in G^*$ and $z \in G$. The positive dual cone of $G_+$ is defined by
$$G_+^* := \{ g^* \in G^* : \langle g^*, z \rangle \geq 0, \ \forall z \in G_+ \}.$$
Let $S \subseteq F$. A point $m \in F$ is said to be a lower bound of $S$ if $m \leq_{F_+} y$ for all $y \in S$. We denote by $\inf S$, if it exists, the greatest lower bound of $S$.
Let $B$ and $C$ be two nonempty subsets of $F$, and let $\alpha \geq 0$. The following operations will be used:
$$B + C := \{x + y : x \in B, \ y \in C\}, \quad \alpha B := \{\alpha x : x \in B\}, \quad \emptyset + B = B + \emptyset := \emptyset.$$
Let $H : E \to F \cup \{+\infty_F\}$ be a given mapping. The effective domain of $H$ is
$$\operatorname{dom} H := \{x \in E : H(x) \in F\}.$$
We say that $H$ is proper when $\operatorname{dom} H \neq \emptyset$. The epigraph of the mapping $H$, denoted by $\operatorname{Epi} H$, is defined as follows:
$$\operatorname{Epi} H := \{(x, y) \in E \times F : H(x) \leq_{F_+} y\}.$$
$H$ is called $F_+$-convex if
$$H(\alpha x + (1-\alpha)\tilde{x}) \leq_{F_+} \alpha H(x) + (1-\alpha) H(\tilde{x}), \quad \forall \alpha \in [0,1], \ \forall x, \tilde{x} \in E.$$
A mapping $K : F \to G \cup \{+\infty_G\}$ is said to be $(F_+, G_+)$-increasing if, for all $y, y' \in F$,
$$y \leq_{F_+} y' \implies K(y) \leq_{G_+} K(y').$$
The composed mapping $K \circ H : E \to G \cup \{+\infty_G\}$ is defined as follows:
$$(K \circ H)(x) := \begin{cases} K(H(x)), & \text{if } x \in \operatorname{dom} H, \\ +\infty_G, & \text{else}. \end{cases}$$
Let us note that if $K$ is $(F_+, G_+)$-increasing and $G_+$-convex and if $H$ is $F_+$-convex, then $K \circ H$ is $G_+$-convex.
Following [18], whenever $\tilde{x} \in \operatorname{dom} H$ and $\epsilon \in F_+$, the strong $\epsilon$-subdifferential of $H$ at $\tilde{x}$ is defined by
$$\partial^s_\epsilon H(\tilde{x}) := \{T \in L(E,F) : T(x - \tilde{x}) - \epsilon \leq_{F_+} H(x) - H(\tilde{x}), \ \forall x \in E\},$$
where $L(E,F)$ denotes the vector space of continuous linear mappings from $E$ to $F$. For $\epsilon = 0$, we have the usual strong vector subdifferential
$$\partial^s H(\tilde{x}) := \{T \in L(E,F) : T(x - \tilde{x}) \leq_{F_+} H(x) - H(\tilde{x}), \ \forall x \in E\}.$$
If $\tilde{x} \notin \operatorname{dom} H$, we set $\partial^s_\epsilon H(\tilde{x}) = \partial^s H(\tilde{x}) := \emptyset$. Let us note that when $F = \mathbb{R}$, $\partial^s_\epsilon H(\tilde{x})$ reduces to the usual $\epsilon$-subdifferential of convex analysis, denoted by
$$\partial_\epsilon H(\tilde{x}) := \{e^* \in E^* : \langle e^*, x - \tilde{x} \rangle - \epsilon \leq H(x) - H(\tilde{x}), \ \forall x \in E\}.$$
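To fix ideas, here is a simple scalar computation (the data are chosen purely for illustration): for $E = \mathbb{R}$, $H(x) = x^2/2$, and $\tilde{x} = 0$, one has, for every $\epsilon \geq 0$,
$$\partial_\epsilon H(0) = \{e \in \mathbb{R} : ex - \epsilon \leq x^2/2, \ \forall x \in \mathbb{R}\} = \big[-\sqrt{2\epsilon},\, \sqrt{2\epsilon}\big],$$
since $x^2/2 - ex + \epsilon \geq 0$ for all $x$ exactly when the discriminant $e^2 - 2\epsilon$ is nonpositive. For $\epsilon = 0$, this collapses to $\{H'(0)\} = \{0\}$, while for $\epsilon > 0$ it is a genuine interval.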

3. Approximate Subdifferential of the Difference of Two Vector Convex Mappings

In this section, we attempt to extend the formula of [16] to the difference of two vector-valued mappings. Let us first recall this scalar formula [16], expressed by means of the star difference of two subsets of $E^*$.
Definition 1
([19]). The star difference between two subsets $B$ and $C$ of $E^*$ is given by
$$B \mathbin{\overset{\star}{-}} C := \{e^* \in E^* : e^* + C \subseteq B\}.$$
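For intuition, a one-dimensional illustration (the sets are ours, chosen for convenience): in $E^* = \mathbb{R}$, take $B = [0,3]$ and $C = [0,1]$. Then
$$B \mathbin{\overset{\star}{-}} C = \{e \in \mathbb{R} : e + [0,1] \subseteq [0,3]\} = [0,2],$$
which is strictly smaller than the algebraic difference $B - C = [-1,3]$. In general, $B \mathbin{\overset{\star}{-}} C$ is the largest set $D$ satisfying $D + C \subseteq B$.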
Theorem 1
([16]). Let $H, K : E \to \mathbb{R} \cup \{+\infty\}$ be two proper functions, $x \in \operatorname{dom} H \cap \operatorname{dom} K$, and $\epsilon \geq 0$. If $H$ and $K$ are lower semicontinuous and convex, then
$$\partial_\epsilon (H - K)(x) = \bigcap_{\eta \geq 0} \left[ \partial_{\eta + \epsilon} H(x) \mathbin{\overset{\star}{-}} \partial_\eta K(x) \right].$$
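Theorem 1 can be verified by hand on a simple pair (an illustrative computation of ours): take $E = \mathbb{R}$, $H(x) = x^2$, $K(x) = x$, $x = 0$, and $\epsilon \geq 0$. A discriminant argument gives $\partial_{\eta+\epsilon} H(0) = [-2\sqrt{\eta+\epsilon},\, 2\sqrt{\eta+\epsilon}]$, while $\partial_\eta K(0) = \{1\}$ for every $\eta \geq 0$, so
$$\partial_{\eta+\epsilon} H(0) \mathbin{\overset{\star}{-}} \partial_\eta K(0) = [-1 - 2\sqrt{\eta+\epsilon},\, -1 + 2\sqrt{\eta+\epsilon}],$$
and the intersection over $\eta \geq 0$ is attained at $\eta = 0$. This coincides with the direct computation $\partial_\epsilon (H-K)(0) = \{e : ex - \epsilon \leq x^2 - x, \ \forall x\} = [-1 - 2\sqrt{\epsilon},\, -1 + 2\sqrt{\epsilon}]$.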
Let $H : E \to G \cup \{+\infty_G\}$ and $g^* \in G_+^* \setminus \{0\}$. The scalar function $g^* \circ H : E \to \mathbb{R} \cup \{+\infty\}$ is defined by
$$(g^* \circ H)(x) := \begin{cases} \langle g^*, H(x) \rangle, & \text{if } x \in \operatorname{dom} H, \\ +\infty, & \text{else}. \end{cases}$$
Let us note that, for any $g^* \in G_+^* \setminus \{0\}$, $\operatorname{dom}(g^* \circ H) = \operatorname{dom} H$ and, if $H$ is $G_+$-convex, then $g^* \circ H$ is convex. In order to state our main result, we will need the following lemma.
Lemma 1.
1. If $G_+$ is closed and if there exists $z \in G$ such that $\langle g^*, z \rangle \geq 0$ for all $g^* \in G_+^* \setminus \{0\}$, then $z \in G_+$;
2. Let $H : E \to G \cup \{+\infty_G\}$ be a given $G_+$-convex mapping, $\epsilon \in G_+$, and $\tilde{x} \in E$. If $G_+$ is closed, then
$$\partial^s_\epsilon H(\tilde{x}) = \bigcap_{g^* \in G_+^* \setminus \{0\}} \big\{ A \in L(E,G) : g^* \circ A \in \partial_{\langle g^*, \epsilon \rangle}(g^* \circ H)(\tilde{x}) \big\};$$
3. If the topological interior of $G_+$ is nonempty, i.e., $\operatorname{int} G_+ \neq \emptyset$, then for any $g^* \in G_+^* \setminus \{0\}$, we have
$$\mathbb{R}_+ = \{\langle g^*, \theta \rangle : \theta \in (\operatorname{int} G_+) \cup \{0_G\}\},$$
where $\mathbb{R}_+ = [0, +\infty)$.
Proof.
1. See ([20], Proposition 2.1).
2. See ([17], Theorem 3.2).
3. We have $\{\langle g^*, \theta \rangle : \theta \in (\operatorname{int} G_+) \cup \{0_G\}\} \subseteq \mathbb{R}_+$ for any $g^* \in G_+^* \setminus \{0\}$. For the reverse inclusion, let $g^* \in G_+^* \setminus \{0\}$ and $\alpha \in \mathbb{R}_+$. If $\alpha = 0$, we obviously have $0 = \langle g^*, 0_G \rangle$. Following ([20], Proposition 2.1), there exists some $z_0 \in \operatorname{int} G_+$ such that $\langle g^*, z_0 \rangle = 1$, and hence we may write $\alpha = \langle g^*, \alpha z_0 \rangle$ for any $\alpha > 0$. Let us note that $\alpha z_0 \in \operatorname{int} G_+$ since $\alpha > 0$, $z_0 \in \operatorname{int} G_+$, and $\operatorname{int} G_+$ is a cone. □
We say that a vector-valued mapping $K : E \to G \cup \{+\infty_G\}$ is star $G_+$-lower semicontinuous at $\tilde{x}$ if the function $g^* \circ K$ is lower semicontinuous at $\tilde{x}$ for any $g^* \in G_+^*$ (see [21]), and $K$ is called weak regular $\gamma$-subdifferentiable at $\tilde{x} \in \operatorname{dom} K$, where $\gamma \geq 0$ (see [17]), if
$$\partial_\gamma (g^* \circ K)(\tilde{x}) = \bigcup_{\substack{\eta \in G_+^\gamma \\ \langle g^*, \eta \rangle = \gamma}} g^* \circ \partial^s_\eta K(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\},$$
where $G_+^0 := \{0_G\}$ and $G_+^\gamma := G_+$ if $\gamma > 0$. If $\gamma = 0$, we simply say that $K$ is weak regular subdifferentiable at $\tilde{x}$.
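It is worth noting (a quick verification of ours, not taken from [17]) that in the scalar case $G = \mathbb{R}$, $G_+ = \mathbb{R}_+$, this regularity condition is automatically satisfied. Indeed, $G_+^* \setminus \{0\} = (0, +\infty)$ and, for $g^* = \lambda > 0$ and $\gamma > 0$, the condition reduces to $\partial_\gamma(\lambda K)(\tilde{x}) = \lambda\, \partial_{\gamma/\lambda} K(\tilde{x})$, which always holds, since
$$e^* \in \partial_\gamma(\lambda K)(\tilde{x}) \iff \Big\langle \frac{e^*}{\lambda}, x - \tilde{x} \Big\rangle - \frac{\gamma}{\lambda} \leq K(x) - K(\tilde{x}), \ \forall x \in E \iff \frac{e^*}{\lambda} \in \partial_{\gamma/\lambda} K(\tilde{x});$$
the case $\gamma = 0$ is immediate. Thus every proper convex function $K : E \to \mathbb{R} \cup \{+\infty\}$ is weak regular $\gamma$-subdifferentiable at any point of its domain; the condition only becomes restrictive in the genuinely vector-valued setting.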
Theorem 2.
Let $H, K : E \to G \cup \{+\infty_G\}$ be two given mappings, $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K$, and $\epsilon \in G_+$. Then
$$\partial^s_\epsilon (H - K)(\tilde{x}) \subseteq \bigcap_{\eta \in G_+} \left[ \partial^s_{\eta + \epsilon} H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}) \right], \qquad (1)$$
with equality if $H$ and $K$ are proper, $G_+$-convex, and star $G_+$-lower semicontinuous; $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x}$ for all $\gamma \geq 0$; the cone $G_+$ is closed; and $\operatorname{int} G_+ \neq \emptyset$.
Proof. 
Let $T \in \partial^s_\epsilon (H - K)(\tilde{x})$, i.e.,
$$H(x) - K(x) - H(\tilde{x}) + K(\tilde{x}) - T(x - \tilde{x}) + \epsilon \in G_+, \quad \forall x \in E. \qquad (2)$$
Let $\eta \in G_+$. Then, for all $T' \in \partial^s_\eta K(\tilde{x})$, we have
$$K(x) - K(\tilde{x}) - T'(x - \tilde{x}) + \eta \in G_+, \quad \forall x \in E. \qquad (3)$$
Adding (2) and (3) term by term, and since $G_+$ is a convex cone, we obtain
$$H(x) - H(\tilde{x}) - (T + T')(x - \tilde{x}) + \eta + \epsilon \in G_+, \quad \forall x \in E,$$
i.e.,
$$T + T' \in \partial^s_{\epsilon + \eta} H(\tilde{x}), \quad \forall T' \in \partial^s_\eta K(\tilde{x}),$$
which yields that, for any $\eta \in G_+$,
$$T \in \partial^s_{\eta + \epsilon} H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}),$$
i.e.,
$$T \in \bigcap_{\eta \in G_+} \left[ \partial^s_{\eta + \epsilon} H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}) \right],$$
and the direct inclusion is proved. For the reverse inclusion, let
$$T \in \bigcap_{\eta \in G_+} \left[ \partial^s_{\eta + \epsilon} H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}) \right];$$
then, for every $\eta \in G_+$, we have
$$T \in \partial^s_{\eta + \epsilon} H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}),$$
i.e.,
$$T + T' \in \partial^s_{\epsilon + \eta} H(\tilde{x}), \quad \forall T' \in \partial^s_\eta K(\tilde{x}).$$
Since $H$ is $G_+$-convex, it follows according to property (2) of Lemma 1 that
$$g^* \circ T + g^* \circ T' \in \partial_{\langle g^*, \eta + \epsilon \rangle}(g^* \circ H)(\tilde{x}), \quad \forall T' \in \partial^s_\eta K(\tilde{x}), \ \forall g^* \in G_+^* \setminus \{0\},$$
and then
$$g^* \circ T + g^* \circ \partial^s_\eta K(\tilde{x}) \subseteq \partial_{\langle g^*, \epsilon \rangle + \langle g^*, \eta \rangle}(g^* \circ H)(\tilde{x}), \quad \forall \eta \in G_+, \ \forall g^* \in G_+^* \setminus \{0\}. \qquad (4)$$
Let $\theta \in (\operatorname{int} G_+) \cup \{0_G\}$. Since $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x}$ for all $\gamma \geq 0$, $K$ is in particular weak regular $\langle g^*, \theta \rangle$-subdifferentiable at $\tilde{x}$ for all $g^* \in G_+^* \setminus \{0\}$, i.e.,
$$\partial_{\langle g^*, \theta \rangle}(g^* \circ K)(\tilde{x}) = \bigcup_{\substack{\eta \in G_+^{\langle g^*, \theta \rangle} \\ \langle g^*, \eta \rangle = \langle g^*, \theta \rangle}} g^* \circ \partial^s_\eta K(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\}, \qquad (5)$$
with
$$G_+^{\langle g^*, \theta \rangle} := \begin{cases} \{0_G\}, & \text{if } \theta = 0_G, \\ G_+, & \text{if } \theta \in \operatorname{int} G_+. \end{cases} \qquad (6)$$
From (4), we deduce that, for any $\theta \in (\operatorname{int} G_+) \cup \{0_G\}$ and $g^* \in G_+^* \setminus \{0\}$, we have
$$g^* \circ T + \bigcup_{\substack{\eta \in G_+^{\langle g^*, \theta \rangle} \\ \langle g^*, \eta \rangle = \langle g^*, \theta \rangle}} g^* \circ \partial^s_\eta K(\tilde{x}) \subseteq \bigcup_{\substack{\eta \in G_+^{\langle g^*, \theta \rangle} \\ \langle g^*, \eta \rangle = \langle g^*, \theta \rangle}} \partial_{\langle g^*, \eta \rangle + \langle g^*, \epsilon \rangle}(g^* \circ H)(\tilde{x}),$$
i.e.,
$$g^* \circ T + \bigcup_{\substack{\eta \in G_+^{\langle g^*, \theta \rangle} \\ \langle g^*, \eta \rangle = \langle g^*, \theta \rangle}} g^* \circ \partial^s_\eta K(\tilde{x}) \subseteq \partial_{\langle g^*, \theta \rangle + \langle g^*, \epsilon \rangle}(g^* \circ H)(\tilde{x}). \qquad (7)$$
From (5) and (7), it follows that
$$g^* \circ T + \partial_{\langle g^*, \theta \rangle}(g^* \circ K)(\tilde{x}) \subseteq \partial_{\langle g^*, \theta \rangle + \langle g^*, \epsilon \rangle}(g^* \circ H)(\tilde{x}), \quad \forall \theta \in (\operatorname{int} G_+) \cup \{0_G\}, \ \forall g^* \in G_+^* \setminus \{0\},$$
i.e.,
$$g^* \circ T \in \partial_{\langle g^*, \epsilon \rangle + \langle g^*, \theta \rangle}(g^* \circ H)(\tilde{x}) \mathbin{\overset{\star}{-}} \partial_{\langle g^*, \theta \rangle}(g^* \circ K)(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\}, \ \forall \theta \in (\operatorname{int} G_+) \cup \{0_G\}.$$
Again, by applying property (3) of Lemma 1, we can write
$$g^* \circ T \in \partial_{\langle g^*, \epsilon \rangle + \beta}(g^* \circ H)(\tilde{x}) \mathbin{\overset{\star}{-}} \partial_\beta(g^* \circ K)(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\}, \ \forall \beta \geq 0,$$
which yields
$$g^* \circ T \in \bigcap_{\beta \geq 0} \left[ \partial_{\langle g^*, \epsilon \rangle + \beta}(g^* \circ H)(\tilde{x}) \mathbin{\overset{\star}{-}} \partial_\beta(g^* \circ K)(\tilde{x}) \right], \quad \forall g^* \in G_+^* \setminus \{0\}.$$
Since $H$ and $K$ are proper, $G_+$-convex, and star $G_+$-lower semicontinuous, and $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K$, the functions $g^* \circ H$ and $g^* \circ K$ are proper, convex, lower semicontinuous, and finite at $\tilde{x}$; hence, by applying Theorem 1, we obtain
$$g^* \circ T \in \partial_{\langle g^*, \epsilon \rangle}\big((g^* \circ H) - (g^* \circ K)\big)(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\},$$
i.e.,
$$g^* \circ T \in \partial_{\langle g^*, \epsilon \rangle}\big(g^* \circ (H - K)\big)(\tilde{x}), \quad \forall g^* \in G_+^* \setminus \{0\}.$$
By using the scalarization of the strong subdifferential given by property (2) of Lemma 1, we obtain
$$T \in \partial^s_\epsilon (H - K)(\tilde{x}).$$
This completes the proof. □
By taking $\epsilon = 0_G$ in Theorem 2, we obtain the formula for the exact subdifferential of the difference of two vector convex mappings.
Corollary 1.
Let $H, K : E \to G \cup \{+\infty_G\}$ be two given mappings and $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K$. Then
$$\partial^s (H - K)(\tilde{x}) \subseteq \bigcap_{\eta \in G_+} \left[ \partial^s_\eta H(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}) \right],$$
with equality if $H$ and $K$ are proper, $G_+$-convex, and star $G_+$-lower semicontinuous; $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x}$ for all $\gamma \geq 0$; and the positive cone $G_+$ is closed with $\operatorname{int} G_+ \neq \emptyset$.

4. Application to DC Vector Programming Problems

Let $H : E \to G \cup \{+\infty_G\}$ be a mapping and $\epsilon \in G_+$. A point $\tilde{x} \in \operatorname{dom} H$ is called an $\epsilon$-minimizer of $H$ on $C$ if
$$H(\tilde{x}) - \epsilon \leq_{G_+} H(x), \quad \forall x \in C,$$
where $C \subseteq E$. If $C = E$, then $\tilde{x}$ is an $\epsilon$-minimizer of $H$ if and only if $0 \in \partial^s_\epsilon H(\tilde{x})$. The vector indicator mapping $\delta^v_C : E \to G \cup \{+\infty_G\}$ is defined by
$$\delta^v_C(x) := \begin{cases} 0_G, & \text{if } x \in C, \\ +\infty_G, & \text{else}. \end{cases}$$
The $\epsilon$-normal set of $C$ at $\tilde{x} \in C$ in the vector sense is defined by
$$N^v_\epsilon(C, \tilde{x}) := \partial^s_\epsilon \delta^v_C(\tilde{x}) = \{T \in L(E,G) : T(x - \tilde{x}) \leq_{G_+} \epsilon, \ \forall x \in C\}.$$
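For instance (a scalar illustration with data of our choosing): take $E = G = \mathbb{R}$, $G_+ = \mathbb{R}_+$, and $C = [0, +\infty)$. Then, for every $\epsilon \geq 0$,
$$N^v_\epsilon(C, 0) = \{e \in \mathbb{R} : ex \leq \epsilon, \ \forall x \geq 0\} = (-\infty, 0],$$
which is the usual normal cone, while at the interior point $\tilde{x} = 1$ one finds $N^v_\epsilon(C, 1) = \{e : e(x-1) \leq \epsilon, \ \forall x \geq 0\} = [-\epsilon, 0]$, which shrinks to the classical normal cone $\{0\}$ as $\epsilon \downarrow 0$.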
It is clear that if $T \in L_+(F,G) := \{T \in L(F,G) : T(F_+) \subseteq G_+\}$, then $T$ is $(F_+, G_+)$-increasing. By taking an $F_+$-convex mapping $L : E \to F \cup \{+\infty_F\}$, it follows that the composed mapping $T \circ L : E \to G \cup \{+\infty_G\}$ is $G_+$-convex.
We will need the following result later.
Lemma 2
([22]). We suppose that the convex cone $G_+$ is closed. For every $\epsilon \in G_+$ and $y \in -F_+$, we have:
1. If $T \in N^v_\epsilon(-F_+, y)$, then $T \in L_+(F,G)$ and $\epsilon \geq_{G_+} -T(y)$;
2. If $T \in L_+(F,G)$ and $\epsilon \geq_{G_+} -T(y)$, then $T \in N^v_\epsilon(-F_+, y)$.
In [18], Théra developed the calculus formula for the strong $\epsilon$-subdifferential of the sum of two convex vector mappings. We need to recall that $(G, G_+)$ is said to be order-complete if $\inf A$ exists for each nonempty subset $A \subseteq G$ order-bounded from below, and that the cone $G_+$ is said to be normal if there exists a basis $\mathcal{N}$ of neighborhoods of $0_G$ such that
$$N = (N + G_+) \cap (N - G_+), \quad \forall N \in \mathcal{N}.$$
Theorem 3
([18]). Let $H, K : E \to G \cup \{+\infty_G\}$ be two $G_+$-convex mappings. If $H$ is continuous at some point of $\operatorname{dom} H \cap \operatorname{dom} K$ and $(G, G_+)$ is normal and order-complete, then for every $x \in E$ and $\epsilon \in G_+$, we have
$$\partial^s_\epsilon (H + K)(x) = \bigcup_{\substack{\epsilon_1 + \epsilon_2 = \epsilon \\ \epsilon_1, \epsilon_2 \in G_+}} \left[ \partial^s_{\epsilon_1} H(x) + \partial^s_{\epsilon_2} K(x) \right].$$
Consider the following constrained DC vector minimization problem:
$$(P_1) \quad \min_{x \in C} \ (H(x) - K(x)),$$
where $C \subseteq E$ is convex and $H, K : E \to G \cup \{+\infty_G\}$ are two proper $G_+$-convex mappings. By using the vector indicator mapping, the problem $(P_1)$ is equivalent to the following unconstrained problem:
$$\min_{x \in E} \ \big( H(x) + \delta^v_C(x) - K(x) \big).$$
Now, we establish necessary and sufficient optimality conditions for the minimization problem $(P_1)$ characterizing an $\epsilon$-minimizer.
Theorem 4.
Let $H, K : E \to G \cup \{+\infty_G\}$ be two proper, $G_+$-convex, and star $G_+$-lower semicontinuous mappings, and let $C \subseteq E$ be convex and closed. Suppose that $H$ is continuous at some point of $\operatorname{dom} H \cap C$, $(G, G_+)$ is normal and order-complete, $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K$ for all $\gamma \geq 0$, the cone $G_+$ is closed, and $\operatorname{int} G_+ \neq \emptyset$. Then $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(P_1)$ if and only if, for all $\eta \in G_+$,
$$\partial^s_\eta K(\tilde{x}) \subseteq \bigcup_{\substack{\epsilon_1 + \epsilon_2 = \eta + \epsilon \\ \epsilon_1, \epsilon_2 \in G_+}} \left[ \partial^s_{\epsilon_1} H(\tilde{x}) + N^v_{\epsilon_2}(C, \tilde{x}) \right].$$
Proof. 
We have that $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(P_1)$ if and only if
$$0 \in \partial^s_\epsilon (H + \delta^v_C - K)(\tilde{x}). \qquad (8)$$
Since the subset $C$ is convex and nonempty, the vector indicator mapping $\delta^v_C$ is $G_+$-convex and proper. It is simple to observe that $g^* \circ \delta^v_C = \delta_C$ for any $g^* \in G_+^* \setminus \{0\}$, where $\delta_C$ is the scalar indicator function of the subset $C$. Since $C$ is closed, $\delta_C$ is lower semicontinuous, and we deduce that $\delta^v_C$ is star $G_+$-lower semicontinuous.
Since $H$ and $\delta^v_C$ are $G_+$-convex and star $G_+$-lower semicontinuous and $\operatorname{dom} H \cap C \neq \emptyset$, the mapping $H + \delta^v_C$ is $G_+$-convex, proper, and star $G_+$-lower semicontinuous. As $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x}$ for any $\gamma \geq 0$, and the positive cone $G_+$ is closed with $\operatorname{int} G_+ \neq \emptyset$, then by virtue of Theorem 2, (8) becomes
$$0 \in \partial^s_{\eta + \epsilon}(H + \delta^v_C)(\tilde{x}) \mathbin{\overset{\star}{-}} \partial^s_\eta K(\tilde{x}), \quad \forall \eta \in G_+,$$
i.e. (since $0 \in A \mathbin{\overset{\star}{-}} B$ means exactly $B \subseteq A$),
$$\partial^s_\eta K(\tilde{x}) \subseteq \partial^s_{\eta + \epsilon}(H + \delta^v_C)(\tilde{x}), \quad \forall \eta \in G_+. \qquad (9)$$
As $H$ and $\delta^v_C$ are $G_+$-convex, $H$ is continuous at some point of $\operatorname{dom} H \cap C$, and $(G, G_+)$ is normal and order-complete, then, according to Theorem 3, (9) becomes
$$\partial^s_\eta K(\tilde{x}) \subseteq \bigcup_{\substack{\epsilon_1 + \epsilon_2 = \epsilon + \eta \\ \epsilon_1, \epsilon_2 \in G_+}} \left[ \partial^s_{\epsilon_1} H(\tilde{x}) + N^v_{\epsilon_2}(C, \tilde{x}) \right], \quad \forall \eta \in G_+.$$
The proof is complete. □
By taking $C = E$ in (9) of the above proof, we deduce the following proposition.
Proposition 1.
Let $H, K : E \to G \cup \{+\infty_G\}$ be two proper, $G_+$-convex, and star $G_+$-lower semicontinuous mappings. If $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K$ for all $\gamma \geq 0$, and the cone $G_+$ is closed with $\operatorname{int} G_+ \neq \emptyset$, then $\tilde{x}$ is an $\epsilon$-minimizer of $H - K$ if and only if
$$\partial^s_\eta K(\tilde{x}) \subseteq \partial^s_{\eta + \epsilon} H(\tilde{x}), \quad \forall \eta \in G_+.$$
In particular, $\tilde{x}$ is a minimizer of $H - K$ if and only if
$$\partial^s_\eta K(\tilde{x}) \subseteq \partial^s_\eta H(\tilde{x}), \quad \forall \eta \in G_+.$$
Remark 1.
The above proposition generalizes a result due to Hiriart-Urruty [3] characterizing a global minimum for a scalar DC programming problem.
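To see the criterion of Proposition 1 at work in this scalar setting (an illustrative computation of ours): take $E = G = \mathbb{R}$, $G_+ = \mathbb{R}_+$, $H(x) = 2x^2$, $K(x) = x^2$, $\epsilon = 0$, and $\tilde{x} = 0$. A discriminant computation gives
$$\partial_\eta K(0) = [-2\sqrt{\eta},\, 2\sqrt{\eta}] \subseteq [-2\sqrt{2\eta},\, 2\sqrt{2\eta}] = \partial_\eta H(0), \quad \forall \eta \geq 0,$$
so $0$ is a global minimizer of $(H-K)(x) = x^2$, as expected. Swapping $H$ and $K$ breaks the inclusion for every $\eta > 0$, and indeed $0$ is not a global minimizer of $-x^2$; note that the inclusion at $\eta = 0$ alone ($\{0\} \subseteq \{0\}$) would not detect this, which is why the condition must hold for all $\eta \in G_+$.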
A point $\tilde{x} \in \operatorname{dom} H$ is said to be an $\epsilon$-maximizer of $H$ on $C$ if
$$H(x) \leq_{G_+} H(\tilde{x}) + \epsilon, \quad \forall x \in C.$$
Consider the following constrained convex vector maximization problem:
$$(P_2) \quad \max_{x \in C} \ K(x).$$
The problem $(P_2)$ is equivalent to
$$\min_{x \in E} \ \big( \delta^v_C(x) - K(x) \big).$$
Corollary 2.
Let $K : E \to G \cup \{+\infty_G\}$ be a proper, $G_+$-convex, and star $G_+$-lower semicontinuous mapping, and let $C \subseteq E$ be convex and closed. If $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x} \in \operatorname{dom} K \cap C$ for all $\gamma \geq 0$, and the cone $G_+$ is closed with $\operatorname{int} G_+ \neq \emptyset$, then $\tilde{x}$ is an $\epsilon$-maximizer of the problem $(P_2)$ if and only if
$$\partial^s_\eta K(\tilde{x}) \subseteq N^v_{\eta + \epsilon}(C, \tilde{x}), \quad \forall \eta \in G_+.$$
Proof. 
It suffices to take $H = \delta^v_C$ in Proposition 1. □
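A worked instance of Corollary 2 (our own illustration): take $E = G = \mathbb{R}$, $G_+ = \mathbb{R}_+$, $K(x) = x^2$, $C = [-1, 2]$, $\epsilon = 0$, and $\tilde{x} = 2$. A discriminant computation gives $\partial_\eta K(2) = [4 - 2\sqrt{\eta},\, 4 + 2\sqrt{\eta}]$, while
$$N^v_\eta(C, 2) = \{e : e(x - 2) \leq \eta, \ \forall x \in [-1, 2]\} = [-\eta/3, +\infty).$$
The inclusion $\partial_\eta K(2) \subseteq N^v_\eta(C, 2)$ amounts to $4 - 2\sqrt{\eta} \geq -\eta/3$, i.e., $\eta/3 - 2\sqrt{\eta} + 4 \geq 0$, which holds for all $\eta \geq 0$ (the quadratic $s^2/3 - 2s + 4$ in $s = \sqrt{\eta}$ has negative discriminant). Hence $\tilde{x} = 2$ is a maximizer of $x^2$ on $[-1, 2]$, as expected.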
Let us now consider the following constrained vector minimization problem:
$$(P_3) \quad \min \ (H(x) - K(x)) \quad \text{subject to} \quad L(x) \in -F_+,$$
where $H, K : E \to G \cup \{+\infty_G\}$ are two $G_+$-convex mappings and $L : E \to F \cup \{+\infty_F\}$ is a proper $F_+$-convex mapping. By using the vector indicator mapping $\delta^v_{-F_+}$, the problem $(P_3)$ is equivalent to the following unconstrained minimization problem:
$$\min_{x \in E} \ \big( H(x) + (\delta^v_{-F_+} \circ L)(x) - K(x) \big).$$
The following result will be required to state the necessary and sufficient approximate optimality conditions that characterize an $\epsilon$-minimizer of the problem $(P_3)$.
Theorem 5
([23]). Let $H : E \to G \cup \{+\infty_G\}$ be a proper $G_+$-convex mapping, $K : F \to G \cup \{+\infty_G\}$ be a proper, $G_+$-convex, and $(F_+, G_+)$-increasing mapping, and $L : E \to F \cup \{+\infty_F\}$ be a proper $F_+$-convex mapping. If there exists $a \in \operatorname{dom} H \cap \operatorname{dom} L \cap L^{-1}(\operatorname{dom} K)$ such that $K$ is continuous at the point $L(a)$, then
$$\partial^s_\epsilon (H + K \circ L)(\tilde{x}) = \bigcup_{\substack{\eta' + \eta'' = \epsilon \\ \eta', \eta'' \in G_+}} \ \bigcup_{T \in \partial^s_{\eta''} K(L(\tilde{x}))} \partial^s_{\eta'}(H + T \circ L)(\tilde{x}),$$
for any $\tilde{x} \in E$ and $\epsilon \in G_+$.
We are now in a position to state the approximate optimality conditions for the problem $(P_3)$.
Theorem 6.
Let $H, K : E \to G \cup \{+\infty_G\}$ be two proper, $G_+$-convex, and star $G_+$-lower semicontinuous mappings, and let $L : E \to F \cup \{+\infty_F\}$ be a proper $F_+$-convex mapping. If there exists some point $a \in \operatorname{dom} H \cap L^{-1}(-\operatorname{int} F_+)$, $L^{-1}(-F_+)$ is closed, $K$ is weak regular $\gamma$-subdifferentiable at $\tilde{x} \in \operatorname{dom} H \cap \operatorname{dom} K \cap L^{-1}(-F_+)$ for all $\gamma \geq 0$, the cone $G_+$ is closed, and $\operatorname{int} G_+ \neq \emptyset$, then $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(P_3)$ if and only if, for any $\eta \in G_+$ and any $A \in \partial^s_\eta K(\tilde{x})$, there exist $\epsilon_1, \epsilon_2 \in G_+$ and $T \in L_+(F,G)$ satisfying
$$\epsilon_1 + \epsilon_2 = \eta + \epsilon, \quad A \in \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x}), \quad \text{and} \quad \epsilon_2 \geq_{G_+} -T(L(\tilde{x})).$$
Proof. 
The point $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(P_3)$ if and only if
$$0 \in \partial^s_\epsilon (H + \delta^v_{-F_+} \circ L - K)(\tilde{x}). \qquad (10)$$
Let us recall that the vector indicator mapping $\delta^v_{-F_+} : F \to G \cup \{+\infty_G\}$ is $(F_+, G_+)$-increasing (see [20]) and $G_+$-convex. Since $L$ is $F_+$-convex, $\delta^v_{-F_+} \circ L$ is $G_+$-convex. From the fact that $g^* \circ (\delta^v_{-F_+} \circ L) = \delta_{-F_+} \circ L$ for any $g^* \in G_+^* \setminus \{0\}$, it follows that
$$\operatorname{Epi}\big(g^* \circ (\delta^v_{-F_+} \circ L)\big) = \{(x, \alpha) : L(x) \in -F_+, \ \alpha \in \mathbb{R}_+\} = L^{-1}(-F_+) \times \mathbb{R}_+,$$
and, as $L^{-1}(-F_+)$ is closed, we deduce that $\operatorname{Epi}(g^* \circ (\delta^v_{-F_+} \circ L))$ is closed, which yields that $\delta^v_{-F_+} \circ L$ is star $G_+$-lower semicontinuous. Since $H$ is $G_+$-convex and star $G_+$-lower semicontinuous and $a \in \operatorname{dom} H \cap L^{-1}(-\operatorname{int} F_+)$, it follows that $H + \delta^v_{-F_+} \circ L$ is $G_+$-convex, star $G_+$-lower semicontinuous, and proper. We claim that $\delta^v_{-F_+}$ is continuous on $-\operatorname{int} F_+$. Indeed, for any neighborhood $V$ of $0_G$, we have $\delta^v_{-F_+}(-\operatorname{int} F_+) = \{0_G\} \subseteq V$. As $a \in L^{-1}(-\operatorname{int} F_+)$, $\delta^v_{-F_+}$ is continuous at $L(a)$. Let us note that all assumptions of Proposition 1 are satisfied; therefore, we obtain
$$\partial^s_\eta K(\tilde{x}) \subseteq \partial^s_{\eta + \epsilon}(H + \delta^v_{-F_+} \circ L)(\tilde{x}), \quad \forall \eta \in G_+. \qquad (11)$$
Let us observe that all hypotheses of Theorem 5 are satisfied; therefore, (11) becomes equivalent to
$$\partial^s_\eta K(\tilde{x}) \subseteq \bigcup_{\substack{\epsilon_1 + \epsilon_2 = \eta + \epsilon \\ \epsilon_1, \epsilon_2 \in G_+}} \ \bigcup_{T \in \partial^s_{\epsilon_2} \delta^v_{-F_+}(L(\tilde{x}))} \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x}), \quad \forall \eta \in G_+,$$
i.e.,
$$\partial^s_\eta K(\tilde{x}) \subseteq \bigcup_{\substack{\epsilon_1 + \epsilon_2 = \eta + \epsilon \\ \epsilon_1, \epsilon_2 \in G_+}} \ \bigcup_{T \in N^v_{\epsilon_2}(-F_+, L(\tilde{x}))} \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x}), \quad \forall \eta \in G_+.$$
Therefore, by virtue of Lemma 2, we obtain that, for any $\eta \in G_+$ and any $A \in \partial^s_\eta K(\tilde{x})$, there exist $\epsilon_1, \epsilon_2 \in G_+$ and $T \in L_+(F,G)$ satisfying $\epsilon_1 + \epsilon_2 = \eta + \epsilon$, $A \in \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x})$, and $\epsilon_2 \geq_{G_+} -T(L(\tilde{x}))$. This completes the proof. □

5. Application to a Multiobjective Fractional Programming Problem

This section focuses on a general multiobjective fractional programming problem:
$$(Q) \quad \min \left( \frac{h_1(x)}{k_1(x)}, \ldots, \frac{h_n(x)}{k_n(x)} \right) \quad \text{subject to} \quad L(x) \in -F_+,$$
where the functions $h_i, k_i : E \to \mathbb{R}$ are convex with $h_i(x) \geq 0$ and $k_i(x) > 0$ for any $x \in E$ ($i = 1, \ldots, n$), and $L : E \to F \cup \{+\infty_F\}$ is a proper $F_+$-convex mapping. The following notation will be required:
$$\epsilon := (\epsilon_1, \ldots, \epsilon_n) \in \mathbb{R}^n_+, \quad \nu_i := \frac{h_i(\tilde{x})}{k_i(\tilde{x})} - \epsilon_i, \quad \bar{\epsilon} := (\epsilon_1 k_1(\tilde{x}), \ldots, \epsilon_n k_n(\tilde{x})).$$
The finite-dimensional space $G := \mathbb{R}^n$ is equipped with its natural order induced by the positive cone
$$G_+ := \mathbb{R}^n_+ = \{(d_1, \ldots, d_n) \in \mathbb{R}^n : d_i \geq 0, \ i = 1, \ldots, n\},$$
i.e.,
$$(c_1, \ldots, c_n) \leq_{\mathbb{R}^n_+} (d_1, \ldots, d_n) \iff c_i \leq d_i, \ \forall i = 1, \ldots, n.$$
The following definition expresses the notion of an $\epsilon$-minimizer for the problem $(Q)$.
Definition 2.
Let $\epsilon = (\epsilon_1, \ldots, \epsilon_n) \in \mathbb{R}^n_+$. We say that a point $\tilde{x} \in L^{-1}(-F_+)$ is an $\epsilon$-minimizer of the problem $(Q)$ if
$$\frac{h_i(\tilde{x})}{k_i(\tilde{x})} - \epsilon_i \leq \frac{h_i(x)}{k_i(x)}, \quad \forall x \in L^{-1}(-F_+), \ i = 1, \ldots, n.$$
By using a parametric approach, we can equivalently convert the multiobjective fractional programming problem $(Q)$ into a DC vector nonfractional programming problem defined in the following way:
$$(Q_{\tilde{x}}) \quad \min \ (H(x) - K_{\tilde{x}}(x)) \quad \text{subject to} \quad L(x) \in -F_+,$$
where $H : E \to \mathbb{R}^n$ and $K_{\tilde{x}} : E \to \mathbb{R}^n$ are two mappings defined, for every $x \in E$, by
$$H(x) := (h_1(x), \ldots, h_n(x)), \quad K_{\tilde{x}}(x) := (\nu_1 k_1(x), \ldots, \nu_n k_n(x)).$$
In order to relate the fractional programming problem $(Q)$ to the DC vector optimization problem $(Q_{\tilde{x}})$, we formulate the following lemma.
Lemma 3.
A point $\tilde{x} \in L^{-1}(-F_+)$ is an $\epsilon$-minimizer of $(Q)$ if and only if $\tilde{x}$ is an $\bar{\epsilon}$-minimizer of the problem $(Q_{\tilde{x}})$.
Proof. 
Assume that $\tilde{x}$ is an $\epsilon$-minimizer of $(Q)$. From Definition 2, we have, for each $i = 1, \ldots, n$,
$$\frac{h_i(\tilde{x})}{k_i(\tilde{x})} - \epsilon_i \leq \frac{h_i(x)}{k_i(x)}, \quad \forall x \in L^{-1}(-F_+). \qquad (12)$$
Since $k_i(x) > 0$, we deduce from (12) that $0 \leq h_i(x) - \nu_i k_i(x)$ for any $x \in L^{-1}(-F_+)$ and $i = 1, \ldots, n$. As $h_i(\tilde{x}) - \nu_i k_i(\tilde{x}) - \epsilon_i k_i(\tilde{x}) = 0$, we may write
$$0 = h_i(\tilde{x}) - \nu_i k_i(\tilde{x}) - \epsilon_i k_i(\tilde{x}) \leq h_i(x) - \nu_i k_i(x), \quad \forall x \in L^{-1}(-F_+), \ i = 1, \ldots, n,$$
i.e.,
$$H(\tilde{x}) - K_{\tilde{x}}(\tilde{x}) - \bar{\epsilon} \leq_{\mathbb{R}^n_+} H(x) - K_{\tilde{x}}(x), \quad \forall x \in L^{-1}(-F_+),$$
which yields that $\tilde{x}$ is an $\bar{\epsilon}$-minimizer of the problem $(Q_{\tilde{x}})$.
By using similar arguments, one easily shows that if $\tilde{x}$ is an $\bar{\epsilon}$-minimizer of the problem $(Q_{\tilde{x}})$, then $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(Q)$.
This completes the proof. □
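A small numerical check of Lemma 3 (the data are chosen for illustration): take $E = F = \mathbb{R}$, $F_+ = \mathbb{R}_+$, $n = 1$, $h(x) = x^2$, $k(x) = 1 + x^2$, and $L(x) = 1 - x$, so that $L^{-1}(-F_+) = [1, +\infty)$. The ratio $r(x) = x^2/(1 + x^2)$ is increasing on $[1, +\infty)$ with minimum $r(1) = 1/2$, so the point $\tilde{x} = 2$, where $r(2) = 4/5$, is an $\epsilon$-minimizer of $(Q)$ for $\epsilon = 4/5 - 1/2 = 3/10$. Here $\nu = 4/5 - 3/10 = 1/2$ and $\bar{\epsilon} = \epsilon\, k(2) = (3/10) \cdot 5 = 3/2$, and on the other side
$$h(x) - \nu k(x) = x^2 - \tfrac{1}{2}(1 + x^2) = \tfrac{x^2 - 1}{2} \geq 0 = \big(h(2) - \nu k(2)\big) - \bar{\epsilon}, \quad \forall x \in [1, +\infty),$$
so $\tilde{x} = 2$ is indeed an $\bar{\epsilon}$-minimizer of $(Q_{\tilde{x}})$, as Lemma 3 asserts.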
The problem $(Q_{\tilde{x}})$ reduces to the following unconstrained minimization problem:
$$\min_{x \in E} \ \big( H(x) + \delta^v_{-F_+}(L(x)) - K_{\tilde{x}}(x) \big).$$
Proposition 2.
Let $h_i, k_i : E \to \mathbb{R}$ be $2n$ convex and lower semicontinuous functions such that $h_i(x) \geq 0$ and $k_i(x) > 0$ for each $i = 1, \ldots, n$ and any $x \in E$, and let $L : E \to F \cup \{+\infty_F\}$ be a proper $F_+$-convex mapping. We assume that $L^{-1}(-F_+)$ is nonempty and closed and that there exists some $x_0 \in E$ at which $(n-1)$ of the functions $k_i$ are continuous. Let $\epsilon = (\epsilon_1, \ldots, \epsilon_n) \in \mathbb{R}^n_+$, $\tilde{x} \in L^{-1}(-F_+)$, and $\nu_i := \frac{h_i(\tilde{x})}{k_i(\tilde{x})} - \epsilon_i \geq 0$ ($i = 1, \ldots, n$), so that each $\nu_i k_i$ is convex. Then $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(Q)$ if and only if, for any $\eta_i \geq 0$ and any $e^*_i \in \partial_{\eta_i}(\nu_i k_i)(\tilde{x})$ ($i = 1, \ldots, n$), there exist $\epsilon_{1i}, \epsilon_{2i} \geq 0$ and $T_i \in L_+(F, \mathbb{R})$ satisfying
$$\epsilon_{1i} + \epsilon_{2i} = \epsilon_i k_i(\tilde{x}) + \eta_i, \quad e^*_i \in \partial_{\epsilon_{1i}}(h_i + T_i \circ L)(\tilde{x}), \quad \text{and} \quad \epsilon_{2i} \geq -T_i(L(\tilde{x})).$$
Proof. 
Let $\tilde{x} \in L^{-1}(-F_+)$, $\bar{\epsilon} := (\epsilon_1 k_1(\tilde{x}), \ldots, \epsilon_n k_n(\tilde{x}))$, and $\eta = (\eta_1, \ldots, \eta_n) \in \mathbb{R}^n_+$. By Lemma 3, $\tilde{x}$ is an $\epsilon$-minimizer of $(Q)$ if and only if $\tilde{x}$ is an $\bar{\epsilon}$-minimizer of the problem $(Q_{\tilde{x}})$, i.e.,
$$0 \in \partial^s_{\bar{\epsilon}} (H + \delta^v_{-F_+} \circ L - K_{\tilde{x}})(\tilde{x}).$$
Let us note that in this situation $G = \mathbb{R}^n$ and $G_+ = \mathbb{R}^n_+$, which is a closed convex cone with $\operatorname{int} G_+ \neq \emptyset$; the $\mathbb{R}^n_+$-convexity of the mappings $H$ and $K_{\tilde{x}}$ follows easily from the convexity of the functions $h_i$ and $k_i$ (recall that $\nu_i \geq 0$) for $i = 1, \ldots, n$. For any $g^* = (\alpha_1, \ldots, \alpha_n) \in G_+^* = \mathbb{R}^n_+$, we have $g^* \circ H = \sum_{i=1}^n \alpha_i h_i$ and, since each $h_i$ is lower semicontinuous, we deduce that $g^* \circ H$ is lower semicontinuous, which yields that $H$ is star $\mathbb{R}^n_+$-lower semicontinuous. Similarly, we show that $K_{\tilde{x}}$ is also star $\mathbb{R}^n_+$-lower semicontinuous.
Let $\gamma \geq 0$. By virtue of [17], the weak regular $\gamma$-subdifferentiability of $K_{\tilde{x}} = (\nu_1 k_1, \ldots, \nu_n k_n)$ becomes exactly a well-known calculus rule of convex analysis, i.e., for any $(\alpha_1, \ldots, \alpha_n) \in \mathbb{R}^n_+ \setminus \{0_{\mathbb{R}^n}\}$, we have
$$\partial_\gamma \Big( \sum_{i=1}^n \alpha_i \nu_i k_i \Big)(\tilde{x}) = \bigcup_{\substack{\epsilon_i \geq 0 \\ \sum_{i=1}^n \alpha_i \epsilon_i = \gamma}} \sum_{i=1}^n \alpha_i\, \partial_{\epsilon_i}(\nu_i k_i)(\tilde{x}),$$
and this formula holds under the classical Moreau–Rockafellar qualification condition, i.e., the functions $k_i$ ($i = 1, \ldots, n$) are convex and there exists some $x_0 \in E$ at which $(n-1)$ of them are continuous. For our purposes, this qualification condition is satisfied. Let us emphasize that all the assumptions of Theorem 6 are fulfilled; therefore, $\tilde{x}$ is an $\epsilon$-minimizer of the problem $(Q)$ if and only if, for any $A \in \partial^s_\eta K_{\tilde{x}}(\tilde{x})$, there exist $\epsilon_1, \epsilon_2 \in \mathbb{R}^n_+$ and $T \in L_+(F, \mathbb{R}^n)$ satisfying $\epsilon_1 + \epsilon_2 = \eta + \bar{\epsilon}$, $A \in \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x})$, and $\epsilon_2 \geq_{\mathbb{R}^n_+} -T(L(\tilde{x}))$.
The strong $\eta$-subdifferential $\partial^s_\eta K_{\tilde{x}}(\tilde{x})$ reduces to
$$\partial^s_\eta K_{\tilde{x}}(\tilde{x}) = \partial_{\eta_1}(\nu_1 k_1)(\tilde{x}) \times \cdots \times \partial_{\eta_n}(\nu_n k_n)(\tilde{x}).$$
The condition $T \in L_+(F, \mathbb{R}^n)$ can be written as $T = (T_1, \ldots, T_n)$, where $T_i \in L_+(F, \mathbb{R})$, and the composed mapping $T \circ L : E \to \mathbb{R}^n \cup \{+\infty_{\mathbb{R}^n}\}$ is defined by
$$(T \circ L)(x) := \begin{cases} (T_1(L(x)), \ldots, T_n(L(x))), & \text{if } x \in \operatorname{dom} L, \\ +\infty_{\mathbb{R}^n}, & \text{otherwise}. \end{cases}$$
Now, we can write $A = (e^*_1, \ldots, e^*_n)$ with $e^*_i \in E^*$, and hence we obtain
$$A \in \partial^s_\eta K_{\tilde{x}}(\tilde{x}) \iff e^*_i \in \partial_{\eta_i}(\nu_i k_i)(\tilde{x}), \quad i = 1, \ldots, n.$$
The condition $A \in \partial^s_{\epsilon_1}(H + T \circ L)(\tilde{x})$ may be rewritten as $e^*_i \in \partial_{\epsilon_{1i}}(h_i + T_i \circ L)(\tilde{x})$ for any $i = 1, \ldots, n$, and the condition $\epsilon_2 \geq_{\mathbb{R}^n_+} -T(L(\tilde{x}))$ is equivalent to $\epsilon_{2i} \geq -T_i(L(\tilde{x}))$ for any $i = 1, \ldots, n$.
The proof is complete. □

6. Conclusions and Discussion

Our investigation in this article aimed to extend, within the setting of vector convex mappings, a formula of [16] dealing with the approximate subdifferential of the difference of two real convex functions. This was achieved via a scalarization process, by using this scalar formula, the concept of regular subdifferentiability, and the star difference operation. The established result allowed us to derive approximate optimality conditions characterizing approximate strong solutions of a constrained vector DC programming problem and of a constrained multiobjective fractional programming problem.
Let us note that a result similar to Proposition 1 was developed by Hiriart-Urruty [3] for an unconstrained scalar DC optimization problem, in terms of Fenchel approximate subdifferentials, characterizing a global (exact or approximate) solution. Additionally, in [5], a similar condition characterizing a weakly efficient solution is established for the difference of two vector mappings in a finite- or infinite-dimensional preordered space.
In a forthcoming work, we will study Pareto versions (weak and proper) of the above formula and attempt to find efficient algorithms for solving this class of problems numerically.
We would like to express our gratitude to the referees for pointing us to three papers [24,25,26], which will serve as the basis of a concrete application model that we plan to study using our results, and for their valuable comments and suggestions, which certainly contributed to improving the quality of the paper.

Author Contributions

Methodology, A.A.; Validation, M.L.; Formal analysis, M.L.; Resources, A.E.-d.; Visualization, A.A.; Supervision, M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Khazayel, B.; Farajzadeh, A. On the optimality conditions for DC vector optimization problems. Optimization 2022, 71, 2033–2045.
2. Shafie, A. Necessary and sufficient optimality conditions for DC vector optimization. Acta Univ. Apulensis Math. Inform. 2017, 51, 41–52.
3. Hiriart-Urruty, J.B. From Convex Optimization to Nonconvex Optimization. Necessary and Sufficient Conditions for Global Optimality. In Nonsmooth Optimization and Related Topics; Clarke, F.H., Dem'yanov, V.F., Giannessi, F., Eds.; Ettore Majorana International Science Series; Springer: Boston, MA, USA, 1989; Volume 43.
4. Horst, R.; Thoai, N.V. DC programming: Overview. J. Optim. Theory Appl. 1999, 103, 1–43.
5. El Maghri, M. (ϵ-)Efficiency in difference vector optimization. J. Glob. Optim. 2015, 61, 803–812.
6. Dolgopolik, M.V. New global optimality conditions for nonsmooth DC optimization problems. J. Glob. Optim. 2020, 76, 25–55.
7. Laghdir, M. Optimality conditions in DC-constrained optimization. Acta Math. Vietnam. 2005, 30, 169–179.
8. Amahroq, T.; Penot, J.P.; Syam, A. On the subdifferentiability of the difference of two functions and local minimization. Set-Valued Anal. 2008, 16, 413–427.
9. Nitanda, A.; Suzuki, T. Stochastic difference of convex algorithm and its application to training deep Boltzmann machines. In Proceedings of Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 470–478.
10. Volle, M. Duality principles for optimization problems dealing with the difference of vector-valued convex mappings. J. Optim. Theory Appl. 2002, 114, 223–241.
11. Laghdir, M.; Benkenza, N.; Najeh, N. Duality in DC-constrained programming via duality in reverse convex programming. J. Nonlinear Convex Anal. 2004, 5, 275–284.
12. Li, G.; Zhang, L.; Liu, Z. The stable duality of DC programs for composite convex functions. J. Ind. Manag. Optim. 2016, 13, 63–79.
13. Xu, Y.; Li, S. Optimality and Duality for DC Programming with DC Inequality and DC Equality Constraints. Mathematics 2022, 10, 601.
14. Xu, Y.; Li, S. Duality for minimization of the difference of two Φc-convex functions. J. Ind. Manag. Optim. 2023, 19, 5045–5059.
15. Sun, X.; Long, X.J.; Li, M. Some characterizations of duality for DC optimization with composite functions. Optimization 2017, 66, 1425–1443.
16. Martinez-Legaz, J.E.; Seeger, A. A formula on the approximate subdifferential of the difference of two convex functions. Bull. Austral. Math. Soc. 1992, 45, 37–42.
17. El Maghri, M. Pareto–Fenchel ϵ-subdifferential sum rule and ϵ-efficiency. Optim. Lett. 2012, 6, 763–781.
18. Théra, M. Calcul ϵ-sous-différentiel des applications convexes. C. R. Acad. Sci. Paris 1980, 290, 549–551.
19. Pontryagin, L.S. Linear differential games II. Soviet Math. Dokl. 1967, 8, 910–912.
20. El Maghri, M.; Laghdir, M. Pareto subdifferential calculus for convex vector mappings and applications to vector optimization. SIAM J. Optim. 2009, 19, 1970–1994.
21. Boţ, R.I.; Grad, S.M.; Wanka, G. Duality in Vector Optimization; Springer Science & Business Media: Berlin, Germany, 2009.
22. Moustaid, M.B.; Rikouane, A.; Dali, I.; Laghdir, M. Sequential approximate weak optimality conditions for multiobjective fractional programming problems via sequential calculus rules for the Brøndsted–Rockafellar approximate subdifferential. Rend. Circ. Mat. Palermo II Ser. 2022, 71, 737–754.
23. Laghdir, M.; Rikouane, A. A note on approximate subdifferential of composed convex operator. Appl. Math. Sci. 2014, 8, 2513–2523.
24. Song, D.; Tang, L.; Liu, C.; Wu, J.; Song, X. A novel operation optimization method based on mechanism analytics for the quality of molten steel in the BOF steelmaking process. IEEE Trans. Autom. Sci. Eng. 2022, 20, 218–232.
25. Yang, L.; Sun, Q.; Zhang, N.; Li, Y. Indirect multi-energy transactions of energy internet with deep reinforcement learning approach. IEEE Trans. Power Syst. 2022, 37, 4067–4077.
26. Lai, X.; Zhang, P.; Wang, Y.; Chen, L.; Wu, M. Continuous state feedback control based on intelligent optimization for first-order nonholonomic systems. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 2534–2540.