Article

Efficient Solution of DC-Type Vector Optimization via Abstract Convex Analysis

School of Mathematics and Statistics, Hainan University, Haikou 570228, China
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(6), 1070; https://doi.org/10.3390/math14061070
Submission received: 4 February 2026 / Revised: 18 March 2026 / Accepted: 19 March 2026 / Published: 22 March 2026
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)

Abstract

The concept of vector topical functions, which take values in a partially ordered Banach space endowed with a complete lattice structure, was introduced in our preceding study. This structure enabled the development of an abstract convexity theory for vector topical functions, utilizing the notion of vector support. In this paper, applying this abstract convex theory, a DC-type vector optimization problem is investigated. Using the idea of a slack, the support set of a vector-valued map can be fully characterized by the subdifferential in the abstract convex sense. With the aid of this result, a sufficient condition to detect the efficient solutions of the DC-type vector optimization is obtained. In addition, a dual problem for the DC optimization is proposed, for which some strong duality results are established.

1. Introduction

Convexity has always been an essential property in the investigation of many fields, ranging from optimization and economics to variational analysis and beyond. Its significance stems from the fact that convex problems often admit elegant theoretical characterizations and can be solved efficiently by numerical algorithms. The theory of convex analysis, see [1], provides powerful tools for studying problems with convex features. However, plenty of problems arising in practice cannot be depicted by convex models. Engineering design, robust control, and many modern applications in machine learning frequently involve nonconvex objectives or constraints, which fall outside the scope of classical convex analysis. Therefore, different kinds of generalized convexity have been proposed and studied, so that the techniques and ideas of classical convex analysis can be applied to wider classes of nonconvex problems. Abstract convexity is one of them. It generalizes convexity from the global point of view and covers a broad range of nonconvex functions. Moreover, abstract convexity retains the global envelope structure, making it particularly suitable for extending duality and support function techniques. For surveys on the ideas and early results of abstract convex analysis, we refer to the classical books [2,3].
Nowadays, the theory of abstract convex analysis for scalar-valued functions is abundant and has found numerous applications in various fields; see, e.g., [4,5,6,7,8]. These developments include the study of various kinds of abstract convex functions, subdifferentials in the abstract convex sense, and the corresponding calculus rules, all of which have proven useful in areas such as global optimization and economic equilibrium theory. This motivates us to extend these useful concepts and theories to vector-valued mappings, so that abstract convex analysis can be used to cope with vector problems. In the theory of abstract convex analysis, the concept of supremum is a fundamental tool: the structure of an abstract convex map is based on the envelope by a certain class of functions. Unlike the real line, what we have in a vector space is usually only a partial order, which creates considerable difficulty in dealing with the supremum of vector maps, along with many other challenges. For instance, the supremum of a set of vectors may not exist in the usual sense, and even when it does, it is usually a set, which complicates the development of a theory for vector-valued abstract convex functions. These challenges call for a careful reexamination of fundamental concepts and a systematic development of a vector-valued abstract convex analysis framework.
Our main concern here is a typical example of abstract convex mappings, called topical functions. They were used as a basic model to investigate discrete event systems in [9]. A key feature of these functions is that they are both increasing and plus-homogeneous, which makes them particularly suitable for modeling dynamic processes such as manufacturing systems or network synchronization. It turned out that every topical function can be expressed as the envelope of a family of Gerstewitz functions (see [10]), meaning that a topical function is abstract convex with respect to the family of Gerstewitz functions. This connection is significant because Gerstewitz functions serve as a fundamental tool in scalarization techniques for vector optimization, linking topical functions to broader applications in optimization theory. The abstract convex theory of topical functions was fully studied in [8,11,12,13], and then generalized to the infinite-dimensional case in [14,15,16,17]. In [18], we introduced a version of vector topical functions and investigated its abstract convex framework. Employing that, several collections of weak separation functions, which are crucial tools in image space analysis, were constructed in [19,20,21]. The space considered there was a general partially ordered Banach space. In order to guarantee the existence of the supremum, some versions of the supremum based on the one proposed by Tanino [22] were applied. However, these notions of supremum usually give a set, rather than a single point, which is not easy to deal with. This set-valued nature complicates both theoretical analysis and numerical implementation, as it requires handling collections of points rather than individual values.
Therefore, in [23], a complete lattice structure was imposed on the space, with which the existence of a strong version of the supremum can be ensured. We can then take full advantage of it to establish another abstract convex scheme for vector topical functions, in which the support map and the conjugate map are both vector valued. Due to the additional structure required, this scheme is not as general as the one in [18], but it is much more convenient to handle. Moreover, every Euclidean space equipped with the natural (componentwise) partial order is a complete lattice, which covers a broad class of problems in practice, so this scheme is also worth considering. Here, based on what was achieved in [23], we further investigate the behavior of vector topical functions within this scheme, especially from the dual point of view. Particular attention is paid to the subdifferential, which is then used as the main tool to study some generalized DC-type optimization problems.
The paper is organized as follows. The abstract convex framework given in [23] is recalled in Section 2, together with some basic definitions and results about the supremum in a partially ordered vector space. In Section 3, we propose some strong versions of the subdifferential, and the characterization and nonemptiness properties of the subdifferential for vector topical functions are derived. Some DC-type optimization problems involving vector topical functions are studied in Section 4: by virtue of the theories in Section 3, optimality conditions are obtained, and we introduce a dual problem for the DC-type optimization, for which the zero duality gap and strong duality are investigated. Finally, Section 5 concludes the paper.

2. Preliminaries

Let X and Y be real Banach spaces with norms $\|\cdot\|_X$ and $\|\cdot\|_Y$, and let $K_X \subset X$ and $K_Y \subset Y$ be convex, closed, and pointed cones with nonempty interior, inducing partial orders $\preceq_{K_X}$ on X and $\preceq_{K_Y}$ on Y, respectively, in the following way:
$$x_1 \preceq_{K_X} x_2 \iff x_2 \in x_1 + K_X, \qquad y_1 \preceq_{K_Y} y_2 \iff y_2 \in y_1 + K_Y.$$
For a subset $A \subset Y$, the topological interior, boundary, and closure of A are denoted by $\operatorname{int} A$, $\operatorname{bd} A$, and $\operatorname{cl} A$, respectively. Let $X_0$ be a nontrivial linear subspace of X with $\operatorname{int} K_X \cap X_0 \neq \emptyset$, and let $T_0$ be a continuous linear bijection from $X_0$ to Y satisfying $T_0(K_X \cap X_0) = K_Y$.
The concept of topical functions, as well as the whole framework of abstract convexity, relies strongly on the order setting, both on the domain space and on the image space. In particular, for the situation in this work, where we operate within a complete lattice so that a vector-valued function can be enveloped through a strong supremum, the whole theory is available only for real spaces, not for complex ones. In addition, the definition of vector topical functions used in this paper and the entire abstract convex structure depend on the choice of $K_X$, $K_Y$, $X_0$, and $T_0$: with different choices of $(K_X, K_Y, X_0, T_0)$, the vector topical function is also different. However, as long as the choice of $(K_X, K_Y, X_0, T_0)$ satisfies the assumptions given here, the entire abstract convex framework remains valid.
Adjoining the elements $-\infty$ and $+\infty$ to Y, we consider the extended vector space $\bar{Y}$, in which the order relation and linear operations are generalized as follows:
$$\forall y \in Y:\ -\infty \preceq_{K_Y} y \preceq_{K_Y} +\infty; \qquad 0 \cdot (+\infty) = 0 \cdot (-\infty) = 0_Y;$$
$$\forall \alpha > 0:\ \alpha \cdot (+\infty) = +\infty,\ \alpha \cdot (-\infty) = -\infty;$$
$$\forall y \in Y:\ y + (+\infty) = +\infty,\ y + (-\infty) = -\infty; \qquad -(+\infty) = -\infty.$$
Recall that for a vector-valued map $f: X \to \bar{Y}$, the domain of f is defined by $\operatorname{dom} f := \{x \in X : f(x) \neq +\infty\}$. Consider a general vector optimization problem (VOP):
$$\max_{x \in S} f(x),$$
where $f: X \to \bar{Y}$ and S is a subset of X. A point $\bar{x}$ is an efficient solution of (VOP) if $(\{f(\bar{x})\} + K_Y) \cap f(S) = \{f(\bar{x})\}$, and $f(\bar{x})$ is then called a maximal value of (VOP); $\bar{x}$ is a weakly efficient solution of (VOP) if $(\{f(\bar{x})\} + \operatorname{int} K_Y) \cap f(S) = \emptyset$, and $f(\bar{x})$ is then called a weakly maximal value of (VOP). The maximal value set and the weakly maximal value set of (VOP) are denoted by $\operatorname{Max}(\mathrm{VOP})$ and $\operatorname{WMax}(\mathrm{VOP})$, respectively.
In [24], a vector topical function was introduced by Kermani and Doagooei. Denote by $1_X$ and $1_Y$ two fixed vectors in $\operatorname{int} K_X$ and $\operatorname{int} K_Y$, respectively.
Definition 1 ([24]). 
A function $f: X \to Y$ is called vector topical w.r.t. $1_X$ and $1_Y$ if it is
(i) 
Increasing: $x_1 \preceq_{K_X} x_2 \Rightarrow f(x_1) \preceq_{K_Y} f(x_2)$, for all $x_1, x_2 \in X$.
(ii) 
Plus-homogeneous w.r.t. $1_X$ and $1_Y$: $f(x + \lambda 1_X) = f(x) + \lambda 1_Y$, for all $x \in X$ and $\lambda \in \mathbb{R}$.
The plus-homogeneous property (w.r.t. $1_X$ and $1_Y$) above is a generalization of the corresponding feature of the classical scalar topical function, namely,
$$f(x + \lambda 1_X) = f(x) + \lambda \quad \text{for all } x \in X \text{ and } \lambda \in \mathbb{R},$$
which shows that f has a translation property along $(1_X, 1)$. However, the situation in a vector space Y is more complicated, since $\mathbb{R}$ has only one direction whereas Y has infinitely many. Therefore, requiring this translation property along only one direction is not adequate for establishing the envelope result. For this reason, in [18], we proposed the concept of abstract convexity for vector-valued mappings as well as a notion of vector topical functions.
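The scalar translation property is easy to observe concretely. A minimal Python sketch, using a classical max-type topical function on $\mathbb{R}^3$ with $1_X = (1,1,1)$ (the parameter vector `a` below is an illustrative choice of ours, not taken from the paper):

```python
# A classical scalar topical function on R^3 w.r.t. 1_X = (1,1,1):
# f(x) = max_i (x_i + a_i) is increasing and plus-homogeneous.
# The parameter vector 'a' is an arbitrary illustrative choice.
a = [1.0, -2.0, 0.5]

def f(x):
    return max(xi + ai for xi, ai in zip(x, a))

x = [0.3, 1.7, -0.2]
y = [xi + di for xi, di in zip(x, [0.0, 0.5, 1.0])]  # y >= x componentwise

assert f(x) <= f(y)                                   # increasing
lam = 2.3
assert abs(f([xi + lam for xi in x]) - (f(x) + lam)) < 1e-12  # plus-homogeneous
```

Any pointwise maximum of translated coordinate functions shares these two properties, which is precisely the envelope structure the paper exploits.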
Definition 2 ([18]). 
A function $f: X \to \bar{Y}$ is said to be vector topical w.r.t. $T_0$ if it is
(i) 
Increasing: $x_1 \preceq_{K_X} x_2 \Rightarrow f(x_1) \preceq_{K_Y} f(x_2)$, for all $x_1, x_2 \in X$;
(ii) 
Plus-homogeneous w.r.t. $T_0$: $f(x + x_0) = f(x) + T_0(x_0)$, for all $x \in X$ and $x_0 \in X_0$.
We deal with a partially ordered vector space. In [23], we introduced the complete lattice structure, which helps to overcome the possible non-existence of the supremum in this situation and to establish an abstract convex framework for vector topical functions.
Definition 3 ([25]). 
An element $\bar{y} \in \bar{Y}$, denoted by $\sup^s A$, is said to be the strong supremum of a set $A \subset Y$ if $\bar{y}$ fulfills the following conditions:
(i) 
$a \preceq_{K_Y} \bar{y}$, for all $a \in A$;
(ii) 
for any $y \in Y$ such that $a \preceq_{K_Y} y$ for all $a \in A$, it follows that $\bar{y} \preceq_{K_Y} y$.
This means that $\bar{y}$ is the least upper bound of the given set A, and by convention we set $\sup^s \emptyset = -\infty$. The strong infimum can be defined similarly.
Definition 4 ([25]). 
A partially ordered set $(E, \preceq_E)$ is called a complete lattice if the strong infimum and strong supremum exist for every subset of E.
A complete lattice can also be characterized by the existence of infima alone, or of suprema alone; see [26]. In this work, we assume that the partially ordered space $(\bar{Y}, \preceq_{K_Y})$ is a complete lattice. Picking an arbitrary $\omega \in X$, the complete lattice structure enables us to define a vector-valued map $\varphi_\omega: X \to \bar{Y}$ by
$$\varphi_\omega(x) = \sup{}^s \{\, y \in Y : T_0^{-1}(y) \preceq_{K_X} x + \omega \,\}.$$
Some properties of φ ω are listed below.
Lemma 1 ([23]). 
For all $\omega \in X$, one has:
(i) 
$\varphi_\omega(-\omega) = 0_Y$;
(ii) 
$\varphi_\omega(x) = \varphi_x(\omega)$, for all $x \in X$;
(iii) 
$\varphi_\omega$ is vector topical w.r.t. $T_0$.
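Lemma 1 can be checked concretely in a finite-dimensional analogue of the framework. The setting below is an illustrative assumption of ours, not the paper's Banach-space setting: $X = \mathbb{R}^3$, $K_X = \mathbb{R}^3_+$, $Y = \mathbb{R}^2$, $K_Y = \mathbb{R}^2_+$, $X_0 = \{(x_1, x_2, 0)\}$, and $T_0(x_1, x_2, 0) = (x_1, x_2)$, for which the definition of $\varphi_\omega$ reduces to an explicit formula:

```python
# Finite-dimensional analogue (illustrative assumption): with the data above,
# phi_omega(x) = (x1 + w1, x2 + w2) if x3 + w3 >= 0, and -infinity otherwise.
NEG_INF = None  # stands for the adjoined element -infinity

def phi(omega, x):
    if x[2] + omega[2] >= 0:
        return (x[0] + omega[0], x[1] + omega[1])
    return NEG_INF

def approx(u, v, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(u, v))

omega = (1.0, -0.5, 2.0)
x = (0.2, 0.7, -1.0)

# Lemma 1(i): phi_omega(-omega) = 0_Y
assert approx(phi(omega, tuple(-w for w in omega)), (0.0, 0.0))
# Lemma 1(ii): phi_omega(x) = phi_x(omega)
assert approx(phi(omega, x), phi(x, omega))
# Lemma 1(iii): plus-homogeneity w.r.t. T_0 along x0 = (a, b, 0)
a, b = 0.4, -1.1
base = phi(omega, x)
shifted = phi(omega, (x[0] + a, x[1] + b, x[2]))
assert approx(shifted, (base[0] + a, base[1] + b))
```

The symmetry in (ii) is visible directly: the defining condition $T_0^{-1}(y) \preceq_{K_X} x + \omega$ depends only on $x + \omega$.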
We have shown that $\varphi_\omega(X) \subset Y$ for all $\omega \in X$ if $\operatorname{int} K_X \cap X_0 \neq \emptyset$, and that every finite vector topical function can be enveloped by $W_\varphi := \{\varphi_\omega : \omega \in X\}$.
Theorem 1 ([23]). 
Let $f: X \to \bar{Y}$. Then it holds that
$$f \text{ is vector topical w.r.t. } T_0 \iff f(x) = \sup{}^s \{\varphi_\omega(x) : \varphi_\omega \in \operatorname{supp}(f, W_\varphi)\}, \quad \forall x \in X,$$
where $\operatorname{supp}(f, W_\varphi) = \{\varphi_\omega \in W_\varphi : \varphi_\omega(x) \preceq_{K_Y} f(x),\ \forall x \in X\}$.
The collection $\operatorname{supp}(f, W_\varphi)$ is called the $W_\varphi$-support set of f. For a function $f: X \to \bar{Y}$ that is vector topical w.r.t. $T_0$, the support set can be determined in the following way.
Proposition 1 ([23]). 
If $f: X \to \bar{Y}$ is vector topical w.r.t. $T_0$, then
$$\operatorname{supp}(f, W_\varphi) = \{\varphi_\omega \in W_\varphi : f(-\omega) \succeq_{K_Y} 0_Y\}.$$
Definition 5. 
For a general vector-valued map $f: X \to \bar{Y}$, the $W_\varphi$-conjugate function of f, denoted by $f^{c(\varphi)}: W_\varphi \to \bar{Y}$, is defined by
$$f^{c(\varphi)}(\varphi_\omega) = \sup{}^s \{\varphi_\omega(x) - f(x) : x \in X\}, \quad \varphi_\omega \in W_\varphi.$$
If f is vector topical w.r.t. $T_0$, the conjugate $f^{c(\varphi)}$ has a symmetric relation with f and is therefore easy to compute.
Theorem 2 ([23]). 
Let $f: X \to \bar{Y}$. Then, f is vector topical w.r.t. $T_0$ if and only if
$$f^{c(\varphi)}(\varphi_\omega) = -f(-\omega), \quad \forall \varphi_\omega \in W_\varphi.$$
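Proposition 1 and Theorem 2 can both be illustrated numerically in the same finite-dimensional analogue as above (an assumption of ours for illustration, with an increasing scalar kink function `h` of our choosing making f vector topical):

```python
# Finite-dimensional analogue (illustrative): X = R^3, Y = R^2,
# f(x) = (x1 + h(x3), x2 + h(x3)) with h increasing is vector topical.
def h(t):
    return t if t < 0 else t / 2.0

def f(x):
    return (x[0] + h(x[2]), x[1] + h(x[2]))

def phi(omega, x):
    if x[2] + omega[2] >= 0:
        return (x[0] + omega[0], x[1] + omega[1])
    return None  # -infinity

omega = (-0.5, -1.0, 0.3)
m_omega = tuple(-w for w in omega)

# Proposition 1: phi_omega supports f  iff  f(-omega) >= 0 componentwise.
supports = all(c >= 0 for c in f(m_omega))

# Spot-check phi_omega(x) <= f(x) on a small grid when the criterion holds.
grid = [(a, b, c) for a in (-1.0, 0.0, 2.0)
                  for b in (-2.0, 0.5) for c in (-0.3, 0.0, 1.5)]
if supports:
    for x in grid:
        v = phi(omega, x)
        if v is not None:
            assert all(vi <= fi + 1e-12 for vi, fi in zip(v, f(x)))

# Theorem 2: f^c(phi_omega) = -f(-omega); here the supremum of
# phi_omega(x) - f(x) is attained at any x with x3 = -omega3.
conj = tuple(-c for c in f(m_omega))
x_star = (0.0, 0.0, -omega[2])
v = phi(omega, x_star)
diff = tuple(vi - fi for vi, fi in zip(v, f(x_star)))
assert all(abs(d - c) < 1e-12 for d, c in zip(diff, conj))
```

The grid check is of course only a sample, not a proof; the point is that the closed-form conjugate $-f(-\omega)$ matches the value attained by the defining supremum.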

3. Subdifferentials

Define linear operations, a norm, and an order relation on $W_\varphi$ by
(i)
$\alpha \varphi_{\omega_1} + \beta \varphi_{\omega_2} = \varphi_{\alpha \omega_1 + \beta \omega_2}$, for all $\varphi_{\omega_1}, \varphi_{\omega_2} \in W_\varphi$ and $\alpha, \beta \in \mathbb{R}$;
(ii)
$\|\varphi_\omega\|_{W_\varphi} = \|\omega\|_X$, for all $\varphi_\omega \in W_\varphi$;
(iii)
$\varphi_{\omega_1} \preceq_{W_\varphi} \varphi_{\omega_2} \iff \varphi_{\omega_1}(x) \preceq_{K_Y} \varphi_{\omega_2}(x)$, for all $x \in X$.
Then, $W_\varphi$ forms a Banach space which is isometric to X, and we have the order-preserving isometry
$$\Phi: X \to W_\varphi, \quad \omega \mapsto \varphi_\omega. \tag{2}$$
W φ is the dual space of X in the abstract convex framework of the vector topical function, on which we can define the notion of subdifferential for a vector-valued map.
Definition 6. 
For a vector-valued map $f: X \to \bar{Y}$, the $W_\varphi$-subdifferential of f at some $\bar{x} \in \operatorname{dom} f$, denoted by $\partial_{W_\varphi} f(\bar{x})$, is defined as
$$\partial_{W_\varphi} f(\bar{x}) = \{\varphi_\omega \in W_\varphi : \varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}),\ \forall x \in X\}.$$
What we shall mainly use is a slack version of this concept.
Definition 7. 
Given some $\epsilon \in K_Y \setminus \{0_Y\}$, for a vector map $f: X \to \bar{Y}$, the $(\epsilon, W_\varphi)$-subdifferential of f at some $\bar{x} \in \operatorname{dom} f$, denoted by $\partial^{\epsilon}_{W_\varphi} f(\bar{x})$, is defined as
$$\partial^{\epsilon}_{W_\varphi} f(\bar{x}) = \{\varphi_\omega \in W_\varphi : \varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}) + \epsilon,\ \forall x \in X\}.$$
The term $\epsilon$ relaxes the subdifferential condition, so that it can be fulfilled by more elements of $W_\varphi$.
The notion of subdifferential is defined by a global property of f. However, if f is vector topical w.r.t. $T_0$, the $(\epsilon, W_\varphi)$-subdifferential of f at $\bar{x}$ can be identified by a local feature. Also, under the assumption $\operatorname{int} K_X \cap X_0 \neq \emptyset$, it is not hard to observe that, for a vector topical function f (w.r.t. $T_0$), either $f \equiv -\infty$, or $f \equiv +\infty$, or $f(x) \in Y$ for all $x \in X$. Thus, when it comes to vector topical functions, we can focus on the nontrivial case where $f(X) \subset Y$.
Theorem 3. 
Suppose $f: X \to Y$ is vector topical w.r.t. $T_0$. Then
$$\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x}) \iff \varphi_\omega(\bar{x}) \succeq_{K_Y} f(\bar{x}) - f(-\omega) - \epsilon.$$
Proof. 
Assume that $\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})$, which means $\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}) + \epsilon$ for all $x \in X$. Considering the case $x = -\omega$, it can be obtained from Lemma 1(i) and Definition 7 that $0_Y - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(-\omega) - f(\bar{x}) + \epsilon$, i.e., $\varphi_\omega(\bar{x}) \succeq_{K_Y} f(\bar{x}) - f(-\omega) - \epsilon$.
Conversely, assume that $\varphi_\omega(\bar{x}) \succeq_{K_Y} f(\bar{x}) - f(-\omega) - \epsilon$. Let $x \in X$ be arbitrary. For any $y \in Y$ with $T_0^{-1}(y) \preceq_{K_X} x + \omega$, i.e., $-\omega + T_0^{-1}(y) \preceq_{K_X} x$, since f is vector topical w.r.t. $T_0$, applying the increasing property and the plus-homogeneous property (w.r.t. $T_0$) of f, it can be deduced that $f(-\omega) + y \preceq_{K_Y} f(x)$, which means $y \preceq_{K_Y} f(x) - f(-\omega)$. Then, by the arbitrariness of such y and the definition of $\varphi_\omega(x)$ as a strong supremum, we further obtain $\varphi_\omega(x) \preceq_{K_Y} f(x) - f(-\omega)$. Then,
$$\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(-\omega) - (f(\bar{x}) - f(-\omega) - \epsilon) = f(x) - f(\bar{x}) + \epsilon. \qquad \square$$
Proposition 2. 
Let $\bar{x} \in X$ be arbitrary. If $f: X \to Y$ is vector topical w.r.t. $T_0$, then $\partial^{\epsilon}_{W_\varphi} f(\bar{x}) \neq \emptyset$ for every $\epsilon \in K_Y \setminus \{0_Y\}$.
Proof. 
For any $\bar{x} \in X$ and $\epsilon \in K_Y \setminus \{0_Y\}$, set $\bar{\omega} = -\bar{x} + T_0^{-1}(\bar{y} + \epsilon)$ for some fixed $\bar{y} \in Y$. According to Lemma 1,
$$\varphi_{\bar{\omega}}(\bar{x}) = \varphi_{\bar{x}}(-\bar{x}) + \bar{y} + \epsilon = \bar{y} + \epsilon.$$
On the other hand, since f is vector topical w.r.t. $T_0$, we can get
$$f(-\bar{\omega}) = f(\bar{x} - T_0^{-1}(\bar{y} + \epsilon)) = f(\bar{x}) - \bar{y} - \epsilon.$$
Hence,
$$\varphi_{\bar{\omega}}(\bar{x}) = \bar{y} + \epsilon \succeq_{K_Y} \bar{y} = f(\bar{x}) - f(-\bar{\omega}) - \epsilon.$$
Then, it follows from Theorem 3 that $\varphi_{\bar{\omega}} \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})$. □
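Proposition 2's construction can be traced in the finite-dimensional analogue used above (an illustrative assumption of ours, with `xbar`, `ybar`, and `eps` chosen arbitrarily): the point $\bar{\omega} = -\bar{x} + T_0^{-1}(\bar{y} + \epsilon)$ indeed passes Theorem 3's local criterion.

```python
# Illustration of Proposition 2 in the finite-dimensional analogue:
# omega_bar = -xbar + T0^{-1}(ybar + eps) gives an (eps, W_phi)-subgradient.
def h(t):
    return t if t < 0 else t / 2.0

def f(x):
    return (x[0] + h(x[2]), x[1] + h(x[2]))

def phi(omega, x):
    if x[2] + omega[2] >= 0:
        return (x[0] + omega[0], x[1] + omega[1])
    return None  # -infinity

xbar = (0.4, -1.2, 0.8)
ybar = (2.0, -0.7)
eps = (0.5, 0.1)   # an element of K_Y \ {0}

# T0^{-1}(ybar + eps) = (ybar1 + eps1, ybar2 + eps2, 0)
omega_bar = (-xbar[0] + ybar[0] + eps[0],
             -xbar[1] + ybar[1] + eps[1],
             -xbar[2])

# Theorem 3's criterion: phi_{omega_bar}(xbar) >= f(xbar) - f(-omega_bar) - eps
lhs = phi(omega_bar, xbar)
fm = f(tuple(-w for w in omega_bar))
rhs = tuple(fb - fmi - e for fb, fmi, e in zip(f(xbar), fm, eps))
assert all(l >= r - 1e-12 for l, r in zip(lhs, rhs))
```

Here `lhs` evaluates to $\bar{y} + \epsilon$ and `rhs` to $\bar{y}$, exactly as in the proof.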

4. Optimality for DC-Type Vector Optimization

Consider the following unconstrained vector optimization problem:
$$\max \{ f(x) - g(x) : x \in X \}, \tag{3}$$
where $f: X \to Y$, $g: X \to \bar{Y}$, and f is vector topical w.r.t. $T_0$. The solution concepts we consider here are the efficient and weakly efficient solutions w.r.t. the order relation $\preceq_{K_Y}$. We shall build a sufficient condition to identify the efficient solutions of (3) by virtue of the subdifferential, for which we need the following result.
Proposition 3. 
Let $f: X \to \bar{Y}$ and $\bar{x} \in X$. If $f(\bar{x}) \in Y$ and $\partial^{\bar{\epsilon}}_{W_\varphi} f(\bar{x}) \neq \emptyset$ for some $\bar{\epsilon} \in K_Y$, then
$$\operatorname{supp}(f, W_\varphi) = \bigcup_{\epsilon \in K_Y} \{\varphi_{\bar{\omega}} \in W_\varphi : \bar{\omega} = \omega - T_0^{-1}(\varphi_\omega(\bar{x}) - f(\bar{x}) + \epsilon),\ \varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})\}.$$
Proof. 
Note that $\partial^{\bar{\epsilon}}_{W_\varphi} f(\bar{x}) \neq \emptyset$ for some $\bar{\epsilon} \in K_Y$ guarantees that the union on the right-hand side of the equation above is nonempty. Picking arbitrary $\epsilon \in K_Y$ and $\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})$, then
$$\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}) + \epsilon, \quad \forall x \in X,$$
i.e.,
$$\varphi_\omega(x) - \varphi_\omega(\bar{x}) + f(\bar{x}) - \epsilon \preceq_{K_Y} f(x), \quad \forall x \in X,$$
which, according to Lemma 1, implies $\varphi_{\bar{\omega}} \in \operatorname{supp}(f, W_\varphi)$, where $\bar{\omega} = \omega - T_0^{-1}(\varphi_\omega(\bar{x}) - f(\bar{x}) + \epsilon)$.
Conversely, suppose $\varphi_\omega \in \operatorname{supp}(f, W_\varphi)$, which means $\varphi_\omega(x) \preceq_{K_Y} f(x)$ for all $x \in X$. If we set $\epsilon = f(\bar{x}) - \varphi_\omega(\bar{x})$, then $\epsilon \in K_Y$ and
$$\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}) + (f(\bar{x}) - \varphi_\omega(\bar{x})) = f(x) - f(\bar{x}) + \epsilon,$$
for all $x \in X$. Hence, $\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})$ and $\omega = \omega - T_0^{-1}(\varphi_\omega(\bar{x}) - f(\bar{x}) + \epsilon)$. □
Remark 1. 
For some $1_X \in \operatorname{int} K_X$, if we set $Y = \mathbb{R}$, $K_Y = \mathbb{R}_+$, and $X_0 = \operatorname{span}\{1_X\}$, and define $T_0: X_0 \to Y$ by $T_0(\lambda 1_X) = \lambda$ for all $\lambda \in \mathbb{R}$, then $T_0(K_X \cap X_0) = T_0(\{\lambda 1_X : \lambda \in \mathbb{R}_+\}) = \mathbb{R}_+ = K_Y$. In such a case,
$$\varphi_\omega(x) = \sup \{t \in \mathbb{R} : t 1_X \preceq_{K_X} x + \omega\},$$
and the concept of vector topical w.r.t. $T_0$, together with the concepts of $W_\varphi$-subdifferential and $(\epsilon, W_\varphi)$-subdifferential, degenerates to the case of classical scalar topical functions. The characterization of the support set given in Proposition 3 becomes
$$\operatorname{supp}(f, W_\varphi) = \bigcup_{\epsilon \in \mathbb{R}_+} \{\varphi_{\bar{\omega}} \in W_\varphi : \bar{\omega} = \omega - (\varphi_\omega(\bar{x}) - f(\bar{x}) + \epsilon) 1_X,\ \varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})\},$$
which is also consistent with the classical one.
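In the scalar case of Remark 1, when $X = \mathbb{R}^n$ with $K_X = \mathbb{R}^n_+$ and $1_X = (1, \ldots, 1)$ (a standard concretization we add for illustration), the elementary functions take the explicit min-type form, and the topical properties can be verified directly:

```python
# Scalar case of Remark 1 with X = R^3, K_X = R^3_+, 1_X = (1,1,1):
# phi_omega(x) = sup{t : t*1 <= x + omega} = min_i (x_i + omega_i).
def phi(omega, x):
    return min(xi + wi for xi, wi in zip(x, omega))

omega = (1.0, 0.2, -0.4)
x = (0.3, 1.5, 0.9)

# phi_omega is topical: phi_omega(x + lam*1) = phi_omega(x) + lam ...
lam = 0.7
assert abs(phi(omega, tuple(xi + lam for xi in x)) - (phi(omega, x) + lam)) < 1e-12
# ... and phi_omega(-omega) = 0, as in Lemma 1(i)
assert phi(omega, tuple(-wi for wi in omega)) == 0.0
```

These min-type functions are exactly the classical elementary family used in the scalar topical literature.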
Making use of the difference structure of the DC-type optimization, together with Proposition 3, we can obtain the following optimality condition for (3).
Theorem 4. 
Let $f: X \to Y$, $g: X \to \bar{Y}$, with f vector topical w.r.t. $T_0$, and let $\bar{x} \in \{x \in X : g(x) \in Y\}$. Then, $\bar{x}$ is an efficient solution of (3) if
$$\partial^{\epsilon}_{W_\varphi} f(\bar{x}) \subset \partial^{\epsilon}_{W_\varphi} g(\bar{x}), \quad \forall \epsilon \in K_Y.$$
Proof. 
Setting $\bar{f}(x) := f(x) - f(\bar{x})$ and $\bar{g}(x) := g(x) - g(\bar{x})$, then $\bar{f}$ is vector topical w.r.t. $T_0$ since f is vector topical w.r.t. $T_0$. As $\partial^{\epsilon}_{W_\varphi} f(\bar{x}) \subset \partial^{\epsilon}_{W_\varphi} g(\bar{x})$ for all $\epsilon \in K_Y$, it is not hard to observe that $\partial^{\epsilon}_{W_\varphi} \bar{f}(\bar{x}) \subset \partial^{\epsilon}_{W_\varphi} \bar{g}(\bar{x})$ for all $\epsilon \in K_Y$ as well. Proposition 2 guarantees that $\partial^{\epsilon}_{W_\varphi} \bar{f}(\bar{x}) \neq \emptyset$ for every $\epsilon \in K_Y$, which indicates $\partial^{\epsilon}_{W_\varphi} \bar{g}(\bar{x}) \neq \emptyset$ for every $\epsilon \in K_Y$ as well. Then, according to Proposition 3, $\operatorname{supp}(\bar{f}, W_\varphi) \subset \operatorname{supp}(\bar{g}, W_\varphi)$.
Assuming that $\bar{x}$ is not an efficient solution of (3), then there exists some $\hat{x} \in X$ such that
$$f(\hat{x}) - g(\hat{x}) \in f(\bar{x}) - g(\bar{x}) + K_Y \setminus \{0_Y\},$$
i.e., $\bar{f}(\hat{x}) - \bar{g}(\hat{x}) \in K_Y \setminus \{0_Y\}$. Considering the element $\hat{\omega} = -\hat{x} + T_0^{-1}(f(\hat{x}) - f(\bar{x}))$, we have $\bar{f}(-\hat{\omega}) = 0_Y$. It follows from Proposition 1 that $\varphi_{\hat{\omega}} \in \operatorname{supp}(\bar{f}, W_\varphi)$ and, therefore, $\varphi_{\hat{\omega}} \in \operatorname{supp}(\bar{g}, W_\varphi)$. By Lemma 1, this implies
$$\varphi_{\hat{\omega}}(\hat{x}) = \varphi_{\hat{x}}(-\hat{x} + T_0^{-1}(f(\hat{x}) - f(\bar{x}))) = \varphi_{\hat{x}}(-\hat{x}) + f(\hat{x}) - f(\bar{x}) = \bar{f}(\hat{x}) \preceq_{K_Y} \bar{g}(\hat{x}),$$
which is a contradiction. □
We give an example to illustrate this result.
Example 1. 
Let $(X, K_X) = (\ell^2, \ell^2_+)$, with $\ell^2 = \{(x_1, x_2, x_3, \ldots) : x_n \in \mathbb{R},\ \sum_{n=1}^{\infty} |x_n|^2 < \infty\}$ and $\ell^2_+ = \{(x_1, x_2, x_3, \ldots) : x_n \in \mathbb{R}_+,\ \sum_{n=1}^{\infty} |x_n|^2 < \infty\}$, and let $X_0 = \{(x_1, x_2, 0, 0, \ldots) : x_1, x_2 \in \mathbb{R}\}$. Considering the extended partially ordered space $(\bar{\mathbb{R}}^2, \mathbb{R}^2_+)$, set $\mathcal{A} = \{A \subset \bar{\mathbb{R}}^2 : \bar{A} = A\}$, where
$$\bar{A} = \begin{cases} \operatorname{WMax}(\operatorname{Cl}_- A), & \text{if } \operatorname{Cl}_- A \neq \emptyset,\ \operatorname{Cl}_- A \neq \mathbb{R}^2, \\ \{-\infty\}, & \text{if } \operatorname{Cl}_- A = \emptyset, \\ \{+\infty\}, & \text{if } \operatorname{Cl}_- A = \mathbb{R}^2, \end{cases}$$
$$\operatorname{Cl}_- A = \begin{cases} \mathbb{R}^2, & \text{if } +\infty \in A, \\ \emptyset, & \text{if } A = \{-\infty\}, \\ \operatorname{cl}(A \setminus \{-\infty\} - \mathbb{R}^2_+), & \text{otherwise}, \end{cases}$$
and $\operatorname{WMax}(\operatorname{Cl}_- A)$ denotes the set of all weakly maximal elements of $\operatorname{Cl}_- A$. Setting $Y = \{(y_1, y_2) + \operatorname{bd} \mathbb{R}^2_+ : y_1, y_2 \in \mathbb{R}\}$, then $Y \subset \mathcal{A}$. Define the operations as well as an order relation $\preceq_{K_Y}$ on Y by
$$A_1 \oplus A_2 := \overline{A_1 + A_2}, \quad \alpha \odot A_1 := \overline{\alpha \cdot A_1}, \quad A_1 \ominus A_2 := A_1 \oplus ((-1) \odot A_2),$$
$$A_1 \preceq_{K_Y} A_2 :\iff \operatorname{Cl}_- A_1 \subset \operatorname{Cl}_- A_2,$$
for all $A_1, A_2 \in Y$ and $\alpha \in \mathbb{R}$; then $(Y, \preceq_{K_Y})$ forms a complete lattice. Let $K_Y = \{A \in Y : A \succeq_{K_Y} \operatorname{bd} \mathbb{R}^2_+\}$ and let $T_0 \in L(X_0, Y)$ be defined by $T_0(x) = (x_1, x_2) + \operatorname{bd} \mathbb{R}^2_+$ for all $x = (x_1, x_2, 0, 0, \ldots) \in X_0$. It is easy to verify that $T_0: X_0 \to Y$ is a bijection satisfying $T_0(K_X \cap X_0) = K_Y$, and
$$\varphi_\omega(x) = \begin{cases} (x_1 + \omega_1, x_2 + \omega_2) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_i + \omega_i \geq 0 \text{ for all } i > 2, \\ -\infty, & \text{otherwise}, \end{cases}$$
for all $x = (x_1, x_2, x_3, \ldots),\ \omega = (\omega_1, \omega_2, \omega_3, \ldots) \in X$.
Define $f: X \to Y$ and $g: X \to Y$ as
$$f(x) = f(x_1, x_2, x_3, \ldots) = \begin{cases} (x_1 + x_3^3, x_2 + x_3^3) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_3 < 0, \\ (x_1 + \tfrac{1}{2} x_3, x_2 + \tfrac{1}{2} x_3) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_3 \geq 0, \end{cases}$$
and
$$g(x) = g(x_1, x_2, x_3, \ldots) = \begin{cases} (x_1 + |x_3|, x_2 + |x_3|) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_1 < 0, \\ (x_1 + |x_3|, 2 x_2 + |x_3|) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_1 \geq 0, \end{cases}$$
respectively. Then, it is not hard to observe that $(\bar{x}, \bar{y}) = (0_X, \operatorname{bd} \mathbb{R}^2_+)$ is an efficient solution of (DCP1). Suppose that $\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} f(\bar{x})$, i.e., $\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} f(x) - f(\bar{x}) + \epsilon$ for all $x \in X$. That implies
$$\varphi_\omega(x) - \varphi_\omega(\bar{x}) \preceq_{K_Y} \begin{cases} (x_1 + x_3^3, x_2 + x_3^3) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_3 < 0 \\ (x_1 + \tfrac{1}{2} x_3, x_2 + \tfrac{1}{2} x_3) + \operatorname{bd} \mathbb{R}^2_+, & \text{if } x_3 \geq 0 \end{cases} \oplus \epsilon \preceq_{K_Y} (x_1 + |x_3|, x_2 + |x_3|) + \operatorname{bd} \mathbb{R}^2_+ \oplus \epsilon \preceq_{K_Y} g(x) - g(\bar{x}) + \epsilon.$$
Therefore, $\varphi_\omega \in \partial^{\epsilon}_{W_\varphi} g(\bar{x})$. Then, one has $\partial^{\epsilon}_{W_\varphi} f(\bar{x}) \subset \partial^{\epsilon}_{W_\varphi} g(\bar{x})$ for all $\epsilon \in K_Y$, which is consistent with Theorem 4.
Next, we consider a constrained DC-type vector problem:
max { f ( x ) g ( x ) x S } ,
where f : X Y ¯ and g : X Y ¯ are both vector topical w.r.t. T 0 , and S is a nonempty subset of X. In [23], this model was considered without the complete lattice, and a dual model which was defined on a space consisting of set-valued maps was studied. Here, with the aid of the complete lattice structure, we are able to construct a dual problem, for which the variables are vector maps. We give the additional assumptions that
S dom f S dom g = and { x S f ( x ) = { } } { x S g ( x ) = { } } = ,
to avoid the situation of ( + ) + ( ) . The dual problem is also a DC-type optimization, which is constructed as follows:
max { g c ( φ ) ( φ ω ) f c ( φ ) ( φ ω ) φ ω S W φ } ,
where S W φ = { φ ω W φ ω S } . The dual model (4) offers deep insights into the structure of the primal problem as well as more ways to deal with the primal problem, from the dual perspective. The dual variables of problem (4) are functions, which are often more tractable than those of the primal problem. This often endows the dual problem with a richer mathematical structure. Moreover, the duality theory between (4) and (5) could also be helpful for developing some primal–dual algorithm.
Proposition 4. 
Assume that $f(x_0) \neq -\infty$ for some $x_0 \in \operatorname{dom} g \cap S$. Then $\operatorname{Max}(\mathrm{DCP2}) \neq \{-\infty\}$, $\operatorname{Max}(\mathrm{DCD2}) \neq \{-\infty\}$, and we have
$$\operatorname{Max}(\mathrm{DCP2}) = \operatorname{Max}(\mathrm{DCD2}), \qquad \operatorname{WMax}(\mathrm{DCP2}) = \operatorname{WMax}(\mathrm{DCD2}).$$
Proof. 
First, note that $\operatorname{Max}(\mathrm{DCP2}) \neq \{-\infty\}$ and $\operatorname{Max}(\mathrm{DCD2}) \neq \{-\infty\}$ are guaranteed by $f(x_0) \neq -\infty$ for some $x_0 \in \operatorname{dom} g \cap S$. Then, picking an $x \in S$ and applying Theorem 2, it can be deduced that
$$f(x) - g(x) = -[-f(-(-x))] - g(-(-x)) = g^{c(\varphi)}(\varphi_{-x}) - f^{c(\varphi)}(\varphi_{-x}),$$
and $\varphi_{-x} \in S_{W_\varphi}$. Conversely, for any $\varphi_\omega \in S_{W_\varphi}$, it follows from Theorem 2 that
$$g^{c(\varphi)}(\varphi_\omega) - f^{c(\varphi)}(\varphi_\omega) = -g(-\omega) - (-f(-\omega)) = f(-\omega) - g(-\omega),$$
while $-\omega \in S$. Therefore,
$$\{f(x) - g(x) : x \in S\} = \{g^{c(\varphi)}(\varphi_\omega) - f^{c(\varphi)}(\varphi_\omega) : \varphi_\omega \in S_{W_\varphi}\},$$
which implies $\operatorname{Max}(\mathrm{DCP2}) = \operatorname{Max}(\mathrm{DCD2})$ and $\operatorname{WMax}(\mathrm{DCP2}) = \operatorname{WMax}(\mathrm{DCD2})$. □
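The coincidence of the primal and dual value sets can be illustrated in the scalar case of Remark 1 (our own construction: $X = \mathbb{R}^2$ with $1_X = (1,1)$, two min-type topical functions `f` and `g` of our choosing, and a small finite feasible set `S`), computing the dual objective via the conjugate formula of Theorem 2:

```python
# Scalar illustration of Proposition 4: primal and dual value sets coincide.
# f and g are min-type (hence topical); Theorem 2 gives f^c(phi_omega) = -f(-omega).
def f(x):
    return min(x[0] + 1.0, x[1])

def g(x):
    return min(x[0], x[1] + 2.0)

def f_conj(omega):          # f^c(phi_omega) = -f(-omega)
    return -f(tuple(-w for w in omega))

def g_conj(omega):          # g^c(phi_omega) = -g(-omega)
    return -g(tuple(-w for w in omega))

S = [(0.0, 0.0), (1.0, -1.0), (-0.5, 2.0)]

primal = sorted(f(x) - g(x) for x in S)
# dual feasible points are phi_omega with -omega in S, i.e. omega = -x:
dual = sorted(g_conj(tuple(-xi for xi in x)) - f_conj(tuple(-xi for xi in x))
              for x in S)
assert primal == dual
```

As the proof shows, each feasible x of the primal corresponds to the feasible $\varphi_{-x}$ of the dual with the same objective value, so the two value sets are literally equal.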
Theorem 5. 
Suppose $f(x_0) \neq -\infty$ for some $x_0 \in \operatorname{dom} g \cap S$. Then, for the efficient solutions of (4) and (5), we have the following assertions.
(i) 
If $\bar{x}$ is an efficient solution of (4), then $\varphi_{-\bar{x}+x}$ is an efficient solution of (5) for any $x \in X_0$, whenever $\bar{x} - x \in S$.
(ii) 
If $\varphi_{\bar{\omega}}$ is an efficient solution of (5), then $-\bar{\omega} + \omega$ is an efficient solution of (4) for any $\varphi_\omega \in \Phi(X_0)$ (see (2)), whenever $\varphi_{\bar{\omega} - \omega} \in S_{W_\varphi}$.
Proof. 
(i) If $\bar{x}$ is an efficient solution of (4), then, according to Proposition 4,
$$f(\bar{x}) - g(\bar{x}) \in \operatorname{Max}(\mathrm{DCP2}) = \operatorname{Max}(\mathrm{DCD2}).$$
Picking some $x \in X_0$ with $\bar{x} - x \in S$, $\varphi_{-\bar{x}+x}$ is feasible for (5). Applying Theorem 2,
$$g^{c(\varphi)}(\varphi_{-\bar{x}+x}) - f^{c(\varphi)}(\varphi_{-\bar{x}+x}) = -g(\bar{x} - x) + f(\bar{x} - x) = -g(\bar{x}) + T_0(x) + f(\bar{x}) - T_0(x) = f(\bar{x}) - g(\bar{x}),$$
which shows $g^{c(\varphi)}(\varphi_{-\bar{x}+x}) - f^{c(\varphi)}(\varphi_{-\bar{x}+x}) \in \operatorname{Max}(\mathrm{DCD2})$, and therefore, $\varphi_{-\bar{x}+x}$ is an efficient solution of (5).
(ii) If $\varphi_{\bar{\omega}}$ is an efficient solution of (5), then it follows from Proposition 4 that
$$g^{c(\varphi)}(\varphi_{\bar{\omega}}) - f^{c(\varphi)}(\varphi_{\bar{\omega}}) \in \operatorname{Max}(\mathrm{DCD2}) = \operatorname{Max}(\mathrm{DCP2}).$$
For any $\varphi_\omega \in \Phi(X_0)$ with $\varphi_{\bar{\omega} - \omega} \in S_{W_\varphi}$, $-\bar{\omega} + \omega$ is feasible for (4). Applying Theorem 2,
$$f(-\bar{\omega} + \omega) - g(-\bar{\omega} + \omega) = f(-(\bar{\omega} - \omega)) - g(-(\bar{\omega} - \omega)) = f(-\bar{\omega}) + T_0(\omega) - g(-\bar{\omega}) - T_0(\omega) = g^{c(\varphi)}(\varphi_{\bar{\omega}}) - f^{c(\varphi)}(\varphi_{\bar{\omega}}),$$
implying $f(-\bar{\omega} + \omega) - g(-\bar{\omega} + \omega) \in \operatorname{Max}(\mathrm{DCP2})$. Hence, $-\bar{\omega} + \omega$ is an efficient solution of (4). □
With a similar argument, we can also get the following result for weakly efficient solutions.
Theorem 6. 
Suppose $f(x_0) \neq -\infty$ for some $x_0 \in \operatorname{dom} g \cap S$. Then, for the weakly efficient solutions of (4) and (5), we have the following assertions.
(i) 
If $\bar{x}$ is a weakly efficient solution of (4), then $\varphi_{-\bar{x}+x}$ is a weakly efficient solution of (5) for any $x \in X_0$, whenever $\bar{x} - x \in S$.
(ii) 
If $\varphi_{\bar{\omega}}$ is a weakly efficient solution of (5), then $-\bar{\omega} + \omega$ is a weakly efficient solution of (4) for any $\varphi_\omega \in \Phi(X_0)$, whenever $\varphi_{\bar{\omega} - \omega} \in S_{W_\varphi}$.

5. Conclusions

In this paper, with the help of the complete lattice structure, some strong versions of subdifferentials in the sense of abstract convexity are studied. In particular, for vector topical functions, it has been shown that the subdifferential can be identified by a local condition and that its nonemptiness is easily ensured. These theories are then applied to derive optimality conditions for a DC-type optimization problem. We also establish a conjugate dual problem for the DC-type optimization and prove the zero duality gap as well as strong duality results.
If X and Y are Hilbert spaces, the additional information and structure allow a coordinate representation for the elements, which could lead to more concrete formulations for the results as well as more specific applications. Therefore, extending the current results to the Hilbert space setting and exploiting these features will be a central focus of our future work.

Author Contributions

Conceptualization, R.G. and C.Y.; methodology, C.Y.; validation, R.G. and C.Y.; formal analysis, R.G. and C.Y.; investigation, R.G. and C.Y.; resources, R.G. and C.Y.; writing—original draft preparation, R.G.; writing—review and editing, C.Y.; supervision, C.Y.; funding acquisition, C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (grant numbers 12201160, 12561056, 12361105 and 12071379) and the Hainan Provincial Natural Science Foundation of China (grant numbers: 124RC441 and 124QN176).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  2. Rubinov, A.M. Abstract Convexity and Global Optimization; Kluwer Academic Publishers: Boston, MA, USA, 2000. [Google Scholar]
  3. Singer, I. Abstract Convex Analysis; Wiley-Interscience: New York, NY, USA, 1997. [Google Scholar]
  4. Bednarczuk, E.M.; Syga, M. On duality for nonconvex minimization problems within the framework of abstract convexity. Optimization 2022, 71, 949–971. [Google Scholar] [CrossRef]
  5. Levin, V.L. Abstract convexity in measure theory and in convex analysis. J. Math. Sci. 2003, 116, 3432–3467. [Google Scholar] [CrossRef]
  6. Lorenz, D.; Bednarczuk, E.; Tran, T.H. Proximal algorithms for a class of abstract convex functions. Set-Valued Var. Anal. 2025, 33, 5. [Google Scholar] [CrossRef]
  7. Mohebi, H. Abstract convexity of radiant functions with applications. J. Glob. Optim. 2013, 55, 521–538. [Google Scholar] [CrossRef]
  8. Mohebi, H.; Rubinov, A.M. Best approximation by downward sets with applications. Anal. Theory Appl. 2006, 22, 20–40. [Google Scholar] [CrossRef]
  9. Gunawardena, J.; Keane, M. On the Existence of Cycle Times for Some Nonexpansive Maps; Technical Report HPL-BRIMS-95-003; Hewlett-Packard Labs: Palo Alto, CA, USA, 1995. [Google Scholar]
  10. Tammer, C.; Weidner, P. Scalarization and Separation by Translation Invariant Functions with Applications in Optimization, Nonlinear Functional Analysis, and Mathematical Economics; Springer: Cham, Switzerland, 2020. [Google Scholar]
  11. Gunawardena, J. An Introduction to Idempotency; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  12. Martinez-Legaz, J.E.; Rubinov, A.M.; Singer, I. Downward sets and their separation and approximation properties. J. Glob. Optim. 2002, 23, 111–137. [Google Scholar] [CrossRef]
  13. Rubinov, A.M.; Shveidel, A.P. Some classes of abstract convex functions. In Optimization and Control with Applications; Qi, L., Teo, K., Yang, X., Eds.; Applied Optimization; Springer: Boston, MA, USA, 2005; Volume 96, pp. 141–154. [Google Scholar]
  14. Doagooei, A.R.; Mohebi, H. Optimization of the difference of topical functions. J. Glob. Optim. 2013, 57, 1349–1358. [Google Scholar] [CrossRef]
  15. Mohebi, H. Topical functions and their properties in a class of ordered Banach spaces. In Continuous Optimization; Jeyakumar, V., Rubinov, A., Eds.; Applied Optimization; Springer: Boston, MA, USA, 2005; Volume 99. [Google Scholar]
  16. Mohebi, H.; Barsam, H. Some results on abstract convexity of functions. Math. Slovaca 2018, 68, 1001–1008. [Google Scholar] [CrossRef]
  17. Mohebi, H.; Samet, M. Abstract convexity of topical functions. J. Glob. Optim. 2014, 58, 365–375. [Google Scholar] [CrossRef]
  18. Yao, C.L.; Li, S.J. Vector topical function, abstract convexity and image space analysis. J. Optim. Theory Appl. 2018, 177, 717–742. [Google Scholar] [CrossRef]
  19. Yao, C.L.; Tammer, C. Weak separation functions constructed by Gerstewitz and topical functions with applications in conjugate duality. J. Nonlinear Var. Anal. 2023, 7, 859–896. [Google Scholar] [CrossRef]
  20. Yao, C.L.; Tammer, C.; Günther, C. Image regularity conditions based on nonconvex separation with applications. J. Nonlinear Var. Anal. 2025, 9, 481–498. [Google Scholar] [CrossRef]
  21. Yao, C.L.; Wang, S.Q.; Tammer, C. Construction of vector-valued weak separation functions with applications to conjugate duality in vector optimization. Vietnam J. Math. 2025, 53, 859–892. [Google Scholar] [CrossRef]
  22. Tanino, T.; Sawaragi, Y. Conjugate maps and duality in multiobjective optimization. J. Optim. Theory Appl. 1980, 31, 473–499. [Google Scholar] [CrossRef]
  23. Yao, C.L.; Chen, J.W. Vector conjugation and subdifferential of vector topical function in complete lattice. Optim. Lett. 2021, 15, 1241–1261. [Google Scholar] [CrossRef]
  24. Kermani, V.M.; Doagooei, A.R. Vector topical functions and Farkas type theorems with applications. Optim. Lett. 2015, 9, 359–374. [Google Scholar] [CrossRef]
  25. Khan, A.A.; Tammer, C.; Zălinescu, C. Set-Valued Optimization: An Introduction with Applications; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  26. Löhne, A. Vector Optimization with Infimum and Supremum; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
