1. Introduction
Multiobjective optimization problems (MOPs) involve the simultaneous optimization of two or more conflicting objective functions. Vilfredo Pareto [1] introduced the concept of Pareto optimality in the context of economic systems. A solution is called Pareto optimal or efficient if none of the objective functions can be improved without deteriorating some of the other objective values [2]. MOPs arise in scenarios where trade-offs are required, such as balancing cost and quality in business operations or improving efficiency while reducing environmental impact in engineering [3,4]. As a consequence, various techniques and algorithms have been proposed to solve MOPs in different frameworks [5,6,7]. For a more detailed discussion on MOPs, we refer to [2,8,9] and the references cited therein.
In many real-world problems arising in engineering, science, and related fields, we often encounter data that are imprecise or uncertain. This uncertainty can arise from various factors, such as unknown future developments, measurement or manufacturing errors, or incomplete information during model development [10,11]. In such contexts, it is common to model uncertain parameters or objective functions using intervals. Moreover, if the uncertainties involved in the objective functions of MOPs are represented as intervals, the resulting problems are referred to as interval-valued multiobjective optimization problems (IVMOPs). IVMOPs frequently arise in diverse fields such as transportation, economics, and business administration [12,13,14,15,16].
It is well known that the conjugate direction method is a powerful optimization technique that is widely employed for solving systems of linear equations and optimization problems [17,18]. Its computational strength has made it valuable in tackling a variety of real-world problems, including electromagnetic scattering [19], inverse engineering [20], and geophysical inversion [21].
2. Review of Related Works
The foundational work on interval-valued optimization problems (IVOPs) is attributed to Ishibuchi and Tanaka [22], who investigated IVOPs by transforming them into corresponding deterministic MOPs. Wu [23] derived optimality conditions for a class of constrained IVOPs, employing the notion of Hukuhara differentiability and assuming convexity hypotheses on the objective and constraint functions. Moreover, Wu [24] developed Karush–Kuhn–Tucker-type optimality conditions for IVOPs and derived strong duality theorems that connect the primal problems with their associated dual problems. Bhurjee and Panda [25] explored IVOPs by defining interval-valued functions in parametric forms. More recently, Roy et al. [26] proposed a gradient-based descent line search technique employing the notion of generalized Hukuhara (gH) differentiability to solve IVOPs.
Kumar and Bhurjee [27] studied IVMOPs by transforming them into their corresponding deterministic MOPs and establishing the relationships between the solutions of IVMOPs and MOPs. Upadhyay et al. [28] introduced Newton's method for IVMOPs and established the quadratic convergence of the sequence generated by the method under suitable assumptions. Subsequently, Upadhyay et al. [29] developed quasi-Newton methods for IVMOPs and demonstrated their efficacy in solving both convex and non-convex IVMOPs. For a more comprehensive and updated survey on IVMOPs, we refer to [30,31,32,33] and the references cited therein.
The conjugate direction method was first introduced by Hestenes and Stiefel [17], who developed the conjugate gradient method to solve systems of linear equations. Subsequently, Pérez and Prudente [18] introduced an HS-type conjugate direction algorithm for MOPs by employing an inexact line search. Wang et al. [34] introduced an HS-type conjugate direction algorithm for MOPs without employing line search techniques and established the global convergence of the proposed method under suitable conditions. Recently, based on the memoryless Broyden–Fletcher–Goldfarb–Shanno update, Khoshsimaye-Bargard and Ashrafi [35] presented a convex hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate parameters. For a more detailed discussion on the conjugate direction method, we refer to [18,36] and the references cited therein. From the above discussion, it is evident that HS-type conjugate direction methods have been developed for single-objective problems as well as MOPs. However, no work in the literature has explored an HS-type conjugate direction method for IVMOPs. The aim of this article is to fill this research gap by developing an HS-type conjugate direction method for a class of IVMOPs.
Motivated by the works of [17,18,34], in this paper we investigate a class of IVMOPs and define the HS-type direction for the objective function of IVMOPs. A descent property of the HS-type direction is established at noncritical points. To determine an appropriate step size, we employ an Armijo-like line search. Moreover, an HS-type conjugate direction algorithm for IVMOPs is presented, and the convergence of this algorithm is established. Furthermore, under appropriate assumptions, we deduce that the proposed algorithm exhibits a linear order of convergence. In addition, the order of complexity of the proposed algorithm is investigated. Finally, the efficiency of the proposed method is demonstrated by solving various numerical problems in MATLAB.
The primary contribution and novel aspects of the present article are as follows:
The rest of the paper is structured as follows. In Section 2, we review the related works. In Section 3, we discuss some mathematical preliminaries that will be employed in the sequel. Section 4 presents an HS-type conjugate direction algorithm for IVMOPs. In Section 5, we establish the convergence of the sequence generated by the proposed algorithm and deduce that the sequence exhibits linear order convergence under suitable assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by the proposed algorithm. In Section 6, we demonstrate the efficiency of the proposed algorithm by solving several numerical examples via MATLAB. Finally, in Section 7, we provide our conclusions as well as future research directions.
3. Preliminaries
Throughout this article, the symbol $\mathbb{N}$ denotes the set of all natural numbers. For $n \in \mathbb{N}$, the symbol $\mathbb{R}^n$ refers to the $n$-dimensional Euclidean space. The symbol $\mathbb{R}_{-}$ refers to the collection of all negative real numbers. The symbols Ø and $I_n$ are employed to denote the empty set and the identity matrix of order $n$, respectively. For two matrices $A, B$, the notation $A \succeq B$ is employed to denote that $A - B$ is positive semidefinite. For any $\bar{x} \in \mathbb{R}^n$ and $r > 0$, the symbols $B(\bar{x}, r)$ and $\overline{B}(\bar{x}, r)$ denote the open and closed balls of radius $r$ centered at $\bar{x}$, respectively. For any non-empty set $\Gamma \subseteq \mathbb{R}$ and $m \in \mathbb{N}$, the notation $\Gamma^m$ represents the Cartesian product defined as
$$\Gamma^m := \underbrace{\Gamma \times \Gamma \times \cdots \times \Gamma}_{m \text{ times}}.$$
Let . Then, the symbol is defined as
For , the symbol is defined as
Let . Then, the notations and are used to represent the following sets:
Let . The following notations are employed throughout this article:
For a function $f: \mathbb{R}^n \to \mathbb{R}$, $\bar{x} \in \mathbb{R}^n$, and $j \in \{1, 2, \ldots, n\}$, we define:
$$\frac{\partial f}{\partial x_j^{+}}(\bar{x}) := \lim_{h \to 0^{+}} \frac{f(\bar{x} + h e_j) - f(\bar{x})}{h} \qquad (1)$$
and
$$\frac{\partial f}{\partial x_j^{-}}(\bar{x}) := \lim_{h \to 0^{-}} \frac{f(\bar{x} + h e_j) - f(\bar{x})}{h}, \qquad (2)$$
which denote the one-sided right and left $j$-th partial derivatives of $f$ at the point $\bar{x}$, respectively, assuming the limits defined in (1) and (2) are well-defined. If for every $\bar{x} \in \mathbb{R}^n$ and $j \in \{1, 2, \ldots, n\}$, $\frac{\partial f}{\partial x_j^{+}}(\bar{x})$ and $\frac{\partial f}{\partial x_j^{-}}(\bar{x})$ exist, then we define:
The symbols $I(\mathbb{R})$ and $I(\mathbb{R})^m$ are used to denote the set of all closed and bounded intervals of $\mathbb{R}$ and the set of all $m$-tuples of elements of $I(\mathbb{R})$, respectively, that is,
$$I(\mathbb{R}) := \{[\underline{a}, \overline{a}] : \underline{a}, \overline{a} \in \mathbb{R},\ \underline{a} \leq \overline{a}\}.$$
Let $\mathbb{A} = [\underline{a}, \overline{a}] \in I(\mathbb{R})$. The interval $\mathbb{A}$ is referred to as a degenerate interval if and only if $\underline{a} = \overline{a}$.
Let $\mathbb{A} = [\underline{a}, \overline{a}]$, $\mathbb{B} = [\underline{b}, \overline{b}] \in I(\mathbb{R})$, and $\lambda \in \mathbb{R}$. Corresponding to $\mathbb{A}$ and $\mathbb{B}$, we define the following algebraic operations [28]:
$$\mathbb{A} \oplus \mathbb{B} := [\underline{a} + \underline{b},\ \overline{a} + \overline{b}], \qquad \lambda \odot \mathbb{A} := \begin{cases} [\lambda \underline{a},\ \lambda \overline{a}], & \lambda \geq 0, \\ [\lambda \overline{a},\ \lambda \underline{a}], & \lambda < 0. \end{cases}$$
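To make the interval arithmetic above concrete, the following minimal MATLAB sketch implements addition and scalar multiplication, assuming an interval $[\underline{a}, \overline{a}]$ is stored as a two-element row vector [lower, upper]; this encoding and the function names are illustrative conventions, not notation from the cited works.

```matlab
% Interval stored as a two-element row vector [lower, upper] (illustrative).
ivAdd   = @(A, B) [A(1) + B(1), A(2) + B(2)];  % interval addition
ivScale = @(lam, A) sort(lam * A);             % endpoints swap if lam < 0

A = [1, 3];  B = [-2, 4];
ivAdd(A, B)      % returns [-1, 7]
ivScale(-2, A)   % returns [-6, -2]
```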
The subsequent definition is from [37].
Definition 1. Consider an arbitrary interval $\mathbb{A} = [\underline{a}, \overline{a}] \in I(\mathbb{R})$. Then, the symbol $\lVert \mathbb{A} \rVert$ represents the norm of $\mathbb{A}$ and is defined as follows:
$$\lVert \mathbb{A} \rVert := \max\{\lvert \underline{a} \rvert, \lvert \overline{a} \rvert\}.$$
For and , we adopt the following notations throughout the article:
Let $\mathbb{A} = [\underline{a}, \overline{a}]$, $\mathbb{B} = [\underline{b}, \overline{b}] \in I(\mathbb{R})$. The ordered relations between $\mathbb{A}$ and $\mathbb{B}$ are described as follows:
$$\mathbb{A} \preceq \mathbb{B} \iff \underline{a} \leq \underline{b} \text{ and } \overline{a} \leq \overline{b}; \qquad \mathbb{A} \prec \mathbb{B} \iff \underline{a} < \underline{b} \text{ and } \overline{a} < \overline{b}.$$
The following definition is from [37].
Definition 2. For arbitrary intervals $\mathbb{A} = [\underline{a}, \overline{a}]$ and $\mathbb{B} = [\underline{b}, \overline{b}]$, the symbol $\mathbb{A} \ominus_{gH} \mathbb{B}$ represents the gH-difference between $\mathbb{A}$ and $\mathbb{B}$ and is defined as follows:
$$\mathbb{A} \ominus_{gH} \mathbb{B} := \big[\min\{\underline{a} - \underline{b},\ \overline{a} - \overline{b}\},\ \max\{\underline{a} - \underline{b},\ \overline{a} - \overline{b}\}\big].$$
The notion of gH-continuity for an interval-valued function is recalled in the subsequent definition [37].
Definition 3. The function $F$ is said to be a gH-continuous function at a point $\bar{x}$ if for any $\epsilon > 0$, there exists some $\delta > 0$ such that for any $x$ satisfying $\lVert x - \bar{x} \rVert < \delta$, the following inequality holds:
$$\lVert F(x) \ominus_{gH} F(\bar{x}) \rVert < \epsilon.$$
In the following definition, we recall the notion of the gH-Lipschitz continuity property for the class of interval-valued functions (for instance, see [38]).
Definition 4. The function $F$ is said to be gH-Lipschitz continuous on $S$ with Lipschitz constant $L > 0$ if, for every $x, y \in S$, the following inequality is satisfied:
$$\lVert F(x) \ominus_{gH} F(y) \rVert \leq L \lVert x - y \rVert.$$
Remark 1. In view of Definition 4, it follows that if $F$ is gH-Lipschitz continuous with Lipschitz constant $L$ on $S$, then the function $f$, defined by:
is also Lipschitz continuous with Lipschitz constant on $S$. The subsequent definition is from Upadhyay et al. [29].
Definition 5. The function $F$ is said to be a convex function if, for all $x, y \in \mathbb{R}^n$ and any $\lambda \in [0, 1]$, the following inequality holds:
$$F(\lambda x + (1 - \lambda) y) \preceq \lambda \odot F(x) \oplus (1 - \lambda) \odot F(y).$$
Moreover, $F$ is said to be locally convex at a point $\bar{x}$ if there exists some neighborhood $N$ of $\bar{x}$ such that the restriction of $F$ to $N$ is convex.
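The gH-difference of Definition 2 and the norm of Definition 1 admit the closed forms recalled above, which the following hedged MATLAB sketch evaluates under the same assumed [lower, upper] encoding used earlier.

```matlab
% gH-difference and norm of intervals under the assumed [lower, upper] encoding.
gHdiff = @(A, B) [min(A - B), max(A - B)];  % min/max of endpointwise differences
ivNorm = @(A) max(abs(A));                  % max{|lower|, |upper|}

A = [1, 3];  B = [0, 5];
D = gHdiff(A, B);   % endpointwise differences are [1, -2], so D = [-2, 1]
r = ivNorm(D);      % r = 2
```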
The subsequent definitions are from [39].
Definition 6. Let $F$ be an interval-valued function, and let $\bar{x} \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$ with $d \neq 0$. The gH-directional derivative of $F$ at $\bar{x}$ in the direction $d$ is defined as follows:
$$F'(\bar{x}; d) := \lim_{t \to 0^{+}} \frac{1}{t} \odot \big(F(\bar{x} + t d) \ominus_{gH} F(\bar{x})\big),$$
provided that the above limit exists.
Definition 7. Let and be defined as follows: Let the functions be defined as follows: The mapping is said to be gH-differentiable at if there exist vectors with and , and error functions , such that , and for all the following hold: and If is gH-differentiable at every element , then is said to be gH-differentiable on .
The proof of the following theorem can be established employing Theorem 5 and Propositions 9 and 11 from [39].
Theorem 1. Let and let be defined as follows: If is gH-differentiable at , then for any , one of the following conditions is fulfilled:
- (i) The gradients and exist, and
- (ii) , , , and exist and satisfy:
The following definition is from [39].
Definition 8. Let $F$ be gH-differentiable at $\bar{x}$. Then, the gH-gradient of $F$ at $\bar{x}$ is defined as follows:
$$\nabla F(\bar{x}) := \big(F'(\bar{x}; e_1), F'(\bar{x}; e_2), \ldots, F'(\bar{x}; e_n)\big)^{T},$$
where $e_j$ denotes the $j$-th canonical direction in $\mathbb{R}^n$. We recall the following proposition from [28].
Proposition 1. Let and let be defined as follows: If the function is $m$-times gH-differentiable at , then the function , defined in Remark 1, is also $m$-times differentiable at .
We define the interval-valued vector function $F: \mathbb{R}^n \to I(\mathbb{R})^p$ as follows:
$$F(x) := \big(F_1(x), F_2(x), \ldots, F_p(x)\big),$$
where $F_i: \mathbb{R}^n \to I(\mathbb{R})$, $i \in \{1, 2, \ldots, p\}$, are interval-valued functions.
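For the numerical sketches that appear later in this paper, such an interval-valued vector function can be encoded in MATLAB as a map returning one [lower, upper] row per component; the example below is an illustrative convention rather than notation from the cited works.

```matlab
% Illustrative encoding of F: R^2 -> I(R)^2; row i of the output is
% [lower_i(x), upper_i(x)], so each row must satisfy lower <= upper.
F = @(x) [ x(1)^2 + x(2)^2,  x(1)^2 + x(2)^2 + 1;       % F_1(x)
           exp(x(1)),        exp(x(1)) + x(2)^2 + 2 ];  % F_2(x)
val = F([1; -1]);   % returns a 2-by-2 matrix of interval endpoints
```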
The following two definitions are from [32].
Definition 9. Let and . Suppose that every component of possesses gH-directional derivatives. Then, is called a critical point of , provided that there does not exist any , satisfying:where . Definition 10. An element is referred to as the descent direction of at a point , provided that some exists, satisfying: Definition 11. Let . The function is said to be continuously gH-differentiable at if every component of is continuously gH-differentiable at .
Remark 2. In view of Definitions 6 and 10, it follows that if is continuously gH-differentiable at and if is a descent direction of at , then
4. HS-Type Conjugate Direction Method for IVMOPs
In this section, we present an HS-type conjugate direction method for IVMOPs. Moreover, we establish the convergence of the sequence generated by this method.
Consider the following IVMOP:
where the functions are defined as . The functions are assumed to be continuously gH-differentiable unless otherwise specified.
The notions of effective and weak effective solutions for IVMOP are recalled in the subsequent definition [28].
Definition 12. A point is said to be an effective solution of the IVMOP if there is no other point such thatSimilarly, a point is said to be a weak effective solution of the IVMOP provided that there is no other point for which: In the rest of the article, we employ to represent the set of all critical points of .
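For the numerical experiments described later, the dominance test implicit in Definition 12 can be checked componentwise; the sketch below is one hedged realization, assuming the strict LU order ($\mathbb{A} \prec \mathbb{B}$ if and only if both endpoints are strictly smaller) and objective values stored as $p$-by-2 matrices of [lower, upper] rows.

```matlab
% Hedged componentwise dominance check (assumes the strict LU order).
luLess = @(A, B) (A(1) < B(1)) && (A(2) < B(2));

% Fy, Fx: p-by-2 matrices; row i holds [lower, upper] of the i-th objective.
dominates = @(Fy, Fx) all(arrayfun(@(i) luLess(Fy(i, :), Fx(i, :)), ...
                                   1:size(Fy, 1)));
```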
Let . In order to determine a descent direction for the objective function of IVMOP, we consider the following scalar optimization problem with interval-valued constraints [32]:
where is a real-valued function. It can be shown that the problem has a unique solution. Any feasible point of the problem is represented as , where and . Let denote the feasible set of . We consider the functions and , which are defined as follows:
From now onwards, for any , the notation will be used to represent the optimal solution of the problem .
Now, for every , we consider a function , defined as follows:
Remark 3. Since is a solution of the problem , for all it follows that:
This implies that the function satisfies the following inequality:
In the subsequent discussions, we utilize the following lemmas from Upadhyay et al. [32].
Lemma 1. Let . If , then is a descent direction at for .
Lemma 2. For , the following properties hold:
- (i) If , then and .
- (ii) If , then .
Remark 4. From Lemma 1, it follows that if , then the optimal solution of yields a descent direction. Furthermore, from Lemma 2, it can be inferred that the value of can be utilized to determine whether or not. Specifically, for any given point , if , then . Otherwise, , and in this case, serves as a descent direction at for .
We recall the following result from Upadhyay et al. [32].
Lemma 3. Let . If the functions () are locally convex at , then is a locally weak effective solution of IVMOP.
To introduce the Hestenes–Stiefel-type direction for IVMOP, we define a function as follows:
where
In the following lemma, we establish the relationship between the critical point of and the function .
Lemma 4. Let be defined in (3), and . Then, is a critical point of if and only if
Proof. Let be a critical point of . Then, by Definition 9, for every there exists such that
Consequently, it follows that , which implies
Conversely, suppose that
Then, for any there exists such that
This further implies that for any ,
Therefore, it follows that is a critical point of . This completes the proof. □
We establish the following lemma, which will be used in the sequel.
Lemma 5. Let and let be the optimal solution of the problem . Then
Proof. Since , we have
From (4), we obtain
Consequently,
Let us define . Then, we have
Therefore, we obtain
This implies that . Since is the optimal solution of the problem , we obtain
Combining (5) and (6), we conclude
This completes the proof. □
Let be fixed and let . Now, we introduce a Hestenes–Stiefel-type direction (HS-type direction) at .
where represents the HS-type direction at the -th step, and for , is defined as follows:
Remark 5. If every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then Equation (7) reduces to the HS-type direction defined for vector-valued functions, as considered by Pérez and Prudente [18]. As a result, the parameter introduced in (8) extends the HS-type direction from MOPs to IVMOPs, which belong to a broader class of optimization problems. Moreover, when , Equation (7) further reduces to the classical HS-type direction for a real-valued function, defined by Hestenes and Stiefel [17].
It can be observed that , defined in (8), becomes undefined when , and the direction defined in Equation (7) may not provide a descent direction. Therefore, to address this issue, we adopt an approach similar to that proposed by Gilbert and Nocedal [40] and Pérez and Prudente [18]. Hence, we define and as follows:
and
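For orientation, the sketch below shows the classical single-objective Hestenes–Stiefel update that (7) and (8) generalize, together with a max{·, 0} safeguard in the spirit of the modification adopted from [18,40]; it is the real-valued special case only, not the interval-valued direction itself.

```matlab
function d = hsDirection(g, gPrev, dPrev)
% Classical Hestenes-Stiefel direction for a smooth f: R^n -> R.
% g, gPrev: current and previous gradients; dPrev: previous direction.
    yk = g - gPrev;                        % gradient difference
    denom = dPrev' * yk;
    if denom == 0
        beta = 0;                          % beta_HS is undefined; fall back
    else
        beta = max((g' * yk) / denom, 0);  % nonnegativity safeguard
    end
    d = -g + beta * dPrev;                 % new search direction
end
```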
In the following lemma, we establish an inequality that relates the directional derivative of at point in the direction to .
Lemma 6. Let and be noncritical points of . Suppose that represents a descent direction of at point . Then, we have
Moreover, for every , we have:
Proof. In view of Definition 9, it follows that . Now, the following two possible cases may arise:
Therefore, the inequality in (11) is satisfied.
- Case 2: Let . Our aim is to prove that
Since , it suffices to show that
On the contrary, assume that there exists such that
Therefore, from (12), we have the following inequality:
or
Since , it follows from the above that
which in turn implies that
Since is a descent direction of at point , we have
This implies that
From (13) and (14) we obtain
Now, if then from Definition 9, we obtain
which contradicts the assumption that . On the other hand, if then
Using (15) and Definition 9 we obtain
which is a contradiction. Therefore, we have
Now, from (16) and in view of Theorem 1, it follows that:
for every . This completes the proof. □
Notably, for , the direction , as defined in Equation (10), coincides with . Thus, by Lemma 1, we conclude that serves as a descent direction at for . Therefore, in the following theorem, we establish that serves as a descent direction at for , under appropriate assumptions.
Theorem 2. Let be noncritical points of . Suppose that , as defined in Equation (10), serves as a descent direction of at for all . Then, serves as a descent direction at for the function .
Proof. Since the functions ( ) are continuously gH-differentiable, to prove that is a descent direction at , it is sufficient to show that
Let be fixed. From Theorem 1 we have
Therefore, from (18) and Lemmas 2 and 6, we obtain
Similarly, we can prove that
Therefore, from (17), (19), and (20) we conclude that
This completes the proof. □
Now, we introduce an Armijo-like line search method for the objective function of IVMOP. Consider such that is a descent direction at for the function . Let . A step length t is acceptable if it satisfies the following condition:
Remark 6. If every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then (22) reduces to the following Armijo-like line search, defined by Fliege and Svaiter [5]:
where represents the Jacobian of at .
In the next lemma, we prove the existence of such t which satisfies (22) for a given .
Lemma 7. If is gH-differentiable and for each , then for the given , there exists such that
Proof. Let be fixed. By the definition of the directional derivative of , there exists a function such that
where as .
From (23) and Definition 2, the following two cases may arise:
Since , therefore and . Define
Since as , there exists such that
Substituting (25) and (26) into (24), we have
Along the lines of the proof of Case 1, it can be shown that the inequality in (27) holds for all for some .
Since was arbitrary, we conclude that for each , there exists such that (27) holds. Let us set ; then we have
This completes the proof. □
Remark 7. From Lemma 7, it follows that for , if , then there exists some such that (22) holds for all . To compute the step length t numerically, we adopt the following backtracking process: We start with and check whether (22) holds for .
- (a) If the inequality in (22) is satisfied, we take as the step length.
- (b) Otherwise, set and update , repeating the process until the inequality in (22) is satisfied.
In view of the fact that some exists such that (22) holds for all and the sequence converges to 0, the above process terminates after a finite number of iterations. Thus, at any with , we can choose η as the largest t from the set
such that (22) is satisfied.
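A minimal MATLAB sketch of this backtracking loop is given below; the predicate armijoOK and the factor delta are caller-supplied stand-ins for the componentwise condition (22) and the shrinking parameter, respectively, and are assumptions made for illustration.

```matlab
function t = backtrackingStep(armijoOK, delta)
% Backtracking in the spirit of Remark 7: start at t = 1 and shrink by a
% factor delta in (0, 1) until armijoOK(t), a caller-supplied test
% encoding the Armijo-like condition (22), is satisfied.
    t = 1;
    while ~armijoOK(t)
        t = delta * t;   % next candidate in {1, delta, delta^2, ...}
    end
end
```

By Lemma 7, the loop terminates after finitely many reductions whenever the supplied predicate encodes (22) along a descent direction. Now, we present the HS-type conjugate direction algorithm for IVMOP.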
Algorithm 1 HS-Type Conjugate Direction Algorithm for IVMOP
1: Let , initial point , , and set .
2: Solve the optimization problem and obtain the values of and .
3: If , then stop. Otherwise, proceed to the next step.
4: Calculate using (10).
5: Select as the largest value of that satisfies (22). Update the iterate as follows:
6: Set , and go to Step 2.
Remark 8. It is worth noting that if in (10) is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then Algorithm 1 reduces to the steepest descent algorithm for MOPs, as proposed by Fliege and Svaiter [5]. It is obvious that if Algorithm 1 terminates after finitely many iterations, then the last iterate is an approximate critical point. Thus, it is relevant to consider the convergence analysis when Algorithm 1 generates an infinite sequence; that is, for all . Consequently, we have , and serves as a descent direction at for all .
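Putting the steps together, one possible MATLAB realization of the main loop of Algorithm 1 is sketched below; solveSubproblem, conjugateDirection, and armijoOK are hypothetical helper names (the first and last are sketched elsewhere in this paper), and the tolerance-based stopping test is a numerical surrogate for the exact criticality test in Step 3.

```matlab
% Illustrative driver for Algorithm 1 (helper names are hypothetical).
x = x0;  dPrev = [];  k = 0;  epsTol = 1e-5;  maxIter = 1000;
while k < maxIter
    [theta, v] = solveSubproblem(x);          % Step 2: solve the subproblem
    if abs(theta) <= epsTol, break; end       % Step 3: approximate criticality
    d = conjugateDirection(x, v, dPrev);      % Step 4: HS-type direction via (10)
    t = backtrackingStep(@(s) armijoOK(x, d, s), 0.5);  % Step 5: satisfy (22)
    x = x + t * d;                            % Step 5: update the iterate
    dPrev = d;  k = k + 1;                    % Step 6: next iteration
end
```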
5. Main Results
In this section, we establish the convergence of the sequence generated by Algorithm 1. Moreover, we deduce that the sequence exhibits linear order convergence under appropriate assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
In the following theorem, we establish the convergence of the sequence generated by Algorithm 1.
Theorem 3. Let be an infinite sequence generated by Algorithm 1. Suppose that the set
is bounded. Under these assumptions, every accumulation point of the sequence is a critical point of the objective function of IVMOP.
Proof. From (22), for all , we have
Using Remark 2, for all , we obtain
From (31), the sequence lies in , which is a bounded subset of . As a result, the sequence is also bounded. Hence, it possesses at least one accumulation point, say . We claim that is a critical point of the objective function of IVMOP.
Indeed, as is a bounded sequence in and for all , is gH-continuous on , it follows from (31) that, for all , the sequence is non-increasing and bounded. Consequently, from (30), it follows that
Since for all , the value exists. Therefore, the following two possible cases may arise:
Case 1: Let . Hence, employing (32) and taking into account the fact that is an accumulation point of the sequence , there exist subsequences and of and , respectively, such that
and
Our aim is to show that is a critical point of . On the contrary, assume that is a noncritical point of . This implies that there exists such that
Since is continuously gH-differentiable, there exist and such that
Since as , using (35), there exists such that
Now, for every , by defining , we get
This implies that, for every ,
Using (36) and (37), for all , we obtain
This implies that, for all , we get
Now, for all and , we consider
Therefore, using (38), for all and , we conclude that
Now, using Lemma 6, for all and , we obtain
This leads to a contradiction with Equation (33).
Now, for , there exists such that
Therefore, for , the inequality in (22) is not satisfied; that is, for all , we have
Letting (along a suitable subsequence, if necessary) on both sides of the inequality in (40), there exists such that
which leads to a contradiction. This completes the proof. □
The proof of the following lemma follows from Remark 1, Proposition 1, and Definition 3.
Lemma 8. Let the functions be gH-Lipschitz continuous with Lipschitz constant and twice continuously gH-differentiable on . Then, the functions are Lipschitz continuous with Lipschitz constant and twice continuously differentiable on .
In the following theorem, we establish that the sequence generated by Algorithm 1 exhibits linear order convergence.
Theorem 4. Let be the sequence generated by Algorithm 1 and the set defined in (29) be bounded. Suppose that the functions are twice continuously gH-differentiable and the gH-gradients of the functions , that is, , are gH-Lipschitz continuous with Lipschitz constant . Moreover, if we assume that for every , where with , then the sequence converges linearly to the critical point of the objective function of IVMOP.
Proof. Since is the sequence generated by Algorithm 1 and the set is bounded, it follows from Theorem 3 that the sequence converges to a critical point, say , of the objective function of IVMOP. Given that the functions are twice continuously gH-differentiable, it follows from Lemma 8 that the functions are twice continuously differentiable. Therefore, by applying the second-order Taylor formula (see [41]) for each , we have:
where . Moreover, from the hypothesis, it follows that for every , . Taking into account this fact with (41), we have the following inequality:
Combining (42) with (43), we get
Employing the mean value theorem (see [41]) on the right-hand side of (44), there exists such that:
Since the functions are gH-Lipschitz continuous with Lipschitz constant , it follows from Lemma 8 that the functions are Lipschitz continuous with Lipschitz constant . Therefore, from (45), we have
Since is a critical point, there exists some such that the following inequality holds:
Hence, from (46), we get
where . From the hypothesis, it follows that . Hence, the sequence converges linearly to the critical point of the objective function of IVMOP. This completes the proof. □
Remark 9. Let such that is a descent direction at for the function . Let . Then, in an Armijo-like line search strategy, a step length t is considered acceptable if it satisfies the following condition:This implies that the function satisfies the following Armijo-like line search strategy: The following lemma will play a crucial role to investigate the worst-case complexity of the sequence generated by Algorithm 1.
Lemma 9. Let us assume that the gH-gradients of the functions , that is, , are gH-Lipschitz continuous with Lipschitz constant . Moreover, if there exists some such that the following inequality holds:
then the step size in Algorithm 1 always satisfies the following inequality:
for every .
Proof. Let be fixed. Since is a step size in Algorithm 1, in view of Remark 9, there exists some such that:
Since the functions are gH-Lipschitz continuous with Lipschitz constant , it follows from Lemma 8 that the functions are Lipschitz continuous with Lipschitz constant . Combining this fact with (49), we have the following inequality:
Since for all , it follows from (48) and (50) that
From the hypothesis, it follows that
From (51) and (52), we have:
Rearranging the terms of the above inequality, we infer that:
On the other hand, is a solution of the problem and , so it follows that
Moreover, in view of Lemma 6, we obtain the following inequality:
Multiplying on both sides of the above inequality, we get:
From (54) and (55), we have:
Since is nonzero for every , it follows from the above inequality that
This completes the proof. □
In the following theorem, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
Theorem 5. Let all the assumptions of Theorem 4 be satisfied. Suppose that there exists some such that for every , the following inequality:
is satisfied. Then, the sequence generated by Algorithm 1 is such that for any , at most iterations are needed to produce an iterate solution such that , where
Proof. Let be fixed. Since the sequence is generated by Algorithm 1 and the set is bounded, it follows from Theorem 3 that the sequence converges to the critical point, say , of the objective function of IVMOP. Furthermore, given that the functions are twice continuously gH-differentiable, it follows from Lemma 8 that the functions are twice continuously differentiable. Since is bounded, there exists some such that
Moreover, as are continuous on , there exists such that for all
Since is a step size in Algorithm 1, in view of Remark 9, for every , we have
Since , it follows from (10) that:
In view of Lemma 6, it follows that . Therefore, from (58) and the fact that , we have:
Rearranging the terms of the inequality in (59) and using Remark 3, we get the following inequality:
Since for every , is non-positive, from (60), we have:
Employing the mean value theorem (see [41]) on the left-hand side of (61), there exists some such that:
Since , therefore . Combining (56) with (62), we obtain:
Moreover, in view of Theorem 4 and from (63), we have the following inequality:
Now, from Lemma 9 and (64), it follows that
Now, we assume that for the first iterations, we have
Therefore, from (65), we have
Taking the logarithm on both sides of the above inequality, we get:
This implies
This completes the proof. □
6. Experiments and Discussion
In this section, we furnish several numerical examples to illustrate the effectiveness of Algorithm 1 and solve them by employing MATLAB R2024a.
To solve problems (P1) and (P2), presented in Examples 1 and 2, respectively, we employ Algorithm 1, implemented on a system equipped with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 8 GB of RAM. However, this system fails to solve problem (P3), presented in Example 3, which is a large-scale IVMOP . Therefore, to solve problem (P3), we employ Algorithm 1 on a high-performance computational system running Ubuntu, with the following specifications: Memory: 128.0 GiB; Processor: Intel® Xeon® Gold 5415+ (32 cores); OS type: 64-bit.
Now, to execute Algorithm 1, we employ the following steps to find for a given :
- 1. Choose the parameter , such that
- 2. To find the values of and , we solve the following optimization problem:
by employing a numerical method (for example, using optimvar functions in the MATLAB Optimization Toolbox); a hedged sketch of one such formulation is given after this list.
- 3. Compute the values of and the direction , which satisfy Equations (9) and (10), respectively.
- 4. Compute the step size as per the Armijo-like line search, that is, the inequality in (22), equipped with the backtracking discussed in Remark 7. Equivalently, .
- 5. Finally, one can find the values of from the given condition:
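As a concrete instance of Step 2, the sketch below solves a Fliege–Svaiter-type scalar subproblem with fmincon; the specific form, minimizing $t + \tfrac{1}{2}\lVert d \rVert^2$ over $(t, d)$ subject to $g_i^T d \leq t$ for every endpoint gradient $g_i$, is an assumption made for illustration and may differ in detail from the subproblem of [32].

```matlab
function [theta, v] = solveSubproblemSketch(gradsLo, gradsHi)
% Hedged sketch of Step 2. gradsLo, gradsHi: p-by-n matrices whose rows
% are the gradients of the lower and upper endpoint functions at the
% current point (assumed inputs). Decision variable z = [t; d].
    [~, n] = size(gradsLo);
    G = [gradsLo; gradsHi];                 % stack all endpoint gradients
    obj = @(z) z(1) + 0.5 * (z(2:end)' * z(2:end));
    A = [-ones(size(G, 1), 1), G];          % encodes G*d - t <= 0
    b = zeros(size(G, 1), 1);
    z0 = zeros(n + 1, 1);
    opts = optimoptions('fmincon', 'Display', 'off');
    z = fmincon(obj, z0, A, b, [], [], [], [], [], opts);
    theta = obj(z);                         % optimal value: theta(x)
    v = z(2:end);                           % candidate direction: v(x)
end
```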
In the following example, we consider a locally convex IVMOP to demonstrate the effectiveness of Algorithm 1.
Example 1. Consider the following problem (P1), which belongs to the class of IVMOPs.
where are defined as follows:
It is evident that is a critical point of the objective function of (P1). Since the components of the objective function in (P1) are locally convex at , it follows from Lemma 3 that is a locally weak effective solution of (P1). Now, we employ Algorithm 1 to solve (P1), with an initial point . The stopping criterion is defined as . The numerical results for Algorithm 1 are shown in Table 1. From Step 3 of Table 1, we conclude that the sequence converges to a locally weak effective solution of (P1). It is worth noting that the locally weak effective solution of an IVMOP is not an isolated point. However, applying Algorithm 1 with a given initial point can lead to one such locally weak effective solution. To generate an approximate locally weak effective solution set, we employ a multi-start approach, and or a maximum of 1000 iterations as the stopping criteria. Specifically, we generate 100 uniformly distributed random initial points and subsequently execute Algorithm 1 starting from each of these points. In view of the above fact, in Example 1 we generate a set of approximate locally weak effective solutions by selecting 100 uniformly distributed random initial points in the domain using the “rand” function of MATLAB. The sequences generated from these points are illustrated in Figure 1.
Remark 10. It is worth noting that the objective function of problem (P1), presented in Example 1, is twice continuously gH-differentiable but not convex. Moreover, since Newton’s and quasi-Newton methods for IVMOPs, introduced by Upadhyay et al. [28,29], are applicable to certain classes of IVMOPs in which the objective functions are strongly convex and twice continuously gH-differentiable, these methods could not be applied to solve problem (P1). Nevertheless, it has been demonstrated in Example 1 that our proposed algorithm, that is, Algorithm 1, effectively solves problem (P1).
Moreover, in view of the works of Upadhyay et al. [28,29], it can be observed that Newton’s and quasi-Newton methods are applicable to certain classes of IVMOPs in which the objective functions are twice continuously gH-differentiable as well as strongly convex. In contrast, our proposed algorithm only requires continuous gH-differentiability of the components of the objective function. In view of this fact, Algorithm 1 can be applied to a broader class of IVMOPs than the algorithms proposed by Upadhyay et al. [28,29]. To demonstrate this, we consider an IVMOP in which the first component of the objective function is continuously gH-differentiable but not twice continuously gH-differentiable.
Example 2. Consider the following problem (P2), which belongs to the class of IVMOPs.
where and are defined as follows:
and
It can be verified that is continuously gH-differentiable but not twice continuously gH-differentiable. As a result, the Newton’s and quasi-Newton methods proposed by Upadhyay et al. [28,29] cannot be applied to solve (P2). However, (P2) can be solved by employing Algorithm 1 in MATLAB. We employ Algorithm 1 with an initial point . The stopping criterion is defined as . The numerical results for Algorithm 1 are shown in Table 2. Therefore, in view of Step 17 in Table 2, the sequence generated by Algorithm 1 converges to an approximate critical point of the objective function of (P2). In the following example, we apply Algorithm 1, employing MATLAB, to solve a large-scale IVMOP for different values of .
Example 3. Consider the following problem (P3), which belongs to the class of IVMOPs.
where is defined as follows:
We consider a random point, obtained using the built-in MATLAB function rand(n,1), as the initial point of Algorithm 1. We define the stopping criteria as or reaching a maximum of 5000 iterations. Table 3 presents the number of iterations and the computational times required to solve (P3) using Algorithm 1 for various values of n, starting from randomly generated initial points.
7. Conclusions and Future Research Directions
In this article, we have developed an HS-type conjugate direction algorithm to solve a class of IVMOPs. We have performed the convergence analysis, discussed the convergence rate, and investigated the worst-case complexity of the sequence generated by the proposed algorithm. The results established in this article generalize several significant results existing in the literature. Specifically, we have extended the work of Pérez and Prudente [18] on the HS-type conjugate direction method for MOPs to a more general class of optimization problems, namely, IVMOPs. Moreover, it is worth noting that if the conjugate parameter is set to zero and every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5]. Furthermore, it is imperative to note that Newton’s and quasi-Newton methods (see Upadhyay et al. [28,29]) can be applied to solve certain classes of IVMOPs in which the components of the objective function are twice continuously gH-differentiable as well as strongly convex. However, the HS-type conjugate direction algorithm proposed in this paper only requires continuous gH-differentiability assumptions on the components of the objective function. In view of this fact, Algorithm 1 can be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29].
It has been observed that all objective functions in the considered IVMOP are assumed to be continuously gH-differentiable. Consequently, the findings of this paper are not applicable when the objective functions involved do not satisfy this requirement, which can be considered as a limitation of this paper.
The results presented in this article leave numerous avenues for future research. An important direction is the exploration of hybrid approaches for IVMOPs that integrate a convex hybridization of different conjugate direction methods, following the methodology of [35]. Another key direction is investigating the conjugate direction method for IVMOPs without employing any line search techniques, following the methodology of [42]. Moreover, in view of the works in [43,44,45], it would be interesting to develop a Hestenes–Stiefel-type conjugate direction algorithm to train neural networks under uncertainty, where intervals represent the model parameters or data, and to study the robustness of Algorithm 1 in the presence of noisy data.