Abstract
This article investigates a class of interval-valued multiobjective optimization problems (IVMOPs). We define the Hestenes–Stiefel (HS)-type direction for the objective function of IVMOPs and establish that it has a descent property at noncritical points. An Armijo-like line search is employed to determine an appropriate step size. We present an HS-type conjugate direction algorithm for IVMOPs and establish the convergence of the sequence generated by the algorithm. We deduce that the proposed algorithm exhibits a linear order of convergence under appropriate assumptions. Moreover, we investigate the worst-case complexity of the sequence generated by the proposed algorithm. Furthermore, we furnish several numerical examples, including a large-scale IVMOP, to demonstrate the effectiveness of our proposed algorithm and solve them by employing MATLAB. To the best of our knowledge, the HS-type conjugate direction method has not yet been explored for the class of IVMOPs.
Keywords:
Hestenes–Stiefel method; interval-valued optimization; generalized Hukuhara derivative; conjugate direction method; multiobjective optimization
MSC:
26E35; 65G40; 65K05; 90B50; 90C25; 90C30
1. Introduction
Multiobjective optimization problems (MOPs) involve the simultaneous optimization of two or more conflicting objective functions. Vilfredo Pareto [1] introduced the concept of Pareto optimality in the context of economic systems. A solution is called Pareto optimal or efficient if none of the objective functions can be improved without deteriorating some of the other objective values [2]. MOPs arise in scenarios where trade-offs are required, such as balancing cost and quality in business operations or improving efficiency while reducing environmental impact in engineering [3,4]. As a consequence, various techniques and algorithms have been proposed to solve MOPs in different frameworks [5,6,7]. For a more detailed discussion on MOPs, we refer to [2,8,9] and the references cited therein.
In many real-world problems arising in engineering, science, and related fields, we often encounter data that are imprecise or uncertain. This uncertainty can arise from various factors such as unknown future developments, measurement or manufacturing errors, or incomplete information in model development [10,11]. In such contexts, it is common to model uncertain parameters or objective functions using intervals. Moreover, if the uncertainties involved in the objective functions of MOPs are represented as intervals, the resulting problems are referred to as interval-valued multiobjective optimization problems (IVMOPs). IVMOPs frequently arise in diverse fields such as transportation, economics, and business administration [12,13,14,15,16].
It is well-known that the conjugate direction method is a powerful optimization technique that is widely employed for solving systems of linear equations and optimization problems [17,18]. Its computational strength has made it valuable in tackling a variety of real-world problems, including electromagnetic scattering [19], inverse engineering [20], and geophysical inversion [21].
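For illustration, the following is a minimal Python sketch of the classical Hestenes–Stiefel conjugate gradient iteration [17] for a symmetric positive definite linear system Ax = b. It is a generic textbook implementation given only to fix ideas; it is not part of the method proposed in this paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Classical Hestenes-Stiefel conjugate gradient for Ax = b,
    with A symmetric positive definite."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # initial residual
    d = r.copy()                   # first search direction
    for _ in range(b.size):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)      # exact minimizer along d
        x = x + alpha * d
        r = r - alpha * Ad
        beta = (r @ r) / rr        # conjugacy parameter
        d = r + beta * d           # next A-conjugate direction
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # approx. [1/11, 7/11]
```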
2. Review of Related Works
The foundational work on IVOPs is attributed to Ishibuchi and Tanaka [22], who investigated IVOPs by transforming them into corresponding deterministic MOPs. Wu [23] derived optimality conditions for a class of constrained IVOPs, employing the notion of Hukuhara differentiability and assuming the convexity hypothesis on the objective and constraint functions. Moreover, Wu [24] developed Karush–Kuhn–Tucker-type optimality conditions for IVOPs and derived strong duality theorems that connect the primal problems with their associated dual problems. Bhurjee and Panda [25] explored IVOPs by defining interval-valued functions in parametric forms. More recently, Roy et al. [26] proposed a gradient-based descent line search technique employing the notion of generalized Hukuhara (gH) differentiability to solve IVOPs.
Kumar and Bhurjee [27] studied IVMOPs by transforming them into their corresponding deterministic MOPs and establishing the relationships between the solutions of IVMOPs and MOPs. Upadhyay et al. [28] introduced Newton’s method for IVMOPs and established the quadratic convergence of the sequence generated by Newton’s method under suitable assumptions. Subsequently, Upadhyay et al. [29] developed quasi-Newton methods for IVMOPs and demonstrated their efficacy in solving both convex and non-convex IVMOPs. For a more comprehensive and updated survey on IVMOPs, we refer to [30,31,32,33] and the references cited therein.
The conjugate direction method was first introduced by Hestenes and Stiefel [17], who developed the conjugate gradient method to solve a system of linear equations. Subsequently, Pérez and Prudente [18] introduced the HS-type conjugate direction algorithm for MOPs by employing an inexact line search. Wang et al. [34] introduced an HS-type conjugate direction algorithm for MOPs without employing line search techniques, and established the global convergence of the proposed method under suitable conditions. Recently, based on the memoryless Broyden–Fletcher–Goldfarb–Shanno update, Khoshsimaye-Bargard and Ashrafi [35] presented a convex hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate parameters. For a more detailed discussion on the conjugate direction method, we refer to [18,36] and the references cited therein. From the above discussion, it is evident that HS-type conjugate direction methods have been developed to solve single-objective problems as well as MOPs. However, there is no research paper available in the literature that has explored the HS-type conjugate direction method for IVMOPs. The aim of this article is to fill the aforementioned research gaps by developing the HS-type conjugate direction method for a class of IVMOPs.
Motivated by the works of [17,18,34], in this paper, we investigate a class of IVMOPs and define the HS-type direction for the objective function of IVMOPs. A descent direction property of the HS-type direction is established at noncritical points. To determine an appropriate step size, we employ an Armijo-like line search. Moreover, an HS-type conjugate direction algorithm for IVMOPs is presented, and the convergence of this algorithm is established. Furthermore, under appropriate assumptions, we deduce that the proposed algorithm exhibits a linear order of convergence. In addition to this, the order of complexity of the proposed algorithm is investigated. Finally, the efficiency of the proposed method is demonstrated by solving various numerical problems employing MATLAB.
The primary contribution and novel aspects of the present article are as follows:
- The results presented in this paper generalize several significant results from the existing literature. Specifically, we generalize the results established by Pérez and Prudente [18] on the HS-type method from MOPs to a more general class of optimization problems, namely, IVMOPs.
- The algorithm introduced in this paper is more general than the steepest descent algorithm introduced by Fliege and Svaiter [5]. More specifically, if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5].
- It is evident that Newton’s and quasi-Newton methods (see Upadhyay et al. [28,29]) can be applied to solve certain classes of IVMOPs in which the components of the objective function are twice continuously gH-differentiable as well as strongly convex. However, the HS-type conjugate direction algorithm proposed in this paper only requires continuous gH-differentiability assumptions on the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29].
- To the best of our knowledge, this is the first time that the linear order of convergence of the HS-type conjugate direction method has been investigated along with its worst-case complexity.
The rest of the paper is structured as follows. In Section 3, we discuss some mathematical preliminaries that will be employed in the sequel. Section 4 presents an HS-type conjugate direction algorithm for IVMOPs. In Section 5, we establish the convergence of the sequence generated by the proposed algorithm and deduce that the sequence exhibits linear order convergence under suitable assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by the proposed algorithm. In Section 6, we demonstrate the efficiency of the proposed algorithm by solving several numerical examples via MATLAB. Finally, in Section 7, we provide our conclusions as well as future research directions.
3. Preliminaries
Throughout this article, the symbol ℕ denotes the set of all natural numbers. For n ∈ ℕ, the symbol ℝⁿ refers to the n-dimensional Euclidean space. The symbol  refers to the collection of all negative real numbers. The symbols Ø and Iₙ are employed to denote the empty set and the identity matrix of order n, respectively. For two matrices , the notation is employed to denote that  is positive semidefinite. For any  and r > 0, the symbols  and  denote the open and closed balls of radius r centered at , respectively. For any non-empty set  and , the notation  represents the Cartesian product defined as
Let . Then, the symbol is defined as
For , the symbol is defined as
Let . Then, the notations and are used to represent the following sets:
Let . The following notations are employed throughout this article:
For and , we define:
and
which denote the one-sided right and left -th partial derivatives of at the point , respectively, assuming the limits defined in (1) and (2) are well-defined.
If for every and , and exist, then we define:
The symbols and are used to denote the following sets:
Let . The symbol represents the following:
The interval A = [a^L, a^U] is referred to as a degenerate interval if and only if a^L = a^U.
Let , , and . Corresponding to and , we define the following algebraic operations [28]:
The subsequent definition is from [37].
Definition 1.
Consider an arbitrary set . Then, the symbol represents the norm of and is defined as follows:
For and , we adopt the following notations throughout the article:
Let , . The ordered relations between and are described as follows:
The following definition is from [37].
Definition 2.
For arbitrary intervals A = [a^L, a^U] and B = [b^L, b^U], the symbol A ⊖_gH B represents the gH-difference between A and B and is defined as follows:
A ⊖_gH B = [min{a^L − b^L, a^U − b^U}, max{a^L − b^L, a^U − b^U}].
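As a concrete illustration of the interval arithmetic recalled above and of the gH-difference, the following Python sketch implements these operations on intervals A = [a^L, a^U]. The class name and the magnitude convention max{|a^L|, |a^U|} used in norm() are our own choices for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float   # lower endpoint a^L
    hi: float   # upper endpoint a^U

    def __add__(self, other):
        # interval addition: [a^L + b^L, a^U + b^U]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, lam):
        # scalar multiplication; endpoints swap when lam < 0
        a, b = lam * self.lo, lam * self.hi
        return Interval(min(a, b), max(a, b))

    def gh_sub(self, other):
        # gH-difference: [min{a^L-b^L, a^U-b^U}, max{a^L-b^L, a^U-b^U}]
        d1, d2 = self.lo - other.lo, self.hi - other.hi
        return Interval(min(d1, d2), max(d1, d2))

    def norm(self):
        # an assumed magnitude convention: max(|a^L|, |a^U|)
        return max(abs(self.lo), abs(self.hi))

# example: [1, 3] gH-minus [0, 1] = [1, 2], since [0, 1] + [1, 2] = [1, 3]
print(Interval(1.0, 3.0).gh_sub(Interval(0.0, 1.0)))
```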
The notion of gH-continuity for is recalled in the subsequent definition [37].
Definition 3.
The function is said to be a gH-continuous function at a point if for any , there exists some such that for any satisfying , the following inequality holds:
In the following definition, we recall the notion of the gH-Lipschitz continuity property for the class of interval-valued functions (for instance, see [38]).
Definition 4.
The function is said to be gH-Lipschitz continuous on with Lipschitz constant , if for every , the following inequality is satisfied:
Remark 1.
In view of Definition 4, it follows that if is gH-Lipschitz continuous with Lipschitz constant on , then the function , defined by:
is also Lipschitz continuous with Lipschitz constant on .
The subsequent definition is from Upadhyay et al. [29].
Definition 5.
The function is said to be a convex function if, for all , and any , the following inequality holds:
Moreover, is said to be locally convex at a point if there exists some neighborhood of such that the restriction of to is convex.
The subsequent definitions are from [39].
Definition 6.
Let and with . The gH-directional derivative of at in the direction is defined as follows:
provided that the above limit exists.
Definition 7.
Let and be defined as follows:
Let the functions be defined as follows:
The mapping is said to be gH-differentiable at if there exist vectors with and , and error functions , such that , and for all the following hold:
and
If is gH-differentiable at every element , then is said to be gH-differentiable on .
The proof of the following theorem can be established employing Theorem 5 and Propositions 9 and 11 from [39].
Theorem 1.
Let and let be defined as follows:
If is gH-differentiable at , then for any , one of the following conditions is fulfilled:
- (i)
- The gradients and exist, and
- (ii)
- , , , and  exist and satisfy: Moreover, or,
The following definition is from [39].
Definition 8.
Let be gH-differentiable at . Then, the gH-gradient of at is defined as follows:
where denotes the -th canonical direction in .
We recall the following proposition from [28].
Proposition 1.
Let and let be defined as follows:
If the function is m-times gH-differentiable at , then the function , defined in Remark 1, is also m-times differentiable at .
We define the interval-valued vector function as follows:
where are interval-valued functions.
The following two definitions are from [32].
Definition 9.
Let and . Suppose that every component of possesses gH-directional derivatives. Then, is called a critical point of , provided that there does not exist any , satisfying:
where .
Definition 10.
An element is referred to as the descent direction of at a point , provided that some exists, satisfying:
Definition 11.
Let . The function is said to be continuously gH-differentiable at if every component of is continuously gH-differentiable at .
Remark 2.
In view of Definitions 6 and 10, it follows that if is continuously gH-differentiable at and if is a descent direction of at , then
4. HS-Type Conjugate Direction Method for IVMOPs
In this section, we present an HS-type conjugate direction method for IVMOPs. Moreover, we establish the convergence of the sequence generated by this method.
Consider the following IVMOP:
where the functions are defined as
The functions are assumed to be continuously gH-differentiable unless otherwise specified.
The notions of effective and weak effective solutions for IVMOP are recalled in the subsequent definition [28].
Definition 12.
A point is said to be an effective solution of the IVMOP if there is no other point such that
Similarly, a point is said to be a weak effective solution of the IVMOP provided that there is no other point for which:
In the rest of the article, we employ to represent the set of all critical points of .
Let . In order to determine the descent direction for the objective function of IVMOP, we consider the following scalar optimization problem with interval-valued constraints [32]:
where is a real-valued function. It can be shown that the problem has a unique solution.
Any feasible point of the problem is represented as , where and . Let denote the feasible set of . We consider the functions and , which are defined as follows:
From now onwards, for any , the notation will be used to represent the optimal solution of the problem .
Now, for every , we consider a function , defined as follows:
Remark 3.
Since is a solution of the problem , therefore for all it follows that:
This implies that the function satisfies the following inequality:
In the subsequent discussions, we utilize the following lemmas from Upadhyay et al. [32].
Lemma 1.
Let . If , then is a descent direction at for .
Lemma 2.
For , the following properties hold:
- (i)
- If , then and .
- (ii)
- If , then .
Remark 4.
From Lemma 1, it follows that if , then the optimal solution of yields a descent direction. Furthermore, from Lemma 2, it can be inferred that the value of can be utilized to determine whether or not. Specifically, for any given point , if , then . Otherwise, , and in this case, serves as a descent direction at for .
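To indicate how the direction and the value summarized in Remark 4 can be computed in practice, the following Python sketch solves a Fliege–Svaiter-type direction-finding subproblem. It assumes, for illustration, the formulation min over (t, d) of t + ½‖d‖² subject to the endpoint-gradient constraints gᵀd ≤ t for every endpoint gradient g; the exact subproblem is the one of [32] stated above, and the function name is our own.

```python
import numpy as np
from scipy.optimize import minimize

def steepest_descent_subproblem(grads):
    """Fliege-Svaiter-type subproblem (illustrative):
    minimize  t + 0.5*||d||^2  over z = (t, d)
    subject to  g @ d <= t  for every endpoint gradient g in `grads`.
    Returns (v, theta): the minimizing direction and the optimal value."""
    n = grads[0].size
    z0 = np.zeros(n + 1)                        # z = (t, d)
    obj = lambda z: z[0] + 0.5 * (z[1:] @ z[1:])
    cons = [{"type": "ineq", "fun": (lambda z, g=g: z[0] - g @ z[1:])}
            for g in grads]                     # SLSQP expects fun(z) >= 0
    res = minimize(obj, z0, constraints=cons, method="SLSQP")
    return res.x[1:], res.fun

# at a noncritical point the optimal value is negative and the
# direction is a descent direction
v, theta = steepest_descent_subproblem([np.array([1.0, 0.0]),
                                        np.array([0.0, 1.0])])
print(v, theta)   # approx. [-0.5, -0.5] and -0.25
```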
We recall the following result from Upadhyay et al. [32].
Lemma 3.
Let . If the functions () are locally convex at , then is a locally weak effective solution of IVMOP.
To introduce the Hestenes–Stiefel-type direction for IVMOP, we define a function as follows:
where,
In the following lemma, we establish the relationship between the critical point of and the function .
Lemma 4.
Let be defined in (3), and . Then, is a critical point of if and only if
Proof.
Let be a critical point of . Then, by Definition 9, for every there exists such that
Consequently, it follows that , which implies
Conversely, suppose that
Then, for any there exists such that
This further implies that for any ,
Therefore, it follows that is a critical point of . This completes the proof. □
We establish the following lemma, which will be used in the sequel.
Lemma 5.
Let and let be the optimal solution of the problem . Then
Proof.
Since , therefore we have
Let us define . Then, we have
Therefore, we obtain
This implies that . Since is the optimal solution of the problem , we obtain
Let be fixed and let . Now, we introduce a Hestenes–Stiefel-type direction (HS-type direction) at .
where  represents the HS-type direction at the -th step, and for ,  is defined as follows:
Remark 5.
If every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then Equation (7) reduces to the HS-type direction defined for vector-valued functions, as considered by Pérez and Prudente [18]. As a result, the parameter introduced in (8) extends the HS-type direction from MOPs to IVMOPs, which belong to a broader class of optimization problems. Moreover, when , Equation (7) further reduces to the classical HS-type direction for a real-valued function, defined by Hestenes and Stiefel [17].
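For reference, the classical Hestenes–Stiefel quantities for a real-valued objective f, to which the above definitions reduce in the setting of Remark 5, are

```latex
\beta_k^{HS} \;=\; \frac{g_k^{\top}\,(g_k - g_{k-1})}{d_{k-1}^{\top}\,(g_k - g_{k-1})},
\qquad
d_k \;=\; -\,g_k + \beta_k^{HS}\, d_{k-1},
\qquad
g_k := \nabla f(x_k),
```

as defined by Hestenes and Stiefel [17].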
It can be observed that , defined in (8), becomes undefined when
and the direction defined in Equation (7) may not provide a descent direction. To address this issue, we adopt an approach similar to that proposed by Gilbert and Nocedal [40] and Pérez and Prudente [18], and define  and  as follows:
and
In the following lemma, we establish an inequality that relates the directional derivative of at point in the direction to .
Lemma 6.
Let and be noncritical points of . Suppose that represents a descent direction of at point . Then, we have
Moreover, for every , we have:
Proof.
Since , it suffices to show that
On the contrary, assume that there exists such that
In view of Definition 9, it follows that . Now, the following two possible cases may arise:
- Case 1:
- If , then
Therefore, the inequality in (11) is satisfied.
- Case 2:
- Let . Our aim is to prove that
Therefore, from (12), we have the following inequality:
or
Since , therefore it follows from above that
which in turn implies that
Since is a descent direction of at point , we have
This implies that
Now, if
then from Definition 9, we obtain
which contradicts the assumption that . On the other hand, if
then
Using (15) and Definition 9 we obtain
which is a contradiction. Therefore, we have
Now, from (16) and in view of Theorem 1, it follows that:
for every . This completes the proof. □
Notably, for , the direction , as defined in Equation (10), coincides with . Thus, by Lemma 1, we conclude that serves as a descent direction at for . Therefore, in the following theorem, we establish that serves as a descent direction at for , under appropriate assumptions.
Theorem 2.
Let be noncritical points of . Suppose that , as defined in Equation (10), serves as a descent direction of at for all . Then, serves as a descent direction at for the function .
Proof.
Since the functions () are continuously gH-differentiable, therefore, to prove that is a descent direction at , it is sufficient to show that
Let be fixed. From Theorem 1 we have
Consider
Therefore, from (18), and Lemmas 2 and 6, we obtain
Similarly, we can prove that
This completes the proof. □
Now, we introduce an Armijo-like line search method for the objective function of IVMOP.
Consider such that is a descent direction at for the function . Let . A step length t is acceptable if it satisfies the following condition:
Remark 6.
If every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then (22) reduces to the following Armijo-like line search, defined by Fliege and Svaiter [5]:
F(x + t d) ⪯ F(x) + ρ t JF(x) d,
where JF(x) represents the Jacobian of F at x, ⪯ denotes the componentwise order, and ρ ∈ (0, 1) is the line-search parameter.
In the next lemma, we prove the existence of a step size t satisfying (22) for a given .
Lemma 7.
If is gH-differentiable and for each , then for the given , there exists such that
Proof.
Let be fixed. By the definition of the directional derivative of , there exists a function such that
where as .
From (23) and Definition 2, the following two cases may arise:
- Case 1:
Since , therefore and . Define
Since as , there exists such that
This implies that
- Case 2:
On the lines of the proof of Case 1, it can be shown that the inequality in (27) holds for all for some .
Since was arbitrary, we conclude that for each , there exists such that (27) holds. Setting , we then have
This completes the proof. □
Remark 7.
From Lemma 7, it follows that for , if , then there exists some such that (22) holds for all . To compute the step length t numerically, we adopt the following backtracking process:
We start with and check whether (22) holds for .
In view of the fact that there exists some  such that (22) holds for all sufficiently small step sizes, and that the sequence of trial values converges to 0, the above process terminates after a finite number of iterations.
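A minimal backtracking sketch of this process in Python is given below. It assumes, for illustration, that (22) is checked componentwise on the endpoint functions in the manner of Remark 6 and that trial steps are halved; the paper's exact interval-valued condition (22) should be substituted for the acceptance test.

```python
import numpy as np

def armijo_backtracking(funcs, grads, x, d, rho=1e-4, max_halvings=50):
    """Backtracking line search (illustrative): accept the first t for which
    f(x + t*d) <= f(x) + rho*t*(grad f(x) @ d) holds for every endpoint
    function f in `funcs`; trial steps start at t = 1 and are halved."""
    t = 1.0
    f0 = [f(x) for f in funcs]
    slopes = [g(x) @ d for g in grads]     # directional derivatives at x
    for _ in range(max_halvings):
        if all(f(x + t * d) <= f0_i + rho * t * s
               for f, f0_i, s in zip(funcs, f0, slopes)):
            return t
        t *= 0.5                           # backtrack: halve the trial step
    return t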
Now, we present the HS-type conjugate direction algorithm for IVMOP.
Algorithm 1: HS-Type Conjugate Direction Algorithm for IVMOP
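The printed listing of Algorithm 1 appears in the original article; the following is an illustrative Python skeleton assembled from the surrounding description. The subproblem yields the direction and value of Remark 4, the method stops when that value is sufficiently close to zero, the search direction combines the subproblem direction with the previous one through the safeguarded parameter of (8)–(10) (represented here by a user-supplied beta_fn), and the step size comes from the Armijo-like backtracking of Remark 7. The helper functions are the sketches given earlier; all names are our own.

```python
import numpy as np

def hs_type_method(grads_at, funcs, grads, beta_fn, x0,
                   eps=1e-4, max_iter=1000):
    """Illustrative skeleton of the HS-type conjugate direction iteration.
    grads_at(x): list of endpoint gradients at x (input to the subproblem);
    beta_fn(k, state): stand-in for the safeguarded HS parameter (8)-(10)."""
    x = np.asarray(x0, dtype=float)
    d_prev = None
    for k in range(max_iter):
        v, theta = steepest_descent_subproblem(grads_at(x))
        if theta > -eps:                   # value = 0 iff x is critical
            return x                       # approximate critical point
        if d_prev is None:
            d = v                          # first iteration: use v(x_0)
        else:
            beta = beta_fn(k, {"x": x, "v": v, "d_prev": d_prev})
            d = v + beta * d_prev          # HS-type conjugate direction
        t = armijo_backtracking(funcs, grads, x, d)
        x, d_prev = x + t * d, d
    return x
```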
Remark 8.
It is worth noting that if in (10) is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, that is, , then Algorithm 1 reduces to the steepest descent algorithm for MOPs, as proposed by Fliege and Svaiter [5].
It is obvious that if Algorithm 1 terminates after finitely many iterations, then the last iterate is an approximate critical point. Thus, it is relevant to consider the convergence analysis when Algorithm 1 generates an infinite sequence, that is,  for all . Consequently, we have  and  serves as a descent direction at  for all .
5. Main Results
In this section, we establish the convergence of the sequence generated by Algorithm 1. Moreover, we deduce that the sequence exhibits linear order convergence under appropriate assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
In the following theorem, we establish the convergence of the sequence generated by Algorithm 1.
Theorem 3.
Let be an infinite sequence generated by Algorithm 1. Suppose that the set
is bounded. Under these assumptions, every accumulation point of the sequence is a critical point of the objective function of IVMOP.
Proof.
Now, for , there exists such that
Therefore, for , the inequality in (22) is not satisfied, that is, for all , we have
This implies that
From (31), the sequence lies in which is a bounded subset of . As a result, the sequence is also bounded. Hence, it possesses at least one accumulation point, say . We claim that is a critical point of the objective function of IVMOP.
Indeed, as is a bounded sequence in and for all , is gH-continuous on ; therefore, using (31), we conclude that, for all , the sequence is non-increasing and bounded. Consequently, from (30), it follows that
Since  for all , the value  exists. Therefore, the following two possible cases may arise:
- Case 1: Let . Hence, employing (32) and taking into account the fact that  is an accumulation point of the sequence , there exist subsequences  and  of  and , respectively, such that and
Our aim is to show that is a critical point of . On the contrary, assume that is a noncritical point of . This implies that there exists such that
Since is continuously gH-differentiable, therefore, there exist and , such that
Since as , therefore, using (35), there exists , such that
Now, for every , by defining , we get
This implies that, for every ,
This implies that, for all , we get
Now, for all and , we consider
This leads to a contradiction with Equation (33).
- Case 2: Let . Since , for all , therefore, we get
Letting (along a suitable subsequence, if necessary) in both sides of the inequality in (40), there exists such that
which leads to a contradiction. This completes the proof. □
The proof of the following lemma follows from Remark 1, Proposition 1, and Definition 3.
Lemma 8.
Let the functions be gH-Lipschitz continuous with Lipschitz constant and twice continuously gH-differentiable on . Then, the functions are Lipschitz continuous with Lipschitz constant and twice continuously differentiable on .
In the following theorem, we establish that the sequence generated by Algorithm 1 exhibits linear order convergence.
Theorem 4.
Let be the sequence generated by Algorithm 1 and the set defined in (29) be bounded. Suppose that the functions are twice continuously gH-differentiable and the gH-gradients of the functions , that is, are gH-Lipschitz continuous with Lipschitz constant . Moreover, if we assume that for every , where with , then the sequence converges linearly to the critical point of the objective function of IVMOP.
Proof.
Since is the sequence generated by Algorithm 1 and the set is bounded, therefore, it follows from Theorem 3 that the sequence converges to the critical point, say , of the objective function of IVMOP. Given that the functions are twice continuously gH-differentiable, it follows from Lemma 8 that the functions are twice continuously differentiable. Therefore, by applying the second-order Taylor formula (see [41]) for each , we have:
where . Moreover, from the hypothesis, it follows that for every , . Taking into account this fact with (41), we have the following inequality:
From (31), we have
Since the functions are gH-Lipschitz continuous with Lipschitz constant , it follows from Lemma 8 that the functions are Lipschitz continuous with Lipschitz constant . Therefore, from (45), we have
Since is a critical point, therefore, there exists some such that the following inequality holds:
Hence, from (46), we get
where . From the hypothesis, it follows that . Hence, the sequence converges linearly to the critical point of the objective function of IVMOP. This completes the proof. □
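In symbols, the linear convergence established above means that there exist r ∈ (0, 1) and K ∈ ℕ such that

```latex
\|x_{k+1} - x^{*}\| \;\le\; r\,\|x_{k} - x^{*}\| \qquad \text{for all } k \ge K.
```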
Remark 9.
Let such that is a descent direction at for the function . Let . Then, in an Armijo-like line search strategy, a step length t is considered acceptable if it satisfies the following condition:
This implies that the function satisfies the following Armijo-like line search strategy:
The following lemma will play a crucial role to investigate the worst-case complexity of the sequence generated by Algorithm 1.
Lemma 9.
Let us assume that the gH-gradient of the functions , that is, are gH-Lipschitz continuous with Lipschitz constant . Moreover, if there exists some such that the following inequality holds:
then the step size in Algorithm 1 always satisfies the following inequality:
for every .
Proof.
Let be fixed. Since is a step size in Algorithm 1, therefore, in view of Remark 9, there exists some , such that:
Now,
Since the functions are gH-Lipschitz continuous with Lipschitz constant , it follows from Lemma 8 that the functions are Lipschitz continuous with Lipschitz constant . Combining this fact with (49), we have the following inequality:
From the hypothesis, it follows that
Rearranging the terms of the above inequality, we infer that:
On the other hand, is a solution of the problem and , so it follows that
Moreover, in view of Lemma 6, we obtain the following inequality:
Multiplying on both sides of the above inequality, we get:
In the following theorem, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
Theorem 5.
Let all the assumptions of Theorem 4 be satisfied. Suppose that there exists some , such that for every , the following inequality:
is satisfied. Then, the sequence generated by Algorithm 1 is such that for any , at most iterations are needed to produce an iterate solution such that , where
Proof.
Let be fixed. Since the sequence is generated by Algorithm 1 and the set is bounded, therefore, it follows from Theorem 3 that the sequence converges to the critical point, say , of the objective function of IVMOP. Furthermore, given that the functions are twice continuously gH-differentiable, therefore, it follows from Lemma 8 that the functions are twice continuously differentiable. Since is bounded, therefore, there exists some such that
Moreover, as are continuous on , therefore, there exists such that for all
Since is a step size in Algorithm 1, therefore, in view of Remark 9, for every , we have
Since , therefore, from (10), it follows that:
In view of Lemma 6, it follows that . Therefore, from (58), and the fact that , we have:
Rearranging the terms of the inequality in (59) and using Remark 3, we get the following inequality:
Since for every , is non-positive, therefore, from (60), we have:
Employing the mean value theorem (see [41]) on the left-hand side of (61), there exists some such that:
Moreover, in view of Theorem 4, and from (63), we have the following inequality:
Now, from Lemma 9, it follows from (64) that
Now, we assume that for the first  iterations, we have . Therefore, from (65), we have
Taking the logarithm on both sides of the above inequality, we get:
This implies
This completes the proof. □
6. Experiments and Discussion
In this section, we furnish several numerical examples to illustrate the effectiveness of Algorithm 1 and solve them by employing MATLAB R2024a.
To solve problems (P1) and (P2), presented in Examples 1 and 2, respectively, we employ Algorithm 1, implemented on a system equipped with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz processor with 8 GB of RAM. However, the above system fails to solve problem (P3) presented in Example 3, which is a large-scale IVMOP. Therefore, to solve problem (P3), we employ Algorithm 1, implemented on a high-performance computational system running Ubuntu, with the following specifications: Memory: 128.0 GiB, Processor: Intel® Xeon® Gold 5415+ (32 cores), and OS type: 64-bit.
Now, to execute Algorithm 1, we employ the following steps to find for a given :
- 1.
- Choose the parameter , such that
- 2.
- To find the values of  and , we solve the following optimization problem: by employing a numerical method (for example, using optimvar-based problem formulation in MATLAB’s Optimization Toolbox).
- 3.
- 4.
- Compute the step size as per the Armijo-like line search, that is, the inequality in (22), equipped with the backtracking discussed in Remark 7.
- 5.
- Finally, one can find the values of from the given condition:
In the following example, we consider a locally convex IVMOP to demonstrate the effectiveness of Algorithm 1.
Example 1.
Consider the following problem (P1) which belongs to the class of IVMOPs.
where are defined as follows:
It is evident that is a critical point of the objective function of (P1). Since the components of the objective function in (P1) are locally convex at , it follows from Lemma 3 that is a locally weak effective solution of (P1).
Now, we employ Algorithm 1 to solve (P1), with an initial point . The stopping criterion is defined as . The numerical results for Algorithm 1 are shown in Table 1.
Table 1.
Sequence generated by Algorithm 1 for the problem (P1).
From Step 3 of Table 1, we conclude that the sequence converges to a locally weak effective solution of (P1).
It is worth noting that the locally weak effective solution of an IVMOP is not, in general, an isolated point. However, applying Algorithm 1 with a given initial point can lead to one such locally weak effective solution. To generate an approximate locally weak effective solution set, we employ a multi-start approach, with  or a maximum of 1000 iterations as the stopping criteria. Specifically, in Example 1, we generate 100 uniformly distributed random initial points in the domain using the “rand” function of MATLAB and subsequently execute Algorithm 1 starting from each of these points. The sequences generated from these points are illustrated in Figure 1.
Figure 1.
Approximate locally weak effective solutions generated from 100 uniformly distributed random initial points.
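A minimal multi-start driver in the spirit of this procedure is sketched below in Python. The toy endpoint functions stand in for (P1) and are purely hypothetical; hs_type_method and its helpers are the sketches given in Section 4, and setting the conjugate parameter to zero recovers the steepest-descent behavior noted in Remark 8.

```python
import numpy as np

# Toy endpoint functions standing in for (P1) -- purely hypothetical:
f_lo = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2        # f^L
f_hi = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2 + 1.0  # f^U = f^L + 1
g = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * x[1]])

rng = np.random.default_rng(0)
starts = rng.uniform(-2.0, 2.0, size=(100, 2))        # assumed domain box
solutions = [hs_type_method(lambda x: [g(x), g(x)],   # endpoint gradients
                            [f_lo, f_hi], [g, g],
                            lambda k, s: 0.0,         # beta = 0 (Remark 8)
                            x0, eps=1e-4, max_iter=1000)
             for x0 in starts]
print(np.round(solutions[:3], 3))                     # a few computed points
```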
Remark 10.
It is worth noting that the objective function of problem (P1), presented in Example 1, is twice continuously gH-differentiable but not convex. Moreover, since Newton’s and quasi-Newton methods for IVMOPs introduced by Upadhyay et al. [28,29] are applicable only to certain classes of IVMOPs in which the objective functions are strongly convex and twice continuously gH-differentiable, these methods cannot be applied to solve the problem (P1) presented in Example 1. Nevertheless, it has been demonstrated in Example 1 that our proposed algorithm, that is, Algorithm 1, effectively solves the problem (P1).
Moreover, in view of the works of Upadhyay et al. [28,29], it can be observed that Newton’s and quasi-Newton methods are applicable to certain classes of IVMOPs in which the objective functions are twice continuously gH-differentiable as well as strongly convex. In contrast, our proposed algorithm only requires continuous gH-differentiability of the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29]. To demonstrate this, we consider an IVMOP whose first objective component is continuously gH-differentiable but not twice continuously gH-differentiable.
Example 2.
Consider the following problem (P2) which belongs to the class of IVMOPs.
where and are defined as follows:
and
It can be verified that  is continuously gH-differentiable but not twice continuously gH-differentiable. As a result, Newton’s and quasi-Newton methods proposed by Upadhyay et al. [28,29] cannot be applied to solve (P2).
Nevertheless, (P2) can be solved by Algorithm 1 employing MATLAB. We employ Algorithm 1 to solve (P2), with an initial point . The stopping criterion is defined as . The numerical results for Algorithm 1 are shown in Table 2.
Table 2.
Sequence generated by Algorithm 1 for the problem (P2).
Therefore, in view of Step 17 in Table 2, the sequence generated by Algorithm 1 converges to an approximate critical point of the objective function of (P2).
In the following example, we apply Algorithm 1 employing MATLAB to solve a large-scale IVMOP for different values of .
Example 3.
Consider the following problem (P3) which belongs to the class of IVMOPs.
where is defined as follows:
We consider a random point, obtained using the built-in MATLAB function rand(n,1), as the initial point of Algorithm 1. We define the stopping criteria as  or reaching a maximum of 5000 iterations. Table 3 presents the number of iterations and the computational times required to solve (P3) using Algorithm 1 for various values of n, starting from randomly generated initial points.
Table 3.
The numerical results of Algorithm 1 for the problem (P3).
7. Conclusions and Future Research Directions
In this article, we have developed an HS-type conjugate direction algorithm to solve a class of IVMOPs. We have performed the convergence analysis, discussed the convergence rate, and investigated the worst-case complexity of the sequence generated by the proposed algorithm. The results established in this article have generalized several significant results existing in the literature. Specifically, we have extended the work of Pérez and Prudente [18] on the HS-type conjugate direction method for MOPs to a more general class of optimization problems, namely, IVMOPs. Moreover, it is worth noting that, if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5]. Furthermore, it is imperative to note that Newton’s and quasi-Newton methods (see Upadhyay et al. [28,29]) can be applied to solve certain classes of IVMOPs in which the components of the objective function are twice continuously gH-differentiable as well as strongly convex. However, the HS-type conjugate direction algorithm proposed in this paper only requires continuous gH-differentiability assumptions on the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29].
Throughout this work, all components of the objective function of the considered IVMOP are assumed to be continuously gH-differentiable. Consequently, the findings of this paper are not applicable when the objective functions involved do not satisfy this requirement, which can be considered a limitation of this paper.
The results presented in this article open numerous avenues for future research. An important direction is the exploration of hybrid approaches for IVMOPs that integrate a convex hybridization of different conjugate direction methods, following the methodology of [35]. Another key direction is investigating the conjugate direction method for IVMOPs without employing any line search techniques, following the methodology of [42]. Moreover, in view of the works in [43,44,45], it would be interesting to develop a Hestenes–Stiefel-type conjugate direction algorithm to train neural networks under uncertainty, where intervals represent the model parameters or data, and to study the robustness of Algorithm 1 in the presence of noisy data.
Author Contributions
Conceptualization, B.B.U. and R.K.P.; methodology, B.B.U. and R.K.P.; validation, B.B.U., R.K.P., S.P. and I.S.-M.; formal analysis, B.B.U., R.K.P. and S.P.; writing—review and editing, B.B.U., R.K.P. and S.P.; supervision, B.B.U. All authors have read and agreed to the published version of the manuscript.
Funding
The first author would like to thank the University Grants Commission, New Delhi, India, for the received financial support (UGC-Ref. No.: 1213/(CSIR-UGC NET DEC 2017)). The third author extends their gratitude to the Ministry of Education, Government of India, for their financial support through the Prime Minister Research Fellowship (PMRF), granted under PMRF ID-2703573.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
The authors would like to thank the anonymous reviewers for their constructive suggestions, which have substantially improved the paper in its present form.
Conflicts of Interest
The authors confirm that there are no actual or potential conflicts of interest related to this article.
References
- Pareto, V. Manuale di Economia Politica; Societa Editrice: Milano, Italy, 1906. [Google Scholar]
- Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999; pp. xvii, 298. [Google Scholar]
- Diao, X.; Li, H.; Zeng, S.; Tam, V.W.; Guo, H. A Pareto multi-objective optimization approach for solving time-cost-quality tradeoff problems. Technol. Econ. Dev. Econ. 2011, 17, 22–41. [Google Scholar] [CrossRef]
- Guillén-Gosálbez, G. A novel MILP-based objective reduction method for multi-objective optimization: Application to environmental problems. Comput. Chem. Eng. 2011, 35, 1469–1477. [Google Scholar] [CrossRef]
- Fliege, J.; Svaiter, B.F. Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 2000, 51, 479–494. [Google Scholar] [CrossRef]
- Bento, G.C.; Melo, J.G. Subgradient method for convex feasibility on Riemannian manifolds. J. Optim. Theory Appl. 2012, 152, 773–785. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Singh, S.K.; Stancu-Minasian, I.M.; Rusu-Stancu, A.M. Robust optimality and duality for nonsmooth multiobjective programming problems with vanishing constraints under data uncertainty. Algorithms 2024, 17, 482. [Google Scholar] [CrossRef]
- Ehrgott, M. Multicriteria Optimization; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
- Upadhyay, B.B.; Poddar, S.; Yao, J.C.; Zhao, X. Inexact proximal point method with a Bregman regularization for quasiconvex multiobjective optimization problems via limiting subdifferentials. Ann. Oper. Res. 2025, 345, 417–466. [Google Scholar] [CrossRef]
- Beer, M.; Ferson, S.; Kreinovich, V. Imprecise probabilities in engineering analyses. Mech. Syst. Signal Process. 2013, 37, 4–29. [Google Scholar] [CrossRef]
- Chaudhuri, A.; Lam, R.; Willcox, K. Multifidelity uncertainty propagation via adaptive surrogates in coupled multidisciplinary systems. AIAA J. 2018, 56, 235–249. [Google Scholar] [CrossRef]
- Qiu, D.; Jin, X.; Xiang, L. On solving interval-valued optimization problems with TOPSIS decision model. Eng. Lett. 2022, 30, 1101–1106. [Google Scholar]
- Lanbaran, N.M.; Celik, E.; Yiğider, M. Evaluation of investment opportunities with interval-valued fuzzy TOPSIS method. Appl. Math. Nonlinear Sci. 2020, 5, 461–474. [Google Scholar] [CrossRef]
- Moore, R.E. Methods and Applications of Interval Analysis; SIAM: Philadelphia, PA, USA, 1979. [Google Scholar]
- Maity, G.; Roy, S.K.; Verdegay, J.L. Time variant multi-objective interval-valued transportation problem in sustainable development. Sustainability 2019, 11, 6161. [Google Scholar] [CrossRef]
- Zhang, J.; Li, S. The portfolio selection problem with random interval-valued return rates. Int. J. Innov. Comput. Inf. Control 2009, 5, 2847–2856. [Google Scholar]
- Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
- Pérez, L.R.; Prudente, L.F. Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 2018, 28, 2690–2720. [Google Scholar] [CrossRef]
- Sarkar, T.; Rao, S. The application of the conjugate gradient method for the solution of electromagnetic scattering from arbitrarily oriented wire antennas. IEEE Trans. Antennas Propag. 1984, 32, 398–403. [Google Scholar] [CrossRef]
- Pandey, V.; Bekele, A.; Ahmed, G.M.S.; Kanu, N.J. An application of conjugate gradient technique for determination of thermal conductivity as an inverse engineering problem. Mater. Today Proc. 2021, 47, 3082–3087. [Google Scholar] [CrossRef]
- Frank, M.S.; Balanis, C.A. A conjugate direction method for geophysical inversion problems. IEEE Trans. Geosci. Remote Sens. 1987, 25, 691–701. [Google Scholar] [CrossRef]
- Ishibuchi, H.; Tanaka, H. Multiobjective programming in optimization of the interval objective function. Eur. J. Oper. Res. 1990, 48, 219–225. [Google Scholar] [CrossRef]
- Wu, H.-C. The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function. Eur. J. Oper. Res. 2007, 176, 46–59. [Google Scholar] [CrossRef]
- Wu, H.-C. On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 2008, 338, 299–316. [Google Scholar] [CrossRef]
- Bhurjee, A.K.; Panda, G. Efficient solution of interval optimization problem. Math. Methods Oper. Res. 2012, 76, 273–288. [Google Scholar] [CrossRef]
- Roy, P.; Panda, G.; Qiu, D. Gradient-based descent line search to solve interval-valued optimization problems under gH-differentiability with application to finance. J. Comput. Appl. Math. 2024, 436, 115402. [Google Scholar] [CrossRef]
- Kumar, P.; Bhurjee, A.K. Multi-objective enhanced interval optimization problem. Ann. Oper. Res. 2022, 311, 1035–1050. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Pandey, R.K.; Liao, S. Newton’s method for interval-valued multiobjective optimization problem. J. Ind. Manag. Optim. 2024, 20, 1633–1661. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Pandey, R.K.; Pan, J.; Zeng, S. Quasi-Newton algorithms for solving interval-valued multiobjective optimization problems by using their certain equivalence. J. Comput. Appl. Math. 2024, 438, 115550. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Li, L.; Mishra, P. Nonsmooth interval-valued multiobjective optimization problems and generalized variational inequalities on Hadamard manifolds. Appl. Set-Valued Anal. Optim. 2023, 5, 69–84. [Google Scholar]
- Luo, S.; Guo, X. Multi-objective optimization of multi-microgrid power dispatch under uncertainties using interval optimization. J. Ind. Manag. Optim. 2023, 19, 823–851. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Pandey, R.K.; Zeng, S. A generalization of generalized Hukuhara Newton’s method for interval-valued multiobjective optimization problems. Fuzzy Sets Syst. 2024, 492, 109066. [Google Scholar] [CrossRef]
- Upadhyay, B.B.; Pandey, R.K.; Zeng, S.; Singh, S.K. On conjugate direction-type method for interval-valued multiobjective quadratic optimization problems. Numer. Algorithms 2024. [Google Scholar] [CrossRef]
- Wang, C.; Zhao, Y.; Tang, L.; Yang, X. Conjugate gradient methods without line search for multiobjective optimization. arXiv 2023, arXiv:2312.02461. [Google Scholar]
- Khoshsimaye-Bargard, M.; Ashrafi, A. A projected hybridization of the Hestenes-Stiefel and Dai-Yuan conjugate gradient methods with application to nonnegative matrix factorization. J. Appl. Math. Comput. 2025, 71, 551–571. [Google Scholar] [CrossRef]
- Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
- Stefanini, L.; Bede, B. Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 2009, 71, 1311–1328. [Google Scholar] [CrossRef]
- Ghosh, D.; Debnath, A.K.; Chauhan, R.S.; Castillo, O. Generalized-Hukuhara-gradient efficient-direction method to solve optimization problems with interval-valued functions and its application in least-squares problems. Int. J. Fuzzy Syst. 2022, 24, 1275–1300. [Google Scholar] [CrossRef]
- Stefanini, L.; Arana-Jiménez, M. Karush-Kuhn-Tucker conditions for interval and fuzzy optimization in several variables under total and directional generalized differentiability. Fuzzy Sets Syst. 2019, 362, 1–34. [Google Scholar] [CrossRef]
- Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 1992, 2, 21–42. [Google Scholar] [CrossRef]
- Apostol, T.M. Multi-Variable Calculus and Linear Algebra with Applications to Differential Equations and Probability; Wiley: New York, NY, USA, 1969. [Google Scholar]
- Nazareth, L. A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 1977, 23, 373–387. [Google Scholar] [CrossRef]
- Zhang, J.-J.; Zhang, D.-X.; Chen, J.-N.; Pang, L.-G.; Meng, D. On the uncertainty principle of neural networks. iScience 2025, 28, 112197. [Google Scholar] [CrossRef]
- Klyuev, R.V.; Morgoev, I.D.; Morgoeva, A.D.; Gavrina, O.A.; Martyushev, N.V.; Efremenkov, E.A.; Mengxu, Q. Methods of forecasting electric energy consumption: A literature review. Energies 2022, 15, 8919. [Google Scholar] [CrossRef]
- Shi, H.-J.M.; Xie, Y.; Byrd, R.; Nocedal, J. A noise-tolerant quasi-Newton algorithm for unconstrained optimization. SIAM J. Optim. 2022, 32, 29–55. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).