Article

Hestenes–Stiefel-Type Conjugate Direction Algorithm for Interval-Valued Multiobjective Optimization Problems

by Rupesh Krishna Pandey 1,†, Balendu Bhooshan Upadhyay 1,†, Subham Poddar 1,† and Ioan Stancu-Minasian 2,*,†
1 Department of Mathematics, Indian Institute of Technology Patna, Patna 801106, Bihar, India
2 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, 050711 Bucharest, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2025, 18(7), 381; https://doi.org/10.3390/a18070381
Submission received: 17 May 2025 / Revised: 19 June 2025 / Accepted: 20 June 2025 / Published: 23 June 2025

Abstract

This article investigates a class of interval-valued multiobjective optimization problems (IVMOPs). We define the Hestenes–Stiefel (HS)-type direction for the objective function of IVMOPs and establish that it has a descent property at noncritical points. An Armijo-like line search is employed to determine an appropriate step size. We present an HS-type conjugate direction algorithm for IVMOPs and establish the convergence of the sequence generated by the algorithm. We deduce that the proposed algorithm exhibits a linear order of convergence under appropriate assumptions. Moreover, we investigate the worst-case complexity of the sequence generated by the proposed algorithm. Furthermore, we furnish several numerical examples, including a large-scale IVMOP, to demonstrate the effectiveness of our proposed algorithm and solve them by employing MATLAB. To the best of our knowledge, the HS-type conjugate direction method has not yet been explored for the class of IVMOPs.

1. Introduction

Multiobjective optimization problems (MOPs) involve the simultaneous optimization of two or more conflicting objective functions. Vilfredo Pareto [1] introduced the concept of Pareto optimality in the context of economic systems. A solution is called Pareto optimal or efficient if none of the objective functions can be improved without deteriorating some of the other objective values [2]. MOPs arise in scenarios where trade-offs are required, such as balancing cost and quality in business operations or improving efficiency while reducing environmental impact in engineering [3,4]. As a consequence, various techniques and algorithms have been proposed to solve MOPs in different frameworks [5,6,7]. For a more detailed discussion on MOPs, we refer to [2,8,9] and the references cited therein.
In many real-world problems arising in engineering, science, and related fields, we often encounter data that are imprecise or uncertain. This uncertainty can arise from various factors such as unknown future developments, measurement or manufacturing errors, or incomplete information in model development [10,11]. In such contexts, it is common to model uncertain parameters or objective functions using intervals. Moreover, if the uncertainties involved in the objective functions of MOPs are represented as intervals, the resulting problems are referred to as interval-valued multiobjective optimization problems (IVMOPs). IVMOPs frequently arise in diverse fields such as transportation, economics, and business administration [12,13,14,15,16].
It is well-known that the conjugate direction method is a powerful optimization technique, widely employed for solving systems of linear equations and optimization problems [17,18]. Its computational strength has made it valuable in tackling a variety of real-world problems, including electromagnetic scattering [19], inverse engineering [20], and geophysical inversion [21].

2. Review of Related Works

The foundational work on IVOPs is attributed to Ishibuchi and Tanaka [22], who investigated IVOPs by transforming them into corresponding deterministic MOPs. Wu [23] derived optimality conditions for a class of constrained IVOPs, employing the notion of Hukuhara differentiability and assuming the convexity hypothesis on the objective and constraint functions. Moreover, Wu [24] developed Karush–Kuhn–Tucker-type optimality conditions for IVOPs and derived strong duality theorems that connect the primal problems with their associated dual problems. Bhurjee and Panda [25] explored IVOPs by defining interval-valued functions in parametric forms. More recently, Roy et al. [26] proposed a gradient-based descent line search technique employing the notion of generalized Hukuhara (gH) differentiability to solve IVOPs.
Kumar and Bhurjee [27] studied IVMOPs by transforming them into their corresponding deterministic MOPs and establishing the relationships between the solutions of IVMOPs and MOPs. Upadhyay et al. [28] introduced Newton’s method for IVMOPs and established the quadratic convergence of the sequence generated by Newton’s method under suitable assumptions. Subsequently, Upadhyay et al. [29] developed quasi-Newton methods for IVMOPs and demonstrated their efficacy in solving both convex and non-convex IVMOPs. For a more comprehensive and updated survey on IVMOPs, we refer to [30,31,32,33] and the references cited therein.
The conjugate direction method was first introduced by Hestenes and Stiefel [17], who developed the conjugate gradient method to solve a system of linear equations. Subsequently, Pérez and Prudente [18] introduced the HS-type conjugate direction algorithm for MOPs by employing an inexact line search. Wang et al. [34] introduced an HS-type conjugate direction algorithm for MOPs without employing line search techniques, and established the global convergence of the proposed method under suitable conditions. Recently, based on the memoryless Broyden–Fletcher–Goldfarb–Shanno update, Khoshsimaye-Bargard and Ashrafi [35] presented a convex hybridization of the Hestenes–Stiefel and Dai–Yuan conjugate parameters. For a more detailed discussion on the conjugate direction method, we refer to [18,36] and the references cited therein. From the above discussion, it is evident that HS-type conjugate direction methods have been developed to solve single-objective problems as well as MOPs. However, there is no research paper available in the literature that has explored the HS-type conjugate direction method for IVMOPs. The aim of this article is to fill the aforementioned research gaps by developing the HS-type conjugate direction method for a class of IVMOPs.
Motivated by the works of [17,18,34], in this paper, we investigate a class of IVMOPs and define the HS-type direction for the objective function of IVMOPs. A descent direction property of the HS-type direction is established at noncritical points. To determine an appropriate step size, we employ an Armijo-like line search. Moreover, an HS-type conjugate direction algorithm for IVMOPs is presented, and the convergence of this algorithm is established. Furthermore, under appropriate assumptions, we deduce that the proposed algorithm exhibits a linear order of convergence. In addition to this, the order of complexity of the proposed algorithm is investigated. Finally, the efficiency of the proposed method is demonstrated by solving various numerical problems employing MATLAB.
The primary contribution and novel aspects of the present article are as follows:
  • The results presented in this paper generalize several significant results from the existing literature. Specifically, we generalize the results established by Pérez and Prudente [18] on the HS-type method from MOPs to a more general class of optimization problems, namely, IVMOPs.
  • The algorithm introduced in this paper is more general than the steepest descent algorithm introduced by Fliege and Svaiter [5]. More specifically, if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5].
  • It is evident that Newton’s and quasi-Newton methods (see Upadhyay et al. [28,29]) can be applied to solve certain classes of IVMOPs in which the components of the objective function are twice continuously gH-differentiable as well as strongly convex. However, the HS-type conjugate direction algorithm proposed in this paper only requires continuous gH-differentiability of the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29].
  • To the best of our knowledge, this is the first time that the linear order of convergence of the HS-type conjugate direction method has been investigated along with its worst-case complexity.
The rest of the paper is structured as follows. In Section 3, we discuss some mathematical preliminaries that will be employed in the sequel. Section 4 presents an HS-type conjugate direction algorithm for IVMOPs. In Section 5, we establish the convergence of the sequence generated by the proposed algorithm and deduce that the sequence exhibits linear order convergence under suitable assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by the proposed algorithm. In Section 6, we demonstrate the efficiency of the proposed algorithm by solving several numerical examples via MATLAB. Finally, in Section 7, we provide our conclusions as well as future research directions.

3. Preliminaries

Throughout this article, the symbol $\mathbb{N}$ denotes the set of all natural numbers. For $n \in \mathbb{N}$, the symbol $\mathbb{R}^n$ refers to the $n$-dimensional Euclidean space. The symbol $\mathbb{R}_-$ refers to the collection of all negative real numbers. The symbols $\emptyset$ and $I_n$ are employed to denote the empty set and the identity matrix of order $n \times n$, respectively. For two matrices $A, B \in \mathbb{R}^{n \times n}$, the notation $B \preceq A$ is employed to denote that $A - B$ is positive semidefinite. For any $u \in \mathbb{R}^n$ and $r > 0$, the symbols $\mathbb{B}(u, r)$ and $\overline{\mathbb{B}}(u, r)$ denote the open and closed balls of radius $r$ centered at $u$, respectively. For any non-empty set $X$ and $m \in \mathbb{N}$, the notation $X^m$ represents the Cartesian product defined as
$$X^m := \underbrace{X \times \cdots \times X}_{m \ \text{times}}.$$
Let $y, u \in \mathbb{R}^n$. Then, the symbol $\langle y, u \rangle$ is defined as
$$\langle y, u \rangle := \sum_{k=1}^{n} y_k u_k.$$
For $u \in \mathbb{R}^n$, the symbol $\|u\|$ is defined as
$$\|u\| := \sqrt{\langle u, u \rangle}.$$
Let $n, m \in \mathbb{N}$. Then, the notations $I_n$ and $I_m$ are used to represent the following sets:
$$I_n := \{1, \ldots, n\}, \qquad I_m := \{1, \ldots, m\}.$$
Let $y, u \in \mathbb{R}^n$. The following notations are employed throughout this article:
$$y \leqq u \iff y_l \leq u_l \ \text{for all } l \in I_n, \qquad y < u \iff y_l < u_l \ \text{for all } l \in I_n, \qquad y \leq u \iff y \leqq u \ \text{and} \ y \neq u.$$
For $u \in \mathbb{R}^n$ and $G : \mathbb{R}^n \to \mathbb{R}$, we define:
$$\frac{\partial^+ G}{\partial u_l}(u) := \lim_{\beta \to 0^+} \frac{G(u_1, \ldots, u_l + \beta, \ldots, u_n) - G(u_1, \ldots, u_l, \ldots, u_n)}{\beta}, \tag{1}$$
and
$$\frac{\partial^- G}{\partial u_l}(u) := \lim_{\beta \to 0^-} \frac{G(u_1, \ldots, u_l + \beta, \ldots, u_n) - G(u_1, \ldots, u_l, \ldots, u_n)}{\beta}, \tag{2}$$
which denote the one-sided right and left $l$-th partial derivatives of $G$ at the point $u$, respectively, assuming the limits defined in (1) and (2) are well-defined.
If for every $l \in I_n$ and $\bar{u} \in \mathbb{R}^n$, $\frac{\partial^+ G}{\partial u_l}(\bar{u})$ and $\frac{\partial^- G}{\partial u_l}(\bar{u})$ exist, then we define:
$$\nabla^- G(\bar{u}) := \left( \frac{\partial^- G}{\partial u_1}(\bar{u}), \ldots, \frac{\partial^- G}{\partial u_n}(\bar{u}) \right), \qquad \nabla^+ G(\bar{u}) := \left( \frac{\partial^+ G}{\partial u_1}(\bar{u}), \ldots, \frac{\partial^+ G}{\partial u_n}(\bar{u}) \right).$$
The symbols $C(\mathbb{R})$ and $C(\mathbb{R}_-)$ are used to denote the following sets:
$$C(\mathbb{R}) := \left\{ [\underline{b}, \overline{b}] : \underline{b}, \overline{b} \in \mathbb{R}, \ \underline{b} \leq \overline{b} \right\}, \qquad C(\mathbb{R}_-) := \left\{ [\underline{b}, \overline{b}] : \underline{b}, \overline{b} \in \mathbb{R}, \ \underline{b} \leq \overline{b}, \ \overline{b} < 0 \right\}.$$
Let $p, q \in \mathbb{R}$. The symbol $p \vee q$ represents the following:
$$p \vee q := \left[ \min\{p, q\}, \max\{p, q\} \right].$$
The interval $X := [\underline{b}, \overline{b}] \in C(\mathbb{R})$ is referred to as a degenerate interval if and only if
$$\underline{b} = \overline{b}.$$
Let $X := [\underline{p}, \overline{p}]$, $Z := [\underline{q}, \overline{q}] \in C(\mathbb{R})$, and $\xi \in \mathbb{R}$. Corresponding to $X$, $Z$, and $\xi$, we define the following algebraic operations [28]:
$$X \oplus Z := [\underline{p} + \underline{q}, \overline{p} + \overline{q}], \qquad X \ominus Z := [\underline{p} - \overline{q}, \overline{p} - \underline{q}], \qquad \xi X := \begin{cases} [\xi \underline{p}, \xi \overline{p}], & \xi \geq 0, \\ [\xi \overline{p}, \xi \underline{p}], & \xi < 0. \end{cases}$$
The subsequent definition is from [37].
Definition 1. 
Consider an arbitrary set $E := [\underline{e}, \overline{e}] \in C(\mathbb{R})$. Then, the symbol $\|E\|_I$ represents the norm of $E$ and is defined as follows:
$$\|E\|_I := \max\{ |\underline{e}|, |\overline{e}| \}.$$
For $X := [\underline{p}, \overline{p}]$ and $Z := [\underline{q}, \overline{q}] \in C(\mathbb{R})$, we adopt the following notations throughout the article:
$$X \preceq_{LU} Z \iff \underline{p} \leq \underline{q} \ \text{and} \ \overline{p} \leq \overline{q}, \qquad X \prec_{LU} Z \iff \underline{p} < \underline{q} \ \text{and} \ \overline{p} < \overline{q}, \qquad X \precneqq_{LU} Z \iff X \preceq_{LU} Z \ \text{and} \ X \neq Z.$$
Let $E = (E_1, \ldots, E_m)$, $R = (R_1, \ldots, R_m) \in C(\mathbb{R})^m$. The ordered relations between $E$ and $R$ are described as follows:
$$E \preceq_m R \iff E_k \preceq_{LU} R_k \ \text{for all } k \in I_m, \qquad E \prec_m R \iff E_k \prec_{LU} R_k \ \text{for all } k \in I_m.$$
The following definition is from [37].
Definition 2. 
For arbitrary intervals $X := [\underline{p}, \overline{p}]$ and $Z := [\underline{q}, \overline{q}] \in C(\mathbb{R})$, the symbol $X \ominus_{gH} Z$ represents the gH-difference between $X$ and $Z$ and is defined as follows:
$$X \ominus_{gH} Z := (\underline{p} - \underline{q}) \vee (\overline{p} - \overline{q}).$$
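To make the interval arithmetic above concrete, the following minimal Python sketch (ours, not part of the paper) implements the operations $\oplus$, scalar multiplication, the norm $\|\cdot\|_I$, and the gH-difference, assuming intervals are represented as (lower, upper) tuples.

```python
# Minimal sketch of the interval operations defined above; intervals are (lower, upper) tuples.

def iv_add(X, Z):
    """X ⊕ Z = [p̲ + q̲, p̄ + q̄]."""
    return (X[0] + Z[0], X[1] + Z[1])

def iv_scale(xi, X):
    """ξ·X, with the endpoints swapped when ξ < 0."""
    return (xi * X[0], xi * X[1]) if xi >= 0 else (xi * X[1], xi * X[0])

def iv_norm(E):
    """‖E‖_I = max{|e̲|, |ē|}."""
    return max(abs(E[0]), abs(E[1]))

def iv_gh_diff(X, Z):
    """gH-difference: the interval formed by the min and max of the endpoint differences."""
    a, b = X[0] - Z[0], X[1] - Z[1]
    return (min(a, b), max(a, b))

# Example: [1, 3] ⊖_gH [0, 4] = [-1, 1]
print(iv_gh_diff((1.0, 3.0), (0.0, 4.0)))
```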
The notion of gH-continuity for G : R n C ( R ) is recalled in the subsequent definition [37].
Definition 3. 
The function $G : \mathbb{R}^n \to C(\mathbb{R})$ is said to be a gH-continuous function at a point $\bar{u} \in \mathbb{R}^n$ if for any $\epsilon > 0$, there exists some $\delta_\epsilon > 0$ such that for any $u \in \mathbb{R}^n$ satisfying $\|u - \bar{u}\| < \delta_\epsilon$, the following inequality holds:
$$\| G(u) \ominus_{gH} G(\bar{u}) \|_I < \epsilon.$$
In the following definition, we recall the notion of the gH-Lipschitz continuity property for the class of interval-valued functions (for instance, see [38]).
Definition 4. 
The function $G : \mathbb{R}^n \to C(\mathbb{R})$ is said to be gH-Lipschitz continuous on $\mathbb{R}^n$ with Lipschitz constant $L > 0$, if for every $u, v \in \mathbb{R}^n$, the following inequality is satisfied:
$$\| G(u) \ominus_{gH} G(v) \|_I \leq L \|u - v\|.$$
Remark 1. 
In view of Definition 4, it follows that if $G : \mathbb{R}^n \to C(\mathbb{R})$ is gH-Lipschitz continuous with Lipschitz constant $L > 0$ on $\mathbb{R}^n$, then the function $\zeta : \mathbb{R}^n \to \mathbb{R}$, defined by
$$\zeta(u) := \underline{G}(u) + \overline{G}(u), \quad \text{for all } u \in \mathbb{R}^n,$$
is also Lipschitz continuous with Lipschitz constant $2L$ on $\mathbb{R}^n$.
The subsequent definition is from Upadhyay et al. [29].
Definition 5. 
The function $G : \mathbb{R}^n \to C(\mathbb{R})$ is said to be a convex function if, for all $u_1, u_2 \in \mathbb{R}^n$ and any $\beta \in [0, 1]$, the following inequality holds:
$$G((1 - \beta) u_1 + \beta u_2) \preceq_{LU} (1 - \beta) G(u_1) \oplus \beta G(u_2).$$
Moreover, $G$ is said to be locally convex at a point $\bar{u} \in \mathbb{R}^n$ if there exists some neighborhood $V$ of $\bar{u}$ such that the restriction of $G$ to $V$ is convex.
The subsequent definitions are from [39].
Definition 6. 
Let $\bar{u} \in \mathbb{R}^n$ and $d \in \mathbb{R}^n$ with $d \neq 0$. The gH-directional derivative of $G : \mathbb{R}^n \to C(\mathbb{R})$ at $\bar{u}$ in the direction $d$ is defined as follows:
$$D_{gH} G(\bar{u}, d) := \lim_{\beta \to 0^+} \frac{G(\bar{u} + \beta d) \ominus_{gH} G(\bar{u})}{\beta},$$
provided that the above limit exists.
Definition 7. 
Let $\bar{u} \in \mathbb{R}^n$ and $G : \mathbb{R}^n \to C(\mathbb{R})$ be defined as follows:
$$G(u) := [\underline{G}(u), \overline{G}(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
Let the functions $G_1, G_2 : \mathbb{R}^n \to \mathbb{R}$ be defined as follows:
$$G_1(u) := \frac{\overline{G}(u) + \underline{G}(u)}{2}, \qquad G_2(u) := \frac{\overline{G}(u) - \underline{G}(u)}{2}, \quad \text{for all } u \in \mathbb{R}^n.$$
The mapping $G$ is said to be gH-differentiable at $\bar{u}$ if there exist vectors $w_1, w_2 \in \mathbb{R}^n$ with $w_1 := (w_1^{(1)}, \ldots, w_1^{(n)})$ and $w_2 := (w_2^{(1)}, \ldots, w_2^{(n)})$, and error functions $F_1, F_2 : \mathbb{R}^n \to \mathbb{R}$, such that $\lim_{z \to 0} F_1(z) = \lim_{z \to 0} F_2(z) = 0$, and for all $z \neq 0$ the following hold:
$$G_1(\bar{u} + z) - G_1(\bar{u}) = \sum_{k=1}^{n} w_1^{(k)} z_k + \|z\| F_1(z),$$
and
$$G_2(\bar{u} + z) - G_2(\bar{u}) = \sum_{k=1}^{n} w_2^{(k)} z_k + \|z\| F_2(z).$$
If $G$ is gH-differentiable at every element $\bar{u} \in \mathbb{R}^n$, then $G$ is said to be gH-differentiable on $\mathbb{R}^n$.
The proof of the following theorem can be established employing Theorem 5 and Propositions 9 and 11 from [39].
Theorem 1. 
Let $\bar{u} \in \mathbb{R}^n$ and let $G : \mathbb{R}^n \to C(\mathbb{R})$ be defined as follows:
$$G(u) := [\underline{G}(u), \overline{G}(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
If $G$ is gH-differentiable at $\bar{u} \in \mathbb{R}^n$, then for any $d \in \mathbb{R}^n$, one of the following conditions is fulfilled:
(i) The gradients $\nabla \underline{G}(\bar{u})$ and $\nabla \overline{G}(\bar{u})$ exist, and
$$D_{gH} G(\bar{u}; d) = \langle \nabla \underline{G}(\bar{u}), d \rangle \vee \langle \nabla \overline{G}(\bar{u}), d \rangle.$$
(ii) $\nabla^- \underline{G}(\bar{u})$, $\nabla^- \overline{G}(\bar{u})$, $\nabla^+ \underline{G}(\bar{u})$, and $\nabla^+ \overline{G}(\bar{u})$ exist and satisfy:
$$\nabla^- \underline{G}(\bar{u}) = \nabla^+ \overline{G}(\bar{u}), \qquad \nabla^- \overline{G}(\bar{u}) = \nabla^+ \underline{G}(\bar{u}).$$
Moreover,
$$D_{gH} G(\bar{u}; d) := \langle \nabla^- \underline{G}(\bar{u}), d \rangle \vee \langle \nabla^- \overline{G}(\bar{u}), d \rangle,$$
or,
$$D_{gH} G(\bar{u}; d) := \langle \nabla^+ \underline{G}(\bar{u}), d \rangle \vee \langle \nabla^+ \overline{G}(\bar{u}), d \rangle.$$
The following definition is from [39].
Definition 8. 
Let $G$ be gH-differentiable at $\bar{u} \in \mathbb{R}^n$. Then, the gH-gradient of $G$ at $\bar{u}$ is defined as follows:
$$\nabla_{gH} G(\bar{u}) := \left( D_{gH} G(\bar{u}; e_1), \ldots, D_{gH} G(\bar{u}; e_n) \right),$$
where $e_l$ $(l \in I_n)$ denotes the $l$-th canonical direction in $\mathbb{R}^n$.
We recall the following proposition from [28].
Proposition 1. 
Let $\bar{u} \in \mathbb{R}^n$ and let $G : \mathbb{R}^n \to C(\mathbb{R})$ be defined as follows:
$$G(u) := [\underline{G}(u), \overline{G}(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
If the function $G$ is $m$-times gH-differentiable at $\bar{u} \in \mathbb{R}^n$, then the function $\zeta : \mathbb{R}^n \to \mathbb{R}$, defined in Remark 1, is also $m$-times differentiable at $\bar{u}$.
We define the interval-valued vector function $H : \mathbb{R}^n \to C(\mathbb{R})^m$ as follows:
$$H(u) := (H_1(u), \ldots, H_m(u)), \quad \text{for all } u \in \mathbb{R}^n,$$
where $H_k : \mathbb{R}^n \to C(\mathbb{R})$ $(k \in I_m)$ are interval-valued functions.
The following two definitions are from [32].
Definition 9. 
Let $H : \mathbb{R}^n \to C(\mathbb{R})^m$ and $\bar{u} \in \mathbb{R}^n$. Suppose that every component of $H$ possesses gH-directional derivatives. Then, $\bar{u}$ is called a critical point of $H$, provided that there does not exist any $d \in \mathbb{R}^n$ satisfying:
$$D_{gH} H(\bar{u}, d) \cap (C(\mathbb{R}_-))^m \neq \emptyset,$$
where $D_{gH} H(\bar{u}, d) := \left( D_{gH} H_1(\bar{u}, d), \ldots, D_{gH} H_m(\bar{u}, d) \right)$.
Definition 10. 
An element $d \in \mathbb{R}^n$ is referred to as the descent direction of $H : \mathbb{R}^n \to C(\mathbb{R})^m$ at a point $\bar{u} \in \mathbb{R}^n$, provided that some $\beta \in \mathbb{R}$, $\beta > 0$, exists, satisfying:
$$H(\bar{u} + t d) \prec_m H(\bar{u}), \quad \text{for all } t \in (0, \beta].$$
Definition 11. 
Let $\bar{u} \in \mathbb{R}^n$. The function $H : \mathbb{R}^n \to C(\mathbb{R})^m$ is said to be continuously gH-differentiable at $\bar{u}$ if every component of $H$ is continuously gH-differentiable at $\bar{u}$.
Remark 2. 
In view of Definitions 6 and 10, it follows that if $H : \mathbb{R}^n \to C(\mathbb{R})^m$ is continuously gH-differentiable at $\bar{u} \in \mathbb{R}^n$ and if $d \in \mathbb{R}^n$ is a descent direction of $H$ at $\bar{u}$, then
$$D_{gH} H_k(\bar{u}, d) \preceq_{LU} [0, 0], \quad \text{for all } k \in I_m.$$

4. HS-Type Conjugate Direction Method for IVMOPs

In this section, we present an HS-type conjugate direction method for IVMOPs. Moreover, we establish the convergence of the sequence generated by this method.
Consider the following IVMOP:
$$\text{(IVMOP)} \qquad \text{Minimize } H(u) := (H_1(u), \ldots, H_m(u)), \quad \text{subject to } u \in \mathbb{R}^n,$$
where the functions $H_k : \mathbb{R}^n \to C(\mathbb{R})$ $(k \in I_m)$ are defined as
$$H_k(u) := [\underline{H}_k(u), \overline{H}_k(u)], \quad \text{for all } u \in \mathbb{R}^n.$$
The functions $H_k$ $(k \in I_m)$ are assumed to be continuously gH-differentiable unless otherwise specified.
The notions of effective and weak effective solutions for IVMOP are recalled in the subsequent definition [28].
Definition 12. 
A point $\bar{u} \in \mathbb{R}^n$ is said to be an effective solution of the IVMOP if there is no other point $u \in \mathbb{R}^n$ such that
$$H(u) \preceq_m H(\bar{u}) \quad \text{and} \quad H(u) \neq H(\bar{u}).$$
Similarly, a point $\bar{u} \in \mathbb{R}^n$ is said to be a weak effective solution of the IVMOP provided that there is no other point $u \in \mathbb{R}^n$ for which:
$$H(u) \prec_m H(\bar{u}).$$
In the rest of the article, we employ P to represent the set of all critical points of H .
Let $\bar{u} \in \mathbb{R}^n$. In order to determine the descent direction for the objective function $H$ of IVMOP, we consider the following scalar optimization problem with interval-valued constraints [32]:
$$(P)_{\bar{u}} \qquad \text{Minimize } \varphi(\alpha, d) := \alpha, \quad \text{subject to } D_{gH} H_k(\bar{u}, d) \oplus \left[ \tfrac{1}{2} \|d\|^2, \tfrac{1}{2} \|d\|^2 \right] \preceq_{LU} [\alpha, \alpha], \ k \in I_m,$$
where $\varphi : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}$ is a real-valued function. It can be shown that the problem $(P)_{\bar{u}}$ has a unique solution.
Any feasible point of the problem $(P)_{\bar{u}}$ is represented as $(\alpha_{\bar{u}}, d_{\bar{u}})$, where $\alpha_{\bar{u}} \in \mathbb{R}$ and $d_{\bar{u}} \in \mathbb{R}^n$. Let $K_{\bar{u}} \subseteq \mathbb{R}^{n+1}$ denote the feasible set of $(P)_{\bar{u}}$. We consider the functions $d : \mathbb{R}^n \to \mathbb{R}^n$ and $\alpha : \mathbb{R}^n \to \mathbb{R}$, which are defined as follows:
$$(\alpha(\bar{u}), d(\bar{u})) := \arg\min_{(\alpha_{\bar{u}}, d_{\bar{u}}) \in K_{\bar{u}}} \varphi(\alpha_{\bar{u}}, d_{\bar{u}}).$$
From now onwards, for any $u \in \mathbb{R}^n$, the notation $(\alpha(u), d(u))$ will be used to represent the optimal solution of the problem $(P)_u$.
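As an illustration of how $(P)_{\bar{u}}$ can be solved numerically, the following Python sketch uses SciPy's SLSQP solver, under the assumption that the endpoint gradients $\nabla \underline{H}_k(\bar{u})$ and $\nabla \overline{H}_k(\bar{u})$ exist, so that (by Theorem 1(i)) the interval constraint reduces to the two scalar inequalities $\langle \nabla \underline{H}_k(\bar{u}), d \rangle + \tfrac{1}{2}\|d\|^2 \leq \alpha$ and $\langle \nabla \overline{H}_k(\bar{u}), d \rangle + \tfrac{1}{2}\|d\|^2 \leq \alpha$; the function name solve_subproblem is ours, not the paper's.

```python
# Sketch of a numerical solver for (P)_u; assumes the endpoint gradients exist.
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(grad_lo, grad_up):
    """grad_lo, grad_up: (m, n) arrays whose rows are ∇H̲_k(u) and ∇H̄_k(u).
    Returns (alpha(u), d(u)) for the subproblem (P)_u."""
    m, n = grad_lo.shape
    x0 = np.zeros(n + 1)                  # decision vector x = (alpha, d)

    objective = lambda x: x[0]            # minimize alpha

    def constraints(x):
        alpha, d = x[0], x[1:]
        quad = 0.5 * d @ d
        # SciPy expects g(x) >= 0, i.e. alpha - <g, d> - ½‖d‖² >= 0 for each gradient row
        return np.array([alpha - g @ d - quad for g in np.vstack([grad_lo, grad_up])])

    res = minimize(objective, x0, method="SLSQP",
                   constraints={"type": "ineq", "fun": constraints})
    return res.x[0], res.x[1:]
```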
Now, for every $k \in I_m$, we consider a function $\zeta_k : \mathbb{R}^n \to \mathbb{R}$, defined as follows:
$$\zeta_k(u) := \overline{H}_k(u) + \underline{H}_k(u), \quad \text{for all } u \in \mathbb{R}^n.$$
Remark 3. 
Since $(\alpha(\bar{u}), d(\bar{u}))$ is a solution of the problem $(P)_{\bar{u}}$, therefore for all $k \in I_m$, it follows that:
$$D_{gH} H_k(\bar{u}, d(\bar{u})) \oplus \left[ \tfrac{1}{2} \|d(\bar{u})\|^2, \tfrac{1}{2} \|d(\bar{u})\|^2 \right] \preceq_{LU} [\alpha(\bar{u}), \alpha(\bar{u})].$$
This implies that the function $\zeta_k : \mathbb{R}^n \to \mathbb{R}$ satisfies the following inequality:
$$\nabla \zeta_k(\bar{u})^T d(\bar{u}) \leq 2 \alpha(\bar{u}), \quad \text{for all } k \in I_m.$$
In the subsequent discussions, we utilize the following lemmas from Upadhyay et al. [32].
Lemma 1. 
Let $\bar{u} \in \mathbb{R}^n$. If $\bar{u} \notin P$, then $d(\bar{u})$ is a descent direction at $\bar{u}$ for $H$.
Lemma 2. 
For $\bar{u} \in \mathbb{R}^n$, the following properties hold:
(i) If $\bar{u} \in P$, then $d(\bar{u}) = 0 \in \mathbb{R}^n$ and $\alpha(\bar{u}) = 0$.
(ii) If $\bar{u} \notin P$, then $\alpha(\bar{u}) < 0$.
Remark 4. 
From Lemma 1, it follows that if $\bar{u} \notin P$, then the optimal solution of $(P)_{\bar{u}}$ yields a descent direction. Furthermore, from Lemma 2, it can be inferred that the value of $\alpha(\bar{u})$ can be utilized to determine whether $\bar{u} \in P$ or not. Specifically, for any given point $\bar{u} \in \mathbb{R}^n$, if $\alpha(\bar{u}) = 0$, then $\bar{u} \in P$. Otherwise, $\bar{u} \notin P$, and in this case, $d(\bar{u})$ serves as a descent direction at $\bar{u}$ for $H$.
We recall the following result from Upadhyay et al. [32].
Lemma 3. 
Let $\bar{u} \in P$. If the functions $H_k$ $(k \in I_m)$ are locally convex at $\bar{u}$, then $\bar{u}$ is a locally weak effective solution of IVMOP.
To introduce the Hestenes–Stiefel-type direction for IVMOP, we define a function $\tilde{\Phi} : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ as follows:
$$\tilde{\Phi}(u, d) := \max_{k \in I_m} \overline{D}_{gH} H_k(u, d), \quad \text{for all } (u, d) \in \mathbb{R}^n \times \mathbb{R}^n, \tag{3}$$
where
$$D_{gH} H_k(u, d) := \left[ \underline{D}_{gH} H_k(u, d), \overline{D}_{gH} H_k(u, d) \right], \quad \text{for all } (u, d) \in \mathbb{R}^n \times \mathbb{R}^n.$$
In the following lemma, we establish the relationship between the critical point of H and the function Φ ˜ .
Lemma 4. 
Let $\tilde{\Phi} : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be defined in (3), and $\bar{u} \in \mathbb{R}^n$. Then, $\bar{u}$ is a critical point of $H$ if and only if
$$\tilde{\Phi}(\bar{u}, d) \geq 0, \quad \text{for all } d \in \mathbb{R}^n.$$
Proof. 
Let $\bar{u}$ be a critical point of $H$. Then, by Definition 9, for every $d \in \mathbb{R}^n$ there exists $k \in I_m$ such that
$$D_{gH} H_k(\bar{u}, d) \cap C(\mathbb{R}_-) = \emptyset.$$
Consequently, it follows that $\overline{D}_{gH} H_k(\bar{u}, d) \geq 0$, which implies
$$\tilde{\Phi}(\bar{u}, d) \geq 0.$$
Conversely, suppose that
$$\tilde{\Phi}(\bar{u}, d) \geq 0, \quad \text{for all } d \in \mathbb{R}^n.$$
Then, for any $d \in \mathbb{R}^n$ there exists $k \in I_m$ such that
$$\tilde{\Phi}(\bar{u}, d) = \overline{D}_{gH} H_k(\bar{u}, d) \geq 0.$$
This further implies that for any $d \in \mathbb{R}^n$,
$$D_{gH} H(\bar{u}, d) \cap (C(\mathbb{R}_-))^m = \emptyset.$$
Therefore, it follows that $\bar{u}$ is a critical point of $H$. This completes the proof.    □
We establish the following lemma, which will be used in the sequel.
Lemma 5. 
Let $\bar{u} \in \mathbb{R}^n$ and let $(\alpha(\bar{u}), d(\bar{u}))$ be the optimal solution of the problem $(P)_{\bar{u}}$. Then
$$\alpha(\bar{u}) = \tilde{\Phi}(\bar{u}, d(\bar{u})).$$
Proof. 
Since α ( u ¯ ) , d ( u ¯ ) ) K u ¯ , therefore we have
D g H H k u ¯ , d ( u ¯ ) L U [ α ( u ¯ ) , α ( u ¯ ) ] , for all k I m .
From (4), we obtain
D ¯ g H H k u ¯ , d ( u ¯ ) α ( u ¯ ) , for all k I m .
Consequently,
Φ ˜ u ¯ , d ( u ¯ ) α ( u ¯ ) .
Let us define α u ¯ : = Φ ˜ u ¯ , d ( u ¯ ) . Then, we have
D ̲ g H H k u ¯ , d ( u ¯ ) D ¯ g H H k u ¯ , d ( u ¯ ) Φ ˜ u ¯ , d ( u ¯ ) = α u ¯ , for all k I m .
Therefore, we obtain
D g H H k u ¯ , d ( u ¯ ) L U [ α u ¯ , α u ¯ ] , for all k I m .
This implies that α u ¯ , d ( u ¯ ) K u ¯ . Since α ( u ¯ ) , d ( u ¯ ) ) is the optimal solution of the problem ( P ) u ¯ , we obtain
α ( u ¯ ) α u ¯ = Φ ˜ u ¯ , d ( u ¯ ) .
Combining (5) and (6), we conclude
α ( u ¯ ) = Φ ˜ u ¯ , d ( u ¯ ) .
This completes the proof.    □
Let $s \in \{0, 1, 2, \ldots\}$ be fixed and let $\{u^{(r)}\}_{0 \leq r \leq s} \subseteq \mathbb{R}^n$. Now, we introduce a Hestenes–Stiefel-type direction (HS-type direction) $w_s^{HS}$ at $u^{(s)}$:
$$w_s^{HS} := \begin{cases} d(u^{(s)}), & \text{if } s = 0, \\ d(u^{(s)}) + \beta_s^{HS} w_{s-1}^{HS}, & \text{if } s \geq 1, \end{cases} \tag{7}$$
where $w_{s-1}^{HS}$ represents the HS-type direction at the $(s-1)$-th step, and for $s \geq 1$, $\beta_s^{HS}$ is defined as follows:
$$\beta_s^{HS} := \frac{-\tilde{\Phi}\left(u^{(s)}, d(u^{(s)})\right) + \tilde{\Phi}\left(u^{(s-1)}, d(u^{(s-1)})\right)}{\tilde{\Phi}\left(u^{(s)}, w_{s-1}^{HS}\right) - \tilde{\Phi}\left(u^{(s-1)}, w_{s-1}^{HS}\right)}. \tag{8}$$
Remark 5. 
If every component of the objective function H of the IVMOP is a real-valued function rather than an interval-valued function, that is, H : R n R m , then Equation (7) reduces to the HS-type direction defined for vector-valued functions, as considered by Pérez and Prudente [18]. As a result, the parameter β s HS introduced in (8) extends the HS-type direction from MOPs to IVMOPs, which belong to a broader class of optimization problems. Moreover, when m = 1 , Equation (7) further reduces to the classical HS-type direction for a real-valued function, defined by Hestenes and Stiefel [17].
It can be observed that $\beta_s^{HS}$, defined in (8), becomes undefined when
$$\tilde{\Phi}\left(u^{(s)}, w_{s-1}^{HS}\right) = \tilde{\Phi}\left(u^{(s-1)}, w_{s-1}^{HS}\right),$$
and the direction $w_s^{HS}$ defined in Equation (7) may not provide a descent direction. Therefore, to address this issue, we adopt an approach similar to that proposed by Gilbert and Nocedal [40] and Pérez and Prudente [18]. Hence, we define $\beta_s$ and $w^{(s)}$ as follows:
$$\beta_s := \begin{cases} 0, & \text{if } \beta_s^{HS} < 0 \text{ or undefined, or } s = 0; \\[2pt] 0, & \text{if } \tilde{\Phi}\left(u^{(s-1)}, d(u^{(s-1)})\right) > \tilde{\Phi}\left(u^{(s)}, d(u^{(s)})\right); \\[2pt] \dfrac{-\tilde{\Phi}\left(u^{(s)}, d(u^{(s)})\right) + \tilde{\Phi}\left(u^{(s-1)}, d(u^{(s-1)})\right)}{\tilde{\Phi}\left(u^{(s)}, w^{(s-1)}\right) - \tilde{\Phi}\left(u^{(s-1)}, w^{(s-1)}\right)}, & \text{otherwise}, \end{cases} \tag{9}$$
and
$$w^{(s)} := d(u^{(s)}) + \beta_s w^{(s-1)}. \tag{10}$$
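A minimal Python sketch of the safeguarded parameter (9) and the direction update (10) is given below; the function names are ours, Phi stands for $\tilde{\Phi}$, and the iterates and directions are assumed to be NumPy vectors.

```python
# Sketch of the safeguarded conjugate parameter (9) and direction update (10).

def beta_safeguarded(Phi, u_prev, u_cur, d_prev, d_cur, w_prev, s):
    """Return β_s following (9); Phi(u, direction) evaluates Φ̃."""
    if s == 0:
        return 0.0
    denom = Phi(u_cur, w_prev) - Phi(u_prev, w_prev)
    if denom == 0.0:                       # β_s^HS undefined in (8)
        return 0.0
    if Phi(u_prev, d_prev) > Phi(u_cur, d_cur):
        return 0.0
    beta = (-Phi(u_cur, d_cur) + Phi(u_prev, d_prev)) / denom
    return max(beta, 0.0)                  # discard negative values of β_s^HS

def direction_update(d_cur, beta, w_prev):
    """w^(s) = d(u^(s)) + β_s w^(s-1), cf. (10); assumes NumPy vectors."""
    return d_cur + beta * w_prev
```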
In the following lemma, we establish an inequality that relates the directional derivative of H k ( k I m ) at point u ( s ) in the direction w ( s 1 ) to β s .
Lemma 6. 
Let $u^{(s)}$ and $u^{(s-1)}$ be noncritical points of $H$. Suppose that $w^{(s-1)}$ represents a descent direction of $H_k$ $(k \in I_m)$ at the point $u^{(s-1)}$. Then, we have
$$\beta_s D_{gH} H_k\left(u^{(s)}, w^{(s-1)}\right) \preceq_{LU} [0, 0], \quad \text{for all } k \in I_m. \tag{11}$$
Moreover, for every $k \in I_m$, we have:
$$\beta_s \nabla \zeta_k(u^{(s)})^T w^{(s-1)} \leq 0.$$
Proof. 
In view of Definition 9, it follows that β s 0 . Now, the following two possible cases may arise:
Case 1: 
If β s = 0 , then
β s D g H H k ( u ( s ) , w ( s 1 ) ) = [ 0 , 0 ] , for all k I m .
Therefore, the inequality in (11) is satisfied.
Case 2: 
Let β s > 0 . Our aim is to prove that
β s D g H H k ( u ( s ) , w ( s 1 ) ) L U [ 0 , 0 ] , for all k I m .
Since β s > 0 , it suffices to show that
D g H H k ( u ( s ) , w ( s 1 ) ) L U [ 0 , 0 ] , for all k I m .
On the contrary, assume that there exists k I m such that
D g H H k ( u ( s ) , w ( s 1 ) ) L U [ 0 , 0 ] .
Therefore, from (12), we have the following inequality:
D ̲ g H H k ( u ( s ) , w ( s 1 ) ) > 0 ,
or
D ¯ g H H k ( u ( s ) , w ( s 1 ) ) > 0 .
Since D ̲ g H H k ( u ( s ) , w ( s 1 ) ) D ¯ g H H k ( u ( s ) , w ( s 1 ) ) , therefore it follows from above that
D ¯ g H H k ( u ( s ) , w ( s 1 ) ) > 0 ,
which in turn implies that
Φ ˜ u ( s ) , w ( s 1 ) = max k I m D ¯ g H H k u ( s ) , w ( s 1 ) > 0 .
Since w ( s 1 ) is a descent direction of H at point u ( s 1 ) , we have
D g H H k ( u ( s 1 ) , w ( s 1 ) ) L U [ 0 , 0 ] , for all k I m .
This implies that
Φ ˜ u ( s 1 ) , w ( s 1 ) 0 .
From (13) and (14) we obtain
Φ ˜ u ( s ) , w ( s 1 ) Φ ˜ u ( s 1 ) , w ( s 1 ) > 0 .
Now, if
Φ ˜ u ( s 1 ) , d u ( s 1 ) > Φ ˜ u ( s ) , d u ( s ) ,
then from Definition 9, we obtain
β s = 0 ,
which contradicts the assumption that β s > 0 . On the other hand, if
Φ ˜ u ( s 1 ) , d u ( s 1 ) Φ ˜ u ( s ) , d u ( s ) ,
then
Φ ˜ u ( s 1 ) , d u ( s 1 ) Φ ˜ u ( s ) , d u ( s ) 0 .
Using (15) and Definition 9 we obtain
β s 0 ,
which is a contradiction. Therefore, we have
β s D g H H k ( u ( s ) , w ( s 1 ) ) L U [ 0 , 0 ] , for all k I m .
Now, from (16) and in view of Theorem 1, it follows that:
β s ζ k ( u ( s ) ) T w ( s 1 ) 0 ,
for every k I m . This completes the proof.    □
Notably, for s = 0 , the direction w ( s ) , as defined in Equation (10), coincides with d ( u ( s ) ) . Thus, by Lemma 1, we conclude that w ( s ) serves as a descent direction at u ( s ) for s = 0 . Therefore, in the following theorem, we establish that w ( s ) serves as a descent direction at u ( s ) for s 1 , under appropriate assumptions.
Theorem 2. 
Let $u^{(r)}$, $r \in \{0, 1, \ldots, s\}$, be noncritical points of $H$. Suppose that $w^{(r)}$, as defined in Equation (10), serves as a descent direction of $H$ at $u^{(r)}$ for all $r \in \{0, 1, \ldots, s-1\}$. Then, $w^{(s)}$ serves as a descent direction at $u^{(s)}$ for the function $H$.
Proof. 
Since the functions H k ( k I m ) are continuously gH-differentiable, therefore, to prove that w ( s ) is a descent direction at u ( s ) , it is sufficient to show that
D g H H k ( u ( s ) , w ( s ) ) < L U [ 0 , 0 ] , for all k I m .
Let s 1 be fixed. From Theorem 1 we have
D g H H k ( u ( s ) , w ( s ) ) = D g H H k ( u ( s ) , d ( u ( s ) ) + β s w ( s 1 ) ) , = + H ̲ k u ( s ) T d ( u ( s ) ) + β s w ( s 1 ) + H ¯ k u ( s ) T d ( u ( s ) ) + β s w ( s 1 ) , for all k I m .
Consider
+ H ̲ k u ( s ) T d ( u ( s ) ) + β s w ( s 1 ) = + H ̲ k u ( s ) T d ( u ( s ) ) + β s + H ̲ k u ( s ) T w ( s 1 ) . α u ( s ) + β s D ¯ g H H k ( u ( s ) , w ( s 1 ) ) , for all k I m .
Therefore, from (18), and Lemmas 2 and 6, we obtain
+ H ̲ k u ( s ) T d ( u ( s ) ) + β s w ( s 1 ) < 0 , for all k I m .
Similarly, we can prove that
+ H ¯ k u ( s ) T d ( u ( s ) ) + β s w ( s 1 ) < 0 , for all k I m .
Therefore, from (17), (19), and (20) we conclude that
D g H H k ( u ( s ) , w ( s ) ) < L U [ 0 , 0 ] , for all k I m .
This completes the proof.    □
Now, we introduce an Armijo-like line search method for the objective function H of IVMOP.
Consider $u, w \in \mathbb{R}^n$ such that $w$ is a descent direction at $u$ for the function $H$. Let $\gamma \in (0, 1)$. A step length $t$ is acceptable if it satisfies the following condition:
$$H_k(u + t w) \ominus_{gH} H_k(u) \preceq_{LU} (\gamma t) D_{gH} H_k(u, w), \quad \text{for all } k \in I_m. \tag{22}$$
Remark 6. 
If every component of the objective function $H$ of the IVMOP is a real-valued function rather than an interval-valued function, that is, $H : \mathbb{R}^n \to \mathbb{R}^m$, then (22) reduces to the following Armijo-like line search, defined by Fliege and Svaiter [5]:
$$H(u + t w) \leqq H(u) + \gamma t \, JH(u) w,$$
where $JH(u)$ represents the Jacobian of $H$ at $u$.
In the next lemma, we prove the existence of such t which satisfies (22) for a given γ ( 0 , 1 ) .
Lemma 7. 
If $H$ is gH-differentiable and $D_{gH} H_k(u, w) \prec_{LU} [0, 0]$ for each $k \in I_m$, then for the given $\gamma \in (0, 1)$, there exists $\hat{t} > 0$ such that
$$H_k(u + t w) \ominus_{gH} H_k(u) \preceq_{LU} (\gamma t) D_{gH} H_k(u, w), \quad \text{for all } k \in I_m, \ \text{for all } t \in (0, \hat{t}).$$
Proof. 
Let k I m be fixed. By the definition of the directional derivative of H k , there exists a function τ k : R n C ( R ) such that
H k ( u + t w ) g H H k ( u ) = t D g H H k ( u , w ) + t τ k ( t ) ,
where τ k ( t ) : = [ ϵ ̲ k ( t ) , ϵ ¯ k ( t ) ] 0 , 0 as t 0 .
From (23) and Definition 2, these possible two cases may arise:
Case 1: 
H ̲ k ( u + t w ) H ̲ k ( u ) = t D ̲ g H H k ( u , w ) + t ϵ ̲ k ( t ) , H ¯ k ( u + t w ) H ¯ k ( u ) = t D ¯ g H H k ( u , w ) + t ϵ ¯ k ( t ) .
Since D g H H k ( u , w ) < L U [ 0 , 0 ] , therefore D ̲ g H H k ( u , w ) < 0 and D ¯ g H H k ( u , w ) < 0 . Define
ϵ : = max ( 1 γ ) D ̲ g H H k ( u , w ) , ( 1 γ ) D ¯ g H H k ( u , w ) > 0 .
Since τ k ( t ) [ 0 , 0 ] as t 0 , there exists t ^ k > 0 such that
τ k ( t ) I ϵ , for all t ( 0 , t ^ k ) .
Substituting (25) and (26) in (24), we have
H ̲ k ( u + t w ) H ̲ k ( u ) t D ̲ g H H k ( u , w ) t ( 1 γ ) D ̲ g H H k ( u , w ) , for all t ( 0 , t ^ k ) H ¯ k ( u + t w ) H ¯ k ( u ) t D ¯ g H H k ( u , w ) t ( 1 γ ) D ¯ g H H k ( u , w ) , for all t ( 0 , t ^ k ) .
This implies that
H k ( u + t w ) g H H k ( u ) L U ( γ t ) D g H H k ( u , w ) , for all t ( 0 , t ^ k ) .
Case 2: 
H ¯ k ( u + t w ) H ¯ k ( u ) = t D ̲ g H H k ( u , w ) + t ϵ ̲ k ( t ) , H ̲ k ( u + t w ) H ̲ k ( u ) = t D ¯ g H H k ( u , w ) + t ϵ ¯ k ( t ) .
On the lines of the proof of Case 1, it can be shown that the inequality in (27) holds for all t ( 0 , t ^ k ) for some t ^ k > 0 .
Since k I m was arbitrary, we conclude that for each k I m , there exists t ^ k > 0 such that (27) holds. Let us set t ^ : = min { t ^ 1 , , t ^ m } , then we have
H k ( u + t w ) g H H k ( u ) L U ( γ t ) D g H H k ( u , w ) , for all k I m , for all t ( 0 , t ^ ) .
This completes the proof.    □
Remark 7. 
From Lemma 7, it follows that for $u \in \mathbb{R}^n$, if $D_{gH} H_k(u, w) \prec_{LU} [0, 0]$ $(k \in I_m)$, then there exists some $\hat{t} > 0$ such that (22) holds for all $t \in (0, \hat{t})$. To compute the step length $t$ numerically, we adopt the following backtracking process:
We start with $p = 0$ and check whether (22) holds for $t = \frac{1}{2^p}$.
(a) If the inequality in (22) is satisfied, we take $t = \frac{1}{2^p}$ as the step length.
(b) Otherwise, set $p := p + 1$ and update $t = \frac{1}{2^p}$, repeating the process until the inequality in (22) is satisfied.
Since some $\hat{t} > 0$ exists such that (22) holds for all $t \in (0, \hat{t})$, and the sequence $\left( \frac{1}{2^p} \right)_{p \in \mathbb{N} \cup \{0\}}$ converges to $0$, the above process terminates after a finite number of iterations.
Thus, at any $u \in \mathbb{R}^n$ with $D_{gH} H_k(u, w) \prec_{LU} [0, 0]$ $(k \in I_m)$, we can choose $\eta$ as the largest $t$ from the set
$$\left\{ \frac{1}{2^p} : p \in \mathbb{N} \cup \{0\} \right\},$$
such that (22) is satisfied.
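The backtracking procedure above can be sketched in Python as follows; the callables H_gh_diff and D_gH are assumptions of this sketch (returning $H_k(a) \ominus_{gH} H_k(b)$ and $D_{gH} H_k(u, w)$ as (lower, upper) pairs), and u, w are assumed to be NumPy vectors.

```python
# Sketch of the Armijo-like backtracking rule (22) with step sizes t = 1/2^p.

def armijo_step(H_gh_diff, D_gH, u, w, m, gamma=0.5, max_halvings=60):
    """Return the largest t in {1/2^p : p = 0, 1, ...} satisfying (22) for all k in I_m."""
    t = 1.0
    for _ in range(max_halvings):
        ok = True
        for k in range(m):
            lo, hi = H_gh_diff(k, u + t * w, u)   # H_k(u + t w) ⊖_gH H_k(u)
            dlo, dhi = D_gH(k, u, w)              # D_gH H_k(u, w)
            # (22): gH-difference ⪯_LU (γ t)·D; since γ t >= 0, scaling keeps the
            # endpoint order, so the check is endpointwise.
            if not (lo <= gamma * t * dlo and hi <= gamma * t * dhi):
                ok = False
                break
        if ok:
            return t
        t *= 0.5                                  # move from 1/2^p to 1/2^(p+1)
    return t
```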
Now, we present the HS-type conjugate direction algorithm for IVMOP.
Algorithm 1 HS-Type Conjugate Direction Algorithm for IVMOP
1: Let $\gamma \in (0, 1)$, initial point $u^{(0)} \in \mathbb{R}^n$, $\epsilon > 0$, and set $s = 0$.
2: Solve the optimization problem $(P)_{u^{(s)}}$ and obtain the values of $\alpha(u^{(s)})$ and $d(u^{(s)})$.
3: If $|\alpha(u^{(s)})| < \epsilon$, then stop. Otherwise, proceed to the next step.
4: Calculate $w^{(s)}$ using (10).
5: Select $\eta^{(s)}$ as the largest value of $t \in \left\{ \frac{1}{2^p} : p \in \mathbb{N} \cup \{0\} \right\}$ that satisfies (22). Update the iterate as follows:
$$u^{(s+1)} := u^{(s)} + \eta^{(s)} w^{(s)}.$$
6: Set $s := s + 1$, and go to Step 2.
Remark 8. 
It is worth noting that if β s in (10) is set to zero and if every component of the objective function H of the IVMOP is a real-valued function rather than an interval-valued function, that is, H : R n R m , then Algorithm 1 reduces to the steepest descent algorithm for MOPs, as proposed by Fliege and Svaiter [5].
If Algorithm 1 terminates after a finite number of iterations, then the last iterate is an approximate critical point. Thus, it is relevant to consider the convergence analysis when Algorithm 1 generates an infinite sequence, that is, when $\alpha(u^{(s)}) \neq 0$ for all $s \in \mathbb{N} \cup \{0\}$. Consequently, we have $\alpha(u^{(s)}) < 0$, and $w^{(s)}$ serves as a descent direction at $u^{(s)}$ for all $s \in \mathbb{N} \cup \{0\}$.
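For illustration, the pieces sketched above (solve_subproblem, beta_safeguarded, and armijo_step) can be combined into the following end-to-end Python sketch of Algorithm 1; it is a minimal prototype under the stated assumptions, not the authors' MATLAB implementation.

```python
# End-to-end sketch of Algorithm 1, reusing the helper sketches defined earlier.
import numpy as np

def hs_conjugate_direction(u0, grad_lo, grad_up, Phi, H_gh_diff, D_gH, m,
                           gamma=0.5, eps=1e-4, max_iter=1000):
    """HS-type conjugate direction loop for an IVMOP with m interval objectives."""
    u = np.asarray(u0, dtype=float)
    u_prev = d_prev = w_prev = None
    for s in range(max_iter):
        alpha, d = solve_subproblem(grad_lo(u), grad_up(u))   # Step 2: solve (P)_u
        if abs(alpha) < eps:                                  # Step 3: stopping test
            return u, s
        if s == 0:
            w = d
        else:
            beta = beta_safeguarded(Phi, u_prev, u, d_prev, d, w_prev, s)
            w = d + beta * w_prev                             # Step 4: direction (10)
        eta = armijo_step(H_gh_diff, D_gH, u, w, m, gamma)    # Step 5: Armijo-like step
        u_prev, d_prev, w_prev = u, d, w
        u = u + eta * w                                       # Step 5: iterate update
    return u, max_iter
```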

5. Main Results

In this section, we establish the convergence of the sequence generated by Algorithm 1. Moreover, we deduce that the sequence exhibits linear order convergence under appropriate assumptions. Furthermore, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
In the following theorem, we establish the convergence of the sequence generated by Algorithm 1.
Theorem 3. 
Let $(u^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ be an infinite sequence generated by Algorithm 1. Suppose that the set
$$T := \{ u \in \mathbb{R}^n : H(u) \preceq_m H(u^{(0)}) \} \tag{29}$$
is bounded. Under these assumptions, every accumulation point of the sequence $(u^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ is a critical point of the objective function of IVMOP.
Proof. 
From (22), for all s { 0 , 1 , 2 , } , we have
H k ( u ( s + 1 ) ) g H H k ( u ( s ) ) L U ( γ η ( s ) ) D g H H k ( u ( s ) , w ( s ) ) , for all k I m .
Using Remark 2, for all s { 0 , 1 , 2 , } , we obtain
H k ( u ( s + 1 ) ) g H H k ( u ( s ) ) L U ( γ η ( s ) ) D g H H k ( u ( s ) , w ( s ) ) L U [ 0 , 0 ] , for all k I m .
This implies that
H k ( u ( s + 1 ) ) L U H k ( u ( s ) ) , for all s { 0 , 1 , 2 , } and for all k I m .
From (31), the sequence ( u ( s ) ) s N { 0 } lies in T which is a bounded subset of R n . As a result, the sequence ( u ( s ) ) s N { 0 } is also bounded. Hence, it possesses at least one accumulation point, say u ¯ . We claim that u ¯ is a critical point of the objective function of IVMOP.
Indeed, as ( u ( s ) ) s N { 0 } is a bounded sequence in R n and for all k I m , H k is gH-continuous on R n ; therefore, using (31), we conclude that, for all k I m , the sequence H k ( u ( s ) ) s N { 0 } is non-increasing and bounded. Consequently, from (30), it follows that
lim s η ( s ) D g H H k ( u ( s ) , w ( s ) ) = [ 0 , 0 ] , k I m .
Since η ( s ) ( 0 , 1 ] for all s N { 0 } , therefore, the value lim sup s η ( s ) exists. Therefore, the following two possible cases may arise:
  • Case 1: Let lim sup s η ( s ) > 0 . Hence, employing (32) and taking into account the fact that u ¯ is an accumulated point of the sequence ( u ( s ) ) s N { 0 } , there exist subsequence u s j j N { 0 } and η s j j N { 0 } of u s s N { 0 } and η s s N { 0 } , respectively, such that
    lim j u s j = u ¯ , lim j η s j = lim sup s η ( s ) > 0 ,
    and
    lim j D g H H k ( u ( s j ) , w ( s j ) ) = [ 0 , 0 ] , for all k I m .
Our aim is to show that u ¯ is a critical point of H . On the contrary, assume that u ¯ is a noncritical point of H . This implies that there exists d R n such that
D g H H k ( u ¯ , d ) < L U [ 0 , 0 ] , k I m .
Since H is continuously gH-differentiable, therefore, there exist ϵ > 0 and δ > 0 , such that
D g H H k ( u , d ) < L U [ ϵ , ϵ ] , for all k I m and for all u B u ¯ , δ .
Since u s j u ¯ as j , therefore, using (35), there exists n 0 N , such that
D g H H k ( u s j , d ) < L U [ ϵ , ϵ ] , for all k I m and for all j n 0 .
Now, for every j n 0 , by defining α s j : = Φ ˜ u s j , d , we get
D g H H k ( u s j , d ) L U [ α s j , α s j ] , for all k I m .
This implies that, for every j n 0 ,
α s j , d K u s j .
Using (36) and (37), for all j n 0 , we obtain
α u s j α s j < ϵ .
This implies that, for all j n 0 , we get
D g H H k ( u s j , d u s j ) < L U [ ϵ , ϵ ] , for all k I m .
Now, for all j n 0 and for all k I m , we consider
D g H H k u ( s j ) , w ( s j ) = D g H H k u ( s j ) , d u s j + β s j w ( s j 1 ) = + H ̲ k u ( s j ) T d u s j + β s j w ( s j 1 ) + H ¯ k u ( s j ) T d u s j + β s j w ( s j 1 ) .
Therefore, using (38), for all j n 0 and for all k I m , we conclude that
D g H H k ( u ( s j ) , w ( s j ) ) < L U ϵ , ϵ + β s j D g H H k d u s j , w ( s j 1 ) .
Now, using Lemma 6, for all j n 0 and for all k I m , we obtain
D g H H k ( u ( s j ) , w ( s j ) ) < L U ϵ , ϵ .
This leads to a contradiction with Equation (33).
  • Case 2: Let lim sup s η ( s ) = 0 . Since η ( s ) 0 , for all s N , therefore, we get
lim s η ( s ) = 0 .
Now, for p N , there exists n p N such that
η ( s ) < 1 2 p , s n p .
Therefore, for t = 1 2 p , the inequality in (22) is not satisfied, that is, for all s n p , we have
H k u ( s j ) + 1 2 p u ( s j ) g H H k u ( s j ) L U γ 2 p D g H H k u ( s j ) , w ( s j ) , for all k I m .
Letting p (along a suitable subsequence, if necessary) in both sides of the inequality in (40), there exists k I m such that
D g H H k ( u ( s j ) , w ( s j ) ) ¬ L U γ D g H H k ( u ( s j ) , w ( s j ) ) , s j n p ,
which leads to a contradiction. This completes the proof.    □
The proof of the following lemma follows from Remark 1, Proposition 1, and Definition 3.
Lemma 8. 
Let the functions $H_k : \mathbb{R}^n \to C(\mathbb{R})$ $(k \in I_m)$ be gH-Lipschitz continuous with Lipschitz constant $L$ and twice continuously gH-differentiable on $\mathbb{R}^n$. Then, the functions $\zeta_k : \mathbb{R}^n \to \mathbb{R}$ $(k \in I_m)$ are Lipschitz continuous with Lipschitz constant $2L$ and twice continuously differentiable on $\mathbb{R}^n$.
In the following theorem, we establish that the sequence generated by Algorithm 1 exhibits linear order convergence.
Theorem 4. 
Let $(u^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ be the sequence generated by Algorithm 1 and let the set $T$ defined in (29) be bounded. Suppose that the functions $H_k$ $(k \in I_m)$ are twice continuously gH-differentiable and that the gH-gradients of the functions $H_k$, that is, $\nabla_{gH} H_k$ $(k \in I_m)$, are gH-Lipschitz continuous with Lipschitz constant $L > 0$. Moreover, if we assume that for every $u \in \mathbb{R}^n$, $\nabla^2 \zeta_k(u) \succeq a I_n$ $(k \in I_m)$, where $a > 0$ with $4L < a$, then the sequence $(u^{(s)})_{s \in \mathbb{N} \cup \{0\}}$ converges linearly to a critical point of the objective function of IVMOP.
Proof. 
Since ( u ( s ) ) s N { 0 } is the sequence generated by Algorithm 1 and the set T is bounded, therefore, it follows from Theorem 3 that the sequence ( u ( s ) ) s N { 0 } converges to the critical point, say u ¯ , of the objective function of IVMOP. Given that the functions H k ( k I m ) are twice continuously gH-differentiable, it follows from Lemma 8 that the functions ζ k ( k I m ) are twice continuously differentiable. Therefore, by applying the second-order Taylor formula (see [41]) for each k I m , we have:
ζ k u ( s + 1 ) = ζ k u ¯ + ζ k u ¯ T u ( s + 1 ) u ¯ + 1 2 u ( s + 1 ) u ¯ T 2 ζ k u ( s + 1 ) + θ 1 u ( s + 1 ) u ¯ u ( s + 1 ) u ¯ ,
where θ 1 ( 0 , 1 ) . Moreover, from the hypothesis, it follows that for every u R n , 2 ζ k ( u ) a I n . Taking into account this fact with (41), we have the following inequality:
a 2 u ( s + 1 ) u ¯ 2 ζ k u ( s + 1 ) ζ k u ¯ ζ k u ¯ T u ( s + 1 ) u ¯ , for all k I m .
From (31), we have
ζ k u ¯ ζ k u ( s + 1 ) ζ k u ( s ) , for all k I m .
Combining (42) with (43), we get
a 2 u ( s + 1 ) u ¯ 2 ζ k u ( s ) ζ k u ¯ ζ k u ¯ T u ( s + 1 ) u ¯ , for all k I m .
Employing the mean value theorem (see [41]) on the right-hand side of (44), there exists θ 2 ( 0 , 1 ) such that:
a 2 u ( s + 1 ) u ¯ 2 ζ k u ( s ) + θ 2 u ( s ) u ¯ T u ( s ) u ¯ ζ k u ¯ T u ( s ) u ¯ + ζ k u ¯ T u ( s ) u ( s + 1 ) .
Since the functions g H H k ( k I m ) are gH-Lipschitz continuous with Lipschitz constant L > 0 , it follows from Lemma 8 that the functions ζ k ( k I m ) are Lipschitz continuous with Lipschitz constant 2 L > 0 . Therefore, from (45), we have
a 2 u ( s + 1 ) u ¯ 2 2 L u ( s ) u ¯ 2 + ζ k u ¯ T u ( s ) u ( s + 1 ) , for all k I m .
Since u ¯ is a critical point, therefore, there exists some k I m such that the following inequality holds:
ζ k u ¯ T u ( s ) u ( s + 1 ) 0 .
Hence, from (46), we get
u ( s + 1 ) u ¯ ρ u ( s ) u ¯ ,
where ρ : = 4 L a . From the hypothesis, it follows that ρ < 1 . Hence, the sequence ( u ( s ) ) s N { 0 } converges linearly to the critical point of the objective function of IVMOP. This completes the proof.    □
Remark 9. 
Let $u, w \in \mathbb{R}^n$ be such that $w$ is a descent direction at $u$ for the function $H$, and let $\gamma \in (0, 1)$. Then, in an Armijo-like line search strategy, a step length $t$ is considered acceptable if it satisfies the following condition:
$$H_k(u + t w) \ominus_{gH} H_k(u) \preceq_{LU} (\gamma t) D_{gH} H_k(u, w), \quad \text{for all } k \in I_m.$$
This implies that the function $\zeta_k : \mathbb{R}^n \to \mathbb{R}$ satisfies the following Armijo-like line search strategy:
$$\zeta_k(u + t w) - \zeta_k(u) \leq (\gamma t) \nabla \zeta_k(u)^T w, \quad \text{for all } k \in I_m.$$
The following lemma will play a crucial role to investigate the worst-case complexity of the sequence generated by Algorithm 1.
Lemma 9. 
Let us assume that the gH-gradients of the functions $H_k$, that is, $\nabla_{gH} H_k$ $(k \in I_m)$, are gH-Lipschitz continuous with Lipschitz constant $L$. Moreover, if there exists some $c > 0$ such that the following inequality holds:
$$\|w^{(s)}\| \leq c \, \|d(u^{(s)})\|,$$
then the step size $\eta^{(s)}$ in Algorithm 1 always satisfies the following inequality:
$$\eta^{(s)} > \frac{1 - \gamma}{4 L c^2},$$
for every $s \in \mathbb{N} \cup \{0\}$.
Proof. 
Let s N { 0 } be fixed. Since η ( s ) is a step size in Algorithm 1, therefore, in view of Remark 9, there exists some k I m , such that:
2 γ η ( s ) ζ k ( u ( s ) ) T w ( s ) < ζ k u ( s ) + 2 η ( s ) w ( s ) ζ k ( u ( s ) ) .
Now,
ζ k u ( s ) + 2 η ( s ) w ( s ) ζ k ( u ( s ) ) = 0 2 η ( s ) d d t ζ k u ( s ) + t w ( s ) d t = 0 2 η ( s ) ζ k u ( s ) + t w ( s ) T w ( s ) d t = 0 2 η ( s ) ( ζ k ( u ( s ) ) T w ( s ) + ( ζ k u ( s ) + t w ( s ) T ζ k u ( s ) T ) w ( s ) ) d t .
Since the functions g H H k ( k I m ) are gH-Lipschitz continuous with Lipschitz constant L > 0 , it follows from Lemma 8 that the functions ζ k ( k I m ) are Lipschitz continuous with Lipschitz constant 2 L . Combining this fact with (49), we have the following inequality:
ζ k u ( s ) + 2 η ( s ) w ( s ) ζ k ( u ( s ) ) 0 2 η ( s ) ζ k ( u ( s ) ) T w ( s ) d t + 0 2 η ( s ) 2 t L w ( s ) 2 d t = 2 η ( s ) ζ k ( u ( s ) ) T w ( s ) + 4 L ( η ( s ) ) 2 w ( s ) 2 .
Since η ( s ) > 0 for all s N { 0 } , therefore, from (48) and (50), it follows that
γ ζ k ( u ( s ) ) T w ( s ) < ζ k ( u ( s ) ) T w ( s ) + 2 L η ( s ) w ( s ) 2 .
From the hypothesis, it follows that
w ( s ) c d ( u ( s ) ) .
From (51) and (52), we have:
γ ζ k ( u ( s ) ) T w ( s ) < ζ k ( u ( s ) ) T w ( s ) + 2 L η ( s ) c 2 d ( u ( s ) ) 2 .
Rearranging the terms of the above inequality, we infer that:
2 L η ( s ) c 2 d ( u ( s ) ) 2 < ( 1 γ ) ζ k ( u ( s ) ) T w ( s ) .
On the other hand, α ( u ( s ) ) , d ( u ( s ) ) is a solution of the problem ( P ) u s and α ( u s ) 0 , so it follows that
ζ k ( u ( s ) ) T d ( u ( s ) ) 1 2 d ( u ( s ) ) 2 .
Moreover, in view of Lemma 6, we obtain the following inequality:
ζ k ( u ( s ) ) T w ( s ) 1 2 d ( u ( s ) ) 2 .
Multiplying ( 1 γ ) on both sides of the above inequality, we get:
( 1 γ ) ζ k ( u ( s ) ) T w ( s ) ( 1 γ ) 2 d ( u ( s ) ) 2 .
From (54) and (55), we have:
2 L η ( s ) c 2 d ( u ( s ) ) 2 < ( 1 γ ) 2 d ( u ( s ) ) 2 .
Since d ( u ( s ) ) is nonzero for every s N { 0 } , therefore, from the above inequality, we have
η ( s ) > 1 γ 4 L c 2 .
This completes the proof.    □
In the following theorem, we investigate the worst-case complexity of the sequence generated by Algorithm 1.
Theorem 5. 
Let all the assumptions of Theorem 4 be satisfied. Suppose that there exists some $c > 0$ such that for every $s \in \mathbb{N} \cup \{0\}$, the following inequality
$$\|w^{(s)}\| \leq c \, \|d(u^{(s)})\|$$
is satisfied. Then, the sequence $\{u^{(s)}\}$ generated by Algorithm 1 is such that for any $\epsilon > 0$, at most $s_{\max}$ iterations are needed to produce an iterate $u^{(s)}$ such that $|\alpha(u^{(s)})| < \epsilon$, where
$$s_{\max} < O(\log(1/\epsilon)).$$
Proof. 
Let s N { 0 } be fixed. Since the sequence ( u ( s ) ) s N { 0 } is generated by Algorithm 1 and the set T is bounded, therefore, it follows from Theorem 3 that the sequence ( u ( s ) ) s N { 0 } converges to the critical point, say u ¯ , of the objective function of IVMOP. Furthermore, given that the functions H k ( k I m ) are twice continuously gH-differentiable, therefore, it follows from Lemma 8 that the functions ζ k ( k I m ) are twice continuously differentiable. Since T is bounded, therefore, there exists some r > 0 such that
T B u ¯ , r .
Moreover, as ζ k ( k I m ) are continuous on B u ¯ , 3 r R n , therefore, there exists M > 0 such that for all k I m
ζ k u M , for all u B u ¯ , 3 r .
Since η ( s ) is a step size in Algorithm 1, therefore, in view of Remark 9, for every k I m , we have
ζ k u ( s ) + η ( s ) w ( s ) ζ k ( u ( s ) ) γ η ( s ) ζ k ( u ( s ) ) T w ( s ) .
Since u ( s + 1 ) = u ( s ) + η ( s ) w ( s ) , therefore, from (10), it follows that:
ζ k u ( s + 1 ) ζ k ( u ( s ) ) γ η ( s ) ζ k ( u ( s ) ) T d u ( s ) + γ β s η ( s ) ζ k ( u ( s ) ) T w ( s 1 ) .
In view of Lemma 6, it follows that β s ζ k u ( s ) T w ( s 1 ) 0 . Therefore, from (58), and the fact that γ ( 0 , 1 ) , we have:
ζ k u ( s + 1 ) ζ k ( u ( s ) ) γ η ( s ) ζ k ( u ( s ) ) T d u ( s ) < η ( s ) ζ k ( u ( s ) ) T d u ( s ) .
Rearranging the terms of the inequality in (59) and using Remark 3, we get the following inequality:
2 η ( s ) α ( u ( s ) ) < ζ k u ( s ) ζ k u ( s + 1 ) .
Since for every s N { 0 } , α ( u ( s ) ) is non-positive, therefore, from (60), we have:
2 η ( s ) α ( u ( s ) ) < ζ k u ( s ) ζ k u ( s + 1 ) .
Employing the mean value theorem (see [41]) on the left-hand side of (61), there exists some θ 3 ( 0 , 1 ) such that:
2 η ( s ) α ( u ( s ) ) < ζ k u ( s ) + θ 3 u ( s ) u ( s + 1 ) T u ( s ) u ( s + 1 ) .
Since u ( s ) , u ( s + 1 ) T ( B u ¯ , r ) , therefore, u ( s ) + θ 3 u ( s ) u ( s + 1 ) B u ¯ , 3 r . Combining (56) with (62), we obtain:
2 η ( s ) α ( u ( s ) ) < M u ( s + 1 ) u ( s ) M u ( s + 1 ) u ¯ + u ( s ) u ¯ .
Moreover, in view of Theorem 4, and from (63), we have the following inequality:
2 η ( s ) α ( u ( s ) ) < M ρ s ρ + 1 u ( 0 ) u ¯ .
Now, from Lemma 9, it follows from (64) that
α ( u ( s ) ) < 2 L c 2 1 γ M ρ s ρ + 1 u ( 0 ) u ¯ .
Now, we assume that for the first s ϵ iterations, we have α ( u ( s ϵ ) ) ϵ . Therefore, from (65), we have
ϵ < 2 L c 2 1 γ M ρ s ϵ ρ + 1 u ( 0 ) u ¯ . 1 γ 2 L c 2 M ρ + 1 u ( 0 ) u ¯ ϵ < ρ s ϵ .
Taking the logarithm on both sides of the above inequality, we get:
log 1 γ 2 L c 2 M ρ + 1 u ( 0 ) u ¯ ϵ < log ρ s ϵ . log 2 L c 2 M ρ + 1 u ( 0 ) u ¯ 1 γ 1 ϵ > s ϵ log 1 ρ . s ϵ < 1 log 1 ρ log 2 L c 2 M ρ + 1 u ( 0 ) u ¯ 1 γ 1 ϵ .
This implies
s ϵ < O ( log ( 1 / ϵ ) ) .
This completes the proof.    □

6. Experiments and Discussion

In this section, we furnish several numerical examples to illustrate the effectiveness of Algorithm 1 and solve them by employing MATLAB R2024a.
To solve problems (P1) and (P2), presented in Examples 1 and 2, respectively, we employ Algorithm 1, implemented on a system equipped with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 8 GB of RAM. However, this system fails to solve problem (P3) presented in Example 3, which is a large-scale IVMOP ($n \geq 100$). Therefore, to solve problem (P3), we employ Algorithm 1, implemented on a high-performance computational system running Ubuntu, with the following specifications: Memory: 128.0 GiB, Processor: Intel® Xeon® Gold 5415+ (32 cores), and OS type: 64-bit.
Now, to execute Algorithm 1, we employ the following steps to find u ( s + 1 ) for a given  u ( s ) :
1. Choose the parameter $\gamma$ such that $\gamma \in (0, 1)$.
2. To find the values of $\alpha(u^{(s)})$ and $d(u^{(s)})$, solve the following optimization problem:
$$(P)_{u^{(s)}} \qquad \text{Minimize } \varphi(\alpha, d) := \alpha, \quad \text{subject to } D_{gH} H_k(u^{(s)}, d) \oplus \left[ \tfrac{1}{2} \|d\|^2, \tfrac{1}{2} \|d\|^2 \right] \preceq_{LU} [\alpha, \alpha], \ k \in I_m,$$
by employing a numerical method (for example, using the problem-based optimization tools, such as optimvar, in MATLAB).
3. Compute the value of $\beta_s$ and the direction $w^{(s)}$, which satisfy Equations (9) and (10), respectively.
4. Compute the step size $\eta^{(s)}$ as per the Armijo-like line search, that is, the inequality in (22), equipped with the backtracking discussed in Remark 7. Equivalently,
$$\eta^{(s)} := \max\left\{ \frac{1}{2^p} : p \in \mathbb{N} \cup \{0\} \right\}, \ \text{such that}$$
$$H_k\left(u^{(s)} + \frac{1}{2^p} w^{(s)}\right) \ominus_{gH} H_k(u^{(s)}) \preceq_{LU} \frac{\gamma}{2^p} D_{gH} H_k(u^{(s)}, w^{(s)}), \quad \text{for all } k \in I_m.$$
5. Finally, the value of $u^{(s+1)}$ is obtained from:
$$u^{(s+1)} = u^{(s)} + \eta^{(s)} w^{(s)}.$$
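As a hypothetical usage example (this toy problem is ours and is not one of the paper's test problems), the driver sketched in Section 4 can be called as follows, with $H_1(u) = [\|u - \mathbf{1}\|^2, \|u - \mathbf{1}\|^2 + \|u\|^2]$ and $H_2(u) = [\|u + \mathbf{1}\|^2, \|u + \mathbf{1}\|^2 + 1]$; all helper names reuse the earlier sketches.

```python
# Hypothetical toy IVMOP (n = 2, m = 2) fed to the hs_conjugate_direction sketch.
import numpy as np

ones = np.ones(2)
H_lo = [lambda u: np.sum((u - ones) ** 2),
        lambda u: np.sum((u + ones) ** 2)]
H_up = [lambda u: np.sum((u - ones) ** 2) + np.sum(u ** 2),
        lambda u: np.sum((u + ones) ** 2) + 1.0]
g_lo = [lambda u: 2 * (u - ones), lambda u: 2 * (u + ones)]
g_up = [lambda u: 2 * (u - ones) + 2 * u, lambda u: 2 * (u + ones)]

grad_lo = lambda u: np.vstack([g(u) for g in g_lo])   # rows: gradients of lower endpoints
grad_up = lambda u: np.vstack([g(u) for g in g_up])   # rows: gradients of upper endpoints

def D_gH(k, u, w):
    """gH-directional derivative of H_k via Theorem 1(i), as a (lower, upper) pair."""
    a, b = g_lo[k](u) @ w, g_up[k](u) @ w
    return (min(a, b), max(a, b))

def H_gh_diff(k, a, b):
    """H_k(a) ⊖_gH H_k(b) as a (lower, upper) pair."""
    lo, up = H_lo[k](a) - H_lo[k](b), H_up[k](a) - H_up[k](b)
    return (min(lo, up), max(lo, up))

Phi = lambda u, w: max(D_gH(k, u, w)[1] for k in range(2))   # the function Φ̃ in (3)

u_star, iters = hs_conjugate_direction(np.array([4.0, -3.0]), grad_lo, grad_up,
                                       Phi, H_gh_diff, D_gH, m=2)
print(u_star, iters)
```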
In the following example, we consider a locally convex IVMOP to demonstrate the effectiveness of Algorithm 1.
Example 1. 
Consider the following problem (P1) which belongs to the class of IVMOPs.
$$\text{(P1)} \qquad \text{Minimize } \left( H_1(u_1, u_2), H_2(u_1, u_2) \right), \quad \text{subject to } (u_1, u_2) \in \mathbb{R}^2,$$
where $H_k : \mathbb{R}^2 \to C(\mathbb{R})$ $(k = 1, 2)$ are defined as follows:
H 1 ( u 1 , u 2 ) : = 1 2 u 1 1 2 + u 2 1 2 1 2 ( u 1 1 ) 3 u 1 u 2 2 + e u 1 + u 2 , H 2 ( u 1 , u 2 ) : = u 1 1 2 + u 2 1 2 u 1 + 1 2 + u 2 + 1 2 .
It is evident that ( 1 , 1 ) is a critical point of the objective function of (P1). Since the components of the objective function in (P1) are locally convex at ( 1 , 1 ) , it follows from Lemma 3 that ( 1 , 1 ) is a locally weak effective solution of (P1).
Now, we employ Algorithm 1 to solve (P1), with an initial point $(4, 12)$. The stopping criterion is defined as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$. The numerical results for Algorithm 1 are shown in Table 1.
From Step 3 of Table 1, we conclude that the sequence converges to a locally weak effective solution ( 1 , 1 ) of (P1).
It is worth noting that the locally weak effective solutions of an IVMOP are not isolated points. However, applying Algorithm 1 with a given initial point can lead to one such locally weak effective solution. To generate an approximate locally weak effective solution set, we employ a multi-start approach, with $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$ or a maximum of 1000 iterations as the stopping criteria. Specifically, we generate 100 uniformly distributed random initial points and subsequently execute Algorithm 1 starting from each of these points. In view of the above fact, in Example 1 we generate a set of approximate locally weak effective solutions by selecting 100 uniformly distributed random initial points in the domain $[0, 10] \times [0, 10]$ using the “rand” function of MATLAB. The sequences generated from these points are illustrated in Figure 1.
Remark 10. 
It is worth noting that the objective function of problem (P1), presented in Example 1, is twice continuously gH-differentiable but not convex. Moreover, in view of the fact that Newton’s and quasi-Newton methods for IVMOPs introduced by Upadhyay et al. [28,29] are applicable to solve certain classes of IVMOPs in which the objective functions are strongly convex and twice continuously gH-differentiable, Newton’s and quasi-Newton methods could not be applied to solve the problem (P1) presented in Example 1. Nevertheless, it has been demonstrated in Example 1 that our proposed algorithm, that is, Algorithm 1, effectively solves the problem (P1).
Moreover, in view of the works of Upadhyay et al. [28,29], it can be observed that Newton’s and quasi-Newton methods are applicable to certain classes of IVMOPs in which the objective functions are twice continuously gH-differentiable as well as strongly convex. In contrast, our proposed algorithm only requires continuous gH-differentiability of the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29]. To demonstrate this, we consider an IVMOP in which the first component of the objective function is continuously gH-differentiable but not twice continuously gH-differentiable.
Example 2. 
Consider the following problem (P2) which belongs to the class of IVMOPs.
$$\text{(P2)} \qquad \text{Minimize } \left( H_1(u_1, u_2), H_2(u_1, u_2) \right), \quad \text{subject to } (u_1, u_2) \in \mathbb{R}^2,$$
where $H_1 : \mathbb{R}^2 \to C(\mathbb{R})$ and $H_2 : \mathbb{R}^2 \to C(\mathbb{R})$ are defined as follows:
$$H_1(u_1, u_2) := \begin{cases} \left[ \dfrac{(u_1 - 3)^2}{2} + u_1, \ \dfrac{(u_1 - 3)^2}{2} + u_1 + u_2^2 \right], & \text{if } u_1 \geq 3, \\[6pt] \left[ \dfrac{u_1^2}{4} - \dfrac{u_1}{2} + \dfrac{9}{4}, \ \dfrac{u_1^2}{4} - \dfrac{u_1}{2} + \dfrac{9}{4} + u_2^2 \right], & \text{if } u_1 < 3, \end{cases}$$
and
H 2 ( u 1 , u 2 ) : = ( u 1 3 ) 2 + u 2 2 u 1 2 + ( u 2 4 ) 2 .
It can be verified that H 1 is continuously gH-differentiable but not twice continuously gH-differentiable. As a result, the Newton’s and quasi-Newton methods proposed by Upadhyay et al. [28,29] cannot be applied to solve (P2).
However, (P2) can be solved by employing Algorithm 1 in MATLAB. We employ Algorithm 1 to solve (P2) with an initial point $(9, 4)$. The stopping criterion is defined as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$. The numerical results for Algorithm 1 are shown in Table 2.
Therefore, in view of Step 17 in Table 2, the sequence generated by Algorithm 1 converges to an approximate critical point ( 1.0022 , 3.5864 ) of the objective function of (P2).
In the following example, we apply Algorithm 1 employing MATLAB to solve a large-scale IVMOP for different values of n .
Example 3. 
Consider the following problem (P3) which belongs to the class of IVMOPs.
$$\text{(P3)} \qquad \text{Minimize } \left( H_1(u_1, u_2, \ldots, u_n), H_2(u_1, u_2, \ldots, u_n) \right), \quad \text{subject to } (u_1, \ldots, u_n) \in \mathbb{R}^n,$$
where $H_k : \mathbb{R}^n \to C(\mathbb{R})$ $(k = 1, 2)$ are defined as follows:
H 1 ( u 1 , u 2 , , u n ) : = k = 1 n u k 1 2 k = 1 n u k + 4 5 , H 2 ( u 1 , u 2 , , u n ) : = k = 1 n u k 3 + u k 4 k = 1 n u k + 1 2 .
We consider a random point, obtained using the built-in MATLAB function rand(n,1), as the initial point of Algorithm 1. We define the stopping criteria as $|\alpha(u^{(s)})| < \epsilon = 10^{-4}$ or reaching a maximum of 5000 iterations. Table 3 presents the number of iterations and the computational times required to solve (P3) using Algorithm 1 for various values of $n$, starting from randomly generated initial points.

7. Conclusions and Future Research Directions

In this article, we have developed an HS-type conjugate direction algorithm to solve a class of IVMOPs. We have performed the convergence analysis, discussed the convergence rate, and investigated the worst-case complexity of the sequence generated by the proposed algorithm. The results established in this article have generalized several significant results existing in the literature. Specifically, we have extended the work of Pérez and Prudente [18] on the HS-type conjugate direction method for MOPs to a more general class of optimization problems, namely, IVMOPs. Moreover, it is worth noting that, if the conjugate parameter is set to zero and if every component of the objective function of the IVMOP is a real-valued function rather than an interval-valued function, then the proposed algorithm reduces to the steepest descent algorithm for MOPs, as introduced by Fliege and Svaiter [5]. Furthermore, it is imperative to note that Newton’s and quasi-Newton methods (see Upadhyay et al. [28,29]) can be applied to solve certain classes of IVMOPs in which the components of the objective function are twice continuously gH-differentiable as well as strongly convex. However, the HS-type conjugate direction algorithm proposed in this paper only requires continuous gH-differentiability assumptions on the components of the objective function. In view of this fact, Algorithm 1 could be applied to solve a broader class of IVMOPs compared to the algorithms proposed by Upadhyay et al. [28,29].
Throughout this article, the components of the objective function of the considered IVMOP are assumed to be continuously gH-differentiable. Consequently, the results of this paper are not applicable when the objective functions involved do not satisfy this requirement, which can be regarded as a limitation of the present work.
The results presented in this article open several avenues for future research. One important direction is to explore convex hybridizations of different conjugate direction methods for IVMOPs, following the methodology of [35]. Another is to investigate conjugate direction methods for IVMOPs that do not employ any line search technique, following the methodology of [42]. Moreover, in view of the works in [43,44,45], it would be interesting to develop a Hestenes–Stiefel-type conjugate direction algorithm for training neural networks under uncertainty, where intervals represent the model parameters or data, and to study the robustness of Algorithm 1 in the presence of noisy data.

Author Contributions

Conceptualization, B.B.U. and R.K.P.; methodology, B.B.U. and R.K.P.; validation, B.B.U., R.K.P., S.P. and I.S.-M.; formal analysis, B.B.U., R.K.P. and S.P.; writing—review and editing, B.B.U., R.K.P. and S.P.; supervision, B.B.U. All authors have read and agreed to the published version of the manuscript.

Funding

The first author would like to thank the University Grants Commission, New Delhi, India, for the received financial support (UGC-Ref. No.: 1213/(CSIR-UGC NET DEC 2017)). The third author extends their gratitude to the Ministry of Education, Government of India, for their financial support through the Prime Minister Research Fellowship (PMRF), granted under PMRF ID-2703573.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive suggestions, which have substantially improved the paper in its present form.

Conflicts of Interest

The authors confirm that there are no actual or potential conflicts of interest related to this article.

References

  1. Pareto, V. Manuale di Economia Politica; Societa Editrice: Milano, Italy, 1906. [Google Scholar]
  2. Miettinen, K. Nonlinear Multiobjective Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1999; pp. xvii, 298. [Google Scholar]
  3. Diao, X.; Li, H.; Zeng, S.; Tam, V.W.; Guo, H. A Pareto multi-objective optimization approach for solving time-cost-quality tradeoff problems. Technol. Econ. Dev. Econ. 2011, 17, 22–41. [Google Scholar] [CrossRef]
  4. Guillén-Gosálbez, G. A novel MILP-based objective reduction method for multi-objective optimization: Application to environmental problems. Comput. Chem. Eng. 2011, 35, 1469–1477. [Google Scholar] [CrossRef]
  5. Fliege, J.; Svaiter, B.F. Steepest descent methods for multicriteria optimization. Math. Methods Oper. Res. 2000, 51, 479–494. [Google Scholar] [CrossRef]
  6. Bento, G.C.; Melo, J.G. Subgradient method for convex feasibility on Riemannian manifolds. J. Optim. Theory Appl. 2012, 152, 773–785. [Google Scholar] [CrossRef]
  7. Upadhyay, B.B.; Singh, S.K.; Stancu-Minasian, I.M.; Rusu-Stancu, A.M. Robust optimality and duality for nonsmooth multiobjective programming problems with vanishing constraints under data uncertainty. Algorithms 2024, 17, 482. [Google Scholar] [CrossRef]
  8. Ehrgott, M. Multicriteria Optimization; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  9. Upadhyay, B.B.; Poddar, S.; Yao, J.C.; Zhao, X. Inexact proximal point method with a Bregman regularization for quasiconvex multiobjective optimization problems via limiting subdifferentials. Ann. Oper. Res. 2025, 345, 417–466. [Google Scholar] [CrossRef]
  10. Beer, M.; Ferson, S.; Kreinovich, V. Imprecise probabilities in engineering analyses. Mech. Syst. Signal Process. 2013, 37, 4–29. [Google Scholar] [CrossRef]
  11. Chaudhuri, A.; Lam, R.; Willcox, K. Multifidelity uncertainty propagation via adaptive surrogates in coupled multidisciplinary systems. AIAA J. 2018, 56, 235–249. [Google Scholar] [CrossRef]
  12. Qiu, D.; Jin, X.; Xiang, L. On solving interval-valued optimization problems with TOPSIS decision model. Eng. Lett. 2022, 30, 1101–1106. [Google Scholar]
  13. Lanbaran, N.M.; Celik, E.; Yiğider, M. Evaluation of investment opportunities with interval-valued fuzzy TOPSIS method. Appl. Math. Nonlinear Sci. 2020, 5, 461–474. [Google Scholar] [CrossRef]
  14. Moore, R.E. Method and Applications of Interval Analysis; SIAM: Philadelphia, PA, USA, 1979. [Google Scholar]
  15. Maity, G.; Roy, S.K.; Verdegay, J.L. Time variant multi-objective interval-valued transportation problem in sustainable development. Sustainability 2019, 11, 6161. [Google Scholar] [CrossRef]
  16. Zhang, J.; Li, S. The portfolio selection problem with random interval-valued return rates. Int. J. Innov. Comput. Inf. Control 2009, 5, 2847–2856. [Google Scholar]
  17. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  18. Pérez, L.R.; Prudente, L.F. Nonlinear conjugate gradient methods for vector optimization. SIAM J. Optim. 2018, 28, 2690–2720. [Google Scholar] [CrossRef]
  19. Sarkar, T.; Rao, S. The application of the conjugate gradient method for the solution of electromagnetic scattering from arbitrarily oriented wire antennas. IEEE Trans. Antennas Propag. 1984, 32, 398–403. [Google Scholar] [CrossRef]
  20. Pandey, V.; Bekele, A.; Ahmed, G.M.S.; Kanu, N.J. An application of conjugate gradient technique for determination of thermal conductivity as an inverse engineering problem. Mater. Today Proc. 2021, 47, 3082–3087. [Google Scholar] [CrossRef]
  21. Frank, M.S.; Balanis, C.A. A conjugate direction method for geophysical inversion problems. IEEE Trans. Geosci. Remote Sens. 2007, 25, 691–701. [Google Scholar] [CrossRef]
  22. Ishibuchi, H.; Tanaka, H. Multiobjective programming in optimization of the interval objective function. Eur. J. Oper. Res. 1990, 48, 219–225. [Google Scholar] [CrossRef]
  23. Wu, H.-C. The Karush-Kuhn-Tucker optimality conditions in an optimization problem with interval-valued objective function. Eur. J. Oper. Res. 2007, 176, 46–59. [Google Scholar] [CrossRef]
  24. Wu, H.-C. On interval-valued nonlinear programming problems. J. Math. Anal. Appl. 2008, 338, 299–316. [Google Scholar] [CrossRef]
  25. Bhurjee, A.K.; Panda, G. Efficient solution of interval optimization problem. Math. Methods Oper. Res. 2012, 76, 273–288. [Google Scholar] [CrossRef]
  26. Roy, P.; Panda, G.; Qiu, D. Gradient-based descent line search to solve interval-valued optimization problems under gH-differentiability with application to finance. J. Comput. Appl. Math. 2024, 436, 115402. [Google Scholar] [CrossRef]
  27. Kumar, P.; Bhurjee, A.K. Multi-objective enhanced interval optimization problem. Ann. Oper. Res. 2022, 311, 1035–1050. [Google Scholar] [CrossRef]
  28. Upadhyay, B.B.; Pandey, R.K.; Liao, S. Newton’s method for interval-valued multiobjective optimization problem. J. Ind. Manag. Optim. 2024, 20, 1633–1661. [Google Scholar] [CrossRef]
  29. Upadhyay, B.B.; Pandey, R.K.; Pan, J.; Zeng, S. Quasi-Newton algorithms for solving interval-valued multiobjective optimization problems by using their certain equivalence. J. Comput. Appl. Math. 2024, 438, 115550. [Google Scholar] [CrossRef]
  30. Upadhyay, B.B.; Li, L.; Mishra, P. Nonsmooth interval-valued multiobjective optimization problems and generalized variational inequalities on Hadamard manifolds. Appl. Set-Valued Anal. Optim. 2023, 5, 69–84. [Google Scholar]
  31. Luo, S.; Guo, X. Multi-objective optimization of multi-microgrid power dispatch under uncertainties using interval optimization. J. Ind. Manag. Optim. 2023, 19, 823–851. [Google Scholar] [CrossRef]
  32. Upadhyay, B.B.; Pandey, R.K.; Zeng, S. A generalization of generalized Hukuhara Newton’s method for interval-valued multiobjective optimization problems. Fuzzy Sets Syst. 2024, 492, 109066. [Google Scholar] [CrossRef]
  33. Upadhyay, B.B.; Pandey, R.K.; Zeng, S.; Singh, S.K. On conjugate direction-type method for interval-valued multiobjective quadratic optimization problems. Numer. Algorithms 2024. [Google Scholar] [CrossRef]
  34. Wang, C.; Zhao, Y.; Tang, L.; Yang, X. Conjugate gradient methods without line search for multiobjective optimization. arXiv 2023, arXiv:2312.02461. [Google Scholar]
  35. Khoshsimaye-Bargard, M.; Ashrafi, A. A projected hybridization of the Hestenes-Stiefel and Dai-Yuan conjugate gradient methods with application to nonnegative matrix factorization. J. Appl. Math. Comput. 2025, 71, 551–571. [Google Scholar] [CrossRef]
  36. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
  37. Stefanini, L.; Bede, B. Generalized Hukuhara differentiability of interval-valued functions and interval differential equations. Nonlinear Anal. 2009, 71, 1311–1328. [Google Scholar] [CrossRef]
  38. Ghosh, D.; Debnath, A.K.; Chauhan, R.S.; Castillo, O. Generalized-Hukuhara-gradient efficient-direction method to solve optimization problems with interval-valued functions and its application in least-squares problems. Int. J. Fuzzy Syst. 2022, 24, 1275–1300. [Google Scholar] [CrossRef]
  39. Stefanini, L.; Arana-Jiménez, M. Karush-Kuhn-Tucker conditions for interval and fuzzy optimization in several variables under total and directional generalized differentiability. Fuzzy Sets Syst. 2019, 362, 1–34. [Google Scholar] [CrossRef]
  40. Gilbert, J.C.; Nocedal, J. Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 1992, 2, 21–42. [Google Scholar] [CrossRef]
  41. Apostol, T.M. Multi-Variable Calculus and Linear Algebra with Applications to Differential Equations and Probability; Wiley: New York, NY, USA, 1969. [Google Scholar]
  42. Nazareth, L. A conjugate direction algorithm without line searches. J. Optim. Theory Appl. 1977, 23, 373–387. [Google Scholar] [CrossRef]
  43. Zhang, J.-J.; Zhang, D.-X.; Chen, J.-N.; Pang, L.-G.; Meng, D. On the uncertainty principle of neural networks. iScience 2025, 28, 112197. [Google Scholar] [CrossRef]
  44. Klyuev, R.V.; Morgoev, I.D.; Morgoeva, A.D.; Gavrina, O.A.; Martyushev, N.V.; Efremenkov, E.A.; Mengxu, Q. Methods of forecasting electric energy consumption: A literature review. Energies 2022, 15, 8919. [Google Scholar] [CrossRef]
  45. Shi, H.-J.M.; Xie, Y.; Byrd, R.; Nocedal, J. A noise-tolerant quasi-Newton algorithm for unconstrained optimization. SIAM J. Optim. 2022, 32, 29–55. [Google Scholar] [CrossRef]
Figure 1. Approximate locally weak effective solutions generated from 100 uniformly distributed random initial points.
Table 1. Sequence generated by Algorithm 1 for the problem (P1).
s    u^(s)    w^(s)    η^(s)    |α(u^(s))|
0 ( 4 , 12 ) ( 10 , 22 ) 0.125 292
1 ( 2.75 , 9.25 ) ( 7.5 , 16.5 ) 0.25 164.25
2 ( 0.875 , 5.125 ) ( 3.75 , 8.25 ) 0.5 41.062
3 ( 1 , 1 ) ( 0.00099217 , 0.00099217 ) 0 1.2 × 10 6
Table 2. Sequence generated by Algorithm 1 for the problem (P2).
s    u^(s)    w^(s)    η^(s)    |α(u^(s))|
0 ( 9 , 4 ) ( 7 , 0.00099728 ) 1 24.5
1 ( 2 , 3.999 ) ( 0.4555 , 0.14238 ) 0.5 0.11387
2 ( 1.7723 , 3.9278 ) ( 0.34145 , 0.12352 ) 0.5 0.06592
3 ( 1.6015 , 3.8661 ) ( 0.25917 , 0.10383 ) 0.5 0.038973
4 ( 1.4719 , 3.8141 ) ( 0.19892 , 0.085853 ) 0.5 0.023469
5 ( 1.3725 , 3.7712 ) ( 0.15416 , 0.070341 ) 0.5 0.014354
6 ( 1.2954 , 3.736 ) ( 0.12042 , 0.057329 ) 0.5 0.008892
7 ( 1.2352 , 3.7074 ) ( 0.094691 , 0.046586 ) 0.5 0.0055667
8 ( 1.1878 , 3.6841 ) ( 0.074865 , 0.03779 ) 1 0.0035148
9 ( 1.113 , 3.6463 ) ( 0.044284 , 0.023278 ) 1 0.0012498
10 ( 1.0687 , 3.623 ) ( 0.026646 , 0.014336 ) 1 0.00045745
11 ( 1.0421 , 3.6087 ) ( 0.016225 , 0.0089043 ) 1 0.00017037
12 ( 1.0258 , 3.5998 ) ( 0.0099184 , 0.0054681 ) 1 6.4004 × 10 5
13 ( 1.0159 , 3.5943 ) ( 0.0061017 , 0.0034133 ) 1 2.4231 × 10 5
14 ( 1.0098 , 3.5909 ) ( 0.0038132 , 0.002264 ) 1 8.5523 × 10 6
15 ( 1.006 , 3.5886 ) ( 0.0023159 , 0.0013489 ) 1 3.2717 × 10 6
16 ( 1.0037 , 3.5873 ) ( 0.0014456 , 0.00089985 ) 1 1.1303 × 10 6
17 ( 1.0022 , 3.5864 ) ( 0.00091828 , 0.00065009 ) 1 3.1304 × 10 7
Table 3. The numerical results of Algorithm 1 for the problem (P3).
n      Number of Iterations      Computation Time (in Seconds)
100    4498    414.3
120    4336    436.7
140    4641    468.8
160    4895    525.6
180    4880    586.8
200    4619    628.2
220    4886    727.7
240    4686    810.5
260    4707    896.6
280    5000    1044.4
300    5000    1148.0