Article

An S-Hybridization Technique Using Two-Directional Optimization

by Vladimir Rakočević 1,2 and Milena J. Petrović 3,*
1 Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18108 Niš, Serbia
2 Serbian Academy of Sciences and Arts, Kneza Mihaila 35, 11000 Belgrade, Serbia
3 Faculty of Sciences and Mathematics, University of Priština in Kosovska Mitrovica, Lole Ribara 29, 38220 Kosovska Mitrovica, Serbia
* Author to whom correspondence should be addressed.
Axioms 2025, 14(2), 131; https://doi.org/10.3390/axioms14020131
Submission received: 14 December 2024 / Revised: 7 February 2025 / Accepted: 8 February 2025 / Published: 11 February 2025

Abstract: In this paper, we study a recently established s-hybrid approach for generating gradient descent methods for solving optimization tasks. We present an s-hybrid variant of the accelerated double-direction method. The convergence analysis confirms the linear convergence of the newly defined method on the set of strictly convex quadratic functions.
MSC:
90C30; 90C06; 49M37; 65K99; 47H09; 47H10

1. Background

Our main focus is a special form of accelerated gradient methods that contain two vector directions. Aiming to develop an optimization method that is as efficient as possible, we explored an approach that combines several different search vectors. Computationally, iterations with two directions have proven to be a preferred choice. In [1], the authors suggested the following double-direction iterative method for solving non-differentiable problems:
x_{k+1} = x_k + t_k s_k + t_k^2 d_k .
In (1), t_k is the iterative step length, while s_k and d_k are two differently defined vector directions. These three parameters are computed by the procedures listed below: the curve search algorithm, the algorithm for deriving the vector direction s_k, and the algorithm for deriving the vector direction d_k, respectively.
Curve search algorithm
t_k = q^{i(k)} , \quad 0 < q < 1 ,
where i ( k ) is the smallest integer from { 0 , 1 , 2 , } such that
F(x_k) - F\big( x_k + q^{i(k)} s_k + q^{2 i(k)} d_k \big) \ge -\sigma \Big( q^{i(k)} g_k^T s_k + \tfrac{1}{2} q^{4 i(k)} F_D(x_k; d_k) \Big) ,
In (3), σ satisfies 0 < σ < 1, F_D stands for the second-order Dini upper-directional derivative at x_k in the direction d, and the function F denotes the Moreau–Yosida regularization of the objective function f associated with the metric M, defined as follows:
F(x) = \min_{y \in \mathbb{R}^n} \Big\{ f(y) + \tfrac{1}{2} \| y - x \|_M^2 \Big\} .
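Although it is not needed for the iteration itself, it may be helpful to recall a standard property of the Moreau–Yosida regularization (a known fact, not restated in [1]): F is continuously differentiable even when f is not, and its gradient is obtained from the minimizer of the defining problem, the proximal point y(x).

```latex
% Standard property of the Moreau-Yosida regularization (assuming f is convex
% and M is symmetric positive definite):
y(x) = \arg\min_{y \in \mathbb{R}^n} \Big\{ f(y) + \tfrac{1}{2}\,\| y - x \|_M^2 \Big\},
\qquad
\nabla F(x) = M\big( x - y(x) \big).
```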
Algorithm for deriving the vector direction s_k
s_k(t) = \begin{cases} s_k^* , & k \le m-1 , \\ -\big( 1 - \sum_{i=2}^{m} t^{i-1} p_k^i \big) g_k + \sum_{i=2}^{m} t^{i-1} p_k^i s_{k-i+1} , & k \ge m , \end{cases}
where m = \mathrm{card}\, I_k, m > 1, and I_k = \{0, 1, 2, \ldots\} is an index set at the k-th iteration,
p_k^i = \frac{\rho \| g_k \|^2}{(m-1)\big[ \| g_k \|^2 + | g_k^T s_{k-i+1} | \big]} , \quad i = 2, 3, \ldots, m ,
0 < \rho < 1, and s_k^* \ne 0, k \le m-1, is a vector that satisfies g_k^T s_k^* \le 0.
Algorithm for deriving the vector direction d_k
d_k(t) = \begin{cases} d_k^* , & k \le m-1 , \\ \sum_{i=2}^{m} t^{i-1} d_{k-i+1}^* , & k \ge m , \end{cases}
where d_k^* is the solution to the problem
\min_{d \in \mathbb{R}^n} \nabla F(x_k)^T d + \tfrac{1}{2} F_D(x_k; d) .
The results presented in [1] motivated the authors of [2] to define an accelerated gradient version of the iterative rule (1). In ref. [2], the accelerated double-direction method, denoted ADD, is presented as follows:
x_{k+1} = x_k + t_k^2 d_k - t_k \big( \gamma_k^{ADD} \big)^{-1} g_k .
In (5), x_k is the current iterative point, t_k is the iterative step size, g_k is the gradient of the objective function, \gamma_k^{ADD} is the acceleration parameter of the ADD method, and d_k is the second vector direction.
The step size parameter t_k of the iteration (5) is calculated in the following way:
t_k = q^{i(k)} , \quad 0 < q < 1 ,
where i(k) is the smallest integer from \{0, 1, 2, \ldots\} such that
f(x_k) - f\big( x_k + q^{i(k)} s_k + q^{2 i(k)} d_k \big) \ge -\sigma \Big( q^{i(k)} g_k^T s_k + \tfrac{1}{2} q^{4 i(k)} \gamma_{k+1} \Big) ,
where σ is a real number such that 0 < σ < 1.
Remark 1.
An alternative way to derive the iterative step size is the backtracking line search procedure, originally presented in [3]. In this procedure, the iterative step length t_k is found by starting from the initial value t = 1. With the backtracking parameters 0 < σ < 0.5 and β ∈ (σ, 1), the algorithm checks the following condition:
f(x_k + t d_k) > f(x_k) + \sigma t\, g_k^T d_k ,
and, while it holds, reduces the step size as t := t β. The step length t_k = t is returned once the exit condition of the backtracking algorithm is fulfilled.
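For completeness, a minimal sketch of this backtracking procedure is given below in C++ (the implementation language reported in Section 4). The function name backtracking and the use of std::function are our own illustrative choices and are not taken from the authors' code.

```cpp
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Armijo backtracking (Remark 1): start from t = 1 and multiply t by beta
// while f(x + t d) > f(x) + sigma * t * g^T d still holds.
double backtracking(const std::function<double(const Vec&)>& f,
                    const Vec& x, const Vec& g, const Vec& d,
                    double sigma = 0.25, double beta = 0.8) {
    auto dot = [](const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    };
    const double fx = f(x);
    const double gTd = dot(g, d);   // negative for a descent direction
    double t = 1.0;
    Vec xt(x.size());
    while (true) {
        for (std::size_t i = 0; i < x.size(); ++i) xt[i] = x[i] + t * d[i];
        if (f(xt) <= fx + sigma * t * gTd) break;   // exit condition of Remark 1
        t *= beta;
    }
    return t;
}
```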
Remark 2.
There are two main approaches for calculating the step length parameter in an optimization method:
1. exact line search;
2. inexact line search.
Using procedure 1, in each iteration, the step size value is derived as the solution to the following problem:
f(x_k + t_k d_k) = \min_{t > 0} f(x_k + t d_k) .
Clearly, the exact line search requires additional CPU time, and the required number of iterations and function evaluations certainly increase. For this reason, in many contemporary optimization schemes, the step length parameter is derived using the second approach, i.e., through inexact line search procedures. We list some commonly applied inexact line search techniques below (a small check of the weak Wolfe conditions is sketched after the list):
  • The weak Wolfe line search algorithm is given by the following relations [4]:
    f(x_k + t_k d_k) \le f(x_k) + \delta t_k\, g_k^T d_k ,
    g(x_k + t_k d_k)^T d_k \ge \sigma\, g_k^T d_k ;
  • The strong Wolfe line search rule is expressed as follows:
    f(x_k + t_k d_k) \le f(x_k) + \delta t_k\, g_k^T d_k ,
    | g(x_k + t_k d_k)^T d_k | \le \sigma\, | g_k^T d_k | ;
  • Armijo's backtracking procedure was introduced in [3] and is defined by Relation (6);
  • Goldstein's rule [5] can be stated as a generalization of Armijo's rule with stricter conditions:
    f(x_k + t_k d_k) \le f(x_k) + \rho t_k\, g_k^T d_k ,
    f(x_k + t_k d_k) \ge f(x_k) + (1 - \rho) t_k\, g_k^T d_k ,
    where ρ is bounded as in Armijo's procedure, ρ ∈ (0, 1/2).
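As a small illustration of the listed conditions, the following sketch checks the weak Wolfe conditions for a trial step t (our own illustrative code; the default values of delta and sigma are arbitrary choices satisfying 0 < delta < sigma < 1).

```cpp
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Returns true if the step length t satisfies the weak Wolfe conditions
// f(x + t d) <= f(x) + delta * t * g^T d   and   g(x + t d)^T d >= sigma * g^T d.
bool satisfiesWeakWolfe(const std::function<double(const Vec&)>& f,
                        const std::function<Vec(const Vec&)>& grad,
                        const Vec& x, const Vec& d, double t,
                        double delta = 1e-4, double sigma = 0.9) {
    auto dot = [](const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    };
    Vec xt(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) xt[i] = x[i] + t * d[i];
    const double gTd = dot(grad(x), d);
    const bool sufficientDecrease = f(xt) <= f(x) + delta * t * gTd;
    const bool curvature = dot(grad(xt), d) >= sigma * gTd;
    return sufficientDecrease && curvature;
}
```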
The authors of [2] modified the algorithms for deriving the vector directions of the iteration (5) in gradient terms. These modifications are given in Algorithms 1 and 2.
  • Algorithm 1 Calculation of the vector s_k:
    s_k = -\gamma_k^{-1} g_k ;
  • Algorithm 2 Calculation of the vector d_k:
    d_k(t) = \begin{cases} d_k^* , & k \le m-1 , \\ \sum_{i=2}^{m} t^{i-1} d_{k-i+1}^* , & k \ge m , \end{cases}
    where d_k^* is the solution of the transformed minimization problem (4) (its closed form is sketched after this list):
    \min_{d \in \mathbb{R}^n} \nabla f(x_k)^T d + \tfrac{1}{2}\, d^T ( \gamma_{k+1} I )\, d = g(x_k)^T d + \tfrac{1}{2}\, \gamma_{k+1} \| d \|^2 .
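Since the transformed subproblem in Algorithm 2 is an unconstrained strictly convex quadratic in d whenever γ_{k+1} > 0, its minimizer is available in closed form; the short computation below is our own and is only meant to illustrate why d_k^* reduces to a gradient-type direction.

```latex
% Setting the gradient of the quadratic model (in d) to zero, for gamma_{k+1} > 0:
\nabla_d \Big( g(x_k)^T d + \tfrac{1}{2}\,\gamma_{k+1}\, d^T d \Big)
  = g(x_k) + \gamma_{k+1}\, d = 0
\quad\Longrightarrow\quad
d_k^* = -\gamma_{k+1}^{-1}\, g(x_k).
```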
Remark 3.
The search direction vector is one of the important elements of every gradient minimization scheme for solving unconstrained optimization problems. In solving minimization tasks, it is assumed that the iterative search direction d_k satisfies the following inequality:
g_k^T d_k < 0 ,
where g_k is the gradient of the objective function at the point x_k. Relation (8) is known as the descending condition. We list below only some of the approaches for generating search directions that fulfill Condition (8); a minimal check of this condition is sketched after the list.
  • One of the common methods for deriving a vector direction that fulfills the descending condition is to express it as the product of the negative gradient and a positive acceleration parameter. The acceleration factor can be determined using the first- or second-order Taylor expansion of the relevant iteration.
  • An unpublished idea for constructing the acceleration parameter relates to the properties of the logarithmic and local double-logarithmic reconstructions described in [6,7,8].
  • A similar approach concerns third-order accurate non-polynomial reconstructions and hyperbolic reconstructions of the objective function as a basis for developing an efficient search direction [9,10].
  • The vector direction can be generated as a linear combination of the negative gradient vector and the vector d_k defined in Algorithm 2, as proposed in [11].
  • In [12], the direction vector is presented as the product of the gradient and the acceleration parameter \theta_k = a_k / b_k, where a_k = t_k g_k^T g_k and b_k = t_k y_k^T g_k;
  • The search direction of a conjugate gradient algorithm can be formulated as a linear combination of the gradient and the difference between two successive iterative points, as presented in [13].
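As a small illustration of Condition (8), the guard sketched below (our own illustrative code, not taken from the cited works) can precede any of the constructions above: if a candidate direction fails the descending test, it is replaced by the negative gradient, which always satisfies (8) when g_k is nonzero.

```cpp
#include <vector>

using Vec = std::vector<double>;

// Ensure the search direction satisfies the descending condition g^T d < 0;
// otherwise fall back to the steepest descent direction -g.
Vec ensureDescent(const Vec& g, Vec d) {
    double gTd = 0.0;
    for (std::size_t i = 0; i < g.size(); ++i) gTd += g[i] * d[i];
    if (gTd >= 0.0) {
        for (std::size_t i = 0; i < g.size(); ++i) d[i] = -g[i];
    }
    return d;
}
```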
The last suggestion in Remark 3 induces an idea that can serve as a basis for further studies, namely, a comparison of a chosen conjugate gradient method with the hybrid accelerated method proposed in this paper.
The general form of the conjugate gradient method is given as follows:
x_{k+1} = x_k + t_k d_k ,
where the iterative step length t_k is calculated via the exact line search or via one of the inexact line searches listed in Remark 2. The main specific feature of a conjugate gradient scheme is the way the vector direction d_k is generated, which is defined as follows:
d_0 = -\nabla f(x_0) , \qquad d_k = -\nabla f(x_k) + p_k d_{k-1} , \qquad p_k = \frac{\| \nabla f(x_k) \|^2}{\| \nabla f(x_{k-1}) \|^2} ,
i.e.,
d_k = -g_k + \frac{\langle g_k , g_k \rangle}{\langle g_{k-1} , g_{k-1} \rangle}\, d_{k-1} .
In (10), \langle g_k , g_k \rangle denotes the scalar product of the gradient vectors.
For the suggested comparative studies, in relation to the research presented in this paper, it would be valuable to pay special attention to the set of quadratic functions. A quadratic function is defined by the following expression:
f(x) = \tfrac{1}{2} x^T A x + b^T x + C ,
where A is a symmetric positive definite n \times n matrix, b \in \mathbb{R}^n, and C \in \mathbb{R}. Starting with the initial condition g_1^T d_0 = 0, after some calculations, an update for the vector d_k is obtained as follows:
d_k = -g_k + \beta_{k-1} d_{k-1} ,
where
\beta_{k-1} = \frac{g_k^T g_k}{g_{k-1}^T g_{k-1}} .
The conjugate gradient method (9), with the vector direction defined by (12) and \beta_{k-1} calculated using Relation (13), is known as the Fletcher–Reeves formulation of the conjugate gradient method [14].
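To make the suggested comparison concrete, a minimal C++ sketch of the Fletcher–Reeves method (9), (12), (13) applied to the quadratic (11) is given below. The exact line search step t_k = -g_k^T d_k / (d_k^T A d_k), valid for a positive definite A, is our own simplification for the quadratic case; function and variable names are illustrative.

```cpp
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Fletcher-Reeves conjugate gradient for f(x) = (1/2) x^T A x + b^T x + C.
Vec fletcherReeves(const Mat& A, const Vec& b, Vec x,
                   int maxIter = 1000, double tol = 1e-6) {
    const std::size_t n = x.size();
    auto matVec = [&](const Vec& v) {
        Vec r(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) r[i] += A[i][j] * v[j];
        return r;
    };
    auto dot = [](const Vec& u, const Vec& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
        return s;
    };
    Vec g = matVec(x);                                   // g = A x + b
    for (std::size_t i = 0; i < n; ++i) g[i] += b[i];
    Vec d(n);
    for (std::size_t i = 0; i < n; ++i) d[i] = -g[i];    // d_0 = -g_0
    for (int k = 0; k < maxIter && dot(g, g) > tol * tol; ++k) {
        Vec Ad = matVec(d);
        const double t = -dot(g, d) / dot(d, Ad);        // exact step along d
        for (std::size_t i = 0; i < n; ++i) x[i] += t * d[i];
        Vec gNew = matVec(x);
        for (std::size_t i = 0; i < n; ++i) gNew[i] += b[i];
        const double beta = dot(gNew, gNew) / dot(g, g); // Fletcher-Reeves quotient (13)
        for (std::size_t i = 0; i < n; ++i) d[i] = -gNew[i] + beta * d[i];
        g = gNew;
    }
    return x;
}
```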
We list several significant variants of the conjugate gradient method that differ in the expressions defining the \beta_{k-1} quotient [15,16,17]:
\beta_{k-1} = \frac{g_k^T ( g_k - g_{k-1} )}{d_{k-1}^T ( g_k - g_{k-1} )} ;
\beta_{k-1} = \frac{g_k^T ( g_k - g_{k-1} )}{g_{k-1}^T g_{k-1}} ;
\beta_{k-1} = \frac{g_k^T g_k}{d_{k-1}^T g_{k-1}} ;
\beta_{k-1} = \frac{g_k^T g_k}{d_{k-1}^T ( g_k - g_{k-1} )} .
For example, the conjugate gradient method proposed in [13] can be taken as a comparative minimization model. Providing the suggested comparative analysis would certainly be a contribution to the optimization community.
One of the crucial variables of the ADD scheme (5) is the acceleration factor, calculated using the second-order Taylor expansion of the objective function:
\gamma_{k+1}^{ADD} = \frac{2 \Big[ f(x_{k+1}) - f(x_k) - \alpha_k g_k^T \big( \alpha_k d_k - \gamma_k^{-1} g_k \big) \Big]}{\big( \alpha_k d_k - \gamma_k^{-1} g_k \big)^T \big( \alpha_k d_k - \gamma_k^{-1} g_k \big)} .
The most significant contribution of ref. [2] arguably concerns the importance of the acceleration parameter. To substantiate this, the authors constructed and tested a non-accelerated version of the ADD model, called the NADD method. In those studies, the clearly superior effectiveness of the ADD method was confirmed.
Recently, in [18], the authors introduced a new hybrid approach for generating accelerated gradient optimization methods, denoted s-hybridization. In developing this new approach to constructing an efficient minimization scheme, the authors were guided by research on nearly contraction mappings and nearly asymptotically nonexpansive mappings, and on the existence of fixed points of these classes of mappings [19,20]. The main idea of the s-hybrid schemes arises from the study presented in [20], where the following three-term s-iterative rule is introduced:
x_1 = x \in C , \qquad x_{n+1} = (1 - \alpha_n) T x_n + \alpha_n T y_n , \qquad y_n = (1 - \beta_n) x_n + \beta_n T x_n , \qquad n \in \mathbb{N} .
In (15), \{\alpha_n\} and \{\beta_n\} are sequences of real numbers satisfying the following conditions:
\{\alpha_n\}, \{\beta_n\} \subset (0, 1) , \qquad \sum_{n=1}^{\infty} \alpha_n \beta_n (1 - \beta_n) = \infty .
The authors of [18] simplified the s-iteration (15) by applying Condition (17),
\alpha_n + \beta_n = 1 ,
which, after substituting \beta_n = 1 - \alpha_n into \alpha_n \beta_n (1 - \beta_n), transforms Condition (16) into (18):
\{\alpha_n\} \subset (0, 1) , \qquad \sum_{n=1}^{\infty} \alpha_n^2 (1 - \alpha_n) = \infty .
Therewith, the s-iteration with one corrective parameter \alpha_n is expressed as follows:
x_1 = x \in \mathbb{R}^n , \qquad x_{n+1} = (1 - \alpha_n) T x_n + \alpha_n T y_n , \qquad y_n = \alpha_n x_n + (1 - \alpha_n) T x_n , \qquad n \in \mathbb{N} .
Guided by Iteration (19), in connection with the SM method from [21], the authors of [18] proposed the SHSM optimization method (20):
x_{n+1} = x_n - ( 1 + \alpha_n - \alpha_n^2 )\, \gamma_n^{-1} t_n g_n .
The authors proved that this model is well defined and established a comprehensive convergence analysis.
This paper is organized as follows: in Section 2, we develop the s-hybrid double-direction minimization method based on the results of the relevant studies described in this section. The convergence analysis is presented in Section 3. Numerical investigations are illustrated in Section 4.

2. S-Hybridization of the Accelerated Double-Direction Method

In this section, we generate the s-hybrid model using the ADD iterative rule,
T y_k = y_k + t_k^2 d_k - t_k \gamma_k^{-1} g_k ,
as a guiding operator in the three-term process (19). Applying the previously stated facts regarding the s-hybridization technique and the ADD method, we develop the shADD process through the following three-term relations:
x_1 = x \in \mathbb{R}^n , \qquad x_{n+1} = (1 - \alpha_n) T x_n + \alpha_n T y_n = (1 - \alpha_n) \big( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n \big) + \alpha_n \big( y_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n \big) , \qquad y_n = \alpha_n x_n + (1 - \alpha_n) \big( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n \big) , \qquad n \in \mathbb{N} .
Before we state and prove that the three-term process (21), rewritten in a merged form, represents an accelerated gradient descent method, we recall the following two important statements.
Proposition 1
(Second-order necessary conditions, unconstrained case [22]). Let x^* be an interior point of the set Ω, and suppose that x^* is a relative minimum point of the function f \in C^2 over Ω. Then,
  • \nabla f(x^*) = 0 ;
  • for all d, \ d^T \nabla^2 f(x^*)\, d \ge 0 .
Proposition 2
(Second-order sufficient conditions, unconstrained case [22]). Let f \in C^2 be a function defined in a region in which the point x^* is an interior point. Suppose, in addition, that
  • \nabla f(x^*) = 0 ;
  • the Hessian of f, \nabla^2 f(x^*), is positive definite.
Then, x * is a strict relative minimum point of f.
Lemma 1.
The accelerated gradient iterative form of the shADD process (21) is given by the following relation:
x_{n+1} = x_n - (1 + \alpha_n - \alpha_n^2)\, t_n \big[ \gamma_n^{-1} g_n - t_n d_n \big] .
Proof. 
The merged iterative rule of the shADD process (21) can be derived by substituting the expression for y_n from (21) into the relation of the same three-term method that defines x_{n+1}:
x_{n+1} = (1 - \alpha_n)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n ) + \alpha_n \big[ \alpha_n x_n + (1 - \alpha_n)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n ) + t_n^2 d_n - t_n \gamma_n^{-1} g_n \big]
= (1 - \alpha_n)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n ) + \alpha_n^2 x_n + \alpha_n (1 - \alpha_n)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n ) - \alpha_n t_n \gamma_n^{-1} g_n + \alpha_n t_n^2 d_n
= (1 - \alpha_n)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n )(1 + \alpha_n) + \alpha_n^2 x_n - \alpha_n t_n ( \gamma_n^{-1} g_n - t_n d_n )
= (1 - \alpha_n^2)( x_n + t_n^2 d_n - t_n \gamma_n^{-1} g_n ) + \alpha_n^2 x_n - \alpha_n t_n ( \gamma_n^{-1} g_n - t_n d_n )
= x_n - t_n \gamma_n^{-1} g_n + t_n^2 d_n - \alpha_n^2 x_n + \alpha_n^2 t_n \gamma_n^{-1} g_n - \alpha_n^2 t_n^2 d_n + \alpha_n^2 x_n - \alpha_n t_n ( \gamma_n^{-1} g_n - t_n d_n )
= x_n - (1 + \alpha_n - \alpha_n^2)\, t_n ( \gamma_n^{-1} g_n - t_n d_n ) ,
which proves (22).
Now, we show that Method (22) fulfills the gradient descent property. For this purpose, let us rewrite Relation (22) as follows:
x_{n+1} = x_n + t_n D_n ,
where
D_n = -(1 + \alpha_n - \alpha_n^2) \big[ \gamma_n^{-1} g_n - t_n d_n \big] .
Since \{\alpha_n\} \subset (0, 1), it follows that 1 + \alpha_n - \alpha_n^2 > 1. Further, \gamma_n^{-1} g_n - t_n d_n can be considered a linear combination of the gradient vector, since the vector direction d_n is derived by Algorithm 2. In addition, the acceleration parameter \gamma_n is positive. Therefore, the direction D_n is a gradient descent vector.
Now, we derive the iterative value of the acceleration parameter for Method (22). To achieve this goal, we use the second-order Taylor expansion of the objective function f:
f(x_{k+1}) \approx f(x_k) - (1 + \alpha_k - \alpha_k^2)\, t_k\, g_k^T \big[ \gamma_k^{-1} g_k - t_k d_k \big] + \tfrac{1}{2} (1 + \alpha_k - \alpha_k^2)^2 \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)^T \nabla^2 f(\xi) \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big) ,
where ξ satisfies the following:
\xi \in [ x_k , x_{k+1} ] , \qquad \xi = x_k + \beta ( x_{k+1} - x_k ) = x_k - \beta (1 + \alpha_k - \alpha_k^2)\, t_k \big[ \gamma_k^{-1} g_k - t_k d_k \big] , \qquad 0 \le \beta \le 1 .
Instead of the Hessian \nabla^2 f(\xi), we use a diagonal scalar matrix approximation in the previous Taylor expression, i.e., the acceleration matrix \gamma_{k+1} I:
f(x_{k+1}) \approx f(x_k) - (1 + \alpha_k - \alpha_k^2)\, t_k\, g_k^T \big[ \gamma_k^{-1} g_k - t_k d_k \big] + \tfrac{1}{2} (1 + \alpha_k - \alpha_k^2)^2\, \gamma_{k+1} \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)^T \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big) .
This gives the expression for the acceleration parameter \gamma_{k+1}^{shADD} of the shADD process:
\gamma_{k+1}^{shADD} = \frac{2 \Big[ f(x_{k+1}) - f(x_k) - (1 + \alpha_k - \alpha_k^2)\, g_k^T \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big) \Big]}{(1 + \alpha_k - \alpha_k^2)^2 \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)^T \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)} .
We assume the positivity of the derived acceleration parameter \gamma^{shADD}. This confirms that the second-order necessary and sufficient conditions are fulfilled. In the case \gamma_{k+1}^{shADD} < 0, we assign \gamma_{k+1}^{shADD} = 1 and derive the next iterative point as
x_{k+2} = x_{k+1} - (1 + \alpha_{k+1} - \alpha_{k+1}^2)\, t_{k+1} \big[ g_{k+1} - t_{k+1} d_{k+1} \big] .
Since \{\alpha_n\} \subset (0, 1) implies 1 + \alpha_{k+1} - \alpha_{k+1}^2 > 0, together with the fact that 0 < t_{k+1} < 1, the previous scheme is a gradient descent method. □
We end this section by presenting the algorithm of the shADD method, derived on the basis of the previous analysis; a code sketch follows the listed steps.
Taking the initial values 0 < ρ < 1, 0 < τ < 1, x_0, and \gamma_0^{shADD} = 1, the algorithm of the shADD method is given by the following steps:
1. Set k = 0, compute f(x_0) and g_0, and take \gamma_0^{shADD} = 1;
2. If \| g_k \| < \epsilon, then go to Step 10; else, continue to Step 3;
3. Apply the backtracking algorithm to calculate the iterative step length t_k;
4. Compute the first vector direction s_k using Algorithm 1;
5. Compute the second vector direction d_k using Algorithm 2;
6. Compute x_{k+1} using the iterative rule (22);
7. Determine the acceleration parameter \gamma_{k+1}^{shADD} using (24);
8. If \gamma_{k+1}^{shADD} < 0, then take \gamma_{k+1}^{shADD} = 1;
9. Set k := k + 1 and go to Step 2;
10. Return x_{k+1} and f(x_{k+1}).
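A compact C++ sketch of the loop above is given below (the paper reports a C++ implementation, but this code is our own illustrative reconstruction of the listed steps). For brevity, the direction d_k is taken here in the simplified gradient-based form -g_k, in the spirit of the remark after Lemma 1, so Steps 4 and 5 collapse into one line; backtracking() is the procedure sketched in Remark 1. In an actual implementation, s_k and d_k would be computed by Algorithms 1 and 2 exactly as listed.

```cpp
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;

// Armijo backtracking from the sketch in Remark 1 (declaration only).
double backtracking(const std::function<double(const Vec&)>& f,
                    const Vec& x, const Vec& g, const Vec& d,
                    double sigma, double beta);

// Illustrative shADD loop following Steps 1-10 above, with d_k = -g_k (assumption).
Vec shADD(const std::function<double(const Vec&)>& f,
          const std::function<Vec(const Vec&)>& grad,
          Vec x, double alpha = 0.2, double eps = 1e-6, int maxIter = 10000) {
    auto dot = [](const Vec& u, const Vec& v) {
        double s = 0.0;
        for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
        return s;
    };
    const double c = 1.0 + alpha - alpha * alpha;     // corrective factor 1 + alpha - alpha^2
    double gamma = 1.0;                               // Step 1: gamma_0 = 1
    for (int k = 0; k < maxIter; ++k) {
        Vec g = grad(x);
        if (std::sqrt(dot(g, g)) < eps) break;        // Step 2: stopping test
        Vec d(g.size());
        for (std::size_t i = 0; i < g.size(); ++i) d[i] = -g[i];   // Steps 4-5 (simplified)
        const double t = backtracking(f, x, g, d, 0.25, 0.8);      // Step 3
        Vec u(g.size());                              // u = t^2 d - t gamma^{-1} g
        for (std::size_t i = 0; i < g.size(); ++i) u[i] = t * t * d[i] - (t / gamma) * g[i];
        Vec xNew(x.size());                           // Step 6: iterative rule (22)
        for (std::size_t i = 0; i < x.size(); ++i) xNew[i] = x[i] + c * u[i];
        // Step 7: acceleration parameter (24); Step 8: keep it positive.
        double gammaNew = 2.0 * (f(xNew) - f(x) - c * dot(g, u)) / (c * c * dot(u, u));
        if (gammaNew < 0.0) gammaNew = 1.0;
        x = xNew;                                     // Step 9: next iteration
        gamma = gammaNew;
    }
    return x;                                         // Step 10
}
```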

3. Convergence Features of the shADD Method

We start this section with some relevant known statements that can be found in [23,24].
Proposition 3.
If the function f : R n R is twice continuously differentiable and uniformly convex on R n , then:
  • the function f has a lower bound on L_0 = \{ x \in \mathbb{R}^n \mid f(x) \le f(x_0) \}, where x_0 \in \mathbb{R}^n is available;
  • the gradient g is Lipschitz continuous in an open convex set B that contains L_0, i.e., there exists L > 0 such that
    \| g(x) - g(y) \| \le L \| x - y \| , \quad \forall x, y \in B .
Lemma 2.
Under the assumptions of Proposition 3, there exist real numbers m, M satisfying
0 < m \le 1 \le M ,
such that f(x) has a unique minimizer x^* and
m \| y \|^2 \le y^T \nabla^2 f(x)\, y \le M \| y \|^2 , \quad \forall x, y \in \mathbb{R}^n ;
\tfrac{1}{2} m \| x - x^* \|^2 \le f(x) - f(x^*) \le \tfrac{1}{2} M \| x - x^* \|^2 , \quad \forall x \in \mathbb{R}^n ;
m \| x - y \|^2 \le ( g(x) - g(y) )^T ( x - y ) \le M \| x - y \|^2 , \quad \forall x, y \in \mathbb{R}^n .
Depending on the degree of complexity of a particular non-linear problem, its convergence analysis is often carried out on specific sets of functions. Therefore, in this section we present the convergence analysis of the derived shADD process on the set of strictly convex quadratic functions. The general form of a strictly convex quadratic is given by (30):
f(x) = \tfrac{1}{2} x^T A x - b^T x .
In (30), A is a real symmetric positive definite matrix and b \in \mathbb{R}^n. In what follows, we denote the eigenvalues of A by \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n.
Previous research showed that, for strictly convex quadratics, an adequate relation, usually a connection between the smallest and the largest eigenvalues, must be fulfilled in order to establish the convergence of the optimization method [2,21,25,26]. In the next lemma, we establish that connection for the shADD method.
Lemma 3.
The relation between the smallest and largest eigenvalues of the symmetric positive definite matrix A \in \mathbb{R}^{n \times n} that defines the strictly convex quadratic function (30), to which the shADD method (22) is applied, is given as follows:
\lambda_1 \le \frac{\gamma_{k+1}}{t_{k+1}} \le \frac{2 \lambda_n}{\beta} , \quad \forall k \in \mathbb{N} ,
where β is the parameter defined in the backtracking procedure.
Proof. 
To prove (31), we start by estimating the difference of the values of Function (30) at two successive points:
f(x_{n+1}) - f(x_n) = \tfrac{1}{2} x_{n+1}^T A x_{n+1} - b^T x_{n+1} - \tfrac{1}{2} x_n^T A x_n + b^T x_n
= \tfrac{1}{2} \big[ x_n - (1 + \alpha_n - \alpha_n^2) t_n ( \gamma_n^{-1} g_n - t_n d_n ) \big]^T A \big[ x_n - (1 + \alpha_n - \alpha_n^2) t_n ( \gamma_n^{-1} g_n - t_n d_n ) \big] - b^T \big[ x_n - (1 + \alpha_n - \alpha_n^2) t_n ( \gamma_n^{-1} g_n - t_n d_n ) \big] - \tfrac{1}{2} x_n^T A x_n + b^T x_n
= - (1 + \alpha_n - \alpha_n^2) t_n ( \gamma_n^{-1} g_n - t_n d_n )^T ( A x_n - b ) + \tfrac{1}{2} (1 + \alpha_n - \alpha_n^2)^2 t_n^2 ( \gamma_n^{-1} g_n - t_n d_n )^T A ( \gamma_n^{-1} g_n - t_n d_n )
= \tfrac{1}{2} (1 + \alpha_n - \alpha_n^2)^2 t_n^2 ( \gamma_n^{-1} g_n - t_n d_n )^T A ( \gamma_n^{-1} g_n - t_n d_n ) - (1 + \alpha_n - \alpha_n^2) t_n\, g_n^T ( \gamma_n^{-1} g_n - t_n d_n ) .
The expression above, which describes the difference between the function values at two successive points, is obtained using the following facts:
  • Matrix A is symmetric, so
    g_k^T A d_k = d_k^T A g_k ;
  • The gradient of Function (30) is
    g_k = A x_k - b .
We now substitute the derived difference into the acceleration parameter expression (24):
\gamma_{k+1}^{shADD} = \frac{2 \Big[ f(x_{k+1}) - f(x_k) - (1 + \alpha_k - \alpha_k^2)\, g_k^T \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big) \Big]}{(1 + \alpha_k - \alpha_k^2)^2 \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)^T \big( t_k^2 d_k - t_k \gamma_k^{-1} g_k \big)}
= \frac{(1 + \alpha_k - \alpha_k^2)^2 t_k^2 \big( \gamma_k^{-1} g_k - t_k d_k \big)^T A \big( \gamma_k^{-1} g_k - t_k d_k \big)}{(1 + \alpha_k - \alpha_k^2)^2 t_k^2 \big( \gamma_k^{-1} g_k - t_k d_k \big)^T \big( \gamma_k^{-1} g_k - t_k d_k \big)}
= \frac{\big( \gamma_k^{-1} g_k - t_k d_k \big)^T A \big( \gamma_k^{-1} g_k - t_k d_k \big)}{\big( \gamma_k^{-1} g_k - t_k d_k \big)^T \big( \gamma_k^{-1} g_k - t_k d_k \big)} .
The obtained expression
\frac{\big( \gamma_k^{-1} g_k - t_k d_k \big)^T A \big( \gamma_k^{-1} g_k - t_k d_k \big)}{\big( \gamma_k^{-1} g_k - t_k d_k \big)^T \big( \gamma_k^{-1} g_k - t_k d_k \big)}
confirms that the acceleration parameter \gamma_{k+1}^{shADD} can be written as the Rayleigh quotient of the real symmetric positive definite matrix A evaluated at the vector \gamma_k^{-1} g_k - t_k d_k. This fact results in the following conclusion:
\lambda_1 \le \gamma_{k+1} \le \lambda_n , \quad \forall k \in \mathbb{N} .
According to the findings in [21], the iterative step length of an accelerated gradient method derived via the backtracking inexact algorithm satisfies the following:
t_n > \frac{\beta (1 - \sigma)\, \gamma_n}{L} ,
where L is the Lipschitz constant from Proposition 3, so the following is valid:
\frac{\gamma_n}{t_n} < \frac{L}{\beta (1 - \sigma)} .
Considering Relation (32), which defines the gradient of the strictly convex quadratic, we have the following:
\| g(x) - g(y) \| = \| A x - b - A y + b \| = \| A ( x - y ) \| \le \| A \| \, \| x - y \| = \lambda_n \| x - y \| .
The previous relation confirms that the largest eigenvalue of the symmetric matrix A fulfills the property of the Lipschitz constant L in (34). Additionally, according to the bounds on the backtracking parameters σ and β, 0 < σ < 0.5 and β ∈ (σ, 1), we derive the following estimates:
\frac{\gamma_{n+1}}{t_{n+1}} < \frac{L}{\beta (1 - \sigma)} = \frac{\lambda_n}{\beta (1 - \sigma)} < \frac{\lambda_n}{\beta \cdot \tfrac{1}{2}} = \frac{2 \lambda_n}{\beta} ,
which confirms the right-hand side of Estimation (31). Based on (33) and the fact that the iterative step size is less than 1, the first inequality of (31) follows. □
On the basis of the proven estimates (31), relating the acceleration parameter, the backtracking parameter, and the smallest and largest eigenvalues of the symmetric positive definite matrix A from expression (30), we establish the convergence of the shADD method on the set of strictly convex quadratics in the following theorem.
Theorem 1.
For the strictly convex quadratic function (30), the process (22) is linearly convergent when \lambda_n < 2 \lambda_1. More precisely, the following relations are valid:
  • for some real constants p_1^k, p_2^k, \ldots, p_n^k and q_1^k, q_2^k, \ldots, q_n^k, we have
    g_k = \sum_{i=1}^{n} p_i^k v_i , \qquad d_k = \sum_{i=1}^{n} q_i^k v_i ,
    where the vectors v_i, i \in \{1, \ldots, n\}, form an orthonormal set of eigenvectors of the matrix A, and the following inequalities are fulfilled:
    (p_i^{k+1})^2 \le \delta^2 (p_i^k)^2 \quad \text{and} \quad (q_i^{k+1})^2 \le \lambda_n^2 (q_i^k)^2 ,
    where
    \delta = \max \Big\{ 1 - \frac{\beta \lambda_1}{2 \lambda_n} ,\ \frac{\lambda_n}{\lambda_1} (1 + \alpha_k - \alpha_k^2) - 1 \Big\} ;
  • for the gradient (32) of Function (30), the following is valid:
    \lim_{k \to \infty} \| g_k \| = 0 .
Proof. 
Taking the expression of Gradient (32) of Function (30) at the (n+1)-th iteration, we obtain the following:
g_{n+1} = A x_{n+1} - b = A \big( x_n - (1 + \alpha_n - \alpha_n^2) t_n [ \gamma_n^{-1} g_n - t_n d_n ] \big) - b
= A x_n - b - (1 + \alpha_n - \alpha_n^2) t_n A [ \gamma_n^{-1} g_n - t_n d_n ]
= g_n - (1 + \alpha_n - \alpha_n^2) t_n \gamma_n^{-1} A g_n + (1 + \alpha_n - \alpha_n^2) t_n^2 A d_n
= \big[ I - (1 + \alpha_n - \alpha_n^2) t_n \gamma_n^{-1} A \big] g_n + (1 + \alpha_n - \alpha_n^2) t_n^2 A d_n .
Applying the orthonormal representations (36) in the previous equation leads to the following:
g_{k+1} = \sum_{i=1}^{n} \big[ 1 - (1 + \alpha_k - \alpha_k^2) t_k \gamma_k^{-1} \lambda_i \big] p_i^k v_i + (1 + \alpha_k - \alpha_k^2) t_k^2 \sum_{i=1}^{n} \lambda_i q_i^k v_i .
Knowing that \lambda_i \le \lambda_n, i \in \{1, 2, \ldots, n\}, we will prove (38) by showing that
\big| 1 - (1 + \alpha_k - \alpha_k^2)\, t_k \gamma_k^{-1} \lambda_i \big| < 1 .
For this purpose, let us first assume that \lambda_i < (1 + \alpha_k - \alpha_k^2)^{-1} t_k^{-1} \gamma_k, i.e.,
\lambda_i < \frac{\gamma_k}{(1 + \alpha_k - \alpha_k^2)\, t_k} .
Applying (31), we obtain the following:
1 > \frac{\lambda_i (1 + \alpha_k - \alpha_k^2)\, t_k}{\gamma_k} \ge \frac{\lambda_1 t_k}{\gamma_k} \ge \frac{\beta \lambda_1}{2 \lambda_n} .
We rewrite the previous inequalities as follows:
1 - \frac{\lambda_i (1 + \alpha_k - \alpha_k^2)\, t_k}{\gamma_k} \le 1 - \frac{\beta \lambda_1}{2 \lambda_n} \le \delta .
We now assume the opposite case:
\lambda_i > \frac{\gamma_k}{(1 + \alpha_k - \alpha_k^2)\, t_k} .
The last inequality gives
1 < \frac{\lambda_i (1 + \alpha_k - \alpha_k^2)\, t_k}{\gamma_k} \le \frac{\lambda_n (1 + \alpha_k - \alpha_k^2)}{\lambda_1} ,
which directly implies
\Big| 1 - \frac{\lambda_i (1 + \alpha_k - \alpha_k^2)\, t_k}{\gamma_k} \Big| \le \Big| \frac{\lambda_n}{\lambda_1} (1 + \alpha_k - \alpha_k^2) - 1 \Big| \le \delta .
Estimates (41) and (42) prove (38).
Finally, in order to prove (39), we use the gradient representation from (36),
g_k = \sum_{i=1}^{n} p_i^k v_i ,
which results in the following conclusion:
\| g_k \|^2 = \sum_{i=1}^{n} (p_i^k)^2 .
Applying the fact that \delta \in (0, 1) to the inequalities (37) directly proves (39). □

Non-Convex Case Overview

In the previous section, we proved that the shADD method converges linearly on the set of strictly convex quadratics. Although it is not the main subject of this research, in this subsection we analyze a possible application of the presented scheme when the objective function is non-convex. The importance of this introductory discussion arises from the endless array of contemporary non-convex problems such as matrix completion, low-rank models, tensor decomposition, and deep neural networks.
Neural networks, considered universal function approximators, exhibit significant symmetry properties. These features make them non-convex structures. Some of the known techniques for solving machine learning problems and other non-convex problems are as follows:
  • Stochastic gradient descent methods,
  • Mini batch approach,
  • Stochastic variance reduced gradient (SVRG) method,
  • Alternating minimization methods,
  • Branch and bound methods.
Confirming convergence properties in non-convex optimization is quite difficult. In a theoretical sense, there are no standard approaches to achieving this goal, as there are for convex problems. Additionally, when the objective function is non-convex, there are potentially many local minima, as well as saddle points and flat regions.
Generally, when solving non-convex optimization tasks, theoretical guarantees are very weak, and there is no tried and tested way of ending this process successfully.
Principal component analysis (PCA) is a technique for linear dimensionality reduction that is useful in proving global convergence of minimization methods when applied to non-convex functions. We propose connecting this approach with the shADD method in further studies. The PCA process can be characterized through the following steps:
  • Standardizing the range of continuous initial variables;
  • Computing the covariance matrix to identify correlations;
  • Computing the eigenvectors and eigenvalues of the covariance matrix to identify the principal components;
  • Creating a feature vector to decide which principal components to keep;
  • Recasting the data along the principal components axes.
We set the goal problem as follows: determine the dominant eigenvector and eigenvalue of a symmetric positive semidefinite matrix A. We can write this problem as follows:
u_1 = \arg\max_{x} \frac{x^T A x}{x^T x} .
The equivalent of Problem (44) is (45):
\lambda_1 u_1 = \arg\min_{x} \big\| x x^T - A \big\|_F^2 ,
where \| \cdot \|_F is the Frobenius norm, i.e., \| B \|_F = \sqrt{ \sum_{i,j} B_{i,j}^2 }. Taking the objective function defined in terms of the Frobenius norm,
f(x) = \tfrac{1}{4} \big\| x x^T - A \big\|_F^2 ,
we see that the gradient of this function is given by the following expression:
\nabla f(x) = \big( x x^T - A \big) x .
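The gradient formula above follows from a short expansion of the Frobenius norm (a routine computation using the symmetry of A, added here for readability):

```latex
f(x) = \tfrac{1}{4}\,\big\| x x^T - A \big\|_F^2
     = \tfrac{1}{4}\Big[ (x^T x)^2 - 2\, x^T A x + \| A \|_F^2 \Big],
\qquad
\nabla f(x) = (x^T x)\, x - A x = \big( x x^T - A \big) x .
```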
The classical gradient descent update step for Function (46) can be written as follows:
x_{n+1} = x_n - c_n \big( x_n x_n^T x_n - A x_n \big) ,
where the adaptive step size parameter fulfills the following relation:
c_n = \frac{\eta}{1 + \eta\, x_n^T x_n} .
Applying (49) in (48) leads to the following:
x_{n+1} = x_n - \frac{\eta}{1 + \eta x_n^T x_n} \big( x_n x_n^T x_n - A x_n \big)
= x_n - \frac{\eta\, x_n^T x_n}{1 + \eta x_n^T x_n}\, x_n + \frac{\eta}{1 + \eta x_n^T x_n}\, A x_n
= \Big( 1 - \frac{\eta\, x_n^T x_n}{1 + \eta x_n^T x_n} \Big) x_n + \frac{\eta}{1 + \eta x_n^T x_n}\, A x_n
= \frac{ x_n + \eta A x_n }{ 1 + \eta x_n^T x_n }
= \frac{1}{1 + \eta x_n^T x_n} ( I + \eta A )\, x_n .
Applying the previous relation inductively, we conclude that
x_N = ( I + \eta A )^N x_0 \prod_{t=0}^{N-1} \frac{1}{1 + \eta \| x_t \|^2} .
Relation (51) confirms that the gradient descent iteration (48) converges linearly, since
\frac{x_N}{\| x_N \|} = \frac{( I + \eta A )^N x_0}{\big\| ( I + \eta A )^N x_0 \big\|} .
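A minimal C++ sketch of the update (48) with the adaptive step (49) is given below (our own illustrative code; the step parameter eta, the number of iterations, and the final normalization are arbitrary choices for the example).

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Gradient descent (48) with the adaptive step (49) for f(x) = (1/4) ||x x^T - A||_F^2.
// As derived in (50), each step is equivalent to x <- (I + eta*A) x / (1 + eta * x^T x).
Vec dominantEigenvectorGD(const Mat& A, Vec x, double eta = 0.1, int iters = 500) {
    const std::size_t n = x.size();
    for (int it = 0; it < iters; ++it) {
        double xTx = 0.0;
        for (std::size_t i = 0; i < n; ++i) xTx += x[i] * x[i];
        Vec Ax(n, 0.0);
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) Ax[i] += A[i][j] * x[j];
        const double cn = eta / (1.0 + eta * xTx);       // adaptive step (49)
        for (std::size_t i = 0; i < n; ++i)
            x[i] -= cn * (x[i] * xTx - Ax[i]);           // update (48)
    }
    double norm = 0.0;                                   // normalize the final iterate
    for (double xi : x) norm += xi * xi;
    norm = std::sqrt(norm);
    for (double& xi : x) xi /= norm;
    return x;
}
```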
A similar analysis can be applied to the shADD iteration (22) for Function (46). According to the construction of the vector d_k (Algorithm 2) in (22), this vector direction can be written as a linear combination of the gradient vector, as explained in the proof of Lemma 1. This allows us to rewrite iteration (22) in a simpler form for which Property (52) can be easily proved.

4. Numerical Test Results

In this section, we analyze the numerical performance of the shADD method depending on the choice of the parameter α from (18), which we call the corrective parameter. For the selected values of this parameter, we track standard numerical metrics: the number of iterations performed, the CPU time, and the number of function evaluations.
As proposed in ref. [27], which presents an extensive comparative analysis of several Khan-hybrid models, instead of a range of α_k values (18), we take a fixed numerical value \alpha_k \equiv \alpha \in (0, 1) for all k \in \{1, 2, 3, \ldots\}. This fixes the corrective factor of the shADD iteration (22):
c(\alpha) = 1 + \alpha - \alpha^2 \in (1, 2) .
We observe that, for specific values of the parameter α ∈ (0, 1), the expression c(α) takes the following values:
c(0.1) = 1.09 = c(0.9) , \quad c(0.2) = 1.16 = c(0.8) , \quad c(0.3) = 1.21 = c(0.7) , \quad c(0.4) = 1.24 = c(0.6) , \quad c(0.5) = 1.25 .
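The pairing of equal values above reflects the symmetry of the corrective expression around α = 0.5, since

```latex
c(\alpha) = 1 + \alpha - \alpha^2 = 1 + \alpha(1 - \alpha) = c(1 - \alpha),
\qquad \alpha \in (0, 1).
```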
All of this motivated us to test the shADD method for these five specified values of α . For this purpose, we selected five test functions from [28]. We tested these functions for the five given values of the corrective parameter α { 0.1 , 0.2 , 0.3 , 0.4 , 0.5 } and for ten different values of the number of variables n { 1000 , 2000 , 3000 , 4000 , 5000 , 6000 , 7000 , 8000 , 9000 , 10,000 } . For each test function, we summarized the obtained outcomes for all selected numbers of variables. The results obtained from the measured performance metrics (number of iterations, CPU time, and number of evaluations) are presented in Table 1, Table 2, and Table 3, respectively.
We can observe that for the first test function, the Extended Penalty function, the values of the output results regarding the number of iterations and the number of evaluations do not depend on changes in the corrective parameter α . For the same function, the changes in CPU time for different values of the corrective parameter are also minimal. However, for the other test functions, there are differences in the final values of all measured metrics depending on the choice of the corrective parameter value. In terms of metrics, regarding the number of iterations, the best results are achieved for α = 0.2 and α = 0.3 . In the case of measuring CPU time, tests for α = 0.1 and α = 0.2 take the least time. In terms of the number of evaluations, the smallest values are achieved for α = 0.2 and α = 0.3 . A general conclusion we can draw, based on a total of 250 tests, is that it is advisable to use a corrective parameter value of α = 0.2 or α = 0.3 .
The tests were conducted using a standard termination criterion:
\| g_k \| \le 10^{-6} \quad \text{and} \quad | f(x_{k+1}) - f(x_k) | \le 10^{-16} \big( 1 + | f(x_k) | \big) .
The code was written in the C++ programming language.

5. Conclusions

In this paper, we propose a new s-hybrid variant of the accelerated double-direction minimization model. The convergence analysis of the developed iterative rule on the set of strictly convex quadratic functions confirms that the method converges linearly.
Numerical tests were performed for different values of the corrective parameter. This study leaves open space for further investigation of the numerical performance of the defined process and for comparative analyses with similar models.

Author Contributions

Conceptualization, V.R. and M.J.P.; methodology, V.R.; software, M.J.P.; validation, V.R. and M.J.P.; formal analysis, V.R. and M.J.P.; investigation, M.J.P.; resources, V.R.; data curation, V.R. and M.J.P.; writing—original draft preparation, V.R. and M.J.P.; writing—review and editing, V.R.; visualization, M.J.P.; supervision, V.R. and M.J.P.; project administration, M.J.P.; funding acquisition, V.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting this study are available from the authors upon request.

Acknowledgments

The author Milena J. Petrović gratefully acknowledges the support of the Ministry of Science, Technological Development and Innovation of the Republic of Serbia, project no. 451-03-65/2024-03/200123.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Djuranović-Miličić, N.I.; Gardašević-Filipović, M. A multi-step curve search algorithm in nonlinear optimization: Nondifferentiable case. Facta Univ. Ser. Inform. 2010, 25, 11–24. [Google Scholar]
  2. Petrović, M.J.; Stanimirovic, P.S. Accelerated Double Direction Method For Solving Unconstrained Optimization Problems. Math. Probl. Eng. 2014, 2014, 965104. [Google Scholar] [CrossRef]
  3. Armijo, L. Minimization of functions having Lipschitz continuous first partial derivatives. Pac. J. Math. 1966, 16, 1–3. [Google Scholar] [CrossRef]
  4. Wolfe, P. Convergence conditions for ascent methods. SIAM Rev. 1968, 11, 226–235. [Google Scholar] [CrossRef]
  5. Goldstein, A.A. On steepest descent. SIAM J. Control 1965, 3, 147–151. [Google Scholar]
  6. Petrović, M. A Truly Third Order Finite Volume Scheme on Quadrilateral Mesh. Master’s Thesis, Lund University, Lund, Sweden, 2006. [Google Scholar]
  7. Artebrant, R.; Schroll, H.J. Conservative Logarithmic Reconstruction and Finite Volume Methods. SIAM J. Sci. Comput. 2005, 27, 294–314. [Google Scholar] [CrossRef]
  8. Artebrant, R.; Schroll, H.J. Limiter-free Third Order Logarithmic Reconstruction. SIAM J. Sci. Comput. 2006, 28, 359–381. [Google Scholar] [CrossRef]
  9. Artebrant, R. Third order accurate non-polynomial reconstruction on rectangular and triangular meshes. J. Sci. Comput. 2007, 30, 193–221. [Google Scholar] [CrossRef]
  10. Schroll, H.J.; Svensson, F. A Bi-Hyperbolic Finite Volume Method on Quadrilateral Meshes. J. Sci. Comput. 2006, 26, 237–260. [Google Scholar] [CrossRef]
  11. Potra, F.A.; Shi, Y. Efficient line search algorithm for unconstrained optimization. J. Optim. Theory Appl. 1995, 85, 677–704. [Google Scholar] [CrossRef]
  12. Andrei, N. An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer. Algorithms 2006, 42, 63–73. [Google Scholar] [CrossRef]
  13. Andrei, N. An accelerated conjugate gradient algorithm with guaranteed descent and conjugacy conditions for unconstrained optimization. Optim. Methods Softw. 2012, 27, 583–604. [Google Scholar] [CrossRef]
  14. Fletcher, R.; Reeves, C.M. Function minimization by conjugate gradients. Comput. J. 1964, 7, 149–154. [Google Scholar] [CrossRef]
  15. Dai, Y.H.; Yuan, J.Y.; Yuan, Y. Modified two-point step-size gradient methods for unconstrained optimization. Comput. Optim. Appl. 2002, 22, 103–109. [Google Scholar] [CrossRef]
  16. Dai, Y.H.; Yuan, Y. Alternate minimization gradient method. IMA J. Numer. Anal. 2003, 23, 377–393. [Google Scholar] [CrossRef]
  17. Polak, E.; Ribiére, G. Note sur la convergence de méthodes de directions conjuguées. Rev. Fr. d’Inform. Rech. Opér. Sér. Rouge 1969, 16, 35–43. [Google Scholar] [CrossRef]
  18. Petrović, M.J.; Rakočević, V.; Ilić, D. Hybrid optimization models based on S-iteration process. Filomat 2024, 38. [Google Scholar]
  19. Sahu, D.R. Fixed points of demicontinuous nearly Lipschitzian mappings in Banach spaces. Comment. Math. Univ. Carol. 2005, 46, 653–666. [Google Scholar]
  20. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 8, 61. [Google Scholar]
  21. Stanimirović, P.S.; Miladinović, M.B. Accelerated gradient descent methods with line search. Numer. Algorithms 2010, 54, 503–520. [Google Scholar] [CrossRef]
  22. Luenberger, D.G.; Ye, Y. Linear and Nonlinear Programming; Springer Science+Business Media, LLC: New York, NY, USA, 2008. [Google Scholar]
  23. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: London, UK, 1970. [Google Scholar]
  24. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  25. Long, J.; Hu, X.; Zhang, L. Improved Newton’s method with exact line searches to solve quadratic matrix equation. J. Comput. Appl. Math. 2008, 222, 645–654. [Google Scholar] [CrossRef]
  26. Zhou, Q. An adaptive nonmonotonic trust region method with curvilinear searches. J. Comput. Math. 2006, 24, 761–770. [Google Scholar]
  27. Rakočević, V.; Petrović, M.J. Comparative analysis over accelerated models for solving unconstrained optimization problems with application of Khan’s hybrid rule. Mathematics 2022, 10, 4411. [Google Scholar] [CrossRef]
  28. Andrei, N. An Unconstrained Optimization Test Functions Collection. Adv. Model. Optim. 2008, 10, 1–15. Available online: https://camo.ici.ro/journal/vol10/v10a10.pdf (accessed on 7 February 2025).
Table 1. Number of iterations with respect to the corrective parameter value.

Test Functions                              α = 0.1    α = 0.2    α = 0.3    α = 0.4    α = 0.5
Extended Penalty function                        60         60         60         60         60
Diagonal 1 function                              54         70         93         81         87
Diagonal 3 function                             190        155        166        177        223
Generalized Tridiagonal 1 function              128        120        120        120        120
Quadratic Diagonal Perturbed function          2588       2407       1455       1618       3779
Table 2. CPU time with respect to the corrective parameter value.

Test Functions                              α = 0.1    α = 0.2    α = 0.3    α = 0.4    α = 0.5
Extended Penalty function                         5          3          5          5          4
Diagonal 1 function                               3          5          6          6          6
Diagonal 3 function                              10         12         14         19         17
Generalized Tridiagonal 1 function               10          9         10         10         10
Quadratic Diagonal Perturbed function           321        200        249        175        294
Table 3. Number of evaluations with respect to the corrective parameter value.

Test Functions                              α = 0.1    α = 0.2    α = 0.3    α = 0.4    α = 0.5
Extended Penalty function                      3729       3729       3729       3729       3729
Diagonal 1 function                          13,979     13,785     19,634     14,440     14,870
Diagonal 3 function                            7140       4289       3741       9486       7767
Generalized Tridiagonal 1 function             1053       1006       1013       1019       1027
Quadratic Diagonal Perturbed function          8962       8419       5563       6052     12,535