Article

Machine Learning in Quasi-Newton Methods

by Vladimir Krutikov 1,2, Elena Tovbis 3, Predrag Stanimirović 1,4, Lev Kazakovtsev 1,3,* and Darjan Karabašević 5,6,*
1 Laboratory “Hybrid Methods of Modeling and Optimization in Complex Systems”, Siberian Federal University, 79 Svobodny Prospekt, 660041 Krasnoyarsk, Russia
2 Department of Applied Mathematics, Kemerovo State University, 6 Krasnaya Street, 650043 Kemerovo, Russia
3 Institute of Informatics and Telecommunications, Reshetnev Siberian State University of Science and Technology, 31 Krasnoyarskii Rabochii Prospekt, 660037 Krasnoyarsk, Russia
4 Faculty of Sciences and Mathematics, University of Niš, 18000 Niš, Serbia
5 College of Global Business, Korea University, Sejong 30019, Republic of Korea
6 Faculty of Applied Management, Economics and Finance, University Business Academy in Novi Sad, Jevrejska 24, 11000 Belgrade, Serbia
* Authors to whom correspondence should be addressed.
Axioms 2024, 13(4), 240; https://doi.org/10.3390/axioms13040240
Submission received: 14 February 2024 / Revised: 22 March 2024 / Accepted: 2 April 2024 / Published: 5 April 2024

Abstract:
In this article, we consider the correction of metric matrices in quasi-Newton methods (QNM) from the perspective of machine learning theory. Based on training information for estimating the matrix of the second derivatives of a function, we formulate a quality functional and minimize it by using gradient machine learning algorithms. We demonstrate that this approach leads us to the well-known ways of updating metric matrices used in QNM. The learning algorithm for finding metric matrices performs minimization along a system of directions, the orthogonality of which determines the convergence rate of the learning process. The degree of learning vectors’ orthogonality can be increased both by choosing a QNM and by using additional orthogonalization methods. It has been shown theoretically that the orthogonality degree of learning vectors in the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is higher than in the Davidon–Fletcher–Powell (DFP) method, which determines the advantage of the BFGS method. In our paper, we discuss some orthogonalization techniques. One of them is to include iterations with orthogonalization or an exact one-dimensional descent. As a result, it is theoretically possible to detect the cumulative effect of reducing the optimization space on quadratic functions. Another way to increase the orthogonality degree of learning vectors at the initial stages of the QNM is a special choice of initial metric matrices. Our computational experiments on problems with a high degree of conditionality have confirmed the stated theoretical assumptions.

1. Introduction

The problem of unconstrained minimization of smooth functions in a finite-dimensional Euclidean space has received a lot of attention in the literature [1,2]. In unconstrained optimization, in contrast to constrained optimization [3], the process of optimizing the objective function is carried out in the absence of restrictions on variables. Unconstrained problems arise also as reformulations of constrained optimization problems, in which the constraints are replaced by penalization terms in the objective function that have the effect of discouraging constraint violations [2].
Well-known methods [1,2] that enable us to solve such a problem include the gradient method, which is based on the idea of function local linear approximation, or Newton’s method, which uses its quadratic approximation. The Levenberg–Marquardt method is a modification of Newton’s method, where the direction of descent differs from that specified by Newton’s method. The conjugate gradient method is a two-step method in which the parameters are found from the solution of a two-dimensional optimization problem.
Quasi-Newton minimization methods are effective tools for solving smooth minimization problems when the function level curves have a high degree of elongation [4,5,6,7]. QNMs are commonly applied in a wide range of areas, such as biology [8], image processing [9], engineering [10,11,12,13,14,15], and deep learning [16,17,18].
The QNM is based on the idea of using a matrix of second derivatives reconstructed from the gradients of a function. The first QNM was proposed in [19] and improved in [20]. The generally accepted notation for the matrix updating formula in this method is DFP. Nowadays, there is a significant number of equations for updating matrices in the QNM [4,5,6,7,21,22,23,24,25,26,27,28], and it is generally accepted [4,5] that among the variety of QNMs, the best methods use the BFGS matrix updating equation [29,30,31]. However, the superiority of BFGS among QNMs has been established experimentally but has not been explained theoretically [5].
A limited-memory version of the BFGS method, named L-BFGS [32], was proposed to handle high-dimensional problems. The algorithm stores only a few vectors that represent the approximation of the Hessian instead of the entire matrix. A version with bound constraints was proposed in [33].
The penalty method [2] was developed for solving constrained optimization problems. The unconstrained problems are formed by adding a term, called a penalty function, to the objective function. The penalty is zero for feasible points and non-zero for infeasible points.
The development of QNMs occurred spontaneously through the search for matrix updating equations that satisfy certain properties of data approximation obtained in the problem solving process. In this paper, we consider a method for deriving matrix updating equations in QNMs by forming a quality functional based on learning relations for matrices, followed by obtaining matrix updating equations in the form of a step of the gradient method for minimizing the quality functional. This approach has shown high efficiency in organizing subgradient minimization methods [34,35].
In machine learning theory, the system in which the average risk (mathematical expectation of the total loss function) is minimal is considered optimal [36,37]. The goal of learning represents the state that has to be reached by the learning system in the process of learning. The selection of such a desired state is actually achieved by a proper choice of a certain functional that has an extremum which corresponds to the desired state [38]. Thus, in the matrix learning process, it is necessary to formulate a quality functional.
In QNMs, for each matrix row, the product of that row with a known vector serves as a learning relation. Consequently, we have a linear model whose parameters are the coefficients of the matrix row. Thus, we may formulate a quadratic learning quality functional for a linear model and obtain a gradient machine learning (ML) algorithm. This paper shows how one can obtain known methods for updating matrices in QNMs based on a gradient learning algorithm. Based on the general properties of convergence of gradient learning algorithms, it seems relevant to study the origins of the effectiveness of metric updating equations in QNMs.
In a gradient learning algorithm, the sequence of steps is represented as a method of minimization along a system of directions. The degree of orthogonality of these directions determines the convergence rate of the algorithm. The use of gradient learning algorithms for deriving matrix updating equations in QNMs enables us to analyze the quality of matrix updating algorithms based on the convergence rate properties of the learning algorithms. This paper shows that the higher degree of orthogonality of learning vectors in the BFGS method determines its advantage compared to the DFP method.
Studies on quadratic functions identify conditions under which the space dimension is reduced during the QNM iterations. The dimension of the minimization space is reduced when the QNM includes iterations with an exact one-dimensional descent or an iteration with additional orthogonalization. It is possible to increase the orthogonality of the learning vectors and thereby increase the convergence rate of the method through special normalization of the initial matrix.
The computational experiment was carried out on functions with a high degree of conditionality. Various ways of increasing the orthogonality of learning vectors were assessed. The theoretically predicted effects of increasing the efficiency of QNMs confirmed their effectiveness in practice. It turned out that with an approximate one-dimensional descent, additional orthogonalization in iterations of the algorithm significantly increased the efficiency of the method. In addition, the efficiency of the method also increased significantly with the correct normalization of the initial matrix.
The rest of this paper is organized as follows. In Section 2, we provide basic information about matrix learning algorithms in QNMs. Section 3 contains an analysis of matrix updating formulas in QNMs. A symmetric positive definite metric is considered in Section 4. Section 5 gives a qualitative analysis of the BFGS and DFP matrix updating equations. Methods for reducing the minimization space of QNMs on quadratic functions are presented in Section 6. Methods for increasing the orthogonality of learning vectors in QNMs are considered in Section 7. In Section 8, we present a numerical study, and the last section summarizes the work.

2. Matrix Learning Algorithms in Quasi-Newton Methods

Consider the minimization problem
$$f(x) \to \min, \quad x \in \mathbb{R}^n.$$
The QNM for this problem is iterated as follows:
$$x^{k+1} = x^k + \beta_k s^k, \quad s^k = -H^k\nabla f(x^k), \tag{1}$$
$$\beta_k = \arg\min_{\beta\ge 0} f(x^k + \beta s^k), \tag{2}$$
$$\Delta x^k = x^{k+1} - x^k, \quad y^k = \nabla f(x^{k+1}) - \nabla f(x^k), \tag{3}$$
$$H^{k+1} = H(H^k, \Delta x^k, y^k). \tag{4}$$
Here, $\nabla f(x)$ is the gradient of the function, $s^k$ is the search direction, and $\beta_k$ is chosen to satisfy the Wolfe conditions [2]. Further, $H^k \in \mathbb{R}^{n\times n}$ is a symmetric matrix which is used as an approximation of the inverse Hessian. The operator
$$H(H, \Delta x, y) \in \mathbb{R}^{n\times n}, \quad H \in \mathbb{R}^{n\times n}, \quad \Delta x, y \in \mathbb{R}^n$$
specifies a certain equation for updating the initial matrix H. At the input of the algorithm, the starting point x0 and the symmetric strictly positive definite matrix H0 must be specified. Such a matrix will be denoted as H0 > 0.
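To make the scheme (1)–(4) concrete, the following minimal Python sketch shows one generic quasi-Newton step with an abstract update operator; the function names, the callable arguments, and the use of NumPy are our own illustrative assumptions rather than code from the paper.

```python
import numpy as np

def quasi_newton_step(x, H, grad, update, line_search):
    """One generic QNM iteration in the form (1)-(4)."""
    g = grad(x)
    s = -H @ g                    # search direction, Equation (1)
    beta = line_search(x, s)      # step size, Equation (2), e.g. satisfying the Wolfe conditions
    x_new = x + beta * s
    dx = x_new - x                # Delta x^k, Equation (3)
    y = grad(x_new) - g           # y^k, Equation (3)
    H_new = update(H, dx, y)      # metric matrix update, Equation (4)
    return x_new, H_new
```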
Let us consider the relations for obtaining updating equations for Hk matrices on quadratic functions:
$$f(x) = \frac{1}{2}\langle x - x^*, A(x - x^*)\rangle + d, \quad A > 0, \tag{6}$$
where $x^*$ is the minimum point. Here and below, the expression $\langle\cdot,\cdot\rangle$ means a scalar product of vectors. Without a loss of generality, we assume d = 0. The gradient of a quadratic function f(x) is $\nabla f(x) = A(x - x^*)$. For $\Delta x \in \mathbb{R}^n$, the gradient difference $y = \nabla f(x + \Delta x) - \nabla f(x)$ satisfies the relation:
$$A\Delta x = y \quad \text{or} \quad A^{-1}y = \Delta x. \tag{7}$$
The equalities in (7) are used to obtain various equations for updating matrices Hk, which are approximations for A−1, or matrices Bk = (Hk)−1, which are approximations for A. An arbitrary equation for updating matrices H or B, the result of which is a matrix satisfying (7), will be denoted by H(H, Δx, y) or B(B, Δx, y), respectively.
Denoting by $A_i$ and $A_i^{-1}$ the rows of the matrices A and $A^{-1}$ with the i-th index, we obtain, according to (7), the learning relations necessary to formulate algorithms for learning the matrix rows:
$$A_i\Delta x = y_i, \quad A_i^{-1}y = \Delta x_i, \quad i = 1, 2, \ldots, n, \tag{8}$$
where yi and Δxi are the components of the vectors in (7). The relations in (8) make it possible to use machine learning algorithms of a linear model in the parameters to estimate the rows of the corresponding matrices.
Let us formulate the problem of estimating the parameters of a linear model from observational data.
ML problem: find unknown parameters c* ∈ Rn of the linear model
$$y = \langle z, c\rangle, \quad z, c \in \mathbb{R}^n, \quad y \in \mathbb{R}^1 \tag{9}$$
from observational data
$$y_k \in \mathbb{R}^1, \quad z^k \in \mathbb{R}^n, \quad k = 0, 1, 2, \ldots, \tag{10}$$
where yk = <c*, zk>. We will use an indicator of training quality,
$$Q(z, c) = \frac{1}{2}(\langle z, c\rangle - y)^2, \tag{11}$$
which is an estimate of the quality functional required to find c*.
Function (11) is a loss function. Due to the large dimension of the problem of estimating the elements of metric matrices, the use of the classical least squares method becomes difficult. We use the adaptive least squares method (recurrent least squares formulas).
The gradient learning algorithm based on (11) has the following form:
$$c^{k+1} = c^k - h_k\nabla Q(z^k, c^k) = c^k - h_k(\langle z^k, c^k\rangle - y_k)z^k. \tag{12}$$
Due to the orthogonality of the training vectors, the stochastic gradient method in the form “receive an observation–train–forget the observation” enables us, within quasi-Newton methods, to obtain good approximations of the inverse matrices of second derivatives while maintaining their symmetry and positive definiteness.
The value of this perspective is that it allows us to identify the advantages of the BFGS method, to derive a method with orthogonalization of the learning vectors, and to confirm these claims through testing.
The Kaczmarz algorithm [39] is a special case of (12) with the form
$$c^{k+1} = c^k - \frac{\langle z^k, c^k\rangle - y_k}{\langle z^k, z^k\rangle}z^k. \tag{13}$$
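As a hypothetical illustration (not code from the paper), the Kaczmarz step (13) and its finite termination on orthogonal learning vectors can be sketched in Python as follows; the synthetic data are our own.

```python
import numpy as np

def kaczmarz_step(c, z, y):
    """One iteration of (13): project c onto the hyperplane <z, c> = y."""
    return c - ((z @ c - y) / (z @ z)) * z

# Toy check with synthetic data: with mutually orthogonal learning vectors,
# the exact parameter vector c* is recovered in at most n steps.
rng = np.random.default_rng(0)
n = 5
c_true = rng.normal(size=n)
Z = np.linalg.qr(rng.normal(size=(n, n)))[0]   # orthonormal columns
c = np.zeros(n)
for k in range(n):
    z = Z[:, k]
    c = kaczmarz_step(c, z, z @ c_true)        # observation y_k = <c*, z^k>
print(np.allclose(c, c_true))                  # True
```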
Let us list some of the properties of process (13), which we use to justify the properties of matrix updating in QNMs.
Property 1.
Process (13) ensures the equality
$$y_k = \langle z^k, c^{k+1}\rangle, \tag{14}$$
and the solution is achieved under the condition of minimum change in the parameters’ values, i.e., minimal $\|c^{k+1} - c^k\|$.
Property 2.
If yk = <c*, zk> then the iteration of process (13) is equivalent to the step of minimizing the quadratic function
$$\phi(c) = \langle c - c^*, c - c^*\rangle/2 \tag{15}$$
from the point ck along the direction zk.
Proof. 
Property 2 is verified by direct substitution: the iteration of (13) performs the minimization step of the function in (15) along the direction $z^k$, as presented in Figure 1. Property 1 follows from the fact that the movement to the point $c^{k+1}$ is carried out along the normal to the hyperplane $\langle z^k, c\rangle = y_k$, that is, along the shortest path (Figure 1). Movements to other points on the hyperplane, for example to point A, satisfy only the condition in (14). □
Let us denote the residual as rk = ckc*. By subtracting c* from both sides of (13) and making transformations, we obtain the following learning algorithm in the form of residuals:
$$r^{k+1} = W(z^k)r^k, \quad W(z) = I - \frac{zz^T}{z^Tz}, \tag{16}$$
where I is the identity matrix. The sequence of minimization steps can be represented in the form of the residual transformation, where m is the number of iterations:
$$r^{k+1} = W_{k-m}^{k}(z)\,r^{k-m}, \quad W_{k-m}^{k}(z) = W(z^k)W(z^{k-1})\cdots W(z^{k-m}). \tag{17}$$
The convergence rate of process (13) is significantly affected by the degree of orthogonality of the learning vectors z. The following property reflects the well-known fact that minimization along orthogonal directions terminates finitely for the quadratic form in (15), whose Hessian has equal eigenvalues.
Property 3.
Let the vectors $z^k$, $k = l, l+1, \ldots, l+n-1$, for a sequence of n iterations of (13) be mutually orthogonal. Then, the solution $c^*$ minimizing the function (15) is obtained in no more than n steps of the process (13) for an arbitrary initial $c^l$, wherein
$$r^{l+n} = W_l^{l+n-1}(z)\,r^l = 0, \quad W_l^{l+n-1}(z) = 0. \tag{18}$$
The following results are useful to estimate the convergence rate of the process in (13) as a method for minimizing the function in (15) without orthogonality of the descent vectors.
Consider a cycle of iterations for minimizing a function θ(x), $x \in \mathbb{R}^n$, along the column vectors $z^k$, $\|z^k\| = 1$, $k = 1, \ldots, n$, of a matrix $Z \in \mathbb{R}^{n\times n}$:
$$x^{k+1} = x^k + \beta_k z^k, \quad \beta_k = \arg\min_{\beta\ge 0}\theta(x^k + \beta z^k), \quad k = 1, \ldots, n. \tag{19}$$
Here and below, we will use the Euclidean vector norm $\|x\| = \langle x, x\rangle^{1/2}$. Let us present the result of the iterations in (19) in the form of the operator $x^{n+1} = XP(x^1, Z)$. Consider the process
$$u^{q+1} = XP(u^q, Z_q), \quad q = 0, 1, \ldots, \tag{20}$$
where matrices Zq and the initial approximation u0 are given. To estimate the convergence rate of the QNM and the convergence rate of the metric matrix approximation, we need the following assumption about the properties of the function.
Assumption 1.
Let the function be strongly convex, with a constant ρ > 0, and differentiable, and its gradient satisfy the Lipschitz condition with a constant L > 0.
We assume that the function f(x), $x \in \mathbb{R}^n$, is differentiable and strongly convex in $\mathbb{R}^n$, i.e., there exists ρ > 0 such that for all $x, y \in \mathbb{R}^n$ and α ∈ [0, 1], the following inequality holds:
$$f(\alpha x + (1 - \alpha)y) \le \alpha f(x) + (1 - \alpha)f(y) - \alpha(1 - \alpha)\rho\|x - y\|^2/2,$$
and its gradient ∇f(x) satisfies the Lipschitz condition:
$$\|\nabla f(x) - \nabla f(y)\| \le L\|x - y\| \quad \forall x, y \in \mathbb{R}^n, \quad L > 0.$$
Let us denote the minimum point of the function θ(x) by x*. The following theorem [40] establishes the convergence rate of the iteration cycle (20).
Theorem 1.
Let the function θ(x), $x \in \mathbb{R}^n$, satisfy Assumption 1, and let the matrices $Z_q$ of the process in (20) be such that the minimum eigenvalues $\mu_q$ of the matrices $(Z_q)^T Z_q$ satisfy the constraint $\mu_q \ge \mu_0 > 0$. Then, the following inequality estimates the convergence rate of the process in (20):
$$\theta(u^m) - \theta(x^*) \le [\theta(u^0) - \theta(x^*)]\exp\left(-\frac{m\rho^2\mu_0^2}{2L^2n^3}\right). \tag{22}$$
Estimate (22) enables us to formulate the following property of the process in (13).
Property 4.
Let the vectors $z^k$, $k = 0, 1, \ldots, n-1$, be given in (13), let the columns of the matrix Z be composed of the vectors $z^k/\|z^k\|$, and let the minimum eigenvalue μ of the matrix $Z^TZ$ satisfy the constraint μ ≥ μ0 > 0. Then, the following inequality estimates the convergence rate:
$$\|c^n - c^*\|^2 \le \|c^0 - c^*\|^2\exp\left(-\frac{\mu_0^2}{2n^3}\right). \tag{23}$$
Proof. 
Let us apply the results of Theorem 1 to the process in (13). The strong convexity and Lipschitz constants for the gradient of the quadratic function in (15) are the same: ρ = L = 1. Using Property 2 and the estimate in (22) for m = 1, we obtain (23). □
The property of the operators $W_l^{l+n-1}$, when the conditions of Property 4 are met, is determined by the estimate in (23), which can be represented in the following form:
$$\|r^n\|^2 = \|W_0^{n-1}(z)r^0\|^2 \le \|r^0\|^2\exp\left(-\frac{\mu_0^2}{2n^3}\right). \tag{24}$$
Thus, the Kaczmarz algorithm provides a solution to the equality in (14) for the last observation, while it implements a local learning strategy, i.e., a strategy for iteratively improving the approximation quality from the point of view of the functional (15). If the learning vectors are orthogonal, the solution is found in no more than n iterations. When n learning vectors are linearly independent, the convergence rate (23) is determined by the degree of the learning vectors’ orthogonality. The degree of the vectors’ orthogonality is expressed by the bound μ ≥ μ0 > 0 on the minimum eigenvalue of the matrix $Z^TZ$ defined in Property 4.
Using the learning relations in (8), we obtain machine learning algorithms for estimating the rows of the corresponding matrices in the form of the process in (13). Consequently, the question of analyzing the quality of algorithms for updating matrices in QNMs will consist of analyzing learning relations like (8) and the degree of orthogonality of the vectors involved in training.
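As a small illustration of how the orthogonality degree used in Property 4 can be measured (our own helper function, not part of the paper), the minimum eigenvalue of $Z^TZ$ for normalized learning vectors can be computed as follows:

```python
import numpy as np

def orthogonality_degree(vectors):
    """Minimum eigenvalue of Z^T Z, where the columns of Z are the normalized
    learning vectors; values near 1 mean nearly orthogonal vectors, values near 0
    mean nearly linearly dependent vectors and hence slow learning (Property 4)."""
    Z = np.column_stack([np.asarray(z, float) / np.linalg.norm(z) for z in vectors])
    return float(np.linalg.eigvalsh(Z.T @ Z).min())

print(orthogonality_degree(np.eye(3)))                              # 1.0
print(orthogonality_degree([[1, 0, 0], [1, 1e-3, 0], [0, 0, 1]]))   # close to 0
```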

3. Gradient Learning Algorithms for Deriving and Analyzing Matrix Updating Equations in Quasi-Newton Methods

Well-known equations for matrix updating in QNMs were found as equations that eliminate the mismatch on a new portion of training information. In machine learning theory, a quality measure is formulated, and a gradient minimization algorithm is used to minimize this measure. Our goal is to give an account of QNMs from the standpoint of machine learning theory, i.e., to formulate quality measures of training and to construct algorithms that minimize them. This approach enables us to obtain a unified method for deriving matrix updating equations and to extend the known facts and algorithms of learning theory to the analysis and improvement of QNMs.
Let us obtain formulas for updating matrices in QNMs using the quadratic model of the minimized function in (6) and learning relations in (7). For one of the learning relations in (7), we present a complete study of Properties 1–4.
Let the current approximation H of the matrix H* = A−1 be known. It is required to construct a new approximation using the learning relations in (7) for the rows of the matrix in (8):
$$H^*y = \Delta x \quad \text{or} \quad H_i^*y = \Delta x_i, \quad i = 1, 2, \ldots, n. \tag{25}$$
To evaluate each row of the matrix H* based on (25), we apply Algorithm (13). As a result, we obtain the following matrix updating equation:
$$H^+ = H_{B2}(H, \Delta x, y) = H + \frac{(\Delta x - Hy)y^T}{y^Ty}, \tag{26}$$
which is known as the 2nd Broyden method for estimating matrices when solving systems of non-linear equations [5,6].
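A minimal NumPy sketch of the update (26), assuming one-dimensional arrays for Δx and y (an illustration, not the authors' code):

```python
import numpy as np

def broyden2_update(H, dx, y):
    """Second Broyden update (26): H+ = H + (dx - H y) y^T / (y^T y)."""
    return H + np.outer(dx - H @ y, y) / (y @ y)

# On a quadratic with Hessian A, the updated matrix reproduces the training pair: H+ y = dx.
A = np.diag([1.0, 4.0, 9.0])
dx = np.array([1.0, -2.0, 0.5])
y = A @ dx
H_plus = broyden2_update(np.eye(3), dx, y)
print(np.allclose(H_plus @ y, dx))   # True
```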
Equation (26) determines a minimization step of a functional of type (15) for each row $H_i$ of the matrix H along the direction y:
$$\phi(H_i) = \|H_i - H_i^*\|^2/2, \quad i = 1, 2, \ldots, n. \tag{27}$$
The matrix residual is R = HH*. Because of the iteration of (26), the residual is transformed according to the rule
$$R^+ = R\,W(y). \tag{28}$$
Let us denote the scalar product for matrices $A, B \in \mathbb{R}^{n\times n}$ as
$$\langle A, B\rangle = \sum_{i=1}^{n}A_i^TB_i = \sum_{j=1}^{n}\sum_{i=1}^{n}A_{ij}B_{ij}.$$
We use the Frobenius norm of matrices:
$$\|H\| = \left(\sum_{i=1}^{n}\|H_i\|^2\right)^{1/2}.$$
Let us define the function
$$\Phi(H) = \sum_{i=1}^{n}\|H_i - H_i^*\|^2/2 = \|H - H^*\|^2/2, \tag{29}$$
and reformulate Properties 1–4 for the matrix updating process in (26).
Theorem 2.
Iteration (26) is equivalent to a minimization step of Φ(H) from the point H along the direction ΔH:
$$\Delta H = (\Delta x - Hy)y^T/(y^Ty), \tag{30}$$
where
$$H^+y = \Delta x, \tag{31}$$
$$\|H^+ - H\| \le \|H_{\Delta x} - H\| \tag{32}$$
for arbitrary matrices $H_{\Delta x} \in \mathbb{R}^{n\times n}$ satisfying the condition in (31).
Proof of Theorem 2.
Let us show that the condition for the minimum of the function in (27) along the direction ΔH (30) is satisfied at the point $H^+$:
$$\langle\Delta H, \nabla\Phi(H^+)\rangle = \sum_{j=1}^{n}\sum_{i=1}^{n}(\Delta x - Hy)_i\,y_j\,(H^+_{ij} - H^*_{ij}) = \sum_{i=1}^{n}(\Delta x - Hy)_i\,(H^+_i - H^*_i)y = \sum_{i=1}^{n}(\Delta x - Hy)_i(\Delta x_i - \Delta x_i) = 0. \tag{33}$$
 □
Next, we prove (32) by showing that ΔH is the normal of the hyperplane of matrices satisfying the condition in (31). To do this, we prove the orthogonality of the vector in (30) to an arbitrary vector of the hyperplane, formed as the difference of matrices belonging to the hyperplane, $V = H^1 - H^2$:
$$\langle\Delta H, H^1 - H^2\rangle = \sum_{j=1}^{n}\sum_{i=1}^{n}(\Delta x - Hy)_i\,y_j\,(H^1_{ij} - H^2_{ij}) = \sum_{i=1}^{n}(\Delta x - Hy)_i\,(H^1_i - H^2_i)y = \sum_{i=1}^{n}(\Delta x - Hy)_i(\Delta x_i - \Delta x_i) = 0.$$
Let us prove an analogue of Property 3 for (26).
Theorem 3.
Let the vectors $y^k$, $k = l, l+1, \ldots, l+n-1$, for the sequence of n iterations in (26) be mutually orthogonal. Then, the solution $H^*$ to the minimization problem in (29) will be obtained in no more than n steps of the process in (26),
$$H^{k+1} = H_{B2}(H^k, \Delta x^k, y^k), \quad k = l, l+1, \ldots, l+n-1, \tag{34}$$
for an arbitrary matrix $H^l$,
$$R^{l+n} = R^l\,[W_l^{l+n-1}(y)]^T = 0. \tag{35}$$
Proof of Theorem 3.
Equation (35) follows from (28), the orthogonality of the vectors $y^k$, and (18). □
Theorem 4.
Let the vectors $y^k$, $k = 0, 1, \ldots, n-1$, in (13) be given, let the vectors $y^k/\|y^k\|$ be the columns of a matrix P, and let the minimum eigenvalue μ of the matrix $P^TP$ satisfy the constraint μ ≥ μ0 > 0. Then, to estimate the convergence rate of the process in (34), the following inequality holds:
$$\|H^n - H^*\|^2 \le \|H^0 - H^*\|^2\exp\left(-\frac{\mu_0^2}{2n^3}\right). \tag{36}$$
Proof of Theorem 4.
According to Property 4 and conditions of the theorem, the rows of matrices will have the following estimates (23):
$$\|H_i^n - H_i^*\|^2 \le \|H_i^0 - H_i^*\|^2\exp\left(-\frac{\mu_0^2}{2n^3}\right), \quad i = 1, 2, \ldots, n.$$
A similar inequality will be true for the sums of the left and right sides. Considering the connection between the norms, $\|H^n - H^*\|^2 = \sum_{i=1}^{n}\|H_i^n - H_i^*\|^2$, we obtain the estimate in (36). □
In the case when the matrix H is symmetric, two products of the matrix H* and the vector y are known:
$$H^*y = \Delta x, \quad y^TH^* = \Delta x^T. \tag{37}$$
Applying the process in (28) twice for (37), we obtain a new process for updating the matrix residual:
$$R^+ = W(y)\,R\,W(y). \tag{38}$$
Expanding (38), we obtain the updating formula $H^+ = H_G(H, \Delta x, y)$ of J. Greenstadt [5,6], where
$$H_G(H, \Delta x, y) = H + \frac{\langle Hy - \Delta x, y\rangle\,yy^T}{\langle y, y\rangle^2} - \frac{y(Hy - \Delta x)^T + (Hy - \Delta x)y^T}{\langle y, y\rangle}. \tag{39}$$
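A sketch of the symmetric update (39) in the same illustrative NumPy style as above (the function name is our own choice):

```python
import numpy as np

def greenstadt_update(H, dx, y):
    """Symmetric update (39): the residual is multiplied by W(y) from both sides,
    so the result stays symmetric and satisfies H+ y = dx."""
    r = H @ y - dx
    yy = y @ y
    return H + (r @ y) * np.outer(y, y) / yy**2 - (np.outer(y, r) + np.outer(r, y)) / yy
```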
Let us reformulate Properties 1–4 of the matrix updating process (26) for (39).
Theorem 5.
The iteration of (39) is equivalent to a minimization step of Φ(H) from the point H along the direction ΔH:
$$\Delta H = \frac{\langle Hy - \Delta x, y\rangle\,yy^T}{(y^Ty)^2} - \frac{y(Hy - \Delta x)^T}{y^Ty} - \frac{(Hy - \Delta x)y^T}{y^Ty}. \tag{40}$$
At the same time,
$$H^+y = \Delta x, \quad y^TH^+ = \Delta x^T, \tag{41}$$
$$\|H^+ - H\| \le \|H_{\Delta x} - H\| \tag{42}$$
for arbitrary matrices $H_{\Delta x} \in \mathbb{R}^{n\times n}$ satisfying the condition in (41).
Proof of Theorem 5.
Let us show that at the point $H^+$, the condition for the minimum of the function in (27) along the direction ΔH is satisfied:
$$\langle\Delta H, \nabla\Phi(H^+)\rangle = \langle\Delta H, H^+ - H^*\rangle = 0. \tag{43}$$
In (43), let us consider the scalar product for each term of (40) separately. The third term of Expression (40) coincides with (30). The equality to zero of the scalar product for it was obtained in (33). For the first term, the calculations are similar to (33):
$$\langle\Delta H^1, \nabla\Phi(H^+)\rangle = \frac{\langle Hy - \Delta x, y\rangle}{\langle y, y\rangle^2}\sum_{j=1}^{n}\sum_{i=1}^{n}y_iy_j(H^+_{ij} - H^*_{ij}) = \frac{\langle Hy - \Delta x, y\rangle}{\langle y, y\rangle^2}\sum_{i=1}^{n}y_i(H^+_i - H^*_i)y = \frac{\langle Hy - \Delta x, y\rangle}{\langle y, y\rangle^2}\sum_{i=1}^{n}y_i(\Delta x_i - \Delta x_i) = 0.$$
Let us carry out calculations for the second term using the symmetry of matrices:
$$\langle y, y\rangle\langle\Delta H^2, \nabla\Phi(H^+)\rangle = \sum_{j=1}^{n}\sum_{i=1}^{n}y_i(Hy - \Delta x)_j(H^+_{ij} - H^*_{ij}) = \sum_{j=1}^{n}(Hy - \Delta x)_j\sum_{i=1}^{n}y_i(H^+_{ij} - H^*_{ij}) = \sum_{j=1}^{n}(Hy - \Delta x)_j(H^+_j - H^*_j)y = \sum_{j=1}^{n}(Hy - \Delta x)_j(\Delta x_j - \Delta x_j) = 0.$$
The proof of (43) is complete. Next, we prove that ΔH is the normal of the hyperplane of matrices satisfying the condition in (42). To do this, we prove that the vector ΔH is orthogonal to an arbitrary vector of the hyperplane, formed as the difference of matrices belonging to the hyperplane, $V = H^1 - H^2$, that is, $\langle\Delta H, H^1 - H^2\rangle = 0$. Since the matrices $H^1$ and $H^2$ satisfy the condition in (42), the proof is identical to the justification of the equality in (43). □
The following theorem establishes the convergence rate for a series of successive updates (39).
Theorem 6.
Let vectors yk, k = l, l + 1, …, l + n − 1, for the sequence of n iterations of (39) be mutually orthogonal. Then, the solution to the minimization problem in (29) can be obtained in no more than n steps of the process in (39),
$$H^{k+1} = H_G(H^k, \Delta x^k, y^k), \quad k = l, l+1, \ldots, l+n-1, \tag{44}$$
for an arbitrary symmetric matrix $H^l$:
$$R^{l+n} = W_l^{l+n-1}(y)\,R^l\,[W_l^{l+n-1}(y)]^T = 0. \tag{45}$$
Proof of Theorem 6.
The update in (45) can be represented as two successive multiplications by $W_l^{l+n-1}(y)$, first from the left and then from the right. For each of the updates, the estimate in (35) is valid. □
Theorem 7.
Let the vectors $y^k$, $k = 0, 1, \ldots, n-1$, be given, let the vectors $y^k/\|y^k\|$ be the columns of a matrix P, and let the minimum eigenvalue μ of the matrix $P^TP$ satisfy the constraint μ ≥ μ0 > 0. Then, to estimate the convergence rate of the process in (44), the following inequality holds:
$$\|H^n - H^*\|^2 \le \|H^0 - H^*\|^2\exp\left(-\frac{\mu_0^2}{n^3}\right). \tag{46}$$
Proof of Theorem 7.
The matrix residual is updated according to the rule
$$R^{l+n} = W_l^{l+n-1}(y)\,R^l\,[W_l^{l+n-1}(y)]^T,$$
which can be represented as two successive multiplications by $W_l^{l+n-1}(y)$, first from the left and then from the right. The estimate in (36) is valid for each of the updates, which proves (46). □

4. Symmetric Positive Definite Metric and Its Analysis

Let Function (6) be quadratic. We use the coordinate transformation
$$\hat{x} = Vx. \tag{47}$$
Let the matrix V satisfy the relation
$$V^TV = \nabla^2f(x) = A. \tag{48}$$
In the new coordinate system, the minimized function takes the following form:
$$\hat{f}(\hat{x}) = f(V^{-1}\hat{x}) = f(x). \tag{49}$$
Quadratic Function (6), considering (49), (47), and (48), takes the following form:
$$\hat{f}(\hat{x}) = \frac{1}{2}(\hat{x} - \hat{x}^*)^TV^{-T}AV^{-1}(\hat{x} - \hat{x}^*) = \frac{1}{2}\langle\hat{x} - \hat{x}^*, \hat{x} - \hat{x}^*\rangle. \tag{50}$$
Here, $\hat{x}^*$ is the minimum point of the function. According to (48) and (50), the matrix of second derivatives is the identity matrix, $\nabla^2\hat{f}(\hat{x}) = I$. Let us denote $\hat{r} = \hat{x} - \hat{x}^*$. The gradient is
$$\nabla\hat{f}(\hat{x}) = r(\hat{x}) = \hat{r} = \hat{x} - \hat{x}^*. \tag{51}$$
For the characteristics of functions f ^ ( x ^ ) and f(x), the following relationships are valid:
$$\nabla\hat{f}(\hat{x}) = V^{-T}\nabla f(x), \quad \nabla^2\hat{f}(\hat{x}) = V^{-T}\nabla^2f(x)V^{-1}, \tag{52}$$
$$\Delta\hat{x} = \hat{x}^+ - \hat{x} = Vx^+ - Vx = V\Delta x, \tag{53}$$
$$\hat{y} = \nabla\hat{f}(\hat{x}^+) - \nabla\hat{f}(\hat{x}) = V^{-T}\nabla f(x^+) - V^{-T}\nabla f(x) = V^{-T}y, \tag{54}$$
where the notation $V^{-T} = (V^T)^{-1}$ is used.
From (53), (54), and the properties of matrices V (48), the following equality holds:
$$\hat{y} = \Delta\hat{x} \equiv z. \tag{55}$$
For the symmetric matrix $\hat{H}$, two products of the matrix $\hat{H}^*$ and the vector $\hat{y}$ are known:
$$\hat{H}^*\hat{y} = \Delta\hat{x}, \quad \hat{y}^T\hat{H}^* = \Delta\hat{x}^T. \tag{56}$$
Applying the process in (28) twice to (56), we obtain a new process for updating the matrix residual $\hat{R} = \hat{H} - I$:
$$\hat{R}^+ = W(\hat{y})\,\hat{R}\,W(\hat{y}) = W(z)\,\hat{R}\,W(z). \tag{57}$$
Taking into account (55), the update in (39) takes the form
$$\hat{H}^+_{BFGS} = H_G(\hat{H}, \Delta\hat{x}, \hat{y}) = \hat{H} + \frac{\langle\hat{H}z - z, z\rangle\,zz^T}{\langle z, z\rangle^2} - \frac{z(\hat{H}z - z)^T + (\hat{H}z - z)z^T}{\langle z, z\rangle}. \tag{58}$$
Let us consider the methods in (1)–(4) in relation to the function f ^ ( x ^ ) in the new coordinate system.
$$\hat{x}^{k+1} = \hat{x}^k + \hat\beta_k\hat{s}^k, \quad \hat{s}^k = -\hat{H}^k\nabla\hat{f}(\hat{x}^k), \tag{59}$$
$$\hat\beta_k = \arg\min_{\hat\beta\ge 0}\hat{f}(\hat{x}^k + \hat\beta\hat{s}^k), \tag{60}$$
$$\Delta\hat{x}^k = \hat{x}^{k+1} - \hat{x}^k = z^k, \quad \hat{y}^k = \nabla\hat{f}(\hat{x}^{k+1}) - \nabla\hat{f}(\hat{x}^k) = z^k, \tag{61}$$
$$\hat{H}^{k+1} = H(\hat{H}^k, \Delta\hat{x}^k, \hat{y}^k). \tag{62}$$
Parameter $\hat\beta_k$ in (59) characterizes the accuracy of a one-dimensional descent. If the matrices are correlated by
$$\hat{H}^k = VH^kV^T, \quad H^k = V^{-1}\hat{H}^kV^{-T}, \tag{63}$$
and the initial conditions are
$$\hat{x}^0 = Vx^0, \quad \hat{H}^0 = VH^0V^T, \tag{64}$$
then these processes generate identical sequences $\hat{f}(\hat{x}^k) = f(x^k)$ and characteristics connected by the relations in (47) and (52)–(54). In this case, the equality $\hat\beta_k = \beta_k$ holds.
Considering the equality y ^ = Δ x ^ from (55), Equation (58) can be transformed. As a result, we obtain the BFGS equation:
$$H_{BFGS}(\hat{H}, \Delta\hat{x}, \hat{y}) = \hat{H} - \frac{\langle\Delta\hat{x} - \hat{H}\hat{y}, \hat{y}\rangle\,\Delta\hat{x}\Delta\hat{x}^T}{\langle\hat{y}, \Delta\hat{x}\rangle^2} + \frac{(\Delta\hat{x} - \hat{H}\hat{y})\Delta\hat{x}^T + \Delta\hat{x}(\Delta\hat{x} - \hat{H}\hat{y})^T}{\langle\hat{y}, \Delta\hat{x}\rangle}. \tag{65}$$
Equation (65) satisfies the requirement of (63) and has the same form in various coordinate systems. The matrix transformation equation $H_{DFP}$ has similar properties; it can be represented as a transformed formula $H_{BFGS}$ [29,30,31]:
$$H_{DFP}(\hat{H}, \Delta\hat{x}, \hat{y}) = H_{BFGS}(\hat{H}, \Delta\hat{x}, \hat{y}) - vv^T, \quad v = \langle\hat{y}, \hat{H}\hat{y}\rangle^{1/2}\left(\frac{\Delta\hat{x}}{\langle\Delta\hat{x}, \hat{y}\rangle} - \frac{\hat{H}\hat{y}}{\langle\hat{y}, \hat{H}\hat{y}\rangle}\right). \tag{66}$$
Taking into account (55) and (58), we obtain the following expression in the new coordinate system:
$$\hat{H}_{DFP} = \hat{H}_{BFGS} - \hat{v}\hat{v}^T, \quad \hat{v} = \langle z, \hat{H}z\rangle^{1/2}\left(\frac{z}{\langle z, z\rangle} - \frac{\hat{H}z}{\langle z, \hat{H}z\rangle}\right). \tag{67}$$
The form of the matrices in (65) and (66) does not change depending on the coordinate system. Consequently, the form of the processes in (1)–(4) and (59)–(62) is completely identical in different coordinate systems when using Formulas (65) and (67). Thus, for further studies of the properties of QNMs on quadratic functions, we can use Equations (58) and (67) in the coordinate system specified by the transformation in (47).
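For reference, the BFGS update (65) and the DFP update (66) can be written as the following short NumPy sketch (an illustrative transcription of the formulas, not the authors' implementation; the function names are our own):

```python
import numpy as np

def bfgs_update(H, dx, y):
    """Inverse-Hessian BFGS update in the form of Equation (65)."""
    dy = dx @ y
    w = dx - H @ y
    return H - ((w @ y) / dy**2) * np.outer(dx, dx) + (np.outer(w, dx) + np.outer(dx, w)) / dy

def dfp_update(H, dx, y):
    """DFP update as BFGS minus the rank-one term v v^T, Equations (66)-(67)."""
    Hy = H @ y
    v = np.sqrt(y @ Hy) * (dx / (dx @ y) - Hy / (y @ Hy))
    return bfgs_update(H, dx, y) - np.outer(v, v)

# Both updates reproduce the training pair on a quadratic: H+ y = dx.
A = np.diag([1.0, 10.0, 100.0])
dx = np.ones(3)
y = A @ dx
for upd in (bfgs_update, dfp_update):
    print(np.allclose(upd(np.eye(3), dx, y) @ y, dx))   # True, True
```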
Within the iteration of the processes in (59)–(62) for a quadratic function with an identity matrix of second derivatives, the residual can be represented in the form of components
$$\hat{r}^k \equiv r(\hat{x}^k) = \hat{r}^k_z + \hat{r}^k_{\perp z}, \tag{68}$$
where $\hat{r}^k_z$ is the component along the vector $z^k$ (or, which is the same, along $\hat{s}^k$), and $\hat{r}^k_{\perp z}$ is the component orthogonal to $z^k$. With an inexact one-dimensional descent in (59), the component $\hat{r}^k_z$ decreases but does not disappear completely. For the convenience of theoretical studies, the residual transformation in Equation (68) in this case can be represented by introducing a parameter $\gamma_k \in (0, 2)$ instead of $\hat\beta_k$, characterizing the degree of descent accuracy:
$$\hat{r}^{k+1} = W(z^k, \gamma_k)\hat{r}^k = (1 - \gamma_k)\hat{r}^k_z + \hat{r}^k_{\perp z}, \quad W(z, \gamma) = I - \gamma\frac{zz^T}{z^Tz}, \quad \gamma_k \in (0, 2). \tag{69}$$
Here, at arbitrary γk ∈ (0, 2), the objective function decreases. With an inexact one-dimensional descent, a certain value γk ∈ (0, 2) will be attained, at which the new value of the function becomes smaller.
The restriction on the one-dimensional search in (59), imposed on γk in (69), ensures a reduction in the objective function
$$\hat{f}(\hat{x}^{k+1}) = \|\hat{r}^{k+1}\|^2/2 = \|W(z^k, \gamma_k)\hat{r}^k\|^2/2 = \left[(1 - \gamma_k)^2\|\hat{r}^k_z\|^2 + \|\hat{r}^k_{\perp z}\|^2\right]/2 < \left[\|\hat{r}^k_z\|^2 + \|\hat{r}^k_{\perp z}\|^2\right]/2 = \hat{f}(\hat{x}^k).$$
As a result of the iterations in (59)–(62) with (65), and according to (57), the matrix residual $\hat{R}^k = \hat{H}^k - \hat{H}^* = \hat{H}^k - I$ is transformed according to the rule
$$\hat{R}^{k+1} = W(z^k)\,\hat{R}^k\,W(z^k). \tag{70}$$
Therefore, in the new coordinate system, one and the same system of vectors $z^k$ is used in the QNM iterations both for minimizing the function and for minimizing the residual functional (29) for the matrices. With orthogonality of the vectors $z^k$ and an exact one-dimensional search, the solution $\hat{r}^k = 0$ will be obtained in no more than n iterations. By virtue of the equality $\langle z^i, z^j\rangle = \langle A\Delta x^i, \Delta x^j\rangle$, the orthogonality of the vectors $z^k$ in the chosen coordinate system is equivalent to the conjugacy of the vectors $\Delta x^k$.
Since the QNM iteration has an identical form in different coordinate systems, we further denote an iteration of the processes (59)–(62) and (1)–(4), taking into account the accuracy of the one-dimensional descent (introduced in (69) by the parameter $\gamma_k \in (0, 2)$), by the operator
$$QN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k). \tag{71}$$
To simplify the notation in further studies of quasi-Newton methods on quadratic functions, without a loss of generality, we use an iteration of the method in (71) adjusted to minimize the function
$$f(x) = \frac{1}{2}\langle x - x^*, x - x^*\rangle, \tag{71a}$$
which allows us, without transforming the coordinate system (47), to use all associated relations for the processes in (59)–(62) with the function in (50) for studying the process in (71), omitting the hats above the variables in the notation.
Let us note some of the properties of the QNM.
Theorem 8.
Let $H^k > 0$ and the iteration of (71) be carried out with the matrix transformation equations $H_{BFGS}$ and $H_{DFP}$ (67). Then, the vector $z^k$ is an eigenvector of the matrices $H^{k+1}_{BFGS}$, $H^{k+1}_{DFP}$, $R^{k+1}_{BFGS}$, and $R^{k+1}_{DFP}$:
$$R^{k+1}_{BFGS}z^k = 0, \quad H^{k+1}_{BFGS}z^k = z^k, \tag{72}$$
$$R^{k+1}_{DFP}z^k = 0, \quad H^{k+1}_{DFP}z^k = z^k. \tag{73}$$
Proof of Theorem 8.
The first of the equalities in (72) follows from (70). The second of the equalities in (72) follows from this fact and the definition of the matrix residual.
By direct verification, based on (67), we establish that the vectors $z^k$ and $\hat{v}$ are orthogonal. Therefore, the additional term $\hat{v}\hat{v}^T$ in Equation (67) does not affect the multiplication of the vector $z^k$ by the matrix, which together with (72) proves (73). □
As a consequence of Theorem 8, the dimension of the space being minimized is reduced by one in the case of an exact one-dimensional descent, as will be shown below. Section 5 justifies the advantages of the BFGS equation (65) over the DFP equation (66) for matrix transformation.

5. Qualitative Analysis of the Advantages of the BFGS Equation over the DFP Equation

The effectiveness of the learning algorithm is determined by the degree of orthogonality of the learning vectors in the operator factors $W_{k-m}^{k}(y)$. In the new coordinate system, the transformation in (70) is determined by the factors $W_{k-m}^{k}(z)$ in the residual expressions. Therefore, to analyze the orthogonality degree of the system of vectors z, it is necessary to take into account the way they are formed. Let us show that the vectors $z^k$ in (69) and (70) generated by the BFGS equation have a higher degree of orthogonality than those generated by the DFP equation. To get rid of a large number of indices, consider the iteration of the QNM (71) in the form
$$QN(\hat{f}, \hat{x}, \hat{H}, \hat{x}^+, \hat{H}^+, \gamma). \tag{74}$$
Theorem 9.
Let $\hat{H} > 0$ and the iteration of (74) be carried out with the matrix updating equations $\hat{H}_{BFGS}$ (58) and $\hat{H}_{DFP}$ (67), and
$$\hat{v} \ne 0. \tag{75}$$
Then, the following statements are valid.
  • 1. The descent directions for the next iteration are of the form
    $$\hat{s}^+_{BFGS} = -\hat{H}^+_{BFGS}\hat{r}^+ = -(1 - \gamma_k)\hat{r}_z + \langle z, \hat{H}z\rangle^{1/2}\hat{v}, \tag{76}$$
    $$\hat{s}^+_{DFP} = -\hat{H}^+_{DFP}\hat{r}^+ = -(1 - \gamma_k)\hat{r}_z + q\langle z, \hat{H}z\rangle^{1/2}\hat{v}, \tag{77}$$
    where
    $$0 < q = \frac{\langle\hat{H}\hat{r}, \hat{H}\hat{r}\rangle^2}{\langle\hat{r}, \hat{H}\hat{r}\rangle\langle\hat{H}\hat{H}\hat{r}, \hat{H}\hat{r}\rangle} < 1. \tag{78}$$
  • 2. With respect to the cosine of the angle between adjacent directions of the descent, we have the following estimate:
    $$\frac{\langle\hat{s}^+_{BFGS}, z\rangle^2}{\langle z, z\rangle\langle\hat{s}^+_{BFGS}, \hat{s}^+_{BFGS}\rangle} \le \frac{\langle\hat{s}^+_{DFP}, z\rangle^2}{\langle z, z\rangle\langle\hat{s}^+_{DFP}, \hat{s}^+_{DFP}\rangle}. \tag{79}$$
  • 3. In the subspace of vectors orthogonal to z, the trace of the matrix $\hat{H}^+_{BFGS}$ does not change,
    $$\mathrm{sp}_z(\hat{H}^+_{BFGS}) = \mathrm{sp}_z(\hat{H}), \tag{80}$$
    and the trace of the matrix $\hat{H}^+_{DFP}$ decreases,
    $$\mathrm{sp}_z(\hat{H}^+_{DFP}) = \mathrm{sp}_z(\hat{H}) - \frac{\langle\hat{v}, \hat{H}z\rangle^2}{\langle\hat{v}, \hat{v}\rangle\langle z, \hat{H}z\rangle} < \mathrm{sp}_z(\hat{H}). \tag{81}$$
Proof of Theorem 9.
We represent the residual, similarly to (69), in the following form:
$$\hat{r} = \hat{r}_z + \hat{r}_{\perp z}, \quad \hat{r}_{\perp z} \ne 0. \tag{82}$$
After performing the iteration of (74), the residual takes the form
$$\hat{r}^+ = W(z, \gamma)\hat{r} = (1 - \gamma)\hat{r}_z + \hat{r}_{\perp z}. \tag{83}$$
According to (83), the component $\hat{r}_{\perp z}$ of $\hat{r}^+$ does not depend on the accuracy of the one-dimensional search. Therefore, we first find the new descent directions in (76) and (77) under the condition of an exact one-dimensional search, that is, with $\hat{r}^+ = \hat{r}_{\perp z}$.
Considering the gradient expression in (51), the direction of minimization in the iteration of (74) is $\hat{s} = -\hat{H}\hat{r}$. Based on that result, considering (55) and the equality $\langle\hat{r}^+, z\rangle = 0$ following from the condition of exact one-dimensional minimization (60), we obtain
$$\hat{r}^+ = W(z)\hat{r} = \hat{r} + z = \hat{r} - \hat{H}\hat{r}\frac{\langle\hat{r}, \hat{H}\hat{r}\rangle}{\langle\hat{H}\hat{r}, \hat{H}\hat{r}\rangle}. \tag{84}$$
This implies
$$z = -\hat{H}\hat{r}\frac{\langle\hat{r}, \hat{H}\hat{r}\rangle}{\langle\hat{H}\hat{r}, \hat{H}\hat{r}\rangle}, \tag{85}$$
$$\hat{H}\hat{r} = -z\frac{\langle\hat{H}\hat{r}, \hat{H}\hat{r}\rangle}{\langle\hat{r}, \hat{H}\hat{r}\rangle} = -z\frac{\langle\hat{H}\hat{r}, z\rangle}{\langle\hat{r}, z\rangle}. \tag{86}$$
From (84), taking into account the orthogonality of the vectors $\hat{r}^+$ and z, we obtain the equality
$$\langle\hat{r}, z\rangle = -\langle z, z\rangle. \tag{87}$$
Let us find the expression $\hat{H}^+\hat{r}^+$ necessary to form the descent direction $\hat{s}^+ = -\hat{H}^+\hat{r}^+$ in the next iteration. Considering the orthogonality of the vectors $\hat{r}^+$ and z and using the BFGS matrix transformation formula (58), we obtain
$$\hat{H}^+\hat{r}^+ = \hat{H}\hat{r}^+ + \frac{z\langle z - \hat{H}z, \hat{r}^+\rangle}{\langle z, z\rangle} = \hat{H}\hat{r}^+ - \frac{z\langle\hat{H}z, \hat{r}^+\rangle}{\langle z, z\rangle} = \hat{H}\hat{r} + \hat{H}z - \frac{z\langle\hat{H}z, \hat{r}^+\rangle}{\langle z, z\rangle} = \hat{H}\hat{r} - \frac{z\langle\hat{H}z, \hat{r}\rangle}{\langle z, z\rangle} + \hat{H}z - \frac{z\langle\hat{H}z, z\rangle}{\langle z, z\rangle}. \tag{88}$$
Transformation of the equality in (86) based on (87) leads to
$$\hat{H}\hat{r} = -z\frac{\langle\hat{H}\hat{r}, z\rangle}{\langle\hat{r}, z\rangle} = z\frac{\langle\hat{H}\hat{r}, z\rangle}{\langle z, z\rangle} = z\frac{\langle\hat{H}z, \hat{r}\rangle}{\langle z, z\rangle}. \tag{89}$$
Making the replacement (89) in the last expression from (88), we find
$$\hat{H}^+\hat{r}^+ = \hat{H}z - z\frac{\langle\hat{H}z, z\rangle}{\langle z, z\rangle}. \tag{90}$$
According to (90), the new descent vector can be represented using the expression for $\hat{v}$ from (67):
$$\hat{s}^+ = -\hat{H}^+\hat{r}^+ = z\frac{\langle\hat{H}z, z\rangle}{\langle z, z\rangle} - \hat{H}z = \langle\hat{H}z, z\rangle^{1/2}\hat{v}. \tag{91}$$
Since the component $\hat{r}_{\perp z}$ in (83) does not depend on the accuracy of the one-dimensional search, Expression (91) determines its contribution to the direction of descent in (76). Finally, the property in (72), together with the representation of the residual $\hat{r}$ in (82), proves (76).
The condition in (75), according to (91), prevents the completion of the minimization process. If $\hat{v} = 0$, then as a result of exact one-dimensional minimization we obtain $\hat{s}^+ = -\hat{H}^+\hat{r}^+ = \langle\hat{H}z, z\rangle^{1/2}\hat{v} = 0$, which, taking into account $\hat{H} > 0$, means $\hat{r}^+ = 0$. As before, using (67), we find a new descent direction for the DFP method, assuming that the one-dimensional search is exact:
$$\hat{s}^+_{DFP} = -\hat{H}^+_{DFP}\hat{r}^+ = -\hat{H}^+_{BFGS}\hat{r}^+ + \hat{v}\langle\hat{v}, \hat{r}^+\rangle = \hat{s}^+_{BFGS} + \hat{v}\langle\hat{v}, \hat{r}^+\rangle. \tag{92}$$
The last term in (92), taking into account (91) and the orthogonality of the vectors $\hat{r}^+$ and z, can be represented in the form
$$\hat{v}\langle\hat{v}, \hat{r}^+\rangle = \langle z, \hat{H}z\rangle\left\langle\frac{z}{\langle z, z\rangle} - \frac{\hat{H}z}{\langle z, \hat{H}z\rangle},\ \hat{r}^+\right\rangle\left(\frac{z}{\langle z, z\rangle} - \frac{\hat{H}z}{\langle z, \hat{H}z\rangle}\right) = -\left\langle\frac{\hat{H}z}{\langle z, \hat{H}z\rangle},\ \hat{r}^+\right\rangle\hat{s}^+_{BFGS} = -\left\langle\frac{\hat{H}z}{\langle z, \hat{H}z\rangle},\ \hat{r} + z\right\rangle\hat{s}^+_{BFGS} = -\left(\frac{\langle\hat{H}z, \hat{r}\rangle}{\langle z, \hat{H}z\rangle} + 1\right)\hat{s}^+_{BFGS}. \tag{93}$$
Let us transform the scalar value as follows:
$$q = -\frac{\langle\hat{H}z, \hat{r}\rangle}{\langle\hat{H}z, z\rangle} = -\frac{\langle\hat{H}\hat{H}\hat{r}, \hat{r}\rangle}{\langle\hat{H}\hat{H}\hat{r}, z\rangle} = \frac{\langle\hat{H}\hat{H}\hat{r}, \hat{r}\rangle^2}{\langle\hat{H}\hat{H}\hat{r}, \hat{H}\hat{r}\rangle\langle\hat{H}\hat{r}, \hat{r}\rangle}. \tag{94}$$
Based on (92), together with (93) and (94), we obtain the expression
$$\hat{s}^+_{DFP} = -\hat{H}^+_{DFP}\hat{r}^+ = \hat{s}^+_{BFGS} + (q - 1)\hat{s}^+_{BFGS} = q\,\hat{s}^+_{BFGS}.$$
And finally, the last expression, using the property of (73) together with the representation of the residual, considering the accuracy of the one-dimensional descent (82), proves (77).
Since $\hat{H} > 0$, the left inequality in (78) holds. We prove the right inequality by contradiction. Let us denote by $\hat{H}^L > 0$ (L > 0) the matrix with the eigenvectors of the matrix $\hat{H}$ and with eigenvalues equal to powers of the corresponding eigenvalues of $\hat{H}$, i.e., $\lambda_i(\hat{H}^L) = (\lambda_i(\hat{H}))^L$, $i = 1, 2, \ldots, n$. Let $u = \hat{H}^{1/2}\hat{r}$. Then,
$$q = \frac{\langle\hat{H}u, u\rangle^2}{\langle\hat{H}u, \hat{H}u\rangle\langle u, u\rangle}.$$
Consequently, if q = 1, the equality $\hat{H}u = \rho u$ holds. Therefore, u is an eigenvector of the matrix $\hat{H}$, and hence all matrices $\hat{H}^L$ also have u as an eigenvector. Due to this fact and the equality $u = \hat{H}^{1/2}\hat{r}$, the vector $\hat{r}$ is also an eigenvector, and $u = \hat{H}^{1/2}\hat{r} = \rho^{1/2}\hat{r}$, where ρ is the corresponding eigenvalue of the matrix $\hat{H}$. In this case, considering the representation in (85) of the vector z, the vector $\hat{v}$, according to its representation in (67), is zero, which cannot be true according to the condition in (75). Therefore, the right inequality in (78) also holds.
Due to the orthogonality of the vectors $\hat{v}$ and z, and according to (76) and (77), the numerators in (79) are the same, while for the denominators, taking into account (78), the inequality $\langle\hat{s}^+_{DFP}, \hat{s}^+_{DFP}\rangle < \langle\hat{s}^+_{BFGS}, \hat{s}^+_{BFGS}\rangle$ holds, which proves (79). In an exact one-dimensional search, equality holds in (79) since both numerators in (79) are zero.
Let us justify point 3 of the theorem. In accordance with the notation of equations H B F G S (58) and H D F P (67), we introduce an orthogonal coordinate system in which the first two orthonormal vectors are determined by the following equations:
$$e_1 = z/\|z\|, \quad e_2 = p/\|p\|, \quad p = \hat{H}z - z\frac{\langle z, \hat{H}z\rangle}{\langle z, z\rangle}, \tag{95}$$
where the vectors p and z are orthogonal and $\hat{v} = -\langle z, \hat{H}z\rangle^{-1/2}p$. In such a coordinate system, these vectors are defined by
$$z^T = (\|z\|, 0, \ldots, 0), \quad p^T = (0, \|p\|, 0, \ldots, 0). \tag{96}$$
Let us consider the form of the matrix $\hat{H}$ in the selected coordinate system. Let us determine the form of the vector p based on its representation in (95). Taking into account $\langle z, \hat{H}z\rangle/\langle z, z\rangle = \hat{H}_{11}$, the components of the two terms forming p have the form
$$(\hat{H}z)^T = \|z\|(\hat{H}_{11}, \hat{H}_{21}, \hat{H}_{31}, \ldots, \hat{H}_{n1}), \quad z^T\frac{\langle z, \hat{H}z\rangle}{\langle z, z\rangle} = \|z\|(\hat{H}_{11}, 0, \ldots, 0).$$
Hence, $p^T = \|z\|(0, \hat{H}_{21}, \hat{H}_{31}, \ldots, \hat{H}_{n1})$. Comparing the last expression with the expression in (96), we conclude that in the chosen coordinate system, the first column $\hat{H}_1$ of the matrix $\hat{H}$ has the following form:
$$\hat{H}_1 = (\hat{H}_{11}, \hat{H}_{21}, 0, \ldots, 0)^T. \tag{97}$$
From (97) and (96), it follows that
$$p^T = \|z\|(0, \hat{H}_{21}, 0, \ldots, 0), \quad \hat{v} = -\langle z, \hat{H}z\rangle^{-1/2}p = -(0,\ \hat{H}_{21}/\hat{H}_{11}^{1/2},\ 0, \ldots, 0)^T, \tag{98}$$
and the original matrix will have the form
$$\hat{H} = \begin{pmatrix}\hat{H}_{11} & \hat{H}_{12} & 0 & \cdots & 0\\ \hat{H}_{21} & \hat{H}_{22} & \hat{H}_{23} & \cdots & \hat{H}_{2n}\\ 0 & \hat{H}_{32} & \hat{H}_{33} & \cdots & \hat{H}_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & \hat{H}_{n2} & \hat{H}_{n3} & \cdots & \hat{H}_{nn}\end{pmatrix}. \tag{99}$$
When correcting matrices with formulas BFGS (58) and DFP (67), changes will occur only in the space of the first two variables, determined by the unit vectors in (95). As a result of the BFGS transformation in (58), we obtain the following two-dimensional matrix:
$$\hat{H}^+_{2\times2\,BFGS} = \begin{pmatrix}\hat{H}_{11} & \hat{H}_{12}\\ \hat{H}_{12} & \hat{H}_{22}\end{pmatrix} + \begin{pmatrix}\hat{H}_{11} - 1 & 0\\ 0 & 0\end{pmatrix} - \begin{pmatrix}\hat{H}_{11} - 1 & \hat{H}_{12}\\ 0 & 0\end{pmatrix} - \begin{pmatrix}\hat{H}_{11} - 1 & 0\\ \hat{H}_{12} & 0\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & \hat{H}_{22}\end{pmatrix}. \tag{100}$$
Based on the relationship of matrices expressed in (67), using (98), we obtain the result of the transformation according to the DFP equation in (67):
$$\hat{H}^+_{2\times2\,DFP} = \hat{H}^+_{2\times2\,BFGS} - \hat{v}\hat{v}^T = \begin{pmatrix}1 & 0\\ 0 & \hat{H}_{22} - \hat{H}_{12}^2/\hat{H}_{11}\end{pmatrix}. \tag{101}$$
Thus, the resulting two-dimensional matrices have the following form:
$$\hat{H}^+_{2\times2\,BFGS} = \begin{pmatrix}1 & 0\\ 0 & \hat{H}_{22}\end{pmatrix}, \quad \hat{H}^+_{2\times2\,DFP} = \begin{pmatrix}1 & 0\\ 0 & \hat{H}_{22} - \hat{H}_{12}^2/\hat{H}_{11}\end{pmatrix}. \tag{102}$$
The corresponding complete matrices are presented below:
$$\hat{H}^+_{BFGS} = \begin{pmatrix}1 & 0 & 0 & \cdots & 0\\ 0 & \hat{H}_{22} & \hat{H}_{23} & \cdots & \hat{H}_{2n}\\ 0 & \hat{H}_{32} & \hat{H}_{33} & \cdots & \hat{H}_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & \hat{H}_{n2} & \hat{H}_{n3} & \cdots & \hat{H}_{nn}\end{pmatrix}, \tag{103}$$
$$\hat{H}^+_{DFP} = \begin{pmatrix}1 & 0 & 0 & \cdots & 0\\ 0 & \hat{H}_{22} - \hat{H}_{12}^2/\hat{H}_{11} & \hat{H}_{23} & \cdots & \hat{H}_{2n}\\ 0 & \hat{H}_{32} & \hat{H}_{33} & \cdots & \hat{H}_{3n}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & \hat{H}_{n2} & \hat{H}_{n3} & \cdots & \hat{H}_{nn}\end{pmatrix}. \tag{104}$$
Due to the condition in (75), it follows from Expression (98) for $\hat{v}$ that $\hat{H}_{21} \ne 0$. Consequently, the trace of the matrix $\hat{H}^+_{DFP}$, according to (102) and (104), will decrease by $\hat{H}_{12}^2/\hat{H}_{11}$. The last expression can be transformed considering the definition of the coordinate system in (96). As a result, we obtain (81). From (103), we obtain (80). □
Regarding the results of Theorem 9, we can draw the following conclusions.
  • With an inexact one-dimensional descent in the DFP method, the successive descent directions are less orthogonal than in the BFGS method (79).
  • The trace of matrix H ^ in the DFP method in the unexplored space decreases (81). This makes it difficult to enter a new subspace during subsequent minimization. Moreover, in the case of an exact one-dimensional descent, in the next step, this decrease is restored; however, a new one appears.
  • Theorem 9 also shows that in the case of an exact one-dimensional search, the minimization space on quadratic functions is reduced by one.
Due to the limited computational accuracy on ill-conditioned problems (i.e., problems with a high condition number), the noted effects can significantly worsen the convergence of the DFP method.
In conjugate gradient methods [39], if the accuracy of the one-dimensional descent is violated, the sequence of vectors ceases to be conjugated. In QNMs, due to the reduction in the minimization subspace by one during exact one-dimensional descent, the effect of reducing the minimization space accumulates. In Section 6, we look at methods for replenishing the space excluded from the minimization process.
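The qualitative conclusions of this section can be checked numerically. The following self-contained sketch (our own illustration under simplifying assumptions, not the authors' experimental code) runs the iterations (1)–(4) on an ill-conditioned quadratic with a deliberately damped step, which plays the role of the inexact-descent parameter γ in (69), and compares the BFGS and DFP updates:

```python
import numpy as np

def bfgs(H, dx, y):
    dy = dx @ y
    w = dx - H @ y
    return H - ((w @ y) / dy**2) * np.outer(dx, dx) + (np.outer(w, dx) + np.outer(dx, w)) / dy

def dfp(H, dx, y):
    Hy = H @ y
    v = np.sqrt(y @ Hy) * (dx / (dx @ y) - Hy / (y @ Hy))
    return bfgs(H, dx, y) - np.outer(v, v)

def run_qnm(update, A, x0, iters=300, gamma=0.7):
    """QNM on f(x) = 0.5 x^T A x; gamma < 1 makes the one-dimensional descent inexact."""
    x, H = x0.astype(float), np.eye(len(x0))
    for _ in range(iters):
        g = A @ x
        if g @ g < 1e-30:
            break
        s = -H @ g
        beta = -(g @ s) / (s @ A @ s)    # exact step for the quadratic
        x_new = x + gamma * beta * s     # damped (inexact) step
        dx = x_new - x
        y = A @ dx
        H, x = update(H, dx, y), x_new
    return 0.5 * x @ A @ x

A = np.diag(np.arange(1.0, 51.0) ** 3)   # condition number 50^3
x0 = np.ones(50)
print("BFGS final value:", run_qnm(bfgs, A, x0))
print("DFP  final value:", run_qnm(dfp, A, x0))
```

According to Theorem 9, the DFP run is expected to be the more sensitive of the two to the damping of the step.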

6. Methods for Reducing the Minimization Space of Quasi-Newton Methods on Quadratic Functions

We will assume that the quadratic function has the form expressed in (71a):
$$f(x) = \frac{1}{2}\langle x - x^*, x - x^*\rangle.$$
For the matrices $H^{k+1}$ and $R^{k+1}$ obtained using the iteration of (71), $QN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k)$, the relations in (72) and (73) hold:
$$R^{k+1}z^k = 0, \quad H^{k+1}z^k = z^k.$$
The vector $z^k$ is an eigenvector of the matrices $H^{k+1}$ and $R^{k+1}$ with unit and zero eigenvalues, respectively. Let us consider ways to increase the dimension of the subspace on which the quasi-Newton relations hold.
Let us denote by $H \in I_m$ a matrix $H > 0$ that has m eigenvectors with unit eigenvalues; the corresponding matrix $R = H - I$, with the same eigenvectors and zero eigenvalues, will be denoted by $R \in O_m$. Let us denote by $Q_m$ the subspace of dimension m spanned by the system of eigenvectors with unit eigenvalues of the matrix $H \in I_m$, and its complement by $D_m = \mathbb{R}^n \setminus Q_m$.
An arbitrary orthonormal system of m vectors $e_1, \ldots, e_m$ of the subspace $Q_m$ is a system of eigenvectors of the matrices $H \in I_m$ and $R \in O_m$:
$$He_i = e_i, \quad Re_i = 0, \quad i = 1, \ldots, m.$$
It follows that an arbitrary vector, which is a linear combination of vectors ei, will satisfy the quasi-Newton relations.
Lemma 1.
Consider the matrix $H \in I_m$ and the vectors
$$r = r_Q + r_D, \quad r_Q \in Q_m, \quad r_D \in D_m. \tag{107}$$
Then,
$$Hr = Hr_Q + Hr_D, \quad Hr_Q = r_Q \in Q_m, \quad Hr_D \in D_m. \tag{108}$$
Proof of Lemma 1.
The system of m eigenvectors of the matrix $H \in I_m$ is contained in the set $Q_m$. Due to the orthogonality of the eigenvectors, the remaining eigenvectors of the matrix $H \in I_m$ are contained in the set $D_m$. Therefore, the operation of multiplying the vectors in (107) by the matrix in (108) does not take them beyond their subspace. In this case, for the vector $r_Q$, the equality $Hr_Q = r_Q \in Q_m$ holds, which follows from the definition of the subspace $Q_m$. □
Lemma 2.
Let $H^k > 0$, $H^k \in I_m$, $m < n$, $r_Q^k = 0$, $r_D^k \ne 0$, and let the iteration $QN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k)$ be completed. Then,
$$\text{if } \gamma_k = 1, \text{ then } H^{k+1} \in I_{m+1} \text{ and } r_Q^{k+1} = 0; \tag{109}$$
$$\text{if } \gamma_k \ne 1, \text{ then } H^{k+1} \in I_{m+1} \text{ and } r_Q^{k+1} \ne 0. \tag{110}$$
Proof of Lemma 2.
The descent direction, taking into account (51), has the form $s^k = -H^k\nabla f(x^k) = -H^kr^k = -H^kr_D^k$. Based on Lemma 1, it follows that $H^kr_D^k \in D_m$. As follows from Theorem 8, a new eigenvector expressed in (72) and (73) with a unit eigenvalue appears in the subspace $D_m$, regardless of the accuracy of the one-dimensional descent, which proves (109), taking into account the accuracy of the one-dimensional search. With an inexact descent, part of the residual remains along the vector $z^k$, which proves (110). □
Lemma 3.
Let $H^k > 0$, $H^k \in I_m$, $m \le n$, $r_Q^k \ne 0$, $r_D^k \ne 0$, and let the iteration $QN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k)$ be completed. Then, it follows that
$$\text{if } \gamma_k = 1, \text{ then } H^{k+1} \in I_m \text{ and } r_Q^{k+1} = 0; \tag{111}$$
$$\text{if } \gamma_k \ne 1, \text{ then } H^{k+1} \in I_m \text{ and } r_Q^{k+1} \ne 0. \tag{112}$$
Proof of Lemma 3.
Since $r_Q^k \ne 0$, we take as the orthogonal system of eigenvectors in $Q_m$ a system in which one of the eigenvectors is the vector $r_Q^k$. From the remaining eigenvectors, we form a subspace $Q_{m-1}$ in which there is no residual. Applying to $Q_{m-1}$ the results of Lemma 2 under the condition $H^k \in I_{m-1}$, we obtain (111) and (112). □
By alternating operations with an exact and inexact one-dimensional descent, it is possible to obtain finite convergence on quadratic functions of QNMs.
Theorem 10.
Let $H^k > 0$, $H^k \in I_m$, $r_Q^k \ne 0$, $m < n - 1$, and let the iterations be completed as follows:
$$QN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k), \quad \gamma_k = 1, \tag{113}$$
$$QN(x^{k+1}, H^{k+1}, x^{k+2}, H^{k+2}, \gamma_{k+1}), \quad \gamma_{k+1} \ne 1. \tag{114}$$
Then,
$$H^{k+2} \in I_{m+1}, \quad r_Q^{k+2} \ne 0. \tag{115}$$
Proof of Theorem 10.
For the iteration of (113), we apply the result of Lemma 3 (111), and for the iteration of (114), we apply the result of Lemma 2 (110). As a result, we obtain (115). □
Theorem 10 says that individual iterations with an exact one-dimensional descent make it possible to increase by one the dimension of the space where the quasi-Newton relation is satisfied. This means that after a finite number of such iterations, the matrix Hk = I will be obtained.
Let us consider another way of increasing the dimension of the subspace where the quasi-Newton relation holds. It consists of performing, after an iteration of the QNM, an additional descent iteration along the orthogonal vector $v^k$ defined in (67), which, according to (91), coincides in the case of an exact one-dimensional descent, up to a scalar factor, with the descent direction $s^{k+1} = \langle H^kz^k, z^k\rangle^{1/2}v^k$ of the BFGS method:
$$QN(x^k, H^k, x^{k+1/2}, H^{k+1/2}, \gamma_k), \quad \gamma_k \in (0, 2), \tag{116}$$
$$x^{k+1} = x^{k+1/2} + \beta_{k+1/2}v^k, \quad \gamma_{k+1/2} \in (0, 2), \tag{117}$$
$$v^k = \langle z^k, H^kz^k\rangle^{1/2}\left(\frac{z^k}{\langle z^k, z^k\rangle} - \frac{H^kz^k}{\langle z^k, H^kz^k\rangle}\right), \tag{118}$$
$$H^{k+1} = H(H^{k+1/2}, \Delta x^{k+1/2}, y^{k+1/2}). \tag{119}$$
Let us denote the iterations in (116)–(119) by
$$VQN(x^k, H^k, x^{k+1}, H^{k+1}, \gamma_k, \gamma_{k+1/2}), \quad \gamma_k \in (0, 2), \quad \gamma_{k+1/2} \in (0, 2). \tag{120}$$
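A possible NumPy sketch of one iteration (116)–(119) of the operator (120) is given below; the callable arguments, the positivity guard on the second training pair, and the function name are our own assumptions rather than part of the method's specification:

```python
import numpy as np

def vqn_step(x, H, grad, update, line_search):
    """One iteration of (116)-(119): a QN step followed by a descent along v^k of (118)."""
    # (116): ordinary quasi-Newton step producing x^{k+1/2}, H^{k+1/2}
    g = grad(x)
    s = -H @ g
    x_half = x + line_search(x, s) * s
    z = x_half - x
    H_half = update(H, z, grad(x_half) - g)
    # (118): orthogonalization vector v^k built from the pre-update metric H^k
    Hz = H @ z
    v = np.sqrt(z @ Hz) * (z / (z @ z) - Hz / (z @ Hz))
    # (117): additional one-dimensional descent along v^k
    x_new = x_half + line_search(x_half, v) * v
    # (119): update the metric with the pair generated by the extra step
    dx2 = x_new - x_half
    y2 = grad(x_new) - grad(x_half)
    if dx2 @ y2 > 0:          # guard against a degenerate pair (our own safeguard)
        H_new = update(H_half, dx2, y2)
    else:
        H_new = H_half
    return x_new, H_new
```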
Lemma 4.
Let $H^k > 0$, $H^k \in I_m$, $r_Q^k \ne 0$, $r_D^k \ne 0$, $m \le n - 1$, and let the iteration of (120) be completed. Then,
$$H^{k+1} \in I_{m+1}, \quad r_Q^{k+1} \ne 0. \tag{121}$$
Proof of Lemma 4.
For the iteration of (116), as in the proof of Lemma 3, since $r_Q^k \ne 0$, we take as the orthogonal system of eigenvectors in $Q_m$ a system in which one of the eigenvectors is the vector $r_Q^k$. From the remaining eigenvectors, we form a subspace $Q_{m-1}$ in which there is no residual, and for this subspace, $H^k \in I_{m-1}$ holds. As a result of (116), according to the results of Theorem 8, an eigenvector $z^k \notin Q_{m-1}$ is formed. It is generated by the vector $s^k = -H^kr^k$, which, being the product of the matrix $H^k \in I_{m-1}$ and the residual $r^k \notin Q_{m-1}$, does not belong to the subspace $Q_{m-1}$ according to the results of Lemma 1. For this reason, the vector $v^k \notin Q_{m-1}$ obtained by Formula (118), orthogonal to $z^k$, becomes, as a result of (117)–(119), an eigenvector of the matrix $H^{k+1}$. Thus, the subspace $Q_{m-1}$ is replenished with two eigenvectors of the matrix $H^{k+1}$, resulting in (121). □
Theorem 11.
To obtain $H^k \in I_n$, it is sufficient to perform the iteration of (120) (n − 1) times.
Proof of Theorem 11.
In the first iteration of (120), we obtain $H^{k+1} \in I_2$. In the next (n − 2) iterations of (120), according to the results of Lemma 4, we obtain $H^{k+n-1} \in I_n$. □
The results of Theorem 11 and Lemma 5 indicate the possibility of using techniques for increasing the dimension of the subspace of quasi-Newton relations’ execution at arbitrary moments, which enables us, as will be shown below, to develop QNMs that are resistant to the inaccuracies of a one-dimensional search.
In summary, the following conclusions can be drawn about properties of QNMs on quadratic functions without the condition of an exact one-dimensional descent.
  • The dimension of the minimization subspace decreases as the dimension of the subspace of fulfillment of the quasi-Newton relation increases (Lemma 2).
  • The dimension of the subspace of fulfillment of the quasi-Newton relation does not decrease during the execution of the QNM (Lemmas 2–5).
  • Individual iterations with an exact one-dimensional descent increase the dimension of the subspace of the quasi-Newton relation (Lemma 4).
  • Separate inclusions of iterations with the transformation of matrices for pairs of conjugate vectors increase the dimension of the subspace of the quasi-Newton relation (Lemma 5).
  • It is sufficient to perform at most (n − 1) inclusions of an exact one-dimensional descent (113) at arbitrary iterations to solve the problem of minimizing a quadratic function in a finite number of steps of the QNM (Lemma 4 and Theorem 10).
  • To solve the problem of minimizing a quadratic function in a finite number of steps in the QNM, it is sufficient to perform in arbitrary iterations no more than (n − 1) inclusions of matrix transformations for pairs of descent vectors obtained as a result of the transformations in (118) and (119) (Lemma 5 and Theorem 11).

7. Methods for Increasing the Orthogonality of Learning Vectors in Quasi-Newton Methods

The term “degree of orthogonality” refers to functions of the form (71a). For functions of the form (6), this term means the degree of conjugacy of the vectors. Several conclusions can be drawn from our considerations.
Firstly, it is preferable to use the BFGS method. With imprecise one-dimensional descent in the DFP method, successive descent directions are less orthogonal than in the BFGS method (79).
Secondly, it makes sense to increase the degree of accuracy of the one-dimensional search, since individual iterations with an exact one-dimensional descent increase the dimension of the subspace of the quasi-Newton relation (Theorem 10), which reduces the dimension of the minimum search region.
Thirdly, separate inclusions of iterations with matrix transformation for pairs of conjugate vectors increase the dimension of the subspace of the quasi-Newton relation (Lemma 4). This requires applying a sequence of descent iterations for pairs of conjugate vectors (120).
On the other hand, it is important to correctly select the scaling factor ω of the initial matrix H0 = ωI from (1) in the QNM. Let us consider an example of a function of the form expressed in (6):
$$f(x) = \frac{1}{2}\sum_{i=1}^{n}x_i^2/i. \tag{122}$$
The eigenvalues of the matrix of second derivatives A and of its inverse $A^{-1}$ are $\lambda_i = 1/i$ and $\lambda_i^{-1} = i$, respectively. The gradient of the quadratic function in (122) has components $(\nabla f(x))_i = x_i/i$, $i = 1, \ldots, n$. In the first stages of the search with $H^0 = I$, the gradients $\nabla f(x) = A(x - x^*)$ and gradient differences are dominated by the components of eigenvectors with large eigenvalues of the matrix A and, accordingly, small eigenvalues of the matrix $A^{-1} = H$. Let us calculate an approximation of the eigenvalues for scaling the initial matrix using the data from (3) of the first iteration of the methods in (1)–(4):
$$\lambda_{\min}^H \le \omega = \frac{\langle\Delta x^0, \Delta x^0\rangle}{\langle y^0, \Delta x^0\rangle} = \frac{\langle A^{-1}y^0, A^{-1}y^0\rangle}{\langle y^0, A^{-1}y^0\rangle} \le \lambda_{\max}^H, \tag{123}$$
where $\lambda_{\min}^H$, $\lambda_{\max}^H$ are the minimum and maximum eigenvalues of the matrix $A^{-1} = H$, respectively. To scale the initial matrix $H^0$, consider the following:
$$H^0 = K\omega I = K\frac{\langle\Delta x^0, \Delta x^0\rangle}{\langle y^0, \Delta x^0\rangle}I, \quad K \ge 1. \tag{124}$$
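A short sketch of the scaling (123)–(124) from the data of the first iteration (illustrative code only, with function (122) used as an example):

```python
import numpy as np

def scaled_initial_metric(dx0, y0, K=1.0):
    """Initial metric (124): H^0 = K * (<dx0, dx0> / <y0, dx0>) * I."""
    omega = (dx0 @ dx0) / (y0 @ dx0)
    return K * omega * np.eye(len(dx0))

# Example on function (122), f(x) = 0.5 * sum(x_i^2 / i): omega lies between
# the extreme eigenvalues 1 and n of A^{-1}, as stated in (123).
n = 5
A = np.diag(1.0 / np.arange(1, n + 1))
dx0 = np.ones(n)
y0 = A @ dx0
print(scaled_initial_metric(dx0, y0)[0, 0])
```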
Let us qualitatively investigate the operation of the quasi-Newton BFGS method (71). Taking into account the predominance of eigenvectors with large eigenvalues of the matrix A and, accordingly, small eigenvalues of the matrix $A^{-1} = H$, it is possible to qualitatively display the picture of how the spectrum of the matrix $A^{-1}$ is reconstructed for different values of K, under the rough assumption that the small eigenvalues are restored sequentially. A rough diagram of the process of reconstructing the spectrum of matrix eigenvalues is shown in Figure 2.
One way to increase the degree of orthogonality of the learning vectors in QNMs is therefore the normalization of the initial metric matrix (124). In Section 8, we consider the impact of the methods noted in this section on the efficiency of QNMs.

8. Numerical Study of Ways to Increase the Orthogonality of Learning Vectors in Quasi-Newton Methods

We implemented and compared the quasi-Newton BFGS and DFP methods. A one-dimensional search procedure with cubic interpolation [41] (exact one-dimensional descent) and a one-dimensional minimization procedure from [34] (inexact one-dimensional descent) were used. We used both the classical QNM with the iterations in (1)–(4) (denoted BFGS and DFP) and the QNM with additional orthogonalization iterations (116)–(119) applied as the sequence of iterations (120) (denoted BFGS_V and DFP_V). The experiments were carried out by varying the coefficient of the initial normalization of the QNM metric matrices.
Since the use of quasi-Newton methods is justified primarily for ill-conditioned functions, for which conjugate gradient methods are inefficient, the test functions were selected according to this principle. Since the QNM is based on a quadratic model of the function, its local convergence rate in a neighborhood of the minimum is largely determined by how efficiently it minimizes ill-conditioned quadratic functions. The test functions are as follows:
(1) f_1(x) = \sum_{i=1}^{n} x_i^2 \, i^6, \quad x^0 = (10/1, 10/2, \ldots, 10/n).
The optimal value and minimum point are f_1^* = 0 and x^* = (0, 0, …, 0). The condition number of the matrix of second derivatives is cond(∇²f_1(x)) = λ_max/λ_min = n^6; for n = 1000, it equals 1000^6 = 10^18.
(2) f_2(x) = \sum_{i=1}^{n} x_i^2 \, n \, i^6, \quad x^0 = (10, 10, \ldots, 10).
The optimal value and minimum point are f_2^* = 0 and x^* = (0, 0, …, 0). The condition number of the matrix of second derivatives is cond(∇²f_2(x)) = λ_max/λ_min = n^6; for n = 1000, it equals 1000^6 = 10^18.
(3) f_3(x) = \left( \sum_{i=1}^{n} x_i^2 \, i \right)^r, \quad x^0 = (1, 1, \ldots, 1), \; r = 2.
The optimal value and minimum point are f_3^* = 0 and x^* = (0, 0, …, 0). The function f3 is built on a quadratic function whose matrix of second derivatives has condition number cond = λ_max/λ_min = n; for n = 1000, this equals 1000. The topology of the level surfaces of f3 is identical to that of the underlying quadratic function. The matrix of second derivatives of f3 tends to zero as the minimum is approached (see the short derivation after this list); consequently, its inverse tends to infinity. The approximation pattern for the matrix of second derivatives in the QNM then corresponds to K = 1 in Figure 2. This makes it difficult to enter a new subspace, because the eigenvalues of the metric matrix in the already explored part of the subspace greatly dominate those in the unexplored part.
(4) f_4(x) = \sum_{i=1}^{n/2} \left[ 10^8 \left( x_{2i-1}^2 - x_{2i} \right)^2 + \left( x_{2i-1} - 1 \right)^2 \right], \quad x^0 = (-1.2, 1, -1.2, 1, \ldots, -1.2, 1).
The optimal value and minimum point of this rescaled multidimensional Rosenbrock function [42] are f_4^* = 0 and x^* = (1, 1, …, 1). The function has a curved ravine with small second derivatives along the bottom of the ravine and large second derivatives along the normal to the bottom; the ratio of the second derivatives along these directions is approximately 10^8.
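To make explicit the claim in item (3) that the matrix of second derivatives of f3 vanishes at the minimum, a short chain-rule computation for f_3(x) = q(x)^r with q(x) = \sum_{i=1}^{n} i x_i^2 and r = 2 reads as follows.

```latex
\nabla f_3(x)   = r\, q(x)^{r-1}\, \nabla q(x), \qquad
\nabla^2 f_3(x) = r(r-1)\, q(x)^{r-2}\, \nabla q(x)\, \nabla q(x)^{\top}
                + r\, q(x)^{r-1}\, \nabla^2 q(x).
```

Since q(x*) = 0 and ∇q(x*) = 0 at x* = 0, both terms vanish for r = 2, so ∇²f_3(x*) = 0, and the inverse of the matrix of second derivatives grows without bound as the minimum is approached.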
The stopping criterion is
f(x^k) - f^* \le \varepsilon = 10^{-10}.
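For reference, a minimal sketch of some of the test functions and the stopping rule is given below. This is not the authors' implementation: the formulas for f1 and f3 follow the reconstruction given above (the coefficients in the source rendering are partially garbled), and all helper names are illustrative.

```python
import numpy as np

def f1(x):
    i = np.arange(1, x.size + 1, dtype=float)
    return np.sum(x**2 * i**6)

def grad_f1(x):
    i = np.arange(1, x.size + 1, dtype=float)
    return 2.0 * x * i**6

def f3(x, r=2):
    i = np.arange(1, x.size + 1, dtype=float)
    return np.sum(x**2 * i) ** r

def grad_f3(x, r=2):
    i = np.arange(1, x.size + 1, dtype=float)
    q = np.sum(x**2 * i)
    return 2.0 * r * q ** (r - 1) * x * i

def f4(x):
    x_odd, x_even = x[0::2], x[1::2]          # pairs (x_{2i-1}, x_{2i})
    return np.sum(1e8 * (x_odd**2 - x_even)**2 + (x_odd - 1.0)**2)

def grad_f4(x):
    g = np.zeros_like(x)
    x_odd, x_even = x[0::2], x[1::2]
    g[0::2] = 4e8 * (x_odd**2 - x_even) * x_odd + 2.0 * (x_odd - 1.0)
    g[1::2] = -2e8 * (x_odd**2 - x_even)
    return g

def stopped(fx, f_star=0.0, eps=1e-10):
    """Stopping rule used in the experiments: f(x_k) - f* <= eps."""
    return fx - f_star <= eps

# The Hessian of f1 is diag(2 i^6), so its condition number is n^6, as stated in the text.
n = 10
i = np.arange(1, n + 1, dtype=float)
print("cond(Hess f1) =", (i**6).max() / (i**6).min(), "  n^6 =", float(n) ** 6)
```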
The results of minimizing these functions for n = 1000 are given in Table 1 and Table 2. The problem was considered solved if, within the allotted number of iterations and function/gradient evaluations, the method reached a function value satisfying the stopping criterion. Each cell reports the number of iterations (one-dimensional searches along a direction) and, below it, the number of calls to the procedure that simultaneously computes the function and the gradient. The number of iterations in all tests was limited to 40,000; if a method exceeded this limit, it was stopped and considered to have failed. A dash marks the cases where no solution was obtained. In these cases, the methods stalled because the minimization steps became very small and, as a consequence, the errors in the gradient differences used in the transformations of the metric matrices became large.
Let us consider the effects that reduce the convergence rate of the method. For the function f3, for example, the matrix of second derivatives tends to zero as the minimum is approached; consequently, the inverse matrix tends to infinity. The approximation pattern for the matrix of second derivatives in the QNM corresponds to K = 1 in Figure 2. In the explored part of the subspace, the metric matrix of the QNM grows, so even a small residual in this part of the subspace is greatly amplified, while in the unexplored part of the space the eigenvalues remain fixed. This makes it difficult to enter a new subspace, because the eigenvalues of the metric matrix in the explored part of the subspace greatly dominate those in the unexplored area. To enter the unexplored part of the subspace, the residual in the explored part of the space must first be eliminated. As a consequence, when minimizing ill-conditioned functions, the search steps become smaller, the errors in the gradient differences grow, and the minimization method begins to cycle.
For exact descent, there are practically no differences between the BFGS and BFGS_V methods. With exact descent, successive descent vectors for quadratic functions are conjugate, and matrix learning, considered in a coordinate system in which the matrix of second derivatives is the identity, is carried out along an orthogonal system of vectors. Minor errors violate this orthogonality, which affects the DFP method.
For inexact descent, the BFGS_V method significantly outperforms the BFGS method. The DFP and DFP_V methods are practically ineffective on these tests, although the DFP_V method shows better results.
Thus, with one-dimensional search errors, the BFGS_V algorithm is significantly more effective than the BFGS method. The DFP method is practically inapplicable when the problem is highly ill-conditioned.
Table 2 shows the experimental data with normalization of the matrix (124) at K > 1. For the functions f3(x) and f4(x), the coefficient K had to be reduced to obtain better results.
The initial normalization of the metric matrices, as follows from the results in Table 1 and Table 2, significantly improves the convergence of QNMs. This situation corresponds to the case K > 1 in Figure 2. Large eigenvalues in the unexplored part of the subspace make it easy to find new conjugate directions and to train the metric matrices efficiently with almost orthogonal training vectors.
For exact descent, there are practically no differences between the BFGS and BFGS_V methods. For inexact descent, the BFGS_V method significantly outperforms the BFGS method. The DFP and DFP_V methods are efficient for the functions f1(x)–f3(x), and for inexact descent the DFP_V method significantly outperforms the DFP method.
Thus, in the case of one-dimensional search errors, the BFGS_V algorithm is significantly more efficient than the BFGS method, and a correct initial normalization of the metric matrices can significantly increase the convergence rate of the method.
To give a visual demonstration of the methods, we minimize the following two-dimensional function:
f_5(x) = \left( x_1^2 + 100 x_2^2 \right)^2, \quad x^0 = (1, 1).
To test the idea that orthogonalization increases the performance of the quasi-Newton method, the minimization conditions were deliberately worsened: the initial matrix was normalized with K = 0.000001, which significantly complicates the solution of the problem and exposes the advantage that the higher degree of orthogonality of the learning vectors gives the BFGS and BFGS_V methods over the DFP method.
The stopping criterion was
f(x^k) - f^* \le \varepsilon = 10^{-2}.
The results are shown in Table 3: the first row gives the number of iterations, and the row Fmin gives the minimal function value achieved.
The paths of the three considered algorithms are shown in Figure 3.
These results confirm the theoretical conclusions about the influence of the orthogonality degree of the matrix learning vectors on the convergence rate of the method. The BFGS_V method performs forced orthogonalization, which improves on the result of the BFGS method. The trajectories of the methods are listed in Table A1, Table A2 and Table A3 of Appendix A (the trajectory of the DFP method is shown partially).
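As a quick consistency check of the reconstructed form of f5, the sketch below (illustrative code, not the authors' implementation) evaluates f5 at three points taken from the BFGS trajectory in Table A2 and compares the result with the tabulated function values; the computed values should agree with the table to roughly the precision of the tabulated entries.

```python
import numpy as np

def f5(x1, x2):
    return (x1**2 + 100.0 * x2**2) ** 2

# Three rows of Table A2 (BFGS trajectory): (x1, x2, tabulated f5).
rows = [
    (9.854078e-1, -4.592161e-1, 4.865982e2),
    (9.895086e-1, -4.914216e-2, 1.489919e0),
    (-2.187391e-2, -2.244455e-2, 2.586155e-3),
]
for x1, x2, f_tab in rows:
    print(f"f5({x1:+.6e}, {x2:+.6e}) = {f5(x1, x2):.6e}   (table: {f_tab:.6e})")
```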

9. Conclusions

This paper presents methods for updating metric matrices in quasi-Newton methods based on gradient learning algorithms. As a result, the system of learning steps can be represented as an algorithm that minimizes a certain objective function along a system of directions, and conclusions about the convergence rate of the learning process can be drawn from the properties of this system of directions. The main conclusion is that the convergence rate depends directly on the degree of orthogonality of the learning vectors.
Based on the study of the learning algorithms in the DFP and BFGS methods, it can be shown that the degree of orthogonality of the learning vectors in the BFGS method is higher than in the DFP method. This means that, because of noise and the inaccuracies of one-dimensional descent, entering the unexplored region of the minimization space is more difficult in the DFP method than in the BFGS method, which explains why the BFGS update formula gives the best results.
Our study of quadratic functions revealed that the dimension of the minimization space is reduced when iterations with an exact one-dimensional descent, or iterations with additional orthogonalization, are included in the quasi-Newton method. It was also shown that the orthogonality of the learning vectors, and thereby the convergence rate of the method, can be increased through a special normalization of the initial metric matrix. The theoretically predicted effects of increasing the efficiency of quasi-Newton methods were confirmed in a computational experiment on complex, ill-conditioned minimization problems. In future work, we plan to study minimization methods under the conditions of a linear background that adversely affects convergence.

Author Contributions

Conceptualization, V.K. and E.T.; methodology, V.K., E.T. and P.S.; software, V.K.; validation, L.K., E.T., P.S. and D.K.; formal analysis, P.S., E.T. and D.K.; investigation, E.T.; resources, L.K.; data curation, P.S. and D.K.; writing—original draft preparation, V.K.; writing—review and editing, E.T., P.S. and L.K.; visualization, V.K.; supervision, V.K. and L.K.; project administration, L.K. and D.K.; funding acquisition, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121). Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Trajectory of the BFGS_V method.
Iteration   f5(x)              x1                 x2
0           0.0                1.0                1.0
1           9.605985 × 10^−1   9.900005 × 10^−1   5.073008 × 10^−5
2           9.605986 × 10^−1   9.900005 × 10^−1   5.332304 × 10^−5
3           9.605988 × 10^−1   9.900006 × 10^−1   5.659179 × 10^−5
4           1.396454 × 10^−1   6.112994 × 10^−1   2.122674 × 10^−4
5           7.500359 × 10^−4   1.654885 × 10^−1   5.745936 × 10^−5
Table A2. Trajectory of the BFGS method.
Iteration   f5(x)              x1                 x2
0           0.0                1.0                1.0
1           4.865982 × 10^2    9.854078 × 10^−1   −4.592161 × 10^−1
2           1.489919           9.895086 × 10^−1   −4.914216 × 10^−2
3           9.608406 × 10^−1   9.899878 × 10^−1   −1.220497 × 10^−3
4           9.605957 × 10^−1   9.899999 × 10^−1   −6.968405 × 10^−6
5           9.605941 × 10^−1   9.899990 × 10^−1   −1.031223 × 10^−4
6           9.605941 × 10^−1   9.899990 × 10^−1   −9.891454 × 10^−5
7           9.605941 × 10^−1   9.899990 × 10^−1   −9.945802 × 10^−5
8           9.612290 × 10^−1   9.899966 × 10^−1   1.815384 × 10^−3
9           9.641387 × 10^−1   9.899999 × 10^−1   −4.249478 × 10^−3
10          9.605818 × 10^−1   9.899963 × 10^−1   9.036299 × 10^−6
11          9.059276 × 10^−1   9.603298 × 10^−1   −1.719565 × 10^−2
12          1.634266 × 10^−2   2.596599 × 10^−1   2.457950 × 10^−2
13          2.586155 × 10^−3   −2.187391 × 10^−2  −2.244455 × 10^−2
Table A3. Trajectory of the DFP method.
Iteration   f5(x)              x1                 x2
0           0.0                1.0                1.0
1           9.605985 × 10^−1   9.900005 × 10^−1   5.073008 × 10^−5
2           9.605986 × 10^−1   9.900005 × 10^−1   5.332304 × 10^−5
3           9.605988 × 10^−1   9.900006 × 10^−1   5.659179 × 10^−5
4           9.605991 × 10^−1   9.900006 × 10^−1   6.073300 × 10^−5
5           9.605994 × 10^−1   9.900007 × 10^−1   6.601199 × 10^−5
6           9.605942 × 10^−1   9.899988 × 10^−1   −1.191852 × 10^−4
7           9.605942 × 10^−1   9.899988 × 10^−1   −1.202172 × 10^−4
8           9.605942 × 10^−1   9.899988 × 10^−1   −1.215724 × 10^−4
9           9.605942 × 10^−1   9.899988 × 10^−1   −1.233791 × 10^−4
10          9.605942 × 10^−1   9.899987 × 10^−1   −1.258332 × 10^−4
11          9.605957 × 10^−1   9.899999 × 10^−1   −8.005638 × 10^−6
12          9.605974 × 10^−1   9.899977 × 10^−1   −2.294417 × 10^−4
13          9.605962 × 10^−1   9.900000 × 10^−1   4.809401 × 10^−6
14          9.605946 × 10^−1   9.899985 × 10^−1   −1.494894 × 10^−4
15          9.605941 × 10^−1   9.899992 × 10^−1   −8.356599 × 10^−5
16          9.605941 × 10^−1   9.899990 × 10^−1   −1.019703 × 10^−4
17          9.605941 × 10^−1   9.899990 × 10^−1   −9.866443 × 10^−5
18          9.605941 × 10^−1   9.899990 × 10^−1   −9.826577 × 10^−5
19          9.605941 × 10^−1   9.899990 × 10^−1   −9.914318 × 10^−5
20          9.605941 × 10^−1   9.899990 × 10^−1   −9.806531 × 10^−5
21          9.639558 × 10^−1   9.899569 × 10^−1   −4.240006 × 10^−3
22          9.605941 × 10^−1   9.899991 × 10^−1   −9.292366 × 10^−5
23          9.605942 × 10^−1   9.899988 × 10^−1   −1.237791 × 10^−4
24          9.605943 × 10^−1   9.899994 × 10^−1   −6.417724 × 10^−5
25          9.605943 × 10^−1   9.899986 × 10^−1   −1.365365 × 10^−4
26          9.605942 × 10^−1   9.899992 × 10^−1   −7.609652 × 10^−5
27          9.605941 × 10^−1   9.899989 × 10^−1   −1.126456 × 10^−4
28          9.605941 × 10^−1   9.899990 × 10^−1   −9.613542 × 10^−5
29          9.605941 × 10^−1   9.899990 × 10^−1   −1.018252 × 10^−4
30          9.605941 × 10^−1   9.899990 × 10^−1   −1.002864 × 10^−4
31          9.605941 × 10^−1   9.899990 × 10^−1   −1.007205 × 10^−4
32          9.605941 × 10^−1   9.899990 × 10^−1   −1.001624 × 10^−4
33          9.605941 × 10^−1   9.899990 × 10^−1   −1.006906 × 10^−4
34          9.605941 × 10^−1   9.899990 × 10^−1   −1.000533 × 10^−4
35          9.605941 × 10^−1   9.899990 × 10^−1   −1.006925 × 10^−4
36          9.605941 × 10^−1   9.899990 × 10^−1   −9.993761 × 10^−5
37          9.605941 × 10^−1   9.899990 × 10^−1   −1.007145 × 10^−4
38          9.605941 × 10^−1   9.899990 × 10^−1   −9.980530 × 10^−5
39          9.605941 × 10^−1   9.899990 × 10^−1   −1.007537 × 10^−4
40          9.605941 × 10^−1   9.899990 × 10^−1   −9.964890 × 10^−5
41          9.605941 × 10^−1   9.899990 × 10^−1   −1.008103 × 10^−4

References

  1. Polyak, B.T. Introduction to Optimization; Translated from Russian; Optimization Software Inc., Publ. Division: New York, NY, USA, 1987. [Google Scholar]
  2. Nocedal, J.; Wright, S. Numerical Optimization, Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006. [Google Scholar]
  3. Bertsekas, D.P. Constrained Optimization and Lagrange Multiplier Methods; Academic Press: New York, NY, USA, 1982. [Google Scholar]
  4. Gill, P.E.; Murray, W.; Wright, M.H. Practical Optimization; SIAM: Philadelphia, PA, USA, 2020. [Google Scholar]
  5. Dennis, J.E.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
  6. Evtushenko, Y.G. Methods for Solving Extremal Problems and Their Application in Optimization Systems; Nauka: Moscow, Russia, 1982. (In Russian) [Google Scholar]
  7. Polak, E. Computational Methods in Optimization: A Unified Approach; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  8. Kokurin, M.M.; Kokurin, M.Y.; Semenova, A.V. Iteratively regularized Gauss–Newton type methods for approximating quasi–solutions of irregular nonlinear operator equations in Hilbert space with an application to COVID-19 epidemic dynamics. Appl. Math. Comput. 2022, 431, 127312. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, J.; Tao, X.; Sun, P.; Zheng, Z. A positional misalignment correction method for Fourier ptychographic microscopy based on the quasi-Newton method with a global optimization module. Opt. Commun. 2019, 452, 296–305. [Google Scholar] [CrossRef]
  10. Lampron, O.; Therriault, D.; Lévesque, M. An efficient and robust monolithic approach to phase-field quasi-static brittle fracture using a modified Newton method. Comput. Methods Appl. 2021, 386, 114091. [Google Scholar] [CrossRef]
  11. Spenke, T.; Hosters, N.; Behr, M. A multi-vector interface quasi-Newton method with linear complexity for partitioned fluid–structure interaction. Comput. Methods Appl. Mech. Engrg. 2020, 361, 112810. [Google Scholar] [CrossRef]
  12. Zorrilla, R.; Rossi, R. A memory-efficient MultiVector Quasi-Newton method for black-box Fluid-Structure Interaction coupling. Comput. Struct. 2023, 275, 106934. [Google Scholar] [CrossRef]
  13. Davis, K.; Schulte, M.; Uekermann, B. Enhancing Quasi-Newton Acceleration for Fluid-Structure Interaction. Math. Comput. Appl. 2022, 27, 40. [Google Scholar] [CrossRef]
  14. Tourn, B.; Hostos, J.; Fachinotti, V. Extending the inverse sequential quasi-Newton method for on-line monitoring and controlling of process conditions in the solidification of alloys. Int. Commun. Heat Mass Transf. 2023, 142, 1106647. [Google Scholar] [CrossRef]
  15. Hong, D.; Li, G.; Wei, L.; Li, D.; Li, P.; Yi, Z. A self-scaling sequential quasi-Newton method for estimating the heat transfer coefficient distribution in the air jet impingement. Int. J. Therm. Sci. 2023, 185, 108059. [Google Scholar] [CrossRef]
  16. Berahas, A.S.; Jahani, M.; Richtárik, P.; Takác, M. Quasi-Newton Methods for Machine Learning: Forget the Past, Just Sample. Optim. Methods Softw. 2022, 37, 1668–1704. [Google Scholar] [CrossRef]
  17. Rafati, J. Quasi-Newton Optimization Methods For Deep Learning Applications. 2019. Available online: https://arxiv.org/abs/1909.01994.pdf (accessed on 11 January 2024).
  18. Indrapriyadarsini, S.; Mahboubi, S.; Ninomiya, H.; Kamio, T.; Asai, H. Accelerating Symmetric Rank-1 Quasi-Newton Method with Nesterov’s Gradient for Training Neural Networks. Algorithms 2022, 15, 6. [Google Scholar] [CrossRef]
  19. Davidon, W.C. Variable Metric Methods for Minimization; A.E.C. Res. and Develop. Report ANL–5990; Argonne National Laboratory: Argonne, IL, USA, 1959. [Google Scholar]
  20. Fletcher, R.; Powell, M.J.D. A rapidly convergent descent method for minimization. Comput. J. 1963, 6, 163–168. [Google Scholar] [CrossRef]
  21. Oren, S.S. Self-scaling variable metric (SSVM) algorithms I: Criteria and sufficient conditions for scaling a class of algorithms. Manag. Sci. 1974, 20, 845–862. [Google Scholar] [CrossRef]
  22. Oren, S.S. Self-scaling variable metric (SSVM) algorithms II: Implementation and experiments. Manag. Sci. 1974, 20, 863–874. [Google Scholar] [CrossRef]
  23. Powell, M.J.D. Convergence Properties of a Class of Minimization Algorithms. In Nonlinear Programming; Mangasarian, O.L., Meyer, R.R., Robinson, S.M., Eds.; Academic Press: New York, NY, USA, 1975; Volume 2, pp. 1–27. [Google Scholar]
  24. Dixon, L.C. Quasi-Newton algorithms generate identical points. Math. Program. 1972, 2, 383–387. [Google Scholar] [CrossRef]
  25. Huynh, D.Q.; Hwang, F.-N. An accelerated structured quasi-Newton method with a diagonal second-order Hessian approximation for nonlinear least squares problems. J. Comp. Appl. Math. 2024, 442, 115718. [Google Scholar] [CrossRef]
  26. Chai, W.H.; Ho, S.S.; Quek, H.C. A Novel Quasi-Newton Method for Composite Convex Minimization. Pattern Recognit. 2022, 122, 108281. [Google Scholar] [CrossRef]
  27. Fang, X.; Ni, Q.; Zeng, M. A modified quasi-Newton method for nonlinear equations. J. Comp. Appl. Math. 2018, 328, 44–58. [Google Scholar] [CrossRef]
  28. Zhou, W.; Zhang, L. A modified Broyden-like quasi-Newton method for nonlinear equations. J. Comp. Appl. Math. 2020, 372, 112744. [Google Scholar] [CrossRef]
  29. Broyden, C.G. The convergence of a class of double–rank minimization algorithms. J. Inst. Math. Appl. 1970, 6, 76–79. [Google Scholar] [CrossRef]
  30. Fletcher, R. A new approach to variable metric algorithms. Comput. J. 1970, 13, 317–322. [Google Scholar] [CrossRef]
  31. Goldfarb, D. A family of variable metric methods derived by variational means. Math. Comput. 1970, 24, 23–26. [Google Scholar] [CrossRef]
  32. Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
  33. Zhu, C.; Byrd, R.H.; Lu, P.; Nocedal, J. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Trans. Math. Softw. 1997, 23, 550–560. [Google Scholar] [CrossRef]
  34. Tovbis, E.; Krutikov, V.; Stanimirović, P.; Meshechkin, V.; Popov, A.; Kazakovtsev, L. A Family of Multi-Step Subgradient Minimization Methods. Mathematics 2023, 11, 2264. [Google Scholar] [CrossRef]
  35. Krutikov, V.; Gutova, S.; Tovbis, E.; Kazakovtsev, L.; Semenkin, E. Relaxation Subgradient Algorithms with Machine Learning Procedures. Mathematics 2022, 10, 3959. [Google Scholar] [CrossRef]
  36. Feldbaum, A.A. On a class of dual control learning systems. Avtomat. i Telemekh. 1964, 25, 433–444. (In Russian) [Google Scholar]
  37. Aizerman, M.A.; Braverman, E.M.; Rozonoer, L.I. Method of Potential Functions in Machine Learning Theory; Nauka: Moscow, Russia, 1970. (In Russian) [Google Scholar]
  38. Tsypkin, Y.Z. Foundations of the Theory of Learning Systems; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  39. Kaczmarz, S. Approximate solution of systems of linear equations. Int. J. Control 1993, 54, 1239–1241. [Google Scholar] [CrossRef]
  40. Krutikov, V.N. On the convergence rate of minimization methods along vectors of a linearly independent system. USSR Comput. Math. Math. Phys. 1983, 23, 218–220. [Google Scholar] [CrossRef]
  41. Rao, S.S. Engineering Optimization; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  42. Andrei, N. An Unconstrained Optimization Test Functions Collection. Available online: http://www.ici.ro/camo/journal/vol10/v10a10.pdf (accessed on 1 April 2024).
Figure 1. Step of process (13) on the hyperplane ⟨z_k, c⟩ = y_k along the direction z_k.
Figure 2. Qualitative behavior of the spectrum of eigenvalues of the matrix H_k under the scaling (124) for various values of K.
Figure 3. Level curves and paths of the optimization algorithms for function f5.
Table 1. Results of minimization with normalization of matrix (124) at K = 1 and n = 1000.
            Exact Descent                                  Inexact Descent
        BFGS      BFGS_V    DFP       DFP_V        BFGS      BFGS_V    DFP       DFP_V
f1(x)   1157      1157      1228      1211         1854      1648      -         1762
        2526      2523      2712      2667         3980      3413      -         3750
f2(x)   2400      2370      -         -            4351      3218      -         -
        5663      5560      -         -            9908      7242      -         -
f3(x)   1404      1396      -         1643         1905      1508      5837      2497
        3206      3190      -         3743         4286      3394      13,362    5686
f4(x)   3328      2964      -         -            -         -         -         -
        7455      6668      -         -            -         -         -         -
(In each cell, the upper number is the number of iterations and the lower number is the number of function/gradient calls.)
Table 2. Results of minimization with normalization of matrix (124) at K = 10,000 and n = 1000. For results marked with an asterisk, K = 100.
            Exact Descent                                  Inexact Descent
        BFGS      BFGS_V    DFP       DFP_V        BFGS      BFGS_V    DFP       DFP_V
f1(x)   1038      1038      1041      1041         1221      1189      1307      1260
        2194      2193      2197      2195         2190      2116      2431      2343
f2(x)   791       795       1091      852          1386      1012      2524      1509
        1863      1874      2560      2028         3129      2159      5794      3341
f3(x)   1082 *    1090 *    8977 *    1343 *       1281      1129      4281      1845
        2436      2454      20,201    3055         2802      2453      9742      4183
f4(x)   4062 *    3850 *    -         -            -         -         -         -
        9135      8686      -         -            -         -         -         -
(In each cell, the upper number is the number of iterations and the lower number is the number of function/gradient calls.)
Table 3. Results of minimization with normalization of matrix (124) at K = 0.000001 and n = 2.
                         Exact Descent
                         BFGS              BFGS_V            DFP_V
Number of iterations     13                5                 3733
Fmin                     2.5862 × 10^−3    7.5003 × 10^−4    7.7552 × 10^−3