Article

On the Locally Polynomial Complexity of the Projection-Gradient Method for Solving Piecewise Quadratic Optimisation Problems

by Agnieszka Prusińska 1,*,†, Krzysztof Szkatuła 1,2,† and Alexey Tret’yakov 1,2,3,†
1 Faculty of Exact and Natural Sciences, Siedlce University, 08-110 Siedlce, Poland
2 Systems Research Institute, Polish Academy of Sciences, 01-447 Warsaw, Poland
3 Dorodnicyn Computing Centre of FRC CSC, Russian Academy of Sciences, 119333 Moscow, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2021, 23(4), 465; https://doi.org/10.3390/e23040465
Submission received: 13 March 2021 / Revised: 2 April 2021 / Accepted: 14 April 2021 / Published: 15 April 2021
(This article belongs to the Section Complexity)

Abstract

This paper proposes a method for solving optimisation problems involving piecewise quadratic functions. The method provides a solution in a finite number of iterations, and the computational complexity of the proposed method is locally polynomial in the problem dimension, i.e., provided that the initial point belongs to a sufficiently small neighbourhood of the solution set. The proposed method can be applied to solving large systems of linear inequalities.

1. Introduction

Let us consider the following optimisation problem:
$\min_{x \in \mathbb{R}^n} \| (A \cdot x - b)_+ \|^2$,  (1)
where $c_+ := \max\{c, 0\}$ (applied componentwise), $A$ is an $m \times n$ matrix, $A = (a_{ij})$, $x \in \mathbb{R}^n$, $x = (x_i)$, $b \in \mathbb{R}^m$, $b = (b_j)$, $i = 1, \ldots, n$, $j = 1, \ldots, m$, and $\| \cdot \|$ is the Euclidean norm of $\mathbb{R}^n$.
In this paper, a method for solving the problem in (1) is proposed; moreover, the number of iterations (and hence the computational complexity) required by the proposed method is locally polynomial with respect to $m$ and $n$, and in the worst-case scenario, the method has a geometric convergence rate.
Let us define the set of solutions of (1) as follows
$X^* := \{ x^* \mid x^* = \arg\min_{x \in \mathbb{R}^n} \| (A \cdot x - b)_+ \|^2 \}$.  (2)
If some point sufficiently close to the set $X^*$ of solutions to (1) is known, then it is possible to find a solution of (1) within a polynomial number of computational iterations; thus, the computational complexity is of the order of $O(m^3 \cdot n^3)$.
Many methods for solving (1) have been proposed (cf. Karmanov [1], Golikov and Evtushenko [2], Evtushenko and Golikov [3], Tretyakov [4], Tretyakov and Tyrtyshnikov [5] and Han [6]). All of these methods have reasonable computational complexity but, as mentioned above, to date, no strongly polynomial-time algorithm for solving (1) has been proposed. In studies by Tretyakov and Tyrtyshnikov [7] and Mangasarian [8], linear programming problems were solved by reducing them to the unconditional minimisation of strongly convex piecewise quadratic functions. A solution is obtained within a finite polynomial number of iterations if the starting point of the algorithm belongs to a sufficiently close neighbourhood of the unique solution to the problem. Unfortunately, the authors imposed severe limitations on the functions to be minimised: they should be strongly convex, the eigenvalues of the Hessian matrices should satisfy specific conditions, etc.
These results create significant limitations on the class of problems that can be solved: it is required that (1) has a unique solution, etc. The solution method described by Tretyakov and Tyrtyshnikov [7] is based on exploiting information about the problem being solved by analysing a sufficiently small neighbourhood of an arbitrary solution of (1). Analogous methods were proposed by Facchinei et al. [9] for the forecasting (identification) of the active constraints in a sufficiently close neighbourhood of the solution to the problem. In papers by Tretyakov and Tyrtyshnikov [5] and Wright [10], locally polynomial methods for solving quadratic programming problems based on similar ideas were presented. Tretyakov [4] proposed the gradient projection method for solving (1); this method involves finding a solution of (1) in a finite number of iterations and is a combination of iterative and straightforward (e.g., Gaussian) methods.
This paper proposes a computational method for solving (1). When the starting point of the proposed method is sufficiently close to the set $X^*$ of solutions to (1), then its computational complexity is locally polynomial, i.e., it is of the order of $O(m^3 \cdot n^3)$.
We point out that solving a system of linear inequalities
$A \cdot x - b \leq 0_m$,  (3)
where $0_m$ is the $m$-dimensional vector of zeroes, can be reduced to solving problem (1). This means that the number of computations required to establish a solution (if the given system of linear inequalities has one) is locally polynomial.
Let us denote
$X := \{ x \in \mathbb{R}^n \mid A \cdot x - b \leq 0_m \}$.  (4)
It is obvious that the set $X$ might be empty in general, but our method, presented in this paper, either determines this situation in a locally polynomial number of computations or provides a solution to the system (3). The proposed method could be applied when solving large systems of linear inequalities, which appear in many practical and industrial applications and are commonly treated by, e.g., the simplex method (Pan [11]), Karmarkar's method (Wright [12]), Chubanov's method (Roos [13]), and the Fourier–Motzkin elimination method (Khachiyan [14]; Šimeček et al. [15]).

2. Definitions and Theoretical Results

Let
$\varphi(x) = \frac{1}{2} \cdot \| (A \cdot x - b)_+ \|^2$.  (5)
Theorem 1.
The function $\varphi(x)$ is convex and has a nonempty set of minimum points
$X^* := \{ x^* \in \mathbb{R}^n \mid \varphi(x^*) = \min_{x \in \mathbb{R}^n} \varphi(x) \}$.  (6)
Proof. 
Theorem 1 follows immediately from the well-known features of quadratic-type convex functions (see, e.g., [16]). □
It is obvious that the elements $x^* \in X^*$, cf. (6), satisfy
$\nabla \varphi(x^*) = \sum_{i=1}^{m} (\langle a_i, x^* \rangle - b_i)_+ \cdot a_i = A^T \cdot (A \cdot x^* - b)_+ = 0_n$,  (7)
where $a_i^T$ is the $i$-th row of the matrix $A$.
Therefore, in the general case, our goal is to solve the following equation
$\nabla \varphi(x) = \sum_{i=1}^{m} (\langle a_i, x \rangle - b_i)_+ \cdot a_i = A^T \cdot (A \cdot x - b)_+ = 0_n, \quad x \in \mathbb{R}^n$.  (8)
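For readers who wish to experiment numerically, the objective (5) and its gradient (8) translate directly into a few lines of NumPy. The sketch below is illustrative only and is not part of the original paper; the function names phi and grad_phi are ours.

```python
import numpy as np

def phi(A, b, x):
    """Objective (5): 0.5 * || (A x - b)_+ ||^2."""
    r = np.maximum(A @ x - b, 0.0)   # componentwise positive part (A x - b)_+
    return 0.5 * float(r @ r)

def grad_phi(A, b, x):
    """Gradient (8): A^T (A x - b)_+ ."""
    return A.T @ np.maximum(A @ x - b, 0.0)
```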
In the sequel, $x^*$ stands for an arbitrary element of $X^*$ (a minimum point of $\varphi$). If the minimum value of $\varphi$ is equal to zero, then $X = X^*$, and if the minimum value of $\varphi$ is positive, then $X = \emptyset$. Let us denote
$f_i(x) := \langle a_i, x \rangle - b_i, \quad i \in D = \{1, \ldots, m\}$,
and
$J_0(x) := \{ i \in D \mid f_i(x) = 0 \}, \quad J_-(x) := \{ i \in D \mid f_i(x) < 0 \}, \quad J_+(x) := \{ i \in D \mid f_i(x) > 0 \}$,  (9)
where $f_i(x)$ is introduced to simplify the definitions of the sets $J_0(x)$, $J_-(x)$ and $J_+(x)$.
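The index sets in (9) can be computed directly from the residuals $f_i(x)$. The following sketch is an illustration under the assumption that exact zeros are replaced by a small floating-point tolerance tol, which is an implementation detail not discussed in the paper.

```python
import numpy as np

def index_sets(A, b, x, tol=1e-10):
    """Index sets from (9); tol is a floating-point tolerance (not part of the paper)."""
    f = A @ x - b                          # f_i(x) = <a_i, x> - b_i
    J0 = np.where(np.abs(f) <= tol)[0]     # f_i(x) = 0  (active)
    Jminus = np.where(f < -tol)[0]         # f_i(x) < 0
    Jplus = np.where(f > tol)[0]           # f_i(x) > 0
    return J0, Jminus, Jplus
```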
According to (7) and the above notations, $x^*$ should satisfy the formula
$\sum_{i \in J_0(x^*) \cup J_+(x^*)} (\langle a_i, x^* \rangle - b_i)_+ \cdot a_i = 0_n$.  (10)
Formula (10) is equivalent to the following condition, which should be satisfied at the point $x^*$:
$\sum_{i \in J_0(x^*) \cup J_+(x^*)} (\langle a_i, x^* \rangle - b_i) \cdot a_i = 0_n$.  (11)
In (11), it is considered that
$(\langle a_i, x^* \rangle - b_i)_+ = \langle a_i, x^* \rangle - b_i, \quad i \in J_0(x^*) \cup J_+(x^*)$.
This, in turn, means that, in the general case, we should solve the following equations
$\sum_{i \in J_0(x) \cup J_+(x)} (\langle a_i, x \rangle - b_i)_+ \cdot a_i = 0_n$,  (12)
or
$\sum_{i \in J_+(x)} (\langle a_i, x \rangle - b_i)_+ \cdot a_i = 0_n, \quad \langle a_i, x \rangle - b_i = 0, \ i \in J_0(x)$.  (13)
Without loss of generality, we may denote
$J_-(x^*) := \{1, \ldots, l\}, \quad J_0(x^*) := \{l+1, \ldots, p\}, \quad J_+(x^*) := \{p+1, \ldots, m\}$,
where $l \leq p \leq m$.
The main idea exploited in this paper is based on the following lemma. For $\varepsilon > 0$, we set $U_\varepsilon(x^*) := \{ x \in \mathbb{R}^n : \| x - x^* \| \leq \varepsilon \}$.
Lemma 1.
Let $x^*$ be a solution to the problem (1). Then, there exists $\varepsilon > 0$ such that, for any $x \in U_\varepsilon(x^*)$, the inequality $f_i(x) \geq 0$ implies the inequality $f_i(x^*) \geq 0$.
Proof. 
If $i \in J_-(x^*)$, that is, $f_i(x^*) < 0$, then, by continuity of the function $f_i$, there exists $\varepsilon_i > 0$ such that $f_i(x) < 0$ for all $x \in U_{\varepsilon_i}(x^*)$. Set
$\varepsilon = \min_{i \in J_-(x^*)} \varepsilon_i$.
Then, for all $i \in J_-(x^*)$ and for all $x \in U_\varepsilon(x^*)$, we have $f_i(x) < 0$. Consequently, if there exists $x \in U_\varepsilon(x^*)$ such that $f_i(x) \geq 0$ for some $i \in \{1, \ldots, m\}$, then $i \notin J_-(x^*)$, that is, $f_i(x^*) \geq 0$. □
By virtue of the above lemma, in a sufficiently small neighbourhood of some fixed point $x^* \in X^*$, the following hold for every $\bar{x} \in U_\varepsilon(x^*)$:
$J_0(\bar{x}) \subseteq J_0(x^*), \quad J_+(\bar{x}) \subseteq J_0(x^*) \cup J_+(x^*), \quad J_-(\bar{x}) \subseteq J_0(x^*) \cup J_-(x^*)$.
Now, our goal is to correctly define the sets $J_0(x^*)$ and $J_+(x^*)$ based on the information gained at the point $\bar{x} \in U_\varepsilon(x^*)$. Let us denote
$\bar{J}_0(\bar{x}) := J_0(\bar{x}), \quad \bar{J}_+(\bar{x}) := J_+(\bar{x}), \quad \bar{J}_-(\bar{x}) := J_-(\bar{x})$.
Let $A(\bar{x})$ and $b(\bar{x})$ denote the matrix and vector obtained from $A$ and $b$, respectively, by keeping the rows of $A$ and the coefficients of $b$ whose indices belong to $\bar{J}_0(\bar{x}) \cup \bar{J}_+(\bar{x})$. In this case, Equations (12) and (13) may be rewritten as
$A^T(\bar{x}) \cdot (A(\bar{x}) \cdot x - b(\bar{x})) = 0_n, \quad \langle a_i, x \rangle - b_i = 0, \ i \in \bar{J}_0(\bar{x})$.  (14)
Let $\bar{A}(\bar{x})$ denote the matrix composed of a maximal set of linearly independent rows of the system in (14), and let $\bar{b}(\bar{x})$ denote the corresponding vector of constant terms in (14).
The equations in (14) may be reformulated in the following way
$\bar{A}(\bar{x}) \cdot x - \bar{b}(\bar{x}) = 0$.  (15)
Let us observe that, at the point $x^*$, the following holds:
$A^T(x^*) \cdot (A(x^*) \cdot x^* - b(x^*))_+ = 0_n$.  (16)
This, in turn, means that
$\bar{A}(x^*) \cdot x^* - \bar{b}(x^*) = 0$.  (17)
Let us define
$M(\bar{x}) := \{ x \in \mathbb{R}^n \mid \sum_{i \in \bar{J}_0(\bar{x}) \cup \bar{J}_+(\bar{x})} (\langle a_i, x \rangle - b_i) \cdot a_i = 0_n \ \text{and} \ \langle a_j, x \rangle - b_j = 0, \ j \in \bar{J}_0(\bar{x}) \}$.  (18)
If the rank of an $r \times n$ matrix $B$ is equal to $r$, then the pseudoinverse matrix (operator) $B^+$ may be defined as $B^+ := B^T \cdot (B \cdot B^T)^{-1}$. We denote the $n \times n$ matrix of the orthogonal projection onto the space spanned by the rows of $B$ by $\Pi_{B^T} := B^T \cdot (B \cdot B^T)^{-1} \cdot B = B^+ \cdot B$, and the projection onto its orthogonal complement by $\Pi_{B^T}^{\perp} := I - \Pi_{B^T}$, where $I$ is the $n \times n$ identity matrix.
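These operators admit a direct NumPy transcription. The sketch below assumes, as above, that B has full row rank; the function name projectors is ours and not part of the paper.

```python
import numpy as np

def projectors(B):
    """For a full-row-rank r x n matrix B:
    B^+ = B^T (B B^T)^{-1},  Pi = B^+ B  (projector onto the row space of B),
    Pi_perp = I - Pi  (projector onto its orthogonal complement)."""
    r, n = B.shape
    B_pinv = B.T @ np.linalg.inv(B @ B.T)  # requires rank(B) = r
    Pi = B_pinv @ B
    Pi_perp = np.eye(n) - Pi
    return B_pinv, Pi, Pi_perp
```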
Let the point $z(\bar{x})$ be the projection of the point $\bar{x}$ onto the set $M(\bar{x})$. Let us observe that $x^* \in M(\bar{x})$ if $\bar{x} \in U_\varepsilon(x^*)$ and $\varepsilon$ is sufficiently small.
Moreover, if the constraints at the point $z(\bar{x})$ satisfy $f_i(z(\bar{x})) \leq 0$ for certain $i \in \bar{J}_+(\bar{x})$, then we define the set $I_-$ in the following way:
$I_- = \{ i \in \bar{J}_+(\bar{x}) \mid f_i(z(\bar{x})) \leq 0 \}; \quad I_- \subseteq J_0(x^*)$.
Otherwise, if the constraints at the point $z(\bar{x})$ satisfy $f_i(z(\bar{x})) \geq 0$ for certain $i \in \bar{J}_-(\bar{x})$, we define the set $I_+$ in an analogous way:
$I_+ = \{ i \in \bar{J}_-(\bar{x}) \mid f_i(z(\bar{x})) \geq 0 \}; \quad I_+ \subseteq J_0(x^*)$.
Now, we redefine $\bar{J}_0(\bar{x})$, $\bar{J}_+(\bar{x})$ and $\bar{J}_-(\bar{x})$ as follows:
$\bar{J}_0(\bar{x}) := \bar{J}_0(\bar{x}) \cup I_- \cup I_+, \quad \bar{J}_+(\bar{x}) := \bar{J}_+(\bar{x}) \setminus I_-, \quad \bar{J}_-(\bar{x}) := \bar{J}_-(\bar{x}) \setminus I_+$.  (19)
Next, we project the point $\bar{x}$ onto the new set $M(\bar{x})$, cf. (18), and a new point $z(\bar{x})$ is obtained.
Let
$z(x) = P_{M(\bar{x})}(x) = \Pi_{\bar{A}^T(\bar{x})}^{\perp} \cdot x + \bar{A}^+(\bar{x}) \cdot \bar{b}(\bar{x})$  (20)
define the operator for the projection of the point $x$ onto the set $M(\bar{x})$.
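A possible NumPy rendering of the projection (20) is given below. It relies on the Moore–Penrose pseudoinverse, which also absorbs redundant rows, so it slightly simplifies the explicit row-selection step described above; the function name project_onto_M is ours.

```python
import numpy as np

def project_onto_M(A_bar, b_bar, x):
    """Projection (20) of x onto the affine set {y : A_bar y = b_bar}:
    z = Pi_perp x + A_bar^+ b_bar  with  Pi_perp = I - A_bar^+ A_bar."""
    A_pinv = np.linalg.pinv(A_bar)         # Moore-Penrose pseudoinverse of A_bar
    n = A_bar.shape[1]
    return (np.eye(n) - A_pinv @ A_bar) @ x + A_pinv @ b_bar
```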

3. Algorithm for Finding the Solution of (1)

In this section, the algorithm designed to find the solution to (1) is presented. The main idea of this algorithm is based on information related to a current point $\bar{x}$ belonging to a sufficiently small neighbourhood of a point $x^* \in X^*$. We also demonstrate how to find such a point. The proposed method comprises two algorithms. The starting point of the method can be arbitrary, because Algorithm 2 (a gradient method with a special step selection) starts at an arbitrary point and, at a certain iteration, provides a point arbitrarily close to the solution set. Therefore, Algorithm 1 can start at the point specified by Algorithm 2.
Algorithm 1.
Initialisation Step: For the current point $\bar{x}$, the sets of indices $J_0(\bar{x})$, $J_-(\bar{x})$ and $J_+(\bar{x})$ are defined according to (9). If $J_+(\bar{x}) = \emptyset$, then $\bar{x}$ is the solution of (1) and Algorithm 1 is terminated. Otherwise, the Main Recursive Step is performed.
Main Recursive Step: Let $z(\bar{x})$, the projection of the point $\bar{x}$ onto the set $M(\bar{x})$, be defined according to (20). We check whether the following condition is satisfied:
$I_+ = \emptyset \quad \text{and} \quad I_- = \emptyset$.  (21)
Checking Step: If (21) holds, then $z(\bar{x}) \in X^*$ and Equation (10) is satisfied; $z(\bar{x})$ is the solution of (1), as defined in (2), and Algorithm 1 is terminated. Otherwise, if for certain values of $i \in D$ the condition (21) is violated and $i \in I_+ \cup I_-$, we define $\bar{J}_0(\bar{x})$, $\bar{J}_+(\bar{x})$ and $\bar{J}_-(\bar{x})$ according to (19), $M(\bar{x})$ is redefined according to (18), and the Main Recursive Step is repeated.
The set $D$ is finite, with $|D| = m$; therefore, the number of changes to the index sets $\bar{J}_0(\bar{x})$, $\bar{J}_+(\bar{x})$ and $\bar{J}_-(\bar{x})$ does not exceed $m$, and finally a point $z(\bar{x})$ fulfilling (12) is established. This means that $z(\bar{x})$ is the solution of (1), as defined in (2). A sketch of the whole procedure is given below.
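The following sketch assembles the steps above into one routine. It is our illustrative reading of Algorithm 1, not the authors' code: the tolerance tol, the use of a pseudoinverse in place of an explicit extraction of linearly independent rows, and the iteration cap are implementation choices.

```python
import numpy as np

def algorithm1(A, b, x_bar, tol=1e-10):
    """Illustrative reading of Algorithm 1; x_bar is assumed to lie close to X*."""
    m, n = A.shape
    f = A @ x_bar - b
    J0 = set(np.where(np.abs(f) <= tol)[0])
    Jm = set(np.where(f < -tol)[0])
    Jp = set(np.where(f > tol)[0])
    if not Jp:
        return x_bar                        # J_+(x_bar) empty: x_bar already solves (1)
    z = x_bar
    for _ in range(m + 1):                  # the index sets can change at most m times
        S = sorted(J0 | Jp)
        # Linear system (14): A_S^T (A_S x - b_S) = 0_n  and  <a_j, x> = b_j, j in J0.
        rows = [A[S].T @ A[S]]
        rhs = [A[S].T @ b[S]]
        if J0:
            J0s = sorted(J0)
            rows.append(A[J0s])
            rhs.append(b[J0s])
        big_A = np.vstack(rows)
        big_b = np.concatenate(rhs)
        # Projection (20) onto M(x_bar); pinv stands in for selecting independent rows.
        A_pinv = np.linalg.pinv(big_A)
        z = (np.eye(n) - A_pinv @ big_A) @ x_bar + A_pinv @ big_b
        fz = A @ z - b
        I_minus = {i for i in Jp if fz[i] <= tol}    # candidates for J0, cf. I_-
        I_plus = {i for i in Jm if fz[i] >= -tol}    # candidates for J0, cf. I_+
        if not I_minus and not I_plus:
            return z                        # condition (21) holds, so z is in X*
        J0 |= I_minus | I_plus              # update (19)
        Jp -= I_minus
        Jm -= I_plus
    return z
```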
It is of utmost importance that $\bar{x}$ belongs to a sufficiently small neighbourhood of the point $x^*$, because otherwise $z(\bar{x})$ may not satisfy (12). If this is not the case, it is necessary to find another point $\bar{x}$ that is closer to $x^*$. The process for accomplishing this is described below.
Theorem 2.
For a sufficiently small $\varepsilon > 0$ and for every $\bar{x} \in U_\varepsilon(x^*)$, Algorithm 1 provides $z^* = z(\bar{x})$ as the solution of
$\nabla \varphi(x) = A^T(x) \cdot (A(x) \cdot x - b)_+ = 0_n$,  (22)
and this is equivalent to finding the solution of (12), within a number of iterations of the order of $O(m^3 \cdot n^3)$.
Proof. 
The proof is based on the observation that, for $\bar{x}$ belonging to a sufficiently small neighbourhood of the point $x^*$, according to Lemma 1 the constraints $f_i(\bar{x}) \geq 0$ correspond to constraints $f_i(x^*) \geq 0$. Therefore,
$\bar{J}_0(\bar{x}) \cup \bar{J}_+(\bar{x}) \subseteq J_0(x^*) \cup J_+(x^*)$.
Let us determine $z(\bar{x})$ as the projection of the point $\bar{x}$ onto the set $M(\bar{x})$, which is defined according to (18). It may happen that the set $\bar{J}_0(\bar{x})$ becomes enlarged. However, the number of iterations at which $\bar{J}_0(\bar{x})$ becomes enlarged does not exceed $m$, the number of elements in the set $D$. Therefore, at some iteration, (21) is satisfied. This means that $z(\bar{x})$ satisfies (12) or, equivalently, $\nabla\varphi(z(\bar{x})) = 0_n$. This demonstrates that $z(\bar{x})$ is the solution of (1), as defined in (2). The computational complexity of establishing each projection $z(\bar{x})$ is of the order of $O(m^2 \cdot n^3)$; this estimate takes into account the computational effort related to the matrix multiplications involved. The number of iterations does not exceed $m$; therefore, the overall computational complexity is of the order of $O(m^3 \cdot n^3)$. □
To complement the presentation of this section, we describe the gradient method for establishing a point $\bar{x}$ belonging to a sufficiently small neighbourhood $U_\varepsilon(x^*)$ of some fixed solution $x^* \in X^*$ to (1). This gradient method has the following scheme:
$x^{k+1} = x^k - \alpha \cdot \nabla\varphi(x^k)$,  (23)
where $\alpha = \frac{1}{L}$ and the gradient $\nabla\varphi(x)$ fulfils the Lipschitz condition
$\| \nabla\varphi(x^{k+1}) - \nabla\varphi(x^k) \| \leq L \cdot \| x^{k+1} - x^k \|, \quad \text{where } L = 2 \cdot \| A^T \cdot A \|$.
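A single step of the scheme (23) with $\alpha = 1/L$ might look as follows in NumPy; computing $\|A^T A\|$ as the spectral norm is our choice, and the function name gradient_step is ours.

```python
import numpy as np

def gradient_step(A, b, x):
    """One step of scheme (23) with alpha = 1/L and L = 2 * ||A^T A|| (spectral norm)."""
    L = 2.0 * np.linalg.norm(A.T @ A, ord=2)
    grad = A.T @ np.maximum(A @ x - b, 0.0)   # gradient (8) of phi
    return x - grad / L
```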
The convergence of the gradient method (23) is considered in the following theorem, cf. Karmanov [1].
Theorem 3.
Let $x^0 \in \mathbb{R}^n$, and let the sequence $\{x^k\}$, $k = 0, 1, 2, \ldots$, be constructed according to (23). Then,
$x^k \to x^*, \ x^* \in X^*, \ \text{as } k \to \infty, \quad \text{and} \quad \| x^{k+1} - y \| < \| x^k - y \| \ \ \forall y \in X^*$.
Proof. 
The scheme in (23) produces a sequence that converges to a certain $x^* \in X^*$. Moreover, for every sufficiently small $\varepsilon > 0$, there exists $\bar{k} = k(\varepsilon)$ such that $x^k \in U_\varepsilon(x^*)$ for all $k \geq \bar{k}$. This, in turn, means that at iteration $\bar{k}$, the hypothesis of Theorem 2 is satisfied, and we obtain a solution to (1). □
Now, we have all the necessary prerequisites to present the solution algorithm for (3).
Algorithm 2.
Initialisation Step: Let $k = 0$, and let $x^0$ be an arbitrary point in $\mathbb{R}^n$.
Main Recursive Step: Let
$x^{k+1} = x^k - \alpha \cdot \nabla\varphi(x^k)$.
Checking Step: If $z(x^k)$ is the solution of (3), then Algorithm 2 is terminated. Otherwise, we set $k := k + 1$, and the Main Recursive Step is repeated.
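A possible driver combining both algorithms is sketched below; it reuses the algorithm1 sketch given earlier and stops once the candidate annihilates the gradient (8). The stopping tolerance and iteration cap are our choices, not the paper's.

```python
import numpy as np

def algorithm2(A, b, x0, max_iter=100000, tol=1e-8):
    """Illustrative driver for Algorithm 2: gradient steps (23) plus projection tries."""
    L = 2.0 * np.linalg.norm(A.T @ A, ord=2)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        z = algorithm1(A, b, x)                              # reuses the sketch above
        if np.linalg.norm(A.T @ np.maximum(A @ z - b, 0.0)) <= tol:
            return z                                         # z satisfies (8): z in X*
        x = x - (A.T @ np.maximum(A @ x - b, 0.0)) / L       # gradient step (23)
    return x
```

For instance, one could call z = algorithm2(A, b, np.zeros(A.shape[1])) and then test the feasibility of z for (3), in the spirit of Corollary 1 below.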
Theorem 4.
There exists a finite $\bar{k}$ such that $z(x^{\bar{k}}) \in X^*$ and $z(x^{\bar{k}})$ is the solution of (3).
Proof. 
The sequence $\{x^k\}$ converges to a fixed $x^* \in X^*$; therefore, at a certain iteration $\bar{k}$, the hypothesis of Theorem 2 is satisfied, and we obtain the solution $z^* = P_{M(x^{\bar{k}})}(x^{\bar{k}}) \in X^*$. □
Theorem 4 allows us to establish whether (3) has a solution or not.
Corollary 1.
If
$z^* \in X$,
then $z^*$ is the solution of (3). Otherwise, (3) has no solutions.
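Checking the condition of Corollary 1 amounts to verifying the inequalities (3) at $z^*$; a short NumPy check, with a round-off tolerance of our choosing, is shown below.

```python
import numpy as np

def solves_inequalities(A, b, z_star, tol=1e-8):
    """Corollary 1: if the minimiser z* of (1) is feasible for (3), it solves (3);
    otherwise (3) has no solution. tol absorbs round-off (not part of the paper)."""
    return bool(np.all(A @ z_star - b <= tol))
```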

4. Conclusions and Appendix

As previously mentioned, the locally polynomial complexity estimate is valid only if the starting point of the proposed method belongs to a sufficiently small neighbourhood of the set of solutions $X^*$. To reach such a point, the gradient method (23) is used. There are accelerated gradient methods (see those of Nesterov [17] and Poliak [18]), but these methods do not guarantee monotonic convergence to the set of solutions $X^*$. The method presented in this paper monotonically converges to a certain point $x^* \in X^*$. It is obvious that the point $x^*$ depends on the initial point $x^0$ and, therefore, the number of iterations required by the gradient method to enter the proper neighbourhood of the point $x^*$ depends on the position of the initial point $x^0$. Moreover, the radius $\varepsilon$ of the neighbourhood of the point $x^*$ that the gradient method should reach is unknown in the general case and depends on the specific problem being considered. However, it appears that we can guarantee a geometric convergence rate for the gradient method (23) while minimising piecewise quadratic functions of the form (5).
Namely, for every strongly convex function $\psi(x)$, the gradient method (23) has a geometric convergence rate, i.e.,
$\psi(x^k) - \psi^* \leq c \cdot \delta^k, \quad \text{where } 0 < \delta < 1, \ c > 0$,
where $c$ is a constant that is independent of the size of the problem but depends on the initial point $x^0$. In the general case, for functions that are not strongly convex, there is no proof of the geometric convergence of the gradient method (23). However, in the case where the function $\varphi(x)$ is given by (5), it is possible to prove the geometric convergence of the gradient method (23). Let
$l(x^k) = \{ x^* + \beta \cdot (x^k - x^*), \ \beta \geq 0 \} \quad \text{and} \quad M(s^k) = \{ x^* + \beta \cdot s^k, \ \beta \geq 0 \}, \quad s^k = \frac{x^k - x^*}{\| x^k - x^* \|}$.
The theorem presented below proves the strong convexity of the function φ ( x ) in the cone of convergence.
Theorem 5.
The elements of the sequence $\{x^k\}$ defined by (23) belong to the cone of strong convexity of the function $\varphi(x)$; namely, for all $x, y \in l(x^k)$, the function $\varphi(x)$ is uniformly strongly convex for the sequence $\{x^k\}$, i.e.,
$\varphi(\lambda \cdot x + (1 - \lambda) \cdot y) \leq \lambda \cdot \varphi(x) + (1 - \lambda) \cdot \varphi(y) - \gamma \cdot \lambda \cdot (1 - \lambda) \cdot \| x - y \|^2$,  (24)
where $\lambda \in [0, 1]$, $x, y \in l(x^k)$, $k = 0, 1, \ldots$, and $\gamma > 0$.
Proof. 
First, it should be pointed out that, because the second derivative of the function $\varphi(x)$ has a finite number of points of discontinuity in every direction $\bar{S} \in \mathbb{R}^n$, i.e., on the ray $x^* + \lambda \cdot \bar{S}$, $\lambda \geq 0$, there exists $\sigma > 0$ such that, on the closed interval $[x^*, x^* + \sigma \cdot \bar{S}]$, the function $\varphi(x)$ has a continuous second derivative, which obviously depends on $\bar{S}$. Let us assume that the theorem does not hold, i.e., there is no $\gamma > 0$ such that (24) holds. This means that for
$l(x^k) = \{ x^* + \beta \cdot s^k, \ \beta \geq 0 \}$,
the following must hold
$\frac{\partial^2 \varphi(x^*)}{\partial (s^k)^2} = \gamma_k \to 0 \quad \text{as } k \to \infty$,
or
$\frac{\partial^2 \varphi(x^*)}{\partial (s^k)^2} = \langle A^T \cdot A \cdot s^k, s^k \rangle = \gamma_k \to 0 \quad \text{as } k \to \infty$.
For the vector $s = \lim_{k \to \infty} s^k$, the condition $\langle A^T \cdot A \cdot s, s \rangle = 0$ holds, or, due to the construction of $\varphi(x)$,
$\varphi(x^* + \beta \cdot s) = \varphi(x^*) = \min_{x \in \mathbb{R}^n} \| (A \cdot x - b)_+ \|^2$,
where $\beta \in [0, \bar{\beta}]$, and $\bar{\beta} > 0$ is a certain fixed constant. Let $x^k_*$ be (locally) the projection of $x^k$ onto the set $M(s) \cap X^*$. Then, due to $s^k \to s$ as $k \to \infty$, we have
$\| x^k - x^k_* \| = \delta_k \cdot \| x^k - x^* \|, \quad \text{where } \delta_k \to 0 \text{ as } k \to \infty$.  (26)
Let $k$ be such that $\delta_k$ is sufficiently small, and consider the points $x^{k+r}$, $r = 1, 2, \ldots$. Then, according to Theorem 3, we have
$\| x^{k+r} - x^k_* \| < \| x^k - x^k_* \|$.  (27)
On the other hand, according to (26), when $r \to \infty$,
$\| x^{k+r} - x^k_* \| \geq \| x^k_* - x^* \| - \| x^{k+r} - x^* \| \geq \| x^k - x^* \| - \| x^k - x^k_* \| - \| x^{k+r} - x^* \| = \frac{1}{\delta_k} \cdot \| x^k - x^k_* \| - \| x^k - x^k_* \| - \| x^{k+r} - x^* \| > \| x^k - x^k_* \|$.
This contradicts (27), and therefore Theorem 5 holds. □
Theorem 5 allows for the estimation of the convergence rate of the gradient method (23).
Theorem 6.
Under the assumptions of Theorem 5, for the sequence $\{x^k\}$ constructed according to (23), the following convergence rates hold:
$\varphi(x^k) - \varphi^* \leq c_1 \cdot \tau^k \quad \text{and} \quad \| x^k - x^* \| \leq c_2 \cdot \tau^{k/2}$,  (28)
where $\tau \in (0, 1)$ and $c_1, c_2 > 0$; the constants $c_1, c_2$ are independent of the value of $k$ but depend on the initial point $x^0$.
Proof. 
Let us denote
$\mu_k = \varphi(x^k) - \varphi^*$.
For the sequence $\{x^k\}$ and $q \in \left( \frac{1}{2}, 1 \right)$, the following holds:
$\varphi(x^k) - \varphi(x^{k+1}) \geq \alpha \cdot q \cdot \| \nabla\varphi(x^k) \|^2 \geq \alpha \cdot q \cdot \langle \nabla\varphi(x^k), s^k \rangle^2 = \alpha \cdot q \cdot 2 \cdot \frac{\partial^2 \varphi(x^k)}{\partial (s^k)^2} \cdot \left( \varphi(x^k) - \varphi^* \right) \geq \alpha \cdot q \cdot \frac{\gamma}{2} \cdot \left( \varphi(x^k) - \varphi^* \right)$,
or, equivalently,
$\mu_k - \mu_{k+1} \geq \alpha \cdot q \cdot \frac{\gamma}{2} \cdot \mu_k$.
Therefore, for some $\tau \in (0, 1)$, the following holds:
$\mu_k \leq c_1 \cdot \tau^k$ or, equivalently, $\varphi(x^k) - \varphi^* \leq c_1 \cdot \tau^k$.
This proves the first part of (28), while the latter part of (28) follows from the strong convexity of the function φ ( x ) in the cone of convergence. □
Conducting computational experiments and comparing the presented method with other methods from the literature remain topics for future research.

Author Contributions

Conceptualisation, A.T. and K.S.; methodology, A.P., K.S. and A.T.; validation, A.P., K.S. and A.T.; formal analysis, A.P.; investigation, A.P., K.S. and A.T.; resources, A.T.; writing—original draft preparation, A.P. and K.S.; supervision, A.T.; project administration, A.P.; funding acquisition, A.P. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education, grant number 61/20/B.

Acknowledgments

The research of the third author was supported by the Russian Science Foundation (project No. 21-71-30005).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karmanov, V.G. Mathematical Programming; Mir Publishers: Moscow, Russia, 1989. [Google Scholar]
  2. Golikov, A.; Evtushenko, Y.G. Theorems of the alternative and their applications in numerical methods. Comput. Math. Math. Phys. 2003, 43, 338–358. [Google Scholar]
  3. Evtushenko, Y.G.; Golikov, A. New perspective on the theorems of alternative. In High Performance Algorithms and Software for Nonlinear Optimization; Springer: Berlin/Heidelberg, Germany, 2003; pp. 227–241. [Google Scholar]
  4. Tretyakov, A. A finite-termination gradient projection method for solving systems of linear inequalities. Russ. J. Numer. Anal. Model. 2010, 25, 279–288. [Google Scholar] [CrossRef]
  5. Tretyakov, A.; Tyrtyshnikov, E. Exact differentiable penalty for a problem of quadratic programming with the use of a gradient-projective method. Russ. J. Numer. Anal. Model. 2015, 30, 121–128. [Google Scholar] [CrossRef]
  6. Han, S.-P. Least-Squares Solution of Linear Inequalities; Technical Report; Wisconsin Univ-Madison Mathematics Research Center: Madison, WI, USA, 1980. [Google Scholar]
  7. Tretyakov, A.; Tyrtyshnikov, E. A finite gradient-projective solver for a quadratic programming problem. Russ. J. Numer. Anal. Model. 2013, 28, 289–300. [Google Scholar] [CrossRef]
  8. Mangasarian, O. A Finite Newton Method for Classification Problems; Technical Report, Technical Report 01-11; Data Mining Institute, Computer Sciences Department, University of Wisconsin: Madison, WI, USA, 2001. [Google Scholar]
  9. Facchinei, F.; Fischer, A.; Kanzow, C. On the accurate identification of active constraints. SIAM J. Optim. 1998, 9, 14–32. [Google Scholar] [CrossRef]
  10. Wright, S.J. An algorithm for degenerate nonlinear programming with rapid local convergence. SIAM J. Optim. 2005, 15, 673–696. [Google Scholar] [CrossRef] [Green Version]
  11. Pan, P.Q. A Projective Simplex Algorithm Using LU Decomposition. Comput. Math. Appl. 2000, 39, 187–208. [Google Scholar] [CrossRef] [Green Version]
  12. Wright, M.H. The interior-point revolution in optimization: History, recent developments, and lasting consequences. Bull. Am. Math. Soc. 2005, 42, 39–56. [Google Scholar] [CrossRef] [Green Version]
  13. Roos, K. An improved version of Chubanov’s method for solving a homogeneous feasibility problem. Optim. Methods. Softw. 2018, 33, 26–44. [Google Scholar] [CrossRef] [Green Version]
  14. Khachiyan, L. Fourier-Motzkin elimination method. In Encyclopedia of Optimization; Floudas, C.A., Pardalos, P.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1074–1077. [Google Scholar]
  15. Šimeček, I.; Fritsch, R.; Langr, D.; Lórencz, R. Parallel solver of large systems of linear inequalities using Fourier-Motzkin elimination. Comput. Inform. 2016, 35, 1307–1337. [Google Scholar]
  16. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  17. Nesterov, Y. One class of methods of unconditional minimization of a convex function, having a high rate of convergence. USSR Comput. Math. Math. Phys. 1984, 24, 80–82. [Google Scholar]
  18. Poliak, B. Introduction to Optimization; Optimization Software, Inc.: New York, NY, USA, 1987. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
