Article

An Inexact Feasible Quantum Interior Point Method for Linearly Constrained Quadratic Optimization

Zeguan Wu, Mohammadhossein Mohammadisiahroudi, Brandon Augustino, Xiu Yang and Tamás Terlaky *
Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA 18015, USA
* Author to whom correspondence should be addressed.
Entropy 2023, 25(2), 330; https://doi.org/10.3390/e25020330
Submission received: 31 December 2022 / Revised: 7 February 2023 / Accepted: 8 February 2023 / Published: 10 February 2023
(This article belongs to the Special Issue Quantum Machine Learning 2022)

Abstract

Quantum linear system algorithms (QLSAs) have the potential to speed up algorithms that rely on solving linear systems. Interior point methods (IPMs) yield a fundamental family of polynomial-time algorithms for solving optimization problems. IPMs solve a Newton linear system at each iteration to compute the search direction; thus, QLSAs can potentially speed up IPMs. Due to the noise in contemporary quantum computers, quantum-assisted IPMs (QIPMs) only admit an inexact solution to the Newton linear system. Typically, an inexact search direction leads to an infeasible solution; to overcome this, we propose an inexact-feasible QIPM (IF-QIPM) for solving linearly constrained quadratic optimization problems. We also apply the algorithm to $\ell_1$-norm soft margin support vector machine (SVM) problems, and demonstrate that our algorithm enjoys a speedup in the dimension over existing approaches. This complexity bound is better than that of any existing classical or quantum algorithm that produces a classical solution.
MSC:
90C20; 90C51; 81P68

1. Introduction

Linearly constrained quadratic optimization (LCQO) is defined as optimizing a convex quadratic objective function over a set of linear constraints. Linear optimization is a special case of LCQO that corresponds to the case where the objective function is linear. LCQO has rich theory, algorithms, and applications. Many problems in machine learning can be formulated as LCQO problems, including variants of least-squares problems and variants of support vector machine training [1,2]. Some important optimization algorithms also have LCQO subproblems, e.g., sequential quadratic programming [1].
The modern age of IPMs was launched by Karmarkar’s projective method for linear optimization (LO). Since then, many variants of IPMs have also been applied to nonlinear optimization problems, including LCQO problems [3,4]. Contemporary IPMs progress towards the set of optimal solutions by moving within a neighborhood of an analytic curve known as the central path. IPMs can be categorized according to whether or not the sequence of iterates produced by the algorithm satisfies feasibility. Feasible IPMs are initialized with a strictly feasible solution and maintain feasibility in each iteration, whereas infeasible IPMs start from an infeasible interior solution and do not require feasibility to be exactly satisfied at any point of the algorithm. For LCQO problems with $n$ variables, feasible IPMs can produce an $\epsilon$-approximate solution using $O(\sqrt{n} \log(1/\epsilon))$ iterations, whereas infeasible IPMs require $O(n^2 \log(1/\epsilon))$ IPM iterations to converge to an $\epsilon$-approximate solution [5,6].
At each IPM iteration, a linear system needs to be solved to obtain the search direction, called the Newton direction. This so-called Newton linear system is traditionally in the form of the augmented system or the normal equation system. Classically, these linear systems can be solved exactly using Bunch–Parlett factorization if the matrices in the systems are symmetric indefinite [7], or Cholesky factorization if the matrices are symmetric positive definite. Solving the Newton linear systems using direct factorization approaches requires $O(n^3)$ arithmetic operations, which suggests that feasible IPMs based on factorization methods cannot exhibit complexity better than $O(n^{3.5} \log(1/\epsilon))$, whereas, with the partial update, they achieve $O(n^3 \log(1/\epsilon))$ arithmetic operation complexity. The linear systems can also be solved inexactly using iterative methods, e.g., Krylov subspace methods, which may require fewer arithmetic operations when the desired accuracy of the solutions to the linear systems is modest. However, solving the Newton linear systems inaccurately (i.e., with inaccurate search directions) may result in the infeasibility of the sequence of solutions generated by the IPM; therefore, such methods have only been used in infeasible IPMs.
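To make this trade-off concrete, the following is a minimal sketch (ours, not from the paper) contrasting an exact Cholesky solve of a normal-equations-style symmetric positive definite system with an inexact conjugate-gradient (Krylov) solve that is stopped early; the random data and the iteration cap are purely illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
d = rng.uniform(0.5, 2.0, n)          # a hypothetical positive scaling at an iterate
N = A @ np.diag(d) @ A.T              # SPD normal-equations-style matrix
rhs = rng.standard_normal(m)

# Direct approach: O(m^3) Cholesky factorization, exact up to round-off.
dy_direct = cho_solve(cho_factor(N), rhs)

# Inexact approach: conjugate gradient stopped early; each iteration costs only
# a matrix-vector product, and capping the iterations keeps the solve cheap.
dy_cg, info = cg(N, rhs, maxiter=15)
print("CG residual norm:", np.linalg.norm(N @ dy_cg - rhs))
```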
The advent of quantum technology has led to the development of many quantum-assisted algorithms for optimization and machine learning applications, such as linear regression [8] and the support vector machine training problem [9]. Following the seminal work on quantum algorithms for solving linear systems of equations [10], researchers have been studying whether QLSAs could yield quantum speedups in classical optimization algorithms. In particular, quantum IPMs (QIPMs) that utilize QLSAs to solve the Newton linear system arising in each iteration have been proposed for LO problems [11,12] and semidefinite optimization (SDO) problems [13]. To maintain the feasibility of the iterates using quantum subroutines, the authors of [13,14] introduce the so-called orthogonal subspace system (OSS) for SDO and LO problems and, in particular, demonstrate that a feasible solution to the original Newton system can be recovered from an inexact solution to the OSS. However, linearly constrained quadratic optimization problems, which are fundamental to both optimization and machine learning, have yet to be formally studied in the quantum literature.
In this work, we generalize the OSS for LO problems in [14] to LCQO problems and provide an efficient method for constructing the OSS using a quantum computer. Using the OSS, we obtain an inexact feasible IPM, solving for the search directions inexactly while maintaining the feasibility of the iterates throughout the algorithm. The feasibility of the iterates yields a better IPM iteration complexity, and the bottleneck becomes solving the OSS linear system. In particular, we show that a quantum implementation of our algorithm with access to quantum RAM (QRAM) obtains an $\epsilon$-approximate solution to a given LCQO problem with worst-case complexity
$$\tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( \sqrt{n}\left( n\left( \frac{\bar{\omega}^2}{\epsilon} + \sigma_{\max}(Q) \right) \kappa_{VAQ} + n^2 \right) \right),$$
where $\bar{\omega} = \max_k \omega^k$, $\sigma_{\max}(Q)$ is the maximum singular value of the Hessian of the objective function, and $\kappa_{VAQ}$ is the condition number of a matrix determined by the initial data; see Lemma 3. We also consider the application to $\ell_1$-norm soft margin SVM problems, in which case an $\epsilon$-approximate solution is obtained with complexity
$$\tilde{O}_{m, n, \bar{\omega}, \frac{1}{\epsilon}}\left( (m + n)^{1.5} \left( \frac{\bar{\omega}^2}{\epsilon} + \sigma_{\max}(Q) \right) \kappa_{VAQ} + (m + n)^{2.5} \right).$$
Here, $m$ is the number of features and $n$ is the number of data points; $\bar{\omega}$, $Q$, and $\kappa_{VAQ}$ are defined analogously for the LCQO formulation of the SVM problem; see Section 4. The dependence on the dimension is better than that of any existing quantum or classical algorithm.
The rest of this paper is organized as follows: in Section 2, we introduce IPMs for LCQO and the OSS system; in Section 3, we discuss how to use quantum algorithms to find the Newton directions and analyze the complexity of our IF-QIPM; in Section 4, we apply our IF-QIPM to the support vector machine problem. A discussion is provided in Section 5, and some technical proofs are deferred to Appendix A and Appendix B.

2. Preliminaries

In this section, we introduce our notation before reviewing the theory of IPMs applied to LCQO, and we derive the OSS system for this class of problems.

2.1. Notation

Vectors are typically represented by lower-case letters. We write $0_n$ when referring to the $n$-dimensional all-zeros vector, and the $n$-dimensional all-ones vector is denoted by $e_n$. When the dimension is obvious from the context, we may write $0$ or $e$, respectively. Matrices are typically represented by upper-case letters. The identity matrix of dimension $n$ is denoted by $I_{n \times n}$, and $0_{n \times m}$ represents the $n \times m$-dimensional all-zero matrix, again dropping these subscripts when the dimension is obvious from the context. For a general $n \times m$-dimensional matrix $H$, we write $H_{i\cdot}$ to refer to its $i$th row and, similarly, denote the $j$th column by $H_{\cdot j}$. For the $(i, j)$th element of $H$, we write $H_{ij}$ or $H_{i,j}$.
For real-valued functions $f_1$, $f_2$, and $f_3$, we write
$$f_1 = O(f_2)$$
if there exists a positive number $k_1$ such that $f_1 \le k_1 f_2$. We write
$$f_1 = \tilde{O}_{f_3}(f_2)$$
if there exists a positive number $k_2$ such that $f_1 \le k_2\, f_2 \cdot \mathrm{poly}\log(f_3)$.

2.2. IPMs for LCQO

In this work, LCQO is defined as follows.
Definition 1 (LCQO Problem). 
For vectors $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$, and matrices $A \in \mathbb{R}^{m \times n}$ and $Q \in \mathbb{R}^{n \times n}$ with $\mathrm{rank}(A) = m \le n$ and $Q$ symmetric positive semidefinite, we define the primal and dual LCQO problems as
$$(P)\quad \min\ c^T x + \tfrac{1}{2} x^T Q x \quad \text{s.t.}\ Ax = b,\ x \ge 0, \qquad (D)\quad \max\ b^T y - \tfrac{1}{2} x^T Q x \quad \text{s.t.}\ A^T y + s - Qx = c,\ s \ge 0, \tag{1}$$
where $x \in \mathbb{R}^n$ is the vector of primal variables and $y \in \mathbb{R}^m$, $s \in \mathbb{R}^n$ are the vectors of dual variables. Problem $(P)$ is called the primal problem and $(D)$ is called the dual problem.
Since $A$ has full row rank, $A$ does not contain any all-zero rows, and we further make the following assumption on matrix $A$.
Assumption 1. 
Matrix A has no all-zero columns.
Remark 1. 
Suppose that $A$ has all-zero columns. Without loss of generality, assume that the $n$th column is all-zero. Introducing a new variable $x_{n+1}$, we can rewrite the problem as
$$\min\ \begin{pmatrix} c \\ 0 \end{pmatrix}^T \begin{pmatrix} x \\ x_{n+1} \end{pmatrix} + \frac{1}{2} \begin{pmatrix} x \\ x_{n+1} \end{pmatrix}^T \begin{pmatrix} Q & 0_{n \times 1} \\ 0_{1 \times n} & 0 \end{pmatrix} \begin{pmatrix} x \\ x_{n+1} \end{pmatrix} \quad \text{s.t.} \quad \begin{pmatrix} A_{\cdot 1} & \cdots & A_{\cdot (n-1)} & 0_{m \times 1} & 0_{m \times 1} \\ 0 & \cdots & 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} x \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}, \quad x \ge 0,\ x_{n+1} \ge 0.$$
The new LCQO problem is equivalent to the original one and contains one fewer all-zero column. Iterating this procedure to eliminate each of the all-zero columns, we obtain a new LCQO problem satisfying Assumption 1 with no more than $2n - m$ variables and $n$ constraints in the worst case.
Assumption 2. 
There exists a solution $(x, y, s) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n$ such that
$$Ax = b, \quad x > 0, \quad A^T y + s - Qx = c, \quad \text{and} \quad s > 0.$$
The set of primal–dual feasible solutions is defined as
$$\mathcal{PD} := \left\{ (x, y, s) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n : Ax = b,\ A^T y + s - Qx = c,\ (x, s) \ge 0 \right\}$$
and, similarly, the set of interior feasible primal–dual solutions is given by
$$\mathcal{PD}^0 := \left\{ (x, y, s) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^n : Ax = b,\ A^T y + s - Qx = c,\ (x, s) > 0 \right\}.$$
By strong duality, the set of optimal solutions can be characterized as
$$\mathcal{PD}^* := \left\{ (x, y, s) \in \mathcal{PD} : x \circ s = 0 \right\},$$
where $x \circ s$ denotes the Hadamard, i.e., component-wise, product of $x$ and $s$. Let $\epsilon > 0$; then, the set of $\epsilon$-approximate solutions to Problem (1) can be defined as
$$\mathcal{PD}_\epsilon := \left\{ (x, y, s) \in \mathcal{PD} : x^T s \le n \epsilon \right\}. \tag{2}$$
Let $X$ and $S$ be the diagonal matrices of $x$ and $s$, respectively. Under Assumption 2, for all $\mu > 0$, the perturbed system of optimality conditions
$$Ax = b, \quad A^T y + s - Qx = c, \quad XSe = \mu e, \quad (x, s) \ge 0 \tag{3}$$
has a unique solution $(x(\mu), y(\mu), s(\mu))$, and this set of solutions gives rise to the primal–dual central path
$$\mathcal{CP} := \left\{ (x, y, s) \in \mathcal{PD}^0 \mid x_i s_i = \mu \ \text{for}\ i \in \{1, \dots, n\} \right\} \quad \text{for}\ \mu > 0.$$
IPMs apply Newton’s method to solve system (3). At each iteration of an infeasible IPM, a candidate solution to the primal–dual LCQO pair in (1) is updated by solving the following linear system to find the Newton direction:
$$\begin{pmatrix} A & 0 & 0 \\ -Q & A^T & I \\ S & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix} = \begin{pmatrix} r_p \\ r_d \\ r_c \end{pmatrix}, \tag{4}$$
where
$$r_p = b - Ax, \qquad r_d = c - A^T y - s + Qx, \qquad r_c = \sigma \mu e - XSe$$
are the residuals and $\sigma \in (0, 1)$ is the barrier reduction parameter. If $r_p = 0$ and $r_d = 0$, then the solution $(x, y, s)$ exactly satisfies primal–dual feasibility. We can also define the residuals in different ways, as we show later. Once the Newton direction is found, one can move along this direction but has to stay in a neighborhood of the central path, which is defined as
$$\mathcal{N}_2(\theta) := \left\{ (x, y, s) \in \mathcal{PD}^0 \mid \| XSe - \mu e \|_2 \le \theta \mu \right\}, \tag{5}$$
where $\theta \in (0, 1)$.
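As a small illustration (ours, not from the paper), the following sketch checks membership in $\mathcal{N}_2(\theta)$ for an iterate that is assumed to already satisfy the linear feasibility equations:

```python
import numpy as np

def in_neighborhood(x, s, theta):
    """Test (5): strict positivity and ||XSe - mu*e||_2 <= theta*mu.

    Assumes Ax = b and A^T y + s - Qx = c already hold for the iterate.
    """
    if not ((x > 0).all() and (s > 0).all()):
        return False
    mu = x @ s / x.size                     # duality measure mu = x^T s / n
    return np.linalg.norm(x * s - mu) <= theta * mu
```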
Until relatively recently, inexact solution approaches for the Newton linear system (4) had only been utilized in inexact infeasible IPMs (II-IPMs). For LCQO problems, ref. [6] proposes an II-IPM using an iterative method to solve the Newton systems and obtains a worst-case iteration complexity of $O(n^2 \log(\frac{1}{\epsilon}))$. On the other hand, feasible IPMs for LCQO problems enjoy $O(\sqrt{n} \log(\frac{1}{\epsilon}))$ iteration complexity [15,16,17]. In [5], the author provides a general inexact feasible IPM for LCQO problems but does not discuss how the sequence of iterates can be guaranteed to maintain primal–dual feasibility exactly when inexact linear system solvers are used. This is a vital consideration, as the feasible neighborhood of the central path outlined in (5) is a subset of the primal–dual feasible set; if primal and dual feasibility are not satisfied exactly at every point of the algorithm, the iterates leave this neighborhood and the method fails. Our work fills this gap by using a method inspired by the QIPMs of [13,14].

2.3. Orthogonal Subspaces System

Assume that $(x, y, s) \in \mathcal{PD}^0$. To maintain the feasibility of the primal and dual variables, the first two linear equations in system (4) need to be solved with $r_p = 0$ and $r_d = 0$ exactly, which can be guaranteed if $\Delta x$ lies in the null space of $A$, denoted $\mathrm{Null}(A)$, and $\Delta s = Q \Delta x - A^T \Delta y$. Accordingly, we can rewrite system (4) by representing $\Delta x$ as a linear combination of basis elements of $\mathrm{Null}(A)$. To achieve this, we partition $A$ as $A = \begin{pmatrix} A_B & A_N \end{pmatrix}$, where $A_B$ is a basis of $A$. Then, we construct the following matrix:
$$V = \begin{pmatrix} -A_B^{-1} A_N \\ I \end{pmatrix}.$$
Matrix $V$ has full column rank and satisfies $AV = 0$, i.e., the columns of $V$ span the null space of $A$. Let $\Delta x = V \lambda$, where $\lambda \in \mathbb{R}^{n - m}$ is the unknown coefficient vector used to determine $\Delta x$. Subsequently, we can rewrite system (4) by substituting $\Delta x$ and $\Delta s$ in the third equation as
$$SV\lambda + X\left( QV\lambda - A^T \Delta y \right) = r_c \quad \Longleftrightarrow \quad \begin{pmatrix} SV + XQV & -XA^T \end{pmatrix} \cdot \begin{pmatrix} \lambda \\ \Delta y \end{pmatrix} = r_c. \tag{6}$$
A similar system was proposed under the name “Orthogonal Subspaces System” (OSS) in [13,14], and we use the same name in this work. The matrix in the OSS system (6) is of size $n \times n$, and it is nonsingular. Even if the OSS system is solved inexactly, primal and dual feasibility are preserved by computing $\Delta x = V\lambda$ and $\Delta s = QV\lambda - A^T \Delta y$. Thus, we can conclude that any inexactness will only impact the third equation of (4), i.e., $r_p = 0$ and $r_d = 0$ still hold. This property of the OSS system is very convenient when analyzing the proposed inexact IPM, and it allows us to obtain the best known iteration complexity for IPMs.
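To make the feasibility-preserving mechanism concrete, here is a minimal NumPy sketch (ours, not from the paper; it assumes dense random data and that the first $m$ columns of $A$ form the basis $A_B$). Even after the OSS solution is perturbed to emulate inexactness, the recovered step keeps the primal and dual feasibility equations exact; only the complementarity residual is affected.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 7
A = np.hstack([np.eye(m), rng.standard_normal((m, n - m))])  # A_B = I for simplicity
G = rng.standard_normal((n, n))
Q = G.T @ G                                                  # a PSD Hessian
x = rng.uniform(0.5, 2.0, n)
s = rng.uniform(0.5, 2.0, n)
X, S = np.diag(x), np.diag(s)

V = np.vstack([-np.linalg.solve(A[:, :m], A[:, m:]), np.eye(n - m)])
assert np.allclose(A @ V, 0)                 # columns of V span Null(A)

mu, sigma = x @ s / n, 0.5
r_c = sigma * mu * np.ones(n) - x * s
M = np.hstack([S @ V + X @ Q @ V, -X @ A.T]) # the n-by-n OSS matrix of (6)

z = np.linalg.solve(M, r_c)
z += 1e-6 * rng.standard_normal(n)           # emulate an inexact OSS solve
lam, dy = z[:n - m], z[n - m:]
dx = V @ lam
ds = Q @ dx - A.T @ dy

# Feasibility of the step is exact despite the inexact solve; only the
# complementarity equation (the third block of (4)) carries the error.
assert np.allclose(A @ dx, 0)
assert np.allclose(A.T @ dy + ds - Q @ dx, 0)
print("complementarity residual:", np.linalg.norm(S @ dx + X @ ds - r_c))
```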

3. Inexact Feasible IPM with QLSAs

In this section, we propose our IF-QIPM for LCQO problems. We begin with the IF-IPM structure introduced by [5] and describe how to quantize it into an IF-QIPM. Then, we analyze the construction of the OSS system and conclude by analyzing the overall complexity of our IF-QIPM.

3.1. IF-IPM for LCQO

In [5], the author studies a general, conceptual-form IF-IPM for LCQO problems by assuming the feasibility of the primal and dual iterates, which induces the following system:
$$\begin{pmatrix} A & 0 & 0 \\ -Q & A^T & I \\ S & 0 & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ r_c \end{pmatrix}, \tag{7}$$
where $r_c = \sigma \mu e - XSe$, with $\sigma \in (0, 1)$ being the reduction factor of the central path parameter $\mu$, i.e., $\mu_{\mathrm{new}} = \sigma \mu$. When system (7) with $r_c = \sigma \mu e - XSe$ is solved inexactly, yielding an error $r$, if $\|r\|_2 \le \delta \|r_c\|_2$ for some $\delta \in (0, 1)$, the inexact IPM converges to an $\epsilon$-approximate solution to Problem (1) in at most $O(\sqrt{n} \log(1/\epsilon))$ iterations. As mentioned earlier, it is not specified in [5] how to preserve primal and dual feasibility when system (7) is solved inexactly. Thus, it is presently not clear whether one could recover the convergence conditions described in [5] using inexact approaches, which rely on the assumption of primal–dual feasibility (see, e.g., system (7)).
Now, we present a general procedure for solving system (7) inexactly such that the inexactness error occurs only in the third equation of system (7). Let $(\lambda, \Delta y)$ be an inexact solution of system (6) and $r$ be the error at this solution, i.e.,
$$\begin{pmatrix} SV + XQV & -XA^T \end{pmatrix} \cdot \begin{pmatrix} \lambda \\ \Delta y \end{pmatrix} = r_c + r.$$
The corresponding Newton step
$$\Delta x = V\lambda, \qquad \Delta s = Q\Delta x - A^T \Delta y$$
satisfies
$$\begin{pmatrix} A & 0 & 0 \\ -Q & A^T & I \\ S & 0 & X \end{pmatrix} \cdot \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta s \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ r_c + r \end{pmatrix}.$$
Recall that once $(\lambda, \Delta y)$ is determined, $(\Delta x, \Delta s)$ is also (uniquely) determined. An interesting property is that, since $(\lambda, \Delta y)$ and $(\Delta x, \Delta y, \Delta s)$ can be deduced from each other, the OSS system and system (7) yield the same error term $r$. Hence, the convergence conditions built upon system (7) can be directly examined using the residual $r_c$ and error $r$ of the OSS system. Let $\epsilon_{OSS}$ be the target accuracy of the OSS system (6), i.e.,
$$\left\| \begin{pmatrix} \lambda \\ \Delta y \end{pmatrix} - \begin{pmatrix} \lambda^* \\ \Delta y^* \end{pmatrix} \right\|_2 \le \epsilon_{OSS},$$
where $(\lambda^*, \Delta y^*)$ is the exact solution. According to [5], in order to guarantee that the IF-IPM converges, we must have
$$\|r\|_2 = \left\| \begin{pmatrix} SV + XQV & -XA^T \end{pmatrix} \cdot \begin{pmatrix} \lambda \\ \Delta y \end{pmatrix} - r_c \right\|_2 \le \left\| \begin{pmatrix} SV + XQV & -XA^T \end{pmatrix} \right\|_2 \, \epsilon_{OSS} \le \delta \|r_c\|_2,$$
where $\delta \in (0, 1)$ is a constant parameter. Therefore, to ensure the convergence of the IF-IPM, it suffices to set
$$\epsilon_{OSS} \le \frac{\delta \|r_c\|_2}{\left\| \begin{pmatrix} SV + XQV & -XA^T \end{pmatrix} \right\|_2}.$$
The IF-IPM is presented in full detail in Algorithm 1; a small classical sketch of it follows the listing. In each iteration, we build and solve system (6) classically. We solve system (6) to the accuracy just introduced above, then compute the feasible Newton step from the inexact solution and take a full Newton step.
Algorithm 1: Short-step IF-IPM
1: Choose $\epsilon > 0$, $\delta \in (0, 1)$, $\theta \in (0, 1)$, $\beta \in (0, 1)$, and $\sigma = 1 - \frac{\beta}{\sqrt{n}}$.
2: $k \leftarrow 0$
3: Choose an initial feasible interior solution $(x^0, y^0, s^0) \in \mathcal{N}(\theta)$
4: while $(x^k, y^k, s^k) \notin \mathcal{PD}_\epsilon$ do
5:     $\mu^k \leftarrow \frac{(x^k)^T s^k}{n}$
6:     $\epsilon^k_{OSS} \leftarrow \delta \|r^k_c\|_2 \,/\, \left\| \begin{pmatrix} S^k V + X^k Q V & -X^k A^T \end{pmatrix} \right\|_2$
7:     $(\lambda^k, \Delta y^k) \leftarrow$ solve system (6) with accuracy $\epsilon^k_{OSS}$
8:     $\Delta x^k = V \lambda^k$ and $\Delta s^k = Q \Delta x^k - A^T \Delta y^k$
9:     $(x^{k+1}, y^{k+1}, s^{k+1}) \leftarrow (x^k, y^k, s^k) + (\Delta x^k, \Delta y^k, \Delta s^k)$
10:    $k \leftarrow k + 1$
11: end while
12: return $(x^k, y^k, s^k)$
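For illustration, a compact classical rendering of Algorithm 1 (ours, not from the paper) might look as follows; it assumes dense data, a strictly feasible and well-centered starting point, and that the first $m$ columns of $A$ form a basis, and it uses an exact NumPy solve where the algorithm permits an inexact one.

```python
import numpy as np

def if_ipm(A, Q, x, y, s, eps=1e-6, beta=0.2):
    """Short-step IF-IPM sketch: (x, y, s) must be strictly feasible and
    well-centered; the OSS system is solved exactly here for simplicity."""
    m, n = A.shape
    V = np.vstack([-np.linalg.solve(A[:, :m], A[:, m:]), np.eye(n - m)])
    sigma = 1.0 - beta / np.sqrt(n)          # central-path reduction factor
    while x @ s > n * eps:                   # stop once inside PD_eps
        mu = x @ s / n
        r_c = sigma * mu * np.ones(n) - x * s
        M = np.hstack([np.diag(s) @ V + np.diag(x) @ Q @ V,
                       -np.diag(x) @ A.T])
        z = np.linalg.solve(M, r_c)          # stand-in for the inexact solve
        lam, dy = z[:n - m], z[n - m:]
        dx = V @ lam
        ds = Q @ dx - A.T @ dy
        x, y, s = x + dx, y + dy, s + ds     # full Newton step
    return x, y, s
```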
In the quantum-assisted IF-IPM, or IF-QIPM, we propose accelerating Step 7 using quantum subroutines. In the next sections, we investigate how to use quantum algorithms to build and solve the OSS system and obtain the Newton direction.

3.2. IF-QIPM for LCQO

The pseudocode of our IF-QIPM is presented in Algorithm 2. At each iteration of the IF-QIPM, we construct and solve system (6) and compute the Newton direction using quantum algorithms. To obtain an $\epsilon_{OSS}$-approximate solution of system (6), we first block-encode system (8); see Appendix A. Then, we use quantum algorithms to compute an $\epsilon_{QLSA}$-approximate solution of system (8). This solution is normalized, but we can rescale it to obtain an $\epsilon_{OSS}$-approximate solution of system (6). The details are discussed later in this section.
Algorithm 2: Short-step IF-QIPM
1: Choose $\epsilon > 0$, $\delta \in (0, 1)$, $\theta \in (0, \theta_0)$, $\beta \in (0, 1)$, and $\sigma = 1 - \frac{\beta}{\sqrt{n}}$.
2: $k \leftarrow 0$
3: Choose an initial feasible interior solution $(x^0, y^0, s^0) \in \mathcal{N}(\theta)$
4: while $(x^k, y^k, s^k) \notin \mathcal{PD}_\epsilon$ do
5:     $\mu^k \leftarrow \frac{(x^k)^T s^k}{n}$
6:     $\epsilon^k_{OSS} \leftarrow \delta \|r^k_c\|_2 \,/\, \left( \sqrt{2}\, \left\| \begin{pmatrix} S^k V + X^k Q V & -X^k A^T \end{pmatrix} \right\|_F \right)$
7:     $(\lambda^k, \Delta y^k) \leftarrow$ solve system (6) with accuracy $\epsilon^k_{OSS}$ using the QLSA and QTA
8:     $\Delta x^k = V \lambda^k$ and $\Delta s^k = Q \Delta x^k - A^T \Delta y^k$
9:     $(x^{k+1}, y^{k+1}, s^{k+1}) \leftarrow (x^k, y^k, s^k) + (\Delta x^k, \Delta y^k, \Delta s^k)$
10:    $k \leftarrow k + 1$
11: end while
12: return $(x^k, y^k, s^k)$
Here, $\theta_0 < 1$, and its value will be specified later. First, we introduce some notation to simplify the OSS system. In the $k$th iteration of Algorithm 2, let
$$M^k = \begin{pmatrix} S^k V + X^k Q V & -X^k A^T \end{pmatrix}, \qquad z^k = \begin{pmatrix} \lambda^k \\ \Delta y^k \end{pmatrix}.$$
Then, the OSS system can be rewritten as
$$M^k z^k = r^k_c.$$
As discussed in [14], to solve the OSS system (6) using quantum algorithms, we can first rewrite it as the normalized Hermitian OSS system
$$\frac{1}{\sqrt{2}\, \|M^k\|_F} \begin{pmatrix} 0 & M^k \\ (M^k)^T & 0 \end{pmatrix} \cdot \begin{pmatrix} 0 \\ z^k \end{pmatrix} = \frac{1}{\sqrt{2}\, \|M^k\|_F} \begin{pmatrix} r^k_c \\ 0 \end{pmatrix}. \tag{8}$$
To use the QLSAs mentioned earlier, we need to turn the linear system (8) into a quantum linear system using the block encoding introduced in [18]. To this end, we first decompose the coefficient matrix of linear system (8) as
$$\frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} 0 & M^k \\ (M^k)^T & 0 \end{pmatrix} = \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} 0 & 0 \\ (M^k)^T & 0 \end{pmatrix} + \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} 0 & M^k \\ 0 & 0 \end{pmatrix}, \tag{9}$$
where
$$\begin{pmatrix} 0 & 0 \\ (M^k)^T & 0 \end{pmatrix} = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{(n-m) \times n} & V^T & 0_{(n-m) \times n} \\ 0_{m \times n} & 0_{m \times n} & -A \end{pmatrix} \times \left[ \begin{pmatrix} 0_{n \times n} & 0_{n \times n} \\ S^k & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} \end{pmatrix} + \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{n \times n} & Q & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} & I_{n \times n} \end{pmatrix} \begin{pmatrix} 0_{n \times n} & 0_{n \times n} \\ X^k & 0_{n \times n} \\ X^k & 0_{n \times n} \end{pmatrix} \right]. \tag{10}$$
To compute matrix $V$, we need to find a basis matrix $A_B$ of $A$ and compute its inverse $A_B^{-1}$. Both steps are nontrivial and can be expensive. However, we can reformulate the LCQO problem as follows:
$$\min\ c^T x + \frac{1}{2} x^T Q x \quad \text{s.t.} \quad \begin{pmatrix} I & 0 & A \\ 0 & I & -A \end{pmatrix} \begin{pmatrix} x' \\ x'' \\ x \end{pmatrix} = \begin{pmatrix} b \\ -b \end{pmatrix}, \quad x' \ge 0,\ x'' \ge 0,\ x \ge 0.$$
In this case, we have an obvious basis
$$A_B = \begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix},$$
and matrix $V$ can be constructed efficiently:
$$V = \begin{pmatrix} -A_B^{-1} A_N \\ I \end{pmatrix} = \begin{pmatrix} -\begin{pmatrix} I & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} A \\ -A \end{pmatrix} \\ I \end{pmatrix} = \begin{pmatrix} -A \\ A \\ I \end{pmatrix}.$$
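The following small sketch (ours, not from the paper) verifies that the columns of the reformulation’s $V$ indeed span the null space of the augmented constraint matrix without any matrix inversion:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n))
# Augmented constraints [I 0 A; 0 I -A] from the reformulation above.
A_ref = np.block([[np.eye(m), np.zeros((m, m)), A],
                  [np.zeros((m, m)), np.eye(m), -A]])
V = np.vstack([-A, A, np.eye(n)])     # no basis inversion needed
assert np.allclose(A_ref @ V, 0)      # columns of V span Null(A_ref)
```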
Since matrix $A$ has no all-zero rows (it has full row rank), matrix $V$ has no all-zero rows either. This property of the reformulation is useful in the analysis of the proposed IF-QIPM, but we do not want to build the complexity analysis on the reformulated problem. Thus, without loss of generality, we make the following assumption.
Assumption 3. 
Matrix $A$ is of the form $A = \begin{pmatrix} I & A_N \end{pmatrix}$.
To simplify the analysis, we further assume that the input data are integers.
Assumption 4. 
The input data of Problem (1) are integers.
Based on the two assumptions above, we have the following lemma.
Lemma 1. 
Matrix $V$ equals
$$V = \begin{pmatrix} -A_N \\ I \end{pmatrix}$$
and
$$\min_{i = 1, \dots, n} \left\{ \|V_{i\cdot}\|_2^2 \right\} = \min\left\{ 1,\ \min_{i = 1, \dots, m} \|(A_N)_{i\cdot}\|_2^2 \right\} = 1,$$
where $V_{i\cdot}$ and $(A_N)_{i\cdot}$ are the $i$th rows of $V$ and $A_N$, respectively.
Now, we are ready to give $\theta_0$ in our definition of the central path neighborhood; see (5). We set
$$\theta_0 = \min\left\{ \frac{1}{3\sqrt{n}},\ \frac{1}{4\|QVV^T\|_F + 1} \right\}. \tag{11}$$
We also define $\omega^k$ as the maximum of the values of the primal variables and dual slack variables in the $k$th iteration.
Definition 2. 
Let $(x^k, y^k, s^k)$ be a candidate solution for Problem (1); then,
$$\omega^k = \max_{i \in \{1, \dots, n\}} \left\{ x^k_i,\ s^k_i \right\}.$$
As is standard in the literature on quantum algorithms, in this work we assume access to quantum random access memory (QRAM). Step 7 of Algorithm 2 then consists of three parts: (1) use block encoding to build system (8); (2) use a QLSA to solve system (8); (3) use a quantum tomography algorithm (QTA) to extract the classical solution. We use the block-encoding methods introduced in [18] to block-encode linear system (8).
Proposition 1. 
In the $k$th iteration of Algorithm 2, using the block-encoding methods introduced in [18] and the decomposition described in Equations (9) and (10), a
$$\left( \frac{\sqrt{2}\, \omega^k \sqrt{\|V\|_F^2 + \|A\|_F^2}}{\|M^k\|_F} \left( 2\|Q\|_F + 2 + 1 \right),\ O(\mathrm{poly}\log(n)),\ \frac{\epsilon_{QLSA}}{3 \kappa_{M^k}} \right)$$
-block-encoding of the matrix in system (8) can be implemented efficiently, and its complexity is dominated by the complexity of the QLSA step. Here, $\epsilon_{QLSA}$ is the accuracy required for the QLSA step and $\kappa_{M^k}$ is the condition number of matrix $M^k$.
Proof. 
See Appendix A for proof. □
Provided access to QRAM, the complexity associated with block encoding the OSS system coefficient matrix and preparing a quantum state encoding the right-hand side amounts to polylogarithmic overhead. The cost of these steps is therefore negligible compared with the complexity contributed by the QLSA and QTA, so we ignore it here. To bound the total complexity contributed by the QLSA and QTA, we first need to analyze the accuracy of the QLSA, characterized by $\epsilon_{QLSA}$, the accuracy of the QTA, characterized by $\epsilon_{QTA}$, and their relationship.
In each iteration, we use a QLSA to solve the block-encoded version of system (8) and obtain an $\epsilon_{QLSA}$-approximate solution. Then, we use a QTA to extract an $\epsilon_{QTA}$-approximate solution from the quantum machine. In the context of QLSAs and QTAs, if $\tilde{z}$ is an $\epsilon$-approximate solution of $z$, then $\tilde{z}$ satisfies
$$\left\| \frac{\tilde{z}}{\|\tilde{z}\|_2} - \frac{z}{\|z\|_2} \right\|_2 \le \epsilon.$$
Observe that this definition of accuracy differs from the concept of $\epsilon$-approximate solutions defined in (2).
Similar to [12,13], the QLSA we use is the one proposed in [19], and the QTA we use is the one proposed in [20]. Following the argument in Section 2 of [12], we can establish the relationship among $\epsilon_{QLSA}$, $\epsilon_{QTA}$, and $\epsilon^k_{OSS}$ as
$$\epsilon_{QLSA} = \epsilon_{QTA} = \frac{1}{2} \cdot \frac{\sqrt{2}\, \|M^k\|_F}{\|r^k_c\|_2}\, \epsilon^k_{OSS},$$
where $\epsilon^k_{OSS}$ is defined as the $\ell_2$-norm of the residual when solving system (8) inexactly in the $k$th iteration. This coefficient is also used to rescale the solution. According to [12], we rescale the normalized solution obtained from the QLSA and QTA by
$$\frac{\|r^k_c\|_2}{\sqrt{2}\, \|M^k\|_F}$$
to obtain the $\epsilon^k_{OSS}$-approximate solution of system (6). Here, we do not attach a superscript to $\epsilon_{QLSA}$ and $\epsilon_{QTA}$; the reason will become clear shortly. Let
$$\begin{pmatrix} \tilde{0}^k \\ \tilde{z}^k \end{pmatrix}$$
be an inexact solution of system (8) in the $k$th iteration. Then, the norm of the residual of system (8), which is $\epsilon^k_{OSS}$, and the norm of the residual of system (6), which is $\|M^k \tilde{z}^k - r^k_c\|_2$, satisfy
$$\epsilon^k_{OSS} = \left\| \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} 0 & M^k \\ (M^k)^T & 0 \end{pmatrix} \begin{pmatrix} \tilde{0}^k \\ \tilde{z}^k \end{pmatrix} - \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} r^k_c \\ 0 \end{pmatrix} \right\|_2 = \left\| \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} M^k \tilde{z}^k \\ (M^k)^T \tilde{0}^k \end{pmatrix} - \frac{1}{\sqrt{2}\|M^k\|_F} \begin{pmatrix} r^k_c \\ 0 \end{pmatrix} \right\|_2 \ge \left\| \frac{1}{\sqrt{2}\|M^k\|_F} M^k \tilde{z}^k - \frac{1}{\sqrt{2}\|M^k\|_F} r^k_c \right\|_2 = \frac{1}{\sqrt{2}\|M^k\|_F} \left\| M^k \tilde{z}^k - r^k_c \right\|_2.$$
Recall that the error arising from the OSS system (6) is the same as the error in the full Newton system (7); thus, we can directly use the convergence condition from [5], i.e.,
$$\left\| M^k \tilde{z}^k - r^k_c \right\|_2 \le \delta \|r^k_c\|_2.$$
We can require
$$\left\| M^k \tilde{z}^k - r^k_c \right\|_2 \le \sqrt{2}\, \|M^k\|_F\, \epsilon^k_{OSS} \le \delta \|r^k_c\|_2,$$
and it follows that
$$\epsilon^k_{OSS} \le \frac{\delta \|r^k_c\|_2}{\sqrt{2}\, \|M^k\|_F}.$$
Then, choosing
$$\epsilon^k_{OSS} = \frac{\delta \|r^k_c\|_2}{\sqrt{2}\, \|M^k\|_F} \qquad \text{and} \qquad \epsilon_{QLSA} = \epsilon_{QTA} = \frac{\|M^k\|_F\, \epsilon^k_{OSS}}{\sqrt{2}\, \|r^k_c\|_2} = \frac{\delta}{2}$$
ensures the convergence of the IF-QIPM. The complexities of each step are now also available. Using the QLSA from [19] and the QTA from [20], the complexities of the QLSA and QTA steps are
$$T_{QLSA} = \tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( \kappa_{M^k} \frac{\omega^k}{\|M^k\|_F} \right), \qquad T_{QTA} = \tilde{O}_n\left( \frac{n}{\epsilon_{QTA}} \right) T_{QLSA} = \tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n\, \kappa_{M^k} \frac{\omega^k}{\|M^k\|_F} \right). \tag{12}$$
Since $\epsilon_{QTA} = \frac{\delta}{2}$ and $\delta \in (0, 1)$ is a constant parameter, we omit $\epsilon_{QTA}$ from the big-$O$ notation. Note that the complexity of the block-encoding procedure is dominated by that of the QLSA and QTA, so we ignore the block-encoding cost. In Step 8, the complexity of computing the Newton step from the OSS solution is $O(n^2)$. The total complexity of the $k$th iteration of the IF-QIPM is therefore
$$O\left( T_{QTA} + n^2 \right) = \tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n\, \frac{\omega^k}{\|M^k\|_F}\, \kappa_{M^k} + n^2 \right). \tag{13}$$
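The accuracy bookkeeping of one iteration can be summarized in a few lines (our sketch, not from the paper): the OSS tolerance is set from $\delta$, the QLSA/QTA accuracy collapses to the dimension-free constant $\delta/2$, and the unit-norm tomography output is rescaled back to the scale of system (6).

```python
import numpy as np

def iteration_tolerances(M, r_c, delta=0.5):
    """Tolerances for one IF-QIPM iteration, following the choices above."""
    scale = np.sqrt(2) * np.linalg.norm(M, "fro")
    eps_oss = delta * np.linalg.norm(r_c) / scale
    eps_qlsa = eps_qta = delta / 2            # constant, hence omitted in O-tilde
    return eps_oss, eps_qlsa, eps_qta

def rescale(z_unit, M, r_c):
    """Map the normalized QLSA/QTA output back to a solution of system (6)."""
    return z_unit * np.linalg.norm(r_c) / (np.sqrt(2) * np.linalg.norm(M, "fro"))
```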

3.2.1. Bound for $\omega^k / \|M^k\|_F$

In this section, all of the quantities considered are from the $k$th iteration. For simplicity, we omit the superscript $k$ in this section unless it is needed. Using properties of the trace, we have
$$\|M\|_F^2 = \mathrm{tr}(M^T M) = \mathrm{tr}\left( (SV + XQV)(SV + XQV)^T + X A^T A X \right) = \mathrm{tr}\left( (SV + XQV)(SV + XQV)^T \right) + \mathrm{tr}\left( X A^T A X \right) = \mathrm{tr}(S V V^T S) + \mathrm{tr}(X Q V V^T S) + \mathrm{tr}(S V V^T Q X) + \mathrm{tr}(X Q V V^T Q X) + \mathrm{tr}(X A^T A X).$$
For the non-symmetric terms, due to the cyclic invariance of the trace, we have
$$\mathrm{tr}\left( X Q V V^T S \right) = \mathrm{tr}\left( S X Q V V^T \right).$$
Recalling the central path neighborhood that we defined in (5), we define the matrix
$$E = \frac{1}{\mu \theta}\left( XS - \mu I \right). \tag{14}$$
It is obvious that $E$ is a diagonal matrix and satisfies
$$\|E e\|_2 < 1,$$
which leads to
$$|\mathrm{tr}(E)| \le \|E e\|_1 \le \sqrt{n}\, \|E\|_F = \sqrt{n}\, \|E e\|_2 < \sqrt{n}, \qquad I - E \succeq 0, \qquad I + E \succeq 0.$$
With this, we have
$$\mathrm{tr}\left( X Q V V^T S \right) = \mathrm{tr}\left( S X Q V V^T \right) = \mathrm{tr}\left( (\theta \mu E + \mu I) Q V V^T \right) = \mathrm{tr}\left( \theta \mu E Q V V^T \right) + \mathrm{tr}\left( \mu Q V V^T \right).$$
For the second term, we know that $Q$ and $V^T Q V$ are both positive semidefinite. Thus, by the cyclic invariance of the trace,
$$\mathrm{tr}\left( Q V V^T \right) = \mathrm{tr}\left( V^T Q V \right) \ge 0.$$
According to the Cauchy–Schwarz inequality, we have
$$\mathrm{tr}\left( E Q V V^T \right)^2 \le \|E\|_F^2\, \|Q V V^T\|_F^2.$$
Thus, we have
$$\mathrm{tr}\left( E Q V V^T \right) \ge -\|Q V V^T\|_F.$$
Thus, we have
$$\mathrm{tr}\left( X Q V V^T S \right) = \mathrm{tr}\left( \theta \mu E Q V V^T \right) + \mathrm{tr}\left( \mu Q V V^T \right) \ge \mu\, \mathrm{tr}\left( Q V V^T \right) - \theta \mu \|Q V V^T\|_F \ge -\theta \mu \|Q V V^T\|_F \ge -\frac{\mu}{4},$$
where the last inequality holds due to condition (11). Thus, we can bound $\|M\|_F$ by
$$\|M\|_F^2 = \mathrm{tr}(S V V^T S) + \mathrm{tr}(X Q V V^T S) + \mathrm{tr}(S V V^T Q X) + \mathrm{tr}(X Q V V^T Q X) + \mathrm{tr}(X A^T A X) \ge \mathrm{tr}(S V V^T S) + \mathrm{tr}(X Q V V^T Q X) + \mathrm{tr}(X A^T A X) - \frac{\mu}{2}.$$
Since $X Q V V^T Q X \succeq 0$, we have
$$\|M\|_F^2 \ge \mathrm{tr}(S V V^T S) + \mathrm{tr}(X A^T A X) - \frac{\mu}{2}.$$
Since $X$ and $S$ are both positive diagonal matrices, we have
$$\|M\|_F^2 \ge \mathrm{tr}(S V V^T S) + \mathrm{tr}(X A^T A X) - \frac{\mu}{2} = \sum_i s_i^2 (V V^T)_{ii} + \sum_i x_i^2 (A^T A)_{ii} - \frac{\mu}{2} \ge \omega^2 - \frac{\mu}{2}.$$
As mentioned at the beginning of this section, $\omega$ here is $\omega^k$ with the superscript omitted. Now, we bound $\mu$ so that we can further bound $\|M\|_F^2$. Since $\omega$ is an upper bound on the magnitude of the primal and dual slack variables, we have
$$\omega^2 \ge x_i s_i.$$
Recalling the definition of matrix $E$ in (14), we have
$$\omega^2 \ge x_i s_i = \mu + \theta \mu E_{ii} \ge \mu - \theta \mu = (1 - \theta)\mu.$$
Thus,
$$\|M\|_F^2 \ge \omega^2 - \frac{\mu}{2} \ge \omega^2 - \frac{1}{2} \cdot \frac{\omega^2}{1 - \theta} \ge \omega^2 - \frac{1}{2} \cdot \frac{\omega^2}{1 - 1/3} = \frac{\omega^2}{4},$$
where the last inequality follows from the bound on $\theta$; see (11). Thus, we have
$$\frac{\omega}{\|M\|_F} \le 2 = O(1).$$

3.2.2. Bound for $\kappa_{M^k}$

Similar to the previous section, we omit the superscript $k$ unless it is needed. We start with a general result and then turn to the matrix $M^k$. The following lemma is a well-known result on the condition numbers of matrices and can be proven using the Courant–Fischer–Weyl min-max principle [21].
Lemma 2. 
For any full row rank matrix $P \in \mathbb{R}^{m \times n}$ and symmetric positive definite matrix $D \in \mathbb{R}^{n \times n}$, the condition numbers satisfy
$$\kappa\left( P D P^T \right) \le \kappa(D)\, \kappa\left( P P^T \right).$$
Next, we analyze the matrix in the OSS system (8). Specifically, we focus on $M^T M$, since we are interested in the spectral properties of the OSS system (8). Using the matrix $E$ defined in (14), we have the following decomposition:
$$M^T M = \begin{pmatrix} V^T (S + XQ)^T (S + XQ) V & -V^T (S + XQ)^T X A^T \\ -A X (S + XQ) V & A X^2 A^T \end{pmatrix} = \begin{pmatrix} V^T (S + XQ)^T (S + XQ) V & -V^T \mu\theta E A^T - V^T Q^T X^2 A^T \\ -A \mu\theta E V - A X^2 Q V & A X^2 A^T \end{pmatrix} = \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} (S + XQ)^T (S + XQ) & -\mu\theta E - Q X^2 \\ -\mu\theta E - X^2 Q & X^2 \end{pmatrix} \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T.$$
The second equality holds because
$$-V^T S X A^T - V^T Q^T X^2 A^T = -V^T (\mu I + \mu\theta E) A^T - V^T Q^T X^2 A^T = -V^T \mu\theta E A^T - V^T Q X^2 A^T,$$
as $A V = 0$ (so $V^T A^T = 0$) and $Q$ is symmetric. Then, plugging (14) into the first diagonal block of the decomposition obtained above, we have
$$M^T M = \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} S^2 + 2\mu Q + \mu\theta(EQ + QE) + Q X^2 Q & -\mu\theta E - Q X^2 \\ -\mu\theta E - X^2 Q & X^2 \end{pmatrix} \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T = \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \left[ \begin{pmatrix} S^2 + 2\mu Q + \mu\theta(EQ + QE) & -\mu\theta E \\ -\mu\theta E & 0 \end{pmatrix} + \begin{pmatrix} Q X^2 Q & -Q X^2 \\ -X^2 Q & X^2 \end{pmatrix} \right] \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T = \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & -Q \\ 0 & I \end{pmatrix} \begin{pmatrix} S^2 + 2\mu Q & -\mu\theta E \\ -\mu\theta E & 0 \end{pmatrix} \begin{pmatrix} I & 0 \\ -Q & I \end{pmatrix} \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T + \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & -Q \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & X^2 \end{pmatrix} \begin{pmatrix} I & 0 \\ -Q & I \end{pmatrix} \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T = \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & -Q \\ 0 & I \end{pmatrix} \begin{pmatrix} S^2 + 2\mu Q & -\mu\theta E \\ -\mu\theta E & X^2 \end{pmatrix} \begin{pmatrix} I & 0 \\ -Q & I \end{pmatrix} \begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix}^T.$$
The outer factor $\begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & -Q \\ 0 & I \end{pmatrix}$ has full row rank (the second matrix is nonsingular), so we can apply Lemma 2, and thus we only need to study the middle matrix. Denote the middle matrix by $\Psi$. Observe that $\Psi$ is almost the same as its counterpart in [14]. Subsequently, we have the following result regarding the spectral properties of $M^k$.
Lemma 3. 
When $(x, y, s) \in \mathcal{N}(\theta)$ and $\theta \in \left( 0,\ \min\left\{ \frac{1}{3\sqrt{n}},\ \frac{1}{4\|QVV^T\|_F + 1} \right\} \right]$, the condition number of matrix $M^k$ satisfies
$$\kappa_{M^k} = O\left( \frac{(\omega^k)^2 + \mu^k \sigma_{\max}(Q)}{\mu^k}\, \kappa_{VAQ} \right),$$
where $\kappa_{VAQ}$ is the condition number of the matrix $\begin{pmatrix} V^T & 0 \\ 0 & A \end{pmatrix} \begin{pmatrix} I & -Q \\ 0 & I \end{pmatrix}$.
Proof. 
The proof is in Appendix B. □
Putting all of these together, we have the complexity for our IF-QIPM for LCQO problems.
Theorem 1. 
The IF-QIPM for LCQO problems stops with final duality gap less than $\epsilon$ in at most $O(\sqrt{n} \log(1/\epsilon))$ IPM iterations and, in each IPM iteration, the Newton direction can be obtained with complexity $\tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n \left( \frac{\bar{\omega}^2}{\epsilon} + \sigma_{\max}(Q) \right) \kappa_{VAQ} + n^2 \right)$, where $\bar{\omega} = \max_k \omega^k$.
Proof. 
The bound on the number of IPM iterations comes from the result in [5]. According to (13), the complexity of obtaining the Newton direction is
$$\tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n\, \frac{\omega^k}{\|M^k\|_F}\, \kappa_{M^k} + n^2 \right).$$
Combining this with the result in Section 3.2.1, the bound in Lemma 3, and $\mu^k \ge \epsilon$ before termination, we have
$$\tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n\, \frac{\omega^k}{\|M^k\|_F}\, \kappa_{M^k} + n^2 \right) = \tilde{O}_{n, \bar{\omega}, \frac{1}{\epsilon}}\left( n \left( \frac{\bar{\omega}^2}{\epsilon} + \sigma_{\max}(Q) \right) \kappa_{VAQ} + n^2 \right).$$

4. Application in Support Vector Machine Problems

In this section, we discuss how to use our IF-QIPM to solve SVM problems. We show that our algorithm can solve $\ell_1$-norm soft margin SVM problems faster, with respect to the dimension, than any existing classical or quantum algorithm.
The ordinary SVM problem works on a linearly separable dataset in which the data points have binary labels. The ordinary SVM aims to find a hyperplane that correctly separates the data points with a maximum margin. In practice, however, the data points are not necessarily linearly separable. To allow for mislabelling, the concept of a soft margin SVM was introduced in [22]. Let $\{(\phi_i, \zeta_i) \in \mathbb{R}^m \times \{-1, +1\} \mid i = 1, \dots, n\}$ be the set of data points, $\Phi$ be the matrix whose $i$th column is $\phi_i$, and $Z$ be the diagonal matrix whose $i$th diagonal element is $\zeta_i$. The SVM problem with an $\ell_1$-norm soft margin can be formulated as follows:
$$\min_{(\xi, w, t) \in \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}}\ \frac{1}{2}\|w\|_2^2 + C\|\xi\|_1 \quad \text{s.t.}\quad \zeta_i\left( \langle w, \phi_i \rangle + t \right) \ge 1 - \xi_i,\ \ \xi_i \ge 0, \quad i = 1, \dots, n.$$
Here, $(w, t)$ determines a hyperplane and $C$ is a penalty parameter. In [9], the authors rewrote the SVM problem as a second-order conic optimization (SOCO) problem and used the quantum algorithm that they proposed to solve the resulting SOCO problem. They claim that the complexity of their algorithm has $O(n^2)$ dependence on the dimension, which is better than that of any classical algorithm. However, the algorithm in [9] is invalid. Their algorithm is an inexact infeasible QIPM (II-QIPM), while they used the IPM iteration complexity of feasible QIPMs, which ignores at least an $O(n^{1.5})$ factor in the dependence on $n$. They also missed the symmetrization of the Newton step, which is necessary for SOCO problems; this omission makes their Newton step invalid.
Aside from [9], some pure quantum algorithms for SVM problems have also been proposed. In [23], the authors propose a pure quantum algorithm for SVM problems. They claim a complexity of $O\left( \frac{\kappa_{\mathrm{eff}}^3}{\epsilon^3} \log(mn) \right)$, where $\kappa_{\mathrm{eff}}$ is the condition number of a matrix involving the kernel matrix and $\epsilon$ is the accuracy. In the worst case, $\kappa_{\mathrm{eff}} = O(m)$. Their complexity is worse than ours with respect to the dependence on the dimension and the accuracy. In addition, their algorithm does not produce classical solutions; namely, the solution remains in the quantum machine and cannot be read or used on a classical computer, whereas our algorithm produces a classical solution.
To convert the problem into a standard-form LCQO problem, we introduce $(w^+, w^-) \in \mathbb{R}^m_+ \times \mathbb{R}^m_+$, $(t^+, t^-) \in \mathbb{R}_+ \times \mathbb{R}_+$, and a vector of slack variables $\rho \in \mathbb{R}^n_+$. Then, we can obtain the following formulation:
$$\min_{w^+, w^-, t^+, t^-, \xi, \rho}\ \frac{1}{2}\|w^+ - w^-\|_2^2 + C\|\xi\|_1 \quad \text{s.t.}\quad \zeta_i\left( \langle w^+ - w^-, \phi_i \rangle + t^+ - t^- \right) + \xi_i - \rho_i = 1,\ i = 1, \dots, n, \quad (\xi, w^+, w^-, t^+, t^-, \rho) \ge 0.$$
This is a standard-form LCQO problem with non-negative variables $(w^+, w^-, t^+, t^-, \xi, \rho) \in \mathbb{R}^m \times \mathbb{R}^m \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^n$ and parameters
$$c = \begin{pmatrix} 0_{2m+2} \\ C e_n \\ 0_n \end{pmatrix}, \qquad Q = \begin{pmatrix} I_{m \times m} & -I_{m \times m} & 0_{m \times (2 + 2n)} \\ -I_{m \times m} & I_{m \times m} & 0_{m \times (2 + 2n)} \\ 0_{(2 + 2n) \times m} & 0_{(2 + 2n) \times m} & 0_{(2 + 2n) \times (2 + 2n)} \end{pmatrix}, \qquad A = \begin{pmatrix} Z \Phi^T & -Z \Phi^T & Z e & -Z e & I_{n \times n} & -I_{n \times n} \end{pmatrix}, \qquad b = e.$$
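For concreteness, here is a minimal sketch (ours, not from the paper) that assembles the LCQO data $(c, Q, A, b)$ from a dataset, with `Phi` holding the data points as columns and `zeta` the $\pm 1$ labels, following the variable order $(w^+, w^-, t^+, t^-, \xi, \rho)$:

```python
import numpy as np

def svm_lcqo_data(Phi, zeta, C):
    """Build (c, Q, A, b) of the l1-soft-margin SVM as a standard-form LCQO."""
    m, n = Phi.shape                       # m features, n data points
    Z = np.diag(zeta)
    N = 2 * m + 2 + 2 * n                  # total number of variables
    c = np.concatenate([np.zeros(2 * m + 2), C * np.ones(n), np.zeros(n)])
    Q = np.zeros((N, N))                   # Hessian of 0.5*||w+ - w-||^2
    Q[:m, :m] = Q[m:2*m, m:2*m] = np.eye(m)
    Q[:m, m:2*m] = Q[m:2*m, :m] = -np.eye(m)
    A = np.hstack([Z @ Phi.T, -(Z @ Phi.T),
                   zeta[:, None], -zeta[:, None],   # columns Z e and -Z e
                   np.eye(n), -np.eye(n)])
    b = np.ones(n)
    return c, Q, A, b
```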
Thus, we can use the proposed IF-QIPM for LCQO problems to solve the $\ell_1$-norm soft margin SVM problem and obtain an $\epsilon$-approximate solution with complexity
$$\tilde{O}_{m, n, \bar{\omega}, \frac{1}{\epsilon}}\left( (m + n)^{1.5} \left( \frac{\bar{\omega}^2}{\epsilon} + \sigma_{\max}(Q) \right) \kappa_{VAQ} + (m + n)^{2.5} \right).$$
This dependence on the dimension is better than that of any existing quantum or classical algorithm.

5. Discussion

In this work, we presented an IF-QIPM for LCQO problems by combining the IF-IPM framework proposed in [5] with the OSS system introduced in [14]. Our algorithm has an $O(n^{1.5})$ dependence on the dimension $n$, which is better than that of any existing algorithm for LCQO problems. The dependence on the accuracy is polynomial, which is worse than that of classical IPMs. Iterative refinement techniques might help to improve the dependence on the accuracy, but they are beyond the scope of this work.

Author Contributions

Conceptualization, Z.W. and T.T.; methodology, Z.W.; supervision, X.Y. and T.T.; validation, Z.W., M.M., B.A., X.Y. and T.T.; writing—original draft, Z.W.; writing—review and editing, Z.W., M.M., B.A., X.Y. and T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Defense Advanced Research Projects Agency as part of the project W911NF2010022: The Quantum Computing Revolution and Optimization: Challenges and Opportunities. This work was also supported by National Science Foundation CAREER award DMS-2143915.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The funder had no role in the design of the study; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
IF-IPM    Inexact Feasible Interior Point Method
IF-QIPM   Inexact Feasible Quantum Interior Point Method
IPM       Interior Point Method
LCQO      Linearly Constrained Quadratic Optimization
LO        Linear Optimization
OSS       Orthogonal Subspace System
QIPM      Quantum Interior Point Method
QLSA      Quantum Linear System Algorithm
QTA       Quantum Tomography Algorithm
SDO       Semidefinite Optimization
SOCO      Second-Order Conic Optimization
SVM       Support Vector Machine

Appendix A. Block Encoding of the OSS System

In this section, we omit the superscript $k$ for simplicity. As described in Equation (9), we first block-encode each of the matrices involved in (10). We assume that $V$, $A$, $S$, and $X$ are given and stored in a quantum-accessible data structure (we ignore the complexity of loading the classical data into the quantum machine). For the first matrix,
$$M_1 = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{(n-m) \times n} & V^T & 0_{(n-m) \times n} \\ 0_{m \times n} & 0_{m \times n} & -A \end{pmatrix},$$
a
$$\left( \sqrt{\|V\|_F^2 + \|A\|_F^2},\ O(\mathrm{poly}\log(n)),\ \epsilon_1 \right)$$
-block-encoding of $M_1$ can be implemented efficiently according to Lemma 50 in [18].
The second matrix,
$$M_2 = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} \\ S & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} \end{pmatrix},$$
is both one-row-sparse and one-column-sparse. By the definition of $\omega$, each element of $M_2 / \omega$ has an absolute value of at most 1. According to Lemma 48 in [18], a
$$\left( 1,\ O(\mathrm{poly}\log(n)),\ \epsilon_2 \right)$$
-block-encoding of $M_2 / \omega$ can be implemented efficiently.
The third matrix,
$$M_3 = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{n \times n} & Q & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} & I_{n \times n} \end{pmatrix},$$
can be decomposed into
$$M_3 = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{n \times n} & Q & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \end{pmatrix} + \begin{pmatrix} 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} & 0_{n \times n} \\ 0_{n \times n} & 0_{n \times n} & I_{n \times n} \end{pmatrix}.$$
Then, we can block-encode the two matrices first and apply a linear combination to obtain $M_3$. In fact, a
$$\left( \|Q\|_F,\ O(\mathrm{poly}\log(n)),\ \epsilon_3 \right)$$
-block-encoding of the left matrix can be implemented efficiently according to Lemma 50 in [18], and a
$$\left( 1,\ O(\mathrm{poly}\log(n)),\ \epsilon_3 \right)$$
-block-encoding of the right matrix can be implemented efficiently according to Lemma 48 in [18]. With the state-preparation cost of the linear-combination coefficient vector $(1, 1)$ neglected, a
$$\left( \|Q\|_F + 1,\ O(\mathrm{poly}\log(n)),\ (\|Q\|_F + 1)\, \epsilon_3 \right)$$
-block-encoding of $M_3$ can be implemented efficiently according to Lemma 52 in [18].
The fourth matrix,
$$M_4 = \begin{pmatrix} 0_{n \times n} & 0_{n \times n} \\ X & 0_{n \times n} \\ X & 0_{n \times n} \end{pmatrix},$$
is one-row-sparse and two-column-sparse. After scaling by $\frac{1}{\omega}$, each element of $M_4 / \omega$ has an absolute value of at most 1. According to Lemma 48 in [18], a
$$\left( 2,\ O(\mathrm{poly}\log(n)),\ \epsilon_4 \right)$$
-block-encoding of $M_4 / \omega$ can be implemented efficiently.
For the matrix product $M_3 M_4 / \omega$, a
$$\left( 2\|Q\|_F + 2,\ O(\mathrm{poly}\log(n)),\ (\|Q\|_F + 1)(2\epsilon_3 + \epsilon_4) \right)$$
-block-encoding can be implemented efficiently according to Lemma 53 in [18].
For the linear combination $M_2 / \omega + M_3 M_4 / \omega$, the state-preparation cost of the coefficient vector $(1, 1)$ is negligible, and thus a
$$\left( 2\|Q\|_F + 2 + 1,\ O(\mathrm{poly}\log(n)),\ (2\|Q\|_F + 2 + 1)\left( \epsilon_3 + \tfrac{1}{2}\epsilon_4 \right) \right)$$
-block-encoding can be implemented efficiently according to Lemma 52 in [18].
For the matrix product $M_1 \left( M_2 / \omega + M_3 M_4 / \omega \right)$, a
$$\left( \sqrt{\|V\|_F^2 + \|A\|_F^2}\, (2\|Q\|_F + 2 + 1),\ O(\mathrm{poly}\log(n)),\ \sqrt{\|V\|_F^2 + \|A\|_F^2}\, (2\|Q\|_F + 2 + 1)\left( \epsilon_3 + \tfrac{1}{2}\epsilon_4 \right) + (2\|Q\|_F + 2 + 1)\, \epsilon_1 \right)$$
-block-encoding can be implemented efficiently according to Lemma 53 in [18].
Finally, considering that the complexity of the state preparation of the coefficient vector
$$\left( \frac{\omega}{\sqrt{2}\|M\|_F},\ \frac{\omega}{\sqrt{2}\|M\|_F} \right)$$
can be neglected, a
$$\left( \frac{\sqrt{2}\, \omega \sqrt{\|V\|_F^2 + \|A\|_F^2}}{\|M\|_F} (2\|Q\|_F + 2 + 1),\ O(\mathrm{poly}\log(n)),\ \frac{\sqrt{2}\, \omega \sqrt{\|V\|_F^2 + \|A\|_F^2}}{\|M\|_F} (2\|Q\|_F + 2 + 1) \left[ 2\sqrt{\|V\|_F^2 + \|A\|_F^2}\left( \epsilon_3 + \tfrac{1}{2}\epsilon_4 \right) + \epsilon_1 \right] \right)$$
-block-encoding of the coefficient matrix of system (8) can be implemented efficiently according to Lemma 52 in [18]. We can choose
$$\epsilon_1 = \frac{\epsilon_{QLSA}}{3\kappa_M} \cdot \frac{1}{2K}, \qquad \epsilon_2 = \frac{\epsilon_1}{2\sqrt{\|V\|_F^2 + \|A\|_F^2}}, \qquad \epsilon_3 = \epsilon_2, \qquad \epsilon_4 = 2\epsilon_2,$$
where $K$ depends only on the initial data:
$$K = 2\sqrt{\|V\|_F^2 + \|A\|_F^2}\, (2\|Q\|_F + 2 + 1)^2.$$
Now, the complexity of all block-encoding subroutines used above depends only poly-logarithmically on the dimension and the accuracy, and, for $i = 1, 2, 3, 4$,
$$O\left( \mathrm{poly}\log\left( \frac{1}{\epsilon_i} \right) \right) = O\left( \mathrm{poly}\log(\kappa_M) \right).$$
Since the QLSA has linear dependence on $\kappa_M$, the complexity of block encoding is dominated by the complexity of the QLSA, so we can safely ignore the cost of block encoding.

Appendix B. Spectral Analysis for Matrix Ψ

In this section, we provide the spectral analysis of the matrix
$$\Psi = \begin{pmatrix} S^2 + 2\mu Q & -\mu\theta E \\ -\mu\theta E & X^2 \end{pmatrix}.$$
As in the previous sections, we omit the superscript $k$ for simplicity. We can perform the following decomposition:
$$\begin{pmatrix} S^2 + 2\mu Q & -\mu\theta E \\ -\mu\theta E & X^2 \end{pmatrix} = \begin{pmatrix} S^2 & -\mu\theta E \\ -\mu\theta E & X^2 \end{pmatrix} + \begin{pmatrix} 2\mu Q & 0 \\ 0 & 0 \end{pmatrix}.$$
Let us use the following notation:
$$\Psi_1 = \begin{pmatrix} S^2 & -\mu\theta E \\ -\mu\theta E & X^2 \end{pmatrix}, \qquad \Psi_2 = \begin{pmatrix} 2\mu Q & 0 \\ 0 & 0 \end{pmatrix}.$$
It can be proven that $\Psi_1$ is positive definite. The majority of this proof comes from [14]; for the reader’s convenience, we provide the complete proof here.
Matrix $\Psi_1$ is a $2 \times 2$ block matrix in which all four blocks are diagonal matrices. Thus, we can easily compute its eigenvalues using the characteristic polynomial
$$\det(\Psi_1 - qI) = \det\left( (X^2 - qI)(S^2 - qI) - \theta^2\mu^2 E^2 \right) = \prod_{i=1}^n \left[ (x_i^2 - q)(s_i^2 - q) - \theta^2\mu^2 E_{ii}^2 \right].$$
Clearly, $\det(\Psi_1 - qI) = 0$ yields $n$ quadratic equations, and each quadratic equation gives two eigenvalues. The two eigenvalues from the $i$th quadratic equation are
$$q_i^+ = \frac{1}{2}\left( (x_i^2 + s_i^2) + \sqrt{(x_i^2 + s_i^2)^2 - 4x_i^2 s_i^2 + 4\theta^2\mu^2 E_{ii}^2} \right)$$
and
$$q_i^- = \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 - 4x_i^2 s_i^2 + 4\theta^2\mu^2 E_{ii}^2} \right).$$
Recalling the definition of $E$ in (14), i.e., $\theta\mu E_{ii} = x_i s_i - \mu$, we can write
$$q_i^- = \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 - 4x_i^2 s_i^2 + 4(x_i s_i - \mu)^2} \right) = \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 + 4(x_i s_i - \mu + x_i s_i)(x_i s_i - \mu - x_i s_i)} \right) = \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 - 4\mu(2 x_i s_i - \mu)} \right) = \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 - 4\mu(2\theta\mu E_{ii} + \mu)} \right).$$
One can verify that the square root always exists because
$$(x_i^2 + s_i^2)^2 - 4\mu(2 x_i s_i - \mu) \ge 4(x_i s_i)^2 - 4\mu(2 x_i s_i) + 4\mu^2 = 4(x_i s_i - \mu)^2 \ge 0.$$
With $\theta \in \left( 0,\ \min\left\{ \frac{1}{3\sqrt{n}},\ \frac{1}{4\|QVV^T\|_F + 1} \right\} \right]$ and $|E_{ii}| \le \|Ee\|_2 < 1$, we have $2\theta\mu E_{ii} + \mu \ge \left( 1 - \frac{2}{3} \right)\mu = \frac{\mu}{3}$, and hence
$$q_i^- \ge \frac{1}{2}\left( (x_i^2 + s_i^2) - \sqrt{(x_i^2 + s_i^2)^2 - \frac{4}{3}\mu^2} \right) = \frac{1}{2} \cdot \frac{\frac{4}{3}\mu^2}{(x_i^2 + s_i^2) + \sqrt{(x_i^2 + s_i^2)^2 - \frac{4}{3}\mu^2}} \ge \frac{1}{2} \cdot \frac{\frac{4}{3}\mu^2}{(x_i^2 + s_i^2) + (x_i^2 + s_i^2)} = \frac{\mu^2}{3(x_i^2 + s_i^2)} > 0.$$
This means that matrix $\Psi_1$ is positive definite, and its eigenvalues coincide with its singular values because $\Psi_1$ is also real and symmetric. Analogously, we have
$$q_i^+ = \frac{1}{2}\left( (x_i^2 + s_i^2) + \sqrt{(x_i^2 + s_i^2)^2 - 4\mu(2\theta\mu E_{ii} + \mu)} \right) \le \frac{1}{2}\left( (x_i^2 + s_i^2) + (x_i^2 + s_i^2) + 2\mu\sqrt{2\theta |E_{ii}| + 1} \right) \le \frac{1}{2}\left( (x_i^2 + s_i^2) + (x_i^2 + s_i^2) + 2\mu\sqrt{2} \right) = (x_i^2 + s_i^2) + \sqrt{2}\,\mu.$$
Thus, the condition number of $\Psi$ satisfies
$$\kappa(\Psi) \le \frac{\sigma_{\max}(\Psi_1) + \sigma_{\max}(\Psi_2)}{\sigma_{\min}(\Psi_1) + \sigma_{\min}(\Psi_2)} = \frac{\max_i q_i^+ + \sigma_{\max}(\Psi_2)}{\min_j q_j^- + \sigma_{\min}(\Psi_2)} \le \frac{\max_i\{x_i^2 + s_i^2\} + \sqrt{2}\,\mu + 2\mu\,\sigma_{\max}(Q)}{\min_j \frac{\mu^2}{3(x_j^2 + s_j^2)}} = \frac{3\max_i\{x_i^2 + s_i^2\}\left( \max_i\{x_i^2 + s_i^2\} + \sqrt{2}\,\mu + 2\mu\,\sigma_{\max}(Q) \right)}{\mu^2} \le \frac{3 \cdot 2\omega^2\left( 2\omega^2 + \sqrt{2}\,\mu + 2\mu\,\sigma_{\max}(Q) \right)}{\mu^2},$$
where the last inequality comes from the definition of $\omega$. Since $\omega^2 \ge x_i s_i \ge (1 - \theta)\mu$, we have
$$\kappa(\Psi) = O\left( \frac{\omega^2\left( \omega^2 + \mu\,\sigma_{\max}(Q) \right)}{\mu^2} \right).$$
Using Lemma 2, we can also bound the condition number of matrix $M$ by
$$\kappa_M = \sqrt{\kappa(M^T M)} \le \sqrt{\kappa(\Psi)}\, \kappa_{VAQ} = O\left( \frac{\omega^2 + \mu\,\sigma_{\max}(Q)}{\mu}\, \kappa_{VAQ} \right).$$
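As a numeric sanity check (ours, not from the paper), the closed-form eigenvalue pairs $q_i^{\pm}$ of $\Psi_1$ can be compared against a dense eigensolver on a randomly generated centered iterate:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta, mu = 4, 0.1, 1.0
e_diag = rng.uniform(-1.0, 1.0, n) / np.sqrt(n)   # keeps ||E e||_2 < 1
x = rng.uniform(0.5, 2.0, n)
s = mu * (1.0 + theta * e_diag) / x               # so that x_i s_i = mu + theta*mu*E_ii
E = np.diag(e_diag)                               # matches definition (14)

Psi1 = np.block([[np.diag(s**2), -theta * mu * E],
                 [-theta * mu * E, np.diag(x**2)]])

# Per-index quadratic factor of the characteristic polynomial.
a = x**2 + s**2
disc = np.sqrt(a**2 - 4 * x**2 * s**2 + 4 * (theta * mu * e_diag)**2)
q_closed = np.sort(np.concatenate([(a + disc) / 2, (a - disc) / 2]))

assert np.allclose(np.sort(np.linalg.eigvalsh(Psi1)), q_closed)
assert (q_closed > 0).all()                       # Psi_1 is positive definite
print("smallest eigenvalue:", q_closed[0])
```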

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization; Springer: New York, NY, USA, 1999. [Google Scholar]
  2. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; Haussler, D., Ed.; Association for Computing Machinery: New York, NY, USA, 1992; pp. 144–152. [Google Scholar]
  3. Roos, C.; Terlaky, T.; Vial, J.P. Theory and Algorithms for Linear Optimization: An Interior Point Approach; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  4. Pólik, I.; Terlaky, T. Interior point methods for nonlinear optimization. In Nonlinear Optimization; Di Pillo, G., Schoen, F., Eds.; Springer: New York, NY, USA, 2010; pp. 215–276. [Google Scholar]
  5. Gondzio, J. Convergence analysis of an inexact feasible interior point method for convex quadratic programming. SIAM J. Optim. 2013, 23, 1510–1527. [Google Scholar] [CrossRef]
  6. Lu, Z.; Monteiro, R.D.; O’Neal, J.W. An iterative solver-based infeasible primal-dual path-following algorithm for convex quadratic programming. SIAM J. Optim. 2006, 17, 287–310. [Google Scholar] [CrossRef]
  7. Bunch, J.R.; Parlett, B.N. Direct methods for solving symmetric indefinite systems of linear equations. SIAM J. Numer. Anal. 1971, 8, 639–655. [Google Scholar] [CrossRef]
  8. Schuld, M.; Sinayskiy, I.; Petruccione, F. Prediction by linear regression on a quantum computer. Phys. Rev. A 2016, 94, 022342. [Google Scholar] [CrossRef]
  9. Kerenidis, I.; Prakash, A.; Szilágyi, D. Quantum algorithms for second-order cone programming and support vector machines. Quantum 2021, 5, 427. [Google Scholar] [CrossRef]
  10. Harrow, A.W.; Hassidim, A.; Lloyd, S. Quantum algorithm for linear systems of equations. Phys. Rev. Lett. 2009, 103, 150502. [Google Scholar] [CrossRef] [PubMed]
  11. Kerenidis, I.; Prakash, A. A quantum interior point method for LPs and SDPs. ACM Trans. Quantum Comput. 2020, 1, 1–32. [Google Scholar] [CrossRef]
  12. Mohammadisiahroudi, M.; Fakhimi, R.; Terlaky, T. Efficient use of quantum linear system algorithms in interior point methods for linear optimization. arXiv 2022, arXiv:2205.01220. [Google Scholar]
  13. Augustino, B.; Nannicini, G.; Terlaky, T.; Zuluaga, L.F. Quantum interior point methods for semidefinite optimization. arXiv 2021, arXiv:2112.06025. [Google Scholar]
  14. Mohammadisiahroudi, M.; Fakhimi, F.; Wu, Z.; Terlaky, T. An Inexact Feasible Interior Point Method for Linear Optimization with High Adaptability to Quantum Computers; Technical Report: 21T-006; Department of ISE, Lehigh University: Bethlehem, PA, USA, 2021. [Google Scholar]
  15. Kojima, M.; Mizuno, S.; Yoshise, A. A polynomial-time algorithm for a class of linear complementarity problems. Math. Program. 1989, 44, 1–26. [Google Scholar] [CrossRef]
  16. Monteiro, R.D.; Adler, I. Interior path following primal-dual algorithms. part II: Convex quadratic programming. Math. Program. 1989, 44, 43–66. [Google Scholar] [CrossRef]
  17. Goldfarb, D.; Liu, S. An O(n³L) primal interior point algorithm for convex quadratic programming. Math. Program. 1990, 49, 325–340. [Google Scholar] [CrossRef]
  18. Gilyén, A.; Su, Y.; Low, G.H.; Wiebe, N. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics. arXiv 2018, arXiv:1806.01838. [Google Scholar]
  19. Chakraborty, S.; Gilyén, A.; Jeffery, S. The power of block-encoded matrix powers: Improved regression techniques via faster Hamiltonian simulation. arXiv 2018, arXiv:1804.01973. [Google Scholar]
  20. van Apeldoorn, J.; Cornelissen, A.; Gilyén, A.; Nannicini, G. Quantum tomography using state-preparation unitaries. arXiv 2022, arXiv:2207.08800. [Google Scholar]
  21. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  22. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  23. Rebentrost, P.; Mohseni, M.; Lloyd, S. Quantum support vector machine for big data classification. Phys. Rev. Lett. 2014, 113, 130503. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
