Article

Quantum-Enhanced Generalized Pattern Search Optimization

by Colton Mikes 1,*, David Huckleberry Gutman 2 and Victoria E. Howle 3
1 Department of Industrial, Manufacturing and Systems Engineering, Texas Tech University, Lubbock, TX 79409, USA
2 Department of Industrial and Systems Engineering, Texas A&M, College Station, TX 77840, USA
3 Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409, USA
* Author to whom correspondence should be addressed.
Quantum Rep. 2024, 6(4), 509-521; https://doi.org/10.3390/quantum6040034
Submission received: 22 August 2024 / Revised: 18 September 2024 / Accepted: 24 September 2024 / Published: 29 September 2024

Abstract:
While the development of quantum computers promises a myriad of advantages over their classical counterparts, care must be taken when designing algorithms that substitute a classical technique with a potentially advantageous quantum method. The probabilistic nature of many quantum algorithms may result in new behavior that could negatively impact the performance of the larger algorithm. The purpose of this work is to preserve the advantages of applying quantum search methods for generalized pattern search algorithms (GPSs) without violating the convergence criteria. It is well known that quantum search methods are able to reduce the expected number of oracle calls needed for finding the solution to a search problem from O(N) to O(√N). However, the number of oracle calls needed to determine that no solution exists with certainty is exceedingly high and potentially infinite. In the case of GPS, this is a significant problem since overlooking a solution during an iteration will violate a needed assumption for convergence. Here, we overcome this problem by introducing the quantum improved point search (QIPS), a classical–quantum hybrid variant of the quantum search algorithm QSearch. QIPS retains the O(√N) oracle query complexity of QSearch when a solution exists. However, it is able to determine when no solution exists, with certainty, using only O(N) oracle calls.

1. Introduction

Traditional optimization algorithms rely on derivatives to obtain their results. However, in many cases, derivative information may be unavailable. Even if the derivatives of some order exist, it may be impractical or prohibitively expensive to calculate any of them. For example, it can be impractical to obtain gradient information when the objective function is computed using a black-box simulation. Similarly, approximation methods such as finite differences can become prohibitively expensive or inaccurate for objective functions that are expensive to calculate or noisy. Even for functions constructed recursively from elementary functions using composition and arithmetic operations, gradient computation may incur onerous costs in time and memory. Automatic differentiation can also be used to obtain gradient information in some cases, but is not always practical since it either needs the source code for the function to be available, or to be built into the original code through a method such as operator overloading. In these situations, we turn to derivative-free optimization (DFO) algorithms. DFO algorithms seek to solve a given optimization problem without the use of differential information of any order. Instead, they rely on calls to a given oracle to evaluate the objective function. These calls are often expensive and become the dominant cost of the algorithm.
In this work, we are concerned with a class of DFO algorithms known as Generalized Pattern Search (GPS). These algorithms generate a sequence of iterates with non-increasing objective function values by systematically selecting points to be evaluated and considered for the next iterate. This process is split into two steps: the search and poll steps. The only difference between them is how points are chosen to be evaluated. The search step is designed to allow for increased flexibility when selecting which points to consider, potentially reducing the number of iterations needed to find a solution. The poll step is constructed to ensure convergence of the algorithm, only considering those points necessary to satisfy the convergence criteria outlined in [1], Section 3. A typical iteration will begin with the search step. The poll step is invoked only if the search step fails to produce an iterate with a reduced objective function value.
Both the search and poll steps of GPS can be framed as unstructured search problems on finite sets. It is well known that quantum computers are able to reduce the expected number of oracle calls in such problems from O(N) to O(√N) [2,3]. As in [4,5,6], we make the standard assumption that any difference in the computational cost of a quantum and classical oracle call is negligible. At first glance, quantum search techniques would seem to be a natural fit for GPS. However, quantum search methods assume the existence of a solution within the set being searched and are not equipped to determine when no solution exists. Terminating the algorithm before an exhaustive search has been performed will invalidate the convergence properties provided by the poll step, and an exhaustive search is not guaranteed to occur within a finite number of oracle calls. This presents a significant hurdle for applying quantum search methods to GPS algorithms since the sets searched during an iteration of GPS are not guaranteed to contain a point with reduced objective function value.
The application of the quantum search techniques developed in [2,3] for optimization was previously explored in earlier works such as [7,8]. However, these approaches restrict their search to a single finite set of points, obtaining an accurate solution with high probability using the error bounds initially presented in [9]. Applying this approach to GPS, as was first touched on by Arunachalam [10], allows for the potential of error during the poll step, violating necessary assumptions for convergence. The main contribution of this paper is the introduction of the Quantum Improved Point Search algorithm (QIPS), a quantum–classical hybrid variant of the quantum search algorithm QSearch ([3], Section 2). QIPS introduces a classical search process that runs in parallel with QSearch, designed to leverage the reduction in oracle calls provided by QSearch while ensuring the convergence criteria for GPS presented in [1] are satisfied.
The remaining sections of this paper are organized as follows. Section 2 is dedicated to reviewing the background material necessary for the rest of the paper. In Section 2.1, we discuss the fundamental aspects of quantum computing. In Section 2.2, we review the quantum search algorithm QSearch. In Section 2.3, we review classical GPS algorithms and their convergence criteria. In Section 3, we introduce our main result, the QIPS algorithm. We begin in Section 3.1, where we discuss how our optimization problem is represented in a quantum computer. In Section 3.2, we formally introduce the QIPS algorithm and show that it retains the advantages of quantum search methods while preserving convergence when used for GPS. We end the paper with some concluding remarks in Section 4.

2. Preliminaries

In this section, we review all of the technology and notation from quantum computing and classical GPS algorithms required to understand this paper’s main contributions. In Section 2.1, we discuss the fundamental components of quantum computing that will be used throughout the paper. Then, in Section 2.2, we review the quantum search algorithm QSearch introduced in [3], Section 2. In Section 2.3, we review classical GPS algorithms, following the formulation presented by Audet and Dennis in [1]. In the original formulation introduced by Torczon in [11], the search and poll steps are not treated as distinct steps. Audet and Dennis’s separation of GPS iterations into the search and poll steps allows for a clearer understanding of the algorithm’s convergence properties.

2.1. Quantum Computing

In this section, we provide a brief review of the fundamental details of quantum computing relevant to our results. As their name implies, quantum computers exploit the quantum mechanical behavior of a physical system in order to perform some specified computation. The state of a quantum computer is described by its state vector, a unit vector in a complex-valued inner product space, sometimes referred to as the state space. The state vector is stored using qubits, a quantum mechanical variant of the classical bit. A single qubit is a unit vector in ℂ², denoted as |ψ⟩ = α|0⟩ + β|1⟩, where α, β ∈ ℂ, and {|0⟩, |1⟩} is the standard basis for ℂ², referred to here as the computational basis. For those unfamiliar with this notation, better known as Dirac notation, we note that |0⟩ = (1, 0)^T and |1⟩ = (0, 1)^T.
Furthermore, the Dirac notation for the conjugate transpose of a vector |ψ⟩ is (|ψ⟩)† = ⟨ψ|, and thus the standard inner and outer products are written as ⟨ψ|ψ⟩ and |ψ⟩⟨ψ|. As one may expect, quantum computers are not limited to states consisting of a single qubit. The state vector of a quantum computer containing N qubits is a unit vector in ℂ^(2^N). Thus, the computational basis for a quantum computer containing N qubits is defined as the standard basis for ℂ^(2^N). For example, the computational basis for a 2-qubit quantum computer is
{|00⟩, |01⟩, |10⟩, |11⟩} = {(1, 0, 0, 0)^T, (0, 1, 0, 0)^T, (0, 0, 1, 0)^T, (0, 0, 0, 1)^T}.
Let |ψ⟩ ∈ ℂ^(2^N) be an arbitrary state vector for a quantum computer containing N qubits:
|ψ⟩ = Σ_{j=0}^{2^N − 1} α_j |j⟩.
Since all state vectors are unit vectors, we require that
Σ_{j=0}^{2^N − 1} |α_j|² = 1,
and we refer to the value α_j as the amplitude of the computational basis state |j⟩.
To simplify our notation, we denote the k-qubit state vector consisting of k copies of the same state |ψ⟩ by |ψ⟩^⊗k. Furthermore, when |ψ⟩ ∈ {|0⟩, |1⟩}, we often omit the superscript entirely and write the state simply as |0⟩ or |1⟩, so long as the number of qubits is clear from the context. A set of qubits is often referred to as a quantum register to distinguish between the set of qubits and the quantum computer itself. The quantum computational variant of logic gates is represented as unitary matrices that can be applied to the state vector of a quantum computer.
One of the properties that distinguishes quantum computers from their classical counterparts is the inability to view the state vector directly. Instead, we are only able to perform a measurement, which will produce a computational basis vector with probability determined by its amplitude. That is to say, the probability of obtaining |j⟩ upon measurement is given by |α_j|². For a given state vector with more than one non-zero amplitude, we say that it is in a superposition of the corresponding states. Finally, a quantum algorithm is defined as a unitary operator applied to the state vector, possibly followed by measurement.
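To make the Born rule concrete, the short Python sketch below (our own illustration; NumPy and the helper name `measure` are not part of the paper) samples a basis-state index j with probability |α_j|²:

```python
import numpy as np

def measure(state, rng=None):
    """Simulate a computational-basis measurement of a state vector.

    Returns the index j of the observed basis state |j>, sampled
    with probability |alpha_j|^2 (the Born rule)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.abs(np.asarray(state)) ** 2
    probs = probs / probs.sum()        # guard against rounding drift
    return int(rng.choice(len(probs), p=probs))

# The equal superposition (|0> + |1>)/sqrt(2) yields 0 and 1 with
# probability 1/2 each.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
samples = [measure(plus) for _ in range(1000)]
```

A faithful simulator would also collapse `state` onto the observed basis vector after each measurement; we omit this for brevity.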

2.2. QSearch

Let A be a measurement-free quantum algorithm such that A|0⟩ = |φ⟩. Recall that the state vector |φ⟩ is a unit vector in ℂ^(2^m), where m is the number of qubits used to store the state. We can separate ℂ^(2^m) into a subspace of desired states (D) and its orthogonal complement containing all undesired states (U). These subspaces are determined by partitioning the computational basis into a set of desired measurement results D̄ and a set of undesired measurement results Ū. In [3], Section 2, this representation of a state vector is used to introduce amplitude amplification, a class of algorithms designed to increase the probability of obtaining an element of D̄ upon measurement. In this section, we review the amplitude amplification algorithm QSearch, beginning with the fundamental details of amplitude amplification itself.
A key component of amplitude amplification is a unitary operator Q, chosen such that applying it to the state |φ⟩ increases the amplitudes of states in D while decreasing those of U. Let |φ_d⟩ ∈ D and |φ_u⟩ ∈ U be such that
|φ⟩ = |φ_d⟩ + |φ_u⟩.
The reader should take care to note that since |φ⟩ is a state vector, it is also a unit vector. Thus, if |φ_d⟩ and |φ_u⟩ are both non-zero vectors, then neither of them is a unit vector. Q is defined in [3], Section 2, as the unitary operator
Q = (I − 2|φ⟩⟨φ|)(I − (2/⟨φ_u|φ_u⟩) |φ_u⟩⟨φ_u|).
However, Q is more often implemented using the formula provided by the following theorem from [3], Section 2.
Theorem 1
([3], Section 2). We can express Q as Q = A S_0 A† S_D, where S_0 and S_D are the unitary operators defined by
S_0 = I − 2|0⟩⟨0|,
S_D = I − (2/⟨φ_u|φ_u⟩) |φ_u⟩⟨φ_u|.
To see how Q can be used to increase the probability of obtaining a desired state upon measurement, we have from [3], Section 2, that for all j ∈ ℕ,
Q^j |φ⟩ = (1/√⟨φ_d|φ_d⟩) sin((2j + 1)θ_d) |φ_d⟩ + (1/√⟨φ_u|φ_u⟩) cos((2j + 1)θ_d) |φ_u⟩.
Thus, the probability of obtaining a desired state is increased by choosing j such that sin²(θ_d) < sin²((2j + 1)θ_d) ≤ 1. However, the ideal choice of j depends on the value of θ_d. In [3], Section 2, several amplitude amplification methods are presented, each differing based on our knowledge of θ_d. For our purposes, we are only concerned with the QSearch algorithm. This method assumes we have no prior knowledge of θ_d. We now present Algorithm 1 ([3], Section 2).
Algorithm 1 QSearch
1: function QSearch(Q, |φ⟩, D)
2:   Let l = 0 and c ∈ (1, 2).
3:   Let |R⟩ be the result of measuring |φ⟩.
4:   while |R⟩ ∉ D do
5:     Set l = l + 1.
6:     Select j ∈ [1, c^l] ∩ ℤ uniformly at random.
7:     Let |R⟩ be the result of measuring Q^j|φ⟩.
8:     if |R⟩ ∉ D then
9:       Let |R⟩ be the result of measuring |φ⟩.
10:    end if
11:  end while
12:  return |R⟩.
13: end function
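The control flow of Algorithm 1 can be simulated classically under the standard assumption that t of the N searched items are desired, so that sin²(θ_d) = t/N. In the Python sketch below (our own illustration; the function name and default c = 1.5 are assumptions), each measurement of Q^j|φ⟩ is replaced by a Bernoulli trial with success probability sin²((2j + 1)θ_d); the sketch models the growing schedule for j, not the quantum operator Q itself:

```python
import math
import random

def qsearch_sim(N, t, c=1.5, rng=random):
    """Simulate QSearch's classical control loop (Algorithm 1).

    Returns the number of simulated measurements performed before a
    desired item is observed. Mirroring Theorem 2, the loop never
    terminates when t == 0."""
    theta = math.asin(math.sqrt(t / N))   # sin^2(theta_d) = t / N
    calls = 1
    if rng.random() < t / N:              # line 3: measure |phi>
        return calls
    l = 0
    while True:
        l += 1
        j = rng.randint(1, max(1, int(c ** l)))                # line 6
        calls += 1
        if rng.random() < math.sin((2 * j + 1) * theta) ** 2:  # line 7
            return calls
        calls += 1
        if rng.random() < t / N:          # line 9: measure |phi> again
            return calls
```

Averaging `qsearch_sim(N, t)` over many runs exhibits roughly the O(√(N/t)) growth described by Theorem 2.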
Our presentation of QSearch differs slightly from that in [3], Section 2. In our presentation, we pass Q and |φ⟩ as input, whereas the original uses A and χ, where χ is a binary function used in the construction of S_D. Our version is equivalent; the change is made only to streamline the notation and better align with the results presented later in the paper. We are now prepared to present the following result for QSearch, whose proof is omitted for the sake of concision. Interested readers are encouraged to review its original presentation in [3], Theorem 3. We note that in the original presentation, the authors emphasize the expected number of times the algorithm A and its inverse A† are used. Since this is a direct result of the number of times Q is used, we have adjusted the language to emphasize this fact.
Theorem 2
(QSearch Complexity) ([3], Theorem 3). For some quantum algorithm A, let |φ⟩ and Q be as defined in (4) and (5). Furthermore, suppose ⟨φ_d|φ_d⟩ = t/N for some non-negative integer t and positive integer N. Exactly one of the following cases holds based on the value of t:
  • t > 0: The expected number of times QSearch will use the operator Q before returning a desired state is in O(√(N/t)).
  • t = 0: QSearch will fail to terminate.
That QSearch fails to terminate when ⟨φ_d|φ_d⟩ = 0 presents a significant hurdle when attempting to use it as a subroutine in algorithms such as GPS. Traditional approaches, such as those in [7,8], make use of known error bounds originally established in [9] to terminate the algorithm when there is a high probability that ⟨φ_d|φ_d⟩ = 0. However, this leaves a non-zero probability that ⟨φ_d|φ_d⟩ ≠ 0. It is this non-zero probability that causes a direct application of QSearch in GPS algorithms to violate the convergence criteria.

2.3. Generalized Pattern Search Algorithms

In this section, we review the construction of GPS algorithms, originally introduced by Torczon in [11] and later expanded on by Audet and Dennis in [1]. Here, we follow Audet and Dennis’s formulation, where iterations are separated into distinct search and poll steps that better illustrate the convergence behavior of the algorithm. To simplify our presentation, we restrict ourselves to the unconstrained optimization problem.
GPS algorithms solve the optimization problem
min_{x ∈ ℝⁿ} f(x)
for some f : ℝⁿ → ℝ, by generating a sequence of iterates {x_k}_{k=0}^∞ such that {f(x_k)}_{k=0}^∞ is a non-increasing sequence. A single iteration consists of the search and poll steps.
In the search step, we evaluate f at a finite number of points from a set called the mesh. At the kth iteration, the mesh is the set defined by
M_k = {x_k + Δ_k D z : z ∈ ℤ₊^p},
where x_k is our current iterate, Δ_k is a positive real number called the mesh size parameter, and D is a real-valued n × p matrix whose columns form a positive spanning set, i.e., every vector in ℝⁿ can be written as a non-negative linear combination of D’s columns. Additionally, D satisfies the restriction that its columns are the products of some non-singular generating matrix G ∈ ℝ^(n×n) and integer vectors z_j ∈ ℤⁿ for j = 1, 2, …, p. Symbolically, this restriction means D takes the form D = [Gz_1 ⋯ Gz_p]. At iteration k, we fix a subset of M_k, denoted by S_k, that will be searched. Thus, the number of points evaluated during the search step is given by n_s(k) = |S_k|. We call a point y ∈ S_k an improved mesh point if f(x_k) > f(y). If such a point is found during the search step, we end the iteration. Otherwise, we begin the poll step.
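For intuition, the mesh M_k can be enumerated directly in small examples. The Python sketch below uses the illustrative choices G = I and D = [e₁ e₂ −e₁−e₂], whose columns form a positive spanning set for ℝ²; the matrix, iterate, and integer vectors are our own toy example, not taken from the paper:

```python
import numpy as np

# Toy 2-D mesh: G = I, so the columns of D are the integer vectors
# e1, e2, and -e1-e2, which positively span R^2.
D = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])

def mesh_points(x, delta, D, zs):
    """Return the mesh points x + delta * D z for each integer vector z."""
    return [x + delta * (D @ z) for z in zs]

x_k = np.array([0.5, 0.5])       # current iterate
delta_k = 0.25                   # mesh size parameter
zs = [np.array(z) for z in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, 1, 0)]]
pts = mesh_points(x_k, delta_k, D, zs)
```

Each returned point lies on the lattice generated by the columns of Δ_k D anchored at x_k, which is exactly the set a search step samples from.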
During the kth iteration, we select a subset of the columns of D, denoted by D_k, which is a positive spanning set. The poll set P_k is formed using the current iterate and the elements of D_k:
P_k = {x_k + Δ_k d : d ∈ D_k}.
If the poll step is invoked, we evaluate the objective function for each of the elements in P_k to check for an improved mesh point. Thus, the number of points evaluated during the poll step is given by n_p(k) = |P_k|. If no improved mesh points are found, we call the current iterate x_k a local mesh optimizer.
Whenever a point y is identified as an improved mesh point by either the search or poll steps, we immediately end the iteration, set x_{k+1} = y, and either increase or maintain the mesh size parameter. Alternatively, if x_k is determined to be a local mesh optimizer, then we set x_{k+1} = x_k and shrink the mesh size parameter. The changes in the mesh size parameter are determined by the mesh adjustment parameter, a rational number τ > 1, and the set of mesh adjustment exponents, a finite set of integers {ω : ω ∈ [ω⁻, ω⁺] ∩ ℤ} where ω⁻ ≤ −1 and ω⁺ ≥ 0. When an improved mesh point is found, the mesh adjustment exponent is set to some ω ∈ [0, ω⁺] so that Δ_{k+1} = τ^ω Δ_k ≥ Δ_k. Conversely, when both the search and poll steps fail to produce an improved mesh point at iteration k, the mesh adjustment exponent is set to some ω ∈ [ω⁻, −1] so that Δ_{k+1} = τ^ω Δ_k < Δ_k. We now present Algorithm 2 ([1], Section 2).
Algorithm 2 Generalized Pattern Search
1: function GPS(G, Z, Δ_0, τ, ω⁻, ω⁺, ε, x_0)
2:   Let D = GZ and k = 0.
3:   while Δ_k > ε do
4:     Prepare S_k ⊆ M_k = {x_k + Δ_k Dz : z ∈ ℕ^p}, with |S_k| = n_s(k) < ∞.
5:     if there exists x⁺ ∈ S_k such that f(x⁺) < f(x_k) then
6:       Let ω be an integer in [0, ω⁺].
7:       Let x_{k+1} = x⁺ and Δ_{k+1} = τ^ω Δ_k.
8:     else
9:       Select columns of D to form a positive spanning set D_k.
10:      Let P_k = {x_k + Δ_k d : d ∈ D_k}.
11:      if there exists x⁺ ∈ P_k such that f(x⁺) < f(x_k) then
12:        Let ω be an integer in [0, ω⁺].
13:        Let x_{k+1} = x⁺ and Δ_{k+1} = τ^ω Δ_k.
14:      else
15:        Let ω be an integer in [ω⁻, −1].
16:        Let x_{k+1} = x_k and Δ_{k+1} = τ^ω Δ_k.
17:      end if
18:    end if
19:    Let k = k + 1.
20:  end while
21:  Let x* = x_k.
22:  return x*.
23: end function
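A minimal classical rendering of Algorithm 2 helps fix ideas before the quantum variant is introduced. The Python sketch below assumes an empty search step, polls the coordinate directions ±e_i (a positive spanning set), and takes τ = 2 with ω = 1 on success and ω = −1 on failure; all of these parameter choices are our own illustrative defaults, not prescriptions from the paper:

```python
import numpy as np

def gps(f, x0, delta0=1.0, tau=2.0, eps=1e-6, max_iter=10_000):
    """Minimal GPS sketch (Algorithm 2) with an empty search step.

    Polls the directions +/- e_i, doubles the mesh size on an
    improved mesh point, and halves it at a local mesh optimizer."""
    x, delta = np.asarray(x0, dtype=float), delta0
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # D_k = {+-e_i}
    for _ in range(max_iter):
        if delta <= eps:
            break
        for d in dirs:                           # poll step
            if f(x + delta * d) < f(x):
                x = x + delta * d                # improved mesh point
                delta *= tau
                break
        else:
            delta /= tau                         # local mesh optimizer
    return x

x_star = gps(lambda x: float(np.sum((x - 1.0) ** 2)), np.zeros(2))
```

On the smooth quadratic above the iterates reach the minimizer (1, 1), consistent with Theorem 3’s guarantee that refining subsequences converge to stationary points when f is differentiable.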
We conclude this section with a critical but brief discussion of the convergence results for GPS originally presented in [1], Section 3, and how they must be delicately handled in the presence of quantum search subroutines. The following theorem embodies the main results of [1], Section 3.
Theorem 3.
Suppose f : ℝⁿ → ℝ has bounded sublevel sets. Then, the following hold for the sequence X = {x_j}_{j=0}^∞ of GPS:
1. ([1], Theorem 3.6). The iterate set contains a refining subsequence, i.e., a subsequence {x_{j_i}}_{i=0}^∞ of local mesh optimizers such that lim_{i→∞} Δ_{j_i} = 0, which converges to some x*.
2. ([1], Theorem 3.7). Let {x_{j_i}}_{i=0}^∞ be a convergent refining subsequence with limit point x* and d be an element of a positive spanning set. If the objective function is evaluated at x_{j_i} + Δ_{j_i} d for infinitely many iterates in {x_{j_i}}_{i=0}^∞, and f is Lipschitz in a neighborhood of x*, then the Clarke generalized directional derivative of f at x* in the direction of d is non-negative:
f°(x*; d) = limsup_{y→x*, t↓0} [f(y + td) − f(y)] / t ≥ 0.
3. ([1], Theorem 3.9). Let {x_{j_i}}_{i=0}^∞ be a convergent refining subsequence with limit x*. If the objective function f is strictly differentiable at x*, then ∇f(x*) = 0.
The proof of [1], Theorem 3.6, pivotally relies upon the mesh size parameter shrinking only at the local mesh optimizers. GPS will frequently violate this assumption when equipped with quantum search methods, such as QSearch, using the techniques in [7,8].

3. Adapting QSearch for Generalized Pattern Search Algorithms

In this section, we present our main result, the Quantum Improved Point Search algorithm (QIPS). We begin in Section 3.1, where we discuss how to represent our optimization problem on a quantum computer. In Section 3.2, we discuss how we have modified QSearch to develop QIPS and prove that it retains the expected reduction in oracle calls provided by QSearch while satisfying the previously mentioned convergence criteria required by GPS.

3.1. The Quantum Representation of Our Optimization Problem

In this section, we discuss how the elements of our optimization problem will be represented on a quantum computer. Our representation of real numbers on a quantum computer will mimic that of a classical computer, using qubits in place of classical bits. For example, the number 0100110 would be stored as |0100110⟩. Performing binary arithmetic on numbers stored in this manner is an active area of research, with a wide variety of approaches proposed. For simplicity, we have chosen to represent real numbers using a fixed-point representation. We represent a point y ∈ ℝⁿ as a computational basis state where the number of qubits used corresponds to the fixed-point representation chosen. Furthermore, we assume access to a quantum oracle F that is able to evaluate the objective function. To ensure that F is a valid unitary operator, we store the state |f(y)⟩ in a separate register from |y⟩. Thus, we assume F satisfies the following:
F|y⟩|0⟩ = |y⟩|f(y)⟩.
Given F as defined above, its extension to a unitary operator on the full state space is straightforward. Since we are only concerned with F’s behavior on the elements of span{|y⟩|0⟩ : y ∈ ℝⁿ}, we assume any valid linear extension has been chosen to define F.
The advantage of using the quantum oracle F over its classical counterpart stems from the ability to apply F to a set of points stored in superposition, enabling the objective function to be evaluated for each point in the set using a single call to F. Let Y = {y_j}_{j=0}^{N−1} be a set of N points in ℝⁿ. Using an algorithm such as those described in [12], we can prepare the state (1/√N) Σ_{j=0}^{N−1} |y_j⟩|0⟩. Applying F to this state then gives
F((1/√N) Σ_{j=0}^{N−1} |y_j⟩|0⟩) = (1/√N) Σ_{j=0}^{N−1} |y_j⟩|f(y_j)⟩.
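The superposed call to F can be mimicked with classical bookkeeping by storing the state as a map from basis labels (y, v) to amplitudes. The Python sketch below (our own illustration) updates every branch at once; note that a classical simulation pays for each branch explicitly and therefore does not reproduce the quantum cost model, in which this update counts as one oracle call:

```python
import numpy as np

def apply_F(state, f):
    """Simulate one call to F on a superposition.

    `state` maps basis labels (y, v) to amplitudes; F sends
    |y>|0> to |y>|f(y)>, so every branch is updated at once."""
    return {(y, f(y)): amp for (y, _), amp in state.items()}

# Uniform superposition over four candidate points, f-register zeroed.
ys = [0.0, 0.5, 1.0, 1.5]
state = {(y, 0.0): 1 / np.sqrt(len(ys)) for y in ys}
state = apply_F(state, lambda y: y ** 2)
```

After the update, each branch (y, f(y)) retains its original amplitude 1/√N, matching the display above.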
We are now prepared to begin our presentation of the QIPS algorithm.

3.2. Quantum Improved Point Search

In this section, we introduce the QIPS algorithm. Our algorithm aims to leverage the oracle query reductions provided by QSearch (Algorithm 1) while preserving the convergence properties of GPS presented in [1], Section 3. Recall from the theorem governing the complexity of QSearch (Theorem 2) that QSearch fails to terminate if ⟨φ_d|φ_d⟩ = 0. This presents a significant problem for its use in GPS (Algorithm 2) since ⟨φ_d|φ_d⟩ = 0 during the poll step whenever the current iterate is a local mesh optimizer. For example, suppose we have the objective function f(x) = x², mesh size parameter Δ = 1, positive spanning set D = {2, −3}, and current iterate x_0 = 1. Then, we have the poll set P = {3, −2}, and x_0 is a local mesh optimizer for P. It follows that QSearch would fail to terminate when applied in such a situation. The error bounds introduced in [9] allow the possibility of terminating the algorithm when the probability that ⟨φ_d|φ_d⟩ = 0 is high. However, there remains a non-zero probability that ⟨φ_d|φ_d⟩ ≠ 0. This can be remedied somewhat when prior knowledge of the state |φ⟩ allows us to determine the number of states in the computational basis that have non-zero amplitude, as is the case with GPS algorithms. In terms of the search problem, this only requires that we know the number of items being searched. However, even with this prior knowledge, QSearch only guarantees ⟨φ_d|φ_d⟩ = 0 with certainty once all possible states have been returned during the execution of line 7 (Algorithm 1), and this is not guaranteed to occur using a finite number of oracle calls.
This presents a significant problem when applying QSearch in a GPS method since the convergence results shown in Theorem 3 require that the mesh size parameter shrinks only at local mesh optimizers, meaning a direct application of QSearch during the poll step eliminates the convergence guarantees it is designed to provide. To overcome this limitation, we introduce a classical search method we call the Local Mesh Filter (LMF). This method takes as arguments a classical oracle for our objective function f, the current iterate x, a list of points Y in the mesh, and an integer j, and checks up to j points from Y for improved mesh points. This process is run in parallel with QSearch to produce QIPS. Using this approach, we are able to ensure that for sets containing an improved mesh point, QIPS will find one using an expected number of oracle calls in O(√N), and require an expected number of oracle calls in O(N) to determine one does not exist. Algorithm 3 is then as follows:
Algorithm 3 Local Mesh Filter
1: function LMF(f, x, Y, j)
2:   Set k = 1 and x⁺ = x.
3:   Set K = min{j, |Y|}.
4:   while k ≤ K and x⁺ = x do
5:     Choose x_k from Y uniformly at random.
6:     if f(x_k) < f(x) then
7:       Set x⁺ = x_k.
8:     else
9:       Set Y = Y \ {x_k}.
10:      Set k = k + 1.
11:    end if
12:  end while
13:  return x⁺, Y.
14: end function
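Because LMF is purely classical, Algorithm 3 translates directly into Python. The sketch below works on a copy of Y, removing each rejected point so that repeated calls never re-evaluate it; the quadratic test objective is our own illustrative choice:

```python
import random

def lmf(f, x, Y, j, rng=random):
    """Local Mesh Filter (Algorithm 3): examine up to min(j, |Y|)
    randomly chosen points of Y, looking for an improved mesh point.

    Returns (x_plus, Y): x_plus is an improved mesh point if one was
    found, otherwise x; every rejected point is removed from Y."""
    Y = list(Y)                      # work on a copy
    x_plus = x
    K = min(j, len(Y))
    for _ in range(K):
        cand = Y[rng.randrange(len(Y))]
        if f(cand) < f(x):
            x_plus = cand            # improved mesh point found
            break
        Y.remove(cand)               # rejected: never check it again
    return x_plus, Y

x_plus, remaining = lmf(lambda v: v * v, 1.0, [3.0, -2.0, 0.5], 3)
```

Returning the filtered Y is what lets repeated LMF passes inside QIPS make progress toward an exhaustive (at most N-evaluation) search.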
Before we introduce QIPS, we must first discuss how to obtain the necessary inputs related to the QSearch portion of the algorithm. Each call to QIPS will require a unique choice of A corresponding to either the subset of the mesh S_k or the poll set P_k, where k is an integer denoting the current iteration. Letting Y = {y_j}_{j=0}^{N−1} denote an arbitrary subset of the mesh or poll set, a natural approach would be to choose A such that
A|0⟩ = (1/√N) Σ_{j=0}^{N−1} |y_j⟩|f(y_j)⟩.
However, this approach may be unnecessarily expensive when the cost of preparing the operator Q is considered. Recall from Theorem 1 that Q = A S_0 A† S_D, where
S_0 = I − 2|0⟩⟨0|,
S_D = I − (2/⟨φ_u|φ_u⟩) |φ_u⟩⟨φ_u|.
Using Equation (14) to identify improved mesh points requires that our set of desired states D changes whenever an improved mesh point is found. This in turn requires that S_D is recalculated each time an improved mesh point is found. This additional overhead may be avoided by choosing A in such a way that D does not depend on the value of the current iterate. Suppose we use an additional register that contains the state |f(y_j) − f(x_k)⟩, where f(x_k) is the value of our objective function at the current iterate x_k. Then, D may be defined by the sign qubit of |f(y_j) − f(x_k)⟩. Thus, we choose A_Y such that
A_Y|0⟩ = (1/√N) Σ_{j=0}^{N−1} |y_j⟩ |f(y_j)⟩ |f(y_j) − f(x_k)⟩.
Our set of desired states is then
D = {|a⟩|b⟩|c⟩ : c < 0}
for all iterations, and S_D will only need to be computed a single time.
To implement A_Y, we will need three additional algorithms beyond F. Let B be a quantum algorithm such that for computational basis states |a⟩, |b⟩, and |c⟩,
B|a⟩|b⟩|c⟩ = |a⟩|b⟩|b + c⟩,
and let V_Y denote a quantum algorithm such that
V_Y|0⟩ = (1/√N) Σ_{j=0}^{N−1} |y_j⟩|0⟩|0⟩.
Finally, let W_k be a quantum algorithm such that
W_k|0⟩|0⟩|0⟩ = |0⟩|0⟩|−f(x_k)⟩.
Since we are only concerned with the effects of B, V_Y, and W_k as defined above, we assume any valid linear extensions have been chosen. Interested readers are directed to [12,13] for examples of valid choices. Thus, we define A_Y as the quantum algorithm
A_Y = B F V_Y W_k.
With these inputs defined, we return to the introduction of the QIPS algorithm. As previously mentioned, QIPS can be viewed as running QSearch and LMF in parallel, terminating the algorithm if an improved mesh point is found or LMF determines that the current iterate is a local mesh optimizer. We make the additional modifications of fixing the value of c at c = 6/5 and selecting our exponent j to be strictly less than c^l during the lth iteration. These last two changes follow the approach shown in [9], Theorem 3, and are made to simplify our results. Algorithm 4 is as follows:
Algorithm 4 Quantum Improved Point Search
1: function QIPS(Q, |φ⟩, D, f, x, Y)
2:   Let l = 0, c = 6/5, and x⁺ = x.
3:   Let |R_0⟩|R_1⟩|R_2⟩ be the result of measuring |φ⟩.
4:   while |R_0⟩|R_1⟩|R_2⟩ ∉ D, x⁺ = x, and Y ≠ ∅ do
5:     Set l = l + 1.
6:     Select j ∈ [1, c^l) ∩ ℤ uniformly at random.
7:     Let x⁺, Y = LMF(f, x, Y, j + 1).
8:     if x⁺ = x and Y ≠ ∅ then
9:       Let |R_0⟩|R_1⟩|R_2⟩ be the result of measuring Q^j|φ⟩.
10:      if |R_0⟩|R_1⟩|R_2⟩ ∉ D then
11:        Let |R_0⟩|R_1⟩|R_2⟩ be the result of measuring |φ⟩.
12:      end if
13:    end if
14:    if |R_0⟩|R_1⟩|R_2⟩ ∈ D then
15:      Let x⁺ = R_0.
16:    end if
17:  end while
18:  return x⁺.
19: end function
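As with QSearch, the behavior of Algorithm 4 can be simulated classically. In the Python sketch below (our own illustration; the helper logic for the LMF pass is inlined so the sketch is self-contained), each quantum measurement is replaced by a Bernoulli trial with the appropriate success probability, where sin²(θ_d) = t/N and t is the number of improved mesh points in Y. When no improved mesh point exists, the LMF pass exhausts Y and the simulation returns x, mirroring the certainty guarantee the algorithm is built for:

```python
import math
import random

def qips_sim(f, x, Y, c=6 / 5, rng=random):
    """Classical simulation of QIPS (Algorithm 4).

    Measurements of Q^j|phi> and |phi> become Bernoulli trials with
    success probabilities sin^2((2j+1) theta_d) and t/N. The
    interleaved LMF pass guarantees termination after at most N
    classical evaluations when t = 0."""
    Y, x_plus = list(Y), x
    N = len(Y)
    better = [y for y in Y if f(y) < f(x)]
    t = len(better)
    theta = math.asin(math.sqrt(t / N)) if N else 0.0
    if t and rng.random() < t / N:              # line 3
        return rng.choice(better)
    l = 0
    while x_plus == x and Y:                    # line 4
        l += 1
        j = rng.randint(1, max(1, math.ceil(c ** l) - 1))  # j < c^l
        K = min(j + 1, len(Y))                  # line 7: LMF pass
        for _ in range(K):
            cand = Y[rng.randrange(len(Y))]
            if f(cand) < f(x):
                x_plus = cand
                break
            Y.remove(cand)
        if x_plus == x and Y:                   # lines 8-12
            if rng.random() < math.sin((2 * j + 1) * theta) ** 2:
                return rng.choice(better)       # line 9 succeeded
            if t and rng.random() < t / N:
                return rng.choice(better)       # line 11 succeeded
    return x_plus
```

The simulation returns x exactly when every point of Y has been classically rejected, which is the mechanism that preserves the GPS convergence criteria.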
Using QIPS during the search and poll steps of GPS, we are able to reduce the expected number of oracle calls from O(N) to O(√N) when an improved mesh point exists, while retaining the O(N) complexity when one does not. Furthermore, we are guaranteed that QIPS will correctly identify local mesh optimizers, preserving the convergence results of GPS presented in [1], Section 3. We summarize this in the following Theorem.
Theorem 4
(Quantum Improved Point Search: Complexity and Correctness). Given Y = {y_j}_{j=0}^{N−1} ⊆ ℝⁿ, x ∈ ℝⁿ, f : ℝⁿ → ℝ, A as in (22), D as in (18), Q as in (5), and
x̃ = QIPS(Q, |φ⟩, D, f, x, Y),
the following hold:
1. (Correctness) QIPS correctly determines whether Y contains an improved mesh point. More formally, x̃ = x if and only if f(x) ≤ f(y) for all y ∈ Y.
2. (Complexity: Improved Mesh Point) Suppose Y contains an improved mesh point; more formally, suppose there exists y ∈ Y such that f(y) < f(x). Then, QIPS will return an improved mesh point using an expected number of calls to the oracles F and f in O(√(N/t)), where t is the number of improved mesh points contained in Y.
3. (Complexity: Local Mesh Optimizer) Suppose Y does not contain an improved mesh point; more formally, suppose f(x) ≤ f(y) for all y ∈ Y. Then, QIPS will return x using O(N) calls to the oracles F and f.
Proof.
1.
From lines (7) and (15) of QIPS, we have that x̃ = x if and only if the while loop started on line (4) terminates because Y = ∅. The result follows since line (7) returns Y = ∅ if and only if each element originally in Y has been evaluated by the classical oracle f and failed to be an improved mesh point.
2.
Suppose Y contains t > 0 improved mesh points. For any iteration that does not return an improved mesh point, the number of calls to f is identical to the number of calls to F. Furthermore, if an improved mesh point is found on line 3, 9, or 11, then no further calls to f are made. Thus, we proceed by bounding the expected number of calls to F. We obtain an upper bound on the expected number of times F is called following an approach similar to those used in [9] (Lemma 2 and Theorem 3) and [3] (Theorem 3). The oracle F is called only when the oracle Q is called. This occurs a single time on line 3, and then again each time lines 9 and 11 are executed. Lines 3 and 11 return an improved mesh point with probability t/N. On the kth iteration of the while loop begun on line 4, line 9 returns an improved mesh point with probability sin²((2j+1)θ_d) (recall that sin²(θ_d) = t/N). Let P_k denote the probability of obtaining an improved mesh point on the kth iteration of the while loop. Since j is chosen uniformly at random from the integers in [1, ⌈c^k⌉], P_k is given by
$$P_k = \frac{1}{\lceil c^k \rceil} \sum_{j=0}^{\lceil c^k \rceil - 1} \left[ \frac{t}{N} + \frac{N-t}{N} \sin^2\big((2j+1)\theta_d\big) \right].$$
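As a quick sanity check on this per-iteration success probability, the two-stage measurement on lines 9 and 11 can be simulated directly and compared against the closed form t/N + ((N − t)/N)·sin²((2j+1)θ_d). The values N = 128, t = 9, j = 5 below are arbitrary illustrative choices.

```python
import math
import random

rng = random.Random(0)
N, t, j = 128, 9, 5                              # arbitrary example values
theta = math.asin(math.sqrt(t / N))              # sin^2(theta) = t/N
p_grover = math.sin((2 * j + 1) * theta) ** 2    # line 9 succeeds
closed_form = t / N + (N - t) / N * p_grover     # success prob. of one iteration

# Empirically: line 9 succeeds, or line 9 fails and the fresh measurement
# of |phi> on line 11 succeeds with probability t/N (short-circuit 'or'
# mirrors the fact that line 11 only runs when line 9 fails).
trials = 200_000
hits = sum(rng.random() < p_grover or rng.random() < t / N
           for _ in range(trials))
assert abs(hits / trials - closed_form) < 0.01
```

The agreement reflects the identity P(success) = p + (1 − p)·(t/N) = t/N + ((N − t)/N)·p with p = sin²((2j+1)θ_d).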
For all positive integers k, we can establish a lower bound for P_k as follows:

$$
\begin{aligned}
P_k &= \frac{1}{\lceil c^k\rceil}\sum_{j=0}^{\lceil c^k\rceil-1}\left[\frac{t}{N}+\frac{N-t}{N}\sin^2\big((2j+1)\theta_d\big)\right] \\
&\geq \frac{1}{\lceil c^k\rceil}\sum_{j=0}^{\lceil c^k\rceil-1}\left[\frac{t}{N}\sin^2\big((2j+1)\theta_d\big)+\frac{N-t}{N}\sin^2\big((2j+1)\theta_d\big)\right] \\
&= \frac{1}{\lceil c^k\rceil}\sum_{j=0}^{\lceil c^k\rceil-1}\sin^2\big((2j+1)\theta_d\big) \\
&= \frac{1}{2\lceil c^k\rceil}\sum_{j=0}^{\lceil c^k\rceil-1}\Big(1-\cos\big((2j+1)\,2\theta_d\big)\Big).
\end{aligned}
$$
By [9], Lemma 1, it follows that

$$
\begin{aligned}
P_k &\geq \frac{1}{2\lceil c^k\rceil}\sum_{j=0}^{\lceil c^k\rceil-1}\Big(1-\cos\big((2j+1)\,2\theta_d\big)\Big) \\
&= \frac{1}{2}-\frac{\sin\big(4\lceil c^k\rceil\theta_d\big)}{4\lceil c^k\rceil\sin(2\theta_d)} \\
&\geq \frac{1}{2}-\frac{1}{4\lceil c^k\rceil\sin(2\theta_d)}.
\end{aligned}
$$
In particular, this implies P_k ≥ 1/4 whenever ⌈c^k⌉ ≥ 1/sin(2θ_d).
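This bound can be spot-checked numerically. The small script below (an illustrative check, with `p_k` averaging the per-iteration success probability over j = 0, …, m − 1 for m = ⌈c^k⌉) confirms that the averaged probability clears 1/4 once m reaches 1/sin(2θ_d):

```python
import math

def p_k(N, t, m):
    """Average success probability over j = 0..m-1, with sin^2(theta) = t/N."""
    theta = math.asin(math.sqrt(t / N))
    return sum(t / N + (N - t) / N * math.sin((2 * j + 1) * theta) ** 2
               for j in range(m)) / m

# Whenever m = ceil(c^k) >= 1/sin(2*theta_d), the bound gives P_k >= 1/4.
for N in (64, 256, 1024):
    for t in (1, 3, N // 5):
        theta = math.asin(math.sqrt(t / N))
        m = math.ceil(1 / math.sin(2 * theta))
        assert p_k(N, t, m) >= 0.25
```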
If t > 3N/4, then the expected number of times line 11 will be executed is bounded above by 4 and the result follows. We now assume 0 < t ≤ 3N/4. In this case, we have that

$$\frac{1}{\sin(2\theta_d)} = \frac{N}{2\sqrt{(N-t)\,t}} \leq \sqrt{\frac{N}{t}}.$$
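This identity and bound are easy to verify numerically (an illustrative check; note that equality holds exactly at the endpoint t = 3N/4, which is why the comparison below allows a small tolerance):

```python
import math

# sin^2(theta) = t/N gives sin(2*theta) = 2*sqrt(t*(N-t))/N, hence
# 1/sin(2*theta) = N/(2*sqrt((N-t)*t)) <= sqrt(N/t) for 0 < t <= 3N/4.
for N in (16, 100, 4096):
    for t in range(1, (3 * N) // 4 + 1):
        theta = math.asin(math.sqrt(t / N))
        lhs = 1 / math.sin(2 * theta)
        assert abs(lhs - N / (2 * math.sqrt((N - t) * t))) < 1e-9
        assert lhs <= math.sqrt(N / t) + 1e-9
```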
Let λ = ⌈log_c(1/sin(2θ_d))⌉. During the kth iteration of the while loop, the total number of calls to F is bounded above by ⌈c^k⌉ + 1. It follows that the total number of calls to F while l ≤ λ is bounded above by
$$\sum_{k=0}^{\lambda}\left(c^{k}+1\right) = \lambda + 1 + \sum_{k=0}^{\lambda}c^{k} \leq \lambda + 1 + \frac{1-\sqrt{N/t}}{1-c} = \lambda + 1 + \frac{1}{c-1}\left(\sqrt{\frac{N}{t}}-1\right).$$
Hence, if an improved mesh point is returned during an iteration k ≤ λ, it does so using O(√(N/t)) calls to F. Now suppose an improved mesh point is not found during the first λ iterations of the while loop. Since the probability of obtaining an improved mesh point on each iteration after iteration λ is bounded below by 1/4, it follows that the expected number of calls to F needed to obtain an improved mesh point is bounded above by
$$
\begin{aligned}
\sum_{k=0}^{\infty}\frac{1}{4}\left(\frac{3}{4}\right)^{k}\left(c^{k+\lambda}+1\right) &= \frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^{k}+\frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^{k}c^{k}c^{\lambda} \\
&\leq \frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^{k}+\frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^{k}\left(\frac{6}{5}\right)^{k}\sqrt{\frac{N}{t}} \\
&= \frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{3}{4}\right)^{k}+\frac{1}{4}\sum_{k=0}^{\infty}\left(\frac{9}{10}\right)^{k}\sqrt{\frac{N}{t}} \\
&= 1 + \frac{10}{4}\sqrt{\frac{N}{t}}.
\end{aligned}
$$
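The final two series are standard geometric sums, which can be confirmed directly (an illustrative numeric check; the truncation lengths 200 and 500 make the tails negligible):

```python
# (1/4) * sum (3/4)^k = 1 and (1/4) * sum (9/10)^k = 10/4, so the bound
# collapses to 1 + (10/4) * sqrt(N/t).
s1 = sum(0.25 * (3 / 4) ** k for k in range(200))
s2 = sum(0.25 * (9 / 10) ** k for k in range(500))
assert abs(s1 - 1.0) < 1e-6
assert abs(s2 - 2.5) < 1e-6
```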
Thus, the expected number of calls to the oracle F is in O(√(N/t)).
3.
Suppose Y contains no improved mesh points. Then |R₀⟩|R₁⟩|R₂⟩ ∉ D during each measurement on lines (3), (9), and (11). The while loop terminates once Y = ∅. This occurs only after the LMF algorithm, called on line (7), has evaluated each point originally in Y and determined that Y contains no improved mesh points. Line (9) of LMF guarantees that no point in Y is evaluated by the oracle f twice, ensuring f is called at most N times. Since the number of times the oracle F is called on lines (9) and (11) of QIPS is bounded above by the number of times f is called, it follows that the number of times F is called is also bounded above by N. □

4. Conclusions

Quantum algorithms are theorized to hold numerous potential advantages over classical methods. However, the probabilistic nature of these algorithms may pose significant problems when they are used as subroutines within a larger algorithm. In this work, we considered the impact of using quantum search methods to perform the search and poll steps of GPS. These methods provide a significant reduction in the expected number of oracle calls needed when a solution exists. Unfortunately, they are not guaranteed to accurately determine that no solution exists using a finite number of oracle calls. Here, we have solved this problem by introducing QIPS, a quantum–classical hybrid variant of the quantum search algorithm QSearch. QIPS allows us to preserve the reduction in oracle calls provided by QSearch when an improved mesh point exists, while accurately determining that no improved mesh point exists using O(N) oracle calls. We expect to build upon this work in two ways. First, we will analyze the behavior of QIPS when used in similar DFO algorithms such as Mesh Adaptive Direct Search [14]. Second, we plan to explore alternative DFO algorithms that utilize quantum search methods but do not share the convergence criteria of GPS.

Author Contributions

Conceptualization, C.M., D.H.G. and V.E.H.; Methodology, C.M., D.H.G. and V.E.H.; Formal analysis, C.M., D.H.G. and V.E.H.; Writing—original draft, C.M., D.H.G. and V.E.H.; Writing—review and editing, C.M., D.H.G. and V.E.H.; Supervision, D.H.G. and V.E.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GPS: Generalized Pattern Search
QIPS: Quantum Improved Point Search
DFO: Derivative-Free Optimization

References

  1. Audet, C.; Dennis, J.E. Analysis of Generalized Pattern Searches. SIAM J. Optim. 2002, 13, 889–903.
  2. Grover, L.K. Quantum Mechanics Helps in Searching for a Needle in a Haystack. Phys. Rev. Lett. 1997, 79, 325–328.
  3. Brassard, G.; Høyer, P.; Mosca, M.; Tapp, A. Quantum Amplitude Amplification and Estimation. In Quantum Computation and Information; Contemp. Math. 2002, 305, 53–74.
  4. Gilyén, A.; Arunachalam, S.; Wiebe, N. Optimizing Quantum Optimization Algorithms via Faster Quantum Gradient Computation. arXiv 2017, arXiv:1711.00465.
  5. Jordan, S.P. Fast Quantum Algorithm for Numerical Gradient Estimation. Phys. Rev. Lett. 2005, 95, 050501.
  6. Bernstein, E.; Vazirani, U. Quantum Complexity Theory. SIAM J. Comput. 1997, 26, 1411–1473.
  7. Dürr, C.; Høyer, P. A Quantum Algorithm for Finding the Minimum. arXiv 1996, arXiv:quant-ph/9607014.
  8. Baritompa, W.; Bulger, D.; Wood, G. Grover's Quantum Algorithm Applied to Global Optimization. SIAM J. Optim. 2005, 15, 1170–1184.
  9. Boyer, M.; Brassard, G.; Høyer, P.; Tapp, A. Tight Bounds on Quantum Searching. Fortschr. Phys. 1998, 46, 493–505.
  10. Arunachalam, S. Quantum Speed-ups for Boolean Satisfiability and Derivative-Free Optimization. Master's Thesis, University of Waterloo, Waterloo, ON, Canada, 2014.
  11. Torczon, V. On the Convergence of Pattern Search Algorithms. SIAM J. Optim. 1997, 7, 1–25.
  12. Cortese, J.A.; Braje, T.M. Loading Classical Data into a Quantum Computer. arXiv 2018, arXiv:1803.01958.
  13. Draper, T.G. Addition on a Quantum Computer. arXiv 2000, arXiv:quant-ph/0008033.
  14. Audet, C.; Dennis, J. Mesh Adaptive Direct Search Algorithms for Constrained Optimization. SIAM J. Optim. 2006, 17, 188–217.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Mikes, C.; Gutman, D.H.; Howle, V.E. Quantum-Enhanced Generalized Pattern Search Optimization. Quantum Rep. 2024, 6, 509-521. https://doi.org/10.3390/quantum6040034
