Abstract
While the development of quantum computers promises a myriad of advantages over their classical counterparts, care must be taken when designing algorithms that substitute a classical technique with a potentially advantageous quantum method. The probabilistic nature of many quantum algorithms may result in new behavior that could negatively impact the performance of the larger algorithm. The purpose of this work is to preserve the advantages of applying quantum search methods to generalized pattern search algorithms (GPSs) without violating their convergence criteria. It is well known that quantum search methods are able to reduce the expected number of oracle calls needed to find the solution to a search problem over N items from $\mathcal{O}(N)$ to $\mathcal{O}(\sqrt{N})$. However, the number of oracle calls needed to determine with certainty that no solution exists is exceedingly high and potentially infinite. In the case of GPS, this is a significant problem, since overlooking a solution during an iteration violates an assumption needed for convergence. Here, we overcome this problem by introducing the quantum improved point search (QIPS), a classical–quantum hybrid variant of the quantum search algorithm QSearch. QIPS retains the oracle query complexity of QSearch when a solution exists. However, it is able to determine, with certainty, that no solution exists using only $\mathcal{O}(N)$ oracle calls.
1. Introduction
Traditional optimization algorithms rely on derivatives to obtain their results. However, in many cases, derivative information may be unavailable. Even if the derivatives of some order exist, it may be impractical or prohibitively expensive to calculate any of them. For example, it can be impractical to obtain gradient information when the objective function is computed using a black-box simulation. Similarly, approximation methods such as finite differences can become prohibitively expensive or inaccurate for objective functions that are expensive to calculate or noisy. Even for functions constructed recursively from elementary functions using composition and arithmetic operations, gradient computation may incur onerous costs in time and memory. Automatic differentiation can also be used to obtain gradient information in some cases, but is not always practical since it either needs the source code for the function to be available, or to be built into the original code through a method such as operator overloading. In these situations, we turn to derivative-free optimization (DFO) algorithms. DFO algorithms seek to solve a given optimization problem without the use of differential information of any order. Instead, they rely on calls to a given oracle to evaluate the objective function. These calls are often expensive and become the dominant cost of the algorithm.
In this work, we are concerned with a class of DFO algorithms known as Generalized Pattern Search (GPS). These algorithms generate a sequence of iterates with non-increasing objective function values by systematically selecting points to be evaluated and considered for the next iterate. This process is split into two steps: the search and poll steps. The only difference between them is how points are chosen to be evaluated. The search step is designed to allow for increased flexibility when selecting which points to consider, potentially reducing the number of iterations needed to find a solution. The poll step is constructed to ensure convergence of the algorithm, only considering those points necessary to satisfy the convergence criteria outlined in [], Section 3. A typical iteration will begin with the search step. The poll step is invoked only if the search step fails to produce an iterate with a reduced objective function value.
Both the search and poll steps of GPS can be framed as unstructured search problems on finite sets. It is well known that quantum computers are able to reduce the expected number of oracle calls in such problems from $\mathcal{O}(N)$ to $\mathcal{O}(\sqrt{N})$ [,]. As in [,,], we make the standard assumption that any difference in the computational cost of a quantum and a classical oracle call is negligible. At first glance, quantum search techniques would seem to be a natural fit for GPS. However, quantum search methods assume the existence of a solution within the set being searched and are not equipped to determine when no solution exists. Terminating the algorithm before an exhaustive search has been performed invalidates the convergence properties provided by the poll step, while an exhaustive search is not guaranteed to complete within a finite number of oracle calls. This presents a significant hurdle for applying quantum search methods to GPS algorithms, since the sets searched during an iteration of GPS are not guaranteed to contain a point with a reduced objective function value.
The application of the quantum search techniques developed in [,] for optimization was previously explored in earlier works such as [,]. However, these approaches restrict their search to a single finite set of points, obtaining an accurate solution with high probability using the error bounds initially presented in []. Applying this approach to GPS, as was first touched on by Arunachalam [], allows for the potential of error during the poll step, violating necessary assumptions for convergence. The main contribution of this paper is the introduction of the Quantum Improved Point Search algorithm (QIPS), a quantum–classical hybrid variant of the quantum search algorithm QSearch ([], Section 2). QIPS introduces a classical search process that runs in parallel with QSearch, designed to leverage the reduction in oracle calls provided by QSearch while ensuring the convergence criteria for GPS presented in [] are satisfied.
The remaining sections of this paper are organized as follows. Section 2 is dedicated to reviewing the background material necessary for the rest of the paper. In Section 2.1, we discuss the fundamental aspects of quantum computing. In Section 2.2, we review the quantum search algorithm QSearch. In Section 2.3, we review classical GPS algorithms and their convergence criteria. In Section 3, we introduce our main result, the QIPS algorithm. We begin in Section 3.1, where we discuss how our optimization problem is represented in a quantum computer. In Section 3.2, we formally introduce the QIPS algorithm and show that it retains the advantages of quantum search methods while preserving convergence when used for GPS. We end the paper with some concluding remarks in Section 4.
2. Preliminaries
In this section, we review all of the technology and notation from quantum computing and classical GPS algorithms required to understand this paper’s main contributions. In Section 2.1, we discuss the fundamental components of quantum computing that will be used throughout the paper. Then, in Section 2.2, we review the quantum search algorithm QSearch introduced in [], Section 2. In Section 2.3, we review classical GPS algorithms, following the formulation presented by Audet and Dennis in []. In the original formulation introduced by Torczon in [], the search and poll steps are not treated as distinct steps. Audet and Dennis’s separation of GPS iterations into the search and poll steps allows for a clearer understanding of the algorithm’s convergence properties.
2.1. Quantum Computing
In this section, we provide a brief review of the fundamental details of quantum computing relevant to our results. As their name implies, quantum computers exploit the quantum mechanical behavior of a physical system in order to perform some specified computation. The state of a quantum computer is described by its state vector, a unit vector in a complex-valued inner product space, sometimes referred to as the state space. The state vector is stored using qubits, a quantum mechanical variant of the classical bit. A single qubit is a unit vector in $\mathbb{C}^2$, denoted as $|\psi\rangle = a_0|0\rangle + a_1|1\rangle$, where $|a_0|^2 + |a_1|^2 = 1$, and $\{|0\rangle, |1\rangle\}$ is the standard basis for $\mathbb{C}^2$, referred to here as the computational basis. For those unfamiliar with this notation, better known as Dirac notation, we note that $|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.
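As an illustrative aside (not part of the original presentation), the single-qubit formalism above can be checked numerically. The sketch below builds $a_0|0\rangle + a_1|1\rangle$ as a plain vector and verifies the unit-norm condition; the amplitudes $3/5$ and $4/5$ are arbitrary choices.

```python
import numpy as np

# Computational basis for a single qubit: |0> and |1> as column vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An arbitrary qubit a0|0> + a1|1> with |a0|^2 + |a1|^2 = 1.
a0, a1 = 3 / 5, 4 / 5
psi = a0 * ket0 + a1 * ket1

print(np.linalg.norm(psi))   # ~1.0: state vectors are unit vectors
print(np.vdot(ket0, psi))    # <0|psi> recovers the amplitude a0
```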
Furthermore, the Dirac notation for the conjugate transpose of a vector $|\psi\rangle$ is $\langle\psi|$, and thus the standard inner and outer products are written as $\langle\phi|\psi\rangle$ and $|\phi\rangle\langle\psi|$, respectively. As one may expect, quantum computers are not limited to states consisting of a single qubit. The state vector of a quantum computer containing N qubits is a unit vector in $\mathbb{C}^{2^N}$. Thus, the computational basis for a quantum computer containing N qubits is defined as the standard basis for $\mathbb{C}^{2^N}$. For example, the computational basis for a 2-qubit quantum computer is $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$.
Let $|\psi\rangle$ be an arbitrary state vector for a quantum computer containing N qubits:
$$|\psi\rangle = \sum_{k=0}^{2^N - 1} a_k |k\rangle.$$
Since all state vectors are unit vectors, we require that
$$\sum_{k=0}^{2^N - 1} |a_k|^2 = 1,$$
and we refer to the value $a_k$ as the amplitude of the computational basis state $|k\rangle$.
To simplify our notation, we denote the k-qubit state vector consisting of k copies of the same state $|\psi\rangle$ by $|\psi\rangle^{\otimes k}$. Furthermore, when $|\psi\rangle$ is $|0\rangle$ or $|1\rangle$, we often omit the superscript entirely and write the state simply as $|0\rangle$ or $|1\rangle$, so long as the number of qubits is clear from the context. A set of qubits is often referred to as a quantum register to distinguish between the set of qubits and the quantum computer itself. The quantum computational variant of logic gates is represented by unitary matrices that can be applied to the state vector of a quantum computer.
One of the properties that distinguishes quantum computers from their classical counterparts is the inability to view the state vector directly. Instead, we are only able to perform a measurement, which produces a computational basis vector with probability determined by its amplitude. That is to say, the probability of obtaining $|k\rangle$ upon measurement is given by $|a_k|^2$. For a given state vector with more than one non-zero amplitude, we say that it is in a superposition of the corresponding states. Finally, a quantum algorithm is defined as a unitary operator applied to the state vector, possibly followed by measurement.
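The measurement rule can be illustrated with a small numerical sketch (illustrative only): sample a basis-state index with probability equal to the squared modulus of its amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-qubit state: equal superposition over {|00>, |01>, |10>, |11>},
# stored as a length-4 complex vector of amplitudes a_k.
psi = np.full(4, 0.5, dtype=complex)

# Measurement yields basis state |k> with probability |a_k|^2.
probs = np.abs(psi) ** 2
outcome = rng.choice(4, p=probs)
print(format(outcome, "02b"))   # one of '00', '01', '10', '11', each with probability 1/4
```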
2.2. QSearch
Let $\mathcal{A}$ be a measurement-free quantum algorithm such that $\mathcal{A}|0\rangle = |\psi\rangle$. Recall that the state vector $|\psi\rangle$ is a unit vector in $\mathbb{C}^{2^m}$, where m is the number of qubits used to store the state. We can separate $|\psi\rangle$ into a component $|\psi_1\rangle$ lying in the subspace of desired states and a component $|\psi_0\rangle$ lying in its orthogonal complement, which contains all undesired states. These subspaces are determined by partitioning the computational basis into a set of desired measurement results and a set of undesired measurement results. In [], Section 2, this representation of a state vector is used to introduce amplitude amplification, a class of algorithms designed to increase the probability of obtaining a desired state upon measurement. In this section, we review the amplitude amplification algorithm QSearch, beginning with the fundamental details of amplitude amplification itself.
A key component of amplitude amplification is a unitary operator Q, chosen such that applying it to the state $|\psi\rangle$ increases the amplitudes of the desired states while decreasing those of the undesired states. Let $|\psi_1\rangle$ and $|\psi_0\rangle$ be the projections of $|\psi\rangle$ onto the desired and undesired subspaces, respectively, such that
$$|\psi\rangle = |\psi_1\rangle + |\psi_0\rangle.$$
The reader should take care to note that since $|\psi\rangle$ is a state vector, it is also a unit vector. Thus, if $|\psi_1\rangle$ and $|\psi_0\rangle$ are both non-zero vectors, then neither of them is a unit vector. Q is defined in [], Section 2, as the unitary operator
$$Q = \left(2|\psi\rangle\langle\psi| - I\right)\left(I - \frac{2}{\langle\psi_1|\psi_1\rangle}|\psi_1\rangle\langle\psi_1|\right).$$
However, Q is more often implemented using the formula provided by the following theorem from [], Section 2.
Theorem 1
([], Section 2). Let $\mathcal{A}$ be a measurement-free quantum algorithm with $\mathcal{A}|0\rangle = |\psi\rangle$, let $S_{\chi}$ be the unitary operator that negates the amplitude of each desired computational basis state and acts as the identity on the undesired ones, and let $S_0$ be the unitary operator that negates the amplitude of the state $|0\rangle$. Then
$$Q = -\mathcal{A}S_0\mathcal{A}^{-1}S_{\chi}.$$
To see how Q can be used to increase the probability of obtaining a desired state upon measurement, we have from [], Section 2, that for all integers $j \geq 0$,
$$Q^j|\psi\rangle = \frac{1}{\sqrt{a}}\sin((2j+1)\theta_a)|\psi_1\rangle + \frac{1}{\sqrt{1-a}}\cos((2j+1)\theta_a)|\psi_0\rangle,$$
where $a = \langle\psi_1|\psi_1\rangle$ and $\theta_a = \arcsin(\sqrt{a})$.
Thus, the probability of obtaining a desired state is increased by choosing j such that $\sin^2((2j+1)\theta_a) > a$. However, the ideal choice of j depends on the value of $a$. In [], Section 2, several amplitude amplification methods are presented, each differing based on our knowledge of $a$. For our purposes, we are only concerned with the QSearch algorithm. This method assumes we have no prior knowledge of $a$. We now present Algorithm 1 ([], Section 2).
| Algorithm 1 QSearch |
|
Our presentation of QSearch differs slightly from that in [], Section 2. In our presentation, we pass Q and $|\psi\rangle$ as input, whereas the original uses $\mathcal{A}$ and $\chi$, where $\chi$ is a binary function used in the construction of Q. Our version is equivalent; this change is made only to streamline the notation and better align with the results presented later in the paper. We are now prepared to present the following result for QSearch, whose proof is omitted for the sake of concision. Interested readers are encouraged to review its original presentation in [], Theorem 3. We note that in the original presentation, the authors emphasize the expected number of times the algorithm $\mathcal{A}$ and its inverse are used. Since this is a direct result of the number of times Q is used, we have adjusted the language to emphasize this fact.
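To make the role of the exponent j concrete, the following numerical sketch (illustrative only; it evaluates the standard amplitude-amplification success probability $\sin^2((2j+1)\theta_a)$ rather than simulating a quantum state) shows how the probability of measuring a desired state varies with j, and how a near-optimal j drives it close to 1. The initial probability $a = 1/64$ is an arbitrary choice.

```python
import math

a = 1 / 64                        # initial probability of measuring a desired state
theta = math.asin(math.sqrt(a))   # theta_a, so that sin(theta_a)^2 = a

# Success probability after j applications of Q: sin((2j+1)*theta_a)^2.
for j in (0, 2, 4, 6):
    print(j, math.sin((2 * j + 1) * theta) ** 2)

# Near-optimal exponent: make (2j+1)*theta_a as close to pi/2 as possible.
j_opt = round(math.pi / (4 * theta) - 0.5)
print(j_opt, math.sin((2 * j_opt + 1) * theta) ** 2)  # close to 1
```

Without knowledge of $\theta_a$, however, this optimal j cannot be computed in advance, which is exactly the situation QSearch is designed for.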
Theorem 2
(QSearch Complexity) ([], Theorem 3). For some quantum algorithm $\mathcal{A}$, let $|\psi\rangle$ and Q be as defined in (4) and Theorem 1. Furthermore, suppose $\langle\psi_1|\psi_1\rangle = t/N$ for some non-negative integer t and positive integer N. Exactly one of the following cases holds based on the value of t:
- $t > 0$: The expected number of times QSearch will use the operator Q before returning a desired state is in $\Theta(\sqrt{N/t})$.
- $t = 0$: QSearch will fail to terminate.
That QSearch fails to terminate when $t = 0$ presents a significant hurdle when attempting to use it as a subroutine in algorithms such as GPS. Traditional approaches, such as those in [,], make use of known error bounds originally established in [] to terminate the algorithm when there is a high probability that $t = 0$. However, this leaves a non-zero probability that $t > 0$. It is this non-zero probability that causes a direct application of QSearch in GPS algorithms to violate the convergence criteria.
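The $t > 0$ case of Theorem 2 can be made concrete with a Monte Carlo sketch. The code below simulates only the success probabilities of a QSearch-style exponential schedule (in the style of Boyer et al.; the parameter names and the schedule constant are our illustrative choices, not the paper's pseudocode) and confirms that the average oracle-call count is on the order of $\sqrt{N/t}$. With $t = 0$ the loop would never exit, which is precisely the failure mode discussed above, so the sketch requires $t \geq 1$.

```python
import math
import random

def qsearch_calls(n, t, rng):
    """Monte Carlo sketch of a QSearch-style run (t >= 1 marked items out of n).
    Only the success probabilities are simulated, not the quantum state."""
    theta = math.asin(math.sqrt(t / n))
    m, lam, calls = 1.0, 6 / 5, 0
    while True:
        j = rng.randrange(int(m))              # exponent drawn uniformly below m
        calls += j + 1                         # j applications of Q, plus one preparation/measurement
        if rng.random() < math.sin((2 * j + 1) * theta) ** 2:
            return calls                       # measurement produced a marked item
        m = min(lam * m, math.sqrt(n))         # grow the schedule, capped at sqrt(n)

rng = random.Random(1)
n, t = 1024, 4
avg = sum(qsearch_calls(n, t, rng) for _ in range(2000)) / 2000
print(avg, math.sqrt(n / t))                   # average call count is on the order of sqrt(n/t)
```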
2.3. Generalized Pattern Search Algorithms
In this section, we review the construction of GPS algorithms, originally introduced by Torczon in [] before being expanded on by Audet and Dennis in []. Here, we follow Audet and Dennis’s formulation, where iterations are separated into distinct search and poll steps that better illustrate the convergence behavior of the algorithm. To simplify our presentation, we restrict ourselves to the unconstrained optimization problem.
GPS algorithms solve the optimization problem
$$\min_{x \in \mathbb{R}^n} f(x)$$
for some $f : \mathbb{R}^n \to \mathbb{R}$, by generating a sequence of iterates $\{x_k\}$ such that $\{f(x_k)\}$ is a non-increasing sequence. A single iteration consists of the search and poll steps.
In the search step, we evaluate f at a finite number of points from a set called the mesh. At the kth iteration, the mesh is the set defined by
$$M_k = \{x_k + \Delta_k D z : z \in \mathbb{N}^{n_D}\},$$
where $x_k$ is our current iterate, $\Delta_k$ is a positive real number called the mesh size parameter, and D is a real-valued $n \times n_D$ matrix whose columns form a positive spanning set, i.e., every vector in $\mathbb{R}^n$ can be written as a non-negative linear combination of D’s columns. Additionally, D satisfies the restriction that its columns are the products of some non-singular generating matrix $G \in \mathbb{R}^{n \times n}$ and integer vectors $z_j \in \mathbb{Z}^n$ for $j = 1, \ldots, n_D$. Symbolically, this restriction means D takes the form $D = GZ$ with $Z = [z_1 \cdots z_{n_D}] \in \mathbb{Z}^{n \times n_D}$. At iteration k, we fix a finite subset of $M_k$, denoted by $S_k$, that will be searched. Thus, the number of points evaluated during the search step is given by $|S_k|$. We call a point $y$ an improved mesh point if $f(y) < f(x_k)$. If such a point is found during the search step, we end the iteration. Otherwise, we begin the poll step.
During the kth iteration, we select a subset of the columns of D, denoted by $D_k$, which is itself a positive spanning set. The poll set is formed using the current iterate and the elements of $D_k$:
$$P_k = \{x_k + \Delta_k d : d \in D_k\}.$$
If the poll step is invoked, we evaluate the objective function for each of the elements in $P_k$ to check for an improved mesh point. Thus, the number of points evaluated during the poll step is given by $|D_k|$. If no improved mesh points are found, we call the current iterate a local mesh optimizer.
Whenever a point y is identified as an improved mesh point by either the search or poll steps, we immediately end the iteration, set $x_{k+1} = y$, and either increase or maintain the mesh size parameter. Alternatively, if $x_k$ is determined to be a local mesh optimizer, then we set $x_{k+1} = x_k$ and shrink the mesh size parameter. The changes in the mesh size parameter are determined by the mesh adjustment parameter, a rational number $\tau > 1$, and the set of mesh adjustment exponents, a finite set of integers $\{w^-, \ldots, w^+\}$ where $w^- \leq -1$ and $w^+ \geq 0$. When an improved mesh point is found, the mesh adjustment exponent is set to some $w_k \in \{0, \ldots, w^+\}$ so that $\Delta_{k+1} = \tau^{w_k}\Delta_k \geq \Delta_k$. Conversely, when both the search and poll steps fail to produce an improved mesh point at iteration k, the mesh adjustment exponent is set to some $w_k \in \{w^-, \ldots, -1\}$ so that $\Delta_{k+1} = \tau^{w_k}\Delta_k < \Delta_k$. We now present Algorithm 2 ([], Section 2).
| Algorithm 2 Generalized Pattern Search |
|
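For readers who prefer code, one GPS-style loop with an empty search step can be sketched as follows. This is an illustrative toy, not the paper's Algorithm 2: the poll directions are the coordinate directions, and the mesh adjustment is a simple doubling/halving (i.e., $\tau = 2$ with exponents $\pm 1$).

```python
import numpy as np

def gps_iterate(f, x, delta, D, max_iters=100, tau=2.0):
    """Toy GPS loop with an empty search step: poll the columns of D,
    grow delta on success, shrink it at local mesh optimizers."""
    fx = f(x)
    for _ in range(max_iters):
        improved = False
        for d in D.T:                      # poll step over P = {x + delta*d}
            y = x + delta * d
            if f(y) < fx:                  # improved mesh point found
                x, fx, improved = y, f(y), True
                break
        if improved:
            delta *= tau                   # non-negative mesh adjustment exponent
        else:
            delta /= tau                   # local mesh optimizer: shrink the mesh
        if delta < 1e-6:
            break
    return x, fx

# D = [I, -I]: the 2n coordinate directions, a positive spanning set for R^n.
n = 2
D = np.hstack([np.eye(n), -np.eye(n)])
x_star, f_star = gps_iterate(lambda x: np.sum(x ** 2), np.array([3.0, -2.0]), 1.0, D)
print(x_star, f_star)   # iterates approach the minimizer at the origin
```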
We conclude this section with a critical but brief discussion of the convergence results for GPS originally presented in [], Section 3, and how they must be delicately handled in the presence of quantum search subroutines. The following theorem embodies the main results of [], Section 3.
Theorem 3.
Suppose $f$ has bounded sublevel sets. Then, the following hold for the sequence of iterates $\{x_k\}$ produced by GPS:
- 1.
- Ref. [] (Theorem 3.6). The iterate sequence contains a refining subsequence, i.e., a subsequence of local mesh optimizers $\{x_{k_i}\}$ such that $\lim_{i \to \infty} \Delta_{k_i} = 0$, which converges to some limit point $\hat{x}$.
- 2.
- Ref. [] (Theorem 3.7). Let $\{x_{k_i}\}$ be a convergent refining subsequence with limit point $\hat{x}$ and d be an element of a positive spanning set. If the objective function is evaluated at $x_{k_i} + \Delta_{k_i} d$ for infinitely many iterates in the subsequence, and f is Lipschitz in a neighborhood of $\hat{x}$, then the Clarke generalized directional derivative of f at $\hat{x}$ in the direction of d is non-negative: $f^{\circ}(\hat{x}; d) \geq 0$.
- 3.
- Ref. [] (Theorem 3.9). Let $\{x_{k_i}\}$ be a convergent refining subsequence with limit $\hat{x}$. If the objective function f is strictly differentiable at $\hat{x}$, then $\nabla f(\hat{x}) = 0$.
The proof of [], Theorem 3.6, pivotally relies upon the mesh size parameter shrinking only at the local mesh optimizers. GPS will frequently violate this assumption when equipped with quantum search methods, such as QSearch, using the techniques in [,].
3. Adapting QSearch for Generalized Pattern Search Algorithms
In this section, we present our main result, the Quantum Improved Point Search algorithm (QIPS). We begin in Section 3.1, where we discuss how to represent our optimization problem on a quantum computer. In Section 3.2, we discuss how we have modified QSearch to develop QIPS and prove that it retains the expected reduction in oracle calls provided by QSearch while satisfying the previously mentioned convergence criteria required by GPS.
3.1. The Quantum Representation of Our Optimization Problem
In this section, we discuss how the elements of our optimization problem will be represented on a quantum computer. Our representation of real numbers on a quantum computer will mimic that of a classical computer, using qubits in place of classical bits. For example, the number 0100110 would be stored as $|0100110\rangle$. Performing binary arithmetic on numbers stored in this manner is an active area of research, with a wide variety of approaches proposed. For simplicity, we have chosen to represent real numbers using a fixed-point representation. We represent a point $x \in \mathbb{R}^n$ as a computational basis state $|x\rangle$, where the number of qubits used corresponds to the fixed-point representation chosen. Furthermore, we assume access to a quantum oracle F that is able to evaluate the objective function. To ensure that F is a valid unitary operator, we store the state $|f(x)\rangle$ in a separate register from $|x\rangle$. Thus, we assume F satisfies the following:
$$F|x\rangle|0\rangle = |x\rangle|f(x)\rangle.$$
Given F as defined above, the extension of F to a linear unitary operator on the full state space is trivial. Since we are only concerned with F’s behavior on states of the form $|x\rangle|0\rangle$, we assume any valid linear extension has been chosen to define F.
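A fixed-point encoding of this kind can be sketched classically; the bit string produced below is the label of the computational basis state that would store the number. The widths (one sign bit, four integer bits, four fraction bits) and the two's-complement convention are hypothetical choices for illustration.

```python
def to_fixed_point(x, int_bits=4, frac_bits=4):
    """Two's-complement fixed-point encoding (hypothetical widths):
    1 sign bit, `int_bits` integer bits, `frac_bits` fraction bits."""
    total = 1 + int_bits + frac_bits
    q = round(x * (1 << frac_bits)) % (1 << total)   # wrap into two's complement
    return format(q, f"0{total}b")

print(to_fixed_point(2.375))    # '000100110': 2.375 = 10.0110 in binary
print(to_fixed_point(-1.25))    # '111101100': the sign bit is 1 for negatives
```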
The advantage of using the quantum oracle F over its classical counterpart stems from the ability to apply F to a set of points stored in superposition, enabling the objective function to be evaluated for each point in the set using a single call to F. Let $\{y_0, \ldots, y_{N-1}\}$ be a set of N points in $\mathbb{R}^n$. Using an algorithm such as those described in [], we can prepare the state $\frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|y_i\rangle|0\rangle$. Applying F to this state then gives
$$F\left(\frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|y_i\rangle|0\rangle\right) = \frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|y_i\rangle|f(y_i)\rangle.$$
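This single-call evaluation can be checked on a toy example. The sketch below is illustrative only: it represents the two-register state as a flat amplitude vector, uses the common XOR-into-the-target-register convention to keep F a permutation (and hence unitary), and the 2-bit objective values are made up. One application of F leaves every value f(p) entangled with its point p.

```python
import numpy as np

# Toy objective on 2-bit points, with 2-bit values (made up for illustration).
fvals = {0: 3, 1: 1, 2: 0, 3: 2}
P = V = 4                                      # sizes of the point and value registers

# Uniform superposition (1/sqrt(P)) * sum_p |p>|0>, flattened as index p*V + v.
state = np.zeros(P * V, dtype=complex)
for p in range(P):
    state[p * V] = 1 / np.sqrt(P)

# F as a permutation of basis states: |p>|v> -> |p>|v XOR f(p)> (reversible).
out = np.zeros_like(state)
for p in range(P):
    for v in range(V):
        out[p * V + (v ^ fvals[p])] += state[p * V + v]

# A single application of F evaluated f at every point in the superposition.
for p in range(P):
    print(p, fvals[p], abs(out[p * V + fvals[p]]))   # each amplitude is 1/2
```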
We are now prepared to begin our presentation of the QIPS algorithm.
3.2. Quantum Improved Point Search
In this section, we introduce the QIPS algorithm. Our algorithm aims to leverage the oracle query reductions provided by QSearch (Algorithm 1) while preserving the convergence properties of GPS presented in [], Section 3. Recall from the theorem governing the complexity of QSearch (Theorem 2) that QSearch fails to terminate if $t = 0$. This presents a significant problem for its use in GPS (Algorithm 2), since $t = 0$ during the poll step whenever the current iterate is a local mesh optimizer. For example, suppose we have the objective function $f(x) = x^2$, mesh size parameter $\Delta = 1$, positive spanning set $D = \{1, -1\}$, and current iterate $x = 0$. Then, we have the poll set $P = \{1, -1\}$, and $x = 0$ is a local mesh optimizer for P. It follows that QSearch would fail to terminate when applied in such a situation. The error bounds introduced in [] allow the possibility of terminating the algorithm when the probability that $t = 0$ is high. However, there remains a non-zero probability that $t > 0$. This can be remedied somewhat when prior knowledge of the state allows us to determine the number of states in the computational basis that have non-zero amplitude, as is the case with GPS algorithms. In terms of the search problem, this only requires that we know the number of items being searched. However, even when we have this prior knowledge, QSearch only guarantees $t = 0$ with certainty once all possible states have been returned during the execution of line 7 of QSearch (Algorithm 1), and this is not guaranteed to occur using a finite number of oracle calls.
This presents a significant problem when applying QSearch in a GPS method, since the convergence results shown in Theorem 3 require that the mesh size parameter shrinks only at local mesh optimizers, meaning a direct application of QSearch during the poll step eliminates the convergence guarantees it is designed to provide. To overcome this limitation, we introduce a classical search method we call the Local Mesh Filter (LMF). This method takes as arguments a classical oracle for our objective function f, the current iterate x, a list of points Y in the mesh, and an integer j, and checks j points from Y for improved mesh points. This process is run in parallel with QSearch to produce QIPS. Using this approach, we are able to ensure that for sets containing an improved mesh point, QIPS will find one using an expected number of oracle calls in $\mathcal{O}(\sqrt{N/t})$, and that it requires a number of oracle calls in $\mathcal{O}(N)$ to determine that one does not exist. Algorithm 3 is then as follows:
| Algorithm 3 Local Mesh Filter |
|
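A classical sketch of LMF's behavior might look as follows; the exact signature and the bookkeeping via a `checked` set are our illustrative choices, not the paper's pseudocode. The key property is that state persists across calls, so no point is ever queried twice.

```python
def local_mesh_filter(f, fx, Y, checked, j):
    """Sketch of LMF (hypothetical signature): evaluate up to j points of Y
    that have not been checked yet, returning an improved mesh point or None.
    `checked` persists across calls, so no point is queried twice."""
    for i, y in enumerate(Y):
        if j == 0:
            break
        if i in checked:
            continue
        checked.add(i)
        j -= 1
        if f(y) < fx:              # improved mesh point: f decreased
            return y
    return None

# Repeated calls resume where the previous call left off.
Y, checked = [2.0, 1.5, 0.5, 3.0], set()
print(local_mesh_filter(lambda v: v * v, 1.0, Y, checked, 2))  # None
print(local_mesh_filter(lambda v: v * v, 1.0, Y, checked, 2))  # 0.5
```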
Before we introduce QIPS, we must first discuss how to obtain the necessary inputs related to the QSearch portion of the algorithm. Each call to QIPS will require a unique choice of $|\psi\rangle$ corresponding to either the subset of the mesh $S_k$ or the poll set $P_k$, where k is an integer denoting the current iteration. Letting $Y = \{y_0, \ldots, y_{N-1}\}$ denote an arbitrary subset of the mesh or poll set, a natural approach would be to choose $|\psi\rangle$ such that
$$|\psi\rangle = \frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|y_i\rangle|f(y_i)\rangle.$$
However, this approach may be unnecessarily expensive when the cost of preparing the operator Q is considered. Recall from Theorem 1 that $Q = -\mathcal{A}S_0\mathcal{A}^{-1}S_{\chi}$, where $S_{\chi}$ negates the amplitude of each desired computational basis state and $S_0$ negates the amplitude of the state $|0\rangle$.
Using Equation (14) to identify improved mesh points requires that our set of desired states changes whenever an improved mesh point is found. This in turn requires that $S_{\chi}$ be recalculated each time an improved mesh point is found. This additional overhead may be avoided by choosing $|\psi\rangle$ in such a way that the set of desired states does not depend on the value of the current iterate. Suppose we use an additional register that contains the state $|f(x_k)\rangle$, where $f(x_k)$ is the value of our objective function at the current iterate $x_k$. Then, the set of desired states may be defined by the sign qubit of the difference $f(y_i) - f(x_k)$. Thus, we choose $|\psi\rangle$ such that
$$|\psi\rangle = \frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|y_i\rangle\,|f(y_i)\rangle\,|f(y_i) - f(x_k)\rangle.$$
Our set of desired states is then the set of computational basis states whose sign qubit is equal to 1. This set remains fixed for all iterations, and $S_{\chi}$ will only need to be computed a single time.
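The sign-qubit test can be sketched classically: in a two's-complement fixed-point encoding of $f(y) - f(x_k)$, the leading bit is 1 exactly when the difference is negative, i.e., when y improves on the current iterate. The register widths below are hypothetical.

```python
FRAC_BITS, TOTAL_BITS = 4, 9    # 1 sign + 4 integer + 4 fraction bits (hypothetical)

def diff_bits(fy, fx):
    """Two's-complement fixed-point bit string for f(y) - f(x)."""
    q = round((fy - fx) * (1 << FRAC_BITS)) % (1 << TOTAL_BITS)
    return format(q, f"0{TOTAL_BITS}b")

def is_desired(fy, fx):
    # Desired exactly when the sign qubit is 1, i.e., f(y) < f(x).
    return diff_bits(fy, fx)[0] == "1"

print(diff_bits(0.5, 1.0), is_desired(0.5, 1.0))   # sign bit 1: improvement
print(diff_bits(1.5, 1.0), is_desired(1.5, 1.0))   # sign bit 0: no improvement
```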
To implement , we will need three additional algorithms beyond F. Let B be a quantum algorithm such that for computational basis states and
and let denote a quantum algorithm such that
Finally, let be a quantum algorithm such that
Since we are only concerned with the effects of B, , and as defined above, we assume any valid linear extensions have been chosen. Interested readers are directed to [,] for examples of valid choices. Thus, we define as the quantum algorithm
With these inputs defined, we return to the introduction of the QIPS algorithm. As previously mentioned, QIPS can be viewed as running QSearch and LMF in parallel, terminating the algorithm if an improved mesh point is found or if LMF determines that the current iterate is a local mesh optimizer. We make the additional modifications of setting a fixed value for c at $6/5$ and selecting our exponent to be strictly less than $c^l$ during the lth iteration. These last two changes follow the approach shown in [], Theorem 3, and are made to simplify our results. Algorithm 4 is as follows:
| Algorithm 4 Quantum Improved Point Search |
|
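The control flow of QIPS can be sketched as a purely classical simulation (illustrative only: the quantum measurements are replaced by sampling with the corresponding amplitude-amplification success probabilities, and the count t of improved points is computed solely to drive the simulation; a real run cannot know it). The point of the sketch is that the LMF bookkeeping forces termination after O(N) oracle calls even when no improved mesh point exists.

```python
import math
import random

def qips_sim(f, fx, Y, rng, c=6 / 5):
    """Classical sketch of the QIPS control flow: alternate QSearch-style
    sampling with LMF-style checks that steadily exhaust Y."""
    n = len(Y)
    t = sum(1 for y in Y if f(y) < fx)          # simulation-only; unknown in practice
    theta = math.asin(math.sqrt(t / n)) if t else 0.0
    checked, l, calls = set(), 0, 0
    calls += 1                                  # one plain sample before the loop
    if t and rng.random() < math.sin(theta) ** 2:
        return "improved", calls
    while len(checked) < n:                     # loop ends once LMF has seen all of Y
        l += 1
        j = rng.randrange(max(1, int(c ** l)))  # exponent strictly less than c^l
        calls += j + 1                          # simulated Q^j application + measure
        if t and rng.random() < math.sin((2 * j + 1) * theta) ** 2:
            return "improved", calls
        budget = j + 1                          # LMF checks one point per F call
        for i, y in enumerate(Y):
            if budget == 0:
                break
            if i in checked:
                continue
            checked.add(i)
            budget -= 1
            calls += 1
            if f(y) < fx:
                return "improved", calls
    return "local mesh optimizer", calls        # certainty with O(n) total calls

rng = random.Random(3)
Y = [1.0 + k for k in range(16)]                # no point improves on fx = 0.5
print(qips_sim(lambda v: v, 0.5, Y, rng))       # terminates: local mesh optimizer
```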
Using QIPS during the search and poll steps of GPS, we are able to reduce the expected number of oracle calls from $\mathcal{O}(N)$ to $\mathcal{O}(\sqrt{N/t})$ when an improved mesh point exists, while retaining the $\mathcal{O}(N)$ complexity when one does not. Furthermore, we are guaranteed that QIPS will correctly identify local mesh optimizers, preserving the convergence results of GPS presented in [], Section 3. We summarize this in the following theorem.
Theorem 4
(Quantum Improved Point Search: Complexity and Correctness). Given , , , as in (22), as in (18), Q as in (5), and
The following hold:
- 1.
- (Correctness) QIPS correctly determines if Y contains an improved mesh point. More formally, QIPS returns the current iterate x if and only if $f(y) \geq f(x)$ for all $y \in Y$.
- 2.
- (Complexity: Improved Mesh Point) Suppose Y contains an improved mesh point; more formally, suppose there exists $y \in Y$ such that $f(y) < f(x)$. Then, QIPS will return an improved mesh point using an expected number of calls to the oracles F and f in $\mathcal{O}(\sqrt{N/t})$, where t is the number of improved mesh points contained in Y.
- 3.
- (Complexity: Local Mesh Optimizer) Suppose Y does not contain an improved mesh point; more formally, suppose $f(y) \geq f(x)$ for all $y \in Y$. Then, QIPS will return x using $\mathcal{O}(N)$ calls to the oracles F and f.
- Proof.
- 1.
- From lines (7) and (15) of QIPS, we have that x is returned if and only if the while loop started on line (4) terminates because Y has been exhausted. The result follows, as line (7) returns x if and only if each element originally in Y was evaluated by the classical oracle f and failed to be an improved mesh point.
- 2.
- Suppose Y contains $t \geq 1$ improved mesh points. For any iteration that does not return an improved mesh point, the number of calls to f is identical to the number of calls to F. Furthermore, if an improved mesh point is found on lines 3, 9, or 11, then no further calls to f are made. Thus, we proceed by bounding the expected number of calls to F. We obtain an upper bound on the expected number of times F is called following a similar approach to that used in [] (Lemma 2 and Theorem 3) and [] (Theorem 3). The oracle F is called only when the operator Q is called. This occurs a single time on line 3, and then again each time lines 9 and 11 are executed. Lines 3 and 11 return an improved mesh point with probability $\sin^2(\theta_a)$. On the kth iteration of the while loop begun on line 4, we have that line 9 returns an improved mesh point with probability $\sin^2((2j+1)\theta_a)$ (recall that $\sin^2(\theta_a) = t/N$). Let $p_k$ denote the probability of obtaining an improved mesh point on the kth iteration of the while loop. Since j is chosen as an integer in $[0, \lceil c^k \rceil)$ uniformly at random, it follows that $p_k$ is given by
$$p_k = \frac{1}{\lceil c^k \rceil}\sum_{j=0}^{\lceil c^k \rceil - 1}\sin^2((2j+1)\theta_a).$$
For all positive integers k, we can establish a lower bound for $p_k$ as follows. By [], Lemma 1, it follows that
$$p_k = \frac{1}{2} - \frac{\sin(4\lceil c^k \rceil\theta_a)}{4\lceil c^k \rceil\sin(2\theta_a)} \geq \frac{1}{2} - \frac{1}{4\lceil c^k \rceil\sin(2\theta_a)}.$$
In particular, this implies $p_k \geq \frac{1}{4}$ when $\lceil c^k \rceil \geq \frac{1}{\sin(2\theta_a)}$. If $t \geq N/4$, then the expected number of times line 11 will be executed is bounded above by 4 and the result follows. We now assume $t < N/4$. In this case, we have that
$$\frac{1}{\sin(2\theta_a)} = \frac{1}{2\sin(\theta_a)\cos(\theta_a)} \leq \sqrt{\frac{N}{t}}.$$
Let $k_0 = \lceil \log_c \sqrt{N/t}\, \rceil$. During the kth iteration of the while loop, the total number of calls to F is bounded above by $\lceil c^k \rceil + 1$. It follows that the total number of calls to F while $k \leq k_0$ is then bounded above by
$$\sum_{k=1}^{k_0}\left(\lceil c^k \rceil + 1\right) \leq 2k_0 + \frac{c^{k_0+1}}{c-1} \in \mathcal{O}\left(\sqrt{N/t}\right).$$
Hence, if an improved mesh point is returned during an iteration $k \leq k_0$, we have that it does so using $\mathcal{O}(\sqrt{N/t})$ calls to F. Now suppose an improved mesh point is not found during the first $k_0$ iterations of the while loop. Since the probability of obtaining an improved mesh point on each iteration after iteration $k_0$ is bounded below by $\frac{1}{4}$, it follows that the expected number of calls to F needed to obtain an improved mesh point is bounded above by
$$\sum_{k=k_0+1}^{\infty}\left(\frac{3}{4}\right)^{k-k_0-1}\left(\lceil c^k \rceil + 1\right) \in \mathcal{O}\left(\sqrt{N/t}\right).$$
Thus, the expected number of calls to the oracle F is in $\mathcal{O}(\sqrt{N/t})$.
- 3.
- Suppose Y contains no improved mesh points. Then, an undesired state is returned during each measurement on lines (3), (11), and (9). The while loop terminates once Y has been exhausted. This occurs only once the LMF algorithm has evaluated each point originally in Y on line (6) and determined that Y has no improved mesh points. Line (9) of LMF guarantees that no point in Y is evaluated by the oracle f twice, ensuring f is called at most N times. Since the number of times the oracle F is called on lines (9) and (11) of QIPS is bounded above by the number of times f is called, it follows that the number of times F is called is also bounded above by N.
□
4. Conclusions
Quantum algorithms are theorized to hold numerous potential advantages over classical methods. However, the probabilistic nature of these algorithms may pose significant problems when they are used as subroutines within a larger algorithm. In this work, we considered the impact of using quantum search methods to perform the search and poll steps of GPS. These methods provide a significant reduction in the expected number of oracle calls needed when a solution exists. Unfortunately, they are not guaranteed to accurately determine that no solution exists using a finite number of oracle calls. Here, we have solved this problem by introducing QIPS, a quantum–classical hybrid variant of the quantum search algorithm QSearch. QIPS allows us to preserve the reduction in oracle calls provided by QSearch when an improved mesh point exists, while accurately determining that no improved mesh point exists using $\mathcal{O}(N)$ oracle calls. We expect to build upon this work in two ways. First, we will analyze the behavior of QIPS when used in similar DFO algorithms such as Mesh Adaptive Direct Search []. Second, we plan to explore alternative DFO algorithms that utilize quantum search methods but do not share the convergence criteria of GPS.
Author Contributions
Conceptualization, C.M., D.H.G. and V.E.H.; Methodology, C.M., D.H.G. and V.E.H.; Formal analysis, C.M., D.H.G. and V.E.H.; Writing—original draft, C.M., D.H.G. and V.E.H.; Writing—review and editing, C.M., D.H.G. and V.E.H.; Supervision, D.H.G. and V.E.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
No new data were created or analyzed in this study.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| GPS | Generalized Pattern Search |
| QIPS | Quantum Improved Point Search |
| DFO | Derivative-Free Optimization |
References
- Audet, C.; Dennis, J.E. Analysis of Generalized Pattern Searches. SIAM J. Optim. 2002, 13, 889–903. [Google Scholar] [CrossRef]
- Grover, L.K. Quantum Mechanics Helps in Searching for a Needle in a Haystack. Phys. Rev. Lett. 1997, 79, 325–328. [Google Scholar] [CrossRef]
- Brassard, G.; Høyer, P.; Mosca, M.; Tapp, A. Quantum Amplitude Amplification and Estimation. Quantum Comput. Inf. 2002, 305, 53–74. [Google Scholar] [CrossRef]
- Gilyén, A.; Arunachalam, S.; Wiebe, N. Optimizing Quantum Optimization Algorithms via Faster Quantum Gradient Computation. arXiv 2017, arXiv:1711.00465. [Google Scholar]
- Jordan, S.P. Fast Quantum Algorithm for Numerical Gradient Estimation. Phys. Rev. Lett. 2005, 95, 050501. [Google Scholar] [CrossRef] [PubMed]
- Bernstein, E.; Vazirani, U. Quantum Complexity Theory. SIAM J. Comput. 1997, 26, 1411–1473. [Google Scholar] [CrossRef]
- Durr, C.; Hoyer, P. A Quantum Algorithm for Finding the Minimum. arXiv 1996, arXiv:quant-ph/9607014. [Google Scholar] [CrossRef]
- Baritompa, W.; Bulger, D.; Wood, G. Grover’s Quantum Algorithm Applied to Global Optimization. SIAM J. Optim. 2005, 15, 1170–1184. [Google Scholar] [CrossRef]
- Boyer, M.; Brassard, G.; Høyer, P.; Tapp, A. Tight bounds on quantum searching. Fortschr. Phys. 1998, 46, 493–505. [Google Scholar]
- Arunachalam, S. Quantum Speed-ups for Boolean Satisfiability and Derivative-Free Optimization. Master’s Thesis, University of Waterloo, Waterloo, ON, Canada, 2014. [Google Scholar]
- Torczon, V. On the Convergence of Pattern Search Algorithms. SIAM J. Optim. 1997, 7, 1–25. [Google Scholar] [CrossRef]
- Cortese, J.A.; Braje, T.M. Loading Classical Data into a Quantum Computer. arXiv 2018, arXiv:1803.01958. [Google Scholar]
- Draper, T.G. Addition on a Quantum Computer. arXiv 2000, arXiv:quant-ph/0008033. [Google Scholar]
- Audet, C.; Dennis, J. Mesh Adaptive Direct Search Algorithms for Constrained Optimization. SIAM J. Optim. 2006, 17, 188–217. [Google Scholar] [CrossRef]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).