1. Introduction
During a lecture at the Massachusetts Institute of Technology in the early 1980s on the possibilities and limits of numerical simulation [1], Richard Feynman posed a now-famous question: “Can quantum systems be simulated probabilistically by a classical computer with local connections?”. He argued that classical simulations do not scale, as they face fundamental complexity barriers, most notably exponential growth in time and memory requirements. From this observation, Feynman concluded that classical computers cannot faithfully reproduce quantum mechanics, a realization that laid the conceptual foundations of quantum computing. Three years later, David Deutsch introduced the notion of a universal quantum Turing machine [2], providing the first formal model of quantum computation. While early works remained largely theoretical, concrete applications began to emerge approximately a decade later. In particular, Peter Shor’s factoring algorithm demonstrated an exponential speedup for integer factorization, threatening widely used cryptographic schemes and triggering intense global interest in universal quantum computing. Shor first presented this work at an IEEE conference in 1994. He described Las Vegas algorithms (probabilistic algorithms, a variant of Monte Carlo methods, that always return correct results) for finding discrete logarithms and factoring integers on a quantum computer. The number of steps required by these quantum algorithms is polynomial in the number of digits of the integer to be factored, and he also gave examples of quantum cryptanalysis [3]. A more complete paper on factoring integers and finding discrete logarithms was published in the SIAM Review five years later [4]. Beyond factoring, Shor also made a crucial, though less widely known, contribution by developing quantum error-correcting codes, which address the intrinsic fragility and decoherence of quantum systems [5]. At approximately the same time, advances were achieved by Lov Grover and Seth Lloyd in different areas. Lloyd showed how quantum systems could be simulated using Hamiltonians (operators that describe the total energy of a system), opening the pathway to the simulation of quantum systems on quantum computers [6]. This theoretical framework significantly advanced our ability to model and understand quantum dynamics.
Grover, for his part, offered a glimpse into the potential of quantum systems to process information in fundamentally different ways compared to classical computers. The Grover algorithm, in particular, is known for its ability to search unsorted databases quadratically faster than its classical counterparts [7,8], representing a significant leap in computing efficiency. Quantum computing exploits the unique properties of quantum mechanics, such as superposition and entanglement, to perform computations, as shown in [9]. Quantum mechanics can speed up a range of search applications over unsorted data. For example, imagine a phone directory containing N names arranged in completely random order. To find someone’s phone number with reasonable probability, any classical algorithm (whether deterministic or probabilistic) will need on the order of N accesses to the database. A quantum mechanical system can be in a superposition of states and, in effect, examine multiple names simultaneously. By properly adjusting the phases of the various operations, successful computations reinforce each other while the others interfere randomly. As a result, the desired phone number can be obtained after only on the order of √N accesses to the database. This paradigm shift from classical computing has opened new frontiers in various fields.
While the implications for cryptography are considerable, optimization and operations research in general are not far behind. In this encyclopedic article, we have chosen to present the Grover algorithm since it is useful in many fields. We will first present the approach behind this algorithm, then show the potential applications where it is useful, and next explain the current limitations encountered. Lastly, we will discuss reproducibility issues with different simulators and machines.
2. Presentation of the Grover Quantum Algorithm
The Grover algorithm is a foundational quantum algorithm that provides a quadratic speedup for unstructured search problems. By iteratively applying an oracle that marks solution states and a diffusion operator that performs inversion about the mean, the algorithm amplifies the probability amplitude of the desired state within a uniform superposition. After approximately O(√N) iterations, where N denotes the size of the search space, a measurement yields the marked element with high probability. This improvement over the classical O(N) query complexity is one of the most widely cited demonstrations of quantum computational advantage. The BBBV theorem [10] (named after Charles Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani) establishes a fundamental lower bound of Ω(√N) queries for unstructured search; thus, Grover’s algorithm is asymptotically optimal.
Imagine we have an unsorted list of N items and want to find one specific item. A classical computer must check the items one by one: in the worst case, N checks are required, and on average, N/2. No classical algorithm can perform better if the list is unstructured. Grover’s algorithm shows that a quantum computer can find the item in approximately O(√N) steps, a quadratic speedup. To achieve this, quantum computers exploit two principles of quantum mechanics: superposition and interference. With superposition, a quantum bit (or qubit) can be in a superposition of states as follows: α∣0⟩ + β∣1⟩ (with α and β being complex numbers such that |α|² + |β|² = 1). At measurement, each qubit collapses to a single value, 0 or 1. Due to this property, with n qubits, we can represent all 2^n basis states at once. This does not mean that quantum computers simply try all solutions simultaneously, since measurement collapses the superposition; the power lies in manipulating amplitudes rather than in parallel evaluation. The number n of qubits is related to the number N of items searched by N = 2^n. With interference, quantum amplitudes can reinforce each other (constructive interference) or cancel each other out (destructive interference). The key idea of Grover’s algorithm is to carefully engineer interference so that the correct answer becomes more likely when we measure.
To prepare the initial state of the algorithm, we begin with n qubits initialized to ∣0⟩^⊗n. Then, we apply a Hadamard quantum gate to each qubit, placing them in a superposition over all candidate answers at once, each with equal probability (Equation (1)), as follows:

∣ψ⟩ = H^⊗n ∣0⟩^⊗n = (1/√N) Σ_{x=0}^{N−1} ∣x⟩    (1)
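This state preparation can be checked with a small classical statevector simulation; the following NumPy sketch uses an illustrative three-qubit register:

```python
import numpy as np

n = 3                      # number of qubits (illustrative choice)
N = 2 ** n                 # size of the search space
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

# Build the n-qubit operator H tensored with itself n times.
H_n = np.array([[1.0]])
for _ in range(n):
    H_n = np.kron(H_n, H)

zero = np.zeros(N)
zero[0] = 1.0              # the state |00...0>
psi = H_n @ zero           # uniform superposition: every amplitude is 1/sqrt(N)
```

After this step, each of the N = 8 amplitudes equals 1/√8, so every basis state would be measured with probability 1/N.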
After initialization, we apply an oracle. This is a quantum operation that recognizes the correct answer. The oracle does not tell us the answer directly; it only marks it by a phase change, and since phase alone is invisible to measurement, another step is needed. The oracle acts as shown in Equation (2):

O_f ∣x⟩ = (−1)^{f(x)} ∣x⟩, where f(x) = 1 if x is the solution and f(x) = 0 otherwise    (2)

If x is not the solution, then nothing happens, but if it is, its phase is flipped.
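The phase-marking behavior can be illustrated with a tiny statevector simulation. The sizes are illustrative, and the oracle is built as an explicit matrix, which is only possible here because this is a classical toy (a real oracle is a circuit, not a lookup):

```python
import numpy as np

N, marked = 8, 5                     # hypothetical search space and solution index
psi = np.full(N, 1 / np.sqrt(N))     # uniform superposition from the previous step

# Oracle: flip the sign (phase) of the amplitude of the marked state only.
oracle = np.eye(N)
oracle[marked, marked] = -1
psi_after = oracle @ psi
```

Note that `psi_after ** 2` equals `psi ** 2` entry by entry: the phase flip changes no measurement probability on its own, which is why the diffusion step is needed.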
The next step is known as the diffusion operator. It amplifies the marked state using the average amplitude (the so-called inversion about the mean): first, we compute the average amplitude, then we reflect all amplitudes about that average. After the oracle’s phase flip, the marked amplitude lies below the average, so the reflection makes it larger, as shown in Equation (3):

D = 2∣ψ⟩⟨ψ∣ − I    (3)

With this step, the probability of the correct state increases and the probabilities of the incorrect states decrease.
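On a statevector, the inversion about the mean is a one-line update. The sketch below starts from the state left by the oracle in an illustrative eight-item search:

```python
import numpy as np

N, marked = 8, 5
psi = np.full(N, 1 / np.sqrt(N))
psi[marked] *= -1                  # state right after the oracle's phase flip

mean = psi.mean()                  # average amplitude
psi = 2 * mean - psi               # reflect every amplitude about the mean

prob_marked = psi[marked] ** 2     # now well above the initial 1/N = 0.125
```

For N = 8, a single oracle-plus-diffusion step already raises the marked state’s probability from 0.125 to 0.78125.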
We need to iterate both previous steps. A Grover iteration consists of (1) applying the oracle and (2) applying the diffusion operator. If we look for a single solution, the number of iterations is k ≈ (π/4)√N.
Table 1 below shows the rounded values for some search space sizes. The general number of iterations, if S states are marked, is provided below (Equation (4)):

k ≈ (π/4)√(N/S)    (4)

With N = 4, we have a special case where a single Grover iteration provides the correct answer exactly, in one step.
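The whole procedure, and the N = 4 special case, can be checked with a classical statevector simulation in a few lines of NumPy; this is an illustrative toy, not a quantum implementation:

```python
import numpy as np

def grover(n, marked=0):
    """Simulate Grover search over N = 2**n items with one marked state."""
    N = 2 ** n
    k = int(np.floor(np.pi / 4 * np.sqrt(N)))    # iteration count for one solution
    psi = np.full(N, 1 / np.sqrt(N))
    for _ in range(k):
        psi[marked] *= -1                        # oracle: phase flip
        psi = 2 * psi.mean() - psi               # diffusion: inversion about the mean
    return k, psi[marked] ** 2                   # (iterations, success probability)

for n in (2, 6, 10):
    k, p = grover(n)
    print(f"N = 2^{n}: {k} iterations, success probability {p:.4f}")
```

For n = 2 (N = 4), a single iteration yields the marked state with probability exactly 1, matching the special case above; for larger N, the success probability after ⌊(π/4)√N⌋ iterations stays close to 1.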
If we want a geometric interpretation of the Grover algorithm, we can say that it operates in a 2D space. Each Grover iteration performs a rotation toward the solution state: we start pointing mostly at the “wrong answers”, and each iteration rotates the state vector closer to the “correct answer”. A selective rotation of the phase of the amplitude is applied only when the searched state is found (the phase flip mentioned above). As mentioned in Grover’s original paper, the transformation describing this selective rotation for a two-state system is given by the following matrix, where i = √−1 and φ1, φ2 are real numbers [8] (p. 2):

⎡ e^{iφ1}      0    ⎤
⎣    0      e^{iφ2} ⎦
After applying the correct number of Grover iterations, it is important to measure the qubits at the right time: the correct state then appears with the highest probability, and measuring too early or too late reduces the success probability. For a generic unstructured search, the physical implementation of the oracle will require at least O(n) Toffoli gates (for n qubits). For other applications involving hashes, constraints, or satisfiability clauses, the cost is often O(poly(n)) gates.
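The importance of timing can be seen by sweeping the iteration count in a statevector simulation (illustrative sizes): the success probability rises, peaks near ⌊(π/4)√N⌋, and then falls again as the state rotates past the solution.

```python
import numpy as np

N, marked = 64, 7
psi = np.full(N, 1 / np.sqrt(N))
probs = []
for k in range(14):
    probs.append(psi[marked] ** 2)   # success probability if measured after k iterations
    psi[marked] *= -1                # oracle
    psi = 2 * psi.mean() - psi       # diffusion

best_k = int(np.argmax(probs))       # peak is near floor(pi/4 * sqrt(64)) = 6
```

Continuing past the optimum is actively harmful: by k = 12 the success probability has collapsed to nearly zero, not merely plateaued.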
Further reading to understand the mathematics behind the Grover algorithm can be found in [11]. This paper analyzes the mathematical structure of the classical Grover algorithm using linear algebra over complex numbers.
3. Potential Applications and Dequantization
First of all, it is important to realize that applications in quantum machine learning and quantum database search depend heavily on the availability of quantum random access memory (QRAM), and it is precisely this lack of scalable QRAM that we will discuss in the next section. Despite these technical limits, however, potential applications of the Grover algorithm can be found in various computational domains. Grover’s algorithm is not just one algorithm but, rather, a framework, often called “amplitude amplification”. This technique can boost the probability of finding a correct solution in other quantum algorithms, making them more efficient [12]. In [13], the authors explain the operation of Grover’s quantum search algorithm (QSA) and discuss its applications as a “subroutine” in other quantum algorithms for wireless communication.
In cryptography, Grover’s algorithm reduces the effective security of symmetric-key algorithms and hash functions by enabling quadratic improvements in brute-force search and preimage attacks. For instance, breaking symmetric cryptography by brute force for a key of length k takes O(2^k) operations classically. With Grover’s algorithm, this complexity is reduced to O(2^(k/2)), which effectively halves the key length from a security standpoint (e.g., AES-256 would only provide ~128-bit security against quantum attackers). For hash function inversion, given a hash value, finding a preimage can also be sped up from O(2^n) to O(2^(n/2)). Message authentication codes and pseudorandom functions, when attacked generically, likewise lose half of their effective security.
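A preimage search can be mimicked at toy scale with a classical statevector simulation. Here `toy_hash` and its parameters are invented for the example and stand in for a real hash function; note also that the simulator enumerates the solutions to build the oracle, whereas a real quantum oracle would evaluate the hash in superposition without knowing the answer:

```python
import numpy as np

def toy_hash(x):                      # hypothetical stand-in for a real hash, 4-bit output
    return (x * 7 + 3) % 16

target = 9                            # hash value whose preimage we want
N = 16                                # 4-qubit search space
marked = [x for x in range(N) if toy_hash(x) == target]

psi = np.full(N, 1 / np.sqrt(N))
k = int(round(np.pi / 4 * np.sqrt(N / len(marked))))
for _ in range(k):
    for m in marked:
        psi[m] *= -1                  # oracle
    psi = 2 * psi.mean() - psi        # diffusion

preimage = int(np.argmax(psi ** 2))   # most likely measurement outcome
```

The simulation finds the unique preimage with probability above 0.96 after only k = 3 oracle calls, versus up to 16 classical evaluations; this ratio is exactly what scales up to the 2^(k/2) attack figures above.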
In optimization and constraint satisfaction, the algorithm accelerates the search for assignments that satisfy complex conditions. Many optimization problems can be cast as searching for a solution that minimizes an objective or satisfies certain conditions. Grover’s algorithm can be used as a subroutine to speed up constraint satisfaction problems such as Boolean satisfiability, known as SAT (or B-SAT); a SAT problem asks whether there exists an interpretation satisfying a given Boolean formula. With QUBO (quadratic unconstrained binary optimization), we attempt to solve quadratic combinatorial optimization problems with binary variables; this is of interest in finance, where quantum-inspired algorithms are used for collateral optimization. Grover’s algorithm is also useful in general optimization, scheduling, logistics, and resource allocation.
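A minimal sketch of how a SAT instance maps onto Grover’s search, using a classical statevector simulation of an invented 3-variable formula (the oracle marks exactly the satisfying assignments, and Equation (4) with S solutions fixes the iteration count):

```python
import numpy as np
from itertools import product

# Invented 3-SAT instance: (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
def satisfies(bits):
    x1, x2, x3 = bits
    return (x1 or x2) and ((not x1) or x3) and ((not x2) or (not x3))

N = 8
marked = [i for i, bits in enumerate(product([0, 1], repeat=3)) if satisfies(bits)]

psi = np.full(N, 1 / np.sqrt(N))
k = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))   # Equation (4)
for _ in range(k):
    for m in marked:
        psi[m] *= -1                 # oracle: flip satisfying assignments
    psi = 2 * psi.mean() - psi       # diffusion
probs = psi ** 2
```

This instance has S = 2 solutions out of N = 8, so S/N = 1/4 and a single iteration concentrates all the probability on the two satisfying assignments (0.5 each).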
While practical large-scale quantum databases do not currently exist, the Grover algorithm is a theoretical tool for searching an unstructured database. In the context of unstructured search problems, if you have a list of N items and want to find the one that satisfies some property, a classical algorithm requires O(N) checks, while Grover’s algorithm reduces this to O(√N), which is significant for very large datasets. It can also be useful in Artificial Intelligence (AI) and machine learning (ML), where finding matching data points or nearest neighbors can be formulated as search. Detecting particular entries in large data lakes is promising, although quantum search engines currently remain theoretical. This algorithm does not magically index classical databases; the speedup only applies when the database is represented in a quantum-accessible form.
When dealing with graph problems, Grover’s algorithm can help identify a specific node or path in a graph that satisfies certain properties, which can be applied to the detection of anomalies or marked elements in networks. Saldi recently applied Grover’s search algorithm to the identification and mitigation of cyber threats by monitoring real-time network traffic for suspicious activities [14]. The paper presents anomaly detection in networks and evaluates the effectiveness of simulating Grover’s search algorithm in detecting anomalous network activity within an intrusion detection system (IDS); it focuses on detecting malicious network traffic based on predefined conditions such as source port, destination port, and packet size. Bhattacharya and Verma [15] applied quantum machine learning (QML) algorithms, as part of quantum-assisted graph networks (QGNs), to anomaly detection, specifically enhancing the ability to identify irregularities and outliers in large-scale social community networks. QGNs were introduced as a means to redefine the optimization and algorithmic paradigms for large-scale graph networks by leveraging quantum algorithms such as quantum annealing, quantum walks, and quantum machine learning. Speedups are also expected in the search for collisions or matches across two sets and in graph property testing. Although not as dramatic as quantum walk algorithms, Grover’s algorithm still provides generic enhancements. For general pattern matching, Grover’s algorithm can speed up problems in text and image pattern recognition by framing them as search problems.
In machine learning (ML) and Artificial Intelligence (AI), quantum machine-learning primitives, quantum kernels, the Harrow–Hassidim–Lloyd algorithm [16], and the Grover algorithm can help with quantum-enhanced searches in ML pipelines (searching over model architectures, hyperparameters, or examples satisfying a condition, as well as feature selection). A generalization of the Grover technique, amplitude amplification, also helps to boost the success probability of quantum subroutines such as quantum recommendation systems, quantum sampling, and quantum boosting algorithms.
More broadly, Grover’s algorithm and its generalization through amplitude amplification can also serve as a powerful primitive in scientific computing. Monte Carlo simulations (where quantum subroutines benefit from boosted success probabilities), quantum chemistry (speeding up sampling procedures), the solution of differential equations (via subroutine boosting), and the search of state spaces in physics simulations can all benefit from amplitude amplification.
The Grover algorithm is not a universal solution but, rather, a general quadratic speedup for search-like problems.
Table 2 shows a broad summary of the application coverage; however, the algorithm’s biggest practical impact is expected in cryptanalysis and optimization tasks. For instance, Deng and his colleagues proposed a security anomaly detection method for big data platforms that used quantum optimization clustering, including a quantum ant colony that optimized an affinity propagation clustering algorithm [17].
For other quantum algorithms claiming an exponential speedup, a contribution of Ewin Tang is considered a breakthrough. In a talk at Microsoft Research in November 2018, she showed that a quantum algorithm believed to hold an advantage can actually be efficiently simulated by a classical algorithm, under certain assumptions. Tang proposed a “dequantized” version of a quantum algorithm for recommendation systems [18]; the initial quantum version was a well-cited work originally proposed by Kerenidis and Prakash [19], who claimed an exponential speedup over classical computers, but Tang designed a classical algorithm with similar performance. Before this achievement, many believed this was a strong example of an exponential quantum advantage. Her work showed that the speedup came largely from data access assumptions and not from quantum effects. She introduced a classical framework based on efficient ℓ2-norm sampling and data structures enabling fast randomized access. This breakthrough triggered a wave of dequantization results, especially for quantum machine learning algorithms [20].
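The ℓ2-norm sampling primitive at the heart of these dequantized algorithms is simple to state: sample index i with probability proportional to v_i². A minimal NumPy sketch, with an invented data vector, is shown below:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([3.0, -1.0, 2.0, 0.0])        # hypothetical data vector
p = v ** 2 / np.dot(v, v)                  # l2-norm sampling distribution
samples = rng.choice(len(v), size=10_000, p=p)
```

The classical catch, and the point of Tang’s argument, is that drawing these samples quickly requires a preprocessed data structure over v, an access assumption comparable to the QRAM assumed by the quantum algorithm.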
4. Limits in the NISQ Era
Despite its theoretical significance, the practical implementation of the Grover algorithm on current hardware remains highly limited. Realistic problem sizes require deep quantum circuits, but noisy intermediate-scale quantum (NISQ) hardware devices suffer from short coherence times, limited qubit connectivity, and significant gate noise. Although Grover is theoretically powerful, today’s hardware cannot yet execute it at a meaningful scale. Hereafter, we provide the main current limitations.
An implementation will need a circuit depth that far exceeds NISQ capabilities. The Grover algorithm requires multiple iterations of the oracle (the most complex part) and the diffusion operator, which involve many multi-qubit gates.
We require k ≈ (π/4)√N iterations. Even for modest problem sizes, for instance, searching among N = 2^30 items, we require more than 25,000 iterations. The relevant constraint is not just the number of iterations but, rather, the depth per iteration: the combined circuit depth is far larger than the iteration count alone suggests, which emphasizes the NISQ limits. Most NISQ machines can handle tens to a few hundred layers of logic before decoherence destroys the computation, whereas Grover’s algorithm requires thousands to millions of coherent gates for any non-toy problem; this is an important limit for current machines. If there are multiple marked items, the optimal number of iterations changes, and NISQ noise makes it impossible to fine-tune the number of Grover iterations accurately in this case.
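The arithmetic behind this estimate is easy to check; the per-iteration depth below is a hypothetical placeholder, since real depths depend on the oracle and the hardware:

```python
import math

N = 2 ** 30
iters = math.floor(math.pi / 4 * math.sqrt(N))   # about 25,700 iterations

# Hypothetical per-iteration depth once the oracle and diffusion operator are
# decomposed into elementary gates (illustrative number only).
depth_per_iteration = 500
total_depth = iters * depth_per_iteration        # millions of sequential gate layers
```

Even with this conservative placeholder, the total depth is in the tens of millions of layers, several orders of magnitude beyond the few hundred layers NISQ devices can sustain.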
In addition, the speedup of the Grover algorithm depends critically on having an efficient oracle that marks the solution. In theory, the oracle is a “black box”, but on real hardware, oracles must be explicitly decomposed into elementary gates; complex conditions require deep circuits, and the oracle implementation ends up dominating the total cost of the circuit. In practice, on NISQ machines, the time to build and run the oracle outweighs the quadratic speedup; thus, even if Grover’s iteration were perfect, the oracle cost would cancel the benefit.
Grover’s speedup can only be guaranteed in a fault-tolerant regime where circuits run with negligible noise. On NISQ hardware, we have noisy qubits, uncorrected errors, crosstalk, and short coherence times, all of which prevent Grover’s algorithm from reaching its theoretical efficiency. Grover’s algorithm relies on very precise amplitude rotations, and NISQ architectures remain noisy. Noise accumulation kills the amplitude amplification, since it leads to amplitude distortion and phase misalignment and thus reduces the probability of correctly measuring the marked state. Because noise compounds across iterations, the probability of success quickly collapses as more steps are added. On NISQ devices, only one to three Grover iterations are realistically feasible, which corresponds to a maximum search space size N of approximately 16. This was shown in a recent study dealing with the complexity of NISQ architectures [21]. Currently, NISQ devices cannot outperform classical computers for Grover-like tasks.
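A crude way to see why noise compounds across iterations is a geometric decay model: treat each gate as succeeding independently with a fixed fidelity, so the whole-circuit survival probability decays exponentially with gate count. The fidelity and gate-count numbers below are purely illustrative assumptions, not measured hardware data:

```python
fidelity = 0.999           # hypothetical per-gate fidelity
gates_per_iteration = 200  # hypothetical oracle + diffusion gate count

def survival(iterations):
    """Probability that no gate error occurs over the whole circuit."""
    return fidelity ** (iterations * gates_per_iteration)

for k in (1, 3, 10, 30):
    print(k, round(survival(k), 4))
```

Under these assumptions, one iteration retains roughly 80% of its fidelity, three iterations about 55%, and thirty iterations essentially nothing, which is consistent with the observation that only one to three iterations are realistically feasible today.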
The last limit we will mention is the lack of scalable QRAM (quantum random access memory), as pointed out at the beginning of Section 3. For Grover’s algorithm to search a database, we must be able to store the database in a quantum-accessible memory and access it in superposition. We do not have QRAM at a meaningful scale, and current theoretical designs require extremely deep, error-sensitive circuits. Physical implementations are far beyond current hardware; without QRAM, Grover’s algorithm cannot effectively search a real dataset and can only work on tiny synthetic examples.
5. Reproducibility Issues
Although fully universal quantum computers are not yet available, existing quantum hardware already supports nontrivial quantum circuits, enabling the implementation of algorithms through different paradigms depending on the hardware architecture and vendor-specific constraints. In essence, quantum computers are stochastic: the results obtained are probabilistic, and we do not obtain the bitwise identical results we expect from classical deterministic computers (where perfect repeatability is required for debugging). For any serious scientific study with a quantum computer, however, we require reproducibility: finding the same probabilistic trend, i.e., statistically identical results, and reaching the same scientific conclusion. Too many studies claiming significant results could not be reproduced, and this issue has recently attracted attention, since there is now evidence that research is often not reproducible, particularly when it relies on high-performance computing and quantum computing [22].
A notion of “quantum volume” was introduced in [23] and is directly linked to system error rates on NISQ devices. This single-number metric quantifies the largest random circuit of equal width and depth that a tested quantum computer can implement successfully. We would need thousands of good physical qubits to obtain a single “perfect” logical qubit, and it is physically difficult to obtain reliable qubits due to decoherence, noise, the lack of error correction, and so on; thus, reproducibility in quantum computing is a major concern. In [24], the authors reported the results of a three-qubit Grover algorithm using trapped atomic ions. The authors of [25] tested the current viability of using publicly available IBM quantum computers for data-driven tasks, using Grover’s algorithm to investigate the impacts of factors such as qubit number, device choice, and qubit choice (up to four qubits). The work of Wu and his colleagues [26] showed that Grover circuits yield non-reproducible success probabilities across runs. The authors attributed this to the fact that Grover’s amplitude amplification is extremely sensitive to small coherent errors, since the algorithm faces a complex gate decomposition problem. Their paper introduced an optimization with a two-stage quantum search algorithm based on divide-and-conquer, which can run quickly in parallel on a quantum computer.
Implementing Grover’s algorithm on a NISQ device is extremely challenging but remains interesting for testing hardware evolution with a small number of qubits, since we cannot currently scale as much as we would like with current technology. A recent study introduced GRADE, a Grover-based reliability assessment for device evaluation [27]. Thanks to IBM, which offers free access to quantum machines for scientists, we have also been able, since 2019, to test the reproducibility of the Grover algorithm [28] with different simulators (which provide reproducible statistics), whereas the use of various machines has often led to distinct and inconsistent results from run to run, as we observed when testing a five-qubit version of the algorithm. This corroborates the works previously cited and the NISQ-era limitations discussed in the previous section.
6. Conclusions
Grover’s algorithm is one of the cornerstone quantum algorithms [7,8]. It provides a quadratic speedup for searching an unstructured database or, more generally, for solving problems that can be reduced to an unstructured search. Its principle is to amplify the probability amplitude of a marked state (or states) through repeated operations called Grover iterations. The algorithm is not a universal solution, but thanks to amplitude amplification, it can boost the probability of finding a correct solution in other quantum algorithms, making them more efficient. Its biggest practical impact is expected in cryptanalysis and optimization tasks. Efficiently implementing the Grover algorithm requires a circuit depth that is out of reach on today’s quantum machines, while the absence of scalable QRAM prevents true quantum access to classical datasets. Lastly, noise accumulation disrupts the delicate amplitude rotations central to the algorithm, reducing the feasible number of iterations to only a few. Hence, Grover’s algorithm has been implemented only at a small scale. Even there, quantum computing faces numerous practical challenges; among them, obtaining the same scientific conclusions from our quantum machines is essential. Reproducibility is an epistemological cornerstone of scientific progress, and it is not easy to achieve in the era of noisy intermediate-scale quantum machines, since identical quantum experiments currently yield different statistical results. Interested readers are encouraged to explore the fundamental principles of the algorithm further in [29], which also discusses the latest trends in applying Grover’s algorithm to various problems, including large-scale database searches, cryptanalysis, and optimization.