Review

Quantum Machine Learning and Deep Learning: Fundamentals, Algorithms, Techniques, and Real-World Applications

Electronics Laboratory, Physics Department, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mach. Learn. Knowl. Extr. 2025, 7(3), 75; https://doi.org/10.3390/make7030075
Submission received: 23 April 2025 / Revised: 24 July 2025 / Accepted: 29 July 2025 / Published: 1 August 2025
(This article belongs to the Section Learning)

Abstract

Quantum computing, with its foundational principles of superposition and entanglement, has the potential to provide significant quantum advantages, addressing challenges that classical computing may struggle to overcome. As data generation continues to grow exponentially and technological advancements accelerate, classical machine learning algorithms increasingly face difficulties in solving complex real-world problems. The integration of classical machine learning with quantum information processing has led to the emergence of quantum machine learning, a promising interdisciplinary field. This work provides the reader with a bottom-up view of quantum circuits, starting from quantum data representation and quantum gates and progressing to the fundamental quantum algorithms and more complex quantum processes. A thorough study of the mathematics behind these building blocks guides scientists entering the domain and clarifies their connection to quantum machine learning. Quantum algorithms such as Shor’s algorithm, Grover’s algorithm, and the Harrow–Hassidim–Lloyd (HHL) algorithm are discussed in detail. Furthermore, real-world implementations of quantum machine learning and quantum deep learning are presented in fields such as healthcare, bioinformatics, and finance. These implementations aim to enhance time efficiency and reduce algorithmic complexity through the development of more effective quantum algorithms. Therefore, a comprehensive understanding of the fundamentals of these algorithms is crucial.

1. Introduction

Quantum technologies have experienced remarkable growth in recent years. Since the mid-20th century, when Richard Feynman first introduced the concept of quantum computing, extensive research has been conducted in this field. Quantum computers leverage quantum mechanical principles such as superposition and entanglement. Unlike classical computers, which encode information using bits that represent either 0 or 1, quantum computers use qubits, which can represent both values simultaneously due to their quantum properties [1].
Machine learning (ML) problems fundamentally involve two key tasks: efficiently managing large volumes of data and developing algorithms that process these data as quickly as possible. Quantum registers offer a significant advantage over classical registers in addressing the first task. While an n-bit classical register can store only a single n-bit binary string, an n-qubit register can represent $2^n$ n-bit binary strings simultaneously by encoding the information in quantum amplitudes. However, extracting all these strings is challenging, as measurement causes the quantum state to collapse, yielding only one string or amplitude as the output. Despite this limitation, qubits enable inherent parallelism, allowing algorithms to operate on all $2^n$ strings simultaneously, which can lead to exponential speedups over their classical counterparts [2].
Quantum algorithms are the foundation of quantum computing, with decades of research driving significant progress and breakthroughs. Nine key quantum algorithms are covered in this work: Deutsch’s algorithm, the Deutsch–Jozsa algorithm, the Bernstein–Vazirani algorithm, Simon’s algorithm, the quantum Fourier transform, the phase estimation algorithm, Shor’s algorithm, Grover’s algorithm and the HHL algorithm. Shor’s algorithm, introduced in 1994 [3], was a major breakthrough, enabling the efficient factoring of large numbers, a problem believed to be computationally hard for classical computers since no polynomial-time classical algorithm is known for it. The algorithm leverages the quantum phase estimation (QPE) algorithm to estimate the eigenvalues of unitary operators [4]. In 1996, Lov Grover further advanced the field with Grover’s algorithm [5], achieving a quadratic speedup for searching unsorted databases.
Quantum machine learning (QML) is an interdisciplinary field that lies at the intersection of quantum computing (QC) and ML. The scope of this field leverages the principles of quantum mechanics to process and analyze data more efficiently. Due to the rapid growth of data, classical computers struggle to handle it effectively. As a result, QML presents a promising solution to address these challenges.
The progression of QML can be divided into two main stages. The first stage, from the mid-1990s to 2007, was mainly focused on the development of theoretical models. The second stage, which is ongoing, is concentrated on the practical application and implementation of these models [6].
In 1995, Kak [7] introduced a novel computational framework that integrates quantum mechanics with neural network models to enhance learning, memory, and processing efficiency. Building on this foundation, Ventura and Martinez proposed a quantum associative memory in 1999 [8], capable of retrieving stored patterns exponentially faster than classical models. Around the same time, the concept of a qubit-like neuron was introduced [9], followed by the proposal of a quantum neural network (QNN) [10]. After these breakthrough proposals, other quantum machine learning techniques emerged, including the quantum support vector machine [11] and the quantum k-means algorithm [12].
The first comprehensive monograph on QML [13] was published in 2014. This was followed by key implementations, including the first demonstration of a quantum neuron on a quantum processor as outlined in [14]. In 2020, Google introduced Quantum TensorFlow [15], seamlessly integrating quantum computing capabilities into the TensorFlow platform.
The aim of this work is to provide a tutorial for beginners to enter the field of QML by thoroughly studying the mathematics behind quantum algorithms and QML algorithms. Additionally, real-world applications are presented to illustrate the practical use of these concepts.
There are several review articles that present QML. What differentiates our work from previous studies is the following. In [16], there is a comprehensive analysis covering both NISQ and fault-tolerant quantum algorithms. However, our study specifically focuses on fault-tolerant quantum algorithms and their mathematical preliminaries, which are missing in [16]. While [17] is a review on QML, it lacks a tutorial character, and quantum algorithms are not presented in detail. Our work, in contrast, provides a more structured and explanatory approach. In [18], a review on QML is presented, but it does not include mathematical equations, which we explicitly cover in our study. In [19], some quantum algorithms are presented in significant detail, but the article lacks a tutorial character and does not discuss possible applications in QML and QDL, which our study addresses. In [20], the scope does not include an in-depth mathematical explanation of the algorithms, whereas our work aims to bridge this gap.
The paper is organized as follows: Section 2 presents the fundamental theory of quantum computing. An analytic mathematical explanation of basic quantum algorithms is provided in Section 3. Section 4 and Section 5 cover the fundamentals of quantum machine learning and quantum deep learning, respectively, along with some applications. Section 6 highlights real-world applications of quantum machine learning. Finally, the conclusions are presented in Section 7.

2. Overview of Quantum Computing

2.1. History and Evolution of QC

Quantum computing is an emerging field that leverages principles of quantum mechanics to perform computations. It sits at the intersection of mathematics, physics and computer science. While quantum computers exist today, their practical use remains minimal. Despite its immense potential, quantum computing is still in its early stages. Preskill [21] first introduced the term “noisy intermediate-scale quantum (NISQ) era” because the current quantum circuits are susceptible to noise. However, there is optimism that NISQ-era devices will soon demonstrate practical applications, marking a significant step toward more advanced quantum computing.
The theoretical idea of quantum computing began in the mid-20th century and has since evolved into a rapidly advancing field of research and technological development. The development of quantum mechanics by physicists such as Niels Bohr, Werner Heisenberg, Erwin Schrödinger, and Paul Dirac in the 1920s and 1930s laid the groundwork for understanding the strange behavior of subatomic particles, such as superposition and entanglement, principles central to quantum computing. Einstein also made a foundational contribution to quantum theory by explaining the photoelectric effect, proposing that light behaves as discrete packets of energy called quanta. The idea of quantum computing was introduced by Nobel laureate Richard Feynman. While working on a simulation of quantum physics models, he discovered that the values in his calculations were growing exponentially, requiring computational power far beyond what traditional computers could handle [22].
Between 1980 and 2000, the foundational concepts of quantum computing emerged. In 1981, Richard Feynman proposed that a quantum computer could be used to simulate quantum systems, which classical computers could not perform efficiently. This is widely considered the first concrete proposal for a quantum computer [22]. In 1985, David Deutsch, a British physicist, formalized the idea of a quantum Turing machine, a theoretical model for a universal quantum computer. His work demonstrated that a quantum computer could solve problems that classical computers could not. During the late 1980s, researchers began developing early quantum algorithms, including Deutsch’s algorithm. One of the most significant breakthroughs occurred in 1994 when Peter Shor developed an efficient quantum algorithm for integer factorization [23].
In the early 2000s, researchers built small-scale quantum computers that could run simple algorithms. In 2001, researchers from IBM and Stanford University successfully implemented a small version of Shor’s algorithm on a 7-qubit quantum computer, factorizing the number 15. In 2011, D-Wave Systems, a Canadian company, launched its first commercial quantum computing system based on quantum annealing [23].
After 2010, several companies began investing more heavily in the development of efficient quantum computers, aiming to build systems with a greater number of qubits. Nowadays, large companies such as Google, IBM and Microsoft, along with startups such as Rigetti, D-Wave, and Xanadu, have made breakthroughs in building quantum computers with increasing qubit capacities. Additionally, there has been active research in developing quantum algorithms and exploring the applications of quantum computers in real-world scenarios, including finance, machine learning, drug discovery and cryptography.
Quantum algorithms have been developed for both NISQ and fault-tolerant devices. For NISQ devices, the most important are variational algorithms, including the Variational Quantum Eigensolver (VQE) [24], the Quantum Approximate Optimization Algorithm (QAOA) [25] and Parameterized Quantum Circuits [26].
A variational quantum circuit is a hybrid approach that combines quantum and classical computation, utilizing the advantages of both. It consists of a quantum circuit with adjustable parameters that a classical computer optimizes iteratively. These parameters function similarly to the weights in artificial neural networks [27].
The algorithms described below refer to fault-tolerant quantum computing.

2.2. Bra–Ket Notation

States and operators in quantum mechanics are represented as vectors and matrices, respectively [28]. Bra–Ket Notation, or Dirac notation, was introduced as an easier way to write quantum mechanical expressions.
A ket, denoted as $|\psi\rangle$, represents a quantum state in a Hilbert space. It is a column vector that encapsulates all the information about the state of a quantum system. If $|a\rangle$ is a quantum state in a two-level system, it can be represented as
$$|a\rangle = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}$$
A bra is denoted as $\langle\phi|$. It is a row vector obtained by taking the conjugate transpose of a ket. If $|b\rangle$ is a quantum state, its corresponding bra is represented as
$$\langle b| = |b\rangle^{\dagger} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}^{\dagger} = \begin{pmatrix} b_1^* & b_2^* \end{pmatrix}$$

2.3. Quantum Bits

The fundamental component of QC is known as a quantum bit, or qubit for short. Qubits are the basic units of quantum information and behave according to the principles of quantum mechanics. Quantum characteristics are not limited to subatomic particles, as larger systems can also exhibit quantum behavior. A qubit can be realized using various entities, including photons, electrons, neutrons, and even atoms [29]. A qubit can be represented as a vector in a two-dimensional complex vector space. The states $|0\rangle$ and $|1\rangle$ are written in vector form as
$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
A qubit can exist in the state 0, the state 1, or in a combination of both states simultaneously, a phenomenon referred to as superposition. In contrast, classical bits are limited to a single value at any given time, either 0 or 1. Mathematically, a qubit can be expressed by the following equation:
$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$
where $|\psi\rangle$ is the qubit's state and $\alpha$ and $\beta$ are the probability amplitudes of the state being in $|0\rangle$ and $|1\rangle$, respectively. It must be noted that
$$|\alpha|^2 + |\beta|^2 = 1$$
Thus, the general state of a qubit $|\psi\rangle$ can also be written as
$$|\psi\rangle = \alpha\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \beta\begin{pmatrix} 0 \\ 1 \end{pmatrix}$$
When we conduct a measurement, we retrieve a single bit of information, either 0 or 1. The simplest form of measurement occurs in the computational basis, represented by $|0\rangle$ and $|1\rangle$. For instance, measuring the state $\alpha|0\rangle + \beta|1\rangle$ in this basis yields the result 0 with probability $|\alpha|^2$ and the result 1 with probability $|\beta|^2$.
In quantum mechanics, the inner product between two vectors representing qubit states in a Hilbert space provides valuable information about their similarity. Specifically, the inner product is a mathematical operation that takes two quantum states, denoted as $|\phi\rangle$ and $|\psi\rangle$, and is represented as $\langle\phi|\psi\rangle$.
This inner product reveals important properties of the states:
1. Orthogonality: If the states $|\phi\rangle$ and $|\psi\rangle$ are orthogonal, meaning they are completely independent and share no information, the inner product equals zero:
$$\langle\phi|\psi\rangle = 0$$
Consequently, the inner product $\langle 0|1\rangle$ can be computed as
$$\langle 0|1\rangle = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = 1\cdot 0 + 0\cdot 1 = 0$$
2. Normalization: If the states are identical, that is $|\phi\rangle = |\psi\rangle$, the inner product equals one:
$$\langle\phi|\phi\rangle = 1$$
Consequently, the inner product $\langle 0|0\rangle$ can be computed as
$$\langle 0|0\rangle = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = 1\cdot 1 + 0\cdot 0 = 1$$
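To make the bra–ket notation concrete, the following NumPy sketch (our own illustration; variable names such as ket0 and bra_psi are not taken from the reviewed literature) represents the computational basis kets as column vectors, forms a general qubit state, and evaluates the inner products discussed above.

```python
import numpy as np

# Computational basis kets as column vectors
ket0 = np.array([[1], [0]], dtype=complex)   # |0>
ket1 = np.array([[0], [1]], dtype=complex)   # |1>

# A general normalized qubit state |psi> = alpha|0> + beta|1>
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# A bra is the conjugate transpose of the corresponding ket
bra_psi = psi.conj().T

print((bra_psi @ psi).item())            # <psi|psi> = 1 (normalization)
print((ket0.conj().T @ ket1).item())     # <0|1> = 0 (orthogonality)
print(abs(alpha) ** 2 + abs(beta) ** 2)  # measurement probabilities sum to 1
```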
Quantum computing uses quantum physics principles including superposition and entanglement to process data. Superposition is described by Equation (4). This property allows for the exponential speedup of quantum computing, as a qubit can exist in a combination of both states simultaneously. Quantum entanglement will be described in detail below.

2.4. Quantum Gates

The fundamental components of quantum circuits are quantum gates, which act on qubits. These quantum gates are realized through unitary operators, which serve to transform the state of a closed quantum system. Therefore, quantum gates are represented by unitary matrices. Unitary operators play a crucial role in evolving the quantum state, preserving the overall probabilities and maintaining the reversibility of the quantum process. A unitary operator U maps the quantum state $|\psi\rangle$ to the state $|\phi\rangle$ as follows:
$$|\phi\rangle = U|\psi\rangle$$
An operator U is a unitary transformation if the following condition is satisfied:
$$U^{\dagger}U = UU^{\dagger} = I$$
where $U^{\dagger}$ is the conjugate transpose of the unitary operator U, and I is the identity operator.
The equation above describes the reversibility of a quantum system and ensures that the information contained in a quantum state can be recovered after the application of a quantum gate, allowing for the reconstruction of the original state prior to the operation. Every quantum gate operation is inherently reversible due to the unitary nature of quantum mechanics, unlike many classical operations.
Quantum gates operate on single-qubit, two-qubit or multi-qubit systems. Therefore, quantum gates can be categorized into three main categories: single-qubit gates, two-qubit gates and multi-qubit gates [30]. Below is a concise representation of the most significant quantum gates in each category.

2.4.1. Single-Qubit Gates

Single-qubit gates operate on a single qubit. Common single-qubit gates are illustrated in Figure 1 and include the following:
  • Pauli (X) Gate: Acts like a quantum NOT gate, flipping the state $|0\rangle$ to $|1\rangle$ and vice versa.
    $$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
    For example, applying the X gate to the state $|0\rangle$,
    $$X|0\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = |1\rangle$$
    Similarly, applying the X gate to the state $|1\rangle$,
    $$X|1\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = |0\rangle$$
  • Pauli (Y) Gate: Introduces both a bit flip and a phase change.
    $$Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$
    For example, applying the Y gate to the state $|0\rangle$,
    $$Y|0\rangle = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ i \end{pmatrix} = i|1\rangle$$
    Similarly, applying the Y gate to the state $|1\rangle$,
    $$Y|1\rangle = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -i \\ 0 \end{pmatrix} = -i|0\rangle$$
  • Pauli (Z) Gate: Applies a phase flip to the $|1\rangle$ state.
    $$Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
    For example, applying the Z gate to the state $|0\rangle$,
    $$Z|0\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = |0\rangle$$
    Applying the Z gate to the state $|1\rangle$,
    $$Z|1\rangle = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ -1 \end{pmatrix} = -|1\rangle$$
  • Hadamard (H) Gate: One of the most frequently used quantum gates, transforming a basis state ($|0\rangle$ or $|1\rangle$) into a superposition state.
    $$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$
    For example, applying the H gate to the state $|0\rangle$,
    $$H|0\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$$
    Similarly, applying the H gate to the state $|1\rangle$,
    $$H|1\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)$$
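As a numerical sanity check of the matrices above, the following sketch (our own illustration in NumPy) defines the Pauli and Hadamard gates, verifies that each satisfies $U^{\dagger}U = I$, and reproduces their action on $|0\rangle$ and $|1\rangle$.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Every gate is unitary: U^dagger U = I
for name, U in [("X", X), ("Y", Y), ("Z", Z), ("H", H)]:
    assert np.allclose(U.conj().T @ U, np.eye(2)), name

print(X @ ket0)   # |1>
print(Y @ ket1)   # -i|0>
print(Z @ ket1)   # -|1>
print(H @ ket0)   # (|0> + |1>)/sqrt(2)
```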

2.4.2. Two-Qubit Gates

Two-qubit gates operate on two qubits simultaneously. A two-qubit system consists of two qubits, which together form a composite system. Each qubit can be in the state $|0\rangle$ or $|1\rangle$, so the two-qubit system can be in one of four possible basis states:
$$|00\rangle, \quad |01\rangle, \quad |10\rangle, \quad |11\rangle$$
The two-qubit system is described using the tensor product of the individual qubit states. If qubit 1 is in the state $|\psi_1\rangle$ and qubit 2 is in the state $|\psi_2\rangle$, the combined state is
$$|\psi_{12}\rangle = |\psi_1\rangle \otimes |\psi_2\rangle$$
For example, if $|\psi_1\rangle = |0\rangle$ and $|\psi_2\rangle = |1\rangle$, then the combined state is
$$|0\rangle \otimes |1\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = |01\rangle$$
Common two-qubit gates include the following:
  • CNOT (Controlled-NOT) Gate: Flips the target qubit if the control qubit is $|1\rangle$ (Figure 2a).
    $$\text{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$$
    For example, applying the CNOT gate to the state $|00\rangle$,
    $$\text{CNOT}|00\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = |00\rangle$$
    Similarly, applying the CNOT gate to the state $|01\rangle$,
    $$\text{CNOT}|01\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = |01\rangle$$
    Similarly, applying the CNOT gate to the state $|10\rangle$,
    $$\text{CNOT}|10\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = |11\rangle$$
    Finally, applying the CNOT gate to the state $|11\rangle$,
    $$\text{CNOT}|11\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = |10\rangle$$
    The information above is summarized in Table 1, which shows that the output state of the target qubit corresponds to that of a classical XOR gate: the target qubit is $|0\rangle$ when both inputs are identical and $|1\rangle$ when the inputs differ.
  • SWAP Gate: Swaps the states of two qubits (Figure 2b).
    $$\text{SWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
    For example, applying the SWAP gate to the state $|00\rangle$,
    $$\text{SWAP}|00\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = |00\rangle$$
    Similarly, applying the SWAP gate to the state $|01\rangle$,
    $$\text{SWAP}|01\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = |10\rangle$$
    Similarly, applying the SWAP gate to the state $|10\rangle$,
    $$\text{SWAP}|10\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = |01\rangle$$
    Finally, applying the SWAP gate to the state $|11\rangle$,
    $$\text{SWAP}|11\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = |11\rangle$$
    The information above is summarized in Table 2.
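The two-qubit examples above can be reproduced with Kronecker (tensor) products. The sketch below (our own illustration, with the qubit ordering $|q_1 q_2\rangle$ used in the text) builds the four basis states, applies the CNOT and SWAP matrices, and prints the resulting states, reproducing Tables 1 and 2.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
basis = {0: ket0, 1: ket1}

def two_qubit(a, b):
    """Return |ab> as a 4-dimensional vector via the tensor product."""
    return np.kron(basis[a], basis[b])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

labels = ["|00>", "|01>", "|10>", "|11>"]
for a in (0, 1):
    for b in (0, 1):
        state = two_qubit(a, b)
        out_cnot = labels[int(np.argmax(np.abs(CNOT @ state)))]
        out_swap = labels[int(np.argmax(np.abs(SWAP @ state)))]
        print(f"CNOT|{a}{b}> = {out_cnot},  SWAP|{a}{b}> = {out_swap}")
```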

2.4.3. Multi-Qubit Gates

Multi-qubit gates operate on multiple qubits simultaneously. A multi-qubit system consists of n qubits, which together form a composite quantum system. Each qubit can be in the state $|0\rangle$ or $|1\rangle$, allowing the multi-qubit system to exist in one of $2^n$ possible basis states. The states of the multi-qubit system are represented using the tensor product of the individual qubit states. If qubit 1 is in the state $|\psi_1\rangle$, qubit 2 is in the state $|\psi_2\rangle$ and so on, the combined state of the system is given by
$$|\psi_{1\cdots n}\rangle = |\psi_1\rangle \otimes |\psi_2\rangle \otimes \cdots \otimes |\psi_n\rangle$$
For example, if $|\psi_1\rangle = |0\rangle$, $|\psi_2\rangle = |1\rangle$, and $|\psi_3\rangle = |0\rangle$, then the combined state of the three-qubit system is
$$|0\rangle \otimes |1\rangle \otimes |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 0 \\ 1 \end{pmatrix} \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = |010\rangle$$
Fredkin Gate: Swaps the states of two target qubits based on the state of a control qubit. If the control qubit is $|1\rangle$, the states of the two target qubits are swapped. If the control qubit is $|0\rangle$, the target qubits remain unchanged (Figure 3). The Fredkin gate is also known as the controlled-SWAP gate.
$$\text{Fredkin} = \begin{pmatrix} 1&0&0&0&0&0&0&0 \\ 0&1&0&0&0&0&0&0 \\ 0&0&1&0&0&0&0&0 \\ 0&0&0&1&0&0&0&0 \\ 0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&0&1&0 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&0&1 \end{pmatrix}$$
For example, applying the Fredkin gate to the state $|001\rangle$ (the control qubit is $|0\rangle$, so nothing changes),
$$\text{Fredkin}\,|001\rangle = \text{Fredkin}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = |001\rangle$$
Similarly, applying the Fredkin gate to the state $|110\rangle$ (the control qubit is $|1\rangle$, so the two target qubits are swapped),
$$\text{Fredkin}\,|110\rangle = \text{Fredkin}\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = |101\rangle$$
The information above is summarized in Table 3.

2.5. Quantum Measurement

After the application of quantum gates, the measurement process is initiated. Measurement in quantum computing constitutes a fundamental concept that significantly influences the operation of quantum systems and their interaction with classical information. Following the execution of a series of quantum gates, the state of a quantum system can be represented by a quantum state vector $|\psi\rangle$. This state vector encapsulates the information regarding the quantum system and can be expressed as a superposition of orthonormal basis states:
$$|\psi\rangle = \sum_i \alpha_i |\psi_i\rangle$$
where $|\psi_i\rangle$ denotes the orthonormal basis states and $\alpha_i$ are the complex coefficients that characterize the probability amplitudes associated with each basis state. Upon measurement, the quantum state collapses to one of the possible basis states, with the probability of obtaining a particular state $|\psi_i\rangle$ given by the square of the magnitude of the corresponding coefficient [31]:
$$P(i) = |\alpha_i|^2$$
The measurement process is essential, as it facilitates the extraction of information and enables decision-making based on the results of quantum computations.
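The Born rule above can be illustrated with a small sampling experiment. The following sketch (our own example state; the seed and sample size are arbitrary) computes the outcome probabilities $|\alpha_i|^2$ from a state vector and samples measurement results from them.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Example 2-qubit state: (|00> + |11>)/sqrt(2), amplitudes indexed 0..3
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Born rule: P(i) = |alpha_i|^2
probs = np.abs(psi) ** 2
print(probs)  # [0.5, 0, 0, 0.5]

# Simulate 1000 measurements in the computational basis
outcomes = rng.choice(len(psi), size=1000, p=probs)
counts = np.bincount(outcomes, minlength=len(psi))
print(dict(zip(["00", "01", "10", "11"], counts)))
```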

2.6. Quantum Entanglement

An important principle of quantum computing is entanglement, which has no classical analogue. Qubits can become entangled, a phenomenon in which the state of one qubit is directly related to the state of another, regardless of the distance between them [32]. In simple terms, two quantum systems are entangled when their combined state cannot be written as a tensor product of the states of the individual systems.
Suppose that two qubits $q_{s0}$ and $q_{s1}$ are in the state $|q_s\rangle$, which is given by
$$|q_s\rangle = \frac{1}{\sqrt{2}}\left(|10\rangle + |11\rangle\right)$$
$|q_s\rangle$ can also be written as
$$|q_s\rangle = \frac{1}{\sqrt{2}}\left(|10\rangle + |11\rangle\right) = |1\rangle \otimes \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$$
That is, the states of $q_{s0}$ and $q_{s1}$ are
$$|q_{s1}\rangle = |1\rangle$$
$$|q_{s0}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)$$
Therefore, $|q_s\rangle$ can be written as
$$|q_s\rangle = |q_{s1}\rangle \otimes |q_{s0}\rangle$$
That is, $|q_s\rangle$ can be written as the tensor product of the states of the two qubits, so $q_{s0}$ and $q_{s1}$ are not entangled but in a separable state.
Let us consider two other qubits, $q_{e0}$ and $q_{e1}$, which are in the state $|q_e\rangle$, given by
$$|q_e\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$$
The state $|q_e\rangle$ cannot be written as the tensor product of the states of the two qubits, so $q_{e0}$ and $q_{e1}$ are entangled. The difference between separability and entanglement is explained as follows.
When qubit $q_{s1}$ is measured in the state $|q_s\rangle$, it is always found in $|1\rangle$. Afterward, qubit $q_{s0}$ has a 50% chance of being in $|0\rangle$ or $|1\rangle$, meaning that the measurement of $q_{s1}$ does not affect $q_{s0}$. When qubit $q_{e1}$ is measured in the state $|q_e\rangle$, it has a 50% chance of being found in $|0\rangle$ or $|1\rangle$. If $q_{e1}$ is found in $|0\rangle$, $q_{e0}$ will be in $|0\rangle$; if in $|1\rangle$, $q_{e0}$ will be in $|1\rangle$. This shows that measuring one entangled qubit determines the state of the other.
Bell states are quantum states involving two qubits that represent the simplest form of quantum entanglement. The quantum circuit that creates Bell states is shown in Figure 4. While there are various ways to generate entangled Bell states using quantum circuits, the most basic approach starts with a computational basis as the input and employs a Hadamard gate followed by a CNOT gate.
Suppose that the input state is given by the following equation:
$$|\psi_0\rangle = |00\rangle$$
Next, a Hadamard gate is applied to the q0 qubit, putting it into a superposition state using Equation (23).
$$|\psi_1\rangle = \left(H|0\rangle\right) \otimes |0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes |0\rangle = \frac{|00\rangle + |10\rangle}{\sqrt{2}}$$
Then, q0 acts as the control input to the CNOT gate and the target qubit (q1) is inverted only when the control is 1. The output is as follows:
$$|\psi_2\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}$$
Table 4 summarizes the results of the Bell state circuit computation. For example, knowing the state of two input qubits and measuring one of the output qubits allows us to determine the state of the other qubit, as the two qubits are entangled.
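The Bell-state preparation of Figure 4 can be reproduced by a direct state-vector calculation. The sketch below (our own illustration; q0 is the control qubit and occupies the left position of the ket, as in the text) applies H to q0 followed by CNOT and recovers $(|00\rangle + |11\rangle)/\sqrt{2}$.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # control = left qubit

psi0 = np.kron(ket0, ket0)          # |00>
psi1 = np.kron(H, I) @ psi0         # (|00> + |10>)/sqrt(2)
psi2 = CNOT @ psi1                  # (|00> + |11>)/sqrt(2)

print(np.round(psi2, 3))            # [0.707, 0, 0, 0.707]
```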
The quantity $\mathrm{Tr}[\rho_A^2]$ is a measure of the degree of entanglement. The following equation shows whether two subsystems, A and B, are completely separable or have some degree of entanglement:
$$\mathrm{Tr}[\rho_A^2] = \begin{cases} 1 & \text{if A and B are completely separable} \\ \in \left[\tfrac{1}{2}, 1\right) & \text{if there is a degree of entanglement} \end{cases}$$
Here, $\mathrm{Tr}[\rho_A^2]$ represents the trace of the square of the reduced density matrix $\rho_A$.
Two states are completely entangled when
$$\mathrm{Tr}[\rho_A^2] = \frac{1}{2}$$
The density matrix $\rho$ of a single qubit $|q\rangle = \alpha|0\rangle + \beta|1\rangle$ can be computed by the following equation:
$$\rho = |q\rangle\langle q| = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}\begin{pmatrix} \alpha^* & \beta^* \end{pmatrix} = \begin{pmatrix} \alpha\alpha^* & \alpha\beta^* \\ \beta\alpha^* & \beta\beta^* \end{pmatrix} = \begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \beta\alpha^* & |\beta|^2 \end{pmatrix}$$
The reduced density matrix $\rho_A$ can be computed by the following equation:
$$\rho_A = \mathrm{Tr}_B\,\rho_{AB} = \mathrm{Tr}_B\left(|a_1\rangle\langle a_2| \otimes |b_1\rangle\langle b_2|\right) = |a_1\rangle\langle a_2|\,\mathrm{Tr}\left(|b_1\rangle\langle b_2|\right) = |a_1\rangle\langle a_2|\,\langle b_2|b_1\rangle$$
As an instance, consider the following Bell state:
$$|q_{AB}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$$
The reduced density matrix of qubit A is given by
$$\rho_A = \mathrm{Tr}_B\,\rho_{AB} = \mathrm{Tr}_B\left[\frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)\frac{1}{\sqrt{2}}\left(\langle 00| + \langle 11|\right)\right] = \frac{1}{2}\,\mathrm{Tr}_B\left[|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|\right]$$
$$= \frac{1}{2}\left[|0\rangle\langle 0|\,\langle 0|0\rangle + |0\rangle\langle 1|\,\langle 1|0\rangle + |1\rangle\langle 0|\,\langle 0|1\rangle + |1\rangle\langle 1|\,\langle 1|1\rangle\right]$$
Considering that the basis states are orthonormal, we have
$$\rho_A = \frac{1}{2}\left(|0\rangle\langle 0| + |1\rangle\langle 1|\right)$$
The corresponding density matrix is
$$\rho_A = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Therefore, $\rho_A^2$ can be computed as
$$\rho_A^2 = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \frac{1}{4}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Thus,
$$\mathrm{Tr}(\rho_A^2) = \frac{1}{4}(1 + 1) = \frac{2}{4} = \frac{1}{2}$$
The above computation indicates that the two qubits are completely entangled.
As another example, suppose the initial state of the two-qubit system is given by
$$|q_{AB}\rangle = |00\rangle$$
The reduced density matrix of qubit A is given by
$$\rho_A = \mathrm{Tr}_B\,\rho_{AB} = \mathrm{Tr}_B\left(|00\rangle\langle 00|\right) = |0\rangle\langle 0|$$
The corresponding density matrix is
$$\rho_A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
Therefore, $\rho_A^2$ can be computed as
$$\rho_A^2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
Thus,
$$\mathrm{Tr}(\rho_A^2) = 1 + 0 = 1$$
The above computation indicates that the two qubits are completely separable.
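Both worked examples can be checked numerically. The sketch below (our own helper functions for the two-qubit case) builds $\rho_{AB} = |q\rangle\langle q|$, traces out qubit B, and evaluates $\mathrm{Tr}[\rho_A^2]$ for the Bell state and for the product state $|00\rangle$.

```python
import numpy as np

def reduced_density_A(psi):
    """Partial trace over qubit B for a two-qubit pure state |psi> (4-vector)."""
    rho = np.outer(psi, psi.conj())          # rho_AB = |psi><psi|
    rho = rho.reshape(2, 2, 2, 2)            # indices: (a, b, a', b')
    return np.trace(rho, axis1=1, axis2=3)   # sum over b = b'

def purity(rho):
    return np.trace(rho @ rho).real          # Tr[rho^2]

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=complex)             # |00>

print(purity(reduced_density_A(bell)))     # 0.5  -> completely entangled
print(purity(reduced_density_A(product)))  # 1.0  -> completely separable
```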

2.7. Quantum Computing Models

Quantum computing models provide abstract frameworks that define how quantum information is processed, specifying how qubits are manipulated and how computation proceeds in a quantum system. The quantum computing model described above is the quantum circuit model, also known as the gate-based model. This is the most widely used and familiar framework for quantum computing, analogous to classical digital circuits. Figure 5 depicts the main idea of this model. The quantum circuit model consists of three stages: initialization of the qubits, application of quantum gates, and measurement. This model serves as the foundation for most current quantum computers, including those developed by companies like IBM and Google.
However, several other models have also been proposed, such as the adiabatic model (used by D-Wave Systems) [33], the topological model (explored by Microsoft) [34], and the quantum annealing model (also used by D-Wave Systems) [35].

2.8. Physical Representation of Quantum Computers

The physical representation of quantum computers encompasses a variety of methodologies, each with unique advantages and challenges. Ongoing research in these areas aims to optimize performance, enhance scalability and develop practical quantum computing systems. As the field progresses, these diverse approaches contribute to the broader goal of achieving functional and effective quantum computing technologies. The most important technologies used to physically realize quantum computers are the following:
  • Ion Trap Quantum Computing. This method involves trapping individual ions using electromagnetic fields. The ions serve as qubits and their quantum states are manipulated using laser beams [36].
  • Superconducting Quantum Computing. This approach involves circuits made from superconducting materials that exhibit quantum behavior at low temperatures. The qubits are manipulated using microwave pulses, allowing for fast and efficient operations [37].
  • Linear Optical Quantum Computing. This method uses photons as qubits and leverages the properties of linear optical elements (like beam splitters, phase shifters, and detectors) to process quantum information [38].
  • Semiconductor Spin-Based Quantum Computing. This approach uses the spin states of electrons in semiconductor materials (like silicon) as qubits. Spin qubits can be created using quantum dots or by doping silicon with specific atoms [36].
  • Nuclear Magnetic Resonance (NMR)-Based Quantum Computing. NMR quantum computing employs the nuclear spins of molecules as qubits. Magnetic fields and radiofrequency pulses are used to manipulate the spins [36]. It was the initial approach to building quantum computers, but it has since become less favored [37].
  • Quantum Computing with Defects. This approach uses defects in solid-state materials (like nitrogen-vacancy centers in diamond or silicon vacancies in silicon carbide) as qubits [38].
Several companies are actively working on the physical implementation of quantum computers using various approaches. For instance, IBM and Google are focusing on superconducting qubits, while IonQ is advancing the trapped ion approach. Xanadu is exploring linear optical quantum computing [37]. These companies are making significant investments in quantum technology and frequently achieving new milestones, contributing to a clearer understanding of how these machines function. Their ongoing efforts are steadily advancing toward unlocking the true potential of quantum computers [22].

2.9. Quantum Noise

Quantum noise refers to the inherent uncertainty and random fluctuations found in quantum systems as a result of the fundamental principles of quantum physics. Unlike classical noise, which is caused by external sources such as heat disturbances or electromagnetic interference, quantum noise inevitably arises from the fundamental features of quantum particles. This is a consequence of the Heisenberg uncertainty principle, which asserts that some pairs of physical quantities, such as position and momentum, cannot both be measured with arbitrary precision at the same time.
In quantum information science, quantum noise can produce decoherence, which occurs when quantum systems lose their distinctive quantum features and behave more like classical systems. This issue poses a major barrier to the creation of reliable quantum computers. To limit the influence of quantum noise, researchers use approaches such as quantum error correction, which detects and corrects noise-induced errors [39].

3. Quantum Algorithms

As one group of researchers made progress on the physical implementation of quantum computers, others advanced in identifying algorithms that would run on a quantum computer with a speedup over classical computers.
Just as classical computers rely on classical algorithms for their functionality, quantum computers depend on quantum algorithms. These algorithms aim to demonstrate the advantages of quantum computing over classical computing. The investigation of quantum algorithms has constituted a dynamic area of research for over 20 years; however, the development of a fully operational quantum computer remains a work in progress [40].
As previously mentioned, the quantum circuit model is the most widely utilized framework in quantum computing. Quantum algorithms are typically represented by quantum gates that operate on a set of qubits and conclude with a measurement process.
After proposing the concept of a quantum Turing machine, Deutsch [41] developed an algorithm to demonstrate faster performance on a quantum computer. The Deutsch algorithm solves the problem of determining whether a function is constant or balanced using just one query, whereas a classical computer requires two queries. This algorithm showcases the strength of quantum parallelism. Deutsch, together with Richard Jozsa, later extended this idea into the more general Deutsch–Jozsa algorithm [42].
In 1993, Bernstein and his student Vazirani published a paper [43] introducing an algorithm that outperforms the best-known classical approach to a specific problem. Furthermore, they contributed by presenting a quantum version of the Fourier transform within the same paper.
Following the contributions of Bernstein and Vazirani, Daniel Simon made further advancements in 1994. Simon presented a problem [44] that could be solved exponentially faster by a quantum computer compared to a classical one.
One of the most important breakthroughs in quantum computing was Shor’s algorithm [3], developed in 1994. Shor built on earlier work by Deutsch, the Bernstein–Vazirani algorithm and Simon’s algorithm to create a way to quickly factor large numbers into two prime factors. While classical computers find this task extremely hard, Shor’s algorithm can complete it efficiently on a quantum computer, thanks in part to the quantum phase estimation algorithm, which is essential for estimating the eigenvalues of the unitary operators involved in the computation. Factoring large numbers is fundamental for the Rivest–Shamir–Adleman (RSA) encryption system, which secures most online communication today. This includes protecting credit card info, bank transfers and private messages [32].
Following Shor’s groundbreaking work, Lov Grover developed another significant quantum algorithm in 1996. His algorithm [5] focuses on searching through an unsorted database and provides a substantial quadratic speedup compared to classical algorithms.
The last of the fundamental quantum algorithms, proposed by Harrow, Hassidim and Lloyd, is the HHL algorithm [45]. This algorithm is designed to solve linear systems of equations and demonstrates the potential for exponential speedup under certain conditions compared to classical methods.
Table 5 contains all the aforementioned algorithms with a brief description of their function. These nine quantum algorithms are designed for fault-tolerant quantum computers.

3.1. Deutsch’s Algorithm

Deutsch’s algorithm was one of the first quantum algorithms to demonstrate that quantum computers could solve certain problems more efficiently than classical ones. Deutsch’s problem involves a black-box function, f : { 0 , 1 } { 0 , 1 } , that takes a single bit as input and outputs a single bit. This algorithm is designed to determine whether function f ( x ) is constant or balanced. A function f ( x ) is classified as constant if the output is the same for both inputs (either always 0 or always 1) while it is classified as balanced if the output differs for each input (0 for one input, 1 for the other). This can be expressed mathematically as follows:
$$f(x) \text{ is } \begin{cases} \text{constant}, & \text{if } f(0) = f(1) \\ \text{balanced}, & \text{if } f(0) \neq f(1) \end{cases}$$
Before the mathematical representation of the algorithm, it is crucial to explain an important property known as the phase oracle property. Figure 6 represents the quantum circuit of the phase oracle property.
Let $U_f$ be a unitary operator that maps the state $|x\rangle|y\rangle$ to $|x\rangle|y \oplus f(x)\rangle$. This can be expressed mathematically as follows:
$$U_f|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle$$
where $\oplus$ denotes addition modulo 2 (the XOR operation).
Set $|y\rangle = |0\rangle$. Then, the unitary operator $U_f$ acts as follows:
$$U_f|x\rangle|0\rangle = |x\rangle|0 \oplus f(x)\rangle = |x\rangle|f(x)\rangle$$
since $0 \oplus 0 = 0$ and $0 \oplus 1 = 1$. In this case, the function value simply appears in the second register.
Now set $|y\rangle = |-\rangle$. Thus, the operation of $U_f$ becomes
$$U_f|x\rangle|-\rangle = U_f|x\rangle\frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right) = \frac{1}{\sqrt{2}}\left(U_f|x\rangle|0\rangle - U_f|x\rangle|1\rangle\right) = \frac{1}{\sqrt{2}}\left(|x\rangle|f(x)\rangle - |x\rangle|\overline{f(x)}\rangle\right)$$
$$= \begin{cases} \frac{1}{\sqrt{2}}\left(|x\rangle|0\rangle - |x\rangle|1\rangle\right) & \text{if } f(x) = 0 \\ \frac{1}{\sqrt{2}}\left(|x\rangle|1\rangle - |x\rangle|0\rangle\right) & \text{if } f(x) = 1 \end{cases}$$
$$= \begin{cases} +\left(|x\rangle|-\rangle\right) & \text{if } f(x) = 0 \\ -\left(|x\rangle|-\rangle\right) & \text{if } f(x) = 1 \end{cases}$$
$$= (-1)^{f(x)}|x\rangle|-\rangle$$
Thus, $U_f$ maps the state $|x\rangle|-\rangle$ to $(-1)^{f(x)}|x\rangle|-\rangle$. This property is mathematically expressed as
$$U_f|x\rangle|-\rangle = (-1)^{f(x)}|x\rangle|-\rangle$$
Deutsch’s algorithm can be implemented following a common five-step procedure. The quantum circuit for this algorithm is depicted in Figure 7.
First, initial input states are set up.
$$|\psi_0\rangle = |0\rangle|1\rangle$$
Next, a Hadamard gate is applied to each qubit, putting them into a superposition state using Equations (23) and (24).
$$|\psi_1\rangle = H|0\rangle \otimes H|1\rangle = \left[\frac{|0\rangle + |1\rangle}{\sqrt{2}}\right]\left[\frac{|0\rangle - |1\rangle}{\sqrt{2}}\right]$$
In the third step, the oracle operation is applied. Thus, using Equation (76), $|\psi_2\rangle$ is expressed as follows:
$$|\psi_2\rangle = U_f|\psi_1\rangle = (-1)^{f(x)}\frac{|0\rangle + |1\rangle}{\sqrt{2}}|-\rangle = \begin{cases} \pm\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|-\rangle & \text{if } f(0) = f(1) \\ \pm\frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)|-\rangle & \text{if } f(0) \neq f(1) \end{cases}$$
In the fourth step, a Hadamard gate is applied to the first qubit. Therefore,
$$|\psi_3\rangle = \begin{cases} \pm H\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|-\rangle & \text{if } f(0) = f(1) \\ \pm H\frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right)|-\rangle & \text{if } f(0) \neq f(1) \end{cases} = \begin{cases} \pm|0\rangle|-\rangle & \text{if } f(0) = f(1) \\ \pm|1\rangle|-\rangle & \text{if } f(0) \neq f(1) \end{cases}$$
Therefore, by measuring the first qubit, the function f can be determined to be either constant or balanced. If the measurement of the first qubit yields $|0\rangle$, then f is constant; if the measurement yields $|1\rangle$, then f is balanced.
A classical computer would require at least two evaluations, while a quantum computer requires only one. This algorithm is based on quantum parallelism and highlights the power of quantum computers. However, it does not have any practical applications.
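A direct state-vector simulation makes the single-query property of Deutsch's algorithm explicit. The sketch below (our own construction; the oracle is assembled as the permutation matrix $|x\rangle|y\rangle \mapsto |x\rangle|y \oplus f(x)\rangle$) outputs 0 for a constant f and 1 for a balanced f after one oracle call.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def oracle(f):
    """U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4), dtype=complex)
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    psi = np.kron(H @ ket0, H @ ket1)      # |+>|->
    psi = oracle(f) @ psi                  # single oracle query
    psi = np.kron(H, np.eye(2)) @ psi      # Hadamard on the first qubit
    prob_first_is_1 = np.abs(psi[2])**2 + np.abs(psi[3])**2
    return int(round(prob_first_is_1))     # 0 -> constant, 1 -> balanced

print(deutsch(lambda x: 0))      # constant -> 0
print(deutsch(lambda x: 1))      # constant -> 0
print(deutsch(lambda x: x))      # balanced -> 1
print(deutsch(lambda x: 1 - x))  # balanced -> 1
```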

3.2. Deutsch–Jozsa Algorithm

The Deutsch–Jozsa algorithm is a generalization of the Deutsch algorithm and it is designed to determine whether a given Boolean function f : { 0 , 1 } n { 0 , 1 } is constant or balanced. The Deutsch–Jozsa algorithm can be implemented following a common five-step procedure. The quantum circuit for this algorithm is depicted in Figure 8.
First, initial input states are set up.
$$|\psi_0\rangle = |0\rangle^{\otimes n}|1\rangle$$
Next, a Hadamard gate is applied to each qubit, putting the register into a superposition state.
$$|\psi_1\rangle = H^{\otimes n}|0\rangle^{\otimes n} \otimes H|1\rangle$$
$$H^{\otimes n}|0\rangle^{\otimes n} = \left[\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)\right]^{\otimes n} = \frac{1}{\sqrt{2^n}}\sum_{k=0}^{2^n-1}|k\rangle$$
With the help of Equation (83), $|\psi_1\rangle$ can be expressed as
$$|\psi_1\rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{x=0}^{2^n-1}|x\rangle\left(|0\rangle - |1\rangle\right) = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|-\rangle$$
In the third step, the oracle operation is applied. Thus, using Equation (76), $|\psi_2\rangle$ is expressed as follows:
$$|\psi_2\rangle = U_f|\psi_1\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}U_f|x\rangle|-\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle|-\rangle$$
In the fourth step, Hadamard gates are applied to the first register. Therefore,
$$|\psi_3\rangle = H^{\otimes n}|\psi_2\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}H^{\otimes n}|x\rangle|-\rangle$$
For the measurement, the state $|-\rangle$ is not of interest. The action of $H^{\otimes n}$ on a basis state $|x\rangle$ is
$$H^{\otimes n}|x\rangle = H|x_1\rangle \otimes H|x_2\rangle \otimes \cdots \otimes H|x_n\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + (-1)^{x_1}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + (-1)^{x_2}|1\rangle\right)\otimes\cdots\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + (-1)^{x_n}|1\rangle\right) = \frac{1}{\sqrt{2^n}}\sum_{y=0}^{2^n-1}(-1)^{x\cdot y}|y\rangle$$
Therefore, $|\psi_3\rangle$ is expressed as follows:
$$|\psi_3\rangle = \frac{1}{2^n}\sum_{x=0}^{2^n-1}\sum_{y=0}^{2^n-1}(-1)^{f(x)}(-1)^{x\cdot y}|y\rangle$$
Finally, a measurement of the qubits is performed to obtain the answer.
The amplitude of the $|00\cdots 0\rangle$ state is
$$\frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{f(x)}$$
If f is constant and $f(x) = 0$ for all x, then
$$\frac{1}{2^n}\sum_{x=0}^{2^n-1}1 = \frac{1}{2^n}\cdot 2^n = 1$$
If f is constant and $f(x) = 1$ for all x, then
$$\frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1) = \frac{1}{2^n}\cdot(-2^n) = -1$$
Thus, if f is constant, the amplitude of the $|00\cdots 0\rangle$ state is $\pm 1$, and the probability of measuring it is 1.
If f is balanced, half of the terms equal $+1$ and half equal $-1$, so
$$\frac{1}{2^n}\left[(-1)^0 + (-1)^1 + \cdots\right] = 0$$
Thus, if the state $|00\cdots 0\rangle$ is measured, the function f is confirmed to be constant. Conversely, if any other state is measured, it indicates that f is balanced.
Some positive aspects of the Deutsch–Jozsa algorithm include its ability to provide an exponential speedup over classical algorithms for the specific problem it addresses. While a classical algorithm might require up to $2^{n-1}+1$ queries to the function to determine whether it is constant or balanced, the Deutsch–Jozsa algorithm requires only one quantum query. However, the Deutsch–Jozsa algorithm solves a very specific problem, which limits its practical application. Nevertheless, it served as a foundational milestone for the advancement of more important quantum algorithms.
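The same construction generalizes to n input qubits. The following sketch (our own, using dense matrices and therefore practical only for small n) runs the Deutsch–Jozsa circuit for n = 3 and reports whether the all-zero string is measured with probability 1 (constant f) or 0 (balanced f).

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron_all(mats):
    return reduce(np.kron, mats)

def deutsch_jozsa(f, n):
    dim = 2 ** n
    # Start from |0...0>|1>, apply Hadamards to every qubit
    psi = kron_all([H @ np.array([1, 0], dtype=complex)] * n +
                   [H @ np.array([0, 1], dtype=complex)])
    # Oracle acts as a phase (-1)^f(x) on the input register (ancilla is in |->)
    phases = np.array([(-1) ** f(x) for x in range(dim)])
    psi = phases.repeat(2) * psi
    # Hadamards on the input register only
    psi = kron_all([H] * n + [np.eye(2, dtype=complex)]) @ psi
    # Probability that the input register is measured as |0...0>
    p_zero = np.abs(psi[0]) ** 2 + np.abs(psi[1]) ** 2
    return "constant" if p_zero > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, n=3))                    # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, 3))  # balanced (parity)
```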

3.3. Bernstein–Vazirani Algorithm

The Bernstein–Vazirani algorithm is designed to determine an n-bit “hidden string” s by querying a function f ( x ) , which is defined as
$$f(x) = s \cdot x \pmod 2$$
where x is any n-bit binary string, representing the input to the algorithm, and $s \cdot x$ denotes the bitwise dot product of s and x, calculated as
$$s \cdot x = \sum_{i=1}^{n} s_i x_i \pmod 2$$
The Bernstein–Vazirani algorithm can be implemented following a common five-step procedure. The quantum circuit for this algorithm is depicted in Figure 9. The topology of this circuit is the same as that of the Deutsch–Jozsa algorithm.
First, initial input states are set up.
$$|\psi_0\rangle = |0\rangle^{\otimes n}|1\rangle$$
Next, a Hadamard gate is applied to each qubit, putting the system into a superposition state as described by Equations (24) and (83).
$$|\psi_1\rangle = H^{\otimes n}|0\rangle^{\otimes n} \otimes H|1\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|-\rangle$$
In the third step, the oracle operation is applied. Thus, using Equation (76), $|\psi_2\rangle$ is expressed as follows:
$$|\psi_2\rangle = U_f|\psi_1\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}U_f|x\rangle|-\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{f(x)}|x\rangle|-\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{s\cdot x}|x\rangle|-\rangle$$
For the measurement, the state $|-\rangle$ is not of interest.
In the fourth step, Hadamard gates are applied. Therefore,
$$|\psi_3\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{s\cdot x}H^{\otimes n}|x\rangle$$
With the aid of Equation (87), $|\psi_3\rangle$ can be expressed as
$$|\psi_3\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}(-1)^{s\cdot x}\frac{1}{\sqrt{2^n}}\sum_{y=0}^{2^n-1}(-1)^{x\cdot y}|y\rangle = \frac{1}{2^n}\sum_{x=0}^{2^n-1}\sum_{y=0}^{2^n-1}(-1)^{(s\oplus y)\cdot x}|y\rangle$$
The exponent can be written in terms of $(s \oplus y)$ because $(-1)^{s\cdot x}(-1)^{y\cdot x} = (-1)^{(s\oplus y)\cdot x}$, and
  • If $y \neq s$, the sum
    $$\sum_{x=0}^{2^n-1}(-1)^{x\cdot(s\oplus y)}$$
    evaluates to zero. This is because $x\cdot(s\oplus y)$ takes the values 0 and 1 equally often as x varies over the $2^n$ possible n-bit strings, resulting in equal numbers of $+1$ and $-1$ terms, which sum to 0.
  • If $y = s$,
    $$\sum_{x=0}^{2^n-1}(-1)^{(s\oplus s)\cdot x} = \sum_{x=0}^{2^n-1}(-1)^{0} = \sum_{x=0}^{2^n-1}1 = 2^n$$
Thus, $|\psi_3\rangle$ can be expressed as
$$|\psi_3\rangle = \frac{1}{2^n}\sum_{y=0}^{2^n-1}\left[\sum_{x=0}^{2^n-1}(-1)^{x\cdot(s\oplus y)}\right]|y\rangle$$
Finally, the register $|y\rangle$ is measured to obtain the answer.
The amplitude of $|s\rangle$ is
$$\frac{1}{2^n}\sum_{x=0}^{2^n-1}(-1)^{(s\oplus s)\cdot x} = \frac{1}{2^n}\sum_{x=0}^{2^n-1}1 = \frac{1}{2^n}\cdot 2^n = 1$$
The probability of measuring the $|s\rangle$ state is therefore 1.
The output of the Bernstein–Vazirani algorithm is the hidden bit string s, which is retrieved with just one query to the oracle, demonstrating the efficiency of QC over classical methods for this specific problem.
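A small simulation confirms this behavior. The sketch below (our own, reusing the dense-matrix style of the previous examples and leaving the ancilla qubit implicit) runs the Bernstein–Vazirani circuit for a chosen hidden string s and recovers it from a single query.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def bernstein_vazirani(s_bits):
    n = len(s_bits)
    dim = 2 ** n
    s = int("".join(map(str, s_bits)), 2)
    # Input register after the first layer of Hadamards: uniform superposition
    psi = np.ones(dim, dtype=complex) / np.sqrt(dim)
    # Phase oracle: |x> -> (-1)^{s.x} |x>  (the ancilla in |-> is left implicit)
    dots = np.array([bin(s & x).count("1") % 2 for x in range(dim)])
    psi = ((-1.0) ** dots) * psi
    # Second layer of Hadamards on the input register
    Hn = reduce(np.kron, [H] * n)
    psi = Hn @ psi
    measured = int(np.argmax(np.abs(psi) ** 2))
    return [int(b) for b in format(measured, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))  # recovers s = 1011
```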

3.4. Simon’s Algorithm

Simon’s problem can be defined as follows:
Consider a function $f: \{0,1\}^n \to \{0,1\}^m$ that maps bit strings to bit strings. The unknown function f either maps each distinct input to a distinct output (one-to-one) or maps exactly two distinct inputs to the same output (two-to-one). In mathematical terms, there is a hidden string $s \in \{0,1\}^n$ such that, for all $x, y \in \{0,1\}^n$,
$$f(x) = f(y) \quad\text{if and only if}\quad x \oplus y = s \ \text{ or } \ x = y$$
The goal is to determine whether f is one-to-one or two-to-one by finding the hidden string s with as few evaluations of f as possible.
To solve this problem, a classical approach requires on the order of $2^{n/2}$ queries, while Simon's algorithm solves it with only $O(n)$ quantum queries.
Simon’s Algorithm Example:
Consider the function $f: \{0,1\}^3 \to \{0,1\}^3$ defined by the truth table in Table 6.
Each output f(x) appears twice, for two distinct inputs whose XOR equals the hidden string. For example,
$$f(001) = f(010) \quad\Longrightarrow\quad s = 001 \oplus 010 = 011.$$
Simon’s algorithm can be implemented following a common five-step procedure. The quantum circuit for this algorithm is depicted in Figure 10.
First, initial input states are set up.
$$|\psi_0\rangle = |0\rangle^{\otimes n}|0\rangle^{\otimes n}$$
Next, a Hadamard gate is applied to each qubit of the first register, putting it into a superposition state.
$$|\psi_1\rangle = H^{\otimes n}|0\rangle^{\otimes n} \otimes |0\rangle^{\otimes n}$$
Using Equation (83),
$$|\psi_1\rangle = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|0\rangle^{\otimes n}$$
In the third step, the oracle operation is applied. Thus, $|\psi_2\rangle$ is expressed as follows:
$$|\psi_2\rangle = U_f|\psi_1\rangle = U_f\frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|0\rangle^{\otimes n} = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|f(x)\rangle$$
If the second register is now measured, the observed value f(x) corresponds to an input of either x or y, where x and y are the two different inputs to f that give the same output. Hence, the first n qubits collapse to the state
$$\frac{1}{\sqrt{2}}\left(|x\rangle + |y\rangle\right)$$
In the fourth step, Hadamard gates are applied. Therefore,
$$|\psi_3\rangle = H^{\otimes n}|\psi_2\rangle = H^{\otimes n}\frac{1}{\sqrt{2}}\left(|x\rangle + |y\rangle\right)$$
Using Equation (87), $|\psi_3\rangle$ is expressed as follows:
$$|\psi_3\rangle = \frac{1}{\sqrt{2^{n+1}}}\sum_{z=0}^{2^n-1}\left[(-1)^{x\cdot z} + (-1)^{y\cdot z}\right]|z\rangle$$
Finally, a measurement of the qubits is performed to obtain the answer. The measurement returns a random bit string z such that
$$x\cdot z = y\cdot z \pmod 2$$
Therefore,
$$x\cdot z = (x\oplus s)\cdot z \pmod 2$$
$$x\cdot z = x\cdot z \oplus s\cdot z \pmod 2$$
$$0 = s\cdot z \pmod 2$$
Thus, the bit string z is orthogonal to the secret string s.
From the measurement results $\{z_1, \ldots, z_k\}$, a system of equations is formed such that
$$z_1\cdot s = 0 \pmod 2, \quad z_2\cdot s = 0 \pmod 2, \quad \ldots, \quad z_k\cdot s = 0 \pmod 2$$
There are k equations and n unknowns. The secret string s can be found by solving this system of equations.
Simon's algorithm demonstrates the power of quantum algorithms: while the best classical algorithms for this problem require exponential time, Simon's algorithm solves it in polynomial time.
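The classical post-processing step, solving the system $z_i \cdot s = 0 \pmod 2$, can be sketched as follows. For small n, a brute-force search over all candidate strings (our own simplification of the usual Gaussian elimination over GF(2)) already recovers the hidden string; the measured strings in the example are hypothetical.

```python
from itertools import product

def dot_mod2(a, b):
    """Bitwise dot product of two equal-length bit tuples, modulo 2."""
    return sum(x * y for x, y in zip(a, b)) % 2

def recover_secret(z_list, n):
    """Return all nonzero s consistent with z . s = 0 (mod 2) for every measured z."""
    candidates = []
    for s in product((0, 1), repeat=n):
        if any(s) and all(dot_mod2(z, s) == 0 for z in z_list):
            candidates.append(s)
    return candidates

# Example: measurement outcomes orthogonal to the hidden string s = 011
measured = [(1, 0, 0), (1, 1, 1)]
print(recover_secret(measured, n=3))   # [(0, 1, 1)]
```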

3.5. Quantum Fourier Transform

The quantum Fourier transform, or QFT for short, maps the computational basis to the Fourier basis; for a single qubit, it maps $\{|0\rangle, |1\rangle\}$ to $\{|+\rangle, |-\rangle\}$. Although the QFT does not speed up the classical task of computing Fourier transforms of classical data [39], it is an important component of the quantum phase estimation algorithm, which will be explained in detail later.
Consider the computational basis $\{|0\rangle, |1\rangle, \ldots, |N-1\rangle\}$ (in decimal notation) with $N = 2^n$, where n is the number of qubits. The action of the QFT on a basis state $|j\rangle$ is given by
$$|j\rangle \xrightarrow{\text{QFT}} \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}e^{2\pi i jk/N}|k\rangle$$
Any quantum state $|\psi\rangle$ can be written as a linear combination of the basis states. Therefore, the QFT acts on a general quantum state $|\psi\rangle$ as follows:
$$|\psi\rangle = \sum_{j=0}^{N-1}x_j|j\rangle \xrightarrow{\text{QFT}} \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\sum_{j=0}^{N-1}x_j e^{2\pi i jk/N}|k\rangle$$
That is, the new amplitudes are
$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j e^{2\pi i jk/N} = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\,\omega^{jk}$$
where $\omega = e^{2\pi i/N}$.
The matrix representation of QFT is as follows:
$$\text{QFT} = \frac{1}{\sqrt{N}}\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^2 & \cdots & \omega^{N-1} \\ 1 & \omega^2 & \omega^4 & \cdots & \omega^{2(N-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{N-1} & \omega^{2(N-1)} & \cdots & \omega^{(N-1)(N-1)} \end{pmatrix}$$
Another representation of the QFT is described by the following equation:
$$|j\rangle \xrightarrow{\text{QFT}} \frac{1}{2^{n/2}}\sum_{k=0}^{2^n-1}e^{2\pi i jk/2^n}|k\rangle = \frac{1}{2^{n/2}}\sum_{k_1=0}^{1}\cdots\sum_{k_n=0}^{1}e^{2\pi i j\left(\sum_{l=1}^{n}k_l 2^{-l}\right)}|k_1\cdots k_n\rangle = \frac{1}{2^{n/2}}\sum_{k_1=0}^{1}\cdots\sum_{k_n=0}^{1}\bigotimes_{l=1}^{n}e^{2\pi i jk_l 2^{-l}}|k_l\rangle$$
$$= \frac{1}{2^{n/2}}\bigotimes_{l=1}^{n}\left[\sum_{k_l=0}^{1}e^{2\pi i jk_l 2^{-l}}|k_l\rangle\right] = \frac{1}{2^{n/2}}\bigotimes_{l=1}^{n}\left(|0\rangle + e^{2\pi i j 2^{-l}}|1\rangle\right) = \frac{1}{2^{n/2}}\left(|0\rangle + e^{2\pi i\,0.j_n}|1\rangle\right)\left(|0\rangle + e^{2\pi i\,0.j_{n-1}j_n}|1\rangle\right)\cdots\left(|0\rangle + e^{2\pi i\,0.j_1 j_2\cdots j_n}|1\rangle\right)$$
The above equation can be summarized as
$$\frac{1}{2^{n/2}}\bigotimes_{l=1}^{n}\left(|0\rangle + e^{2\pi i\,0.j_{n-l+1}\cdots j_n}|1\rangle\right)$$
This representation adopts the notation $0.j_l j_{l+1}\cdots j_m$ for the binary fraction $\frac{j_l}{2} + \frac{j_{l+1}}{4} + \cdots + \frac{j_m}{2^{m-l+1}}$. The quantum circuit for this algorithm is depicted in Figure 11. Each input qubit $|j_l\rangle$ is mapped to a state of the form $\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\,0.j_l j_{l+1}\cdots j_n}|1\rangle\right)$.
The QFT consists of two types of quantum gates: the Hadamard gate and the controlled rotation gate $R_k$. If a Hadamard gate is applied to $|j_n\rangle$, the result will be as follows, in accordance with Equations (23) and (24):
$$H|j_n\rangle = \begin{cases} \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), & \text{if } j_n = 0 \\ \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right), & \text{if } j_n = 1 \end{cases}$$
This can be generalized by the following expression:
$$\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i j_n/2}|1\rangle\right)$$
because the term $e^{2\pi i j_n/2}$ is equal to $+1$ if $j_n = 0$ and $-1$ if $j_n = 1$.
The controlled $U_{ROT_k}$ gate is described mathematically using the following equation:
$$U_{ROT_k} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{2\pi i/2^k} \end{pmatrix}$$
For example, applying the $U_{ROT_k}$ gate to the state $|00\rangle$, where the first qubit is the control and the second the target,
$$U_{ROT_k}|00\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{2\pi i/2^k} \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix} = |00\rangle$$
Similarly, applying the $U_{ROT_k}$ gate to the state $|01\rangle$,
$$U_{ROT_k}|01\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{2\pi i/2^k} \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} = |01\rangle$$
Similarly, applying the $U_{ROT_k}$ gate to the state $|10\rangle$,
$$U_{ROT_k}|10\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{2\pi i/2^k} \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} = |10\rangle$$
Finally, applying the $U_{ROT_k}$ gate to the state $|11\rangle$,
$$U_{ROT_k}|11\rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{2\pi i/2^k} \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = e^{2\pi i/2^k}\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = e^{2\pi i/2^k}|11\rangle$$
This gate applies a phase of $e^{2\pi i/2^k}$ to the $|1\rangle$ component of the target qubit and acts only when the control qubit is in the state $|1\rangle$.
The action of $CROT_k$ on a two-qubit state $|x_i x_j\rangle$, where the first qubit is the control and the second is the target, is given by
$$CROT_k|0\,x_j\rangle = |0\,x_j\rangle, \qquad CROT_k|1\,x_j\rangle = \exp\!\left(\frac{2\pi i}{2^k}x_j\right)|1\,x_j\rangle$$
In the quantum circuit design for the QFT, the following gates are required:
  • For the first qubit line, 1 Hadamard gate and $(n-1)$ controlled-rotation gates are required.
  • For the second qubit line, 1 Hadamard gate and $(n-2)$ controlled-rotation gates are required.
  • For the $(n-1)$th qubit line, 1 Hadamard gate and 1 controlled-rotation gate are required, while the nth line needs only 1 Hadamard gate.
  • Additionally, $\lfloor n/2 \rfloor$ swap gates are needed to reverse the order of the qubits.
To summarize, the total number of quantum gates required can be calculated as follows:
$$n + \frac{n(n-1)}{2} + \left\lfloor \frac{n}{2} \right\rfloor$$
where n represents the number of qubits in the system.
In terms of complexity, the QFT requires $O(n^2)$ operations, while the classical fast Fourier transform on $2^n$ data points requires $O(n2^n)$ operations. Therefore, the QFT demonstrates superior performance in terms of complexity.
Let us present a simple example of a 3-qubit system and calculate the QFT of the number 5. Figure 12 represents this circuit.
The binary representation of the number 5 is 101. Since qubits in Qiskit are initialized in the $|0\rangle$ state, the first and third qubits should have an X gate applied to initialize them in the $|1\rangle$ state. Next, Hadamard gates are applied to every qubit. Sequentially, the controlled-rotation (UROT) gates are applied. Finally, swap gates are used to reverse the order of the qubits.
Generally, the number $|j\rangle$ can be represented in the binary system as $|j_1 j_2 j_3\rangle$.
First, the Hadamard gate is applied to $|j_1\rangle$. The output is as follows:
$$H|j_1\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i j_1/2}|1\rangle\right)$$
Next, the $R_2$ gate is applied using Equation (128):
$$R_2 H|j_1\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(j_1/2 + j_2/2^2\right)}|1\rangle\right)$$
Next, the $R_3$ gate is applied using Equation (128):
$$R_3 R_2 H|j_1\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(j_1/2 + j_2/2^2 + j_3/2^3\right)}|1\rangle\right)$$
In the next step, a similar process is applied to $|j_2\rangle$.
$$H|j_2\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i j_2/2}|1\rangle\right)$$
Next, the $R_2$ gate is applied using Equation (128):
$$R_2 H|j_2\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(j_2/2 + j_3/2^2\right)}|1\rangle\right)$$
Finally, a similar process is applied to $|j_3\rangle$.
$$H|j_3\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i j_3/2}|1\rangle\right)$$
The output is as follows:
$$\text{QFT}|j\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(\frac{j_1}{2} + \frac{j_2}{2^2} + \frac{j_3}{2^3}\right)}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(\frac{j_2}{2} + \frac{j_3}{2^2}\right)}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\frac{j_3}{2}}|1\rangle\right)$$
Finally, the swap gates are applied:
$$\text{QFT}|j\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\frac{j_3}{2}}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(\frac{j_2}{2} + \frac{j_3}{2^2}\right)}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{2\pi i\left(\frac{j_1}{2} + \frac{j_2}{2^2} + \frac{j_3}{2^3}\right)}|1\rangle\right)$$
Therefore, the number 5 can be represented by the QFT as follows:
$$\text{QFT}|101\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{\pi i}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{\frac{\pi i}{2}}|1\rangle\right)\otimes\frac{1}{\sqrt{2}}\left(|0\rangle + e^{\frac{5\pi i}{4}}|1\rangle\right)$$
In addition to the QFT, there is also the inverse transform, the IQFT. Mathematically, this is described as follows. If $|\psi\rangle$ is the result of applying the QFT to $|j\rangle$,
$$|\psi\rangle = \text{QFT}|j\rangle = \frac{1}{\sqrt{2^n}}\left(|0\rangle + e^{2\pi i\,0.j_n}|1\rangle\right)\otimes\left(|0\rangle + e^{2\pi i\,0.j_{n-1}j_n}|1\rangle\right)\otimes\cdots\otimes\left(|0\rangle + e^{2\pi i\,0.j_1 j_2\cdots j_n}|1\rangle\right)$$
then the result of the inverse QFT is as follows:
$$\text{IQFT}|\psi\rangle = |j\rangle$$
Although the QFT requires fewer operations in comparison with the classical Fourier transform, practical implementations of QFT are limited. The QFT’s exponential speedup in certain quantum algorithms, such as quantum phase estimation and Shor’s algorithm, demonstrates its significance in quantum computing.
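The QFT matrix and the 3-qubit example can be checked numerically. The sketch below (our own construction) builds the $8\times 8$ QFT matrix for $N = 2^3$, verifies its unitarity, and applies it to $|5\rangle = |101\rangle$; each resulting amplitude has magnitude $1/\sqrt{8}$ and phase $2\pi\cdot 5k/8$, in agreement with the product form derived above.

```python
import numpy as np

def qft_matrix(N):
    """QFT_{jk} = exp(2*pi*i*j*k/N) / sqrt(N)."""
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

N = 8                       # 3 qubits
F = qft_matrix(N)
assert np.allclose(F.conj().T @ F, np.eye(N))   # unitarity check

ket5 = np.zeros(N, dtype=complex)
ket5[5] = 1                                      # |101> = |5>
amps = F @ ket5

print(np.round(np.abs(amps), 3))                 # all equal to 1/sqrt(8)
print(np.round(np.angle(amps) / np.pi, 3))       # phases 2*pi*5*k/8 (mod 2*pi), in units of pi
```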

3.6. Quantum Phase Estimation

Quantum phase estimation is a key subroutine for important quantum algorithms, including Shor's algorithm. Suppose a unitary operator U has an eigenvector $|u\rangle$ with eigenvalue $e^{2\pi i\varphi}$, where the value of $\varphi$ is unknown [39]. The goal of the phase estimation algorithm is to estimate $\varphi$. This can be expressed mathematically using the following equation:
$$U|u\rangle = e^{2\pi i\varphi}|u\rangle$$
Before explaining QPE in detail, it is essential to understand the concept of phase kickback. Phase kickback occurs when a controlled phase gate is applied to a control qubit in superposition. Figure 13 illustrates the quantum circuit that demonstrates this property.
In this setup, the target qubit of the controlled operation, in our case $|\psi\rangle$, must be an eigenvector of the unitary operator U. This condition is expressed mathematically as follows:
$$U|\psi\rangle = e^{2\pi i\varphi}|\psi\rangle$$
With this requirement in place, the controlled operation modifies the phase of the control qubit, effectively encoding the phase information $\varphi$ from the target eigenstate onto the control qubit.
The matrix representation of the controlled-U operation for a two-qubit system is expressed by the following equation:
$$CU = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\phi} \end{pmatrix}$$
The above matrix applies a phase $e^{i\phi}$ to the target qubit only when the control qubit is $|1\rangle$.
First, the initial input states are set up:
$$|0\rangle|\psi\rangle$$
Next, a Hadamard gate is applied to the control qubit, putting it into a superposition state using Equation (23):
$$H|0\rangle \otimes |\psi\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|\psi\rangle$$
Finally, a controlled-U gate is applied:
$$CU\,\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|\psi\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle|\psi\rangle + |1\rangle e^{i\phi}|\psi\rangle\right) = \frac{1}{\sqrt{2}}\left(|0\rangle + e^{i\phi}|1\rangle\right)|\psi\rangle$$
Therefore, the phase of the target qubit is transferred, or "kicked back," to the control qubit.
Phase estimation is performed in two stages. The first stage includes initialization, Hadamard gates and controlled-U gates, while the second stage includes the inverse QFT and measurement. Figure 14 depicts the first stage of the algorithm and Figure 15 shows the full quantum circuit for phase estimation.
Suppose a unitary operator U has an eigenvector $|\psi\rangle$ with an eigenvalue of the form $e^{2\pi i\phi}$:
$$U|\psi\rangle = e^{2\pi i\phi}|\psi\rangle$$
The quantum phase estimation procedure uses two registers. The first register holds n qubits, all initialized to the state $|0\rangle$; the choice of n is determined by the desired level of precision for estimating $\phi$. The second register contains enough qubits to store the eigenstate $|\psi\rangle$.
The initial state is expressed by the following:
$$|0\rangle^{\otimes n}|\psi\rangle$$
Next, Hadamard gates are applied to the first register with the aid of Equation (83):
$$H^{\otimes n}|0\rangle^{\otimes n}|\psi\rangle = \left[\frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)\right]^{\otimes n}|\psi\rangle = \frac{1}{\sqrt{2^n}}\sum_{k=0}^{2^n-1}|k\rangle|\psi\rangle$$
Next, controlled-U operations are applied to the second register, with U raised to successive powers of two:
$$\frac{1}{\sqrt{2^n}}\sum_{k=0}^{2^n-1}|k\rangle U^k|\psi\rangle = \frac{1}{\sqrt{2^n}}\sum_{k=0}^{2^n-1}e^{2\pi i k\phi}|k\rangle|\psi\rangle = \frac{1}{\sqrt{2^n}}\left(|0\rangle + e^{2\pi i\phi 2^{n-1}}|1\rangle\right)\left(|0\rangle + e^{2\pi i\phi 2^{n-2}}|1\rangle\right)\cdots\left(|0\rangle + e^{2\pi i\phi 2^{0}}|1\rangle\right)|\psi\rangle$$
The above expression has the same form as Equation (119), with $\phi$ playing the role of $j/2^n$.
The second stage contains the IQFT and the measurement. With the aid of Equation (140), we obtain
$$\text{IQFT}\left[\frac{1}{\sqrt{2^n}}\left(|0\rangle + e^{2\pi i\phi 2^{n-1}}|1\rangle\right)\left(|0\rangle + e^{2\pi i\phi 2^{n-2}}|1\rangle\right)\cdots\left(|0\rangle + e^{2\pi i\phi 2^{0}}|1\rangle\right)\right] = |m\rangle$$
where $|m\rangle$ represents the measurement result with $m = 2^n\phi$ (exactly when $2^n\phi$ is an integer, and approximately otherwise).
QPE is a key quantum algorithm that can be used as a subroutine in various applications, including machine learning algorithms. The central component of the algorithm, as described earlier, is the application of the IQFT.
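The outcome statistics of phase estimation can be reproduced without simulating the full circuit. From the register state derived above, the amplitude of outcome m after the IQFT is $\frac{1}{2^n}\sum_k e^{2\pi i k(\phi - m/2^n)}$, which the following sketch (our own, for an arbitrary example phase) evaluates directly; the distribution peaks at $m \approx 2^n\phi$.

```python
import numpy as np

def qpe_distribution(phi, n):
    """Probability of measuring m on the n-qubit register of QPE for eigenphase phi."""
    dim = 2 ** n
    k = np.arange(dim)
    probs = np.empty(dim)
    for m in range(dim):
        amp = np.sum(np.exp(2j * np.pi * k * (phi - m / dim))) / dim
        probs[m] = np.abs(amp) ** 2
    return probs

n = 4
phi = 0.3125            # = 5/16, exactly representable with 4 bits
print(np.argmax(qpe_distribution(phi, n)), 2 ** n * phi)   # 5, 5.0

phi = 0.3               # not exactly representable: the peak is near 2^n * phi
probs = qpe_distribution(phi, n)
print(np.argmax(probs), round(float(probs.max()), 3))       # 5, peak probability < 1
```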

3.7. Shor’s Algorithm

As mentioned earlier, QPE is an essential component of Shor’s algorithm. Before analyzing Shor’s algorithm and the factoring problem, it is necessary to understand the order-finding problem, which in turn requires some basic number theory.
Suppose a function is defined by the following equation:
$f(x) = a^{x} \bmod N$
The goal is to estimate the period r, which is the smallest positive integer such that $a^{r} \equiv 1 \pmod{N}$.
Consider the example with a = 2 and N = 9 :
$2^{1} \equiv 2 \pmod 9,\quad 2^{2} \equiv 4 \pmod 9,\quad 2^{3} \equiv 8 \pmod 9,\quad 2^{4} \equiv 7 \pmod 9,\quad 2^{5} \equiv 5 \pmod 9,\quad 2^{6} \equiv 1 \pmod 9,\quad 2^{7} \equiv 2 \pmod 9,\quad 2^{8} \equiv 4 \pmod 9,\quad 2^{9} \equiv 8 \pmod 9,\quad 2^{10} \equiv 7 \pmod 9,\quad 2^{11} \equiv 5 \pmod 9,\quad 2^{12} \equiv 1 \pmod 9$
We observe the following repeated pattern: 2 , 4 , 8 , 7 , 5 , 1 . Therefore, the period r is 6, as the sequence repeats every 6 steps.
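The period can be reproduced classically with a brute-force search, as in the short sketch below; the helper name is an illustrative assumption, and the loop is exactly the exponential-time procedure that the quantum order-finding subroutine is meant to avoid.

```python
# Brute-force order finding (assumed helper), reproducing the a = 2, N = 9 example.
def order(a: int, N: int) -> int:
    """Smallest r > 0 with a**r = 1 (mod N); assumes gcd(a, N) = 1."""
    value, r = a % N, 1
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

print(order(2, 9))   # prints 6, matching the repeated pattern 2, 4, 8, 7, 5, 1
```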
Suppose that the unitary operator U acts by modular multiplication,
$U|y\rangle = |xy \bmod N\rangle,$
and define the states
$|u_s\rangle = \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} e^{-2\pi i s k/r}\,|x^{k} \bmod N\rangle, \qquad s = 0, 1, \ldots, r-1.$
Applying U to $|u_s\rangle$,
$U|u_s\rangle = U\!\left(\frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} e^{-2\pi i s k/r}|x^{k} \bmod N\rangle\right) = \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} e^{-2\pi i s k/r}|x^{k+1} \bmod N\rangle = \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} e^{-2\pi i s (k-1)/r}|x^{k} \bmod N\rangle = e^{2\pi i s/r}|u_s\rangle$
From the above equation, $|u_s\rangle$ is an eigenvector of U with eigenvalue $e^{2\pi i s/r}$.
The reduction of order-finding to phase estimation is completed by explaining how to extract the desired answer r from the output of the phase estimation algorithm, which yields an estimate of $s/r$. Since $s/r$ is a rational number, computing the fraction closest to this estimate may reveal r. The continued fractions algorithm efficiently accomplishes this task.
The continued fraction algorithm is a mathematical method for finding the best rational approximation to a real number. This can be expressed mathematically by the following equation:
$[a_0, a_1, \ldots, a_M] \equiv a_0 + \cfrac{1}{a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_M}}}}$
The largest convergent denominator that is smaller than N is the candidate for r. The denominators $r_n$ of the convergents can be calculated recursively as follows:
$r_n = a_n r_{n-1} + r_{n-2}$
Example:
$\frac{427}{512} = \cfrac{1}{\frac{512}{427}} = \cfrac{1}{1+\frac{85}{427}} = \cfrac{1}{1+\cfrac{1}{\frac{427}{85}}} = \cfrac{1}{1+\cfrac{1}{5+\frac{2}{85}}} = \cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{\frac{85}{2}}}} = \cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{42+\frac{1}{2}}}}$
Using the continued fraction expansion,
$r_0 = 1$
$r_1 = a_1 = 1$
$r_2 = a_2 r_1 + r_0 = 5 \cdot 1 + 1 = 6$
Therefore, r equals 6.
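A small Python sketch of this post-processing step is given below: it expands a rational phase estimate into its partial quotients and returns the largest convergent denominator below N. The function names and the choice of N are illustrative assumptions.

```python
# Continued fractions post-processing (assumed helpers), illustrated on 427/512.
from fractions import Fraction

def continued_fraction(x: Fraction, max_terms: int = 32) -> list:
    """Partial quotients [a0, a1, ...] of a rational number x."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator
        terms.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return terms

def candidate_order(phase: Fraction, N: int) -> int:
    """Largest convergent denominator r_n < N, using r_n = a_n * r_{n-1} + r_{n-2}."""
    quotients = continued_fraction(phase)
    r_prev, r_curr, best = 0, 1, 1       # r_{-1} = 0, r_0 = 1
    for a_n in quotients[1:]:
        r_prev, r_curr = r_curr, a_n * r_curr + r_prev
        if r_curr >= N:
            break
        best = r_curr
    return best

print(continued_fraction(Fraction(427, 512)))    # [0, 1, 5, 42, 2]
print(candidate_order(Fraction(427, 512), 100))  # 6, as in the worked example (assumed N = 100)
```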
The order-finding algorithm can be implemented following a common 6-step procedure. The quantum circuit for this algorithm is depicted in Figure 16.
First, initial input states are set up.
$|\psi_0\rangle = |0\rangle^{\otimes t}\,|1\rangle_{L}$
Next, a Hadamard gate is applied to each qubit of the first register, putting it into a superposition state.
$|\psi_1\rangle = \left(H^{\otimes t}|0\rangle^{\otimes t}\right)|1\rangle_{L} = \frac{1}{\sqrt{2^{t}}}\sum_{j=0}^{2^{t}-1}|j\rangle|1\rangle_{L}$
In the third step, the modular exponentiation operator $U_{x,N}$ is applied. Thus,
$|\psi_2\rangle = U_{x,N}\,\frac{1}{\sqrt{2^{t}}}\sum_{j=0}^{2^{t}-1}|j\rangle|1\rangle_{L} = \frac{1}{\sqrt{2^{t}}}\sum_{j=0}^{2^{t}-1}|j\rangle|x^{j} \bmod N\rangle = \frac{1}{\sqrt{r\,2^{t}}}\sum_{s=0}^{r-1}\sum_{j=0}^{2^{t}-1} e^{2\pi i s j/r}|j\rangle|u_s\rangle$
In the fourth step, the IQFT is applied to the first register. Therefore,
$|\psi_3\rangle = \frac{1}{\sqrt{r}}\sum_{s=0}^{r-1}\left|\widetilde{s/r}\right\rangle|u_s\rangle$
Therefore, by measuring the first register, an estimate of $s/r$ can be extracted.
The last step involves the application of the continued fractions algorithm to obtain r.
The factoring problem can be reduced to the order-finding problem; this reduction is the essence of Shor’s algorithm. The goal of Shor’s algorithm is to find the prime factors of a large number $N = p \cdot q$. The steps to achieve this are as follows:
  • Choose a random number $1 < a < N-1$.
  • Compute $\gcd(a, N)$.
  • If $\gcd(a, N) \neq 1$, go back to step 1 (in this case $\gcd(a, N)$ is already a non-trivial factor of N).
  • Use the order-finding subroutine to find r such that $a^{r} \equiv 1 \pmod{N}$.
  • If r is odd, go back to step 1.
  • If $a^{r/2} \equiv -1 \pmod{N}$, go back to step 1.
  • The factors of N are $p = \gcd(a^{r/2} - 1, N)$ and $q = \gcd(a^{r/2} + 1, N)$.
An example of factoring the number 15 by implementing Shor’s algorithm is presented below.
  • Choose a = 7 .
  • Compute gcd ( 15 , 7 ) = 1 .
  • Calculate r:
    $7^{1} \bmod 15 = 7$
    $7^{2} \bmod 15 = 4$
    $7^{3} \bmod 15 = 13$
    $7^{4} \bmod 15 = 1$
    Thus, $r = 4$.
  • Check that $r/2 = 2$ can be used, i.e., that r is even and $7^{r/2} \not\equiv -1 \pmod{15}$:
    $7^{2} \equiv 4 \not\equiv -1 \pmod{15}$
  • Find factors:
    $p = \gcd(7^{2} - 1, 15) = \gcd(48, 15) = 3$
    $q = \gcd(7^{2} + 1, 15) = \gcd(50, 15) = 5$
Therefore, the prime factors of number 15 are 3 and 5.
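The following Python sketch reproduces this example end to end; the order-finding step is done by brute force in place of the quantum subroutine, and the function names are illustrative assumptions.

```python
# Classical sketch of Shor's outer loop; order finding is brute force here.
import math
import random

def find_order(a: int, N: int) -> int:
    value, r = a % N, 1
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_classical(N: int):
    while True:
        a = random.randrange(2, N - 1)        # 1 < a < N - 1
        g = math.gcd(a, N)
        if g != 1:
            return g, N // g                  # lucky guess already shares a factor
        r = find_order(a, N)
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            continue                          # r odd or a^(r/2) = -1 (mod N): retry
        p = math.gcd(pow(a, r // 2) - 1, N)
        q = math.gcd(pow(a, r // 2) + 1, N)
        if 1 < p < N and p * q == N:
            return p, q

print(shor_classical(15))                     # (3, 5) or (5, 3)
```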
Shor’s algorithm does not consist entirely of quantum components. The quantum part is limited to the order-finding subroutine. However, Shor’s algorithm runs in polynomial time in log N , whereas the best classical factoring methods run in sub-exponential time in log N . As a result, Shor’s algorithm achieves a super-polynomial (often referred to as exponential) speedup, offering an efficient solution to a problem that remains challenging for classical computers. This algorithm has significant implications for cryptographic applications.

3.8. Grover’s Algorithm

Grover’s algorithm is a quantum search algorithm designed to find a target item in an unstructured database with N entries. Instead of directly searching the database elements, we focus on the index of those elements, denoted by x. The search is guided by a function f ( x ) , which computes whether a given index x matches the desired criteria.
The goal of Grover’s algorithm is to find an index x such that f ( x ) = 1 , indicating that the corresponding database entry is the target item. Figure 17 briefly describes the search process.
In a classical approach, the search complexity is $O(N)$. However, Grover’s algorithm provides a quadratic speedup, reducing the complexity to $O(\sqrt{N})$. This quantum speedup makes Grover’s algorithm significantly faster than classical search methods for large databases.
The quantum circuit for Grover’s algorithm is depicted in Figure 18.
First, initial input states are set up.
$|\psi_0\rangle = |0\rangle^{\otimes n}$
Next, a Hadamard gate is applied to each qubit, putting the system into a superposition state as described by Equation (83).
$|\psi_1\rangle = H^{\otimes n}|0\rangle^{\otimes n} = \frac{1}{\sqrt{2^{n}}}\sum_{k=0}^{2^{n}-1}|k\rangle$
Next, a quantum subroutine, known as the Grover iteration, is applied repeatedly. The quantum circuit for Grover’s iteration is shown in Figure 19. The optimal number of iterations, denoted by t, will be defined later.
Grover’s iteration can be described by the following mathematical equation:
$G = H^{\otimes n} Z_0 H^{\otimes n} Z_f$
It consists of an oracle to mark the correct state and a diffusion operation to amplify the amplitude of the marked state.
The oracle can be denoted as Z f and can be expressed mathematically by the following equation:
$Z_f : |x\rangle \mapsto (-1)^{f(x)}|x\rangle$
The oracle acts similarly to the oracle in the Deutsch–Jozsa algorithm. If x is not a solution to the search problem, applying the oracle does not change the state. On the other hand, if x is a solution to the search problem (meaning f ( x ) = 1 ), it shifts the phase of the solution.
The diffusion operator can be denoted as $Z_{OR}$ and can be expressed mathematically by the following equation:
$Z_{OR} = H^{\otimes n}\left(2|0^{n}\rangle\langle 0^{n}| - I\right)H^{\otimes n}$
The above equation can also be written as
$H^{\otimes n}\left(2|0^{n}\rangle\langle 0^{n}| - I\right)H^{\otimes n} = 2H^{\otimes n}|0^{n}\rangle\langle 0^{n}|H^{\otimes n} - H^{\otimes n}IH^{\otimes n} = 2H^{\otimes n}|0^{n}\rangle\langle 0^{n}|H^{\otimes n} - I = 2|u\rangle\langle u| - I$
where the last step uses $H^{\otimes n}|0^{n}\rangle = |u\rangle$, the uniform superposition over all $2^{n}$ basis states.
Suppose that the sets of non-solutions and solutions are defined as
$A_0 = \{x \in \Sigma^{n} : f(x) = 0\}$
$A_1 = \{x \in \Sigma^{n} : f(x) = 1\}$
Therefore,
$Z_f|A_0\rangle = |A_0\rangle$
$Z_f|A_1\rangle = -|A_1\rangle$
The state $|u\rangle$ can be written in terms of $|A_0\rangle$ and $|A_1\rangle$, the uniform superpositions over the non-solutions and the solutions, respectively:
$|u\rangle = \sqrt{\frac{|A_0|}{N}}\,|A_0\rangle + \sqrt{\frac{|A_1|}{N}}\,|A_1\rangle$
After applying G to $|A_0\rangle$,
$G|A_0\rangle = (2|u\rangle\langle u| - I)Z_f|A_0\rangle = (2|u\rangle\langle u| - I)|A_0\rangle = 2\sqrt{\tfrac{|A_0|}{N}}\,|u\rangle - |A_0\rangle = 2\sqrt{\tfrac{|A_0|}{N}}\left(\sqrt{\tfrac{|A_0|}{N}}|A_0\rangle + \sqrt{\tfrac{|A_1|}{N}}|A_1\rangle\right) - |A_0\rangle = \frac{|A_0| - |A_1|}{N}|A_0\rangle + \frac{2\sqrt{|A_0|\,|A_1|}}{N}|A_1\rangle$
After applying G to $|A_1\rangle$,
$G|A_1\rangle = (2|u\rangle\langle u| - I)Z_f|A_1\rangle = (I - 2|u\rangle\langle u|)|A_1\rangle = |A_1\rangle - 2\sqrt{\tfrac{|A_1|}{N}}\,|u\rangle = |A_1\rangle - 2\sqrt{\tfrac{|A_1|}{N}}\left(\sqrt{\tfrac{|A_0|}{N}}|A_0\rangle + \sqrt{\tfrac{|A_1|}{N}}|A_1\rangle\right) = -\frac{2\sqrt{|A_0|\,|A_1|}}{N}|A_0\rangle + \frac{|A_0| - |A_1|}{N}|A_1\rangle$
Therefore, the action of G on $\mathrm{span}\{|A_0\rangle, |A_1\rangle\}$ can be described by a $2\times 2$ matrix:
$M = \begin{pmatrix} \frac{|A_0| - |A_1|}{N} & -\frac{2\sqrt{|A_0|\,|A_1|}}{N} \\ \frac{2\sqrt{|A_0|\,|A_1|}}{N} & \frac{|A_0| - |A_1|}{N} \end{pmatrix} = \begin{pmatrix} \sqrt{\frac{|A_0|}{N}} & -\sqrt{\frac{|A_1|}{N}} \\ \sqrt{\frac{|A_1|}{N}} & \sqrt{\frac{|A_0|}{N}} \end{pmatrix}^{2}$
The above matrix is a rotation matrix. Therefore,
$\begin{pmatrix} \sqrt{\frac{|A_0|}{N}} & -\sqrt{\frac{|A_1|}{N}} \\ \sqrt{\frac{|A_1|}{N}} & \sqrt{\frac{|A_0|}{N}} \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \qquad \theta = \sin^{-1}\!\sqrt{\frac{|A_1|}{N}}$
Therefore,
$M = \begin{pmatrix} \cos 2\theta & -\sin 2\theta \\ \sin 2\theta & \cos 2\theta \end{pmatrix}$
The state | u can also be written as follows:
$|u\rangle = \sqrt{\frac{|A_0|}{N}}\,|A_0\rangle + \sqrt{\frac{|A_1|}{N}}\,|A_1\rangle = \cos\theta\,|A_0\rangle + \sin\theta\,|A_1\rangle$
Each time the Grover operation is performed, the state is rotated by an angle 2 θ .
$G|u\rangle = \cos 3\theta\,|A_0\rangle + \sin 3\theta\,|A_1\rangle, \qquad G^{2}|u\rangle = \cos 5\theta\,|A_0\rangle + \sin 5\theta\,|A_1\rangle, \qquad \ldots, \qquad G^{t}|u\rangle = \cos\big((2t+1)\theta\big)|A_0\rangle + \sin\big((2t+1)\theta\big)|A_1\rangle$
The above procedure can also be described geometrically, as shown in Figure 20.
After t iterations, the probability of measuring the desired output A 1 is given by
$P(A_1) = \sin^{2}\big((2t+1)\theta\big)$
This probability should be close to 1. Therefore,
$(2t+1)\theta \approx \frac{\pi}{2}$
Thus,
$t \approx \frac{\pi}{4\theta} - \frac{1}{2} \;\xrightarrow{\text{closest integer}}\; t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor$
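The rotation picture above can be checked numerically. The NumPy sketch below applies the oracle (a sign flip on the marked amplitude) and the diffusion operator (a reflection about the mean) for the predicted number of iterations; the search-space size and the marked index are toy assumptions.

```python
# NumPy sketch of Grover iterations on an assumed toy search space.
import numpy as np

n = 5
N = 2 ** n
marked = 21                                   # arbitrary target index (assumption)

state = np.full(N, 1 / np.sqrt(N))            # |u>: uniform superposition
theta = np.arcsin(np.sqrt(1 / N))             # theta = arcsin(sqrt(|A1| / N)), here |A1| = 1
t = int(np.floor(np.pi / (4 * theta)))        # optimal number of iterations

for _ in range(t):
    state[marked] *= -1                       # oracle Z_f: flip the marked amplitude
    state = 2 * state.mean() - state          # diffusion 2|u><u| - I: reflect about the mean

print("iterations:", t)
print("success probability:", state[marked] ** 2)        # close to 1
print("predicted:", np.sin((2 * t + 1) * theta) ** 2)    # sin^2((2t+1) * theta)
```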
Grover’s algorithm has the potential to be used in a wide variety of applications, especially in ML. This algorithm can be used as a black box, offering a quadratic advantage compared to classical algorithms.

3.9. HHL Algorithm

Consider a system of N linear equations with N unknowns, represented as
A x = b ,
where x is the vector of unknowns, A is the coefficient matrix and b is the constants vector.
If A is an invertible matrix, the solution can be obtained as
x = A 1 b
The HHL algorithm is a quantum algorithm specifically designed to solve this linear system efficiently.
The best classical algorithms require time that scales at least linearly with N for this problem. Under suitable assumptions (a sparse, well-conditioned matrix and quantum access to b), the HHL algorithm reduces the dependence on N to $O(\log N)$, an exponential improvement.
Before explaining HHL in detail, it is essential to clarify some linear algebra. HHL assumes that A is a Hermitian matrix (a non-Hermitian matrix can always be embedded into a larger Hermitian one). A Hermitian matrix can be written as a sum of the outer products of its eigenvectors, scaled by its eigenvalues. Therefore,
$A = \sum_{i=0}^{N-1}\lambda_i\,|v_i\rangle\langle v_i|$
Therefore, the inverse of A can be written as
$A^{-1} = \sum_{i=0}^{N-1}\frac{1}{\lambda_i}\,|v_i\rangle\langle v_i|$
Suppose that the vector b is encoded as the quantum state $|b\rangle$, which is one of the inputs to the quantum circuit. Since A is invertible and Hermitian, it has an orthonormal basis of eigenvectors, so $|b\rangle$ can be written as
$|b\rangle = \sum_{j=0}^{N-1} b_j\,|v_j\rangle$
Therefore, the desired output has the following form:
$|x\rangle = A^{-1}|b\rangle = \left(\sum_{i=0}^{N-1}\frac{1}{\lambda_i}|v_i\rangle\langle v_i|\right)\left(\sum_{j=0}^{N-1}b_j|v_j\rangle\right) = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1}\frac{b_j}{\lambda_i}|v_i\rangle\langle v_i|v_j\rangle = \sum_{i=0}^{N-1}\frac{b_i}{\lambda_i}|v_i\rangle$
The HHL algorithm can be implemented following a common 5-step procedure. The quantum circuit for the HHL algorithm is depicted in Figure 21.
As inputs, the circuit requires an ancilla qubit, a register, and the state $|b\rangle$. An ancilla qubit is commonly used in many quantum algorithms. It serves as an auxiliary qubit to assist in implementing quantum operations and is not part of the circuit’s input or output. The HHL algorithm consists of three stages. In the first stage, a phase estimation module computes the eigenvalues of A, which are subsequently stored in a quantum register. In the second stage, the inverse of the eigenvalues obtained in the first stage is computed using a controlled $R_y$ gate. The result of this computation is then encoded into an ancilla qubit. The final stage involves uncomputing the phase estimation and the unitary operations. The ancilla qubit is then measured. If the measurement outcome is 1, this indicates that the quantum state approximates $|x\rangle$.
First, initial input states are set up.
$|\psi_0\rangle = |0\rangle_{A}\,|0\rangle^{\otimes n}_{R}\,|b\rangle_{I}$
Next, the QPE algorithm is applied using the unitary operator $U = \exp(iAt)$, which can be expressed as
$U = \exp(iAt) = \sum_{j=0}^{N-1}\exp(i\lambda_j t)\,|v_j\rangle\langle v_j|$
where A is a Hermitian matrix with eigenstates $|v_j\rangle$ and corresponding eigenvalues $\lambda_j$.
Since A is Hermitian, the operator $\exp(iAt)$ is unitary. Its eigenvalues are $\exp(i\lambda_j t)$, and its eigenstates are the same as those of A. Applying U to $|b\rangle$ therefore gives
$U|b\rangle = \sum_{j=0}^{N-1}\exp(i\lambda_j t)\,b_j\,|v_j\rangle$
so the phase estimation stage writes an estimate $|\tilde{\lambda}_j\rangle$ of each eigenvalue into the register, entangled with the corresponding eigenvector. Therefore,
$|\psi_1\rangle = |0\rangle_{A}\sum_{j=0}^{N-1} b_j\,|\tilde{\lambda}_j\rangle_{R}\,|v_j\rangle_{I}$
Next, a controlled Y rotation gate is applied to the ancilla qubit. The matrix representation of this gate is as follows:
$R_y(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}$
with $\theta = 2\arcsin\!\left(\frac{C}{\tilde{\lambda}}\right)$.
After applying $R_y(\theta)$ to the ancilla qubit, the result is as follows:
$R_y(\theta)|0\rangle_{A} = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos\frac{\theta}{2} \\ \sin\frac{\theta}{2} \end{pmatrix} = \cos\frac{\theta}{2}\,|0\rangle + \sin\frac{\theta}{2}\,|1\rangle$
Therefore, the circuit is in the following state:
$|\psi_2\rangle = \sum_{j=0}^{N-1} b_j\,|v_j\rangle_{I}\,|\tilde{\lambda}_j\rangle_{R}\left(\sqrt{1 - \frac{C^{2}}{\lambda_j^{2}}}\,|0\rangle + \frac{C}{\lambda_j}\,|1\rangle\right)_{A}$
The fourth step involves the application of an inverse QPE algorithm. Thus, the following state is obtained:
$|\psi_3\rangle = \sum_{j=0}^{N-1} b_j\,|v_j\rangle_{I}\,|0\rangle^{\otimes n}_{R}\left(\sqrt{1 - \frac{C^{2}}{\lambda_j^{2}}}\,|0\rangle + \frac{C}{\lambda_j}\,|1\rangle\right)_{A}$
Finally, a measurement of the ancilla qubit is performed to obtain the answer. If 1 is obtained, the state is
$|\psi_4\rangle = \sum_{j=0}^{N-1}\frac{C\,b_j}{\lambda_j}\,|v_j\rangle$
which is proportional to the desired output.
We consider a practical example of the HHL algorithm using a 2 × 2 Hermitian matrix A defined as
$A = \begin{pmatrix} \frac{3}{4} & \frac{1}{4} \\ \frac{1}{4} & \frac{17}{12} \end{pmatrix}$
and the input vector:
$\mathbf{b} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
The solution vector for this system is
$\mathbf{x} = \begin{pmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{pmatrix}$
The matrix A has the following eigenvalues and corresponding normalized eigenvectors, where each normalized eigenvector is given by $\hat{v} = \frac{v}{\|v\|}$.
$\lambda_1 = \frac{3}{2}, \qquad \lambda_2 = \frac{2}{3}$
$|v_1\rangle = \frac{1}{\sqrt{10}}\begin{pmatrix} 1 \\ 3 \end{pmatrix}, \qquad |v_2\rangle = \frac{1}{\sqrt{10}}\begin{pmatrix} 3 \\ -1 \end{pmatrix}$
Since b can be decomposed in terms of A’s eigenbasis, we write
$|b\rangle = \frac{3}{\sqrt{10}}\,|v_1\rangle - \frac{1}{\sqrt{10}}\,|v_2\rangle$
The HHL algorithm proceeds in five main steps: initialization, quantum phase estimation, controlled rotation, inverse QPE, and measurement.
The initial state of the system (ancilla qubit A, register R, input qubit I) is
$|\psi_0\rangle = |0\rangle_{A}\,|0\rangle^{\otimes 2}_{R}\,|b\rangle_{I}$
After applying QPE, we obtain
$|\psi_1\rangle = |0\rangle_{A}\sum_{j=1}^{2} b_j\,|\tilde{\lambda}_j\rangle_{R}\,|v_j\rangle_{I} = |0\rangle_{A}\left(\frac{3}{\sqrt{10}}\,|v_1\rangle_{I}\,|\tfrac{3}{2}\rangle_{R} - \frac{1}{\sqrt{10}}\,|v_2\rangle_{I}\,|\tfrac{2}{3}\rangle_{R}\right)$
A controlled Y-rotation is applied to the ancilla qubit, conditioned on the register. Assuming the rotation constant C = 0.5 , we get
$|\psi_2\rangle = \frac{3}{\sqrt{10}}\,|v_1\rangle_{I}\,|\tfrac{3}{2}\rangle_{R}\left(\frac{2\sqrt{2}}{3}|0\rangle + \frac{1}{3}|1\rangle\right)_{A} - \frac{1}{\sqrt{10}}\,|v_2\rangle_{I}\,|\tfrac{2}{3}\rangle_{R}\left(\frac{\sqrt{7}}{4}|0\rangle + \frac{3}{4}|1\rangle\right)_{A}$
Applying the inverse QPE yields
$|\psi_3\rangle = \frac{3}{\sqrt{10}}\,|v_1\rangle_{I}\left(\frac{2\sqrt{2}}{3}|0\rangle_{A} + \frac{1}{3}|1\rangle_{A}\right)|0\rangle_{R} - \frac{1}{\sqrt{10}}\,|v_2\rangle_{I}\left(\frac{\sqrt{7}}{4}|0\rangle_{A} + \frac{3}{4}|1\rangle_{A}\right)|0\rangle_{R}$
A measurement is performed on the ancilla qubit. If the outcome is | 1 A , the resulting (non-normalized) state of the input register is
$|\psi_4\rangle = \sum_{j=1}^{2}\frac{C\,b_j}{\lambda_j}\,|v_j\rangle = \frac{1}{\sqrt{10}}\,|v_1\rangle - \frac{3}{4\sqrt{10}}\,|v_2\rangle$
Substituting the eigenvectors,
$|x\rangle \propto \frac{1}{\sqrt{10}}\cdot\frac{1}{\sqrt{10}}\begin{pmatrix} 1 \\ 3 \end{pmatrix} - \frac{3}{4\sqrt{10}}\cdot\frac{1}{\sqrt{10}}\begin{pmatrix} 3 \\ -1 \end{pmatrix} = \frac{1}{10}\left[\begin{pmatrix} 1 \\ 3 \end{pmatrix} - \begin{pmatrix} \frac{9}{4} \\ -\frac{3}{4} \end{pmatrix}\right] = \frac{1}{2}\begin{pmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{pmatrix}$
This is proportional to the true solution of the system.
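This worked example can be verified with a few lines of NumPy: the script below repeats the eigen-decomposition, forms the coefficients $C b_j/\lambda_j$, and confirms that the post-selected state is proportional to the classical solution. It is a numerical check, not a quantum implementation.

```python
# Classical check of the 2x2 HHL example.
import numpy as np

A = np.array([[3/4, 1/4],
              [1/4, 17/12]])
b = np.array([0.0, 1.0])
C = 0.5                                   # rotation constant used in the example

eigvals, eigvecs = np.linalg.eigh(A)      # columns of eigvecs are the |v_j>
b_coeffs = eigvecs.T @ b                  # b_j = <v_j|b>

# Input register after post-selecting the ancilla on |1> (up to normalization):
psi4 = eigvecs @ (C * b_coeffs / eigvals)

x_exact = np.linalg.solve(A, b)           # classical solution (-1/4, 3/4)
print("post-selected state:", psi4)       # 0.5 * (-0.25, 0.75)
print("ratio to exact x:   ", psi4 / x_exact)   # constant factor of about 0.5
```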
Overall, HHL solves a linear system of equations while providing an exponential speedup over classical algorithms under suitable assumptions, which indicates a potential quantum advantage. HHL is used as a subroutine in many QML algorithms, particularly for tasks such as matrix inversion and solving differential equations.

4. Quantum Machine Learning

Machine learning is a prominent area of research in computer science. However, with the substantial expansion of data sizes, researchers are increasingly exploring innovative methods to address this challenge. QC has emerged as a potential solution for managing these limitations. Consequently, researchers are investigating how the integration of quantum computing with machine learning can be effectively realized [19].
The general procedure for quantum machine learning algorithms comprises three main steps: encoding, quantum computation and measurement, as illustrated in Figure 22. The initial step (encoding) entails the transformation of data from classical forms into quantum states. The second step (quantum computation) varies based on the specific type of quantum machine learning algorithm being employed. The final step (measurement) involves converting the output data from quantum states back into classical formats [31].
QML is developed by modifying classical algorithms or their subroutines so that they can run on quantum computers. As such devices become more broadly available, they are expected to help manage the increasing amounts of data being produced globally [46]. The emerging field of QML can be approached in four main ways, determined by two factors: the nature of the data (classical or quantum) and the type of algorithm used (classical or quantum). Figure 23 presents these four distinct approaches [28].
The Classical–Classical (CC) approach utilizes classical data and algorithms that are inspired by ideas from quantum computing. These algorithms, termed “quantum-inspired,” are applied to classical data and run on conventional computers.
The Classical–Quantum (CQ) approach involves applying quantum algorithms to classical data. Within this approach, QML models are typically developed by constructing quantum versions of traditional machine learning algorithms that leverage quantum subroutines, such as Grover’s algorithm, the HHL algorithm and quantum phase estimation, to achieve algorithmic speedups. It also requires converting classical data into quantum data through a process known as quantum encoding [28].
The Quantum–Classical (QC) approach applies classical machine learning algorithms to quantum data with the goal of obtaining meaningful insights.
The Quantum–Quantum (QQ) approach involves applying quantum algorithms to quantum data in order to uncover underlying patterns and gain insights from the data. This approach is referred to as purely QML.
Among the four approaches, the CQ and CC methods have been more thoroughly investigated in the field of QML. Numerous studies examine the potential of these approaches to address real-world problems and demonstrate a quantum advantage [2].
Classical machine learning methods can be replaced by quantum algorithms to solve problems more effectively. With numerous potential applications and a broad range of theoretical approaches, QML is a very promising and quickly developing area of study.

4.1. Data Encoding

One of the main problems in QML is encoding data into quantum states. The process of converting classical data, such as images or big datasets, into quantum data requires a substantial investment of time and computing power. Thus, creating novel methods for effective data encoding is an important area for further study [31].

4.1.1. Basis Encoding

Basis encoding is the most elementary method for converting classical data into quantum states. This technique maps a binary string of classical data x = x 1 x n onto the computational basis state | x = | x 1 x n , requiring n qubits to represent n bits of classical information [48]. For instance, the classical input string (1100) is encoded as the quantum state | 1100 using four qubits [28]. In order to encode a dataset using the basis encoding method, the following equation must typically be used:
$|D\rangle = \frac{1}{\sqrt{M}}\sum_{m=1}^{M}|X^{m}\rangle$
where $D = \{X^{1}, X^{2}, \ldots, X^{M}\}$ represents classical data in the form of binary strings, with $X^{m} = (b_1, b_2, \ldots, b_N)$ and $b_i \in \{0, 1\}$ for $i \in \{1, 2, \ldots, N\}$. Here, N denotes the number of features and M indicates the number of samples in the dataset D [31].
For example, to convert the vector $x = (2, 3)$ into a quantum state, each component must first be transformed into its binary representation, requiring 2 bits per component: $x_1 = 10$ and $x_2 = 11$. The corresponding basis encoding then utilizes two qubits to represent the data as follows [28]:
$|x\rangle = \frac{1}{\sqrt{2}}\,|10\rangle + \frac{1}{\sqrt{2}}\,|11\rangle$
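A tiny Python sketch of this mapping is shown below: each binary string selects one computational basis state, and the samples are combined into a uniform superposition. The array-based state vector is an illustrative assumption.

```python
# Basis encoding sketch for x = (2, 3) -> |10>, |11>.
import numpy as np

samples = ["10", "11"]                    # binary features of the vector x = (2, 3)
n_qubits = len(samples[0])

state = np.zeros(2 ** n_qubits)
for bits in samples:
    state[int(bits, 2)] += 1
state /= np.linalg.norm(state)            # 1/sqrt(M) normalization

print(state)                              # ~[0, 0, 0.707, 0.707] = (|10> + |11>)/sqrt(2)
```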

4.1.2. Amplitude Encoding

Amplitude encoding is one of the most widely used and preferred techniques for encoding data in QML algorithms [31]. In this method, a classical data vector
x = ( x 1 , x 2 , , x n )
is mapped onto the amplitudes of a quantum state.
A normalized vector x is encoded into a quantum state | ψ as
$|\psi\rangle = \sum_{i=1}^{n}\frac{x_i}{\|x\|}\,|i\rangle$
where $|\psi\rangle$ is the quantum state, $x_i$ are the components of the classical vector, $\|x\|$ is the norm of the vector, and $|i\rangle$ are the computational basis states.
For example, the vector
x = ( 0.1 , 0.7 , 1.0 )
can be encoded using the following procedure:
First, compute the normalization of the vector:
$\|x\| = \sqrt{x_1^{2} + x_2^{2} + \cdots + x_n^{2}} = \sqrt{0.01 + 0.49 + 1.00} = \sqrt{1.5} \approx 1.225$
Subsequently, the vector is normalized and padded with zeros up to the nearest power-of-two dimension (here, four amplitudes for two qubits):
$x' = \frac{x}{\|x\|} = \left(\frac{0.1}{1.225}, \frac{0.7}{1.225}, \frac{1.0}{1.225}\right) \approx (0.082, 0.571, 0.816, 0.000)$
The vector $x'$ will be encoded into a quantum state as follows:
$|\psi\rangle = 0.082\,|00\rangle + 0.571\,|01\rangle + 0.816\,|10\rangle + 0.000\,|11\rangle$
The above amplitudes can also be arranged as a matrix:
$A = \begin{pmatrix} 0.082 & 0.571 \\ 0.816 & 0.000 \end{pmatrix}$
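The normalize-and-pad procedure above is easy to express in a few lines of Python; the sketch below reproduces the amplitudes of this example. The helper name is an illustrative assumption.

```python
# Amplitude encoding sketch: normalize, pad to a power-of-two dimension, use as amplitudes.
import numpy as np

def amplitude_encode(x):
    x = np.asarray(x, dtype=float)
    dim = 1 << int(np.ceil(np.log2(len(x))))      # next power of two (here 4)
    padded = np.zeros(dim)
    padded[: len(x)] = x
    return padded / np.linalg.norm(padded)        # amplitudes of |psi>

psi = amplitude_encode([0.1, 0.7, 1.0])
print(psi)                       # ~[0.082, 0.571, 0.816, 0.000]
print(np.sum(psi ** 2))          # ~1.0 (valid quantum state)
```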
There are also other encoding methods, including Angle Encoding [31], QSample Encoding [28] and Hamiltonian Encoding [48]. Proper data encoding is crucial for the future of QML to fully leverage the computational power of quantum computing.

4.2. QML Algorithms

Machine learning is divided into supervised and unsupervised types. In both, models gain insights by analyzing data. Supervised learning uses labeled data, where input–output pairs are provided, allowing the model to learn relationships between them. Unsupervised learning uses only input data, and the model finds patterns or structures on its own. This section covers two quantum machine learning algorithms: the quantum support vector machine (supervised) and quantum k-means (unsupervised). We focus on quantum models using quantum algorithms, where learning happens directly at the quantum level.

4.2.1. Quantum Support Vector Machine

The support vector machine (SVM) algorithm is a popular supervised learning method, especially useful for binary classification tasks. Its key idea is to find a hyperplane that separates two different classes of data based on their features, which then acts as a decision boundary for classifying new data.
As shown in Figure 24, SVM works to maximize the margin between the hyperplane and the closest data points, ensuring the boundary between the two classes is as wide as possible. In this figure, two distinct classes are illustrated: class 1 and class 2. A sectional view of the dataset reveals that the data can be separated linearly when projected into higher dimensions. The points closest to the hyperplane, shown on the dashed lines, are called support vectors, while the hyperplane itself, dividing the two classes, is defined by a specific mathematical equation. The equation of the hyperplane can then be given as [19]
w · x + b = 0
Additionally, w and b should be adjusted such that
$w \cdot x_i + b \geq +1, \quad \text{for } x_i \text{ in the positive class},$
$w \cdot x_i + b \leq -1, \quad \text{for } x_i \text{ in the negative class}$
The distance between the two margin hyperplanes can be expressed as $\frac{2}{\|w\|}$.
Minimizing $\|w\|$ therefore maximizes the margin, and adding constraints ensures that the margin correctly classifies the data. This can be represented by the following optimization problem:
$\min_{w, b}\ \frac{1}{2}\|w\|^{2}$
subject to the constraint
$y^{(i)}\left(w^{T}x^{(i)} - b\right) \geq 1$
for all training examples $i = 1, \ldots, M$ with labels $y^{(i)} \in \{-1, +1\}$. The constraint can be incorporated into the objective function using Lagrange multipliers $\alpha^{(i)}$, which leads to the following formulation of the problem:
$\min_{w, b}\ \max_{\alpha^{(i)} \geq 0}\ \left\{\frac{1}{2}\|w\|^{2} - \sum_{i=1}^{M}\alpha^{(i)}\left[y^{(i)}\left(w^{T}x^{(i)} - b\right) - 1\right]\right\}$
To eliminate w and b, we set the derivatives of the objective function F with respect to them to zero:
$\frac{\partial F}{\partial w} = w - \sum_{i=1}^{M}\alpha^{(i)}y^{(i)}x^{(i)} = 0$
$\frac{\partial F}{\partial b} = \sum_{i=1}^{M}\alpha^{(i)}y^{(i)} = 0$
As a consequence, we can express the weights w as
$w = \sum_{i=1}^{M}\alpha^{(i)}y^{(i)}x^{(i)}$
Therefore,
$F = \frac{1}{2}\left(\sum_{i=1}^{M}\alpha^{(i)}y^{(i)}x^{(i)}\right)\!\cdot\!\left(\sum_{j=1}^{M}\alpha^{(j)}y^{(j)}x^{(j)}\right) - \left(\sum_{i=1}^{M}\alpha^{(i)}y^{(i)}x^{(i)}\right)\!\cdot\!\left(\sum_{j=1}^{M}\alpha^{(j)}y^{(j)}x^{(j)}\right) + b\sum_{i=1}^{M}\alpha^{(i)}y^{(i)} + \sum_{i=1}^{M}\alpha^{(i)} = \sum_{i=1}^{M}\alpha^{(i)} - \frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M}\alpha^{(i)}\alpha^{(j)}y^{(i)}y^{(j)}\left(x^{(i)}\cdot x^{(j)}\right)$
subject to the conditions
$\alpha^{(i)} \geq 0$
for each training example $i = 1, \ldots, M$, and
$\sum_{i=1}^{M}\alpha^{(i)}y^{(i)} = 0.$
The optimization problem can be extended to incorporate an arbitrary kernel function K ( x ( i ) , x ( j ) ) , introducing non-linearity to the model. By replacing the dot product in the previous dual formulation with the kernel function, the problem is reformulated as
$\min_{\alpha^{(i)}}\ \left\{\frac{1}{2}\sum_{i=1}^{M}\sum_{j=1}^{M}\alpha^{(i)}\alpha^{(j)}y^{(i)}y^{(j)}K\!\left(x^{(i)}, x^{(j)}\right) - \sum_{i=1}^{M}\alpha^{(i)}\right\}$
The type of kernel function K ( x ( i ) , x ( j ) ) depends on the problem being solved. Common choices include the linear kernel, polynomial kernel and sigmoid kernel.
The quantum SVM can be implemented in two ways. The first method utilizes Grover’s algorithm [11], which searches over the candidate solutions of the objective function to identify the optimal hyperplane, achieving a quadratic speedup. The steps of this approach are described below.
  • Initialization
    The kernel function and kernel matrix should be determined for the specific problem.
  • Data encoding
    Classical data should be converted into quantum data as described previously.
  • Find the objective function
    Grover’s algorithm is used to optimize the objective function and find an optimal set of $\alpha^{(i)}$, from which the parameters w and b are obtained.
The second method employs the HHL algorithm [49], transforming the optimization problem into a linear system of equations that is solved using the HHL algorithm. This approach offers an exponential speedup.
As explained before, the dual problem involves finding the optimal $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_M]$ by solving a system of equations derived from the dual optimization.
At the optimal solution, the dual problem’s conditions ensure that
A α = b
where
  • A is the kernel matrix, defined as
    $A_{ij} = y^{(i)}y^{(j)}K(x^{(i)}, x^{(j)})$
    where K ( x ( i ) , x ( j ) ) is the kernel function.
  • α is the vector of Lagrange multipliers.
  • b is a vector derived from the constraints:
    $\sum_{i=1}^{M}\alpha_i y^{(i)} = 0$
In other words, $A\alpha = \mathbf{b}$ encapsulates the optimization process used to compute the multipliers $\alpha_i$. The steps of this approach are described below, and a classical sketch of this formulation is given after the list.
  • Initialization
    The kernel function and kernel matrix should be determined for the specific problem.
  • Data encoding
    Classical data should be converted into quantum data as described previously.
  • Find the Lagrange multipliers
    The HHL algorithm is used to solve the system of equations A α = b to find an optimal set of α ( i ) .
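The sketch below illustrates the idea classically, using the least-squares SVM formulation that underlies the HHL-based quantum SVM of [49]: a kernel matrix is built, a single linear system is solved for the multipliers and the bias, and new points are classified with the resulting kernel expansion. The toy dataset, the RBF kernel, and the hyperparameters are assumptions, and a quantum implementation would replace the call to np.linalg.solve with the HHL subroutine.

```python
# Classical sketch of the least-squares SVM system behind the HHL-based QSVM.
import numpy as np

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])   # toy samples
y = np.array([-1.0, -1.0, 1.0, 1.0])                             # toy labels
gamma, reg = 1.0, 0.1                                            # assumed hyperparameters

# RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# Least-squares SVM system  F [b, alpha]^T = [0, y]^T,  F = [[0, 1^T], [1, K + I/reg]].
M = len(y)
F = np.zeros((M + 1, M + 1))
F[0, 1:] = 1.0
F[1:, 0] = 1.0
F[1:, 1:] = K + np.eye(M) / reg
sol = np.linalg.solve(F, np.concatenate(([0.0], y)))   # HHL would solve this system
b, alpha = sol[0], sol[1:]

def predict(x_new):
    k = np.exp(-gamma * ((X - x_new) ** 2).sum(-1))
    return np.sign(alpha @ k + b)

print(predict(np.array([0.1, 0.0])), predict(np.array([1.0, 0.9])))   # -1.0 1.0
```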
Table 7 contains an overview of the applications of the quantum SVM.

4.2.2. Quantum K-Means Algorithm

K-means clustering is one of the most widely recognized methods in unsupervised machine learning. Through the process of clustering, data points are grouped into distinct classes or clusters based on the underlying structure of the input data. The primary objective of clustering is to identify similarities among data points and to organize those that exhibit similar characteristics into cohesive clusters [2].
The classical k-means algorithm categorizes data into k clusters by assigning each data point to the nearest centroid during each iteration. New centroids are computed by averaging the data points within each cluster and this process continues until cluster assignments no longer change. A notable limitation of the k-means algorithm is that the number of clusters must be predetermined. Furthermore, it relies on the assumption that similarity can be quantified using Euclidean distance, implying that smaller distances signify greater similarity [2].
The quantum version of the k-means algorithm consists of three quantum subroutines: the swap test, distance calculation and Grover’s algorithm. The swap test is used to measure the overlap $\langle a|b\rangle$ between two vectors, which serves as a measure of similarity. The quantum circuit for this subroutine is shown in Figure 25.
First, initial input states are set up.
$|\psi_0\rangle = |0\rangle|a\rangle|b\rangle$
Next, a Hadamard gate is applied to the control qubit, putting it into a superposition state.
$|\psi_1\rangle = (H|0\rangle)\,|a\rangle|b\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right)|a\rangle|b\rangle = \frac{1}{\sqrt{2}}\left(|0, a, b\rangle + |1, a, b\rangle\right)$
In the third step, a Fredkin gate is applied. Thus, | ψ 2 is expressed as follows:
$|\psi_2\rangle = \frac{1}{\sqrt{2}}\left(|0, a, b\rangle + |1, b, a\rangle\right)$
In the fourth step, a Hadamard gate is applied to the control qubit. Therefore,
$|\psi_3\rangle = \frac{1}{2}\,|0\rangle\left(|a, b\rangle + |b, a\rangle\right) + \frac{1}{2}\,|1\rangle\left(|a, b\rangle - |b, a\rangle\right)$
The probability of measuring the control qubit being in state | 0 is given by
$P(|0\rangle) = \frac{1}{4}\left\||a, b\rangle + |b, a\rangle\right\|^{2} = \frac{1}{4}\left(\langle a, b| + \langle b, a|\right)\left(|a, b\rangle + |b, a\rangle\right) = \frac{1}{4}\left(\langle a|a\rangle\langle b|b\rangle + \langle a|b\rangle\langle b|a\rangle + \langle b|a\rangle\langle a|b\rangle + \langle b|b\rangle\langle a|a\rangle\right) = \frac{1}{2} + \frac{1}{2}\left|\langle a|b\rangle\right|^{2}$
where $\langle a|b\rangle$ represents the inner product between the quantum states $|a\rangle$ and $|b\rangle$. If $P(|0\rangle) = 0.5$, the states $|a\rangle$ and $|b\rangle$ are orthogonal; if $P(|0\rangle) = 1$, the states are identical.
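The derivation above can be reproduced by simulating the three-qubit circuit directly, as in the NumPy sketch below; the two single-qubit input states are arbitrary assumptions chosen to exhibit the identical, partially overlapping, and orthogonal cases.

```python
# State-vector simulation of the swap test (H, controlled-SWAP, H, measure control).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Controlled-SWAP (Fredkin) on qubits (control, a, b), as an 8x8 permutation matrix.
CSWAP = np.eye(8)
CSWAP[[5, 6]] = CSWAP[[6, 5]]                 # swap |101> <-> |110> when control = 1

def swap_test_p0(a, b):
    psi0 = np.kron(np.array([1.0, 0.0]), np.kron(a, b))   # |0>|a>|b>
    H_control = np.kron(H, np.eye(4))                      # Hadamard on the control qubit
    psi3 = H_control @ CSWAP @ H_control @ psi0
    return np.sum(np.abs(psi3[:4]) ** 2)                   # probability of control in |0>

a = np.array([1.0, 0.0])                        # |0>
b = np.array([1.0, 1.0]) / np.sqrt(2)           # |+>
print(swap_test_p0(a, a))                       # ~1.0  (identical states)
print(swap_test_p0(a, b))                       # ~0.75 (|<a|b>|^2 = 1/2)
print(swap_test_p0(a, np.array([0.0, 1.0])))    # ~0.5  (orthogonal states)
```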
The swap test subroutine can be used as part of the distance calculation algorithm in order to calculate the Euclidean distance $\|a - b\|^{2}$.
First, initial input states are set up.
$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|0, a\rangle + |1, b\rangle\right)$
$|\phi\rangle = \frac{1}{\sqrt{Z}}\left(\|a\|\,|0\rangle - \|b\|\,|1\rangle\right), \qquad Z = \|a\|^{2} + \|b\|^{2}$
Next, $\langle\phi|\psi\rangle$ can be calculated using the swap test. Therefore,
$\langle\phi|\psi\rangle = \frac{1}{\sqrt{Z}}\left(\|a\|\langle 0| - \|b\|\langle 1|\right)\frac{1}{\sqrt{2}}\left(|0, a\rangle + |1, b\rangle\right) = \frac{1}{\sqrt{2Z}}\left(\|a\|\,|a\rangle - \|b\|\,|b\rangle\right)$
Using the amplitude encoding relation $\|a\|\,|a\rangle = a$, the above expression becomes
$\langle\phi|\psi\rangle = \frac{1}{\sqrt{2Z}}\left(a - b\right)$
The Euclidean distance can then be obtained from the squared norm of this quantity:
$\|a - b\|^{2} = 2Z\,|\langle\phi|\psi\rangle|^{2}$
In the quantum version of the k-means algorithm, the swap test and distance calculation subroutines are used to measure the Euclidean distance between data points and centroids, while Grover’s algorithm is applied to select the closest centroid for each data point. Since these subroutines have been explained in detail, the quantum k-means algorithm can now be described as a whole; a classical sketch of the resulting procedure follows the steps below.
  • Initialization
    The number of clusters, k, must be selected and k cluster centroids $\mu_1, \mu_2, \ldots, \mu_k \in \mathbb{R}^{N}$ should be initialized. These initial centroids can be assigned using any standard method commonly employed in the classical k-means algorithm, such as random selection or the k-means++ initialization technique.
  • Main loops until convergence is reached.
    Inner Loop (i): Choose the closest cluster centroid.
    Loop over training examples $i = 1, \ldots, M$ and, for each training example $x^{(i)}$, compute the distances $\|x^{(i)} - \mu_k\|$ to all cluster centroids in order to assign the data point to a cluster. Then, use Grover’s algorithm to efficiently determine the index
    $c^{(i)} := \arg\min_{k}\ \|x^{(i)} - \mu_k\|^{2}$
    Inner Loop (j): New cluster centroids should be chosen.
    For each cluster j, the centroid should be updated by computing the mean of all points assigned to that cluster. Looping over clusters $j = 1, \ldots, k$, the new centroid $\mu_j$ is computed as
    $\mu_j = \frac{1}{|C_j|}\sum_{i \in C_j} x^{(i)}$
    where $C_j$ is the set of data points assigned to cluster j and $|C_j|$ is the number of such points. The updated $\mu_j$ then becomes the new cluster centroid.
  • Convergence.
    Convergence is achieved when iterations of the algorithm do not change the positions of the cluster centroids [54].
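The following classical NumPy sketch mirrors the loop described above on an assumed two-cluster toy dataset; the comments mark the two places where the swap-test distance estimate and Grover’s minimum search would replace the classical operations.

```python
# Classical reference implementation of the k-means loop described above.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(2, 0.2, (20, 2))])   # toy data
k = 2

centroids = X[rng.choice(len(X), k, replace=False)]       # random initialization
for _ in range(20):
    # Quantum version: distances ||x - mu||^2 via the swap-test subroutine,
    # and the arg-min over centroids via Grover's minimum search.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centroids, centroids):
        break                                              # convergence reached
    centroids = new_centroids

print(np.round(centroids, 2))       # one centroid near (0, 0), one near (2, 2)
```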
The quantum version of the k-means algorithm offers an exponential speedup over its classical counterpart. Table 8 contains an overview of the applications of the quantum k-means algorithm.

4.2.3. Quantum Principal Component Analysis

Principal Component Analysis (PCA) is a method for reducing the dimensionality of data. It works by taking N-dimensional feature vectors (which may be correlated) from a training dataset, applying an orthonormal transformation, and compressing them to R-dimensional data. This lower-dimensional representation retains most of the important information and can be used in other machine learning algorithms. The advantage is that it allows similar conclusions to be drawn as with the full dataset, but with much faster execution, especially when $R \ll N$.
The quantum version of PCA (qPCA) was proposed by Lloyd, Mohseni and Rebentrost [62]. It offers an exponential speedup over the classical PCA algorithm by leveraging quantum computing techniques, potentially enabling much faster processing of large datasets.
qPCA can be implemented using the quantum phase estimation subroutine. The steps of the algorithm are described below, and a classical sketch mirroring them follows the list.
  • Demean and normalization of classical data
    Firstly, the N-dimensional vectors should be demeaned. This can be expressed mathematically by the following equation:
    $x^{(i)} \rightarrow x^{(i)} - \bar{x}, \qquad \bar{x} = \frac{1}{M}\sum_{i=1}^{M}x^{(i)}$
    Next, the items should be normalized. This can be expressed mathematically by the following equation:
    $x^{(i)} \rightarrow \frac{x^{(i)}}{\|x\|}, \qquad \|x\| = \sqrt{\sum_{k=1}^{N}x_k^{2}}$
  • Data encoding
    Classical data should be converted into quantum data as described previously.
    $x \rightarrow |x\rangle = \sum_{k=1}^{N}x_k\,|k\rangle$
  • Covariance/correlation matrix
    The density matrix can be described by the following equation:
    $\rho = \frac{1}{M}\sum_{i=1}^{M}|x^{(i)}\rangle\langle x^{(i)}|$
    The outer product $|x^{(i)}\rangle\langle x^{(i)}|$ is written as
    $|x^{(i)}\rangle\langle x^{(i)}| = \sum_{k=1}^{N}\sum_{m=1}^{N}x_k^{(i)}x_m^{(i)}\,|k\rangle\langle m|$
    which, in matrix notation, is represented by
    $|x^{(i)}\rangle\langle x^{(i)}| = \begin{pmatrix} x_1^{(i)}x_1^{(i)} & x_1^{(i)}x_2^{(i)} & \cdots & x_1^{(i)}x_N^{(i)} \\ x_2^{(i)}x_1^{(i)} & x_2^{(i)}x_2^{(i)} & \cdots & x_2^{(i)}x_N^{(i)} \\ \vdots & \vdots & \ddots & \vdots \\ x_N^{(i)}x_1^{(i)} & x_N^{(i)}x_2^{(i)} & \cdots & x_N^{(i)}x_N^{(i)} \end{pmatrix}$
    Thus, the sum over training examples produces the following matrix:
    $\frac{1}{M}\sum_{i=1}^{M}|x^{(i)}\rangle\langle x^{(i)}| = \frac{1}{M}\begin{pmatrix} \sum_i x_1^{(i)}x_1^{(i)} & \sum_i x_1^{(i)}x_2^{(i)} & \cdots & \sum_i x_1^{(i)}x_N^{(i)} \\ \sum_i x_2^{(i)}x_1^{(i)} & \sum_i x_2^{(i)}x_2^{(i)} & \cdots & \sum_i x_2^{(i)}x_N^{(i)} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_i x_N^{(i)}x_1^{(i)} & \sum_i x_N^{(i)}x_2^{(i)} & \cdots & \sum_i x_N^{(i)}x_N^{(i)} \end{pmatrix}$
    For demeaned data, this matrix is equal to the covariance matrix.
  • Exponential of density matrix
    QPE can be applied to efficiently obtain the eigenvalues and eigenvectors of the density matrix. As mentioned before, QPE requires a unitary operator U. It is known that $U = e^{iH}$ is unitary for any Hermitian matrix H, and the density matrix $\rho$ is Hermitian by definition. Therefore, the gate $U = e^{i\rho t}$ is constructed in order to apply the QPE subroutine. It should be noted that the eigenvectors of $\rho$ are also eigenvectors of $e^{i\rho t}$, and each eigenvalue $\lambda$ of $\rho$ corresponds to the eigenvalue $e^{i\lambda t}$ of U.
  • Eigendecomposition of density matrix
    The qPCA uses a slight modification of the QPE subroutine by applying U to the density matrix rather than to a single eigenvector. For simplicity, let us first apply U to the state $|x^{(i)}\rangle$.
    $e^{i\rho t}|x^{(i)}\rangle = \sum_{j=1}^{M}e^{i\lambda^{(j)}t}\,|\phi^{(j)}\rangle\langle\phi^{(j)}|x^{(i)}\rangle = \sum_{j=1}^{M}e^{i\lambda^{(j)}t}\,\langle\phi^{(j)}|x^{(i)}\rangle\,|\phi^{(j)}\rangle = \sum_{j=1}^{M}c^{(ij)}\,|\phi^{(j)}\rangle, \qquad c^{(ij)} = e^{i\lambda^{(j)}t}\langle\phi^{(j)}|x^{(i)}\rangle$
    The output of QPE is as follows:
    $|0\rangle^{\otimes n}|x^{(i)}\rangle \rightarrow \sum_{j=1}^{M}c^{(ij)}\,|\tilde{\lambda}^{(j)}\rangle\,|\phi^{(j)}\rangle$
    The output can also be written in density matrix form as
    $\eta^{(i)} = \sum_{j=1}^{M}|c^{(ij)}|^{2}\,|\tilde{\lambda}^{(j)}\rangle\langle\tilde{\lambda}^{(j)}| \otimes |\phi^{(j)}\rangle\langle\phi^{(j)}|$
    As mentioned before, the QPE is applied to ρ in qPCA. Therefore,
    $\eta = \sum_{j=1}^{M}\sum_{i=1}^{M}\frac{1}{M}\,|c^{(ij)}|^{2}\,|\tilde{\lambda}^{(j)}\rangle\langle\tilde{\lambda}^{(j)}| \otimes |\phi^{(j)}\rangle\langle\phi^{(j)}| = \sum_{j=1}^{M}\lambda^{(j)}\,|\tilde{\lambda}^{(j)}\rangle\langle\tilde{\lambda}^{(j)}| \otimes |\phi^{(j)}\rangle\langle\phi^{(j)}|$
    The $\lambda^{(j)}$ coefficient is derived as follows:
    $\sum_{i=1}^{M}\frac{1}{M}\,|c^{(ij)}|^{2} = \sum_{i=1}^{M}\frac{1}{M}\,e^{-i\lambda^{(j)}t}\langle\phi^{(j)}|x^{(i)}\rangle\langle x^{(i)}|\phi^{(j)}\rangle\,e^{i\lambda^{(j)}t} = \langle\phi^{(j)}|\left(\frac{1}{M}\sum_{i=1}^{M}|x^{(i)}\rangle\langle x^{(i)}|\right)|\phi^{(j)}\rangle = \langle\phi^{(j)}|\rho|\phi^{(j)}\rangle = \lambda^{(j)}$
  • Sampling
    The final step involves sampling from the final quantum state to obtain the features of the dominant eigenvectors, i.e., the principal components.
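As a classical point of reference for the procedure above, the NumPy sketch below demeans and normalizes an assumed toy dataset, builds the density matrix $\rho$, and diagonalizes it directly; on a quantum device, QPE applied to $e^{i\rho t}$ would extract the same eigenvalues and eigenvectors.

```python
# Classical mirror of the qPCA steps: demean, normalize, build rho, diagonalize.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4)) @ np.diag([3.0, 1.0, 0.3, 0.1])   # M = 100 samples, N = 4

X = X - X.mean(axis=0)                              # demean
X = X / np.linalg.norm(X, axis=1, keepdims=True)    # normalize each sample (a quantum state)

rho = (X[:, :, None] * X[:, None, :]).mean(axis=0)  # rho = (1/M) sum_i |x_i><x_i|

eigvals, eigvecs = np.linalg.eigh(rho)
order = np.argsort(eigvals)[::-1]
print(np.round(eigvals[order], 3))                  # dominant eigenvalues (they sum to 1)
principal_components = eigvecs[:, order[:2]]        # keep the top R = 2 components
print(principal_components.shape)                   # (4, 2)
```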
The qPCA offers an exponential speedup over its classical counterpart. Table 9 contains an overview of the applications of the qPCA.

5. Quantum Deep Learning

As described above, various QML algorithms offer exponential or quadratic speedup over their classical counterparts. Researchers are also exploring the potential quantum versions of neural networks (NNs) in comparison with their classical versions. The potential advantages of quantum versions of NNs include exponential memory capacity, fewer hidden neurons with higher performance, faster learning and processing speed, as well as smaller scale and higher stability [19].
A quantum neural network (qNN) consists of four important components: the quantum input layer, quantum hidden layer, measurement layer, and a classical optimizer. The general structure of the quantum circuit for a qNN is shown in Figure 26.
The quantum input layer refers to the process of encoding classical data into quantum states. This process was described previously. Once the data is encoded into qubits, a unitary transformation is applied to the input, similar to how the weight vector functions in classical neural networks. The exact functionality of the unitary operator U depends on the specific problem. Finally, the output is obtained by measuring the output qubits [2]. The general mathematical form can be described by the following equation:
$|f\rangle = U\sum_{n=0}^{N-1}\theta_n\,|x_n\rangle$
where $|f\rangle$ represents the output quantum state, U the unitary operator, $\theta_n$ the weight associated with each input quantum state, and $|x_n\rangle$ the input quantum state.
Similar to classical NNs, the training of qNNs involves optimizing the parameters of the unitary operator U to minimize a loss function.
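A minimal single-qubit example of such a parameterized model is sketched below: a feature is angle-encoded, a trainable $R_y(\theta)$ plays the role of the unitary U, the output is the expectation value of Z, and the gradient needed for training is obtained with the parameter-shift rule. The encoding choice and the single parameter are illustrative assumptions, not the architecture of Figure 26.

```python
# Toy parameterized quantum "neuron" with a parameter-shift gradient.
import numpy as np

def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(x, theta):
    state = ry(theta) @ ry(np.pi * x) @ np.array([1.0, 0.0])   # U(theta) applied to encoded |0>
    return state[0] ** 2 - state[1] ** 2                        # <Z> = P(0) - P(1)

x, theta = 0.3, 0.8
f = expectation_z(x, theta)

# Parameter-shift rule for d<Z>/dtheta, usable inside any classical optimizer.
grad = 0.5 * (expectation_z(x, theta + np.pi / 2) - expectation_z(x, theta - np.pi / 2))

print(round(f, 4))                                             # equals cos(theta + pi*x)
print(round(grad, 4), round(-np.sin(theta + np.pi * x), 4))    # shift rule vs analytic gradient
```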
The combination of deep learning and quantum computing is in its early stages. Table 10 provides an overview of the applications of qNNs.
It should be noted that the applications of quantum neural networks are currently limited by the constraints of existing quantum hardware. However, as quantum hardware capabilities improve, it will become possible to apply qNN architectures to larger datasets and to demonstrate a quantum advantage.

6. Real-World Applications of QML

There is an exponential growth of interest among researchers in applying QML techniques to real-world problems. The most common fields of application include healthcare, biology, finance, high-energy physics, pattern recognition and classification, image processing and analysis, wireless communication and more.
In healthcare, QML can be applied to medical imaging, biosignal analysis and medical health records to enable more precise medicine, facilitate early cancer diagnosis and predict different stages of diabetes. In [74], two distinct approaches were employed for medical image classification. The first approach involved integrating quantum circuits into the training process of classical neural networks, while the second focused on designing and training quantum orthogonal neural networks. These methods were applied to retinal color fundus images and chest X-rays. The algorithms were evaluated using IBM quantum hardware with configurations of 5, 7, and 16 qubits. A more comprehensive summary of the state of the art of quantum computing for healthcare applications is presented in [1,75].
In the biomedical domain, QML can achieve groundbreaking advancements in genomic sequence analysis, a field commonly referred to as the “omics” domain. The term “omics” encompasses data derived from genetics, such as DNA sequences and proteins, with the goal of developing effective and personalized treatments. QML has the potential to perform various tasks, including analyzing protein–DNA interactions, predicting gene expression, and conducting genomic sequencing analysis. Additionally, the categorization of cancer-causing genes to enable early cancer detection remains an area of ongoing research. For example, transcription factors regulate gene expression, but the mechanisms by which these proteins recognize and specifically bind to their DNA targets remain a topic of debate. In [76], a QML algorithm was implemented to predict binding specificity. The experiments were conducted using a quantum annealer to rank transcription factor binding. Additionally, in [77], a quantum-hybrid deep neural network was utilized to predict protein structures. A more comprehensive summary of the state of the art of quantum computing for omics study is presented in [1].
In finance, QML can be applied to portfolio optimization, fraud detection, market prediction and trading, pricing and risk management. In [78], a novel quantum reservoir computing (QRC) method was introduced and applied to the foreign exchange market. This approach effectively captured the stochastic dynamics of exchange rates, achieving significantly greater accuracy compared to classical reservoir computing techniques. Furthermore, in [79], a QML algorithm was employed for feature selection in fraud detection. The results obtained using an IBM Quantum computer were promising. A more comprehensive summary of the state of the art of quantum computing for finance applications is presented in [80,81].
High-energy physics (HEP) explores the fundamental nature of matter and the universe. While the standard model serves as a robust framework for explaining many physical phenomena, it does not address critical questions such as the source of dark matter or the properties of neutrino mass. QML offers exciting opportunities in HEP, including applications like quantum system simulations, nuclear physics computations (such as neutrino–nucleus scattering cross-sections), insights into quantum gravity and the development of quantum sensors to detect new physics beyond the standard model. The study by [82] demonstrates the use of QML for quantum simulations in high-energy physics. A more comprehensive summary of the state of the art of quantum computing for HEP applications is presented in [83].
QML has shown great potential in pattern recognition and classification tasks. The work presented in [84] introduces a quantum kNN algorithm that outperforms its classical equivalent in terms of time complexity. Additionally, the findings suggest that the quantum approach delivers insights that go beyond the capabilities of traditional methods.
In addition, QML holds significant potential in image processing. Quantum computing could enable major advancements in areas such as quantum image representation, geometric transformations, image protection, edge detection, image segmentation, filtering and compression [85]. In [86], a qSVM was implemented for cloud detection in satellite cloud images. The algorithm was developed using the PennyLane Python package and the experiments were conducted on a quantum simulator. In [87], a quantum approach to tomosynthesis is presented by implementing the 2D Radon transform using the QFT and its inverse with the quantum 3D FFT, demonstrating the process through fundamental quantum gates. In [88], the quantum 3D FFT is further utilized for velocity filtering with a short execution time, serving as an important technical tool for isolating objects moving at speeds within certain limits. Both works were implemented using MATLAB 2022b. A more comprehensive summary of the state of the art of quantum computing for image processing is presented in [85,89]. Another emerging area is wireless communication, where a QML-based framework for 6G communication networks was proposed in [90]. This framework explores the potential of quantum algorithms to enhance data transmission, signal processing and network optimization in next-generation wireless systems.

7. Future Directions in Quantum Machine Learning and Deep Learning

As quantum hardware continues to advance and software frameworks for quantum programming mature, QML and QDL are poised to evolve rapidly. However, several key challenges remain unresolved, and identifying promising research directions is crucial for unlocking the full potential of quantum-enhanced learning systems. This section outlines emerging areas of interest and critical open problems in the field.

7.1. Scalability and Quantum Advantage

One of the primary challenges is demonstrating a clear quantum advantage for practical machine learning tasks. While theoretical speedups exist for selected problems [45], most current QML models rely on hybrid approaches whose scaling behavior on real-world data is still poorly understood. Future work should explore concrete benchmarks comparing quantum models with classical baselines [21], and hardware-aware algorithm design to optimize circuit depth, qubit connectivity, and measurement schemes [91].

7.2. Barren Plateaus and Trainability

Variational quantum circuits (VQCs), central to many QML algorithms, often suffer from barren plateaus—regions in parameter space where gradients vanish exponentially [92]. This presents a major obstacle to training quantum models efficiently. Promising directions include novel cost function design [93], local cost functions and entanglement-aware ansatz structures [94], and layer-wise training and noise-aware optimization [95].

7.3. Quantum Data Representations

The way classical data is embedded into quantum states (quantum feature maps) significantly affects the performance and expressivity of QML models. Important lines of research are data encoding strategies with theoretical guarantees [96], expressibility vs. trainability trade-offs [97], and cost-efficient state preparation algorithms [98].

7.4. Model Expressivity and Generalization

Theoretical understanding of expressivity and generalization in quantum neural networks is still developing. Future work includes generalization bounds for variational quantum models [99], the relationship between expressivity, entanglement, and overfitting [100], and complexity theory perspectives on learnability [17].

7.5. Quantum Natural Language Processing (QNLP)

QNLP leverages quantum tensor structures to model grammatical and semantic relations. Although still emerging, the field holds potential for language understanding and semantic parsing [101]. Key directions are scaling categorical QNLP models to larger corpora, hybrid QNLP pipelines using pre-trained classical language models, and noise robustness in compositional circuits.

7.6. Integration with Classical ML and Federated Architectures

Hybrid Quantum–Classical architectures offer a realistic near-term path. Examples include quantum kernel methods in support vector machines [49], quantum data encryption in federated learning [102], and decentralized optimization over quantum-secured communication channels.

7.7. Domain-Specific Applications and Benchmarks

Real-world adoption of QML depends on application-driven success stories in fields such as healthcare (drug discovery, patient stratification [103]), finance (portfolio optimization and risk analysis [104]), and bioinformatics (quantum-enhanced genomic analysis [76]). Future work must focus on the following:
  • Domain-aligned datasets and reproducible QML benchmarks.
  • Interpretability and robustness of quantum predictions.
  • Integration into existing ML pipelines with domain constraints.
In summary, while current QML and QDL systems remain in a nascent stage, the research community is actively exploring numerous directions with transformative potential. Progress in quantum hardware, algorithm design, and theoretical understanding will jointly determine the trajectory of quantum machine learning in the coming decade.

8. Conclusions

This paper provides a comprehensive introduction to QC, specifically designed for beginners. It offers an overview of the fundamental concepts and tools that form the foundation of QC, such as qubits, superposition, entanglement, quantum gates and quantum algorithms. The paper also explores the connection between quantum algorithms and QML as well as QDL. Additionally, it examines their potential applications in various fields such as bioinformatics, finance and HEP. By offering a clear and accessible explanation of these key ideas, the paper seeks to provide readers with a solid understanding of both the theory and practical implications of quantum computing in the modern technological landscape.
The fundamental knowledge of quantum physics, linear algebra and electronics is essential for studying the field of QC. By presenting the QPE algorithm [4], the usefulness of this method becomes apparent, particularly in one of the most groundbreaking quantum algorithms to date, Shor’s algorithm [3]. Shor’s algorithm provides an efficient solution to the factoring problem that is currently difficult for classical computers. Moreover, another groundbreaking algorithm, Grover’s algorithm [5], can be applied as a black box, providing a quadratic improvement over classical algorithms.
With the exponential growth of data generation and rapid technological advancements in recent years, classical machine learning algorithms may face challenges in addressing complex real-world problems. QML, leveraging the principles of quantum computing such as superposition and entanglement, is well-positioned to tackle the machine learning challenges of the future. Quantum algorithms can be integrated with machine learning to develop QML algorithms. Quantum subroutines have already been implemented and form the foundation of QML algorithms, as presented in this work. The quantum SVM can be implemented using both Grover’s algorithm [11] and the HHL algorithm [49]. The quantum version of the k-means algorithm can be implemented using Grover’s algorithm [54], while the quantum version of PCA can be implemented using the QPE subroutine [62].
Researchers are also working on finding real-world applications for these algorithms. This work presents multiple applications. In [50], a quantum SVM is introduced, aimed at predicting solar irradiation based on the HHL algorithm. In [56], a quantum k-means algorithm based on Grover’s algorithm is applied to a heart disease dataset, where the quantum version outperforms its classical counterpart in terms of accuracy, precision, sensitivity, specificity and F1-score. In [63], quantum PCA is presented as an end-to-end implementation designed for managing interest rate risk, based on QPE.
Although these algorithms hold promise, they are not yet capable of fully replacing classical algorithms due to existing challenges. The first major challenge is state preparation, which involves converting classical data into quantum data. The second challenge lies in hardware limitations, as current quantum devices are not yet capable of handling large datasets effectively. Overcoming these barriers is essential to achieving true quantum supremacy in machine learning.
Looking ahead, future opportunities include the development of more effective quantum algorithms designed for practical and meaningful tasks. Additionally, considerations of time efficiency and algorithmic complexity will be crucial.
In summary, this work aims to provide a tutorial for beginners entering the field of QML by thoroughly studying the mathematics behind quantum algorithms and QML algorithms. Additionally, real-world applications are presented to illustrate the practical use of these concepts. The collective insights contribute to the expanding body of knowledge in quantum computing and bring us closer to their realization, paving the way for the implementation of these algorithms in real-world applications with practical use.

Author Contributions

M.R. and G.K. have equally contributed to conceptualization, methodology, validation, and writing—original draft preparation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
QC    Quantum Computing
QML   Quantum Machine Learning
SVM   Support Vector Machine

References

  1. Maheshwari, D.; Garcia-Zapirain, B.; Sierra-Sosa, D. Quantum Machine Learning Applications in the Biomedical Domain: A Systematic Review. IEEE Access 2022, 10, 80463–80484. [Google Scholar] [CrossRef]
  2. Jadhav, A.; Rasool, A.; Gyanchandani, M. Quantum Machine Learning: Scope for real-world problems. Procedia Comput. Sci. 2023, 218, 2612–2625. [Google Scholar] [CrossRef]
  3. Shor, P.W. Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. SIAM J. Comput. 1997, 26, 1484–1509. [Google Scholar] [CrossRef]
  4. Kitaev, A.Y. Quantum measurements and the Abelian Stabilizer Problem. Electron. Colloquium Comput. Complex. 1995, TR96. [Google Scholar] [CrossRef]
  5. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 22–24 May 1996; STOC ’96; pp. 212–219. [Google Scholar] [CrossRef]
  6. Alchieri, L.; Badalotti, D.; Bonardi, P.; Bianco, S. An introduction to quantum machine learning: From quantum logic to quantum deep learning. Quantum Mach. Intell. 2021, 3, 28. [Google Scholar] [CrossRef]
  7. Kak, S.C. Quantum Neural Computing; Elsevier. Adv. Imaging Electron Phys. 1995, 94, 259–313. [Google Scholar] [CrossRef]
  8. Ventura, D.; Martinez, T.R. A Quantum Associative Memory Based on Grover’s Algorithm. In Proceedings of the International Conference on Adaptive and Natural Computing Algorithms, Portoroz, Slovenia, 6–9 April 1999. [Google Scholar]
  9. Matsui, N.; Takai, M.; Nishimura, H. A network model based on qubitlike neuron corresponding to quantum circuit. Electron. Commun. Jpn. (Part III: Fundam. Electron. Sci.) 2000, 83, 67–73. [Google Scholar] [CrossRef]
  10. Altaisky, M.V. Quantum neural network. arXiv 2001, arXiv:quant-ph/0107012. [Google Scholar]
  11. Anguita, D.; Ridella, S.; Rivieccio, F.; Zunino, R. Quantum optimization for training support vector machines. Neural Netw. Off. J. Int. Neural Netw. Soc. 2003, 16, 763–770. [Google Scholar] [CrossRef] [PubMed]
  12. Lloyd, S.; Mohseni, M.; Rebentrost, P. Quantum algorithms for supervised and unsupervised machine learning. arXiv 2013, arXiv:1307.0411. [Google Scholar] [CrossRef]
  13. Wittek, P. Quantum Machine Learning: What Quantum Computing Means to Data Mining; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  14. Tacchino, F.; Macchiavello, C.; Gerace, D.; Bajoni, D. An artificial neuron implemented on an actual quantum processor. npj Quantum Inf. 2019, 5, 26. [Google Scholar] [CrossRef]
  15. Broughton, M.; Verdon, G.; McCourt, T.; Martinez, A.J.; Yoo, J.H.; Isakov, S.V.; Massey, P.; Halavati, R.; Niu, M.Y.; Zlokapa, A.; et al. TensorFlow Quantum: A Software Framework for Quantum Machine Learning. arXiv 2021, arXiv:2003.02989. [Google Scholar] [CrossRef]
  16. Wang, Y.; Liu, J. A comprehensive review of quantum machine learning: From NISQ to fault tolerance. Rep. Prog. Phys. 2024, 87, 116402. [Google Scholar] [CrossRef] [PubMed]
  17. Biamonte, J.; Wittek, P.; Pancotti, N.; Rebentrost, P.; Wiebe, N.; Lloyd, S. Quantum machine learning. Nature 2017, 549, 195–202. [Google Scholar] [CrossRef] [PubMed]
  18. Cerezo, M.; Verdon, G.; Huang, H.Y.; Cincio, L.; Coles, P. Challenges and opportunities in quantum machine learning. Nat. Comput. Sci. 2022, 2, 567–576. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Ni, Q. Recent advances in quantum machine learning. Quantum Eng. 2020, 2, e34. [Google Scholar] [CrossRef]
  20. Peral-García, D.; Cruz-Benito, J.; García-Peñalvo, F.J. Systematic literature review: Quantum machine learning and its applications. Comput. Sci. Rev. 2024, 51, 100619. [Google Scholar] [CrossRef]
  21. Preskill, J. Quantum Computing in the NISQ era and beyond. Quantum 2018, 2, 79. [Google Scholar] [CrossRef]
  22. Balamurugan, K.S.; Sivakami, A.; Mathankumar, M. Quantum computing basics, applications and future perspectives. J. Mol. Struct. 2024, 1308, 137917. [Google Scholar] [CrossRef]
  23. Singh, J.; Singh, M. Evolution in Quantum Computing. In Proceedings of the 2016 International Conference System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 25–27 November 2016; pp. 267–270. [Google Scholar] [CrossRef]
  24. Peruzzo, A.; McClean, J.; Shadbolt, P.; Yung, M.H.; Zhou, X.Q.; Love, P.J.; Aspuru-Guzik, A.; O’Brien, J.L. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun. 2014, 5, 4213. [Google Scholar] [CrossRef]
  25. Farhi, E.; Goldstone, J.; Gutmann, S. A Quantum Approximate Optimization Algorithm. arXiv 2014, arXiv:quant-ph/1411.4028. [Google Scholar] [CrossRef]
  26. McClean, J.R.; Romero, J.; Babbush, R.; Aspuru-Guzik, A. The theory of variational hybrid quantum-classical algorithms. New J. Phys. 2016, 18, 023023. [Google Scholar] [CrossRef]
  27. Chen, S.Y.; Goan, H. Variational Quantum Circuits and Deep Reinforcement Learning. arXiv 2019, arXiv:1907.00397. [Google Scholar] [CrossRef]
  28. Zeguendry, A.; Jarir, Z.; Quafafou, M. Quantum Machine Learning: A Review and Case Studies. Entropy 2023, 25, 287. [Google Scholar] [CrossRef]
  29. Rietsche, R.; Dremel, C.; Bosch, S.; Steinacker, L.; Meckel, M.; Leimeister, J.M. Quantum computing. Electron Mark. 2022, 32, 2525–2536. [Google Scholar] [CrossRef]
  30. Williams, C.P.; Williams, C.P. Quantum gates. In Explorations in Quantum Computing; Springer: London, UK, 2011; pp. 51–122. [Google Scholar]
  31. Houssein, E.H.; Abohashima, Z.; Elhoseny, M.; Mohamed, W.M. Machine learning in the quantum realm: The state-of-the-art, challenges, and future vision. Expert Syst. Appl. 2022, 194, 116512. [Google Scholar] [CrossRef]
  32. Hidary, J.D. A Brief History of Quantum Computing. In Quantum Computing: An Applied Approach; Springer International Publishing: Cham, Switzerland, 2019; pp. 11–16. [Google Scholar] [CrossRef]
  33. Albash, T.; Lidar, D.A. Adiabatic quantum computation. Rev. Mod. Phys. 2018, 90, 015002. [Google Scholar] [CrossRef]
  34. Lahtinen, V.; Pachos, J. A Short Introduction to Topological Quantum Computation. SciPost Phys. 2017, 3, 021. [Google Scholar] [CrossRef]
  35. Laumann, C.; Moessner, R.; Scardicchio, A.; Sondhi, S. Quantum annealing: The fastest route to quantum computation? Eur. Phys. J. Spec. Top. 2015, 224, 75–88. [Google Scholar] [CrossRef]
  36. Bhat, H.A.; Khanday, F.A.; Kaushik, B.K.; Bashir, F.; Shah, K.A. Quantum Computing: Fundamentals, Implementations and Applications. IEEE Open J. Nanotechnol. 2022, 3, 61–77. [Google Scholar] [CrossRef]
  37. Hassija, V.; Chamola, V.; Saxena, V.; Chanana, V.; Parashari, P.; Mumtaz, S.; Guizani, M. Present Landscape of Quantum Computing. IET Quantum Commun. 2020, 1, 1. [Google Scholar] [CrossRef]
  38. Nandhini, S.; Singh, H.; Akash, U.N. An extensive review on quantum computers. Adv. Eng. Softw. 2022, 174, 103337. [Google Scholar] [CrossRef]
  39. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information: 10th Anniversary Edition; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  40. Montanaro, A. Quantum algorithms: An overview. npj Quantum Inf. 2016, 2, 15023. [Google Scholar] [CrossRef]
  41. Deutsch, D.; Penrose, R. Quantum theory, the Church–Turing principle and the universal quantum computer. Proc. R. Soc. London A Math. Phys. Sci. 1985, 400, 97–117. [Google Scholar] [CrossRef]
  42. Deutsch, D.; Jozsa, R. Rapid solution of problems by quantum computation. Proc. R. Soc. London. Ser. A Math. Phys. Sci. 1992, 439, 553–558. [Google Scholar]
  43. Bernstein, E.; Vazirani, U. Quantum complexity theory. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 16–18 May 1993; pp. 11–20. [Google Scholar]
  44. Simon, D.R. On the Power of Quantum Computation. SIAM J. Comput. 1997, 26, 1474–1483. [Google Scholar] [CrossRef]
  45. Harrow, A.W.; Hassidim, A.; Lloyd, S. Quantum Algorithm for Linear Systems of Equations. Phys. Rev. Lett. 2009, 103, 150502. [Google Scholar] [CrossRef]
  46. Schuld, M.; Sinayskiy, I.; Petruccione, F. An introduction to quantum machine learning. Contemp. Phys. 2014, 56, 172–185. [Google Scholar] [CrossRef]
  47. Schuld, M.; Petruccione, F. Machine Learning with Quantum Computers; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar] [CrossRef]
  48. Gujju, Y.; Matsuo, A.; Raymond, R. Quantum Machine Learning on Near-Term Quantum Devices: Current State of Supervised and Unsupervised Techniques for Real-World Applications. arXiv 2024, arXiv:2307.00908. [Google Scholar] [CrossRef]
  49. Rebentrost, P.; Mohseni, M.; Lloyd, S. Quantum Support Vector Machine for Big Data Classification. Phys. Rev. Lett. 2014, 113, 130503. [Google Scholar] [CrossRef]
  50. Senekane, M.; Taele, B. Prediction of Solar Irradiation Using Quantum Support Vector Machine Learning Algorithm. Smart Grid Renew. Energy 2016, 7, 293–301. [Google Scholar] [CrossRef]
  51. Yuan, X.J.; Chen, Z.; Liu, Y.D.; Xie, Z.; Liu, Y.Z.; Jin, X.M.; Wen, X.; Tang, H. Quantum Support Vector Machines for Aerodynamic Classification. Intell. Comput. 2023, 2, 0057. [Google Scholar] [CrossRef]
  52. Vashisth, S.; Dhall, I.; Aggarwal, G. Design and analysis of quantum powered support vector machines for malignant breast cancer diagnosis. J. Intell. Syst. 2021, 30, 998–1013. [Google Scholar] [CrossRef]
  53. Yang, J.; Awan, A.J.; Vall-llosera, G. Support Vector Machines on Noisy Intermediate Scale Quantum Computers. arXiv 2019, arXiv:1909.11988. [Google Scholar] [CrossRef]
  54. Kopczyk, D. Quantum machine learning for data scientists. arXiv 2018, arXiv:1804.10068. [Google Scholar] [CrossRef]
  55. Gong, C.; Dong, Z.; Gani, A.; Qi, H. Quantum k-means algorithm based on Trusted server in Quantum Cloud Computing. arXiv 2020, arXiv:2011.04402. [Google Scholar] [CrossRef]
  56. Kavitha, S.S.; Kaulgud, N. Quantum K-Means Clustering Method For Detecting Heart Disease Using Quantum Circuit Approach. 2021. Available online: https://europepmc.org/article/ppr/ppr409380 (accessed on 10 March 2024). [CrossRef]
  57. DiAdamo, S.; O’Meara, C.; Cortiana, G.; Bernabe-Moreno, J. Practical Quantum K-Means Clustering: Performance Analysis and Applications in Energy Grid Classification. IEEE Trans. Quantum Eng. 2022, 3, 1–16. [Google Scholar] [CrossRef]
  58. Benlamine, K.; Bennani, Y.; Zaiou, A.; Hibti, M.; Matei, B.; Grozavu, N. Distance Estimation for Quantum Prototypes Based Clustering; Springer International Publishing: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  59. Ohno, H. A quantum algorithm of K-means toward practical use. Quantum Inf. Process. 2022, 21, 146. [Google Scholar] [CrossRef]
  60. Shi, X.; Shang, Y.; Guo, C. Quantum inspired K-means algorithm using matrix product states. arXiv 2020, arXiv:2006.06164. [Google Scholar] [CrossRef]
  61. Chen, J.; Qi, X.; Chen, L.; Chen, F.; Cheng, G. Quantum-inspired ant lion optimized hybrid k-means for cluster analysis and intrusion detection. Knowl.-Based Syst. 2020, 203, 106167. [Google Scholar] [CrossRef]
  62. Lloyd, S.; Mohseni, M.; Rebentrost, P. Quantum principal component analysis. Nat. Phys. 2014, 10, 631–633. [Google Scholar] [CrossRef]
  63. Martin, A.; Candelas, B.; Rodríguez-Rozas, Á.; Martín-Guerrero, J.; Chen, X.; Lamata, L.; Orus, R.; Solano, E.; Sanz, M. Toward pricing financial derivatives with an IBM quantum computer. Phys. Rev. Res. 2021, 3, 013167. [Google Scholar] [CrossRef]
  64. Dri, E.; Aita, A.; Fioravanti, T.; Franco, G.; Giusto, E.; Ranieri, G.; Corbelletto, D.; Montrucchio, B. Towards An End-To-End Approach For Quantum Principal Component Analysis. In Proceedings of the 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), Bellevue, WA, USA, 17–22 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  65. Salari, V.; Paneru, D.; Saglamyurek, E.; Ghadimi, M.; Abdar, M.; Rezaee, M.; Aslani, M.; Barzanjeh, S.; Karimi, E. Quantum face recognition protocol with ghost imaging. Sci. Rep. 2023, 13, 2401. [Google Scholar] [CrossRef]
  66. Lin, Z.; Liu, H.; Tang, K.; Che, L.; Long, X.; Wang, X.; Fan, Y.a.; Huang, K.; Yang, X.; Xin, T.; et al. Hardware-efficient quantum principal component analysis for medical image recognition. Front. Phys. 2024, 19, 51202. [Google Scholar] [CrossRef]
  67. Gordon, M.; Cerezo, M.; Cincio, L.; Coles, P. Covariance Matrix Preparation for Quantum Principal Component Analysis. PRX Quantum 2022, 3, 030334. [Google Scholar] [CrossRef]
  68. Zhao, J.; Zhang, Y.H.; Shao, C.P.; Wu, Y.C.; Guo, G.C.; Guo, G.P. Building quantum neural networks based on a swap test. Phys. Rev. A 2019, 100, 012334. [Google Scholar] [CrossRef]
  69. Niu, X.F.; Ma, W.P. A novel quantum neural network based on multi-level activation function. Laser Phys. Lett. 2021, 18, 025201. [Google Scholar] [CrossRef]
  70. Cong, I.; Choi, S.; Lukin, M.D. Quantum convolutional neural networks. Nat. Phys. 2019, 15, 1273–1278. [Google Scholar] [CrossRef]
  71. Henderson, M.; Shakya, S.; Pradhan, S.; Cook, T. Quanvolutional Neural Networks: Powering Image Recognition with Quantum Circuits. arXiv 2019, arXiv:1904.04767. [Google Scholar] [CrossRef]
  72. Zhao, Z.; Pozas-Kerstjens, A.; Rebentrost, P.; Wittek, P. Bayesian deep learning on a quantum computer. Quantum Mach. Intell. 2019, 1, 41–51. [Google Scholar] [CrossRef]
  73. Wiebe, N.; Kapoor, A.; Svore, K.M. Quantum Deep Learning. arXiv 2015, arXiv:1412.3489. [Google Scholar] [CrossRef]
  74. Mathur, N.; Landman, J.; Li, Y.Y.; Strahm, M.; Kazdaghli, S.; Prakash, A.; Kerenidis, I. Medical image classification via quantum neural networks. arXiv 2022, arXiv:2109.01831. [Google Scholar] [CrossRef]
  75. Ullah, U.; Garcia-Zapirain, B. Quantum machine learning revolution in healthcare: A systematic review of emerging perspectives and applications. IEEE Access 2024, 12, 11423–11450. [Google Scholar] [CrossRef]
  76. Li, R.Y.; Di Felice, R.; Rohs, R.; Lidar, D.A. Quantum annealing versus classical machine learning applied to a simplified computational biology problem. npj Quantum Inf. 2018, 4, 14. [Google Scholar] [CrossRef] [PubMed]
  77. Ben Geoffrey, A.S. Protein Structure Prediction Using AI and Quantum Computers. 2021. Available online: https://www.researchgate.net/publication/351846097_Protein_structure_prediction_using_AI_and_quantum_computers (accessed on 1 April 2024). [CrossRef]
  78. Xia, W.; Zou, J.; Qiu, X.; Chen, F.; Zhu, B.; Li, C.; Deng, D.L.; Li, X. Configured Quantum Reservoir Computing for Multi-Task Machine Learning. arXiv 2023, arXiv:2303.17629. [Google Scholar] [CrossRef]
  79. Zoufal, C.; Mishmash, R.V.; Sharma, N.; Kumar, N.; Sheshadri, A.; Deshmukh, A.; Ibrahim, N.; Gacon, J.; Woerner, S. Variational quantum algorithm for unconstrained black box binary optimization: Application to feature selection. Quantum 2023, 7, 909. [Google Scholar] [CrossRef]
  80. Herman, D.; Googin, C.; Liu, X.; Sun, Y.; Galda, A.; Safro, I.; Pistoia, M.; Alexeev, Y. Quantum computing for finance. Nat. Rev. Phys. 2023, 5, 450–465. [Google Scholar] [CrossRef]
  81. Mironowicz, P.; Shenoy H., A.; Mandarino, A.; Yilmaz, A.E.; Ankenbrand, T. Applications of Quantum Machine Learning for Quantitative Finance. arXiv 2024, arXiv:2405.10119. [Google Scholar]
  82. Nagano, L.; Miessen, A.; Onodera, T.; Tavernelli, I.; Tacchino, F.; Terashi, K. Quantum data learning for quantum simulations in high-energy physics. arXiv 2023, arXiv:2306.17214. [Google Scholar] [CrossRef]
  83. Di Meglio, A.; Jansen, K.; Tavernelli, I.; Alexandrou, C.; Arunachalam, S.; Bauer, C.W.; Borras, K.; Carrazza, S.; Crippa, A.; Croft, V.; et al. Quantum computing for high-energy physics: State of the art and challenges. PRX Quantum 2024, 5, 037001. [Google Scholar] [CrossRef]
  84. Schuld, M.; Sinayskiy, I.; Petruccione, F. Quantum computing for pattern classification. arXiv 2014, arXiv:1412.3646. [Google Scholar] [CrossRef]
  85. Wang, Z.; Xu, M.; Zhang, Y. Review of Quantum Image Processing. Arch. Comput. Methods Eng. 2021, 29, 737–761. [Google Scholar] [CrossRef]
  86. Miroszewski, A.; Mielczarek, J.; Czelusta, G.; Szczepanek, F.; Grabowski, B.; Le Saux, B.; Nalepa, J. Detecting Clouds in Multispectral Satellite Images Using Quantum-Kernel Support Vector Machines. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 7601–7613. [Google Scholar] [CrossRef]
  87. Koukiou, G.; Anastassopoulos, V. Quantum 3D FFT in Tomography. Appl. Sci. 2023, 13, 4009. [Google Scholar] [CrossRef]
  88. Koukiou, G.; Anastassopoulos, V. Velocity Filtering Using Quantum 3D FFT. Photonics 2023, 10, 483. [Google Scholar] [CrossRef]
  89. Kharsa, R.; Bouridane, A.; Amira, A. Advances in Quantum Machine Learning and Deep learning for image classification: A Survey. Neurocomputing 2023, 560, 126843. [Google Scholar] [CrossRef]
  90. Syed, J.N.; Sharma, S.K.; Wyne, S.; Patwary, M.; Asaduzzaman, M. Quantum Machine Learning for 6G Communication Networks: State-of-the-Art and Vision for the Future. IEEE Access 2019, 7, 46317–46350. [Google Scholar] [CrossRef]
  91. Benedetti, M.; Lloyd, E.; Sack, S.; Fiorentini, M. Parameterized quantum circuits as machine learning models. Quantum Sci. Technol. 2019, 4, 043001. [Google Scholar] [CrossRef]
  92. McClean, J.R.; Boixo, S.; Smelyanskiy, V.N.; Babbush, R.; Neven, H. Barren plateaus in quantum neural network training landscapes. Nat. Commun. 2018, 9, 4812. [Google Scholar] [CrossRef] [PubMed]
  93. Cerezo, M.; Sone, A.; Volkoff, T.; Cincio, L.; Coles, P.J. Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nat. Commun. 2021, 12, 1791. [Google Scholar] [CrossRef] [PubMed]
  94. Pesah, A.; Cerezo, M.; Wang, S.; Volkoff, T.; Sornborger, A.T.; Coles, P.J. Absence of Barren Plateaus in Quantum Convolutional Neural Networks. Phys. Rev. X 2021, 11, 041011. [Google Scholar] [CrossRef]
  95. Grant, E.; Wossnig, L.; Ostaszewski, M.; Benedetti, M. An initialization strategy for addressing barren plateaus in parametrized quantum circuits. Quantum 2019, 3, 214. [Google Scholar] [CrossRef]
  96. Schuld, M.; Killoran, N. Quantum Machine Learning in Feature Hilbert Spaces. Phys. Rev. Lett. 2019, 122, 040504. [Google Scholar] [CrossRef]
  97. Sim, S.; Johnson, P.D.; Aspuru-Guzik, A. Expressibility and Entangling Capability of Parameterized Quantum Circuits for Hybrid Quantum-Classical Algorithms. Adv. Quantum Technol. 2019, 2, 1900070. [Google Scholar] [CrossRef]
  98. Havlíček, V.; Córcoles, A.D.; Temme, K.; Harrow, A.W.; Kandala, A.; Chow, J.M.; Gambetta, J.M. Supervised learning with quantum-enhanced feature spaces. Nature 2019, 567, 209–212. [Google Scholar] [CrossRef] [PubMed]
  99. Abbas, A.; Sutter, D.; Zoufal, C.; Lucchi, A.; Figalli, A.; Woerner, S. The power of quantum neural networks. Nat. Comput. Sci. 2021, 1, 403–409. [Google Scholar] [CrossRef] [PubMed]
  100. Du, Y.; Hsieh, M.H.; Liu, T.; Tao, D. Expressive power of parametrized quantum circuits. Phys. Rev. Res. 2020, 2, 033125, Erratum in Phys. Rev. Res. 2022, 4, 029003. [Google Scholar] [CrossRef]
  101. Coecke, B.; de Felice, G.; Meichanetzidis, K.; Toumi, A. Foundations for Near-Term Quantum Natural Language Processing. arXiv 2020, arXiv:2012.03755. [Google Scholar] [CrossRef]
  102. Gyongyosi, L.; Imre, S. Theory of quantum gravity information processing. Quantum Eng. 2019, 1, 23. [Google Scholar] [CrossRef]
  103. Schuld, M.; Bocharov, A.; Svore, K.M.; Wiebe, N. Circuit-centric quantum classifiers. Phys. Rev. A 2020, 101, 032308. [Google Scholar] [CrossRef]
  104. Orús, R.; Mugel, S.; Lizaso, E. Quantum computing for finance: Overview and prospects. Rev. Phys. 2019, 4, 100028. [Google Scholar] [CrossRef]
Figure 1. Circuit representation of (a) Pauli X, (b) Pauli Y, (c) Pauli Z and (d) H gates.
Figure 2. Circuit representation of (a) a CNOT gate, where the horizontal lines represent qubit channels. The dot on the control qubit’s line indicates the control, and the plus sign inside a circle on the target qubit’s line denotes the target operation. (b) SWAP gate.
Figure 3. Circuit representation of the Fredkin gate.
Figure 4. Quantum circuit for generating Bell states.
Figure 5. Graphical representation of the gate-based model.
Figure 6. The quantum circuit of the phase oracle property.
Figure 7. The quantum circuit of the Deutsch algorithm [39].
Figure 8. The quantum circuit of the Deutsch–Jozsa algorithm [39].
Figure 9. The quantum circuit of the Bernstein–Vazirani algorithm [39].
Figure 10. The quantum circuit of Simon’s algorithm.
Figure 11. The quantum circuit of the quantum Fourier transform algorithm.
Figure 12. The quantum circuit for calculating the QFT of number 5.
Figure 13. The quantum circuit of the phase kickback property.
Figure 14. The quantum circuit of the first stage of the QPE algorithm.
Figure 15. The general quantum circuit of the QPE algorithm [39].
Figure 16. The quantum circuit of the order-finding problem [39].
Figure 17. Schematic description of Grover’s algorithm.
Figure 18. The quantum circuit of Grover’s algorithm.
Figure 19. The quantum circuit of Grover’s iteration.
Figure 20. (a) A reflection about the line L1 parallel to |A0⟩; (b) a reflection about the line L2 parallel to |u⟩; (c) the resulting rotation is by twice the angle between the two lines of reflection.
Figure 21. The quantum circuit of the HHL algorithm.
Figure 22. A representation of QML algorithms’ basic framework.
Figure 23. Four approaches to integrating QC with ML [47].
Figure 24. Diagram depicting binary classification with support vectors.
Figure 25. The quantum circuit of the swap test subroutine.
Figure 26. Quantum circuit of qNN.
Table 1. Truth table for the CNOT gate.
Control (Input)    Target (Input)    Control (Output)    Target (Output)
|0⟩                |0⟩               |0⟩                 |0⟩
|0⟩                |1⟩               |0⟩                 |1⟩
|1⟩                |0⟩               |1⟩                 |1⟩
|1⟩                |1⟩               |1⟩                 |0⟩
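For readers who want to check Table 1 directly, the following minimal Python/Qiskit sketch (an illustration added here, not code from the cited works; it assumes the qiskit package is installed) prepares each basis input and applies a single CNOT, then reads the dominant outcome off an exact statevector simulation:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Check each row of the CNOT truth table: control is qubit 0, target is qubit 1.
for control, target in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    qc = QuantumCircuit(2)
    if control:
        qc.x(0)                      # prepare the control input
    if target:
        qc.x(1)                      # prepare the target input
    qc.cx(0, 1)                      # CNOT: flips the target when the control is |1>
    probs = Statevector.from_instruction(qc).probabilities_dict()
    outcome = max(probs, key=probs.get)   # Qiskit bitstrings are ordered (qubit 1, qubit 0)
    print(f"in: control={control}, target={target} -> out: control={outcome[1]}, target={outcome[0]}")
```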
Table 2. Truth table for the SWAP gate.
Input Qubit 1    Input Qubit 2    Output Qubit 1    Output Qubit 2
|0⟩              |0⟩              |0⟩               |0⟩
|0⟩              |1⟩              |1⟩               |0⟩
|1⟩              |0⟩              |0⟩               |1⟩
|1⟩              |1⟩              |1⟩               |1⟩
Table 3. Truth table for the Fredkin gate.
Control    Target 1    Target 2    Output Control    Output Target 1    Output Target 2
0          0           0           0                 0                  0
0          0           1           0                 0                  1
0          1           0           0                 1                  0
0          1           1           0                 1                  1
1          0           0           1                 0                  0
1          0           1           1                 1                  0
1          1           0           1                 0                  1
1          1           1           1                 1                  1
Table 4. Bell state circuit computation.
Input    Intermediate State       Bell State
|00⟩     (|00⟩ + |10⟩)/√2         (|00⟩ + |11⟩)/√2
|01⟩     (|01⟩ + |11⟩)/√2         (|01⟩ + |10⟩)/√2
|10⟩     (|00⟩ - |10⟩)/√2         (|00⟩ - |11⟩)/√2
|11⟩     (|01⟩ - |11⟩)/√2         (|01⟩ - |10⟩)/√2
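The first row of Table 4 can be reproduced with the Bell-state circuit of Figure 4 in a few lines of Qiskit (a simulation-only sketch added here for illustration). Because Qiskit prints bitstrings with qubit 0 rightmost, qubit 1 plays the role of the table's first (leftmost) qubit:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(1)                                               # intermediate state (|00> + |10>)/sqrt(2)
print(Statevector.from_instruction(qc).probabilities_dict())   # {'00': 0.5, '10': 0.5}
qc.cx(1, 0)                                           # entangle: Bell state (|00> + |11>)/sqrt(2)
print(Statevector.from_instruction(qc).probabilities_dict())   # {'00': 0.5, '11': 0.5}
```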
Table 5. Short description of quantum algorithms.
Deutsch’s Algorithm: Determines whether a given function is constant or balanced using a single query to the oracle.
Deutsch–Jozsa Algorithm: Extends Deutsch’s algorithm to multiple inputs, identifying whether a function is constant or balanced with just one query, showcasing exponential speedup.
Bernstein–Vazirani Algorithm: Finds a hidden binary string by querying an oracle only once, providing a polynomial speedup over classical algorithms.
Simon’s Algorithm: Solves the hidden string problem for functions satisfying f(x) = f(x ⊕ s), demonstrating exponential speedup compared to classical methods.
Quantum Fourier Transform: Computes the discrete Fourier transform of a quantum state efficiently, essential for algorithms like Shor’s.
Phase Estimation Algorithm: Estimates the phase of an eigenvalue of a unitary operator, playing a crucial role in many quantum algorithms, including Shor’s.
Shor’s Algorithm: Efficiently factors large integers using the quantum Fourier transform, posing a threat to classical cryptographic systems.
Grover’s Algorithm: Provides a quadratic speedup for searching an unstructured database, allowing the identification of a marked item in O(√N) time.
HHL Algorithm: Provides an exponential speedup for solving systems of linear equations.
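To make Grover's quadratic speedup concrete, here is a minimal two-qubit Qiskit sketch (a simulation-only illustration added here, not code from the cited works): the oracle marks |11⟩ with a controlled-Z phase flip, and a single Grover iteration, oracle followed by the diffusion operator, drives the probability of the marked state to 1 for N = 4.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 2
qc = QuantumCircuit(n)
qc.h(range(n))          # uniform superposition over all 2^n basis states

# Oracle: phase-flip the marked state |11> with a controlled-Z
qc.cz(0, 1)

# Diffusion operator (inversion about the mean): H, X, CZ, X, H
qc.h(range(n))
qc.x(range(n))
qc.cz(0, 1)
qc.x(range(n))
qc.h(range(n))

print(Statevector.from_instruction(qc).probabilities_dict())   # '11' appears with probability ~1
```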
Table 6. Truth table of function f(x).
x      f(x)
000    101
001    010
010    100
011    111
100    101
101    010
110    100
111    111
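This truth table has the two-to-one structure assumed by Simon's algorithm. The short Python check below (an illustrative sketch added here; the hidden string s = 100 is read off the table rather than stated in the text) verifies that f(x) = f(x ⊕ s) holds for every input:

```python
# The table's function f, written as a dict from 3-bit inputs to 3-bit outputs.
f = {
    "000": "101", "001": "010", "010": "100", "011": "111",
    "100": "101", "101": "010", "110": "100", "111": "111",
}

def xor(a, b):
    """Bitwise XOR of two 3-bit strings."""
    return format(int(a, 2) ^ int(b, 2), "03b")

s = "100"   # hypothetical hidden string inferred from the collisions in the table
assert all(f[x] == f[xor(x, s)] for x in f)
print("f(x) = f(x XOR s) holds for s =", s)
```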
Table 7. Overview of quantum SVM variants.

Author: M. Senekane, B. M. Taele [50]
Description: This study introduces a quantum SVM aimed at predicting solar irradiation, using data from the Digital Technology Group (DTG) Weather Station at Cambridge University. The proposed algorithm incorporates the HHL algorithm as a quantum subroutine. The original problem is transformed into a linear system of equations, which is then solved using the HHL method. The algorithm has been implemented in Python.
Application: Solar irradiation prediction
Complexity: Not mentioned

Author: Yuan et al. [51]
Description: This study presents a quantum support vector algorithm designed to detect flow separation in aeronautic applications. Beyond binary classification, a multiclass quantum SVM was developed to classify various wing angles of attack. The quantum algorithms showed improvements in accuracy of 11.1% and 17.9%, respectively, compared to the classical SVM. These algorithms are based on a quantum annealing model and have been implemented on the Advantage 4.1 system from D-Wave.
Application: Aerodynamic classification
Complexity: Not mentioned

Author: Shubham Vashisth et al. [52]
Description: This study introduces a quantum SVM algorithm designed for binary classification in the diagnosis of malignant breast cancer. The proposed algorithm was implemented using the existing components of the quantum SVM from the Qiskit library and its performance was assessed on the IBM Quantum Experience platform.
Application: Breast cancer diagnosis
Complexity: O(log(NM))

Author: Yang et al. [53]
Description: This study presents a new quantum SVM algorithm that improves classification accuracy for OCR and Iris datasets. The proposed algorithm is based on the HHL algorithm and its performance was evaluated on the IBMQX2 quantum computer.
Application: Classification
Complexity: O(log(NM))
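A common thread in these variants is a kernel built from overlaps of quantum feature states. The sketch below illustrates that idea only; the Ry feature map, the toy data, and the use of scikit-learn's SVC with a precomputed kernel are choices made here for illustration and are not details of any cited implementation.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
from sklearn.svm import SVC

# Hypothetical toy data: one feature per sample, two classes.
X = np.array([0.1, 0.4, 2.6, 3.0])
y = np.array([0, 0, 1, 1])

def embed(x):
    """Encode a scalar feature as a single-qubit state via an Ry rotation (a simple feature map)."""
    qc = QuantumCircuit(1)
    qc.ry(x, 0)
    return Statevector.from_instruction(qc)

# Quantum kernel matrix K_ij = |<psi(x_i)|psi(x_j)>|^2, computed exactly by simulation.
states = [embed(x) for x in X]
K = np.array([[abs(np.vdot(a.data, b.data)) ** 2 for b in states] for a in states])

clf = SVC(kernel="precomputed").fit(K, y)   # classical SVM trained on the quantum kernel
print(clf.predict(K))                        # should reproduce the training labels
```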
Table 8. Overview of quantum k-means algorithm variants.

Author: Changqing Gong et al. [55]
Description: The proposed quantum k-means algorithm, which leverages trusted servers in quantum cloud computing, is designed to simplify complex quantum computations and reduce the frequency of quantum state superposition and de-superposition, utilizing a quantum cloud server. The proposed algorithm incorporates key subroutines such as SwapTest and GroverOptim. This algorithm has been implemented using IBM’s Qiskit platform.
Application: Security
Complexity: O(M log(n) k t)

Author: S.S. Kavitha, N. Kaulgud [56]
Description: This work proposes a quantum k-means algorithm, employing a quantum circuit approach to compute the distance between centroids and data points for a heart disease dataset. Subroutines such as SwapTest and GroverOptim are integral to the proposed algorithm. The implementation was carried out using IBM’s Qiskit platform. A comparison between the classical and quantum k-means approaches was conducted to evaluate performance. The results indicate that the quantum k-means algorithm processes data faster than classical machines. Additionally, the quantum version outperforms its classical counterpart in terms of accuracy, precision, sensitivity, specificity, F1-score, and processing time when predicting heart disease.
Application: Heart disease detection
Complexity: O(LNK)

Author: DiAdamo et al. [57]
Description: A general, competitive, and parallelized version of the quantum k-means clustering algorithm is proposed to address challenges posed by noisy quantum hardware. This approach is applied to a real-world energy grid clustering scenario using real data from the German electricity grid. The new method significantly enhances performance, improving the balanced accuracy of the standard quantum k-means clustering by 67.8% compared to the labeling of the classical algorithm. The algorithm includes key subroutines such as SwapTest and GroverOptim, and is implemented using IBM’s Qiskit platform.
Application: Energy grid classification
Complexity: O(poly(n))

Author: K. Benlamine et al. [58]
Description: This work presents a comprehensive analysis of three different methods for estimating distance in quantum prototype-based clustering algorithms. The proposed algorithm is adaptable to all three methods. The results indicate that while the classical version of k-means operates in polynomial time, the quantum version achieves logarithmic time complexity, particularly for large datasets. The datasets utilized include Iris, Wine, and Breast Cancer from the UCI Machine Learning Repository. SwapTest and GroverOptim are essential components of this implementation. The platform used for implementation is not specified.
Application: Clustering
Complexity: O(log n)

Author: H. Ohno [59]
Description: This work introduces a quantum subroutine for the quantum k-means algorithm that leverages quantum entanglement and removes the need for explicit centroid calculations. Indeed, the subroutine estimates the Euclidean distance between the data points and the cluster centroids based on the cluster labels. The proposed k-means algorithm is evaluated on three datasets: synthetic, Iris, and image. The algorithm is implemented using IBM’s Qiskit platform.
Application: Image recognition
Complexity: O(MK log₂K)

Author: Xiao Shi et al. [60]
Description: The work presents a quantum-inspired k-means clustering algorithm that begins by mapping classical data into quantum states, which are represented as matrix product states. The algorithm then minimizes the loss function using the variational matrix product states method in an expanded space, leveraging the power of quantum-inspired techniques to efficiently handle the clustering process. This algorithm does not rely on Lloyd’s algorithm, with key components including SwapTest and GroverOptim. The proposed algorithm is applied to the Breast, Ionosphere, Wine, Yeast, and E. coli datasets, demonstrating higher prediction accuracies compared to the classical k-means algorithm. The platform used for implementation is not specified.
Application: Clustering
Complexity: Not mentioned

Author: J. Chen, X. Qi, L. Chen et al. [61]
Description: This work proposes a k-means algorithm called QALO-K, which combines k-means with a quantum-inspired ant lion optimization approach. It was tested on several standard datasets from the UCI Machine Learning Repository, including Iris, Glass, Wine, Cancer, Vowel, CMC, and Vehicle. The experimental results demonstrate that the detection rate and accuracy of QALO-K are superior to the classical k-means algorithm. The platform used for implementation is MATLAB.
Application: Intrusion detection
Complexity: O(N⁵)
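Several of the variants in Tables 7 and 8 rely on the SwapTest subroutine (Figure 25) to estimate overlaps or distances between states. The Qiskit sketch below is a minimal, simulation-only illustration of that subroutine; the choice of states |ψ⟩ = |0⟩ and |φ⟩ = cos(t)|0⟩ + sin(t)|1⟩ is arbitrary and made here only to check the result against cos²(t).

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

t = 0.4
qc = QuantumCircuit(3)   # qubit 0: ancilla; qubits 1 and 2: the two data registers
qc.ry(2 * t, 2)          # |phi> = cos(t)|0> + sin(t)|1> on qubit 2; qubit 1 stays |0> as |psi>
qc.h(0)                  # ancilla into superposition
qc.cswap(0, 1, 2)        # controlled-SWAP between the two data registers
qc.h(0)                  # interfere the ancilla

# P(ancilla = 0) = (1 + |<psi|phi>|^2) / 2, so the overlap is read off the ancilla statistics.
p0 = Statevector.from_instruction(qc).probabilities([0])[0]
overlap_sq = 2 * p0 - 1
print(overlap_sq, np.cos(t) ** 2)   # the two numbers should agree
```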
Table 9. Overview of qPCA variants.

Author: Martin et al. [63]
Description: The proposed quantum PCA introduces a model for pricing interest rate financial derivatives. It incorporates the QPE algorithm as a quantum subroutine. This algorithm is implemented using IBM’s Qiskit platform.
Application: Finance
Complexity: Not mentioned

Author: Dri et al. [64]
Description: The proposed quantum PCA is an end-to-end implementation of the algorithm designed for managing interest rate risk. It incorporates the QPE algorithm as a quantum subroutine. This algorithm is implemented using IBM’s Qiskit platform.
Application: Finance
Complexity: Not mentioned

Author: Salari et al. [65]
Description: The proposed quantum PCA is used for pattern recognition. The proposed algorithm is based on the QPE algorithm. The platform used for implementation is not specified.
Application: Pattern recognition
Complexity: O(N log N)

Author: Zidong Lin et al. [66]
Description: The proposed quantum PCA is designed for classifying thoracic CT images from COVID-19 patients. The proposed algorithm is based on the QPE algorithm. This algorithm is implemented using an NMR quantum processor.
Application: Classification of thoracic CT images from COVID-19 patients
Complexity: O((log d)²)

Author: Max Hunter Gordon et al. [67]
Description: The proposed quantum PCA is designed for quantum datasets and is applied to a set of molecular ground states corresponding to different interatomic distances. The algorithm accurately compresses these molecular ground states into a low-dimensional subspace. It is based on the QPE algorithm. However, this algorithm has not yet been implemented on real quantum hardware.
Application: Dimensionality reduction
Complexity: O(N²)
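For orientation, the quantity these qPCA variants target is the eigendecomposition of a covariance matrix. The short NumPy sketch below shows that classical baseline on synthetic data; it is an illustration added here and is not part of any cited implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # toy dataset: 200 samples, 4 features
X -= X.mean(axis=0)                       # center the data
cov = X.T @ X / (len(X) - 1)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # principal components = eigenvectors of cov
order = np.argsort(eigvals)[::-1]
print("explained variance:", eigvals[order])
top2 = eigvecs[:, order[:2]]              # keep the two leading components
X_reduced = X @ top2
print(X_reduced.shape)                    # (200, 2)
```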
Table 10. Overview of quantum deep learning variants.

Author: Zhao et al. [68]
Description: The proposed work introduces a quantum feedforward neural network for classification purposes. It incorporates the swap test as a quantum subroutine. The platform used for implementation was not specified.

Author: Tacchino et al. [14]
Description: The proposed work introduces a quantum version of a perceptron to classify simple patterns, such as distinguishing vertical or horizontal lines among various possible inputs. This algorithm is implemented using IBM’s Qiskit platform.

Author: X.-F. Niu and W.-P. Ma [69]
Description: The proposed work introduces a quantum neural network based on a multi-level activation function, designed for lie detection. This algorithm is implemented using MATLAB.

Author: Cong et al. [70]
Description: The proposed work introduces a quantum convolutional neural network for quantum phase recognition. The algorithm is based on quantum circuits and unitary transformations (quantum gates). Additionally, the work presents a protocol for implementing the algorithm using neutral Rydberg atoms.

Author: Henderson et al. [71]
Description: The proposed work introduces a quantum convolutional neural network for image classification. The algorithm is based on quantum circuits and unitary transformations (quantum gates). This algorithm is implemented using the QxBranch Quantum Computer Simulation System.

Author: Zhao et al. [72]
Description: The proposed work introduces a quantum version of the Bayesian technique, incorporating quantum matrix inversion as a quantum subroutine. This algorithm is implemented using IBM’s and Rigetti’s platforms.

Author: Wiebe et al. [73]
Description: The proposed work explores the integration of quantum computing into deep learning tasks. It introduces quantum algorithms for accelerating training processes, particularly focusing on the use of quantum Boltzmann machines for unsupervised learning and quantum-enhanced optimization techniques for deep neural networks. The paper suggests that quantum speedups can improve efficiency in gradient descent and other optimization methods. The implementation platform is not explicitly mentioned in the paper.
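As a minimal illustration of the parameterized-circuit building block that several of these quantum neural network variants share, the sketch below encodes a feature with an Ry rotation, applies a trainable rotation, and reads out the expectation value of Z as the activation. The single-qubit layout, parameter names, and readout observable are choices made here for illustration, not a reproduction of any cited architecture.

```python
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import Statevector, SparsePauliOp

# A single-qubit "quantum neuron": data-encoding rotation followed by a trainable rotation.
x, theta = Parameter("x"), Parameter("theta")
qc = QuantumCircuit(1)
qc.ry(x, 0)       # data encoding
qc.ry(theta, 0)   # trainable weight

def activation(x_val, theta_val):
    """Bind the parameters and return <Z> of the resulting state (a value in [-1, 1])."""
    bound = qc.assign_parameters({x: x_val, theta: theta_val})
    return Statevector.from_instruction(bound).expectation_value(SparsePauliOp("Z")).real

print(activation(0.7, -0.2))
```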