Article

Quantum Tensor DBMS and Quantum Gantt Charts: Towards Exponentially Faster Earth Data Engineering

by
Ramon Antonio Rodriges Zalipynis
School of Software Engineering, HSE University, 11 Pokrovsky Bulvar, 109028 Moscow, Russia
Earth 2024, 5(3), 491-547; https://doi.org/10.3390/earth5030027
Submission received: 16 July 2024 / Revised: 22 August 2024 / Accepted: 28 August 2024 / Published: 14 September 2024

Abstract

Earth data is essential for global environmental studies. Many Earth data types are naturally modeled by multidimensional arrays (tensors). Array (Tensor) DBMSs strive to be the best systems for tensor-related workloads and can be especially helpful for Earth data engineering, which takes up to 80% of the effort in Earth data science. We present a new quantum Array (Tensor) DBMS data model and new quantum approaches that rely on the upcoming quantum memory and demonstrate exponential speedups when applied to many of the toughest Array (Tensor) DBMS challenges stipulated by classical computing and real-world Earth data use-cases. We also propose new types of charts: Quantum Gantt (QGantt) Charts and Quantum Network Diagrams (QND). QGantt charts clearly illustrate how multiple operations occur simultaneously across different data items and what the input/output data dependencies between these operations are. Unlike traditional Gantt charts, which typically track project timelines and resources, QGantt charts integrate specific data items and operations over time. A Quantum Network Diagram combines several QGantt charts to show dependencies between multistage operations, including their inputs/outputs. By using a static format, QGantt charts and Quantum Network Diagrams allow users to explore complex processes at their own pace, which can be beneficial for educational and R&D purposes.

1. Introduction

Earth data is essential for understanding global environmental changes. However, researchers and practitioners typically spend up to 80% of their time on data engineering, including data collection, organization, management, cleaning, transformation, and other work required to make data usable for subsequent modeling, analysis, ML/AI, and other purposes [1,2]. In addition, Earth data science faces ever-increasing Earth data volumes. For example, ECMWF (European Centre for Medium-Range Weather Forecasts) stores 510 PB and ingests 287 TB/day [3], while about 203 TiB of products from the Sentinels, a family of European satellite missions, are disseminated daily, with total downloads of 80.5 PiB/year [4]. Therefore, the scalability and performance of Earth data engineering tools are essential for enabling faster hypothesis testing and valuable insight discovery.
An important observation is that a multidimensional array (tensor) is a natural representation for a wide variety of Earth data types: diverse grids, meshes, swaths, and other constructs [5,6]. At the same time, a multidimensional array (tensor) is a native data model of Array (Tensor) DBMSs (Data Base Management Systems) [7,8,9] and a natural choice for fundamental datasets in countless domains: physics [7,10,11], geo-, bio-informatics [12,13,14,15], climate science [5,6,16,17], medicine [18,19,20], astronomy [21,22,23,24], e-commerce [25,26,27], and machine learning [28,29,30,31], to name a few.
Big Earth Data is one of the traditional and well-developed areas of Array (Tensor) DBMS applications [17,32,33,34], inspiring and attracting large multi-disciplinary teams building Cloud solutions and Earth Data Cubes using Array (Tensor) DBMSs  [35,36,37]. Array (Tensor) DBMSs have “natural affinity for remote sensing and geospatial data” [33], which are crucial for global ecological studies. Array (Tensor) DBMSs operate on personal computers, in the Cloud [32], and are even placed into satellite platforms for on-board processing of Big Earth Data [38]. Array (Tensor) DBMSs are also utilized in Precision Agriculture (e.g., detect pests, manage vegetation cycles, predict crop yield) [39,40,41], Forest Monitoring [42,43], Urban Planning [44,45], Water Management [46,47], Air Quality Control [48,49], and other environmental applications based on Big Earth Data [50,51,52,53].
Array (Tensor) DBMSs are relatively young systems that are gaining increasing attention and aim to be the best for tensor-oriented workloads, addressing the challenges of big array storage [7,12,54,55,56], dissemination [9,57], management [58,59,60], processing [61], visualization [62,63,64], and even physical-world simulations [65,66]. We elaborate on the naming of Array (Tensor) DBMSs in Section 4.2.
Big array data is a primary source driving recent energy and excitement in array-oriented R&D: ChronosDB [17,57], SciDB [67], TileDB [56], ArrayStore [24], Google Earth Engine [68,69], SAGA  [70], EarthServer  [71], DataCube  [34], GeoTrellis  [72], Dask  [73], Microsoft Planetary Computer  [74], and other array systems appeared within just around a decade [13,17,28]. Impressive array-focused efforts are not surprising. For example, Google Earth Engine alone provides over 90 PB of data [75].
Efficient big array data management is challenging, so array R&D is blossoming with specialized, array-tailored techniques. Considerable effort has been devoted to array storage layouts and compression schemes to improve I/O and compute time [9,14,54,56,67,76,77]. Arrays are also often stored out-db, so the problem is complicated by the need to optimize in-situ queries that use arrays in their native formats [17,70,78,79,80,81]. Indexing techniques, array joins, and tunable queries also pose challenging problems that can result in long runtimes [12,59,82]. Similarly, important array operations, including top-k queries, view maintenance, and caching, use optimization heuristics [58,60,83]. Demanding applications like fast interactive visualization [57,63,64] and machine learning [28,29,30] directly impact user experience if not properly optimized. Array (Tensor) DBMSs can also run simulations, but must explore diverse scenarios to derive realistic statistics [65,66]. The majority of efforts are directed towards reducing runtime, memory footprint, and other quantitative performance indicators. However, Array (Tensor) DBMSs face fundamental limitations that prevent further dramatic speedups using classical computing.
Quantum Computing is becoming a game-changer with its rapid development. Quantum computers are readily accessible via the commercial Clouds of Microsoft [84], Amazon [85], IBM [86], Google [87], and others. Such a computer has about 64–1200 qubits (and up to 5000 annealing qubits) [88,89,90,91,92]. This year, D-Wave deployed its quantum computer exceeding 1000 qubits [92], while IBM targets a 100,000-qubit computer within just 10 years [93]. Of course, quantum computers are not yet complete, but future breakthroughs are undeniable. For example, quantum memory is not yet widely available, but we can rely on its predicted properties to start designing a full-fledged Quantum Array (Tensor) DBMS early.
The central logical part of a DBMS is how it can operate on data. Quantum Random Access Memory (QRAM) and Random Access Quantum Memory (RAQM) store classical and quantum data respectively. QRAM and RAQM are realistic, under active development, and expected to appear in quantum machines. Quantum memory is directly focused on array I/O and promises unprecedented capabilities to Quantum Array (Tensor) DBMSs.
We take the unique opportunity and pioneer the synergy of quantum computing, quantum memory, and Array (Tensor) DBMSs to start shaping a Quantum Array (Tensor) DBMS that will run, after light touches, on the upcoming real-world hardware. We demonstrate that challenging Array (Tensor) DBMS problems can be naturally or intrinsically optimized due to inherent quantum parallelism. We show how to design an array (tensor) data model and array (tensor) techniques that, with respective quantum memory, demonstrate exponential speedups compared to their classical counterparts by providing strict formal definitions of quantum array (tensor) operations.
Since the foundations of quantum computing [94,95] and the first promising algorithms [96,97,98], great progress has been made in the Quantum Fourier Transform [99,100], Swarm Intelligence [101,102], Graph Algorithms [103,104], Machine Learning [105,106], and many other areas [107,108,109,110,111,112]. However, quantum efforts related to database management systems are still scarce and include schema matching [113], machine learning [106,114], searching [115,116], query optimization [117,118,119], and transaction scheduling [120].
We are the first to use quantum computing for Array (Tensor) DBMSs, covering a large number of acute challenges and uncertainties and holistically describing our Quantum Array (Tensor) DBMS approaches. Hence, the problem (challenge) is how to efficiently utilize quantum memory and quantum computing in Quantum Array (Tensor) DBMSs and, similarly, how to efficiently tackle classical Array (Tensor) DBMS challenges using them. We make the first steps and present the first solutions to the challenges of designing efficient operations for Quantum Array (Tensor) DBMSs. Quantum memory and quantum computing are extremely helpful for tackling Array (Tensor) DBMS challenges: quantum memory is array-tailored and seems to be an excellent tool for Array (Tensor) DBMSs. However, quantum memory and computing have their own limitations, thus posing new challenges for Quantum Array (Tensor) DBMSs. Hence, we also discuss how to tackle these new challenges.
As the “quantum” perspective is attractive, the role of a holistic and insightful architecture covering key Quantum Array (Tensor) DBMS aspects and its future applications in Earth science is indispensable. It can increase awareness about the promising and exciting R&D direction, attract new researchers, as well as inspire future work. It is also a strong motivation for sustainable and focused development towards gaining practical benefits from Array (Tensor) DBMSs. Earth science and global ecological studies, as a consequence, can significantly benefit from Quantum Array (Tensor) DBMS in terms of faster Earth data engineering, enabling an improved user experience via accelerated hypothesis testing and knowledge discovery for understanding global environmental changes.
The article is structured as follows. First, we recall important quantum computing theory, including quantum memory and quantum visualization techniques, in Section 2. Next, we present our novel type of chart for quantum computing, QGantt charts, in Section 3. Afterwards, we introduce our algorithms for core quantum array (tensor) operations and their applications to Earth data engineering, Section 4. Along the way, we showcase the benefits of using quantum computing for array operations, illustrating them with QGantt charts to build intuition and demonstrate the power of QGantt charts. Section 5 presents Quantum Network Diagrams (QNDs), additional Quantum Array (Tensor) DBMS techniques, and alternative ways to visualize QGantt charts and QNDs. Section 6 provides challenges and opportunities with an R&D roadmap for developing Quantum Array (Tensor) DBMSs and their applications. We finish with concluding remarks in Section 7.

2. Quantum Computing: Preliminaries

“If [quantum theory] is correct, it signifies the end of physics as a science”
—Albert Einstein, Nobel laureate [121]
We showcase fundamental capabilities, peculiarities, and limitations of quantum computing relevant to Array (Tensor) DBMSs. Please, refer to Figure 1 and Table 1 for the summary of notations.

2.1. Mathematical Model

Qubit is the basic unit of information in quantum computing. A pure qubit state can be modeled as a point on the Bloch sphere or as a unit column vector $[\alpha\ \beta]^T$, denoted by $|q\rangle$ in the Dirac notation, Figure 1. A mixed qubit state is a statistical ensemble of pure states, a point inside the Bloch sphere.
Figure 1. Bloch sphere and a geometrical visualization of qubit $|q\rangle = \alpha|0\rangle + \beta|1\rangle = \cos(\theta/2)|0\rangle + e^{i\varphi}\sin(\theta/2)|1\rangle$, where $|\alpha|^2 + |\beta|^2 = 1$, $\alpha, \beta \in \mathbb{C}$, $(|0\rangle, |1\rangle)$ form a computational basis, $\varphi \in [0, 2\pi]$ and $\theta \in [0, \pi]$, so $\theta$ is halved in computations.
States $|0\rangle$ and $|1\rangle$ correspond to the classical 0 and 1 bit states. Any other pure state is called a superposition of $|0\rangle$ and $|1\rangle$, yielding more states than a classical bit, but not infinitely many in practice: points that are too close can be indistinguishable, Figure 1.
For a pure qubit state, measurement and all unitary transformations are valid operations. Measurement destroys the superposition and yields 0 or 1 with probabilities $|\alpha|^2$ and $|\beta|^2$ respectively. It is impossible to obtain $\alpha$ or $\beta$, or to copy them, for an arbitrary, unknown $|q\rangle$ (no-cloning theorem) [122].
For example, consider applying the Hadamard operator H to $|0\rangle$, Table 1 and Figure 1:
$H|0\rangle = H\begin{bmatrix}1\\0\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\1 & -1\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix} = |+\rangle$
Now, $\alpha = \beta = 1/\sqrt{2}$ for $|+\rangle$. Hence, measuring $|+\rangle$ yields classical 0 or 1, each with probability 1/2. This is a basic quantum random number generation algorithm with a relatively strong source of entropy. Notations $|+\rangle$ and $|-\rangle$ are reserved for $H|0\rangle$ and $H|1\rangle$ respectively.
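To make this concrete, the following minimal NumPy sketch (our own illustration, not code from the article) simulates $H|0\rangle$ on a statevector and samples one measurement, reproducing the quantum random number generation just described.

```python
import numpy as np

# Minimal statevector sketch of H|0> and its measurement statistics.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])            # |0> as a column vector [alpha beta]^T

plus = H @ ket0                    # |+> = (|0> + |1>)/sqrt(2)
probs = np.abs(plus) ** 2          # |alpha|^2, |beta|^2 -> [0.5, 0.5]

rng = np.random.default_rng()
bit = rng.choice([0, 1], p=probs)  # one "measurement": classical 0 or 1
print(plus, probs, bit)
```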
A quantum transformation must be reversible, so a unitary matrix U defines a unitary operation iff $U^\dagger U = U U^\dagger = I$, i.e., $U^{-1} = U^\dagger$. A universal set of quantum operators (gates) is able not only to approximate any valid transformation (e.g., a rotation or reflection of $|q\rangle$, Figure 1) with a finite set of gates, but also to construct a circuit that cannot be simulated in polynomial time on a probabilistic classical computer (Gottesman–Knill theorem) [123].
One of the popular universal quantum gate sets is {CNOT, H, T}. CNOT (Controlled NOT) operates on 2 qubits. It flips the second qubit (the controlled qubit) iff the first qubit (the control qubit) is $|1\rangle$. Pauli operators X, Y, Z rotate $|q\rangle$ by $\pi$ around $\hat{x}$, $\hat{y}$, $\hat{z}$ respectively, Figure 1.
A quantum register $|r\rangle$ consists of n qubits. The power of quantum parallelism becomes apparent when an operation $U_n$ is applied to all $2^n$ possible bit combinations simultaneously: $U_n H^{\otimes n}|0\rangle^{\otimes n} = |r\rangle$. Here, $H^{\otimes n}|0\rangle^{\otimes n}$ yields a superposition of $2^n$ values:
$\frac{1}{\sqrt{2^n}}\left(|0\rangle + |1\rangle + |2\rangle + |3\rangle + \cdots + |2^n - 1\rangle\right)$
where q in $|q\rangle$ can be a decimal number, Table 1. For example, for $n = 4$ the decimal form $3_{10}$ equals the binary form $0011_2$, so $|3\rangle = |0011\rangle = |0\rangle|0\rangle|1\rangle|1\rangle$, and we can write $H^{\otimes 4}|0\rangle^{\otimes 4}$ as
$\frac{1}{\sqrt{2^4}}\left(|0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + \cdots + |1111\rangle\right)$
Although $U_n$ can be applied to all $2^n$ values at once, the measurement of the resulting register $|r\rangle$ will yield only a single value $v \in [0, 2^n) \cap \mathbb{Z}$, with probability $|\langle v|r\rangle|^2$. This is why many quantum algorithms resort to diverse techniques that maximize the probability of obtaining a certain value, e.g., the well-known Grover search algorithm [98].
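The following hedged NumPy sketch builds $H^{\otimes n}|0\rangle^{\otimes n}$ on a classical simulator and samples a single value $v$ with probability $|\langle v|r\rangle|^2$; it is illustrative only and, of course, requires exponential classical resources.

```python
import numpy as np
from functools import reduce

# Sketch of H^{(x)n}|0>^{(x)n}: a uniform superposition of 2^n values,
# and a single measurement yielding v with probability |<v|r>|^2.
n = 4
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])

r = reduce(np.kron, [H @ ket0] * n)      # 2^n amplitudes, all equal to 1/sqrt(2^n)
probs = np.abs(r) ** 2                   # uniform: each value has probability 1/2^n

v = np.random.default_rng().choice(2**n, p=probs)
print(len(r), r[:4], v)                  # 16 amplitudes; v is in [0, 16)
```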
To form a general intuition of what is “possible” with quantum computing and what is not, as well as to support the intuition with a solid theoretical footing, we should refer to complexity theory. Recall the P and NP complexity classes of classical computing. Quantum computing has its own complexity classes; a noteworthy class is BQP (polynomial-time quantum algorithms with an error probability of at most 1/3 for all instances). To date, $\mathrm{P} \subseteq \mathrm{BQP}$, but it is conjectured and not yet proven that $\mathrm{BQP} \not\subseteq \mathrm{NP}$. More details about quantum complexity classes are in [124]. Formally speaking, quantum computing does not target problems that cannot be solved with classical computing at all: an exponential-runtime algorithm or a polynomial-time heuristic can also produce a solution. Another aspect is how a problem can be solved. For example, quantum mechanics makes it possible to naturally work with sparse arrays, as presented in Section 4.5.
In addition, quantum machines promise to solve certain problems much faster than classical machines, for example, with exponential speedups. A typical example is Shor’s algorithm, which can factor large composite integers in polynomial time, while the best known classical algorithms are super-polynomial [125]. Hence, the integer factorization problem is in BQP.
Therefore, a major motivation for building quantum machines is to obtain exponential speedups for a range of problems. Consequently, the motivation for designing a Quantum Array (Tensor) DBMS is to achieve exponential speedups for challenging array (tensor) operations. Speedups differ in nature, thus a really good speedup (exponential in our case) is a really good motivation for building a Quantum Array (Tensor) DBMS.
Coherence Time (T2 time) and Gate Fidelity are major bottlenecks nowadays. As qubits are not 100% isolated, their states leak into the environment and lose the ability to maintain a superposition within the T2 time. Hence, all computations must finish within the T2 time (before decoherence happens). Whether they do depends, in particular, on the circuit depth. Moreover, each operation introduces noise (error) during a quantum algorithm. To improve Gate Fidelity (decrease the Error Rate), several physical qubits are logically combined to create a higher-quality logical (error-corrected) qubit [126].
However, there is active research on increasing the number of qubits, coherence, and fidelity, as well as on multi-core QPUs and overall flexibility such as qubit connectivity or measurement in the middle of computations. For example, companies build quantum computers using diverse technologies to tackle the aforementioned challenges: IonQ (ion-trap) [127], QuEra (neutral atom) [128], Rigetti [129], OQC [130], Fujitsu [89] (superconducting), and D-Wave (annealing) [91], among others. The work is in full swing, and foreseeable breakthroughs enabled by quantum computers are undeniable. For example, a recent search algorithm implementation confidently outperforms, on a quantum computer, its classical counterpart [115].
Table 1. Definitions of notations used in this article.
Symbol | Definition
$\alpha^*$ | Complex conjugate of $\alpha$: $(a + bi)^* = a - bi$, $a, b \in \mathbb{R}$
$|v|$ | Norm of vector $v$: $\sqrt{\sum_i |v_i|^2}$, $v_i \in \mathbb{C}$
$v^\dagger$ | Conjugate transpose of column vector $v$: $[\,v_1^* \cdots v_n^*\,]$
$\otimes$ | Kronecker product: $[a, b] \otimes [x, y] = [ax, ay, bx, by]$
$v^{\otimes n}$ | $n$-fold repeated Kronecker product: $\underbrace{v \otimes \cdots \otimes v}_{n}$
$uv$ | Matrix multiplication: $u \times v$
$|q\rangle$ | ket: $|j_{n-1}\rangle|j_{n-2}\rangle\cdots|j_0\rangle$, $q = \sum_{k=0}^{n-1} 2^k j_k$, $j_k \in \{0, 1\}$
$|p\rangle|q\rangle$ | same as $|p\rangle \otimes |q\rangle = |pq\rangle$
$\langle q|$ | bra: $\langle q| = |q\rangle^\dagger$, e.g., $(\alpha|0\rangle + \beta|1\rangle)^\dagger = \langle 0|\alpha^* + \langle 1|\beta^*$
$\langle p|q\rangle$ | braket: $\langle p| \times |q\rangle$
$H$ | Hadamard operator: $H = \frac{1}{\sqrt{2}}\,[[1, 1], [1, -1]]$
$T$ | T-gate or $\pi/8$-gate (8, not 4): $T = [[1, 0], [0, e^{i\pi/4}]]$
$O_e[\gamma, \cdot]$ | Oracle $e$ that acts only on the first $\gamma \in \mathbb{N}$ qubits; $O_e[\cdot, \gamma]$ acts on the last $\gamma$ qubits
Notes | Note that $\alpha\alpha^* = |\alpha|^2$, $(e^{i\varphi})^* = e^{-i\varphi}$, $|v|^2 = v^\dagger v$, $|\langle q|q\rangle|^2 = 1$
Frequently used notations from the Quantum Array (Tensor) Data Model, Section 4.1:
$N$ | the number of array (tensor) dimensions
NA | a missing cell value; other popular names: null, NoData, n/a, NaN, etc.
$A'$ | a hyperslab, an N-d subarray of $A$, Equation (5)
$\bowtie$ | array (tensor) join, Section 4.7
$A\langle d\rangle$ | a 1-d array with $d$ elements, where $d \in \mathbb{Z}$ and $d > 0$
Quantum memory I/O ($\rho$, $\rho_a$ and $\omega$ read/write from/to both QRAM and RAQM), Section 4.4:
$\pi$ | a quantum pointer $\sum_j \psi_j|j\rangle$, where $j$ is the cell index that occupies $\gamma$ qubits
$\rho(\pi)$ | $\sum_j \psi_j|j\rangle|D_j\rangle$ (read operation), where $D_j$ is the value of the $j$th memory cell
$\omega$ | quantum memory write operation
$\rho_a(\pi_a)$ | read-and-append: read memory cells addressed by the first $\gamma$ qubits ($|j\rangle$) of $\pi_a = \sum_j \psi_j|j\rangle|\cdot\rangle$ and entangle them with $\pi_a$: $\sum_j \psi_j|j\rangle|\cdot\rangle|D_j\rangle$ as well, Equation (24)
Notations used for complexity (asymptotic) analysis:
$\mathbb{T}$ | type of array (tensor) cells, Equations (3) and (4)
$|\mathbb{T}|$ | the number of bits or qubits for type $\mathbb{T}$
$|A|$ | the size of array (tensor) $A$, Section 4.1.1
$\gamma$ | the number of qubits for keeping cell indexes in a quantum pointer, Section 4.4
$A$ | the number of qubits to keep the whole array (tensor) $A$ in a quantum register
£ | runtime asymptotic cost; note that symbols $T$ and $\mathbb{T}$ are already reserved
$O$ | big O, defined in [131]
$O_e^{æ}(x)$ | the asymptotic cost of a function or oracle $e$ on $O(x)$ qubits; note that the notation accounts for multiple operands and ancillary qubits for $e$
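To make some of the Table 1 notations concrete, the following NumPy helpers (our own illustrative sketch, not the article's software) implement kets, bras, the repeated Kronecker product, and the H and T gates.

```python
import numpy as np
from functools import reduce

def ket(q: int, n: int) -> np.ndarray:
    """|q> on n qubits as a unit column vector of length 2^n."""
    v = np.zeros(2**n, dtype=complex)
    v[q] = 1.0
    return v

def bra(v: np.ndarray) -> np.ndarray:
    """<v| = |v>^dagger (conjugate transpose of a column vector)."""
    return v.conj().T

def kron_n(v: np.ndarray, n: int) -> np.ndarray:
    """n-fold repeated Kronecker product v (x) ... (x) v."""
    return reduce(np.kron, [v] * n)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])           # T-gate (pi/8 gate)

# Example: the braket <0|H|0> equals 1/sqrt(2).
print(bra(ket(0, 1)) @ H @ ket(0, 1))
```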
For further information on quantum computing, we encourage the reader to refer to diverse books on quantum computing, e.g.,  [132,133,134,135,136]. In addition, quantum computing frameworks for specialized and well-known programming languages like Q# [137], QASM [138], Java [139], and Python [140] provide insightful documentation and enable running many quantum algorithms on simulators or physical quantum machines.

2.2. Quantum Memory

There are two basic types of quantum memory: Quantum Random Access Memory (QRAM) [141,142,143,144] and Random Access Quantum Memory (RAQM) [145]. We expect QRAM and RAQM to be available soon, as they are under active development [145].
Unlike classical RAM, quantum memory performs atomic array I/O using several memory addresses at once. RAQM or QRAM accepts a quantum pointer (register) as input that can contain a superposition of several memory addresses, and fetches an array of values in one go:
$\sum_j \psi_j|j\rangle \;\mapsto\; \sum_j \psi_j|j\rangle|D_j\rangle$
where D j is the value of the jth memory cell, classical or quantum in the case of QRAM or RAQM respectively. The quantum memory pointer (register) becomes entangled with the values of quantum memory cells (entanglement is defined in Section 3.2). Note that quantum memory is essentially array-oriented. In general, a single quantum memory operation works on a whole array of values, not on a single value. This makes quantum memory very attractive for Array (Tensor) DBMSs.
For example, consider a quantum register $|r\rangle = \frac{1}{\sqrt{3}}(|45\rangle + |50\rangle + |55\rangle)$. Note that $|r\rangle$ has only 3 values in its superposition, so we can “pack” from 1 to $2^n$ values into a quantum register of n qubits. Moreover, these 3 values are not contiguous: cell indexes № 46–49 and 51–54 are skipped. We call $|r\rangle$ a quantum memory pointer or just a quantum pointer. Now,
$\frac{1}{\sqrt{3}}|45\rangle + \frac{1}{\sqrt{3}}|50\rangle + \frac{1}{\sqrt{3}}|55\rangle \;\mapsto\; \frac{1}{\sqrt{3}}|45\rangle|D_{45}\rangle + \frac{1}{\sqrt{3}}|50\rangle|D_{50}\rangle + \frac{1}{\sqrt{3}}|55\rangle|D_{55}\rangle$
where $D_{45}$, $D_{50}$ and $D_{55}$ are the values of the RAQM or QRAM cells with indexes 45, 50 and 55 respectively. Given a quantum pointer of n qubits, how much time is required to retrieve $2^n$ cell values? One of the questions this article answers is what potential impact quantum memory will have on Quantum Array (Tensor) DBMSs if its realization supports reading/writing arrays (tensors) with up to $2^n$ elements in $O(n)$ time, asymptotically [141,144,146,147].
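As a minimal illustration of Equation (2), the following Python sketch models the QRAM read classically on a sparse dictionary representation; the memory contents $D_{45}$, $D_{50}$, $D_{55}$ are made-up example values.

```python
import numpy as np

# Sketch of the QRAM read of Equation (2): a quantum pointer over cell
# indexes 45, 50, 55 becomes entangled with the stored values D_j.
D = {45: 7, 50: 3, 55: 9}                             # example QRAM contents

pointer = {j: 1 / np.sqrt(3) for j in (45, 50, 55)}   # sum_j psi_j |j>

def qram_read(pointer, memory):
    """sum_j psi_j |j>  ->  sum_j psi_j |j>|D_j> (amplitudes unchanged)."""
    return {(j, memory[j]): psi for j, psi in pointer.items()}

entangled = qram_read(pointer, D)
print(entangled)   # {(45, 7): 0.577..., (50, 3): 0.577..., (55, 9): 0.577...}
```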
In such a new and vibrant area as quantum computing, uncertainties remain not only in quantum algorithms, but also in the characteristics of quantum memory. Current discussions raise concerns about bandwidth limitations, mainly for transferring data from classical to quantum computers and vice versa [148]. However, these discussions do not cite state-of-the-art quantum memory advancements [145,147] and do not analyze QRAM advantages. In addition, quantum-classical interfaces and quantum memory, like any other hardware, will constantly and actively improve their characteristics to make quantum machines suitable for data-driven applications, Section 6.1.6. Despite the concerns, great success has been achieved in making quantum computers, e.g., via quantum annealing, practical for diverse commercial applications including resource scheduling, mobility, logistics, drug discovery, portfolio optimization, and manufacturing processes [149]. Myriad other enhancements to advance quantum computing are being actively pursued [150,151,152].
In this article, we are interested not in today's technological limitations or in making immediate use of current quantum memory developments in their infancy, but in the properties and principal possibilities of quantum memory and computing, both qualitative and quantitative, and their future efficient utilization. We have in mind not only performance, but also other characteristics: for example, the I/O pattern of quantum memory, i.e., how an address register becomes entangled with memory data, Equation (1). Other examples are the ways sparse and dense arrays can be efficiently stored and transformed in quantum registers and quantum memory, a long-standing problem for classical Array (Tensor) DBMSs, Section 4.5.
Inside a quantum computer, data I/O between quantum memory and a quantum register in $O(n)$ for $2^n$ values seems rather appealing. Optimistic architectures utilize qutrits (three-state) instead of qubits (two-state) [141], and even qudits (multi-state) have been demonstrated [153]. Therefore, we must start early in designing data layouts and algorithms to be able to efficiently utilize the benefits of upcoming quantum memory.
Figure 2. Examples of Quantum State Visualizations.
Unlike relational DBMSs, Array (Tensor) DBMSs manage multidimensional arrays (tensors) that often come from simulation outputs or diverse sensors. For example, Earth models (weather, climate, hydrology, etc.) or Earth observation. A numerical model can run on a quantum computer and load its output tensors into quantum memory. As the output tensors are already inside a quantum computer, Quantum Array (Tensor) DBMSs can perform operations on these output tensors without costly data movements between classical and quantum parts (components). Therefore, Array (Tensor) DBMSs have promise for practicality, especially for operating on tensors that are already in QRAM or RAQM (originated from a quantum algorithm or simulation).
In addition, note that the techniques presented further in this article are not completely tailored to quantum memory. If a multidimensional array (tensor) is already in a quantum register, as an output of some other quantum program, a Quantum Array (Tensor) DBMS can also be used to quickly operate on the quantum register that already contains data.
With just 64 qubits we can address $2^{64}$ memory cells in a single quantum register. With an additional 32 qubits for integer values, a quantum register $\sum_j \psi_j|j\rangle|D_j\rangle$ can store an array of 64 exbibytes (EiB), or over 73 exabytes (EB), of values, roughly 1000× more than the entire Google Earth Engine platform to date [75]; here $|j\rangle$ and $|D_j\rangle$ occupy 64 and 32 qubits respectively, 96 qubits in total. The data can be generated by a quantum algorithm and need efficient management by an Array (Tensor) DBMS, possibly entirely within a quantum computer. Recall that, to date, we already have real, working quantum computers with over 1000 qubits [92,157]. Optimistically, we can read $2^{64}$ QRAM cells in just $O(64)$ given appropriate hardware. With this vision of how to use quantum memory benefits, this article may additionally stimulate further quantum memory advancements.

2.3. Visualization in Quantum Computing

Let us summarize visualization techniques that facilitate working with quantum information to justify the introduction of our new quantum charts.
  • Vector notation represents the state of a single qubit quite well: $[\alpha\ \beta]^T$. However, as the number of qubits grows, the size of the resulting vector grows exponentially and the vector notation becomes bulky and impractical:
    $\begin{bmatrix}\alpha_1\\\beta_1\end{bmatrix} \otimes \begin{bmatrix}\alpha_2\\\beta_2\end{bmatrix} \otimes \cdots \otimes \begin{bmatrix}\alpha_n\\\beta_n\end{bmatrix}$
  • Dirac notation was developed, in particular, to address the aforementioned limitation of the vector notation. It is typical to write a superposition as a sum. However, it becomes hard to track the progression of the applied transformations: for example, it is not obvious which parts of the sum generated which subsequent parts, even in Equation (2), which has only three summands and one transformation. The growth of the transformation chain and of the number of summands hinders comprehension.
  • Bloch sphere displays the state of only a single qubit, Figure 1.
  • IBM Q-sphere [158] was introduced to overcome the visualization limitation of the Bloch sphere. However, IBM Q-sphere visualizes only one state of a quantum register on a sphere, not the sequence of transformations, Figure 2a.
  • GHZ state spherical phase spaces [156] visualize a family of many-body states on a sphere, Figure 2c. Although important, this type of visualization hides actual values of information.
  • Majumdar-Ghosh model qubism representation [159] focuses on the same family of states as the previous approach, but utilizes 2-d grids for visualization.
  • Bloch Sphere Binary Tree (BSBT) [155] is a powerful visualization technique that can display time evolution and even datasets, Figure 2b. However, it shows only a bird’s-eye view of the patterns and omits specific dataset values.
  • Quantum circuit diagram is a fundamental way to graphically illustrate a quantum program using a set of connected operators, Figure 3. Many exciting and powerful quantum circuit visualizers exist, for example, IBM Qiskit [160] and Quirk [161]. However, circuits display the sequence of operations, not the data flow, e.g., values of quantum registers and their source-target relationships.

3. Quantum Gantt Chart (QGantt)

The Quantum Gantt Chart is a new type of chart that we propose. Almost all quantum operations are applied in parallel to an exponential number of values. In addition, quantum registers can contain data derived from algorithm outputs or obtained from quantum memory. To understand such operations, it is important to see array values, e.g., $A[x]$. Operations of higher arity also require clear visualizations, for example, joining arrays A and B, denoted by $A \bowtie B$ (Section 4.7). In such cases, it is also important to see how the indexes of A and B correspond to each other in $A \bowtie B$.
A far more complex problem is to statically visualize, letting users explore the chart at their own pace, which and how many values are processed in parallel, the inputs/outputs (values, not qubits) of operations, and their duration and order, and to clearly annotate all of them with what they are doing. Obviously, current approaches (Section 2.3) fall short of clearly showing all of the aforementioned information: they do not visualize or describe the flow of quantum computations combined with the data flow.
QGantt is a simple yet very informative way to statically visualize diverse quantum programs, suitable not only for the memory and Array (Tensor) DBMS aspects. Unlike a traditional Gantt chart, the vertical axis depicts input/output data items for operations. The horizontal axis shows time and the duration of operations. Data items processed in parallel are located in a single column, annotated with explanations. Data items for the next operation go to the next column. If there are several logical stages (see Equation (39) as an example), QGantt charts can form a Quantum Network Diagram, Section 5. An important property of the QGantt chart is that it is straightforward to extend it with, or combine it with, other visualization techniques, see Section 2.3.
In this article, we typeset QGantt charts in Earth 05 00027 i014 for aesthetic reasons. However, it is possible to render QGantt charts in ASCII art or HTML versions, which are very flexible, similar to the way quantum circuits are presented in popular frameworks [160,161]. An example of an ASCII art version of a QGantt chart is in Section 5.4. Graphs are also well suited for displaying QGantt charts for practical reasons: diverse software can be used to manipulate graphs interactively. We will present increasingly complex QGantt charts as the article progresses. For now, consider the following examples. We show that QGantt charts can naturally complement existing visualization techniques, including algebraic expressions with vector or Dirac notation, spherical representations, as well as quantum circuits. QGantt charts can also be used autonomously, without additional visualization techniques, because they are self-contained. In some cases, for example for operations with quantum memory or other data-centric algorithms, a QGantt chart can be the first visualization tool to try.

3.1. QGantt: Reading Data from Quantum Memory

Let us refer to the algebraic description of how data is read from QRAM, Equation (2). The QGantt chart for Equation (2), Figure 4, is much clearer. The chart omits $\psi_j$ as they are not important in this context. However, it is easy to add the values or visualizations (bars, circles, etc.) of $\psi_j$ if required.
Figure 4 provides a wealth of information. We see only one operation depicted in the QGantt chart: reading from QRAM. Therefore, the chart displays two main columns: the first column contains the terms that can be found in the superposition of the quantum register, and the second column provides the result. Note that the terms of a superposition are laid out vertically, representing the fact that they exist simultaneously, and an operation is applied to all the terms simultaneously as well.
The leftmost column contains the numbers of the superposition terms to facilitate referencing the terms. Significant columns or superposition columns (columns that contain terms that constitute a superposition, inputs or outputs) are numbered: see row sup at the top. This makes it possible to reference any superposition term in a QGantt chart. For example, we can reference $|55\rangle|D_{55}\rangle$ just by (3, 2).
At the bottom, the timeline, together with braces, provides an indication of the duration of each operation. Each operation has an underlying brace; the number of braces corresponds to the number of operations. Braces additionally help to visually separate operations from each other. Recall that in the case of reading from QRAM, the asymptotic runtime is logarithmic, and this is indicated under the brace: ρ reads up to $2^\gamma$ cells in $O(\gamma)$, where γ is the number of qubits for cell indexes in the quantum register, Table 1.
Finally, the name of an operation is placed above the column with the results of this operation, but below the column numbers. For example, we see “read”, which is the name of the operation (command) applied to the superposition in the first column and whose result (output) is in the column next to the input, the second column. Input terms point to the respective output terms with ↦ (an arrow). The inputs and outputs of an operation can be surrounded with a box, as in Figure 4 for the “read” operation, to clearly attribute a block in a diagram to a certain operation. However, columns, rows, or regions in QGantt charts can be decorated in different ways as well.
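For readers without access to the typeset figure, a simplified ASCII-art rendition of the Figure 4 chart can be printed as follows; the exact layout here is our own assumption and differs from the article's rendering.

```python
# A simplified ASCII-art rendition of the QGantt chart in Figure 4 (layout assumed).
# Column 1: input superposition terms; column 2: the result of the "read" operation.
rows = [
    ("1", "|45>", "|45>|D_45>"),
    ("2", "|50>", "|50>|D_50>"),
    ("3", "|55>", "|55>|D_55>"),
]
print("sup |    1      read        2")
print("----+---------------------------")
for num, src, dst in rows:
    print(f"  {num} | {src:7} |-> {dst}")
print("    '------------v------------'")
print("      rho reads up to 2^gamma cells in O(gamma)")
```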

3.2. QGantt: Creating Bell State

The Bell state is a fundamental notion in quantum computing. Bell states are important for understanding superdense coding and quantum teleportation, and they demonstrate quantum entanglement [132,133,134,135,136]. The circuit that creates Bell state $|\Psi^+\rangle$ is shown in Figure 3. First, the Hadamard operator is applied to the first qubit. Then CNOT (Controlled NOT, Section 2.1) is applied with the first qubit as the control qubit.
The result of creating Bell state $|\Psi^+\rangle$ is a superposition of two entangled qubits: the outcome of measuring the first qubit is unknown and will be 0 or 1, each with probability 1/2. However, once the first qubit is measured, measuring the second qubit will always yield the same classical bit value as the measurement of the first qubit. This demonstrates quantum entanglement.
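The following NumPy statevector sketch (ours, for illustration) reproduces the circuit of Figure 3: a Hadamard on the first qubit followed by CNOT with the first qubit as control, yielding an entangled two-qubit state with the measurement statistics described above.

```python
import numpy as np

# Statevector sketch of the Bell-state circuit: H on the first qubit,
# then CNOT with the first qubit as control (basis order |00>,|01>,|10>,|11>).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.kron(H, I) @ np.array([1, 0, 0, 0])   # start from |00>, apply H to qubit 1
psi = CNOT @ psi                               # entangle: (|00> + |11>)/sqrt(2)
print(psi)                                     # [0.707, 0, 0, 0.707]

# Measuring both qubits always yields equal classical bits (00 or 11), each with p = 1/2.
print(np.abs(psi) ** 2)                        # [0.5, 0, 0, 0.5]
```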
Let us show how QGantt charts excel at clearly visualizing, step by step, the creation of Bell state $|\Psi^+\rangle$, the exact values of all intermediate states with their amplitudes, as well as their input/output relationships, Figure 5.
Figure 5 displays 3 superpositions in 3 columns. Any term in the diagram can be referenced by a row and column (superposition) number.
We start from the initial state $|00\rangle$ with amplitude 1, where both qubits are initialized to $|0\rangle$, Figure 5, (1, 1) (row № 1, column or superposition № 1).
At the next step, the Hadamard gate is applied to the first qubit, as shown by the red dotted line and indicated by the red arrow. The first qubit in (1, 1) is underlined to indicate that it participates in the operation. The red line, arrow, and underlining are optional decorating elements that make the illustration clearer. The curly braces underneath indicate the scope of operations (inputs and outputs) and provide extra explanations.
After the application of the Hadamard gate, we obtain two elements in the output superposition: (1, 2) and (2, 2), with the same amplitudes equal to $1/\sqrt{2}$. The amplitudes changed compared to the amplitude 1 of the input term (1, 1). Arrows pointing from (1, 1) to (1, 2) and (2, 2) indicate that both (1, 2) and (2, 2) resulted from an operation applied to the single term (1, 1).
Underlined qubits in ( 1 , 2 ) and ( 2 , 2 ) additionally emphasize that a modification or impact was caused to the underlined qubits compared to the previous superposition.
The last step is the application of the CNOT operation, column № 3. Symbols • and ⊕ mark the control and controlled qubits respectively. We could place these symbols somewhere at the top of (1, 3), as the positions of the control and controlled qubits are the same for each term in column № 3. However, symbols • and ⊕ appear over each term in the superposition, because many such terms can exist in a column and the reader should not need to scroll to find out which qubits are used for the CNOT operation and what their roles are. The second value in (2, 3) is underlined as it was modified as a result of the CNOT operation: from $|0\rangle$ in (2, 2) to $|1\rangle$ in (2, 3).
Figure 5 shows the steps to create a Bell state with 2 qubits. It is straightforward to extend this diagram to illustrate the steps required to create a Greenberger–Horne–Zeilinger (GHZ) state with 3 qubits: $(|000\rangle + |111\rangle)/\sqrt{2}$ [132,133,134,135,136]. For a step-by-step illustration of creating this GHZ state, the symbols • and ⊕ for the CNOT operation become even more necessary for clarity. For example, the notation $|1\rangle|1\rangle|\underline{1}\rangle$ indicates that the CNOT operation was applied to the 2nd and 3rd qubits, not some other combination of qubits, and the 3rd qubit is underlined as it is the qubit that was modified as a result of the operation.

4. Quantum Array (Tensor) DBMS Techniques

4.1. Quantum Array (Tensor) Data Model

Now we carefully start defining the first quantum array (tensor) data model, as it will determine the overall Quantum Array (Tensor) DBMS design. To stay consistent with the state of the art, we extend the formal model from [17].

4.1.1. Logical Array (Tensor)

An N-dimensional array (N-d array or tensor) is the mapping $A: D_1 \times D_2 \times \cdots \times D_N \to \mathbb{T}$, where $N > 0$, $D_i = [0, l_i) \cap \mathbb{Z}$, $0 < l_i$ is a finite integer, and $\mathbb{T}$ is a type. $l_i$ is said to be the size or length of the $i$th dimension. Here and further on, $i \in [1, N] \cap \mathbb{Z}$. Let us denote the N-d array (tensor) by
$A\langle l_1, l_2, \ldots, l_N\rangle : \mathbb{T}$
By $l_1 \times l_2 \times \cdots \times l_N$ denote the shape of A, and by $|A|$ denote the size of A such that $|A| = \prod_i l_i$. A cell or element value of A with integer indexes $(x_1, x_2, \ldots, x_N)$ is referred to as $A[x_1, x_2, \ldots, x_N]$, where $x_i \in D_i$. Each cell value of A is of type $\mathbb{T}$, Figure 6.
An array (tensor) may be initialized after its definition by enumerating its cell values. For example, the following defines and initializes a 2-d array (matrix) of integers: $A\langle 2, 2\rangle : \mathbb{Z} = \{\{1, 2\}, \{NA, 4\}\}$. In this example, $A[0, 0] = 1$, $A[1, 0] = NA$, $|A| = 4$, and the shape of A is $2 \times 2$. A missing value is denoted by NA. Other popular names for NA are null, NoData, n/a, NaN, and others.
Indexes $x_i$ are optionally mapped to specific values of the $i$th dimension by coordinate arrays $A.d_i\langle l_i\rangle : \mathbb{T}_i$, where $\mathbb{T}_i$ is a totally ordered set, $d_i[j] < d_i[j+1]$, and $d_i[j] \neq NA$ for $j \in D_i$. In this case, A can also be defined as
$A(d_1, d_2, \ldots, d_N) : \mathbb{T}$
Without loss of generality, in this article we work with integer types $\mathbb{T}$. Therefore, we will omit $\mathbb{T}$ if it is not important, or we will write a numerical value instead of $\mathbb{T}$ to indicate the number of bits or qubits used to store an array value, e.g., $A\langle 2, 2\rangle : q$, $q \in \mathbb{N}$.
Figure 6. An array (tensor) data model illustration.
A hyperslab $A'$ is an N-d subarray of A. The hyperslab $A'$ is defined by the notation
$A[b_1{:}e_1, \ldots, b_N{:}e_N] = A'(d'_1, \ldots, d'_N)$
where $b_i, e_i \in \mathbb{Z}$, $0 \le b_i \le e_i < l_i$, $d'_i = d_i[b_i{:}e_i]$, $|d'_i| = e_i - b_i + 1$, and for all $y_i \in [0, e_i - b_i]$ the following holds:
$A'[y_1, \ldots, y_N] = A[y_1 + b_1, \ldots, y_N + b_N]$,
$d'_i[y_i] = d_i[y_i + b_i]$.
Equations (6a) and (6b) state that A and $A'$ have a common coordinate subspace over which the cell values of A and $A'$ coincide. The dimensionality of A and $A'$ is the same. In hyperslab definitions, Equation (5), we will omit “$:e_i$” if $b_i = e_i$, or “$b_i{:}e_i$” if $b_i = 0$ and $e_i = |d_i| - 1$.
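As an illustrative, non-normative sketch, the logical model of this subsection maps naturally onto NumPy arrays; here NA is represented by NaN, and hyperslabbing follows Equations (5), (6a) and (6b).

```python
import numpy as np

# Illustrative sketch of the logical array (tensor) model of Section 4.1.1.
A = np.array([[1.0, 2.0],
              [np.nan, 4.0]])            # A<2,2>: A[0,0]=1, A[1,0]=NA, |A|=4, shape 2x2

lat = np.array([10.0, 20.0])             # coordinate arrays A.d_i<l_i>, totally ordered
lon = np.array([30.0, 40.0])

def hyperslab(A, bounds):
    """A'[y_1,...,y_N] = A[y_1+b_1,...,y_N+b_N] for bounds (b_i, e_i), Equation (5)."""
    return A[tuple(slice(b, e + 1) for b, e in bounds)]

A_sub = hyperslab(A, [(0, 1), (1, 1)])   # A[0:1, 1] -> the second column of A
print(A.size, A.shape, A_sub)
```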

4.1.2. Quantum Model Aspects

Section 4.1.1 defines a logical array (tensor) whose model is common for classical and Quantum Array (Tensor) DBMSs. We further extend this model with quantum aspects for which the notations from Section 4.1.1 are also used. For example, we introduce quantum tensor strips (Section 4.5) and quantum hyperslabbing (Section 4.6) among other aspects. However, our extensions appear in the forthcoming sections to make the presentation and the justification for extensions more clear and timely. Hence, we first introduce our quantum memory operations and then describe our extensions to the array (tensor) model in the context of Array (Tensor) DBMS challenges. We mark model extensions with Earth 05 00027 i001.

4.2. Array (Tensor)

To date, the R&D area of Array (Tensor) DBMSs is at the stage of forming its terminological dictionary. Moreover, it is a relatively young R&D area and no commonly accepted standards have been established for array (tensor) schema, query languages, the set of supported operations (operators), and many other Array (Tensor) DBMS aspects [7,8,9].
Array (Tensor) DBMSs operate on multidimensional arrays (tensors): the formal definition is in Section 4.1.1. However, here we additionally elaborate on the naming of this class of DBMSs: why do we use the word combination “Array (Tensor)”?
The history begins with Titan  [162] and Paradise  [163], two of the first database systems that specifically focused on array operations. They targeted Earth remote sensing data, as newly launched satellites challenged the data management community by generating massive amounts of data, mostly 2- and 3-dimensional arrays. At the time, this data was new to DBMSs and fundamentally different from the other supported data types.
It was quickly realized that many core data types in numerous other domains are naturally modeled by multidimensional arrays (tensors). As 2-dimensional arrays were most common, even one of the earliest systems was called RasDaMan, which stands for “Raster Data Manager”. However, it was clear that an array database management system goes far beyond rasters. That was reflected in the names of subsequent systems, e.g., “A Multidimensional Array DBMS” [164] or “A query language for multidimensional arrays” [165].
Although the word “array” does not clearly convey that a software system can work with arrays of more than 2 dimensions, “multidimensional array” is too lengthy a term. Even worse, it is hard to translate “Array DBMS” into other languages without awkwardness. For at least these two solid reasons, the term “Array DBMS” should be reconsidered.
Today, we believe that “Tensor DBMS” best reflects the essence of a database system that manages multidimensional arrays. The trend towards using the word “tensor” is strongly supported not only by the data management community, but also across a wider research environment [7,166]. For example, “tensors are natural multidimensional generalizations of matrices” and “by tensor we mean only an array with d indices” [166].
However, we are experiencing an intermediate period of the gradual transition to the name “Tensor DBMS”. Therefore, in this article, we still use the terms “Array (Tensor) DBMS” and “array (tensor)” for clarity as to which software systems and objects we refer to and to foster the transition.
The word “tensor” is increasingly used not only for arrays with more than two dimensions, but even for matrices (“2-d arrays” or “2-d tensors”) [167]. Technically and semantically, there is little or often no difference in how a state-of-the-art Array (Tensor) DBMS operates on a 1-dimensional, 2-dimensional, or N-dimensional array, where $N \in \mathbb{Z}$ and $N > 2$  [7]. Consequently, we use the word combination “array (tensor)”, or occasionally one of these two words, in this article.
Note that in our data model, a tensor is more than just an array with d indices, because it supports modeling of a wide variety of common and exotic Earth data types, including meshes, irregular grids, and others [17].
It is also worth mentioning that some researchers use the term “data cube” [35]. However, it is mostly understood as an object that can be obtained by issuing respective queries to an Array (Tensor) DBMS  [7].
Regardless of the current and possible future variations in the naming of database systems that manage diverse types of multidimensional arrays, and the naming of these arrays (rasters, tensors, data cubes, etc.), the word “tensor” perfectly reflects that an array can be multidimensional, is an international term, and is widely used in the research community directly for the purpose of referring to multidimensional arrays.

4.3. Illustrative Datasets

Let us take typical and popular Earth datasets that are subject to hyperslabbing, filtering, map algebra, and other operations that usually occur in Earth data engineering. We selected the x-component of sea surface wind (Figure 7) and the CO2 mole fraction (Figure 8) as motivating examples for a range of our quantum array (tensor) operations.
Let $W_{28}(lat_W, lon_W)$ be the NOAA/NCDC sea surface wind speed array for 28 August 2005, Figure 7.
$lat_W\langle 719\rangle = \{-89.75, -89.50, \ldots, 89.75\}$, +0.25° step
$lon_W\langle 1440\rangle = \{0.00, 0.25, \ldots, 359.75\}$, +0.25° step as well
Therefore, the shape of $W_{28}$ is $719 \times 1440$.
Figure 7. NOAA/NCDC blended 6-hr 0.25° sea surface winds, 28 August 2005 ($719 \times 1440$ cells).
Figure 8. AIRS-AMSU mole fraction of CO2 in free troposphere, 1 August 2004 ($91 \times 144$ cells).
Similarly, let $CO2_{01}(lat_{CO}, lon_{CO})$ be the AIRS-AMSU mole fraction of CO2 in the free troposphere for 1 August 2004, Figure 8. Here
$lat_{CO}\langle 91\rangle = \{-89.5, -88.0, -86.0, \ldots\}$, 2° step
$lon_{CO}\langle 144\rangle = \{-180.0, -177.5, \ldots, 177.5\}$, +2.5° step
Hence, the shape of $CO2_{01}$ is $91 \times 144$. Note that $lat_{CO}[1] - lat_{CO}[0] \neq 2$, so the first two elements make the array irregular at the very beginning. It becomes regular afterwards.
We do not use the cell type in either case, so we do not particularize $\mathbb{T}$.
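For illustration, the regular coordinate arrays of $W_{28}$ can be generated as follows (a NumPy sketch of ours, assuming the 0.25° grids described above).

```python
import numpy as np

# Sketch of the W_28 coordinate arrays from Figure 7 (regular 0.25-degree grids).
lat_W = np.linspace(-89.75, 89.75, 719)   # 719 values, +0.25 deg step
lon_W = np.linspace(0.00, 359.75, 1440)   # 1440 values, +0.25 deg step
print(len(lat_W), len(lon_W))             # 719 x 1440, the shape of W_28
```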

4.4. Quantum Memory Operations

Memory is an essential component of a computing machine, but contemporary literature devotes little attention to the interactions of algorithms and quantum memory. Hence, one of the most important questions we start with is how memory can be organized for and accessed by a Quantum Array (Tensor) DBMS. This subsection contributes by reasoning about quantum array-tailored memory operations, as current literature lacks these aspects.
To move further, we need to introduce memory operations and reason about their costs. Let us denote by ρ and ω the read and write memory operations respectively, and by π a quantum pointer $\sum_j \psi_j|j\rangle$. Then $\rho(\pi) = \sum_j \psi_j|j\rangle|D_j\rangle$, where $D_j$ is the value of the jth memory cell [145]. We make no assumptions on whether ρ destroys memory cells after the completion of the operation. However, the property of being non-destructive would be very beneficial, so that source tensor values do not vanish once they are read. We assume that we can perform several read and write operations in the same algorithm.
Note that π does not necessarily address contiguous memory cells. Let us assume that a quantum pointer π has γ qubits. Similarly, $\omega(\sum_j \psi_j|j\rangle|D_j\rangle)$ writes value $D_j$ to the jth memory cell for each j.
Next, let us assume that read and write operations on QRAM and RAQM are reversible, i.e., $A = \rho(\pi)$ after the following sequence of operations: $A \leftarrow \rho(\pi)$ and $\omega(A)$. We do not require that the $\psi_j$ be the same after a sequence of read and write operations, only that the pairs $|j\rangle|D_j\rangle$ are the same. We can expect
$\pounds(\rho(\pi)) = \pounds(\omega(\rho(\pi))) = O(n)$
where $|\{j\}| = 2^n$ and £ denotes the asymptotic cost (recall that the symbols $T$ and $\mathbb{T}$ are already reserved), Section 6.1.6. In other words, $\pounds(\rho(H^{\otimes n}|0\rangle^{\otimes n})) = O(n)$. This will enable efficient implementations of many quantum algorithms, for example, the Grover search algorithm, HHL for linear algebra, the Quantum Fourier Transform, pattern recognition, and others, yielding exponential speedups [141,145].
We have already illustrated reading from QRAM in Figure 4.
Note that in Section 4.7 we further extend ρ to retrieve cell indexes from the first γ qubits of a superposition; we present the extension and an important use case there.

4.4.1. NA in Tensor Cell Values

An interesting question is whether we should read and write $D_j$ values that are equal to NA, i.e., how to handle the term $\psi_j|j\rangle|NA\rangle$. The most appropriate implementation of ρ would skip NA cell values, but this may pose engineering challenges. In this case, the input to ρ could be a starting memory cell index s and a length n (the number of cells to be read) rather than a quantum pointer: $\rho(s, n) = \sum_j \psi_j|j\rangle|D_j\rangle$, where $D_j$ is the value of the jth memory cell such that $D_j \neq NA$ and $j \in [s, s + n]$ (no terms in the superposition with missing values), $n \in \mathbb{N}$. This would automatically enable intrinsic tensor compaction (Section 4.5).
This formulation seems simpler than submitting a superposition to ρ, because ρ takes just 2 values and generates the superposition itself. Although it may be possible to parameterize ρ in this way, the runtime cost needs to be clarified for this setup: the worst case is O(n), as for classical memory; a good cost would be O(log n).
We analyze an important scenario where it can be beneficial, but not strictly necessary to eliminate NA cell values in Section 4.5. In the same section we also present an approach to remove NA cell values from an arbitrary quantum register, using quantum memory. The approach does not require additional hardware support for skipping NA cells in ρ , Section 4.5.4. If RAQM is available, we can easily construct a superposition for ρ that does not contain cell indexes which point to NA values, see also Section 4.5.4.
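As a hedged sketch of the $\rho(s, n)$ semantics discussed above, the following classical Python model builds the superposition only over non-NA cells of a range; uniform amplitudes are an assumption made for simplicity.

```python
import numpy as np

NA = None   # a placeholder for missing values in this sketch

def rho_skip_na(s, n, memory):
    """Illustrative semantics of rho(s, n): sum_j psi_j |j>|D_j> over j in [s, s+n]
    with D_j != NA, i.e., no superposition terms for missing cells."""
    hits = [(j, memory[j]) for j in range(s, s + n + 1) if memory[j] is not NA]
    amp = 1 / np.sqrt(len(hits))                 # uniform amplitudes for simplicity
    return {(j, d): amp for j, d in hits}

memory = [1, NA, 0, NA, NA, 1, 0]                # a sparse 1-d strip of cells
print(rho_skip_na(0, 6, memory))                 # only cells 0, 2, 5, 6 appear
```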

4.4.2. NA in Address Indexes

A similar question is whether we should also allow a quantum register to contain $\psi_j|NA\rangle|D_j\rangle$ (NA as a cell index). In Section 5.2, we discuss a specific and important use-case where it is reasonable to have NA as an address (index) for an array (tensor) cell and which can benefit from specific handling of NA in cell indexes at the hardware level.
At the hardware level, it would be ideal if ρ and ω omitted cells indexed with NA: $\psi_j|NA\rangle \mapsto \psi_j|NA\rangle|NA\rangle$ for ρ, while ω skips $\psi_j|NA\rangle|D_j\rangle$. Another solution is simply to use the fact that NA can be an ordinary constant value, for example $2^{|\mathbb{T}|} - 1$ for an array $A\langle d\rangle : \mathbb{T}$. In this case, before invoking ρ or ω, it is possible to replace NA with some dummy index such that ρ reads the same value addressed by NA, while ω writes all such values to some dummy index (if parallel writes to the same index are supported). Again, before writing or reading, it is possible to utilize the NA elimination technique to avoid parallel writes to the same address or redundant reads from multiple dummy indexes, Section 4.5.4.

4.5. Quantum Sparse Arrays (Tensors)

Recall that an N-d array (tensor) can have missing values denoted by NA, Section 4.1.1. In general, the distribution of NA values seems random and may change after updates. Classically, memory space, I/O and CPU time are often wasted on NA-related issues. Many techniques tackle the problems of sparse arrays, e.g., TileDB has a new on-disk format with support for fast updates [56] and SciDB provides ragged arrays [67]. However, with classical memory, efficient storage and transformation of such arrays for diverse workloads becomes a Holy Grail  [7,66].
Sparse arrays are quite common in Earth data engineering. For example, Figure 7 provides no sea surface wind speed values for land which constitutes about 29 % of Earth’s surface. Hence, the wind speed array can be considered as a sparse array. Although array cells that correspond to land are predictable, more complex cases exist. Earth remote sensing data products can contain missing values in hardly predictable areas for each array. For instance, consider mole fraction of CO 2 in free troposphere, Figure 8.
Classical Challenge 1. How to efficiently represent sparse arrays (arrays with a large number of NA values)?
To make the discussion more specific with concrete numbers, consider an array A ( l a t , l o n ) : T such that | A | = 16 , | l a t | = | l o n | = 4 , Figure 9a. Array A has 16 cells and 6 cells (dashed red lines) do not contain values (NA cells, technically these cells can contain a special constant that denotes a missing value). Other values are numbers 0 or 1.
A classical solution to the stated challenge is to split an array into chunks or tiles: I/O units used to read/write array portions, Figure 9a. Typically, a classical array is processed (i.e., read, write, filter cells) in chunks or tiles which are smaller subarrays. Thick blue lines in Figure 9a separate chunks from each other. Chunks make array processing more manageable and avoid reading the whole array into the operating memory.
This approach works well for dense arrays (with a small number of NA values), but becomes inefficient for sparse arrays, which have many NA values. For example, consider the chunk (subarray) A[0:1, 2:3]. It contains only one value, at A[1, 3], and the other cells are empty. Allocating, reading, and writing a chunk of 4 cells wastes 75% of resources. Of course, Figure 9a is a tiny example to facilitate understanding; typical 2-d chunk shapes are 512 × 512 and similar. Chunking is usually applied to arrays in persistent storage like SSD or Amazon S3. For classical RAM, the basic storage schemes are row-wise (Figure 9b) or column-wise layouts. However, to increase array processing efficiency, sophisticated RAM layouts are also devised, e.g., Apache Arrow [168].
Earth 05 00027 i001 Quantum Look at Challenge 1. A Quantum Array (Tensor) DBMS data model is a radical departure from a classical array data model, which splits a large N-d array into multiple chunks or tiles to create a more manageable dataset. Quantum Array (Tensor) DBMSs can now operate on quantum strips, or simply strips, Section 4.5.1 and Section 4.5.2. We introduce a new term, quantum strips, to clearly distinguish quantum array (tensor) formats (layouts) from chunks, tiles, and other classical entities.

4.5.1. Wide Quantum Strips

An array A is represented as a wide quantum strip as follows:
$A\langle l_1, l_2, \ldots, l_N\rangle = \sum_j \psi_j\, |x_1\rangle|x_2\rangle\cdots|x_N\rangle\, |A[x_1, x_2, \ldots, x_N]\rangle$
This array layout intrinsically enables efficient packing of both dense and sparse arrays and naturally or directly exposes array cells as inputs to quantum algorithms. While we can still use the classical data model described above to define inputs/outputs for Array (Tensor) DBMS approaches, we now operate on the quantum array (tensor) layout.
The wide quantum strip of array A in Figure 9a, but without NA cells, now looks like
$A(lat, lon) = \psi_{0,0}|0\rangle|0\rangle|1\rangle + \psi_{0,1}|0\rangle|1\rangle|0\rangle + \psi_{1,0}|1\rangle|0\rangle|0\rangle + \psi_{1,1}|1\rangle|1\rangle|1\rangle + \psi_{1,3}|1\rangle|3\rangle|1\rangle + \psi_{2,0}|2\rangle|0\rangle|0\rangle + \psi_{2,2}|2\rangle|2\rangle|0\rangle + \psi_{3,0}|3\rangle|0\rangle|1\rangle + \psi_{3,2}|3\rangle|2\rangle|1\rangle + \psi_{3,3}|3\rangle|3\rangle|0\rangle$
The same wide quantum strip of array A in Figure 9a, but with NA values, looks as follows:
$$\begin{aligned} A(lat, lon) = {} & \psi_{0,0}\,|0\rangle|0\rangle|1\rangle + \psi_{0,1}\,|0\rangle|1\rangle|0\rangle + \psi_{0,2}\,|0\rangle|2\rangle|NA\rangle + \psi_{0,3}\,|0\rangle|3\rangle|NA\rangle + {} \\ & \psi_{1,0}\,|1\rangle|0\rangle|0\rangle + \psi_{1,1}\,|1\rangle|1\rangle|1\rangle + \psi_{1,2}\,|1\rangle|2\rangle|NA\rangle + \psi_{1,3}\,|1\rangle|3\rangle|1\rangle + {} \\ & \psi_{2,0}\,|2\rangle|0\rangle|0\rangle + \psi_{2,1}\,|2\rangle|1\rangle|NA\rangle + \psi_{2,2}\,|2\rangle|2\rangle|0\rangle + \psi_{2,3}\,|2\rangle|3\rangle|NA\rangle + {} \\ & \psi_{3,0}\,|3\rangle|0\rangle|1\rangle + \psi_{3,1}\,|3\rangle|1\rangle|NA\rangle + \psi_{3,2}\,|3\rangle|2\rangle|1\rangle + \psi_{3,3}\,|3\rangle|3\rangle|0\rangle \end{aligned} \qquad (11)$$
Now the aforementioned metrics for classical arrays (tensors), like |A|, do not reveal the quantum resource usage. Let us denote by ‖A‖ the number of qubits needed to keep the whole of A in a quantum register. What is the order of ‖A‖?
For a wide strip layout (this subsection), to keep in a quantum register a dense or sparse N-d array A that can contain from 0 up to 2^{Nγ} cells of type T, where the size of any dimension of A is no larger than 2^γ, we need a quantum register with at least ‖A‖ = |T| + Nγ qubits.
For a narrow strip layout (Section 4.5.2), to keep in a quantum register a dense or sparse array A that can contain from 0 up to 2^γ cells of type T (regardless of the dimensionality of A), we need a quantum register with at least ‖A‖ = |T| + γ qubits.
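As a quick illustration of these two bounds (a sketch with hypothetical parameter values, not a statement about any particular hardware), the qubit counts follow directly from |T|, N, and γ:

def wide_strip_qubits(t_bits, n_dims, gamma):
    # One wide-strip term: |T| qubits for the cell value plus gamma qubits
    # per dimension index (up to 2^(N*gamma) addressable cells).
    return t_bits + n_dims * gamma

def narrow_strip_qubits(t_bits, gamma):
    # One narrow-strip term: |T| qubits for the cell value plus gamma qubits
    # for the 1-d index (up to 2^gamma addressable cells in total).
    return t_bits + gamma

# Example: 8-bit cells, a 3-d array, gamma = 20.
print(wide_strip_qubits(8, 3, 20))   # 68 qubits
print(narrow_strip_qubits(8, 20))    # 28 qubits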
What is the quantum memory consumption for storing a dense or sparse array A?
If RAQM is used, we need the same number ‖A‖ of qubits as for a quantum register to keep A in RAQM in Equation (10) or Equation (11), regardless of whether the strip contains NA values or not. Moreover, if γ qubits are used for a quantum pointer and |T| qubits for a cell value of A, then in a single RAQM cell of ‖A‖ = 2γ + |T| qubits we can store any 2-d array with from 0 up to 2^{2γ} cells of type T. We do not need to devise compaction or packing schemes for arrays (tensors) of any dimensionality in RAQM to save space. A processing operation on such a strip will also require time proportional to ‖A‖, regardless of the array (tensor) size. A sufficiently large γ must be supported by RAQM to keep the whole array (tensor) in a single cell. Otherwise, an array (tensor) must be split, and respective array (tensor) partitioning techniques for RAQM will be required, Section 6.1.3.
If QRAM is used, we may need the same number of cells, |A|, to store array A as in the classical model. Logically, QRAM can be similar to a row-wise layout. Figure 9b lays out the values of A in a single row (shown split into two rows). The numbers in the top row are cell indexes, while the numbers below the indexes are the respective cell values (colored blue). At a glance, there is no benefit in using QRAM for storing arrays: in terms of occupied space, QRAM is no more efficient than classical RAM. However, an appropriate QRAM can provide the following very attractive benefits:
  • read A in O(n) for |A| = 2^n, i.e., in O(log|A|), compared to O(2^n) in the classical case: an exponential speedup
  • read arrays of significantly different sizes with almost the same performance: O ( 64 ) may not be too different from O ( 32 ) given the appropriate hidden constants
  • atomically read any list of indexes in a single go by providing a superposition of QRAM indexes for ρ
  • atomically write/update any list of values indexed by a superposition, which can contain non-contiguous index values
All of the above can be taken into consideration to develop appropriate algorithms for Quantum Array (Tensor) DBMSs.

4.5.2. Narrow Quantum Strips

Although |x_1⟩|x_2⟩⋯|x_N⟩ can be convenient for algorithms, it may pose significant qubit requirements for a very large N. An alternative quantum array layout is a potentially more compact 1-d indexed strip:
$$A\langle l_1, l_2, \ldots, l_N\rangle = \sum_j \psi_j\, |j\rangle\, |A[x_1, x_2, \ldots, x_N]\rangle, \quad \text{where } j = \sum_{i=1}^{N-1} x_i\,|l_i| + x_N \qquad (12)$$
Here j is the 1-d index of a cell of A. We computed ‖A‖ in Section 4.5.1. It is straightforward to convert a 1-d index to the sequence of values that forms a higher-dimensional index. The narrow ‖A‖ can be smaller than the wide ‖A‖, but the burden of index conversion, although straightforward, falls on oracles. For the RAQM and QRAM cases, the benefits of using narrow strips are similar to those of wide strips, but can depend on ‖A‖.
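The index conversion itself is ordinary integer arithmetic. A minimal classical sketch (using the standard row-major rule, consistent with the row-wise layout in Figure 9b, where cell (3, 0) maps to 12) could look as follows:

def to_narrow(xs, dims):
    # N-d index -> 1-d index (row-major), e.g., (3, 0) -> 12 for a 4x4 array.
    j = 0
    for x, d in zip(xs, dims):
        j = j * d + x
    return j

def to_wide(j, dims):
    # 1-d index -> N-d index, e.g., 13 -> (3, 1) for a 4x4 array.
    xs = []
    for d in reversed(dims):
        xs.append(j % d)
        j //= d
    return tuple(reversed(xs))

print(to_narrow((3, 0), (4, 4)))  # 12
print(to_wide(13, (4, 4)))        # (3, 1)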
As we have already noted, we need a new name, “strip”, for the quantum layouts to emphasize their difference from chunks or tiles. Recall that we call the layouts in Equations (9) and (12) the wide and narrow layouts, respectively. Both layouts can be used for different purposes, and new ones can also be devised.
In addition, note that QGantt charts visualize strips very clearly and explicitly, for example in Figure 4. Hence, besides other aspects, QGantt charts are also appropriate for investigating quantum memory and Quantum Array (Tensor) DBMS aspects.

4.5.3. Quantum Strip Layouts

In this article, we start from basic in-memory array (tensor) layouts due to the following reasons. First, state-of-the-art literature has almost no ideas on the algorithmic aspects of using quantum memory, as we have already noted. Second, before we move to advanced in-memory array (tensor) packing or compaction techniques, we must carefully investigate the advantages and disadvantages of a basic layout. Therefore, we start from QRAM (the most challenging compared to RAQM) and the row-wise layout that contains NA values (Figure 9b).
Recall the two interesting questions on whether we should allow NA in cell indexes (Section 4.4.2) and/or in cell values (Section 4.4.1). In general, we can store NA values in QRAM. Due to the logarithmic cost of ρ , there is no point in applying any array compaction schemes as they will not change the I/O memory runtime significantly unless we care about the memory footprint. We leave devising advanced array layouts for future work.
However, ρ and any operation can produce NA in a strip. We may (1) simply skip NA values in algorithms, or (2) utilize NA elimination techniques to clean up a strip and simplify its subsequent processing. If the hidden constants in £(ρ) or £(ω) are high, NA elimination will also be useful for reducing I/O time.

4.5.4. NA Elimination Technique (Deleting Terms from Quantum Superposition)

Let us present a technique to remove terms that satisfy a predicate from a quantum superposition. It is an important auxiliary operation that can be applied to a strip to avoid NA in an index or value. Ideas from this technique are also used in other Quantum Array (Tensor) DBMS approaches, including quantum array indexing algorithms (Section 4.9).
The problem statement is as follows. Without loss of generality, a narrow quantum strip comes as input, Equation (12). The approach is straightforward to extend to wide quantum strips, Equation (9). A narrow quantum strip without the terms ψ_p|p⟩|NA⟩ comes as output. For example, consider array A(lat, lon), Equation (11). Let us rewrite its definition in the narrow fashion that corresponds to the row-wise layout in Figure 9b:
$$\begin{aligned} A(lat, lon) = {} & \psi_0\,|0\rangle|1\rangle + \psi_1\,|1\rangle|0\rangle + \psi_2\,|2\rangle|NA\rangle + \psi_3\,|3\rangle|NA\rangle + \psi_4\,|4\rangle|0\rangle + \psi_5\,|5\rangle|1\rangle + \psi_6\,|6\rangle|NA\rangle + \psi_7\,|7\rangle|1\rangle + {} \\ & \psi_8\,|8\rangle|0\rangle + \psi_9\,|9\rangle|NA\rangle + \psi_{10}\,|10\rangle|0\rangle + \psi_{11}\,|11\rangle|NA\rangle + \psi_{12}\,|12\rangle|1\rangle + \psi_{13}\,|13\rangle|NA\rangle + \psi_{14}\,|14\rangle|1\rangle + \psi_{15}\,|15\rangle|0\rangle \end{aligned} \qquad (13)$$
The strip in Equation (13) contains NA values which should be excluded from the resulting strip (amplitudes may change):
$$\begin{aligned} A(lat, lon) = {} & \psi_0\,|0\rangle|1\rangle + \psi_1\,|1\rangle|0\rangle + \psi_4\,|4\rangle|0\rangle + \psi_5\,|5\rangle|1\rangle + \psi_7\,|7\rangle|1\rangle + {} \\ & \psi_8\,|8\rangle|0\rangle + \psi_{10}\,|10\rangle|0\rangle + \psi_{12}\,|12\rangle|1\rangle + \psi_{14}\,|14\rangle|1\rangle + \psi_{15}\,|15\rangle|0\rangle \end{aligned} \qquad (14)$$
We have previously noted that we may need the removal of terms with NA in indexes or values for convenience of algorithms or possible reduction of load on ρ and ω , Section 4.5.3.
The removal of an arbitrary term from an arbitrary superposition is not trivial. If we just remove a term without adjusting the amplitudes of the other terms, the normalization condition Σ_k|ψ_k|² = Σ_j|ψ_j|² = 1 will no longer hold and the algorithm will not work on a quantum computer. According to a recent survey [169], only one algorithm exists to nearly completely delete a term from a superposition [170]. However, as can be concluded from the previous sentence, a complete deletion is not guaranteed. The approach [170] has the following limitations. The algorithm [170]:
  • removes only 1 term at a time, but we need to remove all terms with NA, and we do not know the quantity of such NA terms
  • assumes that all terms are present in the superposition (for γ qubits there should be 2^γ terms in the input superposition): in a quantum strip we may not have all terms after a sequence of operations, or simply because its length is less than 2^γ and is generally unknown
  • assumes that the terms have the same amplitudes (ψ_j = ψ_p for all j, p with j ≠ p), which may not hold and is hard to guarantee in practice due to the aforementioned reasons
  • needs to perform floating-point operations, which are hard to implement without bias; nearly complete deletion means that the certainty of the operation is less than 100% and, therefore, subsequent algorithms still need to account for the deleted value as if it had not been deleted; in our case, algorithms must still perform additional checks to avoid NA even after deleting them: this renders such a deletion operation pointless
Instead of tweaking the amplitudes to make the terms with NA cell values negligible, the core idea behind our NA elimination technique is to replace the index p and the value NA in all terms ψ_p|p⟩|NA⟩ with the index j and value of some other term ψ_j|j⟩|value⟩ taken from the same strip, where value ≠ NA. Our algorithm utilizes QRAM.
Let us introduce some non-limiting assumptions before we present our approach. First, we will work with 1-based, not 0-based, indexed arrays, similar to Fortran and Pascal. This is not a limitation: we can use 1-based indexing or increment cell indexes by 1 before applying the NA elimination technique. Note that in this case we do not have a term ψ_0|0⟩|·⟩ in the input strip. Second, NA will be a constant value with 1 in each qubit, i.e., NA = |1⟩^⊗|T|. This is a standard practice in classical Array (Tensor) DBMSs. Finally, we take into account the properties of ρ and ω, Section 4.4. NA in an index and/or value can sometimes be beneficial. However, if we would like to remove all terms with NA cell values from a quantum strip S, the algorithm is as follows:
  • Write S into QRAM: O(log|S|)
  • Collapse all terms ψ_p|p⟩|NA⟩ in the input strip S into a single term ψ_0|0⟩|NA⟩ using the circuit in Figure 10: O(|T| + γ)
  • Perform Quantum Amplitude Amplification (QAA) to decrease ψ_0 [171]: O(√|S|); heuristics can also be applied, see [171]
  • Measure S to obtain any term ψ_j|j⟩|value⟩ where value ≠ NA: O(1)
  • Read S from QRAM: O(log|S|)
  • Repeat step №2: O(|T| + γ)
  • Replace ψ_0|0⟩|NA⟩ with ψ_j|j⟩|value⟩ using the same ideas as in the circuit in Figure 10: ψ_0|0⟩|NA⟩ and ψ_j|j⟩|value⟩ will merge into (ψ_j + ψ_0)|j⟩|value⟩, O(|T| + γ)
Note that we can create a mask using strip S without NA values: we can create a superposition Σ_j ψ_j|j⟩ (without cell values), store it in a single RAQM cell, and read the strip from QRAM skipping the cells that contain NA. Step №4 relies on the fact that ψ_0 will be very small. However, we must still check the value after the measurement to ensure it is not equal to NA. Step №3 is the most time-consuming and challenging, as it has a square root runtime. However, to date, it is the most generic way to de-amplify a superposition term.
The presented technique is flexible and can be combined with another algorithm that finds a term to replace ψ_0|0⟩|NA⟩. If we do use QAA, we may choose not to collapse all terms with NA into a single term, provided we do not care about the impact on the ω operation and are not debugging. However, if there is another heuristic algorithm that is faster than QAA in a particular setting and yields a term with the max index to replace ψ_0|0⟩|NA⟩, then we do need the collapsing procedure to guarantee that the index of the NA term is minimal. We may also skip step №6 by modifying step №7: replace all ψ_p|p⟩|NA⟩ with ψ_j|j⟩|value⟩. Therefore, the presented algorithm is generic and can be configured according to the goals and available heuristics. Note that the circuit in Figure 10 is quite important in diverse settings. For example, ideas from this algorithm, and in particular the circuit in Figure 10, are used in Section 5.2 without QAA to efficiently answer both dimension- and value-based queries.
If we would like to save time, we may be satisfied with a compromise solution: keep NA in ψ_0|0⟩|NA⟩ and do not seek another value to replace NA. We already save time for ω, as only one term with an NA value will be written. Subsequent algorithms will need to skip only one term with the already known index 0 rather than checking each term for NA (quantum parallelism can skip every term with NA, but conceptually it is easier to skip one term than an unknown number of them). Finally, we may for some reason know a value ≠ NA and its index beforehand, so it is possible to skip step №3.
To summarize, the asymptotic runtime cost of the NA elimination algorithm with all its steps is
$$\begin{aligned} \pounds(\mathrm{NA\ eliminate\ complete}) &= O(\log|S|) + O(|T| + \gamma) + O(\sqrt{|S|}) + O(1) + O(\log|S|) + O(|T| + \gamma) + O(|T| + \gamma) \\ &= O(\log|S|) + O(|T| + \gamma) + O(\sqrt{|S|}) = O(\sqrt{|S|}) \end{aligned} \qquad (15)$$
While getting £(NA eliminate complete) = O(√|S|) in Equation (15), we assume that although we need to perform the read/write operations in O(log|S|), the execution of QAA in O(√|S|) time may still dominate the overall runtime, Section 6.1.6.
As noted above, if we omit steps №2 and №6, asymptotically we do not save time:
$$\begin{aligned} \pounds(\mathrm{NA\ eliminate\ reduced}) &= O(\log|S|) + O(\sqrt{|S|}) + O(1) + O(\log|S|) + O(|T| + \gamma) \\ &= O(\log|S|) + O(|T| + \gamma) + O(\sqrt{|S|}) = O(\sqrt{|S|}) \end{aligned} \qquad (16)$$
If we are satisfied with only one term still containing NA, namely ψ_0|0⟩|NA⟩, we need only step №2, which uses the circuit in Figure 10. In this case
$$\pounds(\mathrm{NA\ eliminate\ one}) = O(|T| + \gamma) \qquad (17)$$
which is the fastest version. The same asymptotic runtime applies to the case when a non-NA cell value and its index are known beforehand. Lastly, if there is a heuristic that finds such an index and value asymptotically faster than QAA, the asymptotic runtime depends on the complexity of that heuristic, but should be lower than O(√|S|).
The procedure for step №2 takes the superposition as input (pairs of cell values and indexes) and requires γ + 1 ancillary qubits initialized to |0⟩, including a flag qubit and γ qubits to perform the comparison and elimination, Figure 10.
The circuit in Figure 10 begins by comparing cell values to NA. It uses the Toffoli, or CCNOT, gate on 3 qubits. The target qubit (marked by ⊕) is inverted iff both control qubits (marked by •) are equal to |1⟩. Recall that NA consists entirely of qubits equal to |1⟩. Therefore, ancillary qubit №1 will become |1⟩ iff both cell value qubit №1 and qubit №2 are |1⟩. As all cell value qubits must be equal to |1⟩ to represent the NA value, we involve the result of the previous CCNOT gate: ancillary qubit №1 together with cell value qubit №3, and so on. Ancillary qubit №(γ − 1) will be equal to |1⟩ iff the cell value is NA. The diagram shows the most complex case, when |T| = γ. We copy the value of this qubit to the flag qubit, as we will need ancillary qubit №(γ − 1) to copy the cell index.
Afterwards, we clean up all ancillary qubits by setting their values to |0⟩ using the CNOT gate with the flag as the control qubit. Then we copy the values of the cell index qubits to the ancillary qubits with CNOT gates. Finally, a series of γ CCNOT gates, with the flag as one of the control qubits and each ancillary qubit in turn as the other, sets the cell index qubits to |0⟩ if the flag qubit is |1⟩ (the cell value is NA). If the flag qubit is |1⟩ and ancillary qubit №i is also |1⟩, then cell index qubit №i is also |1⟩, as we have previously copied its value to ancillary qubit №i. Hence, cell index qubit №i will be inverted to |0⟩. Of course, no action is taken if ancillary qubit №i is |0⟩, which then also equals cell index qubit №i.
The circuit in Figure 10 runs in O(|T| + γ) time, i.e., in time independent of the array size: it requires (|T| + γ) CCNOT gates and (|T| + γ + 1) CNOT gates.
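To make the circuit’s effect easier to follow, the sketch below classically emulates its action on a single basis state: the AND-chain over the value qubits raises the flag iff the value is NA (all ones), and the flag then zeroes the index qubits. The real circuit performs the same logic reversibly and on all terms of the superposition at once; the function below only mirrors the end result:

def collapse_na_term(index_bits, value_bits):
    # Flag = AND of all value qubits: 1 iff the value equals NA = |1...1>.
    flag = int(all(b == 1 for b in value_bits))
    if flag:
        # Zero the index qubits, collapsing the term towards |0>|NA>.
        index_bits = [0] * len(index_bits)
    return index_bits, value_bits, flag

# Index 13 = 1101 holds NA = 1111 in the running 4x4 example (Figure 9b).
print(collapse_na_term([1, 1, 0, 1], [1, 1, 1, 1]))  # ([0, 0, 0, 0], [1, 1, 1, 1], 1)
print(collapse_na_term([0, 1, 0, 1], [0, 1, 1, 1]))  # unchanged, flag = 0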

4.6. Quantum Array (Tensor) Hyperslabbing

Hyperslabbing is a fundamental operation in Earth data engineering. For data in Figure 7 and Figure 8, hyperslabbing can help to select data within certain geographical areas.
For example, W28[480:540, 0:141] hyperslabs W28 (sea surface wind speed) for the area between 30°…45° N (latitudes) and 0°…35° E (longitudes), which approximately corresponds to the Mediterranean Sea. Note that lat_W[480] = 30, lat_W[540] = 45, lon_W[0] = 0, and lon_W[141] = 35, Figure 11.
For an array (tensor) with more dimensions, e.g., time and/or height in addition to latitude and longitude, hyperslabbing can also extract a 3-d cube or an N-d array that corresponds to a given spatio-temporal interval.
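For intuition, the classical counterpart of such a hyperslab is plain array slicing. The sketch below assumes a hypothetical 0.25° global grid with a (lat, lon) shape of (721, 1440); the shape, the random values, and the coordinate mapping are illustrative assumptions, not the exact layout of the wind speed product:

import numpy as np

# Hypothetical global 0.25-degree sea surface wind speed grid (lat, lon).
W28 = np.random.rand(721, 1440)

# Classical hyperslab W28[480:540, 0:141] with inclusive upper bounds,
# roughly the Mediterranean Sea area discussed above.
med = W28[480:541, 0:142]
print(med.shape)  # (61, 142)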
Let us complement the model in Section 4.1.2 with quantum hyperslabbing using classical indexes (QHCI):
$$A[x_1^b : x_1^e, \ldots, x_N^b : x_N^e] = \sum_j \psi_j\, |x_1\rangle |x_2\rangle \cdots |x_N\rangle\, |A[x_1, x_2, \ldots, x_N]\rangle \qquad (18)$$
where x_i ∈ [x_i^b, x_i^e] ⊆ D_i and i ∈ [1, N]. Despite the seemingly simple definition of hyperslabbing, its efficient execution is not simple at all: the performance varies by orders of magnitude depending on the array layout, which is often tuned by trial and error [17].
Let us further complement the model in Section 4.1.2 with Quantum Hyperslabbing using Quantum Indexes (QHQI):
$$A[\overline{x_1}, \ldots, \overline{x_N}] = \sum_j \psi_j\, |x_1\rangle \cdots |x_N\rangle\, |A[x_1, x_2, \ldots, x_N]\rangle \qquad (19)$$
for each x_i such that |⟨x_i|x̄_i⟩|² ≠ 0, where x̄_i is a superposition of cell indexes that uses γ qubits. The main difference of quantum indexes compared to classical indexes is that we can conveniently and concisely express non-contiguous array (tensor) strips with quantum indexes. For example, consider an array A⟨3, 3⟩. The following will extract 4 non-neighboring (non-contiguous) values A[0,0], A[0,2], A[2,0], and A[2,2]:
$$A[\psi_0^1|0\rangle + \psi_2^1|2\rangle,\; \psi_0^2|0\rangle + \psi_2^2|2\rangle] = \psi_{0,0}\,|0\rangle|0\rangle|A[0,0]\rangle + \psi_{0,2}\,|0\rangle|2\rangle|A[0,2]\rangle + \psi_{2,0}\,|2\rangle|0\rangle|A[2,0]\rangle + \psi_{2,2}\,|2\rangle|2\rangle|A[2,2]\rangle \qquad (20)$$
Quantum tensor indexes serve as lists of classical values that determine the positions of the cells subject to hyperslabbing. We can store a quantum index x̄_i in a single RAQM cell (we need N such cells for all N indexes) or use an array of indexes in QRAM.
Classical Challenge 2. How to efficiently hyperslab arrays?
Quantum Look at Challenge 2. Strips can stabilize the runtime and enable exponential speedups. The intuition behind the approach presented below is that we can create the required superposition using quantum operators. In the case of QHQI, the problem statement is quantum-native and we already have the required superpositions. Hence, let us devise the algorithm for QHCI.
Suppose that π has γ qubits, i.e., π can address 2^γ memory cells, Table 1. To hyperslab, we need a superposition of an index range. W.l.o.g., consider a 1-d array A⟨l⟩ and its hyperslab A[x_b : x_e], where x_e − x_b + 1 = 2^n and γ ≥ n.
Next, let A[0] have the address A_π. Then,
$$A[x_b : x_e] = \rho\Big( O_h(A_\pi + x_b)_\gamma \big( |0\rangle^{\otimes(\gamma - n)} \otimes H^{\otimes n}|0\rangle^{\otimes n} \big) \Big) \qquad (21)$$
where O_h is an oracle for standard addition: O_h : |j⟩ ↦ |j + A_π + x_b⟩.
The hyperslabbing algorithm for a 1-d array consists of 3 stages: prepare the address register with the Hadamard operator in O(n), run the addition oracle in O_+^æ(γ), and perform the read operation in O(γ). Therefore, the asymptotic runtime cost of quantum hyperslabbing is
$$\pounds(\mathrm{hyperslab}) = \pounds(H) + \pounds(O_h) + \pounds(\rho) = O(n) + O^{\text{æ}}_{+}(\gamma) + O(\gamma) = O(n) + O(\gamma) + O(\gamma) = O(\gamma) \qquad (22)$$
instead of O(2^n) in the classical case. Typically 2^n ≫ γ. In Equation (22), O_+^æ(γ) is the cost of the addition function or oracle acting on O(γ) qubits, Table 1. We expect O_+^æ(γ) to be relatively low for the approach to be efficient. In Equation (22), we assumed O_+^æ(γ) = O(γ).
Indeed, quantum circuits already exist and are being constantly optimized for arithmetic operations [172,173,174]. For example, O h can be implemented as a quantum circuit. Multiple indicators are utilized to reason about the performance of quantum circuits, e.g., circuit depth and the number of ancillary qubits. However, the values of key performance indicators for contemporary quantum addition circuits have linear dependence on the number of qubits of their operands. Specific implementation details (that is, quantum circuits) of quantum addition and other arithmetic operations, as well as exact formulas that compute the performance indicators of the circuits, can be found in [172,173,174]. We can reliably assume that O + æ ( γ ) = O ( γ ) . Therefore, an efficient modern implementation of O h has no impact on the asymptotic runtime cost of the hyperslabbing algorithm.
Hyperslabbing is a core array (tensor) operation and frequently occurs as a sub-operation of other array (tensor) operations. Hence, let us illustrate it using a QGantt chart to facilitate forming a better intuition. We refer to Figure 9a and hyperslab A[3, 0:3] to provide concrete indexes and numbers for a clear description. Suppose that A_π = 0 and γ = 4. If we lay out A in a row-wise manner, x_b = 12. The QGantt chart helps to develop the intuition behind the approach and to understand the hyperslabbing algorithm, Figure 12.
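Before walking through the chart, it may help to see the purely classical arithmetic that the superposition encodes: the Hadamard-prepared register enumerates the 2^n offsets, and the addition oracle shifts them by A_π + x_b. A tiny sketch of that index set:

# Linear indexes touched by the hyperslab A[3, 0:3] in Figure 9a
# (row-wise layout, A_pi = 0, x_b = 12, n = 2).
offsets = range(2 ** 2)                  # what the 2-qubit Hadamard register enumerates at once
indexes = [0 + 12 + k for k in offsets]  # the addition oracle O_h(A_pi + x_b)
print(indexes)                           # [12, 13, 14, 15]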
Figure 12 displays 6 superpositions. We need 4 steps in the presented case to obtain the hyperslab A[3, 0:3] (column № 4 just contains the decimal representation of the values from column № 3). The resulting hyperslab contains 4 elements, so n = 2: two qubits are sufficient to carry the indexes of the sought cells. Hence, we start from the underlined 00 = |0⟩^⊗2, see (1,1): row (term) № 1 and column (superposition) № 1. Both elements (zero symbols) in 00 are underlined because both of them participate in the next operation: they are inputs to the Hadamard operator H^⊗2.
The expression H^⊗2|0⟩^⊗2 appears in the head of column № 2 to give a hint about how the terms of the superposition in column № 2 were produced. Two red dotted lines with red arrows at their ends point to the qubits that were affected by the Hadamard operator applied to (1,1). All terms in column № 2 were derived from (1,1), as shown by the dotted gray lines. Unlike the previous charts (e.g., Figure 5), the lines do not have arrows: the chart in Figure 12 is much larger. Therefore, the dotted line style and the absence of arrows are one of many possible examples of QGantt charts’ flexibility in reducing visual load.
Figure 12. QGantt chart for hyperslabbing A[3, 0:3] in Figure 9a.
All qubits in each term of column № 2 are also underlined to indicate that they were generated as a result of the operation that we have already described (the Hadamard operator H 2 ). However, underlining is not sufficient in Figure 12 compared to the previous QGantt charts. Therefore, small letters H are placed over each qubit in each term of column № 2. First, to avoid scrolling (it is possible to put H only over the first term, but there can be many of them in a single column). Second, to give a clue that they result from applying the Hadamard operator.
One of the reasons for placing an operator symbol over a qubit of a term in a superposition is to distinguish between different operations and clearly indicate which operation produced which qubit. If only underlining is used, such indications are not possible when multiple or different operations take place. For example, terms in column № 2 were produced by H 2 while terms in column № 3 are the result of the tensor product. A small symbol ⊗ over each of the two leftmost qubits in column № 3 indicates that only these two qubits are new while the remaining two rightmost qubits were copied from the respective terms in column № 2 (inputs and outputs can be tracked using dotted gray lines).
In column № 3 we append the remaining qubits to form a quantum pointer for the ρ operation. In this example, γ = 4 and γ − n = 4 − 2 = 2, so we append |0⟩^⊗2 = |00⟩. New qubits are marked with a small ⊗ symbol on top of them.
Column № 4 displays decimal values of the terms in column № 3.
Column № 5 presents the result of applying the oracle O_h(A_π + x_b)_γ = O_h(0 + 12)_4 in this example, i.e., adding 12 to each term in column № 4. It is clear that this oracle can be implemented even on current quantum machines.
Finally, column № 6 contains the result of reading the hyperslab from QRAM. For the ρ operation, column № 5 serves as a quantum pointer that contains the indexes of the array cells we would like to read. As a result, the ρ operation produced a superposition where the index values are entangled with the respective cell values, column № 6. For example, the index of the only NA cell in the example hyperslab is 13, so we have the term ½|13⟩|NA⟩ in column № 6. Note that A[3, 0:3] = {1, NA, 1, 0}.
It is straightforward to extend the algorithm for hyperslabbing a higher-dimensional array (tensor) located in QRAM.

4.7. Quantum Array (Tensor) Joins

Array joins are significantly different from relational DBMS (RDBMS) joins. They also have subtypes and enjoy attention similar to RDBMS joins, as an efficient array join is a challenging task [17,59,82]: even specialized array indexing techniques are designed to accelerate array joins [8,82].
Figure 13. The absolute values of sea surface wind speed differences between midnight and 06:00 o’clock UTC+00 of the 28th of August, 2005 that are greater or equal to 5 m/s.
Consider a popular value similarity join, which takes two N-d arrays (tensors) A, B and a predicate θ as input, such that A(d_1, d_2, …, d_N): T and B(d_1, d_2, …, d_N): T. Note that A and B have the same dimensions to make the join possible. The join result C = A ⋈_θ B is also an N-d array (tensor) C(d_1, d_2, …, d_N): {NA, 0, 1} whose cell values are defined in Equation (23).
$$C[x_1, x_2, \ldots, x_N] = \begin{cases} NA, & \text{if } A[x_1, x_2, \ldots, x_N] = NA \text{ or } B[x_1, x_2, \ldots, x_N] = NA \\ 1, & \text{if } \theta(A[x_1, x_2, \ldots, x_N],\, B[x_1, x_2, \ldots, x_N]) \text{ is true} \\ 0, & \text{otherwise} \end{cases} \qquad (23)$$
It is also possible to replace 0 with NA or to introduce additional values when A or B contains NA, e.g., yield 2 if A [ x 1 , x 2 , , x N ] = NA and 3 if B [ x 1 , x 2 , , x N ] = NA .
Array similarity joins appear in Earth data engineering in diverse ways. Consider our illustrative dataset that contains the sea surface wind speed, Section 4.3. Let us take two subsequent 2-d arrays of wind speed for the 28th of August, 2005: at midnight UTC+00 (Figure 7) and at 06:00 UTC+00 on the same day. To mark the places on the map where wind speed values have changed by over 5 m/s during this 6-h interval, we can perform the value similarity join with θ: a, b ↦ true if |a − b| ≥ 5, false otherwise. Figure 13 plots the respective differences of values. To get a value similarity join result, substitute all non-NA values by 1 and treat NA values (the gray area) accordingly in Figure 13. Note that the user can tune the wind speed threshold multiple times, triggering repeated computations. For example, the user can try 1, 3, 5, 10 m/s and other values to better explore the difference. This tuning use-case scenario can be time-consuming, so the join algorithm must be fast.
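A classical NumPy sketch of this value similarity join (the toy wind speed values are hypothetical; NaN plays the role of NA, following Equation (23)):

import numpy as np

def similarity_join(A, B, threshold=5.0):
    # theta(a, b) := |a - b| >= threshold; result cells are 1, 0, or NA.
    C = np.where(np.abs(A - B) >= threshold, 1.0, 0.0)
    C[np.isnan(A) | np.isnan(B)] = np.nan   # NA if either input cell is NA
    return C

A = np.array([[3.0, 10.0], [np.nan, 7.0]])  # wind speed at 00:00 (toy values)
B = np.array([[9.5,  8.0], [4.0,   1.5]])   # wind speed at 06:00 (toy values)
print(similarity_join(A, B))
# [[ 1.  0.]
#  [nan  1.]]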
Classical Challenge 3. How to efficiently perform array joins?
Let us state the problem using a toy example to track the specific cell values, their indexes, and clearly illustrate the quantum join algorithm. W.l.o.g., consider two 2-d arrays: A ( l a t , l o n ) and B ( l a t , l o n ) , Figure 14a and Figure 14b respectively, such that | l a t | = | l o n | = 3 .
Next, let θ: a, b ↦ true if |a − b| ≥ 2. The value similarity join A ⋈₂ B is shown in Figure 14c. Cells of A, B and A ⋈₂ B are dashed if they satisfy the predicate. Empty cells in Figure 14c may be equal to 0 or NA.
Figure 14. Illustration of the value similarity join A ⋈₂ B.
Quantum Look at Challenge 3.
Let us extend ρ to read memory cells addressed only by the first γ qubits (cell indexes |j⟩) of π_a = Σ_j ψ_j|j⟩|·⟩ and to entangle the read values with π_a as well (a kind of read-and-append): ρ_a(π_a) = Σ_j ψ_j|j⟩|·⟩|D_j⟩, Table 1. Note that the value combinations of these qubits will remain unique if we add or subtract a constant to or from the first γ qubits. We hope that the hardware community will consider realizing ρ_a to boost Array (Tensor) DBMSs and other tensor systems.
Now, with the inherent quantum parallelism
$$\rho_a\Big( O_h(B_\pi - A_\pi)_{[\gamma,\cdot]}\; \rho(A_\pi) \Big) = \sum_x \psi_x\, |B_\pi + x\rangle\, |A[x]\rangle\, |B[x]\rangle = \rho_{A,B} \qquad (24)$$
where A_π and B_π are the addresses of A[0] and B[0] respectively, A and B are laid out in QRAM in a row-wise order similar to Figure 9, ρ(A_π) is the result of the read operation (all cells of A), and O_h(·)_[γ,·] acts only on the first γ qubits, Table 1.
Note that this approach is also flexible: if ρ for A omits ψ_j|j⟩|NA⟩ for some reason, B[j] will also be skipped. Hence, we do not need an extension where ρ reads addresses from one register and attaches data to another; we can use the same quantum register. We can also optimize the join via reordering (join B with A or A with B) in future work.
Finally,
$$A \bowtie_\theta B = \omega\Big( O_h(\pi_{\bowtie} - B_\pi)_{[\gamma,\cdot]}\; O_{\bowtie[\cdot,\,2|T|]}\; \rho_{A,B} \Big) \qquad (25)$$
where π_⋈ is the address of the join result A ⋈_θ B and O_⋈[·, 2|T|] acts on the last 2|T| qubits (pairs of values from A and B), Table 1. Here O_⋈ is defined as follows (it is straightforward to extend O_⋈ to account for NA):
$$O_{\bowtie} : |A[x]\rangle\,|B[x]\rangle \mapsto \begin{cases} |1\rangle, & \text{if } \theta(A[x], B[x]) \\ |0\rangle, & \text{otherwise} \end{cases} \qquad (26)$$
Let us analyze the asymptotic runtime complexity of the quantum array (tensor) join algorithm. We start from preparing the quantum register to perform the read operation which retrieves array A. The preparation takes O_+^æ(γ) time. If we further assume the same cost of the arithmetic addition operation as for hyperslabbing, Section 4.6, we can run the preparation in O(γ) time. As agreed, the reading process takes O(γ) time.
Now we have a superposition that contains the elements of A. The next step is to modify this superposition for the read-and-append operation. The asymptotic runtime cost of these modifications is similar to the preparation for reading A: O_+^æ(γ), which can be further refined to O(γ), as in the previous preparation step. If read-and-append has an asymptotic complexity similar to the usual read, we will obtain ρ_{A,B} also in O(γ) time.
We are ready to perform the join logic in O_{⋈₂}^æ(|T|) time (two cell values and a constant, each stored in |T| qubits, plus ancillary qubits). In this specific example, we need to subtract cell values and compare them to a constant in O_−^æ(|T|) + O_compare^æ(|T|) time (each operation runs on a pair of values, each stored in |T| qubits, plus ancillary qubits). Again, with contemporary quantum circuits for subtraction and comparison [172,173,174], it is possible to conclude that O_−^æ(|T|) + O_compare^æ(|T|) = O(|T|).
Figure 15. QGantt chart for creating the value similarity join A ⋈₂ B in Figure 14.
Finally, we prepare the quantum superposition for the write operation in O_+^æ(γ) = O(γ) time and write the array (tensor) join result into QRAM in O(γ) time. Hence,
$$\begin{aligned} \pounds(A \bowtie_\theta B) &= O^{\text{æ}}_{+}(\gamma) + O(\gamma) + O^{\text{æ}}_{+}(\gamma) + O(\gamma) + \pounds(\theta) + O^{\text{æ}}_{+}(\gamma) + O(\gamma) \\ &= O(\gamma) + O(\gamma) + O(\gamma) + O(\gamma) + \pounds(\theta) + O(\gamma) + O(\gamma) = O(\gamma) + \pounds(\theta) \end{aligned} \qquad (27)$$
To continue analyzing our example in this subsection, we can additionally specify £(θ), as discussed above:
$$\pounds(A \bowtie_2 B) = O(\gamma) + \pounds(\theta) = O(\gamma) + O^{\text{æ}}_{\bowtie_2}(|T|) = O(\gamma) + O^{\text{æ}}_{-}(|T|) + O^{\text{æ}}_{compare}(|T|) = O(\gamma) + O(|T|) = O(\gamma + |T|) \qquad (28)$$
For example, γ = 32 and |T| = 8 (one byte). Recall that log|A| = log|B| = n ≤ γ, so compared to the classical O(2^n) cost we obtain an exponential speedup.
The respective QGantt chart for illustrative matrices in Figure 14 is in Figure 15. We have already examined the complexity of ρ and arithmetic operations in previous examples. Therefore, we use decimal instead of binary numbers and do not show all γ qubits in the QGantt chart explicitly. We also omit phase (amplitude) values for superposition terms as we do not modify them. We have shown how ρ and hyperslabbing work in Figure 12, so we immediately start from the result of ρ in Figure 15.
Column № 1 shows the result of reading A from QRAM, 9 cell values in total. We assumed that A π = 1 , the index of A [ 0 ] in QRAM (this is indicated at the top of column № 1). In other words, column № 1 contains the value of ρ A π (Equation (24)) as stated at the top of column № 1, below A π = 1 . The top row of a QGantt chart is a place where it is possible to place auxiliary information for each column.
Each term in column № 1 consists of 2 parts: the left decimal number is the cell index and the right is the cell value of A for that index. We put a small letter A on top of each cell value to make the semantics of the value clear. It is especially useful when cells of two or more arrays are present in the same superposition term like in column № 3. All indexes of A that were read by ρ are sequential. Hence, + 1 appears above each index value in column № 1. Finally, the asymptotic complexity of the read operation is O ( γ ) as indicated under the curly brace for column № 1, below the t i m e axis.
To get the superposition in column № 2, each term in column № 1 undergoes the transformation by the oracle O_h(B_π − A_π), Equation (24). We assume that B_π = 10 (see the top of column № 2). The oracle and “+(10 − 1)” annotate the dotted red line with an arrow, which illustrates the transformation of (1,1) to (1,2): 1 + (10 − 1) = 10. This step prepares for the next read-and-append operation. The asymptotic cost of transforming the superposition in column № 1 to column № 2 is O_+^æ(γ), as stated under the curly brace for column № 2, below the time axis.
Note that to add qubits, an oracle O_h(x) needs some ancillary qubits: to store x, intermediate numbers, and other information. The same goes for O_{⋈₂}. Note that O_⋈[·, 2|T|] in Equation (25) does not account for ancillary qubits and also focuses only on the input cell values stored in 2|T| qubits. However, it is clear that these oracles rely on standard quantum computing transformations. Therefore, instead of discussing well-known arithmetic implementations that may obscure the main ideas behind array operations, we concentrate on the Array (Tensor) DBMS algorithms.
Column № 3 plots the result of the ρ a operation (read-and-append), completing the first step of the join operation, Equation (24). The asymptotic complexity appears below the t i m e axis and is the same as for the ρ operation. The last two parts of each term in column № 3 contain cell values of arrays A and B. Small letters A and B are placed above respective cell values in each term of column № 3 to facilitate visual matching of values with the array from which they were retrieved. The first part of each term in column № 3 is equal to the cell value index in QRAM for array B. Now we are ready to perform the join.
Column № 4 contains the result of applying O_{⋈₂} to the superposition in column № 3, Equation (25). The dotted red line with an arrow shows the transformation of (1,3) to (1,4).
Finally, column № 5 is the superposition prepared to be written to QRAM by ω, Equation (25). We assume that π_⋈ = 20, see the top of column № 5. The oracle and its asymptotic complexity are similar to those in column № 2. After ω, the array that represents the join result will appear in QRAM in the row-wise layout starting from index 20.

4.8. Quantum Array Algebra

Array (Map) Algebra is widely used in the industry and constitutes a large portion of Array (Tensor) DBMS workloads  [7,8].
A representative map algebra query is the popular vegetation index NDVI = (nir − r)/(nir + r), where nir and r are 2-d arrays of the intensities of reflected solar radiation in the near-infrared and visible red spectra, respectively.
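For reference, the classical computation is a single vectorized expression; the tiny reflectance arrays below are hypothetical:

import numpy as np

# Hypothetical 2x2 near-infrared and red reflectance arrays.
nir = np.array([[0.6, 0.5], [0.7, 0.4]])
r   = np.array([[0.1, 0.2], [0.3, 0.4]])
ndvi = (nir - r) / (nir + r)
print(ndvi)  # values lie in [-1, 1]; 0.0 where nir == r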
Map algebra is an excellent candidate for unprecedented speedups, as many of its queries represent algebraic expressions amenable to automatic translations to quantum circuits, unlike arbitrary classical functions. A flexible Earth data engineering tool like a Quantum Array (Tensor) DBMS can be especially useful for map algebra operations on data that appears to be already in QRAM or RAQM.
Classical Challenge 4. How to efficiently execute queries that contain array (map) algebra expressions?
Although it is relatively straightforward to compute ndvi, there is a class of functions like f ( α , · ) with a user-tunable parameter α adjusted by users interactively. Dozens of such functions exist [12], e.g., savi, pvi and others, Figure 16. The user can tune α , e.g., α n e w = α o l d + Δ α , to experimentally find appropriate f ( α , · ) values for an area of interest. Sophisticated approaches to compute f ( α + Δ α , · ) exist [12], but quantum computing may offer an exponential speedup.
Users tend to spend vast amounts of time on Interactive Data Science, but can experience increased response times in tune-recompute scenarios. As a user typically waits for query results in front of a computer, each successive data processing delay, even within a split second, increases human fatigue and thereby reduces the quality of work and of the understanding of the data.
Quantum Look at Challenge 4. The approach of executing map algebra queries is similar to the array join approach presented in Section 4.7.
O_m[·, k|T|] ρ_{A_1, A_2, …, A_k} applies a local map algebra operation m to k arrays A_1, A_2, …, A_k in a single superposition ρ_{A_1, A_2, …, A_k} prepared in a similar way as for the join operation, see column № 3 in Figure 15.
A typical asymptotic cost for a Quantum Array (Tensor) DBMS to execute ndvi and other frequent array algebra expressions is expected to be just O_arithmetic^æ(|T|). Of course, reading, writing, and other accompanying costs exist, as for £(A ⋈_θ B), Equation (27), but in total they are asymptotically exponentially lower than the respective classical costs.
In the case of tunable functions, a naive quantum approach to recompute a function from scratch would still be exponentially faster. Current quantum machines can keep a superposition just for a very small amount of time before decoherence. However, even repetitive ρ and ω operations that require just O ( γ ) may be faster compared to classical approaches. Of course, more advanced quantum approaches can also be devised.

4.9. Quantum Array (Tensor) Indexes

Consider again Figure 7 and Figure 8. Suppose it is required to select all CO2 or wind map cells whose values are within a given range. This is a value-based query. For example, select array cells with the mole fraction of CO2 in the range [3.77 × 10⁻⁴, 3.84 × 10⁻⁴]. If a restriction is applied only to the geographical area, the query is called dimension-based (similar to hyperslabbing, Section 4.6). A single query can combine both types of restrictions. For example, select array cells in the Pacific Ocean (the restriction on the lat and lon dimensions) whose wind speed is above 5 m/s (an additional restriction on cell values).
To efficiently answer such Earth data engineering queries without scanning all array cells for each query, array (tensor) indexes are built beforehand. Once a query arrives, an array (tensor) index is used to accelerate query answering. Unlike RDBMS indexes, Array (Tensor) DBMS indexes can be used alone, without original data. They reorganize arrays to reduce I/O compared to querying original data [12,16,79,82,175].
Classical Challenge 5. How to efficiently perform dimension- and value-based array (tensor) operations, often within a single query?
Let us take a tiny matrix A as a running example for this subsection and define it as A(lat, lon): T = {{5, 8, 3}, {3, 2, 9}, {1, 7, 4}}, where lat⟨3⟩: T = {0, 1, 2} and lon⟨3⟩: T = {0, 1, 2} (we use T both for values and indexes for simplicity).
The following query is both dimension- and value-based, because it extracts a subarray B that will contain values from A that are within the (1) value range [ 3 , 4 ] and (2) index ranges [ 0 , 1 ] and [ 1 , 2 ] for the first and second axes, respectively:
SELECT B ⊆ A
WHERE A.value IN [3, 4] AND
      A.lat IN [0, 1] AND
      A.lon IN [1, 2]
Figure 17 illustrates the query challenges. The same array A is depicted in Figure 17a–c.  In all figures, the sought area A [ 0 : 1 , 1 : 2 ] is surrounded by blue dotted lines, while the respective indexes on the l a t and l o n axes are inside the dashed red areas. Figure 17a additionally dashes cells with red lines in the whole array A that are within the specified range of [ 3 , 4 ] , but not necessarily within the specified area. Figure 17b dashes cells with similar lines that are only within the specified area, but not the given cell value range. Finally, Figure 17c marks only the single cell with dashed lines that satisfies all query criteria and is the query result.
The challenge is to avoid scanning the whole array and even the area that satisfies the given dimension range (geographical area). Array A is tiny, but even in this case it is clear that the resulting cell A[0,2] constitutes only 25% of the hyperslab A[0:1, 1:2]: the query is very selective. Without specialized indexing techniques, the problem becomes much worse for bigger arrays, for queries with loose dimension constraints, or for highly selective queries, when it is required to scan a large portion of an array or the whole array in order to answer the query.
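For comparison, the classical baseline below (NumPy, using the tiny matrix A defined above) first cuts the hyperslab and then scans every one of its cells against the value range; it is exactly this scan that the quantum approach avoids:

import numpy as np

A = np.array([[5, 8, 3],
              [3, 2, 9],
              [1, 7, 4]])

hyperslab = A[0:2, 1:3]                      # A[0:1, 1:2] with inclusive ends
mask = (hyperslab >= 3) & (hyperslab <= 4)   # classical scan of every cell
print(hyperslab[mask])                       # [3] -- the single matching cell A[0, 2]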
Quantum Look at Challenge 5.
If array A is in QRAM, indexing is optional. Further, if A is not read-only, maintaining an index would be much more efficient using RAQM instead of QRAM (A itself can still be in QRAM). If A is read-only and we expect a sufficiently large number of queries to A that will compensate for the index preparation time, we may resort to QRAM to keep an index.
To answer the query in O(log|A|) + O_arithmetic^æ(γ + |T|), we do not have to use an index. First, let us see how we can answer the query without any index, using only quantum memory and the inherent quantum parallelism.
We can split the query into two steps: (1) hyperslabbing and (2) value selection. That is, in step № 1, extract A [ 0 : 1 , 1 : 2 ] using the technique proposed earlier, Section 4.6. We need to cover the value selection step in this subsection, i.e., seek cells whose values are within the range [ 3 , 4 ] and whose coordinates are in A [ 0 : 1 , 1 : 2 ] . Suppose that the value range (the value-based part of the query) is given as [ v b , v e ] , where v b , v e T .
We propose two approaches for executing step № 2. The first approach, the equality approach, assumes that we cannot use a range comparison oracle O_{[v_b,v_e]}[·, |T|+1] (Equation (30)), but rather an equality test oracle O_=[·, 2×|T|+1] (Equation (29)) at step № 2. The second approach, the range approach, uses oracle O_{[v_b,v_e]}[·, |T|+1] (Equation (30)) at step № 2. For example, oracle O_{[v_b,v_e]}[·, |T|+1] may not be available, can be costlier than oracle O_=[·, 2×|T|+1], or we may be unable to use it for some other reasons.
We define the oracles as follows. Oracle O_=[·, 2×|T|+1] operates on a pair of values and one ancillary qubit (flag) to store the result (we also need some ancillary qubits to perform the equality test, but we do not show them as such a test is a standard operation):
$$O_{=[\cdot,\,2{\times}|T|+1]} : |u\rangle\,|v\rangle \mapsto \begin{cases} |1\rangle, & \text{if } u = v \\ |0\rangle, & \text{otherwise} \end{cases} \qquad (29)$$
Oracle O_{[v_b,v_e]}[·, |T|+1] operates on one value and one ancillary qubit (flag) to store the result, Equation (30). We also need ancillary qubits to perform the test-in-range, or the comparison of a with v_e and v_b. The number of such qubits depends on the algorithm used, but in general this number and the respective circuit depth can be larger than for O_=[·, 2×|T|+1]. For example, we can implement the comparison of two values by subtracting one value from the other and checking the sign of the result.
$$O_{[v_b, v_e][\cdot,\,|T|+1]} : |a\rangle \mapsto \begin{cases} |1\rangle, & \text{if } a \in [v_b, v_e] \\ |0\rangle, & \text{otherwise} \end{cases} \qquad (30)$$
The equality approach not only shows how to answer a query that is both dimension- and value-based, but also demonstrates what multistage quantum algorithms can look like and how to illustrate them step-by-step using a special type of quantum diagram, Section 5.2. In addition, the equality approach uncovers interesting cases where it is desirable to have NA in cell indexes and presents possible actions in such cases when there is no respective hardware support, Section 4.4.2. Importantly, other quantum array algorithms, not only those related to quantum indexing, can also arrive at situations similar to those described in our equality approach. Therefore, the equality approach is also useful from a broader perspective.
Let us first describe the equality approach. Further assume that v_e − v_b + 1 = 2^n and |T| ≥ n. Otherwise, we will need to perform additional comparison operations to filter out irrelevant cell values. First, recall that we would like to avoid comparisons. Second, we aim to showcase the main ideas of the approach and avoid obscuring the algorithm. In any case, if v_e − v_b + 1 = 2^n, systems may optimize by using the presented algorithm to avoid additional operations and possibly reduce the runtime. Next, suppose that B = A[0:1, 1:2] (or some other hyperslab). The first step prepares for the direct 1:1 equality test (not a generic test-in-range with O_{[v_b,v_e]}[·, |T|+1], which is more expensive):
$$B_1 = B \otimes O_h(v_b)_{|T|}\big( |0\rangle^{\otimes(|T| - n)} \otimes H^{\otimes n}|0\rangle^{\otimes n} \big) \qquad (31)$$
where O_h is an oracle for standard addition as in Equation (21). Now, each element of array B is replicated 2^n times and co-located with each integer value within the range [v_b, v_e]. It is possible to perform the equality test as follows: for each pair of values from B and [v_b, v_e] we check whether they are equal. In classical computing this is equivalent to two nested loops:
for each value b[x_1, x_2] ∈ B
    for each value q ∈ [v_b, v_e]
        if b[x_1, x_2] = q then add b[x_1, x_2] to the result
The code above performs an exhaustive search, which is the worst and least efficient technique for classical, but not quantum, computing. On the contrary, quantum computing can execute the same search in just a constant amount of time.
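A runnable version of the nested loops above (Python; the hyperslab values come from the running example) makes the exhaustive nature of the test explicit; a quantum machine evaluates all (cell, q) pairs in superposition instead:

# Exhaustive 1:1 equality test, classical version of the pseudocode above.
B = [[8, 3], [2, 9]]             # the hyperslab A[0:1, 1:2]
v_b, v_e = 3, 4
result = []
for x1, row in enumerate(B):
    for x2, b in enumerate(row):
        for q in range(v_b, v_e + 1):
            if b == q:
                result.append(((x1, x2), b))
print(result)                    # [((0, 1), 3)]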
The next step is to perform the direct 1:1 equality test and save the result in an ancillary qubit:
$$B_2 = O_{=[\cdot,\,2{\times}|T|+1]}\big( B_1 \otimes |0\rangle \big) \qquad (32)$$
where O_=[·, 2×|T|+1] is our equality oracle, Equation (29).
Finally, using the equality test result, we assign NA to the indexes and values where the flag qubit equals |0⟩. B_2 can have the same shape as A[0:1, 1:2], yielding the final result with a copy of A[0,2]:
$$B(lat_B, lon_B) : T = \{\{NA, 3\}, \{NA, NA\}\} \qquad (33)$$
where lat_B⟨2⟩: T = {0, 1} and lon_B⟨2⟩: T = {1, 2}. We elaborate on why we set NA not only on values, but also on indexes in Section 5, where we also present post-processing steps to get Equation (33). After this final step we have several options. First, just write the result into QRAM. Second, write the result into RAQM. Last, use the result in subsequent quantum algorithms. In the last two options, we may also perform NA elimination, Section 4.5.4.
Now let us describe the second approach, the range approach, which is used if oracle O_{[v_b,v_e]}[·, |T|+1] is allowed. It can be formulated more concisely, Equation (34):
$$B_1 = O_{[v_b, v_e][\cdot,\,|T|+1]}\big( B \otimes |0\rangle \big) \qquad (34)$$
To get B, we need to post-process B_1: assign NA to the cell values where the flag is |0⟩.
A detailed, step-by-step example of efficiently executing the query on page 31 which is both dimension- and value-based is in Section 5.
The asymptotic complexity analysis reveals the following runtime cost of the proposed algorithm when plugged with the equality approach. First, we hyperslab array A in O(γ² + γ) time. The runtime differs from hyperslabbing a 1-d array, Section 4.6, because we need to convert the wide strip layout to the narrow layout, which requires multiplication that can run in quadratic time depending on the number of qubits in the input operands [172,173,174]. The cost can be reduced to O(γ) if we keep A in the narrow layout.
Next, we prepare for the 1:1 direct equality test. First, we apply the Hadamard operator in O(n) = O(|T|) time. Second, we run the oracle in O_+^æ(|T|) = O(|T|) time. Finally, we need to entangle the hyperslab with the prepared values. However, we assume the cost of this step to be O(1), because we can allocate the required number of qubits in a quantum register beforehand to avoid additional entanglement operations.
We are ready to perform the direct 1:1 equality test in O_=^æ(|T|) = O(|T|) time. Afterwards, we assign NA to the respective indexes and values in O(|T| + γ) time. Therefore,
$$\begin{aligned} \pounds(\text{dimension- and value-based query: 2-d array, wide layout}) &= O(\gamma^2 + \gamma) + O(n) + O^{\text{æ}}_{+}(|T|) + O^{\text{æ}}_{=}(|T|) + O(|T| + \gamma) \\ &= O(\gamma^2 + \gamma) + O(|T|) + O(|T|) + O(|T|) + O(|T| + \gamma) \\ &= O(\gamma^2 + \gamma + |T|) \end{aligned} \qquad (35)$$
Further alternative post-processing steps are described in Section 5.2.
Unlike the classical algorithm without any indexes, the presented quantum algorithm efficiently answers the query: O(γ² + γ + |T|) for a 2-d array with up to 2^γ elements instead of the classical O(2^γ), yielding an exponential speedup.
The asymptotic cost of the algorithm that utilizes the range approach is also O(γ² + γ + |T|), if oracle O_{[v_b,v_e]}[·, |T|+1] is implemented using subroutines not costlier than O^æ(|T|).
We can improve further by building indexes for an array (tensor) beforehand. Multiple reasons for this can exist. For example, it may be beneficial to split an array (tensor) into multiple parts that require fewer qubits for storage and, thus, less time for I/O.
Following the classical array (tensor) indexing approach [8], we can create separate indexes for the value ranges 1..3, 4..6, and 7..9:
$$L_{1,3}\langle 3, 3\rangle = \psi_1\,|2\rangle|0\rangle|1\rangle + \psi_2\,|1\rangle|1\rangle|2\rangle + \psi_3\,|0\rangle|2\rangle|3\rangle \qquad (36)$$
L_{4,6} and L_{7,9} are similar to L_{1,3}. Equation (36) shows a wide quantum strip. We apply the same algorithm with the same parameters as above to the dimensions and values of L_{1,3} and L_{4,6} (without L_{7,9}; its exclusion may yield potential time savings) and then just write each result separately to quantum memory.
However, building and maintaining separate indexes can be costly. To build an index L_{v_b,v_e}, we need to tackle superposition terms with NA values (Section 4.5.4). In addition, if QRAM is used, we must alter cell indexes in L_{v_b,v_e} such that L_{v_b,v_e} takes less space in QRAM compared to the original array (tensor), i.e., devise a special array layout. If RAQM is available, it would be excellent to keep L_{v_b,v_e} in a single RAQM cell. A quantum indexing technique must be faster than O(γ + |T|) to be efficient in terms of asymptotic performance. Another goal of quantum indexing may be to reduce the I/O load or other characteristics. We should revisit this question when more information about quantum memory is available.

5. Quantum Network Diagrams (QND)

How to clearly, step-by-step illustrate answering the query from Section 4.9, page 31? It is challenging for a single QGantt chart to accomplish this task. In this section, in addition to QGantt charts, we propose another new type of charts: Quantum Network Diagrams.

5.1. Quantum Network Diagrams: A Bird’s-Eye View

A Quantum Network Diagram consists of several independent QGantt charts that form a DAG (Directed Acyclic Graph). The outputs of several QGantt charts can be entangled and serve as inputs to other QGantt charts in the Quantum Network Diagram. The output/input data dependencies are denoted by arrows. Quantum Network Diagrams can clearly illustrate multistage quantum computations, data flows, and their inter-dependencies in a static manner similar to QGantt charts. Therefore, Quantum Network Diagrams possess virtues similar to QGantt charts, Section 3.
Let us create a Quantum Network Diagram to clearly illustrate the ideas behind the indexing approach in Section 4.9, using our sample query from page 31. In this query, [v_b, v_e] = [3, 4], 4 − 3 + 1 = 2 (n = 1), and |T| = 4. Hence, we can rewrite Equation (31) as
$$B_1 = A[0:1,\, 1:2] \otimes O_h(3)_4\big( |0\rangle^{\otimes(4-1)} \otimes H^{\otimes 1}|0\rangle^{\otimes 1} \big) \qquad (37)$$
To extract the hyperslab A[0:1, 1:2] = A[x_lat^b : x_lat^e, x_lon^b : x_lon^e], let us further assume that γ = 4, A_π = 0, x_lat^b = 0, x_lat^e = 1, x_lon^b = 1, x_lon^e = 2. We also assume that ρ reads a wide strip (as we have already noted, it is straightforward to convert a wide strip to its narrow representation and vice versa). Now, complementing Equation (37) with hyperslabbing (see Equation (21)), we obtain
$$B_1 = \rho\Big( O_h(0+0)_4\big( |0\rangle^{\otimes(4-1)} \otimes H^{\otimes 1}|0\rangle^{\otimes 1} \big) \otimes O_h(0+1)_4\big( |0\rangle^{\otimes(4-1)} \otimes H^{\otimes 1}|0\rangle^{\otimes 1} \big) \Big) \otimes O_h(3)_4\big( |0\rangle^{\otimes(4-1)} \otimes H^{\otimes 1}|0\rangle^{\otimes 1} \big) \qquad (38)$$
Equation (38) can be simplified to
$$B_1 = \underbrace{\rho\Big( \underbrace{|0\rangle^{\otimes 3} \otimes H|0\rangle}_{lat} \otimes \underbrace{O_h(1)_4\big( |0\rangle^{\otimes 3} \otimes H|0\rangle \big)}_{lon} \Big)}_{\rho} \otimes \underbrace{O_h(3)_4\big( |0\rangle^{\otimes 3} \otimes H|0\rangle \big)}_{val} \qquad (39)$$
To clearly, step-by-step, illustrate Equation (39), we can integrate several QGantt charts into a single Quantum Network Diagram, Figure 18. Let us first describe the Quantum Network Diagram from a bird’s-eye view. The Quantum Network Diagram consists of 5 QGantt charts connected by arrows that represent input/output quantum superpositions. Recall that the quantum indexing algorithm consists of several parts which yield B 1 , B 2 , and the resulting array (tensor) B, Section 4.9. We marked logical portions of Equation (39) as l a t , l o n , ρ , and v a l for forming latitude, longitude indexes, and the values in the requested value range respectively. It is possible to find respective QGantt charts for computing l a t , l o n , ρ , v a l , B 1 , B 2 , and B in Figure 18. The diagram consumes almost the whole page, but makes it immediately clear what is happening.
Compared to the QGantt charts that we presented earlier in this article, QGantt charts in Figure 18 have additional visual components and formatting, summarized below.
  • Each QGantt chart in a Quantum Network Diagram is surrounded with a rectangle to clearly distinguish and visually separate its contents from other QGantt charts that belong to the same Quantum Network Diagram.
  • In Quantum Network Diagrams, each QGantt chart has its name (which can also be called a tag or label) located in the top left corner of the chart. In Figure 18, QGantt chart names are in bold italic font, placed inside a filled rectangle to better emphasize the names visually, for example, the chart named lat.
  • QGantt charts can have annotating formulas associated with a whole chart to explain what a QGantt chart aims to produce as its output. A chart annotating formula describes its QGantt chart as a whole in addition to short formulas that can mark superpositions inside a QGantt chart. In Figure 18, such chart annotating formulas are located above QGantt charts and resemble figure captions.
  • Symbol ⊗ appears above the annotating formulas of QGantt charts which take superpositions as inputs. In Figure 18, the symbol ⊗ appears at the very top of the ρ and B charts, because the chronological order of the QGantt charts (the order in which their output superpositions must be generated) coincides with their vertical order, from top to bottom. Of course, the symbol ⊗ can appear at any other side of a QGantt chart if the charts are laid out in a different way. We placed ⊗ above the annotating formula of the ρ chart to indicate that the output superpositions of the lat and lon charts are used in the ρ operation. Similarly, ⊗ appears exactly above B_1 in the B chart, not above B_2 or B, to show that the tensor product of the outputs of the ρ and val charts forms B_1, the output of the first stage of the algorithm, Section 4.9.
  • Finally, arrows connect the QGantt charts into a DAG (Directed Acyclic Graph). Typically, the last column of a QGantt chart is its output superposition. We additionally mark each such column in the lat, lon, ρ, and val charts with a curly brace at the bottom of the last column (superposition), captioned “output”. Each arrow that connects QGantt charts starts near this curly brace or its caption. The arrows point to the symbol ⊗, indicating that the superpositions at the beginnings of the arrows participate in the tensor product operation. For example, superposition № 1 in the ρ chart represents the tensor product of the lat and lon chart outputs, so column № 1 is marked as lat ⊗ lon. Similarly, the first column (superposition) of the B chart equals B_1 = ρ ⊗ val, which is the tensor product of the outputs of the ρ and val charts.
Note that in addition to Equation (39), the Quantum Network Diagram in Figure 18 also contains the QGantt chart named Earth 05 00027 i004, which completes the illustration of the algorithm by providing the step-by-step actions that yield the dimension- and value-based query result B.

5.2. Efficiently Answering Dimension- and Value-Based Queries Step-by-Step

Now let us describe the logic behind the Quantum Network Diagram in Figure 18. In our example, we execute both a dimension- and value-based query. First, we hyperslab $A[0:1, 1:2]$. Therefore, we prepare the superpositions of indexes for the $lat$ and $lon$ dimensions in QGantt charts Earth 05 00027 i002 and Earth 05 00027 i005, respectively, in accordance with the algorithm in Section 4.6. As the hyperslabbing algorithm has already been discussed in Section 4.6, we omit the description of Earth 05 00027 i002 and Earth 05 00027 i005, which is similar to that of Figure 12; the reader can refer to the description of Figure 12 to understand these two charts.
Figure 18. Quantum Network Diagram for Equation (39).
The only peculiarity of Earth 05 00027 i002 is that it does not require an addition oracle as its output index range starts from 0, representing a special case.
Next, the index ranges of Earth 05 00027 i002 and Earth 05 00027 i005 come to the QGantt chart named Earth 05 00027 i003 which illustrates reading the hyperslab of A, namely A [ 0 : 1 , 1 : 2 ] , from QRAM. For more clarity, we show a wide quantum strip for A [ 0 : 1 , 1 : 2 ] instead of its narrow version. Recall that it is straightforward to switch between wide and narrow strip layouts. The QGantt chart Earth 05 00027 i003 completes the execution of the dimension-based portion of the query.
According to the algorithm in Figure 18, the next step is to prepare the superposition of all values in the range that was submitted as part of the value-based portion of the query. The QGantt chart Earth 05 00027 i006 performs this step in accordance with Equations (31) and (39). Recall that the idea is to perform an exhaustive, direct 1:1 equality test of each value in the hyperslab A [ 0 : 1 , 1 : 2 ] with each value within the queried range [ v b , v e ] = [ 3 , 4 ] . This is the worst case for classical computing, but it can be executed in asymptotically constant time on a quantum machine.
Now we can compare each value marked by the symbol A in each term of superposition № 2 of Earth 05 00027 i003 with each value of superposition № 5 of Earth 05 00027 i006. To accomplish this, we form superposition № 1, called $B_1$, in Earth 05 00027 i004 based on the outputs of Earth 05 00027 i003 and Earth 05 00027 i006. For the sake of clarity, each value in each term of superposition № 1 in Earth 05 00027 i004 is marked by the symbols $lat$, $lon$, A, and $val$, which indicate that the marked values represent the $lat$ and $lon$ indexes, the cell values of the hyperslab $A[0:1, 1:2]$, and the values generated by Earth 05 00027 i006 within the submitted cell value range $[v_b, v_e] = [3, 4]$, respectively. $A[0:1, 1:2]$ has 4 values while the range $[v_b, v_e] = [3, 4]$ has 2 values. We compare each value of $A[0:1, 1:2]$ with each value in the range $[v_b, v_e] = [3, 4]$, so we get 8 terms in superposition $B_1$, column № 1 of Earth 05 00027 i004.
To perform the equality test, we need an additional qubit to store the test result; we call it a flag qubit. Column № 2 in Earth 05 00027 i004 displays the superposition after appending the flag qubit. Initially, it is set to $|0\rangle$. The flag qubit is marked by the word flag and appears in bold in column № 2 to show that this qubit is new compared to the previous superposition in column № 1, QGantt chart Earth 05 00027 i004.
Column № 3 of Earth 05 00027 i004 displays the resulting superposition after applying the equality test oracle. The oracle acts on the values marked by A and $val$, saving the equality test result to the flag qubit; in total, it acts on $2 \times |T| + 1$ qubits, excluding possible ancillary qubits. It turns out that now only term $(3, 3)$ has the flag qubit equal to $|1\rangle$. The flag qubit in term $(3, 3)$ is bold and underlined to visually indicate that it differs from its previous state in term $(3, 2)$. Flag qubits in other terms remain equal to $|0\rangle$ and, thus, appear in the usual font style in column № 3.
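To make the formation of $B_1$ and the equality test tangible, below is a minimal classical bookkeeping sketch (plain Python, not a quantum implementation) that enumerates the 8 term labels the QGantt chart tracks and computes the flag for each. The values 8 for $A[0,1]$ and 3 for $A[0,2]$ appear in the text; the remaining two hyperslab values are hypothetical placeholders.

```python
# Classical sketch: enumerate the terms of B1 and apply the equality test.
# Values 5 and 1 below are hypothetical placeholders for the two hyperslab
# cells whose values are not listed in the text.
from math import sqrt

hyperslab = {(0, 1): 8, (0, 2): 3, (1, 1): 5, (1, 2): 1}  # A[0:1, 1:2]
value_range = [3, 4]                                      # [v_b, v_e] = [3, 4]

amplitude = 1 / sqrt(len(hyperslab) * len(value_range))   # 1/sqrt(8) for 8 terms
terms = []
for (lat, lon), a_value in hyperslab.items():
    for val in value_range:
        flag = int(a_value == val)                        # equality-test result (flag qubit)
        terms.append((amplitude, lat, lon, a_value, val, flag))

for term in terms:
    print(term)
# Only the term pairing A[0, 2] = 3 with the range value 3 gets flag = 1.
```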
We found only one value that satisfies our dimension- and value-based query: $A[0, 2]$, Figure 17c. The next step is to clean up the superposition in column № 3 to get B, the resulting array (tensor) consisting of only 1 non-NA cell, Equation (33). Ideally, we must remove all terms № 1, 2, 4–8, i.e., every term except term № 3. However, recall that this is a non-trivial task in quantum computing, Section 4.5.4.
At a glance, it might seem that we could avoid removing (deleting) terms that do not satisfy the value-based part of the query by simply setting their cell values to NA. For example, $\frac{1}{\sqrt{8}}|0\rangle_{lat}|1\rangle_{lon}|8\rangle_{A}|3\rangle_{val}|0\rangle$ could easily be transformed into $\frac{1}{\sqrt{8}}|0\rangle_{lat}|1\rangle_{lon}|\mathrm{NA}\rangle_{B}$ by replacing the cell value with NA and dropping the ancillary qubits (the label B on the NA value indicates that the cell value in the term belongs to the final result). Afterwards, we would just write the resulting superposition into QRAM and get B, Equation (33). However, notice that we have two copies of $A[0, 2]$: $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{A}|3\rangle_{val}|\underline{1}\rangle$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{A}|4\rangle_{val}|0\rangle$; the former copy satisfies the query while the latter does not. If we transform $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{A}|4\rangle_{val}|0\rangle$ into $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|\mathrm{NA}\rangle_{B}$, we will have 2 terms with the same indexes but different cell values: $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{B}$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|\mathrm{NA}\rangle_{B}$. An oracle analyzing $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{A}|4\rangle_{val}|0\rangle$ cannot check whether $A[0, 2]$ belongs to the result. We also do not know the number of resulting NA or non-NA cells beforehand, and quantum counting is too costly an operation in this case.
If there is hardware support for NA in cell indexes, as suggested in Section 4.4.2, a possible way would be to transform $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{A}|4\rangle_{val}|0\rangle$ into $\frac{1}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|3\rangle_{B}$ and simply avoid writing $\frac{1}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|3\rangle_{B}$ to QRAM. Let us describe an approach that does not require hardware support of NA in cell indexes.
Column № 4 of Earth 05 00027 i004 contains terms whose cell indexes and values are set to NA if the flag qubit equals $|0\rangle$, Figure 18. All terms where the flag qubit is $|0\rangle$ will collapse to $\frac{\sqrt{7}}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|\mathrm{NA}\rangle_{A}|\mathrm{NA}\rangle_{val}|0\rangle$. If we drop the ancillary qubits and the flag qubit, we will obtain $\frac{\sqrt{7}}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|\mathrm{NA}\rangle_{B}$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{B}$ as the result. Again, with the respective hardware support, we could just write the superposition in column № 4 to QRAM, omitting term $(1, 5)$.
Without the aforementioned hardware support, we have at least 2 options. First, apply the NA elimination technique to remove $\frac{\sqrt{7}}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|\mathrm{NA}\rangle_{B}$, Section 4.5.4. As a result, we can get $B = \frac{1}{\sqrt{1}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle$ without NA cell values. This option is beneficial when we do not need NA in cell values or cell indexes in a superposition, for example, for further algorithms that will take B as input. The second option, which asymptotically can be executed faster, is to eliminate NA in cell indexes while keeping NA cell values, if this is suitable for subsequent algorithms. Note that even after following the second option we can still apply the NA elimination technique at some future stage. The algorithm to remove terms with NA index values is a modification of the algorithm found in Section 4.5.4, but is asymptotically faster (a small classical sketch of its first steps follows the list):
  • Append a flag qubit that indicates whether cell values contain NA. Continuing our example, we will have 2 terms: $\frac{\sqrt{7}}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|\mathrm{NA}\rangle_{B}|\underline{1}\rangle_{flag}$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|2\rangle_{lon}|3\rangle_{B}|\underline{0}\rangle_{flag}$. We can reuse the flag from the previous step. In addition, we can integrate this algorithm with the main indexing approach, thus skipping some steps, e.g., Step № 3, by setting cell indexes to $|0\rangle$ instead of $|\mathrm{NA}\rangle$.
  • Convert the array (tensor) from a 0-based to a 1-based indexed array (tensor): increment the index that corresponds to the last dimension (the most frequently varying one) in terms where the flag equals $|0\rangle$. We get $\frac{\sqrt{7}}{\sqrt{8}}|\mathrm{NA}\rangle_{lat}|\mathrm{NA}\rangle_{lon}|\mathrm{NA}\rangle_{B}|1\rangle_{flag}$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|\underline{3}\rangle_{lon}|3\rangle_{B}|0\rangle_{flag}$.
  • Using the ideas illustrated in Figure 10, set cell indexes to $|0\rangle$ where the flag equals $|1\rangle$: $\frac{\sqrt{7}}{\sqrt{8}}|\underline{0}\rangle_{lat}|\underline{0}\rangle_{lon}|\mathrm{NA}\rangle_{B}|1\rangle_{flag}$ and $\frac{1}{\sqrt{8}}|0\rangle_{lat}|3\rangle_{lon}|3\rangle_{B}|0\rangle_{flag}$.
  • Reserve in QRAM the number of cells equal to $|A[0:1, 1:2]| + 1 = 2 \times 2 + 1$ starting at $B_{\pi}$ and initialize them to NA: we know the shape of $A[0:1, 1:2]$ because we can compute it from the dimension-based part of the query. We also need an additional dummy cell at the very beginning of the reserved QRAM space to accommodate the cell whose value equals NA. The index of such a cell is 0 (or $0^N$ in the case of an N-dimensional array (tensor)). Write the superposition to QRAM starting at $B_{\pi}$.
  • Read $|A[0:1, 1:2]| = 2 \times 2$ cells from QRAM starting at $B_{\pi} + 1$, thus omitting the dummy cell. In our example, we obtain exactly the result given in Equation (33). Note that the array (tensor) at this stage, after reading from QRAM, will already be a 0-based indexed array (tensor).
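As announced above, here is a small classical sketch of steps № 1–3 (the steps that Figure 19 also illustrates). Terms are plain tuples, NA is a string marker, and the sketch only mirrors the bookkeeping, not the quantum operations themselves; the amplitudes follow the example above and are carried along unchanged.

```python
# Classical sketch of steps 1-3 of the NA-index elimination algorithm.
from math import sqrt

NA = "NA"
terms = [
    (sqrt(7) / sqrt(8), NA, NA, NA),  # the collapsed all-NA term
    (1 / sqrt(8), 0, 2, 3),           # the surviving term for A[0, 2]
]

# Step 1: append a flag that marks terms whose cell value is NA.
terms = [(amp, lat, lon, val, int(val == NA)) for (amp, lat, lon, val) in terms]

# Step 2: switch to 1-based indexing on the last dimension where flag == 0.
terms = [(amp, lat, lon if flag else lon + 1, val, flag)
         for (amp, lat, lon, val, flag) in terms]

# Step 3: set cell indexes to 0 where flag == 1, so the all-NA term targets the dummy cell.
terms = [(amp, 0 if flag else lat, 0 if flag else lon, val, flag)
         for (amp, lat, lon, val, flag) in terms]

print(terms)
# Steps 4-5 would then write these terms to QRAM and read back the cells after
# the dummy cell, dropping the all-NA term without a dedicated deletion operation.
```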
Recall that in Section 4.9 we noted that other algorithms can also end up in a similar situation: NA in cell indexes, Section 4.4.2. Hence, the presented algorithm, which handles this situation without hardware support for such indexes, can be useful in other cases as well.
Figure 19. A QGantt chart for eliminating NA cell indexes in Figure 18.
We illustrate steps № 1–3 of the above algorithm using the QGantt chart in Figure 19. Unlike for the previous algorithms, the step-by-step description with concrete values of the superposition terms now precedes the corresponding QGantt chart. This also lets the reader experience how much more difficult it is to follow the description of an algorithm on its own, without an accompanying QGantt chart. We start from superposition number 0 in Figure 19 so that the step numbers in the algorithm match the superposition numbers in the QGantt chart. We do not describe the QGantt chart in Figure 19 further, because it should be clear from the algorithm presented above.
Let us now describe the algorithm that answers both the dimension- and value-based query but is plugged with the range approach, Section 4.9. We illustrate the execution of this setting using the same example from Section 5.1, but with a new Quantum Network Diagram, Figure 20. Recall that the first step is hyperslabbing, so QGantt charts Earth 05 00027 i002 and Earth 05 00027 i005 in Figure 20 are the same as in Figure 18. Columns № 1 and № 2 of Earth 05 00027 i004 in Figure 20 are also equal to columns № 1 and № 2 of Earth 05 00027 i003 in Figure 18.
Column № 3 of Earth 05 00027 i004 in Figure 20 displays the superposition after appending the flag qubit, which is initially $|0\rangle$. Column № 4 is the result of applying the test-in-range oracle, Equation (30). For our example, $O_{[v_b, v_e]}[\,\cdot\,, |T| + 1] = O_{[3, 4]}[\,\cdot\,, |T| + 1]$. We found that only $(2, 4)$ satisfies both the dimension- and value-based parts of the query. If we set the cell values to NA where the flag qubit is $|0\rangle$ and drop the flag qubit, we will obtain the query result B, Equation (33).
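A minimal classical sketch of this range-approach variant is given below: a single in-range test sets the flag per hyperslab term, instead of pairing each term with every value in $[v_b, v_e]$. As before, the values 5 and 1 are hypothetical placeholders for the two hyperslab cells whose values are not listed in the text.

```python
# Classical sketch of the range approach: one test-in-range flag per hyperslab term.
from math import sqrt

v_b, v_e = 3, 4
hyperslab = {(0, 1): 8, (0, 2): 3, (1, 1): 5, (1, 2): 1}  # A[0:1, 1:2]

amplitude = 1 / sqrt(len(hyperslab))                      # 4 terms instead of 8
terms = [(amplitude, lat, lon, value, int(v_b <= value <= v_e))
         for (lat, lon), value in hyperslab.items()]
print(terms)  # only the term for A[0, 2] = 3 gets flag = 1
```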

5.3. QGantt Charts and Quantum Network Diagrams: Effect and Value

Let us elaborate more on the visualization effect and practical application value of QGantt charts and Quantum Network Diagrams. Here we complement the discussion of their merits found in the previous sections. As the two are closely related, we discuss them together in this subsection.
QGantt charts possess important properties that are not simultaneously present in other visualization techniques, Section 2.3. QGantt charts lay out quantum superpositions in a specific spatial pattern that facilitates tracking changes, tracking input/output dependencies, viewing specific data values as concrete numbers, comparing superposition elements (terms) with each other, and clearly referencing superpositions or individual superposition terms. QGantt charts can help to quickly grasp the progression of quantum states and the semantics of the evolution of these states due to the manner in which they are visually presented. In addition, QGantt charts allow annotating superpositions, their terms, term elements, and changes to all of them over time, statically, step-by-step. QGantt charts are especially helpful when there is a need to see the exact numbers that comprise superposition terms and to visually track their changes over time in a spatially structured, regularized manner.
Note that not only superposition terms, but also term components or term elements appear to be structured as well. For example, it is possible to consider 1 ̲ H as a term element in 1 2 | 1 ̲ H 0 ̲ H , Figure 12. Similarly, we can think of 4 A as a term element in 11 + 1 4 A 7 B , Figure 15.
Unlike other visualization techniques, Section 2.3, QGantt charts and Quantum Network Diagrams lay out superposition terms in combination with diverse auxiliary annotations according to a specific spatial pattern that facilitates the visual exploration of the evolution of quantum states from one operation to another, step-by-step. Note that annotations are also composed in a regular manner together with quantum states. We have presented numerous annotations that improve and enhance the contents of QGantt charts.
The spatial pattern utilized by QGantt charts brings a certain order to superposition terms, annotations, and their evolution over time and presents them in a way suitable both for detailed exploration and for comprehension at a glance. It is even possible to omit an annotation once it has been applied to one of the superposition terms, as the spatial-visual structure of a QGantt chart makes it immediately clear that the same annotation applies to the other terms in the superposition in question.
For example, we can omit + 1 or A in column № 1 in Figure 15, as it is clear that the same annotation applies to other term elements in column № 1. It is even easy to visually determine which particular annotation refers to which particular number (or element) in a superposition term, not just a single term in general, and visually match such numbers (elements) that belong to different terms, because they can line up in a column or row.
In addition, the new state of a superposition term after its transformation by an operation appears immediately in the next column to the right (sometimes even right next to the term) and can be pointed to by an arrow, making it easy to track changes and compare terms’ states over time, step-by-step.
Terms of a single superposition appear in a column, placing individual elements (e.g., kets in Dirac notation) in visual proximity. One can compare neighboring terms below and above or quickly look through individual term elements that visually form a straight sub-column of a given superposition.
Recall that it is also possible to mark a term or an element of a term that experienced a modification compared to the previous state, e.g., using bold font, underlining, or even annotating an element with a symbol associated with the operation that caused the modification. For example, $\underline{1}$ is underlined in $\frac{1}{\sqrt{2}}|1\,\underline{1}\rangle$, term $(2, 3)$, because it changed as a result of the CNOT operation compared to term $(2, 2)$ in Figure 5. Symbols • and ⊕ indicate the control and controlled qubits, respectively. These marking techniques can also bring more clarity during the study of quantum transformations and potentially accelerate the understanding of an algorithm illustrated by the respective QGantt chart.
Moreover, QGantt charts and Quantum Network Diagrams (QND) naturally leave room for additional space that can be used for numerous complementary annotations; such annotations can be intuitively integrated into a chart and also tend to appear in a structured way, almost by inertia, because the annotated elements are already structured and suggest, though do not force, a structure for newly added annotations.
For example, new visual elements can be naturally integrated in a QGantt chart by increasing the space between columns or rows, typically without disrupting the overall spatial structure of a QGantt chart. This comes in conjunction with the annotations of individual elements in a superposition term, terms, superpositions, as well as providing captions with operations, their cost, arrows, and other visual information and cues.
Of course, an attentive reader can find more merits of QGantt charts and Quantum Network Diagrams (QND): not all of them are outlined in this subsection. Let us summarize the practical application value of QGantt charts and Quantum Network Diagrams. There are at least three categories in which they are valuable, and the aforementioned merits apply equally to all of them.
  • Research. QGantt charts and Quantum Network Diagrams (QND) excel at visual clarity and structure when graphically presenting, step-by-step, new quantum algorithms utilizing publication-quality vector graphics. Consider Quantum Network Diagrams in Figure 18 and Figure 20 and other QGantt charts introduced in this article.
  • Development. IDEs (Integrated Development Environments) as well as quantum computing frameworks can provide step-by-step visualization of program execution or enable debugging with the help of QGantt charts and/or Quantum Network Diagrams, which can be displayed in ASCII art, Section 5.4, or rendered using high-quality formula engines comparable to LaTeX, for example, in ASCII Math [176,177,178].
  • Education. Drawing QGantt charts and Quantum Network Diagrams on paper, tablet, or whiteboard by hand is easy, as the ideas underlying the spatial organization of the entities are intuitive and clear. In addition, the static step-by-step nature of QGantt charts and Quantum Network Diagrams can assist teachers in presenting quantum algorithms and concepts. Furthermore, QGantt charts and Quantum Network Diagrams can support self-study for anyone interested in quantum computing.

5.4. Alternative Ways to Display Quantum Network Diagrams and QGantt Charts

As we have already noted, Quantum Network Diagrams and QGantt charts have natural provision for numerous enhancements, Section 5.3. In addition, we also noted that QGantt charts and Quantum Network Diagrams can be rendered in alternative representations, for example as ASCII art or as graphs, possibly interactive, Section 3. Here we illustrate these notes.
The Quantum Network Diagram shown in Figure 18 lays out its QGantt charts in a time-aware, top-to-bottom manner. Suppose that QGantt charts Earth 05 00027 i007, Earth 05 00027 i008, …, Earth 05 00027 i009 provide their inputs to QGantt chart Earth 05 00027 i010. Therefore, QGantt chart Earth 05 00027 i010 depends on the outputs of all QGantt charts Earth 05 00027 i007, Earth 05 00027 i008, …, Earth 05 00027 i009. In the time-aware top-to-bottom layout, QGantt chart Earth 05 00027 i010 appears below all QGantt charts Earth 05 00027 i007, Earth 05 00027 i008, …, Earth 05 00027 i009.
Time-aware layouts help to identify which operations can occur earlier in time, as well as a possible execution order of such operations. The time-aware top-to-bottom layout, as in Figure 18, is suitable for a portrait page orientation. However, various forms of time-aware layouts are possible. For example, Figure 21 presents overview graphs in different layouts for the Quantum Network Diagram in Figure 18. An overview graph is a bird’s-eye view of a Quantum Network Diagram. An overview graph contains only portions of the QGantt charts (for example, QGantt chart names), presents the input/output dependencies between the QGantt charts, and possibly provides additional annotations. The portions of QGantt charts must clearly match 1:1 with the complete versions of the QGantt charts in the respective Quantum Network Diagram.
An overview graph for a Quantum Network Diagram helps to concentrate only on the “big picture” of the algorithm that the Quantum Network Diagram describes. An overview graph facilitates tracking input/output dependencies between different portions of an algorithm. In addition, it is also immediately clear what the key logical stages of the algorithm are, how many stages there are, and in what order they execute.
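The underlying structure of an overview graph is simply a dependency DAG between QGantt charts. A minimal sketch of such a DAG for the example in Figure 18, together with a topological sort that recovers one possible execution order, is shown below; the chart names are plain-text stand-ins for the labels used in the figure.

```python
# A dependency DAG between QGantt charts and one possible execution order.
from graphlib import TopologicalSorter

# chart name -> set of charts whose output superpositions it consumes
dependencies = {
    "lat": set(),
    "lon": set(),
    "val": set(),
    "rho": {"lat", "lon"},   # reads the hyperslab using the lat/lon index superpositions
    "B":   {"rho", "val"},   # forms B1 = rho (x) val and cleans it up into B
}

print(list(TopologicalSorter(dependencies).static_order()))
# e.g., ['lat', 'lon', 'val', 'rho', 'B']: every chart appears after its dependencies
```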
It is possible to lay out the vertices of an overview graph in any convenient manner. For example, left-to-right to follow the temporal execution order without strict vertical positions, Figure 21a. Stricter layouts are possible, in which most vertices that correspond to QGantt charts to be executed later appear in the first row (Figure 21b) or gradually descend along the diagonal (Figure 21c). Of course, the names of QGantt charts that appear in Figure 21 can be annotated with formulas taken from the algorithm, asymptotic complexity formulas, and other relevant graphics.
Overview graphs for Quantum Network Diagrams can also be laid out similarly to traditional Gantt charts, Figure 22. In this case, the names of all QGantt charts appear in a single column, while boxes next to the chart names represent the operations over time (a box width can be proportional to the duration of an operation; box widths in Figure 22 are very approximate: recall that there are several versions of the algorithm). It is straightforward to annotate elements in Figure 22 with additional explanations and formulas. This makes it possible to integrate QGantt charts and Quantum Network Diagrams with traditional Gantt charts. Which layout to choose for a Quantum Network Diagram and its overview graph for a particular purpose depends on style, goals, and other factors.
Let us elaborate on the other representations of QGantt charts. For example, during software development, as noted in Section 5.3, it can be beneficial or even required to have a simpler technique to render QGantt charts. It may not even be possible to use a sophisticated engine to render formulas. In this case, QGantt charts are suitable for ASCII art. For instance, consider the QGantt chart for creating the Bell State, Figure 23. We selected the QGantt chart in Figure 5 to convert to ASCII art because it is relatively compact and conceptually important for the quantum computing field in general.
Figure 22. A Gantt style overview graph for the Quantum Network Diagram in Figure 18.
It is immediately clear how to interpret the QGantt chart in Figure 23 if the reader is already familiar with the QGantt chart in Figure 5. LaTeX symbols found in Figure 5 are replaced with textual symbols that can be found on any PC keyboard, except the square root, which is a Unicode character. We can enhance the QGantt chart in Figure 23 even further by using Unicode characters for the symbols • and ⊕ as well as diverse arrows [179]. However, Figure 23 looks understandable even without additional Unicode symbols. We slightly enhanced the ASCII art in Figure 23 by treating H,TIME,CNOT,-,|,>,->,:,\ as keywords and coloring them blue.
Let us briefly describe Figure 23 to show that the QGantt chart is easy to understand even in ASCII art. Note that it is also possible to add line numbers in ASCII text for additional referencing purposes; see the leftmost column in Figure 23. Similar to Figure 5, the QGantt chart in Figure 23 consists of 3 superpositions.
Column № 1 contains the initial quantum state $|00\rangle$ with amplitude $\frac{1}{\sqrt{1}}$. The first qubit is marked with * (asterisk) as it is the target qubit for applying the Hadamard gate. The symbol ^ (caret) indicates that the qubit will change in the next step.
The caption of column № 2 notes that the Hadamard gate, whose standard notation is H (Table 1; it appears as H in the ASCII art), was applied to the qubit marked by * (asterisk) in term $(1, 1)$ (row 1, column or superposition 1). All terms in column № 2 have amplitudes $\frac{1}{\sqrt{2}}$. The qubits annotated by H are the result of the Hadamard transform. The symbol ^ (caret) indicates that these qubits were changed compared to the previous quantum state. Please note that it is easy to annotate qubits (place informative symbols above or below qubit values) in QGantt charts rendered as ASCII art.
Finally, column № 3 contains the resulting quantum superposition. The CNOT gate was applied to both qubits: the control and controlled qubits are marked with . (dot) and + (plus) symbols respectively. If Unicode is acceptable, it is also possible to use the corresponding ⊕ Unicode symbol. Now, the second qubit in ( 2 , 3 ) is marked with ^ (caret) because it was changed compared to ( 2 , 2 ) . On the contrary, the term ( 1 , 3 ) is the same as ( 1 , 2 ) and is not marked with ^ (caret).
The description of Figure 23 is as clear as that of Figure 5. This illustrates that QGantt charts are quite suitable for ASCII art. Displaying QGantt charts in ASCII art is a corner case that demonstrates the rendering flexibility of QGantt charts. Of course, other options like HTML versions of QGantt charts can look much better than ASCII art and use beautiful formula engines [176,177,178]; therefore, we do not include QGantt charts in HTML or similar formats in this article.
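To complement the discussion, below is a minimal sketch of rendering a QGantt chart as ASCII art programmatically; it follows the spirit of Figure 23 (columns along the time axis, stacked superposition terms, caret marks on changed qubits) but is our own illustrative layout, not a reproduction of that figure, and writes the square root of two as v2 to stay within plain ASCII.

```python
# A tiny ASCII-art renderer for a Bell-state QGantt chart (illustrative layout).
columns = [
    ("1",         [("1",    "|0*0>")]),                      # * marks the Hadamard target
    ("2: H on *", [("1/v2", "|0^0>"), ("1/v2", "|1^0>")]),   # ^ marks changed qubits
    ("3: CNOT",   [("1/v2", "|0 0>"), ("1/v2", "|1 1^>")]),
]

height = max(len(terms) for _, terms in columns)
width = 18
lines = [""] * (height + 1)
for caption, terms in columns:
    lines[0] += caption.ljust(width)
    for row in range(height):
        cell = f"{terms[row][0]} {terms[row][1]}" if row < len(terms) else ""
        lines[row + 1] += cell.ljust(width)

print("\n".join(lines))
```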

6. Discussion

“If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet”
—Niels Bohr, Nobel laureate [180]
“I think I can safely say that nobody really understands quantum mechanics”
—Richard Feynman, Nobel laureate [181]

6.1. Challenges and Opportunities

The prospect of quantum approaches to a wide variety of Array (Tensor) DBMS challenges and Earth data engineering tasks is very promising. However, these approaches, in turn, open up both new challenges and exciting R&D opportunities.

6.1.1. Quantum Array (Tensor) DBMSs and Simulation Data

As we have already noted, Quantum Array (Tensor) DBMSs can also be beneficial for operating on arrays (tensors) that have already been generated by a quantum simulation model and stored in quantum memory. However, we would like to further emphasize that Array (Tensor) DBMSs themselves can run physical world simulations and therefore can be used for modeling in the context of global environmental changes.
Physical world simulations are thought to be one of the biggest beneficiaries of Quantum Machines [94,182,183,184]. It is exciting that the Quantum Array (Tensor) DBMS data model, Section 4.1, can represent various types of grids and meshes used for simulations. Hence, as Array (Tensor) DBMSs are perfectly suited for such a workload, as we showed for the first time in [65,66], they can establish a new line of research and products if timely action is taken to advance the Quantum Array (Tensor) DBMS simulation R&D field. We used Cellular Automata [65,66], but their quantum versions are very promising for R&D [185,186,187,188,189]. Recent research shows that cellular automata are very competitive in terms of solution quality compared to traditional models based on differential equations [190].
Physical world simulations benefit from multiple Array (Tensor) DBMS capabilities: data ingestion and fusion, parallelization, debugging UDFs, interactive visualization, a DBMS-like array management, interoperability, and end-to-end simulations support [65,66]. Simulations of quantum devices with Array (Tensor) DBMSs can also be possible.
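To make the cellular-automaton workload concrete, here is a minimal NumPy sketch of one update step over a 2-D grid using a Game-of-Life-style rule; it only illustrates the array-native character of such simulations and is not the specific automaton used in [65,66].

```python
# One cellular-automaton step over a 2-D grid (Game-of-Life-style rule, toroidal edges).
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count the 8 neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell is alive next step if it has 3 neighbors, or is alive and has 2 neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(8, 8))
print(step(grid))
```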

6.1.2. Utilizing Other Types of Quantum Memory

In this article, we covered two key types of quantum memory: QRAM and RAQM. A wealth of new knowledge in terms of Quantum Array (Tensor) DBMSs can be gained by focusing on quantum memory alone, without considering other aspects of quantum computing. We started to theoretically explore the exciting Quantum Array (Tensor) DBMS benefits that result from a quantum memory (QRAM, and optionally RAQM) with logarithmic performance characteristics (asymptotic I/O runtime). However, even if the expected quantum memory performance deviates from this, the presented approaches retain value because they also utilize other properties of quantum computing and memory, for example, the way arrays (tensors) are represented (array/tensor layouts, Section 4.5) and processed (e.g., the NA elimination technique, Section 4.5.4) within quantum registers, and the pattern of how quantum memory performs I/O and addresses memory cells using quantum registers, Equation (1). We also revealed and discussed related challenges.
More variants of quantum memory exist. For example, research is directed towards native support for approximate [191], probabilistic [192], parametric [193], and distributed [194] access patterns, as well as quantum associative memory [195,196]. Although the above may not be realized directly in hardware, they can rely on QRAM or RAQM. It is appealing to explore tensor layouts and I/O algorithms for the aforementioned memory types to look for potential benefits they may provide. It could also be possible to state an algorithm in several ways that have drastically different costs. Interestingly, what are other use-cases for quantum indexes? What additional benefits can we get from quantum array cells? How can we use several types of memory in a single algorithm?

6.1.3. Quantum Array Layouts

In terms of $\Vert A \Vert$, a narrow quantum strip can be better than a wide quantum strip, Section 4.5. However, a 1-d index takes longer to convert to an N-d index, at least $O(\gamma^2)$ at each iteration. On the other hand, a large superposition, beyond a large $\Vert A \Vert$, is also susceptible to faster decoherence. We need a collection of quantum packing schemes for N-d arrays to choose from that balance ease of use, $\Vert A \Vert$, and processing time.
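For reference, the 1-d to N-d index conversion in question is the usual row-major (un)linearization; a small classical NumPy sketch for a hypothetical 2×3×4 array is shown below.

```python
# Row-major conversion between a 1-d (linear) index and an N-d index.
import numpy as np

shape = (2, 3, 4)
linear = 17
print(np.unravel_index(linear, shape))         # (1, 1, 1): N-d index for linear index 17
print(np.ravel_multi_index((1, 1, 1), shape))  # 17: back to the linear index
```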
Previous work does not rely on quantum memory and states that a superposition constitutes a database [169]. It is similar to a statement that an array inside a classical CPU constitutes a database. Quantum memory, array layouts in different quantum memory types, and respective I/O algorithms should be the next step such that we can move the definition of a quantum database from a superposition to an in-memory database, similar to classical in-memory databases. The benefits of this are a wider spectrum of data engineering operations and early preparation for the upcoming quantum memory.

6.1.4. Quantum-Classical Interface

A Quantum Array (Tensor) DBMS must foresee a module for exchanging data with its classical components. It is generally referred to as a quantum-classical interface [197]. What will be the cost of data transfer across the boundary [148,197]? What must caching mechanisms in this new setup look like?
In this subsection, it is also appropriate to mention again that Quantum Array (Tensor) DBMSs can be designed to work on data that happens to be already inside a quantum machine. Therefore, in such cases Quantum Array (Tensor) DBMSs can avoid moving data using a quantum-classical interface, addressing the concern of its possible bandwidth limitation, Section 2.2. However, it is obvious that quantum-classical interfaces will improve in the future like any other hardware.
One might argue that one of the key bottlenecks in an Array (Tensor) DBMS is disk I/O, but even a tiny quantum register can address and contain an amount of data that is orders of magnitude larger than any contemporary disk volume or even an entire data warehouse, see Section 2.2 for an example.
Current quantum machine prototypes exhibit short coherence times: all the more reason to develop and use fast array (tensor) engineering techniques to be able to perform more operations before decoherence takes place. Naturally, there is ongoing work to improve coherence times, Section 2.1.

6.1.5. Query Parsing

Query parsing is a well-studied area, but not in quantum computing. Currently, no automatic way of translating classical functions to quantum instructions (circuits) exists. It is an open field of R&D. Moreover, the circuit length should also be minimized, e.g., to avoid decoherence [198,199]. Even compiling core Array (Tensor) DBMS queries is a challenge, especially in light of the inability to map a circuit 1:1 to a real quantum computer. A prominent and useful exception is Map Algebra that is amenable to automatic query compilation, Section 4.8.
In addition, circuit optimization techniques tailored to array operations could optimize a circuit with a series of array-related oracles, e.g., rotation merging, gate reduction and cancellation, and accounting for architectural peculiarities (e.g., measurement in the middle) [200,201,202].

6.1.6. New Cost Models and Benchmarks

Array algorithms should consider and balance an increasing number of parameters, sometimes contradictory, besides runtime asymptotic complexity alone: $|A|$, $\Vert A \Vert$, fidelity, circuit depth and width, etc. In practice, an increased runtime (e.g., more classical-quantum interactions) with higher fidelity may be preferred.
For example, it is interesting to explore how approximate computations can leverage the notorious “weakness” of quantum computing. For instance, one could generate the output array of a map algebra operation faster, without computing the output values exactly.
Quantum memory is expected to have certain properties, including the way the memory cell values are entangled with a quantum pointer, Section 2.2. Therefore, quantum memory can offer unprecedented opportunities compared to classical memory, and we wait for quantum memory to appear, but we wait actively rather than passively. Diverse prototypes of quantum memory can appear, including virtual quantum memory, with different characteristics [144,147]. An efficient and effective quantum memory must quickly and atomically perform I/O (input/output) of the entire array, besides other merits like noise-resistance, long coherence times, sufficient capacity, and others.
Once quantum memory prototypes become available, we will be able not only to evaluate the presented approaches on a quantum machine, but also to gain valuable experience that will impact future Quantum Array (Tensor) DBMS advancements. However, it is very important to start early, right now, developing approaches that rely on the predicted quantum memory properties to promptly take advantage of its capabilities as much as possible in the future.
Current conceptual quantum memory architectures differ from each other in expected characteristics [141,144,146,147]. Although asymptotically it may take $O(\gamma)$ to read a memory cell and $O(\gamma)$ to perform an operation on a quantum register of $\gamma$ qubits, the hidden constants may differ substantially. In certain scenarios, novel cost models and benchmarks should also consider counting the number of quantum memory I/O operations. Hence, if necessary, it is straightforward to refine the asymptotic estimates presented in this article to account for different quantum memory characteristics and usage scenarios.
As we have already noted, exponential speedups when solving certain problems are one of the central motivations for developing quantum machines. Of course, current quantum computing limitations, including noise, may somewhat impact the performance of a quantum software system and Quantum Array (Tensor) DBMSs. However, quantum machines should improve over time, as any other hardware, become more fault-tolerant, and acquire necessary characteristics that will allow them to become more preferred, practical, competitive, or form a niche for certain data-driven applications, broadly understood. For example, recall that quantum annealing is already being used successfully [149].
The well-known Grover algorithm can serve as an additional, vivid illustration of the importance of quantum memory for Earth data engineering [98]. Given an unsorted array of $2^n$ elements and some value, the algorithm yields an index that corresponds to the given value in $O(\sqrt{2^n})$ on a quantum computer. It tests all $2^n$ values at once just by $O_f H^{\otimes n}|0^n\rangle$ if $O_f$ is an oracle such that $\pounds(O_f) = O(1)$. A naive classical algorithm runs in $O(2^n)$ by traversing all $2^n$ array values sequentially in the worst case.
Obviously, without a mechanism that supports fetching multiple values in parallel from an array, quantum algorithms will remain limited in practice, until there is another way to efficiently perform atomic and massively parallel data I/O other than quantum memory. The Grover algorithm requires an initialization phase that reads the array data or a way to efficiently probe the array at each iteration, so the estimate $O(\sqrt{2^n})$ does not take I/O into account: $\rho(H^{\otimes n}|0^n\rangle) = \rho_{ini}$ in our notation. Logically, it would be great to have $\pounds(\rho_{ini}) = O(n)$ in a (near) future quantum memory, such that the Grover algorithm is advantageous over its classical counterpart. $\pounds(\rho_{ini})$ is unlikely to be $O(1)$, as it would then not depend on the I/O volume, and not $O(2^n)$, as that would make it challenging to efficiently implement a data-driven algorithm (unless the data is already in a quantum register). Considering the initial QRAM access, an optimistic asymptotic cost of the Grover algorithm is $\pounds(\mathrm{Grover}) = O(\sqrt{2^n} + n)$. This will also enable efficient implementations of other algorithms, as noted in Section 4.4. Therefore, there is a lot of motivation to build efficient and effective quantum memory.
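For intuition, a minimal NumPy simulation of Grover’s search over a toy 8-element array is given below (statevector arithmetic rather than an actual circuit, and without modeling the QRAM initialization cost discussed above): the oracle flips the sign of the amplitudes whose indexes hold the queried value, and each iteration reflects the state about its mean.

```python
# Statevector simulation of Grover's search over a toy unsorted array.
import numpy as np

data = np.array([5, 2, 9, 7, 3, 8, 1, 6])    # 2^3 = 8 unsorted values
target_value = 3

state = np.full(data.size, 1 / np.sqrt(data.size))  # uniform superposition over all indexes
marked = (data == target_value)

iterations = int(np.floor(np.pi / 4 * np.sqrt(data.size)))  # ~O(sqrt(2^n)) iterations
for _ in range(iterations):
    state[marked] *= -1                      # oracle: phase flip on matching indexes
    state = 2 * state.mean() - state         # diffusion: reflection about the mean

print(np.argmax(state ** 2))                 # prints 4, and indeed data[4] == 3
```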
Finally, with the emergence of Quantum Array (Tensor) DBMSs, there is a need for designing respective quantum benchmarks. Well-known classical array benchmarks are Sequoia 2000 (contains queries for Earth data) [203,204] and SS-DB (focuses on astronomical data) [23]. In addition, evaluations have been conducted on neuroscience and astronomy datasets [22], geospatial workloads [17], and synthetic data [9,205]. Quantum computing and memory alter the way Earth data, and arrays (tensors) in general, are treated within Quantum Array (Tensor) DBMSs. Therefore, classical queries that exploit differences between sparse and dense arrays can become obsolete for a Quantum Array (Tensor) DBMS benchmark due to the use of quantum strips, Section 4.5.1 and Section 4.5.2. On the contrary, queries that delete a certain item from an array (tensor) can be of interest to quantum array (tensor) benchmarks, Section 4.5.4. Similarly, quantum benchmarks could provide insights and performance indicators to reveal applications and queries that run more efficiently in a Quantum Array (Tensor) DBMS in comparison to a classical Array (Tensor) DBMS.

6.1.7. Hardware-Software Co-Design

Array-tailored primitives like reduce, scatter, gather, and others at hardware level would enable implementing array operations much more efficiently, instead of custom multistage approaches. The hardware community can review the presence of NA in the parameters of ρ and ω to automatically enable intrinsic array compaction and make diverse array (tensor) algorithms both efficient and easier to design, Section 4.4.1 and Section 4.4.2.
In addition, it is possible to gain great advantages from the read-and-append operation $\rho_a$, Section 4.7 and Table 1. For the standard read operation $\rho$, the address register always contains unique memory cell indexes: $\pi = \sum_j \psi_j |j\rangle$. Here, each value $j$ appears only once in $\pi$. On the contrary, for the read-and-append operation $\rho_a$, the address register can contain repeating cell indexes: $\pi_a = \sum_j \psi_j |j\rangle|\cdot\rangle$. Here, $j$ may not be unique due to $|\cdot\rangle$. However, this must not be treated as a problem, but rather as both a goal and a challenge for the hardware community. In many algorithms, entangling the value of the same memory cell with different superposition terms would be advantageous.
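A purely classical stub of the difference is sketched below: the standard read expects unique addresses, while the read-and-append variant appends the addressed cell value to every term, even when several terms address the same cell. This only illustrates the addressing pattern, not the entanglement that actual quantum memory would provide.

```python
# Classical stub contrasting the read (rho) and read-and-append (rho_a) access patterns.
qram = {0: 11, 1: 42, 2: 7}  # memory cell index -> value

def rho(addresses):
    # standard read: every address appears exactly once
    assert len(addresses) == len(set(addresses))
    return [[j, qram[j]] for j in addresses]

def rho_a(terms, address_position=0):
    # read-and-append: the addressed cell value is appended to each term,
    # even when several terms address the same cell
    return [term + [qram[term[address_position]]] for term in terms]

print(rho([0, 1, 2]))
print(rho_a([[0, "x"], [0, "y"], [2, "z"]]))  # cell 0 is read for two different terms
```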
APIs of quantum frameworks must support array-specific operations at a low level to guarantee efficiency, e.g., hyperslabbing. To date, we are at the stage of suggestions to extend existing APIs [145]. Quantum memory makes it more natural to express array-tailored intentions. Hence, respective APIs must also clearly and explicitly support atomic array-based operations.
It would be great for the Earth science, data management, and quantum computing communities to establish a cross-disciplinary communication to be able to carry out collaborative quantum hardware-software co-design with important environmental and ecological applications in mind that are key to sustainable well-being [206].

6.2. Improving QGantt Charts and Quantum Network Diagrams

In this article, we proposed new types of charts along with numerous visualization and annotation techniques that improve the appearance of QGantt Charts and Quantum Network Diagrams. However, it is possible to explore numerous other opportunities and directions for improvement and alternative visualizations.
For example, in our charts, we specified the amplitudes of superposition terms or omitted them when they were not important for explaining a given algorithm. However, sometimes it is important to convey the order of magnitude of each amplitude rather than its exact value: how large are the amplitudes in general and relative to each other?
To serve the aforementioned purpose, we can use specialized visualizations instead of numbers. For instance, the clock image Earth 05 00027 i011 shows the time 15 min past noon. This mnemonic can indicate that the amplitude value is $\frac{1}{4}$. Another way is to render the amplitudes as icons, e.g., WiFi or similar: Earth 05 00027 i012. In this case, the displayed signal strength becomes roughly proportional to the amplitude value.
QGantt charts and Quantum Network Diagrams can also be visualized as interactive graphs in diverse formats like GEXF, GML, GraphML, and others [207].

6.3. A Roadmap for a Future Quantum Array (Tensor) DBMS

Now, with a solid theoretical footing for future Quantum Array (Tensor) DBMS capabilities, let us holistically describe an architecture of a future Quantum Array (Tensor) DBMS. To date, simulation is the most complex workload for an Array (Tensor) DBMS and the most diverse in terms of the scope and variety of Array (Tensor) DBMS functions utilized [65,66]. Hence, we illustrate our holistic vision using an end-to-end complex Array (Tensor) DBMS simulation pipeline, Figure 24. This example can also serve as a roadmap or research plan. Obviously, the presented architecture may and should undergo subsequent refinements and maybe even substantial changes reflecting the development of quantum computing, memory, machines, transmission, and other related knowledge domains. However, it is important to start early, right now, by developing the first versions of Quantum Array (Tensor) DBMSs based on the theoretical framework that we already have while keeping pace with quantum research and industry advancements.
Most contemporary related work implicitly or explicitly assumes that a quantum superposition in a quantum register constitutes a database [169]. This is similar to treating the contents of a CPU register as a database without mentioning any I/O. Instead, we devote special attention to in-memory Quantum Array (Tensor) DBMS capabilities. Array (tensor) data can be stored in a classical DBMS component, in QRAM/RAQM which can be the next memory hierarchy level after quantum registers (along with caches; compare classical CPU, RAM, and in-memory database engines), exchanged between classical and quantum Array (Tensor) DBMS components, as well as via quantum communication.
The user starts with a GUI (Graphical User Interface) on a classical machine that serves as a coordinator and executor of non time-critical tasks. The coordinator parses UDFs (User Defined Functions) in the Array (Tensor) DBMS native language [66] and compiles them to quantum circuits. It also generates a proactive simulation plan (PSP) [65,66] by inspecting input array metadata and UDFs. The coordinator fetches an optimized PSP from a quantum component via the classical-quantum interface (CQI). Finally, optimized circuits and arrays are submitted via CQI. We expect probabilistic simulations to have a superpolynomial speedup compared to a classical machine (S2C2M) [208,209,210].
Figure 24. Example of a Quantum Array (Tensor) DBMS running physical simulations together with its prospective classical and quantum components.
The procedure is the same for ETL (Extract, Transform, Load), simulation, deriving simulation statistics, and visualization, but ordinary execution plans can be submitted instead of a PSP. Not all array data can fit into the quantum part, so the coordinator splits a PSP into parts that operate on at most a given threshold of array volume; this volume is possible to predict, as we have strict formal definitions of array operations. In addition, the coordinator tends to minimize the number and volume of CQI interactions.
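A toy sketch of such volume-bounded plan splitting is shown below; the operation names and volumes are hypothetical, and a real planner would also take CQI interaction counts into account.

```python
# Greedy splitting of a plan into parts whose total array volume fits a threshold.
def split_plan(operations, max_volume):
    parts, current, current_volume = [], [], 0
    for name, volume in operations:
        if current and current_volume + volume > max_volume:
            parts.append(current)
            current, current_volume = [], 0
        current.append(name)
        current_volume += volume
    if current:
        parts.append(current)
    return parts

plan = [("resample", 3), ("reshape", 1), ("join", 4), ("aggregate", 1)]
print(split_plan(plan, max_volume=5))  # [['resample', 'reshape'], ['join', 'aggregate']]
```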
The user cooks arrays during the ETL phase by submitting array transformation queries, e.g., resample, reshape, and join, that run in S2C2M. The results may go to classical storage and be fetched later if there is not enough space in QRAM or RAQM. Recall that Quantum Array (Tensor) DBMSs can also work on data that has been generated inside a quantum machine and is already in a quantum register or quantum memory, not retrieved via CQI. To speed up simulations or certain queries, the user (or the Array (Tensor) DBMS automatically) will create quantum array indexes, Section 4.9.
Finally, the user can investigate input/output data visually or compute statistics via aggregation or more sophisticated operations, e.g., tune [12]. Complex rendering can be performed on the classical side, but the quantum side has the potential to be more beneficial in rapid-response scenarios. Arrays can be exchanged between Quantum Array (Tensor) DBMSs and external systems via quantum communication.
We have outlined the architecture of a future Quantum Array (Tensor) DBMS based on the most complex workload for a contemporary classical Array (Tensor) DBMS. A more comprehensive sketch can include quantum machine learning (QML) [211,212,213,214,215], distributed quantum computing (DQC) [216,217], quantum differential privacy (QDF) [218,219], as well as quantum communication (QC) [220,221,222]. These subareas are very promising within quantum computing. However, some areas are just paving their way to Array (Tensor) DBMSs, for example, machine learning [8,28,31]. Therefore, Array (Tensor) DBMSs have an opportunity to become even more involved in this rapidly growing field of R&D at an early stage, gaining the respective benefits of a timely start: quantum computing can additionally motivate integrating QML and other capabilities into Array (Tensor) DBMSs, making them quantum-ready or quantum-native solutions.
QC promises to provide fast and secure data exchange. Quantum Array (Tensor) DBMSs can leverage QC to move sensitive data between each other and external systems. Array (Tensor) DBMSs utilize open, standardized network protocols like the Web Map Tile Service (WMTS) [223] to provide array tiles to multiple concurrent users [9,63]. Currently used protocols can be revised to integrate better with QC and utilize its benefits.
In addition, QC, QML, DQC, and quantum computing in general can be especially helpful in scenarios where immediate, small responses that result from massive computations are desired. For example, consider a typical array (tensor) tile shaped 256 × 256. This means that we need a register of a small number of qubits to process a single tile (or a strip in the Quantum Array (Tensor) DBMS Data Model, Section 4.5): a 256 × 256 tile has $2^{16}$ cells, so 16 address qubits suffice. We can provide users a small, assembly-like UDF language to specify operations that they would like to run on the tiles (strips). In addition, such operations may be hard-coded and used in extreme scenarios like disaster monitoring, rapid response, and medical diagnosis. Note that as we would have a language tailored to existing quantum gates, we do not deal with arbitrary functions. Hence, we can compile and submit time-critical computations to quantum computers using our UDFs. This application can be one of the most representative, as it is both immediately practical and does not require a large number of qubits to start experimenting with.
UDFs in Array (Tensor) DBMSs are under active research. Most Array (Tensor) DBMSs accept UDFs in Python/C++ which are black boxes that cannot be optimized [22]. However, recently, the first native UDF language for array DBMSs has been introduced in [17]. Window query optimizations for UDFs were presented in [224]. Certainly, diverse quantum compiler optimization techniques are applicable together with multi-tenancy optimization and joint (shared) execution for future quantum UDFs.
DQC can enable building clusters that consist of multiple Quantum Array (Tensor) DBMSs. Note that new quantum-tailored communication interfaces are already being developed, for example, QMPI [217], a quantum version of the well-known Message Passing Interface (MPI). Undoubtedly, we include DQC as an enabling technology for Quantum Array (Tensor) DBMSs, because a Quantum Array (Tensor) DBMS would be able to utilize DQC immediately once it matures, reaching the same goals that classical distributed Array (Tensor) DBMSs pursue when launched on a computer cluster [17,67].
Array (Tensor) DBMSs are attractive for machine learning due to their data model that natively operates on arrays (tensors), one of the key entities in machine learning and data science. In addition, if an Array (Tensor) DBMS has rich built-in machine learning capabilities, it becomes possible to avoid costly movements of potentially massive array (tensor) data between systems to perform data engineering coupled with machine learning. In this case, QML inside Quantum Array (Tensor) DBMSs is a natural follow-up of the aforementioned current efforts, with strong prospects. Quantum Convolutional Neural Networks [225,226,227,228] deserve special attention from Array (Tensor) DBMS architects and developers, because they can be applicable to the Earth array (tensor) data types that currently enjoy the most frequent utilization in Array (Tensor) DBMS workloads, Section 6.1.1, especially if such Earth data happens to be already inside a quantum machine.

7. Concluding Remarks

The journey of Array (Tensor) DBMSs to the area of Quantum Computing has just begun. They meet beautiful theory there, help to energize myriads of Earth science applications, and have a clear path forward. We have introduced a quantum array data model and quantum array approaches to a set of Array (Tensor) DBMS challenges that result in exponential reduction in the runtime computational complexity at the information theoretical level: quantum O ( n ) vs. classical O ( 2 n ) . This proves that Array (Tensor) DBMSs and Earth data engineering are a good target for quantum computing which can enable unprecedented speedups intrinsically. It is especially attractive to utilize Array (Tensor) DBMSs for data that is already inside a quantum computer, e.g., as a result of previous activity like simulations. We showcased quantum memory and quantum computing potentials for Array (Tensor) DBMSs, as well as contributed by reasoning about quantum array-tailored memory operations, as current literature lacks these aspects.
Quantum Gantt charts (QGantt charts) and Quantum Network Diagrams (QND) deserve a special note, as they represent a new and comprehensive way to illustrate quantum computing and quantum memory operations in a static format. We used QGantt charts to demonstrate standard quantum techniques, unrelated to Quantum Array (Tensor) DBMSs, as well as to explain our new approaches. We accompanied our algorithms with QGantt charts to show that they are widely applicable for research, development, and educational purposes, especially those that benefit from seeing the exact numerical values of superposition terms and from easily tracking the inputs/outputs of quantum operations in detail.
Nevertheless, much still needs to be done: we holistically outlined key challenges and opportunities in building a complete Quantum Array (Tensor) DBMS, formed an intuition on how Quantum Array (Tensor) DBMSs may be designed, and provided a roadmap for future Quantum Array (Tensor) DBMS R&D. Of course, we expect that the proposed solutions and our vision will be further improved, as these represent the first steps in Quantum Array (Tensor) DBMSs. The goal and the rewards shine so brightly and appealingly, that they serve as a strong source of motivation for future work.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Donoho, D. 50 years of data science. J. Comput. Graph. Stat. 2017, 26, 745–766. [Google Scholar]
  2. Chai, C.P. The importance of data cleaning: Three visualization examples. Chance 2020, 33, 4–9. [Google Scholar]
  3. ECMWF. 2022. Available online: https://www.ecmwf.int/en/computing/our-facilities/data-handling-system (accessed on 22 August 2024).
  4. Sentinel Data Access Annual Report. 2021. Available online: https://sentinels.copernicus.eu/web/sentinel/-/copernicus-sentinel-data-access-annual-report-2021 (accessed on 22 August 2024).
  5. Nativi, S.; Caron, J.; Domenico, B.; Bigagli, L. Unidata’s Common Data Model mapping to the ISO 19123 Data Model. Earth Sci. Inform. 2008, 1, 59–78. [Google Scholar]
  6. Balaji, V.; Adcroft, A.; Liang, Z. Gridspec: A standard for the description of grids used in Earth System models. arXiv 2019, arXiv:1911.08638. [Google Scholar]
  7. Rusu, F. Multidimensional array data management. Found. Trends® Databases 2023, 12, 69–220. [Google Scholar]
  8. Rodriges Zalipynis, R.A. Array DBMS: Past, Present, and (Near) Future. Proc. VLDB Endow. 2021, 14, 3186–3189. [Google Scholar]
  9. Baumann, P.; Misev, D.; Merticariu, V.; Huu, B.P. Array databases: Concepts, standards, implementations. J. Big Data 2021, 8, 1–61. [Google Scholar]
  10. Kingsmore, S.F.; Smith, L.D.; Kunard, C.M.; Bainbridge, M.; Batalov, S.; Benson, W.; Blincow, E.; Caylor, S.; Chambers, C.; Del Angel, G.; et al. A genome sequencing system for universal newborn screening, diagnosis, and precision medicine for severe genetic diseases. Am. J. Hum. Genet. 2022, 109, 1605–1619. [Google Scholar]
  11. Askenazi, M.; Ben Hamidane, H.; Graumann, J. The arc of Mass Spectrometry Exchange Formats is long, but it bends toward HDF5. Mass Spectrom. Rev. 2017, 36, 668–673. [Google Scholar]
  12. Rodriges Zalipynis, R.A. BitFun: Fast Answers to Queries with Tunable Functions in Geospatial Array DBMS. Proc. VLDB Endow. 2020, 13, 2909–2912. [Google Scholar]
  13. Horlova, O.; Kaitoua, A.; Ceri, S. Array-based Data Management for Genomics. In Proceedings of the 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20–24 April 2020; pp. 109–120. [Google Scholar]
  14. Masseroli, M.; Canakoglu, A.; Pinoli, P.; Kaitoua, A.; Gulino, A.; Horlova, O.; Nanni, L.; Bernasconi, A.; Perna, S.; Stamoulakatou, E.; et al. Processing of big heterogeneous genomic datasets for tertiary analysis of Next Generation Sequencing data. Bioinformatics 2019, 35, 729–736. [Google Scholar]
  15. Rodriges Zalipynis, R.A. Generic Distributed In Situ Aggregation for Earth Remote Sensing Imagery. In Proceedings of the International Conference on Analysis of Images, Social Networks and Texts, Moscow, Russia, 5–7 July 2018; LNCS. Springer: Cham, Switzerland, 2018; Volume 11179, pp. 331–342. [Google Scholar]
  16. Xing, H.; Agrawal, G. COMPASS: Compact array storage with value index. In Proceedings of the 30th International Conference on Scientific and Statistical Database Management, Bozen-Bolzano, Italy, 9–11 July 2018; pp. 1–12. [Google Scholar]
  17. Rodriges Zalipynis, R.A. ChronosDB: Distributed, File Based, Geospatial Array DBMS. Proc. VLDB Endow. 2018, 11, 1247–1261. [Google Scholar]
  18. Deaton, A.M.; Parker, M.M.; Ward, L.D.; Flynn-Carroll, A.O.; BonDurant, L.; Hinkle, G.; Akbari, P.; Lotta, L.A. Gene-level analysis of rare variants in 379,066 whole exome sequences identifies an association of GIGYF1 loss of function with type 2 diabetes. Sci. Rep. 2021, 11, 21565. [Google Scholar]
  19. Ward, L.D.; Tu, H.C.; Quenneville, C.B.; Tsour, S.; Flynn-Carroll, A.O.; Parker, M.M.; Deaton, A.M.; Haslett, P.A.; Lotta, L.A.; Verweij, N.; et al. GWAS of serum ALT and AST reveals an association of SLC30A10 Thr95Ile with hypermanganesemia symptoms. Nat. Commun. 2021, 12, 4571. [Google Scholar]
  20. Aleksandrov, M.; Zlatanova, S.; Heslop, D.J. Voxelisation algorithms and data structures: A review. Sensors 2021, 21, 8241. [Google Scholar] [CrossRef]
  21. Kim, M.; Lee, H.; Chung, Y.D. Multi-Dimensional Data Compression and Query Processing in Array Databases. IEEE Access 2022, 10, 111528–111544. [Google Scholar]
  22. Mehta, P.; Dorkenwald, S.; Zhao, D.; Kaftan, T.; Cheung, A.; Balazinska, M.; Rokem, A.; Connolly, A.; Vanderplas, J.; AlSayyad, Y. Comparative evaluation of big-data systems on scientific image analytics workloads. Proc. VLDB Endow. 2017, 10, 1226–1237. [Google Scholar]
  23. Cudre-Mauroux, P.; Kimura, H.; Lim, K.T.; Rogers, J.; Madden, S.; Stonebraker, M.; Zdonik, S.B.; Brown, P.G. SS-DB: A Standard Science DBMS Benchmark. In Proceedings of the XLDB, 2010. Available online: https://people.csail.mit.edu/jennie/_content/research/ssdb_benchmark.pdf (accessed on 22 August 2024).
  24. Soroush, E.; Balazinska, M.; Wang, D. ArrayStore: A storage manager for complex parallel array processing. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, Athens, Greece, 12–16 June 2011; pp. 253–264. [Google Scholar]
  25. Kim, B.; Koo, K.; Enkhbat, U.; Kim, S.; Kim, J.; Moon, B. M2Bench: A Database Benchmark for Multi-Model Analytic Workloads. Proc. VLDB Endow. 2022, 16, 747–759. [Google Scholar]
  26. Choi, D.; Yoon, H.; Chung, Y.D. ReSKY: Efficient Subarray Skyline Computation in Array Databases. Distrib. Parallel Databases 2022, 40, 261–298. [Google Scholar]
  27. Choi, D.; Yoon, H.; Chung, Y.D. Subarray skyline query processing in array databases. In Proceedings of the 33rd International Conference on Scientific and Statistical Database Management, Tampa, FL, USA, 6–7 July 2021; pp. 37–48. [Google Scholar]
  28. Villarroya, S.; Baumann, P. On the Integration of Machine Learning and Array Databases. In Proceedings of the 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20–24 April 2020; pp. 1786–1789. [Google Scholar]
  29. Rodriges Zalipynis, R.A. Towards Machine Learning in Distributed Array DBMS: Networking Considerations. In Proceedings of the Machine Learning for Networking: Third International Conference, MLN 2020, Paris, France, 24–26 November 2020; LNCS. Springer: Cham, Switzerland, 2021; Volume 12629, pp. 284–304. [Google Scholar]
  30. Ordonez, C.; Zhang, Y.; Johnsson, S.L. Scalable machine learning computing a data summarization matrix with a parallel array DBMS. Distrib. Parallel Databases 2019, 37, 329–350. [Google Scholar]
  31. Villarroya, S.; Baumann, P. A survey on machine learning in array databases. Appl. Intell. 2023, 53, 9799–9822. [Google Scholar]
  32. Alam, M.M.; Torgo, L.; Bifet, A. A survey on spatio-temporal data analytics systems. ACM Comput. Surv. 2022, 54, 1–38. [Google Scholar]
  33. Xu, C.; Du, X.; Fan, X.; Giuliani, G.; Hu, Z.; Wang, W.; Liu, J.; Wang, T.; Yan, Z.; Zhu, J.; et al. Cloud-based storage and computing for remote sensing big data: A technical review. Int. J. Digit. Earth 2022, 15, 1417–1445. [Google Scholar]
  34. Lewis, A.; Oliver, S.; Lymburner, L.; Evans, B.; Wyborn, L.; Mueller, N.; Raevksi, G.; Hooke, J.; Woodcock, R.; Sixsmith, J.; et al. The Australian Geoscience Data Cube—Foundations and lessons learned. Remote Sens. Environ. 2017, 202, 276–292. [Google Scholar]
  35. Baumann, P.; Misev, D.; Merticariu, V.; Huu, B.P.; Bell, B. DataCubes: A technology survey. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 430–433. [Google Scholar]
  36. Baumann, P. Towards a Model-Driven Datacube Analytics Language. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 3740–3746. [Google Scholar]
  37. Mahecha, M.D.; Gans, F.; Brandt, G.; Christiansen, R.; Cornell, S.E.; Fomferra, N.; Kraemer, G.; Peters, J.; Bodesheim, P.; Camps-Valls, G.; et al. Earth system data cubes unravel global multivariate dynamics. Earth Syst. Dyn. 2020, 11, 201–234. [Google Scholar]
  38. Baumann, P.; Misev, D. ORBiDANSe: Orbital Big Datacube Analytics Service. In Proceedings of the EGU General Assembly Conference Abstracts, Vienna, Austria, 23–27 May 2022; p. EGU22-13002. [Google Scholar]
  39. Rivera, G.; Porras, R.; Florencia, R.; Sánchez-Solís, J.P. LiDAR applications in precision agriculture for cultivating crops: A review of recent advances. Comput. Electron. Agric. 2023, 207, 107737. [Google Scholar]
  40. Su, J.; Zhu, X.; Li, S.; Chen, W.H. AI meets UAVs: A survey on AI empowered UAV perception systems for precision agriculture. Neurocomputing 2023, 518, 242–270. [Google Scholar]
  41. Pande, C.B.; Moharir, K.N. Application of hyperspectral remote sensing role in precision farming and sustainable agriculture under climate change: A review. In Climate Change Impacts on Natural Resources, Ecosystems and Agricultural Systems; Springer: Berlin/Heidelberg, Germany, 2023; pp. 503–520. [Google Scholar]
  42. Luo, Y.; Huang, H.; Roques, A. Early monitoring of forest wood-boring pests with remote sensing. Annu. Rev. Entomol. 2023, 68, 277–298. [Google Scholar]
  43. Massey, R.; Berner, L.T.; Foster, A.C.; Goetz, S.J.; Vepakomma, U. Remote Sensing Tools for Monitoring Forests and Tracking Their Dynamics. In Boreal Forests in the Face of Climate Change: Sustainable Management; Springer: Berlin/Heidelberg, Germany, 2023; pp. 637–655. [Google Scholar]
  44. Yu, D.; Fang, C. Urban Remote Sensing with Spatial Big Data: A Review and Renewed Perspective of Urban Studies in Recent Decades. Remote Sens. 2023, 15, 1307. [Google Scholar] [CrossRef]
  45. Li, F.; Yigitcanlar, T.; Nepal, M.; Nguyen, K.; Dur, F. Machine Learning and Remote Sensing Integration for Leveraging Urban Sustainability: A Review and Framework. Sustain. Cities Soc. 2023, 96, 104653. [Google Scholar]
  46. Adjovu, G.E.; Stephen, H.; James, D.; Ahmad, S. Overview of the Application of Remote Sensing in Effective Monitoring of Water Quality Parameters. Remote Sens. 2023, 15, 1938. [Google Scholar] [CrossRef]
  47. Liu, Z.; Xu, J.; Liu, M.; Yin, Z.; Liu, X.; Yin, L.; Zheng, W. Remote sensing and geostatistics in urban water-resource monitoring: A review. Mar. Freshw. Res. 2023, 74, 747–765. [Google Scholar]
  48. Kurniawan, R.; Alamsyah, A.R.B.; Fudholi, A.; Purwanto, A.; Sumargo, B.; Gio, P.U.; Wongsonadi, S.K.; Susanto, A.E.H. Impacts of industrial production and air quality by remote sensing on nitrogen dioxide concentration and related effects: An econometric approach. Environ. Pollut. 2023, 334, 122212. [Google Scholar]
  49. Abu El-Magd, S.; Soliman, G.; Morsy, M.; Kharbish, S. Environmental hazard assessment and monitoring for air pollution using machine learning and remote sensing. Int. J. Environ. Sci. Technol. 2023, 20, 6103–6116. [Google Scholar]
  50. Sudmanns, M.; Augustin, H.; Killough, B.; Giuliani, G.; Tiede, D.; Leith, A.; Yuan, F.; Lewis, A. Think global, cube local: An Earth Observation Data Cube’s contribution to the Digital Earth vision. Big Earth Data 2023, 7, 831–859. [Google Scholar]
  51. Mahmood, R.; Zhang, L.; Li, G. Assessing effectiveness of nature-based solution with big earth data: 60 years mangrove plantation program in Bangladesh coast. Ecol. Process. 2023, 12, 11. [Google Scholar]
  52. Wang, S.; Wang, J.; Zhan, Q.; Zhang, L.; Yao, X.; Li, G. A unified representation method for interdisciplinary spatial earth data. Big Earth Data 2023, 7, 126–145. [Google Scholar]
  53. Rodriges Zalipynis, R.A.; Pozdeev, E.; Bryukhov, A. Array DBMS and Satellite Imagery: Towards Big Raster Data in the Cloud. In Proceedings of the International Conference on Analysis of Images, Social Networks and Texts, Moscow, Russia, 27–29 July 2017; LNCS. Volume 10716, pp. 267–279. [Google Scholar]
  54. Ladra, S.; Paramá, J.R.; Silva-Coira, F. Scalable and queryable compressed storage structure for raster data. Inf. Syst. 2017, 72, 179–204. [Google Scholar]
  55. Leclercq, É.; Gillet, A.; Grison, T.; Savonnet, M. Polystore and Tensor Data Model for Logical Data Independence and Impedance Mismatch in Big Data Analytics. In LNCS; Springer: Berlin/Heidelberg, Germany, 2019; pp. 51–90. [Google Scholar]
  56. Papadopoulos, S.; Datta, K.; Madden, S.; Mattson, T. The TileDB Array Data Storage Manager. Proc. VLDB Endow. 2016, 10, 349–360. [Google Scholar]
  57. Rodriges Zalipynis, R.A. ChronosDB in Action: Manage, Process, and Visualize Big Geospatial Arrays in the Cloud. In Proceedings of the 2019 International Conference on Management of Data, Amsterdam, The Netherlands, 30 June–5 July 2019; pp. 1985–1988. [Google Scholar]
  58. Zhao, W.; Rusu, F.; Dong, B.; Wu, K.; Nugent, P. Incremental view maintenance over array data. In Proceedings of the 2017 ACM International Conference on Management of Data, Chicago, IL, USA, 14–19 May 2017; pp. 139–154. [Google Scholar]
  59. Zhao, W.; Rusu, F.; Dong, B.; Wu, K. Similarity join over array data. In Proceedings of the 2016 International Conference on Management of Data, San Francisco, CA, USA, 26 June–1 July 2016; pp. 2007–2022. [Google Scholar]
  60. Zhao, W.; Rusu, F.; Dong, B.; Wu, K.; Ho, A.Y.; Nugent, P. Distributed caching for processing raw arrays. In Proceedings of the 30th International Conference on Scientific and Statistical Database Management, Bozen-Bolzano, Italy, 9–11 July 2018. [Google Scholar]
  61. Rodriges Zalipynis, R.A. FastMosaic in Action: A New Mosaic Operator for Array DBMSs. Proc. VLDB Endow. 2023, 16, 3938–3941. [Google Scholar]
  62. Kilsedar, C.E.; Brovelli, M.A. Multidimensional visualization and processing of big open urban geospatial data on the web. ISPRS Int. J. Geo-Inf. 2020, 9, 434. [Google Scholar]
  63. Rodriges Zalipynis, R.A.; Terlych, N. WebArrayDB: A Geospatial Array DBMS in Your Web Browser. Proc. VLDB Endow. 2022, 15, 3622–3625. [Google Scholar]
  64. Battle, L.; Chang, R.; Stonebraker, M. Dynamic prefetching of data tiles for interactive visualization. In Proceedings of the 2016 International Conference on Management of Data, San Francisco, CA, USA, 26 June–1 July 2016; pp. 1363–1375. [Google Scholar]
  65. Rodriges Zalipynis, R.A. SimDB in Action: Road Traffic Simulations Completely Inside Array DBMS. Proc. VLDB Endow. 2022, 15, 3742–3745. [Google Scholar]
  66. Rodriges Zalipynis, R.A. Convergence of Array DBMS and Cellular Automata: A Road Traffic Simulation Case. In Proceedings of the 2021 International Conference on Management of Data, Xi’an, China, 20–25 June 2021; pp. 2399–2403. [Google Scholar]
  67. Cudre-Mauroux, P.; Kimura, H.; Lim, K.T.; Rogers, J.; Simakov, R.; Soroush, E.; Velikhov, P.; Wang, D.L.; Balazinska, M.; Becla, J.; et al. A demonstration of SciDB: A science-oriented DBMS. Proc. VLDB Endow. 2009, 2, 1534–1537. [Google Scholar]
  68. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar]
  69. Zhao, Q.; Yu, L.; Li, X.; Peng, D.; Zhang, Y.; Gong, P. Progress and trends in the application of Google Earth and Google Earth Engine. Remote Sens. 2021, 13, 3778. [Google Scholar] [CrossRef]
  70. Wang, Y.; Nandi, A.; Agrawal, G. SAGA: Array Storage as a DB with Support for Structural Aggregations. In Proceedings of the 26th International Conference on Scientific and Statistical Database Management, Aalborg, Denmark, 30 June–2 July 2014. [Google Scholar]
  71. Baumann, P.; Mazzetti, P.; Ungar, J.; Barbera, R.; Barboni, D.; Beccati, A.; Bigagli, L.; Boldrini, E.; Bruno, R.; Calanducci, A.; et al. Big data analytics for Earth sciences: The EarthServer approach. Int. J. Digit. Earth 2016, 9, 3–29. [Google Scholar]
  72. GeoTrellis. 2024. Available online: https://geotrellis.io/ (accessed on 22 August 2024).
  73. Dask. 2024. Available online: https://dask.org/ (accessed on 22 August 2024).
  74. Microsoft Planetary Computer. 2024. Available online: https://planetarycomputer.microsoft.com/ (accessed on 22 August 2024).
  75. Earth Engine|Google Cloud. 2024. Available online: https://cloud.google.com/earth-engine (accessed on 22 August 2024).
  76. Rodriges Zalipynis, R.A. Evaluating Array DBMS Compression Techniques for Big Environmental Datasets. In Proceedings of the 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Metz, France, 18–21 September 2019; Volume 2, pp. 859–863. [Google Scholar]
  77. Mainzer, J.; Fortner, N.; Heber, G.; Pourmal, E.; Koziol, Q.; Byna, S.; Paterno, M. Sparse Data Management in HDF5. In Proceedings of the 2019 IEEE/ACM 1st Annual Workshop on Large-Scale Experiment-in-the-Loop Computing (XLOOP), Denver, CO, USA, 18 November 2019; pp. 20–25. [Google Scholar]
  78. Cheng, Y.; Zhao, W.; Rusu, F. Bi-Level Online Aggregation on Raw Data. In Proceedings of the 29th International Conference on Scientific and Statistical Database Management, Chicago, IL, USA, 27–29 June 2017. [Google Scholar]
  79. Blanas, S.; Wu, K.; Byna, S.; Dong, B.; Shoshani, A. Parallel data analysis directly on scientific file formats. In Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, New York, NY, USA, 22–27 June 2014. [Google Scholar]
  80. Su, Y.; Agrawal, G. Supporting user-defined subsetting and aggregation over parallel NetCDF datasets. In Proceedings of the 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing CCGrid, Ottawa, ON, Canada, 13–16 May 2012; pp. 212–219. [Google Scholar]
  81. Rodriges Zalipynis, R.A. Distributed In Situ Processing of Big Raster Data in the Cloud. In Proceedings of the Perspectives of System Informatics, Moscow, Russia, 27–29 June 2017; LNCS. Springer: Cham, Switzerland, 2018; Volume 10742, pp. 337–351. [Google Scholar]
  82. Xing, H.; Agrawal, G. Accelerating array joining with integrated value-index. In Proceedings of the 31st International Conference on Scientific and Statistical Database Management, Vienna, Austria, 7–9 July 2020; pp. 145–156. [Google Scholar]
  83. Choi, D.; Park, C.S.; Chung, Y.D. Progressive top-k subarray query processing in array databases. Proc. VLDB Endow. 2019, 12, 989–1001. [Google Scholar]
  84. Azure Quantum Homepage. 2024. Available online: https://quantum.microsoft.com/ (accessed on 22 August 2024).
  85. Cloud Quantum Computing Service—Amazon Braket—AWS. 2024. Available online: https://aws.amazon.com/braket (accessed on 22 August 2024).
  86. IBM Quantum Computing. 2024. Available online: https://www.ibm.com/quantum (accessed on 22 August 2024).
  87. Google Quantum AI. 2024. Available online: https://quantumai.google/ (accessed on 22 August 2024).
  88. Qubit Scorecard. 2024. Available online: https://www.qusecure.com/qubit-scorecard/ (accessed on 22 August 2024).
  89. Fujitsu Quantum. 2024. Available online: https://www.fujitsu.com/global/about/research/technology/quantum/ (accessed on 22 August 2024).
  90. Atom Computing. 2024. Available online: https://atom-computing.com/ (accessed on 22 August 2024).
  91. D-Wave Systems. 2024. Available online: https://www.dwavesys.com/ (accessed on 22 August 2024).
  92. D-Wave. D-Wave Announces Availability of 1200+ Qubit Advantage2™ Prototype. 2024. Available online: https://www.dwavesys.com/company/newsroom/press-release/d-wave-announces-availability-of-1-200-qubit-advantage2-prototype/ (accessed on 22 August 2024).
  93. IBM 100,000 Qubit Supercomputer. 2023. Available online: www.ibm.com/quantum/blog/100k-qubit-supercomputer (accessed on 22 August 2024).
  94. Feynman, R.P. Simulating Physics with Computers. Int. J. Theor. Phys. 1982, 21, 133–153. [Google Scholar]
  95. Benioff, P. Quantum mechanical Hamiltonian models of Turing machines. J. Stat. Phys. 1982, 29, 515–546. [Google Scholar]
  96. Deutsch, D. Quantum theory, the Church–Turing principle and the universal quantum computer. Proc. R. Soc. Lond. A Math. Phys. Sci. 1985, 400, 97–117. [Google Scholar]
  97. Shor, P.W. Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science, Santa Fe, NM, USA, 20–22 November 1994; pp. 124–134. [Google Scholar]
  98. Grover, L.K. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, Philadelphia, PA, USA, 22–24 May 1996; pp. 212–219. [Google Scholar]
  99. Chen, J.; Stoudenmire, E.; White, S.R. Quantum Fourier Transform Has Small Entanglement. PRX Quantum 2023, 4, 040318. [Google Scholar]
  100. Camps, D.; Van Beeumen, R.; Yang, C. Quantum Fourier transform revisited. Numer. Linear Algebra Appl. 2021, 28, e2331. [Google Scholar]
  101. Acampora, G.; Chiatto, A.; Vitiello, A. Genetic algorithms as classical optimizer for the Quantum Approximate Optimization Algorithm. Appl. Soft Comput. 2023, 142, 110296. [Google Scholar]
  102. Drias, H.; Drias, Y.; Houacine, N.A.; Bendimerad, L.S.; Zouache, D.; Khennak, I. Quantum OPTICS and deep self-learning on swarm intelligence algorithms for Covid-19 emergency transportation. Soft Comput. 2023, 27, 13181–13200. [Google Scholar]
  103. Mukhamedov, F.; Souissi, A.; Hamdi, T.; Andolsi, A. Open quantum random walks and quantum Markov Chains on trees II: The recurrence. Quantum Inf. Process. 2023, 22, 232. [Google Scholar]
  104. Ardelean, S.M.; Udrescu, M. Graph coloring using the reduced quantum genetic algorithm. PeerJ Comput. Sci. 2022, 8, e836. [Google Scholar]
  105. Gupta, R.; Saxena, D.; Gupta, I.; Makkar, A.; Singh, A.K. Quantum machine learning driven malicious user prediction for cloud network communications. IEEE Netw. Lett. 2022, 4, 174–178. [Google Scholar]
  106. Melnikov, A.; Kordzanganeh, M.; Alodjants, A.; Lee, R.K. Quantum machine learning: From physics to software engineering. Adv. Phys. X 2023, 8, 2165452. [Google Scholar]
  107. Biasse, J.F.; Bonnetain, X.; Kirshanova, E.; Schrottenloher, A.; Song, F. Quantum algorithms for attacking hardness assumptions in classical and post-quantum cryptography. IET Inf. Secur. 2023, 17, 171–209. [Google Scholar]
  108. Herman, D.; Googin, C.; Liu, X.; Sun, Y.; Galda, A.; Safro, I.; Pistoia, M.; Alexeev, Y. Quantum computing for finance. Nat. Rev. Phys. 2023, 5, 450–465. [Google Scholar]
  109. Cordier, B.A.; Sawaya, N.P.; Guerreschi, G.G.; McWeeney, S.K. Biology and medicine in the landscape of quantum advantages. J. R. Soc. Interface 2022, 19, 20220541. [Google Scholar]
  110. Huang, D.; Wang, M.; Wang, J.; Yan, J. A survey of quantum computing hybrid applications with brain-computer interface. Cogn. Robot. 2022, 2, 164–176. [Google Scholar]
  111. Ullah, M.H.; Eskandarpour, R.; Zheng, H.; Khodaei, A. Quantum computing for smart grid applications. IET Gener. Transm. Distrib. 2022, 16, 4239–4257. [Google Scholar]
  112. Lloyd, S.; Mohseni, M.; Rebentrost, P. Quantum principal component analysis. Nat. Phys. 2014, 10, 631–633. [Google Scholar]
  113. Fritsch, K.; Scherzinger, S. Solving Hard Variants of Database Schema Matching on Quantum Computers. Proc. VLDB Endow. 2023, 16, 3990–3993. [Google Scholar]
  114. Groppe, S.; Groppe, J.; Çalıkyılmaz, U.; Winker, T.; Gruenwald, L. Quantum data management and quantum machine learning for data management: State-of-the-art and open challenges. In Proceedings of the International Conference on Intelligent Systems and Machine Learning, Guangzhou, China, 5–7 August 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 252–261. [Google Scholar]
  115. Figgatt, C.; Maslov, D.; Landsman, K.A.; Linke, N.M.; Debnath, S.; Monroe, C. Complete 3-qubit Grover search on a programmable quantum computer. Nat. Commun. 2017, 8, 1918. [Google Scholar]
  116. Zajac, M.; Störl, U. Towards quantum-based Search for industrial Data-driven Services. In Proceedings of the 2022 IEEE International Conference on Quantum Software (QSW), Barcelona, Spain, 10–16 July 2022; pp. 38–40. [Google Scholar]
  117. Trummer, I.; Koch, C. Multiple Query Optimization on the D-Wave 2X Adiabatic Quantum Computer. arXiv 2015, arXiv:1510.06437. [Google Scholar]
  118. Schönberger, M. Applicability of quantum computing on database query optimization. In Proceedings of the 2022 International Conference on Management of Data, Philadelphia, PA, USA, 12–17 June 2022; pp. 2512–2514. [Google Scholar]
  119. Fankhauser, T.; Solèr, M.E.; Füchslin, R.M.; Stockinger, K. Multiple Query Optimization using a Gate-Based Quantum Computer. IEEE Access 2023, 11, 114043. [Google Scholar]
  120. Çalıkyılmaz, U.; Groppe, S.; Groppe, J.; Winker, T.; Prestel, S.; Shagieva, F.; Arya, D.; Preis, F.; Gruenwald, L. Opportunities for quantum acceleration of databases: Optimization of queries and transaction schedules. Proc. VLDB Endow. 2023, 16, 2344–2353. [Google Scholar]
  121. Albert Einstein Quote. 2024. Available online: https://www.azquotes.com/quote/905255 (accessed on 22 August 2024).
  122. Wootters, W.K.; Zurek, W.H. A single quantum cannot be cloned. Nature 1982, 299, 802–803. [Google Scholar]
  123. Aaronson, S.; Gottesman, D. Improved simulation of stabilizer circuits. Phys. Rev. A 2004, 70, 052328. [Google Scholar]
  124. Kitaev, A.Y.; Shen, A.; Vyalyi, M.N. Classical and Quantum Computation; Number 47; American Mathematical Society: Providence, RI, USA, 2002. [Google Scholar]
  125. Shor, P.W. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev. 1999, 41, 303–332. [Google Scholar]
  126. IonQ Glossary. 2024. Available online: https://ionq.com/resources/glossary (accessed on 22 August 2024).
  127. IonQ|Trapped Ion Quantum Computing. 2024. Available online: https://ionq.com/ (accessed on 22 August 2024).
  128. QuEra. 2024. Available online: https://www.quera.com/ (accessed on 22 August 2024).
  129. Rigetti Computing. 2024. Available online: https://www.rigetti.com/ (accessed on 22 August 2024).
  130. Oxford Quantum Circuits. 2024. Available online: https://oxfordquantumcircuits.com/ (accessed on 22 August 2024).
  131. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms; MIT Press: Cambridge, MA, USA, 2022. [Google Scholar]
  132. Kaye, P.; Laflamme, R.; Mosca, M. An Introduction to Quantum Computing; OUP Oxford: Oxford, UK, 2006. [Google Scholar]
  133. Kaiser, S.C.; Granade, C. Learn Quantum Computing with Python and Q#: A Hands-on Approach; Simon and Schuster: New York, NY, USA, 2021. [Google Scholar]
  134. Vos, J. Quantum Computing in Action; Simon and Schuster: New York, NY, USA, 2022. [Google Scholar]
  135. Silva, V. Practical Quantum Computing for Developers: Programming Quantum Rigs in the Cloud Using Python, Quantum Assembly Language and IBM QExperience; Apress: New York, NY, USA, 2018. [Google Scholar]
  136. Combarro, E.F.; Gonzalez-Castillo, S.; Di Meglio, A. A Practical Guide to Quantum Machine Learning and Quantum Optimization: Hands-on Approach to Modern Quantum Algorithms; Packt: Birmingham, UK, 2023. [Google Scholar]
  137. Q# Programming Language. 2024. Available online: https://github.com/microsoft/qsharp-language (accessed on 22 August 2024).
  138. OpenQASM. 2024. Available online: https://openqasm.com/ (accessed on 22 August 2024).
  139. Quantum Computing API for Java. 2024. Available online: https://github.com/redfx-quantum/strange (accessed on 22 August 2024).
  140. QuTiP—Quantum Toolbox in Python. 2024. Available online: https://qutip.org/ (accessed on 22 August 2024).
  141. Giovannetti, V.; Lloyd, S.; Maccone, L. Quantum random access memory. Phys. Rev. Lett. 2008, 100, 160501. [Google Scholar]
  142. Zidan, M.; Abdel-Aty, A.H.; Khalil, A.; Abdel-Aty, M.; Eleuch, H. A novel efficient quantum random access memory. IEEE Access 2021, 9, 151775–151780. [Google Scholar]
  143. Park, D.K.; Petruccione, F.; Rhee, J.K.K. Circuit-based quantum random access memory for classical data. Sci. Rep. 2019, 9, 3949. [Google Scholar]
  144. Phalak, K.; Chatterjee, A.; Ghosh, S. Quantum random access memory for dummies. Sensors 2023, 23, 7462. [Google Scholar] [CrossRef]
  145. Liu, C.; Wang, M.; Stein, S.A.; Ding, Y.; Li, A. Quantum Memory: A Missing Piece in Quantum Computing Units. arXiv 2023, arXiv:2309.14432. [Google Scholar]
  146. Jaques, S.; Rattew, A.G. QRAM: A survey and critique. arXiv 2023, arXiv:2305.10310. [Google Scholar]
  147. Xu, S.; Hann, C.T.; Foxman, B.; Girvin, S.M.; Ding, Y. Systems architecture for quantum random access memory. In Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, Toronto, ON, Canada, 28 October–1 November 2023; pp. 526–538. [Google Scholar]
  148. Hoefler, T.; Häner, T.; Troyer, M. Disentangling hype from practicality: On realistically achieving quantum advantage. Commun. ACM 2023, 66, 82–87. [Google Scholar]
  149. Customer Success Stories D-Wave. 2024. Available online: https://www.dwavesys.com/learn/customer-success-stories/ (accessed on 22 August 2024).
  150. Wang, C.; Li, X.; Xu, H.; Li, Z.; Wang, J.; Yang, Z.; Mi, Z.; Liang, X.; Su, T.; Yang, C.; et al. Towards practical quantum computers: Transmon qubit with a lifetime approaching 0.5 milliseconds. npj Quantum Inf. 2022, 8, 3. [Google Scholar]
  151. Pal, S.; Bhattacharya, M.; Lee, S.S.; Chakraborty, C. Quantum computing in the next-generation computational biology landscape: From protein folding to molecular dynamics. Mol. Biotechnol. 2024, 66, 163–178. [Google Scholar]
  152. Bharti, K.; Cervera-Lierta, A.; Kyaw, T.H.; Haug, T.; Alperin-Lea, S.; Anand, A.; Degroote, M.; Heimonen, H.; Kottmann, J.S.; Menke, T.; et al. Noisy intermediate-scale quantum algorithms. Rev. Mod. Phys. 2022, 94, 015004. [Google Scholar]
  153. Kazmina, A.S.; Zalivako, I.V.; Borisenko, A.S.; Nemkov, N.A.; Nikolaeva, A.S.; Simakov, I.A.; Kuznetsova, A.V.; Egorova, E.Y.; Galstyan, K.P.; Semenin, N.V.; et al. Demonstration of a parity-time-symmetry-breaking phase transition using superconducting and trapped-ion qutrits. Phys. Rev. A 2024, 109, 032619. [Google Scholar]
  154. IBM Quantum Composer. 2024. Available online: https://quantum.ibm.com/composer/files/new (accessed on 22 August 2024).
  155. Barthe, A.; Grossi, M.; Tura, J.; Dunjko, V. Bloch Sphere Binary Trees: A method for the visualization of sets of multi-qubit systems pure states. arXiv 2023, arXiv:2302.02957. [Google Scholar]
  156. Koczor, B.; Zeier, R.; Glaser, S.J. Fast computation of spherical phase-space functions of quantum many-body states. Phys. Rev. A 2020, 102, 062421. [Google Scholar]
  157. IBM Quantum Computer with over 1000 Qubits. 2023. Available online: https://www.nature.com/articles/d41586-023-03854-1 (accessed on 22 August 2024).
  158. IBM Q-Sphere. 2024. Available online: https://quantum-computing.ibm.com/composer/docs/iqx/visualizations#q-sphere-view (accessed on 22 August 2024).
  159. Migdał, P. Symmetries and self-similarity of many-body wavefunctions. arXiv 2014, arXiv:1412.6796. [Google Scholar]
  160. IBM Qiskit Circuit Visualization. 2024. Available online: https://docs.quantum.ibm.com/build/circuit-visualization (accessed on 22 August 2024).
  161. Quirk. 2024. Available online: https://algassert.com/quirk (accessed on 22 August 2024).
  162. Chang, C.; Moon, B.; Acharya, A.; Shock, C.; Sussman, A.; Saltz, J. Titan: A high-performance remote-sensing database. In Proceedings of the 13th International Conference on Data Engineering, Birmingham, UK, 7–11 April 1997; pp. 375–384. [Google Scholar]
  163. DeWitt, D.J.; Kabra, N.; Luo, J.; Patel, J.M.; Yu, J.B. Client-Server Paradise. In Proceedings of the VLDB, Santiago de Chile, Chile, 12–15 September 1994; pp. 558–569. [Google Scholar]
  164. van Ballegooij, A. RAM: A Multidimensional Array DBMS. In Proceedings of the EDBT, Heraklion, Greece, 14–18 March 2004; Volume 3268, pp. 154–165. [Google Scholar]
  165. Libkin, L.; Machlin, R.; Wong, L. A query language for multidimensional arrays: Design, implementation, and optimization techniques. In Proceedings of the ACM SIGMOD Record, Montreal, QC, Canada, 4–6 June 1996; Volume 25, pp. 228–239. [Google Scholar]
  166. Oseledets, I. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar]
  167. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  168. Apache Arrow: A Cross-Language Development Platform for In-Memory Analytics. 2024. Available online: https://arrow.apache.org/ (accessed on 22 August 2024).
  169. Yuan, G.; Chen, Y.; Lu, J.; Wu, S.; Ye, Z.; Qian, L.; Chen, G. Quantum Computing for Databases: Overview and Challenges. arXiv 2024, arXiv:2405.12511. [Google Scholar]
  170. Liu, Y.; Long, G.L. Deleting a marked item from an unsorted database with a single query. arXiv 2007, arXiv:0710.3301. [Google Scholar]
  171. Brassard, G.; Hoyer, P.; Mosca, M.; Tapp, A. Quantum amplitude amplification and estimation. Contemp. Math. 2002, 305, 53–74. [Google Scholar]
  172. Li, H.S.; Fan, P.; Xia, H.; Peng, H.; Long, G.L. Efficient quantum arithmetic operation circuits for quantum image processing. Sci. China Phys. Mech. Astron. 2020, 63, 1–13. [Google Scholar]
  173. Cuccaro, S.A.; Draper, T.G.; Kutin, S.A.; Moulton, D.P. A new quantum ripple-carry addition circuit. arXiv 2004, arXiv:quant-ph/0410184. [Google Scholar]
  174. Zhang, Y. Four arithmetic operations on the quantum computer. J. Phys. Conf. Ser. 2020, 1575, 012037. [Google Scholar]
  175. Chou, J.; Wu, K.; Prabhat. FastQuery: A general indexing and querying system for scientific data. In Proceedings of the International Conference on Scientific and Statistical Database Management, Portland, OR, USA, 20–22 July 2011; pp. 573–574. [Google Scholar]
  176. ASCII Math. 2024. Available online: https://asciimath.org/ (accessed on 22 August 2024).
  177. CortexJS. 2024. Available online: https://cortexjs.io/mathlive/guides/static (accessed on 22 August 2024).
  178. MathJax. 2024. Available online: https://www.mathjax.org/ (accessed on 22 August 2024).
  179. Unicode Arrows. 2024. Available online: https://unicode.org/charts/nameslist/n_2190.html (accessed on 22 August 2024).
  180. Niels Bohr Quote. 2024. Available online: https://www.azquotes.com/quote/30759?ref=quantum-mechanics (accessed on 22 August 2024).
  181. Richard Feynman Quotes. 2024. Available online: https://en.wikiquote.org/wiki/Talk:Richard_Feynman (accessed on 22 August 2024).
  182. Preskill, J. Quantum computing 40 years later. In Feynman Lectures on Computation; CRC Press: Boca Raton, FL, USA, 2023; pp. 193–244. [Google Scholar]
  183. Céleri, L.C.; Huerga, D.; Albarrán-Arriagada, F.; Solano, E.; de Andoin, M.G.; Sanz, M. Digital-analog quantum simulation of fermionic models. Phys. Rev. Appl. 2023, 19, 064086. [Google Scholar]
  184. Bringewatt, J.; Davoudi, Z. Parallelization techniques for quantum simulation of fermionic systems. Quantum 2023, 7, 975. [Google Scholar]
  185. Haah, J.; Fidkowski, L.; Hastings, M.B. Nontrivial quantum cellular automata in higher dimensions. Commun. Math. Phys. 2023, 398, 469–540. [Google Scholar]
  186. Gillman, E.; Carollo, F.; Lesanovsky, I. Using (1+1)D quantum cellular automata for exploring collective effects in large-scale quantum neural networks. Phys. Rev. E 2023, 107, L022102. [Google Scholar]
  187. Kent, B.; Racz, S.; Shashi, S. Scrambling in quantum cellular automata. Phys. Rev. B 2023, 107, 144306. [Google Scholar]
  188. Seyedi, S.; Pourghebleh, B. A new design for 4-bit RCA using quantum cellular automata technology. Opt. Quantum Electron. 2023, 55, 11. [Google Scholar]
  189. Mohamed, N.A.E.S.; El-Sayed, H.; Youssif, A. Mixed Multi-Chaos Quantum Image Encryption Scheme Based on Quantum Cellular Automata (QCA). Fractal Fract. 2023, 7, 734. [Google Scholar] [CrossRef]
  190. Jiang, W.; Wang, F.; Fang, L.; Zheng, X.; Qiao, X.; Li, Z.; Meng, Q. Modelling of wildland-urban interface fire spread with the heterogeneous cellular automata model. Environ. Model. Softw. 2021, 135, 104895. [Google Scholar]
  191. Phalak, K.; Li, J.; Ghosh, S. Approximate quantum random access memory architectures. arXiv 2022, arXiv:2210.14804. [Google Scholar]
  192. de Paula Neto, F.M.; da Silva, A.J.; de Oliveira, W.R.; Ludermir, T.B. Quantum probabilistic associative memory architecture. Neurocomputing 2019, 351, 101–110. [Google Scholar]
  193. Sousa, R.S.; dos Santos, P.G.; Veras, T.M.; de Oliveira, W.R.; da Silva, A.J. Parametric probabilistic quantum memory. Neurocomputing 2020, 416, 360–369. [Google Scholar]
  194. Ezhov, A.; Nifanova, A.; Ventura, D. Quantum associative memory with distributed queries. Inf. Sci. 2000, 128, 271–293. [Google Scholar]
  195. Ventura, D.; Martinez, T. Quantum associative memory. Inf. Sci. 2000, 124, 273–296. [Google Scholar]
  196. Ventura, D.; Martinez, T. Quantum associative memory with exponential capacity. In Proceedings of the 1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat. No. 98CH36227), Anchorage, AK, USA, 4–9 May 1998; Volume 1, pp. 509–513. [Google Scholar]
  197. Reilly, D. Challenges in scaling-up the control interface of a quantum computer. In Proceedings of the 2019 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 7–11 December 2019; pp. 31–37. [Google Scholar]
  198. De Vos, A. Reversible Computing: Fundamentals, Quantum Computing, and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
  199. Niemann, P.; de Almeida, A.A.; Dueck, G.; Drechsler, R. Template-based mapping of reversible circuits to IBM quantum computers. Microprocess. Microsyst. 2022, 90, 104487. [Google Scholar]
  200. Fösel, T.; Niu, M.Y.; Marquardt, F.; Li, L. Quantum circuit optimization with deep reinforcement learning. arXiv 2021, arXiv:2103.07585. [Google Scholar]
  201. Bae, J.H.; Alsing, P.M.; Ahn, D.; Miller, W.A. Quantum circuit optimization using quantum Karnaugh map. Sci. Rep. 2020, 10, 15651. [Google Scholar]
  202. Nam, Y.; Ross, N.J.; Su, Y.; Childs, A.M.; Maslov, D. Automated optimization of large quantum circuits with continuous parameters. npj Quantum Inf. 2018, 4, 23. [Google Scholar]
  203. Stonebraker, M.; Frew, J.; Gardels, K.; Meredith, J. The Sequoia 2000 storage benchmark. SIGMOD Rec. 1993, 22, 2–11. [Google Scholar]
  204. Patel, J.; Yu, J.; Kabra, N.; Tufte, K.; Nag, B.; Burger, J.; Hall, N.; Ramasamy, K.; Lueder, R.; Ellmann, C.; et al. Building a scalable geo-spatial DBMS: Technology, implementation, and evaluation. In Proceedings of the 1997 ACM SIGMOD International Conference on Management of Data, Tucson, AZ, USA, 13–15 May 1997; Volume 26, pp. 336–347. [Google Scholar]
  205. Merticariu, G.; Misev, D.; Baumann, P. Towards a general array database benchmark: Measuring storage access. In Big Data Benchmarking; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 40–67. [Google Scholar]
  206. Sajid, M.J.; Khan, S.A.R.; Yu, Z. Implications of Industry 5.0 on Environmental Sustainability; IGI Global: Hershey, PA, USA, 2022. [Google Scholar]
  207. Graph Formats. 2024. Available online: https://gephi.org/users/supported-graph-formats/ (accessed on 22 August 2024).
  208. Georgescu, I.M.; Ashhab, S.; Nori, F. Quantum simulation. Rev. Mod. Phys. 2014, 86, 153. [Google Scholar]
  209. Daley, A.J.; Bloch, I.; Kokail, C.; Flannigan, S.; Pearson, N.; Troyer, M.; Zoller, P. Practical quantum advantage in quantum simulation. Nature 2022, 607, 667–676. [Google Scholar]
  210. Buluta, I.; Nori, F. Quantum simulators. Science 2009, 326, 108–111. [Google Scholar]
  211. Sheng, Y.B.; Zhou, L. Distributed secure quantum machine learning. Sci. Bull. 2017, 62, 1025–1029. [Google Scholar]
  212. Peral-García, D.; Cruz-Benito, J.; García-Peñalvo, F.J. Systematic literature review: Quantum machine learning and its applications. Comput. Sci. Rev. 2024, 51, 100619. [Google Scholar]
  213. Tychola, K.A.; Kalampokas, T.; Papakostas, G.A. Quantum machine learning—An overview. Electronics 2023, 12, 2379. [Google Scholar] [CrossRef]
  214. Senokosov, A.; Sedykh, A.; Sagingalieva, A.; Kyriacou, B.; Melnikov, A. Quantum machine learning for image classification. Mach. Learn. Sci. Technol. 2024, 5, 015040. [Google Scholar]
  215. Zeguendry, A.; Jarir, Z.; Quafafou, M. Quantum machine learning: A review and case studies. Entropy 2023, 25, 287. [Google Scholar] [CrossRef]
  216. Caleffi, M.; Amoretti, M.; Ferrari, D.; Illiano, J.; Manzalini, A.; Cacciapuoti, A.S. Distributed quantum computing: A survey. Comput. Netw. 2024, 110672. [Google Scholar]
  217. Häner, T.; Steiger, D.S.; Hoefler, T.; Troyer, M. Distributed quantum computing with QMPI. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, St. Louis, MO, USA, 14–19 November 2021; pp. 1–13. [Google Scholar]
  218. Zhao, Y.; Zhong, H.; Zhang, X.; Zhang, C.; Pan, M. Bridging Quantum Computing and Differential Privacy: A Survey on Quantum Computing Privacy. arXiv 2024, arXiv:2403.09173. [Google Scholar]
  219. Hirche, C.; Rouzé, C.; França, D.S. Quantum differential privacy: An information theory perspective. IEEE Trans. Inf. Theory 2023, 69, 5771–5787. [Google Scholar]
  220. Pan, D.; Lin, Z.; Wu, J.; Zhang, H.; Sun, Z.; Ruan, D.; Yin, L.; Long, G.L. Experimental free-space quantum secure direct communication and its security analysis. Photonics Res. 2020, 8, 1522–1531. [Google Scholar]
  221. Luo, W.; Cao, L.; Shi, Y.; Wan, L.; Zhang, H.; Li, S.; Chen, G.; Li, Y.; Li, S.; Wang, Y.; et al. Recent progress in quantum photonic chips for quantum communication and internet. Light. Sci. Appl. 2023, 12, 175. [Google Scholar]
  222. Hasan, S.R.; Chowdhury, M.Z.; Saiam, M.; Jang, Y.M. Quantum communication systems: Vision, protocols, applications, and challenges. IEEE Access 2023, 11, 15855–15877. [Google Scholar]
  223. WMTS. 2024. Available online: https://www.opengeospatial.org/standards/wmts (accessed on 22 August 2024).
  224. Dong, B.; Wu, K.; Byna, S.; Liu, J.; Zhao, W.; Rusu, F. ArrayUDF: User-Defined Scientific Data Analysis on Arrays. In Proceedings of the 26th International Symposium on High-Performance Parallel and Distributed Computing, Washington, DC, USA, 26–30 June 2017. [Google Scholar]
  225. Cong, I.; Choi, S.; Lukin, M.D. Quantum convolutional neural networks. Nat. Phys. 2019, 15, 1273–1278. [Google Scholar]
  226. Oh, S.; Choi, J.; Kim, J. A tutorial on quantum convolutional neural networks (QCNN). In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 21–23 October 2020; pp. 236–239. [Google Scholar]
  227. Wei, S.; Chen, Y.; Zhou, Z.; Long, G. A quantum convolutional neural network on NISQ devices. AAPPS Bull. 2022, 32, 1–11. [Google Scholar]
  228. Hur, T.; Kim, L.; Park, D.K. Quantum convolutional neural network for classical data classification. Quantum Mach. Intell. 2022, 4, 3. [Google Scholar]
Figure 3. A quantum circuit diagram example: creating Bell state |Φ⁺⟩.
Figure 4. A QGantt chart example: reading data from QRAM.
Figure 5. A QGantt chart example: creating Bell state |Ψ⁺⟩.
Figure 9. Illustration of a classical array and its quantum strip.
Figure 10. A quantum circuit diagram for collapsing terms with NA values into |ψ₀⟩|0⟩^γ|NA⟩.
Figure 11. W₂₈[480:540, 0:141] hyperslab (30°…45° N and 0°…35° E) of the array in Figure 7.
Figure 16. Examples of typical math functions [12].
Figure 17. Illustration of the value- and index-based query.
Figure 20. Quantum Network Diagram for Equations (34) and (37).
Figure 21. Overview graphs displayed in different time-aware layouts for the Quantum Network Diagram in Figure 18.
Figure 23. Creating Bell state |Ψ⁺⟩: an ASCII art version of the QGantt chart in Figure 5.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
