Exploring the Limitations of Hybrid Adiabatic Quantum Computing for Emission Tomography Reconstruction

Our study explores the feasibility of quantum computing in emission tomography reconstruction, a noisy, ill-conditioned inverse problem. In current clinical practice, it is typically solved by iterative methods minimizing an L2 norm. After reviewing quantum computing principles, we propose the use of a commercially available quantum annealer and employ the corresponding hybrid solvers, which combine quantum and classical computing to handle larger problems. We demonstrate how to frame image reconstruction as a combinatorial optimization problem suited for these quantum annealers and hybrid systems. Using a toy problem, we analyze reconstructions of binary and integer-valued images with respect to image size and compare them to conventional methods. Additionally, we test our method's performance under noise and data underdetermination. In summary, our method demonstrates competitive performance with traditional algorithms for binary images up to a size of 32×32 on the toy problem, even under noisy and underdetermined conditions. However, scalability challenges emerge as image size and pixel bit range increase, restricting hybrid quantum computing as a practical tool for emission tomography reconstruction until significant advancements address this issue.


Introduction
Quantum computing (QC) presents a new algorithmic paradigm that has gained much attention over the past decades. Although practical applications are still minimal, QC offers enormous potential for complex computations and runtime speed-ups 1 . Quantum supremacy has been claimed in 2019 and 2021, respectively 2,3 .
One common QC model is the gate-based model 1 . However, not all quantum computers build upon this model. Different physical implementations can utilize the same quantum mechanics needed to realize QC. Other QC principles include measurement-based, adiabatic 4 , and topological QC.
We utilize adiabatic QC hardware produced by D-Wave. However, current implementations violate the adiabatic conditions, so D-Wave's quantum computer is not universal. Due to this limitation, we employ D-Wave's corresponding hybrid solvers, which combine both quantum and classical computing power 5 .
In this work, we investigate the task of QC-based tomographic image reconstruction. Tomographic imaging is the process of imaging sections hidden within an object 6 . Consequently, tomographic imaging techniques are fundamental in numerous fields, from radiology and materials science to astrophysics. In particular, we are interested in emission tomography (ET), which has a very low signal-to-noise ratio 7 . We also point out significant differences from transmission tomography (TT). For simplicity, we generally refer to the term tomography.
The primary problem in tomographic imaging is reconstructing these inner sections. The reconstruction of the underlying object is an inverse problem, which is usually ill-posed and can degrade in terms of data completeness or noise 7 . In order to reconstruct ET images at today's scale of 512 × 512, advancements in classical computing hardware were necessary. Similarly, we expect QC to boost performance in tomographic reconstruction as the hardware matures.
In the following sections, we will give an overview of QC, particularly for the D-Wave quantum computer we utilize for our applications. Further, we will provide the reader with emission and transmission tomographic imaging fundamentals. More to the point, we will describe the inverse problem in tomographic reconstruction and how state-of-the-art methods solve it today. After that, we elaborate on our quantum annealing (QA)-based reconstruction technique and present binary and integer-valued reconstruction results.

Quantum Computing
QC is a new and innovative computing paradigm. Instead of using classical electronic bits, quantum computers utilize quantum bits (qubits) to exploit the quantum mechanical principles of superposition, entanglement, and interference. Using these principles, a quantum computer with N qubits can be in a superposition of 2^N states simultaneously, compared to one state for a classical computer 8 . This advantage ultimately leads to a benefit in terms of runtime speed-up while enabling computations that are practically impossible on a classical computer. The concept of universal QC is achievable by several models. The total number of qubits is currently limited to 5640 qubits on the QA-based D-Wave Advantage2 system. For gate-based systems, the current record is set by the IBM Eagle system with 127 superconducting qubits 9 .

Adiabatic Quantum Computing
The fundamental assumption of AQC is that a physical system constantly evolves toward its lowest energy over time 2,10 , associated with the global minimum of the optimization landscape. In contrast to gate-based QC, AQC does not perform unitary operations through gates on single or multiple qubits 11 . Instead, we map the problem to the quantum computer with a problem-specific Hamiltonian 12 . The Hamiltonian describes the energy spectrum of the system and the set of viable solutions. In theory, AQC is equivalent to gate-based QC and, therefore, universal 4 .
The adiabatic theorem states that if we initialize a quantum system in the ground state of an initial Hamiltonian Ĥ0 and let it evolve with time t for a fixed duration T, we end up in the ground state of the final Hamiltonian Ĥ1, which is associated with the lowest-energy solution 13 :

H(t) = (1 − t/T) Ĥ0 + (t/T) Ĥ1

The main limitation of AQC is the ∆-gap 14 . The ∆-gap refers to the minimum spectral gap of the problem's Hamiltonian, which is the difference between the lowest and second-lowest energy levels. The required evolution time T scales inversely with the square of this gap:

T ∝ 1/∆²

Quantum Annealing
QA is the current realization of AQC. The system is initialized in a superposition state. Subsequently, the problem formulation is embedded in the hardware such that the system's ground state is the solution to the problem 15 . But how does QA overcome the ∆-gap?
In short, it does not. Physical realizations of AQC usually let the system evolve multiple times for a specified time 16 . After initialization, the system repeatedly anneals for a specified annealing time t_a. This way, a sample set is formed containing the samples, the associated energy levels, and the number of times each solution occurred 17 .
QA builds upon the Hamiltonian of the Ising model:

H(s) = Σ_i h_i s_i + Σ_{i<j} J_{i,j} s_i s_j

The variables s_i of the Ising model are either spin-up or spin-down, s_i ∈ {−1, +1}, which resemble the eigenvalues of the Pauli matrices 1 . Two variables s_i and s_j can have a quadratic interaction J_{i,j}, known as the coupling strength. Further, each variable can have a linear bias h_i. Every Ising model is translatable to a Quadratic Unconstrained Binary Optimization (QUBO) problem, with variables x_i being binary, x_i ∈ {0, 1}, and vice versa. QUBO problems can be NP-hard and are hard to solve using classical computers 18 .
We can describe a QUBO using an N × N upper-triangular matrix Q with the bias terms on its diagonal and the quadratic interactions as the upper-triangular values. The goal is to minimize the QUBO's objective. In matrix notation, this results in the following:

min_{x ∈ {0,1}^N} xᵀ Q x

The connectivity of the qubits on the annealer's topology limits the interaction between qubits, see Fig. 2. D-Wave has proposed different topologies over the last years, such as the Chimera 19,20 or Pegasus graph. By using D-Wave's Ocean interface 21 , we can embed and run the problem on the quantum annealer using the Leap cloud service 22 .
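To make the QUBO formulation concrete, the following sketch builds a small upper-triangular Q (the coefficients are illustrative, not values from this paper) and brute-forces its minimum, which is what the annealer approximates by sampling low-energy states:

```python
import numpy as np
from itertools import product

# Hypothetical 2-variable QUBO with illustrative coefficients:
# diagonal entries are the linear biases, off-diagonals the couplings.
Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.0]])

def qubo_energy(Q, x):
    """QUBO objective x^T Q x for a binary assignment x."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Exhaustive search over all binary assignments -- feasible only for
# tiny problems; a quantum annealer samples low-energy states instead.
best = min(product([0, 1], repeat=Q.shape[0]), key=lambda x: qubo_energy(Q, x))
```

Here the positive coupling of +2 penalizes activating both variables, so the minimum energy of −1 is attained by setting exactly one variable to 1.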

Tomographic Imaging
Tomographic reconstruction is a multidimensional inverse problem: estimating an object's interior from non-invasive measurements only. The tomographic imaging process is similar to any linear digital imaging system, which maps a continuous domain to a discrete domain 23 . However, in practice, the image formation process is defined in a discrete-to-discrete forward model:

y = M x

Here y represents our measurement, M is the system matrix describing the action of the linear imaging system, and x represents the imaged object. In the realistic case of an imperfect imaging system and setting, a common representation includes additive noise 23 :

y = M x + n

Note that not every noise is additive, as the noise can also be signal-dependent on x. To evaluate the stability of a reconstruction, one can compare reconstructions from two measurements, y1 and y2, which differ in terms of their noise realizations. If the noise realizations are similar, stability requires that the reconstructed images x̂1 and x̂2 are close. With a constant α accepting a certain tolerance, stability is defined as:

‖x̂1 − x̂2‖ ≤ α

This paper focuses on radiation-based tomography, where the foundation is the Radon transform 24,25 . However, the forward and inverse problem is similar to, e.g., magnetic resonance imaging (MRI). Our method is applicable to any imaging modality with a matrix-based forward model. In radiation tomography, one distinguishes between ET and TT.

Emission tomography. In ET, the object of interest emits radiation from the inside. In clinical imaging, one typically injects a patient with a radioactive tracer. The tracer carries a radioactive isotope to a target location, where it emits radiation. Instead of visualizing the anatomy, ET reveals the metabolic and biochemical function of the underlying tissue 7 .
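As a minimal concrete instance of the discrete-to-discrete forward model y = Mx, the sketch below uses an assumed toy projector with two orthogonal parallel-beam views (not the paper's Radon-based matrix) that measures the row and column sums of a 2 × 2 image:

```python
import numpy as np

# Toy system matrix for a flattened 2x2 image and two parallel-beam
# views (0 deg: row sums, 90 deg: column sums). Purely illustrative.
M = np.array([[1, 1, 0, 0],   # 0 deg, detector bin 0: top row
              [0, 0, 1, 1],   # 0 deg, detector bin 1: bottom row
              [1, 0, 1, 0],   # 90 deg, detector bin 0: left column
              [0, 1, 0, 1]],  # 90 deg, detector bin 1: right column
             dtype=float)

x = np.array([1, 0, 0, 1], dtype=float)  # binary object: diagonal pixels on
y = M @ x                                 # noise-free measurement (sinogram)
```

Each entry of y accumulates the emissions along one line of response; in the noisy model, a signal-dependent perturbation n would then be added to y.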
Transmission tomography. In transmission tomography, or Computed Tomography (CT), the projection views are captured by placing an X-ray source on one side of the object and an X-ray detector on the other side 26 . The X-rays emitted from the source are attenuated by the matter and captured by the detector.

Discrete tomography. Discrete tomography is a specialized case of tomography, where the image x consists of binary pixels x_i ∈ {0, 1} 27 . This case applies, e.g., to homogeneous scanning materials in non-destructive testing 27 . Because the complexity of the optimization is constrained to binary variables, the task's difficulty lies in reconstructing with as few views as possible.
Tomographic Reconstruction. In this section, we outline different algorithms to invert the forward problem and find the original image x. The system matrix M is defined by the imaging system and can model physical effects or prior knowledge about imperfections. Each matrix entry describes the conditional probability that detector element y_j detects an emission from pixel x_i. Thus, in image reconstruction, we are interested in inverting M to obtain the original image:

x = M⁻¹ y

In practice, the system matrix M is sparse, exceptionally large, singular, ill-posed, and non-square. Moreover, the projection images suffer from various sources of noise, most prominently Poisson noise. One conventional method to reconstruct tomographic images is Filtered BackProjection (FBP), an analytical and linear approach 28 . Other approaches are iterative reconstruction algorithms 29 . In contrast to FBP, these algorithms are non-linear and seek to minimize the projection difference by repetitively applying back projections, updates, and forward projections 30 . Three examples of iterative reconstruction techniques are Maximum Likelihood Expectation Maximization (ML-EM) 30,31 , Conjugate Gradient (CG) 32 , and the Simultaneous Algebraic Reconstruction Technique (SART) 33 . SART is an algebraic iterative reconstruction algorithm that performs additive and iterative updates from single projections. As the image size for the QA-based reconstruction is inherently limited by the size of the annealer, we also consider the Moore-Penrose general pseudoinverse (PI) as a reconstruction technique, defined by 34 :

M⁺ = lim_{δ→0} (MᵀM + δI)⁻¹ Mᵀ

For this work, we have chosen to compare our method to FBP, SART, and the Moore-Penrose-based PI 34 .
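The PI-based reconstruction can be sketched on the same kind of toy system (the 2 × 2 two-view projector below is an assumption for illustration). Note that for a rank-deficient M, the PI returns the minimum-norm least-squares solution, which need not be binary:

```python
import numpy as np

# Toy two-view system matrix (row sums and column sums of a 2x2 image).
M = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
x_true = np.array([1.0, 0.0, 0.0, 1.0])
y = M @ x_true

M_pinv = np.linalg.pinv(M)   # Moore-Penrose pseudoinverse computed via SVD
x_rec = M_pinv @ y           # minimum-norm least-squares solution

# M is rank-deficient (row sums and column sums share the same total),
# so the PI averages the two binary images consistent with y.
```

Here x_rec is uniformly 0.5: it fits the measurement exactly but loses the binary structure, which is precisely what a binary constraint in the combinatorial formulation can restore.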

Related Work
Over the past years, QC research has accelerated rapidly, as practical examples have come within reach. Specifically, image processing and machine learning on quantum computers have evolved to be an active area of research 35,36 .
In the following, we highlight work on quantum algorithms for solving linear systems and on tomographic image reconstruction.
The reconstruction problem of tomographic data is a system of linear equations. Therefore, we first review existing approaches to solving them on quantum computers. The initial proposal to solve linear equations on a gate-based quantum computer was made by Harrow et al. in 2009 when they introduced the HHL algorithm 37 . The HHL algorithm is the basis for solving linear equations on gate-based quantum computers and builds on quantum phase estimation. Another approach to solving (combinatorial) optimization problems on gate-based quantum computers is the Quantum Approximate Optimization Algorithm (QAOA) 38 .
Solving linear systems of equations is also possible with QA-based systems. The first example presented was non-negative binary matrix factorization 39 . Chang et al. demonstrated in 2019 that it is possible to solve polynomial equations with QA 40 . The approach was refined to linear systems with floating-point values and floating-point division by Rogers and Singleton in 2020 41 . Their paper utilizes the D-Wave 2000Q system to its full extent and shows results for inversions of 3 × 3 matrices. However, the method fails for matrices with high condition numbers. The first practical application of QA for linear systems was presented by Souza et al., who solved a seismic inversion problem in a least-squares manner 42 . There have been initial steps to tackle the problem of medical image reconstruction with QC. The first to propose tomographic image reconstruction for CT, PET, and MRI were Kiani et al. 43 . They proposed substituting the classical Fourier transform with a quantum-based Fourier transform to achieve a run-time decrease 43 . Moreover, Schielein et al. presented a road map towards QC-assisted CT, describing data loading, storing, image processing, and image reconstruction problems. For a long time, QA hardware was not mature enough for realistic problems. Schielein et al. were the first to propose solving tomographic reconstruction with QA or the QAOA 38 . Following up, Jun proposed an implementation using QA for image reconstruction in CT based on sinogram-based optimization 44 .
In this work, we present the following contributions:
• Comparison of binary and integer-based tomographic reconstruction run on actual QC hardware to classical reconstruction algorithms
• Analysis of the capabilities and limitations of QA regarding image size, noise, and underdetermination of the system
• A framework for the creation of tomographic toy problems to accelerate quantum image reconstruction research

Results
In this section, we present reconstruction results of our algorithm utilizing D-Wave's quantum computers and compare them to classical methods. Due to the size restrictions of the actual quantum annealer, images are reconstructed using the hybrid solvers to enable the representation of larger problems. The size restrictions of the quantum annealer are provided in the Methods section. The time limit T for the hybrid optimization is fixed at 5 s, of which the annealing time is 0.015 s; the remaining overhead is associated with classical computations. Every hybrid reconstruction is compared to three different classical methods: FBP, SART, and PI. We refer to our hybrid reconstruction method as QA. To compare the methods for binary tomography, we discretize the reconstruction results of the classical algorithms. Moreover, we compare the ground truth image (GT) to the reconstructions and visualize the sinogram (SG) corresponding to the tomographic problem. We utilize SymPy for our problem formulation and solve the reconstruction problem as a classical forward and inverse problem Mx = y. Therefore, it applies to any linear imaging or display system. The system matrix for the reconstruction problem is calculated using the Radon transform. In reality, for larger problems, the system matrix becomes infeasible to store. Nevertheless, we want to test the general performance of the hybrid solver on inverse problems regarding size, noise, and underdetermination of the linear equations.

Experimental Setup
We have constructed a tomographic toy problem framework to test the initial stages of image reconstruction on quantum computers. We set up example problems for our QA-based reconstruction by generating tomographic problems as linear systems. We utilize scikit-image 45 to perform Radon transforms of our ground truth images and create our system matrices. Here, the integration over the object rotated by an angle α defines one projection view. The projection angles are distributed equally between 0° and 180°. Our model is inspired by ET. Therefore, we do not model the attenuation of a source light ray through the object, as one would in transmission imaging. Instead, we model the emission process of photons within the object; we neglect the attenuation of the photons for now. The projection views at 0° are taken from the top of the image. Subsequently, the angles are distributed in a clockwise direction. We utilize scikit-image's iradon and sart functions to perform the FBP and SART reconstruction algorithms. We utilize FBP with a ramp filter. For SART, we perform two iterations of the algorithm. The PI-based reconstruction uses NumPy's pinv function to estimate the Moore-Penrose PI.
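The system matrix can be assembled generically by pushing unit images through any linear forward projector, one basis vector per column. The sketch below uses an assumed two-view toy projector in place of scikit-image's Radon transform, but the same helper works with any linear forward model:

```python
import numpy as np

def build_system_matrix(forward, n):
    """Assemble M column-by-column: column i is the projection of the
    i-th unit image. Works for any linear forward projector."""
    cols = []
    for i in range(n * n):
        e = np.zeros(n * n)
        e[i] = 1.0
        cols.append(forward(e.reshape(n, n)).ravel())
    return np.stack(cols, axis=1)

def two_view_projector(img):
    """Toy stand-in for the Radon transform: 0 deg (row sums) and
    90 deg (column sums) parallel-beam views."""
    return np.concatenate([img.sum(axis=1), img.sum(axis=0)])

M = build_system_matrix(two_view_projector, n=2)
```

By construction, M then satisfies M @ x.ravel() == forward(x) for every image x, so the linear-system view and the projector view are interchangeable.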

Additive Noise
To test problems concerning noise in the data, we establish a simple noise model that alters the projection data. Hereby, we want to imitate the statistics of a low-count ET measurement. We apply the noise in an additive manner to the ground truth image for each projection view to create independent noise realizations. With this noise model, we want to resemble the signal dependence of Poisson noise, the most prominent noise factor in projection images with very low counts.
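The paper's exact noise equation is not reproduced here; as a hedged stand-in with the same intent (signal-dependent, Poisson-like low-count statistics), one can draw Poisson counts around the scaled signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_like_noise(y, counts_per_unit=4.0):
    """Signal-dependent noise imitating a low-count ET measurement.
    Assumed stand-in model (not the paper's equation): sample Poisson
    counts around the scaled signal and rescale. The variance grows
    with the signal, and zero-signal bins stay exactly zero."""
    return rng.poisson(y * counts_per_unit) / counts_per_unit

sinogram = np.array([2.0, 0.0, 1.0, 3.0])
noisy = poisson_like_noise(sinogram)
```

The `counts_per_unit` parameter is an assumed knob controlling how "low-count" the measurement is: fewer counts per unit of signal means relatively stronger noise.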

Image Size Evaluation
We test the reconstruction capabilities of the hybrid solver with respect to the image size N × N of x. Algorithm 2 describes the problem formulation for the hybrid solver. We apply our reconstruction technique to four different binary images, 'foam', 'tree', 'snowflake', and 'molecule', at four different image sizes N ∈ {4, 8, 16, 32}. We chose the images to achieve a variance in frequency and image content. In order to downsample the images, we take local means of image blocks. We take N projection views with N measurements for each view. Thus, we have a fully determined system for image size N × N. We show the reconstructed images and their corresponding ground truth, sinogram, and comparable classical results for the binary images 'foam' and 'tree' in Fig. 3. The examples for 'snowflake' and 'molecule' are in the appendix. Moreover, we compare the reconstruction algorithms on the four binary images measured by root mean square error (RMSE) and structural similarity index (SSIM) in Fig. 4. We further employ the hybrid solver using integer-valued variables in a 4-bit range, representing the numbers from 0 to 15. With this, we move towards a more realistic use case. We simulate the well-known Shepp-Logan phantom at the 4-bit range and compare the hybrid integer reconstructions with conventional reconstruction methods in Fig. 5. We perform the experiments at four different image sizes N ∈ {4, 8, 16, 32}. In order to downsample the image, we take local means of image blocks. We take N × N projected sinogram measurements for image size N × N to achieve a fully determined system. Further, we plot a comparison regarding RMSE and SSIM over the reconstructed image size in Fig. 6.

Noise Evaluation
One typical problem in image reconstruction is noise in the measured data. Especially in low-count tomography, one suffers from high photon noise. We alter the image data with our noise model to imitate the high noise level in low-count ET. We test the hybrid-based reconstruction's robustness with a simple noise alteration of the ground truth image during the acquisition. We utilize the UCI Machine Learning Repository Digits dataset 46 for small-scale images with a low bit range. The dataset consists of 5620 digits of image size 8 × 8 with a value range of [0, 16]. We randomly chose 32 digits and reconstructed them with and without noise. The additive noise is described in equation 12. Visual results of the reconstructed images without and with noise are shown in Fig. 7. Again, we show the reconstructed images and their corresponding ground truth, sinogram, and comparable classical results. We take N × N projected sinogram measurements for image size N × N to achieve a fully determined system. The remaining reconstructed digits can be found in the appendix. Furthermore, we present a quantitative evaluation of the RMSE and SSIM for both noise-free and noisy data for each digit image in Figs. 8 and 9.

Undetermined Evaluation
The reconstruction of binary images in a fully determined setting is easy for any reconstruction algorithm, as the number of combinatorial options is minimal compared to integer- or floating-point-based reconstruction. The challenge in binary tomography primarily lies in reconstructing the objects with as few views as possible. In the past, methods have been presented to reconstruct an object from two views only 27 . These methods usually enforce a lot of regularization and prior knowledge of the object. Therefore, we present reconstruction results of binary images of size 32 × 32 with only 4 and 2 projection views acquired. The reconstructed images, their comparison algorithm results, and the corresponding ground truth and sinogram are displayed for the binary images 'foam' and 'tree' in Fig. 10. The other examples are provided in the appendix. Moreover, we compare the reconstruction algorithms on the four binary images measured by RMSE and SSIM in Fig. 11.

Discussion
Compared to previous quantum annealing-based matrix inversion, we see an improvement for linear systems with high condition numbers. Rogers and Singleton's method of solving linear systems using a quantum annealer is restricted to small systems, 3 × 3, with very low condition numbers 41 . With the use of hybrid solvers, we can overcome this issue. Our system matrices M are singular, with a condition number approaching infinity. When fully determined, the results of binary tomographic reconstruction are competitive with conventional methods. We postulate that a binary image size of 32 × 32 is no problem for the hybrid solver. With integer-based tomographic reconstructions, we see problems when approaching image sizes larger than 8 × 8. If we increase the solver time T, the reconstruction error of larger problems can be improved. Nevertheless, with increased time T, the runtime of the hybrid solver cannot compete with the runtime of classical methods. For this reason, we performed integer-valued tomographic reconstruction for the digits dataset with image size 8 × 8. Here we can see that the hybrid-based reconstruction yields similar results to the conventional algorithms. However, a PI-based solution is still able to outperform the hybrid reconstruction. An interesting finding emerges when comparing the RMSE and SSIM of the noisy simulations: the hybrid-based reconstruction outperforms every conventional reconstruction technique for all 32 digits. We postulate that the energy optimization landscape is not affected much by noise during the annealing process. Moreover, we see that the reconstruction from as few as 2 or 4 projection views can outperform standard reconstruction algorithms for binary images. However, the variance of the quantitative results is relatively high. The high variance can be associated with the reconstructed images that have higher frequency content, especially within the imaged object. We postulate that the robustness to noise and the ability to reconstruct
with fewer views can yield a considerable advantage, as a patient has to undergo less radiation, and less time is required to scan a patient. We also see potential drawbacks of our method. Most importantly, the D-Wave quantum annealer and the associated hybrid solvers are not universal quantum computers. Therefore, we can only perform the QA algorithm on the hardware. In return, this means that we cannot use the ability of quantum computers to represent extensive data with significantly fewer qubits. On the other hand, the data loading is part of the problem formulation for a quantum annealer, whereas it is a time-consuming task for gate-based QC. Quantum annealing can only provide significant speed-up and better solutions for certain problems 47 . The use of the hybrid solver helps improve solution quality and attainable problem size. However, the problems may require long runtimes, which cannot compete with classical methods. In general, hybrid-based reconstruction has problems reconstructing homogeneous regions. Adding smoothness constraints to the objective could improve the reconstructed images in the future. Finally, the cost of QC is relatively high at the moment but is expected to decrease as it did for classical computers.

Conclusion and Outlook
We have given the reader an overview of quantum computing, especially adiabatic quantum computing. More to the point, we have explained how QA works and which problems it can solve. Subsequently, we presented the inverse problem of tomographic image reconstruction and described the use case of emission tomography and its difference from transmission tomography. We summarized previous work on the solution of linear systems and image reconstruction with quantum annealing and quantum computing and provided the fundamentals for our reconstruction method with quantum annealing and hybrid solvers. Finally, we showcased the results of binary- and integer-valued reconstruction for different matrix sizes. We also tested the reconstruction with respect to noise and underdetermination. Hybrid-based reconstruction can offer potential benefits for noisy linear systems and in the case of underdetermination. We also highlight the limitations due to problem size, runtime, and explainability.

Methods
This section will cover how we solve an inverse problem with QA. We cover the fundamentals of embedding the problem on the quantum processing unit's (QPU) topology. We elaborate on the limitations of the current topology and how hardware has to advance to run the optimization of problems at a significant scale. Further, we describe how hybrid algorithms can utilize QA to its full extent and how we have to embed the problem for the hybrid approach. Finally, we describe the open-source framework for creating noisy tomographic test problems.

Problem Formulation
We recall that the reconstruction problem in tomography is an inverse problem of the form shown in equation 6. Therefore, the matrix inverse of M defines the solution, as seen in equation 10.
System matrices can also be non-square if an insufficient number of views is acquired, resulting in a system that is not fully determined. Therefore, we approximate the solution in a least-squares manner. In classical computing, we approximate the least-squares solution with the Moore-Penrose PI 34 . For simplification, we formulate the above equation as the objective of a quadratic minimization problem, with its minimum being the approximated solution x̂:

x̂ = argmin_x ‖Mx − y‖²₂

Previously, Rogers and Singleton described matrix inversion as a QUBO problem for floating-point precision 41 . Moreover, seismic inversion 42 and binary matrix factorization 39 have posed similar problems.

Quadratic Unconstrained Binary Optimization for Binary Tomography
In the QC fundamentals, we discussed that the Ising model is the basis and Hamiltonian for QA. In the case of binary tomography, the reconstruction problem is directly mappable to a QUBO problem. We recall that the binary tomographic model is defined as in equation 6, where M ∈ R^(m×n), y ∈ R^m, and x ∈ {0, 1}^n. Expanding the least-squares objective ‖Mx − y‖²₂ and using x_i² = x_i for binary variables, the quadratic interactions are given by the off-diagonal entries of MᵀM, and the linear bias values can be extracted as:

h_i = (MᵀM)_{i,i} − 2 (Mᵀy)_i

The constant offset yᵀy does not change the minimization problem's objective but is an optional parameter for the QA-based sampler.
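The mapping can be sketched end-to-end: expand ‖Mx − y‖² for binary x, fold the diagonal of MᵀM into the linear biases, and verify on a toy two-view system (the matrix below is an illustrative assumption) that the brute-force QUBO minimum reproduces the measurement exactly:

```python
import numpy as np
from itertools import product

def least_squares_qubo(M, y):
    """Map min_x ||Mx - y||^2 over binary x to an upper-triangular QUBO.
    Since x_i^2 = x_i for binary x, the diagonal of M^T M joins the
    linear bias; y^T y is a constant offset the sampler may ignore."""
    A = M.T @ M
    b = M.T @ y
    n = M.shape[1]
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = A[i, i] - 2.0 * b[i]      # linear bias
        for j in range(i + 1, n):
            Q[i, j] = 2.0 * A[i, j]         # quadratic coupling
    return Q, float(y @ y)

# Toy two-view system (row sums and column sums of a 2x2 image).
M = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
y = M @ np.array([1.0, 0.0, 0.0, 1.0])
Q, offset = least_squares_qubo(M, y)

# Brute-force minimum over all binary assignments (the annealer's job).
best = min(product([0, 1], repeat=4),
           key=lambda x: np.array(x, dtype=float) @ Q @ np.array(x, dtype=float))
```

Note that with only two views the minimum is degenerate: both diagonals of the 2 × 2 image fit the sinogram perfectly, a small instance of the underdetermination studied in the results.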
When directly mapping a QUBO problem to the quantum annealer, one has to consider the embedding on the QPU's topology. The qubits on a Chimera topology, as embedded on the D-Wave 2000Q, are internally connected to four other qubits and have one or two external connections to other qubits. The newer D-Wave Advantage2 system features internal connectivity of one qubit to 12 other qubits and two to three external couplers. To map higher-connectivity graphs to the QPU, one utilizes chains of qubits to represent one logical qubit. Fully connected graphs are mapped to the QPU using a clique embedding 19 . The largest mappable fully connected graph on the QPU has 65 logical qubits on the D-Wave 2000Q and 100 logical qubits on the D-Wave Advantage2. Our binary reconstruction problem, now defined as the QUBO matrix Q, is fully connected. More to the point, variables usually have quadratic interactions with all other variables. The limit in terms of reconstructed binary image size for the D-Wave Advantage2 is therefore 10 × 10. Consequently, using QA to reconstruct an R-bit integer image of size N × N without problem optimization will require N²R fully connected qubits. This qubit amount and connectivity will not be available soon. Gate-based quantum computers will likely offer an advantage, as significantly fewer qubits are required to represent such extensive data. Fig. 2 shows examples of the graph and corresponding embedding for image sizes of 4 × 4 and 8 × 8. To embed larger problems on the D-Wave machine, we use hybrid algorithms, which are defined further below. The optimization scheme is described in Algorithm 1.
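The N²R qubit count arises from a binary expansion of each R-bit pixel; a brief sketch (the pixel count and values are illustrative):

```python
import numpy as np

R = 4                                    # bits per pixel, values 0..15
weights = 2.0 ** np.arange(R)            # bit weights [1, 2, 4, 8]

n_pixels = 4                             # stands in for N*N
# B maps the binary vector x_bits (n_pixels * R qubits) to integer pixels.
B = np.kron(np.eye(n_pixels), weights)   # shape (n_pixels, n_pixels * R)

x = np.array([5.0, 0.0, 12.0, 3.0])      # integer image (flattened)
# Bit encoding of x, least significant bit first per pixel:
x_bits = np.concatenate(
    [[(int(v) >> b) & 1 for b in range(R)] for v in x]).astype(float)

# A binary least-squares QUBO then uses M @ B in place of M, so an
# N x N image costs N^2 * R fully connected binary variables.
```

The expansion preserves the linear forward model (y = M B x_bits), which is why the qubit requirement scales multiplicatively with the bit depth.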

Hybrid Optimization
The intermediate step towards complete quantum-assisted computation is to design hybrid algorithms that enable the embedding of significantly larger problems on current QC hardware. On quantum annealers, one can make use of hybrid workflows. Raymond et al. 5 introduced one type of hybrid computation. Their algorithm uses a large-neighborhood local search to find subproblems in the original problem. The subproblems are then of a size that is mappable to the QPU. Moreover, D-Wave has introduced a hybrid solver for larger problems. The new constrained quadratic models (CQM) running on D-Wave's Hybrid Sampler enable the use of integer values in the quadratic model and, therefore, drastically expand the solution possibilities. The constrained quadratic model is defined as:

min_x Σ_i a_i x_i + Σ_{i<j} b_{i,j} x_i x_j + c

Here, x_i is the unknown integer variable we want to optimize for, a_i is the linear weight, b_{i,j} is the quadratic term between x_i and x_j, and c is a constant; possible inequality and equality constraints can be defined additionally. In principle, the workflow is defined by the classical problem formulation and a time limit T 48 . The time limit T is automatically calculated depending on the problem if not specified by the user. The solvers run in parallel and utilize heuristic solvers to explore the solution space, then pass this information to a quantum module that utilizes D-Wave systems to find solutions. The QPU solutions then guide the heuristic solvers to find better-quality solutions and restrict the search space. This process repeats iteratively for the specified time limit. Further, one has the possibility of introducing quadratic and linear constraints 48 . The optimization scheme is described in Algorithm 2.
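The heuristic side of such a workflow can be mimicked classically. The sketch below is a plain random-restart single-bit-flip descent over a QUBO, a simplified stand-in for the hybrid solver's local-search component (in the real workflow, QPU samples would seed and guide these sweeps), not D-Wave's actual solver:

```python
import numpy as np

def energy(Q, x):
    """QUBO objective x^T Q x."""
    return float(x @ Q @ x)

def local_search_qubo(Q, n_restarts=20, seed=0):
    """Random-restart greedy descent: flip single bits while the QUBO
    energy improves. The all-zeros start is always included so the
    result is reproducible; the rest are random restarts."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    starts = [np.zeros(n)] + [rng.integers(0, 2, n).astype(float)
                              for _ in range(n_restarts - 1)]
    best_x, best_e = None, np.inf
    for x in starts:
        cur = energy(Q, x)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                x[i] = 1 - x[i]                  # trial flip
                e = energy(Q, x)
                if e < cur:
                    cur, improved = e, True      # keep the flip
                else:
                    x[i] = 1 - x[i]              # revert
        if cur < best_e:
            best_x, best_e = x.copy(), cur
    return best_x, best_e
```

On small binary tomography QUBOs, this descent finds the zero-residual ground states; the hybrid workflow replaces such plain sweeps with large-neighborhood search guided by QPU samples and iterates until the time limit T.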

Figure 1. Simulation and reconstruction of a two-view binary tomographic problem using hybrid quantum annealing.

Figure 2. Graphs and physical embeddings on the QPU for binary reconstruction problems. a) and c) show the directed graph for binary tomographic problems for image sizes 4 × 4 and 8 × 8, respectively. b) and d) show the embedding on the QPU topology for a) and c), respectively.

Figure 7. 4-bit integer reconstructions of four digits from the UCI digits dataset without (left) and with random noise (right).