Article

Progress towards Analytically Optimal Angles in Quantum Approximate Optimisation

Laboratory of Quantum Algorithms for Machine Learning and Optimisation, Skolkovo Institute of Science and Technology, 3 Nobel Street, 121205 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2601; https://doi.org/10.3390/math10152601
Submission received: 22 June 2022 / Revised: 22 July 2022 / Accepted: 23 July 2022 / Published: 26 July 2022
(This article belongs to the Special Issue Quantum Computing Algorithms and Computational Complexity)

Abstract:
The quantum approximate optimisation algorithm is a p-layer, time-variable split-operator method executed on a quantum processor and driven to convergence by classical outer-loop optimisation. The classical co-processor varies the individual application times of a problem/driver propagator sequence to prepare a state which approximately minimises the problem's generator. Analytical solutions to choose optimal application times (called parameters or angles) have proven difficult to find, whereas outer-loop optimisation is resource intensive. Here we prove that the optimal quantum approximate optimisation algorithm parameters for p = 1 layer reduce to one free variable and that, in the thermodynamic limit, we recover optimal angles. We moreover demonstrate that conditions for vanishing gradients of the overlap function share a similar form, which leads to a linear relation between circuit parameters independent of the number of qubits. Finally, we present a list of numerical effects, observed for particular system sizes and circuit depths, which are yet to be explained analytically.

1. Introduction

The field of quantum algorithms has dramatically transformed in the last few years due to the advent of a quantum-to-classical feedback loop: a fixed-depth quantum circuit is adjusted to minimise a cost function. This approach partially circumvents certain limitations, such as variability in pulse timing, and requires shorter-depth circuits at the cost of outer-loop training [1,2,3,4,5,6]. The most studied algorithm in this setting is the quantum approximate optimisation algorithm (QAOA) [7], which was developed to approximate solutions to combinatorial optimisation problem instances [8], e.g., MAX-k-SAT [9,10], MAX-Cut [7,11,12,13,14,15,16], and MAX-k-Colorable-Subgraph [17] instances. The algorithm has certain real-world applications, including finance [18], and might prove useful for general constraint optimisation [19].
The setting of QAOA is that of n qubits: states are represented as vectors in $V_n = [\mathbb{C}^2]^{\otimes n}$. We are given a non-negative Hamiltonian $P \in \mathrm{herm}_{\mathbb{C}}(V_n)$ and we seek a normalised ground vector $|t\rangle \in \arg\min_{\phi \in \{0,1\}^n} \langle \phi| P |\phi\rangle$.
QAOA might be viewed as a (time-variable, fixed-depth) quantum split-operator method. We let $V(\gamma)$ be the propagator of P applied for time γ. We consider a second propagator $U(\beta)$ generated by applying a yet-to-be-defined Hamiltonian $H_x$ for time β. We start in the equal superposition state $|+\rangle^{\otimes n} = 2^{-n/2}(|0\rangle + |1\rangle)^{\otimes n}$ and form a p-depth U, V sequence:
$$|g_p(\gamma, \beta)|^2 = \left| \langle t| \prod_{k=1}^{p} [U(\beta_k) V(\gamma_k)]\, |+\rangle^{\otimes n} \right|^2 .$$
The application time of each propagator is varied to maximise preparation of the target state $|t\rangle$. Finding $\gamma, \beta$ that maximise $|g_p(\gamma, \beta)|$ has proven cumbersome. Even lacking such solutions, much progress has been made.
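For small n, the overlap above can be evaluated directly by statevector simulation. The following sketch is our own illustration (not the authors' code; the function name is ours), specialised to the projector problem Hamiltonian $P = |t\rangle\langle t|$ with t = 0…0 studied later in this paper:

```python
import numpy as np
from functools import reduce

def qaoa_overlap(gammas, betas, n):
    """Overlap g_p = <t| prod_k [U(beta_k) V(gamma_k)] |+>^n for the
    projector Hamiltonian P = |t><t| with t = 0...0."""
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)       # |+>^n
    for g, b in zip(gammas, betas):                      # layer k = 1 applied first
        psi[0] *= np.exp(-1j * g)                        # V(gamma): phase on |0...0>
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])    # exp(-i beta X)
        psi = reduce(np.kron, [u1] * n) @ psi            # U(beta) = exp(-i beta H_x)
    return psi[0]                                        # <0...0|psi>

# With all angles zero the circuit does nothing: g = <0|+>^n = 2^{-n/2}
assert abs(qaoa_overlap([0.0], [0.0], 3) - 2 ** -1.5) < 1e-12
```

Applying $U(\beta)$ as a Kronecker product of single-qubit rotations is exact here because $H_x$ is a sum of commuting one-body terms.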
Recent milestones include the experimental demonstration of p = 3 depth QAOA (corresponding to six tunable parameters) on a twenty-three-qubit superconducting processor [1], universality results [20,21], as well as several results that aid and improve on the original implementation of the algorithm [11,12,17]. Towards practical realisation of QAOA, trapped-ion quantum computers have recently shown promising results, including demonstrations on up to forty qubits [2] and the potential to realise arbitrary combinatorial optimisation problems with all-to-all connectivity based on hardware-inspired modifications [22]. Although QAOA exhibits provable advantages, such as recovering a near-optimal query complexity in Grover's search [23], and offers a pathway towards quantum advantage [13], several limitations have been discovered for low-depth QAOA [9,24,25].
In the setting of maximum-constraint satisfiability (e.g., minimising a Hamiltonian representing a function of type $f: \{0,1\}^n \to \mathbb{R}_+$), it has been shown that underparameterisation of QAOA sequences can be induced by increasing a problem instance's constraint-to-variable ratio [9]. This effect persists in graph minimisation problems [26]. While this effect is perhaps an expected limitation of the quantum algorithm, parameter concentrations and noise-assisted training add a degree of optimism. QAOA exhibits parameter concentrations, in which parameters trained on $\omega < n$ qubits provide a good training sequence for n qubits [27]. Moreover, whereas layerwise training of QAOA saturates, i.e., the algorithm plateaus and fails to reach the target, local coherent noise recovers layerwise training robustness [28]. Both concentrations and noise-assisted training imply a reduction in the computational resources required for outer-loop optimisation.
Exact solutions to find the optimal parameters for QAOA have only been possible in special cases including, e.g., fully connected graphs [14,15,16] and projectors [27]. A general analytical approach which would allow for (i) calculation of optimal parameters, (ii) estimation of the critical circuit depth and (iii) performance guarantees for fixed depth remains open.
Here we prove that the optimal QAOA parameters for p = 1 are related as $\gamma_1 = \pi - 2\beta_1$ and that, in the thermodynamic limit, we recover optimality as $n\beta_1 \to \pi$ and $\gamma_1 \to \pi$. We moreover demonstrate that conditions for vanishing gradients of the overlap function share a similar form, which leads to a linear relation between circuit parameters independent of the number of qubits. We hence devise an additional means to recover parameter concentrations [27] analytically. Finally, we present a list of numerical effects, observed for particular system sizes and circuit depths, which are yet to be explained analytically.

2. State Preparation with QAOA

We consider an n-qubit complex vector space $V_n = [\mathbb{C}^2]^{\otimes n} \cong \mathbb{C}^{2^n}$ with fixed standard computational basis $B_n = \{|0\rangle, |1\rangle\}^{\otimes n}$. For an arbitrary target state $|t\rangle \in B_n$ (equivalently $|\mathbf{t}\rangle$, $\mathbf{t} \in \{0,1\}^{\times n}$) we define the propagators
$$U(\beta) = e^{-i\beta H_x}, \qquad V(\gamma) = e^{-i\gamma P},$$
where $P = |t\rangle\langle t|$ and $H_x = \sum_{j=1}^{n} X_j$ is the one-body mixer Hamiltonian, with $X_j$ the Pauli matrix acting non-trivially on the j-th qubit. Here we focus on state preparation, thus choosing the problem Hamiltonian to be a projector ($P^2 = P$) onto an arbitrary bit string $|t\rangle$. We note that while the projector has only two energy levels, the effective Hamiltonian of the whole QAOA sequence has up to n + 1 distinct energy levels. In such settings, the propagator $V(\gamma)$ acting on a superposition adds a phase $e^{-i\gamma}$ to the component $|t\rangle$, while the propagator $U(\beta)$ mixes the components' amplitudes.
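Both propagators admit simple closed forms: for a projector ($P^2 = P$) the exponential series for $e^{-i\gamma P}$ truncates to a rank-one update of the identity, and since $H_x$ is a sum of one-body terms, $e^{-i\beta H_x}$ factorises over qubits. A small numerical check of both identities (our own sketch; SciPy is assumed available):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n, gamma, beta = 3, 0.7, 0.4
dim = 2 ** n
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# P = |0...0><0...0| is a projector, so e^{-i gamma P} = 1 + (e^{-i gamma} - 1) P
P = np.zeros((dim, dim)); P[0, 0] = 1.0
V = expm(-1j * gamma * P)
assert np.allclose(V, np.eye(dim) + (np.exp(-1j * gamma) - 1) * P)

# H_x = sum_j X_j, so e^{-i beta H_x} factorises into single-qubit rotations
Hx = sum(reduce(np.kron, [X if k == j else I2 for k in range(n)]) for j in range(n))
U = expm(-1j * beta * Hx)
u1 = expm(-1j * beta * X)
assert np.allclose(U, reduce(np.kron, [u1] * n))
```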
A p-depth (p-layer) QAOA circuit prepares a quantum state $|\psi\rangle$ as:
$$|\psi_p(\gamma, \beta)\rangle = \prod_{k=1}^{p} [U(\beta_k) V(\gamma_k)]\, |+\rangle^{\otimes n},$$
where $\gamma_k \in [0, 2\pi)$, $\beta_k \in [0, \pi)$. The optimisation task is to determine the optimal QAOA parameters for which the state prepared in (3) achieves the maximum absolute value of the overlap $g_p(\gamma, \beta) = \langle t|\psi_p(\gamma, \beta)\rangle$ with the target $|t\rangle$. In other words, we search for
$$(\gamma_{opt}, \beta_{opt}) \in \arg\max_{\gamma, \beta} |g_p(\gamma, \beta)| .$$
Note that the problem is equivalent to the minimisation of the ground-state energy of the Hamiltonian $P' = \mathbb{1} - |t\rangle\langle t|$,
$$\min_{\gamma, \beta} \langle \psi_p(\gamma, \beta)|\, P'\, |\psi_p(\gamma, \beta)\rangle = 1 - \max_{\gamma, \beta} |g_p(\gamma, \beta)|^2 .$$
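The equivalence between overlap maximisation and energy minimisation can be checked numerically (an illustrative sketch in our own notation, with t = 0…0):

```python
import numpy as np
from functools import reduce

def qaoa_state(gammas, betas, n):
    """|psi_p> = prod_k [U(beta_k) V(gamma_k)] |+>^n for P = |0...0><0...0|."""
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(gammas, betas):
        psi[0] *= np.exp(-1j * g)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return psi

n = 3
rng = np.random.default_rng(1)
psi = qaoa_state(rng.uniform(0, 2 * np.pi, 2), rng.uniform(0, np.pi, 2), n)

dim = 2 ** n
P = np.zeros((dim, dim)); P[0, 0] = 1.0                  # P = |t><t|, t = 0...0
energy = np.real(psi.conj() @ (np.eye(dim) - P) @ psi)   # <psi| (1 - P) |psi>
overlap2 = abs(psi[0]) ** 2                              # |g_p|^2
assert np.isclose(energy, 1.0 - overlap2)
```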
Remark 1 (Inversion symmetry).
Under the affine transformation
$$(\gamma, \beta) \to (2\pi - \gamma,\, \pi - \beta)$$
the absolute value of the overlap remains invariant, as $g_p \to (-1)^{np}\, \overline{g_p}$. Therefore, this narrows the search space to $\gamma_k \in [0, \pi)$, $\beta_k \in [0, \pi)$, whereas maxima inside the restricted region determine maxima in the full space via Equation (6).
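The symmetry is easy to verify numerically. The sketch below (ours) also checks the transformation law $g_p \to (-1)^{np}\, \overline{g_p}$, which is our reading of the remark:

```python
import numpy as np
from functools import reduce

def qaoa_overlap(gammas, betas, n):
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(gammas, betas):
        psi[0] *= np.exp(-1j * g)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return psi[0]

rng = np.random.default_rng(2)
for n, p in [(3, 1), (3, 2), (4, 2)]:
    gam = rng.uniform(0, 2 * np.pi, p)
    bet = rng.uniform(0, np.pi, p)
    g = qaoa_overlap(gam, bet, n)
    g_mapped = qaoa_overlap(2 * np.pi - gam, np.pi - bet, n)
    assert np.isclose(abs(g_mapped), abs(g))                   # |g_p| is invariant
    assert np.isclose(g_mapped, (-1) ** (n * p) * np.conj(g))  # g_p -> (-1)^{np} conj(g_p)
```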
Proposition 1 (Overlap invariance).
The overlap function $g_p(\gamma, \beta)$ is invariant with respect to the choice of target $|t\rangle \in B_n$.
Proof. 
Each $|t\rangle = |t_1 t_2 \ldots t_n\rangle \in B_n$ determines a unitary operator $U = U^\dagger = \prod_{j=1}^{n} X_j^{t_j}$. Hence, we have
$$g_p(\gamma, \beta) = \langle 0| U \prod_{k=1}^{p} e^{-i\beta_k H_x} e^{-i\gamma_k U |0\rangle\langle 0| U} |+\rangle^{\otimes n} = \langle 0| U \prod_{k=1}^{p} e^{-i\beta_k H_x} \left[ U e^{-i\gamma_k |0\rangle\langle 0|} U \right] |+\rangle^{\otimes n} = \langle 0| \prod_{k=1}^{p} e^{-i\beta_k H_x} e^{-i\gamma_k |0\rangle\langle 0|} |+\rangle^{\otimes n} .$$
The first equality follows from $U|0\rangle = |t\rangle$, where $|0\rangle \equiv |0\rangle^{\otimes n}$. The second equality follows from the definition of the matrix exponential together with $U^2 = \mathbb{1}$. The third equality follows as U commutes with $H_x$, and hence with any analytic function of $H_x$, and $U|+\rangle^{\otimes n} = |+\rangle^{\otimes n}$. Thus, the overlap is seen to be independent of the target bit string $|t\rangle$. □
Remark 2.
Overlap invariance introduced in Proposition 1 shows that optimisation problems in Equations (4) and (5) do not depend on the target. Therefore, optimal parameters are the same for any target state. Thus, with no loss of generality we limit our consideration to the target | t = | 0 .
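Proposition 1 can be checked by simulating the circuit for every target in $B_n$ (our sketch; the target is encoded as the integer index of $|t\rangle$ in the computational basis):

```python
import numpy as np
from functools import reduce

def qaoa_overlap_t(gammas, betas, n, t_index):
    """g_p for target bit string t, given as the integer index of |t> in B_n."""
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(gammas, betas):
        psi[t_index] *= np.exp(-1j * g)                 # V(gamma) = exp(-i gamma |t><t|)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return psi[t_index]                                 # <t|psi>

rng = np.random.default_rng(3)
gam, bet = rng.uniform(0, 2 * np.pi, 2), rng.uniform(0, np.pi, 2)
overlaps = [qaoa_overlap_t(gam, bet, 3, t) for t in range(8)]
assert np.allclose(overlaps, overlaps[0])   # identical overlap for every target in B_n
```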
Preparation of state (3) requires a strategy to assign 2 p variational parameters by outer-loop optimisation.
Remark 3 (Global optimisation).
All 2p parameters are optimised simultaneously; this strategy might provide the best approximation to prepare $|t\rangle$.
Remark 4 (Layerwise training).
Optimisation of parameters layer by layer. At each step of the algorithm, only one layer is optimised. After a layer is trained, a new layer is added and its parameters are optimised while keeping the parameters of the previous layers fixed.
Global optimisation is evidently challenging for high-depth circuits. The optimisation can, in principle, be simplified by exploiting problem symmetries [29] and leveraging parameter concentrations [27,30]. Layerwise training might avoid barren plateaus [31] yet is known [28] to stagnate at some critical depth, past which additional layers (trained one at a time) do not improve overlap. Local coherent noise was found to re-establish the robustness of layerwise training [28].
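The two strategies can be contrasted on a toy instance. The sketch below is ours (not the authors' code); it uses SciPy's Nelder-Mead optimiser with random restarts, and the layerwise solution also seeds one of the global restarts, so global training is guaranteed to do at least as well here:

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

def overlap_abs(params, n):
    """|g_p| for params = [gamma_1..gamma_p, beta_1..beta_p], P = |0...0><0...0|."""
    p = len(params) // 2
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(params[:p], params[p:]):
        psi[0] *= np.exp(-1j * g)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return abs(psi[0])

def best_of(fun, starts):
    best = None
    for x0 in starts:
        res = minimize(fun, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

n, p = 4, 2
rng = np.random.default_rng(4)

# Layerwise: train one layer, freeze it, then train the next
gs, bs = [], []
for _ in range(p):
    step = best_of(lambda x: -overlap_abs(gs + [x[0]] + bs + [x[1]], n),
                   [rng.uniform(0, np.pi, 2) for _ in range(10)])
    gs.append(step.x[0]); bs.append(step.x[1])
layerwise = overlap_abs(gs + bs, n)

# Global: all 2p parameters at once (the layerwise solution seeds one restart)
starts = [rng.uniform(0, np.pi, 2 * p) for _ in range(10)] + [np.array(gs + bs)]
glob = -best_of(lambda x: -overlap_abs(x, n), starts).fun
assert glob >= layerwise - 1e-9
```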

3. p = 1 QAOA

For a single layer, the global and layerwise strategies are equivalent. Such a circuit was considered to establish parameter concentrations [27] analytically. The overlap was shown to be:
$$|g_1(\gamma, \beta)|^2 = \frac{1}{2^n}\left[ 1 + 2\cos^n\!\beta\, \big( \cos(\gamma - n\beta) - \cos n\beta \big) + 2\cos^{2n}\!\beta\, (1 - \cos\gamma) \right] .$$
To find extreme points of (8), the authors of [27] set the derivatives with respect to γ and β to zero. This approach leads to solutions which contain maxima but also minima of the overlap (8), and these must be carefully separated. Moreover, it ignores the operator structure of the overlap exploited here. For brevity, the subscript opt in $\gamma_{opt}$ and $\beta_{opt}$ is omitted in what follows.
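The closed form (8) can be cross-checked against direct statevector simulation (our own sketch):

```python
import numpy as np
from functools import reduce

def g1(gamma, beta, n):
    """p = 1 overlap <0|e^{-i beta H_x} e^{-i gamma P}|+>^n by simulation."""
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    psi[0] *= np.exp(-1j * gamma)
    u1 = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    return (reduce(np.kron, [u1] * n) @ psi)[0]

def g1_sq_formula(gamma, beta, n):
    """Equation (8)."""
    c = np.cos(beta) ** n
    return (1 + 2 * c * (np.cos(gamma - n * beta) - np.cos(n * beta))
              + 2 * c ** 2 * (1 - np.cos(gamma))) / 2 ** n

rng = np.random.default_rng(5)
for _ in range(5):
    gam, bet = rng.uniform(0, 2 * np.pi), rng.uniform(0, np.pi)
    assert np.isclose(abs(g1(gam, bet, 4)) ** 2, g1_sq_formula(gam, bet, 4))
```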
Theorem 1.
Optimal p = 1 QAOA parameters relate as $\gamma = \pi - 2\beta$.
Proof. 
To maximise the absolute value of the overlap
$$g \equiv g_1(\gamma, \beta) = \langle 0|\, e^{-i\beta H_x} e^{-i\gamma P}\, |+\rangle^{\otimes n},$$
with $P = |0\rangle\langle 0|$, we use the standard conditions $\partial_\gamma (g\bar g) = \partial_\beta (g\bar g) = 0$. Setting the derivative with respect to γ to zero we arrive at
$$\langle 0|\, e^{-i\beta H_x} e^{-i\gamma P} P\, |+\rangle^{\otimes n}\, \bar g = \langle +|^{\otimes n}\, P e^{i\gamma P} e^{i\beta H_x}\, |0\rangle\, g .$$
Using the explicit form of the projector and the fact that $\langle 0| e^{-i\beta H_x} |0\rangle = \cos^n\beta$, Equation (10) simplifies into
$$\bar g = g\, e^{2i\gamma} \iff \bar g\, e^{-i\gamma} = g\, e^{i\gamma},$$
which is equivalent to
$$\arg g = -\gamma .$$
Then the derivative of expression (9) with respect to β is set to zero and we arrive at
$$\langle 0|\, e^{-i\beta H_x} H_x e^{-i\gamma P}\, |+\rangle^{\otimes n}\, \bar g = \langle +|^{\otimes n}\, e^{i\gamma P} H_x e^{i\beta H_x}\, |0\rangle\, g .$$
Moving $H_x$ next to its eigenstate $|+\rangle^{\otimes n}$ is compensated as follows:
$$\langle 0|\, e^{-i\beta H_x} \left\{ e^{-i\gamma P} H_x + (e^{-i\gamma} - 1)[H_x, P] \right\} |+\rangle^{\otimes n}\, \bar g = \langle +|^{\otimes n} \left\{ H_x e^{i\gamma P} + [P, H_x](e^{i\gamma} - 1) \right\} e^{i\beta H_x}\, |0\rangle\, g .$$
After simplification (see Remark 5) we arrive at
$$\bar g \bar A = -g A\, e^{i\gamma},$$
where $A = \langle +|^{\otimes n} [P, H_x]\, e^{i\beta H_x}\, |0\rangle$. Now $\bar g$ is substituted from Equation (11) to establish
$$e^{i\gamma} \bar A = -A .$$
Thus, similarly to Equation (12), we arrive at
$$\arg A = \frac{\gamma + \pi}{2} .$$
A is calculated as
$$\sqrt{2^n}\, A = \langle 0|\, (H_x - n)\, e^{i\beta H_x}\, |0\rangle = -n \cos^{n-1}\!\beta\; e^{-i\beta},$$
which shows that $\arg A = \pi - \beta$. Thus, from Equation (17) we arrive at
$$\pi - \beta = \frac{\gamma + \pi}{2},$$
which finally establishes $\gamma = \pi - 2\beta$. □
Remark 5 (Trivial solutions).
Equation (14) has three pathological solutions which must be ruled out: (i) $\sin\frac{\gamma}{2} = 0$ (which sets $e^{i\gamma} - 1 = 0$), (ii) $\cos\beta = 0$ (which sets $A = 0$), (iii) $g(\gamma, \beta) = 0$. All three cases imply $|g(\gamma, \beta)| \le g(0, 0)$.
Remark 6.
The zero-derivative conditions result in (11) and (15), which share a similar form, viz. $\bar x = x\, e^{i\varphi}$. The first condition (11) can be obtained without differentiation [28] using the explicit form of the overlap, Equation (9),
$$\sqrt{2^n}\, g = e^{-i\gamma} \cos^n\beta + \left( e^{-in\beta} - \cos^n\beta \right),$$
and the fact that $\max_\gamma |A\, e^{i\gamma} + B| = |A| + |B|$ for any $A, B \in \mathbb{C}$. Although the derivative with respect to β leads to the condition (15), we find no way to recover it using elementary alignment arguments.
Remark 7.
While the optimal angle relation $\gamma = \pi - 2\beta$ has also been established in [27], here we demonstrate that it can result from a certain ansatz symmetry, manifested in the similar form of the zero-derivative conditions (11) and (15). This can provide useful insight into similar optimal-angle dependencies for deeper circuits (Section 4.2).
To find optimal parameters one needs to solve the zero-derivative conditions and then select the solutions that deliver a global maximum of the overlap. For convenience, we substitute $\gamma = \pi - 2\beta$ into the overlap function (20), square it, and after simplification arrive at
$$2^n |g|^2 = 1 + 4\cos^{n+1}\!\beta \left( \cos^{n+1}\!\beta - \cos(n+1)\beta \right),$$
which is used to prove the next theorem.
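Theorem 1 and the reduced overlap (21) can be verified by a grid search over the closed form (8) (our numerical check):

```python
import numpy as np

n = 6
gam = np.linspace(0, np.pi, 601)          # restricted region, cf. Remark 1
bet = np.linspace(0, np.pi / 2, 601)
G, B = np.meshgrid(gam, bet, indexing="ij")
C = np.cos(B) ** n
g2 = (1 + 2 * C * (np.cos(G - n * B) - np.cos(n * B))
        + 2 * C ** 2 * (1 - np.cos(G))) / 2 ** n        # Equation (8)

i, j = np.unravel_index(np.argmax(g2), g2.shape)
g_opt, b_opt = gam[i], bet[j]
assert abs(g_opt + 2 * b_opt - np.pi) < 0.05            # Theorem 1: gamma = pi - 2 beta

# Equation (21): 2^n |g|^2 with gamma = pi - 2 beta substituted
f21 = 1 + 4 * np.cos(bet) ** (n + 1) * (np.cos(bet) ** (n + 1) - np.cos((n + 1) * bet))
assert abs(2 ** n * g2[i, j] - f21.max()) < 1e-3
```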
Theorem 2.
The optimal p = 1 QAOA parameters converge as $n\beta \to \pi$ and $\gamma \to \pi$ when $n \to \infty$.
Proof. 
Using the explicit form of the overlap (20), from Equation (11) one can establish
$$\mathrm{Im}\left[ e^{i\gamma} \left( e^{-in\beta} - \cos^n\beta \right) \right] = 0 .$$
Substituting $\gamma = \pi - 2\beta$ one arrives at
$$\mathrm{Im}\left[ e^{-i(n+2)\beta} - e^{-2i\beta} \cos^n\beta \right] = 0,$$
which is equivalent to
$$\sin(n+2)\beta = \sin 2\beta\, \cos^n\beta .$$
We solve this equation in the limit $n \to \infty$. In this limit $\sin 2\beta\, \cos^n\beta \to 0$ independently of the value of β. Thus, the left-hand side of Equation (24) tends to zero, which implies that the leading-order solution scales as
$$\beta = \frac{k\pi}{n+2} + o(n^{-1}),$$
where $k < n$ is a positive integer (in principle, n-dependent). To recover the optimal constant k we substitute Equation (25) into Equation (21) to obtain
$$2^n |g|^2 = 1 + 4\cos^{n+2}\frac{k\pi}{n+2} \left( \cos^{n}\frac{k\pi}{n+2} - (-1)^k \right)$$
up to $o(1)$ terms. Finally, as cosine is monotonically decreasing on the interval $[0, \pi)$, it is evident that the overlap is maximised for the smallest odd constant $k = 1$. Therefore, the optimal parameter β is given by
$$\beta = \frac{\pi}{n+2} + o(n^{-1}) = \frac{\pi}{n} + o(n^{-1}),$$
which implies $n\beta \to \pi$ and thus $\gamma = \pi - 2\beta \to \pi$ when $n \to \infty$. □
Remark 8.
In Theorem 2 the leading-order solutions were found for the optimal parameters. Higher-order corrections in n are found from Equation (24). For example, it is straightforward to show that
$$\beta = \frac{\pi}{n} - \frac{4\pi}{n^2} + O(n^{-3}),$$
$$\gamma = \pi - \frac{2\pi}{n} + \frac{8\pi}{n^2} + O(n^{-3}) .$$
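The transcendental condition (24) and the expansion (28) can be checked with a standard root finder (our sketch; SciPy assumed available):

```python
import numpy as np
from scipy.optimize import brentq

def f(beta, n):
    # Equation (24): sin((n + 2) beta) - sin(2 beta) cos^n(beta)
    return np.sin((n + 2) * beta) - np.sin(2 * beta) * np.cos(beta) ** n

for n in [20, 50, 100]:
    # the k = 1 root sits just below pi / (n + 2)
    root = brentq(lambda b: f(b, n), 0.5 * np.pi / (n + 2), np.pi / (n + 2))
    assert abs(n * root - np.pi) < 5 * np.pi / n           # leading order: beta ~ pi/n
    expansion = np.pi / n - 4 * np.pi / n ** 2             # Equation (28)
    assert abs(root - expansion) < abs(root - np.pi / n)   # next order improves the estimate
```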
Remark 9.
Expressions (28) and (29) are used to demonstrate parameter concentrations [27], i.e., the effect when optimal parameters for n and n + 1 qubits are polynomially close.
Theorems 1 and 2 provide state-of-the-art analytical results for state preparation with p = 1 depth QAOA circuits. For deeper circuits and more general settings, the analysis becomes complicated and known results are mostly numerical. Therefore, below we provide a list of numerical effects for deeper circuits which lack analytical explanation.

4. Empirical Findings Missing Analytical Theory

4.1. Parameter Concentration in $p \ge 2$ QAOA

From expression (3), overlaps for circuits of different depths are related recursively as
$$g_{p+1}(\gamma, \beta, \gamma_{p+1}, \beta_{p+1}) = g_p(\gamma, \tilde\beta) + g_p(\gamma, \beta)\, \cos^n\beta_{p+1}\, \left( e^{-i\gamma_{p+1}} - 1 \right),$$
where $\tilde\beta = (\beta_1 + \beta_{p+1}, \ldots, \beta_p + \beta_{p+1})$. This recursion was used in [27] for p = 2, where it was shown that in the thermodynamic limit $n \to \infty$ the zero-derivative conditions admit solutions for which $n\beta \to \pi$ and $\gamma \to \pi$. This establishes parameter concentrations. The effect was further confirmed numerically for up to n = 17 qubits and p = 5 layers. For arbitrary depth, parameter concentrations are conjectured, yet analytical confirmation remains open. The recursion (30) can be used within the suggested operator formalism to derive a system of equations for the optimal parameters of circuits of arbitrary depth. In this formalism the zero-derivative conditions contain expectations of the propagators used in the circuit, and the system can be solved in the thermodynamic limit, albeit with a growing number of equations to satisfy.
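The recursion (30) can be verified against direct simulation; the sketch below (ours) checks the p = 1 → p = 2 case for the projector problem:

```python
import numpy as np
from functools import reduce

def qaoa_overlap(gammas, betas, n):
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(gammas, betas):          # layer p + 1 is applied last (outermost)
        psi[0] *= np.exp(-1j * g)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return psi[0]

n = 4
rng = np.random.default_rng(6)
g1_, g2_ = rng.uniform(0, 2 * np.pi, 2)
b1_, b2_ = rng.uniform(0, np.pi, 2)

lhs = qaoa_overlap([g1_, g2_], [b1_, b2_], n)
rhs = (qaoa_overlap([g1_], [b1_ + b2_], n)
       + qaoa_overlap([g1_], [b1_], n) * np.cos(b2_) ** n * (np.exp(-1j * g2_) - 1))
assert np.isclose(lhs, rhs)
```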

4.2. Last Layer Behaviour

Theorem 1 establishes a linear relation between the optimal parameters, independent of the number of qubits n. Using a global training strategy for the same problem with $p \ge 2$ depth circuits, it was numerically observed [27] that optimal parameters depend on the depth, yet can usually be approximately described by some linear relation. In the present work, we have observed that the last layer is distinctively characterised by the very same linear relation $\gamma_p + 2\beta_p = \pi$ stated in Theorem 1. We numerically confirmed this for up to p = 5 layers and n = 17 qubits, as shown in Figure 1. The effect remains unexplained analytically and could be the manifestation of some hidden ansatz symmetry.
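The last-layer relation can be probed numerically. The sketch below (ours, not the authors' code) uses SciPy's Nelder-Mead optimiser with random restarts; since a local optimiser is not guaranteed to find the global optimum, the check on the relation is made with a generous tolerance:

```python
import numpy as np
from functools import reduce
from scipy.optimize import minimize

def overlap_abs(params, n):
    """|g_p| for params = [gamma_1..gamma_p, beta_1..beta_p], P = |0...0><0...0|."""
    p = len(params) // 2
    dim = 2 ** n
    psi = np.full(dim, dim ** -0.5, dtype=complex)
    for g, b in zip(params[:p], params[p:]):
        psi[0] *= np.exp(-1j * g)
        u1 = np.array([[np.cos(b), -1j * np.sin(b)],
                       [-1j * np.sin(b), np.cos(b)]])
        psi = reduce(np.kron, [u1] * n) @ psi
    return abs(psi[0])

n, p = 6, 2
rng = np.random.default_rng(7)
best = None
for _ in range(30):
    res = minimize(lambda x: -overlap_abs(x, n), rng.uniform(0, np.pi, 2 * p),
                   method="Nelder-Mead")
    if best is None or res.fun < best.fun:
        best = res

gam_last, bet_last = best.x[p - 1], best.x[2 * p - 1]   # layer p, applied last
r = (gam_last + 2 * bet_last) % (2 * np.pi)             # mod 2 pi handles symmetric copies
assert abs(r - np.pi) < 0.3                             # gamma_p + 2 beta_p = pi
```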

4.3. Saturation in Layerwise Training at $p^* = n$

It was demonstrated [28] that layerwise training saturates, meaning that past a critical depth $p^*$ the overlap cannot be improved by further layer additions. Due to this effect, naive layerwise training performance falls below that of global training. Training saturation in layerwise optimisation was reported in [28] and confirmed for up to n = 10 qubits. Most surprisingly, the saturation depth $p^*$ was observed to be equal to the number of qubits n. Two effects remain unexplained analytically. First, why does $p^* = n$? Second, can one go beyond the necessary conditions in [28] to explain saturation?

4.4. Removing Saturation in Layerwise Training

Any modification of the layerwise training process that violates the necessary saturation conditions can remove the system from its original saturation points. This idea was exploited in [28], where two types of variation were introduced for system sizes up to n = 7: (i) undertraining the QAOA circuit at each iteration and (ii) training in the presence of random coherent phase noise. Both modifications (i) and (ii) removed saturation at $p^* = n$, yet the reason remains unexplained.

5. Conclusions

We have proven a relationship between the optimal QAOA parameters for p = 1 and recovered the optimal angles in the thermodynamic limit. We demonstrated the effect of parameter concentrations for p = 1 QAOA circuits using an operator formalism. Compared to the explicit calculation in which objective-function gradients are set to zero, the operator approach exploits the ansatz symmetry in finding optimal parameters. The suggested approach can be directly adopted to find optimal parameters for $p \ge 2$ QAOA circuits, with increasing complexity due to the larger number of parameters. Finally, we presented a list of numerical effects, observed for particular system sizes and circuit depths, which are yet to be explained analytically. These unexplained effects include both limitations and advantages of QAOA. While difficult, adding the missing theory for these subtle effects would improve our understanding of variational algorithms.

Author Contributions

Methodology, D.R., R.S., E.C., V.A. and J.B.; Writing—original draft, D.R., R.S., E.C., V.A. and J.B. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge support from the research project Leading Research Center on Quantum Computing (agreement No. 014/20).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Harrigan, M.P.; Sung, K.J.; Neeley, M.; Satzinger, K.J.; Arute, F.; Arya, K.; Atalaya, J.; Bardin, J.C.; Barends, R.; Boixo, S.; et al. Quantum approximate optimization of non-planar graph problems on a planar superconducting processor. Nat. Phys. 2021, 17, 332–336.
  2. Pagano, G.; Bapat, A.; Becker, P.; Collins, K.; De, A.; Hess, P.; Kaplan, H.; Kyprianidis, A.; Tan, W.; Baldwin, C.; et al. Quantum approximate optimization of the long-range Ising model with a trapped-ion quantum simulator. arXiv 2019, arXiv:1906.02700.
  3. Guerreschi, G.G.; Matsuura, A.Y. QAOA for Max-Cut requires hundreds of qubits for quantum speed-up. Sci. Rep. 2019, 9, 6903.
  4. Butko, A.; Michelogiannakis, G.; Williams, S.; Iancu, C.; Donofrio, D.; Shalf, J.; Carter, J.; Siddiqi, I. Understanding quantum control processor capabilities and limitations through circuit characterization. In Proceedings of the 2020 International Conference on Rebooting Computing (ICRC), Atlanta, GA, USA, 1–3 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 66–75.
  5. Biamonte, J. Universal variational quantum computation. Phys. Rev. A 2021, 103, L030401.
  6. Campos, E.; Nasrallah, A.; Biamonte, J. Abrupt transitions in variational quantum circuit training. Phys. Rev. A 2021, 103, 032607.
  7. Farhi, E.; Goldstone, J.; Gutmann, S. A Quantum Approximate Optimization Algorithm. arXiv 2014, arXiv:1411.4028.
  8. Niu, M.Y.; Lu, S.; Chuang, I.L. Optimizing QAOA: Success probability and runtime dependence on circuit depth. arXiv 2019, arXiv:1905.12134.
  9. Akshay, V.; Philathong, H.; Morales, M.E.; Biamonte, J.D. Reachability Deficits in Quantum Approximate Optimization. Phys. Rev. Lett. 2020, 124, 090504.
  10. Akshay, V.; Philathong, H.; Campos, E.; Rabinovich, D.; Zacharov, I.; Zhang, X.M.; Biamonte, J. On Circuit Depth Scaling for Quantum Approximate Optimization. arXiv 2022, arXiv:2205.01698.
  11. Zhou, L.; Wang, S.T.; Choi, S.; Pichler, H.; Lukin, M.D. Quantum Approximate Optimization Algorithm: Performance, Mechanism, and Implementation on Near-Term Devices. Phys. Rev. X 2020, 10, 021067.
  12. Brady, L.T.; Baldwin, C.L.; Bapat, A.; Kharkov, Y.; Gorshkov, A.V. Optimal Protocols in Quantum Annealing and Quantum Approximate Optimization Algorithm Problems. Phys. Rev. Lett. 2021, 126, 070505.
  13. Farhi, E.; Harrow, A.W. Quantum Supremacy through the Quantum Approximate Optimization Algorithm. arXiv 2016, arXiv:1602.07674.
  14. Farhi, E.; Goldstone, J.; Gutmann, S.; Zhou, L. The Quantum Approximate Optimization Algorithm and the Sherrington–Kirkpatrick Model at Infinite Size. arXiv 2019, arXiv:1910.08187.
  15. Wauters, M.M.; Mbeng, G.B.; Santoro, G.E. Polynomial scaling of QAOA for ground-state preparation of the fully-connected p-spin ferromagnet. arXiv 2020, arXiv:2003.07419.
  16. Claes, J.; van Dam, W. Instance Independence of Single Layer Quantum Approximate Optimization Algorithm on Mixed-Spin Models at Infinite Size. arXiv 2021, arXiv:2102.12043.
  17. Wang, Z.; Rubin, N.C.; Dominy, J.M.; Rieffel, E.G. XY mixers: Analytical and numerical results for the quantum alternating operator ansatz. Phys. Rev. A 2020, 101, 012320.
  18. Hodson, M.; Ruck, B.; Ong, H.; Garvin, D.; Dulman, S. Portfolio rebalancing experiments using the Quantum Alternating Operator Ansatz. arXiv 2019, arXiv:1911.05296.
  19. Tsoulos, I.G.; Stavrou, V.; Mastorakis, N.E.; Tsalikakis, D. GenConstraint: A programming tool for constraint optimization problems. SoftwareX 2019, 10, 100355.
  20. Lloyd, S. Quantum approximate optimization is computationally universal. arXiv 2018, arXiv:1812.11075.
  21. Morales, M.E.; Biamonte, J.; Zimborás, Z. On the universality of the quantum approximate optimization algorithm. Quantum Inf. Process. 2020, 19, 1–26.
  22. Rabinovich, D.; Adhikary, S.; Campos, E.; Akshay, V.; Anikin, E.; Sengupta, R.; Lakhmanskaya, O.; Lakhmanskiy, K.; Biamonte, J. Ion native variational ansatz for quantum approximate optimization. arXiv 2022, arXiv:2206.11908.
  23. Jiang, Z.; Rieffel, E.G.; Wang, Z. Near-optimal quantum circuit for Grover’s unstructured search using a transverse field. Phys. Rev. A 2017, 95, 062317.
  24. Hastings, M.B. Classical and quantum bounded depth approximation algorithms. arXiv 2019, arXiv:1905.07047.
  25. Bravyi, S.; Kliesch, A.; Koenig, R.; Tang, E. Obstacles to State Preparation and Variational Optimization from Symmetry Protection. Phys. Rev. Lett. 2019, 125, 260505.
  26. Akshay, V.; Philathong, H.; Zacharov, I.; Biamonte, J. Reachability Deficits in Quantum Approximate Optimization of Graph Problems. Quantum 2021, 5, 532.
  27. Akshay, V.; Rabinovich, D.; Campos, E.; Biamonte, J. Parameter concentrations in quantum approximate optimization. Phys. Rev. A 2021, 104, L010401.
  28. Campos, E.; Rabinovich, D.; Akshay, V.; Biamonte, J. Training saturation in layerwise quantum approximate optimization. Phys. Rev. A 2021, 104, L030401.
  29. Shaydulin, R.; Wild, S.M. Exploiting symmetry reduces the cost of training QAOA. IEEE Trans. Quantum Eng. 2021, 2, 1–9.
  30. Streif, M.; Leib, M. Comparison of QAOA with quantum and simulated annealing. arXiv 2019, arXiv:1901.01903.
  31. Skolik, A.; McClean, J.R.; Mohseni, M.; van der Smagt, P.; Leib, M. Layerwise learning for quantum neural networks. Quantum Mach. Intell. 2021, 3, 1–11.
Figure 1. Optimal angles of a p = 5 depth circuit for $n \in [6, 17]$. While the first layers can be approximately described by a linear relation, the last layer fits $\gamma_p + 2\beta_p = \pi$. Moreover, the values of the last layer's parameters are evidently distinct from those of the previous layers.

Share and Cite

MDPI and ACS Style

Rabinovich, D.; Sengupta, R.; Campos, E.; Akshay, V.; Biamonte, J. Progress towards Analytically Optimal Angles in Quantum Approximate Optimisation. Mathematics 2022, 10, 2601. https://doi.org/10.3390/math10152601
