Article

Quantum Computation and Arrows of Time

Nathan Argaman
Department of Physics, Nuclear Research Center-Negev, P.O. Box 9001, Be’er Sheva 84190, Israel
Entropy 2021, 23(1), 49; https://doi.org/10.3390/e23010049
Submission received: 9 December 2020 / Revised: 21 December 2020 / Accepted: 24 December 2020 / Published: 30 December 2020
(This article belongs to the Special Issue Quantum Theory and Causation)

Abstract

Quantum physics is surprising in many ways. One surprise is the threat to locality implied by Bell’s Theorem. Another surprise is the capacity of quantum computation, which poses a threat to the complexity-theoretic Church-Turing thesis. In both cases, the surprise may be due to taking for granted a strict arrow-of-time assumption whose applicability may be limited to the classical domain. This possibility has been noted repeatedly in the context of Bell’s Theorem. The argument concerning quantum computation is described here. Further development of models which violate this strong arrow-of-time assumption, replacing it by a weaker arrow which is yet to be identified, is called for.

1. Introduction

Physics faces unresolved difficulties with arrows of time. This has been evident at least since the discussions of Boltzmann’s H-Theorem and Loschmidt’s paradox in the late 19th century. Although progress has been made in connecting different arrows of time to the low-entropy big-bang origin of the universe, the resulting understanding is still incomplete (see, e.g., Reference [1]). Nevertheless, “the” arrow of time is often taken for granted, and is familiar from the “Newtonian schema” of kinematics plus dynamics [2]: it is often assumed that a physical system can always be described as having a “state” (kinematics) which “evolves” (dynamics) from the past to the future.
There are also some well-known exceptions—not all physics models conform to the rules of this schema. For example, in order to find the “state” of a system at a certain time according to the stationary-action principle, one must specify inputs—the values of the position coordinates—at both its past and future boundaries. This demonstrates the “Lagrangian schema”, which requires an all-at-once or block-universe approach. By looking beyond the standard schema, one is freed from the limitations of conventional thinking, and is open to novel possibilities. Seeking such freedom is especially relevant when an impasse is encountered; this article sets forth the claim that the surprising power of quantum computing (i.e., its tension with the strong form of the Church-Turing thesis [3]) is just the type of “paradox” which calls for abandoning the standard arrow of time.
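A minimal sketch of this all-at-once character (in Python; the free-particle discretization and grid size are illustrative assumptions, not taken from the text): the discrete stationary-action condition is imposed at every interior time slice, with the position supplied at both the past and the future boundaries, and the whole trajectory is obtained by solving a single linear system rather than by marching forward from initial data.

```python
import numpy as np

# "Lagrangian schema" sketch: for a free particle, the discrete stationary-action
# condition is x[k-1] - 2*x[k] + x[k+1] = 0 at interior slices, with the positions
# at BOTH boundaries given as inputs.  The trajectory is found all at once.
N = 11                        # number of time slices (illustrative)
x0, xN = 0.0, 5.0             # inputs at the past AND future boundaries

A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0     # boundary conditions
b[0], b[-1] = x0, xN
for k in range(1, N - 1):     # stationary-action condition at interior slices
    A[k, k - 1], A[k, k], A[k, k + 1] = 1.0, -2.0, 1.0

x = np.linalg.solve(A, b)
print(np.round(x, 2))         # a straight line between the two boundary values
```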
There already exist several lines of evidence that quantum physics is at odds with the standard arrow of time (see also References [4,5,6] in the classical context). Early examples include discussions of the Einstein-Podolsky-Rosen (EPR) “paradox” [7] and delayed-choice experiments [8,9]; recent examples include argumentation from time symmetry [10] and the Pusey-Barrett-Rudolph (PBR) theorem [11] (the latter enquires whether the quantum state is ontic or epistemic, i.e., whether it describes reality or merely the information one has regarding a system; see, for example, Reference [12]). The strongest argument involves Bell’s Theorem (see, e.g., Reference [13]).
As typically understood, Bell’s theorem proves that there is no hope for reformulating Quantum Mechanics (QM) in terms of “hidden variables”, or parameters directly describing events in spacetime. But this relies on accepting the standard arrow-of-time rule, which is taken for granted by the “Local Causality” (a.k.a. “Einstein Locality”) assumption of Bell’s Theorem (in fact, the proof relies only on the latter; see, e.g., Reference [14]). Within such an approach, QM typically describes the “state” of a many-particle system as a ray in an abstract and exponentially large Hilbert space, with typical applications involving superpositions and complex probability amplitudes.
Considering an alternative schema opens up the possibility of describing quantum entanglement in terms of spacetime-based parameters with standard probability rules [15,16]. The apparently nonlocal connection between distant regions a and b is achieved through intermediate “hidden” parameters λ, which are situated in the past yet depend on the inputs in a and b, in defiance of the standard arrow-of-time rule. λ is taken to be in the overlap of the past lightcones of a and b, which are thus indirectly connected. The fact that the “hidden” λ (a microscopic parameter) may depend on future inputs need not lead to violations of Signal Causality, just like the collapse of the wavefunction at b due to a measurement at a does not lead to violations of Signal Locality in the standard discussion of Bell correlations. For this reason, this type of “retrocausality” cannot lead to paradoxical causal loops. (Any attempt to “measure” λ so that its value will be correlated with that of a macroscopic pointer would result in loss of the entanglement, as in a “which path” detection in the context of two-slit interference; see, e.g., Reference [17].)
So far, progress in developing a full reformulation of QM along these lines has been slow (see Reference [18] for a recent review). If too much freedom is allowed, one might obtain models with backward-in-time signaling, and it has been argued that preventing this requires fine tuning [19]. Although counterarguments are available [16], it seems that a physical principle, perhaps associated with the entropic arrow of time, is needed. Such a principle could limit the excess freedom resulting from removal of the standard arrow-of-time condition, and lead to results which would systematically conform to the Signal-Causality arrow-of-time rule. (The claim that fundamental physics is strictly time-reversal invariant, completely avoiding any symmetry breaking, is not tenable, as it risks predicting the possibility of sending messages into the past [1]. Such speculations will not be entertained here—it will be assumed that some symmetry-breaking rule is in place. Whether such a rule is considered to be an integral part of the theory or merely due to the perspective of the agents involved does not bear on the present discussion.)
A closely-related issue has to do with the degree of correlations allowed in classical, quantum, or general non-local theories. In proving Bell’s Theorem, one typically derives the Clauser-Horne-Shimony-Holt (CHSH) inequality—the fact that in any locally causal mathematical model, a certain combination of correlators cannot possess a value larger than 2 [20,21]. In general, one can generate models where this combination achieves values up to 4 [22], but in QM its value is limited to 2√2, the Tsirelson bound [23]. Again, it appears that a physical principle is involved in limiting the exaggerated freedom of generic models (see, e.g., Reference [24]). In fact, research in this context has already made significant strides, involving several suggested principles [25,26,27].
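To make these numbers concrete, the following minimal sketch (Python with NumPy; the measurement angles and helper names are illustrative choices, not taken from the references) evaluates the CHSH combination of correlators for the singlet state at the standard optimal settings, reproducing the Tsirelson value 2√2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement (eigenvalues +-1) along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(theta_a, theta_b):
    """Quantum correlator <A(theta_a) x B(theta_b)> in the singlet state."""
    M = np.kron(spin(theta_a), spin(theta_b))
    return np.real(psi.conj() @ M @ psi)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # optimal settings
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))   # both ~= 2.828, saturating the Tsirelson bound
```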
In the present work, it is suggested that the algorithmic complexity achievable with quantum computation similarly provides motivation for rejecting the standard arrow of time (Reference [28] and Reference [29] suggest different approaches connecting the flow of time with quantum computation). Furthermore, here too it appears that a physical principle remains to be identified, one that would limit the freedom obtained with such a rejection. The argument is based on the distinction between a Directed Acyclic Graph (DAG) and a non-directed graph.
The main argument is given in the next brief section. It is followed by two more-detailed sections—the first focusing on DAGs, and the second on non-directed graphs—and by discussion and conclusion sections.

2. The Power of Quantum Computing vs. That of Physical Models on Graphs

Describing natural laws in spacetime in terms of mathematical parameters, and discretizing spacetime into N distinct events, leads to a DAG if the strong arrow of time is maintained. Assuming that the laws are local, and that the past is fixed and the future is not yet relevant, the mathematical rules for each event are greatly simplified, and the number of steps in a simulation of the physics is just the number of events, N. But it is not clear to begin with that the arrow of time must be imposed in this strong manner (see, e.g., Reference [2]). In particular, if there are stochastic rules that determine only how the probabilities for each event depend on events in its vicinity (in both space and time), without imposed arrows, finding the overall distribution for N events may be a much more complicated computational task, due to the requirement that all N events “simultaneously” conform to the physical laws. As an example, consider the task of finding the ground state of a three- (or higher-)dimensional spin glass, which is known to be an NP-complete problem [30,31].
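As a deliberately tiny illustration of the contrast, the following sketch (Python; the lattice size, couplings, and helper functions are assumptions made for this example) finds the exact ground state of a random ±J Ising spin glass on a small cubic lattice by exhaustive enumeration. The 2^N search over all spin configurations reflects the all-at-once character of the task; no single forward-in-time sweep suffices.

```python
import itertools
import random

# A tiny +-J Ising spin glass on an L x L x L cubic lattice (open boundaries).
L = 2
sites = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
index = {s: i for i, s in enumerate(sites)}

random.seed(0)
bonds = []  # (site i, site j, coupling J_ij = +-1) for nearest neighbours
for (x, y, z) in sites:
    for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        nb = (x + dx, y + dy, z + dz)
        if nb in index:
            bonds.append((index[(x, y, z)], index[nb], random.choice([-1, 1])))

def energy(spins):
    """Spin-glass energy of a configuration (tuple of +-1 values)."""
    return -sum(J * spins[i] * spins[j] for i, j, J in bonds)

# Exhaustive search over all 2**N spin configurations.
best = min(itertools.product([-1, 1], repeat=len(sites)), key=energy)
print(len(sites), "spins, ground-state energy:", energy(best))
```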
It is thus seen that if one assumes that Nature supplies us with finite “machines” which operate according to local rules subject to “the” arrow of time, all that can be achieved algorithmically is similar to a standard algorithm with N steps, taking N to appropriately represent the finiteness and the resolution pertaining to these “machines”. This is just the strong (or extended, or physical) Church-Turing thesis. However, if the “machines” provided by nature are not subject to this particularly strong arrow-of-time rule, the possibility that they might be capable of performing exponentially harder tasks remains open.
Our best understanding of quantum computing does not lead us to expect natural “machines” to be able to solve NP-complete tasks, although it indicates they could do better than P. The complexity class associated with quantum “machines” is the BQP class, which is much weaker than NP [32] (strictly speaking, that BQP is weaker than NP but stronger than P is not a proven fact, but a conjecture which is assumed here). Thus, as in the Bell’s-theorem context mentioned in the introduction, a physical principle which is weaker than the standard arrow of time is required, one that would limit the achievable complexity class from NP-complete to BQP (not to P). It is natural to expect that the same physical principle would be involved in both contexts.
In the following sections we will go through the above argument in detail.

3. The Strict Arrow of Time Motivates the Strong Form of the Church-Turing Thesis

Mathematical models of classical physics employ local variables or parameters with a clear association of a place and a time for each parameter. A typical example is provided by the values of the classical electric field, E(x, t). In order to connect this with algorithmic complexity, it is appropriate to discretize spacetime, taking a finite number N of events, (x_n, t_n), distributed reasonably uniformly, to provide a sufficiently detailed representation of a finite region of (Minkowski) spacetime, to some desired accuracy.
Within a kinematics plus dynamics schema, the state of the modeled system at time t would be represented in this picture by the events m with times t_m between t − Δt and t for an appropriately small Δt, and the values of the model parameters μ_m associated with these events. The model obeys Local Causality if the dynamics specify a rule (which may be either deterministic or probabilistic) for obtaining the value of the parameters at the nth event from the parameters in its recent past and its close vicinity, with spacelike separations avoided, so that the relevant events are in the past relative to t_n in all frames. We will denote the set of indices of these earlier and nearby events by r(n).
If an external input, such as an external force, acts at the nth event, the value of the parameters at that event will be affected, but the values at earlier times will not. The parameters μ_n associated with the nth event thus include inputs I_n and non-input parameters Q_n (each of these is in general a set of parameters, not limited to scalars). In the deterministic case, the dynamical rule F_n specifies the value of Q_n as a function of I_n and the earlier {μ_m}_{m ∈ r(n)} (the rule F_n depends also on the spacetime locations of n and the m’s, of course). For example, a model discretizing Maxwell’s equations in this manner would have Q_n corresponding to the electromagnetic fields, and I_n specifying the charge and current densities, the relevant inputs in this case. (For stochastic models, F_n determines the probability distribution of Q_n.)
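A minimal sketch of such a locally causal, discretized model (Python; the grid size and the particular diffusion-like rule F_n are illustrative assumptions, not part of the argument) shows how a single forward-in-time sweep over the N events suffices.

```python
import numpy as np

# N = X*T spacetime events, each carrying an input I[x, t] and a non-input value
# Q[x, t].  Q[x, t] is fixed by a local rule F_n from the event's input and its
# recent past r(n).  One forward-in-time sweep visits each event once: O(N) steps.
X, T = 50, 40
I = np.zeros((X, T))
I[X // 2, 1] = 1.0                          # a single localized external input

Q = np.zeros((X, T))
for t in range(1, T):                       # strict arrow of time: the past is fixed
    for x in range(1, X - 1):
        past = Q[x - 1:x + 2, t - 1]        # the neighbourhood r(n)
        Q[x, t] = past.mean() + I[x, t]     # a simple deterministic rule F_n

print("events processed:", X * T)
```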
Assuming that the values of the parameters μ_n are appropriately discretized as well, so that the modeling of each of the N events is finite, this description makes it obvious that the algorithmic complexity of a simulation according to such a model is O(N). Conversely, the modeled physics cannot provide results which are not efficiently achievable by an algorithm with O(N) steps. That physical systems are (polynomially) equivalent to algorithms in this sense is an expression of the strong form of the Church-Turing thesis [3]. Barring problems with the discretization scheme, classical physics indeed operates in this manner. That quantum physics is different is discussed in the next sections.
Note that we have here considered only the number of steps in the algorithm, N. It is of course possible for only M out of the N events to have external inputs, such as initial conditions, with the other N − M events having no inputs (or having the corresponding I_n set to zero or null in some fashion). It is further possible to have the number of physical parameters N exponentially larger than the number of physical inputs M, but this possibility is not of interest for the purposes of the present discussion, which focuses on N itself.
It is natural to take the N spacetime events to be nodes of a graph, with directed edges from the m’s in r(n) to n itself, representing the dynamical rules F_n. The resulting DAG represents the discretized mathematical model, as well as the algorithm which would carry out a simulation according to the model. (This O(N) graph is not to be confused with the exponentially large configuration graph representing all the possibilities for a model [3].) All the edges in the graph are directed from the past to the future (that the graph is acyclic corresponds to assuming a standard Minkowski geometry, with no closed time-like curves).
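The same construction can be made explicit in a few lines (again an illustrative sketch, with an assumed nearest-neighbour structure): the nodes are the events (x, t), the directed edges run from r(n) to n, and acyclicity is immediate because every edge increases t.

```python
# Nodes are events (x, t); a directed edge runs from each event in r(n) to n itself.
# Every edge increases t, so the graph is acyclic, and "increasing t" is already a
# valid processing (topological) order for an O(N) simulation.
X, T = 4, 3
nodes = [(x, t) for t in range(T) for x in range(X)]
edges = [((x + dx, t - 1), (x, t))
         for (x, t) in nodes if t > 0
         for dx in (-1, 0, 1) if 0 <= x + dx < X]
assert all(src[1] < dst[1] for src, dst in edges)   # all edges point from past to future
print(len(nodes), "nodes,", len(edges), "directed edges: a DAG")
```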
Note how central the standard arrow-of-time assumption is to the logic leading to the Church-Turing thesis (it is not denied, of course, that there are additional necessary assumptions, e.g., regarding discretization). It is the assumption that for each node n, the task of obtaining the parameters Q_n can be performed while taking all the parameters from the past, that is, from r(n), to be fixed, and ignoring all the parameters relating to the future, which leads to the finiteness of this task. Performing this for N nodes is then necessarily an O(N) task.

4. Models with No Arrow of Time

Removal of the above arrow-of-time restriction would have dramatic consequences for the algorithmic-complexity consideration. If a mathematical model is associated not with a DAG but with an undirected graph of N nodes, its capacity for computation could be entirely different. In fact, for a reasonable choice of the replacement for the rules F_n, the class of computations which could be performed efficiently by a “machine” that generates solutions of the relevant model would be NP. As already mentioned, this is known if the rules are replaced by those of a spin-glass system [30,31]. (Which rules should be expected for future physics theories is of course completely open—for example, at the dawn of QM, Heisenberg employed noncommuting-operator rules while Schroedinger used differential equations.) A further simple example is described next.
The example involves a standard NP-complete problem, such as scheduling M meetings within a finite given time T. The requirements concerning the length of the meetings and the intended participants are to be specified by inputs I_m, and the timing of each meeting by parameters of a different type, P_m. As the problem is in NP, it is known that N steps suffice to verify that the meetings have no conflicts, with N polynomial in M. It is easy to construct a DAG with N nodes representing the algorithm for performing this verification process, beginning with the I_m’s and P_m’s as inputs and resulting in an output O which is true for a valid combination of the timings P_m. The directions of the edges of the DAG lead from its inputs I_m and P_m to its output O. Each of the N steps is associated with a rule F_n, consistent with the description of the previous section.
Consider now removing the arrows from the graph. This could represent a model where each rule F_n is replaced by a weight W_n, which depends on both the input and the output parameters involved in F_n. Thus, the rules of the model are local as before, but the model dictates the overall behavior of the combination of parameters {Q_n}, and cannot be easily separated into N consecutive steps. Combining all of the local rules involves multiplying the weights W_n for all n, and normalizing the weights to obtain a probability distribution involves adding all the product weights, resulting in a normalizing factor Z = Σ_{{Q_n}} Π_n W_n. (In the statistical mechanics context, Z is called the partition function, and the weights are given by an exponential involving the potential energy and the temperature.)
Returning to the specific scheduling problem above, one can define each of the weights W_n as equal to unity for every combination which is consistent with the rule F_n, and to zero otherwise. One may further set the inputs I_m for a specific scheduling task, and set the “output” O to “true”. If the time T is not too short, solutions exist, and every valid schedule, that is, every valid combination of the P_m’s, together with the corresponding values of the other Q_n parameters, would have a weight of unity, with all other combinations having a vanishing weight (Z represents the number of valid schedules). The result of such a model would be to generate at random one of the valid schedules. This too is, of course, an NP-complete task. (Dealing with shorter times, or with tasks for which the structure of the graph and the F_n rules depend not only on the I_m’s but also on the P_m’s, requires more-complicated examples.)
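A toy instance along these lines (Python; the meetings, participants, and time window are invented for this sketch and are not taken from the text) combines the polynomial-time verification rules with the 0/1 weights: Z is computed by brute-force enumeration over all timing combinations, and a valid schedule is then drawn uniformly at random.

```python
import itertools
import random

# An illustrative scheduling instance.  The verifier `valid` plays the role of the
# N-step check built from the rules F_n; the undirected model assigns weight 1 to
# every combination of timings consistent with all rules (output O = true) and 0
# otherwise, so Z counts the valid schedules, and normalizing the weights amounts
# to sampling one of them uniformly.
T_total = 8                      # the given time window
meetings = [                     # inputs I_m: (length, participants)
    (2, {"alice", "bob"}),
    (3, {"bob", "carol"}),
    (1, {"alice", "carol"}),
]

def valid(timings):
    """True iff the candidate start times P_m give a conflict-free schedule."""
    for (length, _), start in zip(meetings, timings):
        if start < 0 or start + length > T_total:
            return False
    for i in range(len(meetings)):
        for j in range(i + 1, len(meetings)):
            (len_i, who_i), (len_j, who_j) = meetings[i], meetings[j]
            overlap = timings[i] < timings[j] + len_j and timings[j] < timings[i] + len_i
            if overlap and (who_i & who_j):
                return False
    return True

# Brute force over all combinations of timings P_m (exponential in general).
candidates = list(itertools.product(range(T_total), repeat=len(meetings)))
valid_schedules = [c for c in candidates if valid(c)]
Z = len(valid_schedules)         # the normalizing factor: number of valid schedules
print("Z =", Z, "out of", len(candidates), "combinations")
print("one schedule drawn at random:", random.choice(valid_schedules))
```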
The resulting pattern is similar to that described for the CHSH inequality in the introduction. The standard arrow of time is a strong restriction that would limit the capabilities of any model of a physical system to those of standard algorithms, in accordance with the strong form of the Church-Turing thesis. Quantum systems are not as limited as that, but removing the arrow-of-time restriction altogether would result in capabilities which are “too powerful” according to reasonable expectations. A restriction is still necessary, but it needs to be weaker than the standard arrow of time, curtailing the achievable complexity class from NP to BQP rather than all the way down to P.

5. Discussion

The above argumentation should be understood as following Feynman [17], who contended that nobody understands QM. He demonstrated how predictive power can coexist with a lack of understanding with the example of the astronomers of the Maya culture, who possessed a mathematical procedure for predicting the appearances of the moon—the timing of a new moon or of an eclipse—which did not involve any conception of orbital paths. Feynman also suggested that developing reformulations of existing theories can serve to improve our understanding, even if no novel predictions are involved (examples include the development of Lagrangian and Hamiltonian mechanics as alternative formulations of Newton’s equations; for a long time, these only improved our understanding of classical mechanics; much later, they also played essential roles in the development of QM).
In this context, the upshot of the previous sections is that quantum computation adds to our motivation to develop reformulations of QM which do not conform to the standard arrow of time. But it is clear that some effective arrow of time must be retained [24]. Physical theories in general, and standard QM in particular, conform to the Signal-Causality rule—they describe signaling to the future, but not to the past (there are many aspects which are time reversal-symmetric, but there is always something to break the symmetry, often just a special treatment of initial conditions, as mentioned in the introduction). Thus, the flow of accessible information, relating to the inputs and the outputs of the theory, is always from the past to the future.
In standard Schroedinger-picture QM, this past-to-future flow affects the internal parameters of the theory as well—the quantum state or wavefunction is taken to evolve from the past to the future (whether or not collapse is allowed for). A reformulation breaking the standard-arrow-of-time rules would involve some internal parameters which depend on other parameters in their future (possibly a statistical dependence, that is, having a probability distribution which depends on future parameters). In order for this future-dependence to play an essential role, it must involve relationships which cannot be simply inverted, such as a dependence on the externally-controlled settings of future measurement devices. For this reason, the arrow-of-time condition is most conveniently defined in relation to such future input parameters, and is called No Future-Input Dependence in Reference [18].
The situation concerning causality or the arrow of time in reformulations of QM which would violate this condition is similar to that concerning locality in standard QM, which violates locality in the sense of Bell’s Local-Causality condition, but conforms to Signal Locality. Here the No Future-Input Dependence condition would be violated for internal parameters, but the output parameters would not have this characteristic—the Signal Causality condition involving the outputs would be maintained.
As demonstrated in Section 4, relaxing No Future-Input Dependence has dramatic consequences for reformulations of QM. The generalization of Bell’s locality condition to all models, whether or not they have Future-Input Dependence, is called Continuous Action, and maintaining this locality condition has distinct advantages, in addition to the necessary Signal Locality [18]. In fact, Bell’s Local Causality condition can be seen to follow from requiring both Continuous Action and No Future-Input Dependence (assuming Lorentz Covariance and the use of standard mathematics and probability rules). Thus, if a reformulation of QM with Continuous Action can indeed be found, it will accordingly be based on a model with parameters with Future-Input Dependence.
It would be natural to view these parameters as providing a more-or-less direct description of reality—ontic variables—with the standard “quantum state” taken to merely represent the information available to an external observer up to a time t. This is the psi-epistemic view of QM (see, e.g., Reference [33]). The arguments posited against this view in the past would fail in the presence of Future-Input Dependence. The fact that this state “evolves” with t in an information-conserving manner (unitarity) would be required by its role as representing unchanged information, as long as indeed there is no update of the available information. Similarly, this “state of knowledge” would have to suddenly change upon such an update, explaining precisely why and how measurements cause “wavefunction collapse”.
This brief discussion only aims to indicate that the development of Future-Input Dependent models with Continuous Action is feasible in principle. For details, including concrete examples of toy models reproducing QM in the specific context of Bell’s Theorem, see Reference [18]. Developing a full reformulation of QM along these lines appears to be challenging not because of a necessity to deal with a particularly complicated situation, but primarily because of the need to overcome the barrier associated with conventional thinking concerning the arrow of time.

6. Conclusions

When we use a mathematical model to describe the objective properties of a physical system, we generally expect these properties to depend on the past of the system, not on its future. This works well in the classical, macroscopic domain, but the presence of quantum fluctuations and uncertainty appears to undermine such thinking for quantum systems. The time-symmetry of microscopic physical laws similarly speaks against such a distinction between the past and the future. (Indeed, time-symmetric interpretations of Quantum Mechanics exist—the transactional interpretation [34] and the two-state-vector formalism [35]—but these approaches still employ the standard quantum “state”, which for many-particle systems is exponentially complex and cannot be represented in terms of local variables μ_n.) Allowing the system’s “objective” microscopic parameters to depend on the specification of the measurement to be made on the system at a later time, not only on the earlier preparation, may resolve many a quantum mystery. As described above, the “nonlocality” of Bell’s Theorem serves as the prime example—quantum phenomena violate the relevant “no-action-at-a-distance” condition only when this condition is formulated within models with such a strong past-future distinction.
Generalizing the “no-action-at-a-distance” condition to models which are time-reversal symmetric, or which possess a weaker arrow-of-time rule, removes the restriction posed by Bell’s Theorem [18]. This could serve to “explain” the power of quantum computation—if indeed microscopic parameters are not subject to the rules of a DAG, the associated complexity class need not be limited to P.
Once this point of view is accepted, one is faced with a sharply contrasting problem. It is not that quantum computation is surprisingly powerful—it becomes surprising that it is not even more powerful. A “physical principle” must be imposed on the relevant family of models to limit the capacity from NP to BQP. This is closely related to the search for a limiting physical principle in the context of Tsirelson’s bound, which is related to Bell’s Theorem and has been an active field in recent decades. Perhaps examining the computational complexity achievable by different classes of models on graphs will lead to new directions on this adventure.

Funding

This research received no external funding.

Acknowledgments

The author wishes to thank Scott Aaronson and the other participants of the 6th FQXi conference (Castelvecchio Pascoli, Italy, July 2019) for thought-provoking discussions, and Oded Schwartz and Ken Wharton for helpful comments on a draft of the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Schulman, L.S. Time’s Arrows and Quantum Measurement; Cambridge University Press: Cambridge, UK, 1997.
  2. Wharton, K. The Universe is not a Computer. In Questioning the Foundations of Physics; Aguirre, A., Foster, B., Merali, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 177–189.
  3. Arora, S.; Barak, B. Computational Complexity: A Modern Approach; Cambridge University Press: Cambridge, UK, 2009.
  4. Dirac, P.A.M. Classical theory of radiating electrons. Proc. R. Soc. A 1938, 167, 148–169.
  5. Wheeler, J.A.; Feynman, R.P. Interaction with the absorber as the mechanism of radiation. Rev. Mod. Phys. 1945, 17, 157.
  6. Wheeler, J.A.; Feynman, R.P. Classical electrodynamics in terms of direct interparticle action. Rev. Mod. Phys. 1949, 21, 425.
  7. Costa de Beauregard, O. Une réponse à l’argument dirigé par Einstein, Podolsky et Rosen contre l’interprétation bohrienne des phénomènes quantiques. C. R. Acad. Sci. 1953, 236, 1632–1634.
  8. Lewis, G.N. The nature of light. Proc. Natl. Acad. Sci. USA 1926, 12, 22–29.
  9. Wheeler, J.A. The “Past” and the “Delayed-Choice” Double-Slit Experiment. In Mathematical Foundations of Quantum Theory; Marlow, A., Ed.; Academic Press: Cambridge, MA, USA, 1978; pp. 9–48.
  10. Leifer, M.S.; Pusey, M.F. Is a time symmetric interpretation of quantum theory possible without retrocausality? Proc. R. Soc. A 2017, 473, 20160607.
  11. Pusey, M.F.; Barrett, J.; Rudolph, T. On the reality of the quantum state. Nat. Phys. 2012, 8, 475.
  12. Leifer, M.S. Is the Quantum State Real? An Extended Review of ψ-ontology Theorems. Quanta 2014, 3, 67–155.
  13. Price, H. Time’s Arrow & Archimedes’ Point: New Directions for the Physics of Time; Oxford University Press: Oxford, UK, 1997.
  14. Maudlin, T. What Bell did. J. Phys. A 2014, 47, 424010.
  15. Argaman, N. Bell’s theorem and the causal arrow of time. Am. J. Phys. 2010, 78, 1007–1013.
  16. Almada, D.; Ch’ng, K.; Kintner, S.; Morrison, B.; Wharton, K. Are Retrocausal Accounts of Entanglement Unnaturally Fine-Tuned? Int. J. Quantum Found. 2016, 2, 1–14.
  17. Feynman, R.P. The Character of Physical Law; The MIT Press: Cambridge, MA, USA, 1965.
  18. Wharton, K.; Argaman, N. Colloquium: Bell’s Theorem and Locally-Mediated Reformulations of Quantum Mechanics. Rev. Mod. Phys. 2020, 92, 021002.
  19. Wood, C.J.; Spekkens, R.W. The lesson of causal discovery algorithms for quantum correlations: Causal explanations of Bell-inequality violations require fine-tuning. New J. Phys. 2015, 17, 033002.
  20. Clauser, J.F.; Horne, M.A.; Shimony, A.; Holt, R.A. Proposed experiment to test local hidden-variable theories. Phys. Rev. Lett. 1969, 23, 880.
  21. Clauser, J.F.; Horne, M.A. Experimental consequences of objective local theories. Phys. Rev. D 1974, 10, 526.
  22. Popescu, S.; Rohrlich, D. Quantum nonlocality as an axiom. Found. Phys. 1994, 24, 379–385.
  23. Cirel’son, B.S. Quantum generalizations of Bell’s inequality. Lett. Math. Phys. 1980, 4, 93–100.
  24. Argaman, N. A Lenient Causal Arrow of Time? Entropy 2018, 20, 294.
  25. Linden, N.; Popescu, S.; Short, A.J.; Winter, A. Quantum Nonlocality and Beyond: Limits from Nonlocal Computation. Phys. Rev. Lett. 2007, 99, 180502.
  26. Pawłowski, M.; Paterek, T.; Kaszlikowski, D.; Scarani, V.; Winter, A.; Żukowski, M. Information causality as a physical principle. Nature 2009, 461, 1101.
  27. Navascués, M.; Wunderlich, H. A glance beyond the quantum model. Proc. R. Soc. A 2010, 466, 881–890.
  28. Aaronson, S. NP-complete problems and physical reality. ACM SIGACT News 2005, 36, 30–52.
  29. Castagnoli, G.; Cohen, E.; Ekert, A.K.; Elitzur, A.C. A Relational Time-Symmetric Framework for Analyzing the Quantum Computational Speedup. Found. Phys. 2019, 49, 1200–1230.
  30. Bachas, C.P. Computer-intractability of the frustration model of a spin glass. J. Phys. A 1984, 17, L709–L712.
  31. Barahona, F. On the computational complexity of Ising spin glass models. J. Phys. A 1982, 15, 3241–3253.
  32. Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000.
  33. Caves, C.M.; Fuchs, C.A.; Schack, R. Quantum probabilities as Bayesian probabilities. Phys. Rev. A 2002, 65, 022305.
  34. Cramer, J.G. Generalized absorber theory and the Einstein-Podolsky-Rosen paradox. Phys. Rev. D 1980, 22, 362.
  35. Aharonov, Y.; Vaidman, L. Complete description of a quantum system at a given time. J. Phys. A 1991, 24, 2315.