
Quantum States as Ordinary Information

Ken Wharton
Department of Physics and Astronomy, San José State University, San José, CA 95192-0106, USA
Information 2014, 5(1), 190-208; https://doi.org/10.3390/info5010190
Submission received: 21 January 2014 / Revised: 5 March 2014 / Accepted: 6 March 2014 / Published: 7 March 2014
(This article belongs to the Special Issue Physics of Information)

Abstract

Despite various parallels between quantum states and ordinary information, quantum no-go theorems have convinced many that there is no realistic framework that might underlie quantum theory, no reality that quantum states can represent knowledge about. This paper develops the case that there is a plausible underlying reality: one actual spacetime-based history, although with behavior that appears strange when analyzed dynamically (one time-slice at a time). By using a simple model with no dynamical laws, it becomes evident that this behavior is actually quite natural when analyzed “all-at-once” (as in classical action principles). From this perspective, traditional quantum states would represent incomplete information about possible spacetime histories, conditional on the future measurement geometry. Without dynamical laws imposing additional restrictions, those histories can have a classical probability distribution, where exactly one history can be said to represent an underlying reality.


1. Introduction

There are many parallels between quantum states and states of classical knowledge. A substantial number of quantum phenomena (including entanglement and teleportation) can be shown to have a strong analog in classical systems for which one has restricted knowledge [1], and the quantum collapse is strongly reminiscent of Bayesian updating upon learning new information [2]. These two well-cited papers alone provide numerous examples and arguments that link quantum information theory to classical information about quantum systems.
However, despite these connections between information/knowledge and quantum states, there has been little progress towards answering the deeply related “quantum reality problem” [3]: What is the underlying reality that quantum states represent knowledge about? If quantum states are information, what is the “informata”? Most research in quantum information has ducked this question, or denied the very possibility of hidden variables, likely due to strong no-go theorems [4,5,6]. Indeed, some have rejected the basic notion that information needs to be about anything at all, taking the “It from Bit” view that information can somehow be more fundamental than reality [7].
This paper will lay out the case that treating the quantum state as a state of knowledge does not require the “It from Bit” viewpoint. This will be accomplished by demonstrating that—despite the no-go theorems and conventional wisdom—quantum states can be recast as incomplete classical information about an underlying spacetime-based reality. Note that the specific meaning of “information”, as used in this paper, refers exclusively to an agent’s knowledge. Timpson [8] has carefully shown why this meaning is crucially distinct from the technical concept of Shannon Information, which is perhaps better termed “source compressibility” or “channel capacity” (in different contexts), and is a property of real sources or channels. Given that this paper sets to one side the alternate viewpoint that the quantum state comprises an element of reality (at least until the final conclusion), the conventional use of the word “information” is most appropriate for this discussion.
The conclusions below do not contradict the quantum no-go theorems. These theorems have indeed ruled out local hidden instructions; there can be no local hidden variables (LHVs) that determine quantum outcomes if the LHVs are independent of the (externally controlled) experimental geometry in that system’s future. Such a “no retrocausality” premise is a natural assumption for a system that solves a local dynamical equation, but fails for locally-interacting systems in general. If the LHVs are not merely instructions, but also reflect future measurement settings, the no-go theorems are not applicable. This is the often-ignored “retrocausal loophole”, which has been periodically noted as a way to restore a spacetime-based reality to quantum theory [9,10,11,12,13,14,15,16].
The main result of this paper is to show how non-dynamical models can naturally resolve the quantum reality problem using the “all-at-once”-style analysis of action principles (and certain aspects of classical statistical mechanics). This perspective naturally recasts our supposedly-complete information about quantum systems into incomplete information about an underlying, spacetime-based reality. As this paper is motivating a general research program, rather than a specific answer to the quantum reality problem, the below analysis will strive to be noncommittal as to the precise nature of the informata (or “beables”) of which quantum states might encode information. (Although it will be strictly assumed that the beables reside in spacetime; more on this point in Section 2.2.)
After some motivation and background in the next section, a simple model will then demonstrate how the all-at-once perspective works for purely spatial systems (without time). Then, applying the same perspective to spacetime systems will reveal a framework that can plausibly serve as a realistic account of quantum phenomena. The result of this analysis will be to dramatically weaken the “It from Bit” idea, demonstrating that realistic beables can exist in spacetime, no-go theorems notwithstanding—so long as one is willing to drop dynamics in favor of an all-at-once analysis. We may still choose to reject this option, but the mere fact that it is on the table might encourage us not to redefine information as fundamental—especially as it becomes clear just how poorly-informed we actually are.

2. Framework and Background

2.1. Newtonian vs. Lagrangian Schemas

Isaac Newton taught us some powerful and useful mathematics, dubbed it the “System of the World”, and ever since we’ve assumed that the universe actually runs according to Newton’s overall scheme. Even though the details have changed, we still basically hold that the universe is a computational mechanism that takes some initial state as an input and generates future states as an output.
Such a view is so pervasive that only recently has anyone bothered to give it a name: Lee Smolin now calls this style of mathematics the “Newtonian Schema” [17]. Despite the classical-sounding title, this viewpoint is thought to encompass all of modern physics, including quantum theory. This assumption that we live in a Newtonian Schema Universe (NSU) is so strong that many physicists can’t even articulate what other type of universe might be conceptually possible.
When examined critically, the NSU assumption is exactly the sort of anthropocentric argument that physicists usually shy away from. It is essentially the assumption that the way we solve physics problems must be the way the universe actually operates. In the Newtonian Schema, we first map our knowledge of the physical world onto some mathematical state, then use dynamical laws to transform that state into a new state, and finally map the resulting (computed) state back onto the physical world. This is useful mathematics, because it allows us to predict what we don’t know (the future), from what we do know (the past). But it is possible we have erred by assuming the universe must operate as some corporeal image of our calculations.
The alternative to the NSU is well-developed and well-known: Lagrangian-based action principles. These are perhaps more often thought of as a mathematical trick than as an alternative to dynamical equations, but the fact remains that all of classical physics can be recovered from action-extremization, and Lagrangian Quantum Field Theory is strongly based on these principles as well. This indicates an alternate way to do physics, without dynamical equations—deserving of the title “the Lagrangian Schema”.
Like the Newtonian Schema, the Lagrangian Schema is a mathematical technique for solving physics problems. One sets up a (reversible) two-way map between physical events and mathematical parameters, partially constrains those parameters on some spacetime boundary at both the beginning and the end, and then uses a global rule to find the values of the unconstrained parameters and/or a transition amplitude. This analysis does not proceed via dynamical equations, but rather is enforced on entire regions of spacetime “all at once”.
While it’s a common claim that these two schemas are equivalent, different parameters are being constrained in the two approaches. Even if the Lagrangian Schema yields equivalent dynamics to the Newtonian Schema, the fact that one uses different inputs and outputs for the two schemas (i.e., the final boundary condition is an input to the Lagrangian Schema) implies they are not exactly equivalent. And conflating these two schemas simply because they often lead to the same result is missing the point: These are still two different ways to solve problems. When new problems come around, different schemas suggest different approaches. Tackling every new problem in an NSU (or assuming that there is always a Newtonian Schema equivalent to every possible theory) will therefore miss promising alternatives.
Given the difficulties in finding a realistic interpretation of quantum phenomena, it’s perhaps worth considering another approach: looking to the Lagrangian Schema not as equivalent mathematics, but as a different framework that can be altered to generate physical theories not available to Newtonian Schema approaches. At first pass, the Lagrangian Schema does indeed seem to naturally solve various perplexing features of NSU-style quantum theory [18], but these are mostly big-picture arguments. The subsequent sections will indicate how this might work in practice.

2.2. Previous Work

In physical models without dynamical laws, everything must be kinematics. Instead of summarizing a system as a three-dimensional “state” (subject to temporal dynamics), the relevant system is now the entire four-dimensional “history”. To solve the quantum reality problem, then, there must be exactly one (fine-grained) history that actually occurs. Crucially, the history should not be additionally constrained by dynamical laws, or one effectively reverts back to the Newtonian Schema, even for a history-based analysis.
In quantum foundations, analyzing histories without dynamics is uncommon but certainly not unheard of; several different research programs have pursued this approach. Still, in seemingly every one of these programs, the history-analysis is accompanied with a substantial modification to (A) ordinary spacetime, or (B) ordinary probability and logic. Looking at previous research, one might conclude that it is not the Lagrangian Schema analysis that resolves problems in quantum foundations, but instead one of these other dramatic modifications. But such a conclusion is incorrect; the central point of this paper is that an all-at-once framework can naturally resolve all of the key problems without requiring any changes to (A) or (B).
Before turning to a brief summary of such research programs, it should be noted that any approach with an ontology that solves dynamical equations (such as the standard quantum wavefunction) will fall under the Newtonian Schema and will be subject to the no-go theorems; they will not have a spacetime-based resolution to the quantum reality problem. Even stochastic deviations from dynamical equations fall under the Newtonian Schema, as the future state is determined from the past state in addition to random inputs. Such approaches are not the target of this paper, even those that treat solutions to the dynamical equations as entire histories. (The point of this paper is to show that realistic beables can exist in spacetime, and that the quantum state can encode information about these beables; not that all other approaches must be wrong.) Therefore, history-based analyses of deterministic Bohmian trajectories and/or stochastic Ghirardi-Rimini-Weber (GRW) theories [19] are not the sort of theory with which we are presently concerned.
Furthermore, any ontology built from the standard wavefunction is effectively a dramatic modification to spacetime (A), because multiparticle wavefunctions (or field functionals, in the case of quantum field theory) do not reside in ordinary spacetime. (They instead reside in a higher-dimensional configuration space.) Even setting aside standard Bohmian [20] and Everettian [21] interpretations of quantum theory (which treat the wave function as ontological, not epistemic), there are other wavefunction-based approaches that (arguably) have some non-dynamical element (including GRW-style flash ontologies [22,23,24], Cramer’s Transactional Interpretation [11] and the Aharonov-Vaidman two-state approach [25]). Even if these approaches somehow argued that they did not take the standard wavefunction to be ontological, their beables are still clearly functions on configuration space, not spacetime (A).
Without the wavefunction as an ontological element, there are still several history-based approaches in the literature. Griffiths’ “Consistent Histories” framework [26] is one example, although it is not a full solution to the quantum reality problem, as there are many cases where no consistent history can be found. Also, there is never one fine-grained history that can be said to occur. Gell-Mann and Hartle have recently [27] attempted to resolve these problems, but in the process they modify probabilistic logic (B), enabling the use of negative probabilities. (Tellingly, they also demand that the fine-grained histories take the form of particle trajectories, even if the future measurement involves path interference.)
Several history-based approaches have been championed by Sorkin and colleagues. A research program motivated and based upon the path integral [28] is of particular relevance, although it is almost always presented in the context of a non-classical logic (B). (Not all such work falls in this category; one notable exception is a recent preprint by Kent [29]). This path-integral analysis is rather separate from Sorkin’s causal set program [30], which seeks to discretize spacetime in a Lorentz-covariant manner. While this is also history-based, it clearly modifies spacetime (A).
Another approach by Stuckey and Silberstein (the Relational Blockworld [31]) is strongly aligned against the Newtonian Schema, and the all-at-once aspect is central to that program. But again, this history-based framework comes with a severe modification of spacetime (A), in that the Relational Blockworld replaces spacetime with a discrete substructure. It is therefore unclear to what extent this program resolves interpretational questions via the non-existence of ordinary spacetime rather than simply relying on the features of all-at-once analysis.
Finally, an interesting approach that maps the standard quantum formalism onto a more time-neutral framework is recent work by Leifer and Spekkens [32]. Notably, it explicitly allows updating one’s description of the past upon learning about future events. Because the quantum conditional states defined in this work are clearly analogous to states of knowledge rather than states of reality, the fact that they exist in a large configuration space is not problematic (and indeed there is a strong connection to work built on spacetime-based beables [15]). Still, perhaps because of the lack of realistic underlying beables, the logical rules required to extract probabilities from these states differ somewhat from classical probability theory (B).
Any of the above research programs may turn out to be on the right track; after all, there is no guarantee that the beables that govern our universe do exist in spacetime. But the fact that they all modify spacetime (or logic) has thoroughly obscured a crucial point: A history-based analysis, with no dynamical restrictions, need not modify spacetime or logic to resolve the quantum reality problem, even taking the no-go theorems into account. The next sections will demonstrate how an “all at once” approach can provide a realistic underlying framework for quantum theory, and then will interpret quantum states in this light as encoding ordinary information about underlying spacetime-local beables.

3. The Partial-Knowledge Ising Model

Analyzing systems without the use of dynamical equations may be counter-intuitive, but it is done routinely; one can analyze 3D systems for which there are no dynamics, by definition. A particularly useful approach is found in classical statistical mechanics (CSM), because in that case one never knows the exact microscopic details, providing a well-defined framework in which to calculate probabilities that result from partial knowledge.
A specific example of this CSM-style logic can be found in the classical Ising model. Specifically, one can imagine agents with partial knowledge about the exact connections of the Ising model system, and then reveal knowledge to those agents in stages. The resulting probability-updates will be seen to have exactly the features thought to be impossible in dynamical systems. The fact that this updating is natural in a classical, dynamics-free system such as the Ising model will demonstrate that it is also natural in a dynamics-free (“all at once”) analysis of spacetime systems.
The classical Ising model considered here is very small to ensure simplicity: 3 or 4 lattice sites labeled by the index j, with each site associated with some discrete variable σⱼ = ±1. The total energy of the system is taken to be proportional to
H(σ) = −∑_{⟨ij⟩} σᵢσⱼ ,    (1)
where the sum is over pairs ⟨ij⟩ of neighboring lattice sites. If the system is known to be in equilibrium at an inverse temperature β = (k_B T)⁻¹, the (joint) probability of any complete configuration σ is proportional to exp[−βH(σ)]. These relative probabilities can be transformed into absolute probabilities by the usual normalization procedure, dividing by the partition function
Z = ∑_σ e^{−βH(σ)} ,    (2)
where the sum is over all allowable configurations. Therefore, the probability for any specific configuration is given by
P(σ) = e^{−βH(σ)} / Z .    (3)
Several obvious features should be stressed here, for later reference. Most importantly, there is exactly one actual state of the system σ at any instant; other configurations are possible, but only one is real. This implies that (at any instant) the probabilities P(σ) are subjective, in that they represent degrees of belief but not any feature of reality. Therefore, one could learn information about the system (for example, that σ₁ = +1) that would change P(σ) without changing the system itself. (Indeed, learning such knowledge would also change Z, as it would restrict the sum to only global configurations where σ₁ = +1.) Finally, note that the probabilities P(σ) do not take the form of local conditional probabilities, but are more naturally viewed as joint probabilities over all lattice points that make up the complete configuration σ.
The below analysis will work for any finite value of β, but it is convenient to take e^{2β} = 2 (i.e., β = ln(2)/2). At this temperature, for a minimal system of two connected lattice sites, the lattice values are twice as likely to have the same sign as they are to have opposite signs: P(σ₁ = +1, σ₂ = +1) = 2P(σ₁ = +1, σ₂ = −1).
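To see why, note that the single bond contributes H = −σ₁σ₂ = ∓1 depending on whether the two signs agree, so Equation (3) immediately gives

P(σ₁ = +1, σ₂ = +1) / P(σ₁ = +1, σ₂ = −1) = e^{+β} / e^{−β} = e^{2β} = 2 .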
For a simple lattice that will prove particularly relevant to quantum theory, consider Figure 1a. Each circle is a lattice site, and the lines show which sites are adjacent. With only three lattice sites, and knowledge that σ₁ = +1, it is a simple matter to calculate the probabilities P(σ₂, σ₃). For β = ln(2)/2, the above equations yield probabilities:
σ₂    σ₃    P_1a(σ₂, σ₃)
+1    +1    4/9
+1    −1    2/9
−1    +1    2/9
−1    −1    1/9
Figure 1. Two geometries of an Ising model; in both cases one knows the bottom lattice site is σ₁ = +1. A dashed box indicates the subsystem of interest. The central example is where one does not know whether the geometry is that of Figure 1a or 1b.
For Figure 1b, the probabilities require a slightly more involved calculation, because the fourth lattice site changes the global geometry (from a 1D to a 2D Ising model). In this case, the joint probability P(σ₂, σ₃) can be found by first calculating P(σ₂, σ₃, σ₄) and then summing over both possibilities σ₄ = ±1. This process yields different probabilities:
σ₂    σ₃    P_1b(σ₂, σ₃)
+1    +1    20/41
+1    −1    8/41
−1    +1    8/41
−1    −1    5/41
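Both tables can be verified by brute-force state counting. The following minimal sketch (Python, written for this discussion; the function name `joint` and its signature are illustrative, not from the paper) enumerates every allowed configuration, weights each bond by the aligned-to-anti-aligned ratio e^{2β} = 2 assumed above, and normalizes by the partition function:

```python
from itertools import product
from fractions import Fraction

def joint(bonds, sites, known):
    """Relative weight 2 per aligned bond and 1 per anti-aligned bond
    (the e^(2*beta) = 2 convention above), marginalized onto the
    subsystem of interest (sigma_2, sigma_3)."""
    dist = {}
    free = [s for s in sites if s not in known]
    for values in product([+1, -1], repeat=len(free)):
        sigma = dict(known)
        sigma.update(zip(free, values))
        weight = Fraction(1)
        for i, j in bonds:
            weight *= 2 if sigma[i] == sigma[j] else 1
        key = (sigma[2], sigma[3])
        dist[key] = dist.get(key, Fraction(0)) + weight
    Z = sum(dist.values())               # partition function, Equation (2)
    return {k: w / Z for k, w in dist.items()}

# Figure 1a: three sites, bonds 1-2 and 1-3, with sigma_1 known to be +1
print(joint([(1, 2), (1, 3)], [1, 2, 3], {1: +1}))
# -> probabilities 4/9, 2/9, 2/9, 1/9 (as Fractions)

# Figure 1b: a fourth site adjacent to sites 2 and 3; sigma_4 is summed over
print(joint([(1, 2), (1, 3), (2, 4), (3, 4)], [1, 2, 3, 4], {1: +1}))
# -> probabilities 20/41, 8/41, 8/41, 5/41 (as Fractions)
```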
The most interesting example is a further restriction where an agent does not know whether the actual geometry is that of Figure 1a or 1b. Specifically, one knows that σ₁ = +1, and that σ₂ and σ₃ are both adjacent to σ₁. But one does not know whether σ₄ is also adjacent to σ₂ and σ₃ (Figure 1b), or whether σ₄ is not even a lattice site (Figure 1a).
This is not to say there is no fact of the matter; this example presumes that there is some particular geometry—it is merely unknown. This is not quite the same as the unknown values of a lattice site (which also have some particular state at any given instant), because the Ising model provides no clues as to how to calculate the probability of a geometry. All allowable states may have a known value of exp[−βH(σ)] from Equation (1), but without knowledge of which states are allowable one cannot calculate Z from Equation (2), and therefore one cannot calculate probabilities in the configuration space P(σ₂, σ₃).
The obvious solution to such a dilemma is to use a configuration space conditional on the unknown geometry G = 1a or G = 1b, assigning probabilities to P(σ₂, σ₃ | G). Indeed, this has already been done in the above analysis using the notation P_G(σ₂, σ₃). Note that one cannot include G in the configuration space as P(σ₂, σ₃, G) because (unlike P_G) such a distribution would have to be normalized, and there is no information as to how to apportion the probabilities between the G = 1a and G = 1b cases.
Given this natural and (presumably) unobjectionable response to such a lack of knowledge, the next section will explore a crucial mistake that would lead one to conclude that no underlying reality exists for this CSM-based model, despite the fact that an underlying reality does indeed exist (by construction). Then, by applying the above logic to a dynamics-free scenario in space and time, one can find a strong analogy to probabilities in quantum theory with an unknown future measurement.

4. Implications of the Model

4.1. The Independence Fallacy

In the above partial-knowledge Ising model, the subsystem of interest is the dashed box that comprises the lattice sites σ₂ and σ₃. (In any physical system, one might reasonably want to analyze a portion of it, independent from the rest.) The cost of this analysis, however, is that the mathematical structure over which a (limited-knowledge) agent can ascribe probabilities is the conditional configuration space P_G(σ₂, σ₃) rather than a “pure” configuration space over all possibilities P(σ₂, σ₃).
It should be clear that a serious fallacy would result if one were to demand that probabilities should be assigned to such a pure configuration space. Such a demand would be natural if one thought that the probabilities ascribed to the subsystem should be independent of the external geometry, but would lead to the false equation P_1a = P_1b. This mistake will be termed the “Independence Fallacy”. The explicitly calculated distributions P_1a and P_1b above reveal that they are not the same, and therefore this fallacy would lead to contradictions.
It is useful to imagine a confused agent who for some reason tried to build a theory around the (mistaken) premise that one should always be able to ascribe a master probability distribution P(σ₂, σ₃) that was independent of geometry. Such an agent would quickly find that this was an impossible task, and that there was no such classical probability distribution over the realistic microstates of this subsystem. Given the Independence Fallacy, the analysis of the confused agent might go like this: The geometry of Figure 1a implies a 5/9 probability of σ₂ = σ₃, while the geometry of Figure 1b implies a 25/41 probability. But since this value must be independent of the geometry, the question “Does σ₂ = σ₃?” cannot be assigned a coherent probability. And if it cannot be answered, such a question should not even be asked.
This, of course, is wrong: such a question can be asked in this model, but the answer depends on the geometry. It is the Independence Fallacy which might lead to a denial of an underlying reality, and the ultimate culprit (in this case, at least) would be the attempt to describe a subsystem independently from the entire system.

4.2. Information-Based Updating

Without the Independence Fallacy, it is obvious how the conditional probabilities P(σ₂, σ₃ | G) should be updated upon learning new information. For example, if an agent learned that the geometry was in fact that of Figure 1b, a properly-updated description would simply be P_1b. There would be no reason to retain P_1a as a partial description of the system; it would merely represent probabilities of a counterfactual geometry. Discarding this information is clearly Bayesian updating, not a physical change in the system. (Further updating would occur upon learning the actual value of a lattice site.)
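As a concrete sketch of this two-stage updating (again Python written for this discussion; the helper `condition` is illustrative, not from the paper), learning the geometry selects P_1b outright, and subsequently learning a lattice value renormalizes within it:

```python
from fractions import Fraction

# The joint distribution P_1b(sigma_2, sigma_3) tabulated above
P1b = {(+1, +1): Fraction(20, 41), (+1, -1): Fraction(8, 41),
       (-1, +1): Fraction(8, 41), (-1, -1): Fraction(5, 41)}

def condition(dist, index, value):
    """Bayesian update: discard configurations inconsistent with the newly
    learned lattice value, then renormalize what remains."""
    kept = {k: p for k, p in dist.items() if k[index] == value}
    norm = sum(kept.values())
    return {k: p / norm for k, p in kept.items()}

# Stage 1: learning the geometry is Figure 1b selects P_1b outright.
# Stage 2: learning sigma_2 = +1 renormalizes within P_1b.
print(condition(P1b, 0, +1))  # sigma_3 = +1 with probability 5/7, -1 with 2/7
```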
The central point is that some information-updating naturally occurs when one learns the geometry of the model, even without any revealed lattice sites. And because CSM is a realistic model (with some real, underlying state), the information updating has no corresponding feature in objective reality. Updating is a subjective process, performed as some agent gains new information.

4.3. Introducing Time

The above Ising model was defined as a static system in two spatial dimensions. The only place that time entered the analysis was in the updating process in the previous subsection, but this subjective updating was not truly “temporal”, as it had no relation to any time in the system. Indeed, one could give different agents information in a different logical order, leading to different updating. Both orders would be unrelated to any temporal evolution of the system (which is static by definition).
Still, an objective time coordinate can be introduced in a trivial manner: simply redefine the model such that one of the spatial axes in Figure 1 represents time instead of space. Specifically, suppose that the vertical axis is time (past on the bottom, future on the top). It is crucial not to introduce dynamics along with time; one point of the model was to show how to analyze systems without dynamics. This analysis has already been performed, and does not change. The dashed box in Figure 1 now represents a space-time subsystem (i.e., one instant in time), and the same state-counting logic will lead to exactly the same probabilities as the purely spatial case.
One might be tempted to propose reasons why this space-time model is fundamentally different from the original space-space model, perhaps assuming the existence of dynamical laws. Such laws would break the analogy, but they are not part of the model. One might also argue that the use of H in Equation (1) embedded a notion of time even in the spatial case, but Equation (3) can be written entirely in terms of the unitless parameters β and σⱼ, not requiring any notion of a Hamiltonian. After all, the equations of CSM result from assigning equal a priori probabilities to all possible microstates; one can get similar equations in a spacetime framework by simply assigning equal a priori probabilities to all possible temporally-extended microstates (or more intuitively, “microhistories”) [16].
Whether or not the analogy can be broken, the previous section is an existence proof that such a system could be analyzed in this manner, which is all that is needed for the below conclusions. It is logically possible to assign a relative probability to each spacetime-history, normalize these probabilities using the entire allowed space of microhistories, and then make associated predictions. If one did so, it would be natural to update one’s probabilities of an instantaneous subsystem when one learned about the experimental geometry in that subsystem’s future.
The unusual features of the above partial-knowledge Ising model should now have a clear motivation. For an agent not to know the spatial geometry (Figure 1a vs. Figure 1b) would typically be an artificial restriction. But it is quite natural not to know the future, and if the vertical axis represents time, it is more plausible that an agent might be uncertain as to whether σ₄ would exist. However, this does not break the analogy, either. While we tend to learn about things in temporal order, it’s not a formal requirement; we can film a movie of a system and analyze it backwards, or even have spatial slices of a system delivered to us one at a time. The link between information-order and temporal-order is merely typical, not a logical necessity.
In a space-time context, it is also more understandable how one might fall into the Independence Fallacy in the first place. If we expect the future to be generated from the past via the Newtonian Schema, then we would also expect the probabilities we assign to the past to be independent of the future experimental geometry. But without dynamical laws, this is provably incorrect. If we assign every microhistory a probability, the conditional probabilities P(σ₂, σ₃ | G) that made sense in the spatial case also make sense in the temporal case. When we learn about the experimental geometry of the future, such an analysis would update our probabilistic assessment of the past.

5. Quantum Reality

From the perspective of the above analysis, as applied to systems in both space and time, the various quantum no-go theorems are effectively making the Independence Fallacy. Specifically, they assume that any probabilistic description of the present must be independent of the future experimental geometry. In some discussions this assumption is not even explicit, with authors implicitly assuming that one obviously should not update one’s (past) probabilities upon learning about the future measurement setting [33].
However, what may seem (to some) to be obvious in a space-time setting is provably wrong in the above space-space setting, where the probabilities of a subsystem do depend on the external geometry. It is also provably wrong in a space-time setting of connected lattice points where the probabilities are given by Equation (3) (as opposed to dynamical laws). Therefore it should at least give one pause to make such an independence assumption in general. Arguing that standard quantum mechanics is governed by dynamical time evolution does not void this analysis; research programs that attempt to solve the quantum reality problem are exploring alternatives, and those alternatives need not in general utilize dynamical laws (as in some examples from Section 2.2).

5.1. Double Slit Experiment

To begin with a simple example, consider the delayed-choice double-slit experiment, depicted in Figure 2. Here a source (at the bottom) generates a single photon that passes up through a pair of slits. When lenses are used to form an image of the slits on a detector (as in Figure 2a), one always finds that the photon passed through one slit or the other.
However, it appears that each photon passes through both slits if one considers the experiment in Figure 2b. Here a screen records the interference pattern produced by waves passing through both slits, built up one photon at a time. In the many-photon limit, this pattern is explicable only if a wave passes through both slits and interferes. Since each individual photon conforms to this pattern (not landing in dark fringes), the most obvious conclusion is that each photon somehow passes through both slits.
Figure 2. Two geometries of a double slit experiment, in which a single photon passes through a pair of slits. (The vertical axis is performing double-duty as both time and a second spatial axis.) (a) Lenses and (black) detectors measure which slit the photon passes through; (b) A screen records a photon that contributes to a two-slit interference pattern.
Where reality seems to fail here is in the description of the photon at the slits—one instant of the full spacetime diagram. In Figure 2a the photon seems to go through only one slit; in Figure 2b it seems to go through both. And since the status of the photon at the slits is “obviously” independent of the future experimental geometry, it follows that the actual location(s) of the photon-wave at the slits cannot be assigned a coherent probability.
Except that this is exactly the Independence Fallacy! Compare Figure 1 to Figure 2; they are quite analogous. In Figure 1a and Figure 2a the right and left branches stay separate; in Figure 1b and Figure 2b the geometry begins in the same way, but then allows recombination. Following the above logic, avoiding the Independence Fallacy allows a coherent underlying reality for the double-slit experiment.
While a generic realistic theory may have a quite complicated description of what is happening at the slits, the key parameter for our purposes is N, the number of slits that any beables pass through (classical EM fields, etc.; the details are irrelevant so long as one avoids a particle ontology for which N = 1 by definition). In a realistic model, it must be that N = 1 or N = 2 in any given case, although since this value is hidden from an external agent, the agent will have a probability distribution P(N) over these two possible values.
Utilizing the above analysis, the comparison to the partial-knowledge Ising model reveals a natural solution to the problem of defining P(N). The answer is that the appropriate quantity for an agent who does not know the future experimental geometry G is P(N | G) (or equivalently P_G(N)). Specifically, in this case we have P_2a(N = 1) = P_2b(N = 2) = 1 and P_2a(N = 2) = P_2b(N = 1) = 0. Upon learning the future geometry, an agent would update her assessment of the past probabilities, as above. In this case, N becomes certain, and one has a realistic description of the experiment. (For Figure 2b, it is perfectly realistic to have a wave go through both slits.) If one avoids the Independence Fallacy and does not try to enforce P_2a = P_2b, there are no logical problems with this analysis.
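In code, this bookkeeping is almost trivial, which is rather the point; the following fragment (illustrative names only) mirrors the conditional tables used for the Ising model:

```python
# Credences about N (the number of slits the beables traverse), stored
# conditionally on the future geometry G, exactly like P_G(sigma_2, sigma_3).
P_N_given_G = {
    "2a": {1: 1.0, 2: 0.0},  # which-slit geometry: beables use one slit
    "2b": {1: 0.0, 2: 1.0},  # interference geometry: beables use both slits
}

# Learning the future geometry is Bayesian updating: the counterfactual
# branch is simply discarded, with no physical change to the system.
print(P_N_given_G["2b"])  # {1: 0.0, 2: 1.0}
```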

5.2. Bell Inequality Violations

When analyzing experiments that exhibit Bell-inequality violations, the Independence Fallacy takes the form of “Measurement Independence” [34], where one assumes that the probability distribution over any (past) hidden variables λ must be independent of future measurement settings. But measurement settings are just a type of geometry (consider the difference between Figure 2a,b), so this assumption takes the form of P(λ | G) = P(λ | G′) for all possible future measurement geometries (G, G′), and this is indeed the above-discussed Independence Fallacy.
Even without committing to a particular ontology, it’s possible to see how the all-at-once approach resolves entanglement-based no-go theorems (assuming a spacetime-based ontology that at least locally obeys T-symmetry). Consider the two different experimental geometries shown in Figure 3. In Figure 3a, a source E prepares an entangled pair of particles, sends one to Alice (A), and one to Bob (B); both Alice and Bob freely choose measurements to perform on the particles they receive. In Figure 3b, Alice prepares a single particle; that particle is transmitted to a device T that performs some particular transformation on the particle, and the transformed output is then sent to Bob who performs a final measurement.
Figure 3. Two dual experiments, with measurements by Alice (A) and Bob (B). For known measurement settings, there is always a duality between any entangled source (E) and some particular intermediate transformation (T).
Given the stark differences between these two experiments in the standard quantum framework, it may come as a surprise that for any entanglement process E, there is always a corresponding transformation T that will make the predicted correlations between Alice and Bob identical (between Figure 3a and Figure 3b). This duality has been worked out in detail [15] (also see [32,35,36]), and the fact that the connection between E and T requires knowledge of the measurement settings (i.e., the observables at A and B) should not be surprising in light of the above arguments. (In one particularly simple example, if E prepares two spin-1/2 particles in the entangled state (|00⟩ + |11⟩)/√2, and A and B are both spin measurements in the x-z plane, the corresponding transformation T is merely the identity.)
This duality between Figure 3a and Figure 3b allows one to transform any all-at-once account of a single-particle experiment (Figure 3b) to an equivalent all-at-once account of an experiment with entanglement (Figure 3a). All that needs to be done is to time-reverse the left half of the Figure 3b ontology (which can always be done for a realistic history if the beables exist in spacetime). Crucially, the all-at-once analysis applies to the entire history, so while a dynamical explanation would not survive this transformation, an all-at-once analysis does. Unless a given no-go theorem applies to single-particle experiments (and the Bell inequality violations do not), one can then use this duality to turn any time-neutral LHV explanation of a single-particle experiment into a corresponding time-neutral LHV explanation of the entangled dual experiment [15].
As an illustrative example, consider Schulman’s ansatz that the Bloch-sphere vector of a spin-1/2 system might undergo an anomalous rotation of a net angle α with a probability proportional to W(α) = (α² + γ²)⁻¹ [37]. (Here γ is a small arbitrary angle.) Consider the case of Figure 3b with T as the identity transformation; Alice and Bob simply make consecutive spin measurements on a single spin-1/2 particle, each in some particular direction in the x-z plane. In accordance with the Lagrangian Schema, it is consistent to have both the preparation and the measurement act as external boundary conditions that force the spin-state to be aligned with the angle chosen by the measuring apparatus (or possibly anti-aligned, in the case of measurement). Then, for an angle θ between Alice and Bob’s settings, the net anomalous rotation between measurements must be either θ or π + θ, corresponding to Bob’s two possible outcomes. By considering all possible histories with those anomalous rotations (including rotations larger than 2π), this implies
P(θ) / P(π + θ) = [∑_{n=−∞}^{∞} W(2nπ + θ)] / [∑_{n=−∞}^{∞} W(2nπ + π + θ)] = [cos²(θ/2) + sin²(θ/2) tanh²(γ/2)] / [sin²(θ/2) + cos²(θ/2) tanh²(γ/2)] .    (4)
In the limit that γ → 0, this obviously reduces to the standard Born rule probabilities, and since γ is an arbitrary parameter, Schulman’s ansatz can be made arbitrarily close to the Born rule. (For an expanded analysis, see [16].)
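Equation (4) is also easy to check numerically. The following sketch (Python, written for this discussion; the truncation level nmax is an arbitrary choice) compares a truncated version of the infinite sum against the closed form quoted above:

```python
import math

def W(alpha, gamma):
    # Schulman's ansatz: relative weight of a net anomalous rotation alpha
    return 1.0 / (alpha**2 + gamma**2)

def outcome_weight(theta, gamma, nmax=100000):
    # truncated version of the infinite sum in Equation (4)
    return sum(W(2*n*math.pi + theta, gamma) for n in range(-nmax, nmax + 1))

theta, gamma = 1.0, 0.05
lhs = outcome_weight(theta, gamma) / outcome_weight(math.pi + theta, gamma)
t2 = math.tanh(gamma / 2)**2
rhs = (math.cos(theta/2)**2 + math.sin(theta/2)**2 * t2) / \
      (math.sin(theta/2)**2 + math.cos(theta/2)**2 * t2)
print(lhs, rhs)  # agree to roughly six digits at this truncation
print(math.cos(theta/2)**2 / math.sin(theta/2)**2)  # Born-rule ratio (gamma -> 0)
```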
Schulman did not apply his ansatz to cases of entanglement, but using the above duality one can easily generate a realistic model of a Bell-inequality-violating experiment. Switching to Figure 3a, the equivalent entangled state E predicted by standard quantum mechanics is (|00⟩ + |11⟩)/√2, but this is a nonlocal state, with no spacetime representation. Instead, using the duality, one need only utilize a single spin state for each particle, classically correlated at the source E but nowhere else. (Since T was the identity, the duality merely requires the two particles to be in the same state at E.) The previous ontology, of a spin state undergoing a net anomalous rotation somewhere between Alice and Bob, follows through from Figure 3b to Figure 3a; there is now an anomalous rotation continuously distributed across both particles, of a net angle θ between Alice’s and Bob’s measured spin directions. The relative probability of such a rotation is the same as before, W(θ).
The rest of the analysis is straightforward. Alice has a spin measurement setting at some angle α (in the x-z plane), and Bob has a spin measurement setting at some angle β (in the x-z plane). The two possible outcomes for Bob (constrained as before) are now β (b = 1) or π + β (b = 2). Alice has lost some control compared to Figure 3b (see [38] for details), but the same constraints as Bob imply that her spin-state will be measured at α (a = 1) or π + α (a = 2). The correlation between Alice and Bob’s outcomes can then be calculated according to
P(a = b) − P(a ≠ b) = [∑_{n=−∞}^{∞} W(2nπ + α − β) − ∑_{n=−∞}^{∞} W(2nπ + π + α − β)] / [∑_{n=−∞}^{∞} W(2nπ + α − β) + ∑_{n=−∞}^{∞} W(2nπ + π + α − β)] .    (5)
In the γ → 0 limit this becomes simply cos(α − β), which violates the Bell inequalities exactly as would be expected from the traditional entangled state.
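For example, evaluating Equation (5) at the standard CHSH angles shows the violation directly; in this sketch (again Python, with an arbitrary truncation and a small but nonzero γ), the CHSH combination approaches 2√2 ≈ 2.83, above the local-hidden-instruction bound of 2:

```python
import math

def S(theta, gamma, nmax=100000):
    # truncated sum over all net rotations equivalent to theta (mod 2*pi)
    return sum(1.0 / ((2*n*math.pi + theta)**2 + gamma**2)
               for n in range(-nmax, nmax + 1))

def E(alpha, beta, gamma=1e-3):
    # Equation (5): correlation between Alice's and Bob's outcomes
    same, diff = S(alpha - beta, gamma), S(math.pi + alpha - beta, gamma)
    return (same - diff) / (same + diff)

# Standard CHSH settings (angles in the x-z plane)
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
chsh = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(chsh)  # ~2.83 > 2, matching cos(alpha - beta) in the gamma -> 0 limit
```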
Notice that this resolution still permits Alice and Bob to freely choose their measurement settings. Their decision lies outside the system considered in Figure 3, and their choices are effectively external (final) boundaries on the system of interest. The LHVs, on the other hand, are in the system, and constrained by the solution space of possible histories. This (retrocausal) resolution is therefore strikingly distinct from “superdeterminism”, an alternate conspiratorial approach that requires Alice’s and Bob’s decisions to somehow be caused by past hidden variables (presumably via Newtonian Schema processes). Experimental proposals to rule out superdeterminism [39] therefore have little to say about Lagrangian-Schema retrocausal models, where the past hidden variables are direct consequences of future constraints, rather than circuitous causes of future decisions.

5.3. Other Theorems

Most other no-go theorems work in the same manner as above; as very few of them apply to non-entangled states, there is no barrier to a realistic all-at-once account that resolves the dual experiment of Figure 3b. Such an account can then always be translated into the entangled case of Figure 3a, so long as the future measurement settings are known and accounted for. The most notable case of a no-go theorem applying to a non-entangled state is that of Kochen-Specker contextuality [5]. But this theorem merely proves that for a realistic account of quantum mechanics, different measurement procedures must imply a different preceding state of reality. In other words, Kochen-Specker contextuality is the failure of the Independence Assumption. (In the example in the previous section, a different future measurement setting would indeed correspond to a different spin state immediately before measurement.)
Another interesting case is the recent Pusey-Barrett-Rudolph (PBR) theorem [6], which has no uncertainty about the future measurement setting, but explicitly assumes that knowledge of this setting has not led to any updating of an initial hidden state. One of the two key premises of the PBR theorem is that the initial states of the two systems are completely uncorrelated, even at the level of hidden variables. But we’ve seen in the case of the double slit experiment that once an agent updates on a future measurement setting, this can inform details of a correlation in the past, even for a spatially-distributed beable like an electric field. Another way to think about PBR is as the time-reverse of a typical entanglement experiment. Since state-counting is explicitly a time-neutral analysis, past correlations can arise in PBR for exactly the same reason that future correlations arise in entanglement experiments.

6. Lessons for the Quantum State

While the analysis in the preceding sections can inform a realistic interpretation of quantum theory, it also implies that assigning probabilities to instantaneous configuration spaces is not the most natural perspective. The above CSM-style analysis is performed on the entire system “at once”, rather than one time-slice at a time, in keeping with the spirit of the Lagrangian Schema discussed in Section 2. The implication is that the best way to understand quantum phenomena is not to think in terms of instantaneous states, but rather in terms of entire histories.
Restricting one’s attention to instantaneous subsystems is necessary if one would like to compare the above story to that of traditional quantum theory, but it is certainly not necessary in general. Indeed, general relativity tells us that global foliations of spacetime regions into spacelike hypersurfaces may not even always be possible or meaningful. Moreover, if the foliation is subjective, the objective reality lies in the 4D spacetime block. It is here where the histories reside, and to be realistic, one particular microhistory must really be there—say, some multicomponent covariant field μ(x, t). A physics experiment is then about learning which microhistory μ actually occurs, via information-based updating; we gain relevant information upon preparation, measurement setting, and measurement itself.
Given this notion of an underlying reality, one can then work backwards and deduce what the traditional quantum state must correspond to. Consider a typical experimenter as an agent informed about a system’s preparation, but uninformed about that system’s eventual measurement. With no information about the future measurement geometry, the agent would like to keep track of all possible eventual measurements, or P(μ(x, t) | G) for all possible values of G (including all possible times when measurement(s) will be made). This must correspond to the quantum state, |ψ(t)⟩.
We can check this result in a variety of ways. First of all, as the number of possible measurements increases, the dimensionality of the quantum state must naturally increase. This is correct, in that as particle number increases, the number of experimental options also increases (one can perform different measurements on each particle, and joint measurements as well). This increase in the space of G with particle number therefore explains the increase of the dimensionality of Hilbert space with particle number.
This perspective also naturally explains the reduction of the Hilbert space dimension for the case of identical particles. For two distinguishable particles, the experimenter can make a choice between (A,B)—measurement A on particle 1 and measurement B on particle 2—or (B,A), the opposite. But for two identical particles, the space of G (and the experimenter’s freedom) is reduced; there is no longer a difference between (A,B) and (B,A). And with a smaller space of G, the quantum state need not contain as much information, even if it encodes all possible future measurements.
Further insights can be gained by considering the (standard, apparent) wavefunction collapse when a measurement is finally made. Even if the experimenter refuses to update their prior probabilities upon learning the choice of measurement setting (the traditional mistake, motivated by the Independence Assumption), it would be even more incoherent for that agent not to update upon gaining direct new information about μ itself. So we do update our description of the system upon measurement, and apply a discontinuity to |ψ(t)⟩.
However, through the above analysis, it should be clear that no physically discontinuous event need correspond to the collapse of ψ. After all, when one learns about the value of a lattice site in the Ising model, one simply updates one’s assignment of configuration-space probabilities; no one would suggest that this change of P(σ) corresponds to a discontinuous change of the classical system. (The system might certainly be altered by the measurement procedure, but in a continuous manner, not as a sudden collapse.) Similarly, for a history-counting derivation of probabilities in a spacetime system, learning about an outcome would lead to an analogous updating. This picture is therefore consistent with the epistemic view of collapse propounded by Spekkens and others [1,2].
From this perspective, the quantum state represents a collection of (classical) information that a limited-knowledge agent has about a spacetime system at any one time. And it is not our best-possible description. In many cases we actually do know what measurement will be made on the system, or at least a vast reduction in the space of possible measurements. This indicates that a lower-dimensional quantum state is typically available. Indeed, quantum chemists are already aware of this, as they have developed techniques to compute the energies of complex quantum systems without using the full dimensionality of the Hilbert space implied by a strict reading of quantum physics (e.g., density functional theory).
If we could determine exactly how to write |ψ(t)⟩ in terms of P(μ(x, t) | G), then it would be perfectly natural to update twice: once upon learning the type of measurement G, and then again upon learning the outcome. It seems evident that this two-step process would make the collapse seem far less unsettling, as the first update would reduce the possibilities to those that could exist in spacetime. For example, in the which-slit experiment depicted in Figure 2a, if one’s possibility-space of histories had already been reduced to a photon on one detector or a photon on the other detector, there would be no need to talk about superluminal influences upon measurement.

7. Conclusions

Before quantum experiments were ever performed, it may have seemed possible that the Independence Fallacy would have turned out to be harmless. In classical scenarios, dynamical laws do seem to accurately describe our universe, and refusing to update the past based on knowledge of future experimental geometry does seem to be justified. And yet, in the quantum case, this assumption prevents one from ascribing probability distributions to everything that happens when we don’t look. If the ultimate source of the Independence Fallacy lies in the dynamical laws of the Newtonian Schema, then before giving up on a spacetime-based reality, we should perhaps first consider the Lagrangian Schema as an alternative. The above analysis demonstrates that this alternative looks quite promising.
Still, the qualitative analogy between the Ising model and the double slit experiment can only be pushed so far. And one can go too far in the no-dynamics direction: considering all histories, as in the path integral, would lead to the conclusion that the future would be almost completely uncorrelated with the past, contradicting macroscopic observations. But with the above framework in mind, and the path integral as a starting point, there are promising quantitative approaches. One intriguing option [16] is to limit the space of possible spacetime histories (perhaps those for which the total Lagrangian density is always zero). Such a limitation can be seen to cluster the remaining histories around classical (action-extremized) solutions, recovering Euler-Lagrange dynamics as a general guideline in the many-particle limit. Better yet, for at least one model, Schulman’s ansatz from Section 5.2 can be derived from a history-counting approach on an arbitrary spin state. And as with any deeper-level theory that purports to explain higher-level behavior, intriguing new predictions are also indicated.
Even before the technical project of developing such models is complete, the above analysis strongly indicates that one can have a spacetime-based reality to underlie and explain quantum theory. If this is the case, the “It from Bit” idea would lose its strongest arguments; quantum information could be about something real rather than the other way around. It is true that this approach requires one to give up the intuitive NSU story of dynamical state evolution, so one may still choose to cling to dynamics, voiding the above analysis. But to this author, at least, losing a spacetime-based reality to dynamics does not seem like a fair trade-off. After all, if maintaining dynamics as a deep principle leads one into some nebulous “informational immaterialism” [8], one gives up on any reality on which dynamics might operate in the first place.
More common than “It from Bit” is the view that maintains the Independence Fallacy by elevating configuration space into some new high-dimensional reality in its own right [21]. Ironically, that resulting state is interdependent across the entire 3N-dimensional configuration space, with the lone exception that it remains independent of its future. While this is certainly a possibility, the alternative proposed here is simply to link everything together in standard 4D spacetime, with full histories as the most natural entities on which to assign probability distributions. So long as one does not additionally impose dynamical laws, there is no theorem that one of these histories cannot be real. Furthermore, by analyzing entire 4D histories rather than states in a 3N-dimensional configuration space, one seems less likely to make conceptual mistakes based on pre-relativistic intuitions wherein time and space are completely distinct.
The central lesson of the above analysis is that—whether or not one wants to give up NSU-style dynamics—action principles provide us with a logical framework in which we can give up dynamics. The promise, if we do so, is a solution to the quantum reality problem [3], where quantum states can be ordinary information about something that really does happen in spacetime. This would not only motivate better descriptions of what is happening when we don’t look (incorporating our knowledge of future measurements), but would formally put quantum theory on the same dimensional footing as classical general relativity. After nearly a century of efforts to quantize gravity, perhaps it is time to acknowledge the possibility of an alternate strategy: casting the quantum state as information about a generally-covariant, spacetime-based reality.

Acknowledgments

The author would like to thank the Foundational Questions Institute (fqxi.org) for sponsoring their annual essay contests, and awarding third prize to the two essays [18,40] that formed the basis of this paper.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Spekkens, R.W. Evidence for the epistemic view of quantum states: A toy theory. Phys. Rev. A 2007, 75, 032110.
2. Fuchs, C.A. Quantum mechanics as quantum information, mostly. J. Mod. Opt. 2003, 50, 987–1023.
3. Kent, A. A solution to the Lorentzian quantum reality problem. Available online: http://arxiv.org/pdf/1311.0249v1.pdf (accessed on 21 January 2014).
4. Bell, J.S. On the Einstein Podolsky Rosen Paradox. Physics 1964, 1, 195–200.
5. Kochen, S.; Specker, E.P. The problem of hidden variables in quantum mechanics. J. Math. Mech. 1967, 17, 59–87.
6. Pusey, M.F.; Barrett, J.; Rudolph, T. On the Reality of the Quantum State. Nat. Phys. 2012, 8, 475–478.
7. Wheeler, J.A. Information, Physics, Quantum: The Search for Links. In Complexity, Entropy and the Physics of Information; Zurek, W.H., Ed.; Westview Press: Boulder, CO, USA, 1990; pp. 3–28.
8. Timpson, C.G. Quantum Information Theory and the Foundations of Quantum Mechanics; Oxford University Press: Oxford, UK, 2013.
9. Costa de Beauregard, O. Time symmetry and interpretation of quantum mechanics. Found. Phys. 1976, 6, 539–559.
10. Rietdijk, C.W. Proof of a retroactive influence. Found. Phys. 1978, 8, 615–628.
11. Cramer, J.G. Generalized absorber theory and the Einstein-Podolsky-Rosen paradox. Phys. Rev. D 1980, 22, 362–376.
12. Sutherland, R.I. Bell’s theorem and Backwards-In-Time causality. Int. J. Theor. Phys. 1983, 22, 377–384.
13. Price, H. Time’s Arrow and Archimedes’ Point; Oxford University Press: Oxford, UK, 1996.
14. Miller, D.J. Realism and time symmetry in quantum mechanics. Phys. Lett. A 1996, 222, 31–36.
15. Wharton, K.B.; Miller, D.J.; Price, H. Action Duality: A Constructive Principle for Quantum Foundations. Symmetry 2011, 3, 524–540.
16. Wharton, K.B. Lagrangian-only quantum theory. Available online: http://arxiv.org/pdf/1301.7012.pdf (accessed on 8 November 2013).
17. Smolin, L. The unique universe. Phys. World 2009, 22(6), 21–26.
18. Wharton, K. The Universe is Not a Computer. Available online: http://arxiv.org/pdf/1211.7081.pdf (accessed on 8 November 2013).
19. Allori, V.; Goldstein, S.; Tumulka, R.; Zanghì, N. On the Common Structure of Bohmian Mechanics and the Ghirardi-Rimini-Weber Theory. Brit. J. Phil. Sci. 2008, 59, 353–389.
20. Bohm, D. A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables. Phys. Rev. 1952, 85, 166–179.
21. Everett, H. Relative State Formulation of Quantum Mechanics. Rev. Mod. Phys. 1957, 29, 454–462.
22. Bell, J.S. Toward An Exact Quantum Mechanics. In Themes in Contemporary Physics; Deser, S., Finkelstein, R.J., Eds.; World Scientific: Teaneck, NJ, USA, 1989; pp. 1–26.
23. Kent, A. Quantum Jumps and Indistinguishability. Mod. Phys. Lett. A 1989, 4, 1839–1845.
24. Tumulka, R. A Relativistic Version of the Ghirardi-Rimini-Weber Model. J. Stat. Phys. 2006, 125, 821–840.
25. Aharonov, Y.; Vaidman, L. Complete description of a quantum system at a given time. J. Phys. A 1991, 24, 2315–2328.
26. Griffiths, R.B. Consistent histories and the interpretation of quantum mechanics. J. Stat. Phys. 1984, 36, 219–272.
27. Gell-Mann, M.; Hartle, J.B. Decoherent Histories Quantum Mechanics with One ‘Real’ Fine-Grained History. Phys. Rev. A 2011, 85, 062120.
28. Sorkin, R.D. Quantum dynamics without the wave function. J. Phys. A 2007, 40, 3207–3222.
29. Kent, A. Path Integrals and Reality. Available online: http://arxiv.org/abs/1305.6565 (accessed on 20 February 2014).
30. Sorkin, R.D. Scalar Field Theory on a Causal Set in Histories form. J. Phys. Conf. Ser. 2011, 306, 012017.
31. Stuckey, W.M.; Silberstein, M.; Cifone, M. Reconciling Spacetime and the Quantum: Relational Blockworld and the Quantum Liar Paradox. Found. Phys. 2008, 38, 348–383.
32. Leifer, M.S.; Spekkens, R.W. Towards a Formulation of Quantum Theory as a Causally Neutral Theory of Bayesian Inference. Phys. Rev. A 2013, 88, 052130.
33. Maudlin, T. What Bell proved: A reply to Blaylock. Am. J. Phys. 2010, 78, 121–125.
34. Shimony, A. Sixty-Two Years of Uncertainty: Historical, Philosophical, and Physical Inquiries into the Foundations of Quantum Mechanics; Plenum: New York, NY, USA, 1990.
35. Leifer, M.S. Quantum Dynamics as an analog of Conditional Probability. Phys. Rev. A 2006, 74, 042310.
36. Evans, P.; Price, H.; Wharton, K.B. New Slant on the EPR-Bell Experiment. Br. J. Phil. Sci. 2013, 64, 297–324.
37. Schulman, L.S. Experimental Test of the “Special State” Theory of Quantum Measurement. Entropy 2012, 14, 665–686.
38. Price, H.; Wharton, K. Dispelling the Quantum Spooks—a Clue that Einstein Missed? Available online: http://arxiv.org/pdf/1307.7744v1.pdf (accessed on 24 February 2014).
39. Gallicchio, J.; Friedman, A.S.; Kaiser, D.I. Testing Bell’s inequality with cosmic photons: Closing the setting-independence loophole. Phys. Rev. Lett. 2014, accepted for publication.
40. Wharton, K. Reality, no matter how you slice it. Available online: http://arxiv.org/pdf/1311.0001v1.pdf (accessed on 21 January 2014).
