1. Introduction
A deterministic hidden-variable model is said to be superdeterministic—not a word the author would have chosen—if the so-called Measurement Independence assumption (sometimes referred to as the Statistical Independence assumption or the λ-independence assumption),
ρ(λ|x,y) = ρ(λ),    (1)
is violated [1,2,3,4,5]. Here ρ(λ|x,y) is a probability density on a set of hidden variables λ, and x and y denote experimentally chosen measurement settings—for concreteness, nominally accurate polariser orientations. Without (1), it is impossible to show that the model satisfies the CHSH version of Bell's inequality
|C(x,y) + C(x,y') + C(x',y) - C(x',y')| ≤ 2,    (2)
where C denotes a correlation on Bell-experiment measurement outcomes over an ensemble of particle pairs prepared in the singlet state, and x', y' denote alternative measurement settings to x, y.
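To recall where (1) enters, here is the standard sketch of how (2) is derived (the outcome functions A and B are introduced only for this reminder and are not notation used elsewhere in the paper): a deterministic hidden-variable model assigns outcomes A(x,λ), B(y,λ) ∈ {-1,+1}, so that
C(x,y) = ∫ A(x,λ) B(y,λ) ρ(λ|x,y) dλ.
If (1) holds, the same density ρ(λ) appears in all four correlations, and
C(x,y) + C(x,y') + C(x',y) - C(x',y') = ∫ [A(x,λ)(B(y,λ) + B(y',λ)) + A(x',λ)(B(y,λ) - B(y',λ))] ρ(λ) dλ.
For each λ, one of B(y,λ) ± B(y',λ) vanishes and the other equals ±2, so the integrand has magnitude at most 2 and (2) follows. Without (1), the four correlations are averages over four different densities and cannot be combined under a single integral in this way.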
The argument that models which violate (1) are conspiratorial originates in a paper by Shimony, Horne, and Clauser, written in response to Bell's paper on local beables [6]. Shimony et al. write:
In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation.
The drug trial is often used to illustrate the contrived nature of such a conspiracy. For example [7]:
… if you are performing a drug versus placebo clinical trial, then you have to select some group of patients to get the drug and some group of patients to get the placebo. The conclusions drawn from the study will necessarily depend on the assumption that the method of selection is independent of whatever characteristics those patients might have that might influence how they react to the drug.
Related to this, superdeterminism is sometimes described as requiring exquisitely (and hence unrealistically) finely tuned initial conditions [8] or as negating experimenter freedom [9]. A number of quantum foundations experts (e.g., [10,11,12,13]) use one or more of these arguments to dismiss superdeterminism in derisive terms.
A new twist was added by Aaronson [13] who concluded his excoriating critique of superdeterminism with a challenge:
Maxwell’s equations were a clue to special relativity. The Hamiltonian and Lagrangian formulations of classical mechanics were clues to quantum mechanics. When has a great theory in physics ever been grudgingly accommodated by its successor theory in a horrifyingly ad-hoc way, rather than gloriously explained and derived?
It would seem that developing superdeterministic models of quantum physics is a hopeless cause. However, the purpose of this paper is to attempt to show that superdeterminism has been badly misunderstood, to rebuff these criticisms, and indeed to suggest that violating (1) is perhaps the only sensible way to understand the experimental violation of Bell inequalities. Importantly, we show that whilst conspiracy would imply a violation of (1), the converse is not true. In Section 2, we define what we mean by a non-conspiratorial violation of (1) and how it differs from these more traditional conspiratorial violations. Motivated by this, and by the unshieldable effects of gravity as described in Section 3, a non-conspiratorial superdeterministic model is described in Section 4, based on a specific discretisation of complex Hilbert Space [14]. Using the homeomorphism between p-adic integers and fractal geometry, the model is linked to the invariant set postulate [15]—the universe is evolving precisely on some special dynamically invariant subset of state space. In Section 5, it is shown how this model violates (1) non-conspiratorially, and indeed violates (2) in exactly the same way as does quantum mechanics. In Section 5.3, we show that the superdeterministic model is locally causal. In Section 6, we discuss common objections to superdeterminism, including fine tuning, free will, and the so-called drug-trial analogy. Addressing Aaronson's challenge in Section 6.3, we show how the state space of quantum mechanics can be considered the singular continuum limit of the discretised Hilbert space of the superdeterministic model. A possible experimental test of the superdeterministic model is discussed in Section 7.
Before embarking on this venture, one may ask the question: why bother? After all, quantum mechanics is an extremely well tested theory, and has never been found wanting. Why not just accept that quantum theory violates the concept of local realism—whatever that means—and get on with it? The author's principal motivation for pursuing theories of physics which violate (1) lies in the possibility of finding a theory of quantum physics that is consistent with the locally causal nonlinear geometric determinism of general relativity theory. As discussed in Section 8, results from this paper suggest that instead of seeking a quantum theory of gravity ('quantum gravity'), we should be seeking a strongly holistic gravitational theory of the quantum, from which the Euclidean geometry of space-time is emergent as a coarse-grained approximation to the p-adic geometry of state space. This, the author believes, is the real message behind the experimental violation of Bell inequalities.
2. Conspiratorial and Non-Conspiratorial Interventions
There is no doubt that conspiratorial violations of Bell inequalities, of the type mentioned in the Introduction, imply a violation of (1). Here we are concerned with the converse question: does a violation of (1) imply the existence of a conspiratorial hidden variable theory? In preparing to answer this question, we quote from Bell's response 'Free Variables and Local Causality' [6] (FVLC) to Shimony et al. [6]. In FVLC, Bell writes:
I would insist here on the distinction between analyzing various physical theories, on the one hand, and philosophising about the unique real world on the other hand. In this matter of causality it is a great inconvenience that the real world is given to us once only. We cannot know what would have happened if something had been different. We cannot repeat an experiment changing just one variable; the hands of the clock will have moved and the moons of Jupiter. Physical theories are more amenable in this respect. We can calculate the consequences of changing free elements in a theory, be they only initial conditions, and so can explore the causal structure of the theory. I insist that B [Bell's paper on the theory of local beables [6]] is primarily an analysis of certain kinds of physical theory.
To understand the significance of this quote, we base the analysis of this paper around the thought experiment devised by Bell in FVLC, where by design human free will plays no explicit role. Bell supposes x and y are determined by the outputs of two pseudo-random number generators (PRNGs). These outputs are sensitive to the parities of the millionth digits of the PRNG inputs. Bell now makes what he calls a 'reasonable' assumption:
But this peculiar piece of information [whether the parity of the millionth digit is odd or even] is unlikely to be the vital piece for any distinctively different purpose, i.e., it is otherwise rather useless … In this sense the output of such a [PRNG] device is indeed a sufficiently free variable for the purpose at hand.
It is important to note that, in this quote, Bell deflects discussion away from statistical properties of some ensemble of runs of an experiment where measurement settings are supposedly selected randomly (as per the Shimony et al. quote above), and focusses on one individual run of an experiment. There is an important reason for this. When discussing conspiratorial hidden-variable models of the Shimony et al. type, it is assumed that in any large enough ensemble with a common value of λ, there exist sub-ensembles for each of the four pairs of measurement settings (00, 01, 10, and 11). In this context, (1) implies that the four sub-ensembles are statistically equal. Conversely, in a conspiratorial violation of (1), the four sub-ensembles are statistically unequal. In such a situation, ρ(λ|x,y) can be interpreted as a frequency of occurrence within each of the four sub-ensembles. It is worth noting that in such a frequency-based interpretation of ρ, the issue of counterfactual definiteness—a central issue below—never arises. This has led to a misconception that counterfactual definiteness plays no role in Bell's Theorem.
Importantly, hidden-variable models do not have to be like this. It is possible that the value of λ is unique to each run of a Bell experiment. The model described below has this property. In this situation, if ρ(λ|x,y) were to define a frequency of occurrence, and a particle with value λ was measured with settings x and y and could only be measured once, then ρ(λ|x,y) = 1 whilst ρ(λ|x',y') = 0, where x' ≠ x and y' ≠ y. However, this does not itself imply a violation of (1)—it merely emphasises that ρ(λ|x,y) is fundamentally not a frequency of occurrence in an ensemble but rather is a probability density defined on an individual particle with value λ.
With that in mind, let us continue to focus, as Bell does, on a single entangled particle pair. If x and y were free variables, then the parities of the millionth digits would not be vital for 'distinctively different' purposes. That is to say, we could vary either parity without having a vital impact on distinctly different systems. In the language of Pearl's causal inference theory [16], if x and y were not free variables, there would exist small so-called interventions which by design changed one or other of these parities, and by consequence had a vital impact on distinctly different systems.
The most important part of this paper is to draw attention to two possible ways this might happen. The first is the conventional way where the effect of the intervention propagates causally from its localised source in space-time, somehow vitally influencing distinctly different systems. It is hard to imagine how varying something as insignificant as the parity of the millionth digit of an input to a PRNG could have such a vital impact. For this reason, Bell argued, the PRNG output should indeed be considered a free variable. This, of course, is not unreasonable.
However, there is a second possibility—one that was not considered by Bell—that such interventions are simply inconsistent with physical theory. That is to say, the hypothetical state of the universe in which one of these parities is perturbed but all other distinctly different elements of the universe are kept fixed is inconsistent with the laws of physics. If such an intervention were hypothetically applied to a localised region of the universe, the ontological status of the whole universe would change; clearly a state of the universe as a whole only exists if all parts of it satisfy the laws of physics. Of course, we cannot perform an actual experiment to test this potential inconsistency directly: in changing the millionth digit, the hands of the clock and the moons of Jupiter will have moved, as Bell notes. Hence addressing the question of whether the PRNG output is a free variable in the sense of this paragraph requires studying the mathematical properties of physical theory. This is Bell's point in the first quote, and it is the topic of this paper.
Below, we develop a model where each particle pair has a unique λ with the property
ρ(λ|x,y) ≠ 0;  ρ(λ|x',y) = ρ(λ|x,y') = ρ(λ|x',y') = 0  for x' ≠ x, y' ≠ y.    (3)
As will be shown, this can be interpreted as a locally causal non-conspiratorial violation of (1). It implies that an intervention on x, keeping λ and y fixed, leads to a state of the world which is inconsistent with the model postulates and therefore has zero probability. Clearly, this cannot be an intervention within space-time. Instead, the intervention describes a hypothetical perturbation which takes a point in the state space of the universe (labelled by the triple (λ, x, y)), consistent with the model and hence with ρ(λ|x,y) ≠ 0, to a state (λ, x', y) which is inconsistent with physical theory and hence has ρ(λ|x',y) = 0. Importantly, this means a non-conspiratorial interpretation of (3) implies that physical theory does not have the post-hoc property of counterfactual definiteness. The essential nature of counterfactuals in Bell's Theorem was pointed out by Redhead [17] some years ago. This important point seems to have been lost in more recent discussions of Bell's Theorem.
However, one needs to be careful not to throw the baby out with the bathwater. Counterfactual reasoning is both pervasive and important in physics. Indeed, it is central to the scientific method [18]. The fact that we can express laws of physics mathematically gives us the power to estimate what would have happened had something been different. Such estimates can lead to predictions, and the predictions can be verified by experiment. We clearly do not want to give up counterfactual reasoning entirely in our search for new theories of physics. We address this concern by noting that output from experiment and physical theory (particularly when the complexities of the real world are accounted for) involves some inherent coarse-graining. Experiments have some nominal accuracy, and output from a computational model is typically truncated to a fixed number of significant digits. We can represent such coarse-graining by integrating the exact output of physical theory over small volumes of state space, with the coarse-grained output approaching the exact output smoothly as the volume shrinks to zero. In developing a superdeterministic model below, we will require that counterfactual definiteness holds generically when variables are coarse grained over such volumes, for suitably defined small volumes. The model based on discretised Hilbert Space, described in Section 4, has this property. As discussed below, this renders the drug-trial analogy irrelevant.
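One way to make this requirement explicit (writing λ̄ for a coarse-grained value of λ, and ρ̄ for the corresponding coarse-grained density): let Δ(λ̄) denote the small cell of state-space volume V containing λ̄, and define
ρ̄(λ̄|x,y) = (1/V) ∫_{Δ(λ̄)} ρ(λ|x,y) dλ.
The requirement is then that counterfactual definiteness, and with it measurement independence, holds generically at the level of ρ̄, even though it fails for the exact fine-grained ρ(λ|x,y).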
These matters are subtle, and it seems Bell appreciated this. Instead of derisively rejecting theories where (1) is violated, he concludes FVLC with the words:
Of course it might be that these reasonable ideas about physical randomisers are just wrong—for the purposes at hand.
Indeed, in his last paper ‘La Nouvelle Cuisine’, Bell writes:
An essential element in the reasoning here is that [polariser settings] are free variables.... Perhaps such a theory could be both locally causal and in agreement with quantum mechanical predictions. However, I do not expect to see a serious theory of this kind. I would expect a serious theory to permit ‘deterministic chaos’ or ‘pseudorandomness’ …But I do not have a theorem about that.
The last sentence is insightful, because, as we discuss, there is no such theorem. Indeed, the reverse: here we develop a serious non-classical model where polariser settings are not free variables, utilising geometric concepts in deterministic chaos.
3. The Andromedan Butterfly Effect
The purpose of this section is to note how the Principle of Equivalence makes the interaction of matter with gravity especially chaotic. Here, we repeat a calculation first reported by Michael Berry [19] and further analysed in [20]. It is well known that the flap of a butterfly's wings in Brazil can cause a tornado in Texas. But could the flap of a butterfly's wings on a planet in the Andromeda galaxy cause a tornado in Texas?
We consider molecules in the atmosphere as hard spheres of radius R with mean free distance l between collisions. We wish to estimate the uncertainty in the angle of the Mth collision of one of the spheres with other spheres, due to some very small uncertain external force. It is easily shown that this angular uncertainty grows exponentially with M: each collision amplifies it by a factor of order l/R, which for atmospheric molecules (R of order 10^-10 m, l of order 10^-7 m) is roughly a thousand. After how many collisions M is the position of a molecule in Earth's atmosphere sensitive to the gravitational uncertainty in the uncertain flap of a butterfly's wing in the Andromeda galaxy? Let r denote the distance between Earth and Andromeda. The flap of a butterfly's wing through a small distance will change the gravitational force on our target molecule by a correspondingly minute amount, and uncertainty in that flap will therefore induce an uncertainty in the acceleration of a terrestrial atmospheric molecule. If τ denotes the mean time between molecular collisions, this uncertain acceleration, acting over a single collision time, produces an uncertainty in the direction of the molecule. Plugging in representative values for the wing's mass and displacement, the Earth-Andromeda distance r, and the collision time τ gives an almost inconceivably small initial angular uncertainty, yet one that is amplified by roughly three orders of magnitude at every subsequent collision. How large must M be before the accumulated uncertainty is of order one? From the above, only a few tens of collisions are required. Hence, after about 30 collisions the direction of travel of the terrestrial molecule has been rendered completely uncertain by the gravitational effect of the Andromedan butterfly. Indeed, one can go further: the direction of the terrestrial molecule is rendered completely uncertain by the uncertain position of a single electron at the edge of the visible universe after only about 50 or so collisions (which below we will round up to an order of magnitude of 100). Once the molecules of the Earth's atmosphere have been disturbed in this way, it is only a matter of a couple of weeks before the nonlinearity of the Navier-Stokes equations leads to uncertainty in a large-scale weather pattern, such as a Texan tornado [21,22].
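The arithmetic behind this estimate can be sketched in a few lines. The numerical values below are representative assumptions (molecular radius, mean free path, wing mass and displacement, Earth-Andromeda distance) and are not the precise figures of the original calculation; they serve only to show that the answer comes out at a few tens of collisions.

```python
# Order-of-magnitude sketch of the Andromedan-butterfly estimate described above.
# All numerical values are illustrative assumptions, not the paper's own figures.
import math

G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
R   = 1e-10        # molecular radius, m (assumed)
l   = 1e-7         # mean free path in air, m (assumed)
v   = 500.0        # typical thermal molecular speed, m/s (assumed)
tau = l / v        # mean time between collisions, s

m_wing = 1e-3      # mass moved by the butterfly's wing flap, kg (assumed)
d_flap = 1e-2      # displacement of that mass, m (assumed)
r      = 2.4e22    # Earth-Andromeda distance, m (~2.5 million light years)

# a = G m / r^2, so moving the mass by d_flap changes the acceleration here by
# |delta_a| ~ 2 G m d_flap / r^3.
delta_a = 2 * G * m_wing * d_flap / r**3

# Angular uncertainty acquired over one collision time: transverse velocity kick / speed.
dtheta_1 = delta_a * tau / v

# Each hard-sphere collision amplifies an angular error by roughly l/R, so
# dtheta_M ~ (l/R)**M * dtheta_1.  Find M such that dtheta_M ~ 1 radian.
amplification = l / R
M = math.log(1.0 / dtheta_1) / math.log(amplification)

print(f"initial angular uncertainty ~ {dtheta_1:.1e} rad")
print(f"collisions needed for O(1) uncertainty: M ~ {M:.0f}")
# With these assumptions M comes out at a few tens of collisions,
# consistent with the 'about 30 collisions' quoted in the text.
```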
Nowhere above did the mass of the molecule enter the calculation; this reflects the Principle of Equivalence, whereby the gravitationally induced acceleration of the molecule is independent of its mass. The same calculation could just as well apply to the measurement process in quantum physics—where a particle interacts with atoms in some measuring device. Indeed, the same calculation could apply to molecules in an experimenter's brain, affecting the decisions they make.
As a matter of principle, the direction of motion of a molecule after 100 collisions is for all practical purposes uncomputable. If a computation (say on a supercomputer) were attempted, then the direction of the molecule would depend on the motion of electrons in the chips of the supercomputer. Self-referentially, the computational software would have to include a representation of the computation itself. Technically, the Andromedan Butterfly Effect describes a computationally irreducible system [23]—one that cannot be computed with a computationally simpler system.
This is the first step in our argument that (1) could be violated because the universe should be considered a rather holistic chaotic system. Gravity is what makes the universe holistic. Unlike the other forces of nature, the effects of gravity cannot be shielded, there being no negative gravitational charges. However, by itself, this argument does not imply that the parity of the millionth digit, or the flap of a butterfly's wings in Andromeda, is a vital piece of information for determining distinctly different systems. For that, we need more from the theory of chaos.
6. Objections to Superdeterminism
Below, we address some of the objections that have been raised against superdeterminism with the RaQM/invariant set model in mind.
6.1. The Drug Trial
Each human in a drug trial is unique, having unique DNA, for example. However, the characteristics used to allocate people randomly to the active-drug or placebo groups are coarse-grained attributes. Are they young or old? Are they male or female? Are they black or white? We typically assume the two groups contain equal numbers of people with such coarse-grained attributes. In a conspiratorial drug trial, the selection process is manipulated so that the two groups do not contain equal numbers of people with such attributes.
Although RaQM violates (1), it does not violate any version of (1) in which coarse-grained hidden variables are used in place of the hidden variables themselves. For large enough p, it is always possible to find counterfactual worlds which satisfy the rationality constraints in a coarse-grained volume of state space, no matter how small that volume is. That is to say, in any sufficiently large ensemble of individual runs,
ρ̄(λ̄|x,y) = ρ̄(λ̄),    (28)
where λ̄ denotes a coarse-grained value of λ and ρ̄ denotes a frequency of occurrence. In this sense, (28) does not imply (1), and the drug-trial conspiracy is an irrelevance to RaQM.
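A toy simulation (not the RaQM model itself) illustrates how fine-grained uniqueness can coexist with the coarse-grained independence expressed by (28): each simulated run has a unique hidden value, so that exact value never recurs with any other setting pair, yet the binned frequencies are statistically the same for all four setting pairs.

```python
# Toy illustration: unique fine-grained lambda per run, yet coarse-grained
# frequencies independent of the setting pair, as in (28).  Not the RaQM model.
import random
from collections import Counter

random.seed(0)
N_RUNS, N_BINS = 100_000, 10

runs = []
for _ in range(N_RUNS):
    lam = random.random()                              # unique fine-grained hidden variable
    x, y = random.randint(0, 1), random.randint(0, 1)  # 'freely chosen' binary settings
    runs.append((lam, x, y))

# Fine-grained view: the exact lambda of the first run occurs with exactly one
# setting pair, so its frequency of occurrence with any other settings is zero.
lam0, x0, y0 = runs[0]
recurrences = [r for r in runs if r[0] == lam0 and (r[1], r[2]) != (x0, y0)]
print("occurrences of the first run's exact lambda with different settings:", len(recurrences))

# Coarse-grained view: bin lambda and compare bin frequencies across the four setting pairs.
freq = {}
for lam, x, y in runs:
    freq.setdefault((x, y), Counter())[int(lam * N_BINS)] += 1

for settings, counts in sorted(freq.items()):
    total = sum(counts.values())
    profile = [round(counts[b] / total, 3) for b in range(N_BINS)]
    print(settings, profile)  # near-uniform and nearly identical for all four setting pairs
```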
6.2. Fine Tuning
The fine-tuning objection (e.g., [8]) rests on the notion that superdeterminism appears to require some special, atypical initial conditions. Perhaps one might view an initial state lying on a fractal attractor as special and atypical—after all, a seemingly tiny perturbation (changing x whilst keeping λ and y fixed) can take the state of the universe off its invariant set I_U. Although the Euclidean metric accurately describes distances in space-time, the p-adic metric is more natural in describing distances in state space when the inherent geometry of state space is fractal [37]. From the perspective of the p-adic metric, a fractal invariant set is not fine-tuned: a perturbation which takes a point off I_U is a large perturbation (of magnitude at least p), even though it may appear very small from a Euclidean perspective. Conversely, perturbations which map points of I_U to points of I_U can be considered small-amplitude perturbations.
Similarly, we must ask with respect to what measure states on I_U are deemed atypical. Although states on I_U are atypical with respect to a uniform measure on the Euclidean space in which I_U is embedded, they are manifestly typical with respect to the invariant measure of I_U [42].
In claiming that a theory is fine-tuned, one should first ask with respect to which metric/measure the tuning is deemed fine—and then ask whether this is the natural metric/measure with which to assess fineness.
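The metric point can be illustrated with generic p-adic arithmetic (standard number theory, not the specific invariant-set construction): a perturbation that is tiny by the Euclidean measure can have p-adic magnitude of at least p, and a Euclidean-large change can be p-adically negligible.

```python
# Minimal sketch of how the Euclidean and p-adic metrics rank the same perturbations
# very differently.  Generic p-adic arithmetic only; not the invariant-set model.
from fractions import Fraction

def p_adic_norm(q: Fraction, p: int) -> Fraction:
    """|q|_p = p**(-v), where v is the power of p dividing q (and |0|_p = 0)."""
    if q == 0:
        return Fraction(0)
    v, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

p = 101
tiny_euclidean = Fraction(1, p)    # Euclidean size ~0.0099
huge_euclidean = Fraction(p ** 6)  # Euclidean size ~1.06e12

print(float(abs(tiny_euclidean)), float(p_adic_norm(tiny_euclidean, p)))  # small, but |.|_p = 101
print(float(abs(huge_euclidean)), float(p_adic_norm(huge_euclidean, p)))  # huge, but |.|_p ~ 9e-13
```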
6.3. Singular Limits and Aaronson’s Challenge
Aaronson’s challenge (see the Introduction) raises a more general question: what is the relationship between a successor theory of physics and its predecessor theory? There is a subtle but important relationship, brought out explicitly by Michael Berry [43], that is of relevance here.
Typically an old theory is a singular limit of a new theory, and not a smooth limit, as a parameter of the new theory is set equal to infinity or zero. A singular limit is one where some characteristic of the theory changes discontinuously at the limit, and not continuously as the limit is approached. Berry cites as examples how the old theory of ray optics is explained from Maxwell theory, or how the old theory of thermodynamics is explained from statistical mechanics. His claim is that old theories of physics are typically singular limits of new theories.
If quantum theory is a forerunner of some successor superdeterministic theory, and Berry’s observation is correct, then quantum mechanics is likely to be a singular limit of that superdeterministic theory. Here, the state space of quantum theory arises at the limit p = ∞ of RaQM, but not before. For any finite p, no matter how big, the incompleteness property that led to the violation of (1) holds. However, it does not hold at p = ∞. From this point of view, quantum mechanics is indeed a singular limit of RaQM’s discrete Hilbert space, at p = ∞. It is interesting to note that pure mathematicians often append the real numbers to sets (‘adeles’) of p-adic numbers, at p = ∞. However, the properties of p-adic numbers are quite different to those of the reals for any finite p, no matter how big. Here, the real-number continuum is the singular limit of the p-adics at p = ∞. The relationship between QM and RaQM is very similar.
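A standard illustration of how stark the difference remains for every finite p: in the p-adic metric the sequence p, p^2, p^3, … converges to zero, since |p^n|_p = p^(-n), whereas in the real metric the same sequence diverges. No finite increase of p softens this disagreement; it disappears only at p = ∞, which is precisely the signature of a singular, rather than smooth, limit.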
In physics, it is commonplace to solve differential equations numerically, i.e., to treat discretisations as approximations of some exact continuum equations, such that when the discretisation is fine enough, the numerical results are as close as we require to the exact continuum solution. This is not a good analogy here. A better analogy is analytic number theory, considered as an approximation to, say, the exact theory of prime numbers. If one is interested in the properties of large primes, treating p as if it were a continuum variable can provide excellent results. However, here the continuum limit is the approximation and not the exact theory.
Contrary to Aaronson’s statements, the singular relationship between a superdeterministic theory and quantum mechanics is exactly as one would expect from the history of science.
6.4. Free Will
Superdeterminism is often criticised as denying experimenter free will. Nobel Laureate Anton Zeilinger put it like this [9]:
We always implicitly assume the freedom of the experimentalist... This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.
A clear problem here is that the notion of free will is poorly understood [44] and therefore hard to define rigorously. It was for this reason that Bell introduced his PRNG gedanken experiment—to show that it was possible to discuss (1) and its potential violation without invoking free will.
Nevertheless, to avoid the charge of conspiracy, an experimenter must be able to choose in a way which is indistinguishable from a random choice. For the present purposes we can think of this as being consistent with free will. For this reason, experimenters have found increasingly whimsical ways of choosing measurement settings—such as bits from a movie, or the wavelength of light from a distant quasar—in an attempt to mimic randomness.
In Figure 4, we replace the two PRNGs in Figure 3 by Alice and Bob’s brains. It is well known that brains are low-power noisy systems [22,45] where neurons can be stochastically triggered. In practice, the source of such stochasticity is thermal noise. However, such noise—arising from the collision of molecules—will have an irreducible component due to the Andromedan butterfly effect. In such circumstances, as discussed in Section 3, this can render the action of the brain non-computational. In particular, if we were to construct a model of Alice and Bob’s brains driven by some subset of the data on I_U (or indeed by any subset), this model would not provide reliable predictions of their brains’ decisions. This surely provides evidence of our ability to choose in ways which are for all practical purposes random.
However, the Invariant Set Postulate provides insights which may help shed new light on the age-old dilemma of free will. Consider two possible invariant sets, I_U and I'_U. Here, for given λ, we suppose that I_U permits the settings 00 and 11 in a Bell experiment, whilst, keeping λ fixed, I'_U permits the settings 01 and 10. These invariant sets differ (very slightly) in terms of their geometries, i.e., in terms of the (underlying deterministic) laws of physics.
If Alice and Bob are free to choose their measurement settings, then prior to their choosing, all observations of the universe must be consistent with the universe belonging to either of I_U and I'_U. However, once Alice and Bob have chosen, not only is one of I_U and I'_U consistent with available observations, but the other becomes inconsistent with the supposed deterministic laws of physics.
Suppose Alice and Bob chose 00, and let q denote a state of the universe prior to Alice and Bob choosing. The question then arises: was it always the case—even before Alice and Bob chose 00—that q ∈ I_U and not q ∈ I'_U? Or did the very act of Alice and Bob choosing result in q ∈ I_U rather than q ∈ I'_U? In thinking about these questions, it is important to distinguish determinism from pre-destination. In a conventional deterministic initial-value problem, initial conditions are specified independently from evolutionary laws, and the evolved state is predestined from the initial state. By contrast, if one thinks of the geometry of the invariant set as primitive, then the choices Alice and Bob make are no more ‘predestined’ from q than q is predestined from their choices. Instead, all one can say is that, because of determinism, the earlier and later states must be dynamically consistent. Importantly, as a global state-space geometry, I_U is consistent with what Adlam calls an ‘all at once’ constraint [46]. One could say that the geometric specification of I_U, and hence whether q ∈ I_U, depends as much on states on I_U to the future of q as on states to the past of q. This future/past duality exists because neither the proposition q ∈ I_U, nor Alice and Bob’s choice given q, is computational (in our finite system it is computationally irreducible).
Hence, it is both true that q ∈ I_U (rather than q ∈ I'_U) before Alice and Bob chose, and also true that the act of choosing required q ∈ I_U rather than q ∈ I'_U. Importantly, as there is no notion of temporal causality in state space, it would be wrong to call this latter fact retrocausality. Simply, it is a consequence of I_U being an all-at-once constraint. This type of analysis helps explain the so-called delayed-choice paradoxes in quantum physics.
It is possible to conclude not only that experimenter choices are indeed freely made (Nobel Laureates can be assured that their brains are not being subverted), but also that these choices can determine which states of the universe are consistent with the laws of physics and which are not (surely a fitting role for Nobel Laureates). This has some significant implications for the role of intelligent life in the universe more generally, which the author will discuss elsewhere.
7. Experimental Tests
A key result from this paper is that we will not be able to detect non-conspiratorial superdeterministic violations of (1) by studying frequencies of measurement outcomes in a Bell experiment. We must look for other ways of testing such theories.
Of course, QM is exceptionally well tested, and if a superdeterministic theory is to replace QM, it must clearly be consistent with the results of all the experiments which support QM. Here, RaQM has a free parameter p which, if large enough, can replicate all existing experiments. This is because, with large enough p, discretised Hilbert space is fine enough that it replicates to experimental accuracy the probabilistic predictions of a theory based on continuum Hilbert space (and Born’s Rule to interpret the squared modulus of a state as a probability—something automatically satisfied in RaQM). Conversely, however, if p is some finite albeit large number, then in principle an experiment can probe regimes where the discretisation becomes relevant and where there might be some departure between reality and QM [47].
One conceivable test of RaQM versus QM probes the finite amount of information that can be contained in the quantum state vector (in RaQM, n qubits are represented by n bit strings of length p). In RaQM, the finite information encoded in the quantum state vector will limit the power of a general-purpose quantum computer, in the sense that RaQM predicts that the exponential increase in quantum compute speed with qubit number, expected for a range of quantum algorithms, may max out at a finite number m of qubits.
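To make the contrast vivid, one can compare information budgets directly. The comparison below is purely illustrative (the criterion that actually fixes m is not specified here): continuum QM needs a number of real parameters that grows exponentially with qubit number n, whereas the bit-string representation described above grows only linearly with n for fixed p.

```python
# Illustrative comparison only; the text does not specify the criterion that fixes m.
def continuum_parameters(n: int) -> int:
    """Independent real parameters of an n-qubit pure state: 2*2**n amplitudes'
    real and imaginary parts, minus normalisation and global phase."""
    return 2 * 2**n - 2

def raqm_bits(n: int, p: int) -> int:
    """Bits in n bit strings of length p, as the text describes for RaQM."""
    return n * p

p = 2**30  # a large value of the free parameter p, chosen purely for illustration
for n in (10, 20, 30, 40, 50):
    print(n, continuum_parameters(n), raqm_bits(n, p))
# The continuum description grows exponentially with n while the bit-string budget
# grows only linearly, one way of seeing why a finite-p theory might cap the
# exponential quantum-computational speed-up at some finite qubit number m.
```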
The key question concerns the value of m. Could it be related to the number of collisions in the Andromeda butterfly effect? We will return to this elsewhere.
8. Conclusions
Attempts to develop models which violate (1) do not deserve the derision they have received from a number of researchers in the quantum foundations community over the years. Not least, Bell himself did not treat the possible violation of (1) with derision, and accepted that seemingly reasonable ideas about the properties of physical randomisers might be wrong—for the purposes at hand.
A superdeterministic (and hence deterministic) model has been proposed which is not conspiratorial, is locally causal, does not deny experimenter free choice, and is not fine-tuned with respect to natural metrics and measures. The model, based on a discretisation of Hilbert space, is not a classical hidden-variable model; rather, it derives its properties from mathematical concepts developed after quantum theory (particularly non-computability and computational irreducibility). By considering the continuum of complex Hilbert Space as a singular limit of a superdeterministic discretisation of complex Hilbert Space, Aaronson’s challenge to superdeterminists, to show how a superdeterministic model might gloriously explain quantum mechanics, can be met.
One of the most important conclusions of this paper is that we need to be extremely cautious when invoking the notion of an ‘intervention’ in space-time, at least in the context of fundamental physics. Such interventions form the bedrock of Pearl’s causal inference modelling [16], and causal inference has been used widely in the quantum foundations community to try to analyse the causal structure of quantum physics, e.g., [48,49]. Here, we distinguish between two types of intervention: one that is consistent with the laws of physics and one that is not. The effect of the former type of intervention, if it is initially contained within a localised region of space-time, must propagate causally in space-time, constrained by the Lorentzian metric of space-time. By contrast, the latter type of intervention simply perturbs a state of the universe from a part of state space where the laws of physics hold to a part of state space where the laws of physics do not hold. If this superdeterministic model is correct, theories of quantum physics based on causal inference models which adopt an uncritical acceptance of interventions will give misleading results.
The results of this paper suggest that the way gravity interacts with matter may be central to understanding why the universe can be considered a holistic dynamical system evolving on an invariant set, and hence why Hilbert space should be discretised. This suggests that instead of looking for a quantum theory of gravity, we should be looking for a gravitational theory of the quantum [50,51]. However, importantly, the results here suggest that such a theory will not be found by probing smaller and smaller regions of space-time, ultimately leading to the Planck scale. It will instead be found by incorporating into the fundamental laws of physics the state-space geometry of the universe at its very largest scales [22]. Planck-scale discontinuities in space-time may instead be an emergent property of such (top-down) geometric laws of physics.
In this regard, a recent proposal [52] for synthesising quantum and gravitational physics describes gravity as a classical stochastic field. The latter is consistent with our discussion of the Andromedan butterfly effect. However, it is not consistent with our discussion of Bell’s inequality. On the other hand, if one simply acknowledges that a stochastic gravitational field is a ‘for all practical purposes’ representation of a chaotic system evolving on a fractal invariant set, then Oppenheim’s model may become consistent with both the proposed superdeterministic violation of Bell’s inequality and with realism and the relativistic notion of local causality.
In the author’s opinion, this is the real message—not non-locality, indeterminism, or unreality—behind the violation of Bell’s inequality.