No scientific consensus exists concerning the question of how nature denies the experimenter agent unlimited access to quantum states and quantum information. Entirely different physical explanations are on offer—as part of different quantum interpretations—regarding how nature limits agent access to quantum states or information and, therefore, how nature prevents epistemic agents from predicting and operationally controlling individual quantum measurement outcomes. As was noted already, pre-quantum, classical physics, by contrast, knows of no fundamental limits regarding agent access to nature (compare Section 4.2).

In the textbook, SW operational interpretation, which is orthodox quantum mechanics, it is the metaphysical assumption of ‘intrinsic randomness’, i.e., ‘quantum indeterminism’, which fundamentally limits the powers of the agent to predict the value of a single measurement outcome (see Section 5.1). By contrast, an ontological quantum theory, such as dBB-theory, typically derives its constraint on quantum predictability from the technological inability of the experimenter agent to collect complete information about initial conditions (see Section 5.2). These opposing explanations are frequently discussed in terms of in-principle versus in-practice limits of agent access to quantum systems. It is often presumed that an in-principle limit to agent-quantum access can only be posited in the case of operational quantum approaches, whereas only an apparently weaker, in-practice limit is available for ontological quantum mechanics.

The present work introduces a third option: the possibility of an in-principle limit for ontological quantum mechanics based upon self-referential dynamics which may produce outcome states whose predictability would require (i) access to infinitely precise knowledge about initial conditions and/or (ii) the availability of infinite computational resources (see Section 5.3). In the following, the three distinct options will be compared, whereby each one, albeit based on completely different physical assumptions, seeks to explain how nature prevents the agent from computing, predicting, and controlling individual quantum events. First, the standard position of ‘universal indeterminism’ will be briefly discussed and criticized in Section 5.1.

#### 5.1. Orthodox Quantum Mechanics: “Universal Indeterminism”

In orthodox quantum mechanics, the assumption of ‘intrinsic randomness’ serves as an absolute barrier to agent knowledge at the quantum level. Importantly, in the orthodox interpretation, the observed randomness is viewed as an a priori property of nature herself, i.e., prior to any additional physical constraints involving the agent. Remarkably, in universal indeterminism, a single random event can initiate an entirely new causal chain—apparently “out of nothing” (e.g., [72]). Nevertheless, and this—again—is the remarkable feature, the detection, for example, of a single ‘spin-up’ event by the measuring apparatus manifests a classical (pointer) state from which new causal flows may propagate, such as those triggering the formation of new biophysical events during sensory perception in the agent who observes the ‘spin-up’ measurement outcome. However, the question of the exact nature of that initiating event, i.e., the question of ‘what is a quantum?’, is—famously—not addressed in the orthodox interpretation, and therefore Plotnitsky [73], for example, has noted that “…quantum objects are seen as indescribable and possibly even as inconceivable” in the indeterministic interpretation of textbook quantum physics.

What is problematic, however, is that the very same indeterminism, or quantum randomness, which already serves as an absolute limit on agent knowledge, is often—at the same time—also held to be the source of the free-willed agency of the experimenter, as in the free-will theorem by Conway and Kochen [74,75]. This is the exact opposite of being the source of a universal constraint. How could this be? How could one and the same (quantum) randomness be the source of both (i) objective chance and (ii) free-willed agent control of physical events in the world, such as freely selecting a measurement setting? This self-contradictory view, which has previously been captured in the concept of quantum super-indeterminism (see Figure 1), has long obscured insight into the plausibility of those no-go arguments against the possibility of ontological quantum mechanics which are based upon the freedom of choice of the experimenter agent (for an overview see Walleczek [60]).

Standard no-go theorems, such as Bell’s theorem [28] or, again, the Conway-Kochen free-will theorem [74,75], fail to account for this contradiction within the orthodox view, which is implied by super-indeterminism (see the legend to Figure 1). Therefore, such no-go theorems, i.e., theorems claiming the impossibility of particular ontological propositions, yield conclusions of debatable value against the validity of deterministic quantum theories. For example, John Bell himself recognized the shortcomings of his own (no-go) theorem in view of an axiomatic interpretation of the non-signaling theorem, and he later adjusted his views [78,79,80,81,82]. For a detailed analysis of Bell’s evolving positions—from an axiomatic to an effective non-signaling constraint—see Walleczek and Grössing [27]. Concluding, the simple concept of super-indeterminism (Figure 1) explains why the free-choice assumption of the experimenter agent in selecting measurement settings does not imply the necessary rule of the standard, i.e., axiomatic, non-signaling theorem (for details see Figure 2).

#### 5.1.1. On the Impossibility of Proving the Truth of Quantum Indeterminism

Long-running arguments against the possibility of deterministic, quantum-ontological approaches are increasingly criticized as falling short of their stated aims, in particular those based upon the free-will theorem and the non-signaling constraint as an axiom (see Figure 1 and Figure 2). Importantly, it is widely accepted that quantum indeterminism in the form of actual or objective chance can be proven neither by empirical tests nor by mathematical reasoning (e.g., [84]). However, what might be provable instead is the objective unpredictability of individual quantum measurement outcomes, as defined, for example, by a formal theorem such as Turing incomputability (see Section 5.3 and Section 6). Again, “indeterminism” captures a metaphysical assumption about how nature really is—prior to any formal theorizing. Furthermore, an empirical proof of indeterminism is likely to remain forever out of reach, as a final loophole-free test seems to be a logical impossibility (compare Figure 1).

Finally, for a long time, because of the widespread belief that constraints such as absolute unpredictability, free will, nonlocality, non-signaling, or contextuality could be compatible only with quantum indeterminism, deterministic approaches to quantum theory have met with little interest from the mainstream of quantum physics, except often in reference to the perceived implausibility of the SW ontological quantum approaches (compare Section 2). This lack of interest has long been due to the near-exclusive use—in the mainstream discourse on quantum foundations—of definite, non-contextual ontological assumptions, i.e., those that are consistent only with the classical, metaphysical assumption known as direct or naïve realism. As was mentioned before, the proposed “elements of reality” in the argument by Einstein et al. [38] represent, of course, entirely non-contextual ontic states in agreement with the classical metaphysics of naïve realism; there, the “elements” merely reveal their own “intrinsic”, already given, properties at the moment of their measurement.

As was reviewed in Section 3.1, any non-contextual, measurement-independent ontology, such as naïve realism, is wholly incompatible with the measurement predictions of orthodox quantum mechanics [39]. In recent years, however, new research has been pushing the frontiers of ontological possibilities beyond naïve realism, such as in the form of relational ontologies (e.g., Esfeld [85]), time-symmetric ontologies (e.g., Leifer and Pusey [12]), including unconventional causal structures such as retrocausality (e.g., Sutherland [29], Price [30], Wharton [13,31], Price and Wharton [32]). In addition, there has been a revival of interest in the nonlocal and contextual ontologies related to dBB-theory [1,2,3,4] and Bohmian mechanics [5,6,7,8], which are ontological propositions that posit the fundamental interconnectedness, instead of the intrinsic randomness, of the physical universe (e.g., Walleczek and Grössing [86]).

The focus of the subsequent Section 5.2 and Section 5.3 will be an assessment of the continuing possibility of ontology and determinism in quantum theory in relation to the experimenter agent. Specifically, what is sought is a scientifically based notion of “determinism without pre-determination” [60,86]. Next, Section 5.2 presents the traditional option for quantum mechanics in a globally deterministic universe.

#### 5.2. Ontological Quantum Mechanics: “Effective Ignorance in Global Determinism”

Instead of the metaphysical assumption of intrinsic randomness (Figure 3A), an ontological quantum mechanics opts for an alternative approach to explain the origins—in a globally deterministic universe—of experimentally observed quantum randomness. That is, ontological approaches typically seek an agent-dependent explanation based upon the unpredictability of individual measurement outcomes as a function of an epistemic limit, which—in the present analysis—is introduced as ‘effective ignorance’ (Figure 3B).

Importantly, the approach towards an “effective randomness”—by way of the concept of ‘effective ignorance’—is an option that can be consistently adopted if agent and universe are not metaphysically separated entities, as suggested by the open line in Figure 3B (for details see legend to Figure 3). This is in contrast to the orthodox view shown in Figure 3A, where the agent stands in a physically isolated (quasi-transcendent) position towards the rest of the physical universe. For explanation, in the orthodox interpretation of quantum indeterminism, the agent is presumed to be capable of somehow initiating new cause-effect chains “out of nothing”, i.e., in violation of Leibniz’s Principle of Sufficient Reason (compare Section 5.1). This extra-physical agentic power is reminiscent of Maxwell’s demon-agent, who was—falsely—thought to be unconstrained by the Laws of Nature, such as by the Second Law of Thermodynamics (see Section 4.2). This isolated, or dualistic, notion of agency in the orthodox picture is indicated by the closed line in Figure 3A (for details see legend to Figure 3).

The essential point of ‘effective ignorance’ is the following (Figure 3B): if the complete initial conditions of some deterministic system could be obtained, then the exact prediction of outcome states is possible—at least in principle. An example is a computer-generated pseudorandom bit sequence that becomes fully predictable once the (random) seed, i.e., the initial condition, as well as the algorithm which is used to generate the bit sequence from the seed, is known to the scientific agent. By analogy, having complete knowledge of initial conditions, the properties of a (deterministic) quantum state could be computed, e.g., for the purpose of prediction and control, even with only finite computational resources. Significantly, in the case of effective ignorance—when discrete events are finite—access to initial conditions (compare the “seed” above) is technologically impractical, but there exists no formal limit that fundamentally constrains access to the complete initial state. For explanation, the definition of finite resources includes the whole universe as a finite resource, which—again—imposes an in-practice, effective limit, but not an in-principle, objective limit. In summary, the notion of “effective” quantum randomness as a result of the weak epistemic option is—at least in principle—computable by a Turing machine, even if the whole universe is to be recruited as a super-computational resource to achieve quantum predictability.
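The pseudorandom-sequence analogy can be made concrete with a minimal Python sketch. The seed value, sequence length, and choice of generator below are illustrative assumptions, not details from the text; the point is only that once both the seed (the initial condition) and the algorithm are known, the ‘random’ stream is exactly reproducible.

```python
import random

def pseudorandom_bits(seed: int, n: int) -> list:
    """Deterministically expand a seed (the 'initial condition') into n bits."""
    rng = random.Random(seed)  # the generating algorithm
    return [rng.getrandbits(1) for _ in range(n)]

# To an observer ignorant of the seed and the algorithm, the stream looks random:
observed = pseudorandom_bits(seed=42, n=16)

# An agent with complete knowledge of the initial condition and the
# algorithm predicts every single bit -- effective, not objective, ignorance:
predicted = pseudorandom_bits(seed=42, n=16)
assert observed == predicted
```

Only the agent’s ignorance of the seed, not any formal limit, stands between the ‘random’ stream and its full predictability.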

#### 5.2.1. Understanding John Bell’s Concept of “Free Variables” for Quantum Mechanics

The weak epistemic option of effective ignorance is consistent with, and explains, Bell’s own proposal of effectively “free variables” [79]. “I would expect a serious theory to permit… ‘pseudorandomness’ for complicated subsystems (e.g., computers),” Bell [65] suggested, “…which would provide variables sufficiently free for the purposes at hand.” In addition, Bell provided the following explanation [79]:

“Consider the extreme case of a ‘random’ generator which is in fact perfectly deterministic in nature—and, for simplicity, perfectly isolated. In such a device the complete final state perfectly determines the complete initial state—nothing is forgotten. And yet for many purposes, such a device is precisely a ‘forgetting machine’. A particular output is the result of combining so many factors, of such a lengthy and complicated dynamical chain, that it is quite extraordinarily sensitive to minute variations of any one of many initial conditions. It is the familiar paradox of classical statistical mechanics that such exquisite sensitivity to initial conditions is practically equivalent to complete forgetfulness of them.”

This in-practice limit, which Bell [65,78,79] had argued for, does not, however, deny the theoretical possibility that the evolution of a deterministic system could be (computationally) predicted—at least in principle—if it were possible to access and determine “the complete initial state” [79]. By contrast, under the assumption that there exists a fundamental limit on computability and agent knowledge about the initial state (compare Section 5.3), that theoretical possibility would be denied also. Although Bell did mention ‘deterministic chaos’ in the context of ‘pseudorandomness’ [65], he did not propose that chaotic dynamics may represent a limit in any fundamental sense. On that specific point, the present work revises the conclusions of an earlier discussion of Bell’s effectively free-variables concept [27,60].

Sometimes an additional principle is invoked to enhance the powers of the weak option of effective ignorance (e.g., Aharonov et al. [88]): the Uncertainty Principle prevents the simultaneous determination, with arbitrary precision, of, e.g., particle properties, thereby precluding a complete characterization of the relevant initial conditions at the same instant of time. However, the concept of ‘uncertainty’ is an operational, epistemic notion also, and the physical foundations of the Uncertainty Principle likewise remain to be identified (e.g., Rozema et al. [89]). Summarizing, the weak epistemic option represents an instance of subjective agent-inaccessibility, because that option depends upon the incomplete state of knowledge of the experimenter agent, i.e., upon an “uncertainty” about the physical universe, including about initial conditions. However, note that even if the entire universe were available as a super-computational resource, the presence of a black-hole singularity, for example, might render impossible even the purely theoretical prospect—in the weak epistemic option—of the cosmic computability of an individual quantum measurement outcome.

#### 5.2.2. Criticizing the Weak Option Interpretation

The weak option described above has often been criticized on the grounds that quantum randomness cannot possibly be a function of merely some in-practice limit on agent knowledge (Figure 3B). That skeptical position is echoed, for example, by Bub [35], who noted that the quantum probabilities that describe the “nonlocal probabilistic correlations that violate Bell’s inequality” must be “intrinsically random events”, and that these probabilities “do not quantify incomplete knowledge about an ontic state (the basic idea of ‘hidden variables’).” For a counterpoint to Bub’s skeptical position, consult, for example, Figure 1 and Figure 2 in the present article (Section 5.1). Finally, Bub [35] also reaffirmed the popular position that this very fact in particular “…means that quantum mechanics is quite unlike any theory we have dealt with before in the history of physics.”

Indeed, the perceived uniqueness of quantum mechanics, and its supposed ‘weirdness’, is often cited as an “explanation” for strange or surprising features that are encountered in quantum studies involving single-particle observations. Specifically, concepts such as superposition (e.g., Schrödinger’s cat) and objective chance (i.e., intrinsic randomness)—in the form of objectively unpredictable measurement outcomes—are presumed to operate exclusively in the domain of the quantum, but never in the classical domain. However, might equally ‘weird’ phenomena be produced by entirely classical systems? One example is the notion of ‘undecidable dynamics’ in classical systems as a function of self-referential systems dynamics. The present work introduces self-referential dynamics as a novel explanation that might underpin the physics of agent inaccessibility (see Section 5.3). This third and final option counters the idea that what distinguishes a quantum from a classical system is the capacity to generate objectively unpredictable outcomes.

#### 5.3. Ontological Quantum Mechanics: “Objective Ignorance in Global Determinism”

The hypothesis that objective ignorance, as opposed to effective ignorance, can be the source of the unpredictability of individual quantum events in a deterministic system represents the strong ontological option for explaining the physics of agent inaccessibility. Specifically, it had previously been proposed that agent inaccessibility in ontological quantum mechanics might be due to the limit that “…self-referential processes may generate physical observables whose values are universally uncomputable, i.e., their computation would require an infinite amount of computational resources” (Walleczek [60]). Briefly, the key feature of a nonlinear dynamical process called ‘self-referential’ is that a system output becomes a new input for the system within the same system (e.g., Walleczek [90]). In dynamical chaos, the constant action of feedback loops (recursive processes) is responsible for the generation of the chaotically evolving dynamics. In physical systems that can be characterized by undecidable dynamics, self-referential, recursive processes are, again, responsible for the objective unpredictability of outcome states. Importantly, the presence of self-referential dynamics (see Table 1 below) can be identified both in concrete physical systems and in the computational models that describe them.
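The phrase ‘a system output becomes a new input’ can be illustrated with the logistic map, a standard textbook example of self-referential (recursive) chaotic dynamics. The parameter value r = 4, the initial condition, and the perturbation size below are illustrative assumptions introduced here, not details from the text.

```python
def logistic_step(x: float, r: float = 4.0) -> float:
    """One self-referential update: the output is fed back as the next input."""
    return r * x * (1.0 - x)

def max_divergence(x0: float, delta: float, steps: int) -> float:
    """Largest separation between two trajectories started delta apart."""
    a, b = x0, x0 + delta
    worst = abs(a - b)
    for _ in range(steps):
        a = logistic_step(a)  # output of step n becomes the input of step n+1
        b = logistic_step(b)
        worst = max(worst, abs(a - b))
    return worst

# A difference of one part in 10^12 in the initial condition is amplified
# to order one within roughly a hundred recursive steps:
print(max_divergence(0.2, 1e-12, 100))
```

This exponential sensitivity is why any finite-precision record of the initial state ‘forgets’ the trajectory, in Bell’s sense, while exact knowledge of it would still, in principle, determine every outcome.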

The strong option based upon the fundamental uncomputability of outcome states—as a necessary and sufficient criterion for objective ignorance—is illustrated in Figure 4B. This proposal is contrasted with the orthodox position of intrinsic randomness shown in Figure 4A. Importantly, two different types of self-referential dynamics are currently known to support the concept of formal uncomputability—dynamical chaos and undecidable dynamics; each type posits the lack of infinite resources as a fundamental limit on computability (see Table 1). The question of the physical plausibility of the notion of formal uncomputability in the account of the objective unpredictability of quantum processes in nature will be discussed in Section 6.

A key distinguishing feature of the concept of objective ignorance—in contrast to that of effective ignorance—is the following (Figure 4B): even if the complete initial conditions of some deterministic system could be obtained, the exact prediction of outcome states would still be impossible—even in principle. That is, in the option of objective ignorance (Figure 4B), the lack of infinite computational resources as a criterion places an objective limit on the experimenter agent as a function of undecidable dynamics (see Table 1), which, as Bennett [91] put it, is dynamics that is “…unpredictable even from total knowledge of the initial conditions”. This type of objective unpredictability is exemplified also in the halting problem for Turing machines, with the essential point being that Turing machines “…are unpredictable”, as Moore [92] noted, “even if the initial conditions are known exactly”.
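The flavor of dynamics that is ‘unpredictable even if the initial conditions are known exactly’ can be illustrated, by analogy rather than as a proof, with the Collatz (3n+1) map: the initial condition is an exactly known integer, yet no general shortcut past step-by-step simulation of the self-referential iteration is known, and whether every trajectory halts at 1 is a famous open problem. The example is an illustration introduced here, not drawn from the original text.

```python
def collatz_steps(n: int) -> int:
    """Iterate the 3n+1 map until reaching 1, counting the steps.

    The initial condition n is known with perfect precision, yet the only
    known general way to learn the outcome is to run the recursion itself."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, while the nearby input 28 takes only 18
```

Exact knowledge of the initial state here does not translate into any computational shortcut to the outcome, which is the intuition behind undecidable dynamics.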

A second key distinguishing feature which is covered by the strong option of objective ignorance, but not by effective ignorance (Section 5.2), concerns the emergence of dynamical chaos in physical systems. Importantly, due to the theoretical impossibility of gathering information with infinite precision about the initial state from which a dynamically chaotic system evolves, an objective limit is imposed on the computability of the system’s outcome states. For explanation, note that arbitrarily small differences in initial conditions may generate strongly divergent outcome states in computational models of dynamical chaos (see Table 1).

Because the strong option is also a knowledge-constraining option, the term ‘ignorance’ has been retained as part of the present proposal of an AIP for quantum mechanics. However, in contrast to effective ignorance, in the concept of objective ignorance, agent knowledge is not incomplete in the sense that gathering more information about initial conditions, or amassing more computational power, might eventually lead to complete knowledge and total predictability. Instead, an in-principle limit guarantees the incompleteness of agent knowledge, and therefore the agent’s inability to control and predict even a single quantum measurement outcome is ensured (see Table 1).

Therefore, the concept of objective ignorance represents an instance of objective agent-inaccessibility, which—obviously—is a more restrictive notion than subjective or effective agent-inaccessibility. Accordingly, the difference between the effective non-signaling constraint (Figure 3B) and the objective non-signaling constraint (Figure 4B) is that the latter constraint adopts a fundamental, and not a practical, limit on complete agent access towards an ontic state λ, and towards quantum information transfers, in ontological quantum mechanics in general. For example, this holds true for (SW) quantum ontologies that are locally time-symmetric [12,13,14,15,16,29,30,31,32], locally time-asymmetric [45,46,47], or strictly nonlocal [1,2,3,4,5,6,7,8]. Finally, the principle proposed here (AIP) is fundamental in the sense that only a Turing oracle could predict the exact value of an individual outcome state as a function of physical-system and computational-model evolution. The strong option of objective ignorance (Figure 4B) might represent a fundamental principle by which nature denies the experimenter agent access in the quantum regime. In the subsequent Section 6, a selection of available views and results is reviewed briefly which may support the present proposal for an AIP based upon the concepts of objective unpredictability, undecidability, and uncomputability.