Article

Injecting Observers into Computational Complexity

by
Edgar Graham Daylight
a.k.a. Karel Van Oudheusden, Department of Computer Science, KU Leuven, Celestijnenlaan 200a, Box 2402, 3001 Leuven, Belgium
Philosophies 2025, 10(4), 76; https://doi.org/10.3390/philosophies10040076
Submission received: 26 February 2025 / Revised: 24 April 2025 / Accepted: 25 April 2025 / Published: 26 June 2025
(This article belongs to the Special Issue Semantics and Computation)

Abstract

We characterize computer science as an interplay between two modes of reasoning: the Aristotelian (procedural) method and the Platonic (declarative) approach. We contend that Aristotelian, step-by-step thinking dominates in computer programming, while Platonic, static reasoning plays a more prominent role in computational complexity. Various frameworks elegantly blend both Aristotelian and Platonic reasoning. A key example explored in this paper concerns nondeterministic polynomial time Turing machines. Beyond this interplay, we emphasize the growing importance of the ‘computing by observing’ paradigm, which posits that a single derivation tree—generated with a string-rewriting system—can yield multiple interpretations depending on the choice of the observer. Advocates of this paradigm formalize the Aristotelian activities of rewriting and observing within automata theory through a Platonic lens. This approach raises a fundamental question: How do these Aristotelian activities re-emerge when the paradigm is formulated in propositional logic? By addressing this issue, we develop a novel simulation method for nondeterministic Turing machines, particularly those bounded by polynomial time, improving upon the standard textbook approach.

1. Introduction

Scott Aaronson champions an integration of philosophy with computational complexity, envisioning philosophy as a “scout” that explores and maps out “intellectual terrain for science to later move in on, and build condominiums on …” [1] (p. 6, original emphasis). Inspired by this perspective, we explore the interplay between Aristotelian, step-by-step thinking and Platonic, static reasoning in computational complexity. These two forms of reasoning are exemplified in the first and second quotes, respectively, which serve as focal points for our discussion.
  • Lance Fortnow as an Aristotelian:
    A Turing machine has a formal definition but that is not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. …I feel it humming along, updating variables, looping, branching, searching until it arrives at its final destination and gives an answer.1
  • Robin K. Hill as a Platonist:
    A Turing Machine is a static object, a declarative, a quintuple or septuple of the necessary components. The object δ that constitutes the transition function that describes the action is itself a set of tuples. All of this is written in appropriate symbols, and just sits there.2
We will examine these two modes of reasoning—Aristotelian, step-by-step thinking and Platonic, static reasoning—with regard to nondeterministic Turing machines (TMs).
In addition to likening philosophy to a scout, Aaronson advocates for a science rooted in objective observations concerning TMs, rather than those that rely on subjective, human-dependent interpretations. Specifically, he emphasizes that the behavior of a TM must be regarded as an objective fact. He articulates this perspective as follows:
To me, if we are not willing to say that a given Turing machine M either accepts, rejects, or runs forever (when started on a blank tape)—and that which one it does is an objective fact, independent of our formal axiomatic theories, the laws of physics, the biology of the human brain, cultural conventions, etc.—then we have no basis to talk about any of those other things. Ref. [2] (Section 12, our emphasis)
Here, we find ourselves in potential disagreement with Aaronson, as his view appears to overlook a meaningful distinction regarding the interpretation of a halting TM’s behavior. Consider, for instance, Observer 1, who treats the $q_{accept}$ and the $q_{reject}$ states of the machine as accept and reject states, respectively, and Observer 2, who interprets them as reject and accept states, instead. Crucially, the same (syntactic) computation performed by the machine can yield different (semantic) results depending on the choice of the observer. Nothing prevents us from formalizing both observers within automata-theoretic (i.e., set-theoretic) terms, while remaining faithful to Platonic convictions.
Our example of Observer 1 and Observer 2 is one of many relevant to the growing body of work on “polycomputing”, as conceptualized by Joshua Bongard and Michael Levin. They describe this phenomenon as follows:
[L]iving systems perform multiple functions in the same place at the same time. We refer to this as “polycomputing”—the ability of the same substrate to simultaneously compute different things, and make those computational results available to different observers. This ability is an important way in which living things are a kind of computer, but not the familiar, linear, deterministic kind …To develop our understanding of how life performs polycomputing, and how it can be convinced to alter one or more of those functions, we can first create technologies that polycompute and learn how to alter their functions. Ref. [3] (Abstract)
In contrast to Bongard and Levin, who focus on biological systems and their computational substrates, we aim to investigate the implications of polycomputing for computational complexity. We posit that the semantics of a (syntactic) computation, performed by a TM, has always implicitly relied on an observer [4]. With the advent of multiple observers, it becomes essential for theorists to explicitly account for the observer’s role.
In this paper, we pursue a dual focus: on one hand, we delve into a specific niche within polycomputing known as “computing by observing” [5], and on the other, we broaden our perspective from deterministic to nondeterministic TMs. Aligning at least partially with Aaronson’s outlook, we assume that all observers do follow the conventional interpretation, in which $q_{accept}$ and $q_{reject}$ unequivocally denote acceptance and rejection, respectively. Even in this structured setting—where observers will monitor the behavior of a string-rewriting system—we will uncover results that may challenge the prevailing expectations of specialists in computational complexity. Specifically, we will obtain a new method to simulate a nondeterministic polynomial time TM in propositional logic.
Definition 1. 
A Boolean formula in conjunctive normal form (cnf) is referred to as a 3cnf-formula if each clause consists of exactly three literals. If each clause contains at most two literals, the formula is called a 2cnf-formula. A Horn formula is a cnf-formula where every disjunction contains at most one positive literal, cf. [6] (p. 34). We refer to a formula as a 2cnf-Horn formula if it qualifies as both a 2cnf and a Horn formula.
Remark 1. 
Determining the satisfiability of 2cnf-formulas or Horn formulas is substantially more efficient than for genuine 3cnf-formulas [7].
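To make Remark 1 tangible for readers who prefer executable illustrations, the following sketch (ours, and merely illustrative; the clause encoding and function name are hypothetical) decides the satisfiability of a Horn formula by the classical marking, or unit-propagation, procedure, which runs in polynomial time; no comparably simple procedure is known for genuine 3cnf-formulas.

```python
# Illustrative sketch (ours): Horn satisfiability via unit propagation / marking.
# A clause is a list of integer literals: i stands for a variable, -i for its negation.
# A Horn clause has at most one positive literal, which is what the greedy loop exploits.

def horn_sat(clauses):
    true_vars = set()                      # variables forced to True so far
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # Skip clauses already satisfied by the current assignment
            # (forced variables are True, all remaining variables are False).
            if any((lit > 0 and lit in true_vars) or
                   (lit < 0 and -lit not in true_vars) for lit in clause):
                continue
            positives = [lit for lit in clause if lit > 0]
            if not positives:              # a purely negative clause is violated
                return False
            true_vars.add(positives[0])    # the unique positive literal must become True
            changed = True
    return True

# (x1) and (not x1 or x2) and (not x2 or not x3) is satisfiable; adding (not x2) is not.
print(horn_sat([[1], [-1, 2], [-2, -3]]))        # True
print(horn_sat([[1], [-1, 2], [-2, -3], [-2]]))  # False
```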

1.1. Problem Statement

The literature on Computing by Observing formalizes the Aristotelian activities of rewriting and observing within automata theory and, ultimately, set theory [5,8,9]. This raises the question: How do these or similar Aristotelian activities emerge when the entire framework is recast in propositional logic? More specifically, how do they manifest within a tableau-based approach commonly used in standard treatments of NP-completeness?

1.2. Results

This exploration leads to a novel and efficient simulation method for nondeterministic TMs, particularly those bounded by polynomial time. Given an arbitrary input w and a nondeterministic polynomial time TM N, we demonstrate that a hypothetical accepting computation path of N on w can be fully represented in 3cnf using Horn formulas in conjunction with a single non-Horn formula, $\phi_{cell}$. This marks a conceptual improvement over the current state of the art, which requires two non-Horn formulas, $\phi_{cell}$ and $\phi_{step}$. Our findings suggest a need to revisit the concept of NP-completeness [10,11].

1.3. Methodology

This paper aims to bridge the philosophy of computing with the mathematical foundations of NP-completeness. It begins with a philosophical prelude, before focusing on its core contribution in constructive mathematics: the development of an efficient simulation technique. Throughout, we assume the reader is familiar with the standard textbook treatment of NP-completeness.

1.4. Outline

The rest of this article is structured as follows: Section 2 highlights key aspects of the philosophy of computing, while Section 3 explores the Computing by Observing paradigm. The novel simulation technique is detailed in Section 4, followed by our closing remarks in Section 5. Additionally, Appendix A provides rigorous definitions related to Turing machines.

2. Prelude

We first explore the philosophical literature from a broad, high-level perspective (Section 2.1) before transitioning to a more detailed examination of computer programming, nondeterministic Turing machinery, and NP-completeness (Section 2.2).

2.1. An Aerial Perspective

Following Øystein Linnebo and Stewart Shapiro [12], we distinguish between two schools of reasoning. The first school, rooted in Aristotle and extending through Euclid up to Cantor, supports a step-by-step (dynamic) reasoning approach involving potential infinity.3 For example, Linnebo and Shapiro state about Euclid that:
Most of the language in the Elements is dynamic, talking about what a (presumably idealized) geometer can do
[Quoting Euclid:] That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on the side on which are the angles less than the two right angles. Ref. [12] (pp. 163–164)
The second school, led predominantly by Cantor and carried through Hilbert to the majority of contemporary mathematicians, embraces a static ontology of mathematics, aligning with Plato and Plotinus. This perspective incorporates the concept of an actual infinity.4 For instance, the dynamic language used by geometers (and, later, by Euclid in particular) was, from Plato’s perspective, “inconsistent with the nature of the true subject of geometry” [12] (p. 164). Specifically, quoting from Republic, VII, Plato stated that:
[The] science [of geometry] is in direct contradiction with the language employed by its adepts …Their language is most ludicrous, …for they speak as if they were doing something and as if all their words were directed toward action …[They talk] of squaring and applying and adding and the like …whereas in fact the real object of the entire subject is …knowledge …of what eternally exists, not of anything that comes to be this or that at some time and ceases to be. Ref. [12] (p. 164)
The crux is that Linnebo and Shapiro draw a clear distinction between two philosophical approaches: “a traditional Aristotelian form of potentialism”, which posits that at any given stage of mathematical inquiry, there are only finitely many objects, yet the capacity always exists to generate more indefinitely; and “actualism”, which unequivocally embraces the existence of actual infinities [12] (p. 168).
In this paper, we link Aristotelian potentialism (Aristotelianism) to a first-person perspective in mathematics, in which the mathematician—or their mathematical machine—engages in a step-by-step progression reminiscent of spatio-temporal reasoning. This may stand in contrast to Linnebo and Shapiro’s intentions. Conversely, we associate actualism (Platonism) with a third-person perspective where the mathematician acts as a narrator, describing a Platonic realm that exists independently of human beings and the step-by-step progression of their machines, unconstrained by finitude.
When a Turing machine’s step-by-step behavior is described from an elevated, human-independent perspective, we associate this operational description with Aristotelianism. However, the reader is free to interpret it as a hybrid of Aristotelianism and Platonism depending on their philosophical inclination. Our narrative does not hinge on any distinction in absolute terms, but rather on the relative comparison of what Aristotelianism and Platonism mean to us within the context of computing.
To be more precise, and drawing on the synthesis offered by philosopher Walter Dean, we capture the essence of nondeterminism through his three concise sentences:
  • Non-deterministic machines are sometimes described as making undetermined ‘choices’ among different possible successor configurations at various points during their computation.
  • However, what the …definitions actually describe is a tree …of all possible computation sequences starting from a given configuration $C_0$ for a non-deterministic machine N
  • It will generally be the case that all of the branches in [the tree] must be surveyed in order to determine N’s decision about the input …Ref. [7] (Section 3.1, original emphasis)
Dean’s first sentence is active and Aristotelian, while his second is Platonic, surveying the landscape of all computations that exist ‘out there’, independent of human involvement. His third sentence bridges this Platonic tree of all possible computations with what we, as human beings, seek to extract from it in our ongoing theoretical discourse, which, once again, will hinge on both Aristotelian and Platonic reasoning.
Revisiting Lance Fortnow’s discussion of TMs, we observe that his opening sentence acknowledges a Platonic framework, while the second shifts toward an Aristotelian mode of expression:
A Turing machine has a formal definition but that is not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step.5
Going forward, we characterize Fortnow’s approach to TMs as hybrid—if not entirely consistent—with the Platonic convictions upheld by Hill and Aaronson.

Looking Under the Hood

A deeper exploration of the literature leads us to three pivotal contributions: David Tall’s comprehensive study on mathematics education, which provides valuable insights into the historical evolution of mathematics [15]; Walter Dean’s examination of algorithmic realism—the view that algorithms are mathematical entities [16]; and Ray Turner’s work on computational abstraction [17]. Together, these works provide a basic understanding of some key tensions shaping contemporary philosophy of computing.
David Tall traces an emancipatory movement in mathematics, marked by a transition from “practical methods based on natural experiences” (pre-Cantor) to a Hilbertian ideal (post-Cantor) of “theoretical mathematics grounded in definitions and proofs” [15] (p. 147). He characterizes the work of Cantor and his successors as a “total reversal” in approach:
Instead of studying objects or operations that have (natural) properties, the chosen properties (axioms) are specified first and the structure is shown to have other properties that can be deduced from the axioms. Ref. [15] (p. 149, original emphasis)
This shift underscores the ascendancy of abstract, axiomatic reasoning in modern mathematics. However, while mathematics may have asserted its independence from physics and engineering, we briefly question whether this independence is as complete as it appears. Specifically, what does this imply for computer science, where the assertion that “A Turing machine can do everything that a real computer can do”—to quote Michael Sipser [18] (p. 137)—suggests that set theory, the foundation of computational complexity, retains a direct connection to physics and engineering after all? While this observation concerning abstract theory and practical application invites extra reflection, we will not go into these matters here.
Further explorations lead us to Walter Dean’s work on algorithmic realism, which aims to elevate results from computational (Aristotelian) methods to the same level of legitimacy as those from classical (Platonic) mathematics by integrating step-by-step reasoning into a set-theoretic framework. Dean’s inquiry revolves around the following question:
[W]hether classical mathematics possesses the resources to interpret the operational discourse itself in a manner which is faithful to our understanding of the mode of operation of the individual algorithms in question … [16] (Section 2)
Without delving into technical specifics, Dean ultimately concludes that there appears to be no feasible way to substantiate algorithmic realism in a manner fully consistent with the practices of contemporary theorists, including those in computational complexity.
Interestingly, a parallel tension emerges in the realm of programming language design, where computational and set-theoretic concepts clash (at least) occasionally. While Ray Turner employs a set-theoretic framework in his analysis, he cautions that “the use of set theory as the appropriate medium for the formalization of computational notions—such as the computer science notion of type—is not unproblematic” [17] (Section 17, slightly paraphrased). In contrast to Tall’s synthesis, Turner advocates for “a more radical approach in which the computational notion of type is taken as the fundamental one” [17] (Section 17). In this view, Platonic mathematics is relegated to a secondary role, with Aristotelian computability taking precedence.
In alignment with our understanding of set theory and some of Aaronson’s Platonic convictions, we will adopt Tall’s perspective as a guiding framework in this paper. Within computational complexity, Platonic mathematics retains primacy, while Aristotelian step-by-step reasoning—as exemplified in algorithmic thinking and computer programming—serves as a metaphor, one that we hope proves more illuminating than misleading.

2.2. A Ground-Level Perspective

While we prioritize a Platonic, static ontology, programmers naturally tend toward the Aristotelian approach. A notable example is Robert W. Floyd, who demonstrates this preference in his 1967 paper Nondeterministic Algorithms [19]. We shall discuss Floyd’s views next (Section 2.2.1), followed by nondeterministic Turing machinery (Section 2.2.2), and NP-completeness in particular (Section 2.2.3).

2.2.1. Computer Programming

For Floyd, step-by-step reasoning forms the cornerstone of his algorithmic framework, with the concept of “nondeterminism” acting as a mental tool. He describes nondeterministic algorithms as “conceptual devices” that “simplify the design of” sequential backtracking algorithms by allowing “considerations of program bookkeeping …to be ignored” [19] (p. 636). In this way, Floyd’s work highlights the practical utility of Aristotelian reasoning in programming, even as it contrasts with a static ontology underpinning modern mathematics.6
Floyd’s “nondeterministic algorithm” offers multiple ways to engage with the spatio-temporal world of programming. He writes:
In general, there may be many ways to execute a nondeterministic algorithm, carrying out all assignments, branches, etc., in the conventional way, and making an arbitrary selection from the set of possible values each time a multiple-valued function is encountered. Ref. [19] (p. 636)
Floyd’s “multiple-valued function” is comparable to a “choice point”, as he explains:
One may use a multiple-valued function, choice(X), whose values are the positive integers less than or equal to X …Ref. [19] (p. 636)
As an Aristotelian programmer, Floyd swiftly translates the equivocal concept of a “multiple-valued function” into procedural terms:
One implementation of $X \leftarrow choice(f)$ saves the original value of X, and assigns f to X. After all possible computations with any particular value of choice(f) have been tried, the next smaller value is tried. When all values have been tried, the original value of X is restored and backtracking continues. Ref. [19] (p. 640)
A declarative, third-person perspective feels distant when reading Floyd’s dynamic language. However, we reiterate that Floyd does not associate “nondeterminism” an sich with an Aristotelian step-by-step interpretation. Instead, the term refers to a convenient mental tool, if not a Platonic-like form. In his own words:
[I]t is perhaps desirable to make clear that nondeterministic algorithms are not probabilistic, random or Monte Carlo algorithms. Rather, they are convenient representations of systematic search procedures. Ref. [19] (p. 644, our emphasis)
Finally, turning now to Floyd’s most insightful reflections, he writes as follows:
From one point of view, a nondeterministic algorithm represents a method of thinking of computer programs as being in part governed, not by efficient causes (causes which precede their effects) but by final causes (goals: causes for the sake of which their effects are carried out). Achievement of success and avoidance of failure is the goal of a nondeterministic algorithm, or, more precisely, of its imagined processor. Ref. [19] (p. 644, our emphasis)
The phrase “in part” in Floyd’s first sentence suggests, once again, that in the standard deterministic case, an algorithm represents a way of thinking of computer programs as being fully governed by efficient causes—consistent with Aristotelian step-by-step thinking. In the case of nondeterministic algorithms, the picture becomes blurred: it is no longer entirely governed by efficient causes but to some extent also by final causes. In his final sentence, he emphasizes that such a conceptual tool does not align with the real world: it refers to an “imagined processor” rather than an actual one.
Floyd’s blurring allows him to adopt a declarative perspective on the entire landscape of sequential computations after all. For instance, he writes:
One may say of the nondeterministic algorithm for the four queens problem …that when col is equal to 1, row will never be chosen equal to 1 in any computation of the algorithm, because there are no solutions having a queen in the corner, and the goal of the processor is to find a solution. We may say that these algorithms are nondeterministic, not in the sense of being random, but in the sense of having free will. Ref. [19] (p. 644)
In our view, Floyd has shifted partially from Aristotelian reasoning toward a more abstract approach. Arguably, he does not completely abandon a spatio-temporal connotation, as he continues to reference “free will” when characterizing nondeterminism.
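Floyd’s description of choice and backtracking can be made concrete with a small sketch of our own (it is not Floyd’s code, and the function names are hypothetical): a nondeterministic “program” places one queen per column, choice is realized by systematically trying each row, and failure triggers backtracking to the most recent untried alternative.

```python
# Illustrative sketch (ours, not Floyd's): choice(n) realized by systematic backtracking,
# applied to the four queens problem that Floyd discusses.

def solve(n=4, placed=()):
    col = len(placed)
    if col == n:
        return placed                               # success: all queens placed
    for row in range(1, n + 1):                     # choice(n): try rows 1..n in order
        if all(row != r and abs(row - r) != col - c
               for c, r in enumerate(placed)):      # no shared row or diagonal
            result = solve(n, placed + (row,))
            if result is not None:
                return result                       # propagate success upward
    return None                                     # failure: backtrack to an earlier choice

print(solve())   # (2, 4, 1, 3): no queen in a corner, in line with Floyd's observation
```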
Do specialists in computational complexity conceptualize nondeterminism in an even more static manner than Floyd, as a programmer and computer scientist, did? This question lies at the heart of the next section.

2.2.2. Nondeterministic Turing Machinery

We provide a rigorous definition of nondeterminism following Michael Sipser’s 2006 textbook account (see Definition 2 below), which consists of five sentences. The first sentence describes a static object, a declarative that just sits there, borrowing Robin K. Hill’s characterization once more. In contrast, the second sentence is dynamic and pertains a bit more to an Aristotelian first-person perspective of a mathematician (or, more precisely, a mathematical machine). The third sentence, discussing the transition function and power set, is Platonic, similar to the first sentence. The fourth sentence, part of which is underlined, also appears to be purely static, surveying the entire landscape of computations that exist independently of human involvement. However, we will return to this shortly. The fifth and final sentence connects the Platonic view of all possible computations with what we, as human beings, aim to extract from it in our ongoing theoretical discourse.
Definition 2. 
A nondeterministic Turing machine $(Q, \Gamma, \Phi, \delta, q_0, q_{accept}, q_{reject})$ is a 7-tuple. At any point in a computation, the machine may proceed according to several possibilities. The transition function for the machine has the form $\delta : Q \times \Phi \to \mathcal{P}(Q \times \Phi \times \{+, -\})$, where $\mathcal{P}$ denotes the power set. The computation of the machine is a tree whose branches correspond to different possibilities for the machine. If some branch of the computation leads to the state $q_{accept}$, and thus halts in $q_{accept}$, the machine accepts its input.7
Let us now use the images in Figure 1 to scrutinize the underlined words in Definition 2. At first glance, the description given with the underlining seems to be purely Platonic, capturing the entire landscape of sequential computations, as illustrated in image (a) in Figure 1. Alternatively, readers might (unconsciously) impose arrows onto the line segments, transforming the Platonic-like form into a hybrid Aristotelian–Platonic representation, as shown in (b). In image (b), the tree is read from top to bottom, adhering to spatio-temporal reasoning reminiscent of Aristotelian thought.
Upon further reflection, image (a) is not purely Platonic after all. While temporal reasoning is discouraged due to the absence of arrows, it is still present. The issue with (a) is that each node in the tree is distinct from the others. This generally only makes sense if each node is associated not only with a machine configuration but also with a unique time stamp, as a machine configuration can belong to two (or more) sequential computations, as depicted in image (c). So we take (c) to be more faithful to the intended meaning of the underlined words in Definition 2 than (a).
Finally, the landscape in (c) is not as general as desired either, as it too still suggests a top-down reading in line with a temporal perspective. This leads us to representation (d), which incorporates a loop within the computation. Now, the informed reader is (hopefully) much less inclined to consider the landscape with time in mind—especially if we rotate the graph 90° clockwise, resulting in representation (e). Here, edges can be traversed in any direction or, preferably, not at all; the landscape exists independently of human or machine involvement, and is just ‘out there,’ as a declarative.
The takeaway is that the underlined words in Sipser’s textbook definition (Definition 2) prompt the reader to adopt a hybrid Aristotelian–Platonic perspective, as shown in (a), even though the actual mathematical content is purely declarative, as depicted more faithfully in (e). The computation is not a tree, in the general case. Instead, it is a graph.
We are now equipped to interpret arrowed figures as Platonic representations when required. The direction of the arrows is arbitrary when it comes to appreciating the declarative nature of the mathematics involved. For instance, describing a string-rewriting system as having “an input word” w, with other strings “derived from it” in line with tree (b), is merely a façon de parler for the die-hard set theorist (though not necessarily for the programmer). There is no issue in reading tree (b) backwards—from the bottom to the top—or, even better, without any temporal process in mind at all. Although it is often more accurate to reference image (e) when introducing a string-rewriting system from a purely Platonic perspective, we will not always be so explicit.
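The tree-versus-graph point can be illustrated with a small sketch of our own (the toy successor relation below is hypothetical and stands in for one nondeterministic machine step): when we enumerate what is reachable, the same configuration may occur on several branches, so the unrolled tree is strictly larger than the underlying graph of configurations.

```python
# Toy illustration (ours): reachable configurations form a graph, because one and the
# same configuration can be reached along different branches of the computation tree.

def successors(config):
    # Hypothetical stand-in for one nondeterministic step with a binary choice.
    return [config + 1, config + 2] if config < 4 else []

def tree_nodes(config):
    # Unrolled computation tree: a shared configuration is counted once per branch.
    return 1 + sum(tree_nodes(c) for c in successors(config))

def graph_nodes(config):
    # Graph of configurations: each distinct configuration is counted exactly once.
    seen, frontier = {config}, [config]
    while frontier:
        fresh = {c for cur in frontier for c in successors(cur)} - seen
        seen |= fresh
        frontier = list(fresh)
    return len(seen)

print(tree_nodes(0), graph_nodes(0))   # 15 versus 6: the tree repeats configurations
```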

2.2.3. NP-Completeness

When it comes to formalizing NP-completeness, the pinnacle of computational complexity, do specialists in this field start with an Aristotelian tree that must be actively unrolled? Or do they, similar to those set theorists who, like Plato, eschew dynamic language, and who may be unfamiliar with programming, begin with a Platonic graph? Let us delve into these questions.
Consider one of Sipser’s nondeterministic polynomial time TMs, say machine N, along with an input string w. Suppose further that N, on input w, has four disjoint computation paths. With this in mind, visualize both Graph (a) and Tree (b) in Figure 2. Graph (a) is static, devoid of human involvement. It is a picture of a relation (i.e., a set in set theory). In contrast, Tree (b) is dynamic, with one or more paths being unrolled by a mathematician or a symbolic engine. Computer programmers and complexity theorists alike describe Tree (b) with operational desiderata such as N’s list of machine instructions.
Semantic Graph (a) can be represented by syntactic Tree (b), or more specifically, by the instructions of machine N. Conversely, machine N or Tree (b) can be semantically interpreted as Graph (a). This perhaps illustrates how some set theorists unfamiliar with computer science would connect a particular type of graph in set theory (semantics) with nondeterministic Turing machinery in symbolic logic (syntax).
Specialists in computational complexity also bridge set theory and symbolic logic in a different way. They begin with the Aristotelian image (b) and then, by hypothesizing an accepting path in (b), construct Arrow (c). This arrow is formally captured, as per Stephen Cook’s 1971 work [20], with a propositional formula $A_w$ in conjunctive normal form—such that: $A_w$ is satisfiable iff N accepts w.8
Two remarks regarding the formula $A_w$ are worth emphasizing. First, $A_w$ encapsulates the behavior of a conjectural accepting path in Tree (b). That is, $A_w$ represents symbolic machinery in action, like the Aristotelian, albeit more abstractly, like the Platonist. The step-by-step reasoning for any accepting path in (b) is preserved in Platonic ice in (c). Second, due to this abstraction, $A_w$ allows the complexity theorist to switch from symbolic machinery in (b) to set theory in (d) in the manner depicted graphically in Figure 3.
Formula $A_w$ bridges the dynamic realm of nondeterministic logic engines (b) and the static realm of set theory (d). Given N’s instructions, thereby specifying Tree (b), the complexity theorist recasts (⇛) the corresponding tree’s essence in set-theoretic terms, represented in (d) as either $w \in L_N$ or $w \notin L_N$—depending on whether $A_w$ is satisfiable or not.
Notably, die-hard set theorists might insist on not mistaking (b) for (a). Their rationale is that the active logic machinery in (b) merely serves as a surrogate for the set theory in (a), with the latter already considered to exist independently. Consequently, verifying (d) reduces to a seemingly straightforward task: one need only examine the leaves of Graph (a) and check for the presence of a $q_{accept}$ state. Is there, then, an efficient method to query Graph (a) for a $q_{accept}$ state?
This Aristotelian question, posed by the set theorist, lies at the heart of nondeterministic TMs and, by extension, the concept of NP -completeness. We conclude by drawing a distinction between our initial interpretation and a more nuanced understanding that emerged through subsequent dialogue.
  • In our initial view, the standard textbook formalization of NP-completeness reflects a Platonic reinterpretation of an Aristotelian ‘correct guess’ within a Platonic graph of deterministic computations. This interpretation aligns with Tall’s notion of a “total reversal” in the history of mathematics: rather than treating the Aristotelian, step-by-step execution of a nondeterministic TM as ontologically primary, the approach begins set theoretically with Graph (a) and concludes set theoretically as well, with a statement about the membership (or non-membership) of string w in the formal language $L_N$.
  • In reply to a reviewer’s feedback, our revised interpretation recognizes that specialists in computational complexity typically adopt a more Aristotelian position: beginning with Tree (b), rather than with Graph (a), proceeding through (c), and arriving at the set-theoretic formalism in (d). From this perspective, Sipser’s emphasis on a tree structure in Section 2.2.2 is not only accurate but also conceptually foundational.
This second point prompts a clarifying response. The formalization of deterministic computation involves a sequence of instantaneous descriptions—each devoid of explicit time stamps. Similarly, nondeterministic computation is formalized without explicit temporal markers. To rigorously express the tree structure within set theory, one would need to augment each instantaneous description with a time stamp. While such an approach is entirely viable, it is not the convention followed in standard treatments. Instead, temporal ordering is implicit: the first description is understood to precede the second and is therefore naturally—though not necessarily—interpreted as occurring earlier in time.
Further abstraction from the Aristotelian notion of temporal progression is examined in the author’s Tableau with Holes [10], which complements the present discussion. We now return to our main line of inquiry, shifting focus from standard textbook treatments to the specialized domain of Computing by Observing.

3. Computing by Observing

For a devoted Platonist, a deterministic TM, in its essence, does not truly do anything. From a complementary, Aristotelian perspective, it is natural to envision the mathematical machine actively at work. This operational interpretation extends to nondeterministic TMs and rewriting systems, which align with the paradigm of Computing by Observing.
Currently, the paradigm is formalized within automata theory (Section 3.1). In this paper, we reframe it through the lens of propositional logic (Section 3.2), which in turn leads to the development of a novel simulation technique outlined in Section 4. A general understanding of this section suffices to engage with the technical content in Section 4. A synthesis is provided in Section 5.

3.1. Automata Theory

Consider the following rules of a string-rewriting system R.
$a \to A$,  $A \to b$,  $b \to B$,  $B \to c$
As an Aristotelian, we view the system as ‘deriving’ new strings from an input, such as $ab$. Some of the derived strings, like $AB$ and $Bc$, are shown on the left-hand side of Figure 4.
Introducing an observer of system R further accentuates the Aristotelian viewpoint. The observer can be likened to a scientist monitoring an experiment or observing a natural phenomenon in fields such as biology or chemistry.9 Specifically, the right-hand side of Figure 4 demonstrates Observer $\langle O_1, D_1 \rangle$ actively monitoring the derivations produced by the R system [5]. The monadic transducer $O_1$ maps (⤎) each derived string (e.g., $AB$) to a symbol (e.g., 2), which belongs to a word in the regular language of the decider $D_1$. The decider $D_1$ accepts a terminating derivation sequence only if the mapping adheres to the regular expression $I(1234)^+$. All other derivation sequences generated by R are pruned away.
The current example is drawn from Leupold [5] (Section 4.2), who presents two distinct interpretations of system R. In the first interpretation, we employ the monadic transducer $O_1$ and decider $D_1$ to accept the language $L_1 = \{ a^n b^n \mid n > 0 \}$. Alternatively, we can use another monadic transducer, $O_2$, and another decider, $D_2$, which together form a second observer, to accept the language $L_2 = \{ a^k \mid k \text{ is prime} \}$.
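For readers who want to experiment, the following sketch (ours; the two toy observers below are hypothetical and far cruder than Leupold’s $\langle O_1, D_1 \rangle$ and $\langle O_2, D_2 \rangle$) generates derivation sequences of R step by step and shows how two different observers read the very same sequence as two different observation words.

```python
# Toy sketch (ours): one derivation tree of the rewriting system R, two observers.
RULES = [("a", "A"), ("A", "b"), ("b", "B"), ("B", "c")]   # the rules of system R above

def one_step(word):
    """All strings derivable from `word` by rewriting exactly one occurrence."""
    out = set()
    for lhs, rhs in RULES:
        for i in range(len(word)):
            if word.startswith(lhs, i):
                out.add(word[:i] + rhs + word[i + len(lhs):])
    return out

def derivations(word, steps):
    """All derivation sequences of the given number of steps, starting from `word`."""
    seqs = [[word]]
    for _ in range(steps):
        seqs = [s + [w] for s in seqs for w in one_step(s[-1])]
    return seqs

# Two hypothetical observers, each mapping a derived string to one observation symbol.
observe_1 = lambda w: "u" if any(ch.isupper() for ch in w) else "l"
observe_2 = lambda w: str(len(set(w)) % 2)

for seq in derivations("ab", 3)[:3]:
    print(seq, "".join(map(observe_1, seq)), "".join(map(observe_2, seq)))
```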
Remark 2. 
Both the string-rewriting system R and a specific observer, such as $\langle O_1, D_1 \rangle$, are part of what Leupold calls an “accepting observer system”, formally defined in Daylight [9]. The language $L_1$, expressed in alphabet $\Delta$ and accepted by the accepting observer system $\langle \Delta, R, O_1, D_1 \rangle$, comprises all words w for which there exists some $s \in R(w)$—i.e., a terminating sequence s in the derivation tree, starting from input w—whose monitoring via $O_1$ is accepted by decider $D_1$. Formally:
$L_1 = \{\, w \mid \exists s \ (s \in R(w) \wedge D_1(O_1(s)) = \mathit{accept}) \,\}$
The mapping $O_1(w)$ assigns to each derived string w one of the observation symbols I, 1, 2, 3, 4, the empty word $\lambda$, or the pruning symbol $\perp$, depending on the regular set to which w belongs; for instance, $O_1(w) = I$ when $w \in a^+ b^+$. Observer $\langle O_1, D_1 \rangle$ prunes away sequences by having $O_1$ employ the symbol $\perp$, which does not appear in the words of $D_1$’s language. The full case analysis is given in Leupold [5].
In retrospect, the work of Leupold and his collaborators exemplifies polycomputing. The same nondeterministic system R produces different outcomes depending on the observer. We suggest that researchers in computational complexity consider these theoretical developments, as they do not challenge any Platonic convictions germane to set theory.
Polycomputing leads to Turing completeness: there exists a particular string-rewriting system, let us denote this with $\tilde{R}$, such that an enumeration of well-defined finite automata is sufficient to encompass the class RE of all recursively enumerable languages.
Theorem 1. 
(Cavaliere, Frisco, Hoogeboom [21]; Leupold [5]) There exists a context-free string-rewriting system $\tilde{R}$ such that for every r.e. language L there exists a monadic transducer $O_L$ such that the accepting observer system composed of $\tilde{R}$ and $O_L$ accepts L.
Notably, each language L in Theorem 1 corresponds to a deterministic TM M, i.e., $L = L(M)$. Optionally, we can clarify, as follows:
Remark 3. 
The alphabet $\tilde{\Delta}$ and the decider $\tilde{D}$ are fixed from the outset of Theorem 1. For each r.e. language L expressed in $\tilde{\Delta}$, there exists a corresponding accepting observer system, $\tilde{\Omega}_L = \langle \tilde{\Delta}, \tilde{R}, O_L, \tilde{D} \rangle$, where the input alphabet is $\tilde{\Delta}$. Both $O_L$ and $\tilde{D}$ are finite automata.
The generality of Theorem 1 with respect to the specific rewriting system $\tilde{R}$ is not needed for the remainder of this paper. Therefore, we will shift our focus away from Theorem 1 (Section 3.1.1). At the same time, we will extend our scope by transitioning from deterministic to nondeterministic machines (Section 3.1.2).

3.1.1. Shifting Away

We shift our focus from Turing completeness in Theorem 1 to examining one rewriting system and observer pair at a time. Reconsider, for example, the pair $\langle R, \langle O_1, D_1 \rangle \rangle$, which defines the language
$L_1 = \{ a^n b^n \mid n > 0 \}$.
For an Aristotelian, the observer $\langle O_1, D_1 \rangle$ actively filters derivation sequences, pruning away those instances where R’s rewriting rules deviate from the observer’s specifications.
In our $L_1$-example—illustrated in Figure 4 and spelled out in Leupold [5] (p. 437)—relevant derivation sequences involve transforming the leftmost a into A and, similarly, converting the leftmost b into B. If these transformations occur an equal number of times, it follows that the initial counts of a and b must have been identical. More precisely:
To accept an input word of the form a n b n with n > 0 , all rewritings occur in two distinct steps. For example, rather than converting b directly into c, it is first transformed into B and then into c. This deliberate two-step process ensures that each rewriting leaves a distinct mark—in this case, the intermediate letter B—within the derived string. The marking allows an observer to trace and validate the derivation sequence.
This kind of Aristotelian interplay between the rewriting system and the observer will play a crucial role in Section 4.

3.1.2. Zooming out

In addition to shifting away from Theorem 1, we also want to zoom out, from deterministic TMs M to nondeterministic TMs N. Specifically, let N denote an arbitrary nondeterministic TM that halts on each input. The present author has constructed a rewriting system $R_N$ and an observer $\langle O_{L(N)}, D \rangle$ such that the latter carves out of the former’s derivation tree a computation tree of N. More precisely:
Theorem 2. 
(Daylight [9]) For each nondeterministic Turing machine N that halts on all inputs, with associated running time t and accepting observer system $\Omega_{L(N)} = \langle \tilde{\Delta}, R_N, O_{L(N)}, D \rangle$, the following equivalence holds, for arbitrary input w and natural number m, with $m \le t(w)$:
N on w reaches $q_{accept}$ after exactly m steps
iff
$\exists s \in R_N(w)$, $\exists s_c \in C(R_N, w)$, such that: s is of the form $(w, \ldots, s_c, \ldots, s_{\mathit{accept}})$,
with $D(O_{L(N)}(s)) = \mathit{accept}$ and $|(s_c, \ldots, s_{\mathit{accept}})| = 3m + 1$.
Remark 4. 
Tuple s represents a derivation sequence of the rewriting system $R_N$ for the input word w. The string $s_c$ denotes a starting configuration, consisting of w followed by some finite number of appended blank symbols.
The core idea of Theorem 2 is that we can simulate N on a string w of length n using the string-rewriting system $R_N$ and the observer $\langle O_{L(N)}, D \rangle$, where each step of N is precisely replicated by three string-rewriting operations.
Interestingly, $R_N$ applies only relabeling rules (i.e., non-lengthening rules of the form $a \to b$) after the starting configuration $s_c$ has been established. These rules are inherently regular, functioning similarly to a finite-state automaton. Moreover, both $O_{L(N)}$ and D are finite automata by definition. This leads to a key insight concerning any nondeterministic polynomial time TM N:
If the running time of N is $O(n^k)$ for some constant k, the step-by-step simulation of N on w requires only regular control when starting from a configuration $s_c$ that includes w and $n^k - n$ blank symbols $\square$ appended to it. Source: Daylight [9].
Theorists may find this result particularly relevant for studying NP, as it establishes the groundwork for a novel simulation method for nondeterministic polynomial time TMs.

3.2. Propositional Logic

In the works of Leupold and Daylight, both the string-rewriting system and the observer are formalized through automata theory, thereby placing them within a static ontology of set membership. Philosophically, we raise the following questions: What happens if we reinterpret Daylight’s result—specifically his $R_N$ and $\langle O_{L(N)}, D \rangle$-observer from Theorem 2—within the framework of propositional logic? Do the Aristotelian activities of rewriting and observing reappear in this context, perhaps in a different form? And if so, what, if any, are the practical implications?
Current textbook simulation techniques of a nondeterministic polynomial time TM in propositional logic essentially rely on two 3cnf-formulas, $\phi_{cell}$ and $\phi_{step}$, in adherence to a tableau. These textbooks include Sipser [18] (p. 280), Papadimitriou [22] (Section 8.2), Hopcroft, Motwani, and Ullman [23] (p. 443), and Aaronson [1] (p. 61). These approaches trace back to Cook’s seminal paper [20].
A tableau is a matrix with $O(n^k)$ rows and $O(n^k)$ columns, where any row index i and column index j jointly specify a $cell[i, j]$. The satisfiability of the formula $\phi_{cell}$ ensures that each cell in the tableau contains exactly one symbol, while the satisfiability of $\phi_{step}$ guarantees that each transition between two consecutive rows in the tableau adheres to the instructions of N. Both $\phi_{cell}$ and $\phi_{step}$ are 3cnf-formulas that, to date, cannot be expressed equivalently in 2cnf or as Horn formulas.
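A sketch of our own (helper names hypothetical) of the standard clause generation behind $\phi_{cell}$ may help: for every cell, at least one symbol is turned on, and no two symbols are on simultaneously. The at-least-one clauses are the part that is neither 2cnf nor Horn; the pairwise exclusions are both.

```python
# Sketch (ours) of the textbook phi_cell: every tableau cell holds exactly one symbol.
from itertools import combinations

def make_phi_cell(rows, cols, alphabet):
    index = {}
    def var(i, j, s):                           # one Boolean variable per (cell, symbol)
        return index.setdefault((i, j, s), len(index) + 1)

    clauses = []
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            # "at least one symbol": many positive literals, hence neither 2cnf nor Horn
            clauses.append([var(i, j, s) for s in alphabet])
            # "at most one symbol": two negative literals, hence both 2cnf and Horn
            for s, t in combinations(alphabet, 2):
                clauses.append([-var(i, j, s), -var(i, j, t)])
    return clauses, index

clauses, index = make_phi_cell(rows=3, cols=3, alphabet=["a", "b", "c"])
print(len(index), "variables,", len(clauses), "clauses")   # 27 variables, 36 clauses
```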
From an Aristotelian perspective, a tableau provides a step-by-step representation of the computation, capturing the machine’s evolution over time. Platonically, where time is absent, a tableau functions as a static framework that either contains or lacks an accepting computation path for N on w, depending on whether the propositional formula $\phi$, with
$\phi = \phi_{cell} \wedge \phi_{start} \wedge \phi_{end} \wedge \phi_{step}$,
is satisfiable. The formula $\phi_{start}$ constrains the initial row of the tableau to represent the start configuration corresponding to w, and the formula $\phi_{end}$ ensures that no cell in the tableau may contain the state symbol $q_{reject}$.
Remark 5. 
While $\phi_{cell}$ is neither a 2cnf- nor a Horn formula, both $\phi_{start}$ and $\phi_{end}$ are 2cnf-Horn formulas. See Sipser [18] (Section 7.4) and Grädel [6] (p. 32).
Building on Theorem 2, we introduce in Section 4 a novel simulation method for N on w. In our approach, the 3cnf-formula $\phi_{step}$ is succinctly transformed into an equivalent Horn formula $\phi_{step}^{\eta}$, effectively isolating the remaining computational complexities within the 3cnf-formula $\phi_{cell}$. Consequently, we define our propositional formula $\tilde{\phi}$, thus:
$\tilde{\phi} = \phi_{cell} \wedge \phi_{start} \wedge \phi_{end} \wedge \phi_{step}^{\eta}$,
where $\tilde{\phi}$ is satisfiable iff N accepts the original input word w.
We further clarify $\tilde{\phi}$ as follows. The satisfiability of formula $\phi_{cell}$ ensures that each $cell[i, j]$ in the tableau contains exactly one symbol $s \in \Sigma$ that is “turned on”. The extended alphabet $\Sigma$ is defined shortly. Complementarily, the satisfiability of
$\phi_{start} \wedge \phi_{end} \wedge \phi_{step}^{\eta}$
guarantees that all these “turned on” symbols collectively capture an admissible accepting computation path p of N on input w.
The Aristotelian activities of rewriting and observing from Theorem 2 are transformed into the four conjuncts of formula $\tilde{\phi}$. Moreover, since the 2cnf-Horn formulas $\phi_{start}$ and $\phi_{end}$ are trivial, the majority of the computational complexity lies within $\phi_{cell} \wedge \phi_{step}^{\eta}$. Finally, given that $\phi_{step}^{\eta}$ will be shown to be a succinct Horn formula, the complexity primarily resides in the 3cnf-formula $\phi_{cell}$.10 This marks a conceptual advance in computational complexity proper.

4. Step-by-Step Simulation

We are now ready to introduce our novel step-by-step simulation technique. Consider an arbitrary nondeterministic polynomial time TM $\langle N, k \rangle$, or N for short. Let the tape alphabet $\Phi$, state set Q, and basic label set T be extracted from N’s specifications.
Definition 3. 
A nondeterministic polynomial time Turing machine, denoted as $\langle N, k \rangle$, is defined as $N = (Q, \Gamma, \Phi, \delta, T, q_0, q_{accept}, q_{reject})$, a nondeterministic TM in accordance with Definition A3, which serves as a decider with a running time of $n^k$—as specified in Definition A4, where n and k represent the length of input w and some constant, respectively.
Remark 6. 
Without loss of generality, the nondeterminism associated with N consists solely of binary choices. For each such choice, say between instructions $t_1$ and $t_2$, the movement of $t_1$ is to the left (−), while the movement of $t_2$ is to the right (+).
We contemplate the behavior of a hypothetical accepting computation path p of N on an extended input $\hat{w}$, with
$w = w_0 w_1 \cdots w_{n-1}$,  $\hat{w} = w \,\square \cdots \square$,
where the blank symbol $\square$ occurs $n^k - n$ times.
Path p depends on the execution of uniquely labeled instructions, such as:
$t_{abc} : (q_1, a) \to \{ (q_2, b, -),\ (q_3, c, +) \}$.
This nondeterministic instruction—labeled $t_{abc}$, with $t_{abc} \in T$—can be split into two deterministic ones:
$t_{ab} : (q_1, a) \to (q_2, b, -)$,  $t_{ac} : (q_1, a) \to (q_3, c, +)$.
Each deterministic instruction is assigned a unique label—for instance, label $t_{ab} \in T[\,]$.
Instruction $t_{ab}$ specifies that when N is in state $q_1$ and reading symbol a, the machine is supposed to transition to state $q_2$, rewrite the symbol a as b, and the tape head should move one cell to the left (−). A plus sign (+) indicates a unary move to the right.
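The determinization step described above can be sketched as follows (our own representation of N’s transition table; the dictionary layout and labeling scheme are hypothetical).

```python
# Sketch (ours): splitting each binary nondeterministic instruction into two uniquely
# labeled deterministic instructions, as in the t_abc example above.

def determinize(nondet_delta):
    det = {}
    for (state, read), outcomes in nondet_delta.items():
        for target, write, move in outcomes:
            label = f"t_{state}{read}_{target}{write}{move}"   # a unique label per outcome
            det[label] = (state, read, target, write, move)
    return det

N_DELTA = {("q1", "a"): [("q2", "b", "-"), ("q3", "c", "+")],   # the nondeterministic t_abc
           ("q0", "a"): [("q1", "a", "+")]}                     # already deterministic: t_0

for label, signature in determinize(N_DELTA).items():
    print(label, signature)
```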
To formally introduce the variables within the propositional formula $\tilde{\phi}$, we define:
$\Sigma = \Phi \cup (\Phi \times Q) \cup T[\,] \cup \{\vdash, \dashv\}$.
For each i and j ranging from 1 to, respectively, $3n^k + 1$ and $n^k + 2$, and for every symbol s in $\Sigma$, we introduce a Boolean variable, $x_{i,j,s}$. We have a total of $O(n^{2k})$ such variables.
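As a worked count (with hypothetical, purely illustrative machine sizes), the number of variables is simply the number of tableau cells times the size of the extended alphabet, which is indeed in $O(n^{2k})$.

```python
# Toy count (ours): how many Boolean variables x_{i,j,s} a small instance generates.
n, k = 4, 2                        # input length and polynomial degree (illustrative)
phi, states, labels = 5, 6, 8      # |Phi|, |Q|, |T[]| for a hypothetical machine N
sigma = phi + phi * states + labels + 2           # |Sigma|, the two boundary markers included
rows, cols = 3 * n**k + 1, n**k + 2
print(rows, "rows x", cols, "columns x", sigma, "symbols =", rows * cols * sigma, "variables")
```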
Remark 7. 
The boundary markers, $\vdash$ and $\dashv$, are included for full coverage, but play no essential role in this article.
The effect of instruction $t_{ab}$ is shown on the left, while the effect of instruction $t_{ac}$ is displayed on the right in Figure 5. In both illustrations, the rows are arranged sequentially from the top row, indexed as $3l - 2$, to the bottom row, indexed as $3l + 1$. Each vertical arrow represents a change in precisely one symbol. If label $t_{ab}$ is stored in $cell[3l, j]$ of the tableau—with row index $3l$ and column index j, where l and j are natural numbers—then we denote this with propositional variable $x_{3l, j, t_{ab}}$.
Going forward, we first present a general discussion of the formula $\phi_{step}^{\eta} = \phi_V \wedge \phi_H$ in Section 4.1. We then define its components: the subformula $\phi_V$ in Section 4.2 and the subformula $\phi_H$ in Section 4.3. The Aristotelian interplay between both components is key.

4.1. Formula $\phi_{step}^{\eta}$

The tableau, partly illustrated in Figure 5, is not only constrained by the formulas $\phi_{cell}$, $\phi_{start}$, and $\phi_{end}$ but also by tandem $V \| H$, a system composed of a Vertical subsystem V and a Helicopter subsystem H, resulting in:
$\phi_{step}^{\eta} = \phi_V \wedge \phi_H$.
Here, V is the string-rewriting system $R_N$, which operates vertically within each column of the tableau. Meanwhile, the Helicopter H serves as the observer $\langle O_{L(N)}, D \rangle$. Both $R_N$ and the observer are from Theorem 2. Although that theorem provides a precursor to this exposition, it is not essential for understanding the discussion here.
The crux of our tableau, see Figure 5, is that we capture each sequence of two consecutive machine configurations in hypothetical path p—specifically, in rows $3l - 2$ and $3l + 1$ in the tableau—as an interaction between the constraints of the V subsystem on the one hand ($\phi_V$), and those of the H subsystem on the other ($\phi_H$). Additionally, by utilizing standard bookkeeping ($\phi_{cell} \wedge \phi_{start} \wedge \phi_{end}$), we obtain the larger formula $\tilde{\phi}$ whose size remains within a polynomial bound relative to the length n of the input word w.
The formula $\phi_V \wedge \phi_H$ captures the Aristotelian interplay between the V and H subsystems. This interplay is achieved primarily by ensuring that specific vertical symbol conversions in the tableau are carried out in two distinct stages.
For instance, rather than directly converting symbol $a_{q_0}$ into symbol a when traversing a column in the tableau top-down, the V subsystem first transforms $a_{q_0}$ into the intermediate label $t_0$, and only then into the symbol a. This deliberate two-step conversion guarantees that V produces a unique intermediate trace—namely, the instruction label $t_0$ of machine N—which can then be identified by the Helicopter H subsystem. This example, involving the label $t_0$, corresponds to the following deterministic machine instruction:
$t_0 : (q_0, a) \to (q_1, a, +)$.
In general, however, an instruction of N is nondeterministic rather than deterministic. For each binary choice of N, such as
$t_{abc} : (q_1, a) \to \{ (q_2, b, -),\ (q_3, c, +) \}$,
we must first determinize the instruction by splitting it into two distinct deterministic ones:
$t_{ab} : (q_1, a) \to (q_2, b, -)$,  $t_{ac} : (q_1, a) \to (q_3, c, +)$.
Each deterministic instruction is assigned a unique label (e.g., $t_{ab}$). Notably, determinizing an instruction that is already deterministic—such as $t_0$, $t_{ab}$, or $t_{ac}$—has no effect.
After applying determinization to all uniquely labeled instructions of N, we ensure that subsystem V, when selecting any deterministic instruction label t, explicitly records the label t as an intermediate trace in the tableau. Examples of $t_{ab}$ and $t_{ac}$ are shown in the center column, in the left and right illustrations, respectively, in Figure 5. Consequently, Helicopter H reads label t from the tableau and acts accordingly. The Aristotelian behavior of V is described by $\phi_V$ (Section 4.2), while that of H is characterized by $\phi_H$ (Section 4.3).
Remark 8. 
Any conversion between two distinct tape symbols, say from a to b, in any column of the tableau, must occur through an intermediate trace. Figure 6 provides an illustration, relying on the label $t_{ab}$ and, more precisely, the following instruction of machine N:
$t_{ab} : (q_3, a) \to (q_4, b, -)$.
The marked symbol a in the top row in Figure 6 can only change into the marked symbol b in the bottom row via an intermediate trace, such as $t_{ab}$.

4.2. Vertical Subsystem V

A computation of N on input w proceeds through a sequence of deterministic instructions, such as $t_{ac}$, derived from the determinization stage. Each deterministic instruction is reflected in the V subsystem through relabeling rules, as shown in Figure 7.
Reconsider, for example, the deterministic instruction $t_{ac}$:
$t_{ac} : (q_1, a) \to (q_3, c, +)$.
The role of V in capturing the execution of instruction $t_{ac}$ is defined, via Figure 7, by the following 2 + m relabeling rules, with $y \in \Phi$, and where m denotes the cardinality of $\Phi$:
$a_{q_1} \to t_{ac}$,  $t_{ac} \to c$,  $y \to y_{q_3}$.
Notably, these 2 + m rules are independent of the input word w.
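The 2 + m relabeling rules can be generated mechanically from the signature of a determinized instruction; the sketch below (ours, with a hypothetical tape alphabet and symbol encoding) does so for $t_{ac}$.

```python
# Sketch (ours): V's 2 + m relabeling rules for one determinized, right-moving instruction.
PHI = ["a", "b", "c", "d", "_"]      # hypothetical tape alphabet, "_" playing the blank

def v_rules(label, source_state, read, target_state, write):
    rules = [(f"{read}|{source_state}", label),     # marked source symbol -> instruction label
             (label, write)]                        # instruction label -> written tape symbol
    # m rules: the neighbouring cell that receives the head becomes marked with the target state
    rules += [(y, f"{y}|{target_state}") for y in PHI]
    return rules

for lhs, rhs in v_rules("t_ac", "q1", "a", "q3", "c"):
    print(lhs, "->", rhs)            # a|q1 -> t_ac, t_ac -> c, and y -> y|q3 for every y
```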
To express this in propositional logic, we employ three distinct types of $4 \times 1$ windows, annotated with arrows, to describe V’s role in capturing the effect of each instruction. This is illustrated in Figure 8 for the specific instruction $t_{ac}$.
Each annotated window in Figure 8 is read starting from the third row, extending one step downward (↓) and two steps upward (↑), following the arrows. These windows span the width of the tableau, in between the leftmost and the rightmost boundary.11
Notably, with regard to Figure 8, each cell in row $3l$ of the tableau—for any l with $1 \le l \le n^k$—participates in exactly one of the three types of windows: Window 1, Window 2, or Window 3. The same remark applies to cells in row $3l - 1$. However, each cell in row $3l - 2$ or row $3l + 1$ is part of two different windows, which may or may not be of different types. The exception is when the cell is in the top or bottom row of the tableau.
Let us now delve into each 4 × 1 annotated window in Figure 8.

4.2.1. Annotated Window 1

Window 1 in Figure 8 illustrates the “no change” rule of the form $s \to s$, where $s \in \Phi$. Specifically, if the symbol a is found in $cell[3l, j]$ of the tableau, this means it is also present in the cell directly below and in the two cells immediately above. This constraint can be expressed as a conjunction of three implications:
$(x_{3l,j,a} \to x_{3l+1,j,a}) \wedge (x_{3l,j,a} \to x_{3l-1,j,a}) \wedge (x_{3l,j,a} \to x_{3l-2,j,a})$.
Iterating, via conjunction operators of the form
$\bigwedge_{1 \le l \le n^k} \ \bigwedge_{1 < j < n^k + 2} \ \bigwedge_{s \in \Phi}$,
over all row indices ($3, 6, \ldots, 3l, \ldots$), all relevant column indices ($2, 3, \ldots, j, \ldots$), and tape symbols ($a, b, c, d, \ldots$) in $\Phi$, we obtain a 2cnf-Horn formula, $\phi_V^1$, of size $O(n^{2k})$:
$\phi_V^1 = \bigwedge_{1 \le l \le n^k} \ \bigwedge_{1 < j < n^k + 2} \ \bigwedge_{s \in \Phi} \big( x_{3l,j,s} \to (x_{3l+1,j,s} \wedge x_{3l-1,j,s} \wedge x_{3l-2,j,s}) \big)$.
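A generator of our own for these Window-1 constraints (the literal encoding is hypothetical) makes the 2cnf-Horn shape explicit: every emitted clause is a single implication between two variables.

```python
# Sketch (ours): the Window-1 ("no change") implications of phi_V^1 as 2cnf-Horn clauses.
# A literal is (sign, row, column, symbol) with sign +1 or -1; each clause has two literals.

def phi_v1(n, k, tape_alphabet):
    clauses = []
    for l in range(1, n**k + 1):
        for j in range(2, n**k + 2):                 # strictly between the boundary columns
            for s in tape_alphabet:
                for target_row in (3*l + 1, 3*l - 1, 3*l - 2):
                    # x_{3l, j, s}  ->  x_{target_row, j, s}
                    clauses.append([(-1, 3*l, j, s), (+1, target_row, j, s)])
    return clauses

print(len(phi_v1(n=2, k=1, tape_alphabet=["a", "b", "_"])))   # 2 * 2 * 3 * 3 = 36 clauses
```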

4.2.2. Annotated Window 2

Window 2 in Figure 8 portrays the evolution from the symbol $a_{q_1}$ to c through the intermediary label $t_{ac}$. Upon closer examination, if the label $t_{ac}$ is discerned within $cell[3l, j]$ of the tableau, it serves as an indicator of the presence of symbol c directly beneath it. Additionally, this label is also present in the cell immediately above, with the symbol $a_{q_1}$ preceding it. These observations, which stem from the information found in the signature $t_{ac} : (q_1, a) \to (q_3, c, +)$, can be generalized over all row indices ($3, 6, \ldots, 3l, \ldots$), all relevant column indices ($2, 3, \ldots, j, \ldots$), and all deterministic instructions ($t_0, t_{ab}, t_{ac}, \ldots$), via conjunction operators, leading to $\phi_V^2$, a 2cnf-Horn formula of size $O(n^{2k})$.
Definition 4. 
Consider an arbitrary nondeterministic TM, N, and its basic label set, $T[\,]$. For any label t in $T[\,]$ with corresponding signature $t : (q_{\mathit{source}}, s_{\mathit{read}}) \to (q_{\mathit{target}}, s_{\mathit{write}}, +)$ or $t : (q_{\mathit{source}}, s_{\mathit{read}}) \to (q_{\mathit{target}}, s_{\mathit{write}}, -)$, we let “N-source(t),” “N-target(t),” and “N-write(t)” stand for the symbols $(s_{\mathit{read}})_{q_{\mathit{source}}}$, $q_{\mathit{target}}$, and $s_{\mathit{write}}$, respectively. When machine N is clear from the context, we shall simply note down “source(t),” “target(t),” and “write(t)”, respectively.
$\phi_V^2 = \bigwedge_{1 \le l \le n^k} \ \bigwedge_{j} \ \bigwedge_{t \in T[\,]} \big( x_{3l,j,t} \to (x_{3l+1,j,\mathit{write}(t)} \wedge x_{3l-1,j,t} \wedge x_{3l-2,j,\mathit{source}(t)}) \big)$,
with $1 < j < n^k + 2$.

4.2.3. Annotated Window 3

Window 3 in Figure 8 conveys the scenario where the symbol $d_{q_3}$ is identified in $cell[3l, j]$. This implies that the same symbol should reappear in the cell directly below, and d should appear in the two cells immediately above. Once again, this situation will be generalized over row indices $3l$, column indices j, and symbols $s \in \Phi \times Q$. The result is a third 2cnf-Horn formula, $\phi_V^3$, with $O(n^{2k})$ variables.
$\phi_V^3 = \bigwedge_{1 \le l \le n^k} \ \bigwedge_{1 < j < n^k + 2} \ \bigwedge_{(y, q) \in \Phi \times Q} \big( x_{3l,j,y_q} \to (x_{3l+1,j,y_q} \wedge x_{3l-1,j,y} \wedge x_{3l-2,j,y}) \big)$.
Remark 9. 
A formalization to extract symbol y from $(y, q)$ is straightforward and omitted.

4.2.4. Result

The conjunction of these three subformulas gives us:
$\phi_V = \phi_V^1 \wedge \phi_V^2 \wedge \phi_V^3$,
which is a 2cnf-Horn formula with $O(n^{2k})$ variables.
Additional constraints, such as the context-sensitive requirement that Window 3 be attached to Window 2, in conformity with Figure 8, are imposed by the H subsystem (see Section 4.3). Window 3 must be positioned immediately to the right of Window 2 because the instruction under consideration, $t_{ac}$, specifies a unary movement in that direction.

4.3. Helicopter Subsystem H

A computation of $N$ on input $w$ progresses through a sequence of deterministic instructions, such as $t_{ac}$, derived from the determinization stage. Each deterministic instruction is mirrored in the Helicopter H subsystem.
To illustrate, consider the effect of executing instruction $t_{ac}$ on configuration $C$, with
$$C = a \, a_{q_1} d,$$
which is accurately depicted on the left and inaccurately on the right side of Figure 9. In particular, the tape head pointing to symbol $d$ in state $q_3$ is correct, whereas the tape head pointing to symbol $a$ in state $q_2$ is incorrect. While the constraints imposed by the V subsystem do not prevent the scenario on the right side of Figure 9, the constraints from the H subsystem do.
As an Aristotelian, the H subsystem proceeds sequentially through the four rows in the left illustration in Figure 9, moving from the top row to the bottom. A transition from the top row ($3l-2$) to the second row ($3l-1$), and subsequently to the bottom row ($3l+1$), is allowed only if both transitions are consistent with the signature of the labeled instruction detected in the third row ($3l$). In our running example, the detected label is $t_{ac}$, and the corresponding signature stored inside the Helicopter H is as follows:
$$t_{ac}: (q_1, a) \to (q_3, c, +).$$
As a result, H accepts the left side of Figure 9, while dismissing the right side from further consideration.
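The following Python sketch (an illustrative simplification of ours, with head cells encoded as (symbol, state) pairs) captures this Aristotelian check in a single pass: it asks whether the configuration row above the detected label is transformed, in accordance with the stored signature, into the configuration row below it. The toy rows used in the example merely approximate the two sides of Figure 9; in the formalization that follows, the same comparison is distributed over the intermediary rows.

# Illustrative sketch: does H accept the transition sketched in Figure 9?
# 'above' and 'below' are the configuration rows before and after the step;
# j is the column in which the label was detected in row 3l.
def helicopter_accepts(above, below, j, signature):
    q_src, s_read, q_tgt, s_write, move = signature
    if above[j] != (s_read, q_src):                 # the source symbol must sit above the label
        return False
    target_col = j + 1 if move == '+' else j - 1
    for col, sym in enumerate(above):
        if col == j:
            expected = s_write                      # the head cell receives the written symbol
        elif col == target_col:
            expected = (sym, q_tgt)                 # the neighbour keeps its symbol, now annotated with q_target
        else:
            expected = sym                          # every other cell is copied verbatim
        if below[col] != expected:
            return False
    return True

# Running example: C = a a_{q1} d and t_ac: (q1, a) -> (q3, c, +).
above     = ['a', ('a', 'q1'), 'd']
correct   = ['a', 'c', ('d', 'q3')]
incorrect = [('a', 'q2'), 'c', 'd']
sig = ('q1', 'a', 'q3', 'c', '+')
print(helicopter_accepts(above, correct, 1, sig),    # True: an accepted successor row
      helicopter_accepts(above, incorrect, 1, sig))  # False: a dismissed alternative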
To formalize the Aristotelian operations (ultimately) in set theory, and relying on Definition 4, we define the formula, $\phi_{H+}$, as follows:
$$\phi_{H+} = \bigwedge_{1 \le l \le n^k} \ \bigwedge_{1 < j < n^k+2} \ \bigwedge_{t \in T_{+}[\,]} \Big( x_{3l,j,t} \to \big( A \wedge B \big) \Big),$$
with:
$$t \in T_{+}[\,] \ \text{ iff } \ t \in T[\,] \text{ and the unary movement of } t \text{ is to the right},$$
$$A = \bigwedge_{s \in \Phi} \big( x_{3l-2,\,j+1,\,s} \to x_{3l,\,j+1,\,s_{target(t)}} \big), \qquad B = \bigwedge_{j' \notin \{j,\, j+1\}} \ \bigwedge_{s \in \Phi} \big( x_{3l-2,\,j',\,s} \to x_{3l,\,j',\,s} \big),$$
where $1 < j' < n^k + 2$.
The formula $\phi_{H+}$ is a Horn formula, not a 2cnf-formula, and it consists of two primary segments: A and B. Referring to Figure 9, we elaborate as follows (see also the sketch after this list):
  • Segment A ensures that the symbol $d$ in $cell[3l-2, j+1]$ reappears in the symbol pair in $cell[3l, j+1]$, rather than some other symbol, such as $e$.
  • Segment B guarantees that the tape contents in rows $3l-2$ and $3l$ are identical, except for columns $j$ and $j+1$.
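Continuing the same hypothetical Python encoding, segments A and B unfold into Horn clauses whose bodies contain two variables; below, each clause is a pair (body, head), read as: the conjunction of the body variables implies the head.

# Illustrative sketch: enumerate the Horn clauses of phi_H+.
# Each clause is (body, head); body is a list of variables whose conjunction implies head.
def phi_H_plus(nk, signatures, tape_alphabet):
    clauses = []
    right_moving = {t: sig for t, sig in signatures.items() if sig[4] == '+'}
    for l in range(1, nk + 1):
        for j in range(2, nk + 2):
            for t, (q_src, s_read, q_tgt, s_write, move) in right_moving.items():
                label_var = (3 * l, j, t)
                # Segment A: the symbol right of the head reappears, annotated with q_target.
                for s in tape_alphabet:
                    clauses.append(([label_var, (3 * l - 2, j + 1, s)],
                                    (3 * l, j + 1, (s, q_tgt))))
                # Segment B: every column other than j and j+1 of row 3l-2 is copied into row 3l.
                for jp in range(2, nk + 2):
                    if jp in (j, j + 1):
                        continue
                    for s in tape_alphabet:
                        clauses.append(([label_var, (3 * l - 2, jp, s)],
                                        (3 * l, jp, s)))
    return clauses

Counting the generated clauses reproduces the cubic growth reported below for $\phi_H$.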
Due to the symmetry of the problem, the formal definitions of $T_{-}[\,]$ and $\phi_{H-}$—which pertain to movement to the left—precisely mirror those given above and are thus omitted from this paper. Overall, this results in the formula:
$$\phi_H = \phi_{H+} \wedge \phi_{H-},$$
which is a compact Horn formula comprising $O(n^{3k})$ literals.

5. Closing Remarks

In this paper, we interpret Aristotelian reasoning as metaphorical, while formalization—particularly in its set-theoretic form—is understood as fundamentally Platonic. We contend that many contemporary mathematical frameworks blend these two modes of reasoning. Our interest lies in the Computing by Observing paradigm, in which the Aristotelian activities of rewriting and observing are formalized first within automata theory and then within propositional logic.
Leveraging the frugality central to the Computing by Observing paradigm—namely, the insight that Turing completeness can be achieved using only a restricted context-free grammar and an enumeration of finite automata—we present an alternative formulation of polycomputing. The notion that a single derivation tree can support multiple interpretations is now rigorously framed within the context of nondeterministic TM computations. In propositional logic, we realize this primarily through the use of Horn clauses.
These exact technical results arose—albeit indirectly—from an initial and ongoing philosophical investigation into Aristotelian and Platonic views on computation, as well as from an engineering-driven fascination with assigning multiple meanings to one and the same computation—whether deterministic or nondeterministic.
More precisely, the contribution of this paper is twofold, articulated through a worm’s-eye perspective (Section 5.1) and a philosopher’s broader view (Section 5.2).

5.1. Worm’s-Eye View

Technically, we have addressed a natural question that often emerges when software engineering students first encounter NP-completeness in standard treatments [6,7,18]. Given the two 3cnf-formulas, $\phi_{cell}$ and $\phi_{step}$, which—if either—becomes easier to satisfy when the other is hypothetically transformed into an equivalent Horn formula? We resolve this question non-hypothetically by demonstrating that $\phi_{step}$ can, in fact, be efficiently reformulated as a Horn formula, denoted $\phi_{step}^{\eta}$.
Our transformation from $\phi_{step}$ to $\phi_{step}^{\eta}$ shifts the computational burden of satisfiability to the remaining 3cnf-component, $\phi_{cell}$, thereby enabling a novel method for simulating nondeterministic TMs. This approach leverages the frugality emblematic of the Computing by Observing literature—translating automata-theoretic minimalism into Horn clauses within propositional logic. While not yielding any immediate practical breakthrough here, the chief contribution lies in increased conceptual clarity.

5.2. Philosopher’s View

More generally, we contribute to the philosophical debate on the nature of efficient computation through the Computing by Observing paradigm. In our understanding, Peter Leupold, a founder of this paradigm, aligns with John Searle’s (1992) famous argument that semantics is not intrinsic to syntax [24]. Leupold posits that “meaning” is assigned externally, writing:
The agent that assigns this meaning should at least be able to see the object, and thus observer is a fitting title for this role. Ref. [5] (p. 431, original emphasis)
Computation, therefore, is observer-relative: “a computation is not an intrinsic property that some process either has or has not” [5] (p. 428).
Whether we have made a genuine philosophical contribution concerning the observer-relative nature of computation remains an open question. Nevertheless, we have fully embraced this idea throughout the present paper. We have formalized both the syntax of string rewriting and the semantics of the observer—initially, in brief, within automata theory, and subsequently in greater detail within propositional logic. Our work extends the present author’s nondeterministic generalization of Leupold and his colleagues’ research on deterministic TMs, broadening the scope from automata theory to propositional logic.
At the heart of our philosophical inquiry into the Computing by Observing paradigm in itself lies the following question: How do Aristotelian activities re-emerge when the paradigm is formulated in terms of propositional logic? Pursuing this question has led to the development of a novel and efficient simulation method for nondeterministic TMs.
The claim of efficiency (of our simulation) is not fully substantiated within this paper alone, as it rests on three key points, the third of which lies beyond this paper’s scope. First, existing literature establishes that, to date, checking the satisfiability of Horn formulas is significantly more efficient than that of general 3cnf-formulas. Second, as demonstrated in this paper, the step-by-step behavior of a nondeterministic polynomial time TM $N$ on input $w$ can be represented as a compact Horn formula, $\phi_{step}^{\eta}$. Third, our related work addresses the inefficiencies associated with $\phi_{cell}$, the only remaining 3cnf-formula in our framework [10,11]. In summary, our Horn formula $\phi_{step}^{\eta}$ offers a conceptual advantage over textbook approaches, suggesting a need to reassess NP-completeness in future work.
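To illustrate the first point, the sketch below shows the classic marking (unit-propagation) procedure for Horn satisfiability in Python; the (body, head) clause encoding is our own illustrative choice, and the naive fixpoint loop shown here is quadratic in the formula size, whereas the textbook refinement with per-clause counters runs in linear time.

# Illustrative sketch: decide satisfiability of a Horn formula by marking.
# A clause is (body, head): the conjunction of the body variables implies head;
# head is None for a purely negative clause, and a fact has an empty body.
def horn_sat(clauses):
    forced = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head is not None and head not in forced and all(v in forced for v in body):
                forced.add(head)
                changed = True
    # Satisfiable iff no purely negative clause has its entire body forced to true.
    return all(not all(v in forced for v in body) for body, head in clauses if head is None)

# Toy usage: p, p -> q, q -> r, and (not p or not r) together are unsatisfiable.
print(horn_sat([([], 'p'), (['p'], 'q'), (['q'], 'r'), (['p', 'r'], None)]))  # False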

Funding

This research received no funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author extends sincere gratitude to the two anonymous reviewers, as well as to the editors of this special edition, Raymond Turner and Henri Stephanou, for their invaluable contributions to this article. Any remaining errors or omissions are the sole responsibility of the author. For updates, please visit https://dijkstrascry.com/pnp (accessed on 28 April 2025).

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TM: Turing Machine
NP: Nondeterministic Polynomial

Appendix A. Definitions

Definition A1. 
Cf. Sipser [18] (p. 140), Daylight [9]. A deterministic Turing machine is an 8-tuple $(Q, \Gamma, \Phi, \delta, T, q_0, q_{accept}, q_{reject})$, with $Q$, $\Gamma$, $\Phi$, $T$ finite sets:
  • $Q$ is the set of states, and $\Gamma$ is the input alphabet not containing the blank symbol $\square$,
  • $\Phi$ is the tape alphabet, where $\square \in \Phi$ and $\Gamma \subseteq \Phi$, and
  • $\delta: Q \times \Phi \to Q \times \Phi \times \{+, -\}$ is the transition function,
  • Every transition in $\delta$ is accompanied by a distinct label $t$, and
  • $T$ is the label set, containing all such labels,
  • $q_0 \in Q$ is the start state,
  • $q_{accept} \in Q$ is the accept state,
  • $q_{reject} \in Q$ is the reject state, $q_{reject} \ne q_{accept}$.
As a Turing machine computes, changes occur in the current state, the current tape contents, and the current head location. A setting of these three items is called a configuration of the Turing machine. The tape of the Turing machine is one-way infinite, from left to right. Specifically, for each input $w_0 w_1 \cdots w_{n-1}$ of length $n$, machine $M$ starts in configuration $\square_{q_0}$, for $n = 0$, and in configuration $(w_0)_{q_0} w_1 \cdots w_{n-1}$, for $n > 0$. In both cases, the notation $s_{q_0}$, with $s \in \Phi$, signifies that the head is located at the tape cell containing symbol $s$, while the machine resides in state $q_0$. Machine $M$ neither starts in $q_{accept}$ nor in $q_{reject}$, nor progresses beyond either one of these states. Specifically, once $M$ reaches $q_{accept}$, it remains active solely in that state. Likewise for $q_{reject}$. We take $q_{accept}$ to be some $q_m$ with $m > 0$ and similarly for $q_{reject}$. Input word $w$ is considered accepted when $M$ on $w$ reaches $q_{accept}$. We write $L(M)$ to denote the language accepted by $M$. We use notation $t: (q_1, x) \to (q_2, y, \mu)$ when referring to some transition in $\delta$ with label $t \in T$ and movement $\mu \in \{+, -\}$. The plus sign (minus sign) signifies a movement to the right (to the left).
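For illustration only, and not as part of the formal apparatus, Definition A1 can be rendered in Python roughly as follows; the class and field names are our own, and we adopt the common convention that a left move at the leftmost cell leaves the head in place.

# Illustrative sketch of a labeled deterministic TM and one step of its computation.
from dataclasses import dataclass

@dataclass
class DTM:
    delta: dict                      # (state, symbol) -> (label, new_state, written_symbol, move)
    start: str = 'q0'
    accept: str = 'q_accept'
    reject: str = 'q_reject'
    blank: str = '_'

    def step(self, state, tape, head):
        """Perform one Aristotelian step on a one-way-infinite tape (a Python list)."""
        if state in (self.accept, self.reject):
            return state, tape, head                 # halting states are never left
        label, new_state, written, move = self.delta[(state, tape[head])]
        tape = tape[:head] + [written] + tape[head + 1:]
        head = head + 1 if move == '+' else max(head - 1, 0)
        if head == len(tape):
            tape = tape + [self.blank]               # extend the tape with a blank on demand
        return new_state, tape, head

# Toy usage: a machine that replaces one a by b and accepts.
m = DTM(delta={('q0', 'a'): ('t0', 'q_accept', 'b', '+')})
print(m.step('q0', ['a'], 0))        # ('q_accept', ['b', '_'], 1)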
Definition A2. 
Let $M$ be a deterministic Turing machine that halts on all inputs. The running time or time complexity of $M$ is the function $t: \mathbb{N} \to \mathbb{N}$, where $t(n)$ is the maximum number of steps that $M$ uses on any input of length $n$ before halting. If $t(n)$ is the running time of $M$, we say that $M$ runs in time $t(n)$ and is a $t(n)$ time Turing machine.
Definition A3. 
Cf. Sipser [18] (p. 150), Daylight [9]. A nondeterministic Turing machine is an 8-tuple, $(Q, \Gamma, \Phi, \delta, T, q_0, q_{accept}, q_{reject})$. At any point in a computation, the machine may proceed according to several possibilities. The transition function for the machine has the form $\delta: Q \times \Phi \to \mathcal{P}(Q \times \Phi \times \{+, -\})$, where $\mathcal{P}$ denotes the power set. The computation of the machine is a tree whose branches correspond to different possibilities for the machine. If some branch of the computation leads to the state $q_{accept}$, the machine accepts its input.
In conformity with Definition A1, each transition in set $\delta$ is accompanied by a distinct label $t$, and $T$ is now called the general label set, containing all such labels. For instance, consider notation $t: (q, x) \to \{(q_1, y_1, \mu_1), (q_2, y_2, \mu_2)\}$, with label $t \in T$, states $q, q_1, q_2 \in Q$, symbols $x, y_1, y_2 \in \Phi$, movements $\mu_1, \mu_2 \in \{+, -\}$, and with tuple $(q_1, y_1, \mu_1)$ different from $(q_2, y_2, \mu_2)$. This notation captures the nondeterministic transition encompassing the deterministic transitions $t_1: (q, x) \to (q_1, y_1, \mu_1)$ and $t_2: (q, x) \to (q_2, y_2, \mu_2)$. We define the basic label set, denoted as $T[\,]$, as the set that encompasses the labels of all deterministic transitions, such as $t_1$ and $t_2$.
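The passage from the general label set $T$ to the basic label set $T[\,]$ can likewise be sketched in Python; the labeling scheme below is hypothetical and serves only to show that every nondeterministic choice receives its own deterministically labeled signature.

# Illustrative sketch: read the basic label set T[] off a nondeterministic delta.
def basic_label_set(delta):
    """delta maps (state, symbol) to a set of (new_state, written_symbol, move) tuples."""
    basic = {}                                        # label -> deterministic signature
    for (q, x), choices in delta.items():
        for i, (q_new, y, mu) in enumerate(sorted(choices), start=1):
            label = f"t_{q}_{x}_{i}"                  # hypothetical labeling scheme
            basic[label] = (q, x, q_new, y, mu)
    return basic

# Toy usage: one binary choice in state q0 on symbol a.
delta = {('q0', 'a'): {('q1', 'b', '+'), ('q2', 'c', '-')}}
print(basic_label_set(delta))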
Definition A4. 
Let $N$ be a nondeterministic Turing machine decider. Its running time is the function $t: \mathbb{N} \to \mathbb{N}$, with $t(n)$ the maximum number of steps $N$ uses on any branch of its computation, on any input of length $n$, before halting.
Definition A5. 
Cf. [18] (pp. 251, 258). Let $t: \mathbb{N} \to \mathbb{R}^{+}$ be a function, with $\mathbb{R}^{+}$ denoting the set of non-negative real numbers. Define the time complexity class, $\mathrm{TIME}(t(n))$, to be the collection of all languages that are decidable by an $O(t(n))$ time Turing machine.

Notes

1. Quoted from Lance Fortnow’s blog post, titled Can you feel the machine?, posted on 20 March 2024 and accessed on 22 August 2024. Source: https://blog.computationalcomplexity.org/2024/03/can-you-feel-machine.html.
2. Quoted from Robin K. Hill’s CACM blog post, titled The Imperativity of Algorithms, posted on 31 March 2023 and accessed on 22 August 2024. Source: https://cacm.acm.org/blogcacm/the-imperativity-of-algorithms/.
3. The first school is still present today, and many of its adherents are not intuitionists. Nevertheless, a well-known post-Cantor advocate remains L.E.J. Brouwer, who argued that the actually infinite set of natural numbers cannot be constructed based on intuitions, and, therefore, does not exist [13] (Section 5c).
4. The incorporation of actual infinity is primarily due to Plotinus rather than Plato, as outlined in Dowden’s work [13] (Section 3). Moreover, the term “Platonism”, as used in this article, is not meant to imply that Plato, himself, fully adhered to this school of thought [14].
5. Quoted from Lance Fortnow’s blog post, titled Can you feel the machine?, posted on 20 March 2024 and accessed on 22 August 2024. Source: https://blog.computationalcomplexity.org/2024/03/can-you-feel-machine.html.
6. The Aristotelian view contrasts with, though does not (necessarily) contradict, the Platonic perspective. We adopt the position of a set theorist as more Platonist than Aristotelian and will not introduce further nuance.
7. Based on the definition provided in Sipser [18] (p. 150, our underlining).
8. We resort to conjunctive normal form in adherence to Sipser’s modern account [18] (Section 7.4).
9. In this context, the author draws inspiration from the work of Matteo Cavaliere and Peter Leupold [8].
10. The time complexity of $\phi_{cell}$ is analyzed in [10,11].
11. This approach contrasts sharply with Sipser’s top-down reading of his 2 × 3 windows in his tableau [18] (p. 280).

References

  1. Aaronson, S. Quantum Computing Since Democritus; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  2. Aaronson, S. Why Philosophers Should Care About Computational Complexity. In Computability: Turing, Gödel, Church, and Beyond; Copeland, B., Posy, C., Shagrir, O., Eds.; MIT Press: Cambridge, MA, USA, 2013. [Google Scholar]
  3. Bongard, J.; Levin, M. There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-Scale Machines. Biomimetics 2023, 8, 110. [Google Scholar] [CrossRef] [PubMed]
  4. Daylight, E. Refining Mark Burgin’s Case against the Church-Turing Thesis. Philosophies 2024, 9, 122. [Google Scholar] [CrossRef]
  5. Leupold, P. The Role of Observers in Computations. Minds Mach. 2018, 28, 427–444. [Google Scholar] [CrossRef]
  6. Grädel, E. Complexity Theory: WS 2009/10; Mathematische Grundlagen der Informatik, RWTH Aachen: Aachen, Germany, 2009. [Google Scholar]
  7. Dean, W. Computational Complexity Theory. Stanf. Encycl. Philos. 2016. [Google Scholar]
  8. Cavaliere, M.; Leupold, P. Evolution and Observation: A Non-standard Way to Accept Formal Languages. In Machines, Computations, and Universality; Margenstern, M., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3354, pp. 153–163. [Google Scholar]
  9. Daylight, E. Capturing Nondeterminism with Accepting Observer Systems. 2025; submitted. [Google Scholar]
  10. Daylight, E. Tableau with Holes: Clarifying NP-Completeness. Symmetry 2025, 17, 677. [Google Scholar] [CrossRef]
  11. Daylight, E. Tossing Coins with an NP-machine. 2025; submitted. [Google Scholar]
  12. Linnebo, Ø.; Shapiro, S. Actual and Potential Infinity. Noûs 2019, 53, 160–191. [Google Scholar] [CrossRef]
  13. Dowden, B. The Infinite. Internet Encycl. Philos. 2022. Available online: https://iep.utm.edu/infinite/#H8 (accessed on 24 April 2025).
  14. Landry, E. Plato Was Not a Mathematical Platonist; Cambridge University Press: Cambridge, UK, 2023. [Google Scholar]
  15. Tall, D. How Humans Learn to Think Mathematically: Exploring the Three Worlds of Mathematics; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  16. Dean, W. Algorithms and the Mathematical Foundations of Computer Science. In Gödel’s Disjunction, 1st ed.; Horsten, L., Welch, P., Eds.; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  17. Turner, R. Computational Abstraction. Entropy 2021, 23, 213. [Google Scholar] [CrossRef] [PubMed]
  18. Sipser, M. Introduction to the Theory of Computation; Thomson Course Technology: Boston, MA, USA, 2006. [Google Scholar]
  19. Floyd, R. Nondeterministic Algorithms. J. ACM 1967, 14, 636–644. [Google Scholar] [CrossRef]
  20. Cook, S. The Complexity of Theorem-Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing, Shaker Heights, OH, USA, 3–5 May 1971; Association for Computing Machinery: New York, NY, USA, 1971; pp. 151–158. [Google Scholar]
  21. Cavaliere, M.; Frisco, P.; Hoogeboom, H. Computing by only observing. In Developments in Language Theory; Lecture Notes in Computer Science; Ibarra, O., Dang, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4036, pp. 304–314. [Google Scholar]
  22. Papadimitriou, C. Computational Complexity; Addison Wesley Longman: Boston, MA, USA, 1994. [Google Scholar]
  23. Hopcroft, J.; Motwani, R.; Ullman, J. Introduction to Automata Theory, Languages, and Computation; Addison Wesley/Pearson Education: Boston, MA, USA, 2007. [Google Scholar]
  24. Searle, J.R. The Rediscovery of the Mind; MIT Press: Cambridge, MA, USA, 1992. [Google Scholar]
Figure 1. Images dissecting the underlined words in Definition 2. Image (a) evokes a Platonic interpretation. Image (b) reflects an Aristotelian perspective. Image (c) highlights the possibility of shared nodes across multiple computation paths. Image (d) illustrates that computations may include cycles. Image (e) presents a rotated view of the structure shown in image (d).
Figure 2. Three conceptualizations of nondeterminism. Representation (a) portrays a static ontology rooted in set theory. Representation (b) illustrates dynamic operational behavior characteristic of logic engines. Representation (c) integrates both set-theoretic and mechanistic perspectives, offering a hybrid interpretation.
Figure 3. A transition from representation (b), depicting an active logic engine, through (c), a hybrid interpretation combining dynamic and static elements, to (d), which portrays a static set-theoretic ontology. Representation (a) from Figure 2 is not pertinent to this progression.
Figure 4. Left figure: The syntactic nondeterminism of a string-rewriting system R, starting with $ab$, is explored by one or more observers. Right figure: Observer $(O_1, D_1)$ in action. Source: Daylight [9].
Figure 5. Illustrating binary choice with two small parts of a tableau—for some $l$, with $1 \le l \le n^k$, and for some $j$, with $1 < j < n^k + 1$.
Figure 6. A conversion from symbol $a$ (marked in the top row) to $b$ (marked in the bottom row). Row indices range from $3l-2$ to $3l+4$. Column indices range from 2 to 5. Each symbol change from row to row is indicated with an arrow for better visualization. The boxes surrounding symbols $a$ and $b$ are merely included to improve readability.
Figure 7. Three types of relabeling rules of the V subsystem. The symbols involved—tape symbols $y \in \Phi$, state symbols $q \in Q$, and instruction labels $t \in T[\,]$—pertain to the properties of machine N.
Figure 8. Subsystem V’s role in capturing the effect of instruction $t_{ac}$ with three types of $4 \times 1$ annotated windows. While the symbols $a$ and $d$ in Windows 1 and 3 are chosen arbitrarily, all symbols in Window 2 and symbol $q_3$ in Window 3 are specific to instruction $t_{ac}: (q_1, a) \to (q_3, c, +)$.
Figure 9. A correct (left) and incorrect (right) state of affairs, with the understanding that each row extends in both directions, covering the entire width of the tableau.