Article

Tossing Coins with an 𝒩𝒫-Machine

by
Edgar Graham Daylight
a.k.a. Karel Van Oudheusden, Department of Computer Science, KU Leuven, Celestijnenlaan 200a, Box 2402, 3001 Leuven, Belgium
Symmetry 2025, 17(10), 1745; https://doi.org/10.3390/sym17101745
Submission received: 14 July 2025 / Revised: 31 August 2025 / Accepted: 4 September 2025 / Published: 16 October 2025
(This article belongs to the Special Issue Symmetry in Solving NP-Hard Problems)

Abstract

In computational complexity, a tableau represents a hypothetical accepting computation path p of a nondeterministic polynomial time Turing machine N on an input w. The tableau is encoded by the formula $\psi$, defined as $\psi = \psi_{cell} \wedge \psi_{rest}$. The component $\psi_{cell}$ enforces the constraint that each cell in the tableau contains exactly one symbol, while $\psi_{rest}$ incorporates constraints governing the step-by-step behavior of N on w. In recent work, we reformulated a critical part of $\psi_{rest}$ as a compact Horn formula. In another paper, we evaluated the cost of this reformulation, though our estimates were intentionally conservative. Here, we provide a more rigorous analysis and derive a polynomial bound for two enhanced variants of our original Filling Holes with Backtracking algorithm: the refined (rFHB) and streamlined (sFHB) versions, each tasked with solving 3-SAT. The improvements stem from exploiting inter-cell dependencies spanning large regions of the tableau in the case of rFHB, and by incorporating correlated coin-tossing constraints in the case of sFHB. These improvements are purely conceptual; no empirical validation—commonly expected by complexity specialists—is provided. Accordingly, any claim regarding P vs. NP remains beyond the scope of this work.

1. Introduction

Let N be a nondeterministic Turing machine (TM) that, on input w of length n, either accepts or rejects w within $n^k$ steps for some constant k. A conservative, deterministic simulation of N requires up to $2^{n^k}$ steps, with each computation path corresponding to a chronology of binary nondeterministic choices.
In prior work [1], the present author conveniently assumed that N's stepwise behavior could be concisely captured by a Horn formula $\psi_{step}^{\eta}$. This conjecture was later confirmed in [2]. These findings now make it possible to move concretely beyond the classical top-down chronology of N's computation, toward a non-sequential understanding of computability.
Traditionally, an accepting path of N on w can be represented by an accepting tableau—a two-dimensional matrix of cells—and formalized in propositional logic as a satisfiable 3cnf-formula $\psi$. In the present approach, however, the tableau structure is relaxed by allowing "holes," replacing the 3cnf-formula $\psi$ with a Horn formula $\psi_{trim}$ of size $O(n^{\kappa})$ for some constant $\kappa$. (Determining the satisfiability of Horn formulas is, to date, substantially more efficient than for genuine 3cnf-formulas [3]). While $\psi_{trim}$ is guaranteed to be satisfiable whenever $\psi$ is, the converse does not necessarily hold.
Specifically, consider the following definition:
$\psi_{trim} = \psi_{step}^{\eta\prime} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}^{\prime} \wedge \psi_{extra1} \wedge \psi_{extra2}$,    (1)
where each conjunct is a succinct Horn formula:
1.
$\psi_{step}^{\eta\prime}$ captures the step-by-step behavior of N on w—with the Greek letter eta ($\eta$), resembling the Latin letter h—highlighting that the formula is a Horn formula.
2.
$\psi_{start}$ ensures that the initial row of the tableau encodes N's start configuration on w.
3.
$\psi_{accept}$ enforces that no cell in the tableau contains the reject state symbol $q_{reject}$.
4.
$\psi_{cell}^{\prime}$ ensures that at most one variable is "turned on" per cell in the tableau, where a "hole" in the tableau refers to a cell where all variables are "turned off".
5.
$\psi_{extra1}$ captures the spatial dynamics of the TM's head within the tableau (Theorem 1).
6.
$\psi_{extra2}$ expresses the inter-cell dependencies across distant rows (Theorem 2).
Crucially, if $\psi_{trim}$ is satisfiable with the corresponding tableau containing no holes, then the original formula $\psi$ is satisfiable too, implying that N accepts the input w.
How do the desiderata (1–6) relate to the author's prior work and to this paper? The definition of $\psi_{step}^{\eta}$, which appears as the first conjunct in the following definition of $\psi_{step}^{\eta\prime}$,
$\psi_{step}^{\eta\prime} = \psi_{step}^{\eta} \wedge \psi_{step}^{det}$,    (2)
is detailed in [2]. Extensive commentary on both conjuncts in (2) is provided in the present paper. (Notably, the second conjunct, $\psi_{step}^{det}$, was not required in [2] due to the assumed presence of $\psi_{cell}$ rather than $\psi_{cell}^{\prime}$, thereby enforcing each cell in the tableau to contain precisely one "turned on" variable).
The definitions of components 2–5 appear in [1]. Theorems 1 and 2—related to items 5 and 6, respectively—also appear in [1]. While this paper defines $\psi_{extra1}$ and discusses Theorem 1, its primary focus is on revisiting Theorem 2 and unpacking item 6, namely the "inter-cell dependencies across distant rows" that arise during the computation of N on input w. While the definition of $\psi_{extra2}$ is outlined in [1] for a 3-SAT solver $\tilde{N}$, a complete formal definition is provided here.
In essence, this paper offers a self-contained exposition of Formula (1), serving as a natural continuation of the previous works [1,2], while not subsuming them. While certain formal definitions and proofs are deferred to those references, the present discussion remains independently accessible.

1.1. Methodology

The approach advocated here centers on the formula $\psi_{trim}$ and a timeless tableau—a matrix with $O(n^k) \times O(n^k)$ cells. When certifying the satisfiability of $\psi_{trim}$, this tableau typically contains holes, thereby encoding an exponential number of paths.
A simplistic reliance on a timeless tableau, where each cell's content is guessed independently, leads to a blow-up in deterministic time complexity—from $2^{n^k}$ to $2^{n^{2k}}$. To counteract this inefficiency, Theorems 1 and 2 from [1] convey two techniques that significantly reduce the number of required guesses.
  • Theorem 1: Compression via geometric constraints. By leveraging a compression result that captures the spatial dynamics of the TM's head within the tableau, we shrink the search space for nondeterministic guesses. As a result, the deterministic time complexity is restored to the classical $2^{n^k}$ bound.
Consider an initially empty tableau. By designating two specific cells—$c$ (situated above) and $c'$ (located several rows below and, say, to the far right)—as head positions, we constrain the machine's behavior so that one or more binary nondeterministic choices collapse into deterministic transitions. In contrast, if only one cell holds a state symbol or if two nearby cells are populated with state symbols, the tableau exhibits a broader range of nondeterministic evolutions. For instance, if only $c$ holds a state symbol, the machine may move freely left or right. However, if $c'$, located far below and (say) to the right, also contains a state symbol, then some of the prior movements become restricted—only rightward transitions from $c$ to $c'$ remain viable. This marks a shift from local stepwise reasoning to a more global, geometric form of constraint.
  • Theorem 2: Correlation in nondeterministic choices. A second form of compression arises by distinguishing between a pure coin-tossing machine and a 3-SAT solver modeled as a nondeterministic polynomial time TM $\tilde{N}$. Unlike the coin-tossing machine that produces independent bits, $\tilde{N}$ generates correlated bits. This correlation allows for further compression of the tableau's nondeterministic behavior.
For example, if the state symbol in cell $c$ indicates that the machine has just produced a second coin toss of 1 (in one of multiple ways) and is about to toss a next coin, this constrains the allowable state symbol in a later cell $c'$ further down in the tableau. Due to inter-cell dependencies—embedded in the tableau's deterministic substructure—the machine may be forced to toss the next coin to 1. These subtle, long-range interactions are crucial for compressing the overall nondeterministic behavior. Importantly, the computation of $\tilde{N}$ on input w does not unfold top-down but grows in an interleaved fashion across the rows of the tableau. See Figure 1a.
Building on the first theorem, Theorem 2 establishes that if the original formula $\psi$ is satisfiable, then it can be satisfied within $K^{f(n,k)}$ steps with the proposed $\psi_{trim}$-based approach, where $f(n,k) \leq n^{\frac{2}{3}k}$ and K is a constant. In this paper, Theorem 2 is revisited with more rigor and improved theoretical bounds.
Reflecting on Fortnow’s recent speculation about compression [4], it is argued that coin tossing by some nondeterministic polynomial time TM N—even when tasked to solve a cryptographic problem—is not entirely random. Fortnow’s exploration appears to offer an early, if not unprecedented, framing of this topic. Although the present author is familiar with the literature on Kolmogorov complexity [5], including a prior contribution to the field [6], no compelling connection is currently discernible between that work and the arguments advanced in this paper.

1.2. Task

The standard textbook formulation of an accepting tableau is given by:
$\psi = \psi_{step} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}$,    (3)
where both $\psi_{step}$ and $\psi_{cell}$ are traditionally regarded as genuine 3cnf-formulas. Recently, however, the author succeeded in re-expressing $\psi_{step}$ as $\psi_{step}^{\eta}$, a compact Horn formula [2].
This improvement leaves $\psi_{cell}$ as the sole non-Horn component, which ensures that each cell in the tableau contains exactly one variable that is "turned on." To move toward a purely Horn-based formulation, $\psi_{cell}$ is weakened to a Horn formula $\psi_{cell}^{\prime}$ and compensated with three additional Horn formulas: $\psi_{step}^{det}$, $\psi_{extra1}$, and $\psi_{extra2}$. Together, these components yield the Horn formula $\psi_{trim}$, as defined via Formulas (1) and (2).
Remark 1.  
A Boolean formula in conjunctive normal form (cnf) is referred to as a 3cnf-formula if each clause consists of exactly three literals. A cnf-formula is referred to as a Horn formula if every clause contains at most one positive literal. Standard definitions appear in Appendix A.
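As a quick mechanical illustration of Remark 1, the two clause classes can be checked with a few lines of Python. This is a minimal sketch, not code from the paper; clauses are encoded DIMACS-style as lists of nonzero integers, with a positive integer denoting a positive literal.

```python
def is_3cnf_clause(clause):
    """A 3cnf clause consists of exactly three literals."""
    return len(clause) == 3

def is_horn_clause(clause):
    """A Horn clause contains at most one positive literal."""
    return sum(1 for lit in clause if lit > 0) <= 1

def is_horn_formula(clauses):
    """A cnf-formula is Horn iff every clause is Horn."""
    return all(is_horn_clause(c) for c in clauses)
```

For example, the clause $(\neg x_1 \vee \neg x_2 \vee x_3)$, encoded as `[-1, -2, 3]`, is both a 3cnf clause and a Horn clause, whereas `[1, 2, -3]` is 3cnf but not Horn.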
The task laid out in this paper is to formalize each newly introduced conjunct in Formula (1), to demonstrate that all components cohere and function in unison, and to establish a tight upper bound on the running time of the two $\psi_{trim}$-based algorithms: rFHB and sFHB. These represent refined and streamlined variants, respectively, of the original FHB algorithm [1]. All three algorithms employ a standard HORNSAT solver H and incorporate external user actions, automating the user's interventions while embedding backtracking as an intrinsic capability.
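The paper treats the HORNSAT solver H as an off-the-shelf component and does not prescribe an implementation. For orientation only, a minimal unit-propagation sketch (our own, with the standard minimal-model semantics of Horn satisfiability) looks as follows:

```python
def hornsat(clauses):
    """Decide satisfiability of a Horn formula given as a list of clauses,
    each clause a list of DIMACS-style integer literals (at most one positive).
    Returns (satisfiable, minimal_model), where minimal_model is the set of
    variables forced to true by unit propagation."""
    model = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            # A Horn clause "fires" once every negated variable is in the model.
            if all(-lit in model for lit in clause if lit < 0):
                positives = [lit for lit in clause if lit > 0]
                if not positives:
                    return False, set()  # all-negative clause violated: unsat
                if positives[0] not in model:
                    model.add(positives[0])
                    changed = True
    return True, model
```

Production solvers achieve linear time with watched-literal bookkeeping; the quadratic loop above merely conveys the principle.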

1.3. Objectives

In re-examining the original cost analysis of the FHB algorithm [1], it becomes evident that two key assumptions were made, both of which are overly pessimistic. First, the analysis presumed that every cell in the tableau might involve a binary nondeterministic guess. In reality, the situation is considerably more favorable. Only a submatrix of the tableau corresponds to binary choices, and each such guess determines the presence of a specific symbol in another cell—typically located further down—in the tableau’s structure (see Figure 1a). Although this refinement was already noted in [1]—leading to the development of the refined FHB algorithm (rFHB)—the present paper offers a more formal treatment in this regard.
Second, Theorem 2 in [1], which pertains to the rFHB algorithm, was established under the assumption that the ratio between the entire tableau and its coin-tossing section was held constant—fixed at a value of 2—rather than allowed to scale with the number l of binary (nondeterministic) choices made by the TM in question. In this paper, we demonstrate that as l increases, the proportion of the dark-shaded region in Figure 1a decreases relative to the surrounding light-shaded area. This observation enables a more precise cost analysis of the rFHB algorithm.
Finally, these two improvements culminate in a conceptual simplification, illustrated by the transition from Figure 1a to Figure 1b. Rather than operating over the entire tableau, it becomes both feasible and pedagogically appealing to focus exclusively on the coin-tossing section. This leads to the notion of a correlated coin-tossing constraint and a streamlined variant of rFHB, denoted sFHB, which is simpler to implement.

1.4. Results

This paper introduces and examines two novel algorithms: rFHB and sFHB. Both are grounded in the $\psi_{trim}$-based formulation presented in Formula (1), with their primary distinction arising from the final conjunct, $\psi_{extra2}$. We shall outline arguments suggesting that the runtime of sFHB is asymptotically bounded above by that of rFHB. More importantly, this paper contains a formal analysis of the latter's runtime, encapsulated in Theorem 3:
Let $\tilde{N}$ be some $\langle N, k \rangle$ machine that solves 3-SAT, with $k \in \{1, 2\}$, working in unary or binary. Let l denote the number of distinct (encoded) propositional variables in the input w of $\tilde{N}$. Then, the runtime R(l) of rFHB associated with $\tilde{N}$ and w admits $l \cdot \log^{C} l$ as an upper bound, where $C > 0$ is a constant.
This result supersedes the cost analysis presented in [1].
Remark 2.  
This paper presents no admissible evidence concerning P vs. NP; such evidence has been repeatedly characterized as fundamentally empirical on Scott Aaronson's widely read blog, https://scottaaronson.blog/ (accessed on 1 August 2025).

1.5. Outline

Three parts constitute this article: Orientation (Section 2, Section 3 and Section 4), Main Body (Section 5 and Section 6), and Final Commentary with Analytical Addenda (Section 7 and Appendix A, Appendix B, Appendix C, Appendix D, Appendix E and Appendix F).
The Orientation spans 27 pages and presents a detailed yet approachable formalization of the 3-SAT solver $\tilde{N}$. Key contributions include:
  • Commentary on the construction of $\psi_{step}^{\eta}$ using an extended tableau (Section 2);
  • The introduction of holes within the extended tableau framework (Section 3);
  • A complete definition of the solver $\tilde{N}$ (Section 4).
The Main Body, comprising 21 pages, primarily investigates long-range inter-cell dependencies (see Figure 1a) and ends with the introduction of correlated coin-tossing constraints (see Figure 1b), culminating in the definition of $\psi_{extra2}$ (Section 5). It also introduces and analyzes two key algorithms: rFHB and sFHB (Section 6).
The article concludes with a Final Commentary and a set of Analytical Addenda. Section 7 offers reflective insights. Appendix A presents literature-based definitions and theorems; Appendix B introduces definitions specific to ψ e x t r a 1 ; Appendix C illustrates a long-range top-down constraint; Appendix D and Appendix F each offer a standard solution to a recurrence relation; and Appendix E provides an alternative proof of Theorem 3, the central result of the paper.
Remark 3.  
Portions of the wording in the Orientation overlap with the author’s earlier works [1,2]. The author retains full ownership—including commercial rights—of the prior content. As such, there are no legal constraints on reusing portions of those works in the current article.

2. Extending the Tableau with Labels: Explicating $\psi_{step}^{\eta\prime}$

Consider an arbitrary nondeterministic TM N that, given an input w of length n, decides whether to accept or reject w in at most $n^k$ steps for some constant k. The focus is on the behavior of a hypothetical accepting computation path p of N on an extended input $\hat{w}$, with
$w = w_0 w_1 \cdots w_{n-1}$,  $\hat{w} = w \,\square \cdots \square$,
where the blank symbol $\square$ occurs $n^k - n$ times.
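The padding of w to $\hat{w}$ is purely mechanical; a minimal sketch (our rendering, with the blank symbol written as '_'):

```python
def extend_input(w, padded_length):
    """Pad the input w with blanks up to length n**k (blank rendered as '_'),
    so that the blank occurs padded_length - len(w) times."""
    assert len(w) <= padded_length
    return w + "_" * (padded_length - len(w))
```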
Path p depends on the execution of instructions, which can be uniquely labeled, such as:
$t_{abc}: (q_1, a) \to \{ (q_2, b, -),\ (q_3, c, +) \}$.
This nondeterministic instruction, labeled $t_{abc}$, can be split into two deterministic ones:
$t_{ab}: (q_1, a) \to (q_2, b, -)$,  $t_{ac}: (q_1, a) \to (q_3, c, +)$.
Each deterministic instruction is assigned a unique label (e.g., $t_{ab}$).
Instruction $t_{ab}$ specifies that when N is in state $q_1$ and reading symbol a, the machine is supposed to transition to state $q_2$, rewrite the symbol a as b, and the tape head should move one cell to the left (−). A plus sign (+) indicates a move of one cell to the right.
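The split of a binary nondeterministic instruction into two uniquely labelled deterministic ones can be sketched as follows. The tuple encoding and the suffix-based label scheme are ours for illustration; the paper only requires that each resulting instruction carry a unique label.

```python
def determinize(label, state, symbol, branches):
    """Split a (possibly nondeterministic) instruction into uniquely labelled
    deterministic ones; an already-deterministic instruction is returned
    unchanged. branches is a list of (next_state, write, move) triples."""
    if len(branches) == 1:
        q, write, move = branches[0]
        return {label: (state, symbol, q, write, move)}
    return {f"{label}_{i}": (state, symbol, q, write, move)
            for i, (q, write, move) in enumerate(branches)}
```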
This section focuses on the formulation
$\psi_{step}^{\eta\prime} = \psi_{step}^{\eta} \wedge \psi_{step}^{det}$.
Section 2.1 begins with a conventional account of nondeterminism. Section 2.2 then zooms in on $\psi_{step}^{\eta}$. The core insights are unpacked in Section 2.3. Finally, Section 2.4 explores the structure and role of $\psi_{step}^{det}$.

2.1. Textbook Approach

To capture the step-by-step behavior of N on $\hat{w}$, attention is directed to the aforementioned instruction $t_{abc}$ as it applies to the following TM configuration, denoted as C:
$a \;\; a_{q_1} \;\; d$
The symbol $a_{q_1}$ indicates that the machine is currently in state $q_1$, with its head oriented towards the tape cell containing the symbol a. This information can be expressed propositionally through the Boolean variable $x_{i,j,a_{q_1}}$, where indices i and j denote the row i and column j in a tableau—a matrix of $n^k$ rows and $n^k + 2$ columns, as shown in Figure 2.
The execution of instructions $t_{ab}$ and $t_{ac}$ is analyzed separately; both outcomes are depicted on the left and right sides of Figure 3, respectively. These are then combined into a single implication, resulting in the expression:
$C_1 \wedge C_2 \wedge C_3 \to T_{ab} \vee T_{ac}$,    (4)
where both $T_{ab}$ and $T_{ac}$ take the form $C_1' \wedge C_2' \wedge C_3'$. This yields a 3cnf-formula corresponding to the notion of a $2 \times 3$ window. By taking conjunctions over all $2 \times 3$ windows defined by N, and for each row and column in the tableau, a 3cnf-formula $\psi_{step}$ of size $O(n^{2k})$ is obtained.
To the best of our knowledge, every approach to NP-completeness ultimately hinges on the notion of "a tableau," a concept that can be traced back to Cook's seminal paper [7]. The work of Cook in the United States was mirrored by Levin's concurrent developments in the Soviet Union [8,9].
Specifically, the two tableau illustrations presented in Figure 3 are modified adaptations of the exemplars found in Sipser ([10], p. 280). (Sipser's textbook treatment uses "$q_1\,a$" instead of "$a_{q_1}$" when referring to a $2 \times 3$ window ([10], p. 280). However, this is merely a cosmetic variation on the concept at hand). Similarly, Papadimitriou introduces the notion of a "computation table" in Section 8.2 of his work [11]. In the same spirit, Hopcroft, Motwani, and Ullman refer to a comparable structure as "an array of cell/ID facts" ([12], p. 443). Aaronson also echoes this idea of a tableau, albeit using more informal language in his accessible book ([13], p. 61).

2.2. Alternative Approach

Can the step-by-step behavior of N on $\hat{w}$ be represented using a compact Horn formula, $\psi_{step}^{\eta}$, instead of a 3cnf-formula, $\psi_{step}$? An affirmative answer is provided in [2] by introducing an extended tableau with $3n^k + 1$ rows and $n^k + 2$ columns, explicitly storing the instruction labels, such as $t_{ab}$ and $t_{ac}$. Two parts of such a tableau are shown in Figure 4, illustrating only one change occurring at a time. This contrasts with the two simultaneous changes depicted in each illustration in Figure 3.
The present author arrived at this result by adopting Aaronson’s vision of philosophy as a “scout” that explores and maps out “intellectual terrain for science to later move in on, and build condominiums on …” ([13], p. 6, original emphasis). Building on this metaphor, and in dialogue with the perspectives of Dean [14], Tall [15], and Turner [16], the author explored the interplay between two distinct modes of reasoning: Aristotelian, step-by-step thinking and Platonic, static reasoning—as largely formulated by Linnebo and Shapiro [17].
These contrasting perspectives are illustrated in the following two quotes:
  • Lance Fortnow as an Aristotelian:
A Turing machine has a formal definition but that’s not how I think of it. When I write code, or prove a theorem involving computation, I feel the machine processing step by step. …I feel it humming along, updating variables, looping, branching, searching until it arrives as its final destination and gives an answer. (Quoted from Lance Fortnow’s blog post [18]).
  • Robin K. Hill as a Platonist:
A Turing Machine is a static object, a declarative, a quintuple or septuple of the necessary components. The object δ that constitutes the transition function that describes the action is itself a set of tuples. All of this is written in appropriate symbols, and just sits there. (Quoted from Robin K. Hill’s CACM blog post [19]).
The author analyzes these two intellectual modes in the context of nondeterministic TMs in [2], and ultimately shows how to transform the 3cnf-formula $\psi_{step}$, which captures the step-by-step behavior of N on $\hat{w}$, into the compact Horn formula $\psi_{step}^{\eta}$.
Technically, an extended tableau—also called a tableau with labels—is employed. Here, the TM configurations are represented in rows $3l - 2$, where $1 \leq l \leq n^k + 1$. The two auxiliary rows, $3l - 1$ and $3l$, each contain exactly one instruction label, and row $3l$ contains precisely one $s_q$ symbol, where s is a tape symbol and q a state symbol. Corresponding to Figure 4, the author defines in [2] the Horn formula $\psi_{step}^{\eta}$, comprising $O(n^{3k})$ literals.
Remark 4.  
The formula $\psi_{step}^{\eta}$ is called $\phi_{step}^{\eta}$ in [2].
The innovation behind $\psi_{step}^{\eta}$ is representing the binary choice between $t_{ab}$ and $t_{ac}$ as a conjunction of two formulas:
$(x_{3l,j,t_{ab}} \to \bigwedge_i U_i) \wedge (x_{3l,j,t_{ac}} \to \bigwedge_i V_i)$,
yielding a Horn formula. In contrast, Sipser's textbook treatment expresses this choice with a disjunction, recall Formula (4), which necessitates a 3cnf-formula.
To be more precise, $\bigwedge_i U_i$ represents the knowledge derived from $x_{3l,j,t_{ab}}$ through both upward and downward reasoning in the extended tableau. This derivation is expressed with the following formula:
$x_{3l,j,t_{ab}} \to \bigwedge_{i \in \{3l-2,\, 3l-1,\, 3l+1\}} U_i$,
which is equivalent to:
$(x_{3l,j,t_{ab}} \to U_{3l-2}) \wedge (x_{3l,j,t_{ab}} \to U_{3l-1}) \wedge (x_{3l,j,t_{ab}} \to U_{3l+1})$,
where $U_i$ is a placeholder for a literal. The formula is a Horn formula. Likewise for $t_{ac}$ and derived knowledge $\bigwedge_i V_i$, which amounts to:
$x_{3l,j,t_{ac}} \to \bigwedge_i V_i$,
where the subscript i stands for: $i \in \{3l-2,\, 3l-1,\, 3l+1\}$.
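The equivalence just stated is the standard expansion of an implication with a conjunctive consequent into one clause per conjunct; each resulting clause is Horn provided every $U_i$ is a single literal. A minimal sketch in DIMACS-style integers (the function name is ours):

```python
def expand_implication(x, consequents):
    """Rewrite x -> (U_1 /\ ... /\ U_m) as the m clauses (-x \/ U_i).
    Literals are DIMACS-style integers; each output clause has at most
    one positive literal whenever each U_i is a single literal."""
    return [[-x, u] for u in consequents]
```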
To convey the essence of the prior contribution without providing formal definitions, let $K(i,j,t)$ denote the knowledge derived from $x_{i,j,t}$ for some instruction label t of machine N stored in $cell[i,j]$, with $i = 3l$. In [2], the author demonstrates the construction of multiple Horn formulas, including:
$\psi_{V1} = \cdots$,  $\psi_{V2} = \bigwedge_l \bigwedge_j \bigwedge_t \big( x_{3l,j,t} \to K(3l,j,t) \big)$,  $\psi_{V3} = \cdots$,  $\psi_V = \psi_{V1} \wedge \psi_{V2} \wedge \psi_{V3}$,  $\psi_H = \cdots$,  $\psi_{step}^{\eta} = \psi_V \wedge \psi_H$,
where the latter formula represents N's step-by-step behavior. Here, "V" denotes "vertical" reasoning within the extended tableau, and "H" signifies "helicopter" reasoning across a block of rows, ranging from $3l - 2$ to $3l + 1$.

2.3. Explicating $\psi_{step}^{\eta}$

More rigorously, consider an arbitrary nondeterministic polynomial time TM $\langle N, k \rangle$, or N for short. Let the tape alphabet $\Phi$, state set Q, and label set T be extracted from the specifications of machine N.
Definition 1.  
A nondeterministic polynomial time Turing machine, denoted as $\langle N, k \rangle$, is defined as $N = (Q, \Gamma, \Phi, \delta, T, q_0, q_{accept}, q_{reject})$, a nondeterministic Turing machine in accordance with Definition A5, which serves as a decider with a running time of $n^k$—as specified in Definition A8, where n and k represent the length of input w and some constant, respectively.
Remark 5.  
Without loss of generality, the nondeterminism associated with TM N consists solely of binary choices. For each such choice, say between instructions $t_1$ and $t_2$, the movement of $t_1$ is to the left (−), while the movement of $t_2$ is to the right (+).
Recall that the propositional formula $\psi_{trim}$ is defined as:
$\psi_{trim} = \psi_{step}^{\eta\prime} \wedge \psi_{start} \wedge \psi_{accept} \wedge \psi_{cell}^{\prime} \wedge \psi_{extra1} \wedge \psi_{extra2}$,
with
$\psi_{step}^{\eta\prime} = \psi_{step}^{\eta} \wedge \psi_{step}^{det}$.    (5)
To elucidate the variables within $\psi_{trim}$, consider the alphabet $\Sigma$:
$\Sigma = \Phi \,\cup\, (\Phi \times Q) \,\cup\, T \,\cup\, \{\#\}$.
For each i and j ranging from 1 to $3n^k + 1$ and $n^k + 2$, respectively, and for every symbol s in $\Sigma$, a Boolean variable $x_{i,j,s}$ is introduced. The total number of such variables is $O(n^{2k})$.
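The variables $x_{i,j,s}$ can be put in bijection with 1-based DIMACS indices, as any concrete solver would require. A minimal sketch under the row and column bounds just stated (the function name and encoding are ours):

```python
def var_index(i, j, s, sigma, n_cols):
    """Bijectively map (row i, column j, symbol s) to a positive integer,
    with 1 <= i <= 3*n**k + 1, 1 <= j <= n_cols = n**k + 2, and s in sigma
    (given as a list fixing an order on the alphabet)."""
    return ((i - 1) * n_cols + (j - 1)) * len(sigma) + sigma.index(s) + 1
```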
The formula
$\psi_{step}^{\eta} = \psi_V \wedge \psi_H$
reflects the coordination between the V and H subsystems. This coordination is achieved primarily by ensuring that specific vertical symbol conversions in the extended tableau are carried out in two distinct stages.
For instance, rather than directly converting symbol $a_{q_0}$ into symbol a when traversing a column in the extended tableau top-down, the V subsystem first transforms $a_{q_0}$ into the intermediate label $t_0$, and only then into the symbol a. This deliberate two-step conversion guarantees that V produces a unique intermediate trace—namely, the instruction label $t_0$ of machine N—which can then be identified by the H subsystem. This example, involving the label $t_0$, corresponds to the following deterministic machine instruction:
$t_0: (q_0, a) \to (q_1, a, +)$.
In general, however, an instruction of N is nondeterministic. For each binary choice of N, such as
$t_{abc}: (q_1, a) \to \{ (q_2, b, -),\ (q_3, c, +) \}$,
the instruction must first be determinized by splitting it into two distinct deterministic ones:
$t_{ab}: (q_1, a) \to (q_2, b, -)$,  $t_{ac}: (q_1, a) \to (q_3, c, +)$.
Each deterministic instruction is assigned a unique label (e.g., $t_{ab}$). Notably, determinizing an instruction that is already deterministic—such as $t_0$, $t_{ab}$, or $t_{ac}$—has no effect.
After applying determinization to all uniquely labeled instructions of N, the author's setup ensures that V, when selecting any deterministic instruction label t, explicitly records the label t as an intermediate trace in the extended tableau. Examples of $t_{ab}$ and $t_{ac}$ are shown in the center column, in the left and right illustrations, respectively, in Figure 4. Consequently, H reads label t from the tableau and acts accordingly. The behavior of V and H is described by Horn formulas $\psi_V$ and $\psi_H$, respectively, as defined in ([2], Section 4).
Fundamentally, any conversion between two distinct tape symbols, say from a to b, in any column of the extended tableau, must occur through an intermediate trace. Figure 5 provides an illustration, relying on the label $t_{ab}$ and, more precisely, the following instruction of machine N:
$t_{ab}: (q_3, a) \to (q_4, b, -)$.
The marked symbol a in the top row in Figure 5 can only change into the marked symbol b in the bottom row via an intermediate trace, such as $t_{ab}$.
A few additional clarifications regarding Figure 5 are necessary. First, each symbol change from row to row is indicated with an arrow for better visualization. Second, the boxes surrounding symbols a and b are merely included to improve readability.
To summarize, the novelty of [2] is twofold. First, it introduces an extended tableau that explicitly stores instruction labels, enabling single-symbol changes between consecutive rows. Second, it analyzes the tableau from both a vertical perspective ($\psi_V$) and a helicopter perspective ($\psi_H$), combining them into the succinct Horn formula: $\psi_{step}^{\eta} = \psi_V \wedge \psi_H$.

2.4. Explicating $\psi_{step}^{det}$

We now explicate the second conjunct of Equation (5), which pertains exclusively to the deterministic instructions t of the nondeterministic TM under consideration; specifically, those satisfying $t \in T^{det}$, as defined in Definition A7.
A distinction is in order between the subsets $T^{det}_{+}$ and $T^{det}_{-}$, along with their associated formulas $\psi_{+}^{det}$ and $\psi_{-}^{det}$, respectively. The overall formula for deterministic transitions is thus expressed as:
$\psi_{step}^{det} = \psi_{+}^{det} \wedge \psi_{-}^{det}$.
The explicit definition of the first conjunct, $\psi_{+}^{det}$, is given below. The construction of $\psi_{-}^{det}$ follows by symmetry and is left to the reader:
$\psi_{+}^{det} = \bigwedge_{l} \bigwedge_{j} \bigwedge_{t \in T^{det}_{+}} \bigwedge_{s} \Big( \mathrm{source}_t@(3l-2,\,j) \,\wedge\, s@(3l-2,\,j+1) \,\wedge\, t@(3l,\,j) \;\to\; \mathrm{write}_t@(3l+1,\,j) \,\wedge\, s_{\mathrm{target}_t}@(3l+1,\,j+1) \Big)$,
where $1 \leq l \leq n^k + 1$ and $1 \leq j \leq n^k + 2$, with $s \in \Phi$. For precise definitions of the operators such as $\mathrm{source}_t$, $\mathrm{write}_t$, and $\mathrm{target}_t$, see Definition A6.
The formula $\psi_{+}^{det}$ comprises $O(n^{2k})$ literals. An analogous construction, along with the same complexity bound, applies to $\psi_{-}^{det}$. Together, these formulas encapsulate traditional top-down reasoning. Here, the top is row $3l - 2$, going via row $3l$ to the bottom row $3l + 1$. This aligns with the established result that such reasoning can be fully captured by a Horn formula ([20], p. 35).
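The clause set of $\psi_{+}^{det}$ can be generated mechanically, splitting each implication with a conjunctive consequent into one Horn clause per conjunct. The sketch below is ours, not the paper's: the dictionary encoding stands in for the operators $\mathrm{source}_t$, $\mathrm{write}_t$, $\mathrm{target}_t$ of Definition A6, and `var(i, j, s)` is any bijection onto DIMACS indices.

```python
def psi_plus_det(instructions, tape_syms, n_k, var):
    """Generate the Horn clauses of psi_+^det for right-moving deterministic
    instructions. instructions: {label: (source_symbol, write_symbol,
    target_state)}; var(i, j, s) yields the DIMACS index of x_{i,j,s}."""
    clauses = []
    for l in range(1, n_k + 1):
        for j in range(1, n_k + 1):  # keep column j + 1 inside the tableau
            for t, (source, write, target) in instructions.items():
                for s in tape_syms:
                    body = [-var(3 * l - 2, j, source),
                            -var(3 * l - 2, j + 1, s),
                            -var(3 * l, j, t)]
                    # the conjunctive consequent splits into one clause each
                    clauses.append(body + [var(3 * l + 1, j, write)])
                    clauses.append(body + [var(3 * l + 1, j + 1, (s, target))])
    return clauses
```

Each generated clause has exactly one positive literal, so the output is indeed a Horn formula.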

3. Extended Tableau with Holes

An extended tableau is a matrix consisting of $(3n^k + 1) \times (n^k + 2)$ cells. It is formed—or, as we shall also say, induced—by augmenting each of the $n^k$ rows of the basic tableau (except the bottom row) with two auxiliary rows. Going forward, the context will clarify which version of the tableau—extended or basic—is being referred to. The reader is expected to switch between these representations as appropriate.

3.1. Notation

Given an extended tableau, the cell at row i and column j is denoted by $cell[i,j]$ and is intended to store a symbol $s \in \Sigma$. The contents of these cells are represented using the variables of $\psi_{trim}$, which, unless specified, is used in place of the original formula $\psi$.
When the variable $x_{i,j,s}$ is assigned the value 1, it signifies that $cell[i,j]$ in the extended tableau contains the symbol s. This situation is also denoted as follows:
$s@(i,j)$.
Conversely, we write
$\neg s@(i,j)$ or $\overline{s@(i,j)}$
when $x_{i,j,s}$ is assigned 0, using either notation interchangeably to improve readability.
When referring to the corresponding basic tableau, which consists of $n^k \times (n^k + 2)$ cells, we use the notation
$s@(i,j)_B$
to abbreviate
$s@(3i-2,\,j)$,
indicating that the cell at row i and column j in the basic tableau corresponds to row $3i - 2$ and column j in the extended tableau. Similarly, we write
$\neg s@(i,j)_B$ or $\overline{s@(i,j)_B}$
to mean
$\overline{s@(3i-2,\,j)}$.
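The subscript-B notation is a simple affine re-indexing between the two tableaux; as a sketch (function names ours):

```python
def basic_to_extended(i, j):
    """Row i of the basic tableau sits at row 3*i - 2 of the extended one."""
    return 3 * i - 2, j

def extended_to_basic(i, j):
    """Inverse map, defined only on configuration rows (i = 3l - 2)."""
    assert i % 3 == 1, "not a configuration row of the extended tableau"
    return (i + 2) // 3, j
```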
Definition 2.  
Consider $\psi_{trim}$ and, correspondingly, its extended and basic tableaux. We say that $cell[i,j]$ in the extended tableau contains a hole iff $x_{i,j,s}$ is false for all $s \in \Sigma$. We say that $cell[i,j]$ in the basic tableau contains a hole iff $cell[3i-2,\,j]$ in the extended tableau contains a hole.
Definition 2 does not exclude the possibility that a specific variable $x_{i,j,s'}$, for some $s' \in \Sigma$, may later be "turned on," effectively filling the hole with symbol $s'$. For example, suppose $cell[i,j]$ initially contains a hole, but a human agent (e.g., a user of an off-the-shelf HORNSAT solver H) later assigns it the tape-state symbol $a_{q_5}$. This scenario is illustrated in:
  • Figure 6 (with an extended tableau), and;
  • Figure 7 (with the corresponding basic tableau).
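Definition 2 translates directly into a predicate on a partial assignment. A minimal sketch, representing the assignment as a dict from (i, j, s) to booleans (our representation, not the paper's), with absent keys counting as false:

```python
def has_hole(assignment, i, j, sigma):
    """cell[i, j] of the extended tableau contains a hole iff x_{i,j,s}
    is false for every symbol s in sigma (a partial assignment suffices)."""
    return not any(assignment.get((i, j, s), False) for s in sigma)
```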
What are the implications of the proposition $a_{q_5}@(3l-2,\,j)$, with $1 \leq l \leq n^k + 1$?
To address this question in the remainder of this section, the Horn formula $\psi_{extra1}$ is introduced in Section 3.2, followed by a closer examination of one of its subformulas, $\psi_{extra}^{extend}$, in Section 3.3. The concept of "user interaction" is then explored in Section 3.4. Finally, the original FHB (Filling Holes with Backtracking) algorithm is presented in Section 3.5.

3.2. Introducing ψ e x t r a 1

The formula ψ e x t r a 1 captures global properties of a TM computation. While it is redundant in the context of ψ ’s satisfiability, it proves useful when ψ t r i m is satisfiable in the general case—namely when the basic tableau still contains holes. To convey the meaning of ψ e x t r a 1 , we begin by examining the example in Figure 6.
Remark 6. 
The Horn formula ψ e x t r a 1 corresponds to ψ e x t r a η from [1], with two notational differences:
  • the formula ψ e x t r a η is defined over a basic tableau, rather than an extended tableau; and
  • ψ e x t r a η uses the notation q s (instead of s q ) to denote the TM’s head is scanning s in state q.
A precise translation from ψ e x t r a η to ψ e x t r a 1 is straightforward. The formula ψ e x t r a 1 is treated informally here and formally in Appendix B. Importantly, ψ e x t r a 1 is a Horn formula of size O ( n κ ) , where the constant κ = 4 k .
If a q 5 @ ( i , j ) holds (where a ∈ Φ , q 5 ∈ Q , and i = 3 l − 2 ), meaning x i , j , a q 5 = 1 , then the tape head of N in c e l l [ i , j ] cannot reach any of the crossed-out cells in Figure 6. This restriction follows from the fact that a TM can move its tape head by at most one cell per transition—either left or right. In terms of Figure 6, this corresponds to a transition between rows 3 l − 2 and 3 l + 1 , with 1 ≤ l ≤ n k —or equivalently, between any two consecutive rows marked with crosses. These transitions mirror those between consecutive rows in the basic tableau (Figure 7).
Furthermore, in each column of the basic tableau (and similarly in the extended tableau), all crossed-out cells either contain or are required to contain the same tape symbol s ∈ Φ . This constraint follows from the fact that a TM can only modify a tape symbol when its head is directly over that cell.
Therefore, filling the hole in c e l l [ 3 l − 2 , j ] (Figure 6) with the symbol a q 5 effectively amounts to filling in all crossed-out cells, albeit indirectly. In other words, the condition a q 5 @ ( 3 l − 2 , j ) in Figure 6 ensures that, of the entire basic tableau, only the uncrossed cells in Figure 7 can encode the binary choices made by N on w.
Formula ψ e x t r a 1 expresses these restrictions as a conjunction of four parts:
ψ e x t r a 1 = ⋀ 1 ≤ l ≤ n k + 1 ⋀ 1 ≤ j ≤ n k + 2 ⋀ s ∈ Φ ⋀ q ∈ Q ψ e x t r a s i n g l e ∧ ⋀ s ∈ Φ ψ e x t r a l e f t ∧ ψ e x t r a r i g h t ∧ ψ e x t r a e x t e n d
The formal definition of each conjunct, provided in Appendix B, aligns with Figure 8, which extends the structure shown in Figure 6. In each column of Figure 8, every cross represents the same tape symbol.
Each conjunct plays a distinct role:
  • Single part ( ψ e x t r a s i n g l e ): Ensures that each row 3 l − 2 contains at most one tape-state symbol s q , such as a q 5 .
  • Left part ( ψ e x t r a l e f t ): Handles the crossed-out cells to the left of a q 5 @ ( 3 l − 2 , j ) in Figure 8.
  • Right part ( ψ e x t r a r i g h t ): Covers the crossed-out cells to the right of a q 5 @ ( 3 l − 2 , j ) in Figure 8.
  • Extend part ( ψ e x t r a e x t e n d ): Introduces additional refinements not discussed in [1], as that work does not consider the ψ s t e p η -based intricacies of an extended tableau.
The final point is elaborated in the following section.

3.3. Formula ψ e x t r a e x t e n d

To illustrate the use of ψ e x t r a e x t e n d , consider Figure 4 again and suppose that proposition
a q 1 @ ( 3 l − 2 , j )
has already been guessed (by the human agent) for some fixed l and j. Two clarifications are in order.

3.3.1. Clarification 1

If it later follows that
a q 2 @ ( 3 l + 1 , j − 1 )
also holds—either due to another user guess or, more realistically, as a consequence of c e l l [ 3 l + 1 , j + 1 ] containing a tape symbol—then
t a b @ ( 3 l , j )
should automatically hold as well. The current state of affairs corresponds to the left illustration in Figure 4.
This inference arises from implications embedded in ψ e x t r a e x t e n d , such as:
a q 1 @ ( 3 l − 2 , j ) ∧ a q 2 @ ( 3 l + 1 , j − 1 ) → t a b @ ( 3 l , j ) .
Similarly, if t a c applies instead of t a b , as shown with the right illustration in Figure 4, we have:
a q 1 @ ( 3 l − 2 , j ) ∧ d q 3 @ ( 3 l + 1 , j + 1 ) → t a c @ ( 3 l , j ) .
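As a hedged sketch of how such implications might be held in memory, a Horn clause can be stored as a pair (body, head), with one forward-chaining step adding the head once the whole body is satisfied. The atom encoding and the helper names below are illustrative assumptions, not part of the formal construction.

```python
def extend_clause(l, j):
    # mirrors  a_q1@(3l-2, j) AND a_q2@(3l+1, j-1)  ->  t_ab@(3l, j);
    # an atom ('a_q1', r, c) stands for the proposition a_q1@(r, c)
    body = {('a_q1', 3 * l - 2, j), ('a_q2', 3 * l + 1, j - 1)}
    head = ('t_ab', 3 * l, j)
    return body, head

def fire(clause, facts):
    # one forward-chaining step: derive the head when the body holds
    body, head = clause
    return facts | {head} if body <= facts else facts
```

For instance, with l = 2 and j = 5 the body mentions rows 4 and 7, and firing the clause adds the atom for row 6, in keeping with the three-row spacing of the extended tableau.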

3.3.2. Clarification 2

Suppose once again that
a q 1 @ ( 3 l − 2 , j )
has been guessed for some fixed l and j and that, in adherence to the left illustration in Figure 4, c e l l [ 3 l + 1 , j + 1 ] contains some tape symbol s. Then, the following family of inferences,
⋀ s ∈ Φ ⋀ s ′ ∈ Φ a q 1 @ ( 3 l − 2 , j ) ∧ s @ ( 3 l + 1 , j + 1 ) ∧ s ′ @ ( 3 l − 2 , j − 1 ) → s ′ q 2 @ ( 3 l + 1 , j − 1 ) ,
allows for an automatic derivation of
a q 2 @ ( 3 l + 1 , j − 1 ) ,
where s and s stand for d and a, respectively, and q 2 denotes the target state of t a b (in our running example).
Remark 7. 
The definition of ψ e x t r a e x t e n d is provided in Appendix B and incorporates both clarifications discussed above. However, Clarification 2 also hinges upon the constraints in ([2], Section 4.3).

3.4. User Interaction

Assume that ψ t r i m is satisfiable and that the extended tableau reflects this condition, typically containing several holes, as illustrated in Figure 8. A hole in the (extended) tableau, located at row index i and column index j, represents more than just an empty cell. To maintain a conservative approach, the following stipulation is adopted:
  • Single Hole: If c e l l [ i , j ] is the only hole in the tableau, it corresponds to at most c possible accepting computation paths, where c is the cardinality of Σ . (In fact, in this scenario, it contributes to representing at most one accepting path).
  • Two Holes: If c e l l [ i , j ] is one of two holes in the tableau, it contributes to representing up to c × c possible accepting computation paths.
  • Three Holes: If c e l l [ i , j ] is one of three holes, it contributes to representing up to c × c × c possible accepting computation paths.
This pattern continues, with each additional hole multiplying the maximum number of possible accepting computation paths by c .
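The multiplicative bookkeeping above amounts to a single exponentiation; the function name below is an illustrative assumption.

```python
def max_accepting_paths(num_holes, c):
    # each hole multiplies the maximum number of representable
    # accepting computation paths by at most c = |Sigma|
    return c ** num_holes
```

With zero holes the tableau represents at most one path, and each further hole scales the bound by c.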
In the general case, the (extended) tableau, composed of a polynomial number of cells, indirectly represents an exponentially large number of paths for N on w, including paths that are syntactically inadmissible from the perspective of N’s step-by-step behavior. Among the syntactically admissible paths, there are both rejecting and accepting paths.
This flexibility is achieved by leaving most cells unfilled. The Horn clauses associated with ψ t r i m remain implicitly active in the background, waiting for an external user to fill in a hole via an additional specification, such as
s q @ ( i , j ) ,
where
s ∈ Φ and q ∈ Q ,
i = 3 l − 2 and 1 < l < n k + 1 ,
1 < j < n k + 2 .
Consequently, the HORNSAT solver H is called upon again, now tasked with satisfying
ψ t r i m ∧ s q @ ( i , j ) ,
which stands for
ψ t r i m ∧ x i , j , s q .
After two more user interventions, the solver is tasked with satisfying the following type of formula:
ψ t r i m ∧ s q @ ( i , j ) ∧ s ′ q ′ @ ( i ′ , j ′ ) ∧ s ″ q ″ @ ( i ″ , j ″ ) ,
where
i ′ = 3 l ′ − 2 ,
i ″ = 3 l ″ − 2 .
However, if the user’s guess (e.g., s q @ ( i , j ) ) leads the solver H to detect unsatisfiability, backtracking is required. In such cases, the user may revise the assignment—for instance, replacing
s q @ ( i , j ) with s ′ q ′ @ ( i , j ) ,
where ( s ′ , q ′ ) ≠ ( s , q ) . This means that s ′ ≠ s or q ′ ≠ q , or both.
Fortunately, as shown in Claim 3 of [1] in the context of the basic tableau, asymptotic analysis (with n → ∞ ) reveals that filling any hole in the center row of a convex polygon of holes reduces the space for binary choices by a factor of 1 / 2 . This effect is illustrated in two places:
  • Figure 9 demonstrates the initial scaling.
  • Figure 10 shows a second intervention in row 4, where enforcing b q 9 @ ( 4 , 12 ) B results in 41 additional crossed-out cells.
Remark 8.  
To simplify this exposition, the leftmost columns in the depicted tableaux do not contain the boundary marker. However, strictly speaking, column 1 should contain the boundary marker, while tape and tape-state symbols appear only from column 2 onward.

3.5. The FHB Algorithm

Conceptually, the FHB algorithm relies on a standard HORNSAT solver H and integrates the actions of the external user, automating her interventions and incorporating backtracking as a built-in feature. It is worth noting that H operates in nearly linear time [21].
Recall the definition of ψ t r i m ,
ψ t r i m = ψ s t e p η ∧ ψ s t a r t ∧ ψ a c c e p t ∧ ψ c e l l ∧ ψ e x t r a 1 ∧ ψ e x t r a 2 ,
which will turn out to be O ( n κ 0 ) literals long for some constant κ 0 . For the time being, the truth value 1 is substituted for ψ e x t r a 2 (to be revisited in Section 5). In this context, ψ e x t r a 1 is the largest conjunct, of size O ( n 4 k ) , as follows from Appendix B.
Additionally, at most O ( n k ) extra stipulations of the form s q @ ( 3 l − 2 , j ) will be present—namely, one per three rows in the extended tableau. Hence, an upper bound on the total cost of the HORNSAT instance at hand,
ψ t r i m ∧ s q @ ( 3 l − 2 , j ) ,
can be expressed as
p ( n ) = κ 1 · n κ 2 ,
for some constants κ 1 and κ 2 .
The FHB algorithm begins with ψ t r i m , an instance of size less than p ( n ) , and runs the solver H on it, resulting in a trivial “satisfiable” as a tentative outcome. (If N’s computation on w is deterministic, then the outcome is permanent and either satisfiable or unsatisfiable). Next, the algorithm selects the center row, or one of the two center rows, of the basic tableau and injects the first tape-state symbol—the first s q symbol appearing in a standard list representation of Φ × Q —into the leftmost hole in that row. If backtracking is required, subsequent iterations will use different tape-state symbols, and if this does not suffice, the next hole (from left to right) in the row will be filled instead, starting again with the first s q symbol appearing in a standard ordering of Φ × Q , and so on.
For a row containing holes, there are at most n k ways to inject some specific tape-state symbol s q , with ( s , q ) ∈ Φ × Q , into that row. This leads to a key observation:
There are at most c 0 · n k ways to inject any tape-state symbol s q , with ( s , q ) ∈ Φ × Q , into a row, where c 0 denotes the cardinality of Φ × Q .
The first user intervention results in scaling the space of binary choices by 1 / 2 , shrinking from size p ( n ) to size Δ · p ( n ) , with Δ = 1 / 2 . In the next two interventions, the algorithm selects the middle row (or, if applicable, one of the two middle rows) of the first and second convex polygons of holes, read from top to bottom in the basic tableau. In the next four interventions, the algorithm selects the middle row (or one of two middle rows) of each of the four smaller convex polygons of holes, moving sequentially from top to bottom. This pattern continues in subsequent steps.
Immediately after each intervention, the FHB algorithm directs the solver H to check the entire (extended) tableau for unsatisfiability and, in the process, simplify the underlying Horn clauses as much as possible, taking into account all constraints specified by ψ t r i m ∧ ⋯ , where the dots refer to the cumulative intervention stipulations made up to that point.
After each stage of interventions—one intervention in stage 1, two interventions in stage 2, four interventions in stage 3, eight interventions in stage 4, and so on—the solver H runs on an instance that has been shrunk in size by Δ = 1 / 2 . To be technically precise, H continues operating on the entire instance, but the space of binary choices has been shrunk by a factor of Δ after each stage. As a result, the process is intrinsically dealing with an instance of size Δ m · p ( n ) after m stages. Hence, m is bounded from above by O ( log ( n ) ) . Additionally, the bookkeeping for backtracking itself incurs at most a polynomial cost. A runtime stack with a constant overhead per recursive call is sufficient in practice [22].
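The staged halving can be tabulated in a few lines; fhb_schedule is a hypothetical helper that merely records the effective instance sizes, not part of the FHB algorithm itself.

```python
def fhb_schedule(p_n, delta=0.5):
    # stage m performs 2**(m-1) interventions, after which the space of
    # binary choices has effective size delta**m * p_n; iteration stops
    # once that size drops below 1, after roughly log2(p_n) stages
    stages, m, size = [], 0, p_n
    while size >= 1:
        m += 1
        size = (delta ** m) * p_n
        stages.append((m, 2 ** (m - 1), size))
    return stages
```

For p ( n ) = 16 the schedule lists five stages, the last of which drives the effective size below 1—consistent with the logarithmic bound on m.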
Remark 9.  
In future work, the software engineer could reduce the exponent κ 2 in
p ( n ) = κ 1 · n κ 2 ,
by considering the on-the-fly generation of the Horn constraints associated with ψ s t e p η and/or ψ e x t r a 1 .
For instance, the formula ψ s t e p η could expand and contract based on the placement of the s q symbols in the tableau, rather than conservatively accounting for all possibilities in advance. Similarly, the tailored constraints related to ψ e x t r a 1 could be added only when a guess s q @ ( 3 l 2 , j ) is made, causing the formula ψ e x t r a 1 to grow incrementally with each additional guess and shrinking during backtracking.
Theorem 1. 
(Reappropriated from Daylight ([1], p. 27)). Consider a nondeterministic polynomial time TM N , k that runs on an input w of length n. The runtime R ( n ) of the FHB algorithm, applied to N , k and w, satisfies
R ( n ) ≤ 2 κ · n k ,
where the constant κ > 1 .
Theorem 1 suggests a prohibitive runtime. However, a refinement of the method leads to a significantly tighter upper bound, as discussed in the remainder of this paper.

4. The 3-SAT Solver N

Even a devil’s advocate would have to admit that the analysis thus far is unduly pessimistic, as it assumes that every cell in the basic tableau could involve a binary nondeterministic guess. In reality, the situation is considerably more favorable. Only a portion of the basic tableau entails binary choices, and crucially, the outcome of each guess (i.e., the transition to either state q a or q b ) determines the presence of a specific state symbol (e.g., q c ) in another cell—typically further down—in the basic tableau. These observations about the basic tableau naturally carry over to the extended tableau.
To clarify why the situation is more favorable, this section formally defines the 3-SAT solver N , supplemented by informal insights. Section 5 explores the inter-cell dependencies of N across distant rows of the basic tableau, and concludes with the topic of correlated coin-tossing constraints. Finally, Section 6 presents two algorithms: the refined rFHB and the streamlined sFHB algorithm.

4.1. Overview

The TM N runs on an input word w of length n, where a substring w ˜ of input word w encodes a 3-SAT formula ϕ with l propositional variables ( x 1 , … , x l ) and m clauses. Each clause consists of three literals—for example, x 2 ∨ ¬ x 7 ∨ x 92 . Here, l < n and m < n .
With respect to 3-SAT itself, an informal grasp of the following stipulations suffices:
  • Each variable x i appears in at least two literals across the formula ϕ ; otherwise, such a variable (occurring only once) can be eliminated through preprocessing.
  • No clause contains the same variable x more than once—whether as x ∨ x , x ∨ x ¯ , or x ¯ ∨ x ¯ .
  • Accordingly, as m increases sufficiently, so does l.
    -
    The lower bound is m = Ω ( l ) , as each variable must be constrained in some way.
    -
    Under a deliberately conservative upper bound, we posit m = O ( l 3 ) .
The input word w of N has the form:
w ˜ # □ ⋯ □ # ,
where the number of blank symbols (□) in between the two # markers is exactly l, and the comma is included solely for readability.
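For concreteness, the shape of w can be sketched as follows; rendering the blank symbol □ as '_' and the helper name are assumptions of this illustration.

```python
def build_input(w_tilde, l):
    # w = w~ # _..._ #  with exactly l blanks between the two markers
    return w_tilde + '#' + '_' * l + '#'
```

The l blanks reserve one tape cell per propositional variable, to be filled with coin-toss outcomes in the next stage.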
The operation of N , with state set Q and tape alphabet Φ , proceeds through three sequential stages:
1.
Coin-Tossing Stage  S ^ (Section 4.2);
2.
Updating Stage  S (Section 4.3);
3.
Checking Stage  S (Section 4.4).
The following provides a breakdown of the three stages:
  • Coin-Tossing Stage  S ^ : To denote elements specific to this stage, we annotate instruction labels and state symbols with a “roof” symbol ^. This annotation signifies divergence—that is, the nondeterminism which arises exclusively during this stage. The coin-tossing stage uses seven machine instructions labeled t 1 ^ , t 2 ^ , … , t 7 ^ and operates with the following state set:
    Q ^ = { q 0 ^ , q 1 ^ , Q 0 ^ , … } with Q ^ ⊂ Q .
  • Updating Stage  S : Here, instruction labels and state symbols are annotated with a bidirectional arrow ↔, indicating the machine’s back-and-forth traversal across the tape. This movement is generally required to update the encoding of the formula ϕ (i.e., the substring w ˜ ) based on the coin-toss outcomes. The updating stage employs instructions labeled t 1 , t 2 , … , and uses the following set of states:
    Q ↔ = { q 0 , q 1 , q 2 , Q 0 , U 0 , … } with Q ↔ ⊂ Q .
  • Checking Stage S : In this final stage, instruction labels and state symbols are annotated with a left arrow ←, reflecting the machine’s predominant leftward movement. While most transitions are leftward, occasional rightward steps will occur locally. The checking stage uses instructions labeled t 1 , t 2 , … , and the state set:
    Q ← = { q 3 , q 0 , Q 0 , q r e j e c t , q a c c e p t , … } with Q ← ⊂ Q .
    The symbols q r e j e c t ← and q a c c e p t ← serve as surrogates for, respectively, q r e j e c t and q a c c e p t ; it is the latter notation which is adopted in the remainder of this paper.
We characterize N as an N , 2 machine. Its Updating Stage incurs a cost of O ( n 2 ) , while the other two stages each require linear time.

4.2. Coin-Tossing Stage

The machine N stores the encoding of the formula ϕ , represented as the string w ˜ , on its tape. The tape head is initially positioned at the first blank symbol, □, right after w ˜ # . More specifically, the initial tape configuration is as follows:
w ˜ # □ q 1 ^ □ ⋯ □ # .
Here, N is in state q 1 ^ , reading the first of l blank symbols. The punctuation is included solely for readability.
The machine generates l bits, proceeding from left to right and writing each bit—either 0 or 1—into a separate tape cell. This sequence, which is supposed to represent the outcome of l independent coin tosses, is enclosed at both ends by the marker #.
One outcome of any coin toss must correspond to a rightward movement (+) and the other to a leftward movement (−). To enforce this constraint—consistent with Remark 5—the following behavior is implemented, starting in the q 1 ^ tossing state:
t 1 ^ : q 1 ^ , □ → q 1 ^ , 1 , + t 2 ^ : q 1 ^ , □ → q 0 ^ , 0 , −
Among all instructions pertaining to N , only t 1 ^ and t 2 ^ involve nondeterministic choices.
  • If a coin toss yields bit 1, the machine moves its head one cell to the right and re-enters the q 1 ^ state. See instruction t 1 ^ .
  • If a coin toss yields bit 0, the machine first moves its head one cell to the left and enters state q 0 ^ . See instruction t 2 ^ . Then the machine performs two deterministic moves to the right, ending up in state q 1 ^  again.
    -
    See instructions t 3 ^ t 5 ^ for the first move to the right:
    t 3 ^ : q 0 ^ , # → Q 0 ^ , # , + t 4 ^ : q 0 ^ , 0 → Q 0 ^ , 0 , + t 5 ^ : q 0 ^ , 1 → Q 0 ^ , 1 , +
    -
    See instruction t 6 ^ for the second move to the right:
    t 6 ^ : Q 0 ^ , 0 → q 1 ^ , 0 , +
  • Once the machine reaches the rightmost # marker (while in state q 1 ^ ), it moves leftward and enters state q 2 :
    t 7 ^ : q 1 ^ , # → q 2 , # , −
Upon completing the coin-tossing process, the machine will have generated l bits,
b 1 , b 2 , … , b l − 1 , b l ,
for, respectively, the propositional variables:
x l , x l − 1 , … , x 2 , x 1 .
In other words, the j-th coin toss from the right ( 1 ≤ j ≤ l ) determines the truth assignment, 0 or 1, for propositional variable x j .
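The correspondence between toss order and variable indices is easy to get backwards, so it can help to state it as a small sketch; the function name is an assumption of this illustration.

```python
def coin_toss_assignment(bits):
    # bits b_1 .. b_l are written left to right, but the j-th toss
    # counted from the RIGHT fixes the truth value of x_j: x_1 gets
    # the last bit written, x_2 the one before it, and so on
    l = len(bits)
    return {f'x{j}': bits[l - j] for j in range(1, l + 1)}
```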
Assigning the truth value 1 to the variable x j entails that, during the Updating Stage  S , the machine will set each encoded occurrence of x j in the word w ˜ to 1, and each encoded occurrence of ¬ x j to 0. (A similar remark holds for the truth value 0).
By preprogramming the proper constraints, to be detailed in Section 5, filling holes with tape-state symbols in the coin-tossing section of the basic tableau will automatically propagate to filling corresponding holes with tape-state symbols in lower sections of the entire basic tableau. Moreover, if and when all l coins have been tossed, the remaining basic tableau—and therefore the entire basic tableau—is fully determined. (Once the basic tableau is fully determined, the extended tableau is as well). Even a devil’s advocate would expect this property to be reflected in a worst-case analysis of the FHB algorithm or a refinement thereof.

4.2.1. Four Coin Tosses

Figure 11 illustrates the structure of the coin-tossing process for l = 4 . The hyphens and dots in the illustration represent potential positions of the tape head (i.e., tape-state symbols), with the distinction between the two serving only for visual clarity. The two extreme computation runs are depicted solely with hyphens: the diagonal run at the top (consisting of five hyphens) produces all four bits as 1, while the zigzagging run takes longer to complete and results in all four bits being 0.
Definition 3. 
Let B be a basic tableau. A mini tableau corresponding to B —or simply a mini tableau, when the context permits—is a matrix strictly contained within B .
Although this definition is general, we will use the term mini tableau specifically to refer to the smallest submatrix that encodes all possible computations relevant to the Coin-Tossing Stage  S ^ , as illustrated in Figure 11 for the case where l = 4 .
Two additional remarks about the term mini tableau are in order:
  • Figure 11 depicts the coin-tossing process using a matrix of dimensions 3 l + 1 by l + 2 . This mini tableau can be embedded within a more square-like mini tableau of size ( 3 l + 1 ) × ( 3 l + 3 ) . The distinction between an approximated square form and its rectangle-embedded counterpart is not essential and will often be omitted.
  • The basic ( 3 l + 1 ) × ( 3 l + 3 )  mini tableau corresponds to the extended ( 3 · ( 3 l + 1 ) + 1 ) × ( 3 l + 3 )  mini tableau, and neither is depicted in Figure 11.
Theorem 1 in Section 3.5 provides a basis for analyzing the runtime associated with the mini tableau. Crucially, if the 3-SAT solver N were solely responsible for tossing l coins, then no tighter bound than 2 l —akin to Theorem 1’s worst-case runtime of the FHB algorithm—applies. In reality, however, the coin tosses of N are made to correlate through the word w ˜ , which encodes the 3-SAT formula ϕ : not all sequences of l coin tosses are valid—if any are at all.

4.2.2. Properties of Computation

To appreciate (and ultimately formalize) the correlation between the l coin tosses, the discussion begins with two basic insights regarding the computation runs of N on w:
1.
The basic mini tableau—and, more rigorously, the extended mini tableau—captures all nondeterminism (coin tossing) inherent to N , while also including rote deterministic computations.
  • Example 1: If four 1 bits are tossed consecutively, N ’s tape head lands on c e l l [ 5 , 6 ] of Figure 11 and immediately begins rote deterministic computation from row 6.
  • Example 2: If four 0 bits are tossed instead, N uses all rows of the mini tableau to complete the coin tossing, reaching c e l l [ 13 , 6 ] before starting rote deterministic computation in row 14 onward.
2.
The rote deterministic computation does not revisit column 6 (in Figure 11) or any column to its right. More formally, the rightmost column c ^ of the basic mini tableau, which is of length 3 l + 1 , contains exactly one tape-state symbol (namely, # q 1 ^ ). Furthermore, in the rest of the basic tableau, the same column c ^ contains only the # tape symbol and thus no other tape-state symbols.
(a)
Although the two hyphens and three dots in column  c ^ indicate multiple possible positions for a tape-state symbol, only one tape-state symbol can appear in any particular computation. Moreover, that symbol must be # q 1 ^ , implying that the following proposition holds:
# q 1 ^ @ ( r ^ , c ^ ) B ,
for a suitable row index r ^ .
(b)
For each input formula ϕ and any valid placement of # q 1 ^ in the rightmost column c ^ , which amounts to specifying r ^ , we can determine (and thus preprogram) the position of the machine’s head—though not the corresponding tape-state symbol—in every subsequent row of the entire basic tableau. In other words, the implication of guessing row index r ^ —with r ^ ∈ { 5 , 7 , 9 , 11 , 13 } in Figure 11—is as follows:
Apart from the bulk of the basic mini tableau, each row from r ^ + 1 onward in the basic tableau contains exactly one uncrossed cell. Figuratively speaking, a “crossed-out” cell cannot be occupied by the machine’s tape head.
This property, conveyed via item 2(b) above, is ensured through the construction of N , detailed further in Section 4.3 and Section 4.4.

4.3. Updating Stage

Upon entering Stage  S , the machine N has its tape head positioned at the rightmost bit, immediately before the rightmost # marker. The tape configuration is as follows:
& $ c l m $ ⋯ $ c l 2 $ c l 1 $ # 1 0 0 0 q 2 # ,
with
0 q 2 @ ( r ^ + 1 , c ^ − 1 ) B .
The machine is in state q 2 , with the head reading the bit 0 (in the current example).
Observe that the leftmost & symbol marks the beginning of the string w ˜ , and that the rightmost $ symbol marks the end of w ˜ . Moreover, each encoded clause, such as c l 1 , is delimited by the $ marker.

4.3.1. Unary Encoding

To simplify the control flow, a unary encoding is adopted for each variable and its negation. Specifically, a variable x α is encoded as a string of α copies of the symbol a:
a a ⋯ a .
Similarly, the negated variable ¬ x α is encoded as a string of α copies of a distinct symbol a ¯ :
a ¯ a ¯ ⋯ a ¯ .
Remark 10. 
Encoding l propositional variables in binary requires l × ( log l ) bits, whereas a unary encoding requires l × l bits. Although this results in an increase, it does not amount to an exponential blow-up; furthermore, it remains acceptable within the scope of computability theory [23].
Rather than stating formal desiderata, the unary encoding scheme is best illustrated through a few examples. Suppose Clause 1 is as follows:
( x 3 ∨ ¬ x 2 ∨ x 1 ) .
Its unary encoding, denoted as c l 1 , is:
a a a ∨ a ¯ a ¯ ∨ a ,
where the comma and extra spacing are included solely to enhance readability.
The subscript of the variable indicates the number of occurrences of either the symbol a or a ¯ . If the literal is positive, the symbol a is used; if the literal is negative, a ¯ is used.
Now consider Clause 2:
( ¬ x 4 ∨ x 2 ∨ ¬ x 3 ) .
Its unary encoding, denoted c l 2 , is:
a ¯ a ¯ a ¯ a ¯ ∨ a a ∨ a ¯ a ¯ a ¯ .
The punctuation is added solely for clarity in this exposition.
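The encoding scheme above can be sketched compactly; rendering the barred symbol a ¯ as 'A' and the ∨ separator as 'v', as well as the function names, are ASCII assumptions of this illustration.

```python
def encode_literal(index, positive):
    # x_alpha -> 'a' * alpha ;  NOT x_alpha -> 'A' * alpha
    return ('a' if positive else 'A') * index

def encode_clause(literals):
    # a clause is three (index, positive) pairs joined by the separator
    return 'v'.join(encode_literal(i, p) for i, p in literals)
```

Encoding Clause 1, ( x 3 ∨ ¬ x 2 ∨ x 1 ), yields 'aaavAAva', mirroring the display above.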

4.3.2. Updating a Unary Encoding: Part 1

Let us revisit Clause 1 and its unary encoding, c l 1 :
a a a ∨ a ¯ a ¯ ∨ a .
Suppose the variable x 1 is assigned the truth value 1:
x 1 = 1 .
Then, after an update operation, the corresponding string encoding should take the following form:
? ? ? ∨ ? ? ∨ 1 ,
where each question mark (?) denotes a “don’t care” entry—a meta-symbol whose assigned value (a specific symbol from the alphabet Σ ) is inconsequential within this context.
Similarly, if
x 2 = 1 ,
then, eventually, the coresponding encoding should take the form:
? ? ? ∨ 0 ? ∨ ? .
Finally, if
x 3 = 0 ,
the encoding should ultimately take the form:
0 ? ? ∨ ? ? ∨ ? ,
again with question marks indicating “don’t care” entries.
In summary, for each encoded literal, the leftmost symbol σ ∈ { a , a ¯ } should be overwritten with the appropriate truth value. The remaining symbols a and a ¯ in the encoded literal are superfluous.

4.3.3. Updating a Unary Encoding: Part 2

Suppose now that the machine’s tape head approaches the encoded clause c l 1 :
a a a ∨ a ¯ a ¯ ∨ a ,
from the right, under the assumption that x 1 = 1 . The machine then traverses the entire (encoded) clause from right to left, continuing leftward across all (encoded) clauses preceding c l 1 . During this traversal, the machine updates the first symbol σ ∈ { a , a ¯ } of each encoded literal—in every (encoded) clause—according to the following rule:
  • If σ = a , overwrite it with 1.
  • If σ = a ¯ , overwrite it with 0.
Applying this rule to c l 1 yields the updated encoding:
a a 1 ∨ a ¯ 0 ∨ 1 .
The second update pass now proceeds, under the assumption that x 2 = 1 . The machine’s tape head approaches the previously updated string from the right-hand side, traversing the entire clause from right to left, as well as all (encoded) clauses to its left. In this pass, the machine updates the second symbol σ ∈ { a , a ¯ } of each encoded literal—in every (encoded) clause—according to the following rule:
  • If σ = a , overwrite it with 1.
  • If σ = a ¯ , overwrite it with 0.
Applying this rule yields the updated string:
a 1 1 ∨ 0 0 ∨ 1 .
Finally, the third update pass is reached, under the assumption that x 3 = 0 . The machine’s tape head once again approaches the previously updated string from the right-hand side, traversing the entire clause from right to left, along with all (encoded) clauses to its left. In this pass, it updates the third symbol σ ∈ { a , a ¯ } of each encoded literal—in every (encoded) clause—according to the following rule:
  • If σ = a , overwrite it with 0.
  • If σ = a ¯ , overwrite it with 1.
Applying this rule yields the final updated string:
0 1 1 ∨ 0 0 ∨ 1 .
The resulting string has the desired form
0 ? ? ∨ 0 ? ∨ 1 ,
corresponding to the assignment
x 3 = 0 and x 2 = x 1 = 1 ,
which is best read from right to left, in keeping with the direction of the tape head’s movement.
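The three passes can be replayed in a few lines. This sketch reuses the ASCII rendering 'a'/'A'/'v' for a, a ¯ , and ∨, and the function name is an assumption; pass j overwrites the j-th symbol from the right of each encoded literal, matching the right-to-left head movement described above.

```python
def update_pass(clause, j, value):
    # one pass for variable x_j with truth value `value`: in each
    # encoded literal, the j-th symbol from the right (if it is still
    # 'a' or 'A') is overwritten; 'a' receives the value itself,
    # 'A' (a-bar) receives its complement
    out = []
    for lit in clause.split('v'):
        cells = list(lit)
        k = len(cells) - j              # j-th position from the right
        if k >= 0 and cells[k] in 'aA':
            cells[k] = str(value if cells[k] == 'a' else 1 - value)
        out.append(''.join(cells))
    return 'v'.join(out)
```

Starting from 'aaavAAva' (Clause 1) with x 1 = x 2 = 1 and x 3 = 0 , the three passes produce 'aa1vA0v1', then 'a11v00v1', then '011v00v1', matching the worked example.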

4.3.4. The Updating Code

The corresponding instructions of N are now presented. Upon entering the Updating Stage  S , the machine N has the configuration
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ ∨ a $ # 1 0 0 0 q 2 # ,
and starts with t 1 or t 2 :
t 1 : q 2 , 1 → q 1 , # , − , t 2 : q 2 , 0 → q 0 , # , − .
In both cases, the bit (1 or 0) is overwritten with the # symbol.
In our running example, instruction t 1 does not apply, whereas t 2 does—since the scanned bit is not 1, but 0. This yields:
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ ∨ a $ # 1 0 0 q 0 # # .
Due to the symmetry in the behavior of updating a 1-bit and a 0-bit, the analysis is restricted to the second case ( t 2 ). Accordingly, the instructions and labels presented below are not exhaustive; additional instructions exist but are omitted here for brevity.

Moving Left

Instructions t 3 and t 4 allow the machine to remain in state q 0 while traversing all remaining bits from right to left:
t 3 : q 0 , 0 → q 0 , 0 , − t 4 : q 0 , 1 → q 0 , 1 , −
These transitions preserve the bit values and simply move the tape head to the left.
Eventually, upon encountering the leftmost # marker, the machine transitions from state q 0 to Q 0 :
t 5 : q 0 , # → Q 0 , # , − ,
resulting in the following configuration:
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ ∨ a $ Q 0 # 1 0 0 # # .
When the machine encounters the $ symbol, or when it later visits the symbols a or a ¯ , it simply continues moving left, remaining in state Q 0 :
t 6 , x : Q 0 , x → Q 0 , x , − , x ∈ { $ , a , a ¯ } .
The result of executing the instruction t 6 , $ for the first time is as follows:
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ ∨ a Q 0 $ # 1 0 0 # # .
The machine has now reached the first (encoded) literal of an (encoded) clause and switches from state Q 0 to state U 0 , moving left:
t 7 : Q 0 , ∨ → U 0 , ∨ , − ,
resulting in the configuration:
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ ∨ a U 0 $ # 1 0 0 # # .
While in the update state U 0 , encountering any of the symbols ∨, $, 0, or 1 does not trigger a state change; the machine’s tape head simply continues moving left:
t 8 , x : U 0 , x → U 0 , x , − , x ∈ { ∨ , $ , 0 , 1 } .
However, upon reading the symbol a (or a ¯ ), the machine carries out the update by writing the bit 0 (or 1, respectively) and returning to state Q 0 :
t 9 : U 0 , a → Q 0 , 0 , − , t 10 : U 0 , a ¯ → Q 0 , 1 , − .
In our running example, this yields:
& $ c l m $ ⋯ $ c l 2 $ a a a ∨ a ¯ a ¯ Q 0 0 $ # 1 0 0 # # .
At this point, also reconsider the instructions t 6 , x , where x { $ , a , a ¯ } : the machine remains in state Q 0 while scanning from right to left in search of the next ∨ symbol, if one exists. In our running example, the next ∨ symbol has already been located.
Remark 11. 
As previously mentioned, the present analysis is restricted to the case of updating a 0-bit. However, for future reference, the twin instructions of t 9 and t 10 are provided as follows:
$\tau_9\colon U_1, a \to Q_1, 1, -\,, \qquad \tau_{10}\colon U_1, \bar{a} \to Q_1, 0, -\,,$
which update a and a ¯ to, respectively, 1 and 0.
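Taken together, $t_9$, $t_{10}$, $\tau_9$, and $\tau_{10}$ say that an occurrence of a receives the tossed bit itself, while an occurrence of ā receives its complement. A minimal table-driven sketch of this rule (the tuple encoding of instructions is mine, not the paper's notation; "A" stands for ā):

```python
# Update instructions t9, t10 and their twins tau9, tau10 (Remark 11), encoded
# as (state, scanned symbol) -> (next state, written bit, head move).
# "A" stands for the symbol a-bar; "L" denotes a left move.
UPDATE_RULES = {
    ("U0", "a"): ("Q0", "0", "L"),   # t9:    0-bit toss, symbol a  -> write 0
    ("U0", "A"): ("Q0", "1", "L"),   # t10:   0-bit toss, a-bar    -> write 1
    ("U1", "a"): ("Q1", "1", "L"),   # tau9:  1-bit toss, symbol a -> write 1
    ("U1", "A"): ("Q1", "0", "L"),   # tau10: 1-bit toss, a-bar    -> write 0
}

def written_bit(coin_value: int, symbol: str) -> int:
    """Bit written over `symbol` ('a' or 'A') by a coin that landed on `coin_value`."""
    _, bit, _ = UPDATE_RULES[(f"U{coin_value}", symbol)]
    return int(bit)

# An a-occurrence receives the coin value; an a-bar occurrence its complement.
assert all(written_bit(b, "a") == b and written_bit(b, "A") == 1 - b for b in (0, 1))
```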

Moving Right

Finally, if the machine is in state U 0 or Q 0 and encounters the leftmost marker &, it transitions to state h and begins moving right:
$t_{11}\colon U_0, \& \to h, \&, +\,,$
$t_{12}\colon Q_0, \& \to h, \&, +\,,$
resulting in the following configuration:
& $ h c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 # # .
In state h , the machine scans rightward, searching for the leftmost occurrence of the symbol #. It continues moving right over the symbols ∨, $, 0, 1, a, and a ¯ without changing state:
$t_{13,x}\colon h, x \to h, x, +\,, \qquad x \in \{\lor, \$, 0, 1, a, \bar{a}\}.$
This ultimately leads to, e.g., the following configuration:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # h 1 0 0 # # .
Upon encountering #, the machine transitions from state h to i and continues moving right:
$t_{14}\colon h, \# \to i, \#, +\,,$
which results in one of the following two types of configurations:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 $ # # i # # # # ,
and
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 i 0 0 # # .
In the first type of configuration, when there are no remaining toss outcomes (to the right of the leftmost # symbol), the machine transitions from state i to q 3 , and moves left:
$t_{15}\colon i, \# \to q_3, \#, -\,.$
This transition marks the beginning of the final phase: the Checking Stage.
However, in the second type of configuration, when one or more toss outcomes are still present, the machine continues moving right, searching for the rightmost bit. It first transitions from i to j while preserving the bit value:
$t_{16}\colon i, 0 \to j, 0, +\,, \qquad t_{17}\colon i, 1 \to j, 1, +\,.$
Once in state j , the machine continues moving right through any remaining bits:
$t_{18}\colon j, 0 \to j, 0, +\,, \qquad t_{19}\colon j, 1 \to j, 1, +\,.$
When the leftmost # of the remaining # symbols is encountered, the machine transitions from state j to state q 2 and moves left:
$t_{20}\colon j, \# \to q_2, \#, -\,.$
At this point, the machine has returned to state q 2 , scanning the rightmost bit:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 q 2 # # .
It is now prepared to repeat the earlier steps, thereby continuing the Updating Stage for another iteration.
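The sweep just described can be paraphrased at string level: in one right-to-left pass, the rightmost not-yet-instantiated a or ā of every literal is overwritten with the tossed bit or its complement. A hedged sketch, assuming the passes handle the coins in the order $x_1, x_2, \ldots$ (writing A for ā); this is a paraphrase of the sweep's effect, not the TM itself:

```python
def updating_sweep(literals, coin_value):
    """One right-to-left pass of the Updating Stage, paraphrased at string level.

    Each literal is a list over {'a', 'A', '0', '1'}, where 'A' stands for a-bar.
    The pass overwrites the rightmost not-yet-instantiated symbol of every
    literal: 'a' receives the coin value, 'A' its complement (t9/t10 and their
    twins tau9/tau10); fully instantiated literals are skipped, mirroring the
    bit-skipping instructions t8,x.
    """
    bit = {"a": str(coin_value), "A": str(1 - coin_value)}
    for lit in literals:
        for k in range(len(lit) - 1, -1, -1):   # scan the literal right to left
            if lit[k] in bit:
                lit[k] = bit[lit[k]]            # write the bit (t9/t10/tau9/tau10)
                break                           # exactly one update per literal
    return literals

# Clause 1, x3 or not-x2 or x1, unary-encoded as aaa | AA | a (Section 4.3),
# instantiated under x1 = 0, then x2 = 1 (the assumed pass order: x1 first).
clause = [list("aaa"), list("AA"), list("a")]
updating_sweep(clause, 0)                       # pass for coin x1 = 0
updating_sweep(clause, 1)                       # pass for coin x2 = 1
assert clause == [list("a10"), list("01"), list("0")]
```

After all passes, the leftmost bit of each instantiated literal is exactly the literal's truth value, which is what the Checking Stage below inspects.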

4.4. Checking Stage

Once all tossed coins have been processed, N reaches—via instruction t 15 —the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 $ # q 3 # # # ,
Here, N is ready to perform a final traversal of the tape—from its current position to the left, with local left-to-right movements—to check whether each instantiated clause is satisfied.
Remark 12.  
The caveat is that the tape movement behavior should remain invariant—that is, independent of the contents of the (encoded) instantiated clauses.
The initial phase of the right-to-left traversal is governed by the following two instructions:
$t_1\colon q_3, \# \to q_0, \#, -\,, \qquad t_2\colon q_0, \$ \to q_0, \$, -\,,$
which results in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 q 0 $ # # # # .
While in state q 0 , the machine switches to state Q 0 upon encountering the first ∨ symbol in the (encoded and instantiated) clause currently being scanned:
$t_3\colon q_0, \lor \to Q_0, \lor, -\,,$
resulting, in our example, in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 Q 0 $ # # # # .
The states Q 0 , Q 00 , and Q 000 indicate that the machine is currently processing—right to left—the first, second, and third (encoded and instantiated) literals of the clause being scanned, respectively. As will be shown shortly, transitions from these states to R 0 , R 00 , and R 000 , respectively, signify that the machine has located the leftmost bit of the corresponding literal, thereby determining its truth value.

4.4.1. First Literal

In state Q 0 , the machine continues moving left through the (encoded and instantiated) clause, remaining in the same state as it reads bits:
$t_{4,x}\colon Q_0, x \to Q_0, x, -\,, \qquad x \in \{0, 1\}.$
Upon reaching the next ∨ symbol, the machine switches to the result state R 0 and moves one cell to the right:
$t_5\colon Q_0, \lor \to R_0, \lor, +\,,$
resulting in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 0 R 0 $ # # # # .
The machine can now determine whether the clause is already satisfied or not:
$t_6\colon R_0, 1 \to s_{00}, 1, -\,, \qquad t_7\colon R_0, 0 \to q_{00}, 0, -\,,$
where the letter “s” in s 00 stands for “satisfied.”
In our running example, the first literal is not satisfied, resulting in the following configuration:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 q 00 0 $ # # # # .
Since the first literal is not satisfied, the machine proceeds in state q 00 (rather than s 00 ). As will be shown shortly, the machine subsequently transitions from q 00 to Q 00 .

4.4.2. Second Literal: Part 1

The states q 00 and Q 00 indicate that the machine is currently processing—right to left—the second (encoded and instantiated) literal of the clause being scanned. The transition from Q 00 to R 00 signifies that the machine has located the leftmost bit of the literal, thereby determining its truth value.
The relevant transitions are as follows:
$t_8\colon q_{00}, \lor \to Q_{00}, \lor, -\,.$
$t_{9,x}\colon Q_{00}, x \to Q_{00}, x, -\,, \qquad x \in \{0, 1\}.$
$t_{10}\colon Q_{00}, \lor \to R_{00}, \lor, +\,.$
Once in the result state R 00 ,
& $ c l m $ $ c l 2 $ 0 1 0 0 R 00 1 0 $ # # # # ,
the machine inspects the bit:
  • If the truth value is 1, it switches to state s 000 .
  • If the truth value is 0, it switches to state q 000 .
These transitions are captured by:
$t_{11}\colon R_{00}, 1 \to s_{000}, 1, -\,, \qquad t_{12}\colon R_{00}, 0 \to q_{000}, 0, -\,,$
where the letter “s” in s 000 stands for “satisfied.”
In our running example, the second literal is not satisfied, resulting in:
& $ c l m $ $ c l 2 $ 0 1 0 q 000 0 1 0 $ # # # # .
Since the second literal is not satisfied, the machine proceeds in state q 000 (rather than s 000 ). As will be shown shortly, the machine subsequently transitions from q 000 to Q 000 .

4.4.3. Second Literal: Part 2

Similar to q 00 and Q 00 , the states s 00 and S 00 also indicate that the machine is currently processing—right to left—the second (encoded and instantiated) literal of a clause. However, in this case, the clause is already known to be satisfied:
& $ c l m $ $ c l 2 $ 0 1 0 0 1 s 00 1 $ # # # # .
In conformity with Remark 12, the transition from S 00 to T 00 indicates that the machine has located the leftmost bit of the literal.
The corresponding transitions are:
$t_8\colon s_{00}, \lor \to S_{00}, \lor, -\,.$
$t_{9,x}\colon S_{00}, x \to S_{00}, x, -\,, \qquad x \in \{0, 1\}.$
$t_{10}\colon S_{00}, \lor \to T_{00}, \lor, +\,.$
Regardless of the leftmost bit’s value, the machine proceeds to state s 000 :
$t_{11}\colon T_{00}, 1 \to s_{000}, 1, -\,, \qquad t_{12}\colon T_{00}, 0 \to s_{000}, 0, -\,.$

4.4.4. Third Literal: Part 1

The states q 000 and Q 000 indicate that the machine is currently processing—right to left—the third (encoded and instantiated) literal of the clause being scanned. The transition from Q 000 to R 000 signifies that the machine has located the leftmost bit of the literal, thereby determining its truth value.
The relevant transitions are as follows:
$t_{13}\colon q_{000}, \lor \to Q_{000}, \lor, -\,.$
$t_{14,x}\colon Q_{000}, x \to Q_{000}, x, -\,, \qquad x \in \{0, 1\}.$
$t_{15}\colon Q_{000}, \$ \to R_{000}, \$, +\,.$
Once in the result state R 000 ,
& $ c l m $ $ c l 2 $ 0 R 000 1 0 0 1 0 $ # # # # ,
the machine inspects the bit:
  • If the truth value is 1, it switches to state q 0 .
  • If the truth value is 0, it switches to state q r e j e c t .
These transitions are captured by:
$t_{16}\colon R_{000}, 1 \to q_0, 1, -\,, \qquad t_{17}\colon R_{000}, 0 \to q_{reject}, 0, -\,.$
In our running example, the third literal is not satisfied, resulting in:
& $ c l m $ $ c l 2 $ q r e j e c t 0 1 0 0 1 0 $ # # # # .

4.4.5. Third Literal: Part 2

Similar to q 000 and Q 000 , the states s 000 and S 000 also indicate that the machine is currently processing—right to left—the third (encoded and instantiated) literal of a clause. However, in this case, the clause is already known to be satisfied. In conformity with Remark 12, the transition from S 000 to T 000 indicates that the machine has located the leftmost bit of the literal.
The corresponding transitions are:
$t_{13}\colon s_{000}, \lor \to S_{000}, \lor, -\,.$
$t_{14,x}\colon S_{000}, x \to S_{000}, x, -\,, \qquad x \in \{0, 1\}.$
$t_{15}\colon S_{000}, \$ \to T_{000}, \$, +\,.$
Regardless of the leftmost bit’s value, the machine proceeds to state q 0 :
$t_{16}\colon T_{000}, 1 \to q_0, 1, -\,, \qquad t_{17}\colon T_{000}, 0 \to q_0, 0, -\,.$
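At bit level, the outcome of the Checking Stage for a single clause can be paraphrased as follows: a clause survives precisely when the leftmost bit of at least one of its three instantiated literals equals 1. A hedged sketch of that logic (a paraphrase of the state machine, not the TM itself):

```python
def check_clause(instantiated):
    """Checking-Stage outcome for one clause, paraphrased at bit level.

    `instantiated` lists the three literals' bit strings in tape order
    (third, second, first).  States q../Q.. of Sections 4.4.1-4.4.5 mean
    "not yet satisfied"; s../S.. mean "already satisfied".  Returns
    'continue' (the machine proceeds in q0) or 'reject' (q_reject).
    """
    third, second, first = instantiated
    # R0 inspects the first literal's leftmost bit (t6/t7).
    satisfied = first[0] == "1"
    # R00 (or T00, if already satisfied) inspects the second literal (t11/t12).
    satisfied = satisfied or second[0] == "1"
    # R000 inspects the third literal; a 0 here rejects only if the clause is
    # still unsatisfied (t16/t17 versus the T000 instructions).
    if satisfied or third[0] == "1":
        return "continue"
    return "reject"   # early rejection via q_reject

# Running example: all three leftmost bits are 0, so the clause fails.
assert check_clause(["010", "01", "0"]) == "reject"
assert check_clause(["110", "01", "0"]) == "continue"
```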

4.4.6. Early Rejection

The symbol q r e j e c t must never appear in any cell of the (extended) tableau—this follows directly from the semantics of the Horn formula ψ a c c e p t (see item 3 in Section 1). Arguably for the sake of mathematical aesthetics, one could handle its hypothetical presence similarly to t 2 , using the instruction:
$t_{18}\colon q_{reject}, \$ \to q_0, \$, -\,.$
However, to maintain compatibility with the definitions of deterministic and nondeterministic TMs provided in Appendix A, instruction t 18 must be excluded from further consideration. Instead, N should enter a loop once it reaches the q r e j e c t state; the straightforward formalization is omitted.

4.4.7. Looping on the Left

If, and only if, all (encoded and instantiated) clauses have been examined from right to left, the tape head enters a loop confined to the two leftmost (non-blank) cells of the tape. In combination with instruction t 2 , the following instruction
$t_{19}\colon q_0, \& \to q_0, \&, +$
ensures that the tape head oscillates indefinitely between these two leftmost cells associated with the word w ˜ .
However, to maintain coherence with the TM definitions in Appendix A, which require the machine to become and remain active in the q a c c e p t state upon accepting its input, it follows that the symbol q a c c e p t should appear in t 19 , thus:
$t_{19}\colon q_0, \& \to q_{accept}, \&, +\,,$
complemented with the following two instructions:
$t_{20}\colon q_{accept}, \$ \to q_{accept}, \$, -\,,$
$t_{21}\colon q_{accept}, \& \to q_{accept}, \&, +\,.$

5. Long-Range Inter-Cell Dependencies: Defining ψ extra 2

Consider a devil’s advocate who adheres to the formal definition of the 3-SAT solver N presented in Section 4. She will recognize that, for a given ϕ , once the row r ^ corresponding to the guess
$\#\hat{q}_1\,@\,(\hat{r}, \hat{c})_B,$
is fixed, the positions of tape-state symbols in all subsequent rows of the basic tableau become fully determined. Furthermore, only the first r ^ rows of the basic tableau involve binary (nondeterministic) choices.
Based on this insight and the notion of “long-range inter-cell dependency,” introduced in this section, the goal is to improve the prohibitive runtime established by Theorem 1, as revisited in Section 6. To pursue this objective, the concept of a coin-tossing scenario is first introduced (Section 5.1), followed by an examination of two distinct forms of long-range inter-cell dependencies within the basic tableau of N : top-down constraints (Section 5.2) and bottom-up constraints (Section 5.3). These dual perspectives are ultimately unified in a single framework—termed correlated coin tossing (Section 5.4).

5.1. Coin Tossing Scenarios

The focus is on declarative coin toss outcomes (on the one hand) and imperative coin-tossing scenarios (on the other hand).
Definition 4.  
Given a coin $x_\alpha$, where $l \ge \alpha \ge 1$, we say that
$x_\alpha = b, \ \text{with } b \in \{0, 1\},$
is a coin toss outcome.
Moreover, a coin-tossing scenario implementing the outcome x α = 1 is represented by a proposition of the form
$\hat{q}_1\,@\,(i, j)_B \quad \text{or} \quad \#\hat{q}_1\,@\,(i, \hat{c})_B,$
where i and j denote suitable row and column indices, respectively.
Similarly, a coin-tossing scenario implementing the outcome x α = 0 is represented by a proposition of the form
$0\hat{Q}_0\,@\,(i, j)_B,$
where i and j denote suitable row and column indices, respectively.
The two instances of “suitable” in Definition 4 can be formalized; however, the concept is illustrated through a series of examples instead. In the following examples, note that coin x 1 is the last coin tossed, as the coins are ordered as follows:
$x_l,\; x_{l-1},\; \ldots,\; x_2,\; x_1.$
Example 1.  
Consider l = 4 and Figure 11. The coin toss outcome x 4 = 1 is exhaustively implemented by the coin-tossing scenario
$\hat{q}_1\,@\,(2, 3)_B.$
The TM N can produce a result of 1 for the leftmost coin, x 4 , in exactly one coin-tossing scenario.
Example 2.  
Consider l = 3 and Figure 12. The coin toss outcome x 3 = 1 is exhaustively implemented by the same coin-tossing scenario as in the previous example.
Remark 13.  
To facilitate Definition 4, observe that for a given coin x α and appropriately chosen indices i and j, the conjunction
$\hat{q}_1\,@\,(i, j)_B \;\land\; \lnot\, 0\hat{Q}_0\,@\,(i+1, j-1)_B$
represents a coin-tossing scenario implementing the outcome $x_\alpha = 1$. Furthermore, the second conjunct follows from the first when the coin-tossing instructions of N are considered alongside $\psi_{trim}$, under the assumption that all cells in B are filled—or, more precisely, that at least one pivotal cell, such as $cell[i+1, j-1]$ in B, is filled. Hence, the second conjunct is redundant and need not appear in Definition 4, though it may be included for practical considerations.
To illustrate, suppose, towards a contradiction, that for l = 3 it is the case that
$\hat{q}_1\,@\,(2, 3)_B \;\land\; 0\hat{Q}_0\,@\,(3, 2)_B \;\land\; \psi_{trim} \;\land\; \cdots$  (8)
is true. Recall the instruction t 2 ^ : q 1 ^ , q 0 ^ , 0 , from Section 4.2. From the first two conjuncts in (8) and ψ e x t r a in Appendix B.4, we obtain:
$\hat{q}_1\,@\,(2, 3)_B \;\land\; 0\hat{Q}_0\,@\,(3, 2)_B \;\to\; \hat{t}_2\,@\,(\_, 3),$
where the consequent refers to a cell in the extended tableau, and the underscore denotes a row index that can easily be computed if necessary. What is crucial here is that the presence of the instruction label t 2 ^ , when combined with ψ s t e p η ψ c e l l , conflicts—via the constraints specified in ([2], Section 4)—with the (alleged) proposition
$0\hat{Q}_0\,@\,(3, 2)_B,$
thereby producing the desired contradiction.
Example 3.  
Consider l = 4 and Figure 11. There are four distinct coin-tossing scenarios that exhaustively implement x 1 = 1 ; they are:
$\#\hat{q}_1\,@\,(5, 6)_B, \quad \#\hat{q}_1\,@\,(7, 6)_B, \quad \#\hat{q}_1\,@\,(9, 6)_B, \quad \text{and} \quad \#\hat{q}_1\,@\,(11, 6)_B.$
Example 4.  
Consider l = 3 and Figure 12. There are three distinct coin-tossing scenarios that exhaustively implement x 1 = 0 ; they are:
$0\hat{Q}_0\,@\,(5, 4)_B, \quad 0\hat{Q}_0\,@\,(7, 4)_B, \quad \text{and} \quad 0\hat{Q}_0\,@\,(9, 4)_B.$
Example 5.  
The three coin-tossing scenarios in Example 4, which involve the coin x 1 and l = 3 (Figure 12), also exhaustively implement the coin toss outcome x 2 = 0 when l = 4 (Figure 11).
The central insight, as demonstrated by these examples, can be stated as follows:
Claim 1.  
For any coin x α and any specific outcome x α = b , there exists a unique column in the mini tableau containing all coin-tossing scenarios that implement x α = b .
Definition 5.  
A composite coin-tossing scenario, or simply a scenario, is a conjunction of one or more coin-tossing scenarios.
For clarity, Example 6 is introduced prior to the definitions.
Example 6.  
Consider l = 4 and Figure 11. The scenario
$\hat{q}_1\,@\,(2, 3)_B \;\land\; \#\hat{q}_1\,@\,(5, 6)_B$
implements the combined outcome
$x_4 = 1 \quad \text{and} \quad x_1 = 1.$  (9)
Definition 6.  
We say that c of the form
$c_1 \ \text{and} \ \cdots \ \text{and} \ c_n,$
with each $c_i$ of the form $x_\alpha = b$, is a combined outcome when any two conjuncts $c_i$ and $c_j$ ($i \ne j$) in c pertain to a different coin.
For instance, expression (9) represents a combined outcome; however,
x 4 = 1 and x 4 = 1
does not, nor does
x 4 = 1 and x 4 = 0 .
Definition 7.  
We say that a scenario s of the form
$s_1 \land \cdots \land s_n,$
implements a combined outcome c of the form
$c_1 \ \text{and} \ \cdots \ \text{and} \ c_n,$
when each conjunct s i in s implements the corresponding conjunct c i of c in adherence to Definition 4.
Lemma 1.  
Given a coin $x_\alpha$, with $l \ge \alpha \ge 1$, and an outcome $b \in \{0, 1\}$, there are at most l distinct coin-tossing scenarios that exhaustively implement the coin toss outcome $x_\alpha = b$.
Proof. 
A geometrical argument is presented concerning the basic ( 3 l + 1 ) × ( l + 2 )  mini tableau, based on the definitions provided for the Coin-Tossing Stage (Section 4.2). Let c ^ denote the index of the rightmost column in the mini tableau that, figuratively, contains a dot and/or a hyphen. For any non-zero value of l, no column c to the left of column c ^ contains more occurrences of a dot or a hyphen. While the following argument pertains to column c ^ , a similar argument can be made for any other column c that contains an equal number of occurrences.
Due to the stepwise behavior of any TM, the first l cells in column $\hat{c}$ cannot contain a dot or a hyphen. Similarly, the last cell in that column—at row index $3l+1$—contains a hyphen (see item 1 in Section 4.2.2). Moreover, every other cell in column $\hat{c}$, ranging from row index $l+1$ to $3l+1$, cannot contain a dot or a hyphen, owing to the properties of TM movement. Therefore, at most x cells in column $\hat{c}$ can contain a dot or a hyphen, where
$x = \dfrac{(3l+1) - l}{2} = l + \dfrac{1}{2}.$
This yields at most $l+1$ such cells. Finally, at least one of these cells does not contribute to the outcome b but solely to $1-b$, establishing the desired upper bound of l.    □
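The counting step in the proof can be checked numerically: the candidate cells occupy every other row from $l+1$ to $3l+1$, giving $l+1$ candidates, one of which is lost to the opposite outcome. A small sanity check:

```python
# Sanity check of the counting argument in the proof of Lemma 1: in column
# c-hat of the (3l+1) x (l+2) mini tableau, the candidate cells occupy every
# other row from l+1 to 3l+1, i.e., l+1 cells; at least one of them serves
# the opposite outcome 1-b, leaving at most l scenarios for a given outcome.
for l in range(1, 100):
    candidate_rows = range(l + 1, 3 * l + 2, 2)   # rows l+1, l+3, ..., 3l+1
    assert candidate_rows[-1] == 3 * l + 1        # the parity works out exactly
    assert len(candidate_rows) == l + 1           # the "at most l+1" count
    assert len(candidate_rows) - 1 == l           # Lemma 1's upper bound
```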
Corollary 1.  
Consider three coins $x_\gamma$, $x_\beta$, and $x_\alpha$, with $l \ge \gamma > \beta > \alpha \ge 1$, and three coin toss outcomes $b_\gamma, b_\beta, b_\alpha \in \{0, 1\}$. There are at most $l^3$ distinct scenarios that exhaustively implement the combined outcome: $x_\gamma = b_\gamma$ and $x_\beta = b_\beta$ and $x_\alpha = b_\alpha$.
Definition 8.  
Consider a coin toss outcome x α = b , i.e., with α and b fixed. We write
$(x_\alpha = b)\,@\,[\beta],$  (10)
with the parameter β ranging from 1 to l, to denote the β-th coin-tossing scenario within a fixed, standard ordering of all coin-tossing scenarios that implement x α = b . If, for sufficiently large β, expression (10) does not correspond to a coin-tossing scenario, it instead serves as a placeholder for the truth value 0, meaning false.
Remark 14.  
Lemma 1 establishes that each coin toss outcome can correspond to at most l distinct coin-tossing scenarios. This accounts for the range of β from 1 to l in Definition 8.
Example 7.  
Suppose l = 3 and that any scenario (cf. Definitions 5 and 7) implementing the combined outcome x 2 = 1 and x 1 = 0 (cf. Definition 6) must imply the presence of bit 1 in c e l l [ 4 , 2 ] in the basic tableau (Figure 12). This constraint is formulated using the following Horn formula:
$\bigwedge_{1 \le \beta_2 \le l} \; \bigwedge_{1 \le \beta_1 \le l} \Bigl( (x_2 = 1)\,@\,[\beta_2] \;\land\; (x_1 = 0)\,@\,[\beta_1] \;\to\; 1\,@\,(4, 2)_B \Bigr),$  (11)
which is of complexity O ( l 2 ) .
Example 7 (Figure 12) is examined in the remainder of this subsection, as it will be needed later. The discussion begins with some syntactic surrogates (≡) from Example 4:
  • $(x_1 = 0)\,@\,[1] \equiv 0\hat{Q}_0\,@\,(5, 4)_B$;
  • $(x_1 = 0)\,@\,[2] \equiv 0\hat{Q}_0\,@\,(7, 4)_B$;
  • $(x_1 = 0)\,@\,[3] \equiv 0\hat{Q}_0\,@\,(9, 4)_B$.
Likewise, the reader can verify that coin x 2 , with l = 3 , can land on 1 in either of the following two tossing scenarios:
  • $(x_2 = 1)\,@\,[1] \equiv \hat{q}_1\,@\,(3, 4)_B$;
  • $(x_2 = 1)\,@\,[2] \equiv \hat{q}_1\,@\,(5, 4)_B$.
Hence, Formula (11) represents 2 × 3 = 6 implications of the three-literal form:
$(x_2 = 1)\,@\,[\beta_2] \;\land\; (x_1 = 0)\,@\,[\beta_1] \;\to\; 1\,@\,(4, 2)_B,$  (12)
as partially illustrated with the two mini tableaux in Figure 13.
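The $2 \times 3 = 6$ implications can be enumerated mechanically from the surrogate lists above. A sketch in which the propositions are abbreviated as plain strings of my own choosing:

```python
from itertools import product

# The six correlated coin-tossing constraints of Example 7 (l = 3): every
# pairing of a scenario for x2 = 1 with a scenario for x1 = 0 forces the bit 1
# into cell[4, 2] of the basic tableau.  Propositions are abbreviated here as
# plain strings; the cell coordinates follow the surrogate lists above.
x2_is_1 = ["q1@(3,4)", "q1@(5,4)"]                  # (x2 = 1)@[1], (x2 = 1)@[2]
x1_is_0 = ["0Q0@(5,4)", "0Q0@(7,4)", "0Q0@(9,4)"]   # (x1 = 0)@[1], ..., @[3]

constraints = [f"{s2} & {s1} -> 1@(4,2)" for s2, s1 in product(x2_is_1, x1_is_0)]
assert len(constraints) == 2 * 3 == 6   # one three-literal Horn clause each
```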
Remark 15.  
Each of the six implications is called a correlated coin-tossing constraint. Formula (11), in turn, is called a family of correlated coin-tossing constraints. In the running example, the correlated coins are x 2 , x 1 , and x 3 .
The left illustration in Figure 13 depicts the two possible choices for the left conjunct in Formula (12), while the right illustration presents the three possible choices for the right conjunct.
Among these six implications, only two remain potentially satisfiable when incorporating the constraints of ψ s t e p η (which accounts for the step-by-step behavior of N on w) and ψ e x t r a 1 (which captures the spatial dynamics of the TM’s head within the tableau). These two remaining options are depicted in Figure 14, with one option illustrated on the left and the other on the right. In both cases, the consequent
$1\,@\,(4, 2)_B$
from Formula (12) is emphasized by a boldfaced 1 in the corresponding cell.
How does the transition occur from the left illustration (respectively, the right illustration) in Figure 14 to the corresponding left (respectively, right) illustration in Figure 15? As before, the transformation is achieved by incorporating the constraints ψ s t e p η and ψ e x t r a 1 . Moreover, only the left illustration in Figure 15 remains potentially satisfiable, as the right illustration is rendered unsatisfiable due to the constraints imposed by the instructions t 2 ^ t 6 ^ in Section 4.2. Specifically, in the right illustration of Figure 15, c e l l [ 3 , 2 ] must contain the tape-state symbol 0 Q 0 ^ . However, this conflicts with 1 @ ( 4 , 2 ) B and instruction  t 6 ^ .
Finally, the left illustration in Figure 15 can be further refined by extending the column of 1s downward:
$1\,@\,(7, 2)_B \;\land\; 1\,@\,(8, 2)_B.$
This extension follows from $\psi_{extra1}$ and the fact that row $\hat{r}$ must have been established as $\hat{r} = 6$ at the outset of this discussion—for otherwise $cell[5, 4]$ could not contain $(x_1 = 0)@[1]$. Consequently,
$\#\hat{q}_1\,@\,(6, 5)_B$
holds as a propositional fact.

5.2. Illustrating Top-Down (↓) Constraints

A central tenet in this paper is that any particular coin-tossing scenario in the basic mini tableau must determine the presence of specific tape-state symbols in lower cells, in the remainder of the basic tableau.
To illustrate, consider the coin x 2 with l = 3 (Figure 12) which can land on 1 in either of the following two coin-tossing scenarios:
$\hat{q}_1\,@\,(3, 4)_B \quad \text{or} \quad \hat{q}_1\,@\,(5, 4)_B,$
as depicted in the left illustration of Figure 13.
Each of these two coin-tossing scenarios necessitates the presence of tape-state symbols $s_q$ in cells located beyond the basic mini tableau but (obviously) still within the bounds of the basic tableau. This feature holds for both the Updating Stage  S and the Checking Stage  S (Section 4), giving rise to top-down constraints for each stage (Section 5.2.1 and Section 5.2.2, respectively).
Remark 16.  
The notation
$l \to c \leftarrow r$
serves as a shorthand for the conjunction
$(l \to c) \;\land\; (r \to c).$

5.2.1. Updating Stage

Recall from Section 4.3.4 that the Updating Stage  S for coin x 2 begins with one of the following two types of configurations:
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 1 q 2 # # .
& $ c l m $ $ c l 2 $ a a 0 a ¯ 1 0 $ # 1 0 0 q 2 # # .
The first configuration type necessitates familiarity with the instructions τ 9 and τ 10 ,
$\tau_9\colon U_1, a \to Q_1, 1, -\,, \qquad \tau_{10}\colon U_1, \bar{a} \to Q_1, 0, -\,,$
introduced in Remark 11. The second configuration type relies on:
$t_9\colon U_0, a \to Q_0, 0, -\,, \qquad t_{10}\colon U_0, \bar{a} \to Q_0, 1, -\,.$
Each coin-tossing scenario implementing $x_2 = 1$ (high up in the basic tableau) necessarily involves either an $aU_1$ symbol or an $\bar{a}U_1$ symbol in some cell c (further down in the basic tableau), corresponding to $\tau_9$ and $\tau_{10}$, respectively. These cases are examined in the first and second subsections, respectively. The complexity is further analyzed in a third subsection.

Case a

Definition 9. 
Consider a coin $x_\alpha$, with $l \ge \alpha \ge 1$. An (uninstantiated, encoded) literal in $\tilde{w}$ that contains at least α occurrences of the symbol a is called an a-admissible literal relative to $x_\alpha$.
Definition 10. 
Consider a coin $x_\alpha$, with $l \ge \alpha \ge 1$, along with an a-admissible literal relative to $x_\alpha$, denoted as l. Counting from right to left, focus on the α-th occurrence of a in l. In the basic tableau, the a-updating cell corresponding to coin $x_\alpha$ and literal l is the cell c that contains either
$aU_1 \quad \text{or} \quad aU_0,$
as prescribed by instructions τ 9 and t 9 , respectively.
The next claim follows from the construction of N in Section 4:
Claim 2. 
For each fixed row index  r ^ and coin x α , there exists a one-to-one correspondence between the literals l and the cells c , as established in Definition 10.
Example 8. 
Referring to Figure 12, let the row index r ^ be initially set to r ^ = 6 . Now, consider the coin toss outcome x 2 = 1 . Then, for each a-admissible literal l in w ˜ relative to x 2 , the following constraints hold:
$\hat{q}_1\,@\,(3, 4)_B \;\to\; aU_1\,@\,(i, j)_B \;\leftarrow\; \hat{q}_1\,@\,(5, 4)_B,$  (13)
where c e l l [ i , j ] = c e l l   c , and c is the a-updating cell corresponding to x 2 and l .
Remark 17. 
To recapitulate, given that there are some number γ of a-admissible literals l relative to x 2 , each of the two coin-tossing scenarios that implement x 2 = 1 necessitates the presence of a specific tape-state symbol in the corresponding γ cells c further down in the (basic) tableau.

Case a ¯

A similar discussion regarding x 2 = 1 and the symbol a ¯ (rather than a) brings us to the following definitions:
Definition 11. 
Consider a coin $x_\alpha$, with $l \ge \alpha \ge 1$. An (uninstantiated, encoded) literal in $\tilde{w}$ that contains at least α occurrences of the symbol $\bar{a}$ is called an  $\bar{a}$-admissible literal relative to $x_\alpha$.
Definition 12. 
Consider a coin $x_\alpha$, with $l \ge \alpha \ge 1$, along with an $\bar{a}$-admissible literal relative to $x_\alpha$, denoted as l. Counting from right to left, focus on the α-th occurrence of $\bar{a}$ in l. In the basic tableau, the  $\bar{a}$-updating cell corresponding to coin $x_\alpha$ and literal l  is the cell c that contains either
$\bar{a}U_1 \quad \text{or} \quad \bar{a}U_0,$
as prescribed by instructions τ 10 and t 10 , respectively.
Claim 3. 
For each fixed row index  r ^ and coin x α , there exists a one-to-one correspondence between the literals l and the cells c , as established in Definition 12.
Example 9. 
Referring to Figure 12, let the row index r ^ be initially set to r ^ = 6 . Now, consider the coin toss outcome x 2 = 1 . Then, for each a ¯ -admissible literal l in w ˜ relative to x 2 , the following constraints hold:
$\hat{q}_1\,@\,(3, 4)_B \;\to\; \bar{a}U_1\,@\,(i, j)_B \;\leftarrow\; \hat{q}_1\,@\,(5, 4)_B,$  (14)
where c e l l [ i , j ] = c e l l c , and c is the a ¯ -updating cell corresponding to x 2 and l .
Remark 18. 
To recapitulate, given that there are some number γ of a ¯ -admissible literals l relative to x 2 , each of the two coin-tossing scenarios that implement x 2 = 1 necessitates the presence of a specific tape-state symbol in the corresponding γ cells c further down in the (basic) tableau.

Complexity

Generalizing from the Horn constraints (13) and (14), a conservative approach assumes that updating w ˜ to reflect a coin toss outcome x α = b requires modifying all 3 m encoded literals. In other words, this update necessitates performing 3 m overwrites at the α -th occurrences (counting from the right) of either a or a ¯ . Extending this to all l coins, and applying Lemma 1, we account for l coin-tossing scenarios per coin outcome, with two possible outcomes per coin. Consequently, the total number of constraints, each expressed in the two-literal Horn form
$\cdot\,@\,\cdot \;\to\; \cdot\,@\,\cdot\,,$
is given by
$3m \times l \times l \times 2 = O(n^3).$
Remark 19. 
The complexity expressed in terms of l, with m = O ( l 3 ) , is O ( l 5 ) .
Additional constraints can be introduced, yet still resulting in merely O ( n κ ) complexity, for some constant κ . Another example is provided in Appendix C. To establish the compression result in Theorem 2, it is not necessary to exhaustively list all constraints.
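The count behind this bound and Remark 19 is easy to reproduce. A sketch (the function name is mine):

```python
# Constraint count for the Updating Stage (a sketch; the function name is mine):
# 3m literals, at most l scenarios per coin outcome (Lemma 1), l coins, and two
# outcomes per coin, each case yielding a two-literal Horn constraint.
def updating_constraint_count(m: int, l: int) -> int:
    return 3 * m * l * l * 2

l = 10
assert updating_constraint_count(l**3, l) == 6 * l**5   # m = O(l^3) gives O(l^5)
```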

5.2.2. Checking Stage

Concerning the Checking Stage  S , the aim is to present examples of constraints associated with the outcome x 2 = 1 . However, this depends on the positioning of both x 2 and ¬ x 2 within ϕ and, in its encoded form, within w ˜ . Hence, for illustration, suppose that the formula ϕ contains x 2 only as the leftmost literal encoded in w ˜ , and ¬ x 2 only as the rightmost literal. Under this assumption, the uninstantiated word w takes the form:
& $ a a $ $ c l 2 $ a ¯ a ¯ $ # # .
Now, recall from Section 4.4 the following three instructions:
$t_5\colon Q_0, \lor \to R_0, \lor, +\,,$
$t_{15}\colon Q_{000}, \$ \to R_{000}, \$, +\,,$
$t_{15}\colon S_{000}, \$ \to T_{000}, \$, +\,.$
These correspond to the following three kinds of configurations, respectively:
& $ 1 0 $ $ c l 2 $ 0 R 0 1 $ # # ,
& $ 1 R 000 0 $ $ c l 2 $ 0 1 $ # # ,
& $ 1 T 000 0 $ $ c l 2 $ 0 1 $ # # .
Focusing on the rightmost and leftmost encoded literals (cf.  $R_0$ on the one hand, and $R_{000}$ and $T_{000}$ on the other hand), it becomes apparent that two corresponding cells, $c'$ and $c''$, contain 0 and 1, respectively:
$\hat{q}_1\,@\,(3, 4)_B \;\to\; 0R_0\,@\,(i, j)_B \;\leftarrow\; \hat{q}_1\,@\,(5, 4)_B,$  (15)
$\hat{q}_1\,@\,(3, 4)_B \;\land\; \$Q_{000}\,@\,(i'-1, j'-1)_B \;\to\; 1R_{000}\,@\,(i', j')_B,$  (16)
$\hat{q}_1\,@\,(3, 4)_B \;\land\; \$S_{000}\,@\,(i'-1, j'-1)_B \;\to\; 1T_{000}\,@\,(i', j')_B.$  (17)
Three points warrant clarification. First, the cells $c'$ and $c''$ “contain 0 and 1, respectively,” as indicated by the use of tape-state symbols: $0R_0$ on the one hand, and $1R_{000}$ along with $1T_{000}$ on the other hand. Second, cells $c'$ and $c''$ are shorthand for $cell[i, j]$ and $cell[i', j']$, respectively. Third, the reader will recognize that the counterparts to formulas (16) and (17), where the left conjunct is replaced by
$\hat{q}_1\,@\,(5, 4)_B,$
should also be considered in this discussion. For the sake of brevity, they are omitted.

Complexity

The crux is again that every coin-tossing scenario implementing x 2 = 1 (positioned high up in the basic tableau) necessitates the presence of specific tape-state symbols appearing further down in the basic tableau. However, three-literal Horn formulas (e.g., Formulas (16) and (17)) are now encountered, rather than only two-literal Horn formulas (such as Formula (15)).
Generalizing from the specific constraints illustrated in (15)–(17), the assumption is that checking—during the Checking Stage  S —whether the instantiated w ˜ is trivially true or trivially false requires examining each leftmost bit in all 3 m encoded and instantiated literals. Given that there are l coins, at most l coin-tossing scenarios per coin outcome, and two possible outcomes per coin toss, this results in a complexity of
$3m \times l \times l \times 2 = O(n^3)$
constraints, each expressed in either two-literal Horn form,
$\cdot\,@\,\cdot \;\to\; \cdot\,@\,\cdot\,,$
or three-literal Horn form:
$\cdot\,@\,\cdot \;\land\; \cdot\,@\,\cdot \;\to\; \cdot\,@\,\cdot\,.$
Remark 20. 
Formula (16)—and many other formulas—can be extended as follows:
$\#\hat{q}_1\,@\,(\hat{r}, \hat{c})_B \;\land\; \hat{q}_1\,@\,(3, 4)_B \;\land\; \$Q_{000}\,@\,(i'-1, j'-1)_B \;\to\; 1R_{000}\,@\,(i', j')_B$  (18)
This more elaborate formulation also aligns with our discourse, in which the guess
$\#\hat{q}_1\,@\,(\hat{r}, \hat{c})_B$
is explicitly part of the equation. Nevertheless, we continue to assume that the Horn constraints—such as (16)—are generated dynamically, thereby rendering the first conjunct in Formula (18) redundant.

5.3. Illustrating Bottom-Up (↑) Constraints

Bottom-up constraints are not a new consideration in this discussion. In fact, a specific type of bottom-up constraint has already emerged in the transition from the left illustration in Figure 14 to the left illustration in Figure 15. This transition showcases the upward propagation of 1s, driven by ψ e x t r a 1 . However, another form of bottom-up constraint (↑) also merits attention and will be briefly explored here.
Clause 1 from Section 4.3.1 is reconsidered, together with its unary encoding, c l 1 :
a a a a ¯ a ¯ a .
Assuming this encoded clause is integral to w ˜ , let x 2 = 1 and x 1 = 0 .
During the Updating Stage  S of the machine N , the corresponding string encoding ultimately takes one of the following forms:
1 1 0 0 1 0 ,
or the form
0 1 0 0 1 0 ,
depending on whether x 3 = 1 or, respectively, x 3 = 0 .
As established in Section 4.4.4, the leftmost bit in these forms corresponds, respectively, to the following instructions:
$t_{16}\colon R_{000}, 1 \to q_0, 1, -\,, \qquad t_{17}\colon R_{000}, 0 \to q_{reject}, 0, -\,.$
The first form alone does not result in the unsatisfiability of
$\psi_{trim} \;\land\; \cdots\,,$
where the ellipsis (…) represents, among other factors, the guessed coin-tossing scenarios that implement x 2 = 1 and x 1 = 0 . In contrast, the second form does lead to unsatisfiability due to the q r e j e c t state symbol and the ψ a c c e p t constraint (recall Section 4.4.6).
Hence, a particular kind of bottom-up constraint (↑) appears to be at play, extending from a lower cell c ̲ in the basic tableau—which, in adherence to t 16 and t 17 , potentially contains the 1 R 000 or 0 R 000 symbol—to the truth value b of x 3 :
$b\,@\,(4, 2)_B,$  (19)
as determined higher up, in the basic mini tableau (see the left illustration in Figure 14).
A key consideration is that cell c ̲ may represent a hole in the basic tableau, preventing any symbol—including 1 R 000 and 0 R 000 —from being turned on. Now, by automatically filling it with 1 R 000 instead of 0 R 000 , the hole is resolved without user intervention. However, this does require an extra measure beyond top-down reasoning, leading to the topic of correlated coin-tossing constraints.

5.4. Synthesis: Correlated Coin-Tossing Constraints

The top-down and bottom-up perspectives on the 3-SAT solver N are integrated within a framework that focuses exclusively on the mini tableau. To achieve this, consider an arbitrary clause in ϕ , which is typically not a Horn formula.
Consider, for instance, Clause 1:
x 3 ∨ x 2 ¯ ∨ x 1 .
Any of the following three equivalent formulations of Clause 1—where each antecedent represents a combined toss outcome of two coins (of the three coins in question)—are also, quite evidently, non-Horn formulas:
x 2 ∧ x 1 ¯ → x 3
x 3 ¯ ∧ x 1 ¯ → x 2 ¯
x 3 ¯ ∧ x 2 → x 1
Yet, by synthesizing insights from both top-down and bottom-up reasoning, each of these expressions (20)–(22) can be reformulated as a compact family of Horn clauses.
Without loss of generality, the focus in this section is on implication (20). On the one hand, we express the consequent of (20) in the form of proposition (19) with b = 1 ,
1 @ ( 4 , 2 ) B ,
thereby establishing the coin toss outcome x 3 = 1 (Figure 14). On the other hand, we express the antecedent of (20) as a scenario (see Definition 5) that implements the combined outcome
x 2 = 1 and x 1 = 0 .
Naturally, all possible such scenarios must be accounted for:
⋀ 1 ≤ β 2 ≤ l ⋀ 1 ≤ β 1 ≤ l ( ( x 2 = 1 ) @ [ β 2 ] ∧ ( x 1 = 0 ) @ [ β 1 ] → 1 @ ( 4 , 2 ) B ) ,
which is Horn Formula (11) from Example 7.
The non-Horn Formula (20) has been transformed into the Horn Formula (23). Similarly, the Horn formulas for implications (21) and (22) are, respectively:
⋀ 1 ≤ β 3 ≤ l ⋀ 1 ≤ β 1 ≤ l ( ( x 3 = 0 ) @ [ β 3 ] ∧ ( x 1 = 0 ) @ [ β 1 ] → 0 @ ( 5 , 3 ) B ) ,
⋀ 1 ≤ β 3 ≤ l ⋀ 1 ≤ β 2 ≤ l ( ( x 3 = 0 ) @ [ β 3 ] ∧ ( x 2 = 1 ) @ [ β 2 ] → 1 @ ( 8 , 4 ) B ) .
To summarize, the coins x 3 , x 2 , and x 1 are correlated via Clause 1. Initially, this correlation is expressed through non-Horn constraints (20)–(22) and, ultimately, through Horn constraints (23)–(25). This method applies to every 3-literal clause in ϕ .
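The rewriting of a 3-literal clause into its three implication forms (20)–(22) is purely mechanical. The following sketch illustrates it; the representation of literals as (variable, polarity) pairs is an illustrative choice, not the paper's notation.

```python
# Sketch: rewrite a 3-literal clause A ∨ B ∨ C into its three equivalent
# implications (¬B ∧ ¬C) → A, (¬A ∧ ¬C) → B, (¬A ∧ ¬B) → C.
# Literals are illustrative (name, positive) pairs, not the paper's notation.

def negate(lit):
    name, positive = lit
    return (name, not positive)

def implication_forms(clause):
    """Return the three implications equivalent to the disjunction `clause`."""
    forms = []
    for i, consequent in enumerate(clause):
        others = [lit for j, lit in enumerate(clause) if j != i]
        antecedent = [negate(lit) for lit in others]
        forms.append((antecedent, consequent))
    return forms

# Clause 1: x3 ∨ ¬x2 ∨ x1
clause1 = [("x3", True), ("x2", False), ("x1", True)]
forms = implication_forms(clause1)
# First form matches (20): x2 ∧ ¬x1 → x3.
assert forms[0] == ([("x2", True), ("x1", False)], ("x3", True))
```

Each returned implication is a non-Horn constraint; the step to a Horn family then replaces the antecedent by coin-tossing scenarios, as in (23).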

Complexity

Each family of correlated coin-tossing constraints (e.g., (23)) consists of O ( l 2 ) literals. Moreover, there are three such families per clause in ϕ (cf. (23)–(25) for Clause 1), with a total of m clauses. Consequently, the overall complexity amounts to:
O ( l 2 ) × 3 m = O ( n 3 )
literals.
Remark 21. 
The complexity expressed in terms of l, with m = O ( l 3 ) , is O ( l 5 ) .
By specifying all families of correlated coin-tossing constraints related to ϕ , we can eliminate the constraints of Section 5.2.1 and those of Section 5.2.2. This simplification, as illustrated in the transition from (a) to (b) in Figure 1 (see Section 1), renders the operation of N beyond the mini tableau obsolete. While the Coin-Tossing Stage  S ^ remains relevant, the specifics of the Updating Stage  S and Checking Stage  S are no longer necessary. This improvement leads from the rFHB algorithm to the streamlined variant called sFHB.

6. Two Algorithms: rFHB and sFHB

The rFHB and sFHB algorithms are now introduced and analyzed. Both are governed by the Horn formula ψ t r i m , defined as:
ψ t r i m = ψ s t e p η ∧ ψ s t a r t ∧ ψ a c c e p t ∧ ψ c e l l ∧ ψ e x t r a 1 ∧ ψ e x t r a 2 .
The final conjunct, ψ e x t r a 2 , varies considerably, depending on the algorithm:
  • ψ e x t r a 2 [ rFHB ] includes the constraints of Section 5.2;
  • ψ e x t r a 2 [ sFHB ] captures the families of correlated coin-tossing constraints (Section 5.4).
The presentation of claims proceeds as statements asserted to be true, grounded in the constructive nature of the underlying reasoning. Claim 4 has already been established:
Claim 4. 
In either case of rFHB and sFHB , ψ t r i m is O ( n κ 1 ) literals long, for some constant κ 1 .
The rFHB operates over the entire tableau, whereas sFHB is confined to the mini tableau. Section 6.1 describes rFHB in the context of the 3-SAT solver N , i.e., with a fixed value of k = 2 and unary encoding. An alternative scenario is also considered, where k = 1 and binary notation is used. By contrast, Section 6.2 defines sFHB independently of both k and the encoding scheme for literals, as it restricts itself to the Coin-Tossing Stage.

6.1. The rFHB Algorithm

The rFHB algorithm leverages the internal mechanics of the 3-SAT solver in question. Section 6.1.1 defines rFHB, while Section 6.1.2 and Section 6.1.3 provide detailed explanations of steps 1 and 3, respectively. A comprehensive cost analysis is presented in Section 6.1.4.

6.1.1. Boxed Definition

To appreciate the specifics of the rFHB algorithm, recall the following points:
  • Row index r ^ in the basic mini tableau corresponds to row index 3 r ^ − 2 in the extended mini tableau (Section 3).
  • If and when all l coins have been tossed, the entire (extended) tableau is determined.
  • The original FHB algorithm, denoted as A , relies on a HORNSAT solver H (Section 3.5).
  • The inferences from tape-state symbols within the mini tableau to those outside of it must be preprogrammed. This corresponds to formally defining ψ e x t r a 2 [ rFHB ] .
Symmetry 17 01745 i001

6.1.2. Elaborating Step 1

The r ^ -guess introduced in step 1 of the boxed definition conceptually implies that several cells beyond B r ^ can be immediately crossed out (see item  2 ( b ) in Section 4.2.2). Conceptually again, these cell crossings are revoked when A backtracks from the r ^ -guess. In reality, however, A does not work with crosses, nor does the solver H actually inject symbols in an actual tableau.
Recall Remark 20, which notes that numerous formulas admit extensions. For example, Formula (27) can be extended into the form of (28), shown below:
q 1 ^ @ ( 3 , 4 ) B ∧ $ Q 000 @ ( i − 1 , j − 1 ) B → 1 R 000 @ ( i , j ) B ,
# q 1 ^ @ ( r ^ , c ^ ) B ∧ q 1 ^ @ ( 3 , 4 ) B ∧ $ Q 000 @ ( i − 1 , j − 1 ) B → 1 R 000 @ ( i , j ) B
Formula (28) makes the r ^ -guess explicit—through its first conjunct. However, this paper favors the dynamic generation of (27) and similar constraints, which renders the first conjunct in (28) redundant.
Claim 5. 
The dynamic generation of constraint (27), along with all other Horn constraints encapsulated by ψ t r i m , can be performed in O ( n κ ) time for some sufficiently large constant κ.
Claim 5 follows almost directly from Claim 4. To see this, first consider storing all constraints in a database using, after all, the format specified in (28). Then, when the r ^ -guess is made at runtime, prune the relevant constraints from the database and discard the first conjunct (which encodes the r ^ -guess) before passing them to H .
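The database-and-pruning idea can be sketched as follows; the record format, guard encoding, and cell coordinates below are illustrative assumptions, not the paper's definitions.

```python
# Sketch of the database idea behind Claim 5: store each Horn constraint in
# format (28), i.e. with an explicit guard conjunct encoding an r̂-guess; at
# runtime, select the constraints matching the current guess and strip the
# guard before handing them to the HORNSAT solver H. All names and coordinates
# here are illustrative toy values, not the paper's machinery.

def prune(database, guess):
    """Select constraints guarded by `guess` and drop the guard conjunct."""
    return [(body, head) for guard, body, head in database if guard == guess]

# A toy database of (guard, body, head) triples.
database = [
    (("r_hat", 3), ["q1@(3,4)", "$Q000@(2,3)"], "1R000@(3,4)"),
    (("r_hat", 3), ["q1@(5,4)", "$Q000@(4,3)"], "1R000@(5,4)"),
    (("r_hat", 6), ["q1@(6,4)", "$Q000@(5,3)"], "1R000@(6,4)"),
]

ready_for_H = prune(database, ("r_hat", 3))
assert len(ready_for_H) == 2  # two guard-free constraints remain for H
```

Since pruning is a single pass over a polynomially sized database, the per-guess overhead stays polynomial, in line with Claim 5.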

6.1.3. Elaborating Step 3

Recall Example 8 with regard to Figure 12, row index r ^ = 6 , and the coin toss outcome x 2 = 1 . Then, for each a-admissible literal l in w ˜ relative to x 2 , the following two constraints must hold:
q 1 ^ @ ( 3 , 4 ) B → a U 1 @ ( i , j ) B ← q 1 ^ @ ( 5 , 4 ) B ,
where c e l l [ i , j ] is the a-updating cell corresponding to x 2 and l .
Now, if in step 3 the tape-state symbol q 1 ^ is guessed for c e l l [ 3 , 4 ] in the basic mini tableau
i . e . , q 1 ^ @ ( 3 , 4 ) B or , equivalently , q 1 ^ @ ( 3 · 3 − 2 , 4 ) ,
then—by virtue of the left implication in (29)—the solver H must incorporate the consequent a U 1 @ ( i , j ) B as a propositional fact in its subsequent satisfiability analysis.
Conceptually, the solver H injects the tape-state symbol a U 1 beyond the boundaries of B r ^ ; specifically, in c e l l [ i , j ] in the basic tableau. A parallel consideration applies to c e l l [ 5 , 4 ] in relation to the right implication in (29).

6.1.4. Complexity

Three initial observations introduce the complexity analysis of the rFHB algorithm. Observation 1 comprises three points:
1.
The dimensions of the basic mini tableau are given by
O ( l ) × O ( l ) = O ( l 2 ) .
The same result holds for the extended mini tableau.
2.
The rFHB algorithm runs on N , implying k = 2 and the use of unary notation. Asymptotically, this leads to O ( l ) symbols per literal across the total 3 m literals in w ˜ , forming the horizontal dimension of the entire tableau:
n = O ( l × m ) .
For the vertical dimension, the focus is on the Updating Stage, which asymptotically dominates the Coin-Tossing and Checking Stages. Here, we account for l iterations over the entire length of w ˜ :
l × O ( l × m ) = O ( l 2 × m ) .
Thus, the basic tableau, which is a quasi-square matrix, has the following dimensions:
O ( l 2 × m ) × O ( l 2 × m ) .
The same result holds for the extended tableau.
3.
Comparing the overall tableau size (33) to the basic mini tableau size (30) yields:
O ( l 2 × m 2 ) .
This indicates that as l (and thereby m) increases, the mini tableau becomes proportionally much smaller relative to the entire tableau.
Observation 2, consisting of three similar points, addresses the state of affairs under the assumption of a sophisticated 3-SAT solver—configured with k = 1 and binary notation:
1.
The mini tableau remains unchanged: see (30).
2.
The horizontal dimension of the entire tableau, previously given in (31), now becomes:
n = O ( ( log l ) × m ) ,
since only O ( log l ) symbols per literal are needed, in contrast to l symbols. The vertical dimension, updated from (32), becomes:
O ( 1 ) × O ( ( log l ) × m ) = O ( ( log l ) × m ) ,
reflecting the fact that the Updating Stage now operates in a single (essentially right-to-left) pass. Accordingly, the tableau forms a quasi-square matrix with dimensions:
O ( ( log l ) × m ) × O ( ( log l ) × m ) = O ( log 2 l ) × O ( m 2 )
3.
The ratio of the entire tableau (37) to the mini tableau (30) is:
O ( log 2 l ) × O ( m 2 ) / O ( l 2 ) .
A devil’s advocate will attempt to minimize this expression by assuming m = Θ ( l ) , yielding:
Ω ( log 2 l ) ,
which, similar to the result in (34), still confirms that the mini tableau becomes proportionally smaller as l increases.
With respect to the rFHB algorithm, Observation 3 underscores that Theorem 2 from Daylight [1]—which establishes genuine compression for the case k = 1 —is derived under a notably conservative assumption: the ratio between the entire tableau and the mini tableau is fixed at a constant value (specifically, 2), rather than allowed to grow with l, as is the case in results (38) and (34).
Theorem 2. 
(Reproduced from Daylight ([1], p. 30)) Let ⟨ N , k ⟩ be a 3-SAT solver. Then the runtime R ( n ) of the  rFHB  algorithm pertaining to N satisfies the upper bound:
R ( n ) ≤ K n 0.67 k ,
for some constant K > 0 .
Rewriting the upper bound from Theorem 2 in terms of l, under the assumption m = Θ ( l ) , yields:
K n 0.67 k = K · O ( l 1.34 k ) if n = O ( l 2 ) , or K · O ( ( l · log l ) 0.67 k ) if n = O ( l · log l ) ,
where the first and second cases correspond to unary and binary notation, respectively. Recall (31) and (35), respectively.
Corollary 2. 
Let ⟨ N , 1 ⟩ be a 3-SAT solver, operating in binary notation. Assume m = Θ ( l ) . Then, the runtime R ( l ) of the rFHB algorithm associated with N admits the upper bound:
R ( l ) ≤ C ( l · log l ) 0.67 ,
for some constant C > 0 .
Corollary 2 reveals genuine compression when m = Θ ( l ) , suggesting that NP machines are inferior to at least one exponential time deterministic TM. Yet rather than examining this potential novelty, emphasis is placed (again) on the fact that Corollary 2 rests on a highly conservative premise: the hole-filling region external to the mini tableau matches the mini tableau in size for all values of l.
Remark 22. 
In technical terms, Daylight’s proof outline ([1], p. 31) treats the “reduction factor,” denoted as Δ, as a constant—rather than as a function Δ ( l ) that decreases monotonically with l, such that lim l → ∞ Δ ( l ) = 0 .
The asymptotic reality is that, as l grows, the contribution of the mini tableau becomes an increasingly negligible portion of the overall tableau. Recall (38) and (34). This observation is now formalized in the following theorem.
Theorem 3. 
(Refinement of Theorem 2) Let N ˜ be some ⟨ N , k ⟩ machine that solves 3-SAT , with k ∈ { 1 , 2 } , working in unary or binary. Let l denote the number of distinct (encoded) propositional variables in the input w of N ˜ . Then, the runtime R ( l ) of the rFHB algorithm associated with N ˜ and w admits the upper bound:
R ( l ) ≤ ( l · log l ) C ,
where C > 0 is a constant.
Remark 23. 
Theorem 3 also concerns the machine N , as defined in Section 4.
Proof. 
Eight cases can be distinguished based on three parameters. First, consider two values for k: either k = 1 or k = 2 . Second, the machine operates using either unary or binary notation. Third, the distinction lies between m = Θ ( l ) and m = Θ ( l 3 ) . In the remainder of this proof, it suffices to analyze the gravest case, which—as the reader can verify—occurs when k = 1 , binary notation is used, and m = Θ ( l ) .
In this context, recall from (38) the following result:
ratio ( l ) = log 2 l ,
which represents the smallest conceivable ratio between the size of the entire tableau and that of the mini tableau.
Furthermore, the recurrence relation from Daylight ([1], p. 31) is reused, taking the following form:
T ( p ) = κ 0 · p · T ( p / ( 2 · ratio ( l ) 2 ) ) ,
where the constant κ 0 > 0 depends on the specific TM, N ˜ , under analysis. Appendix D presents a standard derivation of the solution to this recurrence relation, yielding:
T ( p ) = p O ( 1 ) .
Additionally, as noted by Daylight ([1], p. 31), at the onset of the recurrence, p denotes the initial area of possible binary nondeterministic choices made by N ˜ . Consequently, Equation (37) is revisited, this time replacing m with l, justified by the asymptotic relationship m = Θ ( l ) . By substituting O ( log 2 l ) × O ( l 2 ) for p in (40), the following is derived:
( l · log l ) C ,
for some constant C > 0 .
Finally, the polynomial overhead per step of the rFHB algorithm is accounted for, which is O ( n κ ) for some constant κ —recall Claims 4 and 5. Given that n = O ( m · log l ) = O ( l · log l ) , the final solution also takes the form of Equation (41).    □
In light of the implication of Theorem 3, an alternative approach is presented that leads to the same conclusion: an iterative application of Theorem 2, as outlined in Appendix E. Attention now shifts to the streamlined variant of the rFHB algorithm.

6.2. The sFHB Algorithm

The sFHB algorithm focuses on the operation of the coin-tossing machine N ^ , which is solely tasked with executing stage S ^ of N in O ( l ) nondeterministic time (Section 4.2). Now we define:
ψ t r i m = ( ψ s t e p η ∧ ψ s t a r t ∧ ψ a c c e p t ∧ ψ c e l l ∧ ψ e x t r a 1 ) ∧ ψ e x t r a 2 [ sFHB ] ,
where each component within the parentheses is of smaller size relative to the rFHB case. In particular, ψ s t e p η encodes only the instructions of N ^ , not N . The formula ψ s t a r t characterizes the initial coin-tossing configuration (7) in Section 4.2 (without w ˜ ). Similarly, the remaining three conjuncts within the parentheses pertain solely to the structure of the (extended) mini tableau, not the (extended) entire tableau.
We distinguish between two arrangements that the sFHB can make per visited column (Section 6.2.1). Then we specify the algorithm (Section 6.2.2) and elaborate on step 4 in the specification (Section 6.2.3). A cost analysis concludes the discussion (Section 6.2.4).

6.2.1. Two Arrangements

As with rFHB, the sFHB algorithm performs an r ^ -guess expressed as
# q 1 ^ @ ( r ^ , c ^ ) B ,
from which it can subsequently backtrack.
In contrast to rFHB, which processes the tableau rowwise, sFHB operates column-wise, focusing exclusively on the mini tableau. For each visited column c in the mini tableau (see, e.g., Figure 12), sFHB guesses one of two possible symbol arrangements occurring at some position r among the first r ^ candidates in that column. The first possible arrangement corresponds to a coin toss yielding a 1:
q 1 ^ @ ( r , c ) B ∧ 1 @ ( r + 1 , c ) B ∧ 0 @ ( r + 2 , c ) B ¯ ,
while the second corresponds to a toss yielding a 0:
q 1 ^ @ ( r , c ) B ∧ 0 @ ( r + 1 , c ) B ∧ 0 Q 0 ^ @ ( r + 2 , c ) B ∧ 0 @ ( r + 3 , c ) B ∧ 1 @ ( r + 4 , c ) B ¯ .
When sFHB visits a column c, selects a row r, and guesses an arrangement in the basic mini tableau, it introduces a separation of concerns between the columns to the left and those to the right of c. While subsequent hole-filling choices on either side will typically reduce the number of options on the other side, they will not affect the corresponding computations in themselves.
To clarify, a commitment is made—either to the first arrangement (43) or the second arrangement (44)—that constrains the tape head of N ^ to visit column c in the following manner:
  • in the first arrangement (43), solely via c e l l [ r , c ] and, optionally, also via c e l l [ r + 2 , c ] ;
  • in the second arrangement (44), via both c e l l [ r , c ] and c e l l [ r + 2 , c ] and, optionally, also via c e l l [ r + 4 , c ] .
The sFHB algorithm populates all remaining cells in column c—namely each c e l l [ i , c ] in the mini tableau, where i { r , r + 2 } in the first arrangement, and i { r , r + 2 , r + 4 } in the second arrangement—with designated tape symbols, in full compliance with the operation of the coin-tossing machine N ^ . On the one hand, blank symbols (□) are inserted above row r in both arrangements. (Yet, two special cases must be considered for the # marker, which appears in the leftmost and rightmost columns of the mini tableau.) On the other hand, the symbol 1 (respectively, 0) is placed below row r in the first (respectively, second) arrangement—subject to the aforementioned ∉-constraints.
To illustrate the first arrangement, recall instruction  t 1 ^ (Section 4.2) and Figure 12 (Section 5.1). Suppose x 2 = 1 and specifically:
q 1 ^ @ ( 2 , 3 ) B ∧ 1 @ ( 3 , 3 ) B ∧ 0 @ ( 4 , 3 ) B ¯ ,
where the latter conjunct implicitly refers to either 1 @ ( 4 , 3 ) B or to 1 q 0 ^ @ ( 4 , 3 ) B , depending on the toss outcome for coin x 1 . Regardless of which of these two possibilities manifests, the computations to the left of column 3 in Figure 12 remain unaffected. This invariance is ensured by the first conjunct in (45), which enforces the separation of concerns.
Notation (10) from Definition 8 is now redefined through two examples.
Example 10. 
Based on Figure 12:
( x 2 = 1 ) @ [ 1 ] ≡ q 1 ^ @ ( 2 , 3 ) B ∧ 1 @ ( 3 , 3 ) B
( x 2 = 1 ) @ [ 2 ] ≡ q 1 ^ @ ( 4 , 3 ) B ∧ 1 @ ( 5 , 3 ) B
( x 2 = 1 ) @ [ 3 ] ≡ 0 ( i . e . , false )
Example 11. 
Based on Figure 12:
( x 2 = 0 ) @ [ 1 ] ≡ q 1 ^ @ ( 2 , 3 ) B ∧ 0 @ ( 3 , 3 ) B
( x 2 = 0 ) @ [ 2 ] ≡ q 1 ^ @ ( 4 , 3 ) B ∧ 0 @ ( 5 , 3 ) B
( x 2 = 0 ) @ [ 3 ] ≡ 0 ( i . e . , false )
The proposition 0 @ ( 4 , 3 ) B ¯ follows logically from (46) and ψ t r i m . Similarly, 0 @ ( 6 , 3 ) B ¯ is entailed by (47) and ψ t r i m . Likewise, the propositions 0 Q 0 ^ @ ( 4 , 3 ) B and 0 Q 0 ^ @ ( 6 , 3 ) B follow from (49) and (50), respectively, when ψ t r i m is taken into account.
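The scenario notation of Examples 10 and 11 can be tabulated directly. The dictionary below merely transcribes (46)–(51), with cell propositions written as plain strings (an illustrative encoding) and False standing for an impossible scenario.

```python
# Transcription of Examples 10 and 11: each scenario (x2 = b)@[β] is either a
# conjunction of two cell propositions in the basic mini tableau, or False.
# Keys are illustrative (variable, value, β) triples.
scenario = {
    ("x2", 1, 1): ["q1^@(2,3)B", "1@(3,3)B"],   # (46)
    ("x2", 1, 2): ["q1^@(4,3)B", "1@(5,3)B"],   # (47)
    ("x2", 1, 3): False,                        # (48)
    ("x2", 0, 1): ["q1^@(2,3)B", "0@(3,3)B"],   # (49)
    ("x2", 0, 2): ["q1^@(4,3)B", "0@(5,3)B"],   # (50)
    ("x2", 0, 3): False,                        # (51)
}

# The β = 3 scenarios are false: x2 is tossed at most twice in Figure 12.
assert scenario[("x2", 1, 3)] is False
assert scenario[("x2", 0, 3)] is False
```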

6.2.2. Boxed Definition

Symmetry 17 01745 i002

6.2.3. Elaborating Step 4

Recall the following example of a family of correlated coin-tossing constraints, where the non-Horn Formula (52) was transformed into the Horn Formula (53).
x 2 ∧ x 1 ¯ → x 3
⋀ 1 ≤ β 2 ≤ l ⋀ 1 ≤ β 1 ≤ l ( ( x 2 = 1 ) @ [ β 2 ] ∧ ( x 1 = 0 ) @ [ β 1 ] → 1 @ ( 4 , 2 ) B ) .
Suppose the sFHB algorithm guesses ( x 2 = 1 ) @ [ 1 ] and ( x 1 = 0 ) @ [ 1 ] . Conceptually, the solver H then injects the symbol 1 into c e l l [ 4 , 2 ] of the basic tableau B .
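Generating such a family amounts to enumerating all l² pairs of scenario indices. A minimal sketch, with propositions as plain strings (an illustrative encoding of (53)):

```python
# Sketch: enumerate the Horn family (53) for Clause 1's implication (52).
# Each member has body {(x2=1)@[β2], (x1=0)@[β1]} and head 1@(4,2)B.
# String encodings are illustrative, not the paper's notation.
def horn_family(l):
    head = "1@(4,2)B"
    return [
        ([f"(x2=1)@[{b2}]", f"(x1=0)@[{b1}]"], head)
        for b2 in range(1, l + 1)
        for b1 in range(1, l + 1)
    ]

family = horn_family(4)
assert len(family) == 16  # l^2 Horn clauses, O(l^2) literals in total
```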

6.2.4. Complexity

Regarding the complexity of the sFHB algorithm, we observe that the bookkeeping overhead introduced by backtracking remains polynomial, and in practice, a runtime stack with constant overhead per recursive call is sufficient. Likewise, we abstract away the cost associated with making multiple r ^ -guesses, considering it negligible for the purposes of this analysis. Finally, the O ( 1 ) choices available for filling any given cell are already captured by the O ( l ) term in the recurrence relation presented below, in (54).
This leads directly to the following estimate of the combinatorial cost associated with the separation of concerns (Section 6.2.1):
T ( l 2 ) = 2 · O ( l ) · ( T ( l 2 / 2 ) + T ( l 2 / 2 ) ) ,
where:
  • The constant factor 2 corresponds to the two arrangements (43) and (44).
  • The term O ( l ) denotes the length of any column in the mini tableau.
  • The plus sign, rather than a multiplication sign, reflects the logical separation of concerns.
Upon a change of variables, the recurrence relation transforms into:
T ( p ) = O ( p ) · T ( p / 2 ) ,
which, by the standard derivation in Appendix F, yields the following approximate solution:
T ( p ) ≤ C ( log p ) 2 = p O ( log p ) ,
for some constant C > 0 . Essentially the same result, which is superpolynomial but subexponential, is obtained when the fraction p / 2 is (more realistically) replaced by, say, f ( p ) · p with f ( p ) = 1 − 1 / log p .
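Setting every hidden constant in (55) to 1 makes the quasi-polynomial character of the solution easy to check numerically: the recurrence T(p) = p · T(p/2) with T(1) = 1 solves exactly to T(2^k) = 2^{k(k+1)/2}, so log T(p) grows like (log p)². A minimal sketch under that simplification:

```python
# Numerical check of recurrence (55) with all hidden constants set to 1:
# T(p) = p * T(p // 2), T(1) = 1. For p = 2^k this solves exactly to
# T(2^k) = 2^(k(k+1)/2), i.e. log2 T = Θ((log2 p)^2): quasi-polynomial growth.
def T(p):
    return 1 if p <= 1 else p * T(p // 2)

for k in (1, 4, 8, 10):
    p = 2 ** k
    assert T(p) == 2 ** (k * (k + 1) // 2)
```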
This rather quick calculation suggests that, at a minimum, an exponential-time deterministic TM exists that outperforms every ⟨ N , k ⟩ machine. Notably, this conclusion is reached independently of the analysis in [1]. Furthermore, Lemma 2 enables this result to be strengthened.
Lemma 2. 
The runtime of the sFHB algorithm is bounded above by that of rFHB.

Proof Outline

Two key results need to be proved in this regard. First, that ψ t r i m with regard to rFHB is satisfiable iff ψ t r i m with regard to sFHB is. Second, that asymptotically, the size of ψ e x t r a 2 [ sFHB ] does not exceed that of ψ e x t r a 2 [ rFHB ] . A detailed—albeit straightforward—proof of the first result will be provided in a companion paper. The second result follows directly from the cost analyses presented in Section 5.2 and Section 5.4.
Hence, we posit that the tight upper bound for rFHB stated in inequality (39) of Theorem 3 also applies to the sFHB algorithm, which is conceptually much simpler.

7. Closing Remarks

A tableau that is polynomially bounded inherently lacks the capacity to concretely represent an exponential number of computation paths, including the full spectrum of coin-tossing behaviors exhibited by the ⟨ N , k ⟩ machine under scrutiny. As posited in the main body of this paper, these coin tosses exhibit substantial correlation. Moreover, as the length of the input w to machine N increases, the portion of the tableau attributable to coin tosses becomes increasingly negligible relative to its overall size.
To formally quantify these properties, a rigorous analytical framework was introduced, culminating in the presentation and cost evaluation of two enhanced variants of the original FHB algorithm: rFHB and sFHB. The latter, in retrospect, proves to be the most straightforward to explain, and a quick performance estimate—independent of the author’s prior analysis [1]—suggests a quasi-polynomial runtime upper bound.
The more intricate result of this paper, Theorem 3, leads to the conjecture that the complexity classes P and NP may be equal. However, no such claim is asserted in this paper. The proof of Theorem 3 does not meet the implementation standards championed by Scott Aaronson—as exemplified in this post https://scottaaronson.blog/?p=676 (accessed on 1 August 2025)—nor those upheld by the broader research community. It also fails to meet the less stringent expectation that practical demonstrations should accompany select portions of this paper.
While such critique is valid, the institutional context is equally noteworthy: engineering teams typically require a mathematically sanctioned roadmap—endorsed by a select group of theorists—before committing substantial resources to novel ventures, such as implementing the sFHB algorithm in silicon. Aaronson’s limited engagement with engineering realities, as reflected in the aforementioned blog post, is emblematic of a broader trend. This disconnect is notably less prevalent among mathematical engineers, as discussed in the author’s 2022 lecture [24].
Finally, it should be noted that engineering advances can—and often do—occur when the underlying theory is, in retrospect, incomplete. A prime example is the notable mathematical inaccuracies committed by Joseph Fourier, whose pioneering work on heat conduction and trigonometric series laid the foundation for Fourier analysis. Despite his incorrect assumption that any function could be represented by a Fourier series—an assertion that lacked rigorous justification at the time—his methods proved remarkably effective in engineering applications [25]. The aforementioned talk by the author offers historical examples of the complex interplay between theory and practice in computer science. Accordingly, this paper aims to initiate a dialogue between practitioners and those theorists who are agnostic about P vs. NP , fostering collaboration to complement aspects of the theory presented herein with long-term experimental validation.1

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author extends his appreciation to two anonymous reviewers and thanks Wilfried Verachtert, Diederik Verkest, and Roel Wuyts for an intensive discussion of an early draft of this paper at IMEC during the spring of 2025.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
TM: Turing Machine
NP: Nondeterministic Polynomial

Appendix A

Definition A1.  
Cf. [10] (pp. 259, 271). Variables that can take on the values TRUE and FALSE are called Boolean variables. We represent TRUE by 1 and FALSE by 0. The Boolean operations  AND , OR , and NOT , represented by the symbols ∧, ∨, and ¬, respectively, are described in the standard manner. We use the overbar as a shorthand for the ¬ symbol, so x ¯ means ¬ x . A Boolean formula is an expression involving Boolean variables and operations. It is satisfiable if some assignment of 0s and 1s to the variables makes the formula evaluate to 1. We say the assignment satisfies ϕ . The satisfiability problem is to test whether a Boolean formula is satisfiable. Let SAT = { ⟨ ϕ ⟩ ∣ ϕ is a satisfiable Boolean formula } , where ⟨ ϕ ⟩ refers to a standard encoding of ϕ.
Definition A2.  
Cf. [10] (p. 273). A literal is a Boolean variable or a negated Boolean variable, as in x or x ¯ . The former is called a positive literal, while the latter is called a negative literal. A clause is several literals connected with ∨s, as in x 1 ∨ x 2 ¯ ∨ x 3 ¯ ∨ x 4 . A Boolean formula is in conjunctive normal form, called a cnf-formula, if it comprises clauses connected with ∧s, as in ( x 1 ∨ x 2 ¯ ∨ x 3 ¯ ∨ x 4 ) ∧ ( x 3 ∨ x 5 ¯ ∨ x 6 ) ∧ ( x 3 ∨ x 6 ¯ ) . The Boolean formula is a 3cnf-formula if each clause has three literals, as in ( x 1 ∨ x 2 ¯ ∨ x 3 ¯ ) ∧ ( x 3 ∨ x 5 ¯ ∨ x 6 ) ∧ ( x 3 ∨ x 6 ¯ ∨ x 4 ) . A 2cnf-formula is an AND of clauses, where each clause is an OR of at most two literals.
Definition A3.  
Cf. [20] (pp. 34–35). A (propositional) Horn formula is a cnf-formula where every disjunction contains at most one positive literal.
Also, HORNSAT = { ⟨ ϕ ⟩ ∣ ϕ is a satisfiable Horn formula } .
Remark A1.  
Horn clauses can be written as implications by the following equivalence (≡):
x 1 ¯ ∨ ⋯ ∨ x k ¯ ∨ x ≡ x 1 ∧ ⋯ ∧ x k → x
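The equivalence can be confirmed exhaustively; a minimal sketch for the case k = 2:

```python
# Brute-force check of Remark A1's equivalence for k = 2:
# ¬x1 ∨ ¬x2 ∨ x  ≡  (x1 ∧ x2) → x, over all 8 truth assignments.
from itertools import product

for x1, x2, x in product([False, True], repeat=3):
    disjunction = (not x1) or (not x2) or x
    implication = (not (x1 and x2)) or x
    assert disjunction == implication
```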
Theorem A1.  
Cf. [20] (p. 35). HORNSAT ∈ P .
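The standard proof of this theorem is constructive: forward chaining (unit propagation) decides a Horn formula in polynomial time. A minimal sketch, with each Horn clause as a (body, head) pair and head None for a clause without a positive literal; the representation is an illustrative choice.

```python
# Minimal HORNSAT decision procedure by forward chaining (unit propagation).
# A Horn clause x1 ∧ ... ∧ xk → x is a (body, head) pair; head None encodes
# a clause with no positive literal. Runs in polynomial time (naive O(n^2)).
def hornsat(clauses):
    true = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if set(body) <= true:
                if head is None:
                    return False          # a goal clause fired: unsatisfiable
                if head not in true:
                    true.add(head)
                    changed = True
    return True                           # minimal model found: satisfiable

assert hornsat([([], "a"), (["a"], "b")]) is True
assert hornsat([([], "a"), (["a"], "b"), (["a", "b"], None)]) is False
```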
Definition A4.  
Cf. Sipser [10] (p. 140). A deterministic Turing machine is an 8-tuple
⟨ Q ,   Γ ,   Φ ,   δ ,   T ,   q 0 ,   q a c c e p t ,   q r e j e c t ⟩ , with Q ,   Γ ,   Φ ,  T finite sets:
   
Q is the set of states, and  Γ  is the input alphabet not containing the blank symbol □.
Φ  is the tape alphabet, where  □ ∈ Φ  and  Γ ⊆ Φ .
δ : Q × Φ → Q × Φ × { + , − }  is the transition function.
Every transition in  δ  is accompanied by a distinct label t.
T is the label set, containing all such labels.
q 0 Q  is the start state.
q a c c e p t Q  is the accept state,  q r e j e c t Q  is the reject state, with  q r e j e c t q a c c e p t .
   
As a Turing machine computes, changes occur in the current state, the current tape contents, and the current head location. A setting of these three items is called a configuration of the Turing machine. The tape of the Turing machine is one-way infinite, from left to right. Specifically, for each input w 0 w 1 ⋯ w n − 1 of length n, machine M starts in configuration □ q 0 , for n = 0 , and in configuration w 0 q 0 w 1 ⋯ w n − 1 , for n > 0 . In both cases, the notation s q 0 , with s ∈ Φ , signifies that the head is located at the tape cell containing symbol s, while the machine resides in state q 0 . Machine M neither starts in q a c c e p t or q r e j e c t , nor progresses beyond either one of these states. Specifically, once M reaches q a c c e p t , it remains active solely in that state. Likewise for q r e j e c t . We take q a c c e p t to be some q m with m > 0 and similarly for q r e j e c t . Input word w is considered accepted when M on w reaches q a c c e p t . We write L ( M ) to denote the language accepted by M. We use notation t : ⟨ q 1 , x ⟩ → ⟨ q 2 , y , μ ⟩ when referring to some transition in δ with label t ∈ T and movement μ ∈ { + , − } . The plus sign (minus sign) signifies a movement to the right (to the left).
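These conventions (labeled transitions, movements in {+, −}, a one-way infinite tape) can be made concrete with a small simulator; the machine below is an illustrative toy, not one of the machines in this paper.

```python
# Toy simulator for Definition A4: a deterministic TM with transitions
# (q, x) -> (q', y, move), move in {+1, -1}, on a one-way infinite tape.
# The example machine scans right and accepts at the first blank; it is an
# illustrative toy, not one of the paper's machines.
BLANK = "_"

def run(delta, w, q0="q0", accept="q_accept", reject="q_reject", limit=1000):
    tape = list(w) or [BLANK]
    state, head = q0, 0
    for _ in range(limit):
        if state in (accept, reject):
            return state
        if head >= len(tape):
            tape.append(BLANK)           # extend the one-way infinite tape
        read = tape[head]
        state, write, move = delta[(state, read)]
        tape[head] = write
        head = max(0, head + move)       # head cannot move past the left end
    raise RuntimeError("step limit exceeded")

delta = {  # labels t1, t2, t3: scan right over 0s and 1s, accept on blank
    ("q0", "0"): ("q0", "0", +1),
    ("q0", "1"): ("q0", "1", +1),
    ("q0", BLANK): ("q_accept", BLANK, +1),
}
assert run(delta, "0110") == "q_accept"
```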
Definition A5.  
Cf. Sipser ([10], p. 150). A nondeterministic Turing machine is an 8-tuple, ⟨ Q ,   Γ ,   Φ ,   δ ,   T ,   q 0 ,   q a c c e p t ,   q r e j e c t ⟩ . At any point in a computation, the machine may proceed according to several possibilities. The transition function for the machine has the form δ : Q × Φ → P ( Q × Φ × { + , − } ) , where P denotes the power set. The computation of the machine is a tree whose branches correspond to different possibilities for the machine. If some branch of the computation leads to the state q a c c e p t , the machine accepts its input.
In conformity with Definition A4, each transition in set δ is accompanied by a distinct label t, and T is now called the general label set, containing all such labels. For instance, consider notation t : ⟨ q , x ⟩ → { ⟨ q 1 , y 1 , μ 1 ⟩ , ⟨ q 2 , y 2 , μ 2 ⟩ } , with label t ∈ T , states q , q 1 , q 2 ∈ Q , symbols x , y 1 , y 2 ∈ Φ , movements μ 1 , μ 2 ∈ { + , − } , and with tuple ⟨ q 1 , y 1 , μ 1 ⟩ different from ⟨ q 2 , y 2 , μ 2 ⟩ . This notation captures the  nondeterministic transition  encompassing the deterministic transitions t 1 : ⟨ q , x ⟩ → ⟨ q 1 , y 1 , μ 1 ⟩ and t 2 : ⟨ q , x ⟩ → ⟨ q 2 , y 2 , μ 2 ⟩ . We define the basic label set, denoted as T [ ] , as the set that encompasses the labels of all deterministic transitions, such as t 1 and t 2 .
Remark A2.  
In our discussion, we employ the term “nondeterministic transition” explicitly, while abbreviating “deterministic transition” and “basic label set” to simply “transition” and “label set.” We frequently omit brackets for ease of reading; e.g., we write t and t instead of t 1 and t 2 . The crux is that each basic label is unique.
Definition A6.  
Consider an arbitrary nondeterministic Turing machine, N, and its basic label set, T [ ] . For any label t in T [ ] with corresponding signature t : ⟨ q s o u r c e , s r e a d ⟩ → ⟨ q t a r g e t , s w r i t e , + ⟩ or t : ⟨ q s o u r c e , s r e a d ⟩ → ⟨ q t a r g e t , s w r i t e , − ⟩ , we let “ N - source ( t ) , ” “ N - target ( t ) , ” and “ N - write ( t ) ” stand for the symbols s r e a d q s o u r c e ,   q t a r g e t , and s w r i t e , respectively. When machine N is clear from the context, we shall simply note down “ source ( t ) , ” “ target ( t ) , ” and “ write ( t ) ,” respectively.
Definition A7.  
Consider an arbitrary nondeterministic Turing machine, N, and its general label set, T. We let T det denote the subset of T containing the labels of all deterministic instructions of N. We let T det + , respectively T det , denote the subset of T det containing the labels of all deterministic instructions of N whose movement is to the right (+), respectively to the left (−).
Definition A8.  
Cf. ([10], p. 255). Let N be a nondeterministic Turing machine decider. Its running time is function t : N → N , with t ( n ) the maximum number of steps N uses on any branch of its computation, on any input of length n, before reaching either the q a c c e p t or the q r e j e c t state for the first time.
Definition A9.  
Cf. ([10], pp. 251, 258). Let t : N → R + be a function, with R + denoting the set of nonnegative real numbers. Define the time complexity class , TIME ( t ( n ) ) , to be the collection of all languages that are decidable by an O ( t ( n ) ) time Turing machine. P is the class of languages that are decidable in polynomial time on a deterministic single-tape Turing machine. In other words, P = ⋃ k TIME ( n k ) .
Theorem A2.  
Cf. ([20], p. 35). $\mathrm{HORNSAT} \in P$.
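Theorem A2 rests on the fact that satisfiability of a Horn formula can be decided by forward chaining (unit propagation). The following Python sketch illustrates the idea with a clause encoding of our own; it is a quadratic-time marking loop, not the linear-time algorithm of Dowling and Gallier [21]:

```python
def horn_sat(clauses):
    """Decide satisfiability of a Horn formula by forward chaining.

    A clause is a pair (body, head), encoding b1 AND ... AND bk -> head.
    head=None encodes a purely negative clause (no positive literal);
    a fact is encoded as ((), head).
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if all(b in true_vars for b in body):
                if head is None:
                    return False           # a purely negative clause is violated
                if head not in true_vars:
                    true_vars.add(head)    # forced: add to the minimal model
                    changed = True
    return True


horn_sat([((), "x"), (("x",), "y")])  # → True: {x, y} is the minimal model
```

The loop only ever adds variables that are forced, so it computes the minimal model; the formula is unsatisfiable exactly when that model violates some purely negative clause.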
Definition A10.  
Cf. ([10], pp. 265–267). A verifier for a language $A$ is an algorithm $V$, where $A = \{\, w \mid V \text{ accepts } \langle w, c \rangle \text{ for some string } c \,\}$. We measure the time of a verifier only in terms of the length of $w$, so a polynomial time verifier runs in polynomial time in the length of $w$. A language $A$ is polynomially verifiable if it has a polynomial time verifier. $NP$ is the class of languages that have polynomial time verifiers.
The remaining three items largely come from ([10], pp. 266–276).
Theorem A3.  
A language is in NP iff it is decided by some nondeterministic polynomial time Turing machine.
Definition A11.  
$\mathrm{NTIME}(t(n)) = \{\, L \mid L \text{ is decided by an } O(t(n)) \text{ time nondeterministic TM} \,\}$. A function $f : \Sigma^{*} \to \Sigma^{*}$ is a polynomial time computable function if some deterministic polynomial time Turing machine $M$ exists that reaches the $q_{accept}$ state for the first time with just $f(w)$ on its tape, when started on any input $w$. A language $B$ is $NP$-complete if it satisfies two conditions: (1) $B$ is in $NP$, and (2) every $A$ in $NP$ is polynomial time reducible to $B$.
Theorem A4.  
If $B$ is $NP$-complete and $B \in P$, then $P = NP$.

Appendix B

Appendix B.1. The Single Part

We define $\psi_{extra}^{single}$ as follows:
$$\psi_{extra}^{single} = \bigwedge \Big[\, s_q@(3l-2,\, j) \;\to\; \bigwedge_{s'_{q'}} \bigwedge_{j' \neq j} \neg\, s'_{q'}@(3l-2,\, j') \,\Big],$$
where $s, s' \in \Phi$; $q, q' \in Q$; and the column indices $j$ and $j'$ range from 1 to $n^k + 2$. This condition ensures that if the tape-state symbol $s_q$ is stored in $cell[3l-2,\, j]$, it is the only tape-state symbol in row $3l-2$. In other words, no tape-state symbol $s'_{q'}$ can be stored in any other cell within the same row.
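To make the quantification concrete, the following Python sketch (an assumed encoding; the variable-naming scheme is ours) enumerates the pairwise exclusions that $\psi_{extra}^{single}$ imposes on one row. Exclusions within a single cell are already handled by $\psi_{cell}$ and are therefore skipped:

```python
from itertools import product

def single_head_clauses(row, n_cols, tape_state_symbols):
    """Pairwise exclusions of psi_extra^single for one row of the tableau.

    Each returned pair of (row, column, symbol) triples stands for the
    Horn clause  NOT s_q@(row, j)  OR  NOT s'_q'@(row, j'),  with j < j'.
    Same-cell exclusions (j = j') are left to psi_cell.
    """
    cells = list(product(range(1, n_cols + 1), tape_state_symbols))
    return [((row, j, a), (row, j2, b))
            for (j, a), (j2, b) in product(cells, repeat=2)
            if j < j2]
```

With one tape-state symbol and three columns, this yields the three clauses over the column pairs (1, 2), (1, 3), and (2, 3).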

Appendix B.2. The Left Part

The formula $\psi_{extra}^{left}$ is defined as follows:
$$\psi_{extra}^{left} = \psi_1^{left} \wedge \psi_2^{left} \wedge \psi_3^{left}.$$
On the one hand, we reason from row $i$ both towards earlier rows ($-\delta$) and towards later rows ($+\delta$):
$$\psi_1^{left} = \bigwedge_{0 < m < j} \Big[\, s_q@(3l-2,\, j) \wedge s@(3l-2,\, j-m) \;\to\; \bigwedge_{r(\delta)} s@((3l-2) \pm 3\delta,\, j-m) \,\Big],$$
where the restriction on $\delta$, denoted by $r(\delta)$, is defined by the following inequalities:
  • $0 \leq \delta \leq m-1$.
  • $1 \leq (3l-2) - 3\delta$ and $(3l-2) + 3\delta \leq 3n^k + 1$.
The notation $\pm 3\delta$ will be illustrated and defined shortly.
Example A1. 
A simple example for an arbitrary row $i = 3l-2$ occurs when $j = 2$ in Figure 8, which schematically results in Figure A1. The implication is that $m = 1$ and the formula $\psi_1^{left}$ reduces to
$$a_{q_5}@(i, 2) \wedge s@(i, 1) \to \bigwedge_{r(\delta)} s@(i \pm 3\delta,\, 1). \qquad (A1)$$
The expression
$$s@(i \pm 3\delta,\, 1) \qquad (A2)$$
is shorthand for:
$$s@(i - 3\delta_{min}(1),\, 1) \wedge s@(i - 3\delta_{min}(1) + 1,\, 1) \wedge s@(i - 3\delta_{min}(1) + 2,\, 1) \wedge \cdots \wedge s@(i + 3\delta_{max}(1) - 1,\, 1) \wedge s@(i + 3\delta_{max}(1),\, 1),$$
where the row indices increase from the leftmost to the rightmost conjunct. Since in this example, and for reasons yet to be explained, $\delta_{min}(1) = 0 = \delta_{max}(1)$, the expression (A2) simplifies to:
$$s@(i, 1).$$
Consequently, the implication (A1) becomes:
$$a_{q_5}@(i, 2) \wedge s@(i, 1) \to \bigwedge_{r(\delta)} s@(i, 1), \qquad (A3)$$
where the restriction on $\delta$, denoted by $r(\delta)$, is defined by the following inequalities:
  • $0 \leq \delta \leq 0$.
  • $1 \leq i - 3\delta$ and $i + 3\delta \leq 3n^k + 1$; that is, $1 \leq i \leq 3n^k + 1$.
In fact, since the consequent in (A3) does not depend on $\delta$, the result is simply:
$$a_{q_5}@(i, 2) \wedge s@(i, 1) \to s@(i, 1),$$
where the consequent corresponds precisely with the single cross in column 1 in Figure A1.
Figure A1. A tiny illustration for Example A1.
More generally, the expression
$$s@(i \pm 3\delta,\, j)$$
is shorthand for:
$$s@(i - 3\delta_{min}(m),\, j) \wedge s@(i - 3\delta_{min}(m) + 1,\, j) \wedge s@(i - 3\delta_{min}(m) + 2,\, j) \wedge \cdots \wedge s@(i + 3\delta_{max}(m) - 1,\, j) \wedge s@(i + 3\delta_{max}(m),\, j),$$
where $\delta_{min}(m)$ is defined as the maximum (not the minimum) of the set:
$$\{\, \delta \mid 0 \leq \delta \leq m-1 \ \text{and}\ 1 \leq i - 3\delta \leq 3n^k + 1 \,\}.$$
Similarly, $\delta_{max}(m)$ is defined as the maximum of the set:
$$\{\, \delta \mid 0 \leq \delta \leq m-1 \ \text{and}\ 1 \leq i + 3\delta \leq 3n^k + 1 \,\}.$$
Example A2. 
For instance, if $a_{q_5}@(i, j)$ and $s@(i,\, j-3)$ hold, with $i = 3l-2$, then the following must also hold:
$$\bigwedge_{r(\delta)} \big[\, s@(i \pm 3\delta,\, j-3) \,\big],$$
where $r(\delta)$ is defined by the following inequalities:
  • $0 \leq \delta \leq 2$.
  • $1 \leq i - 3\delta$ and $i + 3\delta \leq 3n^k + 1$.
The reader can verify that all relevant cells range from $cell[i - 3 \cdot 2,\, j-3]$ at the top to $cell[i + 3 \cdot 2,\, j-3]$ at the bottom. These $3 \cdot 2 + 1 + 3 \cdot 2 = 13$ cells correspond to the 13 crossed-out entries in column $j-3$ in Figure 8.
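Under the stated definitions of $\delta_{min}(m)$ and $\delta_{max}(m)$, the range of affected rows can be computed mechanically. A small Python sketch (the function names are ours) reproduces the 13-cell count of Example A2:

```python
def delta_min(i, m, last_row):
    """Largest d with 0 <= d <= m-1 and 1 <= i - 3d <= last_row
    (a maximum, despite its name; cf. the definition above)."""
    return max(d for d in range(m) if 1 <= i - 3 * d <= last_row)

def delta_max(i, m, last_row):
    """Largest d with 0 <= d <= m-1 and 1 <= i + 3d <= last_row."""
    return max(d for d in range(m) if 1 <= i + 3 * d <= last_row)

def crossed_rows(i, m, last_row):
    """Consecutive rows covered by the shorthand s@(i +/- 3d, j - m)."""
    return list(range(i - 3 * delta_min(i, m, last_row),
                      i + 3 * delta_max(i, m, last_row) + 1))

# Example A2: m = 3, with i deep inside a tableau whose last row 3n^k + 1 = 49.
len(crossed_rows(7, 3, 49))  # → 13
```

Near the top or bottom boundary of the tableau, the second inequality clips one of the two bounds, and the range shrinks accordingly.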
On the other hand, we reason from the earliest row ($i - 3\delta_{min}(m)$) and, respectively, the latest row ($i + 3\delta_{max}(m)$) toward row $i$, with $i = 3l-2$:
$$\psi_2^{left} = \bigwedge_{0 < m < j} \Big[\, s_q@(3l-2,\, j) \wedge s@((3l-2) - 3\delta_{min}(m),\, j-m) \to s@(3l-2,\, j-m) \,\Big],$$
$$\psi_3^{left} = \bigwedge_{0 < m < j} \Big[\, s_q@(3l-2,\, j) \wedge s@((3l-2) + 3\delta_{max}(m),\, j-m) \to s@(3l-2,\, j-m) \,\Big].$$
The satisfiability of $\psi_3^{left}$ ensures that if a specific cell, such as
$$cell\big[(3l-2) + 3\delta_{max}(m),\, j-m\big],$$
for some fixed $m$, is filled with the tape symbol $s$, then—through a chain reaction partially facilitated by $\psi_1^{left}$—all crosses in column $j-m$ are replaced with the symbol $s$. A similar observation applies to the satisfiability of $\psi_2^{left}$, which affects the propagation from an earlier cell, i.e.,
$$cell\big[(3l-2) - 3\delta_{min}(m),\, j-m\big].$$

Appendix B.3. The Right Part

Given the inherent symmetry of the problem, the formal definition of $\psi_{extra}^{right}$ closely mirrors that of $\psi_{extra}^{left}$ and is therefore omitted from this paper.

Appendix B.4. The Extend Part

$$\psi_{extra}^{extend} = \psi_{extra}^{-} \wedge \psi_{extra}^{-,-} \wedge \psi_{extra}^{+} \wedge \psi_{extra}^{+,+},$$
where—for $s, s' \in \Phi$ and $q, q' \in Q$—we have:
$$\psi_{extra}^{-} = \bigwedge_{s_q} \bigwedge_{s'_{q'}} \Big[\, s_q@(3l-2,\, j) \wedge s'_{q'}@(3l+1,\, j-1) \to t_{-}(s, q)@(3l,\, j) \,\Big],$$
$$\psi_{extra}^{-,-} = \bigwedge_{s_q} \bigwedge_{s'} \Big[\, s_q@(3l-2,\, j) \wedge s'@(3l+1,\, j+1) \to t_{-}(s, q)@(3l,\, j) \,\Big],$$
$$\psi_{extra}^{+} = \bigwedge_{s_q} \bigwedge_{s'_{q'}} \Big[\, s_q@(3l-2,\, j) \wedge s'_{q'}@(3l+1,\, j+1) \to t_{+}(s, q)@(3l,\, j) \,\Big],$$
$$\psi_{extra}^{+,+} = \bigwedge_{s_q} \bigwedge_{s'} \Big[\, s_q@(3l-2,\, j) \wedge s'@(3l+1,\, j-1) \to t_{+}(s, q)@(3l,\, j) \,\Big].$$
Here, $t_{-}(s, q)$ and $t_{+}(s, q)$ represent two distinct labels of the machine $N$, such that:
$$N\text{-}source\langle t_{-}(s, q)\rangle = s_q = N\text{-}source\langle t_{+}(s, q)\rangle.$$
Recall Definition A6. The minus (plus) sign indicates that N moves to the left (right).

Appendix B.5. Consolidation

In summary, $\psi_{extra}^{1}$ is a Horn formula, with a size of $O(n^{4k})$, where the constant $k$ corresponds to the running time $n^k$ of machine $N$.

Appendix C

Several more constraints can be formalized. For instance, consider $l = 3$ and Figure 12. Suppose that, in the mini tableau, the coin toss outcome $x_2 = 1$ has materialized:
$$\hat{q}_1@(3, 4)^B \vee \hat{q}_1@(5, 4)^B.$$
Now, consider the cell $c'$—positioned at row $i'$ and column $j'$, beyond the confines of the first $\hat{r}$ rows of the mini tableau—for which the following proposition holds:
$$b_{U_0}@(i'-1,\, j'+1)^B, \quad \text{with } b \in \{0, 1\}.$$
In other words, relative to the cell $c'$, one time step earlier and one cell to the right, the head is scanning a bit $b$ while in state $U_0$. This state of affairs is exemplified by the following configuration with $b = 0$:
$$\&\ \ \$\ \ c_{l_m}\ \ \$\ \cdots\ \$\ \ c_{l_2}\ \ \$\ \ a\ \ 1\ \ 0_{U_0}\ \ 0\ \ 1\ \ 0\ \ \$\ \ \#\ \cdots\ \#.$$
In conformity with instructions $t_{8,0}$ and $t_{8,1}$—recall from Section 4.3.4:
$$t_{8,x} : (U_0,\, x) \to (U_0,\, x,\, -), \quad x \in \{-,\ \$,\ 0,\ 1\}$$
—we then posit that
$$1_{U_0}@(i',\, j')^B$$
must hold. Here, the tape-state symbol $1_{U_0}$ signifies that the TM’s head is scanning—from right to left—in search of the first occurrence of a symbol $\sigma \in \{a, \bar{a}\}$ located to the left of the bit 1 in cell $c'$. (When such a $\sigma$ is found, the machine overwrites it, replacing $a$ with 0 and $\bar{a}$ with 1.)
Formally, these long-range dependencies are captured by the following constraints—each a Horn formula with three literals:
$$\hat{q}_1@(3, 4)^B \wedge 1_{U_0}@(i'-1,\, j'+1)^B \to 1_{U_0}@(i',\, j')^B,$$
$$\hat{q}_1@(5, 4)^B \wedge 1_{U_0}@(i'-1,\, j'+1)^B \to 1_{U_0}@(i',\, j')^B,$$
$$\hat{q}_1@(3, 4)^B \wedge 0_{U_0}@(i'-1,\, j'+1)^B \to 1_{U_0}@(i',\, j')^B,$$
$$\hat{q}_1@(5, 4)^B \wedge 0_{U_0}@(i'-1,\, j'+1)^B \to 1_{U_0}@(i',\, j')^B.$$
Conservatively, this yields a complexity of:
$$O(n^2) \times l \times l \times 2 = O(n^4),$$
since $l = O(n)$.
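The four long-range constraints above follow one template, instantiated over the two candidate coin rows and the two possible scanned bits. A hypothetical Python sketch (the string encoding of propositions is ours, purely for illustration) enumerates them:

```python
from itertools import product

def coin_constraints(coin_rows, coin_col, cell):
    """Enumerate the long-range Horn constraints for a materialized toss.

    coin_rows: the candidate rows of the coin-toss proposition (here 3 and 5);
    coin_col:  its column in the basic mini tableau (here 4);
    cell:      the pair (i', j') locating the cell c' in the basic tableau.
    Each result is a triple (antecedent_1, antecedent_2, consequent).
    """
    i, j = cell
    consequent = f"1_U0@({i},{j})B"
    return [(f"q1^@({r},{coin_col})B",
             f"{b}_U0@({i - 1},{j + 1})B",
             consequent)
            for r, b in product(coin_rows, (1, 0))]

constraints = coin_constraints((3, 5), 4, (20, 7))
len(constraints)  # → 4
```

Regardless of the scanned bit $b$, the consequent is the same: the materialized outcome $x_2 = 1$ forces the head to read a 1 when it reaches cell $c'$.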

Appendix D

To solve the recurrence relation, start with the following setup:
$$T(p) = \kappa_0 \cdot \sqrt{p} \cdot T\!\left(\frac{p}{2(\log l)^2}\right)^{2}, \quad \text{with } T(1) = O(1),$$
where $\kappa_0$ is a constant.

Appendix D.1. Step 1: Change of Variables

Set:
$$q = \log p \iff p = 2^q, \quad \text{so } \sqrt{p} = 2^{q/2}.$$
Now rewrite the argument of the recurrence:
$$\frac{p}{2(\log l)^2} = \frac{2^q}{2(\log l)^2} = 2^{\,q - \log\left(2(\log l)^2\right)} = 2^{\,q - \log 2 - 2\log\log l}.$$
Let:
$$c = \log 2 + 2\log\log l = 1 + 2\log\log l.$$
Then the recurrence becomes:
$$T(2^q) = \kappa_0 \cdot 2^{q/2} \cdot T(2^{\,q-c})^{2}.$$
Taking logarithms (base 2) on both sides:
$$\log T(2^q) = \log \kappa_0 + \frac{q}{2} + 2\log T(2^{\,q-c}).$$
Let:
$$S(q) = \log T(2^q),$$
so the recurrence becomes:
$$S(q) = \log \kappa_0 + \frac{q}{2} + 2\, S(q - c).$$

Appendix D.2. Step 2: Solve the Linear Recurrence

Unrolling the recurrence for n steps:
$$S(q) = \sum_{i=0}^{n-1} 2^{i} \left( \log \kappa_0 + \frac{q - ic}{2} \right) + 2^{n}\, S(q - nc).$$
To reach the base case $S(q - nc) = O(1)$, choose:
$$n \approx \frac{q}{c}, \quad \text{i.e., } n = \left\lceil \frac{q}{c} \right\rceil.$$
Then:
$$S(q) = O(2^{n} \cdot q) = O\!\left(2^{q/c} \cdot q\right).$$
Recalling that $q = \log p$, we obtain:
$$S(q) = \log T(p) = O\!\left(p^{1/c} \log p\right).$$
Exponentiating both sides:
$$T(p) = 2^{S(\log p)} = 2^{\,O\left(p^{1/c} \log p\right)},$$
where:
$$c = 1 + 2\log\log l.$$
Thus, the solution is:
$$T(p) = 2^{\,O\left(p^{1/(1 + 2\log\log l)}\, \cdot\, \log p\right)}.$$
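As a sanity check on Step 2, the following Python sketch (the parameter values are chosen purely for illustration) confirms that the $n$-step unrolled sum agrees exactly with the recursive definition of $S$:

```python
def S_rec(q, c, log_k0, base=1.0):
    """S(q) = log k0 + q/2 + 2*S(q - c), with a constant base case at q <= 0."""
    if q <= 0:
        return base
    return log_k0 + q / 2 + 2 * S_rec(q - c, c, log_k0, base)

def S_unrolled(q, c, log_k0, n, base=1.0):
    """The recurrence unrolled n times:
    sum_{i<n} 2^i (log k0 + (q - i c)/2)  +  2^n S(q - n c)."""
    partial = sum(2 ** i * (log_k0 + (q - i * c) / 2) for i in range(n))
    return partial + 2 ** n * S_rec(q - n * c, c, log_k0, base)

q, c, log_k0 = 10.0, 3.0, 0.5
assert all(abs(S_unrolled(q, c, log_k0, n) - S_rec(q, c, log_k0)) < 1e-9
           for n in range(1, 4))
```

Each unrolling step is exact as long as the argument $q - ic$ stays positive, which the chosen values guarantee for $n \leq 3$.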

Appendix D.3. Asymptotic Behavior as l→∞

As $l \to \infty$, both $\log l$ and $\log\log l$ tend to infinity,
so:
$$\frac{1}{1 + 2\log\log l} \to 0.$$
Then:
$$p^{1/(1 + 2\log\log l)} \to 1,$$
and thus:
$$T(p) = 2^{\,O(\log p)} = O(p^{\alpha}),$$
for some constant α > 0 .
Hence, asymptotically:
$$T(p) = p^{\,O(1)}$$
as l . This implies that T ( p ) is polynomial in p, and the degree becomes smaller as l increases.

Appendix E

Theorem 2 is applied iteratively to establish the validity of Theorem 3.

Appendix E.1. First Iteration

Theorem 2 conveys that any 3-SAT solver $N_1$, with state set $Q_1$ and tape alphabet $\Phi_1$—operating with $O(n)$ nondeterministic binary guesses followed by $O(n)$ deterministic verification time—can be simulated deterministically by some $M_1$ via the rFHB algorithm.
Via a single application of Theorem 2, the resulting deterministic TM M 1 runs in at most
$$K_1^{\,n^{0.67}} = 2^{(\log K_1) \cdot n^{0.67}} = 2^{\alpha_1 \cdot n^{0.67}}$$
time, where:
  • The constant $K_1$ depends on the sizes of $Q_1$ and $\Phi_1$.
  • The constant $\alpha_1 = \log K_1 > 0$.
Inspired by the existence of deterministic machine $M_1$, we can now construct a nondeterministic polynomial time TM $N_2$, with state set $Q_2$ and tape alphabet $\Phi_2$, where
$$|Q_1| \leq |Q_2| \quad \text{and} \quad \Phi_1 \subseteq \Phi_2.$$
This machine $N_2$ is functionally equivalent to $M_1$, and hence is also a 3-SAT solver.
At first glance, $N_2$ operates with
$$O(\alpha_1 \cdot n^{0.67}) = O(n^{0.67})$$
nondeterministic binary guesses, followed by
$$\nu = O(n^{\kappa_1})$$
deterministic verification time.
deterministic verification time. Recall Claim 4.
Upon closer inspection, $N_2$ operates with, say,
$$O(n^{0.70})$$
nondeterministic binary guesses. A small but nonzero increase in the number of guesses (raising the exponent from 0.67 to 0.70) is required here, for the following reason: $N_2$ guesses (and correctly so, in the satisfiable case) the guesses made by rFHB relative to $N_1$. Although the total number of holes is $O(n^{0.67})$, rather than linear in $n$, an additional $O(\log l)$ bits are needed to uniquely identify each guess within the tableau. Consequently, $N_2$ makes
$$O(\log l) \times O(n^{0.67})$$
binary guesses, which is simplified to $O(n^{0.70})$ for subsequent analysis.
To recapitulate, the functionally equivalent machines $N_1$ and $N_2$ require $O(n)$ and $O(n^{0.70})$ guesses, respectively. For sufficiently large $n$, machine $N_2$ makes fewer guesses than $N_1$.

Appendix E.2. Second Iteration

Via a second application of Theorem 2, we may infer the existence of a deterministic machine $M_2$ that simulates the nondeterministic machine $N_2$. This machine $M_2$ runs in at most
$$2^{\alpha_2 \cdot (n^{0.70})^{0.67}} \times \nu = 2^{\alpha_2 \cdot n^{0.47}} \times \nu$$
time, where:
  • The constant $\alpha_2 = \log K_2 > 0$.
  • $K_2$ depends on the sizes of $Q_2$ and $\Phi_2$, with $K_1 < K_2$.
Inspired by the existence of deterministic machine $M_2$, we can now construct a functionally equivalent nondeterministic polynomial time TM $N_3$, with state set $Q_3$ and tape alphabet $\Phi_3$, where
$$|Q_2| \leq |Q_3| \quad \text{and} \quad \Phi_2 \subseteq \Phi_3.$$
This machine $N_3$ operates, not with
$$O(n^{0.47}),$$
but with, say,
$$O(n^{0.50})$$
nondeterministic binary guesses, followed by
$$\nu + \nu = 2\nu$$
verification time. Again, the slight increase in the exponent is due to the extra bits needed to uniquely identify each guess within the tableau.
To recapitulate, $N_2$ and $N_3$ require $O(n^{0.70})$ and $O(n^{0.50})$ guesses, respectively. For sufficiently large $n$, machine $N_3$ makes fewer guesses than $N_2$.

Appendix E.3. Third Iteration

A third application of Theorem 2 entails the existence of a deterministic TM $M_3$ that simulates the nondeterministic machine $N_3$. And so on.

Appendix E.4. The j-th Iteration

Inspired by the existence of deterministic machine $M_j$, we can now construct a functionally equivalent, nondeterministic polynomial time TM $N_{j+1}$, with state set $Q_{j+1}$ and tape alphabet $\Phi_{j+1}$, where
$$|Q_j| \leq |Q_{j+1}| \quad \text{and} \quad \Phi_j \subseteq \Phi_{j+1}.$$
This machine $N_{j+1}$ operates, not with
$$O\!\left(n^{(0.67)^j}\right),$$
but with, say,
$$O\!\left(n^{(0.70)^j}\right)$$
nondeterministic binary guesses, followed by
$$j \cdot \nu$$
verification time.

Appendix E.5. Consolidation

Take $j = \kappa \times \log n$, for some sufficiently large constant $\kappa$. Then machine $N_{j+1}$ performs $O(1)$ guesses, followed by
$$\kappa \times (\log n) \times \nu = O(n^{\kappa_1 + 1})$$
deterministic verification time. That is, $N_{j+1}$ is a deterministic polynomial time TM.
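The consolidation arithmetic can be checked directly: with $j = \kappa \log n$ iterations, the exponent $(0.70)^j$ drives the guess count $O(n^{(0.70)^j})$ down to $O(1)$. A Python sketch, with the constants chosen purely for illustration:

```python
import math

def guess_exponent(j):
    """Exponent of n in the guess count of N_{j+1}: the simplified (0.70)^j."""
    return 0.70 ** j

def guess_count(n, j):
    """n raised to that exponent: the number of binary guesses after j iterations."""
    return n ** guess_exponent(j)

n, kappa = 2 ** 20, 2
j = round(kappa * math.log2(n))  # j = kappa * log n iterations
guess_count(n, j)                # close to 1, i.e., the O(1) of the consolidation
```

The exponent decays geometrically, so $n^{(0.70)^j} = 2^{(0.70)^j \log n}$ drops below any fixed constant once $j$ is a sufficiently large multiple of $\log n$.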

Appendix F

Solve the recurrence relation:
$$T(p) = \kappa \cdot \sqrt{p} \cdot T\!\left(\frac{p}{2}\right),$$
where $\kappa$ is a constant.

Appendix F.1. Solution

Assume $p = 2^n$. Then the recurrence becomes:
$$T(2^n) = \kappa \cdot \sqrt{2^n} \cdot T(2^{n-1}) = \kappa \cdot 2^{n/2} \cdot T(2^{n-1}).$$
Apply the recurrence repeatedly:
$$T(2^n) = \kappa \cdot 2^{n/2} \cdot \kappa \cdot 2^{(n-1)/2} \cdot T(2^{n-2}) = \kappa^{2} \cdot 2^{\,n/2 + (n-1)/2} \cdot T(2^{n-2}) = \cdots = \kappa^{n} \cdot \prod_{i=0}^{n-1} 2^{(n-i)/2} \cdot T(1).$$
Now simplify the exponent:
$$\sum_{i=0}^{n-1} \frac{n-i}{2} = \frac{1}{2} \sum_{j=1}^{n} j = \frac{1}{2} \cdot \frac{n(n+1)}{2} = \frac{n(n+1)}{4}.$$
Therefore:
$$T(2^n) = \kappa^{n} \cdot 2^{\,n(n+1)/4} \cdot T(1).$$
Substituting back $n = \log p$, we get:
$$T(p) = \kappa^{\log p} \cdot 2^{\,\frac{\log p\, (\log p + 1)}{4}} \cdot T(1).$$
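The closed form can be validated numerically against the recurrence itself. A Python sketch with the illustrative choices $\kappa = 2$ and $T(1) = 1$:

```python
import math

def T_rec(p, kappa=2.0, T1=1.0):
    """Evaluate T(p) = kappa * sqrt(p) * T(p/2) directly, for p a power of two."""
    if p <= 1:
        return T1
    return kappa * math.sqrt(p) * T_rec(p / 2, kappa, T1)

def T_closed(p, kappa=2.0, T1=1.0):
    """The closed form kappa^{log p} * 2^{log p (log p + 1)/4} * T(1)."""
    n = math.log2(p)
    return kappa ** n * 2 ** (n * (n + 1) / 4) * T1

# The two evaluations agree (up to floating-point error) on every power of two:
all(math.isclose(T_rec(2 ** n), T_closed(2 ** n)) for n in range(1, 12))  # → True
```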

Appendix F.2. Asymptotic Form

As p ,
$$T(p) = 2^{\,O\left((\log p)^2\right)} = p^{\,O(\log p)},$$
which implies that the growth rate is quasi-polynomial—faster than any polynomial, yet slower than exponential.

Notes

1. Any remaining errors or omissions in this article are solely the responsibility of the author. For further updates, please visit https://dijkstrascry.com/pnp (accessed on 3 September 2025).

References

  1. Daylight, E.G. Tableau with Holes: Clarifying NP-Completeness. Symmetry 2025, 17, 677. [Google Scholar] [CrossRef]
  2. Daylight, E. Injecting Observers into Computational Complexity. Philosophies 2025, 10, 76. [Google Scholar] [CrossRef]
  3. Dean, W. Computational Complexity Theory. The Stanford Encyclopedia of Philosophy. 2016. Available online: https://plato.stanford.edu/archives/fall2021/entries/computational-complexity (accessed on 1 August 2025).
  4. Fortnow, L. Why Can’t We Break Cryptography. 2025. Available online: https://blog.computationalcomplexity.org/2025/04/why-cant-we-break-crytptography.html (accessed on 1 July 2025).
  5. Li, M.; Vitanyi, P. An Introduction to Kolmogorov Complexity and Its Applications, 4th ed.; Texts in Computer Science; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
  6. Daylight, E.; Koolen, W.; Vitányi, P. Time-Bounded Incompressibility of Compressible Strings and Sequences. Inf. Process. Lett. 2009, 109, 1055–1059. [Google Scholar] [CrossRef]
  7. Cook, S. The Complexity of Theorem-Proving Procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing. Association for Computing Machinery, Shaker Heights, OH, USA, 3–5 May 1971; pp. 151–158. [Google Scholar]
  8. Levin, L. Universal Sorting Problems. Probl. Inf. Transm. 1973, 9, 265–266. [Google Scholar]
  9. Fortnow, L.; Homer, S. A Short History of Computational Complexity. Bull. Eur. Assoc. Theor. Comput. Sci. 2003, 80, 95–133. [Google Scholar]
  10. Sipser, M. Introduction to the Theory of Computation; Thomson Course Technology: Mason, OH, USA, 2006. [Google Scholar]
  11. Papadimitriou, C. Computational Complexity; Addison Wesley Longman: Upper Saddle River, NJ, USA, 1994. [Google Scholar]
  12. Hopcroft, J.; Motwani, R.; Ullman, J. Introduction to Automata Theory, Languages, and Computation; Addison Wesley/Pearson Education: Boston, MA, USA, 2007. [Google Scholar]
  13. Aaronson, S. Quantum Computing Since Democritus; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  14. Dean, W. Algorithms and the Mathematical Foundations of Computer Science. In Gödel’s Disjunction, 1st ed.; Horsten, L., Welch, P., Eds.; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
  15. Tall, D. How Humans Learn to Think Mathematically: Exploring the Three Worlds of Mathematics; Cambridge University Press: Cambridge, UK, 2013. [Google Scholar]
  16. Turner, R. Computational Abstraction. Entropy 2021, 23, 213. [Google Scholar] [CrossRef] [PubMed]
  17. Linnebo, Ø.; Shapiro, S. Actual and Potential Infinity. Noûs 2019, 53, 160–191. [Google Scholar] [CrossRef]
  18. Fortnow, L. Can You Feel the Machine? 2024. Available online: https://blog.computationalcomplexity.org/2024/03/can-you-feel-machine.html (accessed on 22 August 2024).
  19. Hill, R.K. The Imperativity of Algorithms. 2023. Available online: https://cacm.acm.org/blogcacm/the-imperativity-of-algorithms/ (accessed on 22 August 2024).
  20. Grädel, E. Complexity Theory: WS 2009/10; Mathematische Grundlagen der Informatik: Aachen, Germany, 2009. [Google Scholar]
  21. Dowling, W.; Gallier, J. Linear-Time Algorithms for Testing the Satisfiability of Propositional Horn Formulae. J. Log. Program. 1984, 1, 267–284. [Google Scholar] [CrossRef]
  22. Daylight, E. Dijkstra’s Rallying Cry for Generalization: The Advent of the Recursive Procedure, late 1950s–early 1960s. Comput. J. 2011, 54, 1756–1772. [Google Scholar] [CrossRef]
  23. Shapiro, S. Acceptable notation. Notre Dame J. Form. Log. 1982, 23, 14–20. [Google Scholar] [CrossRef]
  24. Daylight, E.; Schuettpelz, E. The Turing Machine as a Boundary Object: Sorting Out American Science and European Engineering. 2022. Available online: https://www.youtube.com/watch?v=Fssz-LbRcTI (accessed on 1 August 2025).
  25. Grattan-Guinness, I. The Development of the Foundations of Mathematical Analysis from Euler to Riemann; MIT Press: Cambridge, MA, USA, 1970. [Google Scholar]
Figure 1. Illustration of (a) two top-down constraints and one bottom-up constraint, and (b) one correlated coin-tossing constraint. The large rectangle represents the entire tableau, while the dark shaded smaller rectangle indicates the coin-tossing section of the tableau.
Figure 2. A tableau: an $n^k \times (n^k + 2)$ matrix. All cells in the leftmost column contain the boundary marker ⊢. Likewise for the rightmost column and the marker ⊣.
Figure 3. Illustrating two $2 \times 3$ windows. The effect of instruction $t_{ab}$ is shown on the left, while the effect of instruction $t_{ac}$ is displayed on the right, with both illustrations read from top to bottom. Column indices range from $j-1$ to $j+1$. Each vertical arrow represents a change in precisely one symbol. Notably, $a_{q_1}$ is considered a single symbol, not two separate symbols.
Figure 4. The effect of instruction $t_{ab}$ is shown on the left, while the effect of $t_{ac}$ is displayed on the right. In both illustrations, the rows are arranged sequentially from the top row, indexed as $3l-2$, to the bottom row, indexed as $3l+1$. Each vertical arrow represents a change in precisely one symbol. If label $t_{ab}$ is stored in $cell[3l,\, j]$ of the tableau—with row index $3l$ and column index $j$, where $l$ and $j$ are natural numbers—then this is denoted with propositional variable $x_{3l,\, j,\, t_{ab}}$. Source: ([2], p. 15).
Figure 5. A conversion from symbol a (marked in the top row) to b (marked in the bottom row). Row indices range from $3l-2$ to $3l+4$. Column indices from 2 to 5. Source: ([2], p. 17).
Figure 6. Here, the proposition $a_{q_5}@(i, j)$ holds, with $i = 3l-2$. In words: $cell[i, j]$ in the extended tableau contains the tape-state symbol $a_{q_5}$.
Figure 7. Here, the proposition $a_{q_5}@(i', j)^B$ holds. This means that the symbol $a_{q_5}$ appears at position $(i', j)$ in the basic tableau (shown above), with $i' = \frac{i + 2}{3} = \frac{(3l-2) + 2}{3} = l$.
Figure 8. Filling cells (in the extended tableau) with additional crosses as a logical consequence of the state of affairs depicted in Figure 6 and $\psi_{extra}^{1}$.
Figure 9. Basic tableau: crossing out 113 cells out of 16 × 16 = 256 cells.
Figure 10. Basic tableau: a second intervention in row 4 amounts to crossing out 41 boldfaced cells out of 63 cells in rows 1–7. The depicted scenario is ultimately unsatisfiable, as the downward movement from $b_{q_9}$ and the upward movement from $a_{q_5}$ will fail to converge harmoniously in the same cell.
Figure 11. The coin-tossing section of the basic tableau, also known as the basic mini tableau [1]: tossing four coins from left to right, resulting in the truth values for variables $x_4$, $x_3$, $x_2$, and $x_1$. The row $\hat{r}$ corresponds to the state attained after all $l = 4$ coins have been tossed and when the rightmost column $\hat{c} = 6$ is reached. In this case, $l$ is even. Consequently, $\hat{r}$ is odd and belongs to the set $\{5, 7, 9, 11, 13\} = \{l+1,\, l+3,\, \ldots,\, 3l+1\}$.
Figure 12. The basic mini tableau: tossing $l = 3$ coins from left to right, resulting in the truth values for $x_3$, $x_2$, and $x_1$. The row $\hat{r}$ corresponds to the state attained after all coins have been tossed and when the rightmost column $\hat{c} = 5$ is reached. In this case, $l$ is odd. Consequently, $\hat{r}$ is even and belongs to the set $\{4, 6, 8, 10\} = \{l+1,\, l+3,\, \ldots,\, 3l+1\}$.
Figure 13. Illustrating $(x_2 = 1)@[\beta_2]$ on the left and $(x_1 = 0)@[\beta_1]$ on the right.
Figure 14. Two distinct computations in the making: one on the left and the other on the right.
Figure 15. Two distinct computations taking shape, with only the left one remaining potentially satisfiable.