Article

What Is a Pattern in Statistical Mechanics? Formalizing Structure and Patterns in One-Dimensional Spin Lattice Models with Computational Mechanics

Physics Department, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA
Entropy 2026, 28(1), 123; https://doi.org/10.3390/e28010123
Submission received: 6 November 2025 / Revised: 12 January 2026 / Accepted: 14 January 2026 / Published: 20 January 2026
(This article belongs to the Special Issue Ising Model—100 Years Old and Still Attractive)

Abstract

This work formalizes the notions of structure and pattern for three distinct one-dimensional spin-lattice models (finite-range Ising, solid-on-solid, and three-body), using information- and computation-theoretic methods. We begin by presenting a novel derivation of the Boltzmann distribution for finite one-dimensional spin configurations embedded in infinite ones. We next recast this distribution as a stochastic process, thereby enabling us to analyze each spin-lattice model within the theory of computational mechanics. In this framework, the process’s structure is quantified by excess entropy E (predictable information) and statistical complexity C_μ (stored information), and the process’s structure-generating mechanism is specified by its ϵ-machine. To assess compatibility with statistical mechanics, we compare the configurations jointly determined by the information measures and ϵ-machines to typical configurations drawn from the Boltzmann distribution, and we find agreement. We also include a self-contained primer on computational mechanics and provide code implementing the information measures and spin-model distributions.

1. Introduction

When observing a natural system, we intuitively explain it by describing the way its components are arranged. We might say that the system displays order or randomness. We might describe systems that exhibit a blending of order and randomness as complex or structured (This paper uses two notions of structure. One refers to a system’s general type of arrangement, which we call generic structure. The other captures a more specific type of arrangement—one that exhibits patterns—which we call intrinsic structure. Throughout the paper, the intended notion will be clear from context.) [1]. Moreover, we might also regard as structured those ordered systems that have no randomness but exhibit a repetition of more than one component (a period greater than 1) [2]. Altogether, we might regard a structured system simply as one that exhibits patterns [3].
In light of this depiction, a physicist may feel compelled to bring clarity and definiteness to the notions of randomness, structure, and pattern by formalizing them. Although statistical mechanics readily concretizes randomness through measures like entropy [4,5], it falls short when quantifying structure and pattern and formalizing their supporting mechanisms. For instance, while magnetization is commonly treated as an indicator of structure, materials with distinct magnetic behaviors, such as paramagnets and antiferromagnets, have the same magnetization in the absence of a magnetic field: zero [2].
Furthermore, in statistical mechanics, “pattern” is not typically defined explicitly; instead, the criteria that one might regard as proxies for pattern depend on a choice of representation. This choice can enter through the observable taken as relevant, the scale at which structure is probed, or the coarse-graining scheme used to obtain a macroscopic description [6]. For the observable, one may diagnose order using magnetization or staggered magnetization [7]. For the scale, one may use correlations or structure factors evaluated at a chosen length or wavenumber [8]. For coarse-graining, one may formalize large-scale organization through a specific RG blocking or decimation prescription [9].
Faced with these limitations, the physicist may make their endeavor more concrete by posing two key questions:
  • What is a simple system in statistical mechanics that manifests structure and patterns?
  • How could one extend statistical mechanics to formalize structure and patterns within such a system?
One-dimensional (1D) spin lattice models [10] (p. 67) are suitable candidates for addressing these challenges, as they compactly represent interacting magnets as spins in an evenly spaced grid, embodying both simplicity [11] and structure/patterns [12]. The simplicity stems from the spins taking discrete values (often binary) and the spin models being amenable to both analytical and numerical treatment [13,14]. The structure and patterns are evident in the model’s possible spin configurations, which exhibit regularity, randomness, and structure. For example, the 1D nearest-neighbor Ising model may have configurations rich in regularity, randomness, and structure, such as a uniformly aligned configuration, an aperiodic mix of up and down spins, and a periodic arrangement of spin blocks, respectively. These configurations contain repeating sequences of spins that we refer to as configuration patterns.
Mathematically, a spin model is expressed as a Hamiltonian that characterizes the energy of the spin system [10] (p. 67). Given the Hamiltonian, the usual goal is to determine the partition function and from it compute various properties of interest [15]. Among these, the Boltzmann distribution as a function of spin configurations is the least frequently computed (When the Boltzmann distribution is calculated, it is typically expressed as a function of energy [16,17] or other macroscopic properties [18,19,20], rather than directly in terms of configurations of fixed length), yet it stands out as the sole one directly addressing spin configurations, serving as a window for analyzing their structure and patterns. However, to clearly see through this window, we need to carefully consider how the distribution is formalized.
Typically, the Boltzmann distribution is defined so that each configuration, either implicitly or explicitly, represents an event of a single random variable, as indicated in Refs. [21] (p. 552) and [22]. Nonetheless, this approach is not conducive to examining how individual spins make up spin configurations. Instead, we can regard them as realizations of a partially ordered chain of random variables—a stochastic process [2].
In this process, which we call the spin process, each spin corresponds to an event of a single random variable. Given this perspective, we can now quantify the randomness, regularity, and structure of the spin process, and formalize the mechanism that generates its structure. Since randomness, regularity, and structure are ways in which a process elicits surprise, we quantify them as information—a measure of “quantifiable surprise” [23] (p. 64) or a “difference that makes a difference” [24].
In information theory, the theory of quantifiable surprise, a stochastic process’s intrinsic randomness or average randomness per symbol is quantified by its Shannon entropy rate h_μ (Ref. [25], pp. 74–76). The process’s regularity, as the counterpart of its randomness, can be understood as the total correlation within the process. Thus, regularity is quantified as the amount of information that is shared within the process—that is, the process’s mutual information or excess entropy E [26,27,28,29].
Because a stochastic process’s structure is effectively captured by its patterns, we quantify the process’s structure by measuring the amount of information stored in those patterns. This quantity is known as the stored information, or statistical complexity C_μ [30,31], and is defined as the Shannon entropy of those patterns. Calculating C_μ, therefore, requires identifying these patterns—an inference task that effectively uncovers the process’s underlying structure-generating mechanism. We define these patterns next.
Since patterns are sought for their predictive utility, we define a pattern in the spin process setup from a prediction-based viewpoint. To do so, we split (Without loss of generality) each realization of the spin process into a left half (past) and a right half (future). Then, we define a pattern as the set of pasts that lead to the same futures (It should be highlighted that for 1D spin lattice models, the conventional time index is taken to be site location index and there is no time dependence.) [32]. By “lead to”, we mean that the conditional distribution over futures, when conditioned on any past in the set, is identical across all those pasts. This condition is known as the causal equivalence principle (This principle formalizes the implicit definition of a state commonly used in theoretical computer science when constructing machines. In this context, a state represents the information that must be retained to predict the system’s future behavior (see Appendix A).) [31,32,33], which recasts these patterns as causal states. Why the term “state”? Because this conception of pattern is consistent with the theory of computation’s definition of state as a system’s entity that “remembers a relevant portion of the system’s history” [34] (pp. 2–3). This connection points us toward the mechanism that underpins the process’s structure.
Given that a system’s structure is measured in units of information, formalizing its supporting mechanism is tantamount to unraveling how the system processes and stores information—essentially, how it computes [35]. This leads to a refined question: what is the minimal (To avoid accounting for computation not inherent to our system) abstract machine (In the 21st century, “computation” often evokes laptops, which perform useful computation—that is, computation carried out for some external task. In contrast, we focus on intrinsic computation, the computation a system performs by itself. To analyze this, we use abstract machines [34]—mathematical models that consist of states and transitions and laid the groundwork for the theory of computation.) that performs the computation inherent to the spin process? Leveraging concepts from the theory of computation (TOC), computational mechanics provides a compelling response: the set of causal states and their transitions, that is, an ϵ-machine or Probabilistic Deterministic Finite State Machine (PDFM). Here, “probabilistic” means that state transitions include probabilities, while “deterministic” implies that when we have knowledge of a state and its associated outgoing symbol, we have complete certainty about the next state we will transition to. Several methods have been developed for inferring ϵ-machines [36,37,38,39,40,41,42]. Among these, Feldman and Crutchfield’s approach stands out as the only one that is both analytical and applicable to statistical mechanics [2].
In particular, Feldman and Crutchfield used this method to examine the structure of the nearest-neighbor and next-nearest-neighbor Ising models. Subsequent research further developed their information-theoretic analysis of spin systems by calculating h_μ and E for the two-dimensional nearest-neighbor Ising model [43], as well as decomposing the nn Ising model’s Shannon entropy rate into more refined information components [44]. Moreover, quantum ϵ-machine formulations revealed striking memory advantages—ranging from extreme compression when simulating long-range Ising spin chains [45] to clarifying how simplicity differs in quantum versus classical descriptions [46]. Now, the aim of this paper is to develop information measures and ϵ-machines for three varied one-dimensional spin-lattice models—finite-range Ising, solid-on-solid, and three-body—and to assess the consistency of these results with statistical mechanics.
These developments are timely because they broaden the rapidly evolving landscape of abstract machines used to analyze computation in physical processes in two key ways. First, they encourage the application of abstract machines—which have most often been used to study thermodynamic [47,48,49,50] and quantum [51,52,53,54] processes—to statistical mechanical processes, potentially supporting more efficient information processing in materials. Second, these developments foster the use of abstract machines that are systematically inferred from data, rather than being designed in an ad hoc manner, as has more typically been the case.
To achieve the aim of this paper, we provide a pedagogical explanation of the application of computational mechanics to the nn and nnn Ising models, along with the necessary background from statistical mechanics, measure theory, stochastic processes, and information theory. We then apply these techniques to a wider range of spin models, such as finite-range Ising models, solid-on-solid models, and three-body models. In parallel, we find that the typical patterns observed in these spin models at various parameter values match those predicted by information measures and ϵ -machines. This allows us to present an account of spin patterns that is clearly consistent with statistical mechanics and information/computation theory.

2. Background and Methods

This section provides an intuition-first, pedagogical introduction to the concepts and methods required for our information- and computation-theoretic analysis of 1D spin lattice models. While the underlying machinery is standard in the literature [2,33,35], we restrict attention to the ingredients strictly necessary for our purposes, motivate each formal object intuitively, and ground each one in statistical mechanics. We provide detailed explanations of what each mathematical component means and what role it plays, instead of developing an abstract-first treatment aimed at greater formal generality.

2.1. Spin Measurements: Boltzmann Distribution of Finite Chain Embedded in Infinite Chain

The Boltzmann distribution serves as an entry point for probing the structure of spin models; however, defining it for both finite and infinite configurations introduces significant difficulties. For finite configurations, the Boltzmann distribution lacks generality and often relies on numerical simulations for approximation [55,56,57]. For infinite configurations, a different issue arises: their probability is zero [58] (pp. 94–97). This defies our expectation that they occur and results in an unnormalized total probability—a sum that is zero rather than one. To balance the constructiveness of finite configurations with the generality of infinite ones, we examine a hybrid configuration: a finite spin configuration embedded in an infinite one [59]. Figure 1 illustrates the finite configuration embedded within the infinite one. The key equations leading to the embedded distribution are presented below, with detailed derivations provided in Appendix F, Appendix G, Appendix H and Appendix I.
Consider a configuration consisting of N spins, where each spin can take one of two values (↑ or ↓) and interacts only with its nearest neighbors. For convenience, the configuration is subject to periodic boundary conditions:
s_0 s_1 \cdots s_{N-1} \quad \text{where} \quad s_0 = s_N.
The system is governed by a translationally-invariant Hamiltonian, that is, a Hamiltonian whose form remains the same across spin sites. It is defined as follows:
E(s_i, s_{i+1}) = -J s_i s_{i+1} - \frac{B}{2} (s_i + s_{i+1}).
Next, the corresponding transfer matrix, with components V(s_i, s_{i+1}) = e^{-\beta E(s_i, s_{i+1})}, is expressed as [10] (p. 68):
V = \begin{pmatrix} e^{-\beta E(\uparrow,\uparrow)} & e^{-\beta E(\uparrow,\downarrow)} \\ e^{-\beta E(\downarrow,\uparrow)} & e^{-\beta E(\downarrow,\downarrow)} \end{pmatrix}
Then, the probability distribution for this spin configuration in the thermodynamic limit N is obtained in terms of the transfer matrix components and the transfer matrix’s principal eigenvalue λ [10] (pp. 68–69):
\Pr(s_0, \ldots, s_{N-1}) = \frac{\prod_{i=0}^{N-1} V(s_i, s_{i+1})}{\lambda^N}
Now, consider a specific finite configuration of length L embedded in an infinite one:
s = s_0 s_1 \cdots s_{L-1} \quad \text{where} \quad L < N
Although the principal eigenvectors of the transfer matrix are seldom calculated in studies of spin models, they play a crucial role in defining the embedded distribution. Therefore, we obtain the normalized principal left and right eigenvectors of the transfer matrix, as provided in Ref. [10] (pp. 72–73). For conciseness, these are expressed in terms of the magnetization m, as shown below:
u^L = \begin{pmatrix} \sqrt{\frac{1+m}{2}} & \sqrt{\frac{1-m}{2}} \end{pmatrix} \quad \text{and} \quad u^R = \begin{pmatrix} \sqrt{\frac{1+m}{2}} \\ \sqrt{\frac{1-m}{2}} \end{pmatrix}
Note that for the nn Ising model, the left and right eigenvectors are identical. Hence, in all subsequent subsections of this section, we omit the left and right superscripts.
Lastly, the probability distribution for the embedded configuration [59] is given by the following:
\Pr(s) = \frac{u^L_{s_0} \, u^R_{s_{L-1}} \prod_{i=0}^{L-2} V(s_i, s_{i+1})}{\lambda^{L-1}}
Here, we provide the physical interpretation for each part of the equation:
  • In the denominator, λ is raised to the power L−1, since each embedded configuration has L spins and its boundaries are not periodic.
  • In the numerator, the product of transfer matrix components consists of L−1 factors. This reflects the fact that only the spins within the bulk have neighboring spins to interact with on both their left and right sides.
  • Also in the numerator, we include two extra terms: u^L_{s_0} and u^R_{s_{L-1}}, which are the normalized principal eigenvector components associated with the boundary spins s_0 and s_{L−1}. Since the embedded configuration does not have periodic boundaries, these extra terms ensure that the boundary spins contribute to the system’s magnetization as much as the bulk spins. Moreover, these terms are key to normalizing the joint probabilities.
To facilitate later discussion, it will be useful to denote the components associated with spins ↑ and ↓ as u_↑ = \sqrt{(1+m)/2} and u_↓ = \sqrt{(1−m)/2}, respectively. These are the values taken by u^L_{s_0} or u^R_{s_{L−1}}, depending on whether the boundary spins s_0 and s_{L−1} point up or down. For example, in a spin configuration like ↓↑↑↑, the component for the first spin s_0 is u^L_{s_0} = u_↓ = \sqrt{(1−m)/2}, while the component for the last spin s_3 is u^R_{s_3} = u_↑ = \sqrt{(1+m)/2}.
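As a concreteness check, the embedded distribution can be evaluated numerically. The sketch below is a minimal illustration, not the code released with this paper; the parameter values (J = 1.0, B = 0.3, β = 0.7) and the helper name `pr_embedded` are our own choices. It builds the nn Ising transfer matrix, extracts its principal eigenvalue and eigenvector, and verifies that the probabilities of all length-L embedded configurations sum to one:

```python
import numpy as np
from itertools import product

# Illustrative (not paper-specified) nn Ising parameters.
J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]                      # index 0 = spin up, index 1 = spin down

# Transfer matrix V(s, s') = exp(-beta * E(s, s')),
# with E(s, s') = -J s s' - (B/2)(s + s').
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])

# V is symmetric, so the left and right principal eigenvectors coincide.
evals, evecs = np.linalg.eigh(V)     # eigenvalues in ascending order
lam = evals[-1]                      # principal eigenvalue
u = np.abs(evecs[:, -1])             # principal eigenvector; u @ u == 1

def pr_embedded(config):
    """Pr(s_0 ... s_{L-1}) of a finite configuration embedded in an infinite
    chain: u_{s_0} u_{s_{L-1}} prod_i V(s_i, s_{i+1}) / lam**(L-1)."""
    w = u[config[0]] * u[config[-1]]
    for i in range(len(config) - 1):
        w *= V[config[i], config[i + 1]]
    return w / lam ** (len(config) - 1)

L = 4
total = sum(pr_embedded(c) for c in product([0, 1], repeat=L))
print(f"sum over all 2^{L} embedded configurations: {total:.10f}")  # -> 1.0000000000
```

The normalization works out exactly because summing the numerator over all configurations collapses to u^T V^{L−1} u = λ^{L−1} (u · u) = λ^{L−1}, which is why the eigenvector boundary terms are essential.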
Alternatively, Equation (7) can be interpreted as the probability measure of a coarse-grained configuration. The nature of this coarse-graining and its implementation, which relies on measure theory, will be discussed in the following section.

2.2. Coarse-Graining via Measure Theory

In this section, we view finite configurations embedded in infinite ones as coarse-grained versions of infinite-spin configurations. Here, “coarse-grained” means a simplified representation that retains essential features while reducing detail [60]. The procedure for arriving at these representations—that is, coarse-graining—is up to the scientist’s discretion [61]. However, when treating the spin model as a stochastic process, the conventional approach is to reduce the degrees of freedom such that only contiguous ones remain [62,63,64]. This coarse-graining is physically motivated by the observer’s inability to record infinite measurements or degrees of freedom. To define the set of coarse-grained configurations mathematically, we begin with the full set of possible configurations.
Consider the set of all possible infinite spin configurations Ω. An individual configuration in this set is represented as σ ∈ Ω. The degree of freedom at a lattice site i within a configuration σ is denoted by σ_i. Thus, a configuration in terms of its degrees of freedom is given by the following:
\sigma = \sigma_0 \sigma_1 \cdots \sigma_{N-1}
with
\sigma_0 = \sigma_N \quad \text{and} \quad N \to \infty
The set of coarse-grained configurations Ω_C is defined as the set of infinite configurations in which the contiguous spins from σ_0 to σ_{L−1} have fixed indices and can take any value from {−1, 1}.
This can be expressed as follows:
\Omega_C = \{\, \sigma \in \Omega \mid \sigma_0, \ldots, \sigma_{L-1} \text{ have fixed indices} \,\}.
Alternatively, the set of coarse-grained configurations can be defined as follows:
\Omega_C = \{ C_1, C_2, \ldots \}
with each coarse-grained configuration C j defined as follows:
C_j = \{\, \sigma \in \Omega \mid \sigma_0 = s_0, \ldots, \sigma_{L-1} = s_{L-1} \,\}
where s_0, \ldots, s_{L-1} represent the fixed spin values at fixed indices 0, \ldots, L-1. In more compact notation, this is written as follows:
C_j = \{\, \sigma \in \Omega \mid \sigma^L = s^L \,\}
Notably, the act of coarse-graining changes our focus from individual configurations to sets, where each set C_j groups configurations by their shared spin values. Figure 2 shows how the set of all possible infinite spin configurations Ω is partitioned into the set of coarse-grained configurations Ω_C. Accordingly, we must adapt our notion of probability to align with this perspective, transitioning from the concept of a probability distribution to that of a probability measure, as denoted by μ [65] (pp. 331–336).
To formalize this, we introduce the concept of a sigma algebra, denoted by A . This is a collection of all subsets of Ω C that can be consistently assigned probabilities or measured, meaning they are physically relevant.
The sigma algebra A has three key properties:
1.
Entire Set Containment: A includes the sample space. In this case, that is the coarse-grained set of all infinite configurations Ω_C:
\Omega_C \in \mathcal{A}
2.
Complement Closure: If a set A is in \mathcal{A}, then its complement \Omega_C \setminus A must also be in \mathcal{A}:
A \in \mathcal{A} \;\Longrightarrow\; \Omega_C \setminus A \in \mathcal{A}
3.
Countable Union Closure: If A_1, A_2, A_3, \ldots are in \mathcal{A}, then their countable union is also in \mathcal{A}:
A_1, A_2, \ldots \in \mathcal{A} \;\Longrightarrow\; \bigcup_{i=1}^{\infty} A_i \in \mathcal{A}
With the concept of a sigma algebra established, we can now turn to the probability measure. This measure is analogous to a probability distribution, but applies to sets rather than individual outcomes. It extends the key constructive properties of probability distributions—namely, nonnegativity, normalization, and additivity—from finite to infinite configurations.
The probability measure is formalized as a function
\mu : \mathcal{A} \to [0, 1],
which assigns a probability to each event in A and satisfies the following three key properties:
1.
Nonnegativity: In the same way that joint probabilities for finite configurations are never negative, the probability measure assigned to any set in A must also be nonnegative.
\mu(A) \geq 0 \quad \text{for every } A \in \mathcal{A}.
2.
Normalization: Similar to the sum of joint probabilities for all configurations equaling 1, the probability measure for the entire sample space, the set of coarse-grained configurations Ω C , must be 1.
\mu(\Omega_C) = 1
3.
Countable additivity: Mirroring the additivity of joint probabilities, which asserts that the total probability of finite configurations equals the sum of their individual probabilities, probability measures demonstrate countable additivity. This property dictates that for any countable collection of non-overlapping sets (cylinder sets) \{A_i\}_{i=1}^{\infty}, the probability of their union is the sum of the probabilities of the individual sets:
\mu\left( \bigcup_{i=1}^{\infty} A_i \right) = \sum_{i=1}^{\infty} \mu(A_i),
where each A i is a cylinder set corresponding to a coarse-grained configuration, and the union represents the combined event of these configurations.
The last step in constructing the spin probability measure involves assigning each spin cylinder set’s probability measure the value of its associated embedded configuration’s probability. Notably, information measures in later sections are denoted with a μ subscript, indicating that their argument is a probability measure [2].
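This consistency between cylinder-set measures and embedded-configuration probabilities can be probed numerically: the measure of a length-L cylinder must equal the sum of the measures of its disjoint length-(L+1) refinements. Below is a minimal sketch under illustrative assumptions (the parameter values and the helper name `mu` are ours, not from this paper):

```python
import numpy as np

# Illustrative nn Ising parameters; index 0 = spin up, 1 = spin down.
J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)     # V symmetric: left = right eigenvectors
lam, u = evals[-1], np.abs(evecs[:, -1])

def mu(config):
    """Measure of the cylinder set fixing spins config = (s_0, ..., s_{L-1})."""
    w = u[config[0]] * u[config[-1]]
    for i in range(len(config) - 1):
        w *= V[config[i], config[i + 1]]
    return w / lam ** (len(config) - 1)

# mu(C) equals the sum of mu over the two disjoint one-spin refinements of C,
# because sum_t V(s, t) u_t = lam * u_s (u is the principal right eigenvector).
base = (0, 1, 1)
refinements = sum(mu(base + (t,)) for t in (0, 1))
print(abs(mu(base) - refinements) < 1e-12)   # additivity on cylinders -> True
```

This is the finite-dimensional face of countable additivity: each coarse-grained cylinder is the disjoint union of its refinements, and the assigned measures respect that union exactly.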

2.3. System and Measurements: Stochastic Processes

As mentioned in the introduction, we interpret configurations as realizations of a stochastic process. This section aims to delve further into this formalism by first explaining the reasons for departing from the conventional approach.
Traditionally, a spin configuration is represented as an event s of a random variable S. For example, a configuration with all spins pointing up is depicted as follows:
s = \cdots \uparrow \uparrow \uparrow \cdots
However, this formalism impedes a direct examination of individual spins and their interactions. Furthermore, it leads to an unwieldy number of possible events. To address these issues, we adopt a more nuanced approach. Instead of representing a configuration as a single event, we depict it as a specific realization of events:
\overleftrightarrow{s} = \cdots s_{-1} s_0 s_1 \cdots
This realization is an instance of a stochastic process, i.e., a partially-ordered chain of random variables:
\overleftrightarrow{S} = \cdots S_{-1} S_0 S_1 \cdots
whose associated probability distribution is given by the following:
\Pr\left( \cdots S_{-1} S_0 S_1 \cdots \right).
Within this framework, the all-ups spin configuration is now denoted as follows:
\overleftrightarrow{s} = \cdots \uparrow \uparrow \uparrow \cdots
Without loss of generality, we can split a process into two parts: the past process, defined as follows:
\overleftarrow{S} = \cdots S_{-1}.
along with its associated past realization, and the future process, defined as follows:
\overrightarrow{S} = S_0 S_1 \cdots
along with its associated future realization.
For simplicity, we will use the terms “past” and “future” to refer to both processes and their associated realizations, with the specific meaning inferred from the context.
The spin stochastic process will be our object of study. In the following subsection, we will elaborate on how it relates to broader categories of processes, as seen in Ref. [66].

2.3.1. Types of Processes

Stationary Process
A process in which the statistical properties of its random variables remain invariant over time. These properties include, but are not limited to, the mean, variance, and joint distributions.
Strictly Stationary Process
A process whose joint distribution remains invariant under shifts in time. In other words, a process whose random variables are time-translation invariant. That is, a process that satisfies the following:
\Pr\left( S_t S_{t+1} \cdots S_{t+L-1} \right) = \Pr\left( S_0 S_1 \cdots S_{L-1} \right).
Markovian Process
A process in which the probability distribution of the next random variable depends only on the preceding one. That is, a process whose joint distribution factors as follows:
\Pr(\overleftrightarrow{S}) = \cdots \Pr\left( S_i \mid S_{i-1} \right) \Pr\left( S_{i+1} \mid S_i \right) \cdots
R-Order Markovian Process
A process in which the probability distribution of the next random variable depends only on the R preceding ones. That is, a process whose joint distribution is given as follows:
\Pr(\overleftrightarrow{S}) = \prod_i \Pr\left( S_i \mid S_{i-R}, \ldots, S_{i-1} \right)
Spin Process
A process whose associated probability distribution is generated by a spin Hamiltonian model. For the models considered in this work (finite-range Ising, solid-on-solid, and three-body models), this process is strictly stationary and Markovian or R-order Markovian.
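For the nn Ising model, the Markov and stationarity claims can be checked directly: the embedded distribution should factor into a site-independent marginal times site-independent transition probabilities. A minimal sketch under the same illustrative assumptions as before (parameter values and helper names are ours):

```python
import numpy as np

J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]                      # index 0 = up, 1 = down
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)
lam, u = evals[-1], np.abs(evecs[:, -1])

def pr_embedded(config):
    """Embedded Boltzmann probability of a finite spin configuration."""
    w = u[config[0]] * u[config[-1]]
    for i in range(len(config) - 1):
        w *= V[config[i], config[i + 1]]
    return w / lam ** (len(config) - 1)

p0 = u ** 2                                  # stationary single-spin marginal
T = V * u[None, :] / (lam * u[:, None])      # T[s, t] = Pr(S_{i+1} = t | S_i = s)

config = (0, 1, 1, 0, 1)
markov = p0[config[0]]
for i in range(len(config) - 1):
    markov *= T[config[i], config[i + 1]]

print(np.isclose(T.sum(axis=1), 1.0).all())      # stochastic rows -> True
print(np.isclose(markov, pr_embedded(config)))   # Markov factorization -> True
```

The factorization holds because the eigenvector boundary terms telescope: Pr(s_0) ∏ T(s_i → s_{i+1}) reproduces u_{s_0} u_{s_{L−1}} ∏ V / λ^{L−1} term by term.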
We can now define information measures of randomness, regularity, and structure for a stochastic process, starting from the basics of information theory.

2.4. Information Measures

What is information? Information can be conceived as quantifiable surprise, defined in terms of probabilities [23] (p. 64). Through this lens, an event s that is not likely to occur is deemed surprising, thus carrying high informational content. This means that the information of an event H(s) is inversely proportional to its probability, that is, H(s) \propto \frac{1}{p(s)}. More specifically, the event’s information content—termed self-information [23] (p. 64)—is defined as follows:
H(s) = -\log_2 p(s).
Here, the presence of the logarithm is a convenient guarantee that the self-information possesses the additive property [67]. That is, the total surprise from combining events 1 and 2 equals the sum of their individual surprises.
The natural next step is to consider a random variable S. Its information content is known as Shannon entropy. It is defined as the weighted sum of the self-information of each possible event within the variable. Mathematically, it is expressed as follows:
H(S) = -\sum_{s = \pm 1} p(s) \log_2 p(s)
Following this line of reasoning, we can define the conditional entropy (Ref. [67]; Ref. [25], p. 17) as the amount of information needed to specify a random variable S 1 given that a random variable S 0 is known.
H(S_1 \mid S_0) = -\sum_{s_0, s_1 = \pm 1} \Pr(s_0, s_1) \log_2 \Pr(s_1 \mid s_0)
Moreover, we can define the joint entropy (Ref. [67]; Ref. [25], pp. 16–17) as the amount of information contained in two random variables.
H(S_0, S_1) = -\sum_{s_0, s_1 = \pm 1} \Pr(s_0, s_1) \log_2 \Pr(s_0, s_1)
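These definitions obey the chain rule H(S_0, S_1) = H(S_0) + H(S_1 | S_0), which can be verified on the nearest-neighbor pair distribution Pr(s_0, s_1) = u_{s_0} V(s_0, s_1) u_{s_1} / λ. The sketch below uses illustrative parameter values of our own choosing, not values from this paper:

```python
import numpy as np

J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)
lam, u = evals[-1], np.abs(evecs[:, -1])

P2 = u[:, None] * V * u[None, :] / lam       # joint Pr(s_0, s_1); sums to 1
p = P2.sum(axis=1)                           # marginal Pr(s_0); equals u ** 2

H_joint = -np.sum(P2 * np.log2(P2))                 # H(S_0, S_1)
H_first = -np.sum(p * np.log2(p))                   # H(S_0)
H_cond = -np.sum(P2 * np.log2(P2 / p[:, None]))     # H(S_1 | S_0)

print(np.isclose(H_joint, H_first + H_cond))        # chain rule -> True
```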
Now, how may we define the entropy of our object of interest, that is, the stochastic process? The simplest answer would be to consider the growth entropy [68], that is, the Shannon entropy of the entire process.
H(S^L) = -\sum_{s_0 = \pm 1} \cdots \sum_{s_{L-1} = \pm 1} \Pr(s^L) \log_2 \Pr(s^L)
However, as the length of the process increases, the growth entropy also rises and ultimately diverges when the process extends towards infinity ( L ). This raises the question: how can we capture the total information of a stochastic process? A solution lies in the Shannon entropy rate (Ref. [68]; Ref. [25], pp. 74–76) defined as follows:
h_\mu = \lim_{L \to \infty} \frac{H(S^L)}{L}
Again, the symbol μ signifies that the Shannon entropy rate is calculated in terms of a probability measure. Notably, this rate can be simplified for stationary, Markovian processes, such as the spin process. For a stationary process, the entropy rate reduces to the following:
h_\mu = \lim_{L \to \infty} H(S_L \mid S_{L-1}, \ldots, S_1).
If the process is also Markovian, it becomes the following:
h_\mu = H(S_0 \mid S_{-1})
By recasting the Shannon entropy rate as a conditional entropy, we can understand it as the amount of surprise each spin contributes. This effectively measures the process’s randomness per spin. Furthermore, for one-dimensional spin models, the Shannon entropy rate matches the Boltzmann entropy density, the more familiar form of entropy in statistical mechanics, as shown in Appendix C.
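These reductions can also be checked numerically for the spin process: the block-entropy increment H(S^L) − H(S^{L−1}) equals the conditional entropy H(S_0 | S_{−1}) exactly for a Markov chain, while H(S^L)/L only approaches it as L grows. A sketch with illustrative parameters and helper names of our own choosing:

```python
import numpy as np
from itertools import product

J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)
lam, u = evals[-1], np.abs(evecs[:, -1])

def pr_embedded(config):
    w = u[config[0]] * u[config[-1]]
    for i in range(len(config) - 1):
        w *= V[config[i], config[i + 1]]
    return w / lam ** (len(config) - 1)

def H_block(L):
    """Growth (block) entropy H(S^L) of the embedded distribution."""
    return -sum(p * np.log2(p)
                for p in (pr_embedded(c) for c in product([0, 1], repeat=L)))

# h_mu as the conditional entropy H(S_0 | S_{-1}).
P2 = u[:, None] * V * u[None, :] / lam
h_mu = -np.sum(P2 * np.log2(P2 / P2.sum(axis=1, keepdims=True)))

print(np.isclose(H_block(5) - H_block(4), h_mu))  # exact for a Markov chain -> True
print(H_block(8) / 8 - h_mu > 0)                  # H(S^L)/L converges from above -> True
```

The exact increment reflects the factorization H(S^L) = H(S_0) + (L−1) h_μ, so the ratio H(S^L)/L carries an O(1/L) boundary correction while the difference does not.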
Since the regularity of the spin process is interpreted as the information shared between the process’ past and future, the regularity is defined as the process’ mutual information or excess entropy [26,27,28,29]. Mathematically, it is defined for the spin process as follows:
E = I(\overleftarrow{S}; \overrightarrow{S}) = I(S_{-1}; S_0).
Therefore,
E = \sum_{s_{-1}, s_0 = \pm 1} \Pr(s_{-1}, s_0) \log_2 \frac{\Pr(s_{-1}, s_0)}{\Pr(s_{-1}) \Pr(s_0)}.
Notably, the excess entropy E can be interpreted as predictable information. That is, it quantifies the amount of information an observer has for recognizing configuration patterns, even if that information is not enough to identify them. Moreover, for processes with vanishing entropy rate h_μ, E alone determines how much information the observer requires to achieve synchronization with the underlying configuration patterns. Synchronization, from this purely information-theoretic perspective, refers to the observer’s ability to recognize and discern these configuration patterns.
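For the nn Ising spin process, E reduces to the mutual information of adjacent spins and can be computed directly from the pair distribution; it also satisfies the identity E = H(S_0) − h_μ. A minimal sketch, again with illustrative parameter values of our own choosing:

```python
import numpy as np

J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)
lam, u = evals[-1], np.abs(evecs[:, -1])

P2 = u[:, None] * V * u[None, :] / lam           # joint Pr(s_{-1}, s_0)
p = P2.sum(axis=1)                               # marginal Pr(s); equals u ** 2

E = np.sum(P2 * np.log2(P2 / np.outer(p, p)))    # excess entropy I(S_{-1}; S_0)
H0 = -np.sum(p * np.log2(p))                     # H(S_0)
h_mu = -np.sum(P2 * np.log2(P2 / p[:, None]))    # H(S_0 | S_{-1})

print(E >= 0)                                    # mutual information -> True
print(np.isclose(E, H0 - h_mu))                  # E = H(S_0) - h_mu -> True
```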
To measure the process’s structure or statistical complexity, we need to determine the asymptotic probabilities of its patterns or causal states \mathcal{S}. In general, this often requires inferring the process’s ϵ-machine, especially for non-Markovian processes [69] (p. 37). However, for the spin process, we can calculate them directly since we have a natural definition of causal states.
Given the Markovian nature of the spin process, the next spin only depends on the previous one. Thus, the probability distribution of future spins conditioned on past ones matches the probability distribution of the future conditioned on the previous spin being up or down. Now, since the probability of a spin up and the probability of a spin down add up to 1, and they represent the probability per site throughout the process, they can be interpreted as the asymptotic probabilities of the causal states. Therefore, the statistical complexity of the spin process can be quantified as follows [2]:
C_\mu = H(\text{patterns}) = H(\mathcal{S}) = H(S_0).
Therefore,
C_\mu = -\sum_{s_0 = \pm 1} \Pr(s_0) \log_2 \Pr(s_0) = -\sum_{s_0 = \pm 1} \left( u^L_{s_0} u^R_{s_0} \right) \log_2 \left( u^L_{s_0} u^R_{s_0} \right) = -\sum_{s_0 = \pm 1} u_{s_0}^2 \log_2 u_{s_0}^2.
These information measures can be simply related via the identity H(S_0) = H(S_0 \mid S_{-1}) + I(S_{-1}; S_0) as
C_\mu = R \, h_\mu + E.
This relationship [69] (p. 37) formalizes our intuition that structure is a blending of randomness and regularity. Here, R denotes the neighborhood radius, which equals 1 for the nn Ising model.
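This decomposition can be confirmed numerically for the nn Ising model (R = 1). A minimal sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

J, B, beta = 1.0, 0.3, 0.7
spins = [1, -1]
V = np.array([[np.exp(beta * (J * s * t + 0.5 * B * (s + t)))
               for t in spins] for s in spins])
evals, evecs = np.linalg.eigh(V)
lam, u = evals[-1], np.abs(evecs[:, -1])

P2 = u[:, None] * V * u[None, :] / lam           # joint Pr(s_{-1}, s_0)
p = u ** 2                                       # causal-state probabilities

C_mu = -np.sum(p * np.log2(p))                   # statistical complexity H(S_0)
h_mu = -np.sum(P2 * np.log2(P2 / p[:, None]))    # entropy rate H(S_0 | S_{-1})
E = np.sum(P2 * np.log2(P2 / np.outer(p, p)))    # excess entropy I(S_{-1}; S_0)

R = 1                                            # neighborhood radius, nn model
print(np.isclose(C_mu, R * h_mu + E))            # C_mu = R h_mu + E -> True
```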
Since the causal states are sufficient to predict the process’s future, and considering that prediction is tantamount to reproduction, statistical complexity can be defined as the minimum amount of information required to reproduce the stochastic process [2]. As mentioned in the introduction, if structure is viewed as quantifiable information, then this suggests that the mechanism generating the structure can be described as a machine [32,35].

2.5. Structure: Computational Mechanics

To formalize the mechanism generating a physical process’s structure, the concept of a machine must be adapted to satisfy three statistical mechanical constraints:
  • Be capable of reproducing ensembles;
  • Possess a well-defined notion of “state”;
  • Be derivable from first principles.
Computational mechanics meets the first requirement by enhancing the simplest machine in TOC, the Deterministic Finite State Machine (DFSM), with probabilistic features while keeping its determinism intact [33,70]. The former is achieved by incorporating probabilities into the state transitions, and the latter is maintained by ensuring that the probability of transitioning to the next state, given the current state and a specific outgoing symbol, is precisely one. These modifications result in a machine known as a Probabilistic Deterministic Finite-State Machine (PDFM), or ϵ -machine.
The second requirement is fulfilled by operationalizing TOC’s conceptual definition of a state—an entity that “remembers a relevant portion of the system’s history” [34] (pp. 2–3)—as a causal state. A causal state is the collection of all past realizations that, when individually conditioning the process’s future, yield the same conditional probability distribution [32,33]. Notably, formalizing the notion of “state” is crucial not just for conceptual clarity, but also to satisfy the third requirement: without a clear understanding of what states are, the procedure for inferring them is much less clear.
To satisfy the third condition, we recast the definition of causal state as a guiding principle for inferring causal states from realizations, that is, the causal equivalence principle [32,33]. It states that two past realizations belong to the same causal state if they yield the same conditional distributions over the process’s futures. In practice, this principle allows us to construct the underlying ϵ -machine of an ensemble.
In summary, the key ingredients of computational mechanics are the concepts of ϵ -machine, causal transition, causal state, and the causal equivalence principle [33]. While we introduced them in this order to capture how they would be rediscovered conceptually, we will now present them in reverse order to delve into their mathematical details more pedagogically.
Causal equivalence principle. Two pasts are considered causally equivalent if and only if they make the same prediction over the future, i.e.,
$$\overleftarrow{s} \sim \overleftarrow{s}{}' \iff \Pr( \overrightarrow{S} \mid \overleftarrow{s} ) = \Pr( \overrightarrow{S} \mid \overleftarrow{s}{}' )$$
Effectively, this principle groups pasts that lead to the same future into what are known as causal states. To formalize what we mean by “leads,” a causal state is defined as follows:
Causal state. A triple that contains the following:
1.
An event with its associated probability of the causal state random variable S :
$\mathcal{S}_i$ and $\Pr(\mathcal{S}_i)$.
2.
A distribution of the future conditioned on the causal event, i.e., a morph:
$$M_i = \Pr( \overrightarrow{s} \mid \mathcal{S}_i ) .$$
3.
The set of histories that lead to the same morph:
$$H_i = \{ \overleftarrow{s} \mid \Pr( \overrightarrow{S} \mid \mathcal{S}_i ) = \Pr( \overrightarrow{S} \mid \overleftarrow{s} ) \} .$$
Now, assuming that our machine is deterministic in the computation-theoretic sense, we can define the causal transition as follows:
Causal transition. The probability of transitioning from state $\mathcal{S}_i$ to state $\mathcal{S}_j$ while emitting the symbol $s \in \mathcal{A}$:
$$T_{ij}^{(s)} = \Pr( \mathcal{S}_j , s \mid \mathcal{S}_i ) = \Pr( \mathcal{S}_j \mid s , \mathcal{S}_i ) \, \Pr( s \mid \mathcal{S}_i ) = \Pr( s \mid \mathcal{S}_i ) .$$
These definitions allow us to construct the minimal machine supporting a stochastic process’s structure.
ϵ -machine or PDFM. A pair that contains the following:
  • The set of causal states;
  • Transition dynamic (causal transitions gathered in a matrix) [31].
For inferring ϵ -machines, it will be important to distinguish between two types of causal states:
  • Recurrent causal states: These are states to which the machine will repeatedly transition as it operates. Consequently, their asymptotic probability is non-zero.
  • Transient causal states: These are states that the machine may reach temporarily but will not return to. As a result, their asymptotic probability is zero: Pr ( S i ) = 0 .
Notably, the connectivity and number of transient states specify how difficult it is to identify the periodicity of configurations. In other words, these transient states reflect the computational effort required to achieve synchronization with the recurrent causal states. Here, synchronization is recast as the observer achieving certainty about the recurrent causal state it occupies, even in systems with nonzero entropy rate h μ . Thus, transient states offer a computational perspective on synchronization, which completes the informational interpretation provided by the excess entropy E .
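To make the recurrent/transient distinction concrete, consider the toy machine below (a hypothetical three-state example, not one of the spin machines discussed later), which starts in a transient state and settles into its recurrent states. Iterating the state-to-state transition matrix yields the asymptotic distribution, which separates the two types:

```python
import numpy as np

# Hypothetical three-state machine: state 0 is a start state with no incoming
# transitions; states 1 and 2 receive all the long-run probability.
# T_up[i, j] = probability of moving i -> j while emitting an up spin (1);
# T_down is the analogous matrix for a down spin (0).
T_up = np.array([[0.0, 0.9, 0.0],
                 [0.0, 0.9, 0.0],
                 [0.0, 0.2, 0.0]])
T_down = np.array([[0.0, 0.0, 0.1],
                   [0.0, 0.0, 0.1],
                   [0.0, 0.0, 0.8]])
T = T_up + T_down                       # state-to-state transition matrix

pi = np.full(3, 1.0 / 3.0)
for _ in range(1000):                   # power iteration -> asymptotic dist.
    pi = pi @ T
recurrent = [i for i, p in enumerate(pi) if p > 1e-12]
transient = [i for i, p in enumerate(pi) if p <= 1e-12]
```

State 0 is transient (no transition enters it, so its asymptotic probability vanishes after a single step), while states 1 and 2 are recurrent, with asymptotic probabilities 2/3 and 1/3 for these made-up transition values.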
Since this is a principled approach, we can infer our machine of interest, rather than design it. For spin processes, an analytical method exists for inferring recurrent causal states [2]. Moreover, transient states can be reconstructed from these recurrent states, as detailed in Appendix B of Ref. [2]. Below, we provide a step-by-step explanation of the analytical reconstruction method for recurrent causal states.

Analytical Method to Infer ϵ -Machines

1.
Consider a finite configuration of length 2 L embedded in an infinite one.
$$s = s_{-L} \cdots s_{-1} \, s_0 \, s_1 \cdots s_{L-1}$$
2.
Consider the joint probability of the embedded finite configuration.
$$\Pr(s) = \frac{ u^L_{s_{-L}} \, u^R_{s_{L-1}} \prod_{i=-L}^{L-2} V_{s_i s_{i+1}} }{ \lambda^{2L-1} }$$
3.
Compute the conditional probability of the right half of the configuration given the left half.
$$\Pr( \overrightarrow{s} \mid \overleftarrow{s} ) = \frac{ u^R_{s_{L-1}} \prod_{i=-1}^{L-2} V_{s_i s_{i+1}} }{ u^R_{s_{-1}} \, \lambda^{L} }$$
4.
Notice that the only past element the conditional probability depends on is the last spin $s_{-1}$. Thus, the conditional probability is Markovian.
$$\Pr( \overrightarrow{s} \mid \overleftarrow{s} ) = \Pr( \overrightarrow{s} \mid s_{-1} )$$
5.
Identify morphs.
$$\Pr( \overrightarrow{s} \mid \mathcal{S}_A ) = \Pr( \overrightarrow{s} \mid \text{pasts whose last spin is } \uparrow )$$
$$\Pr( \overrightarrow{s} \mid \mathcal{S}_B ) = \Pr( \overrightarrow{s} \mid \text{pasts whose last spin is } \downarrow )$$
6.
Identify the number of causal states.
Since there are two morphs, there are at most two causal states.
7.
Identify sets of histories that lead to the same morph.
$$\{ \overleftarrow{s} \mid \text{last spin is } \uparrow \} \quad \text{and} \quad \{ \overleftarrow{s} \mid \text{last spin is } \downarrow \}$$
8.
Apply the definition of causal transitions.
$$T_{AA}(\uparrow) = \Pr( \uparrow \mid \uparrow ) = \frac{e^{\beta(J+B)}}{\lambda}$$
$$T_{AB}(\downarrow) = \Pr( \downarrow \mid \uparrow ) = \frac{e^{-\beta J}}{\lambda} \sqrt{\frac{1-m}{1+m}}$$
$$T_{BB}(\downarrow) = \Pr( \downarrow \mid \downarrow ) = \frac{e^{\beta(J-B)}}{\lambda}$$
$$T_{BA}(\uparrow) = \Pr( \uparrow \mid \downarrow ) = \frac{e^{-\beta J}}{\lambda} \sqrt{\frac{1+m}{1-m}}$$
9.
Calculate asymptotic causal state probabilities using the following facts:
  • $\Pr( \overrightarrow{s} \mid \mathcal{S}_A ) = \Pr( \overrightarrow{s} \mid \uparrow )$;
  • $\Pr( \overrightarrow{s} \mid \mathcal{S}_B ) = \Pr( \overrightarrow{s} \mid \downarrow )$;
  • $\Pr( \mathcal{S}_A ) + \Pr( \mathcal{S}_B ) = 1$.
Since $\Pr(\uparrow) + \Pr(\downarrow) = 1$, by inspection, we have the following:
  • $\Pr( \mathcal{S}_A ) = \Pr( \uparrow ) = u_{\uparrow}^2 = \frac{1+m}{2}$;
  • $\Pr( \mathcal{S}_B ) = \Pr( \downarrow ) = u_{\downarrow}^2 = \frac{1-m}{2}$.
10.
Build transition dynamic T.
$$T = \begin{pmatrix} 0 & \Pr(\mathcal{S}_A) & \Pr(\mathcal{S}_B) \\ 0 & \Pr(\mathcal{S}_A \mid \mathcal{S}_A) & \Pr(\mathcal{S}_B \mid \mathcal{S}_A) \\ 0 & \Pr(\mathcal{S}_A \mid \mathcal{S}_B) & \Pr(\mathcal{S}_B \mid \mathcal{S}_B) \end{pmatrix} = \begin{pmatrix} 0 & \frac{1+m}{2} & \frac{1-m}{2} \\[2pt] 0 & \frac{e^{\beta(J+B)}}{\lambda} & \frac{e^{-\beta J}}{\lambda}\sqrt{\frac{1-m}{1+m}} \\[2pt] 0 & \frac{e^{-\beta J}}{\lambda}\sqrt{\frac{1+m}{1-m}} & \frac{e^{\beta(J-B)}}{\lambda} \end{pmatrix}$$
11.
Find the left eigenvector using $\langle \pi | T = \langle \pi |$.
$$\langle \pi | = \left( 0 , \ \frac{1+m}{2} , \ \frac{1-m}{2} \right)$$
Since T is a stochastic matrix, this is its asymptotic probability distribution vector, which contains the causal states’ probabilities, as seen in Refs. [65] (p. 330), [71] and [72] (p. 128).
12.
Build HMM representation of ϵ -machine using the transition matrix T . Details of the resulting machine, for the parameter values J 1 = 1.0 , B = 0.35 , and T = 1.5 , are provided in Appendix E.
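Steps 8–11 can be verified numerically. The sketch below (Python with NumPy; variable names are ours) builds the nn Ising transfer matrix for $J = 1.0$, $B = 0.35$, $T = 1.5$, evaluates the closed-form causal transitions of step 8, and checks that the rows are normalized and that $\pi = \left(\frac{1+m}{2}, \frac{1-m}{2}\right)$ is stationary:

```python
import numpy as np

beta, J, B = 1.0 / 1.5, 1.0, 0.35                # T = 1.5, as in Appendix E
s = np.array([1.0, -1.0])                        # causal states A (up), B (down)
V = np.exp(beta * (J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2))
eigvals, vecs = np.linalg.eigh(V)                # V is symmetric
lam = eigvals.max()
u = np.abs(vecs[:, np.argmax(eigvals)])          # normalized Perron eigenvector
m = u[0]**2 - u[1]**2                            # magnetization per site

# Step 8: closed-form causal transitions
T_AA = np.exp(beta * (J + B)) / lam
T_AB = np.exp(-beta * J) / lam * np.sqrt((1 - m) / (1 + m))
T_BA = np.exp(-beta * J) / lam * np.sqrt((1 + m) / (1 - m))
T_BB = np.exp(beta * (J - B)) / lam

# Steps 9-11: asymptotic causal-state probabilities and the recurrent part
# of the transition dynamic
pi = np.array([(1 + m) / 2, (1 - m) / 2])
P = np.array([[T_AA, T_AB],
              [T_BA, T_BB]])
```

Each row of P sums to one (the machine always emits some spin), and pi @ P returns pi, reproducing step 11's left eigenvector restricted to the recurrent states.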

2.6. Patterns as ϵ -Machines

The following example illustrates how computational mechanics formalizes the concept of a pattern. Consider a spin configuration such as ↑↓↑↓↑↓. When asked, “What’s the pattern in this configuration?”, an intuitive answer might be ↑↓. However, if presented with an ensemble of spin configurations and asked the same question, the concept of a pattern becomes vague. To reason towards a definition of pattern for ensembles, we can ask: “What’s the key property of a pattern?” A plausible candidate is that a pattern represents a compressed form of data that enables an observer to reproduce the original content [73]. Thus, we can then ask: “What’s the object that statistically reproduces such a configuration?” The framework of computational mechanics provides the answer: the ϵ -machine, which can be interpreted as a physical or ensemble pattern [32,33]. Figure 3 illustrates the relationship between configuration and ensemble patterns.
From this point forward, the plotted machines will be derived using the CMPy package, which implements a tree-reconstruction method for inferring ϵ -machines, as described in Refs. [30,31]. The transient and recurrent states of these machines are represented in purple and green, respectively. For clarity in visualization, spins ↑ and ↓, emitted during transitions between causal states, are represented as 1 and 0, respectively. The ensembles of spin models discussed in the following section include configurations of either 4 or 6 spins. Configurations with probabilities below $1 \times 10^{-5}$ are excluded from consideration.
Based on Figure 3, one might be tempted to conclude that an ϵ -machine is simply a Hidden Markov Model (HMM). However, that is not the case. The difference stems from how states in HMMs and ϵ -machines are characterized; specifically, a causal state in an ϵ -machine is defined as a triple, as mentioned earlier. In contrast, the conventional use of HMMs typically equates a state directly with the outcome of a random variable, treating the state as a singular entity rather than a triple. The definition of a state in computational mechanics is crucial, as it provides the foundation for inferring states from first principles rather than manually designing them [30].

3. Results and Discussion

As discussed in the previous section, the embedded Boltzmann distribution generates a vast number of spin configurations, making the structure and pattern of an arbitrary configuration unrepresentative of the system’s overall structure and patterns. However, to compare the information measures and ϵ -machines to the Boltzmann ensemble, it may still be useful to examine the structure and patterns of the individual configurations that are most optimally representative. To achieve this, we focus on a specific kind of configuration: typical configurations. These configurations are the most likely outcomes generated by the embedded Boltzmann distribution of a given spin model. Among these, likely typical configurations have probabilities that are significantly higher than those of non-typical configurations, whereas unlikely typical configurations have probabilities that are only slightly higher than those of non-typical configurations. The patterns present in these typical configurations are referred to as typical configuration patterns.
The patterns and structures of both typical and non-typical configurations across different spin models are shaped by various parameters [74]. To identify commonalities in how these parameters contribute to the configurations’ structure and patterns, we propose classifying them into three distinct types. To illustrate this, we will reference the nearest-neighbor (nn) Ising model as an example while defining each type of parameter.
  • Randomness Parameter: This parameter governs the degree of randomness within the system. As it increases, it leads configurations to become more uniformly likely. In the nn Ising model, temperature T usually fulfills this role.
  • Periodicity Parameter (Type 1): This parameter enhances periodicity and, as it varies, biases the system toward configurations that consist exclusively of a single period. In the nn Ising model, the magnetic field B exemplifies this. It induces period-1 configurations whether B is significantly positive or negative. Specifically, a high positive B biases all spins to point upwards, while a high negative B results in all spins pointing downwards.
  • Periodicity Parameter (Type 2): Similarly, this parameter enhances periodicity but, as it varies, steers the system towards typical configurations with multiple distinct periods. In the nn Ising model, this role is played by the coupling constant J. A high positive J value tends to produce period-1 configurations (all spins up), akin to B, but a negative J value leads to alternating spin configurations (e.g., up-down-up-down), indicating that the typical configuration can be of period 2.

3.1. Finite-Range Ising Model

The nearest-neighbor Ising model can be generalized to a finite-range model using Dobson’s spin block method [75]. This approach consists of redefining the model’s degrees of freedom from individual spins to blocks of spins. These spin blocks are only allowed to interact with their nearest-neighbor blocks. Equivalently, in terms of spin variables, a spin $s_i$ within a spin block $\eta_j$ is only allowed to interact with spins within the same block and spins within the nearest-neighbor spin blocks. Notably, every spin within a block will interact with all the spins within the same block. Nonetheless, a given spin will not necessarily interact with all the spins from the nearest spin blocks unless the nearest-neighbor Ising model is the specific model under consideration [75]. The interactions of spins within spin blocks are illustrated in Figure 4.
The spin block method expresses the Hamiltonian of two interacting spin blocks η j and η j + 1 of the finite-range Ising model as the sum of three contributions, shown in Equation (37). The first is the energy within block η j , encompassing the interactions among spins within the block as well as the interactions of each spin with the magnetic field. The second contribution is the interaction energy between blocks η j and η j + 1 , which is determined solely by the interactions between spins in η j and spins in η j + 1 . The third contribution is the energy within block η j + 1 , which, like the first, consists of the interactions between spins inside the block and the interactions of these spins with the magnetic field [75]. The reduction of the finite-range Ising model Hamiltonian to the Hamiltonians of Ising models with neighboring radii R = 1 , 2, and 3 is shown in Appendix J:
$$E( \eta_j , \eta_{j+1} ) = \frac{1}{2} X_{\eta_j} + Y_{\eta_j , \eta_{j+1}} + \frac{1}{2} X_{\eta_{j+1}}$$
where
  • $X_{\eta_j} = -B \sum_{i=0}^{n-1} s_i^j - \sum_{k=1}^{n} J_k \sum_{i=0}^{n-k-1} s_i^j s_{i+k}^j$,
  • $Y_{\eta_j , \eta_{j+1}} = -\sum_{k=1}^{n} J_k \sum_{i=0}^{k-1} s_{n-i-1}^j s_{k-i-1}^{j+1}$.
The terms in X η j and Y η j , η j + 1 have the physical interpretations described below:
  • $-\frac{B}{2} \sum_{i=0}^{n-1} s_i^j$ represents the energy contribution from the interactions between each spin in the block $\eta_j$ and the magnetic field B. For $B > 0$, configurations tend to have all spins pointing up, while for $B < 0$ all spins pointing down are favored. Therefore, B acts as a type-1 periodicity parameter.
  • $-\frac{1}{2} \sum_{k=1}^{n} J_k \sum_{i=0}^{n-k-1} s_i^j s_{i+k}^j$ represents the energy from the neighbor interactions between the spins within block $\eta_j$. For $J_k > 0$, spins tend to align either all up or all down, favoring period-1 configurations. When $J_k < 0$, spin configurations of period-$2R$ are prone to occur. Thus, $J_k$ serves as a type-2 periodicity parameter.
  • $Y_{\eta_j , \eta_{j+1}}$ denotes the energy associated with interactions between spins in neighboring blocks $\eta_j$ and $\eta_{j+1}$. Since this term shares the same form and couplings as $-\frac{1}{2} \sum_{k=1}^{n} J_k \sum_{i=0}^{n-k-1} s_i^j s_{i+k}^j$, it leads to the same configuration patterns for corresponding values of $J_k$. Thus, $J_k$ again acts as a type-2 periodicity parameter.
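The block energies above can be evaluated directly. The following sketch (Python with NumPy; the helper names, and the sign convention in which aligned spins lower the energy for $J_k > 0$ and $B > 0$, are ours) computes $X_{\eta_j}$, $Y_{\eta_j,\eta_{j+1}}$, and the pair energy $E(\eta_j,\eta_{j+1})$ for a next-nearest-neighbor model with blocks of $n = 2$ spins:

```python
import numpy as np

def X(block, B, Js):
    """Intra-block energy X_eta: field term plus within-block couplings.
    Sign convention: aligned spins lower the energy for J_k > 0, B > 0."""
    n = len(block)
    e = -B * np.sum(block)
    for k, Jk in enumerate(Js, start=1):
        for i in range(n - k):
            e -= Jk * block[i] * block[i + k]
    return e

def Y(left, right, Js):
    """Inter-block energy Y: couplings reaching across the block boundary."""
    n = len(left)
    e = 0.0
    for k, Jk in enumerate(Js, start=1):
        for i in range(k):
            e -= Jk * left[n - i - 1] * right[k - i - 1]
    return e

def pair_energy(left, right, B, Js):
    """E(eta_j, eta_{j+1}) = X/2 + Y + X/2, as in Equation (37)."""
    return 0.5 * X(left, B, Js) + Y(left, right, Js) + 0.5 * X(right, B, Js)

# Next-nearest-neighbor model (R = 2): blocks of n = 2 spins, Js = (J1, J2)
e = pair_energy(np.array([1, 1]), np.array([1, 1]), B=0.05, Js=(1.0, -1.2))
```

Note that for blocks of $n = R$ spins, a coupling of range $k \leq R$ reaches at most into the adjacent block, which is exactly the nearest-neighbor-block condition the spin-block construction enforces.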
The next step is to determine how effective information measures are at detecting and distinguishing configuration patterns within typical configurations of finite-range Ising models. For this, we start by considering a next-nearest-neighbor Ising model with a moderately negative next-nearest-neighbor coupling $J_2 = -1.2$, a very weak magnetic field $B = 0.05$, and a low temperature $T = 1$. Figure 5a shows the model’s information measures $h_\mu$, E and $C_\mu$ as a function of the nearest-neighbor coupling $J_1 \in [-8, 8]$. To assess the detection capability of these measures, typical configurations generated by the finite-range Boltzmann distribution at various values of $J_1$ are displayed below the horizontal axis.
For a strongly negative nearest-neighbor coupling $J_1 \in [-8, -7)$, $h_\mu$ approaches zero, while $E \approx 1$, together suggesting the presence of period-2 typical configurations. In this regime, the ensemble exclusively adopts configurations that alternate between ↑ and ↓, confirming the period-2 pattern. These configurations arise from the negative coupling $J_1$, which favors antiferromagnetic behavior [76,77].
For a strongly positive nearest-neighbor coupling $J_1 \in [6, 8]$, all information measures approach zero, implying a period-1 typical configuration. The resulting “all-ups” pattern observed at these values is consistent with these measures. This outcome is expected, as the positive coupling $J_1$ drives the system toward ferromagnetic alignment [76,77].
For nearest-neighbor coupling $J_1 = 0.2$, the system exhibits $h_\mu \approx 0.42$, $E \approx 1.17$, and reaches a maximum $C_\mu \approx 2$. While $1 \leq E < 1.59 \approx \log_2 3$ would imply period-3 configurations in the absence of entropy rate, the significant value of $h_\mu$ results in $C_\mu = 2$, pointing toward period-4 configurations. Consistently, at these parameter values, we observe period-4 patterns in the typical configurations. Physically, this behavior can be understood as a result of the antiferromagnetic effect of $J_2$ being more dominant than the contributions from B and $J_1$.
For $J_1 = -2.5$ and $J_1 = 2.5$, all configurations have a probability of less than 0.1. This indicates that, at these parameter values, the system does not have a typical configuration or preferred configuration pattern. Additionally, in the regions $J_1 \in [-5, -1] \cup [2, 5]$, we observe that $C_\mu$ is not constant, but exhibits significant variation. As a result, these regions can be seen as configuration transition zones where the typical configurations are shifting to new ones as the parameter of interest varies.
Now, consider a 3-range Ising model with negative neighbor couplings of decreasing magnitude $J_1 = -2.8$, $J_2 = -1.3$, $J_3 = -0.45$ and low temperature $T = 0.2$. Figure 5b shows the model’s information measures $h_\mu$, E and $C_\mu$ as a function of the magnetic field $B \in [0, 13]$. As in Figure 5a, typical configurations at various values of B are included below the horizontal axis.
For a weak magnetic field $B \in [0, 0.75]$, we observe $h_\mu \approx 0$, $E \approx 1$, and $C_\mu \approx 1$, indicating that only period-2 configurations are present, with no possibility of other configurations, even as unlikely alternatives. This is further confirmed by the exclusivity of the alternating ↑ and ↓ configurations in this region. Moreover, for a strong magnetic field $B \in [10, 13]$, all information measures approach zero, indicating that the system permits only period-1 configurations, consisting entirely of ↑ spins. This is further validated by the typical configurations calculated from the Boltzmann distribution. While the information measures and configuration patterns for these field ranges resemble those in Figure 5a, they begin to differ in the intermediate range of B.
For a moderate magnetic field $B \approx 4.2$, we observe $h_\mu \approx 0.1$, $E \approx 1.4$, and $C_\mu \approx 1.7$, indicating the presence of period-3 typical configurations. This is confirmed by the configurations calculated using the Boltzmann distribution. These results can be attributed to the competing effects between the antiferromagnetic couplings and the positive magnetic field [78,79,80]. Moreover, in Figure 5a, the probability of each non-typical configuration for $J_1 = 0.2$ is less than 0.03, while in Figure 5b, for $B \approx 4.2$, the probability of each non-typical configuration is less than 0.01. The lower value of $h_\mu$ for $B \approx 4.2$ in Figure 5b, compared to that for $J_1 = 0.2$ in Figure 5a, indicates that $h_\mu$ effectively captures the likelihood of non-typical configurations.
For a strong magnetic field B = 7.5 , the typical configurations are period-4. Therefore, compared to Figure 5a, Figure 5b shows a greater variety of periodic patterns. Moreover, although the 3-range model in Figure 5b includes spins with two additional neighbors compared to the next-nearest neighbor model in Figure 5a, it does not exhibit configuration patterns of periodicity higher than period-4. This captures how different parameters can limit or expand the diversity of configuration patterns.
Notably, there is a dip around $B = 4.5$, where $B \approx |J_1 + J_2 + J_3|$. This suggests that, when the magnetic field and the coupling parameters are in a state of competing balance without a clear dominant effect, the configuration patterns reach a complex yet not maximally intricate compromise. That is, their periodicity is higher than that of an antiferromagnet but still below the maximum possible within the range $B \in [0, 13]$.
Figure 6 shows the ϵ -machines for 3-range Ising models at fixed values of the coupling, temperature, and magnetic field parameters. In panel (a), the parameters are a weak magnetic field $B = 0.2$, a moderate temperature $T = 4$, and weak ferromagnetic couplings $J_1 = 1$, $J_2 = 1$, and $J_3 = 1$. In panel (b), the parameters are a strong magnetic field $B = 8$, a low temperature $T = 0.2$, and moderate antiferromagnetic couplings $J_1 = -3$, $J_2 = -2$, and $J_3 = -2$.
The ϵ -machine in Figure 6a exhibits the maximum possible number of recurrent states, given by $2^R = 2^3 = 8$, where R is the number of spins in a given spin block [2]. Therefore, by the definition of causal states, each spin block leads to a distinct future. This creates a one-to-one correspondence between spin blocks and causal states [2]. Additionally, it has 7 transient states, determined by $2^R - 1$ [2]. This indicates that up to 7 spins must be observed before the observer can discern the precise typical configuration pattern.
Figure 6b shows fewer recurrent states compared to Figure 6a. This is due to the stronger magnetic field B and lower temperature in Figure 6b, which bias typical configurations toward a period-1 pattern. Consequently, the variety of possible typical spin configurations is reduced, limiting the range of possible futures. Moreover, Figure 6b exhibits only 3 transient states. This can also be attributed to the bias toward period-1 configurations, as fewer spins need to be observed to discern the typical configuration pattern.
Furthermore, notice that Figure 6b has reduced connectivity compared to Figure 6a. Specifically, the causal states in Figure 6a each have two outgoing transitions, while in Figure 6b, only transient states have two outgoing transitions, and recurrent states have just one. This reduced connectivity is again a result of the low temperature, which limits the diversity of configuration patterns. Moreover, it can be further understood as a consequence of the balance between the magnetic field and coupling interactions, which leads to complex but not maximally intricate configuration patterns.
Ultimately, the smaller size and reduced connectivity of the machine in Figure 6b, compared to Figure 6a, indicate that it performs less computation. Moreover, both panels in Figure 6 illustrate that the number of causal states in a spin model does not always match the number of spin blocks; this occurs only when the model operates at maximum computational capacity. Instead, the number of causal states varies based on internal factors like interaction couplings and external conditions such as the magnetic field and temperature.

3.2. Solid on Solid Model

In 1951, Burton, Frank, and Cabrera (BFC) introduced a theory on the growth of real crystals in equilibrium, built upon earlier theories of perfect crystal growth [81]. BFC posited that crystal growth is driven by the presence of steps on the crystal surface, with the rate of growth determined by kinks in these steps.
In this context, a step refers to the edge of an incomplete molecular layer on a crystal surface [81]. The interface between real crystals and their vapor is an example of a step [82]. A kink, on the other hand, is an atomic site along a surface step where the atomic alignment at that point is disrupted.
In BFC’s theory, these kinks form on the surface at a specific temperature, referred to as the roughening temperature T R . This prompted BFC to quantify surface roughness per molecule by comparing the potential energy per molecule at roughening and zero temperatures, as shown in Equation (38):
$$s = \frac{U_R - U_0}{U_0}$$
Here, $U_0$ and $U_R$ represent the potential energy per molecule at zero and roughening temperatures, respectively. The difference $U_R - U_0$ is referred to as the configurational potential energy, and provided BFC with a gateway to model crystal surfaces as spin lattice models.
They argued that for the ( 001 ) surface of a simple cubic crystal, the configurational potential energy is equivalent to the difference in potential energy between any two molecules [81]. Consequently, this allows for the crystal surface to be modeled as a two-dimensional Ising model on a square lattice where each site is labeled by integer coordinates x and y. Thus, the potential energy between two molecules is given by the following:
$$u( \mu , \mu' ) = U \, | \mu - \mu' | .$$
Moreover, by focusing on kinks along the interface/step of a crystal with its vapor, the problem can be simplified in two ways. First, all molecules on the surface to the left of the interface can be treated as spin up, and those to the right as spin down [83]. Second, these two regions can be regarded as forming a one-dimensional spin chain, reducing the Ising model from 2D to 1D [83], as depicted in Figure 7. This simplification is achieved by fixing the spins along the vertical boundaries at the extreme left $x = 0$ to spin value $+1$ and at the extreme right $x = x_{\text{high}}$ to spin value $-1$. These boundary conditions create a distinct transition in the lattice, where spin values switch from $+1$ to $-1$. As a result, the Hamiltonian describing the configurational energy between two molecules is given by the following:
$$U \, | n_j - n_{j+1} |$$
where n j represents the number of leftmost up spins in row j up to the interface at column i.
If we further require that each occupied site sits directly above another occupied site—meaning no “overhangs” are allowed—then the one-dimensional spin chain meets the solid-on-solid condition [82].
Furthermore, an attractive wall potential can be incorporated into the Hamiltonian of the configurational energy. Abraham demonstrated that this potential “straightens” the interface, provided that x is restricted to lie in the right half of the plane, i.e., $0 \leq x \leq x_{\text{high}}$ [84]. Following Privman et al. [83], a simple attractive wall potential can be expressed as:
$$-W \, \delta_{1, n_y}$$
Moreover, an additional external short-range potential can be included, represented as follows:
$$E( n_y ) = c \, e^{-a n_y} , \quad a > 0$$
The resulting Hamiltonian for this system is given by Equation (43):
$$E = \sum_y \left[ \, U \, | n_y - n_{y-1} | - W \, \delta_{1, n_y} + E( n_y ) \, \right]$$
where
  • $U \, | n_y - n_{y-1} |$ represents the energy cost of forming a kink in the interface. $U > 0$ biases the system toward period-1 configurations, while $U < 0$ favors alternating spins. Therefore, U acts as a periodicity parameter of type 2.
  • $-W \, \delta_{1, n_y}$ represents the energy associated with pinning the interface to the wall [83]. For $W > 0$, $n_y = 1$ prevails, while for $W < 0$, $n_y = 0$ dominates. In both cases, the system favors period-1 configurations. Thus, W serves as a type-1 periodicity parameter.
  • E ( n y ) represents the energy contribution from an external field that influences the interface’s orientation or tilt [83]. For E > 0 , an interface made up of 1s is favored, while for E < 0 , an interface made up of 0s is preferred. Therefore, the parameters in this term function as type-1 periodicity parameters.
In what follows, we restrict $n_y \in \{0, 1\}$, so that under $s_y = 2 n_y - 1$ the SOS Hamiltonian is equivalent to a nearest-neighbor 1D Ising chain. We compute the probabilities needed for the information measures and ϵ -machines directly from the Boltzmann distribution associated with Equation (43) using the transfer-matrix method.
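Under this restriction, the Boltzmann weights of Equation (43) can also be enumerated directly for short interfaces. The sketch below (Python with NumPy; function and argument names are ours, boundary terms are ignored, and the parameter values are illustrative) builds the distribution over binary profiles with the pinning wall off and on:

```python
import numpy as np
from itertools import product

def sos_energy(n, U, W, c=1.0, a=1.0):
    """Energy of an SOS interface profile n = (n_1, ..., n_N) per Equation (43):
    kink cost U|n_y - n_{y-1}|, wall pinning -W*delta_{1,n_y}, and the external
    short-range term c*exp(-a*n_y). Boundary terms are ignored (sketch)."""
    n = np.asarray(n, dtype=float)
    kinks = U * np.sum(np.abs(np.diff(n)))
    wall = -W * np.sum(n == 1)
    field = np.sum(c * np.exp(-a * n))
    return kinks + wall + field

def boltzmann(configs, T, **params):
    """Normalized Boltzmann distribution over a list of profiles (beta = 1/T)."""
    w = np.exp(-np.array([sos_energy(n, **params) for n in configs]) / T)
    return w / w.sum()

# All binary interface profiles of length 4, with the wall off and on
configs = list(product([0, 1], repeat=4))
p_off = boltzmann(configs, T=1.0, U=2.0, W=0.0)
p_on = boltzmann(configs, T=1.0, U=2.0, W=1.0)
```

Turning the wall on raises the probability of the flat profile (1, 1, 1, 1), consistent with the physical expectation that pinning the interface to the wall favors flat, period-1 configurations.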
We now aim to compare how turning the pinning wall W on and off affects both the configurations and information measures of the SOS model. For this comparison, we consider an SOS model with low temperature $T = 1$, external potential $E(n_y) = e^{-n_y}$, and pinning wall potential $W = 0$ or $1$. Figure 8 displays the information measures of the SOS model as the kink coupling U varies. In Figure 8a,b, the pinning wall W is set to 0 and 1, respectively.
In both panels of Figure 8, C μ ranges from 0 to 1, indicating typical configurations of either period-1 or period-2. In both figures, even a slight increase in the kink coupling above zero causes C μ to reach its peak value. This behavior aligns with the Gibbsean assumption that a low cost of forming kinks makes non-uniform configurations—that is, non-period-1 configurations—more likely to occur [81].
In Figure 8a, C μ reaches its peak just below 1, whereas in Figure 8b, it peaks around 0.75. Moreover, for 0 < U < 5 , C μ stays higher in Figure 8a than in Figure 8b. This sustained higher value of C μ in Figure 8a compared to Figure 8b is in line with the SOS Hamiltonian, which suggests that biasing the interface toward the pinning wall increases the likelihood of the interface becoming flat, that is, period-1 [81].
In Figure 8a, E peaks around E = 0.26 at U = 1.8 , while in Figure 8b, it peaks around E = 0.04 at U = 1 . This suggests that more spins need to be observed to determine the configuration pattern of the SOS model in Figure 8a compared to Figure 8b. This is consistent with period-1 configurations being more likely in Figure 8b, as these configurations do not require observing any spins.
At the E peak in Figure 8a, C μ = 0.75 , while at that of Figure 8b, C μ = 0.35 . This implies that period-2 configurations are more likely to occur in Figure 8a compared to Figure 8b. This aligns with typical period-1 configurations being less prevalent and, conversely, non-typical period-2 configurations being more frequent in Figure 8a compared to Figure 8b. Moreover, this is consistent with the physical expectation that biasing the interface to be attracted to the wall increases the likelihood that it becomes flat, thereby raising the probability of a period-1 configuration.
Furthermore, in both panels of Figure 8, as the kink coupling U increases, h μ decreases. This trend is expected, as the higher cost of kink formation makes non-period-1 configurations less likely, thereby reducing the uncertainty of the next observed spin. The decrease occurs more rapidly in Figure 8b compared to Figure 8a. This can be explained by the presence of the pinning wall, which further encourages the dominance of flat, period-1 configurations.
Figure 9a,b show the ϵ -machines corresponding to the E peaks of Figure 8a,b. Both ϵ -machines feature two recurrent states and one transient state. However, as circled in red, the probability of transitioning from state A to state B while outputting symbol 0 in Figure 9a is more than twice as high as in Figure 9b. Moreover, as circled in blue, the probability of transitioning from state B to state B while outputting symbol 0 decreases from 0.73 in Figure 9a to 0.27 in Figure 9b. This bias toward period-1 configurations of the machine in Figure 9b suggests that it is easier for the machine in Figure 9b to synchronize than the one in Figure 9a, which aligns with the fact that E is lower for the machine in Figure 9b, while both machines have similar values of $h_\mu$ (approximately 0.36 for Figure 9a and 0.31 for Figure 9b). Moreover, the outgoing transition probabilities from the transient state in Figure 9b are less uniform than those in Figure 9a. This suggests that while both machines can identify the “all-ups” configuration without observing any spins, this configuration is more representative of the machine in Figure 9b than of the one in Figure 9a. Computationally, this means that the behavior of the machine in Figure 9b more closely resembles that of a single-state machine that exclusively outputs symbol 1.
Lastly, note that the machines for the SOS model in Figure 8 and the nearest-neighbor Ising model in the Appendix E share similar recurrent states, transient states, and connectivity, but have different state transition probabilities. This suggests that the ϵ -machines offer a constructive framework for comparing the structures of different spin models and examining their similarities and differences.

3.3. Three-Body Model

Thermal desorption is the process of heating a solid surface to release a portion of its molecules [85,86]. The defining characteristic of this process is its kinetics, which are described by the desorption rate and the desorption rate constant, as outlined in [87] and presented in Equations (44) and (45), respectively. These two key equations are directly connected to experiment, as at sufficiently high pumping rates, the desorption rate equals the desorbant’s pressure [88]:
$$\frac{d\theta}{dt} = -k_d\,\theta$$
$$k_d = \nu \sum_i P_{A,i}\,\exp\!\left(-\frac{E_d(0) - E_i}{T}\right)$$
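As a numerical illustration of these kinetics, the sketch below integrates the first-order rate law of Equation (44) along a linear heating ramp and returns the desorption spectrum −dθ/dt. The single-prefactor, coverage-independent Arrhenius rate constant used here (with temperature in energy units, k_B = 1) is a simplifying assumption, not the environment-resolved constant of Equation (45):

```python
import numpy as np

def desorption_spectrum(E_d, nu=1e13, theta0=1.0, beta_h=2.0,
                        T_min=100.0, T_max=600.0, n_steps=2000):
    """Integrate first-order desorption d(theta)/dt = -k_d(T) * theta along a
    linear heating ramp T = T_min + beta_h * t and return (T, -d(theta)/dt).
    k_d = nu * exp(-E_d / T) is a simplified Arrhenius rate constant."""
    T_grid = np.linspace(T_min, T_max, n_steps)
    dt = (T_grid[1] - T_grid[0]) / beta_h       # time per temperature step
    theta = theta0
    rate = np.empty(n_steps)
    for j, T in enumerate(T_grid):
        k_d = nu * np.exp(-E_d / T)
        rate[j] = k_d * theta                   # -d(theta)/dt at this instant
        theta = max(theta - rate[j] * dt, 0.0)  # explicit Euler step, clamped
    return T_grid, rate
```

The peak of the returned spectrum marks the temperature of maximal desorption; reproducing the multi-peak spectra discussed below would require the coverage- and environment-dependent rate constant of Equation (45).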
Detecting the temperatures at which desorption is greatest and identifying the qualitative properties of desorption at these values are crucial for various applications [88]. To achieve these objectives, the negative desorption rate is plotted against temperature to obtain the “desorption spectrum” [88]. The peaks in this spectrum indicate the temperatures at which the desorption rate is highest. These peaks vary in width, height, and location depending on the coverage, temperature, and material examined. For the desorption spectrum of CO from the close-packed faces of Ni, Pd, Pt, Rh, and Ru single crystals, two distinguishing qualitative features arise, as demonstrated by Morris et al. [89]:
  • The splitting of thermal desorption peaks becomes progressively weaker as one goes from Ni to Ru.
  • The integral intensities of the peaks are distinct.
While nearest-neighbor (nn) and next-nearest-neighbor (nnn) spin models had been used to model thermal desorption [90], they did not capture the aforementioned properties. Myshlyavtsev et al. addressed this limitation by incorporating a three-body term in the spin Hamiltonian, which effectively models these characteristics [91]. The resulting three-body model removes the assumption of paired interactions [90], providing a more accurate account of the CO desorption process from metal surfaces. The 1D model is exactly solvable and, when lateral interactions are anisotropic, sufficient to capture thermal desorption, making it of both theoretical and practical interest [91]. The spin interactions in the 1D three-body model are illustrated in Figure 10. The Hamiltonian for this model is given in Equation (46), and the corresponding transfer matrix is detailed in Appendix K.
$$E = \sum_i E(s_i, s_{i+1}, s_{i+2}) = -\sum_i \left( J_1\, s_i s_{i+1} + J_2\, s_i s_{i+2} + J_{\mathrm{tb}}\, s_i s_{i+1} s_{i+2} \right)$$
where
  • $-J_1 s_i s_{i+1}$ is the term associated with the nearest-neighbor coupling. For $J_1 > 0$, the model induces period-1 configurations, while for $J_1 < 0$, it induces period-2 configurations. Thus, $J_1$ serves as a type-2 periodicity parameter.
  • $-J_2 s_i s_{i+2}$ is the energy contribution of the next-nearest-neighbor coupling. When $J_2 > 0$, the model tends toward period-1 configurations, whereas for $J_2 < 0$, it leans toward period-4 configurations. Therefore, $J_2$ also acts as a type-2 periodicity parameter.
  • $-J_{\mathrm{tb}}\, s_i s_{i+1} s_{i+2}$ is the expression that represents the three-body interaction. When $J_{\mathrm{tb}} > 0$, the configurations are biased toward a period-1 pattern, while $J_{\mathrm{tb}} < 0$ favors period-4 configurations. As a result, $J_{\mathrm{tb}}$ functions as a type-2 periodicity parameter as well.
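A minimal sketch of Equation (46) for a periodic chain of ±1 spins may help make the role of the three couplings concrete; the function below is an illustration, not the implementation used in the accompanying repository:

```python
import numpy as np

def three_body_energy(spins, J1, J2, Jtb):
    """Energy of Equation (46) for a periodic chain of +/-1 spins:
    E = -sum_i (J1 s_i s_{i+1} + J2 s_i s_{i+2} + Jtb s_i s_{i+1} s_{i+2})."""
    s = np.asarray(spins, dtype=float)
    s1 = np.roll(s, -1)   # s_{i+1}, periodic boundaries
    s2 = np.roll(s, -2)   # s_{i+2}
    return float(-np.sum(J1 * s * s1 + J2 * s * s2 + Jtb * s * s1 * s2))
```

With $J_1 > 0$ alone, the aligned (period-1) chain minimizes the energy while the alternating (period-2) chain maximizes it, matching the first bullet above.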
The purpose of Figure 11 is to illustrate how turning the nearest-neighbor coupling on and off in a three-body model affects both its configurations and information measures as a parameter of interest varies. Temperature is chosen as that parameter because it plays a key role in thermal desorption applications, where the goal is to identify the temperature that maximizes desorption [88,91]. In both panels, the next-nearest-neighbor coupling J 2 is set to 0 to highlight the role of the nearest-neighbor coupling J 1 , while the three-body coupling J tb is set to − 1 . However, in Figure 11a, the nearest-neighbor coupling J 1 is set to 0, whereas in Figure 11b, it is set to 1.
In both Figure 11a,b, C μ increases and reaches its maximum value of C μ = 2 as the temperature T rises, but the starting values differ. In Figure 11a, C μ begins around 1.9 , whereas in Figure 11b, it starts at C μ ≈ 1.58 . This suggests that at low temperature values, the typical configurations in Figure 11a are period-4, and in Figure 11b, they are period-3. This difference can be attributed to the fact that Figure 11b involves competing couplings, whereas Figure 11a does not, as it only includes the three-body coupling. In particular, in both Figure 11a,b, the three-body coupling J tb biases configurations toward a period-4 pattern. However, in Figure 11b, the ferromagnetic coupling J 1 also biases configurations toward a period-1 pattern. The competition leads to a compromise, resulting in period-3 configurations. This is consistent with the low-temperature typical configurations calculated using the Boltzmann distribution, which are shown below the horizontal axis in Figure 11b.
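The reading of low-temperature C μ values as periods can be illustrated with a minimal sketch: for an exactly periodic process with a primitive (non-self-repeating) pattern of length p, the causal states are the p phases, each visited uniformly, and every transition is deterministic, so C μ = log 2 p, h μ = 0, and E = C μ. The function below simply encodes this counting argument:

```python
import numpy as np

def periodic_process_measures(pattern):
    """Information measures of an exactly periodic process whose primitive
    pattern has length p: the causal states are the p phases, visited with
    probability 1/p each, and transitions are deterministic."""
    p = len(pattern)
    phase_probs = np.full(p, 1.0 / p)                 # uniform over phases
    C_mu = float(-np.sum(phase_probs * np.log2(phase_probs)))
    h_mu = 0.0                                        # next spin is certain
    E = C_mu                                          # no per-spin randomness
    return C_mu, h_mu, E
```

For a period-3 pattern such as ↑↑↓ this gives C μ = log 2 3 ≈ 1.58, the low-temperature value read off in Figure 11b.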
Moreover, the nearest-neighbor coupling significantly reduces the uncertainty in predicting the next spin by expanding the neighborhood of spins that each state affects. This leads to a lower h μ at very low temperatures in Figure 11b compared to Figure 11a. This prevents C μ in Figure 11b from being strongly influenced by h μ at very low temperatures.
Furthermore, although C μ is higher in Figure 11a than in Figure 11b at low temperatures, E is lower in Figure 11a than in Figure 11b at the same temperatures. This implies that while typical configurations in Figure 11a at very low temperatures exhibit greater periodicity than those in Figure 11b (period-4 versus period-3), the observer must examine more spin variables to discern the configuration pattern in Figure 11b. While this might seem to suggest that patterns in Figure 11a are easier to discern than those in Figure 11b, the uncertainty per spin in Figure 11a is significantly higher. Specifically, h μ ≈ 0.9 for Figure 11a, whereas h μ ≈ 0 for Figure 11b. This substantial difference makes an information-theoretic approach based solely on excess entropy E insufficient for determining the ease of synchronization. We will soon address this by examining the computational properties of the three-body models.
Notably, the information measures of the three-body model reveal new features that were absent in the previously studied spin models. For instance, unlike the dependence of E on temperature in the nearest-neighbor Ising model, where E decays to 0 as T increases (as shown in Ref. [2] and Appendix B), E for the three-body model remains nonzero even at high temperatures. Moreover, even though there is no magnetic field B in Figure 11b, the information measures are not flat across the temperature range. This suggests that a diversity of configuration patterns is possible whenever competing parameters are present, regardless of their specific nature, which further reinforces the usefulness of our classification of parameter types. Ultimately, the information-measure plots in Figure 5, Figure 8 and Figure 11 suggest that different spin models give rise to distinct configuration patterns and structural behavior.
Figure 12 aims to illustrate the structural changes in the ϵ -machine of a three-body model with competing couplings as the temperature increases. The plots in Figure 12a and Figure 12b depict the ϵ -machines corresponding to Figure 11b at a very low temperature T = 0.025 and a low temperature T = 2 , respectively.
The outgoing probabilities from the transient causal state A to the transient states B and C in Figure 12a, which are circled in red and blue, are less uniform than those in Figure 12b. This implies that the ϵ -machine in Figure 12a is easier to synchronize than the one in Figure 12b. At first, this may seem inconsistent with their excess entropy values, given that E ≈ 1.58 for Figure 12a and E = 1 for Figure 12b, as shown in Figure 11b. However, this apparent contradiction is resolved by observing the significantly higher value of h μ in Figure 12b compared to Figure 12a, where h μ ≈ 1 for Figure 12b and h μ ≈ 0 for Figure 12a. As a result, while discerning configuration patterns in Figure 12a may require an additional spin, the much higher uncertainty in predicting the next spin in Figure 12b outweighs this requirement, making synchronization more challenging in Figure 12b than in Figure 12a. This higher uncertainty is consistent with the fact that typical configurations for Figure 12b are much less probable than those in Figure 12a. Specifically, the highest probability for a typical configuration in Figure 12a is 0.33 , whereas in Figure 12b, it is only 0.025 . This contrast highlights how the computational approach provided by ϵ -machines offers a more nuanced perspective on synchronization than the randomness-agnostic viewpoint of excess entropy E .
Moreover, the recurrent part of Figure 12a is much less connected than that of Figure 12b. In the ϵ -machine for Figure 12a, each recurrent causal state has only one outgoing transition with probability 1.0 . In contrast, the recurrent states in Figure 12b each have two outgoing transitions, both with probabilities close to 0.50 . Furthermore, Figure 12b includes self-loops that enable it to recognize period-1 configurations consisting entirely of 0s or 1s, a feature absent in Figure 12a. This indicates that the machine in Figure 12b generates a greater variety of spin configurations compared to the one in Figure 12a. This observation is consistent with the fact that at T = 0.025 , there are only three typical configurations, whereas at T = 2 , there are six.
Lastly, the number of recurrent causal states, along with the low connectivity of the machine in Figure 12a, suggests that it can support configurations with periods of up to 3. In contrast, the machine in Figure 12b, which has the same number of recurrent causal states but higher connectivity, permits configurations with periods of up to 4. The typical configurations in Figure 11b reflect this pattern, as Figure 12b accommodates both period-4 and period-3 configurations, whereas Figure 12a only supports period-3 configurations. Ultimately, this comparison of ϵ -machines underscores the importance of considering not only typical configurations but also their probabilities when developing a computation-theoretic account of spin patterns.

4. Conclusions

What, then, is a pattern in statistical mechanics? If one recasts the mechanism generating a system’s structure as an information processor, the answer for the one-dimensional spin models studied here is clear: the ϵ -machine. To support this perspective, we began by introducing computational mechanics and its application to statistical mechanics in a conceptual manner with only the necessary amount of mathematics. We then defined typical configurations and typical configuration patterns as the most likely configurations and configuration patterns within an ensemble. Furthermore, we classified the parameters of spin models according to the type of behavior they give rise to.
Using this framework, we computed typical configurations from the embedded Boltzmann distribution and compared them to those implied by information measures and ϵ -machines for three different spin models: the finite-range Ising model, the SOS model, and the three-body model. Our findings confirmed consistency between the results, establishing the ϵ -machine as a representation of the Boltzmann distribution’s ensemble patterns. Moreover, our analysis showed that information measures and ϵ -machines offer a detailed and nuanced characterization of typical configuration patterns, allowing us to distinguish between them and identify their shared features.
In the finite-range Ising model, the information plots show that C μ serves as a simple visual indicator of regions where no typical configurations exist. These regions, distinguished by the non-flat behavior of C μ , are what we refer to as transition zones. Furthermore, C μ captures the fact that different parameters influence the diversity of configuration patterns, and consequently, the computational demands. For instance, a dominant antiferromagnetic J 2 coupling maximizes computation, while competing effects between B and antiferromagnetic J 1 lead to high but constrained computation.
Moreover, the ϵ -machines of the finite-range Ising model provide a more refined perspective on the computational differences arising from varying parameters. For instance, the high but constrained computation observed in the three-range Ising model with a high magnetic field and low temperature is represented by fewer causal states and lower connectivity compared to a system with a low magnetic field and moderate temperature. This distinction provides a more nuanced understanding of what it means for a system to require more or less computation. Additionally, the analysis shows that the number of causal states cannot simply be inferred from properties such as the number of neighbors a given spin has or the magnitude and sign of the parameters.
In the SOS model, C μ allows us to quantify the reduction in computational effort caused by turning the wall on, even when the typical configuration remains unchanged. Furthermore, the observation that the maximum of C μ occurs at very low kink coupling demonstrates that the location of the C μ peak varies depending on the specific parameter under consideration. Moreover, the ϵ -machines of the SOS model show that turning on the wall parameter reduces the uniformity of the outgoing transition probabilities from the start state. This indicates that the typical configuration becomes more likely as the wall parameter becomes nonzero. In computational terms, when the wall is fully activated, the machine becomes more similar to a single-state machine that solely outputs 1. More broadly, the machines from this case study, along with those of the nearest-neighbor Ising model, demonstrate that ϵ -machines provide a unified framework for identifying computational similarities (such as the number of states and connectivity) and differences (such as transition probabilities) between two distinct spin models.
The information measures of the three-body models, both with and without nearest-neighbor coupling, are not monotonically related to one another: a high E or h μ does not necessarily imply a high or low C μ . The information plots of these three-body models, along with those of the finite-range Ising models, capture how different spin models produce distinct configuration patterns when the same parameter, in this case temperature, is varied. Furthermore, the ϵ -machines of the three-body model with nearest-neighbor coupling provide an effective framework for identifying computational similarities and differences in the spin model as a parameter, such as temperature, changes. As temperature increases, typical configurations become more periodic, but also less likely to occur. The ϵ -machines capture this behavior by making the outgoing probabilities from transient causal states more uniform while increasing connectivity in the recurrent portion. This suggests that as typical configuration patterns become more periodic and less likely, they also become harder to discern overall. Notably, this result highlights the limitations of an information-theoretic perspective on synchronization—while useful, it remains incomplete without a computational viewpoint. This insight sheds light on subtle structural differences between systems that, despite having the same number of recurrent and transient causal states, exhibit distinct dynamical behaviors.
Ultimately, information theory and computational mechanics provide powerful tools for defining patterns in Boltzmann ensembles and for comprehensively characterizing the typical configurations generated by the Boltzmann distribution. They also enable a unified way of examining similarities and differences in the structure and patterns of a spin model under varying parameters and across different spin models. This perspective connects the abstract formalism of information theory and automata theory with the concrete physical models of statistical mechanics, providing a constructive and effective language to describe patterns in statistical mechanics.

Funding

This research was funded by the Foundational Questions Institute and by the Faggin Presidential Chair Fund.

Data Availability Statement

The code supporting this study is available at: https://github.com/omalagui/spin_patterns (accessed on 13 January 2026).

Acknowledgments

The author is grateful to Josh Deutsch, Jim Crutchfield, Anthony Aguirre, Zara Brandt, Evan Frangipane, Vidyesh Rao Anisetti and Jordan Scharnhorst for helpful feedback and insightful conversations.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Concept of “State” in Theory of Computation and Its Formalization in Computational Mechanics

In automata theory, abstract machines—the primary objects of study—are formalized in terms of “states.” However, the concept of “state” itself lacks an explicit mathematical definition. Even at a conceptual level, a “state” is rarely defined. One notable exception appears in [34] (pp. 2–3), where a state is defined as the relevant portion of a system’s history. Although the purpose of this relevance is not specified, it is illustrated through the example of a very simple finite-state machine: an on-off switch, shown below.
Figure A1. Two-state finite-state machine modeling an on/off switch. The initial state is ON (double ring). Each PRESS triggers a transition to the other state.
The machine only needs to remember “whether it is in the on state or off state.” From this, we can infer that a state represents the relevant part of a machine’s history needed to predict a portion of the machine’s future behavior. This raises the question: “How to formalize this notion of state?” Computational mechanics addresses this by formalizing the concept probabilistically, defining it as a triple, as shown in Section 2.5.

Appendix B. Information Measures Across Varying Temperature in a Nearest-Neighbor Ising Model

Figure A2 presents information measures C μ , h μ , and E as functions of temperature T for a nn Ising model with B = 0.2 and ferromagnetic coupling J 1 = 1 . This figure reproduces Figure 13 of Ref. [2].
Figure A2. C μ , h μ , and E vs. T for nn spin- 1 / 2 Ising model with B = 0.2 and J 1 = 1 . Adapted from Feldman and Crutchfield, 2022, [2].

Appendix C. Shannon Entropy Density hμ and Boltzmann Entropy Density htherm

The Boltzmann (thermodynamic) entropy density and the Shannon entropy rate for a nn Ising model are given in Equations (A1) and (A2), respectively. These expressions are plotted as functions of temperature T in Figure A3, where they are shown graphically to coincide as temperature is varied.
$$h_{\mathrm{therm}} = \frac{\partial}{\partial T}\left( \frac{T}{N}\,\log_2 \lambda^N \right)$$
$$h_\mu = -\sum_{s_0, s_1 = \pm 1} \Pr(s_0, s_1)\, \log_2 \frac{\Pr(s_0, s_1)}{\Pr(s_1)}$$
Figure A3. Boltzmann (thermodynamic) entropy density h therm and Shannon entropy rate h μ vs. temperature T.
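The equivalence plotted in Figure A3 can be checked numerically. The sketch below builds the symmetric nn transfer matrix $V(s, s') = \exp[(J s s' + B(s + s')/2)/T]$, evaluates Equation (A2) on the stationary pair distribution $\Pr(s_0, s_1) = u_{s_0} V_{s_0 s_1} u_{s_1}/\lambda$ (Equation (A8) with $L = 2$), and compares it with Equation (A1) computed by a centered finite difference; the step size and parameter values are illustrative choices:

```python
import numpy as np

def transfer_eig(J, B, T):
    """Symmetric nn Ising transfer matrix V(s, s') = exp[(J s s' + B(s+s')/2)/T]
    for s = +/-1, with its principal eigenvalue and (positive) eigenvector."""
    s = np.array([1.0, -1.0])
    V = np.exp((J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2.0) / T)
    vals, vecs = np.linalg.eigh(V)      # ascending eigenvalues
    return V, vals[-1], np.abs(vecs[:, -1])

def shannon_entropy_rate(J, B, T):
    """Equation (A2) on the stationary pair distribution
    Pr(s0, s1) = u_{s0} V_{s0 s1} u_{s1} / lambda."""
    V, lam, u = transfer_eig(J, B, T)
    pair = np.outer(u, u) * V / lam
    marg = pair.sum(axis=0)             # Pr(s1)
    return float(-np.sum(pair * np.log2(pair / marg[None, :])))

def boltzmann_entropy_density(J, B, T, dT=1e-5):
    """Equation (A1), h_therm = d/dT [T log2(lambda)], via centered difference."""
    f = lambda t: t * np.log2(transfer_eig(J, B, t)[1])
    return float((f(T + dT) - f(T - dT)) / (2.0 * dT))
```

At J = 1, B = 0.2, T = 1.5 the two quantities agree to the accuracy of the finite difference, mirroring Figure A3.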

Appendix D. Derivation of Boltzmann (Thermodynamic) Entropy Density for Nearest-Neighbor Ising Model

1.
Consider
$$h_{\mathrm{therm}} = \frac{\partial}{\partial T}\left( \frac{T}{N}\,\log_2 \lambda^N \right) = \frac{\partial}{\partial T}\left( T \log_2 \lambda \right) = \log_2 \lambda + T\,\frac{\partial}{\partial T}\left( \log_2 \lambda \right)$$
2.
Using the chain rule:
$$\frac{\partial}{\partial T} = \frac{d\beta}{dT}\,\frac{\partial}{\partial \beta} = -\frac{1}{T^2}\,\frac{\partial}{\partial \beta} = -\beta^2\,\frac{\partial}{\partial \beta}$$
3.
Rewrite h therm in terms of β = 1 T
$$h_{\mathrm{therm}} = \log_2 \lambda - \beta\,\frac{\partial \log_2 \lambda}{\partial \beta} = \log_2 \lambda - \beta\,\frac{1}{\log 2}\,\frac{1}{\lambda}\,\frac{\partial \lambda}{\partial \beta}$$
4.
Split principal eigenvalue λ into two terms
$$\lambda = e^{\beta J}\cosh(\beta B) + \sqrt{e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J}} = \text{term I} + \text{term II}$$
5.
Carry out d d β term I and d d β term II
$$\frac{d}{d\beta}\,\text{term I} = e^{\beta J}\left( J\cosh(\beta B) + B\sinh(\beta B) \right)$$
$$\frac{d}{d\beta}\,\text{term II} = \frac{1}{2}\left( e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J} \right)^{-\frac{1}{2}} \cdot \frac{d}{d\beta}\left( e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J} \right)$$
6.
Simplify d d β e 2 β J sinh 2 ( β B ) + e 2 β J
$$= 2J e^{2\beta J}\sinh^2(\beta B) + e^{2\beta J}\cdot 2B\sinh(\beta B)\cosh(\beta B) - 2J e^{-2\beta J}$$
$$= 2\left( J e^{2\beta J}\sinh^2(\beta B) + B e^{2\beta J}\sinh(\beta B)\cosh(\beta B) - J e^{-2\beta J} \right)$$
$$= 2 e^{-2\beta J}\left( J e^{4\beta J}\sinh^2(\beta B) + B e^{4\beta J}\sinh(\beta B)\cosh(\beta B) - J \right)$$
7.
Simplify d d β term II
$$\frac{d}{d\beta}\,\text{term II} = \frac{J e^{2\beta J}\sinh^2(\beta B) + B e^{2\beta J}\sinh(\beta B)\cosh(\beta B) - J e^{-2\beta J}}{\sqrt{e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J}}}$$
8.
Simplify d λ d β
$$\frac{d\lambda}{d\beta} = e^{\beta J}\left( J\cosh(\beta B) + B\sinh(\beta B) \right) + \frac{J e^{2\beta J}\sinh^2(\beta B) + B e^{2\beta J}\sinh(\beta B)\cosh(\beta B) - J e^{-2\beta J}}{\sqrt{e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J}}}$$
9.
Replace in h therm
$$h_{\mathrm{therm}} = \log_2 \lambda - \beta\,\frac{1}{\log 2}\,\frac{1}{\lambda}\,\frac{\partial \lambda}{\partial \beta} = \log_2 \lambda - \frac{\beta}{\lambda \log 2}\left[ e^{\beta J}\left( J\cosh(\beta B) + B\sinh(\beta B) \right) + \frac{J e^{2\beta J}\sinh^2(\beta B) + B e^{2\beta J}\sinh(\beta B)\cosh(\beta B) - J e^{-2\beta J}}{\sqrt{e^{2\beta J}\sinh^2(\beta B) + e^{-2\beta J}}} \right]$$
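The closed form obtained in step 8 can be sanity-checked against a numerical derivative of λ(β); the parameter values below are arbitrary illustrative choices:

```python
import numpy as np

def principal_eigenvalue(J, B, beta):
    """lambda = e^{beta J} cosh(beta B) + sqrt(e^{2 beta J} sinh^2(beta B) + e^{-2 beta J})."""
    sh = np.sinh(beta * B)
    return (np.exp(beta * J) * np.cosh(beta * B)
            + np.sqrt(np.exp(2 * beta * J) * sh ** 2 + np.exp(-2 * beta * J)))

def dlambda_dbeta(J, B, beta):
    """Closed form of step 8: d(lambda)/d(beta) = (term I)' + (term II)'."""
    sh, ch = np.sinh(beta * B), np.cosh(beta * B)
    root = np.sqrt(np.exp(2 * beta * J) * sh ** 2 + np.exp(-2 * beta * J))
    term_I = np.exp(beta * J) * (J * ch + B * sh)
    term_II = (J * np.exp(2 * beta * J) * sh ** 2
               + B * np.exp(2 * beta * J) * sh * ch
               - J * np.exp(-2 * beta * J)) / root
    return term_I + term_II
```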

Appendix E. ϵ-Machine of Nearest-Neighbor Ising Model

Figure A4 presents the ϵ -machine of a nearest-neighbor Ising model with J 1 = 1.0 , B = 0.35 , and T = 1.5 . This figure reproduces Figure 10 of Ref. [2].
Figure A4. ϵ -machine of nn Ising model with J 1 = 1.0 , B = 0.35 , and T = 1.5 .

Appendix F. Joint Probability of Infinite Chain

1.
Consider a periodic infinite spin chain whose spins can only take two values (up or down) and only interact with their nearest neighbors.
$$s_0, s_1, s_2, s_3, \ldots, s_{N-1}, \quad \text{where } s_0 = s_N$$
2.
Define a Hamiltonian for this system in a translation-invariant manner.
$$E = E(s_0, \ldots, s_{N-1}) = -\sum_{i=0}^{N-1} J\, s_i s_{i+1} - \sum_{i=0}^{N-1} \frac{B\,(s_i + s_{i+1})}{2}$$
3.
Calculate the system’s partition function.
$$Z = \sum_{\{s_i\}} e^{-\beta E}$$
4.
Define the Boltzmann probability of a given infinite configuration.
$$\Pr(s_0 \ldots s_N) = \frac{e^{-\beta E}}{Z}$$
5.
Define the transfer matrix, with components $V(s_i, s_{i+1}) = V_{s_i s_{i+1}} = e^{-\beta E(s_i, s_{i+1})}$.
$$V = \begin{pmatrix} e^{-\beta E(\uparrow,\uparrow)} & e^{-\beta E(\uparrow,\downarrow)} \\ e^{-\beta E(\downarrow,\uparrow)} & e^{-\beta E(\downarrow,\downarrow)} \end{pmatrix}$$
6.
Express Boltzmann probability weight e β E in terms of transfer matrix components.
$$e^{-\beta E} = e^{-\beta E(s_0, s_1)}\, e^{-\beta E(s_1, s_2)} \cdots e^{-\beta E(s_{N-1}, s_N)} = V_{s_0 s_1} V_{s_1 s_2} \cdots V_{s_{N-1} s_N}$$
7.
Calculate partition function in the thermodynamic limit N .
$$Z_N = \sum_{s_0 = \pm 1} \sum_{s_1 = \pm 1} \cdots \sum_{s_{N-1} = \pm 1} V_{s_0 s_1} V_{s_1 s_2} \cdots V_{s_{N-1} s_N}$$
8.
Apply the definition of matrix multiplication, $\sum_{s_2} V_{s_1 s_2} V_{s_2 s_3} = \left(V^2\right)_{s_1 s_3}$, and enforce the periodic boundary condition $s_N = s_0$.
$$Z_N = \sum_{s_0 = \pm 1} \left(V^N\right)_{s_0 s_N}\Big|_{s_N = s_0} = \sum_{s_0 = \pm 1} \left(V^N\right)_{s_0 s_0}$$
9.
Apply definition of trace.
$$Z_N = \mathrm{Tr}\left(V^N\right) = \lambda_+^N + \lambda_-^N$$
$$\lim_{N\to\infty} Z_N = \lim_{N\to\infty} \lambda_+^N \left( 1 + \frac{\lambda_-^N}{\lambda_+^N} \right) = \lambda_+^N \equiv \lambda^N$$
where $\lambda \equiv \lambda_+$ is the principal eigenvalue.
10.
Express joint probability of a given infinite spin chain in terms of principal eigenvalue λ and transfer matrix components V s i , s i + 1 .
$$\Pr(s_0, s_1, \ldots, s_{N-1}) = \frac{V_{s_0 s_1} V_{s_1 s_2} \cdots V_{s_{N-1} s_N}}{\lambda^N} = \frac{\prod_{i=0}^{N-1} V_{s_i s_{i+1}}}{\lambda^N}$$
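The steps above can be verified numerically for a small chain by comparing $\mathrm{Tr}(V^N)$ with a brute-force sum over all $2^N$ periodic configurations of the step-2 Hamiltonian; this is a sketch for checking the algebra, with illustrative parameter values:

```python
import numpy as np
from itertools import product

def transfer_matrix(J, B, T):
    """V(s, s') = exp[(J s s' + B (s + s')/2) / T] for s = +/-1 (step 5)."""
    s = np.array([1.0, -1.0])
    return np.exp((J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2.0) / T)

def Z_trace(J, B, T, N):
    """Partition function of the periodic N-spin chain as Tr(V^N) (step 9)."""
    return float(np.trace(np.linalg.matrix_power(transfer_matrix(J, B, T), N)))

def Z_brute(J, B, T, N):
    """Direct sum over all 2^N periodic configurations of the step-2 Hamiltonian."""
    total = 0.0
    for cfg in product([1, -1], repeat=N):
        E = sum(-J * cfg[i] * cfg[(i + 1) % N]
                - B * (cfg[i] + cfg[(i + 1) % N]) / 2.0 for i in range(N))
        total += np.exp(-E / T)
    return total
```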

Appendix G. Eigenvalue Decomposition of Transfer Matrix

1.
Express V in terms of its eigenvalue decomposition V = UDU 1 .
$$V = \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix} \begin{pmatrix} \lambda_+ & 0 \\ 0 & \lambda_- \end{pmatrix} \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix}^{-1} = \lambda_+ \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \lambda_-/\lambda_+ \end{pmatrix} \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix}^{-1}$$
2.
Use the fact that in the thermodynamic limit $N \to \infty$, $\lambda_+ \gg \lambda_-$. Rename $\lambda_+$ as $\lambda$.
$$= \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix} \begin{pmatrix} \lambda & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix}^{-1} = \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix} \begin{pmatrix} \lambda & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} u_+ & u_- \\ -u_- & u_+ \end{pmatrix} = \begin{pmatrix} u_+ & -u_- \\ u_- & u_+ \end{pmatrix} \begin{pmatrix} \lambda u_+ & \lambda u_- \\ 0 & 0 \end{pmatrix}$$
Therefore,
$$V = \lambda \begin{pmatrix} u_+^2 & u_+ u_- \\ u_+ u_- & u_-^2 \end{pmatrix}$$
3.
Express the transfer matrix components in terms of the principal eigenvalue $\lambda$ and the principal eigenvector components $u_+$ and $u_-$ in the thermodynamic limit.
$$V(\uparrow,\uparrow) = \lambda u_+^2, \qquad V(\uparrow,\downarrow) = V(\downarrow,\uparrow) = \lambda u_+ u_-, \qquad V(\downarrow,\downarrow) = \lambda u_-^2$$
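The rank-1 replacement of V used above is exact only in the limit; numerically, the relative error of approximating $V^N$ by $\lambda^N u\, u^{\mathrm{T}}$ decays as $(\lambda_-/\lambda_+)^N$, which the sketch below makes visible:

```python
import numpy as np

def rank_one_error(J, B, T, N):
    """Max-entry relative error of approximating V^N by lambda^N u u^T,
    where (lambda, u) is the principal eigenpair of the symmetric nn
    transfer matrix.  The error scales as (lambda_- / lambda_+)^N."""
    s = np.array([1.0, -1.0])
    V = np.exp((J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2.0) / T)
    vals, vecs = np.linalg.eigh(V)              # ascending eigenvalues
    lam, u = vals[-1], np.abs(vecs[:, -1])
    VN = np.linalg.matrix_power(V, N)
    return float(np.max(np.abs(VN - lam ** N * np.outer(u, u))) / lam ** N)
```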

Appendix H. Partition Function of Finite Chain with Fixed Boundary Conditions Embedded on Infinite Chain

Base Case ( L = 3 ):
1.
Consider the partition function of a finite chain of length 3 with fixed boundary conditions.
$$Z_3 = \sum_{s_1 = \pm 1} V(s_0^{\mathrm{fix}}, s_1)\, V(s_1, s_2^{\mathrm{fix}}) = V(s_0^{\mathrm{fix}}, \uparrow)\, V(\uparrow, s_2^{\mathrm{fix}}) + V(s_0^{\mathrm{fix}}, \downarrow)\, V(\downarrow, s_2^{\mathrm{fix}})$$
2.
Express the transfer matrix components in terms of the principal eigenvalue and the principal eigenvector components. For simplicity, we drop the subscripts L and R, because for the nn Ising model the left and right eigenvectors coincide.
$$= \lambda u_{s_0^{\mathrm{fix}}} u_\uparrow \cdot \lambda u_\uparrow u_{s_2^{\mathrm{fix}}} + \lambda u_{s_0^{\mathrm{fix}}} u_\downarrow \cdot \lambda u_\downarrow u_{s_2^{\mathrm{fix}}}$$
$$= \lambda^2 u_{s_0^{\mathrm{fix}}} u_{s_2^{\mathrm{fix}}} u_\uparrow^2 + \lambda^2 u_{s_0^{\mathrm{fix}}} u_{s_2^{\mathrm{fix}}} u_\downarrow^2$$
$$= \lambda^2 u_{s_0^{\mathrm{fix}}} u_{s_2^{\mathrm{fix}}} \left( u_\uparrow^2 + u_\downarrow^2 \right)$$
$$= \lambda^2 u_{s_0^{\mathrm{fix}}} u_{s_2^{\mathrm{fix}}}$$
Inductive Step:
1.
Assume the partition function of a finite chain of length L has the following expression.
$$Z_L = \lambda^{L-1}\, u_{s_0^{\mathrm{fix}}}\, u_{s_{L-1}^{\mathrm{fix}}}$$
2.
Consider Z L + 1 .
$$Z_{L+1} = \sum_{s_1 = \pm 1} \cdots \sum_{s_{L-1} = \pm 1} V(s_0^{\mathrm{fix}}, s_1) \cdots V(s_{L-1}, s_L^{\mathrm{fix}})$$
3.
Sum over $s_{L-1}$.
$$= V(\uparrow, s_L^{\mathrm{fix}}) \cdot \sum_{s_1 = \pm 1} \cdots \sum_{s_{L-2} = \pm 1} V(s_0^{\mathrm{fix}}, s_1) \cdots V(s_{L-2}, \uparrow) + V(\downarrow, s_L^{\mathrm{fix}}) \cdot \sum_{s_1 = \pm 1} \cdots \sum_{s_{L-2} = \pm 1} V(s_0^{\mathrm{fix}}, s_1) \cdots V(s_{L-2}, \downarrow)$$
4.
Replace Equation (A4) in Equation (A5).
$$= V(\uparrow, s_L^{\mathrm{fix}}) \cdot \lambda^{L-1} u_{s_0^{\mathrm{fix}}} u_\uparrow + V(\downarrow, s_L^{\mathrm{fix}}) \cdot \lambda^{L-1} u_{s_0^{\mathrm{fix}}} u_\downarrow$$
5.
Replace Equation (A3) in Equation (A6)
$$= \lambda u_\uparrow u_{s_L^{\mathrm{fix}}} \cdot \lambda^{L-1} u_{s_0^{\mathrm{fix}}} u_\uparrow + \lambda u_\downarrow u_{s_L^{\mathrm{fix}}} \cdot \lambda^{L-1} u_{s_0^{\mathrm{fix}}} u_\downarrow = \lambda^L \left( u_{s_0^{\mathrm{fix}}} u_{s_L^{\mathrm{fix}}} u_\uparrow^2 + u_{s_0^{\mathrm{fix}}} u_{s_L^{\mathrm{fix}}} u_\downarrow^2 \right)$$
6.
Factor.
$$= \lambda^L\, u_{s_0^{\mathrm{fix}}} u_{s_L^{\mathrm{fix}}} \left( u_\uparrow^2 + u_\downarrow^2 \right)$$
7.
Use the normalization condition $u_\uparrow^2 + u_\downarrow^2 = 1$.
$$Z_{L+1} = \lambda^L\, u_{s_0^{\mathrm{fix}}}\, u_{s_L^{\mathrm{fix}}}$$
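Because the induction replaces V by its rank-1 thermodynamic-limit part, the closed form $Z_L = \lambda^{L-1} u_{s_0} u_{s_{L-1}}$ agrees with the exact fixed-boundary sum only up to corrections of order $(\lambda_-/\lambda_+)^{L-1}$; the sketch below checks that the agreement improves with L (spin indices 0/1 stand for ↑/↓, and the parameter values are illustrative):

```python
import numpy as np
from itertools import product

def _V_and_u(J, B, T):
    """Symmetric nn transfer matrix with principal eigenpair (lambda, u)."""
    s = np.array([1.0, -1.0])
    V = np.exp((J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2.0) / T)
    vals, vecs = np.linalg.eigh(V)
    return V, vals[-1], np.abs(vecs[:, -1])

def Z_fixed_exact(J, B, T, L, left, right):
    """Exact sum over the L-2 interior spins with both end spins fixed."""
    V, _, _ = _V_and_u(J, B, T)
    total = 0.0
    for interior in product([0, 1], repeat=L - 2):
        cfg = (left,) + interior + (right,)
        w = 1.0
        for i in range(L - 1):
            w *= V[cfg[i], cfg[i + 1]]
        total += w
    return total

def Z_fixed_formula(J, B, T, L, left, right):
    """Thermodynamic-limit closed form Z_L = lambda^{L-1} u_{s0} u_{s_{L-1}}."""
    _, lam, u = _V_and_u(J, B, T)
    return lam ** (L - 1) * u[left] * u[right]
```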

Appendix I. Joint Probability of Finite Chain Embedded on Infinite Chain

1.
Consider a finite spin chain embedded in an infinite spin chain.
$$s^L = s_0, \ldots, s_{L-1}$$
2.
The embedding of the finite spin chain implies:
  • The thermodynamic limit applies to the finite chain.
  • The magnetization is uniform across the bulk and boundaries of the finite chain.
3.
To ensure uniform magnetization, express Pr embedded in terms of conditional and marginal probabilities to separate the contributions from the bulk and boundaries. For simplicity, we denote Pr embedded as Pr .
$$\Pr(s^L) = \Pr(s^L \mid s_0 \text{ and } s_{L-1} \text{ are fixed})\, \Pr(s_0, s_{L-1})$$
4.
Since $s_0$ and $s_{L-1}$ are independent, their probabilities can be factored as:
$$\Pr(s^L) = \Pr(s^L \mid s_0 = s_0^{\mathrm{fixed}},\, s_{L-1} = s_{L-1}^{\mathrm{fixed}})\, \Pr(s_0)\, \Pr(s_{L-1})$$
5.
Express $\Pr(s^L \mid s_0 \text{ and } s_{L-1} \text{ are fixed})$ as a joint probability using $\Pr(s_i^{\mathrm{fixed}}) = 1$.
$$\Pr(s^L \mid s_0 \text{ and } s_{L-1} \text{ are fixed}) = \Pr(s_0^{\mathrm{fixed}}, \ldots, s_{L-1}^{\mathrm{fixed}})$$
Thus,
$$\Pr(s^L) = \Pr(s_0^{\mathrm{fixed}}, \ldots, s_{L-1}^{\mathrm{fixed}})\, \Pr(s_0)\, \Pr(s_{L-1})$$
6.
Replace relevant joint and marginal probabilities for nn Ising model in Equation (A8).
$$\Pr(s^L) = \frac{\prod_{i=0}^{L-2} V_{s_i s_{i+1}}}{u_{L, s_0}\, u_{R, s_{L-1}}\, \lambda^{L-1}} \cdot u_{L, s_0}^2 \cdot u_{R, s_{L-1}}^2 = u_{L, s_0}\, u_{R, s_{L-1}}\, \frac{\prod_{i=0}^{L-2} V_{s_i s_{i+1}}}{\lambda^{L-1}}$$
7.
To recover Equation (36), consider s L instead of s L
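The final form of Equation (A8) can be checked numerically: the embedded probabilities below sum to one (the principal and subleading eigenvectors are orthogonal), and with J > 0 and B > 0 the most probable length-L configuration is all-up. Spin indices 0/1 stand for ↑/↓, and the symmetric nn Ising model is assumed, so $u_L = u_R = u$:

```python
import numpy as np
from itertools import product

def embedded_probs(J, B, T, L):
    """Pr(s^L) = u_{s0} u_{s_{L-1}} prod_{i=0}^{L-2} V_{s_i s_{i+1}} / lambda^{L-1}
    for every length-L configuration embedded in the infinite chain."""
    s = np.array([1.0, -1.0])
    V = np.exp((J * np.outer(s, s) + B * (s[:, None] + s[None, :]) / 2.0) / T)
    vals, vecs = np.linalg.eigh(V)
    lam, u = vals[-1], np.abs(vecs[:, -1])
    probs = {}
    for cfg in product([0, 1], repeat=L):       # 0 -> up, 1 -> down
        w = u[cfg[0]] * u[cfg[-1]]
        for i in range(L - 1):
            w *= V[cfg[i], cfg[i + 1]]
        probs[cfg] = w / lam ** (L - 1)
    return probs
```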

Appendix J. Finite-Range Ising Model Hamiltonian for R=1,2 and 3

The finite-range Ising model Hamiltonian is written below for neighborhood radii R = 1 , R = 2 , and R = 3 . In the notation used for the Hamiltonian, the neighborhood radius R is denoted by n.
For n = 1 :
$$X_{\eta^j} = -B \sum_{i=0}^{1-1} s_i^j - \sum_{k=1}^{n=1} J_k \sum_{i=0}^{1-k-1} s_i^j s_{i+k}^j = -B s_0^j - 0 = -B s_0^j = -B s_0$$
$$Y_{\eta^j, \eta^{j+1}} = -\sum_{k=1}^{n=1} J_k \sum_{i=0}^{k-1} s_{1-i-1}^j\, s_{k-i-1}^{j+1} = -J_1\, s_0^j s_0^{j+1} = -J_1 s_0 s_1$$
$$X_{\eta^{j+1}} = -B s_0^{j+1} = -B s_1$$
For n = 2 :
$$X_{\eta^j} = -B \sum_{i=0}^{2-1} s_i^j - \sum_{k=1}^{n=2} J_k \sum_{i=0}^{2-k-1} s_i^j s_{i+k}^j = -B\left( s_0^j + s_1^j \right) - J_1 \sum_{i=0}^{2-1-1} s_i^j s_{i+1}^j - J_2 \sum_{i=0}^{2-2-1} s_i^j s_{i+2}^j = -B\left( s_0^j + s_1^j \right) - J_1 s_0^j s_1^j = -B\left( s_0 + s_1 \right) - J_1 s_0 s_1$$
$$Y_{\eta^j, \eta^{j+1}} = -\sum_{k=1}^{n=2} J_k \sum_{i=0}^{k-1} s_{2-i-1}^j\, s_{k-i-1}^{j+1} = -J_1\, s_1^j s_0^{j+1} - J_2\left( s_1^j s_1^{j+1} + s_0^j s_0^{j+1} \right) = -J_1 s_1 s_2 - J_2\left( s_1 s_3 + s_0 s_2 \right)$$
$$X_{\eta^{j+1}} = -B\left( s_0^{j+1} + s_1^{j+1} \right) - J_1 s_0^{j+1} s_1^{j+1} = -B\left( s_2 + s_3 \right) - J_1 s_2 s_3$$
For n = 3 :
$$X_{\eta^j} = -B \sum_{i=0}^{3-1} s_i^j - \sum_{k=1}^{n=3} J_k \sum_{i=0}^{3-k-1} s_i^j s_{i+k}^j = -B\left( s_0^j + s_1^j + s_2^j \right) - J_1\left( s_0^j s_1^j + s_1^j s_2^j \right) - J_2\, s_0^j s_2^j = -B \sum_{i=0}^{2} s_i - J_1\left( s_0 s_1 + s_1 s_2 \right) - J_2\, s_0 s_2$$
$$Y_{\eta^j, \eta^{j+1}} = -\sum_{k=1}^{3} J_k \sum_{i=0}^{k-1} s_{3-i-1}^j\, s_{k-i-1}^{j+1} = -J_1\, s_2^j s_0^{j+1} - J_2\left( s_2^j s_1^{j+1} + s_1^j s_0^{j+1} \right) - J_3\left( s_2^j s_2^{j+1} + s_1^j s_1^{j+1} + s_0^j s_0^{j+1} \right) = -J_1 s_2 s_3 - J_2\left( s_2 s_4 + s_1 s_3 \right) - J_3\left( s_2 s_5 + s_1 s_4 + s_0 s_3 \right)$$
$$X_{\eta^{j+1}} = -B\left( s_0^{j+1} + s_1^{j+1} + s_2^{j+1} \right) - J_1\left( s_0^{j+1} s_1^{j+1} + s_1^{j+1} s_2^{j+1} \right) - J_2\, s_0^{j+1} s_2^{j+1} = -B \sum_{i=3}^{5} s_i - J_1\left( s_3 s_4 + s_4 s_5 \right) - J_2\, s_3 s_5$$
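For n = 2, the decomposition above can be verified on a four-spin open chain: $X_{\eta^0} + Y_{\eta^0, \eta^1} + X_{\eta^1}$ reproduces the direct finite-range Hamiltonian term by term. The sketch below performs this check; the coupling values are illustrative:

```python
from itertools import product

def H_direct(cfg, B, J1, J2):
    """Open-chain finite-range (R = 2) Ising energy:
    H = -B sum_i s_i - J1 sum_i s_i s_{i+1} - J2 sum_i s_i s_{i+2}."""
    N = len(cfg)
    return (-B * sum(cfg)
            - J1 * sum(cfg[i] * cfg[i + 1] for i in range(N - 1))
            - J2 * sum(cfg[i] * cfg[i + 2] for i in range(N - 2)))

def X_block(block, B, J1):
    """Intra-block energy for n = 2: -B (s0 + s1) - J1 s0 s1."""
    s0, s1 = block
    return -B * (s0 + s1) - J1 * s0 * s1

def Y_block(bj, bj1, J1, J2):
    """Inter-block energy for n = 2: -J1 s1 s2 - J2 (s1 s3 + s0 s2)."""
    (s0, s1), (s2, s3) = bj, bj1
    return -J1 * s1 * s2 - J2 * (s1 * s3 + s0 * s2)
```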

Appendix K. Three-Body Model Transfer Matrix

The transfer matrix V of the three-body spin model is shown in Equation (A9). To simplify the notation of the matrix entries, we label each two-spin block as follows: ↑↑ = 1, ↑↓ = 2, ↓↑ = 3, and ↓↓ = 4. Moreover, we set the chemical potential μ to zero.
$$V = \begin{pmatrix} V_{11} & V_{12} & 0 & 0 \\ 0 & 0 & V_{23} & V_{24} \\ V_{31} & V_{32} & 0 & 0 \\ 0 & 0 & V_{43} & V_{44} \end{pmatrix}$$
where
$$V_{11} = \exp\!\left( \frac{\mu + J_1 + J_2 + J_{\mathrm{tb}}}{T} \right), \quad V_{12} = \exp\!\left( \frac{\mu + J_1}{T} \right), \quad V_{23} = \exp\!\left( \frac{\mu + J_2}{T} \right), \quad V_{24} = \exp\!\left( \frac{\mu}{T} \right), \quad V_{31} = V_{32} = V_{43} = V_{44} = 1$$
Note that, unlike the finite-range Ising model, the spin blocks in this model overlap by one spin. Specifically, the last spin in a row label must match the first spin in a column label. For example, the spin block ↑↓ in the second row can only transition to the spin blocks ↓↑ or ↓↓ in the third and fourth columns, as its last spin ↓ matches the first spin of both ↓↑ and ↓↓.
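The structure of Equation (A9) can be reproduced programmatically. The sketch below builds the 4 × 4 matrix by attaching μ and the triple couplings to the leading spin of each overlapping triple when that spin is ↑ (occupation 1); this attribution, and the signs inside the exponentials, are assumptions chosen to be consistent with the entries listed above:

```python
import numpy as np

def three_body_transfer_matrix(J1, J2, Jtb, mu, T):
    """4x4 transfer matrix over two-spin blocks (1 = up-up, 2 = up-down,
    3 = down-up, 4 = down-down) that overlap by one spin.  An entry (a, b)
    is nonzero only when the last spin of block a equals the first spin of
    block b.  The Boltzmann weight attaches mu, J1, J2, Jtb to the leading
    spin of the triple when it is up (occupation 1); otherwise it is 1."""
    blocks = [(1, 1), (1, 0), (0, 1), (0, 0)]   # 1 = up, 0 = down
    V = np.zeros((4, 4))
    for a, (s0, s1) in enumerate(blocks):
        for b, (t0, s2) in enumerate(blocks):
            if s1 != t0:
                continue                         # overlap constraint
            E = mu * s0 + J1 * s0 * s1 + J2 * s0 * s2 + Jtb * s0 * s1 * s2
            V[a, b] = np.exp(E / T)
    return V
```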

References

  1. Aaronson, S.; Carroll, S.M.; Ouellette, L. Quantifying the Rise and Fall of Complexity in Closed Systems: The Coffee Automaton. arXiv 2014, arXiv:1405.6903. [Google Scholar] [CrossRef]
  2. Feldman, D.P.; Crutchfield, J.P. Discovering Noncritical Organization: Statistical Mechanical, Information Theoretic, and Computational Views of Patterns in One-Dimensional Spin Systems. Entropy 2022, 24, 1282. [Google Scholar] [CrossRef]
  3. Bak, P. How Nature Works: The Science of Self-Organized Criticality, ebook ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  4. Rothstein, J. Information, Measurement, and Quantum Mechanics. Science 1951, 114, 171–175. [Google Scholar] [CrossRef] [PubMed]
  5. Eliazar, I. Five degrees of randomness. Phys. A Stat. Mech. Its Appl. 2021, 568, 125662. [Google Scholar] [CrossRef]
  6. Sethna, J.P. Statistical Mechanics: Entropy, Order Parameters, and Complexity; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
  7. Goldenfeld, N. Lectures on Phase Transitions and the Renormalization Group; Frontiers in Physics; Westview Press: Boca Raton, FL, USA, 1992; Volume 85. [Google Scholar] [CrossRef]
  8. Chaikin, P.M.; Lubensky, T.C. Principles of Condensed Matter Physics; Cambridge University Press: Cambridge, UK, 1995. [Google Scholar]
  9. Kadanoff, L.P. Scaling Laws for Ising Models Near Tc. Phys. Phys. Fiz. 1966, 2, 263–272. [Google Scholar] [CrossRef]
  10. Yeomans, J.M. Statistical Mechanics of Phase Transitions; Clarendon Press: Oxford, UK, 1992. [Google Scholar]
  11. Krinsky, S.; Furman, D. Exact renormalization-group exhibiting tricritical fixed point for a spin-one Ising model in one dimension. Phys. Rev. B 1975, 11, 2602–2613. [Google Scholar] [CrossRef]
  12. Gheissari, R.; Hongler, C.; Park, S.C. Ising Model: Local Spin Correlations and Conformal Invariance. Commun. Math. Phys. 2019, 367, 771–833. [Google Scholar] [CrossRef]
  13. Schulz, M.; Trimper, S. Analytical and Numerical Studies of the One-Dimensional Spin Facilitated Kinetic Ising Model. J. Stat. Phys. 1999, 94, 173–194. [Google Scholar] [CrossRef]
  14. Lacombe, R.H.; Simha, R. One-dimensional Ising model: Kinetic studies. J. Chem. Phys. 1974, 61, 1899–1911. [Google Scholar] [CrossRef]
  15. Landau, D.P.; Binder, K. Some necessary background. In A Guide to Monte Carlo Simulations in Statistical Physics, 4th ed.; Cambridge University Press: Cambridge, UK, 2015; pp. 7–46. [Google Scholar]
  16. McCoy, B.M.; Wu, T.T. Theory of a Two-Dimensional Ising Model with Random Impurities. I. Thermodynamics. Phys. Rev. 1968, 176, 631–643. [Google Scholar] [CrossRef]
  17. Beale, P.D. Exact Distribution of Energies in the Two-Dimensional Ising Model. Phys. Rev. Lett. 1996, 76, 78–81. [Google Scholar] [CrossRef] [PubMed]
  18. Köfinger, J.; Dellago, C. Single-file water as a one-dimensional Ising model. New J. Phys. 2010, 12, 093044. [Google Scholar] [CrossRef] [PubMed]
  19. Tsypin, M.M.; Blote, H.W.J. Probability distribution of the order parameter for the three-dimensional Ising-model universality class: A high-precision Monte Carlo study. Phys. Rev. E 2000, 62, 73–76. [Google Scholar] [CrossRef]
  20. Chatelain, C.; Karevski, D. Probability distributions of the work in the two-dimensional Ising model. J. Stat. Mech. Theory Exp. 2006, 2006, P06005. [Google Scholar] [CrossRef]
  21. Pathria, R.K.; Beale, P.D. Statistical Mechanics, 3rd ed.; Butterworth-Heinemann: Oxford, UK, 2011. [Google Scholar]
  22. Derrida, B. Random-Energy Model: Limit of a Family of Disordered Models. Phys. Rev. Lett. 1980, 45, 79–82. [Google Scholar] [CrossRef]
  23. Tribus, M. Thermostatics and Thermodynamics: An Introduction to Energy, Information and States of Matter, with Engineering Applications; D. Van Nostrand Company, Inc.: Princeton, NJ, USA, 1961. [Google Scholar]
  24. Bateson, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology; Jason Aronson Inc.: Northvale, NJ, USA; London, UK, 1987. [Google Scholar]
  25. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2006. [Google Scholar]
  26. Shaw, R. The Dripping Faucet as a Model Chaotic System; Aerial Press: Santa Cruz, CA, USA, 1984. [Google Scholar]
  27. Crutchfield, J.P.; Packard, N.H. Symbolic dynamics of noisy chaos. Phys. D 1983, 7, 201–223. [Google Scholar] [CrossRef]
  28. Grassberger, P. Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys. 1986, 25, 907–938. [Google Scholar] [CrossRef]
  29. Lindgren, K.; Nordahl, M.G. Complexity measures and cellular automata. Complex Syst. 1988, 2, 409–440. [Google Scholar]
  30. Crutchfield, J.P.; Young, K. Inferring statistical complexity. Phys. Rev. Lett. 1989, 63, 105–108. [Google Scholar] [CrossRef]
  31. Crutchfield, J.P. The calculi of emergence: Computation, dynamics, and induction. Phys. D 1994, 75, 11–54. [Google Scholar] [CrossRef]
  32. Crutchfield, J.P. Between order and chaos. Nat. Phys. 2012, 8, 17–24. [Google Scholar] [CrossRef]
  33. Shalizi, C.R.; Crutchfield, J.P. Computational Mechanics: Pattern and Prediction, Structure and Simplicity. J. Stat. Phys. 2001, 104, 817–879. [Google Scholar] [CrossRef]
  34. Hopcroft, J.E.; Ullman, J.D. Introduction to Automata Theory, Languages, and Computation, 2nd ed.; Addison-Wesley: Carrollton, TX, USA, 2001. [Google Scholar]
  35. Crutchfield, J.P.; Shalizi, C.R. Thermodynamic depth of causal states: Objective complexity via minimal representations. Phys. Rev. E 1999, 59, 275–283. [Google Scholar] [CrossRef]
  36. Still, S.; Crutchfield, J.P.; Ellison, C.J. Optimal causal inference: Estimating stored information and approximating causal architecture. Chaos 2010, 20, 037111. [Google Scholar] [CrossRef]
  37. Strelioff, C.C.; Crutchfield, J.P. Bayesian structural inference for hidden processes. Phys. Rev. E 2014, 89, 042119. [Google Scholar] [CrossRef]
  38. Marzen, S.E.; Crutchfield, J.P. Predictive Rate–Distortion for Infinite-Order Markov Processes. J. Stat. Phys. 2016, 163, 1312–1338. [Google Scholar] [CrossRef]
  39. Rupe, A.; Kumar, N.; Epifanov, V.; Kashinath, K.; Pavlyk, O.; Schimbach, F.; Patwary, M.; Maidanov, S.; Lee, V.; Prabhat; et al. Disco: Physics-based unsupervised discovery of coherent structures in spatiotemporal systems. In Proceedings of the 2019 IEEE/ACM Workshop on Machine Learning in High Performance Computing Environments (MLHPC), Denver, CO, USA, 18 November 2019; pp. 75–87. [Google Scholar] [CrossRef]
  40. Rupe, A.; Crutchfield, J.P. Spacetime Autoencoders Using Local Causal States. arXiv 2020, arXiv:2010.05451. [Google Scholar] [CrossRef]
  41. Brodu, N.; Crutchfield, J.P. Discovering causal structure with reproducing-kernel Hilbert space ϵ-machines. Chaos Interdiscip. J. Nonlinear Sci. 2022, 32, 023103. [Google Scholar] [CrossRef] [PubMed]
  42. Jurgens, A.M.; Brodu, N. Inferring kernel epsilon-machines: Discovering structure in complex systems. Chaos Interdiscip. J. Nonlinear Sci. 2025, 35, 033162. [Google Scholar] [CrossRef]
  43. Feldman, D.P.; Crutchfield, J.P. Structural information in two-dimensional patterns: Entropy convergence and excess entropy. Phys. Rev. E 2003, 67, 051104. [Google Scholar] [CrossRef] [PubMed]
  44. Vijayaraghavan, V.S.; James, R.G.; Crutchfield, J.P. Anatomy of a Spin: The Information-Theoretic Structure of Classical Spin Systems. Entropy 2017, 19, 214. [Google Scholar] [CrossRef]
  45. Aghamohammadi, C.; Mahoney, J.R.; Crutchfield, J.P. Extreme Quantum Advantage when Simulating Classical Systems with Long-Range Interaction. Sci. Rep. 2017, 7, 6735. [Google Scholar] [CrossRef] [PubMed]
  46. Aghamohammadi, C.; Mahoney, J.R.; Crutchfield, J.P. The ambiguity of simplicity in quantum and classical simulation. Phys. Lett. A 2017, 381, 1223–1227. [Google Scholar] [CrossRef]
  47. Chattopadhyay, P.; Paul, G. Revisiting thermodynamics in computation and information theory. arXiv 2024, arXiv:2102.09981. [Google Scholar]
  48. Chu, D.; Spinney, R.E. A thermodynamically consistent model of finite-state machines. Interface Focus 2018, 8, 20180037. [Google Scholar] [CrossRef]
  49. Strasberg, P.; Cerrillo, J.; Schaller, G.; Brandes, T. Thermodynamics of stochastic Turing machines. arXiv 2015, arXiv:1506.00894. [Google Scholar] [CrossRef]
  50. Wolpert, D.H.; Scharnhorst, J. Stochastic Process Turing Machines. arXiv 2024, arXiv:2410.07131. [Google Scholar] [CrossRef]
  51. Li, L.; Chang, L.; Cleaveland, R.; Zhu, M.; Wu, X. The Quantum Abstract Machine. arXiv 2024, arXiv:2402.13469. [Google Scholar] [CrossRef]
  52. Bhatia, A.S.; Kumar, A. Quantum finite automata: Survey, status and research directions. arXiv 2019, arXiv:1901.07992. [Google Scholar] [CrossRef]
  53. Wang, D.S. A local model of quantum Turing machines. arXiv 2019, arXiv:1912.03767. [Google Scholar] [CrossRef]
  54. Molina, A.; Watrous, J. Revisiting the simulation of quantum Turing machines by quantum circuits. Proc. R. Soc. A Math. Phys. Eng. Sci. 2019, 475, 20180767. [Google Scholar] [CrossRef]
  55. Alves, N.A.; Berg, B.A.; Villanova, R. Ising-model Monte Carlo simulations: Density of states and mass gap. Phys. Rev. B 1990, 41, 383–386. [Google Scholar] [CrossRef]
  56. Lin, Y.; Wang, F.; Zheng, X.; Gao, H.; Zhang, L. Monte Carlo simulation of the Ising model on FPGA. J. Comput. Phys. 2013, 237, 224–234. [Google Scholar] [CrossRef]
  57. Ferrenberg, A.M.; Xu, J.; Landau, D.P. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model. Phys. Rev. E 2018, 97, 043301. [Google Scholar] [CrossRef]
  58. MacKay, D.J. Information Theory, Inference, and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  59. Myshlyavtsev, A.V. Surface Diffusion Modelling: Transfer Matrix Approach. In Studies in Surface Science and Catalysis; Guerrero-Ruiz, A., Rodríguez-Ramos, I., Eds.; Elsevier Science B.V.: Amsterdam, The Netherlands, 2001; Volume 138, pp. 173–190. [Google Scholar]
  60. Flack, J.C. Coarse-graining as a downward causation mechanism. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2017, 375, 20160338. [Google Scholar] [CrossRef] [PubMed]
  61. Shalizi, C.R.; Moore, C. What Is a Macrostate? Subjective Observations and Objective Dynamics. Found. Phys. 2025, 55, 2. [Google Scholar] [CrossRef]
  62. Ny, A.L. Introduction to (generalized) Gibbs measures. arXiv 2007, arXiv:0712.1171. [Google Scholar] [CrossRef]
  63. Muir, S. A new characterization of Gibbs measures on N Z d . Nonlinearity 2011, 24, 2933–2952. [Google Scholar] [CrossRef]
  64. Ganikhodjaev, N. Limiting Gibbs measures of Potts model with countable set of spin values. J. Math. Anal. Appl. 2007, 336, 693–703. [Google Scholar] [CrossRef]
  65. Lind, D.; Marcus, B. An Introduction to Symbolic Dynamics and Coding, 2nd ed.; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
  66. Crutchfield, J.P.; Feldman, D.P. Regularities Unseen, Randomness Observed: Levels of Entropy Convergence. arXiv 2001, arXiv:cond-mat/0102181. [Google Scholar] [CrossRef] [PubMed]
  67. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign-Urbana, IL, USA, 1963. [Google Scholar]
  68. Feldman, D. A Brief Introduction to Information Theory, Excess Entropy, and Computational Mechanics; College of the Atlantic: Bar Harbor, ME, USA, 1998. [Google Scholar]
  69. Shalizi, C.R. Causal Architecture, Complexity, and Self-Organization in Time Series and Cellular Automata. Ph.D. Dissertation, University of Wisconsin-Madison, Madison, WI, USA, 2001. Available online: http://bactra.org/thesis/single-spaced-thesis.pdf (accessed on 13 January 2026).
  70. Marzen, S.E.; Crutchfield, J.P. Probabilistic Deterministic Finite Automata and Recurrent Networks, Revisited. Entropy 2022, 24, 90. [Google Scholar] [CrossRef]
  71. Young, K.; Crutchfield, J.P. Fluctuation Spectroscopy. Chaos Solitons Fractals 1994, 4, 5–39. [Google Scholar] [CrossRef]
  72. Young, K.A. The Grammar and Statistical Mechanics of Complex Physical Systems. Ph.D. Dissertation, University of California, Santa Cruz, CA, USA, 1991. [Google Scholar]
  73. Dennett, D.C. Real Patterns. J. Philos. 1991, 88, 27–51. [Google Scholar] [CrossRef]
  74. Kikuchi, R. Statistical Mechanics of Liquid He3. Phys. Rev. 1955, 99, 1666–1671. [Google Scholar] [CrossRef]
  75. Dobson, J.F. Many-Neighbored Ising Chain. J. Math. Phys. 1969, 10. [Google Scholar] [CrossRef]
  76. Slotnick, M. Magnetic Neutron Diffraction from Exchange-Coupled Lattices at High Temperatures. Phys. Rev. 1951, 83, 996–1000. [Google Scholar] [CrossRef]
  77. Zener, C.; Heikes, R.R. Exchange Interactions. Rev. Mod. Phys. 1953, 25, 191–201. [Google Scholar] [CrossRef]
  78. Zarubin, A.V.; Kassan-Ogly, F.A.; Proshkin, A.I.; Shestakov, A.E. Frustration Properties of the 1D Ising Model. J. Exp. Theor. Phys. 2019, 128, 778–807. [Google Scholar] [CrossRef]
  79. Mutallib, K.A.; Barry, J.H. Frustration in a generalized Kagome Ising antiferromagnet: Exact results. Phys. Rev. E 2022, 106, 014149. [Google Scholar] [CrossRef]
  80. Moessner, R.; Sondhi, S.L. Ising models of quantum frustration. Phys. Rev. B 2001, 63, 224401. [Google Scholar] [CrossRef]
  81. Burton, W.K.; Cabrera, N.; Frank, F.C. The Growth of Crystals and the Equilibrium Structure of Their Surfaces. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Sci. 1951, 243, 299–358. [Google Scholar] [CrossRef]
  82. Weeks, J.D. The Roughening Transition. In Ordering in Strongly Fluctuating Condensed Matter Systems; Riste, T., Ed.; NATO Advanced Study Institutes Series: Series B, Physics; Plenum Press: New York, NY, USA, 1980; Volume 50, pp. 293–315. [Google Scholar]
  83. Privman, V.; Švrakić, N.M. Difference Equations in Statistical Mechanics. II. Solid-on-Solid Models in Two Dimensions. J. Stat. Phys. 1988, 51, 819–834. [Google Scholar] [CrossRef]
  84. Abraham, D.B. Solvable Model with a Roughening Transition for a Planar Ising Ferromagnet; Department of Mathematics, University of Newcastle: Newcastle, Australia, 1979. [Google Scholar]
  85. Wang, J.; Feng, X.; Anderson, C.W.; Xing, Y.; Shang, L. Remediation of mercury contaminated sites—A review. J. Hazard. Mater. 2012, 221, 1–18. [Google Scholar] [CrossRef]
  86. Aparicio, J.D.; Raimondo, E.E.; Saez, J.M.; Costa-Gutierrez, S.B.; Alvarez, A.; Benimeli, C.S.; Polti, M.A. The current approach to soil remediation: A review of physicochemical and biological technologies, and the potential of their strategic combination. J. Environ. Chem. Eng. 2022, 10, 107141. [Google Scholar] [CrossRef]
  87. Zhdanov, V.P. Lattice-Gas Model for Description of the Adsorbed Molecules of Two Kinds. Surf. Sci. 1981, 111, 63–79. [Google Scholar] [CrossRef]
  88. Redhead, P.A. Thermal Desorption of Gases. Vacuum 1962, 12, 203–211. [Google Scholar] [CrossRef]
  89. Morris, M.A.; Bowker, M.; King, D.A. Kinetics of Adsorption, Desorption and Diffusion at Metal Surfaces. In Comprehensive Chemical Kinetics; Elsevier: Amsterdam, The Netherlands, 1984; Volume 19, pp. 1–179. [Google Scholar]
  90. Zhdanov, V.P.; Zamaraev, K.I. Lattice-gas model of chemisorption on metal surfaces. Sov. Phys. Uspekhi 1986, 29, 755. [Google Scholar] [CrossRef]
  91. Myshlyavtsev, A.V.; Sales, J.L.; Zgrablich, G.; Zhdanov, V.P. The Effect of Three-Body Interactions on Thermal Desorption Spectra. J. Chem. Phys. 1989, 91, 7500–7506. [Google Scholar] [CrossRef]
Figure 1. Depiction of a finite spin configuration embedded within an infinite spin configuration with periodic boundary conditions.
Figure 2. Graphical representation of coarse-grained Ising phase space. Only the purple spins are assigned fixed indices. For clarity, down spins ↓ are represented as 0 instead of −1.
Figure 3. Graphical representation of configuration and ensemble pattern concepts.
Figure 4. Illustration of spin interactions in Ising models with interaction radii R = 1 (top), R = 2 (middle), and R = 3 (bottom).
Figure 5. (a) h μ , E and C μ vs. J 1 for nnn Ising model with J 2 = 1.2 , B = 0.05 and T = 1 . (b) h μ , E and C μ vs. B for 3-range Ising model with J 1 = 2.8 , J 2 = 1.3 , J 3 = 0.45 and T = 0.2 .
Figure 6. (a) ϵ -machine of 3-range Ising model with B = 0.2 , T = 4 , J 1 = 1 , J 2 = 1 and J 3 = 1 . (b) ϵ -machine of 3-range Ising model with B = 8 , T = 0.2 , J 1 = 3 , J 2 = 2 and J 3 = 2 .
Figure 7. Illustration of spin interactions in a 2D spin lattice with the leftmost and rightmost spins fixed to opposite values. The dashed black lines highlight the induced 1D spin chain interface.
Figure 8. (a) h μ , E and C μ vs. U for SOS model with W = 0 , V = e^{n_y} and T = 1 . (b) h μ , E and C μ vs. U for SOS model with W = 1 , V = e^{n_y} and T = 1 .
Figure 9. (a) ϵ -machine for SOS model with U = 2 , W = 0 , V = e^{n_y} , T = 1 and C μ ≈ 0.61 . (b) ϵ -machine for SOS model with U = 1 , W = 1 , V = e^{n_y} , T = 1 and C μ ≈ 0.33 .
Figure 10. Illustration of spin interactions in three-body models: nearest-neighbor (purple), next-nearest neighbor (green), and three-body (orange) couplings.
Figure 11. (a) h μ , E and C μ vs. T for three-body model with J 1 = 0 , J 2 = 0 and J t = 1 . (b) h μ , E and C μ vs. T for three-body model with J 1 = 1 , J 2 = 0 and J t = 1 .
Figure 12. (a) ϵ -machine for three-body model with J 1 = 1 , J 2 = 0 , J t = 1 , T = 0.025 . (b) ϵ -machine for three-body model with J 1 = 1 , J 2 = 0 , J t = 1 , T = 2 .

Share and Cite

MDPI and ACS Style

Aguilar, O. What Is a Pattern in Statistical Mechanics? Formalizing Structure and Patterns in One-Dimensional Spin Lattice Models with Computational Mechanics. Entropy 2026, 28, 123. https://doi.org/10.3390/e28010123
