# Typical = Random

## Abstract


## 1. Introduction

## 2. Some Background on Entropy and Probability

- In statistical mechanics as developed by Boltzmann in 1877 [6], and more generally in what one might call “Boltzmann-style statistical mechanics”, which is based on typicality arguments [29], N is the number of (distinguishable) particles under consideration, and A could be a finite set of single-particle energy levels. More generally, $a\in A$ is some property each particle may separately have, such as its location in cell $X_a$ relative to some partition$$X=\bigsqcup_{a\in A}X_{a}.$$A microstate is then a map$$\sigma:\{0,1,\dots,N-1\}\to A,$$and with respect to the uniform distribution $f(a)=1/q$ on A, where $q=|A|$, each microstate has probability$$P_{f}^{N}(\{\sigma\})=|A|^{-N}=q^{-N}.$$Each finite string $\sigma\in A^{N}$ also labels a cylinder set of infinite sequences,$$[\sigma]:=\sigma A^{\omega}=\{s\in A^{\omega}\mid\sigma\prec s\},$$on which the measure $P_{p}^{\omega}$ on $A^{\omega}$ induced by $p\in\mathrm{Prob}(A)$ satisfies$$P_{p}^{\omega}([\sigma])=P_{p}^{N}(\sigma),$$so that in particular$$P_{f}^{\omega}([\sigma])=|A|^{-N}.$$Boltzmann's formula$$S=k\log W$$then becomes$$S_{B}^{N}(\mu)=\log W^{N}(\mu),\qquad W^{N}(\mu)=\frac{N(\mu)}{|A|^{N}},$$where the macrostate $\mu$ is a possible value of the empirical measure$$L_{N}(\sigma)=\frac{1}{N}\sum_{n=0}^{N-1}\delta_{\sigma(n)},$$and $N(\mu)$ counts the microstates $\sigma\in A^{N}$ with $L_{N}(\sigma)=\mu$, i.e.,$$N(\mu)=\frac{N!}{\prod_{a\in A}(\mu'(a)!)},$$with $\mu'(a):=N\mu(a)$. Since $P_{p}^{N}(\sigma)$ depends on $\sigma$ only through $\mu=L_{N}(\sigma)$,$$P_{p}^{N}(\sigma)=e^{N\sum_{a\in A}\mu(a)\log p(a)}=e^{-N(h(\mu)+I(\mu|p))},$$where$$h(\mu):=-\sum_{a\in A}\mu(a)\log\mu(a);$$$$I(\mu|p):=\sum_{a\in A}\mu(a)\log\left(\frac{\mu(a)}{p(a)}\right),$$so that in the uniform case$$I(\mu|f)=-h(\mu)+\log|A|.$$Consequently,$$W^{N}(\mu)=P_{p}^{N}(L_{N}=\mu)=N(\mu)\,P_{p}^{N}(\sigma)=\frac{N!\,e^{-N(h(\mu)+I(\mu|p))}}{\prod_{a\in A}(\mu'(a)!)},$$and the Boltzmann entropy per particle converges, along any sequence of macrostates $\mu_{N}\to\mu$, to$$s_{B}(\mu|p):=\lim_{N\to\infty}\frac{S_{B}^{N}(\mu_{N})}{N}=-I(\mu|p),$$which in the uniform case reads$$s_{B}(\mu|f)=h(\mu)-\log|A|.$$For more general measure spaces A the relative entropy is$$I(\mu|p):={\int}_{A}dp\,\frac{d\mu}{dp}\log\left(\frac{d\mu}{dp}\right).$$The stochastic process $X_{N}:\Omega\to\mathcal{X}$ whose large fluctuations are described by (22) is$$\mathcal{X}=\mathrm{Prob}(A);\qquad\Omega=A^{\omega};\qquad P=P_{p}^{\omega};\qquad X_{N}=L_{N}.$$The ensuing large deviation principle (Sanov's theorem) reads$$\lim_{N\to\infty}\frac{1}{N}\log P_{p}^{N}(L_{N}\in\Gamma)=-I(\Gamma|p):=-\inf_{\mu\in\Gamma}I(\mu|p)=\sup_{\mu\in\Gamma}s_{B}(\mu|p),$$informally,$$P_{p}^{N}(L_{N}\in\Gamma)\approx e^{-NI(\Gamma|p)}\qquad\mathrm{as}\;\;N\to\infty.$$Interpreting A as a set of energy levels, the relevant stochastic process $X_{N}:\Omega\to\mathcal{X}$ still has $\Omega$ and P as in (25), but this time $X_{N}:\Omega\to\mathbb{R}$ is the average energy, defined by$$X_{N}(\sigma)\equiv E_{N}(\sigma)=\frac{1}{N}\sum_{n=0}^{N-1}\sigma(n).$$The corresponding large deviation principle (Cramér's theorem) is$$\lim_{N\to\infty}\frac{1}{N}\log P_{p}^{N}(E_{N}\in\Delta)=\sup_{u\in\Delta}s_{C}(u|p);$$$$s_{C}(u|p):=\sup_{\mu\in\mathrm{Prob}(A)}\left\{s_{B}(\mu|p)\;\Big|\;\sum_{a\in A}\mu(a)\cdot a=u\right\}.$$In terms of the free energy$$f(\beta|p)=\log\left(\sum_{a\in A}p(a)e^{-\beta a}\right),$$the functions $s_{C}$ and f are related by the Legendre transforms$$\beta f(\beta|p)=\inf_{u\in\mathbb{R}}\{\beta u-s_{C}(u|p)\};\qquad s_{C}(u|p)=\inf_{\beta\in\mathbb{R}}\{\beta u-\beta f(\beta|p)\}.$$
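The identity $P_{p}^{N}(\sigma)=e^{-N(h(\mu)+I(\mu|p))}$ with $\mu=L_{N}(\sigma)$ is exact and easy to check numerically. The following Python sketch is illustrative only (the alphabet, the distribution $p$, and the sample string are made up); it computes the empirical measure, the Shannon entropy $h(\mu)$, and the relative entropy $I(\mu|p)$:

```python
import math
from collections import Counter

def empirical(sigma):
    """Empirical (type) measure L_N of a string sigma over a finite alphabet."""
    N = len(sigma)
    return {a: c / N for a, c in Counter(sigma).items()}

def shannon_entropy(mu):
    """h(mu) = -sum_a mu(a) log mu(a), natural logarithm."""
    return -sum(m * math.log(m) for m in mu.values() if m > 0)

def relative_entropy(mu, p):
    """I(mu|p) = sum_a mu(a) log(mu(a)/p(a))."""
    return sum(m * math.log(m / p[a]) for a, m in mu.items() if m > 0)

# Check the exact identity P_p^N(sigma) = exp(-N (h(L_N) + I(L_N|p)))
# on a made-up example.
p = {'a': 0.5, 'b': 0.25, 'c': 0.25}
sigma = "aabacbabca"
N = len(sigma)
prob = math.prod(p[s] for s in sigma)          # P_p^N(sigma)
mu = empirical(sigma)                           # L_N(sigma)
assert abs(prob - math.exp(-N * (shannon_entropy(mu) + relative_entropy(mu, p)))) < 1e-12
```

Note that the identity holds for every finite string, not just asymptotically; the large-N statements above then follow from Stirling-type estimates on $N(\mu)$.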
- In information theory as developed by Shannon [35] (see also [36,37,38]), the “N” in our diagram is the number of letters drawn from an alphabet A by sampling a given probability distribution $p\in\mathrm{Prob}(A)$, the space of all probability distributions on A. So each microstate $\sigma\in A^{N}$ is a word with N letters. The entropy of p is$$h_{2}(p):=-\sum_{a\in A}p(a)\log_{2}p(a)=\sum_{a\in A}p(a)I_{2}(a),$$where$$I_{2}(a):=-\log_{2}p(a)$$is the information content of the letter a. A (prefix) code C assigns to each letter a a binary code-word $C(a)$ of length $\ell(C(a))$, with expected length$$L(C,p)=\sum_{a\in A}p(a)\,\ell(C(a)).$$Shannon's source coding theorem then states:
- Any prefix code satisfies ${h}_{2}\left(p\right)\le L(C,p)$;
- There exists an optimal prefix code C, which satisfies $L(C,p)\le {h}_{2}\left(p\right)+1$.
- One has ${h}_{2}\left(p\right)=L(C,p)$ iff $\ell \left(C\left(a\right)\right)={I}_{2}\left(a\right)$ for each $a\in A$ (if this is possible).

Of course, the equality $\ell(C(a))=I_{2}(a)$ can only be satisfied if $p(a)={2}^{-k}$ for some integer $k\in\mathbb{N}$. Otherwise, one can find a code for which $\ell(C(a))=\lceil I_{2}(a)\rceil$, the smallest integer $\ge I_{2}(a)$; see e.g., [Section 5.4] in [36]. Thus the information content $I_{2}(a)$ is approximately the length of the code-word $C(a)$ in some optimal coding C. Passing to our case of interest of N-letter words over A, in the case of a memoryless source one simply has the Bernoulli measure $P_{p}^{N}$ on $A^{N}$, with entropy$$H_{2}(P_{p}^{N})=-\sum_{\sigma\in A^{N}}P_{p}^{N}(\sigma)\log_{2}P_{p}^{N}(\sigma)=N\,h_{2}(p),$$and optimal codes $C^{N}$ satisfy$$\lim_{N\to\infty}\frac{L(C^{N},P_{p}^{N})}{N}=h_{2}(p).$$The asymptotic equipartition property states that$$\forall_{\epsilon>0}\;\lim_{N\to\infty}P_{p}^{N}\left(\left\{\sigma\in A^{N}\mid P_{p}^{N}(\sigma)\in[2^{-N(h_{2}(p)+\epsilon)},2^{-N(h_{2}(p)-\epsilon)}]\right\}\right)=1,$$or, in its almost-sure form,$$P_{p}^{\omega}\left(\left\{s\in A^{\omega}\mid\lim_{N\to\infty}-\frac{1}{N}\log_{2}P_{p}^{N}(s_{|N})=h_{2}(p)\right\}\right)=1.$$

- In dynamical systems along the lines of the ubiquitous Kolmogorov [39], one starts with a triple $(X,P,T)$, where X–more precisely $(X,\Sigma)$, but I usually suppress the $\sigma$-algebra $\Sigma$–is a measure space, P is a probability measure on X (more precisely, on $\Sigma$), and $T:X\to X$ is a measurable (but not necessarily invertible) map, required to preserve P in the sense that $P(T^{-1}B)=P(B)$ for any $B\in\Sigma$.
A measurable coarse-graining (5) defines a map$$\xi:X\to A^{\omega};\qquad\xi(x)_{n}=a\in A\;\;\mathrm{iff}\;\;T^{n}x\in X_{a},$$which intertwines T with the shift$$S:A^{\omega}\to A^{\omega};\qquad(Ss)_{n}:=s_{n+1}\;\;(n=0,1,\dots),$$in the sense that$$S\circ\xi=\xi\circ T.$$The point of Kolmogorov's approach is to refine the partition (5), which I now denote by$$\pi=\{X_{a},a\in A\}\subset\Sigma\subset P(X),$$by forming the cells$$X_{\sigma_{0}\cdots\sigma_{N-1}}:=X_{\sigma_{0}}\cap T^{-1}X_{\sigma_{1}}\cap\cdots\cap T^{-(N-1)}X_{\sigma_{N-1}},$$so that x lies in the cell labeled by $\sigma_{0}\cdots\sigma_{N-1}$ iff$$(x,Tx,\dots,T^{N-1}x)\in X^{N};\qquad\xi(x)_{|N}=\sigma_{0}\cdots\sigma_{N-1}\in A^{N}\qquad(T^{n}x\in X_{\sigma_{n}},\;0\le n<N).$$Writing $\pi^{N}$ for the refined partition and $\pi^{N}(x)$ for its cell containing x, define$$I_{(X,P,T,\pi^{N})}(x):=-\log_{2}P(\pi^{N}(x)),$$$$H_{(X,P,T,\pi^{N})}:=\langle I_{(X,P,T,\pi^{N})}\rangle_{P}={\int}_{X}dP(x)\,I_{(X,P,T,\pi^{N})}(x)=-\sum_{Y\in\pi^{N}}P(Y)\log_{2}(P(Y)),$$as well as the entropies$$h_{(X,P,T,\pi)}:=\lim_{N\to\infty}\frac{1}{N}H_{(X,P,T,\pi^{N})},$$$$h_{(X,P,T)}:=\sup_{\pi}h_{(X,P,T,\pi)}.$$We say that $(X,P,T)$ is ergodic if for every T-invariant set $A\in\Sigma$ (i.e., $T^{-1}A=A$), either $P(A)=0$ or $P(A)=1$. For later use, I now state three equivalent conditions for ergodicity of $(X,P,T)$; see e.g., [Section 4.1] in [40].
Namely, $(X,P,T)$ is ergodic if and only if for P-almost every x:$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}\delta_{T^{n}x}=P\qquad(\text{weakly in }\mathrm{Prob}(X));$$$$\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1}f(T^{n}x)={\int}_{X}dP\,f\qquad\text{for each }f\in L^{1}(X,P);$$$$\lim_{N\to\infty}\frac{1}{N}\left|\{n\in\{0,\dots,N-1\}:T^{n}x\in B\}\right|=P(B)\qquad\text{for each }B\in\Sigma.$$
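The second condition (Birkhoff averages) is easy to see at work numerically. The sketch below is an illustration, not part of the formalism above: it uses the irrational rotation $Tx=x+\alpha \bmod 1$ on $[0,1)$, which preserves Lebesgue measure and is ergodic, and checks that a time average approaches the space average.

```python
import math

# Irrational rotation T(x) = x + alpha mod 1 preserves Lebesgue measure and
# is ergodic for irrational alpha, so Birkhoff averages of an integrable f
# converge to its integral for (here: every) starting point.
alpha = (math.sqrt(5) - 1) / 2          # golden-ratio conjugate, irrational
f = lambda x: math.cos(2 * math.pi * x) # integral over [0,1) equals 0

N = 100_000
x = 0.1234                              # arbitrary starting point
total = 0.0
for _ in range(N):
    total += f(x)
    x = (x + alpha) % 1.0

assert abs(total / N) < 1e-3            # time average ~ space average = 0
```

For this particular map the convergence is very fast (the partial sums are a bounded geometric-type sum); for generic ergodic systems one only gets almost-sure convergence at an unspecified rate.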
**Theorem 1.** If $(X,P,T)$ is ergodic, then for P-almost every $x\in X$ one has$$h_{(X,P,T,\pi)}=-\lim_{N\to\infty}\frac{1}{N}\log_{2}P(\pi^{N}(x)).$$
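Theorem 1 (Shannon–McMillan–Breiman) can likewise be probed by simulation. The following sketch uses a made-up two-state Markov shift with its time-zero partition, for which the entropy equals the familiar Markov entropy rate; it samples one trajectory and compares $-\frac{1}{N}\log_{2}P(\pi^{N}(x))$ with that rate.

```python
import math, random

# Two-state Markov shift: transition matrix P and stationary distribution pi.
# Along a typical trajectory, -(1/N) log2 P(length-N cylinder) tends to the
# entropy rate h = -sum_i pi_i sum_j P_ij log2 P_ij.
random.seed(1)
P = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.5, 1: 0.5}}
pi = {0: 5 / 6, 1: 1 / 6}   # solves pi P = pi

h = -sum(pi[i] * sum(P[i][j] * math.log2(P[i][j]) for j in (0, 1)) for i in (0, 1))

N = 200_000
state = 0
logp = math.log2(pi[0])     # log-probability of the observed cylinder
for _ in range(N - 1):
    nxt = 0 if random.random() < P[state][0] else 1
    logp += math.log2(P[state][nxt])
    state = nxt

assert abs(-logp / N - h) < 0.02   # empirical rate matches the entropy rate
```

The tolerance is a pragmatic choice; the theorem itself only asserts almost-sure convergence as $N\to\infty$.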

## 3. P-Randomness

**Definition 1.**

1. A topological space X is effective if it has a countable base $\mathcal{B}\subset\mathcal{O}(X)$ with a bijection$$B:\mathbb{N}\stackrel{\cong}{\to}\mathcal{B}.$$
2. An open set $V\in\mathcal{O}(X)$ as in 1. is computable if for some computable function $f:\mathbb{N}\to\mathbb{N}$,$$V=\bigcup_{n\in\mathbb{N}}B(f(n)),$$or, equivalently, if$$V=\bigcup_{n\in E}B(n)$$for some computably enumerable set $E\subset\mathbb{N}$.
3. A sequence $(V_{n})$ of opens $V_{n}\in\mathcal{O}(X)$ is computable if$$V_{n}=\bigcup_{m\in\mathbb{N}}B(g(n,m))$$for some computable function $g:\mathbb{N}^{2}\to\mathbb{N}$, or, equivalently, if$$V_{n}=\bigcup_{m\mid(n,m)\in G}B(m)$$for some computably enumerable set $G\subset\mathbb{N}^{2}$.
4. A (randomness) test is a computable sequence $(V_{n})$ as in 3. for which for all $n\in\mathbb{N}$ one has$$P(V_{n})\le 2^{-n};\qquad V_{n+1}\subset V_{n}.$$
5. A point $x\in X$ is P-random if $x\notin N$ for any subset $N\subset X$ of the form$$N=\bigcap_{n}V_{n},$$where $(V_{n})$ is a test.
6. A measure P in an effective probability space $(X,B,P)$ is upper semi-computable if the set$$U(P):=\left\{(F,q)\in P_{f}(\mathbb{N})\times\mathbb{Q}\;\Big|\;P\Big(\bigcup_{n\in F}B(n)\Big)<q\right\}$$is computably enumerable, where $P_{f}(\mathbb{N})$ denotes the set of finite subsets of $\mathbb{N}$.
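For intuition, here is a toy finite-stage sketch of a test in the sense of items 4–5, for the fair-coin measure on binary sequences. The helper `P` and the family `V` are illustrative constructions (not notation from the text): the basic opens are cylinders $[\sigma]$ with $P([\sigma])=2^{-|\sigma|}$, and $V_{n}$ is the set of sequences starting with n zeros.

```python
from fractions import Fraction

# Toy Martin-Lof-style test over the fair-coin measure on {0,1}^omega.
def P(cylinders):
    """Measure of a disjoint union of cylinders, each given by its base string."""
    return sum(Fraction(1, 2 ** len(s)) for s in cylinders)

# V_n = [0^n], the cylinder of sequences whose first n bits are all 0.
V = {n: ["0" * n] for n in range(1, 20)}

# Test conditions of Definition 1(4): P(V_n) <= 2^{-n} and V_{n+1} subset V_n
# (a longer base string picks out a smaller cylinder).
assert all(P(V[n]) <= Fraction(1, 2 ** n) for n in V)
assert all(V[n + 1][0].startswith(V[n][0]) for n in range(1, 19))

# The all-zero sequence lies in every V_n, hence in the effective null set
# N = intersection_n V_n: it fails this test, so it is not P-random.
zeros = "0" * 25
assert all(zeros.startswith(V[n][0]) for n in V)
```

A genuinely P-random sequence must escape every such effective shrinking family, which is why tests of this kind capture "effectively null" exceptional sets.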

**Definition 2.**

**Definition 3.**

1. The inequality (64) holds;
2. $V_{n+1}\subset V_{n}$ (as in Definition 1 (4));
3. $\sigma\in V_{n}$ and $\sigma\prec\tau$ imply $\tau\in V_{n}$ (i.e., extensions of $\sigma\in V_{n}$ also belong to $V_{n}$).

**Definition 4.**

1. A string $\sigma\in A^{\ast}$ is q-random (for some $q\in\mathbb{N}$) if $m_{U}(\sigma)<q\le|\sigma|$.
2. A sequence $s\in A^{\omega}$ is Calude random (with respect to $P=P_{f}^{\omega}$) if there is a constant $q\in\mathbb{N}$ such that each finite segment $s_{|N}\in A^{N}\subset A^{\ast}$ is q-random, i.e., such that for all N,$$m_{U}(s_{|N})<q.$$
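The universal quantity $m_{U}$ in Definition 4 is not computable, so such randomness can never be certified algorithmically; in practice one can only bound descriptive complexity from above, for instance with an off-the-shelf compressor. The sketch below is illustrative only (compressed length is a crude proxy, not the quantity $m_U$): a periodic string compresses drastically, while fair-coin data does not compress at all.

```python
import zlib, random

# Kolmogorov-style complexity is uncomputable, but any compressor gives an
# upper bound on description length: regular strings compress well, while
# strings sampled from a fair coin are, with high probability, incompressible
# up to a small additive overhead.
random.seed(2)
N = 10_000
regular = b"01" * (N // 2)                            # highly regular
coin = bytes(random.getrandbits(8) for _ in range(N)) # "fair-coin" bytes

assert len(zlib.compress(regular, 9)) < N // 50       # tiny description
assert len(zlib.compress(coin, 9)) > N - 200          # essentially no gain
```

This asymmetry (easy to refute randomness, impossible to prove it) mirrors the role of the strict inequality $m_{U}(s_{|N})<q$ above.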

**Theorem 2.**

**Definition 5.**

**Theorem 3.**

While idealizations are useful and, perhaps, even essential to progress in physics, a sound principle of interpretation would seem to be that no effect can be counted as a genuine physical effect if it disappears when the idealizations are removed. (Earman, [56] (p. 191)).

**Theorem 4.**

**Theorem 5.**

## 4. From ‘for P-Almost Every x’ to ‘for All P-Random x’

**Theorem 6.**

**Theorem 7.**

**Theorem 8.**

**Definition 6.**

**Theorem 9.**

**Theorem 10.**

**Theorem 11.**

**Theorem 12.**

**Theorem 13.**

1. For $P_{W}$-almost every $B\in C[0,1]$ there exists $h_{0}>0$ such that$$|B(t+h)-B(t)|\le\sqrt{2h\log(1/h)}$$for all $0<h<h_{0}$ and all $0\le t\le 1-h$, and $\sqrt{2}$ is the best constant for which this is true.
2. $P_{W}$-almost every $B\in C[0,1]$ is locally Hölder continuous with index $0<\alpha<1/2$.
3. $P_{W}$-almost every $B\in C[0,1]$ is not differentiable at any $t\in[0,1]$.
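Item 1 can be visualised by simulation. The sketch below is a crude discretisation (grid size, lags, and tolerances are arbitrary choices, and a random walk only approximates a Brownian path): it generates a path on $[0,1]$ and compares the largest increment over a small lag h with $\sqrt{2h\log(1/h)}$, expecting a ratio of order 1.

```python
import math, random

# Random-walk approximation to Brownian motion on [0,1]; compare the worst
# increment over lag h with the Levy-type modulus sqrt(2 h log(1/h)).
random.seed(3)
n = 1 << 16                        # grid points
dt = 1.0 / n
B = [0.0]
for _ in range(n):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

for k in (16, 64, 256):            # lags h = k * dt
    h = k * dt
    bound = math.sqrt(2 * h * math.log(1 / h))
    worst = max(abs(B[i + k] - B[i]) for i in range(n - k + 1))
    ratio = worst / bound
    assert 0.5 < ratio < 1.5       # sup-increment is comparable to the bound
```

For a discretised path at finite resolution the ratio can slightly exceed 1; the theorem's statement concerns the limit of small h for a true Brownian path.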

**Theorem 14.**

## 5. Applications to Statistical Mechanics

It is the author’s view that many of the most important questions still remain unanswered in very fundamental and important ways. (Sklar [2] (p. 413)).

What many “chefs” regard as absolutely essential and indispensable, is argued to be insufficient or superfluous by many others. (Uffink [3] (p. 925)).

1. Coarse-graining (only certain macroscopic quantities behave irreversibly);
2. Probability (irreversible behaviour is just very likely–or, in infinite systems, almost sure).

- The microstates of the Kac ring model for finite N are pairs$$\begin{array}{ccccc}\hfill ({x}^{\left(N\right)},{y}^{\left(N\right)})\in {2}^{2N+1}\times {2}^{2N+1}\equiv {A}^{N};& \phantom{\rule{1.em}{0ex}}\hfill & \hfill {x}^{\left(N\right)}=({x}_{-N},\dots ,{x}_{N});& \phantom{\rule{1.em}{0ex}}\hfill & \hfill {y}^{\left(N\right)}=({y}_{-N},\dots ,{y}_{N}),\end{array}$$$$({x}^{\left(N\right)},{y}^{\left(N\right)})\stackrel{N\to \infty}{\u27f6}(x,y)\in {2}^{\mathbb{Z}}\times {2}^{\mathbb{Z}}\equiv {A}^{\omega}.$$
- The macrostates of the model, which replace the distribution function (99), form a pair$$\begin{array}{ccc}\hfill {m}^{\left(N\right)}:{A}^{N}\to [0,1],& \phantom{\rule{1.em}{0ex}}\hfill & \hfill {m}^{\left(N\right)}({x}^{\left(N\right)},{y}^{\left(N\right)}):=\frac{1}{2N+1}\sum _{k=-N}^{N}{x}_{k};\end{array}$$$$\begin{array}{ccc}\hfill {s}^{\left(N\right)}:{A}^{N}\to [0,1],& \phantom{\rule{1.em}{0ex}}\hfill & \hfill {s}^{\left(N\right)}({x}^{\left(N\right)},{y}^{\left(N\right)})=\frac{1}{2N+1}\sum _{k=-N}^{N}{y}_{k}.\end{array}$$
- The microdynamics, which replaces the time evolution $({\mathbf{r}}_{0}(t),{\mathbf{v}}_{0}(t),\dots,{\mathbf{r}}_{N-1}(t),{\mathbf{v}}_{N-1}(t))$ generated by Newton's equations with some potential, is now discretized, and is given by maps$$T^{(N)}:A^{N}\to A^{N};\qquad\left(T^{(N)}(x,y)\right)_{n+1}:=\begin{cases}(x_{n},y_{n})&(y_{n}=0);\\(1-x_{n},y_{n})&(y_{n}=1),\end{cases}$$with the periodic identification$$(x_{N+1},y_{N+1})=(x_{-N},y_{-N}).$$
- The macrodynamics, which replaces the solution of the Boltzmann equation, is given by$$\Phi:[0,1]\times[0,1]\to[0,1]\times[0,1];\qquad\Phi(\overline{m},\overline{s})=\left((1-2\overline{s})\left(\overline{m}-\frac{1}{2}\right)+\frac{1}{2},\,\overline{s}\right).$$In particular, for $t\in\mathbb{N}$ one has$$\Phi^{t}(\overline{m},\overline{s})=\left((1-2\overline{s})^{t}\left(\overline{m}-\frac{1}{2}\right)+\frac{1}{2},\,\overline{s}\right),$$so that, for $0<\overline{s}<1$,$$\lim_{t\to\infty}\Phi^{t}(\overline{m},\overline{s})=\left(\frac{1}{2},\overline{s}\right).$$
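The approach of the Kac ring's magnetisation to the macrodynamics $\Phi^{t}$ (valid for $t\ll N$) is easy to simulate. In the sketch below (all parameter values are arbitrary illustrative choices), each spin hops one site per step and flips at marked bonds, and the simulated magnetisation is compared with the prediction $(1-2\overline{s})^{t}(\overline{m}_{0}-\tfrac12)+\tfrac12$:

```python
import random

# Kac ring: spins x_k on N sites, fixed markers y_k on the bonds. One step
# shifts each spin to the next site, flipping it when the bond is marked.
# Macroscopically, m_t - 1/2 = (1 - 2 s)^t (m_0 - 1/2) for t << N.
random.seed(4)
N = 200_000
y = [1 if random.random() < 0.2 else 0 for _ in range(N)]  # markers
x = [1] * N                     # start fully magnetised: m_0 = 1
s = sum(y) / N                  # empirical marker density

m0 = 1.0
for t in range(1, 11):
    # spin arriving at site n+1 comes from site n, flipped if y_n = 1
    x = [x[n] ^ y[n] for n in range(-1, N - 1)]
    m_sim = sum(x) / N
    m_macro = (1 - 2 * s) ** t * (m0 - 0.5) + 0.5
    assert abs(m_sim - m_macro) < 0.03
```

The agreement here reflects the Stosszahlansatz-like decorrelation for early times; at times comparable to the recurrence time the exact microdynamics of course returns to its initial state, which is the point of the model.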

**Theorem 15.**

## 6. Applications to Quantum Mechanics

**Theorem 16.**

**Proof.**

## 7. Summary

1. Is it probability or randomness that “comes first”? How are these concepts related?
2. Could the notion of “typicality” as it is used in Boltzmann-style statistical mechanics [29] be replaced by some precise mathematical form of randomness?
3. Are “typical” trajectories in “chaotic” dynamical systems (i.e., those with high Kolmogorov–Sinai entropy) random in the same, or some similar sense?


## References

- Brush, S.G. The Kind of Motion We Call Heat; North-Holland: Amsterdam, The Netherlands, 1976. [Google Scholar]
- Sklar, L. Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
- Uffink, J. Compendium of the foundations of classical statistical physics. In Handbook of the Philosophy of Science; Butterfield, J., Earman, J., Eds.; North-Holland: Amsterdam, The Netherlands, 2007; Volume 2: Philosophy of Physics; Part B; pp. 923–1074. [Google Scholar]
- Uffink, J. Boltzmann’s Work in Statistical Physics. The Stanford Encyclopedia of Philosophy; Summer 2022 Edition; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2022; Available online: https://plato.stanford.edu/archives/sum2022/entries/statphys-Boltzmann/ (accessed on 15 June 2023).
- Von Plato, J. Creating Modern Probability; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
- Boltzmann, L. Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respektive den Sätzen über das Wärmegleichgewicht. Wien. Berichte
**1877**, 76, 373–435, Reprinted in Boltzmann, L. Wissenschaftliche Abhandlungen, Hasenöhrl, F., Ed.; Chelsea: London, UK, 1969; Volume II, p. 39. English translation in Sharp, K.; Matschinsky, F., Entropy**2015**, 17, 1971–2009. [Google Scholar] [CrossRef] [Green Version] - Einstein, A. Zum gegenwärtigen Stand des Strahlungsproblems. Phys. Z.
**1909**, 10, 185–193, Reprinted in The Collected Papers of Albert Einstein; Stachel, J., et al., Eds.; Princeton University Press: Princeton, NJ, USA, 1990; Volume 2; Doc.56, pp. 542–550. Available online: https://einsteinpapers.press.princeton.edu/vol2-doc/577 (accessed on 15 June 2023); English Translation Supplement. pp. 357–375. Available online: https://einsteinpapers.press.princeton.edu/vol2-trans/371 (accessed on 15 June 2023). - Ellis, R.S. Entropy, Large Deviations, and Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 1985. [Google Scholar]
- Ellis, R.S. An overview of the theory of large deviations and applications to statistical mechanics. Scand. Actuar. J.
**1995**, 1, 97–142. [Google Scholar] [CrossRef] - Lanford, O.E. Entropy and Equilibrium States in Classical Statistical Mechanics; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 1973; Volume 20, pp. 1–113. [Google Scholar]
- Martin-Löf, A. Statistical Mechanics and the Foundations of Thermodynamics; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 1979; Volume 101, pp. 1–120. [Google Scholar]
- McKean, H. Probability: The Classical Limit Theorems; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
- Von Mises, R. Grundlagen der Wahrscheinlichkeitsrechnung. Math. Z.
**1919**, 5, 52–99. [Google Scholar] [CrossRef] [Green Version] - Von Mises, R. Wahrscheinlichkeit, Statistik, und Wahrheit, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1936. [Google Scholar]
- Van Lambalgen, M. Random Sequences. Ph.D. Thesis, University of Amsterdam, Amsterdam, The Netherlands, 1987. Available online: https://www.academia.edu/23899015/RANDOM_SEQUENCES (accessed on 15 June 2023).
- Van Lambalgen, M. Randomness and foundations of probability: Von Mises’ axiomatisation of random sequences. In Statistics, Probability and Game Theory: Papers in Honour of David Blackwell; IMS Lecture Notes–Monograph Series; IMS: Beachwood, OH, USA, 1996; Volume 30, pp. 347–367. [Google Scholar]
- Porter, C.P. Mathematical and Philosophical Perspectives on Algorithmic Randomness. Ph.D. Thesis, University of Notre Dame, Notre Dame, IN, USA, 2012. Available online: https://www.cpporter.com/wp-content/uploads/2013/08/PorterDissertation.pdf (accessed on 15 June 2023).
- Kolmogorov, A.N. Grundbegriffe der Wahrscheinlichkeitsrechnung; Springer: Berlin/Heidelberg, Germany, 1933. [Google Scholar]
- Kolmogorov, A.N. Three Approaches to the Quantitative Definition of Information. Probl. Inf. Transm.
**1965**, 1, 3–11. Available online: http://alexander.shen.free.fr/library/Kolmogorov65_Three-Approaches-to-Information.pdf (accessed on 15 June 2023). [CrossRef] - Kolmogorov, A.N. Logical Basis for information theory and probability theory. IEEE Trans. Inf. Theory
**1968**, 14, 662–664. [Google Scholar] [CrossRef] [Green Version] - Cover, T.M.; Gács, P.; Gray, R.M. Kolmogorov’s contributions to information theory and algorithmic complexity. Ann. Probab.
**1989**, 17, 840–865. [Google Scholar] [CrossRef] - Li, M.; Vitányi, P.M.B. An Introduction to Kolmogorov Complexity and Its Applications, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Porter, C.P. Kolmogorov on the role of randomness in probability theory. Math. Struct. Comput. Sci.
**2014**, 24, e240302. [Google Scholar] [CrossRef] [Green Version] - Zvonkin, A.K.; Levin, L.A. The complexity of finite objects and the development of the concepts of information and randomness by means of the theory of algorithms. Russ. Math. Surv.
**1970**, 25, 83–124. [Google Scholar] [CrossRef] [Green Version] - Landsman, K. Randomness? What randomness? Found. Phys.
**2020**, 50, 61–104. [Google Scholar] [CrossRef] [Green Version] - Porter, C.P. The equivalence of definitions of algorithmic randomness. Philos. Math.
**2021**, 29, 153–194. [Google Scholar] [CrossRef] - Georgii, H.-O. Probabilistic aspects of entropy. In Entropy; Greven, A., Keller, G., Warnecke, G., Eds.; Princeton University Press: Princeton, NJ, USA, 2003; pp. 37–54. [Google Scholar]
- Grünwald, P.D.; Vitányi, P.M.B. Kolmogorov complexity and Information theory. With an interpretation in terms of questions and answers. J. Logic, Lang. Inf.
**2003**, 12, 497–529. [Google Scholar] [CrossRef] - Bricmont, J. Making Sense of Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Austin, T. Math 254A: Entropy and Ergodic Theory. 2017. Available online: https://www.math.ucla.edu/~tim/entropycourse.html (accessed on 15 June 2023).
- Dembo, A.; Zeitouni, O. Large Deviations: Techniques and Applications, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
- Dorlas, T.C. Statistical Mechanics: Fundamentals and Model Solutions, 2nd ed.; CRC: Boca Raton, FL, USA, 2022. [Google Scholar]
- Ellis, R.S. The theory of large deviations: From Boltzmann’s 1877 calculation to equilibrium macrostates in 2D turbulence. Physica D
**1999**, 133, 106–136. [Google Scholar] [CrossRef] - Borwein, J.M.; Zhu, Q.J. Techniques of Variational Analysis; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J.
**1948**, 27, 379–423. [Google Scholar] [CrossRef] [Green Version] - Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
- Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci.
**2014**, 24, e240311. [Google Scholar] [CrossRef] [Green Version] - MacKay, D.J. Information Theory, Inference, and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Kolmogorov, A.N. New metric invariant of transitive dynamical systems and endomorphisms of Lebesgue spaces. Dokl. Russ. Acad. Sci.
**1958**, 119, 861–864. [Google Scholar] - Viana, M.; Oliveira, K. Foundations of Ergodic Theory; Cambridge University Press: Cambridge, UK, 2016. [Google Scholar]
- Charpentier, E.; Lesne, A.; Nikolski, N.K. Kolmogorov’s Heritage in Mathematics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Castiglione, P.; Falcioni, M.; Lesne, A.; Vulpiani, A. Chaos and Coarse Graining in Statistical Mechanics; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
- Martin-Löf, P. The definition of random sequences. Inf. Control
**1966**, 9, 602–619. [Google Scholar] [CrossRef] [Green Version] - Hertling, P.; Weihrauch, K. Random elements in effective topological spaces with measure. Inform. Comput.
**2003**, 181, 32–56. [Google Scholar] [CrossRef] [Green Version] - Hoyrup, M.; Rojas, C. Computability of probability measures and Martin-Löf randomness over metric spaces. Inf. Comput.
**2009**, 207, 830–847. [Google Scholar] [CrossRef] [Green Version] - Gács, P.; Hoyrup, M.; Rojas, C. Randomness on computable probability spaces—A dynamical point of view. Theory Comput. Syst.
**2011**, 48, 465–485. [Google Scholar] [CrossRef] [Green Version] - Bienvenu, L.; Gács, P.; Hoyrup, M.; Rojas, C. Algorithmic tests and randomness with respect to a class of measures. Proc. Steklov Inst. Math.
**2011**, 274, 34–89. [Google Scholar] [CrossRef] [Green Version] - Hoyrup, M.; Rute, J. Computable measure theory and algorithmic randomness. In Handbook of Computability and Complexity in Analysis; Springer: Berlin/Heidelberg, Germany, 2021; pp. 227–270. [Google Scholar]
- Calude, C.S. Information and Randomness: An Algorithmic Perspective, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
- Nies, A. Computability and Randomness; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
- Downey, R.; Hirschfeldt, D.R. Algorithmic Randomness and Complexity; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Kjos-Hansen, B.; Szabados, T. Kolmogorov complexity and strong approximation of Brownian motion. Proc. Am. Math. Soc.
**2011**, 139, 3307–3316. [Google Scholar] [CrossRef] [Green Version] - Chaitin, G.J. A theory of program size formally identical to information theory. J. ACM
**1975**, 22, 329–340. [Google Scholar] [CrossRef] - Gács, P. Exact expressions for some randomness tests. In Theoretical Computer Science 4th GI Conference; Weihrauch, K., Ed.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 1979; Volume 67, pp. 124–131. [Google Scholar]
- Levin, L.A. On the notion of a random sequence. Sov. Math.-Dokl.
**1973**, 14, 1413–1416. [Google Scholar] - Earman, J. Curie’s Principle and spontaneous symmetry breaking. Int. Stud. Phil. Sci.
**2004**, 18, 173–198. [Google Scholar] [CrossRef] - Mörters, P.; Peres, Y. Brownian Motion; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
- Billingsley, P. Convergence of Probability Measures; Wiley: Hoboken, NJ, USA, 1968. [Google Scholar]
- Asarin, E.A.; Pokrovskii, A.V. Use of the Kolmogorov complexity in analysing control system dynamics. Autom. Remote Control
**1986**, 47, 21–28. [Google Scholar] - Fouché, W.L. Arithmetical representations of Brownian motion I. J. Symb. Log.
**2000**, 65, 421–442. [Google Scholar] [CrossRef] - Fouché, W.L. The descriptive complexity of Brownian motion. Adv. Math.
**2000**, 155, 317–343. [Google Scholar] [CrossRef] [Green Version] - Vovk, V.G. The law of the iterated logarithm for random Kolmogorov, or chaotic, sequences. Theory Probab. Its Appl.
**1987**, 32, 413–425. [Google Scholar] [CrossRef] - Brattka, V.; Miller, J.S.; Nies, A. Randomness and differentiability. Trans. Am. Math. Soc.
**2016**, 368, 581–605. [Google Scholar] [CrossRef] [Green Version] - Rute, J. Algorithmic Randomness and Constructive/Computable Measure Theory; Franklin & Porter: New York, NY, USA, 2020; pp. 58–114. [Google Scholar]
- Downey, R.; Griffiths, E.; Laforte, G. On Schnorr and computable randomness, martingales, and machines. Math. Log. Q.
**2004**, 50, 613–627. [Google Scholar] [CrossRef] - Bienvenu, L.; Day, A.R.; Hoyrup, M.; Mezhirov, I.; Shen, A. A constructive version of Birkhoff’s ergodic theorem for Martin-Löf random points. Inf. Comput.
**2012**, 210, 21–30. [Google Scholar] [CrossRef] [Green Version] - Galatolo, S.; Hoyrup, M.; Rojas, C. Effective symbolic dynamics, random points, statistical behavior, complexity and entropy. Inf. Comput.
**2010**, 208, 23–41. [Google Scholar] [CrossRef] [Green Version] - Pathak, N.; Rojas, C.; Simpson, S. Schnorr randomness and the Lebesgue differentiation theorem. Proc. Am. Math. Soc.
**2014**, 142, 335–349. [Google Scholar] [CrossRef] [Green Version] - V’yugin, V. Effective convergence in probability and an ergodic theorem for individual random sequences. SIAM Theory Probab. Its Appl.
**1997**, 42, 39–50. [Google Scholar] [CrossRef] - Towsner, H. Algorithmic Randomness in Ergodic Theory; Franklin & Porter: New York, NY, USA, 2020; pp. 40–57. [Google Scholar]
- V’yugin, V. Ergodic theorems for algorithmically random points. arXiv
**2022**, arXiv:2202.13465. [Google Scholar] - Brudno, A.A. Entropy and the complexity of the trajectories of a dynamic system. Trans. Mosc. Math. Soc.
**1983**, 44, 127–151. [Google Scholar] - White, H.S. Algorithmic complexity of points in dynamical systems. Ergod. Theory Dyn. Syst.
**1993**, 15, 353–366. [Google Scholar] [CrossRef] - Batterman, R.W.; White, H. Chaos and algorithmic complexity. Found. Phys.
**1996**, 26, 307–336. [Google Scholar] [CrossRef] - Porter, C.P. Biased Algorithmic Randomness; Franklin and Porter: New York, NY, USA, 2020; pp. 206–231. [Google Scholar]
- Brudno, A.A. The complexity of the trajectories of a dynamical system. Russ. Math. Surv.
**1978**, 33, 207–208. [Google Scholar] [CrossRef] - Schack, R. Algorithmic information and simplicity in statistical physics. Int. J. Theor. Phys.
**1997**, 36, 209–226. [Google Scholar] [CrossRef] [Green Version] - Fouché, W.L. Dynamics of a generic Brownian motion: Recursive aspects. Theor. Comput. Sci.
**2008**, 394, 175–186. [Google Scholar] [CrossRef] [Green Version] - Allen, K.; Bienvenu, L.; Slaman, T. On zeros of Martin-Löf random Brownian motion. J. Log. Anal.
**2014**, 6, 1–34. [Google Scholar] - Fouché, W.L.; Mukeru, S. On local times of Martin-Löf random Brownian motion. arXiv
**2022**, arXiv:2208.01877. [Google Scholar] - Hiura, K.; Sasa, S. Microscopic reversibility and macroscopic irreversibility: From the viewpoint of algorithmic randomness. J. Stat. Phys.
**2019**, 177, 727–751. [Google Scholar] [CrossRef] [Green Version] - Boltzmann, L. Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen. Wien. Berichte
**1872**, 66, 275–370, Reprinted in Boltzmann, L. Wissenschaftliche Abhandlungen; Hasenöhrl, F., Ed.; Chelsea: London, UK, 1969; Volume I, p. 23. English translation in Brush, S. The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary; Imperial College Press: London, UK, 2003; pp. 262–349. [Google Scholar] - Lanford, O.E. Time evolution of large classical systems. In Dynamical Systems, Theory and Applications; Lecture Notes in Theoretical Physics; Moser, J., Ed.; Springer: Berlin/Heidelberg, Germany, 1975; Volume 38, pp. 1–111. [Google Scholar]
- Lanford, O.E. On the derivation of the Boltzmann equation. Astérisque
**1976**, 40, 117–137. [Google Scholar] - Ardourel, V. Irreversibility in the derivation of the Boltzmann equation. Found. Phys.
**2017**, 47, 471–489. [Google Scholar] [CrossRef] - Villani, C. A review of mathematical topics in collisional kinetic theory. In Handbook of Mathematical Fluid Dynamics; Friedlander, S., Serre, D., Eds.; Elsevier: Amsterdam, The Netherlands, 2002; Volume 1, pp. 71–306. [Google Scholar]
- Villani, C. (Ir)reversibility and Entropy. In Time Progress in Mathematical Physics; Duplantier, B., Ed.; (Birkhäuser): Basel, Switzerland, 2013; Volume 63, pp. 19–79. [Google Scholar]
- Bouchet, F. Is the Boltzmann equation reversible? A Large Deviation perspective on the irreversibility paradox. J. Stat. Phys.
**2020**, 181, 515–550. [Google Scholar] [CrossRef] - Bodineau, T.; Gallagher, I.; Saint-Raymond, L.; Simonella, S. Statistical dynamics of a hard sphere gas: Fluctuating Boltzmann equation and large deviations. arXiv
**2020**, arXiv:2008.10403. [Google Scholar] - Aldous, D.L. Exchangeability and Related Topics; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1985; Volume 1117, pp. 1–198. [Google Scholar]
- Sznitman, A. Topics in Propagation of Chaos; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1991; Volume 1464, pp. 164–251. [Google Scholar]
- Kac, M. Probability and Related Topics in Physical Sciences; Wiley: Hoboken, NJ, USA, 1959. [Google Scholar]
- Maes, C.; Netocny, K.; Shergelashvili, B. A selection of nonequilibrium issues. arXiv
**2007**, arXiv:math-ph/0701047. [Google Scholar] - De Bièvre, S.; Parris, P.E. A rigourous demonstration of the validity of Boltzmann’s scenario for the spatial homogenization of a freely expanding gas and the equilibration of the Kac ring. J. Stat. Phys.
**2017**, 168, 772–793. [Google Scholar] [CrossRef] [Green Version] - Landsman, K. Foundations of Quantum Theory: From Classical Concepts to Operator Algebras; Springer Open: Berlin/Heidelberg, Germany, 2017; Available online: https://www.springer.com/gp/book/9783319517766 (accessed on 15 June 2023).
- Landsman, K. Indeterminism and undecidability. In Undecidability, Uncomputability, and Unpredictability; Aguirre, A., Merali, Z., Sloan, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2021; pp. 17–46. [Google Scholar]
- Goldstein, S. Bohmian Mechanics. The Stanford Encyclopedia of Philosophy. 2017. Available online: https://plato.stanford.edu/archives/sum2017/entries/qm-bohm/ (accessed on 15 June 2023).
- Landsman, K. Bohmian mechanics is not deterministic. Found. Phys.
**2022**, 52, 73. [Google Scholar] [CrossRef] - Dürr, D.; Goldstein, S.; Zanghi, N. Quantum equilibrium and the origin of absolute uncertainty. J. Stat. Phys.
**1992**, 67, 843–907. [Google Scholar] [CrossRef] [Green Version] - Franklin, J.Y.; Porter, C.P. (Eds.) Algorithmic Randomness: Progress and Prospects; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]


© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Landsman, K.
Typical = Random. *Axioms* **2023**, *12*, 727.
https://doi.org/10.3390/axioms12080727
