# A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions


## Abstract


## 1. Introduction and Preliminaries

#### 1.1. Applications of Algorithmic (Kolmogorov) Complexity

#### 1.2. Applications of Algorithmic Probability

#### 1.3. Beyond Incomputability

## 2. Alternatives to Lossless Compression

#### 2.1. Time to Stop Fooling Ourselves

#### 2.2. The Regime of Short Strings

#### 2.3. Agreement of Distributions for High Frequency Elements

The theory of algorithmic complexity is of course now widely accepted, but it was initially rejected by many because algorithmic complexity depends on the choice of universal Turing machine, and because short binary sequences cannot be usefully discussed from the perspective of algorithmic complexity. [However, of CTM] … discovered, employing [t]his empirical, experimental approach, the fact that most reasonable choices of formalisms for describing short sequences of bits give consistent measures of algorithmic complexity! So the dreaded theoretical hole in the foundations of algorithmic complexity turns out, in practice, not to be as serious as was previously assumed.
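The coding-theorem approach behind this observation can be illustrated with a toy sketch (illustrative only; it is not the actual CTM enumeration over Turing machines): enumerate a small, exhaustively runnable program space, record how often each output is produced, and convert frequency into a complexity estimate. Here the "programs" are the 256 elementary cellular automaton rules run from a fixed initial condition:

```python
from collections import Counter
from math import log2

WIDTH, STEPS = 9, 8

def eca_output(rule, width=WIDTH, steps=STEPS):
    """Run elementary cellular automaton `rule` from a single-1 row
    on a cyclic tape and return the final row as a bit string."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
    return "".join(map(str, row))

# Empirical output distribution over the whole (tiny) program space.
counts = Counter(eca_output(r) for r in range(256))
total = sum(counts.values())

def ctm_estimate(s):
    """Coding-theorem-style estimate: the more frequently an output is
    produced across the program space, the simpler it is deemed."""
    return -log2(counts[s] / total) if s in counts else None
```

Frequently produced outputs (e.g., the all-zero row, which many rules converge to) receive low complexity estimates, while rarely produced outputs receive high ones, mirroring the shape of the empirical distributions discussed in this section.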

#### 2.4. Problematic Use of Statistical Arguments

#### 2.5. Reality Check: From Theory to Practice

#### 2.6. Limitations and Challenges of All Approaches

#### 2.7. Relaxing Necessary Conditions and Studying the Implications

… Also very important is the Numerical Limitations subsection in the How It Works subpage.

The numerical limitations of CTM are the ultimate incomputability of the universal distribution and the constant involved in the invariance theorem, which we have nevertheless quantified and estimated to be apparently under control, with values converging even in the face of variations of the computational model. Our papers cover these limitations, and their consequences should be taken into account.
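The invariance theorem referred to here, in its standard formulation, says that for any two universal reference machines $U$ and $V$ there is a constant $c_{UV}$, independent of the string $s$, such that

```latex
|K_U(s) - K_V(s)| \leq c_{UV} \quad \text{for all strings } s,
```

so complexity values obtained with different reference machines differ by at most an additive constant; what CTM quantifies empirically is how large this constant turns out to be across natural choices of small machines.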

For BDM, the limitations are explored in [38], and they are related to boundary conditions and to the limitations of CTM itself. The paper also shows that when CTM is not updated, BDM starts approximating Shannon entropy over the long range, yet the local estimations of CTM shed light on the algorithmic causal nature of even large objects.
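A minimal sketch of how BDM combines local CTM values with multiplicity counts follows (the `toy_ctm` table below is hypothetical and for illustration only; in a real setting the CTM values come from exhaustive enumeration of small Turing machines):

```python
from collections import Counter
from math import log2

def bdm(string, ctm_table, block_size=4):
    """Block decomposition method (BDM) sketch: cut the string into
    non-overlapping blocks, charge each distinct block its CTM value
    once, plus log2(n) bits for n occurrences of that block.
    A trailing remainder shorter than a block is dropped in this sketch."""
    blocks = [string[i:i + block_size]
              for i in range(0, len(string) - block_size + 1, block_size)]
    return sum(ctm_table[b] + log2(n) for b, n in Counter(blocks).items())

# Hypothetical CTM values for a few 4-bit blocks (illustration only).
toy_ctm = {"0000": 3.0, "1111": 3.0, "0101": 5.0, "0110": 7.0}

bdm("0000" * 4, toy_ctm)      # 3.0 + log2(4) = 5.0: repetition is cheap
bdm("0101" + "0110", toy_ctm) # 5.0 + 7.0 = 12.0: diverse blocks cost more
```

When the CTM table does not grow to cover longer blocks, the multiplicity term dominates and BDM tends toward an entropy-like count, which is the long-range behaviour described above.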

#### 2.8. Algorithmic Complexity in Scientific Discovery

## 3. Conclusions

## Funding

## Conflicts of Interest

## References

- Franklin, J.N.Y.; Porter, C.P. Key developments in algorithmic randomness. arXiv
**2020**, arXiv:2004.02851. [Google Scholar] - Bienvenu, L.; Shafer, G.; Shen, A. On the history of martingales in the study of randomness. Electron. J. Hist. Probab. Stat.
**2009**, 5, 1. [Google Scholar] - Kolmogorov, A.N. Three approaches to the quantitative definition of information. Probl. Inf. Transm.
**1965**, 1, 1–7. [Google Scholar] [CrossRef] - Martin-Löf, P. The definition of random sequences. Inf. Control
**1966**, 9, 602–619. [Google Scholar] [CrossRef][Green Version] - Davis, M. The Universal Computer, The Road from Leibniz to Turing; W. Norton & Company: New York, NY, USA, 2000. [Google Scholar]
- Calude, C.S. Information and Randomness An Algorithmic Perspective, Texts in Theoretical Computer Science. An EATCS Series; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
- Nies, A. Computability and Randomness; Oxford University Press: Oxford, UK, 2009. [Google Scholar]
- Downey, R.G.; Hirschfeldt, D.R. Algorithmic Randomness and Complexity, Theory and Applications of Computability; Springer: New York, NY, USA, 2010. [Google Scholar]
- Li, M.; Vitányi, P. An Introduction to Kolmogorov Complexity and Its Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
- Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory
**1978**, 24, 530–536. [Google Scholar] [CrossRef][Green Version] - Dongarra, J.J.; Du Croz, J.; Hammarling, S.; Hanson, R.J. A proposal for an extended set of Fortran Basic Linear Algebra Subprograms. ACM SIGNUM Newsl.
**1985**, 20, 2–18. [Google Scholar] [CrossRef] - Ancis, M.; Giusto, D.D. Image data compression by adaptive vector quantization of classified wavelet coefficients. IEEE Pac. Rim Conf. Commun. Comput. Signal Process. PACRIM
**1997**, 1, 330–333. [Google Scholar] - Salomon, D. Data Compression: The Complete Reference; Springer Science & Business Media: Berlin/Heidelberg, Germany, 20 March 2007. [Google Scholar]
- Borel, E. Les probabilités dénombrables et leurs applications arithmétiques. Rendiconti del Circolo Matematico di Palermo
**1909**, 27, 247–271. [Google Scholar] [CrossRef] - Cilibrasi, R.L.; Vitányi, P.M.B. Clustering by compression. IEEE Trans. Inf. Theory
**2005**, 51, 1523–1545. [Google Scholar] [CrossRef][Green Version] - Chaitin, G.J. On the length of programs for computing finite binary sequences: Statistical considerations. J. ACM
**1969**, 16, 145–159. [Google Scholar] [CrossRef] - Vitányi, P.M.B. How incomputable is Kolmogorov complexity? Entropy
**2020**, 22, 408. [Google Scholar] [CrossRef][Green Version] - Zenil, H. Towards Demystifying Shannon Entropy, Lossless Compression, and Approaches to Statistical Machine Learning. In Proceedings of the International Society for Information Studies (IS4IS) summit, University of California, Berkeley, CA, USA, 2–6 June 2019. [Google Scholar]
- Teixeira, A.; Matos, A.; Souto, A.; Antunes, L. Entropy Measures vs. Kolmogorov Complexity. Entropy
**2011**, 13, 595–611. [Google Scholar] [CrossRef] - Solomonoff, R.J. Complexity-Based Induction Systems: Comparisons and Convergence Theorems. IEEE Trans. Inf. Theory
**1978**, 24, 422–432. [Google Scholar] [CrossRef] - Solomonoff, R.J. The Application of Algorithmic Probability to Problems in Artificial Intelligence. Mach. Intell. Pattern Recognit.
**1986**, 4, 473–491. [Google Scholar] - Solomonoff, R.J. A System for Incremental Learning Based on Algorithmic Probability. In Proceedings of the Sixth Israeli Conference on Artificial Intelligence, Computer Vision and Pattern Recognition, Tel Aviv, Israel, December 1989; pp. 515–527. [Google Scholar]
- Levin, L.A. Universal sequential search problems. Probl. Inf. Transm.
**1973**, 9, 265–266. [Google Scholar] - Kirchherr, W.; Li, M.; Vitányi, P. The miraculous universal distribution. Math. Intell.
**1997**, 19, 7–15. [Google Scholar] [CrossRef][Green Version] - Solovay, R.M. Draft of paper (or series of papers) on Chaitin’s work. Unpublished notes, 215 pages, May 1975; In Algorithmic Randomness and Complexity, Theory and Applications of Computability; Downey, R.G., Hirschfeldt, D.R., Eds.; Springer: New York, NY, USA, 2010. [Google Scholar]
- Antunes, L.; Fortnow, L. Time-Bounded Universal Distributions. Electronic Colloquium on Computational Complexity. Report No. 144. 2005. Available online: http://www.mat.uc.pt/~kahle/dl06/luis-antunes.pdf (accessed on 29 May 2020).
- Minsky, M. Panel discussion on The Limits of Understanding. World Science Festival, NYC, 14 December 2014. Available online: https://www.worldsciencefestival.com/videos/the-limits-of-understanding/ (accessed on 26 February 2020).
- Zenil, H.; Delahaye, J.-P. On the Algorithmic Nature of the World. In Information and Computation; Dodig-Crnkovic, G., Burgin, M., Eds.; World Scientific Publishing Company: Singapore, 2010. [Google Scholar]
- Delahaye, J.-P.; Zenil, H. Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness. Appl. Math. Comput.
**2012**, 219, 63–77. [Google Scholar] [CrossRef][Green Version] - Soler-Toscano, F.; Zenil, H.; Delahaye, J.-P.; Gauvrit, N. Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines. PLoS ONE
**2014**, 9, e96223. [Google Scholar] [CrossRef][Green Version] - Levin, L.A. Randomness conservation inequalities; information and independence in mathematical theories. Inf. Control
**1984**, 61, 15–37. [Google Scholar] [CrossRef][Green Version] - Buhrman, H.; Fortnow, L.; Laplante, S. Resource-Bounded Kolmogorov Complexity Revisited. SIAM J. Comput.
**2001**, 31, 887–905. [Google Scholar] [CrossRef][Green Version] - Allender, E.; Buhrman, H.; Koucký, M.; van Melkebeek, D.; Ronneburger, D. Power from random strings. SIAM J. Comput.
**2006**, 35, 1467–1493. [Google Scholar] [CrossRef][Green Version] - Schmidhuber, J. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. In Proceedings of the International Conference on Computational Learning Theory COLT 2002: Computational Learning Theory, Sydney, Australia, 8–10 July 2002; Kivinen, J., Sloan, R.H., Eds.; Springer: New York, NY, USA, 2002; pp. 216–228. [Google Scholar]
- Hutter, M. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
- Wallace, C.S.; Boulton, D.M. An information measure for classification. Comput. J.
**1968**, 11, 185–194. [Google Scholar] [CrossRef][Green Version] - Rissanen, J. Modeling by shortest data description. Automatica
**1978**, 14, 465–471. [Google Scholar] [CrossRef] - Zenil, H.; Hernández-Orozco, S.; Kiani, N.A.; Soler-Toscano, F.; Rueda-Toicen, A. A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity. Entropy
**2018**, 20, 605. [Google Scholar] [CrossRef][Green Version] - Zenil, H.; Kiani, N.A.; Zea, A.; Tegnér, J. Causal Deconvolution by Algorithmic Generative Models. Nat. Mach. Intell.
**2019**, 1, 58–66. [Google Scholar] [CrossRef] - Zenil, H.; Kiani, N.A.; Marabita, F.; Deng, Y.; Elias, Y.; Schmidt, A.; Ball, G.; Tegnér, J. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems. iScience
**2019**, 19, 1160–1172. [Google Scholar] [CrossRef][Green Version] - Zenil, H.; Minary, P. Training-free Measures Based on Algorithmic Probability Identify High Nucleosome Occupancy in DNA Sequences. Nucleic Acids Res.
**2019**, 47, e129. [Google Scholar] [CrossRef] - Zenil, H.; Badillo, L.; Hernández-Orozco, S.; Hernandez-Quiroz, F. Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability. Int. J. Parallel Emerg. Distrib. Syst.
**2018**. [Google Scholar] [CrossRef][Green Version] - Chomsky, N. Three models for the description of language. IEEE Trans. Inf. Theory
**1956**, 2, 113–124. [Google Scholar] [CrossRef][Green Version] - Shalizi, C.R.; Crutchfield, J.P. Computational mechanics: Pattern and prediction, structure and simplicity. J. Stat. Phys.
**2001**, 104, 817–879. [Google Scholar] [CrossRef] - Schnorr, C.P. A unified approach to the definition of a random sequence. Math. Syst. Theory
**1971**, 5, 246–258. [Google Scholar] [CrossRef] - Schnorr, C.P. Process complexity and effective random tests. J. Comput. Syst. Sci.
**1973**, 7, 376–388. [Google Scholar] [CrossRef][Green Version] - Calude, C.S.; Longo, G. The deluge of spurious correlations in big data. Found. Sci.
**2017**, 22, 595–612. [Google Scholar] [CrossRef][Green Version] - Zenil, H. Algorithmic Data Analytics, Small Data Matters and Correlation versus Causation. In Berechenbarkeit der Welt? Philosophie und Wissenschaft im Zeitalter von Big Data; Ott, M., Pietsch, W., Wernecke, J., Eds.; Springer: New York, NY, USA, 2017; pp. 453–475. [Google Scholar]
- Radó, T. On non-computable functions. Bell Syst. Tech. J.
**1962**, 41, 877–884. [Google Scholar] [CrossRef] - Cilibrasi, R. Personal communication.
- Zenil, H.; Soler-Toscano, F.; Delahaye, J.-P.; Gauvrit, N. Two-dimensional Kolmogorov complexity and an empirical validation of the Coding Theorem Method by compressibility. PeerJ Comput. Sci.
**2015**, 1, e23. [Google Scholar] [CrossRef] - Zenil, H.; Soler-Toscano, F.; Dingle, K.; Louis, A. Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks. Phys. A Stat. Mech. Its Appl.
**2014**, 404, 341–358. [Google Scholar] [CrossRef][Green Version] - Soler-Toscano, F.; Zenil, H.; Delahaye, J.-P.; Gauvrit, N. Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures. Computability
**2013**, 2, 125–140. [Google Scholar] [CrossRef][Green Version] - Chaitin, G. Evaluation Report on the PhD Thesis Submitted Hector Zenil to the University of Lille ”Une Approche Expèrimentale à la Théorie de la Complexité Algorithmique” to Obtain the Degree of Doctor in Computer Science, 25 May 2011. Available online: http://www.mathrix.org/zenil/report.pdf (accessed on 26 February 2020).
- Zenil, H. Une Approche Expèrimentale à la Théorie de la Complexité Algorithmique. Ph.D. Thesis, University of Lille 1, France, July 2011. [Google Scholar]
- Calude, C.S.; Stay, M.A. Most programs stop quickly or never halt. Adv. Appl. Math.
**2007**, 40, 295–308. [Google Scholar] [CrossRef][Green Version] - Abrahão, F.S.; Wehmuth, K.; Ziviani, A. Algorithmic Networks: Central time to trigger expected emergent open-endedness. Theor. Comput. Sci.
**2019**, 785, 83–116. [Google Scholar] [CrossRef][Green Version] - Mathy, F.; Fartoukh, M.; Gauvrit, N.; Guida, A. Developmental abilities to form chunks in immediate memory and its non-relationship to span development. Front. Psychol.
**2016**, 7, 201. [Google Scholar] [CrossRef] [PubMed][Green Version] - Silva, J.M.; Pinho, E.; Matos, S.; Pratas, D. Statistical Complexity Analysis of Turing Machine tapes with Fixed Algorithmic Complexity Using the Best-Order Markov Model. Entropy
**2020**, 22, 105. [Google Scholar] [CrossRef][Green Version] - Gauvrit, N.; Zenil, H.; Soler-Toscano, F.; Delahaye, J.-P.; Brugger, P. Human Behavioral Complexity Peaks at Age 25. PLoS Comput. Biol.
**2017**, 13, e1005408. [Google Scholar] [CrossRef][Green Version] - Champernowne, D.G. The construction of decimals normal in the scale of ten. J. Lond. Math. Soc.
**1933**, 8, 254–260. [Google Scholar] [CrossRef] - Soler-Toscano, F.; Zenil, H. A Computable Measure of Algorithmic Probability by Finite Approximations with an Application to Integer Sequences. Complexity
**2017**, 2017, 7208216. [Google Scholar] [CrossRef][Green Version] - Calude, C.S.; Dumitrescu, M. A probabilistic anytime algorithm for the halting problem. Computability
**2018**, 7, 259–271. [Google Scholar] [CrossRef][Green Version] - Ryabko, B.; Reznikova, Z. Using Shannon Entropy and Kolmogorov Complexity to Study the Communicative System and Cognitive Capacities in Ants. Complexity; John Wiley & Sons Inc.: New York, NY, USA, 1996; Volume 2, pp. 37–42. [Google Scholar]
- Zenil, H.; Marshall, J.A.R.; Tegnér, J. Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results. In Alternative Computing; Adamatzky, A., Ed.; World Scientific: Singapore, 2020. [Google Scholar]
- Bauwens, B.; Makhlin, A.; Vereshchagin, N.; Zimand, M. Short lists with short programs in short time. Comput. Complex.
**2018**, 27, 31–61. [Google Scholar] [CrossRef][Green Version] - Filatov, G.; Bauwens, B.; Kertész-Farkas, A. LZW-Kernel: Fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification. Bioinformatics
**2018**, 34, 3281–3288. [Google Scholar] [CrossRef] - Bienvenu, L.; Downey, R.; Nies, A.; Merkle, W. Solovay functions and their applications in algorithmic randomness. J. Comput. Syst. Sci.
**2015**, 81, 1575–1591. [Google Scholar] [CrossRef] - Bienvenu, L.; Downey, R.; Nies, A.; Merkle, W. Solovay functions and K-triviality. In Proceedings of the 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011), Dortmund, Germany, 10–12 March 2011; pp. 452–463. Available online: https://hal.inria.fr/hal-00573598/ (accessed on 26 February 2020).
- Cibej, U.; Robic, B.; Mihelic, J. Empirical estimation of the halting probabilities. In Proceedings of the Computability in Europe (Language, Life, Limits), Budapest, Hungary, 23–27 June 2014. [Google Scholar]
- Calude, C.S.; Salomaa, K.; Roblot, T.K. Finite state complexity. Theor. Comput. Sci.
**2011**, 412, 5668–5677. [Google Scholar] [CrossRef][Green Version] - Calude, C.S.; Salomaa, K.; Roblot, T.K. State-size Hierarchy for Finite-state Complexity. Int. J. Found. Comput. Sci.
**2012**, 23, 37–50. [Google Scholar] [CrossRef] - Calude, C.S.; Salomaa, K.; Roblot, T. Finite-State Complexity and the Size of Transducers. In Proceedings of the DCFS 2010, EPTCS 31, Saskatoon, SK, Canada, 8–10 August 2010; pp. 38–47. [Google Scholar] [CrossRef][Green Version]
- Bienvenu, L.; Desfontaines, D.; Shen, A. Generic algorithms for halting problem and optimal machines revisited. Log. Methods Comput. Sci.
**2015**, 12, 1–29. [Google Scholar] [CrossRef][Green Version] - Zenil, H.; Kiani, N.A. Algorithmic Information Dynamics, Scholarpedia. 2019. Available online: http://www.scholarpedia.org/article/Algorithmic_Information_Dynamics (accessed on 20 March 2020).
- Friston, K.J.; Harrison, L.; Penny, W. Dynamic causal modelling. NeuroImage
**2003**, 19, 1273–1302. [Google Scholar] [CrossRef] - Zenil, H. Compression is Comprehension, and the Unreasonable Effectiveness of Digital Computation in the Natural World. In Unravelling Complexity (Gregory Chaitin’s 70 Festschrift); Wuppuluri, S., Doria, F., Eds.; World Scientific Publishing: Singapore, 2019. [Google Scholar]
- Teutsch, J. Short lists for shortest descriptions in short time. Comput. Complex.
**2014**, 23, 565–583. [Google Scholar] [CrossRef][Green Version]

**Figure 1.** (**A**) An observer trying to characterise an observed stream of data (s) from an unknown source (in this case, the sequence of natural numbers in decimal or binary, which the observer may not recognise) currently has two options. (**B**) The statistical compression approach (green arrow), represented here by run-length encoding (RLE), produces a rather cryptic and even longer description than the original observation, with no clear correspondence between possible ground and represented states. (**C**) Alternatively, an approach that takes on the challenge as an inverse problem, represented by the Coding theorem method (CTM) (green arrow), finds the set of small computer programs (according to some reference machine, up to a certain size) whose output matches the original observation, thereby potentially reverse engineering the generating mechanism that may have recursively produced the sequence in the first place. (**D**) Such an approach allows a state-space description, represented by a generative rule or transition table and a state diagram, whose elements may correspond (or not) to a physically unfolding phenomenon against which it can be tested. (Notice that the binary-to-decimal Turing machine (TM) is also of a finite, small size and is only an intermediate step to arrive at s, but can be one of the engines behind it; here, for illustration purposes, only the TM binary counter is shown as an example.)
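The RLE behaviour in panel (B) is easy to reproduce. The sketch below (plain Python, symbol-count pairs) shows that while a long run compresses well, a sequence with no repeated runs, such as the natural numbers written out, becomes longer rather than shorter:

```python
from itertools import groupby

def rle(s):
    """Run-length encoding: each maximal run of a repeated symbol
    becomes the symbol followed by the run length."""
    return "".join(f"{ch}{len(list(run))}" for ch, run in groupby(s))

rle("111111111111")    # '112': a single long run compresses well
rle("123456789101112") # 26 characters out for 15 in: RLE inflates it
```

This is the sense in which a purely statistical encoder can produce a "description" that is longer and less explanatory than the observation itself.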

**Figure 2.** Test for model internal consistency and optimality. The number of rules used in the transition table for the first and shortest Turing machine producing a string following the enumeration is in agreement with the estimation arrived at by CTM after application of the Coding theorem under the assumptions of optimality. Adapted from [53].

**Figure 3.** Agreement in shape and ranking of empirical distributions produced by three completely different models of computation: Turing machines (TM), cellular automata (CA), and Post tag systems (TS), all three Turing-universal models of computation, for the same set of strings of length $k=5$ and $k=6$ for purposes of illustration only (the comparison was made across all string lengths up to around 20, which already includes a large set of ${2}^{20}$ strings).

**Figure 4.** This is the first ever empirical probability distribution produced by all the 15,059,072 Turing machines with three states and two symbols, as reported in [29]. CTM has produced several comprehensive large tables for all binary (and also non-binary [60]) Turing machines with up to four and five states, and is currently computing the six-state case on one of the largest supercomputers available. The distributions (this and the much larger ones that followed) have shed light on many matters, from long-standing challenges (the short-string regime left uncovered by statistical compression), to the impact of the choice of computational model on empirical output distributions, to applications tested in the field that require high sensitivity (such as perturbation analysis [39,40]).

**Figure 5.** (**A**) Unlike compression, which behaves just like Shannon entropy, CTM approximates the long-tail shape of the universal distribution, better conforming with the theory under strong assumptions of optimality. (**A**,**B**) We call these causal gaps because they are precisely the cases in which strings can be explained by a computer program of length far removed from the greatest program length of the most algorithmically random strings according to our (universal) computational model. (**C**) Divergence of CTM from entropy (and therefore LZW). Adapted from [38].

**Figure 6.** Density histograms showing how poorly lossless compression performs on short strings (just 12 bits in length), collapsing all values into two to four clusters, while CTM produces a finer-grained classification of 36 groups of strings. CTM (and BDM) can be calculated using the Online Algorithmic Complexity Calculator (https://complexitycalculator.com/).
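The collapse described in the figure can be checked directly with a general-purpose compressor. The sketch below uses Python's zlib (the exact number of clusters depends on the compressor, its settings, and how the bits are encoded):

```python
import zlib
from itertools import product

# Compress every 12-bit string (written as ASCII '0'/'1' bytes) and
# collect the distinct compressed lengths the compressor assigns.
lengths = {len(zlib.compress("".join(bits).encode(), 9))
           for bits in product("01", repeat=12)}

# All 4096 strings fall into only a few length classes: far too coarse
# a signal to rank short strings by algorithmic complexity.
print(len(lengths))
```

Whatever the exact count, it is orders of magnitude smaller than the 4096 strings being classified, which is the point the figure makes against compression in the short-string regime.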

**Figure 7.** Each “drop-like” distribution represents the set of strings that are minimally produced with the same number of instructions (Y axis) for all machines with up to four states, for which there can be up to $2(n+1)=10$ transition rules. The more instructions needed to produce the strings, the more complex they are shown to be according to CTM (Y axis), after applying a Coding theorem-like equation to the empirical probability values for each string. The results conform with the theoretical expectation: the greater CTM, the greater the number of instructions used by the shortest Turing machine according to the enumeration, and vice versa. Adapted from [53].

**Table 1.** Different methods for different length regimes and for different purposes. Here, LZW represents the set of statistical lossless compression algorithms. MDL, minimum description length; BDM, block decomposition method.

Method | Regime of Application | Outcome | Capability/Reach
---|---|---|---
LZW, MDL, MML and similar | long strings | length of an often obfuscated file | statistical only
CTM or similar | short strings | large set of small computer programs generating the data | algorithmic
CTM + BDM | short and long strings | none compared to alternatives | algorithmic and statistical

**Table 2.** List of nested theoretical properties in increasing rank of necessary condition to instantiate algorithmic complexity (lower rows depend on the properties of higher rows) and their potential negative and positive implications when they hold. Different measures explore the implications of relaxing each of these properties. Proofs of optimality are nontrivial [74] if the model is not ad hoc, i.e., not designed for the purpose of separating data from the model (as the minimum message length (MML) approach does, because its model is simple (statistical) and avoids Turing-universality). Rates of convergence of the additive constants are never guaranteed, and every approach suffers from this. Statistical lossless compression algorithms give up on universality at the very first level and can therefore claim to avoid all the others. CTM embraces 0 (thus the first is the most important), assumes 1 and thus 2; 3 is of minor concern [17], and thus 4 can be applied. Statistical lossless compression such as LZW, however, can be seen as embracing triviality, as it does not deal with even Level 0 and can only capture data redundancy in the form of repetitions. AP, algorithmic probability.

Dependency Level | Relaxed Property | Challenge/Limitation | Positive Implication
---|---|---|---
0 | Universality | some form of incomputability | characterisation beyond statistical
1 | Optimality | ad hoc models | ranking invariant in the limit
2 | Invariance | no rate of convergence | invariant in the limit w/overhead
3 | Prefix-freeness | ad hoc language | slightly tighter bounds
4 | Coding theorem | 0 to 3 have to hold | (AP(s) for C(s) and C(s) for AP(s)) ± O(1)
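The Coding theorem in the last row links algorithmic probability and complexity. In its standard form, with $AP(s)$ the algorithmic probability of $s$ over a prefix-free universal machine $U$:

```latex
AP(s) = \sum_{p \,:\, U(p) = s} 2^{-|p|}, \qquad
K(s) = -\log_2 AP(s) + O(1),
```

which is what justifies estimating $K$ from empirical output frequencies, as CTM does, provided properties 0 to 3 hold.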

© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zenil, H. A Review of Methods for Estimating Algorithmic Complexity: Options, Challenges, and New Directions. *Entropy* **2020**, *22*, 612.
https://doi.org/10.3390/e22060612
