Entropy 2011, 13(4), 902-914; doi:10.3390/e13040902

Algorithmic Relative Complexity

Daniele Cerra 1,* and Mihai Datcu 1,2
1 German Aerospace Centre (DLR), Münchnerstr. 20, 82234 Wessling, Germany
2 Télécom ParisTech, rue Barrault 20, Paris F-75634, France
* Author to whom correspondence should be addressed.
Received: 3 March 2011 / Revised: 31 March 2011 / Accepted: 1 April 2011 / Published: 19 April 2011
(This article belongs to the Special Issue Kolmogorov Complexity)


Information content and compression are tightly related concepts that can be addressed through classical and algorithmic information theory, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov's framework relies upon ideas from classical information theory, and the two approaches share many common traits. In this work, we expand the relations between the two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) of Shannon's framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the relative complexity of x with respect to y as the compression power lost when such a description is adopted for x instead of its shortest representation. Properties of the analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived, enabling their application to real data and yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, together with a comparison to similar established techniques.
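The compression-based approximation mentioned above can be sketched in a few lines. The snippet below is not the approximation derived in the paper; it is a minimal illustration, assuming zlib's preset-dictionary feature as a crude stand-in for "specifying x in terms of y", with function names (compressed_size, cross_complexity, relative_complexity) chosen here only for readability.

    import zlib

    def compressed_size(data: bytes, model: bytes = b"") -> int:
        # Compressed size of data in bytes; if model is given, zlib is
        # primed with it as a preset dictionary, so substrings shared
        # with model become cheap to encode.
        if model:
            comp = zlib.compressobj(level=9, zdict=model)
        else:
            comp = zlib.compressobj(level=9)
        return len(comp.compress(data) + comp.flush())

    def cross_complexity(x: bytes, y: bytes) -> int:
        # Rough analogue of the cross-complexity of x with respect to y:
        # the cost of coding x through a description primed by y.
        return compressed_size(x, model=y)

    def relative_complexity(x: bytes, y: bytes) -> int:
        # Rough analogue of the relative complexity: the compression
        # power lost by describing x through y rather than on its own.
        return cross_complexity(x, y) - compressed_size(x)

    if __name__ == "__main__":
        a = b"the quick brown fox jumps over the lazy dog " * 20
        b = b"lorem ipsum dolor sit amet consectetur adipiscing " * 20
        # Describing a string through a similar one loses little; through
        # a dissimilar one, much more, giving a divergence-like score.
        print(relative_complexity(a, a), relative_complexity(a, b))

Because zlib is far from an ideal compressor, relative_complexity(a, a) need not be exactly zero; the result is only a divergence-like score, consistent with the abstract's remark that compression merely approximates the incomputable quantities.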
Keywords: Kolmogorov complexity; compression; relative entropy; Kullback-Leibler divergence; similarity measure; compression-based distance
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Cerra, D.; Datcu, M. Algorithmic Relative Complexity. Entropy 2011, 13, 902-914.
