Special Issue "Information Processing in Complex Systems"


A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (15 March 2015)

Special Issue Editor

Guest Editor
Dr. Rick Quax

Computational Science, Faculty of Science, University of Amsterdam, The Netherlands
Interests: information theory; statistical mechanics; complex systems; complex networks; dynamics on networks; theory of computation; theoretical computer science; formal languages; information geometry

Special Issue Information

Dear Colleagues,

All systems in nature have one thing in common: they process information. Information is registered in the state of a system and its elements, implicitly and invisibly. As elements interact, information is transferred and modified. Indeed, bits of information about the state of one element will travel, imperfectly, to the state of the other element, forming its new state. This storage, transfer, and modification of information, possibly between levels of a multi-level system, is imperfect due to randomness or noise. From this viewpoint, a system can be formalized as a collection of bits that is organized according to its rules of dynamics and its topology of interactions. Mapping out exactly how these bits of information percolate through the system could reveal new fundamental insights into how the parts orchestrate to produce the properties of the system. A theory of information processing would be capable of defining a set of universal properties of dynamical multi-level complex systems, which describe and compare the dynamics of diverse complex systems ranging from social interaction to brain networks, from financial markets to biomedicine. Each possible combination of rules of dynamics and topology of interactions, with disparate semantics, would reduce to a single language of information processing.
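The imperfect transfer of bits between interacting elements can be made concrete with a toy calculation. The following minimal sketch (our own illustration, not part of the call text) models an interaction as a binary symmetric channel that flips a uniform bit with probability eps; the information surviving the transfer is 1 − H(eps) bits.

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a coin with bias p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def transferred_information(eps):
    """Bits of information about a uniform binary state that survive
    transfer through a noisy interaction flipping the bit with prob. eps."""
    # I(A; B) = H(B) - H(B|A) = 1 - H(eps) for a uniform input bit
    return 1.0 - binary_entropy(eps)

# A noiseless interaction transfers the full bit; pure noise transfers none.
print(transferred_information(0.0))   # 1.0
print(transferred_information(0.5))   # 0.0
```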

Dr. Rick Quax
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Keywords

  • information theory
  • statistical mechanics
  • complex systems
  • complex networks
  • dynamics on networks
  • theory of computation
  • theoretical computer science
  • formal languages
  • information geometry

Published Papers (10 papers)


Research

Open Access Article Conspiratorial Beliefs Observed through Entropy Principles
Entropy 2015, 17(8), 5611-5634; doi:10.3390/e17085611
Received: 12 April 2015 / Revised: 22 July 2015 / Accepted: 27 July 2015 / Published: 4 August 2015
Abstract
We propose a novel approach framed in terms of information theory and entropy to tackle the issue of the propagation of conspiracy theories. We represent the initial report of an event (such as the 9/11 terrorist attacks) as a series of strings of information, each string classified by a two-state variable Ei = ±1, i = 1, …, N. If the values of the Ei are set to −1 for all strings, a state of minimum entropy is achieved. Comments on the report, focusing repeatedly on several strings Ek, may invert their meaning (from −1 to +1). The representation of the event is turned fuzzy with an increased entropy value. Beyond some threshold value of entropy, chosen for simplicity as its maximum value, meaning N/2 variables with Ei = 1, the chance is created that a conspiracy theory might be initiated/propagated. Therefore, the evolution of the associated entropy is a way to measure the degree of penetration of a conspiracy theory. Our general framework relies on online content made voluntarily available by crowds of people, in response to news or blog articles published by official news agencies. We apply different aggregation levels (comment, person, discussion thread) and discuss the associated patterns of entropy change. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
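As a rough illustration of the entropy bookkeeping in this kind of model, the sketch below (our simplification, not the authors' exact formula) treats each of the N strings as an i.i.d. two-state variable with flip fraction k/N; the report's entropy is zero when all strings read −1 and peaks when half of them have flipped.

```python
import math

def report_entropy(k, n):
    """Hedged sketch: entropy, in bits, of a report of n information strings
    of which k have flipped meaning (E_i = +1), modelling each string as an
    i.i.d. Bernoulli(k/n) variable. Not the paper's exact construction."""
    p = k / n
    if p in (0.0, 1.0):
        return 0.0
    return -n * (p * math.log2(p) + (1 - p) * math.log2(1 - p))

n = 20
# Entropy grows from 0 (all E_i = -1) and peaks when half the strings flipped.
values = [report_entropy(k, n) for k in range(n + 1)]
print(max(range(n + 1), key=lambda k: values[k]))  # 10, i.e. k = n/2
```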
Open Access Article Prebiotic Competition between Information Variants, With Low Error Catastrophe Risks
Entropy 2015, 17(8), 5274-5287; doi:10.3390/e17085274
Received: 2 May 2015 / Revised: 7 July 2015 / Accepted: 20 July 2015 / Published: 27 July 2015
Abstract
During competition for resources in primitive networks, increased fitness of an information variant does not necessarily equate with successful elimination of its competitors. If variability is added fast to a system, speedy replacement of pre-existing and less-efficient forms of order is required as novel information variants arrive. Otherwise, the information capacity of the system fills up with information variants (an effect referred to as “error catastrophe”). As the cost for managing the system’s excessive complexity increases, the correlation between performance capabilities of information variants and their competitive success decreases, and evolution of such systems toward increased efficiency slows down. This impasse impedes the understanding of evolution in prebiotic networks. We used the simulation platform Biotic Abstract Dual Automata (BiADA) to analyze how information variants compete in a resource-limited space. We analyzed the effect of energy-related features (differences in autocatalytic efficiency, energy cost of order, energy availability, transformation rates and stability of order) on this competition. We discuss circumstances and controllers allowing primitive networks to acquire novel information with minimal “error catastrophe” risks. We present a primitive mechanism for maximization of energy flux in dynamic networks. This work helps evaluate controllers of evolution in prebiotic networks and other systems where information variants compete. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
Open Access Article Quantifying Redundant Information in Predicting a Target Random Variable
Entropy 2015, 17(7), 4644-4653; doi:10.3390/e17074644
Received: 18 March 2015 / Revised: 24 June 2015 / Accepted: 26 June 2015 / Published: 2 July 2015
Abstract
We consider the problem of defining a measure of redundant information that quantifies how much common information two or more random variables specify about a target random variable. We discuss desired properties of such a measure and propose new measures with some of these properties. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
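For context, one earlier proposal in this line of work, the Williams–Beer redundancy measure I_min, can be sketched as below. This is not necessarily the measure proposed in the paper, and the example distributions are our own.

```python
from collections import defaultdict
import math

def i_min(joint):
    """Sketch of the Williams-Beer redundancy I_min for two sources and a
    target. `joint` maps (x1, x2, y) -> probability."""
    py = defaultdict(float)
    px = [defaultdict(float), defaultdict(float)]
    pxy = [defaultdict(float), defaultdict(float)]
    for (x1, x2, y), p in joint.items():
        py[y] += p
        for i, x in enumerate((x1, x2)):
            px[i][x] += p
            pxy[i][(x, y)] += p

    def specific_info(y, i):
        # I(Y = y; X_i) = sum_x p(x|y) log2( p(y|x) / p(y) )
        total = 0.0
        for x, p_x in px[i].items():
            p_joint = pxy[i][(x, y)]
            if p_joint == 0.0:
                continue
            total += (p_joint / py[y]) * math.log2((p_joint / p_x) / py[y])
        return total

    return sum(py[y] * min(specific_info(y, 0), specific_info(y, 1))
               for y in py)

# Fully redundant example: X1 = X2 = Y, one uniform bit -> I_min = 1 bit.
dup = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
print(i_min(dup))  # 1.0
```

For the XOR distribution, by contrast, I_min evaluates to zero, since neither input alone carries specific information about the output.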
Open Access Article Maximum Entropy Rate Reconstruction of Markov Dynamics
Entropy 2015, 17(6), 3738-3751; doi:10.3390/e17063738
Received: 5 March 2015 / Revised: 2 June 2015 / Accepted: 4 June 2015 / Published: 8 June 2015
Abstract
We develop ideas proposed by Van der Straeten to extend maximum entropy principles to Markov chains. We focus in particular on the convergence of such estimates in order to explain how our approach makes possible the estimation of transition probabilities when only short samples are available, which opens the way to applications to non-stationary processes. The current work complements an earlier communication by providing numerical details, as well as a full derivation of the multi-constraint two-state and three-state maximum entropy transition matrices. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
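To see why short samples are problematic, a naive maximum-likelihood estimate of a transition matrix can be sketched as below (our illustration; the paper's maximum-entropy reconstruction is designed for exactly the sparse-count regime where this estimator breaks down).

```python
import numpy as np

def empirical_transition_matrix(sample, n_states):
    """Naive maximum-likelihood estimate of a Markov transition matrix from
    a sample path. With short samples many counts are zero, which is the
    situation a maximum-entropy reconstruction is meant to handle."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sample[:-1], sample[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows never visited stay uniform -- the maximum-entropy default.
    return np.where(row_sums > 0, counts / np.maximum(row_sums, 1),
                    1.0 / n_states)

sample = [0, 1, 0, 1, 1, 0, 1]
print(empirical_transition_matrix(sample, 2))
```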
Open Access Article Tail Risk Constraints and Maximum Entropy
Entropy 2015, 17(6), 3724-3737; doi:10.3390/e17063724
Received: 3 March 2015 / Revised: 15 May 2015 / Accepted: 22 May 2015 / Published: 5 June 2015
Abstract
Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors’ preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets’ returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities’ returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
Open Access Article Information Decomposition and Synergy
Entropy 2015, 17(5), 3501-3517; doi:10.3390/e17053501
Received: 26 March 2015 / Revised: 12 May 2015 / Accepted: 19 May 2015 / Published: 22 May 2015
Abstract
Recently, a series of papers addressed the problem of decomposing the information of two random variables into shared information, unique information and synergistic information. Several measures were proposed, although still no consensus has been reached. Here, we compare these proposals with an older approach to define synergistic information based on the projections on exponential families containing only up to k-th order interactions. We show that these measures are not compatible with a decomposition into unique, shared and synergistic information if one requires that all terms are always non-negative (local positivity). We illustrate the difference between the two measures for multivariate Gaussians. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
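The synergy at issue can be illustrated with the classic XOR example, sketched here as our own toy code assuming uniform binary inputs: neither input alone carries any information about the output, yet the pair determines it completely.

```python
from itertools import product
import math

def mutual_information(joint, idx_x, idx_y):
    """I(X; Y) in bits for a joint distribution given as {outcome: prob},
    where idx_x / idx_y select the coordinates forming X and Y."""
    px, py, pxy = {}, {}, {}
    for outcome, p in joint.items():
        x = tuple(outcome[i] for i in idx_x)
        y = tuple(outcome[i] for i in idx_y)
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
        pxy[(x, y)] = pxy.get((x, y), 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# XOR: outcomes are (x1, x2, y) with y = x1 ^ x2, inputs uniform.
xor = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}
print(mutual_information(xor, (0,), (2,)))    # 0.0: X1 alone says nothing
print(mutual_information(xor, (1,), (2,)))    # 0.0: X2 alone says nothing
print(mutual_information(xor, (0, 1), (2,)))  # 1.0: together they fix y
```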
Open Access Feature Paper Article An Information-Theoretic Perspective on Coarse-Graining, Including the Transition from Micro to Macro
Entropy 2015, 17(5), 3332-3351; doi:10.3390/e17053332
Received: 13 March 2015 / Accepted: 11 May 2015 / Published: 14 May 2015
Abstract
An information-theoretic perspective on coarse-graining is presented. It starts with an information characterization of configurations at the micro-level using a local information quantity that has a spatial average equal to a microscopic entropy. With a reversible micro dynamics, this entropy is conserved. In the micro-macro transition, it is shown how this local information quantity is transformed into a macroscopic entropy, as the local states are aggregated into macroscopic concentration variables. The information loss in this transition is identified, and the connection to the irreversibility of the macro dynamics and the second law of thermodynamics is discussed. This is then connected to a process of further coarse-graining towards higher characteristic length scales in the context of chemical reaction-diffusion dynamics capable of pattern formation. On these higher levels of coarse-graining, information flows across length scales and across space are defined. These flows obey a continuity equation for information, and they are connected to the thermodynamic constraints of the system, via an outflow of information from macroscopic to microscopic levels in the form of entropy production, as well as an inflow of information, from an external free energy source, if a spatial chemical pattern is to be maintained. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
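A minimal toy version of the micro-to-macro information loss (our own illustration, far simpler than the paper's reaction-diffusion setting) aggregates uniformly distributed n-site binary configurations into a single count variable and measures how many bits the aggregation discards.

```python
import math

def macro_entropy(n):
    """Entropy (bits) of the macroscopic variable 'number of up sites' when
    all 2**n microscopic n-site configurations are equally likely."""
    total = 0.0
    for k in range(n + 1):
        p = math.comb(n, k) / 2 ** n
        total -= p * math.log2(p)
    return total

n = 8
micro = n          # H(micro) = n bits for the uniform ensemble
macro = macro_entropy(n)
# Aggregating micro configurations into a concentration variable discards
# the bits that distinguish configurations with the same count:
print(micro - macro)  # information lost in the coarse-graining
```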
Open Access Article AIM for Allostery: Using the Ising Model to Understand Information Processing and Transmission in Allosteric Biomolecular Systems
Entropy 2015, 17(5), 2895-2918; doi:10.3390/e17052895
Received: 5 March 2015 / Revised: 16 April 2015 / Accepted: 30 April 2015 / Published: 7 May 2015
Abstract
In performing their biological functions, molecular machines must process and transmit information with high fidelity. Information transmission requires dynamic coupling between the conformations of discrete structural components within the protein positioned far from one another on the molecular scale. This type of biomolecular “action at a distance” is termed allostery. Although allostery is ubiquitous in biological regulation and signal transduction, its treatment in theoretical models has mostly eschewed quantitative descriptions involving the system’s underlying structural components and their interactions. Here, we show how Ising models can be used to formulate an approach to allostery in a structural context of interactions between the constitutive components by building simple allosteric constructs we termed Allosteric Ising Models (AIMs). We introduce the use of AIMs in analytical and numerical calculations that relate thermodynamic descriptions of allostery to the structural context, and then show that many fundamental properties of allostery, such as the multiplicative property of parallel allosteric channels, are revealed from the analysis of such models. The power of exploring mechanistic structural models of allosteric function in more complex systems by using AIMs is demonstrated by building a model of allosteric signaling for an experimentally well-characterized asymmetric homodimer of the dopamine D2 receptor. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
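A hedged two-spin toy version of such a construct (our simplification of the paper's AIMs, with our own parameter names) shows how a coupling J lets the state of one site carry information about the other:

```python
import math

def two_spin_mutual_information(J, beta=1.0):
    """Two coupled Ising spins with energy E = -J * s1 * s2 at inverse
    temperature beta; the coupling acts as an information channel between
    the sites. Returns I(s1; s2) in bits under the Boltzmann distribution."""
    states = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]
    weights = {s: math.exp(beta * J * s[0] * s[1]) for s in states}
    Z = sum(weights.values())
    p = {s: w / Z for s, w in weights.items()}
    p1 = {s: 0.5 for s in (-1, 1)}  # marginals are uniform by symmetry
    return sum(q * math.log2(q / (p1[s1] * p1[s2]))
               for (s1, s2), q in p.items())

# No coupling -> no information transmitted; strong coupling -> almost 1 bit.
print(two_spin_mutual_information(0.0))  # 0.0
print(two_spin_mutual_information(5.0))  # close to 1.0
```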

Open Access Article Uncovering Discrete Non-Linear Dependence with Information Theory
Entropy 2015, 17(5), 2606-2623; doi:10.3390/e17052606
Received: 27 February 2015 / Revised: 21 April 2015 / Accepted: 22 April 2015 / Published: 23 April 2015
Abstract
In this paper, we model discrete time series as discrete Markov processes of arbitrary order and derive the approximate distribution of the Kullback-Leibler divergence between a known transition probability matrix and its sample estimate. We introduce two new information-theoretic measurements: information memory loss and information codependence structure. The former measures the memory content within a Markov process and determines its optimal order. The latter assesses the codependence among Markov processes. Both measurements are evaluated on toy examples and applied on high frequency foreign exchange data, focusing on the 2008 financial crisis and the 2010/2011 Euro crisis. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
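The central quantity, a KL divergence between a known transition matrix and an estimate, can be sketched as follows. The stationary weighting used here is one common choice and an assumption on our part; the paper's exact convention may differ.

```python
import numpy as np

def transition_kl(P, Q, stationary):
    """KL divergence rate D(P || Q) between two Markov transition matrices,
    averaging the per-row divergences under a stationary distribution."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    # Only terms with P > 0 contribute; guard the ratio against division by 0.
    ratio = np.where(P > 0, P / np.where(Q > 0, Q, 1.0), 1.0)
    rows = np.sum(np.where(P > 0, P * np.log2(ratio), 0.0), axis=1)
    return float(np.dot(stationary, rows))

P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.7, 0.3], [0.4, 0.6]]  # e.g. a sample estimate of P
pi = [0.5, 0.5]               # assumed stationary weights for this sketch
print(transition_kl(P, P, pi))  # 0.0: identical matrices diverge by nothing
print(transition_kl(P, Q, pi))  # > 0: the estimate's error in bits per step
```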
Open Access Article Information-Theoretic Inference of Common Ancestors
Entropy 2015, 17(4), 2304-2327; doi:10.3390/e17042304
Received: 12 February 2015 / Revised: 29 March 2015 / Accepted: 1 April 2015 / Published: 16 April 2015
Abstract
A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version. Full article
(This article belongs to the Special Issue Information Processing in Complex Systems)
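The core idea, that dependence among observed variables betrays a hidden common ancestor, can be sketched with two noisy copies of a hidden bit (our own toy example; eps is an assumed flip probability, and the DAG is Z → X, Z → Y with no direct X-Y link):

```python
from itertools import product
import math

def mutual_info(pxy):
    """I(X; Y) in bits from a joint distribution {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

def noisy_copies_joint(eps):
    """X and Y are independent noisy copies (flip prob. eps) of a hidden
    uniform bit Z, the common ancestor; Z is then marginalized out."""
    pxy = {}
    for z, x, y in product((0, 1), repeat=3):
        p = 0.5 * (eps if x != z else 1 - eps) * (eps if y != z else 1 - eps)
        pxy[(x, y)] = pxy.get((x, y), 0) + p
    return pxy

# Dependence among the observed pair signals the hidden common ancestor:
print(mutual_info(noisy_copies_joint(0.1)))  # > 0: ancestor detectable
print(mutual_info(noisy_copies_joint(0.5)))  # 0.0: ancestor leaves no trace
```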

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18