Special Issue "Complexity, Criticality and Computation (C³)"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: closed (28 February 2017)

Special Issue Editor

Guest Editor
Prof. Mikhail Prokopenko

Complex Systems Research Group, School of Civil Engineering, Faculty of Engineering and IT, The University of Sydney, Sydney, New South Wales, Australia
Phone: +61-2-9351-7569
Interests: guided self-organization; information theory; machine learning; complex networks

Special Issue Information

Dear Colleagues,

Complex systems science is a new approach to science and engineering that studies how relationships between parts give rise to the collective emergent behaviours of the entire system, and how the system interacts with its environment.

The dynamics of a complex system cannot be predicted, or explained, as a linear aggregation of the individual dynamics of its components: the interactions among its many constituent microscopic parts bring about synergistic macroscopic phenomena that cannot be understood by considering any single part alone. There is a growing awareness that complexity is strongly related to criticality: the behaviour of dynamical spatiotemporal systems at an order/disorder phase transition where scale invariance prevails.

Complex systems can also be viewed as distributed information-processing systems, in domains ranging from systems biology and artificial life to computational neuroscience, digital circuitry and transport networks. Consciousness emerging from neuronal activity and interactions, cell behaviour resulting from gene regulatory networks, and swarming behaviour are all examples of global system behaviour emerging from the local interactions of individuals (neurons, genes, animals). Can these interactions be seen as a generic computational process? This question shapes the third component of our Special Issue, linking computation to complexity and criticality.

Prof. Mikhail Prokopenko
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complexity
  • criticality
  • computation
  • emergent phenomena
  • self-organization
  • non-linear dynamics
  • phase transitions
  • information thermodynamics
  • distributed information-processing
  • computational neuroscience
  • swarming behavior
  • systems biology
  • artificial life
  • bio-inspired computing
  • agent-based simulation

Published Papers (8 papers)


Research

Open Access Article Utility, Revealed Preferences Theory, and Strategic Ambiguity in Iterated Games
Entropy 2017, 19(5), 201; doi:10.3390/e19050201
Received: 28 February 2017 / Revised: 10 April 2017 / Accepted: 26 April 2017 / Published: 29 April 2017
PDF Full-text (459 KB)
Abstract
Iterated games, in which the same economic interaction is repeatedly played between the same agents, are an important framework for understanding the effectiveness of strategic choices over time. To date, very little work has applied information theory to the information sets used by agents in order to decide what action to take next in such strategic situations. This article looks at the mutual information between previous game states and an agent’s next action by introducing two new classes of games: “invertible games” and “cyclical games”. By explicitly expanding out the mutual information between past states and the next action, we show under what circumstances the explicit values of the utility are irrelevant for iterated games, and this is then related to the revealed preferences theory of classical economics. These information measures are then applied to the Traveler’s Dilemma game and the Prisoner’s Dilemma game, the Prisoner’s Dilemma being invertible, to illustrate their use. In the Prisoner’s Dilemma, a novel connection is made between the computational principles of logic gates and both the structure of games and the agents’ decision strategies. This approach is applied to the cyclical game Matching Pennies to analyse the foundations of a behavioural ambiguity between two well-studied strategies: “Tit-for-Tat” and “Win-Stay, Lose-Switch”. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
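As an informal illustration of the abstract's central quantity, the sketch below (our construction, not the authors' code) estimates the mutual information between an opponent's previous move and a "Tit-for-Tat" player's next move in an iterated Prisoner's Dilemma; since Tit-for-Tat copies the opponent's previous move exactly, the plug-in estimate approaches one bit.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Plug-in mutual information estimate (in bits) from (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

random.seed(1)
opponent = [random.choice("CD") for _ in range(10000)]
tft = ["C"] + opponent[:-1]            # Tit-for-Tat copies the previous move
pairs = list(zip(opponent[:-1], tft[1:]))
mi_hat = mutual_information(pairs)
print(round(mi_hat, 2))  # ~1 bit: the next action is fully determined by the past
```

For a noisier strategy the same estimator would return fewer bits, which is the sense in which the information set carried by past states varies with the strategy class.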
Open Access Article Criticality and Information Dynamics in Epidemiological Models
Entropy 2017, 19(5), 194; doi:10.3390/e19050194
Received: 2 March 2017 / Revised: 24 April 2017 / Accepted: 25 April 2017 / Published: 27 April 2017
PDF Full-text (310 KB) | HTML Full-text | XML Full-text
Abstract
Understanding epidemic dynamics has always been a challenge. As witnessed from the ongoing Zika or the seasonal Influenza epidemics, we still need to improve our analytical methods to better understand and control epidemics. While the emergence of complex sciences at the turn of the millennium has resulted in their application to modelling epidemics, there is still a need to improve our understanding of critical dynamics in epidemics. In this study, using agent-based modelling, we simulate a Susceptible-Infected-Susceptible (SIS) epidemic on a homogeneous network. We use transfer entropy and active information storage from the information dynamics framework to characterise the critical transition in epidemiological models. Our study shows that both (bias-corrected) transfer entropy and active information storage maximise after the critical threshold (R0 = 1). This is the first step toward an information dynamics approach to epidemics. Understanding the dynamics around criticality in epidemiological models can provide us with insights about emergent diseases and disease control. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
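To make the abstract's setting concrete, here is a minimal sketch (our assumptions, not the authors' model or their information-theoretic analysis) of discrete-time SIS dynamics in a homogeneous, well-mixed population; the long-run infected fraction changes qualitatively at the critical threshold R0 = beta/gamma = 1.

```python
import random

def sis_endemic_fraction(r0, n=2000, gamma=0.2, steps=400, seed=0):
    """Fraction of infected agents after `steps` updates of SIS dynamics."""
    rng = random.Random(seed)
    beta = r0 * gamma                     # per-step infection rate at R0 = r0
    infected = set(rng.sample(range(n), n // 10))
    for _ in range(steps):
        frac = len(infected) / n
        nxt = set()
        for i in range(n):
            if i in infected:
                if rng.random() > gamma:          # fails to recover
                    nxt.add(i)
            elif rng.random() < beta * frac:      # infected by random contact
                nxt.add(i)
        infected = nxt
    return len(infected) / n

print(sis_endemic_fraction(0.5))  # subcritical: the epidemic dies out
print(sis_endemic_fraction(3.0))  # supercritical: endemic fraction near 1 - 1/R0
```

The paper's contribution is to measure transfer entropy and active information storage around this transition, which the mean-field sketch above does not attempt.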

Open Access Article Complexity and Vulnerability Analysis of the C. Elegans Gap Junction Connectome
Entropy 2017, 19(3), 104; doi:10.3390/e19030104
Received: 30 December 2016 / Revised: 24 February 2017 / Accepted: 3 March 2017 / Published: 8 March 2017
PDF Full-text (976 KB) | HTML Full-text | XML Full-text
Abstract
We apply a network complexity measure to the gap junction network of the somatic nervous system of C. elegans and find that it possesses a much higher complexity than we might expect from its degree distribution alone. This “excess” complexity is seen to be caused by a relatively small set of connections involving command interneurons. We describe a method which progressively deletes these “complexity-causing” connections, and find that when these are eliminated, the network becomes significantly less complex than a random network. Furthermore, this result implicates the previously-identified set of neurons from the synaptic network’s “rich club” as the structural components encoding the network’s excess complexity. This study and our method thus support a view of the gap junction Connectome as consisting of a rather low-complexity network component whose symmetry is broken by the unique connectivities of singularly important rich club neurons, sharply increasing the complexity of the network. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))

Open Access Article Emergence of Distinct Spatial Patterns in Cellular Automata with Inertia: A Phase Transition-Like Behavior
Entropy 2017, 19(3), 102; doi:10.3390/e19030102
Received: 24 December 2016 / Revised: 20 February 2017 / Accepted: 28 February 2017 / Published: 7 March 2017
PDF Full-text (5142 KB) | HTML Full-text | XML Full-text
Abstract
We propose a Cellular Automata (CA) model in which three ubiquitous and relevant processes in nature are present, namely, spatial competition, distinction between dynamically stronger and weaker agents, and the existence of an inner resistance to changes in the actual state S_n (= −1, 0, +1) of each CA lattice cell n (which we call inertia). Considering ensembles of initial lattices, we study the average properties of the CA final stationary configuration structures resulting from the system time evolution. Taking the inertia as a (proper) control parameter, we identify qualitative changes in the CA spatial patterns resembling usual phase transitions. Interestingly, some of the observed features may be associated with continuous transitions (critical phenomena). However, certain quantities seem to present jumps, typical of discontinuous transitions. We argue that these apparently contradictory findings can be attributed to the discrete character of the inertia parameter. Throughout the work, we also briefly discuss a few potential applications of the present CA formulation. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
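The rules below are a schematic reading of the abstract, not the paper's exact model: each cell with state in {−1, 0, +1} adopts its neighbourhood's majority state only when the majority "pressure" exceeds an inertia threshold, so larger inertia freezes more of the initial pattern (at inertia 8, no Moore-neighbourhood majority can ever win).

```python
import random
from collections import Counter

def step(grid, inertia):
    """Synchronous update: a cell adopts the majority state of its 8 Moore
    neighbours only when the majority count exceeds the inertia threshold."""
    L = len(grid)
    new = [row[:] for row in grid]
    for x in range(L):
        for y in range(L):
            neigh = Counter(grid[(x + dx) % L][(y + dy) % L]
                            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                            if (dx, dy) != (0, 0))
            state, count = neigh.most_common(1)[0]
            if state != grid[x][y] and count > inertia:
                new[x][y] = state
    return new

def evolve(grid, inertia, steps=20):
    for _ in range(steps):
        grid = step(grid, inertia)
    return grid

random.seed(0)
L = 32
init = [[random.choice((-1, 0, 1)) for _ in range(L)] for _ in range(L)]

def changed(a, b):
    return sum(a[x][y] != b[x][y] for x in range(L) for y in range(L))

low, high = evolve(init, 3), evolve(init, 7)
frozen = evolve(init, 8)   # pressure can never exceed the 8 neighbours
print(changed(init, low), changed(init, high), changed(init, frozen))
```

Sweeping the inertia threshold and measuring pattern statistics over ensembles of random initial lattices is the spirit of the phase-transition analysis in the paper.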

Open Access Article Identifying Critical States through the Relevance Index
Entropy 2017, 19(2), 73; doi:10.3390/e19020073
Received: 7 January 2017 / Revised: 11 February 2017 / Accepted: 13 February 2017 / Published: 16 February 2017
PDF Full-text (731 KB) | HTML Full-text | XML Full-text
Abstract
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
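The relevance index itself is defined on sets of variables in the paper; as background for one of its two testbeds, the following minimal Metropolis sketch (our construction) samples the 2D Ising model on either side of its critical temperature T_c ≈ 2.269, showing the two qualitatively different behaviours that criticality separates.

```python
import math
import random

def ising_magnetisation(T, L=16, sweeps=400, seed=0):
    """Single-spin-flip Metropolis sampling of the 2D Ising model (J = 1);
    returns the absolute magnetisation per spin after `sweeps` lattice sweeps."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        h = (s[(x + 1) % L][y] + s[(x - 1) % L][y]
             + s[x][(y + 1) % L] + s[x][(y - 1) % L])
        dE = 2 * s[x][y] * h                      # energy cost of flipping
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[x][y] = -s[x][y]
    return abs(sum(map(sum, s))) / (L * L)

m_low = ising_magnetisation(1.5)   # below Tc: ordered phase, |m| near 1
m_high = ising_magnetisation(4.0)  # above Tc: disordered phase, |m| near 0
print(m_low, m_high)
```

The relevance index is then computed on samples such as these, and the paper's finding is that it peaks at the critical temperature between the two regimes.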

Open Access Article Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
Entropy 2017, 19(2), 71; doi:10.3390/e19020071
Received: 30 December 2016 / Revised: 12 February 2017 / Accepted: 13 February 2017 / Published: 16 February 2017
PDF Full-text (2240 KB) | HTML Full-text | XML Full-text
Abstract
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or redundant and synergy components. In this work, we extend this framework focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
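For readers unfamiliar with the Williams-Beer construction the abstract builds on, the sketch below computes the bivariate information gain decomposition using their I_min redundancy measure (one choice among many; variable names are ours) for the XOR example, where all of the information is synergistic.

```python
import math
from collections import Counter

def xor_joint():
    """Joint distribution p(s, a1, a2) for S = A1 XOR A2, uniform iid inputs."""
    return {(a1 ^ a2, a1, a2): 0.25 for a1 in (0, 1) for a2 in (0, 1)}

def marginal(p, idx):
    out = Counter()
    for k, v in p.items():
        out[tuple(k[i] for i in idx)] += v
    return out

def mi(p, idx):
    """I(S; A_idx) in bits from the joint p(s, a1, a2); S is index 0."""
    ps, pa = marginal(p, (0,)), marginal(p, idx)
    psa = marginal(p, (0,) + idx)
    return sum(v * math.log2(v / (ps[(k[0],)] * pa[k[1:]]))
               for k, v in psa.items() if v > 0)

def i_min(p, sources):
    """Williams-Beer redundancy: expected minimum specific information."""
    ps = marginal(p, (0,))
    total = 0.0
    for (s,), p_s in ps.items():
        spec = []
        for idx in sources:
            pa, psa = marginal(p, idx), marginal(p, (0,) + idx)
            spec.append(sum((psa[(s,) + a] / p_s) *
                            math.log2(psa[(s,) + a] / (p_s * pa[a]))
                            for a in pa if psa[(s,) + a] > 0))
        total += p_s * min(spec)
    return total

p = xor_joint()
red = i_min(p, [(1,), (2,)])
unique1 = mi(p, (1,)) - red
unique2 = mi(p, (2,)) - red
synergy = mi(p, (1, 2)) - red - unique1 - unique2
print(red, unique1, unique2, synergy)  # 0.0 0.0 0.0 1.0 for XOR
```

The paper's point of departure is that, on the gain lattice, the redundancy term above is decomposition-invariant while the unique and synergy terms are not, which motivates the dual information loss lattices.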

Open Access Article Echo State Condition at the Critical Point
Entropy 2017, 19(1), 3; doi:10.3390/e19010003
Received: 29 October 2016 / Revised: 13 December 2016 / Accepted: 15 December 2016 / Published: 23 December 2016
PDF Full-text (638 KB) | HTML Full-text | XML Full-text
Abstract
Recurrent networks with transfer functions that fulfil Lipschitz continuity with K = 1 may be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient if the largest singular value of the recurrent connectivity is smaller than 1. The main achievement of this paper is a proof of the conditions under which the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the exact shape of the transfer function plays a decisive role in determining whether the network still fulfils the echo state condition. In addition, several examples with one-neuron networks are outlined to illustrate the effects of critical connectivity. Moreover, a mathematical definition of a critical echo state network is suggested. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
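The sufficient condition quoted in the abstract is easy to check numerically. The sketch below (our construction, not the paper's critical case) rescales a random recurrent weight matrix so its largest singular value is 0.9 and verifies that two tanh reservoirs driven by the same input forget their different initial states; the paper's contribution concerns what happens when that singular value is exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50))
W *= 0.9 / np.linalg.svd(W, compute_uv=False)[0]   # now sigma_max(W) = 0.9

def distance_after(W, steps=200):
    """Drive two tanh reservoirs from different initial states with the same
    input stream and return the distance between their final states."""
    x, y = np.ones(50), -np.ones(50)
    for _ in range(steps):
        u = rng.standard_normal(50) * 0.1          # shared input sequence
        x, y = np.tanh(W @ x + u), np.tanh(W @ y + u)
    return float(np.linalg.norm(x - y))

print(distance_after(W))  # contracts by at least 0.9 per step, so essentially 0
```

Because tanh is 1-Lipschitz, each step contracts the state difference by at least the factor sigma_max(W) = 0.9, which is exactly the textbook sufficient argument; at sigma_max = 1 this bound becomes vacuous, and the shape of the transfer function takes over.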

Open Access Article Consensus of Second Order Multi-Agent Systems with Exogenous Disturbance Generated by Unknown Exosystems
Entropy 2016, 18(12), 423; doi:10.3390/e18120423
Received: 25 July 2016 / Revised: 16 November 2016 / Accepted: 23 November 2016 / Published: 25 November 2016
PDF Full-text (818 KB) | HTML Full-text | XML Full-text
Abstract
This paper is concerned with the consensus problem for a class of second-order multi-agent systems subject to external disturbances generated by unknown exosystems. In comparison with the case where the disturbance is generated by a known exosystem, we need to combine adaptive control and internal model design to deal with external disturbances generated by unknown exosystems. With the help of the internal model, an adaptive protocol is proposed for the consensus problem of the multi-agent systems. Finally, a numerical example is provided to demonstrate the effectiveness of the control design. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
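The abstract's adaptive internal-model design is specific to the paper; as generic background, the toy sketch below simulates a standard second-order consensus protocol u_i = -sum_j a_ij [(x_i - x_j) + k (v_i - v_j)] on a ring graph, without any exosystem disturbance (the gain k and the graph are our hypothetical choices).

```python
def simulate(n=6, k=2.0, dt=0.01, steps=6000):
    """Explicit-Euler simulation of double-integrator consensus on an
    undirected ring graph; any k > 0 achieves consensus here."""
    x = [float(i) for i in range(n)]   # positions, initially spread out
    v = [0.0] * n                      # velocities
    for _ in range(steps):
        u = []
        for i in range(n):
            acc = 0.0
            for j in ((i - 1) % n, (i + 1) % n):   # ring neighbours
                acc -= (x[i] - x[j]) + k * (v[i] - v[j])
            u.append(acc)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
        v = [vi + dt * ui for vi, ui in zip(v, u)]
    return x, v

x, v = simulate()
print(max(x) - min(x))  # spread shrinks toward 0: agents reach consensus
```

The paper's setting adds an unknown disturbance to each agent's acceleration and augments a protocol of this kind with an adaptive internal model that estimates and cancels it.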

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Tentative Title: Controlling the emergence of spatial patterns through cellular automata with inertia
Authors: K. Kremer, M. Koehler and M.G.E. da Luz
Abstract: Just to give you a very short glimpse of the work (but this is not the "official" abstract): by using usual cellular automata models, but with an extra ingredient, inertia --- something I have (with my two co-authors) introduced a few years ago to describe spatial patterns in biome transition regions (usually called "ecotones") --- we describe in very general terms how to characterize phase transitions and critical phenomena in CA, which moreover can describe processes like phase segregation and frustration phenomena.

Tentative Title: Can a robot have free will?
Author: Keith Farnsworth
Abstract: Using insights from cybernetics and an information-based understanding of biological systems, I give a precise definition of free will and set out the essential requirements for it to exist in principle. Among these are the ability to compute (store and process information), to maintain an abstract model of physical reality (i.e., information abstraction) and to adopt the "intentional stance" in relation to it (meaning the model is "about" something), so as to make a choice. This requires the system to have a definite identity, for which it must be a Kantian whole (the parts exist by means of the whole and the whole by means of the parts, e.g., an autocatalytic set). To be practical, the system also must have the means to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms: not just humans, but a wide range of organisms, specifically those which a) are Kantian wholes by virtue of being autopoietic (self-creating) systems, b) can abstract information (via signalling proteins or neural systems), c) construct an abstract representation of physical reality (internal and external) using this information, d) choose internally determined actions (from among more than one possibility) based on the model, and e) enact them in physical reality. The main impediment to present-day artificial computers (e.g., silicon-based) in this respect is that they are not Kantian wholes; otherwise, as part of a robot enabling physical expression, they may be able to exercise free will. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low.

Tentative Title: Criticality and information dynamics in epidemiological models
Authors: E. Yagmur Erten, Joseph T. Lizier, Mahendra Piraveenan and Mikhail Prokopenko

Tentative Title: An Information-Theoretic Formalism for Multiscale Structure in Complex Systems
Authors: Ben Allen, Blake Stacey, Yaneer Bar-Yam
Abstract: We present a mathematical formalism for representing and understanding structure in complex systems. In our view, structure is the totality of relationships among a system's components, and these relationships can be quantified using information theory. In the interest of flexibility, we allow information to be quantified using any function that satisfies certain axioms. Shannon information and vector space dimension are examples. Using these axioms, we formalize the notion of a dependency among components, and show how a system's structure is revealed in the amount of information assigned to each dependency. We then explore quantitative indices that summarize system structure by identifying the scales at which information applies. We generalize an existing index, the complexity profile, and introduce a new index, the marginal utility of information. Using simple examples, we show how these indices capture intuitive ideas about structure in a quantitative way.

Tentative Title: Strategic Behaviour and Information Theory in Sequential Economic Games
Author: Michael S. Harre
Abstract: Sequential games, in which the same economic game is repeatedly played between the same agents, are an important experimental and theoretical framework in which agents are able to learn which strategies are effective or ineffective against another agent. Using this framework, we look at the mutual information between previous game states (choices and utilities) and an agent's next choice in non-cooperative economic games. By explicitly expanding out the mutual information between past states and the next choice, we are able to show that for certain classes of games, including notable examples such as the Prisoner's Dilemma, it is only necessary for each agent to observe the utility they received from the action they chose in the previous round in order to capture all relevant information regarding the previous round, including the strategies of other players. While this result is quite natural, some classes of games for which it is not true have some very interesting properties. Specifically, it is shown that for a class of games, called "cyclical games" here, the utility of the previous round is insufficient for an agent to infer, and therefore learn, the strategy of the other player based solely on information from the previous rounds. These results are related to a key strategic ambiguity in sequential games: the two commonly studied strategies "Tit-for-Tat" and "Win-Stay-Lose-Shift" are indistinguishable for a cyclical game such as Matching Pennies. These results are discussed in terms of linearly separable decision problems: the Matching Pennies game is shown to be analogous to two interacting logic gates, an XOR and an XNOR gate, and so the problem of learning the underlying "game logic" for the agents is not a linearly separable one.
This illustrates the difficulty an experimentalist has in inferring the internal complexity of the decision-making processes of an agent: has the agent learned a complex set of interacting elements or a simple copying of the other agent's actions?

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18