Special Issue "Complexity, Criticality and Computation (C³)"
Deadline for manuscript submissions: closed (28 February 2017)
Prof. Mikhail Prokopenko
Complex systems science is a new approach to science and engineering that studies how relationships between parts give rise to the collective, emergent behaviours of the entire system, and how the system interacts with its environment.
The dynamics of a complex system cannot be predicted or explained as a linear aggregation of the individual dynamics of its components: the interactions among the many constituent microscopic parts bring about synergistic macroscopic phenomena that cannot be understood by considering any single part alone. There is a growing awareness that complexity is strongly related to criticality: the behaviour of dynamical spatiotemporal systems at an order/disorder phase transition, where scale invariance prevails.
Complex systems can also be viewed as distributed information-processing systems, in domains ranging from systems biology and artificial life to computational neuroscience, digital circuitry, and transport networks. Consciousness emerging from neuronal activity and interactions, cell behaviour resulting from gene regulatory networks, and swarming are all examples of global system behaviour emerging from the local interactions of individuals (neurons, genes, animals). Can these interactions be seen as a generic computational process? This question shapes the third component of our Special Issue, linking computation to complexity and criticality.
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.
- emergent phenomena
- non-linear dynamics
- phase transitions
- information thermodynamics
- distributed information-processing
- computational neuroscience
- swarming behavior
- systems biology
- artificial life
- bio-inspired computing
- agent-based simulation
The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.
Tentative Title: Controlling the emergence of spatial patterns through cellular automata with inertia
Authors: K. Kremer, M. Koehler and M.G.E. da Luz
Abstract: Just to give a very short glimpse of the work (this is not the "official" abstract): using standard cellular automata (CA) models augmented with an extra ingredient, inertia --- something I introduced with my two co-authors a few years ago to describe spatial patterns in biome transition regions (usually called "ecotones") --- we describe in very general terms how to characterize phase transitions and critical phenomena in CA, which moreover can describe processes such as phase segregation and frustration phenomena.
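The following sketch illustrates the general idea of a CA with inertia; the specific update rule (a one-dimensional majority rule in which a cell flips only when the opposing fraction of its neighbourhood exceeds an inertia threshold) is an illustrative guess, not the authors' model.

```python
import random

# Illustrative sketch only: the inertia rule below is a generic guess,
# not the model from the planned manuscript.  A cell adopts the opposing
# state only if the opposing neighbourhood fraction exceeds its "inertia".
def step(grid, inertia=0.6):
    n = len(grid)
    new = grid[:]
    for i in range(n):
        # two neighbours on each side, periodic boundary conditions
        neigh = [grid[(i - 2) % n], grid[(i - 1) % n],
                 grid[(i + 1) % n], grid[(i + 2) % n]]
        opposing = sum(1 for s in neigh if s != grid[i])
        if opposing / len(neigh) > inertia:  # higher inertia -> harder to flip
            new[i] = 1 - grid[i]
    return new

random.seed(0)
grid = [random.randint(0, 1) for _ in range(40)]
for _ in range(20):
    grid = step(grid)
```

Varying the inertia parameter interpolates between an ordinary majority-vote CA (inertia near 0) and a frozen configuration (inertia near 1), which is the kind of control parameter along which one would look for a phase transition.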
Tentative Title: Can a robot have free will?
Author: Keith Farnsworth
Abstract: Using insights from cybernetics and an information-based understanding of biological systems, I give a precise definition of free will and set out the essential requirements for it to exist in principle. Among these are the ability to compute (store and process information), to maintain an abstract model of physical reality (i.e. information abstraction) and to adopt the /intentional stance/ in relation to it (meaning the model is /about/ something), so as to make a choice. This requires the system to have a definite identity, for which it must be a Kantian whole (the parts exist by means of the whole and the whole by means of the parts, e.g. an autocatalytic set). To be practical, the system must also have the means to enact its choice in physical reality. The only systems known to meet all these criteria are living organisms — not just humans, but a wide range of organisms: specifically, those which a) are Kantian wholes by virtue of being autopoietic (self-creating) systems, b) can abstract information (via signalling proteins or neural systems), c) construct an abstract representation of physical reality (internal and external) using this information, d) choose internally determined actions (from among more than one possibility) based on the model, and e) enact them in physical reality. The main impediment to present-day artificial computers (e.g. silicon-based) in this respect is that they are not Kantian wholes; otherwise, as part of a robot enabling physical expression, they might be able to exercise free will. Consciousness does not seem to be a requirement, and the minimum complexity for a free-will system may be quite low.
Tentative Title: Criticality and information dynamics in epidemiological models
Authors: E. Yagmur Erten, Joseph T. Lizier, Mahendra Piraveenan and Mikhail Prokopenko
Tentative Title: An Information-Theoretic Formalism for Multiscale Structure in Complex Systems
Authors: Ben Allen, Blake Stacey, Yaneer Bar-Yam
Abstract: We present a mathematical formalism for representing and understanding structure in complex systems. In our view, structure is the totality of relationships among a system's components, and these relationships can be quantified using information theory. In the interest of flexibility, we allow information to be quantified using any function that satisfies certain axioms; Shannon information and vector space dimension are examples. Using these axioms, we formalize the notion of a dependency among components, and show how a system's structure is revealed in the amount of information assigned to each dependency. We then explore quantitative indices that summarize system structure by identifying the scales at which information applies. We generalize an existing index, the complexity profile, and introduce a new index, the marginal utility of information. Using simple examples, we show how these indices capture intuitive ideas about structure in a quantitative way.
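A toy illustration of the underlying idea (not the authors' formalism): take Shannon entropy as the information function and measure how much information is locked up in a multi-component dependency. For three binary components with c = a XOR b, each component alone carries one bit, yet the joint system carries only two, so one bit exists purely as a collective dependency.

```python
from math import log2

# Toy sketch: quantify dependency among components with Shannon entropy.
# Three binary components with c = a XOR b, uniform over the four states.
states = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]

def entropy(indices):
    """Shannon entropy (in bits) of the marginal over the given components."""
    counts = {}
    for s in states:
        key = tuple(s[i] for i in indices)
        counts[key] = counts.get(key, 0) + 1
    n = len(states)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Sum of single-component entropies minus the joint entropy (the
# multi-information): 3 bits - 2 bits = 1 bit of shared structure.
multi_information = sum(entropy([i]) for i in range(3)) - entropy([0, 1, 2])
print(multi_information)  # 1.0
```

The XOR system is also a classic case where scale matters: any pair of components looks independent, and the dependency is visible only at the scale of all three — the kind of multiscale structure the complexity profile is designed to expose.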
Tentative Title: Strategic Behaviour and Information Theory in Sequential Economic Games
Author: Michael S. Harre
Abstract: Sequential games, in which the same economic game is repeatedly played between the same agents, are an important experimental and theoretical framework in which agents are able to learn which strategies are effective or ineffective against another agent. Using this framework, we look at the mutual information between previous game states (choices and utilities) and an agent's next choice in non-cooperative economic games. By explicitly expanding the mutual information between past states and the next choice, we are able to show that for certain classes of games, including notable examples such as the Prisoner's Dilemma, it is only necessary for each agent to observe the utility received from its chosen action in the previous round in order to capture all relevant information regarding that round, including the strategies of other players. While this result is quite natural, the classes of games for which it is not true have some very interesting properties. Specifically, it is shown that for a class of games, called `cyclical games' here, the utility of the previous round is insufficient for an agent to infer, and therefore learn, the other player's strategy based solely on information from previous rounds. These results are related to a key strategic ambiguity in sequential games: the two commonly studied strategies `Tit-for-Tat' and `Win-Stay-Lose-Shift' are indistinguishable in a cyclical game such as Matching Pennies. These results are discussed in terms of linearly separable decision problems: the Matching Pennies game is shown to be analogous to two interacting logic gates, an XOR and an XNOR gate, so the problem of learning the underlying `game logic' is not linearly separable for the agents.
This illustrates the difficulty an experimentalist faces in inferring the internal complexity of an agent's decision-making process: has the agent learned a complex set of interacting elements, or has it simply copied the other agent's actions?
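The strategic ambiguity described in the abstract can be checked directly for the "matcher" in Matching Pennies (the player who wins when both choices agree, i.e. whose payoff is the XNOR of the two binary moves). The small sketch below, an illustration rather than the paper's derivation, enumerates every state and confirms that Tit-for-Tat and Win-Stay-Lose-Shift prescribe identical next moves for the matcher, so their behaviour alone cannot reveal which rule the agent is using.

```python
# Matching Pennies from the matcher's perspective: payoff is the XNOR
# of the two binary moves (win when the choices agree).
def matcher_payoff(own, opp):
    return 1 if own == opp else 0  # XNOR

def tit_for_tat(own_last, opp_last):
    return opp_last  # simply copy the opponent's previous move

def win_stay_lose_shift(own_last, opp_last):
    won = matcher_payoff(own_last, opp_last)
    return own_last if won else 1 - own_last  # stay on a win, switch on a loss

# The two strategies agree in every one of the four possible states:
# a win means own_last == opp_last (staying equals copying), and a loss
# means they differ (switching equals copying).
for own in (0, 1):
    for opp in (0, 1):
        assert tit_for_tat(own, opp) == win_stay_lose_shift(own, opp)
```

This behavioural equivalence is exactly the observer's problem raised above: the same move sequence is consistent with a simple copying rule and with a payoff-conditioned rule of greater internal complexity.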