Special Issue "Information Theoretic Incentives for Cognitive Systems"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: closed (30 June 2015)

Special Issue Editors

Guest Editor
Dr. Christoph Salge (Website)

Adaptive Systems Research Group, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
Interests: Intrinsic Motivation (Empowerment); Self-Organization; Guided Self-Organization; Information-Theoretic Incentives for Social Interaction; Information-Theoretic Incentives for Swarms; Information Theory and Computer Game AI
Guest Editor
Dr. Georg Martius (Website)

Cognition and Neurosciences, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Interests: Autonomous Robots; Self-Organization; Guided Self-Organization; Information Theory; Dynamical Systems; Machine Learning; Neuroscience of Learning; Optimal Control
Guest Editor
Dr. Keyan Ghazi-Zahedi (Website)

Information Theory of Cognitive Systems, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Interests: Embodied Artificial Intelligence; Information Theory of the Sensorimotor Loop; Dynamical Systems; Cybernetics; Self-organisation; Synaptic plasticity; Evolutionary Robotics
Guest Editor
Dr. Daniel Polani (Website)

Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Interests: artificial intelligence; artificial life; information theory for intelligent information processing; sensor evolution; collective and multiagent systems

Special Issue Information

Dear Colleagues,

In recent years, ideas such as "life is information processing" or "information holds the key to understanding life" have become more common. However, how can information, or more formally Information Theory, increase our understanding of life, or life-like systems?

Information Theory not only has a profound mathematical basis, but also typically provides an intuitive understanding of processes such as learning, behavior and evolution in terms of information processing.

In this special issue, we are interested in both: (a) the information-theoretic formalization and quantification of different aspects of life, such as the driving forces of learning and behavior generation, information flows between neurons, swarm members and social agents, and information-theoretic aspects of evolution and adaptation; and (b) the simulation and creation of life-like systems based on previously identified principles and incentives.

Topics relating to artificial and natural systems include:

  • information theoretic intrinsic motivations
  • information theoretic quantification of behavior
  • information theoretic guidance of artificial evolution
  • information theoretic guidance of self-organization
  • information theoretic driving forces behind learning
  • information theoretic driving forces behind behavior
  • information theory in swarms
  • information theory in social behavior
  • information theory in evolution
  • information theory in the brain
  • information theory in system-environment distinction
  • information theory in the perception action loop
  • information theoretic definitions of life
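Several of these topics build on the same primitive: a mutual information between an agent's choices and its subsequent sensor states. As one hedged illustration (ours, not part of the call), empowerment [1] is the channel capacity of the action–sensor channel; for a deterministic world this capacity reduces to the logarithm of the number of distinct reachable states. A minimal sketch for a hypothetical 1D grid world:

```python
import math

# Hypothetical toy world: an agent at position `pos` on a line of
# length `size` can step left, stay, or step right (clipped at walls).
def step(pos, action, size=5):
    return min(max(pos + action, 0), size - 1)

def empowerment(pos, actions=(-1, 0, 1), size=5):
    """For a deterministic world, the channel capacity between actions
    and successor states reduces to log2 of the number of distinct
    reachable states (cf. Klyubin et al. [1])."""
    reachable = {step(pos, a, size) for a in actions}
    return math.log2(len(reachable))

# In the interior, all three actions lead to distinct states...
print(empowerment(2))  # log2(3), about 1.585 bits
# ...while at a wall, two actions collapse onto the same state.
print(empowerment(0))  # log2(2) = 1.0 bit
```

In stochastic worlds the maximization over action distributions no longer collapses, and the capacity must be computed, e.g., with the Blahut-Arimoto algorithm.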

Dr. Christoph Salge
Dr. Georg Martius
Dr. Keyan Ghazi-Zahedi
Dr. Daniel Polani
Guest Editors

[1] A. S. Klyubin, D. Polani, and C. L. Nehaniv. Empowerment: A universal agent-centric measure of control. In Proc. CEC, 2005.
[2] G. Martius, R. Der, and N. Ay. Information driven self-organization of complex robotic behaviors. PLoS ONE, 8(5):e63400, 2013.
[3] D. Y. Little and F. T. Sommer. Learning and exploration in action-perception loops. Frontiers in Neural Circuits, 2013.
[4] M. Lungarella and O. Sporns. Information self-structuring: Key principle for learning and development. In Proc. ICDL, 2005.
[5] D. Polani. Information: Currency of life? HFSP Journal, 3(5):307–316, 2009.
[6] K. Zahedi, N. Ay, and R. Der. Higher coordination with less control – a result of information maximization in the sensori-motor loop. Adaptive Behavior, 18(3–4):338–355, 2010.
[7] K. Zahedi and N. Ay. Quantifying morphological computation. Entropy, 15(5):1887–1915, 2013.
[8] R. Pfeifer, M. Lungarella, and F. Iida. Self-organization, embodiment, and biologically inspired robotics. Science, 318(5853):1088–1093, 2007.
[9] C. Salge, C. Glackin, and D. Polani. Changing the environment based on empowerment as intrinsic motivation. Entropy, 16(5):2789–2819, 2014.

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Published Papers (7 papers)


Research

Open Access Article
Nonparametric Problem-Space Clustering: Learning Efficient Codes for Cognitive Control Tasks
Entropy 2016, 18(2), 61; doi:10.3390/e18020061
Received: 16 July 2015 / Revised: 29 January 2016 / Accepted: 14 February 2016 / Published: 19 February 2016
Abstract
We present an information-theoretic method permitting one to find structure in a problem space (here, in a spatial navigation domain) and cluster it in ways that are convenient to solve different classes of control problems, which include planning a path to a goal from a known or an unknown location, achieving multiple goals and exploring a novel environment. Our generative nonparametric approach, called the generative embedded Chinese restaurant process (geCRP), extends the family of Chinese restaurant process (CRP) models by introducing a parameterizable notion of distance (or kernel) between the states to be clustered together. By using different kernels, such as the conditional probability or joint probability of two states, the same geCRP method clusters the environment in ways that are more sensitive to different control-related information, such as goal, sub-goal and path information. We perform a series of simulations in three scenarios—an open space, a grid world with four rooms and a maze having the same structure as the Hanoi Tower—in order to illustrate the characteristics of the different clusters (obtained using different kernels) and their relative benefits for solving planning and control problems.
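For readers unfamiliar with the model family the geCRP extends, the base process can be sketched in a few lines. This is the standard Chinese restaurant process prior, not the authors' generative embedded variant:

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Sample a partition of n items from a standard Chinese restaurant
    process: item i joins an existing cluster with probability
    proportional to its size, or opens a new cluster with probability
    proportional to the concentration parameter alpha."""
    rng = random.Random(seed)
    sizes = []        # current cluster sizes; they sum to i at step i
    assignment = []   # cluster index of each item
    for i in range(n):
        r = rng.uniform(0, i + alpha)  # total weight is i + alpha
        acc = 0.0
        for k, w in enumerate(sizes + [alpha]):
            acc += w
            if r <= acc:
                break
        if k == len(sizes):
            sizes.append(1)   # open a new cluster
        else:
            sizes[k] += 1     # join cluster k
        assignment.append(k)
    return assignment

print(crp_partition(10, alpha=1.0))
```

The geCRP replaces the size-proportional seating probabilities with kernel-weighted ones, so that states that are close under the chosen kernel tend to share a cluster.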
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
Open Access Article
Information-Theoretic Neuro-Correlates Boost Evolution of Cognitive Systems
Entropy 2016, 18(1), 6; doi:10.3390/e18010006
Received: 1 July 2015 / Revised: 9 December 2015 / Accepted: 22 December 2015 / Published: 25 December 2015
Abstract
Genetic Algorithms (GAs) are a powerful set of tools for search and optimization that mimic the process of natural selection, and have been used successfully in a wide variety of problems, including evolving neural networks to solve cognitive tasks. Despite their success, GAs sometimes fail to locate the highest peaks of the fitness landscape, in particular if the landscape is rugged and contains multiple peaks. Reaching distant and higher peaks is difficult because valleys need to be crossed, in a process that (at least temporarily) runs against the fitness maximization objective. Here we propose and test a number of information-theoretic (as well as network-based) measures that can be used in conjunction with a fitness maximization objective (so-called “neuro-correlates”) to evolve neural controllers for two widely different tasks: a behavioral task that requires information integration, and a cognitive task that requires memory and logic. We find that judiciously chosen neuro-correlates can significantly aid GAs to find the highest peaks.
Open Access Article
Quantifying Emergent Behavior of Autonomous Robots
Entropy 2015, 17(10), 7266-7297; doi:10.3390/e17107266
Received: 29 June 2015 / Revised: 6 October 2015 / Accepted: 12 October 2015 / Published: 23 October 2015
Abstract
Quantifying behaviors of robots that were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and of the movements of animals. The temporal sequence of such a behavior can be considered as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states, one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts and discuss how they depend on the dimensionality of the dynamics, the correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm, we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
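The past–future information described here can be estimated for discrete time series with plug-in block entropies. The sketch below illustrates the general quantity on symbolic data; it is not the paper's correlation-integral estimator for continuous states:

```python
import random
from collections import Counter
from math import log2

def block_entropy(seq, k):
    """Plug-in (maximum-likelihood) estimate of the Shannon entropy
    H(k) of length-k blocks of a symbolic sequence, in bits."""
    blocks = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in Counter(blocks).values())

def predictive_information(seq, k):
    """I(past_k; future_k) = 2*H(k) - H(2k): mutual information between
    adjacent length-k windows, a finite-k proxy for the excess entropy."""
    return 2 * block_entropy(seq, k) - block_entropy(seq, 2 * k)

period2 = [0, 1] * 500  # perfectly predictable alternation
random.seed(0)
noise = [random.randint(0, 1) for _ in range(1000)]
print(predictive_information(period2, 2))  # about 1 bit: the phase is informative
print(predictive_information(noise, 2))    # near 0 bits: an i.i.d. past says nothing
```

As the abstract notes, for continuous deterministic systems this quantity diverges as the discretization is refined, which is what motivates the paper's resolution-dependent/independent decomposition.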
Open Access Article
The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats
Entropy 2015, 17(8), 5472-5502; doi:10.3390/e17085472
Received: 22 May 2015 / Revised: 20 July 2015 / Accepted: 28 July 2015 / Published: 31 July 2015
Abstract
Current approaches to characterize the complexity of dynamical systems usually rely on state-space trajectories. In this article, instead, we focus on causal structure, treating discrete dynamical systems as directed causal graphs—systems of elements implementing local update functions. This allows us to characterize the system’s intrinsic cause-effect structure by applying the mathematical and conceptual tools developed within the framework of integrated information theory (IIT). In particular, we assess the number of irreducible mechanisms (concepts) and the total amount of integrated conceptual information Φ specified by a system. We analyze: (i) elementary cellular automata (ECA); and (ii) small, adaptive logic-gate networks (“animats”), similar to ECA in structure but evolving by interacting with an environment. We show that, in general, an integrated cause-effect structure with many concepts and high Φ is likely to have high dynamical complexity. Importantly, while a dynamical analysis describes what is “happening” in a system from the extrinsic perspective of an observer, the analysis of its cause-effect structure reveals what a system “is” from its own intrinsic perspective, exposing its dynamical and evolutionary potential under many different scenarios.
Open Access Article
The Fisher Information as a Neural Guiding Principle for Independent Component Analysis
Entropy 2015, 17(6), 3838-3856; doi:10.3390/e17063838
Received: 27 February 2015 / Revised: 28 May 2015 / Accepted: 5 June 2015 / Published: 9 June 2015
Abstract
The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions to this rule on the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, such as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis.
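The negative-excess-kurtosis selectivity mentioned at the end can be made concrete numerically. The sketch below only estimates the excess kurtosis of a sub-Gaussian (uniform) and a Gaussian sample; it is not the paper's plasticity rule:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis E[(x-mu)^4]/sigma^4 - 3.
    Zero for a Gaussian; negative for sub-Gaussian (platykurtic) data."""
    mu = statistics.fmean(xs)
    var = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / var ** 2 - 3

rng = random.Random(1)
n = 100_000
uniform = [rng.uniform(-1, 1) for _ in range(n)]  # sub-Gaussian: -6/5 in the limit
gauss = [rng.gauss(0, 1) for _ in range(n)]       # excess kurtosis 0 in the limit
print(excess_kurtosis(uniform))  # close to -1.2
print(excess_kurtosis(gauss))    # close to 0
```

A rule selective for negative excess kurtosis would thus align with the uniform (independent-component-like) direction rather than the Gaussian one.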

Open Access Article
Information Geometry on Complexity and Stochastic Interaction
Entropy 2015, 17(4), 2432-2458; doi:10.3390/e17042432
Received: 28 February 2015 / Revised: 2 April 2015 / Accepted: 8 April 2015 / Published: 21 April 2015
Abstract
Interdependencies of stochastically interacting units are usually quantified by the Kullback-Leibler divergence of a stationary joint probability distribution on the set of all configurations from the corresponding factorized distribution. This is a spatial approach which does not describe the intrinsically temporal aspects of interaction. In the present paper, the setting is extended to a dynamical version where temporal interdependencies are also captured by using information geometry of Markov chain manifolds.
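The "spatial" baseline quantity, the KL divergence of a joint distribution from its factorized counterpart (the multi-information), is straightforward to compute for small discrete systems. A sketch, assuming joint distributions represented as dicts:

```python
from math import log2

def multi_information(joint):
    """KL divergence D(p(x,y) || p(x)p(y)) in bits, for a joint
    distribution given as a dict {(x, y): prob}. This is the 'spatial'
    interdependence measure the paper extends to the temporal setting."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p  # marginal over x
        py[y] = py.get(y, 0.0) + p  # marginal over y
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

independent = {(0, 0): .25, (0, 1): .25, (1, 0): .25, (1, 1): .25}
correlated = {(0, 0): .5, (1, 1): .5}
print(multi_information(independent))  # 0.0 bits
print(multi_information(correlated))   # 1.0 bit
```

The temporal version studied in the paper replaces the factorized stationary distribution with split Markov transition kernels, so that interaction across a time step is captured as well.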
Open Access Article
A Fundamental Scale of Descriptions for Analyzing Information Content of Communication Systems
Entropy 2015, 17(4), 1606-1633; doi:10.3390/e17041606
Received: 3 January 2015 / Revised: 16 March 2015 / Accepted: 19 March 2015 / Published: 25 March 2015
Abstract
The complexity of the description of a system is a function of the entropy of its symbolic description. Prior to computing the entropy of the system’s description, an observation scale has to be assumed. In texts written in artificial and natural languages, typical scales are bits, characters, and words. However, considering languages as structures built around a certain preconceived set of symbols, such as words or characters, limits the level of complexity that can be revealed analytically. This study introduces the notion of the fundamental description scale to analyze the essence of the structure of a language. The concept of the Fundamental Scale is tested for English and musical instrument digital interface (MIDI) music texts using an algorithm developed to split a text into a collection of sets of symbols that minimizes the observed entropy of the system. The Fundamental Scale reflects more details of the complexity of the language than bits, characters or words do. Results show that the Fundamental Scale allows one to compare completely different languages, such as English and MIDI-coded music, with regard to their structural entropy. This comparative power facilitates the study of the complexity of the structure of different communication systems.
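The scale dependence of the entropy can be illustrated with a toy comparison of two preconceived scales (characters vs. words). The paper's algorithm instead searches over segmentations of the text to minimize this quantity; the sketch below only evaluates two fixed scales:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits per symbol) of an observed symbol sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

text = "the cat sat on the mat the cat ran"
chars = list(text)   # the same text, described at the character scale...
words = text.split() # ...and at the word scale
print(f"characters: {entropy(chars):.2f} bits/symbol, {len(set(chars))} symbols")
print(f"words:      {entropy(words):.2f} bits/symbol, {len(set(words))} symbols")
```

The two scales give different per-symbol entropies and alphabet sizes, which is why an entropy-based complexity comparison must first fix (or, as in the paper, optimize) the description scale.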

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18