Special Issue "Information Theory in Neuroscience"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: closed (30 April 2018)

Special Issue Editors

Guest Editor
Prof. Stefano Panzeri

Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
Interests: neural coding; information theory; population coding; temporal coding
Guest Editor
Dr. Eugenio Piasini

Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: information processing in complex systems; neural coding; structure-function relationships in neural networks; normative models of neural function; perceptual decision-making; neuroinformatics

Special Issue Information

Dear Colleagues,

As the ultimate information processing device, the brain naturally lends itself to being studied with information theory. The application of information theory to neuroscience has spurred the development of principled theories of brain function, led to advances in the study of consciousness, and driven the development of analytical techniques to crack the neural code, that is, to unveil the language used by neurons to encode and process information. In particular, advances in experimental techniques enabling precise recording and manipulation of neural activity on a large scale now make it possible, for the first time, to formulate precisely and test quantitatively hypotheses about how the brain encodes information and transmits it across areas for specific functions.

This Special Issue emphasizes contributions on novel approaches in neuroscience that use information theory, and on the development of new information-theoretic results inspired by problems in neuroscience. Research at the interface of neuroscience, information theory, and other disciplines is also welcome.

Prof. Stefano Panzeri
Dr. Eugenio Piasini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • population coding
  • redundancy
  • synergy
  • optimal codes
  • directed information
  • integrated information theory
  • neural decoders

Published Papers (11 papers)


Research

Open Access Article: Information-Theoretical Analysis of the Neural Code in the Rodent Temporal Lobe
Entropy 2018, 20(8), 571; https://doi.org/10.3390/e20080571
Received: 31 May 2018 / Revised: 12 July 2018 / Accepted: 25 July 2018 / Published: 3 August 2018
Abstract
In the study of the neural code, information-theoretical methods have the advantage of making no assumptions about the probabilistic mapping between stimuli and responses. In the sensory domain, several methods have been developed to quantify the amount of information encoded in neural activity, without necessarily identifying the specific stimulus or response features that instantiate the code. As a proof of concept, here we extend those methods to the encoding of kinematic information in a navigating rodent. We estimate the information encoded in two well-characterized codes, mediated by the firing rate of neurons, and by the phase-of-firing with respect to the theta-filtered local field potential. In addition, we also consider a novel code, mediated by the delta-filtered local field potential. We find that all three codes transmit significant amounts of kinematic information, and informative neurons tend to employ a combination of codes. Cells tend to encode conjunctions of kinematic features, so that most of the informative neurons fall outside the traditional cell types employed to classify spatially-selective units. We conclude that a broad perspective on the candidate stimulus and response features expands the repertoire of strategies with which kinematic information is encoded. Full article
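The quantity at the heart of this analysis is the mutual information between stimulus and response. As a minimal illustration of the idea, here is a plug-in estimator on a toy stimulus-by-response count table (not the authors' bias-corrected pipeline; the toy counts are hypothetical):

```python
import numpy as np

def mutual_information(joint_counts):
    """Plug-in mutual information (in bits) from a stimulus x response count table."""
    p = joint_counts / joint_counts.sum()      # joint probability p(s, r)
    ps = p.sum(axis=1, keepdims=True)          # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)          # marginal p(r)
    nz = p > 0                                 # skip zero cells to avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Toy example: two stimuli, two firing-rate bins. A perfectly informative
# code carries 1 bit; an uninformative one carries 0.
perfect = np.array([[10, 0], [0, 10]])
useless = np.array([[5, 5], [5, 5]])
print(mutual_information(perfect))  # 1.0
print(mutual_information(useless))  # 0.0
```

With real spike counts, the plug-in estimate is upward biased for small samples, which is why dedicated bias-correction methods are used in practice.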
(This article belongs to the Special Issue Information Theory in Neuroscience)

Open Access Article: Category Structure and Categorical Perception Jointly Explained by Similarity-Based Information Theory
Entropy 2018, 20(7), 527; https://doi.org/10.3390/e20070527
Received: 27 April 2018 / Revised: 8 July 2018 / Accepted: 10 July 2018 / Published: 14 July 2018
Abstract
Categorization is a fundamental information processing phenomenon in the brain. It is critical for animals to compress an abundance of stimulations into groups to react quickly and efficiently. In addition to labels, categories possess an internal structure: the goodness measures how well any element belongs to a category. Interestingly, this categorization leads to an altered perception referred to as categorical perception: for a given physical distance, items within a category are perceived closer than items in two different categories. A subtler effect is the perceptual magnet: discriminability is reduced close to the prototypes of a category and increased near its boundaries. Here, starting from predefined abstract categories, we naturally derive the internal structure of categories and the phenomenon of categorical perception, using an information theoretical framework that involves both probabilities and pairwise similarities between items. Essentially, we suggest that pairwise similarities between items are to be tuned to render some predefined categories as well as possible. However, constraints on these pairwise similarities only produce an approximate matching, which explains concurrently the notion of goodness and the warping of perception. Overall, we demonstrate that similarity-based information theory may offer a global and unified principled understanding of categorization and categorical perception simultaneously. Full article

Open Access Article: A Measure of Information Available for Inference
Entropy 2018, 20(7), 512; https://doi.org/10.3390/e20070512
Received: 11 May 2018 / Revised: 5 July 2018 / Accepted: 6 July 2018 / Published: 7 July 2018
Abstract
The mutual information between the state of a neural network and the state of the external world represents the amount of information stored in the neural network that is associated with the external world. In contrast, the surprise of the sensory input indicates the unpredictability of the current input. In other words, this is a measure of inference ability, and an upper bound of the surprise is known as the variational free energy. According to the free-energy principle (FEP), a neural network continuously minimizes the free energy to perceive the external world. For the survival of animals, inference ability is considered to be more important than simply memorized information. In this study, the free energy is shown to represent the gap between the amount of information stored in the neural network and that available for inference. This concept involves both the FEP and the infomax principle, and will be a useful measure for quantifying the amount of information available for inference. Full article
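The bound discussed in this abstract can be made concrete in a toy discrete model. The sketch below (prior and likelihood values are purely illustrative) shows that the variational free energy upper-bounds the sensory surprise, and that the gap, KL(q || posterior), vanishes when the recognition density equals the true posterior:

```python
import numpy as np

# Toy discrete model (illustrative numbers): hidden state s in {0, 1},
# observation o in {0, 1}.
p_s = np.array([0.5, 0.5])                   # prior p(s)
p_o_given_s = np.array([[0.9, 0.1],          # p(o | s = 0)
                        [0.2, 0.8]])         # p(o | s = 1)

def free_energy(q, o):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)], in nats.
    Assumes q(s) > 0 for all s."""
    joint = p_s * p_o_given_s[:, o]          # p(o, s) at the observed o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 1
surprise = -np.log(np.sum(p_s * p_o_given_s[:, o]))   # surprise = -ln p(o)

q_bad = np.array([0.5, 0.5])                 # an arbitrary recognition density
posterior = p_s * p_o_given_s[:, o]
posterior = posterior / posterior.sum()      # exact posterior p(s | o)

# F upper-bounds the surprise; the gap equals KL(q || posterior) and
# vanishes when q is the true posterior.
print(free_energy(q_bad, o) >= surprise)                 # True
print(np.isclose(free_energy(posterior, o), surprise))   # True
```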

Open Access Article: Novel Brain Complexity Measures Based on Information Theory
Entropy 2018, 20(7), 491; https://doi.org/10.3390/e20070491
Received: 26 April 2018 / Revised: 6 June 2018 / Accepted: 19 June 2018 / Published: 25 June 2018
Abstract
Brain networks are widely used models to understand the topology and organization of the brain. These networks can be represented by a graph, where nodes correspond to brain regions and edges to structural or functional connections. Several measures have been proposed to describe the topological features of these networks, but unfortunately, it is still unclear which measures give the best representation of the brain. In this paper, we propose a new set of measures based on information theory. Our approach interprets the brain network as a stochastic process where impulses are modeled as a random walk on the graph nodes. This new interpretation provides a solid theoretical framework from which several global and local measures are derived. Global measures provide quantitative values for the whole brain network characterization and include entropy, mutual information, and erasure mutual information. The latter is a new measure based on mutual information and erasure entropy. On the other hand, local measures are based on different decompositions of the global measures and provide different properties of the nodes. Local measures include entropic surprise, mutual surprise, mutual predictability, and erasure surprise. The proposed approach is evaluated using synthetic model networks and structural and functional human networks at different scales. Results demonstrate that the global measures can characterize new properties of the topology of a brain network and, in addition, for a given number of nodes, an optimal number of edges is found for small-world networks. Local measures show different properties of the nodes such as the uncertainty associated to the node, or the uniqueness of the path that the node belongs. Finally, the consistency of the results across healthy subjects demonstrates the robustness of the proposed measures. Full article
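The random-walk interpretation on which these measures are built can be sketched generically. The function below computes the entropy rate of a random walk on an undirected graph; it is a simplified stand-in for the global measures, not the paper's erasure-based quantities:

```python
import numpy as np

def walk_entropy_rate(A):
    """Entropy rate (bits/step) of a random walk on an undirected graph:
    H = sum_i pi_i H(P[i, :]), with stationary pi_i = d_i / sum(d)."""
    d = A.sum(axis=1)                        # node degrees
    P = A / d[:, None]                       # walk transition matrix
    pi = d / d.sum()                         # stationary distribution
    P_safe = np.where(P > 0, P, 1.0)         # log(1) = 0, so zero entries drop out
    row_entropy = -(P * np.log2(P_safe)).sum(axis=1)
    return float((pi * row_entropy).sum())

ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)     # 4-cycle: 2 choices per step
complete = np.ones((4, 4)) - np.eye(4)           # K4: 3 choices per step
print(walk_entropy_rate(ring))       # 1.0
print(walk_entropy_rate(complete))   # log2(3), about 1.585
```

Denser graphs support more uncertain walks, which is one way entropy-type measures separate network topologies.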

Open Access Feature Paper Article: A Moment-Based Maximum Entropy Model for Fitting Higher-Order Interactions in Neural Data
Entropy 2018, 20(7), 489; https://doi.org/10.3390/e20070489
Received: 1 May 2018 / Revised: 15 June 2018 / Accepted: 19 June 2018 / Published: 23 June 2018
Abstract
Correlations in neural activity have been demonstrated to have profound consequences for sensory encoding. To understand how neural populations represent stimulus information, it is therefore necessary to model how pairwise and higher-order spiking correlations between neurons contribute to the collective structure of population-wide spiking patterns. Maximum entropy models are an increasingly popular method for capturing collective neural activity by including successively higher-order interaction terms. However, incorporating higher-order interactions in these models is difficult in practice due to two factors. First, the number of parameters exponentially increases as higher orders are added. Second, because triplet (and higher) spiking events occur infrequently, estimates of higher-order statistics may be contaminated by sampling noise. To address this, we extend previous work on the Reliable Interaction class of models to develop a normalized variant that adaptively identifies the specific pairwise and higher-order moments that can be estimated from a given dataset for a specified confidence level. The resulting “Reliable Moment” model is able to capture cortical-like distributions of population spiking patterns. Finally, we show that, compared with the Reliable Interaction model, the Reliable Moment model infers fewer strong spurious higher-order interactions and is better able to predict the frequencies of previously unobserved spiking patterns. Full article
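The confidence-level criterion for deciding which moments are estimable can be caricatured with a plain count threshold. In this sketch, the threshold `n_min` is an illustrative stand-in for the paper's actual Reliable Moment rule: only co-firing moments supported by enough joint events are kept for fitting.

```python
import numpy as np
from itertools import combinations

def reliable_moments(spikes, max_order=3, n_min=5):
    """Keep only the co-firing moments <x_i x_j ...> supported by at least
    n_min joint events. The plain count threshold n_min is an illustrative
    stand-in for a confidence-level criterion.
    spikes: (n_samples, n_neurons) binary array."""
    n_samples, n_neurons = spikes.shape
    kept = {}
    for order in range(1, max_order + 1):
        for idx in combinations(range(n_neurons), order):
            events = spikes[:, list(idx)].all(axis=1).sum()
            if events >= n_min:
                kept[idx] = events / n_samples   # empirical moment
    return kept

# Synthetic spikes: 4 independent neurons firing with probability 0.2 per bin.
rng = np.random.default_rng(0)
spikes = (rng.random((1000, 4)) < 0.2).astype(int)
moments = reliable_moments(spikes, max_order=2, n_min=10)
# Single-neuron moments (~0.2) are always kept; the rarer pairwise events
# survive only if observed at least n_min times.
```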

Open Access Article: Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
Entropy 2018, 20(3), 173; https://doi.org/10.3390/e20030173
Received: 18 December 2017 / Revised: 26 February 2018 / Accepted: 27 February 2018 / Published: 6 March 2018
Cited by 1
Abstract
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has previously been shown that, if a measure of Φ satisfies a mathematical property called submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating its accuracy on simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time.
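The combinatorial core of the problem can be sketched in a few lines: an exhaustive bipartition search, with a toy cut-weight function standing in for any actual measure of Φ. The 2^(n-1) cost of this brute force is exactly what the submodularity-based algorithms studied in the paper are designed to avoid.

```python
from itertools import combinations

def minimum_information_partition(nodes, phi):
    """Exhaustive search over all bipartitions for the cut minimizing phi.
    Fixing one node on one side enumerates each bipartition once; the cost,
    2**(n-1) - 1 evaluations, is what submodularity-based search avoids."""
    nodes = list(nodes)
    first, rest = nodes[0], nodes[1:]
    best_value, best_part = float("inf"), None
    for k in range(len(rest) + 1):
        for combo in combinations(rest, k):
            part = frozenset((first, *combo))
            if len(part) == len(nodes):
                continue                     # skip the trivial (uncut) case
            value = phi(part)
            if value < best_value:
                best_value, best_part = value, part
    return best_value, best_part

# Toy stand-in for integrated information: total connection weight severed
# by the cut (NOT any of the actual Phi measures).
W = {("A", "B"): 1.0, ("B", "C"): 0.1, ("C", "D"): 1.0}

def phi(part):
    return sum(w for (i, j), w in W.items() if (i in part) != (j in part))

value, part = minimum_information_partition("ABCD", phi)
print(value, sorted(part))   # 0.1 ['A', 'B'] -- the weakest link is cut
```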

Open Access Feature Paper Article: The Identity of Information: How Deterministic Dependencies Constrain Information Synergy and Redundancy
Entropy 2018, 20(3), 169; https://doi.org/10.3390/e20030169
Received: 13 November 2017 / Revised: 26 February 2018 / Accepted: 28 February 2018 / Published: 5 March 2018
Cited by 1
Abstract
Understanding how different information sources together transmit information is crucial in many domains. For example, understanding the neural code requires characterizing how different neurons contribute unique, redundant, or synergistic pieces of information about sensory or behavioral variables. Williams and Beer (2010) proposed a partial information decomposition (PID) that separates the mutual information that a set of sources contains about a set of targets into nonnegative terms interpretable as these pieces. Quantifying redundancy requires assigning an identity to different information pieces, to assess when information is common across sources. Harder et al. (2013) proposed an identity axiom that imposes necessary conditions to quantify qualitatively common information. However, Bertschinger et al. (2012) showed that, in a counterexample with deterministic target-source dependencies, the identity axiom is incompatible with ensuring PID nonnegativity. Here, we study systematically the consequences of information identity criteria that assign identity based on associations between target and source variables resulting from deterministic dependencies. We show how these criteria are related to the identity axiom and to previously proposed redundancy measures, and we characterize how they lead to negative PID terms. This constitutes a further step to more explicitly address the role of information identity in the quantification of redundancy. The implications for studying neural coding are discussed. Full article
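The baseline redundancy measure that this line of work departs from, the Williams and Beer (2010) I_min, can be written down directly from its definition in terms of specific information. This minimal sketch handles two sources about one target and is not one of the alternative identity criteria the paper studies:

```python
import numpy as np

def specific_info(p_ts, t):
    """Specific information I(T = t; S) = sum_s p(s|t)[log2 p(t|s) - log2 p(t)]."""
    p_t = p_ts.sum(axis=1)[t]
    p_s = p_ts.sum(axis=0)
    acc = 0.0
    for s in range(p_ts.shape[1]):
        if p_ts[t, s] > 0:
            acc += (p_ts[t, s] / p_t) * (np.log2(p_ts[t, s] / p_s[s]) - np.log2(p_t))
    return acc

def i_min(p):
    """Williams-Beer redundancy I_min(T; {S1, S2}) for a joint array p[t, s1, s2]:
    the expected minimum, over sources, of the specific information about each t."""
    p_t = p.sum(axis=(1, 2))
    p_t_s1 = p.sum(axis=2)                   # marginal over s2
    p_t_s2 = p.sum(axis=1)                   # marginal over s1
    return sum(p_t[t] * min(specific_info(p_t_s1, t), specific_info(p_t_s2, t))
               for t in range(p.shape[0]) if p_t[t] > 0)

# A uniform bit copied into both sources is fully redundant: I_min = 1 bit.
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
print(i_min(p))   # 1.0
```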

Open Access Article: Mutual Information and Information Gating in Synfire Chains
Entropy 2018, 20(2), 102; https://doi.org/10.3390/e20020102
Received: 22 December 2017 / Revised: 29 January 2018 / Accepted: 30 January 2018 / Published: 1 February 2018
Abstract
Coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains—SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers. Full article

Open Access Article: Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
Entropy 2018, 20(1), 34; https://doi.org/10.3390/e20010034
Received: 7 November 2017 / Revised: 3 January 2018 / Accepted: 5 January 2018 / Published: 9 January 2018
Cited by 1
Abstract
The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notions of pre-synaptic and post-synaptic neurons, stimulus correlations, and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling spike train statistics.
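For a stationary Markov chain, the entropy production rate has a standard closed form. The sketch below assumes an irreducible chain in which reciprocal transitions are either both possible or both impossible (otherwise the rate diverges); it illustrates the quantity, not the paper's maximum-entropy inference from spike trains:

```python
import numpy as np

def entropy_production(P):
    """Steady-state entropy production rate (nats/step) of an irreducible
    Markov chain: sigma = sum_ij pi_i P_ij ln(P_ij / P_ji). It is zero
    exactly when detailed balance (time reversibility) holds."""
    w, v = np.linalg.eig(P.T)                    # stationary pi: left eigenvector
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    n = P.shape[0]
    sigma = 0.0
    for i in range(n):
        for j in range(n):
            if P[i, j] > 0 and P[j, i] > 0:
                sigma += pi[i] * P[i, j] * np.log(P[i, j] / P[j, i])
    return float(sigma)

sym = np.array([[0.5, 0.5],
                [0.5, 0.5]])                 # reversible: no entropy production
biased_ring = np.array([[0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8],
                        [0.8, 0.1, 0.1]])    # net circular flow: irreversible
print(entropy_production(sym))          # 0.0
print(entropy_production(biased_ring))  # 0.7 * ln(8), about 1.456
```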

Open Access Article: Lifespan Development of the Human Brain Revealed by Large-Scale Network Eigen-Entropy
Entropy 2017, 19(9), 471; https://doi.org/10.3390/e19090471
Received: 3 August 2017 / Revised: 25 August 2017 / Accepted: 1 September 2017 / Published: 4 September 2017
Abstract
Imaging connectomics based on graph theory has become an effective and unique methodological framework for studying the functional connectivity patterns of the developing and aging brain. Normal brain development is characterized by continuous and significant network evolution through infancy, childhood, and adolescence, following specific maturational patterns. Normal aging is related to the disruption of some resting-state brain networks, which is associated with cognitive decline. Designing an integral metric to track connectome evolution patterns across the lifespan, and thereby to understand the principles of network organization in the human brain, remains a major challenge. In this study, we first defined a brain network eigen-entropy (NEE) based on the energy probability (EP) of each brain node. Next, we used the NEE to characterize the lifespan orderness trajectory of the whole-brain functional connectivity of 173 healthy individuals ranging in age from 7 to 85 years. The results revealed that during the lifespan, the whole-brain NEE exhibited a significant non-linear decrease and that the EP distribution shifted from concentration to wide dispersion, implying an orderness enhancement of the functional connectome with age. Furthermore, brain regions with significant EP changes from the flourishing period (7–20 years) to the youth period (23–38 years) were mainly located in the right prefrontal cortex and basal ganglia, and were involved in emotion regulation and executive function in coordination with the action of the sensory system, implying that self-awareness and voluntary control performance changed significantly during neurodevelopment. However, the changes from the youth period to middle age (40–59 years) were located in the mesial temporal lobe and caudate, which are associated with long-term memory, implying that the memory of the human brain begins to decline with age during this period. Overall, the findings suggest that the human connectome shifts from a relatively anatomically driven state to an orderly organized state with lower entropy.
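A simplified reading of an eigen-entropy construction can be sketched as follows. Here the "energy probability" is taken, for illustration only, as the squared and normalized principal eigenvector of a symmetric connectivity matrix; this is an assumption, not the authors' exact NEE definition:

```python
import numpy as np

def network_eigen_entropy(C):
    """Shannon entropy (bits) of a node-wise 'energy probability' taken here
    as the squared, normalized principal eigenvector of a symmetric
    connectivity matrix -- a simplified reading, not the authors' exact NEE."""
    w, v = np.linalg.eigh(C)                 # eigenvalues in ascending order
    u = v[:, -1]                             # principal eigenvector
    ep = u**2 / np.sum(u**2)                 # energy probability over nodes
    nz = ep > 0
    return float(-(ep[nz] * np.log2(ep[nz])).sum())

uniform = np.ones((4, 4)) - np.eye(4)        # homogeneous coupling
star = np.zeros((4, 4))
star[0, 1:] = star[1:, 0] = 1.0              # hub plus three leaves
print(network_eigen_entropy(uniform))   # 2.0 (maximal: EP spread evenly)
print(network_eigen_entropy(star))      # < 2: EP concentrates on the hub
```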

Open Access Article: Life on the Edge: Latching Dynamics in a Potts Neural Network
Entropy 2017, 19(9), 468; https://doi.org/10.3390/e19090468
Received: 2 August 2017 / Revised: 25 August 2017 / Accepted: 29 August 2017 / Published: 3 September 2017
Cited by 1
Abstract
We study latching dynamics in the adaptive Potts model network, through numerical simulations with randomly and also weakly correlated patterns, and we focus on comparing its slowly and fast adapting regimes. A measure, Q, is used to quantify the quality of latching in the phase space spanned by the number of Potts states S, the number of connections per Potts unit C, and the number of stored memory patterns p. We find narrow regions, or bands in phase space, where distinct pattern retrieval and duration of latching combine to yield the highest values of Q. The bands are confined by the storage capacity curve, for large p, and by the onset of finite latching, for low p. Inside the band, in the slowly adapting regime, we observe complex structured dynamics, with transitions at high crossover between correlated memory patterns; while away from the band, latching transitions lose complexity in different ways: below, they are clear-cut but last so few steps as to span a transition matrix between states with few asymmetrical entries and limited entropy; while above, they tend to become random, with large entropy and bi-directional transition frequencies, but indistinguishable from noise. Extrapolating from the simulations, the band appears to scale almost quadratically in the pS plane, and sublinearly in pC. In the fast adapting regime, the band scales similarly, and it can be made even wider and more robust, but transitions between anti-correlated patterns dominate latching dynamics. This suggests that slow and fast adaptation have to be integrated in a scenario for viable latching in a cortical system. The results for the slowly adapting regime, obtained with randomly correlated patterns, remain valid also for the case with correlated patterns, with just a simple shift in phase space.
