# Function Identification in Neuron Populations via Information Bottleneck


## Abstract


## 1. Introduction

## 2. The Function Identification Problem

## 3. Approach via Information Measures

## 4. Algorithm Using the Information Bottleneck Method

#### 4.1. The IB Method
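The body of this subsection is not reproduced here, but the standard self-consistent iterations of the IB method (Tishby, Pereira and Bialek) can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation; the function name, the random initialisation, and the small smoothing constants are ours.

```python
import numpy as np

def information_bottleneck(p_xy, n_z, beta, n_iter=200, seed=0):
    """Iterative IB: compress X into Z while preserving information
    about Y.  p_xy is the joint pmf p(x, y); returns the encoder p(z|x)."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # rows: p(y|x)

    # random soft initialisation of the encoder p(z|x)
    p_z_given_x = rng.random((n_x, n_z))
    p_z_given_x /= p_z_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_z = np.maximum(p_x @ p_z_given_x, 1e-12)          # cluster prior p(z)
        # decoder p(y|z) = sum_x p(y|x) p(z|x) p(x) / p(z)
        p_y_given_z = (p_z_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_z /= p_z[:, None]
        # KL(p(y|x) || p(y|z)) for every (x, z) pair
        log_ratio = np.log(p_y_given_x[:, None, :] + 1e-12) \
                  - np.log(p_y_given_z[None, :, :] + 1e-12)
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        # self-consistent encoder update: p(z|x) ∝ p(z) exp(-beta * KL)
        p_z_given_x = p_z[None, :] * np.exp(-beta * kl)
        p_z_given_x /= p_z_given_x.sum(axis=1, keepdims=True)
    return p_z_given_x
```

Larger β weights prediction of Y more heavily relative to compression of X, which is why the experiments below sweep over β.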

#### 4.2. Parsing $p(z|\mathbf{x})$ for Function Identification

#### 4.2.1. The Case of Remapped-Linear Functions

#### Method 1

#### Method 2

#### Method 3

**Figure 1.** Methods 1, 2 and 3 for estimating the coefficients ${\alpha}_{i}$ in $Z={\sum}_{i=1}^{n}{\alpha}_{i}{\varphi}_{i}({X}_{i})$ from the output $p(z|\mathbf{x})$ of the IB algorithm. Method 1 looks at all pairs $\{\mathbf{x},{\mathbf{x}}^{\prime}\}$ such that ${z}^{*}(\mathbf{x})={z}^{*}({\mathbf{x}}^{\prime})$, Method 2 sets ${z}^{*}(\mathbf{x})=\mathbb{E}\left[Y|{z}^{*}(\mathbf{x})\right]$ and Method 3 sets ${z}^{*}(\mathbf{x})=\arg{\max}_{y}\,p(y|{z}^{*}(\mathbf{x}))$. (**a**) Method 1; (**b**) Method 2; (**c**) Method 3.
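Methods 2 and 3 from the caption above can be sketched directly on the IB output. This is an illustrative sketch, not the paper's code: the function names are ours, and the final least-squares step for recovering the ${\alpha}_{i}$ from the labels is our assumption about how the fit is performed.

```python
import numpy as np

def representative_values(p_z_given_x, p_y_given_z, y_vals, method):
    """Turn the IB output into one label z*(x) per input pattern x.
    Method 2: conditional-mean label E[Y | z*(x)];
    Method 3: maximum-a-posteriori label argmax_y p(y | z*(x))."""
    z_star = p_z_given_x.argmax(axis=1)          # hard cluster of each x
    if method == 2:
        return p_y_given_z[z_star] @ y_vals      # E[Y|z] for each x's cluster
    if method == 3:
        return y_vals[p_y_given_z[z_star].argmax(axis=1)]
    raise ValueError("method must be 2 or 3")

def fit_coefficients(phi, labels):
    """Least-squares fit of labels ≈ sum_i alpha_i * phi_i(x_i).
    phi has one row per input pattern x, one column per phi_i(x_i)."""
    alpha, *_ = np.linalg.lstsq(phi, labels, rcond=None)
    return alpha
```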

#### 4.3. Sufficient Evidence

#### 4.4. Normal Variables and Linear Functions

If the actual and estimated coefficient vectors are denoted **α** and $\widehat{\mathit{\alpha}}$ respectively, then we have (a derivation is given in the Appendix):

That is, $\widehat{\mathit{\alpha}}$ recovers **α** only up to a scale factor. This result is consistent with the discussion in Remark 2, where we argued that the coefficients cannot be uniquely determined using information measures.
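A one-line way to see the scale ambiguity (our notation): mutual information is invariant under invertible transformations of either argument, so in particular

```latex
I\left(cZ;\,Y\right) = I\left(Z;\,Y\right) \qquad \text{for every } c \neq 0 .
```

Hence $Z = {\sum}_{i=1}^{n}{\alpha}_{i}{\varphi}_{i}({X}_{i})$ and $cZ$ are indistinguishable by any criterion based purely on mutual information, and **α** is recoverable at best up to a scale factor.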

**Figure 2.** $\rho\left(\frac{{\widehat{\alpha}}_{1}}{{\widehat{\alpha}}_{2}}\right)$ and $\rho\left(\frac{{\widehat{\alpha}}_{2}}{{\widehat{\alpha}}_{1}}\right)$ for 2 Gaussian inputs at different SNR levels. We see sharp peaks where the coefficients of V are equal to the actual coefficients of Y up to a scale factor. (**a**) $\rho\left(\frac{{\widehat{\alpha}}_{1}}{{\widehat{\alpha}}_{2}}\right)$; (**b**) $\rho\left(\frac{{\widehat{\alpha}}_{2}}{{\widehat{\alpha}}_{1}}\right)$.

Thus we can identify the coefficients **α** up to a scale factor, due to the sharp peaks in the plots at these points.

## 5. Results on Artificial Data: Remapped-Linear Functions

**Figure 3.** Estimating coefficients ${\widehat{\alpha}}_{1}$ and ${\widehat{\alpha}}_{2}$ using Methods 2 and 3 on artificial data with 2 inputs of support $\{-5,\dots ,5\}$ at different values of the β parameter used in the IB algorithm. Here $[{\alpha}_{1}=1;{\alpha}_{2}=2]$, $\left|\mathcal{Y}\right|=31$ and $\left|\mathcal{Z}\right|=5$. (**a**) Using Method 2; (**b**) Using Method 3.

**Figure 4.** Estimating coefficients ${\widehat{\alpha}}_{1},{\widehat{\alpha}}_{2}$ and ${\widehat{\alpha}}_{3}$ using Methods 2 and 3 on artificial data with 3 inputs of support $\{-5,\dots ,5\}$ at different values of the β parameter used in the IB algorithm. Here $[{\alpha}_{1}=1;{\alpha}_{2}=5;{\alpha}_{3}=-2]$, $\left|\mathcal{Y}\right|=81$ and $\left|\mathcal{Z}\right|=10$. (**a**) Using Method 2; (**b**) Using Method 3.

#### 5.1. Comparison with Linear Regression and Related Model-Based Approaches

## 6. Results on Experimental Data: Linear Function

#### 6.1. Data Description

#### 6.2. Applying the Proposed Algorithm on Data

**Figure 5.** Estimating joint histograms from spike trains, where we consider overlapping bins using a sliding window.
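The binning scheme of Figure 5 can be sketched as follows. This is our illustration, not the paper's code: the function names, the window width and the stride are hypothetical parameters, and spike times are assumed to be given in seconds.

```python
import numpy as np

def sliding_window_counts(spike_times, t_start, t_stop, width, stride):
    """Spike counts in overlapping bins: window k covers
    [t_start + k*stride, t_start + k*stride + width)."""
    starts = np.arange(t_start, t_stop - width + 1e-12, stride)
    spikes = np.asarray(spike_times)
    return np.array([((spikes >= s) & (spikes < s + width)).sum()
                     for s in starts])

def joint_histogram(counts_a, counts_b):
    """Empirical joint pmf of two simultaneously binned spike-count series."""
    n_a, n_b = counts_a.max() + 1, counts_b.max() + 1
    hist = np.zeros((n_a, n_b))
    for a, b in zip(counts_a, counts_b):
        hist[a, b] += 1
    return hist / hist.sum()
```

With a stride smaller than the window width, consecutive bins overlap, which increases the number of samples available for the joint histogram at the cost of correlated samples.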

#### 6.3. Functions Identified in Data

- ${R}_{139}+{R}_{114}={R}_{98}^{250}$ with $\Theta =1.456$.
- ${R}_{28}+{R}_{114}={R}_{98}^{370}$ with $\Theta =1.321$.
- ${R}_{139}+{R}_{98}={R}_{28}^{310}$ with $\Theta =1.278$.
- ${R}_{139}+{R}_{28}={R}_{114}^{750}$ with $\Theta =1.273$.
- ${R}_{114}+{R}_{28}={R}_{63}^{450}$ with $\Theta =1.267$.

**Figure 6.** We check if the function (Equation (37)) obtained between neurons 28, 114 and 98 in the reference trial (t = 40,718) is valid across all 36 trials of the ${180}^{\circ}$ reaching task with a threshold $\Theta =1$.

| Successful Trials: $\Theta \ge 1$ | Successful Trials: $\Theta <1$ | Unsuccessful Trials: $\Theta \ge 1$ | Unsuccessful Trials: $\Theta <1$ |
| --- | --- | --- | --- |
| 1.331 | 0.947 | 1.000 | 0.197 |
| 1.321 | 0.811 | 1.052 | 0.513 |
| 1.229 | 0.758 | 1.293 | 0.555 |
| 1.114 | 0.739 | | 0.685 |
| 1.109 | 0.676 | | 0.787 |
| 1.102 | 0.647 | | 0.865 |
| 1.091 | 0.601 | | |
| 1.000 | 0.574 | | |
| 1.000 | 0.427 | | |
| 1.000 | 0.341 | | |
| | 0.283 | | |
| | 0.000 | | |

- True Positive Rate (TPR) = TP/(TP+FP) = 76.92%
- True Negative Rate (TNR) = TN/(TN+FN) = 33.33%
- Sensitivity = TP/(TP+FN) = 45.45%
- Specificity = TN/(TN+FP) = 66.67%
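The four rates follow directly from the counts read off the table above (10 successful trials with $\Theta \ge 1$, 12 successful with $\Theta <1$, 3 unsuccessful with $\Theta \ge 1$, 6 unsuccessful with $\Theta <1$), using the formulas exactly as defined in the text:

```python
# Counts read off the table: successful trials with Theta >= 1 are true
# positives, unsuccessful trials with Theta < 1 are true negatives.
TP, FN = 10, 12   # successful trials:   Theta >= 1, Theta < 1
FP, TN = 3, 6     # unsuccessful trials: Theta >= 1, Theta < 1

tpr = TP / (TP + FP)          # as defined in the text
tnr = TN / (TN + FN)
sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)

print(f"{tpr:.2%} {tnr:.2%} {sensitivity:.2%} {specificity:.2%}")
# → 76.92% 33.33% 45.45% 66.67%
```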

## 7. Conclusions

## Appendix

## Acknowledgements

## References

- Rieke, F.; Warland, D.; de Ruyter van Steveninck, R.; Bialek, W. Spikes: Exploring the Neural Code, 1st ed.; MIT Press: Cambridge, MA, USA, 1997.
- Borst, A.; Theunissen, F.E. Information theory and neural coding. Nat. Neurosci. **1999**, 2, 947–957.
- Dayan, P.; Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems, 1st ed.; The MIT Press: Cambridge, MA, USA, 2001.
- Abbott, L.F.; Dayan, P. The effect of correlated variability on the accuracy of a population code. Neural Comput. **1999**, 11, 91–101.
- Schneidman, E.; Bialek, W.; Berry, M.J. Synergy, redundancy, and independence in population codes. J. Neurosci. **2003**, 23, 11539–11553.
- Narayanan, N.S.; Kimchi, E.Y.; Laubach, M. Redundancy and synergy of neuronal ensembles in motor cortex. J. Neurosci. **2005**, 25, 4207–4216.
- Latham, P.; Nirenberg, S. Synergy, redundancy, and independence in population codes, revisited. J. Neurosci. **2005**, 25, 5195–5206.
- Averbeck, B.B.; Lee, D. Effects of noise correlations on information encoding and decoding. J. Neurophysiol. **2006**, 95, 3633–3644.
- So, K.; Ganguly, K.; Jimenez, J.; Gastpar, M.C.; Carmena, J.M. Redundant information encoding in primary motor cortex during natural and prosthetic motor control. J. Comput. Neurosci. **2011**, 32, 555–561.
- Quinn, C.; Coleman, T.; Kiyavash, N.; Hatsopoulos, N. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. J. Comput. Neurosci. **2010**, 30, 17–44.
- So, K.; Koralek, A.C.; Ganguly, K.; Gastpar, M.C.; Carmena, J.M. Assessing functional connectivity of neural ensembles using directed information. J. Neural Eng. **2012**, 9.
- Schmidt, M.; Lipson, H. Distilling free-form natural laws from experimental data. Science **2009**, 324, 81–85.
- Tishby, N.; Pereira, F.C.; Bialek, W. The Information Bottleneck Method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, Monticello, IL, USA, September 1999; IEEE Press: Piscataway, NJ, USA, 1999; pp. 368–377.
- Klampfl, S.; Legenstein, R.; Maass, W. Spiking neurons can learn to solve information bottleneck problems and extract independent components. Neural Comput. **2009**, 21, 911–959.
- Buesing, L.; Maass, W. A spiking neuron as information bottleneck. Neural Comput. **2010**, 22, 1961–1992.
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: Hoboken, NJ, USA, 2006.
- Slonim, N. The Information Bottleneck: Theory and Applications. Ph.D. Thesis, The Hebrew University, Jerusalem, Israel, 2003.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Buddha, S.K.; So, K.; Carmena, J.M.; Gastpar, M.C.
Function Identification in Neuron Populations via Information Bottleneck. *Entropy* **2013**, *15*, 1587-1608.
https://doi.org/10.3390/e15051587
