Open Access Article

*Entropy* **2019**, *21*(8), 756; https://doi.org/10.3390/e21080756

Synaptic Information Transmission in a Two-State Model of Short-Term Facilitation

^{1} Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, 82152 Planegg-Martinsried, Germany

^{2} Bernstein Center for Computational Neuroscience Munich, 82152 Planegg-Martinsried, Germany

^{3} German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-Universität, 81377 Munich, Germany

^{4} Department of Biology II, Ludwig-Maximilians-Universität München, 82152 Planegg-Martinsried, Germany

^{5} Computational Neuroscience, Brandenburg University of Technology Cottbus-Senftenberg, 03046 Cottbus, Germany

^{*} Author to whom correspondence should be addressed.

Received: 30 April 2019 / Accepted: 31 July 2019 / Published: 2 August 2019

## Abstract


Action potentials (spikes) can trigger the release of a neurotransmitter at chemical synapses between neurons. Such release is stochastic: it occurs only with a certain probability. Moreover, synaptic release can occur independently of an action potential (asynchronous release) and depends on the history of synaptic activity. We focus here on short-term synaptic facilitation, in which a sequence of action potentials can temporarily increase the release probability of the synapse. In contrast to the phenomenon of short-term depression, information transmission in facilitating synapses has not yet been quantified. We derive rigorous lower and upper bounds for the rate of information transmission in a model of synaptic facilitation. We treat the synapse as a two-state binary asymmetric channel, in which the arrival of an action potential shifts the synapse to a facilitated state, while in the absence of a spike, the synapse returns to its baseline state. The information bounds are functions of both the asynchronous and synchronous release parameters. If synchronous release facilitates more than asynchronous release, the mutual information rate increases. In contrast, short-term facilitation degrades information transmission when the synchronous release probability is intrinsically high. As synaptic release is energetically expensive, we exploit the information bounds to determine the energy–information trade-off in facilitating synapses. We show that, unlike the information rate, the energy-normalized information rate is robust with respect to variations in the strength of facilitation.

Keywords: short-term synaptic facilitation; release site; information theory; binary asymmetric channel; mutual information rate; information bound

## 1. Introduction

Action potentials are the key carriers of information in the brain. The arrival of an action potential at a synapse opens calcium channels in the presynaptic site, which leads to the release of vesicles filled with neurotransmitters [1]. In turn, the released neurotransmitters activate post-synaptic receptors, thereby leading to a change in the post-synaptic potential.

This process of release, however, is stochastic. The release probability is affected by the level of intracellular calcium and the size of the readily releasable pool of vesicles [2,3]. Moreover, the release of a vesicle is not necessarily synchronized with the spiking process; a synapse may release asynchronously tens of milliseconds after the arrival of an action potential [4], or sometimes even spontaneously [5].

The release properties of a synapse also change on different time scales. The successive release of vesicles can deplete the pool of vesicles, thereby depressing the synapse. On the other hand, a sequence of action potentials with short inter-spike intervals can “prime” the release mechanism and increase the release probability, inducing short-term facilitation [6].

Several studies have addressed the modulatory role of short-term depression on synaptic information transmission [7,8,9]. In contrast, the information rate of a facilitating synapse is not yet fully understood, though it has been suggested that short-term facilitation temporally filters the incoming spike train [10].

To study the impact of short-term facilitation on synaptic information efficacy, we employ a binary asymmetric channel with two states. The model synapse switches between a baseline state and facilitated state based on the history of the input. Each state has distinct release probabilities, both for synchronous and asynchronous release. We derive a lower bound and an upper bound for the mutual information rate of such a facilitating synapse and assess the functional role of short-term facilitation on the synaptic information efficacy.

Short-term facilitation increases the release probability and consequently raises the metabolic energy consumption of the synapse [11]. We calculate the rate of information transmission per unit of energy to evaluate the compromises that a facilitating synapse makes to balance energy consumption and information transmission.

## 2. Synapse Model and Information Bounds

We use a binary asymmetric channel to model the stochasticity of release in a synapse (Figure 1A) [12]. The input of the model is the presynaptic spike process $X={\left\{{X}_{i}\right\}}_{i=0}^{\infty}$, where ${X}_{i}$ is a binary random variable corresponding to the presence (${X}_{i}=1$) or absence (${X}_{i}=0$) of a spike at time i. We assume that X is an i.i.d. random process, and ${X}_{i}$ is a Bernoulli random variable with $P({X}_{i}=1)=\alpha $. The output process of the channel, $Y={\left\{{Y}_{i}\right\}}_{i=0}^{\infty}$, represents a release (${Y}_{i}=1$) or lack of release (${Y}_{i}=0$) at time i. The synchronous spike-evoked release probability is $P({Y}_{i}=1|{X}_{i}=1)$, and the asynchronous release probability is $P({Y}_{i}=1|{X}_{i}=0)$.

In short-term synaptic facilitation, a presynaptic input spike facilitates the synaptic release for the next spike. We model this phenomenon as a binary asymmetric channel whose state is determined by the previous input of the channel (Figure 1A). In the absence of a presynaptic spike (${X}_{i-1}=0$), the channel is in the baseline state and the probabilities of synchronous spike-evoked and asynchronous release are ${p}_{1}$ and ${q}_{1}$. If a presynaptic spike occurs at time $i-1$, i.e., ${X}_{i-1}=1$, the state of the channel is switched to the facilitated state and the synchronous and asynchronous release probabilities are increased to ${p}_{2}$ and ${q}_{2}$ as follows,

$${p}_{2}=u({p}_{max}-{p}_{1})+{p}_{1},$$

$${q}_{2}=v({q}_{max}-{q}_{1})+{q}_{1}.$$

Here, u and v are facilitation coefficients of synchronous and asynchronous release probabilities ($0\le u,v\le 1$), and ${p}_{max}$ and ${q}_{max}$ are the maximum release probabilities of these two modes of release. A Markov chain describes the transitions between the baseline state and the facilitated state, and the transition probabilities correspond to the presence or absence of an action potential in the presynaptic neuron (Figure 1B).
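These update rules, together with the input-driven state transitions of Figure 1B, can be simulated directly. The following is a minimal sketch; all parameter values (alpha, p1, q1, p_max, q_max, u, v) are arbitrary illustrative choices, not values taken from the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only.
alpha = 0.3           # spike probability P(X_i = 1)
p1, q1 = 0.5, 0.05    # baseline synchronous / asynchronous release
p_max, q_max = 0.9, 0.2
u, v = 0.5, 0.5       # facilitation coefficients

# Facilitated-state probabilities from the update rules above.
p2 = u * (p_max - p1) + p1
q2 = v * (q_max - q1) + q1

def simulate(n):
    """Draw input spikes X and releases Y from the two-state channel."""
    x = rng.random(n) < alpha
    y = np.zeros(n, dtype=bool)
    for i in range(1, n):
        facilitated = x[i - 1]   # the state is set by the previous input
        p = (p2 if x[i] else q2) if facilitated else (p1 if x[i] else q1)
        y[i] = rng.random() < p
    return x, y

x, y = simulate(100_000)
print("empirical release probability:", y.mean())
```

The empirical release probability converges to $\overline{\alpha}(\alpha p_1+\overline{\alpha}q_1)+\alpha(\alpha p_2+\overline{\alpha}q_2)$, the expression derived in the next section.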

If ${R}_{1}$ and ${R}_{2}$ represent the mutual information rates of the binary asymmetric channels corresponding to the baseline state and facilitated state, then for $i\in \{1,2\}$,

$${R}_{i}=h(\alpha {p}_{i}+\overline{\alpha}{q}_{i})-\alpha h\left({p}_{i}\right)-\overline{\alpha}h\left({q}_{i}\right),$$

where $\overline{x}\triangleq 1-x$ and $h\left(x\right)=-x{log}_{2}\left(x\right)-\overline{x}{log}_{2}\left(\overline{x}\right)$ is the binary entropy function. We first derive a lower bound for the information rate between the input spike process X and the output process of the release site Y (the proofs of the theorems are in Appendix A).

**Theorem 1** (Lower Bound)**.** *Let ${R}_{F}$ denote the mutual information rate of a synapse with short-term facilitation, modeled by the two-state binary asymmetric channel (Figure 1A). Then ${R}_{\mathit{LB}}=\overline{\alpha}{R}_{1}+\alpha {R}_{2}$ is a lower bound for ${R}_{F}$.*

Since $\alpha =P({X}_{i}=1)$, ${R}_{\mathit{LB}}$ is the statistical average of the mutual information rates of the two constituent states of the release site. Our theorem therefore shows that, at least in this simple model of facilitation, the mutual information rate is at least as high as the statistical average of the single-state rates. This contrasts with the result for the two-state model of depression [12], for which $\overline{\alpha}{R}_{1}+\alpha {R}_{2}$ is the exact mutual information rate.

**Theorem 2** (Upper Bound)**.** *The mutual information rate of the two-state model of facilitation is upper-bounded by*

$${R}_{\mathit{UB}}={u}_{7}-{u}_{6},$$

*where*

$${u}_{1}=\overline{\alpha}(\alpha {p}_{1}+\overline{\alpha}{q}_{1}),$$

$${u}_{2}=\overline{\alpha}{q}_{1}+\alpha {q}_{2},$$

$${u}_{3}=\alpha (\alpha {p}_{2}+\overline{\alpha}{q}_{2}),$$

$${u}_{4}=\overline{\alpha}{p}_{1}+\alpha {p}_{2},$$

$${u}_{5}={u}_{1}+{u}_{3},$$

$${u}_{6}=h\left({q}_{1}\right){\overline{\alpha}}^{2}+\left(h\left({p}_{1}\right)+h\left({q}_{2}\right)\right)\alpha \overline{\alpha}+h\left({p}_{2}\right){\alpha}^{2},$$

$${u}_{7}=h\left(\frac{{u}_{1}{u}_{2}+{u}_{3}{u}_{4}}{{u}_{5}}\right){u}_{5}+h\left(\frac{{u}_{1}\overline{{u}_{2}}+{u}_{3}\overline{{u}_{4}}}{\overline{{u}_{5}}}\right)\overline{{u}_{5}}.$$
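The bounds of Theorems 1 and 2 can be evaluated numerically straight from these definitions. A minimal sketch follows; the parameter values in the last line are arbitrary illustrations, not values from the paper's figures.

```python
import numpy as np

def h(x):
    """Binary entropy in bits; clipped so h(0) = h(1) ~ 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def rate(alpha, p, q):
    """Mutual information rate of a memoryless binary asymmetric channel."""
    return h(alpha * p + (1 - alpha) * q) - alpha * h(p) - (1 - alpha) * h(q)

def bounds(alpha, p1, q1, p2, q2):
    """Lower and upper bounds R_LB and R_UB of Theorems 1 and 2."""
    a, ab = alpha, 1 - alpha
    r_lb = ab * rate(alpha, p1, q1) + a * rate(alpha, p2, q2)
    u1 = ab * (a * p1 + ab * q1)
    u2 = ab * q1 + a * q2
    u3 = a * (a * p2 + ab * q2)
    u4 = ab * p1 + a * p2
    u5 = u1 + u3
    u6 = h(q1) * ab**2 + (h(p1) + h(q2)) * a * ab + h(p2) * a**2
    u7 = (h((u1 * u2 + u3 * u4) / u5) * u5
          + h((u1 * (1 - u2) + u3 * (1 - u4)) / (1 - u5)) * (1 - u5))
    return r_lb, u7 - u6

r_lb, r_ub = bounds(alpha=0.3, p1=0.5, q1=0.05, p2=0.7, q2=0.125)
print(r_lb, r_ub)
```

For any valid parameter set, `r_lb <= r_ub` must hold, since the true rate ${R}_{F}$ lies between the two bounds; this is a useful sanity check on the implementation.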

In a facilitating synapse, the release probability, and consequently the energy consumption of the synapse, increases. We define the ratio of the mutual information rate to the release probability as the energy-normalized information rate of the synapse. The energy-normalized information rate of the synapse without facilitation, denoted by ${R}_{1}^{\left(E\right)}$, is then

$${R}_{1}^{\left(E\right)}={R}_{1}/(\alpha {p}_{1}+\overline{\alpha}{q}_{1}).$$

Moreover, the release probability of the synapse in the two-state model of facilitation is

$$P({Y}_{i}=1)=\overline{\alpha}(\alpha {p}_{1}+\overline{\alpha}{q}_{1})+\alpha (\alpha {p}_{2}+\overline{\alpha}{q}_{2}),$$

which is independent of i. Hence, the energy-normalized information rate of a facilitated synapse, ${R}_{F}^{\left(E\right)}$, as well as the lower and upper bounds of the energy-normalized information rate, ${R}_{\mathit{LB}}^{\left(E\right)}$ and ${R}_{\mathit{UB}}^{\left(E\right)}$, are obtained by dividing ${R}_{F}$, ${R}_{\mathit{LB}}$, and ${R}_{\mathit{UB}}$ by this release probability.
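A short sketch of this energy normalization; the parameter values are illustrative assumptions, not fitted to the figures.

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

# Illustrative parameter values (assumptions).
alpha, p1, q1, p2, q2 = 0.3, 0.5, 0.05, 0.7, 0.125
ab = 1 - alpha

# Static synapse: information rate per expected release.
r1 = h(alpha * p1 + ab * q1) - alpha * h(p1) - ab * h(q1)
e1 = alpha * p1 + ab * q1          # P(Y_i = 1) without facilitation
r1_energy = r1 / e1                # R_1^(E)

# Two-state facilitation: the release probability that normalizes
# R_F, R_LB, and R_UB.
e_f = ab * (alpha * p1 + ab * q1) + alpha * (alpha * p2 + ab * q2)
print(r1_energy, e_f)
```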

## 3. Results

We use Theorems 1 and 2 to calculate the lower and upper bounds of the information transmission rate of a synapse under short-term facilitation (Figure 2A). The bounds are tighter for synapses with lower facilitation levels. We find that the information rate increases with the level of facilitation. By contrast, the bounds on the energy-normalized information rate of the synapse are relatively invariant to the strength of facilitation (Figure 2B,C). This finding indicates that a synapse can shift the balance between its energy consumption and transmission rate by altering its level of facilitation, without affecting the information rate per release.

If the lower bound of the information rate, ${R}_{\mathit{LB}}$, is greater than the information rate of the synapse in the baseline state, ${R}_{1}$, we can conclude that short-term facilitation increases the mutual information rate of the synapse (i.e., ${R}_{F}>{R}_{1}$). Figure 3A shows that for the modeled synapse (with ${p}_{1}=0.5$ and ${q}_{1}=0.05$), short-term facilitation always increases the mutual information rate, provided that synchronous spike-evoked release and asynchronous release are facilitated equally, i.e., $u=v$.

Recent findings suggest that synchronous and asynchronous release may be governed by different mechanisms [4], and consequently they may show distinct levels of facilitation. We study the impact of different facilitation coefficients in the modeled synapse by fixing the facilitation coefficient of asynchronous release at $v=0.5$ and calculating the information bounds for different values of u. Short-term facilitation reduces the mutual information rate of the synapse if the upper bound of the rate, ${R}_{\mathit{UB}}$, falls below the information rate of the synapse without facilitation, ${R}_{1}$. Figure 3B shows that short-term facilitation degrades the information rate of the synapse if the facilitation level of synchronous release is much lower than that of asynchronous release. The degrading effect of facilitation is pronounced when we compare the upper bound of the energy-normalized information rate of the synapse with facilitation, ${R}_{\mathit{UB}}^{\left(E\right)}$, with the energy-normalized information rate of a static synapse, ${R}_{1}^{\left(E\right)}$. The values below zero in Figure 3C mark the operating points at which facilitation reduces the energy-normalized information rate.

In addition to the facilitation coefficient, the release probability of the synapse in the baseline state plays a critical role in determining the functional role of short-term facilitation. We study the interaction between u and ${p}_{1}$ in Figure 4, which shows the parameter regimes in which short-term facilitation increases or decreases the mutual information rate and the energy-normalized information rate of the synapse. If ${R}_{\mathit{LB}}<{R}_{1}<{R}_{\mathit{UB}}$ (or ${R}_{\mathit{LB}}^{\left(E\right)}<{R}_{1}^{\left(E\right)}<{R}_{\mathit{UB}}^{\left(E\right)}$), the bounds cannot specify whether facilitation increases or decreases the rate of information transmission (or the energy-normalized information rate); these regions are marked in black in Figure 4. For an unreliable synapse (small ${p}_{1}$) with a relatively large facilitation coefficient u, short-term facilitation increases both the mutual information rate and the energy-normalized information rate of the synapse, since the enhancement of synchronous release dominates the rise in asynchronous release. Interestingly, it has been observed that for many facilitating synapses the baseline release probability is quite low [13,14]. For synapses that are more reliable a priori, the relative facilitation of asynchronous release counteracts the improvement in the information rate. In reliable synapses (with higher values of ${p}_{1}$) and relatively small facilitation coefficients, short-term facilitation not only decreases the energy-normalized information rate of the synapse but also reduces the information transmission rate itself. In addition, Figure 4 shows that higher input spike rates expand the range of synaptic parameters (${p}_{1}$ and u) for which short-term facilitation enhances the rate-energy performance of the synapse.
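The three-way classification behind these regime maps (facilitation helps, hurts, or the bounds are inconclusive) can be sketched as follows. The bound formulas are those of Theorems 1 and 2; `p_max`, `q_max`, and the parameter values used in the examples are illustrative assumptions, not the values used to produce the figures.

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def rate(alpha, p, q):
    """Rate of a memoryless binary asymmetric channel."""
    return h(alpha * p + (1 - alpha) * q) - alpha * h(p) - (1 - alpha) * h(q)

def classify(alpha, p1, q1, u, v, p_max=0.9, q_max=0.2):
    """Label an operating point by comparing R_1 to the bounds on R_F."""
    p2 = u * (p_max - p1) + p1
    q2 = v * (q_max - q1) + q1
    a, ab = alpha, 1 - alpha
    r1 = rate(alpha, p1, q1)
    r_lb = ab * r1 + a * rate(alpha, p2, q2)       # Theorem 1
    u1 = ab * (a * p1 + ab * q1); u2 = ab * q1 + a * q2
    u3 = a * (a * p2 + ab * q2); u4 = ab * p1 + a * p2
    u5 = u1 + u3
    u6 = h(q1) * ab**2 + (h(p1) + h(q2)) * a * ab + h(p2) * a**2
    u7 = (h((u1 * u2 + u3 * u4) / u5) * u5
          + h((u1 * (1 - u2) + u3 * (1 - u4)) / (1 - u5)) * (1 - u5))
    r_ub = u7 - u6                                  # Theorem 2
    if r_lb > r1:
        return "facilitation increases the rate"
    if r_ub < r1:
        return "facilitation decreases the rate"
    return "indeterminate from the bounds"

# Unreliable synapse, equal facilitation of both release modes.
print(classify(0.3, 0.5, 0.05, 0.5, 0.5))
# Reliable synapse whose synchronous release barely facilitates.
print(classify(0.3, 0.9, 0.05, 0.05, 0.5))
```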

To study the effect of asynchronous release, we repeat the analysis of Figure 4 for very high (${q}_{1}=0.1$) and very low (${q}_{1}=0.01$) asynchronous release probabilities. Comparing Figure 5A,B reveals that decreasing the level of asynchronous release expands the range of synchronous release parameters, u and ${p}_{1}$, for which short-term facilitation increases the mutual information rate and energy-normalized information rate. We also study the interaction between the asynchronous release probability ${q}_{1}$ and the facilitation coefficient of asynchronous release v, keeping the parameters of synchronous release fixed at ${p}_{1}=0.5$ and $u=0.5$ (Figure 5C). To benefit from short-term facilitation, the synapse needs to decrease the release probability and/or the facilitation coefficient of the asynchronous mode of release. For synapses with very high asynchronous release probabilities, short-term facilitation can still boost the information rate and energy-normalized rate of the synapse, provided that the facilitation coefficient of the asynchronous release is small enough. Similar to the results in Figure 4, by increasing the normalized spike rate, the synapse spends more time in the facilitated state and therefore, the impact of short-term facilitation on the rate-energy efficiency of the synapse is enhanced.

Short-term facilitation creates a memory for the synapse, since the release probability of the synapse depends on the history of spiking activity. It is therefore important to study how short-term facilitation modulates the information transmission rate for temporally correlated spike trains, which we generate by making the Poisson rate of the input spike train time-dependent [15] (Figure 6A). We use a sinusoidal rate stimulus with a frequency of 1 Hz on top of a baseline rate (Figure 6B) and apply the context-tree weighting algorithm to numerically estimate the mutual information and energy-normalized information rates of the facilitating synapse [16]. The amplitude of the sinusoidal signal sets the level of correlation. The raster plots show the synchrony between the spiking activity of the neuron and the sinusoidal instantaneous rate (Figure 6C). The instantaneous firing rate, averaged over 1000 trials, provides a good estimate of the stimulus (Figure 6D). We calculate the functional classes of short-term facilitation as a function of the baseline release probability and the facilitation coefficient of synchronous release for different levels of correlation. This numerical analysis shows that correlations in the presynaptic spike train can slightly enlarge the regions in which short-term facilitation increases the mutual information rate and energy-normalized information rate (Figure 6E).
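Such a sinusoidally modulated input can be generated with a discrete-time Bernoulli approximation of an inhomogeneous Poisson process. A minimal sketch; the baseline rate, modulation depth, and duration below are assumed illustrative values, not the settings used for Figure 6.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative values: a 1 Hz sinusoidal modulation on top of
# a baseline rate; the modulation depth sets the level of correlation.
r0, r_mod, f = 20.0, 10.0, 1.0     # Hz
dt, T = 1e-3, 10.0                 # time step and duration (s)

t = np.arange(0.0, T, dt)
rate_t = r0 + r_mod * np.sin(2 * np.pi * f * t)

# In each bin, spike with probability rate(t) * dt (requires rate*dt << 1).
spikes = rng.random(t.size) < rate_t * dt

print("mean rate (Hz):", spikes.mean() / dt)
```

Averaging such spike trains over many trials recovers the sinusoidal stimulus, as in Figure 6D.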

In the general model of facilitation, the state of the synapse at time i is affected not only by the spiking activity of the presynaptic neuron at time $i-1$, but by the whole history of spiking events. The synchronous and asynchronous release probabilities converge to the limit probabilities, ${p}^{\left(L\right)}$ and ${q}^{\left(L\right)}$, exponentially with time constants ${\tau}_{L,p}$ and ${\tau}_{L,q}$. The arrival of an action potential at time i increases the limit probabilities ${p}_{i}^{\left(L\right)}$ and ${q}_{i}^{\left(L\right)}$ by $u({p}_{max}-{p}_{i}^{\left(L\right)})$ and $v({q}_{max}-{q}_{i}^{\left(L\right)})$, respectively; the initial values of the limit probabilities are ${p}_{0}^{\left(L\right)}={p}_{1}$ and ${q}_{0}^{\left(L\right)}={q}_{1}$. In quiescent intervals, the limit probabilities decay back to the baseline values, ${p}_{1}$ and ${q}_{1}$, with facilitation decay time constants ${\tau}_{f,p}$ and ${\tau}_{f,q}$. We use numerical methods to compare the synaptic information efficacy of the two-state model with that of the general model of short-term facilitation. We show that the two-state model provides a good approximation for the mutual information rate (Figure 7A) and energy-normalized information rate (Figure 7B) of a facilitating synapse, provided that the facilitation decays rapidly. If the facilitation decay time constant is large, the parameters of the two-state model can be tuned, following the approach in [12], to provide a better estimate.
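The jump-and-decay dynamics of this history-dependent model can be sketched as follows. Note that this is a simplified one-variable sketch: it collapses the two time scales (convergence to the limit probability and decay of the limit probability) into a single release probability that jumps on each spike and decays exponentially between spikes, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters (time constants in units of time steps).
alpha = 0.3
p1, p_max, u, tau_p = 0.5, 0.9, 0.5, 5.0
q1, q_max, v, tau_q = 0.05, 0.2, 0.5, 5.0

def simulate_general(n):
    """Release probabilities jump on each spike, then decay to baseline."""
    p, q = p1, q1
    x = rng.random(n) < alpha
    y = np.zeros(n, dtype=bool)
    for i in range(n):
        y[i] = rng.random() < (p if x[i] else q)
        if x[i]:                                   # spike: facilitate
            p += u * (p_max - p)
            q += v * (q_max - q)
        else:                                      # quiescent: exponential decay
            p = p1 + (p - p1) * np.exp(-1.0 / tau_p)
            q = q1 + (q - q1) * np.exp(-1.0 / tau_q)
    return x, y

x, y = simulate_general(50_000)
print("release probability:", y.mean())
```

Shrinking `tau_p` and `tau_q` toward zero makes this sketch behave like the two-state model, in which facilitation survives for exactly one time step.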

## 4. Discussion

We studied how prior spikes, by facilitating the release of neurotransmitter at a synapse, modulate the rate of synaptic information transfer. Most components of neural hardware are noisy, hybrid analog-digital devices. In particular, the synapse maps quite naturally onto an asymmetric binary channel in communication theory. Some neurons, such as thalamic relay neurons, act as nodes in a network for long-range communication using spikes, so it is natural to quantify the performance of the synapses in bits [17,18,19,20,21,22,23]. Synaptic information efficacy quantifies the amount of information that the post-synaptic potential contains about the spiking activity of the presynaptic neuron. This analysis, however, does not guarantee that the post-synaptic neuron accesses or uses this information, which rather depends on the biophysical mechanisms of encoding and decoding.

To capture the phenomenon of facilitation, we made the binary asymmetric channel have two states. The resulting model permits the short-term dynamics of synchronous and asynchronous releases to be different, which enabled us to assess the impact of each release mode on the efficacy of synaptic information transfer. We first assumed identical facilitation coefficients for synchronous and asynchronous release (i.e., $u=v$) and demonstrated that the lower bound of information rate of a facilitating synapse is higher than the information rate of a static synapse (Figure 3A). We were, therefore, able to show that synapses quantifiably transmit more information through short-term facilitation, as long as synchronous and asynchronous release of neurotransmitter obey the same dynamics. Indeed, the increase in information can outweigh the higher energy consumption, as measured by the energy-normalized information rate, provided that synchronous release is facilitated more than asynchronous release. In contrast, when facilitation enhances the asynchronous component of release more strongly than the synchronous component, short-term facilitation would have the opposite effect, namely to decrease synaptic information efficacy.

In previous work, we studied the information transmission in a synapse during short-term depression [9,12]. There, the state of the binary asymmetric channel, which models the synapse, depended on the history of the output $({Y}_{i-1},{Y}_{i-2},\dots ,{Y}_{1})$. Facilitation, in contrast, depends on the history of the input $({X}_{i-1},{X}_{i-2},\dots ,{X}_{1})$. This simple change makes the problem much more challenging mathematically, as it is, in fact, isomorphic to an unsolved problem in information theory, namely the entropy rate of a hidden Markov process [24]. Nevertheless, the lower bound we derive for a facilitating synapse mirrors the exact result we had derived earlier for short-term depression. Moreover, in practice, this bound is fairly tight.

The bounds derived here are only a first step towards understanding information transmission in facilitating synapses. The two-state binary asymmetric channel simplifies the process of facilitation by making the release probability depend only on the presence or absence of a presynaptic spike at the previous time-point. Yet when the facilitation decays rapidly, the two-state model converges in behavior to a more general model that considers the entire history of spiking.

Our model ignores the possibility of temporal correlations in the presynaptic spike train. Instead, in line with many other studies, the time series of spikes were assumed to obey Poisson statistics [8,22,25]. This simplification made the information-theoretic analysis of the synapse tractable and helped us to derive the upper bound and lower bound of information rate. Different methods have been suggested for modeling correlated spike trains, such as inhomogeneous Poisson processes [15,26], autoregressive point processes [27], and random spike selections from a set of spike trains [15]. We used an inhomogeneous Poisson process to generate correlated spike trains and estimated the mutual information rate and energy-normalized information rate of the synapse numerically. In the future, it will be of interest to study the effect of correlated input on the information efficacy of the general model of facilitation in which the release probabilities are determined by the whole history of spiking activity.

In this study, we have assumed that in response to an incoming action potential, the release site releases at most one vesicle. To capture multiple releases, the model should be extended to a communication channel with binary input and multiple outputs. The analysis of this channel will reveal the impact of multiple releases on the mutual information rate of a static and dynamic synapse.

## Author Contributions

Conceptualization: M.S. (Mehrdad Salmasi), M.S. (Martin Stemmler), S.G., A.L.; Formal analysis: M.S. (Mehrdad Salmasi); Investigation: M.S. (Mehrdad Salmasi), M.S. (Martin Stemmler), S.G., A.L.; Methodology: M.S. (Mehrdad Salmasi), M.S. (Martin Stemmler), S.G., A.L.; Software: M.S. (Mehrdad Salmasi); Data curation: M.S. (Mehrdad Salmasi); Resources: S.G.; Supervision: M.S. (Martin Stemmler), S.G., A.L.; Visualization: M.S. (Mehrdad Salmasi); Funding acquisition: S.G.; Writing—original draft: M.S. (Mehrdad Salmasi), M.S. (Martin Stemmler), S.G., A.L.; Writing—review and editing: M.S. (Mehrdad Salmasi), M.S. (Martin Stemmler), S.G., A.L.

## Funding

This research was funded by Bundesministerium für Bildung und Forschung (BMBF), German Center for Vertigo and Balance Disorders, grant number 01EO1401 (recipients: M.S. (Mehrdad Salmasi), S.G.), and Bundesministerium für Bildung und Forschung (BMBF), grant number 01GQ1004A (recipient: M.S. (Martin Stemmler)).

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix A

**Proof of Theorem 1.**

Let ${X}^{n}\triangleq ({X}_{1},{X}_{2},\dots ,{X}_{n})$. By definition,

$${R}_{F}=\underset{n\to \infty}{lim}\frac{1}{n}I({X}^{n};{Y}^{n}),$$

where

$$I({X}^{n};{Y}^{n})=H\left({Y}^{n}\right)-H\left({Y}^{n}\right|{X}^{n}).$$

The chain rule implies that

$$H\left({Y}^{n}\right)=\sum _{i=1}^{n}H\left({Y}_{i}\right|{Y}^{i-1}),$$

$$H\left({Y}^{n}\right|{X}^{n})=\sum _{i=1}^{n}H\left({Y}_{i}\right|{Y}^{i-1},{X}^{n}).$$

The model in Figure 1A posits that ${Y}_{i}$ depends only on ${X}_{i}$ and ${X}_{i-1}$. Given ${X}_{i-1}$, ${Y}_{i}$ is independent of ${Y}^{i-1}$ and consequently

$$H\left({Y}_{i}\right|{Y}^{i-1},{X}^{n})=H\left({Y}_{i}\right|{X}_{i-1},{X}_{i}).$$

Also

$$H\left({Y}_{i}\right|{Y}^{i-1},{X}_{i-1})=H\left({Y}_{i}\right|{X}_{i-1}).$$

From (A6),

$$\begin{array}{c}\hfill H\left({Y}_{i}\right|{Y}^{i-1})-H\left({Y}_{i}\right|{X}_{i-1})=I({Y}_{i};{X}_{i-1}|{Y}^{i-1})\ge 0.\end{array}$$

Hence,

$$H\left({Y}_{i}\right|{Y}^{i-1})\ge H\left({Y}_{i}\right|{X}_{i-1}),$$

and together with (A3),

$$H\left({Y}^{n}\right)\ge \sum _{i=1}^{n}H\left({Y}_{i}\right|{X}_{i-1}).$$

From (A4) and (A5),

$$H\left({Y}^{n}\right|{X}^{n})=\sum _{i=1}^{n}H\left({Y}_{i}\right|{X}_{i-1},{X}_{i}).$$

By applying (A9) and (A10) to (A2),

$$\begin{array}{cc}\hfill I({X}^{n};{Y}^{n})& \ge \sum _{i=1}^{n}\left(H\left({Y}_{i}\right|{X}_{i-1})-H\left({Y}_{i}\right|{X}_{i-1},{X}_{i})\right)\hfill \end{array}$$

$$\begin{array}{c}=\sum _{i=1}^{n}I({Y}_{i};{X}_{i}|{X}_{i-1}).\hfill \end{array}$$

Using the definition of conditional mutual information (for $i\ge 2$),

$$\begin{array}{cc}\hfill I({Y}_{i};{X}_{i}|{X}_{i-1})& =I({Y}_{i};{X}_{i}|{X}_{i-1}=0)P({X}_{i-1}=0)+I({Y}_{i};{X}_{i}|{X}_{i-1}=1)P({X}_{i-1}=1)\hfill \end{array}$$

$$\begin{array}{c}=\overline{\alpha}{R}_{1}+\alpha {R}_{2}.\hfill \end{array}$$

Finally, the theorem follows from (A1), (A12) and (A14). □

**Proof of Theorem 2.**

The non-negativity of mutual information implies that

$$H\left({Y}_{i}\right|{Y}^{i-1})\le H\left({Y}_{i}\right|{Y}_{i-1}).$$

By conditioning on ${X}_{i-2}$,

$$P({Y}_{i-1}=1)=(\alpha {p}_{1}+\overline{\alpha}{q}_{1})\overline{\alpha}+(\alpha {p}_{2}+\overline{\alpha}{q}_{2})\alpha ={u}_{5},$$

and similarly

$$P({Y}_{i-1}=0)=(\alpha \overline{{p}_{1}}+\overline{\alpha}\phantom{\rule{0.166667em}{0ex}}\overline{{q}_{1}})\overline{\alpha}+(\alpha \overline{{p}_{2}}+\overline{\alpha}\phantom{\rule{0.166667em}{0ex}}\overline{{q}_{2}})\alpha =\overline{{u}_{5}}.$$

Moreover,

$$H\left({Y}_{i}\right|{Y}_{i-1}=0)=h\left(P({Y}_{i}=1|{Y}_{i-1}=0)\right),$$

and

$$H\left({Y}_{i}\right|{Y}_{i-1}=1)=h\left(P({Y}_{i}=1|{Y}_{i-1}=1)\right),$$

where, by the law of total probability,

$$P({Y}_{i}=1|{Y}_{i-1}=0)=\sum _{{x}_{i-1}\in \{0,1\}}P({Y}_{i}=1|{Y}_{i-1}=0,{X}_{i-1}={x}_{i-1})\times P({X}_{i-1}={x}_{i-1}|{Y}_{i-1}=0).$$

We have

$$\begin{array}{cc}\hfill P({Y}_{i}=1|{Y}_{i-1}=0,{X}_{i-1}=0)& =P({Y}_{i}=1|{X}_{i-1}=0)\hfill \end{array}$$

$$\begin{array}{c}=\alpha {p}_{1}+\overline{\alpha}{q}_{1},\hfill \end{array}$$

$$\begin{array}{cc}\hfill P({Y}_{i}=1|{Y}_{i-1}=0,{X}_{i-1}=1)& =P({Y}_{i}=1|{X}_{i-1}=1)\hfill \end{array}$$

$$\begin{array}{c}=\alpha {p}_{2}+\overline{\alpha}{q}_{2}.\hfill \end{array}$$

Also

$$P({X}_{i-1}=1|{Y}_{i-1}=0)=\frac{P({Y}_{i-1}=0|{X}_{i-1}=1)P({X}_{i-1}=1)}{P({Y}_{i-1}=0)}.$$

By conditioning on ${X}_{i-2}$, and given the fact that ${X}_{i-2}$ and ${X}_{i-1}$ are independent,

$$\begin{array}{c}\hfill P({Y}_{i-1}=0|{X}_{i-1}=1)=\overline{\alpha}\phantom{\rule{0.166667em}{0ex}}\overline{{p}_{1}}+\alpha \overline{{p}_{2}}=\overline{{u}_{4}}.\end{array}$$

Hence,

$$P({X}_{i-1}=1|{Y}_{i-1}=0)=\frac{\alpha \overline{{u}_{4}}}{\overline{{u}_{5}}},$$

and similarly, we can show that

$$P({X}_{i-1}=0|{Y}_{i-1}=0)=\frac{\overline{\alpha}\phantom{\rule{0.166667em}{0ex}}\overline{{u}_{2}}}{\overline{{u}_{5}}}.$$

Therefore,

$$P({Y}_{i}=1|{Y}_{i-1}=0)=\frac{{u}_{1}\overline{{u}_{2}}+{u}_{3}\overline{{u}_{4}}}{\overline{{u}_{5}}}.$$

With the same approach,

$$P({Y}_{i}=1|{Y}_{i-1}=1)=\frac{{u}_{1}{u}_{2}+{u}_{3}{u}_{4}}{{u}_{5}}.$$

The conditional entropy $H\left({Y}_{i}\right|{Y}_{i-1})$ can be written as

$$H\left({Y}_{i}\right|{Y}_{i-1})=H\left({Y}_{i}\right|{Y}_{i-1}=1)P({Y}_{i-1}=1)+H\left({Y}_{i}\right|{Y}_{i-1}=0)P({Y}_{i-1}=0),$$

and for $i>2$, we can infer from (A16)–(A19), (A29), and (A30),

$$\begin{array}{cc}\hfill H\left({Y}_{i}\right|{Y}_{i-1})& =h\left(\frac{{u}_{1}{u}_{2}+{u}_{3}{u}_{4}}{{u}_{5}}\right){u}_{5}+h\left(\frac{{u}_{1}\overline{{u}_{2}}+{u}_{3}\overline{{u}_{4}}}{\overline{{u}_{5}}}\right)\overline{{u}_{5}}\hfill \end{array}$$

$$\begin{array}{c}={u}_{7}.\hfill \end{array}$$

We also conclude that $H\left({Y}_{i}\right|{Y}_{i-1})$ is independent of i for $i>2$. Moreover, we can easily obtain

$$\begin{array}{cc}\hfill H\left({Y}_{i}\right|{X}_{i-1},{X}_{i})& =h\left({q}_{1}\right){\overline{\alpha}}^{2}+\left(h\left({p}_{1}\right)+h\left({q}_{2}\right)\right)\alpha \overline{\alpha}+h\left({p}_{2}\right){\alpha}^{2}\hfill \end{array}$$

$$\begin{array}{c}={u}_{6}.\hfill \end{array}$$

From (A5) and (A15),

$$\begin{array}{cc}\hfill I({X}^{n};{Y}^{n})& =\sum _{i=1}^{n}\left(H\left({Y}_{i}\right|{Y}^{i-1})-H\left({Y}_{i}\right|{X}_{i},{X}_{i-1})\right)\hfill \end{array}$$

$$\begin{array}{c}\le \sum _{i=1}^{n}\left(H\left({Y}_{i}\right|{Y}_{i-1})-H\left({Y}_{i}\right|{X}_{i},{X}_{i-1})\right).\hfill \end{array}$$

Therefore, from (A33) and (A35),

$$I({X}^{n};{Y}^{n})\le I({Y}_{1};{X}_{1})+H\left({Y}_{2}\right|{Y}_{1})-H\left({Y}_{2}\right|{X}_{2},{X}_{1})+(n-2)({u}_{7}-{u}_{6}),$$

and by dividing by n and taking the limit as n goes to infinity,

$$\begin{array}{cc}\hfill {R}_{F}& \le {u}_{7}-{u}_{6},\hfill \end{array}$$

and the proof is complete. □

## References

- Kandel, E.R.; Schwartz, J.H.; Jessell, T.M.; Siegelbaum, S.A.; Hudspeth, A.J.; Mack, S. Principles of Neural Science; McGraw-Hill: New York, NY, USA, 2000; Volume 4.
- Awatramani, G.B.; Price, G.D.; Trussell, L.O. Modulation of transmitter release by presynaptic resting potential and background calcium levels. Neuron **2005**, 48, 109–121.
- Branco, T.; Staras, K. The probability of neurotransmitter release: Variability and feedback control at single synapses. Nat. Rev. Neurosci. **2009**, 10, 373–383.
- Kaeser, P.S.; Regehr, W.G. Molecular mechanisms for synchronous, asynchronous, and spontaneous neurotransmitter release. Annu. Rev. Physiol. **2014**, 76, 333–363.
- Kavalali, E.T. The mechanisms and functions of spontaneous neurotransmitter release. Nat. Rev. Neurosci. **2015**, 16, 5–16.
- Zucker, R.S.; Regehr, W.G. Short-term synaptic plasticity. Annu. Rev. Physiol. **2002**, 64, 355–405.
- Goldman, M.S. Enhancement of information transmission efficiency by synaptic failures. Neural Comput. **2004**, 16, 1137–1162.
- Fuhrmann, G.; Segev, I.; Markram, H.; Tsodyks, M. Coding of temporal information by activity-dependent synapses. J. Neurophysiol. **2002**, 87, 140–148.
- Salmasi, M.; Loebel, A.; Glasauer, S.; Stemmler, M. Short-term synaptic depression can increase the rate of information transfer at a release site. PLoS Comput. Biol. **2019**, 15, e1006666.
- Fortune, E.S.; Rose, G.J. Short-term synaptic plasticity as a temporal filter. Trends Neurosci. **2001**, 24, 381–385.
- Attwell, D.; Laughlin, S.B. An energy budget for signaling in the grey matter of the brain. J. Cereb. Blood Flow Metab. **2001**, 21, 1133–1145.
- Salmasi, M.; Stemmler, M.; Glasauer, S.; Loebel, A. Information Rate Analysis of a Synaptic Release Site Using a Two-State Model of Short-Term Depression. Neural Comput. **2017**, 29, 1528–1560.
- Dobrunz, L.E.; Stevens, C.F. Heterogeneity of release probability, facilitation, and depletion at central synapses. Neuron **1997**, 18, 995–1008.
- Markram, H.; Wang, Y.; Tsodyks, M. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. USA **1998**, 95, 5323–5328.
- Brette, R. Generation of correlated spike trains. Neural Comput. **2009**, 21, 188–215.
- Jiao, J.; Permuter, H.H.; Zhao, L.; Kim, Y.H.; Weissman, T. Universal estimation of directed information. IEEE Trans. Inf. Theory **2013**, 59, 6220–6242.
- Palmer, S.E.; Marre, O.; Berry, M.J.; Bialek, W. Predictive information in a sensory population. Proc. Natl. Acad. Sci. USA **2015**, 112, 6908–6913.
- Osborne, L.C.; Bialek, W.; Lisberger, S.G. Time course of information about motion direction in visual area MT of macaque monkeys. J. Neurosci. **2004**, 24, 3210–3222.
- Sincich, L.C.; Horton, J.C.; Sharpee, T.O. Preserving information in neural transmission. J. Neurosci. **2009**, 29, 6207–6216.
- Arabzadeh, E.; Panzeri, S.; Diamond, M.E. Whisker vibration information carried by rat barrel cortex neurons. J. Neurosci. **2004**, 24, 6011–6020.
- Dimitrov, A.G.; Lazar, A.A.; Victor, J.D. Information theory in neuroscience. J. Comput. Neurosci. **2011**, 30, 1–5.
- Panzeri, S.; Schultz, S.R.; Treves, A.; Rolls, E.T. Correlations and the encoding of information in the nervous system. Proc. R. Soc. Lond. Ser. B Biol. Sci. **1999**, 266, 1001–1012.
- Gjorgjieva, J.; Sompolinsky, H.; Meister, M. Benefits of pathway splitting in sensory coding. J. Neurosci. **2014**, 34, 12127–12144.
- Han, G.; Marcus, B. Analyticity of entropy rate of hidden Markov chains. IEEE Trans. Inf. Theory **2006**, 52, 5251–5266.
- Softky, W.R.; Koch, C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. **1993**, 13, 334–350.
- Song, S.; Abbott, L.F. Cortical development and remapping through spike timing-dependent plasticity. Neuron **2001**, 32, 339–350.
- Farkhooi, F.; Strube-Bloss, M.F.; Nawrot, M.P. Serial correlation in neural spike trains: Experimental evidence, stochastic modeling, and single neuron variability. Phys. Rev. E **2009**, 79, 021905.

**Figure 1.** (**A**) Short-term facilitation in a synapse is modeled by a binary asymmetric channel whose state at time i is determined by the previous input, ${X}_{i-1}$. If ${X}_{i-1}=0$, the synapse remains in the baseline state; after an action potential, ${X}_{i-1}=1$, the synapse moves to the facilitated state. (**B**) The transition between the baseline state and the facilitated state is modeled by a two-state Markov chain, with transition probabilities determined by the normalized input spike rate $\alpha $.
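The generative model of Figure 1 can be made concrete with a short simulation. This is a discrete-time sketch under our reading of the model: the channel state at step i is set by the previous input ${X}_{i-1}$, a spike releases with the synchronous probability (${p}_{1}$ baseline, ${p}_{2}$ facilitated), and a silent step releases with the asynchronous probability (${q}_{1}$ baseline, ${q}_{2}$ facilitated). The function name and the facilitated-state parameter names ${p}_{2}$, ${q}_{2}$ are ours:

```python
import random

def simulate_synapse(n, alpha, p1, p2, q1, q2, seed=0):
    """Simulate n steps of the two-state facilitation channel.

    X_i ~ Bernoulli(alpha) is the presynaptic spike train; the channel
    state at step i is baseline if X_{i-1} = 0 and facilitated if
    X_{i-1} = 1. Y_i = 1 marks a vesicle release.
    """
    rng = random.Random(seed)
    x_prev = 0                                   # start in the baseline state
    xs, ys = [], []
    for _ in range(n):
        x = 1 if rng.random() < alpha else 0     # input spike at this step?
        if x_prev == 1:                          # facilitated state
            p_rel = p2 if x == 1 else q2         # synchronous vs. asynchronous
        else:                                    # baseline state
            p_rel = p1 if x == 1 else q1
        y = 1 if rng.random() < p_rel else 0     # release outcome
        xs.append(x)
        ys.append(y)
        x_prev = x                               # current input sets next state
    return xs, ys
```

With the figure's parameters (${p}_{1}=0.5$, ${q}_{1}=0.05$), long runs of `simulate_synapse` give empirical input/output statistics against which the analytical entropy terms can be checked.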

**Figure 2.** (**A**) Bounds on the information rate of a synapse with short-term facilitation. The lower and upper bounds are plotted as functions of $\alpha $ for different values of the facilitation coefficients u and v. (**B**) The lower bound of the energy-normalized information rate of a synapse under short-term facilitation. (**C**) The upper bound of the energy-normalized information rate. The model parameters are ${p}_{1}=0.5$, ${q}_{1}=0.05$, ${p}_{max}=1$, and ${q}_{max}=0.2$.

**Figure 3.** (**A**) The difference between ${R}_{\mathit{LB}}$ and ${R}_{1}$ as a function of the input spike rate for various facilitation coefficients. The facilitation coefficients of synchronous spike-evoked release and asynchronous release are assumed identical, $u=v$. (**B**) The difference between ${R}_{\mathit{UB}}$ and ${R}_{1}$. (**C**) The difference between ${R}_{\mathit{UB}}^{\left(E\right)}$ and ${R}_{1}^{\left(E\right)}$. In (**B**,**C**), the facilitation coefficient of asynchronous release is fixed at $v=0.5$. The other parameters are ${p}_{1}=0.5$, ${q}_{1}=0.05$, ${p}_{max}=1$, and ${q}_{max}=0.2$.

**Figure 4.**The regime of parameters (u and ${p}_{1}$) for which short-term facilitation increases/decreases the mutual information rate or energy-normalized information rate of the synapse. Asynchronous release is fixed at ${q}_{1}=0.05$ and $v=0.5$. The other parameters are ${p}_{max}=1$ and ${q}_{max}=0.2$.

**Figure 5.** The functional impact of asynchronous release. (**A**) The range of synchronous release parameters, u and ${p}_{1}$, in which short-term facilitation enhances the energy-rate efficiency of the synapse; the asynchronous release probability is ${q}_{1}=0.1$. (**B**) Similar to (**A**) for ${q}_{1}=0.01$. (**C**) The functional classes of short-term facilitation are modified by the baseline release probability, ${q}_{1}$, and the facilitation coefficient, v, of the asynchronous release probability. The other simulation parameters are $v=0.5$ in (**A**,**B**), and ${p}_{1}=0.5$ and $u=0.5$ in (**C**). For all simulations, ${p}_{max}=1$ and ${q}_{max}=4{q}_{1}$.

**Figure 6.** (**A**) Generation of correlated spike trains. (**B**) Sinusoidal stimulus signals, with a frequency of 1 Hz, a mean value of 0.3, and amplitudes 0, $0.075$, and $0.15$, are used as the normalized rate $\alpha \left(t\right)$ of the inhomogeneous Poisson process. (**C**) Spike raster plots of the simulated neurons (5 trials for each amplitude). (**D**) Estimation of the instantaneous neuronal firing rate from 1000 trials. (**E**) Functional classes of short-term facilitation for correlated input. The first column corresponds to the uncorrelated input ($\alpha \left(t\right)=0.3$); the second and third columns correspond to correlated spike trains generated by sinusoidal stimulus signals with amplitudes $0.075$ and $0.15$. The other simulation parameters are ${q}_{1}=0.05$, $v=0.5$, ${p}_{max}=1$, and ${q}_{max}=0.2$.
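The correlated inputs of Figure 6 can be approximated in discrete time by Bernoulli thinning: each time bin spikes with probability $\alpha \left(t\right)$, the sinusoidally modulated normalized rate. A minimal sketch (the function name and the choice of time step are ours):

```python
import math
import random

def sinusoidal_spikes(n_steps, dt, mean=0.3, amp=0.15, freq=1.0, seed=0):
    """Discrete-time approximation of an inhomogeneous Poisson process.

    In each bin of width dt a spike occurs with probability
    alpha(t) = mean + amp * sin(2*pi*freq*t), the normalized rate.
    With amp = 0 this reduces to the uncorrelated Bernoulli input.
    """
    rng = random.Random(seed)
    spikes = []
    for i in range(n_steps):
        t = i * dt
        alpha = mean + amp * math.sin(2.0 * math.pi * freq * t)
        spikes.append(1 if rng.random() < alpha else 0)
    return spikes
```

Setting `amp=0.075` or `amp=0.15` with `mean=0.3` and `freq=1.0` reproduces the three stimulus conditions of panel (**B**); averaging many trials recovers the sinusoidal rate, as in panel (**D**).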

**Figure 7.** (**A**) Mutual information rate of the two-state model (solid lines) and the general model (dashed lines) of short-term facilitation as a function of the normalized input spike rate, for various values of the baseline synchronous release probability, ${p}_{1}$. (**B**) Similar to (**A**) for the energy-normalized information rates. The simulation parameters are ${q}_{1}=0.05$, $u=v=0.5$, ${p}_{max}=1$, ${q}_{max}=0.2$, ${\tau}_{L,p}={\tau}_{L,q}=250$ ms, and ${\tau}_{f,p}={\tau}_{f,q}=20$ ms.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).