1. Introduction
Mathematical models and analysis have long been powerful tools for answering important questions in biology, and the work of Hodgkin and Huxley (HH) on nerve conduction is one of the best examples [1]. In 1952, after many years of theoretical and experimental work by physiologists, HH proposed a mathematical model to explain the action potential generation of neurons, using conductance models that have since been defined for many different electrically excitable cells [2,3,4]. Despite the rapid growth in the number of analyses of the communication between neurons, the effect of noise has generally been overlooked in the literature. Recently, neuronal noise effects have started to be incorporated into the models, motivated by a phenomenon called "Stochastic Resonance" [5]. The communication between neurons is maintained by electrical signals called "Action Potentials (APs)". If an action potential, produced in response to a stimulus, exceeds a certain threshold value, the signal is referred to as a "Spike". Since the existence of a spike is determined by a threshold value, an additional noise component can easily push the value of an AP above or below that threshold, thereby changing the neural spike-train code. Therefore, noise is not merely a nuisance factor; it is capable of changing the meaning of the "neuronal code". For this reason, to better understand how these changes can occur in a very complex system such as our brain, we must first understand the underlying working principles of neuronal noise, which sets the framework of our investigations.
Here, we utilize information theory to better understand the effects of neuronal noise on the overall communication. To this end, we generalize the HH model in such a way that noise can be added to the system alongside the coupling among the neurons. In the literature, the effect of coupling among different neurons has been explored using Transfer Entropy (TE) [6]; however, to the best of our knowledge, the effects of noise on these interactions have not been fully considered yet.
On the other hand, certain types of models have been suggested to include noise in the HH model [7] without any coupling between the neurons. Here, we approach the complicated modeling problem by using a simplified version consisting of two neurons, the coupling between them, and additional noise terms. We propose utilizing information theory to analyze the relationships in neural communication.
In the literature, information-theoretic quantities such as Entropy, Mutual Information (MI) and Transfer Entropy (TE) have been successfully utilized to analyze the statistical dependencies and relationships between random variables of highly complex systems [8]. Among these, MI is a symmetric quantity reporting the dependency between two variables, whereas TE is an asymmetric quantity that can be used to infer the direction of the interaction (i.e., the affecting and the affected variable) between them [9]. All the above quantities are calculated from observational data by inferring probability distributions. Despite the wide variety of distribution estimation techniques, the whole procedure still suffers from adverse effects, such as bias. The most common techniques for probability distribution estimation involve histograms [10], Parzen windows [11] and adaptive methods [12]. In the literature, histogram estimation is widely used due to its computational simplicity. To be able to rely on estimates obtained from data, reporting the statistical significance of each estimate [13] constitutes an important part of such methods.
In this work, we propose utilizing TE to investigate the directional relationships between the coupled neurons of an HH model under noisy conditions. To this end, we extend the traditional HH model and analyze the effect of noise on the directional relationships between the coupled neurons. As a first approach to modeling noisy neuronal interaction, we demonstrate the effect under certain levels of noise power in the simulations. Based on these simulations, we observe that the original directional interactions are preserved despite many changes in the structure of the neuronal code. Our future work will be based on the generalization of this model to N neurons and the effect of noise on their interactions.
2. Materials and Methods
2.1. The Hodgkin-Huxley Model
In this study, we use the Hodgkin-Huxley model, which mimics the spiking behavior of neurons recorded from the squid giant axon. This is the first mathematical model describing action potential generation, and it is one of the major breakthroughs of computational neuroscience [1]. The model was published in 1952, and the two physiologists Hodgkin and Huxley later received the Nobel Prize for this work; following it, Hodgkin-Huxley-type models have been defined for many different electrically excitable cells, such as cardiomyocytes [2], pancreatic beta cells [3] and hormone-secreting cells [4]. They observed that cell membranes behave much like electrical circuits. The basic circuit element is the phospholipid bilayer of the cell, which behaves like a capacitor accumulating ionic charge as the electrical potential across the membrane changes. Moreover, the resistors in a circuit are analogous to the ionic permeabilities of the membrane, and the electrochemical driving forces are analogous to batteries driving the ionic currents. Na⁺, K⁺, Ca²⁺ and Cl⁻ ions are responsible for almost all the electrical activity in the body. Thus, the electrical behavior of cells is based upon the transfer and storage of ions, and Hodgkin and Huxley observed that K⁺ and Na⁺ ions are mainly responsible for the dynamics of the HH system.
The mathematical description of the Hodgkin-Huxley model starts with the membrane potential $V$, based on the conservation of electric charge:

$$C_m \frac{dV}{dt} = -I_{\text{ion}} + I_{\text{app}},$$

where $C_m$ is the membrane capacitance, $I_{\text{app}}$ is the applied current, and $I_{\text{ion}}$ represents the sum of the individual ionic currents, each modeled according to Ohm's law:
$$I_{\text{ion}} = g_{\text{Na}}(V - E_{\text{Na}}) + g_{\text{K}}(V - E_{\text{K}}) + g_{\text{L}}(V - E_{\text{L}}),$$

where $g_{\text{Na}}$, $g_{\text{K}}$ and $g_{\text{L}}$ are the conductances, and $E_{\text{Na}}$, $E_{\text{K}}$ and $E_{\text{L}}$ are the reversal potentials associated with the corresponding currents. Hodgkin and Huxley observed that the conductances are also voltage dependent. They found that $g_{\text{K}}$ depends on four activation gates, so that $g_{\text{K}} = \bar{g}_{\text{K}} n^4$, whereas $g_{\text{Na}}$ depends on three activation gates and one inactivation gate, so that $g_{\text{Na}} = \bar{g}_{\text{Na}} m^3 h$. In the HH model, the ionic currents are thus defined as:

$$I_{\text{Na}} = \bar{g}_{\text{Na}} m^3 h (V - E_{\text{Na}}), \qquad I_{\text{K}} = \bar{g}_{\text{K}} n^4 (V - E_{\text{K}}), \qquad I_{\text{L}} = \bar{g}_{\text{L}} (V - E_{\text{L}}),$$
with Na⁺ activation variable $m$ and inactivation variable $h$, and K⁺ activation variable $n$. Here, $\bar{g}_{\text{Na}}$, $\bar{g}_{\text{K}}$ and $\bar{g}_{\text{L}}$ denote the maximal conductances. The activation and inactivation dynamics of the channels evolve according to the differential equations

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}.$$
The steady-state activation and inactivation functions, together with the time constants, are defined as

$$x_\infty(V) = \frac{\alpha_x(V)}{\alpha_x(V) + \beta_x(V)}, \qquad \tau_x(V) = \frac{1}{\alpha_x(V) + \beta_x(V)},$$

and the transition rates $\alpha_x(V)$ and $\beta_x(V)$ are given in Table 1.
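As an illustration of the full single-neuron model, the following is a minimal forward-Euler sketch in Python. It uses the standard squid-axon parameter set and the modern sign convention for the transition rates; the exact rates of this paper are those of Table 1, so the functions and the `simulate` helper below should be read as stand-ins, not as the paper's implementation.

```python
import numpy as np

# Standard (modern-convention) HH transition rates; the exact rates used in
# this paper are listed in Table 1 and may follow a different sign convention.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4
C_m = 1.0

def simulate(I_app=10.0, T=200.0, dt=0.1):
    """Forward-Euler integration of the single-neuron HH equations."""
    n_steps = int(T / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting initial conditions
    trace = np.empty(n_steps)
    for i in range(n_steps):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (-I_ion + I_app) / C_m
        # Gating dynamics: dx/dt = alpha_x(V)(1 - x) - beta_x(V) x
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

V_trace = simulate()   # sustained spiking for a sufficiently large I_app
```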
2.2. Information Theoretic Quantities
In information theory, Shannon entropy is defined as the average uncertainty of finding the system in a particular state $x$ out of a possible set of states $X$, where $p(x)$ denotes the probability of that state. It is also used to quantify the amount of information needed to describe a dataset. Shannon entropy is given by the following formula:

$$H(X) = -\sum_{x \in X} p(x)\,\log p(x).$$
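As a concrete illustration, the entropy of a continuous signal such as a membrane voltage trace can be approximated by discretizing the samples into histogram bins. The following is a minimal plug-in sketch; the bin count is an arbitrary illustrative choice.

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Histogram-based plug-in estimate of H(X) in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # convention: 0 log 0 = 0
    return -np.sum(p * np.log2(p))
```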
Mutual Information (MI) is another fundamental information-theoretic quantity, used to quantify the information shared between two datasets. Given two datasets denoted by $X$ and $Y$, the MI can be written as follows:

$$I(X;Y) = \sum_{x \in X}\sum_{y \in Y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)}.$$
The MI is a symmetric quantity, and it can be rewritten as a sum and difference of Shannon entropies:

$$I(X;Y) = H(X) + H(Y) - H(X,Y),$$

where $H(X,Y)$ is the joint Shannon entropy. If there is a directional dependency between the variables, such as a cause-and-effect relationship, a symmetric measure cannot unveil this dependency from data. In the literature, TE was proposed to analyze the directional dependencies between two Markov processes. To quantify the directional effect of a variable $X$ on $Y$, the TE is defined by comparing the conditional distribution of $Y$ given the past samples of both processes with the conditional distribution of $Y$ given only its own past values [14]. Thus, the asymmetry of TE helps us detect the two directions of information flow. The TE definitions in both directions (between variables $X$ and $Y$) are given by the following equations:

$$TE_{X \rightarrow Y} = \sum p\left(y_{n+1}, y_n^{(l)}, x_n^{(k)}\right) \log \frac{p\left(y_{n+1} \mid y_n^{(l)}, x_n^{(k)}\right)}{p\left(y_{n+1} \mid y_n^{(l)}\right)}, \tag{14}$$

$$TE_{Y \rightarrow X} = \sum p\left(x_{n+1}, x_n^{(k)}, y_n^{(l)}\right) \log \frac{p\left(x_{n+1} \mid x_n^{(k)}, y_n^{(l)}\right)}{p\left(x_{n+1} \mid x_n^{(k)}\right)}, \tag{15}$$
where $x_n^{(k)} = (x_n, \ldots, x_{n-k+1})$ and $y_n^{(l)} = (y_n, \ldots, y_{n-l+1})$ are the past states, and $X$ and $Y$ are $k$th- and $l$th-order Markov processes, respectively, such that $X$ depends on its $k$ previous values and $Y$ depends on its $l$ previous values. In the literature, $k$ and $l$ are also known as the embedding dimensions.
All the above quantities involve the estimation of probability distributions from the observed data. Among the many approaches in the literature, we utilize the histogram-based method to estimate the distributions in (14) and (15), due to its computational simplicity. In order to assess the statistical significance of the TE estimates, surrogate data testing is applied and the p-values are reported.
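To make this concrete, the following is a minimal sketch of such an estimator, assuming embedding dimensions $k = l = 1$ and a simple source-shuffling surrogate scheme; the bin count, the embedding, and the number of surrogates are illustrative choices, not the settings used to produce the results of this paper. It uses the identity $TE_{X \rightarrow Y} = H(Y_{n+1}, Y_n) - H(Y_{n+1}, Y_n, X_n) + H(Y_n, X_n) - H(Y_n)$, which follows directly from (14).

```python
import numpy as np

def _joint_entropy(*signals, bins=8):
    """Histogram plug-in estimate of the joint entropy (bits)."""
    counts, _ = np.histogramdd(np.column_stack(signals), bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def transfer_entropy(x, y, bins=8):
    """TE_{X->Y} with embedding dimensions k = l = 1, as a sum of entropies."""
    x_past, y_past, y_next = x[:-1], y[:-1], y[1:]
    return (_joint_entropy(y_next, y_past, bins=bins)
            - _joint_entropy(y_next, y_past, x_past, bins=bins)
            + _joint_entropy(y_past, x_past, bins=bins)
            - _joint_entropy(y_past, bins=bins))

def surrogate_p_value(x, y, n_surr=100, bins=8, seed=None):
    """Empirical p-value of TE_{X->Y} against source-shuffled surrogates."""
    rng = np.random.default_rng(seed)
    te = transfer_entropy(x, y, bins=bins)
    te_surr = np.array([transfer_entropy(rng.permutation(x), y, bins=bins)
                        for _ in range(n_surr)])
    return te, np.mean(te_surr >= te)
```

Shuffling the source series destroys its temporal relationship with the target while preserving its marginal distribution, so the fraction of surrogate TE values that reach the original estimate serves as an empirical p-value.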
2.3. The Proposed Method
In this paper, we focus on a system of two coupled HH neurons with synaptic coupling from neuron 1 to neuron 2. In addition, a normally distributed current noise term is added to the action potential generation of the squid axons in this two-neuron network. Each neuron involves a fast sodium current $I_{\text{Na}}$, a delayed rectifying potassium current $I_{\text{K}}$ and a leak current $I_{\text{L}}$. The differential equations for the rate of change of the membrane voltages are given as follows:

$$C_m \frac{dV_1}{dt} = -I_{\text{Na}}(V_1) - I_{\text{K}}(V_1) - I_{\text{L}}(V_1) + I_{\text{app}} + \xi_1(t), \tag{16}$$

$$C_m \frac{dV_2}{dt} = -I_{\text{Na}}(V_2) - I_{\text{K}}(V_2) - I_{\text{L}}(V_2) + I_{\text{app}} + I_{\text{syn}} + \xi_2(t), \tag{17}$$
where $V_1$ is the membrane voltage of the first neuron and $V_2$ is the membrane voltage of the second neuron. Here, $\xi_i(t)$ denotes the noise term, drawn from a normal distribution with zero mean and standard deviation $\sigma$. The synaptic coupling is defined simply through the voltage difference, $I_{\text{syn}} = k\,(V_1 - V_2)$, where $k$ is the synaptic coupling strength. When $k$ is between 0 and 0.25, spiking activity occurs with a unique stable limit cycle solution. Beyond $k = 0.25$, the system returns to a stable steady state and the spiking activity disappears. All other dynamics are the same as described in Section 2.1.
First, we propose computing the TE between $V_1$ and $V_2$ in the case of no noise in (16) and (17). Secondly, we include the noise components in (16) and (17) and compute the TE between $V_1$ and $V_2$ again. This comparison demonstrates the effects of noise on the information flow between the neurons. At first glance at equations (16) and (17), we can conclude that the direction of the information flow in the noiseless case must be from $V_1$ to $V_2$. However, when the noise is added, it is tedious to reach the same conclusion, as the added noise is capable of both adding spurious spikes and destroying existing ones. The simulation results in the next section demonstrate these findings and provide promising evidence for generalizing our model to more complex neuronal interactions under noise.
The model is implemented in the XPPAUT software [15] using the Euler method (dt = 0.1 ms).
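For readers who prefer a scripting environment to XPPAUT, the coupled system (16) and (17) can be sketched along the following lines. The rate functions, parameters, initial conditions and the simple per-step noise draw are illustrative assumptions carried over from the Section 2.1 sketch, not the exact configuration behind the reported results.

```python
import numpy as np

# HH rates and parameters as in the Section 2.1 sketch (standard convention).
def a_m(V): return 0.1*(V+40)/(1-np.exp(-(V+40)/10))
def b_m(V): return 4.0*np.exp(-(V+65)/18)
def a_h(V): return 0.07*np.exp(-(V+65)/20)
def b_h(V): return 1/(1+np.exp(-(V+35)/10))
def a_n(V): return 0.01*(V+55)/(1-np.exp(-(V+55)/10))
def b_n(V): return 0.125*np.exp(-(V+65)/80)

g_Na, g_K, g_L, E_Na, E_K, E_L, C_m = 120.0, 36.0, 0.3, 50.0, -77.0, -54.4, 1.0

def simulate_pair(k=0.1, sigma=1.0, I_app=10.0, T=500.0, dt=0.1, seed=0):
    """Euler integration of the coupled system (16)-(17): neuron 1 drives
    neuron 2 through I_syn = k (V1 - V2); xi ~ N(0, sigma) is drawn per step
    and added to each current balance."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    V = np.array([-65.0, -65.0])
    m = np.array([0.05, 0.05]); h = np.array([0.6, 0.6]); n = np.array([0.32, 0.32])
    V1, V2 = np.empty(n_steps), np.empty(n_steps)
    for i in range(n_steps):
        I_ion = g_Na*m**3*h*(V-E_Na) + g_K*n**4*(V-E_K) + g_L*(V-E_L)
        I_syn = np.array([0.0, k*(V[0]-V[1])])   # coupling only into neuron 2
        xi = rng.normal(0.0, sigma, size=2)       # noise terms of (16)-(17)
        V = V + dt*(-I_ion + I_app + I_syn + xi)/C_m
        m = m + dt*(a_m(V)*(1-m) - b_m(V)*m)
        h = h + dt*(a_h(V)*(1-h) - b_h(V)*h)
        n = n + dt*(a_n(V)*(1-n) - b_n(V)*n)
        V1[i], V2[i] = V
    return V1, V2
```

Feeding the two traces to the TE estimator sketched in Section 2.2 should then yield a larger value for the $V_1 \rightarrow V_2$ direction than for the reverse, matching the imposed coupling direction.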