Learning in Convolutional Neural Networks Accelerated by Transfer Entropy

Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between neuron output pairs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures which integrates TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed. On the flip side, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, to achieve a reasonable computational overhead–accuracy trade-off, it is efficient to consider only the inter-neural information transfer of the neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability and becoming active only periodically, not after processing each input sample. Therefore, the TE can be considered in our model a slowly changing meta-parameter.


Introduction
Sometimes, it is difficult to distinguish causality from statistical correlation. A prerequisite of causality is the time lag between cause and effect: the cause precedes the effect [1,2]. We consider here causality in a statistical sense, measured by information transfer. The direction of statistical causality is inferred from the knowledge of a temporal structure, assuming that the cause has to precede the effect [3]. According to the authors of [4], causal information flow describes the causal structure of a system, whereas information transfer can then be used to describe the emergent computation on that causal structure. For practical reasons, it is convenient to accept that causality can be measured by information transfer, even if the two concepts are not exactly the same.
Transfer Entropy (TE) is an information transfer measure introduced by Schreiber [5] to quantify the statistical coherence between events (usually, time series). Later, TE was considered in the framework of Granger's causality [6,7]. Typically, causality is related to whether interventions on a source have an effect on the target, whereas information transfer is related to whether observations of the source can help predict state transitions on the target. According to Lizier et al. [4], to be considered a correct interpretation of information transfer, TE should only be applied to causal information sources for the given destination. A comparison between the TE and causality indicators can be found, for instance, in [4]. With this in mind, we will use the information transfer (measured by the TE) to establish the presence of, and to quantify, causal relationships.
Massey [8] defined the directivity of information flow through a channel in the form of directed information. In the presence of feedback, this is a more useful quantity than the traditional mutual information. Similarly, the TE measures the information flow from one process to another by quantifying the deviation from the generalized Markov property as a Kullback-Leibler distance. Therefore, the TE can be used to estimate the directional informational interaction between two random variables.
In our case, we quantify the information transfer between the neural layers of feedforward neural architectures. The information between these layers is directed: during the feedforward phase of the backpropagation algorithm, the layers are computed successively. We reduce the weight increments for larger values of TE, with the objective of preserving the network configuration if the information is efficiently transferred between layers. The cause (the output from a neural layer) precedes the effect (the input to a subsequent layer). We measure causality by this directional information transfer. The directed informational transfer between two neural layers cannot be measured by a symmetrical statistical measure, such as mutual information, where cause and effect coexist simultaneously.
A variation of the TE is the Transfer Information Energy (TIE), introduced by us in [9,10] as an alternative to the TE. The TE measures the reduction in uncertainty about one event given another, whereas the TIE measures the increase in certainty about one event given another. The TE and the TIE can both be used as quantitative indicators of information transfer between time series. The TIE can be computed ~20% faster than the TE [9].
Recently, there has been growing interest in applying TE to quantify the effective connectivity between artificial neurons [11,12,13,14]. For instance, the reservoir adaptation method in [15] optimizes the TE at each individual unit, depending on properties of the information transfer between the input and output of the system. Causal relationships within a neural network were defined in [16]. It is a natural question whether causal relationships, quantified by the TE information transfer measure, can be inferred from neural networks and included in the learning mechanisms of neural architectures. There are few results reporting applications of TE in training neural networks: Herzog et al. [17,18], Patterson et al. [19], Obst et al. [15], and Moldovan et al. [20]. Herzog et al. [17] computed the feedforward TE between neurons to structure neural feedback connectivity. These feedback connections were used in the training algorithm of a Convolutional Neural Network (CNN) classifier. Herzog et al. averaged (by layer and class) the TE values gathered from directly or indirectly connected neurons, using thresholded activation values. The averaged TEs were then used in the subsequent neurons' activations, within the training algorithm. Only one TE-derived value is used for each of the layers. Herzog et al. observed that there is a decreasing feedforward convergence towards higher layers. Furthermore, the TE value is in general lower between nodes at larger layer distances than between neighbors. This is caused by the fact that long-range TE is calculated by conditioning on the intermediate layers.

Herzog et al. continued their research in [18]. Their goal was to define clear guidelines on how to compute the TE-based neural feedback connectivity to improve the overall classification performance of feedforward neural network classifiers. Structuring such feedback connections in a deep neural model is very challenging because of the very large number of candidate connections. For AlexNet, Herzog et al. narrowed the TE feedback connections to about 3.5% of all possible feedback connections. Then they used a genetic algorithm to modify their connection strengths and obtained in the end a set of very small weights, similar to many feedback paths in the brain, which amplify already connected feedforward paths only very mildly. According to their experiments in [18], this technique improved the classification performance. In a nutshell, the algorithmic steps in [18] are (i) train the network employing standard backpropagation, (ii) fine-tune the resulting network using feedback TE connections, and (iii) apply a genetic algorithm to generate the best performing network.
Inspired by Herzog et al.'s paper [17], we defined in [20] a novel information-theoretical approach for analyzing the information transfer (measured by TE) between the nodes of feedforward neural networks. The approach employed a different learning mechanism than the one in [17]. The TE was used to establish the presence of relationships between neurons and to quantify them; the TE values were obtained from neuron output pairs located in consecutive layers. Calculating the TE values required series of event values, obtained by thresholding the neurons' outputs with a constant value. We introduced a backpropagation-type training algorithm which used TE feedback connections to improve its performance. Compared with a standard backpropagation training algorithm, the addition of the TE feedback in the training scheme implies a computational overhead needed to compute the TE values in the training stage. However, according to our experiments, adding the TE feedback parameter has three benefits [20]: (a) it accelerates the training process, as, in general, fewer epochs are needed; (b) it generally achieves a better test set accuracy; and (c) it generates stability during training. The neural models trained in [20] were relatively small. This allowed us to use all training samples to compute the TE from time series. When training complex models with large datasets, this is impractical for computational reasons. The question (and main motivation for our current work) is how to adapt our technique to such real-world cases.
We extend here the results from [20] and adapt them to a much larger neural architecture: the CNN. Rather than being a simple generalization, this is a novel approach, as we had to redefine the network training process. The motivation is not only the popularity of CNNs in current deep learning applications, but also the fact that, despite their ability to generate human-like predictions, CNNs still lack a major component: interpretability. CNNs utilize a hierarchy of neural network layers. We consider that the statistical aspects of information transfer between these layers can bring insight into the feature abstraction process. Our idea is to use TE as a momentum factor in the backward step of the backpropagation of error and to update the weights in accordance with the unidirectional amount of information transferred between pairs of neurons. We thus leverage the significant informational connection between two units in the learning phase, obtaining a better granularity of the weight increments.
In contrast to the work in [18], we integrate the TE feedback connections in the training algorithm and do not use them in a subsequent fine-tuning step. The way we select the feedback connections is also different. The similarity between Herzog et al.'s method and our work consists in using TE to compute neural feedback connections. However, the two approaches are very different. Using feedback connections in a general training algorithm like backpropagation adds a computational overhead, not discussed in [17,18]. We may expect that there is a trade-off between execution time and classification accuracy.
Beyond the exploratory aspect of our work, our main insights are twofold. First, we attempt to improve the training algorithm of CNN architectures (training time, accuracy). Second, we create the premises for further information transfer interpretation within deep learning models.
The rest of the paper is structured as follows. Section 2 introduces the TE definition and notations, whereas Section 3 explains how we compute the TE feedback in a CNN. Section 4 introduces the core of our novel method: the integration of the TE feedback mechanism into the training phase of a CNN. Experimental results are presented in Section 5. Section 6 contains the final remarks and open problems.

Transfer Entropy Notations
The recent literature on TE applications is rich and illustrates an increasing interest in the use of this tool, in a broad range of applications, to measure the directional information flow between time series based on their probability density functions. Applications of TE to date have mainly been concentrated in neuroscience, bioinformatics, artificial life, climate science, finance, and economics [21]. An introduction to TE is offered in [21]. A nonparametric approach to causality, considering the conditional entropy, was introduced by Baghli [22]. An extensive analysis of causality detection based on information-theoretic approaches in time series analysis is provided in [23].
To introduce the formal definition of TE, let us consider two discrete stationary processes I and J. Relative to time t, k previous data points of process I, l previous data points of process J, and the next point of process I contribute to the definition of TE as follows [5,24]:

T_{J→I} = Σ p(i_{t+1}, i_t^(k), j_t^(l)) log [ p(i_{t+1} | i_t^(k), j_t^(l)) / p(i_{t+1} | i_t^(k)) ],    (1)

where i_t^(k) and j_t^(l) are the k- and l-dimensional delay vectors of time series I and J, respectively, and i_{t+1} and j_{t+1} are the discrete states at time t + 1 of I and J, respectively. The generalization of the entropy rate to two processes is a method to obtain a mutual information rate which measures the deviation from independence, and therefore T_{J→I} can be obtained from the Kullback entropy [5]. TE provides an evaluation of the asymmetric information transfer between two stochastic variables, being a tool which can be used to estimate the unidirectional interaction of pairs of time series.
The precise calculation of entropy-based measures is a well-known difficult task, and the computational effort is still significant when accurate estimation of TE from a dataset is required [25]. One of the most widely used approaches is based on histogram estimation with fixed partitioning, but it is not scalable and is sensitive to the bin width setting. As TE is derived from nonparametric entropy estimation, popular methods widely used for computing the TE include kernel density estimation, nearest-neighbor, and Parzen window density estimation [26,25,27].
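As an illustration of the histogram (plug-in) approach mentioned above, the following sketch estimates Equation (1) for two binary event series with history lengths k = l = 1. This is a minimal sketch, not the exact estimator used in our experiments; the function name and the eps smoothing constant are our own choices.

```python
import numpy as np

def transfer_entropy(I, J, eps=1e-12):
    """Plug-in (histogram) estimate of T_{J->I} for binary series, k = l = 1.

    I, J: sequences of binary events (0/1) of equal length. Returns TE in bits.
    """
    I = np.asarray(I, dtype=int)
    J = np.asarray(J, dtype=int)
    # Joint counts over the triple (i_{t+1}, i_t, j_t).
    counts = np.zeros((2, 2, 2))
    for i_next, i_prev, j_prev in zip(I[1:], I[:-1], J[:-1]):
        counts[i_next, i_prev, j_prev] += 1
    p_xyz = counts / counts.sum()                 # p(i_{t+1}, i_t, j_t)
    p_yz = p_xyz.sum(axis=0, keepdims=True)       # p(i_t, j_t)
    p_xy = p_xyz.sum(axis=2, keepdims=True)       # p(i_{t+1}, i_t)
    p_y = p_xyz.sum(axis=(0, 2), keepdims=True)   # p(i_t)
    # T_{J->I} = sum p(x,y,z) * log2[ p(x|y,z) / p(x|y) ]
    ratio = (p_xyz * p_y) / (p_xy * p_yz + eps)
    te = float(np.sum(p_xyz * np.log2(ratio + eps)))
    return max(te, 0.0)
```

For a source that deterministically predicts the destination's next state, the estimate approaches 1 bit; for independent series it stays near 0, illustrating the asymmetry and the directionality of the measure.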

Computing the TE Feedback in a CNN
CNNs employ a particular form of linear transformation: convolution. A convolution operation retains the important and variational features of an input, while flattening the non-variant components. CNN design follows vision processing in living organisms, and CNNs became very popular starting with the seminal paper of Yann LeCun et al. [28] on handwritten character recognition, where they introduced the LeNet-5 architecture. Since then, research on CNN architectures has produced a variety of deep networks, like AlexNet [29], VGG [30], GoogleNet [31], ResNet [32], and more recently EfficientNet [33]. These architectures have many applications, which makes them widely used in image recognition and classification, medical imaging, and time series analysis, to name a few.
Despite their ability to generate human-like predictions, especially in image recognition tasks, CNNs still lack a major component: interpretability [34]. Neural networks in general are known for their black-box type of behavior, hiding the inner working mechanisms of reasoning. However, reasoning and causal explanations are extremely important for domains like medicine, law, and finance. It is tempting to model statistical cause-effect relationships between CNN neural layers using TE, with the goal of contributing to the interpretability of a CNN. As a first step, even improving the learning algorithm using an information transfer indicator is interesting, as it may lay the ground for future causality explanations.
To quantify inter-neural information transfer in a CNN, we have to quantify the relationship between the training samples and the output values of neurons. The measurable relationship is constructed by selecting subsequent layers and extracting TE values for the pairs of neurons involved. The computed TE values directly participate in the learning mechanism of the network, described in Section 4.
Each TE value is computed by combining two time series, I and J, each obtained by binarizing the activation function of a neuron with a threshold g. Each value in time series I and J is a neuron output computed for an input sample. An ideal binarization threshold should produce only a few positive values. The reason is that, in such a case, the obtained TE values tend to be comparable with the learning rate and then tend to flatten during CNN training; this gives stability to the learning process. A similar binarization technique was used by Herzog et al. in [17,18].
We compute I and J in Equation (1) for individual pairs of neurons from adjacent layers k and l, l being the layer after k. Index t is the position of an input sample in the training sequence. Time series I and J are updated online, after processing each input sample. For each considered pair of neurons, the TE value is computed only periodically, after processing a fixed number of training samples. For all pairs of neurons in layers k and l, we obtain a triangular adjacency matrix of TE values.
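The construction above can be sketched as follows: activations are thresholded into binary events, and the TE values for all neuron pairs between the two layers are collected into a matrix. The helper names and the pluggable te_fn estimator are illustrative assumptions, not the exact code of our implementation.

```python
import numpy as np

def binarize(activations, g):
    # Threshold neuron outputs into binary events (1 = "active"); g is the
    # binarization threshold described in the text.
    return (np.asarray(activations) > g).astype(int)

def te_matrix(series_k, series_l, te_fn):
    # Pairwise TE values between each neuron i of layer k and each neuron j
    # of the next layer l; te_fn(I, J) stands in for any TE estimator over
    # two binary event series.
    te = np.zeros((len(series_k), len(series_l)))
    for i, I in enumerate(series_k):
        for j, J in enumerate(series_l):
            te[i, j] = te_fn(I, J)
    return te
```

In the training loop, binarize would be applied to each layer's activations after every input sample, and te_matrix would be invoked only periodically, matching the periodic TE computation described above.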
It is computationally not feasible to compute the TE values for all possible pairs of neurons. Actually, according to our experiments, not all inter-neural information transfers are relevant for the training process of a network. We observed that the highest impact of using TE in training a CNN is within the last layers. We interpret this as a fine-tuning of the classification process, as the classifiable features are available in the final layers of the network. Therefore, we compute the TE values only between the neurons of the last two layers of the CNN. The focus on the last two layers diminishes the computational overhead for calculating the TE values. Our approach is different from the one in Herzog et al. [17,18], where TE interactions between non-adjacent layers are also calculated.

TE Feedback Integration in CNN Training
Backpropagation training has two standard phases [35]: feedforward and backward. In the forward phase, for each input sample, in addition to the activations of the neurons for each layer, we also record the time series needed for the TE computation.
The last layer's output is used to calculate the error, which for classification tasks is usually L = −ln(p_c). In the backward phase, the weights are updated, in reverse order, starting from the last layer. In contrast to the standard backpropagation algorithm, we update the weights with a value resulting from multiplying the current weights by the identity matrix minus the computed TE values. The two phases (feedforward and backward) are alternated until the entire training set is used, and this completes one training epoch. The training consists of several epochs, the training set being randomized at the start of each epoch. In practice, the backpropagation algorithm is used in conjunction with Mini-batch Stochastic Gradient Descent (SGD) [36], which updates the weights after a batch of training samples is processed. Mini-batch SGD is the algorithm of choice for neural network training. Updating the gradient with the TE addition is synchronized with the batch gradient updates. Without synchronization, the SGD method would diminish the impact of the TE term.
After each batch of training samples goes through the forward computation, the backpropagation of errors and the new weights computation are performed using Algorithm 1. Different from the standard backpropagation algorithm [35], we multiply (line 8 of the algorithm) the updated weights by (I − (te^(k))^⊤).
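The modified update on line 8 can be sketched as follows for one fully connected layer. One assumption to flag: since the product with (I − (te^(k))^⊤) is elementwise, we read the damping factor for each individual weight as (1 − te), so that weights whose neuron pair transfers more information are changed less; the function name and array shapes are illustrative.

```python
import numpy as np

def te_weight_update(w, delta, a_prev, te, eta):
    # w:      (n_out, n_in) weight matrix of the layer
    # delta:  (n_out,) layer error
    # a_prev: (n_in,) activations of the previous layer
    # te:     (n_in, n_out) TE values for the neuron pairs of the two layers
    # eta:    learning rate
    grad = np.outer(delta, a_prev)      # standard backpropagation gradient
    # Damp the SGD step elementwise: larger TE -> smaller effective change.
    return (w - eta * grad) * (1.0 - te.T)
```

With te = 0 everywhere this reduces to the plain SGD step, which matches the description of the TE term as a modulation on top of the usual update.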
Each pair of neurons generates a te value; two consecutive layers k and l produce a triangular matrix te^(k), used to update the weights for layer k as shown in Algorithm 1: w^(k) = (w^(k) − η δ^(k) a^(k−1)) ⊙ (I − (te^(k))^⊤), where δ^(k) is the layer error and ⊙ is the elementwise product. From line 8 of the algorithm, we conclude that the right-hand factor tempers the weight values, in particular when the cost function has a strong coercive action on the misclassified samples. This accelerates the learning process, as the weights are updated with smaller deltas at the beginning of epochs. Without te, the weights receive larger corrections at the beginning of the epoch.

Experimental Results

All experiments were completed on an AWS Sagemaker ml.p2.xlarge instance using an NVIDIA Tesla K80 GPU, with 4 vCPU cores and 61 GB of RAM, engineered on top of PyTorch 1.7.1. The repository is available on GitHub at https://github.com/avmoldovan/CNN-TE. We used the following well-known datasets as benchmarks: CIFAR-10 [37], FashionMNIST [38], [39], SVHN [40], and USPS [41]. This selection was determined by the vast amount of literature surrounding these datasets and the number of available implementations and comparisons.
The networks used consist of the following sequential components: convolution and feature engineering, deconvolution, and classifier (in this order). Within these, various mechanisms were used to prevent overfitting (e.g., dropout) and to obtain normalization.
Our experiments and additions to this architecture involved mainly the classifier part of the network, but we have also run experiments on the convolutional layers.
We applied the TE on the last two (fully connected) layers, the pre-softmax and softmax layers, with different binarization thresholds determined experimentally (see Table 1). The TE term is applied on the weights of the k-th layer (see the red arrows in Figure 5).
The softmax layer transforms the outputs of a linear layer into probabilities. The maximum probability corresponds to the predicted class. For all outcomes with probability above the threshold, the J time series values are positive. During training, at the beginning of each epoch, we noticed an increased instability, visible through the high variation and values of the gradients, as seen in Figure 3. These observations apply to all datasets and networks, with or without the TE added. The TE values also exhibit instability and have larger values at the beginning of each epoch. However, the TE values are smaller during the first epochs, due to the selected threshold value, which matches the larger weight values from subsequent epochs. During each epoch, and also during the whole training process, the slope of the gradients gradually decreases and the TE variation also decreases. To validate the TE impact, we set a target accuracy to be reached by both implementations, with and without TE. We observed which implementation reaches the target accuracy first, w.r.t. the number of epochs needed, as well as the average time per epoch. These results show which of the two implementations requires fewer epochs to reach a target accuracy on the test set. For a fair comparison, we used the same hyperparameters for the CNN implementations with and without TE (see Table 1). The results are summarized in Tables 2-6.
As observed in [20], using time series constructed from the full length of the epoch results in smoother TE values. However, computing the TE for large time series (e.g., >10^6 values) is computationally impractical. We also observed the necessary length of the time series that produces observable and positive outcomes. Therefore, we limited this length to u (determined experimentally), using a sliding window technique. To obtain smoother TE values, we slid the window with every batch of training samples, instead of dividing the training set into u-sized windows. Figure 4 illustrates how the time series were computed.
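The sliding window can be sketched with fixed-capacity buffers, one per neuron: the window advances with every batch, so consecutive TE estimates share most of their samples and vary smoothly. The class name and interface are our own illustration of the technique, not the paper's exact code.

```python
from collections import deque

class SlidingEventWindow:
    """Keeps the last u binary events per neuron. u is the experimentally
    determined window length (e.g., five times the batch size)."""

    def __init__(self, n_neurons, u):
        self.buffers = [deque(maxlen=u) for _ in range(n_neurons)]

    def append_batch(self, batch_events):
        # batch_events: iterable of per-sample event vectors, one bit per
        # neuron; old events fall out automatically once the window is full.
        for events in batch_events:
            for buf, e in zip(self.buffers, events):
                buf.append(e)

    def series(self, neuron):
        # Current event time series for one neuron, ready for TE estimation.
        return list(self.buffers[neuron])
```

Because the deques are bounded, memory stays constant regardless of how many batches have been processed, which is what makes the approach practical for large training sets.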
Using this approach, we analyzed the impact of the window length on the accuracy of the classifier. After q training samples, we computed the TE. The u windows overlap partially, as shown in Figure 4. According to our experiments, limiting the length of the time series does not have a significant impact on the performance of the trained classifier. Furthermore, we found that a u value of five times the batch size is a good trade-off between accuracy and computational overhead.

Convolutional layers are a major building block of CNNs [42]. A convolution is performed on the input data with the use of a kernel to produce a feature map. Applying the TE to measure the inter-neural information transfers between the input data and the resulting feature maps is interesting to consider. We can compute the median of the activations of the neurons within each convolutional kernel from layer conv1 (see Figure 1) and pair it with the outputs of the subsequent layer conv2. The obtained te values can be used in the CNN learning process. In our experiments, under this setup, the learning process diverged. In the best run, the top-1 accuracy remained low.
In addition, this approach has a considerable computational overhead, especially if we consider several convolutional layers. This justifies our focus on the last two fully connected layers only. It is also interesting to evaluate the impact of the length s of the time series. Experimentally, we observed that when s is a multiple of the batch size b, the accuracy maintains a favorable trend. In this scenario, the time series are constructed as illustrated in Figure 4. The best results were obtained when the length of the series is extended to the full epoch.
In another set of experiments, we tried to minimize the number of considered neuron pairs from the last two layers, with minimal impact on the achieved accuracy. In other words, we tried to obtain an optimal performance-computational overhead trade-off. We found that the accuracy improves significantly even when using only 10% of randomly selected neurons. The neurons were selected randomly either for an entire epoch or for a TE window. The two strategies yielded similar results. This is somewhat similar to dropout, as only some of the connections are updated using the TE feedback. The top performance is achieved when all neurons are selected.
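Selecting a random fraction of neuron pairs can be sketched by masking the te matrix, so that unselected pairs fall back to the plain SGD update (their damping factor becomes 1 − 0 = 1). The function name is illustrative and the masking granularity (per pair rather than per neuron) is our own simplification.

```python
import numpy as np

def masked_te(te, fraction, rng):
    # Keep TE feedback for roughly `fraction` of the neuron pairs and zero
    # out the rest, similar in spirit to dropout on the TE connections.
    mask = rng.random(te.shape) < fraction
    return te * mask
```

The mask can be drawn once per epoch or once per TE window, matching the two selection strategies compared in the experiments.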
According to our experiments, using the TE feedback loops for additional layer pairs improves the performance. We also conducted other experiments, applying the TE correction to an identical network pre-trained without the TE mechanism. We continued training, freezing all layers except the pre-softmax and softmax layers, and applying the TE correction. The results were not consistent across multiple executions, since the TE training on top of the non-TE training changes the convergence logic and creates instability.

Conclusions and Open Problems
TE can be used to measure how significantly neurons interact [11,12]. In our study, we add specific TE feedback connections between neurons to improve performance. Our results confirm what we obtained in our previous study on a simple feedforward neural architecture [20]. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed. On the flip side, it adds computational overhead to each epoch. The optimal balance between these two conflicting factors is application dependent.
According to our results, in a CNN classifier it is efficient to consider only the inter-neural information transfer of a random subset of the neuron pairs from the last two fully connected layers. The information transfer within these layers has the most significant impact on the learning process, since they are the closest to the high-level classification decision. Many of the inter-neural information transfer connections appear to be redundant, and this allows us to use only a fraction of them. These observations are very interesting and may be further discussed from a neuroscientific perspective (e.g., the vertebrate brain [43,44]).
Generally, to optimize the generalization of a learning algorithm, we try to minimize the number of its parameters. As adding the TE to the learning mechanism generates new hyperparameters, connected to the integration of the TE in the learning algorithm, the question is whether this leads to overfitting and a weaker generalization performance.
In our experiments, the TE acted as a smoothing factor, becoming active only periodically, not after processing each input sample. Therefore, the TE can be considered in our model a slowly changing meta-parameter. This can be related to the hierarchy of quickly-changing vs. slowly-changing parameters in learning neural causal models [45].
We observed that the TE feedback generates stability during training, this being consistent with the results presented in [20].
According to the authors of [18], it is tempting to speculate that a similar principle, an evaluation of the relevance of the different feedforward pathways, might have been a phylo- or ontogenetic driving force for the design of different feedback structures in real neural systems.

The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this manuscript:

Algorithm 1 :
Backpropagation using TE for a single step and a single mini-batch. Mini-batches are obtained by equally dividing the training set by a fixed number. This algorithm is repeated for all the available mini-batches and for a number of epochs. Bold items denote matrices and vectors. σ′ is the derivative of the activation function σ, δ^(k) is the layer error, ⊙ is the elementwise product, and η is the learning rate. The k and l indices are the same as the ones in Equation (1).

begin
    foreach layer k = 1, 2, ..., l do
        z^(k) = w^(k) a^(k−1) + b^(k), where z^(k) is the output of layer k and b^(k) is the bias
        a^(k) = σ(z^(k)), where a^(k) is the output of the activation function σ
    end
    foreach layer k = l − 1, ..., 1 do
        compute the layer error δ^(k)
        w^(k) = (w^(k) − η δ^(k) a^(k−1)) ⊙ (I − (te^(k))^⊤), where I is the identity matrix
    end
end

Figure 1
Figure 1 depicts the architecture of the network used for the USPS dataset.

Figure 1 :
Figure 1: Illustration of the feedforward phase for the USPS dataset. The green arrows indicate the layer outputs that are used to compute the TE (plotted using https://github.com/HarisIqbal88/PlotNeuralNet).

Figure 2 :
Figure 2: During the feedforward step, we compute the time series I and J, and the te matrix, as shown by the green arrows. When the backward step propagates the errors, we use the te matrix in the weight updates, as shown in Algorithm 1.

Figure 3 :
Figure 3: Evolution of the te standard deviation values over the first 4 epochs for the SVHN+TE dataset, for the pre-softmax layer. Each data point in the plot represents a batch. The rest of the TE values have a similar shape and decrease slowly during training. We observe spikes of the TE values at the beginning of each epoch, due to the training set randomization. During the first epoch, the TE values are not calculated for the first batches, in order to prevent anomalous values; thus their value is close to 0.

Figure 4 :
Figure 4: Illustration of how time series I and J are produced for a pair of neurons from layers k and l, for multiple windows of events u_1, ..., u_q.

A.M., A.C. and R.A. equally contributed to the published work. All authors have read and agreed to the published version of the manuscript. The costs to publish in open access were covered by Siemens SRL. This work was published in the Entropy journal, Volume 23, Year 2021, Number 9, Article Number 1218, PubMedID 34573843, ISSN 1099-4300, DOI: 10.3390/e23091218, available also at https://www.mdpi.com/1099-4300/23/9/1218.

Table 2 :
Results for CIFAR-10 [37], with/without TE. The increased training time results from the large size of the last linear layer.
However, this increases the number of TE computations needed exponentially. For the USPS network (see Appendix A.5), trained on a very simple dataset, computing the TE for the last two linear layers adds an overhead of 7 minutes of training time per epoch. When computing the TE only for the pre-softmax and softmax layers, training takes 6 minutes per epoch. Computing the TE for the convolutional layers of the USPS network implies an increased computational overhead: for all possible pairs of kernels between the convolutional layers, we measured an extra three days of training per epoch. The performance cost increases almost linearly with the number of performed TE calculations. The exact number of TE computations for a pair of layers is the product of the layer sizes and the number of batches. Therefore, it is computationally impractical to compute the TE for all layers.