Article

Variational Information Bottleneck for Semi-Supervised Classification

1 Department of Computer Science, University of Geneva, 1227 Carouge, Switzerland
2 DeepMind, London N1C 4AG, UK
* Author to whom correspondence should be addressed.
Entropy 2020, 22(9), 943; https://doi.org/10.3390/e22090943
Submission received: 22 July 2020 / Revised: 24 August 2020 / Accepted: 24 August 2020 / Published: 27 August 2020
(This article belongs to the Special Issue Information Bottleneck: Theory and Applications in Deep Learning)

Abstract

In this paper, we consider an information bottleneck (IB) framework for semi-supervised classification with several families of priors on the latent space representation. We apply a variational decomposition of the mutual information terms of the IB. Using this decomposition, we analyze several regularizers and practically demonstrate the impact of different components of the variational model on the classification accuracy. We propose a new formulation of the semi-supervised IB with hand-crafted and learnable priors and link it to previous methods such as the semi-supervised versions of VAE (M1 + M2), AAE, CatGAN, etc. We show that the resulting model allows a better understanding of the role of various previously proposed regularizers in the semi-supervised classification task in the light of the IB framework. The proposed IB semi-supervised model with hand-crafted and learnable priors is experimentally validated on MNIST under different amounts of labeled data.

Notations

We will denote a joint generative distribution as $p_\theta(x,z) = p_\theta(z)\, p_\theta(x|z)$, where the marginal $p_\theta(z)$ is interpreted as a targeted distribution of the latent space and the marginal $p_\theta(x) = \mathbb{E}_{p_\theta(z)}\big[p_\theta(x|z)\big] = \int_z p_\theta(x|z)\, p_\theta(z)\, dz$ as a generated data distribution, with the generative model described by $p_\theta(x|z)$, where $\mathbb{E}$ stands for the expected value. The joint data distribution is $q_\phi(x,z) = p_D(x)\, q_\phi(z|x)$, where $p_D(x)$ denotes an empirical data distribution, $q_\phi(z|x)$ is an inference or encoding model, and the marginal $q_\phi(z)$ denotes a "true" or "aggregated" distribution of the latent space data. We will denote the parameters of the encoders as $\phi_a$ and $\phi_z$, and those of the decoders as $\theta_c$ and $\theta_x$. The discriminators corresponding to Kullback–Leibler divergences are denoted as $\mathcal{D}_x$, where the subscript indicates the space to which the discriminator is applied. The cross-entropy metrics are denoted as $\mathcal{D}_{x\hat{x}}$, where the subscript indicates the corresponding vectors. $X$ denotes a random vector, while the corresponding realization is denoted as $x$.

1. Introduction

Deep supervised classifiers demonstrate impressive performance when the amount of labeled data is large. However, their performance significantly deteriorates as the number of labeled samples decreases. Recently, semi-supervised classifiers based on deep generative models such as VAE (M1 + M2) [1], AAE [2], CatGAN [3], etc., along with several other approaches based on multi-view and contrastive metrics, to mention only the most recent ones [4,5], have been considered as a solution to the above problem. Despite the remarkable reported results, the information-theoretic analysis of semi-supervised classifiers based on generative models and the role of different priors aiming to compensate for the lack of labeled data remain little studied. Therefore, in this paper we address these issues using the IB principle [6] and practically compare different priors on the same classifier architecture.
Instead of considering the latent space of generative models such as VAE (M1 + M2) [1] and AAE [2] trained in an unsupervised way as suitable features for classification, we depart from the IB formulation of supervised classification, where we consider an encoder-decoder formulation of the classifier and impose priors on its latent space. Thus, we study an approach to semi-supervised classification based on an IB formulation with a variational decomposition of the IB compression and classification mutual information terms. To better understand the role and impact of different elements of the variational IB on the classification accuracy, we consider two types of priors on the latent space of the classifier: (i) hand-crafted and (ii) learnable priors. Hand-crafted latent space priors impose constraints on the distribution of the latent space by fitting it to some target distribution according to the variational decomposition of the compression term of the IB. This type of latent space prior is well known from information dropout [7]. One can also apply the same variational decomposition to the classification term of the IB, where the distribution of labels is supposed to follow some target class distribution so as to maximize the mutual information between the inferred labels and the targeted ones. This type of class label space regularization reflects the adversarial classification used in AAE [2] and CatGAN [3]. In contrast, learnable latent space priors aim at minimizing the need for human expertise in imposing priors on the latent space. Instead, the learnable priors are learned directly from unlabeled data using the auto-encoding (AE) principle. In this way, the learnable priors are supposed to compensate for the lack of labeled data in semi-supervised learning while minimizing the need for hand-crafted control of the latent space distribution.
We demonstrate that several state-of-the-art models such as AAE [2], CatGAN [3], VAE (M1 + M2) [1], etc., can be considered to be instances of the variational IB with the learnable priors. At the same time, the role of different regularizers in the hand-crafted semi-supervised learning is generalized and linked to known frameworks such as information dropout [7].
We evaluate our model on the standard MNIST dataset for both hand-crafted and learnable priors. Besides revealing the impact of different components of the variational IB factorization, we demonstrate that the proposed model outperforms prior works on this dataset.
Our main contributions are three-fold: (i) we propose a new formulation of the IB for semi-supervised classification and use a variational decomposition to convert it into a practically tractable setup with learnable parameters; (ii) we develop the variational IB for two classes of priors on the latent space of the classifier, hand-crafted and learnable, and show its link to state-of-the-art semi-supervised methods; (iii) we investigate the role of these priors and of different regularizers in the classification, latent and reconstruction spaces for the same fixed architecture under different amounts of training data.

2. Related Work

Regularization techniques in semi-supervised learning: Semi-supervised learning tries to find a way to benefit from a large number of unlabeled samples available for training. The most common way to leverage unlabeled data is to add a special regularization term or some mechanism to better generalize to unseen data. The recent work [8] identifies three ways to construct such a regularization: (i) entropy minimization, (ii) consistency regularization and (iii) generic regularization. The entropy minimization [9,10] encourages the model to output confident predictions on unlabeled data. In addition, more recent work [3] extends this concept to adversarially generated samples or fakes for which the entropy of class label distribution was suggested to be maximized. Finally, the adversarial regularization of label space was considered in [2], where the discriminator was trained to ensure the labels produced by the classifier follow a prior distribution, which was defined to be a categorical one. The consistency regularization [11,12] encourages the model to produce the same output distribution when its inputs are perturbed. Finally, the generic regularization encourages the model to generalize well and avoid overfitting the training data. It can be achieved by imposing regularizers and corresponding priors on the model parameters or feature vectors.
In this work, we implicitly use the concepts of all three forms of considered regularization frameworks. However, instead of adding additional regularizers to the baseline classifier as suggested by the framework in [8], we will try to derive the corresponding counterparts from a semi-supervised IB framework. In this way, we will try to justify their origin and investigate their impact on overall classification accuracy for the same system architecture.
Information bottleneck: In recent years, the IB framework [6] has been considered as a theoretical framework for the analysis and explanation of supervised deep learning systems. However, as shown in [13], the original IB framework faces several practical issues: (i) for deterministic deep networks, either the IB functional is infinite for the network parameters, which leads to an ill-posed optimization problem, or it is piecewise constant and hence does not admit gradient-based optimization methods, and (ii) the invariance of the IB functional under bijections prevents it from capturing properties of the learned representation that are desirable for classification. In the same work, the authors demonstrate that these issues can be partly resolved for stochastic deep networks, for networks that include a (hard or soft) decision rule, or by replacing the IB functional with related but better-behaved cost functions. It is important to mention that the same authors also note that, rather than trying to repair the inherent problems of the IB functional, a better approach may be to design regularizers on the latent representation that enforce the desired properties directly.
In our work, we extend these ideas using the variational approximation approach suggested in [14], which was also applied to unsupervised models in previous work [15,16]. More specifically, we extend the IB framework to semi-supervised classification and, as discussed above, we consider two different ways of regularizing the latent space of the classifier, i.e., either using traditional hand-crafted priors or the suggested learnable priors. Although we do not consider semi-supervised clustering and conditional generation in this work, the proposed findings can be extended to these problems in a way similar to prior works such as AAE [2], ADGM [17] and SeGMA [18].
The closest works: The proposed framework is closely related to several families of semi-supervised classifiers based on generative models. VAE (M1 + M2) [1] combines latent-feature discriminative model M1 and generative semi-supervised model M2. A new latent representation is learned using the generative model from M1 and subsequently a generative semi-supervised model M2 is trained using embeddings from the first latent representation instead of the raw data. Semi-supervised AAE classifier [2] is based on the AE architecture, where the encoder of AE outputs two latent representations: one representing class and another style. The latent class representation is regularized by an adversarial loss forcing it to follow categorical distribution. It is claimed that it plays an essential role for the overall classification performance. The latent style representation is regularized to follow Gaussian distribution. In both cases of VAE and AAE, the mean square error (MSE) metric is used for the reconstruction space loss. CatGAN [3] is an extension of GAN and is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model.
In contrast to the above approaches and following the IB framework, we formulate the semi-supervised classification problem as the training of a classifier that compresses the input x into some latent representation a via an encoding that is supposed to retain only class-relevant information, which is controlled by a decoder as shown in Figure 1. If the amount of labeled data is sufficiently large, the supervised classifier can achieve this goal. However, when the amount of labeled examples is small, such an encoder-decoder pair representing an IB-driven classifier is regularized by latent space and adversarial label space regularizers to fill the gap in training data. The adversarial label space regularization was already used in AAE and CatGAN. The latent space regularization in the scope of the IB framework was reported in [7]. In this paper, we demonstrate that both label and latent space regularizations are instances of the generalized IB formulation developed in Section 3. At the same time, in contrast to the hypothesis that the considered label space and latent space regularizations are the driving factors behind the success of semi-supervised classifiers, we demonstrate that the hand-crafted priors considered in these models cannot fully compensate for the lack of labeled data and lead to relatively poor performance in comparison to a fully supervised system based on a sole cross-entropy metric. For these reasons, we analyze another mechanism of latent space regularization based on learnable priors, as shown in Figure 2 and developed in Section 4. Along this line, we provide an IB formulation of AAE and explain the driving mechanisms behind its success as an instance of the IB with learnable priors. Finally, we present several extensions that explain the IB origin and role of adversarial regularization in the reconstruction space.
Summary: The considered methods of semi-supervised learning can be differentiated based on: (i) the targeted tasks (auto-encoding, clustering, generation or classification, depending on the available labeled data); (ii) the architecture in terms of the latent space representation (with a single representation vector or with multiple representation vectors); (iii) the usage of the IB or other underlying frameworks (methods derived from the IB directly or using regularization techniques); (iv) the label space regularization (based on available unlabeled data, augmented labeled data, synthetically generated labeled and unlabeled data, or especially designed adversarial examples); (v) the latent space regularization (hand-crafted regularizers and priors or learnable priors under the reconstruction and contrastive setups) and (vi) the reconstruction space regularization in the case of the reconstruction setup (based on unlabeled and labeled data, augmented data under certain perturbations, or synthetically generated examples).
In this work, our main focus is the latent space regularization for the hand-crafted and learnable priors under the reconstruction setup within the IB framework. Our main task is the semi-supervised classification. We will not consider any augmentation and adversarial techniques besides a simple stochastic encoding based on the addition of data independent noise at the system input or even deterministic encoding without any form of augmentation. The regularization of the label space and reconstruction space is solely based on the terms derived from the IB framework and only includes available labeled and unlabeled data without any form of augmentation. In this way, we want to investigate the role and impact of the latent space regularization as such in the IB-based semi-supervised classification. The usage of the above mentioned techniques of augmentation should be further investigated and will likely provide an additional performance improvement.

3. IB with Hand-Crafted Priors (HCP)

We assume that a semi-supervised classifier has access to $N$ labeled training samples $\{(x_m, c_m)\}_{m=1}^{N}$, where $x_m \in \mathbb{R}^D$ denotes the $m$th data sample and $c_m$ the corresponding encoded class label from the set $\{1, 2, \ldots, M_c\}$, generated from the joint distribution $p(c, x)$, and to unlabeled data samples $\{x_j\}_{j=1}^{J}$ with $J \gg N$. To integrate the knowledge about the labeled and unlabeled data at training, one can formulate the IB as:
$\mathcal{L}_{\mathrm{HCP}}(\phi_a) = I_{\phi_a}(X;A) - \beta_c I_{\phi_a}(A;C), \qquad (1)$
where $a$ denotes the latent representation, $\beta_c$ is a Lagrangian multiplier and the IB terms are defined as $I_{\phi_a}(X;A) = \mathbb{E}_{q_{\phi_a}(x,a)}\big[\log \tfrac{q_{\phi_a}(a|x)}{q_{\phi_a}(a)}\big]$ and $I_{\phi_a}(A;C) = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log \tfrac{q_{\phi_a}(c|a)}{p(c)}\big]$.
According to the above IB formulation the encoder q ϕ a ( a | x ) is trained to minimize the mutual information between X and A while ensuring that the decoder q ϕ a ( c | a ) can reliably decide on labels C from the compressed representation A . The trade-off between the compression and recognition terms is controlled by β c . Thus, it is assumed that the information retained in the latent representation A represents the sufficient statistics for the class labels C .
However, since the optimal $q_{\phi_a}(c|a)$ is unknown, the second term $I_{\phi_a}(A;C)$ is lower bounded by $I_{\phi_a,\theta_c}(A;C)$ using a variational approximation $p_{\theta_c}(c|a)$:
$I_{\phi_a}(A;C) = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{q_{\phi_a}(c|a)}{p(c)}\Big] = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{q_{\phi_a}(c|a)}{p(c)} \frac{p_{\theta_c}(c|a)}{p_{\theta_c}(c|a)}\Big] = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{p_{\theta_c}(c|a)}{p(c)}\Big] + \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{q_{\phi_a}(c|a)}{p_{\theta_c}(c|a)}\Big] = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{p_{\theta_c}(c|a)}{p(c)}\Big] + \mathbb{E}_{p(c,x)}\big[D_{\mathrm{KL}}\big(q_{\phi_a}(c|a)\,\|\,p_{\theta_c}(c|a)\big)\big] \ge \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{p_{\theta_c}(c|a)}{p(c)}\Big], \qquad (2)$
where $D_{\mathrm{KL}}\big(q_{\phi_a}(c|a)\,\|\,p_{\theta_c}(c|a)\big) = \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log \tfrac{q_{\phi_a}(c|a)}{p_{\theta_c}(c|a)}\big]$ and the inequality follows from the fact that $D_{\mathrm{KL}}\big(q_{\phi_a}(c|a)\,\|\,p_{\theta_c}(c|a)\big) \ge 0$. We denote the term $I_{\phi_a,\theta_c}(A;C) = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log \tfrac{p_{\theta_c}(c|a)}{p(c)}\big]$. Thus, $I_{\phi_a}(A;C) \ge I_{\phi_a,\theta_c}(A;C)$.
Thus, the IB (1) can be reformulated as:
$\mathcal{L}^{L}_{\mathrm{HCP}}(\phi_a, \theta_c) = I_{\phi_a}(X;A) - \beta_c I_{\phi_a,\theta_c}(A;C). \qquad (3)$
The considered IB is schematically shown in Figure 1 and we will proceed next with the detailed development of each component of the IB formulation.

3.1. Decomposition of the First Term: Hand-Crafted Regularization

The first mutual information term I ϕ a ( X ; A ) in (3) can be decomposed using a factorization by a parametric marginal distribution p θ a ( a ) that represents a prior on the latent representation a :
$I_{\phi_a}(X;A) = \mathbb{E}_{q_{\phi_a}(x,a)}\Big[\log \frac{q_{\phi_a}(x,a)}{q_{\phi_a}(a)\, p_D(x)}\Big] = \mathbb{E}_{q_{\phi_a}(x,a)}\Big[\log \frac{q_{\phi_a}(a|x)}{q_{\phi_a}(a)} \frac{p_{\theta_a}(a)}{p_{\theta_a}(a)}\Big] = \mathbb{E}_{p_D(x)}\big[\underbrace{D_{\mathrm{KL}}\big(q_{\phi_a}(a|X=x)\,\|\,p_{\theta_a}(a)\big)}_{\mathcal{D}_{a|x}}\big] - \underbrace{D_{\mathrm{KL}}\big(q_{\phi_a}(a)\,\|\,p_{\theta_a}(a)\big)}_{\mathcal{D}_a}, \qquad (4)$
where the first term denotes the KL-divergence $\mathcal{D}_{a|x} \triangleq D_{\mathrm{KL}}\big(q_{\phi_a}(a|X=x)\,\|\,p_{\theta_a}(a)\big) = \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log \tfrac{q_{\phi_a}(a|x)}{p_{\theta_a}(a)}\big]$ and the second term the KL-divergence $\mathcal{D}_a \triangleq D_{\mathrm{KL}}\big(q_{\phi_a}(a)\,\|\,p_{\theta_a}(a)\big) = \mathbb{E}_{q_{\phi_a}(a)}\big[\log \tfrac{q_{\phi_a}(a)}{p_{\theta_a}(a)}\big]$.
It should be pointed out that the encoding $q_{\phi_a}(a|x)$ can be either stochastic or deterministic. Stochastic encoding $q_{\phi_a}(a|x)$ can be implemented via: (a) multiplicative encoding applied to the input $x$ as $a = f_{\phi_a}(x \odot \epsilon)$ or in the latent space as $a = f_{\phi_a}(x) \odot \epsilon$, where $f_{\phi_a}(x)$ is the output of the encoder, $\odot$ denotes the element-wise product and $\epsilon$ follows some data-independent or data-dependent distribution as in information dropout [7]; (b) additive encoding applied to the input $x$ as $a = f_{\phi_a}(x + \epsilon)$ with data-independent perturbations, e.g., as in PixelGAN [19], or in the latent space with generally data-dependent perturbations of the form $a = f_{\phi_a}(x) + \sigma_{\phi_a}(x) \odot \epsilon$, where $f_{\phi_a}(x)$ and $\sigma_{\phi_a}(x)$ are outputs of the encoder and $\epsilon$ is assumed to be a zero-mean unit-variance vector, as in VAE [1]; or (c) concatenative/mixing encoding $a = f_{\phi_a}([x, \epsilon])$, which is generally applied at the input of the encoder. Deterministic encoding is based on the mapping $a = f_{\phi_a}(x)$, i.e., no randomization is introduced, as in one of the encoding modalities of AAE [2].
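The following minimal sketch (in PyTorch) illustrates the encoding modalities listed above; the encoder body, dimensions and noise scale are illustrative assumptions and not the authors' exact architecture.

```python
import torch
import torch.nn as nn

f_phi = nn.Sequential(nn.Flatten(), nn.Linear(784, 1024), nn.ReLU())  # hypothetical encoder body
sigma_phi = nn.Linear(1024, 1024)                                     # data-dependent log-scale head

def encode(x, mode="deterministic", noise_std=0.3):
    if mode == "additive_input":          # a = f(x + eps): data-independent noise at the input
        return f_phi(x + noise_std * torch.randn_like(x))
    if mode == "multiplicative_latent":   # a = f(x) (.) eps: information-dropout flavour
        h = f_phi(x)
        return h * (1.0 + noise_std * torch.randn_like(h))
    if mode == "additive_latent":         # a = f(x) + sigma(x) (.) eps: VAE-style reparameterization
        h = f_phi(x)
        return h + torch.exp(sigma_phi(h)) * torch.randn_like(h)
    return f_phi(x)                       # deterministic: a = f(x)
```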

3.2. Decomposition of the Second Term

In this section, we factorize the second term in (3) to address the semi-supervised training, i.e., to integrate the knowledge of both non-labeled and labeled data available at training:
$I_{\phi_a,\theta_c}(A;C) = \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\Big[\log \frac{p_{\theta_c}(c|a)}{p(c)} \frac{p_{\theta_c}(c)}{p_{\theta_c}(c)}\Big] = -\mathbb{E}_{p(c)}\big[\log p_{\theta_c}(c)\big] - \mathbb{E}_{p(c)}\Big[\log \frac{p(c)}{p_{\theta_c}(c)}\Big] + \mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log p_{\theta_c}(c|a)\big] = H\big(p(c); p_{\theta_c}(c)\big) - D_{\mathrm{KL}}\big(p(c)\,\|\,p_{\theta_c}(c)\big) - H_{\theta_c,\phi_a}(C|A), \qquad (5)$
with $H\big(p(c); p_{\theta_c}(c)\big) = -\mathbb{E}_{p(c)}\big[\log p_{\theta_c}(c)\big]$ denoting the cross-entropy between $p(c)$ and $p_{\theta_c}(c)$, and $\mathcal{D}_c \triangleq D_{\mathrm{KL}}\big(p(c)\,\|\,p_{\theta_c}(c)\big) = \mathbb{E}_{p(c)}\big[\log \tfrac{p(c)}{p_{\theta_c}(c)}\big]$ the KL-divergence between the prior class label distribution $p(c)$ and the estimated one $p_{\theta_c}(c)$. One can assume different forms of label encoding for $c$, but one of the most often used is one-hot encoding, which leads to the categorical distribution $p(c) = \mathrm{cat}(c)$.
Finally, the conditional entropy is defined as $\mathcal{D}_{c\hat{c}} \triangleq H_{\theta_c,\phi_a}(C|A) = -\mathbb{E}_{p(c,x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[\log p_{\theta_c}(c|a)\big]$.
Since $H\big(p(c); p_{\theta_c}(c)\big) \ge 0$, one can lower bound (5) as $I_{\phi_a,\theta_c}(A;C) \ge I^{L}_{\phi_a,\theta_c}(A;C)$, where:
$I^{L}_{\phi_a,\theta_c}(A;C) = -\underbrace{D_{\mathrm{KL}}\big(p(c)\,\|\,p_{\theta_c}(c)\big)}_{\mathcal{D}_c} - \underbrace{H_{\theta_c,\phi_a}(C|A)}_{\mathcal{D}_{c\hat{c}}}. \qquad (6)$
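As a concrete illustration of the label-space term $\mathcal{D}_c$, the sketch below computes a closed-form KL between a categorical prior $p(c)$ (assumed uniform here, an assumption suitable only for balanced data) and $p_{\theta_c}(c)$ approximated by the batch-averaged class posterior on unlabeled samples; in the paper the KL terms are estimated with discriminators instead, so this is only an illustrative alternative.

```python
import torch

def label_prior_kl(logits_unlab, num_classes=10, eps=1e-8):
    # D_KL( p(c) || p_theta_c(c) ): p(c) is a uniform categorical prior and
    # p_theta_c(c) is approximated by averaging the predicted class posteriors
    # over a batch of unlabeled samples.
    p_prior = torch.full((num_classes,), 1.0 / num_classes)
    p_model = logits_unlab.softmax(dim=1).mean(dim=0)
    return (p_prior * (p_prior / (p_model + eps)).log()).sum()
```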

3.3. Supervised and Semi-Supervised Models with/without Hand-Crafted Priors

Summarizing the above variational decomposition of (3) with the terms (4) and (6), we will proceed with four practical scenarios.
Supervised training without latent space regularization (baseline): is based on the term $\mathcal{D}_{c\hat{c}}$ in (6):
$\mathcal{L}^{\mathrm{HCP}}_{\mathrm{S-NoReg}}(\theta_c, \phi_a) = \mathcal{D}_{c\hat{c}}. \qquad (7)$
Semi-supervised training without latent space regularization: is based on the terms $\mathcal{D}_{c\hat{c}}$ and $\mathcal{D}_c$ in (6):
$\mathcal{L}^{\mathrm{HCP}}_{\mathrm{SS-NoReg}}(\theta_c, \phi_a) = \mathcal{D}_{c\hat{c}} + \mathcal{D}_c. \qquad (8)$
Supervised training with latent space regularization: is based on the term $\mathcal{D}_{c\hat{c}}$ in (6) and either $\mathcal{D}_{a|x}$ or $\mathcal{D}_a$, or jointly $\mathcal{D}_{a|x}$ and $\mathcal{D}_a$, in (4):
$\mathcal{L}^{\mathrm{HCP}}_{\mathrm{S-Reg}}(\theta_c, \phi_a) = \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{a|x}\big] + \mathcal{D}_a + \beta_c \mathcal{D}_{c\hat{c}}. \qquad (9)$
Semi-supervised training with latent space regularization: deploys all terms in (4) and (6):
$\mathcal{L}^{\mathrm{HCP}}_{\mathrm{SS-Reg}}(\theta_c, \phi_a) = \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{a|x}\big] + \mathcal{D}_a + \beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c. \qquad (10)$
The empirical evaluation of these setups on the MNIST dataset is given in Section 5. The same architecture of encoder and decoder was used to establish the impact of each term as a function of the amount of available labeled data.
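The sketch below shows, under illustrative names, how the four objectives (7)-(10) can be composed; the KL terms $\mathcal{D}_c$, $\mathcal{D}_a$ and $\mathcal{D}_{a|x}$ are assumed to be supplied by separate estimators (e.g., the density-ratio discriminators of Section 5), and only the cross-entropy term is computed here.

```python
import torch.nn.functional as F

def hcp_objective(logits_lab, y_lab, D_c=0.0, D_a=0.0, D_a_x=0.0,
                  beta_c=1.0, latent_reg=False, label_reg=False):
    D_c_chat = F.cross_entropy(logits_lab, y_lab)      # supervised term, labeled batch only
    if not latent_reg:
        return D_c_chat + (D_c if label_reg else 0.0)  # (7) baseline or (8) with label-space term
    loss = D_a_x + D_a + beta_c * D_c_chat             # (9) latent-space regularization
    if label_reg:
        loss = loss + beta_c * D_c                     # (10) full semi-supervised objective
    return loss
```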

4. IB with Learnable Priors (LP)

In this section, we extend the results obtained for the hand-crafted priors to the learnable priors. Instead of applying the hand-crafted regularization of the latent representation $a$ as suggested by the IB (3) and shown in Figure 1, we assume that the latent representation $a$ is regularized by a specially designed AE, as shown in Figure 2. The AE-based regularization has two components: (i) the latent space $z$ regularization and (ii) the observation space regularization. The design and training of this latent space regularizer in the form of an AE is guided by its own IB. In the general case, all elements of the AE, i.e., its encoder-decoder pair and its latent and observation space regularizers, are conditioned on the learned class label $c$. The resulting Lagrangian with the learnable prior is (formally, one should consider $I_{\phi_a,\phi_z,\theta_c}(X;Z|C)$ for the term A; however, since $I_{\phi_a,\phi_z,\theta_c}(X;Z|C) \ge I_{\phi_a,\phi_z,\theta_c}(A;Z|C)$ due to the Markovianity of the considered architecture, we consider the decomposition starting from $A$ ([20], Data Processing Inequality, Theorem 2.8.1)):
$\mathcal{L}_{\mathrm{LP}}(\phi_a, \phi_z, \theta_c, \theta_x) = \underbrace{I_{\phi_a,\phi_z,\theta_c}(A;Z|C)}_{\mathrm{A}} - \beta_x \underbrace{I_{\phi_a,\phi_z,\theta_c,\theta_x}(X;Z|C)}_{\mathrm{B}} - \beta_c \underbrace{I^{L}_{\phi_a,\theta_c}(A;C)}_{\mathrm{C}}, \qquad (11)$
where β x is a Lagrangian multiplier controlling the reconstruction of x at the decoder and β c is the same as in (1).
The terms A and B, conditioned on the class $c$, play the role of the latent space regularizer by imposing learnable constraints on the vector $a$. These two terms correspond to the hand-crafted counterpart $I_{\phi_a}(X;A)$ in (3). The term C in the learnable IB formulation corresponds to the classification part of the hand-crafted IB in (3) and can be factorized along the same lines as in (6). Therefore, we will proceed with the factorization of terms A and B.
One can also consider the following IB formulation with the learnable priors with no conditioning on c in term A in (11) leading to an unconditional counterpart D below that can be viewed as an IB generalization of semi-supervised AAE [2]:
$\mathcal{L}^{\mathrm{LP}}_{\mathrm{AAE}}(\phi_a, \phi_z, \theta_c, \theta_x) = \underbrace{I_{\phi_a,\phi_z}(A;Z)}_{\mathrm{D}} - \beta_x \underbrace{I_{\phi_a,\phi_z,\theta_c,\theta_x}(X;Z|C)}_{\mathrm{B}} - \beta_c \underbrace{I^{L}_{\phi_a,\theta_c}(A;C)}_{\mathrm{C}}. \qquad (12)$

4.1. Decomposition of Latent Space Regularizer

We will denote $p_{\phi_a,\phi_z,\theta_c}(x,a,c,z) = p_D(x)\, q_{\phi_a}(a|x)\, p_{\theta_c}(c|a)\, q_{\phi_z}(z|a,c)$ and decompose the term A in (11) using a variational factorization as:
$I_{\phi_a,\phi_z,\theta_c}(A;Z|C) = \mathbb{E}_{p_{\phi_a,\phi_z,\theta_c}(x,a,c,z)}\Big[\log \frac{q_{\phi_z}(z|a,c)}{q_{\phi_z}(z|c)} \frac{p_{\theta_z}(z)}{p_{\theta_z}(z)}\Big] = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)} \mathbb{E}_{p_{\theta_c}(c|a)}\big[\underbrace{D_{\mathrm{KL}}\big(q_{\phi_z}(z|A=a, C=c)\,\|\,p_{\theta_z}(z)\big)}_{\mathcal{D}_{z|a,c}}\big] - \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)} \mathbb{E}_{p_{\theta_c}(c|a)}\big[\underbrace{D_{\mathrm{KL}}\big(q_{\phi_z}(z|C=c)\,\|\,p_{\theta_z}(z)\big)}_{\mathcal{D}_{z|c}}\big], \qquad (13)$
where $\mathcal{D}_{z|a,c} \triangleq D_{\mathrm{KL}}\big(q_{\phi_z}(z|a,c)\,\|\,p_{\theta_z}(z)\big) = \mathbb{E}_{q_{\phi_z}(z|a,c)}\big[\log \tfrac{q_{\phi_z}(z|a,c)}{p_{\theta_z}(z)}\big]$ and $\mathcal{D}_{z|c} \triangleq D_{\mathrm{KL}}\big(q_{\phi_z}(z|c)\,\|\,p_{\theta_z}(z)\big) = \mathbb{E}_{q_{\phi_z}(z|c)}\big[\log \tfrac{q_{\phi_z}(z|c)}{p_{\theta_z}(z)}\big]$ denote the KL-divergence terms and $q_{\phi_z}(z|c) = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[q_{\phi_z}(z|a,c)\big]$.

4.2. Decomposition of Reconstruction Space Regularizer

Denoting $p_{\phi_a,\phi_z,\theta_c,\theta_x}(x,a,c,z) = p_D(x)\, q_{\phi_a}(a|x)\, p_{\theta_c}(c|a)\, q_{\phi_z}(z|a,c)\, p_{\theta_x}(x|z,c)$, we decompose the term B in (11) as:
$I_{\phi_a,\phi_z,\theta_c,\theta_x}(X;Z|C) = \mathbb{E}_{p_{\phi_a,\phi_z,\theta_c,\theta_x}(x,a,c,z)}\Big[\log \frac{p_{\theta_x}(x|z,c)}{p_D(x|c)} \frac{p_{\theta_x}(x)}{p_{\theta_x}(x)}\Big] = \mathbb{E}_{p_{\theta_c}(c)}\big[H\big(p_D(x|c); p_{\theta_x}(x)\big)\big] - \mathbb{E}_{p_{\theta_c}(c)}\big[\underbrace{D_{\mathrm{KL}}\big(p_D(x|C=c)\,\|\,p_{\theta_x}(x)\big)}_{\mathcal{D}_{x|c}}\big] - \underbrace{H_{\phi_a,\phi_z,\theta_c,\theta_x}(X|Z,C)}_{\mathcal{D}_{x\hat{x}}}, \qquad (14)$
where $p_{\theta_c}(c) = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)}\big[p_{\theta_c}(c|a)\big]$. The terms are defined as $H\big(p_D(x|c); p_{\theta_x}(x)\big) = -\mathbb{E}_{p_D(x|c)}\big[\log p_{\theta_x}(x)\big]$, $\mathcal{D}_{x|c} \triangleq D_{\mathrm{KL}}\big(p_D(x|C=c)\,\|\,p_{\theta_x}(x)\big) = \mathbb{E}_{p_D(x|c)}\big[\log \tfrac{p_D(x|c)}{p_{\theta_x}(x)}\big]$ and $\mathcal{D}_{x\hat{x}} \triangleq H_{\phi_a,\phi_z,\theta_c,\theta_x}(X|Z,C) = -\mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)} \mathbb{E}_{p_{\theta_c}(c|a)} \mathbb{E}_{q_{\phi_z}(z|a,c)}\big[\log p_{\theta_x}(x|z,c)\big]$. Since $\mathbb{E}_{p_{\theta_c}(c)}\big[H\big(p_D(x|c); p_{\theta_x}(x)\big)\big] \ge 0$, we can lower bound $I_{\phi_a,\phi_z,\theta_c,\theta_x}(X;Z|C) \ge I^{L}_{\phi_a,\phi_z,\theta_c,\theta_x}(X;Z|C) \triangleq -\mathcal{D}_{x|c} - \mathcal{D}_{x\hat{x}}$.

4.3. Semi-Supervised Models with Learnable Priors

Summarizing the above variational decomposition of (11) with the terms (13) and (14), we will consider semi-supervised training with latent space regularization as:
$\mathcal{L}^{\mathrm{LP}}_{\mathrm{SS-Reg}}(\theta_c, \theta_x, \phi_a, \phi_z) = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)} \mathbb{E}_{p_{\theta_c}(c|a)}\big[\mathcal{D}_{z|a,c}\big] + \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_a}(a|x)} \mathbb{E}_{p_{\theta_c}(c|a)}\big[\mathcal{D}_{z|c}\big] + \beta_x \mathcal{D}_{x\hat{x}} + \beta_x \mathbb{E}_{p_{\theta_c}(c)}\big[\mathcal{D}_{x|c}\big] + \beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c. \qquad (15)$
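A minimal sketch of how (15) can be assembled in code is given below; the encoder, classifier and AE modules are placeholders, the KL terms are assumed to come from separate discriminators (not shown), and the MSE is used as a surrogate of the reconstruction term, so this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lp_objective(x, y_lab, enc_a, cls_c, enc_z, dec_x,
                 D_z_ac, D_z_c, D_x_c, D_c, beta_x=1.0, beta_c=1.0):
    a = enc_a(x)                              # classifier latent a ~ q_phi_a(a|x)
    logits = cls_c(a)                         # class decoder p_theta_c(c|a)
    c_soft = logits.softmax(dim=1)
    z = enc_z(torch.cat([a, c_soft], dim=1))  # AE encoder q_phi_z(z|a,c)
    x_hat = dec_x(torch.cat([z, c_soft], dim=1))
    D_x_xhat = F.mse_loss(x_hat, x)           # MSE surrogate of the reconstruction term (shapes assumed to match)
    D_c_chat = F.cross_entropy(logits, y_lab) if y_lab is not None else 0.0
    return (D_z_ac + D_z_c
            + beta_x * (D_x_xhat + D_x_c)
            + beta_c * (D_c_chat + D_c))
```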
To create a link to the semi-supervised AAE [2], we also consider (12), where all latent and reconstruction space regularizers are independent of c , i.e., do not contain conditioning on c .
Semi-supervised training with latent space regularization and MSE reconstruction based on (12):
$\mathcal{L}^{\mathrm{LP}}_{\mathrm{SS-AAE}}(\theta_c, \theta_x, \phi_a, \phi_z) = \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}} + \beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c, \qquad (16)$
where $\mathcal{D}_z \triangleq D_{\mathrm{KL}}\big(q_{\phi_z}(z)\,\|\,p_{\theta_z}(z)\big) = \mathbb{E}_{q_{\phi_z}(z)}\big[\log \tfrac{q_{\phi_z}(z)}{p_{\theta_z}(z)}\big]$.
Semi-supervised training with latent space regularization and with MSE and adversarial reconstruction based on (12) deploys all terms:
$\mathcal{L}^{\mathrm{LP}}_{\mathrm{SS-AAE-complete}}(\theta_c, \theta_x, \phi_a, \phi_z) = \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}} + \beta_x \mathcal{D}_x + \beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c, \qquad (17)$
where $\mathcal{D}_x \triangleq D_{\mathrm{KL}}\big(p_D(x)\,\|\,p_{\theta_x}(x)\big) = \mathbb{E}_{p_D(x)}\big[\log \tfrac{p_D(x)}{p_{\theta_x}(x)}\big]$.

4.4. Links to State-Of-The-Art Models

The considered HCP and LP models can be linked with several state-of-the-art unsupervised models such as VAE [21,22], β-VAE [23], AAE [2] and BIB-AE [15] and semi-supervised models such as AAE [2], CatGAN [3], VAE (M1 + M2) [1] and SeGMA [18].

4.4.1. Links to Unsupervised Models

The proposed LP model (11) generalizes unsupervised models without the categorical latent representation. In addition, the unsupervised models in a form of the auto-encoder are used as a latent space regularizer in the LP setup. For these reasons, we will briefly consider four models of interest, namely VAE, β -VAE, AAE, and BIB-AE.
Before we proceed with the analysis, we will define an unsupervised IB for these models. We will assume the fused encoders $q_{\phi_a}(a|x)$ and $q_{\phi_z}(z|a)$ without conditioning on $c$ in the inference model according to Figure 2. We also assume no conditioning on $c$ in the generative model.
The Lagrangian of unsupervised IB is defined according to [15]:
$\mathcal{L}^{L}_{\mathrm{U}}(\theta_x, \phi_z) = I_{\phi_z}(X;Z) - \beta_x I_{\phi_z,\theta_x}(Z;X), \qquad (18)$
where similarly to the supervised counterpart (4), we define the first term as:
$I_{\phi_z}(X;Z) = \mathbb{E}_{q_{\phi_z}(x,z)}\Big[\log \frac{q_{\phi_z}(x,z)}{q_{\phi_z}(z)\, p_D(x)}\Big] = \mathbb{E}_{q_{\phi_z}(x,z)}\Big[\log \frac{q_{\phi_z}(z|x)}{q_{\phi_z}(z)} \frac{p_{\theta_z}(z)}{p_{\theta_z}(z)}\Big] = \mathbb{E}_{p_D(x)}\big[\underbrace{D_{\mathrm{KL}}\big(q_{\phi_z}(z|X=x)\,\|\,p_{\theta_z}(z)\big)}_{\mathcal{D}_{z|x}}\big] - \underbrace{D_{\mathrm{KL}}\big(q_{\phi_z}(z)\,\|\,p_{\theta_z}(z)\big)}_{\mathcal{D}_z}, \qquad (19)$
and similarly to (14) the second term is defined as:
$I_{\phi_z,\theta_x}(Z;X) = \mathbb{E}_{p_D(x)} \mathbb{E}_{q_{\phi_z}(z|x)}\Big[\log \frac{p_{\theta_x}(x|z)}{p_D(x)} \frac{p_{\theta_x}(x)}{p_{\theta_x}(x)}\Big] = H\big(p_D(x); p_{\theta_x}(x)\big) - \underbrace{D_{\mathrm{KL}}\big(p_D(x)\,\|\,p_{\theta_x}(x)\big)}_{\mathcal{D}_x} - \underbrace{H_{\phi_z,\theta_x}(X|Z)}_{\mathcal{D}_{x\hat{x}}}, \qquad (20)$
where the definition of all terms follows from the above equations. Since $H\big(p_D(x); p_{\theta_x}(x)\big) \ge 0$, we can lower bound $I_{\phi_z,\theta_x}(Z;X) \ge -\mathcal{D}_x - \mathcal{D}_{x\hat{x}}$.
Having defined the unsupervised IB variational bounded decomposition, we can proceed with an analysis of the related state-of-the-art methods along the lines of analysis introduced in Summary part of Section 2.
VAE [21,22] and β -VAE [23]:
  • The targeted tasks: auto-encoding and generation.
  • The architecture in terms of the latent space representation: the encoder outputs two vectors representing the mean and standard deviation vectors that control a new latent representation z = f ϕ z ( x ) + σ ϕ z ( x ) ϵ , where f ϕ z ( x ) and σ ϕ z ( x ) are outputs of the encoder and ϵ is assumed to be a zero mean unit variance Gaussian vector.
  • The usage of IB or other underlying frameworks: both VAE and β-VAE use the evidence lower bound (ELBO) and are not derived from the IB framework. However, it can be shown [15] that the Lagrangian (18) can be reformulated for VAE and β-VAE as:
    $\mathcal{L}_{\beta\text{-VAE}}(\theta_x, \phi_z) = \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{z|x}\big] + \beta_x \mathcal{D}_{x\hat{x}}, \qquad (21)$
    where $\beta_x = 1$ for VAE. It can be noted that VAE and β-VAE are based on an upper bound on the mutual information term $I_{\phi_z}(X;Z) \le \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{z|x}\big]$, since $D_{\mathrm{KL}}\big(q_{\phi_z}(z)\,\|\,p_{\theta_z}(z)\big) \ge 0$. Similar considerations apply to the second term, since $D_{\mathrm{KL}}\big(p_D(x)\,\|\,p_{\theta_x}(x)\big) \ge 0$. A minimal sketch of this objective with a closed-form Gaussian KL term is given after this list.
  • The label space regularization: does not apply here due to the unsupervised setting.
  • The latent space regularization: is based on the hand-crafted prior with Gaussian pdf.
  • The reconstruction space regularization in case of reconstruction loss: is based on the mean square error (MSE) counterpart of $\mathcal{D}_{x\hat{x}}$, which corresponds to the Gaussian likelihood assumption.
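The sketch announced above is given here: a β-VAE objective with the closed-form Gaussian KL for $\mathcal{D}_{z|x}$ and an MSE surrogate for $\mathcal{D}_{x\hat{x}}$; tensor shapes and the MSE choice are assumptions for illustration.

```python
import torch

def gaussian_kl_standard_normal(mu, logvar):
    # Closed-form D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch.
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

def beta_vae_objective(x, x_hat, mu, logvar, beta_x=1.0):
    # E_pD(x)[D_z|x] + beta_x * D_x_xhat, with beta_x = 1 recovering the vanilla VAE case.
    d_z_x = gaussian_kl_standard_normal(mu, logvar)
    d_x_xhat = ((x_hat - x) ** 2).flatten(1).sum(dim=1).mean()
    return d_z_x + beta_x * d_x_xhat
```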
Unsupervised AAE [2]:
  • The targeted tasks: auto-encoding and generation.
  • The architecture in terms of the latent space representation: the encoder outputs one vector in stochastic or deterministic way as z = f ϕ z ( x ) .
  • The usage of IB or other underlying frameworks: AAE is not derived from the IB framework. As shown in [15], the AAE equivalent Lagrangian (18) can be linked with the IB formulation and defined as:
    $\mathcal{L}_{\mathrm{AAE}}(\theta_x, \phi_z) = \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}}, \qquad (22)$
    where β x = 1 in the original AAE formulation. It should be pointed out that the IB formulation of AAE contains the term D x x ^ , whose origin can be explained in the same way as for the VAE. Despite the fact that the term D z indeed appears in (22) with the opposite sign, it cannot be interpreted either as an upper bound on I ϕ z ( X ; Z ) similarly to the VAE or as a lower bound. The goal of AAE is to minimize the reconstruction loss or to maximize the log-likelihood by ensuring that the latent space marginal distribution q ϕ z ( z ) matches the prior p θ z ( z ) . The latter corresponds to the minimization of D KL q ϕ z ( z ) p θ z ( z ) , i.e., D z term.
  • The label space regularization: does not apply here due to the unsupervised setting.
  • The latent space regularization: is based on the hand-crafted prior with zero mean unit variance Gaussian pdf for each dimension.
  • The reconstruction space regularization in case of reconstruction loss: is based on the MSE.
BIB-AE [15]:
  • The targeted tasks: auto-encoding and generation.
  • The architecture in terms of the latent space representation: the encoder outputs one vector using any form of stochastic or deterministic encoding.
  • The usage of IB or other underlying frameworks: the BIB-AE is derived from the unsupervised IB (18) and its Lagrangian is defined as:
    $\mathcal{L}_{\mathrm{BIB\text{-}AE}}(\theta_x, \phi_z) = \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{z|x}\big] - \mathcal{D}_z + \beta_x \mathcal{D}_x + \beta_x \mathcal{D}_{x\hat{x}}. \qquad (23)$
  • The label space regularization: does not apply here due to the unsupervised setting.
  • The latent space regularization: is based on the hand-crafted prior with a Gaussian pdf applied to both the conditional and unconditional terms. In fact, the prior for $\mathcal{D}_z$ can be arbitrary, but $\mathcal{D}_{z|x}$ requires an analytical parametrisation.
  • The reconstruction space regularization in case of reconstruction loss: is based on the MSE counterpart of $\mathcal{D}_{x\hat{x}}$ and the discriminator $\mathcal{D}_x$. This is a distinctive feature in comparison to VAE and AAE.
In summary, BIB-AE includes VAE and AAE as two particular cases. In turn, it should be clear that the regularizer of the semi-supervised model considered in this paper resembles the BIB-AE model and extends it to the conditional case considered below.

4.4.2. Links to Semi-Supervised Models

The proposed LP model (11) is also related to several state-of-the-art semi-supervised models used for classification. As pointed out in the introduction, we only consider available labeled and unlabeled samples in our analysis. The extension to augmented samples, i.e., perturbations, synthetically generated samples, i.e., fakes, and adversarial examples for both the latent space and label space regularizations can be performed along the same lines of analysis, but it goes beyond the scope and focus of this paper.
Semi-supervised AAE [2]:
  • The targeted tasks: auto-encoding, clustering, (conditional) generation and classification.
  • The architecture in terms of the latent space representation: the encoder outputs two vectors representing the discrete class and continuous type of style. The class distribution is assumed to follow categorical distribution and style Gaussian one. Both constraints on the prior distributions are ensured using adversarial framework with two corresponding discriminators. In its original setting, AAE does not use any augmented samples or adversarial examples.
    Remark: It should be pointed out that in our architecture we consider the latent space to be represented by the vector a , which is fed to the classifier and regularizer that gives a natural consideration of IB and corresponding regularization and priors. In the case of semi-supervised AAE, the latent space is considered by the class and style representations directly. Therefore, to make it coherent with our case, one should assume that the class vector of semi-supervised AAE corresponds to the vector c and the style vector to the vector z .
  • The usage of IB or other underlying frameworks: AAE is not derived from the IB framework. However, as shown in our analysis the semi-supervised AAE represents the learnable prior case in part of latent space regularization. The corresponding Lagrangian of semi-supervised AAE is given by (16) and considered in Section 4.3.
  • The label space regularization: is based on the adversarial discriminator in assumption that the class labels follow categorical distribution. This is applied to both labeled and unlabeled samples.
  • The latent space regularization: is based on the learnable prior with Gaussian pdf of AE.
  • The reconstruction space regularization in case of reconstruction loss: is only based on the MSE.
CatGAN [3]: is based on an extension of the classical binary GAN discriminator, designed to distinguish between original images and fake images generated from the latent space distribution, to a multi-class discriminator. The author assumes one-hot-vector encoding of the class labels. The system is considered for the unsupervised and semi-supervised modes, and in both modes the one-hot-vector encoding is used to encode class labels. In the unsupervised mode, the system has access only to unlabeled data and the output of the classifier is considered to be a clustering into a predefined number of clusters/classes. The main idea behind the unsupervised training is to train the discriminator such that any sample from the set of original images is assigned to one of the classes with high confidence, whereas any fake or adversarial sample is assigned to all classes almost equiprobably. The regularization in the label space is thus based on the considered and extended framework of entropy-minimization-based regularization. In the absence of fakes, this regularization coincides with the semi-supervised AAE label space regularization under the categorical distribution and adversarial discriminator, which is equivalent to enforcing the minimum entropy of the label space. The encoding of fake samples, in contrast, is equivalent to a sort of rejection option expressed via the activation of classes with maximum entropy, i.e., a uniform distribution over the classes. Equivalently, the above types of encoding can be considered to be the maximization of mutual information between the original data and the encoded class labels and the minimization of mutual information between the fakes/adversarial samples and the class labels (a minimal sketch of these entropy-based terms is given after the summary list below). The semi-supervised CatGAN model adds a cross-entropy term computed for the true labeled samples.
Therefore, in summary:
  • The targeted tasks: auto-encoding, clustering, generation and classification.
  • The architecture in terms of the latent space representation: there is no encoder as such and instead the system has a generator/decoder that generates samples from a random latent space a following some hand-crafted prior. The second element of architecture is a classifier with the min/max entropy optimization for the original and fake samples. The encoding of classes is assumed to be a one-hot-vector encoding.
  • The usage of IB or other underlying frameworks: CatGAN is not derived from the IB framework. However, as shown in [15], one can apply the IB formulation to adversarial generative models such as CatGAN by assuming that the term $I_{\phi_a}(X;A) = 0$ in (3) due to the absence of an encoder as such. The minimization problem (3) then reduces to the maximization of the second term $I_{\phi_a,\theta_c}(A;C)$, expressed via its variational lower bound (6). The first term $\mathcal{D}_c$ enforces that the class labels of unlabeled samples follow the defined prior distribution $p(c)$, with the above property of entropy minimization under one-hot-vector encoding, whereas the second term $\mathcal{D}_{c\hat{c}}$ reflects the supervised part for labeled samples. In the original CatGAN formulation, the author does not use this expression of the mutual information for the decoder/generator training, but instead uses the decomposition of mutual information via the difference of the corresponding entropies (see the first two terms in (9) in [3]). As pointed out above, we do not include in our analysis the term corresponding to the fake samples as in the original CatGAN. However, we do believe that this form of regularization plays an important role in semi-supervised classification; its impact requires additional study.
  • The label space regularization: is based on the above assumptions for labeled samples, which are included into the cross-entropy term, unlabeled samples included into the entropy minimization term and fake samples included into the entropy maximization term in the original CatGAN method.
  • The latent space regularization: is based on the hand-crafted prior.
  • The reconstruction space regularization in case of reconstruction loss: is based on the adversarial discriminator only.
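The sketch below, referenced in the CatGAN discussion above, illustrates the entropy-based label-space terms for a multi-class discriminator: low conditional entropy on real unlabeled samples, high conditional entropy on fakes and high marginal entropy over classes. It summarizes the idea only and is not the original CatGAN objective; in this paper fake samples are not used.

```python
import torch

def categorical_entropy(p, eps=1e-8):
    return -(p * (p + eps).log()).sum(dim=1).mean()

def catgan_discriminator_terms(logits_real_unlab, logits_fake):
    p_real = logits_real_unlab.softmax(dim=1)
    p_fake = logits_fake.softmax(dim=1)
    h_real = categorical_entropy(p_real)          # minimized: confident predictions on real data
    h_fake = categorical_entropy(p_fake)          # maximized: near-uniform predictions on fakes
    p_marginal = p_real.mean(dim=0, keepdim=True)
    h_marginal = categorical_entropy(p_marginal)  # maximized: balanced use of all classes
    return h_real - h_fake - h_marginal           # discriminator loss direction (to be minimized)
```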
SeGMA [18]: is a semi-supervised clustering and generative system with a single latent vector representation auto-encoder similar in spirit to the unsupervised version of AAE that can be also used for the classification. The latent space of SeGMA is assumed to follow a mixture of Gaussians. Using a small labeled data set, classes are assigned to components of this mixture of Gaussians by minimizing the cross-entropy loss induced by the class posterior distribution of a simple Gaussian classifier. The resulting mixture describes the distribution of the whole data, and representatives of individual classes are generated by sampling from its components. In the classification setup, SeGMA uses the latent space clustering scheme for the classification.
Therefore, in summary:
  • The targeted tasks: auto-encoding, clustering, generation and classification.
  • The architecture in terms of the latent space representation: a single vector representation following mixture of Gaussians distribution.
  • The usage of IB or other underlying frameworks: SeGMA is not derived from the IB framework, but a link to the regularized ELBO and other related auto-encoders with an interpretable latent space is demonstrated. However, as for the previous methods, it can be linked to the considered IB interpretation of the semi-supervised methods with hand-crafted priors (16). An equivalent Lagrangian of SeGMA is:
    $\mathcal{L}_{\mathrm{SeGMA}}(\theta_c, \theta_x, \phi_z) = \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}} + \beta_c \mathcal{D}_{c\hat{c}}, \qquad (24)$
    where the latent space discriminator $\mathcal{D}_z$ is assumed to be the maximum mean discrepancy (MMD) penalty, which is analytically defined for the mixture-of-Gaussians pdf (a minimal sketch of such an MMD penalty is given after this list), $\mathcal{D}_{x\hat{x}}$ is represented by the MSE and $\mathcal{D}_{c\hat{c}}$ represents the cross-entropy for the labeled data defined over class labels deduced from the latent space representation.
  • The label space regularization: is based on the above assumptions for labeled samples, which are included into the cross-entropy term as discussed above.
  • The latent space regularization: is based on the hand-crafted mixture of Gaussians pdf.
  • The reconstruction space regularization in case of reconstruction loss: is based on the MSE.
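The MMD sketch referenced above: a biased V-statistic estimate of the squared MMD with a Gaussian (RBF) kernel between encoded latent codes and samples drawn from the prior; the kernel and bandwidth are illustrative choices and not necessarily those used in SeGMA.

```python
import torch

def rbf_kernel(a, b, bandwidth=1.0):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * bandwidth ** 2))

def mmd_penalty(z_encoded, z_prior, bandwidth=1.0):
    # Matches the aggregated latent codes to prior samples (e.g. drawn from a
    # mixture of Gaussians) without an adversarial discriminator.
    k_zz = rbf_kernel(z_encoded, z_encoded, bandwidth).mean()
    k_pp = rbf_kernel(z_prior, z_prior, bandwidth).mean()
    k_zp = rbf_kernel(z_encoded, z_prior, bandwidth).mean()
    return k_zz + k_pp - 2.0 * k_zp
```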
VAE (M1 + M2) [1]: is based on the combination of several models. The model M1 represents a vanilla VAE as considered in Section 4.4.1; therefore, model M1 is a particular case of the considered unsupervised IB. The model M2 is a combination of an encoder producing a continuous latent representation following a Gaussian distribution and a classifier that takes the original data as input, in parallel to the model M1. The class labels are encoded using one-hot-vector representations and follow a categorical distribution with a hyper-parameter following the symmetric Dirichlet distribution. The decoder of model M2 takes as input the continuous latent representation and the output of the classifier, and is trained under the MSE distortion metric. It is important to point out that the classifier works with the input data directly and not with a common latent space as in the considered LP model. For this reason, there is an obvious analogy with the considered LP model (11) under the assumption that a = x, and all of the presented IB analysis directly applies. However, as pointed out by the authors, the performance of model M2 in semi-supervised classification with a limited number of labeled samples is relatively poor. That is why the third, hybrid model M1 + M2 is considered, where the models M1 and M2 are used in a stacked way. At the first stage, the model M1 is learned as a usual VAE. Then the latent space of model M1 is used as an input to the model M2, which is trained in a semi-supervised way. Such a two-stage approach closely resembles the learnable prior architecture presented in Figure 2. However, our model is trained end-to-end with an explainable common latent space and an IB origin, while the model M1 + M2 is trained in two stages with the use of a regularized ELBO for the derivation of model M2.
  • The targeted tasks: auto-encoding, clustering, (conditional) generation and classification.
  • The architecture in terms of the latent space representation: the stacked combination of models M1 and M2 is used as discussed above.
  • The usage of IB or other underlying frameworks: VAE M1 + M2 is not derived from the IB framework but it is linked to the regularized ELBO with the cross-entropy for the labeled samples. The corresponding IB Lagrangian of semi-supervised VAE M1 + M2 under the assumption of end-to-end training can be defined as:
    $\mathcal{L}^{\mathrm{LP}}_{\mathrm{SS-VAE\,M1+M2}}(\theta_c, \theta_x, \phi_a, \phi_z) = \mathbb{E}_{p_D(x)}\big[\mathcal{D}_{z|x}\big] + \beta_x \mathcal{D}_{x\hat{x}} + \beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c. \qquad (25)$
  • The label space regularization: is based on the assumption of categorical distribution of labels.
  • The reconstruction space regularization in case of reconstruction loss: is only based on the MSE.

5. Experimental Results

5.1. Experimental Setup

The tested system is based on (i) a deterministic encoder and decoder, and (ii) a stochastic encoder of the type $a = f_{\phi_a}(x + \epsilon)$ with data-independent perturbations $\epsilon$ and a deterministic decoder. The density ratio estimator [24] is used to measure all KL-divergences. The results of semi-supervised classification on the MNIST dataset are reported in Table 1, where the symbol D indicates the deterministic setup (i) and the symbol S corresponds to the stochastic one (ii). To choose the optimal parameters of the systems, e.g., the Lagrangian multipliers in the considered models, we used 3-run cross-validation with randomly chosen labeled examples, as shown in Appendix B, Appendix C, Appendix D, Appendix E, Appendix F and Appendix G. Once the model parameters were chosen, we ran a 10-run cross-validation and report the average results in Table 1.
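As an illustration of the density-ratio estimation of the KL terms mentioned above, the sketch below trains a binary discriminator to separate samples of $q$ from samples of the prior $p$; at the optimum its logit approximates $\log q/p$, so the mean logit over $q$-samples estimates $D_{\mathrm{KL}}(q\|p)$. The architecture, latent width and optimizer settings are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

def discriminator_step(z_q, z_p):
    # Label q-samples as 1 and prior samples as 0.
    lq, lp = disc(z_q.detach()), disc(z_p)
    loss = (F.binary_cross_entropy_with_logits(lq, torch.ones_like(lq))
            + F.binary_cross_entropy_with_logits(lp, torch.zeros_like(lp)))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def kl_estimate(z_q):
    # Plug-in estimate of D_KL(q || p) = E_q[log q/p] via the discriminator logit.
    return disc(z_q).mean()
```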
Additionally, we performed a 10-run cross-validation on the SVHN dataset [25]. We used the same architecture as for MNIST with the same encoders, decoders and discriminators. In contrast to VAE M1 + M2, we used normalized raw data without any pre-processing. Additionally, in contrast to AAE, where an extra set of 531,131 unlabeled images was used for the semi-supervised training, in our experiments only the training set of 73,257 images was used. Moreover, the experiments were performed: (i) with the optimal parameters chosen after the 3-run cross-validation on the MNIST dataset, with no special adaptation to the SVHN dataset, and (ii) with network architectures using exactly the same number of filters as given in Appendix B, Appendix C, Appendix D, Appendix E, Appendix F and Appendix G for the MNIST dataset. In summary, our goal is to test the generalization capacity of the proposed approach rather than to achieve the best possible performance by fine-tuning the network parameters. The obtained results are reported in Table 1.
We compare the considered architectures with several state-of-the-art semi-supervised methods such as AAE [2], CatGAN [3], VAE (M1 + M2) [1], IB multiview [5], MV-InfoMax [5] and InfoMax [3] with 100, 1000 and 60,000 training labeled samples. The expected training times for the considered models are given in Table 2. The source code is available at https://github.com/taranO/IB-semi-supervised-classification. The analysis of the latent space of trained models for the MNIST dataset is given in Appendix A.

5.2. Discussion MNIST

The deterministic and stochastic systems based on the learnable priors clearly demonstrate the state-of-the-art performance in comparison to the considered semi-supervised counterparts.
Baseline Neural Network (NN): the obtained results allow us to conclude that, if the amount of labeled training data is large, as shown in the “all” column (Table 1), the latent space regularization has no practically significant impact on the classification performance for both hand-crafted and learnable priors. The deep classifier is capable of learning a latent representation retaining only the sufficient statistics in the latent space solely based on the cross-entropy component of the IB classification term decomposition, as shown in Table A1, row $\mathcal{D}_{c\hat{c}}$ and column “all”. The classes appear to be well separable under this form of visualization. At the same time, the decrease in the number of labeled samples leads to a degradation of the classification accuracy, as shown in Table 1 for columns “1000” and “100”. This degradation is also clearly observed in Table A1, row $\mathcal{D}_{c\hat{c}}$ and column “100”, where there is a larger overlap between the classes compared to the column “all”. The stochastic encoding via the addition of noise to the input samples does not enhance the performance with respect to the deterministic encoding for the small amount of labeled examples. One can assume that the presence of additive noise is not typical for the considered data, whereas the samples clearly differ in their geometrical appearance. Therefore, we can only assume that random geometrical perturbations would be a more interesting alternative to the additive noise perturbations/encoding.
No priors on latent space: to investigate the impact of unlabeled data, we add the adversarial regularizer $\mathcal{D}_c$ to the baseline classifier based on $\mathcal{D}_{c\hat{c}}$. The term $\mathcal{D}_c$ enforces the distribution of class labels for the unlabeled samples to follow the categorical distribution. At this stage, no regularization of the latent space is applied. The addition of the adversarial regularizer $\mathcal{D}_c$, see the “100” column (Table 1), allows reducing the classification error in comparison to the baseline classifier. Moreover, the stochastic encoder slightly outperforms the deterministic one for all numbers of labeled samples. However, the achieved classification error is far from the performance of the baseline classifier trained on the whole labeled data set. Thus, the cross-entropy and adversarial classification terms alone can hardly cope with the lack of labeled data, and proper regularization of the latent space is the main mechanism capable of retaining the most relevant representation.
Hand-crafted latent space priors: along this line, we investigate the impact of hand-crafted regularization in the form of an added discriminator $\mathcal{D}_a$ imposing a Gaussian prior on the latent representation $a$. The sole regularization of the latent space with a hand-crafted Gaussian prior does not reflect the complex nature of the latent space of real data. As a result, the performance of the regularized classifier $\beta_c \mathcal{D}_{c\hat{c}} + \mathcal{D}_a$ does not lead to a remarkable improvement in comparison to the non-regularized counterpart $\mathcal{D}_{c\hat{c}}$ for both stochastic and deterministic types of encoding. When, in addition, the label space regularization $\mathcal{D}_c$ is added to obtain the final classifier $\beta_c \mathcal{D}_{c\hat{c}} + \mathcal{D}_a + \beta_c \mathcal{D}_c$, it leads to a factor-of-two reduction of the classification error over the cross-entropy baseline classifier, but it is still far from the fully supervised baseline classifier trained on the fully labeled data set. At the same time, there is no significant difference between the stochastic and deterministic types of encoding.
Learnable latent space priors: along this line, we investigate the impact of learnable priors by adding the corresponding regularizations of the latent space of the auto-encoder and of the data reconstruction. We investigate the role of the reconstruction space regularization based on the MSE expressed via $\mathcal{D}_{x\hat{x}}$, and on the joint $\mathcal{D}_{x\hat{x}}$ and $\mathcal{D}_x$. The addition of the discriminator $\mathcal{D}_x$ slightly enhances the classification but requires almost double the training time, as shown in Table 2. The stochastic encoding does not show any obvious advantage over the deterministic one in this setup. The separability of classes shown in Table A1, row $\beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c + \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}} + \beta_x \mathcal{D}_x$ and column “100”, is very close to that of column “all” and row $\mathcal{D}_{c\hat{c}}$, i.e., the semi-supervised system with 100 labeled examples closely approximates the fully supervised one. We show the t-sne only for this setup since it practically coincides with $\beta_c \mathcal{D}_{c\hat{c}} + \beta_c \mathcal{D}_c + \mathcal{D}_z + \beta_x \mathcal{D}_{x\hat{x}}$. However, it should be pointed out that the learnable priors ensure the reconstruction of the data from the compressed latent space, so the learned representation is a sufficient statistic for the data reconstruction task but not for the classification one. Since the entropy of the classification task is significantly lower than that of the reconstruction task, such a learned representation contains more information than actually needed for classification. A fraction of the retained information is irrelevant to the classification problem and might be a potential source of classification errors. This likely explains the gap in performance between the considered semi-supervised system and the fully supervised one.

5.3. Discussion SVHN

In the SVHN test, we did not try to optimize the Lagrangian coefficients as was done for MNIST. However, to compensate for a potential non-optimality, we performed the model training with a reduced learning rate, as indicated in Table 2. As a result, the training time on the SVHN dataset is longer. Therefore, the 10-run validation of the proposed framework on the SVHN dataset was done with the optimal Lagrangian multipliers determined on the MNIST dataset. In this respect, one might observe a small degradation of the obtained results compared to the state-of-the-art. Additionally, we did not apply any pre-processing such as the PCA used in VAE M1 + M2, and we did not use the extended unlabeled dataset as was done in the case of AAE. One can clearly observe the same behavior of the semi-supervised classifiers as for the MNIST dataset discussed in Section 5.2. Therefore, we can clearly confirm the role of the learnable priors in the overall performance observed for both datasets.

6. Conclusions and Future Work

We have introduced a novel formulation of the variational information bottleneck for semi-supervised classification. To overcome the problems of the original bottleneck formulation and to compensate for the lack of labeled data in the semi-supervised setting, we considered two models of latent space regularization via hand-crafted and learnable priors. On the MNIST dataset, we investigated how the parameters of the proposed framework influence the performance of the classifier. With end-to-end training, we demonstrated how the proposed framework compares to the state-of-the-art methods and approaches the performance of the fully supervised classifier.
The envisioned future work is along the lines of providing a stronger compression while preserving only classification-relevant information, since retaining more task-irrelevant information does not provide distinguishable classification features, i.e., it only ensures reliable data reconstruction. In this work, we have considered the IB for the predictive latent space model. We think that the contrastive multi-view IB formulation would be an interesting candidate for the regularization of the latent space. Additionally, we did not use adversarially generated examples to impose the constraint of minimizing the mutual information between them and the class labels or, equivalently, of maximizing the entropy of the class label distribution for these adversarial examples according to the framework of entropy minimization. This line of “adversarial” regularization seems to be a very interesting complement to the considered variational bottleneck. In this work, we considered a particular form of stochastic encoding based on the addition of data-independent noise to the input with the preservation of the same class labels. This also corresponds to consistency regularization, where samples can be perturbed more generally, including geometrical transformations. It is also interesting to point out that the same form of generic perturbations is used in the unsupervised contrastive-loss-based multi-view formulations for the continuous latent space representation, as opposed to the categorical one in consistency regularization. Finally, conditional generation can be an interesting line of research, considering generation from the discrete labels and the continuous latent space of the auto-encoder.

Author Contributions

Conceptualization, S.V. and O.T.; methodology, O.T., M.K., T.H. and D.R.; software, O.T.; validation, O.T.; formal analysis, M.K., T.H. and D.R.; investigation, O.T.; writing—original draft preparation, S.V. and O.T.; writing—review and editing, all authors; visualization, S.V. and O.T.; supervision, S.V.; project administration, S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Swiss National Science Foundation SNF No. 200021_182063.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IB	Information bottleneck
VAE	Variational autoencoder
AAE	Adversarial autoencoder
CatGAN	Categorical generative adversarial networks
KL-divergences	Kullback–Leibler divergences
MSE	Mean squared error
HCP	IB with hand-crafted priors
LP	IB with learnable priors
NN	Neural Network
SS	Semi-supervised

Appendix A. Latent Space of Trained Models

Figure A1. Latent space a (of size 1024) of the classifier.
Figure A2. Latent space z (of size 20) of the auto-encoder.
In this section, we consider the properties of the classifier’s latent space for both the hand-crafted and learnable priors under different amounts of training samples. Figure A1 and Figure A2 show t-SNE plots with perplexity 30 for 100, 1000 and 60,000 (“all”) training labels of the MNIST dataset.
The first row of Figure A1, with the label “ D c c ^ ”, corresponds to the classifier considered in Appendix B. The latent space a of the classifier trained with “all” labels demonstrates perfect separability of the classes. The classes are far away from each other and there are practically no outliers leading to misclassification. Decreasing the number of labels in the supervised setup (see the columns 1000 and 100) leads to a visible degradation of the separability between the classes.
The regularization of the class label space by the regularizer D c or of the latent space by the hand-crafted regularizer D a , shown in the rows “ D c c ^ + α c D c ” (considered in Appendix C) and “ D c c ^ + α a D a ” (considered in Appendix D), does not significantly enhance the class separability with respect to “ D c c ^ ” for the small number of training samples (100).
At the same time, the joint usage of the above regularizers according to the model “ D c c ^ + α c D c + α a D a ” of Appendix E leads to better separability of the classes for 100 labels in comparison with the previous cases. However, the addition of these regularizers does not have any impact on the latent space in the “all” label case.
The introduction of the learnable regularization of the latent space along with the class label regularization according to the model “ D c c ^ + D c + D z + D x x ^ + α x D x ” considered in Appendix G enhances the class separability in the latent space of the classifier for the 100-label case, which is also very close to the fully supervised case.
For comparison, we also visualize the latent space z of the auto-encoder for the above model in Figure A2.
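For reproducibility, a minimal sketch of how such t-SNE embeddings (perplexity 30, as above) could be produced is given below; the use of scikit-learn and matplotlib, as well as the variable names, are our assumptions and not the authors’ original tooling.
```python
# Minimal t-SNE visualization sketch (assumptions: scikit-learn, matplotlib;
# the original plotting code is not specified in the paper).
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latent(latents, labels, title):
    """latents: (N, d) array of latent vectors a or z; labels: (N,) digit labels."""
    emb = TSNE(n_components=2, perplexity=30).fit_transform(latents)
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=3)
    plt.title(title)
    plt.show()

# Example call (hypothetical variables):
# plot_latent(a_test, y_test, "Latent space a (1024-d) of the classifier")
```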

Appendix B. Supervised Training without Latent Space Regularization (Baseline)

The baseline architecture is based on the cross-entropy term D c c ^ (7) in the main part of the paper and is depicted in Figure A3:
$$\mathcal{L}_{\mathrm{S}}^{\text{NoReg-HCP}}(\theta_c, \phi_a) = D_{c\hat{c}}.$$
The parameters of the encoder and decoder are shown in Table A1. The performance of the baseline supervised classifier with and without batch normalization corresponds to the parameter α c = 0 in Table A3 (deterministic scenario) and Table A4 (stochastic scenario).
Figure A3. Baseline classifier based on D c c ^ . The blue shadowed regions are not used.
Table A1. The network parameters of the baseline classifier trained on D c c ^ . The encoder is trained with and without batch normalization (BN) after Conv2D layers.
Encoder
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC, ReLU
Decoder
Size | Layer
1024 | Input
500 | FC, ReLU
10 | FC, Softmax
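For illustration, the layers of Table A1 could be implemented along the following lines; this is only a sketch, and the deep learning framework, kernel sizes, strides and padding are our assumptions, chosen so that the feature map sizes match those listed in the table.
```python
# Sketch of the baseline classifier of Table A1 (assumptions: PyTorch,
# kernel_size=3, stride=2, padding=1 to obtain the 28 -> 14 -> 7 -> 4 maps).
import torch.nn as nn

def conv_block(c_in, c_out, use_bn=False):
    layers = [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)]
    if use_bn:                        # the "with BN" variant of the encoder
        layers.append(nn.BatchNorm2d(c_out))
    layers.append(nn.LeakyReLU(0.2))
    return layers

def make_encoder(use_bn=False):
    return nn.Sequential(
        *conv_block(1, 32, use_bn),   # 28 x 28 x 1  -> 14 x 14 x 32
        *conv_block(32, 64, use_bn),  # 14 x 14 x 32 -> 7 x 7 x 64
        *conv_block(64, 128, use_bn), # 7 x 7 x 64   -> 4 x 4 x 128
        nn.Flatten(),                 # 2048
        nn.Linear(2048, 1024), nn.ReLU(),
    )

def make_decoder():
    # Class-label decoder; the final softmax is assumed folded into the loss.
    return nn.Sequential(
        nn.Linear(1024, 500), nn.ReLU(),
        nn.Linear(500, 10),
    )
```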

Appendix C. Semi-Supervised Training without Latent Space Regularization and with Class Label Regularizer

This model is based on the terms D c c ^ and D c in (8) in the main part of the paper and is schematically shown in Figure A4:
$$\mathcal{L}_{\mathrm{SS}}^{\text{NoReg-HCP}}(\theta_c, \phi_a) = D_{c\hat{c}} + \alpha_c D_c.$$
The parameters of the encoder, decoder and discriminator are shown in Table A2. The KL-divergence term D c is implemented in the form of a density ratio estimator (DRE). In the considered practical implementation, the parameter α c controls the trade-off between the cross-entropy and class discriminator terms. The discriminator D c is trained in an adversarial way based on samples generated by the decoder and samples drawn from the targeted distribution.
The performance of the semi-supervised classifier with and without batch normalization is shown in Table A3 (deterministic scenario) and Table A4 (stochastic scenario).
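A minimal sketch of such an adversarially trained density ratio estimator is given below; PyTorch, the uniform categorical targeted distribution and the non-saturating generator loss are our assumptions and may differ from the actual implementation.
```python
# Sketch of the D_c density ratio estimator (architecture from Table A2).
# Assumptions: PyTorch, a uniform categorical targeted distribution, and the
# sigmoid of the last layer folded into the BCE-with-logits loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

disc_c = nn.Sequential(
    nn.Linear(10, 500), nn.ReLU(),
    nn.Linear(500, 500), nn.ReLU(),
    nn.Linear(500, 1),
)

def dre_discriminator_loss(c_fake):
    """c_fake: softmax outputs of the decoder on unlabeled data, shape (B, 10)."""
    # "Real" samples are one-hot vectors drawn from the targeted distribution.
    c_real = F.one_hot(torch.randint(0, 10, (c_fake.size(0),)), 10).float()
    real_logit, fake_logit = disc_c(c_real), disc_c(c_fake.detach())
    return (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
            + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))

def dre_generator_loss(c_fake):
    # The encoder/decoder are updated so that D_c accepts c_fake as prior samples.
    fake_logit = disc_c(c_fake)
    return F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
```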
Figure A4. Semi-supervised classifier based on the cross-entropy D c c ^ and categorical class discriminator D c . No latent space regularization is applied. The blue shadowed regions are not used.
Table A2. The network parameters of the semi-supervised classifier trained on D c c ^ and D c . The encoder is trained with and without batch normalization (BN) after Conv2D layers.
Encoder
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC, ReLU
Decoder
Size | Layer
1024 | Input
500 | FC, ReLU
10 | FC, Softmax
D c
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
Table A3. The performance (percentage error) of the deterministic classifier based on D c c ^ + α c D c for the encoder with and without batch normalization as a function of the Lagrangian multiplier α c and the number of labelled examples.
Encoder Model | α c | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0 | 26.56 | 26.24 | 28.04 | 26.95 | 0.96
 | 0.005 | 20.44 | 21.93 | 18.98 | 20.45 | 1.48
 | 0.0005 | 18.55 | 20.43 | 20.59 | 19.86 | 1.14
 | 1 | 19.23 | 22.42 | 20.57 | 20.74 | 1.60
with BN | 0 | 29.37 | 29.27 | 30.62 | 29.75 | 0.75
 | 0.005 | 27.97 | 28.02 | 26.27 | 27.42 | 1.00
 | 0.0005 | 25.99 | 23.70 | 24.47 | 24.72 | 1.17
 | 1 | 27.78 | 31.98 | 35.88 | 31.88 | 4.05
MNIST 1000
without BN | 0 | 7.74 | 6.99 | 6.97 | 7.23 | 0.44
 | 0.005 | 5.62 | 6.06 | 5.60 | 5.76 | 0.26
 | 0.0005 | 6.30 | 6.12 | 6.02 | 6.15 | 0.14
 | 1 | 5.99 | 6.27 | 6.28 | 6.18 | 0.16
with BN | 0 | 7.45 | 6.95 | 7.52 | 7.31 | 0.31
 | 0.005 | 5.57 | 5.08 | 5.22 | 5.29 | 0.25
 | 0.0005 | 5.60 | 6.05 | 6.22 | 5.96 | 0.32
 | 1 | 6.05 | 6.41 | 5.82 | 6.09 | 0.30
MNIST all
without BN | 0 | 0.83 | 0.83 | 0.74 | 0.80 | 0.05
 | 0.005 | 0.83 | 0.82 | 0.88 | 0.84 | 0.03
 | 0.0005 | 0.86 | 0.92 | 0.82 | 0.87 | 0.05
 | 1 | 0.72 | 0.85 | 0.87 | 0.81 | 0.08
with BN | 0 | 0.73 | 0.67 | 0.79 | 0.73 | 0.06
 | 0.005 | 0.72 | 0.73 | 0.70 | 0.72 | 0.02
 | 0.0005 | 0.75 | 0.77 | 0.72 | 0.75 | 0.03
 | 1 | 0.67 | 0.68 | 0.73 | 0.69 | 0.03
Table A4. The performance (percentage error) of the stochastic classifier with supervised noisy data (noise std = 0.1, # noise realisations = 3) based on D c c ^ + α c D c for the encoder with and without batch normalization as a function of the Lagrangian multiplier α c and the number of labelled examples.
Encoder Model | α c | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0 | 25.75 | 26.61 | 26.59 | 26.32 | 0.49
 | 0.005 | 23.34 | 21.38 | 24.37 | 23.03 | 1.52
 | 0.0005 | 19.92 | 15.83 | 16.03 | 17.26 | 2.31
 | 1 | 22.51 | 20.48 | 21.28 | 21.42 | 1.02
with BN | 0 | 30.26 | 31.24 | 29.3 | 30.27 | 0.97
 | 0.005 | 21.17 | 24.41 | 24.75 | 23.44 | 1.98
 | 0.0005 | 22.97 | 26.38 | 24.44 | 24.60 | 1.71
 | 1 | 26.62 | 30.43 | 28.44 | 28.50 | 1.91
MNIST 1000
without BN | 0 | 7.68 | 7.30 | 7.23 | 7.4 | 0.24
 | 0.005 | 5.59 | 5.16 | 5.80 | 5.52 | 0.33
 | 0.0005 | 5.59 | 6 | 5.84 | 5.81 | 0.21
 | 1 | 6.66 | 6.8 | 7.62 | 7.03 | 0.52
with BN | 0 | 6.97 | 7.06 | 7.66 | 7.23 | 0.38
 | 0.005 | 4.42 | 4.54 | 4.08 | 4.35 | 0.24
 | 0.0005 | 5.28 | 5.56 | 5.14 | 5.33 | 0.21
 | 1 | 5.77 | 5.88 | 5.72 | 5.79 | 0.08
MNIST all
without BN | 0 | 0.8 | 0.91 | 0.87 | 0.86 | 0.06
 | 0.005 | 0.77 | 0.82 | 0.88 | 0.82 | 0.06
 | 0.0005 | 0.86 | 0.81 | 0.87 | 0.85 | 0.03
 | 1 | 0.93 | 0.85 | 0.92 | 0.90 | 0.04
with BN | 0 | 0.65 | 0.67 | 0.71 | 0.68 | 0.03
 | 0.005 | 0.69 | 0.77 | 0.68 | 0.71 | 0.05
 | 0.0005 | 0.78 | 0.71 | 0.74 | 0.74 | 0.04
 | 1 | 0.71 | 0.64 | 0.62 | 0.66 | 0.05

Appendix D. Supervised Training with Hand Crafted Latent Space Regularization

This model is based on the cross-entropy term D c c ^ and either the term D a | x or D a , or jointly D a | x and D a , as defined by (9) in the main part of the paper. In our implementation, we consider the regularization based on the adversarial term D a , similar to the AAE, due to the flexibility of imposing different priors on the latent space distribution. The implemented system is shown in Figure A5 and the training is based on:
$$\mathcal{L}_{\mathrm{S}}^{\text{Reg-HCP}}(\theta_c, \phi_a) = D_{c\hat{c}} + \alpha_a D_a,$$
where α a is a regularization parameter controlling the trade-off between the cross-entropy term and the latent space regularization term. With respect to (9) in the main part of the paper, we have moved the Lagrange multiplier and placed it in front of D a , in contrast to the original formulation. This is done to keep the term D c c ^ without a multiplier as the reference to the baseline classifier.
The parameters of the encoder, decoder and discriminator are summarized in Table A5. The performance of this classifier without and with batch normalization is shown in Table A6 (deterministic scenario) and Table A7 (stochastic scenario).
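A minimal sketch of this supervised objective with the adversarially imposed hand-crafted prior is given below; PyTorch, a standard Gaussian prior on the latent space a and the folding of the discriminator’s sigmoid into the loss are our assumptions.
```python
# Sketch of the loss D_cc^ + alpha_a * D_a (assumptions: PyTorch, the hand-crafted
# prior on the latent space a is N(0, I), disc_a returns a logit).
import torch
import torch.nn.functional as F

def supervised_step(encoder, decoder, disc_a, x, y, alpha_a=0.0005):
    a = encoder(x)                      # latent representation a, shape (B, 1024)
    logits = decoder(a)                 # class logits, shape (B, 10)
    ce = F.cross_entropy(logits, y)     # cross-entropy term D_cc^

    # Adversarial prior matching: the encoder tries to make q(a|x)
    # indistinguishable from the chosen prior for the discriminator D_a.
    d_fake = disc_a(a)
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return ce + alpha_a * adv
```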
Figure A5. Supervised classifier based on the cross-entropy D c c ^ and hand crafted latent space regularization D a . The blue shadowed parts are not used.
Table A5. The network parameters of the supervised classifier trained on D c c ^ and D a . The encoder is trained with and without batch normalization (BN) after Conv2D layers. D a is trained in the adversarial way.
Encoder
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC
Decoder
Size | Layer
1024 | Input
500 | FC, ReLU
10 | FC, Softmax
D a
Size | Layer
1024 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
Table A6. The performance (percentage error) of the deterministic classifier based on D c c ^ + α a D a for the encoder with and without batch normalization as a function of the Lagrangian multiplier.
Encoder Model | α a | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0 | 26.79 | 27.26 | 27.39 | 27.15 | 0.32
 | 0.005 | 28.05 | 25.95 | 30.72 | 28.24 | 2.39
 | 0.0005 | 26.67 | 27.69 | 28.46 | 27.61 | 0.89
 | 1 | 33.42 | 33.05 | 34.81 | 33.76 | 0.92
with BN | 0 | 30.37 | 29.32 | 29.82 | 29.83 | 0.52
 | 0.005 | 28.02 | 31.49 | 30.80 | 30.11 | 1.84
 | 0.0005 | 34.54 | 31.92 | 29.82 | 31.09 | 2.36
 | 1 | 34.43 | 44.35 | 44.25 | 41.01 | 5.70
MNIST 1000
without BN | 0 | 7.16 | 8.12 | 7.55 | 7.61 | 0.48
 | 0.005 | 7.02 | 6.34 | 6.59 | 6.65 | 0.34
 | 0.0005 | 6.73 | 6.34 | 6.82 | 6.63 | 0.26
 | 1 | 9.49 | 9.93 | 10.56 | 9.99 | 0.54
with BN | 0 | 7.39 | 7.83 | 7.92 | 7.72 | 0.28
 | 0.005 | 7.94 | 7.15 | 8.53 | 7.88 | 0.69
 | 0.0005 | 8.00 | 9.62 | 9.51 | 9.05 | 0.91
 | 1 | 15.79 | 14.88 | 13.71 | 14.79 | 1.04
MNIST all
without BN | 0 | 0.76 | 0.70 | 0.81 | 0.76 | 0.06
 | 0.005 | 1.07 | 1.03 | 1.13 | 1.08 | 0.05
 | 0.0005 | 0.84 | 0.78 | 0.89 | 0.84 | 0.06
 | 1 | 4.78 | 7.24 | 4.71 | 5.58 | 1.44
with BN | 0 | 0.68 | 0.68 | 0.69 | 0.68 | 0.01
 | 0.005 | 0.90 | 0.81 | 1.12 | 0.94 | 0.16
 | 0.0005 | 0.87 | 0.80 | 0.89 | 0.85 | 0.05
 | 1 | 2.37 | 3.61 | 4.35 | 3.44 | 1.00
Table A7. The performance (percentage error) of the stochastic classifier with supervised noisy data (noise std = 0.1, # noise realisations = 3) based on D c c ^ + α a D a for the encoder with and without batch normalization as a function of the Lagrangian multiplier.
Encoder Model | α a | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0.005 | 28.13 | 25.16 | 29.9 | 27.73 | 2.40
 | 0.0005 | 28.05 | 30.03 | 28.11 | 28.73 | 1.13
 | 1 | 32.33 | 34.09 | 33.73 | 33.38 | 0.93
with BN | 0.005 | 32.25 | 33.47 | 26.01 | 30.58 | 4.00
 | 0.0005 | 33.37 | 36.15 | 35.65 | 35.06 | 1.48
 | 1 | 33.37 | 42.37 | 32.46 | 36.07 | 5.48
MNIST 1000
without BN | 0.005 | 7.37 | 7.17 | 6.65 | 7.06 | 0.37
 | 0.0005 | 7.48 | 6.68 | 6.67 | 6.94 | 0.46
 | 1 | 9.48 | 9.94 | 11.61 | 10.34 | 1.12
with BN | 0.005 | 7.82 | 7.97 | 7.81 | 7.87 | 0.09
 | 0.0005 | 9.5 | 8.68 | 9.37 | 9.18 | 0.44
 | 1 | 12.99 | 10.52 | 9.98 | 11.16 | 1.60
MNIST all
without BN | 0.005 | 1.19 | 1.09 | 1.06 | 1.11 | 0.07
 | 0.0005 | 0.79 | 0.88 | 0.82 | 0.83 | 0.05
 | 1 | 6.22 | 4.81 | 5 | 5.34 | 0.77
with BN | 0.005 | 0.94 | 1.07 | 1.04 | 1.02 | 0.07
 | 0.0005 | 0.78 | 0.81 | 0.78 | 0.79 | 0.02
 | 1 | 4.49 | 3.35 | 2.18 | 3.34 | 1.16

Appendix E. Semi-Supervised Training with Hand Crafted Latent Space and Class Label Regularizations

This model is based on the cross-entropy term D c c ^ , either the term D a | x or D a , or jointly D a | x and D a , and the class label regularizer D c , as defined by (10) in the main part of the paper. In our implementation, we consider the regularization based on the adversarial term D a only, as shown in Figure A6. The training is based on:
$$\mathcal{L}_{\mathrm{S}}^{\text{Reg-HCP}}(\theta_c, \phi_a) = D_{c\hat{c}} + \alpha_c D_c + \alpha_a D_a.$$
The parameters of the encoder, decoder and both discriminators are shown in Table A8. The performance of this classifier without and with batch normalization is shown in Table A9 (deterministic scenario) and Table A10 (stochastic scenario).
Table A8. The network parameters of the semi-supervised classifier trained on D c c ^ , D a and D c . The encoder is trained with and without batch normalization (BN) after Conv2D layers. D a and D c are trained in the adversarial way.
Encoder
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC
Decoder
Size | Layer
1024 | Input
500 | FC, ReLU
10 | FC, Softmax
D c
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
D a
Size | Layer
1024 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
Figure A6. Semi-supervised classifier based on the cross-entropy D c c ^ and hand crafted latent space regularization D a . The blue shadowed parts are not used.
Table A9. The performance (percentage error) of the deterministic classifier based on D c c ^ + α a D a + α c D c for the encoder with and without batch normalization.
Encoder Model | α a | α c | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0.005 | 0.005 | 21.39 | 18.12 | 18.34 | 19.28 | 1.83
 | 0.0005 | 0.0005 | 15.33 | 22.36 | 13.80 | 17.16 | 4.56
 | 0.005 | 0.0005 | 25.66 | 26.25 | 28.81 | 26.91 | 1.67
 | 0.0005 | 0.005 | 9.82 | 13.44 | 13.06 | 12.11 | 1.99
with BN | 0.005 | 0.005 | 23.45 | 21.19 | 28.87 | 24.50 | 3.94
 | 0.0005 | 0.0005 | 28.57 | 19.06 | 26.37 | 24.67 | 4.98
 | 0.005 | 0.0005 | 26.18 | 26.18 | 25.49 | 25.95 | 0.40
 | 0.0005 | 0.005 | 8.96 | 13.82 | 14.76 | 12.52 | 3.11
MNIST 1000
without BN | 0.005 | 0.005 | 3.91 | 4.21 | 3.70 | 3.94 | 0.26
 | 0.0005 | 0.0005 | 3.54 | 3.72 | 3.54 | 3.60 | 0.10
 | 0.005 | 0.0005 | 6.19 | 5.80 | 7.31 | 6.43 | 0.78
 | 0.0005 | 0.005 | 2.80 | 2.82 | 2.83 | 2.82 | 0.02
with BN | 0.005 | 0.005 | 3.30 | 2.94 | 2.93 | 3.06 | 0.21
 | 0.0005 | 0.0005 | 2.80 | 2.53 | 2.50 | 2.61 | 0.17
 | 0.005 | 0.0005 | 3.51 | 3.75 | 4.12 | 3.79 | 0.31
 | 0.0005 | 0.005 | 2.58 | 2.27 | 2.24 | 2.37 | 0.19
MNIST all
without BN | 0.005 | 0.005 | 1.04 | 1.07 | 1.07 | 1.06 | 0.02
 | 0.0005 | 0.0005 | 0.86 | 0.90 | 0.88 | 0.88 | 0.02
 | 0.005 | 0.0005 | 1.08 | 0.92 | 1.09 | 1.03 | 0.10
 | 0.0005 | 0.005 | 0.85 | 0.93 | 0.93 | 0.90 | 0.05
with BN | 0.005 | 0.005 | 1.10 | 1.01 | 0.93 | 1.01 | 0.09
 | 0.0005 | 0.0005 | 0.84 | 0.88 | 0.83 | 0.85 | 0.03
 | 0.005 | 0.0005 | 1.10 | 1.12 | 0.93 | 1.05 | 0.10
 | 0.0005 | 0.005 | 0.76 | 0.82 | 0.79 | 0.79 | 0.03
Table A10. The performance (percentage error) of the stochastic classifier with supervised noisy data (noise std = 0.1, # noise realisations = 3) based on D c c ^ + α a D a + α c D c for the encoder with and without batch normalization.
Encoder Model | α a | α c | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0.005 | 0.005 | 12.4 | 18.05 | 16.73 | 15.73 | 2.96
 | 0.0005 | 0.0005 | 15.01 | 11.16 | 14.74 | 13.64 | 2.15
 | 0.005 | 0.0005 | 23.31 | 26.61 | 25.41 | 25.11 | 1.67
 | 0.0005 | 0.005 | 9.21 | 9.02 | 10.12 | 9.45 | 0.59
with BN | 0.005 | 0.005 | 13.55 | 22.48 | 14.72 | 16.92 | 4.85
 | 0.0005 | 0.0005 | 8.37 | 15.01 | 26.92 | 16.77 | 9.40
 | 0.005 | 0.0005 | 32.12 | 30.27 | 31.44 | 31.28 | 0.94
 | 0.0005 | 0.005 | 5.46 | 17 | 11.54 | 11.33 | 5.77
MNIST 1000
without BN | 0.005 | 0.005 | 3.9 | 4.25 | 4.02 | 4.06 | 0.18
 | 0.0005 | 0.0005 | 3.64 | 3.82 | 4.11 | 3.86 | 0.24
 | 0.005 | 0.0005 | 6.68 | 5.34 | 6.36 | 6.13 | 0.70
 | 0.0005 | 0.005 | 3.03 | 2.88 | 2.66 | 2.86 | 0.19
with BN | 0.005 | 0.005 | 2.96 | 3.37 | 2.98 | 3.10 | 0.23
 | 0.0005 | 0.0005 | 2.87 | 3.10 | 2.73 | 2.90 | 0.19
 | 0.005 | 0.0005 | 3.72 | 3.8 | 4.14 | 3.89 | 0.22
 | 0.0005 | 0.005 | 2.57 | 2.39 | 2.28 | 2.41 | 0.15
MNIST all
without BN | 0.005 | 0.005 | 1.05 | 1.09 | 1.1 | 1.08 | 0.33
 | 0.0005 | 0.0005 | 0.94 | 0.96 | 0.9 | 0.93 | 0.03
 | 0.005 | 0.0005 | 1.16 | 1.14 | 1.13 | 1.14 | 0.02
 | 0.0005 | 0.005 | 0.88 | 0.92 | 0.91 | 0.90 | 0.02
with BN | 0.005 | 0.005 | 0.98 | 0.84 | 0.94 | 0.92 | 0.07
 | 0.0005 | 0.0005 | 0.79 | 0.96 | 0.82 | 0.86 | 0.09
 | 0.005 | 0.0005 | 1.04 | 1.05 | 1.03 | 1.04 | 0.01
 | 0.0005 | 0.005 | 0.74 | 0.78 | 0.84 | 0.79 | 0.05

Appendix F. Semi-Supervised Training with Learnable Latent Space Regularization

This model is based on the cross-entropy term D c c ^ , the MSE term representing D x x ^ , the class label regularizer D c , and either the term D z | x or D z , or jointly D z | x and D z , as defined by (16) in the main part of the paper. In our implementation, we consider the regularization of the latent space based on the adversarial term D z only, to compare it with the vanilla AAE, as shown in Figure A7. The encoder is also not conditioned on c as in the original semi-supervised AAE. Thus, the tested system is based on:
$$\mathcal{L}_{\text{SS-AAE}}^{\text{LP}}(\theta_c, \theta_x, \phi_a, \phi_z) = \beta_c D_{c\hat{c}} + \beta_c D_c + D_z + \beta_x D_{x\hat{x}}.$$
We set the parameters β x = β c = 1 to compare our system with the vanilla AAE. However, these parameters can also be optimized in practice.
The parameters of the encoder and decoder are shown in Table A11. The performance of this classifier without and with batch normalization is shown in Table A12 (deterministic scenario) and Table A13 (stochastic scenario).
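For one training step, the objective above could be assembled along the following lines; this is only a sketch, and PyTorch, the non-saturating form of the adversarial terms and the update schedule of the discriminators are our assumptions.
```python
# Sketch of the learnable-prior objective beta_c*D_cc^ + beta_c*D_c + D_z + beta_x*D_xx^
# (assumptions: PyTorch, the encoder returns a softmax head c and a continuous head z,
# the discriminators return logits and are updated in separate steps).
import torch
import torch.nn.functional as F

def autoencoder_step(encoder, decoder_x, disc_c, disc_z,
                     x_unlab, x_lab, y_lab, beta_c=1.0, beta_x=1.0):
    c_u, z_u = encoder(x_unlab)
    x_hat = decoder_x(torch.cat([c_u, z_u], dim=1))     # reconstruction x^
    mse = F.mse_loss(x_hat, x_unlab)                    # D_xx^

    # Adversarial regularizers pushing c and z towards their priors.
    dc, dz = disc_c(c_u), disc_z(z_u)
    adv_c = F.binary_cross_entropy_with_logits(dc, torch.ones_like(dc))   # D_c
    adv_z = F.binary_cross_entropy_with_logits(dz, torch.ones_like(dz))   # D_z

    c_l, _ = encoder(x_lab)                             # labeled batch
    ce = F.nll_loss(torch.log(c_l + 1e-8), y_lab)       # D_cc^ (c_l are probabilities)

    return beta_c * ce + beta_c * adv_c + adv_z + beta_x * mse
```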
Table A11. The encoder and decoder of the semi-supervised classifier trained based on D c c ^ , D c and D z . The encoder is trained with and without batch normalization (BN) after Conv2D layers. D c and D z are trained in the adversarial way.
Encoder
Size | Layer
28 × 28 × 1 * | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC, ReLU
10 | FC, Softmax
10 | FC
Decoder
Size | Layer
10 + 10 | Input
7 × 7 × 128 | FC, Reshape, BN, ReLU
14 × 14 × 128 | Conv2DTrans, BN, ReLU
28 × 28 × 128 | Conv2DTrans, BN, ReLU
28 × 28 × 64 | Conv2DTrans, BN, ReLU
28 × 28 × 1 | Conv2DTrans, Sigmoid
D z
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
D c
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
Figure A7. Semi-supervised classifier with learnable priors: the cross-entropy D c c ^ , MSE D x x ^ , class label D c and latent space regularization D z . The blue shadowed parts are not used.
Table A12. The performance (percentage error) of the deterministic classifier based on D c c ^ + D c + D z + D x x ^ for the encoder with and without batch normalization.
Encoder Model | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 2.15 | 2.05 | 1.78 | 1.99 | 0.19
with BN | 1.57 | 1.56 | 1.92 | 1.68 | 0.21
MNIST 1000
without BN | 1.55 | 1.47 | 1.53 | 1.52 | 0.04
with BN | 1.37 | 1.34 | 1.73 | 1.48 | 0.22
MNIST all
without BN | 0.78 | 0.7 | 0.82 | 0.77 | 0.06
with BN | 0.79 | 0.77 | 0.76 | 0.77 | 0.02
Table A13. The performance (percentage error) of the stochastic classifier with supervised noisy data (noise std = 0.1, # noise realisations = 3) based on D c c ^ + D c + D z + D x x ^ for the encoder with and without batch normalization.
Encoder Model | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 1.55 | 3.19 | 2.11 | 2.28 | 0.83
with BN | 1.4 | 1.33 | 1.72 | 1.48 | 0.21
MNIST 1000
without BN | 1.73 | 1.53 | 1.6 | 1.62 | 0.10
with BN | 1.28 | 1.43 | 1.2 | 1.30 | 0.12
MNIST all
without BN | 0.94 | 0.86 | 0.86 | 0.89 | 0.05
with BN | 0.77 | 0.65 | 0.84 | 0.75 | 0.10

Appendix G. Semi-Supervised Training with Learnable Latent Space Regularization and Adversarial Reconstruction

This model is similar to the previously considered one, but in addition to the MSE reconstruction term representing D x x ^ , it also contains the adversarial reconstruction term D x , as defined by (17) in the main part of the paper. In our implementation, we consider the regularization of the latent space based on the adversarial term D z , as shown in Figure A8. The training is based on:
$$\mathcal{L}_{\text{SS-AAE}}^{\text{LP}}(\theta_c, \theta_x, \phi_a, \phi_z) = D_z + D_{x\hat{x}} + D_{c\hat{c}} + D_c + \alpha_x D_x.$$
The parameters of the encoder and decoder are shown in Table A14. The performance of this classifier without and with batch normalization is shown in Table A15 (deterministic scenario) and Table A16 (stochastic scenario).
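A minimal sketch of the additional adversarial reconstruction term is given below; PyTorch, the non-saturating loss and the separate discriminator update are our assumptions.
```python
# Sketch of the adversarial reconstruction term alpha_x * D_x added to the MSE term
# (assumptions: PyTorch, disc_x is the image discriminator of Table A14 returning a logit).
import torch
import torch.nn.functional as F

def reconstruction_terms(disc_x, x, x_hat, alpha_x=0.0005):
    mse = F.mse_loss(x_hat, x)                                  # D_xx^
    # Generator-side term: the decoder tries to make D_x accept x^ as real data.
    d_fake = disc_x(x_hat)
    adv_x = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return mse + alpha_x * adv_x

def disc_x_step(disc_x, x, x_hat):
    # Discriminator-side term: distinguish real images x from reconstructions x^.
    d_real, d_fake = disc_x(x), disc_x(x_hat.detach())
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
            + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
```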
Figure A8. Semi-supervised classifier with learnable priors: the cross-entropy D c c ^ , MSE D x x ^ , adversarial reconstruction D x , class label D c and latent space regularizer D z . The blue shadowed parts are not used.
Table A14. The network parameters of the semi-supervised classifier trained based on D c c ^ , D c and D z . The encoder is trained with and without batch normalization (BN) after Conv2D layers. D c and D z are trained in the adversarial way.
Encoder
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 32 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
2048 | Flatten
1024 | FC, ReLU
10 | FC, Softmax
10 | FC
D z
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
D c
Size | Layer
10 | Input
500 | FC, ReLU
500 | FC, ReLU
1 | FC, Sigmoid
Decoder
Size | Layer
10 + 10 | Input
7 × 7 × 128 | FC, Reshape, BN, ReLU
14 × 14 × 128 | Conv2DTrans, BN, ReLU
28 × 28 × 128 | Conv2DTrans, BN, ReLU
28 × 28 × 64 | Conv2DTrans, BN, ReLU
28 × 28 × 1 | Conv2DTrans, Sigmoid
D x
Size | Layer
28 × 28 × 1 | Input
14 × 14 × 64 | Conv2D, LeakyReLU
7 × 7 × 64 | Conv2D, LeakyReLU
4 × 4 × 128 | Conv2D, LeakyReLU
4 × 4 × 256 | Conv2D, LeakyReLU
4096 | Flatten
1 | FC, Sigmoid
Table A15. The performance (percentage error) of the deterministic classifier based on D c c ^ + D c + D z + D x x ^ + α x D x for the encoder with and without batch normalization.
Encoder Model | α x | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0.005 | 2.85 | 3.36 | 2.77 | 2.99 | 0.32
 | 0.0005 | 2.58 | 2.49 | 3.08 | 2.72 | 0.32
 | 1 | 19.62 | 19.96 | 15.97 | 18.52 | 2.21
with BN | 0.005 | 1.56 | 1.33 | 1.35 | 1.41 | 0.13
 | 0.0005 | 1.68 | 1.66 | 2.02 | 1.79 | 0.20
 | 1 | 20.85 | 13.6 | 21.67 | 18.71 | 4.44
MNIST 1000
without BN | 0.005 | 2.29 | 2.35 | 2.11 | 2.25 | 0.12
 | 0.0005 | 1.69 | 1.88 | 2.24 | 1.94 | 0.28
 | 1 | 3.47 | 3.30 | 4.12 | 3.63 | 0.43
with BN | 0.005 | 1.18 | 1.21 | 1.09 | 1.16 | 0.06
 | 0.0005 | 1.44 | 1.28 | 1.29 | 1.34 | 0.09
 | 1 | 4.14 | 2.94 | 2.48 | 3.19 | 0.86
MNIST all
without BN | 0.005 | 0.97 | 1.01 | 1.04 | 1.01 | 0.04
 | 0.0005 | 0.88 | 0.85 | 0.93 | 0.89 | 0.04
 | 1 | 1.31 | 1.28 | 1.47 | 1.35 | 0.10
with BN | 0.005 | 0.81 | 0.83 | 0.75 | 0.80 | 0.04
 | 0.0005 | 0.73 | 0.78 | 0.75 | 0.75 | 0.03
 | 1 | 0.88 | 0.86 | 1.27 | 1.00 | 0.23
Table A16. The performance (percentage error) of the stochastic classifier with supervised noisy data (noise std = 0.1, # noise realisations = 3) based on D c c ^ + D c + D z + D x x ^ + α x D x for the encoder with and without batch normalization.
Encoder Model | α x | Run 1 | Run 2 | Run 3 | Mean | std
MNIST 100
without BN | 0.005 | 2.45 | 3.04 | 2.67 | 2.72 | 0.30
 | 0.0005 | 2.63 | 2.3 | 2.45 | 2.46 | 0.17
with BN | 0.005 | 1.34 | 1.21 | 6.4 | 2.98 | 2.96
 | 0.0005 | 1.35 | 1.51 | 1.93 | 1.60 | 0.30
MNIST 1000
without BN | 0.005 | 2.31 | 2.26 | 2.2 | 2.26 | 0.06
 | 0.0005 | 1.71 | 2.16 | 1.86 | 1.91 | 0.23
with BN | 0.005 | 1.23 | 1.31 | 1.10 | 1.21 | 0.11
 | 0.0005 | 1.42 | 1.62 | 1.37 | 1.47 | 0.13
MNIST all
without BN | 0.005 | 0.93 | 1.01 | 1.05 | 1.00 | 0.06
 | 0.0005 | 0.92 | 0.83 | 0.88 | 0.88 | 0.05
with BN | 0.005 | 0.88 | 0.86 | 0.91 | 0.88 | 0.03
 | 0.0005 | 0.77 | 0.80 | 0.80 | 0.79 | 0.02

References

  1. Kingma, D.P.; Mohamed, S.; Rezende, D.J.; Welling, M. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems; MIT Press: Montreal, QC, Canada, 2014; pp. 3581–3589.
  2. Makhzani, A.; Shlens, J.; Jaitly, N.; Goodfellow, I.; Frey, B. Adversarial autoencoders. arXiv 2015, arXiv:1511.05644.
  3. Springenberg, J.T. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv 2015, arXiv:1511.06390.
  4. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. arXiv 2020, arXiv:2002.05709.
  5. Federici, M.; Dutta, A.; Forré, P.; Kushman, N.; Akata, Z. Learning Robust Representations via Multi-View Information Bottleneck. arXiv 2020, arXiv:2002.07017.
  6. Tishby, N.; Zaslavsky, N. Deep learning and the information bottleneck principle. In Proceedings of the 2015 IEEE Information Theory Workshop (ITW), Jerusalem, Israel, 26 April–1 May 2015; pp. 1–5.
  7. Achille, A.; Soatto, S. Information dropout: Learning optimal representations through noisy computation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2897–2905.
  8. Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; Raffel, C.A. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems; MIT Press: Vancouver, BC, Canada, 2019; pp. 5049–5059.
  9. Grandvalet, Y.; Bengio, Y. Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems; MIT Press: Vancouver, BC, Canada, 2004; pp. 529–536.
  10. Lee, D.H. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop: Challenges in Representation Learning (WREPL); ICML: Atlanta, GA, USA, 2013; Volume 3.
  11. Cireşan, D.C.; Meier, U.; Gambardella, L.M.; Schmidhuber, J. Deep, big, simple neural nets for handwritten digit recognition. Neural Comput. 2010, 22, 3207–3220.
  12. Cubuk, E.D.; Zoph, B.; Mane, D.; Vasudevan, V.; Le, Q.V. Autoaugment: Learning augmentation policies from data. arXiv 2018, arXiv:1805.09501.
  13. Amjad, R.A.; Geiger, B.C. Learning representations for neural network-based classification using the information bottleneck principle. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2225–2239.
  14. Alemi, A.A.; Fischer, I.; Dillon, J.V.; Murphy, K. Deep variational information bottleneck. arXiv 2016, arXiv:1612.00410.
  15. Voloshynovskiy, S.; Kondah, M.; Rezaeifar, S.; Taran, O.; Holotyak, T.; Rezende, D.J. Information bottleneck through variational glasses. In NeurIPS Workshop on Bayesian Deep Learning; Vancouver Convention Center: Vancouver, BC, Canada, 2019.
  16. Uğur, Y.; Zaidi, A. Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding. Entropy 2020, 22, 213.
  17. Maaløe, L.; Sønderby, C.K.; Sønderby, S.K.; Winther, O. Auxiliary deep generative models. arXiv 2016, arXiv:1602.05473.
  18. Śmieja, M.; Wołczyk, M.; Tabor, J.; Geiger, B.C. SeGMA: Semi-Supervised Gaussian Mixture Auto-Encoder. arXiv 2019, arXiv:1906.09333.
  19. Makhzani, A.; Frey, B.J. Pixelgan autoencoders. In Advances in Neural Information Processing Systems; MIT Press: Long Beach, CA, USA, 2017; pp. 1975–1985.
  20. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  21. Kingma, D.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2014, arXiv:1312.6114.
  22. Rezende, D.J.; Mohamed, S.; Wierstra, D. Stochastic backpropagation and approximate inference in deep generative models. arXiv 2014, arXiv:1401.4082.
  23. Higgins, I.; Matthey, L.; Pal, A.; Burgess, C.; Glorot, X.; Botvinick, M.; Mohamed, S.; Lerchner, A. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
  24. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; MIT Press: Montreal, QC, Canada, 2014; pp. 2672–2680.
  25. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading Digits in Natural Images with Unsupervised Feature Learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning; NIPS Workshop: Granada, Spain, 2011; Volume 2011, p. 5.
Figure 1. Classification with the hand-crafted latent space regularization.
Figure 2. Classification with the learnable latent space regularization.
Table 1. Semi-supervised classification performance (percentage error) for the optimal parameters (Appendix B, Appendix C, Appendix D, Appendix E, Appendix F and Appendix G) defined on the MNIST (D—deterministic; S—stochastic).
Model | | MNIST (100) | MNIST (1000) | MNIST (all) | SVHN (1000)
NN Baseline ( D c c ^ ) | [D] | 26.31 (±0.91) | 7.50 (±0.19) | 0.68 (±0.05) | 36.16 (±0.77)
 | [S] | 26.78 (±1.66) | 7.54 (±0.25) | 0.70 (±0.05) | 36.28 (±0.93)
InfoMax [3] | [S] | 33.41 | 21.51 | 5.86 | -
VAE [5] | [S] | 14.26 | 8.71 | 5.02 | -
MV-InfoMax [5] | [S] | 13.22 | 7.39 | 6.07 | -
IB multiview [5] | [S] | 3.03 | 2.34 | 2.22 | -
VAE (M1 + M2) [5] | [S] | 3.33 (±0.14) | 2.40 (±0.02) | 0.96 | 36.02 (±0.10)
CatGAN | [S] | 1.91 (±0.10) | 1.73 (±0.18) | 0.91 | -
AAE | [D] | 1.90 (±0.10) | 1.60 (±0.08) | 0.85 (±0.02) | 17.70 (±0.30)
No priors on latent space
 D c c ^ + D c | [D] | 20.72 (±1.58) | 4.99 (±0.28) | 0.69 (±0.04) | 25.78 (±0.90)
 | [S] | 19.60 (±1.37) | 4.49 (±0.25) | 0.67 (±0.05) | 26.34 (±0.80)
Hand crafted latent space priors
 β c D c c ^ + D a | [D] | 27.44 (±1.40) | 6.77 (±0.34) | 0.91 (±0.05) | 35.94 (±1.08)
 | [S] | 27.48 (±1.07) | 6.91 (±0.45) | 0.88 (±0.05) | 35.80 (±1.21)
 β c D c c ^ + D a + β c D c | [D] | 12.04 (±4.46) | 2.43 (±0.12) | 0.81 (±0.05) | 24.70 (±0.46)
 | [S] | 11.80 (±3.82) | 2.40 (±0.10) | 0.82 (±0.04) | 24.62 (±0.54)
Learnable latent space priors
 β c D c c ^ + β c D c + D z + β x D x x ^ | [D] | 1.55 (±0.21) | 1.25 (±0.10) | 0.74 (±0.04) | 20.07 (±0.36)
 | [S] | 1.49 (±0.18) | 1.43 (±0.06) | 0.78 (±0.04) | 20.00 (±0.31)
 β c D c c ^ + β c D c + D z + β x D x x ^ + β x D x | [D] | 1.38 (±0.09) | 1.21 (±0.10) | 0.77 (±0.06) | 19.75 (±0.52)
 | [S] | 1.42 (±0.10) | 1.16 (±0.09) | 0.79 (±0.02) | 19.71 (±0.26)
Table 2. Execution time (hours) per 100 epochs on one NVIDIA GPU. For SVHN, the models with the learnable latent space priors were trained with a learning rate of 0.0001, which explains the longer time, but without optimization of the Lagrangians, i.e., the Lagrangians were re-used from the pre-trained MNIST model. All other models were trained with a learning rate of 0.001.
Model | MNIST | SVHN
NN Baseline ( D c c ^ ) | 0.47–0.65 | 0.85–0.92
No priors on latent space
 D c c ^ + D c | 0.47–0.65 | 0.85–0.92
Hand crafted latent space priors
 β c D c c ^ + D a | 0.47–0.65 | 1–1.05
 β c D c c ^ + D a + β c D c | 0.97–1.18 | 1.5–1.6
Learnable latent space priors
 β c D c c ^ + β c D c + D z + β x D x x ^ | 1.23–1.6 | 2.25–2.3
 β c D c c ^ + β c D c + D z + β x D x x ^ + β x D x | 1.98–2.42 | 3.5–3.55
