Open Access Article

Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders

School of System Informatics, Kobe University, 1-1, Rokkodai-cho, Nada-ku, Kobe 657-8501, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(12), 2551; https://doi.org/10.3390/app9122551
Received: 17 April 2019 / Revised: 16 June 2019 / Accepted: 19 June 2019 / Published: 21 June 2019
(This article belongs to the Special Issue Advances in Deep Learning)
Learning latent representations of observed data that favour both discriminative and generative tasks remains a challenge in artificial-intelligence (AI) research. Previous attempts, ranging from the convex binding of discriminative and generative models to the semisupervised learning paradigm, have rarely yielded optimal performance on both generative and discriminative tasks. To address this, we harness two neuroscience-inspired learning constraints, dependence minimisation and regularisation, to improve the generative and discriminative modelling performance of a deep generative model. To demonstrate the use of these learning constraints, we introduce a novel deep generative model, the encapsulated variational autoencoder (EVAE), which stacks two different variational autoencoders together with their learning algorithm. On the MNIST digits dataset, the generative modelling performance of EVAEs improved under the imposed dependence-minimisation constraint, encouraging the model to produce varied patterns of MNIST-like digits. On CIFAR-10(4K), a semisupervised EVAE with the imposed regularisation constraint achieved competitive discriminative performance on the classification benchmark, even against state-of-the-art semisupervised learning approaches.
Keywords: deep generative model; learning constraint; representation learning
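
The abstract describes the EVAE as two variational autoencoders stacked so that one models the latent code of the other, trained under a dependence-minimisation constraint. The following is a minimal sketch of that idea, not the paper's implementation: the layer sizes, the squared cross-correlation penalty standing in for the dependence-minimisation term, and the constraint weight beta are all hypothetical, and the use of PyTorch is an assumption (the paper does not specify a framework).

# Minimal sketch: two stacked VAEs with a hypothetical
# dependence-minimisation penalty between their latent codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim, z_dim, hidden=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar, z

def kl_term(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

def dependence_penalty(z_outer, z_inner):
    # Hypothetical dependence-minimisation term: penalise the squared
    # cross-covariance between the two latent codes. This is only one
    # simple proxy for statistical dependence.
    zo = z_outer - z_outer.mean(dim=0)
    zi = z_inner - z_inner.mean(dim=0)
    cov = zo.t() @ zi / (zo.size(0) - 1)
    return (cov ** 2).sum()

# Encapsulation: the outer VAE models the image; the inner VAE models
# the outer latent code, giving a second, stacked level of representation.
outer = VAE(in_dim=784, z_dim=32)
inner = VAE(in_dim=32, z_dim=8)
opt = torch.optim.Adam(list(outer.parameters()) + list(inner.parameters()),
                       lr=1e-3)

x = torch.rand(64, 784)                      # stand-in for a MNIST batch
recon_x, mu_o, lv_o, z_o = outer(x)
recon_z, mu_i, lv_i, z_i = inner(z_o)

beta = 0.1                                   # hypothetical constraint weight
loss = (F.binary_cross_entropy_with_logits(recon_x, x) + kl_term(mu_o, lv_o)
        + F.mse_loss(recon_z, z_o) + kl_term(mu_i, lv_i)
        + beta * dependence_penalty(z_o, z_i))
opt.zero_grad()
loss.backward()
opt.step()

Penalising squared cross-covariance is only one crude proxy for dependence between the two latent codes; the paper's actual constraint may rely on a different estimator, and the semisupervised regularisation constraint used for CIFAR-10(4K) is not shown here.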
MDPI and ACS Style

Bai, W.; Quan, C.; Luo, Z.-W. Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders. Appl. Sci. 2019, 9, 2551.

