Article

Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning

by Chi-Ken Lu 1,* and Patrick Shafto 1,2
1 Mathematics and Computer Science, Rutgers University, Newark, NJ 07102, USA
2 School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA
* Author to whom correspondence should be addressed.
Academic Editors: Eric Nalisnick and Dustin Tran
Entropy 2021, 23(11), 1387; https://doi.org/10.3390/e23111387
Received: 1 October 2021 / Revised: 18 October 2021 / Accepted: 20 October 2021 / Published: 23 October 2021
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)
It is desirable to combine the expressive power of deep learning with the Gaussian process (GP) in a single Bayesian learning model. Deep kernel learning achieved this by using a deep network for feature extraction and a GP as the function model, but it was recently observed that, even when trained with the marginal likelihood, the deterministic feature extractor may overfit, a problem that replacing it with a Bayesian network appears to cure. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP remains zero mean. Motivated by the inducing points of sparse GPs, the hyperdata likewise act as function supports, but they are hyperparameters rather than random variables. Following our previous moment matching approach, the marginal prior of the conditional DGP is approximated by a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends on the hyperdata implicitly through the kernel. We show equivalence with deep kernel learning in the limit of dense hyperdata in the latent space; however, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian. Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy, by exploiting the exact covariance together with hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspects of our model and a way of upgrading it to a fully Bayesian inference.
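To make the empirical Bayes step concrete, the following is a minimal sketch (in JAX) of the idea the abstract describes: an intermediate GP is pinned to learnable hyperdata, inputs are warped through its posterior mean, and the hyperdata, being hyperparameters rather than random variables, are fit by gradient descent on the negative GP log marginal likelihood. The simple posterior-mean warp used here in place of an effective kernel, and all names (rbf, warp, neg_log_marginal, z, u), are illustrative assumptions, not the paper's exact moment-matched construction.

import jax
import jax.numpy as jnp

def rbf(x1, x2, ls):
    # Squared-exponential kernel between two 1-D input sets.
    d = (x1[:, None] - x2[None, :]) / ls
    return jnp.exp(-0.5 * d ** 2)

def warp(x, z, u, ls):
    # Posterior mean of the intermediate GP conditioned on hyperdata (z, u).
    Kzz = rbf(z, z, ls) + 1e-6 * jnp.eye(z.shape[0])
    return rbf(x, z, ls) @ jnp.linalg.solve(Kzz, u)

def neg_log_marginal(params, x, y):
    # Negative log marginal likelihood (up to a constant): inputs are warped
    # by the hyperdata-conditioned GP, then fed to a zero-mean outer GP.
    h = warp(x, params["z"], params["u"], jnp.exp(params["log_ls1"]))
    K = rbf(h, h, jnp.exp(params["log_ls2"])) \
        + jnp.exp(params["log_noise"]) * jnp.eye(x.shape[0])
    L = jnp.linalg.cholesky(K)
    alpha = jax.scipy.linalg.cho_solve((L, True), y)
    return 0.5 * y @ alpha + jnp.sum(jnp.log(jnp.diag(L)))

# Empirical Bayes: the hyperdata (z, u) are optimized alongside the kernel
# hyperparameters; they are hyperparameters, not random variables.
x = jnp.linspace(-3.0, 3.0, 40)
y = jnp.sin(2.0 * x) + 0.1 * jax.random.normal(jax.random.PRNGKey(0), (40,))
params = {"z": jnp.linspace(-3.0, 3.0, 8), "u": jnp.linspace(-3.0, 3.0, 8),
          "log_ls1": jnp.array(0.0), "log_ls2": jnp.array(0.0),
          "log_noise": jnp.array(-4.0)}
grad_fn = jax.jit(jax.grad(neg_log_marginal))
for _ in range(500):
    g = grad_fn(params, x, y)
    params = jax.tree_util.tree_map(lambda p, dp: p - 1e-2 * dp, params, g)

In this toy version, making the hyperdata z dense in the latent space lets the warp realize an essentially arbitrary feature map, which is the sense in which the construction approaches deep kernel learning.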
Keywords: deep Gaussian process; approximate inference; deep kernel learning; Bayesian learning; moment matching; inducing points; neural network
MDPI and ACS Style

Lu, C.-K.; Shafto, P. Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning. Entropy 2021, 23, 1387. https://doi.org/10.3390/e23111387

AMA Style

Lu C-K, Shafto P. Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning. Entropy. 2021; 23(11):1387. https://doi.org/10.3390/e23111387

Chicago/Turabian Style

Lu, Chi-Ken, and Patrick Shafto. 2021. "Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning" Entropy 23, no. 11: 1387. https://doi.org/10.3390/e23111387

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
