Article

Differentiable PAC–Bayes Objectives with Partially Aggregated Neural Networks

by Felix Biggs 1 and Benjamin Guedj 1,2,*
1 Centre for Artificial Intelligence, Department of Computer Science, University College London, London WC1V 6LJ, UK
2 Inria Lille—Nord Europe Research Centre and Inria London, 59800 Lille, France
* Author to whom correspondence should be addressed.
Academic Editor: Udo von Toussaint
Entropy 2021, 23(10), 1280; https://doi.org/10.3390/e23101280
Received: 22 August 2021 / Revised: 23 September 2021 / Accepted: 27 September 2021 / Published: 29 September 2021
(This article belongs to the Special Issue Approximate Bayesian Inference)
We make two related contributions motivated by the challenge of training stochastic neural networks, particularly in a PAC–Bayesian setting: (1) we show how averaging over an ensemble of stochastic neural networks enables a new class of partially aggregated estimators, and we prove that these lead to unbiased, lower-variance output and gradient estimators; (2) we reformulate a PAC–Bayesian bound for signed-output networks to derive, in combination with the above, a directly optimisable, differentiable objective and a generalisation guarantee, without using a surrogate loss or loosening the bound. We show empirically that this leads to competitive generalisation guarantees and compares favourably to other methods for training such networks. Finally, we note that the above leads to a simpler PAC–Bayesian training scheme for sign-activation networks than previous work.
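For context only: the generalisation guarantee referred to above is of the PAC–Bayes-kl type. The standard form of that bound (due to Seeger and Maurer) is sketched below; it is not the paper's reformulation for signed-output networks, but it indicates the quantity that a directly optimisable PAC–Bayes objective has to control. With probability at least 1 - δ over an i.i.d. sample S of size m, simultaneously for every posterior Q over predictors and any fixed, data-independent prior P,

\[
\mathrm{kl}\big(\hat{R}_S(Q) \,\big\|\, R(Q)\big) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{m}/\delta\big)}{m},
\]

where \hat{R}_S(Q) = \mathbb{E}_{h \sim Q}[\hat{R}_S(h)] is the Q-averaged (Gibbs) empirical risk on S, R(Q) = \mathbb{E}_{h \sim Q}[R(h)] is the corresponding population risk, and \mathrm{kl}(q \,\|\, p) = q \ln(q/p) + (1 - q)\ln((1 - q)/(1 - p)) is the Kullback–Leibler divergence between Bernoulli distributions with parameters q and p.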
Keywords: statistical learning theory; PAC–Bayes theory; deep learning
MDPI and ACS Style

Biggs, F.; Guedj, B. Differentiable PAC–Bayes Objectives with Partially Aggregated Neural Networks. Entropy 2021, 23, 1280. https://doi.org/10.3390/e23101280

