Search Results (4)

Search Parameters:
Keywords = PAC–Bayes theory

26 pages, 7587 KB  
Article
PAC–Bayes Guarantees for Data-Adaptive Pairwise Learning
by Sijia Zhou, Yunwen Lei and Ata Kabán
Entropy 2025, 27(8), 845; https://doi.org/10.3390/e27080845 - 8 Aug 2025
Viewed by 1321
Abstract
We study the generalization properties of stochastic optimization methods under adaptive data sampling schemes, focusing on the setting of pairwise learning, which is central to tasks like ranking, metric learning, and AUC maximization. Unlike pointwise learning, pairwise methods must address statistical dependencies between input pairs, a challenge that existing analyses do not adequately handle when sampling is adaptive. In this work, we extend a general framework that integrates two algorithm-dependent approaches, algorithmic stability and PAC–Bayes analysis, for this purpose. Specifically, we examine (1) Pairwise Stochastic Gradient Descent (Pairwise SGD), widely used across machine learning applications, and (2) Pairwise Stochastic Gradient Descent Ascent (Pairwise SGDA), common in adversarial training. Our analysis avoids artificial randomization and instead leverages the inherent stochasticity of gradient updates. Our results yield generalization guarantees of order n^{-1/2} under non-uniform adaptive sampling strategies, covering both smooth and non-smooth convex settings. We believe these findings address a significant gap in the theory of pairwise learning with adaptive sampling.
(This article belongs to the Section Information Theory, Probability and Statistics)
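To make the pairwise setting above concrete, here is a minimal sketch of pairwise SGD for a ranking-style objective. The logistic pairwise loss, uniform pair sampling, and 1/sqrt(t) step size are our illustrative assumptions; the paper's results concern non-uniform, data-adaptive sampling and are not reproduced here.

```python
import numpy as np

def pairwise_logistic_loss_grad(w, xi, xj, yij):
    """Gradient of the logistic pairwise ranking loss
    l(w; (xi, yi), (xj, yj)) = log(1 + exp(-yij * <w, xi - xj>)),
    where yij = +1 if example i should rank above j, else -1."""
    d = xi - xj
    margin = yij * d.dot(w)
    # d/dw log(1 + exp(-margin)) = -yij * d / (1 + exp(margin))
    return -yij * d / (1.0 + np.exp(margin))

def pairwise_sgd(X, y, n_iters=1000, step=0.1, seed=None):
    """Pairwise SGD: at each step, sample a pair of examples (here
    uniformly; the paper studies non-uniform, adaptive sampling) and
    take a gradient step on the pairwise loss."""
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    w = np.zeros(dim)
    for t in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        if y[i] == y[j]:
            continue  # only pairs with different labels are informative
        yij = 1.0 if y[i] > y[j] else -1.0
        w -= step / np.sqrt(t + 1) * pairwise_logistic_loss_grad(w, X[i], X[j], yij)
    return w
```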

13 pages, 418 KB  
Article
Still No Free Lunches: The Price to Pay for Tighter PAC-Bayes Bounds
by Benjamin Guedj and Louis Pujol
Entropy 2021, 23(11), 1529; https://doi.org/10.3390/e23111529 - 18 Nov 2021
Cited by 13 | Viewed by 3264
Abstract
“No free lunch” results state the impossibility of obtaining meaningful bounds on the error of a learning algorithm without prior assumptions and modelling, and such assumptions are more or less realistic depending on the problem. Some models are “expensive” (strong assumptions, such as sub-Gaussian tails), others are “cheap” (simply finite variance). As is well known, the more you pay, the more you get: in other words, the most expensive models yield the most interesting bounds. Recent advances in robust statistics have investigated procedures to obtain tight bounds while keeping the cost of assumptions minimal. The present paper explores and exhibits the limits of obtaining tight probably approximately correct (PAC)-Bayes bounds in a robust setting for cheap models.
(This article belongs to the Special Issue Approximate Bayesian Inference)
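For background on what a "tight PAC-Bayes bound" looks like, the classical bounded-loss bound that the robust, cheap-model setting relaxes can be stated as follows. This McAllester-style form is standard material, not a result of the paper.

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% for a loss taking values in [0, 1] and any fixed prior \pi,
% simultaneously for all posteriors \rho:
\mathbb{E}_{h \sim \rho}\big[L(h)\big]
  \;\le\; \mathbb{E}_{h \sim \rho}\big[\widehat{L}_n(h)\big]
  \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\frac{2\sqrt{n}}{\delta}}{2n}}
```

The "expensive" (e.g., bounded or sub-Gaussian) and "cheap" (finite-variance) regimes differ in how well the empirical term above can concentrate; the paper studies the limits of tightening such bounds in the cheap regime.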

20 pages, 424 KB  
Article
PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses
by Maxime Haddouche, Benjamin Guedj, Omar Rivasplata and John Shawe-Taylor
Entropy 2021, 23(10), 1330; https://doi.org/10.3390/e23101330 - 12 Oct 2021
Cited by 30 | Viewed by 4080
Abstract
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions. This extends the relevance and applicability of the PAC-Bayes learning framework, where most of the existing literature focuses on supervised learning problems with a bounded loss function (typically assumed to take values in the interval [0, 1]). In order to relax this classical assumption, we propose to allow the range of the loss to depend on each predictor. This relaxation is captured by our new notion of HYPothesis-dependent rangE (HYPE). Based on this, we derive a novel PAC-Bayesian generalisation bound for unbounded loss functions, and we instantiate it on a linear regression problem. To make our theory usable by the largest audience possible, we include discussions on actual computation, practicality and limitations of our assumptions.
(This article belongs to the Special Issue Approximate Bayesian Inference)
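As a small illustration of the hypothesis-dependent range idea on the paper's linear regression instantiation: with squared loss and bounded data, each predictor w has a loss range bounded by a function K(w) of its norm, so a bound can scale with the predictor actually used rather than a worst case. The bound below follows from the Cauchy-Schwarz inequality; the function name and constants are ours, a sketch rather than the paper's exact construction.

```python
import numpy as np

def hype_range(w, x_bound, y_bound):
    """Hypothesis-dependent range K(w) for the squared loss
    l(w; x, y) = (<w, x> - y)^2, assuming ||x|| <= x_bound and
    |y| <= y_bound. Cauchy-Schwarz gives |<w, x>| <= ||w|| * x_bound,
    so the loss never exceeds (||w|| * x_bound + y_bound)^2."""
    return (np.linalg.norm(w) * x_bound + y_bound) ** 2

# A predictor with small norm has a small loss range, which a
# hypothesis-dependent bound can exploit where a uniform bounded-loss
# assumption would fail.
print(hype_range(np.array([0.1, -0.2]), x_bound=1.0, y_bound=1.0))
```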

14 pages, 326 KB  
Article
Differentiable PAC–Bayes Objectives with Partially Aggregated Neural Networks
by Felix Biggs and Benjamin Guedj
Entropy 2021, 23(10), 1280; https://doi.org/10.3390/e23101280 - 29 Sep 2021
Cited by 23 | Viewed by 3535
Abstract
We make two related contributions motivated by the challenge of training stochastic neural networks, particularly in a PAC–Bayesian setting: (1) we show how averaging over an ensemble of stochastic neural networks enables a new class of partially-aggregated estimators, proving that these lead to unbiased, lower-variance output and gradient estimators; (2) we reformulate a PAC–Bayesian bound for signed-output networks to derive, in combination with the above, a directly optimisable, differentiable objective and a generalisation guarantee, without using a surrogate loss or loosening the bound. We show empirically that this leads to competitive generalisation guarantees and compares favourably to other methods for training such networks. Finally, we note that the above leads to a simpler PAC–Bayesian training scheme for sign-activation networks than previous work.
(This article belongs to the Special Issue Approximate Bayesian Inference)
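To illustrate the partial-aggregation idea, here is a toy two-layer stochastic sign-activation network in which the first stochastic layer is sampled and the second is replaced by its exact conditional expectation. The architecture and distributions are our own assumptions for this sketch, not the paper's construction; the estimator is unbiased by the tower property and has lower variance by Rao-Blackwellisation, which is the effect the abstract describes.

```python
import numpy as np

def sample_signs(pre_act, rng):
    """Sample h_k in {-1, +1} with P(h_k = +1) = sigmoid(2 * pre_act_k),
    chosen so that E[h_k] = tanh(pre_act_k) exactly."""
    p = 1.0 / (1.0 + np.exp(-2.0 * pre_act))
    return np.where(rng.random(pre_act.shape) < p, 1.0, -1.0)

def fully_sampled(x, W1, W2, w3, rng):
    """Naive unbiased estimator: sample both stochastic layers."""
    h1 = sample_signs(W1 @ x, rng)
    h2 = sample_signs(W2 @ h1, rng)
    return w3 @ h2

def partially_aggregated(x, W1, W2, w3, rng):
    """Partially-aggregated estimator: sample the first layer, then
    take the exact conditional expectation of the second. Unbiased by
    the tower property, with lower variance (Rao-Blackwell)."""
    h1 = sample_signs(W1 @ x, rng)
    return w3 @ np.tanh(W2 @ h1)

rng = np.random.default_rng(0)
W1, W2, w3 = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=8)
x = rng.normal(size=4)
naive = [fully_sampled(x, W1, W2, w3, rng) for _ in range(5000)]
agg = [partially_aggregated(x, W1, W2, w3, rng) for _ in range(5000)]
print(np.mean(naive), np.var(naive))  # approximately the same mean ...
print(np.mean(agg), np.var(agg))      # ... with noticeably lower variance
```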