Search Results (5)

Search Parameters:
Keywords = PAC-Bayes bounds

16 pages, 315 KiB  
Article
On a Low-Rank Matrix Single-Index Model
by The Tien Mai
Mathematics 2023, 11(9), 2065; https://doi.org/10.3390/math11092065 - 26 Apr 2023
Cited by 2 | Viewed by 1535
Abstract
In this paper, we conduct a theoretical examination of a low-rank matrix single-index model. This model has recently been introduced in the field of biostatistics, but its theoretical properties for jointly estimating the link function and the coefficient matrix have not yet been fully explored. We make use of the PAC-Bayesian bounds technique to provide a thorough theoretical understanding of the joint estimation of the link function and the coefficient matrix, allowing us to gain deeper insight into the properties of this model and its potential applications in different fields.
(This article belongs to the Special Issue New Advances in High-Dimensional and Non-asymptotic Statistics)
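For context, a matrix single-index model of this kind typically takes the following form; the notation below is an illustrative sketch, not taken from the paper:

```latex
% Illustrative sketch of a low-rank matrix single-index model (assumed
% notation): X_i is a p x q matrix covariate, M* the unknown coefficient
% matrix, f* the unknown link function, and eps_i a noise term.
\[
  y_i = f^{*}\bigl(\langle X_i, M^{*}\rangle\bigr) + \varepsilon_i,
  \qquad
  \langle X_i, M^{*}\rangle = \operatorname{tr}\bigl(X_i^{\top} M^{*}\bigr),
  \qquad
  \operatorname{rank}(M^{*}) \le r \ll \min(p, q).
\]
```

The PAC-Bayesian analysis then bounds the error of estimating the pair (f*, M*) jointly, rather than treating the link function as known.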
27 pages, 422 KiB  
Article
From Bilinear Regression to Inductive Matrix Completion: A Quasi-Bayesian Analysis
by The Tien Mai
Entropy 2023, 25(2), 333; https://doi.org/10.3390/e25020333 - 11 Feb 2023
Cited by 4 | Viewed by 1977
Abstract
In this paper, we study the problem of bilinear regression, a type of statistical modeling that deals with multiple variables and multiple responses. One of the main difficulties that arise in this problem is the presence of missing data in the response matrix, a problem known as inductive matrix completion. To address these issues, we propose a novel approach that combines elements of Bayesian statistics with a quasi-likelihood method. Our proposed method starts by addressing the problem of bilinear regression using a quasi-Bayesian approach. The quasi-likelihood method that we employ in this step allows us to handle the complex relationships between the variables in a more robust way. Next, we adapt our approach to the context of inductive matrix completion. We make use of a low-rankness assumption and leverage the powerful PAC-Bayes bound technique to provide statistical properties for our proposed estimators and for the quasi-posteriors. To compute the estimators, we propose a Langevin Monte Carlo method to obtain approximate solutions to the problem of inductive matrix completion in a computationally efficient manner. To demonstrate the effectiveness of our proposed methods, we conduct a series of numerical studies. These studies allow us to evaluate the performance of our estimators under different conditions and provide a clear illustration of the strengths and limitations of our approach.
(This article belongs to the Special Issue Recent Advances in Statistical Theory and Applications)
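The Langevin Monte Carlo step mentioned in the abstract can be illustrated with a generic sketch. The following is a minimal, hypothetical implementation of unadjusted Langevin sampling from a quasi-posterior over a matrix parameter, simplified here to plain matrix completion on observed entries; it is not the paper's code, and the function names, the squared-error quasi-likelihood, and the Gaussian prior are all assumptions:

```python
# Minimal sketch (NOT the paper's implementation): unadjusted Langevin
# Monte Carlo targeting a quasi-posterior over a matrix parameter M.
# Assumed quasi-posterior: squared-error quasi-likelihood on the
# observed entries (mask Omega) times a Gaussian prior on M.
import numpy as np

def langevin_mc(grad_log_post, M0, step=1e-4, n_iter=5_000, seed=0):
    """Iterate M <- M + step * grad log pi(M) + sqrt(2*step) * Gaussian noise."""
    rng = np.random.default_rng(seed)
    M = M0.copy()
    samples = []
    for _ in range(n_iter):
        M = M + step * grad_log_post(M) \
              + np.sqrt(2.0 * step) * rng.standard_normal(M.shape)
        samples.append(M.copy())
    return samples

def make_grad_log_post(Y, Omega, lam=1.0, tau=1.0):
    """Gradient of the log quasi-posterior
    -lam/2 * ||Omega * (Y - M)||_F^2 - ||M||_F^2 / (2 * tau^2)."""
    def grad_log_post(M):
        return lam * Omega * (Y - M) - M / tau**2
    return grad_log_post

# Usage on toy data: average the trailing draws as a point estimate.
rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 15))
Omega = rng.random((20, 15)) < 0.3          # ~30% of entries observed
grad = make_grad_log_post(Y * Omega, Omega)
draws = langevin_mc(grad, M0=np.zeros_like(Y), step=1e-3)
M_hat = np.mean(draws[-1_000:], axis=0)
```

A plain Gaussian prior stands in here for the low-rank-inducing prior discussed in the paper; the point is only the shape of the Langevin update.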
13 pages, 418 KiB  
Article
Still No Free Lunches: The Price to Pay for Tighter PAC-Bayes Bounds
by Benjamin Guedj and Louis Pujol
Entropy 2021, 23(11), 1529; https://doi.org/10.3390/e23111529 - 18 Nov 2021
Cited by 10 | Viewed by 2907
Abstract
“No free lunch” results state the impossibility of obtaining meaningful bounds on the error of a learning algorithm without prior assumptions and modelling, which may be more or less realistic for a given problem. Some models are “expensive” (strong assumptions, such as sub-Gaussian tails), others are “cheap” (simply finite variance). As is well known, the more you pay, the more you get: in other words, the most expensive models yield the most interesting bounds. Recent advances in robust statistics have investigated procedures to obtain tight bounds while keeping the cost of assumptions minimal. The present paper explores and exhibits what the limits are for obtaining tight probably approximately correct (PAC)-Bayes bounds in a robust setting for cheap models.
(This article belongs to the Special Issue Approximate Bayesian Inference)
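For orientation, the kind of guarantee being tightened is the classical PAC-Bayesian inequality for losses in [0, 1]; the McAllester-style form below (in Maurer's refinement) is quoted for context rather than taken from the paper:

```latex
% Classical PAC-Bayes bound (loss bounded in [0,1]), stated for context.
% pi is a data-free prior over hypotheses, rho any posterior, n the
% sample size; with probability at least 1 - delta over the sample,
\[
  \forall \rho:\quad
  \mathbb{E}_{h \sim \rho}\bigl[R(h)\bigr]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\bigl[\widehat{R}_n(h)\bigr]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}.
\]
```

The paper asks what survives of such guarantees when the boundedness (or sub-Gaussian) assumption is weakened to mere finite variance.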
20 pages, 424 KiB  
Article
PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses
by Maxime Haddouche, Benjamin Guedj, Omar Rivasplata and John Shawe-Taylor
Entropy 2021, 23(10), 1330; https://doi.org/10.3390/e23101330 - 12 Oct 2021
Cited by 27 | Viewed by 3528
Abstract
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions. This extends the relevance and applicability of the PAC-Bayes learning framework, where most of the existing literature focuses on supervised learning problems with a bounded loss function (typically assumed to take values in the interval [0, 1]). In order to relax this classical assumption, we propose to allow the range of the loss to depend on each predictor. This relaxation is captured by our new notion of HYPothesis-dependent rangE (HYPE). Based on this, we derive a novel PAC-Bayesian generalisation bound for unbounded loss functions, and we instantiate it on a linear regression problem. To make our theory usable by the largest audience possible, we include discussions on actual computation, practicality and limitations of our assumptions.
(This article belongs to the Special Issue Approximate Bayesian Inference)
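The HYPE condition can be paraphrased from the abstract as follows; the exact statement and constants are in the paper, so treat this as an approximation:

```latex
% Paraphrase of the HYPothesis-dependent rangE (HYPE) condition: the
% loss need not be uniformly bounded, only bounded predictor-by-predictor
% by a function K of the hypothesis h.
\[
  \ell(h, z) \;\le\; K(h)
  \quad \text{for all data points } z,
  \qquad K : \mathcal{H} \to \mathbb{R}_{+},
\]
% which recovers the classical bounded setting when K is constant.
```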

39 pages, 480 KiB  
Article
PAC-Bayes Bounds on Variational Tempered Posteriors for Markov Models
by Imon Banerjee, Vinayak A. Rao and Harsha Honnappa
Entropy 2021, 23(3), 313; https://doi.org/10.3390/e23030313 - 6 Mar 2021
Cited by 3 | Viewed by 2893
Abstract
Datasets displaying temporal dependencies abound in science and engineering applications, with Markov models representing a simplified and popular view of the temporal dependence structure. In this paper, we consider Bayesian settings that place prior distributions over the parameters of the transition kernel of a Markov model, and seek to characterize the resulting, typically intractable, posterior distributions. We present a Probably Approximately Correct (PAC)-Bayesian analysis of variational Bayes (VB) approximations to tempered Bayesian posterior distributions, bounding the model risk of the VB approximations. Tempered posteriors are known to be robust to model misspecification, and their variational approximations do not suffer the usual problems of overconfident approximations. Our results tie the risk bounds to the mixing and ergodic properties of the Markov data-generating model. We illustrate the PAC-Bayes bounds through a number of example Markov models, and also consider the situation where the Markov model is misspecified.
(This article belongs to the Special Issue Approximate Bayesian Inference)
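The objects analysed here are standard, so a short reminder may help; the notation below is assumed rather than quoted from the paper:

```latex
% Tempered posterior at temperature alpha in (0, 1]: the likelihood is
% raised to the power alpha before being combined with the prior pi.
\[
  \pi_{n,\alpha}(\theta \mid x_{1:n})
  \;\propto\;
  p(x_{1:n} \mid \theta)^{\alpha}\, \pi(\theta),
\]
% and its variational Bayes (VB) approximation is the KL-projection onto
% a tractable family F of distributions:
\[
  q^{*} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{F}}
  \mathrm{KL}\bigl(q \,\|\, \pi_{n,\alpha}(\cdot \mid x_{1:n})\bigr).
\]
```

The paper's PAC-Bayes bounds then control the model risk of q* in terms of the mixing and ergodic properties of the Markov chain generating x_{1:n}.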