PAC-Bayes Unleashed: Generalisation Bounds with Unbounded Losses

We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions. This extends the relevance and applicability of the PAC-Bayes learning framework, where most of the existing literature focuses on supervised learning problems with a bounded loss function (typically assumed to take values in the interval [0;1]). In order to relax this classical assumption, we propose to allow the range of the loss to depend on each predictor. This relaxation is captured by our new notion of HYPothesis-dependent rangE (HYPE). Based on this, we derive a novel PAC-Bayesian generalisation bound for unbounded loss functions, and we instantiate it on a linear regression problem. To make our theory usable by the largest audience possible, we include discussions on actual computation, practicality and limitations of our assumptions.


Introduction
Since its emergence in the late 1990s, the PAC-Bayes theory (see the seminal works of [1][2][3], the recent survey by [4] and work by [5]) has been a powerful tool to obtain generalisation bounds and to derive efficient learning algorithms. Generalisation bounds are helpful for understanding how a learning algorithm may perform on future similar batches of data. While classical generalisation bounds typically address the performance of individual predictors from a given hypothesis class, PAC-Bayes bounds typically address a randomised predictor defined by a distribution over the hypothesis class.
PAC-Bayes bounds were originally meant for binary classification problems [6][7][8], but the literature now includes many contributions involving any bounded loss function (without loss of generality, with values in [0; 1]), not just the binary loss. Our goal is to provide new PAC-Bayes bounds that are valid for unbounded loss functions, and thus extend the usability of PAC-Bayes to a much larger class of learning problems. To do so, we reformulate the general PAC-Bayes theorem of [9] and use it as a basic building block to derive our new PAC-Bayes bound.
Some ways to circumvent the bounded range assumption on the losses have been explored in the recent literature. For instance, one approach consists of assuming a tail decay rate on the loss, such as sub-gaussian or sub-exponential tails [10,11]; however, this approach requires the knowledge of additional parameters. Other works have looked into the analysis of heavy-tailed losses, e.g., ref. [12] proposed a polynomial moment-dependent bound with f-divergences, while [13] devised an exponential bound that assumes the second (uncentred) moment of the loss is bounded by a constant (with a truncated risk estimator, as recalled in Section 4 below). A somewhat related approach was explored by [14], who do not assume boundedness of the loss, but instead control higher-order moments of the generalisation gap through the Efron-Stein variance proxy. See also [5].
We investigate a different route here. We introduce the HYPothesis-dependent rangE (HYPE) condition, which means that the loss is upper-bounded by a term that depends on the chosen predictor (but does not depend on the data). Thus, effectively, the loss may have an arbitrarily large range. The HYPE condition allows us to derive an upper bound on the exponential moment of a suitably chosen functional, which, combined with the general PAC-Bayes theorem, leads to our new PAC-Bayes bound. To illustrate it, we instantiate the new bound on a linear regression problem, which additionally serves the purpose of illustrating that our HYPE condition is easy to verify in practice, given an explicit formulation of the loss function. In particular, we shall see in the linear regression setting that a mere use of the triangle inequality is enough to check the HYPE condition. The technical assumptions on which our results are based are comparable to those of the classical PAC-Bayes bounds; we state them in full detail, with discussions, for the sake of clarity and to make our work accessible.
Our contributions are twofold. (i) We propose PAC-Bayesian bounds holding with unbounded loss functions, therefore overcoming a limitation of the mainstream PAC-Bayesian literature for which a bounded loss is usually assumed. (ii) We analyse the bound, its implications, limitations of our assumptions, and their usability by practitioners. We hope this will extend the PAC-Bayes framework into a widely usable tool for a significantly wider range of problems, such as unbounded regression or reinforcement learning problems with unbounded rewards.
Outline. Section 2 introduces our notation and definition of the HYPE condition and provides a general PAC-Bayesian bound, which is valid for any learning problem complying with a mild assumption. For the sake of completeness, we present how our approach (designed for the unbounded case) behaves in the bounded case (Section 3). This section is not the core of our work, but rather serves as a safety check and particularises our bound to more classical PAC-Bayesian assumptions. We also provide numerical experiments. Section 4 introduces the notion of softening functions and particularises Section 2's PAC-Bayesian bound. In particular, we make explicit all terms in the right-hand side. Section 5.1 extends our results to linear regression (which has been studied from the perspective of PAC-Bayes in the literature, most recently by [15]). We also experimentally illustrate the behaviour of our bound. Finally, Section 6 presents, in detail, related works and Section 7 contains all proofs of the original claims we make in the paper.

Framework and Preliminary Results
The learning problem is specified by three variables (H, Z, ℓ) consisting of a set H of predictors, the data space Z, and a loss function ℓ : H × Z → R⁺.
For a given positive integer m, we consider datasets of size m. The space of all possible datasets of this fixed size is S = Z^m; an arbitrary element of this space is s = (z₁, . . . , z_m). We denote by S a random dataset: S = (Z₁, . . . , Z_m), where the random data points Z_i are independent and sampled from the same distribution µ over Z. We call µ the data-generating distribution. The assumption that the Z_i's are independent and identically distributed is typically called the i.i.d. data assumption; it means that the random sample S (of size m) has distribution µ^⊗m, the product of m copies of µ.
For any predictor h ∈ H, we define the empirical risk of h over a sample s, denoted R_s(h), and the theoretical risk of h, denoted R(h), as:

R_s(h) := (1/m) ∑_{i=1}^m ℓ(h, z_i) and R(h) := E_{Z∼µ}[ℓ(h, Z)].

We use a similar convention for expectations related to any other distributions and random quantities. We now introduce the key concept of our analysis.

Definition 1. (HYPE). A loss function ℓ : H × Z → R⁺ is said to satisfy the hypothesis-dependent range (HYPE) condition if there exists a function K : H → R⁺\{0} such that sup_{z∈Z} ℓ(h, z) ≤ K(h) for every predictor h ∈ H. We then say that ℓ is HYPE(K) compliant.
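To make Definition 1 concrete, the following sketch (with made-up constants B, C and a squared loss, all hypothetical and not taken from the paper) checks numerically that a loss can be HYPE(K) compliant with a predictor-dependent, arbitrarily large range:

```python
import numpy as np

# Hypothetical example of the HYPE condition (Definition 1): the squared
# loss ell(h_w, (x, y)) = (<w, x> - y)^2 of a linear predictor, on data
# with ||x|| <= B and |y| <= C, is HYPE(K) compliant with
# K(h_w) = (B * ||w|| + C)^2 by Cauchy-Schwarz and the triangle inequality.
rng = np.random.default_rng(0)
B, C, d = 2.0, 1.0, 5

def sq_loss(w, x, y):
    return (float(w @ x) - y) ** 2

def K(w):
    # Hypothesis-dependent range: sup_z ell(h_w, z) <= (B * ||w|| + C)^2.
    return (B * np.linalg.norm(w) + C) ** 2

w = rng.normal(size=d)
violations = 0
for _ in range(1000):
    x = rng.normal(size=d)
    x *= B * rng.uniform() / np.linalg.norm(x)  # enforce ||x|| <= B
    y = rng.uniform(-C, C)
    if sq_loss(w, x, y) > K(w) + 1e-9:
        violations += 1
```

Note that K(w) grows without bound with ‖w‖, so the loss itself has no uniform upper bound over H.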
Let M₁⁺(H) be the set of probability distributions on H. We assume that all considered probability measures on H are defined on a fixed σ-algebra over H; the notation M₁⁺(H) hides the σ-algebra for simplicity. For P, P′ ∈ M₁⁺(H), the notation P′ ≪ P indicates that P′ is absolutely continuous with respect to P (i.e., P′(A) = 0 whenever P(A) = 0 for measurable A ⊆ H). We write P′ ∼ P to indicate that P′ ≪ P and P ≪ P′, i.e., these two distributions are absolutely continuous with respect to each other.
We now recall a result from Germain et al. [9]. Note that, while it is implicit in many PAC-Bayes works (including theirs), we make explicit that both the prior P and the posterior Q must be absolutely continuous with respect to each other. We discuss this restriction below.

Theorem 1. (Adapted from [9], Theorem 2.1.) For any P ∈ M₁⁺(H) with no dependency on data and any function F : R⁺ × R⁺ → R, define the exponential moment:

χ := E_S E_{h∼P}[e^{F(R_S(h), R(h))}].

If F is convex, then for any δ ∈ (0, 1], with probability at least 1 − δ over random samples S, simultaneously for all Q ∈ M₁⁺(H) such that Q ∼ P, we have:

F(E_{h∼Q}[R_S(h)], E_{h∼Q}[R(h)]) ≤ KL(Q‖P) + log(χ/δ).

The proof is deferred to Section 7.1. Note that the proof in [9] requires P ≪ Q, although this is not explicitly stated; we highlight it in our own proof. While Q ≪ P is classical and necessary for KL(Q‖P) to be meaningful, P ≪ Q appears to be more restrictive. In particular, we have to choose Q with the exact same support as P (e.g., pairing a Gaussian prior with a truncated Gaussian posterior is not possible). However, we can still apply our theorem when P and Q belong to the same parametric family of full-support distributions, e.g., both Gaussian or both Laplace distributions, and there are many other examples.
Note that Alquier et al. [10] (Theorem 4.1) adapted a result from Catoni [8] which only requires Q ≪ P. This comes at the expense of what Alquier et al. [10] (Definition 2.3) called a Hoeffding assumption, namely that the exponential moment χ is bounded by a function of the hyperparameters only (such as the dataset size m or the parameters appearing in the Hoeffding assumption). Our analysis does not require this assumption, which may prove restrictive in practice.
Theorem 1 may be seen as a basis to recover many classical PAC-Bayesian bounds. For instance, F(x, y) = 2m(x − y)² recovers McAllester's bound as recalled in [4] (Theorem 1). To get a usable bound, the outstanding task is to bound the exponential moment χ. Note that a previous attempt was made in [11], as described in Section 6.1 below. Furthermore, under the assumption that the distribution P has no dependency on the data, we may swap the order of integration in the exponential moment, thanks to the Fubini-Tonelli theorem and the positivity of the exponential:

χ = E_S E_{h∼P}[e^{F(R_S(h), R(h))}] = E_{h∼P} E_S[e^{F(R_S(h), R(h))}].

This is the starting point for the way the exponential moment has been handled in several works in the PAC-Bayes literature: essentially, for a fixed h, one upper-bounds the innermost expectation (with respect to S) using standard exponential moment inequalities.
In this work, we will use Theorem 1 with F(x, y) = m^α D(x, y), where α > 0 and D : R⁺ × R⁺ → R is a convex function. In this case, the high-probability inequality of the theorem takes the form:

D(E_{h∼Q}[R_S(h)], E_{h∼Q}[R(h)]) ≤ (1/m^α)(KL(Q‖P) + log(χ/δ)).  (1)

Our goal is to control E_S[e^{m^α D(R_S(h), R(h))}] for a fixed h, when D(x, y) = y − x; this will readily give us control on the exponential moment χ. To do so, we propose the following theorem.

Theorem 2. Let h ∈ H be a fixed predictor and α ∈ R. If the loss function ℓ is HYPE(K) compliant, then:

E_S[e^{m^α (R(h) − R_S(h))}] ≤ exp(K(h)²/(2 m^{1−2α})).

Proof. Let h ∈ H. Then, by independence of the Z_i:

E_S[e^{m^α (R(h) − R_S(h))}] = ∏_{i=1}^m E_{Z_i}[e^{m^{α−1} (R(h) − ℓ(h, Z_i))}].

We now apply Hoeffding's lemma: for any i ∈ {1..m}, the random variable R(h) − ℓ(h, Z_i) is centred and takes values in [−K(h), K(h)], so that E_{Z_i}[e^{m^{α−1}(R(h) − ℓ(h, Z_i))}] ≤ exp(m^{2(α−1)} K(h)²/2), and finally:

E_S[e^{m^α (R(h) − R_S(h))}] ≤ exp(m^{2α−1} K(h)²/2) = exp(K(h)²/(2 m^{1−2α})).

The strength of this result lies in the fact that the factor K(h)²/m^{1−2α} is decreasing in m when α < 1/2 (and constant when α = 1/2); more generally, one can control how fast the exponential moment grows with m through the choice of the hyperparameter α.
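The inequality of Theorem 2 can be checked by Monte-Carlo simulation; the sketch below uses an illustrative fixed predictor whose losses are uniform on [0, K(h)] (so that R(h) = K(h)/2), which are assumptions of this example rather than of the theorem:

```python
import numpy as np

# Monte-Carlo sanity check of Theorem 2: for a fixed h with losses in
# [0, K(h)], the exponential moment E_S exp(m^alpha (R(h) - R_S(h)))
# should stay below exp(K(h)^2 / (2 m^(1 - 2*alpha))).
rng = np.random.default_rng(1)
m, alpha, K_h = 50, 0.5, 1.0

R = K_h / 2.0                                   # true risk for Uniform[0, K_h]
samples = rng.uniform(0.0, K_h, size=(20000, m))  # 20000 simulated datasets
R_S = samples.mean(axis=1)                      # empirical risks
moment = np.mean(np.exp(m**alpha * (R - R_S)))  # MC estimate of the moment
bound = np.exp(K_h**2 / (2 * m**(1 - 2 * alpha)))
```

With α = 1/2 the bound is exp(K(h)²/2), independent of m, while the estimated moment sits well below it.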
For convenient cross-referencing, we state the following rewriting of Theorem 1.

Theorem 3.
Let the loss ℓ be HYPE(K) compliant. For any P ∈ M₁⁺(H) with no data dependency, for any α ∈ R and any δ ∈ (0, 1], with probability at least 1 − δ over size-m random samples S, simultaneously for all Q such that Q ∼ P, we have:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_S(h)] + (KL(Q‖P) + log(1/δ))/m^α + (1/m^α) log E_{h∼P}[e^{K(h)²/(2 m^{1−2α})}].

Proof. We first apply Theorem 1 with F(x, y) = m^α(y − x); more precisely, we use Equation (1) with D(x, y) = y − x. We then conclude with Theorem 2.

Theoretical Results
At this stage, the reader might wonder whether this new approach allows for the recovery of known results in the bounded case: the answer is yes.
In this section, we study the case where ℓ is bounded by some constant C ∈ R⁺\{0}; in other words, we consider the case where sup_h sup_z ℓ(h, z) ≤ C. We provide a bound, valid for any choice of "priors" P and "posteriors" Q such that P ∼ Q, which is an immediate corollary of Theorem 3.

Proposition 1. Let ℓ be HYPE(K) compliant with K(h) = C constant, and let α ∈ R. Let P ∈ M₁⁺(H) be a distribution with no data dependency. Then, for any δ ∈ (0, 1], with probability at least 1 − δ over random m-samples S, simultaneously for all Q ∈ M₁⁺(H) such that Q ∼ P, we have:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_S(h)] + (KL(Q‖P) + log(1/δ))/m^α + C²/(2 m^{1−α}).

Remark 1.
We provide Proposition 1 to evaluate the robustness of our approach, for instance by comparing it with the PAC-Bayesian bound of Germain et al. [11]. This discussion can be found in Section 6.1, where the bound from Germain et al. [11] is presented in detail.

Remark 2.
At first glance, a naive remark: to give all the terms of the bound in Proposition 1 the same rate of convergence (as is often the case in classical PAC-Bayesian bounds), the only case of interest is α = 1/2. However, note that the factor C² cannot be optimised, while the KL term can. Hence, if C² turns out to be too large in practice, one wants the ability to attenuate its influence as much as possible, which may lead us to consider α < 1/2. The following lemma answers this question.

Lemma 1. For any given K₁ > 0, the function f_{K₁}(α) := K₁/m^α + C²/(2 m^{1−α}) is minimised at α* = (1/2)(1 + log(2K₁/C²)/log(m)).
Proof. Differentiating f_{K₁} explicitly and solving f′_{K₁}(α) = 0 provides the result.
Remark 3. Lemma 1 indicates that, with a fixed "prior" P and "posterior" Q, taking K₁ = KL(Q‖P) + log(1/δ) gives the optimised value of the bound in Proposition 1. We numerically show in Section 3.2 (first experiment there) that optimising α leads to significantly better results.
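As a sketch of the two-step optimisation in Remark 3 (assuming, per Proposition 1, that the α-dependent part of the bound reads f(α) = K₁/m^α + C²/(2m^{1−α}) with K₁ = KL(Q‖P) + log(1/δ) held fixed, and with illustrative values of m, C, K₁), one can cross-check the closed-form optimiser against a grid search:

```python
import numpy as np

# Two-step optimisation of alpha: with K1 = KL(Q||P) + log(1/delta)
# already fixed, minimise f(alpha) = K1/m^alpha + C^2/(2 m^(1-alpha)).
m, C, K1 = 1000, 1.0, 5.0

def f(alpha):
    return K1 / m**alpha + C**2 / (2 * m**(1 - alpha))

# Closed-form minimiser: f'(alpha) = 0  <=>  m^(2*alpha - 1) = 2*K1/C^2.
alpha_star = 0.5 * (1 + np.log(2 * K1 / C**2) / np.log(m))

# Cross-check against a fine grid over [0, 1].
grid = np.linspace(0.0, 1.0, 100001)
alpha_grid = grid[np.argmin(f(grid))]
```

Here the optimised α (≈ 0.67 for these constants) strictly improves on the default α = 1/2.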
Now the only remaining question is how to optimise the KL divergence. To do so, we may need to fix an "informed" prior to minimise the KL divergence with an interesting posterior. This idea has been studied by [16,17] and, more recently, by Mhammedi et al. [18] and Rivasplata et al. [5], among others. We adapt it to our problem in the simplest way.
We now introduce some additional notation. For a sample s = (z₁, . . . , z_m) and k ∈ {1..m}, we define s_{≤k} := {z₁, . . . , z_k} and s_{>k} := {z_{k+1}, . . . , z_m}. Similarly, for a random sample S, we have the splits S_{≤k} and S_{>k}.

Proposition 2. Let ℓ be HYPE(K) compliant with constant K(h) = C, and let α₁, α₂ ∈ R. Consider any "priors" P₁ ∈ M₁⁺(H) (possibly dependent on S_{>m/2}) and P₂ ∈ M₁⁺(H) (possibly dependent on S_{≤m/2}). Then, for any δ ∈ (0, 1], with probability at least 1 − δ over random size-m samples S, simultaneously for all Q ∈ M₁⁺(H) such that Q ∼ P₁ and Q ∼ P₂, we have:

E_{h∼Q}[R(h)] ≤ (1/2) E_{h∼Q}[R_{S≤m/2}(h) + R_{S>m/2}(h)] + ∑_{i=1,2} [ (KL(Q‖P_i) + log(2/δ))/(2 (m/2)^{α_i}) + C²/(4 (m/2)^{1−α_i}) ].

Proof. Let P₁, P₂, Q be as stated in Proposition 2. Applying Proposition 1 with confidence δ/2 to the half-sample S_{≤m/2}, with prior P₁ (which does not depend on S_{≤m/2}), we obtain, with probability at least 1 − δ/2:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_{S≤m/2}(h)] + (KL(Q‖P₁) + log(2/δ))/(m/2)^{α₁} + C²/(2 (m/2)^{1−α₁}),

and likewise on S_{>m/2} with prior P₂, with probability at least 1 − δ/2:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_{S>m/2}(h)] + (KL(Q‖P₂) + log(2/δ))/(m/2)^{α₂} + C²/(2 (m/2)^{1−α₂}).

Hence, with probability at least 1 − δ, both inequalities hold, and the result follows by adding them and dividing by 2.

Remark 4.
One can notice that the main difference between Proposition 2 and Proposition 1 lies in the classical PAC-Bayesian requirement that priors must not depend on the data. With this last proposition, we implicitly allow P₁ to depend on S_{>m/2} and P₂ on S_{≤m/2}, which can in practice lead to far more accurate priors. We show this numerically in Section 3.2's second experiment. Note that this idea is not new and has been studied, for instance, in [19] for the specific case of SVMs.

Numerical Experiments
Our experimental framework is inspired by the work of [18].

Settings. We generate synthetic data for binary classification, and we use the 0-1 loss. The data space is Z = X × Y with X = R^d and Y = {−1, 1}, and a predictor h_w, parameterised by a weight vector w ∈ R^d, outputs h_w(x) = sign(⟨w, x⟩). For simplicity, we identify h_w with w, and we also identify the space H with the weight space W = R^d. For z = (x, y) ∈ Z and w ∈ W, we define the loss as ℓ(w, z) := 1{sign(⟨w, x⟩) ≠ y}. We want to learn an optimised predictor ŵ(S). To do so, we use regularised logistic regression and compute:

ŵ(S) ∈ argmin_{w∈R^d} (1/m) ∑_{i=1}^m log(1 + exp(−Y_i ⟨w, X_i⟩)) + λ‖w‖²,  (2)

where λ is a fixed regularisation parameter. We also restrict the probability distributions over W = R^d considered for this learning problem: we consider Gaussian distributions N(w, σ²I_d) with centre w ∈ R^d and diagonal covariance σ²I_d ∈ R^{d×d}, σ² > 0.
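A minimal sketch of this setting, with plain gradient descent standing in for the Powell-based optimisation used in the paper, and with illustrative values of m, λ and the learning schedule (all assumptions of this sketch):

```python
import numpy as np

# Synthetic linear classification with the 0-1 loss; the predictor is
# learned by regularised logistic regression (Equation (2)).
rng = np.random.default_rng(2)
d, m, lam = 10, 500, 0.01
w_star = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], dtype=float)  # digits of pi

X = rng.normal(size=(m, d))     # X_i ~ N(0, I_d)
y = np.sign(X @ w_star)         # labels from the ground-truth predictor

def objective_grad(w):
    # Gradient of (1/m) sum log(1 + exp(-y_i <w, x_i>)) + lam * ||w||^2.
    margins = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(margins))  # sigmoid of the negative margin
    return -(X * (y * p)[:, None]).mean(axis=0) + 2 * lam * w

w = np.zeros(d)
for _ in range(2000):
    w -= 0.5 * objective_grad(w)       # plain gradient descent

error_01 = np.mean(np.sign(X @ w) != y)  # empirical 0-1 risk of w-hat(S)
```

Since the data are linearly separable by construction, the learned ŵ(S) attains a near-zero empirical 0-1 risk.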
Parameters. We set δ = 0.05 and λ = 0.01. We approximately solve Equation (2) by using the minimize function of Python's scipy.optimize module, with the Powell method. To approximate Gaussian expectations, we use Monte-Carlo sampling.
Synthetic data. We generate synthetic data for d = 10 according to the following process: for a fixed sample size m, we draw X₁, ..., X_m from the multivariate Gaussian distribution N(0, I_d) and, for each i, we compute the label Y_i of X_i as:

Y_i = sign(⟨w*, X_i⟩),

where w* is the vector formed by the first d digits of the number π.
Normalisation trick. Given the shape of the predictors, we notice that for any w ∈ W \ {0}:

sign(⟨w, x⟩) = sign(⟨w/‖w‖, x⟩).

Thus, the value of the prediction is exclusively determined by the sign of the inner product, which is not influenced by the norm of the vector. Then, for any sample S, we call the normalisation trick the act of considering ŵ(S)/‖ŵ(S)‖ instead of ŵ(S) in our calculations. This process does not deteriorate the quality of the prediction and considerably improves the value of the KL divergence.
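The effect of the normalisation trick on the KL term can be made explicit with the closed-form KL divergence between isotropic Gaussians (a sketch with arbitrary variances; the specific w below is hypothetical):

```python
import numpy as np

# Normalisation trick: predictions sign(<w, x>) are scale-invariant in w,
# while KL(N(w, s2 I) || N(0, s02 I)) grows with ||w||^2, so replacing w
# by w/||w|| shrinks the KL term whenever ||w|| > 1.
d, s2, s02 = 10, 0.5, 0.5

def kl_gaussians(w, s2, s02):
    # Closed form for KL(N(w, s2 I_d) || N(0, s02 I_d)).
    return 0.5 * (d * s2 / s02 + w @ w / s02 - d + d * np.log(s02 / s2))

rng = np.random.default_rng(3)
w = 7.0 * rng.normal(size=d)            # a large-norm predictor
x = rng.normal(size=d)
same_sign = np.sign(w @ x) == np.sign((w / np.linalg.norm(w)) @ x)
kl_raw = kl_gaussians(w, s2, s02)
kl_norm = kl_gaussians(w / np.linalg.norm(w), s2, s02)
```

With equal variances, the KL reduces to ‖w‖²/(2σ₀²), which drops to 1/(2σ₀²) after normalisation.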

First Experiment
Our goal here is to highlight the point discussed in Remark 2, i.e., the influence of the parameter α in Proposition 1. We arbitrarily fix σ₀² = 1/2 and define our naive prior as P₀ = N(0, σ₀² I_d). For a fixed dataset S, we define our posterior as Q(S) := N(ŵ(S), σ² I_d), with σ² ∈ {1/2, . . . , 1/2^J} (for J = log₂(m)) chosen to minimise the bound among these candidates. We computed two curves: first, Proposition 1 with α = 1/2; second, Proposition 1 again with α equal to the value proposed in Lemma 1. Notice that to compute this last bound, we first optimised our choice of posterior with α = 1/2 and then optimised α, to be consistent with Lemma 1: indeed, we proved this lemma assuming that the KL divergence was already fixed, hence our optimisation process is in two steps. Note that we chose to apply the normalisation trick here; we then obtained the left curve of Figure 1.
Discussion. From this curve, we make several remarks. First, in this specific case, our theorem provides a tight result in practice (with an error rate smaller than 10% for the bound with optimised α). Second, we can now confirm that choosing an optimised α leads to a tighter bound. In further studies, it will be relevant to adjust α with regard to the different terms of our bound, instead of seeking an identical convergence rate for all terms.

Second Experiment
We now study Proposition 2, to see whether an informed prior effectively provides a tighter bound than a naive one. We use the notation introduced in Proposition 2. For a dataset S, we define w₁(S) := ŵ(S_{>m/2}) as the vector resulting from the optimisation of Equation (2) on S_{>m/2}. Similarly, we define w₂(S) := ŵ(S_{≤m/2}). We arbitrarily fix σ₀² = 1/2 and define our informed priors as P₁ = N(w₁(S), σ₀² I_d) and P₂ = N(w₂(S), σ₀² I_d). Finally, we define our posterior as Q(S) := N(ŵ(S), σ² I_d), with σ² ∈ {1/2, ..., 1/2^J} (for J = log₂(m)) optimising the bound among the same candidates as in the first experiment. We computed two curves: first, Proposition 1 with α optimised according to Lemma 1; second, Proposition 2 with α₁, α₂ optimised as well, and informed priors as defined above. We chose not to apply the normalisation trick here; we then obtained the right curve of Figure 1.
Discussion. It is clear that, within this framework, an informed prior is a powerful tool to enhance the quality of our bound. Notice that we voluntarily chose not to apply the normalisation trick here. The reason is that this trick appears to be too powerful in practice, and applying it leads to counterproductive results; namely, the bound without informed prior would be tighter than the one with informed prior. Furthermore, this trick is linked to the specific structure of our problem and is not valid for an arbitrary classification problem. Thus, the idea of providing informed priors remains an interesting tool for most cases.

PAC Bayesian Bounds with Smoothed Estimator
We now move on to controlling the right-hand side of Theorem 3 when K is not constant. A first step is to consider a transformed estimate of the risk, inspired by the truncated estimator from [20], also used in [21] and, more recently, in [13]. The following is inspired by the results of [13], which we summarise in Section 6.
The idea is to modify the estimator R_S(h), for any h, by introducing a threshold t and a function ψ which attenuates the influence of the empirical losses (ℓ(h, Z_i))_{i=1..m} that exceed t.

Definition 2. (ψ-risks).
For every t > 0, ψ : R⁺ → R⁺ and h ∈ H, we define the empirical ψ-risk R_{S,ψ,t} and the theoretical ψ-risk R_{ψ,t} as follows:

R_{S,ψ,t}(h) := (1/m) ∑_{i=1}^m t ψ(ℓ(h, Z_i)/t) and R_{ψ,t}(h) := E_{Z∼µ}[t ψ(ℓ(h, Z)/t)].

We now focus on what we call softening functions, i.e., functions that temper high values of the loss function ℓ.

Definition 3. (Softening function).
We say that ψ : R⁺ → R⁺ is a softening function if: (i) ψ is non-decreasing; (ii) ψ(x) = x for all x ∈ [0, 1]; (iii) ψ(x) ≤ x for all x > 1. We let F denote the set of all softening functions.
Using ψ ∈ F, for a fixed threshold t > 0, the softened loss (h, z) → t ψ(ℓ(h, z)/t) satisfies, for any h ∈ H and z ∈ Z:

t ψ(ℓ(h, z)/t) ≤ t ψ(K(h)/t),

because ψ is non-decreasing. In this way, the exponential moment in Theorem 3 can be far more controllable. The trade-off lies in the fact that softening ℓ (instead of taking ℓ directly) deteriorates our ability to distinguish between two bad predictions when both of them exceed t. For instance, take ψ ∈ F such that ψ = 1 on [1; +∞) and t > 0: if ψ(ℓ(h, z)/t) = 1 for a certain pair (h, z), then we cannot tell how far ℓ(h, z) is from t; we can only affirm that ℓ(h, z) ≥ t. We now move on to the following lemma, which controls the shortfall between E_{h∼Q}[R(h)] and E_{h∼Q}[R_{ψ,t}(h)] for all Q ∈ M₁⁺(H), for a given ψ and t > 0. To do so, we assume that K admits a finite moment under any posterior distribution:

for all Q ∈ M₁⁺(H), E_{h∼Q}[K(h)] < +∞.  (3)

For instance, in the case of H identified with a weight space W = R^N, if K is polynomial in ‖w‖ (where ‖.‖ denotes the Euclidean norm), then this assumption holds if we consider Gaussian priors and posteriors.

Lemma 2. Assume that Equation (3) holds, and let ψ ∈ F, Q ∈ M₁⁺(H), t > 0. We have:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_{ψ,t}(h)] + E_{h∼Q}[K(h) 1{K(h) ≥ t}].

Proof. Let ψ ∈ F, Q ∈ M₁⁺(H), t > 0. For h ∈ H, consider R(h) − R_{ψ,t}(h) = E_Z[ℓ(h, Z) − t ψ(ℓ(h, Z)/t)]. If K(h) < t, then ℓ(h, Z)/t < 1 almost surely, ψ acts as the identity, and this difference vanishes; otherwise, since the softened loss is non-negative, the difference is at most E_Z[ℓ(h, Z)] ≤ K(h). Hence, by crudely bounding the probability of the event {K(h) ≥ t} by 1, we get:

R(h) − R_{ψ,t}(h) ≤ K(h) 1{K(h) ≥ t}.

Hence the result, by integrating over H with respect to Q.
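The softening function ψ(x) = x·1{x ≤ 1} + 1{x > 1} discussed above is simply a clipping; a short sketch (with an arbitrary threshold t = 5 and hypothetical loss values) illustrates Definitions 2 and 3:

```python
import numpy as np

# The softening function psi(x) = x * 1{x <= 1} + 1{x > 1}, i.e. min(x, 1),
# and the softened loss t * psi(ell / t), which caps losses at threshold t.
def psi(x):
    return np.minimum(x, 1.0)

t = 5.0
losses = np.array([0.3, 1.0, 4.9, 5.0, 17.2, 100.0])
softened = t * psi(losses / t)

# psi is non-decreasing, equals the identity on [0, 1], and is below the
# identity on (1, +inf); hence t * psi(ell/t) = ell when ell <= t, and
# t * psi(ell/t) = t <= ell otherwise.
```

Losses below t are untouched, while losses above t are collapsed to t, which is exactly the loss of resolution between "bad" predictions described above.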
Finally, we present the following theorem, which provides a PAC-Bayesian inequality bounding the theoretical risk by the empirical ψ-risk for ψ ∈ F.

Theorem 4. Let ℓ be HYPE(K) compliant, and assume K satisfies Equation (3). Then, for any P ∈ M₁⁺(H) with no data dependency, for any α ∈ R, any ψ ∈ F and any δ ∈ (0, 1], with probability at least 1 − δ over size-m random samples S, simultaneously for all Q such that Q ∼ P, we have:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_{S,ψ,t}(h)] + (KL(Q‖P) + log(1/δ))/m^α + (1/m^α) log E_{h∼P}[e^{t²ψ(K(h)/t)²/(2 m^{1−2α})}] + E_{h∼Q}[K(h) 1{K(h) ≥ t}].

Proof. Let ψ ∈ F. We define the ψ-loss ℓ₂(h, z) := t ψ(ℓ(h, z)/t). Since ψ is non-decreasing, we have for all (h, z) ∈ H × Z:

ℓ₂(h, z) ≤ t ψ(K(h)/t),

so that ℓ₂ is HYPE(t ψ(K/t)) compliant. Thus, we apply Theorem 3 to the learning problem defined with ℓ₂: for any α and δ ∈ (0, 1], with probability at least 1 − δ over size-m random samples S, simultaneously for all Q such that Q ∼ P, we have:

E_{h∼Q}[R_{ψ,t}(h)] ≤ E_{h∼Q}[R_{S,ψ,t}(h)] + (KL(Q‖P) + log(1/δ))/m^α + (1/m^α) log E_{h∼P}[e^{t²ψ(K(h)/t)²/(2 m^{1−2α})}].

We then add E_{h∼Q}[K(h) 1{K(h) ≥ t}] to both sides of the latter inequality and apply Lemma 2.

Remark 6.
Notice that the function ψ : x → x 1{x ≤ 1} + 1{x > 1} satisfies ψ ≤ 1, so that t ψ(K(h)/t) ≤ t for every h. Hence, for any given prior P, we have E_{h∼P}[exp(t²ψ(K(h)/t)²/(2 m^{1−2α}))] ≤ exp(t²/(2 m^{1−2α})) < +∞, regardless of K.

Theoretical Result
We now focus on the celebrated linear regression problem, and see how our theory translates to that particular learning problem. We assume that the data form a size-m random sample S = (Z_i)_{i=1..m}, where the Z_i = (X_i, Y_i) are drawn i.i.d. from the distribution µ over R^N × R. Our goal here is to find the most accurate predictor h_w (with w ∈ R^N) with respect to the loss function ℓ(h_w, z) = |⟨w, x⟩ − y|, where z = (x, y). We make the following mild assumption: there exist B, C ∈ R⁺\{0} such that, for all z = (x, y) drawn under µ, ‖x‖ ≤ B and |y| ≤ C, where ‖.‖ is the norm associated with the classical inner product of R^N. Under this assumption, we note that for all z = (x, y) drawn according to µ:

ℓ(h_w, z) = |⟨w, x⟩ − y| ≤ ‖w‖ ‖x‖ + |y| ≤ B‖w‖ + C.

Thus, we define K(h_w) = B‖w‖ + C for w ∈ R^N. If we first restrict ourselves to the framework of Section 2, we want to use Theorem 3, and in doing so our goal is to bound:

ξ := E_{h∼P}[e^{K(h)²/(2 m^{1−2α})}].

The shape of K invites us to consider a Gaussian prior. Indeed, we notice that if P = N(0, σ²I_N) with 0 < σ² < m^{1−2α}/B², then ξ < +∞. Notice that we cannot take just any Gaussian prior; however, with a small α, the condition 0 < σ² < m^{1−2α}/B² may become quite loose. Thus, we have the following:

Theorem 5. Let α ∈ R and N ≥ 6. Assume that the loss ℓ is HYPE(K) compliant with K(h) = B‖h‖ + C, where B > 0 and C ≥ 0. For a prior distribution, consider any Gaussian P = N(0, σ²I_N) with σ² = t m^{1−2α}/B², 0 < t < 1. Then, for any δ ∈ (0, 1], with probability at least 1 − δ over size-m random samples S, simultaneously for all Q ∈ M₁⁺(H) such that P ∼ Q, we have:

The proof is deferred to Section 7.2. To compare our result with those found in the literature, we can fix α = 1/2. Doing so, we lose the dependency on m in the choice of the prior variance (which now only depends on B), but we recover the classic decreasing factor 1/√m.
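The HYPE condition for this regression loss can be checked mechanically; the sketch below (with arbitrary B, C, N and random draws respecting ‖x‖ ≤ B and |y| ≤ C) verifies the triangle-inequality argument above:

```python
import numpy as np

# Checking the HYPE condition for absolute-loss linear regression:
# if ||x|| <= B and |y| <= C, the triangle inequality gives
# |<w, x> - y| <= ||w|| * ||x|| + |y| <= B * ||w|| + C =: K(h_w).
rng = np.random.default_rng(4)
N, B, C = 8, 3.0, 2.0

def K(w):
    return B * np.linalg.norm(w) + C

violations = 0
for _ in range(5000):
    w = rng.normal(size=N)
    x = rng.normal(size=N)
    x *= B * rng.uniform() / np.linalg.norm(x)   # enforce ||x|| <= B
    y = rng.uniform(-C, C)
    if abs(w @ x - y) > K(w) + 1e-9:
        violations += 1
```

This is exactly the "mere use of the triangle inequality" announced in the introduction: no property of µ beyond the bounded support is needed.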

Remark 7.
Notice that so far we did not use Section 4, even though we could (since K is polynomial in ‖w‖ and we consider Gaussian priors and posteriors, Equation (3) is satisfied). Doing so, we obtained a bound which appears to depend linearly on the dimension N. In practice, N may be too large, and in this case introducing an adapted softening function ψ (one can think, for instance, of ψ(x) = x1{x ≤ 1} + 1{x > 1}) is a powerful tool to attenuate the weight of the exponential moment. This also extends the class of authorised Gaussian priors, by avoiding the need to stick with a variance σ² = t m^{1−2α}/B², 0 < t < 1.

Setting
In this section, we apply Theorem 5 to a concrete linear regression problem. The situation is as follows: we want to approximate the function f(x) = ⟨w*, x⟩, where w* ∈ R^d. We assume that W = [−c, c]^d, so that w* lies in a hypercube centred at 0 of half-side c > 0, i.e., the set {(w_i)_{i=1,...,d} | ∀i, |w_i| ≤ c}; hence ‖w*‖ ≤ c√d. Furthermore, we assume that input data are drawn inside a hypercube of half-side e > 0, i.e., X = [−e, e]^d, so that for any data point x, ‖x‖ ≤ e√d.
For any data point x ∈ R^d, we define y = f(x). As before, we identify the hypothesis set H with the weight space W = R^d. As described in Section 5.1, we set ℓ(h_w, (x, y)) = |⟨w, x⟩ − y|. We then remark that for any (w, x, y):

|⟨w, x⟩ − y| ≤ ‖w‖ ‖x‖ + |y| ≤ e√d ‖w‖ + cde.

Then we can define B = e√d and C = cde to apply Theorem 5. We restrict (as before) the class of distributions over W to the d-dimensional Gaussians {N(w, σ²I_d) | w ∈ R^d, σ² > 0}, which is the set of candidate distributions for this learning problem. Recall that, in practice, given a fixed α ∈ R, we are only allowed to consider priors whose variance satisfies σ² < m^{1−2α}/B². We want to learn an optimised predictor (posterior) given a random dataset S = ((X_i, Y_i))_{i=1,...,m}. To do so, we consider synthetic data.

Synthetic Data
We draw w* from a Gaussian distribution (with mean 0 and standard deviation 5) truncated to the hypercube centred at 0 of half-side c > 0. We generate synthetic data according to the following process: for a fixed sample size m, we draw X₁, . . . , X_m from a Gaussian distribution (with mean 0 and standard deviation 5) truncated to the hypercube centred at 0 of half-side e > 0.
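A sketch of this data-generating process, using coordinatewise rejection sampling for the truncated Gaussian (the sample sizes below are illustrative):

```python
import numpy as np

# Draws from N(0, 5^2 I_d) truncated to the hypercube [-half_side, half_side]^d.
# Coordinates are independent, so rejected coordinates can be resampled
# individually rather than resampling whole vectors.
rng = np.random.default_rng(5)
c, d, m = 10.0, 10, 200

def truncated_gaussian(shape, half_side, scale=5.0):
    out = rng.normal(scale=scale, size=shape)
    mask = np.abs(out) > half_side
    while mask.any():
        out[mask] = rng.normal(scale=scale, size=int(mask.sum()))
        mask = np.abs(out) > half_side
    return out

w_star = truncated_gaussian((d,), c)   # ground-truth weights, half-side c
X = truncated_gaussian((m, d), c)      # inputs, here with e = c = 10
```

With standard deviation 5 and half-side 10, each coordinate is rejected with probability P(|Z| > 2) ≈ 4.6%, so the loop terminates quickly.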

Experiment
First, we fix c = e = 10. Our goal here is to obtain a generalisation bound for our problem. For a fixed α ∈ R, we arbitrarily fix t₀ = 1/2 and σ₀² = t₀ m^{1−2α}/B², and we define our naive prior as P₀ = N(0, σ₀² I_d). For a given dataset S, we define our posterior as Q(S) := N(ŵ(S), σ² I_d), with σ² ∈ {σ₀²/2, ..., σ₀²/2^J} (J = log₂(m)) chosen to minimise the bound among these candidates. Note that all the previously defined parameters depend on α, which is why we choose α ∈ {i/step | 0 ≤ i ≤ step} for a fixed integer step (in practice, step = 8 or 16), and we take the value of α minimising the bound among these candidates as well. Figure 2 contains two plots, one with d = 10, the other with d = 50. Each plot shows the right-hand side of Theorem 5 with an optimised α for each sample size.

Discussion
To the best of our knowledge, this is the first attempt to numerically compute PAC-Bayes bounds for unbounded problems, so there are no other results to compare with. We stress, however, that obtaining numerical values for the bound without assuming a bounded loss is a significant first step. Furthermore, we consider a rather hard problem: f is not linear, so we cannot rely on a linear approximation fitting the data perfectly, and the larger the dimension, the larger the error, as illustrated by Figure 2. Thus, for any posterior Q, the quantity E_{h∼Q}[R(h)] is potentially large in practice, and our bound might not be tight. Finally, notice that optimising α (instead of taking α = 1/2 to recover a classic convergence rate) leads to a significantly better bound; a numerical example of this assertion is presented in Section 3.2. We aim to conduct further studies treating the convergence rate as a hyperparameter to optimise, rather than selecting the same rate for all terms in the bound.

Germain et al., 2016

In Germain et al. [11] (Section 4), a PAC-Bayesian bound is provided for all sub-gamma losses with variance factor t² and scale parameter c > 0, under a data distribution µ and a prior P, i.e., losses such that for every λ ∈ (0, 1/c) the following is satisfied:

log E_{h∼P} E_{z∼µ}[e^{λ(R(h) − ℓ(h,z))}] ≤ (λ² t²)/(2(1 − cλ)).

Note that a sub-gamma loss (with regard to µ and P) is potentially unbounded. Germain et al. then propose the following PAC-Bayesian bound:

Theorem 6. (Ref. [11].) If the loss ℓ is sub-gamma with variance factor t² and scale parameter c, under the data distribution µ and a fixed prior P ∈ M₁⁺(H), then for any δ ∈ (0, 1], with probability 1 − δ over size-m random samples, simultaneously for all Q ≪ P we have:

E_{h∼Q}[R(h)] ≤ E_{h∼Q}[R_S(h)] + (KL(Q‖P) + log(1/δ))/m + t²/(2(1 − c)).

Theorem 6 will be quoted several times in this paper, given that it is a concrete PAC-Bayesian bound designed to overcome the constraint of a bounded loss; it is also one of the only ones found in the literature.
Can we apply this theorem to the bounded case? The answer is yes. We remark that, thanks to Hoeffding's lemma, if ℓ is bounded by C > 0, then for any h ∈ H and any λ > 0:

E_{z∼µ}[e^{λ(R(h) − ℓ(h,z))}] ≤ e^{λ²C²/2}.

Therefore, for any prior P, we have:

log E_{h∼P} E_{z∼µ}[e^{λ(R(h) − ℓ(h,z))}] ≤ λ²C²/2.

Thus, ℓ is sub-gamma with variance factor C² and scale parameter 0, and Theorem 6 can be applied with t² = C², c = 0.
Comparison with Proposition 1. We remark that, by taking K = C and α = 1 in Proposition 1, we recover Theorem 6 with t² = C² and c = 0. However, our approach allows us to say more: if we can obtain a more precise K such that K(h) ≤ C for all h ∈ H, with K non-constant, then Theorem 3 ensures that:

log E_{h∼P}[e^{K(h)²/(2 m^{1−2α})}] ≤ C²/(2 m^{1−2α}),

with a strict inequality whenever K < C on a set of positive P-measure. Thus, precise information on the behaviour of the loss function ℓ with regard to the predictor h allows a tighter control of the exponential moment, and hence a tighter bound.
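This comparison can be illustrated numerically; the sketch below assumes the forms of the terms discussed above (Proposition 1's C²/(2m^{1−α}) term and Theorem 6's t²/(2(1 − c)) term) and a hypothetical non-constant K, uniform on [0.2, 0.8] under P:

```python
import numpy as np

# With K(h) = C constant and alpha = 1, Proposition 1's extra term
# C^2 / (2 m^(1-alpha)) = C^2 / 2 matches Theorem 6's t^2 / (2 (1 - c))
# with t^2 = C^2 and c = 0; a non-constant K(h) <= C can only tighten
# the exponential-moment term of Theorem 3.
rng = np.random.default_rng(6)
m, C, alpha = 100, 1.0, 1.0

term_prop1 = C**2 / (2 * m**(1 - alpha))       # = C^2 / 2
term_thm6 = C**2 / (2 * (1 - 0.0))             # t^2 = C^2, c = 0

# Hypothetical non-constant K, uniform on [0.2, 0.8] under the prior P.
K = rng.uniform(0.2, 0.8, size=200000)
term_hype = np.log(np.mean(np.exp(K**2 / (2 * m**(1 - 2 * alpha))))) / m**alpha
```

The HYPE-based term lands strictly below C²/2 here, reflecting the tighter control of the exponential moment.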

Remark 8.
We can see that Theorem 6 does not control the factor C²/2. However, Ref. [11] remarked on this apparent weakness and partially corrected it in [11] (Section 4, Equations (13) and (14)): they proposed to balance the influence of m between the different terms of the PAC-Bayes bound by giving all terms the same convergence rate of 1/√m. We can then see Proposition 1 as a proper generalisation of Germain et al. [11] (Section 4, Equations (13) and (14)), since our bound properly exhibits the influence of the parameter α. Thus, we understand (and Lemma 1 proves it) that the choice of α deserves a study in itself, now that it is a parameter of our optimisation problem. This fact was already highlighted in Alquier et al. [10] (Theorem 4.1) (where λ := m^α).

Holland, 2019
In [13], Holland proposed a PAC-Bayesian inequality for unbounded losses. For that, he introduced a function ψ verifying a few specific conditions, different from those used in Section 4 to define our set of softening functions. Indeed, he considered a function ψ such that:
• ψ is bounded,
• ψ is non-decreasing,
• there exists b > 0 such that, for all u ∈ R, ψ(u) is sandwiched between −log(1 − u + u²/b) and log(1 + u + u²/b).  (4)

We remark that, as Holland did, we assumed our softening functions to be non-decreasing. We chose softening functions to be equal to the identity (x → x) on [0, 1], which is quite restrictive; however, we only impose softening functions to be below the identity on [1, +∞), whereas Holland assumed ψ to be bounded and to satisfy Equation (4). A concrete example of such a function ψ is the piecewise polynomial function of Catoni and Giulini [21]. As in Section 4, we consider the ψ-empirical risk R_{S,ψ,t} for any t > 0. Holland proved his theorem under three additional technical assumptions.

Theorem 7. (Ref. [13].) Let P be a prior distribution on the model H, and let the three assumptions mentioned above hold. Setting t² = mM₂/(2 log(δ⁻¹)), then for any δ ∈ (0, 1], with probability at least 1 − δ over the random draw of the size-m sample S, it holds simultaneously for all Q that the theoretical risk is bounded by the empirical ψ-risk plus explicit remainder terms; we refer to [13] for their precise form.

Proof of Theorem 1

By Markov's inequality applied to the non-negative random variable E_{h∼P}[e^{F(R_S(h), R(h))}], whose expectation over S equals χ, we have with probability at least 1 − δ over samples S:

E_{h∼P}[e^{F(R_S(h), R(h))}] ≤ χ/δ.

Applying the log function on each side of this inequality gives, with probability at least 1 − δ over samples S:

log E_{h∼P}[e^{F(R_S(h), R(h))}] ≤ log(χ/δ).

We now rename A := log E_{h∼P}[e^{F(R_S(h), R(h))}].
Furthermore, denoting by dQ/dP the Radon-Nikodym derivative of Q with respect to P when Q ≪ P, we have, for all Q such that Q ∼ P:

A = log E_{h∼Q}[(dQ/dP)⁻¹ e^{F(R_S(h), R(h))}],

and by concavity of log and Jensen's inequality,

A ≥ E_{h∼Q}[log((dQ/dP)⁻¹ e^{F(R_S(h), R(h))})] = −KL(Q‖P) + E_{h∼Q}[F(R_S(h), R(h))],

while by convexity of F and Jensen's inequality,

E_{h∼Q}[F(R_S(h), R(h))] ≥ F(E_{h∼Q}[R_S(h)], E_{h∼Q}[R(h)]).

Hence, for Q such that Q ∼ P,

F(E_{h∼Q}[R_S(h)], E_{h∼Q}[R(h)]) ≤ KL(Q‖P) + A.

So with probability 1 − δ, for Q such that Q ∼ P,

F(E_{h∼Q}[R_S(h)], E_{h∼Q}[R(h)]) ≤ KL(Q‖P) + log(χ/δ).

This completes the proof of Theorem 1.

Proof of Theorem 5
We first provide a technical result. Recall that:

ξ := E_{h∼P}[e^{K(h)²/(2 m^{1−2α})}].

Proposition 3. Let α ∈ R. Suppose the loss ℓ is HYPE(K) compliant with K(h) = B‖h‖ + C, where B > 0 and C ≥ 0. Then, for any Gaussian prior P = N(0, σ²I_N) with σ² = t m^{1−2α}/B², 0 < t < 1, and N ≥ 6, we have:

Proof. We recall that σ² = t m^{1−2α}/B². Writing out the expectation and K(h) explicitly, we obtain:

We use the spherical coordinates in N-dimensional Euclidean space given in [22], where in particular r = ‖h‖, and the Jacobian of the change of variables φ is given by:

Let us also note that, as given in Blumenson [22] (page 66), the surface of the sphere of radius 1 in N-dimensional space is:

where Γ is the Gamma function, defined as:

Then, if we set:

we obtain by a change of variable:

We fix a random variable X such that:

We then have, for any positive integer k, if k is even:

and if k is odd:

So we have:

As noted in [23], we have for any k:

So finally:

Proof (of Lemma 3). As noted in the introduction of Srinivasan and Zvengrowski [24], Gauss [25] (page 147) proved that Γ is monotonically increasing on the interval [x₀, +∞), where x₀ ∈ [1.46, 1.47]. So, for N − 1 ≥ k ≥ 2, Γ((k+1)/2) ≤ Γ(N/2). And because Γ(1/2) = √π and Γ(1) = 1, we have:

Because N ≥ 6 and Γ is monotonically increasing on [3; +∞), we have Γ(N/2) ≥ Γ(3) ≥ √π. Hence the result.
Using Lemma 3 allows us to write: We recall that σ² = t m^{1−2α}/B² and f(t) = (1 − t)/t. Then we can write: We now conclude with the final bound on ξ: This completes the proof of Proposition 3.
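The Gamma-function facts invoked in this proof (monotonicity beyond x₀ ≈ 1.46, Γ(1/2) = √π, Γ(1) = 1, Γ(3) = 2 ≥ √π) can be verified numerically:

```python
import math

# Numerical check of the Gamma-function facts used in the proof of Lemma 3.
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12  # Gamma(1/2) = sqrt(pi)
assert abs(math.gamma(1.0) - 1.0) < 1e-12                 # Gamma(1) = 1
assert abs(math.gamma(3.0) - 2.0) < 1e-12                 # Gamma(3) = 2
assert math.gamma(3.0) >= math.sqrt(math.pi)              # 2 >= sqrt(pi)

# Monotonicity on a grid of [1.47, 50] (the minimum of Gamma is at ~1.4616).
xs = [1.47 + 0.01 * i for i in range(4854)]
vals = [math.gamma(x) for x in xs]
increasing = all(a < b for a, b in zip(vals, vals[1:]))
```

This confirms, in particular, that Γ(N/2) ≥ Γ(3) ≥ √π for every N ≥ 6.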
Proof of Theorem 5. We combine Theorem 3 with Proposition 3, and we upper-bound N − 1 by N.