Article

Sharp Guarantees and Optimal Performance for Inference in Binary and Gaussian-Mixture Models †

by Hossein Taheri *, Ramtin Pedarsani * and Christos Thrampoulidis *
Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA
* Authors to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.
Entropy 2021, 23(2), 178; https://doi.org/10.3390/e23020178
Submission received: 10 December 2020 / Revised: 25 January 2021 / Accepted: 26 January 2021 / Published: 30 January 2021

Abstract:
We study convex empirical risk minimization for high-dimensional inference in binary linear classification under both discriminative binary linear models and generative Gaussian-mixture models. Our first result sharply predicts the statistical performance of such estimators in the proportional asymptotic regime under isotropic Gaussian features. Importantly, the predictions hold for a wide class of convex loss functions, which we exploit to prove bounds on the best achievable performance. Notably, we show that the proposed bounds are tight for popular binary models (such as signed and logistic) and for the Gaussian-mixture model by constructing appropriate loss functions that achieve them. Our numerical simulations suggest that the theory is accurate even for relatively small problem dimensions and that it enjoys a certain universality property.

1. Introduction

1.1. Motivation

Classical estimation theory studies problems in which the number of unknown parameters n is small compared to the number of observations m. In contrast, modern inference problems are typically high-dimensional, that is, n can be of the same order as m. Examples are abundant in a wide range of signal processing and machine learning applications such as medical imaging, wireless communications, recommendation systems, etc. Classical tools and theories are not applicable to these modern inference problems [1]. As such, over the last two decades or so, the study of high-dimensional estimation problems has received significant attention.
Perhaps the most well-studied setting is that of noisy linear observations (namely, linear regression). The literature on the topic is vast with remarkable contributions from the statistics, signal processing and machine learning communities. Several recent works focus on the proportional/linear asymptotic regime and derive sharp results on the inference performance of appropriate convex optimization methods (e.g., [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]). These works show that sharp results, albeit challenging to obtain, are advantageous over loose order-wise bounds. Not only do they allow for accurate comparisons between different choices of the optimization parameters, but they also form the basis for establishing optimal such choices as well as fundamental performance limitations (e.g., [12,14,15,16,24,25,26]).
This paper takes this recent line of work a step further by demonstrating that results of this nature can be achieved for binary observation models. While we depart from the previously studied linear regression model, we remain faithful to the requirement and promise of sharp results. Binary models arise in a wide range of signal-processing (e.g., highly quantized measurements) and machine learning (e.g., binary classification) problems. We derive sharp asymptotics for a rich class of convex optimization estimators, which include least-squares, logistic regression and the hinge loss as special cases. Perhaps more interestingly, we use these results to derive fundamental performance limitations and design optimal loss functions that provably outperform existing choices. Our results hold both for discriminative and generative data models.
In Section 1.2, we formally introduce the problem setup. The paper’s main contributions and organization are presented in Section 1.4. A detailed discussion of prior art follows in Section 1.5.
Notation 1.
The symbols $\mathbb{P}(\cdot)$, $\mathbb{E}[\cdot]$ and $\mathrm{Var}[\cdot]$ denote probability, expectation and variance, respectively. We use boldface notation for vectors. $\|\mathbf{v}\|_2$ denotes the Euclidean norm of a vector $\mathbf{v}$. We write $i\in[m]$ for $i=1,2,\ldots,m$. When writing $x^* = \arg\min_x f(x)$, we let the operator $\arg\min$ return any one of the possible minimizers of $f$. For all $x\in\mathbb{R}$, $\Phi(x)$ is the cumulative distribution function of the standard normal and the Gaussian Q-function at $x$ is defined as $Q(x) = 1-\Phi(x)$.

1.2. Data Models

Consider $m$ data pairs $\{(y_i,\mathbf{a}_i)\}_{i=1}^m$ generated i.i.d. from one of the following two models, such that $y_i\in\{-1,+1\}$ and $\mathbf{a}_i\in\mathbb{R}^n$ for all $i\in[m]$.
Binary models with Gaussian features: Here, the feature/measurement vectors $\mathbf{a}_i$, $i\in[m]$, have i.i.d. Gaussian entries, i.e., $\mathbf{a}_i\sim\mathcal{N}(\mathbf{0},\mathbf{I}_n)$. Given the feature vector $\mathbf{a}_i$, the corresponding label takes the form
$y_i = f(\mathbf{a}_i^T\mathbf{x}_0), \quad i\in[m],$
for some unknown true signal $\mathbf{x}_0\in\mathbb{R}^n$ and a label/link function $f:\mathbb{R}\to\{-1,+1\}$, a (possibly random) binary function. Some popular examples for the label function $f$ include the following:
  • (Noisy) Signed: $y_i = \mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0)$ w.p. $1-\varepsilon$ and $y_i = -\mathrm{sign}(\mathbf{a}_i^T\mathbf{x}_0)$ w.p. $\varepsilon$, where $\varepsilon\in[0,1/2]$.
  • Logistic: $y_i = +1$ w.p. $\frac{1}{1+\exp(-\mathbf{a}_i^T\mathbf{x}_0)}$ and $y_i = -1$ w.p. $1-\frac{1}{1+\exp(-\mathbf{a}_i^T\mathbf{x}_0)}$.
  • Probit: $y_i = +1$ w.p. $\Phi(\mathbf{a}_i^T\mathbf{x}_0)$ and $y_i = -1$ w.p. $1-\Phi(\mathbf{a}_i^T\mathbf{x}_0)$.
We remark that when the signal strength $\|\mathbf{x}_0\|_2\to+\infty$, the logistic and Probit label functions approach the signed model (i.e., the noisy-signed function with $\varepsilon=0$).
Throughout, we assume that $\|\mathbf{x}_0\|_2=1$. This assumption is without loss of generality since the norm of $\mathbf{x}_0$ can always be absorbed in the link function. Indeed, letting $\|\mathbf{x}_0\|_2=r$, we can always write the measurements as $f(\mathbf{a}^T\mathbf{x}_0) = \tilde f(\mathbf{a}^T\tilde{\mathbf{x}}_0)$, where $\tilde{\mathbf{x}}_0 = \mathbf{x}_0/r$ (hence, $\|\tilde{\mathbf{x}}_0\|_2=1$) and $\tilde f(t) = f(rt)$. We make no further assumptions on the distribution of the true vector $\mathbf{x}_0$.
Gaussian-mixture model: In Section 5, we also study the following generative Gaussian-mixture model (GMM):
$y_i = +1$ w.p. $\pi$ and $y_i = -1$ w.p. $1-\pi$; $\quad \mathbf{a}_i\,|\,y_i \sim \mathcal{N}(y_i\mathbf{x}_0,\mathbf{I}_n), \quad i\in[m].$
Above, $\pi\in[0,1]$ is the prior of class $+1$ and $\mathbf{x}_0\in\mathbb{R}^n$ is the true signal, which here represents the mean of the features.
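To make the two data models concrete, the following minimal Python sketch samples from them; the function names and the use of NumPy/SciPy are our own choices and not part of the paper, and the label functions implemented are exactly the examples listed above.

```python
# Minimal sampling sketch for the two data models (illustrative; not the paper's code).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def binary_data(m, n, x0, link="logistic", eps=0.0):
    """Binary model with Gaussian features: y_i = f(a_i^T x0), a_i ~ N(0, I_n)."""
    A = rng.standard_normal((m, n))
    s = A @ x0
    if link == "signed":        # noisy-signed label with flip probability eps
        y = np.sign(s) * np.where(rng.random(m) < eps, -1.0, 1.0)
    elif link == "logistic":
        y = np.where(rng.random(m) < 1.0 / (1.0 + np.exp(-s)), 1.0, -1.0)
    elif link == "probit":
        y = np.where(rng.random(m) < norm.cdf(s), 1.0, -1.0)
    return A, y

def gmm_data(m, n, x0, pi=0.5):
    """Gaussian-mixture model: a_i | y_i ~ N(y_i x0, I_n), P(y_i = +1) = pi."""
    y = np.where(rng.random(m) < pi, 1.0, -1.0)
    A = y[:, None] * x0[None, :] + rng.standard_normal((m, n))
    return A, y
```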

1.3. Empirical Risk Minimization

We study the performance of empirical-risk minimization (ERM) estimators $\hat{\mathbf{x}}$ of $\mathbf{x}_0$ that solve the following optimization problem for some convex loss function $\ell:\mathbb{R}\to\mathbb{R}$
$\hat{\mathbf{x}} := \arg\min_{\mathbf{x}} \frac{1}{m}\sum_{i=1}^m \ell(y_i\mathbf{a}_i^T\mathbf{x}).$
Loss function. Different choices for $\ell$ lead to popular specific estimators including the following:
  • Least Squares (LS): $\ell(t) = (t-1)^2$,
  • Least-Absolute Deviations (LAD): $\ell(t) = |t-1|$,
  • Logistic Loss: $\ell(t) = \log(1+\exp(-t))$,
  • Exponential Loss: $\ell(t) = \exp(-t)$,
  • Hinge Loss: $\ell(t) = \max\{1-t,0\}$.
Performance Measure. We measure performance of the estimator $\hat{\mathbf{x}}$ by the value of its correlation to $\mathbf{x}_0$, i.e.,
$\mathrm{corr}(\hat{\mathbf{x}};\mathbf{x}_0) := \frac{\langle\hat{\mathbf{x}},\mathbf{x}_0\rangle}{\|\hat{\mathbf{x}}\|_2\|\mathbf{x}_0\|_2} \in [-1,1].$
Obviously, we seek estimates that maximize correlation. While correlation is the measure of primary interest, our results extend rather naturally to other prediction metrics, such as the classification error given by (e.g., see [27] (Section D.2)),
$\mathcal{E} := \mathbb{E}_{\mathbf{a},y}\big[\mathbb{1}\{y \neq \mathrm{sign}(\langle\hat{\mathbf{x}},\mathbf{a}\rangle)\}\big].$
The expectation in (5) is with respect to a test sample $(\mathbf{a},y)$ drawn from the same distribution as the training set.
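For illustration, here is a hedged sketch of how (3) and the two performance measures can be evaluated numerically; the use of SciPy's L-BFGS-B solver (the paper's own experiments use gradient descent) and the helper names are our own assumptions.

```python
# Sketch: solve the ERM problem (3) for a smooth convex loss and evaluate performance.
import numpy as np
from scipy.optimize import minimize

def erm(A, y, loss):
    """x_hat = argmin_x (1/m) sum_i loss(y_i a_i^T x)."""
    obj = lambda x: np.mean(loss(y * (A @ x)))
    return minimize(obj, np.zeros(A.shape[1]), method="L-BFGS-B").x

def correlation(x_hat, x0):
    return np.dot(x_hat, x0) / (np.linalg.norm(x_hat) * np.linalg.norm(x0))

def test_error(x_hat, A_test, y_test):
    """Empirical version of the classification error in (5)."""
    return np.mean(y_test != np.sign(A_test @ x_hat))

logistic_loss = lambda t: np.logaddexp(0.0, -t)   # ell(t) = log(1 + exp(-t))
```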

1.4. Contributions and Organization

As mentioned, our techniques naturally apply to both binary Gaussian and Gaussian-mixture models. For concreteness, we focus our presentation on the former models (see Section 2, Section 3 and Section 4.1). Then, we extend our results to Gaussian mixtures in Section 5. Numerical simulations corroborating our theoretical findings for both models are presented in Section 6.
Now, we state the paper’s main contributions:
  • Precise Asymptotics: We show that the absolute value of the correlation of $\hat{\mathbf{x}}$ to the true vector $\mathbf{x}_0$ is sharply predicted by $1/\sqrt{1+\sigma^2}$, where the “effective noise” parameter $\sigma$ can be explicitly computed by solving a system of three non-linear equations in three unknowns. We find that the system of equations (and, thus, the value of $\sigma$) depends on the loss function through its Moreau envelope function. Our prediction holds in the linear asymptotic regime in which $m,n\to\infty$ and $m/n\to\delta>1$ (see Section 2).
  • Fundamental Limits: We establish fundamental limits on the performance of convex optimization-based estimators by computing an upper bound on the best possible correlation performance among all convex loss functions. We compute the upper bound by solving a certain nonlinear equation and we show that such a solution exists for all δ > 1 (see Section 3.1).
  • Optimal Performance and (sub)-optimality of LS for binary models: For certain binary models including signed and logistic, we find the loss functions that achieve the optimal performance, i.e., they attain the previously derived upper bound (see Section 3.2). Interestingly, for the logistic and Probit models with $\|\mathbf{x}_0\|_2=1$, we prove that the correlation performance of least-squares (LS) is at least as good as 0.9972 and 0.9804 times the optimal performance, respectively. However, as $\|\mathbf{x}_0\|_2$ grows large, the logistic and Probit models approach the signed model, in which case LS becomes sub-optimal (see Section 4.1).
  • Extension to the Gaussian-Mixture Model: In Section 5, we extend the fundamental limits and the system of equations to the Gaussian-mixture model. Interestingly, our results indicate that, for this model, LS is optimal among all convex loss functions for all δ > 1 .
  • Numerical Simulations: We conduct numerous experiments to specialize our results to popular models and loss functions, for which we provide simulation results that demonstrate the accuracy of the theoretical predictions (see Section 6 and Appendix E).
Figure 1 contains a pictorial preview of our results described above for the special case of signed measurements. First, Figure 1a depicts the correlation performance of LS and LAD estimators as a function of the aspect ratio δ . Both theoretical predictions and numerical results are shown; note the close match between theory and empirical results for both i.i.d. Gaussian (shown by circles) and i.i.d. Rademacher (shown by squares) distributions of the feature vectors for even small dimensions. Second, the red line on the same figure shows the upper bound derived in this paper—there is no convex loss function that results in correlation exceeding this line. Third, we show that the upper bound can be achieved by the loss functions depicted in Figure 1b for several values of δ . We solve (3) for this choice of loss functions using gradient descent and numerically evaluate the achieved correlation performance. The recorded values are compared in Table 1 to the corresponding values of the upper bound; again, note the close agreement between the values as predicted by the findings of this paper, which suggests that the fundamental limits derived in this paper hold for sub-Gaussian features. We present corresponding results for the logistic and Probit models in Section 6 and for the noisy-signed model in Appendix E.
A remark on the Gaussianity assumption. Our results on precise asymptotics (upon which our study of fundamental limits relies) hold rigorously for the two data models in Section 1.2, in which the feature vectors have entries i.i.d. standard Gaussian. However, we conjecture that the Gaussianity assumption can be relaxed. As partial numerical evidence, note in Figure 1a the perfect match of our theory with the empirical performance over data in which the feature vectors $\mathbf{a}_i$, $i\in[m]$, have entries i.i.d. Rademacher (i.e., centered Bernoulli with probability 1/2). Figure 2 shows corresponding results for the Gaussian-mixture model. Our conjecture that the so-called universality property holds in our setting is also in line with similar numerical observations and partial theoretical evidence previously reported for linear regression settings [7,28,29,30,31]. A formal proof of universality of our results is beyond the scope of this paper. However, we remark that, as long as the asymptotic predictions of Section 2 enjoy this property, all our results on fundamental performance limits and optimal loss functions automatically hold under the same relaxed assumptions.

1.5. Related Works

Over the past two decades, there has been a long list of works that derive statistical guarantees for high-dimensional estimation problems. Many of these are concerned with convex optimization-based inference methods. Our work is most closely related to the following three lines of research.
(a)
Sharp asymptotics for linear measurements.
Most of the results in the literature of high-dimensional statistics are order-wise in nature. Sharp asymptotic predictions have only more recently appeared in the literature for the case of noisy linear measurements with Gaussian measurement vectors. There are by now three different approaches that have been used towards the asymptotic analysis of convex regularized estimators: (i) the one that is based on the approximate message passing (AMP) algorithm and its state-evolution analysis (e.g., [5,8,14,20,32,33,34]); (ii) the one that is based on Gaussian process (GP) inequalities, specifically on the convex Gaussian min-max theorem (CGMT) (e.g., [9,10,13,15,18,19]); and (iii) the “leave-one-out” approach [11,35]. The three approaches are quite different from each other and each comes with its own distinguishing features and disadvantages. A detailed comparison is beyond our scope.
Our results in Theorems 2 and 3 on achieving the best performance across all loss functions are complementary to [12] (Theorem 1) and the work of Advani and Ganguli [16], who proposed a method for deriving the optimal loss function and measuring its performance, albeit for linear models. Instead, we study binary models. The optimality of regularization for linear measurements was recently studied in [22].
In terms of analysis, we follow the GP approach and build upon the CGMT. Since the previous works are concerned with linear measurements, they consider estimators that solve minimization problems of the form
$\hat{\mathbf{x}} := \arg\min_{\mathbf{x}} \sum_{i=1}^m \tilde\ell(y_i - \mathbf{a}_i^T\mathbf{x}) + r\,\mathcal{R}(\mathbf{x}).$
Specifically, the loss function $\tilde\ell$ penalizes the residual. In this paper, we show that the CGMT is applicable to optimization problems in the form of (3). For our case of binary observations, (3) is more general than (6). To see this, note that, for $y_i\in\{\pm1\}$ and popular symmetric loss functions $\tilde\ell(t)=\tilde\ell(-t)$, e.g., least-squares (LS), (3) results in (6) by choosing $\ell(t)=\tilde\ell(t-1)$ in the former. Moreover, (3) includes several other popular loss functions, such as the logistic loss and the hinge loss, which cannot be expressed by (6).
(b)
One-bit compressed sensing.
Our work naturally relates to the literature on one-bit compressed sensing (CS) [36]. The vast majority of performance guarantees for one-bit CS are order-wise in nature (e.g., [37,38,39,40,41,42]). To the best of our knowledge, the only existing sharp results are presented in [43] for Gaussian measurement vectors, which studies the asymptotic performance of regularized LS. Our work can be seen as a direct extension of the work in [43] to loss functions beyond least-squares (see Section 4.1 for details).
Similar to the generality of our paper, Genzel [41] also studied the high-dimensional performance of general loss functions. However, in contrast to our results, their performance bounds are loose (order-wise); as such, they are not informative about the question of optimal performance which we also address here.
(c)
Classification in high-dimensions.
In [44,45], the authors studied the high-dimensional performance of maximum-likelihood (ML) estimation for the logistic model. The ML estimator is a special case of (3) and we consider general binary models. In addition, their analysis is based on the AMP framework. The asymptotics of logistic loss under different classification models is also recently studied in [46]. In yet another closely related recent work [47], the authors extended the results of Sur and Candes [45] to regularized ML by using the CGMT. Instead, we present results for general convex loss functions and for binary linear models. Importantly, we also study performance bounds and optimal loss functions.
We also remark on the following closely related parallel works. While the conference version of this paper was under review, the CGMT was applied by Montanari et al. [48] and Deng et al. [49] to determine the generalization performance of max-margin linear classifiers in a binary classification setting. In essence, these results are complementary to the results of our paper in the following sense. Consider a binary classification setting under the logistic model and Gaussian regressors. As discussed in Section 4.2, the optimal set of (3) is bounded with probability approaching one if and only if $\delta>\delta_f$, for an appropriate threshold $\delta_f$ determined for the first time in [44] (see also Figure 3a). Our results hold in this regime. In contrast, the papers by Montanari et al. [48] and Deng et al. [49] study the regime $\delta<\delta_f$.
We close this section by mentioning works that build on our results and appeared after the initial submission of this paper. The paper by Mignacco et al. [50] studies sharp asymptotics of ridge-regularized ERM with an intercept for Gaussian-mixture models. In [27], we extend the results of this paper on fundamental limits and optimality to the case of ridge-regularized ERM (see also the concurrent work by Aubin et al. [51]).

2. Sharp Performance Guarantees

2.1. Definitions

Moreau Envelopes. Before stating the first result, we need a definition. We write
$\mathcal{M}_\ell(x;\lambda) := \min_v \frac{1}{2\lambda}(x-v)^2 + \ell(v),$
for the Moreau envelope function of the loss $\ell:\mathbb{R}\to\mathbb{R}$ at $x$ with parameter $\lambda>0$. The minimizer (which is unique by strong convexity) is known as the proximal operator of $\ell$ at $x$ with parameter $\lambda$ and we denote it as $\mathrm{prox}_\ell(x;\lambda)$. A useful property of the Moreau envelope function is that it is continuously differentiable with respect to both $x$ and $\lambda$ [52]. We denote these derivatives as follows
$\mathcal{M}_{\ell,1}(x;\lambda) := \frac{\partial\mathcal{M}_\ell(x;\lambda)}{\partial x}, \qquad \mathcal{M}_{\ell,2}(x;\lambda) := \frac{\partial\mathcal{M}_\ell(x;\lambda)}{\partial\lambda}.$
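As a numerical aid (our own sketch, not part of the paper), the Moreau envelope, its $x$-derivative and the proximal operator of a scalar convex loss can be evaluated with a one-dimensional solver:

```python
# Sketch: numerical Moreau envelope and proximal operator of a scalar convex loss.
import numpy as np
from scipy.optimize import minimize_scalar

def prox(loss, x, lam):
    """prox_ell(x; lam) = argmin_v (x - v)^2 / (2 lam) + loss(v)."""
    return minimize_scalar(lambda v: (x - v) ** 2 / (2 * lam) + loss(v)).x

def moreau(loss, x, lam):
    """M_ell(x; lam), evaluated at the (unique) proximal point."""
    v = prox(loss, x, lam)
    return (x - v) ** 2 / (2 * lam) + loss(v)

def moreau_d1(loss, x, lam):
    """M_{ell,1}(x; lam) = (x - prox_ell(x; lam)) / lam (cf. Proposition A1)."""
    return (x - prox(loss, x, lam)) / lam
```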

2.2. A System of Equations

As we show shortly, the asymptotic performance of the optimization in (3) is tightly connected to the solution of a certain system of nonlinear equations, which we introduce here. Specifically, define random variables $G$, $S$ and $Y$ as follows:
$G,S \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0,1) \quad\text{and}\quad Y = f(S),$
and consider the following system of non-linear equations in three unknowns $(\mu,\alpha\geq0,\lambda\geq0)$:
$\mathbb{E}\big[\,Y S\cdot\mathcal{M}_{\ell,1}(\alpha G+\mu S Y;\lambda)\,\big] = 0,$
$\lambda^2\delta\,\mathbb{E}\big[\,\mathcal{M}_{\ell,1}(\alpha G+\mu S Y;\lambda)^2\,\big] = \alpha^2,$
$\lambda\delta\,\mathbb{E}\big[\,G\cdot\mathcal{M}_{\ell,1}(\alpha G+\mu S Y;\lambda)\,\big] = \alpha.$
The expectations are with respect to the randomness of the random variables G, S and Y. We remark that the equations are well defined even if the loss function is not differentiable. In Appendix A, we summarize some well-known properties of the Moreau envelope function and use them to simplify (8) for differentiable loss functions.
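The expectations in (8) are one-dimensional and easy to approximate by Monte Carlo. The following sketch (an illustration under our own naming; `moreau_d1` is the helper from the previous sketch and the logistic link is just one example) evaluates the residuals of the three equations at a candidate triple $(\mu,\alpha,\lambda)$:

```python
# Sketch: Monte Carlo residuals of equations (8a)-(8c) at a candidate (mu, alpha, lam).
import numpy as np

def system_residuals(loss, link, mu, alpha, lam, delta, n_mc=20000, seed=1):
    rng = np.random.default_rng(seed)
    G = rng.standard_normal(n_mc)
    S = rng.standard_normal(n_mc)
    Y = link(S, rng)                                           # Y = f(S) in {-1, +1}
    D = np.array([moreau_d1(loss, x, lam) for x in alpha * G + mu * S * Y])
    r1 = np.mean(Y * S * D)                                    # (8a)
    r2 = lam ** 2 * delta * np.mean(D ** 2) - alpha ** 2       # (8b)
    r3 = lam * delta * np.mean(G * D) - alpha                  # (8c)
    return r1, r2, r3

logistic_link = lambda s, rng: np.where(rng.random(s.shape) < 1 / (1 + np.exp(-s)), 1.0, -1.0)
```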

2.3. Asymptotic Prediction

We are now ready to state our first main result.
Theorem 1.
(Sharp Asymptotics). Assume data generated from the binary model with Gaussian features and assume $\delta>1$ such that the set of minimizers in (3) is bounded and the system of Equations (8) has a unique solution $(\mu,\alpha\geq0,\lambda\geq0)$ such that $\mu\neq0$. Let $\hat{\mathbf{x}}$ be as in (3). Then, in the limit of $m,n\to+\infty$, $m/n\to\delta$, it holds with probability one that
$\lim_{n\to\infty}\mathrm{corr}(\hat{\mathbf{x}};\mathbf{x}_0) = \frac{\mu}{\sqrt{\mu^2+\alpha^2}}.$
Moreover,
$\lim_{n\to\infty}\Big\|\hat{\mathbf{x}} - \mu\cdot\frac{\mathbf{x}_0}{\|\mathbf{x}_0\|_2}\Big\|_2^2 = \alpha^2.$
Theorem 1 holds for any convex loss function. In Section 4, we specialize the result to specific popular choices and also present numerical simulations that confirm the validity of the predictions (see Figure 1a, Figure 3a, Figure 4a and Figure A4a,b). Before that, we include a few remarks on the conditions, interpretation and implications of the theorem. The proof is deferred to Appendix B and uses the convex Gaussian min-max theorem (CGMT) [13,15].
Remark 1.
(The Role of $\mu$ and $\alpha$). According to (9), the prediction for the limiting behavior of the correlation value is given in terms of an effective noise parameter $\sigma := \alpha/\mu$, where $\mu$ and $\alpha$ are the unique solutions of (8). The smaller the value of $\sigma$, the larger the correlation value. While the correlation value is fully determined by the ratio of $\alpha$ and $\mu$, their individual roles are clarified in (10). Specifically, according to (10), $\hat{\mathbf{x}}$ is a biased estimate of the true $\mathbf{x}_0$ and $\mu$ represents exactly the correlation bias term. In other words, solving (3) returns an estimator that is close to a $\mu$-scaled version of $\mathbf{x}_0$. When $\mathbf{x}_0$ and $\hat{\mathbf{x}}$ are scaled appropriately, the $\ell_2$-norm of their difference converges to $\alpha$.
Remark 2.
(Why $\delta>1$). The theorem requires that $\delta>1$ (equivalently, $m>n$ asymptotically). Here, we show that this condition is necessary for Equations (8) to have a bounded solution. To see this, square both sides of (8c) and divide by (8b) to find that
$\delta = \frac{\mathbb{E}\big[\mathcal{M}_{\ell,1}(\alpha G+\mu S Y;\lambda)^2\big]}{\big(\mathbb{E}\big[G\cdot\mathcal{M}_{\ell,1}(\alpha G+\mu S Y;\lambda)\big]\big)^2} \geq 1.$
The inequality follows by applying Cauchy–Schwarz and using the fact that E [ G 2 ] = 1 .
Remark 3.
(On the Existence of a Solution to (8)). While $\delta>1$ is a necessary condition for the equations in (8) to have a solution, it is not sufficient in general. This depends on the specific choice of the loss function. For example, in Section 4.1, we show that, for the squared loss $\ell(t)=(t-1)^2$, the equations have a unique solution iff $\delta>1$. On the other hand, for the logistic loss and the hinge loss, it is argued in Section 4.2 that there exists a threshold value $\delta_f>2$ such that the set of minimizers in (3) is unbounded if $\delta<\delta_f$. In this case, the assumptions of Theorem 1 do not hold. We conjecture that, for these choices of loss, Equations (8) are solvable iff $\delta>\delta_f$. Justifying this conjecture and further studying more general sufficient and necessary conditions under which Equations (8) admit a solution is left to future work. However, in what follows, given such a solution, we prove that it is unique for a wide class of convex loss functions of interest.
Remark 4.
(On the Uniqueness of Solutions to (8)). We show that, if the system of equations in (8) has a solution, then it is unique provided that $\ell$ is strictly convex, continuously differentiable and its derivative satisfies $\ell'(0)\neq0$. For instance, this class includes the square, the logistic and the exponential losses. However, it excludes non-differentiable functions such as the LAD and hinge losses. We believe that the differentiability assumption can be relaxed without major modification in our proof, but we leave this for future work. Our result is summarized in Proposition 1 below.
Proposition 1.
(Uniqueness). Assume that the loss function $\ell:\mathbb{R}\to\mathbb{R}$ has the following properties: (i) it is proper strictly convex; and (ii) it is continuously differentiable and its derivative is such that $\ell'(0)\neq0$. Further, assume that the (possibly random) link function $f$ is such that $SY = S f(S)$, $S\sim\mathcal{N}(0,1)$, has strictly positive density on the real line. The following statement is true. For any $\delta>1$, if the system of equations in (8) has a bounded solution, then it is unique.
The detailed proof of Proposition 1 is deferred to Appendix B.5. Here, we highlight some key ideas. The CGMT relates—in a rather natural way—the original ERM optimization (3) to the following deterministic min-max optimization on four variables
$\min_{\alpha>0,\,\mu,\,\tau>0}\;\max_{\gamma>0}\; F(\alpha,\mu,\tau,\gamma) := \frac{\gamma\tau}{2} - \frac{\alpha\gamma}{\sqrt{\delta}} + \mathbb{E}\Big[\mathcal{M}_\ell\Big(\alpha G+\mu Y S;\frac{\tau}{\gamma}\Big)\Big].$
In Appendix B.4, we show that the optimization above is convex-concave for any lower semi-continuous, proper and convex function $\ell:\mathbb{R}\to\mathbb{R}$. Moreover, it is shown that one arrives at the system of equations in (8) by simplifying the first-order optimality conditions of the min-max optimization in (11). This connection is key to the proof of Proposition 1. Indeed, we prove uniqueness of the solution (if such a solution exists) to (8) by proving instead that the function $F(\alpha,\mu,\tau,\gamma)$ above is (jointly) strictly convex in $(\alpha,\mu,\tau)$ and strictly concave in $\gamma$, provided that $\ell$ satisfies the conditions of the proposition. Next, let us briefly discuss how strict convex-concavity of (11) can be shown. For concreteness, we only discuss strict convexity here; the ideas are similar for strict concavity. At the heart of the proof of strict convexity of $F$ is understanding the properties of the expected Moreau envelope function $\Omega:\mathbb{R}_+\times\mathbb{R}\times\mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}$ defined as follows:
$\Omega(\alpha,\mu,\tau,\gamma) := \mathbb{E}\Big[\mathcal{M}_\ell\Big(\alpha G+\mu Y S;\frac{\tau}{\gamma}\Big)\Big].$
Specifically, we prove in Proposition A7 in Appendix A.6 that if $\ell$ is strictly convex, differentiable and does not attain its minimum at 0, then $\Omega$ is strictly convex in $(\alpha,\mu,\tau)$ and strictly concave in $\gamma$. It is worth noting that the Moreau envelope function $\mathcal{M}_\ell(\alpha g+\mu y s;\tau)$ for fixed $g,s$ and $y=f(s)$ is not necessarily strictly convex. Interestingly, we show that the expected Moreau envelope has this desired feature. We refer the reader to Appendix A.6 and Appendix B.5 for more details.

3. On Optimal Performance

3.1. Fundamental Limits

In this section, we establish fundamental limits on the performance of (3) by deriving an upper bound on the absolute value of the correlation $\mathrm{corr}(\hat{\mathbf{x}};\mathbf{x}_0)$ that holds for all choices of loss functions satisfying Theorem 1. The result builds on the prediction of Theorem 1. In view of (9), upper bounding the correlation is equivalent to lower bounding the effective noise parameter $\sigma = \alpha/\mu$. Theorem 2 derives such a lower bound.
Before stating the theorem, we need a definition. For a random variable $H$ with density $p_H(h)$ that has a derivative $p_H'(h)$, $h\in\mathbb{R}$, we denote its score function $\xi_H(h) := \partial_h\log p_H(h) = \frac{p_H'(h)}{p_H(h)}$. Then, the Fisher information of $H$, denoted by $\mathcal{I}(H)\in\mathbb{R}_+$, is defined as follows (e.g., [53] (Sec. 2)):
$\mathcal{I}(H) := \mathbb{E}\big[(\xi_H(H))^2\big].$
Theorem 2.
(Best Achievable Performance). Let the assumptions and notation of Theorem 1 hold and recall the definition of the random variables $G$, $S$ and $Y$ in (7). For $\sigma>0$, define a new random variable $W_\sigma := \sigma G + SY$, and the function $\kappa:(0,\infty]\to[0,1]$ as follows,
$\kappa(\sigma) := \frac{\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}{1+\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)-1\big)}.$
Further, define $\sigma_{\mathrm{opt}}$ as follows,
$\sigma_{\mathrm{opt}} := \min\{\sigma\geq0 : \kappa(\sigma) = 1/\delta\}.$
Then, for $\sigma_\ell := \alpha/\mu$, it holds that $\sigma_\ell \geq \sigma_{\mathrm{opt}}$.
The theorem above establishes an upper bound on the best possible correlation performance among all convex loss functions. In Section 3.2, we show that this bound is often tight, i.e., there exists a loss function that achieves the specified best possible performance.
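Since $\kappa$ only involves the Fisher information of the one-dimensional variable $W_\sigma$, both $\kappa(\sigma)$ and $\sigma_{\mathrm{opt}}$ can be approximated numerically. The following sketch is our own illustration (grid sizes, the scanned range of $\sigma$ and the half-normal example for the signed model are assumptions, not choices made in the paper):

```python
# Sketch: estimate kappa(sigma) of Theorem 2 and sigma_opt of (12) on a grid.
import numpy as np
from scipy.stats import norm

def kappa(sigma, p_sy, grid):
    dz = grid[1] - grid[0]
    gauss = norm.pdf(grid / sigma) / sigma                       # density of sigma * G
    p_w = np.maximum(np.convolve(p_sy, gauss, mode="same") * dz, 1e-300)  # density of W_sigma
    score = np.gradient(np.log(p_w), dz)                         # xi_W = (log p_W)'
    I_w = np.sum(score ** 2 * p_w) * dz                          # Fisher information I(W_sigma)
    return sigma**2 * (sigma**2 * I_w + I_w - 1) / (1 + sigma**2 * (sigma**2 * I_w - 1))

def sigma_opt(delta, p_sy, grid, sigmas=np.linspace(0.05, 5.0, 200)):
    ks = np.array([kappa(s, p_sy, grid) for s in sigmas])
    return sigmas[np.argmax(ks >= 1 / delta)]   # smallest scanned sigma with kappa >= 1/delta

# Example: noiseless signed model, where SY = |S| has a half-normal density.
grid = np.linspace(-30, 30, 6001)
p_sy = 2 * norm.pdf(grid) * (grid >= 0)
```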
Remark 5.
Theorem 2 complements the results in [12,14] (Lem. 3.4) and [15] (Rem. 5.3.3), in which the authors considered only linear regression. In particular, Theorem 2 shows that it is possible to achieve results of this nature for the more challenging setting of binary classification considered here.
Proof of Theorem 2.
Fix a loss function $\ell$ and let $(\mu\neq0,\alpha>0,\lambda\geq0)$ be a solution to (8), which by the assumptions of Theorem 1 is unique. The first important observation is that the performance of a loss function is invariant to rescalings of the loss and of its argument. To see this, consider an arbitrary loss function $\ell(t)$ and let $\hat{\mathbf{x}}$ be a minimizer in (3). Now, consider (3) with the following loss function instead, for some arbitrary constants $C_1>0$, $C_2\neq0$:
$\hat\ell(t) := \frac{1}{C_1}\ell(C_2 t).$
It is not hard to see that $\frac{1}{C_2}\hat{\mathbf{x}}$ is the minimizer for $\hat\ell$. Clearly, $\frac{1}{C_2}\hat{\mathbf{x}}$ has the same correlation value with $\mathbf{x}_0$ as $\hat{\mathbf{x}}$, showing that the two loss functions $\ell$ and $\hat\ell$ perform the same. With this observation in mind, consider the function $\hat\ell:\mathbb{R}\to\mathbb{R}$ such that $\hat\ell(t) = \frac{\lambda}{\mu^2}\ell(\mu t)$. Then, notice that
$\mathcal{M}_{\ell,1}(x;\lambda) = \frac{\mu}{\lambda}\,\mathcal{M}_{\hat\ell,1}(x/\mu;1).$
Using this relation in (8) and setting $\sigma := \sigma_\ell = \alpha/\mu$, the system of equations in (8) can be equivalently rewritten in the following convenient form,
$\mathbb{E}\big[\,YS\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\big] = 0,$
$\mathbb{E}\big[\,\mathcal{M}_{\hat\ell,1}(W_\sigma;1)^2\,\big] = \sigma^2/\delta,$
$\mathbb{E}\big[\,G\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\big] = \sigma/\delta.$
Next, we show how to use (14) to derive an equivalent system of equations based on $W_\sigma$. Starting with (14c), we have
$\mathbb{E}\big[G\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\big] = \frac{1}{\sigma}\iint u\,\mathcal{M}_{\hat\ell,1}(u+z;1)\,\phi_\sigma(u)\,p_{SY}(z)\,\mathrm{d}u\,\mathrm{d}z,$
where $\phi_\sigma(u) := p_{\sigma G}(u) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{u^2}{2\sigma^2}}$. Since it holds that $\phi_\sigma'(u) = -\frac{u}{\sigma^2}\phi_\sigma(u)$, using (A74), it follows that
$\mathbb{E}\big[G\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\big] = -\sigma\iint\mathcal{M}_{\hat\ell,1}(u+z;1)\,\phi_\sigma'(u)\,p_{SY}(z)\,\mathrm{d}u\,\mathrm{d}z = -\sigma\iint\mathcal{M}_{\hat\ell,1}(w;1)\,\phi_\sigma'(u)\,p_{SY}(w-u)\,\mathrm{d}u\,\mathrm{d}w = -\sigma\int\mathcal{M}_{\hat\ell,1}(w;1)\,p_{W_\sigma}'(w)\,\mathrm{d}w,$
where in the last step we use
$p_{W_\sigma}(w) = \int\phi_\sigma(u)\,p_{SY}(w-u)\,\mathrm{d}u, \quad\text{hence}\quad p_{W_\sigma}'(w) = \int\phi_\sigma'(u)\,p_{SY}(w-u)\,\mathrm{d}u.$
Therefore, we have by (16) that
$\mathbb{E}\big[G\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\big] = -\sigma\,\mathbb{E}\big[\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\xi_{W_\sigma}(W_\sigma)\big].$
This combined with (14c) gives $\mathbb{E}\big[\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\xi_{W_\sigma}(W_\sigma)\big] = -1/\delta$. Second, multiplying (14c) by $\sigma$ and adding it to (14a) yields that
$\mathbb{E}\big[\,W_\sigma\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\big] = \sigma^2/\delta.$
Putting these together, we conclude with the following system of equations, which is equivalent to (14),
$\mathbb{E}\big[\,W_\sigma\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\big] = \sigma^2/\delta,$
$\mathbb{E}\big[\,\mathcal{M}_{\hat\ell,1}(W_\sigma;1)^2\,\big] = \sigma^2/\delta,$
$\mathbb{E}\big[\,\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\,\xi_{W_\sigma}(W_\sigma)\,\big] = -1/\delta.$
Note that, for $\sigma>0$, $\xi_{W_\sigma} = p_{W_\sigma}'/p_{W_\sigma}$ exists everywhere. This is because for all $w\in\mathbb{R}$: $p_{W_\sigma}(w)>0$ and $p_{W_\sigma}(\cdot)$ is continuously differentiable. Combining (19a) and (19c), we derive the following equation, which holds for all $\alpha_1,\alpha_2\in\mathbb{R}$,
$\mathbb{E}\big[(\alpha_1 W_\sigma+\alpha_2\xi_{W_\sigma}(W_\sigma))\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\big] = \alpha_1\sigma^2/\delta - \alpha_2/\delta.$
By the Cauchy–Schwarz inequality, we have that
$\Big(\mathbb{E}\big[(\alpha_1 W_\sigma+\alpha_2\xi_{W_\sigma}(W_\sigma))\cdot\mathcal{M}_{\hat\ell,1}(W_\sigma;1)\big]\Big)^2 \leq \mathbb{E}\big[(\alpha_1 W_\sigma+\alpha_2\xi_{W_\sigma}(W_\sigma))^2\big]\;\mathbb{E}\big[\mathcal{M}_{\hat\ell,1}(W_\sigma;1)^2\big].$
Using the facts that $\mathbb{E}[W_\sigma\xi_{W_\sigma}(W_\sigma)] = -1$ (by integration by parts), $\mathbb{E}[(\xi_{W_\sigma}(W_\sigma))^2] = \mathcal{I}(W_\sigma)$, $\mathbb{E}[W_\sigma^2] = \sigma^2+1$ and (19b), the right-hand side of (20) is equal to
$\big(\alpha_1^2(\sigma^2+1) + \alpha_2^2\mathcal{I}(W_\sigma) - 2\alpha_1\alpha_2\big)\,\sigma^2/\delta.$
Therefore, we conclude with the following inequality for $\sigma$,
$\delta\,\sigma^2\big(\alpha_1^2(\sigma^2+1)+\alpha_2^2\mathcal{I}(W_\sigma)-2\alpha_1\alpha_2\big) \geq (\alpha_1\sigma^2-\alpha_2)^2,$
which holds for all $\alpha_1,\alpha_2\in\mathbb{R}$. In particular, (21) holds for the following choice of values for $\alpha_1$ and $\alpha_2$:
$\alpha_1 = \frac{1-\sigma^2\mathcal{I}(W_\sigma)}{\delta\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}, \qquad \alpha_2 = \frac{1}{\delta\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}.$
(The choice above is motivated by the result of Section 3.2; see Theorem 3). Rewriting (21) with the chosen values of $\alpha_1$ and $\alpha_2$ yields the following inequality,
$\frac{1}{\delta} \leq \frac{\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}{1+\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)-1\big)} = \kappa(\sigma),$
where on the right-hand side above, we recognize the function $\kappa$ defined in the theorem.
Next, we use (22) to show that $\sigma_{\mathrm{opt}}$ defined in (12) yields a lower bound on the achievable value of $\sigma$. For the sake of contradiction, assume that $\sigma<\sigma_{\mathrm{opt}}$. By the above, $1/\delta\leq\kappa(\sigma)$; moreover, by the definition of $\sigma_{\mathrm{opt}}$, we must have that $1/\delta<\kappa(\sigma)$. Since $\kappa(0)=0$ and $\kappa(\cdot)$ is a continuous function, we conclude that for some $\sigma_1\in(0,\sigma)$ it holds that $\kappa(\sigma_1)=1/\delta$. Therefore, for $\sigma_1<\sigma_{\mathrm{opt}}$, we have $\kappa(\sigma_1)=1/\delta$, which contradicts the definition of $\sigma_{\mathrm{opt}}$. This proves that $\sigma\geq\sigma_{\mathrm{opt}}$, as desired.
To complete the proof, it remains to show that the equation $\kappa(\sigma)=1/\delta$ admits a solution for all $\delta>1$. For this purpose, we use the continuous mapping theorem and the fact that the Fisher information is a continuous function [54]. Recall that, for two independent and non-constant random variables, it holds that $\mathcal{I}(X+Y)<\mathcal{I}(X)$ [53] (Eq. 2.18). Since $G$ and $SY$ are independent random variables, we find that $\mathcal{I}(\sigma G+SY)<\mathcal{I}(SY)$, which implies that $\mathcal{I}(\sigma G+SY)$ is uniformly bounded over all values of $\sigma$. Therefore,
$\lim_{\sigma\to0}\kappa(\sigma) = \lim_{\sigma\to0}\frac{\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}{1+\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)-1\big)} = 0.$
Furthermore, $\sigma^2\mathcal{I}(\sigma G+SY) = \mathcal{I}\big(G+\tfrac{1}{\sigma}SY\big) \to \mathcal{I}(G) = 1$ as $\sigma\to\infty$. Hence,
$\lim_{\sigma\to\infty}\kappa(\sigma) = \lim_{\sigma\to\infty}\frac{\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)+\mathcal{I}(W_\sigma)-1\big)}{1+\sigma^2\big(\sigma^2\mathcal{I}(W_\sigma)-1\big)} = 1.$
Note that $\sigma^2\mathcal{I}(\sigma G+SY)<\sigma^2\mathcal{I}(\sigma G)=1$, which further yields that $\kappa(\sigma)<1$ for all $\sigma\geq0$. Finally, since $\mathcal{I}(\cdot)$ is a continuous function, we deduce that the range of $\kappa:\mathbb{R}_{\geq0}\to\mathbb{R}$ is $[0,1)$, implying the existence of a solution to (12) for all $\delta>1$. This completes the proof of Theorem 2. □
A useful closed-form bound on the best achievable performance: In general, determining σ opt requires computing the Fisher information of the random variable σ G + S Y for σ > 0 . If the probability distribution of S Y is continuously differentiable (e.g., logistic model; see Appendix C.1), then we obtain the following simplified bound.
Corollary 1.
(Closed-form Lower Bound on $\sigma_{\mathrm{opt}}$). Let $p_{SY}:\mathbb{R}\to\mathbb{R}$ be the probability density of $SY$. If $p_{SY}(x)$ is differentiable for all $x\in\mathbb{R}$, then,
$\sigma_{\mathrm{opt}}^2 \geq \frac{1}{(\delta-1)\big(\mathcal{I}(SY)-1\big)}.$
Proof. 
Based on Theorem 2, the following equation holds for $\sigma=\sigma_{\mathrm{opt}}$
$\frac{1}{\delta} = \kappa(\sigma)$
or, equivalently, by rewriting the right-hand side,
$\frac{1}{\delta} = 1 - \frac{1}{\frac{1}{1-\sigma^2\mathcal{I}(W_\sigma)}-\sigma^2}.$
Define the following function
$h(x) := 1 - \frac{1}{\frac{1}{1-\sigma^2 x}-\sigma^2}.$
The function $h$ is increasing in the region $R_\sigma = \{z : z > \frac{\sigma^2-1}{\sigma^4}\}$. According to Stam's inequality [55], for two independent random variables $X$ and $Y$ with continuously differentiable densities $p_X$ and $p_Y$, it holds that
$\mathcal{I}(X+Y) \leq \frac{\mathcal{I}(X)\cdot\mathcal{I}(Y)}{\mathcal{I}(X)+\mathcal{I}(Y)},$
where equality is achieved if and only if $X$ and $Y$ are independent Gaussian random variables. Therefore, since by assumption $p_{SY}$ is differentiable on the real line, Stam's inequality yields
$\mathcal{I}(W_\sigma) = \mathcal{I}(\sigma G+SY) \leq \frac{\mathcal{I}(\sigma G)\cdot\mathcal{I}(SY)}{\mathcal{I}(\sigma G)+\mathcal{I}(SY)}.$
Next, we prove that for all $\sigma>0$, both sides of (25) are in the region $R_\sigma$. First, we prove that $\mathcal{I}(W_\sigma)\in R_\sigma$. By the Cramér–Rao bound (e.g., see [53] (Eq. 2.15)) for the Fisher information of a random variable $X$, we have that $\mathcal{I}(X)\geq1/\mathrm{Var}[X]$. In addition, for the random variable $W_\sigma$, we know that $\mathrm{Var}[W_\sigma] = 1+\sigma^2-(\mathbb{E}[SY])^2$, thus
$\mathcal{I}(W_\sigma) \geq \frac{1}{1+\sigma^2-(\mathbb{E}[SY])^2}.$
Using the relation $(\mathbb{E}[SY])^2 \leq \mathbb{E}[S^2]\,\mathbb{E}[Y^2] = 1$, one can check that the following inequality holds:
$\frac{1}{1+\sigma^2-(\mathbb{E}[SY])^2} \geq \frac{\sigma^2-1}{\sigma^4}.$
Therefore, from (26) and (27), we derive that $\mathcal{I}(W_\sigma)\in R_\sigma$ for all $\sigma>0$. Furthermore, by the inequality in (25) and the definition of $R_\sigma$, it directly follows that for all $\sigma>0$
$\frac{\mathcal{I}(\sigma G)\,\mathcal{I}(SY)}{\mathcal{I}(\sigma G)+\mathcal{I}(SY)} \in R_\sigma.$
Finally, noting that $h(\cdot)$ is increasing in $R_\sigma$, combined with (25), we have
$\frac{1}{\delta} = h\big(\mathcal{I}(W_\sigma)\big) \leq h\Big(\frac{\mathcal{I}(\sigma G)\cdot\mathcal{I}(SY)}{\mathcal{I}(\sigma G)+\mathcal{I}(SY)}\Big),$
which after using the relation $\mathcal{I}(\sigma G) = \sigma^{-2}$ and further simplification yields the inequality in the statement of the corollary. □
The proof of the corollary reveals that (23) holds with equality when $SY$ is Gaussian. In Appendix C.1, we compute $p_{SY}$ for the logistic and the Probit models with $\|\mathbf{x}_0\|_2=1$ and numerically show that it is close to the density of a Gaussian random variable. Consequently, the lower bound of Corollary 1 is almost exact when measurements are obtained according to the logistic and Probit models (see Figure A2 in Appendix C).

3.2. On the Optimal Loss Function

It is natural to ask whether there exists a loss function that attains the bound of Theorem 2. If such a loss function exists, then we say it is optimal in the sense that it maximizes the correlation performance among all convex loss functions in (3).
Our next theorem derives a candidate for the optimal loss function, which we denote $\ell_{\mathrm{opt}}$. Before stating the result, we provide some intuition about the proof, which builds on Theorem 2. The critical observation in the proof of Theorem 2 is that the effective noise $\sigma_{\hat\ell}$ of $\hat\ell$ is minimized (i.e., it attains the value $\sigma_{\mathrm{opt}}$) if the Cauchy–Schwarz inequality in (20) holds with equality. Hence, we seek $\hat\ell=\ell_{\mathrm{opt}}$ so that for some $c\in\mathbb{R}$,
$\mathcal{M}_{\ell_{\mathrm{opt}},1}(w;1) = c\big(\alpha_1 w + \alpha_2\cdot\xi_{W_{\mathrm{opt}}}(w)\big).$
By choosing $c=-1$, integrating and ignoring constants irrelevant to the minimization of the loss function, the previous condition is equivalent to the following: $\mathcal{M}_{\ell_{\mathrm{opt}}}(w;1) = -\alpha_1 w^2/2 - \alpha_2\log(p_{W_{\mathrm{opt}}}(w))$. It turns out that this condition can be “inverted” to yield the explicit formula for $\ell_{\mathrm{opt}}$ as $\ell_{\mathrm{opt}}(w) = -\mathcal{M}_{\alpha_1 q+\alpha_2\log(p_{W_{\mathrm{opt}}})}(w;1)$. Of course, one has to properly choose $\alpha_1$ and $\alpha_2$ to make sure that this function satisfies the system of equations in (19) with $\sigma=\sigma_{\mathrm{opt}}$. The correct choice is specified in the theorem below. The proof is deferred to Appendix D.1.
Theorem 3.
(Optimal Loss Function). Recall the definition of $\sigma_{\mathrm{opt}}$ in (12). Define the random variable $W_{\mathrm{opt}} := \sigma_{\mathrm{opt}} G + SY$ and let $p_{W_{\mathrm{opt}}}$ denote its density. Consider the following loss function $\ell_{\mathrm{opt}}:\mathbb{R}\to\mathbb{R}$
$\ell_{\mathrm{opt}}(w) = -\mathcal{M}_{\alpha_1 q+\alpha_2\log(p_{W_{\mathrm{opt}}})}(w;1),$
where $q(x) = x^2/2$ and
$\alpha_1 = \frac{1-\sigma_{\mathrm{opt}}^2\mathcal{I}(W_{\mathrm{opt}})}{\delta\big(\sigma_{\mathrm{opt}}^2\mathcal{I}(W_{\mathrm{opt}})+\mathcal{I}(W_{\mathrm{opt}})-1\big)}, \qquad \alpha_2 = \frac{1}{\delta\big(\sigma_{\mathrm{opt}}^2\mathcal{I}(W_{\mathrm{opt}})+\mathcal{I}(W_{\mathrm{opt}})-1\big)}.$
If $\ell_{\mathrm{opt}}$ defined as in (29) is convex and the equation $\kappa(\sigma)=1/\delta$ has a unique solution, then $\sigma_{\ell_{\mathrm{opt}}} = \sigma_{\mathrm{opt}}$.
In general, there is no guarantee that the function $\ell_{\mathrm{opt}}(\cdot)$ as defined in (29) is convex. However, if this is the case, the theorem above guarantees that it is optimal (strictly speaking, the performance is optimal among all convex loss functions for which (8) has a unique solution, as required by Theorem 2). A sufficient condition for $\ell_{\mathrm{opt}}(w)$ to be convex is provided in Appendix D.2. Importantly, in Appendix D.2.1, we show that this condition holds for observations following the signed model. Thus, for this case, the resulting function is convex. Although we do not prove the convexity of the optimal loss function for the logistic and Probit models, our numerical results (e.g., see Figure 3b) suggest that this is the case. Concretely, we conjecture that the loss function $\ell_{\mathrm{opt}}$ is convex for the logistic and Probit models, and therefore by Theorem 3 its performance is optimal.

4. Special Cases

4.1. Least-Squares

By choosing $\ell(t) = (t-1)^2$ in (3), we obtain the standard least-squares estimate. To see this, note that since $y_i = \pm1$, it holds for all $i$ that $(y_i\mathbf{a}_i^T\mathbf{x}-1)^2 = (y_i-\mathbf{a}_i^T\mathbf{x})^2$. Thus, $\hat{\mathbf{x}}$ is minimizing the sum of squares of the residuals:
$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \sum_{i=1}^m (y_i-\mathbf{a}_i^T\mathbf{x})^2.$
For this choice of a loss function, we can solve the equations in (8) in closed form. Furthermore, the equations have a (unique, bounded) solution for any δ > 1 provided that E [ S Y ] > 0 . The final result is summarized in the corollary below (see Appendix F.1 for the proof).
Corollary 2.
(Least-squares). Assume data generated from the binary model and $\delta>1$. For the label function, assume that $\mathbb{E}[SY]>0$ in the notation of (7). Let $\hat{\mathbf{x}}$ be as in (41). Then, in the limit of $m,n\to+\infty$, $m/n\to\delta$, Equations (9) and (10) hold with probability one with $\alpha$ and $\mu$ given as follows:
$\mu = \mathbb{E}[SY],$
$\alpha = \sqrt{\frac{1-(\mathbb{E}[SY])^2}{\delta-1}}.$
Corollary 2 appears in [43] (see also [40,41,56] and Appendix F for an interpretation of the result). However, these previous works obtain results that are limited to the least-squares loss. In contrast, our results are general and the LS prediction is obtained as a simple corollary of our general Theorem 1. Moreover, our study of fundamental limits allows us to quantify the sub-optimality gap of least-squares (LS) as follows.
On the Optimality of LS. On the one hand, Corollary 2 derives an explicit formula for the effective noise $\sigma_{\mathrm{LS}} = \alpha/\mu$ of LS in terms of $\mathbb{E}[YS]$ and $\delta$. On the other hand, Corollary 1 provides an explicit lower bound on the optimal value $\sigma_{\mathrm{opt}}$ in terms of $\mathcal{I}(SY)$ and $\delta$. Combining the two, we conclude that
$\frac{\sigma_{\mathrm{LS}}^2}{\sigma_{\mathrm{opt}}^2} \leq \xi := \big(\mathcal{I}(SY)-1\big)\,\frac{1-(\mathbb{E}[SY])^2}{(\mathbb{E}[SY])^2}.$
In terms of correlation,
$\frac{\mathrm{corr}_{\mathrm{opt}}}{\mathrm{corr}_{\mathrm{LS}}} = \sqrt{\frac{1+\sigma_{\mathrm{LS}}^2}{1+\sigma_{\mathrm{opt}}^2}} \leq \frac{\sigma_{\mathrm{LS}}}{\sigma_{\mathrm{opt}}} \leq \sqrt{\xi},$
where the first inequality follows from the fact that $\sigma_{\mathrm{LS}}\geq\sigma_{\mathrm{opt}}$. Therefore, the performance of LS is at least as good as $1/\sqrt{\xi}$ times the optimal one. In particular, assuming $\|\mathbf{x}_0\|_2=1$ and for the logistic and Probit models (for which Corollary 1 holds), we can explicitly compute $1/\sqrt{\xi} = 0.9972$ and $0.9804$, respectively. However, we recall that for large $\|\mathbf{x}_0\|_2$ the logistic and Probit models approach the signed model, and, as Figure 1a demonstrates, LS becomes suboptimal.
Another interesting consequence of combining Corollaries 1 and 2 is that LS would be optimal if $SY$ were a Gaussian random variable. To see this, recall from Corollary 1 that, if $SY$ is Gaussian, then:
$\sigma_{\mathrm{opt}}^2 = \frac{1}{(\delta-1)\big(\mathcal{I}(SY)-1\big)}.$
However, for $SY$ Gaussian, we can explicitly compute $\mathcal{I}(SY) = 1/\mathrm{Var}[SY]$, which leads to
$\sigma_{\mathrm{opt}}^2 = \frac{1-(\mathbb{E}[SY])^2}{(\mathbb{E}[SY])^2(\delta-1)}.$
The right-hand side is exactly $\sigma_{\mathrm{LS}}^2$. Therefore, the optimal performance is achieved by the square loss function if $SY$ is a Gaussian random variable. Remarkably, for the logistic and Probit models with small SNR (i.e., small $\|\mathbf{x}_0\|_2$), the density of $SY$ is close to the density of a normal random variable (see Figure A2 in Appendix C), implying the optimality of LS for these models.
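As a quick numerical illustration of Corollary 2 (our own sketch, not part of the paper), $\mu=\mathbb{E}[SY]$ for the three label functions with $\|\mathbf{x}_0\|_2=1$ can be computed by one-dimensional integration, and the LS correlation prediction then follows as $1/\sqrt{1+\sigma_{\mathrm{LS}}^2}$:

```python
# Sketch: mu = E[SY] for the signed, Probit and logistic models and the LS prediction.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu_signed   = quad(lambda s: np.abs(s) * norm.pdf(s), -10, 10)[0]                 # sqrt(2/pi)
mu_probit   = quad(lambda s: s * (2 * norm.cdf(s) - 1) * norm.pdf(s), -10, 10)[0] # 1/sqrt(pi)
mu_logistic = quad(lambda s: s * np.tanh(s / 2) * norm.pdf(s), -10, 10)[0]        # ~ 0.4132

def ls_correlation(mu, delta):
    sigma2 = (1 - mu**2) / (mu**2 * (delta - 1))   # sigma_LS^2 from Corollary 2
    return 1 / np.sqrt(1 + sigma2)

print(mu_signed, mu_probit, mu_logistic)           # ~0.798, ~0.564, ~0.413
print(ls_correlation(mu_logistic, delta=5.0))
```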

4.2. Logistic and Hinge Loss

Theorem 1 only holds in regimes for which the set of minimizers of (3) is bounded. As we show here, this is not always the case. Specifically, consider non-negative loss functions $\ell(t)\geq0$ with the property $\lim_{t\to+\infty}\ell(t)=0$. For example, the hinge, exponential and logistic loss functions all satisfy this property. Now, we show that for such loss functions the set of minimizers is unbounded if $\delta<\delta_f$ for some appropriate $\delta_f>2$. First, note that the set of minimizers is unbounded if the following condition holds:
$\exists\,\mathbf{x}_s\neq\mathbf{0} \ \text{ such that }\ y_i\mathbf{a}_i^T\mathbf{x}_s\geq0,\ \ i\in[m].$
Indeed, if (34) holds, then $\mathbf{x} = c\cdot\mathbf{x}_s$ with $c\to+\infty$ attains zero cost in (3); thus, it is optimal and the set of minimizers is unbounded. To proceed, we rely on a recent result by Candes and Sur [44], who proved that (34) holds iff (To be precise, Candes and Sur [44] proved the statement for measurements $y_i$, $i\in[m]$, that follow a logistic model. Close inspection of their proof shows that this requirement can be relaxed by appropriately defining the random variable $Y$ in (7) (see also [48,49]).)
$\delta \leq \delta_f := \Big(\min_{c\in\mathbb{R}}\,\mathbb{E}\big[(G+cSY)_-^2\big]\Big)^{-1},$
where $G$, $S$ and $Y$ are random variables as in (7) and $(t)_- := \min\{0,t\}$. We highlight that the logistic and hinge losses give unbounded solutions in the noisy-signed model with $\varepsilon=0$, since the condition (34) holds for $\mathbf{x}_s=\mathbf{x}_0$. However, their performances are comparable to the optimal performance in both the logistic and Probit models (see Figure 3a and Figure 4a).
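The threshold $\delta_f$ in (35) is easy to estimate numerically; the sketch below (our own, with the logistic link for $\|\mathbf{x}_0\|_2=1$ as an assumed example and an assumed search bracket for $c$) combines Monte Carlo with a one-dimensional minimization:

```python
# Sketch: estimate the separability threshold delta_f in (35) by Monte Carlo.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
G = rng.standard_normal(200_000)
S = rng.standard_normal(200_000)
Y = np.where(rng.random(S.shape) < 1 / (1 + np.exp(-S)), 1.0, -1.0)   # logistic link

def neg_part_sq(c):
    return np.mean(np.minimum(G + c * S * Y, 0.0) ** 2)

res = minimize_scalar(neg_part_sq, bounds=(0.0, 50.0), method="bounded")
delta_f = 1.0 / res.fun
print(delta_f)   # exceeds 2, consistent with the discussion in Remark 3
```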

5. Extensions to Gaussian-Mixture Models

In this section, we show that our results on sharp asymptotics and lower bounds on the error can be extended to the Gaussian-mixture model (GMM) presented in Section 1.2. The discussion on the phase transition for the existence of a bounded solution in Section 4.2 applies here as well. We rely on a phase-transition result [49] (Prop. 3.1), which proves that (34) holds if and only if
$\delta \leq \delta_\star := \Big(\min_{t\in\mathbb{R}}\,\mathbb{E}\big[(W_1+tW_2)_-^2\big]\Big)^{-1},$
where $W_1$ and $W_2$ are the random variables defined in (37) below and $(x)_-^2 := \min\{x,0\}^2$. Therefore, for loss functions satisfying this property, e.g., the hinge loss and the logistic loss, the solution to (3) is unbounded if and only if $\delta\leq\delta_\star$.

5.1. System of Equations for GMM

It turns out that, similarly to the binary models, the asymptotic performance of (3) for the GMM depends on the loss function via its Moreau envelope. Specifically, let $W_1$ and $W_2$ be independent Gaussian random variables such that
$W_1\sim\mathcal{N}(0,1), \qquad W_2\sim\mathcal{N}(r,1),$
where $r := \|\mathbf{x}_0\|_2>0$. Consider the following system of non-linear equations in three unknowns $(\mu,\alpha\geq0,\lambda\geq0)$:
$0 = \mathbb{E}\big[\,W_2\cdot\mathcal{M}_{\ell,1}(\alpha W_1+\mu W_2;\lambda)\,\big],$
$\alpha^2 = \lambda^2\delta\,\mathbb{E}\big[\,\mathcal{M}_{\ell,1}(\alpha W_1+\mu W_2;\lambda)^2\,\big],$
$\alpha = \lambda\delta\,\mathbb{E}\big[\,W_1\cdot\mathcal{M}_{\ell,1}(\alpha W_1+\mu W_2;\lambda)\,\big].$
The expectations above are with respect to the randomness of the random variables W 1 and W 2 .
As we show shortly, the solution to these equations is tightly connected to the asymptotic behavior of the optimization in (3).

5.2. Theoretical Prediction of Error for Convex Loss Functions

Theorem 4.
(Asymptotic Prediction). Assume data generated from the Gaussian-mixture model and assume $\delta>1$ such that the set of minimizers in (3) is bounded and the system of Equations (38) has a unique solution $(\mu,\alpha,\lambda)$ such that $\mu\neq0$. Let $\hat{\mathbf{x}}$ be as in (3) and $\sigma=\alpha/\mu$. Then, in the limit of $m,n\to+\infty$, $m/n\to\delta$, it holds with probability one that
$\lim_{n\to\infty}\mathrm{corr}(\hat{\mathbf{x}};\mathbf{x}_0) = \frac{\mu}{\sqrt{\mu^2+\alpha^2}}, \qquad \lim_{n\to\infty}\mathcal{E} = Q\Big(\frac{r}{\sqrt{1+\sigma^2}}\Big),$
where E denotes the classification test error defined in (5).
Remark 6
(Proof of Theorem 4). The high-level steps of the proof of Theorem 4 follow closely the proof of Theorem 1. In particular, for the GMM one can show that the correlation of the ERM estimate with the true vector $\mathbf{x}_0$ is predicted by a system of equations as in (38), which coincides with (8) up to replacing the non-Gaussian random variable $SY$ of Theorem 1 by the Gaussian $W_2$. Specifically, by rotational invariance of the Gaussian feature vectors $\mathbf{a}_i$, we can assume, without loss of generality, that $\mathbf{x}_0 = [r,0,0,\ldots,0]^T$. Then, we can guarantee that with probability one it holds that
$\lim_{n\to\infty}\hat{\mathbf{x}}(1) = \mu, \qquad \lim_{n\to\infty}\sum_{j=2}^n\hat{\mathbf{x}}^2(j) = \alpha^2,$
where $\mu$ and $\alpha$ are specified by (38). To see how this implies (39), we argue as follows. Recalling that $\mathbf{a}\,|\,y\sim\mathcal{N}(y\mathbf{x}_0,\mathbf{I})$, we have
$y\,\langle\hat{\mathbf{x}},\mathbf{a}\rangle \sim \mathcal{N}\big(r\,\hat{\mathbf{x}}(1),\,\|\hat{\mathbf{x}}\|_2^2\big).$
Using this and (40) leads to the asymptotic value of correlation and classification error as presented in (39).
Remark 7.
(On the Uniqueness of Solutions to Equation (38)). Our results proving the uniqueness of solutions to the equations (8) in Proposition 1 extend to the GMM. Noting that $W_2\sim\mathcal{N}(r,1)$ in (38) plays the role of $SY$ in (8), we straightforwardly deduce the following result for the uniqueness of solutions to (38).
Proposition 2.
Assume that the loss function $\ell:\mathbb{R}\to\mathbb{R}$ has the following properties: (i) it is proper strictly convex; and (ii) it is continuously differentiable and its derivative is such that $\ell'(0)\neq0$. The following statement is true. For any $\delta>1$, if the system of equations in (38) has a bounded solution, then it is unique.

5.3. Special Case: Least-Squares

By choosing $\ell(t) = (t-1)^2$ in (3), we obtain the standard least-squares estimate. To see this, note that since $y_i = \pm1$, it holds for all $i$ that $(y_i\mathbf{a}_i^T\mathbf{x}-1)^2 = (y_i-\mathbf{a}_i^T\mathbf{x})^2$.
Thus, the estimator $\hat{\mathbf{x}}_{LS}$ is minimizing the sum of squares of the residuals:
$\hat{\mathbf{x}}_{LS} = \arg\min_{\mathbf{x}} \sum_{i=1}^m (y_i-\mathbf{a}_i^T\mathbf{x})^2.$
For the choice ( t ) = ( t 1 ) 2 , it turns out that we can solve the equations in (38) in closed form. The final result is summarized in the corollary below and proved in Appendix G.1.
Corollary 3.
(Least-Squares). Let $\hat{\mathbf{x}}_{LS}$ be as in (31) and $\delta>1$. Then, in the limit of $m,n\to+\infty$, $m/n\to\delta$, Equation (39) holds with probability one with $\sigma_{LS}^2$ given as follows:
$\sigma_{LS}^2 = \frac{1+r^2}{r^2}\cdot\frac{1}{\delta-1}.$

5.4. Optimal Risk for GMM

Next, we characterize the best achievable classification error over different choices of the loss function. Considering (39), we see that an optimal choice of $\ell$ is one that minimizes $\sigma^2$. The next theorem characterizes the best achievable $\sigma$ among convex loss functions by deriving an equivalent set of equations to (38) and combining them with proper coefficients. Similar to the proof of Theorem 2, a key step in the proof is properly setting up a Cauchy–Schwarz inequality that exploits the structure of the new set of equations. The proof is deferred to Appendix G.2.
Theorem 5.
(Lower Bound on Risk). Under the assumptions of Theorem 4, the following inequality holds for the effective noise parameter $\sigma_\ell$ of a loss function $\ell$:
$\lim_{n\to\infty}\sigma_\ell^2 \;\geq\; \sigma_\star^2 := \frac{1+r^2}{r^2}\cdot\frac{1}{\delta-1}.$
Remark 8.
(Optimality of Least-squares for GMM). Theorem 5 provides a lower bound on the asymptotic value of $\sigma_\ell$ which holds for all $\delta>1$ and $r>0$. This result together with Corollary 3 implies that least-squares achieves the smallest value of the risk (i.e., of $\sigma_\ell$ and $\mathcal{E}$) for all $\delta>1$ and $r>0$ among all convex loss functions $\ell$ for which the set of minimizers in (3) is bounded.

6. Numerical Experiments

In this section, we present numerical simulations that validate the predictions of Theorems 1–5. To begin, we use the following three popular models as our case study: signed, logistic and Probit. We generate random measurements according to (1). Without loss of generality (due to rotational invariance of the Gaussian measure), we set $\mathbf{x}_0 = [1,0,\ldots,0]^T$. We then obtain estimates $\hat{\mathbf{x}}$ of $\mathbf{x}_0$ by numerically solving (3) and measure performance by the correlation value $\mathrm{corr}(\hat{\mathbf{x}};\mathbf{x}_0)$. Throughout the experiments, we set $n=128$ and the recorded values of correlation are averages over 25 independent realizations. For each label function, we first provide plots that compare results of Monte Carlo simulations to the asymptotic predictions for the loss functions discussed in Section 4, as well as to the optimal performance of Theorem 2. We next present numerical results on optimal loss functions. To empirically derive the correlation of the optimal loss function, we run gradient descent-based optimization with 1000 iterations. As a general comment, we note that, despite being asymptotic, our predictions appear accurate even for relatively small problem dimensions. For the analytical predictions, we apply Theorem 1. In particular, for solving the system of non-linear equations in (8), we empirically observe (see also [15,47] for similar observations) that, if a solution exists, then it can be efficiently found by the following fixed-point iteration method. Let $\mathbf{v} := [\mu,\alpha,\lambda]^T$ and $F:\mathbb{R}^3\to\mathbb{R}^3$ be such that (8) is equivalent to $\mathbf{v} = F(\mathbf{v})$. With this notation, we initialize $\mathbf{v} = \mathbf{v}_0$ and for $k\geq1$ repeat the iterations $\mathbf{v}_{k+1} = F(\mathbf{v}_k)$ until convergence.
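The paper does not spell out its particular fixed-point map $F$, so the following is only one possible arrangement (our own, hedged sketch): (8a) is solved for $\mu$ by a scalar root-finder while (8b) and (8c) are rearranged for $\alpha$ and $\lambda$. Here `moreau_d1` and the link function are the helpers from the earlier sketches, and the root-finder bracket is an assumption.

```python
# One possible alternating/fixed-point scheme for solving the system (8) (illustrative).
import numpy as np
from scipy.optimize import brentq

def solve_equations(loss, link, delta, iters=50, n_mc=5000, seed=0):
    rng = np.random.default_rng(seed)
    G, S = rng.standard_normal((2, n_mc))
    Y = link(S, rng)
    mu, alpha, lam = 0.5, 1.0, 1.0
    for _ in range(iters):
        D = lambda m: np.array([moreau_d1(loss, x, lam) for x in alpha * G + m * S * Y])
        # (8a): choose mu so that E[YS * M_{l,1}(alpha G + mu SY; lam)] = 0 (assumed bracket)
        mu = brentq(lambda m: np.mean(Y * S * D(m)), 1e-3, 10.0)
        Dm = D(mu)
        alpha = lam * np.sqrt(delta * np.mean(Dm ** 2))          # rearranged (8b)
        lam = alpha / (delta * np.mean(G * Dm))                  # rearranged (8c)
    return mu, alpha, lam
```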
Logistic model. For the logistic model, comparison between the predicted values and the numerical results is illustrated in Figure 3a. Results are shown for LS, logistic and hinge loss functions. Note that minimizing the logistic loss corresponds to the maximum-likelihood estimator (MLE) for logistic model. An interesting observation in Figure 3a is that in the high-dimensional setting (finite δ ) LS has comparable (if not slightly better) performance to MLE. Additionally, we observe that in this model, performance of LS is almost the same as the best possible performance derived according to Theorem 2. This confirms the analytical conclusion of Section 4.1. The comparison between the optimal loss function as in Theorem 3 and other loss functions is illustrated in Figure 3b. We note the obvious similarity between the shapes of optimal loss functions and LS which further explains the similarity between their performance.
Probit model. Theoretical predictions for the performance of hinge and LS loss functions are compared with the empirical results and optimal performance of Theorem 2 in Figure 4a. Similar to the logistic model, in this model, LS also outperforms hinge loss and its performance resembles the performance of optimal loss function derived according to Theorem 3. Figure 4b illustrates the shapes of LS, hinge loss and the optimal loss functions for the Probit model. The obvious similarity between the shape of LS and optimal loss functions for all values of δ explains the close similarity of their performance.
Additionally, by comparing the LS performance for the three models in Figure 1a, Figure 3a and Figure 4a, it is clear that higher (respectively, lower) correlation values are achieved for signed (respectively, logistic) measurements. This behavior is indeed predicted by Corollary 2: correlation performance is higher for higher values of $\mu = \mathbb{E}[SY]$. It can be shown that, for the signed, Probit and logistic models (with $\|\mathbf{x}_0\|_2=1$), we have $\mu = \sqrt{2/\pi}$, $1/\sqrt{\pi}$ and $0.4132$, respectively.
Optimal loss function. By putting together Theorems 2 and 3, we obtain a method for deriving the optimal loss function for generative binary models. This requires the following steps.
  • Find σ opt by solving (12).
  • Compute the density of W opt = σ opt G + S Y .
  • Compute opt according to (29).
Note that computing $\sigma_{\mathrm{opt}}$ requires the density function $p_W$ of the random variable $W = \sigma G + SY$. In principle, $p_W$ can be calculated as the convolution of the Gaussian density with the pdf $p_{SY}$ of $SY$. Moreover, it follows from the recipe above that the optimal loss function depends on $\delta$ in general. This is because $\sigma_{\mathrm{opt}}$ itself depends on $\delta$ via (12).
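The sketch below (our own illustration, with the grid resolution and the grid-based minimization as assumptions) carries out the last two steps of this recipe: the density of $W_{\mathrm{opt}}$ is obtained by numerical convolution, the coefficients $\alpha_1,\alpha_2$ of Theorem 3 follow from a grid estimate of $\mathcal{I}(W_{\mathrm{opt}})$, and $\ell_{\mathrm{opt}}$ is then evaluated through the Moreau-envelope formula (29) by a grid minimization.

```python
# Sketch: numerically evaluate the optimal loss of Theorem 3 on a grid.
import numpy as np
from scipy.stats import norm

def optimal_loss_on_grid(sigma_opt, p_sy, grid, delta):
    dz = grid[1] - grid[0]
    gauss = norm.pdf(grid / sigma_opt) / sigma_opt
    p_w = np.maximum(np.convolve(p_sy, gauss, mode="same") * dz, 1e-300)  # density of W_opt
    score = np.gradient(np.log(p_w), dz)
    I_w = np.sum(score ** 2 * p_w) * dz                                   # I(W_opt)
    denom = delta * (sigma_opt**2 * I_w + I_w - 1)
    a1, a2 = (1 - sigma_opt**2 * I_w) / denom, 1 / denom                  # alpha_1, alpha_2
    inner = a1 * grid**2 / 2 + a2 * np.log(p_w)                           # a1*q(v) + a2*log p_W(v)
    # l_opt(w) per (29): minus the Moreau envelope of `inner` at w, minimized over the grid
    return np.array([-np.min((w - grid) ** 2 / 2 + inner) for w in grid])
```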

Numerical Experiments for GMM

Theorem 5 implies the optimality of least-squares among convex loss functions in the under-parameterized regime $\delta>1$. In Figure 2, we demonstrate the classification risk of least-squares alongside other well-known loss functions, LAD and logistic, for $r=1$. Solid lines correspond to the theoretical predictions of Theorem 4. For least-squares we rely on the result of Corollary 3, and for the LAD and logistic losses the system of equations is solved by iterating over the equations, where we observe that after a relatively small number of iterations the iterates converge to the solution $(\mu,\alpha,\lambda)$. We use $10^5$ and $10^3$ samples to compute the expectations in (38) for the LAD and logistic losses, respectively. After deriving $\sigma = \alpha/\mu$, the classification risk $\mathcal{E}$ is obtained according to the formula in (39). Dots correspond to the empirical evaluations of the classification risk of the loss functions for $n=60$ and for different values of $\delta = m/n>1$. The resulting numbers are averaged over 30 independent experiments. As observed, the empirical results closely follow the theoretical predictions of Theorem 4. Furthermore, as predicted by Theorem 5, least-squares has the minimum expected classification risk among the considered convex loss functions and for all $\delta>1$.

7. Conclusions

We derive theoretical predictions for the generalization error of estimators obtained by ERM for generative binary models and a Gaussian-mixture model. Furthermore, we use these theoretical characterizations to find the optimal performance and the optimal loss function among all convex losses. Although our analysis holds for Gaussian matrices, we empirically show that the results hold for sub-Gaussian matrices as well. As an exciting future direction, we plan to extend our analysis on sharp asymptotics and optimal loss functions to non-isotropic (Gaussian) features with arbitrary covariance. A more challenging, albeit interesting, direction is going beyond the (binary) linear models studied in this paper, by considering asymptotics and optimal error for kernel models and neural networks (see [48,57] for partial progress in this direction).

Author Contributions

Formal analysis, H.T., R.P. and C.T.; Funding acquisition, R.P. and C.T.; Investigation, H.T., R.P. and C.T.; Methodology, H.T., R.P. and C.T.; Project administration, H.T., R.P. and C.T.; Software, H.T.; Supervision, H.T., R.P. and C.T.; Writing—original draft, H.T., R.P. and C.T.; Writing—review & editing, H.T., R.P. and C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSF CNS-2003035, NSF CCF-2009030 and NSF CCF-1909320.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Properties of Moreau Envelopes

Appendix A.1. Derivatives

Recall the definition of the Moreau envelope $\mathcal{M}_\ell(x;\lambda)$ and proximal operator $\mathrm{prox}_\ell(x;\lambda)$ of a function $\ell$:
$\mathcal{M}_\ell(x;\lambda) = \min_y \frac{1}{2\lambda}(x-y)^2 + \ell(y),$
$\text{and}\quad \mathrm{prox}_\ell(x;\lambda) = \arg\min_y \frac{1}{2\lambda}(x-y)^2 + \ell(y).$
Proposition A1
(Basic properties of $\mathcal{M}$ and $\mathrm{prox}$ [52]). Let $\ell:\mathbb{R}\to\mathbb{R}$ be lower semi-continuous (lsc), proper and convex. The following statements hold for any $\lambda>0$.
(a)
The proximal operator $\mathrm{prox}_\ell(x;\lambda)$ is unique and continuous. In fact, $\mathrm{prox}_\ell(x';\lambda')\to\mathrm{prox}_\ell(x;\lambda)$ whenever $(x',\lambda')\to(x,\lambda)$ with $\lambda>0$.
(b)
The value $\mathcal{M}_\ell(x;\lambda)$ is finite and depends continuously on $(\lambda,x)$, with $\mathcal{M}_\ell(x;\lambda)\to\ell(x)$ for all $x$ as $\lambda\to0^+$.
(c)
The Moreau envelope function is differentiable with respect to both arguments. Specifically, for all $x\in\mathbb{R}$, the following properties are true:
$\mathcal{M}_{\ell,1}(x;\lambda) = \frac{1}{\lambda}\big(x-\mathrm{prox}_\ell(x;\lambda)\big),$
$\mathcal{M}_{\ell,2}(x;\lambda) = -\frac{1}{2\lambda^2}\big(x-\mathrm{prox}_\ell(x;\lambda)\big)^2.$
If in addition $\ell$ is differentiable and $\ell'$ denotes its derivative, then
$\mathcal{M}_{\ell,1}(x;\lambda) = \ell'\big(\mathrm{prox}_\ell(x;\lambda)\big),$
$\mathcal{M}_{\ell,2}(x;\lambda) = -\frac{1}{2}\big(\ell'(\mathrm{prox}_\ell(x;\lambda))\big)^2.$

Appendix A.2. Alternative Representations of (8)

Replacing the above relations for the derivatives of $\mathcal{M}_\ell$ in (8), we can write the equations in terms of the proximal operator. If $\ell$ is differentiable, then Equations (8) can be equivalently written as follows:
$\mathbb{E}\big[\,YS\cdot\ell'\big(\mathrm{prox}_\ell(\alpha G+\mu SY;\lambda)\big)\,\big] = 0,$
$\lambda^2\delta\,\mathbb{E}\big[\,\ell'\big(\mathrm{prox}_\ell(\alpha G+\mu SY;\lambda)\big)^2\,\big] = \alpha^2,$
$\lambda\delta\,\mathbb{E}\big[\,G\cdot\ell'\big(\mathrm{prox}_\ell(\alpha G+\mu SY;\lambda)\big)\,\big] = \alpha.$
Finally, if $\ell$ is twice differentiable, then applying (Gaussian) integration by parts in Equation (14c) results in the following reformulation of (8c):
$1 = \lambda\delta\,\mathbb{E}\bigg[\frac{\ell''\big(\mathrm{prox}_\ell(\alpha G+\mu SY;\lambda)\big)}{1+\lambda\,\ell''\big(\mathrm{prox}_\ell(\alpha G+\mu SY;\lambda)\big)}\bigg].$

Appendix A.3. Examples of Proximal Operators

LAD.
For ( t ) = | t 1 | , the proximal operator admits a simple expression, as follows:
prox x ; λ = 1 + H x 1 ; λ ,
where
H x ; λ = x λ , if x > λ , x + λ , if x < λ , 0 , otherwise .
is the standard soft-thresholding function.
Hinge Loss.
When ( t ) = max { 0 , 1 t } , the proximal operator can be expressed in terms of the soft-thresholding function as follows:
prox x ; λ = 1 + H x + λ 2 1 ; λ 2 .

Appendix A.4. Fenchel–Legendre Conjugate Representation

For a function h : R R , its Fenchel–Legendre conjugate, h : R R is defined as:
h ( x ) = max y x y h ( y ) .
The following proposition relates Moreau Envelope of a function to its Fenchel–Legendre conjugate.
Proposition A2.
For λ > 0 and a function h, we have:
M h x ; λ = q ( x ) λ 1 λ q + λ h ( x ) ,
where q ( x ) = x 2 / 2 .
Proof. 
M h x ; λ = 1 2 λ min y ( x y ) 2 + 2 λ h ( y ) = x 2 2 λ + 1 2 λ min y y 2 2 x y + 2 λ h ( y ) = x 2 2 λ 1 λ max y x y y 2 / 2 + λ h ( y ) = q ( x ) λ 1 λ q + λ h ( x ) .

Appendix A.5. Convexity of the Moreau Envelope

Lemma A1.
The function H : R 3 R defined as follows
H ( x , v , λ ) = 1 2 λ ( x v ) 2 ,
is jointly convex in its arguments.
Proof. 
Note that the function h ( x , v ) = ( x v ) 2 is jointly convex in ( x , v ) . Thus, its perspective function
λ h ( x / λ , v / λ ) = ( x v ) 2 / λ = 2 H ( x , v , λ )
is jointly convex in ( x , v , λ ) [58] (Sec. 2.3.3), which completes the proof. □
Proposition A3.
(a) Ref. [52] (Prop. 2.22) Let f ( x , y ) be jointly convex in its arguments. Then, the function g ( x ) = min y f ( x , y ) is convex.
(b) Ref. [58] (Sec. 3.2.3) Suppose f i : R R is a set of concave functions, with i A an index set. Then, the function f : R R defined as f ( x ) : = inf i A f i ( x ) is concave.
Lemma A2.
Let : R R be a lsc, proper, convex function. Then, M x ; λ is jointly convex in ( x , λ ) .
Proof. 
Recall that
M x ; λ = min v G ( a ) : = 1 2 λ ( x v ) 2 + ( v ) ,
where, for compactness, we let a R 3 denote the triplet ( x , v , λ ) . Now, let a i = ( x i , v i , λ i ) , i = 1 , 2 , θ ( 0 , 1 ) and θ ¯ : = 1 θ . With this notation, we may write
G ( θ a 1 + θ ¯ a 2 ) = H θ x 1 + θ ¯ x 2 , θ λ 1 + θ ¯ λ 2 , θ v 1 + θ ¯ v 2 + ( θ v 1 + θ ¯ v 2 ) θ H ( x 1 , v 1 , λ 1 ) + θ ¯ H ( x 2 , v 2 , λ 2 ) + θ ( v 1 ) + θ ¯ ( v 2 ) = θ G ( a 1 ) + θ ¯ G ( a 2 ) .
For the first equality above, we recall the definition of H : R 3 R in (A10) and the inequality right after follows from Lemma A1 and convexity of . Thus, the function G is jointly convex in its arguments. Using this fact, as well as (A11), and applying Proposition A3(a) completes the proof. □

Appendix A.6. The Expected Moreau-Envelope (EME) Function and its Properties

The performance of the ERM estimator (3) is governed by the system of equations (8) in which the Moreau envelope function M x ; λ of the loss function plays a central role. More precisely, as already hinted by (8) and becomes clear in Appendix B, what governs the behavior is the function
( α > 0 , μ , τ > 0 , γ > 0 ) E M α G + μ S Y ; τ / γ ,
which we call the expected Moreau envelope (EME). Recall here that Y = f ( S ) . Hence, the EME is the key summary parameter that captures the role of both the loss function : R R and of the link function f : R { ± 1 } on the statistical performance of (3).
In this section, we study several favorable properties of the EME. In (A12), the expectation is over G , S iid N ( 0 , 1 ) . We first study the EME under more general distribution assumptions in Appendix A.6.1, Appendix A.6.2 and Appendix A.6.3 and we then specialize our results to Gaussian random variables G and S in Appendix A.6.4.

Appendix A.6.1. Derivatives

Proposition A4.
Let : R R be a lsc, proper and convex function. Further, let X , Z be independent random variables with bounded second moments E [ X 2 ] < , E [ Z 2 ] < . Then, the expected Moreau envelope function E M c X + Z ; λ is differentiable with respect to both c and λ and the derivatives are given as follows:
c E M c X + Z ; λ = E X M , 1 c X + Z ; λ ,
λ E M c X + Z ; λ = E M , 2 c X + Z ; λ .
Proof. 
The proof is an application of the Dominated Convergence Theorem (DCT). First, by Proposition A1(b), for every c R and any λ > 0 , the function E [ M c X + Z ; λ ] takes a finite value. Second, by Proposition A1(c), M c x + z ; λ is continuously differentiable with respect to both c and λ :
c M c X + Z ; λ = X M , 1 c X + Z ; λ = X 1 λ c X + Z prox c X + Z ; λ , λ M c X + Z ; λ = M , 2 c X + Z ; λ = 1 2 λ 2 c X + Z prox c X + Z ; λ 2 .
From this, note that the Cauchy–Schwarz inequality gives
E c M c X + Z ; λ E [ X 2 ] ) 1 / 2 E 1 λ 2 c X + Z prox c X + Z ; λ 2 : = A 1 / 2 ,
Therefore, the remaining condition to check so that DCT can be applied is that the term A / λ 2 above is integrable. To begin with, we can easily bound A as: A 2 ( c X + Z ) 2 + 2 ( prox c X + Z ; λ ) 2 . Next, by non-expansiveness (Lipschitz property) of the proximal operator [52] (Prop. 12.19), we have that | prox c X + Z ; λ | | c X + Z | + | prox 0 ; λ | . Putting together, we find that
A 6 ( c X + Z ) 2 + 2 | prox 0 ; λ | 2 12 c 2 X 2 + 12 Z 2 + 2 | prox 0 ; λ | 2 .
We consider two cases. First, for fixed λ > 0 and any compact interval I , we have that
E sup c I [ A ] 12 ( sup c I c 2 ) E [ X 2 ] + 12 E [ Z ] 2 + 2 | prox 0 ; λ | 2 < .
Similarly, for fixed c and any compact interval J on the positive real line, we have that
E sup λ J [ A / λ 2 ] 12 sup λ J c 2 E [ X 2 ] + E [ Z ] 2 λ 2 + 2 sup λ J | prox 0 ; λ | 2 λ 2 < ,
where we also used boundedness of the proximal operator (cf. Proposition A1(a)). This completes the proof. □

Appendix A.6.2. Strict Convexity

We study convexity properties of the expected Moreau envelope function Ψ : R 3 R :
Ψ ( v ) : = Ψ ( α , μ , λ ) : = E M α X + μ Z ; λ ,
for a lsc, proper, convex function and independent random variables X and Z with positive densities. Here, and onwards, we let v R 3 denote a triplet ( α , μ , λ ) and the expectation is over the randomness of X and Z. From Lemma A2, it is easy to see that Ψ ( v ) is convex. In this section, we prove a stronger claim:
“ If is strictly convex and does not attain its minimum at 0, then Ψ ( v ) is also strictly convex. ”
This is summarized in Proposition A5 below.
Proposition A5.
(Strict Convexity). Let : R R be a function with the following properties: (i) it is proper strictly convex; and (ii) it is continuously differentiable and its derivative is such that ( 0 ) 0 . Further, let X , Z be independent random variables with strictly positive densities. Then, the function Ψ : R 3 R in (A15) is jointly strictly convex in its arguments.
Proof. 
Let v i = ( α i , μ i , λ i ) , i = 1 , 2 , θ ( 0 , 1 ) and θ ¯ = 1 θ . Further, assume that v 1 v 2 and define the proximal operators
p i X , Z : = prox α i X + μ i Z ; λ i = arg min v 1 2 λ i α i X + μ i Z v 2 + ( v ) ,
for i = 1 , 2 . Finally, denote λ θ : = θ λ 1 + θ ¯ λ 2 , α θ : = θ α 1 + θ ¯ α 2 and μ θ : = θ μ 1 + θ ¯ μ 2 . With this notation, -4.6cm0cm
Ψ ( θ v 1 + θ ¯ v 2 ) E 1 2 λ θ α θ X + μ θ Z ( θ p 1 X , Z + θ ¯ p 2 X , Z ) 2 + θ p 1 X , Z + θ p 2 X , Z = E H α θ X + μ θ Z , θ p 1 X , Z + θ p 2 X , Z , λ θ + θ p 1 X , Z + θ ¯ p 2 X , Z E [ θ H α 1 X + μ 1 Z , p 1 X , Z , λ 1 + θ ¯ H α 2 X + μ 2 Z , p 2 X , Z , λ 2 + θ p 1 X , Z + θ ¯ p 2 X , Z ] .
The first inequality above follows by the definition of the Moreau envelope in (A1). The equality in the second line uses the definition of the function H : R 3 R in (A10). Finally, the last inequality follows from convexity of H as proved in Lemma A1.
Continuing from (59), we may use convexity of to find that
Ψ ( θ v 1 + θ ¯ v 2 ) E [ θ H ( α 1 X + μ 1 Z , λ 1 , p 1 X , Z ) + θ ¯ H ( α 2 X + μ 2 Z , λ 2 , p 2 X , Z ) + θ ( p 1 X , Z ) + θ ¯ ( p 2 X , Z ) ] = θ Ψ ( v 1 ) + θ ¯ Ψ ( v 2 ) .
This already proves convexity of (A15). In what follows, we argue that the inequality in (A17) is in fact strict under the assumption of the lemma.
Specifically, in Lemma A3, we prove that, under the assumptions of the proposition, for v 1 v 2 , it holds that
E θ p 1 X , Z + θ ¯ p 2 X , Z < θ E p 1 X , Z + θ ¯ E p 2 X , Z .
Using this in (A16) completes the proof of the proposition. The idea behind the proof of Lemma A3 is as follows. First, we use the fact that v 1 v 2 and ( 0 ) 0 to argue that there exists a non-zero measure set of ( x , z ) R 2 such that p 1 x , z p 2 x , z . Then, the desired claim follows by strict convexity of . □
Lemma A3.
Let : R R be a proper strictly convex function that is continuously differentiable with ( 0 ) 0 . Further, assume independent continuous random variables X , Z with strictly positive densities. Fix arbitrary triplets v i = ( α i , μ i , λ i ) , i = 1 , 2 such that v 1 v 2 . Further, denote
p i X , Z : = prox α i X + μ i Z ; λ i , i = 1 , 2 .
Then, there exists a ball S R 2 of nonzero measure, i.e., P ( X , Z ) S > 0 , such that p 1 x , z p 2 x , z , for all ( x , z ) S . Consequently, for any θ ( 0 , 1 ) and θ ¯ = 1 θ , the following strict inequality holds,
E θ p 1 X , Z + θ ¯ p 2 X , Z < θ E p 1 X , Z + θ ¯ E p 2 X , Z .
Proof. 
Note that (A19) holds trivially with “ < " replaced by “ " due to the convexity of . To prove that the inequality is strict, it suffices, by strict convexity of , that there exists subset S R 2 that satisfies the following two properties:
  • p 1 x , z p 2 x , z , for all ( x , z ) S .
  • P ( X , Z ) S > 0 .
Consider the following function f : R 2 R :
f ( x , z ) : = p 1 x , z p 2 x , z .
By Lemma A4, there exists ( x 0 , z 0 ) such that
f ( x 0 , z 0 ) 0 .
Moreover, by continuity of the proximal operator (cf. Proposition A1(a)), it follows that f is continuous. From this and (A21), we conclude that for sufficiently small ζ > 0 there exists a ζ -ball S centered at ( x 0 , z 0 ) , such that property 1 holds. Property 2 is also guaranteed to hold for S , since both X , Z have strictly positive densities and are independent. □
Lemma A4.
Let : R R be a proper, convex function. Further, assume that : R R is continuously differentiable and ( 0 ) 0 . Let α 1 , α 2 > 0 , λ 1 , λ 2 > 0 . Then, the following statement is true
-4.6cm0cm
( α 1 , μ 1 , λ 1 ) ( α 2 , μ 2 , λ 2 ) ( x , z ) R 2 : prox α 1 x + μ 1 z ; λ 1 prox α 2 x + μ 2 z ; λ 2 .
Proof. 
We prove the claim by contradiction, but first, let us set up some useful notation. Let v R 3 denote triplets ( α , μ , λ ) and further define
p α , μ , λ x , z : = prox α x + μ z ; λ ,
and
L α , μ , λ x , z : = prox α x + μ z ; λ .
By Proposition A1, the following is true:
L α , μ , λ x , z = 1 λ α x + μ z p α , μ , λ x , z .
For the sake of contradiction, assume that the claim of the lemma is false. Then,
p α 1 , μ 1 , λ 1 x , z = p α 2 , μ 2 , λ 2 x , z , ( x , z ) R 2 .
From this, it also holds that
L α 1 , μ 1 , λ 1 x , z = L α 2 , μ 2 , λ 2 x , z , ( x , z ) R 2 .
Recalling (A23) and applying (A24), we derive the following from (A25):
( λ 2 λ 1 ) p α 1 , μ 1 , λ 1 x , z = ( λ 2 α 1 λ 1 α 2 ) x + ( λ 2 μ 1 λ 1 μ 2 ) z , ( x , z ) R 2 .
We consider the following two cases separately.
Case 1: λ 1 = λ 2 : Since v 1 v 2 , it holds that
( x , z ) R 2 : α 1 x + μ 1 z α 2 x + μ 2 z .
However, from (A26) we have that ( α 1 α 2 ) x + ( μ 1 μ 2 ) z = 0 for all ( x , z ) R 2 . This contradicts (A27) and completes the proof for this case.
Case 2: λ 1 λ 2 : Continuing from (A26), we can compute that for all ( x , z ) R 2
( p α 1 , μ 1 , λ 1 x , z ) = 1 λ 1 ( α 1 x + μ 1 z p α 1 , μ 1 , λ 1 x , z ) = α 2 α 1 λ 2 λ 1 x + μ 2 μ 1 λ 2 λ 1 z .
By replacing p α 1 , μ 1 , λ 1 x , z from (A26), we derive that:
( ε 1 x + ε 2 z ) = ε 3 x + ε 4 z , ( x , z ) R 2 ,
where
ε 1 = λ 2 α 1 λ 1 α 2 λ 2 λ 1 , ε 2 = λ 2 μ 1 λ 1 μ 2 λ 2 λ 1 , ε 3 = α 2 α 1 λ 2 λ 1 , ε 4 = μ 2 μ 1 λ 2 λ 1 .
By replacing x = z = 0 in (A29), we find that ( 0 ) = 0 . This contradicts the assumption of the lemma and completes the proof. □

Appendix A.6.3. Strict Concavity

In this section, we study the following variant Γ : R + R of the expected Moreau envelope:
Γ ( γ ) : = E M X ; 1 / γ ,
for a lower semi-continuous, proper, convex function and continuous random variable X. The expectation above is over the randomness of X. In Appendix B.4, we show that the function Γ is concave in γ . Here, we prove the following statement regarding strict-concavity of Γ :
“ If is convex, continuously differentiable and ( 0 ) 0 , then Γ is strictly concave. ”
This is summarized in Proposition A6 below.
Proposition A6.
(Strict concavity). Let : R R be a convex, continuously differentiable function for which ( 0 ) 0 . Further, let X be a continuous random variable in R with strictly positive density in the real line. Then, the function Γ in (A23) is strictly concave in R + .
Proof. 
Before everything, we introduce the following convenient notation:
Γ ˜ x ( γ ) : = M x ; 1 / γ and p γ x : = prox x ; 1 / γ .
Note from Proposition A1 that Γ ˜ x is differentiable with derivative
Γ ˜ x ( γ ) = 1 2 x prox x ; 1 / γ 2 .
We proceed in two steps as follows. First, for fixed x R and γ 2 > γ 1 , we prove in Lemma A5 that
( x p γ 2 x ) 2 ( x p γ 1 x ) 2 γ 1 γ 2 γ 1 ( p γ 1 x p γ 2 x ) 2 ,
This shows that for all x R
Γ ˜ x ( γ 2 ) Γ ˜ x ( γ 1 ) 0 .
Second, we use Lemma A3 to argue that the inequality is in fact strict for all x S where S R and P ( X S ) > 0 . To be concrete, apply Lemma A3 for v i = ( 1 , 0 , 1 / γ i ) , i = 1 , 2 . Notice that all the assumptions of the lemma are satisfied, hence there exists interval S R for which P ( X S ) > 0 and
p γ 1 x p γ 2 x ( p γ 1 x p γ 2 x ) 2 > 0 , x S .
Hence, from (A32), it follows that
( x p γ 2 x ) 2 ( x p γ 1 x ) 2 < 0 , x S .
From this, and (A31) we conclude that
Γ ˜ x ( γ 2 ) Γ ˜ x ( γ 1 ) < 0 , x S .
Thus, from (A33) and (A34), as well as the facts that Γ ( γ ) = E Γ ˜ X ( γ ) and P ( X S ) > 0 , we conclude that Γ is strictly concave in R + . □
Lemma A5.
Let : R R be a convex, continuously differentiable function. Fix x R and denote p γ : = prox x ; 1 / γ . Then, for any γ , γ ˜ > 0 , it holds that
( γ ˜ γ ) ( p γ ˜ p γ ) ( p γ x ) + γ ˜ ( p γ ˜ p γ ) 2 0 .
Moreover, for γ 2 > γ 1 , the following statement is true:
( x p γ 2 ) 2 ( x p γ 1 ) 2 γ 1 γ 2 γ 1 ( p γ 1 p γ 2 ) 2 .
Proof. 
First, we prove (A35). Then, we use it to prove (A36).
Proof of (A35): Consider function g : R R defined as follows g ( p ) = γ ˜ 2 ( x p ) 2 + ( p ) . By assumption, g is differentiable with derivative g ( p ) = γ ˜ ( p x ) + ( p ) . Moreover, g is γ 2 -strongly convex. Finally, by optimality of the proximal operator (cf. Proposition A1), it holds that γ ( x p γ ) = ( p γ ) and γ ˜ ( x p γ ˜ ) = ( p γ ˜ ) . Using these, it can be computed that g ( p γ ˜ ) = 0 and g ( p γ ) = ( γ ˜ γ ) ( p γ x ) .
In the following inequalities, we combine all the aforementioned properties of the function g to find that
g ( p γ ) g ( p γ ˜ ) + γ ˜ 2 ( p γ p γ ˜ ) 2 g ( p γ ) + ( γ ˜ γ ) ( p γ x ) ( p γ ˜ p γ ) + γ ˜ ( p γ p γ ˜ ) 2 .
This leads to the desired statement and completes the proof of (A35).
Proof of (A36): We fix γ 2 > γ 1 and apply (A35) two times as follows. First, applying (A35) for ( γ ˜ , γ ) = ( γ 2 , γ 1 ) and using the fact that γ 2 > γ 1 , we find that
( p γ 2 p γ 1 ) ( p γ 1 x ) γ 2 γ 2 γ 1 ( p γ 2 p γ 1 ) 2 .
Second, applying (A35) for ( γ ˜ , γ ) = ( γ 1 , γ 2 ) and using again the fact that γ 2 > γ 1 , we find that
( γ 1 γ 2 ) ( p γ 1 p γ 2 ) ( p γ 2 x ) + γ 1 ( p γ 1 p γ 2 ) 2 0 ( p γ 2 p γ 1 ) ( p γ 2 x ) γ 1 γ 2 γ 1 ( p γ 1 p γ 2 ) 2 .
Adding (A37) and (A38), we show the desired property as follows:
( p γ 2 p γ 1 ) ( p γ 2 x ) + ( p γ 2 p γ 1 ) ( p γ 1 x ) γ 2 + γ 1 γ 2 γ 1 ( p γ 1 p γ 2 ) 2 .

Appendix A.6.4. Summary of Properties of (uid135)

Proposition A7.
Let : R R be a lsc, proper, convex function. Let G , S iid N ( 0 , 1 ) and function f : R { ± 1 } such that the random variable Y S = f ( S ) S has a continuous strictly positive density on the real line. Then, the following properties are true for the expected Moreau envelope function
Ω : ( α > 0 , μ , τ > 0 , γ > 0 ) E M α G + μ S Y ; τ / γ :
(a)
The function Ω is differentiable and its derivatives are given as follows:
α Ω ( α , μ , τ , γ ) = E G M , 1 α G + μ S Y ; τ / γ , μ Ω ( α , μ , τ , γ ) = E S Y M , 1 α G + μ S Y ; τ / γ , τ Ω ( α , μ , τ , γ ) = 1 γ E M , 2 α G + μ S Y ; τ / γ , γ Ω ( α , μ , τ , γ ) = τ γ 2 E M , 2 α G + μ S Y ; τ / γ .
(b)
The function Ω is jointly convex ( α , μ , τ ) and concave on γ.
(c)
The function Ω is increasing in α.
For the statements below, further assume that ℓ is strictly convex and continuously differentiable with ( 0 ) 0 .
(d)
The function Ω is strictly convex in ( α , μ , τ ) and strictly concave in λ.
(e)
The function Ω is strictly increasing in α.
Proof. 
Statements (a), (b) and (d) follow directly by Propositions A4–A6. It remains to prove Statements (c) and (e). Let α 2 > α 1 . Then, there exist independent copies G , G of G and α ˜ > 0 such that α 2 G = α 1 G + α ˜ G . Hence, we have the following chain of inequalities:
Ω ( α 2 , μ , τ , γ ) = E M α 1 G + α ˜ G + μ S Y ; τ / γ E M α 1 G + α ˜ E [ G ] + μ S Y ; τ / γ = E M α 1 G + μ S Y ; τ / γ = Ω ( α 1 , μ , τ , γ ) ,
where the inequality follows from Jensen and convexity of Ω with respect to α (see Statement (b) of the Proposition). This proves Statement (c). For Statement (e), note that the inequality is strict provided that Ω is strictly convex (see Statement (d) of the Proposition). □

Appendix B. Proof of Theorem 1

In this section, we provide a proof sketch of Theorem 1. The main technical tool that facilitates our analysis is the convex Gaussian min-max theorem (CGMT), which is an extension of Gordon’s Gaussian min-max inequality (GMT). We introduce the necessary background on the CGMT in Appendix B.1.
The CGMT has been mostly applied to linear measurements [9,10,13,15,19]. The simple, yet central idea, which allows for this extension, is a certain projection trick inspired by Plan and Vershynin [40]. Here, we apply a similar trick, but, in our setting, we recognize that it suffices to simply rotate x 0 to align with the first basis vector. The simple rotation decouples the measurements y i from the last n 1 coordinates of the measurement vectors a i (see Appendix B.2). While this is sufficient for LS in [43], to study more general loss functions, we further need to combine this with a duality argument similar to that in [13]. Second, while the steps that bring the ERM minimization to the form of a PO (see (A48)) bear the aforementioned similarities to those in [13,43], the resulting AO is different from the one studied in previous works. Hence, the mathematical derivations in Appendix B.3 and Appendix B.4 are different. This also leads to a different system of equations characterizing the statistical behavior of ERM. Finally, in Appendix B.5, we prove uniqueness of the solution of this system of equations using the properties of the expected Moreau envelope function studied in Appendix A.6.

Appendix B.1. Technical Tool: CGMT

Appendix B.1.1. Gordon’s Min-Max Theorem (GMT)

The Gordon’s Gaussian comparison inequality [59] compares the min-max value of two doubly indexed Gaussian processes based on how their autocorrelation functions compare. The inequality is quite general (see [59]), but for our purposes we only need its application to the following two Gaussian processes:
X w , u : = u T G w + ψ ( w , u ) ,
Y w , u : = w 2 g T u + u 2 T w + ψ ( w , u ) ,
where G R m × n , g R m , R n , they all have entries iid Gaussian; the sets S w R n and S u R m are compact; and ψ : R n × R m R . For these two processes, define the following (random) min-max optimization programs, which we refer to as the primary optimization (PO) problem and the auxiliary optimization (AO).
Φ ˜ ( G ) = min w S w max u S u X w , u ,
ϕ ( g , = min w S w max u S u Y w , u .
According to Gordon’s comparison inequality (To be precise, the formulation in (A42), which is due to [13], is slightly different from the original statement in Gordon’s paper (see [13] for details).), for any c R , it holds:
P Φ ˜ ( G ) < c 2 P ϕ ( g , < c .
In other words, a high-probability lower bound on the AO is a high-probability lower bound on the PO. The premise is that it is often much simpler to lower bound the AO rather than the PO. To be precise, (A42) is a slight reformulation of Gordon’s original result proved in [13].

Appendix B.1.2. Convex Gaussian Min-Max Theorem (CGMT)

The proof of Theorem 1 builds on the CGMT [13]. For ease of reference, we summarize here the essential ideas of the framework following the presentation in [15] (please see [15] (Section 6) for the formal statement of the theorem and further details). The CGMT is an extension of the GMT and it asserts that the AO in (41b) can be used to tightly infer properties of the original (PO) in (41a), including the optimal cost and the optimal solution. According to the CGMT [15] (Theorem 6.1), if the sets S w and S u are convex and ψ is continuous convex-concave on S w × S u , then, for any ν R and t > 0 , it holds that
P | Φ ˜ ( G ) ν | > t 2 P | ϕ ( g , ν | > t .
In words, concentration of the optimal cost of the AO problem around μ implies concentration of the optimal cost of the corresponding PO problem around the same value μ . Moreover, starting from (A43) and under strict convexity conditions, the CGMT shows that concentration of the optimal solution of the AO problem implies concentration of the optimal solution of the PO to the same value. For example, if minimizers of (A41b) satisfy w * ( g , 2 ζ * for some ζ * > 0 , then the same holds true for the minimizers of (A41a): w * ( G ) 2 ζ * [15] ([Theorem 6.1(iii)). Thus, one can analyze the AO to infer corresponding properties of the PO, the premise being of course that the former is simpler to handle than the latter.

Appendix B.2. Applying the CGMT to ERM for Binary Classification

In this section, we show how to apply the CGMT to (3). For convenience, we drop the subscript from x ^ and simply write
x ^ = arg min x 1 m i = 1 m ( y i a i T x ) ,
where the measurements y i , i [ m ] follow (1). By rotational invariance of the Gaussian distribution of the measurement vectors a i , i [ m ] , we assume without loss of generality that x 0 = [ 1 , 0 , , 0 ] T . We can rewrite (A44) as a constrained optimization problem by introducing n variables u i as follows:
x ^ = arg min x , u 1 m i = 1 m ( u i ) subject to u i = y i a i T x , i [ n ] .
This problem is now equivalent to the following min-max formulation:
min u , x max β 1 m i = 1 m ( u i ) + 1 m i = 1 m β i u i 1 m i = 1 m β i y i a i T x .
Now, let us define
a i = [ s i ; a ˜ i ] , i [ m ] and x = [ x 1 ; x ˜ ] ,
such that s i and x 1 are the first entries of a i and x , respectively. Note that in this new notation (1) becomes:
y i = f ( s i ) ,
and
corr x ^ ; x 0 = x ^ 1 x ^ 1 2 + x ^ ˜ 2 2 ,
where we decompose x ^ = [ x ^ 1 ; x ^ ˜ ] . In addition, (A45) is written as
min u , x max β 1 m i = 1 m ( u i ) + 1 m i = 1 m β i u i + 1 m i = 1 m β i y i a ˜ i T x ˜ 1 m i = 1 m β i y i s i x 1
or, in matrix form, as
min u , x max β 1 m β T D y A ˜ x ˜ + 1 m x 1 β T D y s + 1 m β T u + 1 m i = 1 m ( u i ) .
where D y : = diag ( y 1 , y 2 , , y m ) is a diagonal matrix with y 1 , y 2 , y m on the diagonal, s = [ s 1 , , s m ] T and A ˜ is an m × ( n 1 ) matrix with rows a ˜ i T , i [ m ] .
In (A48), we recognize that the first term has the bilinear form required by the GMT in (A41a). The rest of the terms form the function ψ in (A41a): they are independent of A ˜ and convex-concave as desired by the CGMT. Therefore, we express (A44) in the desired form of a PO and for the rest of the proof we analyze the probabilistically equivalent AO problem. In view of (A41b), this is given as follows,
min u , x max β 1 m x ˜ 2 g T D y β + 1 m D y β 2 h T x ˜ 1 m x 1 β T D y s + 1 m β T u + 1 m i = 1 m ( u i ) ,
where as in (A41b) g N ( 0 , I m ) and h N ( 0 , I n 1 ) .

Appendix B.3. Analysis of the Auxiliary Optimization

Here, we show how to analyze the AO in (A49). To begin with, note that y i { ± 1 } , therefore D y g N ( 0 , I m ) and D y β 2 = β 2 . In addition, let us denote the first entry x 1 of x as
μ : = x 1 .
The first step is to optimize over the direction of x ˜ . For this, we express the AO as: -4.6cm0cm
min u , μ , α 0 min x ˜ 2 = α max β 1 m x ˜ 2 g T D y β + 1 m D y β 2 h T x ˜ 1 m μ β T D y s + 1 m β T u + 1 m i = 1 m ( u i ) ,
Now, denote x ˜ = α 2 and observe that for every β the objective above is minimized (with respect to to x ˜ ) at x ˜ . Thus, it follows by [23] (Lem. 8) that (A50) simplifies to
min α 0 , μ , u max β 1 m α g T β α m β 2 h 2 1 m μ s T D y β + 1 m β T u + 1 m i = 1 m ( u i ) .
Next, let γ : = β 2 m and optimize over the direction of β to yield
min α 0 , u , μ max γ 0 γ m α g μ D y s + u 2 α m γ h 2 + 1 m i = 1 m ( u i ) .
To continue, we utilize the fact that for all x R , min τ > 0 τ 2 + x 2 2 τ m = x m . Hence,
γ m α g μ D y s + u 2 = min τ > 0 γ τ 2 + γ 2 τ m α g + μ D y s u 2 2 .
With this trick, the optimization over u becomes separable over its coordinates u i , i [ m ] . By inserting this in (A42), we have
min α 0 , u , μ max γ 0 min τ > 0 γ τ 2 α m γ h 2 + γ 2 τ m i = 1 m ( α g i + μ y i s i u i ) 2 + 1 m i = 1 m ( u i ) ,
Now, we show that the objective function above is convex-concave. Clearly, the function is linear (thus, concave in γ ). Moreover, from Lemma A1, the function 1 2 τ ( α g i + μ y i s i u i ) 2 is jointly convex in ( α , μ , u i , τ ) . The rest of the terms are clearly convex and this completes the argument. Hence, with a permissible change in the order of min-max, we arrive at the following convenient form (Here, we skip certain technical details in this argument regarding boundedness of the constraint sets in (A49). While they are not trivial, they can be handled with the same techniques used in [15,60].):
min μ , α 0 , τ > 0 max γ 0 γ τ 2 α m γ h 2 + 1 m i = 1 m M α g i + μ s i y i ; τ γ ,
where recall the definition of the Moreau envelope in (A1). As to now, we have reduced the AO into a random min-max optimization over only four scalar variables in (A53). For fixed μ , α , τ , γ , direct application of the weak law of large numbers shows that the objective function of (A53) converges in probability to the following as m , n and m n = δ :
γ τ 2 α γ δ + E M α G + μ Y S ; τ γ ,
where G , S N ( 0 , 1 ) and Y f ( S ) (in view of (A46)). Based on that, it can be shown (similar arguments are developed in [15,60]) that the random optimizers α n and μ n of (A53) converge to the deterministic optimizers α and μ of the following (deterministic) optimization problem (whenever these are bounded as the statement of the theorem requires):
min α 0 , μ , τ > 0 max γ 0 γ τ 2 α γ δ + E M α G + μ Y S ; τ γ .
At this point, recall that α represents the norm of x ˜ and μ the value of x 1 . Thus, in view of (i) (A47); (ii) the equivalence between the PO and the AO; and (iii) our derivations thus far, we have that with probability approaching 1,
lim n + corr x ^ ; x 0 = μ μ 2 + α 2 ,
where μ and α are the minimizers in (A54). The three equations in (8) are derived by the first-order optimality conditions of the optimization in (A54). We show this next.

Appendix B.4. Convex-Concavity and First-Order Optimality Conditions

First, we prove that the objective function in (A54) is convex–concave. For convenience define the function F : R 4 R as follows
F ( α , μ , τ , γ ) : = γ τ 2 α γ δ + E M α G + μ Y S ; τ γ .
Based on Lemma A2, it immediately follows that, if is convex, F is jointly convex in ( α , μ , τ ) . To prove concavity of F based on γ , it suffices to show that M x ; 1 / γ is concave in γ for all x R . To show this, we note that
M x ; 1 / γ = min u γ 2 ( x u ) 2 + ( u ) ,
which is the point-wise minimum of linear functions of γ . Thus, using Proposition A3(b), we conclude that M x ; 1 / γ is concave in γ . This completes the proof of convex-concavity of the function F in (A55) when is convex. By direct differentiation and applying Proposition A7(a), the first-order optimality conditions of the min–max optimization in (A54) are as follows:
E S Y · M , 1 α G + μ S Y ; τ γ = 0 ,
E G · M , 1 α G + μ S Y ; τ γ = γ δ ,
γ 2 + 1 γ E M , 2 α G + μ S Y ; τ γ = 0 ,
α δ τ γ 2 E M , 2 α G + μ S Y ; τ γ + τ 2 = 0 .
Next, we show how these equations simplify to the following system of equations (same as (8):
E Y S · M , 1 α G + μ S Y ; λ = 0 ,
λ 2 δ E M , 1 α G + μ S Y ; λ 2 = α 2 ,
λ δ E G · M , 1 α G + μ S Y ; λ = α .
Let λ : = τ γ . First, (A57a) is immediate from equation (A56a). Second, substituting γ from (A56c) in (A56d) yields τ = α δ or γ = α λ δ , which together with (A56b) leads to (A57c). Finally, (A57b) can be obtained by substituting γ = α λ δ in (A56c) and using the fact that (see Proposition A1):
M , 2 α G + μ S Y ; λ = 1 2 ( M , 1 α G + μ S Y ; λ ) 2 .

Appendix B.5. On the Uniqueness of Solutions to (A57): Proof of Proposition 1

Here, we prove the claim of Proposition 1 through the following lemmas. As discussed in Remark 4, the main part of the proof is showing strict convex-concavity of F in (11). Lemma A6 proves that this is the case, and Lemmas A7 and A8 show that this is sufficient for the uniqueness of solutions to (A57). When put together, these complete the proof of Proposition 1.
Lemma A6.
(Strict Convex-Concavity of (A55)). Let : R R be proper and strictly convex function. Further, assume that ℓ is continuously differentiable with ( 0 ) 0 . In addition, assume that S Y has positive density in the real line. Then, the function F : R 4 R defined in (A55) is strictly convex in ( α , μ , τ ) and strictly concave in γ.
Proof. 
The claim follows directly from the strict convexity-concavity properties of the expected Moreau-envelope proved in Propositions A5 and A6. Specifically, we apply Proposition A7. □
Lemma A7.
If the objective function in (A55) is strictly convex in ( α , μ , τ ) and strictly concave in γ, then (A56) has a unique solution ( α , μ , τ , γ ) .
Proof. 
Let ( α i , μ i , τ i , γ i ) , i = 1 , 2 , be two different saddle points of (A55). For convenience, let x i : = ( α i , μ i , τ i ) for i = 1 , 2 . By strict-concavity in γ , for fixed values of x : = ( α , μ , τ ) , the value of γ maximizing F ( x , γ ) is unique. Thus, if x 1 = x 2 , then it must hold that γ 1 = γ 2 , which is a contraction to our assumption of ( x 1 , γ 1 ) ( x 2 , γ 2 ) . Similarly, we can use strict-convexity to derive that γ 1 γ 2 . Then, based on the definition of the saddle point and strict convexity-concavity, the following two relations hold for i = 1 , 2 :
F ( x i , γ ) < F ( x i , γ i ) < F ( x , γ i ) , for all x x i , γ γ i .
We choose x = x 2 , γ = γ 2 for i = 1 and x = x 1 , γ = γ 1 for i = 2 to find
F ( x 1 , γ 2 ) < F ( x 1 , γ 1 ) < F ( x 2 , γ 1 ) , F ( x 2 , γ 1 ) < F ( x 2 , γ 2 ) < F ( x 1 , γ 2 ) .
From the above, it follows that F ( x 1 , γ 1 ) < F ( x 2 , γ 2 ) and F ( x 1 , γ 1 ) > F ( x 2 , γ 2 ) , which is a contradiction. This completes the proof. □
Lemma A8.
If (A56) has a unique solution ( α , μ , τ , γ ) , then (A57) has a unique solution ( α , μ , λ ) .
Proof. 
First, following the same approach of deriving Equations (A57) from (A56) in (A56), it is easy to see that existence of solution ( α 1 , μ 1 , τ 1 , γ 1 ) to (A57) implies existence of solution ( α 1 , μ 1 , λ 1 : = τ 1 γ 1 ) to (A57). Now, for the sake of contradiction to the statement of the lemma, assume that there are two different triplets v 1 : = ( α 1 , μ 1 , λ 1 ) and v 2 : = ( α 2 , μ 2 , λ 2 ) with α 1 , α 2 , λ 1 , λ 2 > 0 and satisfying (Appendix B.4). Then, we can show that both w i : = ( α i , μ i , τ i , γ i ) i = 1 , 2 , such that:
τ i : = α i δ , γ i = α i λ i δ , i = 1 , 2 ,
satisfy the system of equations in (A56). However, since v 1 v 2 , it must be that w 1 w 2 . This contradicts the assumption of uniqueness of solutions to (A56) and completes the proof. □

Appendix C. Discussions on the Fundamental Limits for Binary Models

  • On the Uniqueness of Solutions to Equation κ ( σ ) = 1 δ
The existence of a solution to the equation κ ( σ ) = 1 δ is proved in the previous section. However, it is not clear if the solution to this equation is unique, i.e., for any δ > 1 there exists only one σ opt > 0 such that κ ( σ opt ) = 1 δ . If this is the case, then Equation (12) in Theorem 2 can be equivalently written as
σ opt = σ , s . t . κ ( σ ) = 1 δ .
Although we do not prove this claim, our numerical experiments in Figure A1 show that κ ( · ) is a monotonic function for noisy-signed, logistic and Probit measurements, implying the uniqueness of solution to the equation κ ( σ ) = 1 δ for all δ > 1 .

Appendix C.1. Distribution of SY in Special Cases

We derive the following densities for S Y for the special cases ( x 0 2 = 1 ):
  • Signed: p S Y ( w ) = 2 π exp ( w 2 / 2 ) 1 { w 0 } .
  • Logistic: p S Y ( w ) = 2 π exp ( w 2 / 2 ) 1 + exp ( w ) .
  • Probit: p S Y ( w ) = 2 π Φ ( w ) exp ( w 2 / 2 ) .
In particular, we numerically observe that for logistic and Probit models; the resulting densities are similar to the density of a gaussian distribution derived according to N ( E [ S Y ] , Var [ S Y ] ) . Figure A2 illustrates this similarity for these two models. As discussed in Corollary 1, this similarity results in the tightness of the lower bound achieved for σ opt in Equation (23).
Figure A1. The value of κ ( σ ) as in Theorem 2 for various measurement models. Since κ ( σ ) is a monotonic function of σ , the solution to κ ( σ ) = 1 / δ determines the minimum possible value of σ .
Figure A1. The value of κ ( σ ) as in Theorem 2 for various measurement models. Since κ ( σ ) is a monotonic function of σ , the solution to κ ( σ ) = 1 / δ determines the minimum possible value of σ .
Entropy 23 00178 g0a1
Figure A2. Probability distribution function of S Y for the logistic and Probit models ( x 0 2 = 1 ) compared with the probability distribution function of the Gaussian random variable (dashed lines) with the same mean and variance i.e., N ( E [ S Y ] , Var [ S Y ] ) .
Figure A2. Probability distribution function of S Y for the logistic and Probit models ( x 0 2 = 1 ) compared with the probability distribution function of the Gaussian random variable (dashed lines) with the same mean and variance i.e., N ( E [ S Y ] , Var [ S Y ] ) .
Entropy 23 00178 g0a2

Appendix D. Proofs and Discussions on the Optimal Loss Function

Appendix D.1. Proof of Theorem 3

We show that the triplet ( μ = 1 , α = σ opt , λ = 1 ) is a solution to Equations (8) for chosen as in (29). Using Proposition A2 in the Appendix, we rewrite opt using the Fenchel–Legendre conjugate as follows:
opt ( w ) = q + α 1 q + α 2 log p W opt ( w ) q ( w ) ,
where q ( w ) = w 2 / 2 . For a function f, its Fenchel–Legendre conjugate is defined as:
f ( x ) = max y x y f ( y ) .
Next, we use the fact that, for any proper, closed and convex function f, it holds that ( f ) = f [61] (theorem 12.2). Therefore, noting that q + α 1 q + α 2 log p W opt is a convex function (see the proof of Lemma A9 in the Appendix), combined with (A58), it yields that
( opt + q ) = q + α 1 q + α 2 log p W opt .
Additionally, using Proposition A2, we find that M opt w ; 1 = q ( w ) ( q + opt ) ( w ) , which by (A49) reduces to:
M opt w ; 1 = α 1 q ( w ) α 2 log p W opt ( w ) .
Thus, by differentiation, we find that opt satisfies (28) with c = 1 , i.e.,
M opt , 1 w ; 1 = α 1 w α 2 · ξ W opt ( w ) .
Next, we establish the desired by directly substituting (A60) into the system of equations in (19). First, using the values of α 1 and α 2 in (30), as well as the fact that κ ( σ opt ) = 1 / δ , we have the following chain of equations:
E M opt , 1 W opt ; 1 2 = E ( α 1 W opt + α 2 ξ W opt ( W opt ) ) 2 = α 1 2 ( σ opt 2 + 1 ) + α 2 2 I ( W opt ) + 2 α 1 α 2 E W opt · ξ W opt ( W opt ) = 1 + σ opt 2 σ opt 2 I ( W opt ) 1 δ 2 σ opt 2 I ( W opt ) + I ( W opt ) 1 = σ opt 2 δ 2 κ ( σ opt ) = σ opt 2 / δ .
This shows (8b). Second, using again the specified values of α 1 and α 2 , a similar calculation yields
E M opt , 1 W opt ; 1 ξ W opt ( W opt ) = E α 1 W opt + α 2 ξ W opt ( W opt ) ξ W opt ( W opt ) = α 1 α 2 I ( W opt ) = 1 / δ .
Recall from (17) that E G · M opt , 1 W opt ; 1 = σ opt E M opt , 1 W opt ; 1 ξ W opt ( W opt ) . This combined with (A62) yields (8c). Finally, we use again (A60) and the specified values of α 1 and α 2 to find that
E W opt · M opt , 1 W opt ; 1 = E W opt · ( α 1 W opt α 2 ξ W opt ( W opt ) ) = α 1 E W opt 2 α 2 E W opt ξ W opt ( W opt ) = α 1 ( σ opt 2 + 1 ) α 2 w p W opt ( w ) d w = α 1 ( σ opt 2 + 1 ) + α 2 = σ opt 2 / δ .
However, using (17), it holds that
E W opt · M opt , 1 W opt ; 1 = σ opt 2 E M opt , 1 W opt ; 1 ξ W opt ( W opt ) + E Y S · M opt , 1 W opt ; λ .
This combined with (A63) and (A62) shows that E Y S · M opt , 1 W opt ; λ = 0 , as desired to satisfy (8a). This completes the proof of the theorem.

Appendix D.2. On the Convexity of Optimal Loss Function

Here, we provide a sufficient condition for opt ( w ) to be convex.
Lemma A9.
The optimal loss function as defined in Theorem 3 is convex if
( log ( p W σ ) ) ( w ) 1 σ 2 + 1 , for all w R and σ 0 .
Proof. 
Using (A9) optimal loss function is written in the following form
opt ( w ) = q + α 1 q + α 2 log ( p W opt ) ( w ) q ( w ) .
Next, we prove that q + α 1 q + α 2 log ( p W opt ) is a convex function. We first show that both α 1 and α 2 are positive numbers for all values of σ opt . We first note that, since G and S Y are independent random variables, σ opt 2 I ( W opt ) < σ opt 2 I ( σ opt G ) = 1 . Therefore,
1 σ opt 2 I ( W opt ) > 0 .
Additionally, following the Cramer–Rao bound [53] for Fisher information yields that:
I ( W opt ) > 1 E ( W opt E [ W opt ] ) 2 = 1 1 + σ opt 2 ( E [ S Y ] ) 2 .
Using this inequality for I ( W opt ) , we derive that
σ opt 2 I ( W opt ) + I ( W opt ) 1 > 0 .
From (A65) and (A66), it follows that α 1 , α 2 > 0 .
Based on the definition of the random variable W opt :
log p W opt ( w ) = w 2 / ( 2 σ opt 2 ) + log exp ( 2 w z z 2 ) / 2 σ opt 2 p S Y ( z ) d z + c ,
where c is a constant independent of w. By differentiating twice, we see that
log exp ( 2 w z z 2 ) / 2 σ opt 2 p S Y ( z ) d z
is a convex function of w. Therefore, to prove that q + α 1 q + α 2 log ( p W opt ) is a convex function, it is sufficient to prove that ( 1 + α 1 α 2 / σ opt 2 ) q is a convex function or equivalently 1 + α 1 α 2 / σ opt 2 0 . Replacing values of α 1 , α 2 and recalling the equation for σ opt yields that
1 + α 1 α 2 / σ opt 2 = 0 ,
which implies the convexity of q + α 1 q + α 2 log ( p W opt ) . To obtain the derivative of o p t , we use the result in [61] (Cor. 23.5.1), which states that, for a convex function f,
( f ) = ( f ) 1 .
Therefore, following (A64),
opt ( w ) = ( q + α 1 q + α 2 ( log ( p W opt ) ) ) 1 ( w ) w .
Differentiating again and using the properties of inverse function yields that
opt ( w ) = 1 1 + α 1 + α 2 ( log ( p W opt ) ) ( g ( w ) ) 1 ,
where
g ( w ) : = ( q + α 1 q + α 2 ( log ( p W opt ) ) ) 1 ( w ) .
Note that the denominator of (A68) is nonnegative since it is second derivative of a convex function. Therefore, it is evident from (A68) that a sufficient condition for the convexity of opt is that
α 1 + α 2 ( log ( p W opt ) ) ( w ) 0 , for all w R ,
or
1 σ opt 2 I ( W opt ) + ( log ( p W opt ) ) ( w ) 0 .
This condition is satisfied if the statement of the lemma holds for σ = σ opt :
1 σ opt 2 I ( W opt ) + ( log ( p W opt ) ) ( w ) 1 σ opt 2 I ( W opt ) 1 1 + σ opt 2 < 0 ,
where we use (A66) in the last inequality. This concludes the proof. □

Appendix D.2.1. Provable Convexity of the Optimal Loss Function for Signed Model

In the case of signed model, it can be proved that the conditions of Lemma A9 is satisfied. Since W σ = σ G + S Y , we derive the probability density of W σ as follows:
p W σ ( w ) = p σ G ( w ) * p S Y ( w ) = exp ( w 2 / ( 2 + 2 σ 2 ) ) 2 π ( 1 + σ 2 ) · f ( w ) ,
where
f ( w ) = 2 2 Q ( w / ( σ 2 + 2 σ 2 ) ) .
Direct calculation shows that f is a log-concave function for all w R . Therefore,
( log ( p W σ ) ) ( w ) = 1 σ 2 + 1 + ( log ( f ) ) ( w ) 1 σ 2 + 1 .
This proves the convexity of optimal loss function derived according to Theorem 3 when measurements follow the signed model.

Appendix E. Noisy-Signed Measurement Model

Consider a noisy-signed label function as follows:
y i = f ε ( a i T x 0 ) = sign ( a i T x 0 ) , w . p . 1 ε , sign ( a i T x 0 ) , w . p . ε ,
where ε [ 0 , 1 / 2 ] .
Figure A3. The value of the threshold δ f ε in (A69) as a function of probability of error ε [ 0 , 1 / 2 ] . For logistic and hinge losses, the set of minimizers in (3) is bounded (as required by Theorem 1) iff δ > δ f ε .
Figure A3. The value of the threshold δ f ε in (A69) as a function of probability of error ε [ 0 , 1 / 2 ] . For logistic and hinge losses, the set of minimizers in (3) is bounded (as required by Theorem 1) iff δ > δ f ε .
Entropy 23 00178 g0a3
In the case of signed measurements, i.e., y i = sign ( a i T x 0 ) , it can be observed that for all possible values of δ , the condition (34) in Section 4.2 holds for x s = x 0 . This implies the separability of data and therefore the solution to the optimization problem (3) is unbounded for all δ . However, in the case of noisy signed label function, boundedness or unboundedness of solutions to (3) depends on δ . As discussed in Section 4.2, the minimum value of δ for bounded solutions is derived from the following:
δ f ε ( ε ) : = min c R E G + c S Y 2 1 ,
where Y = f ε ( S ) . It can be checked analytically that δ f ε is a decreasing function of ε with δ f ε ( 0 + ) = + and δ f ε ( 1 / 2 ) = 2 .
In Figure A3, we numerically evaluate the threshold value δ f ε as a function of the probability of error ε . For δ < δ f ε , the set of minimizers of the (3) with logistic or hinge loss is unbounded.
The performances of LS, LAD and hinge loss functions for noisy-signed measurement model with ε = 0.1 and ε = 0.25 are demonstrated in Figure A4a,b, respectively. Comparing performances of least-squares and hinge loss functions suggest that hinge loss is robust to measurement corruptions, as for moderate to large values of δ it outperforms the LS estimator. Theorem 1 opens the way to analytically confirm such conclusions, which is an interesting future direction.
Figure A4. Comparisons between analytical and empirical results for the least-squares (LS), least-absolute deviations and hinge loss functions along with the upper bound on performance and the empirical performance of optimal loss function as in Theorem 3, for noisy-signed measurement model with ε = 0.1 (a) and ε = 0.25 (b). The vertical dashed lines are evaluated by (A59) and represent δ f ε 3 and 2.25 for ε = 0.1 and 0.25 , respectively.
Figure A4. Comparisons between analytical and empirical results for the least-squares (LS), least-absolute deviations and hinge loss functions along with the upper bound on performance and the empirical performance of optimal loss function as in Theorem 3, for noisy-signed measurement model with ε = 0.1 (a) and ε = 0.25 (b). The vertical dashed lines are evaluated by (A59) and represent δ f ε 3 and 2.25 for ε = 0.1 and 0.25 , respectively.
Entropy 23 00178 g0a4

Appendix F. On LS Performance for Binary Models

Appendix F.1. Proof of Corollary 2

To get the values of α and μ as in the statement of the corollary, we show how to simplify Equations (8) for ( t ) = ( t 1 ) 2 . In this case, the proximal operator admits a simple expression:
prox x ; λ = ( x + 2 λ ) / ( 1 + 2 λ ) .
In addition, ( t ) = 2 ( t 1 ) . Substituting these in (14a) gives the formula for μ as follows:
0 = E Y S ( α G + μ S Y 1 ) = μ E [ S 2 ] E [ Y S ] μ = E [ Y S ] ,
where we have also used from (7) that E [ S 2 ] = 1 and G is independent of S. In addition, since ( t ) = 2 , direct application of (A7) gives
1 = λ δ 2 1 + 2 λ λ = 1 2 ( δ 1 ) .
Finally, substituting the value of λ into (14b), we obtain the desired value for α as follows:
α 2 = 4 λ 2 δ E ( prox α G + μ S Y ; λ 1 ) 2 = 4 λ 2 ( 1 + 2 λ ) 2 δ E ( α G + μ S Y 1 ) 2 = 4 λ 2 δ ( 1 + 2 λ ) 2 ( α 2 + μ 2 + 1 2 μ E [ S Y ] ) = 1 δ ( α 2 + 1 E [ S Y ] 2 ) α = 1 E [ S Y ] 2 · 1 δ 1 .

Appendix F.2. Discussion

Linear vs. Binary

On the one hand, Corollary 2 shows that least-squares performance for binary measurements satisfies
lim n x ^ μ x 0 2 · x 0 2 2 = τ 2 · 1 δ 1 ,
where μ is as in (32) and τ 2 : = 1 ( E [ S Y ] ) 2 . On the other hand, it is well-known (e.g., see references in [15] (Sec. 5.1)) that least-squares for (scaled) linear measurements with additive Gaussian noise (i.e., y i = ρ a i T x 0 + σ z i , z i N ( 0 , 1 ) ) leads to an estimator that satisfies
lim n x ^ ρ · x 0 2 2 = σ 2 · 1 δ 1 .
Direct comparison of (A70) to (A71) suggests that least-squares with binary measurements performs the same as if measurements were linear with scaling factor ρ = μ / x 0 2 and noise variance σ 2 = τ 2 = α 2 ( δ 1 ) . This worth-mentioning conclusion is not new, as it is proved in [40,43,56,62]. We include a short discussion on the relation to this prior work in the following paragraph. We highlight that all these existing results are limited to a least-squares loss unlike our general analysis.
Prior work. There is a lot of recent work on the use of least-squares-type estimators for recovering signals from nonlinear measurements of the form y i = h ( a i T x 0 ) with Gaussian vectors a i . The original work that suggests least-squares as a reasonable estimator in this setting is due to Brillinger [56]. In his 1982 paper, Brillinger studied the problem in the classical statistics regime (namely, n is fixed not scaling with m + ) and he proved for the least-squares solution satisfies
lim m + 1 m x ^ μ x 0 2 · x 0 2 2 = τ 2 ,
where
μ = E [ S Y ] , S N ( 0 , 1 ) , τ 2 = E [ ( Y μ S ) 2 ] .
and the expectations are with respect to S and possible randomness of f. Evaluating (A72) for Y = f ε ( S ) leads to the same values for μ and τ 2 in (A70). In other works, (A70) for δ + indeed recovers Brillinger’s result. The extension of Brillinger’s original work to the high-dimensional setting (both m , n large) was first studied by Plan and Vershynin [40], who derived (non-sharp) non-asymptotic upper bounds on the performance of constrained least-squares (such as the Lasso). Shortly after, Thrampoulidis et al. [43] extended this result to sharp asymtpotic predictions and to regularized least-squares. In particular, Corollary 2 is a special case of the main theorem in [43]. Several other interesting extensions of the result by Plan and Vershynin have recently appeared in the literature (e.g., [41,62,63,64]). However, the one in [43] is the only one to give results that are sharp in the flavor of this paper. Our work, extends the result of Thrampoulidis et al. [43] to general loss functions beyond least-squares. The techniques of Thrampoulidis et al. [43] that have guided the use of the CGMT in our context were also recently applied by Dhifallah et al. [60] in the context of phase-retrieval.

Appendix G. Fundamental Limits for Gaussian-Mixture Models: Proofs for Section 5

Appendix G.1. Proof of Corollary 3

The proof follows directly by noting that, when ( t ) = ( t 1 ) 2 , it holds that M x ; λ = ( x 1 ) 2 2 λ + 1 . By inserting this into (38a) and simplifying the equations, we find the value of μ :
μ = r 1 + r 2 .
Similarly, we derive λ using Equation (38c):
λ = 1 2 ( δ 1 ) .
Substituting these values of μ and λ into (38b) yields that
α 2 = 1 δ 1 · 1 r 2 + 1 .
Recalling that σ L S = α / μ concludes the proof.

Appendix G.2. Proof of Theorem 5

The high-level steps of the proof follow the proof of Theorem 2. First, we note that by scaling the loss function the value of σ does not change. In particular, if ˜ ( t ) : = C 1 ( C 2 t ) for arbitrary constants C 1 > 0 , C 2 0 , it is not hard to see that x ^ ˜ = 1 / C 2 x ^ is the minimizer of (3). Thus, we conclude from (40) that σ ˜ = σ . With this observation, consider the function ˜ : R R such that ˜ ( t ) = λ μ 2 ( μ t ) . Then, notice that
M , 1 x ; λ = 1 λ M ˜ , 1 x / μ ; 1 .
Using this relation in (38) and setting σ : = σ = α / μ , the system of equations in (38) can be equivalently rewritten in the following convenient form, where Z σ = σ W 1 + W 2 :
E W 2 · M ˜ , 1 Z σ ; 1 = 0 ,
E M ˜ , 1 Z σ ; 1 2 = σ 2 / δ ,
E W 1 · M ˜ , 1 Z σ ; 1 = σ / δ .
Next, we show how to use (A73) to derive an equivalent system of equations in terms of only Z σ . Starting with (A73c), we have
E W 1 · M ˜ , 1 Z σ ; 1 = 1 σ x M ˜ , 1 x + y ; 1 p σ W 1 ( x ) p W 2 ( y ) d x d y ,
where recall that p σ W 1 ( x ) = 1 σ 2 π e x 2 2 σ 2 . Since it holds that p σ W 1 ( x ) = σ 2 x p σ W 1 ( x ) , using (A74) yields that
E W 1 · M ˜ , 1 Z σ ; 1 = σ M ˜ , 1 x + y ; 1 p σ W 1 ( x ) p W 2 ( y ) d x d y = σ M ˜ , 1 z ; 1 p σ W 1 ( x ) p W 2 ( z x ) d x d z = σ M ˜ , 1 z ; 1 p Z σ ( z ) d z ,
where in the last step we use
p Z σ ( w ) = p σ W 1 ( x ) p W 2 ( z x ) d x .
Therefore,
E W 1 · M ˜ , 1 Z σ ; 1 = σ E M ˜ , 1 Z σ ; 1 ξ Z σ ( Z σ ) .
This combined with (A73c) gives
E M ˜ , 1 Z σ ; 1 ξ Z σ ( Z σ ) = 1 / δ .
Second, multiplying (A73c) with σ 2 and adding it to (A73a) yields
E Z σ · M ˜ , 1 Z σ ; 1 = σ 2 / δ .
Putting these together, we conclude with the following system of equations which is equivalent to (A73),
E Z σ · M ˜ , 1 Z σ ; 1 = σ 2 / δ ,
E M ˜ , 1 Z σ ; 1 2 = σ 2 / δ ,
E M ˜ , 1 Z σ ; 1 ξ Z σ ( Z σ ) = 1 / δ .
Next, considering (A76a) and (A76c), the following holds for any c 1 , c 2 R ,
E c 1 Z σ + c 2 ξ Z σ ( Z σ ) · M ˜ , 1 Z σ ; 1 = c 1 σ 2 / δ c 2 / δ .
Applying Cauchy–Schwarz inequality to the LHS of (A77) gives
c 1 σ 2 / δ c 2 / δ 2 = E c 1 Z σ + c 2 ξ Z σ ( Z σ ) · M ˜ , 1 Z σ ; 1 2
E c 1 Z σ + c 2 ξ Z σ ( Z σ ) 2 E M ˜ , 1 Z σ ; 1 2 .
By considering (A76b), E [ Z σ ξ Z σ ( Z σ ) ] = 1 (follows from integration by parts) and E ( ξ Z σ ( Z σ ) ) 2 = I ( Z σ ) = ( σ 2 + 1 ) 1 , we simplify (A78) to the following:
( c 1 σ 2 / δ c 2 / δ ) 2 c 1 2 ( σ 2 + 1 + r 2 ) + c 2 2 / ( σ 2 + 1 ) 2 c 1 c 2 ) σ 2 / δ .
Choosing c 1 = 1 and c 2 = ( 1 + r 2 ) ( 1 + σ 2 ) and simplifying both sides, we derive the lower bound for σ 2 :
σ 2 1 + r 2 r 2 · 1 ( δ 1 ) .
This completes the proof of theorem.

References

  1. Donoho, D.L. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math. Chall. Lect. 2000, 1, 32. [Google Scholar]
  2. Donoho, D.L. Compressed sensing. Inf. Theory IEEE Trans. 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  3. Stojnic, M. Various thresholds for 1-optimization in compressed sensing. arXiv 2009, arXiv:0907.3666. [Google Scholar]
  4. Chandrasekaran, V.; Recht, B.; Parrilo, P.A.; Willsky, A.S. The convex geometry of linear inverse problems. Found. Comput. Math. 2012, 12, 805–849. [Google Scholar] [CrossRef]
  5. Donoho, D.L.; Maleki, A.; Montanari, A. The noise-sensitivity phase transition in compressed sensing. Inf. Theory IEEE Trans. 2011, 57, 6920–6941. [Google Scholar] [CrossRef] [Green Version]
  6. Tropp, J.A. Convex recovery of a structured signal from independent random linear measurements. arXiv 2014, arXiv:1405.1102. [Google Scholar]
  7. Oymak, S.; Tropp, J.A. Universality laws for randomized dimension reduction, with applications. Inf. Inference J. IMA 2017, 7, 337–446. [Google Scholar] [CrossRef]
  8. Bayati, M.; Montanari, A. The LASSO risk for gaussian matrices. Inf. Theory IEEE Trans. 2012, 58, 1997–2017. [Google Scholar] [CrossRef] [Green Version]
  9. Stojnic, M. A framework to characterize performance of LASSO algorithms. arXiv 2013, arXiv:1303.7291. [Google Scholar]
  10. Oymak, S.; Thrampoulidis, C.; Hassibi, B. The Squared-Error of Generalized LASSO: A Precise Analysis. arXiv 2013, arXiv:1311.0830. [Google Scholar]
  11. Karoui, N.E. Asymptotic behavior of unregularized and ridge-regularized high-dimensional robust regression estimators: Rigorous results. arXiv 2013, arXiv:1311.2445. [Google Scholar]
  12. Bean, D.; Bickel, P.J.; El Karoui, N.; Yu, B. Optimal M-estimation in high-dimensional regression. Proc. Natl. Acad. Sci. USA 2013, 110, 14563–14568. [Google Scholar] [CrossRef] [Green Version]
  13. Thrampoulidis, C.; Oymak, S.; Hassibi, B. Regularized Linear Regression: A Precise Analysis of the Estimation Error. In Proceedings of the 28th Conference on Learning Theory, Paris, France, 3–6 July 2015; pp. 1683–1709. [Google Scholar]
  14. Donoho, D.; Montanari, A. High dimensional robust m-estimation: Asymptotic variance via approximate message passing. Probab. Theory Relat. Fields 2016, 166, 935–969. [Google Scholar] [CrossRef] [Green Version]
  15. Thrampoulidis, C.; Abbasi, E.; Hassibi, B. Precise Error Analysis of Regularized M-Estimators in High Dimensions. IEEE Trans. Inf. Theory 2018, 64, 5592–5628. [Google Scholar] [CrossRef] [Green Version]
  16. Advani, M.; Ganguli, S. Statistical mechanics of optimal convex inference in high dimensions. Phys. Rev. X 2016, 6, 031034. [Google Scholar] [CrossRef] [Green Version]
  17. Weng, H.; Maleki, A.; Zheng, L. Overcoming the limitations of phase transition by higher order analysis of regularization techniques. Ann. Stat. 2018, 46, 3099–3129.
  18. Thrampoulidis, C.; Xu, W.; Hassibi, B. Symbol Error Rate Performance of Box-relaxation Decoders in Massive MIMO. IEEE Trans. Signal Process. 2018, 66, 3377–3392.
  19. Miolane, L.; Montanari, A. The distribution of the Lasso: Uniform control over sparse balls and adaptive parameter tuning. arXiv 2018, arXiv:1811.01212.
  20. Bu, Z.; Klusowski, J.; Rush, C.; Su, W. Algorithmic analysis and statistical estimation of slope via approximate message passing. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 9361–9371.
  21. Xu, J.; Maleki, A.; Rad, K.R.; Hsu, D. Consistent risk estimation in high-dimensional linear regression. arXiv 2019, arXiv:1902.01753.
  22. Celentano, M.; Montanari, A. Fundamental Barriers to High-Dimensional Regression with Convex Penalties. arXiv 2019, arXiv:1903.10603.
  23. Kammoun, A.; Alouini, M.S. On the precise error analysis of support vector machines. arXiv 2020, arXiv:2003.12972.
  24. Amelunxen, D.; Lotz, M.; McCoy, M.B.; Tropp, J.A. Living on the edge: A geometric theory of phase transitions in convex optimization. arXiv 2013, arXiv:1303.6672.
  25. Donoho, D.L.; Johnstone, I.; Montanari, A. Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising. IEEE Trans. Inf. Theory 2013, 59, 3396–3433.
  26. Mondelli, M.; Montanari, A. Fundamental limits of weak recovery with applications to phase retrieval. arXiv 2017, arXiv:1708.05932.
  27. Taheri, H.; Pedarsani, R.; Thrampoulidis, C. Fundamental limits of ridge-regularized empirical risk minimization in high dimensions. arXiv 2020, arXiv:2006.08917.
  28. Bayati, M.; Lelarge, M.; Montanari, A. Universality in polytope phase transitions and message passing algorithms. Ann. Appl. Probab. 2015, 25, 753–822.
  29. Panahi, A.; Hassibi, B. A universal analysis of large-scale regularized least squares solutions. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 3381–3390.
  30. Abbasi, E.; Salehi, F.; Hassibi, B. Universality in learning from linear measurements. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 12372–12382.
  31. Goldt, S.; Reeves, G.; Mézard, M.; Krzakala, F.; Zdeborová, L. The Gaussian equivalence of generative models for learning with two-layer neural networks. arXiv 2020, arXiv:2006.14709.
  32. Donoho, D.L.; Maleki, A.; Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919.
  33. Bayati, M.; Montanari, A. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. Inf. Theory 2011, 57, 764–785.
  34. Mousavi, A.; Maleki, A.; Baraniuk, R.G. Consistent parameter estimation for LASSO and approximate message passing. Ann. Stat. 2018, 46, 119–148.
  35. El Karoui, N. On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators. Probab. Theory Relat. Fields 2018, 170, 95–175.
  36. Boufounos, P.T.; Baraniuk, R.G. 1-bit compressive sensing. In Proceedings of the 2008 IEEE 42nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 19–21 March 2008; pp. 16–21.
  37. Jacques, L.; Laska, J.N.; Boufounos, P.T.; Baraniuk, R.G. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. IEEE Trans. Inf. Theory 2013, 59, 2082–2102.
  38. Plan, Y.; Vershynin, R. One-Bit Compressed Sensing by Linear Programming. Commun. Pure Appl. Math. 2013, 66, 1275–1297.
  39. Plan, Y.; Vershynin, R. Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach. IEEE Trans. Inf. Theory 2012, 59, 482–494.
  40. Plan, Y.; Vershynin, R. The generalized lasso with non-linear observations. IEEE Trans. Inf. Theory 2016, 62, 1528–1537.
  41. Genzel, M. High-dimensional estimation of structured signals from non-linear observations with general convex loss functions. IEEE Trans. Inf. Theory 2017, 63, 1601–1619.
  42. Xu, C.; Jacques, L. Quantized compressive sensing with RIP matrices: The benefit of dithering. arXiv 2018, arXiv:1801.05870.
  43. Thrampoulidis, C.; Abbasi, E.; Hassibi, B. Lasso with non-linear measurements is equivalent to one with linear measurements. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 3420–3428.
  44. Candès, E.J.; Sur, P. The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression. arXiv 2018, arXiv:1804.09753.
  45. Sur, P.; Candès, E.J. A modern maximum-likelihood theory for high-dimensional logistic regression. Proc. Natl. Acad. Sci. USA 2019, 201810420.
  46. Mai, X.; Liao, Z.; Couillet, R. A Large Scale Analysis of Logistic Regression: Asymptotic Performance and New Insights. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 3357–3361.
  47. Salehi, F.; Abbasi, E.; Hassibi, B. The Impact of Regularization on High-dimensional Logistic Regression. arXiv 2019, arXiv:1906.03761.
  48. Montanari, A.; Ruan, F.; Sohn, Y.; Yan, J. The generalization error of max-margin linear classifiers: High-dimensional asymptotics in the overparametrized regime. arXiv 2019, arXiv:1911.01544.
  49. Deng, Z.; Kammoun, A.; Thrampoulidis, C. A Model of Double Descent for High-dimensional Binary Linear Classification. arXiv 2019, arXiv:1911.05822.
  50. Mignacco, F.; Krzakala, F.; Lu, Y.M.; Zdeborová, L. The role of regularization in classification of high-dimensional noisy Gaussian mixture. arXiv 2020, arXiv:2002.11544.
  51. Aubin, B.; Krzakala, F.; Lu, Y.M.; Zdeborová, L. Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization. arXiv 2020, arXiv:2006.06560.
  52. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009; Volume 317.
  53. Barron, A.R. Monotonic Central Limit Theorem for Densities; Technical Report; Stanford University: Stanford, CA, USA, 1984.
  54. Costa, M.H.M. A new entropy power inequality. IEEE Trans. Inf. Theory 1985, 31, 751–760.
  55. Blachman, N. The convolution inequality for entropy powers. IEEE Trans. Inf. Theory 1965, 11, 267–271.
  56. Brillinger, D.R. A Generalized Linear Model with “Gaussian” Regressor Variables. In A Festschrift For Erich L. Lehmann; Springer: New York, NY, USA, 1982; p. 97.
  57. Dhifallah, O.; Lu, Y.M. A precise performance analysis of learning with random features. arXiv 2020, arXiv:2008.11904.
  58. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2009.
  59. Gordon, Y. On Milman’s Inequality and Random Subspaces which Escape through a Mesh in Rn; Springer: Berlin/Heidelberg, Germany, 1988.
  60. Dhifallah, O.; Thrampoulidis, C.; Lu, Y.M. Phase retrieval via polytope optimization: Geometry, phase transitions, and new algorithms. arXiv 2018, arXiv:1805.09555.
  61. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
  62. Genzel, M.; Jung, P. Recovering structured data from superimposed non-linear measurements. arXiv 2017, arXiv:1708.07451.
  63. Goldstein, L.; Minsker, S.; Wei, X. Structured signal recovery from non-linear and heavy-tailed measurements. IEEE Trans. Inf. Theory 2018, 64, 5513–5530.
  64. Thrampoulidis, C.; Rawat, A.S. The generalized lasso for sub-gaussian measurements with dithered quantization. arXiv 2018, arXiv:1807.06976.
Figure 1. (a) Comparison between the theoretical (solid lines) and empirical (markers) performance of least-squares (LS) and least-absolute deviations (LAD), as predicted by Theorem 1, together with the optimal performance predicted by the upper bound of Theorem 2, for the signed model. Squares and circles denote the empirical performance for Gaussian and Rademacher features, respectively. (b) Illustrations of the optimal loss functions for the signed model for different values of δ, according to Theorem 3.
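To make the comparison in Figure 1a concrete, the following is a minimal Monte Carlo sketch in Python. It assumes that the reported performance is the normalized correlation |⟨ŵ, w⋆⟩|/(‖ŵ‖‖w⋆‖) between the estimate and the true parameter, that δ = m/n, and that the signed model generates labels as y = sign(x⊤w⋆). Only the LS estimator is shown (LAD would additionally require a linear-programming solver); the function name and default parameters are illustrative and not taken from the paper’s code.

```python
import numpy as np

def signed_model_ls_performance(n=128, delta=5.0, features="gaussian", n_trials=20, seed=0):
    """Monte Carlo estimate of |<w_hat, w_star>| / (||w_hat|| ||w_star||)
    for the least-squares fit on the signed model y = sign(X w_star)."""
    rng = np.random.default_rng(seed)
    m = int(delta * n)                          # number of observations, delta = m / n
    corrs = []
    for _ in range(n_trials):
        w_star = rng.standard_normal(n)
        w_star /= np.linalg.norm(w_star)        # unit-norm ground truth
        if features == "gaussian":
            X = rng.standard_normal((m, n))
        else:                                   # Rademacher (+/-1) features
            X = rng.choice([-1.0, 1.0], size=(m, n))
        y = np.sign(X @ w_star)
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # unregularized LS fit
        corrs.append(abs(w_hat @ w_star) /
                     (np.linalg.norm(w_hat) * np.linalg.norm(w_star)))
    return float(np.mean(corrs))

# Sweep delta to mimic the LS markers in Figure 1a (Gaussian vs. Rademacher features).
for delta in (2, 4, 8):
    print(delta,
          signed_model_ls_performance(delta=delta, features="gaussian"),
          signed_model_ls_performance(delta=delta, features="rademacher"))
```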
Figure 2. Theoretical (solid lines) and empirical (markers) classification risk in the GMM, as given by Theorem 4 and (39), for the LS, LAD and logistic loss functions as a function of δ for r = 1. The vertical line represents the threshold δ_⋆ ≈ 3.7, as evaluated by (36); the logistic loss yields an unbounded solution if and only if δ < δ_⋆.
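As a companion to Figure 2, here is a minimal sketch of how the empirical classification risk of the LS classifier under the GMM might be estimated, assuming the symmetric model x = yμ + z with z ∼ N(0, I_n), ‖μ‖ = r, balanced ±1 labels and δ = m/n. For a linear rule sign(x⊤ŵ) the exact test risk is the Gaussian tail Q(⟨ŵ, μ⟩/‖ŵ‖), which the sketch evaluates directly instead of drawing a fresh test set; all names and defaults are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gmm_ls_risk(n=128, delta=5.0, r=1.0, n_trials=20, seed=0):
    """Monte Carlo estimate of the classification risk of the least-squares
    classifier sign(x^T w_hat) under the symmetric GMM x = y * mu + z."""
    rng = np.random.default_rng(seed)
    m = int(delta * n)                               # number of training samples
    risks = []
    for _ in range(n_trials):
        mu = rng.standard_normal(n)
        mu *= r / np.linalg.norm(mu)                 # mean vector with ||mu|| = r
        y = rng.choice([-1.0, 1.0], size=m)          # balanced +/-1 labels
        X = y[:, None] * mu + rng.standard_normal((m, n))
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # unregularized LS fit
        # Exact risk of the linear rule: Q(<w_hat, mu> / ||w_hat||).
        risks.append(norm.sf((w_hat @ mu) / np.linalg.norm(w_hat)))
    return float(np.mean(risks))

# Sweep delta to mimic the LS curve in Figure 2 (r = 1).
for delta in (2, 4, 8):
    print(delta, gmm_ls_risk(delta=delta))
```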
Figure 3. (a) Comparison between analytical and empirical results for the performance of the LS, logistic, hinge and optimal loss functions for the logistic model. The vertical dashed line represents δ_f ≈ 2.275, as evaluated by (35). (b) Illustrations of the optimal loss functions for different values of δ, derived according to Theorem 3 for the logistic model. To highlight the similarity of the optimal loss functions to the LS loss, the optimal loss functions (hardly visible) are scaled such that ℓ(1) = 0 and ℓ(2) = 1.
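For the logistic model in Figure 3a, a minimal sketch of the data generation and of an unregularized logistic fit is given below, assuming labels drawn with P(y = +1 | x) = 1/(1 + exp(−x⊤w⋆)) for a unit-norm w⋆, δ = m/n, and the same normalized-correlation performance measure as above. Plain gradient descent stands in for whichever solver the authors actually used, and all names and step-size choices are illustrative. Note that for sufficiently small δ the unregularized minimizer need not exist, in which case the iterates’ norm can keep growing while only their direction (and hence the correlation) stabilizes.

```python
import numpy as np

def logistic_model_fit(n=128, delta=5.0, steps=2000, lr=0.1, seed=0):
    """Generate data from the logistic model and fit w by plain gradient descent
    on the average (unregularized) logistic loss; return the normalized correlation
    between the estimate and the true parameter vector."""
    rng = np.random.default_rng(seed)
    m = int(delta * n)
    w_star = rng.standard_normal(n)
    w_star /= np.linalg.norm(w_star)             # unit-norm ground truth
    X = rng.standard_normal((m, n))
    p = 1.0 / (1.0 + np.exp(-X @ w_star))        # P(y = +1 | x)
    y = np.where(rng.random(m) < p, 1.0, -1.0)
    w = np.zeros(n)
    for _ in range(steps):
        margins = y * (X @ w)
        # Gradient of (1/m) * sum log(1 + exp(-y_i x_i^T w)).
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / m
        w -= lr * grad
    return abs(w @ w_star) / (np.linalg.norm(w) * np.linalg.norm(w_star) + 1e-12)

print(logistic_model_fit(delta=5.0))
```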
Figure 4. (a) Comparison between analytical and empirical results for the performance of the LS, hinge and optimal loss functions for the Probit model. The vertical dashed line represents δ_f ≈ 2.699, as evaluated by (35). (b) Illustrations of the optimal loss functions for different values of δ, derived according to Theorem 3 for the Probit model. To highlight the similarity of the optimal loss functions to the LS loss, the optimal loss functions (hardly visible) are scaled such that ℓ(1) = 0 and ℓ(2) = 1.
Table 1. Theoretical predictions and empirical performance of the optimal loss function for the signed model. Empirical results are averaged over 20 experiments with n = 128.

δ                        2        3        4        5        6        7        8        9
Predicted Performance    0.8168   0.9101   0.9457   0.9645   0.9748   0.9813   0.9855   0.9885
Empirical (Gaussian)     0.8213   0.9045   0.9504   0.9669   0.9734   0.9801   0.9834   0.9873
Empirical (Rademacher)   0.8096   0.9158   0.9490   0.9633   0.9644   0.9768   0.9808   0.9829
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
