Article

Phase Transitions in Transfer Learning for High-Dimensional Perceptrons

John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 400; https://doi.org/10.3390/e23040400
Submission received: 4 January 2021 / Revised: 22 March 2021 / Accepted: 24 March 2021 / Published: 27 March 2021

Abstract:
Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, our asymptotic analysis reveals a phase transition from negative transfer to positive transfer as the similarity of the two tasks moves past a well-defined threshold.

1. Introduction

Transfer learning [1,2,3,4,5] is a promising approach to improving the performance of machine learning tasks. It does so by exploiting the knowledge gained from a previously learned model, referred to as the source task, to improve the generalization performance of a related learning problem, referred to as the target task. One particular challenge in transfer learning is to avoid so-called negative transfer [6,7,8,9], where the transferred source information reduces the generalization performance of the target task. Recent literature [6,7,8,9] shows that negative transfer is closely related to the similarity between the source and target tasks. Transfer learning may hurt the generalization performance if the tasks are sufficiently dissimilar.
In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, the model reveals a sharp phase transition from negative transfer to positive transfer (i.e., when transfer becomes helpful) as a function of the model similarity.

1.1. Models and Learning Formulations

We start by describing the models for our theoretical study. We assume that the source task has a collection of training data $\{(a_{s,i}, y_{s,i})\}_{i=1}^{n_s}$, where $a_{s,i} \in \mathbb{R}^p$ is the source feature vector and $y_{s,i} \in \mathbb{R}$ denotes the label corresponding to $a_{s,i}$. Following the standard teacher–student paradigm, we assume that the labels $\{y_{s,i}\}_{i=1}^{n_s}$ are generated according to the following model:
$$y_{s,i} = \varphi(a_{s,i}^\top \xi_s), \quad i \in \{1, \dots, n_s\},$$
where $\varphi(\cdot)$ is a scalar deterministic or probabilistic function and $\xi_s \in \mathbb{R}^p$ is an unknown source teacher vector.
Similar to the source task, the target task has access to a different collection of training data $\{(a_{t,i}, y_{t,i})\}_{i=1}^{n_t}$, generated according to
$$y_{t,i} = \varphi(a_{t,i}^\top \xi_t), \quad i \in \{1, \dots, n_t\},$$
where $\xi_t \in \mathbb{R}^p$ is an unknown target teacher vector. We measure the similarity of the two tasks using
$$\rho \overset{\text{def}}{=} \frac{\xi_t^\top \xi_s}{\lVert \xi_t \rVert \, \lVert \xi_s \rVert},$$
where $\rho = 0$ indicates two uncorrelated tasks, whereas $\rho = 1$ means that the tasks are perfectly aligned.
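The generative model above is straightforward to simulate. The sketch below is a minimal illustration with NumPy; `correlated_teachers` and `make_task_data` are hypothetical helper names, not code from the paper. To make the overlap exactly $\rho$, one teacher direction is orthogonalized (the paper instead draws both uniformly on the unit sphere, which gives the same overlap asymptotically).

```python
import numpy as np

def correlated_teachers(p, rho, rng):
    """Unit-norm teachers with overlap rho: xi_s = rho*xi_t + sqrt(1-rho^2)*xi_r."""
    xi_t = rng.standard_normal(p)
    xi_t /= np.linalg.norm(xi_t)
    xi_r = rng.standard_normal(p)
    xi_r -= (xi_r @ xi_t) * xi_t          # make xi_r orthogonal to xi_t ...
    xi_r /= np.linalg.norm(xi_r)          # ... so that xi_s^T xi_t = rho exactly
    xi_s = rho * xi_t + np.sqrt(1.0 - rho**2) * xi_r
    return xi_s, xi_t

def make_task_data(n, p, xi, phi, rng):
    """n Gaussian feature vectors and teacher-generated labels y_i = phi(a_i^T xi)."""
    A = rng.standard_normal((n, p))       # rows a_i ~ N(0, I_p)
    return A, phi(A @ xi)

rng = np.random.default_rng(0)
xi_s, xi_t = correlated_teachers(p=500, rho=0.8, rng=rng)
A_s, y_s = make_task_data(n=1000, p=500, xi=xi_s, phi=np.sign, rng=rng)
```

With this construction, both teachers have unit norm and their inner product equals $\rho$ exactly, which is convenient when sweeping the similarity parameter in experiments.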
For the source task, we learn the optimal weight vector $\hat{w}_s$ by solving a convex optimization problem:
$$\hat{w}_s = \operatorname*{argmin}_{w \in \mathbb{R}^p} \; \frac{1}{p} \sum_{i=1}^{n_s} \ell(y_{s,i}; a_{s,i}^\top w) + \frac{\lambda}{2} \lVert w \rVert^2,$$
where $\lambda \ge 0$ is a regularization parameter and $\ell(\cdot\,; \cdot)$ denotes a general loss function that can take one of the following two forms:
$$\ell(y; x) = \hat{\ell}(y - x) \ \ \text{for a regression task}, \qquad \ell(y; x) = \hat{\ell}(y x) \ \ \text{for a classification task},$$
where $\hat{\ell}(\cdot)$ is a convex function.
In this paper, we consider a common strategy in transfer learning [4], which consists of transferring the optimal source vector, i.e., w ^ s , to the target task. One popular approach is to fix a (random) subset of the target weights to values of the corresponding optimal weights learned during the source training process [10]. In our learning model, this amounts to the following target learning formulation:
$$\begin{aligned} \hat{w}_t = \operatorname*{argmin}_{w \in \mathbb{R}^p} \;& \frac{1}{p} \sum_{i=1}^{n_t} \ell(y_{t,i}; a_{t,i}^\top w) + \frac{\lambda}{2} \lVert w \rVert^2 \\ \text{s.t.} \;& Q w = Q \hat{w}_s. \end{aligned}$$
The vector $\hat{w}_s$ is the optimal solution of the source learning problem, and $Q \in \mathbb{R}^{p \times p}$ is a diagonal matrix whose diagonal entries are drawn independently from a Bernoulli distribution with probability $\delta = m/p \le 1$, where $m$ denotes the number of transferred components. Thus, on average, we retain $\delta p$ entries of the optimal source vector $\hat{w}_s$. In addition to a possible improvement in the generalization performance, this approach can considerably lower the computational complexity of the target learning task by reducing the number of free optimization variables. In what follows, we refer to $\delta$ as the transfer rate and call (6) the hard transfer formulation.
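For the squared loss, the equality constraint in (6) simply freezes the selected coordinates at their source values, so the remaining coordinates can be obtained in closed form from the reduced normal equations. The sketch below illustrates this under that squared-loss assumption; `hard_transfer_fit` is a hypothetical name, not code from the paper.

```python
import numpy as np

def hard_transfer_fit(A_t, y_t, w_s, mask, lam):
    """Solve (6) for the squared loss l(y; x) = (y - x)^2 / 2.

    Coordinates with mask==True are frozen to the source solution w_s;
    the free block solves its own ridge normal equations.
    """
    p = A_t.shape[1]
    free = ~mask
    resid = y_t - A_t[:, mask] @ w_s[mask]          # labels minus the frozen part
    G = A_t[:, free].T @ A_t[:, free] / p + lam * np.eye(int(free.sum()))
    w = np.empty(p)
    w[mask] = w_s[mask]                             # transferred entries
    w[free] = np.linalg.solve(G, A_t[:, free].T @ resid / p)
    return w

# tiny demo with a random Bernoulli(0.3) transfer pattern
rng = np.random.default_rng(0)
p = 40
A = rng.standard_normal((3 * p, p))
w_s = rng.standard_normal(p)
mask = rng.random(p) < 0.3
y = np.sign(A @ rng.standard_normal(p))
w_hat = hard_transfer_fit(A, y, w_s, mask, lam=0.1)
```

At the solution, the gradient of the objective with respect to the free coordinates vanishes, while the masked coordinates carry the source values verbatim.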
Another popular approach in transfer learning is to search for target weight vectors in the vicinity of the optimal source weight vector w ^ s . This can be achieved by adding a regularization term to the target formulation [11,12], which in our model becomes
$$\hat{w}_t = \operatorname*{argmin}_{w \in \mathbb{R}^p} \; \frac{1}{p} \sum_{i=1}^{n_t} \ell(y_{t,i}; a_{t,i}^\top w) + \frac{\lambda}{2} \lVert w \rVert^2 + \frac{1}{2} \lVert \Sigma (w - \hat{w}_s) \rVert^2,$$
with $\Sigma \in \mathbb{R}^{p \times p}$ denoting some weighting matrix. In what follows, we refer to (8) as the soft transfer formulation, since it relaxes the strict equality constraint in (6). In fact, the hard transfer in (6) is just a special case of the soft transfer formulation, obtained by setting $\Sigma$ to be a diagonal matrix whose diagonal entries are either $+\infty$ (with probability $\delta$) or $0$ (with probability $1 - \delta$).
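For the squared loss, the soft formulation (8) is quadratic in $w$, and setting the gradient to zero gives the linear system $(A^\top A / p + \lambda I + \Sigma^\top \Sigma)\, w = A^\top y / p + \Sigma^\top \Sigma\, \hat{w}_s$. The following is a minimal sketch under that squared-loss assumption; `soft_transfer_fit` is a hypothetical name.

```python
import numpy as np

def soft_transfer_fit(A_t, y_t, w_s, Sigma, lam):
    """Closed-form minimizer of (8) for the squared loss l(y; x) = (y - x)^2 / 2."""
    p = A_t.shape[1]
    Lam = Sigma.T @ Sigma
    lhs = A_t.T @ A_t / p + lam * np.eye(p) + Lam   # Hessian of the objective
    rhs = A_t.T @ y_t / p + Lam @ w_s               # pull toward the source weights
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
p = 50
A = rng.standard_normal((2 * p, p))
y = A @ rng.standard_normal(p)
w_s = rng.standard_normal(p)
Sigma = 0.5 * np.eye(p)          # uniform pull toward the source solution
w_hat = soft_transfer_fit(A, y, w_s, Sigma, lam=0.1)
```

Taking $\Sigma \to 0$ recovers standard ridge-regularized learning on the target data alone, while large diagonal entries of $\Sigma$ pin the corresponding coordinates to the source weights, mimicking hard transfer.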
To measure the performance of the transfer learning methods, we use the generalization error of the target task. Given a new data sample $(a_{t,\mathrm{new}}, y_{t,\mathrm{new}})$ with $y_{t,\mathrm{new}} = \varphi(\xi_t^\top a_{t,\mathrm{new}})$, we assume that the target task predicts the corresponding label as
$$\hat{y}_{t,\mathrm{new}} = \hat{\varphi}\big( \hat{w}_t^\top a_{t,\mathrm{new}} \big),$$
where $\hat{\varphi}(\cdot)$ is a predefined scalar function that might be different from $\varphi(\cdot)$. We then calculate the generalization error of the target task as
$$\mathcal{E}_{\mathrm{test}} = \frac{1}{4^\upsilon} \, \mathbb{E}\Big[ \big( y_{t,\mathrm{new}} - \hat{\varphi}(\hat{w}_t^\top a_{t,\mathrm{new}}) \big)^2 \Big],$$
where the expectation is taken with respect to the new data sample $(a_{t,\mathrm{new}}, y_{t,\mathrm{new}})$. The variable $\upsilon$ allows us to write a more compact formula: we take $\upsilon = 0$ for a regression problem and $\upsilon = 1$ for a binary classification problem. Finally, we use the training error
$$\mathcal{E}_{\mathrm{train}} = \frac{1}{p} \sum_{i=1}^{n_t} \ell(y_{t,i}; a_{t,i}^\top \hat{w}_t) + \frac{1}{2} \lVert \Sigma (\hat{w}_t - \hat{w}_s) \rVert^2$$
to quantify the performance of the training process. Here, the training error is measured on the training data without the ridge regularization term.

1.2. Main Contributions

The main contributions of this paper are two-fold, as summarized below:

1.2.1. Precise Asymptotic Analysis

We present a precise asymptotic analysis of the transfer learning approaches introduced in (6) and (8) for Gaussian feature vectors and under regularity conditions on the eigenvalue distribution of the weighting matrix Σ . Specifically, we show that, as the dimensions p , n s , n t grow to infinity with the ratios α s = n s / p , α t = n t / p fixed, the generalization errors of the hard and soft formulations can be exactly characterized by the solutions of two low-dimensional deterministic optimization problems. (See Theorem 1 and Corollary 1 for details.) Our asymptotic predictions hold for any convex loss functions used in the training process, including the squared loss for regression problems and logistic loss commonly used for binary classification problems.
As illustrated in Figure 1, our theoretical predictions (drawn as solid lines in the figures) are in excellent agreement with the actual performance (shown as circles) of the transfer learning problem. Figure 1a considers a binary classification setting with logistic loss, and we plot the generalization errors of different transfer approaches as a function of the target data/dimension ratio $\alpha_t = n_t / p$. We can see that the hard transfer formulation (6) is only useful when $\alpha_t$ is small. In fact, we encounter negative transfer (i.e., hard transfer performing worse than no transfer) when $\alpha_t$ becomes sufficiently large. Moreover, the soft transfer formulation (8) appears to achieve more favorable generalization errors than the hard formulation. In Figure 1b, we consider a regression setting with the squared loss and explore the impact of different weighting schemes on the performance of the soft formulation. We can see that the soft formulation considerably improves the generalization performance of the standard learning method (i.e., learning the target task without any knowledge transfer).

1.2.2. Phase Transitions

Our asymptotic characterizations reveal a phase transition phenomenon in the hard transfer formulation. Let
$$\delta^\ast = \operatorname*{argmin}_{0 \le \delta \le 1} \; \mathcal{E}_{\mathrm{test}}(\delta)$$
be the optimal transfer rate that minimizes the generalization error of the target task. Clearly, $\delta^\ast = 0$ corresponds to the negative transfer regime, where transferring the knowledge of the source task actually hurts the performance of the target task. In contrast, $\delta^\ast > 0$ signifies that we have entered the positive transfer regime, where transfer becomes helpful.
Figure 2a illustrates the phase transition from the negative to the positive transfer regime in a binary classification setting, as the similarity $\rho$ between the two tasks moves past a critical threshold. Similar phase transition phenomena also appear in nonlinear regression, as shown in Figure 2b. Interestingly, for this setting, the optimal transfer rate jumps from $\delta^\ast = 0$ directly to $\delta^\ast = 1$ at the transition threshold.
For general loss functions, the exact locations of the phase transitions can only be found numerically by solving the deterministic optimization problems in our asymptotic characterizations. For the special case of squared loss with no regularization, however, we are able to obtain the following simple analytical characterization for the phase transition threshold: We are in the positive transfer regime if and only if
$$\rho > \rho_c(\alpha_s, \alpha_t) = \sqrt{1 - \frac{\mathbb{E}[\varphi^2(z)] - \mathbb{E}^2[z \varphi(z)]}{\mathbb{E}^2[z \varphi(z)]} \left( \frac{1}{\alpha_t - 1} - \frac{1}{\alpha_s - 1} \right)},$$
where z is a standard Gaussian random variable. This result is shown in Proposition 1.
By the Cauchy–Schwarz inequality, $\mathbb{E}[\varphi^2(z)] \ge \mathbb{E}^2[z \varphi(z)]$. It follows that $\rho_c(\alpha_s, \alpha_t)$ is an increasing function of $\alpha_t$ and a decreasing function of $\alpha_s$. This property is consistent with our intuition: as we increase $\alpha_t$, the target task has more training data to work with, and thus we should set a higher bar for when to transfer knowledge. As we increase $\alpha_s$, the quality of the optimal source vector improves, in which case the transfer can start at a lower similarity level. In particular, when $\alpha_t > \alpha_s$, we have $\rho_c(\alpha_s, \alpha_t) > 1$ and, thus, the inequality in (11) is never satisfied (because $|\rho| \le 1$ by definition). This indicates that no transfer should be performed when the target task has more training data than the source task.
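For the ReLU teacher $\varphi(z) = \max(z, 0)$, the two Gaussian moments entering (11) have closed forms: $\mathbb{E}[\varphi^2(z)] = \mathbb{E}[z^2 \mathbb{1}\{z > 0\}] = 1/2$ and $\mathbb{E}[z \varphi(z)] = 1/2$. The sketch below evaluates our reading of the threshold in (11) under those moments and checks the monotonicity properties just discussed; `rho_c` is a hypothetical helper name.

```python
import math

def rho_c(alpha_s, alpha_t, e_phi2, e_zphi):
    """Phase-transition threshold of (11); a value above 1 means transfer never helps."""
    noise_ratio = (e_phi2 - e_zphi**2) / e_zphi**2
    val = 1.0 - noise_ratio * (1.0 / (alpha_t - 1.0) - 1.0 / (alpha_s - 1.0))
    return math.sqrt(val) if val >= 0.0 else 0.0

# ReLU teacher: E[phi^2(z)] = E[z phi(z)] = 1/2 for z ~ N(0, 1)
print(rho_c(alpha_s=4.0, alpha_t=2.0, e_phi2=0.5, e_zphi=0.5))  # ≈ 0.577
```

As expected, increasing $\alpha_t$ raises the threshold (transfer must clear a higher bar), while increasing $\alpha_s$ lowers it.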

1.3. Related Work

The idea of transferring information between different domains or different tasks was first proposed in [1] and further developed in [2]. It has attracted significant interest in the recent literature [4,5,6,7,8,9,11,12]. While most work focuses on the practical aspects of transfer learning, several studies (e.g., [13,14]) seek to provide an analytical understanding of transfer learning in simplified models. Our work is particularly related to [14], which considers a transfer learning model similar to ours but for the special case of linear regression. The analysis in this paper is more general, as it accommodates arbitrary convex loss functions. We would also like to mention an interesting recent work that studies a different but related setting referred to as knowledge distillation [15].
In terms of technical tools, our asymptotic predictions are derived using the convex Gaussian min–max theorem (CGMT). The CGMT was first introduced in [16] and further developed in [17]; it extends a Gaussian comparison inequality first introduced in [18] by exploiting convexity to establish an asymptotic equivalence between two Gaussian processes. The CGMT has been successfully used to analyze convex regression formulations [17,19,20] and convex classification formulations [21,22,23,24].

1.4. Organization

The rest of this paper is organized as follows. Section 2 states the technical assumptions under which our results are obtained. Section 3 provides an asymptotic characterization of the soft transfer formulation. Precise analysis of the hard transfer formulation is presented in Section 4. We provide remarks about our approach in Section 5. Our theoretical predictions hold for general convex loss functions. We specialize these results to the settings of nonlinear regression and binary classification in Section 6, where we also provide additional numerical results to validate our predictions. Section 7 provides detailed proof of the technical statements introduced in Section 3 and Section 4. Section 8 concludes the paper. The Appendix provides additional technical details.

2. Technical Assumptions

The theoretical analysis of this paper is carried out under the following assumptions.
Assumption 1
(Gaussian Feature Vectors). The feature vectors $\{a_{s,i}\}_{i=1}^{n_s}$ and $\{a_{t,i}\}_{i=1}^{n_t}$ are drawn independently from a standard Gaussian distribution. The vector $\xi_s \in \mathbb{R}^p$ can be expressed as $\xi_s = \rho \, \xi_t + \sqrt{1 - \rho^2} \, \xi_r$, where the vectors $\xi_t \in \mathbb{R}^p$ and $\xi_r \in \mathbb{R}^p$ are independent of the feature vectors and are generated independently from the uniform distribution on the unit sphere.
Moreover, our results are valid in a high-dimensional asymptotic setting, where the dimensions p, n s , n t , and m grow to infinity at fixed ratios.
Assumption 2
(High-Dimensional Asymptotics). The numbers of samples and of transferred components in hard transfer satisfy $n_s = n_s(p)$, $n_t = n_t(p)$, and $m = m(p)$, with $\alpha_{s,p} = n_s(p)/p \to \alpha_s > 0$, $\alpha_{t,p} = n_t(p)/p \to \alpha_t > 0$, and $\delta_p = m(p)/p \to \delta > 0$ as $p \to \infty$.
The CGMT framework makes specific assumptions about the loss function and the feasibility sets. To guarantee that these assumptions hold, this paper considers a family of loss functions satisfying the following conditions. Note that the assumption is stated for the target task, but we assume that it also holds for the source task.
Assumption 3
(Loss Function). If $\lambda > 0$, the loss function $\ell(y; \cdot)$ defined in (5) is a proper convex function on $\mathbb{R}$. If $\lambda = 0$, the loss function $\ell(y; \cdot)$ defined in (5) is a proper strongly convex function on $\mathbb{R}$ with strong convexity parameter $S > 0$; in this case, we only consider $\alpha_t > 1$. Define the random function $L(x) = \sum_{i=1}^{n_t} \ell(y_i; x_i)$, where $y_i \sim \varphi(z_i)$, with $\{z_i\}$ a collection of independent standard normal random variables and $\sim$ denoting equality in distribution. Denote by $\partial L$ the sub-differential set of $L(x)$. Then, for any constant $C > 0$, there exists a constant $R > 0$ such that
$$\mathbb{P}\left( \sup_{\lVert v \rVert \le C \sqrt{n_t}} \; \sup_{s \in \partial L(v)} \lVert s \rVert \le R \sqrt{n_t} \right) \xrightarrow[p \to \infty]{} 1.$$
Furthermore, we consider the following assumption to guarantee that the generalization error defined in (10) concentrates in the large system limit.
Assumption 4
(Regularity Conditions). The data-generating function φ ( · ) is independent from the feature vectors. Moreover, the following conditions are satisfied.
  • $\varphi(\cdot)$ and $\hat{\varphi}(\cdot)$ are continuous almost everywhere in $\mathbb{R}$. For every $h > 0$ and $z \sim \mathcal{N}(0, h)$, we have $0 < \mathbb{E}[\varphi^2(z)] < +\infty$ and $0 < \mathbb{E}[\hat{\varphi}^2(z)] < +\infty$.
  • For any compact interval $[c, C]$, there exists a function $g(\cdot)$ such that
$$\sup_{h \in [c, C]} |\hat{\varphi}(h x)|^2 \le g(x) \quad \text{for all } x \in \mathbb{R}.$$
    Additionally, the function $g(\cdot)$ satisfies $\mathbb{E}[g^2(z)] < +\infty$, where $z \sim \mathcal{N}(0, 1)$.
Finally, we introduce the following assumption to guarantee that the training and generalization errors of the soft formulation can be asymptotically characterized by deterministic optimization problems.
Assumption 5
(Weighting Matrix). Let $\Lambda = \Sigma^\top \Sigma$, where $\Sigma$ is the weighting matrix in the soft transfer formulation, and let $\sigma_{\min,1}(\Lambda)$ and $\sigma_{\min,2}(\Lambda)$ denote its two smallest eigenvalues. There exists a constant $\mu_{\min} \ge 0$ such that
$$\sigma_{\min,1}(\Lambda) \xrightarrow[p \to \infty]{\mathbb{P}} \mu_{\min}, \qquad \big| \sigma_{\min,1}(\Lambda) - \sigma_{\min,2}(\Lambda) \big| \xrightarrow[p \to \infty]{\mathbb{P}} 0.$$
Moreover, we assume that the empirical distribution of the eigenvalues of the matrix $\Lambda$ converges weakly to a probability distribution $P_\mu(\cdot)$.
The above assumptions are essential to show that the soft formulation in (8) concentrates in the large system limit. We provide more details about these assumptions in Appendix A.

3. Sharp Asymptotic Analysis of Soft Transfer Formulation

In this section, we study the asymptotic properties of the soft transfer formulation. Specifically, we provide a precise characterization of the training and generalization errors corresponding to (8).
The asymptotic performance of the source formulation defined in (4) has been studied in the literature [24]. In particular, it has been shown that the asymptotic limit of the source formulation in (4) can be quantified by the following deterministic optimization problem:
$$\min_{q_s, r_s \ge 0} \; \sup_{\sigma_s > 0} \; \alpha_s \, \mathbb{E}\!\left[ \mathcal{M}_{\ell(Y_s, \cdot)}\!\left( r_s H_s + q_s S_s ; \frac{r_s}{\sigma_s} \right) \right] - \frac{r_s \sigma_s}{2} + \frac{\lambda}{2} \big( q_s^2 + r_s^2 \big),$$
where $Y_s = \varphi(S_s)$, and $H_s$ and $S_s$ are two independent standard Gaussian random variables. Furthermore, the function $\mathcal{M}_{\ell(Y_s, \cdot)}$ introduced in the scalar optimization problem (14) is the Moreau envelope function, defined as
$$\mathcal{M}_{\ell(y, \cdot)}(a; b) = \min_{c \in \mathbb{R}} \; \ell(y; c) + \frac{1}{2 b} (c - a)^2.$$
The expectation in (14) is taken over the random variables $H_s$ and $S_s$.
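The Moreau envelope in (15) is easy to verify numerically. For the squared loss $\ell(y; c) = \frac{1}{2}(y - c)^2$, the inner minimizer is $c^\ast = (b y + a)/(1 + b)$ and the envelope reduces to $(y - a)^2 / (2(1 + b))$; the sketch below checks this closed form against a brute-force grid minimization (`moreau_envelope` is a hypothetical helper name).

```python
import numpy as np

def moreau_envelope(loss, a, b, grid):
    """Brute-force M_loss(a; b) = min_c loss(c) + (c - a)^2 / (2 b) over a grid."""
    return float(np.min(loss(grid) + (grid - a) ** 2 / (2.0 * b)))

y, a, b = 1.3, -0.4, 2.0
grid = np.linspace(-10.0, 10.0, 400001)
val = moreau_envelope(lambda c: 0.5 * (y - c) ** 2, a, b, grid)
closed = (y - a) ** 2 / (2.0 * (1.0 + b))    # closed form for the squared loss
print(val, closed)                            # both ≈ 0.4817
```

The same brute-force routine can evaluate the envelope of non-smooth losses (e.g., the LAD loss used later), for which no simple closed form is needed.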
In our work, we focus on the target problem with soft transfer, as formulated in (8). It turns out that the asymptotic performance of the target problem can also be characterized by a deterministic optimization problem:
$$\begin{aligned} \min_{q_t, r_t \ge 0} \; \sup_{\sigma_t > -\mu_{\min}} \; & -\frac{\sigma_t r_t^2}{2} + \frac{1}{2} (1 - \rho^2) \big[ (q_s^\ast)^2 + (r_s^\ast)^2 \big] \, T_2(\sigma_t) + \alpha_t \, \mathbb{E}\!\left[ \mathcal{M}_{\ell(Y_t, \cdot)}\big( r_t H_t + q_t S_t ; T_1(\sigma_t) \big) \right] \\ & + \frac{\lambda}{2} \big( q_t^2 + r_t^2 \big) - \frac{1}{2} \big( q_t - \rho \, q_s^\ast \big)^2 \left( \sigma_t - \frac{1}{T_1(\sigma_t)} \right), \end{aligned}$$
where $Y_t = \varphi(S_t)$, and $H_t$ and $S_t$ are independent standard Gaussian random variables. Additionally, $\mu_{\min}$ represents the minimum value of the random variable with distribution $P_\mu(\cdot)$, as defined in Assumption 5. In the formulation (16), the constants $q_s^\ast$ and $r_s^\ast$ are optimal solutions of the asymptotic source formulation given in (14). Moreover, the functions $T_1(\cdot)$ and $T_2(\cdot)$ are defined as follows:
$$T_1(\sigma_t) = \mathbb{E}_\mu\!\left[ \frac{1}{\mu + \sigma_t} \right], \qquad T_2(\sigma_t) = \mathbb{E}_\mu\!\left[ \frac{\mu \, \sigma_t}{\mu + \sigma_t} \right],$$
where the expectations are taken over the probability distribution $P_\mu(\cdot)$ defined in Assumption 5.
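The functions $T_1$ and $T_2$ are simple scalar expectations over the eigenvalue law. As a sanity check, for the hard-transfer eigenvalue law used later ($\mu = +\infty$ with probability $\delta$ and $\mu = 0$ otherwise), they reduce to $T_1(\sigma) = (1 - \delta)/\sigma$ and $T_2(\sigma) = \delta \sigma$. The sketch below verifies this with a large finite surrogate for $+\infty$; `T1`/`T2` are hypothetical helper names for a discrete eigenvalue law.

```python
def T1(law, sigma):
    """T1(sigma) = E_mu[1 / (mu + sigma)] for a discrete eigenvalue law {mu: prob}."""
    return sum(p / (mu + sigma) for mu, p in law.items())

def T2(law, sigma):
    """T2(sigma) = E_mu[mu * sigma / (mu + sigma)]."""
    return sum(p * mu * sigma / (mu + sigma) for mu, p in law.items())

# hard transfer: mu = +inf w.p. delta, mu = 0 w.p. 1 - delta (large finite surrogate)
delta, sigma, big = 0.3, 1.7, 1e12
law = {big: delta, 0.0: 1.0 - delta}
print(T1(law, sigma), (1.0 - delta) / sigma)   # both ≈ 0.4118
print(T2(law, sigma), delta * sigma)           # both ≈ 0.51
```

Substituting these two limits into (16) is exactly how the hard-transfer scalar problem of Section 4 is obtained from the soft one.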
Theorem 1
(Precise Analysis of the Soft Transfer). Suppose that Assumptions 1–5 are satisfied. Then, the training error corresponding to the soft transfer formulation in (8) converges in probability as follows:
$$\mathcal{E}_{\mathrm{train}} \xrightarrow[p \to \infty]{\mathbb{P}} C_t^\ast - \frac{\lambda}{2} \big[ (q_t^\ast)^2 + (r_t^\ast)^2 \big],$$
where $C_t^\ast$ denotes the minimum value achieved by the scalar formulation introduced in (16), and $q_t^\ast$ and $r_t^\ast$ are optimal solutions of the scalar formulation in (16). Moreover, the generalization error introduced in (10) corresponding to the soft transfer formulation converges in probability as follows:
$$\mathcal{E}_{\mathrm{test}} \xrightarrow[p \to \infty]{\mathbb{P}} \frac{1}{4^\upsilon} \, \mathbb{E}\Big[ \big( \varphi(\nu_1) - \hat{\varphi}(\nu_2) \big)^2 \Big],$$
where $\nu_1$ and $\nu_2$ are two jointly Gaussian random variables with zero mean and covariance matrix
$$\begin{pmatrix} 1 & q_t^\ast \\ q_t^\ast & (q_t^\ast)^2 + (r_t^\ast)^2 \end{pmatrix}.$$
The proof of Theorem 1 is based on the CGMT framework [17] (Theorem 6.1). A detailed proof is provided in Section 7.3. The statements in Theorem 1 are valid for a general convex loss function and general learning models that can be expressed as in (1) and (2). The analysis in Section 7.3 shows that the deterministic problems in (14) and (16) are the asymptotic limits of the source and target formulations given in (4) and (8), respectively. Moreover, it shows that the deterministic problems (14) and (16) are strictly convex in the minimization variables. This implies the uniqueness of the optimal solutions of the minimization problems.
Remark 1.
The results of the theorem show that the training and generalization errors corresponding to the soft transfer formulation can be fully characterized using the optimal solutions of the scalar formulation in (16). Moreover, by its definition, (16) depends on the optimal solutions of the scalar formulation in (14) for the source task. Therefore, the precise asymptotic performance of the soft transfer formulation can be characterized after solving two scalar deterministic problems.

4. Sharp Asymptotic Analysis of Hard Transfer Formulation

In this section, we study the asymptotic properties of the hard transfer formulation. We then use these predictions to rigorously prove the existence of phase transitions from negative to positive transfer.

4.1. Asymptotic Predictions

As mentioned earlier, the hard transfer formulation can be recovered from (8) as a special case in which the eigenvalues of the matrix $\Lambda$ are $+\infty$ with probability $\delta$ and $0$ otherwise. Thus, we obtain the following result as a simple consequence of Theorem 1.
Corollary 1.
Suppose that Assumptions 1–4 are satisfied. Then, the asymptotic limit of the hard formulation defined in (6) is given by the following deterministic formulation:
$$\begin{aligned} \min_{q_t, r_t \ge 0} \; \sup_{\sigma > 0} \; & \frac{\lambda}{2} \big( q_t^2 + r_t^2 \big) + \frac{\sigma \delta}{2} (1 - \rho^2) \big[ (q_s^\ast)^2 + (r_s^\ast)^2 \big] + \alpha_t \, \mathbb{E}\!\left[ \mathcal{M}_{\ell(Y_t, \cdot)}\!\left( r_t H_t + q_t S_t ; \frac{1 - \delta}{\sigma} \right) \right] \\ & - \frac{\sigma r_t^2}{2} + \frac{\sigma \delta}{2 (1 - \delta)} \big( q_t - \rho \, q_s^\ast \big)^2. \end{aligned}$$
Additionally, the training and generalization errors associated with the hard formulation converge in probability to the limits given in (17) and (18), respectively.

4.2. Phase Transitions

As illustrated in Figure 2, there is a phase transition phenomenon in the hard transfer formulation, where the problem moves from negative transfer to positive transfer as the similarity of the source and target tasks increases. For general loss functions, the exact location of the phase transition boundary can only be determined by numerically solving the scalar optimization problem in (19).
For the special case of squared loss, however, we are able to obtain analytical expressions. For the rest of this section, we restrict our discussions to the following special settings:
(a)
The loss function $\ell(\cdot\,; \cdot)$ in (4) and (6) is the squared loss, i.e., $\ell(y; x) = \frac{1}{2}(y - x)^2$.
(b)
The regularization strength λ = 0 in the source and target formulations (4) and (6).
(c)
The data/dimension ratios α s and α t satisfy α s > 1 and α t > 1 .
We first consider a nonlinear regression task, where the function φ ( · ) in the generative models (1) and (2) can be arbitrary and where the function φ ^ ( · ) in (9) is the identity function.
Proposition 1
(Regression Phase Transition). In addition to conditions (a)–(c) introduced above, assume that the predefined function $\hat{\varphi}(\cdot)$ in (9) is the identity function. Let $\delta^\ast$ be the optimal transfer rate that leads to the lowest generalization error in the hard formulation (6). Then,
$$\delta^\ast = \begin{cases} 0 & \text{if } \rho < \rho_c(\alpha_s, \alpha_t), \\ 1 & \text{if } \rho > \rho_c(\alpha_s, \alpha_t), \end{cases}$$
where ρ c ( α s , α t ) is defined in (11).
The result of Proposition 1, whose proof can be found in Section 7.4, shows that $\rho_c(\alpha_s, \alpha_t)$ is the phase transition boundary separating the negative transfer regime from the positive transfer regime. When the similarity measure satisfies $\rho < \rho_c(\alpha_s, \alpha_t)$, the optimal transfer rate is $\delta^\ast = 0$, indicating that we should not transfer any source knowledge. Transfer becomes helpful only when $\rho$ moves past the threshold. Note that this particular model also exhibits an interesting feature: the optimal rate $\delta^\ast$ jumps directly to $1$ in the positive transfer phase, meaning that we should fully copy the source weight vector.

4.3. Sufficient Condition

Next, we consider a binary classification task, where the nonlinear functions $\varphi(\cdot)$ and $\hat{\varphi}(\cdot)$ are both the sign function. In this part, we provide a sufficient condition for when the hard transfer is beneficial. Before stating our predictions, we need a few definitions related to the Moreau envelope function defined in (15). For simplicity of notation, we refer to the Moreau envelope function as $\mathcal{M}(\cdot, \cdot)$. Based on [25], $\mathcal{M}(\cdot, \cdot)$ is differentiable on $\mathbb{R} \times \mathbb{R}_+$. We refer to its first derivatives with respect to the first and second arguments as $\mathcal{M}'_{,1}(\cdot, \cdot)$ and $\mathcal{M}'_{,2}(\cdot, \cdot)$, respectively. If $\mathcal{M}(\cdot, \cdot)$ is twice differentiable, we refer to its second derivatives with respect to the first and second arguments as $\mathcal{M}''_{,1}(\cdot, \cdot)$ and $\mathcal{M}''_{,2}(\cdot, \cdot)$, respectively. Additionally, we refer to its second derivative with respect to the first and then the second argument as $\mathcal{M}''_{,12}(\cdot, \cdot)$.
We define $q_0^\ast$, $r_0^\ast$, and $\sigma_0^\ast$ as the optimal solutions of the standard learning formulation (i.e., $\delta = 0$ in (19)). Moreover, we define the constants $\beta_1$ and $\beta_2$ as follows:
$$\beta_1 = (1 - \rho^2) \big[ (q_s^\ast)^2 + (r_s^\ast)^2 \big], \qquad \beta_2 = \rho \, q_s^\ast,$$
where $q_s^\ast$ and $r_s^\ast$ are optimal solutions of the deterministic source formulation given in (14). Define the constants $I_{11}$, $I_{12}$, $I_{13}$, and $I_{14}$ as follows:
$$\begin{aligned} I_{11} &= \alpha_t \, \mathbb{E}\big[ S H \, \mathcal{M}''_{,1}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big], & I_{12} &= \alpha_t \, \mathbb{E}\big[ S^2 \, \mathcal{M}''_{,1}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] + \lambda, \\ I_{13} &= -\frac{\alpha_t}{(\sigma_0^\ast)^2} \, \mathbb{E}\big[ S \, \mathcal{M}''_{,12}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big], & I_{14} &= -\frac{\alpha_t}{\sigma_0^\ast} \, \mathbb{E}\big[ S \, \mathcal{M}''_{,12}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] + \sigma_0^\ast \big( q_0^\ast - \beta_2 \big), \end{aligned}$$
where $Y = \varphi(S)$, and $H$ and $S$ are two independent standard Gaussian random variables. Now, define the constants $I_{21}$, $I_{22}$, $I_{23}$, and $I_{24}$ as follows:
$$\begin{aligned} I_{21} &= \alpha_t \, \mathbb{E}\big[ H^2 \, \mathcal{M}''_{,1}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] - \sigma_0^\ast + \lambda, & I_{22} &= \alpha_t \, \mathbb{E}\big[ H S \, \mathcal{M}''_{,1}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big], \\ I_{23} &= -\frac{\alpha_t}{(\sigma_0^\ast)^2} \, \mathbb{E}\big[ H \, \mathcal{M}''_{,12}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] - r_0^\ast, & I_{24} &= -\frac{\alpha_t}{\sigma_0^\ast} \, \mathbb{E}\big[ H \, \mathcal{M}''_{,12}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big]. \end{aligned}$$
Finally, define the constants $I_{31}$, $I_{32}$, $I_{33}$, and $I_{34}$ as follows:
$$\begin{aligned} I_{31} &= -\frac{\alpha_t}{(\sigma_0^\ast)^2} \, \mathbb{E}\big[ H \, \mathcal{M}''_{,21}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] - r_0^\ast, & I_{32} &= -\frac{\alpha_t}{(\sigma_0^\ast)^2} \, \mathbb{E}\big[ S \, \mathcal{M}''_{,12}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big], \\ I_{33} &= \frac{2 \alpha_t}{(\sigma_0^\ast)^3} \, \mathbb{E}\big[ \mathcal{M}'_{,2}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] + \frac{\alpha_t}{(\sigma_0^\ast)^4} \, \mathbb{E}\big[ \mathcal{M}''_{,2}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big], \\ I_{34} &= \frac{\alpha_t}{(\sigma_0^\ast)^2} \, \mathbb{E}\big[ \mathcal{M}'_{,2}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] + \frac{\alpha_t}{(\sigma_0^\ast)^3} \, \mathbb{E}\big[ \mathcal{M}''_{,2}( r_0^\ast H + q_0^\ast S ; 1/\sigma_0^\ast ) \big] + \frac{1}{2} \big( q_0^\ast - \beta_2 \big)^2 + \frac{\beta_1}{2}. \end{aligned}$$
Now, we are ready to state our sufficient condition.
Proposition 2
(Classification). Assume that the Moreau envelope function is twice continuously differentiable almost everywhere on $\mathbb{R} \times \mathbb{R}_+$ and that the above expectations are all well defined. Moreover, assume that both $\varphi(\cdot)$ and $\hat{\varphi}(\cdot)$ are the sign function. Then,
$$\delta^\ast > 0 \quad \text{if} \quad q_0' \, r_0^\ast - q_0^\ast \, r_0' > 0,$$
where $q_0'$ and $r_0'$ are solutions of the following linear system of equations:
$$\begin{cases} I_{11} \, r_0' + I_{12} \, q_0' + I_{13} \, \sigma_0' + I_{14} = 0, \\ I_{21} \, r_0' + I_{22} \, q_0' + I_{23} \, \sigma_0' + I_{24} = 0, \\ I_{31} \, r_0' + I_{32} \, q_0' + I_{33} \, \sigma_0' + I_{34} = 0. \end{cases}$$
We prove this result at the end of Section 7. Note that Proposition 2 is valid for a general family of loss functions and general regularization strength λ 0 . For instance, we can see that the results stated in Proposition 2 are valid for the squared loss and the least absolute deviation (LAD) loss, i.e.,
$$\ell(y; x) = \frac{1}{2} (1 - y x)^2, \qquad \ell(y; x) = |1 - y x|.$$
Unlike (20), the result in (22) only provides a sufficient condition for when the hard transfer is beneficial. Nevertheless, our numerical simulations show that the sufficient condition in (22) provides a good prediction of the phase transition boundary for the majority of parameter settings.

5. Remarks

5.1. Learning Formulations

Given that the target task predicts the new label with φ ^ ( · ) , it is more natural to consider loss functions satisfying the following form:
$$\ell\big( y_i ; \hat{\varphi}(a_i^\top w) \big).$$
In this case, the convexity assumption is not necessarily satisfied since the loss function can be viewed as the composition of a convex function with a nonlinear function. To guarantee the convexity, we need additional assumptions on the function φ ^ ( · ) . Moreover, note that, once the convexity is guaranteed, the function φ ^ ( · ) can be absorbed by the loss function ( · ; · ) .

5.2. Transition from Negative to Positive Transfer

Our first simulation example in Figure 2 shows that the optimal transfer rate $\delta^\ast$ can be $1$ while the similarity $\rho$ is still less than $1$. Here, we provide an intuitive explanation of this behavior.
Given that the source and target feature vectors are generated from the same distribution, one can see that the source labels can be equivalently expressed as follows:
$$y_{s,i} = \varphi\big( \rho \, a_{t,i}^\top \xi_t + z_i \big), \quad i \in \{1, \dots, n_s\},$$
where $\{z_i\}_{i=1}^{n_s}$ is additive noise caused by the mismatch between the source and target teacher vectors. Moreover, note that the noise strength depends on the similarity measure $\rho$.
First, consider the case where the number of source samples is larger than the number of target samples (i.e., $\alpha_s > \alpha_t$). A large value of $\rho$ means that the source and target models are closely related, so one can expect the additional data available to the source task to overcome the effect of the noise in (25). Specifically, in this regime, the source model is expected to perform better than the standard learning formulation for values of $\rho$ close to $1$. As we decrease the similarity $\rho$, however, the source model carries less information about the target data, and the performance of the hard formulation is expected to fall below that of the standard formulation for small values of $\rho$. In this regime, the source information may hurt the generalization performance of the target task, and we should transfer only a portion of the source information (see Figure 2a). In some settings, the transition is sharp, meaning that the source information becomes irrelevant to the target task once $\rho$ is smaller than a threshold (see Figure 2b).
Second, consider the case where the number of source samples is smaller than the number of target samples (i.e., $\alpha_s < \alpha_t$). Given the observation in (25), the performance of the standard method is expected to be better than that of the hard formulation for all possible values of $\rho$ in this regime (see Figure 7).

6. Additional Simulation Results

In this section, we provide additional simulation examples to confirm our asymptotic analysis and illustrate the phase transition phenomenon. In our experiments, we focus on the regression and classification models.

6.1. Model Assumptions

For the regression model, we assume that the source, target, and test data are generated according to
$$y_i = \max(a_i^\top \xi, 0), \quad i \in \{1, \dots, n\}.$$
The data $\{(a_i, y_i)\}_{i=1}^{n}$ can be the training data of the source or target task. In this regression model, we assume that the function $\hat{\varphi}(\cdot)$ is the identity function, i.e., $\hat{\varphi}(x) = x$. Then, the generalization error corresponding to the soft formulation converges in probability as follows:
$$\mathcal{E}_{\mathrm{test}} \xrightarrow[p \to \infty]{\mathbb{P}} v - 2 c \, q_t^\ast + (q_t^\ast)^2 + (r_t^\ast)^2,$$
where $c$ and $v$ are defined as follows:
$$c = \mathbb{E}[z \max(z, 0)], \qquad v = \mathbb{E}\big[ \max(z, 0)^2 \big],$$
with $z$ a standard Gaussian random variable, and $q_t^\ast$ and $r_t^\ast$ as defined in Theorem 1. Additionally, the asymptotic limit of the generalization error corresponding to the hard formulation can be expressed in a similar fashion.
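For the ReLU model above, both constants have closed forms: $c = \mathbb{E}[z \max(z, 0)] = \mathbb{E}[z^2 \mathbb{1}\{z > 0\}] = 1/2$ and $v = \mathbb{E}[\max(z, 0)^2] = 1/2$. A quick Monte Carlo confirmation:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(2_000_000)
c_mc = float(np.mean(z * np.maximum(z, 0.0)))    # -> 1/2
v_mc = float(np.mean(np.maximum(z, 0.0) ** 2))   # -> 1/2
print(c_mc, v_mc)
```

Substituting $c = v = 1/2$ into the limit above makes the regression error an explicit function of $q_t^\ast$ and $r_t^\ast$ alone.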
For the binary classification model, we assume that the source, target, and test data labels are binary and generated as follows:
$$y_i = \mathrm{sign}\big(\mathbf{a}_i^\top \boldsymbol{\xi}\big), \quad i \in \{1, \ldots, n\},$$
where the data $\{(\mathbf{a}_i, y_i)\}_{i=1}^{n}$ can be the training data of the source or target task. In this classification model, the objective is to predict the correct label of any unseen feature vector. We therefore fix the function $\widehat{\varphi}(\cdot)$ to be the sign function. Following Theorem 1, it can be easily shown that the generalization error corresponding to the soft formulation given in (8) converges in probability as follows:
$$\mathcal{E}_{\mathrm{test}} \overset{\mathrm{P}}{\longrightarrow} \frac{1}{\pi}\cos^{-1}\!\left(\frac{q_t^{\ast}}{\sqrt{(q_t^{\ast})^2 + (r_t^{\ast})^2}}\right).$$
Here, $q_t^{\ast}$ and $r_t^{\ast}$ are optimal solutions of the target scalar formulation given in (16). The generalization error corresponding to the hard formulation given in (6) can be expressed in a similar fashion.
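This arccosine expression is the classical angle formula for Gaussian inputs: a student direction with teacher overlap $q$ and orthogonal component $r$ misclassifies with probability $\theta/\pi$, where $\theta$ is the angle between student and teacher. A small simulation sketch with illustrative overlap values:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 200
xi = rng.standard_normal(p)
xi /= np.linalg.norm(xi)                 # teacher direction

# Hypothetical student with overlap q along xi and mass r orthogonal to it
q, r = 0.8, 0.6
perp = rng.standard_normal(p)
perp -= (perp @ xi) * xi
perp /= np.linalg.norm(perp)
w = q * xi + r * perp

# Empirical misclassification rate on fresh Gaussian feature vectors
A = rng.standard_normal((50_000, p))
err_mc = np.mean(np.sign(A @ xi) != np.sign(A @ w))

err_theory = np.arccos(q / np.hypot(q, r)) / np.pi
print(err_mc, err_theory)                # both close to 0.2048
```

The empirical rate matches the deterministic limit up to Monte Carlo noise, which is the content of the convergence statement above.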

6.2. Phase Transitions in the Hard Formulation

In Section 4, we presented analytical formulas for the phase transition phenomenon but only for the special case of squared loss with no regularization. The main purpose of this experiment, shown in Figure 3, is to demonstrate that the phase transition phenomenon still takes place in more general settings with different loss functions and regularization strengths.
In all the cases shown in Figure 3, the transition from negative to positive transfer is a discontinuous jump from standard learning (i.e., no transfer) to full source transfer. Additionally, Figure 3c,d show that the loss function has a small effect on the phase transition boundary.

6.3. Sufficient Condition for the Hard Formulation

In Section 4, we presented a sufficient condition for positive transfer. This sufficient condition is valid for a general family of loss functions and a general regularization strength. The main purpose of this experiment, shown in Figure 4, is to illustrate the precision of the sufficient condition for two particular loss functions, i.e., the squared loss and LAD loss.
In all the cases shown in Figure 4, we can see that the transition from negative to positive transfer is a discontinuous jump from standard learning to full source transfer. Additionally, Figure 4a,b show that the sufficient condition summarized in Proposition 2 provides a good prediction of the phase transition boundary for the considered setting.

6.4. Soft Transfer: Impact of the Weighting Matrix and Regularization Strength

In this experiment, we empirically explore the impact of the weighting matrix Σ on the generalization error corresponding to the soft formulation. We focus on the binary classification problem with logistic loss. The weighting matrix in (8) takes the following form:
$$\boldsymbol{\Sigma} = \beta_t\, \mathbf{V}, \tag{28}$$
where $\mathbf{V}$ is a diagonal matrix generated in three different ways: (1) Soft Identity: $\mathbf{V}$ is the identity matrix; (2) Soft Uniform: the diagonal entries of $\mathbf{V}$ are drawn independently from the uniform distribution and then scaled to have their mean equal to 1; and (3) Soft Beta: similar to (2), but with the diagonal entries drawn from the beta distribution, followed by rescaling to the unit mean.
Figure 5a shows that the considered weighting matrix choices have similar generalization performances, with the identity matrix being slightly better than the other alternatives. Moreover, Figure 5b illustrates the effects of the parameter β t in (28) on the generalization performance. It points to the interesting possibility of “designing” the optimal weight matrix to minimize the generalization error.
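The three diagonal choices can be generated as sketched below; the uniform interval and the beta shape parameters are illustrative assumptions (the text fixes only the rescaling to unit mean):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1000

# (1) Soft Identity: identity weighting
V_identity = np.ones(p)

# (2) Soft Uniform: i.i.d. uniform diagonal, rescaled to unit mean
d = rng.uniform(0.0, 1.0, size=p)
V_uniform = d / d.mean()

# (3) Soft Beta: i.i.d. beta diagonal (shape parameters chosen arbitrarily),
# rescaled to unit mean
b = rng.beta(2.0, 5.0, size=p)
V_beta = b / b.mean()

for V in (V_identity, V_uniform, V_beta):
    print(V.mean())   # each diagonal has unit mean
```

The unit-mean normalization keeps the three weighting schemes comparable, so differences in generalization can be attributed to the shape of the diagonal rather than its overall scale.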

6.5. Soft and Hard Transfer Comparison

In this simulation example, we consider the regression model and compare the performances of the hard and soft transfer formulations as functions of α t and ρ .
Figure 6a shows that the soft formulation provides the best generalization performance for all values of $\alpha_t$. Moreover, we can see that the hard transfer formulation is only useful for small values of $\alpha_t$. Figure 6b shows that the performance of the soft and hard transfer formulations depends on the similarity between the source and target tasks. Specifically, the generalization performances of the different transfer approaches all improve as we increase the similarity measure $\rho$. We can also see that the full source transfer approach provides the lowest generalization error when the similarity measure is close to 1, while the soft transfer method leads to the best generalization performance at moderate values of the similarity measure. At very small values of $\rho$, which means that the two tasks share little resemblance, the standard learning method (i.e., no transfer) is the best scheme one should use.

6.6. Effects of the Source Parameters

In the last simulation example, we consider the regression and classification models. We study the performance of the hard and soft transfer formulations when α s < α t .
Figure 7a considers the regression model. It first shows that the soft transfer formulation provides a slightly better generalization performance compared to the standard method. This behavior can be explained by the fact that the soft formulation requires the target weight vector to be close and not necessarily equal to the source weight vector. Additionally, the source model carries some information about the target task.
We can also see that the hard transfer approach is not beneficial when the number of source samples is smaller than the number of target samples. This result can be explained by the fact that the hard formulation restricts some entries of the target weight vector to be exactly equal to the corresponding entries of the source weight vector, while the source model is not perfectly aligned with the target model and is trained on fewer samples than the target model (see Section 5.2).
The same behavior can be observed in Figure 7b, which considers the classification model.

7. Technical Details

In this section, we provide detailed proofs of Theorem 1 and Propositions 1 and 2. Specifically, we focus on analyzing the generalized formulation in (8) using the CGMT framework introduced in the following part.

7.1. Technical Tool: Convex Gaussian Min–Max Theorem

The CGMT provides an asymptotically equivalent formulation of primary optimization (PO) problems of the following form:
$$\Phi_p(\mathbf{G}) = \min_{\mathbf{w}\in\mathcal{S}_w}\ \max_{\mathbf{u}\in\mathcal{S}_u}\ \mathbf{u}^\top \mathbf{G} \mathbf{w} + \psi(\mathbf{w}, \mathbf{u}). \tag{29}$$
Specifically, the CGMT shows that the PO given in (29) is asymptotically equivalent to the following formulation:
$$\phi_p(\mathbf{g}, \mathbf{h}) = \min_{\mathbf{w}\in\mathcal{S}_w}\ \max_{\mathbf{u}\in\mathcal{S}_u}\ \|\mathbf{u}\|\, \mathbf{g}^\top \mathbf{w} + \|\mathbf{w}\|\, \mathbf{h}^\top \mathbf{u} + \psi(\mathbf{w}, \mathbf{u}), \tag{30}$$
referred to as the auxiliary optimization (AO) problem. Before showing the equivalence between the PO and the AO, the CGMT assumes that $\mathbf{G} \in \mathbb{R}^{n\times p}$, $\mathbf{g} \in \mathbb{R}^{p}$, and $\mathbf{h} \in \mathbb{R}^{n}$ all have independent and identically distributed standard normal entries; that the feasibility sets $\mathcal{S}_w \subset \mathbb{R}^p$ and $\mathcal{S}_u \subset \mathbb{R}^n$ are convex and compact; and that the function $\psi(\cdot,\cdot): \mathbb{R}^p \times \mathbb{R}^n \to \mathbb{R}$ is continuous and convex–concave on $\mathcal{S}_w \times \mathcal{S}_u$. Moreover, the function $\psi(\cdot,\cdot)$ is independent of the matrix $\mathbf{G}$. Under these assumptions, the CGMT [17] (Theorem 6.1) shows that, for any $\chi \in \mathbb{R}$ and $\zeta > 0$, the following holds:
$$\mathbb{P}\big(|\Phi_p(\mathbf{G}) - \chi| > \zeta\big) \le 2\, \mathbb{P}\big(|\phi_p(\mathbf{g}, \mathbf{h}) - \chi| > \zeta\big).$$
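As a concrete sanity check of this equivalence, consider the classical special case $\psi = 0$ with unit-norm feasibility sets: the PO value reduces to the smallest singular value of $\mathbf{G}$, the AO value reduces to $\|\mathbf{h}\| - \|\mathbf{g}\|$, and both concentrate around $\sqrt{n} - \sqrt{p}$. A minimal numerical sketch (illustrative, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 500
G = rng.standard_normal((n, p))
g = rng.standard_normal(p)
h = rng.standard_normal(n)

# PO value with psi = 0:  min_{||w||=1} max_{||u||<=1} u^T G w  =  sigma_min(G)
po = np.linalg.svd(G, compute_uv=False).min()

# AO value:  min_{||w||=1} max_{||u||<=1} ||u|| g^T w + ||w|| h^T u  =  ||h|| - ||g||
ao = np.linalg.norm(h) - np.linalg.norm(g)

print(po, ao, np.sqrt(n) - np.sqrt(p))   # all three concentrate around each other
```

The scalar AO quantity is far simpler than the matrix-valued PO, which is exactly the leverage the proof exploits below.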
Additionally, the CGMT [17] (Theorem 6.1) provides the following conditions under which the optimal solutions of the PO and the AO concentrate around the same set.
Theorem 2
(CGMT Framework). Consider an open set $\mathcal{S}_p$. Moreover, define the set $\mathcal{S}_p^c = \mathcal{S}_w \setminus \mathcal{S}_p$. Let $\phi_p$ and $\phi_p^c$ be the optimal cost values of the AO formulation in (30) with feasibility sets $\mathcal{S}_w$ and $\mathcal{S}_p^c$, respectively. Assume that the following properties are all satisfied:
(1) There exists a constant $\bar{\phi}$ such that the optimal cost $\phi_p$ converges in probability to $\bar{\phi}$ as $p$ goes to $+\infty$.
(2) There exists a positive constant $\zeta > 0$ such that $\phi_p^c \ge \bar{\phi} + \zeta$ with probability going to 1 as $p \to +\infty$.
Then, the following convergence in probability holds:
$$|\Phi_p - \phi_p| \overset{\mathrm{P}}{\longrightarrow} 0, \quad \text{and} \quad \mathbb{P}\big(\widehat{\mathbf{w}}_p \in \mathcal{S}_p\big) \overset{\mathrm{P}}{\longrightarrow} 1,$$
where $\Phi_p$ and $\widehat{\mathbf{w}}_p$ are the optimal cost and the optimal solution of the PO formulation in (29).
Theorem 2 allows us to analyze the generally easy AO problem to infer the asymptotic properties of the generally hard PO problem. Next, we use the CGMT to rigorously prove the technical results presented in Theorem 1.

7.2. Precise Analysis of the Source Formulation

The source formulation defined in (4) is well-studied in recent literature [26]. Specifically, it has been rigorously proven that the performance of the source formulation can be fully characterized after solving the following scalar formulation:
$$\min_{q_s, r_s \ge 0}\ \sup_{\sigma_s > 0}\ \alpha_s\, \mathbb{E}\Big[\mathcal{M}_{\ell(Y_s,\cdot)}\Big(r_s H_s + q_s S_s;\ \frac{r_s}{\sigma_s}\Big)\Big] - \frac{r_s \sigma_s}{2} + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big), \tag{32}$$
where $Y_s = \varphi(S_s)$, and $H_s$ and $S_s$ are two independent standard Gaussian random variables. The expectation in (32) is taken over the random variables $H_s$ and $S_s$. Furthermore, the function $\mathcal{M}_{\ell(Y_s,\cdot)}(\cdot;\cdot)$ introduced in the scalar optimization problem (32) is the Moreau envelope function defined in (15).
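For intuition, the Moreau envelope $\mathcal{M}_{f}(x; \tau) = \min_{v} f(v) + (v-x)^2/(2\tau)$ is easy to evaluate numerically; for the squared loss $\ell(y; v) = \frac{1}{2}(y-v)^2$ it has the closed form $(y-x)^2/(2(1+\tau))$, which the generic sketch below reproduces (illustrative code, with arbitrary values of $y$, $x$, $\tau$):

```python
import numpy as np

def moreau_env(f, x, tau, grid):
    """Numerical Moreau envelope: min over v of f(v) + (v - x)^2 / (2 tau)."""
    return np.min(f(grid) + (grid - x) ** 2 / (2.0 * tau))

y, x, tau = 1.3, -0.4, 0.7
grid = np.linspace(-10.0, 10.0, 400_001)

sq_loss = lambda v: 0.5 * (y - v) ** 2       # squared loss at label y
numeric = moreau_env(sq_loss, x, tau, grid)
closed = (y - x) ** 2 / (2.0 * (1.0 + tau))  # known closed form
print(numeric, closed)                       # both equal 0.85
```

The envelope smooths the loss while preserving its minimizers, which is why the scalar formulations below can be stated for generic, possibly non-smooth losses.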

7.3. Precise Analysis of the Soft Transfer Approach

In this part, we provide a precise asymptotic analysis of the generalized transfer formulation given in (8). Specifically, we focus on analyzing the following formulation:
$$\min_{\mathbf{w}\in\mathbb{R}^p}\ \frac{1}{p}\sum_{i=1}^{n_t} \ell\big(y_i;\ \mathbf{a}_i^\top \mathbf{w}\big) + \frac{\lambda}{2}\|\mathbf{w}\|^2 + \frac{1}{2}\big\|\boldsymbol{\Sigma}\big(\mathbf{w} - \widehat{\mathbf{w}}_s\big)\big\|^2,$$
where w ^ s is the optimal solution of the source formulation given in (4). Note that the vector w ^ s is independent of the training data of the target task. For simplicity of notation, we denote by { ( a i , y i ) } i = 1 n t the training data of the target task. Here, we use the CGMT framework introduced in Section 7.1 to precisely analyze the above formulation.

7.3.1. Formulating the Auxiliary Optimization Problem

Our first objective is to rewrite the generalized formulation in the form of the PO problem given in (29). To this end, we introduce additional optimization variables. Specifically, the generalized formulation can be equivalently formulated as follows:
$$\min_{\mathbf{w}\in\mathbb{R}^p}\ \max_{\mathbf{u}\in\mathbb{R}^{n_t}}\ \frac{1}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{A} \mathbf{w} - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) + \frac{\lambda}{2}\|\mathbf{w}\|^2 + \frac{1}{2}\big\|\boldsymbol{\Sigma}(\mathbf{w} - \widehat{\mathbf{w}}_s)\big\|^2, \tag{34}$$
where the optimization vector $\mathbf{u} \in \mathbb{R}^{n_t}$ is formed as $\mathbf{u} = [u_1, \ldots, u_{n_t}]^\top$ and the data matrix $\mathbf{A} \in \mathbb{R}^{n_t \times p}$ is given by $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_{n_t}]^\top$. Additionally, the function $\ell^{\ast}(y; \cdot)$ denotes the convex conjugate of the loss function $\ell(y; \cdot)$. First, observe that the CGMT framework assumes that the feasibility sets of the minimization and maximization problems are compact. Our next step is therefore to show that the formulation given in (34) satisfies this assumption.
Lemma 1
(Primal-Dual Compactness). Assume that w ^ and u ^ are optimal solutions of the optimization problem in (34). Then, there exist two constants C w > 0 and C u > 0 such that the following convergence in probability holds:
$$\mathbb{P}\big(\|\widehat{\mathbf{w}}\| \le C_w\big) \xrightarrow{p\to+\infty} 1, \qquad \mathbb{P}\big(\|\widehat{\mathbf{u}}\|/\sqrt{n_t} \le C_u\big) \xrightarrow{p\to+\infty} 1.$$
A detailed proof of Lemma 1 is provided in Appendix B. The proof of the above result follows using Assumption 3 to prove the compactness of the optimal solution w ^ . Moreover, it uses the asymptotic results in [27] (Theorem 2.1), which provides the concentration properties of the minimum and maximum eigenvalues of random matrices. To show the compactness of the optimal dual vector u ^ , we use Assumption 3 and the result in [25] (Proposition 11.3), which provides the inversion rules for subgradient relations.
The theoretical result in Lemma 1 shows that the optimization problem in (34) can be equivalently formulated with compact feasibility sets on events with probability going to one. Then, it suffices to study the constrained version of (34). Note that the data labels { y i } i = 1 n t depend on the data matrix A . Then, one can decompose the matrix A as follows:
$$\mathbf{A} = \mathbf{A}\, \mathbf{P}_{\boldsymbol{\xi}_t} + \mathbf{A}\, \mathbf{P}_{\boldsymbol{\xi}_t}^{\perp} = \mathbf{A}\, \boldsymbol{\xi}_t \boldsymbol{\xi}_t^\top + \mathbf{A}\, \mathbf{P}_{\boldsymbol{\xi}_t}^{\perp},$$
where the matrix $\mathbf{P}_{\boldsymbol{\xi}_t} \in \mathbb{R}^{p\times p}$ denotes the projection matrix onto the space spanned by the vector $\boldsymbol{\xi}_t$ and the matrix $\mathbf{P}_{\boldsymbol{\xi}_t}^{\perp} = \mathbf{I}_p - \boldsymbol{\xi}_t \boldsymbol{\xi}_t^\top$ denotes the projection matrix onto the orthogonal complement of the space spanned by the vector $\boldsymbol{\xi}_t$. Note that we can express $\mathbf{A}$ as follows without changing its statistics:
$$\mathbf{A} = \mathbf{s}_t\, \boldsymbol{\xi}_t^\top + \mathbf{G}\, \mathbf{P}_{\boldsymbol{\xi}_t}^{\perp}, \tag{36}$$
where $\mathbf{s}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_{n_t})$, the components of the matrix $\mathbf{G} \in \mathbb{R}^{n_t\times p}$ are drawn independently from a standard Gaussian distribution, and $\mathbf{s}_t$ and $\mathbf{G}$ are independent. Here, (36) represents an equality in distribution. This means that the formulation in (34) can be expressed as follows:
$$\min_{\mathbf{w}\in\mathcal{C}_w}\ \max_{\mathbf{u}\in\mathcal{C}_t}\ \frac{1}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{G}\, \mathbf{P}_{\boldsymbol{\xi}_t}^{\perp} \mathbf{w} + \frac{1}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{s}_t\, \boldsymbol{\xi}_t^\top \mathbf{w} + \frac{\lambda}{2}\|\mathbf{w}\|^2 - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) + \frac{1}{2}\big\|\boldsymbol{\Sigma}(\mathbf{w} - \widehat{\mathbf{w}}_s)\big\|^2, \tag{37}$$
where the set $\mathcal{C}_t$ is defined as $\mathcal{C}_t = \{\mathbf{u} : \|\mathbf{u}\|/\sqrt{n_t} \le C_u\}$. Note that the formulation in (37) is in the form of the primary formulation given in (29). Here, the function $\psi(\cdot,\cdot)$ is defined as follows:
$$\psi(\mathbf{w}, \mathbf{u}) = \frac{1}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{s}_t\, \boldsymbol{\xi}_t^\top \mathbf{w} + \frac{\lambda}{2}\|\mathbf{w}\|^2 - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) + \frac{1}{2}\big\|\boldsymbol{\Sigma}(\mathbf{w} - \widehat{\mathbf{w}}_s)\big\|^2.$$
One can easily see that the optimization problem in (37) has compact convex feasibility sets. Moreover, the function ψ ( . , . ) is continuous, convex–concave, and independent of the Gaussian matrix G . This shows that the assumptions of the CGMT are all satisfied by the primary formulation in (37). Then, following the CGMT framework, the auxiliary formulation corresponding to our primary problem in (37) can be expressed as follows:
$$\min_{\mathbf{w}\in\mathcal{C}_w}\ \max_{\mathbf{u}\in\mathcal{C}_t}\ \frac{\|\mathbf{u}\|}{\sqrt{p}}\, \mathbf{g}^\top \mathbf{P}_{\boldsymbol{\xi}_t}^{\perp} \mathbf{w} + \frac{1}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{s}_t\, \boldsymbol{\xi}_t^\top \mathbf{w} + \frac{\mathbf{h}^\top \mathbf{u}}{\sqrt{p}}\, \big\|\mathbf{P}_{\boldsymbol{\xi}_t}^{\perp} \mathbf{w}\big\| + \frac{\lambda}{2}\|\mathbf{w}\|^2 - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) + \frac{1}{2}\big\|\boldsymbol{\Sigma}(\mathbf{w} - \widehat{\mathbf{w}}_s)\big\|^2, \tag{39}$$
where g R p and h R n t are two independent standard Gaussian vectors. The rest of the proof focuses on simplifying the obtained AO formulation and on studying its asymptotic properties.

7.3.2. Simplifying the AO Problem of the Target Task

Here, we focus on simplifying the auxiliary formulation corresponding to the target task. We start our analysis by decomposing the target optimization vector w R p as follows:
$$\mathbf{w} = \big(\boldsymbol{\xi}_t^\top \mathbf{w}\big)\, \boldsymbol{\xi}_t + \mathbf{B}_{\boldsymbol{\xi}_t} \mathbf{r}_t, \tag{40}$$
where $\mathbf{r}_t \in \mathbb{R}^{p-1}$ is a free vector and $\mathbf{B}_{\boldsymbol{\xi}_t} \in \mathbb{R}^{p\times(p-1)}$ is formed by an orthonormal basis orthogonal to the vector $\boldsymbol{\xi}_t$. Now, define the variable $q_t$ as $q_t = \boldsymbol{\xi}_t^\top \mathbf{w}$. Based on the result in Lemma 1 and the decomposition in (40), there exist $C_{q_t} > 0$, $C_r > 0$, and $C_u > 0$ such that our auxiliary formulation can be asymptotically expressed in terms of the variables $q_t$ and $\mathbf{r}_t$ as follows:
$$\begin{aligned} \min_{(q_t, \mathbf{r}_t)\in\mathcal{T}_1}\ \max_{\mathbf{u}\in\mathcal{C}_t}\ & \frac{\|\mathbf{u}\|}{\sqrt{p}}\, \mathbf{g}^\top \mathbf{B}_{\boldsymbol{\xi}_t} \mathbf{r}_t + \frac{\|\mathbf{r}_t\|}{\sqrt{p}}\, \mathbf{h}^\top \mathbf{u} + \frac{q_t}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{s}_t + \frac{\lambda}{2} q_t^2 + \frac{\lambda}{2}\|\mathbf{r}_t\|^2 - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) \\ & + \frac{1}{2} q_t^2\, V_{p,t} - q_t\, V_{p,t}^{s} + \frac{1}{2}\, \mathbf{r}_t^\top (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t} \mathbf{r}_t + q_t\, \boldsymbol{\xi}_t^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t} \mathbf{r}_t - \mathbf{r}_t^\top (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s. \end{aligned}$$
Here, we drop terms independent of the optimization variables, and the matrix $\boldsymbol{\Lambda} \in \mathbb{R}^{p\times p}$ is defined as $\boldsymbol{\Lambda} = \boldsymbol{\Sigma}^\top \boldsymbol{\Sigma}$. Additionally, the feasibility set $\mathcal{T}_1$ is defined as follows:
$$\mathcal{T}_1 = \big\{(q_t, \mathbf{r}_t) : |q_t| \le C_{q_t},\ \|\mathbf{r}_t\| \le C_r\big\}.$$
Here, the sequences of random variables $V_{p,t}$ and $V_{p,t}^{s}$ are defined as follows:
$$V_{p,t} = \boldsymbol{\xi}_t^\top \boldsymbol{\Lambda}\, \boldsymbol{\xi}_t, \qquad V_{p,t}^{s} = \boldsymbol{\xi}_t^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s.$$
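The decomposition (40) and the definitions of $q_t$ and $\mathbf{r}_t$ can be verified numerically; one convenient way to build $\mathbf{B}_{\boldsymbol{\xi}_t}$ is a QR factorization (an illustrative sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 6
xi = rng.standard_normal(p)
xi /= np.linalg.norm(xi)                 # unit teacher direction

# Orthonormal basis of the complement of xi from a QR factorization
M = np.column_stack([xi, rng.standard_normal((p, p - 1))])
Q, _ = np.linalg.qr(M)
B = Q[:, 1:]                             # p x (p-1), columns orthogonal to xi

w = rng.standard_normal(p)
q = xi @ w                               # overlap  q_t = xi^T w
r = B.T @ w                              # free vector r_t in R^(p-1)

w_rebuilt = q * xi + B @ r               # decomposition  w = (xi^T w) xi + B r
print(np.allclose(w, w_rebuilt))         # True
```

Because $\{\boldsymbol{\xi}_t\} \cup \mathrm{cols}(\mathbf{B}_{\boldsymbol{\xi}_t})$ is an orthonormal basis of $\mathbb{R}^p$, the pair $(q_t, \mathbf{r}_t)$ carries exactly the same information as $\mathbf{w}$.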
Next, we focus on simplifying the obtained auxiliary formulation. Our strategy is to solve over the direction of the optimization vector $\mathbf{r}_t \in \mathbb{R}^{p-1}$. This step requires an interchange between non-convex minimization and non-concave maximization, which we justify using the theoretical result in [17] (Lemma A.3). The main argument in [17] (Lemma A.3) is that the strong convexity of the primary formulation in (37) allows us to perform such an interchange in the corresponding auxiliary formulation. The optimization problem over the vector $\mathbf{r}_t$ with fixed norm, i.e., $\|\mathbf{r}_t\| = r_t$, can be formulated as follows:
$$C_p = \min_{\mathbf{r}_t \in \mathbb{R}^{p-1}}\ \mathbf{b}_p^\top \mathbf{r}_t + \frac{1}{2}\, \mathbf{r}_t^\top \boldsymbol{\Lambda}\, \mathbf{r}_t, \quad \text{s.t.}\ \|\mathbf{r}_t\| = r_t. \tag{43}$$
Here, we ignore constant terms independent of $\mathbf{r}_t$; the matrix $\boldsymbol{\Lambda} \in \mathbb{R}^{(p-1)\times(p-1)}$ (denoting, with a slight abuse of notation, the restriction of $\boldsymbol{\Lambda}$ to the subspace spanned by $\mathbf{B}_{\boldsymbol{\xi}_t}$) and the vector $\mathbf{b}_p \in \mathbb{R}^{p-1}$ can be expressed as follows:
$$\boldsymbol{\Lambda} = (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t}, \qquad \mathbf{b}_p = \frac{\|\mathbf{u}\|}{\sqrt{p}}\, (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \mathbf{g} + q_t\, (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \boldsymbol{\xi}_t - (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s.$$
The optimization problem in (43) is non-convex given the norm equality constraint. It is well-studied in the literature [28] and is known as the trust region subproblem. Using the same analysis as in [20], the optimal cost value of the optimization problem (43) can be expressed in terms of a one-dimensional optimization problem as follows:
$$C_p = \sup_{\sigma_t > -\mu_p}\ -\frac{1}{2}\, \mathbf{b}_p^\top \big[\boldsymbol{\Lambda} + \sigma_t \mathbf{I}_{p-1}\big]^{-1} \mathbf{b}_p - \frac{\sigma_t r_t^2}{2},$$
where $\mu_p$ is the minimum eigenvalue of the matrix $\boldsymbol{\Lambda}$, i.e., $\mu_p = \sigma_{\min}(\boldsymbol{\Lambda})$. This result can be seen by equivalently formulating the non-convex problem in (43) as follows:
$$C_p = \min_{\mathbf{r}_t \in \mathbb{R}^{p-1}}\ \max_{\sigma_t \in \mathbb{R}}\ \mathbf{b}_p^\top \mathbf{r}_t + \frac{1}{2}\, \mathbf{r}_t^\top \boldsymbol{\Lambda}\, \mathbf{r}_t + \frac{\sigma_t}{2}\big(\|\mathbf{r}_t\|^2 - r_t^2\big).$$
Then, we show that the optimal σ t satisfies a constraint that preserves the convexity over r t . This allows us to interchange the maximization and minimization and to solve over the vector r t . The above analysis shows that the AO formulation corresponding to our primary problem can be expressed as follows:
$$\begin{aligned} \min_{(q_t, r_t)\in\mathcal{T}_2}\ \max_{\mathbf{u}\in\mathcal{C}_t}\ \sup_{\sigma_t > -\mu_p}\ & \frac{r_t}{\sqrt{p}}\, \mathbf{h}^\top \mathbf{u} + \frac{q_t}{\sqrt{p}}\, \mathbf{u}^\top \mathbf{s}_t + \frac{\lambda}{2} q_t^2 + \frac{\lambda}{2} r_t^2 - \frac{1}{p}\sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) + \frac{1}{2} q_t^2\, V_{p,t} - q_t\, V_{p,t}^{s} \\ & - \frac{\|\mathbf{u}\|^2}{2p}\, T_{p,g}(\sigma_t) - \frac{\sigma_t r_t^2}{2} - \frac{1}{2} q_t^2\, T_{p,t}(\sigma_t) - \frac{1}{2} T_{p,s}(\sigma_t) + q_t\, T_{p,ts}(\sigma_t), \end{aligned} \tag{45}$$
where the set $\mathcal{T}_2$ has the same definition as the set $\mathcal{T}_1$ except that the vector $\mathbf{r}_t$ is replaced by its norm $r_t$. Here, the sequences of random functions $T_{p,g}(\cdot)$, $T_{p,t}(\cdot)$, $T_{p,s}(\cdot)$, and $T_{p,ts}(\cdot)$ can be expressed as follows:
$$\begin{aligned} T_{p,g}(\sigma_t) &= \frac{1}{p}\, \mathbf{g}^\top \mathbf{B}_{\boldsymbol{\xi}_t} \big[\boldsymbol{\Lambda} + \sigma_t \mathbf{I}_{p-1}\big]^{-1} (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \mathbf{g}, \\ T_{p,t}(\sigma_t) &= \boldsymbol{\xi}_t^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t} \big[\boldsymbol{\Lambda} + \sigma_t \mathbf{I}_{p-1}\big]^{-1} (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \boldsymbol{\xi}_t, \\ T_{p,s}(\sigma_t) &= \widehat{\mathbf{w}}_s^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t} \big[\boldsymbol{\Lambda} + \sigma_t \mathbf{I}_{p-1}\big]^{-1} (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s, \\ T_{p,ts}(\sigma_t) &= \boldsymbol{\xi}_t^\top \boldsymbol{\Lambda}\, \mathbf{B}_{\boldsymbol{\xi}_t} \big[\boldsymbol{\Lambda} + \sigma_t \mathbf{I}_{p-1}\big]^{-1} (\mathbf{B}_{\boldsymbol{\xi}_t})^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s. \end{aligned}$$
Note that the formulation in (45) is obtained after dropping terms that converge in probability to zero. This simplification can be justified using a similar analysis to that in [20] (Lemma 3). The main idea in [20] (Lemma 3) is to show that both loss functions converge uniformly to the same limit.
Next, the objective is to simplify the obtained AO formulation over the optimization vector u R n t . Based on the property stated in [20] (Lemma 4), the optimization over the vector u can be expressed as follows:
$$\begin{aligned} I_p &= \max_{\mathbf{u}\in\mathcal{C}_t}\ r_t\, \mathbf{h}^\top \mathbf{u} + q_t\, \mathbf{u}^\top \mathbf{s}_t - \sum_{i=1}^{n_t} \ell^{\ast}(y_i; u_i) - \frac{\|\mathbf{u}\|^2}{2}\, T_{p,g}(\sigma_t) \\ &= \sum_{i=1}^{n_t} \mathcal{M}_{\ell(y_i,\cdot)}\big(r_t h_i + q_t s_{t,i};\ T_{p,g}(\sigma_t)\big). \end{aligned}$$
This result is valid on events with probability going to one as p goes to + . Here, the function M ( y i , . ) is the Moreau envelope function defined in (15). The proof of this property is omitted since it follows the same ideas as [20] (Lemma 4). The main idea in [20] (Lemma 4) is to use Assumption 3 to show that the optimal solution of the unconstrained version of the maximization problem is bounded asymptotically and then to use the property introduced in [25] (Example 11.26) to complete the proof. Now, our auxiliary formulation can be asymptotically simplified to a scalar optimization problem as follows:
$$\begin{aligned} \min_{(q_t, r_t)\in\mathcal{T}_2}\ \sup_{\sigma_t > -\mu_p}\ & \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) - \frac{\sigma_t r_t^2}{2} - \frac{1}{2} q_t^2\, Z_{p,t}(\sigma_t) - \frac{1}{2} Z_{p,s}(\sigma_t) \\ & + \frac{1}{p}\sum_{i=1}^{n_t} \mathcal{M}_{\ell(y_i,\cdot)}\big(r_t h_i + q_t s_{t,i};\ T_{p,g}(\sigma_t)\big) + q_t\, Z_{p,ts}(\sigma_t), \end{aligned} \tag{46}$$
where the functions $Z_{p,t}(\cdot)$, $Z_{p,ts}(\cdot)$, and $Z_{p,s}(\cdot)$ are defined as follows:
$$Z_{p,t}(\sigma_t) = T_{p,t}(\sigma_t) - V_{p,t}, \qquad Z_{p,ts}(\sigma_t) = T_{p,ts}(\sigma_t) - V_{p,t}^{s},$$
$$Z_{p,s}(\sigma_t) = T_{p,s}(\sigma_t) - V_{p,s}, \quad \text{where } V_{p,s} = \widehat{\mathbf{w}}_s^\top \boldsymbol{\Lambda}\, \widehat{\mathbf{w}}_s.$$
Note that the auxiliary formulation in (46) now has scalar optimization variables. Then, it remains to study its asymptotic properties. We refer to this problem as the target scalar formulation.

7.3.3. Asymptotic Analysis of the Target Scalar Formulation

In this part, we study the asymptotic properties of the target scalar formulation expressed in (46). We start our analysis by studying the asymptotic properties of the sequence of random functions T p , g ( . ) , Z p , t ( . ) , Z p , s ( . ) , and Z p , t s ( . ) as given in the following lemma.
Lemma 2
(Asymptotic Properties). First, the random variable $\mu_p$ converges in probability to $\mu_{\min}$, where $\mu_{\min}$ is defined in Assumption 5. For any fixed $\sigma > 0$, the following convergence in probability holds true:
$$\begin{aligned} Z_{p,t}(\sigma - \mu_p) &\overset{\mathrm{P}}{\longrightarrow} Z_{t}(\sigma - \mu_{\min}), & Z_{p,ts}(\sigma - \mu_p) &\overset{\mathrm{P}}{\longrightarrow} Z_{ts}(\sigma - \mu_{\min}), \\ Z_{p,s}(\sigma - \mu_p) &\overset{\mathrm{P}}{\longrightarrow} Z_{s}(\sigma - \mu_{\min}), & T_{p,g}(\sigma - \mu_p) &\overset{\mathrm{P}}{\longrightarrow} T_{g}(\sigma - \mu_{\min}) = T_1(\sigma - \mu_{\min}). \end{aligned}$$
Here, the deterministic functions $Z_t(\cdot)$, $Z_{ts}(\cdot)$, $Z_s(\cdot)$, $T_1(\cdot)$, and $T_3(\cdot)$ are defined as follows:
$$\begin{aligned} & Z_t(\sigma) = \sigma - 1/T_1(\sigma), \qquad Z_{ts}(\sigma) = \rho\, q_s^{\ast}\, Z_t(\sigma), \\ & Z_s(\sigma) = \big((1-\rho^2)(q_s^{\ast})^2 + (r_s^{\ast})^2\big)\, T_3(\sigma) + (\rho\, q_s^{\ast})^2\, Z_t(\sigma), \\ & T_1(\sigma) = \mathbb{E}_{\mu}\big[1/(\mu + \sigma)\big], \qquad T_3(\sigma) = -\,\mathbb{E}_{\mu}\big[\mu\sigma/(\mu + \sigma)\big]. \end{aligned}$$
Moreover, the constants q s and r s are optimal solutions of the source asymptotic formulation defined in (32).
A detailed proof of Lemma 2 is provided in Appendix C. Now that we obtained the asymptotic properties of the sequence of random variables, it remains to study the asymptotic properties of the optimal cost and optimal solution set of the scalar formulation in (46). To state our first asymptotic result, we define the following deterministic optimization problem:
$$\begin{aligned} \min_{(q_t, r_t)\in\mathcal{T}_2}\ \sup_{\sigma_t > -\mu_{\min}}\ & \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) - \frac{\sigma_t r_t^2}{2} - \frac{1}{2} Z_s(\sigma_t) - \frac{1}{2} q_t^2\, Z_t(\sigma_t) \\ & + \alpha_t\, \mathbb{E}\big[\mathcal{M}_{\ell(Y_t,\cdot)}\big(r_t H_t + q_t S_t;\ T_g(\sigma_t)\big)\big] + q_t\, Z_{ts}(\sigma_t), \end{aligned} \tag{49}$$
where $H_t$ and $S_t$ are two independent standard Gaussian random variables and $Y_t = \varphi(S_t)$. Here, the function $\mathcal{M}_{\ell(Y_t,\cdot)}(\cdot;\cdot)$ denotes the Moreau envelope function defined in (15), and the expectation is taken over the random variables $H_t$ and $S_t$ and the possibly random function $\varphi(\cdot)$. Now, we are ready to state our asymptotic property of the cost function of (46).
Lemma 3
(Cost Function of the Target AO Formulation). Define $\mathcal{O}_{p,t}(\cdot)$ as the loss function of the target scalar optimization problem given in (46). Additionally, define $\mathcal{O}_{t}(\cdot)$ as the cost function of the deterministic formulation in (49). Then, the following convergence in probability holds true:
$$\mathcal{O}_{p,t}\big(q_t, r_t, \sigma_t - \mu_p\big) \overset{\mathrm{P}}{\longrightarrow} \mathcal{O}_{t}\big(q_t, r_t, \sigma_t - \mu_{\min}\big),$$
for any fixed feasible q t , r t , and σ t > 0 .
The proof of the asymptotic property stated in Lemma 3 uses the asymptotic results stated in Lemma 2. Moreover, it uses the weak law of large numbers to show that the empirical mean of the Moreau envelope concentrates around its expected value. Based on Assumption 3, one can see that the following pointwise convergence is valid:
$$\frac{1}{n_t}\sum_{i=1}^{n_t} \mathcal{M}_{\ell(y_i,\cdot)}\big(r_t h_i + q_t s_{t,i};\ x\big) \overset{\mathrm{P}}{\longrightarrow} \mathbb{E}\big[\mathcal{M}_{\ell(Y,\cdot)}\big(r_t H + q_t S;\ x\big)\big],$$
where $H$ and $S$ are independent standard Gaussian random variables and $Y = \varphi(S)$. The above property is valid for any $x > 0$, $r_t \ge 0$, and $q_t$. Based on [25] (Theorem 2.26), the Moreau envelope function is convex and continuously differentiable with respect to $x > 0$. Combining this with [29] (Theorem 7.46), the above asymptotic function is continuous in $x > 0$. Then, using Lemma 2, the uniform convergence, and the continuity property, we conclude that the empirical average of the Moreau envelope converges in probability to the following function:
$$\mathbb{E}\big[\mathcal{M}_{\ell(Y,\cdot)}\big(r_t H + q_t S;\ T_g(\sigma_t - \mu_{\min})\big)\big],$$
for any fixed feasible q t , r t , and σ t > 0 . This completes the proof of Lemma 3.
Before continuing our analysis, we provide the convexity properties of the cost function of the deterministic problem in (49) in the following lemma.
Lemma 4
(Strong Convexity). Define $\mathcal{O}_t(\cdot,\cdot,\cdot)$ as the cost function of the optimization problem in (49). Then, $\mathcal{O}_t(\cdot,\cdot,\cdot)$ is concave in the maximization variable $\sigma_t$ for any fixed feasible $(q_t, r_t)$. Moreover, define the function $\mathcal{O}_t(\cdot,\cdot)$ as follows:
$$\mathcal{O}_t(q_t, r_t) = \sup_{\sigma_t > -\mu_{\min}} \mathcal{O}_t(q_t, r_t, \sigma_t).$$
Then, the function O t ( · , · ) is strongly convex in the minimization variables ( q t , r t ) .
The proof of Lemma 4 is provided in Appendix D. Now, we use these properties to show that the optimal solution set of the formulation in (46) converges in probability to the optimal solution set of the formulation in (49).
Lemma 5
(Consistency of the Target AO Formulation). Define $\mathcal{P}_{p,t}$ and $\mathcal{P}_{t}$ as the optimal sets of $(q_t, r_t)$ of the optimization problems formulated in (46) and (49), respectively. Moreover, define $\mathcal{O}_{p,t}^{\ast}$ and $\mathcal{O}_{t}^{\ast}$ as the optimal cost values of the optimization problems formulated in (46) and (49). Then, the following convergence in probability holds true:
$$\mathcal{O}_{p,t}^{\ast} \overset{\mathrm{P}}{\longrightarrow} \mathcal{O}_{t}^{\ast}, \qquad \mathbb{D}\big(\mathcal{P}_{p,t}, \mathcal{P}_{t}\big) \overset{\mathrm{P}}{\longrightarrow} 0,$$
where $\mathbb{D}(\mathcal{A}, \mathcal{B})$ denotes the deviation between the sets $\mathcal{A}$ and $\mathcal{B}$ and is defined as $\mathbb{D}(\mathcal{A}, \mathcal{B}) = \sup_{\mathbf{c}_1 \in \mathcal{A}} \inf_{\mathbf{c}_2 \in \mathcal{B}} \|\mathbf{c}_1 - \mathbf{c}_2\|$.
The stated result can be proven by first observing that the loss function O t ( . ) corresponding to the deterministic formulation in (49) satisfies the following:
$$\lim_{\sigma_t \to +\infty} \mathcal{O}_t\big(q_t, r_t, \sigma_t - \mu_{\min}\big) = -\infty$$
for any $r_t > 0$ and any fixed $q_t$. Combining this with the convergence result in Lemma 3 and the results in [17] (Lemma B.1) and [17] (Lemma B.2), we obtain the following asymptotic result:
$$\sup_{\sigma_t > 0}\ \mathcal{O}_{p,t}\big(q_t, r_t, \sigma_t - \mu_p\big) \overset{\mathrm{P}}{\longrightarrow} \sup_{\sigma_t > 0}\ \mathcal{O}_{t}\big(q_t, r_t, \sigma_t - \mu_{\min}\big).$$
Here, the results in [17] (Lemma B.1) and [17] (Lemma B.2) provide convergence properties of minimization problems over open sets. Note that, if r t = 0 , the supremum in the above convergence result occurs at σ t + . However, it can be checked that the above convergence result still holds. Based on Lemma 4, the cost function of the minimization problem in (49) is strongly convex in ( q t , r t ) . Moreover, the feasibility set of the minimization problem is convex and compact. Additionally, the cost function of the minimization problem in (49) is continuous in the feasibility set. Then, using the results in [30] (Theorem II.1) and [31] (Theorem 2.1), we obtain the convergence properties stated in Lemma 5. Here, the results in [30] (Theorem II.1) and [31] (Theorem 2.1) provide uniform convergence and consistency properties of convex optimization problems.
Now that we obtained the asymptotic problem, it remains to study the asymptotic properties of the training and generalization errors corresponding to the target formulation in (8).

7.3.4. Specialization to Hard Formulation

Before starting the analysis of the generalization error, we specialize our general analysis to the hard transfer formulation. First, note that δ = 1 implies that the hard transfer formulation is equivalent to the source formulation. Next, we assume that δ < 1 . To obtain the asymptotic limit of the hard formulation, we specialize the general results in (49) to the following probability distribution:
$$P_{\mu}:\quad \mu = \begin{cases} 0 & \text{with probability } 1-\delta, \\ +\infty & \text{with probability } \delta. \end{cases} \tag{55}$$
Note that the probability distribution in (55) satisfies Assumption 5. Then, the asymptotic limit of the soft formulation corresponding to the probability distribution $P_\mu(\cdot)$ defined in (55) can be expressed as follows:
$$\begin{aligned} \min_{(q_t, r_t)\in\mathcal{T}_2}\ \sup_{\sigma_t > 0}\ & \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) + \frac{\sigma_t \delta}{2}\big[(1-\rho^2)(q_s^{\ast})^2 + (r_s^{\ast})^2\big] + \alpha_t\, \mathbb{E}\Big[\mathcal{M}_{\ell(Y_t,\cdot)}\Big(r_t H_t + q_t S_t;\ \frac{1-\delta}{\sigma_t}\Big)\Big] \\ & - \frac{\sigma_t r_t^2}{2} + \frac{\sigma_t \delta}{2(1-\delta)}\big(q_t - \rho\, q_s^{\ast}\big)^2. \end{aligned} \tag{56}$$
This shows that the asymptotic limit of the hard formulation is the deterministic problem (56).

7.3.5. Asymptotic Analysis of the Training and Generalization Errors

First, the generalization error corresponding to the target task is given by
$$\mathcal{E}_{\mathrm{test}} = \frac{1}{4^{\upsilon}}\, \mathbb{E}\Big[\big(\varphi(\mathbf{a}_{t,\mathrm{new}}^\top \boldsymbol{\xi}_t) - \widehat{\varphi}(\widehat{\mathbf{w}}_t^\top \mathbf{a}_{t,\mathrm{new}})\big)^2\Big], \tag{57}$$
where $\mathbf{a}_{t,\mathrm{new}}$ is an unseen target feature vector. Now, consider the following two random variables:
$$\nu_1 = \mathbf{a}_{t,\mathrm{new}}^\top \boldsymbol{\xi}_t, \qquad \nu_2 = \widehat{\mathbf{w}}_t^\top \mathbf{a}_{t,\mathrm{new}}.$$
Given $\widehat{\mathbf{w}}_t$ and $\boldsymbol{\xi}_t$, the random variables $\nu_1$ and $\nu_2$ have a bivariate Gaussian distribution with zero mean vector and covariance matrix given as follows:
$$\mathbf{C}_p = \begin{bmatrix} \|\boldsymbol{\xi}_t\|^2 & \boldsymbol{\xi}_t^\top \widehat{\mathbf{w}}_t \\ \boldsymbol{\xi}_t^\top \widehat{\mathbf{w}}_t & \|\widehat{\mathbf{w}}_t\|^2 \end{bmatrix}. \tag{58}$$
To precisely analyze the asymptotic behavior of the generalization error, it suffices to analyze the properties of the covariance matrix C p . Define the random variables q ^ p , t and r ^ p , t for the target task as follows:
$$\widehat{q}_{p,t} = \boldsymbol{\xi}_t^\top \widehat{\mathbf{w}}_t, \qquad \widehat{r}_{p,t} = \big\|(\mathbf{B}_{\boldsymbol{\xi}_t})^\top \widehat{\mathbf{w}}_t\big\|,$$
where $\mathbf{B}_{\boldsymbol{\xi}_t}$ is defined in Section 7.3.2. Then, the covariance matrix $\mathbf{C}_p$ given in (58) can be expressed as follows:
$$\mathbf{C}_p = \begin{bmatrix} 1 & \widehat{q}_{p,t} \\ \widehat{q}_{p,t} & (\widehat{q}_{p,t})^2 + (\widehat{r}_{p,t})^2 \end{bmatrix}.$$
Hence, to study the asymptotic properties of the generalization error, it suffices to study the asymptotic properties of the random quantities q ^ p , t and r ^ p , t .
Lemma 6
(Consistency of the Target Formulation). The random quantities q ^ p , t and r ^ p , t satisfy the following asymptotic properties:
$$\widehat{q}_{p,t} \overset{\mathrm{P}}{\longrightarrow} q_t^{\ast}, \qquad \widehat{r}_{p,t} \overset{\mathrm{P}}{\longrightarrow} r_t^{\ast},$$
where q t and r t are the optimal solutions of the deterministic formulation stated in (49).
To prove the above asymptotic result, we define $\widetilde{q}_{p,t}$ and $\widetilde{r}_{p,t}$ as follows:
$$\widetilde{q}_{p,t} = \boldsymbol{\xi}_t^\top \widetilde{\mathbf{w}}_t, \qquad \widetilde{r}_{p,t} = \big\|(\mathbf{B}_{\boldsymbol{\xi}_t})^\top \widetilde{\mathbf{w}}_t\big\|,$$
where $\widetilde{\mathbf{w}}_t$ is the optimal solution of the auxiliary formulation in (39). Given the result in Lemma 5 and the analysis in Section 7.3.2 and Section 7.3.3, the convergence result in Lemma 5 is also satisfied by our auxiliary formulation in (39), i.e.,
$$\widetilde{q}_{p,t} \overset{\mathrm{P}}{\longrightarrow} q_t^{\ast}, \qquad \widetilde{r}_{p,t} \overset{\mathrm{P}}{\longrightarrow} r_t^{\ast}.$$
The rest of the proof of the convergence result stated in Lemma 6 is based on the CGMT framework, i.e., Theorem 2. Specifically, it follows after showing that the assumptions in Theorem 2 are all satisfied. First, we define the set S p in Theorem 2 as follows:
$$\mathcal{S}_p = \Big\{\mathbf{w}\in\mathbb{R}^p : \big|\boldsymbol{\xi}_t^\top \mathbf{w} - q_t^{\ast}\big| < \epsilon\Big\} \cap \Big\{\mathbf{w}\in\mathbb{R}^p : \Big|\big\|(\mathbf{B}_{\boldsymbol{\xi}_t})^\top \mathbf{w}\big\| - r_t^{\ast}\Big| < \epsilon\Big\},$$
where $q_t^{\ast}$ and $r_t^{\ast}$ are the optimal solutions of the deterministic formulation stated in (49). Note that the cost function of the problem (49) is strongly convex in the minimization variables. Based on the analysis in the previous sections, the feasibility sets of the problems defined in Theorem 2 are asymptotically compact. Moreover, the same analysis shows that there exists a constant $\bar{\phi}$ such that the optimal cost $\phi_p$ defined in Theorem 2 converges in probability to $\bar{\phi}$ as $p$ goes to $+\infty$, and that there exists a constant $\bar{\phi}_c$ such that the optimal cost $\phi_p^c$ defined in Theorem 2 converges in probability to $\bar{\phi}_c$ as $p$ goes to $+\infty$. The strong convexity of the cost function of the optimization problem in (49) can then be used to show that there exists $\zeta > 0$ such that $\bar{\phi}_c > \bar{\phi} + \zeta$. This implies that the second assumption in Theorem 2 is satisfied for the considered set $\mathcal{S}_p$ and any fixed $\epsilon > 0$, which shows that the convergence results in Lemma 6 are all satisfied.
Note that the CGMT framework applied to prove Lemma 6 also shows that the optimal cost value of the soft target formulation in (8) converges in probability to the optimal cost value of the deterministic formulation given in (49). Combining this with the result in Lemma 6 shows the convergence property of the training error stated in (17). Now, it remains to show the convergence of the generalization error. It suffices to show that the generalization error defined in (57) is continuous in the quantities q ^ p , t and r ^ p , t . This follows based on Assumption 4 and the continuity under integral sign property [32]. This shows the convergence result in (18), which completes the proof of Theorem 1. Note that the above analysis of the soft target formulation in (8) is valid for any choice of C q t and C r that satisfy the result in Lemma 1. One can ignore these bounds given the convexity properties of the deterministic formulation in (49). This leads to the scalar formulations introduced in (16) and (19).

7.4. Phase Transitions in Hard Formulation

In this part, we provide a rigorous proof of Proposition 1. Here, we consider the squared loss function. In this case, the deterministic source formulation given in (14) can be simplified as follows:
$$\min_{q_s, r_s \ge 0}\ \frac{1}{2}\max\Big(-r_s + \sqrt{\alpha_s}\,\big(q_s^2 + r_s^2 + v_s - 2 q_s c_s\big)^{\frac{1}{2}},\ 0\Big)^2 + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big), \tag{62}$$
where the constants $v_s$ and $c_s$ are defined as $v_s = \mathbb{E}[Y_s^2]$ and $c_s = \mathbb{E}[S_s Y_s]$, with $Y_s = \varphi(S_s)$ and $S_s$ a standard Gaussian random variable. Additionally, the target scalar formulation given in (16) can be simplified as follows:
$$\begin{aligned} \min_{q_t, r_t \ge 0}\ \sup_{\sigma_t > 0}\ & \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) + \frac{\sigma_t \delta}{2}\big[(1-\rho^2)(q_s^{\ast})^2 + (r_s^{\ast})^2\big] + \frac{\alpha_t \sigma_t}{2(1-\delta) + 2\sigma_t}\big(r_t^2 + q_t^2 + v_t - 2 q_t c_t\big) \\ & - \frac{\sigma_t r_t^2}{2} + \frac{\sigma_t \delta}{2(1-\delta)}\big(q_t - \rho\, q_s^{\ast}\big)^2, \end{aligned} \tag{63}$$
where the constants $v_t$ and $c_t$ are defined as $v_t = \mathbb{E}[Y_t^2]$ and $c_t = \mathbb{E}[Y_t S_t]$, with $Y_t = \varphi(S_t)$ and $S_t$ a standard Gaussian random variable. Under the conditions stated in Proposition 1, the source deterministic formulation given in (62) can be simplified as follows:
$$\min_{q_s, r_s \ge 0}\ -r_s + \sqrt{\alpha_s}\,\big(q_s^2 + r_s^2 + v_s - 2 q_s c_s\big)^{\frac{1}{2}}. \tag{64}$$
Note that one can easily solve for the variables $q_s$ and $r_s$. Specifically, the optimal solutions of (64) can be expressed as follows:
$$q_s^{\ast} = c_s, \qquad r_s^{\ast} = \sqrt{\frac{v_s - c_s^2}{\alpha_s - 1}}.$$
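These closed forms are straightforward to confirm by minimizing (64) on a grid; the constants below ($v_s = c_s = 1/2$, the ReLU-link values, and $\alpha_s = 2$) are illustrative choices:

```python
import numpy as np

v_s, c_s, alpha_s = 0.5, 0.5, 2.0        # illustrative constants

def cost(q, r):
    # objective of (64): -r + sqrt(alpha_s) (q^2 + r^2 + v_s - 2 q c_s)^{1/2}
    return -r + np.sqrt(alpha_s) * np.sqrt(q ** 2 + r ** 2 + v_s - 2.0 * q * c_s)

qs = np.linspace(0.0, 1.0, 401)
rs = np.linspace(0.0, 1.5, 601)
Q, R = np.meshgrid(qs, rs, indexing="ij")
i, j = np.unravel_index(np.argmin(cost(Q, R)), Q.shape)

q_num, r_num = qs[i], rs[j]
q_th = c_s                                          # q_s* = c_s
r_th = np.sqrt((v_s - c_s ** 2) / (alpha_s - 1.0))  # r_s* closed form
print(q_num, r_num, q_th, r_th)                     # grid minimizer sits near (0.5, 0.5)
```

The brute-force grid minimizer lands on the predicted pair $(c_s, \sqrt{(v_s - c_s^2)/(\alpha_s - 1)})$ up to the grid resolution.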
Moreover, the target deterministic formulation given in (63) can be expressed as follows:
$$\min_{q_t, r_t \ge 0}\ \sup_{\sigma_t > 0}\ \frac{\sigma_t \delta}{2}\beta_2 + \frac{\alpha_t \sigma_t}{2(1-\delta) + 2\sigma_t}\big(r_t^2 + q_t^2 + v_t - 2 q_t c_t\big) - \frac{\sigma_t r_t^2}{2} + \frac{\sigma_t \delta}{2(1-\delta)}\big(q_t - \beta_1\big)^2, \tag{66}$$
where $\beta_1$ and $\beta_2$ are given by
$$\beta_1 = \rho\, q_s^{\ast}, \qquad \beta_2 = (1-\rho^2)(q_s^{\ast})^2 + (r_s^{\ast})^2.$$
Before solving the optimization problem in (66), we consider the following change in variable:
x t 2 + r t 2 δ β 2 δ 1 δ ( q t β 1 ) 2 .
Note that the above change of variables is valid since the formulation in (66) forces the right-hand side of (68) to be nonnegative. Therefore, the formulation in (66) can be expressed in terms of $x_t$ instead of $r_t$ as follows:
$$\min_{q_t,\, x_t \ge 0}\ \sup_{\sigma_t > 0}\ \frac{\alpha_t\, \sigma_t}{2(1-\delta) + 2\sigma_t}\Big[x_t^2 + \delta\beta_2 + \frac{\delta}{1-\delta}\big(q_t - \beta_1\big)^2 + q_t^2 + v_t - 2 q_t c_t\Big] - \frac{\sigma_t x_t^2}{2}.$$
Now, it can be easily checked that the above optimization problem can be solved over the variable $\sigma_t$, which gives the following formulation:
$$\min_{q_t,\, x_t \ge 0}\ \frac{1}{2}\max\Big\{ -x_t\sqrt{1-\delta} + \sqrt{\alpha_t}\,\Big(x_t^2 + \delta\beta_2 + \frac{\delta}{1-\delta}\big(q_t - \beta_1\big)^2 + q_t^2 + v_t - 2 q_t c_t\Big)^{\frac{1}{2}},\ 0 \Big\}^2.$$
It is now clear that one can solve the problem in (69) in closed form. Moreover, it can be easily checked that the optimal solutions of the optimization problem (66) can be expressed as follows:
$$q_t^\star = (1-\delta)\, c_t + \delta\beta_1, \qquad \big(r_t^\star\big)^2 = \frac{1-\delta}{\alpha_t + \delta - 1}\Big[(\delta - 1)c_t^2 + \delta\beta_1^2 + \delta\beta_2 + v_t - 2\delta\beta_1 c_t\Big] + \delta\beta_2 + \delta(1-\delta)\big(c_t - \beta_1\big)^2.$$
Then, the asymptotic limit of the generalization error corresponding to the hard formulation can be determined in closed form. Given that the source and target models given in (1) and (2) use the same data-generating function, the constants $v_t$, $c_t$, $v_s$, and $c_s$ are all equal; we denote them by $v$ and $c$ in the rest of the proof.
Next, we assume that the function $\hat\varphi(\cdot)$ is the identity function. Based on the asymptotic result stated in Corollary 1, the asymptotic limit of the generalization error corresponding to the hard formulation can be expressed as follows:
$$E_{\mathrm{test}} = v - 2c\, q_t^\star + \big(q_t^\star\big)^2 + \big(r_t^\star\big)^2.$$
It can be easily checked that the generalization error can be expressed as follows:
$$E_{\mathrm{test}} = \frac{\alpha_t}{\alpha_t + \delta - 1}\Big[\delta\big\{(c - \beta_1)^2 + \beta_2\big\} + \big(v - c^2\big)\Big].$$
Note that the generalization error obtained above depends explicitly on $\delta$. Now, it suffices to study the derivative of $E_{\mathrm{test}}$ to find the properties of the optimal transfer rate $\delta^\star$ that minimizes the generalization error. The derivative can be expressed as follows:
$$E_{\mathrm{test}}'(\delta) = \alpha_t\,\frac{(\alpha_t - 1)\big\{(c - \beta_1)^2 + \beta_2\big\} - \big(v - c^2\big)}{\big(\alpha_t + \delta - 1\big)^2}.$$
This shows that the derivative of the generalization error has the same sign as its numerator. This means that the optimal transfer rate satisfies the following:
$$\delta^\star = \begin{cases} 1 & \text{if } Z_t < 0, \\ 0 & \text{if } Z_t > 0, \\ [0,\,1] & \text{otherwise}, \end{cases}$$
where $Z_t$ is given by
$$Z_t = (\alpha_t - 1)\big\{(c - \beta_1)^2 + \beta_2\big\} - \big(v - c^2\big).$$
It can be easily shown that the condition in (72) can be expressed as the one given in (20). This completes the proof of Proposition 1.
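The phase-transition rule above is easy to evaluate numerically. The following sketch uses illustrative constants (only the formulas for $E_{\mathrm{test}}$ and $Z_t$ come from the proof) and checks that the sign of $Z_t$ decides whether full transfer ($\delta = 1$) beats no transfer ($\delta = 0$):

```python
import numpy as np

# Illustrative constants (assumptions, not from the paper): v = E[Y^2], c = E[SY],
# source load alpha_s, target load alpha_t; source overlaps follow (65).
v, c, alpha_s, alpha_t = 1.0, 0.6, 5.0, 3.0
q_s = c
r_s2 = (v - c**2) / (alpha_s - 1.0)

def transfer_quantities(rho):
    beta1 = rho * q_s                           # rho * q_s^star
    beta2 = (1.0 - rho**2) * q_s**2 + r_s2      # (1 - rho^2) (q_s^star)^2 + (r_s^star)^2
    mismatch = (c - beta1)**2 + beta2           # task-mismatch term
    Z = (alpha_t - 1.0) * mismatch - (v - c**2) # its sign decides the transition
    def E_test(delta):
        return alpha_t / (alpha_t + delta - 1.0) * (delta * mismatch + (v - c**2))
    return Z, E_test

# Similar tasks (rho close to 1): Z < 0, so delta* = 1 (positive transfer).
Z_hi, E_hi = transfer_quantities(rho=0.95)
# Dissimilar tasks (rho = 0): Z > 0, so delta* = 0 (negative transfer).
Z_lo, E_lo = transfer_quantities(rho=0.0)
```

Flipping only the task similarity $\rho$ moves $Z_t$ across zero and reverses which endpoint of $[0, 1]$ minimizes the generalization error.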

7.5. Sufficient Condition for the Hard Formulation

In this part, we provide a rigorous proof of Proposition 2. Suppose that the assumptions in Proposition 2 are all satisfied. Additionally, we assume that the function $\hat\varphi(\cdot)$ is the sign function. Based on the asymptotic result stated in Corollary 1, the asymptotic limit of the generalization error corresponding to the hard formulation can be expressed as follows:
$$E_{\mathrm{test}}(\delta) = \frac{1}{\pi}\,\mathrm{acos}\!\left(\frac{q_t^\star(\delta)}{\sqrt{\big(q_t^\star(\delta)\big)^2 + \big(r_t^\star(\delta)\big)^2}}\right),$$
where $q_t^\star(\delta)$ and $r_t^\star(\delta)$ are optimal solutions to the deterministic problem in (19) for fixed $\delta$. A simple sufficient condition for positive transfer is that $E_{\mathrm{test}}(\delta)$ is decreasing at $\delta = 0$. This means that there exists some $\delta > 0$ such that the transfer learning method introduced in (6) is better than the standard method when the following function is increasing at $\delta = 0$:
$$g(\delta) = \frac{q_t^\star(\delta)}{\sqrt{\big(q_t^\star(\delta)\big)^2 + \big(r_t^\star(\delta)\big)^2}}.$$
After computing the derivative of the function $g(\cdot)$ at zero, one can see that the transfer learning method introduced in (6) is better than the standard method when the following condition holds:
$$q_t'(0)\, r_t^\star(0) - q_t^\star(0)\, r_t'(0) > 0,$$
where $q_t^\star(0)$ and $r_t^\star(0)$ denote the optimal solutions of the standard learning formulation (i.e., $\delta = 0$ in (19)), and $q_t'(0)$ and $r_t'(0)$ denote the derivatives of the functions $q_t^\star(\delta)$ and $r_t^\star(\delta)$ at $\delta = 0$. The above analysis shows that it suffices to find the values of $q_t'(0)$ and $r_t'(0)$ to fully characterize the sufficient condition in (76). Before stating our analysis, we define $\beta_1$ and $\beta_2$ as follows:
$$\beta_1 = (1-\rho^2)(q_s^\star)^2 + (r_s^\star)^2, \qquad \beta_2 = \rho\, q_s^\star,$$
where $q_s^\star$ and $r_s^\star$ are the optimal solutions of the deterministic source formulation given in (14). (Note that the definitions of $\beta_1$ and $\beta_2$ here are interchanged with respect to (67).)
Note that the optimal solutions of the deterministic formulation in (19) satisfy the following system of equations:
$$\begin{aligned}
&\alpha_t\,\mathbb{E}\Big[S\, M_{,1}\big[r_t^\star(\delta)H + q_t^\star(\delta)S;\ \tfrac{1-\delta}{\sigma_t^\star(\delta)}\big]\Big] + \frac{\delta\,\sigma_t^\star(\delta)}{1-\delta}\big(q_t^\star(\delta) - \beta_2\big) + \lambda\, q_t^\star(\delta) = 0 \\
&\alpha_t\,\mathbb{E}\Big[H\, M_{,1}\big[r_t^\star(\delta)H + q_t^\star(\delta)S;\ \tfrac{1-\delta}{\sigma_t^\star(\delta)}\big]\Big] - \sigma_t^\star(\delta)\, r_t^\star(\delta) + \lambda\, r_t^\star(\delta) = 0 \\
&\frac{\delta}{2}\beta_1 - \frac{\alpha_t(1-\delta)}{\sigma_t^\star(\delta)^2}\,\mathbb{E}\Big[M_{,2}\big[r_t^\star(\delta)H + q_t^\star(\delta)S;\ \tfrac{1-\delta}{\sigma_t^\star(\delta)}\big]\Big] - \frac{r_t^\star(\delta)^2}{2} + \frac{\delta}{2(1-\delta)}\big(q_t^\star(\delta) - \beta_2\big)^2 = 0.
\end{aligned}$$
The derivative of the first equation at $\delta = 0$ can be expressed as follows:
$$\alpha_t\,\mathbb{E}\Big[\big(SH\, r_t'(0) + S^2\, q_t'(0)\big)\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] + \sigma_t^\star(0)\big(q_t^\star(0) - \beta_2\big) - \frac{\alpha_t}{\sigma_t^\star(0)^2}\big(\sigma_t^\star(0) + \sigma_t'(0)\big)\,\mathbb{E}\Big[S\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] + \lambda\, q_t'(0) = 0,$$
where $q_t^\star(0)$, $r_t^\star(0)$, and $\sigma_t^\star(0)$ denote the optimal solutions of the standard learning formulation (i.e., $\delta = 0$ in (19)); this means that they are known. Moreover, $q_t'(0)$, $r_t'(0)$, and $\sigma_t'(0)$ are unknown and denote the derivatives of the functions $q_t^\star(\delta)$, $r_t^\star(\delta)$, and $\sigma_t^\star(\delta)$ at $\delta = 0$. Now, define the constants $I_{11}$, $I_{12}$, $I_{13}$, and $I_{14}$ as follows:
$$\begin{aligned}
I_{11} &= \alpha_t\,\mathbb{E}\big[SH\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] \\
I_{12} &= \alpha_t\,\mathbb{E}\big[S^2\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] + \lambda \\
I_{13} &= -\frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\big[S\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] \\
I_{14} &= -\frac{\alpha_t}{\sigma_t^\star(0)}\,\mathbb{E}\big[S\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] + \sigma_t^\star(0)\big(q_t^\star(0) - \beta_2\big).
\end{aligned}$$
This means that the equation in (78) can be expressed as follows:
$$I_{11}\, r_t'(0) + I_{12}\, q_t'(0) + I_{13}\, \sigma_t'(0) + I_{14} = 0.$$
Similarly, the derivative of the second equation at $\delta = 0$ can be expressed as follows:
$$\alpha_t\,\mathbb{E}\Big[\big(H^2\, r_t'(0) + HS\, q_t'(0)\big)\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] - \sigma_t'(0)\, r_t^\star(0) - \sigma_t^\star(0)\, r_t'(0) - \frac{\alpha_t}{\sigma_t^\star(0)^2}\big(\sigma_t^\star(0) + \sigma_t'(0)\big)\,\mathbb{E}\Big[H\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] + \lambda\, r_t'(0) = 0.$$
Now, define the constants $I_{21}$, $I_{22}$, $I_{23}$, and $I_{24}$ as follows:
$$\begin{aligned}
I_{21} &= \alpha_t\,\mathbb{E}\big[H^2\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] - \sigma_t^\star(0) + \lambda \\
I_{22} &= \alpha_t\,\mathbb{E}\big[HS\, M_{,11}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] \\
I_{23} &= -\frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\big[H\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] - r_t^\star(0) \\
I_{24} &= -\frac{\alpha_t}{\sigma_t^\star(0)}\,\mathbb{E}\big[H\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big].
\end{aligned}$$
This means that the equation in (81) can be expressed as follows:
$$I_{21}\, r_t'(0) + I_{22}\, q_t'(0) + I_{23}\, \sigma_t'(0) + I_{24} = 0.$$
Moreover, the derivative of the third equation at $\delta = 0$ can be expressed as follows:
$$\frac{\beta_1}{2} + \frac{\alpha_t}{\sigma_t^\star(0)^3}\big(\sigma_t^\star(0) + 2\sigma_t'(0)\big)\,\mathbb{E}\Big[M_{,2}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] - r_t^\star(0)\, r_t'(0) - \frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\Big[\big(H\, r_t'(0) + S\, q_t'(0)\big)\, M_{,21}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] + \frac{1}{2}\big(q_t^\star(0) - \beta_2\big)^2 + \frac{\alpha_t}{\sigma_t^\star(0)^4}\big(\sigma_t^\star(0) + \sigma_t'(0)\big)\,\mathbb{E}\Big[M_{,22}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\Big] = 0.$$
We define the constants $I_{31}$, $I_{32}$, $I_{33}$, and $I_{34}$ as follows:
$$\begin{aligned}
I_{31} &= -\frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\big[H\, M_{,21}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] - r_t^\star(0) \\
I_{32} &= -\frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\big[S\, M_{,12}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] \\
I_{33} &= \frac{2\alpha_t}{\sigma_t^\star(0)^3}\,\mathbb{E}\big[M_{,2}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] + \frac{\alpha_t}{\sigma_t^\star(0)^4}\,\mathbb{E}\big[M_{,22}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] \\
I_{34} &= \frac{\alpha_t}{\sigma_t^\star(0)^2}\,\mathbb{E}\big[M_{,2}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] + \frac{\alpha_t}{\sigma_t^\star(0)^3}\,\mathbb{E}\big[M_{,22}\big[r_t^\star(0)H + q_t^\star(0)S;\ \tfrac{1}{\sigma_t^\star(0)}\big]\big] + \frac{1}{2}\big(q_t^\star(0) - \beta_2\big)^2 + \frac{\beta_1}{2}.
\end{aligned}$$
Therefore, the equation in (84) can be expressed as follows:
$$I_{31}\, r_t'(0) + I_{32}\, q_t'(0) + I_{33}\, \sigma_t'(0) + I_{34} = 0.$$
The above analysis shows that the values of $q_t'(0)$ and $r_t'(0)$ can be determined by solving the following system of linear equations:
$$\begin{aligned}
I_{11}\, r_t'(0) + I_{12}\, q_t'(0) + I_{13}\, \sigma_t'(0) + I_{14} &= 0 \\
I_{21}\, r_t'(0) + I_{22}\, q_t'(0) + I_{23}\, \sigma_t'(0) + I_{24} &= 0 \\
I_{31}\, r_t'(0) + I_{32}\, q_t'(0) + I_{33}\, \sigma_t'(0) + I_{34} &= 0,
\end{aligned}$$
in the three unknowns $q_t'(0)$, $r_t'(0)$, and $\sigma_t'(0)$. This completes the proof of Proposition 2.
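Once the constants $I_{jk}$ are available, recovering the derivatives reduces to a 3×3 linear solve. The sketch below uses hypothetical coefficient values (computing the actual values requires evaluating the Moreau-envelope expectations above) and then evaluates the sign condition in (76):

```python
import numpy as np

# Hypothetical coefficients I_jk (illustrative placeholders for the expectations);
# rows correspond to the three differentiated stationarity equations.
coeff = np.array([[0.8, 1.5, -0.3],
                  [1.1, 0.4, -0.6],
                  [-0.5, 0.2, 0.9]])
const = np.array([0.2, -0.4, 0.1])      # the constants I_14, I_24, I_34

# Solve coeff @ [r'(0), q'(0), sigma'(0)] + const = 0.
r_d, q_d, s_d = np.linalg.solve(coeff, -const)

# Standard-learning solution (also hypothetical) for the sign test in (76):
# positive transfer is guaranteed when q'(0) r(0) - q(0) r'(0) > 0.
q0, r0 = 0.7, 0.5
positive_transfer = q_d * r0 - q0 * r_d > 0
```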

8. Conclusions

In this paper, we presented a precise characterization of the asymptotic properties of two simple transfer learning formulations. Specifically, our results show that the training and generalization errors corresponding to the considered transfer formulations converge to deterministic limits, which can be found explicitly by combining the solutions of two deterministic scalar optimization problems. Our simulation results validate the theoretical predictions and reveal a phase transition phenomenon in the hard transfer formulation: it moves from negative transfer to positive transfer as the similarity of the source and target tasks moves past a well-defined critical threshold.

Author Contributions

Conceptualization, O.D. and Y.M.L.; methodology, O.D. and Y.M.L.; software, O.D.; validation, O.D. and Y.M.L.; formal analysis, O.D. and Y.M.L.; investigation, O.D. and Y.M.L.; resources, O.D. and Y.M.L.; data curation, O.D.; writing—original draft preparation, O.D.; writing—review and editing, O.D. and Y.M.L.; visualization, O.D.; supervision, Y.M.L.; project administration, Y.M.L.; funding acquisition, Y.M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Harvard FAS Dean’s Fund for Promising Scholarship and by the US National Science Foundation under grants CCF-1718698 and CCF-1910410.

Institutional Review Board Statement.

Not applicable

Informed Consent Statement.

Not applicable

Data Availability Statement.

Not applicable

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Technical Assumptions

Note that Assumption 1 is essential to show that the soft formulation in (4) concentrates in the large system limit. It also guarantees that the vectors $\xi_t \in \mathbb{R}^p$ and $\xi_s \in \mathbb{R}^p$ have correlation equal to $\rho$, asymptotically, in line with the definition in (3). Assumption 4 is introduced to guarantee that the generalization error concentrates in the large system limit; it is satisfied by popular regression and classification models. For instance, the conditions in Assumption 4 are all satisfied by the regression model with $\varphi: x \mapsto \max(x, 0)$, and by the binary classification model with $\varphi: x \mapsto \mathrm{sign}(x)$.
The analysis presented in this paper mostly focuses on regularized transfer learning formulations (i.e., $\lambda > 0$). The convexity properties in Assumption 3 are essential to apply the CGMT framework. Moreover, the properties in (12) are used to guarantee the compactness assumptions in the CGMT framework (see Theorem 2). In this appendix, we check the validity of Assumption 3 for popular loss functions, i.e., the squared loss for regression tasks and the logistic and hinge losses for binary classification tasks. To this end, assume that $C$ is an arbitrary fixed positive constant.
  • Squared loss: It is easy to see that the squared loss is a proper strongly convex function on $\mathbb{R}$, with strong convexity parameter 1. Moreover, $\mathcal{L}(\cdot)$ and its sub-differential set $\partial\mathcal{L}(\cdot)$ can be expressed as follows:
    $$\mathcal{L}(v) = \frac{1}{2}\|v - y\|^2, \qquad \partial\mathcal{L}(v) = \{v - y\},$$
    where the vector $y$ is formed by the concatenation of $\{y_i\}_{i=1}^{n_t}$. Then, there exists $R > 0$ such that
    $$\sup_{\|v\| \le C\sqrt{n_t}} |\mathcal{L}(v)| \le R\, n_t, \qquad \sup_{\|v\| \le C\sqrt{n_t}}\ \sup_{s \in \partial\mathcal{L}(v)} \|s\| = \sup_{\|v\| \le C\sqrt{n_t}} \|v - y\| \le R\sqrt{n_t},$$
    with probability going to 1 as $p$ grows to $+\infty$. The inequality follows using the regularity condition in Assumption 4 and the weak law of large numbers. Then, the squared loss satisfies Assumption 3 for any $\lambda \ge 0$.
  • Logistic loss: Now, we consider the logistic loss applied to a binary classification model (i.e., $y_i \in \{-1, +1\}$). Note that the logistic loss is a proper convex function on $\mathbb{R}$. Moreover, $\mathcal{L}(\cdot)$ and its sub-differential set $\partial\mathcal{L}(\cdot)$ are given by
    $$\mathcal{L}(v) = \sum_{i=1}^{n_t} \log\big(1 + e^{-y_i v_i}\big), \qquad \partial\mathcal{L}(v) = \{x\}, \quad\text{where } x_i = \frac{-y_i\, e^{-y_i v_i}}{1 + e^{-y_i v_i}},\ i \in \{1, \dots, n_t\}.$$
    First, observe that the loss $\mathcal{L}(\cdot)$ satisfies the following inequality:
    $$|\mathcal{L}(v)| \le n_t + \|v\|_1.$$
    This means that there exists $R_1 > 0$ such that the following inequality is valid:
    $$\sup_{\|v\| \le C\sqrt{n_t}} |\mathcal{L}(v)| \le R_1\, n_t.$$
    Additionally, the following result holds true:
    $$\sup_{\|v\| \le C\sqrt{n_t}}\ \sup_{s \in \partial\mathcal{L}(v)} \|s\| = \sup_{\|v\| \le C\sqrt{n_t}} \|x\| \le \Big(\sum_{i=1}^{n_t} y_i^2\Big)^{\frac{1}{2}}.$$
    This means that there exists $R_2 > 0$ such that the following inequality is valid:
    $$\sup_{\|v\| \le C\sqrt{n_t}}\ \sup_{s \in \partial\mathcal{L}(v)} \|s\| \le R_2\sqrt{n_t}.$$
    Then, there exists a universal constant $R > 0$ such that Assumption 3 is satisfied for the logistic loss for any $\lambda > 0$.
  • Hinge loss: Finally, we consider the hinge loss applied to a binary classification model (i.e., $y_i \in \{-1, +1\}$). It is clear that the hinge loss is a proper convex function on $\mathbb{R}$. Moreover, $\mathcal{L}(\cdot)$ is given by $\mathcal{L}(v) = \sum_{i=1}^{n_t} \max(1 - y_i v_i,\, 0)$. Following [33], the sub-differential set $\partial\mathcal{L}(\cdot)$ can be expressed as follows:
    $$\partial\mathcal{L}(v) = \Big\{ -\frac{1}{2} D(\mathbf{1} + g)\ :\ \|g\|_\infty \le 1,\ g^\top(\mathbf{1} - Dv) = \|\mathbf{1} - Dv\|_1 \Big\},$$
    where $D \in \mathbb{R}^{n_t \times n_t}$ is a diagonal matrix with diagonal entries $\{y_i\}_{i=1}^{n_t}$. Note that the loss function $\mathcal{L}(\cdot)$ satisfies the following inequality:
    $$|\mathcal{L}(v)| \le n_t + \|v\|_1.$$
    This means that there exists $R_1 > 0$ such that the following inequality is valid:
    $$\sup_{\|v\| \le C\sqrt{n_t}} |\mathcal{L}(v)| \le R_1\, n_t.$$
    Moreover, the result in (A8) shows that any element $s$ in the sub-differential set $\partial\mathcal{L}(v)$ satisfies the following:
    $$\|s\| \le \frac{1}{2}\sqrt{n_t} + \frac{1}{2}\|g\| \le \frac{1}{2}\sqrt{n_t} + \frac{1}{2}\sqrt{n_t}\,\|g\|_\infty \le \sqrt{n_t}.$$
    This means that there exists $R_2 > 0$ such that the following inequality is valid:
    $$\sup_{\|v\| \le C\sqrt{n_t}}\ \sup_{s \in \partial\mathcal{L}(v)} \|s\| \le R_2\sqrt{n_t}.$$
    Then, there exists a universal constant $R > 0$ such that Assumption 3 is satisfied for the hinge loss for any $\lambda > 0$.
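The subgradient norm bounds above are straightforward to check numerically. In the sketch below (synthetic data; the gradient formula for the logistic loss and the hinge subgradient choice instantiate the expressions above), both subgradients satisfy $\|s\| \le \sqrt{n_t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t = 500
y = rng.choice([-1.0, 1.0], size=n_t)
v = 3.0 * rng.standard_normal(n_t)     # any point in the feasible ball

# Logistic-loss gradient from (A3): x_i = -y_i e^{-y_i v_i} / (1 + e^{-y_i v_i}).
x_logistic = -y * np.exp(-y * v) / (1.0 + np.exp(-y * v))

# One hinge-loss subgradient: s_i = -y_i on the margin-violating set, 0 elsewhere.
s_hinge = np.where(1.0 - y * v > 0.0, -y, 0.0)
```

Since every component is bounded by 1 in absolute value, the Euclidean norms cannot exceed $\sqrt{n_t}$, matching the $R_2\sqrt{n_t}$ bound.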

Appendix B. Proof of Lemma 1

Appendix B.1. Primal Compactness

We start our analysis by assuming that $\lambda > 0$. We first consider the compactness of the source problem given in (4). Note that the formulation in (4) has a unique optimal solution, denoted by $\hat{w}_{s,p} \in \mathbb{R}^p$. The analysis in [20] (Lemma 1) can be used to prove that there exists $C_1 > 0$ such that the following inequality is valid:
$$\|\hat{w}_{s,p}\|^2 \le C_1,$$
with probability going to one as $p \to \infty$. Moreover, observe that the formulation in (33) has a unique optimal solution, denoted by $\hat{w}_{t,p} \in \mathbb{R}^p$. Assumption 3 supposes that the loss function is proper. Then, we can conclude that there exists $C_2 > 0$ such that
$$\ell(y, z) \ge -C_2, \quad \forall z \in \mathbb{R}.$$
Now, we define $O_{t,p}$ as the optimal objective value of the formulation in (33). Then, we can see that there exists $C_3 > 0$ such that
$$\frac{\lambda}{2}\|\hat{w}_{t,p}\|^2 \le O_{t,p} + C_3.$$
Given that $\hat{w}_{s,p}$ is a feasible solution of the formulation given in (33), we obtain the following inequality:
$$\frac{\lambda}{2}\|\hat{w}_{t,p}\|^2 \le \frac{1}{p}\sum_{i=1}^{n_t} \ell\big(y_i;\, a_i^\top \hat{w}_{s,p}\big) + \frac{\lambda}{2}\|\hat{w}_{s,p}\|^2 + C_3.$$
Based on [27] (Theorem 2.1), the following convergence in probability holds:
$$\frac{\|A\|}{\sqrt{n_t}} \xrightarrow[p \to +\infty]{} \frac{\sqrt{\alpha_t} + 1}{\sqrt{\alpha_t}},$$
where the matrix $A \in \mathbb{R}^{n_t \times p}$ is formed by the concatenation of the vectors $\{a_i\}_{i=1}^{n_t}$. Then, there exists $C_4 > 0$ such that the following inequality is valid:
$$\|A \hat{w}_{s,p}\| \le \|A\|\, \|\hat{w}_{s,p}\| \le C_4 \sqrt{n_t},$$
with probability going to one as $p \to \infty$. Combining this with the assumption in (12), we see that there exists $C_5 > 0$ such that the following inequality is valid:
$$\frac{1}{n_t}\Big|\sum_{i=1}^{n_t} \ell\big(y_i;\, a_i^\top \hat{w}_{s,p}\big)\Big| \le C_5.$$
Given that $\lambda > 0$ and the result in (A13), we conclude that there exists $C_6 > 0$ such that the following holds:
$$\|\hat{w}_{t,p}\|^2 \le C_6,$$
with probability going to one as $p \to \infty$.
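The operator-norm convergence used above can be illustrated numerically. The sketch below (dimensions and seed are arbitrary choices, not from the paper) draws a Gaussian matrix and compares $\|A\|/\sqrt{n_t}$ with its limit:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 800
alpha_t = 2.0
n_t = int(alpha_t * p)

# A has i.i.d. standard Gaussian entries; by [27] (Theorem 2.1),
# ||A|| / sqrt(n_t) concentrates around (sqrt(alpha_t) + 1) / sqrt(alpha_t).
A = rng.standard_normal((n_t, p))
ratio = np.linalg.norm(A, ord=2) / np.sqrt(n_t)
limit = (np.sqrt(alpha_t) + 1.0) / np.sqrt(alpha_t)
```

For these sizes, the empirical ratio already sits within a few percent of the asymptotic limit $1 + 1/\sqrt{\alpha_t}$.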
Now, we consider the case when $\lambda = 0$. Define $g_{s,p}(\cdot)$, $O_{s,p}$, and $\hat{w}_{s,p}$ as the cost function, the optimal cost value, and the optimal solution of the formulation in (4). Moreover, define $g_{t,p}(\cdot)$, $O_{t,p}$, and $\hat{w}_{t,p}$ as the cost function, the optimal cost value, and the optimal solution of the formulation in (33). Note that the loss function $\ell(y, \cdot)$ is strongly convex with strong convexity parameter $S > 0$. Then, for any $x_1, x_2 \in \mathbb{R}$, the following property is valid:
$$\ell\Big(y, \frac{x_1 + x_2}{2}\Big) \le \frac{1}{2}\ell(y, x_1) + \frac{1}{2}\ell(y, x_2) - \frac{S}{8}\big|x_1 - x_2\big|^2.$$
This means that, for any $i \in \{1, \dots, n\}$, the following property is valid:
$$\ell\Big(y_i, \frac{a_i^\top w_1 + a_i^\top w_2}{2}\Big) \le \frac{1}{2}\ell\big(y_i, a_i^\top w_1\big) + \frac{1}{2}\ell\big(y_i, a_i^\top w_2\big) - \frac{S}{8}\big|a_i^\top w_1 - a_i^\top w_2\big|^2,$$
where $n$ can be the number of samples of the source task or the target task, and $\{y_i\}_{i=1}^n$ are the labels of the source task or the target task. Given the convexity of the norm, we obtain the following inequality:
$$g_p\Big(\frac{w_1 + w_2}{2}\Big) \le \frac{1}{2} g_p(w_1) + \frac{1}{2} g_p(w_2) - \frac{S}{8p}\big\|A(w_1 - w_2)\big\|^2,$$
where $g_p(\cdot)$ can be the cost function of the source or target task formulation. Now, we focus on the source formulation. Take $w_1 = \hat{w}_{s,p}$ and $w_2 = 0$. Moreover, recall that the loss function is proper. Then, there exists $C_7 > 0$ such that
$$\frac{S}{8p}\big\|A \hat{w}_{s,p}\big\|^2 \le C_7 + \frac{1}{2} g_{s,p}(0).$$
Given the assumption in (12), $S > 0$, $\alpha_s > 1$, and the analysis in [27] (Theorem 2.1), there exists $C_8 > 0$ such that
$$\|\hat{w}_{s,p}\|^2 \le C_8.$$
Now, we focus on the target task. Take $w_1 = \hat{w}_{t,p}$ and $w_2 = \hat{w}_{s,p}$. Moreover, recall that the loss function is proper. Then, there exists $C_9 > 0$ such that
$$\frac{S}{8p}\big\|A(\hat{w}_{t,p} - \hat{w}_{s,p})\big\|^2 \le C_9 + \frac{1}{2} g_{t,p}(\hat{w}_{s,p}).$$
Given the assumption in (12), the result in (A25), $S > 0$, $\alpha_t > 1$, and the analysis in [27] (Theorem 2.1), there exists $C_{10} > 0$ such that
$$\|\hat{w}_{t,p}\|^2 \le C_{10}.$$
This completes the first part of the proof of Lemma 1.

Appendix B.2. Dual Compactness

The analysis in Appendix B.1 shows that the formulation in (34) can be equivalently formulated with the primal feasibility set given by
$$\|w\| \le C,$$
where $C > 0$ is a sufficiently large constant that satisfies the analysis in Appendix B.1. Now, define $\hat{u}_p$ as the optimal solution of the formulation in (34). Additionally, define the function $\mathcal{L}^*(\cdot)$ as $\mathcal{L}^*(u) = \sum_{i=1}^{n_t} \ell^*\big(y_i;\, u_i\big)$, where $\ell^*$ denotes the convex conjugate of the loss. We can see that the optimal vector $\hat{u}_p$ solves the following maximization problem:
$$\hat{u}_p = \underset{u \in \mathbb{R}^{n_t}}{\operatorname{argmax}}\ u^\top A w - \mathcal{L}^*(u),$$
where the data matrix $A = [a_1, \dots, a_{n_t}]^\top \in \mathbb{R}^{n_t \times p}$. Now, we denote by $\partial\mathcal{L}^*(u)$ the sub-differential set of the function $\mathcal{L}^*(\cdot)$ evaluated at $u$. Therefore, the solution of the above maximization problem satisfies the following condition:
$$A w \in \partial\mathcal{L}^*(\hat{u}_p).$$
Now, we use the result in [25] (Proposition 11.3) to show that the condition in (A29) can be equivalently expressed as follows:
$$\hat{u}_p \in \partial\mathcal{L}(A w),$$
where the loss function $\mathcal{L}(w) = \sum_{i=1}^{n_t} \ell\big(y_i;\, w_i\big)$, based on [25] (Proposition 11.22). Note that the introduced constraint in (A28) is satisfied. Moreover, the analysis presented in (A17) shows that there exists $C_1 > 0$ such that the following inequality holds:
$$\|A w\| \le C_1 \sqrt{n_t},$$
with probability going to one as $p$ goes to $+\infty$. Now, we use the assumption in (12) to conclude that there exists $C_2 > 0$ such that the following inequality holds:
$$\|\hat{u}_p\| \le C_2 \sqrt{n_t},$$
with probability going to one as $p$ goes to $+\infty$. This completes the proof of Lemma 1.
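The conjugate-duality step can be made concrete for the squared loss, where everything is explicit. The sketch below (synthetic data; the closed-form conjugate is a standard fact, not taken from the paper) checks that $\hat{u}_p = Aw - y$ maximizes the dual objective, which matches the relation $\hat{u}_p \in \partial\mathcal{L}(Aw)$ since $Aw - y$ is the gradient of $\mathcal{L}$ at $Aw$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_t = 300, 600
A = rng.standard_normal((n_t, p))
w = rng.standard_normal(p) / np.sqrt(p)     # a primal point with ||w|| = O(1)
y = rng.standard_normal(n_t)

# For the squared loss L(v) = 0.5 ||v - y||^2, the convex conjugate is
# L*(u) = 0.5 ||u||^2 + u^T y, so u^T (A w) - L*(u) is maximized at u = A w - y.
def dual_obj(u):
    return u @ (A @ w) - (0.5 * u @ u + u @ y)

u_hat = A @ w - y
```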

Appendix C. Proof of Lemma 2

To prove the convergence properties stated in Lemma 2, we show first that they are valid for the auxiliary formulation corresponding to the source problem.

Appendix C.1. Auxiliary Convergence

Note that the analysis presented in Section 7 is also valid for the source problem. This is because the formulation in (8) is equivalent to the source problem in (4) when $\Sigma$ is the all-zero matrix and the source training data are used. Then, we can see that the optimal solution of the auxiliary formulation corresponding to the source problem, denoted by $\tilde{w}_s$, can be expressed as follows:
$$\tilde{w}_s = q_{p,s}\, \xi_s - r_{p,s}\, \frac{B_{\xi_s} \tilde{g}_s}{\|\tilde{g}_s\|},$$
where $\tilde{g}_s = (B_{\xi_s})^\top g_s$ and $g_s$ has independent standard Gaussian components. Here, $B_{\xi_s} \in \mathbb{R}^{p \times (p-1)}$ is formed by an orthonormal basis orthogonal to the vector $\xi_s$. Additionally, our analysis in Section 7 shows that the following convergence in probability holds:
$$q_{p,s} \xrightarrow[p \to +\infty]{} q_s^\star \quad\text{and}\quad r_{p,s} \xrightarrow[p \to +\infty]{} r_s^\star.$$
Here, $q_s^\star$ and $r_s^\star$ are the optimal solutions of the asymptotic limit of the source formulation defined in (14).
Note that $\mu_p$ can be expressed as follows:
$$\mu_p = \sigma_{\min}\big((B_{\xi_t})^\top \Lambda B_{\xi_t}\big).$$
Using the eigenvalue interlacing theorem, one can see that
$$\sigma_{\min,1}(\Lambda) \le \mu_p \le \sigma_{\min,2}(\Lambda),$$
where $\sigma_{\min,1}(\Lambda)$ and $\sigma_{\min,2}(\Lambda)$ denote the smallest and second-smallest eigenvalues of $\Lambda$.
Then, using the assumption in (13), we can see that the random variable $\mu_p$ converges in probability to $\mu_{\min}$, where $\mu_{\min}$ is defined in Assumption 5. Now, we study the properties of the remaining functions using the optimal solution of the auxiliary formulation defined in (A33), i.e., $\tilde{w}_s$, instead of $\hat{w}_s$. For instance, we first study the random sequence $\tilde{V}_{p,t}^s = \xi_t^\top \Lambda\, \tilde{w}_s$ to infer the asymptotic properties of $V_{p,t}^s$.
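The interlacing bound on $\mu_p$ can be checked directly on a random instance. In this sketch, $\Lambda$ is a synthetic diagonal matrix and the basis $B_{\xi_t}$ is built from a QR factorization (implementation choices for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 200
lam = np.sort(rng.uniform(0.5, 3.0, size=p))    # eigenvalues of Lambda, sorted
Lambda = np.diag(lam)

# Random unit vector xi_t and an orthonormal basis B of its orthogonal complement:
# QR of [xi | random] gives xi (up to sign) as the first column; drop it.
xi = rng.standard_normal(p)
xi /= np.linalg.norm(xi)
Q, _ = np.linalg.qr(np.column_stack([xi, rng.standard_normal((p, p - 1))]))
B = Q[:, 1:]

# Smallest eigenvalue of the (p-1)-dimensional compression of Lambda.
mu_p = np.linalg.eigvalsh(B.T @ Lambda @ B).min()
```

By Cauchy interlacing, `mu_p` is pinched between the smallest and second-smallest eigenvalues of $\Lambda$.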
First, fix $\sigma > -\mu_{\min}$. Then, based on the convergence of $\mu_p$ and [34] (Proposition 3), the sequence of random functions $T_{p,g}(\cdot)$ converges in probability as follows:
$$T_{p,g}(\sigma) \xrightarrow[p \to +\infty]{} T_g(\sigma) = \mathbb{E}_\mu\big[1/(\mu + \sigma)\big].$$
Now, we evaluate these functions at $\sigma - x$, where $\sigma > 0$. This means that the following convergence in probability holds true:
$$T_{p,g}(\sigma - x) \xrightarrow[p \to +\infty]{} T_g(\sigma - x),$$
for any $x < \sigma + \mu_{\min}$. Note that the functions $T_{p,g}(\cdot)$ and $T_g(\cdot)$ are both convex and continuous in the variable $x$ on the set $[0, \sigma + \mu_{\min})$. Then, based on [30] (Theorem II.1), the convergence in (A38) is uniform in the variable $x$ on the compact set $[0, \sigma/2 + \mu_{\min}]$. Now, note that $\mu_p$ converges in probability to $\mu_{\min}$. Therefore, we obtain the following convergence in probability:
$$T_{p,g}(\sigma - \mu_p) \xrightarrow[p \to +\infty]{} T_g(\sigma - \mu_{\min}),$$
valid for any fixed $\sigma > 0$. Using the block matrix inversion lemma, the function $T_{p,t}(\cdot)$ can be expressed as follows:
$$T_{p,t}(\sigma) = \xi_t^\top \Lambda B_{\xi_t}\big[(B_{\xi_t})^\top \Lambda B_{\xi_t} + \sigma I_{p-1}\big]^{-1}(B_{\xi_t})^\top \Lambda\, \xi_t = \xi_t^\top \Lambda\, \xi_t + \sigma - \frac{1}{\xi_t^\top\big[\Lambda + \sigma I_p\big]^{-1}\xi_t}.$$
Therefore, we obtain the following expression:
$$Z_{p,t}(\sigma) = \sigma - \frac{1}{\xi_t^\top\big[\Lambda + \sigma I_p\big]^{-1}\xi_t}.$$
Then, using the theoretical results stated in [34] (Proposition 3), the function $Z_{p,t}(\cdot)$ converges in probability as follows:
$$Z_{p,t}(\sigma) \xrightarrow[p \to +\infty]{} Z_t(\sigma) = \sigma - \frac{1}{\mathbb{E}_\mu\big[1/(\mu + \sigma)\big]}.$$
Combining this with the above analysis, we obtain the following convergence in probability:
$$Z_{p,t}(\sigma - \mu_p) \xrightarrow[p \to +\infty]{} Z_t(\sigma - \mu_{\min}),$$
valid for any $\sigma > 0$. Based on the result in (A33), the sequence of random functions $\tilde{Z}_{p,t}^s(\cdot)$ converges in probability to the following function:
$$Z_t^s(\sigma) = q_s^\star\, \rho\, Z_t(\sigma).$$
Combining this with the above analysis, we obtain the following convergence in probability:
$$\tilde{Z}_{p,t}^s(\sigma - \mu_p) \xrightarrow[p \to +\infty]{} Z_t^s(\sigma - \mu_{\min}),$$
valid for any $\sigma > 0$. Using the same analysis and based on (A33) and (A34), one can see that the sequence of random functions $\tilde{Z}_{p,s}(\cdot)$ converges in probability to the following function:
$$\tilde{Z}_{p,s}(\sigma) \xrightarrow[p \to +\infty]{} Z_s(\sigma) = \big(\rho\, q_s^\star\big)^2 Z_t(\sigma) - \Big[(1-\rho^2)(q_s^\star)^2 + (r_s^\star)^2\Big]\, \mathbb{E}_\mu\big[\mu\sigma/(\mu + \sigma)\big].$$
Combining this with the above analysis, we obtain the following convergence in probability:
$$\tilde{Z}_{p,s}(\sigma - \mu_p) \xrightarrow[p \to +\infty]{} Z_s(\sigma - \mu_{\min}),$$
valid for any σ > 0 . The above analysis shows that the asymptotic properties stated in Lemma 2 are valid for the AO formulation corresponding to the source problem. Now, it remains to show that these properties also hold for the primary formulation.

Appendix C.2. Primary Convergence

Here, we assume that $\lambda > 0$; the case $\lambda = 0$ can be treated similarly. Now, we show that the convergence properties proved above are also valid for the primary problem. To this end, we show that all the assumptions in Theorem 2 are satisfied. We start our proof by defining the following open set:
$$T_\epsilon = \Big\{ w \in \mathbb{R}^p\ :\ \big|\xi_t^\top\big[\Lambda + \sigma I_p\big]^{-1} w - \rho\, q_s^\star K\big| < \epsilon \Big\},$$
where $K$ is defined as follows:
$$K = \mathbb{E}_\mu\big[1/(\mu + \sigma)\big].$$
Now, we consider the feasibility set $D_\epsilon = T_1 \setminus S_\epsilon$, where $T_1$ is defined in (41). Based on the analysis of the generalized target formulation in Section 7.3.2, one can see that the AO formulation corresponding to the source formulation with the set $D_\epsilon$ can be asymptotically expressed as follows:
$$V_p:\quad \min_{(q_s, r_s) \in T_2}\ \min_{\boldsymbol{r}_s \in \tilde{D}_\epsilon}\ \max_{\|u\| \le C_s}\ \frac{\|u\|}{p}\, g_s^\top B_{\xi_s} \boldsymbol{r}_s + \frac{q_s}{p}\, u^\top s_s + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big) + \frac{1}{p}\, \|\boldsymbol{r}_s\|\, h_s^\top u - \frac{1}{p}\sum_{i=1}^{n_s} \ell^*\big(y_{s,i};\, u_i\big).$$
Here, the feasibility set $T_2$ is defined in Section 7.3.2, and the feasibility set $\tilde{D}_\epsilon$ is given by
$$\Big\{ \boldsymbol{r}_s\ :\ \Big| q_s\, \rho\, K_{p,t} + q_s \sqrt{1-\rho^2}\, K_{p,r} + \xi_t^\top\big[\Lambda + \sigma I_p\big]^{-1} B_{\xi_s} \boldsymbol{r}_s - \rho\, q_s^\star K \Big| \ge \epsilon,\ \|\boldsymbol{r}_s\| = r_s \Big\}.$$
This follows based on the decomposition in (40), where $K_{p,t} = \xi_t^\top[\Lambda + \sigma I_p]^{-1}\xi_t$ and $K_{p,r} = \xi_t^\top[\Lambda + \sigma I_p]^{-1}\xi_r$. Note that the optimization problem given in $V_p$ can be equivalently formulated as follows:
$$V_p:\quad \min_{(q_s, r_s) \in \hat{S}_\epsilon}\ \min_{\boldsymbol{r}_s \in \tilde{D}_\epsilon}\ \max_{\|u\| \le C_s}\ \frac{\|u\|}{p}\, g_s^\top B_{\xi_s} \boldsymbol{r}_s + \frac{q_s}{p}\, u^\top s_s + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big) + \frac{1}{p}\, \|\boldsymbol{r}_s\|\, h_s^\top u - \frac{1}{p}\sum_{i=1}^{n_s} \ell^*\big(y_{s,i};\, u_i\big).$$
Here, we replace the feasibility set $T_2$ by the feasibility set $\hat{S}_\epsilon$ defined as follows:
$$\Big\{ \Big| q_s\, \rho\, K_{p,t} + q_s \sqrt{1-\rho^2}\, K_{p,r} - r_s\, \frac{\xi_t^\top[\Lambda + \sigma I_p]^{-1} B_{\xi_s} \tilde{g}_s}{\|\tilde{g}_s\|} - \rho\, q_s^\star K \Big| \ge \epsilon \Big\} \cap T_2,$$
where $\tilde{g}_s = (B_{\xi_s})^\top g_s$. This follows since the first set in $\hat{S}_\epsilon$ satisfies the condition in the set $\tilde{D}_\epsilon$. Now, assume that $\hat{\phi}_p$ is the optimal cost value of the optimization problem $V_p$ and define the function $\hat{h}_p(\cdot)$ as follows:
$$\hat{h}_p(q_s, r_s) = \min_{\boldsymbol{r}_s \in \tilde{D}_\epsilon}\ \max_{\|u\| \le C_s}\ \frac{\|u\|}{p}\, g_s^\top B_{\xi_s} \boldsymbol{r}_s + \frac{q_s\, u^\top s_s}{p} + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big) + \frac{\|\boldsymbol{r}_s\|}{p}\, h_s^\top u - \frac{1}{p}\sum_{i=1}^{n_s} \ell^*\big(y_{s,i};\, u_i\big),$$
defined on the set $\hat{S}_\epsilon$. Based on the max–min inequality [35], the function $\hat{h}_p(\cdot)$ can be lower bounded by the following function:
$$\tilde{h}_p(q_s, r_s) = \max_{\|u\| \le C_s}\ \min_{\boldsymbol{r}_s \in \tilde{D}_\epsilon}\ \frac{\|u\|}{p}\, g_s^\top B_{\xi_s} \boldsymbol{r}_s + \frac{q_s\, u^\top s_s}{p} + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big) + \frac{\|\boldsymbol{r}_s\|}{p}\, h_s^\top u - \frac{1}{p}\sum_{i=1}^{n_s} \ell^*\big(y_{s,i};\, u_i\big).$$
This is valid for any $(q_s, r_s) \in \hat{S}_\epsilon$. Moreover, note that the following inequality holds true:
$$\min_{\boldsymbol{r}_s \in \tilde{D}_\epsilon}\ \frac{\|u\|}{p}\, g_s^\top B_{\xi_s} \boldsymbol{r}_s \ge -\frac{\|u\|}{p}\, \big\|(B_{\xi_s})^\top g_s\big\|\, r_s,$$
for any $(q_s, r_s) \in \hat{S}_\epsilon$. Following the generalized analysis in Section 7.3.2, one can see that the auxiliary problem corresponding to the source formulation can be expressed as follows:
$$\min_{(q_s, r_s) \in T_2}\ \sup_{\sigma_s > 0}\ \frac{1}{n_s}\sum_{i=1}^{n_s} M\big(y_{s,i}, \cdot\,\big)\Big[r_s h_{s,i} + q_s s_{s,i};\ \frac{r_s \|\tilde{g}_s\|}{\sqrt{n_s}\, \sigma_s}\Big] - \frac{r_s \sigma_s}{2}\, \frac{\|\tilde{g}_s\|}{\sqrt{n_s}} + \frac{\lambda}{2}\big(q_s^2 + r_s^2\big).$$
This means that the function $\tilde{h}_p(\cdot)$ can be lower bounded by the cost function of the minimization problem formulated in (A50), denoted by $\hat{g}_p(\cdot)$, i.e.,
$$\hat{g}_p(q_s, r_s) \le \tilde{h}_p(q_s, r_s).$$
Here, both functions are defined on the feasibility set $\hat{S}_\epsilon$. Now, define $\phi_p$ as the optimal cost value of the auxiliary optimization problem corresponding to the source formulation defined in Section 7.3.1. Note that the cost function $\hat{g}_p(\cdot)$ is strongly convex in the variables $(q_s, r_s)$ with strong convexity parameter $\lambda > 0$. This means that, for any $\beta \in [0, 1]$, $(q_{s,1}, r_{s,1}) \in T_2$, and $(q_{s,2}, r_{s,2}) \in T_2$, we have the following inequality:
$$\hat{g}_p\big(\beta v_1 + (1-\beta)v_2\big) \le \beta\, \hat{g}_p(v_1) + (1-\beta)\, \hat{g}_p(v_2) - \frac{\lambda}{2}\beta(1-\beta)\big\|v_1 - v_2\big\|^2,$$
where $v_1 = [q_{s,1}, r_{s,1}]^\top$ and $v_2 = [q_{s,2}, r_{s,2}]^\top$. Take $v_1$ as $v_p^\star$, the optimal solution of the optimization problem (A50). Then, the inequality in (A52) implies the following inequality:
$$\phi_p \le \hat{g}_p(v_2) - \frac{\lambda}{2}\beta\, \big\|v_p^\star - v_2\big\|^2.$$
This is valid for any $v_2$ in the set $T_2$. Now, taking $\beta = 1/2$ and the minimum over $v_2$ in the set $\hat{S}_\epsilon$ on both sides, we obtain the following inequality:
$$\phi_p + \frac{\lambda}{4}\min_{v \in \hat{S}_\epsilon}\big\|v_p^\star - v\big\|^2 \le \min_{v \in \hat{S}_\epsilon}\hat{g}_p(v).$$
Based on the above analysis, note that the following inequality also holds true:
$$\min_{v \in \hat{S}_\epsilon}\hat{g}_p(v) \le \hat{\phi}_p.$$
Then, to verify the assumption of [17] (Theorem 6.1), it remains to show that there exists $\epsilon > 0$ such that the following inequality holds:
$$\frac{\lambda}{4}\min_{v \in \hat{S}_\epsilon}\big\|v_p^\star - v\big\|^2 \ge \epsilon,$$
with probability going to 1 as $p \to \infty$. Note that any element in the set $\hat{S}_\epsilon$ satisfies the following inequality:
$$\epsilon \le \Big| q_s\, \rho\, K_{p,t} + q_s\sqrt{1-\rho^2}\, K_{p,r} - r_s\, \frac{\xi_t^\top[\Lambda + \sigma I_p]^{-1}B_{\xi_s}\tilde{g}_s}{\|\tilde{g}_s\|} - \rho\, q_s^\star K \Big| \le \big| q_s\, \rho\, K_{p,t} - \rho\, q_s^\star K \big| + |q_s|\sqrt{1-\rho^2}\, |K_{p,r}| + |r_s|\, \frac{\big|\xi_t^\top[\Lambda + \sigma I_p]^{-1}B_{\xi_s}\tilde{g}_s\big|}{\|\tilde{g}_s\|}.$$
Based on the analysis in Appendix C.1, we have the following convergence in probability:
$$\big| q_s\, \rho\, K_{p,t} - \rho\, q_s^\star K \big| \xrightarrow[p \to +\infty]{} |q_s - q_s^\star|\, \rho\, K, \qquad |q_s|\sqrt{1-\rho^2}\, |K_{p,r}| \xrightarrow[p \to +\infty]{} 0, \qquad |r_s|\, \frac{\big|\xi_t^\top[\Lambda + \sigma I_p]^{-1}B_{\xi_s}\tilde{g}_s\big|}{\|\tilde{g}_s\|} \xrightarrow[p \to +\infty]{} 0.$$
This means that there exists $\epsilon > 0$ such that any element in the set $\hat{S}_\epsilon$ satisfies the following inequality:
$$|q_s - q_s^\star|\, \rho\, K \ge \epsilon,$$
with probability going to 1 as $p \to \infty$. Combining this with Assumption 5 and the consistency result stated in (A34) shows that there exists $\epsilon > 0$ such that the following inequality holds:
$$\frac{\lambda}{4}\min_{v \in \hat{D}_\epsilon}\big\|v_p^\star - v\big\|^2 \ge \epsilon,$$
with probability going to 1 as $p \to \infty$. This also proves that there exists $\epsilon > 0$ such that the following inequality holds:
$$\hat{\phi}_p \ge \phi_p + \epsilon,$$
with probability going to 1 as $p \to \infty$. This completes the verification of the assumptions in Theorem 2. This means that the optimal solution of the primary problem belongs to the set $S_\epsilon$ on events with probability going to 1 as $p \to \infty$. Since the choice of $\epsilon$ is arbitrary, we obtain the following asymptotic result:
$$\xi_t^\top\big[\Lambda + \sigma I_p\big]^{-1}\hat{w}_s \xrightarrow[p \to +\infty]{} q_s^\star\, \rho\, K,$$
where w ^ s is the optimal solution of the source problem (4). Following the same analysis, one can also show the convergence properties stated in Lemma 2.

Appendix D. Proof of Lemma 4

Here, we assume that $\lambda > 0$; the case $\lambda = 0$ can be treated similarly. The cost function of the optimization problem (49) can be expressed as follows:
$$O_t(q_t, r_t, \sigma_t) = \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) - \frac{\sigma_t r_t^2}{2} - \frac{1}{2} Z_s(\sigma_t) - \frac{1}{2} q_t^2\, Z_t(\sigma_t) + \alpha_t\, \mathbb{E}\Big[M\big(Y_t, \cdot\,\big)\big[r_t H_t + q_t S_t;\ T_g(\sigma_t)\big]\Big] + q_t\, Z_t^s(\sigma_t).$$
Note that the function $O_t(\cdot, \cdot, \cdot)$ can equivalently be expressed as follows:
$$O_t(q_t, r_t, \sigma_t) = -\frac{\sigma_t r_t^2}{2} + \frac{1}{2}\Big[(1-\rho^2)(q_s^\star)^2 + (r_s^\star)^2\Big] T_2(\sigma_t) + \alpha_t\, \mathbb{E}\Big[M\big(Y_t, \cdot\,\big)\big[r_t H_t + q_t S_t;\ T_1(\sigma_t)\big]\Big] + \frac{\lambda}{2}\big(q_t^2 + r_t^2\big) - \frac{1}{2}\big(q_t - \rho\, q_s^\star\big)^2\Big(\sigma_t - \frac{1}{T_1(\sigma_t)}\Big).$$
Here, the functions $T_1(\cdot)$ and $T_2(\cdot)$ are defined as follows:
$$T_1(\sigma_t) = \mathbb{E}_\mu\big[1/(\mu + \sigma_t)\big], \qquad T_2(\sigma_t) = \mathbb{E}_\mu\big[\mu\sigma_t/(\mu + \sigma_t)\big].$$
Based on Assumption 5, the functions $T_1(\cdot)$ and $T_2(\cdot)$ are twice continuously differentiable on the feasibility set. We start our analysis by showing that the function $O_t(\cdot, \cdot, \cdot)$ is concave in the variable $\sigma_t$ for fixed feasible $(q_t, r_t)$. First, note that the function $T_2(\cdot)$ is concave on the feasibility set. Now, define the function $g(\cdot)$ as follows:
$$g(\sigma_t) = \frac{1}{T_1(\sigma_t)}.$$
Then, we can see that the second derivative of the function $g(\cdot)$ can be expressed as follows:
$$g''(\sigma_t) = -\frac{T_1''(\sigma_t)\, T_1(\sigma_t) - 2\, T_1'(\sigma_t)^2}{T_1(\sigma_t)^3}.$$
Here, the first and second derivatives of the function $T_1(\cdot)$ can be expressed as follows:
$$T_1'(\sigma_t) = -\mathbb{E}_\mu\big[1/(\mu + \sigma_t)^2\big], \qquad T_1''(\sigma_t) = 2\, \mathbb{E}_\mu\big[1/(\mu + \sigma_t)^3\big].$$
Then, using the Cauchy–Schwarz inequality (applied to $(\mu + \sigma_t)^{-1/2}$ and $(\mu + \sigma_t)^{-3/2}$), one can see that the second derivative of the function $g(\cdot)$ is nonpositive. This implies the concavity of the function $g(\cdot)$. Therefore, using the properties in [35] (Section 3.2), the function $O_t(\cdot, \cdot, \cdot)$ is concave in the variable $\sigma_t$.
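The Cauchy–Schwarz argument above implies $g'' \le 0$ for any spectral distribution of $\mu$. A finite-difference sketch with a sampled, purely illustrative distribution confirms this:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = rng.uniform(0.5, 2.0, size=10_000)   # illustrative samples of the spectral measure

def g(sigma):
    # g(sigma) = 1 / T_1(sigma), with T_1(sigma) = E_mu[1 / (mu + sigma)]
    # replaced by its empirical average over the samples.
    return 1.0 / np.mean(1.0 / (mu + sigma))

# Finite-difference second derivative of g at several points; by the
# Cauchy-Schwarz argument it should be nonpositive at each of them.
h = 1e-3
second = [(g(s + h) - 2.0 * g(s) + g(s - h)) / h**2 for s in (0.1, 1.0, 5.0)]
```

Note that the inequality $2 T_1'^2 \le T_1'' T_1$ holds for the empirical measure as well, so the check is exact up to finite-difference error.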
Now, we focus on proving the strong convexity properties. Define the function O t ( · , · ) as follows:
O t ( q t , r t ) = sup σ t > μ min O t ( q t , r t , σ t ) .
Note that the term λ 2 ( q t 2 + r t 2 ) is strongly convex in the variables ( q t , r t ) . Then, to prove our property it suffices to show that the following function is jointly convex in the variables ( q t , r t ) in the feasibility set:
h ( q t , r t ) = sup σ t > μ min σ t r t 2 2 + 1 2 ( 1 ρ 2 ) ( q s ) 2 + ( r s ) 2 T 2 ( σ t ) 1 2 q t ρ q s 2 σ t 1 / T 1 ( σ t ) + α t E M ( Y t , . ) r t H t + q t S t ; T 1 ( σ t ) ,
Note that the function $h(\cdot,\cdot)$ can also be expressed as follows:
$$h(q_t, r_t) = \sup_{\sigma_t > -\mu_{\min}} \min_{0 \le \tau \le C_\tau} \left\{ \frac{\sigma_t \tau^2}{2} - \tau r_t \sigma_t + \frac{1}{2}(1-\rho^2)\left[(q_s^\star)^2 + (r_s^\star)^2\right] T_2(\sigma_t) + \alpha_t\, \mathbb{E}\!\left[\mathcal{M}_{\ell(Y_t,\cdot)}\!\left(r_t H_t + q_t S_t;\ T_1(\sigma_t)\right)\right] - \frac{1}{2}\left(q_t - \rho q_s^\star\right)^2 \left(\sigma_t - 1/T_1(\sigma_t)\right) \right\}.$$
Here, the feasibility set of the variable $\tau$ can be taken to be bounded since the optimal $\tau$ satisfies $\tau = r_t$. It can easily be seen that the cost function of the optimization problem in (A68) is convex in $\tau$ and concave in $\sigma_t$. Then, using Sion's minimax theorem [36], the function $h(\cdot,\cdot)$ can also be expressed as follows:
$$h(q_t, r_t) = \inf_{0 < \tau \le C_\tau} \sup_{\sigma_t > -\mu_{\min}} \left\{ \frac{\sigma_t \tau}{2} - r_t \sigma_t + \frac{1}{2}(1-\rho^2)\left[(q_s^\star)^2 + (r_s^\star)^2\right] T_2(\sigma_t/\tau) + \alpha_t\, \mathbb{E}\!\left[\mathcal{M}_{\ell(Y_t,\cdot)}\!\left(r_t H_t + q_t S_t;\ T_1(\sigma_t/\tau)\right)\right] - \frac{1}{2}\left(q_t - \rho q_s^\star\right)^2 \left(\sigma_t/\tau - 1/T_1(\sigma_t/\tau)\right) \right\}.$$
Then, to prove our property, it suffices to show that the cost function of the above problem is jointly convex in the variables $(q_t, r_t, \tau)$. Using the positivity of its second derivative, it is easy to see that the function $\tau \mapsto T_2(\sigma_t/\tau)$ is convex. Moreover, using the analysis below Equation (161) in [20] (Appendix H), we can see that the remaining terms are jointly convex in the variables $(q_t, r_t, \tau)$. We omit these steps since they are similar to the approach employed in [20] (Appendix H). This shows that the function $O_t(\cdot,\cdot)$ is strongly convex in the variables $(q_t, r_t)$.
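The convexity of the map $\tau \mapsto T_2(\sigma_t/\tau)$ claimed above can likewise be checked numerically; a minimal sketch under the same sampled-spectrum assumption (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = rng.uniform(0.1, 1.0, size=5000)  # sampled spectrum of the weighting matrix

def T2(sigma):
    # T2(sigma) = E_mu[mu * sigma / (mu + sigma)]
    return np.mean(mu * sigma / (mu + sigma))

def f(tau, sigma=2.0):
    # The map tau -> T2(sigma / tau) appearing in the inf-sup reformulation
    return T2(sigma / tau)

# Midpoint-convexity check on a grid of tau values
taus = np.linspace(0.2, 3.0, 25)
for a in taus:
    for b in taus:
        assert f(0.5 * (a + b)) <= 0.5 * (f(a) + f(b)) + 1e-10
```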

References

  1. Pratt, L.Y.; Mostow, J.; Kamm, C.A. Direct Transfer of Learned Information among Neural Networks. In Proceedings of the Ninth National Conference on Artificial Intelligence—Volume 2, AAAI'91, Anaheim, CA, USA, 14–19 July 1991; pp. 584–589.
  2. Pratt, L.Y. Discriminability-Based Transfer between Neural Networks. In Advances in Neural Information Processing Systems; Hanson, S., Cowan, J., Giles, C., Eds.; Morgan-Kaufmann: Burlington, MA, USA, 1993; Volume 5, pp. 204–211.
  3. Perkins, D.; Salomon, G. Transfer of Learning; Pergamon: Oxford, UK, 1992.
  4. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
  5. Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A Survey on Deep Transfer Learning. arXiv 2018, arXiv:1808.01974.
  6. Rosenstein, M.T.; Marx, Z.; Kaelbling, L.P.; Dietterich, T.G. To Transfer or Not to Transfer. In NIPS Workshop on Transfer Learning; NIPS: Vancouver, BC, Canada, 2005.
  7. Bakker, B.; Heskes, T. Task Clustering and Gating for Bayesian Multitask Learning. J. Mach. Learn. Res. 2003, 4, 83–99.
  8. Ben-David, S.; Schuller, R. Exploiting Task Relatedness for Multiple Task Learning. In Learning Theory and Kernel Machines; Schölkopf, B., Warmuth, M.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 567–580.
  9. Kornblith, S.; Shlens, J.; Le, Q.V. Do Better ImageNet Models Transfer Better? arXiv 2019, arXiv:1805.08974.
  10. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks? arXiv 2014, arXiv:1411.1792.
  11. Tommasi, T.; Orabona, F.; Caputo, B. Learning Categories from Few Examples with Multi Model Knowledge Transfer. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 928–941.
  12. Yang, J.; Yan, R.; Hauptmann, A.G. Adapting SVM Classifiers to Data with Shifted Distributions. In Proceedings of the Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007), Omaha, NE, USA, 28–31 October 2007; pp. 69–76.
  13. Lampinen, A.K.; Ganguli, S. An Analytic Theory of Generalization Dynamics and Transfer Learning in Deep Linear Networks. arXiv 2019, arXiv:1809.10374.
  14. Dar, Y.; Baraniuk, R.G. Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks. arXiv 2021, arXiv:2006.07002.
  15. Saglietti, L.; Zdeborová, L. Solvable Model for Inheriting the Regularization through Knowledge Distillation. arXiv 2020, arXiv:2012.00194.
  16. Stojnic, M. A Framework to Characterize Performance of LASSO Algorithms. arXiv 2013, arXiv:1303.7291.
  17. Thrampoulidis, C.; Abbasi, E.; Hassibi, B. Precise high-dimensional error analysis of regularized M-estimators. In Proceedings of the 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 29 September–2 October 2015; pp. 410–417.
  18. Gordon, Y. On Milman's inequality and random subspaces which escape through a mesh in R^n. In Geometric Aspects of Functional Analysis; Lindenstrauss, J., Milman, V.D., Eds.; Springer: Berlin/Heidelberg, Germany, 1988; pp. 84–106.
  19. Dhifallah, O.; Thrampoulidis, C.; Lu, Y.M. Phase Retrieval via Polytope Optimization: Geometry, Phase Transitions, and New Algorithms. arXiv 2018, arXiv:1805.09555.
  20. Dhifallah, O.; Lu, Y.M. A Precise Performance Analysis of Learning with Random Features. arXiv 2020, arXiv:2008.11904.
  21. Salehi, F.; Abbasi, E.; Hassibi, B. The Impact of Regularization on High-dimensional Logistic Regression. arXiv 2019, arXiv:1906.03761.
  22. Kammoun, A.; Alouini, M.S. On the Precise Error Analysis of Support Vector Machines. arXiv 2020, arXiv:2003.12972.
  23. Mignacco, F.; Krzakala, F.; Lu, Y.M.; Zdeborová, L. The Role of Regularization in Classification of High-Dimensional Noisy Gaussian Mixture. arXiv 2020, arXiv:2002.11544.
  24. Aubin, B.; Krzakala, F.; Lu, Y.M.; Zdeborová, L. Generalization Error in High-Dimensional Perceptrons: Approaching Bayes Error with Convex Optimization. arXiv 2020, arXiv:2006.06560.
  25. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer: Berlin/Heidelberg, Germany, 1998.
  26. Thrampoulidis, C.; Oymak, S.; Hassibi, B. Regularized Linear Regression: A Precise Analysis of the Estimation Error. In Proceedings of the 28th Conference on Learning Theory; Grünwald, P., Hazan, E., Kale, S., Eds.; PMLR: Paris, France, 2015; Volume 40, pp. 1683–1709.
  27. Rudelson, M.; Vershynin, R. Non-Asymptotic Theory of Random Matrices: Extreme Singular Values. arXiv 2010, arXiv:1003.2990.
  28. Adachi, S.; Iwata, S.; Nakatsukasa, Y.; Takeda, A. Solving the Trust-Region Subproblem by a Generalized Eigenvalue Problem. SIAM J. Optim. 2017, 27, 269–291.
  29. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2014.
  30. Andersen, P.K.; Gill, R.D. Cox's Regression Model for Counting Processes: A Large Sample Study. Ann. Statist. 1982, 10, 1100–1120.
  31. Newey, W.K.; McFadden, D. Chapter 36: Large sample estimation and hypothesis testing. In Handbook of Econometrics; Elsevier: Amsterdam, The Netherlands, 1994; p. 2111.
  32. Schilling, R.L. Measures, Integrals and Martingales; Cambridge University Press: Cambridge, UK, 2005.
  33. Shor, N. Minimization Methods for Non-Differentiable Functions; Springer: Berlin/Heidelberg, Germany, 1985.
  34. Debbah, M.; Hachem, W.; Loubaton, P.; de Courville, M. MMSE analysis of certain large isometric random precoded systems. IEEE Trans. Inf. Theory 2003, 49, 1293–1311.
  35. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004.
  36. Sion, M. On general minimax theorems. Pac. J. Math. 1958, 8, 171–176.
Figure 1. Theoretical predictions vs. numerical simulations obtained by averaging over 100 independent Monte Carlo trials with dimension p = 2500. (a) Binary classification with logistic loss. We take α_s = 10α_t, λ = 0.3, Σ = I_p/5, and ρ = 0.85, where α_s = n_s/p and α_t = n_t/p. The functions φ(·) and φ̂(·) are both the sign function. For hard transfer, we set the transfer rate to δ = 0.5. Full source transfer corresponds to δ = 1.0, whereas no transfer corresponds to δ = 0. (b) Nonlinear regression using quadratic loss, where φ(·) is the ReLU function and φ̂(·) is the identity function. Soft identity, beta, and uniform matrices refer to different choices of the weighting matrix in (8). Soft Identity Matrix: Σ is an identity matrix. Soft Uniform Matrix: Σ is a random matrix with diagonal elements drawn from the uniform distribution. Soft Beta Matrix: Σ is a random matrix with diagonal elements drawn from the beta distribution. We scale all diagonal elements of Σ to have the same mean. We also take α_s = 10α_t, λ = 0.1, and ρ = 0.8.
Figure 2. Phase transitions of the hard transfer formulation. When the similarity ρ between the two tasks is small, we are in the negative transfer regime, where we should not transfer the knowledge from the source task. However, as ρ moves past a critical threshold, we enter the positive transfer regime. (a) Binary classification with squared loss, with parameters α_t = 2, α_s = 2α_t, and λ = 0. Both φ(·) and φ̂(·) are the sign function. (b) Nonlinear regression with squared loss, with parameters α_t = 2, α_s = 2α_t, and λ = 0. φ(·) is the ReLU function and φ̂(·) is the identity function.
Figure 3. Additional illustrations of the phase transition phenomenon. (a) Regression (squared loss, α_t = 0.5, and α_s = 3α_t); (b) regression (squared loss, α_t = 2, and α_s = 2α_t); (c) binary classification (squared loss, α_t = 1.5, and α_s = 3α_t); (d) binary classification (hinge loss, α_t = 1.5, and α_s = 3α_t). In all the experiments, we set the regularization strength to λ = 0.1. The blue line represents our theoretical predictions of the optimal transfer rate, obtained by solving our asymptotic results in Section 4 for multiple values of δ. The empirical results are averaged over 100 independent Monte Carlo trials with p = 2500.
Figure 4. Illustrations of the sufficient condition in Proposition 2. (a) Classification (squared loss, α_t = 1.5, and α_s = 8α_t); (b) classification (LAD loss, α_t = 1.5, and α_s = 8α_t). In all the experiments, we set the regularization strength to λ = 0.1. The blue line represents our theoretical predictions of the optimal transfer rate, obtained by solving our asymptotic results in Section 4 for multiple values of δ. The green line represents the sufficient condition for positive transfer stated in Proposition 2.
Figure 5. Continuous line: theoretical predictions. Circles: numerical simulations. (a) α_s = 6α_t, λ = 0.1, β_t = 1/10, and ρ = 0.9. (b) α_t = 1, α_s = 5α_t, λ = 0.3, and ρ = 0.75. In all the experiments, we consider the binary classification problem with the logistic loss function. The empirical results are averaged over 50 independent Monte Carlo trials, and we set p = 1000.
Figure 6. Continuous line: theoretical predictions. Circles: numerical simulations. (a) α_s = 12α_t, λ = 0.2, and ρ = 0.75. (b) α_t = 1.5, α_s = 8α_t, and λ = 0.4. In all the experiments, we consider the regression setting with a squared loss. The hard transfer formulation uses δ = 0.5, and the soft transfer formulation uses an identity weighting matrix. The empirical results are averaged over 50 independent Monte Carlo trials, and we set p = 1000.
Figure 7. Continuous line: theoretical predictions. Circles: numerical simulations. (a) α_s = 0.5α_t, λ = 0.6, and ρ = 0.7. We consider the regression setting with a squared loss. (b) α_s = 0.5α_t, λ = 0.3, and ρ = 0.8. We consider the classification setting with a logistic loss. The hard transfer formulation uses δ = 0.5, and the soft transfer formulation uses an identity weighting matrix. The empirical results are averaged over 60 independent Monte Carlo trials, and we set p = 1000.

Dhifallah, O.; Lu, Y.M. Phase Transitions in Transfer Learning for High-Dimensional Perceptrons. Entropy 2021, 23, 400. https://doi.org/10.3390/e23040400
