Article

A Penalization Method for Estimating Heterogeneous Covariate Effects in Cancer Genomic Data

1 School of Statistics, Renmin University of China, No. 59 Zhongguancun Street, Beijing 100872, China
2 Center for Applied Statistics, School of Statistics, Renmin University of China, No. 59 Zhongguancun Street, Beijing 100872, China
* Author to whom correspondence should be addressed.
Genes 2022, 13(4), 702; https://doi.org/10.3390/genes13040702
Submission received: 22 March 2022 / Revised: 8 April 2022 / Accepted: 9 April 2022 / Published: 15 April 2022
(This article belongs to the Special Issue Application of Bioinformatics in Human Cancers)

Abstract

In high-throughput profiling studies, extensive efforts have been devoted to searching for the biomarkers associated with the development and progression of complex diseases. The heterogeneity of covariate effects associated with the outcomes across subjects has been noted in the literature. In this paper, we consider a scenario where the effects of covariates change smoothly across subjects, which are ordered by a known auxiliary variable. To this end, we develop a penalization-based approach, which applies a penalization technique to simultaneously select important covariates and estimate their unique effects on the outcome variables of each subject. We demonstrate that, under the appropriate conditions, our method shows selection and estimation consistency. Additional simulations demonstrate its superiority compared to several competing methods. Furthermore, applying the proposed approach to two The Cancer Genome Atlas datasets leads to better prediction performance and higher selection stability.

1. Introduction

The tremendous development of high-throughput sequencing techniques allows for the generation of massive genomic data, e.g., gene expressions and Single-Nucleotide Polymorphisms (SNPs). These data provide an unprecedented opportunity to uncover biomarkers associated with outcomes such as the development and progression of complex diseases, e.g., cancers and type II diabetes. Numerous studies on this topic have been carried out. However, most existing studies assume that a covariate has an identical effect on the outcome variable for all subjects, which is often unrealistic in practice. For example, Ford et al. [1] found that the risk of breast and ovarian cancers in BRCA2 mutation carriers increases with age. Another example is that the effects of some genes in the nicotinic 15q25 locus on lung cancer risk are mediated by nicotine dependence [2]. These findings suggest that the effects of a specific covariate can be heterogeneous, and discrepancies in covariate effects or covariate-outcome associations may arise from differences in clinical characteristics and other traits across subjects. As such, ignoring such effect heterogeneity in genomic data analysis can result in biased estimation and misleading inference.
The most commonly used strategy for handling heterogeneity is subgroup analysis, under which subjects form subgroups and each subgroup has unique covariate-outcome associations. A number of approaches have been proposed, such as the finite mixture model [3,4,5], and penalization-based approaches, such as concave fusion penalization [6,7], and C-Lasso [8]. However, these approaches assume that the effects of covariates are the same within each subgroup. As suggested by the literature, the covariate (e.g., genetic) effects are typically associated with clinical measures (e.g., age and number of cigarettes smoked per day), which are often continuous variables. As such, in some applications, covariate effects are more likely to vary smoothly rather than being locally constant within each subgroup.
In this study, we focus on a scenario where the subjects can be ordered by an auxiliary variable (see Section 2 for details). We consider a linear regression model with heterogeneous covariate effects by allowing the regression coefficients to vary smoothly across subjects. We then propose a novel penalization approach to capture the smoothing changes of coefficients. Under this approach, a “spline-lasso” penalty is imposed on the second-order derivatives of the coefficients to encourage smoothness in coefficients’ changes. Additionally, we introduce a penalty of the group Lasso form to accommodate the high dimensionality of genomic data (i.e., the number of genes is larger than the sample size) and select the relevant covariates.
Our work is related to varying coefficient models, a class of classical semi-parametric models. Such models treat the coefficients as functions of certain characteristics and use various nonparametric smoothing techniques, such as spline-based methods [9,10] and local polynomial smoothing [11], to approximate the unknown coefficient functions. Examples include the high-dimensional varying coefficient models proposed by Wei et al. [12], Xue and Qu [13], Song et al. [14], and Chen et al. [15], the finite mixture of varying coefficient models [16], and the additive varying-coefficient model for nonlinear gene-environment interactions [17]. Compared to these varying-coefficient regression approaches, the proposed method imposes fewer requirements on the distribution of the auxiliary variable and better estimates the regression coefficients when the auxiliary variable is unevenly distributed (Figure 1).
Moreover, the proposed approach is related to, but also significantly advances, several existing lines of work. First, it advances existing genomic marker identification studies by considering the heterogeneity of covariate effects. Second, it advances gene-environment interaction analysis methods [18,19] by allowing more flexibility in the relationship pattern (not limited to a prespecified relationship) between the covariate (genetic) effects and the environmental factors (auxiliary variables). Finally, it advances the existing multiple change-point regression studies [20,21] by tracking gradual changes of the coefficients rather than abrupt ones (Figure 1). Overall, this approach is practically useful for analyzing genomic data and may lead to important new findings.
To further illustrate the differences between the proposed method and the varying-coefficient and multiple change-point regression methods, consider a simple simulated example with N = 200, p = 10, and 3 significant variables. The coefficient of each variable varies among individuals and is a function of a certain environmental factor, e.g., age. Suppose that age is unevenly distributed among subjects, with subjects concentrated between the ages of 25–35 and 45–55, as indicated by the denser rugs in Figure 1. We compare the proposed method with the varying-coefficient model [12] and the change-point regression model [22]. The simulation results show that the compared methods perform relatively poorly (root mean squared error (RMSE) = 4.853 and root prediction error (RPE) = 1.325 for the varying-coefficient model; RMSE = 3.158 and RPE = 1.242 for the change-point regression model), while the proposed method recovers the true coefficient trajectories consistently (RMSE = 0.954, RPE = 0.893).
The rest of this paper is organized as follows. In Section 2, we introduce the proposed approach, present the algorithm, and discuss some theoretical properties. Simulations are shown in Section 3. Section 4 presents the analysis of two The Cancer Genome Atlas (TCGA) datasets. Section 5 concludes the paper. The technical details of proofs and additional numerical results are provided in the Appendix A, Appendix B, Appendix C and Appendix D.

2. Materials and Methods

Assume a dataset consisting of N independent subjects. For subject n, let $y_n$ and $X_n = (X_{1n}, X_{2n}, \ldots, X_{pn})^\top$ denote the response variable and the p-dimensional vector of genomic measurements, respectively. In our numerical study, we analyze gene expression data. It is noted that the proposed approach can also be applied to other types of omics measurements. Assume the data have been standardized and consider a heterogeneous linear regression model given by:
$$y_n = X_n^\top \beta_n + \varepsilon_n, \quad (1)$$
where the $\varepsilon_n$'s are independent and identically distributed (i.i.d.) random errors and $\beta_n = (\beta_{1n}, \beta_{2n}, \ldots, \beta_{pn})^\top$ are the regression coefficients. Different from the standard regression model, which imposes an identical $\beta$ on all subjects, model (1) allows $\beta_n$ to be subject-specific. Here, we consider a linear regression, which is standard for modeling the relationship between covariates and outcomes. The proposed approach is applicable to other models, for example, the AFT model; more details are provided in Appendix A. In this paper, we focus on a scenario where the heterogeneity analysis of covariate effects can be conducted with the aid of an auxiliary variable whose measurement is available for all N subjects. Specifically, we assume that the subjects have been sorted according to the auxiliary variable's values. Further, the effect of a relevant covariate on the response variable is expected to vary smoothly across subjects. The studies reviewed in Section 1 and other similar ones suggest that the covariate (e.g., genetic) effects are usually associated with clinical traits. As such, we choose an auxiliary variable with known interactions with clinical variables. Please see the examples in the data analysis section (Section 4) for details.
Remark 1.
In subgroup-level heterogeneity analysis, an auxiliary variable may not be needed. However, a subject-level heterogeneity analysis is intractable without the auxiliary variable due to non-identifiability. To date, the existing methods that can handle this type of heterogeneity, for example, varying-coefficient and interaction analyses, all require an auxiliary variable. Note that, in our analysis, the auxiliary variable does not need to be "precise". Consider, for example, a sample of size 5. Auxiliary variable A has the values 1, 3, 7, 2, and 9 for the five subjects, and auxiliary variable B has the values −0.8, 0.4, 0.5, 0.0, and 3. Although the values of auxiliary variables A and B do not match, they sort the subjects identically, and hence the proposed method leads to the same estimated covariate effects when either variable is used as the ordering index.
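To make this concrete, below is a minimal sketch (in Python, the language of our released implementation; the variable names are illustrative only) verifying that the two auxiliary variables above induce the same subject ordering:

```python
import numpy as np

# Auxiliary variables from the example above: different values,
# but they rank the five subjects identically.
aux_a = np.array([1, 3, 7, 2, 9])
aux_b = np.array([-0.8, 0.4, 0.5, 0.0, 3])

order_a = np.argsort(aux_a)  # subject indices sorted by A
order_b = np.argsort(aux_b)  # subject indices sorted by B

# Identical orderings imply identical sorted datasets,
# and hence identical estimates from the proposed method.
assert (order_a == order_b).all()
print(order_a)  # [0 3 1 2 4]
```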
As previously mentioned, we propose a novel penalized estimation. Denote $\beta_j = (\beta_{j1}, \ldots, \beta_{jN})^\top$ and $\beta = (\beta_1^\top, \beta_2^\top, \ldots, \beta_p^\top)^\top$. We define the estimator $\hat{\beta}$ as the solution of the following optimization problem:
$$\hat{\beta} = \arg\min_{\beta} F(\beta) \equiv \frac{1}{2N}\sum_{n=1}^{N}\big(y_n - X_n^\top \beta_n\big)^2 + \lambda_1 \sum_{j=1}^{p} \omega_j \|\beta_j\|_2 + \lambda_2 \sum_{j=1}^{p} \sum_{n=2}^{N-1} \frac{1}{2}\big[(\beta_{j,n+1} - \beta_{jn}) - (\beta_{jn} - \beta_{j,n-1})\big]^2, \quad (2)$$
where $\|u\|_2$ represents the two-norm of any vector $u$, the $\omega_j$'s are weights, and $\lambda_1 \ge 0$ and $\lambda_2 \ge 0$ are data-dependent tuning parameters. We also introduce an "expanded" measurement matrix $Z$:
$$Z = \big(\operatorname{diag}(X_{11}, \ldots, X_{1N}),\ \operatorname{diag}(X_{21}, \ldots, X_{2N}),\ \ldots,\ \operatorname{diag}(X_{p1}, \ldots, X_{pN})\big) \in \mathbb{R}^{N \times Np}.$$
We denote $Y = (y_1, y_2, \ldots, y_N)^\top$. Then, objective function $F(\beta)$ can be rewritten in a more compact form:
$$F(\beta) = \frac{1}{2N}\|Y - Z\beta\|_2^2 + \lambda_1 \sum_{j=1}^{p} \omega_j \|\beta_j\|_2 + \frac{\lambda_2}{2} \sum_{j=1}^{p} \|A\beta_j\|_2^2,$$
where $A$ is the $(N-2) \times N$ matrix whose $n$th row is $(e_n - 2e_{n+1} + e_{n+2})^\top$, $n = 1, 2, \ldots, N-2$, with $e_n$ being the $N \times 1$ column vector whose $n$th element is 1 and all other elements are 0.
Rationale. In (2), the first term is the lack-of-fit measure, expressed as a sum over the N individual subjects. The first penalty is the group Lasso on $\beta$. Here, a "group" refers to the regression coefficients of the N subjects for a specific covariate. This penalty accommodates the high dimensionality of the data and allows for the regularized estimation and selection of relevant covariates. The "all-in-all-out" property of the group Lasso leads to a homogeneous sparsity structure; that is, the N subjects share the same set of important covariates. To obtain an oracle estimator, we add a weight $\omega_j$ to the sparsity penalty, which is determined by an initial estimator. Assuming that an initial estimator $\tilde{\beta}_j$ is available, we let $\omega_j = 1/\|\tilde{\beta}_j\|$.
The main advancement is the second penalty, which has a spline form. It penalizes the (discrete-version) second-order derivatives of the coefficients $\beta_{jn}$ to promote the smoothness of coefficients between adjacent subjects. Note that the coefficients of any adjacent subjects are assigned a penalty of the same magnitude, regardless of the distance between subjects as measured by the auxiliary variable. Different from the standard spline-lasso penalty [23], it is imposed on the regression coefficients of different subjects. Furthermore, different from some alternatives that promote first-order smoothness, such as the fused Lasso [24] and smooth Lasso [25], this penalty encourages second-order smoothness. Additionally, the quadratic form of this penalty makes it computationally easier than absolute-value-form penalties, such as the Lasso. It is noted that gene-environment interaction analysis can also capture the smooth change of covariate effects over an auxiliary variable (environmental factor). However, the interaction analysis approach requires specifying a parametric form for the relationship between covariate effects and the auxiliary variable, which is not very flexible in practice, in particular for high-dimensional data.
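As an illustration of how objective (2) is evaluated, the following is a minimal numpy sketch; it is a simplified illustration rather than the released implementation, and storing $\beta$ as a p × N array is our assumption for readability:

```python
import numpy as np

def second_diff_matrix(N: int) -> np.ndarray:
    """The (N-2) x N matrix A whose nth row is e_n - 2*e_{n+1} + e_{n+2}."""
    A = np.zeros((N - 2, N))
    for n in range(N - 2):
        A[n, n], A[n, n + 1], A[n, n + 2] = 1.0, -2.0, 1.0
    return A

def objective(beta, X, y, lam1, lam2, omega):
    """Evaluate F(beta) in (2); beta is p x N, X is N x p, y has length N."""
    p, N = beta.shape
    fitted = np.einsum('nj,jn->n', X, beta)          # X_n^T beta_n for each n
    lack_of_fit = np.sum((y - fitted) ** 2) / (2 * N)
    group_lasso = lam1 * np.sum(omega * np.linalg.norm(beta, axis=1))
    A = second_diff_matrix(N)
    spline = 0.5 * lam2 * np.sum((beta @ A.T) ** 2)  # sum_j ||A beta_j||^2
    return lack_of_fit + group_lasso + spline
```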

2.1. Computation

Optimization (2) can be realized using a block coordinate descent (CD) algorithm. For each covariate j, its measurements on the N subjects, $X_j = (X_{j1}, X_{j2}, \ldots, X_{jN})^\top$, form a group, and the corresponding coefficients $\beta_j$ are updated simultaneously. The algorithm optimizes the objective function with respect to one group of coefficients at a time and iteratively cycles through all groups until convergence is reached. Let $Z_{[j]} = \operatorname{diag}(X_j)$ represent the sub-matrix of $Z$ corresponding to $X_j$, which is a diagonal matrix. We denote $\beta_j^{(k)}$ as the estimate of $\beta_j$ in the kth iteration. The proposed algorithm proceeds as follows:
1.
Initialize $k = 0$, $\beta^{(0)} = \mathbf{0}$, and set $\beta^{(-1)} = \beta^{(0)}$.
2.
Update $k = k + 1$. For $j \in \{1, 2, \ldots, p\}$, minimize $M(\beta_j)$ with respect to $\beta_j$, where:
$$M(\beta_j) = L(\beta_j) + \lambda_1 \omega_j \|\beta_j\|_2, \qquad L(\beta_j) = \frac{1}{2N}\Big\|Y - Z_{[j]}\beta_j - \sum_{m>j} Z_{[m]}\beta_m^{(k-1)} - \sum_{m<j} Z_{[m]}\beta_m^{(k)}\Big\|_2^2 + \frac{\lambda_2}{2}\|A\beta_j\|_2^2.$$
This can be realized by executing the following steps:
(a)
Set the step size $t = 1$. Compute
$$D_{1j} = \frac{1}{N} Z_{[j]}^\top\Big(\sum_{m \ge j} Z_{[m]}\beta_m^{(k-1)} + \sum_{m<j} Z_{[m]}\beta_m^{(k)} - Y\Big) + \lambda_2 A^\top A \beta_j^{(k-1)}, \qquad G_j = \left(1 - \frac{t\lambda_1\omega_j}{\|\beta_j^{(k-1)} - tD_{1j}\|_2}\right)_+ \big(\beta_j^{(k-1)} - tD_{1j}\big).$$
Shrink the step size via $t \leftarrow 0.8t$ until
$$L(G_j) \le L(\beta_j^{(k-1)}) + D_{1j}^\top (G_j - \beta_j^{(k-1)}) + \frac{1}{2t}\|G_j - \beta_j^{(k-1)}\|_2^2.$$
(b)
Compute
$$v = \beta_j^{(k-1)} + \frac{k-2}{k+1}\big(\beta_j^{(k-1)} - \beta_j^{(k-2)}\big), \qquad D_{2j} = \frac{1}{N} Z_{[j]}^\top\Big(Z_{[j]}v + \sum_{m>j} Z_{[m]}\beta_m^{(k-1)} + \sum_{m<j} Z_{[m]}\beta_m^{(k)} - Y\Big) + \lambda_2 A^\top A v,$$
and update the estimate of $\beta_j$ by
$$\beta_j^{(k)} \leftarrow \left(1 - \frac{t\lambda_1\omega_j}{\|v - tD_{2j}\|_2}\right)_+ (v - tD_{2j}).$$
3.
Repeat Step 2 until convergence is achieved. In our numerical study, the convergence criterion is $\max_{1 \le j \le p}\|\beta_j^{(k)} - \beta_j^{(k-1)}\|_2 < 10^{-3}$.
To speed up the algorithm, we add a momentum term based on the previous iterations of $\beta_j$ in Step 2(b) and determine the step size t via the backtracking line search method. After the algorithm converges, some groups of coefficients are estimated as zeros. To further improve the estimation accuracy, in practice, we can remove the covariates with zero coefficients and re-estimate the nonzero coefficients by minimizing objective function (2) without the sparsity penalty. The proposed approach involves two tuning parameters, which are selected using a grid search with K-fold cross-validation (K = 5).
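For concreteness, the group soft-thresholding map that closes Steps 2(a) and 2(b) can be sketched as follows (a simplified Python illustration; the function names are ours and not part of the released code):

```python
import numpy as np

def group_soft_threshold(z: np.ndarray, threshold: float) -> np.ndarray:
    """Return (1 - threshold/||z||_2)_+ * z, the proximal map of the
    group-lasso penalty; the whole group is zeroed when ||z||_2 <= threshold."""
    norm = np.linalg.norm(z)
    if norm <= threshold:
        return np.zeros_like(z)
    return (1.0 - threshold / norm) * z

def proximal_update(point, gradient, t, lam1, omega_j):
    """One proximal-gradient step: a gradient move from `point` (e.g.,
    beta_j^(k-1) in Step 2(a) or the momentum point v in Step 2(b)),
    followed by group soft-thresholding with threshold t*lam1*omega_j."""
    return group_soft_threshold(point - t * gradient, t * lam1 * omega_j)
```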
Realization. To facilitate data analysis within and beyond this study, we have developed Python code implementing the proposed approach and made it publicly available at https://github.com/foliag/SSA (accessed on 21 March 2022). The proposed approach is computationally affordable. As shown in Figure A1, its computational time grows linearly with the number of features.

2.2. Statistical Properties

Here, we establish the consistency properties of the proposed approach. We define a new dataset $(\tilde{Y}, \tilde{Z})$ by $\tilde{Y} = (Y^\top, \mathbf{0}^\top)^\top \in \mathbb{R}^{N+(N-2)p}$ and $\tilde{Z} = (Z^\top, \sqrt{N\lambda_2}\,\bar{A}^\top)^\top \in \mathbb{R}^{(N+(N-2)p) \times Np}$, where $\bar{A} = A \otimes I_{p \times p}$. Then, objective function (2) can be converted to an adaptive group Lasso form:
$$F(\beta) = \frac{1}{2N}\|\tilde{Y} - \tilde{Z}\beta\|_2^2 + \lambda_1 \sum_{j=1}^{p} \omega_j \|\beta_j\|_2.$$
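This conversion is straightforward to implement. A minimal sketch (illustrative Python; A_bar denotes the block matrix that applies A to each $\beta_j$, e.g., a Kronecker product with an identity matrix under the chosen coefficient ordering):

```python
import numpy as np

def augment(Y, Z, A_bar, lam2):
    """Stack (Y, 0) over (Z, sqrt(N*lam2)*A_bar) so that the quadratic
    smoothness penalty is absorbed into the least-squares term, leaving
    an ordinary adaptive group-lasso problem."""
    N = Y.shape[0]
    Y_tilde = np.concatenate([Y, np.zeros(A_bar.shape[0])])
    Z_tilde = np.vstack([Z, np.sqrt(N * lam2) * A_bar])
    return Y_tilde, Z_tilde
```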
Let $\beta^0 = ((\beta_1^0)^\top, (\beta_2^0)^\top, \ldots, (\beta_p^0)^\top)^\top$ be the true parameter values. We denote q as the number of nonzero coefficient vectors. Without loss of generality, assume $\beta_j^0 \ne 0$ for $1 \le j \le q$. We define two sets, $E_1 = \{j \mid 1 \le j \le q\}$ and $E_0 = \{j \mid q+1 \le j \le p\}$, corresponding to the indices of the nonzero and zero coefficient vectors, respectively. Let $J = \bar{A}^\top \bar{A}$ and $\Sigma = \frac{1}{N} Z^\top Z + \lambda_2 J$. We then use $\tau$ to represent the minimal eigenvalue of the matrix $\Sigma$. The following conditions are assumed:
(C0)
The errors $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_N$ are i.i.d. sub-Gaussian random variables with mean zero. That is, for certain constants $0.5 \le t \le 1$ and $K, C > 0$, the tail probabilities of $\varepsilon_n$ satisfy $P(|\varepsilon_n| > x) \le K e^{-Cx^t}$ for all $x \ge 0$ and $n = 1, 2, \ldots, N$.
(C1)
Let $m = \max_{1 \le j \le p,\, 1 \le n \le N} |X_{jn}|$. Then, $m = O(1)$.
(C2)
Let $\alpha_1 = \min_{j \in E_1} \|\beta_j^0\|_2 / \sqrt{N}$. Then, $\alpha_1 = O(1)$. Moreover, there exists a constant $\alpha_2 > 0$ such that $P(\min_{j \in E_1} \|\tilde{\beta}_j\| > \alpha_2) \to 1$.
(C3)
$\tau > 0$ and $\lambda_2 / \tau \to 0$.
(C4)
$\|J\beta^0\|_2 = O(\sqrt{N})$.
Condition (C0) is the sub-Gaussian condition commonly assumed in the literature [26]. Condition (C1) assumes that the measurement matrix is bounded. Similar conditions have been considered by Martinussen and Scheike [27] and Binkiewicz and Vogelstein [28]. Condition (C2) puts a lower bound on the size of the smallest signal and assumes that the initial $\tilde{\beta}_j$ is not too small for $j \in E_1$. Similar conditions have been considered by Wei and Huang [29]. Condition (C3) is similar to the assumption made in Case I of Guo et al. [23], which requires $\Sigma$ to be invertible and the minimal eigenvalue $\tau$ to converge to 0 at a rate controlled by $\lambda_2$. Condition (C4) places a weak constraint on $\beta^0$, which is satisfied when, for any nonzero coefficient vector $\beta_k$ ($k \in E_1$), the largest gap between two adjacent components is bounded.
Theorem 1.
Assume Conditions (C0)–(C4) hold, as does the event $\Omega = \left\{\max_{1 \le j \le p} \|\tilde{\beta}_j\| = o\!\left(\frac{N^{3/4}\lambda_1}{\log N \log p}\right)\right\}$ as N goes to infinity. We define $\|\beta^0 - \hat{\beta}\|_{2,N} = \|\beta^0 - \hat{\beta}\|_2 / \sqrt{N}$. Then, with a probability converging to one, we have
$$\|\beta^0 - \hat{\beta}\|_{2,N} \le \frac{4\lambda_1\sqrt{q}\,\alpha_2^{-1} + 2\lambda_2\|J\beta^0\|_2}{\tau\sqrt{N}}.$$
The proof is provided in Appendix B. If q is not too large and $\alpha_2$ and $\tau$ are not too small, we may have $\frac{\sqrt{q}}{\tau\alpha_2} = o\!\left(\frac{N^{5/4}}{\log N \log p}\right)$ (more details below). Then, we can find a $\lambda_1$ that satisfies $\frac{1}{\lambda_1} = o\!\left(\frac{N^{3/4}}{\log N \log p}\right)$ and $\lambda_1 = o\!\left(\frac{\tau\alpha_2\sqrt{N}}{\sqrt{q}}\right)$ simultaneously. It is not difficult to prove that event $\Omega$ holds for the marginal regression estimator as the initial estimator. As a result, under Conditions (C3) and (C4), the gap between $\beta^0$ and $\hat{\beta}$ converges to 0. This theorem thus establishes estimation consistency.
The following additional conditions are assumed:
(C5)
The initial estimators $\tilde{\beta}_j$ are r-consistent for the estimation of certain $\xi_j$:
$$r\,\max_{j \in E_0} \|\tilde{\beta}_j - \xi_j\| = O_p(1), \qquad r \to \infty,$$
where $\xi_j$ is an unknown constant vector satisfying $\max_{j \in E_0} \|\xi_j\| \le M$.
(C6)
The constants $\{p, q, M, \lambda_1, \lambda_2, \tau, \alpha_2\}$ satisfy:
$$\frac{\sqrt{q}\log N}{\tau N^{5/4}} + \frac{\lambda_1}{\tau\alpha_2}\sqrt{\frac{q}{N}} + \frac{\log N \log(p-q)\,(N + q\tau^{-1})}{N^{9/4}\lambda_1}\left(\frac{1}{r} + M\right) \to 0,$$
$$\frac{2m^2\sqrt{q}\big(\lambda_1\alpha_2^{-1}\sqrt{q} + \lambda_2\|J\beta^0\|_2\big)}{\tau\lambda_1\sqrt{N^3}}\left(\frac{1}{r} + M\right) \le 1.$$
Condition (C5) is similar to Condition (A2) in Huang et al. [26], and it ensures that the weight $\omega_j \approx 1/\|\xi_j\|$ is not too small for $j \in E_0$. Condition (C6) restricts the numbers of covariates with zero and nonzero coefficients, the penalty parameters, the minimal eigenvalue of $\Sigma$, and the smallest nonzero coefficient. Given all the conditions in Theorems 1 and 2, we may assume $\lambda_1 = O(N^a)$, $\lambda_2 = O(N^b)$, and $\tau = O(N^c)$ for some $0 < c < b < a < 0.5$; then, the number of nonzero coefficients q can be as large as $N^d$ for some $0 \le d \le \frac{2(1-a+b-c)}{3}$. In this case, there can be $O(e^{N^{1/2-\delta}})$ zero coefficients, where $\delta$ is a small nonzero constant, assuming $\alpha_2 = O(N^{d-\frac{1}{2}})$ and $M = O(1)$.
Theorem 2.
Under Conditions (C0)–(C6),
$$P\big(\|\hat{\beta}_j\|_2 \ne 0,\ \forall j \in E_1;\ \|\hat{\beta}_j\|_2 = 0,\ \forall j \in E_0\big) \to 1.$$
The proof is provided in Appendix C. This theorem establishes the selection consistency properties of the proposed approach under a high-dimensional setting.

3. Simulation

We set $p = 500$. The data are generated from the following true model:
$$y_n = \sum_{j=1}^{q} X_{jn}\beta_{jn} + \varepsilon_n, \qquad n = 1, 2, \ldots, N,$$
where the random errors are simulated independently from N ( 0 , 1 ) . We investigate nine scenarios for the coefficients as follows:
Scenario 1.
The coefficients are generated from trigonometric functions; for n = 1 , 2 , , N ,
$$\beta_{jn} = \begin{cases} 1.5\sin\big(\frac{20\pi}{N}u_{jn}\big) + 2.5, & j = 1, \ldots, \frac{q}{4}, \\ 1.5\cos\big(\frac{17\pi}{N}u_{jn} + 0.4\big) + 2.5, & j = \frac{q}{4}+1, \ldots, \frac{q}{2}, \\ 1.5\sin\big(\frac{17\pi}{N}u_{jn} - 1.2\big) + 2.5, & j = \frac{q}{2}+1, \ldots, \frac{3q}{4}, \\ 1.5\cos\big(\frac{20\pi}{N}u_{jn} - 2\big) + 2.5, & j = \frac{3q}{4}+1, \ldots, q, \end{cases}$$
where $u_{jn} = a_j + \frac{N}{10}\,n$ and $a_j \sim U(0, 0.5)$.
Scenario 2.
The coefficients are generated from exponential functions:
$$\beta_{jn} = \begin{cases} 4\exp(-u_{jn}) + 1, & j = 1, \ldots, \frac{q}{4}, \\ 4\exp(-0.9u_{jn}) + 1, & j = \frac{q}{4}+1, \ldots, \frac{q}{2}, \\ 4\exp(-0.8u_{jn}) + 1, & j = \frac{q}{2}+1, \ldots, \frac{3q}{4}, \\ 4\exp(-0.7u_{jn}) + 1, & j = \frac{3q}{4}+1, \ldots, q, \end{cases}$$
where $u_{jn} = a_j + \frac{N}{100}\,n$ and $a_j \sim U(0, 0.2)$.
Scenario 3.
The coefficients are generated from logarithmic functions:
$$\beta_{jn} = \begin{cases} 0.5\ln(u_{jn})^{3} + 1, & j = 1, \ldots, \frac{q}{4}, \\ 0.5\ln(u_{jn})^{2.9} + 1, & j = \frac{q}{4}+1, \ldots, \frac{q}{2}, \\ 0.5\ln(u_{jn})^{2.7} + 1, & j = \frac{q}{2}+1, \ldots, \frac{3q}{4}, \\ 0.5\ln(u_{jn})^{2.5} + 1, & j = \frac{3q}{4}+1, \ldots, q, \end{cases}$$
where $u_{jn} = a_j + \frac{N}{20}\,n$ and $a_j \sim U(0.7, 0.9)$.
Scenario 4.
The coefficients are generated from linear functions:
$$\beta_{jn} = \begin{cases} 0.16u_{jn} + 2, & j = 1, \ldots, \frac{q}{4}, \\ 0.15u_{jn} + 2, & j = \frac{q}{4}+1, \ldots, \frac{q}{2}, \\ 0.14u_{jn} + 2, & j = \frac{q}{2}+1, \ldots, \frac{3q}{4}, \\ 0.13u_{jn} + 2, & j = \frac{3q}{4}+1, \ldots, q, \end{cases}$$
where $u_{jn} = a_j + \frac{N}{10}\,n$ and $a_j \sim U(0, 1)$.
Scenario 5.
The coefficients are constants:
$$\beta_{jn} = \begin{cases} 3a_j + 2, & j = 1, \ldots, \frac{q}{2}, \\ 2a_j + 2, & j = \frac{q}{2}+1, \ldots, q, \end{cases}$$
where $a_j \sim U(0, 1)$.
Scenario 6.
The coefficients are generated from the four above (trigonometric, exponential, logarithmic and linear) functions, respectively. Each function generates an equal number of coefficients.
Scenario 7.
The coefficients are generated from the four above functions, where 40% and 35% of the coefficients are generated from the trigonometric and linear functions, respectively, and 10% and 15% of the coefficients are generated from the exponential and logarithmic functions, respectively.
Scenario 8.
The coefficients are generated from the four functions. The trigonometric, exponential, logarithmic, and linear functions generate 35%, 15%, 20%, and 30% of the coefficients, respectively.
Scenario 9.
The coefficients are generated as in Scenario 5. We select 40% of the coefficients and, for each of them, add random perturbations to their values over one or two ranges, where each range includes 20 consecutive subjects.
In Scenarios 1–5, the q nonzero coefficients are generated from the same function, whereas they are generated from different functions in Scenarios 6–9. The coefficients in Scenario 5 are constants; that is, there is no heterogeneity in the covariate effects. Some of the coefficients in Scenario 9 do not change smoothly across subjects but have a few discontinuous areas. Figure A2 presents the q = 20 nonzero coefficients as functions of the N = 200 subjects under the nine scenarios. In the first eight scenarios, the p covariates are generated from a multivariate normal distribution with marginal mean 0 and variance 1. We consider an auto-regressive correlation structure, where covariates j and k have the correlation coefficient $\rho^{|j-k|}$, with $\rho = 0.3$ and $0.8$ corresponding to weak and strong correlations, respectively. In Scenario 9, the p covariates are generated independently from a uniform distribution on $(-1, 1)$. It is noted that the aforementioned nonlinear functions of regression coefficients are widely used in simulation studies of varying-coefficient models for genomic data [30,31].
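For reference, a minimal sketch of this data-generating mechanism (shown for Scenario 5 with ρ = 0.3; the random seed and names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, q, rho = 200, 500, 20, 0.3

# Auto-regressive correlation: cov(X_j, X_k) = rho ** |j - k|.
cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(mean=np.zeros(p), cov=cov, size=N)  # N x p

# Scenario 5 (constant effects): beta_jn = 3*a_j + 2 for the first q/2
# nonzero covariates and 2*a_j + 2 for the rest; zero otherwise.
a = rng.uniform(0, 1, size=q)
beta = np.zeros((p, N))
beta[: q // 2] = (3 * a[: q // 2] + 2)[:, None]
beta[q // 2 : q] = (2 * a[q // 2 :] + 2)[:, None]

y = np.einsum('nj,jn->n', X, beta) + rng.standard_normal(N)  # N(0,1) errors
```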
We consider two versions of the proposed approach. One uses the "standard" Lasso to obtain the initial estimator of the coefficients (New-Lasso), and the other uses marginal regression (New-Mar). Both initial estimators are homogeneous; that is, the coefficients are the same for all subjects. To better gauge the proposed approach, we compare it with three alternatives: (a) Lasso, which directly applies the Lasso method to the entire dataset but does not account for the heterogeneity of coefficients across subjects; (b) AdLasso, the group adaptive Lasso in the varying-coefficient model [12]; and (c) IVIS, which uses the independent screening technique for fitting the varying-coefficient model [14]. The last two methods focus on variable selection and estimation for the varying-coefficient model in high-dimensional settings, where each nonzero coefficient is assumed to be a smooth function of a known auxiliary variable.
For the proposed approach and its alternatives, we evaluate the variable selection performance by TP (the number of true positives) and FP (the number of false positives). Estimation and prediction are also evaluated. Specifically, estimation is measured by RMSE (root mean squared error), defined as $\sqrt{\frac{1}{p}\sum_{j=1}^{p}\|\beta_j - \hat{\beta}_j\|_2^2}$, and prediction is measured by RPE (root prediction error), defined as $\sqrt{\frac{1}{N}\sum_{n=1}^{N}(y_n - X_n^\top\hat{\beta}_n)^2}$.
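These metrics are straightforward to compute; a short sketch, again assuming the estimated coefficients are stored as a p × N array:

```python
import numpy as np

def tp_fp(beta_hat, support):
    """TP/FP counts; covariate j is selected when ||beta_hat_j||_2 > 0."""
    selected = np.linalg.norm(beta_hat, axis=1) > 0
    tp = int(selected[support].sum())
    return tp, int(selected.sum()) - tp

def rmse(beta_hat, beta_true):
    """sqrt((1/p) * sum_j ||beta_j - beta_hat_j||_2^2)."""
    return np.sqrt(np.mean(np.sum((beta_true - beta_hat) ** 2, axis=1)))

def rpe(y, X, beta_hat):
    """sqrt((1/N) * sum_n (y_n - X_n^T beta_hat_n)^2)."""
    fitted = np.einsum('nj,jn->n', X, beta_hat)
    return np.sqrt(np.mean((y - fitted) ** 2))
```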
Table 1 summarizes the simulation results over 100 replications for the settings with N = 200, q = 20, and ρ = 0.3. The rest of the results are presented in Table A1, Table A2 and Table A3. Across the simulation spectrum, the proposed approach has superior variable selection performance, as it identifies more important variables while having a low number of false positives. For example, in Scenario 1 with N = 200 and ρ = 0.3 (Table 1), New-Lasso has (TP, FP) = (18.44, 0.16), while Lasso has (TP, FP) = (14.56, 0.30), AdLasso (TP, FP) = (16.64, 0.70), and IVIS (TP, FP) = (13.76, 3.28). Consider another example, Scenario 9 with N = 200 and q = 20 (Table 1). For the identification of important variables, the four approaches have TP values of 18.30 (New-Lasso), 15.40 (Lasso), 15.74 (AdLasso), and 14.24 (IVIS), and FP values of 0.00 (New-Lasso), 2.60 (Lasso), 0.40 (AdLasso), and 4.64 (IVIS), suggesting that the proposed approach is robust to perturbations. In most scenarios, New-Lasso outperforms New-Mar when the covariates are weakly correlated (ρ = 0.3) but performs worse than New-Mar when the covariates are strongly correlated (ρ = 0.8). These results stem from the fact that the Lasso is not good at dealing with highly correlated covariates. In practice, we can select one of the two versions according to the correlations among covariates; examples are provided in Section 4. Lasso identifies a reasonable number of important variables but with more false positives than the proposed approach. AdLasso shows good variable selection performance, but it is inferior to the proposed approach under most simulation settings. IVIS has the worst performance among the five approaches.
In the evaluation of estimation, the proposed approach again has a favorable performance. We plot the estimated nonzero coefficients as functions of the subjects, together with 95% point-wise confidence intervals (Figure A3). In Scenario 6 with N = 200, q = 20, and ρ = 0.3, the estimated coefficients are close to the true ones, and the confidence intervals contain the true coefficients for most subjects. However, the estimation results become worse for the coefficients of the first and last few subjects. This is because less information is available for estimating these coefficients than for the intermediate ones. This problem can be alleviated by increasing the sample size (Figure A4). Additionally, the proposed approach outperforms the alternatives in terms of prediction under most scenarios.
Overall, the simulations suggest the favorable performance of the proposed approach. It is interesting to note that it performs satisfactorily even under the scenario with no heterogeneity (Scenario 5). Thus, it provides a safe choice for practical data analysis, where the degree of heterogeneity in covariate effects is unknown. The other simulation settings yield similar results; due to space constraints, we do not describe them here.

4. Data Analysis

Here, we apply the proposed approach to two TCGA datasets. As a cancer genomics program initiated by the National Institutes of Health (NIH), TCGA publishes high-quality clinical and genetic data. In our analysis, the data are downloaded from the cBioPortal website (http://www.cbioportal.org/, accessed on 16 January 2021) via the cgdsr R package.

4.1. SKCM Data

Cutaneous melanoma (SKCM) is a cancer of the skin cells called melanocytes and accounts for the majority of deaths from skin cancer. In our analysis, we are interested in the regulation of Breslow thickness, a measure of the size of melanoma growth, by gene expressions. We use age as the auxiliary variable, which is correlated with melanoma development and progression [32]. After removing subjects with missing values of Breslow thickness or age, a total of 228 patients are included in the analysis. The median age is 58 (range: 18–90 years), and the median Breslow thickness is 2.45 (range: 0.28–75). All patients are sorted by age in ascending order. Some patients have the same age, but each such age is shared by only a few (2–8) patients, and the analysis results show that the order of patients within each age has little impact on the identification of important genes and the effect estimation. Consequently, in the analysis, we sort the patients with the same age randomly. A total of 20,531 RNA-seq gene expression measurements are available; more specifically, the processed level-3 gene expression data are used. Please refer to [33] for detailed information on the generation and processing of the gene expression data. To improve the reliability of the results, we conduct a marginal screening to filter out irrelevant genes and include the 400 genes with the lowest p-values in the downstream analysis. The gene expressions are assumed to connect with the response variable via a linear model.
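The marginal screening step can be sketched as follows (illustrative Python; keeping 400 genes follows the text, while using a Pearson correlation test for the marginal p-value is one simple choice):

```python
import numpy as np
from scipy import stats

def marginal_screen(X, y, keep=400):
    """Return the column indices of the `keep` genes with the smallest
    marginal p-values for association with the response y."""
    pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(X.shape[1])])
    return np.argsort(pvals)[:keep]
```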
The average correlation coefficient of the 400 genes is 0.07, which is close to the 0.06 observed in the above simulation studies with ρ = 0.3. As such, we adopt the New-Lasso method, which identifies 6 important genes. Figure 2 shows the estimated coefficients for the 6 genes. The changes in the effects of the genes across patients are prominent, which suggests that the heterogeneous model is more appropriate for this dataset. We observe different change patterns for the effects of the 6 genes. Specifically, the effects of genes AOC4P and EDNRB first increase and then decrease; genes CRELD2 and TRIM64 show an increase followed by a plateau, while gene SERPINA3 demonstrates the opposite pattern; and the effect of gene OR10GB has a bowl-shaped pattern. The literature suggests that the identified genes are biologically meaningful. For example, gene EDNRB provides instructions for making a protein called endothelin receptor type B. Inherited variations in this gene may be associated with an increased risk of melanomas [34]. Recent studies revealed that gene AOC4P plays critical roles at multiple levels in diverse physiological and pathological processes [35]. Some of the changes in metastatic melanomas were identified in gene SERPINA3, which encodes a protein involved in the regulation of the extracellular matrix [36]. A high SERPINA3 expression correlates with shorter disease survival [37,38], suggesting that the SERPINA3 expression can be used as a prognostic marker in melanoma.
We also apply the alternatives described above. The comparative results are provided in Table A4. The different methods identify different sets of genes. With real data, the true set of important genes is unknown, and thus it is difficult to directly evaluate the identification and estimation accuracy. To verify the results, we instead evaluate prediction and stability. Specifically, the dataset is split into a training set and a testing set at a 2:1 ratio. The regression coefficients are estimated using the training set and used to make predictions for the subjects in the testing set. We repeat this process 50 times and calculate the average root prediction errors (RPEs): 0.775 (New-Lasso), 1.072 (Lasso), 1.036 (AdLasso), and 1.393 (IVIS). The proposed approach has the best prediction performance. Moreover, for the proposed approach, we compare the RPEs on the training sets with those on the testing sets, and no significant differences are found (p-value > 0.5), suggesting that the proposed approach does not produce obvious over-fitting. Additionally, we compute the observed occurrence index (OOI) values to evaluate the stability of the identification results. Figure A6 shows the OOIs of all methods. The proposed approach significantly outperforms the alternatives in terms of identification stability.

4.2. LUAD Data

Lung adenocarcinoma (LUAD) is a form of non-small cell lung cancer and is the most common type of lung cancer. In our analysis, survival time is the response variable. There are a total of 231 patients, sorted by their forced expiratory volume in one second (FEV1), an important measure of lung function. The median follow-up time is 20 months (range: 0.13–232 months), and the median FEV1 is 83 (range: 1.95–156). A total of 18,325 RNA-seq gene expressions are initially available for the analysis. Using the same marginal screening process as described above, the number of gene expressions is reduced to 400.
We adopt the accelerated failure time (AFT) model for the analysis of these censored survival data. The estimation procedure described above can be directly applied to the AFT model (see Appendix A). Because the genes have an average correlation coefficient (0.16) higher than that in the simulation studies with ρ = 0.8 (≈0.13), the New-Mar method is used here. The proposed method identifies 7 genes, whose estimated coefficients are presented in Figure A5.
Extant studies provide biological evidence for the association of the identified genes with lung cancer. For example, AGTR1, the gene encoding angiotensin II receptor type 1, has been extensively studied in human cancers [39] and has shown a strong influence on tumor growth, angiogenesis, inflammation, and immunity [40]. Guo et al. [41] show that the methylation profiles of AGTR1 could form an effective methylation-based assay for non-small cell lung cancer diagnosis.
The data are also analyzed using the alternative methods. The summary comparison results (Table A4) again suggest that different methods produce different results. With censored survival data, we use the log-rank statistic to measure prediction performance, where higher log-rank statistics indicate better prediction. The proposed approach has an average log-rank statistic of 11.67, compared with 4.43 for Lasso, 5.81 for AdLasso, and 3.08 for IVIS. The OOI results are also presented in Figure A6. The proposed approach again has the highest OOI among all methods.

4.3. Simulation on SKCM Dataset

It has been recognized in some studies that simulated data may be "simpler" than real data. Here, we conduct an additional set of simulations based on the SKCM data analyzed above. Specifically, the observed gene expression data and the coefficients estimated in Section 4.1 are used in the simulation. The simulation results are summarized in Table A5. It is observed that the proposed method maintains an edge over the alternatives, which further supports its effectiveness.

5. Discussion

The mature application of high-throughput technology has produced a large amount of genomic data. With the rapid development of precision medicine, the heterogeneity of covariate effects has received increasing attention in disease genomic studies. However, most existing studies focus on subgroup-specific effects, meaning that the effects are the same within each subgroup, thus neglecting possibly varying effects within a subgroup. In this paper, we consider a setting in which the effects of covariates change smoothly across subjects. We propose a novel penalization-based estimation method, which combines a group-lasso penalty with a spline-lasso penalty. It advances subgroup-based studies by capturing the varying effects within each subgroup, and it also advances the existing varying-coefficient studies by lowering the requirements on the distribution of the auxiliary variable. We show that, under appropriate conditions, the proposed approach can correctly select the important covariates with a probability converging to one and estimates the coefficients consistently. Simulations demonstrated satisfactory practical performance, and the data analysis led to sensible findings that are significantly different from those obtained using alternative methods.
With the proposed regression model, it is impossible to directly estimate the subject-specific covariate effects due to the non-identifiability problem. This is resolved by introducing an auxiliary variable, which can have a biological interpretation. As such, it would be of interest to develop other frameworks that can differentiate heterogeneous covariate effects in the (partial) absence of an auxiliary variable. Additionally, the data analysis results warrant further investigation.

Author Contributions

Conceptualization, Y.S.; methodology, Z.L., Y.Z. and Y.S.; software, Z.L.; validation, Z.L. and Y.Z.; formal analysis, Z.L.; investigation, Z.L. and Y.Z.; resources, Y.S.; data curation, Z.L. and Y.S.; writing—original draft preparation, Z.L.; writing—review and editing, Y.S.; visualization, Z.L.; supervision, Y.S.; project administration, Y.S.; funding acquisition, Y.S. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 12171479), the Fund for building world-class universities (disciplines) of the Renmin University of China, and the Fund under the China Scholarship Council for Ziye Luo’s Visiting PhD program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. The SKCM and the LUAD datasets can be found here: http://www.cbioportal.org/ (accessed on 16 January 2021), and can be downloaded via the cgdsr R package.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Estimation under the Accelerated Failure Time Model

The AFT model is an alternative to the commonly used Cox model in survival analysis, and it regresses the logarithm of the survival time on the covariates. Consider a sample set $\{(X_n, Y_n): X_n \in \mathbb{R}^p, Y_n \in \mathbb{R}\}$ of size N, where $X_n = (X_{1n}, \ldots, X_{pn})^\top$ denotes the p-dimensional covariates. Under right censoring, we observe $Y_n = \min\{T_n, C_n\}$, where $T_n$ and $C_n$ denote the survival time and censoring time of the nth subject, respectively. Assume the N subjects have been sorted by a known biomarker. We specify the following AFT model:
$$\log(T_n) = \sum_{j=1}^{p} X_{jn}\beta_{jn} + \varepsilon_n, \qquad n = 1, 2, \ldots, N,$$
where ε n is the random error with mean zero.
The unknown coefficients $\beta_j = (\beta_{j1}, \ldots, \beta_{jN})^\top$ can be estimated by the weighted least squares method [42], where the weights are the Kaplan-Meier weights. Let $Y_{[1]} \le Y_{[2]} \le \cdots \le Y_{[N]}$ be the order statistics of $Y_n$, $n = 1, 2, \ldots, N$, and $\delta_{[n]}$ the associated censoring indicators. The Kaplan-Meier weights can be computed as:
$$w_1 = \frac{\delta_{[1]}}{N}, \qquad w_n = \frac{\delta_{[n]}}{N - n + 1}\prod_{i=1}^{n-1}\left(\frac{N - i}{N - i + 1}\right)^{\delta_{[i]}}, \quad n = 2, \ldots, N.$$
The weighted least-squares loss function becomes:
$$L(\beta) = \sum_{n=1}^{N} w_n\Big(\log Y_{[n]} - \sum_{j=1}^{p} X_{j[n]}\beta_{j[n]}\Big)^2,$$
where $X_{[n]}$ is the vector of covariates associated with $Y_{[n]}$ and the $\beta_{j[n]}$'s are the corresponding coefficients.
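For illustration, the weight computation can be sketched as follows (a minimal Python sketch; km_weights is our naming):

```python
import numpy as np

def km_weights(time, delta):
    """Kaplan-Meier weights for the weighted least-squares AFT fit.
    time: observed times Y_n; delta: event indicators (1 = event, 0 = censored).
    Returns the weights for the order statistics and the sorting index."""
    order = np.argsort(time)              # Y_[1] <= ... <= Y_[N]
    d = delta[order].astype(float)
    N = len(time)
    w = np.zeros(N)
    prod = 1.0                            # running product over i < n
    for n in range(N):                    # 0-based n corresponds to subject n+1
        w[n] = d[n] / (N - n) * prod      # delta_[n] / (N - n + 1) in 1-based form
        prod *= ((N - n - 1) / (N - n)) ** d[n]
    return w, order
```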

Appendix B

Proof of Theorem 1. 
From the definition of β ^ :
$$\frac{1}{2N}\|\tilde{Y} - \tilde{Z}\hat{\beta}\|_2^2 + \lambda_1\sum_{j=1}^{p}\frac{1}{\|\tilde{\beta}_j\|}\|\hat{\beta}_j\|_2 \le \frac{1}{2N}\|\tilde{Y} - \tilde{Z}\beta^0\|_2^2 + \lambda_1\sum_{j=1}^{p}\frac{1}{\|\tilde{\beta}_j\|}\|\beta_j^0\|_2. \quad (A1)$$
Let $\tilde{\varepsilon} = (\varepsilon^\top, -(\sqrt{N\lambda_2}\,\bar{A}\beta^0)^\top)^\top$ with $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_N)^\top$. From (A1),
$$\frac{1}{2N}\|\tilde{Z}\beta^0 - \tilde{Z}\hat{\beta}\|_2^2 \le \lambda_1\sum_{j=1}^{p}\omega_j\big(\|\beta_j^0\|_2 - \|\hat{\beta}_j\|_2\big) + \frac{1}{N}\tilde{\varepsilon}^\top\tilde{Z}(\hat{\beta} - \beta^0). \quad (A2)$$
From the Cauchy-Schwarz inequality, we have:
$$\frac{1}{N}\tilde{\varepsilon}^\top\tilde{Z}(\hat{\beta} - \beta^0) \le \frac{1}{N}\big|\varepsilon^\top Z(\hat{\beta} - \beta^0)\big| + \lambda_2\|J\beta^0\|_2\|\beta^0 - \hat{\beta}\|_2 \le \frac{1}{N}m\|\varepsilon\|_2\sum_{j=1}^{p}\|\beta_j^0 - \hat{\beta}_j\|_2 + \lambda_2\|J\beta^0\|_2\|\beta^0 - \hat{\beta}\|_2.$$
We define $\kappa_p = \max_{1 \le j \le p}\|\tilde{\beta}_j\|$. Under event $\Omega$, by Lemma 1 in Huang et al. [26], for any $1 \le j \le p$, we have:
$$P\left(\frac{1}{N}m\|\varepsilon\|_2 > \frac{\lambda_1}{\|\tilde{\beta}_j\|}\right) \le \exp\left\{-\left(\frac{N^{3/4}\lambda_1}{\kappa_p\,m\log N\log p}\right)^2\right\} \to 0.$$
As a result, from (A2):
$$\frac{1}{2N}\|\tilde{Z}\beta^0 - \tilde{Z}\hat{\beta}\|_2^2 \le \lambda_1\sum_{j=1}^{p}\omega_j\big(\|\beta_j^0\|_2 - \|\hat{\beta}_j\|_2\big) + \lambda_1\sum_{j=1}^{p}\omega_j\|\beta_j^0 - \hat{\beta}_j\|_2 + \lambda_2\|J\beta^0\|_2\|\beta^0 - \hat{\beta}\|_2 \le 2\lambda_1\sum_{j=1}^{q}\omega_j\|\beta_j^0 - \hat{\beta}_j\|_2 + \lambda_2\|J\beta^0\|_2\|\beta^0 - \hat{\beta}\|_2 \le 2\lambda_1\sqrt{q}\,\alpha_2^{-1}\|\beta^0 - \hat{\beta}\|_2 + \lambda_2\|J\beta^0\|_2\|\beta^0 - \hat{\beta}\|_2.$$
According to condition (C2), we finally obtain:
$$\|\beta^0 - \hat{\beta}\|_{2,N} \le \frac{4\lambda_1\sqrt{q}\,\alpha_2^{-1} + 2\lambda_2\|J\beta^0\|_2}{\tau\sqrt{N}}.$$
This completes the proof of Theorem 1. □

Appendix C

Proof of Theorem 2. 
Consider the Karush-Kuhn-Tucker (KKT) condition:
$$-\frac{1}{N}Z_{[j]}^\top(Y - Z\hat{\beta}) + \lambda_2 A^\top A\hat{\beta}_j + \lambda_1\omega_j\frac{\hat{\beta}_j}{\|\hat{\beta}_j\|_2} = 0, \quad \text{if } \hat{\beta}_j \ne 0,$$
$$-\lambda_1\omega_j e_N \le \frac{1}{N}Z_{[j]}^\top(Y - Z\hat{\beta}) \le \lambda_1\omega_j e_N, \quad \text{if } \hat{\beta}_j = 0,$$
where $e_N$ is an $N \times 1$ vector whose elements are all 1s, and the inequalities hold element-wise. We define $Z^* = \sqrt{N}\left(\frac{1}{N}Z^\top Z + \lambda_2 J\right)^{1/2}$ and $Y^* = (Z^*)^{-1}Z^\top Y$. Therefore, $\hat{\beta}$ is also the minimizer of the following objective function:
$$\frac{1}{2N}\|Y^* - Z^*\beta\|_2^2 + \lambda_1\sum_{j=1}^{p}\omega_j\|\beta_j\|_2.$$
As a result, if $\|\hat{\beta}_j\|_2 \ne 0$ for $j \in E_1$, then, by the KKT condition:
$$\frac{1}{N}Z_{E_1}^{*\top}\big(Y_{E_1}^* - Z_{E_1}^*\hat{\beta}_{E_1}\big) = W_{E_1},$$
where $W_{E_1} = (W_1^\top, \ldots, W_q^\top)^\top$ is an $Nq \times 1$ vector with $W_j = \lambda_1\omega_j\frac{\hat{\beta}_j}{\|\hat{\beta}_j\|_2}$. Since
$$Z^*\beta^0 - E(Y^*) = (Z^*)^{-1}(Z^*)^2\beta^0 - (Z^*)^{-1}Z^\top Z\beta^0 = (Z^*)^{-1}\big((Z^*)^2 - Z^\top Z\big)\beta^0 = (Z^*)^{-1}(N\lambda_2 J)\beta^0,$$
we have
$$Z_{E_1}^{*\top}Y_{E_1}^* = Z_{E_1}^{*\top}E(Y_{E_1}^*) + Z_{E_1}^{*\top}\big(Y_{E_1}^* - E(Y_{E_1}^*)\big) = (Z_{E_1}^*)^2\beta_{E_1}^0 - N\lambda_2 J_{E_1}\beta_{E_1}^0 + \big(Z_{E_1}^\top Y - E(Z_{E_1}^\top Y)\big) = N\Sigma_{E_1E_1}\beta_{E_1}^0 - N\lambda_2 J_{E_1}\beta_{E_1}^0 + Z_{E_1}^\top\varepsilon.$$
Let $\hat{u} = \hat{\beta} - \beta^0$ and $S = Z^\top\varepsilon/\sqrt{N}$. As a result, if there exists $\hat{u}$ such that:
$$\Sigma_{E_1E_1}\hat{u}_{E_1} - \frac{1}{\sqrt{N}}S_{E_1} = -W_{E_1} - \lambda_2 J_{E_1}\beta_{E_1}^0, \quad (A3)$$
$$\|\hat{u}_j\|_2 < \|\beta_j^0\|_2, \quad \text{for } j \in E_1, \quad (A4)$$
and
$$\frac{1}{N}\big\|Z_{[j]}^\top(Y - Z_{E_1}\hat{\beta}_{E_1})\big\|_2 < \sqrt{N}\lambda_1\omega_j, \quad \text{for } j \in E_0, \quad (A5)$$
then we have $\|\hat{\beta}_j\|_2 = 0$ for $j \in E_0$ and $\|\hat{\beta}_j\|_2 \ne 0$ for $j \in E_1$. From (A3),
$$\hat{u}_{E_1} - \frac{1}{\sqrt{N}}\Sigma_{E_1E_1}^{-1}S_{E_1} = -\Sigma_{E_1E_1}^{-1}W_{E_1} - \lambda_2\Sigma_{E_1E_1}^{-1}J_{E_1}\beta_{E_1}^0.$$
Then,
$$Y - Z_{E_1}\hat{\beta}_{E_1} = \varepsilon - Z_{E_1}(\hat{\beta}_{E_1} - \beta_{E_1}^0) = \varepsilon - \frac{1}{N}Z_{E_1}\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top\varepsilon + Z_{E_1}\Sigma_{E_1E_1}^{-1}W_{E_1} + \lambda_2 Z_{E_1}\Sigma_{E_1E_1}^{-1}J_{E_1}\beta_{E_1}^0.$$
We define $H = I - \frac{1}{N}Z_{E_1}\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top$. Then, from (A3)–(A5), if
$$\left\|\Big(\frac{1}{\sqrt{N}}\Sigma_{E_1E_1}^{-1}S_{E_1} - \Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\Big)_j\right\|_2 < \|\beta_j^0\|_2, \quad j \in E_1,$$
$$\frac{1}{N}\left\|Z_{[j]}^\top\Big(H\varepsilon + Z_{E_1}\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\Big)\right\|_2 < \sqrt{N}\lambda_1\omega_j, \quad j \in E_0,$$
are satisfied, we have $\|\hat{\beta}_j\|_2 = 0$ for $j \in E_0$ and $\|\hat{\beta}_j\|_2 \ne 0$ for $j \in E_1$. We define the events:
$$D_1 = \left\{\frac{1}{N}\big\|(\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top\varepsilon)_j\big\|_2 > \frac{\|\beta_j^0\|_2}{2},\ \exists j \in E_1\right\},$$
$$D_2 = \left\{\big\|[\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)]_j\big\|_2 > \frac{\|\beta_j^0\|_2}{2},\ \exists j \in E_1\right\},$$
$$D_3 = \left\{\frac{1}{N}\big\|Z_{[j]}^\top H\varepsilon\big\|_2 > \frac{\sqrt{N}\lambda_1\omega_j}{2},\ \exists j \in E_0\right\},$$
and
$$D_4 = \left\{\frac{1}{N}\big\|Z_{[j]}^\top Z_{E_1}\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\big\|_2 > \frac{\sqrt{N}\lambda_1\omega_j}{2},\ \exists j \in E_0\right\}.$$
Then, we have:
$$P\big(\|\hat{\beta}_j\|_2 \ne 0 \text{ for some } j \in E_0, \text{ or } \|\hat{\beta}_j\|_2 = 0 \text{ for some } j \in E_1\big) \le P(D_1) + P(D_2) + P(D_3) + P(D_4).$$
First, we consider $P(D_1)$. Because $\|Z_{E_1}\|_2 = \|Z_{E_1}^\top\|_2 = \sup_{x \in \mathbb{R}^N}\frac{\|Z_{E_1}^\top x\|_2}{\|x\|_2} \le m\sqrt{q}$, for any $j \in E_1$,
$$\big\|(\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top\varepsilon)_j\big\|_2 \le \big\|\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top\varepsilon\big\|_2 \le \big\|\Sigma_{E_1E_1}^{-1}\big\|_2\big\|Z_{E_1}^\top\varepsilon\big\|_2 \le \frac{m\sqrt{q}\,\|\varepsilon\|_2}{\tau}.$$
From Condition (C6) and Lemma 1 in Huang et al. [26], we have
$$P(D_1) \le P\left(\frac{1}{N}\big\|\Sigma_{E_1E_1}^{-1}Z_{E_1}^\top\varepsilon\big\|_2 > \frac{\sqrt{N}\alpha_1}{2}\right) \le \exp\left\{-\left(\frac{\tau\alpha_1 N^{5/4}}{2m\sqrt{q}\log N}\right)^2\right\} \to 0.$$
For $D_2$, we define $R = \{\|\tilde{\beta}_j\| \ge \alpha_2,\ \forall j \in E_1\}$. Then,
$$P(D_2) = P(D_2 \cap R) + P(D_2 \cap R^c) \le P(D_2 \cap R) + P(R^c).$$
From Condition (C2), $P(R^c) \to 0$, so we only need to prove $P(D_2 \cap R) \to 0$. Since $\Sigma_{E_1E_1}$ is invertible, we can show that, for any $j \in E_1$,
$$\big\|\big(\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\big)_j\big\|_2 \le \big\|\Sigma_{E_1E_1}^{-1}W_{E_1}\big\|_2 + \big\|\Sigma_{E_1E_1}^{-1}\lambda_2 J_{E_1}\beta_{E_1}^0\big\|_2 \le \frac{\lambda_1\sqrt{q}\,\alpha_2^{-1} + \lambda_2\|J\beta^0\|_2}{\tau}.$$
From Condition (C6), we have
$$\frac{2\big\|\big(\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\big)_j\big\|_2}{\sqrt{N}\alpha_1} \le \frac{2\lambda_1}{\tau\alpha_1\alpha_2}\sqrt{\frac{q}{N}} + \frac{2\lambda_2\|J\beta^0\|_2}{\tau\alpha_1\sqrt{N}} \to 0.$$
Therefore, $P(D_2 \cap R) \to 0$.
Next, we consider $P(D_3)$. Similarly to the above, we define $E = \{\|\tilde{\beta}_j\| < \frac{1}{r} + M,\ \forall j \in E_0\} \cap R$. Then,
$$P(D_3) = P(D_3 \cap E) + P(D_3 \cap E^c) \le P(D_3 \cap E) + P(E^c).$$
Under Conditions (C2) and (C5), $P(E^c) \to 0$, and thus we only need to prove that $P(D_3 \cap E) \to 0$. Since $\Sigma_{E_1E_1}$ is invertible, we have, for any $j \in E_0$,
$$\big\|Z_{[j]}^\top H\varepsilon\big\|_2 \le m\left(\|\varepsilon\|_2 + \frac{m^2 q\,\|\varepsilon\|_2}{\tau N}\right).$$
Then,
$$P(D_3 \cap E) \le P\left(\frac{2}{N}\big\|Z_{[j]}^\top H\varepsilon\big\|_2 > \frac{\sqrt{N}\lambda_1}{\frac{1}{r} + M},\ \exists j \in E_0\right) \le P\left(\frac{1}{N}\|\varepsilon\|_2 > \frac{\sqrt{N}\lambda_1}{2\big(\frac{1}{r} + M\big)\big(m + \frac{m^3 q}{\tau N}\big)}\right) \le (p - q)\,q_N^*\left(\frac{N^{5/4}\lambda_1}{2\big(\frac{1}{r} + M\big)\big(m + \frac{m^3 q}{\tau N}\big)}\right),$$
where the function $q_N^*(\cdot)$ is the same as $q_n^*(\cdot)$ in Lemma 1 of Huang et al. [26]. Therefore, from Lemma 1 of Huang et al. [26] and Condition (C6), $P(D_3 \cap E) \to 0$.
Finally, we consider $D_4$. To prove $P(D_4) \to 0$, we only need to prove $P(D_4 \cap E) \to 0$. Since $\Sigma_{E_1E_1}$ is invertible, we can show that, for any $j \in E_0$,
$$\frac{1}{N}\big\|Z_{[j]}^\top Z_{E_1}\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\big\|_2 \le \frac{m^2\sqrt{q}}{\tau N}\big(\lambda_1\alpha_2^{-1}\sqrt{q} + \lambda_2\|J\beta^0\|_2\big).$$
Under Condition (C6),
$$\frac{\frac{2}{N}\big\|Z_{[j]}^\top Z_{E_1}\Sigma_{E_1E_1}^{-1}(W_{E_1} + \lambda_2 J_{E_1}\beta_{E_1}^0)\big\|_2}{\sqrt{N}\lambda_1\omega_j} \le \frac{2m^2\sqrt{q}\big(\lambda_1\alpha_2^{-1}\sqrt{q} + \lambda_2\|J\beta^0\|_2\big)}{\tau\sqrt{N^3}\,\lambda_1}\left(\frac{1}{r} + M\right) \le 1.$$
Namely, $P(D_4 \cap E) \to 0$. This completes the proof of Theorem 2. □
Remark A1.
We show that the marginal regression estimator satisfies Condition (C5) under some assumptions and can thus be used as the initial estimator. With the standardization of $X = (X_1, X_2, \ldots, X_p)$, the estimated marginal regression coefficient becomes:
$$\tilde{\beta}_k = \frac{\sum_{n=1}^{N}X_{kn}y_n}{\sum_{n=1}^{N}X_{kn}^2} = \sum_{j=1}^{q}\frac{\sum_{n=1}^{N}X_{kn}X_{jn}\beta_{jn}^0}{N} + \frac{X_k^\top\varepsilon}{N}.$$
We define
$$\xi_k = \sum_{j=1}^{q}\frac{\sum_{n=1}^{N}X_{kn}X_{jn}\beta_{jn}^0}{N}.$$
For $k \in E_1$, we restrict $\xi_k = O(N^{d-\frac{1}{2}})$, where $0 \le d \le \frac{1}{3}$, so that the nonzero coefficients' signals are bounded away from zero at certain rates.
Similarly to the "partial orthogonality" condition in Huang et al. [26], we assume that the correlation between the covariates with zero coefficients and those with nonzero coefficients (multiplied by the corresponding coefficients) is not large; that is,
$$\frac{1}{N}\Big|\sum_{n=1}^{N}X_{kn}(X_{jn}\beta_{jn}^0)\Big| = \frac{1}{N}\big|X_k^\top f_j(X_j)\big| \le \rho_N, \quad k \in E_0,\ j \in E_1.$$
For $k \in E_0$, we have $|\xi_k| \le q\rho_N$. Assume
$$\rho_N < \frac{\tau\lambda_1\sqrt{N^3}}{2m^2\sqrt{q^3}\big(\lambda_1\alpha_2^{-1}\sqrt{q} + \lambda_2\|J\beta^0\|_2\big)}.$$
From Condition (C6), $q\rho_N < 1$. From Lemma 1 in Huang et al. [26], for any $\epsilon > 0$, if $r = o\!\left(\frac{\sqrt{N}}{\sqrt{\log p}\,\log N}\right)$, we have:
$$P\left(r\max_{1 \le j \le p}|\tilde{\beta}_j - \xi_j| > \epsilon\right) = P\left(r\max_{1 \le j \le p}\Big|\frac{X_j^\top\varepsilon}{N}\Big| > \epsilon\right) \le p\,q^*\!\left(\frac{\sqrt{N}\epsilon}{r\log N}\right) = o(1).$$
When $p = O(e^{N^{1/2-\delta}})$ with $0 < \delta < 0.5$, r can be set as $O\!\left(\frac{N^{\delta - c}}{\log N}\right)$ for a small $c > 0$. Therefore, the marginal regression estimator satisfies Condition (C5).

Appendix D. More Tables and Figures

Table A1. Simulation results for N = 200, p = 500, q = 20, and ρ = 0.8. Each cell shows the mean (sd). The bold represents the best value.

| Scenario | Method | TP | FP | RMSE | RPE |
|---|---|---|---|---|---|
| 1 | Lasso | 13.61 (2.37) | 0.20 (0.41) | 5.87 (0.50) | 14.08 (2.48) |
| 1 | AdLasso | 16.23 (1.54) | 0.26 (0.83) | 5.05 (0.46) | 10.10 (0.39) |
| 1 | IVIS | 12.85 (1.43) | 2.88 (1.37) | 5.98 (1.02) | 11.72 (0.92) |
| 1 | New-Lasso | 15.56 (1.39) | 0.00 (0.00) | 4.48 (0.82) | 2.66 (0.54) |
| 1 | New-Mar | 20.00 (0.00) | 0.56 (0.19) | 1.12 (0.17) | 0.82 (0.04) |
| 2 | Lasso | 11.50 (1.67) | 0.41 (0.82) | 6.72 (0.45) | 18.20 (1.97) |
| 2 | AdLasso | 16.90 (1.06) | 0.11 (0.31) | 5.20 (0.44) | 10.38 (0.44) |
| 2 | IVIS | 12.89 (1.13) | 3.03 (0.86) | 6.04 (0.78) | 11.95 (0.96) |
| 2 | New-Lasso | 15.37 (1.53) | 0.16 (0.07) | 4.90 (0.95) | 2.94 (0.64) |
| 2 | New-Mar | 20.00 (0.00) | 0.70 (0.16) | 0.84 (0.10) | 0.72 (0.05) |
| 3 | Lasso | 12.90 (2.9) | 0.10 (0.31) | 7.32 (0.80) | 18.60 (3.54) |
| 3 | AdLasso | 16.80 (1.32) | 0.07 (0.25) | 5.55 (0.69) | 11.16 (0.54) |
| 3 | IVIS | 13.33 (0.96) | 2.92 (1.01) | 6.55 (1.15) | 12.65 (1.24) |
| 3 | New-Lasso | 15.61 (1.73) | 0.04 (0.02) | 5.56 (1.17) | 3.10 (0.69) |
| 3 | New-Mar | 20.00 (0.00) | 0.56 (0.15) | 0.96 (0.14) | 0.76 (0.05) |
| 4 | Lasso | 14.03 (2.27) | 0.20 (0.05) | 7.56 (0.82) | 18.54 (3.21) |
| 4 | AdLasso | 17.34 (1.49) | 0.13 (0.51) | 6.02 (0.85) | 12.29 (0.48) |
| 4 | IVIS | 14.35 (0.93) | 3.75 (0.92) | 6.47 (0.73) | 12.43 (0.97) |
| 4 | New-Lasso | 16.90 (1.45) | 0.08 (0.01) | 5.26 (1.39) | 2.86 (0.80) |
| 4 | New-Mar | 20.00 (0.00) | 0.84 (0.75) | 0.92 (0.10) | 0.70 (0.04) |
| 5 | Lasso | 20.00 (0.00) | 1.02 (0.27) | 0.50 (0.10) | 0.69 (0.07) |
| 5 | AdLasso | 17.08 (1.37) | 0.07 (0.24) | 5.36 (0.67) | 11.13 (0.49) |
| 5 | IVIS | 13.54 (0.81) | 3.24 (0.90) | 6.07 (0.69) | 11.06 (0.7) |
| 5 | New-Lasso | 19.87 (0.37) | 0.00 (0.00) | 1.02 (0.49) | 0.79 (0.14) |
| 5 | New-Mar | 20.00 (0.00) | 0.84 (0.23) | 0.90 (0.12) | 0.72 (0.03) |
| 6 | Lasso | 15.23 (2.65) | 0.33 (0.80) | 6.18 (0.78) | 13.52 (2.50) |
| 6 | AdLasso | 17.09 (1.14) | 0.06 (0.24) | 5.39 (0.49) | 11.03 (0.44) |
| 6 | IVIS | 13.40 (1.05) | 3.04 (0.97) | 6.05 (0.89) | 12.21 (0.9) |
| 6 | New-Lasso | 17.00 (1.75) | 0.00 (0.00) | 4.36 (1.43) | 2.02 (0.68) |
| 6 | New-Mar | 20.00 (0.00) | 1.44 (0.33) | 1.16 (0.14) | 0.74 (0.05) |
| 7 | Lasso | 16.25 (2.29) | 0.37 (0.81) | 5.90 (0.77) | 11.86 (2.23) |
| 7 | AdLasso | 17.28 (1.11) | 0.13 (0.46) | 5.30 (0.61) | 10.99 (0.41) |
| 7 | IVIS | 13.76 (0.99) | 2.82 (1.15) | 5.97 (0.86) | 12.22 (0.92) |
| 7 | New-Lasso | 16.90 (1.07) | 0.00 (0.00) | 4.38 (0.85) | 2.02 (0.38) |
| 7 | New-Mar | 19.95 (0.22) | 1.10 (0.21) | 1.22 (0.40) | 0.80 (0.14) |
| 8 | Lasso | 16.15 (2.18) | 0.16 (0.37) | 5.80 (0.93) | 11.80 (2.11) |
| 8 | AdLasso | 16.75 (1.79) | 0.10 (0.36) | 6.08 (0.46) | 10.08 (0.39) |
| 8 | IVIS | 13.03 (1.22) | 3.20 (1.21) | 6.11 (0.92) | 12.39 (1.1) |
| 8 | New-Lasso | 16.70 (2.03) | 0.00 (0.00) | 4.50 (1.63) | 2.06 (0.78) |
| 8 | New-Mar | 19.90 (0.31) | 0.96 (0.15) | 1.36 (0.59) | 0.84 (0.25) |
Table A2. Simulation results for N = 500, p = 500, q = 40, and ρ = 0.3. Each cell shows the mean (sd). The bold represents the best value.

| Scenario | Method | TP | FP | RMSE | RPE |
|---|---|---|---|---|---|
| 1 | Lasso | 36.85 (0.37) | 0.60 (0.67) | 9.20 (0.45) | 10.78 (0.74) |
| 1 | AdLasso | 36.00 (1.58) | 0.14 (0.45) | 10.21 (0.61) | 12.01 (0.76) |
| 1 | IVIS | 32.92 (1.63) | 6.51 (1.77) | 12.66 (1.32) | 14.36 (1.05) |
| 1 | New-Lasso | 39.71 (0.66) | 0.08 (0.10) | 1.78 (0.24) | 0.86 (0.14) |
| 1 | New-Mar | 35.64 (2.60) | 1.20 (0.28) | 6.32 (1.12) | 3.46 (0.56) |
| 2 | Lasso | 35.70 (1.69) | 0.92 (0.83) | 8.53 (0.48) | 10.14 (0.42) |
| 2 | AdLasso | 35.91 (2.16) | 0.84 (1.02) | 8.58 (0.57) | 9.48 (0.64) |
| 2 | IVIS | 33.56 (1.04) | 6.94 (0.95) | 11.25 (1.23) | 14.74 (0.95) |
| 2 | New-Lasso | 38.04 (1.50) | 0.63 (0.10) | 3.76 (0.45) | 1.38 (0.46) |
| 2 | New-Mar | 35.83 (2.09) | 1.83 (0.29) | 5.92 (0.36) | 2.24 (0.38) |
| 3 | Lasso | 34.47 (2.26) | 0.00 (0.00) | 17.78 (1.89) | 21.82 (2.38) |
| 3 | AdLasso | 36.64 (1.38) | 0.04 (0.20) | 10.18 (0.62) | 7.62 (0.47) |
| 3 | IVIS | 33.08 (1.03) | 8.25 (2.34) | 12.07 (1.84) | 16.28 (1.12) |
| 3 | New-Lasso | 40.00 (0.00) | 0.00 (0.00) | 1.26 (0.08) | 0.60 (0.02) |
| 3 | New-Mar | 36.51 (0.83) | 3.40 (0.95) | 4.62 (1.10) | 2.20 (0.74) |
| 4 | Lasso | 31.35 (3.91) | 0.20 (0.52) | 30.34 (2.06) | 37.18 (2.82) |
| 4 | AdLasso | 36.24 (1.41) | 0.10 (0.31) | 12.03 (0.73) | 17.19 (0.83) |
| 4 | IVIS | 34.27 (2.45) | 9.76 (2.22) | 14.49 (2.31) | 19.24 (2.69) |
| 4 | New-Lasso | 39.96 (0.22) | 0.69 (0.10) | 1.42 (0.27) | 0.68 (0.18) |
| 4 | New-Mar | 37.40 (0.68) | 5.60 (0.54) | 6.48 (0.31) | 1.16 (0.33) |
| 5 | Lasso | 40.00 (0.00) | 0.70 (0.88) | 0.79 (0.10) | 1.30 (0.09) |
| 5 | AdLasso | 35.67 (1.75) | 0.12 (0.39) | 13.88 (0.59) | 12.16 (0.75) |
| 5 | IVIS | 34.97 (1.2) | 5.29 (1.65) | 14.69 (1.44) | 16.20 (1.26) |
| 5 | New-Lasso | 40.00 (0.00) | 0.00 (0.00) | 1.04 (0.06) | 0.58 (0.03) |
| 5 | New-Mar | 37.54 (1.90) | 2.41 (0.98) | 5.46 (1.50) | 2.16 (0.16) |
| 6 | Lasso | 34.64 (1.35) | 0.74 (0.90) | 12.08 (0.70) | 13.94 (1.11) |
| 6 | AdLasso | 35.58 (0.88) | 0.08 (0.27) | 7.78 (0.50) | 8.43 (0.51) |
| 6 | IVIS | 33.80 (1.59) | 7.66 (2.80) | 11.47 (1.70) | 15.37 (1.06) |
| 6 | New-Lasso | 37.54 (1.39) | 0.10 (0.27) | 5.16 (0.84) | 2.92 (0.44) |
| 6 | New-Mar | 33.89 (2.19) | 6.10 (2.78) | 9.66 (2.17) | 3.34 (0.77) |
| 7 | Lasso | 34.70 (2.13) | 0.91 (1.02) | 13.26 (0.82) | 15.32 (1.36) |
| 7 | AdLasso | 35.64 (0.56) | 0.08 (0.27) | 7.79 (0.48) | 8.31 (0.46) |
| 7 | IVIS | 33.09 (1.7) | 7.72 (2.13) | 11.28 (1.90) | 14.02 (1.18) |
| 7 | New-Lasso | 37.36 (1.73) | 0.12 (0.31) | 5.72 (0.50) | 2.64 (0.31) |
| 7 | New-Mar | 33.40 (2.11) | 6.32 (2.86) | 10.02 (1.03) | 3.56 (0.44) |
| 8 | Lasso | 34.30 (2.18) | 0.88 (0.73) | 13.20 (0.73) | 14.38 (1.18) |
| 8 | AdLasso | 35.45 (0.73) | 0.20 (0.40) | 7.72 (0.54) | 8.25 (0.48) |
| 8 | IVIS | 34.38 (1.79) | 7.04 (1.09) | 11.43 (1.70) | 14.37 (2.21) |
| 8 | New-Lasso | 37.71 (1.54) | 0.00 (0.00) | 5.22 (0.62) | 3.04 (0.41) |
| 8 | New-Mar | 32.03 (2.35) | 5.84 (2.41) | 9.88 (1.76) | 3.74 (0.99) |
| 9 | Lasso | 28.83 (3.54) | 0.40 (0.68) | 15.42 (1.68) | 14.12 (1.12) |
| 9 | AdLasso | 35.34 (1.58) | 0.20 (0.48) | 21.36 (0.52) | 12.02 (0.75) |
| 9 | IVIS | 33.11 (1.20) | 6.42 (1.54) | 16.94 (2.01) | 15.97 (1.95) |
| 9 | New-Lasso | 37.56 (1.19) | 0.94 (0.29) | 4.46 (1.10) | 1.31 (0.28) |
| 9 | New-Mar | 24.82 (2.88) | 9.55 (0.80) | 12.50 (1.56) | 3.88 (0.73) |
Table A3. Simulation results for N = 200, p = 500, q = 40, and ρ = 0.8. Each cell shows the mean (sd). The bold represents the best value.

| Scenario | Method | TP | FP | RMSE | RPE |
|---|---|---|---|---|---|
| 1 | Lasso | 31.56 (2.17) | 0.36 (0.33) | 13.40 (0.67) | 18.82 (1.86) |
| 1 | AdLasso | 32.52 (1.54) | 0.14 (0.40) | 9.66 (0.75) | 13.28 (0.49) |
| 1 | IVIS | 29.46 (0.87) | 6.32 (1.21) | 13.22 (0.76) | 20.86 (0.85) |
| 1 | New-Lasso | 32.96 (2.21) | 0.00 (0.00) | 11.4 (1.40) | 3.58 (0.61) |
| 1 | New-Mar | 39.84 (0.37) | 0.44 (0.09) | 2.16 (0.40) | 0.70 (0.10) |
| 2 | Lasso | 30.60 (2.44) | 0.07 (0.31) | 11.76 (0.45) | 20.90 (2.16) |
| 2 | AdLasso | 31.06 (2.20) | 0.02 (0.14) | 8.60 (0.69) | 10.08 (0.53) |
| 2 | IVIS | 30.46 (1.68) | 5.31 (0.91) | 13.22 (0.85) | 21.86 (0.98) |
| 2 | New-Lasso | 31.80 (1.94) | 0.00 (0.00) | 11.76 (1.14) | 4.18 (0.65) |
| 2 | New-Mar | 39.60 (0.68) | 0.23 (0.15) | 2.58 (0.47) | 0.78 (0.18) |
| 3 | Lasso | 34.30 (2.88) | 0.10 (0.30) | 19.28 (1.83) | 24.90 (2.63) |
| 3 | AdLasso | 32.04 (1.40) | 0.00 (0.00) | 13.20 (1.43) | 17.40 (0.81) |
| 3 | IVIS | 31.86 (1.49) | 7.37 (1.36) | 14.38 (1.28) | 25.43 (1.3) |
| 3 | New-Lasso | 33.20 (2.17) | 0.03 (0.02) | 12.92 (3.07) | 3.76 (0.77) |
| 3 | New-Mar | 40.00 (0.00) | 0.60 (0.08) | 2.12 (0.17) | 0.66 (0.03) |
| 4 | Lasso | 32.85 (3.45) | 0.20 (0.41) | 31.06 (1.42) | 38.52 (3.74) |
| 4 | AdLasso | 33.34 (0.82) | 0.15 (0.55) | 16.36 (1.55) | 18.56 (0.83) |
| 4 | IVIS | 31.15 (1.57) | 8.13 (1.44) | 19.42 (1.93) | 28.65 (2.23) |
| 4 | New-Lasso | 32.90 (2.47) | 0.96 (0.12) | 23.28 (3.47) | 5.54 (1.04) |
| 4 | New-Mar | 40.00 (0.00) | 0.90 (0.19) | 1.76 (0.13) | 0.60 (0.03) |
| 5 | Lasso | 40.00 (0.00) | 1.44 (0.74) | 0.69 (0.08) | 1.01 (0.03) |
| 5 | AdLasso | 33.53 (1.54) | 0.18 (0.44) | 12.34 (0.76) | 13.28 (0.49) |
| 5 | IVIS | 30.48 (1.38) | 6.44 (0.93) | 16.01 (2.53) | 21.08 (1.54) |
| 5 | New-Lasso | 40.00 (0.00) | 0.00 (0.00) | 1.66 (0.13) | 0.64 (0.06) |
| 5 | New-Mar | 40.00 (0.00) | 0.84 (0.10) | 1.72 (0.15) | 0.62 (0.02) |
| 6 | Lasso | 30.81 (2.35) | 0.22 (0.57) | 16.80 (1.59) | 22.90 (2.74) |
| 6 | AdLasso | 33.82 (2.08) | 0.08 (0.34) | 13.78 (1.24) | 19.28 (0.81) |
| 6 | IVIS | 30.42 (2.02) | 6.24 (0.97) | 15.49 (1.86) | 21.18 (1.14) |
| 6 | New-Lasso | 32.40 (2.09) | 0.00 (0.00) | 11.06 (2.49) | 3.18 (0.90) |
| 6 | New-Mar | 39.80 (0.41) | 1.76 (0.27) | 2.48 (1.22) | 0.72 (0.21) |
| 7 | Lasso | 31.40 (2.35) | 0.31 (0.32) | 18.38 (1.02) | 23.54 (2.13) |
| 7 | AdLasso | 34.45 (1.81) | 0.10 (0.42) | 13.32 (1.21) | 18.16 (0.65) |
| 7 | IVIS | 31.65 (2.31) | 5.43 (1.24) | 15.13 (1.35) | 22.11 (1.47) |
| 7 | New-Lasso | 32.71 (1.69) | 0.04 (0.02) | 12.90 (2.15) | 3.62 (0.66) |
| 7 | New-Mar | 39.65 (0.67) | 1.57 (0.39) | 3.06 (1.03) | 0.88 (0.16) |
| 8 | Lasso | 30.50 (1.99) | 0.40 (0.41) | 17.48 (1.18) | 22.74 (2.19) |
| 8 | AdLasso | 32.04 (1.86) | 0.12 (0.39) | 13.04 (1.12) | 17.84 (0.79) |
| 8 | IVIS | 30.96 (2.58) | 6.68 (1.75) | 15.22 (2.55) | 23.64 (3.41) |
| 8 | New-Lasso | 32.23 (1.80) | 0.00 (0.00) | 12.70 (2.05) | 3.64 (0.80) |
| 8 | New-Mar | 39.51 (0.61) | 1.85 (0.29) | 3.48 (0.68) | 0.96 (0.22) |
Table A4. Data analysis: comparison of variable selection results. Each cell shows the number of overlapping identifications.

|  | New | Lasso | AdLasso | IVIS |
|---|---|---|---|---|
| SKCM data |  |  |  |  |
| New | 6 | 6 | 4 | 3 |
| Lasso |  | 38 | 12 | 6 |
| AdLasso |  |  | 25 | 5 |
| IVIS |  |  |  | 21 |
| LUAD data |  |  |  |  |
| New | 7 | 3 | 4 | 3 |
| Lasso |  | 29 | 8 | 5 |
| AdLasso |  |  | 27 | 4 |
| IVIS |  |  |  | 25 |
Table A5. Simulation results for the SKCM dataset. Each cell shows the mean (sd). The bold represents the best value.

| Method | TP | FP | RMSE | RPE |
|---|---|---|---|---|
| Lasso | 1.70 (1.09) | 6.60 (2.12) | 1.37 (0.05) | 1.30 (0.04) |
| AdLasso | 2.60 (0.70) | 4.40 (2.37) | 1.35 (0.09) | 1.18 (0.11) |
| IVIS | 1.88 (0.69) | 11.47 (2.30) | 1.66 (0.07) | 1.26 (0.13) |
| New-Lasso | 3.43 (0.53) | 3.25 (2.43) | 1.22 (0.11) | 0.95 (0.05) |
| New-Mar | 2.96 (0.89) | 8.20 (2.09) | 1.36 (0.15) | 1.04 (0.06) |
Figure A1. Simulation results: computation time of the proposed approach as a function of the number of features p for five replicates under Scenario 6 with N = 200, q = 20, and ρ = 0.3. The red dots represent the computation times under the corresponding variable dimensions, and the blue line represents the fitted values.
Figure A2. Nonzero coefficients of all subjects under Scenarios 1–9. The x-axis represents the subjects, and the y-axis represents the coefficient values.
Figure A3. Estimated coefficients under Scenario 5 with N = 200, q = 20, and ρ = 0.3. The blue lines represent the true coefficients, the orange ones the coefficients estimated by New-Lasso, and the shadowed areas the 95% confidence intervals. The x-axis represents the subjects, and the y-axis represents the coefficient values.
Figure A4. Estimated coefficients under Scenario 5 with N = 500, q = 40, and ρ = 0.8. The blue lines represent the true coefficients, the orange ones the coefficients estimated by New-Mar, and the shaded areas the 95% confidence intervals. The x-axis represents the subjects, and the y-axis represents the coefficient values.
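As a guide to reading Figures A3 and A4, the sketch below reproduces their plotting convention (true curve, estimated curve, and pointwise 95% band across subjects ordered by the auxiliary variable) using toy inputs; beta_true, beta_hat, and se are placeholders, not the paper's estimates.

```python
# A sketch of the plotting convention of Figures A3-A4, with toy inputs.
import numpy as np
import matplotlib.pyplot as plt

N = 200
subjects = np.arange(1, N + 1)
beta_true = np.sin(subjects / N * np.pi)            # smooth toy truth
beta_hat = beta_true + np.random.normal(0, 0.1, N)  # toy estimate
se = np.full(N, 0.1)                                # toy standard errors

plt.plot(subjects, beta_true, color="blue", label="true coefficient")
plt.plot(subjects, beta_hat, color="orange", label="estimate")
plt.fill_between(subjects, beta_hat - 1.96 * se, beta_hat + 1.96 * se,
                 alpha=0.3, label="95% interval")
plt.xlabel("subject")
plt.ylabel("coefficient value")
plt.legend()
plt.show()
```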
Figure A5. Analysis of LUAD data using the proposed approach: estimated coefficients of the 7 genes for all subjects. The x-axis represents the subjects, and the y-axis represents the coefficient values.
Figure A6. OOIs in the data analysis. Top: SKCM; bottom: LUAD.

References

  1. Ford, D.; Easton, D.F.; Stratton, M.; Narod, S.; Goldgar, D.; Devilee, P.; Bishop, D.T.; Weber, B.; Lenoir, G.; Chang-Claude, J.; et al. Genetic Heterogeneity and Penetrance Analysis of the BRCA1 and BRCA2 Genes in Breast Cancer Families. Am. J. Hum. Genet. 1998, 62, 676–689.
  2. Galvan, A.; Dragani, T.A. Nicotine dependence may link the 15q25 locus to lung cancer risk. Carcinogenesis 2010, 31, 331–333.
  3. Shen, J.; He, X. Inference for Subgroup Analysis with a Structured Logistic-Normal Mixture Model. J. Am. Stat. Assoc. 2015, 110, 303–312.
  4. Lloyd-Jones, R.; Nguyen, D.; McLachlan, J. A globally convergent algorithm for lasso-penalized mixture of linear regression models. Comput. Stat. Data Anal. 2018, 119, 19–38.
  5. Huynh, Y.; Chamroukhi, F. Estimation and Feature Selection in Mixtures of Generalized Linear Experts Models. arXiv 2019, arXiv:1907.06994.
  6. Ma, S.; Huang, J. A Concave Pairwise Fusion Approach to Subgroup Analysis. J. Am. Stat. Assoc. 2017, 112, 410–423.
  7. Ma, S.; Huang, J.; Zhang, Z.; Liu, M. Exploration of Heterogeneous Treatment Effects via Concave Fusion. Int. J. Biostat. 2019, 16.
  8. Su, L.; Shi, Z.; Phillips, P. Identifying Latent Structures in Panel Data. Econometrica 2016, 84, 2215–2264.
  9. Chiang, C.; Rice, J.; Wu, C. Smoothing spline estimation for varying coefficient models with repeatedly measured dependent variables. J. Am. Stat. Assoc. 2001, 96, 605–619.
  10. Huang, J.; Wu, C.; Zhou, L. Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Stat. Sin. 2004, 14, 763–788.
  11. Wang, H.; Xia, Y. Shrinkage Estimation of the Varying Coefficient Model. J. Am. Stat. Assoc. 2009, 104, 747–757.
  12. Wei, F.; Huang, J.; Li, H. Variable selection in high-dimensional varying-coefficient models. Stat. Sin. 2011, 21, 1515–1540.
  13. Xue, L.; Qu, A. Variable Selection in High-dimensional Varying-coefficient Models with Global Optimality. J. Mach. Learn. Res. 2012, 13, 1973–1998.
  14. Song, R.; Yi, F.; Zou, H. On varying-coefficient independence screening for high-dimensional varying-coefficient models. Stat. Sin. 2014, 24, 1735–1752.
  15. Chen, Y.; Bai, Y.; Fung, W. Structural identification and variable selection in high-dimensional varying-coefficient models. J. Nonparametric Stat. 2017, 29, 258–279.
  16. Ye, M.; Lu, Z.; Li, Y.; Song, X. Finite mixture of varying coefficient model: Estimation and component selection. J. Multivar. Anal. 2019, 171, 452–474.
  17. Wu, C.; Zhong, P.S.; Cui, Y. Additive varying-coefficient model for nonlinear gene-environment interactions. Stat. Appl. Genet. Mol. Biol. 2017, 17, 2119–2126.
  18. Wu, C.; Shi, X.; Cui, Y.; Ma, S. A penalized robust semiparametric approach for gene-environment interactions. Stat. Med. 2015, 34, 4016–4030.
  19. Wu, M.; Zhang, Q.; Ma, S. Structured gene-environment interaction analysis. Biometrics 2020, 76, 23–25.
  20. Zhang, B.; Geng, J.; Lai, L. Multiple Change-Points Estimation in Linear Regression Models via Sparse Group Lasso. IEEE Trans. Signal Process. 2015, 63, 2209–2224.
  21. Kaul, A.; Jandhyala, V.; Fotopoulos, S. Detection and estimation of parameters in high dimensional multiple change point regression models via ℓ1/ℓ0 regularization and discrete optimization. arXiv 2019, arXiv:1906.04396.
  22. Lee, S.; Seo, M.; Shin, Y. The lasso for high dimensional regression with a possible change point. J. R. Stat. Soc. Ser. B 2016, 78, 193–210.
  23. Guo, J.; Hu, J.; Jing, B.Y.; Zhang, Z. Spline-Lasso in High-Dimensional Linear Regression. J. Am. Stat. Assoc. 2016, 111, 288–297.
  24. Tibshirani, R.; Saunders, M.; Rosset, S.; Zhu, J.; Knight, K. Sparsity and Smoothness via the Fused Lasso. J. R. Stat. Soc. B 2005, 67, 91–108.
  25. Hebiri, M.; van de Geer, S. The Smooth-Lasso and other ℓ1+ℓ2-penalized methods. Electron. J. Stat. 2011, 5, 1184–1226.
  26. Huang, J.; Ma, S.; Zhang, C. Adaptive LASSO for sparse high-dimensional regression. Stat. Sin. 2008, 18, 1603–1618.
  27. Martinussen, T.; Scheike, T. Covariate Selection for the Semiparametric Additive Risk Model. Scand. J. Stat. 2009, 36, 602–619.
  28. Binkiewicz, N.; Vogelstein, J. Covariate-assisted spectral clustering. Biometrika 2017, 104, 361–377.
  29. Wei, F.; Huang, J. Consistent Group Selection in High-Dimensional Linear Regression. Bernoulli 2010, 16, 1369–1384.
  30. Shao, F.; Li, J.; Ma, S.; Lee, M.L.T. Semiparametric varying-coefficient model for interval censored data with a cured proportion. Stat. Med. 2014, 33, 1700–1712.
  31. Mu, Y.; Li, J.; Ma, S. Sparse boosting for high-dimensional survival data with varying coefficients. Stat. Med. 2017, 37, 789–800.
  32. Song, R.; Yi, F.; Zou, H. Correlation Between Prognostic Factors and Increasing Age in Melanoma. Ann. Surg. Oncol. 2004, 11, 259–264.
  33. Molony, C.; Sieberts, S.K.; Schadt, E.E. Processing Large-Scale, High-Dimension Genetic and Gene Expression Data; Springer: Berlin/Heidelberg, Germany, 2009; pp. 307–330.
  34. Lahav, R. Endothelin receptor B is required for the expansion of melanocyte precursors and malignant melanoma. Int. J. Dev. Biol. 2005, 49, 173–180.
  35. Shi, X.; Nie, F.; Wang, Z.; Sun, M. Pseudogene-expressed RNAs: A new frontier in cancers. Tumor Biol. 2016, 37, 1471–1478.
  36. Cheng, Y.; Lu, J.; Chen, G.; Ardekani, G.S.; Rotte, A.; Martinka, M.; Xu, X.; McElwee, K.J.; Zhang, G.; Zhou, Y. Stage-specific prognostic biomarkers in melanoma. Oncotarget 2015, 6, 4180–4189.
  37. Wang, Y.; Jiang, H.; Dai, D.; Su, M.; Martinka, M.; Brasher, P.; Zhang, Y.; McLean, D.; Zhang, J.; Ip, W.; et al. Alpha 1 antichymotrypsin is aberrantly expressed during melanoma progression and predicts poor survival for patients with metastatic melanoma. Pigment Cell Melanoma Res. 2010, 23, 575–578.
  38. Zhou, J.; Cheng, Y.; Tang, L.; Martinka, M.; Kalia, S. Up-regulation of SERPINA3 correlates with high mortality of melanoma patients and increased migration and invasion of cancer cells. Oncotarget 2017, 8, 18712–18725.
  39. Foy, J.P.; Pickering, C.R.; Papadimitrakopoulou, V.A.; Jelinek, J.; Lin, S.H.; William, W.N.; Frederick, M.J.; Wang, J.; Lang, W.; Feng, L.; et al. New DNA methylation markers and global DNA hypomethylation are associated with oral cancer development. Cancer Prev. Res. 2015, 8, 1027–1035.
  40. Ma, Y.; Xia, Z.; Ye, C.; Lu, C.; Zhou, S.; Pan, J.; Liu, C.; Zhang, J.; Liu, T.; Hu, T.; et al. AGTR1 promotes lymph node metastasis in breast cancer by upregulating CXCR4/SDF-1α and inducing cell migration and invasion. Aging 2019, 11, 3969–3992.
  41. Guo, S.; Yan, F.; Xu, J.; Bao, Y.; Zhu, J.; Wang, X.; Wu, J.; Li, Y.; Pu, W.; Liu, Y.; et al. Identification and validation of the methylation biomarkers of non-small cell lung cancer. Clin. Epigenetics 2015, 7, 3.
  42. Wei, L. The accelerated failure time model: A useful alternative to the Cox regression model in survival analysis. Stat. Med. 1992, 11, 1871–1879.
Figure 1. Estimation results for a toy example with N = 200 subjects and p = 10 genes, three of which are important. The gene expression values are generated from the multivariate normal distribution N(0, Σ), where Σ_ii = 1 and Σ_ij = 0.3^|i−j|. The ticks on the x-axis represent the values of the auxiliary variable (age).
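For concreteness, the sketch below generates a design matrix following this caption; it is a minimal reproduction of the stated data-generating step only (the coefficient curves and responses are defined elsewhere in the paper), and the variable names are ours.

```python
# A minimal sketch of the toy example's design matrix: N = 200 subjects,
# p = 10 genes, expressions drawn from N(0, Sigma) with Sigma_ii = 1
# and Sigma_ij = 0.3**|i - j| (an AR(1)-type covariance).
import numpy as np

N, p, rho = 200, 10, 0.3
idx = np.arange(p)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=np.zeros(p), cov=Sigma, size=N)
print(X.shape)  # (200, 10)
```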
Figure 2. Analysis of the SKCM data using the proposed approach: estimated coefficients of the 6 genes for all subjects. The x-axis represents the subjects, and the y-axis represents the coefficient values.
Table 1. Simulation results for N = 200, p = 500, q = 20, and ρ = 0.3. Each cell shows the mean (sd). Bold indicates the best value.

Scenario  Method     TP            FP             RMSE           RPE
1         Lasso      14.57 (1.39)  0.30 (0.67)    6.56 (0.69)    12.92 (1.80)
          AdLasso    16.64 (1.22)  0.71 (0.95)    4.69 (0.34)    8.41 (0.47)
          IVIS       13.76 (1.31)  3.29 (0.66)    5.91 (0.74)    11.17 (0.84)
          New-Lasso  18.45 (1.36)  0.17 (0.03)    2.34 (0.21)    1.92 (0.29)
          New-Mar    16.14 (2.16)  1.84 (0.53)    3.98 (0.43)    3.52 (0.38)
2         Lasso      14.43 (1.45)  0.00 (0.00)    6.30 (0.86)    12.38 (2.12)
          AdLasso    17.50 (0.86)  0.69 (0.84)    4.74 (0.48)    8.54 (0.62)
          IVIS       14.20 (0.92)  3.10 (0.88)    5.85 (0.66)    10.23 (0.90)
          New-Lasso  19.76 (0.44)  0.00 (0.00)    0.98 (0.20)    1.02 (0.32)
          New-Mar    18.03 (1.88)  2.40 (0.42)    2.82 (0.53)    2.34 (0.40)
3         Lasso      14.35 (1.76)  0.15 (0.37)    7.24 (0.90)    13.70 (2.42)
          AdLasso    16.90 (1.27)  0.30 (0.53)    5.44 (0.66)    9.64 (0.91)
          IVIS       14.99 (0.89)  3.58 (0.91)    6.32 (0.71)    11.37 (0.96)
          New-Lasso  19.81 (0.41)  0.00 (0.00)    1.02 (0.21)    1.02 (0.39)
          New-Mar    18.11 (1.02)  4.44 (0.31)    3.74 (0.42)    2.80 (0.58)
4         Lasso      17.57 (1.73)  0.10 (0.31)    7.08 (0.95)    12.90 (2.00)
          AdLasso    17.34 (1.15)  0.16 (0.46)    5.77 (0.55)    10.35 (0.75)
          IVIS       15.28 (0.81)  4.58 (1.65)    6.11 (0.62)    12.78 (0.82)
          New-Lasso  20.00 (0.00)  0.00 (0.00)    0.54 (0.06)    0.68 (0.04)
          New-Mar    19.14 (1.18)  9.24 (2.59)    2.38 (0.59)    1.56 (0.22)
5         Lasso      20.00 (0.00)  0.10 (0.31)    0.43 (0.06)    0.82 (0.09)
          AdLasso    16.74 (1.23)  0.70 (0.64)    6.04 (0.40)    8.36 (0.50)
          IVIS       15.62 (0.88)  3.38 (0.96)    5.93 (0.56)    10.14 (0.63)
          New-Lasso  20.00 (0.00)  0.00 (0.00)    0.54 (0.07)    0.70 (0.04)
          New-Mar    18.30 (1.34)  4.40 (0.74)    2.58 (0.37)    2.04 (0.26)
6         Lasso      15.56 (2.46)  0.24 (0.91)    6.42 (1.04)    11.98 (2.22)
          AdLasso    16.64 (1.19)  0.18 (0.44)    5.21 (0.47)    9.41 (0.74)
          IVIS       14.37 (1.02)  3.16 (1.05)    6.01 (0.69)    10.79 (0.95)
          New-Lasso  19.65 (0.59)  0.00 (0.00)    1.16 (0.25)    1.08 (0.24)
          New-Mar    18.14 (1.53)  4.24 (1.77)    3.12 (0.49)    2.46 (0.35)
7         Lasso      14.64 (2.48)  0.16 (0.49)    6.68 (0.92)    12.58 (2.03)
          AdLasso    16.05 (1.43)  0.10 (0.36)    5.33 (0.49)    9.58 (0.65)
          IVIS       15.05 (1.14)  2.94 (0.83)    5.98 (0.68)    11.16 (0.88)
          New-Lasso  19.77 (0.55)  0.00 (0.00)    1.02 (0.22)    1.00 (0.35)
          New-Mar    17.65 (1.57)  4.04 (1.88)    3.38 (0.34)    2.72 (0.25)
8         Lasso      16.50 (2.44)  0.50 (0.41)    6.08 (1.17)    11.04 (2.41)
          AdLasso    16.06 (1.46)  0.12 (0.33)    5.38 (0.46)    9.63 (0.69)
          IVIS       14.70 (1.19)  3.32 (1.12)    6.19 (0.73)    11.24 (1.04)
          New-Lasso  19.50 (0.69)  0.00 (0.00)    1.36 (0.33)    1.24 (0.25)
          New-Mar    17.63 (1.63)  3.40 (0.30)    3.50 (0.33)    2.84 (0.33)
9         Lasso      15.41 (2.03)  2.60 (1.41)    6.72 (1.10)    5.66 (1.02)
          AdLasso    15.74 (1.57)  0.41 (0.62)    7.62 (0.32)    9.38 (0.52)
          IVIS       14.24 (1.32)  4.63 (1.39)    7.43 (1.07)    11.02 (1.19)
          New-Lasso  18.30 (1.49)  0.00 (0.00)    2.52 (0.11)    1.56 (0.59)
          New-Mar    14.45 (2.01)  10.00 (2.97)   5.52 (0.92)    2.58 (0.68)
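As a guide to the columns of this table, the sketch below computes TP, FP, and RMSE from true and estimated per-subject coefficient matrices. The TP/FP convention used here (a covariate counts as selected if its coefficient is nonzero for any subject) is our assumption, not necessarily the paper's exact definition, and RPE is omitted because its formula appears in the main text; the inputs are hypothetical.

```python
# A sketch of selection and estimation metrics for N x p coefficient
# matrices (rows = subjects, columns = covariates). Toy inputs only.
import numpy as np

def selection_and_error(B_true: np.ndarray, B_hat: np.ndarray):
    truly_nonzero = np.any(B_true != 0, axis=0)  # covariates with any true effect
    est_nonzero = np.any(B_hat != 0, axis=0)     # covariates ever selected
    tp = int(np.sum(truly_nonzero & est_nonzero))
    fp = int(np.sum(~truly_nonzero & est_nonzero))
    rmse = float(np.sqrt(np.mean((B_hat - B_true) ** 2)))
    return tp, fp, rmse

rng = np.random.default_rng(1)
B_true = np.zeros((200, 500))
B_true[:, :20] = 1.0                             # q = 20 important covariates
B_hat = B_true + rng.normal(0, 0.05, B_true.shape)
B_hat[:, 25:] = 0.0                              # toy estimate with 5 false positives
print(selection_and_error(B_true, B_hat))        # -> (20, 5, rmse)
```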
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
