Article

Adaptive CoCoLasso for High-Dimensional Measurement Error Models

School of Management, University of Science and Technology of China, Hefei 230026, China
Entropy 2025, 27(2), 97; https://doi.org/10.3390/e27020097
Submission received: 3 December 2024 / Revised: 11 January 2025 / Accepted: 16 January 2025 / Published: 21 January 2025
(This article belongs to the Special Issue Information-Theoretic Methods in Data Analytics)

Abstract

A significant portion of theoretical and empirical studies in high-dimensional regression have primarily concentrated on clean datasets. However, in numerous practical scenarios, data are often corrupted by missing values and measurement errors, which cannot be ignored. Despite the substantial progress in high-dimensional regression with contaminated covariates, methods that achieve an effective trade-off among prediction accuracy, feature selection, and computational efficiency remain significantly underexplored. We introduce adaptive convex conditioned Lasso (Adaptive CoCoLasso), offering a new approach that can handle high-dimensional linear models with error-prone measurements. This estimator combines a projection onto the nearest positive semi-definite matrix with an adaptively weighted $\ell_1$ penalty. Theoretical guarantees are provided by establishing error bounds for the estimators. The results from the synthetic data analysis indicate that the Adaptive CoCoLasso performs strongly in prediction accuracy and mean squared error, particularly in scenarios involving both additive and multiplicative noise in measurements. While the Adaptive CoCoLasso estimator performs comparably or is slightly outperformed by certain methods, such as Hard, in reducing the number of incorrectly identified covariates, its strength lies in offering a more favorable trade-off between prediction accuracy and sparse modeling.

1. Introduction

High-dimensional statistical learning has found extensive applications across diverse fields, including artificial intelligence, genomics, molecular biology, and economics. Numerous effective methods leveraging sparse learning through regularization have been developed to facilitate statistical inference in high-dimensional settings. These methods are well-documented in various studies, such as [1,2,3,4,5,6,7,8,9,10,11,12,13], among others. However, most of the previous research has focused on error-free data. In practice, measurement errors are prevalent in applications such as surveys with missing or inaccurate data due to non-responses, voting systems affected by imprecise instruments or systematic biases, and sensor networks corrupted by communication failures or environmental interference. Challenges involving noisy, incomplete, or corrupted data are frequently encountered. Naively applying methods designed for clean datasets to those affected by measurement errors can lead to inconsistent and imprecise estimates, which in turn result in inaccurate conclusions, particularly in high-dimensional settings. Therefore, developing robust methods for model selection and estimation that explicitly account for measurement errors in high-dimensional problems is of paramount importance.
In recent years, sparse modeling in high-dimensional models with measurement errors has garnered widespread attention. For instance, Ref. [14] proposed minimizing regularized least squares while accounting for additive measurement errors in the covariate matrices of partially linear models. In high-dimensional linear sparse regression, Ref. [15] developed a Lasso-type estimator that utilizes an unbiased approximation to replace the corrupted Gram matrix. However, incorporating measurement error information often leads to non-convex likelihood functions, complicating the solution of the associated optimization problems. To address this challenge, Ref. [16] proposed the nearest positive semi-definite projection matrix as an approximation for the unbiased Gram matrix estimate. Using this matrix as a foundation, they introduced the convex conditioned Lasso (CoCoLasso), which reformulates the objective function as a convex optimization problem to facilitate efficient sparse learning in error-prone high-dimensional linear models.
Although CoCoLasso demonstrates superior computational efficiency due to its convex optimization framework, the use of the $\ell_1$ penalty imposes the same level of shrinkage on all coefficients, which can introduce biases. This often results in overfitting by selecting an overly complex model to minimize prediction error [1,12,17]. To address the biases and overfitting introduced by the $\ell_1$ penalty, Ref. [18] proposed balanced estimation, which is based on the nearest positive semi-definite matrix and incorporates combined $\ell_1$ and concave regularization. Although balanced estimation achieves an ideal trade-off between prediction accuracy and variable selection, it suffers from certain limitations. Firstly, the non-convex nature of the concave regularization results in increased computational complexity, rendering its application in high-dimensional settings difficult. Secondly, the selection of tuning parameters for the concave penalty is often difficult and may lead to suboptimal performance in practice.
To address these issues, we propose Adaptive CoCoLasso, which combines the nearest positive semi-definite projection with an adaptive $\ell_1$ penalty. The Adaptive CoCoLasso estimator not only preserves the computational efficiency of convex optimization but also achieves precise estimation and feature selection in the presence of both additive and multiplicative measurement errors. By imposing higher penalties on zero coefficients and lower penalties on nonzero coefficients, the Adaptive CoCoLasso minimizes estimation bias and enhances variable selection accuracy. Furthermore, error bounds for the Adaptive CoCoLasso estimator are established, and a theorem guarantees the consistency of support recovery.
This paper makes two primary contributions. First, we propose the Adaptive CoCoLasso estimator for high-dimensional linear regression models where the design matrix is affected by measurement errors, aiming to ensure precise estimation and accurate variable selection. By applying stronger penalties to zero coefficients and weaker penalties to nonzero coefficients, the method effectively mitigates overfitting when dealing with additive and multiplicative measurement errors. In addition, we establish theoretical guarantees for the proposed method by deriving oracle inequalities for prediction and estimation errors and proving the consistency of support recovery. Extensive simulation studies demonstrate the effectiveness of our approach.
The structure of this paper is as follows. Section 2 outlines the model setup and introduces the proposed Adaptive CoCoLasso estimator. Section 3 presents the theoretical properties, including oracle bounds on the estimation errors. Section 4 evaluates the finite-sample performance of the proposed method through simulation studies. Section 5 concludes with a discussion. All proofs are provided in Appendix A.
Notation 1. 
For a vector $x = (x_1, \ldots, x_p)^\top$, the $\ell_q$ norm is defined as $\|x\|_q = \big(\sum_{j=1}^p |x_j|^q\big)^{1/q}$ for $q \in (0, \infty)$, and the $\ell_\infty$ norm is given by $\|x\|_\infty = \max_{1\le i\le p}|x_i|$. For a matrix $A = (a_{ij}) \in \mathbb{R}^{p\times q}$, the following matrix norms are defined: $\|A\|_1 = \max_{1\le j\le q}\sum_{i=1}^p |a_{ij}|$, $\|A\|_\infty = \max_{1\le i\le p}\sum_{j=1}^q |a_{ij}|$, $\|A\|_{\max} = \max_{i,j}|a_{ij}|$, and $\|A\|_2 = \Lambda_{\max}(A^\top A)^{1/2}$, where $\Lambda_{\min}(A)$ and $\Lambda_{\max}(A)$ denote the smallest and largest eigenvalues of $A$, respectively.

2. Adaptive CoCoLasso for Error-Prone Models

2.1. Model Setting

Consider the high-dimensional linear regression model
$y = X\beta + \varepsilon, \qquad (1)$
where $y = (y_1, \ldots, y_n)^\top$ represents the $n$-dimensional response vector, $X = (X_1, \ldots, X_n)^\top \in \mathbb{R}^{n\times p}$ denotes the fixed design matrix, $\beta = (\beta_1, \ldots, \beta_p)^\top$ is the unknown $p$-dimensional regression coefficient vector, $\varepsilon \sim N(0, \sigma^2 I_n)$ is an $n$-dimensional error vector independent of $X$, and $I_n$ is the $n \times n$ identity matrix (the Gaussian distribution is assumed for simplicity of analysis; similar theoretical results hold under a sub-Gaussian assumption, provided that the tail probability of $\varepsilon$ decays exponentially). Measurement errors in the design matrix are common in various applications, leading to the observation of a corrupted covariate matrix $W \in \mathbb{R}^{n\times p}$ rather than the true matrix $X$.
Two classical cases are associated with measurement errors in the design matrix $X$. In the additive error case, the observed covariates are represented as $W = X + A$, where the rows of the additive error matrix $A = (a_{ij})_{n\times p}$ are independently and identically distributed (i.i.d.) with mean vector $\mathbf{0}$ and covariance matrix $\Sigma_a$. In the multiplicative error case, the observed covariates follow $W = X \odot M$, where $\odot$ denotes the Hadamard product and the rows of the multiplicative error matrix $M = (m_{ij})_{n\times p}$ are i.i.d. with mean vector $\mu_m$ and covariance matrix $\Sigma_m$. Missing data can be treated as a special case of this model, where the entries of $M$ are Bernoulli random variables with success probability $1 - \pi_j$, representing the probability of observing the $j$-th covariate, and $\pi_j$ denotes the missingness rate of the $j$-th covariate. To ensure model identifiability, the covariance matrix $\Sigma_a$ (for additive errors) or the pair $(\mu_m, \Sigma_m)$ (for multiplicative errors) is assumed to be known, as in [16,18].
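To make the two contamination mechanisms concrete, the following minimal NumPy sketch generates a clean design and then produces corrupted observations under additive errors, multiplicative errors, and the missing-data special case. The sample size, dimension, error scales, and missingness rate are illustrative choices, not values taken from the paper (the paper's own simulations are implemented in R).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 250                      # illustrative sample size and dimension

# Clean design: rows drawn i.i.d. from N(0, Sigma_X) with an AR(1) covariance.
idx = np.arange(p)
Sigma_X = 0.5 ** np.abs(np.subtract.outer(idx, idx))
X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)

# Additive errors: W = X + A, entries of A i.i.d. N(0, tau^2), i.e. Sigma_a = tau^2 I.
tau = 0.75
W_add = X + tau * rng.standard_normal((n, p))

# Multiplicative errors: W = X (Hadamard) M with log-normal entries log(m_ij) ~ N(0, tau_m^2).
tau_m = 0.25
W_mult = X * np.exp(tau_m * rng.standard_normal((n, p)))

# Missing data as a special case: m_ij ~ Bernoulli(1 - pi_j), where pi_j is the missingness rate.
pi = np.full(p, 0.1)                 # 10% missingness per covariate (assumed for illustration)
W_miss = X * rng.binomial(1, 1.0 - pi, size=(n, p))
```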

2.2. Adaptive CoCoLasso

In high-dimensional settings where the dimensionality $p$ exceeds the sample size $n$, the true coefficient vector $\beta$ is often assumed to be sparse. Specifically, the support set $S = \{j : \beta_j \ne 0\}$, representing the indices of the truly relevant predictors, has size $s = |S|$ satisfying $s = o(n/\log p)$. This sparsity assumption ensures model identifiability by requiring that only a small subset of predictors be nonzero, i.e., $s \ll n$. Let $S^C$ denote the complement of $S$. In the context of clean data, penalized least squares methods are widely employed for sparse estimation of the true coefficient vector $\beta = (\beta_1, \ldots, \beta_p)^\top$ in high-dimensional linear models. The loss function depends on $\Sigma$ and $\rho$, where $\Sigma = \frac{1}{n}X^\top X$ is the Gram matrix and $\rho = \frac{1}{n}X^\top y$ is the marginal correlation vector of $(X, y)$. When the covariate matrix is affected by errors, [15] proposed unbiased estimators $\hat{\Sigma}$ and $\tilde{\rho}$ to approximate the unobservable quantities $\Sigma$ and $\rho$. Specifically, these estimators can be expressed as
$\hat{\Sigma}_{\mathrm{add}} = \frac{1}{n}W^\top W - \Sigma_a, \qquad \tilde{\rho}_{\mathrm{add}} = \frac{1}{n}W^\top y \qquad (2)$
for the additive error cases and
$\hat{\Sigma}_{\mathrm{mult}} = \frac{1}{n}W^\top W \oslash (\Sigma_m + \mu_m\mu_m^\top), \qquad \tilde{\rho}_{\mathrm{mult}} = \frac{1}{n}W^\top y \oslash \mu_m \qquad (3)$
for the multiplicative error cases, where ⊘ denotes element-wise division.
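The surrogates in (2) and (3) translate directly into code. The sketch below assumes, as in the identifiability discussion above, that $\Sigma_a$ (additive case) or $(\mu_m, \Sigma_m)$ (multiplicative case) is known; the function names are ours.

```python
import numpy as np

def surrogate_additive(W, y, Sigma_a):
    """Unbiased surrogates (2): Sigma_hat = W'W/n - Sigma_a and rho_tilde = W'y/n."""
    n = W.shape[0]
    return W.T @ W / n - Sigma_a, W.T @ y / n

def surrogate_multiplicative(W, y, mu_m, Sigma_m):
    """Unbiased surrogates (3): element-wise division by the error moments."""
    n = W.shape[0]
    Sigma_hat = (W.T @ W / n) / (Sigma_m + np.outer(mu_m, mu_m))
    rho_tilde = (W.T @ y / n) / mu_m
    return Sigma_hat, rho_tilde
```

For missing data with observation probabilities $1 - \pi_j$, $\mu_m$ has entries $1 - \pi_j$ and, if the entries of $M$ are independent across columns, $\Sigma_m$ is diagonal with entries $\pi_j(1 - \pi_j)$.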
However, the unbiased surrogate $\hat{\Sigma}$ is generally not positive semi-definite in high-dimensional scenarios. Consequently, $\hat{\Sigma}$ may possess negative eigenvalues, so that the quadratic term $\beta^\top\hat{\Sigma}\beta$ lacks a lower bound and the loss function loses convexity. To resolve this problem, the unbiased surrogate $\hat{\Sigma}$ is replaced by its nearest positive semi-definite projection matrix, defined as $\tilde{\Sigma} = \arg\min_{\Sigma \succeq 0}\|\Sigma - \hat{\Sigma}\|_{\max}$, which can be computed efficiently using the alternating direction method of multipliers (ADMM). By the definition of $\tilde{\Sigma}$ and the triangle inequality, it follows that
$\|\tilde{\Sigma} - \Sigma\|_{\max} \le \|\tilde{\Sigma} - \hat{\Sigma}\|_{\max} + \|\hat{\Sigma} - \Sigma\|_{\max} \le 2\|\Sigma - \hat{\Sigma}\|_{\max}, \qquad (4)$
indicating that $\tilde{\Sigma}$ serves as an approximation to $\Sigma$ with accuracy comparable to that of the unbiased estimate $\hat{\Sigma}$.
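The projection $\tilde{\Sigma} = \arg\min_{\Sigma\succeq 0}\|\Sigma - \hat{\Sigma}\|_{\max}$ can be sketched with a standard ADMM splitting: one block handles the positive semi-definite constraint by eigenvalue clipping, and the other applies the proximal operator of the element-wise max norm, computed via an $\ell_1$-ball projection. The implementation below is a simplified illustration, not the exact algorithm of [16]: the ADMM penalty parameter, the fixed iteration budget, and the absence of a convergence check are simplifying assumptions.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of the vector v onto the l1 ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    j = np.nonzero(u > (cssv - radius) / k)[0][-1]
    theta = (cssv[j] - radius) / (j + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_max_norm(V, t):
    """Proximal operator of t * ||.||_max via the Moreau decomposition."""
    v = V.ravel()
    return (v - project_l1_ball(v, t)).reshape(V.shape)

def nearest_psd_max_norm(Sigma_hat, rho_admm=1.0, n_iter=200):
    """ADMM sketch for Sigma_tilde = argmin_{Sigma >= 0} ||Sigma - Sigma_hat||_max."""
    p = Sigma_hat.shape[0]
    B, U = Sigma_hat.copy(), np.zeros((p, p))
    for _ in range(n_iter):
        # Projection onto the positive semi-definite cone (eigenvalue clipping).
        S = (B - U + (B - U).T) / 2.0
        w, Q = np.linalg.eigh(S)
        A = (Q * np.maximum(w, 0.0)) @ Q.T
        # Proximal step for the max-norm term, centred at Sigma_hat.
        B = Sigma_hat + prox_max_norm(A + U - Sigma_hat, 1.0 / rho_admm)
        # Dual update.
        U = U + A - B
    return (A + A.T) / 2.0
```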
Working with $\tilde{\Sigma}$ and $\tilde{\rho}$, Ref. [16] introduced the CoCoLasso method, which employs the $\ell_1$ penalty for regularization. However, the $\ell_1$ penalty often selects overly large models to minimize prediction risk. Motivated by [19], we adopt a weighted $\ell_1$ penalty to develop a convex objective function, where the weights are determined by an initial estimator. Suppose $\tilde{\beta} = (\tilde{\beta}_1, \ldots, \tilde{\beta}_p)^\top$ is an initial estimator of $\beta$, which provides preliminary estimates of the regression coefficients. Based on this initial estimator, we define the weight vector $\omega = (\omega_1, \ldots, \omega_p)^\top$, where $\omega_j = |\tilde{\beta}_j|^{-1}$ for $j = 1, \ldots, p$. These weights enable the penalty to adapt to the relative importance of each variable, assigning larger penalties to coefficients with smaller initial estimates and smaller penalties to coefficients with larger initial estimates. Specifically, the proposed Adaptive CoCoLasso estimator $\hat{\beta}$ is defined as the optimal solution to the following optimization problem, computed after obtaining $\tilde{\Sigma}$ and $\tilde{\rho}$ as provided in (2) and (3):
$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2}\beta^\top \tilde{\Sigma}\beta - \tilde{\rho}^\top \beta + 2\lambda \sum_{j=1}^p \omega_j |\beta_j| \right\}, \qquad (5)$
where $\lambda = c_0\, s\,(\log p / n)^{1/2}$ for some positive constant $c_0$. Let $n^{-1/2}\tilde{W} \in \mathbb{R}^{p \times p}$ be the Cholesky factor of $\tilde{\Sigma}$, such that $n^{-1}\tilde{W}^\top \tilde{W} = \tilde{\Sigma}$, and define $\tilde{y} \in \mathbb{R}^p$ to satisfy $n^{-1}\tilde{W}^\top \tilde{y} = \tilde{\rho}$. Then, the proposed Adaptive CoCoLasso estimator $\hat{\beta}$ defined in (5) can be reformulated equivalently as the global minimizer of the following optimization problem:
$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2n}\|\tilde{y} - \tilde{W}\beta\|_2^2 + 2\lambda \sum_{j=1}^p \omega_j |\beta_j| \right\}. \qquad (6)$
For comparison, CoCoLasso solves the following optimization problem:
$\hat{\beta}_{\mathrm{CoCoLasso}} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{1}{2n}\|\tilde{y} - \tilde{W}\beta\|_2^2 + \lambda \sum_{j=1}^p |\beta_j| \right\}. \qquad (7)$
Here, the penalty term $\lambda \sum_{j=1}^p |\beta_j|$ introduces sparsity by shrinking some coefficients to exactly zero, effectively performing variable selection. However, since the same penalty $\lambda > 0$ is applied to all coefficients, it tends to introduce bias, particularly for larger coefficients. Unlike CoCoLasso, Adaptive CoCoLasso incorporates data-driven weights $\omega_j = |\tilde{\beta}_j|^{-1}$ into the penalty term $\sum_{j=1}^p \omega_j |\beta_j|$, adjusting the penalty to reflect the relative importance of each variable. This weighting scheme enables the Adaptive CoCoLasso to handle cases where variables have vastly different scales or signal strengths. Assigning smaller penalties to variables with larger estimated coefficients avoids over-penalizing important predictors, improving both variable selection accuracy and coefficient estimation. Additionally, the Adaptive CoCoLasso enhances the recovery of weak signals and reduces the bias introduced by the uniform penalty in traditional CoCoLasso.
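Because (5) is a convex quadratic plus a separable weighted $\ell_1$ penalty, it can also be solved by cyclic coordinate descent with soft-thresholding once $\tilde{\Sigma}$, $\tilde{\rho}$, an initial estimate $\tilde{\beta}$, and $\lambda$ are available. The sketch below is a minimal illustration of this route (the paper's own implementation uses the LARS algorithm); the iteration budget and the small guard against zero initial coefficients are simplifying assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_cocolasso(Sigma_tilde, rho_tilde, beta_init, lam, n_iter=500, eps=1e-8):
    """Cyclic coordinate descent for
       min_beta 0.5 * beta' Sigma_tilde beta - rho_tilde' beta + 2*lam * sum_j w_j |beta_j|,
       with adaptive weights w_j = 1 / |beta_init_j|."""
    p = Sigma_tilde.shape[0]
    w = 1.0 / np.maximum(np.abs(beta_init), eps)   # guard against exactly zero initial estimates
    beta = np.zeros(p)
    Sb = Sigma_tilde @ beta                        # running value of Sigma_tilde @ beta
    for _ in range(n_iter):
        for j in range(p):
            # Remove coordinate j's own contribution from the partial residual.
            r_j = rho_tilde[j] - (Sb[j] - Sigma_tilde[j, j] * beta[j])
            new_bj = soft_threshold(r_j, 2.0 * lam * w[j]) / max(Sigma_tilde[j, j], eps)
            if new_bj != beta[j]:
                Sb += Sigma_tilde[:, j] * (new_bj - beta[j])
                beta[j] = new_bj
    return beta
```

Setting all weights to one in the same routine recovers an ordinary $\ell_1$-penalized fit in the spirit of CoCoLasso (7) (up to a rescaling of $\lambda$), which can in turn supply the initial estimate $\tilde{\beta}$ used to build the weights.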

3. Theoretical Properties

In this section, we rigorously derive the statistical error bounds for the Adaptive CoCoLasso estimator under the $\ell_1$ and $\ell_2$ norms and establish theoretical guarantees for exact support recovery with high probability. Before discussing the theoretical results, we outline four technical assumptions.
Condition 1. 
The distributions of $\hat{\Sigma}$ and $\tilde{\rho}$ are identified by a set of parameters $\theta$. Then, there exist generic constants $C$ and $c$ and positive quantities $\xi$ and $\epsilon_0$, depending on $\beta_S$, $\beta_{S^C}$, and $\sigma^2$, such that, for every $\epsilon \le \epsilon_0$, $\hat{\Sigma}$ and $\tilde{\rho}$ satisfy the following probability statements:
$P\big(|\hat{\Sigma}_{ij} - \Sigma_{ij}| \ge \epsilon\big) \le C\exp\big(-cn\epsilon^2\xi^{-1}\big), \quad \text{for any } i, j = 1, \ldots, p;$
$P\big(|\tilde{\rho}_j - \rho_j| \ge \epsilon\big) \le C\exp\big(-cns^{-2}\epsilon^2\xi^{-1}\big), \quad \text{for any } j = 1, \ldots, p.$
Condition 2. 
For some positive constant κ, assume
$0 < \kappa = \min_{\substack{v \ne 0 \\ \|v_{S^C}\|_1 \le 3\|v_S\|_1}} \frac{v^\top \Sigma v}{\|v\|_2^2},$
where $v = (v_S^\top, v_{S^C}^\top)^\top \in \mathbb{R}^p$, with $v_S$ and $v_{S^C}$ representing the subvectors corresponding to the support set $S$ and its complement $S^C$, respectively.
Condition 3. 
The minimum signal strength satisfies $\min_{j \in S}|\beta_j| \ge C\sqrt{s}\,\lambda$, where $C > 0$ is a positive constant.
Condition 4. 
The initial estimator $\tilde{\beta}$ satisfies $\|\tilde{\beta} - \beta\|_2 = O_P(\sqrt{s}\,\lambda)$ with $\lambda = c_0\, s\,(\log p / n)^{1/2}$.
Condition 1, known as the closeness condition proposed by [16], requires that the surrogates $\hat{\Sigma}$ (and consequently $\tilde{\Sigma}$) and $\tilde{\rho}$ achieve sufficient element-wise closeness to $\Sigma$ and $\rho$, respectively. This condition has already been proven in [16] for typical additive and multiplicative measurement error cases, with $\hat{\Sigma}$ and $\tilde{\rho}$ defined in Equations (2) and (3). Condition 2, the restricted eigenvalue (RE) condition, ensures the stability and non-degeneracy of the design matrix on sparse predictor subsets. A similar RE condition was used in [20] to derive statistical error bounds for the clean Lasso estimator. Condition 3 specifies the minimum signal strength, ensuring that the true signal is large enough relative to the regularization parameter $\lambda$ to differentiate significant predictors from noise. This condition is commonly assumed in high-dimensional regression settings to guarantee consistent variable selection and accurate estimation [21,22].
Condition 4 ensures that the initial estimator $\tilde{\beta}$ approximates the true parameter $\beta$ with an $\ell_2$ error rate of $O_P\big(s\sqrt{s\log p / n}\big)$, providing the accuracy needed to construct adaptive weights that capture the underlying sparsity and improve the efficiency of the Adaptive CoCoLasso estimator. For clean covariates, commonly used initial estimators include the Lasso [10], which leverages sparsity in high-dimensional settings, and ridge regression [23], which addresses multicollinearity effectively. Another widely used approach is the marginal regression estimator, which achieves zero-consistency under a partial orthogonality condition and is derived by fitting univariate regressions for each predictor separately [19]. When the design matrix is subject to measurement errors, CoCoLasso can be used as the initial estimator, as it is specifically designed for measurement error models. Alternatively, the estimator introduced in [15], which provides theoretical guarantees for high-dimensional regression with noisy or missing data, can serve as an initial estimator.
Theorem 1. 
Under Conditions 1–4, the Adaptive CoCoLasso estimator $\hat{\beta}$ satisfies the following oracle inequalities with high probability $1 - O(p^{-c_1})$ for some positive constant $c_1$:
$\|\hat{\beta} - \beta\|_2 = O(\sqrt{s}\,\lambda), \qquad \|\hat{\beta} - \beta\|_1 = O(s\lambda).$
Here, $\lambda = c_0\, s\,(\log p / n)^{1/2}$, and the constants hidden in the $O(\cdot)$ notation depend on the restricted eigenvalue constant $\kappa$, the noise variance $\sigma^2$, and the probabilistic bounds specified in Condition 1.
Moreover, with high probability $1 - O(p^{-c_2})$, it holds that
$P\big(\mathrm{supp}(\hat{\beta}) = \mathrm{supp}(\beta)\big) \to 1 \quad \text{as } n \to \infty,$
where $\mathrm{supp}(\beta) = \{j : \beta_j \ne 0\}$ denotes the support set of $\beta$, representing the indices of its nonzero components. Similarly, $\mathrm{supp}(\hat{\beta})$ represents the support set of the Adaptive CoCoLasso estimator $\hat{\beta}$.
Theorem 1 establishes the theoretical properties of the Adaptive CoCoLasso estimator under high-dimensional linear models with measurement errors. Specifically, it guarantees oracle inequalities for the estimation errors in both the $\ell_2$ and $\ell_1$ norms, showing that the errors scale with $\sqrt{s}\,\lambda$ and $s\lambda$, respectively. The constants in these bounds depend on key factors such as the restricted eigenvalue constant, the noise variance, and the probabilistic bounds of Condition 1. The tail probabilities are influenced by the measurement errors through the quantity $\xi$ in Condition 1. Additionally, the theorem guarantees consistent support recovery, meaning that the true set of relevant predictors is identified with high probability as the sample size $n$ and dimensionality $p$ increase. All proofs are provided in Appendix A.

4. Numerical Studies

Within this section, we utilize synthetic datasets to evaluate the finite-sample effectiveness of the Adaptive CoCoLasso (A-CoCoLasso) estimator. The comparison includes several alternative estimators: CoCoLasso [16], balanced estimation combining $\ell_1$ regularization with the smoothly clipped absolute deviation penalty (B-SCAD), balanced estimation combining $\ell_1$ regularization with hard thresholding (B-Hard) [18], and the standalone hard-thresholding method (Hard). The Adaptive CoCoLasso weights were computed from the CoCoLasso regression coefficients. The CoCoLasso and Adaptive CoCoLasso estimators were implemented using the LARS algorithm. All the simulation studies were performed in R, covering both additive and multiplicative measurement errors. In all the numerical experiments, the penalty parameter $\lambda$ was selected by 10-fold cross-validation.
To evaluate the aforementioned estimators, we employed the performance metrics introduced in [18]. The first two metrics are the count of correctly selected covariates (C) and the count of incorrectly selected covariates (IC), defined as $\mathrm{C} = \mathrm{TP} = \sum_{j\in S} I(\hat{\beta}_j \ne 0)$ and $\mathrm{IC} = \mathrm{FP} = \sum_{j\notin S} I(\hat{\beta}_j \ne 0)$, respectively. The third and fourth metrics are the prediction error (PE) and the mean squared error (MSE), given by $\mathrm{PE}(\hat{\beta}) = (\beta - \hat{\beta})^\top \Sigma_X (\beta - \hat{\beta})$ and $\mathrm{MSE}(\hat{\beta}) = \|\beta - \hat{\beta}\|_2^2$, respectively. These metrics jointly assess feature selection accuracy through C and IC, and predictive and estimation performance through PE and MSE, providing a comprehensive comparison framework.
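In a simulation, where the true $\beta$ and the covariance $\Sigma_X$ of the clean covariates are known, these four metrics can be computed directly; a minimal helper is sketched below.

```python
import numpy as np

def performance_metrics(beta_hat, beta_true, Sigma_X):
    """C, IC, PE and MSE as defined above."""
    S = beta_true != 0
    selected = beta_hat != 0
    C = int(np.sum(selected & S))            # correctly selected covariates (true positives)
    IC = int(np.sum(selected & ~S))          # incorrectly selected covariates (false positives)
    diff = beta_true - beta_hat
    PE = float(diff @ Sigma_X @ diff)        # prediction error
    MSE = float(diff @ diff)                 # squared l2 estimation error
    return C, IC, PE, MSE
```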

4.1. Additive Error Cases

Example 1. 
We followed the simulation setup in [18] and generated 100 datasets, each containing $n = 100$ observations from the linear model $y = X\beta + \varepsilon$, with $p = 250$, $\beta = (3, 1.5, 0, 0, 2, 0, \ldots, 0)^\top$, and $\sigma = 3$. The rows of $X$ were independently sampled from the multivariate normal distribution $N(0, \Sigma_X)$, and the errors $\varepsilon$ from $N(0, \sigma^2 I_n)$. We considered two covariance structures for $\Sigma_X$: the autoregressive structure, where $\Sigma_X = (0.5^{|i-j|})_{1\le i,j\le p}$, and the compound symmetry structure, where $\Sigma_X = 0.3\,\mathbf{1}\mathbf{1}^\top + 0.7\,I$. The contaminated covariates $W$ were obtained as $W = X + A$, where the rows of $A$ were independently drawn from $N(0, \tau^2 I_p)$ with $\tau = 0.75$ and $\tau = 1.25$, respectively. The results for the five estimators are summarized in Table 1 and Table 2.
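The data-generating side of Example 1 can be reproduced along the following lines; the random seed is arbitrary, and the five estimators would then be applied to each simulated replicate.

```python
import numpy as np

rng = np.random.default_rng(2024)
n, p, sigma, tau = 100, 250, 3.0, 0.75
beta = np.zeros(p)
beta[[0, 1, 4]] = [3.0, 1.5, 2.0]            # beta = (3, 1.5, 0, 0, 2, 0, ..., 0)

idx = np.arange(p)
Sigma_ar = 0.5 ** np.abs(np.subtract.outer(idx, idx))       # autoregressive structure
Sigma_cs = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)          # compound symmetry structure

def one_replicate(Sigma_X):
    """One simulated dataset: clean design, response, and additively contaminated covariates."""
    X = rng.multivariate_normal(np.zeros(p), Sigma_X, size=n)
    y = X @ beta + sigma * rng.standard_normal(n)
    W = X + tau * rng.standard_normal((n, p))               # Sigma_a = tau^2 * I
    return W, y
```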
The results in Table 1 and Table 2 highlight the comparative performance of the five methods under additive measurement errors in both the autoregressive and compound symmetric structures. Compared with CoCoLasso, A-CoCoLasso achieved a comparable or higher number of correctly selected covariates (C) while consistently selecting far fewer incorrect covariates (IC). For example, in the autoregressive structure with $\tau = 0.75$, A-CoCoLasso attained C = 2.93, essentially matching CoCoLasso (C = 2.94) and close to the ideal value of 3, while reducing IC from 12.54 (CoCoLasso) to 3.89. Furthermore, A-CoCoLasso demonstrated better prediction error (PE = 1.68) and mean squared error (MSE = 2.01) than CoCoLasso (PE = 3.65; MSE = 3.64). Similarly, when compared to Hard, A-CoCoLasso achieved a higher C (2.93 vs. 2.27) and better overall estimation and prediction performance. These results indicate that A-CoCoLasso provides more accurate variable selection and estimation than both CoCoLasso and Hard.
In comparison to the balanced estimation methods (B-SCAD and B-Hard), A-CoCoLasso also demonstrated superior performance, particularly in reducing IC while maintaining C closer to the ideal value. For example, in the compound symmetric structure with τ = 1.25 , A-CoCoLasso achieved C = 2.46, outperforming both B-SCAD (C = 2.07) and B-Hard (C = 1.64). Additionally, A-CoCoLasso maintained a competitive IC of 12.88, which is lower than that of B-SCAD (IC = 13.59), while achieving better prediction and estimation accuracy (PE = 6.80; MSE = 8.39) compared to both B-SCAD (PE = 8.01; MSE = 9.91) and B-Hard (PE = 8.60; MSE = 11.04). These results demonstrate that A-CoCoLasso not only balances the trade-off between correctly identifying covariates and excluding noise variables but also provides robust estimation and prediction accuracy under various settings with additive measurement errors.
Example 2. 
To investigate the performance of the methods under ultra-high-dimensional settings with additive measurement errors, we adopted a setting similar to that in [7]. The coefficient vector was specified as $\beta = (1, 0.5, 0.7, 1.2, 0.9, 0.3, 0.55, 0, \ldots, 0)^\top$. The sample size, dimensionality, and noise level were set as $n = 80$, $p = 1000$, and $\sigma = 1$, respectively, reflecting an ultra-high-dimensional setting. The variability of the additive errors was characterized by standard deviation values of $\tau = 0.25$ and $0.5$. Table 3 summarizes the performance of the five methods under this setting.
Table 3 presents the performance of the five methods under ultra-high-dimensional settings with additive measurement errors for τ = 0.25 and τ = 0.5 . In terms of the number of correctly identified covariates (C), A-CoCoLasso achieved a competitive performance compared to CoCoLasso and B-SCAD, with values close to the ideal benchmark of 4 under τ = 0.25 (C = 3.95), and slightly reduced performance under τ = 0.5 (C = 3.56). In contrast, Hard demonstrated considerably lower values for C (3.14 for τ = 0.25 and 2.36 for τ = 0.5 ), indicating weaker variable selection ability.
When examining the number of incorrectly identified covariates (IC), A-CoCoLasso substantially outperformed CoCoLasso and B-SCAD, maintaining much lower IC values (12.7 for τ = 0.25 and 11.05 for τ = 0.5 ) compared to CoCoLasso (31.32 and 24.66, respectively) and B-SCAD (21.69 and 21.84, respectively). Hard achieved the smallest IC but at the cost of reduced C, highlighting its conservative nature. In terms of PE and MSE, A-CoCoLasso remained competitive, with PE = 0.90 and MSE = 1.55 for τ = 0.25 , and PE = 1.21 and MSE = 2.01 for τ = 0.5 , showing comparable or better results than Hard and B-Hard while maintaining a balanced variable selection performance.

4.2. Multiplicative Error Cases

Example 3. 
We evaluated the performance of Adaptive CoCoLasso and the competing methods, including CoCoLasso, Hard, and the balanced estimators, under multiplicative measurement errors. The true model remained the same as in the additive error setup described in Example 1. To simulate the multiplicative errors, we generated $W = X \odot M$, where the entries of $M = (m_{ij})_{n\times p}$ followed a log-normal distribution. Specifically, the $\log(m_{ij})$ were independently distributed as $N(0, \tau^2)$, with $\tau = 0.25$ and $\tau = 0.75$. Table 4 and Table 5 present the outcomes for the multiplicative error scenarios.
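For the surrogate (3), this log-normal specification implies closed-form error moments; a short helper, under the assumption that the $m_{ij}$ are mutually independent across columns, is given below.

```python
import numpy as np

def lognormal_error_moments(p, tau):
    """mu_m and Sigma_m when log(m_ij) ~ N(0, tau^2) with independent entries."""
    mean = np.exp(tau ** 2 / 2.0)                      # E[m_ij]
    var = (np.exp(tau ** 2) - 1.0) * np.exp(tau ** 2)  # Var(m_ij)
    return np.full(p, mean), var * np.eye(p)
```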
The results in Table 4 and Table 5 demonstrate that A-CoCoLasso exhibits strong performance under multiplicative measurement errors across both autoregressive and compound symmetric structures. Compared to the other methods, A-CoCoLasso achieved a desirable balance between correctly identifying covariates and maintaining low false discovery rates while also demonstrating competitive prediction and estimation accuracy. Its robustness under varying levels of multiplicative error ( τ = 0.25 and τ = 0.75 ) further highlights its adaptability and effectiveness in handling challenging high-dimensional scenarios with multiplicative measurement errors.
Example 4. 
We examined the performance of Adaptive CoCoLasso in ultra-high-dimensional settings with multiplicative measurement errors. To maintain comparability with the additive error scenarios, the simulation setup remained largely consistent with that in Example 2, except that the standard deviation values of the multiplicative errors were specified as τ = 0.1 and τ = 0.2 , ensuring a comparable signal-to-noise ratio. The performance of the five methods is summarized in Table 6.
Table 6 presents the performance of five methods under ultra-high-dimensional settings with multiplicative measurement errors for τ = 0.1 and τ = 0.2 . The results demonstrate that A-CoCoLasso achieved a desirable balance between correctly identifying covariates (C) and maintaining a low number of incorrectly identified covariates (IC) while delivering competitive prediction and estimation accuracy. For τ = 0.1 , A-CoCoLasso shows robust performance, with C = 4.18 and IC = 14.55, outperforming CoCoLasso in terms of IC (34.79) while maintaining similar predictive performance (PE = 0.78 vs. PE = 1.21 for CoCoLasso). As τ increased to 0.2, A-CoCoLasso remained effective with C = 3.57 and IC = 4.26, again demonstrating a significant reduction in IC compared to CoCoLasso (26.81). Additionally, A-CoCoLasso achieved comparable PE and MSE values to the best-performing methods, such as B-SCAD, while demonstrating better variable selection than Hard. Overall, these results highlight A-CoCoLasso’s ability to effectively balance variable selection and prediction accuracy under challenging ultra-high-dimensional multiplicative error settings.

5. Discussion

This paper introduces the Adaptive CoCoLasso estimator, designed to balance prediction accuracy and feature selection in high-dimensional linear regression with measurement errors, effectively addressing both the additive and multiplicative cases. The proposed method combines two key techniques: the nearest positive semi-definite projection matrix, which corrects for measurement errors in the surrogate Gram matrix, and an adaptively weighted $\ell_1$ penalty, which enhances sparsity and variable selection by assigning data-driven weights to the coefficients. Unlike combined $\ell_1$ and concave regularization, which introduces computational challenges due to its non-convex nature and parameter-tuning difficulties, Adaptive CoCoLasso retains the computational efficiency of convex optimization while providing robust estimation performance. The methodology leverages the LARS algorithm to solve the penalized optimization problem, ensuring scalability to high-dimensional settings. The theoretical analysis and simulation results show that the Adaptive CoCoLasso achieves robust prediction and estimation performance, effectively addressing overfitting and the challenges posed by contaminated data.
Future work could focus on extending the Adaptive CoCoLasso estimator to address statistical inference challenges, such as constructing confidence intervals and performing hypothesis testing. A major difficulty in these extensions arises from the unknown true covariate matrix, which not only makes predicting the response vector challenging but also hinders accurate noise level estimation, even with a reliable coefficient estimator. These issues fall outside the scope of this paper and represent intriguing directions for future research.

Funding

This work was supported by the National Key R&D Program of China (Grant 2022YFA1008000), Natural Science Foundation of China (Grants 72071187, 11671374, 71731010, and 71921001), and Fundamental Research Funds for the Central Universities (Grants WK3470000017 and WK2040000027).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the author.

Conflicts of Interest

The author declares that they have no conflict of interest.

Appendix A

Appendix A.1. Proof of Theorem 1

Proof. 
Denote the estimation error by $\nu = \hat{\beta} - \beta$, where $\hat{\beta}$ is the global minimizer defined in the Adaptive CoCoLasso method. For simplicity of notation, we write $\|\beta\|_{1,\omega} = \sum_{j=1}^p \omega_j|\beta_j|$ for the weighted $\ell_1$ norm, so that the penalty term in (5) is $2\lambda\|\beta\|_{1,\omega}$. Based on the definition of $\hat{\beta}$ in (5), the following inequality holds:
$\frac{1}{2}\hat{\beta}^\top\tilde{\Sigma}\hat{\beta} - \tilde{\rho}^\top\hat{\beta} + 2\lambda\|\hat{\beta}\|_{1,\omega} \le \frac{1}{2}\beta^\top\tilde{\Sigma}\beta - \tilde{\rho}^\top\beta + 2\lambda\|\beta\|_{1,\omega}.$
By simple calculations and Hölder's inequality together with the definition of the $\ell_\infty$ norm, we have
$\frac{1}{2}\nu^\top\tilde{\Sigma}\nu + 2\lambda\|\hat{\beta}\|_{1,\omega} \le \|\nu\|_1\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty + 2\lambda\|\beta\|_{1,\omega}. \qquad (A1)$
For clarity, the proof is divided into six steps, which derive the stated bounds from the inequality above.
Step 1. We derive a bound for the term $\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty$. By the triangle inequality, we obtain
$\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty \le \|\tilde{\rho} - \rho\|_\infty + \|\rho - \Sigma\beta\|_\infty + \|(\tilde{\Sigma} - \Sigma)\beta\|_\infty. \qquad (A2)$
For the first term on the right-hand side of (A2), $\|\tilde{\rho} - \rho\|_\infty$, Condition 1 implies that
$P\big(|\tilde{\rho}_j - \rho_j| \ge \lambda/6\big) \le C\exp\big(-cns^{-2}\lambda^2\xi^{-1}\big), \quad \text{for any } j = 1, \ldots, p.$
Applying the union bound,
$P\big(\|\tilde{\rho} - \rho\|_\infty \ge \lambda/6\big) \le \sum_{j=1}^p P\big(|\tilde{\rho}_j - \rho_j| \ge \lambda/6\big) \le p \cdot C\exp\big(-cns^{-2}\lambda^2\xi^{-1}\big).$
Thus, we have $\|\tilde{\rho} - \rho\|_\infty = O_P\big(s\sqrt{\log p / n}\big)$. Next, we derive a bound for the second term $\|\rho - \Sigma\beta\|_\infty$. Note that
$\rho - \Sigma\beta = \frac{1}{n}X^\top y - \Sigma\beta = \frac{1}{n}X^\top(X\beta + \varepsilon) - \Sigma\beta = \frac{1}{n}X^\top\varepsilon.$
Under the assumption that $\varepsilon \sim N(0, \sigma^2 I_n)$, invoking Lemma A1, we have the probability bound
$P\big(\|\rho - \Sigma\beta\|_\infty > \lambda/6\big) \le p \cdot C\exp\big(-cn\lambda^2\sigma^{-2}\big),$
for some constants $C > 0$ and $c > 0$. Therefore, with high probability, $\|\rho - \Sigma\beta\|_\infty = O_P\big(\sqrt{\log p / n}\big)$. The third component, $\|(\tilde{\Sigma} - \Sigma)\beta\|_\infty$, can be bounded as follows. Using Condition 1, for all $i, j = 1, \ldots, p$, we have
$P\big(|\hat{\Sigma}_{ij} - \Sigma_{ij}| \ge \epsilon\big) \le C\exp\big(-cn\epsilon^2\xi^{-1}\big),$
where $C > 0$ and $c > 0$ are constants. Combining this with Lemma A2, we have
$\|\tilde{\Sigma} - \Sigma\|_{\max} = O_P\big(\sqrt{\log p / n}\big).$
For the sparse vector $\beta$ with support size $|S| = s$, we additionally obtain
$\|(\tilde{\Sigma} - \Sigma)\beta\|_\infty \le \|\tilde{\Sigma} - \Sigma\|_{\max}\|\beta\|_1.$
Using $\|\beta\|_1 \le \sqrt{s}\|\beta\|_2$ and assuming $\|\beta\|_2 = O(1)$, it follows that
$\|(\tilde{\Sigma} - \Sigma)\beta\|_\infty = O_P\big(\sqrt{s\log p / n}\big).$
Combining the three parts above, we have
$\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty \le O_P\big(s\sqrt{\log p / n}\big) + O_P\big(\sqrt{\log p / n}\big) + O_P\big(\sqrt{s\log p / n}\big).$
Since the term involving the sparsity $s$ dominates in high-dimensional settings, the final bound is
$\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty = O_P\big(s\sqrt{\log p / n}\big). \qquad (A3)$
Step 2. Decompose the error term $\nu$ into components on the support set $S = \{j : \beta_j \ne 0\}$ and its complement $S^C$, so that $\nu = \nu_S + \nu_{S^C}$. The weighted $\ell_1$ norms can be written as
$\|\hat{\beta}\|_{1,\omega} = \|\hat{\beta}_S\|_{1,\omega} + \|\hat{\beta}_{S^C}\|_{1,\omega}, \qquad \|\beta\|_{1,\omega} = \|\beta_S\|_{1,\omega}.$
Substituting these into inequality (A1) yields
$\frac{1}{2}\nu^\top\tilde{\Sigma}\nu + 2\lambda\|\hat{\beta}_{S^C}\|_{1,\omega} \le \nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta) + 2\lambda\|\nu_S\|_{1,\omega}. \qquad (A4)$
For the first term on the left-hand side of (A4), we have
$\nu^\top\tilde{\Sigma}\nu = \nu^\top\Sigma\nu + \nu^\top\Delta\nu,$
where $\Delta = \tilde{\Sigma} - \Sigma$ and $\Sigma = \frac{1}{n}X^\top X$. Applying Condition 2 to any $\nu$ satisfying $\|\nu_{S^C}\|_1 \le 3\|\nu_S\|_1$, we have
$\nu^\top\Sigma\nu \ge \kappa\|\nu_S\|_2^2,$
where $\kappa > 0$ is the restricted eigenvalue constant. For the error term $\Delta = \tilde{\Sigma} - \Sigma$, Condition 1 (via Lemma A2) ensures that
$\|\Delta\|_{\max} = O_P\big(\sqrt{\log p / n}\big).$
We can bound $|\nu^\top\Delta\nu|$ as follows: $|\nu^\top\Delta\nu| \le \|\Delta\|_{\max}\|\nu\|_1^2$. From the sparsity structure, $\|\nu\|_1 = \|\nu_S\|_1 + \|\nu_{S^C}\|_1 \le 4\|\nu_S\|_1$. Applying the Cauchy–Schwarz inequality $\|\nu_S\|_1 \le \sqrt{s}\|\nu_S\|_2$, where $s = |S|$ is the size of the support set, we obtain
$|\nu^\top\Delta\nu| \le O_P\big(\sqrt{\log p / n}\big)\cdot\big(4\sqrt{s}\|\nu_S\|_2\big)^2.$
Combining the bounds for $\nu^\top\Sigma\nu$ and $\nu^\top\Delta\nu$, we have
$\nu^\top\tilde{\Sigma}\nu = \nu^\top\Sigma\nu + \nu^\top\Delta\nu \ge \kappa\|\nu_S\|_2^2 - O_P\big(s\sqrt{\log p / n}\big)\|\nu_S\|_2^2.$
When $n \gg s\log p$, the second term is asymptotically negligible. Replacing $\kappa$ by $\kappa/2 > 0$ if necessary (and still denoting it by $\kappa$), we ensure that the first term on the left-hand side of (A4) satisfies $\nu^\top\tilde{\Sigma}\nu \ge \kappa\|\nu_S\|_2^2$.
We next bound the first term $\nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta)$ on the right-hand side of (A4). We have
$\nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta) = \sum_{j\in S}\nu_j\big(\tilde{\rho}_j - (\tilde{\Sigma}\beta)_j\big) + \sum_{j\notin S}\nu_j\big(\tilde{\rho}_j - (\tilde{\Sigma}\beta)_j\big).$
Using the bound (A3), $\|\tilde{\rho} - \tilde{\Sigma}\beta\|_\infty = O_P\big(s\sqrt{\log p / n}\big)$, we have
$|\nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta)| \le \|\nu\|_1 \cdot O_P\big(s\sqrt{\log p / n}\big).$
By the sparsity structure of $\nu$, $\|\nu\|_1 = \|\nu_S\|_1 + \|\nu_{S^C}\|_1$, and under the constraint $\|\nu_{S^C}\|_1 \le 3\|\nu_S\|_1$ we obtain $\|\nu\|_1 \le 4\|\nu_S\|_1$. Further, by the Cauchy–Schwarz inequality $\|\nu_S\|_1 \le \sqrt{s}\|\nu_S\|_2$, we have
$|\nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta)| \le 4\sqrt{s}\|\nu_S\|_2 \cdot O_P\big(s\sqrt{\log p / n}\big).$
Simplifying, this yields
$|\nu^\top(\tilde{\rho} - \tilde{\Sigma}\beta)| \le O_P\big(\sqrt{s}\,\lambda\big)\|\nu_S\|_2. \qquad (A5)$
Substituting bound (A5) and
$\nu^\top\tilde{\Sigma}\nu \ge \kappa\|\nu_S\|_2^2$
into inequality (A4), we obtain
$\frac{1}{2}\kappa\|\nu_S\|_2^2 + 2\lambda\|\hat{\beta}_{S^C}\|_{1,\omega} \le O_P\big(\sqrt{s}\,\lambda\big)\|\nu_S\|_2 + 2\lambda\|\nu_S\|_{1,\omega}. \qquad (A6)$
Step 3. To bound $\|\hat{\beta}_{S^C}\|_{1,\omega}$ on the left-hand side of inequality (A6), we observe that $\|\hat{\beta}_{S^C}\|_{1,\omega} = \sum_{j\in S^C}\omega_j|\hat{\beta}_j|$, where the weights $\omega = (\omega_1, \ldots, \omega_p)^\top$ are defined through the initial estimator $\tilde{\beta}$, with $\omega_j = |\tilde{\beta}_j|^{-1}$ for $j = 1, \ldots, p$. Under Condition 4, the initial estimator satisfies $\|\tilde{\beta} - \beta\|_2 = O_P(\sqrt{s}\,\lambda)$, and the sparsity assumption implies $\beta_j = 0$ for $j \in S^C$. By the consistency of $\tilde{\beta}$, we have $|\tilde{\beta}_j| = O_P(\sqrt{s}\,\lambda)$ for $j \in S^C$, so that $\omega_j$ is at least of order $1/(\sqrt{s}\,\lambda)$ with high probability, and the corresponding penalty term $2\lambda\omega_j|\beta_j|$ in the objective function (6) is of order at least $|\beta_j|/\sqrt{s}$. Under Condition 3, since $\lambda\omega_j$ becomes large relative to the noise level for $j \in S^C$, the penalty ensures $P(\hat{\beta}_j = 0) \to 1$ as $n \to \infty$. To bound $\|\nu_S\|_{1,\omega}$ on the right-hand side of (A6), we note that the weighted $\ell_1$ norm satisfies $\|\nu_S\|_{1,\omega} = \sum_{j\in S}\omega_j|\nu_j|$ with $\omega_j = |\tilde{\beta}_j|^{-1}$. Under Condition 4, the weights satisfy $\omega_j = O_P(1)$ for $j \in S$, since $\tilde{\beta}_j$ is consistent and $\beta_j \ne 0$. Thus, the weighted $\ell_1$ norm satisfies $\|\nu_S\|_{1,\omega} = \sum_{j\in S}\omega_j|\nu_j| \le C\|\nu_S\|_1$ for some constant $C > 0$. Substituting, we obtain
$\frac{1}{2}\kappa\|\nu_S\|_2^2 \le \|\nu_S\|_1 \cdot O_P\big(s\sqrt{\log p / n}\big) + 2C\lambda\|\nu_S\|_1. \qquad (A7)$
Step 4. We derive the $\ell_2$ norm bound for the estimation error $\nu = \hat{\beta} - \beta$. By the Cauchy–Schwarz inequality, the $\ell_1$ and $\ell_2$ norms satisfy $\|\nu_S\|_1 \le \sqrt{s}\|\nu_S\|_2$, where $s = |S|$ is the sparsity level. Substituting this, inequality (A7) can be rewritten as
$\frac{1}{2}\kappa\|\nu_S\|_2^2 \le \sqrt{s}\|\nu_S\|_2 \cdot O_P\big(s\sqrt{\log p / n}\big) + 2C\lambda\sqrt{s}\|\nu_S\|_2.$
Factoring out $\|\nu_S\|_2$, we have
$\|\nu_S\|_2\left(\frac{1}{2}\kappa\|\nu_S\|_2 - \sqrt{s}\cdot O_P\big(s\sqrt{\log p / n}\big) - 2C\lambda\sqrt{s}\right) \le 0.$
For $\|\nu_S\|_2 > 0$, the term in parentheses must be non-positive, so
$\|\nu_S\|_2 \le \frac{2\sqrt{s}}{\kappa}\, O_P\big(s\sqrt{\log p / n}\big) + \frac{4C\sqrt{s}\,\lambda}{\kappa}.$
Since $\lambda = c_0\, s\,(\log p / n)^{1/2}$, both terms on the right-hand side are of the same order, and we obtain $\|\nu_S\|_2 \le C_1\sqrt{s}\cdot s\sqrt{\log p / n}$, where $C_1 > 0$ is a constant depending on $\kappa$. Hence, recalling from Step 3 that $\hat{\beta}_{S^C} = 0$ with probability tending to one (so that $\nu_{S^C} = 0$), we obtain
$\|\hat{\beta} - \beta\|_2 = \|\nu_S\|_2 \le C_1\sqrt{s}\,\lambda, \qquad (A8)$
where $\lambda = c_0\, s\,(\log p / n)^{1/2}$ and $C_1$ is some positive constant.
Step 5. We derive the $\ell_1$ norm bound for the estimation error $\nu = \hat{\beta} - \beta$. The $\ell_1$ norm decomposes as $\|\nu\|_1 = \|\nu_S\|_1 + \|\nu_{S^C}\|_1$, and by the sparsity constraint $\|\nu_{S^C}\|_1 \le 3\|\nu_S\|_1$ we have $\|\nu\|_1 \le 4\|\nu_S\|_1$. By the Cauchy–Schwarz inequality, the $\ell_1$ norm on the support set satisfies $\|\nu_S\|_1 \le \sqrt{s}\|\nu_S\|_2$, where $s = |S|$ is the sparsity level. Substituting the $\ell_2$ norm bound from inequality (A8), $\|\nu_S\|_2 \le C_1\sqrt{s}\,\lambda$, we have $\|\nu_S\|_1 \le \sqrt{s}\cdot C_1\sqrt{s}\,\lambda = C_1 s\lambda$. Combining the bounds for $\nu_S$ and $\nu_{S^C}$, the $\ell_1$ norm satisfies $\|\nu\|_1 \le 4\|\nu_S\|_1 \le 4C_1 s\lambda$. Defining $C_2 = 4C_1$, we conclude that
$\|\hat{\beta} - \beta\|_1 = \|\nu\|_1 \le C_2\, s\lambda,$
where $\lambda = c_0\, s\,(\log p / n)^{1/2}$ and $C_2 > 0$ is a constant depending on $\kappa$.
Step 6. Finally, we show that $\hat{\beta}$ correctly identifies the support set of the true regression coefficients $\beta$, that is, $P\big(\mathrm{supp}(\hat{\beta}) = \mathrm{supp}(\beta)\big) \to 1$ as $n \to \infty$. For the error vector $\nu = \hat{\beta} - \beta$ we have $\|\nu\|_\infty \le \|\nu\|_2$, and from inequality (A8) the $\ell_2$ norm of the error satisfies $\|\hat{\beta} - \beta\|_2 \le C_1\sqrt{s}\,\lambda$; we therefore conclude that $\|\hat{\beta} - \beta\|_\infty = O_P(\sqrt{s}\,\lambda)$. For each $j \in S$, the minimum signal strength Condition 3 implies $|\beta_j| \ge C\sqrt{s}\,\lambda$, while with high probability the estimation error satisfies $|\hat{\beta}_j - \beta_j| = O_P(\sqrt{s}\,\lambda)$. Combining the two gives $|\hat{\beta}_j| \ge C\sqrt{s}\,\lambda - O_P(\sqrt{s}\,\lambda)$, which is strictly positive for sufficiently large $n$ provided the constant $C$ is large enough. For $j \notin \mathrm{supp}(\beta)$, we have $\beta_j = 0$, so the estimation error simplifies to $|\hat{\beta}_j - \beta_j| = |\hat{\beta}_j|$, and the $\ell_\infty$ error bound gives $|\hat{\beta}_j| = O_P(\sqrt{s}\,\lambda)$. If $\hat{\beta}_j$ were nonzero, then arguing as for the true signals via Condition 3 would require $|\hat{\beta}_j| \ge C\sqrt{s}\,\lambda$, which contradicts $|\hat{\beta}_j| = O_P(\sqrt{s}\,\lambda)$; together with the analysis of the adaptive penalty in Step 3, this yields $\hat{\beta}_j = 0$ for all $j \notin \mathrm{supp}(\beta)$. Combining the cases $j \in S$ and $j \notin S$, we conclude that $P\big(\mathrm{supp}(\hat{\beta}) = \mathrm{supp}(\beta)\big) \to 1$ as $n \to \infty$. □

Appendix A.2. Proof of Lemmas

Lemma A1. 
Let $X \in \mathbb{R}^{n\times p}$ denote a fixed design matrix and let $\varepsilon \sim N(0, \sigma^2 I_n)$ represent an $n$-dimensional error vector, where $I_n$ is the $n \times n$ identity matrix. For any $t > 0$, define $z = \sigma\sqrt{(t^2 + 2\log p)/n}$. Under these settings, the following inequality holds:
$P\big(\|n^{-1}X^\top\varepsilon\|_\infty \le z\big) \ge 1 - 2\exp\{-t^2/2\}.$
Proof. 
Given that $\varepsilon \sim N(0, \sigma^2 I_n)$, for $t > 0$ and $z = \sigma\sqrt{(t^2 + 2\log p)/n}$, we have
$1 - P\big(\|n^{-1}X^\top\varepsilon\|_\infty \le z\big) = P\big(\|n^{-1}X^\top\varepsilon\|_\infty > z\big) = P\left(\max_{1\le j\le p}\frac{|\varepsilon^\top X_j|}{n} > z\right),$
where $X_j$ is the $j$-th column of $X$. Using the union bound,
$P\left(\max_{1\le j\le p}\frac{|\varepsilon^\top X_j|}{n} > z\right) \le \sum_{j=1}^p P\left(\frac{|\varepsilon^\top X_j|}{\sqrt{n}\,\sigma} > \sqrt{t^2 + 2\log p}\right).$
Since $\varepsilon^\top X_j/(\sqrt{n}\,\sigma) \sim N(0, 1)$ (the columns of $X$ are standardized so that $\|X_j\|_2^2 = n$),
$P\left(\frac{|\varepsilon^\top X_j|}{\sqrt{n}\,\sigma} > \sqrt{t^2 + 2\log p}\right) = 2P\left(\frac{\varepsilon^\top X_j}{\sqrt{n}\,\sigma} > \sqrt{t^2 + 2\log p}\right).$
Thus, by the standard Gaussian tail bound $P(Z > x) \le \exp(-x^2/2)$,
$P\left(\max_{1\le j\le p}\frac{|\varepsilon^\top X_j|}{n} > z\right) \le 2p\exp\left\{-\left(\frac{t^2}{2} + \log p\right)\right\}.$
Simplifying,
$P\big(\|n^{-1}X^\top\varepsilon\|_\infty > z\big) \le 2\exp\left(-\frac{t^2}{2}\right).$
The proof of Lemma A1 is now complete. □
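As a quick numerical sanity check (not part of the proof), the bound of Lemma A1 can be verified by Monte Carlo simulation; the dimensions and number of replications below are arbitrary, and the columns of $X$ are scaled so that $\|X_j\|_2 = \sqrt{n}$, which is what the standardization $\varepsilon^\top X_j/(\sqrt{n}\,\sigma) \sim N(0,1)$ in the proof assumes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma, t = 200, 50, 1.0, 2.0

X = rng.standard_normal((n, p))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)      # enforce ||X_j||_2 = sqrt(n)

z = sigma * np.sqrt((t ** 2 + 2.0 * np.log(p)) / n)
reps = 20000
exceed = 0
for _ in range(reps):
    eps = sigma * rng.standard_normal(n)
    exceed += np.max(np.abs(X.T @ eps) / n) > z

# Empirical exceedance frequency versus the theoretical bound 2*exp(-t^2/2).
print(exceed / reps, "should not exceed", 2 * np.exp(-t ** 2 / 2))
```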
Lemma A2. 
For any $\epsilon > 0$, $P\big(\|\tilde{\Sigma} - \Sigma\|_{\max} \ge \epsilon\big) \le p^2\max_{i,j}P\big(|\hat{\Sigma}_{ij} - \Sigma_{ij}| \ge \epsilon/2\big)$.
Proof. 
From the inequality
$\|\tilde{\Sigma} - \Sigma\|_{\max} \le \|\tilde{\Sigma} - \hat{\Sigma}\|_{\max} + \|\hat{\Sigma} - \Sigma\|_{\max} \le 2\|\hat{\Sigma} - \Sigma\|_{\max},$
where the second inequality uses the definition of $\tilde{\Sigma}$ as the nearest positive semi-definite matrix to $\hat{\Sigma}$ together with the feasibility of $\Sigma$, it follows that $P\big(\|\tilde{\Sigma} - \Sigma\|_{\max} \ge \epsilon\big) \le P\big(\|\hat{\Sigma} - \Sigma\|_{\max} \ge \epsilon/2\big)$. The result is obtained by applying the union bound over $P\big(|\hat{\Sigma}_{ij} - \Sigma_{ij}| \ge \epsilon/2\big)$ for all $i, j$. □

References

  1. Bickel, P.J.; Ritov, Y.; Tsybakov, A.B. Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. 2009, 37, 1705–1732.
  2. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  3. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351.
  4. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
  5. Fan, J.; Feng, Y.; Wu, Y. Network exploration via the adaptive Lasso and SCAD penalties. Ann. Appl. Stat. 2009, 3, 521.
  6. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
  7. Fan, Y.; Lv, J. Asymptotic properties for combined L1 and concave regularization. Biometrika 2014, 101, 57–70.
  8. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
  9. Kong, Y.; Zheng, Z.; Lv, J. The constrained Dantzig selector with enhanced consistency. J. Mach. Learn. Res. 2016, 17, 4205–4226.
  10. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288.
  11. Wright, S.J. Coordinate descent algorithms. Math. Program. 2015, 151, 3–34.
  12. Zou, H. The adaptive Lasso and its oracle properties. J. Am. Stat. Assoc. 2006, 101, 1418–1429.
  13. Zou, H.; Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2005, 67, 301–320.
  14. Liang, H.; Li, R. Variable selection for partially linear models with measurement errors. J. Am. Stat. Assoc. 2009, 104, 234–248.
  15. Loh, P.L.; Wainwright, M.J. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. Ann. Stat. 2012, 40, 1637–1664.
  16. Datta, A.; Zou, H. CoCoLasso for high-dimensional error-in-variables regression. Ann. Stat. 2017, 45, 2400–2426.
  17. Zhao, P.; Yu, B. On model selection consistency of Lasso. J. Mach. Learn. Res. 2006, 7, 2541–2563.
  18. Zheng, Z.; Li, Y.; Yu, C.; Li, G. Balanced estimation for high-dimensional measurement error models. Comput. Stat. Data Anal. 2018, 126, 78–91.
  19. Huang, J.; Ma, S.; Zhang, C.H. Adaptive Lasso for sparse high-dimensional regression models. Stat. Sin. 2008, 18, 1603–1618.
  20. van de Geer, S.A.; Bühlmann, P. On the conditions used to prove oracle results for the Lasso. Electron. J. Stat. 2009, 3, 1360–1392.
  21. Zhang, C.H.; Huang, J. The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann. Stat. 2008, 36, 1567–1594.
  22. Bühlmann, P.; van de Geer, S. Statistics for High-Dimensional Data: Methods, Theory, and Applications; Springer: Berlin/Heidelberg, Germany, 2011.
  23. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67.
Table 1. Means and standard errors (in parentheses) of four performance metrics for five methods under additive error cases over 100 replications in the autoregressive structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.75
C         2.93 (0.03)    2.94 (0.03)    2.27 (0.07)    2.85 (0.04)    2.69 (0.05)
IC        3.89 (0.40)    12.54 (0.75)   0.14 (0.04)    8.85 (0.53)    0.71 (0.18)
PE        1.68 (0.12)    3.65 (0.17)    3.01 (0.27)    2.82 (0.16)    2.26 (0.25)
MSE       2.01 (0.15)    3.64 (0.18)    3.67 (0.33)    3.06 (0.17)    2.58 (0.28)
τ = 1.25
C         2.79 (0.04)    2.72 (0.05)    1.78 (0.08)    2.55 (0.05)    2.14 (0.08)
IC        5.30 (0.60)    13.36 (0.86)   0.25 (0.06)    8.20 (0.47)    0.83 (0.24)
PE        5.40 (0.24)    8.69 (0.24)    6.51 (0.39)    7.69 (0.32)    6.45 (0.29)
MSE       5.01 (0.21)    7.53 (0.21)    6.11 (0.34)    6.60 (0.26)    5.79 (0.33)
Table 2. Means and standard errors (in parentheses) of four performance metrics for five methods under additive error cases over 100 replications in the compound symmetric structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.75
C         2.77 (0.04)    2.69 (0.05)    1.86 (0.08)    2.62 (0.05)    2.31 (0.07)
IC        8.37 (0.50)    14.95 (0.73)   0.45 (0.07)    8.88 (0.41)    1.89 (0.29)
PE        3.01 (0.15)    4.62 (0.19)    5.04 (0.41)    3.43 (0.20)    4.16 (0.31)
MSE       3.79 (0.20)    6.05 (0.25)    6.15 (0.49)    4.40 (0.25)    5.43 (0.42)
τ = 1.25
C         2.46 (0.06)    2.24 (0.07)    1.20 (0.07)    2.07 (0.07)    1.64 (0.07)
IC        12.88 (0.44)   18.84 (0.64)   1.24 (0.13)    13.59 (0.45)   3.64 (0.27)
PE        6.80 (0.21)    8.46 (0.22)    10.43 (0.52)   8.01 (0.30)    8.60 (0.31)
MSE       8.39 (0.26)    10.81 (0.28)   11.59 (0.62)   9.91 (0.36)    11.04 (0.44)
Table 3. Means and standard errors (in parentheses) of four performance metrics for five methods under additive error cases over 100 replications in the ultra-high-dimensional autoregressive structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.25
C         3.95 (0.07)    3.96 (0.07)    3.14 (0.11)    4.30 (0.09)    3.88 (0.08)
IC        12.7 (1.48)    31.32 (2.03)   0.40 (0.07)    21.69 (1.52)   4.82 (1.49)
PE        0.90 (0.03)    1.37 (0.04)    0.95 (0.04)    0.82 (0.06)    0.88 (0.05)
MSE       1.55 (0.03)    2.19 (0.05)    1.70 (0.08)    1.45 (0.05)    1.52 (0.06)
τ = 0.5
C         3.56 (0.07)    3.54 (0.08)    2.36 (0.11)    3.69 (0.09)    3.49 (0.09)
IC        11.05 (1.26)   24.66 (1.32)   0.37 (0.10)    21.84 (0.99)   5.05 (0.96)
PE        1.21 (0.03)    1.76 (0.04)    1.36 (0.06)    1.27 (0.04)    1.24 (0.05)
MSE       2.01 (0.04)    2.72 (0.04)    2.26 (0.08)    2.07 (0.06)    2.01 (0.06)
Table 4. Means and standard errors (in parentheses) of four performance metrics for five methods under multiplicative error cases over 100 replications in the autoregressive structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.25
C         3.00 (0.00)    3.00 (0.00)    2.67 (0.05)    2.99 (0.01)    2.69 (0.02)
IC        4.78 (0.58)    14.50 (0.93)   0.13 (0.05)    8.46 (0.55)    0.51 (0.06)
PE        0.70 (0.05)    1.90 (0.09)    1.02 (0.10)    0.99 (0.06)    0.65 (0.08)
MSE       0.82 (0.06)    1.82 (0.10)    1.41 (0.14)    1.07 (0.07)    0.77 (0.08)
τ = 0.75
C         2.90 (0.03)    2.85 (0.04)    2.11 (0.07)    2.79 (0.04)    2.56 (0.06)
IC        5.17 (0.50)    12.98 (0.70)   0.27 (0.06)    9.10 (0.58)    0.88 (0.18)
PE        2.36 (0.15)    5.05 (0.20)    3.76 (0.25)    3.63 (0.21)    2.98 (0.23)
MSE       2.63 (0.16)    4.84 (0.18)    4.50 (0.31)    3.80 (0.20)    3.37 (0.26)
Table 5. Means and standard errors (in parentheses) of four performance metrics for five methods under multiplicative error cases over 100 replications in the compound symmetric structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.25
C         3.00 (0.00)    2.97 (0.02)    2.86 (0.03)    2.97 (0.02)    2.93 (0.03)
IC        8.11 (0.60)    14.79 (0.75)   0.16 (0.05)    8.69 (0.57)    2.03 (0.62)
PE        1.23 (0.07)    2.22 (0.10)    0.76 (0.09)    1.06 (0.06)    1.12 (0.13)
MSE       1.53 (0.09)    2.93 (0.14)    0.96 (0.12)    1.37 (0.08)    1.47 (0.18)
τ = 0.75
C         2.67 (0.05)    2.54 (0.06)    1.89 (0.07)    2.55 (0.05)    2.30 (0.06)
IC        9.88 (0.53)    15.76 (0.62)   0.57 (0.09)    10.79 (0.50)   2.77 (0.33)
PE        4.23 (0.18)    6.25 (0.22)    4.85 (0.33)    4.16 (0.21)    4.63 (0.28)
MSE       5.27 (0.24)    8.11 (0.29)    5.83 (0.41)    5.28 (0.27)    6.16 (0.38)
Table 6. Means and standard errors (in parentheses) of four performance metrics for five methods under multiplicative error cases over 100 replications in the ultra-high-dimensional autoregressive structure.

Measure   A-CoCoLasso    CoCoLasso      Hard           B-SCAD         B-Hard
τ = 0.1
C         4.18 (0.09)    4.22 (0.08)    3.51 (0.11)    4.56 (0.09)    4.03 (0.09)
IC        14.55 (1.55)   34.79 (3.78)   0.49 (0.11)    27.42 (3.70)   12.87 (4.12)
PE        0.78 (0.02)    1.21 (0.04)    0.83 (0.05)    0.76 (0.04)    0.82 (0.04)
MSE       1.37 (0.03)    1.99 (0.05)    1.50 (0.08)    1.35 (0.05)    1.44 (0.05)
τ = 0.2
C         3.57 (0.08)    4.00 (0.08)    3.21 (0.07)    4.29 (0.08)    3.92 (0.08)
IC        4.26 (1.55)    26.81 (1.45)   0.43 (0.09)    20.82 (0.94)   4.88 (1.09)
PE        0.92 (0.04)    1.28 (0.04)    0.93 (0.05)    0.81 (0.04)    0.87 (0.04)
MSE       1.60 (0.05)    2.10 (0.05)    1.66 (0.07)    1.44 (0.05)    1.50 (0.05)