Article

Covariate-Adjusted Precision Matrix Estimation Under Lower Polynomial Moment Assumption

Department of Mathematics, Beijing University of Technology, Beijing 100124, China
Mathematics 2025, 13(21), 3562; https://doi.org/10.3390/math13213562
Submission received: 11 October 2025 / Revised: 3 November 2025 / Accepted: 5 November 2025 / Published: 6 November 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

Multiple regression analysis has a wide range of applications, and the analysis of error structures in the regression model $Y = \Gamma X + Z$ has attracted much attention. This paper focuses on estimating the large-scale precision matrix of an error vector that possesses only lower polynomial moments. We study upper bounds for the proposed estimator under different norms in terms of probability estimates. It is shown that our estimator achieves the same optimal convergence order as under a Gaussian assumption on the data. Simulation experiments further confirm the advantages of our method.

1. Introduction

Multivariate regression methods are growing increasingly important in numerous fields, including chemometrics [1], neuroscience [2], genomics [3], and finance [4]. In genomics research, multivariate models connect gene expression $Y$ to genetic variants $X$ and infer gene regulatory networks via Gaussian graphical models (GGMs)—a process that relies heavily on precision matrices. As emphasized by Liu and Yu [5], the error structure (closely linked to the precision matrix) of the multivariate linear regression model $Y = \Gamma X + Z$ is critical for reliable inference but often overlooked in high-dimensional settings. Compared with the covariance matrix, the precision matrix of the random error $Z$ is more directly interpretable (e.g., it captures conditional dependence relationships in gene networks) and more relevant to practical applications, thus attracting considerable research attention.
Cai et al. [6] proposed the CLIME (constrained $\ell_1$ minimization estimation) method for precision matrices and obtained probability estimates under exponential and polynomial moment conditions, respectively. Cai et al. [7] further developed the ACLIME (adaptive constrained $\ell_1$ minimization estimation) method and established the optimal convergence rate for precision matrix estimation under a Gaussian assumption on the data. By replacing the sample covariance matrix with a pilot estimator, Avella-Medina et al. [8] discussed optimal precision matrix estimation under a finite $2+\varepsilon$ moment condition. This robust estimation framework, however, focuses on independent random vectors, so it cannot address the coupling between regression effects and error structures.
While these methods have achieved notable progress in precision matrix estimation, their frameworks primarily focus on single random vectors (e.g., univariate responses) and fail to account for regression relationships between multiple variables. Cai et al. [9] proposed a two-stage constrained $\ell_1$ minimization approach to estimate the coefficient matrix and the precision matrix of a sub-Gaussian error $Z$; sub-Gaussianity requires the data to have finite moments of every order. Chen et al. [10] used a scaled Lasso and pairwise regression strategy to estimate these two matrices, but their analysis assumes Gaussian data and provides no convergence rate. Tan et al. [11] developed a bias-corrected method to estimate the covariance matrix of the error vector $Z$; however, this method cannot infer conditional independence between error variables. Moreover, its robustness is grounded in a Gaussian framework, making it inadequate for an in-depth analysis of error structures in non-Gaussian scenarios.
This paper adopts a two-stage constrained $\ell_1$ minimization method to investigate precision matrix estimation, in probability, for error vectors that possess only moments of order $4\gamma+4+\varepsilon$ ($\gamma, \varepsilon > 0$). Our theoretical results demonstrate that the proposed estimator not only achieves the same convergence rate as the method of Cai et al. [9] but also exhibits superior numerical performance.
Throughout this paper, we write $X \lesssim Y$ to denote $X \le CY$ for some positive constant $C$, and $X \gtrsim Y$ to denote $Y \lesssim X$. For a vector $\beta = (\beta_1, \ldots, \beta_p)^T \in \mathbb{R}^p$, we define $\|\beta\|_1 = \sum_{i=1}^p |\beta_i|$ and $\|\beta\|_\infty = \max_i |\beta_i|$. For a matrix $A = (a_{ij}) \in \mathbb{R}^{p\times q}$, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$, $\|A\|_{\ell_1} = \max_{1\le j\le q} \sum_{i=1}^p |a_{ij}|$, $\|A\|_{\ell_\infty} = \max_{1\le i\le p} \sum_{j=1}^q |a_{ij}|$, $\|A\|_1 = \sum_{i=1}^p \sum_{j=1}^q |a_{ij}|$, $\|A\|_F = \big(\sum_{i=1}^p \sum_{j=1}^q |a_{ij}|^2\big)^{1/2}$, and $\|A\|_2$ stands for the spectral norm of $A$.
The rest of the paper is organized as follows. Section 2 details the construction of the proposed estimator; Section 3 presents the theoretical properties of the estimator; Section 4 reports simulation studies and an analysis of human gut microbiome data; Section 5 provides discussions. All technical proofs are included in Appendix A.

2. Preliminaries

We begin with a genetical genomics dataset. Let Y = ( Y 1 , , Y p ) T be a p-dimensional random vector of gene expressions and X = ( X 1 , , X q ) T be a q-dimensional random vector of genetic markers. We consider the following multiple regression model:
$$Y = \Gamma X + Z, \tag{1}$$
where $\Gamma \in \mathbb{R}^{p\times q}$ is an unknown coefficient matrix. The random vector $Z = (Z_1, \ldots, Z_p)^T$ has mean $\mathbf{0}$, covariance matrix $\Sigma = (\sigma_{ij}) = EZZ^T$, and precision matrix $\Omega = \Sigma^{-1} = (\omega_{ij}) = (\omega_1, \ldots, \omega_p)$. Assuming that $X$ and $Z$ are independent, we observe $n$ independent and identically distributed (i.i.d.) samples $(X_1, Y_1), \ldots, (X_n, Y_n)$ generated from the model (1). Accordingly, samples of $Z$ can be derived from the model (1).
In genomic studies, the coefficient matrix Γ in the model (1) is sparse, as each gene is typically regulated by only a small number of genetic regulators. Similarly, the precision matrix Ω is expected to be sparse, reflecting the fact that genetic interaction networks generally exhibit limited connectivity. Motivated by these sparsity properties and the need to establish rigorous convergence rates for estimators, we introduce the following conditions.
Condition 1.
For some constants $\gamma, c > 0$, the dimensions satisfy $(p\vee q) \le c\,n^{\gamma}$, where $p\vee q = \max\{p, q\}$. Additionally, there exist constants $\varepsilon > 0$ and $K > 0$ such that
$$\max_{1\le i\le q} E|X_i|^{4\gamma+4+\varepsilon} \le K,\qquad \max_{1\le i\le p} E|Z_i|^{4\gamma+4+\varepsilon} \le K,\qquad \max_{1\le j\le p} E|Z^T\omega_j|^{4\gamma+4+\varepsilon} \le K.$$
Condition 2.
The regression coefficient matrix satisfies $\Gamma \in V_{\delta_1}(s_1(q))$ with $0 \le \delta_1 < 1$, where
$$V_{\delta_1}(s_1(q)) = \Big\{\Gamma \in \mathbb{R}^{p\times q} : \max_{1\le i\le p} \sum_{j=1}^q |\gamma_{ij}|^{\delta_1} \le s_1(q)\Big\}.$$
Condition 3.
The precision matrix satisfies $\Omega = (\omega_{ij})_{p\times p} \in U_{\delta_2}(s_2(p))$ with $0 \le \delta_2 < 1$, where
$$U_{\delta_2}(s_2(p)) = \Big\{\Omega \succ 0 : \|\Omega\|_{\ell_1} \le M_p,\ \max_{1\le i\le p} \sum_{j=1}^p |\omega_{ij}|^{\delta_2} \le s_2(p)\Big\}.$$
Condition 4.
There exists some $N_q > 0$ such that $\|\Sigma_X^{-1}\|_{\ell_1} \le N_q$, where $\Sigma_X$ is the covariance matrix of $X$.
Remark 1.
Condition 1 imposes a polynomial moment constraint on $X$, $Z$, and $Z^T\omega_j$. This condition is weaker than the sub-Gaussian assumption adopted by Cai et al. [9], since sub-Gaussian random variables have finite moments of all orders. The dimension growth constraint $(p\vee q) \le c\,n^{\gamma}$ means that $p$ and $q$ grow at most polynomially, like $n^{\gamma}$, in the sample size $n$, with $\gamma$ governing the maximum allowable growth rate of the dimensions relative to $n$.
Conditions 2 and 3 formalize the sparsity assumptions for $\Gamma$ and $\Omega$, respectively, where $s_1(q)$ and $s_2(p)$ measure the sparsity. Under the dimension growth constraint in Condition 1, it follows that $\log(p\vee q) = o(n)$. This ensures that the sparsity measures $s_1(q)$, $s_2(p)$ and the convergence rates of the proposed estimators are all compatible with the polynomial growth of the dimensions (see Theorems 2 and 3). Similar parameter spaces have been adopted in Bickel and Levina [12], Cai et al. [6] and Avella-Medina et al. [8]. However, our framework imposes no eigenvalue restrictions.
Note that $\Sigma_X^{-1}$ is the precision matrix of $X$, so the constraint in Condition 4 is analogous to the bounded-norm assumption for $\Omega$ in Condition 3.
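For intuition, consider the exact-sparsity special case (our illustration, not from the paper, using the usual convention $0^0 = 0$): taking $\delta_1 = \delta_2 = 0$ reduces Conditions 2 and 3 to bounds on the number of nonzero entries per row.

```latex
% Special case \delta_1 = \delta_2 = 0 (convention 0^0 = 0):
\max_{1\le i\le p}\sum_{j=1}^{q}|\gamma_{ij}|^{0}
  = \max_{1\le i\le p}\#\{j:\gamma_{ij}\neq 0\}\le s_1(q),
\qquad
\max_{1\le i\le p}\sum_{j=1}^{p}|\omega_{ij}|^{0}
  = \max_{1\le i\le p}\#\{j:\omega_{ij}\neq 0\}\le s_2(p),
```

so in this case $s_1(q)$ and $s_2(p)$ directly bound the row-wise support sizes of $\Gamma$ and $\Omega$.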
The following details the construction process of the proposed estimators.
Using the i.i.d. samples $(X_1, Y_1), \ldots, (X_n, Y_n)$, we define the sample means $\bar X = \frac{1}{n}\sum_{k=1}^n X_k$, $\bar Y = \frac{1}{n}\sum_{k=1}^n Y_k$ and $\bar Z = \frac{1}{n}\sum_{k=1}^n Z_k$. From the model (1), it follows that
$$Y_k - \bar Y = \Gamma(X_k - \bar X) + Z_k - \bar Z.$$
Analogously to the sample covariance matrix, we define $\hat\Sigma_{YX} := \frac{1}{n}\sum_{k=1}^n (Y_k - \bar Y)(X_k - \bar X)^T$, $\hat\Sigma_{XX} := \frac{1}{n}\sum_{k=1}^n (X_k - \bar X)(X_k - \bar X)^T$ and $\hat\Sigma_{ZX} := \frac{1}{n}\sum_{k=1}^n (Z_k - \bar Z)(X_k - \bar X)^T$. It is straightforward to verify that $\hat\Sigma_{YX} - \Gamma\hat\Sigma_{XX} = \hat\Sigma_{ZX}$.
To estimate $\Gamma$ and $\Omega$, we first construct an estimator of $\Gamma$ via an optimization approach. Let $\hat\Gamma = (\hat\gamma_1, \ldots, \hat\gamma_p)^T$ and denote by $\hat\Sigma_{YX,j}$ the $j$-th row of $\hat\Sigma_{YX}$. For each $j \in \{1, \ldots, p\}$,
$$\hat\gamma_j := \arg\min_{\beta_j \in \mathbb{R}^q}\big\{\|\beta_j\|_1 : \|\hat\Sigma_{YX,j} - \beta_j^T\hat\Sigma_{XX}\|_\infty \le \lambda_{n,p}\big\},$$
where $\lambda_{n,p} = C_1\sqrt{\log(p\vee q)/n}$ is a tuning parameter with a positive constant $C_1$. These row-wise optimization problems are equivalent to the matrix-level optimization problem
$$\hat\Gamma \in \arg\min_{\Gamma \in \mathbb{R}^{p\times q}}\big\{\|\Gamma\|_1 : \|\hat\Sigma_{YX} - \Gamma\hat\Sigma_{XX}\|_{\max} \le \lambda_{n,p}\big\}. \tag{2}$$
Secondly, we construct an estimator of $\Omega$. Substituting $\hat\Gamma$ into the model (1), let $\hat\Sigma_{YY} = \frac{1}{n}\sum_{k=1}^n (Y_k - \hat\Gamma X_k)(Y_k - \hat\Gamma X_k)^T$. We estimate $\Omega$ by the CLIME method of Cai et al. [6]. Let $\hat\Omega^1 := (\hat\omega_1^1, \ldots, \hat\omega_p^1)$ and
$$\hat\omega_j^1 := \arg\min_{\beta_j \in \mathbb{R}^p}\big\{\|\beta_j\|_1 : \|\hat\Sigma_{YY}\beta_j - e_j\|_\infty \le \tau_{n,p}\big\},\quad j \in \{1, \ldots, p\}, \tag{3}$$
where $e_j$ is the $j$-th standard basis vector in $\mathbb{R}^p$, and $\tau_{n,p} = C_2\sqrt{\log(p\vee q)/n}$ is a tuning parameter with a constant $C_2 > 0$. We then symmetrize $\hat\Omega^1$ to obtain the final estimator $\hat\Omega = (\hat\omega_{ij})$, defined as
$$\hat\omega_{ij} := \hat\omega_{ij}^1\,\mathbf{1}\big(|\hat\omega_{ij}^1| \le |\hat\omega_{ji}^1|\big) + \hat\omega_{ji}^1\,\mathbf{1}\big(|\hat\omega_{ij}^1| > |\hat\omega_{ji}^1|\big),$$
where $\mathbf{1}(\cdot)$ denotes the indicator function.
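For concreteness, the following is a minimal sketch of the two-stage procedure in Equations (2) and (3), written with NumPy and CVXPY as a generic convex solver. It is an illustration under our own choices, not the author's implementation; the function names and the solver are ours.

```python
import numpy as np
import cvxpy as cp

def estimate_gamma(S_yx, S_xx, lam):
    """Row-wise constrained l1 minimization for Gamma, cf. Equation (2)."""
    p, q = S_yx.shape
    Gamma_hat = np.zeros((p, q))
    for j in range(p):
        beta = cp.Variable(q)
        # minimize ||beta||_1 subject to ||S_yx[j] - beta^T S_xx||_inf <= lam;
        # S_xx is symmetric, so beta^T S_xx = (S_xx beta)^T
        cp.Problem(cp.Minimize(cp.norm1(beta)),
                   [cp.norm_inf(S_yx[j] - S_xx @ beta) <= lam]).solve()
        Gamma_hat[j] = beta.value
    return Gamma_hat

def estimate_omega(S_yy, tau):
    """CLIME step, cf. Equation (3), followed by the symmetrization rule."""
    p = S_yy.shape[0]
    Omega1 = np.zeros((p, p))
    for j in range(p):
        e_j = np.zeros(p); e_j[j] = 1.0
        beta = cp.Variable(p)
        cp.Problem(cp.Minimize(cp.norm1(beta)),
                   [cp.norm_inf(S_yy @ beta - e_j) <= tau]).solve()
        Omega1[:, j] = beta.value
    # between omega^1_ij and omega^1_ji, keep the entry of smaller magnitude
    return np.where(np.abs(Omega1) <= np.abs(Omega1.T), Omega1, Omega1.T)
```

In use, $\hat\Sigma_{YX}$ and $\hat\Sigma_{XX}$ are the centered sample matrices defined above, and $\hat\Sigma_{YY}$ is formed from the residuals $Y_k - \hat\Gamma X_k$; each subproblem is a small linear program, so the two stages decouple across rows and columns.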
Remark 2.
We obtain the proposed estimators via two key steps, corresponding to Equations (2) and (3). In the case $q = 0$ (no covariates), estimation of $\Gamma$ is unnecessary, and the precision matrix estimator can be derived directly via (3) (CLIME, Cai et al. [6]). When $q = 1$, several methods are available to estimate $\Gamma$, including $\ell_1$ minimization (Tibshirani [13]) and the Dantzig selector (Candes and Tao [14]). In this paper, we consider the high-dimensional setting where both $p$ and $q$ may be large but satisfy $(p\vee q) \lesssim n^{\gamma}$ for some constant $\gamma > 0$.

3. Main Result

In this section, we present our main results.
Theorem 1.
Under Condition 1, there exists a constant $C = C_{\gamma,\delta,\varepsilon,K} > 0$ such that
$$P\Big\{\|\hat\Sigma_{YX} - \Gamma\hat\Sigma_{XX}\|_{\max} \le C\sqrt{\tfrac{\log(p\vee q)}{n}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{4}$$
$$P\Big\{\|\hat\Sigma_{XX} - \Sigma_X\|_{\max} \le C\sqrt{\tfrac{\log(p\vee q)}{n}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big) \tag{5}$$
and
$$P\Big\{\|\hat\Sigma_{ZZ}\Omega - I_p\|_{\max} \le C\sqrt{\tfrac{\log(p\vee q)}{n}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{6}$$
where $\hat\Sigma_{ZZ} = \frac{1}{n}\sum_{k=1}^n Z_kZ_k^T$.
Remark 3.
As shown in Equation (2), Equation (4) provides the theoretical foundation for constructing the estimator $\hat\Gamma$. Furthermore, Equations (5) and (6) indicate that $\hat\Sigma_{XX}$ serves as a pilot estimator of the covariance matrix $\Sigma_X$, while $\hat\Sigma_{ZZ}$ serves as a pilot estimator for recovering the precision matrix $\Omega$; see Avella-Medina et al. [8].
The following results establish error bounds for the coefficient matrix estimator Γ ^ under different matrix norms.
Theorem 2.
Suppose Conditions 1, 2 and 4 hold. Let $\Gamma \in V_{\delta_1}(s_1(q))$ with
$$s_1(q) = o\Big(N_q^{\delta_1-1}\Big(\tfrac{n}{\log(p\vee q)}\Big)^{\frac{1-\delta_1}{2}}\Big).$$
Then, for some constant $C = C_{\gamma,\delta,\varepsilon,K} > 0$, the estimator $\hat\Gamma$ defined in (2) satisfies
$$P\Big\{\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le C N_q^{1-\delta_1}s_1(q)\Big(\tfrac{\log(p\vee q)}{n}\Big)^{\frac{1-\delta_1}{2}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{7}$$
$$P\Big\{\|\hat\Gamma - \Gamma\|_{\max} \le C N_q\sqrt{\tfrac{\log(p\vee q)}{n}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big) \tag{8}$$
and
$$P\Big\{\tfrac{1}{p}\|\hat\Gamma - \Gamma\|_F^2 \le C N_q^{2-\delta_1}s_1(q)\Big(\tfrac{\log(p\vee q)}{n}\Big)^{1-\frac{\delta_1}{2}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big). \tag{9}$$
Remark 4.
Since $N_q$ is bounded and $(p\vee q) \lesssim n^{\gamma}$ implies $\log(p\vee q)/n \to 0$ as $n \to \infty$, the sparsity condition $s_1(q) = o(N_q^{\delta_1-1}(n/\log(p\vee q))^{(1-\delta_1)/2})$ in Theorem 2 ensures that the upper bounds inside the above probabilities tend to zero. Moreover, since $n/\log(p\vee q) \to \infty$ as $n \to \infty$, this sparsity requirement is mild.
Theorem 3.
Suppose Conditions 1–4 hold. Let $\Gamma \in V_{\delta_1}(s_1(q))$ and $\Omega \in U_{\delta_2}(s_2(p))$ with $s_2(p) \lesssim (1+M_p)^{-1}N_q^{\delta_1-2}(n/\log(p\vee q))^{(1-\delta_1)/2}$. Then, for some constant $C = C_{\gamma,\delta,\varepsilon,K} > 0$, the estimator $\hat\Omega$ defined in (3) satisfies
$$P\Big\{\|\hat\Omega - \Omega\|_{\max} \le C M_p\sqrt{\tfrac{\log(p\vee q)}{n}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big) \tag{10}$$
and
$$P\Big\{\|\hat\Omega - \Omega\|_2 \le C M_p^{1-\delta_2}s_2(p)\Big(\tfrac{\log(p\vee q)}{n}\Big)^{\frac{1-\delta_2}{2}}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big). \tag{11}$$
Remark 5.
Since $M_p$ is a positive deterministic sequence, which may be bounded or slowly diverging as $n$ and $p$ grow, the condition
$$s_2(p) \le C(1+M_p)^{-1}N_q^{\delta_1-2}\Big(\tfrac{n}{\log(p\vee q)}\Big)^{\frac{1-\delta_1}{2}}$$
implies the corresponding condition in Theorem 2. Moreover, the convergence rate given in (11) matches the rate established by Cai et al. [9], confirming its optimality.

4. Numerical Results

4.1. Simulation Analysis

In this section, we evaluate the performance of the proposed method through simulation studies and compare it with two existing methods proposed by Cai et al. [6] and Friedman et al. [15].
For the $p\times q$ coefficient matrix $\Gamma = (\gamma_{ij})$, the sparsity level is controlled by the parameter $s_1$. Specifically, each element $\gamma_{ij}$ is generated independently from $U(-1,1)\times\mathrm{Ber}(1,s_1)$, where $U(-1,1)$ denotes the uniform distribution over $[-1,1]$ and $\mathrm{Ber}(1,s_1)$ denotes a Bernoulli random variable that equals 1 with probability $s_1$ and 0 with probability $1-s_1$. Under this mechanism, $\gamma_{ij}$ is non-zero with probability $s_1$ (and zero otherwise), so a smaller $s_1$ yields a sparser $\Gamma$.
The sparsity of the precision matrix $\Omega = (\omega_{ij})$ is controlled by the parameter $s_2$. To ensure that $\Omega$ is positive definite, we first construct a matrix $B = (b_{ij})$ whose elements are generated independently from $U(-1,1)\times\mathrm{Ber}(1,s_2)$ (the same sparsity mechanism as for $\Gamma$). We then define $\Omega = B + \varepsilon I_p$, where $\varepsilon = \max(-\lambda_{\min}(B), 0) + 0.01$ guarantees positive definiteness.
We simulate covariate vectors $X_k \overset{\mathrm{i.i.d.}}{\sim} t(0,\Sigma,10)$ and error vectors $Z_k \overset{\mathrm{i.i.d.}}{\sim} t(0,\Sigma,10)$ for $k = 1, \ldots, n$, where $t(0,\Sigma,10)$ stands for a multivariate Student-$t$ distribution with 10 degrees of freedom and covariance matrix $\Sigma = \Omega^{-1}$. The response vectors $Y_k$ are then computed via the regression model $Y_k = \Gamma X_k + Z_k$; a data-generation sketch follows.
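The following sketch reproduces this data-generating design in NumPy. Two details are our assumptions, since the text leaves them unspecified: we symmetrize $B$ so that $\Omega$ is a valid (symmetric) precision matrix, and we take the identity as the scale of the $q$-dimensional covariates $X_k$; the $(df-2)/df$ rescaling makes the multivariate $t$ have covariance exactly $\Sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_uniform(rows, cols, s):
    """Entries drawn as U(-1,1) * Ber(1, s), as for Gamma and B."""
    return rng.uniform(-1, 1, (rows, cols)) * (rng.random((rows, cols)) < s)

def make_omega(p, s2):
    B = sparse_uniform(p, p, s2)
    B = (B + B.T) / 2                         # symmetrization (our assumption)
    eps = max(-np.linalg.eigvalsh(B).min(), 0) + 0.01
    return B + eps * np.eye(p)                # Omega = B + eps*I, positive definite

def sample_t(n, Sigma, df=10):
    """Multivariate Student-t with df dof and covariance exactly Sigma."""
    scale = Sigma * (df - 2) / df             # t covariance = df/(df-2) * scale
    g = rng.multivariate_normal(np.zeros(Sigma.shape[0]), scale, size=n)
    w = rng.chisquare(df, size=n) / df
    return g / np.sqrt(w)[:, None]

p, q, n, s1, s2 = 50, 30, 100, 0.1, 0.1       # Model 1 settings
Gamma = sparse_uniform(p, q, s1)
Omega = make_omega(p, s2)
Z = sample_t(n, np.linalg.inv(Omega))         # errors with Sigma = Omega^{-1}
X = sample_t(n, np.eye(q))                    # covariate scale assumed identity
Y = X @ Gamma.T + Z                           # rows are Y_k = Gamma X_k + Z_k
```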
In the following experiments, we control the sparsity of Γ and Ω using s 1 and s 2 , respectively, and consider three models with distinct dimensionality and sparsity settings:
Model 1: ( p , q , n , s 1 , s 2 ) = ( 50 , 30 , 100 , 0.1 , 0.1 ) ;
Model 2: ( p , q , n , s 1 , s 2 ) = ( 80 , 60 , 160 , 0.08 , 0.08 ) ;
Model 3: ( p , q , n , s 1 , s 2 ) = ( 200 , 150 , 300 , 0.05 , 0.05 ) .
The three models correspond to low-, medium-, and high-dimensional regimes. These differences allow us to examine how the methods perform under varying degrees of dimensionality and sparsity.
The tuning parameters λ n , p and τ n , p are selected using five-fold cross-validation (CV) for our estimator. Specifically, we divide all the samples into five disjoint subgroups (also known as folds), and let T v denote the index set of samples in the v-th fold ( v = 1 , , 5 ).
Define the five-fold CV score as
$$\mathrm{CV}(\lambda_{n,p},\tau_{n,p}) = \sum_{v=1}^5\Big[\log\det\big(\hat\Omega_v(\lambda_{n,p},\tau_{n,p})\big) - \mathrm{tr}\big(\hat\Sigma_{YY,v}\,\hat\Omega_v(\lambda_{n,p},\tau_{n,p})\big)\Big],$$
where $n_v$ is the number of samples in $T_v$ and
$$\hat\Sigma_{YY,v} = \frac{1}{n_v}\sum_{k\in T_v}\big(Y_k - \hat\Gamma_v(\lambda_{n,p})X_k\big)\big(Y_k - \hat\Gamma_v(\lambda_{n,p})X_k\big)^T.$$
Here, $\hat\Omega_v(\lambda_{n,p},\tau_{n,p})$ and $\hat\Gamma_v(\lambda_{n,p})$ are estimates of $\Omega$ and $\Gamma$ computed from the samples in $\bigcup_{u=1}^5 T_u \setminus T_v$, i.e., all folds except the $v$-th.
We then determine the optimal tuning parameters as $(\lambda_{n,p}^*, \tau_{n,p}^*) = \arg\max \mathrm{CV}(\lambda_{n,p},\tau_{n,p})$ and use this pair to compute the final estimates of $\Gamma$ and $\Omega$ on the full sample; a sketch of this selection procedure is given below.
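The sketch below implements the five-fold CV score, reusing the hypothetical `estimate_gamma` and `estimate_omega` functions from the Section 2 sketch and the `X`, `Y` arrays generated above; the fold handling and the grid search are our glue code, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_score(X, Y, lam, tau, n_folds=5):
    """Five-fold CV score: sum_v [log det(Omega_v) - tr(S_YY,v Omega_v)]."""
    n = X.shape[0]
    folds = np.array_split(rng.permutation(n), n_folds)
    score = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        Xc = X[train] - X[train].mean(0)
        Yc = Y[train] - Y[train].mean(0)
        G = estimate_gamma(Yc.T @ Xc / len(train),       # Sigma_hat_YX
                           Xc.T @ Xc / len(train), lam)  # Sigma_hat_XX
        R = Y[train] - X[train] @ G.T                    # training residuals
        O = estimate_omega(R.T @ R / len(train), tau)
        Rv = Y[test] - X[test] @ G.T                     # held-out residuals
        S_yy_v = Rv.T @ Rv / len(test)
        sign, logdet = np.linalg.slogdet(O)
        score += logdet - np.trace(S_yy_v @ O)
    return score

# grid search for (lambda*, tau*) maximizing the CV score (grid values are ours)
grid = [0.05, 0.1, 0.2, 0.4]
best = max(((l, t) for l in grid for t in grid),
           key=lambda pair: cv_score(X, Y, *pair))
```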
The proposed method is compared with the approaches of Cai et al. [6] and Friedman et al. [15], which do not account for covariate effects. For those methods, we select the tuning parameters with the same loss function $\log\det(\Omega) - \mathrm{tr}(\hat\Sigma_{YY}\Omega)$, where $\hat\Sigma_{YY} = \frac{1}{n}\sum_{k=1}^n (Y_k - \bar Y)(Y_k - \bar Y)^T$.
Based on 50 independent replications, we compute the average errors (standard errors) under three norms for CLIME, GLASSO and our method.
Table 1 reports the numerical results. In Model 1, the standard errors of the three methods are comparable, but the proposed method still exhibits advantages due to its smaller average errors. As the model setting changes, with dimensionality increasing and sparsity becoming stricter (Models 2 and 3), the proposed method outperforms the compared methods across all three norms.
In summary, the results confirm that our method is particularly advantageous in medium-to-high-dimensional and highly sparse settings, where covariate adjustment plays a critical role in reducing estimation bias and improving stability. Its performance gains become more prominent as dimensionality increases, making it suitable for modern high-dimensional data analysis.

4.2. Application to Real Data

In this section, we apply our precision matrix estimator $\hat\Omega$ to analyze the human gut microbiome dataset from Wu et al. (Science, 2011) [16], which has also been studied by Cao et al. [17], He et al. [18], Li et al. [19], and Zhang et al. [20]. Our focus is on identifying differences in latent bacterial genus correlations between lean and obese individuals. The dataset includes 98 healthy subjects, with 63 classified as lean (BMI $< 25$) and 35 as obese (BMI $\ge 25$).
Data Preprocessing and Network Construction
  • To ensure reliable signals, we retained bacterial genera present in at least 20% of samples within each group. This filtering step resulted in 30 retained bacterial genera, so the dimension p = 30 .
  • Zero counts in the filtered dataset were imputed with 0.5 and raw counts were normalized to relative abundances per sample to account for varying sequencing depths.
  • For both groups, tuning parameters $\lambda_{n,p}$ and $\tau_{n,p}$ were selected via five-fold cross-validation; see Section 4.1. To evaluate the stability of support recovery, we generated 63 bootstrap samples for the lean group and 35 for the obese group, repeated the analysis 100 times, and calculated the average occurrence frequency of each edge. Edges with a frequency ≥ 50% (appearing in at least 50 of 100 resamplings) were retained as “stable edges” for final network construction; a sketch of this resampling screen is given after this list.
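The sketch below is our glue code around the hypothetical `estimate_omega` from the Section 2 sketch; the 50% threshold is from the text, while the support tolerance `1e-8` is an arbitrary numerical cutoff we introduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_edges(data, tau, n_boot=100, thresh=0.5):
    """data: n x p matrix of relative abundances for one group."""
    n, p = data.shape
    freq = np.zeros((p, p))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        R = data[idx] - data[idx].mean(0)
        Omega = estimate_omega(R.T @ R / n, tau)
        freq += (np.abs(Omega) > 1e-8)            # record recovered support
    freq /= n_boot
    np.fill_diagonal(freq, 0)
    # keep edges appearing in at least `thresh` of the resamplings
    return np.argwhere(np.triu(freq >= thresh, k=1))
```

Since there is no covariate in this step, applying the CLIME stage directly to each group's sample covariance is consistent with the $q = 0$ case noted in Remark 2.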
Quantitative Characteristics of Microbial Networks
Table 2 summarizes key quantitative features of the inferred networks, including structural complexity, association patterns, and stability.
Network Visualization and Interpretation
Figure 1 and Figure 2 visualize the stable microbial networks for the lean and obese groups, respectively, to intuitively display differences in conditional associations between bacterial genera.
Biological Implications of Network Differences
Our analysis reveals meaningful differences between the gut microbial networks of lean and obese individuals, which align with and extend prior findings in gut microbiome research:
  • Predominant competitive conditional interactions: Both groups exhibit more negative than positive correlations between bacterial genera (lean group: 71.4% negative correlations; obese group: 60.0% negative correlations). This result is consistent with the findings of Cao et al. [17], Wang et al. [21], Zhang et al. [20] and Coyte et al. [22], and supports the notion that gut microbial interactions are primarily competitive.
  • Reduced network complexity in obesity: The obese group had fewer stable edges (five, compared to seven in the lean group) and a lower mean edge strength (0.25, compared to 0.32 in the lean group). These observations indicate weakened conditional associations between bacterial genera in obese individuals, suggesting a decline in gut microbial network complexity.
  • Network stability: The lean group’s network had a higher stability score (0.72, compared to 0.58 in the obese group), confirming more reproducible conditional associations and reflecting a robust gut microbial structure. In contrast, the lower stability of the obese group’s network suggested greater inter-individual variability in microbial interactions, a well-documented hallmark of obesity-related gut dysbiosis. This finding also aligned with prior reports of reduced modularity in obese gut microbial networks (Greenblum et al. [23]).
These results validate our multivariate method’s ability to capture biologically meaningful microbial interactions, highlighting competitive balance and core taxa preservation as key to gut health.

5. Conclusions

This paper proposes a two-stage procedure for covariate-adjusted precision matrix estimation in high dimensions. The error vector is only required to satisfy a bounded lower polynomial moment condition, which is weaker than the sub-Gaussian conditions of existing methods. We establish non-asymptotic probabilistic upper bounds for the coefficient estimator $\hat\Gamma$ and the precision matrix estimator $\hat\Omega$ under multiple matrix norms. Despite the relaxed assumptions, both estimators achieve optimal convergence rates matching those obtained under stronger sub-Gaussian frameworks. Numerical simulations confirm that the method outperforms covariate-unadjusted approaches (Cai et al. [6], Friedman et al. [15]). In a gut microbiome analysis, it successfully captures meaningful microbial network differences between lean and obese groups. Future work may extend this approach to other pilot estimators under low moment conditions (Avella-Medina et al. [8]).

Funding

This research was supported by the National Natural Science Foundation of China (No. 12171016).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

First, we state a Bernstein-type inequality, which we use to prove the following lemma.
Theorem A1
(Massart (2007) [24]). Assume that $Z_1, \ldots, Z_n$ are i.i.d. random variables with $|Z_k| \le M$ $(k = 1, \ldots, n)$ and $\sum_{k=1}^n EZ_k^2 \le b_n^2$. Then, for $t > 0$,
$$P\Big\{\Big|\sum_{k=1}^n (Z_k - EZ_k)\Big| \ge b_n\sqrt{2t} + \frac{Mt}{3}\Big\} \le 2e^{-t/2}.$$
Lemma A1.
Assume Condition 1 holds. Then, for any fixed $\delta > 0$, there exists a constant $C = C_{\gamma,\delta,\varepsilon,K} > 0$ such that
$$P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n X_{ki}\Big| \ge C\sqrt{n\log(p\vee q)}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{A1}$$
$$P\Big\{\max_{1\le i\le q,\,1\le j\le p}\Big|\sum_{k=1}^n X_{ki}Z_{kj}\Big| \ge C\sqrt{n\log(p\vee q)}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{A2}$$
$$P\Big\{\max_{1\le i,j\le q}\Big|\sum_{k=1}^n (X_{ki}X_{kj} - EX_{ki}X_{kj})\Big| \ge C\sqrt{n\log(p\vee q)}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big) \tag{A3}$$
and
$$P\Big\{\max_{1\le i,j\le p}\Big|\sum_{k=1}^n (Z_{ki}Z_k^T\omega_j - EZ_{ki}Z_k^T\omega_j)\Big| \ge C\sqrt{n\log(p\vee q)}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big). \tag{A4}$$
Proof. 
We first prove inequality (A1). Since $EX_{ki} = 0$, it suffices to show
$$I := P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n (X_{ki} - EX_{ki})\Big| \ge C\sqrt{n\log(p\vee q)}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big). \tag{A5}$$
Truncating at level $n^{1/4}$ and using the triangle inequality for probability, it follows that
$$\begin{aligned} I \le{}& P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n \big(X_{ki}\mathbf{1}(|X_{ki}| > n^{1/4}) - EX_{ki}\mathbf{1}(|X_{ki}| > n^{1/4})\big)\Big| \ge \frac{C}{2}\sqrt{n\log(p\vee q)}\Big\}\\ &+ P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n \big(X_{ki}\mathbf{1}(|X_{ki}| \le n^{1/4}) - EX_{ki}\mathbf{1}(|X_{ki}| \le n^{1/4})\big)\Big| \ge \frac{C}{2}\sqrt{n\log(p\vee q)}\Big\}\\ :={}& I_1 + I_2. \end{aligned} \tag{A6}$$
To estimate $I_1$, note that $E|X_{ki}|^{4\gamma+4+\varepsilon} \lesssim 1$. Then,
$$\sum_{k=1}^n \big|EX_{ki}\mathbf{1}(|X_{ki}| > n^{1/4})\big| \le n\max_i E\big(|X_{ki}|^3 n^{-1/2}\big) \lesssim \sqrt{n\log(p\vee q)}$$
and
$$I_1 \le P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n X_{ki}\mathbf{1}(|X_{ki}| > n^{1/4})\Big| > C\sqrt{n\log(p\vee q)}\Big\} \le P\Big\{\bigcup_{i=1}^q\bigcup_{k=1}^n \{|X_{ki}| > n^{1/4}\}\Big\} \le qn\max_{k,i} P\big\{|X_{ki}|^{4\gamma+4+\varepsilon} > n^{(4\gamma+4+\varepsilon)/4}\big\} \lesssim n^{\gamma+1}\,n^{-(\gamma+1+\varepsilon/4)} = n^{-\varepsilon/4} \tag{A7}$$
by Markov's inequality and $\max_i E|X_{ki}|^{4\gamma+4+\varepsilon} \lesssim 1$.
To estimate $I_2$, define $\xi_{ki} := X_{ki}\mathbf{1}(|X_{ki}| \le n^{1/4}) - EX_{ki}\mathbf{1}(|X_{ki}| \le n^{1/4})$; then $E\xi_{ki} = 0$ $(k = 1, \ldots, n)$, $|\xi_{ki}| \le 2n^{1/4}$ and $\sum_{k=1}^n E\xi_{ki}^2 \le \sum_{k=1}^n EX_{ki}^2 \le K^{2/(4\gamma+4+\varepsilon)}n$.
Applying Theorem A1 with $t = 2\delta\log(p\vee q)$, one obtains
$$P\Big\{\Big|\sum_{k=1}^n \xi_{ki}\Big| \ge K^{1/(4\gamma+4+\varepsilon)}\sqrt{4\delta n\log(p\vee q)} + \frac{2n^{1/4}}{3}\,2\delta\log(p\vee q)\Big\} \le 2(p\vee q)^{-\delta}.$$
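For completeness (our added check), substituting $b_n^2 = K^{2/(4\gamma+4+\varepsilon)}n$, $M = 2n^{1/4}$ and $t = 2\delta\log(p\vee q)$ into Theorem A1 gives exactly the display above:

```latex
b_n\sqrt{2t}=K^{1/(4\gamma+4+\varepsilon)}\sqrt{4\delta\, n\log(p\vee q)},\qquad
\frac{Mt}{3}=\frac{2n^{1/4}}{3}\cdot 2\delta\log(p\vee q),\qquad
2e^{-t/2}=2(p\vee q)^{-\delta}.
```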
By $(p\vee q) \le c\,n^{\gamma}$, one obtains $\log(p\vee q) \lesssim n^{1/4}$. Therefore, $n^{1/4}\log(p\vee q) \lesssim \sqrt{n\log(p\vee q)}$ and
$$I_2 = P\Big\{\max_{1\le i\le q}\Big|\sum_{k=1}^n \xi_{ki}\Big| \ge \frac{C}{2}\sqrt{n\log(p\vee q)}\Big\} \le 2(p\vee q)^{-\delta}$$
for $C > 2K^{1/(4\gamma+4+\varepsilon)}\sqrt{\delta} + 4\delta/3$. This, together with (A6) and (A7), leads to (A1).
Since $X$ and $Z$ are independent, $EX_{ki}Z_{kj} = EX_{ki}\cdot EZ_{kj} = 0$. By Condition 1, $E|X_{ki}Z_{kj}|^{4\gamma+4+\varepsilon} = E|X_{ki}|^{4\gamma+4+\varepsilon}\cdot E|Z_{kj}|^{4\gamma+4+\varepsilon} \lesssim 1$. Replacing $X_{ki}$ with $X_{ki}Z_{kj}$ and repeating the argument for (A1) yields (A2).
To prove (A3), let $\eta_k = X_{ki}X_{kj}$ and define $J := P\{\max_{1\le i,j\le q}|\sum_{k=1}^n (\eta_k - E\eta_k)| \ge C\sqrt{n\log(p\vee q)}\}$. Splitting $\eta_k$ at the level $\sqrt{n/\log(p\vee q)}$ into tail and truncated parts gives
$$J \le J_1 + J_2,$$
where $J_1$ corresponds to $\eta_k\mathbf{1}(|\eta_k| > \sqrt{n/\log(p\vee q)})$ and $J_2$ to the truncated term.
Note that $E|\eta_k|^2 \le \max_i E|X_{ki}|^4 \lesssim 1$. Then,
$$\sum_{k=1}^n \Big|E\eta_k\mathbf{1}\Big(|\eta_k| > \sqrt{\tfrac{n}{\log(p\vee q)}}\Big)\Big| \le n\max_k E\Big(|\eta_k|^2\sqrt{\tfrac{\log(p\vee q)}{n}}\Big) \lesssim \sqrt{n\log(p\vee q)}.$$
Moreover,
$$J_1 \le P\Big\{\max_{i,j}\Big|\sum_{k=1}^n \eta_k\mathbf{1}\Big(|\eta_k| > \sqrt{\tfrac{n}{\log(p\vee q)}}\Big)\Big| > C\sqrt{n\log(p\vee q)}\Big\}.$$
This, together with $|\eta_k| \le \frac{1}{2}(X_{ki}^2 + X_{kj}^2)$, $(p\vee q) \le c\,n^{\gamma}$ and Condition 1, gives
$$J_1 \le qn\max_{k,i} P\Big\{|X_{ki}|^2 > \sqrt{\tfrac{n}{\log(p\vee q)}}\Big\} \lesssim n^{\gamma+1}\Big(\tfrac{\log(p\vee q)}{n}\Big)^{\gamma+1+\varepsilon/4} = \big(\log(p\vee q)\big)^{\gamma+1+\varepsilon/4}\,n^{-\varepsilon/4} \lesssim n^{-\varepsilon/8}.$$
To estimate $J_2$, apply Bernstein's inequality (Theorem A1) to the truncated $\eta_k$ (bounded by $2\sqrt{n/\log(p\vee q)}$), leading to $J_2 = O((p\vee q)^{-\delta})$. This gives $J = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big)$, and (A3) holds.
Finally, we prove (A4). By Condition 1 and the Cauchy–Schwarz inequality, $E|Z_{ki}Z_k^T\omega_j|^2 \le (E|Z_{ki}|^4)^{1/2}(E|Z_k^T\omega_j|^4)^{1/2} \lesssim 1$. Replacing $\eta_k = X_{ki}X_{kj}$ with $\eta_k = Z_{ki}Z_k^T\omega_j$ and repeating the argument for (A3) yields (A4). The proof is done. □
Lemma A2.
Assume Condition 1 holds. Then, for any fixed $\delta > 0$, there exists a constant $C = C_{\gamma,\delta,\varepsilon,K} > 0$ such that
$$P\Big(\max_{1\le j\le q}\Big|\frac{1}{n}\sum_{k=1}^n (X_{kj}^2 - EX_{1j}^2)\Big| \le C\sqrt{\tfrac{\log(p\vee q)}{n}}\Big) = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big).$$
Consequently,
$$\max_{1\le j\le q}\frac{1}{n}\sum_{k=1}^n X_{kj}^2 \le \max_{1\le j\le q} EX_{1j}^2 + C\sqrt{\tfrac{\log(p\vee q)}{n}}$$
with the same probability bound.
Proof. 
Let $T_n := n^{1/4}$ and split $X_{kj}^2$ into truncated and tail parts,
$$U_{kj} := X_{kj}^2\mathbf{1}\{|X_{kj}| \le T_n\},\qquad V_{kj} := X_{kj}^2\mathbf{1}\{|X_{kj}| > T_n\}.$$
Then,
$$\frac{1}{n}\sum_{k=1}^n (X_{kj}^2 - EX_{1j}^2) = \frac{1}{n}\sum_{k=1}^n (U_{kj} - EU_{1j}) + \frac{1}{n}\sum_{k=1}^n (V_{kj} - EV_{1j}).$$
Since $|U_{kj} - EU_{1j}| \le 2T_n^2 = 2n^{1/2}$ and $\sum_{k=1}^n \mathrm{Var}(U_{kj}) \le nEX_{1j}^4 \le Kn$ by Condition 1, Bernstein's inequality (Theorem A1) yields, for $t = 2\delta\log(p\vee q)$,
$$P\Big(\Big|\sum_{k=1}^n (U_{kj} - EU_{1j})\Big| \ge C_1\big(\sqrt{nt} + n^{1/2}t\big)\Big) \le 2e^{-t/2}.$$
Dividing by $n$ and using $(p\vee q) \lesssim n^{\gamma}$, so that $\log(p\vee q) = o(n)$, and noting $t \asymp \log(p\vee q)$ and $n^{1/2}t/n = t/n^{1/2}$, we obtain
$$P\Big(\Big|\frac{1}{n}\sum_{k=1}^n (U_{kj} - EU_{1j})\Big| \ge C_2\sqrt{\tfrac{\log(p\vee q)}{n}}\Big) \le 2(p\vee q)^{-\delta}. \tag{A8}$$
A union bound over j = 1 , , q preserves the ( p q ) δ rate.
By Condition 1, for $r = 4\gamma+4+\varepsilon > 4$ and some constant $K_1$,
$$E|X_{1j}|^r \le K_1,\qquad EV_{1j} = E\big[X_{1j}^2\mathbf{1}\{|X_{1j}| > T_n\}\big] \le \frac{E|X_{1j}|^r}{T_n^{r-2}} \lesssim n^{-(r-2)/4}.$$
Similarly, $P(|X_{kj}| > T_n) \lesssim n^{-r/4}$, so that $\sum_{k=1}^n \mathbf{1}\{|X_{kj}| > T_n\} = O_P(n^{1-r/4})$ uniformly in $j$. Hence
$$\max_{1\le j\le q}\Big|\frac{1}{n}\sum_{k=1}^n (V_{kj} - EV_{1j})\Big| = O_P\big(n^{-(r-2)/4}\big) = O_P\big(n^{-1/2-\varepsilon/4}\big), \tag{A9}$$
which is negligible compared with $\sqrt{\log(p\vee q)/n}$, since $\log(p\vee q) = O(\log n)$ under $(p\vee q) \lesssim n^{\gamma}$.
Combining (A8) and (A9) and applying a union bound over $j$ concludes the stated probability bound for the maximum over $j$. □
Proof of Theorem 1. 
We first prove inequality (4). Centering the regression model $Y = \Gamma X + Z$ by the sample means gives $Y_k - \bar Y = \Gamma(X_k - \bar X) + Z_k - \bar Z$. Multiplying both sides by $(X_k - \bar X)^T$ and averaging over $k$ yields
$$\hat\Sigma_{YX} - \Gamma\hat\Sigma_{XX} = \hat\Sigma_{ZX},$$
where $\hat\Sigma_{ZX} = \frac{1}{n}\sum_{k=1}^n (Z_k - \bar Z)(X_k - \bar X)^T$. Thus, it suffices to show
$$I := P\Big\{\Big\|\frac{1}{n}\sum_{k=1}^n (Z_k - \bar Z)(X_k - \bar X)^T\Big\|_{\max} \ge \lambda_{n,p}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big), \tag{A10}$$
where $\lambda_{n,p} = C_1\sqrt{\log(p\vee q)/n}$ with the constant $C_1$ to be specified.
where λ n , p = C 1 log ( p q ) / n with the constant C 1 to be specified.
Since $\frac{1}{n}\sum_{k=1}^n (Z_{ki} - \bar Z_i)(X_{kj} - \bar X_j) = \frac{1}{n}\sum_{k=1}^n Z_{ki}X_{kj} - \bar Z_i\bar X_j$, we have
$$I \le P\Big\{\max_{i,j}\Big|\frac{1}{n}\sum_{k=1}^n Z_{ki}X_{kj}\Big| \ge \lambda_{n,p}\Big\} + P\Big\{\max_{i,j}|\bar Z_i\bar X_j| \ge \lambda_{n,p}\Big\}.$$
Recall that $\lambda_{n,p} = C_1\sqrt{\log(p\vee q)/n}$ and that $(p\vee q) \lesssim n^{\gamma}$ yields $\log(p\vee q) = o(n)$. Then,
$$P\Big\{\max_{i,j}|\bar Z_i\bar X_j| \ge \lambda_{n,p}\Big\} \le P\Big\{\max_i |\bar Z_i| \ge \sqrt{\tfrac{C_1\log(p\vee q)}{n}}\Big\} + P\Big\{\max_j |\bar X_j| \ge \sqrt{\tfrac{C_1\log(p\vee q)}{n}}\Big\}.$$
By (A1) and (A2) in Lemma A1, we have $I = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/4}\big)$, and (4) holds.
To prove (5), since $\|\hat\Sigma_{XX} - \Sigma_X\|_{\max} = \max_{i,j}\big|\frac{1}{n}\sum_{k=1}^n (X_{ki}X_{kj} - EX_{ki}X_{kj}) - \bar X_i\bar X_j\big|$, we have
$$P\big\{\|\hat\Sigma_{XX} - \Sigma_X\|_{\max} \ge \lambda_{n,p}\big\} \le P\Big\{\max_{i,j}\Big|\frac{1}{n}\sum_{k=1}^n (X_{ki}X_{kj} - EX_{ki}X_{kj})\Big| \ge \frac{\lambda_{n,p}}{2}\Big\} + P\Big\{\max_{i,j}|\bar X_i\bar X_j| \ge \frac{\lambda_{n,p}}{2}\Big\} = O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big)$$
by Lemma A1.
Recall that $EZ = 0$ and $\hat\Sigma_{ZZ} = \frac{1}{n}\sum_{k=1}^n Z_kZ_k^T$. Then,
$$\hat\Sigma_{ZZ}\Omega - I_p = (\hat\Sigma_{ZZ} - \Sigma)\Omega = \frac{1}{n}\sum_{k=1}^n (Z_kZ_k^T\Omega - EZ_kZ_k^T\Omega)$$
and
$$P\big\{\|\hat\Sigma_{ZZ}\Omega - I_p\|_{\max} \ge \lambda_{n,p}\big\} \le P\Big\{\max_{i,j}\Big|\frac{1}{n}\sum_{k=1}^n (Z_{ki}Z_k^T\omega_j - EZ_{ki}Z_k^T\omega_j)\Big| \ge \lambda_{n,p}\Big\}.$$
Hence, (6) holds by (A4) in Lemma A1. □
Lemma A3
(Cai et al. (2011) [6]). Suppose the matrix $A = (a_1, \ldots, a_n) = (a_{ij})_{m\times n}$ satisfies $\max_{1\le i\le m}\sum_{j=1}^n |a_{ij}|^{\delta} \le s(p)$, and let $\hat A$ be any estimator of $A$ with $\|\hat a_j\|_1 \le \|a_j\|_1$ for each column $j$. Then
$$\|\hat A - A\|_{\ell_\infty} \lesssim s(p)\,\|\hat A - A\|_{\max}^{1-\delta}.$$
Proof of Theorem 2. 
Define the two events $A_1 := \{\|\hat\Sigma_{YX} - \Gamma\hat\Sigma_{XX}\|_{\max} \le \lambda_{n,p}\}$ and $A_2 := \{\|\hat\Sigma_{XX} - \Sigma_X\|_{\max} \le \lambda_{n,p}\}$. Then, by (4) and (5), we have $P(A_1\cap A_2) = 1 - O((p\vee q)^{-\delta} + n^{-\varepsilon/8})$. It suffices to show that on $A_1\cap A_2$,
$$\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le C N_q^{1-\delta_1}s_1(q)\Big(\tfrac{\log(p\vee q)}{n}\Big)^{\frac{1-\delta_1}{2}}. \tag{A11}$$
Since $\hat\gamma_j$ solves (2), we have $\|\hat\gamma_j\|_1 \le \|\gamma_j\|_1$ and $\|\hat\Sigma_{YX} - \hat\Gamma\hat\Sigma_{XX}\|_{\max} \le \lambda_{n,p}$. On the event $A_1\cap A_2$, this gives $\|(\hat\Gamma - \Gamma)\hat\Sigma_{XX}\|_{\max} \le 2\lambda_{n,p}$ and
$$\|(\hat\Gamma - \Gamma)\Sigma_X\|_{\max} \le \|(\hat\Gamma - \Gamma)\hat\Sigma_{XX}\|_{\max} + \|(\hat\Gamma - \Gamma)(\Sigma_X - \hat\Sigma_{XX})\|_{\max} \le 2\lambda_{n,p} + \|\hat\Gamma - \Gamma\|_{\ell_\infty}\lambda_{n,p}.$$
This, together with Condition 4, implies
$$\|\hat\Gamma - \Gamma\|_{\max} \le \|(\hat\Gamma - \Gamma)\Sigma_X\|_{\max}\,\|\Sigma_X^{-1}\|_{\ell_1} \le N_q\lambda_{n,p}\big(2 + \|\hat\Gamma - \Gamma\|_{\ell_\infty}\big). \tag{A12}$$
Recall that Condition 2 holds and $\|\hat\gamma_j\|_1 \le \|\gamma_j\|_1$. Then, by Lemma A3, we have
$$\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le Cs_1(q)\|\hat\Gamma - \Gamma\|_{\max}^{1-\delta_1} \lesssim s_1(q)N_q^{1-\delta_1}\lambda_{n,p}^{1-\delta_1}\big(1 + \|\hat\Gamma - \Gamma\|_{\ell_\infty}^{1-\delta_1}\big).$$
If $\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le 1$, then $\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le Cs_1(q)N_q^{1-\delta_1}\lambda_{n,p}^{1-\delta_1}$. If $\|\hat\Gamma - \Gamma\|_{\ell_\infty} > 1$, then by the condition $s_1(q) = o(N_q^{\delta_1-1}(n/\log(p\vee q))^{(1-\delta_1)/2})$, for large $n$,
$$\|\hat\Gamma - \Gamma\|_{\ell_\infty} \le Cs_1(q)N_q^{1-\delta_1}\lambda_{n,p}^{1-\delta_1} + \frac{1}{2}\|\hat\Gamma - \Gamma\|_{\ell_\infty}.$$
Thus, (A11) holds on $A_1\cap A_2$, and (7) follows. Substituting (7) into (A12) yields (8). Finally, (9) follows from (7), (8) and the inequality $p^{-1}\|\hat\Gamma - \Gamma\|_F^2 \le \|\hat\Gamma - \Gamma\|_{\max}\|\hat\Gamma - \Gamma\|_{\ell_\infty}$. The proof is done. □
Proof of Theorem 3.
Recall (3). Define the event $A := \{\|\hat\Sigma_{YY}\Omega - I_p\|_{\max} \le \tau_{n,p}\}$ with $\tau_{n,p} = C_2\sqrt{\log(p\vee q)/n}$. We first show that on $A$ the desired bounds for $\hat\Omega$ hold, and then verify $P(A) = 1 - O((p\vee q)^{-\delta} + n^{-\varepsilon/8})$.
By the definitions of $\hat\omega_j$ and $\hat\omega_j^1$, on the event $A$ we have
$$\|\hat\omega_j\|_1 \le \|\hat\omega_j^1\|_1 \le \|\omega_j\|_1. \tag{A13}$$
Hence, to establish (11), one needs only to prove that on the event $A$,
$$\|\hat\Omega - \Omega\|_{\max} \lesssim M_p\sqrt{\tfrac{\log(p\vee q)}{n}}, \tag{A14}$$
thanks to Lemma A3 and the fact that $\|\hat\Omega - \Omega\|_2 \le \|\hat\Omega - \Omega\|_{\ell_\infty}$ for symmetric matrices.
Note that the symmetrized estimator satisfies $\|\hat\Omega - \Omega\|_{\max} \le \|\hat\Omega^1 - \Omega\|_{\max}$. Using $\Omega = \Omega\hat\Sigma_{YY}\hat\Omega^1 + \Omega(I_p - \hat\Sigma_{YY}\hat\Omega^1)$, we have
$$\hat\Omega^1 - \Omega = (I_p - \Omega\hat\Sigma_{YY})\hat\Omega^1 - \Omega(I_p - \hat\Sigma_{YY}\hat\Omega^1).$$
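Expanding the right-hand side verifies this identity (a one-line check we add for readability):

```latex
(I_p-\Omega\hat{\Sigma}_{YY})\hat{\Omega}^1-\Omega(I_p-\hat{\Sigma}_{YY}\hat{\Omega}^1)
=\hat{\Omega}^1-\Omega\hat{\Sigma}_{YY}\hat{\Omega}^1-\Omega+\Omega\hat{\Sigma}_{YY}\hat{\Omega}^1
=\hat{\Omega}^1-\Omega.
```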
This, together with the matrix norm inequality $\|AB\|_{\max} \le \|A\|_{\max}\|B\|_{\ell_1}$, yields
$$\|\hat\Omega - \Omega\|_{\max} \le \|I_p - \Omega\hat\Sigma_{YY}\|_{\max}\|\hat\Omega^1\|_{\ell_1} + \|\hat\Sigma_{YY}\hat\Omega^1 - I_p\|_{\max}\|\Omega\|_{\ell_1}.$$
On the event $A$, $\|I_p - \Omega\hat\Sigma_{YY}\|_{\max} = \|I_p - \hat\Sigma_{YY}\Omega\|_{\max} \le \tau_{n,p}$, and $\|\hat\Sigma_{YY}\hat\Omega^1 - I_p\|_{\max} \le \tau_{n,p}$ by (3). Moreover, it follows from (A13) that $\|\hat\Omega^1\|_{\ell_1} \le \|\Omega\|_{\ell_1}$, and hence
$$\|\hat\Omega - \Omega\|_{\max} \le 2\tau_{n,p}\|\Omega\|_{\ell_1},$$
which yields the desired estimate (A14), thanks to $\Omega \in U_{\delta_2}(s_2(p))$ (so that $\|\Omega\|_{\ell_1} \le M_p$) and $\tau_{n,p} = C_2\sqrt{\log(p\vee q)/n}$.
It remains to prove that $P(A) = P\{\|\hat\Sigma_{YY}\Omega - I_p\|_{\max} \le \tau_{n,p}\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big)$.
Since $\|\hat\Sigma_{YY}\Omega - I_p\|_{\max} \le \|(\hat\Sigma_{YY} - \hat\Sigma_{ZZ})\Omega\|_{\max} + \|I_p - \hat\Sigma_{ZZ}\Omega\|_{\max}$, and by (6) we have $P\{\|\hat\Sigma_{ZZ}\Omega - I_p\|_{\max} \le \tau_{n,p}\} = 1 - O((p\vee q)^{-\delta} + n^{-\varepsilon/4})$, it suffices to show
$$P\big\{\|\hat\Sigma_{YY} - \hat\Sigma_{ZZ}\|_{\max} \le CM_p^{-1}\tau_{n,p}\big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big), \tag{A15}$$
because $\|(\hat\Sigma_{YY} - \hat\Sigma_{ZZ})\Omega\|_{\max} \le \|\hat\Sigma_{YY} - \hat\Sigma_{ZZ}\|_{\max}\|\Omega\|_{\ell_1}$, $\Omega \in U_{\delta_2}(s_2(p))$ and $\|\Omega\|_{\ell_1} \le M_p$.
Recall that $EZ = 0$ and $\hat\Sigma_{ZZ} = \frac{1}{n}\sum_{k=1}^n Z_kZ_k^T$. Substituting $Y_k = \Gamma X_k + Z_k$ into $\hat\Sigma_{YY} = \frac{1}{n}\sum_{k=1}^n (Y_k - \hat\Gamma X_k)(Y_k - \hat\Gamma X_k)^T$ and denoting $\Delta_n = (\delta_{ij}) := \hat\Gamma - \Gamma$, we have
$$\hat\Sigma_{YY} = \frac{1}{n}\sum_{k=1}^n (Z_k - \Delta_nX_k)(Z_k - \Delta_nX_k)^T$$
and
$$\|\hat\Sigma_{YY} - \hat\Sigma_{ZZ}\|_{\max} \le \frac{2}{n}\Big\|\sum_{k=1}^n Z_kX_k^T\Delta_n^T\Big\|_{\max} + \frac{1}{n}\Big\|\sum_{k=1}^n \Delta_nX_kX_k^T\Delta_n^T\Big\|_{\max}.$$
Hence, we only need to prove
$$P\Big\{\frac{1}{n}\Big\|\sum_{k=1}^n Z_kX_k^T\Delta_n^T\Big\|_{\max} \le CM_p^{-1}\tau_{n,p}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big) \tag{A16}$$
and
$$P\Big\{\frac{1}{n}\Big\|\sum_{k=1}^n \Delta_nX_kX_k^T\Delta_n^T\Big\|_{\max} \le CM_p^{-1}\tau_{n,p}\Big\} = 1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big). \tag{A17}$$
To establish (A16), by $s_1(q) \le C(1+M_p)^{-1}N_q^{\delta_1-2}(n/\log(p\vee q))^{(1-\delta_1)/2}$ and (7), we have
$$\frac{1}{n}\Big\|\sum_{k=1}^n Z_kX_k^T\Delta_n^T\Big\|_{\max} \le \|\hat\Gamma - \Gamma\|_{\ell_\infty}\max_{i,j}\Big|\frac{1}{n}\sum_{k=1}^n Z_{ki}X_{kj}\Big| \le CM_p^{-1}\tau_{n,p}$$
with probability at least $1 - O((p\vee q)^{-\delta} + n^{-\varepsilon/4})$. Thus, (A16) holds. It remains to show (A17). Note that
$$\max_{i,l}\Big|\frac{1}{n}\sum_{k=1}^n \Big(\sum_{j=1}^q \delta_{ij}X_{kj}\Big)\Big(\sum_{j=1}^q \delta_{lj}X_{kj}\Big)\Big| \le \max_i \frac{1}{n}\sum_{k=1}^n \Big(\sum_{j=1}^q \delta_{ij}X_{kj}\Big)^2 \le \max_i \sum_{j=1}^q \delta_{ij}^2\,\frac{1}{n}\sum_{k=1}^n X_{kj}^2. \tag{A18}$$
By Lemma A2, on an event of probability at least $1 - O\big((p\vee q)^{-\delta} + n^{-\varepsilon/8}\big)$,
$$\max_{1\le j\le q}\Big|\frac{1}{n}\sum_{k=1}^n X_{kj}^2\Big| \le \max_{1\le j\le q} EX_{1j}^2 + C\sqrt{\tfrac{\log(p\vee q)}{n}} \le C',$$
where $C'$ depends only on the moment bound in Condition 1.
where C depends only on the moment bound in Condition 1. Substituting this into (A18) yields
max i | j = 1 q δ i j 2 1 n k = 1 n X k j 2 |   Γ ^ Γ max Γ ^ Γ l max j | 1 n k = 1 n X k j 2 |   C M p 1 τ n , p ,
using the bounds from Theorem 2 on Γ ^ Γ max , Γ ^ Γ l and s 1 ( p ) , thereby establishing (A17). The proof is done. □

References

  1. Wold, S.; Sjostrom, M.; Eriksson, L. PLS-regression: A basic tool of chemometrics. Chemom. Intell. Lab. Syst. 2001, 58, 109–130. [Google Scholar] [CrossRef]
  2. Harrison, L.; Penny, W.; Friston, K. Multivariate autoregressive modeling of fMRI time series. NeuroImage 2003, 19, 1477–1491. [Google Scholar] [CrossRef]
  3. Meng, C.; Kuster, B.; Culhane, A.C.; Gholami, A.M. A multivariate approach to the integration of multi-omics datasets. BMC Bioinform. 2014, 15, 162. [Google Scholar] [CrossRef]
  4. Lee, C.F.; Lee, A.C.; Lee, J. Handbook of Quantitative Finance and Risk Management; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  5. Liu, R.; Yu, G. Estimation of the Error Structure in Multivariate Response Linear Regression Models. WIREs Comput. Stat. 2025, 17, e70021. [Google Scholar] [CrossRef]
  6. Cai, T.; Liu, W.; Luo, X. A Constrained l1 Minimization Approach to Sparse Precision Matrix Estimation. J. Am. Stat. Assoc. 2011, 106, 594–607. [Google Scholar] [CrossRef]
  7. Cai, T.T.; Liu, W.; Zhou, H.H. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Ann. Stat. 2016, 44, 455–488. [Google Scholar] [CrossRef]
  8. Avella-Medina, M.; Battey, H.S.; Fan, J.; Li, Q. Robust estimation of high-dimensional covariance and precision matrices. Biometrika 2018, 105, 271–284. [Google Scholar] [CrossRef] [PubMed]
  9. Cai, T.T.; Li, H.; Liu, W.; Xie, J. Covariate-adjusted precision matrix estimation with an application in genetical genomics. Biometrika 2012, 100, 139–156. [Google Scholar] [CrossRef]
  10. Chen, M.; Ren, Z.; Zhao, H.; Zhou, H. Asymptotically Normal and Efficient Estimation of Covariate-Adjusted Gaussian Graphical Model. J. Am. Stat. Assoc. 2016, 111, 394–406. [Google Scholar] [CrossRef]
  11. Tan, K.; Romon, G.; Bellec, P.C. Noise covariance estimation in multi-task high-dimensional linear models. Bernoulli 2024, 30, 1695–1722. [Google Scholar] [CrossRef]
  12. Bickel, P.J.; Levina, E. Covariance regularization by thresholding. Ann. Stat. 2008, 36, 2577–2604. [Google Scholar] [CrossRef] [PubMed]
  13. Tibshirani, R. Regression Shrinkage and Selection via The Lasso: A Retrospective. J. R. Stat. Soc. Ser. B Stat. Methodol. 2011, 73, 273–282. [Google Scholar] [CrossRef]
  14. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar] [CrossRef] [PubMed]
  15. Friedman, J.; Hastie, T.; Tibshirani, R. Sparse Inverse Covariance Estimation with the Graphical Lasso. Biostatistics 2008, 9, 432–441. [Google Scholar] [CrossRef]
  16. Wu, G.D.; Chen, J.; Hoffmann, C.; Bittinger, K.; Chen, Y.Y.; Keilbaugh, S.A.; Bewtra, M.; Knights, D.; Walters, W.A.; Knight, R.; et al. Linking Long-Term Dietary Patterns with Gut Microbial Enterotypes. Science 2011, 334, 105–108. [Google Scholar] [CrossRef]
  17. Cao, Y.; Lin, W.; Li, H. Large Covariance Estimation for Compositional Data Via Composition-Adjusted Thresholding. J. Am. Stat. Assoc. 2019, 114, 759–772. [Google Scholar] [CrossRef]
  18. He, Y.; Liu, P.; Zhang, X.; Zhou, W. Robust covariance estimation for high-dimensional compositional data with application to microbial communities analysis. Stat. Med. 2021, 40, 3499–3515. [Google Scholar] [CrossRef]
  19. Li, D.; Srinivasan, A.; Chen, Q.; Xue, L. Robust Covariance Matrix Estimation for High-Dimensional Compositional Data with Application to Sales Data Analysis. J. Bus. Econ. Stat. 2023, 41, 1090–1100. [Google Scholar] [CrossRef]
  20. Zhang, S.; Wang, H.; Lin, W. CARE: Large Precision Matrix Estimation for Compositional Data. J. Am. Stat. Assoc. 2025, 120, 305–317. [Google Scholar] [CrossRef]
  21. Wang, J.; Liang, W.; Li, L.; Wu, Y.; Ma, X. A new robust covariance matrix estimation for high-dimensional microbiome data. Aust. N. Z. J. Stat. 2024, 66, 281–295. [Google Scholar] [CrossRef]
  22. Coyte, K.Z.; Schluter, J.; Foster, K.R. The ecology of the microbiome: Networks, competition, and stability. Science 2015, 350, 663–666. [Google Scholar] [CrossRef]
  23. Greenblum, S.; Turnbaugh, P.J.; Borenstein, E. Metagenomic systems biology of the human gut microbiome reveals topological shifts associated with obesity and inflammatory bowel disease. Proc. Natl. Acad. Sci. USA 2012, 109, 594–599. [Google Scholar] [CrossRef]
  24. Massart, P. Concentration Inequalities and Model Selection; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2007; Volume 1896, pp. xiv+337. [Google Scholar]
Figure 1. Conditional correlation network of gut microbiota in the lean group (BMI $< 25$). Nodes represent 30 retained bacterial genera. Edges denote stable associations (bootstrap frequency ≥ 50%): green lines indicate positive correlations, red lines indicate negative correlations, and darker colors represent stronger associations.
Figure 2. Conditional correlation network of gut microbiota in the obese group (BMI $\ge 25$). Nodes represent 30 retained bacterial genera. Edges denote stable associations (bootstrap frequency ≥ 50%): green lines indicate positive correlations, red lines indicate negative correlations, and darker colors represent stronger associations.
Table 1. Average errors (standard errors) under three norms for the three methods.

(p, q, n, s1, s2)   Method       Spectral Norm   Frobenius Norm   Matrix l1 Norm
Model 1             CLIME        2.68 (0.03)     2.58 (0.06)      1.98 (0.59)
                    GLASSO       2.75 (0.03)     2.61 (0.05)      2.51 (0.16)
                    Our Method   2.12 (0.09)     1.78 (0.20)      1.54 (0.15)
Model 2             CLIME        2.78 (0.04)     2.68 (0.03)      2.49 (0.04)
                    GLASSO       2.94 (0.02)     2.90 (0.02)      2.59 (0.04)
                    Our Method   2.75 (0.03)     2.61 (0.05)      2.51 (0.16)
Model 3             CLIME        2.78 (0.04)     2.68 (0.03)      2.49 (0.04)
                    GLASSO       2.94 (0.02)     2.90 (0.02)      2.59 (0.04)
                    Our Method   2.75 (0.03)     2.61 (0.05)      2.51 (0.16)
Table 2. Quantitative characteristics of gut microbial networks in lean and obese groups.

Metric                                 Lean Group   Obese Group
Number of retained genera (nodes)      30           30
Number of stable edges                 7            5
Positive correlations (proportion)     2 (28.6%)    2 (40.0%)
Negative correlations (proportion)     5 (71.4%)    3 (60.0%)
Network stability score ¹              0.72         0.58

¹ The stability score is the average bootstrap frequency of all stable edges, ranging over [0, 1]; higher values indicate a more reliable network.