Article

Improved Test for High-Dimensional Mean Vectors and Covariance Matrices Using Random Projection

by
Tung-Lung Wu
Department of Mathematics and Statistics, Mississippi State University, 75 B.S. Hood Drive, Starkville, MS 39762, USA
Mathematics 2025, 13(13), 2060; https://doi.org/10.3390/math13132060
Submission received: 6 May 2025 / Revised: 10 June 2025 / Accepted: 17 June 2025 / Published: 21 June 2025
(This article belongs to the Special Issue Computational Intelligence in Addressing Data Heterogeneity)

Abstract

This paper proposes an improved random projection-based method for testing high-dimensional two-sample mean vectors and covariance matrices. For mean testing, the proposed approach incorporates training data to guide the construction of projection matrices toward the estimated mean difference, thereby substantially enhancing the power of the projected Hotelling’s $T^2$ statistic. For covariance testing, the method employs data-driven projections aligned with the leading eigenvector of the sample covariance matrix to amplify the differences between matrices. Aggregation strategies (maximum-, average-, and percentile-based statistics for the mean problem; minimum and average p-values for the covariance problem) are developed to stabilize performance across repeated projections. An application to gene expression data illustrates the method. Extensive simulation studies show that the proposed method performs favorably compared to a recent state-of-the-art technique, particularly in detecting sparse signals, while maintaining control of the Type-I error rate.

1. Introduction

We consider two-sample hypothesis testing problems involving high-dimensional mean vectors and covariance matrices. In high-dimensional settings, where the number of variables $p$ exceeds the sample size $n$, traditional methods such as Hotelling’s $T^2$ test become invalid due to the singularity of the sample covariance matrix. Such scenarios are common in modern applications, ranging from genomics to finance, where high-dimensional data are collected with relatively small sample sizes. These challenges have motivated the development of new statistical tools tailored to the high-dimensional regime. A notable contribution is the random projection method introduced by Lopes et al. [1], which has gained considerable attention due to its simplicity, flexibility, and effectiveness for dimension reduction.
Let $X_1, \ldots, X_{n_1}$ be independent and identically distributed (i.i.d.) random vectors from a $p$-dimensional normal distribution $N_p(\mu_1, \Sigma)$, and let $Y_1, \ldots, Y_{n_2}$ be i.i.d. from $N_p(\mu_2, \Sigma)$. We are interested in testing the hypothesis
$$H_0: \mu_1 = \mu_2. \tag{1}$$
The likelihood ratio test (LRT) statistic for testing the null hypothesis in (1) is given by
$$LRT = \frac{n_1 n_2}{n_1 + n_2} (\bar{X} - \bar{Y})' S^{-1} (\bar{X} - \bar{Y}),$$
where
  • $\bar{X} = \frac{1}{n_1}\sum_{i=1}^{n_1} X_i$ and $\bar{Y} = \frac{1}{n_2}\sum_{i=1}^{n_2} Y_i$ are the sample mean vectors,
  • $S_X = \frac{1}{n_1 - 1}\sum_{i=1}^{n_1} (X_i - \bar{X})(X_i - \bar{X})'$,
  • $S_Y = \frac{1}{n_2 - 1}\sum_{i=1}^{n_2} (Y_i - \bar{Y})(Y_i - \bar{Y})'$, and
  • $S$ is the pooled sample covariance matrix:
$$S = \frac{(n_1 - 1)S_X + (n_2 - 1)S_Y}{n_1 + n_2 - 2}.$$
It is well known that the LRT performs poorly in high-dimensional settings, particularly when the dimension p increases, while the sample size n remains fixed or grows slowly. Moreover, the test statistic becomes undefined when p > n due to the singularity of the sample covariance matrix. These limitations have led to the development of alternative testing procedures, several of which are reviewed below.
Many studies address the issue of singularity by modifying the sample covariance matrix. Bai and Saranadasa [2] showed that the performance of the classical Hotelling’s $T^2$ test deteriorates when the dimension $p$ approaches the sample size $n$. They proposed a test based on the squared Euclidean norm $\|\bar{X} - \bar{Y}\|^2$, which avoids inverting the sample covariance matrix. Their test statistic was shown to be asymptotically normal under mild conditions, including the existence of finite fourth moments and a high-dimensional asymptotic regime where $p/n \to y > 0$.
Chen and Qin [3] extended the work of Bai and Saranadasa [2] by developing a more general framework. Notably, they established the asymptotic normality of their test statistic without requiring a specific relationship between $p$ and $n$; in particular, the ratio $p/n$ may diverge to infinity.
A projection-based method was proposed by Lopes et al. [1], in which high-dimensional data are projected onto a lower-dimensional subspace of dimension $k < n$. This dimensionality reduction enables the use of the classical Hotelling’s $T^2$ test in the reduced space. Their test statistic takes the following form:
$$T_k^2 = \frac{n_1 n_2}{n_1 + n_2} \big(R_k(\bar{X} - \bar{Y})\big)' \big(R_k S R_k'\big)^{-1} R_k(\bar{X} - \bar{Y}),$$
where $R_k$ is a projection matrix that maps the original $p$-dimensional data to a $k$-dimensional subspace. Lopes et al. [1] derived the asymptotic power function of their test and recommended choosing $k = \lfloor n/2 \rfloor$, where $\lfloor a \rfloor$ denotes the integer part of $a$. Building on this framework, Srivastava et al. [4] proposed an exact F-test version of the projection method, which can be implemented via Monte Carlo simulation. They also analyzed the asymptotic power of their test in the regime where both $p$ and $n$ tend to infinity. The projection method offers several advantages: it is conceptually simple, computationally efficient, and imposes no restrictions on the relationship between $p$ and $n$, making it suitable for high-dimensional applications.
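To make the projected test concrete, the following is a minimal Python sketch of the statistic above, using Gaussian projections with orthonormalized columns; the function name and implementation details are ours, for illustration only, not code from Lopes et al. [1].

```python
import numpy as np

def projected_hotelling_T2(X, Y, k, rng=None):
    """Sketch of the projected Hotelling T^2 of Lopes et al. [1].
    X: (n1, p) and Y: (n2, p) data matrices; k: projection dimension."""
    rng = rng or np.random.default_rng()
    n1, p = X.shape
    n2 = Y.shape[0]
    # Draw a p x k Gaussian matrix and orthonormalize its columns.
    Q, _ = np.linalg.qr(rng.standard_normal((p, k)))
    Xk, Yk = X @ Q, Y @ Q                       # projected samples in R^k
    diff = Xk.mean(axis=0) - Yk.mean(axis=0)    # projected mean difference
    # Pooled k x k sample covariance of the projected data.
    S = ((n1 - 1) * np.atleast_2d(np.cov(Xk, rowvar=False)) +
         (n2 - 1) * np.atleast_2d(np.cov(Yk, rowvar=False))) / (n1 + n2 - 2)
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(S, diff)
```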
An alternative approach based on random subspaces was proposed by Thulin [5]. Instead of testing the equality of the full mean vectors, this method repeatedly selects $k < n_1 + n_2 - 2$ components at random from the sample mean vectors and tests the equality of the corresponding sub-vectors. This procedure is repeated multiple times, and multiple testing corrections, such as the Bonferroni adjustment, are applied to control the overall Type-I error rate. A notable feature of this method is that the test statistic is invariant under linear transformations of the marginal distributions. Permutation tests are used to approximate the null distribution.
Instead of relying on the sample covariance matrix, Chen et al. [6] proposed a one-sample test based on improved estimators of $\mathrm{tr}(\Sigma)$ and $\mathrm{tr}(\Sigma^2)$, under the assumption that $\mathrm{tr}(\Sigma^4) = o\big(\mathrm{tr}^2(\Sigma^2)\big)$.
In the two-sample covariance testing context, let $X_1, \ldots, X_{n_1}$ be i.i.d. from $N_p(0, \Sigma_1)$ and $Y_1, \ldots, Y_{n_2}$ be i.i.d. from $N_p(0, \Sigma_2)$. We test:
$$H_{02}: \Sigma_1 = \Sigma_2. \tag{4}$$
The LRT statistic is given by
$$T_2 = -2 \log \frac{|S_1|^{n_1/2} \cdot |S_2|^{n_2/2}}{|w_1 S_1 + w_2 S_2|^{(n_1 + n_2)/2}},$$
where $S_1$ and $S_2$ are the sample covariance matrices of $X_i$ and $Y_i$, respectively, and $w_j = n_j/(n_1 + n_2)$ for $j = 1, 2$.
This statistic also suffers from singularity issues when $p > n_j$, rendering the test unstable in high-dimensional scenarios.
In summary, significant progress has been made in the field of testing high-dimensional covariance matrices. Broadly, three main approaches have emerged. The first involves the use of random matrix theory to study the limiting distributions of the extreme eigenvalues of the sample covariance matrix, as developed in Bai [7] and Bai and Yin [8]. The second focuses on constructing consistent and improved estimators of the population covariance matrix; notable contributions include the works of Bickel and Levina [9,10] and Li and Chen [11]. The third approach involves regularized covariance estimation, where the estimator is obtained by solving a maximum likelihood problem under a constraint on the condition number, as proposed in Won et al. [12].
In parallel, several works have explored the use of random projections for testing two-sample means in high-dimensional settings; see, for example, Lopes et al. [1] and Srivastava et al. [4].
In this study, we extend the projection-based method of Lopes et al. [1] by investigating random projections with nonzero means guided by training data. The approach leverages the training data to identify and emphasize the most discriminative directions. As a result, it enhances the power of the projected Hotelling’s $T^2$ test by combining the strengths of both non-random (data-driven) and random projections.
This method is appealing for three reasons: it is conceptually simple, straightforward to implement, and computationally efficient. Moreover, unless otherwise specified, it requires no particular relationship between the dimension $p$ and the sample size $n$.
The remainder of the paper is organized as follows. Section 2 presents the proposed methodology. Section 3 provides extensive simulation studies comparing the proposed tests with recent approaches. A real-world application to Acute Lymphoblastic Leukemia (ALL) gene expression data is presented in Section 4. Section 5 concludes with a summary and discussion.

2. Random Projection

2.1. A Brief Review

Random projection is an emerging and powerful technique for handling high-dimensional data. It reduces the dimensionality of the data from $p$ to $k$, where $k < \min(n, p)$, while preserving the validity and applicability of conventional statistical procedures. This approach enables efficient computation and facilitates hypothesis testing in settings where classical methods may break down due to the high dimensionality. Several researchers have successfully applied random projection techniques to hypothesis testing problems, particularly for moderate to large values of $k$.
Given a random projection matrix $R \in \mathbb{R}^{p \times k}$ that is independent of the data, observations $X_i$ and $Y_i$ are projected onto a lower-dimensional space via
$$X_i^R = R' X_i, \quad i = 1, \ldots, n_1, \quad \text{and} \quad Y_i^R = R' Y_i, \quad i = 1, \ldots, n_2,$$
where $X_i^R, Y_i^R \in \mathbb{R}^k$ are the projected observations. Conditional on $R$, $X_i^R$ and $Y_i^R$ follow $N_k(R'\mu_1, R'\Sigma R)$ and $N_k(R'\mu_2, R'\Sigma R)$, respectively. For notational convenience, we suppress the subscript when referring to univariate normal distributions.
Before developing test statistics based on the projected data, we briefly discuss the construction of the projection matrix $R = (r_{ij})$. In theory, the entries $r_{ij}$ can be drawn from any distribution with mean zero. A common and theoretically well-justified choice is to take $R$ with i.i.d. standard normal entries and then orthonormalize the columns so that $R'R = I_k$. This ensures that the projected data preserve important geometric and statistical properties. For computational efficiency, Srivastava et al. [4] proposed a method based on “one permutation + one random projection”, inspired by the ideas of “very sparse random projections” in Li et al. [13] and “one permutation hashing” in Li et al. [14].
In the context of two-sample testing, Srivastava et al. [4] introduced RAPTT (RAndom Projection T-Test), an exact testing procedure for high-dimensional mean vectors under multivariate normality. In their approach, Hotelling’s $T^2$ statistic is computed on the projected data:
$$T_R^2 = \left(\frac{1}{n_1} + \frac{1}{n_2}\right)^{-1} (\bar{X} - \bar{Y})' R (R' S R)^{-1} R' (\bar{X} - \bar{Y}),$$
where $\bar{X}$ and $\bar{Y}$ are the sample means of the two groups, and $S$ is the pooled sample covariance matrix. To increase statistical power, RAPTT aggregates the results over multiple random projections. Let $p_i$ denote the p-value obtained from the projected Hotelling test using projection matrix $R_i$ for $i = 1, \ldots, m$. The null hypothesis is rejected if
$$\frac{1}{m} \sum_{i=1}^{m} p_i < \mu_\alpha,$$
where $\mu_\alpha$ is a threshold chosen such that
$$P\left(\frac{1}{m} \sum_{i=1}^{m} p_i < \mu_\alpha \,\middle|\, H_0\right) = \alpha.$$
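For illustration, the conversion of a projected $T^2$ value into an exact p-value and the RAPTT decision rule can be sketched as follows. The helper names are ours; the F-distribution relation used is the standard one for a two-sample Hotelling $T^2$ computed on $k$-dimensional data.

```python
import numpy as np
from scipy.stats import f

def hotelling_pvalue(T2, k, n1, n2):
    """Exact p-value for a two-sample Hotelling T^2 on k-dimensional
    (projected) data: under H0,
    (n1+n2-k-1)/(k(n1+n2-2)) * T^2 ~ F(k, n1+n2-k-1)."""
    stat = (n1 + n2 - k - 1) / (k * (n1 + n2 - 2)) * T2
    return f.sf(stat, k, n1 + n2 - k - 1)

def raptt_reject(pvals, mu_alpha):
    """RAPTT decision rule: reject H0 when the average of the m
    per-projection p-values falls below the calibrated threshold mu_alpha."""
    return np.mean(pvals) < mu_alpha
```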
In a related line of work, Wu and Li [15] extended the random projection methodology to high-dimensional covariance matrix testing. Their results demonstrate that even using one-dimensional projection vectors can be sufficient and effective for certain classes of structured covariance matrices, offering a computationally attractive alternative to full-dimensional methods.

2.2. An Improved Test for Equality of Two Mean Vectors

The random projection approach reduces high-dimensional data to a lower-dimensional subspace. However, this transformation may introduce distortion, potentially affecting the accuracy of the test statistic. To mitigate this issue and enhance the power of the test, we propose using a random projection matrix whose columns are generated with a nonzero mean aligned with the estimated difference between the group means.
Let $X_1, \ldots, X_{n_1}$ follow a $p$-dimensional normal distribution $N_p(\mu_1, \Sigma)$, and let $Y_1, \ldots, Y_{n_2}$ follow $N_p(\mu_2, \Sigma)$. Suppose that there are reference samples $\{X_i^0\}_{i=1}^{m_1}$ and $\{Y_i^0\}_{i=1}^{m_2}$, referred to as training samples, with $X_1^0, \ldots, X_{m_1}^0 \sim N_p(\mu_1, \Sigma)$ and $Y_1^0, \ldots, Y_{m_2}^0 \sim N_p(\mu_2, \Sigma)$. Let $\bar{X}^0$, $\bar{Y}^0$, $S_X^0$, and $S_Y^0$ denote the sample mean vectors and sample covariance matrices computed from the training samples $\{X_i^0\}_{i=1}^{m_1}$ and $\{Y_i^0\}_{i=1}^{m_2}$, respectively.
To test $H_0$ in (1), we consider the projected statistic
$$T_P^2 = (1/n_1 + 1/n_2)^{-1} (\bar{X} - \bar{Y})' P (P' S_p P)^{-1} P' (\bar{X} - \bar{Y}), \tag{7}$$
where
$$S_p = \frac{(n_1 - 1)S_X + (n_2 - 1)S_Y + (m_1 - 1)S_X^0 + (m_2 - 1)S_Y^0}{n_1 + n_2 + m_1 + m_2 - 4}$$
is the pooled sample covariance matrix computed from both training and test samples, and the columns of the $p \times k$ projection matrix $P$ are generated independently from $N_p(\theta_0, I)$, where $\theta_0 = \bar{X}^0 - \bar{Y}^0$. This construction enables $P$ to amplify the mean difference, thereby increasing the power of the test.
Lemma 1.
Given the projection matrix $P$, the statistic $T_P^2$ follows a Hotelling’s $T^2$ distribution.
Proof.
Note that $\{X_i\}_{i=1}^{n_1}$, $\{Y_i\}_{i=1}^{n_2}$, $\{X_i^0\}_{i=1}^{m_1}$, and $\{Y_i^0\}_{i=1}^{m_2}$ are independent samples. Under normality, $\bar{X}$, $\bar{Y}$, $\bar{X}^0$, and $\bar{Y}^0$ are independent of $S_X$, $S_Y$, $S_X^0$, and $S_Y^0$. Thus, the projection matrix $P$, which depends only on $\bar{X}^0 - \bar{Y}^0$, is independent of $S_p$. Moreover, the pooled sample covariance matrix $S_p$ follows a Wishart distribution with $n_1 + n_2 + m_1 + m_2 - 4$ degrees of freedom. Hence, by Theorem 5.8 in Härdle and Simar [16], given the projection matrix $P$, $T_P^2$ follows a Hotelling’s $T^2$ distribution with degrees of freedom $k$ and $n_1 + n_2 + m_1 + m_2 - 4$.    □
Theorem 1.
Let $c_\alpha$ be chosen such that $F_{k,\,n_1+n_2+m_1+m_2-3-k}(c_\alpha) = 1 - \alpha$, where $F_{k,\,n_1+n_2+m_1+m_2-3-k}$ is the F-distribution function with degrees of freedom $k$ and $n_1 + n_2 + m_1 + m_2 - 3 - k$. Then,
$$P\left(\frac{n_1 + n_2 + m_1 + m_2 - 3 - k}{k(n_1 + n_2 + m_1 + m_2 - 4)}\, T_P^2 \ge c_\alpha \,\middle|\, H_0\right) = \alpha.$$
Proof.
$$P\left(\frac{n_1 + n_2 + m_1 + m_2 - 3 - k}{k(n_1 + n_2 + m_1 + m_2 - 4)}\, T_P^2 \ge c_\alpha \,\middle|\, H_0\right) = E\left[P\left(\frac{n_1 + n_2 + m_1 + m_2 - 3 - k}{k(n_1 + n_2 + m_1 + m_2 - 4)}\, T_P^2 \ge c_\alpha \,\middle|\, H_0, P\right)\right] = E[\alpha] = \alpha.$$
The second equality follows from Lemma 1.    □
As discussed in Wu and Li [15], a single random-projection Hotelling test may suffer from low power, as different projections can yield contradictory results. To address this, a common remedy is to use multiple random projections. Specifically, let $P_1, P_2, \ldots, P_m$ be $m$ independent projection matrices, and let $T_{P_j}^2$ denote the statistic in (7) for projection $P_j$. We consider the following three aggregate statistics:
(1) Maximum: $T_1 = \max_{1 \le j \le m} T_{P_j}^2$;
(2) Average: $T_2 = \frac{1}{m} \sum_{j=1}^{m} T_{P_j}^2$;
(3) 100$p$-th percentile: $T_3 = T_{P_{(\lceil mp \rceil)}}^2$, where $T_{P_{(1)}}^2 \le T_{P_{(2)}}^2 \le \cdots \le T_{P_{(m)}}^2$ are the ordered statistics and $\lceil x \rceil$ denotes the smallest integer not less than $x$.
The null hypothesis $H_0: \mu_1 = \mu_2$ is rejected if $T_i \ge c_i(\alpha)$, where $c_i(\alpha)$ is chosen such that $P\big(T_i \ge c_i(\alpha) \mid H_0\big) = \alpha$. These aggregate statistics help stabilize the performance of the test by leveraging information across multiple projections, thus improving robustness and power. For the numerical results, we select $p = 0.95$.
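A minimal sketch of the proposed procedure, assuming training samples are available; the variable and function names are ours, and the percentile indexing follows the ceiling convention above.

```python
import numpy as np

def improved_projected_T2(X, Y, X0, Y0, k, m, rng=None):
    """Sketch of the proposed test. X, Y are (n1, p) and (n2, p) test samples;
    X0, Y0 are (m1, p) and (m2, p) training samples (m1, m2 >= 2).
    Returns the Max, Ave, and 95th-percentile aggregate statistics."""
    rng = rng or np.random.default_rng()
    n1, p = X.shape
    n2, m1, m2 = Y.shape[0], X0.shape[0], Y0.shape[0]
    theta0 = X0.mean(axis=0) - Y0.mean(axis=0)   # training mean difference
    # Pooled covariance S_p from both test and training samples.
    Sp = ((n1 - 1) * np.cov(X, rowvar=False) + (n2 - 1) * np.cov(Y, rowvar=False)
          + (m1 - 1) * np.cov(X0, rowvar=False)
          + (m2 - 1) * np.cov(Y0, rowvar=False)) / (n1 + n2 + m1 + m2 - 4)
    diff = X.mean(axis=0) - Y.mean(axis=0)
    T2 = np.empty(m)
    for j in range(m):
        # Columns of P drawn independently from N_p(theta0, I).
        P = theta0[:, None] + rng.standard_normal((p, k))
        Pd = P.T @ diff
        T2[j] = Pd @ np.linalg.solve(P.T @ Sp @ P, Pd) / (1 / n1 + 1 / n2)
    T2.sort()
    return T2[-1], T2.mean(), T2[int(np.ceil(0.95 * m)) - 1]  # Max, Ave, 95
```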

2.3. An Improved Test for Equality of Two Covariance Matrices

In the two-sample covariance test, let $X_1, \ldots, X_{n_1}$ be i.i.d. observations from a $p$-dimensional normal distribution $N_p(0, \Sigma_1)$, and let $Y_1, \ldots, Y_{n_2}$ be i.i.d. observations from $N_p(0, \Sigma_2)$. In this section, we develop an improved test that adapts the idea of Wu and Li [15] using one-dimensional random projections.
We project the two samples $\{X_i\}$ and $\{Y_i\}$ using one-dimensional $p \times 1$ random vectors $R_j$, $j = 1, \ldots, m$. Given $R_j$, the projected data $X_{ij} = R_j' X_i$ and $Y_{ij} = R_j' Y_i$ follow univariate normal distributions with variances $\sigma_1 = R_j' \Sigma_1 R_j$ and $\sigma_2 = R_j' \Sigma_2 R_j$, respectively. The testing problem in (4) thus reduces to testing whether $\sigma_1 - \sigma_2 = 0$.
Regardless of $R_j$, it always holds that $\sigma_1 - \sigma_2 = 0$ under $H_{02}$. When $H_{02}$ is false, we want to maximize the difference between $\sigma_1$ and $\sigma_2$ for better power. We therefore generate each random vector $R_j$ from a $p$-variate normal distribution $N_p(\nu, I)$, where $\nu$ is the eigenvector associated with the largest eigenvalue of $S_2$, so as to amplify the difference between $\Sigma_1$ and $\Sigma_2$. Suppose reference (or training) samples are available to estimate $\nu$ (if not, part of the data can be used as training samples). Let $s_{ij} = R_j' S_i R_j$, $i = 1, 2$, and $F_j = s_{2j}/s_{1j}$, $j = 1, \ldots, m$.
Theorem 2.
Let $f_\alpha$ be chosen such that $F_{n_2-1,\,n_1-1}(f_\alpha) = 1 - \alpha$, where $F_{n_2-1,\,n_1-1}$ is the F-distribution function with degrees of freedom $n_2 - 1$ and $n_1 - 1$. Then, for $j = 1, 2, \ldots, m$,
$$P\left(F_j \ge f_\alpha \mid H_{02}\right) = \alpha.$$
Proof.
The proof follows arguments similar to those used in the proof of Theorem 1 and is therefore omitted.    □
Let $p_j$ be the p-value of the statistic $F_j$ using $R_j$. Rather than using $\max_j F_j$, as suggested in Wu and Li [15], we consider two test statistics:
(1) Minp: $W_1 = \min_{1 \le j \le m} p_j$;
(2) Avep: $W_2 = \frac{1}{m} \sum_{j=1}^{m} p_j$.
The null hypothesis $H_{02}$ is rejected if $W_i \le w_i(\alpha)$, where $w_i(\alpha)$ is chosen such that $P\big(W_i \le w_i(\alpha) \mid H_{02}\big) = \alpha$, $i = 1, 2$.
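A compact sketch of this covariance test follows; the names are ours, and the leading eigenvector $\nu$ is assumed to have been computed beforehand from training data (e.g., via np.linalg.eigh applied to the training-sample covariance).

```python
import numpy as np
from scipy.stats import f

def projected_cov_test(X, Y, nu, m, rng=None):
    """Sketch of the one-dimensional projected F-test for H02: Sigma1 = Sigma2.
    nu: leading eigenvector estimated from training data;
    each projection vector R_j is drawn from N_p(nu, I)."""
    rng = rng or np.random.default_rng()
    n1, p = X.shape
    n2 = Y.shape[0]
    S1, S2 = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
    pvals = np.empty(m)
    for j in range(m):
        R = nu + rng.standard_normal(p)        # R_j ~ N_p(nu, I)
        Fj = (R @ S2 @ R) / (R @ S1 @ R)       # F_j = s_{2j} / s_{1j}
        pvals[j] = f.sf(Fj, n2 - 1, n1 - 1)    # F(n2 - 1, n1 - 1) under H02
    return pvals.min(), pvals.mean()           # Minp (W1) and Avep (W2)
```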

3. Numerical Results

3.1. Comparing Two Mean Vectors

3.1.1. Setup and Simulation Design

To evaluate the performance of the proposed projection-based testing procedures, we conduct a series of simulation studies under high-dimensional settings. In all simulations, we set the sample sizes $n_1 = n_2 = 50$ and the dimension $p = 200$. Without loss of generality, we let $\mu_1 = 0$.
We consider two types of covariance structures for the underlying distributions:
  • Independent structure: $\Sigma_1 = I_p$, where $I_p$ is the $p \times p$ identity matrix.
  • Toeplitz structure: $\Sigma_2$ is a symmetric Toeplitz matrix defined by the autocorrelation sequence $(1, 0.4, 0, \ldots, 0)$, so that
$$\Sigma_2 = \begin{pmatrix} 1 & 0.4 & 0 & \cdots & 0 \\ 0.4 & 1 & 0.4 & \cdots & 0 \\ 0 & 0.4 & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0.4 \\ 0 & \cdots & 0 & 0.4 & 1 \end{pmatrix}.$$
Under the null hypothesis, both mean vectors are set to zero, i.e., $\mu_1 = \mu_2 = 0$. To investigate the power under alternatives, we consider two scenarios for $\mu_2$:
  • Dense alternative: 75% of the components of $\mu_2$ are nonzero.
  • Sparse alternative: 1% of the components of $\mu_2$ are nonzero.
The nonzero components are sampled from $N(1, 1)$ and rescaled to ensure
$$\frac{1}{2} (\mu_1 - \mu_2)' \Sigma^{-1} (\mu_1 - \mu_2) = 1.$$
When no reference data are available, we divide the samples into training and testing sets. Let $m_1 = m_2$ denote the training sample sizes per group. The projection matrix $P$ is constructed from columns independently drawn from $N_p(\theta_0, I_p)$, where $\theta_0 = \bar{X}^0 - \bar{Y}^0$ is the sample mean difference from the training data. Critical values are estimated from the combined simulated null distributions based on 1000 simulation runs for each covariance matrix.
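A sketch of this simulation setup under the sparse alternative and the Toeplitz structure is given below; the variable names are ours, and the rescaling follows our reading of the signal-strength constraint above.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
p, n1, n2, m1, m2 = 200, 50, 50, 10, 10

# Toeplitz covariance with autocorrelation sequence (1, 0.4, 0, ..., 0).
Sigma = toeplitz(np.r_[1.0, 0.4, np.zeros(p - 2)])

# Sparse alternative: 1% nonzero components drawn from N(1, 1), rescaled so
# that (1/2)(mu1 - mu2)' Sigma^{-1} (mu1 - mu2) = 1 (with mu1 = 0).
mu2 = np.zeros(p)
idx = rng.choice(p, size=max(1, p // 100), replace=False)
mu2[idx] = rng.normal(1.0, 1.0, size=idx.size)
mu2 *= np.sqrt(2.0 / (mu2 @ np.linalg.solve(Sigma, mu2)))

# Generate the two groups; the first m1 (m2) rows serve as training data.
X_all = rng.multivariate_normal(np.zeros(p), Sigma, size=n1 + m1)
Y_all = rng.multivariate_normal(mu2, Sigma, size=n2 + m2)
X0, X = X_all[:m1], X_all[m1:]
Y0, Y = Y_all[:m2], Y_all[m2:]
```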

3.1.2. Simulation Results

We evaluate three aggregation strategies over $m$ random projections:
(1) Maximum (Max): $T_1 = \max_{1 \le j \le m} T_{P_j}^2$;
(2) Average (Ave): $T_2 = \frac{1}{m} \sum_{j=1}^{m} T_{P_j}^2$;
(3) 95th percentile (95): $T_3 = T_{P_{(\lceil 0.95m \rceil)}}^2$.
Type-I Error Control
Empirical sizes under the null hypothesis are presented in Figures 1–6. Across all configurations, varying $m_1$, $k$, and $m = 10$ or 100, the empirical sizes are generally controlled at the nominal level $\alpha = 0.05$, with minor fluctuations attributable to sampling variability.
Power Comparisons
We now present the empirical power of the three proposed tests, Max, Ave, and 95th percentile (95), in various scenarios. The simulations evaluate the performance of these tests across a range of projection dimensions $k$, training sample sizes $m_1$, numbers of projections $m$, and both dense and sparse alternatives under two different covariance structures ($\Sigma_1 = I$ and the Toeplitz matrix $\Sigma_2$).
Case 1: Dense Alternative with $\Sigma_1 = I$ and $m = 10$
Figures 7–9 show power curves for training sizes $m_1 = 1, \ldots, 30$ and projection dimensions $k = 1, \ldots, 47$. Using training data significantly improves power for small $k$. For example, when $k = 2$, the power increases more than twofold compared to RAPTT (horizontal red dotted line) without training. The Ave statistic generally outperforms Max and 95, especially for moderate $k$.
Case 2: Sparse Alternative with $\Sigma_1 = I$ and $m = 10$
Figures 10–12 show that, under sparse alternatives, the Max and 95 statistics are more powerful for small $k$, while Ave dominates for larger $k$. Training samples consistently enhance power, particularly in low dimensions. These results highlight that in sparse settings, the maximum can be replaced by the 95th percentile to detect sparse signals in lower-dimensional projections (small $k$), while maintaining satisfactory performance for moderate to large $k$.
Case 3: Dense Alternative with $\Sigma_2$ (Toeplitz) and $m = 100$
Figures 13–15 provide power curves for the more complex covariance structure $\Sigma_2$. Similar trends are observed: Ave performs best overall, while Max is slightly superior for small $k$ and small $m_1$.
Case 4: Sparse Alternative with $\Sigma_2$ and $m = 100$
Figures 16–18 present results under the sparse alternative with $\Sigma_2$. As expected, the Max test excels at small $k$ and small training sample sizes; it consistently outperforms both Ave and 95 when the projection dimensionality is kept low, owing to its ability to capture strong, isolated differences.
As $k$ increases, the performance of Max deteriorates, and Ave regains superiority. The 95 test continues to provide a good compromise between power and robustness, especially for intermediate values of $k$.
Although the overall power of the tests under sparse alternatives is lower than under dense ones, the use of training samples still yields meaningful power gains at small $k$. The gap between RAPTT and our methods is narrower in this case, but the benefit of informed projections remains visible.
Finally, to demonstrate the consistency of the proposed tests, the mean vector $\mu_2$ is generated from $N(1, 1)$ and then rescaled so that
$$\frac{1}{2} (\mu_1 - \mu_2)' \Sigma^{-1} (\mu_1 - \mu_2) = c.$$
To evaluate performance across a range of signal strengths $c$, we consider various combinations of projection dimensions $k$ and training sample sizes $m_1$. The results in Table 1 and Table 2 demonstrate that the proposed test procedures are consistent. Specifically, the power of all tests increases monotonically with the signal strength $c$, approaching one as $c$ grows large.

3.2. Comparing Two Covariance Matrices

3.2.1. Setup and Simulation Design

We adopt the models in Wu and Li [15] to evaluate our proposed method. The three models considered are:
Model a:
$\{X_i\}_{i=1}^{n_1}$ and $\{Y_i\}_{i=1}^{n_2}$ are independently generated as follows: let $Z_{ij}$ be i.i.d. standard normal, and construct $X_i = (X_{i1}, \ldots, X_{ip})'$ by letting
$$X_{ij} = Z_{ij} + \theta_1 Z_{i,j+1},$$
and $Y_i = (Y_{i1}, \ldots, Y_{ip})'$ by letting
$$Y_{ij} = Z_{ij} + \theta_1 Z_{i,j+1} + \theta_2 Z_{i,j+2},$$
with $\theta_1 = 0.5$ and $\theta_2 = 0.5$.
Next, consider population covariance matrices of the following form:
$$\Sigma_1 = \begin{pmatrix} d_1 & \rho_1 & \cdots & \rho_1^{p-2} & \rho_1^{p-1} \\ \rho_1 & d_1 & \cdots & \rho_1^{p-3} & \rho_1^{p-2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \rho_1^{p-2} & \rho_1^{p-3} & \cdots & d_1 & \rho_1 \\ \rho_1^{p-1} & \rho_1^{p-2} & \cdots & \rho_1 & d_1 \end{pmatrix} \quad \text{and} \quad \Sigma_2 = \begin{pmatrix} d_2 & \rho_2 & \cdots & \rho_2^{p-2} & \rho_2^{p-1} \\ \rho_2 & d_2 & \cdots & \rho_2^{p-3} & \rho_2^{p-2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \rho_2^{p-2} & \rho_2^{p-3} & \cdots & d_2 & \rho_2 \\ \rho_2^{p-1} & \rho_2^{p-2} & \cdots & \rho_2 & d_2 \end{pmatrix}.$$
Model b:
$(d_1, \rho_1, d_2, \rho_2) = (1.2, 0.6, 1, 0.5)$
Model c:
$(d_1, \rho_1, d_2, \rho_2) = (1.1, 0.24, 1, 0.2)$
Critical values are estimated from the combined simulated null distributions based on 1000 simulation runs for each covariance matrix in Models a, b, and c.
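For reference, a short sketch of the Model a data generation; the function name is ours.

```python
import numpy as np

def gen_model_a(n1, n2, p, theta1=0.5, theta2=0.5, rng=None):
    """Sketch of Model a: moving-average structure over i.i.d. N(0, 1) noise.
    Each observation uses its own noise vector of length p + 2."""
    rng = rng or np.random.default_rng()
    Zx = rng.standard_normal((n1, p + 2))
    Zy = rng.standard_normal((n2, p + 2))
    X = Zx[:, :p] + theta1 * Zx[:, 1:p + 1]                        # X_ij
    Y = Zy[:, :p] + theta1 * Zy[:, 1:p + 1] + theta2 * Zy[:, 2:]   # Y_ij
    return X, Y
```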

3.2.2. Simulation Results

We evaluated the performance of the proposed tests for comparing two covariance matrices under three distinct models (Models a, b, and c). We also include the test statistic $\max F_j$ from Wu and Li [15] for comparison. This allows us to evaluate the relative performance of our p-value-based approaches against the previously suggested maximum-based method.
Type-I Error Control
Under the null hypothesis $\Sigma_1 = \Sigma_2$, we select one representative scenario from each model to evaluate the sizes of the proposed tests. Specifically, we choose $\theta_1 = 0.5$ and $\theta_2 = 0$ for Model a, $d_1 = d_2 = 1.2$ and $\rho_1 = \rho_2 = 0.6$ for Model b, and $d_1 = d_2 = 1.1$ and $\rho_1 = \rho_2 = 0.24$ for Model c. The empirical sizes under these settings are reported in Table 3 and Table 4 for $m = 10$ and $m = 100$, respectively.
The empirical sizes in Table 3 demonstrate that the proposed tests effectively control the Type I error across all training sizes and models, with most values close to the nominal level of 0.05. Table 4 shows that the tests generally maintain appropriate Type I error control when $m = 100$; while a few entries reach values near 0.08, these deviations are modest. Overall, the results suggest that the tests remain well calibrated under the null hypothesis.
Power Comparisons
We computed the empirical power of the tests across varying training sample sizes (from 0 to 20 in increments of 2) and two projection settings ($m = 10$ and $m = 100$). The results are summarized in Table 5 and Table 6.
The Avep statistic consistently outperformed both Minp and $\max F_j$ across all models and settings, particularly for larger $m$. This advantage likely stems from the non-sparse nature of the covariance matrix differences, where averaging over projections captures a broader structure.
For $m = 10$ (Table 5), all three statistics demonstrated modest gains in power with small training sample sizes. However, Avep remained superior in most scenarios, while $\max F_j$ and Minp performed comparably. The results for $m = 100$ (Table 6) revealed similar patterns, with substantially larger power gains: increasing the number of projections to $m = 100$ significantly enhanced the power of all three statistics.
We demonstrate the power consistency of the proposed tests using Model a, with $\theta_1 = 0$ and varying $\theta_2$. In this setting, the Frobenius norm of $\Sigma_2 - \Sigma_1$ increases with $\theta_2$, reflecting greater deviation from the null hypothesis. As shown in Figures 19–22, the empirical power of the tests approaches 1 as $\theta_2$ increases, for both $m = 10$ and $m = 100$.

4. Application to Acute Lymphoblastic Leukemia (ALL) Data

We apply the proposed method to the Acute Lymphoblastic Leukemia (ALL) dataset, which has been previously analyzed by Chen et al. [6]. ALL is a common type of cancer characterized by the overproduction of lymphocytes in the bone marrow. The dataset comprises gene expression profiles from 128 individuals, each with 12,625 features.

4.1. Data Preparation

To align with biological interpretability, we subset the data to include only individuals classified as either BCR/ABL or NEG based on molecular biology annotations (mol.biol column). This results in two groups: BCR/ABL ($n_1 = 37$ individuals) and NEG ($n_2 = 42$ individuals). We focus on Gene Ontology (GO) terms across three domains, Molecular Function (MF), Biological Process (BP), and Cellular Component (CC), to test for differences in covariance matrices between the two groups.

4.2. Procedures

  • Initial Screening: We follow the same procedure as in Chen et al. [6], performing an initial screening using the genefilter package (Bioconductor version 3.6) to retain 2391 genes for analysis.
  • Gene Set Selection: GO term-based gene sets are extracted, excluding those with fewer than 2 genes to ensure meaningful multivariate analysis. This yields 3468, 571, and 803 GO terms for BP, CC, and MF, respectively. The largest gene set contains 1644 genes.
  • Projected Test: For each gene set, we apply the Avep covariance matrix test. For gene sets that fail to reject, we then apply the Ave mean test with $m = 10$ random projections to compare mean vectors between the BCR/ABL and NEG groups. We consider two training sample size settings: $m_1 = m_2 = 0$ (no training data) and $m_1 = m_2 = 2$ (with training data), as suggested by Table 5.
  • Multiple Testing Correction: p-values are adjusted using the Benjamini-Hochberg procedure to control the false discovery rate (FDR) at 5% (sketched below).
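For completeness, here is a self-contained sketch of the Benjamini-Hochberg step-up adjustment used in the last step; the function name is ours, and in practice the equivalent multipletests routine from statsmodels could be used instead.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of the
    hypotheses (gene sets) declared significant at FDR level q."""
    pvals = np.asarray(pvals)
    n = pvals.size
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, n + 1) / n
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0  # largest passing rank
    reject = np.zeros(n, dtype=bool)
    reject[order[:k]] = True
    return reject
```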

4.3. Results

The analysis reveals significant differences in covariance structures between the two groups:
  • Case 1: $m_1 = m_2 = 0$
    - BP: 208 (6.00%) gene sets are significant ($p < 0.05$ after FDR correction)
    - CC: 25 (4.38%) gene sets are significant
    - MF: 27 (3.36%) gene sets are significant
  • Case 2: $m_1 = m_2 = 2$
    - BP: 206 (5.94%) gene sets are significant ($p < 0.05$ after FDR correction)
    - CC: 28 (4.90%) gene sets are significant
    - MF: 32 (3.99%) gene sets are significant
The results indicate that the proposed method can detect differences in covariance structures in high-dimensional gene expression data. The version using training samples performs slightly better than the one without, identifying more significant gene sets in some cases. These findings suggest that incorporating training data may improve sensitivity and that accounting for covariance heterogeneity is important in genomic analyses.

5. Summary and Conclusions

This paper introduces an improved random projection framework for high-dimensional two-sample testing of both mean vectors and covariance matrices. By aligning projection directions with estimated parameters obtained from training samples, our approach enhances the power of traditional projection-based tests while maintaining the Type I error rate at the nominal level.
When no external training data are available, the method divides the observed data into training and test subsets. Although this reduces the test sample size, simulations show that the resulting power gains often outweigh this drawback. We recommend using no more than one-third of the available data as training samples.
For mean vector testing, among the proposed aggregation strategies the Ave statistic provides the most consistent performance across settings, especially for dense signals. The Max statistic is more powerful for detecting sparse alternatives, particularly in low-dimensional projections. The 95th percentile statistic offers a good balance between the two, effectively capturing both sparse and dense signals.
Two p-value-based statistics were introduced for testing the equality of two covariance matrices. Across all three models examined, the average p-value statistic Avep consistently outperformed the minimum p-value approach Minp. The gain in power from using a training sample is subtle due to the large number of unknown parameters in the covariance matrices. High-dimensional covariance matrix inference remains an open direction for future research.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the author on request.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Lopes, M.; Jacob, L.; Wainwright, M.J. A More Powerful Two-Sample Test in High Dimensions using Random Projection. In Advances in Neural Information Processing Systems 24; Shawe-Taylor, J., Zemel, R.S., Bartlett, P.L., Pereira, F., Weinberger, K.Q., Eds.; Curran Associates, Inc.: New York, NY, USA, 2011; pp. 1206–1214. [Google Scholar]
  2. Bai, Z.; Saranadasa, H. Effect of high dimension: By an example of a two sample problem. Stat. Sin. 1996, 6, 311–329. [Google Scholar]
  3. Chen, S.X.; Qin, Y.L. A two-sample test for high-dimensional data with applications to gene-set testing. Ann. Stat. 2010, 38, 808–835. [Google Scholar] [CrossRef]
  4. Srivastava, R.; Li, P.; Ruppert, D. RAPTT: An exact two-sample test in high dimensions using random projections. J. Comput. Graph. Stat. 2016, 25, 954–970. [Google Scholar] [CrossRef]
  5. Thulin, M. A high-dimensional two-sample test for the mean using random subspaces. Comput. Stat. Data Anal. 2014, 74, 26–38. [Google Scholar] [CrossRef]
  6. Chen, S.X.; Zhang, L.X.; Zhong, P.S. Tests for high-dimensional covariance matrices. J. Amer. Stat. Assoc. 2010, 105, 810–819. [Google Scholar] [CrossRef]
  7. Bai, Z.D. Convergence rate of expected spectral distributions of large random matrices. II. Sample covariance matrices. Ann. Probab. 1993, 21, 649–672. [Google Scholar] [CrossRef]
  8. Bai, Z.D.; Yin, Y.Q. Limit of the smallest eigenvalue of a large-dimensional sample covariance matrix. Ann. Probab. 1993, 21, 1275–1294. [Google Scholar] [CrossRef]
  9. Bickel, P.J.; Levina, E. Covariance regularization by thresholding. Ann. Stat. 2008, 36, 2577–2604. [Google Scholar] [CrossRef] [PubMed]
  10. Bickel, P.J.; Levina, E. Regularized estimation of large covariance matrices. Ann. Stat. 2008, 36, 199–227. [Google Scholar] [CrossRef]
  11. Li, J.; Chen, S.X. Two sample tests for high-dimensional covariance matrices. Ann. Stat. 2012, 40, 908–940. [Google Scholar] [CrossRef]
  12. Won, J.H.; Lim, J.; Kim, S.J.; Rajaratnam, B. Condition-number-regularized covariance estimation. J. R. Stat. Soc. Ser. B Stat. Methodol. 2013, 75, 427–450. [Google Scholar] [CrossRef] [PubMed]
  13. Li, P.; Hastie, T.J.; Church, K.W. Very Sparse Random Projections. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 20–23 August 2006; KDD ’06. pp. 287–296. [Google Scholar] [CrossRef]
  14. Li, P.; Owen, A.B.; Zhang, C.H. One Permutation Hashing. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; NIPS’12. pp. 3113–3121. [Google Scholar]
  15. Wu, T.L.; Li, P. Projected tests for high-dimensional covariance matrices. J. Stat. Plann. Inference 2020, 207, 73–85. [Google Scholar] [CrossRef]
  16. Härdle, W.K.; Simar, L. Applied Multivariate Statistical Analysis, 5th ed.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
Figure 1. Empirical sizes of the proposed tests for projection dimensions $k = 1, 2, \ldots, 16$, covariance matrix $\Sigma_1$, and number of projections $m = 10$.
Figure 2. Empirical sizes of the proposed tests for projection dimensions $k = 17, 18, \ldots, 32$, covariance matrix $\Sigma_1$, and number of projections $m = 10$.
Figure 3. Empirical sizes of the proposed tests for projection dimensions $k = 33, 34, \ldots, 47$, covariance matrix $\Sigma_1$, and number of projections $m = 10$.
Figure 4. Empirical sizes of the proposed tests for projection dimensions $k = 1, 2, \ldots, 16$, covariance matrix $\Sigma_2$, and number of projections $m = 100$.
Figure 5. Empirical sizes of the proposed tests for projection dimensions $k = 17, 18, \ldots, 32$, covariance matrix $\Sigma_2$, and number of projections $m = 100$.
Figure 6. Empirical sizes of the proposed tests for projection dimensions $k = 33, 34, \ldots, 47$, covariance matrix $\Sigma_2$, and number of projections $m = 100$.
Figure 7. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 1, 2, \ldots, 16$ with 75% nonzero elements of $\mu_2$.
Figure 8. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 17, 18, \ldots, 32$ with 75% nonzero elements of $\mu_2$.
Figure 9. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 33, 34, \ldots, 47$ with 75% nonzero elements of $\mu_2$.
Figure 10. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 1, 2, \ldots, 16$ with 1% nonzero elements of $\mu_2$.
Figure 11. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 17, 18, \ldots, 32$ with 1% nonzero elements of $\mu_2$.
Figure 12. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 33, 34, \ldots, 47$ with 1% nonzero elements of $\mu_2$.
Figure 13. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 1, 2, \ldots, 16$ with 75% nonzero elements of $\mu_2$.
Figure 14. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 17, 18, \ldots, 32$ with 75% nonzero elements of $\mu_2$.
Figure 15. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 33, 34, \ldots, 47$ with 75% nonzero elements of $\mu_2$.
Figure 16. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 1, 2, \ldots, 16$ with 1% nonzero elements of $\mu_2$.
Figure 17. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 17, 18, \ldots, 32$ with 1% nonzero elements of $\mu_2$.
Figure 18. Power comparison between RAPTT and our proposed tests for projection dimensions $k = 33, 34, \ldots, 47$ with 1% nonzero elements of $\mu_2$.
Figure 19. Empirical power curves for the Minp test under Model a, with training sizes ranging from 0 to 20 (step size 2) and $m = 10$.
Figure 20. Empirical power curves for the Avep test under Model a, with training sizes ranging from 0 to 20 (step size 2) and $m = 10$.
Figure 21. Empirical power curves for the Minp test under Model a, with training sizes ranging from 0 to 20 (step size 2) and $m = 100$.
Figure 22. Empirical power curves for the Avep test under Model a, with training sizes ranging from 0 to 20 (step size 2) and $m = 100$.
Table 1. Empirical power for selected values of $m_1$, with $k = 1$, under $\Sigma_1$ and $m = 10$.

| $c$ | RAPTT, m1 = 10 | RAPTT, m1 = 20 | Max, m1 = 10 | Max, m1 = 20 | Ave, m1 = 10 | Ave, m1 = 20 | 95, m1 = 10 | 95, m1 = 20 |
|-----|-------|-------|-------|-------|-------|-------|-------|-------|
| 0   | 0.040 | 0.050 | 0.046 | 0.064 | 0.042 | 0.050 | 0.046 | 0.064 |
| 0.5 | 0.072 | 0.094 | 0.086 | 0.090 | 0.096 | 0.090 | 0.086 | 0.090 |
| 1   | 0.172 | 0.148 | 0.158 | 0.136 | 0.224 | 0.168 | 0.158 | 0.136 |
| 1.5 | 0.316 | 0.306 | 0.300 | 0.240 | 0.412 | 0.358 | 0.300 | 0.240 |
| 2   | 0.480 | 0.492 | 0.448 | 0.426 | 0.600 | 0.558 | 0.448 | 0.426 |
| 2.5 | 0.676 | 0.676 | 0.650 | 0.548 | 0.784 | 0.742 | 0.650 | 0.548 |
| 3   | 0.818 | 0.792 | 0.778 | 0.716 | 0.876 | 0.852 | 0.778 | 0.716 |
| 3.5 | 0.880 | 0.870 | 0.844 | 0.806 | 0.926 | 0.924 | 0.844 | 0.806 |
| 4   | 0.932 | 0.952 | 0.936 | 0.892 | 0.974 | 0.974 | 0.936 | 0.892 |
| 4.5 | 0.978 | 0.972 | 0.964 | 0.936 | 0.992 | 0.990 | 0.964 | 0.936 |
| 5   | 0.990 | 0.984 | 0.982 | 0.978 | 0.996 | 0.996 | 0.982 | 0.978 |
| 5.5 | 0.998 | 0.994 | 0.996 | 0.996 | 1.000 | 1.000 | 0.996 | 0.996 |
| 6   | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
Table 2. Empirical power for selected values of $m_1$, with $k = 25$, under $\Sigma_2$ and $m = 100$.

| $c$ | RAPTT, m1 = 10 | RAPTT, m1 = 20 | Max, m1 = 10 | Max, m1 = 20 | Ave, m1 = 10 | Ave, m1 = 20 | 95, m1 = 10 | 95, m1 = 20 |
|-----|-------|-------|-------|-------|-------|-------|-------|-------|
| 0   | 0.064 | 0.052 | 0.044 | 0.046 | 0.058 | 0.044 | 0.042 | 0.034 |
| 0.5 | 0.212 | 0.178 | 0.112 | 0.122 | 0.222 | 0.172 | 0.212 | 0.144 |
| 1   | 0.510 | 0.388 | 0.296 | 0.216 | 0.500 | 0.384 | 0.446 | 0.316 |
| 1.5 | 0.770 | 0.632 | 0.454 | 0.380 | 0.766 | 0.632 | 0.692 | 0.552 |
| 2   | 0.888 | 0.788 | 0.664 | 0.542 | 0.898 | 0.774 | 0.858 | 0.698 |
| 2.5 | 0.972 | 0.930 | 0.796 | 0.746 | 0.974 | 0.918 | 0.942 | 0.858 |
| 3   | 0.988 | 0.982 | 0.898 | 0.874 | 0.988 | 0.982 | 0.978 | 0.952 |
| 3.5 | 1.000 | 0.994 | 0.970 | 0.920 | 0.998 | 0.990 | 1.000 | 0.974 |
| 4   | 1.000 | 0.998 | 0.996 | 0.974 | 1.000 | 0.998 | 1.000 | 0.998 |
| 4.5 | 1.000 | 1.000 | 0.996 | 0.984 | 1.000 | 1.000 | 1.000 | 0.996 |
| 5   | 1.000 | 1.000 | 0.996 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 5.5 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
| 6   | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 |
Table 3. Empirical sizes for training sizes $m_1$ from 0 to 20 (step size 2) and $m = 10$ under the three models; columns are grouped by statistic (Minp, Avep, max $F_j$), each evaluated under Models a, b, and c.

| $m_1$ | Minp, a | Minp, b | Minp, c | Avep, a | Avep, b | Avep, c | max $F_j$, a | max $F_j$, b | max $F_j$, c |
|----|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0  | 0.0420 | 0.0530 | 0.0590 | 0.0590 | 0.0580 | 0.0560 | 0.0420 | 0.0530 | 0.0590 |
| 2  | 0.0490 | 0.0520 | 0.0600 | 0.0500 | 0.0440 | 0.0410 | 0.0490 | 0.0520 | 0.0600 |
| 4  | 0.0410 | 0.0320 | 0.0450 | 0.0490 | 0.0470 | 0.0580 | 0.0410 | 0.0320 | 0.0450 |
| 6  | 0.0670 | 0.0530 | 0.0590 | 0.0640 | 0.0620 | 0.0590 | 0.0670 | 0.0530 | 0.0590 |
| 8  | 0.0500 | 0.0630 | 0.0630 | 0.0520 | 0.0600 | 0.0530 | 0.0500 | 0.0630 | 0.0630 |
| 10 | 0.0640 | 0.0710 | 0.0730 | 0.0450 | 0.0390 | 0.0430 | 0.0640 | 0.0710 | 0.0770 |
| 12 | 0.0430 | 0.0430 | 0.0420 | 0.0590 | 0.0470 | 0.0560 | 0.0430 | 0.0430 | 0.0420 |
| 14 | 0.0510 | 0.0520 | 0.0390 | 0.0490 | 0.0490 | 0.0440 | 0.0510 | 0.0520 | 0.0390 |
| 16 | 0.0620 | 0.0560 | 0.0520 | 0.0550 | 0.0610 | 0.0590 | 0.0620 | 0.0560 | 0.0520 |
| 18 | 0.0390 | 0.0440 | 0.0440 | 0.0480 | 0.0400 | 0.0550 | 0.0390 | 0.0440 | 0.0440 |
| 20 | 0.0480 | 0.0430 | 0.0480 | 0.0430 | 0.0450 | 0.0420 | 0.0480 | 0.0430 | 0.0480 |
Table 4. Empirical sizes for training sizes $m_1$ from 0 to 20 (step size 2) and $m = 100$ under the three models; columns as in Table 3.

| $m_1$ | Minp, a | Minp, b | Minp, c | Avep, a | Avep, b | Avep, c | max $F_j$, a | max $F_j$, b | max $F_j$, c |
|----|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0  | 0.0400 | 0.0460 | 0.0430 | 0.0610 | 0.0530 | 0.0500 | 0.0400 | 0.0460 | 0.0430 |
| 2  | 0.0660 | 0.0630 | 0.0610 | 0.0640 | 0.0480 | 0.0530 | 0.0660 | 0.0630 | 0.0610 |
| 4  | 0.0460 | 0.0330 | 0.0420 | 0.0720 | 0.0580 | 0.0660 | 0.0460 | 0.0330 | 0.0420 |
| 6  | 0.0440 | 0.0510 | 0.0490 | 0.0740 | 0.0470 | 0.0570 | 0.0440 | 0.0510 | 0.0490 |
| 8  | 0.0540 | 0.0420 | 0.0490 | 0.0770 | 0.0420 | 0.0440 | 0.0540 | 0.0420 | 0.0490 |
| 10 | 0.0620 | 0.0630 | 0.0650 | 0.0720 | 0.0580 | 0.0580 | 0.0620 | 0.0630 | 0.0650 |
| 12 | 0.0390 | 0.0610 | 0.0540 | 0.0760 | 0.0600 | 0.0670 | 0.0390 | 0.0610 | 0.0540 |
| 14 | 0.0590 | 0.0590 | 0.0540 | 0.0700 | 0.0490 | 0.0600 | 0.0590 | 0.0590 | 0.0540 |
| 16 | 0.0450 | 0.0380 | 0.0380 | 0.0800 | 0.0530 | 0.0670 | 0.0450 | 0.0380 | 0.0380 |
| 18 | 0.0420 | 0.0390 | 0.0460 | 0.0790 | 0.0610 | 0.0620 | 0.0420 | 0.0390 | 0.0460 |
| 20 | 0.0560 | 0.0510 | 0.0580 | 0.0820 | 0.0580 | 0.0520 | 0.0560 | 0.0510 | 0.0580 |
Table 5. Empirical powers for training sizes $m_1$ from 0 to 20 (step size 2) and $m = 10$ under the three models; columns as in Table 3.

| $m_1$ | Minp, a | Minp, b | Minp, c | Avep, a | Avep, b | Avep, c | max $F_j$, a | max $F_j$, b | max $F_j$, c |
|----|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0  | 0.2380 | 0.1150 | 0.2280 | 0.5860 | 0.2400 | 0.5970 | 0.2340 | 0.1100 | 0.2560 |
| 2  | 0.2090 | 0.1060 | 0.2100 | 0.6280 | 0.2790 | 0.5930 | 0.2280 | 0.1370 | 0.2730 |
| 4  | 0.2320 | 0.1140 | 0.2410 | 0.6040 | 0.2590 | 0.5790 | 0.2130 | 0.0950 | 0.2210 |
| 6  | 0.2000 | 0.0930 | 0.2010 | 0.5070 | 0.2120 | 0.5210 | 0.2190 | 0.1330 | 0.2310 |
| 8  | 0.2300 | 0.1040 | 0.2350 | 0.5730 | 0.2490 | 0.5510 | 0.2320 | 0.1270 | 0.2360 |
| 10 | 0.1660 | 0.0860 | 0.1520 | 0.5520 | 0.2600 | 0.5660 | 0.2170 | 0.1110 | 0.2360 |
| 12 | 0.1810 | 0.0880 | 0.1980 | 0.5000 | 0.2010 | 0.5150 | 0.1920 | 0.0970 | 0.1980 |
| 14 | 0.1790 | 0.1150 | 0.2090 | 0.4620 | 0.2010 | 0.4910 | 0.1860 | 0.1190 | 0.2160 |
| 16 | 0.1970 | 0.1040 | 0.2130 | 0.5050 | 0.2230 | 0.4570 | 0.2190 | 0.1170 | 0.2040 |
| 18 | 0.1600 | 0.0990 | 0.1800 | 0.4430 | 0.1840 | 0.4280 | 0.1390 | 0.0880 | 0.1570 |
| 20 | 0.1580 | 0.1000 | 0.1700 | 0.3960 | 0.1760 | 0.4370 | 0.1690 | 0.0800 | 0.1820 |
Table 6. Empirical powers for training sizes $m_1$ from 0 to 20 (step size 2) and $m = 100$ under the three models; columns as in Table 3.

| $m_1$ | Minp, a | Minp, b | Minp, c | Avep, a | Avep, b | Avep, c | max $F_j$, a | max $F_j$, b | max $F_j$, c |
|----|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 0  | 0.2970 | 0.1300 | 0.3150 | 0.9990 | 0.8050 | 0.9980 | 0.3210 | 0.1560 | 0.3330 |
| 2  | 0.2890 | 0.1390 | 0.3120 | 0.9990 | 0.8350 | 0.9990 | 0.3720 | 0.1730 | 0.3760 |
| 4  | 0.3490 | 0.1720 | 0.4120 | 0.9970 | 0.7970 | 0.9970 | 0.2860 | 0.1190 | 0.3030 |
| 6  | 0.2840 | 0.1180 | 0.3070 | 0.9980 | 0.7910 | 0.9990 | 0.2510 | 0.1090 | 0.2670 |
| 8  | 0.2410 | 0.1330 | 0.2860 | 0.9960 | 0.7840 | 0.9990 | 0.2620 | 0.1340 | 0.2760 |
| 10 | 0.2690 | 0.1490 | 0.2730 | 0.9950 | 0.7600 | 0.9970 | 0.2910 | 0.1370 | 0.3040 |
| 12 | 0.1880 | 0.0940 | 0.2220 | 0.9950 | 0.7770 | 0.9960 | 0.2850 | 0.1450 | 0.2840 |
| 14 | 0.2590 | 0.1210 | 0.2790 | 0.9900 | 0.7500 | 0.9940 | 0.2470 | 0.1260 | 0.2540 |
| 16 | 0.2440 | 0.1200 | 0.2630 | 0.9860 | 0.7320 | 0.9900 | 0.2170 | 0.1100 | 0.2150 |
| 18 | 0.2350 | 0.1200 | 0.2430 | 0.9840 | 0.6610 | 0.9910 | 0.2140 | 0.1030 | 0.2310 |
| 20 | 0.2310 | 0.1380 | 0.2350 | 0.9760 | 0.6630 | 0.9890 | 0.2360 | 0.1140 | 0.2260 |