Mean Equality Tests for High-Dimensional and Higher-Order Data with k-Self Similar Compound Symmetry Covariance Structure

An extension of the D^2 test statistic to test the equality of means for high-dimensional and k-th order array-variate data using the k-self similar compound symmetry (k-SSCS) covariance structure is derived. Data of k-th order appear in many scientific fields, including agricultural, medical, environmental and engineering applications. We discuss a key property of this k-SSCS covariance structure, namely, that it forms a Jordan algebra. We formally show that our D^2 test statistic for k-th order data generalizes the corresponding D^2 test statistics for second-order and third-order data. We also derive the D^2 test statistic for third-order data and illustrate its application using a medical dataset from a clinical trial study of the eye disease glaucoma. The new test statistic is very efficient for high-dimensional data, where estimation of the unstructured variance-covariance matrix is not feasible owing to small sample size.


Introduction
We study hypothesis testing problems for the equality of means for high-dimensional and higher-order (multi-dimensional array) data. Standard multivariate techniques such as Hotelling's T^2 statistic with one big unstructured variance-covariance matrix (assuming a large sample size) do not work for these higher-order data, as Hotelling's T^2 cannot incorporate any higher-order information into the test statistic and thus draws wrong conclusions [1]. Higher-order data are formed by representing the additional associations that are inherent in repetition across several dimensions. To obtain a better understanding of higher-order data, we first share a simple and interesting example of higher-order data:

• Traditional multivariate (vector-variate) data are first-order data. For example, consider a clinical trial study of glaucoma, where several factors (m_1), such as intraocular pressure (IOP) and central corneal thickness (CCT), are effective in the diagnosis of glaucoma. This is an illustration of (m_1 × 1) vector-variate first-order data.

• When first-order data are measured at various locations/sites or time points, the data become two-dimensional matrix-variate data, which we call second-order data. These data are also recognized as multivariate repeated measures data, or doubly multivariate data; e.g., multivariate spatial data or multivariate temporal data. In the above clinical trial study, an ophthalmologist or optometrist diagnoses glaucoma by measuring IOP and CCT (m_1) in both eyes (m_2). So, we see how the vector-variate first-order dataset discussed in the previous paragraph becomes an (m_1 × m_2) matrix-variate second-order dataset by measuring the m_1 variables repeatedly over another dimension.
• When the second-order data are measured at various sites, or over various time points, the data become three-dimensional array-variate data, which we call third-order data. These are also recognized as triply multivariate data, e.g., multivariate spatio-temporal data or multivariate spatio-spatio data. In the previous example, if the IOP and CCT (m_1 = 2) are measured in both eyes (m_2 = 2) as well as over, say, three time points (m_3 = 3), the dataset becomes third-order data.

• When the third-order data are measured in various directions, the data become four-dimensional (m_1 × m_2 × m_3 × m_4) array-variate fourth-order data, e.g., multivariate directo-spatio-temporal data or multivariate directo-spatio-spatio data.

• When the fourth-order data are measured at various depths, the data become five-dimensional (m_1 × m_2 × m_3 × m_4 × m_5) array-variate fifth-order data, and so on, e.g., multivariate deptho-directo-spatio-temporal data.
In the above glaucoma data example, the dataset has m_1 variables, and these m_1 variables are repeatedly measured over various dimensions, adding higher-order information to the dataset. Now, the question is: what is the higher-order information? Higher-order information is embedded in the higher-order covariance structures that capture the additional associations inherent in the repetition of the variables across several dimensions. The other question is: how can we measure and capture the higher-order information? For this, one needs to understand how to read these structured higher-order data and how to use an appropriate variance-covariance structure to incorporate the higher-order information that is integral to the higher-order data.
Higher-order data have been studied by many authors in the last 20 years using various variance-covariance structures to reduce the number of unknown parameters, which is very important for high-dimensional data. Second-order data have been studied using the matrix-variate normal distribution [2,3]. Second-order data can also be analyzed vectorially using a two-separable (Kronecker product) variance-covariance structure [4,5], or a block compound symmetry (BCS) structure, also named a block exchangeable (BE) or 2-SSCS covariance structure [6]. The two-separable covariance structure for second-order data has two covariance matrices, one for each order of the data; in other words, one covariance matrix for within-subject information and the other for between-subject information. Combining the covariance structures of within-subject and between-subject information results in a second-order model for second-order data. Ignoring this information often distorts the test statistic, and if it is not properly taken care of, the test statistic will end up yielding wrong conclusions [1]. To obtain a picture of third-order data, see [7]. Manceur and Dutilleul [7] used the tensor normal distribution with a double separable covariance structure, and also studied fourth-order data with a four-separable covariance structure. The 2-SSCS and 3-SSCS covariance structures are useful tools for the analyses of second- and third-order datasets, respectively. In the same way, k-th order data can be analyzed vectorially with some structured variance-covariance matrix to integrate the higher-order information into the model, e.g., a k-separable covariance structure [8,9] for k-th order data. However, the k-separable covariance structure may not be appropriate for all datasets; thus, we investigate another structure, namely, the k-SSCS covariance structure (defined in Section 3), for k-th order data in this article. See [10].
High-dimensional datasets require exploiting the structural properties of the data to reduce the number of estimated parameters and reach more accurate conclusions for k-th order data, and the k-SSCS covariance structure is one such property. For example, for the third-order glaucoma data, the number of unknown parameters in the (12 × 12)-dimensional unstructured variance-covariance matrix is 78, whereas the number of unknown parameters for the 3-SSCS covariance structure is just 9, which may thus help in providing the correct information about the true association of the structured third-order data. The data quickly become high-dimensional as the order of the data increases, and thus the variance-covariance matrix becomes singular for small samples, making testing of the mean impossible. This necessitates the development of new statistical methods with a suitable structured variance-covariance matrix, which can integrate the existing correlation information of the higher-order data into the test statistic and can handle the high-dimensionality of the data as well.
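The parameter-count comparison above is easy to reproduce numerically. The helper functions below are our own illustration (not from the paper): they compute p_{1,k}(p_{1,k} + 1)/2 free parameters for the unstructured matrix and k·m_1(m_1 + 1)/2 for the k-SSCS structure.

```python
# Illustrative comparison of parameter counts: unstructured covariance vs.
# the k-SSCS structure. Function names are ours, not the paper's.
import math

def unstructured_params(dims):
    p = math.prod(dims)            # p_{1,k} = m_1 * m_2 * ... * m_k
    return p * (p + 1) // 2        # p(p + 1)/2 free parameters

def ksscs_params(dims):
    k, m1 = len(dims), dims[0]     # dims = (m_1, m_2, ..., m_k)
    return k * m1 * (m1 + 1) // 2  # k component matrices, each m1 x m1 symmetric

# Third-order glaucoma data: m1 = 2 variables, m2 = 2 eyes, m3 = 3 time points
print(unstructured_params([2, 2, 3]))  # 78
print(ksscs_params([2, 2, 3]))         # 9
```

The gap widens rapidly with the order k, which is exactly why the structured matrix remains estimable for small samples.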
Rao [11,12] introduced the 2-SSCS covariance structure while classifying genetically different groups. Olkin and Press [13] examined a circular stationary model. The problem of estimation in balanced multilevel models with a block circular symmetric covariance structure was studied by Liang et al. [14]. Olkin [15] studied the hypothesis testing problem of the equality of the mean vectors of multiple populations of second-order data using a 2-SSCS covariance structure, which is reminiscent of a model of Wilks [16]. Arnold [17] studied normal testing problems in which the mean is patterned and the variance-covariance matrix has a 2-SSCS structure, as well as the multivariate analysis of variance problem under a 2-SSCS variance-covariance structure. Arnold [18] later developed linear models with a 2-SSCS structure as the error matrix for one matrix-variate observation. Roy et al. [19] and Žežula et al. [1] studied hypothesis testing problems on the mean for second-order data using a 2-SSCS covariance structure. There are a few studies on third-order data using the 3-SSCS covariance structure. See Leiva and Roy [20] for classification problems and Roy and Fonseca [21] for linear models with a 3-SSCS covariance structure on the error vectors. Recently, Žežula et al. [22] studied the mean value test for third-order data using a 3-SSCS covariance structure.
A majority of the above-mentioned authors studied only second-order matrix-variate data and used a 2-SSCS covariance structure, where the exchangeability (invariance) property is present in one factor. However, we obtain datasets these days with more than one factor, and the assumption of exchangeability on the levels of the factors is appropriate for these datasets. A k-SSCS structured matrix results from the exchangeability property of the k − 1 factors of a dataset. Employing a 2-SSCS covariance structure would be wrong for datasets with more than one factor. One may construct a second-order dataset from a k-th order dataset by summing the observations; however, this would result in a loss of detailed information on particular characteristics that may be of interest. One may also consider matricization of the k-th order data to second-order data and then use the 2-SSCS covariance structure, but then, once again, all the correlation information will be wiped out. So, the development of new statistical methods is in demand to handle k-th order data using a k-SSCS variance-covariance matrix.
The aim of this paper is to derive a test statistic for the mean for high-dimensional k-th order data using the k-SSCS covariance matrix by generalizing the D^2 test statistic developed in Žežula et al. [1]. In doing so, we exploit the distributions of the eigenblocks of the k-SSCS covariance matrix. We obtain test statistics for testing the mean in the one-sample case, the paired-samples case and the two independent samples case. We show through Remark 2 that our generalized D^2 test statistic for k-th order data generalizes the test statistic for second-order data, and we derive the test statistic for third-order data, which is largely motivated by the work of Žežula et al. [22] as well as by Remark 2.
This article is organized as follows. In Section 2, we set up some preliminaries about matrix notations and definitions related to block matrices. Section 3 defines the k-SSCS covariance matrix and presents its properties, such as its Jordan algebra structure. Section 4 discusses the estimation of the eigenblocks and their distributions. The test for the mean of one population is proposed in Section 5. Tests for the equality of means of two populations are proposed in Section 6, and an example of a dataset exemplifying our proposed method is presented in Section 7. Finally, Section 8 concludes with some discussion and the scope for future research.
Definition 1. We say that a matrix A^k of dimension p_{1,k} × p_{1,k} is a k-th order block matrix according to the factorization p_{1,k} = ∏_{g=1}^{k} m_g to point out that it can be expressed in k different "natural" partitioned matrix forms. Note that for the case j = 0, the matrix A is a (k × k)-dimensional matrix with 1 × 1 blocks. Clearly, both m_1 ≥ 2 and m_2 ≥ 2 for second-order data; m_1 ≥ 2, m_2 ≥ 2 and m_3 ≥ 2 for third-order data; and so on. Next, we define matrix operators that will be useful tools for working with these k-th order block matrices, where k ≥ 2. Let M_{p_{1,g}} denote the set of p_{1,g} × p_{1,g} matrices.
Definition 2. Let BS_{p_{1,g}} and BT_{p_{1,g}} denote the p_{1,g}-Sum and p_{1,g}-Trace block operators from M_{p_{1,h}} to M_{p_{1,g}} for 1 ≤ g ≤ h ≤ k, respectively, where M_{p_{1,h}} will always be evident from the context. These block operators applied to a matrix G^k of dimension p_{1,g} p_{g+1,h} × p_{1,g} p_{g+1,h} give p_{1,g} × p_{1,g} matrices. The subindex p_{1,g} in these block matrix operators represents the p_{1,g} × p_{1,g}-dimensional blocks in a partitioned square matrix G, and thus their use results in p_{1,g} × p_{1,g}-dimensional matrices. Many useful properties of these block operators, which we will use later in this article, are examined in Leiva and Roy [10]. For any natural number a > 1, we use the additional notations P_a = (1/a) J_a and Q_a = I_a − P_a, where J_a = 1_a 1_a', with 1_a the a × 1 vector of ones, and I_a = [e_{a,1}, · · · , e_{a,a}] the a × a identity matrix with e_{a,i} the i-th column vector of I_a. Observe that P_a and Q_a are idempotent matrices and mutually orthogonal to each other, that is: (P_a)^2 = P_a, (Q_a)^2 = Q_a and P_a Q_a = 0.
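The stated properties of P_a and Q_a are immediate to verify numerically. A minimal sketch, assuming the standard definitions P_a = (1/a)J_a and Q_a = I_a − P_a:

```python
# Verify that P_a = (1/a) J_a and Q_a = I_a - P_a are idempotent and
# mutually orthogonal projectors; a = 4 is an arbitrary illustration.
import numpy as np

a = 4
J = np.ones((a, a))                 # J_a = 1_a 1_a'
P = J / a                           # averaging projector P_a
Q = np.eye(a) - P                   # centering projector Q_a

assert np.allclose(P @ P, P)        # (P_a)^2 = P_a
assert np.allclose(Q @ Q, Q)        # (Q_a)^2 = Q_a
assert np.allclose(P @ Q, 0)        # P_a Q_a = 0
```

These two projectors decompose the identity, P_a + Q_a = I_a, which is what makes the orthogonal decompositions used later possible.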
For a fixed natural number k ≥ 2, let R_{k,j} be the p_{1,k} × p_{1,k} matrix defined, for each j = 1, . . . , k − 1, in terms of Kronecker products, where the symbol ⊗ represents the Kronecker product operator.

Properties of the Self Similar Compound Symmetry Covariance Matrix
Let x_{r;f} be an m_1-variate vector of measurements on the r-th replicate (individual) at the f = (f_k, . . . , f_2) ∈ F = F_{k,2} = F_k × · · · × F_2 factor combination. Let x_r be the p_{1,k} = ∏_{j=1}^{k} m_j-variate vector of all measurements corresponding to the r-th sample unit of the population, that is, x_r = (x'_{r;1,...,1}, . . . , x'_{r;m_k,...,m_2})'. Thus, the unstructured covariance matrix Γ_k has q = p_{1,k}(p_{1,k} + 1)/2 unknown parameters, which can be large even for moderate values of the m_j's. Consequently, if the data are high-dimensional, the k-SSCS covariance matrix (defined below in Definition 3) with k m_1(m_1 + 1)/2 unknown parameters is a good choice if the exchangeable feature is present in the data.

Definition 3. We say that x_r has a k-SSCS covariance matrix if Γ_k = cov[x_r] is of the form given in (8), where U_{k,j}, for j = 1, . . . , k, are m_1 × m_1 matrices called SSCS-component matrices, with the assumption that J_{p_{2,1}} is equal to the real number 1.
An alternative expression to (8) is (9), with (10) and T_{k,j} = U_{k,j} − U_{k,j+1} for j = 1, . . . , k − 1 (11), or equivalently U_{k,k} = T_{k,k}.

The covariance matrix Γ_k given in (8) is called the k-self similar compound symmetry covariance matrix for the following reason: consider the p_{1,k}-dimensional vector x = (x'_{1,...,1}, . . . , x'_{m_k,...,m_2})' with a k-SSCS covariance matrix Γ_k, and, for each fixed g ∈ {2, . . . , k − 1}, consider the partition of x into p_{1,g}-subvectors. Then, its corresponding covariance matrix Γ_k is partitioned into p_{1,g} × p_{1,g} submatrices and is a (k + 1 − g)-SSCS matrix Γ*_{k+1−g} (see Leiva and Roy [10]), where U*_{k+1−g,1} is the corresponding g-SSCS matrix.

The existence of Γ_k^{-1} can be proved, and its expression derived, using the principle of mathematical induction. For the expression of Γ_k^{-1}, we need the matrices ∆_{k,j} for j = 1, . . . , k, which are defined in terms of the U_{k,j}'s in (12) and (13), where U_{k,k+1} = 0 and p_{2,1} = 1. It can be proved that if the matrices ∆_{k,j}, j = 1, . . . , k, are non-singular, then Γ_k^{-1} exists; see Leiva and Roy [10], where the symbol ∆_{k,0}^{-1} indicates the m_1 × m_1 zero matrix (∆_{k,0}^{-1} = 0_{m_1 × m_1}). It is worthwhile to note that the structure of Γ_k^{-1} is the same as the structure of Γ_k, that is, it has the k-SSCS structure given in (9) with (10) and (11), with T_{k,j} suitably modified in the formula for Γ_k^{-1}.

Using a similar inductive argument, it can be proved that the matrices ∆_{k,j} given by (12), with p_{k+1,k} = 1 and p_{k+2,k} = 0, are the k eigenblocks of the k-SSCS covariance structure. See Lemma 4 of Leiva and Roy [10] for the proof. The matrix Γ_k can be written as the sum of k orthogonal parts as in (14), and if Γ_k^{-1} exists, then it can be written as in (15), where R*_{k,j+1} is given in (5) for each j = 1, . . . , k − 1 and, for j = k, R*_{k,k+1} is given in (7).
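For intuition, the simplest case k = 2 (the BCS structure) can be written as Γ_2 = I_{m_2} ⊗ (U_{2,1} − U_{2,2}) + J_{m_2} ⊗ U_{2,2}, which places U_{2,1} on the diagonal blocks and U_{2,2} everywhere else. A minimal numerical sketch, with arbitrarily chosen component matrices:

```python
# Build a 2-SSCS (BCS) covariance matrix from two illustrative m1 x m1
# component matrices and check its block pattern.
import numpy as np

m1, m2 = 2, 3
U1 = np.array([[4.0, 1.0], [1.0, 3.0]])   # U_{2,1}: diagonal blocks
U2 = np.array([[1.0, 0.5], [0.5, 1.0]])   # U_{2,2}: off-diagonal blocks

Gamma = np.kron(np.eye(m2), U1 - U2) + np.kron(np.ones((m2, m2)), U2)

assert np.allclose(Gamma, Gamma.T)             # symmetric
assert np.allclose(Gamma[:m1, :m1], U1)        # a diagonal block equals U_{2,1}
assert np.allclose(Gamma[:m1, m1:2*m1], U2)    # an off-diagonal block equals U_{2,2}
```

The same construction iterates for k > 2, with one new component matrix per additional order.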
The conventional Hotelling's T^2 statistic for testing the mean is based on the unbiased estimate of the unstructured variance-covariance matrix, which follows a Wishart distribution. Nevertheless, the unbiased estimate of the k-SSCS covariance matrix does not follow a Wishart distribution, and thus the statistic for testing the equality of means does not follow Hotelling's T^2 distribution. We therefore make a canonical transformation of the data to block diagonalize the k-SSCS covariance matrix, show that scalar multiples of the estimates of the diagonal blocks (eigenblocks) follow independent Wishart distributions, and use this property to our advantage to obtain test statistics for the mean for k-th order data (k ≥ 2). We see from Leiva and Roy [10] that the k-SSCS matrix Γ_k given by (8) can be transformed into an m_1 × m_1-block diagonal matrix (the blocks on the diagonal are m_1 × m_1 matrices) by pre- and post-multiplying Γ_k by appropriate orthogonal matrices.
For 1 ≤ j < g < i ≤ k, let I_{m_i,m_j} denote the m_i m_{i−1} · · · m_j-dimensional identity matrix. Let H_{m_h} be an m_h × m_h Helmert matrix for each h = 2, . . . , k, i.e., each H_{m_h} is an orthogonal matrix whose first column is proportional to 1_{m_h}. Then L_h, defined in (16), is an orthogonal matrix (note that the L_h are not functions of any of the U_i's). Lemma 4 of Leiva and Roy [10] states and proves the block diagonalization result for the k-SSCS matrix Γ_k using the orthogonal matrix L_k as defined in (16), where the diagonal m_1 × m_1 blocks D_{f_k,...,f_2} are given in terms of the ∆_{k,j}, and f_{k+1} = 1 is not taken into consideration. Thus, ∆_{k,j} for j = 1, . . . , k are the k eigenblocks of the k-SSCS covariance matrix Γ_k. We will obtain the estimators of the eigenblocks ∆_{k,j}, j = 1, . . . , k in Section 4. In the following section, we briefly discuss that the k-SSCS covariance structure is of the Jordan algebra type.
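The block diagonalization can be checked numerically in the k = 2 case: with L_2 = H_{m_2} ⊗ I_{m_1}, the transform L_2' Γ_2 L_2 is block diagonal, with one eigenblock U_1 + (m_2 − 1)U_2 and m_2 − 1 copies of U_1 − U_2. A sketch under these assumptions, using our own Helmert construction and arbitrary component matrices:

```python
# Block-diagonalize a 2-SSCS matrix with a Helmert-based orthogonal matrix
# L = H ⊗ I; the component matrices are illustrative.
import numpy as np

def helmert(n):
    """n x n orthogonal Helmert matrix; first column proportional to 1_n."""
    H = np.zeros((n, n))
    H[:, 0] = 1.0 / np.sqrt(n)
    for j in range(1, n):
        H[:j, j] = 1.0 / np.sqrt(j * (j + 1))
        H[j, j] = -j / np.sqrt(j * (j + 1))
    return H

m1, m2 = 2, 3
U1 = np.array([[4.0, 1.0], [1.0, 3.0]])
U2 = np.array([[1.0, 0.5], [0.5, 1.0]])
Gamma = np.kron(np.eye(m2), U1 - U2) + np.kron(np.ones((m2, m2)), U2)

L = np.kron(helmert(m2), np.eye(m1))
D = L.T @ Gamma @ L                                   # block-diagonal result

assert np.allclose(D[:m1, :m1], U1 + (m2 - 1) * U2)   # eigenblock appearing once
assert np.allclose(D[m1:2*m1, m1:2*m1], U1 - U2)      # eigenblock repeated m2 - 1 times
assert np.allclose(D[m1:, :m1], 0)                    # off-diagonal blocks vanish
```

Because the first Helmert column is proportional to 1_{m_2}, it isolates the "averaged" eigenblock, while the remaining columns produce the repeated eigenblock.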

k-SSCS Covariance Structure Is of the Jordan Algebra Type
The k-SSCS covariance structure is of the Jordan algebra type (Jordan et al. [23]). Let G_{p_{1,k}} be the set of all k-SSCS p_{1,k} × p_{1,k} matrices. It is clear that, under the usual matrix addition and scalar multiplication, G_{p_{1,k}} is a subspace of the linear vector space S_{p_{1,k}} of the p_{1,k} × p_{1,k} symmetric matrices. For any natural number k ≥ 2, it is easy to prove the following proposition:

Proposition 1. If Γ_k is a k-SSCS matrix given by (9), or equivalently by (8), then Γ_k^2 = Γ_k Γ_k is also a k-SSCS matrix.

Therefore, we conclude that G_{p_{1,k}} is a Jordan algebra. See Lemma 4.1 on Page 10 in Malley [24], which states that G_{p_{1,k}} is a Jordan algebra if and only if Γ_k^2 = Γ_k Γ_k ∈ G_{p_{1,k}} for all Γ_k ∈ G_{p_{1,k}}. See Roy et al. [25] and Kozioł et al. [26] for proofs that the 2-SSCS and 3-SSCS covariance structures are of Jordan algebra type.
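Proposition 1 can be illustrated numerically in the k = 2 case: squaring a 2-SSCS matrix again yields equal diagonal blocks and equal off-diagonal blocks. A sketch with arbitrary component matrices:

```python
# Check numerically that the square of a 2-SSCS matrix is again 2-SSCS:
# all diagonal blocks of Gamma^2 coincide, as do all off-diagonal blocks.
import numpy as np

m1, m2 = 2, 3
U1 = np.array([[4.0, 1.0], [1.0, 3.0]])
U2 = np.array([[1.0, 0.5], [0.5, 1.0]])
Gamma = np.kron(np.eye(m2), U1 - U2) + np.kron(np.ones((m2, m2)), U2)

G2 = Gamma @ Gamma
block = lambda i, j: G2[i*m1:(i+1)*m1, j*m1:(j+1)*m1]

for i in range(m2):
    for j in range(m2):
        target = block(0, 0) if i == j else block(0, 1)
        assert np.allclose(block(i, j), target)   # Gamma^2 keeps the 2-SSCS pattern
```

Closure under squaring is exactly the Malley criterion cited above, so this small check mirrors the algebraic argument.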

Estimators of the Eigenblocks
Let x_r : r = 1, . . . , n be p_{1,k} × 1 random vectors partitioned into m_1 × 1 subvectors. The vectors x_r : r = 1, . . . , n are a random sample from a population with distribution N_{p_{1,k}}(µ; Γ_k), where Γ_k is a positive definite k-SSCS structured covariance matrix as given in (8) in Definition 3. Let X be the n × p_{1,k} sample data matrix.

In this section, we prove that certain unbiased estimators (to be defined) of the matrix parameters U_{k,j} : j = 1, . . . , k can be written as functions of the usual sample variance-covariance matrix S, where Q_n is given in (2) with (3). The sample mean x̄ and the blocks S_{f,f*} of S can be expressed accordingly. Since S is an unbiased estimator of Γ_k, to find a better unbiased estimator of U_{k,j} we average, for each j, all the random matrices that are unbiased estimators of the same U_{k,j}. The unbiased estimators Û_{k,j} of U_{k,j} for each j = 1, . . . , k are derived in Lemma 5 in Leiva and Roy [10], with q_{k,j} defined in Lemma 3 in Leiva and Roy [10].

Unbiased estimators of the eigenblocks ∆_{k,j} can be obtained from (13). Then, using (14), the unbiased estimator of Γ_k can be obtained as the corresponding orthogonal sum, and if Γ_k^{-1} exists, its estimator can be obtained from (15), where R*_{k,j+1} is given in (5) for each j = 1, . . . , k − 1 and, for j = k, R*_{k,k+1} is given in (7). The computation of the unbiased estimates of the component matrices U_{k,j} for each j = 1, . . . , k is easy, as all of them have explicit solutions. At this point, we want to mention that for the k-separable covariance structure the estimation of the component matrices is not easy, as the MLEs satisfy implicit equations and therefore are not tractable analytically.

Now, from Theorem 1 of Leiva and Roy [10], with R_{k,j+1} = R*_{k,j+1} ⊗ I_{m_1} given by (4) with (5) and (6) with (7), we see that multiples of the unbiased estimators of the eigenblocks ∆̂_{k,j}, for each j = 1, . . . , k, are independent and have Wishart distributions as follows: (n − 1)p_{j+2,k}(m_{j+1} − 1) ∆̂_{k,j} ∼ W_{m_1}((n − 1)p_{j+2,k}(m_{j+1} − 1); ∆_{k,j}) for j = 1, . . . , k − 1, and (n − 1) ∆̂_{k,k} ∼ W_{m_1}(n − 1; ∆_{k,k}).
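To make the estimation concrete, the sketch below simulates second-order (k = 2) data with a 2-SSCS covariance, block-diagonalizes the sample covariance S with L = H_{m_2} ⊗ I_{m_1}, and averages the diagonal blocks that estimate the same eigenblock. All matrices, seeds and sample sizes are illustrative, not from the paper:

```python
# Estimate the eigenblocks of a 2-SSCS covariance from simulated data by
# block-diagonalizing the sample covariance; illustrative values throughout.
import numpy as np

def helmert(n):
    """n x n orthogonal Helmert matrix; first column proportional to 1_n."""
    H = np.zeros((n, n))
    H[:, 0] = 1.0 / np.sqrt(n)
    for j in range(1, n):
        H[:j, j] = 1.0 / np.sqrt(j * (j + 1))
        H[j, j] = -j / np.sqrt(j * (j + 1))
    return H

rng = np.random.default_rng(0)
m1, m2, n = 2, 3, 500
U1 = np.array([[4.0, 1.0], [1.0, 3.0]])
U2 = np.array([[1.0, 0.5], [0.5, 1.0]])
Gamma = np.kron(np.eye(m2), U1 - U2) + np.kron(np.ones((m2, m2)), U2)

X = rng.multivariate_normal(np.zeros(m1 * m2), Gamma, size=n)
S = np.cov(X, rowvar=False)                 # unbiased sample covariance
L = np.kron(helmert(m2), np.eye(m1))
D = L.T @ S @ L                             # approximately block-diagonal

Delta2_hat = D[:m1, :m1]                    # estimates U1 + (m2 - 1) U2
Delta1_hat = sum(D[j*m1:(j+1)*m1, j*m1:(j+1)*m1]
                 for j in range(1, m2)) / (m2 - 1)   # averages m2 - 1 blocks
```

Averaging the repeated blocks is what gives ∆̂_{k,j} the larger Wishart degrees of freedom quoted above, while the once-occurring block retains only n − 1.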
The critical values of this distribution can be obtained using simulations. However, the LH-trace distribution is usually approximated by an F distribution, and we use here the second approximation suggested in McKeon [27]. For the j-th case, i.e., for j = 1, . . . , k − 1, let us use the notations m^{(j)} = d_j − m_1 − 1 = (n − 1)p_{j+2,k}(m_{j+1} − 1) − m_1 − 1 and k^{(j)} = p_{j+2,k}(m_{j+1} − 1). Finally, for j = k, the distribution is the usual Hotelling T^2_{1,n−1}, that is, it has the exact distribution ((n − 1)m_1/(n − m_1)) F(m_1, n − m_1).
This means that the distribution of D^2 can be approximated by the convolution of the above k distributions (k − 1 approximated F distributions and one exact F distribution), where its critical values are obtained by the method suggested by Dyer [28].
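As noted, critical values of such a convolution can also be obtained by simulation. The sketch below draws a sum of independent scaled F variates and reads off an empirical quantile; the scale constants and degrees of freedom are placeholders for illustration, not McKeon's constants for any particular design.

```python
# Monte Carlo (1 - alpha)-quantile of a sum of independent scaled F variates,
# an approximation device for the null distribution of D^2. The scales and
# degrees of freedom below are illustrative placeholders.
import numpy as np

def mc_critical_value(scales, dfs, alpha=0.05, reps=200_000, seed=1):
    """Quantile of sum_j scales[j] * F(dfs[j][0], dfs[j][1]) at level 1 - alpha."""
    rng = np.random.default_rng(seed)
    total = np.zeros(reps)
    for c, (d1, d2) in zip(scales, dfs):
        total += c * rng.f(d1, d2, size=reps)   # add one independent component
    return np.quantile(total, 1.0 - alpha)

# e.g., three components, mimicking the third-order (k = 3) case
crit = mc_critical_value([1.0, 1.0, 1.0], [(2, 80), (2, 50), (2, 28)])
```

In practice one would plug in the approximating degrees of freedom and scales implied by the McKeon approximation for the design at hand.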
We will now discuss some special cases of the D^2 statistic in the following remark.
[1] for multivariate repeated measures data (second-order data) with a 2-SSCS or BCS covariance structure. Therefore, we can say that our mean test statistic in this article is an extension, or generalization, of Žežula et al.'s [1] mean test statistic to k-th order data with a k-SSCS covariance structure.
[1]. We will discuss the distribution of D^2 under H_0 for third-order data in detail in the following section.

Distribution of the Statistic D^2 under H_0 for Third-Order Data with a 3-SSCS Covariance Structure
This section is adapted from the work of Žežula et al. [22]. However, we use a much simpler, straightforward approach so that practitioners or analysts can appreciate and apply the method easily. Let L_3 = H_{m_3} ⊗ H_{m_2} ⊗ I_{m_1} = H_{m_3,m_2} ⊗ I_{m_1} be a matrix such that, for each j = 2, 3, H_{m_j} is an (m_j × m_j) Helmert matrix, that is, an (m_j × m_j) orthogonal matrix with the first column proportional to the m_j × 1 vector of 1's. We use here the corresponding canonical transformation, where the diagonal m_1 × m_1 matrices D_f = D_{f_3,f_2} are given in terms of the eigenblocks. Therefore, particularizing D^2, given by (23), for k = 3, we obtain the statistics T^2_{01}, T^2_{02} and T^2_{03}. Since the subsets of vectors involved in T^2_{01}, T^2_{02} and T^2_{03}, respectively, form a partition of the set of independent vectors {z_{f_3,f_2} : (f_3, f_2) ∈ F_3 × F_2 = F}, the statistics T^2_{01}, T^2_{02} and T^2_{03} are mutually independent. Moreover, for j = 1, T^2_{01} has an LH-trace distribution, denoted by T^2_0(m_1; m_3(m_2 − 1), d_1), if d_1 = (n − 1)m_3(m_2 − 1) > m_1; similar results hold for j = 2 and j = 3. Hence, D^2 is the convolution of three independent LH-trace distributions, provided that (n − 1)p_{j+2,3}(m_{j+1} − 1) ≥ m_1 for j = 1, 2, 3. The critical values of this distribution can be obtained using simulations. The LH-trace distribution is usually approximated by an F distribution, as mentioned before; however, we use here the second approximation suggested in McKeon [27]. For j = 1, we denote m^{(1)} = d_1 − m_1 − 1 = (n − 1)m_3(m_2 − 1) − m_1 − 1 and k_1 = m_3(m_2 − 1).

Paired Observation Model
In this section, we consider that for each of the n individuals, a p_{1,k}-variate vector is measured at two different times (e.g., before and after a treatment). These measurements are k-th order (array-variate) measurements from each individual. To be more precise, for each f = (f_k, . . . , f_2) ∈ F, let v_{r;f_k,...,f_2} and w_{r;f_k,...,f_2} be the paired m_1-dimensional vectors measured at the (f_k, . . . , f_2) site of the r-th individual, for r = 1, . . . , n. Let u_r be the partitioned 2p_{1,k}-variate vector u_r = (v_r', w_r')', with v_r = (v'_{r;1}, . . . , v'_{r;m_k})' and w_r = (w'_{r;1}, . . . , w'_{r;m_k})'. We assume u_r ∼ N_{2p_{1,k}}(µ_u, Γ_u), r = 1, . . . , n, i.i.d., where i.i.d. stands for independent and identically distributed, µ_u = (µ_v', µ_w')', and Γ_u is the partitioned 2p_{1,k} × 2p_{1,k} matrix with diagonal blocks Γ_v, Γ_w and off-diagonal blocks Γ_vw, Γ_wv. The matrices Γ_vw and Γ_wv account for the linear dependence among the paired measurements. Particular cases of Γ_vw could be of interest, e.g., U_{vw,1} = · · · = U_{vw,k}, that is, Γ_vw = Γ_wv = J_{p_{2,k}} ⊗ U_{vw,k} (see, for example, when k = 2, Definition 2 on Page 388 in Leiva [6]). Under this setup, we are interested in testing the hypothesis H_0: µ_v = µ_w. If we define x = v − w, the above hypothesis is equivalent to testing that the mean of x is zero, where the component matrices of cov[x] are U_{x,j} = U_{v,j} + U_{w,j} − U_{vw,j} − U_{wv,j} for j = 1, . . . , k.
Assuming Γ is a positive definite matrix and that n > p_{1,k}, one may consider the likelihood ratio test for the above hypothesis testing problem for k-level multivariate data, assuming the mean vectors µ_v and µ_w are unstructured. Note that this testing problem reduces to the one-sample mean case of the previous section with µ_0 = 0. Therefore, all the results obtained in the previous section are valid for this case. Following the same logic as in Remark 1, the needed sample size for the test is n ≥ m_1 + 1, regardless of any m_j, j = 2, . . . , k.
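The covariance algebra behind this reduction, cov[v − w] = Γ_v + Γ_w − Γ_vw − Γ_wv, is easy to verify numerically; the matrices below are arbitrary illustrations, not glaucoma data.

```python
# Verify cov[x] = Gamma_v + Gamma_w - Gamma_vw - Gamma_wv for x = v - w via
# the contrast matrix C = [I, -I]; Gamma_u is a random positive definite example.
import numpy as np

rng = np.random.default_rng(7)
p = 4
A = rng.normal(size=(2 * p, 2 * p))
Gamma_u = A @ A.T + np.eye(2 * p)          # positive definite 2p x 2p covariance

Gv, Gw = Gamma_u[:p, :p], Gamma_u[p:, p:]
Gvw, Gwv = Gamma_u[:p, p:], Gamma_u[p:, :p]

C = np.hstack([np.eye(p), -np.eye(p)])     # maps u = (v, w) to x = v - w
Gamma_x = C @ Gamma_u @ C.T

assert np.allclose(Gamma_x, Gv + Gw - Gvw - Gwv)
```

When each block of Γ_u is k-SSCS, the same identity applied blockwise yields the component matrices U_{x,j} stated above, so the one-sample machinery applies directly to the differences.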
Iester et al. [29] reported the mean and standard deviation (SD) of the IOP and CCT measurements for both eyes from 794 Italian Caucasian glaucoma patients (see Table 2). We treat these means as the means of IOP and CCT at the first time point and then randomly generate four samples within three standard errors (SD of the mean) of these reported means to represent the means of IOP and CCT for the left and right eyes in the third and sixth months, respectively. These randomly generated means of IOP and CCT for the left and right eyes at three time points are reported in vector form in Table 3, and we take this mean vector as the targeted mean µ_0 in (21). The sample mean vector in Table 1 appears to be very different from the targeted population mean vector µ_0 in Table 3. The aim of our study is to see whether our sample of 30 glaucoma patients has the same mean vector as the Italian Caucasian glaucoma patients. Our main intention in analyzing our glaucoma dataset is to illustrate the use of our new hypothesis testing procedures rather than to give any insight into the dataset itself.
The calculated D^2 statistic (26), which is a convolution of three independent LH-trace distributions, T^2_0(2; 3, 87), T^2_0(2; 2, 58) and T^2_0(2; 1, 29), which in turn is approximated by two approximated F distributions and one exact F distribution, is 317.2971, and the corresponding p-value is essentially 0. So, we reject the null hypothesis that the population mean of our dataset is equal to that of the Italian Caucasian glaucoma patients; this conclusion was expected from the data.

Conclusions and Discussion
We study tests of hypotheses of equality of means for one population as well as for two populations for high-dimensional and higher-order data with a k-SSCS covariance structure. Such a structure is a natural and credible assumption in many research studies. The MLEs and the unbiased estimates of the matrix parameters of the k-SSCS covariance structure have closed-form solutions. On the other hand, the MLEs and the unbiased estimates of the matrix parameters of the k-separable covariance structure are not tractable and are computationally intensive. Thus, the k-SSCS covariance structure is a desirable covariance structure for k-th order data. Aghaian et al. [30] examined differences in CCT among 801 subjects, establishing that the CCT of Japanese participants was significantly lower than that of Caucasians, Chinese, Filipinos, and Hispanics, and greater than that of African Americans. African American individuals have thinner corneas compared to white individuals [31]. So, CCT and IOP in glaucoma patients vary with race, and our result confirms this fact. Our proposed new hypothesis testing procedures are well suited to high-dimensional array-variate data, which are ubiquitous in this century. In discriminant analysis [32], the first step is to test the equality of means for the two populations. Therefore, our new method developed in this article will have important applications in the analysis of modern multivariate datasets with higher-order structure. Our new method can be extended to non-normal datasets. In addition, it can be extended to testing the equality of means for more than two populations and to simultaneous hypothesis testing in models with a k-SSCS covariance structure.