Article

Nonnegative Estimation of Variance Components for a Nested Three-Way Random Model

Department of Statistics, Keimyung University, 1095 Dalgubeoldaero, Dalseogu, Daegu 42601, Korea
Symmetry 2022, 14(6), 1210; https://doi.org/10.3390/sym14061210
Submission received: 25 April 2022 / Revised: 3 June 2022 / Accepted: 6 June 2022 / Published: 11 June 2022
(This article belongs to the Special Issue Matrix Equations and Symmetry)

Abstract: A nonnegative variance estimation procedure is suggested for unbalanced data in which two factors are nested within another. Since the factors involved are all random, the approach is based on a nested three-way random model. The proposed method for the estimation of variance components is compared with Henderson's Methods I and III, which rest on the same method-of-moments estimation principle. Although Henderson's Methods I and III are known to be useful for estimating variance components from balanced or unbalanced data, they often yield negative values as variance estimates, whereas the estimates obtained by the suggested method are never negative. This paper points out what causes negative estimates and discusses how to fix the problem. The proposed method shows how to define the sums of squares and the orthogonal coefficient matrices that are necessary for evaluating expectations. All the matrices of the quadratic forms used to compute the sums of squares are symmetric and idempotent. The method also reveals that the coefficient of each variance component does not change from equation to equation.

1. Introduction

The estimation of variance components in random models is important in scientific research for the evaluation of random effects. There is a large literature on topics concerning random components, such as [1,2,3,4]. Substantial attention has been given for decades to the problem of negative values, which are not admissible as estimates of variance components. Much of the literature has concentrated on this perplexing issue, but the problem has not been fully resolved; instead, several strategies are suggested for when it happens. Reasonable comments are given by Thompson [5,6,7], Nelder [8], and Searle and Fawcett [9]. Recent research on the variance estimation of random effects has made some progress toward nonnegative estimates, such as Choi [10,11]. The purpose of this study is to fix the problem of negative estimates of variance components in a three-way random model. Negative estimates most often arise in the analysis of unbalanced data, so the nonnegative estimation procedure is developed for unbalanced data under a nested three-way random model. When nesting occurs in the treatment structure, nested effects are involved in the model; related topics appear in Milliken and Johnson [12], Montgomery [13], Searle [14], Khan et al. [15], Sharma [16], and Ferreira et al. [17]. Although Henderson's Method I or III [18], as a method of moments, is mainly used for the estimation of variance components, there is no accepted strategy for handling negative estimates when they occur. Nonnegative estimation for nested random models has been discussed extensively in the literature, but the issue remains unsolved. It is therefore worthwhile to develop a method that obtains nonnegative solutions in a way different from Henderson's Methods I and III, and to compare the pros and cons.
Since the nesting occurs in the treatment structure, it should be noted that only one size of experimental unit is available in the data analysis; if the nesting occurred in the design structure, there would be one more size of experimental unit. This paper concerns how to obtain sums of squares for nested main effects and nested interaction effects when two random factors are nested within another, and deals with how to proceed when the estimates of the variance components are negative.

2. Nested and Crossed Random Effects Model

Suppose that data are from an experimental design where the three factors in the treatment structure are random, the levels of B are crossed with the levels of C, and the levels of the two crossed factors B and C are nested within the levels of A, all in a completely randomized design structure. The data can then be displayed in a three-way table. For the model description, let $y_{ijkl}$ denote the observation of a unit treated with the $i$th level of A, the $j$th level of B, and the $k$th level of C. The model for the assumed experimental situation is

$$ y_{ijkl} = \mu + \alpha_i + \beta_{j(i)} + \gamma_{k(i)} + \delta_{jk(i)} + \epsilon_{ijkl}, \quad i = 1, \ldots, a; \; j = 1, \ldots, b; \; k = 1, \ldots, c; \; l = 1, \ldots, n_{ijk}, $$

where $\mu$ is an overall mean, $\alpha_i$ is the effect of the $i$th level of random factor A, $\beta_{j(i)}$ is the effect of the $j$th level of random factor B nested in the $i$th level of A, $\gamma_{k(i)}$ is the effect of the $k$th level of random factor C nested in the $i$th level of A, $\delta_{jk(i)}$ is the interaction effect of the $j$th level of B and the $k$th level of C nested in the $i$th level of A, and $\epsilon_{ijkl}$ is the random error term. In the random model, we assume that the $\alpha_i$'s, $\beta_{j(i)}$'s, $\gamma_{k(i)}$'s, and $\delta_{jk(i)}$'s are random with zero means and variances $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_\gamma^2$, and $\sigma_\delta^2$, respectively. The $\epsilon_{ijkl}$'s are random errors assumed to have mean zero and variance $\sigma_\epsilon^2$. All of the random effects in the model are assumed to be independent of each other.
The matrix form of the model can be expressed as

$$ \mathbf{y} = \mathbf{j}\mu + \mathbf{X}_\alpha \boldsymbol{\alpha} + \mathbf{X}_\beta \boldsymbol{\beta} + \mathbf{X}_\gamma \boldsymbol{\gamma} + \mathbf{X}_\delta \boldsymbol{\delta} + \boldsymbol{\epsilon}, $$

where $\mathbf{y}$ denotes the $n \times 1$ vector of observations ($n = \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{c} n_{ijk}$); $\mathbf{j}$ denotes the $n \times 1$ vector of ones; $\mathbf{X}_\alpha$ is the $n \times a$ coefficient matrix of $\boldsymbol{\alpha}$, the $a \times 1$ random vector assumed to be distributed as $N(\mathbf{0}, \sigma_\alpha^2 \mathbf{I}_a)$; $\mathbf{X}_\beta$ is the $n \times b$ coefficient matrix of $\boldsymbol{\beta}$, the $b \times 1$ random vector assumed to be distributed as $N(\mathbf{0}, \sigma_\beta^2 \mathbf{I}_b)$; $\mathbf{X}_\gamma$ is the $n \times c$ coefficient matrix of $\boldsymbol{\gamma}$, the $c \times 1$ random vector assumed to be distributed as $N(\mathbf{0}, \sigma_\gamma^2 \mathbf{I}_c)$; $\mathbf{X}_\delta$ is the $n \times bc$ coefficient matrix of $\boldsymbol{\delta}$, the $bc \times 1$ random vector assumed to be distributed as $N(\mathbf{0}, \sigma_\delta^2 \mathbf{I}_{bc})$; and $\boldsymbol{\epsilon}$ is the usual $n \times 1$ random error vector assumed to be distributed as $N(\mathbf{0}, \sigma_\epsilon^2 \mathbf{I}_n)$.
The parameters of the model are $\mu$, $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_\gamma^2$, $\sigma_\delta^2$, and $\sigma_\epsilon^2$. There are many ways to estimate variance components from unbalanced data because the fitting differs considerably from the balanced case.
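To make the matrix form concrete, the incidence matrices can be built directly from the cell counts. The following is a minimal sketch in NumPy, assuming a small hypothetical layout (a = b = c = 2 with unbalanced cell sizes, not the paper's data) and assuming a coding with one indicator column per nesting-nested combination for the nested factors; the paper does not spell out its column convention, so this coding is illustrative only.

```python
import numpy as np

# Hypothetical unbalanced layout (assumed for illustration): cell sizes n_ijk.
a, b, c = 2, 2, 2
n_ijk = {(0, 0, 0): 2, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 2,
         (1, 0, 0): 1, (1, 0, 1): 2, (1, 1, 0): 2, (1, 1, 1): 1}
n = sum(n_ijk.values())

j_vec = np.ones((n, 1))           # the n x 1 vector of ones
X_alpha = np.zeros((n, a))
X_beta = np.zeros((n, a * b))     # B nested in A: one column per (i, j)
X_gamma = np.zeros((n, a * c))    # C nested in A: one column per (i, k)
X_delta = np.zeros((n, a * b * c))

row = 0
for (i, j, k) in sorted(n_ijk):
    for _ in range(n_ijk[(i, j, k)]):
        X_alpha[row, i] = 1
        X_beta[row, i * b + j] = 1
        X_gamma[row, i * c + k] = 1
        X_delta[row, (i * b + j) * c + k] = 1
        row += 1

# Model matrix X = (j, X_alpha, X_beta, X_gamma, X_delta) as in Section 3.
X = np.hstack([j_vec, X_alpha, X_beta, X_gamma, X_delta])
print(X.shape)   # (12, 19): each row has exactly one 1 in every block
```

Under this coding each observation receives exactly one level of each effect, so every row of X contains five ones: one from the ones vector and one from each of the four indicator blocks.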

3. Sums of Squares and Orthogonal Coefficient Matrices

A method for the nonnegative estimation of variance components can be derived from the matrix form of the model (2). The method provides a set of sums of squares, i.e., quadratic forms in the observations, that can be used for estimating variance components, together with a set of orthogonal coefficient matrices of the random effects that is necessary for the expectations of the mean squares. The procedure for obtaining the necessary sums of squares is to partition the vector space of the observations into two subspaces, the estimation space and the error space, by appropriate projections. The definition of a projection and its related theorems are well discussed in Graybill [19] and Johnson and Wichern [20]. As many sums of squares are required as there are variance components in the model. Each sum of squares is obtained as a quadratic form in the observations, and the quadratic form is based on the concept of symmetry; that is, the matrix of the quadratic form is symmetric and idempotent.
For the model (2), we write $\mathbf{y}$ as

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \boldsymbol{\epsilon}, $$

where $\mathbf{X} = (\mathbf{j}, \mathbf{X}_\alpha, \mathbf{X}_\beta, \mathbf{X}_\gamma, \mathbf{X}_\delta)$ and $\boldsymbol{\theta} = (\mu, \boldsymbol{\alpha}', \boldsymbol{\beta}', \boldsymbol{\gamma}', \boldsymbol{\delta}')'$. The vector space of $\mathbf{y}$ can be divided into two orthogonal subspaces by projecting $\mathbf{y}$ onto the column space generated by $\mathbf{X}$. Let $V_s$ be the vector subspace generated by the model matrix $\mathbf{X}$ of the model (3). The projection of $\mathbf{y}$ onto $V_s$ is $\mathbf{X}\mathbf{X}^{+}\mathbf{y}$, where $\mathbf{X}^{+}$ denotes the Moore-Penrose generalized inverse of $\mathbf{X}$. A sum of squares is obtained as the quadratic form $\mathbf{y}'(\mathbf{I} - \mathbf{X}\mathbf{X}^{+})\mathbf{y}$, and the coefficient matrix of the error vector $\boldsymbol{\epsilon}$, denoted $\mathbf{P}_\epsilon$, is found as $\mathbf{I} - \mathbf{X}\mathbf{X}^{+}$ from the residual vector. Thus, fitting the model (3) yields one pair, the residual sum of squares and the coefficient matrix:

$$ Q_e = \mathbf{y}'(\mathbf{I} - \mathbf{X}\mathbf{X}^{+})\mathbf{y}, \qquad \mathbf{P}_\epsilon = \mathbf{I} - \mathbf{X}\mathbf{X}^{+}. $$
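As a quick numerical illustration of this pair, the residual projector can be computed with a Moore-Penrose inverse. The small rank-deficient design matrix and the response below are assumed purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model matrix (rank deficient, as overparameterized ANOVA
# model matrices typically are) and an arbitrary response vector.
X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.],
              [1., 0., 1.]])
y = rng.normal(size=5)

P_eps = np.eye(5) - X @ np.linalg.pinv(X)   # I - X X^+
Q_e = y @ P_eps @ y                          # residual sum of squares

# The matrix of the quadratic form is symmetric and idempotent,
# so Q_e is a squared length and can never be negative.
assert np.allclose(P_eps, P_eps.T)
assert np.allclose(P_eps @ P_eps, P_eps)
assert Q_e >= 0
```

Symmetry plus idempotence means `P_eps` is an orthogonal projector, which is exactly what guarantees the nonnegativity of the resulting sum of squares.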
The second model to be fitted is

$$ \mathbf{y} = \mathbf{X}_c \boldsymbol{\theta}_c + \boldsymbol{\epsilon}_c, $$

where $\mathbf{X}_c = (\mathbf{j}, \mathbf{X}_\alpha, \mathbf{X}_\beta, \mathbf{X}_\gamma)$ and $\boldsymbol{\theta}_c = (\mu, \boldsymbol{\alpha}', \boldsymbol{\beta}', \boldsymbol{\gamma}')'$. Let $V_c$ be the vector subspace generated by the model matrix $\mathbf{X}_c$ of the model (5). Then $\mathbf{X}_c \mathbf{X}_c^{+} \mathbf{y}$ is the projection of $\mathbf{y}$ onto $V_c$. Denote the second sum of squares and the coefficient matrix of $\boldsymbol{\delta}$ by $Q_c$ and $\mathbf{P}_\delta$, respectively:

$$ Q_c = \mathbf{y}'(\mathbf{I} - \mathbf{X}_c \mathbf{X}_c^{+})\mathbf{y}, \qquad \mathbf{P}_\delta = (\mathbf{I} - \mathbf{X}_c \mathbf{X}_c^{+})\mathbf{X}_\delta, $$

where $\mathbf{P}_\delta$, the coefficient matrix of $\boldsymbol{\delta}$, is orthogonal to the coefficient matrix of the random error vector. Since $Q_c$ is the squared length of a residual vector, it carries information on the two variance components $\sigma_\delta^2$ and $\sigma_\epsilon^2$. The third model to be fitted is

$$ \mathbf{y} = \mathbf{X}_b \boldsymbol{\theta}_b + \boldsymbol{\epsilon}_b, $$

where $\mathbf{X}_b = (\mathbf{j}, \mathbf{X}_\alpha, \mathbf{X}_\beta)$ and $\boldsymbol{\theta}_b = (\mu, \boldsymbol{\alpha}', \boldsymbol{\beta}')'$. Let $V_b$ be the vector subspace generated by $\mathbf{X}_b$ of the model (7). Then $\mathbf{X}_b \mathbf{X}_b^{+} \mathbf{y}$ is the projection of $\mathbf{y}$ onto $V_b$. We define $Q_b$ and $\mathbf{P}_\gamma$ as

$$ Q_b = \mathbf{y}'(\mathbf{I} - \mathbf{X}_b \mathbf{X}_b^{+})\mathbf{y}, \qquad \mathbf{P}_\gamma = (\mathbf{I} - \mathbf{X}_b \mathbf{X}_b^{+})\mathbf{X}_\gamma, $$

where $Q_b$ carries information on three variance components, $\sigma_\gamma^2$, $\sigma_\delta^2$, and $\sigma_\epsilon^2$. The coefficient matrix $\mathbf{P}_\gamma$ of the random vector $\boldsymbol{\gamma}$ is derived from the residual vector of the model (7). Since there are five variance components in the model (3), two more sums of squares and two more corresponding coefficient matrices are needed. The fourth model to be fitted is

$$ \mathbf{y} = \mathbf{X}_a \boldsymbol{\theta}_a + \boldsymbol{\epsilon}_a, $$

where $\mathbf{X}_a = (\mathbf{j}, \mathbf{X}_\alpha)$ and $\boldsymbol{\theta}_a = (\mu, \boldsymbol{\alpha}')'$. Let $V_a$ be the vector subspace generated by $\mathbf{X}_a$ of the model (9). Then $\mathbf{X}_a \mathbf{X}_a^{+} \mathbf{y}$ is the projection of $\mathbf{y}$ onto $V_a$. Denote the sum of squares by $Q_a$ and the derived coefficient matrix by $\mathbf{P}_\beta$:

$$ Q_a = \mathbf{y}'(\mathbf{I} - \mathbf{X}_a \mathbf{X}_a^{+})\mathbf{y}, \qquad \mathbf{P}_\beta = (\mathbf{I} - \mathbf{X}_a \mathbf{X}_a^{+})\mathbf{X}_\beta, $$

where $Q_a$ carries information on four variance components, $\sigma_\beta^2$, $\sigma_\gamma^2$, $\sigma_\delta^2$, and $\sigma_\epsilon^2$. The fourth coefficient matrix $\mathbf{P}_\beta$ of $\boldsymbol{\beta}$ is derived from the residual vector after fitting the model (9). The final model to be fitted is

$$ \mathbf{y} = \mathbf{X}_j \boldsymbol{\theta}_j + \boldsymbol{\epsilon}_j, $$

where $\mathbf{X}_j = \mathbf{j}$ and $\boldsymbol{\theta}_j = \mu$. Let $V_j$ be the vector subspace generated by $\mathbf{X}_j$ of the model (11). Then $\mathbf{X}_j \mathbf{X}_j^{+} \mathbf{y}$ is the projection of $\mathbf{y}$ onto $V_j$. Denote the sum of squares by $Q_j$ and the derived coefficient matrix by $\mathbf{P}_\alpha$:

$$ Q_j = \mathbf{y}'(\mathbf{I} - \mathbf{X}_j \mathbf{X}_j^{+})\mathbf{y}, \qquad \mathbf{P}_\alpha = (\mathbf{I} - \mathbf{X}_j \mathbf{X}_j^{+})\mathbf{X}_\alpha, $$

where $Q_j$ carries information on all five variance components, $\sigma_\alpha^2$, $\sigma_\beta^2$, $\sigma_\gamma^2$, $\sigma_\delta^2$, and $\sigma_\epsilon^2$. The fifth coefficient matrix $\mathbf{P}_\alpha$ of $\boldsymbol{\alpha}$ is obtained from the residual vector after fitting the model (11).
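The whole sequential procedure, fitting models (3), (5), (7), (9), and (11) in turn, can be sketched as a loop. The layout below is a small hypothetical one (two levels per factor, two replicates per cell), assumed only to show the mechanics; each pass records the residual sum of squares and the coefficient matrix of the effect just dropped.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical layout (assumed): a = b = c = 2, two replicates per cell.
cells = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
obs = [cell for cell in cells for _ in range(2)]   # n = 16 observations
n = len(obs)

def indicator(keys, all_keys):
    """0/1 incidence matrix: one column per element of all_keys."""
    return np.array([[1.0 if key == ak else 0.0 for ak in all_keys] for key in keys])

jv = np.ones((n, 1))
Xa = indicator([o[0] for o in obs], [0, 1])
Xb = indicator([o[:2] for o in obs], [(i, j) for i in range(2) for j in range(2)])
Xg = indicator([(o[0], o[2]) for o in obs], [(i, k) for i in range(2) for k in range(2)])
Xd = indicator(obs, cells)
y = rng.normal(size=n)

# Models (3), (5), (7), (9), (11): drop one effect at a time.
designs = [np.hstack([jv, Xa, Xb, Xg, Xd]),  # full model    -> Q_e, P_eps
           np.hstack([jv, Xa, Xb, Xg]),      # without delta -> Q_c, P_delta
           np.hstack([jv, Xa, Xb]),          # without gamma -> Q_b, P_gamma
           np.hstack([jv, Xa]),              # without beta  -> Q_a, P_beta
           jv]                               # mean only     -> Q_j, P_alpha
dropped = [None, Xd, Xg, Xb, Xa]

Q, P = [], []
for Xm, Z in zip(designs, dropped):
    R = np.eye(n) - Xm @ np.linalg.pinv(Xm)  # residual projector I - X X^+
    Q.append(float(y @ R @ y))
    P.append(R if Z is None else R @ Z)      # P_eps, or (I - X X^+) X_effect

# Each derived coefficient matrix is orthogonal to P_eps.
assert np.allclose(P[0] @ P[1], 0)
```

Because each model's column space contains the next one's, the residual sums of squares are nondecreasing along the sequence, and each coefficient matrix lies in the residual space of every smaller model.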
Since all the necessary coefficient matrices of the random effects vectors and the sums of squares have been defined through the model-fitting procedures, the expectations of the sums of squares are studied next for the estimation of the variance components.

4. Expectations of Sums of Squares

When $\mathbf{y}$ is represented by the sum of all the orthogonal projections from the foregoing model fittings, it can be written as

$$ \mathbf{y} = \mathbf{X}_j\mathbf{X}_j^{+}\mathbf{y} + \mathbf{P}_\alpha\mathbf{P}_\alpha^{+}\mathbf{y} + \mathbf{P}_\beta\mathbf{P}_\beta^{+}\mathbf{y} + \mathbf{P}_\gamma\mathbf{P}_\gamma^{+}\mathbf{y} + \mathbf{P}_\delta\mathbf{P}_\delta^{+}\mathbf{y} + \mathbf{P}_\epsilon\mathbf{P}_\epsilon^{+}\mathbf{y}. $$

So the covariance matrix of $\mathbf{y}$ can be found as

$$ \mathrm{Var}(\mathbf{y}) = \sigma_\alpha^2\mathbf{P}_\alpha\mathbf{P}_\alpha' + \sigma_\beta^2\mathbf{P}_\beta\mathbf{P}_\beta' + \sigma_\gamma^2\mathbf{P}_\gamma\mathbf{P}_\gamma' + \sigma_\delta^2\mathbf{P}_\delta\mathbf{P}_\delta' + \sigma_\epsilon^2\mathbf{P}_\epsilon\mathbf{P}_\epsilon' = \boldsymbol{\Sigma}. $$

Let $\mathbf{y}'\mathbf{Q}\mathbf{y}$ be a quadratic form in the observations, where the associated matrix $\mathbf{Q}$ is symmetric and idempotent. Then

$$ E(\mathbf{y}'\mathbf{Q}\mathbf{y}) = \mathrm{tr}(\mathbf{Q}\boldsymbol{\Sigma}) + \mu^2\,\mathbf{j}'\mathbf{Q}\mathbf{j} = \mathrm{tr}(\mathbf{Q}\boldsymbol{\Sigma}), $$

where $\mathrm{tr}(\mathbf{W})$ is the sum of the diagonal elements of the square matrix $\mathbf{W}$. Since all the matrices of the quadratic forms satisfy $\mathbf{Q}\mathbf{j} = \mathbf{0}$, the expectations of the sums of squares do not depend on $\mu$. The expectation of the quadratic form $\mathbf{y}'\mathbf{Q}\mathbf{y}$ is

$$
\begin{aligned}
E(\mathbf{y}'\mathbf{Q}\mathbf{y}) &= \mathrm{tr}\!\left[\mathbf{Q}\left(\sigma_\alpha^2\mathbf{P}_\alpha\mathbf{P}_\alpha' + \sigma_\beta^2\mathbf{P}_\beta\mathbf{P}_\beta' + \sigma_\gamma^2\mathbf{P}_\gamma\mathbf{P}_\gamma' + \sigma_\delta^2\mathbf{P}_\delta\mathbf{P}_\delta' + \sigma_\epsilon^2\mathbf{P}_\epsilon\mathbf{P}_\epsilon'\right)\right] \\
&= \sigma_\alpha^2\,\mathrm{tr}(\mathbf{P}_\alpha'\mathbf{Q}\mathbf{P}_\alpha) + \sigma_\beta^2\,\mathrm{tr}(\mathbf{P}_\beta'\mathbf{Q}\mathbf{P}_\beta) + \sigma_\gamma^2\,\mathrm{tr}(\mathbf{P}_\gamma'\mathbf{Q}\mathbf{P}_\gamma) + \sigma_\delta^2\,\mathrm{tr}(\mathbf{P}_\delta'\mathbf{Q}\mathbf{P}_\delta) + \sigma_\epsilon^2\,\mathrm{tr}(\mathbf{P}_\epsilon'\mathbf{Q}\mathbf{P}_\epsilon) \\
&= c_{q\alpha}\sigma_\alpha^2 + c_{q\beta}\sigma_\beta^2 + c_{q\gamma}\sigma_\gamma^2 + c_{q\delta}\sigma_\delta^2 + c_{q\epsilon}\sigma_\epsilon^2,
\end{aligned}
$$

where the $c$'s are constants denoting the traces. Hartley's synthesis [21] can be used for the calculation of the constants. Since there are five sums of squares in quadratic form, their expectations are given as follows:

$$
\begin{aligned}
E(Q_e) &= \mathrm{tr}\left[(\mathbf{I} - \mathbf{X}\mathbf{X}^{+})\boldsymbol{\Sigma}\right] = \sigma_\alpha^2 c_{e\alpha} + \sigma_\beta^2 c_{e\beta} + \sigma_\gamma^2 c_{e\gamma} + \sigma_\delta^2 c_{e\delta} + \sigma_\epsilon^2 c_{e\epsilon}, \\
E(Q_c) &= \mathrm{tr}\left[(\mathbf{I} - \mathbf{X}_c\mathbf{X}_c^{+})\boldsymbol{\Sigma}\right] = \sigma_\alpha^2 c_{c\alpha} + \sigma_\beta^2 c_{c\beta} + \sigma_\gamma^2 c_{c\gamma} + \sigma_\delta^2 c_{c\delta} + \sigma_\epsilon^2 c_{c\epsilon}, \\
E(Q_b) &= \mathrm{tr}\left[(\mathbf{I} - \mathbf{X}_b\mathbf{X}_b^{+})\boldsymbol{\Sigma}\right] = \sigma_\alpha^2 c_{b\alpha} + \sigma_\beta^2 c_{b\beta} + \sigma_\gamma^2 c_{b\gamma} + \sigma_\delta^2 c_{b\delta} + \sigma_\epsilon^2 c_{b\epsilon}, \\
E(Q_a) &= \mathrm{tr}\left[(\mathbf{I} - \mathbf{X}_a\mathbf{X}_a^{+})\boldsymbol{\Sigma}\right] = \sigma_\alpha^2 c_{a\alpha} + \sigma_\beta^2 c_{a\beta} + \sigma_\gamma^2 c_{a\gamma} + \sigma_\delta^2 c_{a\delta} + \sigma_\epsilon^2 c_{a\epsilon}, \\
E(Q_j) &= \mathrm{tr}\left[(\mathbf{I} - \mathbf{X}_j\mathbf{X}_j^{+})\boldsymbol{\Sigma}\right] = \sigma_\alpha^2 c_{j\alpha} + \sigma_\beta^2 c_{j\beta} + \sigma_\gamma^2 c_{j\gamma} + \sigma_\delta^2 c_{j\delta} + \sigma_\epsilon^2 c_{j\epsilon}.
\end{aligned}
$$
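The trace constants can be evaluated numerically, in the spirit of Hartley's synthesis, as $\mathrm{tr}(\mathbf{P}'\mathbf{Q}\mathbf{P})$ for each coefficient matrix and each residual projector. The sketch below uses a small hypothetical layout (assumed, not the paper's data) and checks the staircase structure of the resulting coefficient matrix: each equation picks up one more component, and a component's coefficient is the same in every equation in which it appears.

```python
import numpy as np

# Hypothetical layout (assumed): a = b = c = 2, two replicates per cell.
cells = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
obs = [cell for cell in cells for _ in range(2)]
n = len(obs)

def indicator(keys, all_keys):
    return np.array([[1.0 if key == ak else 0.0 for ak in all_keys] for key in keys])

jv = np.ones((n, 1))
Xa = indicator([o[0] for o in obs], [0, 1])
Xb = indicator([o[:2] for o in obs], [(i, j) for i in range(2) for j in range(2)])
Xg = indicator([(o[0], o[2]) for o in obs], [(i, k) for i in range(2) for k in range(2)])
Xd = indicator(obs, cells)

designs = [np.hstack([jv, Xa, Xb, Xg, Xd]), np.hstack([jv, Xa, Xb, Xg]),
           np.hstack([jv, Xa, Xb]), np.hstack([jv, Xa]), jv]
R = [np.eye(n) - Xm @ np.linalg.pinv(Xm) for Xm in designs]

# Coefficient matrices P_eps, P_delta, P_gamma, P_beta, P_alpha.
P = [R[0], R[1] @ Xd, R[2] @ Xg, R[3] @ Xb, R[4] @ Xa]

# C[r, f] = tr(P_f' R_r P_f): rows ordered (Q_e, Q_c, Q_b, Q_a, Q_j),
# columns ordered (alpha, beta, gamma, delta, eps).
cols = [P[4], P[3], P[2], P[1], P[0]]
C = np.array([[np.trace(Pf.T @ Rr @ Pf) for Pf in cols] for Rr in R])

# Staircase structure: zero above the "step", constant down each column.
for col in range(5):
    assert np.allclose(C[:4 - col, col], 0)
    vals = C[4 - col:, col]
    assert np.allclose(vals, vals[0])
```

The column constancy holds because each coefficient matrix lies in the residual space of every smaller model, so the later residual projectors act as the identity on it; this is the structure exploited in Section 6.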

5. A Set of Linear Equations in Variance Components

Equating the sums of squares to their corresponding expected values yields equations that are linear in the variance components of the model. The set of equations can be summarized as follows:

$$
\begin{aligned}
Q_e &= \sigma_\alpha^2 c_{e\alpha} + \sigma_\beta^2 c_{e\beta} + \sigma_\gamma^2 c_{e\gamma} + \sigma_\delta^2 c_{e\delta} + \sigma_\epsilon^2 c_{e\epsilon}, \\
Q_c &= \sigma_\alpha^2 c_{c\alpha} + \sigma_\beta^2 c_{c\beta} + \sigma_\gamma^2 c_{c\gamma} + \sigma_\delta^2 c_{c\delta} + \sigma_\epsilon^2 c_{c\epsilon}, \\
Q_b &= \sigma_\alpha^2 c_{b\alpha} + \sigma_\beta^2 c_{b\beta} + \sigma_\gamma^2 c_{b\gamma} + \sigma_\delta^2 c_{b\delta} + \sigma_\epsilon^2 c_{b\epsilon}, \\
Q_a &= \sigma_\alpha^2 c_{a\alpha} + \sigma_\beta^2 c_{a\beta} + \sigma_\gamma^2 c_{a\gamma} + \sigma_\delta^2 c_{a\delta} + \sigma_\epsilon^2 c_{a\epsilon}, \\
Q_j &= \sigma_\alpha^2 c_{j\alpha} + \sigma_\beta^2 c_{j\beta} + \sigma_\gamma^2 c_{j\gamma} + \sigma_\delta^2 c_{j\delta} + \sigma_\epsilon^2 c_{j\epsilon}.
\end{aligned}
$$

When the set of linear equations is arranged in matrix form, it can be displayed as

$$
\begin{pmatrix} Q_e \\ Q_c \\ Q_b \\ Q_a \\ Q_j \end{pmatrix} =
\begin{pmatrix}
c_{e\alpha} & c_{e\beta} & c_{e\gamma} & c_{e\delta} & c_{e\epsilon} \\
c_{c\alpha} & c_{c\beta} & c_{c\gamma} & c_{c\delta} & c_{c\epsilon} \\
c_{b\alpha} & c_{b\beta} & c_{b\gamma} & c_{b\delta} & c_{b\epsilon} \\
c_{a\alpha} & c_{a\beta} & c_{a\gamma} & c_{a\delta} & c_{a\epsilon} \\
c_{j\alpha} & c_{j\beta} & c_{j\gamma} & c_{j\delta} & c_{j\epsilon}
\end{pmatrix}
\begin{pmatrix} \sigma_\alpha^2 \\ \sigma_\beta^2 \\ \sigma_\gamma^2 \\ \sigma_\delta^2 \\ \sigma_\epsilon^2 \end{pmatrix}
\quad \text{or} \quad \mathbf{Q} = \mathbf{C}\boldsymbol{\sigma}^2,
$$

where $\mathbf{Q} = (Q_e, Q_c, Q_b, Q_a, Q_j)'$, $\mathbf{C}$ is the matrix of the coefficients, and $\boldsymbol{\sigma}^2 = (\sigma_\alpha^2, \sigma_\beta^2, \sigma_\gamma^2, \sigma_\delta^2, \sigma_\epsilon^2)'$ in (17). Since the system of equations in (18) can be inconsistent in general, the normal equations are used to obtain the best approximate solution to the system:

$$ \mathbf{C}'\mathbf{C}\hat{\boldsymbol{\sigma}}^2 = \mathbf{C}'\mathbf{Q}. $$

From the normal equations, we get $\hat{\boldsymbol{\sigma}}^2 = (\mathbf{C}'\mathbf{C})^{-1}\mathbf{C}'\mathbf{Q}$.
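Given the coefficient matrix and the vector of sums of squares, the normal-equations step is a single linear solve. The numbers below are assumed purely for illustration (a staircase-form coefficient matrix like those produced by the proposed method, not the paper's values).

```python
import numpy as np

# Assumed illustrative coefficient matrix C (staircase form) and
# vector of sums of squares Q; columns ordered
# (sigma_alpha^2, sigma_beta^2, sigma_gamma^2, sigma_delta^2, sigma_eps^2).
C = np.array([[0.0,  0.0,  0.0,  0.0, 80.0],
              [0.0,  0.0,  0.0, 50.0, 80.0],
              [0.0,  0.0, 80.0, 50.0, 80.0],
              [0.0, 75.0, 80.0, 50.0, 80.0],
              [75.0, 75.0, 80.0, 50.0, 80.0]])
Q = np.array([400.0, 2300.0, 3100.0, 5800.0, 10400.0])

# sigma2_hat = (C'C)^{-1} C'Q
sigma2_hat = np.linalg.solve(C.T @ C, C.T @ Q)
# approximately (61.33, 36.0, 10.0, 38.0, 5.0)
```

Because this illustrative C is square and nonsingular, the normal-equations solution coincides with the exact solution of Q = C sigma^2; with an inconsistent system it would instead give the least-squares fit.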

6. Comparison of Three Sets of Variance Component Estimates

Variance components can never be negative by definition in any experimental situation. Nevertheless, standard moment-based analyses of designs with random effects can yield negative values as estimates. A real data set is studied to verify the nonnegative estimation procedure. For the comparison of the variance component estimates, Milliken and Johnson's data [12] are analyzed under an assumed model by three different methods: the new method, the analysis of variance method (Henderson's Method I), and the fitting-constants method (Henderson's Method III). The data are from a study of the efficiency of workers in assembly lines at several plants. Three plants, four assembly sites within each plant, and three workers at each plant were randomly selected. Each worker was to work five times at each assembly site in his or her plant, and the efficiency scores were recorded. Since there are unequal numbers of observations on the treatment combinations, the data set is unbalanced; the total number of observations is 118. The model used to describe the data is

$$ y_{ijkl} = \mu + p_i + w_{j(i)} + s_{k(i)} + ws_{jk(i)} + \epsilon_{ijkl}, $$

where $p_i$ is the $i$th plant effect, $w_{j(i)}$ is the $j$th worker effect within plant $i$, $s_{k(i)}$ is the site effect within plant $i$, $ws_{jk(i)}$ is the interaction effect of worker and site in plant $i$, and $\epsilon_{ijkl}$ is the error term. $p_i$, $w_{j(i)}$, $s_{k(i)}$, $ws_{jk(i)}$, and $\epsilon_{ijkl}$ are assumed to follow $N(0, \sigma_p^2)$, $N(0, \sigma_w^2)$, $N(0, \sigma_s^2)$, $N(0, \sigma_{ws}^2)$, and $N(0, \sigma_\epsilon^2)$, respectively. Applying the new method described above to the estimation of the variance components in the model (20), the set of linear equations is

$$
\begin{aligned}
Q_e &= 82\,\hat\sigma_\epsilon^2, \\
Q_s &= 51.42503\,\hat\sigma_{ws}^2 + 82\,\hat\sigma_\epsilon^2, \\
Q_w &= 82.73543\,\hat\sigma_s^2 + 51.42503\,\hat\sigma_{ws}^2 + 82\,\hat\sigma_\epsilon^2, \\
Q_p &= 78.21936\,\hat\sigma_w^2 + 82.73543\,\hat\sigma_s^2 + 51.42503\,\hat\sigma_{ws}^2 + 82\,\hat\sigma_\epsilon^2, \\
Q_j &= 77.88136\,\hat\sigma_p^2 + 78.21936\,\hat\sigma_w^2 + 82.73543\,\hat\sigma_s^2 + 51.42503\,\hat\sigma_{ws}^2 + 82\,\hat\sigma_\epsilon^2,
\end{aligned}
$$

where $Q_e = 408.617$, $Q_s = 2329.906$, $Q_w = 3086.348$, $Q_p = 5828.000$, and $Q_j = 10455.518$. From the normal equations, we get $\hat{\boldsymbol{\sigma}}^2 = (\hat\sigma_p^2, \hat\sigma_w^2, \hat\sigma_s^2, \hat\sigma_{ws}^2, \hat\sigma_\epsilon^2)'$ with $\hat\sigma_p^2 = 59.417532$, $\hat\sigma_w^2 = 35.050804$, $\hat\sigma_s^2 = 9.142909$, $\hat\sigma_{ws}^2 = 37.360959$, and $\hat\sigma_\epsilon^2 = 4.983134$. The three sets of estimates are displayed in Table 1. The new method yields nonnegative estimates, whereas the other two methods, Henderson's I and III, do not.
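Solving this system numerically reproduces the New Method column of Table 1. The coefficients and Q values are those reported in the text; the normal-equations step follows Section 5.

```python
import numpy as np

# Coefficient matrix of the equations above; columns ordered
# (sigma_p^2, sigma_w^2, sigma_s^2, sigma_ws^2, sigma_eps^2).
C = np.array([[0.0,      0.0,      0.0,      0.0,      82.0],
              [0.0,      0.0,      0.0,      51.42503, 82.0],
              [0.0,      0.0,      82.73543, 51.42503, 82.0],
              [0.0,      78.21936, 82.73543, 51.42503, 82.0],
              [77.88136, 78.21936, 82.73543, 51.42503, 82.0]])
Q = np.array([408.617, 2329.906, 3086.348, 5828.000, 10455.518])

sigma2_hat = np.linalg.solve(C.T @ C, C.T @ Q)   # (C'C)^{-1} C'Q
print(np.round(sigma2_hat, 4))
# approximately [59.4175, 35.0508, 9.1429, 37.361, 4.9831]
```

Since the coefficient matrix here is triangular in structure, the components can also be recovered by simple back-substitution, which makes it easy to see why every estimate is nonnegative for these data.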

7. Discussion

Compared with previous studies on three-way random models with nested effects, the proposed method for estimating variance components under the nested random model has several findings worth noting. First, it always provides nonnegative estimates, regardless of whether the data are balanced. Second, the method fits the sub-models in a sequential fashion, excluding parameters one by one from the whole model, in order to obtain the sums of squares and the corresponding orthogonal coefficient matrices of the random effects vectors. Third, it reveals that the coefficient of each variance component does not change within the set of linear equations in the variance components, unlike the other methods; this implies that Satterthwaite's approximation [22] is not necessary for testing a hypothesis about a variance component. That each variance component keeps the same coefficient is attributed to the use of the orthogonal coefficient matrices derived from the model fitting. Defining the orthogonal coefficient matrices of the random effects vectors can be somewhat complicated, depending on the model structure. The coefficient matrices may also differ with the model-fitting procedure, even for the same model; however, the estimates are the same for all fitting procedures. Since data arise in many forms, techniques for deriving the orthogonal coefficient matrices should be studied in view of the model structure.

8. Conclusions

The suggested method can be applied to obtain both the sums of squares and the orthogonal coefficient matrices for each source of variation in any type of three-way random model with nested random effects. The quantities obtained through the stepwise procedure can be used to compute nonnegative estimates of the variance components. The nonnegative estimation procedure can increase the precision of inference on variance components and hence serves as a proper tool for data analysis. The suggested method can be seen as an extension of the projection method for a two-way random model in Choi [10] and of the projection method III for mixed-effects models in Choi [11]. The projection method III is employed here because the expectation of a sum of squares can then be expressed in the familiar form of the expected mean square column of an analysis of variance table; however, any one of the three methods yields the same nonnegative estimates of the variance components. The new method, as an extension of the projection method, yields nonnegative estimates for the variance components of random main effects, nested random interaction effects, and random error when a nested relation exists among the factors in a three-way random model. This paper delineates the procedure for defining the orthogonal coefficient matrices for unbalanced data in a three-way classification with two crossed factors nested within another.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data analyzed in this study are openly available as Table 21.1, pp. 265–266, of Milliken and Johnson [12].

Acknowledgments

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Harville, D.A. Variance Component Estimation for the Unbalanced One-Way Random Classification—A Critique; ARL-69; Aerospace Research Laboratory: Dayton, OH, USA, 1969. [Google Scholar]
  2. Hill, B.M. Inference about variance components in the one-way model. J. Am. Stat. Assoc. 1965, 60, 806–825. [Google Scholar] [CrossRef]
  3. Hill, B.M. Correlated errors in the random model. J. Am. Stat. Assoc. 1967, 62, 1387–1400. [Google Scholar] [CrossRef]
  4. Searle, S.R.; Casella, G.; McCulloch, C.E. Variance Components; John Wiley and Sons: New York, NY, USA, 2009. [Google Scholar]
  5. Thompson, W.A. Negative estimates of variance components: An introduction. Bull. Int. Inst. Stat. 1961, 34, 1–4. [Google Scholar]
  6. Thompson, W.A. The problem of negative estimates of variance components. Ann. Math. Stat. 1962, 33, 273–289. [Google Scholar] [CrossRef]
  7. Thompson, W.A.; Moore, J.R. Non-negative estimates of variance components. Technometrics 1963, 5, 441–449. [Google Scholar] [CrossRef]
  8. Nelder, J.A. The interpretation of negative components of variance. Biometrika 1954, 41, 544–548. [Google Scholar] [CrossRef]
  9. Searle, S.R.; Fawcett, R.F. Expected mean squares in variance components models having finite populations. Biometrics 1970, 26, 243–254. [Google Scholar] [CrossRef]
  10. Choi, J. Nonnegative estimates of variance components in a two-way random model. Commun. Stat. Appl. Methods 2019, 26, 337–346. [Google Scholar] [CrossRef]
  11. Choi, J. Nonnegative variance component estimation for mixed-effects models. Commun. Stat. Appl. Methods 2020, 27, 523–533. [Google Scholar] [CrossRef]
  12. Milliken, G.A.; Johnson, D.E. Analysis of Messy Data Volume 1: Designed Experiments; Van Nostrand Reinhold: New York, NY, USA, 1984. [Google Scholar]
  13. Montgomery, D.C. Design and Analysis of Experiments; John Wiley and Sons: New York, NY, USA, 2013. [Google Scholar]
  14. Searle, S.R. Linear Models; John Wiley and Sons: New York, NY, USA, 1971. [Google Scholar]
  15. Khan, A.R.; Saleem, S.M.A.; Mehdi, H. Detection of edges using two-way nested design. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 136–144. [Google Scholar]
  16. Sharma, H.L. Nested balanced n-ary designs and their PB arrays. J. Reliab. Stat. Stud. 2014, 7, 29–36. [Google Scholar]
  17. Ferreira, S.S.; Ferreira, D.; Mexia, J.T. Double tier cross nesting design models. J. Interdiscip. Math. 2008, 11, 275–289. [Google Scholar] [CrossRef]
  18. Henderson, C.R. Estimation of variance and covariance components. Biometrics 1953, 9, 226–252. [Google Scholar] [CrossRef]
  19. Graybill, F.A. Matrices with Applications in Statistics; Wadsworth: Belmont, CA, USA, 1983. [Google Scholar]
  20. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis; Prentice Hall: Upper Saddle River, NJ, USA, 2014. [Google Scholar]
  21. Hartley, H.O. Expectations, variances and covariances of ANOVA mean squares by “synthesis”. Biometrics 1967, 23, 105–114. [Google Scholar] [CrossRef] [PubMed]
  22. Satterthwaite, F.E. An approximate distribution of estimates of variance components. Biom. Bull. 1946, 2, 110–114. [Google Scholar] [CrossRef]
Table 1. Estimates of Variance Components.

Variance Component  | New Method | Analysis of Variance Method | Henderson's Method III
$\sigma_p^2$        | 59.417532  | 49.727005                   | 48.806265
$\sigma_w^2$        | 35.050804  | 23.329973                   | 24.254899
$\sigma_s^2$        | 9.142909   | −8.593521                   | −4.877959
$\sigma_{ws}^2$     | 37.360959  | 39.332304                   | 35.616742
$\sigma_\epsilon^2$ | 4.983134   | 4.983134                    | 4.983134

Share and Cite

MDPI and ACS Style

Choi, J. Nonnegative Estimation of Variance Components for a Nested Three-Way Random Model. Symmetry 2022, 14, 1210. https://doi.org/10.3390/sym14061210

