Article

Linear Models with Nested Random Effects

Dário Ferreira

Department of Mathematics and Center of Mathematics and Applications, University of Beira Interior, 6201-001 Covilhã, Portugal
Symmetry 2025, 17(3), 374; https://doi.org/10.3390/sym17030374
Submission received: 9 January 2025 / Revised: 12 February 2025 / Accepted: 25 February 2025 / Published: 28 February 2025
(This article belongs to the Section Mathematics)

Abstract

Symmetry is a crucial concept in various fields of mathematics, offering a systematic approach to understanding structural properties and simplifying complex problems. This study focuses on linear mixed models, emphasizing the role of symmetry in the design of experiments and the structure of variance–covariance matrices. Building on foundational works, this paper introduces the concept of nested models and employs Wishart matrices for variance component estimation, enhancing the efficiency of complex model analysis. The methodology is particularly applicable when variance–covariance matrices conform to a commutative Jordan algebra (CJA). The practical significance of this modeling approach is demonstrated through numerical applications using both simulated and real-world data.

1. Introduction

Symmetry plays a fundamental role in various branches of mathematics and their applications, providing a framework for understanding structural properties and simplifying complex problems. In the context of linear mixed models, symmetry can manifest in the design of experiments, the structure of variance–covariance matrices, and the underlying algebraic structures used in estimation procedures.
Linear mixed models have been extensively explored in the literature, with numerous seminal contributions enhancing the theoretical and practical understanding of them. For instance, Brown and Prescott [1] provide a thorough exploration of how linear mixed models can be effectively applied in medical research, offering invaluable insights into both theoretical foundations and practical applications. Pinheiro and Bates [2] extend the applicability of mixed-effects models by delving into their implementation in S and S-PLUS, highlighting the versatility and practicality of these models. Similarly, Rao and Kleffe [3] make a foundational contribution by advancing the estimation of variance components and their applications, laying a critical groundwork for subsequent developments in the field. Sahai and Ageel [4] enrich the discourse on linear mixed models by addressing the complexities of fixed, random, and mixed models in their analysis of variance, providing a comprehensive treatment of these methodologies. Additionally, Demidenko [5] presents a modern perspective by integrating theoretical insights with practical implementations in R, bridging the gap between abstract theory and applied statistics.
Recent advancements have also addressed computational challenges in estimating variance components efficiently. Tack and Müller [6] developed fast restricted maximum likelihood (REML) estimation techniques for linear mixed models with Kronecker product covariance structures, improving computational performance in large-scale applications. Additionally, Lee et al. [7] provide a unified approach to generalized linear mixed models, offering a comprehensive treatment that extends classical methods to accommodate non-normal responses, thus broadening the scope of applications.
Beyond these influential works on linear mixed models, Fonseca et al. [8], for example, investigate the properties and applications of binary operations in linear algebra, specifically examining the scope of orthogonal normal models. Furthermore, Mexia et al. [9] contribute to this domain by developing the COBS methodology, which incorporates segregation, matching, crossing, and nesting, streamlining complex problems through innovative techniques. More recently, Ferreira et al. [10] extend these advancements by exploring inference in nonorthogonal mixed models, addressing key challenges in model estimation and interpretation. Additionally, Ferreira et al. [11] further enhance statistical methodologies by examining inference in mixed models with a mixture of distributions and controlled heteroscedasticity. Complementing these developments, Bailey et al. [12] focus on experimental design, proposing designs for half-diallel experiments with commutative orthogonal block structures, which optimize efficiency in statistical analyses.
In our study, we build on these foundational works by introducing the concept of nested models and utilizing Wishart matrices for estimating variance components. By simplifying complex models as building blocks, our approach streamlines the estimation process and makes it more efficient. Our method is particularly effective when variance–covariance matrices fall under a commutative Jordan algebra, known as CJA, and allows for the structure of variance–covariance matrices to be captured and manipulated in a way that respects both symmetry and commutativity, facilitating the estimation of variance components.
In the next section, we analyze the structure of linear models with nested random effects, that is, models whose random effects are hierarchically structured within each level of the model, and show how to estimate the associated variance components. Then, we discuss the testing of hypotheses on the effects and interactions of fixed effect factors using F-tests and introduce a measure of relevance for the hypotheses, which becomes especially useful when several null hypotheses are rejected. In the special case section, we focus on the case where the mean and the vector of residuals to the mean are independent for every block of observations, and analyze the implications of this assumption for the model. The Multiple Regression Designs section extends these results to multi-treatment regression models with nested random effect factors (see Mexia [13]). Finally, we illustrate our methodology through a numerical application, followed by concluding remarks.

2. Variance Components

Let $b$ represent the number of blocks in a mixed model, corresponding to the level combinations of the fixed effect factors. The observation vectors of the blocks, $\mathbf{Y}_l$, $l = 1, \ldots, b$, are assumed to be normally distributed and independent, with mean vectors

$$E(\mathbf{Y}_l) = \mathbf{1}_r \mu + \mathbf{X}_l \boldsymbol{\beta}, \quad l = 1, \ldots, b,$$

where $r$ represents the number of observations within each block, $\mathbf{1}_r$ is a column vector of ones with $r$ elements, $\mu$ is the overall average response, and $\boldsymbol{\beta}$ represents a vector of unknown fixed effects.
The variance–covariance structure of $\mathbf{Y}_l$ is given by

$$\mathrm{Var}(\mathbf{Y}_l) = \mathbf{V}_l(\boldsymbol{\theta}) = \sum_{j=1}^{w} \theta_j \mathbf{M}_j,$$

where $w$ represents the number of variance components in the model, $\mathbf{M}_j = \mathbf{Z}_j \mathbf{Z}_j^{\top}$ is a known matrix built from the design matrix $\mathbf{Z}_j$ of the $j$th random effect, and $\theta_j$ represents the $j$th variance component.
Each block $l$ has the same factor structure, so the design matrix for the random effects in block $l$, denoted by $\mathbf{Z}_l$, is related to the design matrices $\mathbf{Z}_j$ through

$$\mathbf{Z}_l = \sum_{j=1}^{w} \mathbf{Z}_j(l),$$

where $\mathbf{Z}_j(l)$ represents the contribution of the $j$th random effect in block $l$. The linear mixed model for each block may then be written as

$$\mathbf{Y}_l = \mathbf{1}_r \mu + \mathbf{X}_l \boldsymbol{\beta} + \mathbf{Z}_l \boldsymbol{\gamma} + \boldsymbol{\varepsilon}_l, \quad l = 1, \ldots, b,$$

where $\boldsymbol{\gamma}$ is a vector of random effects, assumed to be normally distributed with mean zero and variance–covariance matrix $\mathbf{G}$, and $\boldsymbol{\varepsilon}_l$ represents the error term for block $l$, whose elements are typically assumed to be independent and normally distributed with mean zero and variance $\sigma^2$, so that $\boldsymbol{\varepsilon}_l \sim N(\mathbf{0}, \sigma^2 \mathbf{I}_r)$.
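To make the per-block structure concrete, the following R sketch simulates a single block under this model; the dimensions, design matrices, and parameter values are illustrative choices (mirroring the example of Section 4), not prescribed by the model itself.

```r
# Sketch: simulate one block of Y_l = 1_r*mu + X_l*beta + Z_l*gamma + eps_l.
# All dimensions and parameter values below are illustrative assumptions.
set.seed(1)
r     <- 8                               # observations per block
X_l   <- kronecker(rep(1, 4), diag(2))   # 8 x 2 fixed-effects design
Z_l   <- kronecker(diag(4), rep(1, 2))   # 8 x 4 random-effects design
mu    <- 10
beta  <- c(1, -1)
theta_gamma <- 1                         # variance component of gamma
sigma2      <- 1                         # error variance
gamma <- rnorm(4, mean = 0, sd = sqrt(theta_gamma))
eps   <- rnorm(r, mean = 0, sd = sqrt(sigma2))
Y_l   <- rep(1, r) * mu + X_l %*% beta + Z_l %*% gamma + eps
```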
Symmetry is explicitly leveraged in this model through the use of orthogonal transformations. Let $\mathbf{L}_r$ be an $(r-1) \times r$ matrix whose row vectors constitute an orthonormal basis for the orthogonal complement, $R(\mathbf{1}_r)^{\perp}$, of the range space of the column matrix $\mathbf{1}_r$. The transformed vectors

$$\dot{\mathbf{Y}}_l = \mathbf{L}_r \mathbf{Y}_l, \quad l = 1, \ldots, b,$$

are normally distributed and independent with null mean vectors and variance–covariance matrix

$$\dot{\mathbf{V}}(\boldsymbol{\theta}) = \sum_{j=1}^{w} \theta_j \dot{\mathbf{M}}_j,$$

where

$$\dot{\mathbf{M}}_j = \mathbf{L}_r \mathbf{M}_j \mathbf{L}_r^{\top}, \quad j = 1, \ldots, w.$$
This transformation simplifies the variance–covariance structure while preserving the symmetry in the model. The orthogonal decomposition facilitated by $\mathbf{L}_r$ ensures that the variance components remain independent, reflecting the fundamental principles of symmetry.
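One concrete choice of $\mathbf{L}_r$ satisfying these requirements is a normalized Helmert contrast matrix; the R sketch below builds it and verifies the two defining properties. This is only one convenient construction; any matrix with orthonormal rows spanning $R(\mathbf{1}_r)^{\perp}$ would do.

```r
# Sketch: an (r-1) x r matrix L_r whose rows form an orthonormal basis of
# the orthogonal complement of 1_r, built from Helmert contrasts.
make_Lr <- function(r) {
  H <- contr.helmert(r)                      # r x (r-1), columns orthogonal to 1_r
  H <- sweep(H, 2, sqrt(colSums(H^2)), "/")  # normalize columns to unit length
  t(H)
}
Lr <- make_Lr(8)
max(abs(Lr %*% rep(1, 8)))        # ~ 0: rows orthogonal to 1_r
max(abs(Lr %*% t(Lr) - diag(7)))  # ~ 0: rows orthonormal
```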
Next, consider a symmetric matrix $\mathbf{W}$ of dimension $d \times d$ with elements $w_{l,h}$. The half-vectorization of $\mathbf{W}$, denoted by $\mathrm{vech}(\mathbf{W})$, is a column vector of dimension $d(d+1)/2$ obtained by extracting the upper triangular elements (including the diagonal) of $\mathbf{W}$,

$$\mathrm{vech}(\mathbf{W}) = (w_{1,1}, \ldots, w_{1,d}, w_{2,2}, \ldots, w_{2,d}, \ldots, w_{d-1,d}, w_{d,d})^{\top}.$$

Thus, $\mathrm{vech}(\mathbf{W})$ contains the main diagonal and upper triangle of $\mathbf{W}$.
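A one-line R helper reproducing this row-wise ordering, used in the sketches that follow:

```r
# Sketch: half-vectorization in the row-wise order
# (w_{1,1},...,w_{1,d}, w_{2,2},...,w_{2,d}, ..., w_{d,d}).
vech <- function(W) t(W)[lower.tri(W, diag = TRUE)]
```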
Now, defining

$$\dot{\mathbf{m}}_j = \mathrm{vech}(\dot{\mathbf{M}}_j), \quad j = 1, \ldots, w, \qquad \dot{\mathbf{v}} = \mathrm{vech}(\dot{\mathbf{V}}(\boldsymbol{\theta})),$$

and

$$\dot{\mathbf{M}} = [\dot{\mathbf{m}}_1, \ldots, \dot{\mathbf{m}}_w],$$

we obtain

$$\dot{\mathbf{M}} \boldsymbol{\theta} = \dot{\mathbf{v}}.$$

If $\dot{\mathbf{m}}_1, \ldots, \dot{\mathbf{m}}_w$ are linearly independent, then $\dot{\mathbf{M}}$ has full column rank, and the variance components can be recovered through

$$\boldsymbol{\theta} = \dot{\mathbf{M}}^{+} \dot{\mathbf{v}},$$

where ${}^{+}$ denotes the Moore–Penrose inverse.
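Continuing the sketches above, the system $\dot{\mathbf{M}} \boldsymbol{\theta} = \dot{\mathbf{v}}$ can be assembled and solved in R with the Moore–Penrose inverse (`ginv` from the MASS package); here the true $\dot{\mathbf{V}}(\boldsymbol{\theta})$ is used, so the components are recovered exactly.

```r
# Sketch (continuing the previous sketches): recover theta from the
# population variance-covariance matrix via the Moore-Penrose inverse.
library(MASS)                              # provides ginv()
M1    <- Z_l %*% t(Z_l)                    # M_1 = Z_l Z_l' (random effects)
M2    <- diag(8)                           # M_2 = I_r (error term)
Mdot  <- list(Lr %*% M1 %*% t(Lr), Lr %*% M2 %*% t(Lr))
m_dot <- sapply(Mdot, vech)                # columns are vech(Mdot_j)
Vdot  <- theta_gamma * Mdot[[1]] + sigma2 * Mdot[[2]]
ginv(m_dot) %*% vech(Vdot)                 # returns (theta_gamma, sigma2) = (1, 1)
```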
We have a set of $b$ independent, normally distributed random vectors $\dot{\mathbf{Y}}_l$ with null mean vectors and variance–covariance matrix $\dot{\mathbf{V}}(\boldsymbol{\theta}) = \sum_{j=1}^{w} \theta_j \dot{\mathbf{M}}_j$, as given by Equation (2). For each of them we have

$$\dot{\mathbf{V}}_l = E\big[\dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}\big],$$

where $E$ denotes the expectation operator. Using the linearity of expectation, the sum of these variance–covariance matrices satisfies

$$E\left[\sum_{l=1}^{b} \dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}\right] = \sum_{l=1}^{b} E\big[\dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}\big] = \sum_{l=1}^{b} \dot{\mathbf{V}}_l.$$

The natural estimator of $\dot{\mathbf{V}}(\boldsymbol{\theta})$ is then the sample variance–covariance matrix,

$$\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta}) = \frac{1}{b} \sum_{l=1}^{b} \dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}.$$

Using Equation (12), and substituting $\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})$ for $\dot{\mathbf{V}}(\boldsymbol{\theta})$, we obtain the estimator

$$\tilde{\boldsymbol{\theta}} = \dot{\mathbf{M}}^{+} \, \mathrm{vech}\big(\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})\big)$$

for $\boldsymbol{\theta}$.
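In practice, $\dot{\mathbf{V}}(\boldsymbol{\theta})$ is unknown and the sample matrix is plugged in. A sketch continuing the example; the fixed part is set to zero here so that the transformed vectors have null mean, as assumed.

```r
# Sketch (continuing the previous sketches): plug-in estimation from the
# sample variance-covariance matrix of b transformed blocks.
b <- 200                                   # illustrative number of blocks
Ydot <- replicate(b, {
  gamma <- rnorm(4, 0, sqrt(theta_gamma))
  eps   <- rnorm(8, 0, sqrt(sigma2))
  as.vector(Lr %*% (Z_l %*% gamma + eps))  # fixed part omitted (null mean)
})                                         # 7 x b matrix of transformed vectors
Vtilde      <- Ydot %*% t(Ydot) / b        # sample variance-covariance matrix
theta_tilde <- ginv(m_dot) %*% vech(Vtilde)
theta_tilde                                # approx. (1, 1) for large b
```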
To understand the properties of the estimator $\tilde{\boldsymbol{\theta}}$, we analyze the distribution and covariance structure of the sample variance–covariance matrix $\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})$ and its role in the estimation process.
We can write

$$b \, \tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta}) = \sum_{l=1}^{b} \dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top},$$

where $b \, \tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})$ follows a Wishart distribution. Given that $\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})$ is an unbiased estimator of $\dot{\mathbf{V}}(\boldsymbol{\theta})$, we have

$$E[\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})] = \dot{\mathbf{V}}(\boldsymbol{\theta}) = \sum_{j=1}^{w} \theta_j \dot{\mathbf{M}}_j.$$

The covariance matrix of $\tilde{\dot{\mathbf{v}}} = \mathrm{vech}\big(\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})\big)$ is then

$$\boldsymbol{\Sigma}(\tilde{\dot{\mathbf{v}}}) = E\Big[\big(\tilde{\dot{\mathbf{v}}} - E[\tilde{\dot{\mathbf{v}}}]\big)\big(\tilde{\dot{\mathbf{v}}} - E[\tilde{\dot{\mathbf{v}}}]\big)^{\top}\Big] = E\big[\tilde{\dot{\mathbf{v}}} \tilde{\dot{\mathbf{v}}}^{\top}\big] - E[\tilde{\dot{\mathbf{v}}}] E[\tilde{\dot{\mathbf{v}}}]^{\top},$$

see Anderson [14].
Furthermore, since the $\dot{\mathbf{Y}}_l$ are independent and have null mean vectors, we have

$$E[\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})] = E\left[\frac{1}{b} \sum_{l=1}^{b} \dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}\right] = \frac{1}{b} \sum_{l=1}^{b} E\big[\dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}\big] = \dot{\mathbf{V}}(\boldsymbol{\theta}),$$

the final step following from the fact that each $E[\dot{\mathbf{Y}}_l \dot{\mathbf{Y}}_l^{\top}]$ is the common variance–covariance matrix itself. Now, substituting $E[\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})] = \dot{\mathbf{V}}(\boldsymbol{\theta})$ into the expectation of the estimator, we have

$$E[\tilde{\boldsymbol{\theta}}] = E\big[\dot{\mathbf{M}}^{+} \mathrm{vech}\big(\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})\big)\big] = \dot{\mathbf{M}}^{+} \, \mathrm{vech}\big(E[\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})]\big) = \dot{\mathbf{M}}^{+} \, \mathrm{vech}\big(\dot{\mathbf{V}}(\boldsymbol{\theta})\big).$$

Recalling Equation (12), $\boldsymbol{\theta} = \dot{\mathbf{M}}^{+} \dot{\mathbf{v}}$, we then have

$$E[\tilde{\boldsymbol{\theta}}] = \dot{\mathbf{M}}^{+} \dot{\mathbf{v}} = \boldsymbol{\theta},$$

so the estimator is unbiased.
Therefore, the covariance matrix of the estimator $\tilde{\boldsymbol{\theta}}$ is

$$\boldsymbol{\Sigma}(\tilde{\boldsymbol{\theta}}) = E\Big[\big(\tilde{\boldsymbol{\theta}} - E[\tilde{\boldsymbol{\theta}}]\big)\big(\tilde{\boldsymbol{\theta}} - E[\tilde{\boldsymbol{\theta}}]\big)^{\top}\Big] = E\big[\tilde{\boldsymbol{\theta}} \tilde{\boldsymbol{\theta}}^{\top}\big] - \big(\dot{\mathbf{M}}^{+} \dot{\mathbf{v}}\big)\big(\dot{\mathbf{M}}^{+} \dot{\mathbf{v}}\big)^{\top} = \dot{\mathbf{M}}^{+} \, \boldsymbol{\Sigma}(\tilde{\dot{\mathbf{v}}}) \, \big(\dot{\mathbf{M}}^{+}\big)^{\top}.$$

These results establish that the sample variance–covariance matrix $\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})$ is unbiased and provide the foundation for the properties of the estimator $\tilde{\boldsymbol{\theta}}$, including its covariance structure.
The estimator $\tilde{\boldsymbol{\theta}}$ derived in Equation (16) is both efficient and effective in estimating the variance components. Efficiency here means that the estimator has minimal variance among the class of unbiased estimators, ensuring that the variance components are estimated with the greatest possible precision given the available data. Effectiveness pertains to how well the estimator captures the underlying variance structure while maintaining desirable statistical properties such as unbiasedness and consistency. By leveraging the transformation matrix $\mathbf{L}_r$ and ensuring independence in the variance decomposition, the method provides an optimal approach for estimating variance components in mixed models. The explicit use of symmetry further enhances the reliability and interpretability of the estimates, making the approach both statistically sound and computationally feasible. Moreover, we can regard $\tilde{\boldsymbol{\theta}}$ as a least squares estimator, measuring its quality by the coefficient of determination, $R^2$, as in fixed effects models. We then consider the partitions

$$\mathbf{Y}_l = \mathbf{1}_r \bar{Y}_{\cdot,l} + \mathbf{L}_r^{\top} \dot{\mathbf{Y}}_l, \quad l = 1, \ldots, b,$$

where

$$\bar{Y}_{\cdot,l} = \frac{1}{r} \mathbf{1}_r^{\top} \mathbf{Y}_l, \quad l = 1, \ldots, b.$$
However, in general, the $\bar{Y}_{\cdot,l}$ and $\dot{\mathbf{Y}}_l$, $l = 1, \ldots, b$, are not independent. To quantify the variability of the block means $\bar{Y}_{\cdot,l}$, we define $\bar{\sigma}^2$ as their variance. This variance can be expressed in terms of the variance components $\theta_j$ as

$$\bar{\sigma}^2 = \frac{1}{r^2} \mathbf{1}_r^{\top} \left( \sum_{j=1}^{w} \theta_j \mathbf{M}_j \right) \mathbf{1}_r = \sum_{j=1}^{w} c_j \theta_j,$$

with

$$c_j = \frac{1}{r^2} \mathbf{1}_r^{\top} \mathbf{M}_j \mathbf{1}_r, \quad j = 1, \ldots, w.$$

So, to estimate $\bar{\sigma}^2$, we define the estimator

$$\tilde{\bar{\sigma}}^2 = \sum_{j=1}^{w} c_j \tilde{\theta}_j,$$

which depends on the estimated variance components $\tilde{\theta}_j$. However, in general, this estimator is not independent of the vector of block means

$$\bar{\mathbf{Y}} = (\bar{Y}_{\cdot,1}, \ldots, \bar{Y}_{\cdot,b})^{\top},$$

because the variance components $\tilde{\theta}_j$ are themselves influenced by the variability within blocks. This lack of independence introduces a complication in interpreting the variance structure.
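The coefficients $c_j$ and the resulting estimator are immediate to compute; a sketch continuing the running example:

```r
# Sketch (continuing the previous sketches): variance of the block means,
# bar_sigma2 = sum_j c_j * theta_j with c_j = (1/r^2) * 1_r' M_j 1_r.
one <- rep(1, 8)
c_j <- sapply(list(M1, M2),
              function(M) as.numeric(t(one) %*% M %*% one) / 8^2)
bar_sigma2_tilde <- sum(c_j * theta_tilde)  # plug in the estimated components
```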

Special Case

An interesting and important special case arises when the condition

$$\mathbf{1}_r^{\top} \mathbf{M}_j \mathbf{L}_r^{\top} = \mathbf{0}^{\top}, \quad j = 1, \ldots, w,$$

is satisfied. Under this condition, the cross-covariances between the block means and the transformed vectors vanish, since

$$\mathrm{Cov}\big(\bar{Y}_{\cdot,l}, \dot{\mathbf{Y}}_l\big) = \frac{1}{r} \mathbf{1}_r^{\top} \mathbf{V}_l(\boldsymbol{\theta}) \mathbf{L}_r^{\top} = \frac{1}{r} \sum_{j=1}^{w} \theta_j \, \mathbf{1}_r^{\top} \mathbf{M}_j \mathbf{L}_r^{\top} = \mathbf{0}^{\top}.$$

In this case, given normality, the block means $\bar{Y}_{\cdot,l}$ are independent of the deviations $\dot{\mathbf{Y}}_l$, $l = 1, \ldots, b$. Consequently, the estimator $\tilde{\bar{\sigma}}^2$ becomes independent of the vector of block means $\bar{\mathbf{Y}} = (\bar{Y}_{\cdot,1}, \ldots, \bar{Y}_{\cdot,b})^{\top}$. This independence is a desirable property because it simplifies the interpretation of the variance components. Furthermore, the estimator $\tilde{\bar{\sigma}}^2$ is unbiased, as its expectation equals the true variance,

$$E[\tilde{\bar{\sigma}}^2] = \bar{\sigma}^2 = \sum_{j=1}^{w} c_j \theta_j.$$

This result underscores the importance of the condition in Equation (33), as it ensures that $\tilde{\bar{\sigma}}^2$ accurately reflects the variability of the block means without being confounded by the within-block deviations.
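For a given design, the condition is a one-line numerical check; continuing the running example in R:

```r
# Sketch (continuing the previous sketches): check 1_r' M_j Lr' = 0 for each j.
sapply(list(M1, M2),
       function(M) max(abs(t(one) %*% M %*% t(Lr))))
# Values numerically zero indicate that the block means are uncorrelated
# with the within-block deviations, so the special case applies.
```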

3. Multiple Regression Designs

The application of regression models with fixed and random effects has a long history in the statistical literature. Early foundational work, such as that by Henderson [15], introduced best linear unbiased predictors (BLUPs) for mixed models, setting the stage for their use in various disciplines. Later, Searle, Casella, and McCulloch [16] provided comprehensive methodologies for variance component estimation and model inference. These developments have been applied to diverse fields, including genetics, where mixed models are used to estimate heritability [17], and industrial quality control, where random effects are incorporated to account for batch-to-batch variability [18].
A fundamental principle underlying multiple regression designs is the concept of symmetry, which plays a crucial role in variance component estimation. Symmetry in experimental designs allows for an equitable decomposition of effects and interactions, ensuring orthogonality and simplifying inference. As outlined by Scheffé [19], the decomposition of effects into orthogonal components facilitates hypothesis testing and interpretation in complex models. More recently, Mexia [13] extended these ideas to encompass generalized least squares estimators and their properties under specific experimental conditions.
In this section, we analyze multiple regression designs in the context of the base model defined earlier. The focus is on assessing the influence of factors and their interactions on estimable functions. We derive least squares estimators, establish their statistical properties, and construct hypothesis tests and confidence regions. These results are critical for understanding the effects of the experimental design and for validating the model’s assumptions.
Consider the case where the observation vector $\mathbf{Y}(l)$, for each of the $c$ treatments, satisfies the condition in Equation (33). The mean vector of $\mathbf{Y}(l)$ is $\boldsymbol{\mu}(l) = \mathbf{X} \boldsymbol{\beta}(l)$, and the variance–covariance matrix is $\sigma^2 \mathbf{I}_{br}$, with $\mathbf{X}$ having $k$ linearly independent column vectors. This type of model has been applied in many situations; see Mexia [13]. To simplify computation, we replace $\mathbf{X}$ with $\mathbf{X}^{o}$, derived using the Gram–Schmidt orthonormalization technique, ensuring $R(\mathbf{X}^{o}) = R(\mathbf{X})$. This yields the least squares estimators

$$\tilde{\boldsymbol{\beta}}^{o}(l) = \mathbf{X}^{o\top} \mathbf{Y}(l), \quad l = 1, \ldots, c,$$

with mean vectors $\boldsymbol{\beta}^{o}(l)$ and variance–covariance matrix $\sigma^2 \mathbf{I}_k$. The estimators $\tilde{\boldsymbol{\beta}}^{o}(l)$, $l = 1, \ldots, c$, are mutually independent and independent from the residual sums of squares

$$S(l) = \mathbf{Y}(l)^{\top} \mathbf{Y}(l) - \tilde{\boldsymbol{\beta}}^{o}(l)^{\top} \tilde{\boldsymbol{\beta}}^{o}(l), \quad l = 1, \ldots, c,$$

which are the products by $\sigma^2$ of independent central chi-squares with $n - k$ degrees of freedom, $S(l) \sim \sigma^2 \chi^2_{n-k}$, $l = 1, \ldots, c$. Thus,

$$S = \sum_{l=1}^{c} S(l) \sim \sigma^2 \chi^2_{g},$$

with $g = c(n - k)$.
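These quantities have direct R counterparts, with a QR factorization playing the role of Gram–Schmidt orthonormalization; the sketch below uses illustrative sizes and simulated data.

```r
# Sketch: least squares with an orthonormalized design matrix X_o
# (QR factorization in place of Gram-Schmidt; sizes and data illustrative).
set.seed(2)
n <- 12; k <- 3; c_trt <- 4                # illustrative dimensions
X   <- cbind(1, rnorm(n), rnorm(n))        # n x k design of full column rank
X_o <- qr.Q(qr(X))                         # orthonormal columns, R(X_o) = R(X)
Y   <- matrix(rnorm(n * c_trt), n, c_trt)  # one column per treatment
beta_o <- t(X_o) %*% Y                     # k x c matrix of estimators
S_l <- colSums(Y^2) - colSums(beta_o^2)    # residual sums of squares S(l)
S   <- sum(S_l)                            # pooled, with g = c*(n - k) d.f.
```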
To analyze the behavior of the individual components of the regression coefficients, we define the vectors $\boldsymbol{\lambda}(i)$ and $\tilde{\boldsymbol{\lambda}}(i)$. These quantities summarize the behavior of the estimators across treatments and allow us to study their distributional properties. Specifically, we define

$$\boldsymbol{\lambda}(i) = \big(\beta_i^{o}(1), \ldots, \beta_i^{o}(c)\big)^{\top}, \qquad \tilde{\boldsymbol{\lambda}}(i) = \big(\tilde{\beta}_i^{o}(1), \ldots, \tilde{\beta}_i^{o}(c)\big)^{\top}, \qquad i = 1, \ldots, k,$$

where $\boldsymbol{\lambda}(i)$ collects the true regression coefficients for the $i$-th component across all $c$ treatments, and $\tilde{\boldsymbol{\lambda}}(i)$ collects their least squares estimators. The estimators $\tilde{\boldsymbol{\lambda}}(i)$, $i = 1, \ldots, k$, are normally distributed, with mean vectors $\boldsymbol{\lambda}(i)$, $i = 1, \ldots, k$, and variance–covariance matrix $\sigma^2 \mathbf{I}_c$. Furthermore, they are independent of one another and also independent of the residual sums of squares $S(1), \ldots, S(c)$, and hence of the overall residual sum of squares $S$.
To study the influence of factors and interactions in the base design, we introduce an orthogonal partition of the space $\mathbb{R}^c$. This partition allows us to isolate and analyze the contributions of different effects and interactions through appropriate test statistics. Specifically, we assume the orthogonal partition

$$\mathbb{R}^c = \boxplus_{j=1}^{u} \nabla_j,$$

where $\boxplus$ denotes the orthogonal direct sum of the subspaces $\nabla_j$ associated with the effects and interactions of the factors. If the $g_j$ row vectors of $\mathbf{A}_j$ form an orthonormal basis for $\nabla_j$, $j = 1, \ldots, u$, we have

$$\mathbf{A}_j \mathbf{v} = \mathbf{0}_{g_j}, \quad j = 1, \ldots, u,$$

if and only if $\mathbf{v} \perp \nabla_j$, $j = 1, \ldots, u$. The symmetric structure of this partition ensures that estimates remain unbiased and efficiently computed.
To analyze specific effects, we define the quantities

$$\tilde{\boldsymbol{\psi}}_j(i) = \mathbf{A}_j \tilde{\boldsymbol{\lambda}}(i), \qquad \boldsymbol{\psi}_j(i) = \mathbf{A}_j \boldsymbol{\lambda}(i), \qquad \boldsymbol{\psi}_{0,j}(i) = \mathbf{A}_j \mathbf{d}(i),$$

where $i = 1, \ldots, k$ and $j = 1, \ldots, u$, and $\mathbf{d}(i)$ carries hypothesized values. Using these quantities, we define

$$\delta_j(i) = \frac{1}{\sigma^2} \big\| \boldsymbol{\psi}_j(i) - \boldsymbol{\psi}_{0,j}(i) \big\|^2$$

and

$$\delta_j = \sum_{i=1}^{k} \delta_j(i).$$
The hypotheses to be tested are

$$H_{0,j}(i, w_i): \delta_j(i) \le w_i, \qquad H_{0,j}(i): \delta_j(i) = 0, \qquad \bar{H}_{0,j}(w): \delta_j \le w,$$

where rejection of a null hypothesis indicates a significant effect or interaction.
To evaluate these hypotheses, we construct the test statistics

$$F_j(i) = \frac{g}{g_j} \frac{\big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_{0,j}(i) \big\|^2}{S}, \qquad F_j = \frac{g}{k g_j} \frac{\sum_{i=1}^{k} \big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_{0,j}(i) \big\|^2}{S},$$

which follow $F$-distributions with $g_j$ and $g$ [or $k g_j$ and $g$] degrees of freedom and non-centrality parameters $\delta_j(i)$ [or $\delta_j$], for $i = 1, \ldots, k$ and $j = 1, \ldots, u$.
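Continuing the regression sketch, one such statistic and its p-value can be computed as below; the matrix `A_j` is an illustrative one-row contrast, and $\boldsymbol{\psi}_{0,j}(i) = \mathbf{0}$ is assumed.

```r
# Sketch (continuing the regression sketch): F test for one effect subspace.
g_j <- 1
A_j <- matrix(rep(1, c_trt) / sqrt(c_trt), nrow = g_j)  # orthonormal row(s)
g   <- c_trt * (n - k)
i   <- 1                                   # test the i-th coefficient
psi_tilde <- A_j %*% beta_o[i, ]           # A_j applied to lambda_tilde(i)
F_ji <- (g / g_j) * sum(psi_tilde^2) / S   # psi_0 taken as zero
pf(F_ji, g_j, g, lower.tail = FALSE)       # p-value of the test
```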
The uniformity in variance structure, a direct consequence of symmetry, enables the construction of uniformly most powerful (UMP) tests. Specifically, as noted by Lehmann and Romano [20], the $F$-tests for the hypotheses $H_{0,j}(i)$, $i = 1, \ldots, k$, and $\bar{H}_{0,j}(0)$, $j = 1, \ldots, u$, are UMP within the class of tests whose power is determined by the non-centrality parameters. Furthermore, their power increases with these parameters, as established in Mexia [21].
Let $f_{p, g', g}$ denote the $p$-th quantile of the central $F$-distribution with $g'$ and $g$ degrees of freedom. Since

$$\big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_j(i) \big\|^2 \sim \sigma^2 \chi^2_{g_j}, \quad i = 1, \ldots, k, \; j = 1, \ldots, u,$$

and these quantities are independent from $S$, we have

$$\sum_{i=1}^{k} \big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_j(i) \big\|^2 \sim \sigma^2 \chi^2_{k g_j}, \quad j = 1, \ldots, u,$$

also independent of $S$. The pivot variables

$$F_{j,i} = \frac{g}{g_j} \frac{\big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_j(i) \big\|^2}{S}, \quad i = 1, \ldots, k, \; j = 1, \ldots, u, \qquad F_j = \frac{g}{k g_j} \frac{\sum_{i=1}^{k} \big\| \tilde{\boldsymbol{\psi}}_j(i) - \boldsymbol{\psi}_j(i) \big\|^2}{S}, \quad j = 1, \ldots, u,$$

follow central $F$-distributions with $g_j$ [or $k g_j$] and $g$ degrees of freedom.
This result provides a framework for constructing confidence regions for the parameters $\boldsymbol{\psi}_j(i)$, $i = 1, \ldots, k$, $j = 1, \ldots, u$. Specifically, the $1 - \alpha$ confidence regions are

$$\big\| \boldsymbol{\psi}_j(i) - \tilde{\boldsymbol{\psi}}_j(i) \big\|^2 \le \frac{g_j}{g} f_{1-\alpha, g_j, g} \, S, \quad i = 1, \ldots, k, \; j = 1, \ldots, u, \qquad \big\| \boldsymbol{\psi}_j - \tilde{\boldsymbol{\psi}}_j \big\|^2 \le \frac{k g_j}{g} f_{1-\alpha, k g_j, g} \, S, \quad j = 1, \ldots, u,$$

where

$$\tilde{\boldsymbol{\psi}}_j = \big[ \tilde{\boldsymbol{\psi}}_j(1)^{\top}, \ldots, \tilde{\boldsymbol{\psi}}_j(k)^{\top} \big]^{\top}, \quad j = 1, \ldots, u.$$
To validate the model, we use the $R^2(1), \ldots, R^2(c)$ values introduced earlier and the Bartlett homoscedasticity test; see Bartlett and Kendall [22]. Assuming

$$S(l) \sim \sigma^2(l) \chi^2_{b(r-1)}, \quad l = 1, \ldots, c,$$

with independent chi-squares, we test the hypothesis

$$H_0^{o}: \sigma^2(1) = \cdots = \sigma^2(c).$$

Rejection of $H_0^{o}$ indicates heteroscedasticity, suggesting that the model may need refinement.
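In R, the stats package supplies `bartlett.test`, which can be applied to the residuals of the regression sketch grouped by treatment (an approximate check, since residuals within a treatment are correlated):

```r
# Sketch (continuing the regression sketch): Bartlett's test of
# homoscedasticity across the c treatments, applied to the residuals.
resid_mat <- Y - X_o %*% beta_o            # n x c matrix of residuals
bartlett.test(as.vector(resid_mat),
              g = factor(rep(seq_len(c_trt), each = n)))
```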
Finally, we construct confidence regions for estimable functions. Let

$$\mathbf{a}^{\top} \tilde{\boldsymbol{\mu}} \sim N\big(\mathbf{a}^{\top} \boldsymbol{\mu}, \; \sigma^2 \, \mathbf{a}^{\top} \mathbf{X}^{o} \mathbf{X}^{o\top} \mathbf{a}\big),$$

and assume

$$S \sim \sigma^2 \chi^2_{c(n-k)}.$$

The pivot variable

$$F_{\mathbf{a}} = c(n-k) \, \frac{\big(\mathbf{a}^{\top} \tilde{\boldsymbol{\mu}} - \mathbf{a}^{\top} \boldsymbol{\mu}\big)^2}{\mathbf{a}^{\top} \mathbf{X}^{o} \mathbf{X}^{o\top} \mathbf{a} \; S}$$

has a central $F$-distribution with $1$ and $c(n-k)$ degrees of freedom. This leads to the $1 - \alpha$ confidence region

$$\big(\mathbf{a}^{\top} \tilde{\boldsymbol{\mu}} - \mathbf{a}^{\top} \boldsymbol{\mu}\big)^2 \le \frac{1}{c(n-k)} \, f_{1-\alpha, 1, c(n-k)} \; \mathbf{a}^{\top} \mathbf{X}^{o} \mathbf{X}^{o\top} \mathbf{a} \; S.$$

This confidence region quantifies the uncertainty in the estimable functions, providing a practical tool for inference.

4. Numerical Application

In this section, we first illustrate the theory using simulated data and then apply the proposed model to a real dataset.
All computations and simulations were conducted using R (version 4.3.0). The program estimates variance components within a balanced experimental design, defining both a full model and a two-block model. It constructs the necessary design matrices, simulates responses from a normal distribution, and applies the formulas presented in the previous sections to estimate the variance components.

4.1. Simulated Data

In agricultural experiments, it is common to study crop yields under different fertilizers while accounting for the variability introduced by different plots of land. In this example, let us consider such data, where plots are treated as fixed effects and fertilizers as random effects. The dataset consists of crop yields for four plots (fixed effect factor) under four different fertilizers (random effect factor), as shown in Table 1. To investigate potential differences in variability across subsets of the data, the dataset was divided into two blocks: Block 1, which includes data from Plot 1 and Plot 2, and Block 2, which includes data from Plot 3 and Plot 4.
The crop yield data in Table 1 were generated according to

$$\mathbf{Y}_l = \mathbf{1}_8 \mu + \mathbf{X}_l \boldsymbol{\beta} + \mathbf{Z}_l \boldsymbol{\gamma} + \boldsymbol{\varepsilon}_l, \quad l = 1, 2,$$

where $\mathbf{1}_8$ is a column vector of ones with eight elements $(r = 8)$, representing the intercept term; $\mathbf{Y}_l$ contains the observed crop yields for block $l$; $\boldsymbol{\beta}$ is a vector of unknown fixed effects associated with the plots; and $\boldsymbol{\gamma}$ is a vector of random effects associated with the fertilizers, assumed to be normally distributed with mean zero and variance–covariance matrix $\mathbf{G} = \theta_\gamma \mathbf{I}_4$, with $\theta_\gamma$ the variance component for the random effects and $\mathbf{I}_4$ the $4 \times 4$ identity matrix (one random effect per fertilizer). Additionally, $\boldsymbol{\varepsilon}_l$ represents the error term for block $l$, with its elements being independent and identically distributed, following a normal distribution with mean zero and variance $\sigma^2$.
The design matrices for the fixed and random effects are, respectively,

$$\mathbf{X}_l = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathbf{Z}_l = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

For data generation, the variance components were set as $\theta_\gamma = 1$ and $\sigma^2 = 1$. The crop yield data for each block, $\mathbf{Y}_l$, were simulated in R according to Equation (59), and $\theta_\gamma$ and $\sigma^2$ were estimated as indicated in Equation (16). The orthogonal transformation matrix $\mathbf{L}_r$ was taken as a $7 \times 8$ matrix with entries $\pm\tfrac{1}{2}$, whose rows are mutually orthogonal and orthogonal to $\mathbf{1}_8$ (the non-constant rows of an $8 \times 8$ Hadamard matrix, scaled by $\tfrac{1}{2}$).
This transformation makes the variance–covariance structure clearer. The variance of the transformed vector $\dot{\mathbf{Y}}_l$ is

$$\dot{\mathbf{V}}(\boldsymbol{\theta}) = \sum_{j=1}^{w} \theta_j \dot{\mathbf{M}}_j,$$

where

$$\dot{\mathbf{M}}_1 = \mathbf{L}_r \mathbf{Z}_l \mathbf{Z}_l^{\top} \mathbf{L}_r^{\top}, \qquad \dot{\mathbf{M}}_2 = \mathbf{L}_r \mathbf{I}_8 \mathbf{L}_r^{\top}.$$
Using the transformed data, the variance components $\theta_\gamma$ and $\sigma^2$ were estimated by solving

$$\mathrm{vech}\big(\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})\big) = \dot{\mathbf{M}} \boldsymbol{\theta},$$

through

$$\tilde{\boldsymbol{\theta}} = \dot{\mathbf{M}}^{+} \, \mathrm{vech}\big(\tilde{\dot{\mathbf{V}}}(\boldsymbol{\theta})\big).$$
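Putting the pieces together, the block-wise estimation for this design can be repeated and averaged as below; this is a compact sketch consistent with the earlier ones (it reuses `Z_l`, `Lr`, `m_dot`, `vech`, and `ginv`), with the fixed effects set to zero for simplicity, since only $\mathbf{1}_8 \mu$ is annihilated by $\mathbf{L}_r$.

```r
# Sketch (reusing Z_l, Lr, m_dot, vech and ginv from earlier sketches):
# repeat the single-block estimation and average the estimates.
estimate_once <- function() {
  gamma <- rnorm(4, 0, 1)                      # theta_gamma = 1
  eps   <- rnorm(8, 0, 1)                      # sigma2 = 1
  Y  <- 10 * rep(1, 8) + Z_l %*% gamma + eps   # beta = 0 for simplicity
  Yd <- Lr %*% Y                               # 1_8 * mu is annihilated by Lr
  as.vector(ginv(m_dot) %*% vech(Yd %*% t(Yd)))
}
est <- replicate(1000, estimate_once())
rowMeans(est)                              # mean estimates of (theta_gamma, sigma2)
```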
To analyze the entire model (i.e., using the data from all four plots without splitting into blocks), the same linear mixed model framework was applied. The dataset was treated as a single block, where the observation vector $\mathbf{Y}$ for all plots was modeled as

$$\mathbf{Y} = \mathbf{1}_{16} \mu + \mathbf{X} \boldsymbol{\beta} + \mathbf{Z} \boldsymbol{\gamma} + \boldsymbol{\varepsilon},$$

with $\mathbf{1}_{16}$ being a column vector of ones with 16 elements, $\mathbf{X}$ encoding the fixed effects for all plots, and $\mathbf{Z}$ representing the random effects for the fertilizers,

$$\mathbf{X} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{Z} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

The variance components $\theta_\gamma$ and $\sigma^2$ were estimated using the same method as before.
The estimation procedure was repeated 1000 times, yielding the mean variance component estimates in Table 2.
As can be seen, the estimates of the variance components $\theta_\gamma$ and $\sigma^2$ differed slightly when calculated within the blocks compared to the entire model. The entire-model estimates average the variability across all plots, giving a general overview, whereas the block-wise analysis yielded slightly more refined estimates for each block, indicating the possibility of different variance structures within the blocks. So, while the differences in estimates were not large, the block approach can be beneficial when there are substantial differences between groups (blocks), as it may lead to more accurate and tailored estimates of the variance components.

4.2. Real Data

In this subsection, we apply the proposed model to a real dataset related to housing affordability across different European countries. Specifically, we analyze the standardized house-price-to-income ratio for four countries (Croatia, Spain, Ireland, and Poland) over four years (2020–2023). This ratio measures how affordable housing is in each country, with higher values indicating higher house prices relative to income.
The dataset, sourced from Eurostat [23], is presented in Table 3.
Here, years are treated as a fixed effect, while countries are modeled as a random effect. The analysis follows the same methodology as in the simulated data case: we consider both the entire dataset and the block approach, dividing the dataset into Block 1 (2020 and 2021) and Block 2 (2022 and 2023). The results are shown in Table 4.
The random effect variance $\tilde{\theta}_\gamma$, which accounts for country-level differences, varies noticeably across blocks (15.4309 for Block 1 vs. 19.2157 for Block 2). The error variance $\tilde{\sigma}^2$ also differs (higher in Block 1, lower in Block 2), suggesting changes in overall variability over time. The entire model provides an averaged estimate of the variance components ($\tilde{\theta}_\gamma = 17.2496$, $\tilde{\sigma}^2 = 2.6584$), potentially smoothing over important structural differences across years.
The block-wise model provides a more refined view of the variance structure across different periods, which is valuable when significant shifts occur in the data. In contrast, the entire model offers a more general picture, which may be useful for broad trend analysis. In this case, since  θ ˜ γ  and  σ ˜ 2  differ across blocks, using a separated model can be advantageous if the goal is to capture year-specific variability. However, if the primary interest is in overall trends, the entire model remains a reasonable choice.

5. Limitations and Future Directions

While the models proposed in this paper offer significant advancements, several limitations should be acknowledged. First, the assumptions regarding the structure of the data, such as the CJA condition for variance–covariance matrices, may not always hold in practical applications. Additionally, the computational complexity of implementing these models could present challenges, particularly in large-scale datasets or when dealing with high-dimensional random effects. Future work could explore relaxing these assumptions.

6. Final Remarks

This paper offers a comprehensive guide to linear models with nested random effects. The statistical methods developed in this paper rely on symmetry principles. From the orthogonal transformations that simplify the variance structure to the independence and unbiasedness of estimators, symmetry plays a pivotal role in ensuring the robustness and elegance of the statistical framework. A significant innovative aspect is the extension of results to multi-treatment regression models with nested random effect factors, providing a more versatile modeling approach. The models developed are particularly practical when variance–covariance matrices fall under a commutative Jordan algebra (CJA), expanding their applicability. A numerical application is also provided to illustrate the practical implementation of these concepts and methods.

Funding

NECE and this work are supported by FCT—Fundação para a Ciência e a Tecnologia, I.P., through project references UIDB/00212/2020, UIDB/04630/2020 and UIDB/00297/2020.

Data Availability Statement

Eurostat Database. Available online: https://ec.europa.eu/eurostat/databrowser/view/tipsho60/default/table?lang=en&category=prc.prc_hpi (accessed on 1 February 2025).

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Brown, H.; Prescott, R. Applied Mixed Models in Medicine; Wiley: New York, NY, USA, 1999. [Google Scholar]
  2. Pinheiro, J.C.; Bates, D.M. Mixed-Effects Models in S and S-PLUS; Springer: New York, NY, USA, 2000. [Google Scholar]
  3. Rao, C.R.; Kleffe, J. Estimation of Variance Components and Applications; North-Holland: Amsterdam, The Netherlands, 1988. [Google Scholar] [CrossRef]
  4. Sahai, H.; Ageel, M.I. Analysis of Variance: Fixed, Random and Mixed Models; Birkhäuser: Cambridge, MA, USA, 2000. [Google Scholar]
  5. Demidenko, E. Mixed Models: Theory and Applications with R; Wiley: New York, NY, USA, 2013. [Google Scholar]
  6. Tack, L.; Müller, S. Fast REML estimation for linear mixed models with Kronecker product covariance structures. J. Comput. Graph. Stat. 2021, 30, 1148–1160. [Google Scholar]
  7. Lee, Y.; Nelder, J.A.; Pawitan, Y. Generalized Linear Mixed Models: A Unified Approach; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  8. Fonseca, M.; Mexia, J.T.; Zmyslony, R. Binary operation on Jordan algebras and orthogonal normal models. Linear Algebra Appl. 2006, 417, 75–86. [Google Scholar] [CrossRef]
  9. Mexia, J.T.; Vaquinhas, R.; Fonseca, M.; Zmyslony, R. COBS: Segregation, Matching, Crossing and Nesting. Latest Trends on Applied Mathematics, Simulation, Modeling. In Proceedings of the 4th International Conference on Applied Mathematics, Simulation, Modelling, Corfu Island, Greece, 22–25 July 2010. [Google Scholar]
  10. Ferreira, D.; Ferreira, S.S.; Nunes, C.; Mexia, J.T. Inference in nonorthogonal mixed models. Math. Methods Appl. Sci. 2022, 45, 3183–3196. [Google Scholar] [CrossRef]
  11. Ferreira, D.; Ferreira, S.S.; Antunes, P.; Oliveira, T.A.; Mexia, J.T. Inference in mixed models with a mixture of distributions and controlled heteroscedasticity. Commun. Stat.—Theory Methods 2024. [Google Scholar] [CrossRef]
  12. Bailey, R.A.; Cameron, P.J.; Ferreira, D.; Ferreira, S.S.; Nunes, C. Designs for Half-Diallel Experiments with Commutative Orthogonal Block Structure. J. Stat. Plan. Inference 2024, 231, 106139. [Google Scholar] [CrossRef]
  13. Mexia, J.T. Multi-Treatment Regression Designs; Universidade Nova de Lisboa: Lisbon, Portugal, 1987. [Google Scholar]
  14. Anderson, T.W. Introduction to Multivariate Statistical Analysis; John Wiley & Sons, Inc.: New York, NY, USA, 1958. [Google Scholar]
  15. Henderson, C.R. Estimation of genetic parameters. Ann. Math. Stat. 1950, 21, 309–310. [Google Scholar]
  16. Searle, S.R.; Casella, G.; McCulloch, C.E. Variance Components; John Wiley & Sons: New York, NY, USA, 1992. [Google Scholar]
  17. Lynch, M.; Walsh, B. Genetics and Analysis of Quantitative Traits; Sinauer Associates: Sunderland, MA, USA, 1998. [Google Scholar]
  18. Montgomery, D.C. Design and Analysis of Experiments, 8th ed.; John Wiley & Sons: New York, NY, USA, 2008. [Google Scholar]
  19. Scheffé, H. The Analysis of Variance; Wiley: New York, NY, USA, 1959. [Google Scholar]
  20. Lehmann, E.L.; Romano, J.P. Testing Statistical Hypotheses; Springer: New York, NY, USA, 2005. [Google Scholar]
  21. Mexia, J.T. Controlled Heteroscedasticity, Quocient Vector Spaces and F Tests for Hypothesis on Mean Vectors. Ph.D. Thesis, Universidade Nova de Lisboa, Lisbon, Portugal, 1987. [Google Scholar]
  22. Bartlett, M.S.; Kendall, D.G. The Statistical Analysis of Variance Heterogeneity and the Logarithmic Transformation. Suppl. J. R. Stat. Soc. 1946, 8, 128–138. [Google Scholar] [CrossRef]
  23. Eurostat Database. Available online: https://ec.europa.eu/eurostat/databrowser/view/tipsho60/default/table?lang=en&category=prc.prc_hpi (accessed on 1 February 2025).
Table 1. Crop yields under different fertilizers.

Plot    Fertilizer 1    Fertilizer 2    Fertilizer 3    Fertilizer 4
1       y1              y5              y9              y13
2       y2              y6              y10             y14
3       y3              y7              y11             y15
4       y4              y8              y12             y16
Table 2. Variance component estimates for different blocks and the entire model.

Block/Model                 $\tilde{\theta}_\gamma$    $\tilde{\sigma}^2$
Block 1 (Plots 1 and 2)     1.0408                     1.0165
Block 2 (Plots 3 and 4)     0.9834                     0.9997
Entire model (all plots)    1.0162                     1.0049
Table 3. Standardized house-price-to-income ratio—annual data.

Year    Croatia    Spain    Ireland    Poland
2020    93.91      92.71    99.36      97.92
2021    92.67      95.79    86.80      90.25
2022    94.97      93.30    100.87     96.56
2023    100.69     95.23    89.99      87.88
Table 4. Variance component estimates for different blocks and the entire model.

Block/Model                 $\tilde{\theta}_\gamma$    $\tilde{\sigma}^2$
Block 1 (2020 and 2021)     15.4309                    3.5482
Block 2 (2022 and 2023)     19.2157                    1.6213
Entire model (all years)    17.2496                    2.6584