Article

Asymptotic Results for Multinomial Models

by Isaac Akoto 1,2,*,†, João T. Mexia 1,3,† and Filipe J. Marques 1,3
1 NOVA School of Science and Technology, Campus de Caparica, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
2 Department of Mathematics and Statistics, University of Energy and Natural Resources, Sunyani P.O. Box 214, Ghana
3 Center of Mathematics and Its Applications, NOVA School of Science and Technology, Campus de Caparica, Universidade NOVA de Lisboa, 2829-516 Lisbon, Portugal
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2021, 13(11), 2173; https://doi.org/10.3390/sym13112173
Submission received: 6 October 2021 / Revised: 30 October 2021 / Accepted: 4 November 2021 / Published: 12 November 2021
(This article belongs to the Special Issue Probability, Statistics and Applied Mathematics)

Abstract: In this work, we derived new asymptotic results for multinomial models. To obtain these results, we started by studying limit distributions in models with a compact parameter space. This restriction holds since the key parameter, whose components are the probabilities of the possible outcomes, has non-negative components that add up to 1. Based on these results, we obtained confidence ellipsoids and simultaneous confidence intervals for models with normal limit distributions. We then studied the covariance matrices of the limit normal distributions for the multinomial models. This was a transition between the previous general results and the inference for multinomial models, in which we considered chi-square tests, confidence regions and non-linear statistics, namely log-linear models, with two numerical applications to those models. In particular, our approach overcame the hierarchical restrictions assumed when analysing multidimensional contingency tables.

1. Introduction

In several fields of study, such as health, business, the social sciences and education, the outcomes of variables are mainly discrete, i.e., the variables take only finitely or countably many values. A discrete variable whose outcome takes only finitely many values is called a categorical variable [1]. A categorical variable consists of a set of non-overlapping categories [2], and the outcome can be binary (dichotomous), i.e., with just two possible levels, such as a desired condition being “present” or “absent”, or polytomous, i.e., with more than two levels, as is the case of the “Likert” scale [3]. There are two common types of polytomous variables, as can be seen in [4]: the ordinal and the nominal scales of measurement. Categorical variables such as one’s eye colour, ethnicity and affiliations, whose categories cannot be ordered in any way, are nominal, while categories such as a patient’s level of resistance to a drug, the level of education and economic status exhibit a natural order and are thus ordinal.
In a study in which all the observed variables are categorical, the most common way of representing the data is a contingency table, which is a cross-tabulation of the variables [5,6]. When there are m variables, the contingency table is an m-dimensional table, also known as a multidimensional table when there are more than two attributes. The information in a contingency table is mainly summarized through appropriate measures, such as measures of association, or through models. Association measures, although easy to compute and interpret, lead to a great loss of information, as can be seen in [7]. Models are preferred when a more sensitive analysis is required. A model is a “theory” or a conceptual framework about the observations, and the parameters in the model represent the “effects” that particular variables or combinations of variables have in determining the values taken by the observations.
The simplest and most common model for a contingency table is the log-linear model [8]. It is constructed by taking the natural logarithms of the cell probabilities, by analogy with the analysis of variance (ANOVA) models, as can be seen in [9,10,11]. Classical log-linear models are sometimes regarded within the framework of the generalized linear model (GLM). They are also important in connection with contingency matrices, as can be seen in [12]. Contemporary problems in categorical data analysis, with extremely high-dimensional data and demanding computational procedures, require the development of complex models. Much work has been done on the modelling of categorical data, as can be seen in [7,13]. For example, in [14], the author used regression models for modelling categorical data. In our work, we derived new asymptotic results that enable us to obtain confidence ellipsoids and simultaneous confidence intervals, respectively, for the vector of probabilities and its components, allowing us to overcome some inference limitations of the existing procedures.
Inferential statistical analysis requires assumptions about the probability distribution of the response variable. For categorical data, the main distribution is the multinomial distribution. Most of the time, categorical data result from n independent and identical trials, each trial having two or more possible outcomes. When the n trials are identical and independent, with the same category probabilities for each trial, the distribution of the counts in the various categories is the multinomial distribution. The binomial distribution is the special case of the multinomial distribution with just two possible outcomes for each trial. Usually, the parameters of the multinomial distribution are not known, and these parameters are often estimated from the sample data by several estimation methods, such as maximum likelihood estimation (MLE), as can be seen, for instance, in [15], minimum discrimination information (MDI) [16], weighted least squares (WLS) [17] and Bayesian estimation (BA) [18]. In a previous study [19], we wanted to minimize the average cost, so we used statistical decision theory (SDT), since there were only a finite number of possible choices. We point out that we achieved consistency, since the probability of selecting the choice with the least average cost tends towards 1 as the sample size tends towards infinity.
If we have $n$ realizations of an experiment with $m$ possible results with probabilities $p_1,\dots,p_m$, we have the probability mass function, as can be seen in [20,21]:

$$\Pr\left(\bigcap_{l=1}^{m}\{N_l=n_l\}\right)=\frac{n!}{\prod_{l=1}^{m}n_l!}\prod_{l=1}^{m}p_l^{n_l}$$

for the vector $N=(N_1,\dots,N_m)$ of the numbers of times we obtain the different results. This probability mass function corresponds to the singular multinomial distribution $M(\cdot\,|\,n,p)$. We call multinomial the models describing such sets of independent realizations of experiments with a finite number of results.
For the vector $p=(p_1,\dots,p_m)$ of probabilities, we have the vector of estimators:

$$\tilde{p}=(\tilde{p}_1,\dots,\tilde{p}_m)$$

with:

$$\tilde{p}_l=\frac{n_l}{n},\quad l=1,\dots,m.$$

Moreover, as can be seen in [22], as $n\to\infty$:

$$\sqrt{n}\,(\tilde{p}-p)\sim N(0,U(p))$$

where $\sim$ indicates the limit distribution, in this case $N(0,U(p))$, the normal distribution with null mean vector and covariance matrix:

$$U(p)=D(p)-pp^{t},$$

where $D(p)$ is the diagonal matrix with principal elements $p_1,\dots,p_m$. This result will play an important role in the asymptotic treatment of multinomial models, which is this paper’s goal.
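To make this limit result concrete, the following minimal simulation sketch (illustrative only: the probability vector, sample size and replication count are assumptions for the example, not values from the paper) checks that the empirical covariance of $\sqrt{n}(\tilde{p}-p)$ approaches $U(p)=D(p)-pp^{t}$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])               # assumed outcome probabilities, m = 3
n, reps = 10_000, 5_000                     # assumed sample size and replications

counts = rng.multinomial(n, p, size=reps)   # reps independent multinomial samples
p_tilde = counts / n                        # rows are realizations of the estimator
z = np.sqrt(n) * (p_tilde - p)              # centred and rescaled estimates

U = np.diag(p) - np.outer(p, p)             # limit covariance U(p) = D(p) - p p^t
print(np.cov(z, rowvar=False).round(3))     # empirical covariance, approximately U(p)
print(U.round(3))
```

The two printed matrices should agree up to simulation noise, which shrinks as the number of replications grows.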
To carry out that asymptotic treatment, we start by obtaining a convenient version of the continuous mapping theorem [23] in the next section on limit distributions. Then, we obtain confidence regions in Section 3, namely confidence ellipsoids and simultaneous confidence intervals. In Section 4, we study the algebraic structure of the limit covariance matrix $U(p)$.
In Section 5, we obtain chi-square tests for hypotheses on the outcome probabilities, as well as confidence ellipsoids and simultaneous confidence intervals for them. We also consider log-linear models, for which we present a numerical application. We point out that our approach to these models overcomes the hierarchical restriction used to analyse multidimensional contingency tables.
Our use of both the classical and the new version of the parametrized continuous mapping theorem (PCMT) enables us to carry out statistical inference for multinomial models. This inference is similar to ANOVA and related techniques, but F-tests are replaced by chi-square tests, which is highly convenient since we now have an infinity of degrees of freedom for the error.
Finally, we stress the close relationship between our ANOVA-like inference using chi-square tests and the usual treatment of fixed effects models. We point out that the F-tests in that treatment have interesting invariance properties that express the symmetry of those models, especially since those models are associated with orthogonal partitions into sub-spaces which are invariant under rotation.

2. Limit Distributions

Let $C$ be the class of continuous functions. If $l(\cdot)\in C$ and the distribution $F_{Y_n}$ of $Y_n$ converges to $F_{Y}\in C$ (that is, $F_{Y_n}(y)\rightarrow F_{Y}(y)$ whenever $y$ is a continuity point of $F_{Y}(\cdot)$), we have, as can be seen in [23,24,25,26]:

$$F_{l(Y_n)}\underset{n\to\infty}{\longrightarrow}F_{l(Y)}$$

as follows from the continuous mapping theorem.
If $z$ is obtained superposing the sub-vectors $u$ and $v$, we put $z=(u^{t},v^{t})^{t}$. Then, if $\theta_n\overset{p}{\longrightarrow}\theta$, with $\theta$ belonging to a compact set $D$, and $F_{Y_n}\rightarrow F_{Y}$, putting $\dot{Y}_n=(Y_n^{t},\theta_n^{t})^{t}$, $Y_n^{+}=(Y_n^{t},\theta^{t})^{t}$ and $\dot{Y}=(Y^{t},\theta^{t})^{t}$, to show that:

$$F_{\dot{l}(\dot{Y}_n)}\longrightarrow F_{\dot{l}(\dot{Y})}(\cdot\,|\,\theta),$$

it is only needed to show that, as can be seen in [27]:

$$\sup\left|F_{\dot{l}(\dot{Y}_n)}-F_{\dot{l}(Y_n^{+})}\right|\underset{n\to\infty}{\longrightarrow}0$$

since $F_{\dot{l}(Y_n^{+})}(\cdot)=F_{\dot{l}(\dot{Y})}(\cdot\,|\,\theta)$.
With $\xi_n(\cdot)$ and $\xi(\cdot)$ the probability measures associated with $F_{Y_n}$ and $F_{Y}$, and representing the Cartesian product by $\times$, whatever $\varepsilon>0$, there exists a parallelepiped:

$$H(\varepsilon)=\underset{i=1}{\overset{m}{\times}}\,[a_i(\varepsilon);b_i(\varepsilon)]$$

with $Y_n,Y\in\mathbb{R}^{m}$, such that $\xi(H(\varepsilon))\geq 1-\varepsilon$. Since $F_{Y_n}\rightarrow F_{Y}$, we have $\xi_n(H(\varepsilon))\underset{n\to\infty}{\longrightarrow}\xi(H(\varepsilon))$, and so there will be an $n(\varepsilon)$ such that, for $n>n(\varepsilon)$:

$$\xi_n(H(\varepsilon))>1-2\varepsilon.$$

Now:

$$\dot{H}(\varepsilon)=H(\varepsilon)\times D$$

will also be compact. Thus, if $\dot{l}(\cdot)\in C$, its restriction to $\dot{H}(\varepsilon)$ will be uniformly continuous. So, whatever $\delta>0$, there exists $\delta(\delta)>0$ such that, if $\dot{Y}',\dot{Y}''\in\dot{H}(\varepsilon)$ and $\|\dot{Y}'-\dot{Y}''\|\leq\delta(\delta)$, then $\|\dot{l}(\dot{Y}')-\dot{l}(\dot{Y}'')\|<\delta$, where $\|\cdot\|$ indicates the Euclidean norm of a vector.
Let $\dot{E}_n(\delta)$ and $\dot{E}_n(\varepsilon)$ be the events that occur when $\|\dot{l}(\dot{Y}_n)-\dot{l}(Y_n^{+})\|<\delta$ and when $Y_n\in H(\varepsilon)$, respectively. We now establish:
Lemma 1.
$$\Pr\left(\dot{E}_n(\delta)\right)\underset{n\to\infty}{\longrightarrow}1.$$
Proof. 
Since the restriction of $\dot{l}(\cdot)$ to $\dot{H}(\varepsilon)$ is uniformly continuous:

$$\Pr\left(\dot{E}_n(\delta)\,|\,\dot{E}_n(\varepsilon)\right)\underset{n\to\infty}{\longrightarrow}1.$$

Thus, we only have to point out that $\varepsilon$ is arbitrary and that:

$$\Pr\left(\dot{E}_n(\delta)\,|\,\dot{E}_n(\varepsilon)\right)-\Pr\left(\dot{E}_n(\delta)\right)\leq 1-\Pr\left(\dot{E}_n(\varepsilon)\right)$$

so that:

$$\Pr\left(\dot{E}_n(\delta)\right)\geq\Pr\left(\dot{E}_n(\delta)\,|\,\dot{E}_n(\varepsilon)\right)-\left(1-\Pr\left(\dot{E}_n(\varepsilon)\right)\right)$$

where $\limsup_{n}\left(1-\Pr\left(\dot{E}_n(\varepsilon)\right)\right)\leq\varepsilon$, whatever $\varepsilon>0$, to establish the thesis. □
Since:

$$\Pr\left(\dot{E}_n(\delta)\right)\underset{n\to\infty}{\longrightarrow}1,\quad\delta>0,$$

we have:

$$\Pr\left(\dot{l}(Y_n^{+})\leq z\pm\varepsilon 1_m\,\middle|\,\dot{E}_n(\delta)\right)\underset{n\to\infty}{\longrightarrow}\Pr\left(\dot{l}(Y_n^{+})\leq z\pm\varepsilon 1_m\right)=F_{\dot{l}(Y_n^{+})}\left(z\pm\varepsilon 1_m\right)$$
$$\Pr\left(\dot{l}(\dot{Y}_n)\leq z\pm\varepsilon 1_m\,\middle|\,\dot{E}_n(\delta)\right)\underset{n\to\infty}{\longrightarrow}\Pr\left(\dot{l}(\dot{Y}_n)\leq z\pm\varepsilon 1_m\right)=F_{\dot{l}(\dot{Y}_n)}\left(z\pm\varepsilon 1_m\right)$$

thus, also whatever $\delta>0$, there exists $\ddot{n}(\varepsilon)$ such that, for $n\geq\ddot{n}(\varepsilon)$, we have:

$$F_{\dot{l}(Y_n^{+})}(z)=F_{\dot{l}(\dot{Y}_n)}(z\,|\,\theta_n)\underset{n\to\infty}{\longrightarrow}F_{\dot{l}(\dot{Y})}(z\,|\,\theta)$$

as well as:

$$F_{\dot{l}(Y_n^{+})}(z)\underset{n\to\infty}{\longrightarrow}F_{\dot{l}(\dot{Y})}(z\,|\,\theta)$$

whenever $z$ is a continuity point of $F_{\dot{l}(\dot{Y})}$. Thus, we have the parametrized continuous mapping theorem (PCMT).
If $\dot{l}(\cdot)\in C$, $F_{Y_n}\rightarrow F_{Y}\in C$, and $\theta_n\overset{p}{\underset{n\to\infty}{\longrightarrow}}\theta$, with $\theta\in D$, a compact set:

$$F_{\dot{l}(\dot{Y}_n)}(\cdot)\longrightarrow F_{\dot{l}(\dot{Y})}(\cdot\,|\,\theta).$$
Remark 1 (Corollary to the PCMT).
(a) With $\theta$ a parameter both for $F_{Y}$ and $F_{l(Y)}$;
(b) when $F_{Y}$ is $N(0,V(\theta))$ and $\theta_n\overset{p}{\longrightarrow}\theta$, we have:

$$F_{\dot{l}(\dot{Y}_n)}\longrightarrow N\left(0,\,G\,V(\theta)\,G^{t}\right)$$

when $\dot{l}(\dot{Y}_n)=G\,(Y_n^{t},\theta_n^{t})^{t}$, with $G$ a matrix and $V(\theta)$ the covariance matrix for the parameter $\theta$.
This remark will be renamed as a corollary of the PCMT (CPCMT), as can be seen in [6].
We now consider the case of a sequence of random vectors with the same limit distribution. The sequence $\{X_n\}$ of random vectors is mean stable when all its vectors have the same mean vector $\mu$.
Between mean stable sequences, we may establish an equivalence relation, writing $X_n\,\gamma\,Y_n$ if and only if:

$$S_n=\sup_{i}\left|Y_{n,i}-X_{n,i}\right|\;\overset{s}{\underset{n\to\infty}{\longrightarrow}}\;0$$

where $\overset{s}{\longrightarrow}$ means stochastic convergence, i.e., convergence in probability, as can be seen in [27,28]. We now establish:
Proposition 1.
If $S_n\overset{s}{\longrightarrow}0$ and $F_{X_n}\rightarrow F\in C$, then $F_{Y_n}\rightarrow F$, whenever $X_n\,\gamma\,Y_n$.
Proof. 
Let the vectors in $\{X_n\}$ and $\{Y_n\}$ have $m$ components. Given $x\in\mathbb{R}^{m}$, we consider the events $A_{n,1}(\varepsilon)=\{X_n\leq x-\varepsilon 1_m\}$, $A_{n,2}(\varepsilon)=\{Y_n\leq x\}$, $A_{n,3}(\varepsilon)=\{X_n\leq x+\varepsilon 1_m\}$ and $B_n(\varepsilon)=\{S_n\leq\varepsilon\}$.
Taking $\dot{A}_{n,i}(\varepsilon)=A_{n,i}(\varepsilon)\cap B_n(\varepsilon)$, and $\Rightarrow$ to indicate implication, we have $\dot{A}_{n,1}(\varepsilon)\Rightarrow\dot{A}_{n,2}(\varepsilon)\Rightarrow\dot{A}_{n,3}(\varepsilon)$. Thus, with $q_{n,i}(\varepsilon)=\Pr\left(A_{n,i}(\varepsilon)\right)$ and $\dot{q}_{n,i}(\varepsilon)=\Pr\left(\dot{A}_{n,i}(\varepsilon)\right)$, $i=1,2,3$, we have $\dot{q}_{n,1}\leq\dot{q}_{n,2}\leq\dot{q}_{n,3}$.
Moreover, since $S_n\overset{s}{\underset{n\to\infty}{\longrightarrow}}0$, whatever $\delta>0$, there exists $n(\varepsilon,\delta)$ such that, for $n>n(\varepsilon,\delta)$, $\Pr\left(B_n(\varepsilon)\right)>1-\delta$, so $\dot{q}_{n,i}(\varepsilon)\leq q_{n,i}(\varepsilon)\leq\dot{q}_{n,i}(\varepsilon)+\delta$, and, since $\delta$ is arbitrary:

$$\lim_{n\to\infty}\dot{q}_{n,i}(\varepsilon)=\lim_{n\to\infty}q_{n,i}(\varepsilon),\quad i=1,2,3$$

so:

$$\lim_{n\to\infty}q_{n,1}(\varepsilon)\leq\lim_{n\to\infty}q_{n,2}(\varepsilon)\leq\lim_{n\to\infty}q_{n,3}(\varepsilon).$$

Since $F\in C$, we have $\lim_{n\to\infty}q_{n,1}(\varepsilon)=F(x-\varepsilon 1_m)$ and $\lim_{n\to\infty}q_{n,3}(\varepsilon)=F(x+\varepsilon 1_m)$, so:

$$F(x-\varepsilon 1_m)\leq\lim_{n\to\infty}F_{Y_n}(x)\leq F(x+\varepsilon 1_m)$$

and, given that $\varepsilon$ is arbitrary and $x$ may be any point, from $F\in C$ we obtain:

$$F_{Y_n}(x)\underset{n\to\infty}{\longrightarrow}F(x)$$

which completes the proof. □
We then consider normal limit distributions, starting with:
Proposition 2.
Let $G(\cdot)=(g_1(\cdot),\dots,g_w(\cdot))^{t}$ be such that its component functions have gradients $\nabla g_1(\cdot),\dots,\nabla g_w(\cdot)$, Hessian matrices $\underline{g}_1(\cdot),\dots,\underline{g}_w(\cdot)$, and continuous second-order partial derivatives. Whatever the mean stable sequence $\{Z_n\}$ with invariant mean vector $\mu$, taking $Y_n=G(Z_n)$ and $X_n=G(\mu)+\nabla G(\mu)(Z_n-\mu)$, with $\nabla G(\cdot)$ the Jacobian matrix whose row vectors are the gradients $\nabla g_1(\cdot)^{t},\dots,\nabla g_w(\cdot)^{t}$, we have $X_n\,\gamma\,Y_n$, whenever $\sqrt{n}(Z_n-\mu)$ converges in distribution to $F\in C$.
Proof. 
We have $g_j(Z_n)-g_j(\mu)=\nabla g_j(\mu)^{t}(Z_n-\mu)+\frac{1}{2}(Z_n-\mu)^{t}\underline{g}_j(\theta_{n,j})(Z_n-\mu)$, with $\theta_{n,j}$ between $Z_n$ and $\mu$. Since $Z_n\overset{s}{\longrightarrow}\mu$, we also have $\theta_{n,j}\overset{s}{\longrightarrow}\mu$. Then, with $\Theta_\varepsilon(\mu)$ the radius $\varepsilon$ sphere with centre $\mu$, $\Pr\left(Z_n,\theta_{n,j}\in\Theta_\varepsilon(\mu)\right)\longrightarrow 1$, $j=1,\dots,w$. Now, $\underline{g}_j(z)$ is a continuous function of $z$, so its spectral radius will have a maximum $u_{j,\varepsilon}$ in $\Theta_\varepsilon(\mu)$, and thus:

$$\frac{1}{2}\left|(Z_n-\mu)^{t}\underline{g}_j(\theta_{n,j})(Z_n-\mu)\right|\leq u_{j,\varepsilon}\left\|Z_n-\mu\right\|^{2},\quad j=1,\dots,w$$

thus:

$$g_j(Z_n)-g_j(\mu)-\nabla g_j(\mu)^{t}(Z_n-\mu)\overset{s}{\longrightarrow}0,\quad j=1,\dots,w$$

and so:

$$G(Z_n)-G(\mu)-\nabla G(\mu)(Z_n-\mu)\overset{s}{\longrightarrow}0$$

and the thesis follows from Proposition 1. □
Corollary 3.
If $\sqrt{n}(Z_n-\mu)\sim N(0,V)$, under the hypothesis of Propositions 1 and 2, $\sqrt{n}\left(G(Z_n)-G(\mu)\right)\sim N\left(0,\nabla G(\mu)V\nabla G(\mu)^{t}\right)$.
Proof. 
The thesis follows from Propositions 1 and 2, since the continuous mapping theorem, as can be seen in [23,24], implies that the limit distribution of $\sqrt{n}\,\nabla G(\mu)(Z_n-\mu)$ is $N\left(0,\nabla G(\mu)V\nabla G(\mu)^{t}\right)$. □
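As a numerical illustration of Corollary 3, the sketch below (same assumed setup as the previous snippet) takes $G(z)=\log z$ componentwise, for which the Jacobian at $p$ is $D(p)^{-1}$, and checks that $\sqrt{n}\left(G(\tilde{p}_n)-G(p)\right)$ has empirical covariance close to $D(p)^{-1}U(p)D(p)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.5, 0.3, 0.2])                   # assumed probabilities
n, reps = 10_000, 5_000

p_tilde = rng.multinomial(n, p, size=reps) / n
z = np.sqrt(n) * (np.log(p_tilde) - np.log(p))  # G(z) = log z, componentwise

U = np.diag(p) - np.outer(p, p)
Dinv = np.diag(1.0 / p)                         # Jacobian of log at p
print(np.cov(z, rowvar=False).round(3))         # approximately Dinv U Dinv
print((Dinv @ U @ Dinv).round(3))
```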

3. Confidence Ellipsoids

We start by establishing the following.
Proposition 4.
If $Y$, not necessarily normal, has a covariance matrix $C$, with range space $\Omega=R(C)$, and mean vector $\mu$, then:

$$\Pr(Y-\mu\in\Omega)=1.$$

Proof. 
Let $\alpha_1,\dots,\alpha_m$ constitute an orthonormal basis for the orthogonal complement, $\Omega^{\perp}$, of $\Omega$. Then, $\alpha_j^{t}(Y-\mu)$ will have a null mean value and a null variance. Thus, according to the Bienaymé–Tchebycheff inequality:

$$\Pr\left(\alpha_j^{t}(Y-\mu)=0\right)=1,\quad j=1,\dots,m.$$

Therefore, we obtain, with $A$ the matrix with row vectors $\alpha_1^{t},\dots,\alpha_m^{t}$:

$$\Pr(Y-\mu\in\Omega)=\Pr\left(A(Y-\mu)=0\right)=\Pr\left(\bigcap_{j=1}^{m}\left\{\alpha_j^{t}(Y-\mu)=0\right\}\right)\geq m-(m-1)=1,$$

as follows from the generalized Boole inequalities, and so the thesis is established. □
We then have:
Lemma 2.
Given $B$, a positive semi-definite [definite] matrix with positive eigenvalues $\varepsilon_1,\dots,\varepsilon_h$ corresponding to the eigenvectors $\alpha_1,\dots,\alpha_h$, we have $B=A^{t}DA$ and $B^{+}=A^{t}D^{-1}A$, with $+$ indicating the Moore–Penrose inverse, $D$ the diagonal matrix with principal elements $\varepsilon_1,\dots,\varepsilon_h$ and $A^{t}=[\alpha_1,\dots,\alpha_h]$.
Proof. 
It is easy to show that $BB^{+}$ and $B^{+}B$ are symmetrical and that $BB^{+}B=B$ and $B^{+}BB^{+}=B^{+}$, which establishes the thesis. □
We can now establish the following.
Proposition 5.
If $Y\sim N(\mu,\sigma^{2}B)$, with $B^{+}$ the Moore–Penrose inverse of $B$:

$$(Y-\mu)^{t}B^{+}(Y-\mu)\sim\sigma^{2}\chi^{2}_{h}$$

where $\chi^{2}_{h}$ is a central chi-square distribution with $h=\operatorname{rank}(B)$ degrees of freedom and $\sigma^{2}$ is the variance of $Y$.
Proof. 
As stated in Lemma 2, we have:

$$(Y-\mu)^{t}B^{+}(Y-\mu)=\ddot{Y}^{t}\ddot{Y}\sim\sigma^{2}\chi^{2}_{h}$$

with $\ddot{Y}=D^{-\frac{1}{2}}A(Y-\mu)$, where $D^{-\frac{1}{2}}$ is the diagonal matrix with principal elements $\varepsilon_1^{-1/2},\dots,\varepsilon_h^{-1/2}$. We now only have to point out that $\ddot{Y}\sim N(0,\sigma^{2}I_h)$, where $I_h$ is the identity matrix, to establish the thesis. □
We now consider confidence ellipsoids and simultaneous confidence intervals. Ellipsoids and their support planes are presented in [29]; the point $x$ belongs to the ellipsoid:

$$\xi(\mu,B,r)=\left\{x:\;(x-\mu)^{t}B^{+}(x-\mu)\leq r\right\}$$

if and only if:

$$\forall v\quad\left|v^{t}\mu-v^{t}x\right|\leq\sqrt{r\,v^{t}Bv},$$

where $\forall v$ indicates that all possible vectors $v$ are considered. We now establish:
where v indicates that all possible vectors v are considered. We now establish:
Proposition 6.
If $Y\sim N(\mu,B)$:

$$\Pr\left(\forall v\;\left|v^{t}\mu-v^{t}Y\right|\leq\sqrt{x_{h,1-q}\,v^{t}Bv}\right)=1-q,$$

with $x_{h,1-q}$ the $(1-q)$-th quantile of $\chi^{2}_{h}$ (the central chi-square with $h$ degrees of freedom), when $\operatorname{rank}(B)=h$.
Proof. 
The proof for the case $Y\sim N(\mu,B)$ follows directly from the previous considerations. Thus, we only have to point out that $x\in\xi(\mu,B,r)$ is equivalent, as can be seen in [29], to:

$$\forall v\quad\left|v^{t}\mu-v^{t}x\right|\leq\sqrt{r\,v^{t}Bv}.$$

 □
Since, as we saw, $(Y-\mu)^{t}B^{+}(Y-\mu)\sim\sigma^{2}\chi^{2}_{h}$ when $Y\sim N(\mu,\sigma^{2}B)$, we have $n(p-\tilde{p})^{t}U(p)^{+}(p-\tilde{p})\sim\chi^{2}_{m-1}$, because, as we shall see in the next section, $\operatorname{rank}(U(p))=m-1$.
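As a minimal sketch of how this pivot can be evaluated (the hypothesised vector and sample size are assumptions made for the example; scipy is assumed available for the chi-square quantile):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
p0 = np.array([0.5, 0.3, 0.2])                # hypothesised probability vector
n = 10_000
p_tilde = rng.multinomial(n, p0) / n          # estimates from one sample

U_hat = np.diag(p_tilde) - np.outer(p_tilde, p_tilde)
d = p0 - p_tilde
stat = n * d @ np.linalg.pinv(U_hat) @ d      # pivot, approximately chi-square
m = p0.size
print(stat, chi2.ppf(0.95, df=m - 1))         # p0 is covered when stat <= quantile
```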
In the next section, we will obtain results on $U(p)$ that will be used to obtain chi-square confidence regions for $p$ and, through duality, to test hypotheses on $p$.

4. Covariance Matrices

As we saw, for $\tilde{p}$, the limit covariance matrix of $\sqrt{n}(\tilde{p}-p)$ is:

$$U(p)=D(p)-pp^{t}$$

where $p=(p_1,\dots,p_m)$, $p_j>0$, $j=1,\dots,m$, and $\sum_{j=1}^{m}p_j=1$. For the rank of this covariance matrix, we have:

$$\operatorname{rank}\left(U(p)\right)=\operatorname{rank}\left(D(p)-pp^{t}\right)\geq m-1$$

since $\operatorname{rank}(D(p))=m$ and $\operatorname{rank}(pp^{t})=1$, as follows from $|\operatorname{rank}(A)-\operatorname{rank}(B)|\leq\operatorname{rank}(A+B)$, as can be seen in [30], page 46. In addition to this, $U(p)1_m=0$, so $\operatorname{rank}(U(p))\leq m-1$. Thus, $\operatorname{rank}(U(p))=m-1$.
Matrix $U(p)$ is a covariance matrix which, as can be seen in [30], is positive semi-definite. There is therefore an orthogonal matrix $P(p)$ and a diagonal matrix $D(v)$, whose principal elements are the eigenvalues $v_1,\dots,v_m$ of $U(p)$, such that:

$$U(p)=P(p)D(v)P(p)^{t}.$$

Since $\operatorname{rank}(U(p))=m-1$, we may order its eigenvalues to have $v_j>0$, $j=1,\dots,m-1$, and $v_m=0$. With $D(v)^{\frac{1}{2}}$ the diagonal matrix with principal elements $v_1^{1/2},\dots,v_m^{1/2}$ and:

$$U(p)^{\frac{1}{2}}=P(p)D(v)^{\frac{1}{2}}P(p)^{t}$$

we will have:

$$U(p)=U(p)^{\frac{1}{2}}U(p)^{\frac{1}{2}}.$$
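The sketch below (with an illustrative $p$) checks these algebraic facts numerically: the rank $m-1$, the null vector $1_m$, and the square root factorization of $U(p)$:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])                 # illustrative probability vector
m = p.size
U = np.diag(p) - np.outer(p, p)

v, P = np.linalg.eigh(U)                      # U(p) = P D(v) P^t, eigenvalues in v
print(np.linalg.matrix_rank(U) == m - 1)      # rank U(p) = m - 1
print(np.allclose(U @ np.ones(m), 0))         # U(p) 1_m = 0

U_half = P @ np.diag(np.sqrt(v.clip(min=0.0))) @ P.T
print(np.allclose(U_half @ U_half, U))        # U(p) = U(p)^{1/2} U(p)^{1/2}
```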
We now establish:
Lemma 3.
If the $m\times m$ matrices $M_1,\dots,M_w$ are such that $M_j^{t}M_{j'}=0_{m\times m}$ when $j\neq j'$, we have $\operatorname{rank}\left(\sum_{j=1}^{w}M_j\right)=\sum_{j=1}^{w}\operatorname{rank}(M_j)$.
Proof. 
With $g_j=\operatorname{rank}(M_j)$ and $M_j=[m_{j,1},\dots,m_{j,m}]$, $j=1,\dots,w$, there will be $g_j$ linearly independent column vectors $\{\dot{m}_{j,l};\,l\in D_j\}$ of $M_j$, $j=1,\dots,w$. The vectors in $\bigcup_{j=1}^{w}\{\dot{m}_{j,l};\,l\in D_j\}$ will be linearly independent since, when $j\neq j'$, they are orthogonal. Thus, $\operatorname{rank}[M_1\cdots M_w]\geq\sum_{j=1}^{w}g_j=\sum_{j=1}^{w}\operatorname{rank}(M_j)$. Moreover, if we join another column vector of $[M_1\cdots M_w]$, say $m_{j,l}$, to the set $\bigcup_{j=1}^{w}\{\dot{m}_{j,l};\,l\in D_j\}$, it will depend linearly on the $\{\dot{m}_{j,l};\,l\in D_j\}$. Thus, the vectors in the extended set will not be linearly independent, and so $\operatorname{rank}[M_1\cdots M_w]=\sum_{j=1}^{w}g_j=\sum_{j=1}^{w}\operatorname{rank}(M_j)$. □
Consider now that $\sum_{j=1}^{w}M_j=[M_1\cdots M_w]\left(1_w\otimes I_m\right)$, with $\otimes$ indicating the Kronecker matrix product, as can be seen in [31], and with $\operatorname{rank}\left(1_w\otimes I_m\right)=\operatorname{rank}(1_w)\operatorname{rank}(I_m)=m$. Thus, as can be seen in [30]:

$$\operatorname{rank}\left(\sum_{j=1}^{w}M_j\right)\geq\operatorname{rank}[M_1\cdots M_w]+\operatorname{rank}\left(1_w\otimes I_m\right)-m=\operatorname{rank}[M_1,\dots,M_w]=\sum_{j=1}^{w}\operatorname{rank}(M_j)$$

and:

$$\operatorname{rank}\left(\sum_{j=1}^{w}M_j\right)\leq\operatorname{rank}[M_1,\dots,M_w]=\sum_{j=1}^{w}\operatorname{rank}(M_j)$$

so:

$$\operatorname{rank}\left(\sum_{j=1}^{w}M_j\right)=\sum_{j=1}^{w}\operatorname{rank}(M_j),$$

as we wished to establish.
Let $Q_1,\dots,Q_w$ now be pairwise orthogonal orthogonal projection matrices (POOPM), with $Q_1=\frac{1}{m}1_m1_m^{t}$. Now, $U(p)$ is $m\times m$ with rank $m-1$. Thus, its nullity space $N(p)$ will have dimension 1 and, since $\alpha_1=\frac{1}{\sqrt{m}}1_m\in N(p)$, $\alpha_1\alpha_1^{t}=Q_1$ will be the orthogonal projection matrix on $N(p)$. Since $U(p)$ is symmetrical, its range space $R(p)$ will be the orthogonal complement $N(p)^{\perp}$ of $N(p)$. The orthogonal projection matrix $T(p)$ on $R(p)$ will then be:

$$T(p)=I_m-Q_1.$$

Thus, if $\sum_{j=1}^{w}Q_j=I_m$, we will have $T(p)=\sum_{j=2}^{w}Q_j$, as well as:

$$U(p)=T(p)U(p)=\sum_{j=2}^{w}Q_jU(p)$$

and, according to Lemma 3:

$$m-1=\operatorname{rank}(U(p))=\sum_{j=2}^{w}\operatorname{rank}\left(Q_jU(p)\right).$$

Now:

$$\operatorname{rank}\left(Q_jU(p)\right)\leq\operatorname{rank}(Q_j),\quad j=2,\dots,w$$

and $m-1=\sum_{j=2}^{w}\operatorname{rank}(Q_j)$. Thus, we must have:

$$\operatorname{rank}\left(Q_jU(p)\right)=\operatorname{rank}(Q_j),\quad j=2,\dots,w.$$
We now highlight that $U(p)$ and $U(p)^{\frac{1}{2}}$ have the same eigenvectors associated with positive eigenvalues. These eigenvectors constitute an orthonormal basis for $R(U(p))=R(U(p)^{\frac{1}{2}})$. Thus:

$$T(p)U(p)^{\frac{1}{2}}=U(p)^{\frac{1}{2}}$$

and, reasoning as above, we obtain:

$$\operatorname{rank}\left(Q_jU(p)^{\frac{1}{2}}\right)=\operatorname{rank}(Q_j),\quad j=2,\dots,w.$$

Matrices $Q_1,\dots,Q_w$ naturally appear, as can be seen in [32], when there are factors that cross, or groups of nested factors that cross. The sums of squares for the effects and interactions of these factors are the $\|A_jY\|^{2}$, $j=2,\dots,w$, while $\|A_1Y\|^{2}$ can be associated with the general mean.

5. Inference

5.1. Chi-Square Tests

According to the PCMT, when:

$$H_0(A):\;Ap=0$$

holds, the limit distribution of:

$$L_n(A)=n\left(A\tilde{p}_n\right)^{t}\left(AU(\tilde{p}_n)A^{t}\right)^{+}A\tilde{p}_n$$

will be that of $\chi^{2}_{r}$, with $r=\operatorname{rank}\left(AU(\tilde{p}_n)A^{t}\right)$, since, when $H_0(A)$ holds:

$$\sqrt{n}\,A\tilde{p}_n\sim N\left(0,AU(p)A^{t}\right)$$

and we also have $\tilde{p}_n\overset{p}{\longrightarrow}p$. We thus have, for $H_0(A)$, a limit level-$q$ test with statistic $L_n(A)$ and critical value $x_{r,1-q}$, the $(1-q)$-th quantile of $\chi^{2}_{r}$.
Moreover, under any alternative to $H_0(A)$:

$$H_1(A):\;Ap=c\neq 0$$

we have, whatever $K>0$:

$$\Pr\left(L_n(A)>K\right)\underset{n\to\infty}{\longrightarrow}1$$

so the chi-square tests will be strongly consistent.
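The following sketch computes $L_n(A)$ for an assumed contrast matrix $A$ testing $p_1=p_2$ (all values are illustrative, and scipy is assumed for the quantile):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
p = np.array([0.4, 0.4, 0.2])                 # H0(A) holds here: p1 - p2 = 0
n = 5_000
p_tilde = rng.multinomial(n, p) / n

A = np.array([[1.0, -1.0, 0.0]])              # assumed contrast, A p = p1 - p2
U_hat = np.diag(p_tilde) - np.outer(p_tilde, p_tilde)
M = A @ U_hat @ A.T
L_n = n * (A @ p_tilde) @ np.linalg.pinv(M) @ (A @ p_tilde)
r = np.linalg.matrix_rank(M)
print(float(L_n), chi2.ppf(0.95, df=r))       # reject H0(A) when L_n exceeds the quantile
```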
Let us assume that the probabilities $p_1,\dots,p_m$ correspond to the treatments of a fixed effects model in which $d$ factors, with $h_1,\dots,h_d$ levels, cross (for instance, probabilities of cures for different treatments). We then have:

$$m=\prod_{l=1}^{d}h_l$$

and we can, as can be seen in [32], test the effects and interactions of the factors. These effects and interactions correspond to the subsets of $\bar{d}=\{1,\dots,d\}$. Thus, the null set, $\emptyset$, will correspond to the general mean value; if a set has one element, it will be associated with the effects of the factor whose index belongs to the set; otherwise, if the set has more than one element, it will be associated with the interaction between the levels of the factors with those indices. The sets can be ordered by the indexes:

$$j(\varphi)=1+\sum_{l\in\varphi}2^{l-1}.$$

Putting $\varphi_j$ to indicate the $j$-th set, $j=1,\dots,2^{d}$, we have, as can be seen in [30], the matrices:

$$A(\varphi_j)=\bigotimes_{l=1}^{d}A_l(\varphi_j),\quad j=1,\dots,2^{d}$$

where:

$$A_l(\varphi_j)=\frac{1}{\sqrt{h_l}}1_{h_l}^{t},\;l\notin\varphi_j;\qquad A_l(\varphi_j)=T_{h_l},\;l\in\varphi_j;\qquad j=1,\dots,2^{d}.$$

We then have, with $A_j=A(\varphi_j)$, $j=1,\dots,2^{d}$:

$$g_j=\operatorname{rank}(A_j)=\prod_{l\in\varphi_j}(h_l-1),\quad j=1,\dots,2^{d}.$$

Thus, for testing the hypotheses:

$$H_{0,j}=H_0(A_j),\quad j=1,\dots,2^{d}$$

we have the statistics $L_n(A_j)$, with $g_j$ degrees of freedom, $j=1,\dots,2^{d}$.
Another interesting case is that of cross-nesting factors. The factors in the $h$-th group have $a_{h,1},\dots,a_{h,f_h}$ levels, $h=1,\dots,d$. There are then $b_{h,v}=\prod_{v'=1}^{v}a_{h,v'}$, $v=1,\dots,f_h$, combinations of levels of the first $v$ factors in group $h$, and we also put $b_{h,0}=1$, $h=1,\dots,d$. Each of these combinations contains $c_{h,v}=\prod_{v'=v+1}^{f_h}a_{h,v'}$, $0\leq v<f_h$, or $c_{h,f_h}=1$, combinations of levels of the remaining factors, and we have the matrices:

$$A_{h,0}=\frac{1}{\sqrt{c_{h,0}}}1_{c_{h,0}}^{t},\;h=1,\dots,d;\qquad A_{h,j}=I_{b_{h,j-1}}\otimes T_{a_{h,j}}\otimes\frac{1}{\sqrt{c_{h,j}}}1_{c_{h,j}}^{t},\;j=1,\dots,f_h,\;h=1,\dots,d,$$

where $T_r$ is obtained by deleting the first row, equal to $\frac{1}{\sqrt{r}}1_r^{t}$, of an $r\times r$ orthogonal matrix, and $\otimes$ indicates the Kronecker matrix product. These matrices have ranks:

$$g_{h,0}=1,\;h=1,\dots,d;\qquad g_{h,j}=b_{h,j}-b_{h,j-1},\;j=1,\dots,f_h,\;h=1,\dots,d,$$

where $b_{h,0}=1$, $h=1,\dots,d$.
The effects and interactions in this cross-nesting are associated with the vectors $\mathbf{j}=(j_1,\dots,j_d)$, with $j_h=0,\dots,f_h$, $h=1,\dots,d$. Thus, $\mathbf{j}$ is associated with the matrices:

$$A_{\mathbf{j}}=\bigotimes_{h=1}^{d}A_{h,j_h},\quad\mathbf{j}\in\Gamma$$

with ranks:

$$g_{\mathbf{j}}=\prod_{h=1}^{d}g_{h,j_h},\quad\mathbf{j}\in\Gamma$$

where $\Gamma=\{\mathbf{j};\,0\leq j_h\leq f_h,\,h=1,\dots,d\}$.
To test the hypotheses:

$$H_{0,\mathbf{j}}=H_0(A_{\mathbf{j}}),\quad\mathbf{j}\in\Gamma$$

we have the statistics $L_n(A_{\mathbf{j}})$, with $g_{\mathbf{j}}$ degrees of freedom, $\mathbf{j}\in\Gamma$.

5.2. Confidence Regions

According to the continuity of the Moore–Penrose inverse, as can be seen in [30], pp. 221–224, we have:

$$U(\tilde{p}_n)^{+}\underset{n\to\infty}{\longrightarrow}U(p)^{+},$$

so the PCMT gives:

$$n\left(p-\tilde{p}_n\right)^{t}U(\tilde{p}_n)^{+}\left(p-\tilde{p}_n\right)\sim\chi^{2}_{m-1}$$

as well as:

$$n\left(A_jp-A_j\tilde{p}_n\right)^{t}\left(A_jU(\tilde{p}_n)A_j^{t}\right)^{+}\left(A_jp-A_j\tilde{p}_n\right)\sim\chi^{2}_{g_j},\quad j\in\Gamma$$

for models with factors crossing and cross-nesting.
Thus:

$$\Pr\left(n\left(p-\tilde{p}_n\right)^{t}U(\tilde{p}_n)^{+}\left(p-\tilde{p}_n\right)\leq x_{m-1,1-q}\right)\underset{n\to\infty}{\longrightarrow}1-q$$
$$\Pr\left(n\left(A_jp-A_j\tilde{p}_n\right)^{t}\left(A_jU(\tilde{p}_n)A_j^{t}\right)^{+}\left(A_jp-A_j\tilde{p}_n\right)\leq x_{g_j,1-q}\right)\underset{n\to\infty}{\longrightarrow}1-q,\quad j=2,\dots,w$$
$$\Pr\left(n\left(A_jp-A_j\tilde{p}_n\right)^{t}\left(A_jU(\tilde{p}_n)A_j^{t}\right)^{+}\left(A_jp-A_j\tilde{p}_n\right)\leq x_{g_j,1-q}\right)\underset{n\to\infty}{\longrightarrow}1-q,\quad j\in\Gamma.$$

We can now apply Proposition 6 to obtain:

$$\Pr\left(\forall v\;\left|v^{t}p-v^{t}\tilde{p}_n\right|\leq\sqrt{\frac{x_{m-1,1-q}\,v^{t}U(\tilde{p}_n)v}{n}}\right)\underset{n\to\infty}{\longrightarrow}1-q$$
$$\Pr\left(\forall v\;\left|v^{t}A_jp-v^{t}A_j\tilde{p}_n\right|\leq\sqrt{\frac{x_{g_j,1-q}\,v^{t}A_jU(\tilde{p}_n)A_j^{t}v}{n}}\right)\underset{n\to\infty}{\longrightarrow}1-q,\quad j=2,\dots,w$$
$$\Pr\left(\forall v\;\left|v^{t}A_jp-v^{t}A_j\tilde{p}_n\right|\leq\sqrt{\frac{x_{g_j,1-q}\,v^{t}A_jU(\tilde{p}_n)A_j^{t}v}{n}}\right)\underset{n\to\infty}{\longrightarrow}1-q,\quad j\in\Gamma.$$
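As a sketch of how the simultaneous intervals can be evaluated for particular vectors $v$, the snippet below (illustrative values; scipy assumed) takes $v$ equal to the coordinate vectors $e_l$, yielding simultaneous intervals for the individual probabilities:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
p = np.array([0.5, 0.3, 0.2])                 # assumed probabilities
n = 10_000
p_tilde = rng.multinomial(n, p) / n

U_hat = np.diag(p_tilde) - np.outer(p_tilde, p_tilde)
m = p.size
x = chi2.ppf(0.95, df=m - 1)                  # x_{m-1,1-q} with q = 0.05
for l in range(m):
    v = np.eye(m)[l]                          # v = e_l singles out p_l
    half = np.sqrt(x * (v @ U_hat @ v) / n)   # simultaneous half-width
    print(l, p_tilde[l] - half, p_tilde[l] + half)
```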

5.3. Non-Linear Statistics

We assume that the component functions $g_l(\cdot)$, $l=1,\dots,w$, of $G(\cdot)$ have continuous second-order partial derivatives, so that we may apply Proposition 2 and its corollary to show that:

$$\sqrt{n}\left(G(\tilde{p}_n)-G(p)\right)\sim N\left(0,\nabla G(p)U(p)\nabla G(p)^{t}\right).$$

An interesting application of this result is to log-linear models, as can be seen in [12], in which we use:

$$G(\tilde{p}_n)=\log\tilde{p}_n=\left(\log\tilde{p}_{n,1},\dots,\log\tilde{p}_{n,m}\right);\qquad G(p)=\log p=\left(\log p_1,\dots,\log p_m\right).$$

We now have:

$$\nabla G(p)=D\left(\frac{1}{p_1},\dots,\frac{1}{p_m}\right)=D(p)^{-1},$$

so:

$$\nabla G(p)U(p)\nabla G(p)^{t}=D(p)^{-1}-1_m1_m^{t}=W(p),$$

since $D(p)^{-1}p=1_m$.
Moreover, since $D(p)^{-1}$ is invertible, taking:

$$U_0(p)=D(p)^{-1}U(p)D(p)^{-1}$$

we have:

$$\operatorname{rank}\left(U_0(p)\right)=\operatorname{rank}\left(U(p)\right)=m-1.$$

Thus:

$$U_0(p)\,\frac{1}{\|p\|}\,p=0$$

so $\frac{1}{\|p\|}p$, belonging to the nullity space $N_0(p)$ of $U_0(p)$, constitutes an orthonormal basis for that space. Since $U_0(p)$ is symmetrical, its range space $R_0(p)$ will be $N_0(p)^{\perp}$, and the orthogonal projection matrix on $R_0(p)$ will be:

$$T_0(p)=I_m-\frac{1}{\|p\|^{2}}pp^{t}.$$
Let $A$ have row vectors that constitute an orthonormal basis for a sub-space $\nabla_0$. Putting:

$$l_0(p)=\log(p);\qquad l_0(\tilde{p}_n)=\log(\tilde{p}_n)$$

we have, according to the PCMT:

$$\sqrt{n}\left(Al_0(\tilde{p}_n)-Al_0(p)\right)\sim N\left(0,AU_0(p)A^{t}\right).$$

Thus, to test:

$$H_0(A):\;Al_0(p)=0$$

we have the limit $q$-level chi-square test with the statistic $n\left(Al_0(\tilde{p}_n)\right)^{t}\left(AU_0(\tilde{p}_n)A^{t}\right)^{+}Al_0(\tilde{p}_n)$ and the critical value $x_{r_0(A),1-q}$, with $r_0(A)=\operatorname{rank}\left(AU_0(p)A^{t}\right)$. These tests will be strongly consistent, as follows from $l_0(\tilde{p}_n)\overset{p}{\underset{n\to\infty}{\longrightarrow}}l_0(p)$.
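A sketch of this log-linear test follows; the contrast matrix $A$ (with orthonormal rows), the uniform null probabilities and the sample size are assumptions made for the example:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
p = np.full(4, 0.25)                          # uniform: the assumed log-contrasts vanish
n = 5_000
p_tilde = rng.multinomial(n, p) / n

U_hat = np.diag(p_tilde) - np.outer(p_tilde, p_tilde)
Dinv = np.diag(1.0 / p_tilde)
U0_hat = Dinv @ U_hat @ Dinv                  # U_0 evaluated at the estimates

A = np.array([[1.0, -1.0, 0.0, 0.0],          # assumed orthonormal contrasts on log p
              [0.0, 0.0, 1.0, -1.0]]) / np.sqrt(2)
l0 = np.log(p_tilde)
M = A @ U0_hat @ A.T
stat = n * (A @ l0) @ np.linalg.pinv(M) @ (A @ l0)
print(float(stat), chi2.ppf(0.95, df=np.linalg.matrix_rank(M)))
```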
Moreover, we have the limit level $1-q$ confidence ellipsoids given by:

$$\Pr\left(n\left(l_0(p)-l_0(\tilde{p}_n)\right)^{t}U_0(\tilde{p}_n)^{+}\left(l_0(p)-l_0(\tilde{p}_n)\right)\leq x_{m-1,1-q}\right)\underset{n\to\infty}{\longrightarrow}1-q$$
$$\Pr\left(n\left(Al_0(p)-Al_0(\tilde{p}_n)\right)^{t}\left(AU_0(\tilde{p}_n)A^{t}\right)^{+}\left(Al_0(p)-Al_0(\tilde{p}_n)\right)\leq x_{r_0(A),1-q}\right)\underset{n\to\infty}{\longrightarrow}1-q.$$

We can now apply Proposition 6 to obtain:

$$\Pr\left(\forall v\;\left|v^{t}l_0(p)-v^{t}l_0(\tilde{p}_n)\right|\leq\sqrt{\frac{x_{m-1,1-q}\,v^{t}U_0(\tilde{p}_n)v}{n}}\right)\underset{n\to\infty}{\longrightarrow}1-q$$
$$\Pr\left(\forall v\;\left|v^{t}Al_0(p)-v^{t}Al_0(\tilde{p}_n)\right|\leq\sqrt{\frac{x_{r_0(A),1-q}\,v^{t}AU_0(\tilde{p}_n)A^{t}v}{n}}\right)\underset{n\to\infty}{\longrightarrow}1-q.$$

5.4. Numerical Example

In this section, we apply our results for non-linear statistics, namely the log-linear model, to a dataset on coronary heart disease analysed in [12,33]. In all, 1330 patients were categorized with respect to three variables: blood pressure, serum cholesterol and whether or not they had coronary heart disease. Blood pressure, the first variable, had four categorical levels. The second variable, serum cholesterol, also had four categorical levels, while the third variable, indicating the presence of coronary heart disease, had two levels. In all, the data thus had 32 classes. Refer to Section 5.6 of [12] for details on the variables and their levels, as well as a cross-classification of the frequencies for each category.
To proceed with the analysis by applying our method, as can be seen in Equation (63), we first estimated the probabilities of each of the 32 classes. In order to apply the non-linear statistics, we calculated the logarithms of those estimates. As in Equation (63), the composite functions of the estimates were asymptotically normally distributed with a null mean vector and a certain covariance matrix. We proceeded by evaluating that covariance matrix: first by determining the Jacobian matrix of the gradients of our composite function, as in Equations (65) and (66), and then by determining the covariance matrix for our composite function.
To test the hypothesis of the absence of effects and interactions, we started by obtaining matrices A using orthogonal matrices.
The first orthogonal matrix we considered is:
$$P_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\1&-1\end{bmatrix}.$$
Since we have 32 classes, we build the $P$ matrices up to $P_{32}$ with Kronecker matrix products:

$$P_4=P_2\otimes P_2;\quad P_8=P_2\otimes P_4;\quad P_{16}=P_2\otimes P_8;\quad P_{32}=P_2\otimes P_{16},$$

where $\otimes$ is the Kronecker product.
After getting the orthogonal matrix P 32 , we determined our A matrices. Since we have three factors, we have eight A matrices defined as follows.
Let the set containing the factor indexes be $\varphi=\{1,2,3\}$; then, the subsets of $\varphi$ and the corresponding factor effects and interactions are:
$j=1$, $\varphi_1=\emptyset$: overall or general mean effect;
$j=2$, $\varphi_2=\{1\}$: effects of the first factor;
$j=3$, $\varphi_3=\{2\}$: effects of the second factor;
$j=4$, $\varphi_4=\{1,2\}$: interactions between the first and second factors;
$j=5$, $\varphi_5=\{3\}$: effects of the third factor;
$j=6$, $\varphi_6=\{1,3\}$: interactions between the first and third factors;
$j=7$, $\varphi_7=\{2,3\}$: interactions between the second and third factors;
$j=8$, $\varphi_8=\{1,2,3\}$: interactions between all three factors.
Now, the $A_j$, $j=1,\dots,8$, are obtained from $P_{32}$ as follows: $A_1$ is the first row; $A_2$ comprises the second to fourth rows; $A_3$ the fifth, ninth and thirteenth rows; $A_4$ the sixth, seventh, eighth, tenth, eleventh, twelfth, fourteenth, fifteenth and sixteenth rows. Similarly, $A_5$ is the seventeenth row; $A_6$ comprises the eighteenth, nineteenth and twentieth rows; $A_7$ the twenty-first, twenty-fifth and twenty-ninth rows; and $A_8$ the twenty-second, twenty-third, twenty-fourth, twenty-sixth, twenty-seventh, twenty-eighth, thirtieth, thirty-first and thirty-second rows.
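A short sketch of this construction follows (0-based indexing in code, so the 1-based rows listed above are shifted by one; the printed row counts are the degrees of freedom $g_j$):

```python
import numpy as np

P2 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P32 = P2
for _ in range(4):                            # builds P4, P8, P16 and finally P32
    P32 = np.kron(P2, P32)
print(np.allclose(P32 @ P32.T, np.eye(32)))   # P32 is orthogonal

rows = {                                      # 1-based row indices from the text
    1: [1], 2: [2, 3, 4], 3: [5, 9, 13],
    4: [6, 7, 8, 10, 11, 12, 14, 15, 16],
    5: [17], 6: [18, 19, 20], 7: [21, 25, 29],
    8: [22, 23, 24, 26, 27, 28, 30, 31, 32],
}
A = {j: P32[np.array(r) - 1, :] for j, r in rows.items()}
for j, Aj in A.items():
    print(j, Aj.shape[0])                     # 1, 3, 3, 9, 1, 3, 3, 9
```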
Now, according to Equation (44), we have the covariance matrices:

$$V_j(p)=A_jW(p)A_j^{t},\quad j=1,\dots,8,$$

where $W(p)$ is defined in Equation (66), for the statistics:

$$Z_j(p)=A_jG(p),\quad j=1,\dots,8,$$

where $G(p)$ is given by Equation (64). The sums of squares (SS) for the statistics $Z_j(p)$, $j=1,\dots,8$, are given by:

$$SS_j=n\,Z_j(p)^{t}V_j(p)^{+}Z_j(p),\quad j=1,\dots,8,$$

where $n$ is the total number of observations and $+$ indicates the Moore–Penrose inverses of the $V_j(p)$, $j=1,\dots,8$. The $SS_j$, $j=1,\dots,8$, are asymptotically chi-square distributed, with degrees of freedom given by the $\operatorname{rank}(A_j)$, $j=1,\dots,8$.
Table 1 is an ANOVA-like table that presents the results of our analysis of the coronary heart disease data. It gives the sources of variation, namely the general mean, the main effects and the interaction effects. It also presents the degrees of freedom and the sums of squares for these effects, with the significance of the effects indicated by asterisks. From Table 1, we see that the general mean is highly significantly different from 0. Moreover, the factors blood pressure, serum cholesterol and coronary heart disease, as well as the interactions of the first factor and of the second factor with the third factor, were highly significant. The interactions involving the first and second factors together (1 × 2 and 1 × 2 × 3) were not significant. We point out that we were able to consider the interaction between the three factors by using our approach. In the classical analysis of the data, as can be seen in [12] (Section 5.6), this would not be possible.
Next, we applied our method to analyse the General Social Survey (GSS 2008). The General Social Survey (GSS) conducts basic scientific research on the structure and development of American society, with a data-collection program designed both to monitor societal change within the United States and to compare the United States with other nations [34]. We considered three categorical variables: “Education”, “Political party affiliation” and “Gender”. The “Education” variable had 5 categorical levels, while “Political party affiliation” and “Gender” had 7 and 2 categorical levels, respectively, as analysed in [7], Section 3.2.4.
Just as with Table 1, Table 2 is an ANOVA-like table that presents the results of our analysis. It gives the sources of variation, namely the general mean, the main effects and the interaction effects, for the GSS (2008) data. It also presents the degrees of freedom and the sums of squares for these effects, with significance again indicated by asterisks. From the table, both factors 1 and 2 have significant effects, as well as a significant interaction. The interaction between factors 2 and 3 is also significant. Factor 3 had neither significant effects nor interactions, with the exception of its interaction with factor 2.
We can thus conclude that the core of significance was in factors 1 and 2 (education and political party affiliation) and that both genders behave similarly.

Comparing Procedures

Our procedure is based on the asymptotic distributions given by:

$$\sqrt{n}\left(\tilde{p}_n-p\right)\sim N\left(0,U(p)\right)$$

and by:

$$\sqrt{n}\left(l(\tilde{p}_n)-l(p)\right)\sim N\left(0,J(p)U(p)J(p)^{t}\right)$$

where $J(\cdot)$ is the Jacobian matrix of $l(\cdot)$ (the row vectors of $J(\cdot)$ are the gradients of the components of $l(\cdot)$).
This enabled us, given an orthogonal partition:

$$\mathbb{R}^{m}=\boxplus_{j=1}^{d}\nabla_j,$$

where $m$ is the number of probabilities, to test the hypotheses:

$$H_{0,j}:\;l(p)\perp\nabla_j.$$

In our numerical example, we have:
$\nabla(\emptyset)$: general mean;
$\nabla(\{1\})$: null effects of the first factor;
$\nabla(\{2\})$: null effects of the second factor;
$\nabla(\{3\})$: null effects of the third factor;
$\nabla(\{1,2\})$: null interactions of the first and second factors;
$\nabla(\{1,3\})$: null interactions of the first and third factors;
$\nabla(\{2,3\})$: null interactions of the second and third factors;
$\nabla(\{1,2,3\})$: null interactions of the three factors.
In this way, we overcame the requirement of using hierarchical models.

6. Conclusions

Multinomial models have limit distributions given by $\sqrt{n}(\tilde{p}_n-p)\sim N(0,U(p))$, with $\operatorname{rank}(U(p))=m-1$ and $p$ belonging to a compact set, which led us to establish the parametrized continuous mapping theorem (PCMT) to carry out inference for them. Namely, we obtained confidence ellipsoids and simultaneous confidence intervals. Moreover, we obtained chi-square tests for the hypotheses:

$$H_0(A):\;Ap=0$$

which led to ANOVA-like inference for both multinomial and log-linear models. We point out that the hierarchical assumptions made on log-linear models are now no longer necessary, and all effects and interactions can be tested without any restrictions. This was indeed used in the two numerical examples we presented. In addition, the replacement of F-tests by chi-square tests increases the power of our inference by replacing a finite number of degrees of freedom for the error with an infinity of them.

Author Contributions

This research article was written while preparing my Ph.D. thesis. Thus, I performed most of the efforts while interacting with my supervisors who are also the co-authors of this article. Conceptualization, I.A. and J.T.M.; Data curation, I.A.; Formal analysis, I.A. and J.T.M.; Methodology, I.A., J.T.M. and F.J.M.; Project administration, F.J.M.; Software, I.A. and F.J.M.; Supervision, J.T.M. and F.J.M.; Validation, J.T.M.; Visualization, I.A. and F.J.M.; Writing—original draft, I.A.; Writing—review & editing, J.T.M. and F.J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) through project UIDB/00297/2020.

Data Availability Statement

The data on coronary heart disease used in this article were obtained from [12,33].

Acknowledgments

I would like to especially acknowledge the much appreciated and extensive support from my supervisors, J.T.P.N. Mexia and Filipe Marques for their supervision, resources and in-depth contributions to my Ph.D. I am very grateful to them.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bishop, Y.M.; Fienberg, S.E.; Holland, P.W. Discrete Multivariate Analysis: Theory and Practice; Springer Science & Business Media: New York, NY, USA, 2007.
  2. Agresti, A. Categorical Data Analysis, 3rd ed.; Wiley & Sons: Hoboken, NJ, USA, 2013.
  3. Agresti, A. Analysis of Ordinal Categorical Data, 2nd ed.; Wiley: Hoboken, NJ, USA, 2010.
  4. Tang, W.; He, H.; Tu, X.M. Applied Categorical and Count Data Analysis; CRC Press: Boca Raton, FL, USA, 2012.
  5. Fagerland, M.W.; Lydersen, S.; Laake, P. Statistical Analysis of Contingency Tables; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017.
  6. Lloyd, C.J. Statistical Analysis of Categorical Data; Wiley: New York, NY, USA, 1999.
  7. Kateri, M. Contingency Table Analysis: Methods and Implementation Using R, 1st ed.; Birkhäuser: New York, NY, USA, 2014.
  8. Kateri, M.; Balakrishnan, N. Statistical evidence in contingency tables analysis. J. Stat. Plan. Inference 2008, 138, 873–887.
  9. Birch, M.W. Maximum likelihood in three-way contingency tables. J. R. Stat. Soc. Ser. B (Methodol.) 1963, 25, 220–233.
  10. Goodman, L.A. Interactions in multidimensional contingency tables. Ann. Math. Stat. 1964, 35, 632–646.
  11. Fienberg, S.E.; Rinaldo, A. Three centuries of categorical data analysis: Log-linear models and maximum likelihood estimation. J. Stat. Plan. Inference 2007, 137, 3430–3445.
  12. Everitt, B.S. The Analysis of Contingency Tables; Chapman and Hall/CRC: New York, NY, USA, 2019; pp. 94–99.
  13. Andersen, E.B. Introduction to the Statistical Analysis of Categorical Data; Springer Science & Business Media: New York, NY, USA, 2012.
  14. Tutz, G. Regression for Categorical Data; Cambridge University Press: Cambridge, UK, 2011.
  15. Fienberg, S.E.; Rinaldo, A. Maximum likelihood estimation in log-linear models. Ann. Stat. 2012, 40, 996–1023.
  16. Ireland, C.T.; Kullback, S. Minimum discrimination information estimation. Biometrics 1968, 24, 707–713.
  17. Grizzle, J.E.; Starmer, C.F.; Koch, G.G. Analysis of categorical data by linear models. Biometrics 1969, 25, 489–504.
  18. Fienberg, S.E. When did Bayesian inference become “Bayesian”? Bayesian Anal. 2006, 1, 1–40.
  19. Akoto, I.; Mexia, J.T.; Guerreiro, G.R. Discriminant analysis in discrete models: An application to HIV treatment status. Symmetry 2021, submitted.
  20. Rohatgi, V.K.; Saleh, A.K.M. An Introduction to Probability and Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2015; pp. 189–192.
  21. Taboga, M. Lectures on Probability Theory and Mathematical Statistics; CreateSpace Independent Publishing Platform: North Charleston, SC, USA, 2012; pp. 431–438.
  22. Wilks, S.S. Mathematical Statistics, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1962; p. 262.
  23. Kallenberg, O. Foundations of Modern Probability, 2nd ed.; Springer: New York, NY, USA, 2010; p. 76.
  24. Rao, J.N.K. On two simple schemes of unequal probability sampling without replacement. J. Indian Stat. Assoc. 1965, 3, 80–173.
  25. Mukhopadhyay, P. Complex Surveys: Analysis of Categorical Data; Springer: New York, NY, USA, 2016; pp. 223–240.
  26. DasGupta, A. Probability for Statistics and Machine Learning: Fundamentals and Advanced Topics; Springer Science & Business Media: New York, NY, USA, 2011; pp. 268–282.
  27. Van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 2000; Volume 3.
  28. Loève, M. Probability Theory I; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1977; p. 151.
  29. Scheffé, H. The Analysis of Variance; John Wiley & Sons: New York, NY, USA, 1959; pp. 406–411.
  30. Schott, J.R. Matrix Analysis for Statistics, 3rd ed.; John Wiley & Sons: New York, NY, USA, 2016; p. 46.
  31. Silvey, S.D. Statistical Inference; Reprinted; Chapman & Hall: New York, NY, USA, 1975.
  32. Fonseca, M.; Mexia, J.T.; Zmyślony, R. Estimators and tests for variance components in cross nested orthogonal designs. Discuss. Math. Probab. Stat. 2003, 23, 175–201.
  33. Ku, H.H.; Kullback, S. Log-linear models in contingency table analysis. Am. Stat. 1974, 28, 115–122.
  34. National Opinion Research Center. General Social Survey; Inter-University Consortium for Political and Social Research: Ann Arbor, MI, USA, 2016.
Table 1. ANOVA-like table for the coronary heart disease data.

Sources of Variation | Degrees of Freedom | Sum of Squares
General mean, μ | 1 | 4752.504 ***
1 | 3 | 30.289 ***
2 | 3 | 20.915 ***
1 × 2 | 9 | 5.623
3 | 1 | 365.871 ***
1 × 3 | 3 | 24.204 ***
2 × 3 | 3 | 22.284 ***
1 × 2 × 3 | 9 | 4.568

*** indicates “significance” at the 0.005 level. Source: authors’ own calculations.
Table 2. ANOVA-like table for the GSS 2008 data.

Sources of Variation | Degrees of Freedom | Sum of Squares
General mean, μ | 1 | 49,387.730 ***
1 | 4 | 860.829 ***
2 | 6 | 84.692 ***
1 × 2 | 24 | 89.636 ***
3 | 1 | 1.943
1 × 3 | 4 | 1.624
2 × 3 | 6 | 17.466 **
1 × 2 × 3 | 24 | 27.902

*** and ** indicate “significance” at the 0.005 and 0.01 levels, respectively. Source: authors’ own calculations.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
