Article

Consistency of Decision in Finite and Numerable Multinomial Models

1
Center of Mathematics and Its Applications, NOVA School of Science and Technology, Universidade NOVA de Lisboa, Campus de Caparica, 2829-516 Caparica, Portugal
2
Department of Mathematics and Statistics, University of Energy and Natural Resources, Sunyani P. O. Box 214, Ghana
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(11), 2434; https://doi.org/10.3390/math11112434
Submission received: 14 April 2023 / Revised: 20 May 2023 / Accepted: 22 May 2023 / Published: 24 May 2023
(This article belongs to the Special Issue Probability, Statistics and Random Processes)

Abstract

The multinomial distribution is often used in modeling categorical data because it describes the probability of a random observation being assigned to one of several mutually exclusive categories. Given a finite or numerable multinomial model $M|n,\mathbf{p}$ whose decision is indexed by a parameter $\theta$ and having a cost $c(\theta, \mathbf{p})$ depending on $\theta$ and on $\mathbf{p}$, we show that, under general conditions, the probability of taking the least cost decision tends to 1 when $n$ tends to $\infty$; i.e., we show that the cost decision is consistent, representing a Statistical Decision Theory approach to the concept of consistency, which is not much considered in the literature. Thus, under these conditions, we have consistency in decision making. The key result is that the estimator $\tilde{\mathbf{p}}_n$, with components $\tilde{p}_{n,i} = \frac{n_i}{n}$, $i = 1, \ldots$, where $n_i$ is the number of times we obtain the $i$th result in a sample of size $n$, is a consistent estimator of $\mathbf{p}$. This result holds both for finite and numerable models. Through it, we were able to incorporate a more general form of consistency for the cost function of a multinomial model.
MSC:
62F12; 62C05; 60F05

1. Introduction

Classical Statistical Inference (cSI) is centered on minimizing the probabilities of errors, while in Statistical Decision Theory (SDT) the goal is to minimize the decision costs; see for instance [1,2]. Whether we minimize the probability of errors in cSI or the decision costs in SDT, a desired property is for the procedure to be consistent, that is, we would like the mass of the distribution of the sequence of sample-based values given by the procedure to concentrate on the population constant as the sample size gets larger; see [3,4]. To the best of our knowledge, few studies have been conducted on the consistency of decision costs in SDT, compared with that of the error probabilities in cSI.
The multinomial distribution is often used in modeling categorical data because it describes the probability of a random observation being assigned to one of several mutually exclusive categories. Thus, having $n$ independent realizations of an experiment with a finite or numerable set of incompatible results with probabilities $p_i$, $i = 1, \ldots$, the probabilities of obtaining the $i$th result $N_i = n_i$ times, $i = 1, \ldots$, follow the multinomial distribution, denoted $M|n,\mathbf{p}$, with parameters $n$ and $\mathbf{p}$. The probability function of this distribution is
$P\left(\bigcap_{i=1}^{\infty} \{N_i = n_i\} \,\middle|\, \mathbf{p}\right) = \dfrac{n!}{\prod_{i=1}^{\infty} n_i!} \prod_{i=1}^{\infty} p_i^{\,n_i},$
where $0^0 = 1$, $\mathbf{p} = (p_1, \ldots, p_i, \ldots)$, and $n = \sum_{i=1}^{\infty} n_i$; see [5,6]. To avoid repetitions, we give the expressions for the numerable case, since the particularization to the finite case is direct.
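As a purely illustrative sketch (not part of the original paper; the probability vector and sample size below are arbitrary assumptions), the counts $n_i$ of a finite multinomial model and the relative frequencies $n_i/n$ can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite multinomial model with 4 incompatible results.
p = np.array([0.4, 0.3, 0.2, 0.1])
n = 10_000

# Counts n_i of each result in n independent realizations.
counts = rng.multinomial(n, p)

# Relative frequencies n_i / n, the estimators used throughout the paper.
p_tilde = counts / n
print("counts :", counts)
print("p_tilde:", p_tilde)
```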
In what follows, we take a double approach to the treatment of multinomial models, combining the classical and the decision theory viewpoints. In our classical approach, we show that $\tilde{\mathbf{p}}_n = (\tilde{p}_{n,1}, \ldots, \tilde{p}_{n,i}, \ldots)$, with $\tilde{p}_{n,i} = \frac{n_i}{n}$, $i = 1, \ldots$, is a consistent estimator of $\mathbf{p} = (p_1, \ldots, p_i, \ldots)$. This result will play a central part in our paper. We point out that we use a finite sample to obtain a numerable family $\tilde{p}_{n,i}$, $i = 1, \ldots$, of jointly consistent estimators. We thus have consistency results within the fold of classical statistical inference.
Now consider a decision problem in which there is a family $D$ of possible decisions. For each of these decisions, we have a cost that depends on the result of the experiment. These results have probabilities $p_i$, $i = 1, \ldots$, and for the $i$th result the cost is $c_i(d)$, $d \in D$, $i = 1, \ldots$. The average cost for decision $d \in D$ will be
$c_\cdot(d) = \sum_{i=1}^{\infty} p_i\, c_i(d), \quad d \in D.$
Assuming the $c_i(d)$, $d \in D$, $i = 1, \ldots$, are known, we can use the estimators
$\tilde{p}_{n,i} = \dfrac{n_i}{n}, \quad i = 1, \ldots,$
where $n_i$, $i = 1, \ldots$, is the number of times that, in $n$ independent realizations of the experiment, we obtain the $i$th result, with cost $c_i(d)$, $d \in D$, $i = 1, \ldots$. The estimators of the average costs are then
$\tilde{c}_{n,\cdot}(d) = \sum_{i=1}^{\infty} \tilde{p}_{n,i}\, c_i(d), \quad d \in D.$
We will show that the $\tilde{p}_{n,i}$, $i = 1, \ldots$, are jointly consistent even when there is a numerable set of possible results, so the $\tilde{c}_{n,\cdot}(d)$, $d \in D$, will also be consistent.
If there is a decision $d$ with the least average cost and $\tilde{d}_n$ is the decision with the least estimated average cost after $n$ realizations of the experiment, we will show that
$\Pr\left(\tilde{d}_n = d\right) \underset{n}{\longrightarrow} 1,$
so that, see [7], we have consistency in decision making in the setup of experiments with a finite or numerable set of incompatible results.
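To make this decision-consistency statement concrete, here is a small Monte Carlo sketch (entirely hypothetical probabilities, costs, and decision set of our choosing) that estimates $\Pr(\tilde{d}_n = d)$ for increasing $n$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: probabilities p_i and costs c_i(d) for 3 decisions.
p = np.array([0.5, 0.3, 0.2])
costs = np.array([[1.0, 4.0, 2.0],   # c_1(d1), c_2(d1), c_3(d1)
                  [2.0, 1.0, 3.0],   # costs for decision d2
                  [3.0, 2.0, 1.0]])  # costs for decision d3

d_best = np.argmin(costs @ p)        # decision with least average cost

for n in (50, 500, 5_000):
    hits = 0
    for _ in range(2_000):
        p_tilde = rng.multinomial(n, p) / n
        hits += int(np.argmin(costs @ p_tilde) == d_best)
    print(f"n = {n:5d}   Pr(d_n = d) ~= {hits / 2_000:.3f}")
```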
In the subsequent sections, we proceed as follows. In Section 2, we consider the multinomial model and its estimators and, through limit distributions, show that these estimators are consistent. In Section 3, we develop the cost function for multinomial models from the standpoint of Statistical Decision Theory and again show that this function has the property of consistency. A further extension of the cost function is presented in Section 4.

2. Multinomial Models and Estimators

In this section, we obtain and consider estimators for the multinomial model. By limit distributions, see [8], we show that the estimators are consistent.
Let $X$ be the space of vectors $\mathbf{v} = (v_1, \ldots, v_i, \ldots)$ with a numerable set of components such that
$\|\mathbf{v}\|_1 = \sum_{i=1}^{\infty} |v_i| < +\infty.$
We can take $\|\cdot\|_1$ as a norm, see [9]. The subspace $\Gamma$ of $X$ constituted by the vectors $\mathbf{p}$ with non-negative components that add up to 1 is bounded, since $\|\mathbf{p}\|_1 = 1$. Given $\mathbf{p}_1, \mathbf{p}_2 \in \Gamma$, we have
$\|\mathbf{p}_1 - \mathbf{p}_2\|_1 \le 2,$
and if
$\|\mathbf{p}_n - \mathbf{v}\|_1 \underset{n}{\longrightarrow} 0,$
we have $\mathbf{v} \in \Gamma$, since the components of $\mathbf{v}$ will be non-negative and add up to 1. Thus, $\Gamma$ is compact, since it is bounded and closed.
Let us put
$p(m) = \sum_{i=1}^{m} p_i, \qquad p^c(m) = 1 - p(m),$
as well as
$\mathbf{p}(m) = \left(p_1, \ldots, p_m + p^c(m), 0, \ldots\right)$
in order to get
$\|\mathbf{p} - \mathbf{p}(m)\|_1 = p^c(m) + \sum_{i=m+1}^{\infty} p_i = 2\,p^c(m).$
Besides this, we have the vector
$\tilde{\mathbf{p}}_n = \left(\tilde{p}_{n,1}, \ldots, \tilde{p}_{n,i}, \ldots\right),$
whose components are the estimators $\tilde{p}_{n,i} = \frac{n_i}{n}$, $i = 1, \ldots$. Let us put
$\tilde{p}_n(m) = \sum_{i=1}^{m} \tilde{p}_{n,i}, \qquad \tilde{p}_n^{\,c}(m) = 1 - \tilde{p}_n(m),$
as well as
$\tilde{\mathbf{p}}_n(m) = \left(\tilde{p}_{n,1}, \ldots, \tilde{p}_{n,m} + \tilde{p}_n^{\,c}(m), 0, \ldots\right),$
so that
$\|\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\|_1 = 2\,\tilde{p}_n^{\,c}(m).$
With
$\tilde{m}_n = \min\left\{h : 0 = \tilde{p}_{n,h+1} = \tilde{p}_{n,h+2} = \cdots\right\},$
for $m \ge \tilde{m}_n$, we get
$\left\|\left(\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\right) - \left(\mathbf{p} - \mathbf{p}(m)\right)\right\|_1 = \|\mathbf{p} - \mathbf{p}(m)\|_1 = 2\,p^c(m),$
since $\tilde{\mathbf{p}}_n = \tilde{\mathbf{p}}_n(m)$ when $m \ge \tilde{m}_n$. We also get
$\|\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\|_1 \le 2\,\|\mathbf{p} - \mathbf{p}(m)\|_1 \le 4\,p^c(m).$
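The truncation construction can be checked numerically; the sketch below (an assumed geometric-type probability vector, truncated at a large index purely for computation) builds $\mathbf{p}(m)$ and verifies $\|\mathbf{p} - \mathbf{p}(m)\|_1 = 2\,p^c(m)$:

```python
import numpy as np

# Assumed numerable model p_i = 0.5**i, i = 1, 2, ..., truncated at a large index.
N = 60
p = 0.5 ** np.arange(1, N + 1)

def truncated(p_vec, m):
    """p(m): keep the first m components and fold the tail mass p^c(m) into the m-th."""
    out = np.zeros_like(p_vec)
    out[:m] = p_vec[:m]
    out[m - 1] += 1.0 - p_vec[:m].sum()   # add p^c(m)
    return out

m = 5
p_m = truncated(p, m)
pc_m = 1.0 - p[:m].sum()

# ||p - p(m)||_1 equals 2 * p^c(m) (up to the tiny truncation error at index N).
print(np.abs(p - p_m).sum(), 2 * pc_m)
```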
Now, representing stochastic convergence by $\underset{n}{\overset{s}{\longrightarrow}}$, we establish Proposition 1:
Proposition 1.
The estimator $\tilde{\mathbf{p}}_n$ is a consistent estimator of $\mathbf{p}$, since
$\tilde{\mathbf{p}}_n \underset{n}{\overset{s}{\longrightarrow}} \mathbf{p},$
in accordance with the Weak Law of Large Numbers, see [10,11].
Proof. 
Taking
$m_\varepsilon = \min\left\{m : p^c(m) \le \varepsilon\right\}, \quad \varepsilon > 0,$
so that for $m > m_\varepsilon$, we have
$P\left(\tilde{p}_n^{\,c}(m) \le 2\varepsilon\right) \underset{n}{\longrightarrow} 1,$
as well as
$P\left(\|\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\|_1 \le 4\,p^c(m)\right) \underset{n}{\longrightarrow} 1,$
since
$\|\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\|_1 = 2\,\tilde{p}_n^{\,c}(m).$
Now
$\|\tilde{\mathbf{p}}_n(m) - \mathbf{p}(m)\|_1 \underset{n}{\overset{s}{\longrightarrow}} 0,$
since, see [7,8,12],
$\sqrt{n}\left(\tilde{\mathbf{p}}_n(m) - \mathbf{p}(m)\right) \overset{D}{\longrightarrow} N\left(\mathbf{0}, \mathbf{U}_{p(m)}\right),$
where $\overset{D}{\longrightarrow}$ indicates convergence in distribution, in this case to the normal distribution with null mean vector and covariance matrix
$\mathbf{U}_{p(m)} = \mathbf{D}_{p(m)} - \mathbf{p}(m)\,\mathbf{p}(m)^t,$
where $\mathbf{D}_{p(m)}$ is the diagonal matrix whose principal elements are the components of $\mathbf{p}(m)$. Thus
$P\left(\|\tilde{\mathbf{p}}_n(m) - \mathbf{p}(m)\|_1 \le \varepsilon\right) \underset{n}{\longrightarrow} 1,$
and so
$P\left(\|\tilde{\mathbf{p}}_n - \tilde{\mathbf{p}}_n(m)\|_1 \le 4\varepsilon \;\wedge\; \|\tilde{\mathbf{p}}_n(m) - \mathbf{p}(m)\|_1 \le \varepsilon\right) \underset{n}{\longrightarrow} 1,$
as well as
$P\left(\|\tilde{\mathbf{p}}_n - \mathbf{p}(m)\|_1 \le 5\varepsilon\right) \underset{n}{\longrightarrow} 1,$
whenever $m > m_\varepsilon$.
Now, we also have
$\|\mathbf{p} - \mathbf{p}(m)\|_1 = 2\,p^c(m) \le 2\varepsilon,$
if $m > m_\varepsilon$, then
$P\left(\|\tilde{\mathbf{p}}_n - \mathbf{p}\|_1 \le 7\varepsilon\right) \underset{n}{\longrightarrow} 1,$
which establishes the thesis. □
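As an informal check of Proposition 1 (with an assumed numerable model, truncated and renormalized purely for computation), the $\ell_1$ distance $\|\tilde{\mathbf{p}}_n - \mathbf{p}\|_1$ can be seen to shrink as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed numerable model, truncated at a large index and renormalized for computation.
N = 40
p = 0.5 ** np.arange(1, N + 1)
p = p / p.sum()

for n in (100, 1_000, 10_000, 100_000):
    p_tilde = rng.multinomial(n, p) / n
    print(f"n = {n:6d}   ||p_tilde - p||_1 = {np.abs(p_tilde - p).sum():.4f}")
```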
Corollary 2.
If $w(\mathbf{p})$ is a continuous function of $\mathbf{p}$, we have
$w(\tilde{\mathbf{p}}_n) \underset{n}{\overset{s}{\longrightarrow}} w(\mathbf{p}).$
The thesis follows from Proposition 1 and the Slutsky theorem, see [13,14].
Corollary 3.
With $\mathbf{d}_h(\mathbf{v})$ the vector of the indexes of the $h$ largest components of $\mathbf{v} \in X$, we have, see [10],
$\mathbf{d}_h(\tilde{\mathbf{p}}_n) \underset{n}{\overset{s}{\longrightarrow}} \mathbf{d}_h(\mathbf{p})$
if the $h+1$ largest components of $\mathbf{p}$ are distinct.
Proof. 
When the $h+1$ largest components of $\mathbf{p}$ are distinct, $\mathbf{p}$ will be a continuity point of $\mathbf{d}_h(\cdot)$ and the thesis follows from Corollary 2. □
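A possible implementation sketch of $\mathbf{d}_h(\cdot)$ (the function name and the example vector are ours, for illustration only) shows the index vector of the $h$ largest estimated components matching that of $\mathbf{p}$ for large $n$:

```python
import numpy as np

rng = np.random.default_rng(3)

def d_h(v, h):
    """Indexes (1-based) of the h largest components of v, in decreasing order of value."""
    return np.argsort(-v, kind="stable")[:h] + 1

# Hypothetical p whose h + 1 largest components are distinct.
p = np.array([0.35, 0.25, 0.20, 0.12, 0.08])
h = 3

p_tilde = rng.multinomial(20_000, p) / 20_000
print(d_h(p, h), d_h(p_tilde, h))   # for large n these coincide with high probability
```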
The results in this section belong to the study of consistency in classical Statistical Inference (cSI). These classical inferences are, in most instances, made without regard to the use to which they are to be put. In the next section, we go into consistency for Statistical Decision Theory.

3. Cost Function for Multinomial Models

In Statistical Decision Theory (SDT), the goal is to incorporate more than just the sample data in order to arrive at the optimal decision, unlike in cSI. Knowledge of the possible consequences of a decision is incorporated, and this knowledge is quantified as the cost incurred for each possible decision that is taken. According to [15], Abraham Wald was the first person to thoroughly examine the inclusion of a cost function in statistical analysis.
The cost function represents the costs associated with taking a particular decision. It is a function that maps every possible decision and outcome to a real-valued cost. The cost function is used to evaluate the performance of various decision rules in terms of their expected cost. The goal of statistical decision theory is to identify the decision rule that minimizes the expected cost, see [16].
Now, we go back to the decision problem we presented in the Introduction and consider the cost function for multinomial models.
Let $c(\mathbf{p}, \tilde{\mathbf{p}}_n, d)$, $d \in D$, be the cost for decision $d \in D$, where $\mathbf{p}$ is the vector of probabilities and $\tilde{\mathbf{p}}_n$ the vector of probabilities estimated from the $n$ results. We will assume that this cost is the sum of two non-negative components: $c_0(\mathbf{p}, d)$, which for a given decision $d \in D$ depends only on $\mathbf{p}$, the vector of probabilities, and $c_1(\mathbf{p}, \tilde{\mathbf{p}}_n, d)$, which depends on the estimation errors. Namely, we take
$c(\mathbf{p}, \tilde{\mathbf{p}}_n, d) = c_0(\mathbf{p}, d) + k_d\,\|\tilde{\mathbf{p}}_n - \mathbf{p}\|_1, \quad d \in D,$
with $k_d > 0$, $d \in D$, so
$c_0(\mathbf{p}, d) = \min_{\tilde{\mathbf{p}}_n} c(\mathbf{p}, \tilde{\mathbf{p}}_n, d), \quad d \in D,$
since, as we saw
$\tilde{\mathbf{p}}_n \underset{n}{\overset{s}{\longrightarrow}} \mathbf{p},$
we will have
$c(\mathbf{p}, \tilde{\mathbf{p}}_n, d) \underset{n}{\overset{s}{\longrightarrow}} c_0(\mathbf{p}, d), \quad d \in D.$
Thus, for every $d \in D$, the limit cost will be $c_0(\mathbf{p}, d)$. It is now easy to see that, if there is $d^0(\mathbf{p})$ such that
$c_0\left(\mathbf{p}, d^0(\mathbf{p})\right) \le c_0(\mathbf{p}, d), \quad d \in D,$
and $\tilde{d}_n^{\,0}(\mathbf{p})$ is the decision with the least estimated cost, then we have
$\Pr\left(\tilde{d}_n^{\,0}(\mathbf{p}) = d^0(\mathbf{p})\right) \underset{n}{\longrightarrow} 1.$
Proposition 4.
We have consistency for the cost function $c(\mathbf{p}, \tilde{\mathbf{p}}_n, d) = c_0(\mathbf{p}, d) + k_d\,\|\tilde{\mathbf{p}}_n - \mathbf{p}\|_1$ whenever there is a decision $d^0(\mathbf{p})$ with least limit cost.
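The sketch below (hypothetical limit costs $c_0(\mathbf{p},d)$ and penalties $k_d$ chosen by us) illustrates Proposition 4: the decision minimizing the estimated cost $c_0(\mathbf{p},d) + k_d\,\|\tilde{\mathbf{p}}_n - \mathbf{p}\|_1$ coincides with $d^0(\mathbf{p})$ with probability approaching 1:

```python
import numpy as np

rng = np.random.default_rng(4)

p  = np.array([0.5, 0.3, 0.2])        # hypothetical probability vector
c0 = np.array([2.00, 1.90, 1.95])     # hypothetical limit costs c_0(p, d), d = 1, 2, 3
k  = np.array([1.0, 5.0, 3.0])        # hypothetical penalties k_d > 0

d0 = np.argmin(c0)                    # d^0(p): decision with least limit cost

for n in (100, 10_000, 1_000_000):
    hits = 0
    for _ in range(1_000):
        err = np.abs(rng.multinomial(n, p) / n - p).sum()   # ||p_tilde_n - p||_1
        hits += int(np.argmin(c0 + k * err) == d0)          # least estimated cost decision
    print(f"n = {n:7d}   Pr(d_n^0 = d^0) ~= {hits / 1_000:.3f}")
```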
If the $h+1$ largest components of $\mathbf{p}$ are distinct, as an alternative to the cost function above, we may take
$c(\mathbf{p}, \tilde{\mathbf{p}}_n) = c_0(\mathbf{p}) + c_1\left(\|\mathbf{d}_h(\tilde{\mathbf{p}}_n) - \mathbf{d}_h(\mathbf{p})\|_1\right),$
reobtaining Proposition 4, since
$\|\mathbf{d}_h(\tilde{\mathbf{p}}_n) - \mathbf{d}_h(\mathbf{p})\|_1 \underset{n}{\overset{s}{\longrightarrow}} 0,$
and so we continue to have
$c(\mathbf{p}, \tilde{\mathbf{p}}_n) \underset{n}{\overset{s}{\longrightarrow}} c_0(\mathbf{p}).$
We may also take
$c_1\left(\tilde{\mathbf{p}}_n - \mathbf{p}\right) = \left(\tilde{\mathbf{p}}_n - \mathbf{p}\right)^t \mathbf{M} \left(\tilde{\mathbf{p}}_n - \mathbf{p}\right),$
with $\mathbf{M}$ a positive definite matrix, see [8,9], or
$c_1\left(\tilde{\mathbf{p}}_n - \mathbf{p}\right) = \left(\mathbf{d}_h(\tilde{\mathbf{p}}_n) - \mathbf{d}_h(\mathbf{p})\right)^t \mathbf{M} \left(\mathbf{d}_h(\tilde{\mathbf{p}}_n) - \mathbf{d}_h(\mathbf{p})\right).$
Thus, there is a wide range of possible cost functions. Namely, with $g(\cdot)$ a continuous function,
$\tilde{\gamma}_n = g(\tilde{\mathbf{p}}_n)$
will, according to the Slutsky theorem, be a consistent estimator of $\gamma = g(\mathbf{p})$.
Moreover, if we have a cost function
$c(\gamma, \tilde{\gamma}_n) = c_0(\gamma) + c_1\left(\|\tilde{\gamma}_n - \gamma\|_1\right),$
where $c_0(\gamma)$ is the cost that depends on $\gamma$, and $c_1(\cdot)$ is continuous and such that $c_1(0) = 0$, we can again use the Slutsky theorem to get
$c(\gamma, \tilde{\gamma}_n) \underset{n}{\overset{s}{\longrightarrow}} c_0(\gamma).$
We have thus extended our previous results on $\mathbf{p}$ to any parameter given by a continuous function of $\mathbf{p}$, such as the cost function.
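As a sketch of this extension to parameters $\gamma = g(\mathbf{p})$ (the choice of $g$, here the Simpson index, and of $c_0$ and $c_1$ is ours and purely illustrative), both $\tilde{\gamma}_n$ and the cost converge:

```python
import numpy as np

rng = np.random.default_rng(5)

p = np.array([0.4, 0.3, 0.2, 0.1])    # hypothetical probability vector

def g(v):
    """A continuous function of the probability vector: here the Simpson index (our choice)."""
    return float(np.sum(v ** 2))

def cost(gamma, gamma_n):
    """Hypothetical cost c_0(gamma) + c_1(|gamma_n - gamma|) with c_0 = 1 and c_1(z) = z**2."""
    return 1.0 + (gamma_n - gamma) ** 2

gamma = g(p)
for n in (100, 10_000, 1_000_000):
    gamma_n = g(rng.multinomial(n, p) / n)
    print(f"n = {n:7d}   gamma_n = {gamma_n:.4f}   cost = {cost(gamma, gamma_n):.6f}")
```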

4. Extension on Cost Functions

In this section, we extend the results of Section 3 to a more general form of consistency for the cost function of multinomial models.
For instance, we consider
$\gamma(\varphi) = \sum_{j \in \varphi} p_j,$
the sum of the probabilities of the results with indexes in $\varphi$. These results may be of interest, and so we are led to consider their combined probability. A direct extension of this case is given by
$\gamma(\varphi_1, \ldots, \varphi_k) = \sum_{l=1}^{k} a_l \sum_{j \in \varphi_l} p_j,$
where we consider $k$ sets of results. The coefficients $a_1, \ldots, a_k$ weight the relevance of the corresponding sets of results.
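A minimal sketch of these derived parameters (the index sets $\varphi_l$ and weights $a_l$ below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(6)

p = np.array([0.35, 0.25, 0.20, 0.12, 0.08])       # hypothetical probability vector
p_tilde = rng.multinomial(50_000, p) / 50_000      # its estimator from 50 000 realizations

def gamma(prob, phi):
    """gamma(phi): sum of the probabilities whose (0-based) indexes lie in phi."""
    return float(prob[list(phi)].sum())

def gamma_weighted(prob, phis, a):
    """gamma(phi_1, ..., phi_k) = sum_l a_l * sum_{j in phi_l} p_j."""
    return sum(a_l * gamma(prob, phi) for a_l, phi in zip(a, phis))

phis = [{0, 1}, {2, 3, 4}]     # arbitrary index sets
a = [1.0, 0.5]                 # arbitrary relevance weights
print(gamma_weighted(p, phis, a), gamma_weighted(p_tilde, phis, a))
```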
In general, let us have a sequence $\mathbf{Y}_n$ of observation vectors whose distributions depend on a parameter $\boldsymbol{\theta}$ for which we have a consistent estimator $\tilde{\boldsymbol{\theta}}_n$. Then, if we have a cost function
$c(\boldsymbol{\theta}, \tilde{\boldsymbol{\theta}}_n) = c_0(\boldsymbol{\theta}) + c_1\left(\|\tilde{\boldsymbol{\theta}}_n - \boldsymbol{\theta}\|_1\right),$
with $c_0(\boldsymbol{\theta})$ a continuous function of $\boldsymbol{\theta}$ and $c_1(\cdot)$ also continuous, and such that
$\min c_1\left(\|\mathbf{z}\|_1\right) = c_1(0) = 0,$
we have
$c(\boldsymbol{\theta}, \tilde{\boldsymbol{\theta}}_n) \underset{n}{\overset{s}{\longrightarrow}} c_0(\boldsymbol{\theta}),$
which implies consistency for the cost function. Thus, the two consistency features display the relation we had already found for multinomial models. Namely, we obtain the following result:
Proposition 5.
If we have a consistent estimator $\tilde{\boldsymbol{\theta}}_n$ for a parameter $\boldsymbol{\theta}$, we have consistency for cost functions
$c(\boldsymbol{\theta}, \tilde{\boldsymbol{\theta}}_n) = c_0(\boldsymbol{\theta}) + c_1\left(\|\tilde{\boldsymbol{\theta}}_n - \boldsymbol{\theta}\|_1\right),$
where $c_1(\cdot)$ is continuous with minimum $c_1(0)$.
The extension behind this proposition, together with the possibility of obtaining consistent estimators for a numerable set of parameters (the components of $\mathbf{p}$) from a finite sample, are perhaps the most interesting features of our discussion.

5. Final Remark

In this study, based on limit distributions and considering the vector of probabilities of the multinomial model, we showed, using classical Statistical Inference, that the estimators of the vector of probabilities are consistent. Since classical Statistical Inference does not incorporate knowledge of the possible consequences of a decision, we then used a Statistical Decision Theory approach to quantify the cost incurred for each possible decision, obtaining a cost function for the vector of probabilities. We showed that the estimators of the cost function are consistent.
Our result on the consistency of the estimators of the probabilities leads to consistency of the decision function; with this, we hope to have opened an interesting line of work on multinomial and other models using Statistical Decision Theory.

Author Contributions

Conceptualization, I.A. and J.T.M.; methodology, I.A. and J.T.M.; software, I.A.; formal analysis, J.T.M.; writing—original draft, I.A.; writing—review & editing, I.A. and J.T.M.; supervision, J.T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All relevant data are within the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Le Cam, L. Asymptotic Methods in Statistical Decision Theory, 2nd ed.; Springer: New York, NY, USA, 2012. [Google Scholar]
  2. Liese, F.; Miescke, K.-J. Statistical Decision Theory: Estimation, Testing, and Selection; Springer Science and Business Media: New York, NY, USA, 2008; pp. 1–52. [Google Scholar]
  3. Casella, G.; Berger, R.L. Statistical Inference, 2nd ed.; Cengage Learning: Duxbury, CA, USA, 2005; pp. 232–233. [Google Scholar]
  4. Hogg, R.V.; McKean, J.W.; Craig, A.T. Introduction to Mathematical Statistics, 6th ed.; Pearson Education, Inc.: Upper Saddle River, NJ, USA, 2005; pp. 204–205. [Google Scholar]
  5. Rohatgi, V.K.; Saleh, A.K.M.E. An Introduction to Probability and Statistics, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2015; pp. 189–191. [Google Scholar]
  6. Evans, M.; Hastings, N.; Peacock, B.; Forbes, C. Statistical Distributions, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2011; pp. 135–136. [Google Scholar]
  7. Wilks, S.S. Mathematical Statistics, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1962; p. 261. [Google Scholar]
  8. Akoto, I.; Mexia, J.T.; Marques, F.J. Asymptotic results for multinomial models. Symmetry 2021, 13, 2173. [Google Scholar] [CrossRef]
  9. Schott, R.J. Matrix Analysis for Statistics, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2016; p. 12. [Google Scholar]
  10. Akoto, I. Asymptotic Treatment for Multinomial Models and Applications. Ph.D. Thesis, NOVA University Lisbon, School of Science and Technology, Caparica, Portugal, 2022. [Google Scholar]
  11. Resnick, S. A Probability Path; Springer Science and Business Media: New York, NY, USA, 2019; pp. 204–208. [Google Scholar]
  12. Van der Vaart, A.W. Asymptotic Statistics, 2nd ed.; Cambridge University Press: Cambridge, UK, 2000; pp. 25–34. [Google Scholar]
  13. Kallenberg, O. Foundations of Modern Probability, 2nd ed.; Springer: Jersey City, NJ, USA, 1997; p. 571. [Google Scholar]
  14. Slutsky, E.E. Qualche proposizione relative alla teoria delle funzioni aleatorie. Giorn. Dell’Istituto Ital. Degli Attuari 1937, 8, 183–199. [Google Scholar]
  15. Wald, A. Statistical decision functions. Ann. Math. Stat. 1949, 20, 165–205. [Google Scholar] [CrossRef]
  16. Berger, J.O. Statistical Decision Theory and Bayesian Analysis, 2nd ed.; Springer Science and Business Media: New York, NY, USA, 1985; pp. 1–2. [Google Scholar]
