Communication

Pearson-Fisher Chi-Square Statistic Revisited

by Sorana D. Bolboacă, Lorentz Jäntschi, Adriana F. Sestraş, Radu E. Sestraş and Doru C. Pamfil

1 “Iuliu Haţieganu” University of Medicine and Pharmacy Cluj-Napoca, 6 Louis Pasteur, Cluj-Napoca 400349, Romania
2 University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca, 3-5 Mănăştur, Cluj-Napoca 400372, Romania
3 Fruit Research Station, 3-5 Horticultorilor, Cluj-Napoca 400454, Romania
* Author to whom correspondence should be addressed.
Information 2011, 2(3), 528-545; https://doi.org/10.3390/info2030528
Submission received: 22 July 2011 / Revised: 20 August 2011 / Accepted: 8 September 2011 / Published: 15 September 2011
(This article belongs to the Section Information Theory and Methodology)

Abstract

The Chi-square test (χ2 test) is a family of tests based on a series of assumptions and is frequently used in the statistical analysis of experimental data. The aim of our paper was to present solutions to common problems encountered when applying the Chi-square tests for testing goodness-of-fit, homogeneity and independence. The main characteristics of these three tests are presented along with various problems related to their application. The main problems identified in the application of the goodness-of-fit test were as follows: defining the frequency classes, calculating the X2 statistic, and applying the χ2 test. Several solutions were identified, presented and analyzed. Three different equations were identified as being able to determine the contribution of each factor under three hypotheses (minimization of the variance, minimization of the squared coefficient of variation and minimization of the X2 statistic) in the application of the Chi-square test of homogeneity. The best solution was directly related to the distribution of the experimental error. The Fisher exact test proved to be the “golden test” in analyzing independence, while the Yates and Mantel-Haenszel corrections could be applied as alternative tests.

1. Introduction

Statistical instruments are used to extract knowledge from the observation of real-world phenomena; as Fisher suggested, “… no progress is to be expected without constant experience in analyzing and interpreting observational data of the most diverse types” (where observational data are seen as information) [1]. Moreover, the amount of information in an estimate (obtained on a sample) is directly related to the amount of information in the data [2]. Fisher pointed out in [2] that the scientific information latent in any set of observations can be brought out by statistical analysis whenever the experimental design is conducted so as to maximize the information obtained. The analysis of information related to associations among data requires specific instruments, the Chi-square test being one of them. The χ2 test was introduced by K. Pearson in 1900 [3]. A significant modification to Pearson's χ2 test was introduced by R.A. Fisher in 1922 [4] (the number of degrees of freedom was decreased by one when the test is applied to contingency tables). Another correction made by Fisher took into account the number of unknown parameters associated with the theoretical distribution, when the parameters are estimated from central moments [5].

The Chi-square test introduced by K. Pearson has been the subject of much debate. A series of papers analyzed Pearson's test [6,7], and its problems were tackled in [8,9].

It is well known that Pearson's Chi-square (χ2) is a family of tests with the following assumptions [10,11]: (1) the data are randomly drawn from a population; (2) the sample size is sufficiently large (the application of the Chi-square test to a small sample could lead to an unacceptable rate of type II errors, i.e., accepting the null hypothesis when it is actually false [12-14]; there is no accepted cut-off, the recommended minimum sample size varying from 20 to 50); and (3) the cell counts are adequate: no more than 1/5 of the expected values are smaller than five and no cell has a zero count [15,16]. The source of these rules seems to be W. G. Cochran, and they appear to have been arbitrarily chosen [17].

Yates' correction is applied when the third assumption is not met [18]. Fisher's exact test is the alternative when Yates' correction is not acceptable [19].

Koehler and Larntz suggested the use of at least three categories when the number of observations is at least 10; moreover, they suggested that the square of the number of observations be at least 10 times the number of categories [20].
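Taken together, the sample-size and cell-count rules above can be checked mechanically before the test is applied. The following Python sketch is one possible formulation (the thresholds are the rules of thumb quoted above, not properties of the test itself):

```python
import numpy as np

def check_chi_square_assumptions(observed, min_n=20):
    """Rule-of-thumb checks for a two-way table of observed counts."""
    observed = np.asarray(observed, dtype=float)
    n = observed.sum()
    # Expected counts under independence, from the marginal sums
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / n
    warnings = []
    if n < min_n:
        warnings.append(f"sample size {n:.0f} is below the {min_n} cut-off")
    if (expected < 5).mean() > 0.2:
        warnings.append("more than 1/5 of the expected counts are below 5")
    if (observed == 0).any():
        warnings.append("the table contains cells with zero count")
    if n ** 2 < 10 * observed.size:  # Koehler-Larntz rule
        warnings.append("n^2 is smaller than 10 times the number of categories")
    return warnings

print(check_chi_square_assumptions([[8, 2], [1, 9]]))
```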

The Chi-square test has been applied in all research areas. Its main uses are: goodness-of-fit [21-25], association/independence [26-29], homogeneity [30-33], classification [34-37], etc.

The aim of our paper was to present solutions to common problems encountered when applying the Chi-square tests for goodness-of-fit, homogeneity and independence.

2. Material and Methods

The most frequently used Chi-square tests were presented (Table 1) and solutions to frequent problems were provided and discussed.

The main characteristics of these tests are as follows:

  • Goodness-of-fit (Pearson's Chi-Square Test [3]):

    • Is used to study similarities between groups of categorical data.

    • Tests if a sample of data came from a population with a specific distribution (compares the distribution of a variable with another distribution when the expected frequencies are known) [38].

    • Can be applied to any univariate distribution by calculating its cumulative distribution function (CDF).

    • Has as alternatives the Anderson-Darling [39] and Kolmogorov-Smirnov [40] goodness-of-fit tests.

The agreement between observation and hypothesis is analyzed by dividing the observations into a defined number of frequency classes (f). The X2 statistic is calculated with the formula presented in Equation (1).

$$X^2 = \sum_{i=1}^{f} \frac{(O_i - E_i)^2}{E_i} \sim \chi^2(f - t - 1) \qquad (1)$$

where X2 = value of the Chi-square statistic; χ2 = value of the Chi-square parameter from the Chi-square distribution; Oi = experimental (observed) frequency associated with the ith frequency class; Ei = expected frequency calculated from the theoretical distribution law for the ith frequency class; t = number of parameters of the theoretical distribution estimated from central moments.

The probability of rejecting the null hypothesis is calculated from the theoretical χ2 distribution. The null hypothesis is rejected when the associated probability, p = 1 − χ2CDF(X2, f − t − 1), is lower than 5%.
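As an illustration, the whole procedure can be sketched in a few lines of Python; the observed counts and the fitted Poisson model below are hypothetical, chosen only to show the computation of X2 and of the f − t − 1 degrees of freedom (scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

# Hypothetical counts for f = 5 frequency classes: 0, 1, 2, 3, >= 4 events
observed = np.array([102, 180, 160, 95, 63])
classes = np.arange(len(observed))
n = observed.sum()
lam = (classes * observed).sum() / n   # crude moment estimate (t = 1 parameter)

p = stats.poisson.pmf(classes, lam)
p[-1] = 1 - stats.poisson.cdf(classes[-2], lam)   # last class collects the tail
expected = n * p

x2 = ((observed - expected) ** 2 / expected).sum()   # Equation (1)
df = len(observed) - 1 - 1                           # f - t - 1, with t = 1
p_value = 1 - stats.chi2.cdf(x2, df)                 # reject H0 when p_value < 0.05
print(f"X2 = {x2:.3f}, df = {df}, p = {p_value:.4f}")
```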

The Chi-square test is the best-known statistic used to test the agreement between observed and theoretical distributions, independence and homogeneity. Defining the applicability domain of the Chi-square test is a complex problem [38].

At least three problems occur when the Chi-square test is applied in order to compare observed and theoretical distributions:

  • Defining the frequency classes.

  • Calculating the X2 statistic.

  • Applying the χ2 test.

  • The test of homogeneity:

    • Is used to analyze if different populations are similar (homogenous or equal) in terms of some characteristics.

    • Is applied to verify the homogeneity of: data, proportions, variance (more than two variances are tested; for two variances the F test is applied), error variance, sampling variances.

The Chi-square test of homogeneity is used to determine whether frequency counts are identically distributed across different populations or across different sub-groups of the same population. An important assumption, which links the test of homogeneity to the test of independence, is made when the populations come from a contingency of two or more categories: under homogeneity, each cell should be observed with a frequency proportional to the product of the probabilities given by its categories (the assumption of independence between categories). When the number of categories is two, the expectations are calculated using the Ei,j formula, i.e., the expected value (for the Chi-square test of homogeneity) or expected frequency count (for the Chi-square test of independence) for the (i,j) pair of factors [41].

The observed contingency table is constructed; the values for the first factor/population/subgroup are in the rows and the values for the second variable/factor/population/subgroup are in the columns. The observed frequencies are counted at the intersection of rows with columns and the hypothesis of homogeneity is tested.

The value of the X2 statistic is computed using the formula presented in Equation (2).

$$X^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - E_{i,j})^2}{E_{i,j}} \sim \chi^2((r - 1)(c - 1)) \qquad (2)$$
where r = number of rows of the contingency table; c = number of columns; 1 ≤ i ≤ r and 1 ≤ j ≤ c index the observations associated with the first and the second factor; Oi,j = observed value (for the Chi-square test of homogeneity) or observed frequency count (for the Chi-square test of independence) for the (i,j) pair of factors; Ei,j = the corresponding expected value or expected frequency count; X2 = value of the Chi-square statistic; χ2 = Chi-square critical value (from the Chi-square distribution).
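A minimal Python sketch of Equation (2) follows; the counts are invented for illustration, and scipy's chi2_contingency is used only as a cross-check of the manual computation:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

observed = np.array([[30, 14, 6],
                     [16, 22, 12]])   # hypothetical r x c table of counts
row, col, total = observed.sum(axis=1), observed.sum(axis=0), observed.sum()
expected = np.outer(row, col) / total   # E_ij from the marginal sums

x2 = ((observed - expected) ** 2 / expected).sum()   # Equation (2)
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p = 1 - chi2.cdf(x2, df)

# The library routine should agree with the manual computation
x2_lib, p_lib, df_lib, expected_lib = chi2_contingency(observed, correction=False)
assert np.isclose(x2, x2_lib)
```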

  • The test of independence (also known as Chi-square test of association):

    • Is used to determine whether two characteristics are dependent or not.

    • Compares the frequencies of one nominal variable for different values of a second nominal variable.

    • Is an alternative to the G-test of independence (also known as the likelihood-ratio Chi-square test) [42].

    • Fisher's exact test of independence [19] is preferred whenever small expected values are present.

The chi-square test of independence is applied in order to compare frequencies of nominal or ordinal data for a single population/sample (two variables at the same time).

The Chi-square test for independence also faces some difficulties when applied to experimental data [39]. Fisher proposed the exact test [19,43] as an alternative to the Chi-square test; the Fisher exact test is based on the calculation of marginal probabilities, for which an exact calculation formula exists only for 2 × 2 contingency tables.
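For a 2 × 2 table the marginal probabilities have a closed (hypergeometric) form, and scipy exposes the test directly; a minimal sketch with invented counts:

```python
from scipy.stats import fisher_exact

table = [[10, 3],   # hypothetical 2 x 2 contingency table
         [2, 15]]
odds_ratio, p_two_sided = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_two_sided:.4f}")
```

For larger tables the exact probabilities must be enumerated or approximated, which is one reason the Chi-square approximation remains the default choice.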

Glass and Hopkins [45] consider that the Chi-square test of association is equivalent to the Chi-square test of independence and to the Chi-square test of homogeneity.

3. Results and Discussion

3.1. Chi-Square Test of Goodness-of-Fit

The first problem of the Chi-square test of goodness-of-fit is how to establish the number of frequency classes. At least two approaches could be applied:

  • The number of frequency classes (a discrete number) is computed from Hartley's entropy [46] of observed versus expected data: log2(2n), where n = number of observations. The EasyFit software (MathWave Technologies, http://www.mathwave.com) uses this approach.

  • The number of frequency classes is obtained from the histogram of observed values used as a density estimator [47]; an optimality criterion then gives the width of the classes. For example, Dataplot (National Institute of Standards and Technology, http://www.itl.nist.gov/div898/software/dataplot.html) automatically generates frequency classes with this method: the width of a frequency class is 0.3·s (where s = standard deviation of the sample), the lower and upper boundaries are given by m ± 6·s (where m = arithmetic mean), and the marginal frequency classes are omitted.

One rule of thumb suggests dividing the sample into a number of frequency classes equal to 2·n2/5 (where n = sample size) [48].
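The class-count rules above are easy to compare on a given sample; a small Python sketch (the sample is synthetic, and the 0.3·s width with m ± 6·s bounds mirrors the Dataplot defaults described above):

```python
import numpy as np

def candidate_classes(sample):
    n = len(sample)
    m, s = np.mean(sample), np.std(sample, ddof=1)
    k_entropy = int(np.ceil(np.log2(2 * n)))   # Hartley-entropy rule: log2(2n)
    k_thumb = int(np.ceil(2 * n ** (2 / 5)))   # rule of thumb: 2 * n^(2/5)
    # Dataplot-style classes: width 0.3*s over the range m +/- 6*s
    edges = np.arange(m - 6 * s, m + 6 * s + 0.3 * s, 0.3 * s)
    return k_entropy, k_thumb, edges

rng = np.random.default_rng(7)
k1, k2, edges = candidate_classes(rng.normal(size=200))
print(k1, k2, len(edges) - 1)   # three quite different class counts
```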

The second problem refers to the width of the frequency classes. Two approaches could be applied here:

  • Data could be grouped into frequency classes of equal probability (theoretical or observed). This approach is frequently used when the observed data are grouped.

  • Data could be grouped into intervals of equal width.

The third problem is the number of observations in each frequency class. Every class must contain at least five observations; otherwise, the frequencies of two neighbouring classes are pooled, as sketched below.
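A sketch of this pooling step, assuming classes are merged with their nearest neighbour until every expected count reaches five (the direction of merging is a convention, not part of the rule):

```python
def pool_sparse_classes(observed, expected, min_expected=5):
    """Merge adjacent frequency classes until every expected count
    reaches min_expected; degrees of freedom shrink accordingly."""
    obs, exp = list(observed), list(expected)
    i = 0
    while i < len(exp) and len(exp) > 1:
        if exp[i] < min_expected:
            j = i + 1 if i + 1 < len(exp) else i - 1   # neighbouring class
            obs[j] += obs[i]
            exp[j] += exp[i]
            del obs[i]
            del exp[i]
            i = 0   # rescan from the start after each merge
        else:
            i += 1
    return obs, exp

print(pool_sparse_classes([9, 3, 1, 30], [8.0, 4.0, 2.0, 29.0]))
```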

3.2. Chi-Square Test of Homogeneity

The homogeneity of the values associated with a class (a row or a column of the contingency table) can be investigated by decomposing the X2 expression (see Equation (3)); the same decomposition also yields a hierarchy of the irregularities across the contingency table.

$$X^2_c = \sum_{i=1}^{r} \frac{(O_{i,c} - E_{i,c})^2}{E_{i,c}} \sim \chi^2(r - 1); \qquad X^2_r = \sum_{j=1}^{c} \frac{(O_{r,j} - E_{r,j})^2}{E_{r,j}} \sim \chi^2(c - 1) \qquad (3)$$

One assumption is that the observations Oi,j result from the multiplicative effect of two factors; repeated observations approximate this multiplicative effect better. Thus, the formula for the expected frequencies (Ei,j [43]) follows from the multiplication of the factors and is presented in Equation (4).

$$E_{i,j} = \left(\sum_{k=1}^{c} O_{i,k}\right) \cdot \left(\sum_{k=1}^{r} O_{k,j}\right) \bigg/ \left(\sum_{i=1}^{r} \sum_{j=1}^{c} O_{i,j}\right) \qquad (4)$$
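A short Python sketch of Equations (3) and (4): the per-row and per-column X2 contributions, computed from the Equation (4) expectations, rank the parts of the table by their departure from homogeneity (the counts are again invented):

```python
import numpy as np

observed = np.array([[18.0, 30.0, 12.0],
                     [24.0, 22.0, 34.0]])   # hypothetical table
# Equation (4): expected values from the marginal sums
expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()

contrib = (observed - expected) ** 2 / expected   # cell-wise terms of X^2
x2_per_row = contrib.sum(axis=1)   # Equation (3): one X^2 per row, df = c - 1
x2_per_col = contrib.sum(axis=0)   # Equation (3): one X^2 per column, df = r - 1
print(np.argsort(x2_per_col)[::-1])   # columns ranked by irregularity
```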

Three mathematical assumptions could be formulated in terms of the squared error of an observation, (Oi,j − Ei,j)2:

  • The measurement is affected by chance errors in absolute value (S2, Equation (5));

  • The measurement is affected by chance errors in relative value (CV2, Equation (6));

  • The measurement is affected by chance errors on a scale intermediate between absolute and relative errors (X2, Equation (7)).

The first hypothesis (chance errors, absolute values) leads mathematically to the minimization of the variance (S2) obtained between model and observation.

$$S^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} (O_{i,j} - a_i b_j)^2 = \min \qquad (5)$$

where ai (1 ≤ i ≤ r) = contribution of the first factor and bj (1 ≤ j ≤ c) = contribution of the second factor to the expected value Ei,j = ai·bj.

The second hypothesis (chance errors, relative values) leads to the minimization of the squared coefficient of variation (CV2) (see Equation (6)).

$$CV^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - a_i b_j)^2}{(a_i b_j)^2} = \min \qquad (6)$$

One possible solution for the third hypothesis is the minimization of the X2 statistic (see Equation (7)).

$$X^2 = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{i,j} - a_i b_j)^2}{a_i b_j} = \min \qquad (7)$$

The contribution of each factor (A = (ai)1≤i≤r and B = (bj)1≤j≤c) could be determined through the minimization of the quantities given by Equations (5)–(7). The condition in Equation (8) was applied in order to minimize the values from Equations (5)–(7).

$$\left(\frac{\partial f(a_i, b_j)}{\partial a_i} = 0\right)_{1 \le i \le r}; \qquad \left(\frac{\partial f(a_i, b_j)}{\partial b_j} = 0\right)_{1 \le j \le c} \qquad (8)$$
where f(ai, bj) denotes the expression of S2, CV2 or X2 given in Equations (5)–(7), respectively.

The calculations yielded the following (Equations (9)–(11)):

$$a_i = \frac{\sum_{j=1}^{c} b_j O_{i,j}}{\sum_{j=1}^{c} b_j^2},\; i = 1..r; \qquad b_j = \frac{\sum_{i=1}^{r} a_i O_{i,j}}{\sum_{i=1}^{r} a_i^2},\; j = 1..c \qquad (9)$$
$$a_i = \frac{\sum_{j=1}^{c} O_{i,j}^2 / b_j^2}{\sum_{j=1}^{c} O_{i,j} / b_j},\; i = 1..r; \qquad b_j = \frac{\sum_{i=1}^{r} O_{i,j}^2 / a_i^2}{\sum_{i=1}^{r} O_{i,j} / a_i},\; j = 1..c \qquad (10)$$
$$a_i^2 = \frac{\sum_{j=1}^{c} O_{i,j}^2 / b_j}{\sum_{j=1}^{c} b_j},\; i = 1..r; \qquad b_j^2 = \frac{\sum_{i=1}^{r} O_{i,j}^2 / a_i}{\sum_{i=1}^{r} a_i},\; j = 1..c \qquad (11)$$

The relations presented in Equations (9)–(11) admit infinitely many solutions, and the family of solutions is close to the one given by Equation (4). Equation (4) can be rewritten as presented in Equation (12).

$$a_i b_j = \left(\sum_{k=1}^{c} O_{i,k}\right) \left(\sum_{k=1}^{r} O_{k,j}\right) \bigg/ \left(\sum_{i=1}^{r} \sum_{j=1}^{c} O_{i,j}\right) \qquad (12)$$

Dealing directly with Equations (9)–(11) without using Equation (12) is ineffective. For example, substituting r = 2 and c = 3 into Equation (9) leads to Equation (13):

$$\left(\frac{a_2}{a_1}\right)^2 + \frac{(O_{1,1}^2 + O_{1,2}^2 + O_{1,3}^2) - (O_{2,1}^2 + O_{2,2}^2 + O_{2,3}^2)}{O_{1,1} O_{2,1} + O_{1,2} O_{2,2} + O_{1,3} O_{2,3}} \left(\frac{a_2}{a_1}\right) - 1 = 0 \qquad (13)$$

This is solvable in (a2/a1). Thus, there are infinitely many solutions (for any non-null value of a1 there is a value of a2 that verifies Equation (13)), and the degree of Equation (13) is given by min(r,c). The equations obtained by direct substitution become more complicated as r and c increase. For example, for r = 2 and c = 3 the substitutions in Equation (11) lead to the relation presented in Equation (14), an equation of fifth degree (r + c):

$$\begin{split} O_{1,1}^2 O_{1,2}^2 (O_{1,1}^2 - O_{1,2}^2) \left(\frac{a_2}{a_1}\right)^5 + (O_{1,1}^4 O_{2,2}^2 - O_{1,2}^4 O_{2,1}^2) \left(\frac{a_2}{a_1}\right)^4 + 2 O_{1,1}^2 O_{1,2}^2 (O_{2,2}^2 - O_{2,1}^2) \left(\frac{a_2}{a_1}\right)^3 \\ + 2 O_{2,1}^2 O_{2,2}^2 (O_{1,2}^2 - O_{1,1}^2) \left(\frac{a_2}{a_1}\right)^2 + (O_{1,2}^2 O_{2,1}^4 - O_{1,1}^2 O_{2,2}^4) \left(\frac{a_2}{a_1}\right) + O_{2,2}^2 O_{2,1}^2 (O_{2,1}^2 - O_{2,2}^2) = 0 \end{split} \qquad (14)$$

The indirect way to solve the relations in Equations (9)–(11) is by successive approximations starting from the solution offered by Equation (12): Equation (12) provides the initial approximation, and at every step the previous values are substituted into the right-hand sides of Equations (9)–(11) in order to obtain the new approximations.

The method of successive approximations converged rapidly towards the optimal solution: three iterations were sufficient to obtain a residual value of 282.11735 for the relation presented in Equation (9), and from the third iteration onwards the residual changed only at the fifth decimal. For the relation presented in Equation (11), the same quality of solution was obtained after the fourth iteration.
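The scheme can be sketched compactly in Python; the update rules are Equations (9)–(11), the starting point is the Equation (12) estimate, and the fixed iteration count stands in for the convergence check described above:

```python
import numpy as np

def fit_factors(O, rule="S2", iters=10):
    """Successive approximations for (a_i, b_j), Equations (9)-(11),
    starting from the Equation (12) contingency estimate."""
    O = np.asarray(O, dtype=float)
    a = O.sum(axis=1) / np.sqrt(O.sum())   # one way to split Eq. (12) into a_i * b_j
    b = O.sum(axis=0) / np.sqrt(O.sum())
    for _ in range(iters):
        if rule == "S2":      # Equation (9): minimization of S^2
            a = (O * b).sum(axis=1) / (b ** 2).sum()
            b = (O.T * a).sum(axis=1) / (a ** 2).sum()
        elif rule == "CV2":   # Equation (10): minimization of CV^2
            a = (O ** 2 / b ** 2).sum(axis=1) / (O / b).sum(axis=1)
            b = (O.T ** 2 / a ** 2).sum(axis=1) / (O.T / a).sum(axis=1)
        else:                 # Equation (11): minimization of X^2
            a = np.sqrt((O ** 2 / b).sum(axis=1) / b.sum())
            b = np.sqrt((O.T ** 2 / a).sum(axis=1) / a.sum())
    return np.outer(a, b)     # fitted table of a_i * b_j values
```

Applied to the counts of Table 2 below, the three rules should reproduce, within rounding, the fitted values of Tables 4–6.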

The experimental data reported by Fisher [4] were used for exemplification (Table 2). The values suggested by Equation (12) for (aibj)1≤i≤6; 1≤j≤12 are shown in Table 3.

The values obtained when the iterative approach was applied to Equations (9)–(11) are presented in Tables 4–6. A summary of the results of all four approaches is given in Table 7.

The analysis of the results presented in Table 7 revealed that each method defined in Equations (9)–(11) improves the value of its own objective sum compared with the expression defined by Equation (12); the methods provided by Equations (9)–(11) can thus be regarded as corrections of Equation (12). The relation presented in Equation (9) offers a better solution than Equation (12) under the hypothesis of experimental errors uniformly distributed among classes (absolute experimental error). The relation presented in Equation (10) obtained a better solution than Equation (12) under the hypothesis of experimental errors proportional to the magnitude of the observed phenomena (relative experimental error). The relation presented in Equation (11) obtained a better solution than Equation (12) when the aim is to minimize the Pearson-Fisher X2 statistic (Pearsonian expression of type III [4], p. 337).

The values of all three types of experimental error (squared absolute S2, squared relative CV2 and Pearson's X2) for the four analyzed cases are presented in Table 7 (theoretical frequency estimated from the contingency table, Equation (12); through minimization of the squared absolute error, Equation (9); through minimization of the squared relative error, Equation (10); and through minimization of the Pearson-Fisher statistic, Equation (11)); the values were obtained in a design of experiments with two independent factors (type of treatment and potato variety, abbreviated as factors A and B). This experiment allowed the representation of the Euclidean distances between the obtained results (see Figure 1).

The experimental errors estimated by Equations (9)–(11) are represented in Figure 1 using the Snyder triangle [49] (a diagram frequently used in chromatography to represent three or more parameters which depend on two factors).

Figure 1 was obtained by setting the representation at the same scale of error area relative to the two factors (the distance between the coordinates of the experimental errors for the hypotheses S2 = min and CV2 = min was used as reference). The coordinates for the hypothesis X2 = min were obtained by maximizing the error area (maximization of the area of the A, V and X triangle). The coordinates of the contingency were obtained so that its projections on the sides of the triangle split the sides in the ratios observed among the differences in Table 7.

The graphical representation in Figure 1 provides qualitative remarks regarding the contingency model defined in Equation (12) in relation with experimental errors:

  • The contingency area intersects the error areas only through the absolute square error. Therefore, the contingency defined by Equation (12) assured the agreement between observation and model for the absolute square error only (one of the three types of errors included in the study).

  • The triangle of the X2 statistic variation intersects only the X2 statistic triangle. This fact recommends the use of the optimization defined in Equation (5) [5] or of the one defined in Equation (7) [39]. Moreover, it shows why the Chi-square test is more exposed to type I errors (the null hypothesis that the row variable is not related to the column variable is rejected even if this hypothesis is true) [50] compared to the Kolmogorov-Smirnov [40,51] and Anderson-Darling [39,44] tests.

The analysis of the error distributions obtained from the above association analysis is presented in the Supplementary Material.

The relative position of the solution proposed in Equation (12) could be represented in relation to the optimal values obtained using Equations (9)–(11). Therefore, the values presented in Table 7 (last row) were re-arranged and then expressed after being divided by their minimum values. The results are presented in Table 8.

Figure 2 contains the representation of the relative values of the errors (error excess) in the coordinates defined by the values of S2, CV2 and X2 for the results obtained through simple estimation (E, Equation (12)), minimization of the absolute squared error (S2 = min, Equation (9)), minimization of the relative squared error (CV2 = min, Equation (10)) and minimization of the X2 statistic (X2 = min, Equation (11)).

The representation in Figure 2 is consistent with the projections in the areas illustrated in Figure 1. Figure 2 shows that the solution proposed by Equation (12) is very close to the solutions proposed by Equations (9) and (11); moreover, it is intermediate between them and far from the solution proposed by Equation (10).

3.3. Chi-Square Test of Independence

A single degree of freedom is known to exist for a 2 × 2 contingency table.

Table 9 presents such a situation in which the restrictions come from the sums of observations.

The probability of observing the situation presented in Table 9 is given by the multinomial distribution (Equation (15)). The value of the Chi-square statistic (X2) is given by the relation presented in Equation (16).

$$p_{MN}(x; n_1, n_2, n_3) = \frac{n_1! \, n_2! \, n_3! \, (n_2 + n_3 - n_1)!}{x! \, (n_1 - x)! \, (n_2 - x)! \, (n_3 - n_1 + x)! \, (n_2 + n_3)!} \qquad (15)$$
$$X^2(x; n_1, n_2, n_3) = \frac{(x n_2 + x n_3 - n_1 n_2)^2 (n_2 + n_3)}{n_1 n_2 n_3 (n_2 + n_3 - n_1)} \qquad (16)$$

The range in which x could take values is [0, min(n1, n2)].

In order to exemplify this problem, the experimental data reported by Fisher in 1935 [19] (n1 = 13, n2 = 12, n3 = 18) were analyzed, with x ranging from 0 to 12 and an observed value of x = 10. The value of the X2 statistic (Equation (16)) is represented in Figure 3.

As can be observed from Figure 3, the space of possible observations of the X2 statistic as a function of the independent variable x is discrete. The observed value (x = 10) is situated in the vicinity of one boundary (x = 12), having only two less favorable observations (with an X2 value higher than the observed one) in the same vicinity (x = 11 and x = 12) and one less favorable observation in the opposite vicinity (x = 0).

Two possible approaches could be applied in relation to the objective of the comparison in a contingency table:

  • If the statistic counts all distances from homogeneity higher than the observed one, in any direction, then the probability associated with the observation is obtained by cumulating the probabilities for x = 0, x = 10, x = 11 and x = 12 (red and blue dots in Figures 3 and 4).

  • If the statistic counts only distances from homogeneity higher than the observed one, strictly in the direction of the observation, then the probability associated with the observation is obtained by cumulating the probabilities for x = 10, x = 11 and x = 12 (red dots in Figures 3 and 4).

Figure 4 presents graphically the probability of the observation (calculated from Equation (15)).

Table 10 presents the values of three probabilities: the probability from the χ2 distribution (pX2), the probability of observing a higher distance from homogeneity in the direction of the observed value (pO2) and the probability of observing a higher distance from homogeneity in any direction (pD2). The probability obtained from the χ2 distribution (pX2) estimates the latter, pD2.

Table 10 shows how far the Chi-square test is in error when the values in the contingency table are far from the imposed conditions on expected counts or frequencies (no more than 20% of the cells in the contingency table should have counts/frequencies lower than 5). Table 10 also shows how, in this case, the Chi-square test is exposed to type I errors (giving a lower observation probability than the real one; the risk is to accept the alternative hypothesis even if it is not true).
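A Python sketch reproducing the Table 10 computation from Equations (15) and (16), on Fisher's data (n1 = 13, n2 = 12, n3 = 18, observed x = 10); the hypergeometric form used for pMN is algebraically identical to Equation (15):

```python
from math import comb
from scipy.stats import chi2

n1, n2, n3, x_obs = 13, 12, 18, 10

def p_mn(x):   # Equation (15), written in hypergeometric form
    return comb(n1, x) * comb(n2 + n3 - n1, n2 - x) / comb(n2 + n3, n2)

def x2(x):     # Equation (16)
    return ((x * (n2 + n3) - n1 * n2) ** 2 * (n2 + n3)) / (
        n1 * n2 * n3 * (n2 + n3 - n1))

support = range(0, min(n1, n2) + 1)
p_x2 = 1 - chi2.cdf(x2(x_obs), df=1)                        # pX2
p_o2 = sum(p_mn(x) for x in support if x >= x_obs)          # tail in the observed direction
p_d2 = sum(p_mn(x) for x in support if x2(x) >= x2(x_obs))  # tail in any direction
print(p_x2, p_o2, p_d2)
```

The sums p_o2 and p_d2 should match the pO2 (x2 ≥ X2) and pD2 (x2 ≥ X2) rows of Table 10.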

Frank Yates proposed in 1934 [18] a continuity correction of the statistical significance in a contingency table. When this correction is applied to Equations (1)–(3), a value of 0.5 (the middle of the frequency interval) is subtracted from the absolute difference between the observed and the expected frequencies under the hypothesis of independence. Mantel and Haenszel proposed in 1959 [52] a correction of the Chi-square test consisting of dividing its value by n/(n − 1), i.e., multiplying it by (n − 1)/n, where n = number of observations.
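Both corrections can be sketched on a 2 × 2 table; scipy applies the Yates correction via `correction=True`, while the Mantel-Haenszel rescaling is written out by hand (the counts are invented):

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

table = np.array([[10, 3],
                  [2, 15]])   # hypothetical 2 x 2 counts
n = table.sum()

x2_plain, p_plain, df, _ = chi2_contingency(table, correction=False)
x2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)   # Yates

x2_mh = x2_plain * (n - 1) / n          # Mantel-Haenszel rescaling
p_mh = 1 - chi2.cdf(x2_mh, df)
print(p_plain, p_yates, p_mh)
```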

4. Conclusions

The application of the Chi-square test is directly related to a set of assumptions and to the design of the experiment. Three problems were identified in the application of the Chi-square goodness-of-fit test, and solutions were identified, presented and analyzed.

Three different equations were identified as able to determine the contribution of each factor under three hypotheses (minimization of the variance, minimization of the squared coefficient of variation, and minimization of the X2 statistic) in the application of the Chi-square test of homogeneity. The best solution proved to be directly related to the distribution of the experimental error.

The Fisher exact test proved to be the “golden test” in analyzing independence, while the Yates and Mantel-Haenszel corrections could be applied as alternative tests.

Supplementary Material

information-02-00528-s001.pdf
Figure 1. Euclidean distances among the estimations of experimental errors.
Figure 2. Position of the empirical estimation (Equation (12)) within the minimum relative errors (Equations (9)–(11)).
Figure 3. Value of the X2 statistic as a function of the independent observation x.
Figure 4. Statistical probability of the observed value according to the observable.
Table 1. Summary of Chi-square tests.

Type | Aim | Hypotheses | Statistic | df | H0 acceptance rule
Goodness-of-fit | One sample; compares the expected and observed values to determine how well the experimenter's predictions fit the data. | H0: the observed values are equal to the theoretical (expected) values (the data follow the assumed distribution). Ha: the observed values are not equal to the theoretical (expected) values (the data do not follow the assumed distribution). | X2 = Σi (Oi − Ei)2/Ei | k − 1 | X2 ≤ χ2(1 − α)
Homogeneity | Two different populations (or sub-groups); applied to one categorical variable. | H0: the investigated populations are homogenous. Ha: the investigated populations are not homogenous. | X2 = Σi Σj (Oi,j − Ei,j)2/Ei,j | (r − 1)(c − 1) | X2 ≤ χ2(1 − α)
Independence | One population; type of variables: nominal, dichotomous, ordinal or grouped interval; each population is at least 10 times as large as its respective sample [21]. | Research hypothesis: the two variables are dependent (related). H0: there is no association between the two variables (the two variables are independent). Ha: there is an association between the two variables. | X2 = Σi Σj (Oi,j − Ei,j)2/Ei,j | (r − 1)(c − 1) | X2 ≤ χ2(1 − α)
Table 2. Experimental values: response to fertilization with manure on different potato varieties.

TV | UD | KK | KP | TP | ID | GS | AJ | BQ | ND | EP | AC | DY | Σ
DS | 25.3 | 28.0 | 23.3 | 20.0 | 22.9 | 20.8 | 22.3 | 21.9 | 18.3 | 14.7 | 13.8 | 10.0 | 241.3
DC | 26.0 | 27.0 | 24.4 | 19.0 | 20.6 | 24.4 | 16.8 | 20.9 | 20.3 | 15.6 | 11.0 | 11.8 | 237.8
DB | 26.5 | 23.8 | 14.2 | 20.0 | 20.1 | 21.8 | 21.7 | 20.6 | 16.0 | 14.3 | 11.1 | 13.3 | 223.4
US | 23.0 | 20.4 | 18.2 | 20.2 | 15.8 | 15.8 | 12.7 | 12.8 | 11.8 | 12.5 | 12.5 | 8.2 | 183.9
UC | 18.5 | 17.0 | 20.8 | 18.1 | 17.5 | 14.4 | 19.6 | 13.7 | 13.0 | 12.0 | 12.7 | 8.3 | 185.6
UB | 9.5 | 6.5 | 4.9 | 7.7 | 4.4 | 2.3 | 4.2 | 6.6 | 1.6 | 2.2 | 2.2 | 1.6 | 53.7
Σ | 128.8 | 122.7 | 105.8 | 105.0 | 101.3 | 99.5 | 97.3 | 96.5 | 81.0 | 71.3 | 63.3 | 53.2 | 1125.7

TV: Treatment vs. Variety. UD, KK, KP, TP, ID, GS, AJ, BQ, ND, EP, AC, DY: potato varieties (UD = Up to Date; KK = K of K; KP = Kerr's Pink; TP = Tinwald Perfection; ID = Iron Duke; GS = Great Scott; AJ = Ajax; BQ = British Queen; ND = Nithsdale; EP = Epicure; AC = Arran Comrade; DY = Duke of York). DS, DC, DB, US, UC, UB: types of treatment (D* = manure; U* = without manure; S = sulphate; C = chloride; B = basal). Σ = sum.

Table 3. Values of (aibj)1≤i≤6; 1≤j≤12 calculated with Equation (12) for the response to fertilization on different potato varieties.

TV | UD | KK | KP | TP | ID | GS | AJ | BQ | ND | EP | AC | DY
DS | 27.61 | 26.30 | 22.68 | 22.51 | 21.71 | 21.33 | 20.86 | 20.69 | 17.36 | 15.28 | 13.57 | 11.40
DC | 27.21 | 25.92 | 22.35 | 22.18 | 21.40 | 21.02 | 20.55 | 20.39 | 17.11 | 15.06 | 13.37 | 11.24
DB | 25.56 | 24.35 | 21.00 | 20.84 | 20.10 | 19.75 | 19.31 | 19.15 | 16.07 | 14.15 | 12.56 | 10.56
US | 21.04 | 20.04 | 17.28 | 17.15 | 16.55 | 16.25 | 15.90 | 15.76 | 13.23 | 11.65 | 10.34 | 8.69
UC | 21.24 | 20.23 | 17.44 | 17.31 | 16.70 | 16.41 | 16.04 | 15.91 | 13.35 | 11.76 | 10.44 | 8.77
UB | 6.14 | 5.85 | 5.05 | 5.01 | 4.83 | 4.75 | 4.64 | 4.60 | 3.86 | 3.40 | 3.02 | 2.54

TV: Treatment vs. Variety; abbreviations for varieties and treatments as in Table 2.

Table 4. Optimized values of (aibj)1≤i≤6; 1≤j≤12 calculated with Equation (9) for the response to fertilization on different potato varieties.

TV | UD | KK | KP | TP | ID | GS | AJ | BQ | ND | EP | AC | DY
DS | 27.07 | 26.42 | 22.64 | 21.85 | 21.85 | 21.94 | 20.94 | 20.63 | 17.93 | 15.48 | 13.54 | 11.61
DC | 26.66 | 26.02 | 22.29 | 21.52 | 21.52 | 21.60 | 20.62 | 20.32 | 17.66 | 15.24 | 13.33 | 11.43
DB | 24.91 | 24.32 | 20.83 | 20.11 | 20.11 | 20.19 | 19.27 | 18.99 | 16.50 | 14.25 | 12.46 | 10.69
US | 20.64 | 20.15 | 17.26 | 16.66 | 16.66 | 16.73 | 15.96 | 15.73 | 13.67 | 11.80 | 10.32 | 8.85
UC | 20.58 | 20.09 | 17.21 | 16.61 | 16.61 | 16.68 | 15.92 | 15.69 | 13.63 | 11.77 | 10.29 | 8.83
UB | 6.29 | 6.14 | 5.26 | 5.08 | 5.08 | 5.10 | 4.86 | 4.79 | 4.17 | 3.60 | 3.14 | 2.70

TV: Treatment vs. Variety; abbreviations for varieties and treatments as in Table 2.

Table 5. Optimized values of (aibj)1≤i≤6; 1≤j≤12 calculated with Equation (10) for the response to fertilization on different potato varieties.

TV | UD | KK | KP | TP | ID | GS | AJ | BQ | ND | EP | AC | DY
DS | 27.57 | 26.08 | 23.04 | 22.61 | 21.48 | 21.61 | 21.13 | 20.69 | 17.66 | 15.23 | 13.79 | 11.56
DC | 27.38 | 25.90 | 22.88 | 22.45 | 21.34 | 21.46 | 20.99 | 20.55 | 17.54 | 15.13 | 13.69 | 11.48
DB | 25.84 | 24.44 | 21.59 | 21.19 | 20.14 | 20.26 | 19.80 | 19.40 | 16.56 | 14.28 | 12.92 | 10.83
US | 21.23 | 20.08 | 17.74 | 17.40 | 16.54 | 16.64 | 16.27 | 15.93 | 13.60 | 11.73 | 10.62 | 8.90
UC | 21.47 | 20.31 | 17.94 | 17.61 | 16.73 | 16.83 | 16.46 | 16.12 | 13.76 | 11.86 | 10.74 | 9.00
UB | 7.02 | 6.64 | 5.87 | 5.76 | 5.47 | 5.51 | 5.38 | 5.27 | 4.50 | 3.88 | 3.51 | 2.94

TV: Treatment vs. Variety; abbreviations for varieties and treatments as in Table 2.

Table 6. Optimized values of (aibj)1≤i≤6; 1≤j≤12 calculated with Equation (11) for the response to fertilization on different potato varieties.

TV | UD | KK | KP | TP | ID | GS | AJ | BQ | ND | EP | AC | DY
DS | 27.64 | 26.19 | 22.85 | 22.60 | 21.59 | 21.44 | 20.98 | 20.71 | 17.49 | 15.24 | 13.67 | 11.47
DC | 27.35 | 25.91 | 22.61 | 22.36 | 21.36 | 21.22 | 20.76 | 20.50 | 17.30 | 15.08 | 13.52 | 11.35
DB | 25.74 | 24.40 | 21.28 | 21.05 | 20.11 | 19.97 | 19.55 | 19.29 | 16.29 | 14.20 | 12.73 | 10.68
US | 21.17 | 20.06 | 17.50 | 17.31 | 16.53 | 16.42 | 16.07 | 15.87 | 13.39 | 11.68 | 10.47 | 8.78
UC | 21.40 | 20.28 | 17.69 | 17.50 | 16.71 | 16.60 | 16.25 | 16.04 | 13.54 | 11.80 | 10.58 | 8.88
UB | 6.57 | 6.23 | 5.43 | 5.37 | 5.13 | 5.10 | 4.99 | 4.93 | 4.16 | 3.63 | 3.25 | 2.73

TV: Treatment vs. Variety; abbreviations for varieties and treatments as in Table 2.

Table 7. Comparative values of the chance experimental errors.

Tt | S2: Eq.(12) | S2: Eq.(9) | S2: Eq.(11) | S2: Eq.(10) | X2: Eq.(12) | X2: Eq.(9) | X2: Eq.(11) | X2: Eq.(10) | CV2: Eq.(12) | CV2: Eq.(9) | CV2: Eq.(11) | CV2: Eq.(10)
DS | 23.4 | 18.76 | 24.12 | 57.97 | 1.10 | 0.937 | 1.127 | 2.308 | 0.056 | 0.0515 | 0.0573 | 0.0971
DC | 59.7 | 48.48 | 59.86 | 104.95 | 3.08 | 2.497 | 3.052 | 4.847 | 0.164 | 0.133 | 0.1611 | 0.2365
DB | 69.8 | 66.77 | 71.47 | 95.21 | 3.78 | 3.596 | 3.796 | 4.803 | 0.221 | 0.2078 | 0.2167 | 0.2633
US | 41.6 | 49.03 | 41.66 | 35.34 | 2.72 | 3.190 | 2.709 | 2.358 | 0.186 | 0.2158 | 0.183 | 0.1635
UC | 57.6 | 59.01 | 56.53 | 82.16 | 3.46 | 3.660 | 3.339 | 4.367 | 0.218 | 0.2375 | 0.2065 | 0.2444
UB | 37.5 | 40.1 | 37.13 | 28.26 | 7.89 | 8.295 | 7.659 | 5.956 | 1.751 | 1.8018 | 1.6696 | 1.3512
UD | 30.3 | 26.3 | 28.20 | 78.9 | 2.66 | 2.35 | 2.15 | 3.58 | 0.335 | 0.293 | 0.235 | 0.232
KK | 15.3 | 13.5 | 15.80 | 18.7 | 0.76 | 0.64 | 0.73 | 0.88 | 0.045 | 0.033 | 0.035 | 0.044
KP | 63.0 | 62.7 | 64.00 | 67.5 | 3.11 | 3.15 | 3.13 | 3.19 | 0.155 | 0.162 | 0.159 | 0.155
TP | 34.3 | 31.4 | 33.30 | 76.5 | 2.79 | 2.69 | 2.37 | 3.67 | 0.357 | 0.340 | 0.256 | 0.242
ID | 3.4 | 3.9 | 4.00 | 4.5 | 0.21 | 0.27 | 0.28 | 0.26 | 0.017 | 0.028 | 0.029 | 0.021
GS | 26.2 | 25.6 | 26.90 | 28.6 | 2.29 | 2.45 | 2.52 | 2.42 | 0.319 | 0.349 | 0.352 | 0.327
AJ | 45.0 | 47.0 | 45.30 | 43.4 | 2.56 | 2.71 | 2.60 | 2.44 | 0.152 | 0.168 | 0.164 | 0.148
BQ | 21.5 | 20.4 | 21.00 | 31.8 | 1.93 | 1.71 | 1.67 | 2.19 | 0.253 | 0.205 | 0.182 | 0.193
ND | 18.3 | 17.9 | 19.10 | 20.5 | 2.13 | 2.29 | 2.35 | 2.27 | 0.393 | 0.424 | 0.427 | 0.403
EP | 2.9 | 3.2 | 3.30 | 3.8 | 0.53 | 0.64 | 0.66 | 0.62 | 0.133 | 0.158 | 0.163 | 0.142
AC | 18.2 | 18.8 | 18.70 | 19.3 | 1.76 | 1.87 | 1.84 | 1.83 | 0.209 | 0.232 | 0.233 | 0.221
DY | 11.1 | 11.5 | 11.20 | 10.6 | 1.31 | 1.40 | 1.39 | 1.27 | 0.228 | 0.255 | 0.258 | 0.227
Σ | 289.5 | 282.2 | 290.8 | 404.1 | 22.04 | 22.17 | 21.69 | 24.62 | 2.596 | 2.647 | 2.493 | 2.355

Tt = type of treatment (rows DS–UB) or potato variety (rows UD–DY); S2 = Equation (5); X2 = Equation (7); CV2 = Equation (6).

Table 8. Transformation of the residuals presented in Table 7 in relation to their minimum values.

Absolute value | S2 | X2 | CV2
E | 289.5 | 22.04 | 2.596
S2 = min. | 282.2 | 22.17 | 2.647
X2 = min. | 290.8 | 21.69 | 2.493
CV2 = min. | 404.1 | 24.62 | 2.355

Relative value | S2 | X2 | CV2
E | 1.026 | 1.016 | 1.102
S2 = min. | 1 | 1.022 | 1.124
X2 = min. | 1.030 | 1 | 1.059
CV2 = min. | 1.432 | 1.135 | 1

E = use of Equation (4) in place of aibj in Equations (5)–(7); S2 = Equation (5); X2 = Equation (7); CV2 = Equation (6).

Table 9. 2 × 2 contingency table with one degree of freedom.

X2 | Class A | Class Ω1\A | Total Ω1
Class B | x | n1 − x | n1
Class Ω2\B | n2 − x | n3 − n1 + x | n2 + n3 − n1
Total Ω2 | n2 | n3 | n2 + n3

X2 = Chi-square. Class A = first value of the first category; Ω1 = whole first category. Class B = first value of the second category; Ω2 = whole second category.

Table 10. Probability of observation.

Probability | Expression of calculus | Value
pX2 | 1 − χ2CDF(X2 = 13.03, df = 1) | 3.063 × 10−4
pO2 (x2 ≥ X2) | pMN(10;13,12,18) + pMN(11;13,12,18) + pMN(12;13,12,18) | 4.652 × 10−4
pO2 (x2 > X2) | pMN(11;13,12,18) + pMN(12;13,12,18) | 1.548 × 10−5
pD2 (x2 ≥ X2) | pO2(x2 ≥ X2) + pMN(0;13,12,18) | 5.367 × 10−4
pD2 (x2 > X2) | pO2(x2 > X2) + pMN(0;13,12,18) | 8.702 × 10−5

pX2 = probability from the χ2 distribution; pO2 = probability of observing a higher distance from homogeneity in the direction of the observed value; pD2 = probability of observing a higher distance from homogeneity in any direction; χ2CDF = cumulative distribution function of the χ2 distribution; pMN = probability from the multinomial distribution (Equation (15)).

Acknowledgments

The study was supported by UEFISCSU/ID1105/2008 for R. Sestraş and by POSDRU/89/1.5/S/62371 through a fellowship for L. Jäntschi.

References

  1. Fisher, R.A. Biometry. Biometrics 1948, 4, 217–219. [Google Scholar]
  2. Fisher, R.A. Statistics. In Scientific Thought in the Twentieth Century; Heath, A.E., Ed.; Watts: London, UK, 1951; pp. 31–55. [Google Scholar]
  3. Pearson, K. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling. Philos. Mag. 1900, 50, 157–175. [Google Scholar]
  4. Fisher, R.A. On the interpretation of χ2 from contingency tables, and the calculation of P. J. R. Stat. Soc. 1922, 85, 87–94. [Google Scholar]
  5. Fisher, R.A. The conditions under which χ2 measures the discrepancy between observation and hypothesis. J. R. Stat. Soc. 1924, 87, 442–450. [Google Scholar]
  6. Mirvaliev, M. The components of chi-squared statistics for goodness-of-fit tests. J. Sov. Math. 1987, 38, 2357–2363. [Google Scholar]
  7. Plackett, R.L. Karl Pearson and the chi-squared test. Int. Statist. Rev. 1983, 51, 59–72. [Google Scholar]
  8. Baird, D. The Fisher/Pearson chi-squared controversy: A turning point for inductive inference. Br. J. Philos. Sci. 1983, 34, 105–118. [Google Scholar]
  9. Cochran, W.G. Some methods for strengthening the common chi-square tests. Biometrics 1954, 10, 417–451. [Google Scholar]
  10. Agresti, A. Introduction to Categorical Data Analysis; John Wiley and Sons: New York, NY, USA, 1996; pp. 231–236. [Google Scholar]
  11. Levin, I.P. Relating Statistics and Experimental Design; Sage Publications: Thousand Oaks, CA, USA, 1999. [Google Scholar]
  12. Neyman, J.; Pearson, E.S. On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference, Part I.; reprinted at pp. 1-66 in Joint Statistical Papers; Neyman, J., Pearson, E.S., Eds.; Cambridge University Press: Cambridge, UK, (originally published in 1928); 1967. [Google Scholar]
  13. Neyman, J.; Pearson, E.S. The Testing of Statistical Hypotheses in Relation to Probabilities a Priori; reprinted at pp. 186-202 in Joint Statistical Papers; Neyman, J., Pearson, E.S., Eds.; Cambridge University Press: Cambridge, UK, (originally published in 1933); 1967. [Google Scholar]
  14. Pearson, E.S.; Neyman, J. On the Problem of Two Samples; reprinted at pp. 99-115 in Joint Statistical Papers; Neyman, J., Pearson, E.S., Eds.; Cambridge University Press: Cambridge, UK, (originally published in 1930); 1967. [Google Scholar]
  15. Rosner, B. Fundamentals of Biostatistics. Chapter 10. Chi-Square Goodness-of-fit, 6th ed.; Thomson Learning Academic Resource Center: Duxbury, MA, USA, 2006; pp. 438–441. [Google Scholar]
  16. Roscoe, J.T.; Byars, J.A. An investigation of the restraints with respect to sample size commonly imposed on the use of the chi-square statistic. J. Am. Stat. Assoc. 1971, 66, 755–759. [Google Scholar]
  17. Cochran, W.G. The χ2 test of goodness of fit. Ann. Math. Stat. 1952, 23, 315–345. [Google Scholar]
  18. Yates, F. Contingency tables involving small numbers and the χ2 test. Suppl. J. R. Stat. Soc. 1934, 1, 217–235. [Google Scholar]
  19. Fisher, R.A. The logic of inductive inference. J. R. Stat. Soc. 1935, 98, 39–54. [Google Scholar]
  20. Koehler, K.J.; Larntz, K. An empirical investigation of goodness-of-fit statistics for sparse multinomials. J. Am. Stat. Assoc. 1980, 75, 336–344. [Google Scholar]
  21. Li, G.; Doss, H. Generalized Pearson-Fisher Chi-square goodness-of-fit tests, with applications to models with life history data. Ann. Stat. 1993, 21, 772–797. [Google Scholar]
  22. Moore, D.S.; Spruill, M.C. Unified large-sample theory of general Chi-squared statistics for tests of fit. Ann. Stat. 1975, 3, 599–616. [Google Scholar]
  23. Moore, D.S.; Stubblebine, J.B. Chi-square tests for multivariate normality with application to common stock prices. Commun. Stat. Theory Methods 1981, 10, 713–738. [Google Scholar]
  24. Mihalko, D.P.; Moore, D.S. Chi-Square Tests of Fit for Type II Censored Data. Ann. Stat. 1980, 8, 625–644. [Google Scholar]
  25. Joe, H.; Maydeu-Olivares, A. A general family of limited information goodness-of-fit statistics for multinomial data. Psychometrika 2010, 75, 393–419. [Google Scholar]
  26. Mantel, N. Chi-square tests with one degree of freedom; extension of the Mantel-Haenszel procedure. J. Am. Stat. Assoc. 1963, 58, 690–700. [Google Scholar]
  27. Nathan, G. On the asymptotic power of tests for independence in contingency tables from stratified samples. J. Am. Stat. Assoc. 1972, 67, 917–920. [Google Scholar]
  28. O'Brien, P.C.; Fleming, T.H. A multiple testing procedure for clinical trials. Biometrics 1979, 35, 549–556. [Google Scholar]
  29. Tobin, J. Estimation of relationship for limited dependent variables. Econometrica 1958, 26, 24–36. [Google Scholar]
  30. Overall, J.E.; Starbuck, R.R. F-test alternatives to Fisher's exact test and to the Chi-square test of homogeneity in 2 × 2 tables. J. Educ. Behav. Stat. 1983, 8, 59–73. [Google Scholar]
  31. Cox, M.K.; Key, C.H. Post hoc pair-wise comparisons for the Chi-square test of homogeneity of proportions. Educ. Psychol. Meas. 1993, 53, 951–962. [Google Scholar]
  32. Pardo, L.; Martín, N. Homogeneity/heterogeneity hypotheses for standardized mortality ratios based on minimum power-divergence estimators. Biom. J. 2009, 51, 819–836. [Google Scholar]
  33. Andrés, A.M.; Tejedor, I.H. Comments on ‘Tests for the homogeneity of two binomial proportions in extremely unbalanced 2 × 2 contingency tables’. Stat. Med. 2009, 28, 528–531. [Google Scholar]
  34. Baker, S.; Cousins, R.D. Clarification of the use of Chi-square and likelihood functions in fits to histograms. Nucl. Instrum. Methods Phys. Res. 1984, 221, 437–442. [Google Scholar]
  35. Elmore, K.L. Alternatives to the Chi-Square Test for Evaluating Rank Histograms from Ensemble Forecasts. Weather Forecast. 2005, 20, 789–795. [Google Scholar]
  36. Zhang, J.-T. Approximate and asymptotic distributions of chi-squared-type mixtures with applications. J. Am. Stat. Assoc. 2005, 100, 273–285. [Google Scholar]
  37. Gagunashvili, N.D. Chi-square tests for comparing weighted histograms. Nucl. Instrum. Methods Phys. Res. Sect. A 2010, 614, 287–296. [Google Scholar]
  38. Snedecor, G.W.; Cochran, W.G. Statistical Methods, 8th ed.; Iowa State University Press: Iowa City, IA, USA, 1989. [Google Scholar]
  39. Anderson, T.W.; Darling, D.A. Asymptotic theory of certain “goodness-of-fit” criteria based on stochastic processes. Ann. Math. Stat. 1952, 23, 193–212. [Google Scholar]
  40. Kolmogorov, A. Confidence limits for an unknown distribution function. Ann. Math. Stat. 1941, 12, 461–463. [Google Scholar]
  41. Fisher, R.A.; Mackenzie, W.A. Studies in crop variation. II. The manurial response of different potato varieties. J. Agric. Sci. 1923, 13, 311–320. [Google Scholar]
  42. Sokal, R.R.; Rohlf, F.J. Biometry: The Principles and Practice of Statistics in Biological Research, 3rd ed.; Freeman: New York, NY, USA, 1994; pp. 729–739. [Google Scholar]
  43. Fisher, R.A. Statistical Methods for Research Workers; Oliver and Boyd: Edinburgh, UK, 1934. [Google Scholar]
  44. Scholz, F.W.; Stephens, M.A. K-sample Anderson-Darling tests. J. Am. Stat. Assoc. 1987, 82, 918–924. [Google Scholar]
  45. Glass, G.V.; Hopkins, K.D. Statistical Methods in Education and Psychology, 3rd ed.; Allyn and Bacon: Needham Heights, MA, USA, 1996. [Google Scholar]
  46. Hartley, R.V.L. Transmission of information. Bell Syst. Tech. J. 1928, 7, 535–563. [Google Scholar]
  47. Scott, D. Multivariate Density Estimation; John Wiley: Hoboken, NJ, USA, 1992. [Google Scholar]
  48. Chi-square goodness-of-fit test. NIST/SEMATECH e-Handbook of Statistical Methods, Available online: http://www.itl.nist.gov/div898/handbook/prc/section2/prc211.htm (accessed 1 November 2010).
  49. Snyder, L.R. Classification of the solvent properties of common liquids. J. Chromatogr. A 1974, 92, 223–230. [Google Scholar]
  50. Steele, M.; Chaseling, J.; Hurst, C. Simulated Power of the Discrete Cramer-von Mises Goodness-of-fit Tests. Proceedings of the MODSIM 05 International Congress on Modelling and Simulation. Advances and Applications for Management and Decision Making, Melbourne, VIC, Australia, 2005; pp. 1300–1304.
  51. Smirnov, N.V. Table for estimating the goodness of fit of empirical distributions. Ann. Math. Stat. 1948, 19, 279–281. [Google Scholar]
  52. Mantel, N.; Haenszel, W. Statistical aspects of the analysis of data from retrospective studies of disease. J. Natl. Cancer Inst. 1959, 22, 719–748. [Google Scholar]
