We reconsider the properties and relationships of the interaction information and its modified versions in the context of detecting the interaction of two SNPs for the prediction of a binary outcome when interaction information is positive. This property is called predictive interaction, and we state some new sufficient conditions for it to hold true. We also study chi square approximations to these measures. It is argued that interaction information is a different and sometimes more natural measure of interaction than the logistic interaction parameter especially when SNPs are dependent. We introduce a novel measure of predictive interaction based on interaction information and its modified version. In numerical experiments, which use copulas to model dependence, we study examples when the logistic interaction parameter is zero or close to zero for which predictive interaction is detected by the new measure, while it remains undetected by the likelihood ratio test.
The aim of the paper is to review existing measures and to introduce a new measure of interaction strength of two nominal factors in predicting a binary outcome and to investigate how they perform for this task. We show that there exist interactive effects that are not detectable by parametric tests, such as the likelihood ratio test, and we propose a test statistic, which performs much better in such situations. A specific situation that we have in mind is the case of two Single Nucleotide Polymorphisms (SNPs) and their joint strength to predict the occurrence of a certain disease as compared to their individual strengths. We will speak of gene-gene interaction when the prediction effects of genotypes at two corresponding loci combine non-additively. Thus, rather than aiming at a simpler task of testing associations that allow for interactions, we focus here on testing the predictive interaction of two SNPs. We will refer to the phenomenon as the interaction effect rather than (statistical) epistasis in order to avoid confusion, as the term “epistasis” is frequently used to mean blocking of one allelic effect by another allele at a different locus (cf. ).
There are many methods that aim at detecting gene-gene interaction. We mention Multifactor Dimensionality Reduction (MDR ), Bayesian Epistasis Association Mapping (BEAM ), SNP Harvester (SH ), Pairwise Interaction-based Association Mapping (PIAM ), Genome Wide Interaction Search (GWIS ), logistic regression and methods based on information measures, such as information gain or interaction information (cf. [7,8]), among others. We refer to [9,10] for overviews of the methods used.
There are problems encountered when applying some of the aforementioned methods. The first one, which is not generally recognized, is that some methods are designed to detect the dependence between pairs of genes and a disease and do not distinguish whether the dependence is due to main effects (genes influencing disease individually) or their interaction (the combined effect of two genes). Thus, the case with strong main effects and with no or a weak interaction effect is still frequently referred to as gene-gene interaction. In the paper, we carefully distinguish between main and interaction effects and discuss situations when interaction can be detected based on the strength of overall dependence. The second problem is that some of the methods used, such as logistic regression, are model dependent and define interaction in a model-specific way. Thus, it may happen, as we show, that genes interacting predictively, as it is understood in this paper, do not interact when modeled, e.g., by logistic regression. Logistic regression is thus blind to predictive interaction in such cases. The third one is that the fact of whether loci are interlinked or not influences the analysis. We show that the dependence between genes may have profound effects on the strength of interaction and its detection. Our focus here is on information measures and their approximations, as their decompositions allow one to neatly attribute parts of dependence either to the main effects or to the interaction effect. Let us note that the problem of accounting for the dependency of SNPs is recognized; see, e.g., . We also refer to , where the maximal strength of interactions for two-locus models without main effects is studied. Sensitivity analysis of a biological system using interaction information is discussed in . The analogous problem of measuring interaction in the Quantitative Trait Loci (QTL) setting with quantitative response is analyzed, e.g., in .
We study the properties of the modified interaction information introduced by . In particular, we state new sufficient conditions for the predictive interaction of the pair of genes. We also study its chi square approximations in the neighborhood of total independence (loci and the outcome are mutually independent) and in the case when the loci may be interdependent. Plug-in estimators of the interaction measures are introduced, and we investigate their behavior in parametric models. This analysis leads us to the introduction of a new test statistic for establishing predictive interaction, which is defined as a maximum of the interaction information estimator and its modified version. It is shown that it leads to a test that is superior to or on par with the Likelihood Ratio Test (LRT) in all parametric models considered, and the superiority is most pronounced when logistic interaction is zero or close to zero. Thus, there are cases when genes interact predictively that are not detected by LRT. As the detection of weak interactions becomes increasingly important, the proposed method seems worth studying. The proposal is also interesting from the computational point of view, as LRT, which is a standard tool to detect interactions in the logistic regression model, does not have a closed form expression; its calculation is computationally intensive and requires iterative methods. Our experience shows that the execution of LRT described below is at least fifteen times longer than for any test considered based on interaction information. This is particularly important when the interaction effect has to be tested for a huge number of pairs of SNPs. We do not treat this case here, leaving it for a separate paper.
The paper is structured as follows. In Section 2, we discuss interaction information measures, their variants, as well as approximations and the corresponding properties. Parametric approaches to interaction are examined in Section 3. We also show some links between the lack of predictive interaction and additive logistic regression (cf. Proposition 7). Moreover, the behavior of the introduced measures in logistic models for independent and dependent SNPs is studied there. In Section 4, we investigate the performance of tests based on empirical counterparts of the discussed indices by means of numerical experiments. An illustrative example of the analysis for a real dataset is also included. Section 5 concludes the paper.
2. Measures of Interaction
2.1. Interaction Information Measure
We adopt the following qualitative definition of the predictive interaction of SNPs X1 and X2 in explaining dichotomous qualitative outcome Y. We say that X1 and X2 interact predictively in explaining Y when the strength of the joint prediction ability of (X1, X2) in explaining Y is (strictly) larger than the sum of the individual prediction abilities of X1 and X2 for this task. This corresponds to a synergistic effect between X1 and X2, as opposed to the case when the sum is larger than the strength of the joint prediction, which can be regarded as a redundancy between X1 and X2.
In order to make this definition operational, we need a measure of the strength of prediction ability of X in explaining Y, where X is either a single SNP, X1 or X2, or the pair of SNPs (X1, X2). This can be done in various ways; we apply the information-theoretic approach and use mutual information to this aim. The Kullback–Leibler distance between P and Q will be denoted by D(P || Q). We will consider mass function p corresponding to probability distribution P and use p_X and p_Y to denote mass functions of P_X and P_Y, respectively, when no confusion arises. Mutual information between X and Y is defined as:

I(X; Y) = D(P_(X,Y) || P_X ⊗ P_Y) = Σ_{x,y} p(x, y) log [ p(x, y) / ( p_X(x) p_Y(y) ) ],   (1)

where the sums range over all possible values y of Y and x of X, and P_X ⊗ P_Y is the so-called product measure of marginal distributions of X and Y, defined by (P_X ⊗ P_Y)(x, y) = P_X(x) P_Y(y). P_X ⊗ P_Y is thus the probability distribution corresponding to (X', Y'), where X' and Y' are independent and have distributions corresponding to P_X and P_Y, respectively. Therefore, mutual information is the Kullback–Leibler distance between joint distribution and the product of the marginal distributions. Note that if X = (X1, X2), the value x in (1) is two-dimensional and equals one of the possible values of (X1, X2).
The motivation behind the definition of I(X; Y) is of a geometric nature and is based on the idea that if Y and X are strongly associated, their joint distribution should significantly deviate from the joint distribution of X' and Y'. In view of this interpretation and taking X = (X1, X2), we define the strength of association of (X1, X2) with Y as:

I((X1, X2); Y),   (2)

and, analogously, the strengths of individual associations of X1 and X2 with Y as:

I(X1; Y) and I(X2; Y).   (3)

We now introduce the interaction information as (cf. [15,16]):

II(X1; X2; Y) = I((X1, X2); Y) − I(X1; Y) − I(X2; Y).   (4)
Thus, in concordance with our qualitative definition above, we say that SNPs X1 and X2 interact predictively in explaining Y when II(X1; X2; Y) is positive. We stress that the above definition of interaction is not model dependent in contrast to, e.g., the definition of interaction in a logistic regression. This is a significant advantage as for model-dependent definitions of interaction, the absence of such an effect under one model does not necessarily extend to other models. This will be discussed in greater detail later.
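For concreteness, the definition in (4) can be computed directly from a joint probability mass function. The sketch below (plain Python, with the pmf indexed as p[i][j][k] over (X1, X2, Y)) is illustrative only; the XOR-style distribution, in which neither variable is individually informative about Y but the pair determines it exactly, has interaction information log 2.

```python
from math import log

def mutual_information(pxy):
    """I(X;Y) in nats for a joint pmf given as a nested list pxy[x][y]."""
    px = [sum(row) for row in pxy]
    py = [sum(col) for col in zip(*pxy)]
    return sum(v * log(v / (px[i] * py[j]))
               for i, row in enumerate(pxy)
               for j, v in enumerate(row) if v > 0)

def interaction_information(p):
    """II(X1;X2;Y) = I((X1,X2);Y) - I(X1;Y) - I(X2;Y) for a pmf p[i][j][k]."""
    I, J, K = len(p), len(p[0]), len(p[0][0])
    # joint MI treats the pair (X1, X2) as a single variable with I*J values
    joint = mutual_information([p[i][j] for i in range(I) for j in range(J)])
    p1y = [[sum(p[i][j][k] for j in range(J)) for k in range(K)] for i in range(I)]
    p2y = [[sum(p[i][j][k] for i in range(I)) for k in range(K)] for j in range(J)]
    return joint - mutual_information(p1y) - mutual_information(p2y)

# XOR-type dependence: Y = X1 xor X2 with uniform, independent X1, X2
p_xor = [[[0.25 if k == (i ^ j) else 0.0 for k in range(2)]
          for j in range(2)] for i in range(2)]
```

Here the main effects I(X1; Y) and I(X2; Y) vanish while I((X1, X2); Y) = log 2, so the whole association is attributed to the interaction term.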
Let us note that II(X1; X2; Y) defined above is one of the equivalent forms of interaction information. Namely, observe that:

II(X1; X2; Y) = H(X1, X2) + H(Y) − H(X1, X2, Y) − I(X1; Y) − I(X2; Y) = −[H(X1) + H(X2) + H(Y)] + H(X1, X2) + H(X1, Y) + H(X2, Y) − H(X1, X2, Y),   (5)

where H(X1) is the entropy of X1, with other quantities defined analogously. This easily follows from noting that I(X; Y) = H(X) + H(Y) − H(X, Y) (cf. , Chapter 2). The second equality above is actually a restatement of the decomposition of entropy in terms of the values of its difference operator Δ (cf. ). Namely, Formula (4.10) in  asserts that .
Interaction information is not necessarily positive. It is positive when the strength of association of (X1, X2) with Y is larger than the additive effect of both I(X1; Y) and I(X2; Y), i.e., when X1 and X2 interact predictively in explaining Y. Below, we list some properties of II(X1; X2; Y).
(i) It holds that:

I((X1, X2); Y) = I(X1; Y) + I(X2; Y) + II(X1; X2; Y);   (6)

(ii) X1, X2 are independent of Y if and only if II(X1; X2; Y) = 0, I(X1; Y) = 0 and I(X2; Y) = 0; (iii) We have:
(iv) It holds that:

II(X1; X2; Y) = Σ_{i,j,k} p(i, j, k) log [ p(i, j, k) / ( p12(i, j) p13(i, k) p23(j, k) / ( p1(i) p2(j) p3(k) ) ) ] = I(X1; X2 | Y) − I(X1; X2),   (7)

where I(X1; X2 | Y) is conditional mutual information defined by:

I(X1; X2 | Y) = Σ_{i,j,k} p(i, j, k) log [ p3(k) p(i, j, k) / ( p13(i, k) p23(j, k) ) ],

with p1, p2, p3 the univariate and p12, p13, p23 the bivariate marginal mass functions of (X1, X2, Y).
Some comments are in order. Note that (i) is an obvious restatement of (4) and is analogous to the decomposition of variability in ANOVA models. The proof of (ii) easily follows from (i) after noting that the independence of (X1, X2) and Y is equivalent to I((X1, X2); Y) = 0 in view of the information inequality (see , Theorem 2.6.3). Whence, if (X1, X2) is independent of Y, then I((X1, X2); Y) = 0, and thus, also, I(Xi; Y) = 0 for i = 1, 2. In view of (6), we then have that II(X1; X2; Y) = 0. The trivial consequence of (i) is that II(X1; X2; Y) ≥ −I(X1; Y) − I(X2; Y); thus, when main effects are zero, i.e., X1 and X2 are individually independent of Y, we have II(X1; X2; Y) = I((X1, X2); Y) ≥ 0, and in this case, II is a measure of association between (X1, X2) and Y.
Part (ii) asserts that in order to check the joint independence of (X1, X2) and Y, one needs to check that Xi for i = 1, 2 are individually independent of Y, and moreover, interaction information has to be zero. Part (iii) follows easily from (5).
Part (iv) yields another interpretation of II(X1; X2; Y) as a change of mutual information between X1 and X2 (information gain) when the outcome Y becomes known. This can be restated by saying that in the case when X1 and X2 are independent, the interaction between genes can be checked by testing the conditional dependence between genes given Y. This is the source of the methods discussed, e.g., in [19,20] based on testing the difference of inter-locus associations between cases and controls. Note however that this works only for independent SNPs, and in the case when X1 and X2 are dependent, conditional mutual information overestimates interaction information by I(X1; X2).
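The representation in Part (iv) can be checked numerically. The helper below is a sketch (pmf again indexed as p[i][j][k] over (X1, X2, Y)): conditional mutual information is the mutual information inside each slice Y = k, weighted by P(Y = k). For the XOR distribution, I(X1; X2 | Y) = log 2 while I(X1; X2) = 0, so the difference reproduces II = log 2; when the SNPs are dependent, the subtracted term I(X1; X2) is strictly positive, which is exactly why conditional mutual information alone overestimates interaction information.

```python
from math import log

def mi(pxy):
    """I(X;Y) for a joint pmf given as a nested list pxy[x][y]."""
    px = [sum(r) for r in pxy]
    py = [sum(c) for c in zip(*pxy)]
    return sum(v * log(v / (px[i] * py[j]))
               for i, r in enumerate(pxy) for j, v in enumerate(r) if v > 0)

def conditional_mi(p):
    """I(X1;X2 | Y): per-slice mutual information weighted by P(Y = k)."""
    out = 0.0
    for k in range(len(p[0][0])):
        slab = [[p[i][j][k] for j in range(len(p[0]))] for i in range(len(p))]
        pk = sum(map(sum, slab))
        if pk > 0:
            out += pk * mi([[v / pk for v in row] for row in slab])
    return out
```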
Let p_K be the function appearing in the denominator of (7):

p_K(i, j, k) = p12(i, j) p13(i, k) p23(j, k) / ( p1(i) p2(j) p3(k) ),

where p1, p2, p3 denote the univariate and p12, p13, p23 the bivariate marginal mass functions of (X1, X2, Y), and let P_K be the associated measure. p_K is called the (unnormalized) Kirkwood superposition approximation of P. Note that (7) implies that if the KL distance between P and P_K is small, then interaction is negligible. Let:

η = Σ_{i,j,k} p_K(i, j, k)

be the Kirkwood parameter. If the Kirkwood parameter equals one, then the Kirkwood approximation is a probability distribution. In general,

p̄_K = p_K / η

is a probability distribution, which will be called the Kirkwood superposition distribution. We say that a discrete distribution p has perfect bivariate marginals if the following conditions are satisfied ():

Σ_i p12(i, j) p13(i, k) / p1(i) = p2(j) p3(k),
Σ_j p12(i, j) p23(j, k) / p2(j) = p1(i) p3(k),   (13)
Σ_k p13(i, k) p23(j, k) / p3(k) = p1(i) p2(j),

for all values of the free indices.
Note that Condition (13) implies that bivariate marginals of p̄_K coincide with those of p. Now, we state some new facts on the interplay between predictive interaction, the value of the Kirkwood parameter and Condition (13). In particular, it follows that if η < 1, then genes interact predictively, and the sufficient condition for that is given in Part (iv) below.
(i) II(X1; X2; Y) ≥ −log η, and thus, if η ≤ 1, then II(X1; X2; Y) ≥ 0; (ii) If any of the conditions in (13) are satisfied, then η = 1 and II(X1; X2; Y) ≥ 0; (iii) If any two components of random vector (X1, X2, Y) are independent, then η = 1; (iv) If Σ_k p13(i, k) p23(j, k) / p3(k) ≤ p1(i) p2(j) for any (i, j), then η ≤ 1.
Part (i) is equivalent to D(P || P̄_K) ≥ 0, where P̄_K is the Kirkwood superposition distribution. The proof of (ii) follows by direct calculation. Assume that, e.g., the first condition in (13) is satisfied. Then:

η = Σ_{j,k} [ p23(j, k) / ( p2(j) p3(k) ) ] Σ_i p12(i, j) p13(i, k) / p1(i) = Σ_{j,k} p23(j, k) = 1.
(iii) is a special case of (ii), as the independence of two components of (X1, X2, Y) implies that a respective condition in (13) holds. Note that the condition in (iv) is weaker than the third equation in (13), and Part (iv) states that if X1 and X2 are weakly individually associated with Y, they either do not interact or interact predictively.
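A small sketch of the Kirkwood quantities: the unnormalized superposition approximation built from the bivariate and univariate marginals, and the Kirkwood parameter η as its total mass. The check illustrates Proposition 2 (iii): when one component is independent of the other two, η = 1.

```python
def kirkwood(p):
    """Unnormalized Kirkwood approximation p12*p13*p23/(p1*p2*p3) of p[i][j][k]."""
    I, J, K = len(p), len(p[0]), len(p[0][0])
    p12 = [[sum(p[i][j]) for j in range(J)] for i in range(I)]
    p13 = [[sum(p[i][j][k] for j in range(J)) for k in range(K)] for i in range(I)]
    p23 = [[sum(p[i][j][k] for i in range(I)) for k in range(K)] for j in range(J)]
    p1 = [sum(p12[i]) for i in range(I)]
    p2 = [sum(p23[j]) for j in range(J)]
    p3 = [sum(p13[i][k] for i in range(I)) for k in range(K)]
    return [[[p12[i][j] * p13[i][k] * p23[j][k] / (p1[i] * p2[j] * p3[k])
              for k in range(K)] for j in range(J)] for i in range(I)]

def kirkwood_parameter(p):
    """eta: the total mass of the unnormalized Kirkwood approximation."""
    return sum(v for plane in kirkwood(p) for row in plane for v in row)
```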
The usefulness of using the normalized Kirkwood approximation to test for interactions was recognized by . It is applied in the BOOST package to screen off pairs of genes that are unlikely to interact. In , interaction information is used for a similar purpose; see also . We call:

II_K(X1; X2; Y) = D(P || P̄_K) = II(X1; X2; Y) + log η

modified interaction information, which is always nonnegative. Numerical considerations indicate that it is also useful to consider:

max( II(X1; X2; Y), II_K(X1; X2; Y) ).
Note that II_K(X1; X2; Y) ≥ II(X1; X2; Y) is equivalent to η ≥ 1. In connection with Proposition 2, we note that we also have another representation of II_K in terms of the Kullback–Leibler distance, namely:
where is a distribution on values of Y pertaining to / and:
The last representation of follows from (5) by an easy calculation.
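As a sketch, assuming (as above) that the modified interaction information is the Kullback–Leibler distance from P to the normalized Kirkwood superposition distribution, so that II_K = II + log η, it can be computed directly from the marginals; it is nonnegative by the information inequality, since P̄_K is a probability distribution. For the XOR distribution η = 1, so II_K coincides with II = log 2.

```python
from math import log

def modified_ii(p):
    """II_K: KL distance from p[i][j][k] to the normalized Kirkwood
    distribution; equals II + log(eta) and is always nonnegative."""
    I, J, K = len(p), len(p[0]), len(p[0][0])
    p12 = [[sum(p[i][j]) for j in range(J)] for i in range(I)]
    p13 = [[sum(p[i][j][k] for j in range(J)) for k in range(K)] for i in range(I)]
    p23 = [[sum(p[i][j][k] for i in range(I)) for k in range(K)] for j in range(J)]
    p1 = [sum(p12[i]) for i in range(I)]
    p2 = [sum(p23[j]) for j in range(J)]
    p3 = [sum(p13[i][k] for i in range(I)) for k in range(K)]
    pk = [[[p12[i][j] * p13[i][k] * p23[j][k] / (p1[i] * p2[j] * p3[k])
            for k in range(K)] for j in range(J)] for i in range(I)]
    eta = sum(v for pl in pk for r in pl for v in r)
    return sum(p[i][j][k] * log(p[i][j][k] * eta / pk[i][j][k])
               for i in range(I) for j in range(J) for k in range(K)
               if p[i][j][k] > 0)
```

The test statistic studied later takes the maximum of the plug-in versions of interaction information and this modification.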
2.2. Other Nonparametric Measures of Interaction
Note that bivariate marginals of coincide with those of , e.g., ; however, is necessarily positive. We have the following decomposition:
Thus, the terms and correspond to the first order dependence effects for , whereas reflects the second order effect. Furthermore, note that the second order effect is equivalent to the dependence effect when all of the first order effects, including , are zero.
(i) are independent of Y if and only if , and for any ; (ii) The independence of and Y is equivalent to , and ; (iii) Condition for any is equivalent to:
for some and .
Part (i) is checked directly. Note, e.g., that ; ; is equivalent to ; are independent of Y; and further, . Thus, is equivalent to being independent of Y. Part (ii) is obvious in view of (i). Note that it is an analogue of Proposition 1 (ii). Part (iii) is easily checked. This is due to .
In the proposition below, we prove a new decomposition of , which can be viewed as an analogue of (6) for the chi square measure. In particular, in view of this decomposition, is a measure of interaction information. Namely, the following analogue of Proposition 1 (i) holds:
In order to prove (27), noting the rewriting of (22), we have:
We claim that squaring both sides, multiplying them by and summing over yields (27). Namely, we note that all resulting mixed terms disappear. Indeed, the mixed term pertaining to the first two terms on the right-hand side equals:
due to . The mixed term pertaining to the last two terms on the right-hand side equals:
We note that (27) is an analogue of the decomposition of into four terms (see Equation (9) in ). Han in  proved that for the distribution of close to independence, i.e., when all three variables , and Y are approximately independent, interaction information and are approximately equal. A natural question in this context is how those measures compare in general.
In particular, we would like to allow for the dependence of X1 and X2. In this case, Han's result is not applicable, as the distribution of (X1, X2, Y) is not close to independence (cf. (22)). It turns out that despite analogous decompositions in (6) and (27), in the vicinity of the mass function corresponding to independence of (X1, X2) and Y, they are approximated by different functions of chi squares.
We have the following approximation in the vicinity of :
where term tends to zero when the vicinity of shrinks to this point.
Expanding for around , we obtain:
Rearranging the terms, we have:
Summing the above equality over and k and using the definition of , we have:
Reasoning analogously, we obtain for . Using now the definition of interaction information, we obtain the conclusion.
Note that it follows from the last two propositions that we have the following generalization of Lemma 3.3 in .
In the vicinity of , it holds that:
This easily follows by replacing in (31) by and using the definition of .
2.3. Estimation of the Interaction Measures
We discuss now the estimators of the introduced measures. Suppose that we have n observations on genotypes of the two SNPs under consideration. The data can be cross-tabulated in a contingency table with n_ijk denoting the number of data points falling in the cell (X1 = i, X2 = j, Y = k) and n = Σ_{i,j,k} n_ijk. The considered estimators are plug-in versions of theoretical quantities. Namely, we define (cf. (7)):
where and other empirical quantities are defined analogously. Let:
where is a plug-in estimator of η, an estimator of . Analogously, we define:
denote the plug-in estimator of defined in (25) and the plug-in estimator of the main term on the right-hand side of (31).
Han in  proved that for the distribution of (X1, X2, Y) close to independence, i.e., when all three variables X1, X2 and Y are approximately independent, we have that the distribution of 2n ÎI(X1; X2; Y) is close to a chi square distribution for large sample sizes. Moreover:

2n ÎI(X1; X2; Y) → χ²_4   (38)

in distribution when the sample size tends to infinity, where χ²_4 denotes the chi square distribution with four degrees of freedom. However, the large sample distributions of the plug-in estimators of the other indices are unknown. Note that although one can establish the asymptotic behavior of empirical counterparts of each term on the right-hand side of (31), these parts are dependent, which contributes to the difficulty of the problem.
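A plug-in estimator can be sketched as follows: relative frequencies from an I × J × 2 contingency table are substituted into (4), and the statistic 2n ÎI is compared with the 0.95 quantile of the chi square distribution with four degrees of freedom (≈ 9.488). As stressed above, this calibration is justified only near total independence, which is why the actual Type I error rates are checked by simulation in the numerical experiments.

```python
from math import log

CHI2_4_Q95 = 9.488  # 0.95 quantile of the chi-square distribution with 4 df

def ii_plugin(counts):
    """Plug-in interaction information from a count table counts[i][j][k]."""
    n = sum(c for plane in counts for row in plane for c in row)
    p = [[[c / n for c in row] for row in plane] for plane in counts]
    def mi(pxy):
        px = [sum(r) for r in pxy]
        py = [sum(c) for c in zip(*pxy)]
        return sum(v * log(v / (px[i] * py[j]))
                   for i, r in enumerate(pxy) for j, v in enumerate(r) if v > 0)
    I, J = len(p), len(p[0])
    joint = mi([p[i][j] for i in range(I) for j in range(J)])
    p1y = [[sum(p[i][j][k] for j in range(J)) for k in range(2)] for i in range(I)]
    p2y = [[sum(p[i][j][k] for i in range(I)) for k in range(2)] for j in range(J)]
    return n, joint - mi(p1y) - mi(p2y)

def screen(counts):
    """True when 2*n*II_hat exceeds the chi-square(4) critical value."""
    n, ii = ii_plugin(counts)
    return 2 * n * ii > CHI2_4_Q95
```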
In Section 3 below, we discuss the problem of detecting predictive interaction using the empirical indices defined above as the test statistics.
3. Modeling Gene-Gene Interactions
Interaction information and its modifications are model-free indices of measuring the interaction of SNPs in predicting the occurrence of a disease. Below, we list several approaches to measure interaction based on modeling. Although, as it turns out, the logistic model encompasses all of them, various parametrizations used for such models imply that the meaning of interaction differs in particular cases.
3.1. Logistic Modeling of Gene-Gene Interactions
As a main example of parametric modeling involving the quantification of interaction strength, we consider logistic regression. It turns out that any type of conditional dependence can be described by it. Namely, a general logistic model with interactions that models conditional dependence of Y on X1 and X2, where X1 and X2 are qualitative variables with I and J values, respectively, has I × J parameters. Indeed, it allows for an intercept term, I − 1 and J − 1 main effects of X1 and X2 and (I − 1)(J − 1) interactions, i.e., 1 + (I − 1) + (J − 1) + (I − 1)(J − 1) = I × J parameters in total. This is equal to the number of possible pairs (x1, x2). Thus, any form of conditional dependence of Y on X1 and X2 can be described by this model for some specific choice of intercept, main effects and interactions.
We discuss a specific setting frequently used for GWAS analysis when and stand for two biallelic genetic markers with the respective genotypes AA, Aa, aa (reference value) and BB, Bb, bb (reference value). For convenience, the values of SNPs will be denoted, although not treated as, consecutive integers 1, 2 and 3. Y is a class label, i.e., a binary outcome, which we want to predict, with one standing for cases and zero for controls.
We consider the additive logistic regression model ω, which asserts that:

log [ P(Y = 1 | X1 = i, X2 = j) / P(Y = 0 | X1 = i, X2 = j) ] = μ + α_i + β_j,   (39)

and compare it with a general saturated model Ω:

log [ P(Y = 1 | X1 = i, X2 = j) / P(Y = 0 | X1 = i, X2 = j) ] = μ + α_i + β_j + γ_ij,   (40)

where the coefficients pertaining to the reference genotypes are set to zero for identifiability.
In the logistic regression model, X1 and X2 interact when γ_ij is non-zero for some i, j, and Model (40) is frequently called a general logistic model with interactions. Thus, interactions here correspond to all coefficients γ_ij, and a lack of interactions means that all of them are zero. Note that the number of independent parameters in (40) when both X1 and X2 have three values is nine and equals the number of possible pairs (x1, x2). For specific values of the main effects and interaction, which have been used in GWAS, see, e.g., [19,22], and for complete enumeration of models with 0/1 penetrance, see .
We discuss now different modeling approaches. The main difference in comparison with (40) is that another function of the odds is parametrized. The choice of the function influences the parametrization of the model, however; for discrete predictors, all such parametrizations are equivalent.
In particular, in , so-called multiplicative and threshold models for a disease were considered (Table 1, p. 365). In the multiplicative model, the odds of the disease have a baseline value γ and increase multiplicatively once there is at least one disease allele at each locus:
where , , and , otherwise. In the threshold model, the odds of the disease increase by a constant multiplicative factor once there is at least one disease allele at each locus, i.e., and , otherwise. It is easily seen that both models are special cases of (40). Moreover, if is such that , then Equation (41) is a special case of the additive model (39). Note that interactions in (41) are measured by coefficients , as well as by θ.
An analogous approach is to model the prevalence of the disease instead of the odds as in (41) or the logarithmic odds as in (40). This is adopted in , where Table 1, p. 832, lists six representative models. In particular, a threshold model corresponds to the situation when the prevalence of the disease increases from zero to f, where f is positive, provided a disease allele is present at each locus. Interaction in the threshold model is measured by f. As the model considers zero as a baseline value, it is not a special case of the threshold model in .
Interesting insight for particular dependence models is obtained when they are parametrized using Minor Allele Frequency (), overall prevalence and heritability . This is adopted in [22,25] where several models with varying values of these parameters are considered.
Below, we state the proposition that describes the connection between the additive logistic regression model and the lack of predictive interaction. Namely, Part (ii) states that if the Kirkwood parameter is not larger than one and interaction information is zero, then the additive logistic model holds true.
(i) Equation (39) is equivalent to:
for some values of the parameters; (ii) If η ≤ 1 and II(X1; X2; Y) = 0, then P satisfies (39).
It is easily checked that Condition (42) implies (39). On the other hand, if we assume (39), then we have , and we can take , and and . This proves (i). Now, for (ii), if and , then , which satisfies (42). The last equality follows from the generalization of the information inequality stating that KL distance if and only if when p is a probability mass function and q is nonnegative and such that . Thus, if , then any model with interaction information of zero has to be an additive logistic regression model. However, we will show that conditions and are not equivalent even in the case when and are independent and .
The principal tool to detect interactions in logistic regression is the log-likelihood ratio statistic (LRT), defined as:

LRT = 2 log ( L(Ω̂) / L(ω̂) ),

where L(Ω̂), L(ω̂) are respectively the values of the likelihood for models Ω and ω, and Ω̂, ω̂ are respectively the estimated probability distributions. Large positive values of LRT are interpreted as an indication that the additive model is not adequate and that interactions between genes occur. In order to check the usual range of values of LRT under ω, we use the property stating that when ω is adequate, LRT is approximately distributed as a chi square distribution with four degrees of freedom provided that all cells contain at least five observations. Whereas the calculation of L(Ω̂) is straightforward, as it involves only sample fractions as estimates of the probabilities of interest, the calculation of L(ω̂) is computationally intensive and involves the Iterated Weighted Least Squares (IWLS) procedure. Thus, it is also of interest to find an easily computable approximation of LRT. This was exactly the starting point of , where it was noticed that the Kirkwood superposition probability mass function follows the additive logistic regression model. Indeed, we have, in view of (10):
and thus, it satisfies (39). In particular, it follows that:
where is a value of the likelihood for a logistic regression model using plug-in estimators to estimate Kirkwood probabilities. Since is easily computable, the lower bound on can be imposed to screen pairs that are unlikely to interact, as in view of (45), the cases with small values of yield even smaller values of LRT. However, as we discuss in Section 4, there are cases of interactions that will be detected by , but they will remain undetected by LRT.
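The screening idea can be sketched as follows. The saturated log-likelihood has the closed form Σ n_ijk log(n_ijk / n_ij·), while the Kirkwood-based fit replaces the fitted conditional probabilities of Y by the conditional probabilities under the plug-in Kirkwood distribution; since the latter corresponds to one particular additive model, 2(ℓ_sat − ℓ_Kirkwood) upper-bounds LRT, so pairs for which it is small can be discarded, in the spirit of BOOST. The function below is an illustrative sketch of this bound (all marginal counts assumed positive), not the package's actual implementation.

```python
from math import log

def kirkwood_lrt_bound(counts):
    """Upper bound on the logistic LRT: 2*(saturated llik - Kirkwood llik).
    counts[i][j][k] with k = 0 (controls) / 1 (cases)."""
    I, J = len(counts), len(counts[0])
    n_ij = [[sum(counts[i][j]) for j in range(J)] for i in range(I)]
    # saturated model: fitted P(Y=k | i,j) = n_ijk / n_ij
    llik_sat = sum(counts[i][j][k] * log(counts[i][j][k] / n_ij[i][j])
                   for i in range(I) for j in range(J) for k in range(2)
                   if counts[i][j][k] > 0)
    n_ik = [[sum(counts[i][j][k] for j in range(J)) for k in range(2)] for i in range(I)]
    n_jk = [[sum(counts[i][j][k] for i in range(I)) for k in range(2)] for j in range(J)]
    n_k = [sum(n_ik[i][k] for i in range(I)) for k in range(2)]
    llik_kirk = 0.0
    for i in range(I):
        for j in range(J):
            # conditional Kirkwood weights over k: p13(i,k)*p23(j,k)/p3(k)
            w = [n_ik[i][k] * n_jk[j][k] / n_k[k] for k in range(2)]
            tot = sum(w)
            for k in range(2):
                if counts[i][j][k] > 0:
                    llik_kirk += counts[i][j][k] * log(w[k] / tot)
    return 2 * (llik_sat - llik_kirk)
```

For a table with no deviation from additivity, the bound is zero and the pair can be skipped without fitting the additive model by IWLS.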
Note also that from the considerations above, we have revealed the interpretation of the interaction information:
where is a probability distribution corresponding to estimated non-normalized Kirkwood approximations.
Another interaction modeling tool for contingency tables is a log-linear model for which the logarithm of the expected value of the number of observations falling into each cell is modeled. Since the expected value for the cell equals , it is seen that the approach is equivalent to logistic modeling. In particular, Model (39) is equivalent to the so-called homogeneous association model:
Because of the equivalence, we will discuss only the logistic setting later on.
3.2. ANOVA Model for Binary Outcome
Additive ANOVA models briefly described below are used to model the dependence of the quantitative outcome on qualitative predictors, in QTL studies in particular. However, they work reasonably well for a binary outcome. We provide a brief justification for this.
In an additive ANOVA model ω, we assume that the conditional distribution of Y given X1 and X2 satisfies:

E(Y | X1 = i, X2 = j) = μ + α_i + β_j,   (48)

and for model Ω with interactions, we postulate that:

E(Y | X1 = i, X2 = j) = μ + α_i + β_j + γ_ij.   (49)
Estimation of the parameters of ANOVA models is based on least squares analysis, i.e., minimization of the following sum of squares:

SS(M) = Σ_{l=1}^{n} ( y_l − ŷ_l(M) )²,   (50)

where ŷ_l(M) is a prediction for the l-th observation under the assumed model M. It is well known (see, e.g., ) that the F statistic defined by (51) below has asymptotically an F distribution with p − q and n − p degrees of freedom (p and q denote the number of coefficients in models Ω and ω, respectively):

F = [ ( SS(ω) − SS(Ω) ) / (p − q) ] / [ SS(Ω) / (n − p) ].   (51)
In our problem, the outcome is binary, so formally, it is not legitimate to use the ANOVA model in this case. Nevertheless, the prediction has an interesting property. Let us denote and . Then, for the additive ANOVA model and the model with interaction, we have:
Moreover, if the values of the respective SNPs are denoted by and , then for the model with interaction, we have and . Using this notation for both models, we can treat predictors and as estimators of and , respectively. Now, manipulating conditional probabilities, we can rewrite (50) as:
Note that it follows that SS(M) can be treated as the weighted variability of prediction, which is the largest when the predictions are least certain. Thus, minimizing (53) leads to finding the parameters of model M that yield the most certain prediction. This provides some intuition, in addition to (52), for why the ANOVA model yields reasonable estimates in the case of a binary outcome.
3.3. Behavior of Interaction Indices for Logistic Models
Our main goal here is to check whether estimators of the interaction information lead to satisfactory, universal and easy to compute tests for predictive interaction. We recall that X1 and X2 interact predictively in explaining Y when II(X1; X2; Y) > 0. Thus, we consider II(X1; X2; Y) ≤ 0 as the null hypothesis H0 and, as the alternative H1, the hypothesis we are interested in, namely II(X1; X2; Y) > 0. As test statistics, we employ sample versions of interaction information indices and their approximations introduced above. We discuss the behavior of the pertaining tests for logistic models (see Section 3), as for discrete predictors, they cover all possible types of conditional dependence of Y given their values. Two types of distributions of (X1, X2) will be considered, the first, when X1 and X2 are independent and the second one, when their dependence is given by Frank's copula with parameter (see Appendix A for the definition). Frank's copula with parameter was chosen as for the logistic models considered below, it leads to predictive interaction. In both cases, we set Minor Allele Frequency , where . In Table A1, the conditional distributions are specified (see also Appendix A for the method of generation). As discussed below, larger values of λ and γ lead to larger values of interaction measures.
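To give the flavor of the data generation, here is a sketch of sampling dependent genotypes with Frank's copula (by the conditional-inverse method) and discretizing the uniforms with Hardy–Weinberg genotype proportions for a given MAF. The copula parameter 8.0 and MAF 0.4 are illustrative values, not necessarily those of the paper's Appendix A.

```python
import random
from math import exp, log

def frank_pair(theta, rng):
    """One (u, v) draw from Frank's copula by conditional inversion (theta != 0)."""
    u, w = rng.random(), rng.random()
    a = exp(-theta * u)
    # invert the conditional distribution C(v | u) at level w
    v = -log(1 + w * (exp(-theta) - 1) / (a - w * (a - 1))) / theta
    return u, v

def genotype(u, maf):
    """Map a uniform draw to genotype 1, 2 or 3 via Hardy-Weinberg proportions."""
    q = 1.0 - maf
    if u < q * q:
        return 1                       # homozygous in the major allele
    if u < q * q + 2.0 * maf * q:
        return 2                       # heterozygous
    return 3                           # homozygous in the minor allele

rng = random.Random(1)
snps = [tuple(genotype(x, 0.4) for x in frank_pair(8.0, rng))
        for _ in range(20000)]
```

A positive copula parameter induces positive dependence between the two genotypes while leaving each marginal at its Hardy–Weinberg proportions.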
3.4. Behavior of Interaction Indices When and Are Independent
We consider first a special case when and are independent. We recall that it follows from Proposition 2 (iii) that parameter η then equals one regardless of the form of prevalence mapping. Thus, Kirkwood superposition approximation is a probability distribution, and the fact that is equivalent to the property that distribution P equals its Kirkwood approximation. Moreover, we then have and We omit the proof of the second equality. Below, we discuss specific logistic regression models that are used in the simulations. For each model, the intercept was set so that prevalence is approximately . In Table 1, the coefficients for additive logistic models considered here are specified.
Note that for model , both predictors are independent of Y, whereas in and , only is independent of Y. It is seen from (39) that for , logarithmic odds depend on the occurrence of either value , or both, whereas for , they also depend on the values and . The additive influence of both loci is the same and measured by parameter λ.
We also consider a general logistic model with interactions given in (40). We employ three types of models, the additive parts of which are the same as in models and , respectively. Note that denotes model with . The form of interaction is stated in Table 2. for will denote the logistic model, the additive part of which is as in model and interaction part as in Table 2 for a fixed value of γ. Thus, is a model with no main effects, but with a logistic interaction effect, whereas additive effects, as well as logistic interaction are nonzero in and . Note that in the models considered, parameter γ measures the strength of the logistic interaction.
We discuss now how interaction indices behave for the introduced models. We start with additive logistic regression models . Note that for models and , all considered interactions measures are zero, since for , response Y is independent of , whereas for and , predictor is independent of . The values of and as a function of λ for models and are shown in Figure 1.
In models and , the values of are small, but strictly positive for . Note that the monotone dependence of on λ is much stronger than for . In Figure 2, the behavior of interaction measures as a function of γ for the logistic model with the nonzero interaction term is depicted. Thus, we check how nonparametric interaction information and its modifications depend on logistic interaction parameter γ. Observe that is positive, close to zero for and for grows slowly in all models considered. There is no significant difference between the values of for all models and when γ is fixed, which means that the additive part in the saturated logistic model has a weak influence on the interaction information. Index is also approximately 0 for but grows much faster than when γ increases. The differences between the values of for models and are much more pronounced than for interaction information , and they increase with The values of all indices are larger when the logistic interaction is nonzero.
3.5. Behavior of Interaction Indices When and Are Dependent
The situation when and are dependent is more involved. First, the Kirkwood superposition approximation does not have to be a probability distribution; therefore, the fact that is not equivalent to the equality of the mass functions and . Second, the dependence of the predictors means that distribution deviates more strongly from joint independence, i.e., from the situation in which the asymptotic behavior of is known and given in (38).
As before, we consider the logistic model with and without interactions. For the logistic regression models, we choose the same values of the parameters as in the previous section (see Table 1 and Table 2), with the exception of μ, which was set so that in every model the prevalence is approximately equal to
The behavior of the interaction indices for the discussed models is shown below. Their variability in λ for the models with dependent predictors is shown in Figure 3, and for the models with interaction in Figure 4. Model is omitted, as all considered indices are zero there independently of λ. Note that we have a stronger dependence of on λ in than in ; the effect is much weaker in the case of due to the fact that depends negatively on λ for M4. Furthermore, for this model, the dependence of on λ is much stronger than in the independent case.
When the logistic interaction is nonzero, we again see pronounced differences in the behavior of between and . This indicates that, in contrast to the independence case, two factors, namely the value of γ as well as the form of the main effects, influence the value of . The differences between the values of for different models are negligible. The values are larger for the dependent than for the independent case for the same value of γ. Note that, from the logistic model perspective, the strength of the interaction in all three models is the same and corresponds to the value of γ. Again, the dependence of and on γ is much stronger than for .
4. Tests for Predictive Interaction
The main challenge in applying the constructed statistics for testing is determining their behavior under the null hypothesis . The asymptotic distribution of is known only for the case when and Y are independent, which is only a special case of the null hypothesis, and the asymptotic distributions of and are unknown. To overcome this problem, we hypothesize that the distributions of all three statistics do not deviate much from the distribution, at least in the right tails, and we check the validity of this conjecture by calculating the actual Type I error rates for nominal error rates α using the Monte Carlo method. The results are shown in Figure 5 for independent predictors and in Figure 6 for dependent ones. The results for the LRT and ANOVA tests are included as benchmarks. It is seen that for independent predictors, the discrepancies between the actual and nominal rates for and are negligible and comparable to the discrepancy of LRT for all models , and the same is true for in the case of models and . The same observation holds for dependent predictors, although here the empirical evidence is restricted to model , as it is the only model known to us that satisfies the null hypothesis in this case. Based on this observation, in the following we compare the power of LRT and ANOVA with the powers of the new tests with rejection region , where stands for the quantile of order 0.95 of the distribution and stands for either , or .
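The rejection rule and the Monte Carlo calibration check can be sketched as follows. The choice df = 4 is purely illustrative (it matches the number of interaction degrees of freedom for two 3-level factors); it is not the derived null distribution of the information statistics, which is exactly the conjecture being probed:

```python
import numpy as np
from scipy.stats import chi2

def reject(statistic, df=4, alpha=0.05):
    """Reject the no-predictive-interaction null when the scaled test
    statistic exceeds the (1 - alpha)-quantile of the chi-square
    reference distribution.  df = 4 is illustrative, not derived."""
    return statistic > chi2.ppf(1.0 - alpha, df)

def empirical_type1(stat_fn, simulate_null, n_rep=1000, alpha=0.05):
    """Monte Carlo estimate of the actual Type I error: the fraction of
    null datasets on which the rejection rule fires at nominal level
    alpha."""
    hits = sum(bool(reject(stat_fn(simulate_null()), alpha=alpha))
               for _ in range(n_rep))
    return hits / n_rep
```

Plotting the empirical rate against the nominal α, as in Figures 5 and 6, then shows directly how well the chi-square right tail approximates the unknown null distribution.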
Datasets pertaining to the discussed models are generated as described in the Appendix. The pertinent parameters are:
, the number of observations in controls () and cases (), set equal in our experiments and , the total number of observations. Values of and were considered.
MAF, the minor allele frequency for and . We set for both loci.
copula, the function that determines the cumulative distribution of based on its marginal distributions.
, the prevalence mapping, which in our experiments was either additive logistic or logistic with nonzero interaction.
For every model, 1000 datasets were generated, and for each of them, tests of the interaction effect based on the introduced indices were performed. The null hypothesis is the lack of predictive interaction . The null hypothesis at the specific significance level α is not rejected if the value of the test statistic is less than , the quantile of the appropriate distribution; otherwise, we reject this hypothesis and claim that there is a predictive interaction effect. As discussed above, the tests based on all indices except the ANOVA test were compared with the distribution, and for the ANOVA test, the null distribution was the F-Snedecor distribution with four and degrees of freedom. For models with , we estimate the probability of a Type I error for a given α as the fraction of tests in which the null hypothesis is falsely rejected. For the model with the predictive interaction effect, the statistical power is estimated as the fraction of tests that reject at significance level . We will refer to those tests simply as the interaction information, , LRT and ANOVA tests in the following.
All experiments have been performed in the R environment. The source codes of functions used are available from the corresponding author’s website.
4.1. Behavior of Interaction Tests When and Are Independent
We consider the situation when SNPs and are independent. In this case, and both and estimate consistently.
4.1.1. Type I Errors for Models –
First, we consider models – where and for which the null hypothesis holds. Recall that for , all components of the random vector are independent; thus, in this case, convergence (38) to the distribution holds. In Figure 5, we compare the Type I error rate with the nominal error rate. Note that the Type I errors of and approximately agree with the nominal level. This indicates that, under the null hypothesis, these statistics approximately follow the reference distribution. For the ANOVA and tests, the probability of a Type I error is slightly smaller than α. For the maximum statistic , the probability of a Type I error agrees well with the nominal level for models and for ; it exceeds α, but the relative difference is not larger than for .
4.1.2. Power for Additive Logistic Models
Now, we focus on models and . From Figure 1, we see that for , index is very close to zero, and predictably, the power is close to the significance level for such λ (see Figure 7). However, for larger λ, the power of and increases for , and in the case of , this holds also for . The difference between the behavior of on the one side and the and approximations on the other for model is remarkable and shows that the modification incorporating is worthwhile. However, for model , it does not improve the performance of . Note also that unsurprisingly. The power of the test stays close to the significance level, but the power of ANOVA starts to increase for large λ. A clear winner in both cases is the test based on , which performs very well for both models, as it takes advantage of the good performance of for and of for model .
4.1.3. Power for the Logistic Model with Interactions
We now consider the power of the tests with respect to the logistic interaction parameter γ for models and . Note that is now included, as is positive for . In all cases, performs better than , supporting the usefulness of the correction. performs comparably to ANOVA and, in some cases, worse than and . We stress that this does not contradict the fact that is the most powerful test at level α for the null hypothesis . Indeed, it follows from the previous experiments that the predictive interaction tests are not level-α tests for ; thus, they may have larger power than . The power of and is around for , even for γ close to zero, when for , it stays around the significance level of 0.05. Note also that and fail to detect the predictive interaction for model .
4.2. Behavior of the Interaction Tests When and Are Dependent
We consider the situation where SNPs and are dependent. The distribution of is modeled by the Frank copula with (see Table A1).
4.2.1. Type I Errors for Model
Note that for dependent and , only satisfies the null hypothesis, since for models and , index is negative. It follows from Figure 6 that for all tests apart from the tests, the Type I error for is approximately α, while it is significantly smaller than α for both tests. For larger α and , the errors of the interaction information tests slightly exceed α, similarly to LRT.
4.2.2. Power for Additive Logistic Models When and Are Dependent
The behavior of the discussed tests is analogous to the independence case (see Figure 8), with one important difference. Namely, now performs better than , apart from model for and . and ANOVA do not detect any interaction for , but the power of ANOVA starts to pick up for large λ. Note the erratic behavior of the test, for which the power actually starts to decrease for larger λ. This is possibly caused by the fact that the test is not well calibrated and by the fact that is likely to become negative for large λ. The power of is the largest among all the tests.
4.2.3. The Powers for Logistic Models with Interaction When and Are Dependent
Obviously, when the logistic interaction is present, the test performs much better (see Figure 9). The same applies to ANOVA and , but the best performance is again exhibited by . Consider, e.g., its performance for model . Its excellent behavior for small γ stems from the very good performance of , whereas for larger γ it stems from the performance of . Comparing Figure 9 and Figure 10, we see that the dependence between the predictors improves the performance of the tests based on the interaction information measures, especially for the smaller sample size, which is consistent with the larger values of the interaction information in the dependent than in the independent case. Note that in the dependent case, the power of and for is above for γ close to zero, whereas it is less than for such γ in the independent case.
4.3. Real Data Example
We analyze a real dataset in order to show that pairs of SNPs corresponding to large values of the interaction indices exhibit interesting patterns of conditional dependence. We used data on pancreatic cancer studied by  and downloaded from the address . They consist of 208 observations (121 cases and 87 controls) with values of 901 SNPs. We applied the tests , and to all pairs and , with a Bonferroni correction resulting in an individual significance level of , where . All three tests rejected the null hypothesis for 11 pairs. The pair of SNPs with the largest values of and is (rs1131854, rs7374), with values and . Figure 11 shows the probability mass function of this pair (a) and the conditional mass functions given (b) and (c).
Note that the pattern of the change of the conditional probabilities of the occurrence of the AA genotype for SNP2 given the genotypes of SNP1 for the cases is reversed in the case of the controls . For the pooled sample, the conditional probabilities are approximately equal. Moreover, observe that the conditional frequency of given is around 0.2 for the cases, whereas it is zero for the controls.
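The pooled and Y-conditional mass functions of a SNP pair, of the kind displayed in Figure 11, can be read off the case/control contingency table of the pair; a minimal sketch (the counts[snp1, snp2, y] layout is our assumption):

```python
import numpy as np

def pooled_and_conditional(counts):
    """Pooled pmf of a SNP pair and its conditional pmfs in controls
    (y = 0) and cases (y = 1), from a 3 x 3 x 2 contingency table
    counts[snp1, snp2, y] of genotype pairs by disease status."""
    counts = np.asarray(counts, dtype=float)
    pooled = counts.sum(axis=2)
    pooled /= pooled.sum()
    controls = counts[:, :, 0] / counts[:, :, 0].sum()
    cases = counts[:, :, 1] / counts[:, :, 1].sum()
    return pooled, controls, cases
```

A reversal of a conditional-probability pattern between the cases and controls panels, with a flat pooled panel, is exactly the signature described above.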
This preliminary example indicates that the analysis based on large values of interaction information indices allows one to detect interesting patterns of dependence between pairs of SNPs and the response.
In the theoretical part of the paper, we reviewed and proved some new properties of interaction information, its modification and their approximations. It is argued that parameter η introduced in (11) plays an important role in establishing predictive interaction. The theoretical analysis supported considering a new measure defined in (16), being the maximum of interaction information and its modified version, which was considered in the numerical experiments. Several conclusions can be drawn from the conducted numerical experiments. The first is that the dependence between predictors influences the performance of the interaction information tests: while in general performs better than for independent predictors, the situation is reversed for dependent ones. Their natural combination, statistic , is superior to either of them and . When compared to , the difference in performance is most striking when detecting predictive interaction in cases where the logistic interaction is 0, as, e.g., in models and . This should serve as a cautionary remark for situations in which the interaction information test is used as a screening test and then LRT is applied to the remaining pairs of genes: the screening test may rightly retain pairs of genes interacting predictively; however, the interaction may not be confirmed by the test. Such cases are likely to occur especially for dependent pairs. It is also worth noting that is at least as good as and sometimes much better at detecting interactions when the logistic interaction γ is close to zero. This is a promising feature, as detecting such weak interactions is the most challenging task and bears important consequences for revealing the dependence structure of diseases. This, of course, requires the construction of interaction tests suitable for a huge number of pairs and is a subject of further study.
On the negative side, the tests based on the approximations tend to perform less well than those based on interaction information and its modifications. This is possibly due to their inadequate calibration when the null hypothesis holds. It should also be stressed that in the numerical experiments, we studied the problem of distinguishing between no interaction information () and predictive interaction (). In the case when the interaction information is negative, and are unlikely to detect such an effect.
The comments of two reviewers, which contributed to the improvement of the original version of the manuscript, are gratefully acknowledged. The research and publication were supported by the Institute of Computer Science, Polish Academy of Sciences, and the Faculty of Mathematics and Information Science, Warsaw University of Technology.
Both authors contributed equally to the paper. Both authors have read and approved the final manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
We briefly describe the generation method. Our objective is to generate separately two samples from the conditional distributions of given and , with predetermined sizes and . This is easily performed for a given prevalence mapping and bivariate mass function , as by Bayes's theorem we have:
The prevalence mapping we consider is either additive logistic or logistic with interactions. We discuss below the choice of mass function .
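The Bayes step can be sketched as follows, with prev[x1, x2] standing for P(Y = 1 | x1, x2) and px for the bivariate mass function of the predictors (the array names are ours):

```python
import numpy as np

def case_control_pmfs(prev, px):
    """Bayes's theorem: conditional mass functions of (X1, X2) given
    Y = 1 (cases) and Y = 0 (controls), from the prevalence mapping
    prev[x1, x2] = P(Y = 1 | x1, x2) and the joint pmf px[x1, x2]."""
    prev = np.asarray(prev, dtype=float)
    px = np.asarray(px, dtype=float)
    cases = prev * px            # proportional to P(x) P(Y = 1 | x)
    controls = (1.0 - prev) * px # proportional to P(x) P(Y = 0 | x)
    return cases / cases.sum(), controls / controls.sum()

def draw(pmf, n, rng):
    """Draw n genotype pairs (x1, x2) from a bivariate pmf."""
    idx = rng.choice(pmf.size, size=n, p=pmf.ravel())
    return np.column_stack(np.unravel_index(idx, pmf.shape))
```

Sampling the case and control pmfs separately with predetermined sizes reproduces the retrospective case-control design described above.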
Appendix A.1. Distribution of X1, X2
We first specify the marginal distributions of and and then construct a bivariate mass function with those marginals. We fix the Minor Allele Frequency () for the two SNPs (here, we use the same value of for and ) and assume that both SNPs satisfy the Hardy–Weinberg principle, which means that and . Let and denote the cumulative distribution functions of and , respectively, which in general can be different. Then, we determine the bivariate distribution function of as:
where is a fixed copula (cf. ). This approach allows one to fix the marginals in advance; moreover, it provides a simple method of introducing dependence between the alleles. We consider the case when and are independent, for which , and the dependent case, characterized by the Frank copula given by:
Other popular copulas such as Gumbel’s and Clayton’s copula yield negative interaction information and are not considered here.
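Under the stated assumptions (Hardy–Weinberg marginals coupled by a Frank copula), the bivariate genotype mass function is obtained by taking rectangle differences of the copula evaluated at the marginal cdfs; a sketch with our own function names:

```python
import numpy as np

def hwe_pmf(maf):
    """Genotype probabilities (0, 1, 2 minor alleles) under the
    Hardy-Weinberg principle with minor allele frequency maf."""
    q = 1.0 - maf
    return np.array([q * q, 2.0 * maf * q, maf * maf])

def frank(u, v, theta):
    """Frank copula C_theta(u, v); theta = 0 is the independence limit."""
    if theta == 0.0:
        return u * v
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

def bivariate_pmf(maf, theta):
    """Joint genotype pmf of (X1, X2) with HWE marginals coupled by the
    Frank copula, via rectangle differences of the copula cdf."""
    F = np.concatenate(([0.0], np.cumsum(hwe_pmf(maf))))
    F[-1] = 1.0  # guard against floating-point drift in the last cdf value
    C = np.array([[frank(u, v, theta) for v in F] for u in F])
    return np.diff(np.diff(C, axis=0), axis=1)
```

The rectangle-difference construction guarantees that the marginals of the resulting 3 × 3 table are exactly the HWE genotype probabilities, whatever the copula parameter.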
Below, we specify the mass functions for independent and dependent and , where the dependence is given by the Frank copula with parameter and the marginals have the same distribution with .
Mass function of for . The upper panel corresponds to independent and , and the lower to the Frank copula with .
Mass function of for . The upper panel corresponds to independent and , and the lower to the Frank copula with .
Frank Copula with
Appendix A.2. Prevalence Mapping with the Logistic Regression Model
Logarithmic odds in the logistic regression model are given by (40):
This is equivalent to:
The above formula is used in (A1) to determine the distributions of in both populations.
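A minimal sketch of a logistic prevalence mapping with a single product-type interaction term; the exact parametrization of model (40) uses genotype-specific effects, so this only illustrates the mechanics of turning log-odds into P(Y = 1 | x1, x2) (parameter names are ours):

```python
import numpy as np

def prevalence(x1, x2, mu, beta1, beta2, gamma):
    """Illustrative logistic prevalence mapping: intercept mu, additive
    main effects beta1, beta2 and a product-type interaction gamma.
    Returns P(Y = 1 | x1, x2) by inverting the log-odds."""
    logit = mu + beta1 * x1 + beta2 * x2 + gamma * x1 * x2
    return 1.0 / (1.0 + np.exp(-logit))
```

In the simulations, the intercept μ is the knob tuned so that the marginal prevalence matches a prescribed value, while γ controls the strength of the logistic interaction.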
Cordell, H.J. Epistasis: What it means, what it doesn’t mean, and statistical methods to detect it in humans. Hum. Mol. Genet. 2002, 11, 2463–2468.
Ritchie, M.; Hahn, L.; Roodi, N.; Bailey, L.; Dupont, W.; Parl, F.; Moore, J. Multifactor dimensionality reduction reveals high-order interactions in genome-wide association studies in sporadic breast cancer. Am. J. Hum. Genet. 2001, 69, 138–147.
Zhang, J.; Liu, R. Bayesian inference of epistatic interactions in case-control studies. Nat. Genet. 2007, 39, 167–173.
Yang, C.; He, Z.; Wan, X.; Yang, Q.; Xue, H.; Yu, W. SNP Harvester: A filtering-based approach for detecting epistatic interactions in genome-wide association studies. Bioinformatics 2009, 25, 504–511.
Liu, Y.; Xu, H.; Chen, S. Genome-wide interaction based association analysis identified multiple new susceptibility loci for common diseases. PLoS Genet. 2011, 7, e1001338.
Goudey, B.; Rawlinson, D.; Wang, Q.; Shi, F.; Ferra, H.; Campbell, R.M.; Stern, L.; Inouye, M.T.; Ong, C.S.; Kowalczyk, A. GWIS—Model-free, fast and exhaustive search for epistatic interactions in case-control GWAS. BMC Genom. 2013, 14 (Suppl. 3), S10.
Chanda, P.; Sucheston, L.; Zhang, A.; Brazeau, D.; Freudenheim, J.; Ambrosone, C.; Ramanathan, M. AMBIENCE: A novel approach and efficient algorithm for identifying informative genetic and environmental associations with complex phenotypes. Genetics 2008, 180, 1191–1210.
Wan, X.; Yang, C.; Yang, Q.; Xue, T.; Fan, X.; Tang, N.; Yu, W. BOOST: A fast approach to detecting gene-gene interactions in genome-wide case-control studies. Am. J. Hum. Genet. 2010, 87, 325–340.
Wei, W.H.; Hermani, G.; Haley, C. Detecting epistasis in human complex traits. Nat. Rev. Genet. 2014, 15, 722–733.
Moore, J.; Williams, S. Epistasis. Methods and Protocols; Humana Press: New York, NY, USA, 2015.
Duggal, P.; Gillanders, E.; Holmes, T.; Bailey-Wilson, J. Establishing an adjusted p-value threshold to control the family-wide type 1 error in genome wide association studies. BMC Genom. 2008, 9, 516.
Culverhouse, R.; Suarez, B.K.; Lin, J.; Reich, T. A perspective on epistasis: Limits of models displaying no main effect. Am. J. Hum. Genet. 2002, 70, 461–471.
Lüdtke, N.; Panzeri, S.; Brown, M.; Broomhead, D.S.; Knowles, J.; Montemurro, M.A.; Kell, D.B. Information-theoretic sensitivity analysis: A general method for credit assignment in complex networks. J. R. Soc. Interface 2008, 5, 223–235.
Evans, D.; Marchini, J.; Morris, A.; Cardon, L. Two-stage two-locus models in genome-wide association. PLoS Genet. 2006, 2, 1424–1432.
Fano, F. Transmission of Information: Statistical Theory of Communication; MIT Press: Cambridge, MA, USA, 1961.
Cover, T.; Thomas, J. Elements of Information Theory; Wiley: New York, NY, USA, 2006.
Han, T.S. Multiple mutual informations and multiple interactions in frequency data. Inf. Control 1980, 46, 26–45.
Kang, G.; Yue, W.; Zhang, J.; Cui, Y.; Zuo, Y.; Zhang, D. An entropy-based approach for testing genetic epistasis underlying complex diseases. J. Theor. Biol. 2008, 250, 362–374.
Zhao, J.; Jin, L.; Xiong, M. Test for interaction between two unlinked loci. Am. J. Hum. Genet. 2006, 79, 831–845.
Darroch, J. Interactions in multi-factor contingency tables. J. R. Stat. Soc. Ser. B 1962, 24, 251–263.
Sucheston, L.; Chanda, P.; Zhang, A.; Tritchler, D.; Ramanathan, M. Comparison of information-theoretic to statistical methods for gene-gene interactions in the presence of genetic heterogeneity. BMC Genom. 2010, 11, 487.
Darroch, J. Multiplicative and additive interaction in contingency tables. Biometrika 1974, 9, 207–214.
Li, W.; Reich, J. A complete enumeration and classification of two-locus disease models. Hum. Hered. 2000, 17, 334–349.
Culverhouse, R. The use of the restricted partition method with case-control data. Hum. Hered. 2007, 63, 93–100.
Rencher, A.C.; Schaalje, G.B. Linear Models in Statistics; Wiley: Hoboken, NJ, USA, 2008.
Tan, A.; Fan, J.; Karakiri, C.; Bibikova, M.; Garcia, E.; Zhou, L.; Barker, D.; Serre, D.; Feldmann, G.; Hruban, R.; et al. Allele-specific expression in the germline of patients with familial pancreatic cancer: An unbiased approach to cancer gene discovery. Cancer Biol. Ther. 2008, 7, 135–144.
The statements, opinions and data contained in the journal Entropy are solely those of the individual authors and contributors and not of the publisher and the editor(s). MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.