Bayesian test of significance for conditional independence: The multinomial model

Conditional independence tests (CI tests) have received special attention lately in the Machine Learning and Computational Intelligence literature as an important indicator of the relationships among the variables used by these models. In the field of Probabilistic Graphical Models (PGM)--which includes Bayesian Network (BN) models--CI tests are especially important for the task of learning the PGM structure from data. In this paper, we propose the Full Bayesian Significance Test (FBST) for tests of conditional independence for discrete datasets. FBST is a powerful Bayesian test for precise hypotheses and an alternative to frequentist significance tests (characterized by the calculation of the \emph{p-value}).


Introduction
Barlow and Pereira [1] discussed a graphical approach to conditional independence. A probabilistic influence diagram is a directed acyclic graph (DAG) that helps model statistical problems. The graph is composed of a set of nodes or vertices, which represent the variables, and a set of arcs joining the nodes, which represent the dependence relationships shared by these variables.
The construction of this model helps us understand the problem and gives a good representation of the interdependence of the implicated variables. The joint probability of these variables can be written as a product of their conditional distributions, based on their independence and conditional independence.
The interdependence of the variables [2] is sometimes unknown. In this case, the model structure must be learned from data. Algorithms, such as the IC-algorithm (inferred causation) described in Pearl and Verma [3], have been designed to uncover these structures from the data. This algorithm uses a series of conditional independence tests (CI tests) to remove and direct the arcs, connecting the variables in the model and returning a DAG that minimally (with the minimum number of parameters and without loss of information) represents the variables in the problem.
The problem of constructing the DAG structures based on the data motivates the proposal of new powerful statistical tests for the hypothesis of conditional independence, because the accuracy of the structures learned is directly affected by the errors committed by these tests. Recently proposed structure learning algorithms [4][5][6] indicate that the results of CI tests are the main source of errors.
In this paper, we propose the full Bayesian significance test (FBST) as a test of conditional independence for discrete datasets. FBST is a powerful Bayesian test for a precise hypothesis and can be used to learn the DAG structures based on the data as an alternative to the CI tests currently in use, such as Pearson's chi-squared test.
This paper is organized as follows. In Section 2, we review the FBST. In Section 3, we review the FBST for the composite hypothesis. Section 4 gives an example of testing for conditional independence that can be used to construct a simple model with three variables.

The Full Bayesian Significance Test
The full Bayesian significance test was presented by Pereira and Stern [7] as a coherent Bayesian significance test for sharp hypotheses. In the FBST, the evidence for a precise hypothesis is computed. This evidence is given by the complement of the probability of a credible set, called the tangent set, a subset of the parameter space in which the posterior density of each element is greater than the maximum of the posterior density over the null hypothesis. This evidence is called the e-value, ev(H), and has many desirable properties as a measure of statistical support. For example, Borges and Stern [8] described the following properties; the e-value:
(1) provides a measure of significance for the hypothesis as a probability defined directly in the original parameter space;
(2) provides a smooth measure of significance, continuous and differentiable in the hypothesis parameters;
(3) has an invariant geometric definition, independent of the particular parameterization of the null hypothesis being tested or the particular coordinate system chosen for the parameter space;
(4) requires no ad hoc artifice, such as an arbitrary initial belief ratio between hypotheses;
(5) is a possibilistic support function, where the support of a logical disjunction is the maximum support among the supports of the disjuncts;
(6) provides a consistent test for a given sharp hypothesis;
(7) is an exact procedure, making no use of asymptotic approximations when computing the e-value;
(8) allows the incorporation of previous experience or expert opinions via prior distributions.
Furthermore, FBST is an exact test, whereas tests such as the one presented in Geenens and Simar [9] are only asymptotically correct. We therefore consider a direct comparison between FBST and such tests outside the scope of this paper, leaving as future research a comparison on small samples, for which FBST remains valid.
A more formal definition is given below.
Consider a model in a statistical space described by the triple $(\Xi, \Delta, \Theta)$, where $\Xi$ is the sample space, $\Delta$ the family of measurable subsets of $\Xi$ and $\Theta$ the parameter space ($\Theta \subseteq \mathbb{R}^n$).
Define the tangent set $T_\varphi$ as the subset of the parameter space where the posterior density (denoted by $f_x$) of each element is greater than $\varphi$:
$$T_\varphi = \{\theta \in \Theta : f_x(\theta) > \varphi\}.$$
The credibility of $T_\varphi$ is given by its posterior probability,
$$\kappa_\varphi = \int_\Theta \mathbf{1}_{T_\varphi}(\theta)\, f_x(\theta)\, d\theta,$$
where $\mathbf{1}_{T_\varphi}(\theta)$ is the indicator function.
Define the maximum of the posterior density over the null hypothesis as $f^*_x = f_x(\theta^*_0)$, with the maximum point $\theta^*_0 = \arg\max_{\theta \in \Theta_0} f_x(\theta)$, and define $T^* = T_{f^*_x}$ as the tangent set to the null hypothesis $H_0$, with credibility $\kappa^*$. The measure of evidence for the null hypothesis (called the e-value), which is the complement of the probability of the set $T^*$, is defined as:
$$ev(H_0) = 1 - \kappa^* = 1 - \int_\Theta \mathbf{1}_{T^*}(\theta)\, f_x(\theta)\, d\theta.$$
If the probability of the set $T^*$ is large, the null set falls within a region of low posterior probability, and the evidence is against the null hypothesis $H_0$. However, if the probability of $T^*$ is small, the null set lies in a region of high posterior probability, and the evidence supports the null hypothesis. Figure 1 shows the tangent set for a null hypothesis $H_0 : \mu = 1$ under the posterior distribution $f_x$, where $\mu$ is the mean of a normal distribution and $\tau = 1/\sigma^2$ is the precision (the inverse of the variance).
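As an illustration of these definitions (not an example from the paper), the e-value for a sharp hypothesis under a one-dimensional posterior can be computed by direct grid integration. The Beta(8, 14) posterior below (a uniform prior updated with 7 successes in 20 hypothetical binomial trials) and the point null $H_0 : \theta = 0.5$ are our own choices:

```python
import numpy as np
from math import lgamma

# Hypothetical example (not from the paper): FBST e-value for the sharp null
# H0: theta = 0.5 under a Beta(8, 14) posterior.
a, b = 8.0, 14.0

def posterior_pdf(theta):
    # Beta(a, b) density, normalized via log-gamma for numerical stability.
    logc = lgamma(a + b) - lgamma(a) - lgamma(b)
    return np.exp(logc + (a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta))

theta0 = 0.5
f_star = posterior_pdf(theta0)   # max posterior density over the point null H0

# Credibility of the tangent set T* = {theta : f(theta) > f*} by grid integration.
grid = np.linspace(1e-9, 1 - 1e-9, 200_001)
dens = posterior_pdf(grid)
kappa_star = np.trapz(np.where(dens > f_star, dens, 0.0), grid)

ev = 1.0 - kappa_star            # e-value: evidence supporting H0
print(f"ev(H0) = {ev:.4f}")
```

A small e-value here would indicate that the null point sits in the low-density tail of the posterior, so the tangent set captures most of the posterior mass.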

FBST: Compositionality
The relationship between the credibility of a complex hypothesis, H, and its elementary constituent, H j , j = 1, . . . , k, under the full Bayesian significance test, was analyzed by Borges and Stern [8].
For a given set of independent parameters, $(\theta_1, \ldots, \theta_k) \in (\Theta_1 \times \ldots \times \Theta_k)$, a complex hypothesis, H, can be given as follows:
$$H : \theta \in (\Theta_{H_1} \times \ldots \times \Theta_{H_k}),$$
where $\Theta_{H_j}$ is a subset of the parameter space, $\Theta_j$, for $j = 1, \ldots, k$, constrained by the hypothesis H, which can be decomposed into its elementary components (hypotheses):
$$H = H_1 \wedge H_2 \wedge \ldots \wedge H_k, \qquad H_j : \theta_j \in \Theta_{H_j}.$$
The credibility of H can be evaluated based on the credibility of these components. The evidence in favor of the complex hypothesis, H (measured by its e-value), cannot be obtained directly from the evidence in favor of the elementary components; instead, it must be based on their truth functions, $W_j$ (or cumulative surprise distributions), as defined below. For a given elementary component ($H_j$) of the complex hypothesis, H, $\theta^*_j$ is the point of maximum density of the posterior distribution ($f_x$) constrained to the subset of the parameter space defined by hypothesis $H_j$:
$$\theta^*_j = \arg\max_{\theta_j \in \Theta_{H_j}} f_x(\theta_j), \qquad f^*_j = f_x(\theta^*_j).$$
The truth function, $W_j$, is the probability of the parameter subspace (the region $R_j(v)$ of the parameter space defined below) where the posterior density is lower than or equal to the value v:
$$W_j(v) = \int_{R_j(v)} f_x(\theta_j)\, d\theta_j, \qquad R_j(v) = \{\theta_j \in \Theta_j : f_x(\theta_j) \le v\}.$$
The evidence supporting the hypothesis, $H_j$, is given as follows:
$$ev(H_j) = W_j(f^*_j).$$
The evidence supporting the complex hypothesis can then be described in terms of the truth functions of its components, as follows. Given two independent random variables, X and Y, if $Z = XY$, with cumulative distribution functions $F_Z(z)$, $F_X(x)$ and $F_Y(y)$, then:
$$F_Z(z) = \int_0^\infty F_X(z/y)\, dF_Y(y).$$
Accordingly, we define a functional product for cumulative distribution functions, namely,
$$F_Z = F_X \otimes F_Y.$$
The same result concerning the product of non-negative random variables can be expressed by the Mellin convolution of the probability density functions, as demonstrated by Kaplan and Lin [10], Springer [11] and Williamson [12].
The evidence supporting the complex hypothesis can then be described as the Mellin convolution of the truth functions of its components:
$$ev(H) = (W_1 \otimes W_2 \otimes \ldots \otimes W_k)(f^*_1 \cdot f^*_2 \cdots f^*_k).$$
The Mellin convolution of two truth functions, $W_1 \otimes W_2$, is the distribution function (see Borges and Stern [8]):
$$(W_1 \otimes W_2)(v) = \int_0^\infty W_1(v/y)\, dW_2(y).$$
The Mellin convolution $W_1 \otimes W_2$ gives the distribution function of the product of two independent random variables with distribution functions $W_1$ and $W_2$; see Kaplan and Lin [13] and Williamson [12]. Furthermore, the commutative and associative properties follow immediately for the Mellin convolution:
$$W_1 \otimes W_2 = W_2 \otimes W_1, \qquad (W_1 \otimes W_2) \otimes W_3 = W_1 \otimes (W_2 \otimes W_3).$$
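The identity between the Mellin convolution and the distribution of a product can be checked numerically. The sketch below (our own, not from the paper) estimates $(W_1 \otimes W_2)(v) = \int_0^\infty W_1(v/y)\, dW_2(y)$ by averaging $W_1(v/y)$ over draws $y \sim W_2$, and compares it with the empirical cdf of the product of independent samples; the exponential example is an arbitrary choice:

```python
import numpy as np

# Monte Carlo form of the Mellin convolution: E_y[W1(v/y)] with y drawn from W2.
rng = np.random.default_rng(0)

def mellin_convolution(W1, samples2, v):
    return float(np.mean(W1(v / samples2)))

# Example: two standard exponential variables, cdf W(y) = 1 - exp(-y), y >= 0.
W1 = lambda y: 1.0 - np.exp(-np.maximum(y, 0.0))
y2 = rng.exponential(size=200_000)

v = 1.5
approx = mellin_convolution(W1, y2, v)

# Direct check: empirical cdf of the product of independent exponentials.
y1 = rng.exponential(size=200_000)
empirical = float(np.mean(y1 * y2 <= v))
print(approx, empirical)  # both estimate P(Y1 * Y2 <= 1.5)
```

The two estimates agree up to Monte Carlo error, which is the content of the identity $F_Z = F_X \otimes F_Y$ above.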

Mellin Convolution: Example
An example of a Mellin convolution to find the product of two random variables, Y 1 and Y 2 , both of which have a Log-normal distribution, is given below.
Assume $Y_1$ and $Y_2$ to be continuous random variables, such that:
$$Y_1 \sim \text{Log-normal}(\mu_1, \sigma_1^2), \qquad Y_2 \sim \text{Log-normal}(\mu_2, \sigma_2^2).$$
We denote the cumulative distributions of $Y_1$ and $Y_2$ by $W_1$ and $W_2$, respectively, i.e.,
$$W_i(y) = \int_0^y f_{Y_i}(t)\, dt, \qquad i = 1, 2,$$
where $f_{Y_1}$ and $f_{Y_2}$ are the density functions of $Y_1$ and $Y_2$, respectively. These variables can be written as functions of two normally distributed random variables, $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$:
$$Y_1 = e^{X_1}, \qquad Y_2 = e^{X_2}.$$
We can confirm that the distribution of the product of these random variables ($Y_1 \cdot Y_2$) is also Log-normal, using simple arithmetic operations:
$$Y_1 \cdot Y_2 = e^{X_1} \cdot e^{X_2} = e^{X_1 + X_2}, \qquad X_1 + X_2 \sim N(\mu_1 + \mu_2,\, \sigma_1^2 + \sigma_2^2),$$
so that $Y_1 \cdot Y_2 \sim \text{Log-normal}(\mu_1 + \mu_2,\, \sigma_1^2 + \sigma_2^2)$. The cumulative distribution function of $Y_1 \cdot Y_2$, $W_{12}(y_{12})$, is defined as follows:
$$W_{12}(y_{12}) = \Phi\!\left(\frac{\ln y_{12} - (\mu_1 + \mu_2)}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right),$$
where $\Phi$ is the standard normal cumulative distribution function. In the next section, we show different numerical methods for use in the convolution and condensation procedures, and we apply the results of these procedures to the example given here.
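The closed form above can be verified by simulation. In the sketch below (our own check, with illustrative parameter values $\mu_1 = 0.5$, $\sigma_1 = 0.4$, $\mu_2 = -0.2$, $\sigma_2 = 0.3$), the empirical cdf of the product of log-normal samples is compared with $\Phi\big((\ln y - (\mu_1+\mu_2))/\sqrt{\sigma_1^2+\sigma_2^2}\big)$:

```python
import numpy as np
from math import erf, log, sqrt

# Monte Carlo check that the product of independent log-normals is log-normal.
rng = np.random.default_rng(42)
mu1, s1, mu2, s2 = 0.5, 0.4, -0.2, 0.3

y1 = rng.lognormal(mu1, s1, size=500_000)
y2 = rng.lognormal(mu2, s2, size=500_000)

def W12(y):
    # cdf of Log-normal(mu1 + mu2, s1^2 + s2^2), via the standard normal Phi.
    z = (log(y) - (mu1 + mu2)) / sqrt(s1**2 + s2**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

ys = (0.5, 1.0, 2.0, 4.0)
empirical = [float(np.mean(y1 * y2 <= y)) for y in ys]
closed = [W12(y) for y in ys]
for y, e, c in zip(ys, empirical, closed):
    print(f"y = {y}: empirical = {e:.4f}, closed form = {c:.4f}")
```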

Numerical Methods for Convolution and Condensation
Williamson and Downs [14] developed the idea of probabilistic arithmetic. They investigated numerical procedures that replace basic arithmetic operations on numbers with the corresponding operations on random variables, allowing the distribution of the result to be computed, and they demonstrated numerical methods for calculating the convolution of probability distributions for a set of random variables.
The convolution for the multiplication of two random variables, $X_1$ and $X_2$ ($Z = X_1 \cdot X_2$), can be written using their respective cumulative distribution functions, $F_{X_1}$ and $F_{X_2}$:
$$F_Z(z) = \int_0^\infty F_{X_1}(z/x)\, dF_{X_2}(x).$$
The algorithm for the numerical calculation of the distribution of the product of two independent random variables ($Y_1$ and $Y_2$), using their discretized marginal probability distributions ($f_{Y_1}$ and $f_{Y_2}$), is shown in Algorithm 1 (an algorithm for a discretization procedure is given by Williamson and Downs [14]). The description of Algorithm 1 is given below.
(1) The algorithm has as inputs two discrete variables, $Y_1$ and $Y_2$, as well as their respective probability density functions (pdf), $f_{Y_1}$ and $f_{Y_2}$.
(2) The algorithm finds the products $y_{1i} \cdot y_{2j}$ and their associated probabilities $f_{Y_1}(y_{1i}) \cdot f_{Y_2}(y_{2j})$, for every pair of bins $(i, j)$.
(3) The values of $Y_1 \cdot Y_2$ are sorted in increasing order.
(4) The cumulative distribution function (cdf) of the product $Y_1 \cdot Y_2$ is found (it has $N^2$ bins).
The numerical convolution of the two distributions with N bins, as described above, returns a distribution with $N^2$ bins. For a sequence of operations, such a large number of bins would be a problem, because the result of each operation would be larger than the input for the next. Therefore, the authors proposed a simple method for reducing the size of the output to N bins without introducing further error into the result. This operation is called condensation and returns the upper and lower bounds of each of the N bins for the distribution resulting from the convolution. The algorithm for the condensation process is shown in Algorithm 2. The description of Algorithm 2 is given below.
(1) The algorithm has as input a cdf with $N^2$ bins.
(2) For each group of N bins (there are N groups of N bins), the value of the cdf at the first bin is taken as the lower bound, and the value of the cdf at the last bin is taken as the upper bound.
(3) The algorithm returns a cdf with N bins, where each bin has a lower and an upper bound.
Algorithm 1 Find the distribution of the product of two random variables.

1: input: y1, y2 and their pdfs f1, f2 (f1 and f2 have n bins)
2: for i ← 1, n do
3:   for j ← 1, n do
4:     z[(i − 1) · n + j] ← y1[i] · y2[j]
5:     p[(i − 1) · n + j] ← f1[i] · f2[j]
6:   end for
7: end for
8: sort the pairs (z, p) by z in increasing order
9: W ← cumulative sum of p
10: output: the cdf W of Y1 · Y2, with n² bins
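The steps of Algorithms 1 and 2 can be sketched compactly in code. The implementation below is our own reading of the descriptions above (it is a sketch, not the authors' code): pairwise products with their probabilities, a sort, a cumulative sum, and then a reduction of the $n^2$-bin cdf to n bins with lower and upper bounds:

```python
import numpy as np

def product_convolution(y1, f1, y2, f2):
    """Support and cdf of Y1 * Y2 from discretized pdfs; result has n^2 bins."""
    z = np.multiply.outer(y1, y2).ravel()   # all pairwise products
    p = np.multiply.outer(f1, f2).ravel()   # associated probabilities
    order = np.argsort(z)
    z, p = z[order], p[order]
    return z, np.cumsum(p)                  # sorted support, cdf

def condense(z, W, n):
    """Reduce a cdf with n^2 bins to n bins with lower and upper cdf bounds."""
    zg, Wg = z.reshape(n, n), W.reshape(n, n)
    lower = Wg[:, 0]      # cdf value at the first bin of each group of n
    upper = Wg[:, -1]     # cdf value at the last bin of each group of n
    return zg[:, -1], lower, upper

# Usage: two uniform 4-bin distributions on {1, 2, 3, 4}.
y = np.array([1.0, 2.0, 3.0, 4.0])
f = np.full(4, 0.25)
z, W = product_convolution(y, f, y, f)
zc, lo, hi = condense(z, W, 4)
print(float(W[-1]))   # total probability mass of the convolved cdf
```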

Vertical Condensation
Kaplan and Lin [13] proposed a vertical condensation procedure for discrete probability calculations, where the condensation is done using the vertical axis, instead of the horizontal axis, as used by Williamson and Downs [14].
The advantage of this approach is that it provides greater control over the representation of the distribution; instead of selecting an interval of the domain of the cumulative distribution function (values assumed by the random variable) as a bin, we select the interval from the range of the cumulative distribution in [0, 1], which should be represented by each bin.
In this case, it is also possible to focus on a specific region of the distribution. For example, if there is a greater interest in the behavior of the tail of the distribution, the size of the bins can be reduced in this region, consequently increasing the number of bins necessary to represent the tail of the distribution.
An example of such a convolution that is followed by a condensation procedure using both approaches is given in Section 3.1. For this example, we used discretization and condensation procedures, with the bins uniformly distributed over both axes. At the end of the condensation procedure, using the first approach, the bins are uniformly distributed horizontally (over the sample space of the variable). For the second approach, the bins of the cumulative probability distribution are uniformly distributed over the vertical axis on the interval [0, 1]. Algorithm 3 shows the condensation with the bins uniformly distributed over the vertical axis.
Algorithm 3 Condensation with the bins uniformly distributed over the vertical axis.

1: input: histograms of a cdf W and pdf, and a set of breaks on the vertical axis
2: w ← 1
3: for all b ∈ breaks do
4:   while W[w] < b do   advance until the break falls within the current bin
5:     w ← w + 1
6:   end while
7:   record the value of the variable at bin w as the edge of the current condensed bin
8: end for
9: output: a condensed cdf, with one bin per break

Figure 2 shows the cumulative distribution functions of $Y_1$ and $Y_2$ (Section 3.1) after they have been discretized with bins uniformly distributed over both the x- and y-axes (horizontal and vertical discretizations). Figure 3 shows an example of convolution followed by condensation (based on the example in Section 3.1), using both the horizontal and vertical condensation procedures, together with the true distribution of the product of two variables with Log-normal distributions.

Figure 2. Example of different discretization methods for the representation of the cdf of two random variables ($Y_1$ and $Y_2$) with Log-normal distributions. In (a) and (c), respectively, the cdf of $Y_1$ and $Y_2$ are shown with the bins uniformly distributed over the x-axis. In (b) and (d), respectively, the cdf of $Y_1$ and $Y_2$ are shown with the bins uniformly distributed over the y-axis.

Figure 3, panel (a): $W_1 \otimes W_2$, horizontal discretization.
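Vertical condensation can be sketched as follows (our own implementation of the procedure described above, not the authors' code): bin edges are chosen on the probability axis [0, 1], and for each break we record the value of the variable at which the cdf first reaches it:

```python
import numpy as np

def vertical_condense(z, W, n_bins):
    """Condense a fine cdf (support z, cdf values W) to n_bins vertical bins."""
    breaks = np.linspace(0.0, 1.0, n_bins + 1)[1:]   # uniform breaks in (0, 1]
    idx = np.searchsorted(W, breaks)                 # first bin with W >= break
    idx = np.minimum(idx, len(z) - 1)
    return z[idx], breaks                            # variable values, cdf levels

# Usage: condense a fine empirical log-normal cdf to 10 vertical bins.
rng = np.random.default_rng(1)
samples = np.sort(rng.lognormal(0.0, 0.5, size=10_000))
W = np.arange(1, samples.size + 1) / samples.size
zc, levels = vertical_condense(samples, W, 10)
print(levels[-1])  # the last break covers the whole distribution
```

To focus on a tail, as discussed above, the `breaks` array can simply be made non-uniform, with more levels placed in the region of interest.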

Test of Conditional Independence in Contingency Table Using FBST
We now apply the methods shown in the previous sections to find the evidence supporting a complex null hypothesis of conditional independence among discrete variables.
Given the discrete random variables, X, Y and Z, with X taking values in {1, . . . , k}, and Y and Z categorical variables taking values in {1, . . . , r} and {1, . . . , c}, respectively, the test for conditional independence $Y \perp\!\!\!\perp Z \mid X$ can be written as the complex null hypothesis, H:
$$H : p(Y = y, Z = z \mid X = x) = p(Y = y \mid X = x)\, p(Z = z \mid X = x), \quad \forall\, x, y, z.$$
The hypothesis, H, can be decomposed into its elementary components:
$$H = H_1 \wedge H_2 \wedge \ldots \wedge H_k, \qquad H_x : Y \perp\!\!\!\perp Z \mid X = x.$$
Note that the hypotheses, $H_1, \ldots, H_k$, are independent. For each value, x, taken by X, the values taken by variables Y and Z are assumed to be random observations drawn from some distribution $p(Y, Z \mid X = x)$. Each of the elementary components is a hypothesis of independence in a contingency table of the cell counts $n_{yzx}$:

            Z = 1     Z = 2    · · ·    Z = c
Y = 1      n_11x     n_12x    · · ·    n_1cx
Y = 2      n_21x     n_22x    · · ·    n_2cx
· · ·       · · ·     · · ·    · · ·    · · ·
Y = r      n_r1x     n_r2x    · · ·    n_rcx

The test of the hypothesis, $H_x$, can be set up using the multinomial distribution for the cell counts of the contingency table and its natural conjugate prior, the Dirichlet distribution, for the vector of cell probabilities $\theta_x = [\theta_{11x}, \ldots, \theta_{rcx}]$. For a given array of hyperparameters $\alpha_x = [\alpha_{11x}, \ldots, \alpha_{rcx}]$, the Dirichlet distribution is defined as:
$$f(\theta_x \mid \alpha_x) \propto \prod_{y,z} \theta_{yzx}^{\alpha_{yzx} - 1}.$$
The multinomial likelihood for the given contingency table, assuming the array of observations $n_x = [n_{11x}, \ldots, n_{rcx}]$ and the sum of the observations $n_{..x} = \sum_{y,z} n_{yzx}$, is:
$$L(\theta_x \mid n_x) = \frac{n_{..x}!}{\prod_{y,z} n_{yzx}!} \prod_{y,z} \theta_{yzx}^{n_{yzx}}.$$
The posterior distribution is thus a Dirichlet distribution, $f_n(\theta_x)$:
$$f_n(\theta_x) \propto \prod_{y,z} \theta_{yzx}^{n_{yzx} + \alpha_{yzx} - 1}.$$
Under hypothesis $H_x$, we have $Y \perp\!\!\!\perp Z \mid X = x$. In this case, the joint distribution is equal to the product of the marginals: $p(Y = y, Z = z \mid X = x) = p(Y = y \mid X = x)\, p(Z = z \mid X = x)$. We can define this condition using the array of parameters, $\theta_x$. In this case, we have:
$$\theta_{yzx} = \theta_{y.x}\, \theta_{.zx},$$
where $\theta_{.zx} = \sum_{y} \theta_{yzx}$ and $\theta_{y.x} = \sum_{z} \theta_{yzx}$. The elementary components of hypothesis H are as follows:
$$H_x : \theta_{yzx} = \theta_{y.x}\, \theta_{.zx}, \quad \forall\, y, z.$$
The point of maximum density of the posterior distribution constrained to the subset of the parameter space defined by hypothesis $H_x$ can be estimated using the maximum a posteriori (MAP) estimator under hypothesis $H_x$ (the mode of the parameters, $\theta_x$). The maximum density ($f^*_x$) is the posterior density evaluated at this point:
$$f^*_x = f_n(\theta^*_x),$$
where $\theta^*_x = [\theta^*_{11x}, \ldots, \theta^*_{rcx}]$. The evidence supporting $H_x$ can be written in terms of the truth function, $W_x$, as defined in Section 3:
$$W_x(v) = \int_{R_x(v)} f_n(\theta_x)\, d\theta_x, \qquad R_x(v) = \{\theta_x : f_n(\theta_x) \le v\}.$$
The evidence supporting $H_x$ is:
$$ev(H_x) = W_x(f^*_x).$$
Finally, the evidence supporting the hypothesis of conditional independence (H) is given by the convolution of the truth functions, evaluated at the product of the points of maximum posterior density of the components of hypothesis H:
$$ev(H) = (W_1 \otimes W_2 \otimes \ldots \otimes W_k)(f^*_1 \cdot f^*_2 \cdots f^*_k).$$
The e-value for hypothesis H can be found using modern numerical integration methods. An example is given in the next section, using the numerical convolution followed by the condensation procedures described in Section 3.2. Applying the horizontal condensation method results in an interval for the e-value (found using the lower and upper bounds resulting from the condensation process), while the vertical procedure results in a single value.
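For a single elementary component $H_x$, the quantities above can be estimated by Monte Carlo. The sketch below is our own illustration (not the paper's numerical procedure), with made-up counts for a 3 × 3 table and a uniform prior; it exploits the fact that, with a uniform prior, the constrained MAP over product-form tables reduces to the product of the empirical margins:

```python
import numpy as np

# Monte Carlo estimate of ev(H_x) for independence in a 3 x 3 table.
rng = np.random.default_rng(0)
counts = np.array([[30, 20, 10],
                   [25, 22, 13],
                   [15, 18, 12]], dtype=float)   # illustrative counts
alpha = counts + 1.0    # posterior Dirichlet parameters under a uniform prior

def log_kernel(theta_flat):
    # Unnormalized log posterior density; the normalizer cancels in comparisons.
    return np.sum((alpha.ravel() - 1.0) * np.log(theta_flat), axis=-1)

# Constrained MAP under H_x: maximizing prod theta_yz^n_yz over product-form
# tables theta_yz = p_y * q_z gives the empirical row and column margins.
p = counts.sum(axis=1) / counts.sum()
q = counts.sum(axis=0) / counts.sum()
log_f_star = log_kernel(np.outer(p, q).ravel())

# ev(H_x) = W_x(f*) = posterior probability that f_n(theta) <= f*.
draws = rng.dirichlet(alpha.ravel(), size=100_000)
log_dens = log_kernel(draws)
ev = float(np.mean(log_dens <= log_f_star))
print(f"ev(H_x) ≈ {ev:.3f}")
```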

Example of CI Test Using FBST
In this section, we describe an example of the CI test using the full Bayesian significance test, applied to samples from two different models. For both models, we test whether the variable Y is conditionally independent of Z, given X.
Two probabilistic graphical models ($M_1$ and $M_2$) are shown in Figure 4, where the three variables, X, Y and Z, assume values in {1, 2, 3}. In the first model (Figure 4a), the hypothesis of independence $H : Y \perp\!\!\!\perp Z \mid X$ is true, but in the second model (Figure 4b), the same hypothesis is false. The synthetic conditional probability distribution tables (CPTs) used to generate the samples are given in the Appendix.
We calculate and compare the intervals for the e-values of the hypothesis H of conditional independence for both models: $Ev_{M_1}(H)$ and $Ev_{M_2}(H)$. The complex hypothesis, H, can be decomposed into its elementary components:
$$H = H_1 \wedge H_2 \wedge H_3, \qquad H_x : Y \perp\!\!\!\perp Z \mid X = x, \quad x \in \{1, 2, 3\}.$$
For each model, 5000 observations were generated; the contingency table of Y and Z for each value of X is shown in Table 2. The hyperparameters of the prior distribution were all set to one, because, in this case, the prior is equivalent to a uniform distribution (from Equation (23)):
$$f(\theta_x \mid \alpha_x) \propto \prod_{y,z} \theta_{yzx}^{0} = 1.$$
The posterior distribution, found using Equations (24) and (25), is then given as follows:
$$f_n(\theta_x) \propto \prod_{y,z} \theta_{yzx}^{n_{yzx}}.$$
For example, for the given contingency table for Model $M_1$ when X = 2 (Table 2c), the posterior distribution is a Dirichlet with parameters $n_{yz2} + 1$. The point of highest density under the hypothesis of independence was found using Equations (26) and (28), and the truth function and the evidence supporting the hypothesis of independence given X = 2 (hypothesis $H_2$) for Model $M_1$ follow from Equations (29) and (30). We used the methods of numerical integration to find the e-values of the elementary components of hypothesis H ($H_1$, $H_2$ and $H_3$); the results for each model are given below.

Table 2. Contingency tables of Y and Z. (a,c,e): samples from Model $M_1$ (Figure 4a) for X = 1, 2 and 3, respectively; (b,d,f): samples from Model $M_2$ (Figure 4b) for X = 1, 2 and 3, respectively.
Figure 5 shows the histograms of the truth functions, $W_1$, $W_2$ and $W_3$, for Model $M_1$ (Y and Z are conditionally independent, given X). In Figure 5a,c,e, 100 bins are uniformly distributed over the x-axis (using the empirical values of $\min f_n(\theta_x)$ and $\max f_n(\theta_x)$). In Figure 5b,d,f, 100 bins are uniformly distributed over the y-axis (each bin represents an increase of 1% in cumulative probability over the previous bin). The function $W_x$, evaluated at the maximum posterior density under the respective hypothesis, $f_n(\theta^*_x)$, shown in red, corresponds to the e-values found (e.g., $W_3(f_n(\theta^*_3)) \approx 0.1066$ for the horizontal discretization in Figure 5e).
In red is the maximum posterior density under the respective elementary component (H 1 , H 2 and H 3 ) of the hypothesis of conditional independence H for both horizontal and vertical discretization procedures.

The e-values of the elementary components ($H_1$, $H_2$ and $H_3$), under both the horizontal and vertical discretization procedures, were computed for each model. The evidence supporting the hypothesis of conditional independence H, as in Equation (31), for each model is obtained by convolving the truth functions of the components. The convolution follows the commutative property, and the order of the convolutions is therefore irrelevant.
Using the algorithm for numerical convolution described in Algorithm 1, we found the convolution of the truth functions, $W_1$ and $W_2$, resulting in a cumulative distribution function ($W_{12}$) with 10,000 bins ($100^2$ bins). We then performed the condensation procedures described in Algorithms 2 and 3 and reduced the cumulative distribution to 100 bins, with lower and upper bounds ($W^l_{12}$ and $W^u_{12}$) for the horizontal condensation. The results are shown in Figure 6a,b for Model $M_1$ (horizontal and vertical condensations, respectively) and in Figure 7a,b for Model $M_2$. The e-values supporting the hypothesis of conditional independence for both models are given below.
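As a cross-check of the convolution-based computation, the compositional e-value can also be estimated by sampling: since $W_1 \otimes W_2 \otimes W_3$ is the distribution function of the product of three independent random variables distributed as $W_1$, $W_2$ and $W_3$, the empirical cdf of products of independent posterior-density draws, evaluated at $f^*_1 f^*_2 f^*_3$, estimates ev(H). The sketch below is our own, and its inputs (per-component density samples and maxima) are synthetic placeholders, not the quantities computed from Table 2:

```python
import numpy as np

# Monte Carlo estimate of ev(H) = (W1 ⊗ W2 ⊗ W3)(f1* · f2* · f3*).
# dens[x] stands in for samples of the posterior density f_n(theta_x), and
# f_star[x] for the constrained maxima; both are placeholders here.
rng = np.random.default_rng(7)
dens = [rng.lognormal(mean=m, sigma=0.5, size=100_000) for m in (1.0, 1.2, 0.8)]
f_star = [float(np.quantile(d, 0.3)) for d in dens]   # placeholder maxima

# W1 ⊗ W2 ⊗ W3 is the cdf of the product of independent draws, so ev(H) is
# the fraction of sampled products at or below the product of the maxima.
prod = dens[0] * dens[1] * dens[2]
ev_H = float(np.mean(prod <= np.prod(f_star)))
print(f"ev(H) ≈ {ev_H:.3f}")
```

This sampling form trades the binning error of convolution plus condensation for Monte Carlo error, and can be useful as a sanity check on the condensed result.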
Intervals for the e-values were obtained using horizontal discretization and condensation, and single e-values were obtained using vertical discretization and condensation. These results show strong evidence supporting the hypothesis of conditional independence between Y and Z, given X, for Model $M_1$ (under both discretization/condensation procedures), and no evidence supporting the same hypothesis for the second model. This result is promising and motivates further studies on the use of FBST as a CI test for the structural learning of graphical models.

Conclusions and Future Work
This paper provides a framework for performing tests of conditional independence for discrete datasets using the Full Bayesian Significance Test. As a simple application, we examined the evidence for conditional independence under two different directed acyclic graph models. The results suggest that FBST is a good alternative for performing the CI tests used to uncover the structures of probabilistic graphical models from data.
Future research should include the use of FBST in algorithms to learn the structures of graphs with larger numbers of variables; faster methods for calculating e-values (because learning DAG structures from data requires an exponential number of CI tests, each CI test needs to be performed quickly); and an empirical evaluation of the threshold on e-values that separates conditional independence from dependence. The last of these should be achieved by minimizing a linear combination of type I and type II errors (the incorrect rejection of a true hypothesis of conditional independence and the failure to reject a false hypothesis of conditional independence).

Appendix

Table A1. Conditional probability distribution tables. (a) The distribution of X; (b) the conditional distribution of Y, given X; (c) the conditional distribution of Z, given X.