Article

Improved Time Complexities for Learning Boolean Networks

1 Faculty of Life Science and Technology, Kunming University of Science and Technology, Kunming 650500, Yunnan, China
2 School of Computer Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore
* Authors to whom correspondence should be addressed.
Entropy 2013, 15(9), 3762-3795; https://doi.org/10.3390/e15093762
Submission received: 27 May 2013 / Revised: 2 September 2013 / Accepted: 3 September 2013 / Published: 11 September 2013

Abstract:
Existing algorithms for learning Boolean networks (BNs) have time complexities of at least O(N·n^{0.7(k+1)}), where n is the number of variables, N is the number of samples and k is the number of inputs in Boolean functions. Some recent studies propose more efficient methods with O(N·n^2) time complexities. However, these methods can only be used to learn monotonic BNs, and their performance is not satisfactory when the sample size is small. In this paper, we mathematically prove that OR/AND BNs, where the variables are related with logical OR/AND operations, can be found with a time complexity of O(k·(N + log n)·n^2), if there are enough noiseless training samples randomly generated from a uniform distribution. We also demonstrate that our method can successfully learn most BNs, whose variables are not related with exclusive OR and Boolean equality operations, with the same order of time complexity as for learning OR/AND BNs, indicating that our method is efficient for learning general BNs other than monotonic BNs. When the datasets are noisy, our method can still successfully identify most BNs with the same efficiency. When compared with two existing methods under the same settings, our method achieves a better comprehensive performance than both of them, especially for small training sample sizes. More importantly, our method can be used to learn all BNs, whereas of the two compared methods, one can only be used to learn monotonic BNs and the other has a much worse time complexity than our method. In conclusion, our results demonstrate that Boolean networks can be learned with improved time complexities.

1. Introduction

Gene Regulatory Networks (GRNs) are believed to be the underlying mechanisms that control different gene expression patterns. Every regulatory module in the genome receives multiple disparate inputs and processes them in ways that can be mathematically represented as combinations of logical functions (e.g., “AND” functions, “SWITCH” functions, and “OR” functions) [1]. Different regulatory modules are then woven together into complex GRNs, which give specific outputs, i.e., different gene expression patterns (as in developmental processes), depending on their inputs, i.e., the current status of the cell. Hence, the architecture of a GRN is fundamental for both explaining and predicting developmental phenomenology [2,3].
Boolean networks (BNs) [4] have received much attention as models for reconstructing GRNs from gene expression datasets [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. Many algorithms are computationally expensive, with complexities on the order of O(N·n^{k+1}), as summarized in Table 1. Liang et al. [13] proposed the REVEAL algorithm, with O(N·n^{k+1}) complexity, to reconstruct BNs from binary transition pairs. Akutsu et al. [5] introduced an algorithm for the same purpose, but the complexity of their method is O(k·2^{2k}·N·n^{k+1}), which is worse than that of REVEAL. Akutsu et al. [6] proposed another algorithm, with a complexity of O(N^{ω−2}·n^{k+1} + N·n^{k+ω−2}), where ω = 2.376, to find the BNs with high probability. Shmulevich et al. [19,20] introduced an algorithm with a complexity of O(2^{2k}·N·n^{k+1}). Lähdesmäki et al. [12] proposed an algorithm with O(C(n,k)·n·N·poly(k)) complexity, where poly(k) is k in most cases and C(n,k) denotes the binomial coefficient. Laubenbacher and Stigler [11] proposed a method based on computational algebra with a complexity of O(n^2·N^2) + O((N^3 + N)(log p)^2 + N^2·n^2) + O(n(N−1)^2·c^N + N − 1), where p is the size of the state set, S, and c is a constant. Nam et al. [18] proposed a method with an average time complexity of O(N·n^{k+1}/(log N)^{k−1}). Kim et al. [10] introduced an algorithm with a complexity of O(n^2 + n_{1,j}·(n − 1)) + O(2^{2k}·Σ_{j=1}^{n} Σ_{i=1}^{n_{1,j}} C(n_{2,ij}, k)·N·poly(k)), where n_{1,j} is the number of first selected genes for the jth gene and n_{2,ij} is the number of second selected genes when the ith gene is selected in the first step.
In the field of machine learning, many algorithms have also been introduced for learning Boolean functions [22,23,24,25,26,27,28,29,30]. If these algorithms are adapted to learn BNs of bounded indegree k, i.e., n Boolean functions with k inputs, their complexities are at least O(N·n^{k+1}).
More efficient algorithms are indispensable for simulating GRNs with BNs on a large scale. Akutsu et al. [8] proposed an approximation algorithm called GREEDY1 with a complexity of O((2 ln N + 1)·N·n^2), but the success ratio of the GREEDY1 algorithm is not satisfactory, especially when k is small. For example, for general Boolean functions with k = 2, the success ratio of the GREEDY1 algorithm is only around 50%, no matter how many learning samples are used. An efficient algorithm was also proposed by Mossel et al. [31], with a time complexity of O((n^{k+1})^{ω/(ω+1)}), which is about O(n^{0.7(k+1)}), for learning arbitrary BNs, where ω < 2.376. Arpe and Reischuk [32] showed that monotonic BNs can be learned with a complexity of poly(n^2, 2^k, log(1/δ), γ_a^{−d}, γ_b^{−1}) under (γ_a, γ_b)-bounded noise. A Boolean function is monotonic if, for every input variable, the function is either monotonically increasing or monotonically decreasing in that variable [16]. Recently, Maucher et al. [16] proposed an efficient method with a time complexity of O(N·n^2). However, this method, as well as its improvement in [17], is only applicable to BNs containing only monotonic Boolean functions, and its specificity is unsatisfactory when the sample size is small. In our earlier work [21], we introduced the Discrete Function Learning (DFL) algorithm, with an expected complexity of O(k·(N + log n)·n^2), for reconstructing qualitative models of GRNs from gene expression datasets.
Table 1. The summary of the complexities of different algorithms for learning BNs.
Time Complexity | Reference
O(N·n^{k+1}) | [13]
O(k·2^{2k}·N·n^{k+1}) | [5]
O(N^{ω−2}·n^{k+1} + N·n^{k+ω−2}), where ω = 2.376 | [6]
O(2^{2k}·N·n^{k+1}) | [19,20]
O(C(n,k)·n·N·poly(k)) | [12]
O(n^2·N^2) + O((N^3 + N)(log p)^2 + N^2·n^2) + O(n(N−1)^2·c^N + N − 1) | [11]
O((n^{k+1})^{ω/(ω+1)}), where ω < 2.376 | [31]
O(N·n^{k+1}/(log N)^{k−1}) | [18]
poly(n^2, 2^k, log(1/δ), γ_a^{−d}, γ_b^{−1}) * | [32]
O(n^2 + n_{1,j}·(n − 1)) + O(2^{2k}·Σ_{j=1}^{n} Σ_{i=1}^{n_{1,j}} C(n_{2,ij}, k)·N·poly(k)) | [10]
O(N·n^2) * | [16]
* For BNs of monotonic Boolean functions.
Until now, it has remained an open question whether general BNs can be learned with a complexity better than n^{(1−o(1))(k+1)} [31,32]. In this work, as an endeavor to meet this challenge, we prove that the complexity of the DFL algorithm is strictly O(k·(N + log n)·n^2) for learning OR/AND BNs in the worst case, given enough noiseless random samples from the uniform distribution. This conclusion is also validated through comprehensive experiments. We also demonstrate that the complexity of the DFL algorithm is still O(k·(N + log n)·n^2) for learning general BNs whose input variables are not related with exclusive OR (XOR) and Boolean equality (the inversion of exclusive OR, also called XNOR) operations. Even for exclusive OR and Boolean equality functions, the DFL algorithm can still correctly identify the original networks, although it uses more time, with a worst-case complexity of O((N + log n)·n^{k+1}). Furthermore, even when the datasets are noisy, the DFL algorithm shows a more competitive performance than the existing methods in [12,16] without losing its efficiency, especially when the sample size is small.

2. Background and Theoretical Foundation

We first introduce the notation. We use capital letters to represent random variables, such as X and Y; lower case letters to represent an instance of a random variable, such as x and y; bold capital letters, like X, to represent a set of variables; and lower case bold letters, like x, to represent an instance of X. P̂ and Î denote the estimates of the probability P and the mutual information (MI) I obtained from the training samples, respectively. In Boolean functions, we use “∨”, “∧”, “¬”, “⊕” and “≡” to represent logical OR, AND, INVERT (also named NOT or SWITCH), exclusive OR (XOR) and Boolean equality (the inversion of exclusive OR, also XNOR), respectively. For simplicity, we write P(X = x) as p(x), P(Y = y) as p(y), and so on. We use log to stand for log_2 where its meaning is clear. We use X^{(i)} to denote the ith selected input variable of the function to be learned.
In this section, we first introduce BN as a model of GRN. Then, we cover some preliminary knowledge of information theory. Next, we introduce the theoretical foundation of our method. The formal definition of the problem of learning BNs and the quantity of data sets are discussed in the following.

2.1. BN as a Model of GRN

In qualitative models of GRNs, the genes are represented by a set of discrete variables, V = {X_1, …, X_n}. In a GRN, the expression level of a gene, X, at time step t + 1 is controlled by the expression levels, at time step t, of its regulatory genes, which encode the regulators of the gene X. Hence, in qualitative models of GRNs, the genes at the same time step are assumed to be independent of each other, which is a standard assumption in learning GRNs, as adopted by [5,6,8,12,13,21]. Formally, for all 1 ≤ i, j ≤ n, X_i(t) and X_j(t) are independent. The regulatory relationships between the genes are expressed by discrete functions of the related variables. Formally, a GRN G(V, F) with indegree k (the number of inputs) consists of a set, V = {X_1, …, X_n}, of nodes representing genes and a set, F = {f_1, …, f_n}, of discrete functions, where a discrete function, f_i(X_{i_1}, …, X_{i_k}), with inputs from specified nodes, X_{i_1}, …, X_{i_k}, at time step t is assigned to the node X_i to calculate its value at time step t + 1, as shown in the following equation:
X_i(t + 1) = f_i(X_{i_1}(t), …, X_{i_k}(t))        (1)
where 1 ≤ i ≤ n and X_{i_j}(t), j = 1, …, k, denotes the jth selected input variable of the function related to X_i(t + 1). We call the inputs of f_i the parent nodes of X_i(t + 1), and let Pa(X_i(t + 1)) = {X_{i_1}(t), …, X_{i_k}(t)}.
The state of the GRN is expressed by the state vector of its nodes. We use v(t) = (x_1(t), …, x_n(t)) to represent the state of the GRN at time t and v(t + 1) = (x_1(t + 1), …, x_n(t + 1)) to represent the state of the GRN at time t + 1. v(t + 1) is calculated from v(t) with F. A state transition pair is (v(t), v(t + 1)). Hereafter, we use X_i to represent X_i(t + 1), Pa(X_i) to represent Pa(X_i(t + 1)), V to represent V(t + 1), and so on.
When the f_i's in Equation (1) are Boolean functions, i.e., f_i: {0, 1}^k → {0, 1}, G(V, F) is a BN model [4,5,21]. When using BNs to model GRNs, genes are represented by binary variables with two values: ON (1) and OFF (0).
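To make the update rule in Equation (1) concrete, the following minimal Python sketch (ours, not part of the paper or its DFLearner software; the three-gene network is hypothetical) encodes a BN as a list of (parent indices, Boolean function) pairs and computes one synchronous transition v(t) → v(t + 1).

# Minimal sketch (not from the paper): a BN as a list of (parent indices, Boolean
# function) pairs over V = {X1, ..., Xn}, with one synchronous update v(t) -> v(t+1).
from typing import Callable, List, Sequence, Tuple

Node = Tuple[Sequence[int], Callable[..., int]]  # (indices of parent nodes, f_i)

def step(v: Sequence[int], network: List[Node]) -> List[int]:
    """Compute v(t+1): every node applies its Boolean function to its parents at time t."""
    return [f(*(v[j] for j in parents)) for parents, f in network]

# A hypothetical 3-gene example: X1' = X2 OR X3, X2' = X1 AND X3, X3' = NOT X1.
network = [
    ((1, 2), lambda x2, x3: x2 | x3),
    ((0, 2), lambda x1, x3: x1 & x3),
    ((0,),   lambda x1: 1 - x1),
]
print(step([1, 0, 1], network))  # v(t) = (1, 0, 1) -> v(t+1) = [1, 1, 0]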

2.2. Preliminary Knowledge of Information Theory

The entropy of a discrete random variable, X, is defined in terms of the probability of observing a particular value, x, of X as [33]:
H(X) = −Σ_x P(X = x)·log P(X = x)        (2)
The entropy is used to describe the diversity of a variable or vector. The more diverse a variable or vector is, the larger entropy it has. Generally, vectors are more diverse than individual variables; hence, they have larger entropy. The MI between a vector, X, and Y is defined as [33]:
I(X; Y) = H(Y) − H(Y|X) = H(X) − H(X|Y) = H(X) + H(Y) − H(X, Y)        (3)
MI is always non-negative and can be used to measure the relation between two variables, a variable and a vector (Equation (3)) or two vectors. Basically, the stronger the relation between two variables is, the larger MI they have. Zero MI means that the two variables are independent or have no relation. Formally:
Theorem 1 
([34](p. 27)) For any discrete random variables, Y and Z, I(Y; Z) ≥ 0. Moreover, I(Y; Z) = 0 if and only if Y and Z are independent.
The conditional MI, I ( X ; Y | Z ) [34](the MI between X and Y given Z), is defined by:
I(X; Y|Z) = Σ_{x,y,z} p(x, y, z)·log [p(x, y|z) / (p(x|z)·p(y|z))]
Immediately from Theorem 1, the following corollary is also correct.
Corollary 1 
([34](p. 27)) I(Y; Z|X) ≥ 0, with equality if and only if Z and Y are conditionally independent given X.
Here, conditional independence is introduced in [35].
The conditional entropy and entropy are related to Theorem 2.
Theorem 2 
([34](p. 27)) H(X|Y) ≤ H(X), with equality if and only if X and Y are independent.
The chain rule for MI is given by Theorem 3.
Theorem 3 
([34](p. 22))
I(X_1, X_2, …, X_n; Y) = Σ_{i=1}^{n} I(X_i; Y | X_{i−1}, X_{i−2}, …, X_1)
In a function, we have the following theorem.
Theorem 4 
([36](p. 37)) If Y = f ( X ) , then I ( X ; Y ) = H ( Y ) .

2.3. Theoretical Foundation of The DFL Algorithm

Next, we introduce Theorem 5, which is the theoretical foundation of our algorithm.
Theorem 5 
([34](p. 43)) If the mutual information between X and Y is equal to the entropy of Y, i.e., I ( X ; Y ) = H ( Y ) , then Y is a function of X.
In the context of inferring BNs from state transition pairs, we let X = {X^{(1)}, …, X^{(k)}} with X^{(j)} ∈ V (j = 1, …, k), and Y ∈ V. The entropy, H(Y), represents the diversity of the variable Y. The MI, I(X; Y), represents the relation between the vector X and Y. From this point of view, Theorem 5 actually says that the relation between X and Y is so strong that there is no remaining diversity in Y once X is known. In other words, the value of X fully determines the value of Y.
In practice, the empirical probability (or frequency), the empirical entropy and MI estimated from a sample may be biased, since the number of experiments is limited. Thus, let us restate Borel’s law of large numbers about the empirical probability, p ^ ( x ) .
Theorem 6 
(Borel’s Law of Large Numbers)
lim_{N→∞} p̂(x) = lim_{N→∞} N_x/N = p(x)
where N x is the number of instances in which X = x .
From Theorem 6, it is known that, given enough samples, p(x), as well as H(X) and I(X; Y), can be correctly estimated, which ensures the successful use of Theorem 5 in practice.
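The following short Python sketch (ours, not the paper's DFLearner implementation; the estimator names are illustrative) shows how the plug-in estimates of entropy and MI are computed from samples, and how the Theorem 5 criterion I(X; Y) = H(Y) emerges for Y = X_1 ∨ X_2 when enough uniform samples are available.

# Illustrative sketch (not the paper's code): plug-in estimates of H and I from
# samples, and the Theorem 5 check I(X;Y) = H(Y) for Y = X1 OR X2.
from collections import Counter
from math import log2
import random

def entropy(samples):
    """Empirical entropy (in bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

random.seed(0)
rows = [tuple(random.randint(0, 1) for _ in range(2)) for _ in range(2000)]
x = rows                      # the candidate input vector X = (X1, X2)
y = [a | b for a, b in rows]  # the output Y = X1 OR X2
print(mutual_information(x, y), entropy(y))  # both close to H(Y), about 0.81 bits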
We discuss the probabilistic relationship between X, Y and another vector, Z V \ X .
Theorem 7 
(Zheng and Kwoh, [37]) If I(X; Y) = H(Y), X = {X^{(1)}, …, X^{(k)}}, then ∀ Z ⊆ V \ X, Y and Z are conditionally independent given X.
Immediately from Theorem 7 and Corollary 1, we have Corollary 2.
Corollary 2 
If I(X; Y) = H(Y), X = {X^{(1)}, …, X^{(k)}}, and Z ⊆ V \ X, then I(Z; Y|X) = 0.
Theorem 7 says that if there is a subset of features, X, that satisfies I ( X ; Y ) = H ( Y ) , the remaining variables in V do not provide additional information about Y, once we know X. If Z and X are independent, we can further have Theorem 8, whose proof is given in the Appendix.
Theorem 8 
If I ( X ; Y ) = H ( Y ) , and X and Z are independent, then I ( Z ; Y ) = 0 .
In the context of BNs, remember that, ∀ 1 ≤ i, j ≤ n, X_i(t) and X_j(t) in V are independent. Thus, ∀ Z ⊆ V \ Pa(X_i), Z and Pa(X_i) are independent.

2.4. Problem Definition

Technological development has made it possible to obtain time series gene expression profiles with microarray [38] and high-throughput sequencing (or RNA-seq) [39]. The time series gene expression profiles can thus be organized as input-output transition pairs whose inputs and outputs are the expression profile of time t and t + 1 , respectively [6,13]. The problem of inferring the BN model of the GRN from input-output transition pairs is defined as follows.
Definition 1 
(Inference of the BN) Let V = {X_1, …, X_n}. Given a transition table, T = {(v_j, v_j′)}, where j goes from one to a constant, N, find a set of Boolean functions F = {f_1, f_2, …, f_n}, so that X_i(t + 1) is calculated from f_i as follows:
X_i(t + 1) = f_i(X_{i_1}(t), …, X_{i_k}(t))
Akutsu et al. [6] stated another set of problems for inferring BNs. The CONSISTENCY problem defined in [6] is to determine whether or not there exists a Boolean network consistent with the given examples and an output of one if it exists. Therefore, the CONSISTENCY problem is the same as the one in Definition 1.

2.5. Data Quantity

Akutsu et al. [5] proved that Ω(2^k + k·log_2 n) transition pairs are the theoretical lower bound for learning a BN. Formally:
Theorem 9 
(Akutsu et al. [5]) Ω(2^k + k·log_2 n) transition pairs are necessary in the worst case to identify a Boolean network of maximum indegree k.
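As a concrete illustration of this bound (our own arithmetic, not a result from the paper): for k = 3 and n = 1000, 2^k + k·log_2 n = 8 + 3·log_2 1000 ≈ 8 + 29.9 ≈ 38, so a few dozen noiseless transition pairs are already necessary, whereas for k = 10 the 2^k term alone exceeds 1000 samples.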

3. Methods

In this section, we briefly reintroduce the DFL algorithm, which can efficiently find the target subsets among the Σ_{i=1}^{k} C(n, i) subsets of V with at most k variables. The detailed steps, the complexity analysis and the correctness of the DFL algorithm are presented in our earlier work [21,37,40].

3.1. A Brief Introduction of the DFL Algorithm

Based on Theorem 5, the motivation of the DFL algorithm is to find a subset of features that satisfies I ( U ; Y ) = H ( Y ) . To efficiently fulfill this purpose, the DFL algorithm employs a special searching strategy to find the expected subset. In the first round of its searching, the DFL algorithm uses a greedy strategy to incrementally identify real relevant feature subsets, U, by maximizing I ( U ; Y ) [37]. However, in some special cases, such as the exclusive OR functions, individual real input features have zero MI with the output feature, although for a vector, X, with all real input variables, I ( X ; Y ) = H ( Y ) is still correct in these special cases. Thus, in these cases, after the first k features, which are not necessarily the true inputs, are selected, a simple greedy search fails to identify the correct inputs with a high probability. Therefore, the DFL algorithm continues its searching until it checks all subsets with ≤ k variables after the first round of greedy searching. This strategy guarantees the correctness of the DFL algorithm even if the input variables have special relations, such as exclusive OR.
The main steps of the DFL algorithm are listed in Table 2 and Table 3. The DFL algorithm has two parameters, the expected cardinality, k, and the ϵ value. The k is the expected maximum number of inputs in the GRN model. The DFL algorithm uses k to avoid an exhaustive search over all subsets of attributes by checking only those subsets with at most k variables. The ϵ value is discussed below and has been introduced in detail in our earlier work [21,37,40].
Table 2. The Discrete Function Learning (DFL) algorithm.
Algorithm: DFL(V, k, T)
Input: V with n genes, indegree k, T = {(V(t), V(t + 1))}, t = 1, …, N.
Output: F = {f1, f2, …, fn}
Begin:
1    L ← all single-element subsets of V;
2    ΔTree.FirstNode ← L;
3    for every gene Y ∈ V {
4        calculate H(Y′);  //from T
5        D ← 1;  //initial depth
6 *      F.Add(Sub(Y, ΔTree, H(Y′), D, k));
     }
7    return F;
End
* The Sub() is a sub-routine listed in Table 3.
From Theorem 5, if an input subset, U V , satisfies I ( U ; Y ) = H ( Y ) , then Y is a deterministic function of U, which means that U is a complete and optimal input subset. U is called essential attributes (EAs), because U essentially determines the value of Y [41].
In real life, datasets are often noisy. The noise changes the distributions of X and Y; therefore, H(X), H(X, Y) and H(Y) change, and the equality I(X; Y) = H(Y) in Theorem 5 is disrupted. In these cases, we have to relax the requirement to obtain the best estimated result. Therefore, ϵ is defined as a significance factor to control the difference between I(X; Y) and H(Y). Precisely, if a subset, U, satisfies H(Y) − I(U; Y) ≤ ϵ·H(Y), then the DFL algorithm stops the search process and uses U as the best estimated subset. Accordingly, line 4 of Table 3 should be modified.
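A minimal sketch of this relaxed test (illustrative code, not the paper's API) shows the modified form of the check on line 4 of Table 3:

# Sketch of the epsilon-relaxed stopping test described above (illustrative names).
def accept_subset(i_uy: float, h_y: float, epsilon: float) -> bool:
    """Accept U as the estimated parent set once H(Y) - I(U;Y) <= epsilon * H(Y)."""
    return h_y - i_uy <= epsilon * h_y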
Table 3. The sub-routine of the DFL algorithm.
Algorithm: Sub(Y, ΔTree, H, D, k)
Input: Y, ΔTree, entropy H(Y),
    current depth D, indegree k
Output: function Y = f(X)
Begin:
1    L ← ΔTree.DthNode;
2    for every element X ∈ L {
3        calculate I(X; Y);
4        if (I(X; Y) == H) {
5 *          extract Y = f(X) from T;
6            return Y = f(X);
         }
     }
7    sort L according to I;
8    for every element X ∈ L {
9        if (D < k) {
10           D ← D + 1;
11           ΔTree.DthNode ← Δ1(X);
12           return Sub(Y, ΔTree, H, D, k);
         }
     }
13   return “Fail(Y)”;
End
* By deleting unrelated variables and duplicate rows in T.
When choosing candidate inputs, our approach maximizes the MI between the input subsets, X, and the output attribute, Y. Suppose that U s 1 has already been selected at step s 1 , and the DFL algorithm tries to add a new input X i V \ U s 1 to U s 1 . Specifically, our method uses Equation (4) as a criterion to add new features to U.
X^{(1)} = arg max_i I(X_i; Y), i = 1, …, n
X^{(s)} = arg max_i I({U_{s−1}, X_i}; Y)        (4)
where ∀ s, 1 < s ≤ k, U_1 = {X^{(1)}}, and U_s = U_{s−1} ∪ {X^{(s)}}.
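The following Python sketch (our reading of Equation (4) together with the Theorem 5 stopping rule, not the DFLearner code; the entropy and MI estimators are the same simple plug-in versions as in the earlier sketch and are repeated here so the block is self-contained) implements the greedy first round of the search. Here data[i] holds the sample column of X_i at time t, and y holds the samples of the target variable at time t + 1.

# Sketch of the greedy first round of the DFL search (Equation (4) + relaxed Theorem 5).
from collections import Counter
from math import log2
from typing import List, Sequence

def entropy(samples) -> float:
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def mi(columns: Sequence[Sequence[int]], y: Sequence[int]) -> float:
    joint_x = list(zip(*columns))
    return entropy(joint_x) + entropy(y) - entropy(list(zip(joint_x, y)))

def greedy_select(data: List[Sequence[int]], y: Sequence[int], k: int,
                  epsilon: float = 0.0) -> List[int]:
    h_y = entropy(y)
    selected: List[int] = []
    while len(selected) < k:
        remaining = [i for i in range(len(data)) if i not in selected]
        # Equation (4): add the variable that maximizes I({U, Xi}; Y).
        best = max(remaining, key=lambda i: mi([data[j] for j in selected + [i]], y))
        selected.append(best)
        # Theorem 5 criterion, relaxed by epsilon (small tolerance for float rounding).
        if h_y - mi([data[j] for j in selected], y) <= epsilon * h_y + 1e-12:
            return selected
    return selected  # first round failed; the full DFL algorithm keeps searching

On noiseless samples of, e.g., C = A ∧ C ∧ D from Table 4, this first round would typically return the indices of A, C and D; for XOR-type functions it can fail, which is exactly why the full DFL algorithm then continues to check further subsets with at most k variables.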
To illustrate the search strategy used by the DFL algorithm, let us consider a BN consisting of four genes, as shown in Table 4. The set of all genes is V = {A, B, C, D}. The search procedure of the DFL algorithm for finding the Boolean function of C in the example of Table 4 is shown in Figure 1. The DFL algorithm uses a data structure called ΔTree in its calculation [37]. For instance, the ΔTree when the DFL algorithm is learning C is shown in Figure 2. As shown in Figure 1, the DFL algorithm searches the first layer, L_1, and then sorts all subsets on L_1 according to their MI with C. From Theorem 8, A, C and D have larger MI with C than B has. Consequently, the DFL algorithm finds that {A} shares the largest MI with C among the subsets on L_1, as shown in Figure 2a. Next, one additional variable is added to the selected A. Similarly to L_1, the DFL algorithm finds that {A, D} shares the largest MI with C on L_2, as shown in Figure 2b. Then, the DFL algorithm adds one more variable to the selected {A, D}. Finally, the DFL algorithm finds that the subset {A, C, D} satisfies the requirement of Theorem 5, as shown in Figure 2c, and constructs the function, f, for C with these three attributes. First, B is deleted from the training dataset, since it is a non-essential attribute. Then, the duplicate rows of ((A, C, D), C) are removed from the training dataset to obtain the final function, f, as the truth table of C = A ∧ C ∧ D along with the counts for each instance of ((A, C, D), C). This is why we name our algorithm the Discrete Function Learning algorithm.
Table 4. Boolean functions, F, of the example.
Gene | Rule
A | A′ = B
B | B′ = A ∧ C
C | C′ = A ∧ C ∧ D
D | D′ = (A ∧ B) ∨ (C ∧ D)
Figure 1. Search procedure of the DFL algorithm when learning C = A ∧ C ∧ D. {A, C, D}* is the target combination. The combinations with a black dot under them are the subsets that share the largest mutual information (MI) with C on their layers. First, the DFL algorithm searches the first layer, L_1, and finds that {A}, with a black dot under it, shares the largest MI with C among the subsets on the first layer. Then, it continues to search Δ_1(A) (subsets with A and one other variable) on the second layer, L_2. These calculations continue until the target combination, {A, C, D}, is found on the third layer, L_3.
Figure 2. The ΔTree when searching for the Boolean function of C = A ∧ C ∧ D: (a) after searching the first layer of V, but before the sort step; (b) when searching the second layer of V ({A}, {C} and {D}, which are included in Pa(C), are listed before {B} after the sort step); (c) when searching the third layer; {A, C, D}* is the target combination. Similar to part (b), {A, C} and {A, D} are listed before {A, B}. When checking the combination {A, C, D}, the DFL algorithm finds that {A, C, D} is the complete parent set for C, since {A, C, D} satisfies the criterion of Theorem 5.

3.2. Correctness Analysis

We have proven the following theorem [37].
Theorem 10 
(Zheng and Kwoh, [37]) Let V = {X_1, …, X_n}. The DFL algorithm can find a consistent function, Y = f(U), of maximum indegree k in O((N + log n)·n^k) time in the worst case from T = {(v_i, y_i): i = 1, 2, …, N}.
The word “consistent” means that the function Y = f(U) is consistent with the learning samples, i.e., ∀ u_i, f(u_i) = y_i. Clearly, the original generating function of a synthetic dataset is a consistent function for that dataset.
From Theorem 5, solving the problem in Definition 1 amounts to finding, for each X_i, a group of genes X(t) = {X_{i_1}(t), …, X_{i_k}(t)}, such that the MI between X(t) and X_i is equal to the entropy of X_i. Because n functions have to be learned in Definition 1, the total time complexity is O((N + log n)·n^{k+1}) in the worst case, based on Theorem 10.

4. The Time Complexity for Learning Some Special BNs

In this section, we first analyze the MI between variables in some special BNs, where the variables are related with the logical OR (AND) operations. Then, we propose theorems about the complexities of the DFL algorithm for learning these BNs. The proofs of the theorems in this section are given in the Appendix.
It should be mentioned that, because the real functional relations between the variables are unknown a priori, it is infeasible to design an algorithm specifically for one kind of special BN. Although some special BNs are discussed in this section, the DFL algorithm can be used to learn all BNs without knowing the real functional relations between the variables in advance.

4.1. The MI in OR BNs

Formally, we define the OR BN as follows.
Definition 2 
The OR BN of a set of binary variables V = {X_1, …, X_n} is, ∀ X_i:
X_i = X_{i_1}(t) ∨ ⋯ ∨ X_{i_k}(t)        (5)
where the “∨” is the logical OR operation.
We have Theorem 11 to compute the I ( X i j ; X i ) in OR BNs.
Theorem 11 
In an OR BN with an indegree of k over V, the mutual information between X^{(j)} ∈ Pa(X_i) = {X_{i_1}, …, X_{i_k}} and X_i is:
I(X^{(j)}; X_i) = 1/2 − [(2^k − 1)/2^k]·log[(2^k − 1)/2^k] + [(2^{k−1} − 1)/2^k]·log[(2^{k−1} − 1)/2^k]        (6)
From Equation (6), we see that I(X^{(j)}; X_i) is strictly positive and tends to zero as k → ∞, as shown in Figure 3a. Intuitively, when k increases, there are more “1” values in the X_i column of the truth table, while there is only one “0”, whatever the value of k. That is to say, X_i tends to take the value “1” with higher probability, or there is less uncertainty in X_i when k increases, which causes H(X_i) to decrease. From Theorem 4, H(X_i) = I(Pa(X_i); X_i); thus, I(Pa(X_i); X_i) also decreases. Therefore, each X_{i_j} shares less information with X_i when k increases.
Figure 3. MI in the OR function X_i = X_{i_1} ∨ ⋯ ∨ X_{i_k}, where the unit of MI (vertical axis) is a bit. (a) I(X^{(j)}; X_i) as a function of k, X^{(j)} ∈ Pa(X_i); (b) I({X^{(1)}, …, X^{(p)}}; X_i) as a function of p, where k = 6, 10, and p goes from 1 to k.
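Equation (6) can also be checked by brute force. The short Python sketch below (our own verification, assuming uniform inputs) enumerates the truth table of a k-input OR function and compares the empirical I(X^{(j)}; X_i) with the closed form; for k = 2, 3 and 4, both give about 0.311, 0.138 and 0.066 bits, respectively.

# Brute-force check of Equation (6) under uniform inputs (our verification).
from itertools import product
from math import log2

def mi_single_input_or(k: int) -> float:
    rows = [(x[0], int(any(x))) for x in product((0, 1), repeat=k)]  # (X(j), Xi)
    def h(vals):
        n = len(vals)
        return -sum((vals.count(v) / n) * log2(vals.count(v) / n) for v in set(vals))
    xj, xi = [r[0] for r in rows], [r[1] for r in rows]
    return h(xj) + h(xi) - h(rows)

def formula(k: int) -> float:  # the closed form of Equation (6)
    return (0.5 - (2**k - 1) / 2**k * log2((2**k - 1) / 2**k)
            + (2**(k - 1) - 1) / 2**k * log2((2**(k - 1) - 1) / 2**k))

for k in (2, 3, 4):
    print(k, round(mi_single_input_or(k), 4), round(formula(k), 4))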
Similar to Theorem 11, we have Theorem 12 for computing I ( { X ( 1 ) , , X ( p ) } ; X i ) , where X ( j ) Pa ( X i ) , j = 1 , , p . From Theorem 12, we further have Theorem 13.
Theorem 12 
In an OR BN with an indegree of k over V, ∀ 1 ≤ p ≤ k, ∀ X^{(1)}, X^{(2)}, …, X^{(p)} ∈ Pa(X_i), the mutual information between {X^{(1)}, X^{(2)}, …, X^{(p)}} and X_i is:
I({X^{(1)}, …, X^{(p)}}; X_i) = p/2^p − [(2^k − 1)/2^k]·log[(2^k − 1)/2^k] + [(2^{k−p} − 1)/2^k]·log[(2^{k−p} − 1)/2^k]
Theorem 13 
In an OR BN with an indegree of k over V, ∀ 2 ≤ p ≤ k, I({X^{(1)}, …, X^{(p)}}; X_i) > I({X^{(1)}, …, X^{(p−1)}}; X_i).
From Theorem 13, it is known that when variables from Pa ( X i ) are added to the candidate parent set, U, for X i , the MI, I ( U ; X i ) , increases, which is also shown in Figure 3b.

4.2. The Complexity Analysis for the Bounded OR BNs

Theorems 11 to 13 give theoretical values. When learning BNs from a training dataset with limited samples, the estimation of MI is affected by how the samples are obtained. Here, we assume that the samples are generated randomly from a uniform distribution. According to Theorem 9, Ω(2^k + k·log_2 n) transition pairs are necessary to successfully identify BNs with ≤ k inputs. Then, based on Theorem 6, if enough samples are provided, the distributions of Pa(X_i) and X_i tend to those in the truth table of X_i = f(Pa(X_i)). We then obtain the following theorem.
Theorem 14 
For sufficiently large N (N = Ω(2^k + k·log_2 n)), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify an OR BN with an indegree of k in O(k·(N + log n)·n^2) time, strictly.
Enough learning samples are required in Theorem 14. If the sample size is small, it is possible that variables in Pa(X_i) do not share larger MI with X_i than variables in V \ Pa(X_i) do. As will be discussed in the Discussion section and shown in Figure 4, when N = 20, Î(X_7; X_i) > Î(X_j; X_i), j = 1, 2, 3. However, Pa(X_i) = {X_1, X_2, X_3} in this example, so it takes more steps before the DFL algorithm finally finds the target subset {X_1, X_2, X_3}. Therefore, the complexity of the DFL algorithm becomes worse than O(k·(N + log n)·n^2) if N is too small, as will be shown in Figure 9.
If some variables take their inverted values in the OR BNs, these kinds of BNs are defined as generalized OR BNs.
Definition 3 
The generalized OR BN of a set of binary variables V = {X_1, …, X_n} is, ∀ X_i:
X_i = X_{i_1}(t) ∨ ⋯ ∨ X_{i_k}(t)
where “∨” is the logical OR operation; the X_{i_j}'s can also take their inverted values.
For generalized OR BNs, the DFL algorithm also maintains its time complexity of O ( k · ( N + log n ) · n 2 ) .
Corollary 3 
For sufficiently large N (N = Ω(2^k + k·log_2 n)), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify a generalized OR BN with an indegree of k in O(k·(N + log n)·n^2) time, strictly.
Figure 4. The estimated Î(X_j; X_i) for OR BNs with 10 variables, where X_i = X_1 ∨ X_2 ∨ X_3, on different datasets. The unit is a bit. The curve marked with circles is computed from the truth table of X_i = X_1 ∨ X_2 ∨ X_3, so it is the ideal case, or the Golden Rule. The curves marked with diamonds, squares, triangles and stars represent the values obtained from datasets of N = 1000, N = 100, N = 20 and N = 100 with 10% noise, respectively.
In a binary system, there are only two values for variables. If we replace the zero in the OR BN truth table with one and vice versa, the resulting BN has opposite probabilities of one and zero to those of the original OR BN, respectively. It is easy to show that such a BN is an AND BN defined in the following.
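Concretely (a standard De Morgan identity, added here for clarity): complementing every input and output bit turns the OR function into f′(x_1, …, x_k) = ¬f(¬x_1, …, ¬x_k) = ¬(¬x_1 ∨ ⋯ ∨ ¬x_k) = x_1 ∧ ⋯ ∧ x_k, which is exactly the AND function of Definition 4.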
Definition 4 
The AND BN of a set of binary variables V = {X_1, …, X_n} is, ∀ X_i:
X_i = X_{i_1}(t) ∧ ⋯ ∧ X_{i_k}(t)
where “∧” is the logical AND operation.
From Theorem 14, it is straightforward to obtain the following corollary.
Corollary 4 
For sufficiently large N (N = Ω(2^k + k·log_2 n)), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify an AND BN with an indegree of k in O(k·(N + log n)·n^2) time, strictly.
Similarly to generalized OR BN, we define generalized AND BN in the following and obtain Corollary 5.
Definition 5 
The generalized AND BN of a set of binary variables V = {X_1, …, X_n} is, ∀ X_i:
X_i = X_{i_1}(t) ∧ ⋯ ∧ X_{i_k}(t)
where “∧” is the logical AND operation; the X_{i_j}'s can also take their inverted values.
Corollary 5 
For sufficiently large N (N = Ω(2^k + k·log_2 n)), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify a generalized AND BN with an indegree of k in O(k·(N + log n)·n^2) time, strictly.

4.3. The Complexity Analysis For Unbounded OR BNs

From Theorem 14, we see that the complexity of the DFL algorithm becomes O((N + log n)·n^3) if k becomes n.
However, according to Theorem 9, Ω(2^n + n·log_2 n) samples are needed to successfully find a BN of indegree n. In other words, the exponential number of samples required makes the complexity of the DFL algorithm very high in this case, even if enough samples are provided. Therefore, it remains an open problem whether there are efficient algorithms for inferring AND/OR functions with unbounded indegrees, as proposed by Akutsu et al. [8].
However, if the indegree of the OR/AND BN is undetermined, but known to be much smaller than n, the DFL algorithm is still efficient. Biological knowledge supports this situation, because in real gene regulatory networks, each gene is regulated by a limited number of other genes [42]. In these cases, the expected cardinality, k, can be assigned as n, and the DFL algorithm can automatically find how many variables are sufficient for each X i . From Theorem 14, the DFL algorithm still has the complexity of O ( k · ( N + log n ) · n 2 ) for the OR/AND BN, given enough samples.

5. Results

We implement the DFL algorithm with the Java programming language (version 1.6). The implementation software, called DFLearner, is available for non-commercial purposes upon request.
In this section, we first introduce the synthetic datasets of BN models that we use. Then, we perform experiments on various datasets to validate the efficiency of the DFL algorithm. Next, we carry out experiments on small datasets to examine the sensitivity of the DFL algorithm. The sensitivity is the number of correctly identified true input variables divided by the total number of true input variables [40]. The specificity is the number of correctly identified non-input variables divided by the total number of non-input variables. We then compare the DFL algorithm with two existing methods in the literature under the same settings. The performance of the DFL algorithm on noisy datasets was also evaluated in our earlier work [40].
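The two measures can be stated as a short Python sketch (illustrative, not the paper's evaluation code), evaluated for one node against its true parent set:

# Illustrative sketch of the sensitivity and specificity defined above.
def sensitivity_specificity(true_parents: set, found_parents: set, all_inputs: set):
    non_parents = all_inputs - true_parents
    tp = len(true_parents & found_parents)   # true inputs that were identified
    tn = len(non_parents - found_parents)    # non-inputs that were correctly excluded
    sensitivity = tp / len(true_parents) if true_parents else 1.0
    specificity = tn / len(non_parents) if non_parents else 1.0
    return sensitivity, specificity

# e.g. true parents {0, 1, 2} among 10 candidates, recovered {0, 1, 7}:
print(sensitivity_specificity({0, 1, 2}, {0, 1, 7}, set(range(10))))  # (0.667, 0.857)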

5.1. Synthetic Datasets of BNs

We present the synthetic datasets of BNs in this section. For a BN consisting of n genes, the total state space is 2^n. The v of a transition pair is randomly chosen from the 2^n possible instances of V with the discrete uniform distribution, i.e., p(i) = 1/2^n, where i is a randomly chosen value from zero to 2^n − 1, inclusive. Since the DFL algorithm examines the different subsets in the kth layer in lexicographic order (see Figure 1), the run time of the DFL algorithm may be affected by the positions of the target subsets in the kth layer. Therefore, we select the first and the last k variables in V as the inputs for all X_i. The datasets generated from the first k and last k variables are named “head” and “tail” datasets, respectively. If the k inputs are randomly chosen from the n variables, the datasets are named “random” datasets. There are 2^{2^k} different Boolean functions when the indegree is k. We use the OR function (OR), the AND function (AND) or one of the Boolean functions randomly selected from the 2^{2^k} possible functions (RANDOM) to generate the v′, i.e., f_1 = f_2 = ⋯ = f_n. If two datasets are generated by the OR functions defined with the first and last k variables, then we name them OR-head and OR-tail datasets (briefly, OR-h and OR-t), respectively, and so on. The Boolean function used to generate a dataset is called its generating Boolean function or, briefly, its generating function. The noisy samples are generated by flipping their output values. The program used to generate our synthetic datasets has been implemented in the Java programming language and is included in the DFLearner package.
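A simplified Python sketch of this data generation (our reading of the description above, not the DFLearner generator) is shown below for an OR-head dataset: uniform random states v(t), outputs from an OR generating function on the first k variables, and optional noise that flips each output bit with a given probability.

# Sketch of synthetic OR-head dataset generation, as described above (simplified).
import random

def make_or_head_dataset(n: int, k: int, N: int, noise: float = 0.0, seed: int = 0):
    rng = random.Random(seed)
    pairs = []
    for _ in range(N):
        v = [rng.randint(0, 1) for _ in range(n)]        # v(t) ~ uniform on {0,1}^n
        v_next = [int(any(v[:k])) for _ in range(n)]     # every Xi' = X1 OR ... OR Xk
        if noise > 0:
            v_next = [b ^ (rng.random() < noise) for b in v_next]  # flip outputs
        pairs.append((v, v_next))
    return pairs

dataset = make_or_head_dataset(n=10, k=3, N=100, noise=0.1)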

5.2. Experiments for Time Complexities

As introduced in Theorem 14, the complexity of the DFL algorithm is O ( k · ( N + log n ) · n 2 ) for the OR/AND BNs given enough noiseless random samples from the uniform distribution. We first perform experiments for the OR/AND BNs to further validate our analysis. Then, we perform experiments for the general BNs to examine the complexity of the DFL algorithm for them.

5.2.1. Complexities for Bounded OR/AND BNs

In all experiments in this section, the training datasets are noiseless and the expected cardinality, k, and ϵ of the DFL algorithm are set to k of the generating functions and zero, respectively. In this study, we perform three types of experiments to investigate the effects of k, n and N in O ( k · ( N + log n ) · n 2 ) , respectively. In each type of experiment, we change only one of k, n or N and keep the other two unchanged. We generate 20 OR and 20 AND datasets for each k, N and n, respectively, i.e., 10 OR-h, 10 OR-t, 10 AND-h and 10 AND-t datasets. Then, we use the average value of these 20 datasets as the run time for this k, N and n value, respectively.
Figure 5. The run times, t (vertical axes, shown in seconds), of the DFL algorithm for inferring the bounded Boolean networks (BNs). The values shown are the average of 20 noiseless datasets. The curves marked with circles and diamonds are for OR and AND datasets, respectively. (a) The run time vs. k, when n = 1000 and N = 600; (b) The run time vs. N, when n = 1000 and k = 3; (c) The run time vs. n, when k = 3 and N = 200.
First, we perform the experiments for various k, when n = 1000 and N = 600. The run times are shown in Figure 5a. Then, we perform the experiments for various N, when n = 1000 and k = 3. The run times of these experiments are shown in Figure 5b. Finally, we perform the experiments for various n, when k = 3 and N = 200. The run times are shown in Figure 5c. In all experiments for various k, N and n, the DFL algorithm successfully finds the original BNs.
As shown in Figure 5, the DFL algorithm uses almost the same time to learn the OR and AND BNs. As stated in Theorem 14 and Corollary 4, the DFL algorithm has the same complexity of O(k·(N + log n)·n^2) for learning the OR and AND BNs. The run time of the DFL algorithm grows slightly faster than linearly with k, as shown in Figure 5a. This is because the computation of entropy and MI takes more time when k increases. As shown in Figure 5b, the run time of the DFL algorithm grows linearly with N. As shown in Figure 5c, the run time of the DFL algorithm grows roughly quadratically with n. A BN with 6,000 genes can be correctly identified in a modest number of hours.

5.2.2. Complexities for Unbounded OR/AND BNs

To examine the complexity of the DFL algorithm for unbounded OR/AND BNs, we generate 20 OR datasets (10 OR-h and 10 OR-t) with N = 10,000 and n = 10, where k is chosen from 2 to 10. Then, we apply the DFL algorithm to these datasets. In all experiments in this section, the expected cardinality, k, and the ϵ of the DFL algorithm are set to 10 and zero, respectively, since it is assumed that the DFL algorithm does not know the indegrees of the BNs a priori.
The DFL algorithm successfully finds the original BNs for all datasets. The average run times of the DFL algorithm on these datasets are shown in Figure 6a. The numbers of subsets checked by the DFL algorithm when learning one OR Boolean function are shown in Figure 6b. As shown in Figure 6a, the run time of the DFL algorithm grows linearly with k for the unbounded OR datasets. In Figure 6b, the number of subsets checked by the DFL algorithm is exactly Σ_{i=0}^{k−1} (n − i) for the unbounded OR datasets. In other words, the complexity of the DFL algorithm is O((N + log n)·n^3) in the worst case for learning unbounded OR BNs, given enough samples. The DFL algorithm checks slightly more subsets for the OR-t datasets than for the OR-h datasets when k < 10. This is because the DFL algorithm examines the different subsets in the kth layer in lexicographic order.
Figure 6. The efficiency of the DFL algorithm for the unbounded OR datasets. The values shown are the average of 20 OR noiseless datasets. (a) The run time, t (vertical axis, shown in seconds), of the DFL algorithm to infer the unbounded BNs; (b) The number of the subsets checked by the DFL algorithm for learning one OR Boolean function. The curves marked with circles and diamonds are for OR-h and OR-t datasets, respectively.

5.2.3. Complexities for General BNs

In all experiments in this section, the expected cardinality, k, and the ϵ of the DFL algorithm are set to the k of the generating functions and zero, respectively. To examine the complexity of the DFL algorithm for general BNs, we examine the BNs of k = 2 and k = 3 with n = 100. There are 2^{2^2} = 16 and 2^{2^3} = 256 possible Boolean functions, whose output columns correspond to the binary representations of 0 to 15 and 0 to 255, respectively. We use the decimal value of the output column of a Boolean function as its index. For instance, the index of the OR function of k = 2 is seven, since its output column is 0111, i.e., decimal 7. Thus, we generate the 16 and 256 BNs that are determined by the 16 and 256 Boolean functions of k = 2 and k = 3, respectively. For each BN, we generate noiseless “head”, “random” and “tail” datasets, and the average run times of five independent experiments on these three datasets are shown in Figure 7.
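The indexing convention can be written as a few lines of Python (our reading of the convention above: the output for input 00…0 is taken as the most significant bit, which reproduces the OR example with index 7):

# Index of a k-input Boolean function = decimal value of its output column.
def function_index(f, k):
    bits = [f(tuple((i >> (k - 1 - j)) & 1 for j in range(k))) for i in range(2 ** k)]
    return int("".join(map(str, bits)), 2)

print(function_index(lambda x: x[0] | x[1], 2))  # OR of k = 2 -> 7
print(function_index(lambda x: x[0] ^ x[1], 2))  # XOR of k = 2 -> 6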
Figure 7. The run times of the DFL algorithm for learning general BNs. (a) to (c) are run times on noiseless head, random and tail datasets of k = 2, respectively; (d) to (f) are run times on noiseless head, random and tail datasets of k = 3, respectively. The horizontal axes are the index of datasets and vertical axes are the run times, t, in seconds. The average sensitivities of the DFL algorithm are 100% for datasets of Part (a) to (c), and over 99.3% for datasets of Part (d) to (f). The shown times are the average values of five runs. The error bars are the standard deviations. These experiments were performed on a computer with an Intel Xeon 64-bit CPU of 2.66 GHz and 32 GB memory running the CENTOS Linux operating system.
In Figure 7a–c, it is clear that all BNs of k = 2 can be learned in a few seconds, except the two with indices 6 and 9. Their generating Boolean functions are exclusive OR (⊕) and Boolean equality (≡), respectively, as shown in Table 5. In all experiments in Figure 7a–c, the DFL algorithm successfully identifies the original generating functions of the datasets, i.e., it achieves sensitivities of 100%. From Figure 7d–f, we also examine the ten BNs that the DFL algorithm uses more time to learn and list their generating functions in Table 5. In all of these BNs, the input variables of the generating Boolean functions are related with ⊕ or ≡ and their combinations. The average sensitivities of the DFL algorithm are over 99.3% for the experiments in Figure 7d–f. We then examine the number of subsets that are searched before the DFL algorithm finds the target subsets. Except for the ten BNs with the generating functions listed in Table 5, the DFL algorithm only checks O(k·n) subsets before finding the true input variable subset for each X_i. As discussed in the following sections, only 218 out of the 256 BNs of k = 3 actually have three inputs. Therefore, when the datasets are noiseless, the DFL algorithm can efficiently learn most (>95%, 208/218) general BNs of k = 3 with a time complexity of O(k·(N + log n)·n^2) and a very high sensitivity of over 99.3%.
Table 5. The generating Boolean functions of BNs that the DFL algorithm used more computational time to learn.
Index | Generating Boolean Function
k = 2
6 | X_i = X^{(1)} ⊕ X^{(2)}
9 | X_i = X^{(1)} ≡ X^{(2)}
k = 3
24 | X_i = ¬((X^{(1)} ≡ X^{(2)}) ∨ (X^{(1)} ≡ X^{(3)}))
36 | X_i = ¬((X^{(1)} ≡ X^{(2)}) ∨ (X^{(1)} ⊕ X^{(3)}))
66 | X_i = ¬((X^{(1)} ⊕ X^{(2)}) ∨ (X^{(1)} ≡ X^{(3)}))
105 | X_i = X^{(1)} ⊕ X^{(2)} ⊕ X^{(3)}
126 | X_i = (X^{(1)} ⊕ X^{(2)}) ∨ (X^{(1)} ⊕ X^{(3)})
129 | X_i = ¬((X^{(1)} ⊕ X^{(2)}) ∨ (X^{(1)} ⊕ X^{(3)}))
150 | X_i = ¬(X^{(1)} ⊕ X^{(2)} ⊕ X^{(3)})
189 | X_i = (X^{(1)} ⊕ X^{(2)}) ∨ (X^{(1)} ≡ X^{(3)})
219 | X_i = (X^{(1)} ≡ X^{(2)}) ∨ (X^{(1)} ⊕ X^{(3)})
231 | X_i = (X^{(1)} ≡ X^{(2)}) ∨ (X^{(1)} ≡ X^{(3)})

5.3. Experiments of Small Datasets

From Theorem 9, it is known that the sufficient sample size for inferring BNs is related to the indegree, k, and the number of variables, n, in the networks. Therefore, we apply the DFL algorithm to 200 noiseless OR (100 OR-h and 100 OR-t) and 200 noiseless RANDOM (100 RANDOM-h and 100 RANDOM-t) datasets with k = 3, n = 100 and various N. Then, we apply the DFL algorithm to 200 noiseless OR (100 OR-h and 100 OR-t) datasets, where k = 3, n = 100, 500, 1000 and various N. Finally, we apply the DFL algorithm to 200 noiseless OR (100 OR-h and 100 OR-t) datasets, where k = 2, 3, 4, n = 100 and various N. The relation between the sensitivity of the DFL algorithm and N is shown in Figure 8. In all experiments in this section, the expected cardinality, k, and ϵ of the DFL algorithm are set to k of the generating functions and zero, respectively.
Figure 8. The sensitivity of the DFL algorithm vs. sample size N. The values shown are the average of 200 noiseless datasets. (a) The sensitivity vs. N for OR and RANDOM datasets, when n = 100, k = 3. The curves marked with circles and diamonds are for OR and RANDOM datasets, respectively; (b) The sensitivity vs. N for OR datasets, when n = 100, 500, 1000, and k = 3. The curves marked with circles, diamonds and triangles are for data sets of n = 100, 500 and 1000, respectively; (c) The sensitivity vs. N for OR datasets, when k = 2, 3, 4, and n = 100. The curves marked with diamonds, circles and triangles are for data sets of k = 2, 3 and 4, respectively.
From Figure 8, it is shown that the sensitivity of the DFL algorithm grows approximately linearly with the logarithmic value of N, but becomes one after a certain N value, except the RANDOM datasets in part (a). For the RANDOM datasets, the sensitivity of the DFL algorithm has increased to 99.3% when N = 200 and, further, to 99.7% when N = 1000. This means that if the training dataset is large enough, the DFL algorithm can correctly identify the original OR BNs and correctly find the original RANDOM BNs with a very high probability.
As shown in Figure 8b, for the same N ∈ (0, 100), the sensitivity of the DFL algorithm shows a small decrease when n increases. Figure 8c shows that, for the same N, the sensitivity shows a large decrease when k increases. This is due to the different effect of k and n in determining the sufficient sample size. In Theorem 9, the sufficient sample size, N, grows exponentially with k, but linearly with log n. However, in both Figure 8b and c, the sensitivity of the DFL algorithm gradually converges to 100% when N increases.
We also find that when the sample size is small, the DFL algorithm may use more time than O(k·(N + log n)·n^2). For example, the average run times of the DFL algorithm for the 200 small OR datasets of n = 100 and k = 3 used in this section are shown in Figure 9. The run time of the DFL algorithm deteriorates when the sample size N falls in the region from 20 to 100, but resumes linear growth once N exceeds 100.
Figure 9. The run time, t (vertical axis, shown in seconds), of the DFL algorithm for small OR datasets, where n = 100 and k = 3. The values shown are the average of 200 datasets.

5.4. Comparisons with Existing Methods

Maucher et al. [16] presented a comparison of their correlation algorithm and the best-fit algorithm [19,20] on 50 random BNs with monotonic generating functions [16]. We generate datasets using the same settings as [16], except for the generating functions. In contrast to the purely monotonic generating functions used in [16], we generate the same numbers of samples, from 50 to 800, with 10% noise for all 2^{2^3} = 256 BNs of k = 3, with n = 10 and 80, respectively. Among the 256 Boolean functions of k = 3, there are 38 functions whose number of effective input variables is actually less than three. Among the 2^{2^2} = 16 Boolean functions of k = 2, there are two constant functions and four functions with only one input; thus, only 10 BNs really have two inputs among the 16 BNs of k = 2. Hence, there are 30 functions with two inputs, six functions with one input and two constant functions among the 256 Boolean functions of k = 3. Only the remaining 218 BNs with three inputs are used, because the other BNs represent models with k < 3. For each of the 218 BNs, we generate three datasets, one “head”, one “random” and one “tail”, for each sample size. The average sensitivities and specificities for each sample size are then calculated for the 218 “head”, 218 “random” and 218 “tail” datasets, respectively. Finally, the results of the “head”, “random” and “tail” datasets are combined to calculate the average values and standard deviations shown in Figure 10.
We use the ϵ-value method introduced in the Methods section to handle the noise issue. We try different ϵ-values from zero to one with a step of 0.01. Because there are no subsets that satisfy I(X; Y) = H(Y) in noisy datasets, the DFL algorithm falls into an exhaustive search of the Σ_{i=1}^{k} C(n, i) subsets with ≤ k input variables. Thus, for the datasets of n = 80, we use a restricted search method that only checks Σ_{i=0}^{k−1} (n − i) ≤ k·n subsets at each ϵ-value. In the restricted search, all subsets with one variable are examined, and the subset with the largest MI with Y, say {X_i}, is kept. The DFL algorithm then checks the n − 1 two-element subsets containing X_i, and the subset with the largest MI with Y among these n − 1 subsets, say {X_i, X_j}, is kept. This search procedure continues until k input variables are chosen.
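For the settings used here (n = 80, k = 3), the difference between the two search modes is substantial (our arithmetic): the restricted search checks at most 80 + 79 + 78 = 237 subsets per ϵ-value, whereas the exhaustive search checks C(80,1) + C(80,2) + C(80,3) = 80 + 3,160 + 82,160 = 85,400 subsets.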
Figure 10. The performance of the DFL algorithm when learning general BNs from datasets of 10% noise with different sample sizes. For each sample size, three (one “head”, one “random” and one “tail”) datasets were generated for each of the 218 BNs of k = 3. The sensitivities, specificities and the number of data sets, m, on which the DFL algorithm successfully achieved 100% sensitivities and specificities were calculated on the 218 “head”, 218 “random” and 218 “tail” datasets, respectively. Then, the average values in the curves and standard deviations (the error bars) were calculated from the averages of the “head”, “random” and “tail” datasets. The curves marked with circles (blue), DFLr, and dots (red), DFLx, represent the average values on the datasets of n = 80 with restricted searching of only i = 0 k 1 ( n i ) subsets and n = 10 with exhaustive searching of i = 1 k n i subsets, respectively. The curves marked with triangles (green), best-fit, and diamonds (magenta), Corr., are the average values of the Best-fit and Correlation algorithm on noisy datasets of 50 monotonic Boolean networks with n = 80 and Gaussian noise with SD σ = 0.4 (reported in [16]), respectively. (a) The sensitivities vs. N; (b) The specificities vs. N; (c) The number of datasets, m, on which the DFL algorithm achieved 100% sensitivities and specificities.
The exhaustive search is used for the datasets of n = 10. The BNs learned with the smallest ϵ-values are compared with the generating functions to calculate the sensitivities and specificities shown in Figure 10a,b. We also calculate the number of BNs that are learned by the DFL algorithm with 100% sensitivities and specificities, as shown in Figure 10c, and compare the results of the DFL algorithm in Figure 10 with those of the two existing algorithms reported in Figure 1 of [16].
The DFL algorithm using the exhaustive search achieves good sensitivities and specificities when N > 100 on the datasets of n = 10 (Figure 10a,b). In comparison with the Correlation algorithm [16], the DFL algorithm shows much better specificities and sensitivities, especially when N < 200. When compared with the Best-fit algorithm [12], the DFL algorithm demonstrates much better sensitivities when N < 200 and comparable specificities.
As shown in Figure 10, the performance of the DFL algorithm decreases slightly when the restricted searching is used. When compared with the Best-fit algorithm, the DFL algorithm with restricted searching has better sensitivities when N < 250, but slightly worse specificities. It shows better specificities than the Correlation algorithm when N < 600, but slightly worse sensitivities for N < 100. Recall that the datasets applied to the DFL algorithm also contain non-monotonic functions, such as those listed in Table 5. When the restricted searching is used, the DFL algorithm may fail to correctly learn the original generating functions for datasets generated by functions with $I(X_{i_j}; X_i) = 0$, which explains the slightly declined performance of the DFL algorithm under the restricted searching strategy. Recall also that the results of the Best-fit and Correlation algorithms reported in [16] are obtained on datasets of 50 monotonic functions. Therefore, the performance of the DFL algorithm is still comparable to these two methods, even with the restricted searching method. It is also interesting to point out that, in Figure 10c, the DFL algorithm successfully identifies about 98% of the BNs using the exhaustive searching and can reliably learn about 185 (85%) of the BNs even using the restricted searching, when N < 200. In other words, while maintaining 100% sensitivity and specificity, the DFL algorithm keeps its complexity of $O(k\cdot(N+\log n)\cdot n^2)$ for learning these 85% of the BNs of k = 3 from noisy datasets.
In summary, these results demonstrate that the DFL algorithm has a better comprehensive performance than the methods compared in this study.

6. Discussion

6.1. Advantages of the DFL Algorithm

An advantage of the DFL algorithm is that it requires fewer samples than existing methods to achieve good sensitivities and specificities. As shown in Figure 10, when the sample size is larger than 100, the DFL algorithm achieves over 98% sensitivities and specificities using the exhaustive searching. Taking the log n term in Theorem 9 into consideration, the DFL algorithm needs around 200 samples to achieve over 98% sensitivities for learning BNs of n = 80. As demonstrated in Figure 10, when the restricted searching is used, i.e., using only $O(k\cdot(N+\log n)\cdot n^2)$ time, the DFL algorithm achieves sensitivities of about 90% and specificities of about 95% when N ≥ 200. Compared with the DFL algorithm using the restricted searching, the results in [16] show that the Best-fit algorithm [12] needs more samples to achieve a comparable sensitivity of 90%, and its time complexity is $O(n^k \cdot n \cdot N \cdot \mathrm{poly}(k))$, which is much worse than the $O(k\cdot(N+\log n)\cdot n^2)$ of the DFL algorithm. The Correlation algorithm demonstrates a sensitivity similar to that of the DFL algorithm when N ≥ 200, but its specificities are much worse, about 78% at N = 200. Another advantage of the DFL algorithm is that it can learn more general BNs than the Correlation algorithm [16], as shown in Figure 7 and Figure 10. In fact, the DFL algorithm successfully learns 208 and 185 of the 218 BNs of k = 3 (about 95% and 85%, respectively) from noiseless and noisy datasets, respectively, using only $O(k\cdot(N+\log n)\cdot n^2)$ time (Figure 7d–f and Figure 10c). Furthermore, if computation time is less important than sensitivities and specificities, the DFL algorithm can achieve better performance on noisy datasets for the remaining BNs by using the exhaustive searching strategy, as shown in Figure 10.

6.2. The Noise and Size of Training Datasets

Let us consider the factors that affect the sensitivity of the DFL algorithm. As shown in Figure 4, $I(X_j; X_i)$ in a BN is affected by two factors, the sample size, N, and the noise level of the datasets.
One factor that affects the complexity of the DFL algorithm is the amount of noise in the training datasets. Noise changes the distributions of X and Y and, thus, destroys the equality between $H(Y)$ and $I(X;Y)$ when $Y = f(X)$. Therefore, we use the ϵ-value method to learn BNs from noisy datasets [21,37,40]. As shown in Figure 10, the DFL algorithm achieves > 99% sensitivities and specificities when N > 100, if the exhaustive searching strategy is used. Given enough samples, the DFL algorithm successfully identifies most general BNs from noisy datasets, around 98% (213/218), as shown in Figure 10c. The results in Figure 10a and Figure 8a show that the performance of the DFL algorithm is stable on noisy datasets, even when the percentage of noisy samples is increased to 20% [40]. To keep its efficiency, a restricted searching method is also examined in Figure 10 for the datasets of n = 80. Our results demonstrate that most BNs, about 85%, can be learned very efficiently with the restricted searching method, with a time complexity of $O(k\cdot(N+\log n)\cdot n^2)$. Thus, in practice, it is advantageous to first use the restricted searching to find a model and then refine it with the exhaustive searching when using the DFL algorithm.
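As a concrete illustration of how the ϵ-value loop can be wired around a subset-search routine, consider the sketch below, which reuses the entropy() and mutual_information() helpers from the earlier sketch. The acceptance rule $I(X;Y) \ge (1-\epsilon)\cdot H(Y)$ is our own illustrative reading of how ϵ relaxes the exact condition $I(X;Y) = H(Y)$; the actual rule is the one defined in the Methods section.

```python
def learn_with_epsilon(samples, y, k, candidate_search):
    """Return a candidate parent set and the smallest eps in {0.00, 0.01, ..., 1.00} at which it
    is accepted. `candidate_search(samples, y, k)` can be restricted_search above or an
    exhaustive search over all subsets with at most k variables; in the full DFL algorithm the
    subset search is repeated at each eps-value, which is omitted here for brevity."""
    h_y = entropy([(yi,) for yi in y])
    parents = candidate_search(samples, y, k)
    cols = [tuple(row[i] for i in parents) for row in samples]
    mi = mutual_information(cols, y)
    for step in range(101):
        eps = step / 100.0
        # Assumed acceptance rule: the subset explains Y up to an eps fraction of H(Y).
        if mi >= (1.0 - eps) * h_y:
            return parents, eps
    return parents, 1.0  # fallback for tiny negative MI caused by floating-point rounding
```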
In addition to the level of noise, the sample size is another factor that affects the time complexity of the DFL algorithm. As demonstrated in Figure 8, the sensitivity of the DFL algorithm increases as the sample size increases and converges to 100% when enough samples are provided. Meanwhile, in Figure 9, the running time shows an interesting convex pattern when the sample size is small, but increases linearly once the number of samples reaches a threshold. This is because the DFL algorithm is disturbed by irrelevant variables when the sample size is small. Let us use the example in Figure 4 to explain the issue. In the ideal case, or the Golden Rule, $\hat{I}(X_j; X_i)$, $\forall X_j \in Pa(X_i)$, should take the values computed from the truth table of the generating function, and $\hat{I}(X_j; X_i) = 0$, $\forall X_j \in V \setminus Pa(X_i)$, based on Theorem 8. In this example, $\hat{I}(X_j; X_i)$, $j = 4, \dots, 10$, tends to zero given enough samples. Indeed, when N = 1000, $\hat{I}(X_j; X_i)$ for $j = 4, \dots, 10$ is almost zero, as shown in Figure 4. However, when N is small, $\hat{I}(X_j; X_i)$ behaves very differently: many irrelevant variables have non-zero $\hat{I}(X_j; X_i)$. Thus, it takes many additional computations to find the correct $Pa(X_i)$, which explains the increased computational time in Figure 9 when N is small. Even worse, there may exist other subsets of V that satisfy the criterion of Theorem 5. For instance, when N = 20 in the example of Figure 4, the DFL algorithm finds that $X_i = f(X_2, X_7, X_9)$, which is incorrect, since $\hat{I}(\{X_2, X_7, X_9\}; X_i) = \hat{H}(X_i)$. Consequently, it is possible that the DFL algorithm cannot find the original BNs. Theorem 10 holds no matter how many learning samples are provided; in cases of small sample sizes, such as N = 20 in the example, the obtained BNs are still consistent with the learning datasets. However, the sensitivity of the DFL algorithm becomes 1/3 for this example, since only 1/3 of the edges of the original network are correctly identified. In this case, the consistency criterion used by [5] is not suitable for evaluating the performance of a learning algorithm, which is why we use the sensitivity to evaluate the performance of the DFL algorithm.
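This small-sample effect is easy to reproduce. The self-contained sketch below (our own toy setup, not the experiment of Figure 4) draws uniform random samples for an OR function with three parents among n = 10 variables and prints the largest empirical MI observed for a non-parent, which is noticeably above zero at N = 20 and shrinks towards zero as N grows.

```python
import random
from collections import Counter
from math import log2

def entropy(column):
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in Counter(column).values())

def mi(x_cols, y):
    joint = [x + (yi,) for x, yi in zip(x_cols, y)]
    return entropy(x_cols) + entropy([(yi,) for yi in y]) - entropy(joint)

random.seed(0)
n, parent_idx = 10, (0, 1, 2)              # toy generating function: X_i = X_1 OR X_2 OR X_3
for N in (20, 100, 1000):
    rows = [[random.randint(0, 1) for _ in range(n)] for _ in range(N)]
    y = [int(any(row[p] for p in parent_idx)) for row in rows]
    mis = [mi([(row[j],) for row in rows], y) for j in range(n)]
    print(f"N={N:5d}  largest empirical MI of a non-parent = {max(mis[3:]):.3f} bits")
```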
Recently, Perkins and Hallett [43] provided an improved sample complexity for learning BNs. They also demonstrated that uncorrelated samples reduce the number of samples needed but increase the learning time, whereas strongly correlated samples have the opposite effect. Their findings suggest that the correlation between samples should also be considered when using the DFL algorithm to learn BNs, depending on whether fewer samples or less computational time is preferred.
Sample distribution is another point worth mentioning. Because Theorem 6 holds regardless of the sample distribution, the MI and entropy can be correctly estimated from enough samples randomly drawn from distributions other than the uniform distribution. Therefore, although Theorem 14 and Corollaries 3, 4 and 5 are stated for learning samples randomly drawn from the discrete uniform distribution, we hypothesize that they also hold for training samples drawn from other distributions.

6.3. The BNs With More Computation Time

In some Boolean functions, such as the ⊕ (exclusive OR) functions, $I(X_{i_j}; X_i) = 0$ for $X_{i_j} \in Pa(X_i)$. This makes it very unlikely that the $X_{i_j}$ are ranked near the front of the list after the sort step in line 7 of Table 3. Fortunately, in the empirical studies in Figure 7, this worst case happens with very low probability, < 5%, when inferring general BNs with an indegree of three. The experimental results show that, although the DFL algorithm uses more steps to find the target subsets for functions with $I(X_{i_j}; X_i) = 0$ than for other functions, its complexity is still polynomial for each $X_i$.
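The zero single-input MI of ⊕ functions can be checked directly on the truth table. The short script below (our own illustration) computes the exact value of $I(X_1; X_i)$ for $X_i = X_1 \oplus X_2 \oplus X_3$ and, for contrast, for the three-input OR function, whose value agrees with Theorem 11 for k = 3.

```python
from itertools import product
from collections import Counter
from math import log2

def entropy(values):
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def single_input_mi(f, k):
    """Exact I(X_1; X_i) over the uniform truth table of a k-input Boolean function f."""
    rows = list(product((0, 1), repeat=k))
    y = [f(r) for r in rows]
    x1 = [r[0] for r in rows]
    return entropy(x1) + entropy(y) - entropy(list(zip(x1, y)))

print(single_input_mi(lambda r: r[0] ^ r[1] ^ r[2], 3))  # 0.0: an XOR parent carries no single-input MI
print(single_input_mi(lambda r: int(any(r)), 3))         # ~0.138: the OR value of Theorem 11 for k = 3
```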
In the context of GRNs, the exclusive OR and Boolean equality functions are also unlikely to occur between different regulators of a gene. In some cases, a gene can be activated by several activators, any of which is strong enough to activate the gene; this can be modeled as an “OR” logic between these activators. In other cases, several activators must simultaneously bind to their binding sites in the cis-regulatory region to turn on the gene, which can be modeled as an “AND” logic. A repressor often turns off the gene, so it can be modeled as the “INVERT” logic. Suppose instead that the regulators act with the ⊕ relation; then the gene is turned on only when an odd number of its regulators are at their high expression levels (in the logic 1 state). For example, suppose a gene, X, has one activator, A, and one repressor, R; then $X = A \oplus R = (A \wedge \neg R) \vee (\neg A \wedge R)$. The second term on the right side says that X will be turned on if the activator is at its low level and the repressor is at its high level, which is clearly unreasonable. Similarly, suppose $X = (A \equiv R) = (\neg A \wedge \neg R) \vee (A \wedge R)$. The first term on the right side means that X will be turned on if both A and R are at their low levels, which is also unreasonable. From the above analysis, the exclusive OR and Boolean equality relations are unlikely to happen in a real biological system, which is also argued in [16].

6.4. The Constant Functions

The DFL algorithm outputs that “$X_i$ is a constant” in two extreme cases, i.e., $X_i = f(X_{i_1}, \dots, X_{i_k}) = 1$ or $0$, $\forall (x_{i_1}, \dots, x_{i_k})$. Since $X_i$ is a constant in these two cases, the entropy of $X_i$ is zero; in other words, the information content of $X_i$ is zero. Therefore, it is unnecessary to know which subset of variables forms the genuine inputs of $X_i$ in these two cases.
In the case of GRNs, some genes show constant expression levels in a specific biological process. For example, a large number of genes do not show significant changes in their expression levels during the cell cycle of the yeast Saccharomyces cerevisiae in the study of [38]. These genes are considered house-keeping genes and are removed before further analysis of the expression datasets [38].
However, it is also possible that only one output value appears in the training samples, especially when the datasets are small. In this case, the DFL algorithm still reports the model as a constant, which is not true. Thus, if prior knowledge suggests that the underlying generating function is not a constant, it is advisable to check, based on Theorem 9, whether the sample size is large enough.
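A minimal sketch of this check is given below, under the assumption that the required sample size from Theorem 9 is supplied by the caller (the bound itself is not reproduced here).

```python
def check_constant(y, required_n):
    """Report X_i as a constant only when a single output value is observed; warn when the
    sample size is below `required_n`, a bound the caller takes from Theorem 9 (not shown here)."""
    values = set(y)
    if len(values) > 1:
        return None                                   # more than one output value: not a constant
    if len(y) < required_n:
        print(f"warning: only {len(y)} samples; the constant output may be a sampling artifact")
    return ("constant", values.pop())
```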

7. Conclusions

We prove that the DFL algorithm can learn OR/AND BNs with a worst-case time complexity of $O(k\cdot(N+\log n)\cdot n^2)$, given enough noiseless random samples drawn from the uniform distribution. The experimental results validate this conclusion. Our experiments demonstrate that the DFL algorithm can successfully learn more than 95% of the BNs of k = 3 with the same $O(k\cdot(N+\log n)\cdot n^2)$ complexity from noiseless samples. For noisy datasets, the DFL algorithm successfully learns about 85% of the BNs of k = 3 in $O(k\cdot(N+\log n)\cdot n^2)$ time. Furthermore, when the datasets are noisy, the DFL algorithm can successfully learn even more BNs of k = 3, about 98%, using the exhaustive searching (Figure 10c).
Although this paper discusses the learning of Boolean functions, the DFL algorithm has also been demonstrated to learn multi-value discrete functions and GRN models. For example, in our early work [21,40], we used the DFL algorithm to learn GLF models [44], in which genes are related by multi-value functions.
Since Boolean function learning algorithms have been used to solve many problems [22,23,24,25,26,27,28,29,30], the DFL algorithm can potentially find applications in other fields, such as classification [41,45,46,47], feature selection [37], pattern recognition, functional dependency discovery and association rule mining.

Author’s Contributions

Yun Zheng and Chee Keong Kwoh conceived of and designed the research. Yun Zheng conducted the theoretical analysis, implemented the method and performed the experiments. Yun Zheng and Chee Keong Kwoh analyzed the results and wrote the manuscript. Both authors read and approved the final manuscript.

Acknowledgements

The research is supported in part by a grant of Jardine OneSolution (2001) Pte Ltd to Chee Keong Kwoh and a start-up grant of Kunming University of Science and Technology to Yun Zheng.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

Proofs of the Theorems

Theorem 15 
([34] (p. 33)) If $Y = f(X)$, then $I(Z; X) \ge I(Z; Y)$.
Theorem 8 
If $I(X; Y) = H(Y)$ and X and Z are independent, then $I(Z; Y) = 0$.
Proof 1 
Since X and Z are independent, based on Theorem 1, we have:
$$I(Z; X) = 0 \qquad (11)$$
Since $I(X; Y) = H(Y)$, based on Theorem 5, we have $Y = f(X)$. Then, based on Theorem 15, we have
$$I(Z; X) \ge I(Z; Y) \qquad (12)$$
From Equations (11) and (12), we have $I(Z; Y) \le 0$. Based on Theorem 1, $I(Z; Y) \ge 0$; hence, $I(Z; Y) = 0$.□
Theorem 11 
In an OR BN with an indegree of k over V, the mutual information between $X^{(j)} \in Pa(X_i) = \{X_{i_1}, \dots, X_{i_k}\}$ and $X_i$ is:
$$I(X^{(j)}; X_i) = \frac{1}{2} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-1}-1}{2^k}\log\frac{2^{k-1}-1}{2^k}$$
Proof. 
Without loss of generality, we consider $I(X_{i_1}; X_i)$. In the truth table of $X_i$, there are equal numbers of “0” and “1” for $X_{i_1}$. Thus, $H(X_{i_1}) = 1$.
In the truth table of $X_i$, there are $2^k$ lines in total. In the column of $X_i$, there is only one “0” and there are $2^k - 1$ “1”s. Thus,
$$H(X_i) = -\frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} = \frac{k}{2^k} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k}$$
There are only three possible instances for the tuple $(X_{i_1}, X_i)$, i.e., (0, 0), (0, 1) and (1, 1). By counting the numbers of these instances and dividing by the total number of lines, we have
$$H(X_{i_1}, X_i) = -\frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^{k-1}-1}{2^k}\log\frac{2^{k-1}-1}{2^k} - \frac{1}{2}\log\frac{1}{2}$$
Therefore, we obtain
$$I(X_{i_1}; X_i) = H(X_{i_1}) + H(X_i) - H(X_{i_1}, X_i) = \frac{1}{2} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-1}-1}{2^k}\log\frac{2^{k-1}-1}{2^k} \qquad (16)$$
Theorem 12 
In an OR BN with an indegree of k over V, $\forall 1 \le p \le k$, $\forall X^{(1)}, X^{(2)}, \dots, X^{(p)} \in Pa(X_i)$, the mutual information between $\{X^{(1)}, X^{(2)}, \dots, X^{(p)}\}$ and $X_i$ is
$$I(\{X^{(1)}, \dots, X^{(p)}\}; X_i) = \frac{p}{2^p} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-p}-1}{2^k}\log\frac{2^{k-p}-1}{2^k}$$
Proof. 
Similar to the proof of Theorem 11, we have
$$H(X_i) = \frac{k}{2^k} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} \qquad (18)$$
Consider first $X_i = X_1 \vee X_2 \vee X_3$ and p = 2. Without loss of generality, we derive $I(\{X_1, X_2\}; X_i)$. For $H(X_1, X_2)$, as shown in Table A1, there are $2^p = 4$ possible instances for $(X_1, X_2)$, i.e., (0,0), (0,1), (1,0) and (1,1). By counting the numbers of these instances and dividing by the total number of lines, we get $H(X_1, X_2) = 2^p \times \left(-\frac{1}{2^p}\log\frac{1}{2^p}\right) = 2$ (bits).
Table A1. The truth table of $X_i = X_1 \vee X_2 \vee X_3$.
X1  X2  X3  |  X_i
0   0   0   |  0
0   0   1   |  1
0   1   0   |  1
0   1   1   |  1
1   0   0   |  1
1   0   1   |  1
1   1   0   |  1
1   1   1   |  1
Next, we derive $H(X_1, X_2, X_i)$. There are $2^p + 1 = 5$ possible instances for $(X_1, X_2, X_i)$, i.e., (0,0,0), (0,0,1), (0,1,1), (1,0,1) and (1,1,1). Their probabilities are
$$p(0,0,0) = \frac{1}{2^k}, \quad p(0,0,1) = \frac{2^{k-p}-1}{2^k} = \frac{2^{k-2}-1}{2^k}, \quad p(0,1,1) = p(1,0,1) = p(1,1,1) = \frac{2^{k-p}}{2^k} = \frac{2^{k-2}}{2^k} \qquad (19)$$
Hence, we get
$$H(X_1, X_2, X_i) = -\frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^{k-2}-1}{2^k}\log\frac{2^{k-2}-1}{2^k} - 3\times\left(\frac{2^{k-2}}{2^k}\log\frac{2^{k-2}}{2^k}\right) = \frac{3}{2} - \frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^{k-2}-1}{2^k}\log\frac{2^{k-2}-1}{2^k} \qquad (20)$$
Finally, we have
$$I(\{X_1, X_2\}; X_i) = H(X_1, X_2) + H(X_i) - H(X_1, X_2, X_i) = -\frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + 2 - \frac{3}{2} + \frac{1}{2^k}\log\frac{1}{2^k} + \frac{2^{k-2}-1}{2^k}\log\frac{2^{k-2}-1}{2^k} = \frac{1}{2} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-2}-1}{2^k}\log\frac{2^{k-2}-1}{2^k} \qquad (21)$$
Generalizing from the case of p = 2 to general p, we have
$$H(X^{(1)}, \dots, X^{(p)}) = 2^p \times \left(-\frac{1}{2^p}\log\frac{1}{2^p}\right) = p \text{ (bits)} \qquad (22)$$
Generalizing Equation (19), there is one instance of $(0, \dots, 0, 0)$ for $(X^{(1)}, \dots, X^{(p)}, X_i)$, there are $2^{k-p}-1$ lines with the instance $(0, \dots, 0, 1)$, and the remaining $2^p - 1$ possible instances of $(X^{(1)}, \dots, X^{(p)}, X_i)$ all have the same probability, $\frac{2^{k-p}}{2^k}$. Hence,
$$H(X^{(1)}, \dots, X^{(p)}, X_i) = -\frac{1}{2^k}\log\frac{1}{2^k} - \frac{2^{k-p}-1}{2^k}\log\frac{2^{k-p}-1}{2^k} - (2^p-1)\left(\frac{2^{k-p}}{2^k}\log\frac{2^{k-p}}{2^k}\right) = p + \frac{k}{2^k} - \frac{p}{2^p} - \frac{2^{k-p}-1}{2^k}\log\frac{2^{k-p}-1}{2^k} \qquad (23)$$
Finally, by combining Equations (18), (22) and (23), we have the result, $\forall 1 \le p \le k$,
$$I(\{X^{(1)}, \dots, X^{(p)}\}; X_i) = H(X^{(1)}, \dots, X^{(p)}) + H(X_i) - H(X^{(1)}, \dots, X^{(p)}, X_i) \qquad (24)$$
$$= \frac{p}{2^p} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-p}-1}{2^k}\log\frac{2^{k-p}-1}{2^k} \qquad (25)$$
When p = k, Theorem 12 gives $I(\{X^{(1)}, \dots, X^{(k)}\}; X_i) = \frac{k}{2^k} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} = H(X_i)$, as shown in Equation (18). This is exactly consistent with Theorem 4, which validates Theorem 12 from another angle. For instance, as shown in Figure 3b, when p = 6, the value of $I(\{X^{(1)}, \dots, X^{(p)}\}; X_i)$ on the curve for k = 6 is 0.116 bits, which should equal $H(X_i)$ by Theorem 4; indeed, from Equation (18), when k = 6, $H(X_i) = \frac{6}{64} - \frac{63}{64}\log_2\frac{63}{64} = 0.116$ bits, too.
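The closed form of Theorem 12 can be checked numerically against a direct computation over the OR truth table. The short script below (our own verification aid, not part of the original experiments) does so for k = 3, p = 1, 2, 3, and also reproduces the 0.116-bit value quoted above for k = 6.

```python
from itertools import product
from collections import Counter
from math import log2

def entropy(values):
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def mi_direct(k, p):
    """I({X^(1),...,X^(p)}; X_i) computed directly from the truth table of a k-input OR."""
    rows = list(product((0, 1), repeat=k))
    y = [int(any(r)) for r in rows]
    xs = [r[:p] for r in rows]
    return entropy(xs) + entropy(y) - entropy(list(zip(xs, y)))

def mi_closed_form(k, p):
    """Equation (25): p/2^p - ((2^k-1)/2^k)log((2^k-1)/2^k) + ((2^(k-p)-1)/2^k)log((2^(k-p)-1)/2^k)."""
    a = (2**k - 1) / 2**k
    b = (2**(k - p) - 1) / 2**k
    return p / 2**p - a * log2(a) + (b * log2(b) if b > 0 else 0.0)  # 0*log 0 is taken as 0 when p = k

for p in (1, 2, 3):
    print(p, round(mi_direct(3, p), 6), round(mi_closed_form(3, p), 6))  # the two values coincide
print(round(mi_closed_form(6, 6), 3))  # 0.116, the H(X_i) value quoted above for k = 6
```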
Theorem 13 
In an OR BN with an indegree of k over V, $\forall 2 \le p \le k$, $I(\{X^{(1)}, \dots, X^{(p)}\}; X_i) > I(\{X^{(1)}, \dots, X^{(p-1)}\}; X_i)$.
Proof. 
From Theorem 12, we have
$$I(\{X^{(1)}, \dots, X^{(p)}\}; X_i) = \frac{p}{2^p} - \frac{2^k-1}{2^k}\log\frac{2^k-1}{2^k} + \frac{2^{k-p}-1}{2^k}\log\frac{2^{k-p}-1}{2^k}$$
Treating I as a function of p and differentiating, we obtain:
$$\frac{\partial I}{\partial p} = \left(\frac{p}{2^p}\right)' + \left(\frac{2^{k-p}-1}{2^k}\log_2\frac{2^{k-p}-1}{2^k}\right)' = \frac{1}{2^p} - \frac{p\ln 2}{2^p} + \left[\log_2\frac{2^{k-p}-1}{2^k} + \frac{1}{\ln 2}\right]\cdot\left(-\frac{\ln 2}{2^p}\right)$$
$$= \frac{1}{2^p} - \frac{p\ln 2}{2^p} - \frac{\ln 2}{2^p}\log_2\frac{2^{k-p}-1}{2^k} - \frac{1}{2^p} = \frac{\ln 2}{2^p}\left[k - p - \log_2(2^{k-p}-1)\right] = \frac{\ln 2}{2^p}\left[\log_2 2^{k-p} - \log_2(2^{k-p}-1)\right] \qquad (27)$$
Since $y = \log_2 x$ is monotonically increasing for $x \in (0, +\infty)$, we have $\log_2 2^{k-p} > \log_2(2^{k-p}-1)$; thus, $\frac{\partial I}{\partial p} > 0$ for $1 \le p < k$. Therefore, $I(\{X^{(1)}, \dots, X^{(p)}\}; X_i)$ monotonically increases with p, $1 \le p \le k$, and the result follows.□
The correctness of Theorem 13 is also demonstrated in Figure 3b.
Theorem 14 
For a sufficiently large N ($N = \Omega(2^k + k\log_2 n)$), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify an OR BN with an indegree of k in $O(k\cdot(N+\log n)\cdot n^2)$ time, strictly.
Proof. 
The datasets are generated with the original Boolean functions of the BNs. From Theorem 6, the empirical probabilities of $Pa(X_i)$ and $X_i$ in the Boolean functions $X_i = f_i(Pa(X_i))$ tend to the probabilities in the truth table of $f_i$ when the sample size is large enough.
First, consider the searching process in the first layer of the search graph, as in Figure 1. From Theorem 8, we obtain $\lim_{N\to\infty}\hat{I}(Z; X_i) = 0$ if $Z \in V \setminus Pa(X_i)$. Meanwhile, from Theorem 11, $\lim_{N\to\infty}\hat{I}(X_{i_j}; X_i) = I(X_{i_j}; X_i) > 0$. Thus, the $X_{i_j}$s are listed in front of the other variables, the Zs, after the sort step in line 7 of Table 3.
Next, the $\Delta_1(X_{i_j})$ (subsets with $X_{i_j}$ and another variable) are dynamically added to the second layer of the ΔTree. Now, consider the MI $\hat{I}(\{X_{i_j}, Z\}; X_i)$, where Z is one of the variables in $V \setminus \{X_{i_j}\}$. First, if $Z \in V \setminus Pa(X_i)$, from Theorem 8, $\lim_{N\to\infty}\hat{I}(Z; X_i) = 0$. Since, $\forall X_i, X_j \in V$, $X_i$ and $X_j$ are independent variables, from Theorem 2, we get $\hat{H}(X_i | X_j) = \hat{H}(X_i)$. From Theorem 3, we have:
$$\lim_{N\to\infty}\hat{I}(\{X_{i_j}, Z\}; X_i) = \lim_{N\to\infty}\left[\hat{I}(Z; X_i) + \hat{I}(X_{i_j}; X_i | Z)\right] = \lim_{N\to\infty}\hat{I}(X_{i_j}; X_i | Z) = \lim_{N\to\infty}\left[\hat{H}(X_{i_j} | Z) - \hat{H}(X_{i_j} | X_i, Z)\right]$$
$$= \lim_{N\to\infty}\left[\hat{H}(X_{i_j}) - \hat{H}(X_{i_j} | X_i)\right] = \lim_{N\to\infty}\hat{I}(X_{i_j}; X_i) = I(X_{i_j}; X_i) \qquad (28)$$
Second, if $Z \in Pa(X_i)$, from Theorem 13, we have
$$\lim_{N\to\infty}\hat{I}(\{X_{i_j}, Z\}; X_i) = I(\{X_{i_j}, Z\}; X_i) > \lim_{N\to\infty}\hat{I}(X_{i_j}; X_i) = I(X_{i_j}; X_i) \qquad (29)$$
Combining the results in Equations (28) and (29), for every $Z \in Pa(X_i)$, $\hat{I}(\{X_{i_j}, Z\}; X_i)$ is larger than the same measure for $Z \in V \setminus Pa(X_i)$.
Therefore, in the second layer of the ΔTree, the combinations with two elements from $Pa(X_i)$ are listed in front of the other combinations, and so forth, until the DFL algorithm finally finds $Pa(X_i)$ in the kth layer of the ΔTree.
In this searching process, only $\sum_{i=0}^{k-1}(n-i) \le kn$ subsets are visited by the DFL algorithm. Therefore, the complexity of the DFL algorithm becomes $O(k\cdot(N+\log n)\cdot n^2)$, where $\log n$ accounts for the sort step in line 7 of Table 3 and N for the length of the input table T.□
Corollary 3 
For a sufficiently large N ($N = \Omega(2^k + k\log_2 n)$), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify a generalized OR BN with an indegree of k in $O(k\cdot(N+\log n)\cdot n^2)$ time, strictly.
Proof. 
We replace each $X_{i_j}$ that takes its inverted value with another variable, $X_{i_j}^*$, i.e., we let $X_{i_j}^* = \neg X_{i_j}$; the resulting BN is then an OR BN. $H(X_i)$ does not change in the new OR BN.
To check the criterion of Theorem 5, we compare the MI $I(X_{i_j}^*; X_i)$ with $I(X_{i_j}; X_i)$. We have
$$I(X_{i_j}^*; X_i) = H(X_{i_j}^*) + H(X_i) - H(X_{i_j}^*, X_i)$$
$H(X_i)$ keeps the same value as the corresponding term of $I(X_{i_j}; X_i)$. In a binary system, there are only two states, “0” and “1”, so it is straightforward to obtain $H(X_{i_j}^*) = H(X_{i_j})$. Therefore, the only term changed in $I(X_{i_j}^*; X_i)$ is the joint entropy, $H(X_{i_j}^*, X_i)$. Next, we prove that $H(X_{i_j}^*, X_i) = H(X_{i_j}, X_i)$.
Consider the tuple $(X_{i_j}, X_i)$. If we replace “0” of $X_{i_j}$ with “1” and vice versa, it becomes $(X_{i_j}^*, X_i)$, as shown in Table A2. The three instances (0, 0), (0, 1) and (1, 1) of $(X_{i_j}, X_i)$ change to (1, 0), (1, 1) and (0, 1) of $(X_{i_j}^*, X_i)$, respectively, and their probabilities (frequencies) remain unchanged, respectively. Thus, $H(X_{i_j}^*, X_i) = H(X_{i_j}, X_i)$.
Table A2. The tuples $(X_{i_j}, X_i)$ and $(X_{i_j}^*, X_i)$, where k is two.
X_ij  X_i  |  X_ij*  X_i
0     0    |  1      0
0     1    |  1      1
1     1    |  0      1
1     1    |  0      1
From Theorem 14, the results are obtained.□
Corollary 4 
For a sufficiently large N ($N = \Omega(2^k + k\log_2 n)$), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify an AND BN with an indegree of k in $O(k\cdot(N+\log n)\cdot n^2)$ time, strictly.
Proof. 
In an AND BN, $P(X_i = 0)$ and $P(X_i = 1)$ are equal to $P(X_i = 1)$ and $P(X_i = 0)$, respectively, of the corresponding OR BN with the same $Pa(X_i)$ for every $X_i$. Therefore, $I(X_{i_j}; X_i)$ is the same as that of the corresponding OR BN. From Theorem 14, the result can be directly obtained.□
Corollary 5 
For a sufficiently large N ($N = \Omega(2^k + k\log_2 n)$), if the samples are noiseless and randomly generated from a uniform distribution, then the DFL algorithm can identify a generalized AND BN with an indegree of k in $O(k\cdot(N+\log n)\cdot n^2)$ time, strictly.
Proof. 
Similar to that of Corollary 3.□

References

  1. Davidson, E.; Levin, M. Gene regulatory networks special feature: Gene regulatory networks. Proc. Natl. Acad. Sci. USA 2005, 102. [Google Scholar] [CrossRef] [PubMed]
  2. Davidson, E.; McClay, D.; Hood, L. Regulatory gene networks and the properties of the developmental process. Proc. Natl. Acad. Sci. USA 2003, 100, 1475–1480. [Google Scholar] [CrossRef] [PubMed]
  3. Levine, M.; Davidson, E. From the cover. Gene regulatory networks special feature: Gene regulatory networks for development. Proc. Natl. Acad. Sci. USA 2005, 102, 4936–4942. [Google Scholar] [CrossRef] [PubMed]
  4. Kauffman, S. Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theor. Biol. 1969, 22, 437–467. [Google Scholar] [CrossRef]
  5. Akutsu, T.; Miyano, S.; Kuhara, S. Identification of Genetic Networks from a Small Number of Gene Expression Patterns under the Boolean Network Model. In Proceedings of Pacific Symposium on Biocomputing ’99, Big Island, HI, USA, 4–9 January 1999; Volume 4, pp. 17–28.
  6. Akutsu, T.; Miyano, S.; Kuhara, S. Algorithm for identifying boolean networks and related biological networks based on matrix multiplication and fingerprint function. J. Comput. Biol. 2000, 7, 331–343. [Google Scholar] [CrossRef] [PubMed]
  7. Akutsu, T.; Miyano, S.; Kuhara, S. Inferring qualitative relations in genetic networks and metabolic pathways. Bioinformatics 2000, 16, 727–734. [Google Scholar] [CrossRef] [PubMed]
  8. Akutsu, T.; Miyano, S.; Kuhara, S. A simple greedy algorithm for finding functional relations: Efficient implementation and average case analysis. Theor. Comput. Sci. 2003, 292, 481–495. [Google Scholar] [CrossRef]
  9. Ideker, T.; Thorsson, V.; Karp, R. Discovery of Regulatory Interactions Through Perturbation: Inference and Experimental Design. In Proceedings of Pacific Symposium on Biocomputing, Island of Oahu, HI, USA, 4–9 January 2000; Volume 5, pp. 302–313.
  10. Kim, H.; Lee, J.K.; Park, T. Boolean networks using the chi-square test for inferring large-scale gene regulatory networks. BMC Bioinforma. 2007, 8. [Google Scholar] [CrossRef]
  11. Laubenbacher, R.; Stigler, B. A computational algebra approach to the reverse engineering of gene regulatory networks. J. Theor. Biol. 2004, 229, 523–537. [Google Scholar] [CrossRef] [PubMed]
  12. Lähdesmäki, H.; Shmulevich, I.; Yli-Harja, O. On learning gene regulatory networks under the boolean network model. Mach. Learn. 2003, 52, 147–167. [Google Scholar]
  13. Liang, S.; Fuhrman, S.; Somogyi, R. REVEAL, a General Reverse Engineering Algorithms for Genetic Network Architectures. In Proceedings of Pacific Symposium on Biocomputing ’98, Maui, HI, USA, 4–9 January 1998; Volume 3, pp. 18–29.
  14. Maki, Y.; Tominaga, D.; Okamoto, M.; Watanabe, S.; Eguchi, Y. Development of a System for the Inference of Large Scale Genetic Networks. In Proceedings of Pacific Symposium on Biocomputing, Big Island, HI, USA, 3–7 January 2001; Volume 6, pp. 446–458.
  15. Müssel, C.; Hopfensitz, M.; Kestler, H.A. BoolNet-an R package for generation, reconstruction and analysis of Boolean networks. Bioinformatics 2010, 26, 1378–1380. [Google Scholar] [CrossRef] [PubMed]
  16. Maucher, M.; Kracher, B.; Kühl, M.; Kestler, H.A. Inferring Boolean network structure via correlation. Bioinformatics 2011, 27, 1529–1536. [Google Scholar] [CrossRef] [PubMed]
  17. Maucher, M.; Kracht, D.V.; Schober, S.; Bossert, M.; Kestler, H.A. Inferring Boolean functions via higher-order correlations. Comput. Stat. 2012. [Google Scholar] [CrossRef]
  18. Nam, D.; Seo, S.; Kim, S. An efficient top-down search algorithm for learning boolean networks of gene expression. Mach. Learn. 2006, 65, 229–245. [Google Scholar] [CrossRef]
  19. Shmulevich, I.; Saarinen, A.; Yli-Harja, O.; Astola, J. Inference of genetic regulatory networks via best-fit extensions. In Computational and Statistical Approaches to Genomics; Zhang, W., Shmulevich, I., Eds.; Springer: New York, NY, USA, 2003; Chapter 11; pp. 197–210. [Google Scholar]
  20. Shmulevich, I.; Yli-Harja, O.; Astola, J.; Core, C.G. Inference of Genetic Regulatory Networks Under the Best-Fit Extension Paradigm. In Proceedings of the IEEE—EURASIP Workshop on Nonlinear Signal and Image Processing (NSIP-01), Baltimore, MD, USA, 3–6 June 2001; Kluwer Academic Publishers: Norwell, MA, USA, 2002; pp. 3–6. [Google Scholar]
  21. Zheng, Y.; Kwoh, C.K. Dynamic Algorithm for Inferring Qualitative Models of Gene Regulatory Networks. In Proceedings of the 3rd Computational Systems Bioinformatics Conference, CSB 2004, Stanford, CA, USA, 16–19 August 2004; IEEE Computer Society Press: Stanford, CA, USA, 2004; pp. 353–362. [Google Scholar]
  22. Birkendorf, A.; Dichterman, E.; Jackson, J.; Klasner, N.; Simon, H.U. On restricted-focus-of-attention learnability of boolean functions. Mach. Learn. 1998, 30, 89–123. [Google Scholar] [CrossRef]
  23. Bshouty, N.H. Exact learning Boolean functions via the monotone theory. Inf. Comput. 1995, 123, 146–153. [Google Scholar] [CrossRef]
  24. Eiter, T.; Ibaraki, T.; Makino, K. Decision lists and related Boolean functions. Theor. Comput. Sci. 2002, 270, 493–524. [Google Scholar] [CrossRef]
  25. Huhtala, Y.; Kärkkäinen, J.; Porkka, P.; Toivonen, H. TANE: An efficient algorithm for discovering functional and approximate dependencies. Comput. J. 1999, 42, 100–111. [Google Scholar] [CrossRef]
  26. Littlestone, N. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Mach. Learn. 1988, 2, 285–318. [Google Scholar] [CrossRef]
  27. Mannila, H.; Raiha, K. On the complexity of inferring functional dependencies. Discret. Appl. Math. 1992, 40, 237–243. [Google Scholar] [CrossRef]
  28. Mannila, H.; Räihä, K.J. Algorithms for inferring functional dependencies from relations. Data Knowl. Eng. 1994, 12, 83–99. [Google Scholar] [CrossRef]
  29. Mehta, D.; Raghavan, V. Decision tree approximations of Boolean functions. Theor. Comput. Sci. 2002, 270, 609–623. [Google Scholar] [CrossRef]
  30. Rivest, R.L. Learning decision lists. Mach. Learn. 1987, 2, 229–246. [Google Scholar] [CrossRef]
  31. Mossel, E.; O’Donnell, R.; Servedio, R.A. Learning functions of k relevant variables. J. Comput. Syst. Sci. 2004, 69, 421–434. [Google Scholar] [CrossRef]
  32. Arpe, J.; Reischuk, R. Learning juntas in the presence of noise. Theor. Comput. Sci. 2007, 384, 2–21. [Google Scholar] [CrossRef]
  33. Shannon, C.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, IL, USA, 1963. [Google Scholar]
  34. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991. [Google Scholar]
  35. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: San Mateo, CA, USA, 1988. [Google Scholar]
  36. Gray, R.M. Entropy and Information Theory; Springer: New York, NY, USA, 1991. [Google Scholar]
  37. Zheng, Y.; Kwoh, C.K. A feature subset selection method based on high-dimensional mutual information. Entropy 2011, 13, 860–901. [Google Scholar] [CrossRef]
  38. Spellman, P.; Sherlock, G.; Zhang, M.; Iyer, V.; Anders, K.; Eisen, M.; Brown, P.; Botstein, D.; Futcher, B. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 1998, 9, 3273–3297. [Google Scholar] [CrossRef] [PubMed]
  39. Trapnell, C.; Williams, B.; Pertea, G.; Mortazavi, A.; Kwan, G.; van Baren, M.; Salzberg, S.; Wold, B.; Pachter, L. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat. Biotechnol. 2010, 28, 511–515. [Google Scholar] [CrossRef] [PubMed]
  40. Zheng, Y.; Kwoh, C.K. Dynamic algorithm for inferring qualitative models of gene regulatory networks. Int. J. Data Min. Bioinforma. 2006, 1, 111–137. [Google Scholar] [CrossRef]
  41. Zheng, Y.; Kwoh, C.K. Identifying Simple Discriminatory Gene Vectors with An Information Theory Approach. In Proceedings of the 4th Computational Systems Bioinformatics Conference, CSB 2005, Stanford, CA, USA, 8–11 August 2005; pp. 12–23.
  42. Arnone, M.; Davidson, E. The hardwiring of development: Organization and function of genomic regulatory systems. Development 1997, 124, 1851–1864. [Google Scholar] [PubMed]
  43. Perkins, T.J.; Hallett, M.T. A trade-off between sample complexity and computational complexity in learning boolean networks from time-series data. IEEE/ACM Trans. Comput. Biol. Bioinforma. 2010, 7, 118–125. [Google Scholar] [CrossRef] [PubMed]
  44. Thomas, R.; d’Ari, R. Biological Feedback; CRC Press: Boca Raton, FL, USA, 1990. [Google Scholar]
  45. Zheng, Y.; Hsu, W.; Lee, M.L.; Wong, L. Exploring Essential Attributes for Detecting MicroRNA Precursors from Background Sequences. In Data Mining and Bioinformatics; Revised Selected Papers of First International Workshop, VDMB 2006, Seoul, Korea, 11 September 2006; Dalkilic, M.M., Kim, S., Yang, J., Eds.; Volume 4316, Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; pp. 131–145. [Google Scholar]
  46. Zheng, Y.; Kwoh, C.K. Cancer classification with MicroRNA expression patterns found by an information theory approach. J. Comput. 2006, 1, 30–39. [Google Scholar] [CrossRef]
  47. Zheng, Y.; Kwoh, C.K. Informative MicroRNA Expression Patterns for Cancer Classification. In Data Mining for Biomedical Applications, Proceedings of PAKDD 2006 Workshop, BioDM 2006, Singapore, Singapore, 9 April 2006; Li, J., Yang, Q., Tan, A.-H., Eds.; Volume 3916, Lecture Notes in Computer Science. Springer: New York, NY, USA, 2006; pp. 143–154. [Google Scholar]
