Article

Choice Function-Based Hyper-Heuristics for Causal Discovery under Linear Structural Equation Models

School of Electronic and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(6), 350; https://doi.org/10.3390/biomimetics9060350
Submission received: 17 May 2024 / Revised: 5 June 2024 / Accepted: 7 June 2024 / Published: 10 June 2024
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2024)

Abstract

Causal discovery is central to human cognition, and learning directed acyclic graphs (DAGs) is its foundation. Recently, many nature-inspired meta-heuristic optimization algorithms have been proposed to serve as the basis for DAG learning. However, a single meta-heuristic algorithm requires specific domain knowledge and empirical parameter tuning and cannot guarantee good performance in all cases. Hyper-heuristics provide an alternative methodology to meta-heuristics, enabling multiple heuristic algorithms to be combined and optimized to achieve better generalization ability. In this paper, we propose a multi-population choice function hyper-heuristic to discover the causal relationships encoded in a DAG. This algorithm provides a reasonable solution for combining structural priors or possible expert knowledge with swarm intelligence. Under a linear structural equation model (SEM), we first identify partial v-structures through partial correlation analysis and use them as structural priors for the subsequent nature-inspired swarm intelligence search; the same partial correlation analysis is also used to restrict the search space. Experimental results on six standard networks demonstrate the effectiveness of the proposed methods compared to earlier state-of-the-art methods.

1. Introduction

Causal discovery from observable data is described by Judea Pearl as one of the seven important tasks and tools for moving toward a strong artificial intelligence society. It is widely used in medicine [1,2,3], biology [4], environmental science [5], and other fields. Currently, there are two types of causal modeling based on DAGs: Bayesian networks and SEMs. Bayesian networks operate on discrete data, modeling the relationships between causal variables as probabilistic relationships. In contrast, SEMs operate on continuous data, assuming the data follow a specified distribution to interpret the causal relations. Consequently, causal discovery methods based on SEMs can guarantee the unique identifiability of the causal structure. The classical SEMs proposed thus far include the linear non-Gaussian acyclic model (LiNGAM) [6], additive noise model (ANM) [7], post-nonlinear model (PNL) [8], and information-geometric causal inference (IGCI) [9].
There are two main approaches for learning a DAG: constraint-based and score-based. Constraint-based approaches, such as the well-known PC [10], utilize conditional independence (CI) tests to search for a Markov equivalence class of causal graphs and do not need to assume any kind of causal mechanism. Therefore, they can be easily extended to address more complex problems. However, high-order CI tests are time-consuming and unreliable with limited samples. Score-based approaches, which use a scoring function to estimate the quality of DAGs and then search for a DAG with the highest score, are currently the most widely utilized method. However, the number of DAGs contained in the search space increases exponentially with the number of nodes. Exact methods become infeasible because they address the entire search space, and an increasing number of heuristic methods have been proposed to address this task. Examples include K2 [11], A* [12], and GES [13], but they often become trapped in local optima. To escape local optima, nature-inspired meta-heuristic optimization algorithms have been recognized for use in DAG learning. These nature-inspired optimization algorithms include the genetic algorithm (GA) [14], evolutionary programming [15], ant colony optimization [16], cuckoo optimization [17], water cycle optimization [18], particle swarm optimization (PSO) [19,20], artificial bee colony (ABC) algorithms [21], bacterial foraging optimization (BFO) algorithms [22], and firefly algorithms (FAs) [23]. Although these optimization algorithms have achieved relatively good results, they still face the following challenges:
  • As suggested by the no-free-lunch theorem, a single meta-heuristic algorithm cannot meet the different needs of various practical problems and cannot guarantee good performance in all cases.
  • For large DAGs, the global search ability of the meta-heuristic algorithm is insufficient, the algorithm can easily fall into local optima, and the convergence accuracy is not high.
Hybridization of more than one meta-heuristic can make use of the differences and complementarities of each heuristic to improve the performance of DAG learning. Many recent results in the scientific literature seem to support this notion. Hybridization is the combination of different meta-heuristics or components of meta-heuristics. Unlike the hybridization of meta-heuristics, hyper-heuristics represent a hybridization approach where heuristics are used to choose or generate heuristics for solving combinatorial optimization problems. Recently, hyper-heuristics have been successfully applied to many practical problems in various fields, including the traveling salesman problem [24], the vehicle routing problem [25], the knapsack problem [26], and T-way testing [27]. According to the literature on these applications, hyper-heuristic methods increase the abstraction level of heuristic algorithms and can achieve better generalization ability so that satisfactory solutions can be obtained at a small cost. Given the excellent performance of the hyper-heuristic approach, designing hyper-heuristic algorithms is a topic worthy of study for DAG learning.
There are two main hyper-heuristic categories: heuristic selection and heuristic generation. A selection hyper-heuristic, which is the focus of our study, designs a high-level strategy to select low-level heuristics with the best performance in the search process. In this paper, we develop a hyper-heuristic with a choice function as the high-level strategy, and low-level heuristics are derived from the operators of several nature-inspired optimization algorithms. To further improve the search performance of hyper-heuristics, several common heuristic algorithm search strategies are also adopted. First, we learn from several hybrid algorithms that can reduce the size of the search space. Hybrid algorithms, such as MMHC [28] and PCS [29], are combinations of constraint-based approaches and score-based approaches. The most common strategy is to reduce the size of the search space through a constraint-based approach and then perform a search. For linear SEMs, we consider using partial correlation analysis to obtain a more compact search space, while partial v-structures are also identified as a structural prior and then integrated into the search process as an alternative or supplement to expert prior knowledge. Second, we learn multi-population strategies from swarm intelligence algorithms that enhance global search capabilities and reduce the likelihood of falling into local optima.
The main contributions of this paper are summarized as follows:
  • We propose a novel method to mine conditional independence information and determine the v-structure through partial correlation analysis, and demonstrate that this method is correct in both theory and practice. In the partial correlation analysis, two restricted search spaces are obtained, and the low-level heuristics can select the appropriate search space to improve efficiency.
  • We select the components of the existing heuristic algorithm to build the low-level algorithm library. To enhance the global search capability of large-scale DAGs, we redesign the global search operator. In addition, we design a search space switching operator for the global search operator. In the first stage, the global search operator works in the restricted search space to improve efficiency, and in the second stage, it works in the complete search space to improve accuracy.
  • We propose a multi-population choice function hyper-heuristic to provide sufficient coverage of the search space, and various groups communicate with each other through an immigration operator. To solve the problem that there is an order of magnitude difference between the fitness change and running time in DAG learning problems, we modify the choice function to balance the attention between them.
The remainder of this paper is organized as follows. In Section 2, we review related works. In Section 3, the preliminaries are introduced. In Section 4, we describe our proposed algorithm. In Section 5 and Section 6, the experiments and conclusions are presented, respectively.

2. Related Works

Scholars have been exploring DAG learning for more than 40 years, and Constantinou divides the research results during these years into four main research directions: ideal data, continuous optimization, weakening faithfulness, and knowledge fusion [30].
1. Ideal data. In the first direction, DAG structures are constructed using various causal discovery algorithms and optimization algorithms for datasets that are ideally unbiased and satisfy causal sufficiency and faithfulness. These algorithms are based on combinatorial optimization and form two solution directions: constraint-based approaches and score-based approaches. The specific implementation process of the constraint-based approach is divided into two steps: the first step involves conducting CI tests on variables, and the second step involves learning the global structure or local structure based on the CI test results. The most classic global structure discovery method is the PC [10] algorithm, which has the advantage of low time complexity, but at the cost of the loss of stability. Therefore, Colombo and Maathuis et al. proposed the PC-stable [31] algorithm, which effectively eliminates the order dependence in the process of skeleton determination and edge orientation. It is also a constraint-based algorithm widely recognized and used by scholars in recent years, and it is used as a comparison algorithm in this paper. In addition, Spirtes et al. proposed the FCI [10] algorithm in response to the existence of hidden variables or confounding factors in research questions. Some scholars have improved it in recent years, such as RFCI [32] and FCI+ [30]. The local structure discovery method focuses on learning Markov blankets (MBs) in a DAG. The best-known method for local structure discovery is IAMB [33], which uses conditional mutual information to determine the order in which individual variables are incorporated into the MB. Later, improved versions were proposed: Inter-IAMB and Fast-IAMB. In addition, Tsamardinos and Aliferis et al. proposed the MMPC, HITON-PC, and SI-HITON-PC [30] algorithms to discover MBs. Note that the performance of constraint-based approaches, which employ statistical tools to test conditional independence in the empirical joint distribution, may be severely limited by the hypothesis tests they use. Score-based approaches can be divided into approximate approaches and exact approaches according to whether they can obtain the global optimal solution. With well-defined scores, such as the Bayesian Information Criterion (BIC), the Minimum Description Length (MDL), and the Bayesian Dirichlet equivalence (BDe), score-based approaches turn causal discovery problems into combinatorial optimization problems. Based on this, several exact methods for solving combinatorial optimization problems, such as dynamic programming [34], branch-and-bound [35], and integer linear programming [36], have been applied to DAG learning. Due to the poor scalability of exact approaches, approximate approaches have gained considerable popularity. HC [37], which uses a greedy strategy and different operators to search the neighborhood of the current DAG and update the optimal structure until the termination condition is reached, is the most classical approximate learning algorithm in DAG space. However, this algorithm can easily fall into local optima. Therefore, many nature-inspired optimization algorithms have emerged in recent years, among which PSO [20], ABC [21], and GAs [14] are widely used meta-heuristic algorithms, and many versions of these algorithms have been proposed. Although these meta-heuristic algorithms have achieved relatively good results, their search and generalization abilities still need to be improved.
Therefore, this paper adopts a hyper-heuristic approach to DAG learning to obtain stronger search and generalization abilities compared to a single heuristic approach. To the best of our knowledge, hyper-heuristic methods have not been applied to DAG learning.
2. Continuous optimization. Most of the causal structures output by traditional constraint-based approaches and score-based approaches belong to the Markov equivalence class. To solve the problem of the Markov equivalence class, the method of introducing SEMs into causal models is receiving increasing attention. In SEMs, if some additional assumptions are made about the functional and/or parametric forms of the underlying true data-generating structure, then one can exploit asymmetries to identify the direction of causality. For example, Shimizu et al. [6,38] first proposed an estimation method based on independent component analysis (ICA) for LiNGAM, which is unique enough to identify the complete DAG by the non-Gaussian properties of the data. For nonlinear data, Hoyer et al. [7] proposed the ANM to infer causality based on the assumption of independence between cause variables and noise variables. Compared with the ANM, Zhang et al. [8] proposed PNL, which describes the data generation process more generally. Janzing et al. [9] started from the perspective of information geometry and made causal inferences based on information entropy. In 2018, Zheng et al. first proposed NOTEARS [39], which formulates the structure learning problem as a continuous optimization problem by introducing a smooth characterization of acyclicity. This method makes it possible to use gradient updating to acquire large-scale learning and online learning abilities. On this basis, an increasing number of machine learning methods, such as neural networks [40], reinforcement learning [41], and autoencoders [42], have been introduced into this field. In addition, some updated versions of NOTEARS have also been proposed recently, such as NO TEARS+ [43], NO BEARS [44], and NO FEARS [45]. However, NOTEARS and its variants still lack a theoretical analysis of the unique identification of this model [46]. Moreover, in our experiments, NOTEARS sometimes failed to return a DAG.
3. Weakening faithfulness. Traditional causal faithfulness is a very demanding requirement, and theorists are constantly trying to relax the use of faithfulness “bottom lines” regarding data distribution and independence tests to improve the robustness of models by using more relaxed faithfulness. Unlike the PC algorithm, which is based on complete causal faithfulness, the CPC [47] algorithm uses weaker adjacency faithfulness and directed faithfulness in the v-structure determination phase. Zhang and Spirtes believe that this weak faithfulness hypothesis can also be applied in the skeleton determination stage, so triangular faithfulness has been proposed [48]. In addition, Cheng et al. proposed the TPDA, which requires a stronger faithfulness hypothesis (monotone faithfulness).
4. Knowledge fusion. Expert knowledge is often used to assist in DAG modeling, and the integration of expert knowledge is divided into two methods: soft constraints and hard constraints. The former guides or intervenes in the learning process, while the latter forces the final learning outcome to meet certain conditions. For hard constraints, De Campos and Castellano et al. took the lead in modifying HC and PC algorithms to make the learning results meet the given edge constraints [49]. Later, De Campos proposed an improved B&B algorithm [50], which supports predetermining the direction of some edges before learning. Borboudakis and Tsamardinos first proposed applying this constraint to PC and FCI algorithms to improve the accuracy of the edge orientation phase [51]. For soft constraints, different initial search graphs or restricted search spaces can also be regarded as soft constraints of score search algorithms, such as MMHC [28] and PC-PSO [19]. In our algorithm, partial correlation is used to mine structural priors as a supplement or alternative to expert knowledge to guide the search process.
In summary, we consider introducing a hyper-heuristic method guided by expert knowledge to improve the search performance of causal discovery algorithms.

3. Background

3.1. DAG Model

A graph $G = (V, E)$ represents a joint distribution $P_X$ as a factorization of n variables $X = \{X_1, \ldots, X_n\}$, using n corresponding nodes $v \in V$ and connecting edges $(i, j) \in E$, where $(i, j)$ indicates an edge between $v_i$ and $v_j$. If all the edges are directed and there are no cycles, we have what is known as a DAG.
Definition 1
(v-structure). In a DAG, if there are two distinct adjacent nodes of X on a simple path, and both of them are parents of X, then these three nodes form a v-structure, and node X is called a collider. Otherwise, we call X a noncollider [37].
Definition 2
(d-separation). Two nodes $X_i, X_j$ are d-separated by $Z \subseteq X \setminus \{X_i, X_j\}$ if every simple path from $X_i$ to $X_j$ is blocked by Z. Note that a simple path is blocked if it contains at least one noncollider that is in Z, or at least one collider such that neither it nor any of its descendants is in Z [37].

3.2. SEMs and Partial Correlation

An SEM is a set of equations describing the value of each node $X_i$ in X as a function $f_{X_i}$ of its parents $pa_{X_i}$ and a random disturbance term $u_{X_i}$:
$x_i = f_{X_i}\left(pa_{X_i}, u_{X_i}\right)$ (1)
where the functions $f_{X_i}$ can be defined as linear or nonlinear. If we restrict them to be linear, the formula of a linear SEM is as follows:
$x_i = w_{X_i}^{T}\, pa_{X_i} + u_{X_i}$ (2)
Definition 3
(Partial correlation). The partial correlation coefficient between two nodes $X_i, X_j \in X$, given a set of conditions $Z \subseteq X \setminus \{X_i, X_j\}$, denoted as $\rho_{X_i, X_j \mid Z}$, or simply $\rho_{ij}$, is the correlation of the residuals $R_{X_i}$ and $R_{X_j}$ resulting from the least-squares linear regression of $X_i$ on Z and $X_j$ on Z, respectively [29].
The most common method for calculating the partial correlation coefficient relies on inverting the correlation matrix R of X. Given $R^{-1} = (r_{ij})$, the partial correlation coefficient can be efficiently computed according to Equation (3).
$\rho_{X_i, X_j \mid Z} = -\,r_{ij} \big/ \sqrt{r_{ii}\, r_{jj}}$ (3)
In particular, the full partial correlation between two nodes $X_i, X_j$ means that the set of conditions Z is equal to $X \setminus \{X_i, X_j\}$, and the set of conditions Z corresponding to a local partial correlation is a subset of $X \setminus \{X_i, X_j\}$.
Theorem 1.
When the data are generated by linear SEMs, if the random disturbance term $u_{X_i}$ has constant variance and is uncorrelated, the partial correlation analysis can be used as a criterion in the CI test [52].
Theorem 2.
If the sample size (denoted as m) of a given dataset generated by linear SEMs is sufficiently large $(m > 120)$, the test statistic t concerning the partial correlation coefficient approximately follows a t-distribution with $m - n$ degrees of freedom [29].
$t = \rho_{ij} \Big/ \sqrt{\left(1 - \rho_{ij}^{2}\right) / (m - n)} \ \sim\ t_{m-n}$ (4)
Definition 4
(Bayes factor). Given a dataset D, the Bayes factor for a null hypothesis $H_0$ over an alternative hypothesis $H_1$, denoted as $BF_{01}$, can be written according to Equation (5).
$BF_{01} = \dfrac{P(D \mid H_0)}{P(D \mid H_1)}$ (5)
For partial correlation analysis, the Bayes factor provides an index of preference for one hypothesis over another that is more intuitive to interpret than the traditional p value. Since p values are often misunderstood and misused, the Bayes factor is used as the significance test for partial correlation analysis in this paper [53]. The main reason the Bayes factor is not commonly used is that it is inconvenient to calculate. In this paper, because the test statistic of the partial correlation coefficient approximately follows a t-distribution, the Bayes factor can be computed directly using an approximation.
$BF_{01} \approx \sqrt{n}\left(1 + \dfrac{t^{2}}{m - n}\right)^{-n}$ (6)
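For concreteness, the following Python sketch (not part of the original article) computes the quantities in Equations (3)-(6) with NumPy. The helper names partial_corr and bf01 are illustrative, and the Bayes factor follows Equation (6) as reconstructed above.

import numpy as np


def partial_corr(data, i, j, cond):
    """Partial correlation of X_i and X_j given the variables in cond (Eq. (3))."""
    idx = [i, j] + sorted(cond)
    r_inv = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    return -r_inv[0, 1] / np.sqrt(r_inv[0, 0] * r_inv[1, 1])


def bf01(rho, m, n):
    """t statistic (Eq. (4)) and approximate Bayes factor for H0: rho = 0 (Eq. (6))."""
    t = rho / np.sqrt((1.0 - rho ** 2) / (m - n))
    return np.sqrt(n) * (1.0 + t ** 2 / (m - n)) ** (-n)


# Toy check on a collider X0 -> X2 <- X1: X0 and X1 are marginally independent but become
# dependent once X2 enters the conditioning set.
rng = np.random.default_rng(0)
m = 2000
x0, x1 = rng.normal(size=m), rng.normal(size=m)
x2 = x0 + x1 + 0.5 * rng.normal(size=m)
data = np.column_stack([x0, x1, x2])
print(bf01(partial_corr(data, 0, 1, []), m, 2))    # BF01 above 1: independence favoured
print(bf01(partial_corr(data, 0, 1, [2]), m, 3))   # BF01 well below 1: dependence favoured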

3.3. Scoring Function

The scoring function used in this paper is the BIC, which is composed of the goodness of fit of a model and the penalty for model complexity. The BIC score is defined as:
$Score_{BIC} = -\sum_{j=1}^{n}\left[ NLL\left(X_j, pa_{X_j}, \hat{\theta}_j^{mle}\right) + \dfrac{|\hat{\theta}_j^{mle}|}{2}\log m \right]$ (7)
where $\sum_{j=1}^{n} NLL\left(X_j, pa_{X_j}, \hat{\theta}_j^{mle}\right)$ denotes the negative log-likelihood used to evaluate the goodness of fit of a model, $\sum_{j=1}^{n} \frac{|\hat{\theta}_j^{mle}|}{2}\log m$ denotes the penalty for model complexity, $\hat{\theta}_j^{mle}$ denotes the maximum likelihood estimate of the parameters for node $X_j$, and $|\hat{\theta}_j^{mle}|$ is the number of estimated parameters for node $X_j$, which is equal to the number of its parents.
Theorem 3.
In a linear SEM, the best linear unbiased estimator of the parameters is the ordinary least-squares estimator if the random disturbance term $u_{X_i}$ has a mean of zero and constant variance and is uncorrelated.
In this paper, the least-squares method was used for parameter estimation: it determines the best-fitting parameters by minimizing the sum of squared residuals. Therefore, the negative log-likelihood for node $X_j$ can be computed according to Equation (8),
$NLL\left(X_j, pa_{X_j}, \hat{\theta}_j^{mle}\right) = \sum_{i=1}^{m}\left(x_{ij} - (\hat{\theta}_j^{mle})^{T} pa(x_{ij})\right)^{2}$ (8)
where $\hat{\theta}_j^{mle}$ is the parameter vector estimated by the least-squares method and can be computed according to Equation (9),
$\hat{\theta}_j^{mle} = \left(\mathbf{x}^{T}\mathbf{x}\right)^{-1}\mathbf{x}^{T}\mathbf{x}_j$ (9)
where $\mathbf{x}_j$ denotes the vector of observations on $X_j$ and $\mathbf{x}$ denotes the matrix of observations on its parents.
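The following illustrative sketch evaluates Equations (7)-(9) for a candidate parent assignment. It is not the authors' implementation; it adopts the maximization convention used in Section 5 (higher scores are better), and the toy network is an assumption for the example.

import numpy as np


def bic_score(data, parents):
    """Higher is better; parents maps each node index to a collection of parent indices."""
    m, n = data.shape
    score = 0.0
    for j in range(n):
        pa = list(parents.get(j, []))
        x_j = data[:, j]
        if pa:
            x_pa = data[:, pa]
            theta = np.linalg.lstsq(x_pa, x_j, rcond=None)[0]   # OLS estimate, Eq. (9)
            residuals = x_j - x_pa @ theta
        else:
            residuals = x_j
        nll = float(np.sum(residuals ** 2))                     # Eq. (8)
        score -= nll + 0.5 * len(pa) * np.log(m)                # Eq. (7)
    return score


# Toy usage on the chain X0 -> X1 -> X2: the true parent sets score higher.
rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = x0 + rng.normal(size=500)
x2 = x1 + rng.normal(size=500)
data = np.column_stack([x0, x1, x2])
print(bic_score(data, {1: [0], 2: [1]}))   # true structure
print(bic_score(data, {2: [0]}))           # misspecified structure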

4. Methodology

In this section, we first propose a new method called structural priors by partial correlation (SPPC), where the key idea is to use partial correlation analysis to mine conditional independence information. Next, this conditional independence information is integrated into the hyper-heuristic algorithm as a structural prior.

4.1. SPPC

Due to the equivalence of zero partial correlation and CI for linear SEMs, the goal of the SPPC algorithm is to use partial correlation analysis to narrow the search space as much as possible in addition to identifying partial v-structures. The SPPC algorithm starts with an empty graph and consists of three stages: full partial correlation, local partial correlation, and identification of v-structures. Through these three stages, we can obtain the global search space (GSS), local search space (LSS), and v-structure (V). The pseudocode is shown in Algorithm 1, and we explain each stage in more detail in the following paragraphs. The three stages can be summarized as follows:
  • For any two nodes $X_i, X_j$, add an undirected edge $X_i - X_j$ if the full partial correlation coefficient is significantly different from zero.
  • For every edge $X_i - X_j$ in the undirected graph built in step 1, we perform a local partial correlation analysis that looks for a d-separating set Z. If the partial correlation $\rho_{X_i, X_j \mid Z}$ vanishes, we consider this edge to be a spurious link caused by v-structure effects and then remove it.
  • For every edge $X_i - X_j$ that is removed in step 2, we find the colliders contained in Z. If node U is a collider, we add the two directed edges $X_i \to U$ and $X_j \to U$.
Algorithm 1: SPPC
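The pseudocode of Algorithm 1 is given as a figure in the published article. As a rough, self-contained illustration of the three stages only, the sketch below replaces the OSP heuristic of Algorithm 2 with a brute-force search over small conditioning sets and uses one simple collider check (dependence reappears once the candidate collider enters the conditioning set); neither of these should be read as the exact rules of SPPC, and the threshold k, max_cond, and function names are illustrative.

import itertools
import numpy as np


def dependent(data, i, j, cond, k=1.0):
    """CI decision via partial correlation (Eq. 3), t statistic (Eq. 4), and BF01 (Eq. 6)."""
    idx = [i, j] + sorted(cond)
    r_inv = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    rho = -r_inv[0, 1] / np.sqrt(r_inv[0, 0] * r_inv[1, 1])
    m, n = data.shape[0], len(idx)
    t = rho / np.sqrt((1.0 - rho ** 2) / (m - n))
    return np.sqrt(n) * (1.0 + t ** 2 / (m - n)) ** (-n) < k


def sppc(data, k=1.0, max_cond=3):
    m, n = data.shape
    # Stage 1: full partial correlation -> undirected edges shared by the GSS and LSS.
    gss = {(i, j) for i, j in itertools.combinations(range(n), 2)
           if dependent(data, i, j, [v for v in range(n) if v not in (i, j)], k)}
    nbrs = {v: set() for v in range(n)}
    for i, j in gss:
        nbrs[i].add(j)
        nbrs[j].add(i)
    lss, v_struct = set(gss), set()
    for i, j in gss:
        # Stage 2: look for a small d-separating set inside one neighbourhood (Theorem 4).
        base = sorted(min(nbrs[i] - {j}, nbrs[j] - {i}, key=len))
        for size in range(min(max_cond, len(base)) + 1):
            sep = next((set(c) for c in itertools.combinations(base, size)
                        if not dependent(data, i, j, c, k)), None)
            if sep is not None:
                lss.discard((i, j))           # spurious link caused by a v-structure
                # Stage 3: a common neighbour outside sep that re-opens the path is a collider.
                for u in (nbrs[i] & nbrs[j]) - sep:
                    if dependent(data, i, j, sorted(sep | {u}), k):
                        v_struct.update({(i, u), (j, u)})
                break
    return gss, lss, v_struct


# Toy usage: X0 -> X2 <- X1 gives a spurious X0 - X1 link in the GSS but a v-structure in V.
rng = np.random.default_rng(0)
x0, x1 = rng.normal(size=2000), rng.normal(size=2000)
x2 = x0 + x1 + 0.5 * rng.normal(size=2000)
print(sppc(np.column_stack([x0, x1, x2])))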
In the first stage, we perform a full partial correlation analysis and reconstruct a Markov random field. In the full partial correlation analysis, if $BF_{01}(X_i, X_j)$ is less than a threshold k, we consider the two nodes to be correlated and connect them in the GSS and LSS. In contrast, if $BF_{01}(X_i, X_j)$ is greater than the threshold k, we consider the two nodes to be uncorrelated. If the data satisfy the faithfulness assumption, the GSS derived from the identified undirected graph may resemble a moral graph. Therefore, we treat the GSS as the primary search space to ensure the completeness of the search space. Unfortunately, in the GSS, all parents of colliders are connected, and the v-structures are transformed into triangles. It should be noted that these spurious links caused by v-structure effects have a more severe negative impact on the search process than other erroneous edges. When dealing with large-scale problems, the GSS cannot effectively alleviate the inefficiency of the search algorithm, and it easily falls into local optima. Fortunately, the partial correlation coefficient for the CI test is easy to calculate, even when the size of the condition set is large. Therefore, to improve search efficiency, we consider further mining conditional independence information in the second stage.
In the second stage, our goal is to find a set Z that blocks all simple paths between two nodes X i , X j . Obviously, the exhaustive method is inefficient and undesirable. Therefore, heuristic strategies are usually used to find such a cut set. For example, a two-phase algorithm [54] utilized a heuristic method based on monotone faithfulness that employs the absolute value of partial correlation as a decision criterion. However, monotone faithfulness is sometimes a bad assumption [55]. In this paper, we propose a new heuristic strategy to determine a d-separating set, and the pseudocode is shown in Algorithm 2. To illustrate how our heuristic strategy works, some relevant concepts are briefly introduced.
Algorithm 2: Local partial correlation
Theorem 4.
For any two nodes $X_i, X_j$, if there is no edge between them, we can determine a d-separating set by choosing nodes only from either $MB(X_i)$ or $MB(X_j)$ [52]. Here, $MB(X_i)$ denotes the Markov random field of node $X_i$.
This theorem enables us to perform local partial correlation analysis on small sets, which makes the estimation results more efficient and stable. Next, the most important task of a heuristic strategy is to find an appropriate metric to tightly connect the CI test with d-separation.
Definition 5
(Simple path). A simple path is an adjacency path that does not contain duplicate nodes [52].
According to the definition of d-separation, for any two nodes $X_i, X_j$, if there exists such a cut set Z that makes the two nodes conditionally independent, we can finally find it by blocking all the simple paths between the two nodes.
Definition 6
(Active path). For any two nodes $X_i, X_j$, given a simple path U between the two nodes, the path U is blocked by Z if and only if at least one noncollider on U is in Z or at least one collider and all of its descendants are not in Z [52]. If path U is not blocked by Z, we call path U an active path on Z.
Definition 7
(Open simple path). For any two nodes $X_i, X_j$ and a given set of conditions Z, for a node $Z_m$ in Z, if $\rho_{X_i, Z_m \mid Z \cup \{X_j\} \setminus \{Z_m\}}$ and $\rho_{X_j, Z_m \mid Z \cup \{X_i\} \setminus \{Z_m\}}$ are both significantly different from zero, we refer to the simple paths from $Z_m$ to $X_i$ and $X_j$ as open. In this case, $Z_m$ is said to have an open simple path (OSP) to $X_i, X_j$ on Z.
Notably, an OSP is different from an active path in a directed graph because we cannot determine whether node $Z_m$ is a noncollider or a collider. The initial set of nodes with OSPs is the intersection of the Markov random fields of the two nodes.
Theorem 5.
For any two nodes $X_i, X_j$, given a set of conditions Z, if there is no edge between the two nodes in the underlying graph and $\rho_{X_i, X_j \mid Z}$ is significantly different from zero, then every active path on Z that contains colliders must satisfy the following: each collider on the path, or one of its descendants, is a node (denoted as $Z_m$) that belongs to Z and has an OSP to $X_i, X_j$.
Proof of Theorem 5.
If $\rho_{X_i, X_j \mid Z}$ is significantly different from zero, there must be at least one active path between $X_i$ and $X_j$ on Z, denoted as set U. For any path in U, denoted as path u, according to the definition of the active path, we know that all the noncolliders on u are not in Z and every collider satisfies that itself or at least one of its descendants is in Z. For any collider in u, let $Z_m$ denote the node that satisfies the above condition. We can easily construct an active path based on path u between $Z_m$ and $X_i$ on $Z \cup \{X_j\} \setminus \{Z_m\}$, and similarly between $Z_m$ and $X_j$. Therefore, we can consider that $\rho_{X_i, Z_m \mid Z \cup \{X_j\} \setminus \{Z_m\}}$ and $\rho_{X_j, Z_m \mid Z \cup \{X_i\} \setminus \{Z_m\}}$ are both significantly different from zero, and then $Z_m$ has an OSP to $X_i, X_j$.    □
If path u does not contain a collider, we cannot block this path by removing nodes. Therefore, our heuristic strategy is to start with an initial set that contains the d-separation set and then block the simple paths by gradually removing nodes that have OSPs to $X_i, X_j$. Throughout the process, as each node with an OSP is removed, we observe the number of remaining nodes that have OSPs, and this number is used as a criterion to determine which node to delete. In this paper, we greedily choose the node with the lowest value for removal. When no node in the conditional set has an OSP, the search stops.
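As an illustration of this heuristic (Algorithm 2 itself is given as a figure), the sketch below reuses the dependent() helper from the SPPC sketch in Section 4.1. The helper names are illustrative, and choosing the smaller neighbourhood of the two endpoints as the initial conditioning set (per Theorem 4) is one possible reading of the text, not the authors' exact rule.

def has_osp(data, i, j, z_m, cond):
    """Definition 7: Z_m keeps open simple paths to both X_i and X_j on the current set."""
    rest = cond - {z_m}
    return (dependent(data, i, z_m, rest | {j}) and
            dependent(data, j, z_m, rest | {i}))


def find_dsep(data, i, j, nbrs):
    """Greedy OSP-based search for a d-separating set; returns None when no set is found."""
    cond = set(min(nbrs[i] - {j}, nbrs[j] - {i}, key=len))   # Theorem 4: one endpoint's neighbours

    def remaining_osps(z):
        reduced = cond - {z}
        return sum(has_osp(data, i, j, u, reduced) for u in reduced)

    while True:
        if not dependent(data, i, j, cond):
            return cond                 # all simple paths between X_i and X_j are blocked
        osps = [z for z in cond if has_osp(data, i, j, z, cond)]
        if not osps:
            return None                 # the remaining active paths cannot be blocked
        # Greedily remove the OSP node whose deletion leaves the fewest OSP nodes behind.
        cond.discard(min(osps, key=remaining_osps))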
In the third stage, our task is to orient some edges correctly by detecting v-structures. For each edge removed in the local partial correlation analysis, we find the colliders contained in Z. If node U is a collider, we add the two directed edges $X_i \to U$ and $X_j \to U$ to V.

4.2. Proposed Multi-Population Choice Function Hyper-Heuristic

Hyper-heuristics are high-level methodologies that perform a search over the space formed by a set of low-level heuristics when solving optimization problems. In general, a hyper-heuristic contains two levels—a high-level strategy and low-level heuristics—and there is a domain barrier between the two. The former comprises two main stages: a heuristic selection strategy and a move acceptance criterion. The latter involves a pool of low-level heuristics, the initial solution, and the objective function (often also the fitness or cost function). The working principle of hyper-heuristics is shown in Figure 1.

4.2.1. The High-Level Strategy

Various combinations of heuristic selection strategies and move acceptance criteria have been reported in the literature. Classical heuristic selection strategies include choice functions, nature-inspired algorithms, multi-armed bandit (MAB)-based selection, and reinforcement learning, while move acceptance criteria include only improvement, all moves, simulated annealing, and late acceptance. In this article, we use the "choice function, accept all moves" combination as the high-level strategy, which evaluates the performance score F of each LLH using three different measurements: $f_1$, $f_2$, and $f_3$. The specific calculation method is shown in Equation (10):
$F(H_i) = \varphi f_1(H_i) + \varphi f_2(H_i, H_j) + \delta f_3(H_i)$ (10)
Parameter $f_1$ reflects the previous performance of the currently selected heuristic, $H_i$. The value of $f_1$ is evaluated using Equation (11),
$f_1(H_i) = I(H_i)/T(H_i) + \varphi f_1(H_i)$ (11)
where $I(H_i)$ is the change in solution quality produced by $H_i$ and is set to 0 when the solution quality does not improve, and $T(H_i)$ is the time taken by $H_i$.
Parameter $f_2$ attempts to capture any pairwise dependencies between heuristics. The values of $f_2$ are calculated for the current heuristic $H_i$ when employed immediately following $H_j$, using Equation (12),
$f_2(H_i, H_j) = I(H_i, H_j)/T(H_i, H_j) + \varphi f_2(H_i, H_j)$ (12)
where $I(H_i, H_j)$ is the change in solution fitness and $T(H_i, H_j)$ is the time taken by both heuristics. Similarly, $I(H_i, H_j)$ is set to 0 when the solution does not improve.
Parameter $f_3$ captures the time elapsed since the heuristic $H_i$ was last selected. The value of $f_3$ is evaluated using Equation (13):
$f_3(H_i) = \tau(H_i)$ (13)
The parameters $\varphi$ and $\delta$ take values in (0,1) and are both initially set to 0.5. If the solution quality improves, $\varphi$ is rewarded heavily by being assigned the highest value (0.99); if the solution quality deteriorates, $\varphi$ decreases linearly toward the lowest value (0.01), and $\delta$ increases by the same amount. The values of both parameters are calculated using Equations (14) and (15):
$\varphi_t = \begin{cases} 0.99, & \text{if the solution quality improves} \\ \max(\varphi_{t-1} - 0.01,\ 0.01), & \text{if the solution quality deteriorates} \end{cases}$ (14)
$\delta_t = 1 - \varphi_t$ (15)
For each LLH, the respective values of F are computed using the same parameters $\varphi$ and $\delta$. The setting scheme of these two weight parameters makes the intensification component the dominating factor in the calculation of F while ensuring the diversification of the heuristic search process. However, in DAG learning problems, there is usually an order of magnitude difference between the fitness change and running time. As a result, the balance between the intensification component and the diversification cannot be guaranteed. To solve this problem, we record the values of $I(H_i)/T(H_i)$ and $I(H_i, H_j)/T(H_i, H_j)$ when the current heuristic increases the score of the optimal structure. Then, all the previously recorded values are linearly transformed into the interval $(a\,m, b\,m)$, where m represents the average running time of all the calls. Coefficients a and b are used to balance the fitness change and running time, taking values of 0.1 and 0.2, respectively, in this article.
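A compact Python sketch of this modified choice function is given below. The class layout and the min-max rescaling of the recorded I/T values into (a·m, b·m) are one plausible reading of the description above rather than the authors' exact implementation, and all names are illustrative.

import time


class ChoiceFunction:
    """One choice function per subgroup; heuristics are opaque hashable identifiers."""

    def __init__(self, heuristics, a=0.1, b=0.2):
        now = time.time()
        self.heuristics = list(heuristics)
        self.f1 = {h: 0.0 for h in self.heuristics}
        self.f2 = {(h, g): 0.0 for h in self.heuristics for g in self.heuristics}
        self.last_used = {h: now for h in self.heuristics}
        self.phi, self.delta = 0.5, 0.5
        self.a, self.b = a, b
        self.raw = []                          # recorded I/T values of improving calls
        self.total_time, self.calls = 0.0, 0
        self.prev = None

    def select(self):
        """Equation (10): F = phi*f1 + phi*f2 + delta*f3, with f3 the time since last use."""
        def score(h):
            pairwise = self.f2[(h, self.prev)] if self.prev is not None else 0.0
            return (self.phi * self.f1[h] + self.phi * pairwise
                    + self.delta * (time.time() - self.last_used[h]))
        return max(self.heuristics, key=score)

    def update(self, h, improvement, elapsed):
        self.total_time += elapsed
        self.calls += 1
        mean_time = self.total_time / self.calls
        ratio = improvement / elapsed if improvement > 0 else 0.0
        if improvement > 0:
            self.raw.append(ratio)
        # Rescale improving I/T values into (a*mean_time, b*mean_time) so that the fitness
        # change and the running time stay on comparable scales.
        lo, hi = (min(self.raw), max(self.raw)) if self.raw else (0.0, 1.0)
        frac = 0.0 if hi == lo else (ratio - lo) / (hi - lo)
        reward = (self.a + frac * (self.b - self.a)) * mean_time if improvement > 0 else 0.0
        self.f1[h] = reward + self.phi * self.f1[h]                              # Eq. (11)
        if self.prev is not None:
            self.f2[(h, self.prev)] = reward + self.phi * self.f2[(h, self.prev)]  # Eq. (12)
        self.last_used[h] = time.time()
        self.phi = 0.99 if improvement > 0 else max(self.phi - 0.01, 0.01)       # Eq. (14)
        self.delta = 1.0 - self.phi                                               # Eq. (15)
        self.prev = h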

4.2.2. The Low-Level Heuristics

In this section, we introduce the 13 operators that make up the low-level algorithm library, which is primarily derived from several nature-inspired meta-heuristic optimization algorithms. For example, we decompose the BNC-PSO algorithm into three operators: the mutation operator, cognitive personal operator, and cooperative global operator. In addition, we modify these three operators. First, the mutation operator works on the GSS to improve its efficiency. Second, the acceleration coefficient of the cooperative global operator increases linearly from 0.1 to 0.5 to avoid premature convergence.
For the BFO algorithm, we choose only two operators: the chemotactic operator and the elimination and dispersal operator. Addition, deletion, and reversion operations are the three candidate directions for each bacterium to select in the chemotactic process, and for large DAG learning, these local operations can be blind and inefficient. Therefore, we restrict these three operations to manipulating the parent set of a node. Specifically, the addition operation continuously adds possible parents to the selected node to improve the score, and its search space is the GSS. Correspondingly, the deletion operation and the reversion operation perform sequential deletion or parent–child transformation of the parent set of the selected node to improve the score. The elimination and dispersal operator is a global search operator, and we redesign a restart scheme only for the parent set of a selected node. For a selected node, we perform a local restart of the optimal structure in the population as a bacterial elimination and dispersal operation. First, we remove all the parent nodes of the selected node, calculate the score at this point, and record the structure at this point as the starting point for the restart. Second, in the search space, an addition chemotaxis operation is performed on the selected node to find a potential parent set, which is subsequently sorted by partial correlation values. Note that we do not update the starting-point structure in this step. Third, we add the nodes of the potential parent set one by one to the selected node, and if a node can improve the score, we add both itself and its parent and update the structure. Fourth, for the parent nodes that have been added, we greedily remove the one that has the greatest negative impact on the score and update the structure until the score cannot be improved. The nodes that have the greatest negative impact are identified by the deletion chemotactic operation. Finally, we perform a reversion operation. The startup of the elimination and dispersal operator is controlled by the parameter $c_3$, which increases linearly from 0.1 to 1 and is computed using Equation (16),
$c_3 = 0.1 + 0.9\, L / L_{\max}$ (16)
where L represents the number of iterations in which the global maximum score did not improve, and $L_{\max}$ represents the maximum number of iterations allowed without increasing the global maximum score.
We decompose the ABC algorithm into three operators: worker bees, onlooker bees, and scout bees. Worker bees and onlooker bees, as local search operators, continue to work on the GSS. We redesign the scout bees to accommodate large-scale DAG learning. For a selected node, we perform a local restart of the optimal structure in the population. First, we record the parent set of the selected node. Second, a parent node is selected, and a parent–child transformation is performed with the selected node. Third, the addition, deletion, and reversion chemotaxis operations are performed successively. If the score of the new structure is higher than the score of the optimal structure, the structure is updated as a new starting point. Finally, we skip to step 2 and continue until all parent nodes have been tested. The scout operator is triggered, as controlled by the parameter $l_m$, when the individual best score does not improve for $l_m$ consecutive iterations.
Inspired by the moth–flame optimization algorithm, we randomly arrange the individual historical optimal solutions as flames and design moths to fly around them, which is equivalent to moths learning from the flames. The learning mode is the same as that of the BNC-PSO algorithm. Similarly, we adopt the learner phase from the teaching–learning-based optimization algorithm. In the current generation, each student is randomly assigned a collaborator to learn from if they are better than themselves, with the learning mode aligned with that of the BNC-PSO algorithm.
To make efficient use of structural priors, expert knowledge operators are designed. In this operator, a fixed proportion of individuals are selected to be guided by expert knowledge or a structural prior, i.e., all identified v-structures are given. For large-scale DAG learning, an insufficient sample size often leads to overfitting problems. To reduce the complexity of the model, pruning operators are designed to remove every edge whose contribution to the score is less than a threshold $\mu$. This threshold is shared by all operators as the basis for judging whether the score has improved. In addition, a more efficient neighborhood perturbation operator is designed to operate on the LSS.
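As a small illustration, the pruning operator can be sketched as follows, where score stands for any decomposable scoring function (for example, the bic_score sketch in Section 3.3). The edge-by-edge greedy sweep is an assumption, since the article does not specify the traversal order.

def prune(parents, score, data, mu):
    """parents: dict node -> set of parent indices; score(data, parents) -> float (higher is better)."""
    pruned = {v: set(ps) for v, ps in parents.items()}
    base = score(data, pruned)
    for v in list(pruned):
        for p in list(pruned[v]):
            pruned[v].discard(p)
            new = score(data, pruned)
            if base - new < mu:        # the edge contributes less than mu to the score
                base = new             # keep the edge removed
            else:
                pruned[v].add(p)       # restore the edge
    return pruned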

4.2.3. Framework of Our Algorithm

In this section, we describe the workflow of our proposed multi-population choice function hyper-heuristic (MCFHH) algorithm, the framework of which is shown in Algorithm 3. The MCFHH algorithm starts by randomly generating the initial valid population, and the initial valid population is obtained by performing several local hill-climbing operations on V. Next, we divide the population evenly into several groups, and each group runs its own choice function individually. Our algorithm terminates when the optimal score does not improve for $L_{\max}$ successive generations or the maximum number of allowed iterations is reached. In addition, we introduce the migration operator and search space switching operator when running the algorithm.
The migration operator runs only after a certain number of iterations, which we set to the minimum of 100 and N. In the migration operation, we record the optimal structure of each subgroup and then swap the best with the worst. To avoid inbreeding, we use the inbreeding rate as a parameter to limit migration operations. For DAG learning, we measure the inbreeding rate using the Hamming distance between the optimal individual of each subgroup and the globally optimal individual. In this paper, if the Hamming distance is less than 4, we assume that the optimal individual of the subgroup and the globally optimal individual are close relatives. The inbreeding rate is defined as the number of close relatives of the globally optimal individual divided by the number of subgroups, which, in this paper, is limited to no more than 0.6.
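A minimal sketch of this migration step is given below; individuals are assumed to be boolean adjacency matrices, and replacing each subgroup's worst individual with the best of the preceding subgroup is one possible reading of "swap the best with the worst".

import numpy as np


def inbreeding_rate(subgroup_bests, global_best, dist=4):
    """Fraction of subgroup-best DAGs whose Hamming distance to the global best is below dist."""
    close = sum(int(np.sum(best != global_best) < dist) for best in subgroup_bests)
    return close / len(subgroup_bests)


def migrate(groups, fitness, max_rate=0.6):
    """Exchange individuals between subgroups unless the population has become too inbred."""
    bests = [max(group, key=fitness) for group in groups]
    global_best = max(bests, key=fitness)
    if inbreeding_rate(bests, global_best) > max_rate:
        return                                   # skip migration to preserve diversity
    for k, group in enumerate(groups):
        donor = bests[(k - 1) % len(groups)]     # best individual of the previous subgroup
        worst = min(range(len(group)), key=lambda idx: fitness(group[idx]))
        group[worst] = donor.copy()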
For large-scale DAG learning, the GSS cannot guarantee the completeness of the search space when the sample size is insufficient. Therefore, we introduce a search space switching operator. The search space switching operator is executed only once, when the number of iterations without an increase in the highest score reaches $L_{\max}$. After execution, all global search operators operate within the complete search space (CSS) to correct errors caused by possible incompleteness in the GSS, and the number of iterations without an increase in the highest score is recalculated. This switching scheme is a balanced strategy that can improve efficiency in the early stage of the algorithm and improve accuracy in the late stage of the algorithm.
Algorithm 3: MCFHH
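The full pseudocode of Algorithm 3 appears as a figure in the published article; the skeleton below illustrates the overall control flow only. It reuses the ChoiceFunction sketch from Section 4.2.1, and the remaining components (initial population, low-level heuristics, migration, and search space switching) are injected as callables, so the parameter names and defaults are illustrative rather than the values of Table 2.

def mcfhh(data, llhs, init_population, fitness, apply_llh, migrate, switch_space,
          n_groups=4, max_iter=500, l_max=50, migrate_every=100):
    """init_population(data) -> list of DAGs; apply_llh(h, group, data) -> (improvement, time);
    migrate(groups, fitness) and switch_space() implement the operators described above."""
    groups = [init_population(data) for _ in range(n_groups)]
    cfs = [ChoiceFunction(llhs) for _ in range(n_groups)]    # one choice function per group
    best = max((ind for g in groups for ind in g), key=fitness)
    stall, switched = 0, False
    for it in range(1, max_iter + 1):
        for group, cf in zip(groups, cfs):
            h = cf.select()                                   # high-level heuristic selection
            improvement, elapsed = apply_llh(h, group, data)  # run the chosen LLH; all moves accepted
            cf.update(h, improvement, elapsed)
        current = max((ind for g in groups for ind in g), key=fitness)
        if fitness(current) > fitness(best):
            best, stall = current, 0
        else:
            stall += 1
        if it % migrate_every == 0:
            migrate(groups, fitness)                          # exchange individuals between groups
        if stall >= l_max:
            if not switched:
                switch_space()                                # GSS -> CSS, executed only once
                switched, stall = True, 0
            else:
                break                                         # termination condition reached
    return best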

5. Experiments

In this section, several existing competitive algorithms and networks are selected to test the performance of the MCFHH algorithm. The following algorithms were selected for comparison: the PC-stable, LiNGAM, PCS, BNC-PSO, and NOTEARS algorithms (https://github.com/xunzheng/notears, accessed on 16 May 2024). We added structural prior knowledge, including the initial population and the GSS, to the BNC-PSO algorithm in this paper. All the experiments were implemented and executed on a computer running Windows 10 with an AMD 1.7 GHz CPU and 16 GB of memory. NOTEARS was implemented in Python 3.10.5, and the other algorithms were implemented in MATLAB R2020a.

5.1. Networks and Datasets

In our experiments, six networks were selected from the BNLEARN repository (https://www.bnlearn.com/bnrepository/, accessed on 16 May 2024), and a summary of these networks is shown in Table 1.
The datasets used in the experiments were generated by linear SEMs. Three different SEMs were designed, including the linear Gaussian model and the linear non-Gaussian model, as follows:
(1) $x_i = w_{1,X_i}^{T}\, pa_{X_i} + N(0, 1)$
(2) $x_i = w_{2,X_i}^{T}\, pa_{X_i} + N(0, 1)$
(3) $x_i = w_{1,X_i}^{T}\, pa_{X_i} + \mathrm{rand}(-1, 1)$
where $w_{1,X_i} = \pm 1 + N(0,1)/4$ and $w_{2,X_i} = \mathrm{rand}(0.2, 1)$. For SEM1, the weight $w_{1,X_i}$ follows a Gaussian distribution, and the random disturbance term also follows a Gaussian distribution. Thus, SEM1 follows a multivariate Gaussian distribution and is a linear Gaussian model. For SEM2, the weight $w_{2,X_i}$ is randomly and uniformly distributed, and the random disturbance term follows a Gaussian distribution. Thus, SEM2 also follows a multivariate Gaussian distribution and is a linear Gaussian model. For SEM3, the weight follows a Gaussian distribution, and the random disturbance term is randomly and uniformly distributed. Thus, SEM3 is a linear non-Gaussian model.
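For illustration, the three schemes can be sampled as follows. The toy chain DAG, the function names, and the rand(-1, 1) noise bound for SEM3 (taken from the reconstructed equation above) are assumptions for the example, not part of the original experimental code.

import numpy as np


def generate(dag_parents, order, m, sem, rng):
    """dag_parents: node -> list of parents; order: a topological order of the nodes."""
    n = len(order)
    data = np.zeros((m, n))
    for v in order:
        pa = dag_parents.get(v, [])
        if sem in (1, 3):
            w = rng.choice([-1.0, 1.0], size=len(pa)) + rng.normal(size=len(pa)) / 4   # w1
        else:
            w = rng.uniform(0.2, 1.0, size=len(pa))                                    # w2
        noise = rng.normal(size=m) if sem in (1, 2) else rng.uniform(-1.0, 1.0, size=m)
        data[:, v] = data[:, pa] @ w + noise
    return data


rng = np.random.default_rng(0)
chain = {1: [0], 2: [1]}                     # toy DAG X0 -> X1 -> X2
samples = generate(chain, order=[0, 1, 2], m=1000, sem=3, rng=rng)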

5.2. Performance Evaluation of the MCFHH Algorithm

The parameters of the MCFHH algorithm are listed in Table 2, and the parameters of the other algorithms are the best values from the corresponding literature. To evaluate the performance of these algorithms, the following metrics were used:
  • BIC: the BIC score of the final output structure.
  • SBS: the BIC score of the standard network.
  • AD: the number of arcs incorrectly added, averaged over all trials.
  • DD: the number of arcs incorrectly deleted, averaged over all trials.
  • RD: the number of arcs incorrectly reversed, averaged over all trials.
  • RET: the execution time of the restriction phase.
  • SET: the execution time of the search phase.
  • F1: the F1 score of the final output structure.
The first performance metric is the BIC (higher is better), representing the score of the final output structure. The calculation method of the BIC was introduced in Section 3.3. The SBS represents the score of the original network, which is a fixed reference value based on the sample data. AD, DD, and RD are used to evaluate the structural errors of the learning result, representing the number of incorrectly added edges, incorrectly deleted edges, and incorrectly reversed edges, respectively, in the final output network compared to the original network. RET and SET represent the execution times of the restriction phase (SPPC) and search phase (MCFHH), respectively. The F1 score (higher is better) is calculated as $F1 = 2PR/(P + R)$, where P represents precision and R represents recall.
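The structural metrics can be computed from boolean adjacency matrices as sketched below. Counting only correctly oriented edges as true positives for the F1 score is one common convention and is an assumption here, as is the toy comparison at the end.

import numpy as np


def structural_metrics(learned, true):
    """learned, true: boolean adjacency matrices with [i, j] = True for an edge i -> j."""
    ad = dd = rd = 0
    n = learned.shape[0]
    for i in range(n):
        for j in range(n):
            if learned[i, j] and not true[i, j]:
                if true[j, i]:
                    rd += 1          # reversed: the true edge points the other way
                else:
                    ad += 1          # added: no edge in either direction of the true graph
            elif true[i, j] and not learned[i, j] and not learned[j, i]:
                dd += 1              # deleted: the true edge is missing in both directions
    tp = int(np.sum(learned & true))                 # correctly oriented edges
    precision = tp / max(int(np.sum(learned)), 1)
    recall = tp / max(int(np.sum(true)), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return ad, dd, rd, f1


# Toy usage: true chain X0 -> X1 -> X2; the learned graph reverses one edge and adds another.
true = np.zeros((3, 3), dtype=bool)
true[0, 1] = true[1, 2] = True
learned = np.zeros((3, 3), dtype=bool)
learned[0, 1] = learned[2, 1] = learned[0, 2] = True
print(structural_metrics(learned, true))   # (AD, DD, RD, F1)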
First, extensive experiments were conducted on six standard networks and three different linear SEMs to verify that our proposed algorithm is effective and robust. In our experiment, for each of the networks, we randomly sampled four datasets with 1000, 3000, 5000, and 10,000 cases. We report the mean and standard deviation of the evaluation indicators of 10 runs. Table 3, Table 4 and Table 5 present the results of the experiments on each dataset.
It can be seen in Table 3, Table 4 and Table 5 that for all the datasets, the standard deviations of the BIC, AD, DD, and RD are all 0 after multiple runs, indicating no variation in the results of the MCFHH algorithm across multiple runs. At the same time, the mean value of the BIC is consistently greater than that of the SBS across all datasets. The above results fully demonstrate the stable convergence performance of the MCFHH algorithm. Regardless of the size of the network, as long as a network with a higher score exists in the search space, the algorithm has the ability to find it. Regarding structural errors, we can observe that for datasets with structural errors in the output structure, the BIC surpassed the SBS (highlighted in bold in the table). The reason for this may be that the data cannot fully reflect the network structure's characteristics. In addition, for all three SEMs, our algorithm yielded structures with stable F1 scores, which shows that the MCFHH algorithm is a robust DAG learning algorithm, whether applied to Gaussian or non-Gaussian models. In terms of execution time, RET increased very little and SET did not increase significantly as the sample size increased, indicating that our algorithm can handle large sample sizes. However, with the increase in the size of the network, SET increased much faster than RET. The reason for this is that the second-stage search on the CSS increased the time cost. Next, we considered whether and under what circumstances the search space switching operator should be removed to save time. In theory, adding the search space switching operator can reduce the dependence of the algorithm on the sample size, which can also be seen in the insensitivity of each performance index to the sample size. Therefore, for small sample data, the search space switching operator may be an important guarantee for accuracy. Accordingly, we report the performance of the MCFHH algorithm after removing the search space switching operator when the sample size was sufficient (10,000).
As shown in Table 6, for the four networks—alarm, win95pts, munin, and pigs—the same structure could still be output after deleting the search space switching operator. For the hepar2 and andes networks, although the same structure could not be output, the maximum coefficient of variation (standard deviation divided by the mean) of the output structure on the three SEMs was 0.08% and 0.04%, respectively. Therefore, when the sample size was sufficient, deleting the search space switching operator still reliably produced a high-score structure. However, for the relatively complex hepar2 and andes networks, it was difficult to guarantee the integrity of the GSS, even with a sufficient sample size, and the edges not covered by the GSS caused structural errors, which the search space switching operator aimed to correct by increasing the coverage of the search space. Regarding running time, Table 6 shows that except for the alarm and win95pts networks, the removal of the search space switching operator significantly reduced the time cost. For hepar2, munin, andes, and pigs, the average SET reduction rates were 50%, 84%, 72%, and 95%, respectively. In summary, in the first stage, the search space switching operator used the constraint method to limit the search space to improve search efficiency, and in the second stage, it corrected the structural errors caused by the incomplete search space by extending the coverage of the search space. In practice, we would likely face a trade-off between accuracy and speed.

5.3. Comparison with Other Algorithms

The performance of these comparison algorithms depends on the sample size. For a fair comparison, the sample size was uniformly set to 1000. Due to the serious impact of the search space switching operator on the performance of our algorithm, the algorithm that deletes the search space switching operator was also compared as a new algorithm, denoted as MCFHH1. Obviously, MCFHH1 represents the performance of our algorithm in the worst-case scenario.
Table 7 and Table 8 show the comparison results of the F1 scores and BIC scores, respectively, between our proposed algorithm and other algorithms in different SEMs. In these comparisons, MCFHH consistently outperformed the others (highlighted in bold in the table), which shows that our proposed algorithm is accurate and robust in linear SEMs. To further demonstrate the performance of our algorithm, we compared only MCFHH1 with other algorithms. The comparison of the BIC and F1 scores confirms the ranking MCFHH1 > BNC-PSO > PCS > NOTEARS > PC-stable > LiNGAM, which verifies that our algorithm maintains the reliability of the search even in the worst-case scenario.
Like the MCFHH1 algorithm, PCS also uses partial correlation to limit the search space. Its restrictions are more relaxed, so the coverage of its search space is wider. In theory, it is easier to search for a structure with a higher score. However, by comparing the BIC scores of PCS and MCFHH1, we found that the BIC scores of MCFHH1 were not lower than those of PCS on 12 datasets, most of which were concentrated on large-scale networks, such as munin, andes, and pigs. These results indicate that MCFHH1 has a stronger global search capability than PCS. Compared to MCFHH1, NOTEARS achieved higher BIC scores on 6 out of 18 datasets, while it failed to produce any output seven times. This means that NOTEARS is unstable and cannot stably output results. In addition, the performance of both the constraint-based method (PC-stable) and the exploiting structural asymmetries method (LiNGAM) was significantly worse compared to our method, especially on the andes network.
Figure 2 illustrates the BIC scores with respect to the number of iterations for six networks, and the results after the algorithm stopped are indicated by dotted lines. As shown in Figure 2, three algorithms improved the quality of the solutions at the beginning of the search process, but BNC-PSO converged faster than MCFHH and MCFHH1. This phenomenon became more obvious on the last four networks at larger scales. With the increase in the number of iterations, the convergence speeds of the three algorithms tended to be the same. For the hepar2 and win95pts networks, we can clearly observe that the MCFHH algorithm continued to find structures with higher scores after the BNC-PSO and MCFHH1 algorithms converged. In addition, on the hepar2 and andes networks, the convergence accuracy of BNC-PSO was significantly lower than that of MCFHH and MCFHH1. This shows that the BNC-PSO algorithm cannot guarantee good performance in all cases. By comparing BNC-PSO and MCFHH1, we found that the latter achieved the highest BIC scores across all datasets and the highest F1 scores on 14 out of 18 datasets, with an order of magnitude difference in the BIC scores between the two on the andes network. Therefore, we can conclude that the latter has better generalizability and search capability than the former.
In summary, our algorithm increases population diversity by combining and optimizing a variety of nature-inspired heuristics, thereby increasing convergence accuracy and decreasing convergence speed. In our algorithm, the completeness of the search space guarantees convergence accuracy, but the complete search space greatly increases the time cost. Therefore, finding ways to limit the search space as much as possible while ensuring its completeness will be a direction for improving the performance of our algorithm. Overall, regardless of whether the data are Gaussian or non-Gaussian, our algorithm can stably output a structure that is closer to true causality. Our algorithm uses the constraint method to reduce the difficulty of the search method and uses the search method to correct the errors caused by the constraint method. To some extent, the advantages of the two types of methods are absorbed, and the defects of both methods are compensated for.

6. Conclusions and Future Research

In this paper, structural priors are obtained using the SPPC algorithm and integrated into the score search process to improve search efficiency. We prove the correctness and validity of the SPPC in theory. To make effective use of this prior knowledge, we devised a hyper-heuristic method called MCFHH to discover causality under linear SEMs. The experimental results show that the proposed method has better generalizability and search capability. Compared to state-of-the-art methods, it outputs structures that are closer to real causality. Additional efforts will be required to expand our work. In this paper, we have only proposed a hybrid approach under linear SEMs, and we intend to further investigate this hybrid method for both discrete and nonlinear problems. We will also develop better hyper-heuristic algorithms.

Author Contributions

Conceptualization, Y.D.; methodology, Y.D.; software, Y.D.; validation, Y.D.; formal analysis, Y.D. and Z.W.; investigation, Y.D.; resources, Y.D.; data curation, Y.D. and Z.W.; writing—original draft preparation, Y.D.; writing—review and editing, Y.D.; visualization, Y.D.; supervision, X.G.; project administration, Y.D. and Z.W.; funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61573285), the Fundamental Research Funds for the Central Universities, China (No. G2022KY0602), and the key core technology research plan of Xi’an, China (No. 21RGZN0016).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The true networks of all eight datasets are known, and they are publicly available (http://www.bnlearn.com/bnrepository, accessed on 10 May 2024).

Acknowledgments

I have benefited from the presence of my supervisor and classmates. I am very grateful to my supervisor Xiaoguang Gao who gave me encouragement, careful guidance, and helpful advice throughout the writing of this thesis.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABCArtificial bee colony
BFBayes factor
BFOBacterial foraging optimization
BICBayesian information criterion
CIConditional independence
CSSComplete search space
DAGDirected acyclic graph
GSSGlobal search space
LLHLow-level heuristics
LSSLocal search space
MBMarkov blanket
MCFHHMulti-population choice function hyper-heuristic
NLLNegative log-likelihood
OSPOpen simple path
PSOParticle swarm optimization
SEMStructural equation model
SPPCStructural priors by partial correlation

References

  1. Larsson, S.C.; Butterworth, A.S.; Burgess, S. Mendelian randomization for cardiovascular diseases: Principles and applications. Eur. Heart J. 2023, 44, 4913–4924. [Google Scholar] [CrossRef] [PubMed]
  2. Michoel, T.; Zhang, J.D. Causal inference in drug discovery and development. Drug Discov. Today 2023, 28, 17. [Google Scholar] [CrossRef] [PubMed]
  3. Pavlovic, M.; Al Hajj, G.S.; Kanduri, C.; Pensar, J.; Wood, M.E.; Sollid, L.M.; Greiff, V.; Sandve, G.K. Improving generalization of machine learning-identified biomarkers using causal modelling with examples from immune receptor diagnostics. Nat. Mach. Intell. 2024, 6, 15–24. [Google Scholar] [CrossRef]
  4. Corander, J.; Hanage, W.P.; Pensar, J. Causal discovery for the microbiome. Lancet Microbe 2022, 3, E881–E887. [Google Scholar] [CrossRef] [PubMed]
  5. Runge, J.; Gerhardus, A.; Varando, G.; Eyring, V.; Camps-Valls, G. Causal inference for time series. Nat. Rev. Earth Environ. 2023, 4, 487–505. [Google Scholar] [CrossRef]
  6. Shimizu, S.; Hoyer, P.O.; Hyvärinen, A.; Kerminen, A. A linear non-Gaussian acyclic model for causal discovery. J. Mach. Learn. Res. 2006, 7, 2003–2030. [Google Scholar]
  7. Hoyer, P.O.; Janzing, D.; Mooij, J.M.; Peters, J.; Schölkopf, B. Nonlinear causal discovery with additive noise models. In Proceedings of the Advances in Neural Information Processing Systems 21—Proceedings of the 2008 Conference, Vancouver, BC, Canada, 8–11 December 2008; pp. 689–696. [Google Scholar]
  8. Zhang, K.; Wang, Z.K.; Zhang, J.J.; Schölkopf, B. On Estimation of Functional Causal Models: General Results and Application to the Post-Nonlinear Causal Model. ACM Trans. Intell. Syst. Technol. 2016, 7, 22. [Google Scholar] [CrossRef]
  9. Janzing, D.; Mooij, J.; Zhang, K.; Lemeire, J.; Zscheischler, J.; Daniusis, P.; Steudel, B.; Schölkopf, B. Information-geometric approach to inferring causal directions. Artif. Intell. 2012, 182, 1–31. [Google Scholar] [CrossRef]
  10. Spirtes, P.; Glymour, C.; Scheines, R. Causation, Prediction, and Search, 2nd ed.; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  11. Cooper, G.F.; Herskovits, E. A Bayesian method for the induction of probabilistic networks from data. Mach. Learn. 1992, 9, 309–347. [Google Scholar] [CrossRef]
  12. Yuan, C.; Malone, B. Learning Optimal Bayesian Networks: A Shortest Path Perspective. J. Artif. Intell. Res. 2013, 48, 23–65. [Google Scholar] [CrossRef]
  13. Chickering, D.M. Optimal structure identification with greedy search. J. Mach. Learn. Res. 2003, 3, 507–554. [Google Scholar]
  14. Lee, J.; Chung, W.Y.; Kim, E. Structure learning of Bayesian networks using dual genetic algorithm. IEICE Trans. Inf. Syst. 2008, 91, 32–43. [Google Scholar] [CrossRef]
  15. Cui, G.; Wong, M.L.; Lui, H.K. Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Manag. Sci. 2006, 52, 597–612. [Google Scholar] [CrossRef]
  16. Gámez, J.A.; Puerta, J.M. Searching for the best elimination sequence in Bayesian networks by using ant colony optimization. Pattern Recognit. Lett. 2002, 23, 261–277. [Google Scholar] [CrossRef]
  17. Askari, M.B.A.; Ahsaee, M.G. Bayesian network structure learning based on cuckoo search algorithm. In Proceedings of the 6th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), Shahid Bahonar University of Kerman, Kerman, Iran, 28 February–2 March 2018; pp. 127–130. [Google Scholar]
  18. Wang, J.Y.; Liu, S.Y. Novel binary encoding water cycle algorithm for solving Bayesian network structures learning problem. Knowl.-Based Syst. 2018, 150, 95–110. [Google Scholar] [CrossRef]
  19. Sun, B.D.; Zhou, Y.; Wang, J.J.; Zhang, W.M. A new PC-PSO algorithm for Bayesian network structure learning with structure priors. Expert Syst. Appl. 2021, 184, 11. [Google Scholar] [CrossRef]
  20. Gheisari, S.; Meybodi, M.R. BNC-PSO: Structure learning of Bayesian networks by Particle Swarm Optimization. Inf. Sci. 2016, 348, 272–289. [Google Scholar] [CrossRef]
  21. Ji, J.Z.; Wei, H.K.; Liu, C.N. An artificial bee colony algorithm for learning Bayesian networks. Soft Comput. 2013, 17, 983–994. [Google Scholar] [CrossRef]
  22. Yang, C.C.; Ji, J.Z.; Liu, J.M.; Liu, J.D.; Yin, B.C. Structural learning of Bayesian networks by bacterial foraging optimization. Int. J. Approx. Reason. 2016, 69, 147–167. [Google Scholar] [CrossRef]
  23. Wang, X.C.; Ren, H.J.; Guo, X.X. A novel discrete firefly algorithm for Bayesian network structure learning. Knowl.-Based Syst. 2022, 242, 10. [Google Scholar] [CrossRef]
  24. Pandiri, V.; Singh, A. A hyper-heuristic based artificial bee colony algorithm for k-Interconnected multi-depot multi-traveling salesman problem. Inf. Sci. 2018, 463, 261–281. [Google Scholar] [CrossRef]
  25. Wang, Z.; Liu, J.L.; Zhang, J.L. Hyper-heuristic algorithm for traffic flow-based vehicle routing problem with simultaneous delivery and pickup. J. Comput. Des. Eng. 2023, 10, 2271–2287. [Google Scholar] [CrossRef]
  26. Drake, J.H.; Özcan, E.; Burke, E.K. A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem. Evol. Comput. 2016, 24, 113–141. [Google Scholar] [CrossRef] [PubMed]
  27. Zamli, K.Z.; Din, F.; Kendall, G.; Ahmed, B.S. An experimental study of hyper-heuristic selection and acceptance mechanism for combinatorial t-way test suite generation. Inf. Sci. 2017, 399, 121–153. [Google Scholar] [CrossRef]
  28. Tsamardinos, I.; Brown, L.E.; Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach. Learn. 2006, 65, 31–78. [Google Scholar] [CrossRef]
  29. Yang, J.; Li, L.; Wang, A.G. A partial correlation-based Bayesian network structure learning algorithm under linear SEM. Knowl.-Based Syst. 2011, 24, 963–976. [Google Scholar] [CrossRef]
  30. Kitson, N.K.; Constantinou, A.C.; Guo, Z.G.; Liu, Y.; Chobtham, K. A survey of Bayesian Network structure learning. Artif. Intell. Rev. 2023, 56, 8721–8814. [Google Scholar] [CrossRef]
  31. Colombo, D.; Maathuis, M.H. Order-Independent Constraint-Based Causal Structure Learning. J. Mach. Learn. Res. 2014, 15, 3741–3782. [Google Scholar]
  32. Ogarrio, J.M.; Spirtes, P.; Ramsey, J. A Hybrid Causal Search Algorithm for Latent Variable Models. JMLR Workshop Conf. Proc. 2016, 52, 368–379. [Google Scholar]
  33. Tsamardinos, I.; Aliferis, C.F.; Statnikov, A. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–27 August 2003; pp. 673–678. [Google Scholar]
  34. Koivisto, M.; Sood, K. Exact Bayesian structure discovery in Bayesian networks. J. Mach. Learn. Res. 2004, 5, 549–573. [Google Scholar]
  35. de Campos, C.P.; Ji, Q. Efficient Structure Learning of Bayesian Networks using Constraints. J. Mach. Learn. Res. 2011, 12, 663–689. [Google Scholar]
  36. Cussens, J.; Järvisalo, M.; Korhonen, J.H.; Bartlett, M. Bayesian Network Structure Learning with Integer Programming: Polytopes, Facets and Complexity. J. Artif. Intell. Res. 2017, 58, 185–229. [Google Scholar] [CrossRef]
  37. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  38. Shimizu, S.; Inazumi, T.; Sogawa, Y.; Hyvärinen, A.; Kawahara, Y.; Washio, T.; Hoyer, P.O.; Bollen, K. DirectLiNGAM: A Direct Method for Learning a Linear Non-Gaussian Structural Equation Model. J. Mach. Learn. Res. 2011, 12, 1225–1248. [Google Scholar]
  39. Zheng, X.; Aragam, B.; Ravikumar, P.; Xing, E.P. DAGs with NO TEARS: Continuous Optimization for Structure Learning. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 2–8 December 2018. [Google Scholar]
  40. Yu, Y.; Chen, J.; Gao, T.; Yu, M. DAG-GNN: DAG Structure Learning with Graph Neural Networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  41. Wang, X.; Du, Y.; Zhu, S.; Ke, L.; Chen, Z.; Hao, J.; Wang, J. Ordering-Based Causal Discovery with Reinforcement Learning. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 19–27 August 2021; pp. 3566–3573. [Google Scholar]
  42. Zhang, M.H.; Jiang, S.L.; Cui, Z.C.; Garnett, R.; Chen, Y.X. D-VAE: A Variational Autoencoder for Directed Acyclic Graphs. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  43. Zheng, X.; Dan, C.; Aragam, B.; Ravikumar, P.; Xing, E.P. Learning Sparse Nonparametric DAGs. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Online, 26–28 August 2020; pp. 3414–3424. [Google Scholar]
  44. Lee, H.C.; Danieletto, M.; Miotto, R.; Cherng, S.T.; Dudley, J.T. Scaling structural learning with NO-BEARS to infer causal transcriptome networks. In Proceedings of the Pacific Symposium on Biocomputing, Fairmont Orchid, HI, USA, 3–7 January 2020; pp. 391–402. [Google Scholar]
  45. Wei, D.; Gao, T.; Yu, Y. DAGs with no fears: A closer look at continuous optimization for learning Bayesian networks. In Proceedings of the Advances in Neural Information Processing Systems, Virtual, 6–12 December 2020. [Google Scholar]
  46. Kaiser, M.; Sipos, M. Unsuitability of NOTEARS for Causal Graph Discovery when Dealing with Dimensional Quantities. Neural Process. Lett. 2022, 54, 1587–1595. [Google Scholar] [CrossRef]
  47. Ramsey, J.; Spirtes, P.; Zhang, J. Adjacency-faithfulness and conservative causal inference. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI 2006, Cambridge, MA, USA, 13–16 July 2006; pp. 401–408. [Google Scholar]
  48. Zhang, J.; Spirtes, P. Detection of unfaithfulness and robust causal inference. Minds Mach. 2008, 18, 239–271. [Google Scholar] [CrossRef]
  49. de Campos, L.M.; Castellano, J.G. Bayesian network learning algorithms using structural restrictions. Int. J. Approx. Reason. 2007, 45, 233–254. [Google Scholar] [CrossRef]
  50. Correia, A.H.C.; de Campos, C.P.; van der Gaag, L.C. An Experimental Study of Prior Dependence in Bayesian Network Structure Learning. In Proceedings of the 11th International Symposium on Imprecise Probabilities—Theories and Applications (ISIPTA), Ghent, Belgium, 3–6 July 2019; pp. 78–81. [Google Scholar]
  51. Borboudakis, G.; Tsamardinos, I. Scoring and searching over Bayesian networks with causal and associative priors. In Proceedings of the Uncertainty in Artificial Intelligence—Proceedings of the 29th Conference, UAI 2013, Bellevue, WA, USA, 12–14 July 2013; pp. 102–111. [Google Scholar]
  52. Wang, Z.X.; Chan, L.W. Learning Bayesian Networks from Markov Random Fields: An Efficient Algorithm for Linear Models. ACM Trans. Knowl. Discov. Data 2012, 6, 31. [Google Scholar] [CrossRef]
  53. Chén, O.Y.; Bodelet, J.S.; Saraiva, R.G.; Phan, H.; Di, J.R.; Nagels, G.; Schwantje, T.; Cao, H.Y.; Gou, J.T.; Reinen, J.M.; et al. The roles, challenges, and merits of the p value. Patterns 2023, 4, 22. [Google Scholar] [CrossRef] [PubMed]
  54. Wang, Z.; Chan, L. An efficient causal discovery algorithm for linear models. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 25–28 July 2010; pp. 1109–1117. [Google Scholar]
  55. Cheng, J.; Greiner, R.; Kelly, J.; Bell, D.; Liu, W.R. Learning Bayesian networks from data: An information-theory based approach. Artif. Intell. 2002, 137, 43–90. [Google Scholar] [CrossRef]
Figure 1. Generic structure of a traditional hyper-heuristic model.
Figure 2. Convergence of the BIC scores of the three algorithms on the six networks.
Table 1. Summary of networks.
Network | Nodes | Edges | Max.indeg | Max.outdeg | Avg.deg
alarm | 37 | 46 | 4 | 5 | 2.49
hepar2 | 70 | 123 | 6 | 17 | 3.51
win95pts | 76 | 112 | 7 | 10 | 2.95
munin | 189 | 282 | 3 | 15 | 2.98
andes | 223 | 338 | 6 | 12 | 3.03
pigs | 441 | 592 | 2 | 39 | 2.68
Table 2. Parameters of the MCFHH.
Param. | Value | Description
k | 0.01 | The threshold of the Bayes factor
nPop | 50 | The population size
Lmax | 2·n | The maximum number of unpromoted iterations allowed
MaxIt | 5000 | The maximum number of iterations allowed
sn | 5 | The number of subgroups
μ | ln m | The threshold of pruning
lm | 20 | Control parameter of the elimination and dispersal operator
zj | 0.5 | Percentage receiving expert guidance
Table 3. Performance of MCFHH algorithm on different datasets for SEM1. Bold denotes that BIC is greater than SBS.
Network | Dataset | BIC | SBS | ADD | DRD | RE | T(s) | SET(s) | F1
alarm | 1000 | −1.8710 × 10^4 ± 0 | −1.8710 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.27 | 2.49 | 1 ± 0
alarm | 3000 | −5.5791 × 10^4 ± 0 | −5.5791 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.28 | 2.98 | 1 ± 0
alarm | 5000 | −9.2964 × 10^4 ± 0 | −9.2964 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.29 | 5.11 | 1 ± 0
alarm | 10,000 | −1.8598 × 10^5 ± 0 | −1.8598 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.29 | 9.49 | 1 ± 0
hepar2 | 1000 | −3.5372 × 10^4 ± 0 | −3.5372 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.08 | 28.07 | 1 ± 0
hepar2 | 3000 | −1.0568 × 10^5 ± 0 | −1.0568 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.74 | 35.54 | 1 ± 0
hepar2 | 5000 | −1.7600 × 10^5 ± 0 | −1.7600 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.99 | 69.39 | 1 ± 0
hepar2 | 10,000 | −3.5157 × 10^5 ± 0 | −3.5157 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.06 | 151.38 | 1 ± 0
win95pts | 1000 | −3.8374 × 10^4 ± 0 | −3.8374 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.82 | 19.16 | 1 ± 0
win95pts | 3000 | −1.1480 × 10^5 ± 0 | −1.1480 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.24 | 19.98 | 1 ± 0
win95pts | 5000 | −1.9122 × 10^5 ± 0 | −1.9122 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.62 | 39.27 | 1 ± 0
win95pts | 10,000 | −3.8166 × 10^5 ± 0 | −3.8166 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.67 | 65.02 | 1 ± 0
munin | 1000 | −9.5488 × 10^4 ± 0 | −9.5663 × 10^4 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.62 | 160.46 | 0.8832 ± 0
munin | 3000 | −2.8535 × 10^5 ± 0 | −2.8556 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.74 | 172.43 | 0.8832 ± 0
munin | 5000 | −4.7464 × 10^5 ± 0 | −4.7486 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.82 | 272.54 | 0.8832 ± 0
munin | 10,000 | −9.4648 × 10^5 ± 0 | −9.4673 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.96 | 518.02 | 0.8832 ± 0
andes | 1000 | −1.1296 × 10^5 ± 0 | −1.1296 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 4.92 | 660.71 | 1 ± 0
andes | 3000 | −3.3678 × 10^5 ± 0 | −3.3678 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.69 | 617.39 | 1 ± 0
andes | 5000 | −5.6010 × 10^5 ± 0 | −5.6010 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 6.19 | 875.83 | 1 ± 0
andes | 10,000 | −1.1172 × 10^6 ± 0 | −1.1172 × 10^6 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 6.21 | 1640.18 | 1 ± 0
pigs | 1000 | −2.2197 × 10^5 ± 0 | −2.2301 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.90 | 2850.50 | 0.5683 ± 0
pigs | 3000 | −6.6343 × 10^5 ± 0 | −6.6467 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.93 | 2635.21 | 0.5683 ± 0
pigs | 5000 | −1.1037 × 10^6 ± 0 | −1.1051 × 10^6 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.79 | 2914.08 | 0.5683 ± 0
pigs | 10,000 | −2.2062 × 10^6 ± 0 | −2.2077 × 10^6 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.93 | 3388.36 | 0.5683 ± 0
Table 4. Performance of MCFHH algorithm on different datasets for SEM2. Bold denotes that BIC is greater than SBS.
Network | Dataset | BIC | SBS | ADD | DRD | RE | T(s) | SET(s) | F1
alarm | 1000 | −1.8638 × 10^4 ± 0 | −1.8638 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.20 | 2.94 | 1 ± 0
alarm | 3000 | −5.5744 × 10^4 ± 0 | −5.5744 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.26 | 2.84 | 1 ± 0
alarm | 5000 | −9.2944 × 10^4 ± 0 | −9.2944 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.30 | 5.97 | 1 ± 0
alarm | 10,000 | −1.8587 × 10^5 ± 0 | −1.8587 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.28 | 7.45 | 1 ± 0
hepar2 | 1000 | −3.5318 × 10^4 ± 0 | −3.5318 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.23 | 20.57 | 1 ± 0
hepar2 | 3000 | −1.0562 × 10^5 ± 0 | −1.0562 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.18 | 23.88 | 1 ± 0
hepar2 | 5000 | −1.7604 × 10^5 ± 0 | −1.7604 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.51 | 47.62 | 1 ± 0
hepar2 | 10,000 | −3.5157 × 10^5 ± 0 | −3.5157 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.72 | 97.38 | 1 ± 0
win95pts | 1000 | −3.8309 × 10^4 ± 0 | −3.8310 × 10^4 | 0 ± 0 | 0 ± 0 | 1 ± 0 | 1.62 | 14.12 | 0.9911 ± 0
win95pts | 3000 | −1.1476 × 10^5 ± 0 | −1.1476 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.18 | 17.18 | 1 ± 0
win95pts | 5000 | −1.9119 × 10^5 ± 0 | −1.9119 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.39 | 34.61 | 1 ± 0
win95pts | 10,000 | −3.8160 × 10^5 ± 0 | −3.8160 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.49 | 60.28 | 1 ± 0
munin | 1000 | −9.5380 × 10^4 ± 0 | −9.5561 × 10^4 | 0 ± 0 | 59 ± 0 | 1 ± 0 | 1.59 | 179.94 | 0.8792 ± 0
munin | 3000 | −2.8533 × 10^5 ± 0 | −2.8554 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.76 | 140.15 | 0.8832 ± 0
munin | 5000 | −4.7471 × 10^5 ± 0 | −4.7495 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.79 | 241.23 | 0.8832 ± 0
munin | 10,000 | −9.4647 × 10^5 ± 0 | −9.4673 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.86 | 284.04 | 0.8832 ± 0
andes | 1000 | −1.1280 × 10^5 ± 0 | −1.1281 × 10^5 | 1 ± 0 | 0 ± 0 | 0 ± 0 | 6.51 | 559.27 | 0.9985 ± 0
andes | 3000 | −3.3677 × 10^5 ± 0 | −3.3677 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.93 | 552.35 | 1 ± 0
andes | 5000 | −5.6012 × 10^5 ± 0 | −5.6012 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.33 | 582.98 | 1 ± 0
andes | 10,000 | −1.1171 × 10^6 ± 0 | −1.1171 × 10^6 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 4.19 | 931.86 | 1 ± 0
pigs | 1000 | −2.2209 × 10^5 ± 0 | −2.2316 × 10^5 | 1 ± 0 | 357 ± 0 | 0 ± 0 | 6.90 | 3184.71 | 0.5676 ± 0
pigs | 3000 | −6.6328 × 10^5 ± 0 | −6.6455 × 10^5 | 0 ± 0 | 357 ± 0 | 1 ± 0 | 6.78 | 2807.74 | 0.5659 ± 0
pigs | 5000 | −1.1040 × 10^6 ± 0 | −1.1053 × 10^6 | 1 ± 0 | 357 ± 0 | 0 ± 0 | 6.72 | 3399.62 | 0.5676 ± 0
pigs | 10,000 | −2.2065 × 10^6 ± 0 | −2.2079 × 10^6 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.75 | 3187.43 | 0.5683 ± 0
Table 5. Performance of MCFHH algorithm on different datasets for SEM3. Bold denotes that BIC is greater than SBS.
Network | Dataset | BIC | SBS | ADD | DRD | RE | T(s) | SET(s) | F1
alarm | 1000 | −6.3849 × 10^3 ± 0 | −6.3849 × 10^3 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.30 | 1.95 | 1 ± 0
alarm | 3000 | −1.8688 × 10^4 ± 0 | −1.8688 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.32 | 2.57 | 1 ± 0
alarm | 5000 | −3.1042 × 10^4 ± 0 | −3.1042 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.26 | 5.51 | 1 ± 0
alarm | 10,000 | −6.1808 × 10^4 ± 0 | −6.1808 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.30 | 8.12 | 1 ± 0
hepar2 | 1000 | −1.2133 × 10^4 ± 0 | −1.2133 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.01 | 23.25 | 1 ± 0
hepar2 | 3000 | −3.5491 × 10^4 ± 0 | −3.5491 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.49 | 28.99 | 1 ± 0
hepar2 | 5000 | −5.8769 × 10^4 ± 0 | −5.8769 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.70 | 52.07 | 1 ± 0
hepar2 | 10,000 | −1.1715 × 10^5 ± 0 | −1.1715 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.03 | 129.98 | 1 ± 0
win95pts | 1000 | −1.3083 × 10^4 ± 0 | −1.3083 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.68 | 18.61 | 1 ± 0
win95pts | 3000 | −3.8459 × 10^4 ± 0 | −3.8459 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.09 | 24.80 | 1 ± 0
win95pts | 5000 | −6.3749 × 10^4 ± 0 | −6.3749 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.26 | 43.62 | 1 ± 0
win95pts | 10,000 | −1.2713 × 10^5 ± 0 | −1.2713 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.39 | 74.95 | 1 ± 0
munin | 1000 | −3.2199 × 10^4 ± 0 | −3.2393 × 10^4 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.64 | 177.87 | 0.8832 ± 0
munin | 3000 | −9.5189 × 10^4 ± 0 | −9.5418 × 10^4 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.77 | 156.88 | 0.8832 ± 0
munin | 5000 | −1.5820 × 10^5 ± 0 | −1.5844 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.80 | 248.07 | 0.8832 ± 0
munin | 10,000 | −3.1594 × 10^5 ± 0 | −3.1620 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.83 | 415.10 | 0.8832 ± 0
andes | 1000 | −3.8190 × 10^4 ± 0 | −3.8190 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.87 | 676.99 | 1 ± 0
andes | 3000 | −1.1267 × 10^5 ± 0 | −1.1267 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.54 | 647.91 | 1 ± 0
andes | 5000 | −1.8691 × 10^5 ± 0 | −1.8691 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 5.37 | 947.98 | 1 ± 0
andes | 10,000 | −3.7329 × 10^5 ± 0 | −3.7329 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 4.48 | 1733.18 | 1 ± 0
pigs | 1000 | −7.4199 × 10^4 ± 0 | −7.5375 × 10^4 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.96 | 2984.92 | 0.5683 ± 0
pigs | 3000 | −2.2124 × 10^5 ± 0 | −2.2261 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.79 | 3229.69 | 0.5683 ± 0
pigs | 5000 | −3.6861 × 10^5 ± 0 | −3.7007 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.74 | 3335.36 | 0.5683 ± 0
pigs | 10,000 | −7.3672 × 10^5 ± 0 | −7.3831 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.75 | 3452.05 | 0.5683 ± 0
Table 6. Performance of MCFHH algorithm without the switching operator.
Network | SEM | BIC | SBS | ADD | DRD | RE | T(s) | SET(s) | F1
alarm | 1 | −1.8598 × 10^5 ± 0 | −1.8598 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.27 | 2.88 | 1 ± 0
alarm | 2 | −1.8587 × 10^5 ± 0 | −1.8587 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.28 | 2.37 | 1 ± 0
alarm | 3 | −6.1808 × 10^4 ± 0 | −6.1808 × 10^4 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0.29 | 2.73 | 1 ± 0
hepar2 | 1 | −3.7076 × 10^5 ± 290.56 | −3.5157 × 10^5 | 0.50 ± 1.58 | 2 ± 0 | 1.60 ± 1.90 | 1.08 | 12.56 | 0.9768 ± 0.0215
hepar2 | 2 | −3.5297 × 10^5 ± 0 | −3.5157 × 10^5 | 0 ± 0 | 1 ± 0 | 0 ± 0 | 1.74 | 10.77 | 0.9959 ± 0
hepar2 | 3 | −1.2365 × 10^5 ± 0 | −1.1715 × 10^5 | 0 ± 0 | 2 ± 0 | 1 ± 0 | 1.99 | 12.17 | 0.9836 ± 0
win95pts | 1 | −3.8166 × 10^5 ± 0 | −3.8166 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 1.82 | 13.22 | 1 ± 0
win95pts | 2 | −3.8160 × 10^5 ± 0 | −3.8160 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.24 | 12.36 | 1 ± 0
win95pts | 3 | −1.2713 × 10^5 ± 0 | −1.2713 × 10^5 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 2.62 | 14.57 | 1 ± 0
munin | 1 | −9.4648 × 10^5 ± 0 | −9.4673 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.62 | 28.46 | 0.8832 ± 0
munin | 2 | −9.4647 × 10^5 ± 0 | −9.4671 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.74 | 25.07 | 0.8832 ± 0
munin | 3 | −3.1594 × 10^5 ± 0 | −3.1620 × 10^5 | 0 ± 0 | 59 ± 0 | 0 ± 0 | 1.82 | 29.70 | 0.8832 ± 0
andes | 1 | −1.1223 × 10^6 ± 0 | −1.1172 × 10^6 | 5 ± 0 | 1 ± 0 | 1 ± 0 | 4.92 | 206.39 | 0.9882 ± 0
andes | 2 | −1.1204 × 10^6 ± 40.23 | −1.1171 × 10^6 | 1.90 ± 0.32 | 1 ± 0 | 1 ± 0 | 5.69 | 105.17 | 0.9928 ± 0.0005
andes | 3 | −3.7807 × 10^5 ± 139.81 | −3.7329 × 10^5 | 7 ± 0.47 | 2 ± 0 | 2.10 ± 0.32 | 6.19 | 232.26 | 0.9806 ± 0.0015
pigs | 1 | −2.2062 × 10^6 ± 0 | −2.2077 × 10^6 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.90 | 158.35 | 0.5683 ± 0
pigs | 2 | −2.2065 × 10^6 ± 0 | −2.2079 × 10^6 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.93 | 163.47 | 0.5683 ± 0
pigs | 3 | −7.3672 × 10^5 ± 0 | −7.3831 × 10^5 | 0 ± 0 | 357 ± 0 | 0 ± 0 | 6.79 | 156.06 | 0.5683 ± 0
Table 7. F1 scores of the algorithms for different SEMs. Bold denotes the F1 score that was the best found amongst all methods. “-” indicates that no result is displayed.
SEM | Network | PC-Stable | LiNGAM | PCS | NOTEARS | BNC-PSO | MCFHH1 | MCFHH
SEM1 | alarm | 0.8539 | 0.6387 | 0.9787 | 0.9892 | 0.9892 | 1 | 1
SEM1 | hepar2 | 0.5871 | 0.7103 | 0.8593 | - | 0.9049 | 0.9153 | 1
SEM1 | win95pts | 0.7882 | 0.4683 | 0.8631 | 0.9912 | 0.9715 | 0.9740 | 1
SEM1 | munin | 0.7546 | 0.3350 | 0.6314 | - | 0.8611 | 0.8720 | 0.8832
SEM1 | andes | 0.5497 | 0.2823 | 0.6826 | - | 0.8768 | 0.9014 | 1
SEM1 | pigs | 0.4766 | 0.1650 | 0.3612 | 0.5683 | 0.5272 | 0.5683 | 0.5683
SEM2 | alarm | 0.8605 | 0.6116 | 0.9787 | 0.9451 | 0.9778 | 0.9778 | 1
SEM2 | hepar2 | 0.6346 | 0.7117 | 0.8651 | 0.7603 | 0.9048 | 0.8996 | 1
SEM2 | win95pts | 0.8491 | 0.6154 | 0.9136 | 0.9646 | 0.9695 | 0.9683 | 0.9911
SEM2 | munin | 0.7976 | 0.3724 | 0.6216 | 0.8151 | 0.8607 | 0.8652 | 0.8792
SEM2 | andes | 0.8394 | 0.4008 | 0.6000 | - | 0.9327 | 0.9327 | 0.9985
SEM2 | pigs | 0.4174 | 0.1499 | 0.3265 | 0.5600 | 0.5141 | 0.5676 | 0.5676
SEM3 | alarm | 0.8276 | 0.9388 | 0.9892 | 0.9053 | 1 | 1 | 1
SEM3 | hepar2 | 0.5545 | 0.7413 | 0.8696 | - | 0.9392 | 0.9180 | 1
SEM3 | win95pts | 0.8020 | 0.5542 | 0.9912 | 0.9442 | 0.9713 | 0.9691 | 1
SEM3 | munin | 0.7670 | 0.3627 | 0.7876 | - | 0.8739 | 0.8743 | 0.8832
SEM3 | andes | 0.5209 | 0.2649 | 0.7862 | - | 0.9200 | 0.9423 | 1
SEM3 | pigs | 0.4687 | 0.1783 | 0.5098 | 0.5642 | 0.5631 | 0.5683 | 0.5683
Table 8. BIC scores of the algorithms for different SEMs. Bold denotes the BIC score that was the best found amongst all methods. “-” indicates that no result is displayed.
SEM | Network | PC-Stable | LiNGAM | PCS | NOTEARS | BNC-PSO | MCFHH1 | MCFHH
SEM1 | alarm | −2.23 × 10^4 | −2.15 × 10^4 | −1.87 × 10^4 | −1.87 × 10^4 | −1.87 × 10^4 | −1.87 × 10^4 | −1.87 × 10^4
SEM1 | hepar2 | −1.48 × 10^5 | −4.26 × 10^4 | −3.97 × 10^4 | - | −4.73 × 10^4 | −4.57 × 10^4 | −3.54 × 10^4
SEM1 | win95pts | −7.70 × 10^4 | −4.94 × 10^4 | −4.09 × 10^4 | −3.84 × 10^4 | −4.08 × 10^4 | −4.08 × 10^4 | −3.84 × 10^4
SEM1 | munin | −1.29 × 10^5 | −1.51 × 10^5 | −1.22 × 10^5 | - | −9.86 × 10^4 | −9.76 × 10^4 | −9.55 × 10^4
SEM1 | andes | −3.97 × 10^8 | −5.77 × 10^8 | −4.39 × 10^5 | - | −6.49 × 10^6 | −2.35 × 10^5 | −1.13 × 10^5
SEM1 | pigs | −2.28 × 10^5 | −2.46 × 10^5 | −2.26 × 10^5 | −2.22 × 10^5 | −2.23 × 10^5 | −2.22 × 10^5 | −2.22 × 10^5
SEM2 | alarm | −2.04 × 10^4 | −1.95 × 10^4 | −1.86 × 10^4 | −1.88 × 10^4 | −1.90 × 10^4 | −1.90 × 10^4 | −1.86 × 10^4
SEM2 | hepar2 | −4.48 × 10^4 | −3.79 × 10^4 | −3.63 × 10^4 | −3.68 × 10^4 | −3.69 × 10^4 | −3.69 × 10^4 | −3.53 × 10^4
SEM2 | win95pts | −4.63 × 10^4 | −4.03 × 10^4 | −3.83 × 10^4 | −3.84 × 10^4 | −3.85 × 10^4 | −3.85 × 10^4 | −3.83 × 10^4
SEM2 | munin | −9.70 × 10^4 | −1.06 × 10^5 | −9.85 × 10^4 | −9.59 × 10^4 | −9.57 × 10^4 | −9.56 × 10^4 | −9.54 × 10^4
SEM2 | andes | −1.52 × 10^5 | −1.47 × 10^5 | −1.24 × 10^5 | - | −1.18 × 10^5 | −1.16 × 10^5 | −1.13 × 10^5
SEM2 | pigs | −2.25 × 10^5 | −2.32 × 10^5 | −2.23 × 10^5 | −2.22 × 10^5 | −2.22 × 10^5 | −2.22 × 10^5 | −2.22 × 10^5
SEM3 | alarm | −7.87 × 10^3 | −6.40 × 10^3 | −6.39 × 10^3 | −6.50 × 10^3 | −6.38 × 10^3 | −6.38 × 10^3 | −6.38 × 10^3
SEM3 | hepar2 | −3.90 × 10^4 | −1.57 × 10^4 | −1.38 × 10^4 | - | −1.69 × 10^4 | −1.61 × 10^4 | −1.21 × 10^4
SEM3 | win95pts | −2.07 × 10^4 | −1.61 × 10^4 | −1.31 × 10^4 | −1.32 × 10^4 | −1.39 × 10^4 | −1.39 × 10^4 | −1.31 × 10^4
SEM3 | munin | −4.68 × 10^4 | −4.28 × 10^4 | −3.37 × 10^4 | - | −3.25 × 10^4 | −3.25 × 10^4 | −3.22 × 10^4
SEM3 | andes | −1.88 × 10^8 | −6.22 × 10^7 | −1.04 × 10^5 | - | −2.35 × 10^6 | −4.26 × 10^4 | −3.82 × 10^4
SEM3 | pigs | −7.60 × 10^4 | −8.31 × 10^4 | −7.55 × 10^4 | −7.42 × 10^4 | −7.44 × 10^4 | −7.42 × 10^4 | −7.42 × 10^4
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

