Open Access
Algorithms 2015, 8(4), 1035–1051; https://doi.org/10.3390/a8041035
Article
Natalie 2.0: Sparse Global Network Alignment as a Special Case of Quadratic Assignment †
^{1}
Life Sciences Group, Centrum Wiskunde & Informatica (CWI), Science Park 123, 1098 XG Amsterdam, The Netherlands
^{2}
Centre for Integrative Bioinformatics VU (IBIVU), VU University Amsterdam, De Boelelaan 1081A, 1081 HV Amsterdam, The Netherlands
^{3}
Center for Computational Molecular Biology and Department of Computer Science, Brown University, Providence, RI 02912, USA
^{*}
Author to whom correspondence should be addressed.
^{†}
This paper is an extended version of our paper published in Pattern Recognition in Bioinformatics. El-Kebir, M.; Heringa, J.; Klau, G.W. Lagrangian Relaxation Applied to Sparse Global Network Alignment. In Proceedings of the 6th IAPR International Conference on Pattern Recognition in Bioinformatics, Delft, The Netherlands, 2–4 November 2011; pp. 225–236.
Academic Editors: Giuseppe Lancia and Alberto Policriti
Received: 6 July 2015 / Accepted: 12 November 2015 / Published: 18 November 2015
Abstract
Data on molecular interactions is increasing at a tremendous pace, while the development of solid methods for analyzing this network data is still lagging behind. This holds in particular for the field of comparative network analysis, where one wants to identify commonalities between biological networks. Since biological functionality primarily operates at the network level, there is a clear need for topology-aware comparison methods. We present a method for global network alignment that is fast and robust and can flexibly deal with various scoring schemes taking both node-to-node correspondences as well as network topologies into account. We exploit that network alignment is a special case of the well-studied quadratic assignment problem (QAP). We focus on sparse network alignment, where each node can be mapped only to a typically small subset of nodes in the other network. This corresponds to a QAP instance with a symmetric and sparse weight matrix. We obtain strong upper and lower bounds for the problem by improving a Lagrangian relaxation approach and introduce the open source software tool Natalie 2.0, a publicly available implementation of our method. In an extensive computational study on protein interaction networks for six different species, we find that our new method outperforms alternative established and recent state-of-the-art methods.
Keywords:
global network alignment; bioinformatics; graph matching; network analysis; network comparison
1. Introduction
In the last decade, data on molecular interactions has increased at a tremendous pace. For instance, the STRING database [1], which contains protein–protein interaction (PPI) data, grew from 261,033 proteins in 89 organisms in 2003 to 9,643,763 proteins in 2031 organisms in 2015, more than doubling the number of proteins in the database every two and a half years. The same trends can be observed for other types of biological networks, including metabolic, gene-regulatory, signal transduction and metagenomic networks, where the latter can incorporate the excretion and uptake of organic compounds through, for example, a microbial community [2,3]. In addition to the plethora of experimentally derived network data for many species, the structure and behavior of molecular networks have also become intensively studied over the last few years [4], leading to the observation of many conserved features at the network level. However, the development of solid methods for analyzing network data is still lagging behind, particularly in the field of comparative network analysis. Here, one wants to identify commonalities between biological networks from different strains or species, or derived from different conditions. Based on the assumption that evolutionary conservation implies functional significance, comparative approaches may help (i) improve the accuracy of data; (ii) generate, investigate, and validate hypotheses; and (iii) transfer functional annotations. Until recently, the most common way of comparing two networks has been to solely consider node-to-node correspondences, for example by finding homology relationships between nodes (e.g., proteins in PPI networks) of either network, while the topology of the two networks has not been taken into account. Since biological functionality primarily operates at the network level, there is a clear need for topology-aware comparison methods.
In this paper, we present a network alignment method that is fast and robust, and can flexibly deal with various scoring schemes taking both node-to-node correspondences as well as network topologies into account.
1.1. Previous Work
Network alignment establishes node correspondences based on both node-to-node similarities and conserved topological information. Similar to sequence alignment, local network alignment aims at identifying one or more shared subnetworks, whereas global network alignment addresses the overall comparison of the complete input networks. In this paper, we focus on pairwise global network alignment.
Over the last few years, many methods have been proposed for this task. An overview of the recent literature on global network alignment is given in [5]. Here, we briefly list the most important algorithms. The IsoRank algorithm by Singh et al. [6] formulates the global alignment problem as an eigenvalue problem, which preferentially matches nodes with similar neighborhoods. Klau [7] presented Natalie, the predecessor of Natalie 2.0, which is described in detail in this paper. Both methods are based on an integer linear programming formulation solved by Lagrangian relaxation. Kuchaiev et al. [8] presented GRAAL, which matches nodes that share a similar distribution of graphlets, that is, small connected non-isomorphic induced subgraphs. Several variations and improvements of this approach have been published since then. GHOST [9] uses spectral signatures of node neighborhoods in a greedy approach to compute alignments. NETAL [10] is a fast greedy aligner that also takes similar node neighborhoods into account. Another fast method is SPINAL [11], a two-stage approach that combines elements of IsoRank with greedy and improvement heuristics. PISwap [12] is a pure improvement heuristic that is based on 3-OPT exchange moves. HubAlign [13] exploits the assumption that hubs in the networks tend to be topologically more conserved and therefore processes nodes in order of decreasing degree during the heuristic alignment process. MAGNA++ [14] and its predecessor MAGNA are genetic algorithms that aim at directly optimizing several more recent evaluation measures such as the symmetric substructure score (S3). Optnetalign [15] is a meta-aligner that is able to combine the results of several other methods by means of a multi-objective memetic algorithm. L-GRAAL [16] is the latest member of the GRAAL family of aligners. Similarly to Natalie, L-GRAAL uses Lagrangian relaxation, but takes graphlets into account in its scoring function.
1.2. Contribution
We present an algorithm for global network alignment based on an integer linear programming (ILP) formulation. We exploit that network alignment is a special case of the well-studied quadratic assignment problem (QAP). We focus on sparse network alignment, where each node can be mapped only to a typically small subset of nodes in the other network. This corresponds to a QAP instance with a symmetric and sparse weight matrix. We improve upon an existing Lagrangian relaxation approach presented in previous work [7] to obtain strong upper and lower bounds for the problem. Exploiting the closeness to QAP, we generalize a dual descent method for updating the Lagrangian multipliers to our problem. We have implemented the revised algorithm from scratch as the open source software tool Natalie 2.0. In an extensive computational study on protein interaction networks for six different species, we compare Natalie 2.0 to the two established methods GRAAL and IsoRank as well as to the recent L-GRAAL method, which has been shown to perform very well in recent studies [5,16]. We evaluate the number of conserved edges in terms of edge correctness (EC), induced and symmetric substructure scores (ICS and S3), as well as functional coherence of the modules in terms of gene ontology (GO) annotation. We find that Natalie 2.0 outperforms the alternative methods with respect to several quality measures and running time.
2. Preliminaries
Given two simple graphs ${G}_{1}=({V}_{1},{E}_{1})$ and ${G}_{2}=({V}_{2},{E}_{2})$, an alignment $a:{V}_{1}\rightharpoondown {V}_{2}$ is a partial injective function from ${V}_{1}$ to ${V}_{2}$. As such, an alignment relates every node in ${V}_{1}$ to at most one node in ${V}_{2}$ and, conversely, every node in ${V}_{2}$ has at most one counterpart in ${V}_{1}$. An alignment is assigned a real-valued score using an additive scoring function s defined as follows:
where $c:{V}_{1}\times {V}_{2}\to \mathbb{R}$ scores the alignment of a pair of nodes from ${V}_{1}$ and ${V}_{2}$, respectively, while $w:{V}_{1}\times {V}_{2}\times {V}_{1}\times {V}_{2}\to \mathbb{R}$ allows for scoring topological similarity. The problem of global pairwise network alignment (GNA) is to find the highest scoring alignment $a^{*}=\arg\max_{a} s(a)$. Figure 1 shows an example:
$$s\left(a\right)=\sum _{v\in {V}_{1}}c(v,a\left(v\right))+\sum _{\begin{array}{c}v,w\in {V}_{1}\\ v<w\end{array}}w(v,a\left(v\right),w,a\left(w\right))$$
Figure 1.
Example of a network alignment. With the given scoring function, the alignment has a score of $4+40=44$.
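To make the scoring function concrete, the following minimal Python sketch evaluates $s(a)$ directly from its definition; the toy graphs, alignment, and constant scores `c_val` and `w_val` are hypothetical illustration data, not the instance of Figure 1.

```python
# Toy evaluation of the additive scoring function s(a); graphs, alignment,
# and the constant scores c_val / w_val are hypothetical illustration data.
E1 = {(0, 1), (1, 2)}          # edges of G1
E2 = {("x", "y"), ("y", "z")}  # edges of G2
a = {0: "x", 1: "y", 2: "z"}   # partial injective map V1 -> V2

def score(a, E1, E2, c_val=1.0, w_val=10.0):
    # node-to-node term: c(v, a(v)), here a constant per aligned node
    node_term = sum(c_val for v in a)
    # topology term: w(v, a(v), w, a(w)), here rewarding conserved edges
    edge_term = sum(w_val for (u, v) in E1
                    if u in a and v in a
                    and ((a[u], a[v]) in E2 or (a[v], a[u]) in E2))
    return node_term + edge_term

print(score(a, E1, E2))  # three aligned nodes plus two conserved edges
```

Any scoring scheme of the form of the equation above fits this template; only the node and topology terms change.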
NP-hardness of GNA follows from a simple reduction from the decision problem Clique, which asks whether there is a clique of cardinality at least k in a given simple graph $G=(V,E)$ [20]. The corresponding GNA instance concerns the alignment of the complete graph on k vertices ${K}_{k}=({V}_{k},{E}_{k})$ with G using the scoring function $s(a)=|\{(v,w)\in E_{k}\mid (a(v),a(w))\in E\}|$. Since an alignment is injective, there is a clique of cardinality at least k if and only if the score of the optimal alignment is $\binom{k}{2}$. The close relationship of GNA with the quadratic assignment problem is more easily observed when formulating GNA as a mathematical program. Throughout the remainder of the text, we use variables $i,j\in \{1,\dots ,|V_{1}|\}$ and $k,l\in \{1,\dots ,|V_{2}|\}$ to denote nodes in ${V}_{1}$ and ${V}_{2}$, respectively. Let C be a $|V_{1}|\times |V_{2}|$ matrix such that ${c}_{ik}=c(i,k)$ and let W be a $|V_{1}|\times |V_{2}|\times |V_{1}|\times |V_{2}|$ matrix whose entries ${w}_{ikjl}$ correspond to interaction scores $w(i,k,j,l)$. Now, we can formulate GNA as
where the decision variable ${x}_{ik}$ indicates whether the ith node in ${V}_{1}$ is aligned with the kth node in ${V}_{2}$. The above formulation shares many similarities with Lawler’s formulation [21] of the QAP. However, instead of finding an assignment, we are interested in finding a matching, which is reflected in Constraints (2) and (3) being inequalities rather than equalities. As can be seen in Equation (1), we only consider the upper triangle of W rather than the entire matrix. An analogous way of looking at this is to consider W to be symmetric, which is usually not the case for QAP instances. In addition, because biological input graphs are typically sparse and possible matchings are restricted to a few candidates per node on average, W is sparse as well. These differences allow us to come up with an effective method for solving the problem, as we will see in the following.
$$\begin{array}{cc}\hfill \underset{x}{max}& \phantom{\rule{1.em}{0ex}}\sum _{i,k}{c}_{ik}{x}_{ik}+\sum _{\begin{array}{c}i,j\\ i<j\end{array}}\sum _{\begin{array}{c}k,l\\ k\ne l\end{array}}{w}_{ikjl}{x}_{ik}{x}_{jl}\hfill \end{array}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}\sum _{l}{x}_{jl}\le 1\hfill & \hfill \forall j\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{j}{x}_{jl}\le 1\hfill & \hfill \forall l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{x}_{ik}\in \{0,1\}\hfill & \hfill \forall i,k\end{array}$$
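The formulation above can be made tangible by brute force on a tiny hypothetical instance: enumerate all injective maps from ${V}_{1}$ into ${V}_{2}$ and evaluate the quadratic objective directly. The matrices `C` and `W` below are made-up toy values, chosen only to illustrate the linear and quadratic terms.

```python
from itertools import permutations

# Brute-force solution of the (IQP) formulation on a tiny hypothetical
# instance (|V1| = |V2| = 2, so every injective map is a permutation).
C = [[2, 0],
     [0, 2]]                               # c_ik: node-to-node scores
W = {(0, 0, 1, 1): 5, (0, 1, 1, 0): 1}     # sparse w_ikjl with i < j, k != l

def objective(perm):
    # perm[i] = k means the i-th node of V1 is aligned to the k-th node of V2
    linear = sum(C[i][perm[i]] for i in range(len(perm)))
    quadratic = sum(w for (i, k, j, l), w in W.items()
                    if perm[i] == k and perm[j] == l)
    return linear + quadratic

best = max(permutations(range(2)), key=objective)
print(best, objective(best))
```

This exhaustive search is of course exponential; the Lagrangian relaxation developed next is what makes realistic instances tractable.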
3. Methods
The relaxation presented here follows the same lines as the one given by Adams and Johnson for the QAP [22]. We start by linearizing the objective function of (IQP) by introducing binary variables ${y}_{ikjl}$ defined as ${y}_{ikjl}:={x}_{ik}\cdot {x}_{jl}$ and constraints ${y}_{ikjl}\le {x}_{jl}$ and ${y}_{ikjl}\le {x}_{ik}$ for all $i<j$ and $k\ne l$. We focus here on the case in which all entries in W are non-negative. Therefore, we do not need to enforce ${y}_{ikjl}\ge {x}_{ik}+{x}_{jl}-1$, which would be necessary in a general linearization of a product of two binary variables. In Section 5, we will discuss this assumption. Rather than using the aforementioned constraints, we make use of a stronger set of constraints, which we obtain by multiplying Constraints (2) and (3) by ${x}_{ik}$:
$$\begin{array}{cc}\hfill \sum _{\begin{array}{c}l\\ l\ne k\end{array}}{y}_{ikjl}=\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{x}_{ik}{x}_{jl}\le \sum _{l}{x}_{ik}{x}_{jl}\le {x}_{ik},& \phantom{\rule{1.em}{0ex}}\forall i,j,k,\phantom{\rule{0.277778em}{0ex}}i<j\hfill \end{array}$$
$$\begin{array}{cc}\hfill \sum _{\begin{array}{c}j\\ j>i\end{array}}{y}_{ikjl}=\sum _{\begin{array}{c}j\\ j>i\end{array}}{x}_{ik}{x}_{jl}\le \sum _{j}{x}_{ik}{x}_{jl}\le {x}_{ik},& \phantom{\rule{1.em}{0ex}}\forall i,k,l,\phantom{\rule{0.277778em}{0ex}}k\ne l\hfill \end{array}$$
We proceed by splitting the variable ${y}_{ikjl}$ (where $i<j$ and $k\ne l$). In other words, we extend the objective function such that the counterpart of ${y}_{ikjl}$ becomes ${y}_{jlik}$. This is accomplished by rewriting the dummy constraint in Equation (6) to $j\ne i$. In addition, we split the weights: ${w}_{ikjl}={w}_{jlik}=({w}_{ikjl}^{\prime}/2)$ where ${w}_{ikjl}^{\prime}$ denotes the original weight. Furthermore, we require that the counterparts of the split decision variables assume the same value, which results in the following integer linear programming formulation:
$$\begin{array}{cc}\hfill \underset{x,y}{max}& \phantom{\rule{1.em}{0ex}}\sum _{i,k}{c}_{ik}{x}_{ik}+\sum _{\begin{array}{c}i,j\\ i<j\end{array}}\sum _{\begin{array}{c}k,l\\ k\ne l\end{array}}{w}_{ikjl}{y}_{ikjl}+\sum _{\begin{array}{c}i,j\\ i>j\end{array}}\sum _{\begin{array}{c}k,l\\ k\ne l\end{array}}{w}_{ikjl}{y}_{ikjl}\phantom{\rule{60.0pt}{0ex}}\hfill \end{array}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}\sum _{l}{x}_{jl}\le 1\hfill & \hfill \forall j\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{j}{x}_{jl}\le 1\hfill & \hfill \forall l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{y}_{ikjl}\le {x}_{ik}\hfill & \hfill \forall i,j,k,\phantom{\rule{0.277778em}{0ex}}i\ne j\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{\begin{array}{c}j\\ j\ne i\end{array}}{y}_{ikjl}\le {x}_{ik}\hfill & \hfill \forall i,k,l,\phantom{\rule{0.277778em}{0ex}}k\ne l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{y}_{ikjl}={y}_{jlik}\hfill & \hfill \forall i,j,k,l,\phantom{\rule{0.277778em}{0ex}}i<j,k\ne l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{y}_{ikjl}\in \{0,1\}\hfill & \hfill \forall i,j,k,l,\phantom{\rule{0.277778em}{0ex}}i\ne j,k\ne l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{x}_{ik}\in \{0,1\}\hfill & \hfill \forall i,k\end{array}$$
We can solve the continuous relaxation of Equation (ILP) via its Lagrangian dual by dualizing the linking constraints Equation (11) with multiplier λ:
where ${Z}_{\text{LD}}\left(\lambda \right)$ equals
$$\begin{array}{cc}\hfill \underset{\lambda}{min}& \phantom{\rule{1.em}{0ex}}{Z}_{\text{LD}}\left(\lambda \right)\hfill \end{array}$$
$$\begin{aligned} \max_{x,y} \quad & \sum_{i,k} c_{ik} x_{ik} + \sum_{\substack{i,j\\ i<j}} \sum_{\substack{k,l\\ k\ne l}} (w_{ikjl}+\lambda_{ikjl})\, y_{ikjl} + \sum_{\substack{i,j\\ i>j}} \sum_{\substack{k,l\\ k\ne l}} (w_{ikjl}-\lambda_{jlik})\, y_{ikjl} \\ \text{s.t.} \quad & \text{Constraints (7)–(13)} \end{aligned}$$
Now that the linking constraints have been dualized, one can observe that the remaining constraints decompose the variables into $|V_{1}|\,|V_{2}|$ disjoint groups, where variables across groups are not linked by any constraint, and where each group contains a variable ${x}_{ik}$ and variables ${y}_{ikjl}$ for $j\ne i$ and $l\ne k$. Hence, we have
which corresponds to a maximum weight bipartite matching problem on the so-called alignment graph ${G}_{m}=({V}_{1}\cup {V}_{2},{E}_{m})$. In the general case, ${G}_{m}$ is a complete bipartite graph, i.e., ${E}_{m}=\{(i,k)\mid i\in {V}_{1},k\in {V}_{2}\}$. However, by exploiting biological knowledge, one can make ${G}_{m}$ sparser by excluding biologically unlikely edges (see Section 4). For the global problem, the weight of a matching edge $(i,k)$ is set to ${c}_{ik}+{v}_{ik}\left(\lambda \right)$, where the latter term is computed as
$$\begin{array}{cc}\hfill {Z}_{\text{LD}}\left(\lambda \right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\underset{x}{max}& \phantom{\rule{1.em}{0ex}}\sum _{i,k}[{c}_{ik}+{v}_{ik}\left(\lambda \right)]{x}_{ik}\hfill \end{array}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}\sum _{l}{x}_{jl}\le 1\hfill & \hfill \phantom{\rule{1.em}{0ex}}\forall j\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{j}{x}_{jl}\le 1\hfill & \hfill \forall l\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{x}_{ik}\in \{0,1\}\hfill & \hfill \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\forall i,k\end{array}$$
$$v_{ik}(\lambda) = \max_{y} \quad \sum_{\substack{j\\ j>i}} \sum_{\substack{l\\ l\ne k}} (w_{ikjl}+\lambda_{ikjl})\, y_{ikjl} + \sum_{\substack{j\\ j<i}} \sum_{\substack{l\\ l\ne k}} (w_{ikjl}-\lambda_{jlik})\, y_{ikjl}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{y}_{ikjl}\le 1\hfill & \hfill \phantom{\rule{1.em}{0ex}}\forall j,\phantom{\rule{0.277778em}{0ex}}j\ne i\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}\sum _{\begin{array}{c}j\\ j\ne i\end{array}}{y}_{ikjl}\le 1\hfill & \hfill \phantom{\rule{1.em}{0ex}}\forall l,\phantom{\rule{0.277778em}{0ex}}l\ne k\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{y}_{ikjl}\in \{0,1\}\hfill & \hfill \phantom{\rule{1.em}{0ex}}\forall j,l\end{array}$$
Again, this is a maximum weight bipartite matching problem on the same alignment graph, but excluding edges incident to either i or k and using different edge weights: the weight of an edge $(j,l)$ is $w_{ikjl}+\lambda_{ikjl}$ if $j>i$, or $w_{ikjl}-\lambda_{jlik}$ if $j<i$. So, in order to compute ${Z}_{\text{LD}}\left(\lambda \right)$, we need to solve a total of $|V_{1}|\,|V_{2}|+1$ maximum weight bipartite matching problems, which, using the Hungarian algorithm [23,24], can be done in $O(n^{5})$ time, where $n=\max(|V_{1}|,|V_{2}|)$. In case the alignment graph is sparse, i.e., $O(|E_{m}|)=O(n)$, ${Z}_{\text{LD}}\left(\lambda \right)$ can be computed in $O(n^{4}\log n)$ time using the successive shortest path variant of the Hungarian algorithm [25]. It is important to note that, for any λ, ${Z}_{\text{LD}}\left(\lambda \right)$ is an upper bound on the score of an optimal alignment: any alignment a is feasible to ${Z}_{\text{LD}}\left(\lambda \right)$, satisfies the original linking constraints, and therefore has an objective value equal to $s\left(a\right)$. In particular, the optimal alignment ${a}^{*}$ is also feasible to ${Z}_{\text{LD}}\left(\lambda \right)$ and hence $s(a^{*})\le {Z}_{\text{LD}}\left(\lambda \right)$. Since the two sets of problems resulting from the decomposition both have the integrality property [26], the smallest upper bound we can achieve equals the linear programming (LP) bound of the continuous relaxation of Formulation (ILP) [27]. Given a solution $(x,y)$ to ${Z}_{\text{LD}}\left(\lambda \right)$, we obtain a lower bound on $s\left({a}^{*}\right)$, denoted ${Z}_{\text{lb}}\left(\lambda \right)$, by considering the score of the alignment encoded in x.
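As an illustration of one such subproblem, the sketch below solves a maximum weight bipartite matching instance by brute force; it is a stand-in for the Hungarian algorithm (or its successive shortest path variant) that an actual implementation uses, and the weight matrix stands for hypothetical values of $c_{ik}+v_{ik}(\lambda)$.

```python
from itertools import permutations

# Brute-force maximum-weight bipartite matching, a stand-in for the
# Hungarian algorithm used to evaluate Z_LD(lambda). Hypothetical weights.
def max_weight_matching(weights):
    # weights[i][k] >= 0; with non-negative weights, some full assignment
    # attains the optimum even though a matching may leave nodes unmatched
    n = len(weights)
    return max((sum(weights[i][p[i]] for i in range(n)), p)
               for p in permutations(range(n)))

w = [[3, 1, 0],
     [1, 4, 2],
     [0, 2, 5]]
value, assignment = max_weight_matching(w)
print(value, assignment)  # best total weight and the chosen map i -> k
```

For sparse alignment graphs, only the surviving candidate edges would carry finite weights, which is what makes the successive shortest path variant pay off.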
3.1. Solving Strategies
In this section, we discuss strategies for identifying Lagrangian multipliers λ that yield as small a gap as possible between the upper and lower bounds resulting from the solution to ${Z}_{\text{LD}}\left(\lambda \right)$.
3.1.1. Subgradient Optimization
We start by discussing subgradient optimization, which is originally due to Held and Karp [28]. The idea is to generate a sequence ${\lambda}^{0},{\lambda}^{1},\dots $ of Lagrangian multiplier vectors starting from ${\lambda}^{0}=\mathbf{0}$ as follows:
where $g({\lambda}_{ikjl}^{t})$ corresponds to the subgradient of multiplier ${\lambda}_{ikjl}^{t}$, i.e., $g({\lambda}_{ikjl}^{t})={y}_{ikjl}-{y}_{jlik}$, and α is the step size parameter. Initially, α is set to 1, and it is halved if neither ${Z}_{\text{LD}}\left(\lambda \right)$ nor ${Z}_{\text{lb}}\left(\lambda \right)$ has improved for N consecutive iterations. Conversely, α is doubled if M times in a row there was an improvement in either ${Z}_{\text{LD}}\left(\lambda \right)$ or ${Z}_{\text{lb}}\left(\lambda \right)$ [29]. In case all subgradients are zero, the optimal solution has been found and the scheme terminates. Note that this is not guaranteed to happen. Therefore, we abort the scheme after exceeding a time limit or a prespecified number of iterations. In addition, we terminate if α has dropped below machine precision. Algorithm 1 gives the pseudocode of this procedure.
$$\lambda^{t+1}_{ikjl} = \lambda^{t}_{ikjl} - \frac{\alpha \cdot ({Z}_{\text{LD}}(\lambda) - {Z}_{\text{lb}}(\lambda))}{\| g(\lambda^{t}) \|^{2}}\, g(\lambda^{t}_{ikjl}) \qquad \forall i,j,k,l,\ i<j,\, k\ne l$$
Algorithm 1: SubgradientOpt$(\lambda ,M,N)$ 

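A single multiplier update of this scheme can be sketched as follows; the instance (one multiplier, toy bound values) is hypothetical, and a real implementation iterates this over the sparse set of $(i,k,j,l)$ tuples with the adaptive step size described above.

```python
# One subgradient step (toy, single multiplier): the subgradient of
# lambda_ikjl is g = y_ikjl - y_jlik, and the update is
# lambda' = lambda - alpha * (Z_LD - Z_lb) / ||g||^2 * g.
def swap(key):
    i, k, j, l = key
    return (j, l, i, k)

def subgradient_step(lam, y, z_ld, z_lb, alpha):
    g = {key: y[key] - y[swap(key)] for key in lam}
    norm_sq = sum(v * v for v in g.values())
    if norm_sq == 0:  # all subgradients zero: current multipliers are optimal
        return lam
    step = alpha * (z_ld - z_lb) / norm_sq
    return {key: lam[key] - step * g[key] for key in lam}

lam = {(0, 0, 1, 1): 0.0}                # a single multiplier lambda_0011
y = {(0, 0, 1, 1): 1, (1, 1, 0, 0): 0}   # the two linked copies disagree
print(subgradient_step(lam, y, z_ld=10.0, z_lb=8.0, alpha=1.0))
```

The step drives the multiplier in the direction that penalizes the disagreement between the two split copies of y.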
3.1.2. Dual Descent
In this section, we derive a dual descent method that extends the one presented in [22]. The dual descent method takes as a starting point the dual of ${Z}_{\text{LD}}\left(\lambda \right)$:
where the dual of ${v}_{ik}\left(\lambda \right)$ is
$$\begin{array}{cc}\hfill {Z}_{\text{LD}}\left(\lambda \right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\underset{\alpha ,\beta}{min}& \phantom{\rule{1.em}{0ex}}\sum _{i}{\alpha}_{i}+\sum _{k}{\beta}_{k}\hfill \end{array}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}{\alpha}_{i}+{\beta}_{k}\ge {c}_{ik}+{v}_{ik}\left(\lambda \right)\hfill & \hfill \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\forall i,k\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{\alpha}_{i}\ge 0\hfill & \hfill \forall i\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{\beta}_{k}\ge 0\hfill & \hfill \forall k\end{array}$$
$$\begin{array}{cc}\hfill {v}_{ik}\left(\lambda \right)\phantom{\rule{0.166667em}{0ex}}=\phantom{\rule{0.166667em}{0ex}}\underset{\mu ,\nu}{min}& \phantom{\rule{1.em}{0ex}}\sum _{\begin{array}{c}j\\ j\ne i\end{array}}{\mu}_{j}^{ik}+\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{\nu}_{l}^{ik}\hfill \end{array}$$
$$\begin{array}{ccc}\hfill \text{s.t.}& \phantom{\rule{1.em}{0ex}}{\mu}_{j}^{ik}+{\nu}_{l}^{ik}\ge {w}_{ikjl}+{\lambda}_{ikjl}\hfill & \hfill \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\forall j,l,\phantom{\rule{0.277778em}{0ex}}j>i,l\ne k\end{array}$$
$$\mu_{j}^{ik} + \nu_{l}^{ik} \ge w_{ikjl} - \lambda_{jlik} \qquad \forall j,l,\ j<i,\, l\ne k$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{\mu}_{j}^{ik}\ge 0\hfill & \hfill \forall j\end{array}$$
$$\begin{array}{ccc}& \phantom{\rule{1.em}{0ex}}{\nu}_{l}^{ik}\ge 0\hfill & \hfill \forall l\end{array}$$
Suppose that for a given ${\lambda}^{t}$ we have computed dual variables $(\alpha ,\beta )$ solving Problem (21) with objective value ${Z}_{\text{LD}}\left({\lambda}^{t}\right)$, as well as dual variables $({\mu}^{ik},{\nu}^{ik})$ yielding values ${v}_{ik}\left(\lambda \right)$ to Problems (25). The goal now is to find ${\lambda}^{t+1}$ such that the resulting bound is better or just as good, i.e., ${Z}_{\text{LD}}\left({\lambda}^{t+1}\right)\le {Z}_{\text{LD}}\left({\lambda}^{t}\right)$. We prevent the bound from increasing by ensuring that the dual variables $(\alpha ,\beta )$ remain feasible for Problem (21). This we can achieve by considering the slacks $\pi_{ik}(\lambda) = \alpha_{i} + \beta_{k} - c_{ik} - v_{ik}(\lambda)$. Thus, for $(\alpha ,\beta )$ to remain feasible, we can only allow every ${v}_{ik}\left({\lambda}^{t}\right)$ to increase by at most ${\pi}_{ik}\left({\lambda}^{t}\right)$. We can achieve such an increase by considering Linear Programs (25) and their slacks defined as
and update the multipliers in the following way.
$$\gamma_{ikjl}(\lambda) = \begin{cases} \mu_{j}^{ik} + \nu_{l}^{ik} - w_{ikjl} - \lambda_{ikjl}, & \text{if } j>i,\\ \mu_{j}^{ik} + \nu_{l}^{ik} - w_{ikjl} + \lambda_{jlik}, & \text{if } j<i, \end{cases} \qquad \forall j,l,\ j\ne i,\, l\ne k$$
Lemma 1.
The adjustment scheme below yields solutions to Linear Programs (25) with objective values ${v}_{ik}\left({\lambda}^{t+1}\right)$ at most ${\pi}_{ik}\left({\lambda}^{t}\right)+{v}_{ik}\left({\lambda}^{t}\right)$ for all $i,k$.
$$\begin{aligned} \lambda^{t+1}_{ikjl} = \lambda^{t}_{ikjl} &+ \phi_{ikjl}\left[\gamma_{ikjl}(\lambda^{t}) + \tau_{ik}\left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t})\right] \\ &- \phi_{jlik}\left[\gamma_{jlik}(\lambda^{t}) + \tau_{jl}\left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{jl}(\lambda^{t})\right] \end{aligned}$$
for all $j,l,\ i<j,\, k\ne l$, where $n_{1}=|V_{1}|$, $n_{2}=|V_{2}|$, and $0\le \phi_{ikjl},\tau_{jl}\le 1$ are parameters.
Proof.
We prove the lemma by showing that for any $i,k$ there exists a feasible solution $({\mu}^{\prime ik},{\nu}^{\prime ik})$ to Problem (25) whose objective value ${v}_{ik}\left({\lambda}^{t+1}\right)$ is at most ${\pi}_{ik}\left({\lambda}^{t}\right)+{v}_{ik}\left({\lambda}^{t}\right)$. Let $({\mu}^{ik},{\nu}^{ik})$ be the solution to Problem (25) given multipliers ${\lambda}^{t}$. We claim that setting
results in a feasible solution to Problem (25) given multipliers ${\lambda}^{t+1}$. We start by showing that Constraints (26) and (27) are satisfied. Equation (31) implies the following bounds on ${\lambda}^{t+1}$:
$$\mu'^{\,ik}_{j} = \mu^{ik}_{j} + \frac{\pi_{ik}(\lambda^{t})}{2(n_{1}-1)} \qquad \forall j,\ j\ne i$$
$$\nu'^{\,ik}_{l} = \nu^{ik}_{l} + \frac{\pi_{ik}(\lambda^{t})}{2(n_{2}-1)} \qquad \forall l,\ l\ne k$$
$$\lambda^{t}_{ikjl} - \gamma_{jlik}(\lambda^{t}) - \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{jl}(\lambda^{t}) \le \lambda^{t+1}_{ikjl} \qquad \forall j,l,\ j<i,\, l\ne k$$
$$\lambda^{t+1}_{ikjl} \le \lambda^{t}_{ikjl} + \gamma_{ikjl}(\lambda^{t}) + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t}) \qquad \forall j,l,\ j<i,\, l\ne k$$
Therefore, we have that the following inequalities imply Constraints (26) and (27) for all $j,l$, $j>i$, $l\ne k$:
and for all $j,l$, $j<i$, $l\ne k$
$$\mu'^{\,ik}_{j} + \nu'^{\,ik}_{l} \ge w_{ikjl} + \lambda^{t}_{ikjl} + \gamma_{ikjl}(\lambda^{t}) + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t})$$
$$\mu'^{\,ik}_{j} + \nu'^{\,ik}_{l} \ge w_{ikjl} - \lambda^{t}_{jlik} + \gamma_{ikjl}(\lambda^{t}) + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t})$$
Constraints (26) and (27) are indeed implied, as, for all $j,l$, $j>i$, $l\ne k$,
and for all $j,l$, $j<i$, $l\ne k$
$$\begin{aligned} \mu'^{\,ik}_{j} + \nu'^{\,ik}_{l} &= \mu^{ik}_{j} + \nu^{ik}_{l} + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t}) \\ &\ge w_{ikjl} + \lambda^{t}_{ikjl} + \gamma_{ikjl}(\lambda^{t}) + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t}) \end{aligned}$$
$$\begin{aligned} \mu'^{\,ik}_{j} + \nu'^{\,ik}_{l} &= \mu^{ik}_{j} + \nu^{ik}_{l} + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t}) \\ &\ge w_{ikjl} - \lambda^{t}_{jlik} + \gamma_{ikjl}(\lambda^{t}) + \left(\frac{1}{2(n_{1}-1)} + \frac{1}{2(n_{2}-1)}\right)\pi_{ik}(\lambda^{t}) \end{aligned}$$
Since ${\mu}_{j}^{ik},{\nu}_{l}^{ik}\ge 0$ ($\forall j,l,\phantom{\rule{0.277778em}{0ex}}j\ne i,l\ne k$) and by definition ${\pi}_{ik}\left({\lambda}^{t}\right)\ge 0$, Constraints (28) and (29) are satisfied as well. The objective value of $({\mu}^{\prime ik},{\nu}^{\prime ik})$ is given by
$$\sum _{\begin{array}{c}j\\ j\ne i\end{array}}{\mu}_{j}^{\prime ik}+\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{\nu}_{l}^{\prime ik}=\sum _{\begin{array}{c}j\\ j\ne i\end{array}}{\mu}_{j}^{ik}+\sum _{\begin{array}{c}l\\ l\ne k\end{array}}{\nu}_{l}^{ik}+{\pi}_{ik}\left({\lambda}^{t}\right)={v}_{ik}\left({\lambda}^{t}\right)+{\pi}_{ik}\left({\lambda}^{t}\right)$$
Since the dual Problems (25) are minimization problems and there exist, for all $i,k$, feasible solutions with objective values ${v}_{ik}\left({\lambda}^{t}\right)+{\pi}_{ik}\left({\lambda}^{t}\right)$, we can conclude that the optimal objective values are bounded from above by this quantity. Hence, the lemma follows. ☐
We use $\phi =0.5$, $\tau =1$, and perform the dual descent method L successive times (see Algorithm 2).
Algorithm 2: DualDescent$(\lambda ,L)$ 

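The quantities driving the dual descent update are the slacks π and γ. The following sketch computes them for hypothetical dual values; the sign of λ in γ depends on whether $j>i$ or $j<i$, mirroring the two local constraints.

```python
# Slacks driving the dual descent update (hypothetical toy dual values).
def pi(alpha_i, beta_k, c_ik, v_ik):
    # slack of the global matching constraint: alpha_i + beta_k >= c_ik + v_ik
    return alpha_i + beta_k - c_ik - v_ik

def gamma(mu_j, nu_l, w_ikjl, lam, j_greater_i):
    # slack of the local constraint mu_j + nu_l >= w_ikjl + lambda (j > i)
    # or mu_j + nu_l >= w_ikjl - lambda (j < i)
    if j_greater_i:
        return mu_j + nu_l - w_ikjl - lam
    return mu_j + nu_l - w_ikjl + lam

print(pi(3.0, 2.0, 1.0, 2.5))           # how much v_ik may still grow
print(gamma(2.0, 1.0, 2.0, 0.5, True))  # slack of a j > i constraint
```

Both slacks are non-negative at a dual-feasible point, which is what allows the multiplier update of Lemma 1 to keep the bound from increasing.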
3.1.3. Overall Method
Our overall method combines the subgradient optimization and dual descent methods: we perform the subgradient method until termination and then switch over to the dual descent method. This procedure is repeated K times (see Algorithm 3).
Algorithm 3: Natalie$(K,L,M,N)$ 

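The alternation described above can be sketched as a control-flow skeleton. This is illustrative only: the bound updates below are hypothetical stand-ins for solving the actual Lagrangian subproblems, and the function name is made up; only the K/L/N loop structure mirrors the method.

```python
# Illustrative control-flow sketch of the overall method: repeat K times
# (subgradient optimization until termination, then L rounds of dual descent).
# The bound updates are hypothetical stand-ins, not Natalie's real subproblems.

def natalie_sketch(K=3, L=100, M=10, N=20):
    upper, lower = 100.0, 0.0  # dummy upper/lower bounds on the optimum
    for _ in range(K):
        # Phase 1: subgradient optimization; in the real method, N iterations
        # without improvement of the upper bound trigger termination.
        for _ in range(N):
            upper -= 0.4 * (upper - lower)  # hypothetical bound improvement
        # Phase 2: dual descent, performed L successive times.
        for _ in range(L):
            lower += 0.1 * (upper - lower)  # hypothetical bound improvement
    return lower, upper

lb, ub = natalie_sketch()
```

The invariant to observe is that the lower bound never overtakes the upper bound, so the remaining gap certifies the quality of the incumbent solution.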
We implemented Natalie in C++ using the LEMON graph library [30]. The successive shortest path algorithm for maximum weight bipartite matching was implemented and contributed to LEMON. Special care was taken to deal with the inherent numerical instability of floating point numbers. Our implementation supports both the GraphML and GML graph formats. Rather than using one big alignment graph, we store and use a separate alignment graph for every local problem (L${\text{D}}_{\lambda}^{ik}$). This proved to significantly improve the running time, especially when the global alignment graph is sparse. The source code of Natalie is publicly available under the MIT license at [17].
4. Experimental Evaluation
From the STRING database v8.3 [1], we obtained PPI networks for the following six species: C. elegans (cel), S. cerevisiae (sce), D. melanogaster (dme), R. norvegicus (rno), M. musculus (mmu) and H. sapiens (hsa). We only considered interactions that were experimentally verified. Table 1 shows the sizes of the networks. We performed, using the BLOSUM62 matrix, an all-against-all global sequence alignment on the protein sequences of all $\binom{6}{2}=15$ pairs of networks. We used affine gap penalties with a gap-open penalty of 10 and a gap-extension penalty of 2. The first experiment in Section 4.1 compares the performance of IsoRank, GRAAL, L-GRAAL and Natalie 2.0 in terms of a variety of topological measures. In Section 4.2, we evaluate the biological relevance of the alignments produced by the four methods. All experiments were conducted on a compute cluster with 2.26 GHz processors with 24 GB of RAM.
Table 1.
Characteristics of input networks considered in this study. The columns contain species identifier, number of nodes in the network, number of nodes annotated with at least one gene ontology (GO) term, and number of interactions.
Species   Nodes   Annotated   Interactions
cel (c)   5948    4694        23,496
sce (s)   6018    5703        131,701
dme (d)   7433    6006        26,829
rno (r)   8002    6786        32,527
mmu (m)   9109    8060        38,414
hsa (h)   11,512  9328        67,858
4.1. Topological Measures
A popular evaluation metric for network alignments is edge correctness (EC), the number of conserved edges divided by the number of edges of the smaller network. This measure can be directly optimized, for example in Natalie 2.0, by setting the scoring function to $s(a)=|\{(v,w)\in {E}_{1}\mid (a(v),a(w))\in {E}_{2}\}|$. In addition, for Natalie 2.0 and L-GRAAL, we measured the size of the largest aligned connected component (LCC) and the recently proposed induced and symmetric substructure scores (ICS and S3). ICS also takes non-edges into account and is defined as the number of matched edges divided by the number of edges in the subgraph of ${G}_{2}$ that is induced by the matching. The asymmetry in this measure is corrected for by the S3 measure, which divides the number of matched edges by the number of distinct edges occurring in ${G}_{1}$ or in the subnetwork of ${G}_{2}$ induced by the alignment. Note that it is easy to achieve perfect ICS or S3 values when alignments are defined as partial functions: matching, for example, two ${K}_{3}$ subgraphs of the input graphs would give a perfect score in terms of ICS or S3. For this reason, it is preferable to consult EC and LCC in addition.
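The three measures can be illustrated on a toy instance. The sketch below implements the definitions above for an injective alignment given as a dict; the graphs and alignment are made-up examples, and S3 follows the published definition (matched edges over the union of $G_1$'s edges and the induced $G_2$ edges).

```python
# Toy illustration of the alignment-quality measures EC, ICS and S3 for an
# injective alignment a: V1 -> V2. Graphs are plain undirected edge sets;
# this is a sketch of the definitions, not Natalie's implementation.

def normalize(edges):
    return {frozenset(e) for e in edges}  # undirected edges as 2-sets

def alignment_scores(E1, E2, a):
    E1, E2 = normalize(E1), normalize(E2)
    # Edges of G1 that are conserved under the alignment
    conserved = {e for e in E1 if frozenset(a[v] for v in e) in E2}
    # Edges of G2 induced by the image of the alignment
    image = set(a.values())
    induced = {e for e in E2 if e <= image}
    ec = len(conserved) / len(E1)        # edge correctness
    ics = len(conserved) / len(induced)  # induced substructure score
    # S3: conserved edges over the union of G1's edges and induced G2 edges
    s3 = len(conserved) / (len(E1) + len(induced) - len(conserved))
    return ec, ics, s3

E1 = [("a", "b"), ("b", "c")]
E2 = [("A", "B"), ("B", "C"), ("A", "C")]
a = {"a": "A", "b": "B", "c": "C"}
ec, ics, s3 = alignment_scores(E1, E2, a)  # 1.0, 2/3, 2/3
```

Here every $G_1$ edge is conserved (EC = 1), but the extra induced edge $(A,C)$ in $G_2$ penalizes ICS and S3, which is exactly the asymmetry the text discusses.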
As mentioned in Section 3, Natalie 2.0 as well as L-GRAAL can benefit greatly from using a sparse alignment graph. To that end, we use the e-values obtained from the all-against-all sequence alignment to prohibit biologically unlikely matchings, by only considering protein pairs whose e-value is at most 100. Note that this only applies to Natalie and L-GRAAL, as both GRAAL and IsoRank consider the complete alignment graph. On each of the 15 instances, we ran GRAAL with three different random seeds and sampled the input parameter, which balances the contribution of the graphlets against the node degrees, uniformly within its allowed range of $[0,1]$. As for IsoRank, setting the parameter α, which controls to what extent topological similarity plays a role, to the desired value of one gave very poor results. Therefore, we also sampled this parameter within its allowed range and re-evaluated the resulting alignments in terms of edge correctness. Natalie was run with a CPU time limit of 10 min and the standard settings $K=3$, $L=100$, $M=10$, $N=20$. L-GRAAL was run with a CPU time limit of 10 min as well as of one hour. For both GRAAL and IsoRank, only the highest-scoring results were considered.
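The sparsification step amounts to a simple cutoff filter on the precomputed e-values. A minimal sketch, with made-up proteins and e-values:

```python
# Sketch of the sparsification described above: keep only protein pairs whose
# pairwise sequence-alignment e-value is at most a cutoff, so each node can
# only be matched to a (typically small) candidate set. Data are made up.

E_VALUE_CUTOFF = 100.0

def candidate_pairs(evalues, cutoff=E_VALUE_CUTOFF):
    """evalues: dict mapping (protein_in_G1, protein_in_G2) -> e-value."""
    return {pair for pair, e in evalues.items() if e <= cutoff}

evalues = {
    ("p1", "q1"): 1e-30,  # strong homology: kept
    ("p1", "q2"): 5.0,    # weak, but below the cutoff: kept
    ("p2", "q1"): 3e4,    # above the cutoff: discarded
}
pairs = candidate_pairs(evalues)
```

Only the surviving pairs become edges of the sparse alignment graph, which keeps the local problems small.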
Figure 2.
Performance of the four different methods for all-against-all species comparisons (15 alignment instances). Missing bars correspond to exceeded time/memory limits or software crashes. For LCC, ICS and S3, only Natalie 2.0 and L-GRAAL were compared. (a) Edge correctness (EC); (b) Size of the largest aligned connected component (LCC); (c) Induced substructure score (ICS); (d) Symmetric substructure score (S3).
Figure 2 shows the results. IsoRank was only able to compute alignments for three out of the 15 instances. On the other instances, IsoRank crashed, which may be due to the large size of the input networks. For GRAAL, no alignments involving sce could be computed, which is due to the large number of edges in that network: within 12 h, the graphlet degree vectors had been computed for only 3% of the nodes. On the last three instances, GRAAL crashed after exceeding the memory limit inherent to 32-bit processes; unfortunately, no 64-bit executable was available. On the instances for which GRAAL could compute alignments, the alignment quality is poor compared to the other methods. Natalie 2.0 outperforms the other methods in terms of edge correctness and outperforms L-GRAAL in terms of ICS and S3. The LCC values of both methods are similar.
4.2. GO Similarity
In order to measure the biological relevance of the obtained network alignments, we make use of the Gene Ontology (GO) [31]. For every node in each of the six networks, we obtained a set of GO annotations (see Table 1 for the exact numbers). Each annotation set was extended to a multiset by including all ancestral GO terms for every annotation in the original set. Subsequently, we employed a similarity measure that compares a pair of aligned nodes based on their GO annotations and also takes into account the relative frequency of each annotation [32]. Since the similarity measure assigns a score between 0 and 1 to every aligned node pair, the highest similarity score any alignment can achieve is the minimum number of annotated nodes in either of the networks. We therefore normalize the similarity scores by this quantity. Unlike the previous experiment, this time we considered the bit-scores of the pairwise global sequence alignments. Similarly to the IsoRank and L-GRAAL parameter α, we introduced a parameter $\beta \in [0,1]$ such that the sequence part of the score has weight $(1-\beta)$ and the topology part has weight β. We sampled the weight parameters uniformly in the range $[0,1]$ for all methods. Figure 3 shows that, on the smaller networks, Natalie, L-GRAAL and IsoRank identify functionally coherent alignments with similar GO scores. However, Natalie outperforms the other methods on many of the larger networks.
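Two small formulas drive this experiment: the β-weighted convex combination of sequence and topology scores, and the normalization of the total GO similarity by the minimum number of annotated nodes. A sketch with made-up function names and toy numbers:

```python
# Illustrative sketch of the beta-weighted score and the GO-score
# normalization described above. Function names and numbers are made up;
# only the two formulas mirror the text.

def combined_score(seq_bitscore, topo_score, beta):
    """Sequence part weighted by (1 - beta), topology part by beta."""
    assert 0.0 <= beta <= 1.0
    return (1.0 - beta) * seq_bitscore + beta * topo_score

def normalized_go_score(pair_similarities, annotated_1, annotated_2):
    """Each aligned pair scores in [0, 1], so the best any alignment can
    achieve is min(#annotated nodes); normalize by that quantity."""
    return sum(pair_similarities) / min(annotated_1, annotated_2)

s = combined_score(seq_bitscore=10.0, topo_score=4.0, beta=0.25)   # 8.5
g = normalized_go_score([0.5, 1.0], annotated_1=3, annotated_2=5)  # 0.5
```

The normalized GO score thus lies in $[0,1]$ and is comparable across instances of different sizes, which is what makes the cross-species comparison in Figure 3 meaningful.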
5. Conclusions
Inspired by results for the closely related quadratic assignment problem (QAP), we have presented new algorithmic ideas that make a Lagrangian relaxation approach for global network alignment practically useful and competitive. In particular, we have generalized a dual descent method for the QAP. We have found that combining this scheme with the traditional subgradient optimization method leads to the fastest progress of the upper and lower bounds.
Our implementation of the new method, Natalie 2.0, is fast and performs well when aligning biological networks, as we have shown in an extensive study on the alignment of cross-species PPI networks. We have compared Natalie 2.0 to the established and recent state-of-the-art methods IsoRank, GRAAL and L-GRAAL, which aim at optimizing similar scoring functions. Our experiments show that the Lagrangian relaxation approach is a powerful method, which often outperforms the competitors in terms of both quality of the results and running time.
Currently, all methods, including Natalie 2.0, approach the global network alignment problem heuristically, that is, the computed alignments are not guaranteed to be optimal solutions of the problem. While some approaches are intrinsically heuristic—both IsoRank and GRAAL, for instance, approximate the neighborhood of a node and then match it with a similar node—the inexactness in Natalie 2.0 and also L-GRAAL has two causes that we plan to address in future work. On the one hand, there may still be a gap between the upper and lower bounds of the Lagrangian relaxation approach after the last iteration. These bounds could be used in a branch-and-bound approach that computes provably optimal solutions. On the other hand, we currently do not consider the complete bipartite alignment graph and may therefore miss optimal alignments. Here, preprocessing strategies in the spirit of [33], which safely sparsify the input bipartite graph without violating optimality conditions, may be useful.
The independence of the local problems (L${\text{D}}_{\lambda}^{ik}$) allows for straightforward parallelization, which would lead to an even faster method. Another improvement in running time might be achieved by considering more involved heuristics for computing the lower bound, such as local search. More functionally coherent alignments can be obtained with a scoring function in which node-to-node correspondences are scored not only via sequence similarity but also, for instance, via GO similarity. In certain cases, even negative weights for topological interactions might be desired, in which case one needs to reconsider the assumption that the entries of the matrix W are positive.
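Because the local problems share no state, the parallelization mentioned above is embarrassingly parallel. A minimal sketch with `concurrent.futures`; `solve_local` is a hypothetical stand-in for solving one local subproblem, and for the CPU-bound C++ implementation native threads or processes would be the realistic choice:

```python
# The local problems LD^{ik} are mutually independent, so they can be solved
# concurrently. solve_local is a hypothetical placeholder for one subproblem.

from concurrent.futures import ThreadPoolExecutor

def solve_local(ik):
    i, k = ik
    return ik, i * k  # placeholder "local bound" for node pair (i, k)

pairs = [(i, k) for i in range(3) for k in range(3)]
with ThreadPoolExecutor(max_workers=4) as pool:
    local_bounds = dict(pool.map(solve_local, pairs))
```

Summing the per-pair results afterwards is a cheap sequential reduction, so the speedup scales with the number of independent subproblems.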
Acknowledgments
We thank SARA Computing and Networking Services for their support in using the Lisa Compute Cluster. In addition, we are grateful to Bernd Brandt for helping out with various bioinformatics issues and also to Samira Jaeger for providing code and advice on the GO similarity experiments. We thank Noël MalodDognin for helping us with running LGRAAL and for general discussions. We also thank the anonymous referees for carefully reading our work and their comments.
Author Contributions
Gunnar W. Klau, Mohammed ElKebir and Jaap Heringa designed the study, interpreted the results and wrote the manuscript. Gunnar W. Klau and Mohammed ElKebir conceived the method. Mohammed ElKebir and Gunnar W. Klau implemented the software and ran the experiments.
Conflicts of Interest
The authors declare no conflict of interest.
References
 Szklarczyk, D.; Franceschini, A.; Wyder, S.; Forslund, K.; Heller, D.; HuertaCepas, J.; Simonovic, M.; Roth, A.; Santos, A.; Tsafou, K.P.; et al. STRING v10: Protein-protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015, 43. [Google Scholar] [CrossRef] [PubMed]
 Sharan, R.; Ideker, T. Modeling cellular machinery through biological network comparison. Nat. Biotechnol. 2006, 24, 427–433. [Google Scholar] [CrossRef] [PubMed]
 Kanehisa, M.; Goto, S.; Hattori, M.; AokiKinoshita, K.F.; Itoh, M.; Kawashima, S.; Katayama, T.; Araki, M.; Hirakawa, M. From genomics to chemical genomics: New developments in KEGG. Nucleic Acids Res. 2006, 34. [Google Scholar] [CrossRef] [PubMed]
 Alon, U. Network motifs: Theory and experimental approaches. Nat. Rev. Genet. 2007, 8, 450–461. [Google Scholar] [CrossRef] [PubMed]
 Elmsallati, A.; Clark, C.; Kalita, J. Global alignment of protein-protein interaction networks: A survey. IEEE/ACM Trans. Comput. Biol. Bioinf. 2015, 99. [Google Scholar] [CrossRef] [PubMed]
 Singh, R.; Xu, J.; Berger, B. Global alignment of multiple protein interaction networks with application to functional orthology detection. Proc. Natl. Acad. Sci. USA 2008, 105, 12763–12768. [Google Scholar] [CrossRef] [PubMed]
 Klau, G.W. A new graph-based method for pairwise global network alignment. BMC Bioinf. 2009, 10. [Google Scholar] [CrossRef] [PubMed]
 Kuchaiev, O.; Milenkovic, T.; Memisevic, V.; Hayes, W.; Przulj, N. Topological network alignment uncovers biological function and phylogeny. J. R. Soc. Interface 2010, 7, 1341–1354. [Google Scholar] [CrossRef] [PubMed]
 Patro, R.; Kingsford, C. Global network alignment using multiscale spectral signatures. Bioinformatics 2012, 28, 3105–3114. [Google Scholar] [CrossRef] [PubMed]
 Neyshabur, B.; Khadem, A.; Hashemifar, S.; Arab, S.S. NETAL: A new graph-based method for global alignment of protein-protein interaction networks. Bioinformatics 2013, 29, 1654–1662. [Google Scholar] [CrossRef] [PubMed]
 Aladağ, A.E.; Erten, C. SPINAL: Scalable protein interaction network alignment. Bioinformatics 2013, 29, 917–924. [Google Scholar] [CrossRef] [PubMed]
 Chindelevitch, L.; Ma, C.Y.; Liao, C.S.; Berger, B. Optimizing a global alignment of protein interaction networks. Bioinformatics 2013, 29, 2765–2773. [Google Scholar] [CrossRef] [PubMed]
 Hashemifar, S.; Xu, J. HubAlign: An accurate and efficient method for global alignment of protein-protein interaction networks. Bioinformatics 2014, 30, i438–i444. [Google Scholar] [CrossRef] [PubMed]
 Vijayan, V.; Saraph, V.; Milenković, T. MAGNA++: Maximizing accuracy in global network alignment via both node and edge conservation. Bioinformatics 2015, 31. [Google Scholar] [CrossRef] [PubMed]
 Clark, C.; Kalita, J. A multiobjective memetic algorithm for PPI network alignment. Bioinformatics 2015, 31, 1988–1998. [Google Scholar] [CrossRef] [PubMed]
 MalodDognin, N.; Przulj, N. L-GRAAL: Lagrangian graphlet-based network aligner. Bioinformatics 2015, 31, 2182–2189. [Google Scholar] [CrossRef] [PubMed]
 Natalie 2.0. Available online: http://software.cwi.nl/natalie (accessed on 15 November 2015).
 ElKebir, M.; Brandt, B.W.; Heringa, J.; Klau, G.W. NatalieQ: A web server for proteinprotein interaction network querying. BMC Syst. Biol. 2014, 8. [Google Scholar] [CrossRef] [PubMed]
 NatalieQ. Available online: http://www.ibi.vu.nl/programs/natalieq/ (accessed on 15 November 2015).
 Karp, R.M. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations; Miller, R.E., Thatcher, J.W., Eds.; Plenum Press: New York, NY, USA, 1972; pp. 85–103. [Google Scholar]
 Lawler, E.L. The quadratic assignment problem. Manag. Sci. 1963, 9, 586–599. [Google Scholar] [CrossRef]
 Adams, W.P.; Johnson, T. Improved linear programming-based lower bounds for the quadratic assignment problem. DIMACS Ser. Discr. Math. Theor. Comput. Sci. 1994, 16, 43–77. [Google Scholar]
 Kuhn, H.W. The Hungarian method for the assignment problem. Naval Res. Logist. Q. 1955, 2, 83–97. [Google Scholar] [CrossRef]
 Munkres, J. Algorithms for the assignment and transportation problems. SIAM J. Appl. Math. 1957, 5, 32–38. [Google Scholar] [CrossRef]
 Edmonds, J.; Karp, R.M. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM 1972, 19, 248–264. [Google Scholar] [CrossRef]
 Edmonds, J. Paths, trees, and flowers. Can. J. Math. 1965, 17, 449–467. [Google Scholar] [CrossRef]
 Guignard, M. Lagrangean relaxation. Top 2003, 11, 151–200. [Google Scholar] [CrossRef]
 Held, M.; Karp, R.M. The travelingsalesman problem and minimum spanning trees: Part II. Math. Progr. 1971, 1, 6–25. [Google Scholar] [CrossRef]
 Caprara, A.; Fischetti, M.; Toth, P. A heuristic method for the set cover problem. Oper. Res. 1999, 47, 730–743. [Google Scholar] [CrossRef]
 Egerváry Research Group on Combinatorial Optimization. LEMON Graph Library. Available online: http://lemon.cs.elte.hu/ (accessed on 15 November 2015).
 Ashburner, M.; Ball, C.A.; Blake, J.A.; Botstein, D.; Butler, H.; Cherry, J.M.; Davis, A.P.; Dolinski, K.; Dwight, S.S.; Eppig, J.T.; et al. Gene Ontology: Tool for the unification of biology. Nat. Genet. 2000, 25, 25–29. [Google Scholar] [CrossRef] [PubMed]
 Couto, F.M.; Silva, M.J.; Coutinho, P.M. Measuring Semantic Similarity between Gene Ontology Terms. Data Knowl. Eng. 2007, 61, 137–152. [Google Scholar] [CrossRef]
 Wohlers, I.; Andonov, R.; Klau, G.W. Algorithm Engineering for optimal alignment of protein structure distance matrices. Optim. Lett. 2011, 5, 421–433. [Google Scholar] [CrossRef][Green Version]
© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).