Article

Mapping Higher-Order Network Flows in Memory and Multilayer Networks with Infomap

Integrated Science Lab, Department of Physics, Umeå University, SE-901 87 Umeå, Sweden
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(4), 112; https://doi.org/10.3390/a10040112
Submission received: 15 June 2017 / Revised: 15 September 2017 / Accepted: 26 September 2017 / Published: 30 September 2017
(This article belongs to the Special Issue Algorithms for Community Detection in Complex Networks)

Abstract

Comprehending complex systems by simplifying and highlighting important dynamical patterns requires modeling and mapping higher-order network flows. However, complex systems come in many forms and demand a range of representations, including memory and multilayer networks, which in turn call for versatile community-detection algorithms to reveal important modular regularities in the flows. Here we show that various forms of higher-order network flows can be represented in a unified way with networks that distinguish physical nodes for representing a complex system’s objects from state nodes for describing flows between the objects. Moreover, these so-called sparse memory networks allow the information-theoretic community detection method known as the map equation to identify overlapping and nested flow modules in data from a range of different higher-order interactions such as multistep, multi-source, and temporal data. We derive the map equation applied to sparse memory networks and describe its search algorithm Infomap, which can exploit the flexibility of sparse memory networks. Together they provide a general solution to reveal overlapping modular patterns in higher-order flows through complex systems.

1. Introduction

To connect structure and dynamics in complex systems, researchers model, for example, people navigating the web [1], rumors wandering around among citizens [2], and passengers traveling through airports [3], as flows on networks with random walkers. Take an air traffic network as an example. In a standard network approach, nodes represent airports, links represent flights, and random walkers moving on the links between the nodes represent passengers. This dynamical process corresponds to a first-order Markov model of network flows: a passenger arriving in an airport will randomly continue to an airport proportional to the air traffic volume to that airport. That means, for example, that two passengers who arrive in Las Vegas, one from San Francisco and one from New York, will have the same probability to fly to New York next. In reality, however, passengers are more likely to return to where they come from [4]. Accordingly, describing network flows with a first-order Markov model suffers from memory loss and washes out significant dynamical patterns [5,6,7] (Figure 1a). Similarly, aggregating flow pathways from multiple sources, such as different airlines or seasons in the air traffic example, into a single network can distort both the topology of the network and the dynamics on the network [8,9,10,11] (Figure 1c,d).
As a consequence, the actual patterns in the data, such as pervasively overlapping modules in air traffic and social networks [12], cannot be identified with community-detection algorithms that operate on first-order network flows [4]. To take advantage of higher-order network flows, researchers have therefore developed different representations, models, and community-detection algorithms that broadly fall into two research topics: memory networks and multilayer networks. In memory networks, higher-order network flows can represent multistep pathways such as flight itineraries [4,13,14,15], and in multilayer networks, they can represent temporal or multi-source data such as multiseason air traffic [8,11,16,17] (Figure 1b). Whereas memory networks can capture where flows move to depending on where they come from (Figure 1e), multilayer networks can capture where flows move to depending on the current layer (Figure 1f). Because memory and multilayer networks originate from different research topics in network science, revealing their flow modules has been treated as disparate community-detection problems. With the broad spectrum of higher-order network representations required to best describe diverse complex systems, the apparent need for specialized community-detection methods raises the question: How can we reveal communities in higher-order network flows with a general approach?
Here we show that describing higher-order network flows with so-called sparse memory networks and identifying modules with long flow persistence times, such as groups of airports that contain frequently traveled routes, provides a general solution to reveal modular patterns in higher-order flows through complex systems. When modeling flows in conventional networks, a single node type both represents a complex system’s objects and describes the flows with the nodes’ links (Figure 1c,d). Sparse memory networks, however, discriminate physical nodes, which represent a complex system’s objects, from state nodes, which describe a complex system’s internal flows with their links (Figure 1e,f). In sparse memory networks, state nodes are not bound to represent, for example, previous steps in memory networks or layers in multilayer networks, but are free to represent abstract states such as lumped states [18] or states in multilayer memory networks, which we demonstrate with multistep and multiquarter air traffic data. In this way, a sparse memory network is a MultiAspect Graph with two aspects: the physical object and the flow state such as memory or layer [17]. We show that various higher-order network flow representations, including memory and multilayer networks, can be represented with sparse memory networks. We also provide a detailed derivation of the information-theoretic map equation for identifying hierarchically nested modules with long flow persistence times in sparse memory networks, and introduce a new version of the map equation’s search algorithm Infomap that exploits the flexibility of sparse memory networks.

2. Modeling Network Flows

The dynamics and function of complex systems emerge from interdependences between their components [19,20,21]. Depending on the system under study, the components can be, for example, people, airports, hospital wards, and banks. The interdependence, in turn, often comes from flows of some entity between the components, such as ideas circulating among colleagues, passengers traveling through airports, patients moving between hospital wards, or money transferred between banks. To efficiently capture such flows through complex systems, researchers model them with random walks on networks.

2.1. First-Order Network Flows

In a first-order network representation, the flow direction only depends on the previously visited physical node. That is, for a random walker that moves between physical nodes $i \in \{1, 2, \ldots, N_P\}$, which represent objects in a complex system, and in $t$ steps generates a sequence of random variables $X_1, X_2, \ldots, X_t$, the transition probabilities
$$P(X_t \mid X_{t-1}, X_{t-2}, \ldots) = P(X_t \mid X_{t-1}) \tag{1}$$
only depend on the previously visited node's outlinks. With link weights $w_{ij}$ between physical nodes $i$ and $j$, and total outlink weight $w_i = \sum_j w_{ij}$ of node $i$, the first-order transition probabilities are
$$P(i \to j) = P_{ij} = \frac{w_{ij}}{w_i}, \tag{2}$$
which give the stationary visit rates
$$\pi_i = \sum_j \pi_j P_{ji}. \tag{3}$$
To ensure ergodic stationary visit rates, from each node we can let the random walker teleport with probability $\tau$, or with probability 1 if the node has no outlinks, to a random target node proportional to the target node's inlink weight [22].
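As a concrete illustration, the following minimal Python sketch (our own, not the Infomap implementation; the function name, the dense-matrix input, and the default teleportation rate are illustrative assumptions) builds the first-order transition probabilities of Equation (2) and power-iterates Equation (3) with a simplified variant of the inlink-proportional teleportation described above:

import numpy as np

def stationary_visit_rates(w, tau=0.15, tol=1e-12, max_iter=10_000):
    """Power-iterate Equations (2)-(3) with teleportation to inlink-weighted targets."""
    n = w.shape[0]
    out_w = w.sum(axis=1)                       # total outlink weight w_i
    in_w = w.sum(axis=0)                        # total inlink weight per node
    teleport = in_w / in_w.sum()                # teleport proportional to inlink weight
    P = np.divide(w, out_w[:, None], out=np.zeros_like(w, dtype=float),
                  where=out_w[:, None] > 0)     # first-order transitions P_ij
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        dangling = pi[out_w == 0].sum()         # nodes without outlinks teleport with probability 1
        new_pi = (1 - tau) * (pi @ P) + ((1 - tau) * dangling + tau) * teleport
        converged = np.abs(new_pi - pi).sum() < tol
        pi = new_pi
        if converged:
            break
    return pi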
While a first-order model is sufficient to capture flow dynamics in some systems, recent studies have shown that higher-order flows are required to capture important dynamics in many complex systems [5,6,23]. The standard approach of modeling dynamical processes on networks with first-order flows oversimplifies the real dynamics and sets a limit of what can actually be detected in the system (Figure 1). Capturing critical phenomena in the dynamics and function of complex systems therefore often requires models of higher-order network flows [5,6,7,23,24,25,26,27,28].

2.2. Higher-Order Network Flows

In higher-order network representations, more information than the previously visited physical node's outlinks is used to determine where flows go. Examples of higher-order network representations are memory, multilayer, and temporal networks.
In memory networks, the flow direction depends on multiple previously visited nodes. Specifically, for a random walker that steps between physical nodes and generates a sequence of random variables $X_1, \ldots, X_t$, the transition probabilities for a higher-order flow model of order $m$,
$$P(X_t \mid X_{t-1}, X_{t-2}, \ldots) = P(X_t \mid X_{t-1}, \ldots, X_{t-m}), \tag{4}$$
depend on the $m$ previously visited physical nodes. Assuming stationarity and $m$ visited nodes, from $x_m$ visited $m$ steps ago to the previously visited node $i$, in the sequence $x_m x_{m-1} \cdots x_2\, i$, the $m$th-order transition probabilities between physical nodes $i$ and $j$ correspond to first-order transition probabilities $P(x_m \cdots x_2\, i \to x_{m-1} \cdots x_2\, i\, j)$ between state nodes $\alpha_i = x_m \cdots x_2\, i$ and $\beta_j = x_{m-1} \cdots x_2\, i\, j$. We use the subscript $i$ in $\alpha_i$ to highlight the state node's physical node. With link weights $w_{\alpha_i \beta_j}$ between state nodes $\alpha_i$ and $\beta_j$, and total outlink weights $w_{\alpha_i} = \sum_{\beta_j} w_{\alpha_i \beta_j}$, the transition probabilities for memory networks are
$$P(x_m \cdots x_2\, i \to x_{m-1} \cdots x_2\, i\, j) = P_{\alpha_i \beta_j} = \frac{w_{\alpha_i \beta_j}}{w_{\alpha_i}}. \tag{5}$$
As in a first-order network representation, teleportation can ensure ergodic stationary visit rates $\pi_{\alpha_i}$ [4].
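To illustrate how multistep pathway data turn into state nodes and links, the short sketch below (an illustrative helper of our own, not part of Infomap; pathways are assumed to be lists of physical node labels) builds the second-order state nodes and transition probabilities of Equation (5):

from collections import defaultdict

def pathways_to_second_order(pathways):
    """Build second-order state nodes (previous node, current node) and their
    transition probabilities, Equation (5), from multistep pathways."""
    link_weight = defaultdict(float)
    for path in pathways:
        for x_prev, i, j in zip(path, path[1:], path[2:]):
            # state node alpha_i = (x_prev, i) links to beta_j = (i, j)
            link_weight[(x_prev, i), (i, j)] += 1.0
    out_weight = defaultdict(float)
    for (alpha, _), w in link_weight.items():
        out_weight[alpha] += w
    # P_{alpha_i beta_j} = w_{alpha_i beta_j} / w_{alpha_i}
    return {(a, b): w / out_weight[a] for (a, b), w in link_weight.items()}

# For example, itineraries such as ["SFO", "LAS", "SFO"] and ["NYC", "LAS", "NYC"]
# give separate state nodes ("SFO", "LAS") and ("NYC", "LAS") in physical node LAS,
# so where a passenger continues from Las Vegas can depend on where they came from.
transitions = pathways_to_second_order([["SFO", "LAS", "SFO"], ["NYC", "LAS", "NYC"]])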
In multilayer networks with the same physical nodes possibly present in multiple layers, the flow direction depends on both the previously visited physical node $i$ and the layer $\alpha \in \{1, 2, \ldots, l\}$. Similar to the memory representation, the multilayer transition probabilities between physical node $i$ in layer $\alpha$ and physical node $j$ in layer $\beta$ correspond to first-order transition probabilities between state nodes $\alpha_i$ and $\beta_j$.
In many cases, empirical interaction data only contain links within layers, such that $w_{\alpha_i \beta_j} \neq 0$ only for $\alpha = \beta$. Without empirical links between layers, it is useful to couple the layers by allowing a random walker to move between layers at a relax rate $r$. With probability $1 - r$, a random walker follows a link of the physical node in the currently visited layer, and with probability $r$ the random walker relaxes the layer constraint and follows any link of the physical node in any layer. In both cases, the random walker follows a link proportional to its weight. With total outlink weight $w_{\alpha_i} = \sum_{\beta_j} w_{\alpha_i \beta_j}$ from physical node $i$ in layer $\alpha$, and total outlink weight $w_i = \sum_{\alpha} \sum_{\beta_j} w_{\alpha_i \beta_j}$ from physical node $i$ across all layers, which both correspond to total intralayer outlink weights as long as there are only empirical links within layers, the transition probabilities for multilayer networks with modeled interlayer transitions are
$$P(\alpha_i \to \beta_j, r) = P_{\alpha_i \beta_j}(r) = (1 - r)\, \delta_{\alpha\beta}\, \frac{w_{\alpha_i \beta_j}}{w_{\alpha_i}} + r\, \frac{w_{\beta_i \beta_j}}{w_i}, \tag{6}$$
where $\delta_{\alpha\beta}$ is the Kronecker delta, which is 1 if $\alpha = \beta$ and 0 otherwise. In this way, the transition probabilities capture completely separated layers for relax rate $r = 0$ and completely aggregated layers for relax rate $r = 1$. The system under study and the problem at hand should determine the relax rate. In practice, for relax rates in a wide range around 0.25, results are robust in many real multilayer networks [11].
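A minimal sketch of Equation (6) follows, assuming intralayer link weights are given as one dense matrix per layer (an input format of our own choosing; Infomap reads the *Intra format shown in Appendix A.2 instead):

import numpy as np

def multilayer_transitions(intra, r=0.25):
    """Relax-rate coupling of layers, Equation (6), for data with intralayer links
    only. intra[alpha] is an n-by-n matrix of link weights in layer alpha. Returns
    transition probabilities P[(alpha, i), (beta, j)]."""
    layers = sorted(intra)
    n = intra[layers[0]].shape[0]
    w_state = {a: intra[a].sum(axis=1) for a in layers}     # w_{alpha_i}
    w_phys = sum(w_state[a] for a in layers)                # w_i across all layers
    P = {}
    for a in layers:
        for i in range(n):
            if w_state[a][i] == 0:                          # sketch: skip states without outlinks
                continue
            for b in layers:
                for j in range(n):
                    p = r * intra[b][i, j] / w_phys[i]      # relaxed move, any layer
                    if a == b:
                        p += (1 - r) * intra[a][i, j] / w_state[a][i]  # constrained move
                    if p > 0:
                        P[(a, i), (b, j)] = p
    return P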
With both intralayer and interlayer links, such that $w_{\alpha_i \beta_j} \neq 0$ also for $\alpha \neq \beta$, either from empirical data or from modeled interlayer links $P_{\alpha_i \beta_j}(r) \to w_{\alpha_i \beta_j}$, the transition probabilities for multilayer networks can be written in their most general form,
$$P(\alpha_i \to \beta_j) = P_{\alpha_i \beta_j} = \frac{w_{\alpha_i \beta_j}}{w_{\alpha_i}}, \tag{7}$$
where $w_{\alpha_i}$ again is the total outlink weight from physical node $i$ in layer $\alpha$. For directed multilayer networks, teleportation can ensure ergodic stationary visit rates $\pi_{\alpha_i}$.

2.3. Sparse Memory Networks

While memory networks and multilayer networks operate on different higher-order interaction data, the resemblance between Equations (5) and (7) suggests that they are two examples of a more general network representation. In a memory network model, a random walker steps between physical nodes such that the next step depends on the previous steps (Equation (5)). In a multilayer network model, the next step instead depends on the currently visited layer (Equation (7)). However, as Equations (5) and (7) show, both models correspond to first-order transitions between state nodes $\alpha_i$ and $\beta_j$ associated with physical nodes $i$ and $j$ in the network,
$$P_{\alpha_i \beta_j} = \frac{w_{\alpha_i \beta_j}}{w_{\alpha_i}}. \tag{8}$$
Consequently, the state node visit rates are
$$\pi_{\alpha_i} = \sum_{\beta_j} \pi_{\beta_j} P_{\beta_j \alpha_i}, \tag{9}$$
where the sum is over all state nodes in all physical nodes. The physical node visit rates are
$$\pi_i = \sum_{\beta_j \in i} \pi_{\beta_j}, \tag{10}$$
where the sum is over all state nodes in physical node $i$. Both memory and multilayer networks can therefore be represented with a network of physical nodes and state nodes that are bound neither to represent previous steps nor the current layer. Because state nodes are free to represent abstract states, and redundant state nodes can be lumped together, we call this representation a sparse memory network.
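To make the sparse memory network representation concrete, the following sketch (with input formats and names of our own choosing) power-iterates Equation (9) for the state node visit rates and sums them per physical node according to Equation (10):

import numpy as np
from collections import defaultdict

def visit_rates(state_links, physical_of, n_iter=1000):
    """State node visit rates from first-order transitions between state nodes
    (Equation (9)) and physical node visit rates (Equation (10)). state_links maps
    (alpha, beta) -> weight and physical_of maps state node -> physical node;
    the sketch assumes every state node has at least one outlink."""
    states = sorted({s for link in state_links for s in link})
    index = {s: k for k, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for (a, b), w in state_links.items():
        P[index[a], index[b]] = w
    P /= P.sum(axis=1, keepdims=True)             # P_{alpha_i beta_j}
    pi = np.full(len(states), 1.0 / len(states))
    for _ in range(n_iter):                       # power iteration of Equation (9)
        pi = pi @ P
    pi_physical = defaultdict(float)
    for s, rate in zip(states, pi):               # Equation (10)
        pi_physical[physical_of[s]] += rate
    return dict(zip(states, pi)), dict(pi_physical)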

2.4. Representing Memory and Multilayer Networks with Sparse Memory Networks

To illustrate that sparse memory networks can represent both memory and multilayer networks, we use a schematic network with five physical nodes (Figure 2a). The network represents five individuals, with the center node's two friends to the left and two colleagues to the right. We imagine that the multistep pathway data come from two sources, say a Facebook conversation thread among the friends, illustrated with the top sequence above the network and the solid pathway on the network, and an email conversation thread among the colleagues, illustrated with the bottom sequence and the dashed pathway. For simplicity, the pathway among the friends stays among the friends and steps between friends with equal probability. The pathway among the colleagues behaves in a corresponding way. We first represent these pathway data with a memory and a multilayer network, and then show that both can be represented with a sparse memory network.
We can represent multistep pathways with links between state nodes in physical nodes. For a memory network representation of a second-order Markov model, each state node captures which physical node the flows come from. For example, the highlighted pathway step in Figure 2a corresponds to a link between state node $\epsilon_i$ in physical node $i$—capturing flows coming to physical node $i$ from physical node $j$—and state node $\gamma_k$ in physical node $k$—capturing flows coming to physical node $k$ from physical node $i$ (Figure 2b). In this way, random walker movements between state nodes can capture higher-order network flows between observable physical nodes.
We can represent multistep pathways with links between state nodes in layers. First, we can map all state nodes that represent flows coming from a specific physical node onto the same layer. For example, we map the red state nodes $\epsilon_i$ and $\epsilon_k$, for flows coming from the red physical node $j$ to physical nodes $i$ and $k$, respectively, in Figure 2b onto the red layer $\epsilon$ at the bottom in Figure 2c. This mapping gives a one-to-one correspondence between the memory network and the multilayer network. Therefore we call it a multilayer memory network.
An alternative and more standard multilayer representation exploits that the pathway data come from two sources and uses one layer for each data source (Figure 2d). Network flows are first-order when constrained to move within individual layers and higher-order when free to move within and between layers. The highlighted pathway step in Figure 2a now corresponds to a step in layer $\eta$ between state node $\eta_i$ and state node $\eta_k$—capturing flows remaining among friends. Again, random walker movements between state nodes can capture higher-order network flows between observable physical nodes.
In our multistep pathway example from two sources, a full second-order model in the memory network is not required and is partly redundant for describing the network flows. We can model the same pathways with a more compact description. Specifically, we can lump together state nodes $\alpha_i$ and $\beta_i$ in the same physical node $i$ if they have identical outlinks, $w_{\alpha_i \gamma_j} = w_{\beta_i \gamma_j}$ for all $\gamma_j$. The lumped state node $\tilde{\alpha}_i$ replaces $\alpha_i$ and $\beta_i$ and assembles all their inlinks and outlinks such that the transition probabilities remain the same. For example, for describing where flows move from physical node $i$, state nodes $\epsilon_i$ and $\delta_i$ have identical outlinks—reflecting that it does not matter which friend was previously active in the conversation—as do state nodes $\beta_i$ and $\alpha_i$—reflecting that it does not matter which colleague was previously active in the conversation. Lumping together all such redundant state nodes, such as $\{\epsilon_i, \delta_i\} \to \tilde{\alpha}_i$ or $\{\beta_i, \alpha_i\} \to \tilde{\delta}_i$, gives the sparse memory network in Figure 2e, where state nodes no longer are bound to capture the exact previous steps but still capture the same dynamics as the redundant full second-order model. These unbound state nodes are free to represent abstract states, such as lumped state nodes in a second-order memory network, but can also represent state nodes in a full second-order memory network or state nodes in a multilayer network, such as $\eta_i \to \tilde{\alpha}_i$ in Figure 2d,e. For example, lumping state nodes with minimal information loss can balance under- and overfitting for efficient sparse memory networks that represent variable-order network flow models [18]. Consequently, rather than an explosion of application- and data-dependent representations, sparse memory networks with physical nodes and state nodes that can represent abstract states provide an efficient solution for modeling higher-order network flows.
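A minimal sketch of such lossless lumping follows (the function name and the dictionary input format are illustrative assumptions of our own; Infomap and Ref. [18] use a more general lumping based on minimal information loss): state nodes of the same physical node with identical outlinks are grouped, and their inlinks and outlinks are reassembled on the lumped state node.

from collections import defaultdict

def lump_identical_state_nodes(links, physical_of):
    """Lump state nodes of the same physical node that have identical outlinks,
    w_{alpha_i gamma_j} = w_{beta_i gamma_j} for all gamma_j, as in Figure 2e.
    links maps (alpha, beta) -> weight; physical_of maps state node -> physical node."""
    out = defaultdict(dict)
    for (a, b), w in links.items():
        out[a][b] = w
    # group state nodes by (physical node, outlink signature)
    signature = {a: (physical_of[a], tuple(sorted(targets.items())))
                 for a, targets in out.items()}
    lumped_id = {}
    for a in out:
        lumped_id.setdefault(signature[a], "lumped_%d" % len(lumped_id))
    mapping = {a: lumped_id[signature[a]] for a in out}
    # reassemble inlinks and outlinks on the lumped state nodes
    lumped_links = defaultdict(float)
    for (a, b), w in links.items():
        lumped_links[mapping.get(a, a), mapping.get(b, b)] += w
    return mapping, dict(lumped_links)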

3. Mapping Network Flows

While networks and higher-order models make it possible to describe flows through complex systems, they remain highly complex even when abstracted to nodes and links. Thousands or millions of nodes and links can bury valuable information. To reveal this information, it is often indispensable to comprehend the organization of large complex systems by assigning nodes to modules with community-detection algorithms. Here we show how the community-detection method known as the map equation can operate on sparse memory networks and allow for versatile mapping of network flows from multistep, multi-source, and temporal data.

3.1. The Map Equation for First-Order Network Flows

When simplifying and highlighting network flows with possibly nested modules, the map equation measures how well a modular description compresses the flows [29]. Because compressing data is dual to finding regularities in the data [30], minimizing the modular description length of network flows is dual to finding modular regularities in the network flows. For describing movements within and between modules, the map equation uses code books that connect node visits, module exits, and module entries with code words. To estimate the shortest average description length of each code book, the map equation takes advantage of Shannon’s source coding theorem and measures the Shannon entropy of the code word use rates [30]. Moreover, the map equation uses a hierarchically nested code structure designed such that the description can be compressed if the network has modules in which a random walker tends to stay for a long time. Therefore, with a random walker as a proxy for real flows, minimizing the map equation over all possible network clusterings reveals important modular regularities in network flows.
In detail, the map equation measures the minimum average description length for a multilevel map $\mathsf{M}$ of $N$ physical nodes clustered into $M$ modules, for which each module $m$ has a submap $\mathsf{M}_m$ with $M_m$ submodules, for which each submodule $mn$ has a submap $\mathsf{M}_{mn}$ with $M_{mn}$ submodules, and so on (Figure 3). In each submodule $mno$ at the finest level, the code word use rate for exiting the module is
$$q_{mno\curvearrowright} = \sum_{i \in \mathsf{M}_{mno}} \sum_{j \notin \mathsf{M}_{mno}} \pi_i P_{ij} \quad (\text{Figure 3a}), \tag{11}$$
and the total code word use rate for also visiting nodes in the module is
$$p_{mno\circlearrowright} = q_{mno\curvearrowright} + \sum_{i \in \mathsf{M}_{mno}} \pi_i, \tag{12}$$
such that the average code word length is
$$H(\mathcal{P}_{mno}) = -\frac{q_{mno\curvearrowright}}{p_{mno\circlearrowright}} \log \frac{q_{mno\curvearrowright}}{p_{mno\circlearrowright}} - \sum_{i \in \mathsf{M}_{mno}} \frac{\pi_i}{p_{mno\circlearrowright}} \log \frac{\pi_i}{p_{mno\circlearrowright}}. \tag{13}$$
Weighting the average code word length of the code book for module $mno$ at the finest level by its use rate gives the contribution to the description length,
$$L(\mathsf{M}_{mno}) = p_{mno\circlearrowright} H(\mathcal{P}_{mno}). \tag{14}$$
In each submodule $m$ at intermediate levels, the code word use rate for exiting to a coarser level is
$$q_{m\curvearrowright} = \sum_{i \in \mathsf{M}_m} \sum_{j \notin \mathsf{M}_m} \pi_i P_{ij}, \tag{15}$$
and for entering each of the $M_m$ submodules $\mathsf{M}_{mn}$ at a finer level it is
$$q_{\curvearrowright mn} = \sum_{i \notin \mathsf{M}_{mn}} \sum_{j \in \mathsf{M}_{mn}} \pi_i P_{ij} \quad (\text{Figure 3c}). \tag{16}$$
Therefore, the total code word use rate in submodule $m$ is
$$q_{m\circlearrowright} = q_{m\curvearrowright} + \sum_{n=1}^{M_m} q_{\curvearrowright mn}, \tag{17}$$
which gives the average code word length
$$H(\mathcal{Q}_m) = -\frac{q_{m\curvearrowright}}{q_{m\circlearrowright}} \log \frac{q_{m\curvearrowright}}{q_{m\circlearrowright}} - \sum_{n=1}^{M_m} \frac{q_{\curvearrowright mn}}{q_{m\circlearrowright}} \log \frac{q_{\curvearrowright mn}}{q_{m\circlearrowright}}. \tag{18}$$
Weighting the average code word length of the code book for module $m$ at intermediate levels by its use rate, and adding the description lengths of submodules at finer levels in a recursive fashion down to the finest level in Equation (14), gives the contribution to the description length,
$$L(\mathsf{M}_m) = q_{m\circlearrowright} H(\mathcal{Q}_m) + \sum_{n=1}^{M_m} L(\mathsf{M}_{mn}). \tag{19}$$
At the coarsest level, there is no coarser level to exit to, and the code word use rate for entering each of the $M$ modules $\mathsf{M}_m$ at a finer level is
$$q_{\curvearrowright m} = \sum_{i \notin \mathsf{M}_m} \sum_{j \in \mathsf{M}_m} \pi_i P_{ij} \quad (\text{Figure 3d}), \tag{20}$$
such that the total code word use rate at the coarsest level is
$$q_{\circlearrowright} = \sum_{m=1}^{M} q_{\curvearrowright m}, \tag{21}$$
which gives the average code word length
$$H(\mathcal{Q}) = -\sum_{m=1}^{M} \frac{q_{\curvearrowright m}}{q_{\circlearrowright}} \log \frac{q_{\curvearrowright m}}{q_{\circlearrowright}}. \tag{22}$$
Weighting the average code word length of the code book at the coarsest level by its use rate, and adding the description lengths of submodules at finer levels from Equation (19) in a recursive fashion, gives the multilevel map equation [31]
$$L(\mathsf{M}) = q_{\circlearrowright} H(\mathcal{Q}) + \sum_{m=1}^{M} L(\mathsf{M}_m). \tag{23}$$
To find the multilevel map that best represents flows in a network, we seek the multilevel clustering of the network that minimizes the multilevel map equation over all possible multilevel clusterings of the network (see Algorithm 1 in Section 4).
While there are several advantages with the multilevel description, including potentially better compression and effectively eliminated resolution limit [32], for simplicity researchers often choose two-level descriptions. In this case, there are no intermediate submodules and the two-level map equation is
$$L(\mathsf{M}) = q_{\circlearrowright} H(\mathcal{Q}) + \sum_{m=1}^{M} p_{m\circlearrowright} H(\mathcal{P}_m). \tag{24}$$
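As a concrete illustration of Equation (24), the following Python sketch (a minimal, unoptimized illustration with input formats of our own choosing, not the Infomap implementation) computes the two-level description length from stationary visit rates, transition probabilities, and a module assignment:

import numpy as np
from collections import defaultdict

def plogp(x):
    return x * np.log2(x) if x > 0 else 0.0

def two_level_map_equation(pi, P, module):
    """Two-level map equation, Equation (24), in bits. pi[i] are stationary visit
    rates, P[(i, j)] transition probabilities, and module[i] module assignments."""
    q_exit = defaultdict(float)                   # module exit rates, Equation (15)
    for (i, j), p in P.items():
        if module[i] != module[j]:
            q_exit[module[i]] += pi[i] * p
    # at stationarity, module entry and exit rates are equal, so q_exit also
    # serves as the entry rate in the index codebook, Equations (20)-(22)
    q_total = sum(q_exit.values())
    L = -q_total * sum(plogp(q / q_total) for q in q_exit.values()) if q_total > 0 else 0.0
    members = defaultdict(list)
    for i, m in module.items():
        members[m].append(i)
    for m, nodes in members.items():              # module codebooks, Equations (12)-(13)
        p_m = q_exit[m] + sum(pi[i] for i in nodes)
        H_m = -plogp(q_exit[m] / p_m) - sum(plogp(pi[i] / p_m) for i in nodes)
        L += p_m * H_m
    return L

Minimizing this quantity over all module assignments, as Infomap does in Section 4, gives the optimal two-level map.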

3.2. The Map Equation for Higher-Order Network Flows

The map equation for first-order network flows measures the description length of a random walker stepping between physical nodes within and between modules. This principle remains the same also for higher-order network flows, although higher-order models guide the random walker between physical nodes with the help of state nodes. Therefore, extending the map equation to higher-order network flows, including those described by memory, multilayer, and sparse memory networks, is straightforward. Equations (11) to (24) remain the same with $i \to \alpha_i$ and $j \to \beta_j$. The only difference is at the finest level (Figure 3b). State nodes of the same physical node assigned to the same module should share code words, or they would not represent the same object. That is, if multiple state nodes $\beta_i$ of the same physical node $i$ are assigned to the same module $mno$, we first sum their probabilities to obtain the visit rate of physical node $i$ in module $mno$,
$$\pi_{i \in mno} = \sum_{\beta_i \in \mathsf{M}_{mno}} \pi_{\beta_i} \quad (\text{Figure 3b}). \tag{25}$$
In this way, the frequency-weighted average code word length in submodule code book $mno$ in Equation (13) becomes
$$H(\mathcal{P}_{mno}) = -\frac{q_{mno\curvearrowright}}{p_{mno\circlearrowright}} \log \frac{q_{mno\curvearrowright}}{p_{mno\circlearrowright}} - \sum_{i \in \mathsf{M}_{mno}} \frac{\pi_{i \in mno}}{p_{mno\circlearrowright}} \log \frac{\pi_{i \in mno}}{p_{mno\circlearrowright}}, \tag{26}$$
where the sum is over all physical nodes that have state nodes assigned to module $\mathsf{M}_{mno}$. In this way, the map equation can measure the modular description length of state-node-guided higher-order flows between physical nodes.
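Only the finest-level module code books change for higher-order flows. The sketch below (again with illustrative input formats, not the Infomap implementation) shows the corresponding modification of the two-level computation: state node visit rates are first aggregated per physical node within each module, Equation (25), before the module entropies of Equation (26) are computed:

import numpy as np
from collections import defaultdict

def plogp(x):
    return x * np.log2(x) if x > 0 else 0.0

def higher_order_two_level_codelength(pi, P, physical_of, module):
    """Two-level map equation for state node dynamics, Equations (24)-(26). pi and P
    are state node visit rates and transitions, physical_of maps state node ->
    physical node, and module maps state node -> module."""
    q_exit = defaultdict(float)
    for (a, b), p in P.items():                   # a, b are state nodes
        if module[a] != module[b]:
            q_exit[module[a]] += pi[a] * p
    q = sum(q_exit.values())
    L = -q * sum(plogp(x / q) for x in q_exit.values()) if q > 0 else 0.0
    pi_phys = defaultdict(float)                  # Equation (25): per (module, physical node)
    for a, rate in pi.items():
        pi_phys[module[a], physical_of[a]] += rate
    rates_in_module = defaultdict(list)
    for (m, _), rate in pi_phys.items():
        rates_in_module[m].append(rate)
    for m, rates in rates_in_module.items():      # Equation (26) per module
        p_m = q_exit[m] + sum(rates)
        L += p_m * (-plogp(q_exit[m] / p_m) - sum(plogp(x / p_m) for x in rates))
    return L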
To illustrate that the separation between physical nodes and state nodes matters when clustering higher-order network flows, we cluster the red state nodes and the dashed blue state nodes in Figure 4 in two different modules overlapping in the center physical node. For a more illustrative example, we also allow transitions between red and blue nodes at rate $r/2$ in the center physical node. In the memory and sparse memory networks, the transitions correspond to links from state nodes in physical node $i$ to state nodes in the other module with relative weight $r/2$ and to the same module with relative weight $1 - r/2$ (Figure 4b,c). In the multilayer network, the transitions correspond to relax rate $r$ according to Equation (6), since relaxing to any layer in physical node $i$ with equal link weights in both layers means that one half of the relaxed flows switch layer (Figure 4a). Independent of the relax rate, in these symmetric networks the node visit rates are uniformly distributed: $\pi_{\alpha_i} = 1/6$ for each of the six state nodes in the multilayer and sparse memory networks, and $\pi_{\alpha_i} = 1/12$ for each of the twelve state nodes in the memory network. For illustration, if we incorrectly treat state nodes as physical nodes in the map equation, Equations (11) to (24), the one-module clustering of the memory network with twelve virtual physical nodes, $\mathsf{M}_1^{12\mathrm{p}}$, has code length
$$L(\mathsf{M}_1^{12\mathrm{p}}) = \underbrace{\tfrac{12}{12}}_{p_{1\circlearrowright}} \underbrace{H\!\left(\tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}\right)}_{H(\mathcal{P}_1)} \approx 3.58 \text{ bits}, \quad \text{with } H(p_1, p_2, \ldots) = -\sum_i \frac{p_i}{\sum_j p_j} \log \frac{p_i}{\sum_j p_j}. \tag{27}$$
The corresponding two-module clustering of the memory network with twelve virtual physical nodes, $\mathsf{M}_2^{12\mathrm{p}}$, has code length
$$L(\mathsf{M}_2^{12\mathrm{p}}) = \underbrace{\tfrac{r}{6}}_{q_{\circlearrowright}} \underbrace{H\!\left(\tfrac{r}{12}, \tfrac{r}{12}\right)}_{H(\mathcal{Q})} + 2\, \underbrace{\tfrac{6+r}{12}}_{p_{m\circlearrowright}} \underbrace{H\!\left(\tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{1}{12}, \tfrac{r}{12}\right)}_{H(\mathcal{P}_m)} \in [2.58, 3.44] \text{ bits for } r \in [0, 1]. \tag{28}$$
Therefore, the two-module clustering gives the best compression for all relax rates (Figure 4e). For the sparse memory network with six virtual physical nodes, the one-module clustering, $\mathsf{M}_1^{6\mathrm{p}}$, has code length
$$L(\mathsf{M}_1^{6\mathrm{p}}) = \tfrac{6}{6}\, H\!\left(\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}\right) \approx 2.58 \text{ bits}, \tag{29}$$
and the two-module clustering, $\mathsf{M}_2^{6\mathrm{p}}$, has code length
$$L(\mathsf{M}_2^{6\mathrm{p}}) = \tfrac{r}{6}\, H\!\left(\tfrac{r}{12}, \tfrac{r}{12}\right) + 2\, \tfrac{6+r}{12}\, H\!\left(\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{r}{12}\right) \in [1.58, 2.44] \text{ bits for } r \in [0, 1]. \tag{30}$$
While the two-module clustering again gives the best compression for all relax rates, the code lengths are shifted by 1 bit compared with the memory network with twelve virtual physical nodes (Figure 4e): the same dynamics, but a different code length. These expanded solutions with virtual physical nodes do not capture the important and special role that physical nodes play as representatives of a system's objects.
Properly separating state nodes and physical nodes as in Equation (26) instead gives equal code lengths for identical clusterings irrespective of representation. For example, the one-module clustering of the memory network with twelve state nodes, $\mathsf{M}_1^{5\mathrm{p}12\mathrm{s}}$, and of the sparse memory network with six state nodes, $\mathsf{M}_1^{5\mathrm{p}6\mathrm{s}}$, have identical code length
$$L(\mathsf{M}_1^{5\mathrm{p}12\mathrm{s}}) = L(\mathsf{M}_1^{5\mathrm{p}6\mathrm{s}}) = H\!\left(\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{2}{6}\right) \approx 2.25 \text{ bits}, \tag{31}$$
because visits to a physical node's state nodes in the same module are aggregated according to Equation (25), such that those state nodes share code words and the encoding represents higher-order flows between a system's objects. Similarly, the two-module clustering of the memory network with twelve state nodes, $\mathsf{M}_2^{5\mathrm{p}12\mathrm{s}}$, and of the sparse memory network with six state nodes, $\mathsf{M}_2^{5\mathrm{p}6\mathrm{s}}$, have identical code length
$$L(\mathsf{M}_2^{5\mathrm{p}12\mathrm{s}}) = L(\mathsf{M}_2^{5\mathrm{p}6\mathrm{s}}) = \tfrac{r}{6}\, H\!\left(\tfrac{r}{12}, \tfrac{r}{12}\right) + 2\, \tfrac{6+r}{12}\, H\!\left(\tfrac{1}{6}, \tfrac{1}{6}, \tfrac{1}{6}, \tfrac{r}{12}\right) \in [1.58, 2.44] \text{ bits for } r \in [0, 1]. \tag{32}$$
That is, the same dynamics give the same code length for identical clusterings with a proper separation of state nodes and physical nodes.
For these solutions with physical nodes and state nodes that properly capture higher-order network flows, the overlapping two-module clustering gives the best compression for relax rates below $r \approx 0.71$ (Figure 4e). In this example, the one-module clustering can therefore, for sufficiently high relax rates, compress the network flows better than the two-module clustering. Compared with the expanded clusterings with virtual physical nodes, where this cannot happen, the one-module clustering gives a relatively shorter code length thanks to code word sharing between state nodes in physical node $i$. In general, modeling higher-order network flows with physical nodes and state nodes, and accounting for them when mapping the network flows, gives overlapping modular clusterings that do not depend on the particular representation but only on the actual dynamical patterns.
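These numbers are straightforward to verify numerically. The short Python check below (our own; the helper names are illustrative) evaluates the entropies in Equations (29)-(32) and reproduces the code lengths as well as the crossover near r ≈ 0.71:

import numpy as np

def H(weights):
    """Entropy of normalized weights, as defined in Equation (27)."""
    p = np.asarray(weights, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

def two_module_codelength(r):
    """Equations (30) and (32): two-module code length of the example for relax rate r."""
    index = r / 6 * H([r / 12, r / 12]) if r > 0 else 0.0
    modules = 2 * (6 + r) / 12 * H([1 / 6, 1 / 6, 1 / 6, r / 12])
    return index + modules

print(H([1 / 12] * 12))                 # Equation (27): approximately 3.58 bits
print(H([1 / 6] * 6))                   # Equation (29): approximately 2.58 bits
print(H([1 / 6] * 4 + [2 / 6]))         # Equation (31): approximately 2.25 bits
print(two_module_codelength(0.0), two_module_codelength(1.0))   # approximately 1.58 and 2.44 bits
print(two_module_codelength(0.71))      # approximately 2.25 bits: crossover with Equation (31)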

4. Infomap

To find the best modular description of network flows according to the map equation, we have developed the stochastic and fast search algorithm called Infomap. Infomap can operate on standard, multilayer, memory, and sparse memory networks with unweighted or weighted and undirected or directed links, and identify two-level or multilevel and non-overlapping or overlapping clusterings. Infomap takes advantage of parallelization with OpenMP and there is also a distributed version implemented with GraphLab PowerGraph for standard networks [33]. In principle, the search algorithm can optimize other objectives than the map equation to find other types of community structures.
Recent versions of Infomap operate on physical nodes and state nodes. Each state node has a unique state id and is assigned to a physical id, which can be the same for many state nodes. Infomap only uses physical nodes and state nodes for higher-order network flow representations, such as multilayer, memory, and sparse memory networks, and physical nodes alone when they are sufficient to represent first-order network flows in standard networks.
To balance accuracy and speed, Infomap uses some repeatedly and recursively applied stochastic subroutine algorithms. For example, the multilevel clustering (Algorithm 1) and two-level clustering (Algorithm 2) algorithms first repeatedly aggregate nodes in modules (Algorithm 3) to optimize the map equation (Algorithm 4) and then repeatedly and recursively fine-tune (Algorithm 5) and coarse-tune (Algorithm 6) the results. Together with complete restarts, these tuning algorithms avoid local minima and improve accuracy.
By default, Infomap tries to find a multilevel clustering of a network (Algorithm 1).
Algorithm 1 Multilevel clustering
function multilevelPartition(Network G)
    M ← partition(G) (Algorithm 2)
    M ← repeatedSuperModules(G, M) (Algorithm 7)
    M ← repeatedSubModules(G, M) (Algorithm 8)
    return M
Algorithm 2 Two-level clustering
function partition(Network G)
    Let L be the current code length
    Let ϵ be a small threshold
    Let I_max^T be a maximum number of iterations
    M ← repeatedNodeAggregation(G) (Algorithm 3)
    L_old ← L(M)
    I^T ← 0
    repeat
        I^T ← I^T + 1
        if I^T is odd then
            M ← FineTune(G, M) (Algorithm 5)
        else
            M ← CoarseTune(G, M) (Algorithm 6)
        δL ← L(M) − L_old
        L_old ← L(M)
    until δL > −ϵ or I^T = I_max^T
    return M

4.1. Two-Level Clustering

The complete two-level clustering algorithm improves the repeated node aggregation algorithm (Algorithm 3) by breaking up modules of sub-optimally aggregated nodes. That is, once the algorithm assigns nodes to the same module, they are forced to move jointly when Infomap rebuilds the network, and what was an optimal move early in the algorithm might have the opposite effect later in the algorithm. Similarly, two or more modules that merge together and form a single module when the algorithm rebuilds the network can never be separated again in the repeated node aggregation algorithm. Therefore, the accuracy can be improved by breaking up modules of the final state of the repeated node aggregation algorithm and trying to move individual nodes or smaller submodules into new or neighboring modules. The two-level clustering algorithm (Algorithm 2) performs this refinement by iterating fine-tuning (Algorithm 5) and coarse-tuning (Algorithm 6).

4.2. Repeated Node Aggregation

The repeated node aggregation in Algorithm 3 follows closely the machinery of the Louvain method [34]. While the Louvain method seeks to maximize the modularity score, which fundamentally operates on link density rather than network flows, its repeated node aggregation machinery is effective also for optimizing other objectives than the modularity score.
Infomap aggregates neighboring nodes into modules, subsequently aggregates them into larger modules, and repeats. First, Infomap assigns each node to its own module. Then, in random sequential order, it tries to move each node to the neighboring or a new module that gives the largest code length decrease (Algorithm 4). If no move gives a code length decrease, the node stays in its original module. The change of code length for each possible move can be calculated locally, based only on the change in enter and exit flows, the current flow in the moving node and affected modules, and the initial enter flow for the module. For state nodes, the physical node visit rates can be locally updated.
Infomap repeats this procedure, each time in a new random sequential order, until no move happens. Then Infomap rebuilds the network with the modules of the last level forming the nodes at the new level, and, exactly as in the previous level, aggregates the nodes into modules, which replace the existing ones. Infomap repeats this hierarchical network rebuilding until the map equation cannot be further reduced.
The computational complexity of the repeated node aggregation is considered to be $O(N \log N)$ for the Louvain method applied to networks with $N$ nodes [34]. While Infomap optimizes the map equation with some more costly logarithms, there is no fundamental difference in the scaling. However, for sparse memory networks, the scaling applies to the number of state nodes. In any case, the first aggregation step is the most expensive because Infomap considers each node and all its links, such that the complexity scales with the number of links in the network. To further optimize the map equation, Infomap performs stochastic tuning steps with more network-dependent computational complexity.
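To make the local-move machinery concrete, here is a deliberately simplified Python sketch of one sweep of greedy moves and of the network rebuilding step (our own illustration with assumed input formats; Infomap updates the code length locally from enter and exit flows instead of re-evaluating the objective from scratch, which is what makes it fast):

import random
from collections import defaultdict

def local_move_pass(links, module, objective):
    """One sweep of the greedy local-move step in Algorithms 3 and 4, simplified.
    links maps (i, j) -> weight, module maps node -> integer module label, and
    objective(module) returns the quantity to minimize, for example the two-level
    map equation sketched in Section 3.1."""
    neighbors = defaultdict(set)
    for i, j in links:
        neighbors[i].add(j)
        neighbors[j].add(i)
    best = objective(module)
    nodes = list(module)
    random.shuffle(nodes)                         # random sequential order
    for i in nodes:
        keep = module[i]
        candidates = {module[j] for j in neighbors[i]} | {max(module.values()) + 1}
        for m in candidates:                      # try neighboring and new modules
            module[i] = m
            value = objective(module)
            if value < best:                      # keep the largest decrease
                best, keep = value, m
        module[i] = keep
    return module, best

def aggregate_network(links, module):
    """Rebuild the network for the next level: modules become nodes, and link
    weights within and between modules are summed (Algorithm 3)."""
    new_links = defaultdict(float)
    for (i, j), w in links.items():
        new_links[module[i], module[j]] += w
    return dict(new_links)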
Algorithm 3 Repeated node aggregation
function repeatedNodeAggregation(Network G, Node-to-module map M)
    Let L = L(M) be the code length for the map M with node-to-module assignments
    Let ϵ be a small threshold
    Let I_max^L be a maximum number of level iterations
    if M is empty then
        M ← one module per node in G
    L_old ← L(M)
    I^L ← 0
    repeat
        I^L ← I^L + 1
        N ← number of nodes in G
        M ← optimize(G, M) (Algorithm 4)
        M ← number of modules in M
        L ← L(M)
        δL ← L − L_old
        L_old ← L
        if M = 1 or M = N then
            break
        G ← aggregate nodes within modules and links between modules to a new network
        M ← one module per node in G
    until δL > −ϵ or I^L = I_max^L
    return M
Algorithm 4 Optimize network
function optimize(Network G, Node-to-module map M)
    Let I_max^M be a maximum number of iterations
    I^M ← 0
    repeat
        I^M ← I^M + 1
        for node n_i ∈ G in random order do
            for link l_ij to neighboring node n_j in module m_j do
                Calculate change in enter and exit flows for modules m_i and m_j if moving node n_i to module m_j
                For state nodes, calculate change in physical node visit rates (Equation (25))
            Calculate change in enter and exit flow if moving to a new module
            Calculate change in code length δL_ij for each possible move of node n_i to module m_j
            if min_j δL_ij < −ϵ and m_j ≠ m_i then
                Update M with m_i ← m_j
                Update enter, exit and node flow in modules m_i and m_j
                L ← L + δL_ij
        δL ← L − L_old
        L_old ← L
    until δL > −ϵ or I^M = I_max^M
    return M

4.3. Fine-Tuning

In the fine-tuning step, Infomap tries to move single nodes out from existing modules into new or neighboring modules (Algorithm 5). First, Infomap reassigns each node to be the sole member of its own module to allow for single-node movements. Then it moves all nodes back to their respective modules of the previous step. At this stage, with the same clustering as in the previous step but with each node free to move between the modules, Infomap reapplies the repeated node aggregation (Algorithm 3).
Algorithm 5 Fine-tuning
function FineTune(Network G, Node-to-module map M)
    M̃ ← one module per node in G
    Update M̃ to mirror M
    M ← repeatedNodeAggregation(G, M̃) (Algorithm 3)
    return M

4.4. Coarse-Tuning

In the coarse-tuning step, Infomap tries to move submodules out from existing modules into new or neighboring modules (Algorithm 6). First, Infomap treats each module as a network on its own and applies repeated node aggregation (Algorithm 3) to this network. This procedure generates one or more submodules for each module. Infomap then replaces the modules by the submodules and moves the submodules into their modules as an initial solution. At this stage, with the same clustering as in the previous step but with each submodule free to move between the modules, Infomap reapplies the repeated node aggregation (Algorithm 3).

4.5. Multilevel Clustering

Infomap identifies multilevel clusterings by extending two-level clusterings both by iteratively finding supermodules of modules and by recursively finding submodules of modules. To find the optimal multilevel solution, Infomap first tries to find an optimal two-level clustering, then iteratively tries to find supermodules and then recursively tries to cluster those modules until the code length cannot be further improved (Algorithm 1).
Algorithm 6 Coarse-tuning
function CoarseTune(Network G, Optional node-to-module map M)
    for module m_i in M do
        G_i ← network of nodes and links within m_i
        Sub-map M_i ← repeatedNodeAggregation(G_i) (Algorithm 3)
    G̃ ← network of submodules M_i
    M̃ ← map of nodes in G̃ to M
    M̃ ← repeatedNodeAggregation(G̃, M̃) (Algorithm 3)
    M ← node-to-module map on G such that M(G) corresponds to M̃(G̃)
    return M
To identify a hierarchy of supermodules from a two-level clustering, Infomap first tries to find a shorter description of flows between modules. It iteratively runs the two-level clustering algorithm (Algorithm 2) on a network with one node for each module at the coarsest level, node-visit rates from the module-entry rates, and links that describe aggregated flows between the modules (Algorithm 7). For each such step, a two-level code book replaces the coarsest code book. In this way, describing entries into, within, and out from supermodules at a new coarsest level replaces only describing entries into the previously coarsest modules.
For a fast multilevel clustering, Infomap can keep this hierarchy of supermodules and try to recursively cluster the bottom modules into submodules with Algorithm 8. However, as this hierarchy of supermodules is constrained by the initial optimal two-level clustering, discarding everything but the top supermodules and starting the recursive submodule clustering from them often identifies a better multilevel clustering. To find submodules, Infomap runs the two-level clustering (Algorithm 2) followed by the super-module clustering (Algorithm 7) algorithm on the network within each module to find the largest submodule structures. If Infomap finds non-trivial submodules (that is, not one submodule for all nodes and not one submodule for each node), it replaces the module with the submodules and collects the submodules in a queue to explore each submodule in parallel for each hierarchical level. In this way, Infomap can explore the submodule hierarchy in parallel with a breadth-first search to efficiently find a multilevel clustering of the input network.
Algorithm 7 Repeated supermodule clustering
function repeatedSuperModules(Network G, Node-to-module map M)
    Let L = L(M) be the code length for the map M with node-to-module assignments
    Let L̂(M) be the index code length of M
    Let ϵ be a small threshold
    L̂_old ← L̂(M)
    repeat
        Ĝ ← a new network with one node for each module at the coarsest level of M, node-visit rates from the module-entry rates, and links that describe aggregated flows between the modules
        N̂ ← number of nodes in Ĝ
        M̂ ← partition(Ĝ) (Algorithm 2)
        M̂ ← number of modules in M̂
        δL ← L(M̂) − L̂_old
        L̂_old ← L̂(M̂)
        if M̂ = 1 or M̂ = N̂ then
            break
        M ← a multilevel map composed of M̂ and M
    until δL > −ϵ
    return M
Algorithm 8 Repeated submodule clustering
function repeatedSubModules(Network G, Multilevel node-to-module map M)
    Q ← top modules in M
    Remove submodules in M if they exist
    while Q is not empty do
        R ← empty queue to collect submodules
        for module m_i in Q (in parallel) do
            G_i ← network of nodes and links within m_i
            N ← number of nodes in G_i
            M_i ← partition(G_i) (Algorithm 2)
            M_i ← repeatedSuperModules(G_i, M_i) (Algorithm 7)
            M̂ ← number of top modules in M_i
            if M̂ ≠ 1 and M̂ ≠ N then
                Replace m_i with the top modules in M_i
                for each top module m̂ in M_i do
                    Add m̂ to queue R
        Q ← R
    return M

4.6. Download Infomap

Infomap can be downloaded from www.mapequation.org, which contains installation instructions, a complete list of running options, and examples for how to run Infomap stand-alone or with other software, including igraph, Python, Jupyter, and R. In the Appendix, we provide Infomap’s basic syntax (Appendix A.1) and examples for how to cluster a multilayer network (Appendix A.2), a memory network (Appendix A.3), and a sparse memory network (Appendix A.4). The website also contains interactive storyboards to explain the mechanics of the map equation, and interactive visualizations to simplify large-scale networks and their changes over time.

4.7. Mapping Multistep and Multi-Source Data with Infomap

We illustrate our approach by mapping network flows from air traffic data between more than 400 airports in the US [35]. For quarterly reported itineraries in 2011, we assembled pathways into four quarterly, two half-year, and one full-year collection. From pathways of length two, three, and four, for each collection we generated sparse memory networks corresponding to first-, second-, and third-order Markov models of flows. Then we generated sparse memory networks corresponding to multilayer networks with four layers from the quarterly collections and two layers from the half-year collections for all Markov orders. This example illustrates how sparse memory networks can go beyond standard representations and represent combinations of multilayer and memory networks.
To identify communities in multistep and multi-source air traffic data represented with sparse memory networks, we run Infomap ten times and report the median values in Table 1. Except for a small drop in the number of physical nodes for the third-order models, because some airports are not represented in long itineraries, the number of physical nodes remains the same for all models. However, the number of state nodes and links between them increases with the order. Larger systems require lumping state nodes, based on minimal information loss, into more efficient sparse memory networks that correspond to variable-order Markov models. Such more compact descriptions of higher-order network flows can balance over- and underfitting as well as cut the computational time [18]. In any case, higher-order representations can better capture constraints on flows. For example, a drop in the entropy rate by one bit corresponds to doubled flow prediction accuracy, and the four-bit drop between the first- and third-order models corresponds to a sixteenfold higher flow prediction accuracy. These flows move with relatively long persistence times among groups of airports that form connected legs in frequent itineraries. Infomap capitalizes on this modular flow pattern and identifies pervasively overlapping modules.
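To make the stated factors concrete (our own arithmetic, not an additional result): an entropy rate of $H$ bits corresponds to $2^{H}$ effectively equally likely next steps for a passenger, so a one-bit drop halves this number, and a four-bit drop between the first- and third-order models, with entropy rates $H_1$ and $H_3$, reduces it by
$$\frac{2^{H_1}}{2^{H_3}} = 2^{H_1 - H_3} = 2^{4} = 16,$$
the sixteenfold gain in flow prediction quoted above.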

5. Conclusions

The map equation applied to sparse memory networks provides a general solution to reveal modular patterns in higher-order flows through complex systems. Rather than requiring multiple community-detection algorithms for a range of network representations, the map equation's search algorithm Infomap can be applied to general higher-order network flows described by sparse memory networks. A sparse memory network uses abstract state nodes to describe higher-order dynamics and physical nodes to represent a system's objects. This distinction makes all the difference. The flexible sparse memory network can efficiently describe higher-order network flows of various types, such as flows in memory and multilayer networks or their combinations. Simplifying and highlighting important patterns in these flows with the map equation and Infomap opens up for more effective analysis of complex systems.

Acknowledgments

We are grateful to Christian Persson and Manlio De Domenico for helpful discussions. Martin Rosvall was supported by the Swedish Research Council grant 2016-00796.

Author Contributions

D.E., L.B., and M.R. conceived and designed the experiments; D.E. performed the experiments. D.E., L.B., and M.R. analyzed the data and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Running Infomap

Appendix A.1. Infomap Syntax

To run Infomap on a network, use the command
./Infomap [options] network_data dest
The argument network_data should point to a valid network file and dest to a directory where Infomap should write the output files. If no option is given, Infomap will assume an undirected network and try to cluster it hierarchically. Optional arguments, including directed or undirected links, two-level or multilevel solutions, or various input formats, can be put anywhere. Run ./Infomap --help for a list and explanation of supported arguments.
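For example, the command-line call can be scripted from Python (an illustrative wrapper of our own; it only uses the syntax and options documented in this appendix, here with the sparse memory network file fig3c.net from Appendix A.4):

import subprocess
from pathlib import Path

dest = Path("output")
dest.mkdir(exist_ok=True)

# Equivalent to: ./Infomap --input-format sparse fig3c.net output
subprocess.run(["./Infomap", "--input-format", "sparse", "fig3c.net", str(dest)],
               check=True)

# Infomap writes its clustering results, such as the tree output shown in
# Appendix A.5, to the destination directory
print(sorted(p.name for p in dest.iterdir()))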

Appendix A.2. Clustering a Multilayer Network

The multilayer network in Figure 4a can be described with the multilayer network format in fig3a.net below and clustered for relax rate r = 0.4 with the command
./Infomap --input-format multilayer --multilayer-relax-rate 0.4 fig3a.net .
See Appendix A.5 for the overlapping clustering output, and Appendix A.4 for an alternative representation with a sparse memory network. In fact, for r = 0.4, Infomap internally represents the multilayer network in fig3a.net with the sparse memory network in Appendix A.4, with transition rate r/2 between state nodes in different layers, since the other half of the relaxed flow, r/2, stays among state nodes in the same layer in this symmetric two-layer network.
# fig3a.net - Multilayer network
# Lines starting with # are ignored
*Vertices 5
#physicalId name
1 "i"
2 "j"
3 "k"
4 "l"
5 "m"
*Intra
#layerId physicalId physicalId weight
1 1 4 1
1 4 1 1
1 1 5 1
1 5 1 1
1 4 5 1
1 5 4 1
2 1 2 1
2 2 1 1
2 1 3 1
2 3 1 1
2 2 3 1
2 3 2 1
The inter-layer links will be generated based on the multilayer relax rate, but could also be modeled explicitly with an *Inter section as below
*Inter
#layerId physicalId layerId weight
1 1 2 1
2 1 1 1

Appendix A.3. Clustering a Memory Network

The memory network in Figure 4b, with transition rates corresponding to relax rate r = 0.4 for the multilayer network in Figure 4a, can be described with the memory network format in fig3b.net below and clustered with the command
./Infomap --input-format memory fig3b.net .
See Appendix A.5 for the overlapping clustering output, and Appendix A.4 for an alternative representation with a sparse memory network.
# fig3b.net - Memory network
# Lines starting with # are ignored
*Vertices 5
#physicalId name
1 "i"
2 "j"
3 "k"
4 "l"
5 "m"
*3grams
#from through to weight
2 1 2 0.8
2 1 3 0.8
2 1 4 0.2
2 1 5 0.2
3 1 2 0.8
3 1 3 0.8
3 1 4 0.2
3 1 5 0.2
1 2 1 1
1 2 3 1
3 2 1 1
3 2 3 1
2 3 1 1
2 3 2 1
1 3 1 1
1 3 2 1
4 1 4 0.8
4 1 5 0.8
4 1 2 0.2
4 1 3 0.2
5 1 4 0.8
5 1 5 0.8
5 1 2 0.2
5 1 3 0.2
1 4 1 1
1 4 5 1
5 4 1 1
5 4 5 1
1 5 1 1
1 5 4 1
4 5 1 1
4 5 4 1

Appendix A.4. Clustering a Sparse Memory Network

The sparse memory network in Figure 4c, with transition rates corresponding to relax rate r = 0.4 for the multilayer network in Figure 4a, can be described with the sparse memory network format in fig3c.net below and clustered with the command
./Infomap --input-format sparse fig3c.net .
See Appendix A.5 for the overlapping clustering output.
# fig3c.net - Sparse memory network
# Lines starting with # are ignored
*Vertices 5
#physicalId name
1 "i"
2 "j"
3 "k"
4 "l"
5 "m"
*States
#stateId physicalId name
1 1 "alpha~_i"
2 2 "beta~_j"
3 3 "gamma~_k"
4 1 "delta~_i"
5 4 "epsilon~_l"
6 5 "zeta~_m"
*Links
#sourceStateId targetStateId weight
1 2 0.8
1 3 0.8
1 5 0.2
1 6 0.2
2 1 1
2 3 1
3 1 1
3 2 1
4 5 0.8
4 6 0.8
4 2 0.2
4 3 0.2
5 4 1
5 6 1
6 4 1
6 5 1
          
For reference, the memory network in Appendix A.3 can also be represented with the sparse memory network format in fig3b_sparse.net below and clustered with the command
./Infomap --input-format sparse fig3b_sparse.net .
See Appendix A.5 for the overlapping clustering output.
# fig3b_sparse.net - state network
# Lines starting with # are ignored
*Vertices 5
#physicalId name
1 "i"
2 "j"
3 "k"
4 "l"
5 "m"
*States
#stateId physicalId name
1 1 "alpha_i"
2 1 "beta_i"
3 2 "gamma_j"
4 2 "delta_j"
5 3 "epsilon_k"
6 3 "zeta_k"
7 1 "eta_i"
8 1 "theta_i"
9 4 "iota_l"
10 4 "kappa_l"
11 5 "lambda_m"
12 5 "mu_m"
*Links
#sourceStateId targetStateId weight
1 3 0.8
1 6 0.8
1 9 0.2
1 12 0.2
2 3 0.8
2 6 0.8
2 9 0.2
2 12 0.2
3 1 1
3 5 1
4 1 1
4 5 1
5 2 1
5 4 1
6 2 1
6 4 1
7 9 0.8
7 12 0.8
7 3 0.2
7 6 0.2
8 9 0.8
8 12 0.8
8 3 0.2
8 6 0.2
9 7 1
9 11 1
10 7 1
10 11 1
11 8 1
11 10 1
12 8 1
12 10 1

Appendix A.5. Clustering Output

All examples in Appendix A.2, Appendix A.3 and Appendix A.4 give the same overlapping physical node clustering below, where 1:1 0.166667 "i" 1 says, reading from right to left, that the physical node with id 1 is called i and has visit rate 0.166667 as the first node in the first module.
# A tree output from running Infomap on the example networks in Figure 3
# path flow name physicalId:
1:1 0.166667 "i" 1
1:2 0.166667 "j" 2
1:3 0.166667 "k" 3
2:1 0.166667 "i" 1
2:2 0.166667 "l" 4
2:3 0.166667 "m" 5
With the optional argument --expanded, Infomap gives the clustering of each state node (not shown).

References and Notes

  1. Brin, S.; Page, L. The anatomy of a large-scale hypertextual Web search engine. Comput. Netw. ISDN 1998, 30, 107–117. [Google Scholar] [CrossRef]
  2. Newman, M.E. A measure of betweenness centrality based on random walks. Soc. Netw. 2005, 27, 39–54. [Google Scholar] [CrossRef]
  3. Lordan, O.; Florido, J.; Sallan, J.M.; Fernandez, V.; Simo, P.; Gonzalez-Prieto, D. Study of the robustness of the European air routes network. In LISS 2014; Springer: Berlin, Germany, 2015; pp. 195–200. [Google Scholar]
  4. Rosvall, M.; Esquivel, A.V.; Lancichinetti, A.; West, J.D.; Lambiotte, R. Memory in network flows and its effects on spreading dynamics and community detection. Nat. Commun. 2014, 5, 4630. [Google Scholar] [CrossRef] [PubMed]
  5. Belik, V.; Geisel, T.; Brockmann, D. Natural human mobility patterns and spatial spread of infectious diseases. Phys. Rev. X 2011, 1. [Google Scholar] [CrossRef]
  6. Pfitzner, R.; Scholtes, I.; Garas, A.; Tessone, C.J.; Schweitzer, F. Betweenness preference: Quantifying correlations in the topological dynamics of temporal networks. Phys. Rev. Lett. 2013, 110. [Google Scholar] [CrossRef]
  7. Poletto, C.; Tizzoni, M.; Colizza, V. Human mobility and time spent at destination: Impact on spatial epidemic spreading. J. Theor. Biol. 2013, 338, 41–58. [Google Scholar] [CrossRef] [PubMed]
  8. Mucha, P.J.; Richardson, T.; Macon, K.; Porter, M.A.; Onnela, J.P. Community structure in time-dependent, multiscale, and multiplex networks. Science 2010, 328, 876–878. [Google Scholar] [CrossRef] [PubMed]
  9. Kivelä, M.; Arenas, A.; Barthelemy, M.; Gleeson, J.P.; Moreno, Y.; Porter, M.A. Multilayer Networks. J. Complex Netw. 2014, 2, 203–271. [Google Scholar]
  10. Boccaletti, S.; Bianconi, G.; Criado, R.; Del Genio, C.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M. The structure and dynamics of multilayer networks. Phys. Rep. 2014, 544, 1–122. [Google Scholar] [CrossRef]
  11. De Domenico, M.; Lancichinetti, A.; Arenas, A.; Rosvall, M. Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems. Phys. Rev. X 2015, 5. [Google Scholar] [CrossRef]
  12. Ahn, Y.; Bagrow, J.; Lehmann, S. Link communities reveal multiscale complexity in networks. Nature 2010, 466, 761–764. [Google Scholar] [CrossRef] [PubMed]
  13. Peixoto, T.P.; Rosvall, M. Modeling sequences and temporal networks with dynamic community structures. arXiv, 2017; arXiv:1509.04740. [Google Scholar]
  14. Xu, J.; Wickramarathne, T.L.; Chawla, N.V. Representing higher-order dependencies in networks. Sci. Adv. 2016, 2, e1600028. [Google Scholar] [CrossRef] [PubMed]
  15. Scholtes, I. When is a network a network? Multi-order graphical model selection in pathways and temporal networks. arXiv, 2017; arXiv:1702.05499. [Google Scholar]
  16. De Domenico, M.; Solé-Ribalta, A.; Cozzo, E.; Kivelä, M.; Moreno, Y.; Porter, M.A.; Gómez, S.; Arenas, A. Mathematical formulation of multilayer networks. Phys. Rev. X 2013, 3. [Google Scholar] [CrossRef]
  17. Wehmuth, K.; Fleury, É.; Ziviani, A. MultiAspect Graphs: Algebraic representation and algorithms. Algorithms 2017, 10, 1. [Google Scholar] [CrossRef]
  18. Persson, C.; Bohlin, L.; Edler, D.; Rosvall, M. Maps of sparse Markov chains efficiently reveal community structure in network flows with memory. arXiv, 2016; arXiv:1606.08328. [Google Scholar]
  19. Barabási, A.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [Google Scholar] [PubMed]
  20. Barrat, A.; Barthelemy, M.; Pastor-Satorras, R.; Vespignani, A. The architecture of complex weighted networks. Proc. Natl. Acad. Sci. USA 2004, 101, 3747–3752. [Google Scholar] [CrossRef] [PubMed]
  21. Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.U. Complex networks: Structure and dynamics. Phys. Rep. 2006, 424, 175–308. [Google Scholar] [CrossRef]
  22. Lambiotte, R.; Rosvall, M. Ranking and clustering of nodes in networks with smart teleportation. Phys. Rev. E 2012, 85. [Google Scholar] [CrossRef] [PubMed]
  23. Song, C.; Qu, Z.; Blumm, N.; Barabási, A. Limits of predictability in human mobility. Science 2010, 327, 1018–1021. [Google Scholar] [CrossRef] [PubMed]
  24. Meiss, M.R.; Menczer, F.; Fortunato, S.; Flammini, A.; Vespignani, A. Ranking web sites with real user traffic. In Proceedings of the International Conference on Web Search and Web Data Mining, Palo Alto, CA, USA, 11–12 February 2008; pp. 65–76. [Google Scholar]
  25. Chierichetti, F.; Kumar, R.; Raghavan, P.; Sarlós, T. Are web users really Markovian? In Proceedings of the 21st International Conference on World Wide Web, Lyon, France, 16–20 April 2012; pp. 609–618. [Google Scholar]
  26. Singer, P.; Helic, D.; Taraghi, B.; Strohmaier, M. Memory and Structure in Human Navigation Patterns. arXiv, 2014; arXiv:1402.0790. [Google Scholar]
  27. Takaguchi, T.; Nakamura, M.; Sato, N.; Yano, K.; Masuda, N. Predictability of conversation partners. Phys. Rev. X 2011, 1. [Google Scholar] [CrossRef]
  28. Holme, P.; Saramäki, J. Temporal networks. Phys. Rep. 2012, 519, 97–125. [Google Scholar] [CrossRef]
  29. Rosvall, M.; Bergstrom, C. Maps of random walks on complex networks reveal community structure. Proc. Natl. Acad. Sci. USA 2008, 105, 1118–1123. [Google Scholar] [CrossRef] [PubMed]
  30. Shannon, C. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  31. Rosvall, M.; Bergstrom, C. Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems. PLoS ONE 2011, 6, e18209. [Google Scholar] [CrossRef] [PubMed]
  32. Kawamoto, T.; Rosvall, M. Estimating the resolution limit of the map equation in community detection. Phys. Rev. E 2015, 91. [Google Scholar] [CrossRef] [PubMed]
  33. Bae, S.H.; Howe, B. GossipMap: A distributed community detection algorithm for billion-edge directed graphs. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, Austin, TX, USA, 15–20 November 2015; p. 27. [Google Scholar]
  34. Blondel, V.D.; Guillaume, J.L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008. [Google Scholar] [CrossRef]
  35. We have compiled the network from the Airline Origin and Destination Survey (DB1B), which is a 10% sample of airline tickets from reporting carriers made public by the Research and Innovative Technology Administration (RITA). Data from 2011. Available online: transtats.bts.gov (accessed on 1 September 2017).
Figure 1. Going beyond standard network representations makes it possible to take advantage of richer interaction data. (a) Standard methods shoehorn interaction data about a complex system into an often unweighted and undirected network, which limits what regularities can be detected; (b) Modeling and mapping higher-order network flows can break this detectability limit; (c) System flows from two data sources between objects. The black arrow shows how flows can come to the center object; (d) The system represented as an undirected network with nodes for the objects. The links show where flows coming into the center node are constrained to go next; (e) The system represented as a memory network with physical nodes for the objects and state nodes for constraining flows along their links. The links show where flows coming into the center node are constrained to go next depending on where they come from; (f) The system represented as a multilayer network with physical nodes for the objects and state nodes in layers corresponding to different data sources. The links show where the flows coming into the center node are constrained to go next depending on which layer they are in.
Figure 2. Modeling higher-order network flows with sparse memory networks. (a) Multistep pathway data from two sources illustrated on a network with five physical nodes; (b) The pathway data modeled with a second-order Markov model on a memory network, where state nodes capture where flows come from; (c) The memory network represented as a multilayer network where layers, one for each physical node, capture where flows come from; (d) The pathway data modeled on a two-layer network, one layer for each data source; (e) Both memory and multilayer networks mapped on a sparse memory network with no redundant state nodes. The black link highlights the same step in all representations. See also the dynamic storyboard available on www.mapequation.org/apps/sparse-memory-network.
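To make the sparse memory network representation in Figure 2 concrete, the following minimal Python sketch builds a second-order memory network from weighted multistep pathways: each state node is a pair (previous, current) of physical nodes, so where flows go next depends on where they come from. The function memory_network and the toy pathways are illustrative assumptions, not Infomap's interface; see www.mapequation.org for the input formats Infomap actually accepts.

```python
from collections import defaultdict

def memory_network(pathways):
    """Build a second-order memory network from weighted pathways.

    Each state node is the pair (previous, current) of physical nodes,
    so where flows go next depends on where they come from (Figure 2b).
    Returns {state node: {next state node: weight}}.
    """
    links = defaultdict(lambda: defaultdict(float))
    for path, weight in pathways:
        for a, b, c in zip(path, path[1:], path[2:]):
            links[(a, b)][(b, c)] += weight
    return links

# Toy pathways over physical nodes 1-5, with observation weights.
pathways = [([1, 2, 3], 2.0), ([4, 2, 5], 2.0), ([1, 2, 5], 1.0)]
for state, out in memory_network(pathways).items():
    print(state, dict(out))
```

A sparse memory network takes one further step: state nodes of the same physical node with effectively identical out-link distributions are lumped together, which removes the redundant state nodes as illustrated in Figure 2e without changing the flow dynamics.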
Figure 3. Network flows at different modular levels. Large circles represent physical nodes, small circles represent state nodes, and dashed areas represent modules. (a) Finest modular level with physical nodes for first-order network flows; (b) Finest modular level with physical nodes and state nodes for higher-order network flows; (c) Intermediate level; (d) Coarsest modular level.
Figure 4. Mapping higher-order network flows. (a) A clustered multilayer network with relax rate r; (b) The same network flows in a clustered second-order memory network; (c) The same network flows in a clustered sparse memory network; (d) Sample of corresponding higher-order flow pathways; (e) Code length as a function of the transition rate for all representations as well as extended networks with virtual physical nodes.
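Figure 4a is based on relax-rate dynamics for multilayer networks (cf. [11]): a random walker at physical node i in layer α follows i's links within that layer with probability 1 - r, and with probability r it relaxes to i's links aggregated over all layers and moves to the layer of the chosen link. The sketch below illustrates these transition probabilities under that convention; the helper relax_transitions and the nested-dictionary link format W[layer][i][target] are assumptions made for the example, not Infomap's interface.

```python
def relax_transitions(W, i, alpha, r=0.25):
    """Out-transition probabilities from physical node i in layer alpha.

    With probability 1 - r the walker follows i's links within layer
    alpha; with probability r it relaxes and follows i's links in any
    layer, chosen proportionally to weight, ending up in that layer.
    Assumes i has at least one intralayer link in layer alpha.
    W[layer][i] maps target physical nodes to link weights.
    Returns {(target node, layer): probability}.
    """
    intra = W[alpha][i]
    s_intra = sum(intra.values())
    relaxed = {(j, beta): w
               for beta, layer in enumerate(W)
               for j, w in layer.get(i, {}).items()}
    s_relax = sum(relaxed.values())

    probs = {}
    for j, w in intra.items():
        probs[(j, alpha)] = probs.get((j, alpha), 0.0) + (1 - r) * w / s_intra
    for (j, beta), w in relaxed.items():
        probs[(j, beta)] = probs.get((j, beta), 0.0) + r * w / s_relax
    return probs

# Two toy layers over physical nodes 1-3; weights are flow volumes.
W = [{1: {2: 3.0, 3: 1.0}, 2: {1: 1.0}},   # layer 0
     {1: {3: 2.0}, 3: {1: 1.0}}]           # layer 1
print(relax_transitions(W, i=1, alpha=0, r=0.25))
```

At r = 0 the layers are completely decoupled, and as r approaches 1 the walker effectively moves on the aggregated links, the two extremes between which panel (e) traces the code length.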
Table 1. Mapping higher-order network flows from air traffic data. Sparse memory networks corresponding to Markov order m = 1, 2, and 3, in one layer with data from quarters 1+2+3+4, in two layers with data from quarters 1+2 and 3+4, and in four layers with data from quarters 1, 2, 3, and 4, respectively. We report the number of physical nodes N_P, state nodes N_S, and links N_L. The entropy rate H measures the number of bits required to predict where flows are going. D is the node-weighted average depth of nodes in the multilevel clustering. The perplexity P of the module-size distribution measures the effective number of modules, and the physical-node-weighted average of the perplexity of module assignments, A, measures the effective number of module assignments per physical node. Finally, the time corresponds to runs without parallelization on a 3.60 GHz desktop computer. We use relax rate r = 0.25, and all values are medians of ten runs with Infomap.
Quarters | m | l | N_P | N_S | N_L | H | D | P | A | Time
1+2+3+4 | 1 | 1 | 438 | 438 | 9681 | 5.1 | 2.0 | 1.0 | 1.0 | 0.016 s
1+2, 3+4 | 1 | 2 | 438 | 861 | 34,384 | 5.1 | 2.0 | 1.2 | 1.0 | 0.063 s
1, 2, 3, 4 | 1 | 4 | 438 | 1683 | 121,749 | 5.0 | 2.0 | 4.0 | 1.0 | 0.14 s
1+2+3+4 | 2 | 1 | 438 | 9681 | 181,326 | 3.8 | 2.4 | 13 | 3.7 | 1.7 s
1+2, 3+4 | 2 | 2 | 438 | 17,203 | 614,472 | 3.8 | 3.0 | 15 | 3.8 | 10 s
1, 2, 3, 4 | 2 | 4 | 438 | 30,489 | 2,014,650 | 3.7 | 3.0 | 13 | 3.3 | 27 s
1+2+3+4 | 3 | 1 | 432 | 180,900 | 465,456 | 1.1 | 4.0 | 25 | 5.7 | 8.2 min
1+2, 3+4 | 3 | 2 | 432 | 307,904 | 1,406,605 | 0.95 | 3.0 | 32 | 5.0 | 22 min
1, 2, 3, 4 | 3 | 4 | 432 | 507,054 | 4,112,089 | 0.91 | 3.0 | 19 | 3.8 | 52 min
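The summary quantities in the table follow standard information-theoretic definitions, which the minimal sketch below illustrates: the entropy rate H is the visit-rate-weighted entropy of the nodes' out-link distributions, and a perplexity is two raised to the entropy of a distribution, here applied to module sizes to give an effective number of modules. The function names and toy numbers are assumptions for illustration; the exact weighting behind the reported values may differ in detail.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a normalized distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def entropy_rate(out_distributions, visit_rates):
    """Entropy rate: visit-rate-weighted entropy of out-link distributions.

    out_distributions[i] is the normalized distribution over next nodes
    from node i; visit_rates[i] is node i's stationary visit rate.
    """
    return sum(visit_rates[i] * entropy(p) for i, p in out_distributions.items())

def perplexity(sizes):
    """Effective number of modules: 2**entropy of the module-size distribution."""
    total = sum(sizes)
    return 2 ** entropy([s / total for s in sizes])

# Toy example: four modules where one dominates the flow volume.
print(perplexity([0.7, 0.1, 0.1, 0.1]))  # about 2.6 effective modules
```

Lower entropy rates for higher Markov order in the table reflect that remembering where flows come from makes the next step easier to predict, and perplexities above one quantify how many modules, or module assignments per physical node, effectively matter.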
