Grand Canonical Ensembles of Sparse Networks and Bayesian Inference

Maximum entropy network ensembles have been very successful in modelling sparse network topologies and in solving challenging inference problems. However, the sparse maximum entropy network models proposed so far have a fixed number of nodes and are typically not exchangeable. Here we consider hierarchical models for exchangeable networks in the sparse limit, i.e., with the total number of links scaling linearly with the total number of nodes. The approach is grand canonical, i.e., the number of nodes of the network is not fixed a priori: it is finite but can be arbitrarily large. In this way the grand canonical network ensembles circumvent the difficulties in treating infinite sparse exchangeable networks, which according to the Aldous-Hoover theorem must vanish. The approach can treat networks with a given degree distribution or networks with a given distribution of latent variables. When only a subgraph induced by a subset of nodes is known, this model allows a Bayesian estimation of the network size and of the degree sequence (or the sequence of latent variables) of the entire network, which can be used for network reconstruction.


Introduction
Networks [1,2] have the ability to capture the topology of complex systems ranging from the brain to financial networks. Network models are key to obtaining reliable, unbiased null models of a network and to explaining emergent phenomena of network evolution. Network models can be classified into two major classes: equilibrium maximum entropy models [3-15] and growing network models [1,16-18]. While growing network models have a number of nodes that increases in time, maximum entropy models have so far been used only to treat networks with a given number of nodes N. In this paper we are interested in extending the realm of maximum entropy network models to networks of varying network size N.
Maximum entropy network ensembles are the least biased ensembles satisfying a given set of constraints. As such, maximum entropy ensembles are widely used as null models and for network reconstruction starting from features associated with the nodes of the network. Given the profound relation between information theory and statistical mechanics [19,20], maximum entropy network ensembles can be distinguished into microcanonical ensembles and canonical ensembles [3,21,22], similarly to the analogous distinction traditionally introduced in statistical mechanics for ensembles of particles. Microcanonical network ensembles are ensembles of networks of N nodes satisfying some hard constraints (such as a given total number of links or a given degree sequence). Canonical network ensembles are instead ensembles of networks of N nodes satisfying some soft constraints (such as a given expected total number of links or a given expected degree sequence). The canonical ensembles with given expected degree sequence can also be formulated as latent variable models where the latent variables are associated with the nodes [5,23].
Maximum entropy models have been very successful in solving challenging inference problems [6,8,24-26]; however, they have the limitation that they only treat networks with a given fixed number of nodes N. Indeed, in several scenarios the number of nodes might not be fixed or might not be known. In this context an important problem is to compare networks of different sizes. For instance, in brain imaging one might choose a finer or a coarser grid of brain regions, and an outstanding problem in machine learning is how to build neural networks that generalize well when tested on network data of different size than the network data in the training set [27,28].
In order to have network ensembles that can treat networks of different sizes, here we introduce the grand canonical network ensembles, in which the number of nodes can vary. A well-defined grand-canonical network ensemble necessarily needs to be exchangeable [29], i.e., it needs to be invariant under permutation of the labels of the nodes of the network, so that removing or adding a node has an effect that is independent of the particular choice of the node added or removed.
The research on exchangeable networks is currently very vibrant. The graphon model [30] is the most well established exchangeable network model. However, this model is dense, i.e., the number of links scales quadratically with the number of nodes, while the vast majority of network data is sparse, with a total number of links scaling linearly with the network size. In other words, most real-world networks have constant average degree. Moreover, popular models for sparse networks such as the configuration model [31] and exponential random graphs [4] are not exchangeable. In fact these models treat networks of labelled nodes with given degree sequence or with given expected degree sequence. Therefore the network ensemble is not invariant under permutation of the node labels, except if all the degrees or all the expected degrees of the network are the same (for a more extended discussion of why these networks are not exchangeable see ref. [32]). Several works have proposed exchangeable network models in the regime where the average degree of the network diverges sublinearly with the network size [33-38]. Only recently, in ref. [32], a framework able to model sparse exchangeable networks in the limit of constant average degree has been proposed. The model is very general and has been extended to treat generalized network structures including multiplex networks [39] and simplicial complexes [40]. However, the model is well defined only for networks of large but finite number of nodes N, as exchangeable sparse networks need to obey the Aldous-Hoover theorem [41,42], according to which infinite sparse exchangeable networks must vanish. An alternative strategy for formulating exchangeable ensembles is to consider ensembles of unlabelled networks, for which several results are already available [43].
Here we build on the recently proposed exchangeable sparse network ensembles [32] to formulate hierarchical grand-canonical ensembles of sparse networks. The proposed grand-canonical ensembles are hierarchical models [25,44] with variable number of nodes N and with given degree distribution or, alternatively, given latent variable distribution. The grand canonical approach provides a way to circumvent the limitations imposed by the Aldous-Hoover theorem because in this framework one considers a mixture of network ensembles with finite but unspecified and arbitrarily large network sizes. In this paper we define the grand-canonical ensembles and we characterize them with statistical mechanics methods, evaluating their entropy and the marginal probability of a link, and proposing generative algorithms to sample networks from these ensembles. [Note that the proposed grand canonical ensembles differ from the ensembles proposed in refs. [45,46]: in our case we consider networks with an undetermined number of nodes, while in refs. [45,46] it is the total sum of the weights of weighted networks that is allowed to vary. From the statistical mechanics perspective our approach is fully classical, while in refs. [45,46] network ensembles are treated as quantum mechanical ensembles where the particles are associated with the links of the network and the adjacency matrix elements play the role of occupation numbers.]
Finally, we use the grand-canonical network ensembles to solve an inference problem. We consider a scenario in which the entire network has an unknown number of nodes, and we only have access to a subgraph induced by a subset of its nodes. Under this hypothesis we use the grand-canonical network models to perform a Bayesian estimation of the true parameters of the network model (given by the network size and the degree sequence or the sequence of latent variables). This a posteriori estimate of the parameters can then be used to reconstruct the unknown part of the network.

The Grand Canonical Network Ensemble with Given Degree Distribution
We consider the hierarchical grand canonical ensemble of exchangeable sparse simple networks in which we associate to every network G = (V, E) with N = |V| > N_0 nodes the probability

$$P(G)=P(N)\,P(\mathbf{k}|N)\,P(G|\mathbf{k},N),$$

where P(N) indicates the probability that the network G has N nodes, P(k|N) indicates the conditional probability that the network has degree sequence k given that the network has N nodes, and P(G|k, N) indicates the probability of the network G with adjacency matrix a given that the network has N nodes and degree sequence k (see Figure 1 for a schematic representation of the model). To be specific, the hierarchical grand canonical ensemble of exchangeable simple networks is constructed as follows.

(1) Drawing the total number of nodes N of the network. Let us discuss suitable choices for the distribution of the number of nodes N, with N greater than or equal to some minimum number of nodes N_0. We indicate the distribution of the number of nodes as P(N) = π(N). While a statistical mechanics approach would suggest taking a distribution π(N) with a well defined mean value, such as the exponential distribution

$$\pi(N)=C\,e^{-\mu N},\tag{3}$$

where C is a normalization constant and µ > 0, in the context of network science it might actually be relevant to consider also broad distributions π(N), such as power-law distributions

$$\pi(N)=D\,N^{-\nu},\tag{4}$$

where D is a normalization constant and ν > 1.

(2) Drawing the degree sequence of the network. In order to obtain a sparse exchangeable network ensemble with given degree distribution p(k) having finite average degree ⟨k⟩ = ∑_k k p(k), minimum allowed degree m and maximum allowed degree K, we consider the following expression for the probability of a given degree sequence given the total number of nodes,

$$P(\mathbf{k}|N)=\frac{1}{\hat{Z}_N}\prod_{i=1}^{N}\left[p(k_i)\,\hat{\theta}(K-k_i)\,\hat{\theta}(k_i-m)\right]\delta\!\left(\sum_{i=1}^{N}k_i,\ \langle k\rangle N\right),$$

where Ẑ_N is a normalization constant, δ(x, y) indicates the Kronecker delta, and θ̂(x) indicates the Heaviside function, with θ̂(x) = 1 if x ≥ 0 and θ̂(x) = 0 otherwise. In the following we will indicate with L the total number of links of the network, given by L = ⟨k⟩N/2. Note that P(k|N) is independent of the labels of the nodes, i.e., all the degree sequences that can be obtained by a permutation of the node labels of a given degree sequence have the same probability P(k|N).

(3) Drawing the adjacency matrix of the network. The probability of a network G with adjacency matrix a given the total number of nodes N of the network and the degree sequence k is chosen in the least biased way by drawing the network from a uniform distribution, i.e., the conditional probability P(G|k, N) is equivalent to the probability of a network in the microcanonical ensemble. Therefore, indicating with N(k|N) the total number of networks with N nodes and degree sequence k and with Σ_N(k) = ln N(k|N) the entropy of the ensemble, we can express P(G|k, N) as

$$P(G|\mathbf{k},N)=e^{-\Sigma_N(\mathbf{k})}=\frac{1}{\mathcal{N}(\mathbf{k}|N)}.$$

Note that for sparse networks of N ≥ N_0 nodes the entropy Σ_N(k) obeys the Bender-Canfield formula

$$\Sigma_N(\mathbf{k})\simeq\ln\left[\frac{(2L)!!}{\prod_{i=1}^{N}k_i!}\right]-\left(\frac{\langle k^2\rangle}{2\langle k\rangle}\right)^{2},\tag{7}$$

as long as the network has a structural cutoff K_S, i.e., as long as k_i ≤ K_S = (⟨k⟩N)^{1/2} for every node i [3,21,22,47], where in Equation (7) we indicate with k = {k_1, k_2, . . . , k_N} the degree sequence, with k_i, the degree of node i, given by k_i = ∑_{j=1}^{N} a_{ij}, and with ⟨k^n⟩ = (1/N)∑_{i=1}^{N} k_i^n the moments of the degree sequence. It follows that the hierarchical grand canonical ensemble of exchangeable sparse networks can be cast into a Hamiltonian ensemble with probability P(G) given by

$$P(G)=e^{-H(G)},\tag{8}$$

with Hamiltonian H(G) given by

$$H(G)=-\ln\pi(N)-\ln P(\mathbf{k}|N)+\Sigma_N(\mathbf{k}).\tag{9}$$

This Hamiltonian is global and is invariant under permutation of the node labels; therefore this hierarchical grand canonical ensemble is exchangeable.
Indeed, the probability of a network P(G) given by Equation (8) obeys

$$P(G)=P(\tilde{G}),$$

where G̃ is any network obtained from network G under a generic permutation σ of the labels of the nodes. Moreover, we note that for π(N) = δ(N, N̄), i.e., when the network size is fixed to N̄, this model reduces to the exchangeable model for sparse network ensembles proposed in ref. [32].
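To make the first two hierarchical steps concrete, the following minimal Python sketch draws a network size N from a truncated exponential prior and then an admissible degree sequence from a given p(k). This is a sketch under stated assumptions: the function names and parameter values are ours, and for simplicity we only enforce that the total degree is even, whereas the ensemble defined above additionally constrains the total number of links to L = ⟨k⟩N/2.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_N(mu=0.01, N0=50, Nmax=5000):
    """Draw the network size N from the exponential prior
    pi(N) = C exp(-mu N) for N >= N0 (truncated at Nmax for sampling)."""
    Ns = np.arange(N0, Nmax + 1)
    w = np.exp(-mu * Ns)
    return int(rng.choice(Ns, p=w / w.sum()))

def draw_degree_sequence(N, p_k, m=1, K=20):
    """Draw N i.i.d. degrees from p(k) restricted to m <= k <= K,
    resampling until the total degree is even; the ensemble of the main
    text additionally fixes sum_i k_i = <k> N."""
    ks = np.arange(len(p_k))
    w = np.where((ks >= m) & (ks <= K), p_k, 0.0)
    w = w / w.sum()
    while True:
        k = rng.choice(ks, size=N, p=w)
        if k.sum() % 2 == 0:
            return k

# Example: exponential degree distribution p(k) proportional to e^{-k/2}
p_k = np.exp(-np.arange(21) / 2.0)
p_k /= p_k.sum()
N = draw_N()
k = draw_degree_sequence(N, p_k, m=1, K=20)
print(N, k.mean())
```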

The Grand Canonical Network Ensemble with Given Distribution of the Latent Variables
The grand canonical formalism can also be easily extended to treat network models with latent variables θ associated with the nodes of the network G = (V, E). Note that here and in the following we assume that the latent variables take discrete values. To this end we can consider the soft grand canonical hierarchical model associating to each network with N = |V| > N_0 nodes, latent variables θ and adjacency matrix a the probability

$$P(G)=P(N)\,P(\boldsymbol{\theta}|N)\,P(G|\boldsymbol{\theta},N),$$

with

$$P(N)=\pi(N),$$

where π(N) is an arbitrary prior on the number of nodes in the network defined for N ≥ N_0. Typical examples of the distribution π(N) are given by Equations (3) and (4). The probability of the latent variables is chosen to be exchangeable and given by

$$P(\boldsymbol{\theta}|N)=\prod_{i=1}^{N}p(\theta_i),$$

where p(θ_i) is the probability distribution of each latent variable. The distribution p(θ) can be chosen arbitrarily, as long as the expectation of θ is finite. The probability of the network given the network size and the latent variables is obtained by drawing a Bernoulli variable for each link, with probability of observing a link between node i and node j conditioned on the value of their latent variables given by p_N(θ_i, θ_j), i.e.,

$$P(G|\boldsymbol{\theta},N)=\prod_{i<j}\left[p_N(\theta_i,\theta_j)\right]^{a_{ij}}\left[1-p_N(\theta_i,\theta_j)\right]^{1-a_{ij}}.$$

To be concrete we consider the following expression for the probability p_N(θ_i, θ_j), which is the general expression of the marginal probability of a link in canonical network ensembles (or, equivalently, exponential random graph models),

$$p_N(\theta_i,\theta_j)=\frac{\theta_i\theta_j/N}{1+\theta_i\theta_j/N}.\tag{15}$$

The advantage of taking this expression for the probability p_N(θ_i, θ_j) is that p_N(θ_i, θ_j) is always smaller than or equal to one for every value of the latent variables. Therefore in this model we do not need to impose a structural cutoff on the latent variables. In summary, the grand canonical network ensemble with given latent variable distribution is a hierarchical network model in which, given the network size and the latent variables, the network is drawn according to a canonical ensemble of networks. In this ensemble the probability of a network G can be written in Hamiltonian form as

$$P(G)=e^{-H(G)},$$

with Hamiltonian H(G) given by

$$H(G)=-\ln\pi(N)-\sum_{i=1}^{N}\ln p(\theta_i)-\sum_{i<j}\left[a_{ij}\ln p_N(\theta_i,\theta_j)+(1-a_{ij})\ln\left(1-p_N(\theta_i,\theta_j)\right)\right].$$

This Hamiltonian is invariant under permutation of the node labels; therefore this model is exchangeable.
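As a quick sanity check of this statement (a worked bound, using the form of p_N(θ_i, θ_j) reconstructed in Equation (15) above): writing x = θ_iθ_j/N ≥ 0 we have

$$p_N(\theta_i,\theta_j)=\frac{x}{1+x},\qquad x\ge 0\ \Longrightarrow\ 0\le p_N(\theta_i,\theta_j)<1,$$

so the link probability is automatically bounded for any value of the latent variables, and

$$p_N(\theta_i,\theta_j)\simeq\frac{\theta_i\theta_j}{N}\qquad\text{for }\theta_i\theta_j\ll N,$$

which recovers the sparse scaling of the ensemble.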

The Entropy of Grand Canonical Ensembles
In this section we show that the entropy S [3,48] of the two proposed grand canonical network ensembles, defined as

$$S=-\sum_{G}P(G)\ln P(G),$$

can be decomposed into contributions that reflect the uncertainty related to the increasing number of hierarchical levels of the model. In order to show this result we discuss separately the entropy of the two proposed grand canonical ensembles.

Entropy of the Grand Canonical Ensemble with Given Degree Distribution
The entropy S of the ensemble fixing the degree distribution can be decomposed into the entropy of the model at the different levels of the hierarchy according to the following expression,

$$S=S_{\pi(N)}+\left\langle S_{p(k)}\right\rangle_{\pi(N)}+\left\langle\Sigma_N(\mathbf{k})\right\rangle_{\pi(N),P(\mathbf{k}|N)},$$

where S_{π(N)} is the entropy associated with the typical choices of the total number of nodes N, ⟨S_{p(k)}⟩_{π(N)} is the entropy associated with the choice of the degree sequence averaged over the distribution π(N), and ⟨Σ_N(k)⟩_{π(N),P(k|N)} is the Gibbs entropy [3] of the networks with given degree sequence averaged over the distributions π(N) and P(k|N).
In other words we have

$$S_{\pi(N)}=-\sum_{N\ge N_0}\pi(N)\ln\pi(N),$$
$$\left\langle S_{p(k)}\right\rangle_{\pi(N)}=-\sum_{N\ge N_0}\pi(N)\sum_{\mathbf{k}}P(\mathbf{k}|N)\ln P(\mathbf{k}|N),$$
$$\left\langle\Sigma_N(\mathbf{k})\right\rangle_{\pi(N),P(\mathbf{k}|N)}=\sum_{N\ge N_0}\pi(N)\sum_{\mathbf{k}}P(\mathbf{k}|N)\,\Sigma_N(\mathbf{k}).$$

Entropy of the Grand Canonical Ensemble with Given Latent Variable Distribution
Similarly to the previous case, it is easy to show that the entropy of the ensemble fixing the distribution of the latent variables can be decomposed into the entropy of the model at the different levels of the hierarchy according to the following expression,

$$S=S_{\pi(N)}+\left\langle S_{p(\theta)}\right\rangle_{\pi(N)}+\left\langle S_N(\boldsymbol{\theta})\right\rangle_{\pi(N),P(\boldsymbol{\theta}|N)},$$

where S_{π(N)} is the entropy associated with the typical choices of the total number of nodes N, ⟨S_{p(θ)}⟩_{π(N)} is the entropy associated with the choice of the sequence of latent variables averaged over the distribution π(N), and ⟨S_N(θ)⟩_{π(N),P(θ|N)} is the Shannon entropy [3] of the networks with given sequence of latent variables averaged over the distributions π(N) and P(θ|N). In other words we have

$$S_{\pi(N)}=-\sum_{N\ge N_0}\pi(N)\ln\pi(N),$$
$$\left\langle S_{p(\theta)}\right\rangle_{\pi(N)}=-\sum_{N\ge N_0}\pi(N)\sum_{\boldsymbol{\theta}}P(\boldsymbol{\theta}|N)\ln P(\boldsymbol{\theta}|N),$$
$$\left\langle S_N(\boldsymbol{\theta})\right\rangle_{\pi(N),P(\boldsymbol{\theta}|N)}=\sum_{N\ge N_0}\pi(N)\sum_{\boldsymbol{\theta}}P(\boldsymbol{\theta}|N)\,S_N(\boldsymbol{\theta}),$$

where the Shannon entropy S_N(θ) of the network given the sequence of latent variables and the network size N can be expressed as

$$S_N(\boldsymbol{\theta})=-\sum_{i<j}\left[p_N(\theta_i,\theta_j)\ln p_N(\theta_i,\theta_j)+\left(1-p_N(\theta_i,\theta_j)\right)\ln\left(1-p_N(\theta_i,\theta_j)\right)\right].$$
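The decomposition above is the entropy chain rule applied to the hierarchical model, so it can be verified exactly on a toy ensemble. The following minimal Python sketch (our construction, with an assumed two-point prior on N and on θ, and with S understood as the joint entropy of N, θ and G, which is what the decomposition computes) compares the three-term decomposition with a brute-force evaluation of −∑ P ln P:

```python
import itertools
import numpy as np

pi = {3: 0.6, 4: 0.4}            # toy prior pi(N)
p_theta = {1.0: 0.7, 2.0: 0.3}   # toy latent variable distribution p(theta)

def p_link(ti, tj, N):
    x = ti * tj / N
    return x / (1.0 + x)

def H(ps):
    ps = np.array([p for p in ps if p > 0])
    return float(-(ps * np.log(ps)).sum())

# --- three-term decomposition ---
S_pi = H(pi.values())
S_theta = sum(pi[N] * N * H(p_theta.values()) for N in pi)  # N i.i.d. latent vars
S_graph = 0.0
for N in pi:
    for thetas in itertools.product(p_theta, repeat=N):
        w = pi[N] * np.prod([p_theta[t] for t in thetas])
        h = sum(H([p_link(thetas[a], thetas[b], N),
                   1 - p_link(thetas[a], thetas[b], N)])
                for a in range(N) for b in range(a + 1, N))
        S_graph += w * h
S_decomposed = S_pi + S_theta + S_graph

# --- brute force over all configurations (N, theta, G) ---
S_brute = 0.0
for N in pi:
    pairs = [(a, b) for a in range(N) for b in range(a + 1, N)]
    for thetas in itertools.product(p_theta, repeat=N):
        w = pi[N] * np.prod([p_theta[t] for t in thetas])
        for links in itertools.product([0, 1], repeat=len(pairs)):
            P = w
            for a_ij, (a, b) in zip(links, pairs):
                p = p_link(thetas[a], thetas[b], N)
                P *= p if a_ij else (1 - p)
            S_brute += -P * np.log(P)

print(S_decomposed, S_brute)  # the two values agree
```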

The Case of the Grand Canonical Ensemble with Given Degree Distribution
The grand canonical ensemble of exchangeable sparse networks with given degree distribution is an ensemble in which the total number of nodes is not specified. If we consider the networks of this ensemble having a given number of nodes N, the model reduces to the exchangeable sparse network ensemble proposed in ref. [32], whose marginal probability of a link (i, j) is given by

$$\bar{p}^{(N)}_{ij}=\frac{\langle k\rangle}{N-1}.$$

Since the grand-canonical ensemble of sparse exchangeable networks with given degree distribution can be interpreted as a mixture of the exchangeable sparse models proposed in ref. [32] with different sizes N, it is immediate to show that the marginal probability of a link between node i and node j in the grand canonical ensemble is given by the exchangeable expression

$$p_{ij}=\sum_{N\ge N_0}\pi(N)\,\frac{\langle k\rangle}{N-1}.$$

Moreover, the probability that two nodes are connected given that they have degrees k and k′ is given by

$$p(k,k')=\sum_{N\ge N_0}\pi(N)\,\frac{kk'}{\langle k\rangle N}.$$

Finally, the probability that two nodes are connected given that they have degrees k and k′ and the actual size of the network is N is given by the uncorrelated network expression

$$p_N(k,k')=\frac{kk'}{\langle k\rangle N}.$$

From these expressions of the marginal probability of a link it is possible to appreciate how the hierarchical grand canonical ensemble of sparse exchangeable networks circumvents the difficulties arising from the Aldous-Hoover theorem without violating it. Indeed, the marginal probability p_N(k, k′) of a link conditioned on the degrees of the two linked nodes and on the number of nodes N of the network vanishes in the limit N → ∞; however, if the number of nodes of the network is arbitrarily large but unknown, the marginal probability of the link remains finite (as both p_{ij} and p(k, k′) are finite).
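A minimal numerical illustration of this mechanism (our script, with an assumed truncated power-law prior π(N) ∝ N^{−ν} and assumed values of N_0, ν and ⟨k⟩): conditioned on N, the probability p_N(k, k′) = kk′/(⟨k⟩N) vanishes as N grows, while the N-marginalized probability p(k, k′) remains finite.

```python
import numpy as np

def p_kk(k, kp, avg_k, Ns, pi):
    """N-marginalized connection probability p(k, k') = sum_N pi(N) k k' / (<k> N)."""
    return float(np.sum(pi * k * kp / (avg_k * Ns)))

N0, Nmax, nu, avg_k = 100, 10**6, 2.5, 4.0
Ns = np.arange(N0, Nmax + 1, dtype=float)
pi = Ns ** (-nu)
pi /= pi.sum()

# Conditioned on N, the link probability vanishes as N grows...
for N in (10**2, 10**4, 10**6):
    print(N, 3 * 5 / (avg_k * N))
# ...but the probability marginalized over the unknown size stays finite:
print(p_kk(3, 5, avg_k, Ns, pi))
```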

The Case of the Grand Canonical Ensemble with Given Latent Variable Distribution
For the grand canonical ensemble with given latent variable distribution p(θ) we have that the marginal probability of a link is given by

$$p_{ij}=\sum_{N\ge N_0}\pi(N)\sum_{\theta}\sum_{\theta'}p(\theta)\,p(\theta')\,p_N(\theta,\theta').$$

The probability of a link given the latent variables of its two end nodes is given by

$$p(\theta,\theta')=\sum_{N\ge N_0}\pi(N)\,p_N(\theta,\theta').$$

The probability of a link given the network size and the latent variables is given by Equation (15), i.e.,

$$p_N(\theta,\theta')=\frac{\theta\theta'/N}{1+\theta\theta'/N}.$$

As discussed in the case of the grand canonical ensemble with given degree distribution, also for the grand canonical ensemble with given latent variable distribution the grand canonical approach allows one to circumvent the Aldous-Hoover theorem without violating it, as the marginal probability of a link in an arbitrarily large network of unknown size remains finite.

Generating Single Instances of Grand-Canonical Network Ensembles
In this section we describe two algorithms to generate single instances of networks in the proposed grand canonical ensembles. In particular, we will discuss a Metropolis-Hastings algorithm to generate single instances of networks drawn from the grand canonical ensemble with given degree distribution, and a Monte Carlo algorithm to generate single instances of networks drawn from the grand canonical ensemble with given distribution of latent variables.

Metropolis-Hastings Algorithm for the Grand-Canonical Ensemble with Given Degree Distribution
The grand-canonical exchangeable ensemble of sparse networks can be obtained by implementing a Metropolis-Hastings algorithm using the network Hamiltonian given by Equation (9).
(1) Start with a network of N nodes having exactly L = ⟨k⟩N/2 links, in which the minimum degree is greater than or equal to m and the maximum degree is smaller than or equal to K.
(2) Perform the Metropolis-Hastings algorithm for exchangeable sparse networks with N nodes (defined below).
(3) Propose to change the number of nodes to N′ = N + 1 (addition of one node) or N′ = N − 1 (removal of one node) with equal probability, and accept the move with probability min(1, π(N′)/π(N)) as long as N′ > N_0. If the move is accepted, change the number of nodes by adding or removing a node, set the number of links to L = ⟨k⟩N′/2 and ensure that each node has degree at least m and at most K. In particular, if a node is added, ensure that it has at least m links by randomly rewiring the existing links of the network and by adding links so that the total number of links is the integer that best approximates ⟨k⟩N′/2. Instead, if a node needs to be removed, choose a random node of the network, remove it, and rewire/remove links in order to enforce that the total number of links is the integer that best approximates ⟨k⟩N′/2.
The Metropolis-Hastings algorithm for exchangeable sparse networks with N nodes is the same algorithm used in ref. [32] for exchangeable networks of finite size N and is indicated below.
(1) Start with a network of N nodes having exactly L = ⟨k⟩N/2 links, in which the minimum degree is greater than or equal to m and the maximum degree is smaller than or equal to K.
(2) Iterate the following steps until equilibration:
(i) Let a be the adjacency matrix of the network.
(ii) Choose a random link ℓ = (i, j) between nodes i and j, and choose a random pair of nodes (i′, j′) not connected by a link.
(iii) Let a′ be the adjacency matrix of the network in which the link (i, j) is removed and the link (i′, j′) is inserted instead. Draw a random number r from a uniform distribution in [0, 1], i.e., r ∼ U(0, 1). If r < min(1, e^{−∆H}), where ∆H = H(a′) − H(a), and if the move does not violate the conditions on the minimum and maximum degree of the network, replace a by a′.
The Metropolis-Hastings algorithm can be used to sample the space of networks with a variable number of nodes and given (stable) degree distribution (see Figure 2).
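A minimal Python sketch of the inner rewiring move (step 2) is given below. It assumes the forms of P(k|N) and Σ_N(k) reconstructed above, under which exp(−H) at fixed N and L reduces, up to constants, to the per-node factors p(k_i)·k_i!; the change of the Bender-Canfield correction term under a single rewiring is neglected, and the function names are ours. The grand-canonical move of step (3), which adds or removes a node and repairs the link count, is omitted for brevity.

```python
import math
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

def node_logw(k, p_k):
    # log of the per-node factor p(k) * k! entering exp(-H) at fixed N and L
    # (assumed form; the (2L)!! factor and the Bender-Canfield correction
    # are constant or subleading under a single rewiring).
    return math.log(p_k[k]) + math.lgamma(k + 1)

def mh_rewire_step(G, p_k, m=1, K=20):
    """Remove a random link (i, j), insert a random non-link (i2, j2),
    accept with probability min(1, exp(-Delta H))."""
    edges = list(G.edges())
    i, j = edges[rng.integers(len(edges))]
    i2, j2 = (int(n) for n in
              rng.choice(G.number_of_nodes(), size=2, replace=False))
    if G.has_edge(i2, j2) or {i2, j2} == {i, j}:
        return False
    delta = {i: -1, j: -1}
    delta[i2] = delta.get(i2, 0) + 1
    delta[j2] = delta.get(j2, 0) + 1
    new_k = {n: G.degree(n) + d for n, d in delta.items()}
    if any(k < m or k > K for k in new_k.values()):
        return False  # the move would violate the degree bounds
    dH = sum(node_logw(G.degree(n), p_k) - node_logw(new_k[n], p_k)
             for n in new_k)
    if rng.random() < min(1.0, math.exp(-dH)):
        G.remove_edge(i, j)
        G.add_edge(i2, j2)
        return True
    return False

# Usage: start from a graph already satisfying the constraints and iterate.
p_k = np.exp(-np.arange(21) / 2.0)
p_k[0] = 0.0
p_k /= p_k.sum()
G = nx.random_regular_graph(4, 100, seed=1)
for _ in range(10_000):
    mh_rewire_step(G, p_k, m=1, K=20)
print(np.mean([d for _, d in G.degree()]))
```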

Monte Carlo Generation of Grand Canonical Network Ensemble with Given Latent Variable Distribution
A single instance of the grand canonical model with given latent variable distribution can be obtained by performing the following algorithm:
(1) Draw the network size N from the distribution π(N).
(2) Draw the latent variable θ_i of each node i independently from the latent variable distribution p(θ).
(3) Draw each link (i, j) of the network with probability p_N(θ_i, θ_j).
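A minimal Python implementation of this generator (our code, using the form of p_N(θ_i, θ_j) given in Equation (15) as reconstructed above, and an assumed exponential prior π(N)):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_latent_network(pi_N, thetas, p_theta):
    """Draw one network: (1) N ~ pi(N); (2) theta_i i.i.d. ~ p(theta);
    (3) each link (i, j) independently with probability
        p_N(theta_i, theta_j) = (theta_i theta_j / N) / (1 + theta_i theta_j / N).
    pi_N: dict {N: probability}; thetas, p_theta: support and weights of p(theta)."""
    Ns = np.array(list(pi_N))
    N = int(rng.choice(Ns, p=np.array([pi_N[n] for n in Ns])))
    theta = rng.choice(thetas, size=N, p=p_theta)
    x = np.outer(theta, theta) / N
    P = x / (1.0 + x)
    U = rng.random((N, N))
    A = np.triu(U < P, k=1)        # upper triangle: independent Bernoulli links
    A = (A | A.T).astype(int)      # symmetrize; no self-loops
    return N, theta, A

# Usage: truncated exponential prior on N, two-valued latent distribution.
pi_N = {N: np.exp(-0.01 * N) for N in range(50, 501)}
Z = sum(pi_N.values())
pi_N = {N: w / Z for N, w in pi_N.items()}
N, theta, A = sample_latent_network(pi_N, thetas=[0.5, 2.0], p_theta=[0.8, 0.2])
print(N, A.sum() // 2, theta.mean())
```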

Bayesian Estimation of the Network Parameters Given Partial Knowledge of the Network
In this section we will use the grand canonical network ensembles to calculate the posterior distribution of the network parameters given partial information on a network G = (V, E). In particular, let us assume that we only know the subgraph Ĝ = (V̂, Ê) induced by a subset of nodes V̂ ⊂ V of N̂ = |V̂| nodes, with adjacency matrix â, and that we do not have access to the full network G with adjacency matrix a. Without loss of generality, let us label the nodes of the network in such a way that the labels i with 1 ≤ i ≤ N̂ indicate the nodes in V̂ (denoted as sampled nodes) and the labels i with i > N̂ indicate the nodes in V \ V̂ (denoted as unsampled or unknown nodes). We indicate with κ the degree sequence of the sampled network Ĝ. Our goal is to make a Bayesian estimation of the network size N and of the true network parameters given the observed subgraph Ĝ. These a posteriori estimates of the true parameters of the network can then be used to reconstruct the unknown part of the network G.

Inferring the True Parameters with the Grand Canonical Ensemble with Given Degree Distribution
In this section we will use the grand canonical ensemble with given degree distribution to find the posterior probability distribution of the network parameters. For convenience we will indicate with k_i the true degrees of the sampled nodes 1 ≤ i ≤ N̂ and with q_i the true degrees of the remaining N − N̂ unsampled nodes N̂ + 1 ≤ i ≤ N. Using Bayes' rule we get the following expression for the posterior distribution of the network parameters given the observed subgraph Ĝ,

$$P(N,\mathbf{k},\mathbf{q}|\hat{G})=\frac{P(N)\,P(\mathbf{k},\mathbf{q}|N)\,P(\hat{G}|\mathbf{k},\mathbf{q},N)}{P(\hat{G})},\tag{31}$$

where P(N) = π(N), P(k, q|N) is the prior probability of the full degree sequence [k, q], and

$$P(\hat{G}|\mathbf{k},\mathbf{q},N)=e^{\Delta_N\Sigma(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})},$$

with ∆_NΣ(k, q|κ) given by

$$\Delta_N\Sigma(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})=\hat{\Sigma}_N(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})-\Sigma_N(\mathbf{k},\mathbf{q}).$$

Here Σ_N(k, q) indicates the entropy of the networks of size N with degree sequence [k, q], whose expression is given by the Bender-Canfield formula [3,21,22,47] (Equation (7)), which reads in this case

$$\Sigma_N(\mathbf{k},\mathbf{q})\simeq\ln\left[\frac{(2L)!!}{\prod_{i=1}^{\hat{N}}k_i!\,\prod_{i=\hat{N}+1}^{N}q_i!}\right]-\left(\frac{\langle k^2\rangle}{2\langle k\rangle}\right)^{2}.$$

Moreover, Σ̂_N(k, q|κ) indicates the logarithm of the number of networks of N nodes having Ĝ (with adjacency matrix â and degree sequence κ) as the induced subgraph among the N̂ sampled nodes.
Moreover, in Equation (31), P(Ĝ) indicates the evidence of the data, given by

$$P(\hat{G})=\sum_{N}\pi(N)\sum_{\mathbf{k},\mathbf{q}}P(\mathbf{k},\mathbf{q}|N)\,P(\hat{G}|\mathbf{k},\mathbf{q},N).$$

Calculating the entropy Σ̂_N(k, q|κ) using statistical mechanics methods, including the use of a functional order parameter (see Appendix A), we derive the following expression:

$$\hat{\Sigma}_N(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})=\ln\left[\binom{Q}{M}M!\,(Q-M)!!\prod_{i=1}^{\hat{N}}\frac{1}{(k_i-\kappa_i)!}\prod_{i=\hat{N}+1}^{N}\frac{1}{q_i!}\right],\tag{36}$$

where M indicates the number of links between the sampled nodes and the unsampled nodes,

$$M=\sum_{i=1}^{\hat{N}}(k_i-\kappa_i),$$

and Q indicates the sum over all the degrees of the unsampled nodes, i.e.,

$$Q=\sum_{i=\hat{N}+1}^{N}q_i,$$

where M and Q need to satisfy the constraint enforcing that the total number of true links is given by L = ⟨k⟩N/2. Therefore, indicating with L̂ = ∑_{i=1}^{N̂} κ_i/2 the number of links of the sampled network, we must impose

$$M+Q=\langle k\rangle N-2\hat{L}.\tag{37}$$

The expression obtained for the entropy Σ̂_N(k, q|κ) implies that the asymptotic expression for the number of networks with N nodes and degree sequence [k, q] having Ĝ as a subgraph is given by (see Appendix A for the derivation)

$$\hat{\mathcal{N}}(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa},N)=e^{\hat{\Sigma}_N(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})}=\binom{Q}{M}M!\,(Q-M)!!\prod_{i=1}^{\hat{N}}\frac{1}{(k_i-\kappa_i)!}\prod_{i=\hat{N}+1}^{N}\frac{1}{q_i!}.$$

This expression admits a simple combinatorial interpretation. In fact, the networks with degree sequence [k, q] having Ĝ as a subgraph can be constructed by adding (unsampled) links to the graph Ĝ. The unsampled part of the network can be constructed by assigning to each node i with 1 ≤ i ≤ N̂ a number of stubs given by k_i − κ_i and to each node i with i > N̂ a number of stubs given by q_i. The unsampled networks can then be obtained by matching the stubs pairwise, with the constraint that the stubs of the first N̂ nodes can only be matched with the stubs of the unsampled nodes i > N̂. Therefore the reconstructed part of the network is formed by a bipartite network between the sampled and the unsampled nodes with a number of links given by M, and a simple network among the unsampled nodes with a number of links given by (Q − M)/2. The number of matchings of the M links of the bipartite network is given by M!; the number of matchings of the stubs of the simple network among the unsampled nodes is (Q − M)!!. In order to get the number of distinct networks G with degree sequence [k, q] having Ĝ as a subgraph we need to divide by the number of permutations of the stubs belonging to the same nodes, and we need to multiply by the binomial factor Q choose M, indicating the number of ways in which we can choose the M stubs of the unsampled nodes to be matched with the stubs of the sampled nodes.
Given the expression for Σ̂_N(k, q|κ) provided by Equation (36), we can deduce the explicit expression for ∆_NΣ(k, q|κ),

$$\Delta_N\Sigma(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})=\ln\left[\binom{Q}{M}M!\,(Q-M)!!\prod_{i=1}^{\hat{N}}\frac{1}{(k_i-\kappa_i)!}\prod_{i=\hat{N}+1}^{N}\frac{1}{q_i!}\right]-\Sigma_N(\mathbf{k},\mathbf{q}).\tag{40}$$

It follows that the described Bayesian inference assigns to the model parameters the probability

$$P(N,\mathbf{k},\mathbf{q}|\hat{G})=\frac{\pi(N)\,P(\mathbf{k},\mathbf{q}|N)\,e^{\Delta_N\Sigma(\mathbf{k},\mathbf{q}|\boldsymbol{\kappa})}}{P(\hat{G})},\tag{41}$$

with ∆_NΣ(k, q|κ) given by Equation (40). From this expression, imposing with a delta function that M = ∑_{i=1}^{N̂}(k_i − κ_i), expressing the delta in integral form and using the saddle point method to evaluate the integral, we can calculate the marginal probability P(k_i|Ĝ, ω) that a sampled node i with 1 ≤ i ≤ N̂ has true degree k_i ≥ κ_i given M and Q, i.e.,

$$P(k_i|\hat{G},\omega)=\frac{p(k_i)\,\omega^{k_i-\kappa_i}/(k_i-\kappa_i)!}{\sum_{k\ge\kappa_i}p(k)\,\omega^{k-\kappa_i}/(k-\kappa_i)!},$$

where ω is related to M by

$$M=\sum_{i=1}^{\hat{N}}\sum_{k\ge\kappa_i}(k-\kappa_i)\,P(k|\hat{G},\omega).$$

In Figure 3 we show the difference between an exponential prior distribution p(k) on the degree of the nodes and the posterior marginal probability of the true degree of the sampled nodes, P(k|Ĝ, ω), plotted for different values of the sampled degree κ of the same node.

Figure 3. The posterior probability P(k_i|Ĝ, ω) (panel (a)) of the true degree of a sampled node depends on the degree κ of the node in the sampled network Ĝ and is non-zero only for k ≥ κ. The posterior probability P(θ|Ĝ, θ̄) of the latent variable of a sampled node (panel (b)) can be non-zero on the entire range of θ values allowed by the prior. Here we have plotted P(k_i|Ĝ, ω) and P(θ|Ĝ, θ̄) for different values of κ, and we have chosen ω = 2 and θ̄ = 0.6. The dashed lines indicate the exponential prior on the degrees (panel (a)) and on the latent variables (panel (b)).

Finally, we can calculate the a posteriori probability P(N|Ĝ, M) that the real network has N nodes, conditioned on M and on the sampled subgraph Ĝ. To this end we sum Equation (41) over all the possible values of the degrees k and q such that Equations (37) are satisfied. Therefore, by inserting Equation (40) into Equation (41), enforcing Equations (37) with Kronecker deltas and summing over all the possible values of k and q, we obtain P(N|Ĝ, M) as proportional to π(N), to the combinatorial factors of Equation (36), and to the two integrals I^{(k)}(M) and I^{(q)}(M, N) arising from the sums over the true degrees of the sampled and of the unsampled nodes, respectively, where Q = ⟨k⟩N − 2L̂ − M. By expressing the Kronecker deltas in integral form, performing a Wick rotation and evaluating the integrals at the saddle point, I^{(k)}(M) and I^{(q)}(M, N) can be brought to closed form, with ω and ω̂ fixed by the corresponding saddle point equations.
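The following Python sketch evaluates the degree posterior P(k_i|Ĝ, ω) numerically, under the saddle-point form reconstructed above, P(k|Ĝ, ω) ∝ p(k) ω^{k−κ}/(k − κ)! for k ≥ κ (the function names and the illustrative exponential prior are ours); mean_excess implements the relation between ω and M:

```python
import math
import numpy as np

def posterior_degree(kappa, omega, p_k):
    """Posterior over the true degree k >= kappa of a sampled node observed
    with degree kappa, assuming P(k) ~ p(k) omega^(k-kappa)/(k-kappa)!
    (saddle-point form reconstructed in the text)."""
    ks = np.arange(kappa, len(p_k))
    logw = (np.log(p_k[ks])
            + (ks - kappa) * math.log(omega)
            - np.array([math.lgamma(d + 1) for d in (ks - kappa)]))
    w = np.exp(logw - logw.max())
    return ks, w / w.sum()

def mean_excess(kappas, omega, p_k):
    """M(omega) = sum_i <k_i - kappa_i>: fixes omega from the observed number
    M of links between sampled and unsampled nodes."""
    tot = 0.0
    for kap in kappas:
        ks, P = posterior_degree(kap, omega, p_k)
        tot += float(((ks - kap) * P).sum())
    return tot

# Usage with an exponential prior p(k) ~ e^{-k/2}, as in Figure 3(a):
p_k = np.exp(-np.arange(40) / 2.0)
p_k /= p_k.sum()
for kappa in (1, 3, 6):
    ks, P = posterior_degree(kappa, omega=2.0, p_k=p_k)
    print(kappa, ks[:5], np.round(P[:5], 3))
print(mean_excess([1, 3, 6], omega=2.0, p_k=p_k))
```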
In Figure 4 we display the marginal a posteriori distribution P(N|Ĝ, M) as a function of M, demonstrating that the sampled network can significantly modify the prior assumptions on the total number of nodes of the network.

Inferring the True Parameters with the Grand Canonical Ensemble with Given Latent Variable Distribution
In this section we treat the problem of the Bayesian estimation of the parameters of the true network G given the sampled network Ĝ, using the grand canonical model with given latent variable distribution. Let us indicate with θ_i the latent variables of the sampled nodes 1 ≤ i ≤ N̂ and with φ_i the latent variables of the unsampled nodes i > N̂. Using Bayes' rule we have

$$P(N,\boldsymbol{\theta},\boldsymbol{\phi}|\hat{G})=\frac{P(N)\,P(\boldsymbol{\theta},\boldsymbol{\phi}|N)\,P(\hat{G}|\boldsymbol{\theta},\boldsymbol{\phi},N)}{P(\hat{G})},\tag{49}$$

where P(Ĝ|θ, φ, N) is independent of φ, i.e., P(Ĝ|θ, φ, N) = P(Ĝ|θ, N), and where P(N) = π(N) and

$$P(\hat{G}|\boldsymbol{\theta},N)=\prod_{i<j\le\hat{N}}\left[p_N(\theta_i,\theta_j)\right]^{\hat{a}_{ij}}\left[1-p_N(\theta_i,\theta_j)\right]^{1-\hat{a}_{ij}},$$

with p_N(θ_i, θ_j) given by Equation (15) and with â indicating the adjacency matrix of the sampled subgraph Ĝ. In Equation (49), P(Ĝ) indicates the evidence of the data, given by

$$P(\hat{G})=\sum_{N}\pi(N)\sum_{\boldsymbol{\theta}}P(\boldsymbol{\theta}|N)\,P(\hat{G}|\boldsymbol{\theta},N).$$

Since, as we have observed previously, P(Ĝ|θ, φ, N) is independent of φ, the Bayesian estimation of the parameters φ reduces simply to the prior in this case. Therefore we focus here only on the Bayesian estimate of the latent variables θ, i.e., we consider

$$P(N,\boldsymbol{\theta}|\hat{G})=\frac{P(N)\,P(\boldsymbol{\theta}|N)\,P(\hat{G}|\boldsymbol{\theta},N)}{P(\hat{G})},$$

with P(N), P(Ĝ|θ, N) and P(Ĝ) having the same definitions as above, and

$$P(\boldsymbol{\theta}|N)=\prod_{i=1}^{\hat{N}}p(\theta_i).$$

Using the explicit expression of p_N(θ_i, θ_j) given by Equation (15), we can express the likelihood P(Ĝ|θ, N) of the sampled network as

$$P(\hat{G}|\boldsymbol{\theta},N)=N^{-\hat{L}}\prod_{i=1}^{\hat{N}}\theta_i^{\kappa_i}\prod_{i<j\le\hat{N}}\frac{1}{1+\theta_i\theta_j/N},$$

where L̂ is the number of links of the sampled network Ĝ. In the limit N ≫ 1 we can approximate this expression as

$$P(\hat{G}|\boldsymbol{\theta},N)\simeq N^{-\hat{L}}\prod_{i=1}^{\hat{N}}\theta_i^{\kappa_i}\exp\left(-\frac{1}{2N}\Big(\sum_{i=1}^{\hat{N}}\theta_i\Big)^{2}\right).$$

With this approximation we get that the posterior probability P(N, θ|Ĝ) is given by

$$P(N,\boldsymbol{\theta}|\hat{G})\propto\pi(N)\,N^{-\hat{L}}\prod_{i=1}^{\hat{N}}p(\theta_i)\,\theta_i^{\kappa_i}\exp\left(-\frac{1}{2N}\Big(\sum_{i=1}^{\hat{N}}\theta_i\Big)^{2}\right).\tag{56}$$

Calculating the marginal posterior probability of a single latent variable θ we get

$$P(\theta|\hat{G},\bar{\theta})=\frac{p(\theta)\,\theta^{\kappa}\,e^{-\theta\bar{\theta}}}{\sum_{\theta'}p(\theta')\,(\theta')^{\kappa}\,e^{-\theta'\bar{\theta}}},$$

where θ̄ is fixed self-consistently at the saddle point. In Figure 3 we show the difference between an exponential prior distribution p(θ) on the latent variables of the nodes and the posterior marginal probability of the true latent variable of a sampled node, P(θ|Ĝ, θ̄), plotted for different values of the sampled degree κ of the same node.
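A numerical sketch of this posterior (our code; the likelihood factor θ^κ e^{−θθ̄} is the approximate form derived above, with θ̄ treated as a given saddle-point value, and the discretized exponential prior is chosen to mimic Figure 3(b)):

```python
import numpy as np

def posterior_theta(kappa, theta_bar, thetas, p_theta):
    """Posterior P(theta | G_hat, theta_bar) of the latent variable of a
    sampled node with observed degree kappa, assuming the approximate
    likelihood theta^kappa * exp(-theta * theta_bar) (assumed form)."""
    w = p_theta * thetas**kappa * np.exp(-thetas * theta_bar)
    return w / w.sum()

# Usage with a discretized exponential prior on theta:
thetas = np.linspace(0.1, 10.0, 100)
p_theta = np.exp(-thetas)
p_theta /= p_theta.sum()
for kappa in (0, 2, 5):
    P = posterior_theta(kappa, theta_bar=0.6, thetas=thetas, p_theta=p_theta)
    print(kappa, float((thetas * P).sum()))  # posterior mean of theta
```

Note that, unlike the degree posterior, this posterior has support on the entire range of θ allowed by the prior, consistently with Figure 3(b).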
Starting from Equation (56) we can also calculate the posterior distribution P(N|Ĝ) of the true number of nodes N > N̂. To this end we express the delta function in integral form and we sum over all the possible latent variables θ, obtaining an expression for P(N|Ĝ) in terms of the integral I^{(θ)}(ω, θ̂) over the functional order parameters. The integrals in Equation (58) can then be calculated at the saddle point. In Figure 4 we display the marginal a posteriori distribution P(N|Ĝ) of the true number of nodes under the simplifying assumption that Ĝ is regular, i.e., that all the degrees κ_i are equal, demonstrating that the sampled network can significantly modify the prior assumptions on the total number of nodes of the network.

Conclusions
In this paper we have proposed grand canonical network ensembles formed by networks with a varying number of nodes. The grand canonical network ensembles we have introduced are both sparse and exchangeable, i.e., they have finite average degree and are invariant under permutation of the node labels. The grand canonical ensembles are hierarchical network models in which first the network size is selected, then the degree sequence (or the sequence of latent variables), and finally the network adjacency matrix. The model circumvents the difficulties imposed by the Aldous-Hoover theorem, which states that exchangeable infinite sparse network ensembles vanish, as the network is a mixture of finite networks, although the networks can have an arbitrarily large size. Here we have shown how the grand-canonical ensembles can be used to perform a Bayesian estimation of the network parameters when only partial information about the network structure is known. This a posteriori estimation of the network parameters can then be used for network reconstruction.
The grand canonical framework for sparse exchangeable network ensembles is here described for the case of simple networks, but it has the potential to be extended to generalized network structures including directed networks, bipartite networks, multiplex networks and simplicial complexes, following the lines outlined in ref. [32].
In conclusion, we hope that this work, by proposing hierarchical grand canonical network ensembles able to treat networks of different sizes and by relating network theory to statistical mechanics, will stimulate further work by mathematicians, physicists, and computer scientists working on network science and related machine learning problems.

Conflicts of Interest:
The author declares no conflict of interest.

Appendix A. Derivation of Σ̂_N(k, q|κ)
In this Appendix our goal is to derive the asymptotic expression of Σ̂_N(k, q|κ) in the limit of large size of the sampled network, N̂ ≫ 1, and of the true network, N = (1 + α)N̂ ≫ 1 with α > 0.
Let us assume that the sampled subgraph Ĝ is the network between the sampled nodes 1 ≤ i ≤ N̂ and has adjacency matrix â. The true network is instead formed by N nodes with adjacency matrix a. We assume that a has the block structure

$$a=\begin{pmatrix}\hat{a} & b\\ b^{\top} & \tilde{a}\end{pmatrix},$$

where b indicates the N̂ × αN̂ matrix of links between the sampled nodes and the unsampled nodes, and ã indicates the (αN̂) × (αN̂) adjacency matrix among the unsampled nodes. As we have mentioned in the main text, Σ̂_N(k, q|κ) is the logarithm of the number N̂(k, q|κ, N) of networks (or adjacency matrices a) with degree sequence [k, q] admitting Ĝ, with sampled degree sequence κ, as a subgraph. In statistical mechanics we also call N̂(k, q|κ, N) the partition function of the corresponding statistical mechanics network model, and we indicate it by Z. In terms of the matrices b and ã, the partition function Z = N̂(k, q|κ, N) = exp[Σ̂_N(k, q|κ)] can be written as

$$Z=\sum_{b,\tilde{a}}\ \prod_{i=1}^{\hat{N}}\delta\!\left(k_i,\ \kappa_i+\sum_{j>\hat{N}}b_{ij}\right)\prod_{i>\hat{N}}\delta\!\left(q_i,\ \sum_{j\le\hat{N}}b_{ji}+\sum_{j>\hat{N}}\tilde{a}_{ij}\right),$$

where δ(x, y) indicates the Kronecker delta. Expressing the Kronecker deltas in integral form and performing the sum over the elements of the matrices b and ã, we obtain an integral representation of Z over auxiliary variables ω_i, with Dω = ∏_{i=1}^{N̂}[dω_i/(2π)] and Dω̄ = ∏_{i=N̂+1}^{N}[dω_i/(2π)]. Let us now introduce the functional order parameters c_{κ,k}(ω) and ρ_q(ω), together with their conjugates ĉ_{κ,k}(ω) and ρ̂_q(ω) [22,49,50].
Here P̂(k, κ) denotes the fraction of sampled nodes with degree κ in the sampled network and total inferred degree k, and P̂(q) denotes the fraction of unsampled nodes with degree q. Moreover, we have indicated with L = ⟨k⟩N/2 the number of links of the true network and with L̂ = ∑_{i=1}^{N̂} κ_i/2 the number of links of the sampled network. By enforcing the definition of the order parameters with a series of delta functions, inserting the resulting expressions into the partition function and taking the limit ∆ω → 0, and indicating with ∑ the sum over the allowed degree range, we obtain a representation of Z as an integral over the functional order parameters of an exponential with rate function f = f(c(ω, k), ĉ(ω, k), ρ(ω, q), ρ̂(ω, q), λ, h), in which the term Ψ collects the contribution of the pairwise matching of the stubs. By putting z/N = e^{−iλ}, performing a Wick rotation in λ and assuming z/N real and much smaller than one, i.e., z/N ≪ 1, which is allowed in the sparse regime, we can linearize the logarithm and express Ψ as a linear function of the overlap

$$\nu=\sum_{m\le q\le K}\hat{P}(q)\int d\omega\,\rho_q(\omega)\,e^{-i\omega},$$

and of the analogous overlap ν̄ computed from the order parameter c_{κ,k}(ω) of the sampled nodes.
The saddle point equations determining the value of the partition function can be obtained by performing the (functional) derivative of f with respect to the functional order parameters, obtaining

$$-i\hat{c}_{\kappa,k}(\omega)=z\,\alpha\nu\,e^{-i\omega},\qquad -i\hat{\rho}_q(\omega)=z\,(\alpha\nu+\bar{\nu})\,e^{-i\omega},$$

together with the corresponding equations expressing c_{κ,k}(ω) in terms of P̂(κ, k) and ρ_q(ω) in terms of P̂(q). Using these expressions, the functional order parameters can be evaluated explicitly, completing the saddle point evaluation of the partition function and leading to the expression of Σ̂_N(k, q|κ) given in Equation (36).