Entropy Multiparticle Correlation Expansion for a Crystal

As first shown by H. S. Green in 1952, the entropy of a classical fluid of identical particles can be written as a sum of many-particle contributions, each of them being a distinctive functional of all spatial distribution functions up to a given order. By revisiting the combinatorial derivation of the entropy formula, we argue that a similar correlation expansion holds for the entropy of a crystalline system. We discuss how one- and two-body entropies scale with the size of the crystal, and provide fresh numerical data to check the expectation, grounded in theoretical arguments, that both entropies are extensive quantities.


Introduction
The entropy multiparticle correlation expansion (MPCE) is an elegant statistical-mechanical formula that allows one to reconstruct the total entropy of a many-particle system term by term, including at each step of the summation the integrated contribution from spatial correlations between a specified number of particles.
The original derivation of the entropy MPCE is found in a book by H. S. Green (1952) [1]. Green's expansion applies to the canonical ensemble (CE). In 1958, Nettleton and M. S. Green derived an apparently different expansion valid in the grand-canonical ensemble (GCE) [2]. It took the ingenuity of Baranyai and Evans to realize, in 1989, that the CE expansion can indeed be rearranged in such a way as to become formally equivalent to the GCE expansion [3].
A decisive step forward was eventually taken by Schlijper [4] and An [5], who highlighted the similarity of the entropy formula to a cumulant expansion and its close relationship with the cluster variation method (see, e.g., [6]). Other papers that emphasize, in various ways, the combinatorial content of the entropy MPCE are references [7][8][9][10].
Since the very beginning it has been clear that the successive terms in the entropy expansion for a homogeneous fluid are not all of equal importance. In particular, the contributions from correlations between more than two particles are only sizable at moderate and high densities. However, while the two-body entropy is easily accessed in a simulation, computing the higher-order entropy terms is a prohibitive task (see, however, reference [11]). Hence, the only viable method to compute the total entropy in a simulation remains thermodynamic integration (see, e.g., [12]). The practical interest in the entropy expansion has thus shifted towards the residual multiparticle entropy (RMPE), defined as the difference between the excess entropy and the two-body entropy. The RMPE is a measure of the impact of non-pair multiparticle correlations on the entropy of the fluid. For hard spheres, Giaquinta and Giunta have observed that the RMPE changes sign from negative to positive very close to freezing [13]. At low densities the RMPE is negative, reflecting a global reduction (largely driven by two-body correlations) of the phase space available to the system as compared to the ideal gas. The change of sign of the RMPE close to freezing indicates that fluid particles, subject at high enough densities to increasingly stringent packing constraints, start exploring, in a cooperative way, a different structural condition on a local scale, as a prelude to crystallization on a global scale. Since the original observation in [13], a clear correspondence between the RMPE zero and the ultimate threshold for spatial homogeneity in the system has been found in many simple and complex fluids [14][15][16][17][18][19][20][21][22][23][24], thereby leading to the belief that the vanishing of the RMPE is a signature of an impending structural or thermodynamic transition of the system from a less ordered to a more spatially organized condition (freezing is just one example among many).
Albeit empirical, this entropic criterion is a valid alternative to the far more demanding exact free-energy methods when a rough estimate of the transition point is deemed sufficient. For a simple discussion of the interplay between entropy and ordering, the reader is referred to reference [25]; see instead references [26,27] for general considerations about the entropy of disordered solids.
A pertinent question to ask is: what happens to the RMPE on the solid side of the phase boundary, considering that an entropy expansion also holds for the crystal? This is precisely the problem addressed in this paper. Can the scope of the entropic criterion be extended in such a way that it also applies to melting? As it turns out, we can offer no definite answer to this question, since theory alone does not go far enough and we ran into a serious computational bottleneck: while the formulae are clear and the numerical procedure is straightforward, it is extremely hard to obtain reliable data for the two-body entropy of a three-dimensional crystal. We have only carried out a limited test on a triangular crystal of hard disks, but our results are affected by finite-size artifacts that make them inconclusive. Nevertheless, a few firm points have been established: (1) the approximate entropy expressions obtained by truncating the MPCE at a given order can all be derived from an explicit functional of the correlation functions up to that order; (2) the one-body entropy for a crystal is an extensive quantity (the same is believed to be true for the two-body entropy, but our arguments fall short of a proof); (3) the peaks present in the crystal one-body density have a nearly Gaussian shape; (4) we have also clarified the role of lattice symmetries in dictating the structure of the two-body density, which is explicitly determined at zero temperature. This paper is organized as follows. In Section 2 we review the formalism of the entropy expansion for homogeneous fluids and provide the basic tools needed for its extension to crystals. Then, in Section 3 we exploit the symmetries of the one- and two-body density functions to predict the scaling of the one- and two-body entropies with the size of the crystal. The final Section 4 is reserved for concluding remarks.

Derivation of the Entropy MPCE
In this Section, we collect a number of well-established results on the entropy MPCE, with the only purpose of setting the language and notation for the rest of the paper. First, we recall the derivation of the entropy formula for a one-component system of classical particles in the canonical ensemble. Such an ensemble choice is by no means restrictive, since, as we show next, it is always possible to take advantage of the sum rules obeyed by the canonical correlation functions to arrange the entropy MPCE in an ensemble-invariant form. Then, in the following Section we present an application of the formalism to crystals.
The canonical partition function of a system of N classical particles of mass m at temperature T is

$$ Z_N = Z_N^{\rm id}\, Z_N^{\rm exc} , \tag{1} $$

where the ideal and excess parts are given by

$$ Z_N^{\rm id} = \frac{V^N}{N!\,\Lambda^{3N}} \qquad {\rm and} \qquad Z_N^{\rm exc} = \frac{1}{V^N} \int d^3R^N\, e^{-\beta U(R^N)} . $$

In Equation (1), V is the system volume, β = 1/(k_B T), Λ = h/√(2πm k_B T) is the thermal wavelength, and U(R^N) is an arbitrary potential energy. As the particles are identical, for each n = 1, 2, ..., N the cumulative sum of all n-body terms in U is invariant under permutations of particle coordinates (we can also say that U is S_N-invariant, S_N being the symmetric group of the permutations on N symbols). The CE average of a function f of coordinates reads

$$ \langle f \rangle = \int d^3R^N\, \pi_{\rm can}(R^N)\, f(R^N) , \qquad \pi_{\rm can}(R^N) = \frac{e^{-\beta U(R^N)}}{V^N Z_N^{\rm exc}} , \tag{2} $$

where π_can(R^N) is the configurational part of the canonical density function. Finally, the excess entropy is

$$ S_N^{\rm exc}/k_B = \beta \langle U \rangle + \ln Z_N^{\rm exc} . \tag{3} $$

We define a set of marginal density functions (MDFs) by

$$ P^{(n)}(r^n) = V^n \int d^3r_{n+1} \cdots d^3r_N\, \pi_{\rm can}(R^N) \qquad (n = 1, \ldots, N) . \tag{4} $$

Owing to the S_N-invariance of π_can, it makes no difference which vector radii are integrated out in Equation (4); hence, P^(n)(r^n) is S_n-invariant (for example, P^(2)(r, r') = P^(2)(r', r)). The following properties are obvious:

$$ \frac{1}{V} \int d^3r_n\, P^{(n)}(r^n) = P^{(n-1)}(r^{n-1}) \qquad {\rm and} \qquad \frac{1}{V} \int d^3r_1\, P^{(1)}(r_1) = 1 . \tag{5} $$

Then, the n-body density functions (DFs), for n = 1, ..., N, can be expressed as

$$ \rho^{(n)}(r^n) = \Big\langle \sum_{i_1 \neq i_2 \neq \cdots \neq i_n} \delta^3(r_1 - r_{i_1}) \cdots \delta^3(r_n - r_{i_n}) \Big\rangle = \frac{N!}{(N-n)!}\, \frac{P^{(n)}(r^n)}{V^n} , \tag{6} $$

where the sum in (6) is carried out over all n-tuples of distinct particles (for example, the sum for n = 2 contains N(N − 1) terms). We note that P^(1) = 1 and ρ^(1) = N/V ≡ ρ if no one-body term is present in U, i.e., if no external potential acts on the particles (then U is translationally invariant). P^(1)(r)/V is the probability density of finding a particle in r; hence, ρ^(1)(r) = N P^(1)(r)/V is the number density at r. Similarly, P^(2)(r, r')/V² is the probability density of finding one particle in r and another particle in r'; hence, ρ^(2)(r, r') = N(N − 1) P^(2)(r, r')/V² is the density of the number of particle pairs at (r, r'). As r' increasingly departs from r, the positions of the two particles become less and less correlated, until P^(2)(r, r') = P^(1)(r) P^(1)(r') at infinite distance.
We stress that this cluster property holds in full generality, even for a broken-symmetry phase. The n-body reduced density functions, for n = 2, ..., N, read

$$ g^{(n)}(r^n) = \frac{\rho^{(n)}(r^n)}{\rho^{(1)}(r_1) \cdots \rho^{(1)}(r_n)} . \tag{7} $$

These functions fulfill the property

$$ \int d^3r_{n+1}\, \rho^{(1)}(r_{n+1})\, g^{(n+1)}(r^{n+1}) = (N - n)\, g^{(n)}(r^n) , \tag{8} $$

which also holds for n = 1 if we define g^(1) ≡ 1. For a homogeneous fluid, g^(2)(r, r') = g(|r − r'|). From now on, we adopt the shorthand notation P_{12...n} = P^(n)(R^n) and Q_{12...n} = Q^(n)(R^n). Moreover, any integral of the kind V^{−n} ∫ d³R_1 ⋯ d³R_n (⋯) is hereafter denoted as ⟨⋯⟩. For example, Equations (3) and (4) indicate that S_N^{exc}/k_B = −⟨P_{12...N} ln P_{12...N}⟩. To build up the CE expansion term by term, our strategy is to consider a progressively larger number of particles. For a one-particle system, the excess entropy in units of the Boltzmann constant is S_1^{exc}/k_B = −⟨P_1 ln P_1⟩, leading to a first-order approximation to the excess entropy of an N-particle system in the form S_N^{(1)}/k_B ≡ −N ⟨P_1 ln P_1⟩ (that is, each particle contributes to the entropy independently of the other particles). For a two-particle system, the excess entropy is S_2^{(1)} plus a remainder k_B R_2, given by:

$$ R_2 = -\langle P_{12} \ln Q_{12} \rangle , \qquad Q_{12} \equiv \frac{P_{12}}{P_1 P_2} . \tag{9} $$

Equation (9) suggests a second-order approximation for S_N^{exc}, where each distinct pair of particles contributes the same two-body residual term to the entropy:

$$ S_N^{(2)}/k_B = -N \langle P_1 \ln P_1 \rangle - \binom{N}{2} \langle P_{12} \ln Q_{12} \rangle . \tag{10} $$

Notice that Equation (10) is exact for N = 2, i.e., S_2^{(2)} = S_2^{exc}. Similarly, for a three-particle system the excess entropy is S_3^{(2)} plus a remainder k_B R_3:

$$ R_3 = -\langle P_{123} \ln Q_{123} \rangle , \qquad Q_{123} \equiv \frac{P_{123}\, P_1 P_2 P_3}{P_{12} P_{13} P_{23}} . \tag{11} $$

Hence, a third-order approximation follows for S_N^{exc} in the form

$$ S_N^{(3)}/k_B = -N \langle P_1 \ln P_1 \rangle - \binom{N}{2} \langle P_{12} \ln Q_{12} \rangle - \binom{N}{3} \langle P_{123} \ln Q_{123} \rangle . \tag{12} $$

Again, S_3^{(3)} = S_3^{exc}. Equation (12) reproduces the first three terms in the rhs of Equation (5.9) of reference [8], and one may legitimately expect that the further terms in the entropy expansion are similarly obtained by arguing for N = 4, 5, ... as we did for N = 1, 2, 3 (see the proof in [8]).
The general entropy formula finally reads:

$$ S_N^{\rm exc}/k_B = -\sum_{n=1}^{N} \binom{N}{n} \langle P_{1\ldots n} \ln Q_{1\ldots n} \rangle , \tag{13} $$

with Q_1 ≡ P_1 and the higher-order Q's defined as in Equations (9) and (11). This equation is trivially correct since, for any finite sequence {c_a} of numbers,

$$ c_N = \sum_{n=1}^{N} \binom{N}{n} \sum_{a=1}^{n} (-1)^{n-a} \binom{n}{a}\, c_a . \tag{14} $$

To prove (14), it is sufficient to observe that, for each fixed k = 1, ..., N, the coefficient of c_k in the above sum is

$$ \sum_{n=k}^{N} (-1)^{n-k} \binom{N}{n} \binom{n}{k} = \delta_{k,N} . \tag{15} $$

A more compact entropy formula is

$$ S_N^{\rm exc}/k_B = -\sum_{n=1}^{N} \binom{N}{n} \sum_{a=1}^{n} (-1)^{n-a} \binom{n}{a} \langle P_{1\ldots a} \ln P_{1\ldots a} \rangle , \tag{16} $$

which follows from

$$ \langle P_{1\ldots n} \ln Q_{1\ldots n} \rangle = \sum_{a=1}^{n} (-1)^{n-a} \binom{n}{a} \langle P_{1\ldots a} \ln P_{1\ldots a} \rangle . \tag{17} $$

The entropy expansion, (13) or (16), is only valid in the CE. Eliminating Q_{1...a} in favor of g_{1...a} by Equation (7), an overall constant comes out of the integral in Equation (13), namely,

$$ \sum_{n=1}^{N} \binom{N}{n} \sum_{a=1}^{n} (-1)^{n-a} \binom{n}{a} \ln \frac{N!}{(N-a)!\, N^a} , \tag{18} $$

which, by Equation (15), equals ln(N!/N^N); this term exactly cancels an identical term of opposite sign present in the ideal-gas entropy. In the end, a modified entropy MPCE emerges:

$$ S_N/k_B = N\left(\frac{3}{2} - \ln(\rho\Lambda^3)\right) - \sum_{n=1}^{N} \binom{N}{n} \sum_{a=1}^{n} (-1)^{n-a} \binom{n}{a} \langle P_{1\ldots a} \ln g_{1\ldots a} \rangle . \tag{19} $$

Notice that the first term in the rhs differs by N from the ideal-gas entropy expression in the thermodynamic limit. In order that Equation (19) conforms to the GCE entropy expansion, for each n a suitable fluctuation integral of value −N/[n(n − 1)] should be summed to (and subtracted from) the n-th term in the expansion. For example, using Equations (7) and (8), the second-order term (once the constant in (18) has been removed) can be rewritten as

$$ -\binom{N}{2} \langle P_{12} \ln g_{12} \rangle = -\frac{1}{2} \int d^3r_1\, d^3r_2 \left[ \rho^{(2)}_{12} \ln g_{12} - \rho^{(2)}_{12} + \rho^{(1)}(r_1)\, \rho^{(1)}(r_2) \right] + \frac{N}{2} . \tag{20} $$

Overall, the extra constants appearing in each term of the entropy formula (for example, the quantity N/2 in Equation (20)) add up to N. By absorbing such an N into the first term of (19) we recover the ideal-gas entropy in the thermodynamic limit, and the CE expansion becomes formally identical to the grand-canonical MPCE [9]. In Appendix A we present another derivation of the entropy formula in the CE, which is closer in spirit to the one given by H. S. Green. In parallel, we show that the approximation obtained by truncating the MPCE at a given order can be derived from a modified P_{12...N} distribution, which is an explicit functional of the spatial correlation functions up to that order.
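The combinatorial identity invoked in Equations (14) and (15), namely that in the nested sum over n and a the coefficient of every c_k vanishes except for k = N, is easy to check numerically. A minimal sketch (the function name is ours, not the paper's):

```python
from math import comb

def coefficient(N, k):
    # Coefficient of c_k in sum_{n=1}^{N} C(N,n) sum_{a=1}^{n} (-1)^(n-a) C(n,a) c_a;
    # only the terms with n >= k involve c_k.
    return sum((-1) ** (n - k) * comb(N, n) * comb(n, k) for n in range(k, N + 1))

N = 8
print([coefficient(N, k) for k in range(1, N + 1)])  # [0, 0, 0, 0, 0, 0, 0, 1]
```

The vanishing of all coefficients but the last one is just the binomial identity (1 − 1)^{N−k} = 0 for k < N, which is why the full expansion collapses back to the exact entropy.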

The First Few Terms in the Expansion of Crystal Entropy
The entropy expansion in the CE is formally identical for a fluid system and a crystal, since the origin of (13) is purely combinatorial. However, the DFs of the two phases are radically different: most notably, while P_1 = 1 and ρ^(1) = ρ for a homogeneous fluid, the one-body density is spatially structured for a crystal, at least once the degeneracy due to translations and point-group operations has been lifted; we stress that P_1 ≠ 1 only provided that a specific determination of the crystal is taken, since otherwise P_1 = 1 also holds in the "delocalized" crystalline phase. In practice, in order to fix a crystal in space we should imagine applying a suitable symmetry-breaking external potential, whose strength is sent to zero after statistical averages have been carried out (in line with Bogoliubov's advice to interpret statistical averages of broken-symmetry phases as quasiaverages [28], which amounts to sending the strength of the symmetry-breaking potential to zero only after the thermodynamic limit has been taken). A way to accomplish this task is to constrain the position of just one particle. When periodic boundary conditions are applied, keeping one particle fixed is enough to break the continuous symmetries of free space. As N grows, the effect of the external potential becomes weaker and weaker, since it does not scale with the size of the system.

One-Body Entropy
A reasonable form of one-body density for a three-dimensional Bravais crystal without defects is the Tarazona ansatz [29]:

$$ \rho^{(1)}({\bf r}) = \left(\frac{\alpha}{\pi}\right)^{3/2} \sum_R e^{-\alpha({\bf r}-{\bf R})^2} = \rho \sum_G e^{-G^2/(4\alpha)}\, e^{i{\bf G}\cdot{\bf r}} , \tag{21} $$

where α > 0 is a temperature-dependent parameter, the R's are direct-lattice vectors, and the G's are reciprocal-lattice vectors. Equation (21) is a rather generic form of crystal density, which we have recently applied also in a different context [30]. More generally, the one-body density appropriate to a perfect crystal must obey ρ^(1)(r + R) = ρ^(1)(r) for all R, and is thus necessarily of the form

$$ \rho^{(1)}({\bf r}) = \sum_G u_G\, e^{i{\bf G}\cdot{\bf r}} . \tag{22} $$

Since ∫_V d³r ρ^(1)(r) = N, it soon follows that u_0 = ρ. Calling C a primitive cell and v_0 its volume,

$$ u_G = \frac{1}{v_0} \int_C d^3r\, \rho^{(1)}({\bf r})\, e^{-i{\bf G}\cdot{\bf r}} . $$

In real space, a legitimate ρ^(1)(r) function is ∑_R φ(r − R) with ∫ d³r φ(r) = 1 (integration bounds are left unspecified when the integral is over a macroscopic V). In the zero-temperature/infinite-density limit, particles sit at the lattice sites and the one-body density then becomes

$$ \rho^{(1)}({\bf r}) = \sum_R \delta^3({\bf r} - {\bf R}) . \tag{23} $$

Equation (23) is also recovered from Equation (21) in the α → ∞ limit. For a crystalline solid, the one-body entropy, that is, the first term in the expansion of excess entropy, is (in units of k_B):

$$ S_1/k_B = -\int d^3r_1\, \rho^{(1)}({\bf r}_1) \ln \frac{\rho^{(1)}({\bf r}_1)}{\rho} . \tag{24} $$

One may wonder whether the integral in (24) is O(N) in the infinite-size limit. The answer is affirmative: we can provide a rigorous proof that S_1 is negative-semidefinite and that its absolute value does not grow faster than N. Using ln x ≤ x − 1 for x > 0 and x ln x ≥ x − 1 for any x ≥ 0, we obtain

$$ N - \frac{1}{\rho} \int d^3r_1\, \rho^{(1)}({\bf r}_1)^2 \;\le\; S_1/k_B \;\le\; 0 . \tag{25} $$

To estimate ∫ d³r_1 ρ^(1)², we employ the one-body density in (21), which is sufficiently generic for our purposes:

$$ \int d^3r_1\, \rho^{(1)}({\bf r}_1)^2 = N \rho \sum_G e^{-G^2/(2\alpha)} \tag{26} $$

(the above result is nothing but Parseval's theorem as applied to (21)). The sum in the rhs of Equation (26) is the three-dimensional analog of a Jacobi theta function (see, e.g., [31]), whose value is O(1) for α > 0. Therefore, it follows from Equations (25) and (26) that the one-body entropy is at most O(N).
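The O(1) character of the lattice sum ∑_G e^{−G²/(2α)} in Equation (26) is a theta-function statement. In one dimension it reduces to the Jacobi identity relating the reciprocal-space and direct-space Gaussian sums, which can be checked numerically. A sketch under assumed parameters (lattice spacing a, our own function names; both sums are truncated where the terms are negligible):

```python
from math import exp, pi, sqrt

def reciprocal_sum(alpha, a, nmax=200):
    # sum over G = 2*pi*n/a of exp(-G^2 / (2*alpha))
    return sum(exp(-(2 * pi * n / a) ** 2 / (2 * alpha)) for n in range(-nmax, nmax + 1))

def direct_sum(alpha, a, mmax=200):
    # Poisson-resummed form: a*sqrt(alpha/(2*pi)) * sum over R = m*a of exp(-alpha*R^2/2)
    return a * sqrt(alpha / (2 * pi)) * sum(exp(-alpha * (m * a) ** 2 / 2)
                                            for m in range(-mmax, mmax + 1))

alpha, a = 20.0, 1.0
print(reciprocal_sum(alpha, a), direct_sum(alpha, a))  # the two values agree
```

The sum stays finite (and close to a√(α/(2π)) for large α), which is all that the bound (25) requires.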

Two-Body Entropy
We now move to the problem of evaluating the two-body entropy S_2 for a crystal. For a homogeneous fluid, S_2 is an extensive quantity which, in k_B units, is equal to

$$ {\rm fluid:} \qquad S_2/k_B = -\frac{N}{2}\, 4\pi\rho \int_0^\infty dr\, r^2 \left[ g(r) \ln g(r) - g(r) + 1 \right] . \tag{27} $$

For a crystal, we have from Equation (20) that

$$ S_2/k_B = -\frac{1}{2} \int d^3r_1\, d^3r_2\, \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_2) \left[ g_{12} \ln g_{12} - g_{12} + 1 \right] . \tag{28} $$

As x ln x ≥ x − 1 for x > 0, S_2 is never positive, vanishing exclusively for g_12 ≡ 1. In terms of density functions, S_2 is written as

$$ S_2/k_B = -\frac{1}{2} \int d^3r_1\, d^3r_2 \left[ \rho^{(2)}_{12} \ln g_{12} - \rho^{(2)}_{12} + \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_2) \right] . \tag{29} $$

We show below that Equation (29) can be expressed as a radial integral, i.e., in a way similar to the two-body entropy for a fluid. We can assign a radial structure to crystals by appealing to a couple of functions introduced in [32], namely

$$ \widetilde g(r) = \frac{1}{4\pi N\rho} \int d\Omega_{\hat r} \int d^3r_1\, \rho^{(2)}({\bf r}_1, {\bf r}_1 + {\bf r}) \tag{30} $$

and

$$ \widetilde g_0(r) = \frac{1}{4\pi N\rho} \int d\Omega_{\hat r} \int d^3r_1\, \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_1 + {\bf r}) , \tag{31} $$

where the inner integrals are over the direction of r. For a homogeneous fluid, g̃(r) = g(r) and g̃_0(r) = 1.
The authors of reference [32] have sketched the profile of g̃(r) and g̃_0(r) for a crystal; both functions show narrow peaks at neighbor positions in the lattice, with an extra peak at zero distance for g̃_0(r), and the oscillations persist up to large distances. The following sum rules hold (cf. Equation (8)):

$$ 4\pi \int dr\, r^2 \rho\, \widetilde g(r) = N - 1 \tag{32} $$

and

$$ 4\pi \int dr\, r^2 \rho\, \widetilde g_0(r) = N . \tag{33} $$

When the latter two formulae are rewritten as

$$ 4\pi\rho \int dr\, r^2 \left( \widetilde g(r) - 1 \right) = -1 \qquad {\rm and} \qquad 4\pi\rho \int dr\, r^2 \left( \widetilde g_0(r) - 1 \right) = 0 , \tag{34} $$

it becomes apparent that both g̃(r) and g̃_0(r) decay to 1 at infinity. Similarly, we define:

$$ \widetilde h(r) = \frac{1}{4\pi N\rho} \int d\Omega_{\hat r} \int d^3r_1\, \rho^{(2)}({\bf r}_1, {\bf r}_1 + {\bf r}) \ln g^{(2)}({\bf r}_1, {\bf r}_1 + {\bf r}) , \tag{35} $$

which obviously vanishes at infinity. While h̃(r) = g(r) ln g(r) for a homogeneous fluid, we expect that h̃(r) ≠ g̃(r) ln g̃(r) in the crystal. Putting Equations (30)-(35) together, we arrive at

$$ {\rm crystal:} \qquad S_2/k_B = -\frac{N}{2}\, 4\pi\rho \int dr\, r^2 \left[ \widetilde h(r) - \widetilde g(r) + \widetilde g_0(r) \right] . \tag{36} $$

Even though the integrand vanishes at infinity, S_2 = O(N) only if the envelope of h̃(r) decays faster than r^{−3} (r^{−2} in two dimensions). A slower decay may be sufficient if S_2 is computed through the first integral in (36). For a spherically-symmetric interaction potential v(r), the excess energy (i.e., the canonical average of the total potential energy U) can also be written as a radial integral:

$$ \langle U \rangle = \frac{N}{2}\, 4\pi\rho \int dr\, r^2\, \widetilde g(r)\, v(r) . \tag{37} $$

For the one-body density in (21), g̃_0(r) can be obtained in closed form. First we have:

$$ \frac{1}{4\pi} \int d\Omega_{\hat r}\, \rho^{(1)}({\bf r}_1 + {\bf r}) = \rho \sum_G e^{-G^2/(4\alpha)}\, e^{i{\bf G}\cdot{\bf r}_1}\, \frac{\sin(Gr)}{Gr} . \tag{38} $$

Then, multiplying by ρ^(1)(r_1) = ρ ∑_G e^{−G²/(4α)} e^{iG·r_1} and finally integrating over r_1, we arrive at

$$ \widetilde g_0(r) = \sum_G e^{-G^2/(2\alpha)}\, \frac{\sin(Gr)}{Gr} . \tag{39} $$

We see that the large-distance decay of g̃_0(r) is usually slow, and the same will occur for g̃(r), since g̃(r) ≈ g̃_0(r) for large r. In two dimensions, the one-body density and g̃_0 functions respectively read:

$$ \rho^{(1)}({\bf r}) = \rho \sum_G e^{-G^2/(4\alpha)}\, e^{i{\bf G}\cdot{\bf r}} \qquad {\rm and} \qquad \widetilde g_0(r) = \sum_G e^{-G^2/(2\alpha)}\, J_0(Gr) , \tag{40} $$

where J_0 is a Bessel function of the first kind. Since the envelope of the J_0 maxima decays as r^{−1/2} at infinity, the asymptotic vanishing of g̃_0 − 1 is slower in two dimensions than in three. Equation (39) has a definite limit for α → ∞, corresponding to zero temperature.
Indeed, using the Poisson summation formula and the expression of Dirac's delta function in spherical coordinates, we obtain:

$$ \lim_{\alpha\to\infty} \widetilde g_0(r) = \frac{1}{4\pi r^2 \rho} \sum_R \delta(r - |{\bf R}|) . \tag{41} $$

Hence, g̃_0(r) reduces to a sum of delta functions centered at lattice distances (including the origin). The latter result is actually general. Inserting Equation (23) in (31), we obtain:

$$ \rho\, \widetilde g_0(r) = \frac{1}{4\pi r^2} \sum_R \delta(r - |{\bf R}|) , \tag{42} $$

q.e.d. At zero temperature, ρ g̃(r) is given by the same sum of delta-function terms as in (42), but for the zero-distance term (the angular average of δ³(r)), which is missing (see Equation (92) below).
We add a final comment on possible alternative formulations of g̃(r) for a crystal. One choice (option B) is to replace (30) with Apparently, this is a good definition, since (see Equation (8)) 4π ∫ dr r² ρ g̃(r) = N − 1. However, with this g̃(r) we cannot write S_2 as a radial integral; hence, option B is discarded altogether. Another possibility is option C: but this option is useless too, since 4π ∫ dr r² ρ g̃(r) = ρ ∫ d³r_1 (1/V) ∫ d³r_2 g^(2)(r_1, r_2) = ?
(observe that the inner integral is different from the one appearing in Equation (8)).
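Sums of the type appearing in Equation (39) are easy to evaluate for a concrete lattice. The sketch below (our own construction, for a simple cubic lattice of unit spacing and an assumed value of α, not the fcc and triangular cases studied later) computes g̃_0(r) both from the reciprocal-space sum ∑_G e^{−G²/(2α)} sin(Gr)/(Gr) and from the equivalent direct-space sum of angularly averaged Gaussian peaks, as a consistency check:

```python
from math import exp, pi, sin, sinh, sqrt

ALPHA, A = 20.0, 1.0  # Gaussian-peak parameter and lattice spacing (assumed values)

def g0_reciprocal(r, nmax=6):
    # g0(r) = sum_G exp(-G^2/(2*alpha)) * sin(G r)/(G r), with G = (2*pi/A)*(i, j, k)
    s = 0.0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            for k in range(-nmax, nmax + 1):
                G = (2 * pi / A) * sqrt(i * i + j * j + k * k)
                w = exp(-G * G / (2 * ALPHA))
                s += w if G == 0.0 else w * sin(G * r) / (G * r)
    return s

def g0_direct(r, nmax=6):
    # Same quantity as a direct-lattice sum: the angular average of Gaussians of
    # inverse squared width ALPHA/2 centered at the lattice vectors R (rho = 1 here).
    s = 0.0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            for k in range(-nmax, nmax + 1):
                R = A * sqrt(i * i + j * j + k * k)
                pref = (ALPHA / (2 * pi)) ** 1.5
                if R == 0.0:
                    s += pref * exp(-ALPHA * r * r / 2)
                else:
                    s += (pref * exp(-ALPHA * (r * r + R * R) / 2)
                          * sinh(ALPHA * r * R) / (ALPHA * r * R))
    return s

for r in (0.5, 1.0, 1.5):
    print(r, g0_reciprocal(r), g0_direct(r))  # the two evaluations agree
```

The agreement of the two forms is just the Poisson summation formula at work; the direct-space form also makes the nonnegativity of g̃_0 manifest, since it is a sum of averaged Gaussians.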

Symmetries of the Two-Body Density
A general property of the two-body density for a crystal is the CE sum rule

$$ \int d^3r_2\, \rho^{(2)}({\bf r}_1, {\bf r}_2) = (N - 1)\, \rho^{(1)}({\bf r}_1) . \tag{47} $$

Other constraints follow from the translational symmetry of local crystal properties. Since the one-body density fulfills ρ^(1)(r_1 + R) = ρ^(1)(r_1) for every R, we must have

$$ \rho^{(2)}({\bf r}_1 + {\bf R}, {\bf r}_2 + {\bf R}) = \rho^{(2)}({\bf r}_1, {\bf r}_2) , \tag{48} $$

in turn implying

$$ g^{(2)}({\bf r}_1, {\bf r}_2) = g^{(2)}({\bf r}_1 + {\bf R}, {\bf r}_2 + {\bf R}) . \tag{49} $$
Now observe [33] that (i) any function of r_1 and r_2 can also be viewed as a function of (r_1 + r_2)/2 and r_2 − r_1; (ii) under an R-translation, only the former variable is affected, not the relative separation. Hence, the most general function consistent with (49) is:

$$ g^{(2)}({\bf r}_1, {\bf r}_2) = \sum_G v_G({\bf r})\, e^{i{\bf G}\cdot{\bf S}} , \qquad {\bf S} \equiv \frac{{\bf r}_1 + {\bf r}_2}{2} , \quad {\bf r} \equiv {\bf r}_2 - {\bf r}_1 , \tag{50} $$

where the coefficient functions v_G(r) are constrained by the reality and exchange symmetry of g^(2) (Equations (51) and (52)). In order that lim_{r→∞} g^(2)(r_1, r_1 + r) = 1, it is sufficient that lim_{r→∞} v_0(r) = 1 and lim_{r→∞} v_G(r) = 0 for every G ≠ 0. We may reasonably expect that the most relevant term in the expansion (50) is indeed the G = 0 one (also notice that v_G → 0 as G → ∞ by the Riemann-Lebesgue lemma). Equation (50) is still insufficient to establish the scaling of the two-body entropy with the size of the crystal. Some general results can be obtained under the (strong) assumption that v_G(r) = 0 for any G ≠ 0. If we change the notation from v_0 to G(r) ≡ 1 + H(r) (which, by Equations (51) and (52), is a real and even function), then a necessary condition for H is:

$$ \int d^3r_2\, \rho^{(1)}({\bf r}_2)\, H({\bf r}_2 - {\bf r}_1) = -1 \qquad {\rm for\ every\ } {\bf r}_1 . \tag{54} $$

The rationale behind Equation (54) is particularly transparent near T = 0, where the peaks of the one-body density are extremely narrow. As argued below (see Equation (67) ff.), H as a function of r_2 is roughly −1 in the primitive cell C centered in r_1 ≈ R_1, R_1 denoting the only lattice site contained in C, and roughly zero outside C. Since the integral of ρ^(1) over C equals 1, Equation (54) immediately follows. Now writing H(r) as a Fourier integral,

$$ H({\bf r}) = \int \frac{d^3k}{(2\pi)^3}\, \widehat H({\bf k})\, e^{i{\bf k}\cdot{\bf r}} , \tag{55} $$

and using (21) as one-body density, Equation (54) yields

$$ \rho \sum_G e^{-G^2/(4\alpha)}\, \widehat H({\bf G})\, e^{-i{\bf G}\cdot{\bf r}_1} = -1 , \tag{56} $$

which can only hold for arbitrary r_1 if

$$ \rho\, \widehat H(0) = -1 \qquad {\rm and} \qquad \widehat H({\bf G}) = 0 \;\; {\rm for\ all\ } {\bf G} \neq 0 . \tag{57} $$

Next, from Equation (30) we obtain:

$$ \widetilde g(r) = \frac{1}{4\pi N\rho} \int d\Omega_{\hat r} \int d^3r_1\, \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_1 + {\bf r})\, [1 + H({\bf r})] . \tag{58} $$

For the one-body density in (21), the inner integral becomes:

$$ \int d^3r_1\, \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_1 + {\bf r}) = N\rho \sum_G e^{-G^2/(2\alpha)}\, e^{i{\bf G}\cdot{\bf r}} , \tag{59} $$

with

$$ I_G(r) \equiv \frac{1}{4\pi} \int d\Omega_{\hat r}\, H({\bf r})\, e^{i{\bf G}\cdot{\bf r}} . \tag{60} $$

It is evident that I_G(r) vanishes at infinity. Upon inserting (59) in (58), we finally obtain:

$$ \widetilde g(r) = \widetilde g_0(r) + \sum_G e^{-G^2/(2\alpha)}\, I_G(r) . \tag{61} $$

As r increases, the second term gradually vanishes and the large-distance oscillations of g̃(r) then exactly match those of g̃_0(r).
As a countercheck, let us compute the integral of ρ g̃(r) − ρ g̃_0(r) over the macroscopic system volume (which, by Equations (32) and (33), should be −1): Under the assumption that the entropy expansion for a crystal reads Provided that it vanishes sufficiently rapidly at infinity, the function can be written as a Fourier integral, and using (21) as one-body density, the two-body entropy becomes which is clearly O(N).

Two-Body Density at T = 0
In the zero-temperature limit, particles sit at lattice sites, and the two-body density then becomes (see Equation (23)):

$$ \rho^{(2)}({\bf r}_1, {\bf r}_2) = \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_2)\, [1 - 1_C({\bf r}_2 - {\bf r}_1)] , \tag{67} $$

which is of the form (63). In Equation (67), 1_C(r) is the indicator function of a Wigner-Seitz cell C centered at the origin (i.e., 1_C(r) = 1 if r ∈ C and 1_C(r) = 0 otherwise). While the factor ρ^(1)(r_1)ρ^(1)(r_2) forces particles to be located at lattice sites, the only role of the G function in (67) is to prevent the possibility of double site occupancy. However, a G function with this property is not unique; the one provided in (67) has the advantage of exactly complying with condition (57) (see below). Equation (67) indicates that the pair-correlation structure of a low-temperature solid is very different from the structure of a dense fluid close to freezing. For H(r) = −1_C(r) the Fourier transform reads:

$$ \widehat H({\bf k}) = -\int_C d^3r\, e^{-i{\bf k}\cdot{\bf r}} . \tag{68} $$

Now observe that f(r) = 1 is trivially periodic, and can thus be expanded in plane waves as

$$ 1 = \sum_G f_G\, e^{i{\bf G}\cdot{\bf r}} , \qquad f_G = \delta_{G,0} . \tag{69} $$

On the other hand,

$$ f_G = \frac{1}{v_0} \int_C d^3r\, e^{-i{\bf G}\cdot{\bf r}} . \tag{70} $$

Comparing Equations (69) and (70), we conclude that

$$ \int_C d^3r\, e^{-i{\bf G}\cdot{\bf r}} = v_0\, \delta_{G,0} , \qquad {\rm i.e.,} \qquad \widehat H({\bf G}) = -v_0\, \delta_{G,0} , \tag{71} $$

in agreement with (57). For H(r) = −1_C(r) the function I_G(r) at Equation (60) equals −sin(Gr)/(Gr) for r < r_m and 0 for r > r_M, where r_m (r_M) is the radius of the largest (smallest) sphere inscribed in (circumscribed to) C. It then follows from Equation (61) that g̃(r) = 0 for r < r_m, while g̃(r) = g̃_0(r) for r > r_M (for a triangular crystal with spacing a we have r_m = a/2 and r_M = a/√3, both comprised between the first, 0, and the second, a, lattice distances). The same conclusion holds at T = 0, where g̃_0(r) consists of infinitely narrow peaks centered at lattice distances; this implies that g̃(r) = g̃_0(r) everywhere but at the origin, where g̃(r) = 0 while g̃_0(r) is non-zero.

Scaling of Two-Body Entropy with N
We now discuss in fully general terms how the two-body entropy scales with N for a crystal, avoiding any simplifying hypothesis on the structure of g^(2)(r_1, r_2). Using an obvious shorthand notation, the two-body entropy reads

$$ S_2/k_B = -\frac{1}{2} \int d^3r_1\, d^3r_2\, \rho_1 \rho_2\, \left( g_{12} \ln g_{12} - g_{12} + 1 \right) . \tag{72} $$

As we already know, S_2 ≤ 0. From the inequality ln x ≤ x − 1, valid for all x > 0, we derive −x ln x ≥ x − x² for x ≥ 0, and then obtain:

$$ S_2/k_B \ge -\frac{1}{2} \int d^3r_1\, d^3r_2\, \rho_1 \rho_2\, (g_{12} - 1)^2 . \tag{73} $$

Clearly, estimating the size of the lower bound in Equation (73) is a much simpler problem than working with S_2 itself. Setting h_12 ≡ g_12 − 1, it is evident that h_12 shares all the symmetries of g_12. Hence, we can write:

$$ h_{12} = \sum_G h_G({\bf r})\, e^{i{\bf G}\cdot{\bf S}} , \qquad {\bf S} = \frac{{\bf r}_1 + {\bf r}_2}{2} , \quad {\bf r} = {\bf r}_1 - {\bf r}_2 . \tag{74} $$

Observe that the h_G(r) functions are nothing but Fourier coefficients, once the h function has been expressed in terms of S and r:

$$ h_G({\bf r}) = \frac{1}{V} \int d^3S\, h({\bf S}, {\bf r})\, e^{-i{\bf G}\cdot{\bf S}} . \tag{75} $$

By the Riemann-Lebesgue lemma, h_G(r) → 0 as G → ∞ (for arbitrary r). Moreover, h_G(r) → 0 for r → ∞ (for arbitrary G), since ρ_12 → ρ_1 ρ_2 for |r_1 − r_2| → ∞. Similarly, for k_12 ≡ h_12² we have

$$ k_{12} = \sum_G k_G({\bf r})\, e^{i{\bf G}\cdot{\bf S}} , \qquad k_G({\bf r}) = \sum_{G'} h_{G-G'}({\bf r})\, h_{G'}({\bf r}) . \tag{76} $$

Now observe that, for ρ^(1) of the form (22),

$$ \rho^{(1)}({\bf r}_1)\, \rho^{(1)}({\bf r}_2) = \sum_{G,G'} u_G\, u_{G'}\, e^{i({\bf G}-{\bf G}')\cdot{\bf r}/2}\, e^{i({\bf G}+{\bf G}')\cdot{\bf S}} . \tag{77} $$

Using the above equation, and changing the integration variables from r_1 and r_2 to S and r, we obtain an expression for the lower bound, Equation (78), in terms of the coefficients i_{G,G'} defined in Equation (79). In the special case h_G = H(r) δ_{G,0}, we have h_12 = h_0 = H(r_1 − r_2) and k_G(r) = H(r)² δ_{G,0}. Then, from Equation (77) we derive the special-case value (80) of the bound; an independent computation of the integral leads to the same result, which should be compared with Equation (66). For H(r) = −1_C(r) and u_G = ρ e^{−G²/(4α)}, we readily obtain S_2 = −N/2 from both Equations (66) and (78), meaning that in this case the two-body entropy coincides with its lower bound in Equation (73).
The quantity (80) is clearly O(1), since the summand converges rapidly to zero; this implies that the two-body entropy of a crystal is, at least for h_G = H(r)δ_{G,0}, bounded from below by an O(N) quantity. In the most general case, where Equations (78) and (79), rather, apply, we can only observe the following. As G grows in size, for any fixed G' both u_{G−G'} u_{G'} and h_{−G−G'}(r) h_{G'}(r) become smaller, suggesting that i_{G,−G} will decrease too. However, this is not enough to conclude that ∑_G i_{G,−G} is O(1), and the only way to settle the problem is numerical.
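The pointwise inequalities underlying this section, namely 0 ≤ g ln g − g + 1 ≤ (g − 1)² for every g > 0, which give both S_2 ≤ 0 and the lower bound in Equation (73), can be spot-checked numerically. A quick sketch:

```python
import random
from math import log

random.seed(0)
for _ in range(100000):
    g = random.uniform(1e-9, 100.0)
    val = g * log(g) - g + 1.0          # pointwise integrand of -2*S2 (up to density factors)
    assert 0.0 <= val <= (g - 1.0) ** 2  # hence S2 <= 0 and S2 is bounded below as in (73)
print("inequalities verified")
```

Both inequalities are saturated only at g = 1, consistent with S_2 vanishing exclusively for an uncorrelated pair distribution.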

Numerical Evaluation of the Structure Functions
The utility of (36) clearly relies on the possibility of determining the integrand in a simulation with sufficient accuracy. First we see how the one-body entropy, Equation (24), is computed. We start by dividing V into a large number M = V/v_c of identical cubes of volume v_c, chosen to be small enough that a cube contains the center of at most one particle. Let c_α = 0, 1 (with α = 1, ..., M) be the occupancy of the αth cube in a given system configuration and ⟨c_α⟩ its canonical average, as computed in a long Monte Carlo simulation of the weakly constrained crystal (to fix the center of mass of the crystal in space it is sufficient to keep one particle fixed; then, periodic boundary conditions will contribute to keeping the crystalline axes fixed as well in the course of the simulation). Given this setup, the local density at r_1 (a point inside the αth cube) can be estimated as

$$ \rho^{(1)}({\bf r}_1) \approx \frac{\langle c_\alpha \rangle}{v_c} , \tag{81} $$

and the integral in (24) becomes

$$ S_1/k_B \approx -\sum_{\alpha=1}^{M} \langle c_\alpha \rangle \ln \frac{\langle c_\alpha \rangle}{\rho v_c} \tag{82} $$

(notice that ρ v_c = N/M ≪ 1; we need v_c → 0 and an infinitely long simulation to make (82) an exact relation). Similarly, if r_2 falls within the βth cube, then

$$ \rho^{(2)}({\bf r}_1, {\bf r}_2) \approx \frac{\langle c_\alpha c_\beta \rangle}{v_c^2} \qquad (\alpha \neq \beta) , \tag{83} $$

and from Equation (30) we derive

$$ \widetilde g(r) \approx \frac{1}{N \rho v_c N_\gamma} \sum_{\alpha=1}^{M} \sum_{\beta \in \gamma(\alpha)} \langle c_\alpha c_\beta \rangle . \tag{85} $$

In the above formula, N_γ ≈ 4πr²Δr/v_c is the number of cubes whose center lies at a distance r from α (to within a certain tolerance Δr ≪ r), and the inner sum is carried out over those cubes only. Since ρ v_c N_γ = 4πr²Δrρ and M ρ v_c = N, an equivalent formula for g̃(r) is

$$ \widetilde g(r) \approx \frac{1}{4\pi r^2 \Delta r\, \rho N} \Big\langle \sum_{i=1}^{N} N_i(r \pm \Delta r/2) \Big\rangle , \tag{86} $$

denoting N_i(r ± Δr/2) the number of particles found at a distance between r − Δr/2 and r + Δr/2 from the ith particle in the given configuration. Equation (86) closely reflects the method of computing the radial distribution function in a CE simulation (see, e.g., Equation (11) in reference [34]).
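The counting procedure behind Equation (86) is simple to implement. The following sketch (our own minimal code, not the authors') builds the pair-distance histogram with the minimum-image convention for a small simple cubic crystal at T = 0, where the shell populations are known exactly (6 neighbors at distance 1, 12 at √2, 8 at √3 for unit spacing):

```python
from itertools import product
from math import sqrt, pi

def pair_histogram(positions, box, dr, rmax):
    """Counts of ordered pairs (i, j) whose minimum-image distance falls in
    the bin [b*dr, (b+1)*dr)."""
    nbins = int(round(rmax / dr))
    hist = [0] * nbins
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                d -= box * round(d / box)  # minimum-image convention
                d2 += d * d
            b = int(sqrt(d2) / dr)
            if b < nbins:
                hist[b] += 2  # ordered pairs, as in the sum over i of N_i
    return hist

# simple cubic lattice, spacing 1, in a periodic 4x4x4 box
pos = [(float(i), float(j), float(k)) for i, j, k in product(range(4), repeat=3)]
hist = pair_histogram(pos, box=4.0, dr=0.25, rmax=1.75)
print(hist)  # [0, 0, 0, 0, 384, 768, 512]
# g(r) at each bin center then follows by dividing by N * 4*pi*r^2*dr * rho, as in Eq. (86)
```

For a thermally disordered configuration the same loop applies unchanged; only the bin populations spread around the lattice distances.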
The function g̃(r) admits yet another expression, which further strengthens its resemblance to the g(r) of a liquid (as reported, e.g., in [35]). It follows from Equations (30) and (6) that

$$ \rho\, \widetilde g(r) = \frac{1}{4\pi N} \int d\Omega_{\hat r} \Big\langle \sum_{i \neq j} \delta^3({\bf r} + {\bf r}_i - {\bf r}_j) \Big\rangle . \tag{87} $$

Note that, for any sufficiently smooth function f(r), the angular average of a three-dimensional delta function acts as its radial counterpart,

$$ \frac{1}{4\pi} \int d\Omega_{\hat r}\, \delta^3({\bf r} - {\bf a}) = \frac{\delta(r - a)}{4\pi r^2} , \tag{88} $$

and we are thus allowed to replace the delta functions in (87) accordingly, which finally leads to

$$ \rho\, \widetilde g(r) = \frac{1}{4\pi r^2 N} \Big\langle \sum_{i \neq j} \delta(r - r_{ij}) \Big\rangle . \tag{90} $$

At zero temperature, we can neglect the average and simply write

$$ \rho\, \widetilde g(r) = \frac{1}{4\pi r^2 N} \sum_{i \neq j} \delta(r - R_{ij}) = \frac{1}{4\pi r^2} \sum_{R \neq 0} \delta(r - |{\bf R}|) , \tag{92} $$

where in the last step we have followed the same path leading to Equation (41). We can similarly proceed for the functions at Equations (31) and (35), which can be computed by the following formulae:

$$ \widetilde g_0(r) \approx \frac{1}{N \rho v_c N_\gamma} \sum_{\alpha} \sum_{\beta \in \gamma(\alpha)} \langle c_\alpha \rangle \langle c_\beta \rangle \tag{93} $$

and

$$ \widetilde h(r) \approx \frac{1}{N \rho v_c N_\gamma} \sum_{\alpha} \sum_{\beta \in \gamma(\alpha)} \langle c_\alpha c_\beta \rangle \ln \frac{\langle c_\alpha c_\beta \rangle}{\langle c_\alpha \rangle \langle c_\beta \rangle} . \tag{94} $$

While g̃(r) is the statistical average of an estimator whose histogram can be updated in the course of the simulation (see Equation (86)), g̃_0(r) can only be estimated at the end of the simulation, once ⟨c_α⟩ has been evaluated for every α, with an effort comparable to that made for the one-body entropy. Much more costly is the calculation of h̃(r), which must also be performed at the end of the simulation, after evaluating ⟨c_α c_β⟩ for every α and β.
Using translational lattice symmetry, the radial distribution functions and h̃(r) of a crystal can also be written as averages over a single cell, leading to simplified versions of Equations (85), (93), and (94). In the resulting formulae, the α index only runs over the cubes contained in a Wigner-Seitz/Voronoi cell of the lattice, while the β sum is still carried out over all cubes in the simulation box.

Numerical Tests
We first examine the shape of the structure functions g̃(r) and g̃_0(r) for hard spheres, choosing an r resolution of Δr = 0.05 (in units of the particle diameter σ). We take a system of N = 4000 particles arranged in an fcc lattice with packing fraction η = 0.600 (recall that the melting value is approximately 0.545). Periodic boundary conditions are applied at the system boundary. In order to constrain the crystal in space, we keep one particle fixed during the simulation. As for g̃_0(r), we employ the Tarazona ansatz with α = 95 (see Equation (39)), the value providing the best fit to the one-body density drawn from the simulation.
We use the standard Metropolis Monte Carlo (MC) algorithm, adjusting the maximum particle displacement during equilibration until the fraction of accepted moves is close to 50% (thereafter, the maximum displacement is no longer changed). We produce 50,000 MC cycles in the equilibration run, whereas CE averages are computed over a further 2 × 10^5 cycles. Our results are plotted in Figure 1. While at short distances g̃(r) and g̃_0(r) are rather different, as r increases the oscillations of the two functions become closer and closer in amplitude. To obtain the one-body density with sufficient accuracy, we use a grid of about 50 points along each space direction in the unit cell. However, this resolution is too fine to allow the computation of h̃(r), since the memory requirements for processing the ⟨c_α c_β⟩ data are huge. On the other hand, a coarser grid is incompatible with the chosen Δr.
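The simulation loop just described can be sketched in a few lines. This is our own minimal Metropolis implementation for hard disks in a periodic box with one pinned particle (it uses a fixed maximum displacement instead of the on-the-fly acceptance tuning described above, and a square rather than triangular starting lattice):

```python
import random
from math import sqrt

def metropolis_hard_disks(pos, box, sigma, nsweeps, delta, pinned=0, seed=42):
    """Hard-disk Metropolis MC: a trial move is accepted iff it creates no overlap.
    Particle `pinned` is never moved, which breaks translational symmetry
    (the 'weakly constrained' crystal of the text)."""
    rng = random.Random(seed)
    n = len(pos)

    def overlaps(i, x, y):
        for j in range(n):
            if j == i:
                continue
            dx = x - pos[j][0]; dx -= box * round(dx / box)
            dy = y - pos[j][1]; dy -= box * round(dy / box)
            if dx * dx + dy * dy < sigma * sigma:
                return True
        return False

    for _ in range(nsweeps):
        for i in range(n):
            if i == pinned:
                continue
            x = (pos[i][0] + rng.uniform(-delta, delta)) % box
            y = (pos[i][1] + rng.uniform(-delta, delta)) % box
            if not overlaps(i, x, y):
                pos[i] = (x, y)
    return pos

# square-lattice start: 36 disks of diameter 0.9 on unit spacing in a 6x6 periodic box
pos = [(float(i), float(j)) for i in range(6) for j in range(6)]
pos = metropolis_hard_disks(pos, box=6.0, sigma=0.9, nsweeps=50, delta=0.1)
print(pos[0])  # (0.0, 0.0): the pinned particle never moves
```

By construction, the no-overlap condition is an invariant of the dynamics, so the final configuration is always a valid hard-disk configuration; occupancy averages ⟨c_α⟩ and ⟨c_α c_β⟩ would be accumulated inside the sweep loop in a production code.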
To get closer to achieving our goal, i.e., to ascertain the N dependence of the two-body entropy for a crystal, we consider a two-dimensional system-hard disks. For this system, the transformation from fluid to solid occurs in two stages, via an intermediate hexatic fluid phase [36] (the transition from isotropic to hexatic fluid is first-order, whereas the hexatic-solid transition is continuous and occurs at η = 0.700). We consider a system of N = 1152 hard disks, arranged in a triangular crystal with packing fraction η = 0.800, and a mesh consisting of about 80 points along each direction in the unit cell. Even though translational correlations are only quasi-long-ranged in an infinite two-dimensional crystal, when one of the particles is kept artificially fixed this specificity is lost and the (finite) two-dimensional crystal is made fully similar to a three-dimensional crystal. Observe also that an infinite two-dimensional crystal shares at least the same breaking of rotational symmetry typical of an infinite three-dimensional crystal.
As before, we first look at the structure functions drawn from the simulation, g̃(r) and g̃_0(r). Our results are plotted in Figure 2, together with the g̃_0(r) function of Equation (40) for α = 75. For this α the matching between the two g̃_0 functions is nearly perfect, indicating that the peaks of the one-body density are (to a high level of accuracy) Gaussian in shape. For η = 0.800 we find S_1/N = −2.156 (in units of k_B).

Figure 2. Structure functions g̃(r) and g̃_0(r) for a triangular crystal of hard disks (η = 0.800). We report data for two sizes, N = 288 and N = 1152. For comparison, we also plot the g̃_0(r) function in Equation (40) for α = 75. As is clear, the Tarazona ansatz represents an excellent model for the one-body density of the weakly constrained hard-disk crystal.
In Figure 3, we show our main result, h(r), for η = 0.800 and two different crystal sizes, N = 288 and 1152. We point out that, in order to obtain these data, we had to run a separate simulation for each r, as the memory usage is otherwise prohibitive. As a consistency check, we computed the g_0 values independently, i.e., with the same program loop written for h(r), eventually finding the same results as in Figure 2. Looking at Figure 3, we see that h(r) exhibits a series of peaks at neighbor positions and, in the valleys between them, takes preferentially positive values (meaning that its oscillations are not centered around zero). However, the damping of the large-distance oscillations is too gradual to let us assess the nature of the asymptotic decay of h(r) and hence compute S_2. We can attempt a few explanations for this behavior of h(r). On the one hand, the decay of h(r) may really be slow (at least in two dimensions), yet S_2 would nonetheless be extensive, implying a large value of S_2/N. It may equally be that pinning the crystal in space by fixing the position of one particle strongly affects the rate of decay of h, an effect that only a finite-size scaling of the data can remove. Indeed, on going from N = 288 to N = 1152 the values of h are slightly shifted downwards. In summary, we have not reached a clear demonstration of the extensivity of S_2 in a crystal; this question has proved very hard to settle numerically. Our hope is that, building on our preparatory work, other authors with more powerful computational resources at their disposal will push the numerical analysis forward and eventually arrive at a definite solution of the problem.
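To make the memory obstacle concrete: a full table of occupancy products c_α c_β over all pairs of grid points grows quadratically with the total number of grid points. The following back-of-the-envelope estimate (purely our own, order-of-magnitude only) illustrates why a separate run per distance r was unavoidable:

```python
def pair_table_gib(n_cells, mesh_per_side, bytes_per_entry=8):
    """Rough size (GiB) of a full <c_a c_b> pair table when each of n_cells
    unit cells carries mesh_per_side^2 grid points (2D case); the table has
    m^2 entries, with m the total number of grid points."""
    m = n_cells * mesh_per_side ** 2
    return m * m * bytes_per_entry / 2.0 ** 30
```

For the N = 1152 disk crystal with an 80-point mesh per direction, the estimate is of order 10^5 GiB, far beyond any workstation, whereas a single unit cell alone would be unproblematic.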

Conclusions
In this paper, we inquired into the possibility of extending the zero-RMPE criterion, a popular one-phase criterion of freezing for simple fluids, to also cover the melting of a solid. After revisiting the derivation of the entropy MPCE in the canonical ensemble, we argued that the formula applies to a crystal too. We exploited lattice symmetries to constrain the structure of the one- and two-body densities, so as to gain as much information as possible on the first few terms in the entropy expansion. While this was enough to prove that the one-body entropy of a crystal is an extensive quantity, the information obtained was not sufficient to establish the same for the two-body entropy, whose scaling with the size of the crystal remains elusive. We thus attempted to settle the question numerically, but we ran into an insurmountable obstacle in the form of computational and memory limitations. To alleviate the problem, we turned to a two-dimensional case, namely hard disks, but with poor results: the structure function that must be integrated over distance to obtain the two-body entropy converges only weakly to zero. In the near future, we plan to check whether the situation is more favorable for a different two-dimensional interaction, either endowed with an attractive tail (e.g., the Lennard-Jones potential) or provided with a soft core (for example, a Gaussian repulsion).
Funding: This research received no external funding.
Acknowledgments: This work has benefited from computer facilities made available by the PO-FESR 2007-2013 Project MedNETNA (Mediterranean Network for Emerging Nanomaterials).

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Truncating the Entropy Expansion
We hereafter give an interpretation of the successive estimates of $S_N^{\rm exc}/k_B = -\overline{\ln P_{1\ldots N}}$ obtained by stopping the expansion (13) at a given order of correlations. We show that each truncated entropy expansion can be arranged in the form $-\overline{\ln \widetilde{P}_{1\ldots N}}$, where $\widetilde{P}_{1\ldots N}$ is a functional of all the MDFs up to n-th order, for n = 1, 2, . . . , N (without, however, claiming that $\widetilde{P}_{1\ldots N}$ represents a proper, i.e., normalized, distribution). Our method resembles the one originally devised by H. S. Green to express the canonical entropy of an N-particle fluid in terms of correlation functions [1]. While H. S. Green correctly inferred the first three terms in the expansion, he did not provide a general recipe for obtaining the higher-order terms recursively.
For N = 1 there is only one MDF, P_1 ≡ P^{(1)}(r_1), in terms of which a fully symmetric approximation to P_{1...N} can be constructed:

1st-order approximation:
\widetilde{P}_{1\ldots N} = P_1 P_2 \cdots P_N .   (A1)

Notice that $-\overline{\ln \widetilde{P}_{1\ldots N}} = S^{(1)}_N/k_B$. To obtain a better approximation, we consider a system of two particles. Since
P_{12} = P_1 P_2 \, \frac{P_{12}}{P_1 P_2} ,
we see that P_{12} is the product of the 1st-order approximation (A1) times a correction factor Q_{12} = P_{12}/(P_1 P_2). Assuming that in an N-particle system each distinct pair of particles contributes the same factor to P_{1...N}, we arrive at the

2nd-order approximation:
\widetilde{P}_{1\ldots N} = \prod_{i=1}^{N} P_i \prod_{i<j} Q_{ij} .

The number of factors in the second product is N(N − 1)/2. For this $\widetilde{P}_{1\ldots N}$ we obtain
-\overline{\ln \widetilde{P}_{1\ldots N}} = -N \int P_1 \ln P_1 - \binom{N}{2} \int P_{12} \ln Q_{12} = S^{(2)}_N/k_B .

Moving to N = 3, we observe that
P_{123} = \left( P_1 P_2 P_3 \, Q_{12} Q_{13} Q_{23} \right) \frac{P_{123}}{P_1 P_2 P_3 \, Q_{12} Q_{13} Q_{23}} ,
which is the second-order approximation to P_{123} times a correction factor Q_{123}. In the event that each distinct triplet of particles contributes the same factor to P_{1...N}, we obtain the

3rd-order approximation:
\widetilde{P}_{1\ldots N} = \prod_{i=1}^{N} P_i \prod_{i<j} Q_{ij} \prod_{i<j<k} Q_{ijk} .

Notice that a different expression for the latter ratio is
Q_{ijk} = \frac{P_{ijk}}{P_{ij} P_{ik} P_{jk}} \, P_i P_j P_k .   (A7)

The number of factors in the third product is N(N − 1)(N − 2)/6. For this $\widetilde{P}_{1\ldots N}$, the approximate entropy is
-\overline{\ln \widetilde{P}_{1\ldots N}} = -N \int P_1 \ln P_1 - \binom{N}{2} \int P_{12} \ln Q_{12} - \binom{N}{3} \int P_{123} \ln Q_{123} = S^{(3)}_N/k_B .

We can proceed similarly to derive higher-order approximations. The 4-body MDF of a system of N = 4 particles is trivially decomposed as the 3rd-order approximation to P_{1234} times a correction factor Q_{1234}. Notice that a different expression for the latter ratio is
Q_{ijkl} = \frac{P_{ijkl}}{P_{ijk} P_{ijl} P_{ikl} P_{jkl}} \, \frac{P_{ij} P_{ik} P_{il} P_{jk} P_{jl} P_{kl}}{P_i P_j P_k P_l} .   (A11)

Eventually, with the last, Nth-order approximation we recover the exact distribution, namely $\widetilde{P}_{1\ldots N} = P_{1\ldots N}$, and hence the full entropy. Notice that, except for n = 1 and n = N, the nth-order functional $\widetilde{P}_{1\ldots N}$ is not normalized.
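The correction factors introduced above can be checked numerically: with arbitrary positive values for the marginals, the triplet factor Q_123 (the ratio of P_123 to its second-order approximation) coincides with the expression in Equation (A7), and multiplying it back onto the second-order approximation rebuilds P_123 exactly. A small sanity check with toy numbers (our own script, not tied to any physical system):

```python
def q_pair(p_ij, p_i, p_j):
    """Pair correction factor Q_ij = P_ij / (P_i P_j)."""
    return p_ij / (p_i * p_j)

def q_triplet(p_ijk, p_ij, p_ik, p_jk, p_i, p_j, p_k):
    """Triplet correction factor, Eq. (A7):
    Q_ijk = P_ijk * P_i P_j P_k / (P_ij P_ik P_jk)."""
    return p_ijk * p_i * p_j * p_k / (p_ij * p_ik * p_jk)

def check_third_order(p):
    """(2nd-order approximation to P_123) x Q_123 must equal P_123 for any
    positive values of the marginals stored in the dict p."""
    q12 = q_pair(p['12'], p['1'], p['2'])
    q13 = q_pair(p['13'], p['1'], p['3'])
    q23 = q_pair(p['23'], p['2'], p['3'])
    second_order = p['1'] * p['2'] * p['3'] * q12 * q13 * q23
    q123 = q_triplet(p['123'], p['12'], p['13'], p['23'], p['1'], p['2'], p['3'])
    return second_order * q123
```

The identity is algebraic (the pair factors cancel telescopically), so it holds for any choice of positive inputs, not only for genuine distribution functions.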
We now provide a formalization of the procedure sketched above. For each value of n and each grouping I_n = {i_1, i_2, . . . , i_n} of n particle indices, we write P(I_n) ≡ P_{i_1 \ldots i_n} as a product of positive cumulant factors, to be determined recursively:
P(I_n) = \prod_{S_1 \subset I_n} C(S_1) \cdots \prod_{S_{n-1} \subset I_n} C(S_{n-1}) \times C(I_n) ,   (A13)
where \prod_{S_k \subset I_n} indicates the product over all k-tuples of distinct entries from I_n; there are \binom{n}{k} factors in the product \prod_{S_k \subset I_n}. As shown before, C({i}) = P_i, C({i, j}) = P_{ij}/(P_i P_j), C({i, j, k}) = P_{ijk} P_i P_j P_k / (P_{ij} P_{ik} P_{jk}), and so on. Taking the logarithm of (A13), we obtain
\ln P(I_n) = \ln C(I_n) + \sum_{S_{n-1} \subset I_n} \ln C(S_{n-1}) + \ldots + \sum_{S_1 \subset I_n} \ln C(S_1) ,
which can be solved with respect to the cumulants by the Möbius inversion formula (see, e.g., Equations (3) and (4) of reference [5]):
\ln C(I_n) = \ln P(I_n) - \sum_{S_{n-1} \subset I_n} \ln P(S_{n-1}) + \ldots + (-1)^{n-1} \sum_{S_1 \subset I_n} \ln P(S_1) ,
which is in turn equivalent to writing
C(I_n) = \frac{P(I_n)}{\prod_{S_{n-1} \subset I_n} P(S_{n-1})} \cdots
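The Möbius inversion on the subset lattice lends itself to a direct numerical check: starting from arbitrary values of ln P(S) on all nonempty subsets, the inversion yields ln C(S), and summing the cumulant logarithms over all nonempty subsets of I_n must rebuild ln P(I_n) exactly. A minimal script (our own illustration, representing subsets as frozensets):

```python
import itertools
import random

def log_cumulants(log_p, universe):
    """Moebius inversion on the subset lattice:
    ln C(I) = sum over nonempty S contained in I of (-1)^(|I|-|S|) ln P(S)."""
    log_c = {}
    for r in range(1, len(universe) + 1):
        for tup in itertools.combinations(sorted(universe), r):
            i_set = frozenset(tup)
            log_c[i_set] = sum((-1) ** (len(i_set) - len(s)) * lp
                               for s, lp in log_p.items() if s <= i_set)
    return log_c

def random_log_p(universe, seed=0):
    """Arbitrary values of ln P(S) on every nonempty subset of the universe."""
    rng = random.Random(seed)
    return {frozenset(t): rng.uniform(-1.0, 1.0)
            for r in range(1, len(universe) + 1)
            for t in itertools.combinations(sorted(universe), r)}
```

For n = 2 the inversion reproduces ln C({i, j}) = ln P_ij − ln P_i − ln P_j, i.e., the pair factor Q_ij, and the reconstruction property corresponds to Equation (A13) in logarithmic form.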