Vibrational Entropy of Crystalline Solids from Covariance of Atomic Displacements

The vibrational entropy of a solid at finite temperature is investigated from the perspective of information theory. Ab initio molecular dynamics (AIMD) simulations generate ensembles of atomic configurations at finite temperature from which we obtain the N-body distribution of atomic displacements, ρ_N. We calculate the information-theoretic entropy from the expectation value of ln ρ_N. At a first level of approximation, treating individual atomic displacements independently, our method may be applied using Debye-Waller B-factors, allowing diffraction experiments to obtain an upper bound on the thermodynamic entropy. At the next level of approximation we correct the overestimation through inclusion of displacement covariances. We apply this approach to elemental body-centered cubic sodium and face-centered cubic aluminum, showing good agreement with experimental values above the Debye temperatures of the metals. Below the Debye temperatures, we extract an effective vibrational density of states from eigenvalues of the covariance matrix, and then evaluate the entropy quantum mechanically, again yielding good agreement with experiment down to low temperatures. Our method readily generalizes to complex solids, as we demonstrate for a high entropy alloy. Further, our method applies in cases where the quasiharmonic approximation fails, as we demonstrate by calculating the HCP/BCC transition in Ti.


Introduction
The importance of entropy as a component of thermodynamic free energy, together with the difficulty of its calculation, motivates continuing efforts seeking improved computational approaches [1][2][3][4][5][6][7][8][9][10][11]. The entropy is a function of the state of the system, and is in principle determined by the instantaneous values of every degree of freedom. Most computational approaches to entropy calculation do not make explicit use of these values, and instead apply some form of thermodynamic integration to relate the entropy in the state of interest to some reference point of known entropy [12][13][14]. Our approach recognizes that the entropy equals, in suitable units, the information required to fully specify the state of the system. We capture this information in the form of many-body correlation functions obtained from ab initio molecular dynamics (AIMD).
Multiple types of excitation contribute to the entropy of a solid. Neglecting correlations among these, we may approximate the entropy as a sum

S ≈ S_v + S_e + S_c + · · ·, (1)

where S_v arises from atomic vibrations [15], S_e includes electronic excitations, and the nonvibrational configurational term S_c incorporates vacancies and chemical species substitutions [16]. The additional terms may include magnetism and other effects [17]. The present paper primarily addresses the vibrational contribution, but for comparison with experiment we must include the electronic entropy. While our initial approach is classical, and intended for applicability at elevated temperatures close to melting, we show how quantum effects can be incorporated to obtain accurate results below the Debye temperature. Additionally, the electronic entropy is intrinsically a quantum mechanical property.
The following section presents our computational methods. The heart of our approach rests on approximating the many-body displacement correlation function as a Gaussian distribution matching the simulated covariance of atomic displacements. We then apply the method to two test cases, face-centered cubic Al and body-centered cubic Na. In each case we compare with standard thermodynamic data. We also show the applicability of a simple approximation based on experimental Debye-Waller factors (thermal B-factors) that allows diffraction measurements to yield approximate thermodynamic entropies.
Our principal results for these test cases are illustrated in Figure 1a,b. Notice first that the Debye-Waller factors yield good qualitative results, lying within 1 k_B of the experimental values, but remaining consistently high. This is because the Debye-Waller factors treat the individual atomic vibrations independently, and neglect the mutual information contained in displacement correlation functions that must reduce the vibrational entropy [3,8,18-20]. Including the covariances of displacements and electronic entropies (curves labeled classical) improves the agreement, but with negative deviations at low temperatures due to the ln T divergence of the classical vibrational entropy.

Figure 1. Experimental entropies are taken from [21,22]. Orange triangles are calculated from Equation (16) using B-factors obtained from [23,24]. Red squares add the classical vibrational entropy calculated from Equation (10) to the electronic entropy calculated from Equation (15).
To overcome the deficiency of classical statistical mechanics at low temperatures, we introduce a quantum version of our method where we interpret eigenvalues of the covariance matrix as effective vibrational frequencies renormalized by anharmonic forces. This reveals a relationship between our method and a different approach based on velocity autocorrelation functions [25][26][27]. One could also consider our quantum approach as an application of a temperature-dependent effective harmonic potential [10,11].
We then apply our method to two examples that are scientifically interesting and technically challenging. First, we examine the high entropy alloy MoNbTaW [28,29]. Here the chemically disordered structure makes the conventional phonon-based approach time-consuming. Unfortunately, it also increases the demands on AIMD run times and limits our ability to improve statistics through symmetrization. Next, we address the temperature-driven HCP to BCC transition of Ti. Owing to the presence of imaginary frequency modes in the BCC state, the usual harmonic and quasiharmonic approaches cannot be applied, while our method succeeds.

Probability Density Function
Our approach focuses on the N-body probability density function ρ N (U , P) of a classical N-atom system in Cartesian phase space. The displacement variable U = (u 1 , u 2 , . . . , u N ), where u i ≡ r i − R i defines atomic displacement of the position r i of atom i from its mean position (ideal lattice site) R i , and P = (p 1 , p 2 , . . . , p N ) incorporates the momenta {p i }.
Owing to the additivity of kinetic and potential energy, the phase space probability factors into a product of density functions f_U(U) and f_P(P),

ρ_N(U, P) = f_U(U) f_P(P)/h^{3N},

where the factor h^{3N} comes from the constraint that the probability density integrates to 1. The entropy according to Gibbs [30] is (in units of k_B)

S = −⟨ln (h^{3N} ρ_N)⟩. (4)

This is identical to the Shannon [31] information-theoretic entropy. According to classical Maxwell-Boltzmann statistics, the momentum distribution function f_P(P) is Gaussian, with Σ_P a diagonal matrix of entries m_i/β, where m_i is the mass of atom i and β = 1/k_B T. Formally, we set M = diag(m_1, m_1, m_1, m_2, . . . , m_N), so that Σ_P = M/β.

In contrast to the simplicity of the momentum distribution, the density function f_U(U) is difficult to describe precisely, considering the many-body and anharmonic interactions among atoms. We choose to approximate it as a Gaussian with suitable covariance. Hence we write

f_U(U) = exp(−½ Uᵀ Σ_U⁻¹ U)/√((2π)^{3N} det Σ_U),

where Σ_U = ⟨U Uᵀ⟩ is the covariance matrix. The σ_{i,j} element of Σ_U is the 3 × 3 covariance matrix of the displacements u_i and u_j of the ith and jth atoms, with x, y, and z the Cartesian coordinates of the displacement u. Diagonal elements of the covariance matrix yield the variances; e.g., for our cubic lattices σ_{i,i} = ⟨x²⟩ 1. Due to the Gaussian approximation, the many-body density f_U(U) factors into a product of two-body correlations. Note that these two-body terms include anharmonic effects through the values of their covariances.

Within these approximations, the entropy S of N atoms becomes

S/k_B = 3N + ½ ln det(Σ_U Σ_P/ℏ²).

If all masses are equal, S simplifies to

S/k_B = 3N + ½ ln [(2π)^{3N} det Σ_U/Λ^{6N}], (10)

where Λ = √(2πℏ²/m k_B T) is the thermal de Broglie wavelength for mass m at temperature T. Subject to the Gaussian approximation, our method resembles the approach of Morris and Ho [3], who applied it to a one-dimensional model system (see also Refs. [32,33]).
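The equal-mass entropy formula can be evaluated directly from a simulated displacement covariance matrix. The sketch below is our own illustration (the function name, the toy diagonal covariance, and the 0.1 Å rms displacement are invented for demonstration, not taken from this work):

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J*s

def classical_entropy_per_atom(sigma_U, mass, T):
    """Classical entropy per atom (units of kB) for N equal-mass atoms from
    the 3N x 3N displacement covariance matrix sigma_U (entries in m^2),
    in the spirit of Equation (10):
        S/kB = 3N + (1/2) ln[ (2 pi)^(3N) det(sigma_U) / Lambda^(6N) ].
    """
    N = sigma_U.shape[0] // 3
    lam2 = h**2 / (2.0 * np.pi * mass * kB * T)   # Lambda^2, thermal de Broglie wavelength squared
    sign, logdet = np.linalg.slogdet(sigma_U)     # numerically stable log-determinant
    assert sign > 0, "covariance matrix must be positive definite"
    S_total = 3 * N + 0.5 * (3 * N * np.log(2 * np.pi) + logdet - 3 * N * np.log(lam2))
    return S_total / N

# Toy check: 4 Al atoms vibrating independently with 0.1 Angstrom rms
# displacement per Cartesian coordinate at 900 K (illustrative numbers).
m_Al = 26.98 * 1.66053906660e-27   # kg
sigma_x2 = (0.1e-10) ** 2          # per-coordinate variance, m^2
S = classical_entropy_per_atom(sigma_x2 * np.eye(12), m_Al, 900.0)
```

With a diagonal covariance, the result reduces to the independent-atom (single-site) expression, which provides a convenient consistency check.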
Refs. [8,9] apply this approach to solids and construct a tridiagonal Toeplitz matrix based on a one-dimensional model of correlations between x coordinates of nearest neighbor atoms. Their entropies compare well with their target entropies calculated using thermodynamic integration, including cases where the interactions are angle-dependent. However, the formalism of Equation (4) applies generally, and we will examine corrections to the Gaussian approximation in Section 3.3.1.

Figure 2 illustrates the covariance matrix Σ_U for FCC Al at T = 900 K. Repeating patterns reflect the symmetries of the FCC structure. Translational symmetry requires that the covariance submatrix σ_{i,j} depends only on the relative position R_{i,j} = R_j − R_i = ha + kb + lc of the ith and jth atoms. Consequently, covariance matrices σ_{i,j} sharing the same Miller indices hkl share the same value, σ_hkl. All 3 × 3 matrices along the diagonal are equivalent and share the form σ_000 shown in part (d), whose off-diagonal elements vanish due to mirror symmetries. Three-fold rotational symmetry can be seen in the covariance matrices σ_011, σ_101, σ_110 (parts (e)-(g)), whose non-zero off-diagonal elements are the yz, xz, and xy components.

Relation to Force Constant Matrix
The probability density ρ(x) of a classical oscillator in the harmonic potential U = ½ mω²x², in thermal equilibrium, is

ρ(x) = √(βmω²/2π) exp(−βmω²x²/2),

and the variance of its displacement is σ² = ⟨x²⟩ = 1/(βmω²). The force constant C = U″ = mω² is related to the variance by C = 1/βσ². For an N-particle system, the force constant matrix C is defined in terms of the second derivatives of the potential U,

C_{iμ,jν} = ∂²U/∂u_{iμ}∂u_{jν},

where u_{iμ}, u_{jν} are elements of the displacement U, in which i, j denote atoms and μ, ν denote x, y, z Cartesian coordinates. The mass-weighted covariance matrix, Σ̃_{iμ,jν} = √(m_i m_j) Σ_{iμ,jν}, relates to the mass-reduced force constant matrix C̃_{iμ,jν} = C_{iμ,jν}/√(m_i m_j) by

C̃ = (β Σ̃)⁻¹; (13)

hence measurement of the covariance matrix yields the complete set of force constants. The matrices C̃ and Σ̃ are singular because of center of mass translation invariance. To invert the singular matrix, we represent Σ̃ = Σ_{kμ} λ_{kμ} |kμ⟩⟨kμ|, where {λ_{kμ} ≡ (βω²_{kμ})⁻¹, |kμ⟩} is the set of eigenvalues and eigenvectors of Σ̃. Then, noting that C̃ and Σ̃ share common eigenvectors, we invert the nonvanishing eigenvalues to obtain C̃ = Σ_{kμ} ω²_{kμ} |kμ⟩⟨kμ|. For a harmonic potential U, the relationship Equation (13) is exact; for an anharmonic system we may take Equation (13) as defining temperature-dependent effective force constants and vibrational frequencies [10,11].
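The eigenvalue relation above can be sketched in a few lines: diagonalize the mass-weighted covariance, discard the near-zero translational eigenvalues, and convert the rest into effective frequencies. This is our own minimal illustration (function and variable names are invented):

```python
import numpy as np

kB = 1.380649e-23  # J/K

def effective_frequencies(sigma_U, masses3N, T, tol=1e-10):
    """Effective angular frequencies (rad/s) from the mass-weighted
    covariance matrix, whose eigenvalues are lambda = 1/(beta*omega^2)
    (cf. Equation (13)). masses3N is a length-3N array holding each
    atom's mass repeated for its x, y, z coordinates. Near-zero
    eigenvalues (center-of-mass translations) are discarded."""
    beta = 1.0 / (kB * T)
    sigma_tilde = np.sqrt(np.outer(masses3N, masses3N)) * sigma_U  # mass weighting
    lam = np.linalg.eigvalsh(sigma_tilde)
    lam = lam[lam > tol * lam.max()]          # drop singular translational modes
    return np.sqrt(1.0 / (beta * lam))
```

As a sanity check, feeding in the known variance of a single classical harmonic oscillator, σ² = 1/(βmω²), recovers ω.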

Quantum Harmonic Entropy
The entropies predicted by our classical theory agree quite well with the experimental values at high temperatures, but they fall below experiment at temperatures below the Debye temperatures Θ_D, as seen in Figure 1. The negative deviation is a consequence of the negative divergence of ln(⟨u²⟩/Λ²) ∼ 2 ln T as T → 0. Experimentally S → 0 for all materials, by the third law of thermodynamics, because quantum mechanics inhibits the excitation of vibrational modes with frequencies greater than k_B T/h.
To overcome the singularity of the classical entropy, we adopt the entropy of the quantum harmonic oscillator, using effective harmonic frequencies ω_{kμ} obtained from eigenvalues of our covariance matrix as discussed in Section 2.2. Summing over the nonzero vibrational frequencies, the entropy with quantum corrections is

S/k_B = Σ_{kμ} [ βℏω_{kμ}/(e^{βℏω_{kμ}} − 1) − ln(1 − e^{−βℏω_{kμ}}) ]. (14)

This yields better agreement when the temperature is below the Debye temperature, as shown in Figure 1. In particular, the limit S → 0 as T → 0 is obeyed. This quantum model is harmonic in the sense that it is exact for quadratic potentials U, but it incorporates anharmonicity through the effective vibrational frequencies which were derived from the simulated covariance matrix. Errors due to applying the quantum harmonic model should be small at low temperatures, where motion generically becomes harmonic. Some prior studies employ time-dependent velocity correlation functions, then Fourier transform over time to obtain frequencies [25][26][27]. The systematics of that approach differ markedly from ours, as in principle we do not require time evolution at all; we only simulate trajectories for the sake of enlarging our configurational ensemble.
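The quantum-corrected entropy sum is straightforward to evaluate once the effective frequencies are known. A small sketch of our own (not code from this work), using `expm1` for numerical stability at small and large arguments:

```python
import numpy as np

kB   = 1.380649e-23     # J/K
hbar = 1.054571817e-34  # J*s

def quantum_entropy(omegas, T):
    """Entropy (units of kB) of independent quantum harmonic modes,
    following the form of Equation (14):
        S/kB = sum_k [ x_k/(e^{x_k} - 1) - ln(1 - e^{-x_k}) ],
    with x_k = hbar*omega_k/(kB*T)."""
    x = hbar * np.asarray(omegas, dtype=float) / (kB * T)
    # expm1 avoids cancellation for small x and underflow issues for large x
    return float(np.sum(x / np.expm1(x) - np.log(-np.expm1(-x))))
```

For βℏω ≫ 1 each mode's contribution vanishes exponentially, which is exactly the third-law behavior the classical formula lacks.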
The model Hamiltonian can be constructed in the actual harmonic limit of small oscillations by evaluating the force constants within density functional perturbation theory. This model substantially underestimates the high temperature entropy as it neglects thermal expansion. The quasiharmonic approximation can be used to predict thermal expansion, resulting in improved agreement, or better yet we can evaluate the force constants at the experimental lattice parameters. As seen in Figure 1, the quasiharmonic approximation utilizing experimental lattice constants agrees with experiment about as well as our new method.

Ab Initio Methods
Ab initio molecular dynamics (AIMD) simulations are performed for FCC Al in supercells of size 4 × 4 × 4 (256 atoms) and 6 × 6 × 6 (864 atoms), and for BCC Na in a 6 × 6 × 6 supercell (432 atoms). We use the Vienna Ab initio Simulation Package (VASP [34]) with projector augmented wave potentials [35] and the Perdew-Burke-Ernzerhof (PBE [36]) generalized gradient exchange correlation functional. We use a single electronic k-point and default plane wave energy cutoffs. When possible we use experimental lattice constants at the appropriate temperatures. The molecular dynamics simulations use Nosé thermostats with the default Nosé mass parameters. Our time steps are 2 fs, and our runs extend to 40 ps for Al in the 4 × 4 × 4 cell, 8 ps in the 6 × 6 × 6 cell, and 7 ps or greater for Na.
After allowing the simulated systems to approach equilibrium, the variances and covariances are calculated from a continuing simulation by averaging ⟨u_i u_jᵀ⟩ over many samples. We also average over the Ω reflection, rotation, and translation symmetry operations T_k, such that σ_{i,j} = (1/Ω) Σ_k T_k ⟨u_i u_jᵀ⟩ becomes symmetry invariant. In principle, all the information needed to evaluate the entropy is contained in just a single representative structure of sufficient size, but the time averaging helps to reduce statistical error.
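To illustrate the symmetrization step, the sketch below (our own simplification, restricted to averaging a single on-site 3 × 3 block over the cubic point group; averaging the off-site blocks σ_{i,j} over the full space group is analogous bookkeeping) shows how the average over the 48 operations enforces the expected isotropy of σ_000:

```python
import numpy as np
from itertools import permutations, product

def cubic_point_group():
    """The 48 signed 3x3 permutation matrices of the full cubic group O_h."""
    ops = []
    for perm in permutations(range(3)):
        for signs in product((1, -1), repeat=3):
            R = np.zeros((3, 3))
            for row, (col, s) in enumerate(zip(perm, signs)):
                R[row, col] = s
            ops.append(R)
    return ops

def symmetrize_block(sigma):
    """Average a 3x3 covariance block over the cubic point group,
    sigma -> (1/Omega) sum_k R_k sigma R_k^T."""
    ops = cubic_point_group()
    return sum(R @ sigma @ R.T for R in ops) / len(ops)
```

For any symmetric input block, the group average is isotropic, (tr σ/3) times the identity, which is the form required of σ_000 on a cubic lattice.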
We perform phonon calculations as implemented in phonopy [37] to obtain force constants and vibrational frequencies, and then calculate vibrational entropy as discussed in Section 2.3. Rather than calculating the thermal expansion ab initio, as in the traditional quasiharmonic approximation [38], we simply evaluate the force constants at the experimentally known temperature-dependent lattice constant a(T).
Electronic entropy is evaluated as

S_e/k_B = −∫ dE D(E) [ f ln f + (1 − f) ln(1 − f) ], (15)

with D(E) the electronic density of states calculated from a structure with lattice constant a(T), and f = f_{T,μ}(E) the Fermi-Dirac occupation function. The chemical potential μ is obtained as a function of T using the program Felect [39].
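The electronic entropy integral is simple to evaluate numerically given a tabulated density of states. Below is a minimal sketch of our own (assuming a uniform energy grid; the function name is invented), which can be checked against the Sommerfeld result S_e = (π²/3) k_B T D(E_F) for a constant density of states:

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant, eV/K

def electronic_entropy(E, D, mu, T):
    """Electronic entropy (units of kB) from a density of states D(E)
    in states/eV on a uniform energy grid E (eV), following Equation (15):
        S_e/kB = -int dE D(E) [ f ln f + (1-f) ln(1-f) ],
    with f the Fermi-Dirac occupation at chemical potential mu."""
    f = 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = -(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    s = np.nan_to_num(s)  # fully occupied or empty states contribute zero
    return float(np.sum(D * s) * (E[1] - E[0]))  # rectangle-rule quadrature
```

Only states within a few k_B T of the chemical potential contribute, so a modest energy window around μ suffices.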

Test Cases: FCC Al and BCC Na
Our method successfully predicts vibrational entropy for Al and Na, as shown in Figure 1. Figure 3 compares the residual errors of various approximations by subtracting off the experimental entropies. Curves labeled "Debye-Waller" and "single-site" neglect correlations among the displacements of different atoms. In this case the entropy reduces to

S_1/k_B = −N ⟨ln (h³ ρ_1)⟩ = 3N [1 + ln(√(2π) σ_x/Λ)], (16)

where ρ_1 is the single-body probability density, and σ_x² = ⟨u_x²⟩ is the mean square displacement along a single Cartesian coordinate. This quantity is related to the Debye-Waller factor [40] that diminishes the diffraction intensity of a peak of wavevector q by the factor exp(−q²⟨u²⟩/3). The displacements are sometimes given in terms of B = 8π²⟨u²⟩/3. In Figure 1, we compare the experimental entropies of Al and Na with the prediction of Equation (16) using experimental values of the B-factor. Given the seeming disparity between crystallographic and thermodynamic methods, the agreement is quite striking.
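The conversion from a published B-factor to a single-site entropy estimate takes only a few lines. This is our own sketch (the B values in the check below are illustrative placeholders, not the experimental data of Refs. [23,24]):

```python
import numpy as np

kB  = 1.380649e-23       # J/K
h   = 6.62607015e-34     # J*s
amu = 1.66053906660e-27  # kg

def single_site_entropy_from_B(B_angstrom2, mass_amu, T):
    """Single-site vibrational entropy per atom (units of kB) from a
    Debye-Waller B-factor, using B = 8 pi^2 <u^2>/3 and the single-site
    form of Equation (16):
        S1/(N kB) = 3 [ 1 + ln( sqrt(2 pi) sigma_x / Lambda ) ],
    with sigma_x^2 = <u^2>/3 the per-coordinate variance."""
    u2 = 3.0 * B_angstrom2 * 1e-20 / (8.0 * np.pi**2)  # <u^2> in m^2
    sigma_x2 = u2 / 3.0
    lam2 = h**2 / (2.0 * np.pi * mass_amu * amu * kB * T)  # Lambda^2
    return 3.0 * (1.0 + 0.5 * np.log(2.0 * np.pi * sigma_x2 / lam2))
```

Because both ⟨u²⟩ and Λ⁻² grow with temperature, the estimate rises roughly as 3 ln T at high temperature, mirroring the classical vibrational entropy.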
Note that the Debye-Waller and single-site entropies exceed the experimental values. The displacement of a single atom applies forces that displace nearby atoms, reducing the total amount of information needed to specify a given configuration U. A similar effect is found in the entropy of liquids, where the mutual information content of pair correlation functions reduces the entropy below the value for an ideal gas at the same overall density [18][19][20]. The mutual information of the two-body correlation function is

I_2 = ∫ f_2(u_1, u_2) ln [ f_2(u_1, u_2)/(f_1(u_1) f_1(u_2)) ] du_1 du_2 = ½ ln [ det(Σ_u)²/det(Σ_{u_1 u_2}) ],

where f_1 and f_2 are the independent and joint probabilities for displacements u_1 and u_2 of near-neighbor atoms, Σ_{u_1 u_2} is a 6 × 6 covariance matrix, and Σ_u is the 3 × 3 single atom covariance matrix σ_000. Figure 3 shows that reducing the single-site entropy by the mutual information of its neighbors results in improved agreement with experiment. Thus, we take S_1 − (z/2)I_2, where z is the coordination number and z/2 is the number of neighbors per site. However, continuing to subtract the mutual information with even further neighbors (not shown) strongly overcorrects at low temperatures.

Figure 3. Residual entropy errors of various approximations relative to experiment (cf. Figure 1). In addition, we show our vibrational quantum model (blue diamonds) calculated from Equation (14) using effective vibrational frequencies calculated by Equation (13); the single-site model (green triangles) from Equation (16); the combined single-site and two-body model (S_1 − (z/2)I_2) (brown crosses); and the quasiharmonic prediction (magenta plus signs) with vibrational frequencies calculated by phonopy. All calculations are performed at the experimentally determined volumes for each temperature. All curves, except for Debye-Waller and single-site, include electronic entropy.
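For jointly Gaussian displacements, the mutual information reduces to log-determinants of covariance blocks, which is a two-line computation. The following is our own illustration (the correlated test matrix is invented):

```python
import numpy as np

def pair_mutual_information(cov6):
    """Mutual information I2 (units of kB) between the displacements of
    two atoms that are jointly Gaussian, from their 6x6 covariance matrix:
        I2 = (1/2) ln[ det(S11) det(S22) / det(S) ],
    where S11, S22 are the 3x3 single-atom diagonal blocks."""
    logdet = lambda M: np.linalg.slogdet(M)[1]
    return 0.5 * (logdet(cov6[:3, :3]) + logdet(cov6[3:, 3:]) - logdet(cov6))

# Uncorrelated atoms carry zero mutual information:
I_zero = pair_mutual_information(np.eye(6))
```

When each Cartesian coordinate pair carries correlation coefficient c, the result is −(3/2) ln(1 − c²), growing without bound as the motions become perfectly correlated.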
To better understand how the covariance matrix and entropy are influenced by the range of correlations, and by our finite MD simulation cells, we study the convergence of covariance matrix elements and the corresponding entropy of Al, including only matrix elements σ_hkl of pairs separated by R ≤ R_hkl = |ha + kb + lc|. Figure 4a,b show that the absolute value of det(σ_hkl) drops rapidly with increasing bond length, suggesting our simulation cell size is sufficient to capture the dominant collective motions of the solid, although some indication of cell size dependence can be seen in the excess correlation around [hkl] = 004 at T = 300 K. Similar decay of correlations was observed in other simulations [8,41]. Comparing T = 900 K with T = 300 K, we see similar variation with R_hkl, while the values at high temperature are nearly two orders of magnitude larger. Figure 4c,d show the complete entropy calculated according to Equation (10) with the covariance matrix Σ_U truncated (i.e., all entries set to zero) beyond R_hkl. Comparing convergence of the 4 × 4 × 4 (256 atom) cell with the 6 × 6 × 6 (864 atom) cell suggests the 4 × 4 × 4 cell is adequate for entropy calculation at high temperatures. Convergence is irregular in the smaller cell at low temperatures (part (d)), and the complete covariance matrix is required for accurate results.
The improvement in entropy upon including the full covariance matrix is evident at high temperatures in Figures 1 and 3 (see curves labeled "classical"). As discussed previously in Section 2.3, it suffers an unavoidable ln T negative divergence at temperatures below the Debye temperature. This divergence is alleviated at low temperatures through the quantum model (Section 2.3) that utilizes effective vibrational frequencies.
The quasiharmonic model, which is quantum-mechanical based on harmonic frequencies obtained from density functional perturbation theory at temperature-dependent volumes, is also quite accurate at all temperatures.

High Entropy Alloy: Vibrational Entropies of MoNbTaW
Although high entropy alloys (HEAs) acquire their name from the entropy of chemical substitution, their vibrational entropy may exceed their substitutional entropy by a considerable margin. Substitutional entropy is relevant for stability mainly because the vibrational entropy of the mixture lies close to the average vibrational entropy of the elements [42]. Here, we investigate the applicability of our covariance method to calculate the vibrational entropy of MoNbTaW [28]. Since chemical substitution is prevalent in HEAs, we must choose a specific arrangement of atoms. We take as representative structures the final configurations from hybrid MC/MD simulations [29], which reflect the temperature variation of chemical order in cells of 128 atoms.
We calculate the vibrational entropy S_v^cm of a specific chemical configuration at each temperature using the covariance matrix Σ_U obtained from MD simulations. Figure 5a plots entropies S_v^cm + S_e of these structures. We compare our prediction with the average experimental entropies of pure elements, S_avg, and with the quasiharmonic vibrational entropies S_v^qha + S_e of a cF16 quaternary Heusler MoNbTaW structure at the same lattice parameters as our MD simulations. The vibrational entropy was nearly independent of the cF16 chemical arrangement. These temperature-dependent lattice parameters were determined by varying the volume until the simulated total pressures vanish on average. It is seen from Figure 5a that both quasiharmonic and covariance matrix entropies are close to, but slightly smaller than, the averaged entropy S_avg of pure elements, consistent with prior calculations [43].

Figure 5. (a) Comparison of the average experimental entropies of the pure elements [44-47], S_avg, with the entropy S_v^qha + S_e calculated using the quasiharmonic approximation, the entropy S_v^cm + S_e calculated from the covariance matrix, and the entropy S_v^pacm + S_e calculated from the pair-averaged covariance matrix (pacm). (b) Convergence of vibrational entropy S_v^cm as simulation time increases, compared with vibrational entropies S_v^qha and S_v^pacm.
The vibrational entropy derived from the covariance matrix converges slowly because these chemically disordered structures lack symmetry and we cannot employ symmetry averaging as discussed in Section 2.4. As a result, the covariance matrix has poor statistics and is hard to converge, as illustrated in Figure 5b. Unfortunately, we lack an extrapolation formula for entropy vs. simulation time. At long times these entropies converge towards the entropies calculated from the quasiharmonic approximation, S_v^qha.
In an effort to alleviate the poor statistics, we introduce a pair-averaged covariance matrix (pacm), Σ̃_U, that maintains the chemical identities at each site while averaging over their chemical environments. The (i, j) element of the full covariance matrix Σ_U is the 3 × 3 matrix σ^{αβ}_{i,j}, where the superscripts remind us that the chemical species at site i is c(i) = α and the chemical species at site j is c(j) = β. Let P^{αβ}_{i,j} be the set of all pairs (i′, j′) such that R_{i′,j′} = R_{i,j} and c(i′) = α and c(j′) = β. We define the (i, j) element of Σ̃_U as

σ̃^{αβ}_{i,j} = (1/N^{αβ}_{i,j}) Σ_{(i′,j′) ∈ P^{αβ}_{i,j}} σ^{αβ}_{i′,j′},

where the sum runs over the set P^{αβ}_{i,j} containing N^{αβ}_{i,j} elements. The entropy computed from Σ̃_U is expected to provide a close upper bound on S_v^cm.
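The pair averaging amounts to grouping the 3 × 3 blocks by (species pair, separation vector) and averaging within each group. A minimal sketch of our own bookkeeping (data layout and names are invented for illustration):

```python
import numpy as np
from collections import defaultdict

def pair_averaged_covariance(blocks, species, Rkey):
    """Sketch of the pair-averaged covariance matrix (pacm): average the
    3x3 blocks sigma_{i,j} over all pairs sharing the same separation
    vector R_{i,j} and the same ordered species pair (alpha, beta).
    blocks[i][j]: 3x3 array; species[i]: chemical label of site i;
    Rkey[i][j]: separation vector as a hashable tuple.
    Returns the averaged blocks keyed by (alpha, beta, R)."""
    sums = defaultdict(lambda: np.zeros((3, 3)))
    counts = defaultdict(int)
    n = len(species)
    for i in range(n):
        for j in range(n):
            key = (species[i], species[j], Rkey[i][j])
            sums[key] += blocks[i][j]
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

Each averaged block then replaces every member of its group when the pacm is reassembled, improving the statistics of each block by the group size.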

BCC to HCP Phase Transition in Titanium
Certain elements and compounds are so strongly anharmonic that the entropy simply cannot be calculated within the harmonic or quasiharmonic approximation. Elements in columns 3 and 4 of the Periodic Table undergo diffusionless (martensitic) phase transformations from BCC (β-phase), stable at high temperature, to HCP (α-phase), stable at low temperature. Harmonic analysis predicts their BCC states to be mechanically unstable at low temperature because they exhibit imaginary vibrational frequency modes. Eigenvectors of these modes describe the transformation pathway [48,49]. The instability prevents application of conventional harmonic or quasiharmonic calculations of the entropy. Our calculation method circumvents this difficulty because it does not require the calculation of vibrational frequencies.
These structural phase transitions are of practical importance, motivating considerable efforts to predict transition temperatures and understand their mechanisms [10,11,[50][51][52][53][54]. Proposed methods include phase space partitioning [50][51][52], effective force constant averaging (temperature dependent effective potentials) [10,11], and an "augmented lattice" model [51]. Predicted transition temperatures range from 1095 K to 1114 K, in general agreement with the experimental transition temperature T_c = 1166 K [55]. We apply our covariance matrix method to calculate vibrational entropy and predict the transition temperature T_c = 1060 K.
We perform AIMD simulations for both BCC and HCP Ti at lattice constants fitted to experimental measurements with quadratic functions of temperature. Considering the scatter of experimental measurements of lattice constants, we choose to fit Refs. [56,57] for lattice parameters of BCC Ti and Refs. [56,58,59] for HCP Ti. To minimize size effects, we prepare simulation cells with the same number of atoms: an orthorhombic 256-atom 4 × 4 × 4 supercell based on a 4-atom unit cell (a = (a, 0, 0); b = (0, a, −a); c = (0, a, a)) for BCC Ti, and an orthorhombic 256-atom 4 × 4 × 4 supercell based on a 4-atom unit cell (a = (a, 0, 0); b = (0, √3a, 0); c = (0, 0, c)) for HCP Ti. A comparison of calculated total entropy S_v^cm + S_e and experimental entropy is illustrated in Figure 6a. Electronic entropies S_e are calculated from Equation (15) with the electronic density of states obtained at the given volume for each temperature. As shown in Figure 7, BCC Ti has a substantially higher electronic entropy than HCP Ti due to the pseudogap at the Fermi energy of the HCP electronic density of states. Formation of the pseudogap drives the Burgers distortion from BCC to HCP [49]. The entropy of HCP Ti from our work compares well to the experimental entropy except for one value at T = 1400 K, which falls in the region where HCP is thermodynamically unstable. The entropy of BCC Ti, in contrast, is overestimated by 0.5 k_B to 1.0 k_B at all temperatures.

Figure 6. Entropies and enthalpies of α (HCP) and β (BCC) Ti in their stable and unstable temperature ranges; the predicted transition temperature is T_c^pred = 1060 K. Enthalpies are obtained by averaging energies over our MD simulations. To place enthalpies on the experimental scale, we shifted all of our calculated enthalpy values so that our enthalpy of α matched the experimental value at T = 800 K.
For both phases our simulation matches measurement well at temperatures below the α → β transition temperature, while at higher temperatures it falls below the experimental enthalpy. Finally, we compute the Gibbs free energy G = H − TS based on our calculated entropy and enthalpy of HCP Ti at T = 800 K, 1000 K, 1200 K and BCC Ti at T = 1200 K, 1400 K, 1600 K, and predict the α-β phase transition temperature T_c^pred = 1060 K (see Figure 6). To understand the overestimate of BCC entropy, which leads to a low estimate of T_c, we compare calculated phonon spectra and the vibrational density of states derived from our force constant matrix (see Figure 8) with results from Ref. [62]. Note that our effective vibrational frequencies fall systematically below experiment, explaining the overestimate of entropy. We tested whether this could be due to errors in the lattice constant, but the impact of volume changes was not sufficient to explain the disagreement. Presumably the fault lies in some aspect of our simulation method. Below, we investigate possible explanations in finite size effects and anharmonicity, but these also turn out to be too small to explain the discrepancy. To evaluate the impact of simulated cell size on the entropy of BCC Ti, we perform entropy calculations for three sizes: 54 atoms, 128 atoms, and 250 atoms. Figure 9 shows a linear relation between the entropy S_v^cm and inverse size 1/N. With larger cells, the entropy increases, and so does the disagreement with experiment. This finite size effect for BCC Ti resembles the finite size effect in high-pressure high-temperature BCC Fe [63], so we believe the effect is real, but it does not explain the overestimation.
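Locating the transition temperature from the two phases' free energies reduces to finding where linear fits of G(T) = H − TS cross. A toy sketch of our own (the constant-H, constant-S inputs below are invented placeholders, not the Ti data):

```python
import numpy as np

def crossing_temperature(T_a, G_a, T_b, G_b):
    """Transition temperature estimated as the crossing point of linear
    least-squares fits to the Gibbs free energies G(T) of two phases."""
    pa = np.polyfit(T_a, G_a, 1)   # G_a ~ pa[0]*T + pa[1]
    pb = np.polyfit(T_b, G_b, 1)
    return (pb[1] - pa[1]) / (pa[0] - pb[0])
```

Because the slope of G(T) is −S, the high-entropy phase always wins at high temperature, and the crossing is well conditioned as long as the two entropies differ appreciably.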
We tested different exchange correlation functionals. LDA [64] increased the vibrational entropy at T = 1200 K by about 0.2 k_B, while SCAN [65] decreased it by about 0.1 k_B. It seems that the choice of functional may provide a partial explanation for the 0.4 k_B excess.

Figure 9. Calculated vibrational entropy S_v^cm versus the inverse of the simulated number of atoms, 1/N, for BCC Ti at T = 1200 K. Experiment [62] is at T = 1208 K.

Anharmonicity
We investigate the effect of anharmonic corrections to the single-site probability density. At the lowest order of anharmonicity, the probability density p_a(u) includes the isotropic term R⁴ = (x² + y² + z²)² and the anisotropic term A = x⁴ + y⁴ + z⁴, which are invariant under cubic symmetry operations. The anharmonic probability density is hence approximated by

p_a(u) = (1/Z) exp[−(R²/2σ² + aR⁴ + bA)], (22)

where Z is the normalization factor and the integration volume V is the Wigner-Seitz cell of an atom. In practice we cut off the integration at the cube V = [−8σ, +8σ]³, as justified by the rapid vanishing of p_a(u). We calculate the averages ⟨R²⟩, ⟨R⁴⟩, and ⟨A⟩ during our simulation, then we fit values of σ, a, and b by solving the simultaneous nonlinear equations

⟨R²⟩_{p_a} = ⟨R²⟩, ⟨R⁴⟩_{p_a} = ⟨R⁴⟩, ⟨A⟩_{p_a} = ⟨A⟩, (24)

where the probability p_a(u) is given by Equation (22). Finally, the positional part of the anisotropic entropy, S_a, is calculated from

S_a/k_B = −∫_V p_a(u) ln p_a(u) du.

Table 1 compares the influence of anharmonicity in FCC Al with BCC Ti, and presents numerical values of the averages in Equation (24) and the solutions for σ², a, b and the entropy S. Anharmonicity tends to reduce the entropy for both FCC Al and BCC Ti, yet the reduction is insufficient to explain our entropy excess in BCC Ti. Differences in the signs of the a and b parameters between FCC Al and BCC Ti imply opposite deviations of our harmonic model from the simulated distribution. In FCC Al, the simulated distribution is narrower, with a higher probability at the origin than in our harmonic model; in BCC Ti, the simulated distribution is broader and lower at the origin. Our anharmonic model captures these deviations, as shown in the marginal distributions p(x) in Figure 10.

Figure 10. Marginal distributions p(x), with fits to the harmonic model (Equation (19)) and to the anharmonic model p_a (Equation (22), lines labeled ANI).
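The moment-matching step can be illustrated by evaluating ⟨R²⟩, ⟨R⁴⟩, and ⟨A⟩ of the model density by grid quadrature over the truncation cube. This sketch is our own (assuming the cubic-invariant forms R⁴ and A = x⁴ + y⁴ + z⁴ for the anharmonic terms, as above); in the Gaussian limit a = b = 0 the moments reduce to the known values 3σ², 15σ⁴, and 9σ⁴:

```python
import numpy as np

def model_moments(sigma, a, b, half_width=8.0, n=81):
    """Moments <R^2>, <R^4>, <A> of the anharmonic model density (in the
    spirit of Equation (22)),
        p_a(u) ~ exp[ -( R^2/(2 sigma^2) + a R^4 + b A ) ],
    with R^2 = x^2+y^2+z^2 and A = x^4+y^4+z^4, computed by grid
    quadrature over the cube [-half_width*sigma, +half_width*sigma]^3."""
    x = np.linspace(-half_width * sigma, half_width * sigma, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    R2 = X**2 + Y**2 + Z**2
    A = X**4 + Y**4 + Z**4
    w = np.exp(-(R2 / (2.0 * sigma**2) + a * R2**2 + b * A))
    norm = w.sum()  # grid spacing cancels in the normalized moments
    return (R2 * w).sum() / norm, (R2**2 * w).sum() / norm, (A * w).sum() / norm
```

Wrapping these three moments in a residual against the simulated averages and handing it to a nonlinear solver then yields the fitted σ, a, and b.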
To examine the anisotropies, we plot the marginal distributions p_a(x, y) in Figure 11. In FCC Al, atomic displacements are reduced in the near-neighbor [110] directions.

Conclusions
We apply the information-theoretic entropy formula Equation (4) to evaluate the vibrational entropies of solids from the variance and covariance of atomic displacements. This approach generalizes prior work on the information-based entropy of liquids [18][19][20]. In the case of liquids, the single atom entropy (ideal gas term) overestimates the entropy and must be corrected by removing the mutual information of the pair correlation functions. In the case of solids, the variance of individual atomic displacements can be measured through diffraction experiments that yield the Debye-Waller B-factor. Thus, we find a crystallographic approach to estimate the thermodynamic entropy. However, as in the case of liquids, the one-body approximation overestimates the entropy by the information content of correlation functions, and we can improve the entropy estimate by including the covariance of atom pairs. This might be possible to achieve through diffraction experiments that measure the second-order thermal diffuse scattering [40]. It is easy to achieve through AIMD simulations of the atomic displacement covariance matrix, as we demonstrate in this paper for elemental Al and Na.
The method applies generally to solids, but the particular implementation given here relies on the accuracy of a Gaussian approximation to the distribution function. Hence it is most likely to work when the atomic displacements are small, and it is likely to fail in molecular solids where coherent bond rotations are present. Although we mainly demonstrated the method for elemental solids, it also holds in principle for complex crystalline and noncrystalline solids. We give an example of such an application for the MoNbTaW high entropy alloy.
The quasiharmonic method may be equally accurate and more efficient than our AIMD method when anharmonicity mainly enters through thermal expansion, but a simulation-based approach in principle includes additional anharmonic contributions. Doing so may require correlations beyond those captured by the Gaussian approximation (see Section 3.3.1). Our simulation-based approach seems most useful when the simulation has already been completed for other purposes. Then, the entropy comes essentially for free on top of whatever other information was sought.
In some cases the quasiharmonic method cannot be applied due to the presence of imaginary frequency vibrational modes. The high temperature BCC phases of columns 3 and 4 of the periodic table exhibit such modes; they achieve mechanical stability only through their entropies. For elemental Ti, our AIMD method is capable of estimating the vibrational entropy, although the modes seem slightly softer than observed in experiment. We also point out the unexpected strong contribution to stability from the electronic entropy.
Our simulation approach based on the probability distribution is more flexible than the velocity-velocity correlation method because it does not rely on an underlying harmonic model, at least in the high temperature limit. Further, because it does not depend upon the dynamics, it can be used in conjunction with Monte Carlo simulation in addition to molecular dynamics. It requires only a single representative configuration provided the cell is sufficiently large, rather than demanding a long continuous trajectory.

Data Availability Statement: Data will be provided upon request.