# Monte Carlo Based Techniques for Quantum Magnets with Long-Range Interactions


## Abstract


## 1. Introduction

LiHoF$_4$ in an external field realises an Ising magnet in a transverse magnetic field [102,103,104,105]. A recent experiment with the two-dimensional Heisenberg ferromagnet Fe$_3$GeTe$_2$ demonstrates that phase transitions and continuous symmetry breaking can be implemented by circumventing the Hohenberg–Mermin–Wagner theorem with long-range interactions [106]. This material belongs to the recently discovered class of two-dimensional magnetic van der Waals systems [107,108]. Further, dipolar interactions play a crucial role in the spin-ice state of the frustrated magnetic pyrochlore materials Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ [90,91,92,93,94,95,96,97,98,99,100,101].

## 2. Quantum Phase Transitions

#### 2.1. Critical Exponents in the Thermodynamic Limit

#### 2.2. Finite-Size Scaling below the Upper Critical Dimension

#### 2.3. Finite-Size Scaling above the Upper Critical Dimension

## 3. Monte Carlo Integration

## 4. Series-Expansion Monte Carlo Embedding

#### 4.1. Motivation and Basic Concepts

#### 4.2. Perturbation Method: Perturbative Continuous Unitary Transformations

#### 4.3. Unravelling Cluster Additivity

#### 4.4. Calculating Cluster Contributions

- $n=0$
- We can directly calculate the ground-state energy $E_0(C)$ on a cluster $C$, as it is already additive:$$E_0(C) = \langle 0|\mathcal{H}_{\mathrm{eff}}|0\rangle_C\,,$$
- $n=1$
- To calculate the irreducible amplitudes $t_{i;j}^{(1)}(C)$ associated with the hopping process $b_j^{\dagger}b_i^{\phantom{\dagger}}$ in $\mathcal{H}_{\mathrm{eff}\,1}$, we need to subtract the zero-particle channel, as can be seen from Equation (96). However, we only need to subtract the ground-state energy if the hopping process is local, $b_i^{\dagger}b_i^{\phantom{\dagger}}$, since the ground-state energy only contributes to diagonal processes. Thus, we calculate$$\begin{aligned}t_{i;j}^{(1)}(C) &= \langle 1;j|\mathcal{H}_{\mathrm{eff}}|1;i\rangle_C &&\text{if } i\ne j\,,\\ t_{i;i}^{(1)}(C) &= \langle 1;i|\mathcal{H}_{\mathrm{eff}}|1;i\rangle_C - E_0(C) &&\text{else}\,.\end{aligned}$$
- $n=2$
- In the two-particle case, we have to distinguish between three processes: pair hoppings ($t_{i,j;k,l}^{(2)}(C)\, b_l^{\dagger}b_k^{\dagger}b_j^{\phantom{\dagger}}b_i^{\phantom{\dagger}}$ with four distinct indices), correlated hoppings ($t_{i,j;i,k}^{(2)}(C)\, b_k^{\dagger}b_j^{\phantom{\dagger}}n_i$), and density–density interactions ($t_{i,j;i,j}^{(2)}(C)\, n_j n_i$). The free quasiparticle hopping is already irreducible and nothing has to be done, but for the correlated hopping contribution, we have to subtract the free one-particle hopping. In the case of the two-particle density–density interactions, we need to subtract the local one-particle hoppings as well as the ground-state energy, as this process is diagonal (cf. Equation (97)). Therefore, we calculate$$\begin{aligned}t_{i,j;k,l}^{(2)}(C) &= \langle 2;k,l|\mathcal{H}_{\mathrm{eff}}|2;i,j\rangle_C &&\text{if } i\ne j\ne k\ne l\,,\\ t_{i,j;i,k}^{(2)}(C) &= \langle 2;i,k|\mathcal{H}_{\mathrm{eff}}|2;i,j\rangle_C - t_{j;k}^{(1)}(C) &&\text{if } i\ne j\ne k\,,\\ t_{i,j;i,j}^{(2)}(C) &= \langle 2;i,j|\mathcal{H}_{\mathrm{eff}}|2;i,j\rangle_C - t_{i;i}^{(1)}(C) - t_{j;j}^{(1)}(C) - E_0(C) &&\text{if } i\ne j\,.\end{aligned}$$
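The subtraction scheme above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the callables `me0`, `me1`, `me2` are hypothetical stand-ins for the cluster matrix elements $\langle \cdots|\mathcal{H}_{\mathrm{eff}}|\cdots\rangle_C$, and the index bookkeeping is simplified (the shared index of a correlated hopping is assumed to appear first in both bra and ket).

```python
def irreducible_amplitudes(me0, me1, me2, sites):
    """Subtract lower-particle channels to obtain irreducible amplitudes.

    me0(): <0|H_eff|0>_C, me1(i, j): <1;j|H_eff|1;i>_C,
    me2(i, j, k, l): <2;k,l|H_eff|2;i,j>_C  (hypothetical stand-ins).
    """
    # n = 0: the ground-state energy is already additive.
    E0 = me0()

    # n = 1: subtract E0 only from local (diagonal) hopping processes.
    t1 = {}
    for i in sites:
        for j in sites:
            t1[(i, j)] = me1(i, j) - (E0 if i == j else 0.0)

    # n = 2: pair hopping is irreducible as-is; correlated hopping loses
    # the free one-particle hopping; density-density interactions lose
    # both local hoppings and the ground-state energy.
    def t2(i, j, k, l):
        amp = me2(i, j, k, l)
        if (i, j) == (k, l):                 # density-density (diagonal)
            return amp - t1[(i, i)] - t1[(j, j)] - E0
        if i == k:                           # correlated hopping
            return amp - t1[(j, l)]
        return amp                           # pair hopping

    return E0, t1, t2
```

With toy matrix elements, one can check directly that the fully diagonal two-particle amplitude has all lower channels removed.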

#### 4.5. Energy Spectrum and Observables

#### 4.5.1. Ground-State Energy and Elementary Excitation Gap

#### 4.5.2. Spectral Properties

#### 4.6. White Graph Decomposition

#### 4.6.1. Graph Theory

#### 4.6.2. Graph Generation

#### 4.6.3. White Graphs for Long-Range Interactions

#### 4.7. Monte Carlo Embedding of White Graphs

#### 4.7.1. Conventional Nearest-Neighbour Embedding

#### 4.7.2. Embedding for Models with Long-Range Interactions

#### 4.7.3. Monte Carlo Algorithm for the Long-Range Embedding Problem

- **Shift move:**
- This Monte Carlo move introduces confined random fluctuations to the current configuration, independent of the strength of the algebraically decaying long-range interactions. It is especially important for larger decay exponents $\sigma$, where the configurations are much more likely to be confined. First, we randomly select a vertex $n_{\mathrm{sel}} \in \{1,\cdots,n_s\}$ from a discrete uniform distribution with $p^{\mathrm{sel}} = 1/n_s$. Second, for the fluctuation, we draw a shift value $d^{\mathrm{prop}} \in \{-n_s,\cdots,n_s\}$ from a discrete uniform distribution $p^{\mathrm{shift}} = 1/(2n_s+1)$. In one dimension, we draw a single value; in higher dimensions, we draw one value per component. Subsequently, we add the shift to the position of the selected vertex and propose the position:$$i^{\mathrm{prop}} = i_{n_{\mathrm{sel}}} + d^{\mathrm{prop}}\,.$$The proposed position might already be occupied by another vertex, so we have to check for overlaps. In one dimension, we reset the proposed position to the original one if there is an overlap, while in higher dimensions, we explicitly allow overlaps. As we remember from above, this distinction is also present in the reference sums, in one dimension in Equation (185) compared to higher dimensions in Equation (187). If an overlap occurs in dimensions higher than one, the target summand is explicitly set to zero such that these configurations cannot contribute (otherwise, the sum would diverge).
Then, we calculate the Metropolis acceptance probability:$$\begin{aligned}p_{\mathrm{acc}}^{\mathrm{shift}} &= \min\left(1,\frac{\pi(c^{\mathrm{prop}})}{\pi(c^{\mathrm{curr}})}\right)\\ &= \min\left(1,\frac{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{prop}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{prop}})\right)^2}}{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{curr}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{curr}})\right)^2}}\right)\,.\end{aligned}$$
- **Rift move:**
- In contrast to the previous move, which introduces fluctuations independent of the current configuration and of the long-range interaction strength, "rift moves" are introduced to better capture the correct asymptotic behaviour induced by the algebraically decaying interactions. These moves can propose very large distances between vertices, but can also do the opposite, closing the "rift" between vertices when the configuration is split into essentially two clusters. First, we select a vertex $n_{\mathrm{sel}} \in \{1,\cdots,n_s-1\}$ from the vertex set with discrete uniform probability $p^{\mathrm{sel}} = 1/(n_s-1)$, explicitly excluding the last vertex. In one dimension, we can order the vertex set such that the first vertex is the one with the smallest positional value and the last the one with the largest value, i.e., we order by $i_n < i_m$, where $n,m$ are vertex indices and $i_n$, $i_m$ the associated sites on the lattice. The same ordering was also used when we solved the reference sum in Equation (185). In higher dimensions, a similar ordering comes at a much higher computational cost, so we stick to the vertex numbering given by the array indices, i.e., the order is $n < m$. Here, it is also important that the vertex labelling of the reference sum coincides with the labelling of the chain graph. To capture the physical asymptotics of the system, we draw random values from a $\zeta$-function distribution.
In one dimension, we draw from$$p^{\mathrm{rift}}(r^{\mathrm{prop}}) = \frac{(r^{\mathrm{prop}})^{-\gamma}}{\zeta(\gamma)}$$and propose the positions$$i_n^{\mathrm{prop}} = i_n + (r^{\mathrm{prop}} - r^{\mathrm{curr}})\,.$$In higher dimensions, we have no such ordering and, therefore, extend the distribution to negative values (we refer to it as a "double-sided" $\zeta$-function distribution) and draw random values from$$p^{\mathrm{rift}}(r^{\mathrm{prop}}) = \frac{(1+|r^{\mathrm{prop}}|)^{-\gamma}}{2\zeta(\gamma)-1}$$and propose the positions$$i_{n>n_{\mathrm{sel}}}^{\mathrm{prop}} = i_{n>n_{\mathrm{sel}}} + (r^{\mathrm{prop}} - r^{\mathrm{curr}})\,.$$The underlying idea is that, if there is a large distance between two vertices $i_{n_{\mathrm{sel}}}$ and $i_{n_{\mathrm{sel}}+1}$, we can close the "rift" of the entire configuration instead of introducing a new one between $i_{n_{\mathrm{sel}}+1}$ and $i_{n_{\mathrm{sel}}+2}$.
The transition weights for this move are given by$$\tilde{T}(c^{\mathrm{curr}} \to c^{\mathrm{prop}}) = p^{\mathrm{sel}} \times p(r^{\mathrm{prop}})\,,$$$$\tilde{T}(c^{\mathrm{prop}} \to c^{\mathrm{curr}}) = p^{\mathrm{sel}} \times p(r^{\mathrm{curr}})\,.$$With these, we can calculate the Metropolis–Hastings acceptance probability in one dimension,$$\begin{aligned}p_{\mathrm{acc}}^{\mathrm{rift}} &= \min\left(1,\frac{\pi(c^{\mathrm{prop}})}{\pi(c^{\mathrm{curr}})}\frac{p(r^{\mathrm{curr}})}{p(r^{\mathrm{prop}})}\right)\\ &= \min\left(1,\frac{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{prop}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{prop}})\right)^2}}{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{curr}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{curr}})\right)^2}}\frac{(r^{\mathrm{prop}})^{\gamma}}{(r^{\mathrm{curr}})^{\gamma}}\right),\end{aligned}$$and in higher dimensions,$$p_{\mathrm{acc}}^{\mathrm{rift}} = \min\left(1,\frac{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{prop}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{prop}})\right)^2}}{\sqrt{\left(f_{n_s}^{\mathrm{ref}}(c^{\mathrm{curr}})\right)^2 + R^2\left(f_{n_s}(c^{\mathrm{curr}})\right)^2}}\frac{\prod_{n=1}^{d}(1+|r_n^{\mathrm{prop}}|)^{\gamma}}{\prod_{n=1}^{d}(1+|r_n^{\mathrm{curr}}|)^{\gamma}}\right)\,.$$As above, we randomly draw $y \in [0,1)$, accept if $y < p_{\mathrm{acc}}^{\mathrm{rift}}$, and update the current configuration if the proposed configuration is accepted.
A typical rift move is illustrated in Figure 11b.
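The two moves can be condensed into a minimal one-dimensional sketch, under the following assumptions (none of which come from the text above): a configuration is a NumPy array of distinct integer vertex positions, and the hypothetical callable `pi` stands in for the weight $\sqrt{(f_{n_s}^{\mathrm{ref}})^2 + R^2 f_{n_s}^2}$. NumPy's `Generator.zipf(a)` samples exactly the $\zeta$-function distribution $p(r) = r^{-\gamma}/\zeta(\gamma)$ used by the rift move.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_step(config, pi, gamma):
    """One update: shift or rift move with 1D overlap handling (sketch)."""
    ns = len(config)
    if rng.random() < 0.5:
        # Shift move: uniform vertex, uniform shift in {-ns, ..., ns}.
        n_sel = int(rng.integers(ns))
        d = int(rng.integers(-ns, ns + 1))
        prop = config.copy()
        prop[n_sel] += d
        if len(set(prop.tolist())) < ns:
            return config                 # 1D overlap: keep old position
        ratio = 1.0                       # symmetric proposal
    else:
        # Rift move: stretch/shrink the gap after vertex n_sel, moving
        # the whole tail of the (position-ordered) configuration.
        n_sel = int(rng.integers(ns - 1))
        order = np.argsort(config)
        r_curr = int(config[order[n_sel + 1]] - config[order[n_sel]])
        r_prop = int(rng.zipf(gamma))     # p(r) = r^-gamma / zeta(gamma)
        prop = config.copy()
        prop[order[n_sel + 1:]] += r_prop - r_curr
        # Hastings correction p(r_curr)/p(r_prop) = (r_prop/r_curr)^gamma.
        ratio = (r_prop / r_curr) ** gamma
    p_acc = min(1.0, pi(prop) / pi(config) * ratio)
    return prop if rng.random() < p_acc else config
```

Since a rift move always leaves a gap of at least one site and the shift move rejects overlaps, the vertices remain distinct throughout the chain, mirroring the hard-core constraint of the one-dimensional reference sum.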