Open Access
*Entropy* **2018**, *20*(8), 585; https://doi.org/10.3390/e20080585

Article

Investigation of Finite-Size 2D Ising Model with a Noisy Matrix of Spin-Spin Interactions

Scientific Research Institute for System Analysis, Russian Academy of Sciences, 117218 Moscow, Russia


Received: 20 June 2018 / Accepted: 2 August 2018 / Published: 7 August 2018

## Abstract

We analyze changes in the thermodynamic properties of a spin system when it passes from the classical two-dimensional Ising model to the spin glass model, where spin-spin interactions are random in their values and signs. Formally, the transition reduces to a gradual change in the amplitude of the multiplicative noise (distributed uniformly with a mean equal to one) superimposed over the initial Ising matrix of interacting spins. Accounting for the noise, we obtain analytical expressions that are valid for lattices of finite size. We compare our results with the results of computer simulations performed for square N = L × L lattices with linear dimensions L = 50–1000. We find experimentally the dependencies of the critical values (the critical temperature, the internal energy, the entropy and the specific heat) as well as the dependencies of the ground-state energy and magnetization on the amplitude of the noise. We show that when the variance of the noise reaches one, the ground state jumps from the fully correlated state to an uncorrelated state and its magnetization jumps from 1 to 0. At the same time, the phase transition that is present at lower levels of noise disappears.

Keywords: Ising model; noisy connections; ground state; free energy; internal energy; magnetization; specific heat; entropy; critical temperature

## 1. Introduction

Calculation of the partition function is an essential problem of statistical physics and informatics. A few conceptual models allow exact solutions [1,2,3,4,5,6]. Among these, the 2D Ising model [7], though simple, deserves special attention because of its importance for investigating critical effects. Having contributed much to the development of the spin glass theory, the Edwards-Anderson model [8] and the Sherrington-Kirkpatrick model [9] are also worth mentioning. However, there are not many models that permit exact solutions, which is why numerical methods are mostly used for tackling complex systems. Of these, two methods are most suitable for our purpose. The first is the Monte-Carlo method [10,11]. It enables us to analyze a system and determine its critical parameters quite accurately [12,13,14,15,16]. A thorough consideration of the method can be found in the papers [17,18]. Unfortunately, the method requires a great deal of computation and does not allow direct calculation of the free energy. The second method uses the approach of [19,20], which has recently given rise to a fast algorithm [21,22] that finds the free energy by computing the determinant of a matrix. The algorithm is popular because it allows the user to compute the free energy quite accurately and, at the same time, determine the energy and configuration of the ground state of a system.

The methods of statistical physics help researchers understand the behavior of complex neural nets and evaluate the capacity of neural-net storage systems [23,24,25,26,27,28]. Machine learning and computer-aided image processing require fast calculations of the partition function of specific interconnection matrices [29,30]. The realization of Hinton’s ideas [31,32] gave rise to algorithms of deep learning and image processing [33,34,35,36]. Based on the optimization of the free energy of a spin (neuron) system, these algorithms, from the formal viewpoint, come down to the optimization of the spin correlation in neighboring layers or within a single layer of a neural net. It should be kept in mind that the system has a phase transition: the spin correlation grows abruptly at the critical point (the correlation length becomes nearly as large as the size of the whole system). In this case the optimization of the neural network becomes temperature dependent, which makes the learning algorithm almost impracticable.

The aim of this paper is to study the properties of a finite spin system whose Hamiltonian is defined as the quadratic Functional (1). The functional is often used in machine learning and image processing. The quantities ${s}_{i}=\pm 1$ may stand either for the pixel class (object/background) in an image [35] or for the neuron activity indicator in a Bayes neural network [36]. We will use the physical notation, calling the quantities ${s}_{i}=\pm 1$ spins. The model under consideration has two limiting cases. The conventional 2D Ising model with regular interconnections presents the first case; the Edwards-Anderson model is the second. The properties of our model lie somewhere in between. We introduce adjusting parameters into Functional (1), which allows us to pass from the 2D Ising model to the Edwards-Anderson model in a smooth manner and investigate the thermodynamic characteristics of the system in the transient state.

To avoid misunderstanding, let us point out two things. First, our interest is in finite systems. For this reason, there is an expected discrepancy with the Onsager results obtained at $N\to \infty $. Second, we cannot use the results of the spin glass theory in full because the finite system under consideration is ergodic: it does not have multiple phase transitions caused by frustrations, nor does it provide self-averaging [37,38].

## 2. Essential Expressions, the Equation of State

Let us consider a system described by the Hamiltonian:

$$E=-\frac{1}{N}{\displaystyle \sum _{i>j}^{N}{J}_{ij}}{s}_{i}{s}_{j}.$$

This system consists of $N$ Ising spins ${s}_{i}=\pm 1\left(i=1,2,\dots ,N\right)$, positioned at the nodes of a planar grid, the nodes being numbered by index $i$. Only interactions with the four nearest neighbors are considered. The spin-spin interactions ${J}_{ij}$ are random and defined as

$${J}_{ij}=J\cdot (1+{\epsilon}_{ij}),$$

where ${\epsilon}_{ij}$ is a random zero-mean variable uniformly distributed over the interval ${\epsilon}_{ij}\in [-\eta ,\eta ]$. We have chosen the uniform distribution of ${\epsilon}_{ij}$ to be able to control ${J}_{ij}$: when $\eta \le 1$, all interactions are positive $\left({J}_{ij}\ge 0\right)$. For the sake of simplicity, we assume that $J=1$.
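For illustration, the noise model can be sketched in a few lines of numpy. The function name `noisy_couplings` and the bond-array layout (separate arrays for horizontal and vertical bonds, free boundaries) are our own conventions, not taken from the paper:

```python
import numpy as np

def noisy_couplings(L, eta, J=1.0, seed=None):
    """Nearest-neighbour couplings J_ij = J*(1 + eps), eps ~ U[-eta, eta],
    for an L x L square lattice with free boundaries.
    Returns horizontal bonds of shape (L, L-1) and vertical bonds (L-1, L)."""
    rng = np.random.default_rng(seed)
    Jh = J * (1.0 + rng.uniform(-eta, eta, size=(L, L - 1)))
    Jv = J * (1.0 + rng.uniform(-eta, eta, size=(L - 1, L)))
    return Jh, Jv

Jh, Jv = noisy_couplings(400, eta=0.9, seed=0)
eps = np.concatenate([Jh.ravel(), Jv.ravel()]) - 1.0
print(eps.var())                  # close to eta^2/3 = 0.27
print((Jh > 0).all(), (Jv > 0).all())
```

For $\eta \le 1$ every bond stays positive, in line with the control property mentioned above; the sample variance of the noise reproduces ${\sigma}_{\eta}^{2}={\eta}^{2}/3$.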

Our interest is the free energy of the system:

$$f=-\frac{1}{N}\mathrm{ln}Z,$$

where the partition function $Z={\displaystyle {\sum}_{S}{e}^{-N\beta E(S)}}$ is defined as a sum over all possible configurations $S$ and $\beta =1/kT$ is the inverse temperature. Knowledge of the free energy makes it possible to compute the basic measurable parameters of the system:

$$U=\frac{\partial f}{\partial \beta},\text{\hspace{1em}}{\sigma}^{2}=-\frac{{\partial}^{2}f}{\partial {\beta}^{2}},\text{\hspace{1em}}C=-{\beta}^{2}\frac{{\partial}^{2}f}{\partial {\beta}^{2}},$$

where the internal energy $U=\langle E\rangle $ is the ensemble average at given $\beta $, ${\sigma}^{2}=\langle {E}^{2}\rangle -{\langle E\rangle}^{2}$ is the variance of the energy and $C={\beta}^{2}{\sigma}^{2}$ is the specific heat.

Along with that, we are interested in the configuration ${S}_{0}$ of the ground state, its energy ${E}_{0}=E({S}_{0})$ and the magnetization ${M}_{0}=\frac{1}{N}{\displaystyle \sum _{i=1}^{N}{S}_{0i}}$.
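A minimal sketch of how the energy per spin and the magnetization can be evaluated for a given configuration (the helper names and the bond-array layout for free boundaries are ours, for illustration only):

```python
import numpy as np

def energy_per_spin(s, Jh, Jv):
    """E = -(1/N) * sum over nearest-neighbour bonds J_ij s_i s_j for a spin
    field s of shape (L, L) with s_i = +-1, free boundary conditions.
    Jh has shape (L, L-1) (horizontal bonds), Jv has shape (L-1, L)."""
    N = s.size
    bond_sum = (Jh * s[:, :-1] * s[:, 1:]).sum() + (Jv * s[:-1, :] * s[1:, :]).sum()
    return -bond_sum / N

def magnetization(s):
    """M = (1/N) * sum of spins."""
    return s.mean()

L = 50
s0 = np.ones((L, L))                              # fully correlated state
Jh = np.ones((L, L - 1)); Jv = np.ones((L - 1, L))  # eta = 0 couplings
print(energy_per_spin(s0, Jh, Jv))  # -2*(1 - 1/L) = -1.96
print(magnetization(s0))            # 1.0
```

At zero noise the all-up state gives $E=-2(1-1/L)$, which tends to $-2$ as the lattice grows, consistent with the finite-size corrections discussed below.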

The properties of the system depend on the dimension $N$ of the system and the adjusting parameter $\eta $. Unfortunately, we cannot allow for the effect of both parameters simultaneously, so we consider the contribution of each separately.

#### 2.1. The Effect of the Finite Grid Dimension

Let us consider how the fact of the grid having a finite dimension affects its properties. Let us take $\eta =0$ as the starting point. In this case the behavior of the system can be described by the expressions (see reference [39]), which are true for finite systems with free boundary conditions:

$$\begin{array}{l}f=-\frac{\mathrm{ln}2}{2}-\mathrm{ln}\left(\mathrm{cosh}z\right)-\frac{1}{2\pi}{\displaystyle \underset{0}{\overset{\pi}{\int}}\mathrm{ln}\left(1+\sqrt{1-{\kappa}^{2}{\mathrm{cos}}^{2}\theta}\right)}d\theta ,\\ U=-\frac{1}{1+\Delta}\left\{2\mathrm{tanh}z+\frac{{\mathrm{sinh}}^{2}z-1}{\mathrm{sinh}z\cdot \mathrm{cosh}z}\left[\frac{2}{\pi}{K}_{1}-1\right]\right\},\\ {\sigma}^{2}=\frac{4{J}^{2}{\mathrm{coth}}^{2}z}{\pi {(1+\Delta )}^{2}}\cdot \left\{{a}_{1}\left({K}_{1}-{K}_{2}\right)-\left(1-{\mathrm{tanh}}^{2}z\right)\left[\frac{\pi}{2}+\left(2{a}_{2}{\mathrm{tanh}}^{2}z-1\right){K}_{1}\right]\right\},\end{array}$$

where

$$\begin{array}{l}z=\frac{2\beta J}{1+\Delta},\text{\hspace{1em}}\kappa =\frac{2\mathrm{sinh}z}{(1+\delta ){\mathrm{cosh}}^{2}z},\text{\hspace{1em}}\Delta =\frac{5}{4L},\text{\hspace{1em}}\delta =\frac{{\pi}^{2}}{{L}^{2}},\\ {a}_{1}=p{(1+\delta )}^{2},\text{\hspace{1em}}{a}_{2}=2p-1,\text{\hspace{1em}}p=\frac{{\left(1-{\mathrm{sinh}}^{2}z\right)}^{2}}{{(1+\delta )}^{2}{\mathrm{cosh}}^{4}z-4{\mathrm{sinh}}^{2}z}.\end{array}$$

Here ${\mathrm{K}}_{1}={\mathrm{K}}_{1}\left(\kappa \right)$ and ${\mathrm{K}}_{2}={\mathrm{K}}_{2}\left(\kappa \right)$ are the complete elliptic integrals of the first and second kind, respectively:

$${\mathrm{K}}_{1}(\kappa )={\displaystyle \underset{0}{\overset{\pi /2}{\int}}{(1-{\kappa}^{2}{\mathrm{sin}}^{2}\phi )}^{-1/2}d\phi},\text{\hspace{1em}\hspace{1em}\hspace{1em}}{\mathrm{K}}_{2}(\kappa )={\displaystyle \underset{0}{\overset{\pi /2}{\int}}{(1-{\kappa}^{2}{\mathrm{sin}}^{2}\phi )}^{1/2}d\phi}$$

Expressions (5)–(7) are the well-known Onsager solution [7], which is true for $N\to \infty $, modified for the case of finite $N$. Though derived for $N\gg 1$, the expressions agree well with the experimental data even at relatively small grid dimensions $\left(L\sim 25\right)$. As could be expected, when $N\to \infty $, Formulas (6) give $p\to 1$, ${a}_{1,2}\to 1$, $\Delta \to 0$, $\delta \to 0$ and Expressions (5) turn into the well-known ones [7].
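The finite-size expressions for $f$ and $U$ are straightforward to evaluate numerically. The sketch below is our own; note that scipy's `ellipk` takes the parameter $m={\kappa}^{2}$, not the modulus $\kappa $:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def finite_size_f_U(beta, L, J=1.0):
    """Free energy f and internal energy U from the finite-size
    modification of the Onsager solution (free boundary conditions)."""
    Delta = 5.0 / (4.0 * L)
    delta = np.pi**2 / L**2
    z = 2.0 * beta * J / (1.0 + Delta)
    kappa = 2.0 * np.sinh(z) / ((1.0 + delta) * np.cosh(z)**2)
    integral, _ = quad(lambda th: np.log(1.0 + np.sqrt(1.0 - kappa**2 * np.cos(th)**2)),
                       0.0, np.pi)
    f = -0.5 * np.log(2.0) - np.log(np.cosh(z)) - integral / (2.0 * np.pi)
    K1 = ellipk(kappa**2)   # complete elliptic integral, parameter m = kappa^2
    U = -(1.0 / (1.0 + Delta)) * (2.0 * np.tanh(z)
        + (np.sinh(z)**2 - 1.0) / (np.sinh(z) * np.cosh(z)) * (2.0 * K1 / np.pi - 1.0))
    return f, U

f0, _ = finite_size_f_U(1e-12, 400)
print(f0)   # approaches -ln 2 as beta -> 0
```

As a sanity check, at $\beta \to 0$ the integrand reduces to $\mathrm{ln}2$ and $f\to -\mathrm{ln}2$, in agreement with the high-temperature limit obtained in Section 2.2.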

Expressions (5) agree excellently with the experimental data: the relative error is less than 0.2% at $L=50$. With growing $L$, the error decreases rapidly and is within the limits of experimental error at $L=1000$ (${10}^{-5}$ for ${\sigma}^{2}$). By way of comparison, Figures 3, 6 and 7 give the plots of Functions (5) for $L=400$.

Expressions (5) allow us to obtain the $N$-dependences of the critical values of the inverse temperature, the internal energy and the energy variance of the system:

$$\begin{array}{l}{\beta}_{c0}={\beta}_{\infty}\left(1+\frac{1}{L}\right),\\ {U}_{c0}=-\sqrt{2}\left(1-\frac{1}{L}\right),\\ {\sigma}_{c0}^{2}=2.4\cdot \left(\mathrm{ln}L-0.5\right),\end{array}$$

where ${\beta}_{\infty}=\frac{1}{2}\mathrm{ln}\left(\sqrt{2}+1\right)$ is the critical value for $L\to \infty $ [7].

#### 2.2. The Effect of Noise

Let us consider the random character of the quantities ${J}_{ij}\left(\eta \ne 0\right)$. Let $D\left(E\right)$ be the number of states of energy $E$. Then the sum over states can be presented as $Z={{\displaystyle {\sum}_{E}D(E)e}}^{-N\beta E}$. Passing from summation to integration, we get (to within an insignificant constant):

$$Z\sim {\displaystyle \underset{-\infty}{\overset{\infty}{\int}}{e}^{N[\mathrm{\Psi}(E)-\beta E]}dE},$$

where $\mathrm{\Psi}\left(E\right)=\mathrm{ln}D\left(E\right)/N$. Applying the saddle-point method to integral (9), we get $Z\sim \mathrm{exp}\left[-Nf\left(\beta \right)\right]$, where

$$f(\beta )=\beta E-\mathrm{\Psi}(E),\text{\hspace{1em}\hspace{1em}\hspace{1em}}\frac{d\mathrm{\Psi}(E)}{dE}=\beta .$$

The first expression in (10) defines the free energy, the second determines $E$ at the saddle point where the derivative of function $\mathrm{\Psi}\left(E\right)-\beta E$ turns to zero.

The form of the spectral function $\mathrm{\Psi}\left(E\right)$ is known only for the one-dimensional Ising model. That is why we turn to the so-called n-vicinity method [28] to calculate the spectral function. The idea of the method is to divide the whole space of ${2}^{N}$ states into $N$ classes ($n$-vicinities) and approximate the energy distribution in each class by a corresponding Gaussian. In brief, the approach is as follows. Let us denote the ground-state configuration as ${S}_{0}$. Let class ${\Omega}_{n}$ be the set of configurations ${S}_{n}$ that differ from ${S}_{0}$ in that they have $n$ spins directed oppositely to the spins in ${S}_{0}$. The number of configurations in the class is equal to the binomial coefficient $\left(\begin{array}{c}N\\ n\end{array}\right)$, all configurations having the same (relative) magnetization $m={N}^{-1}\cdot {S}_{n}{S}_{0}^{\mathrm{T}}=1-2n/N$. The distribution of state energies within the n-vicinity was shown [28] to follow the normal distribution ${D}_{n}\left(E\right)$:

$${D}_{n}(E)\approx \left(\begin{array}{c}N\\ n\end{array}\right)\sqrt{\frac{N}{2\pi {\sigma}_{m}^{2}}}\mathrm{exp}\left[-\frac{1}{2}N{\left(\frac{E-{E}_{m}}{{\sigma}_{m}}\right)}^{2}\right],$$

where

$${E}_{m}={E}_{0}{m}^{2},\text{\hspace{1em}\hspace{1em}\hspace{1em}}{\sigma}_{m}^{2}=2(1-{m}^{2})\left(1-\alpha {m}^{2}\right),\text{\hspace{1em}\hspace{1em}\hspace{1em}}\alpha =1-{\sigma}_{h0}^{2}/2.$$

Here ${E}_{0}$ is the ground state energy, ${\sigma}_{h0}^{2}$ is the variance of ground-state local fields. In this case we have ${\sigma}_{h0}^{2}={\sigma}_{\eta}^{2}/(1+{\sigma}_{\eta}^{2})$, where ${\sigma}_{\eta}^{2}={\eta}^{2}/3$ is the variance of interconnections ${J}_{ij}$.
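The Gaussian parameters of Equation (12) are easy to tabulate; the following small sketch (the function name is ours) collects them for a given noise amplitude:

```python
import numpy as np

def n_vicinity_params(m, E0, eta):
    """Mean E_m and variance sigma_m^2 of the energy Gaussian in the
    n-vicinity with relative magnetization m, per Equation (12)."""
    sigma_eta2 = eta**2 / 3.0                      # variance of the coupling noise
    sigma_h0_2 = sigma_eta2 / (1.0 + sigma_eta2)   # variance of ground-state local fields
    alpha = 1.0 - sigma_h0_2 / 2.0
    Em = E0 * m**2
    sigma_m2 = 2.0 * (1.0 - m**2) * (1.0 - alpha * m**2)
    return Em, sigma_m2

print(n_vicinity_params(0.0, -2.0, 0.5))   # (0.0, 2.0): center of the spectrum
print(n_vicinity_params(1.0, -2.0, 0.5))   # (-2.0, 0.0): the ground state itself
```

At $m=0$ the variance takes its maximal value ${\sigma}_{m}^{2}=2$, while at $m=\pm 1$ it vanishes, as it must for the single ground-state configuration.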

The sought-for distribution $D\left(E\right)$ is found by summing ${D}_{n}\left(E\right)$ over all $n$. Using the Stirling formula and passing from summation to integration with respect to the variable $m=1-2n/N$, we get for $D\left(E\right)$:

$$\mathrm{D}(E)={\displaystyle \sum _{n=0}^{N}{D}_{n}(E)}=\frac{N}{2\pi}{\displaystyle \underset{0}{\overset{1}{\int}}{e}^{-NF(m,E)}\frac{dm}{{\sigma}_{m}\sqrt{1-{m}^{2}}}},$$

where

$$F(m,E)=-\mathrm{ln}2+\frac{1}{2}\left[(1-m)\mathrm{ln}(1-m)+(1+m)\mathrm{ln}(1+m)+\frac{{(E-{E}_{m})}^{2}}{{\sigma}_{m}^{2}}\right].$$

If we evaluate integral (13) by the saddle-point method, for the spectral function we get $\mathrm{\Psi}\left(E\right)=-F\left(m,E\right)$, where $m$ is the solution of equation $\partial F\left(m,E\right)/\partial m=0$. Let us combine (13)–(14) and (9)–(10). Then the free energy can be written as

$$f(\beta )=F(m,E)+\beta E,$$

where variables $m=m\left(\beta \right)$ and $E=E\left(\beta \right)$ are derived from the equations:

$$\mathrm{ln}\frac{1+m}{1-m}+2\frac{E-{E}_{m}}{{\sigma}_{m}}\frac{\partial}{\partial m}\left(\frac{E-{E}_{m}}{{\sigma}_{m}}\right)=0,\text{\hspace{1em}\hspace{1em}\hspace{1em}}\frac{E-{E}_{m}}{{\sigma}_{m}^{2}}+\beta =0.$$

It is easy to notice that the set of Equations (16) is solvable when $m=0$. Correspondingly, when the value of $\beta $ is less than a certain critical value ${\beta}_{c}$, (16) and (12) give us ${E}_{m}=0$, ${\sigma}_{m}^{2}=2$ and $E=-2\beta $, the free energy taking the form $f\left(\beta \right)=-\mathrm{ln}2-{\beta}^{2}$. The phase transition occurs when $\beta $ allows yet another solution to (16) at $m\ne 0$. Note that substituting the second equation from (16) into the first one allows us to eliminate the variable $E$. Doing so and performing several transformations, we obtain the equation of state that contains only one variable $m$:

$$\frac{1}{4m}\mathrm{ln}\frac{1+m}{1-m}=\overline{\beta}-{\overline{\beta}}^{2}\left(1-{m}^{2}\right)\left(1+\frac{1}{2}{\sigma}_{\eta}^{2}\right),$$

where $\overline{\beta}=\beta /r$. Here we introduced the adjusting coefficient $r$ to allow for the finite grid dimension: $r=1$ when $L\to \infty $, while $r=1.11$ gives excellent agreement with experiments at $L\sim 400$. The critical temperature is defined as the value $\beta ={\beta}_{c}$ at which a nontrivial solution of (17) appears. This solution has to be found numerically: when $\beta >{\beta}_{c}$, we find $m\ne 0$ that satisfies (17) and compute the corresponding value of the energy $E={E}_{m}-\beta {\sigma}_{m}^{2}$. Substitution of these values into (15) yields the corresponding value of $f\left(\beta \right)$.

Unfortunately, the n-vicinity method has an essential limitation: it is applicable only when the condition ${\left({\displaystyle \sum {J}_{ij}}\right)}^{2}/\left(N{\displaystyle \sum {J}_{ij}^{2}}\right)\ge 4\mathrm{ln}2$ holds. In our case this condition reduces to $(1+{\sigma}_{\eta}^{2})\cdot \mathrm{ln}2\le 1$, that is, to $\eta <1.2$. For such relatively small values of $\eta $, Formulas (15)–(17) give ${\beta}_{c}$ and $f\left(\beta \right)$ that predict the experimental results well (see Figure 1 and Figure 2).

#### 2.3. Evaluating the Spectral Density

The algorithm we use allows us to compute function $f=f\left(\beta \right)$ and its derivatives. In turn, this allows us to investigate how energy distribution $D(E)=\mathrm{exp}\left[N\mathrm{\Psi}(E)\right]$ varies with the noise amplitude. Indeed, it is easy to derive from Formulae (10) the equation for the spectral function:

$$\mathrm{\Psi}(E)=\beta E-f(\beta ),\text{\hspace{1em}\hspace{1em}\hspace{1em}}E=\frac{df}{d\beta}$$

and its derivatives:

$$\frac{d\mathrm{\Psi}}{dE}=\beta ,\text{\hspace{1em}\hspace{1em}\hspace{1em}}\frac{{d}^{2}\mathrm{\Psi}}{d{E}^{2}}={\left(\frac{{d}^{2}f}{d{\beta}^{2}}\right)}^{-1}$$

Note that $\mathrm{\Psi}(E)$ is the entropy up to a constant and Equations (18) are the well-known Legendre transformation, which is applicable for analyzing the spectral density of finite-dimension models [40,41]. It follows from these equations that as $\beta $ varies from $\beta =0$ to $\beta =\infty $, $E$ changes from 0 to ${E}_{0}$, and for each value of $\beta $ we obtain a pair of values of $E$ and $\mathrm{\Psi}(E)$. In this way we determine the form of the function $\mathrm{\Psi}(E)$ and its derivatives. The plots of the function $\mathrm{\Psi}(E)$ and its derivatives presenting the experimental data are given in Section 4.
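The Legendre construction (18) can be illustrated with the $m=0$ branch $f(\beta )=-\mathrm{ln}2-{\beta}^{2}$ (zero noise, $\beta <{\beta}_{c}$) standing in for the numerically computed free energy; the finite-difference derivative mirrors the scheme used in Section 3:

```python
import numpy as np

def f_paramagnetic(beta):
    # m = 0 branch of the free energy at zero noise (valid for beta < beta_c)
    return -np.log(2.0) - beta**2

def spectral_point(beta, h=1e-5):
    """One (E, Psi(E)) point from the Legendre relations
    Psi(E) = beta*E - f(beta) with E = df/dbeta (central differences)."""
    E = (f_paramagnetic(beta + h) - f_paramagnetic(beta - h)) / (2.0 * h)
    Psi = beta * E - f_paramagnetic(beta)
    return E, Psi

E, Psi = spectral_point(0.3)
print(E)                                 # -0.6 = -2*beta on this branch
print(Psi, np.log(2.0) - E**2 / 4.0)     # Psi(E) = ln 2 - E^2/4 here
```

On this branch $E=-2\beta $ and $\mathrm{\Psi}(E)=\mathrm{ln}2-{E}^{2}/4$, consistent with the series (20) at $\eta =0$.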

The minimum of the function ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ at the point $E=0$ changes into a maximum as the noise amplitude grows. Let us find the $\eta $ at which this occurs. It can be noticed that as $E\to 0$ the entropy can be presented as the series:

$$\mathrm{\Psi}(E)=\mathrm{ln}2-\frac{1}{2}\frac{{E}^{2}}{{\sigma}_{J}^{2}}+\frac{{\mu}_{4}}{4!}\frac{{E}^{4}}{{\sigma}_{J}^{4}},\text{\hspace{1em}\hspace{1em}\hspace{1em}}{\sigma}_{J}^{2}=2\langle {J}_{ij}^{2}\rangle =2(1+{\sigma}_{\eta}^{2}),$$

where ${\mu}_{4}=\langle {E}^{4}\rangle /{\sigma}_{J}^{4}$ is the fourth cumulant, which in our case is described by the expression [28]:

$${\mu}_{4}=4\left(5-6{\sigma}_{\eta}^{2}-\frac{9}{5}{\sigma}_{\eta}^{4}\right)/{\sigma}_{J}^{4}.$$

From (20)–(21) it follows that in the center point of the curve ($E=0$) quantity ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ is determined by the expression:

$${\frac{{d}^{2}\mathrm{\Psi}}{d{E}^{2}}|}_{E=0}=-\frac{1}{2(1+{\sigma}_{\eta}^{2})}$$

and the fourth derivative ${{d}^{4}\mathrm{\Psi}/d{E}^{4}|}_{E=0}={\mu}_{4}/{\sigma}_{J}^{4}$ changes its sign at $\eta ={\eta}_{c}$, when ${\mu}_{4}=0$:

$${\eta}_{c}={\left[5\left(\sqrt{2}-1\right)\right]}^{1/2}.$$
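One can verify numerically that the cumulant (21) vanishes exactly at the threshold (23):

```python
import numpy as np

def mu4(eta):
    """Fourth reduced cumulant from Equation (21), with sigma_eta^2 = eta^2/3."""
    s2 = eta**2 / 3.0
    sJ2 = 2.0 * (1.0 + s2)
    return 4.0 * (5.0 - 6.0 * s2 - (9.0 / 5.0) * s2**2) / sJ2**2

eta_c = np.sqrt(5.0 * (np.sqrt(2.0) - 1.0))
print(eta_c)        # ~1.44
print(mu4(eta_c))   # ~0: the fourth derivative of Psi changes sign here
```

Indeed, ${\eta}_{c}^{2}=5(\sqrt{2}-1)$ is the positive root of $5-2{\eta}^{2}-{\eta}^{4}/5=0$, i.e., of the numerator of (21) written in terms of $\eta $.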

## 3. The Experiment Description

We make intensive use of the Kasteleyn-Fisher algorithm [19,20] to compute the free energy of the 2D square spin system. The algorithm gives exact results because finding the partition function is reduced to the computation of the determinant of a matrix generated in accordance with the model under consideration. The algorithm permits us to calculate exactly the free energy of a spin system on an arbitrary planar graph with arbitrary links in polynomial time. More information about the algorithm can be found in [21]. In this paper, we use the realization [22] of the algorithm, which gives the same results in a shorter time. Using this algorithm, we were able to examine the behavior of the free energy $f=f(\beta ;\eta )$ and its derivatives for several lattices of different dimensions $N=L\times L$. Additionally, paper [22] offers an algorithm for searching for the ground state. This algorithm helped us to investigate the energy and magnetization of the ground state as functions of the noise amplitude. For each value of $\eta $, we generated a large number of matrices, but the results were practically the same when we changed one matrix for another.

Let us point out that both algorithms we use are applicable only to planar lattices. This means that we considered only lattices with free boundary conditions, because lattices with periodic boundary conditions are not planar graphs. The linear size of the lattice varied from $L=25$ to $L={10}^{3}$. Most of the plots present the results for $L=400$. The results for other sizes did not differ qualitatively.

The free energy is computed to 15-digit accuracy after the decimal point. Because we use the finite-difference method to compute the derivatives, the number of significant digits after the decimal point is about 7 for $U\left(\beta \right)$ and 4 for ${\sigma}^{2}(\beta )$. With large grid dimensions $\left(L\sim 1000\right)$ and $\beta >1$, the computation error becomes too large and the plots of the second derivatives start exhibiting oscillations. It is interesting to note that the introduction of a little noise into the grid interconnections allows us to decrease these oscillations.

## 4. Experimental Results

In the experiments, we calculate the free energy and its derivatives and find the ground-state configuration and energy. The emphasis is on finding the critical point and the corresponding quantities. The location of the maximum of the curve ${\sigma}^{2}={\sigma}^{2}(\beta )$ is used to find the critical temperature. The most important experimental data are presented in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 and Table 1.

#### 4.1. The Free and Internal Energy

The experimental dependencies $f=f(\beta )$ and $U=U(\beta )$ are shown in Figure 1 and Figure 2. It is seen from Figure 1 that the curves go down with $\eta $ because the ground-state energy grows in magnitude. When the noise is small ($\eta <1.2$), the curves of the free energy $f(\beta )$ and internal energy $U(\beta )$ almost merge (Figure 1 and Figure 2). When $\eta <1.7$, the curves $U(\beta )$ demonstrate a cusp (Figure 2), which corresponds to the phase transition. When $\eta \approx 1.7$, the cusp disappears, and a further increase of the noise changes only the asymptotic behavior of the curves $f(\beta )$ and $U(\beta )$ according to (26).

#### 4.2. The Energy Variance

The curves ${\sigma}^{2}={\sigma}^{2}(\beta )$ are shown in Figure 3. It should be noted that, because the n-vicinity method gives a piecewise-linear approximation of the energy variance, the red marks in Figure 3 indicate values obtained by using the generalization of the Onsager solution to the finite-dimension case according to Formula (5). The formula gives perfect agreement with the experimental data, yet it is applicable only in the zero-noise case.

The behavior of the curves ${\sigma}^{2}={\sigma}^{2}(\beta )$ near the point $\beta =0$ is quite expected for any $\eta $: when $\beta =0$, the energy variance is equal to ${\sigma}_{0}^{2}$ and, according to (20), grows gradually in proportion to the noise variance ${\sigma}_{\eta}^{2}={\eta}^{2}/3$. At large $\beta $ the behavior of the curves ${\sigma}^{2}={\sigma}^{2}(\beta )$ depends strongly on $\eta $. It is seen in Figure 3 that the energy variance peaks corresponding to the phase transition are observed only at $\eta <1.7$. The peaks become lower with growing $\eta $ and move to the right at the same time. When $\eta >1.8$, the peaks disappear altogether; only the maximum at $\beta =0$ remains.

It is interesting that all the curves in Figure 3a have a common intersection point near $\beta \approx 0.29$. We could not determine why this is so. The intersection point moves slowly to the right with growing noise amplitude.

#### 4.3. The Critical Temperature

The critical temperature is defined by the location of the maximum of the curve ${\sigma}^{2}={\sigma}^{2}(\beta )$ or by the presence of a cusp on it. Figure 4 shows how the variance peak location and height vary with growing noise. Holding true only for $\eta <1.2$, the numerical solution of the equation of state (17) gives ${\beta}_{c}$ that agrees with the experimental data perfectly. For greater $\eta $ it is possible to use the approximate expression resulting from the experiment:

$${\beta}_{c}\simeq {\beta}_{c0}\left(1+\frac{{\sigma}_{\eta}^{2}}{2}\right),$$

where ${\beta}_{c0}$ is the zero-noise critical value resulting from (8). The peak height decreases linearly with growing noise amplitude:

$${\sigma}_{c}^{2}\simeq {\sigma}_{c0}^{2}\left(1-{\sigma}_{\eta}\right),$$

where ${\sigma}_{c0}^{2}$ is the variance at $\eta =0$ defined in (8). It follows that at $\eta \approx \sqrt{3}$, ${\sigma}_{c}^{2}$ falls to zero. This means that when $\eta >\sqrt{3}$, the variance peak disappears and we can say that the critical temperature is zero.
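The empirical fits (24)–(25), combined with the zero-noise values (8), give a compact recipe for the critical parameters; the sketch below (the function name is ours) only reproduces these approximate fits:

```python
import numpy as np

beta_inf = 0.5 * np.log(np.sqrt(2.0) + 1.0)   # Onsager critical point, L -> infinity

def critical_fits(eta, L):
    """Approximate critical inverse temperature (24) and variance peak
    height (25), using the zero-noise finite-size values from (8)."""
    s_eta = eta / np.sqrt(3.0)                 # noise standard deviation
    beta_c0 = beta_inf * (1.0 + 1.0 / L)
    sigma_c0_2 = 2.4 * (np.log(L) - 0.5)
    beta_c = beta_c0 * (1.0 + s_eta**2 / 2.0)
    sigma_c2 = sigma_c0_2 * (1.0 - s_eta)
    return beta_c, sigma_c2

for eta in (0.0, 1.0, np.sqrt(3.0)):
    print(eta, critical_fits(eta, 400))
# beta_c grows with eta; the peak height falls to zero at eta = sqrt(3)
```

In this picture the variance peak vanishes exactly at $\eta =\sqrt{3}$ (${\sigma}_{\eta}=1$), marking the disappearance of the phase transition.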

#### 4.4. The Ground State

The results we obtained testify that when the noise amplitude is $\eta \approx 1.7$ (at ${\sigma}_{\eta}\approx 1$), the system changes qualitatively. The ground-state configuration experiences the most noticeable changes (see Figure 5). Clearly, with zero noise the ground state is fully correlated, that is, all spins are the same: ${s}_{i}=1$. This situation persists as long as all matrix elements ${J}_{ij}>0$, that is, for $\eta <1$. However (see Figure 5), the ground-state energy proved to remain almost the same for ${\sigma}_{\eta}$ as large as ${\sigma}_{\eta}\approx 1$. Then it starts decreasing gradually and approaches the asymptotic value [42]:

$${E}_{0}=-1.317{\sigma}_{\eta},$$

corresponding to the energy of the ground state in the Edwards-Anderson model. The ground-state magnetization changes stepwise from 1 to 0 when the noise deviation comes close to unity, ${\sigma}_{\eta}\approx 1$. A similar instability was discussed in [43,44].

#### 4.5. The Entropy

The change of the ground-state configuration and energy results in a change of energy distribution density $\mathrm{\Psi}(E)$. The curves of $\mathrm{\Psi}(E)$ and its derivatives are shown in Figure 6 and Figure 7.

The disappearance of the phase transition is easy to notice if we look at the curve of the second derivative ${d}^{2}\mathrm{\Psi}/d{E}^{2}$. It is seen in Figure 7a that the dip in the middle of the curve ($E=0$) rises with growing $\eta $ and, according to (23), the minimum of ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ at $E=0$ turns into a maximum when $\eta \approx 1.5$. The peaks at the points $E=\pm {U}_{c}$ separate with growing $\eta $ (${U}_{c}\to {E}_{0}$) and their height ${d}^{2}\mathrm{\Psi}/d{E}^{2}=-{\sigma}_{c}^{-2}$ decreases until they disappear completely at $\eta \approx 1.7$.

When $\eta >1.7$, the curve ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ has a noticeably convex shape and the phase-transition peaks disappear. Moreover, in this case the function ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ is well described by the expression:

$$\frac{{d}^{2}\mathrm{\Psi}}{d{E}^{2}}=-\frac{1}{{\sigma}_{J}^{2}\left(1-{\epsilon}^{2}\right)},\text{\hspace{1em}\hspace{1em}\hspace{1em}}\epsilon =\frac{E}{2{E}_{0}}\left(1+\frac{{E}^{2}}{{E}_{0}^{2}}\right).$$

Formula (27) gives a good approximation of the experimental data (accurate to 0.5% over the energy interval $0\le \left|E\right|\le 0.91\left|{E}_{0}\right|$).
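As a consistency check, at $E=0$ Formula (27) reduces to the central value (22). The sketch below (our own; the value of ${E}_{0}$ is taken from the asymptote (26) purely for illustration) makes the comparison explicit:

```python
import numpy as np

def d2Psi_dE2(E, E0, eta):
    """Approximation (27) for the second derivative of the spectral
    function, claimed accurate for |E| <= 0.91*|E0| at eta > 1.7."""
    sJ2 = 2.0 * (1.0 + eta**2 / 3.0)            # sigma_J^2 from (20)
    eps = (E / (2.0 * E0)) * (1.0 + E**2 / E0**2)
    return -1.0 / (sJ2 * (1.0 - eps**2))

eta = 2.0
E0 = -1.317 * (eta / np.sqrt(3.0))              # asymptotic ground-state energy (26)
center = d2Psi_dE2(0.0, E0, eta)
print(center, -1.0 / (2.0 * (1.0 + eta**2 / 3.0)))  # (27) at E = 0 matches (22)
```

Since $\epsilon (0)=0$, the central value is $-1/{\sigma}_{J}^{2}=-1/\left[2(1+{\sigma}_{\eta}^{2})\right]$, identical to (22).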

## 5. Conclusions

In this paper, we have considered the Ising model on a two-dimensional grid with noise-polluted interconnections. In the limiting case $N\to \infty $ such a system demonstrates the following properties: with low noise the system has all the characteristics of the conventional Ising model, while with high noise it turns into the Edwards-Anderson spin glass model. The goal of our experiments was to observe the transition between these two limiting cases in the finite-dimension system $\left(N\le {10}^{6}\right)$. It proved that when the noise is weak (${\sigma}_{\eta}<1$), the behavior of the system is much like the behavior of the conventional Ising model. We expected that with heavy noise (${\sigma}_{\eta}\gg 1$) the behavior of the system would be like that of the Edwards-Anderson model. However, the experimental results differ significantly from this expectation. It turned out that even when the noise is relatively weak (${\sigma}_{\eta}\approx 1$), the system undergoes considerable changes.

First, when ${\sigma}_{\eta}\approx 1$, the energy spectrum $D(E)$ changes radically (this is clearly seen in Figure 7): the curves of ${d}^{2}\mathrm{\Psi}/d{E}^{2}$ have a two-humped form at ${\sigma}_{\eta}<1$ and become simply convex at ${\sigma}_{\eta}>1$. Moreover, the ground-state magnetization changes to zero when ${\sigma}_{\eta}>1$. This means that when the threshold value $\eta =\sqrt{3}$ is surpassed, the ground-state configuration moves away from the initial state by a Hamming distance of ${\scriptscriptstyle \frac{1}{2}}N$. In other words, the system undergoes a zero-temperature phase transition. The transition is accompanied by the change of the ground-state energy from ${E}_{0}=-2J$ to the asymptotic value (26).

Second, the experimental relation between the critical temperature and the noise variance differs greatly from the well-known expression [8] $k{T}_{c}={\left({\scriptscriptstyle \frac{2}{9}}{\displaystyle {\sum}_{\alpha}\langle {J}_{i\alpha}^{2}\rangle}\right)}^{1/2}$, which in our terms takes the form:

$${\beta}_{c}=\frac{3}{2\sqrt{2(1+{\sigma}_{\eta}^{2})}}$$

We can see that the classical theory predicts that ${\beta}_{c}$ should fall with the growing deviation of the noise. Moreover, Expression (28) predicts finite values of ${\beta}_{c}$ for arbitrarily large $\eta $. The experiment yields the opposite result: in accordance with (24), ${\beta}_{c}$ grows in proportion to ${\sigma}_{\eta}^{2}$. The experiment also shows that ${\beta}_{c}$ grows with $\eta $ and reaches its maximum ${\beta}_{c}=0.625$ when $\eta \to \sqrt{3}$; the phase transition disappears at $\eta >\sqrt{3}$ (${\sigma}_{\eta}>1$). It can be said conceptually that when the threshold value $\eta =\sqrt{3}$ is surpassed, the jump ${T}_{c}\to 0$ occurs.

In our opinion, the difference between the experiment and the theoretical predictions is due to the finite dimension of the system. First, the finite system is ergodic and even at low temperatures does not have spontaneous magnetization, which can easily be tested with the help of a Monte-Carlo algorithm. Second, the self-averaging principle used for building the theory for $N\to \infty $ is not realizable for finite $N$. Additionally, the use of the terms “critical temperature” and “phase transition” is not quite correct in the description of finite-dimension systems. For our estimates, we use the approximate Expressions (5) and (6), which are valid for the special case of free boundary conditions and finite $L$. More general and more accurate estimates can be obtained using the results of the papers [45,46], where the authors analyzed the Ising random-bond model with a tunable fraction of negative bonds, and the paper [47], where the finite size of the lattice was taken into account accurately.

Finite-dimension grids are of interest in image processing and machine learning. In our paper, the grid dimensions were $N=L\times L$ with $L$ ranging from 25 to 1000. If we consider a planar grid as a model of a flat pixel image, such dimensions are very common. The main conclusion that can be drawn from our results is that learning algorithms based on free energy optimization are temperature insensitive under the most common condition $\eta \gg 1$, because there is no observable phase transition in this case.

## Author Contributions

The authors contributed equally. All authors participated in the design of the survey, its realization, and in the writing of the manuscript.

## Funding

This research received no external funding.

## Acknowledgments

We thank V.S. Dotsenko for valuable discussions and helpful comments.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Baxter, R.J. Exactly Solved Models in Statistical Mechanics; Academic Press: London, UK, 1982.
2. Stanley, H. Introduction to Phase Transitions and Critical Phenomena; Clarendon Press: Oxford, UK, 1971.
3. Becker, R.; Doring, W. Ferromagnetism; Springer: Berlin, Germany, 1939.
4. Huang, K. Statistical Mechanics; Wiley: New York, NY, USA, 1987.
5. Kubo, R. An analytic method in statistical mechanics. Busserion Kenk. **1943**, 1, 1–13.
6. Dixon, J.M.; Tuszynski, J.A.; Clarkson, P. From Nonlinearity to Coherence, Universal Features of Nonlinear Behaviour in Many-Body Physics; Clarendon Press: Oxford, UK, 1997.
7. Onsager, L. Crystal statistics. A two-dimensional model with an order–disorder transition. Phys. Rev. **1944**, 65, 117–149.
8. Edwards, S.F.; Anderson, P.W. Theory of spin glasses. J. Phys. F Met. Phys. **1975**, 5, 965.
9. Sherrington, D.; Kirkpatrick, S. Solvable model of a spin-glass. Phys. Rev. Lett. **1975**, 35, 1792.
10. Metropolis, N.; Ulam, S. The Monte Carlo method. J. Am. Stat. Assoc. **1949**, 44, 335–341.
11. Fishman, G.S. Monte Carlo: Concepts, Algorithms, and Applications; Springer: Berlin, Germany, 1996.
12. Bielajew, A.F. Fundamentals of the Monte Carlo Method for Neutral and Charged Particle Transport; The University of Michigan: Ann Arbor, MI, USA, 2001.
13. Foulkes, W.M.C.; Mitas, L.; Needs, R.J.; Rajagopal, G. Quantum Monte Carlo simulations of solids. Rev. Mod. Phys. **2001**, 73, 33.
14. Lyklema, J.W. Monte Carlo study of the one-dimensional quantum Heisenberg ferromagnet near T = 0. Phys. Rev. B **1983**, 27, 3108–3110.
15. Marcu, M.; Muller, J.; Schmatzer, F.-K. Quantum Monte Carlo simulation of the one-dimensional spin-S xxz model. II. High precision calculations for S = ½. J. Phys. A **1985**, 18, 3189–3203.
16. Häggkvist, R.; Rosengren, A.; Lundow, P.H.; Markström, K.; Andren, D.; Kundrotas, P. On the Ising model for the simple cubic lattice. Adv. Phys. **2007**, 56, 653–755.
17. Binder, K. Finite size scaling analysis of Ising model block distribution functions. Z. Phys. B Condens. Matter **1981**, 43, 119–140.
18. Binder, K.; Luijten, E. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models. Phys. Rep. **2001**, 344, 179–253.
19. Kasteleyn, P. Dimer statistics and phase transitions. J. Math. Phys. **1963**, 4, 287–293.
20. Fisher, M. On the dimer solution of planar Ising models. J. Math. Phys. **1966**, 7, 1776–1781.
21. Karandashev, Y.M.; Malsagov, M.Y. Polynomial algorithm for exact calculation of partition function for binary spin model on planar graphs. Opt. Mem. Neural Netw. (Inf. Opt.) **2017**, 26, 87–95.
22. Schraudolph, N.; Kamenetsky, D. Efficient exact inference in planar Ising models. In NIPS 2008. Available online: https://arxiv.org/abs/0810.4401 (accessed on 24 October 2008).
23. Amit, D.; Gutfreund, H.; Sompolinsky, H. Statistical mechanics of neural networks near saturation. Ann. Phys. **1987**, 173, 30–67.
24. Kohring, G.A. A high precision study of the Hopfield model in the phase of broken replica symmetry. J. Stat. Phys. **1990**, 59, 1077–1086.
25. Van Hemmen, J.L.; Kuhn, R. Collective phenomena in neural networks. In Models of Neural Networks; Domany, E., van Hemmen, J.L., Schulten, K., Eds.; Springer: Berlin, Germany, 1992.
26. Martin, O.C.; Monasson, R.; Zecchina, R. Statistical mechanics methods and phase transitions in optimization problems. Theor. Comput. Sci. **2001**, 265, 3–67.
27. Karandashev, I.; Kryzhanovsky, B.; Litinskii, L. Weighted patterns as a tool to improve the Hopfield model. Phys. Rev. E **2012**, 85, 041925.
28. Kryzhanovsky, B.V.; Litinskii, L.B. Generalized Bragg–Williams equation for systems with arbitrary long-range interaction. Dokl. Math. **2014**, 90, 784–787.
29. Yedidia, J.S.; Freeman, W.T.; Weiss, Y. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. Inf. Theory **2005**, 51, 2282–2312.
30. Wainwright, M.J.; Jaakkola, T.; Willsky, A.S. A new class of upper bounds on the log partition function. IEEE Trans. Inf. Theory **2005**, 51, 2313–2335.
31. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science **2006**, 313, 504–507.
32. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. **2006**, 18, 1527–1554.
33. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature **2015**, 521, 436.
34. Lin, H.W.; Tegmark, M.; Rolnick, D. Why does deep and cheap learning work so well? J. Stat. Phys. **2017**, 168, 1223–1247.
35. Wang, C.; Komodakis, N.; Paragios, N. Markov random field modeling, inference & learning in computer vision & image understanding: A survey. Comput. Vis. Image Understand. **2013**, 117, 1610–1627.
36. Krizhevsky, A.; Hinton, G.E. Using very deep autoencoders for content-based image retrieval. In Proceedings of the European Symposium on Artificial Neural Networks (ESANN-2011), Bruges, Belgium, 27–29 April 2011.
37. Gorban, A.N.; Gorban, P.A.; Judge, G. Entropy: The Markov ordering approach. Entropy **2010**, 12, 1145–1193.
38. Dotsenko, V.S. Physics of the spin-glass state. Phys.-Uspekhi **1993**, 36, 455–485.
39. Karandashev, I.M.; Kryzhanovsky, B.V.; Malsagov, M.Y. The analytical expressions for a finite-size 2D Ising model. Opt. Mem. Neural Netw. **2017**, 26, 165–171.
40. Häggkvist, R.; Rosengren, A.; Andrén, D.; Kundrotas, P.; Lundow, P.H.; Markström, K. Computation of the Ising partition function for 2-dimensional square grids. Phys. Rev. E **2004**, 69, 046104.
41. Beale, P.D. Exact distribution of energies in the two-dimensional Ising model. Phys. Rev. Lett. **1996**, 76, 78–81.
42. Kryzhanovsky, B.; Malsagov, M. The spectra of local minima in spin-glass models. Opt. Mem. Neural Netw. (Inf. Opt.) **2016**, 25, 1–15.
43. Colangeli, M.; Giardinà, C.; Giberti, C.; Vernia, C. Nonequilibrium two-dimensional Ising model with stationary uphill diffusion. Phys. Rev. E **2018**, 97, 030103.
44. Bodineau, T.; Presutti, E. Surface tension and Wulff shape for a lattice model without spin flip symmetry. Ann. Henri Poincaré **2003**, 4, 847–896.
45. Ohzeki, M.; Nishimori, H. Analytical evidence for the absence of spin glass transition on self-dual lattices. J. Phys. A Math. Theor. **2009**, 42, 332001.
46. Thomas, C.K.; Katzgraber, H.G. Simplest model to study reentrance in physical systems. Phys. Rev. E **2011**, 84, 040101.
47. Izmailian, N. Finite size and boundary effects in critical two-dimensional free-fermion models. Eur. Phys. J. B **2017**, 90, 160.

**Figure 1.**Free energy $f(\beta )$ at different noise amplitudes $\eta =0;0.4;0.8;1.2;1.6;2.0;2.5;3$. Lower curves correspond to greater values of $\eta $. The red marks indicate the values that are found by the n-vicinity method with the aid of Formulae (15)–(17) at zero noise amplitude. The grid dimension $L=400$.

**Figure 2.** (**a**) Internal energy $U(\beta )$ at different noise amplitudes $\eta \in [0,1.7]$ spaced by 0.1 intervals. The red marks indicate the values found by the n-vicinity method with the aid of Formulae (15)–(17) at zero noise amplitude. (**b**) $\eta \in [1.8,3.0]$ spaced by 0.1 intervals; the lower curves correspond to greater $\eta $. The grid dimension $L=400$.

**Figure 3.** The energy variance ${\sigma}^{2}(\beta )$ at different noise amplitudes $\eta $: (**a**) $\eta \in [0,1.7]$ and (**b**) $\eta \in [1.8,3.0]$, spaced by 0.1 intervals. The red marks indicate the values ${\sigma}^{2}$ produced by Formula (5). The grid dimension $L=400$.

**Figure 4.** (**a**) The critical temperature ${\beta}_{c}$ and (**b**) the energy variance at the critical temperature ${\sigma}_{c}^{2}$ as functions of the noise amplitude $\eta $. The solid lines correspond to Formulae (24)–(25). $L=400$.

**Figure 5.** (**a**) Energy ${E}_{0}$ and (**b**) magnetization ${M}_{0}$ of the ground state of the system as functions of the noise amplitude. $L=400$.

**Figure 6.** (**a**) Spectral density $\mathrm{\Psi}(E)$ and (**b**) its first derivative for noise amplitudes $\eta =0;0.3;0.7;1.1;1.5;1.8;2.2;2.5;3$. The marks show the zero-noise curve. The grid dimension $L=400$.

**Figure 7.** The second derivative of the spectral density $\ddot{\mathrm{\Psi}}(E)$ at (**a**) $\eta \in [0,1.7]$ and (**b**) $\eta \in [1.8,3]$, spaced by 0.1 intervals. The marks denote the zero-noise curve in (**a**) and the curve for $\eta =1.8$ resulting from (27) in (**b**). The grid dimension $L=400$.

**Table 1.** The energy of the ground state ${E}_{0}$ and its magnetization ${M}_{0}$, and the critical values ${\beta}_{c}$, ${f}_{c}$, ${U}_{c}$ and ${\sigma}_{c}^{2}$ for different noise amplitudes.

| $\eta $ | ${E}_{0}$ | ${M}_{0}$ | ${\beta}_{c}$ | ${f}_{c}$ | ${U}_{c}$ | ${\sigma}_{c}^{2}$ |
|---|---|---|---|---|---|---|
| 0 | −1.995 | 1 | 0.442 | −0.6931 | −1.978 × 10^{5} | 12.958 |
| 0.1 | −1.995 | 1 | 0.443 | −0.6931 | −1.986 × 10^{5} | 11.427 |
| 0.2 | −1.995 | 1 | 0.444 | −0.6932 | −0.0101 | 12.566 |
| 0.3 | −1.995 | 1 | 0.445 | −0.6932 | −0.0103 | 11.627 |
| 0.4 | −1.996 | 1 | 0.452 | −0.6933 | −0.0211 | 11.476 |
| 0.5 | −1.994 | 1 | 0.454 | −0.6934 | −0.0324 | 10.666 |
| 0.6 | −1.993 | 1 | 0.459 | −0.6936 | −0.0447 | 9.719 |
| 0.7 | −1.994 | 1 | 0.465 | −0.6939 | −0.0581 | 8.328 |
| 0.8 | −1.996 | 1 | 0.476 | −0.6946 | −0.0849 | 7.642 |
| 0.9 | −1.996 | 1 | 0.484 | −0.6957 | −0.1143 | 6.518 |
| 1.0 | −1.993 | 1 | 0.503 | −0.6979 | −0.1599 | 5.603 |
| 1.1 | −1.996 | 0.9998 | 0.515 | −0.7010 | −0.2109 | 4.656 |
| 1.2 | −1.995 | 0.9987 | 0.536 | −0.7065 | −0.2815 | 3.629 |
| 1.3 | −1.994 | 0.9943 | 0.562 | −0.7156 | −0.3747 | 2.775 |
| 1.4 | −1.996 | 0.9839 | 0.591 | −0.7327 | −0.5107 | 1.998 |
| 1.5 | −2.002 | 0.9602 | 0.623 | −0.7527 | −0.6414 | 1.380 |
| 1.6 | −2.014 | 0.9060 | - | - | - | - |
| 1.7 | −2.033 | 0.2155 | - | - | - | - |
| 1.8 | −2.065 | 0.0312 | - | - | - | - |
| 1.9 | −2.098 | 0.0241 | - | - | - | - |
| 2.0 | −2.139 | 0.0058 | - | - | - | - |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).