
# Inverse Problem for Ising Connection Matrix with Long-Range Interaction

Center of Optical Neural Technologies, Scientific Research Institute for System Analysis, Russian Academy of Sciences, Nakhimov Ave, 36-1, 117218 Moscow, Russia
\* Author to whom correspondence should be addressed.
Academic Editors: Theodore E. Simos and Charalampos Tsitouras
Mathematics 2021, 9(14), 1624; https://doi.org/10.3390/math9141624
Received: 11 June 2021 / Revised: 6 July 2021 / Accepted: 7 July 2021 / Published: 9 July 2021
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing)

## Abstract

In the present paper, we examine Ising systems on $d$-dimensional hypercube lattices and solve an inverse problem: we determine the interaction constants of an Ising connection matrix from a given spectrum of its eigenvalues. In addition, we establish the restrictions that a sequence of random numbers must satisfy to be the spectrum of a connection matrix. We use the previously obtained analytical expressions for the eigenvalues of Ising connection matrices that account for an arbitrary long-range interaction under periodic boundary conditions.

## 1. Introduction

In papers [1,2,3], we calculated eigenvalues of Ising connection matrices defined on $d$-dimensional hypercube lattices ($d = 1, 2, 3, \ldots$). To provide translation invariance, we imposed periodic boundary conditions. In our calculations, we accounted for interactions not only with the nearest spins but with distant spins as well. In papers [1,2], we analyzed isotropic interactions, while in paper [3], we discussed the general case of anisotropic interactions. We succeeded in obtaining analytical expressions for the eigenvalues of the above-described Ising connection matrices. For the $d$-dimensional system, the eigenvalues are polynomials of degree $d$ in the eigenvalues of the one-dimensional system with long-range interaction (see [2,3]). The coefficients of these polynomials are the constants of interaction between spins.
In the present paper, we solve an inverse problem formulated as follows. Suppose that we need an Ising connection matrix with a given spectrum of eigenvalues. Two questions arise. Firstly, can any sequence of random numbers be the spectrum of some connection matrix? Secondly, how do we restore the interaction constants that define the connection matrix whose spectrum matches the given one? In Section 2, we obtain the answers to these questions for the one-dimensional Ising system. In Section 3 and Section 4, we extend the obtained results to the two- and three-dimensional systems, respectively. The discussion and conclusions are in Section 5.
The eigenvalues of Ising connection matrices became of interest since, in recent times, problems have appeared where it is necessary to generate an Ising connection matrix with a given spectrum. Moreover, the eigenvalues of an Ising connection matrix are closely related to the calculation of the partition function. Indeed, let $A$ be an $(N \times N)$ Ising connection matrix and $\mathbf{s}_i$, $i = 1, 2, \ldots, 2^N$, be $N$-dimensional configuration vectors whose coordinates are $\pm 1$. In the absence of a magnetic field, the partition function is:
$Z_N = \sum_{i=1}^{2^N} e^{\beta (A \mathbf{s}_i, \mathbf{s}_i)} = e^{\beta (A \mathbf{s}_1, \mathbf{s}_1)} + e^{\beta (A \mathbf{s}_2, \mathbf{s}_2)} + \ldots + e^{\beta (A \mathbf{s}_{2^N}, \mathbf{s}_{2^N})}$
Let us expand each exponential here into a formal Taylor series and rearrange the summands, combining in one sum the terms with the same power of $\beta$. Then, we have:
$Z_N \equiv 2^N + \beta \sum_{i=1}^{2^N} (A \mathbf{s}_i, \mathbf{s}_i) + \frac{\beta^2}{2!} \sum_{i=1}^{2^N} (A \mathbf{s}_i, \mathbf{s}_i)^2 + \frac{\beta^3}{3!} \sum_{i=1}^{2^N} (A \mathbf{s}_i, \mathbf{s}_i)^3 + \ldots$
We showed previously that the first three coefficients of this expansion are expressed through the traces $\mathrm{Tr}\,A$, $\mathrm{Tr}\,A^2$, and $\mathrm{Tr}\,A^3$.
Here, $\mathrm{Tr}$ means the trace of the matrix: $\mathrm{Tr}\,A^k = \sum_{i=1}^{N} \lambda_i^k$, where $\{\lambda_i\}_{1}^{N}$ are the eigenvalues of the matrix $A$. Note that beginning from $k = 4$, the expressions for the sums $\sum_{i=1}^{2^N} (A \mathbf{s}_i, \mathbf{s}_i)^k$ become more complex, including not only $\mathrm{Tr}\,A^k$ but also some additional terms. We verified that up to $k = 6$ these additional terms are defined by the traces $\mathrm{Tr}\,A^l$, $l < k$. We hope that the same is also true for larger values of $k$. These arguments show that the eigenvalues of the Ising connection matrix may be useful when calculating the partition function.
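The first two coefficients can be checked by brute force for a small ring. The sketch below uses hypothetical interaction constants (not from the paper) and verifies the exact identities $\sum_i (A\mathbf{s}_i, \mathbf{s}_i) = 2^N \mathrm{Tr}\,A = 0$ and, for a symmetric zero-diagonal $A$, $\sum_i (A\mathbf{s}_i, \mathbf{s}_i)^2 = 2^{N+1}\,\mathrm{Tr}\,A^2 = 2^{N+1} \sum_i \lambda_i^2$:

```python
import numpy as np
from itertools import product

N = 7                       # small ring: N = 2l + 1 spins, l = 3
w = [1.0, 0.5, -0.25]       # hypothetical interaction constants w_1..w_3
A = np.zeros((N, N))
for k, wk in enumerate(w, start=1):
    for i in range(N):
        A[i, (i + k) % N] = wk      # neighbors of the k-th coordination sphere
        A[i, (i - k) % N] = wk

# all 2^N configuration vectors with coordinates +-1
s_all = np.array(list(product([-1, 1], repeat=N)), dtype=float)
q = np.einsum('ij,jk,ik->i', s_all, A, s_all)     # (A s_i, s_i) for every i

lam = np.linalg.eigvalsh(A)                       # eigenvalues of A
# first coefficient: sum_i (A s_i, s_i) = 2^N Tr A = 0 (zero diagonal)
print(np.isclose(q.sum(), 0.0))                   # True
# second coefficient: sum_i (A s_i, s_i)^2 = 2^(N+1) Tr A^2
print(np.isclose((q ** 2).sum(), 2 ** (N + 1) * (lam ** 2).sum()))   # True
```

Both checks hold exactly, in agreement with the statement that the low-order coefficients are defined by the traces of powers of $A$.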
In concluding the introduction, we would like to briefly discuss the place of the Ising model in modern science. This model describes a system of interacting particles placed at the nodes of a multidimensional regular lattice. The Ising model appeared almost a hundred years ago. Its purpose was to analytically describe the collective behavior of a large number $N \gg 1$ of interacting binary spins and to define the thermodynamic properties of such a system. W.L. Bragg and E.J. Williams were the first who succeeded in describing a phase transition with the aid of the Ising model. However, they made the unrealistic supposition that all the spins interacted in the same way (the mean-field model). Finally, in the late forties, L. Onsager found an exact solution for the planar Ising system, when the spins were at the nodes of a plane square lattice and only the nearest spins interacted. Sometimes such a short-range interaction describes real systems. With regard to the Ising problem, this result is one of the most significant.
At first, the Ising model described systems of interacting spins. However, its universal formalism makes it possible to use this model in different scientific fields where the interacting neurons, agents, and other objects are described by binary variables. Nowadays, scientists use the Ising model when solving problems of the spin glass theory and the neural network theory. They use it in the theory and applications of global minimization [9,10], in socio- and econophysics, and in many other problems. The calculation of the partition function $Z_N$ is the main and the most difficult part of all these problems.
There is extensive literature on the inverse Ising problem; see, for example, the rather complete review [12]. When solving inverse Ising problems, the authors examine how, with the aid of statistical inference methods, one can estimate the parameters of the Ising system—interaction constants and external magnetic fields—from empirical characteristics of a large number of random spin configurations. We would like to emphasize that although, as in the papers cited in [12], we also restore the parameters of Ising systems, the setting of the problem and the method of its solution differ significantly. In our approach, we invert the exact formulas that express the connection matrix eigenvalues in terms of its matrix elements. By contrast, when using statistical inference, the input data are observables such as magnetizations, correlations, etc. The solution tools are also different: the Boltzmann equilibrium distribution, the maximum likelihood principle, the Bayes theorem, etc.

## 2. One-Dimensional Ising Model

(1) A one-dimensional Ising system is a linear chain of $L$ interacting spins. To provide translation invariance, let us close the chain in a ring. Then, the last spin is also the nearest neighbor of the first spin. This means that each spin has two nearest neighbors (on the left and on the right), two next-nearest neighbors (the distance to which is twice as large), two next-next-nearest neighbors, etc. To be specific, we suppose that $L$ is odd: $L = 2l + 1$. Consequently, each spin has $l$ pairs of neighbors. Since we have in mind to discuss multidimensional lattices, we use the term "coordination spheres" to describe these pairs: first coordination sphere, second coordination sphere, ..., $l$-th coordination sphere. At the beginning of the next Section, we give a general definition of the coordination spheres.
By $J^{(k)}$, we denote a connection matrix that defines the interaction of each spin only with the spins from the $k$-th coordination sphere. It is easy to see that these matrices have the elementwise form:
$\left(J^{(k)}\right)_{ij} = \delta_{|i-j|,\,k} + \delta_{|i-j|,\,L-k}, \quad i, j = 1, 2, \ldots, L$
That is, $J^{(k)}$ is a symmetric matrix with ones at the $k$-th and $(L-k)$-th diagonals that are parallel to the main diagonal. We use the set of matrices $\{J^{(k)}\}_{1}^{l}$ to write down the Ising connection matrix $A_0$ that accounts for interactions with spins belonging to all the coordination spheres. Let $w_k$ be the constant of interaction with spins from the $k$-th coordination sphere. Then:
$A_0 = w_1 \cdot J^{(1)} + w_2 \cdot J^{(2)} + \ldots + w_l \cdot J^{(l)}$ (1)
When there is no interaction with the spins from the $k$-th coordination sphere, the corresponding constant $w_k$ in Equation (1) is equal to zero.
(2) The matrices $J^{(k)}$ are circulants: each next row of such a matrix is obtained by a cyclic shift of the previous row one position to the right. All circulants have the same set of eigenvectors, which may have complex coordinates [13,14]. In the general case, the eigenvalues of circulant matrices can also be complex. However, since in our problem the matrices $J^{(k)}$ are symmetric, their eigenvalues are real. By $\{\lambda_\alpha(k)\}_{\alpha=1}^{L}$, we denote the eigenvalues of these matrices. It can be shown that [2,3]:
$\lambda_\alpha(k) = 2\cos\left(\frac{2\pi k (\alpha - 1)}{L}\right), \quad \alpha = 1, 2, \ldots, L$ (2)
The first eigenvalue of each matrix $J^{(k)}$ is equal to 2, and the other eigenvalues are twice degenerate: $\lambda_\alpha(k) = \lambda_{L+2-\alpha}(k)$, $\alpha = 2, 3, \ldots, l+1$.
Consequently, for each $k$ (if we do not take into account the first eigenvalue), the sequence of eigenvalues is mirror-symmetrical about its middle (see Figure 1). In what follows, we repeatedly use this symmetry property.
The eigenvector $f^{(1)}$ with equal coordinates corresponds to the first eigenvalue $\lambda_1(k) = 2$:
$f^{(1)} = \frac{1}{\sqrt{L}}\left(1, 1, \ldots, 1\right)^{+}$
We can choose the two eigenvectors $f^{(\alpha)}$ and $f^{(L+2-\alpha)}$ corresponding to a degenerate eigenvalue $\lambda_\alpha(k) = \lambda_{L+2-\alpha}(k)$ to be real. It is convenient to write them as follows:
$f_j^{(\alpha)} = \sqrt{\frac{2}{L}}\cos\left(\frac{2\pi (\alpha-1)(j-1)}{L}\right), \quad f_j^{(L+2-\alpha)} = \sqrt{\frac{2}{L}}\sin\left(\frac{2\pi (\alpha-1)(j-1)}{L}\right), \quad j = 1, \ldots, L; \; \alpha = 2, \ldots, l+1$ (3)
Since the eigenvectors of all the matrices $J^{(k)}$ are the same, it is easy to write down the eigenvalues of the connection matrix (1):
$\lambda_\alpha(A_0) = \sum_{k=1}^{l} w_k \lambda_\alpha(k), \quad \alpha = 1, 2, \ldots, L$ (4)
Expression (4) is a generalization of the formula obtained previously in [15].
The spectrum of the eigenvalues of the connection matrix $A_0$ cannot be a set of arbitrary numbers. It has a structure defined by the properties of the summands in Equation (4). First, since the equalities (2) hold for each $k$, the spectrum of the eigenvalues $\{\lambda_\alpha(A_0)\}_{\alpha=1}^{L}$ has to be mirror-symmetrical about its middle (without accounting for the first eigenvalue). Then, we have the equalities:
$\lambda_\alpha(A_0) = \lambda_{L+2-\alpha}(A_0), \quad \alpha = 2, 3, \ldots, l+1$ (5)
Second, due to the zero-valued elements at the diagonals of all the matrices $J^{(k)}$, the sum of the eigenvalues of the matrix $A_0$ has to be equal to zero. This means that:
$\lambda_1(A_0) = -2\sum_{\alpha=2}^{l+1} \lambda_\alpha(A_0)$ (6)
Consequently, only the $l$ numbers $\lambda_2(A_0)$, $\lambda_3(A_0)$, ..., $\lambda_{l+1}(A_0)$ of the set (4) can be arbitrary. The other eigenvalues are expressed through these numbers with the aid of Equations (5) and (6).
(3) Let us analyze the inverse problem. Suppose we know a spectrum $\{\tilde{\lambda}_\alpha\}_{\alpha=1}^{L}$ of a connection matrix of a one-dimensional Ising system (for example, obtained experimentally). Of course, the sequence $\{\tilde{\lambda}_\alpha\}_{\alpha=1}^{L}$ satisfies the equalities (5) and (6). What are the connections $w_k$ between the spins that provide this spectrum?
To determine the unknowns $w_k$, we have to solve the system (4) with the known left-hand side:
$\tilde{\lambda}_\alpha = \sum_{k=1}^{l} w_k \lambda_\alpha(k), \quad \alpha = 1, 2, \ldots, L$ (7)
We can obtain the answer in an explicit form. Let us generate an $L$-dimensional vector $\tilde{\Lambda}$ whose coordinates are the eigenvalues of the experimental spectrum $\{\tilde{\lambda}_\alpha\}_{\alpha=1}^{L}$. We also generate $L$-dimensional vectors $\Lambda^{(k)}$ whose coordinates are the eigenvalues of the matrices $J^{(k)}$:
$\Lambda^{(k)} = \left(\lambda_1(k), \lambda_2(k), \ldots, \lambda_L(k)\right)^{+}, \quad k = 1, 2, \ldots, l$ (8)
Then, we can rewrite the system of Equation (7) in the vector form:
$Λ ˜ = w 1 ⋅ Λ ( 1 ) + w 2 ⋅ Λ ( 2 ) + … + w l ⋅ Λ ( l )$
It is evident that the vectors $\Lambda^{(k)}$ and the eigenvectors $f^{(k+1)}$ are collinear: $\Lambda^{(k)} \sim f^{(k+1)}$, $k = 1, 2, \ldots, l$. Consequently, the vectors $\Lambda^{(k)}$ are mutually orthogonal, and we can calculate the weights $w_k$ as normalized scalar products of the vectors $\tilde{\Lambda}$ and $\Lambda^{(k)}$:
$w_k = \frac{(\tilde{\Lambda}, \Lambda^{(k)})}{(\Lambda^{(k)}, \Lambda^{(k)})} = \frac{1}{2L}(\tilde{\Lambda}, \Lambda^{(k)}), \quad k = 1, 2, \ldots, l$ (9)
By doing that, we solve the inverse problem in the one-dimensional case.
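The one-dimensional procedure above can be sketched in a few lines of NumPy. The constants `w_true` are hypothetical: the "experimental" spectrum is synthesized from Equation (4) and then the constants are restored via normalized scalar products with the vectors $\Lambda^{(k)}$:

```python
import numpy as np

L = 9                                  # L = 2l + 1
l = (L - 1) // 2
rng = np.random.default_rng(0)
w_true = rng.normal(size=l)            # hypothetical interaction constants w_1..w_l

# eigenvalues of J(k): lambda_alpha(k) = 2 cos(2 pi k (alpha - 1) / L)
alpha = np.arange(L)
Lam = np.array([2 * np.cos(2 * np.pi * k * alpha / L) for k in range(1, l + 1)])

# "experimental" spectrum of A0 (Equation (4))
spectrum = w_true @ Lam

# inverse problem: w_k as normalized scalar products; (Lambda(k), Lambda(k)) = 2L
w_rec = Lam @ spectrum / (2 * L)
print(np.allclose(w_rec, w_true))      # True
```

The recovery is exact because the vectors $\Lambda^{(k)}$ are mutually orthogonal, each with squared norm $2L$.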

## 3. Two-Dimensional Ising Model

(1) In this case, the spins are at the nodes of a square lattice of size $L \times L$. As before, we set $L = 2l + 1$ and assume periodic boundary conditions. Then, each spin has $l$ pairs of neighbors along both the horizontal and the vertical axes. In addition, there are neighbors that are not on the same horizontal or vertical line as the given spin.
The set of spins equally interacting with the given spin belongs to the same coordination sphere. In the case of an isotropic interaction, the coordination spheres consist of spins equally distant from the given spin. Then, we can enumerate the coordination spheres in the ascending order of distances to the given spin. In the anisotropic case, the interaction constants but not the distances define the spins belonging to the given coordination sphere.
When analyzing multidimensional Ising systems, we first have to distribute spins between the coordination spheres. This step is simple in the one-dimensional case: the pair of spins that are equidistant from the given spin belongs to the same coordination sphere. In the case of a two-dimensional lattice, to describe the interaction between spins spaced by $m$ steps along the vertical axis and by $k$ steps along the horizontal axis, we introduce the interaction constant $w(m,k)$. The values of $m$ and $k$ change independently from 0 to $l$. If the interaction is anisotropic, $w(m,k) \neq w(k,m)$; in the isotropic case, $w(m,k) \equiv w(k,m)$. The difference between the coordination spheres in the isotropic and anisotropic cases influences the symmetry properties of the spectrum.
Let us make a few necessary comments. Since there is no self-action in the system, we always have $w(0,0) = 0$. It is convenient to introduce the identity $(L \times L)$-dimensional matrix $J^{(0)} = \mathrm{diag}(1, 1, \ldots, 1)$. This matrix completes the set of matrices $\{J^{(k)}\}_{k=1}^{l}$. All the eigenvalues of the matrix $J^{(0)}$ are equal to one. With the aid of these eigenvalues, we define the $L$-dimensional vector:
$\Lambda^{(0)} = \left(1, 1, \ldots, 1\right)^{+}$
which completes the set (8) of the vectors $\Lambda^{(k)}$: $\{\Lambda^{(k)}\}_{k=0}^{l}$.
In the next item, we solve the inverse problem in the case of anisotropic interaction. The isotropic interaction is the subject of the last item of this Section.
(2) In paper [3], we showed that the $L^2 \times L^2$-dimensional matrix $B_0$ describing the interactions $\{w(m,k)\}_{m,k=0}^{l}$ between spins has a block-circulant form and that its eigenvectors are the pairwise Kronecker products of the eigenvectors $f^{(\alpha)}$ defined by Equation (3). Exactly as in the one-dimensional case, the set of eigenvectors of the matrix $B_0$ does not depend on the interaction constants. The eigenvalues of this matrix obtained in [3] are:
$\mu_{\alpha\beta} = \sum_{m=0}^{l} \sum_{k=0}^{l} w(m,k)\, \lambda_\alpha(m)\, \lambda_\beta(k), \quad \alpha, \beta = 1, 2, \ldots, L$ (10)
where $\lambda_\alpha(0) \equiv 1$.
Let us write Equation (10) in the vector form using the above-introduced $L$-dimensional vectors $\Lambda^{(k)}$ (see Equation (8)). With the aid of these vectors, we generate $L^2$-dimensional vectors $\Lambda^{(m,k)}$ that are the Kronecker products of the vectors $\Lambda^{(m)}$ and $\Lambda^{(k)}$:
$\Lambda^{(m,k)} = \Lambda^{(m)} \otimes \Lambda^{(k)}, \quad m, k = 0, 1, \ldots, l$ (11)
The vectors $\Lambda^{(m,k)}$ are mutually orthogonal. Let us define an $L^2$-dimensional vector $\mathbf{M}$ whose coordinates are the eigenvalues $\mu_{\alpha\beta}$ defined by Equation (10):
$\mathbf{M} = \left(\mu_{11}, \ldots, \mu_{1L},\; \mu_{21}, \ldots, \mu_{2L},\; \ldots,\; \mu_{L1}, \ldots, \mu_{LL}\right)^{+}$ (12)
Now, we can rewrite the set of equalities (10) in the vector form:
$\mathbf{M} = \sum_{m=0}^{l} \sum_{k=0}^{l} w(m,k) \cdot \Lambda^{(m,k)} = w(0,1) \cdot \Lambda^{(0,1)} + \ldots + w(0,l) \cdot \Lambda^{(0,l)} + w(1,0) \cdot \Lambda^{(1,0)} + \ldots + w(1,l) \cdot \Lambda^{(1,l)} + \ldots + w(l,0) \cdot \Lambda^{(l,0)} + \ldots + w(l,l) \cdot \Lambda^{(l,l)}$ (13)
Since $w(0,0) = 0$, the term $w(0,0) \cdot \Lambda^{(0,0)}$ is absent in this equation.
Equation (13) allows us to easily solve the two-dimensional inverse problem. Namely, we have to determine the interaction constants $w(m,k)$ that provide a known spectrum of eigenvalues $\{\tilde{\mu}_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$; for example, it might be an experimental spectrum.
Let us write an $L^2$-dimensional column vector $\tilde{\mathbf{M}}$ of the form (12) using the "experimental" spectrum components $\{\tilde{\mu}_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$, and let us take into account the mutual orthogonality of the vectors $\Lambda^{(m,k)}$ (11). Then, the desired interaction constants are the normalized scalar products of the $L^2$-dimensional vectors:
$w(m,k) = \frac{(\tilde{\mathbf{M}}, \Lambda^{(m,k)})}{(\Lambda^{(m,k)}, \Lambda^{(m,k)})}, \quad m, k = 0, 1, \ldots, l$ (14)
Now, let us discuss another question. In the same way as in the one-dimensional problem, not any sequence of numbers $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ can be a spectrum of a connection matrix: the symmetry properties of the $L^2$-dimensional vectors $\Lambda^{(m,k)}$ impose rather severe restrictions on the values of these numbers.
Firstly, from Equation (13) it follows that the sum of the numbers $\mu_{\alpha\beta}$ has to be equal to zero:
$\sum_{\alpha=1}^{L} \sum_{\beta=1}^{L} \mu_{\alpha\beta} = 0$ (15)
Secondly, in the one-dimensional problem, the set of the eigenvalues (excluding the first eigenvalue) is mirror-symmetrical about its middle for each $m = 0, 1, \ldots, l$: $\lambda_i(m) = \lambda_{L+2-i}(m)$, $i = 2, 3, \ldots, l+1$.
From Equation (11), which defines the $L^2$-dimensional vectors $\Lambda^{(m,k)}$ as the products of the eigenvalues $\lambda_i(m)$ by the vectors $\Lambda^{(k)}$, it is evident that their last $l \cdot L$ coordinates copy the preceding $l \cdot L$ ones. Consequently, the same has to be true for the sequence of the numbers $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$. Then, it is necessary that the numbers constituting the spectrum satisfy the equalities:
$\mu_{\alpha\beta} = \mu_{L+2-\alpha,\,\beta}, \quad \alpha = 2, 3, \ldots, l+1; \quad \beta = 1, 2, \ldots, L$
In other words, the last $l ⋅ L$ terms of the sequence of the numbers $μ α β$ are not free parameters.
Thirdly, since the last $l$ coordinates of each $L$-dimensional vector $\Lambda^{(k)}$ are a mirror image of the preceding $l$ coordinates, not all of the first $(l+1) \cdot L$ coordinates of any vector $\Lambda^{(m,k)}$ are different. Consequently, the same has to be true for the given sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$: the last $l$ terms of the first group of its $L$ terms have to be a mirror image of the preceding $l$ terms, the last $l$ terms of the second group of its $L$ terms have to be a mirror image of the preceding $l$ terms, and so on. Finally, for the last, $(l+1)$-th, group consisting of the $L$ terms $\{\mu_{l+1,\beta}\}_{\beta=1}^{L}$, the equalities $\mu_{l+1,i} = \mu_{l+1,L+2-i}$, $i = 2, 3, \ldots, l+1$, have to be fulfilled. This means that, by symmetry reasons, only the $(l+1)^2$ numbers
$\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{l+1}$ (16)
of the sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ may be independent parameters.
We can rewrite Equation (15) using only the terms of sequence (16):
$\mu_{11} + 2\left(\sum_{\beta=2}^{l+1} \mu_{1\beta} + \sum_{\alpha=2}^{l+1} \mu_{\alpha 1}\right) + 4\sum_{\alpha=2}^{l+1} \sum_{\beta=2}^{l+1} \mu_{\alpha\beta} = 0$ (17)
This equation allows us to express $\mu_{11}$ through the other $l(l+2)$ independent numbers $\mu_{\alpha\beta}$ from sequence (16). Consequently, the number of independent values $\mu_{\alpha\beta}$ equals exactly the number of the orthogonal vectors $\Lambda^{(m,k)}$ taking part in the expansion (13).
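The two-dimensional bookkeeping can be illustrated numerically. In the sketch below (hypothetical constants, synthetic spectrum; `restore` is an illustrative helper, not from the paper), the spectrum vector is assembled as in Equation (13) and every constant is recovered as a normalized scalar product, as in Equation (14):

```python
import numpy as np

L = 5
l = (L - 1) // 2
alpha = np.arange(L)
# Lambda(0) is the all-ones vector; Lambda(k), k >= 1, holds the eigenvalues of J(k)
Lam = np.vstack([np.ones(L)] +
                [2 * np.cos(2 * np.pi * k * alpha / L) for k in range(1, l + 1)])

rng = np.random.default_rng(1)
w_true = rng.normal(size=(l + 1, l + 1))
w_true[0, 0] = 0.0                     # no self-interaction: w(0,0) = 0

# spectrum vector M (Equation (13)) as a sum of Kronecker products
M = sum(w_true[m, k] * np.kron(Lam[m], Lam[k])
        for m in range(l + 1) for k in range(l + 1))

# recovery (Equation (14)): normalized scalar products restore every w(m,k)
def restore(m, k):
    v = np.kron(Lam[m], Lam[k])
    return (M @ v) / (v @ v)

w_rec = np.array([[restore(m, k) for k in range(l + 1)] for m in range(l + 1)])
print(np.allclose(w_rec, w_true))      # True
```

Note that `restore(0, 0)` automatically returns zero: the all-ones vector $\Lambda^{(0,0)}$ is orthogonal to every other $\Lambda^{(m,k)}$.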
(3) Finally, let us briefly discuss a two-dimensional Ising system with an isotropic interaction. Evidently, we can again use Equations (13), (14) and (17). However, now the number of various interaction constants $w(m,k)$ is not $l(l+2)$ but $l(l+3)/2$. This means that the number of independent terms in the given sequence $\{\mu_{\alpha\beta}\}_{\alpha,\beta=1}^{L}$ that represents the spectrum of an isotropic connection matrix has to be the same. Without proof, let us write down the formulas that replace Equations (16) and (17) when the interaction between the spins is isotropic.
After removing all the numbers $\mu_{\alpha\beta}$ that, due to the symmetry reasons, copy the coordinates of the vector $\mathbf{M}$ (see Equation (12)), in place of (16) we obtain the sequence:
$\{\mu_{\alpha\beta}\}, \quad 1 \le \alpha \le \beta \le l+1$ (18)
that includes only $(l+1)(l+2)/2$ numbers. Next, when the interaction is isotropic, we can rewrite the general requirement (15) as follows:
$\mu_{11} + 4\left(\sum_{\beta=2}^{l+1} \mu_{1\beta} + \sum_{\alpha=2}^{l+1} \mu_{\alpha\alpha}\right) + 8\sum_{\alpha=2}^{l} \sum_{\beta=\alpha+1}^{l+1} \mu_{\alpha\beta} = 0$
and calculate $μ 11$ with the aid of this equation. As a result, we obtain the correct answer: In sequence (18), the number of independent values is equal to $l ( l + 3 ) / 2$.

## 4. Three-Dimensional Ising Model

(1) We consider a system of spins at the nodes of a cubic lattice of size $L \times L \times L$ ($L = 2l + 1$), assuming periodic boundary conditions. Then, each spin has $l$ pairs of neighbors situated along the three independent coordinate axes. In addition, the spins have neighbors that are not on the same coordinate axes as the given spin. The spins equally interacting with the given spin constitute a coordination sphere.
Let $w(n,m,k)$ be the constant of interaction between spins shifted with respect to each other by a distance $n$ along the first axis, by a distance $m$ along the second axis, and by a distance $k$ along the third axis. When the interaction is anisotropic, there are $(l+1)^3 - 1$ independent interaction constants $\{w(n,m,k)\}_{n,m,k=0}^{l}$, where the $-1$ appears since there is no self-interaction: $w(0,0,0) = 0$. In the case of an isotropic interaction, the number of various constants $w(n,m,k)$ is equal to $(l+1)(l+2)(l+3)/6 - 1$.
In paper [3], we showed that the $L^3 \times L^3$-dimensional connection matrix $C_0$ defined by the interaction constants $\{w(n,m,k)\}_{n,m,k=0}^{l}$ is a block-circulant. Its eigenvectors $F^{\alpha\beta\gamma}$ are the Kronecker products of the eigenvectors $f^{(\alpha)}$ (see Equation (3)):
$F^{\alpha\beta\gamma} = f^{(\alpha)} \otimes f^{(\beta)} \otimes f^{(\gamma)}, \quad \alpha, \beta, \gamma = 1, 2, \ldots, L$
The vectors $F^{\alpha\beta\gamma}$ constitute a full set of the eigenvectors of any connection matrix of the three-dimensional Ising system, and they do not depend on the values of the interaction constants $\{w(n,m,k)\}_{n,m,k=0}^{l}$. Let us write down the eigenvalues of the matrix $C_0$ obtained in [3]:
$\mu_{\alpha\beta\gamma} = \sum_{n=0}^{l} \sum_{m=0}^{l} \sum_{k=0}^{l} w(n,m,k)\, \lambda_\alpha(n)\, \lambda_\beta(m)\, \lambda_\gamma(k), \quad \alpha, \beta, \gamma = 1, 2, \ldots, L$ (19)
We use the above-introduced $L^2$-dimensional vectors $\Lambda^{(m,k)}$ (see Equation (11)) to generate $L^3$-dimensional vectors $\Lambda^{(n,m,k)}$ that are the Kronecker products of the vectors $\Lambda^{(n)}$ and $\Lambda^{(m,k)}$:
$\Lambda^{(n,m,k)} = \Lambda^{(n)} \otimes \Lambda^{(m,k)} = \Lambda^{(n)} \otimes \Lambda^{(m)} \otimes \Lambda^{(k)}, \quad n, m, k = 0, 1, \ldots, l$ (20)
The vectors $\Lambda^{(n,m,k)}$ are mutually orthogonal.
Let us define an $L^3$-dimensional vector $\mathbf{M}$ whose coordinates are the eigenvalues (19):
$\mathbf{M} = \left(\mu_{111}, \ldots, \mu_{11L},\; \mu_{121}, \ldots, \mu_{12L},\; \ldots,\; \mu_{LL1}, \ldots, \mu_{LLL}\right)^{+}$ (21)
Then, we can rewrite the set of equalities (19) in the vector form:
$\mathbf{M} = \sum_{n=0}^{l} \sum_{m=0}^{l} \sum_{k=0}^{l} w(n,m,k) \cdot \Lambda^{(n,m,k)} = w(0,0,1) \cdot \Lambda^{(0,0,1)} + \ldots + w(0,l,l) \cdot \Lambda^{(0,l,l)} + w(1,0,0) \cdot \Lambda^{(1,0,0)} + \ldots + w(1,l,l) \cdot \Lambda^{(1,l,l)} + \ldots + w(l,0,0) \cdot \Lambda^{(l,0,0)} + \ldots + w(l,l,l) \cdot \Lambda^{(l,l,l)}$ (22)
Equation (22) allows us to solve the inverse problem and calculate the interaction constants $w(n,m,k)$ that define a given set of eigenvalues $\{\tilde{\mu}_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{L}$ of the connection matrix. Indeed, let us transform this "experimental" spectrum into an $L^3$-dimensional column vector $\tilde{\mathbf{M}}$ of the form (21) and use the mutual orthogonality of the vectors $\Lambda^{(n,m,k)}$. Then, we obtain the required interaction constants as the normalized scalar products of the $L^3$-dimensional vectors:
$w(n,m,k) = \frac{(\tilde{\mathbf{M}}, \Lambda^{(n,m,k)})}{(\Lambda^{(n,m,k)}, \Lambda^{(n,m,k)})}, \quad n, m, k = 0, 1, \ldots, l$ (23)
This formula solves the problem of restoring the interaction constants corresponding to the given spectrum.
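The three-dimensional recovery mirrors the two-dimensional one, with triple Kronecker products. A brief sketch under the same assumptions (hypothetical constants, synthetic spectrum; `restore` is an illustrative helper):

```python
import numpy as np

L = 5
l = (L - 1) // 2
alpha = np.arange(L)
Lam = np.vstack([np.ones(L)] +
                [2 * np.cos(2 * np.pi * k * alpha / L) for k in range(1, l + 1)])

rng = np.random.default_rng(2)
w_true = rng.normal(size=(l + 1, l + 1, l + 1))
w_true[0, 0, 0] = 0.0                  # no self-interaction

# spectrum vector M (Equation (22)) as a sum of triple Kronecker products
M = sum(w_true[n, m, k] * np.kron(Lam[n], np.kron(Lam[m], Lam[k]))
        for n in range(l + 1) for m in range(l + 1) for k in range(l + 1))

# Equation (23): normalized scalar products restore the constants
def restore(n, m, k):
    v = np.kron(Lam[n], np.kron(Lam[m], Lam[k]))
    return (M @ v) / (v @ v)

w_rec = np.array([[[restore(n, m, k) for k in range(l + 1)]
                   for m in range(l + 1)] for n in range(l + 1)])
print(np.allclose(w_rec, w_true))      # True
```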
(2) Not any sequence of numbers $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{L}$ can represent the spectrum of a three-dimensional Ising connection matrix. To start with, the equality
$\sum_{\alpha=1}^{L} \sum_{\beta=1}^{L} \sum_{\gamma=1}^{L} \mu_{\alpha\beta\gamma} = 0$ (24)
has to hold. As in the two-dimensional problem, the cases of anisotropic and isotropic interactions differ significantly. When the interaction is anisotropic, it is easy to list the values $\mu_{\alpha\beta\gamma}$, excluding the numbers repeated due to symmetry reasons. This list contains $(l+1)^3$ values $\{\mu_{\alpha\beta\gamma}\}_{\alpha,\beta,\gamma=1}^{l+1}$ (compare with Equation (16)). Due to Equation (24), the number of independent values in this list is one less. For example, we can express $\mu_{111}$ in terms of the other independent values:
$\mu_{111} = -2\left(\sum_{\beta=2}^{l+1} \mu_{1\beta 1} + \sum_{\gamma=2}^{l+1} \mu_{11\gamma} + \sum_{\alpha=2}^{l+1} \mu_{\alpha 11}\right) - 4\left(\sum_{\beta,\gamma=2}^{l+1} \mu_{1\beta\gamma} + \sum_{\alpha,\gamma=2}^{l+1} \mu_{\alpha 1\gamma} + \sum_{\alpha,\beta=2}^{l+1} \mu_{\alpha\beta 1}\right) - 8\sum_{\alpha,\beta,\gamma=2}^{l+1} \mu_{\alpha\beta\gamma}$ (25)
The symmetry reasons allow us to restore all the other numbers $μ α β γ$.
Consequently, the number of independent values $μ α β γ$ equals exactly the number of the basic vectors $Λ ( n , m , k )$ (see Equation (20)), which enter the sum (22) with nonzero coefficients.
When the interaction is isotropic, due to symmetry restrictions only $(l+1)(l+2)(l+3)/6$ values $\mu_{\alpha\beta\gamma}$ may be independent. They are:
$\{\mu_{\alpha\beta\gamma}\}, \quad 1 \le \alpha \le \beta \le \gamma \le l+1$
In addition, due to Equation (24), this number is reduced by one. In the same way as previously (see Equation (25)), we can express, for example, $\mu_{111}$ through the remaining independent values and then, with the aid of the symmetry reasons, restore all the other numbers $\mu_{\alpha\beta\gamma}$. Thus, in the given "experimental" set of eigenvalues there must be $(l+1)(l+2)(l+3)/6 - 1$ independent values, and this number exactly matches the number of various coefficients $w(n,m,k)$ in expansion (22).

## 5. Discussion and Conclusions

Connection matrices define the most important characteristics of Ising systems, such as the energies of the states and their distribution, the free energy, and all the macroscopic properties defined by the free energy. All these functions depend crucially on the connection matrix, whose main characteristics are its eigenvalues and eigenvectors. In papers [1,2,3], we obtained expressions for the eigenvalues of the Ising connection matrix $A = \{A_{ij}\}_{i,j=1}^{N}$ with an arbitrary long-range interaction. In the present paper, we solve the inverse problem: we suppose that we know the matrix spectrum, and we have to determine the interaction constants providing this spectrum.
We would like to note that the statement of the problem itself is not obvious. The point is that usually, to calculate the matrix elements of a matrix, we have to know not only its eigenvalues but also all its eigenvectors. Indeed, let $\{\lambda_\alpha\}_{\alpha=1}^{N}$ and $f^{(\alpha)} = \left(f_1^{(\alpha)}, f_2^{(\alpha)}, \ldots, f_N^{(\alpha)}\right)^{+}$ be the eigenvalues and eigenvectors of a symmetric matrix $A = \{A_{ij}\}$, respectively. Then, its matrix elements are:
$A_{ij} = \sum_{\alpha=1}^{N} \lambda_\alpha f_i^{(\alpha)} f_j^{(\alpha)}, \quad i, j = 1, 2, \ldots, N$ (26)
On the other hand, at the beginning of each Section we recall that all connection matrices of a $d$-dimensional Ising model are circulants and, consequently, all these matrices have the same set of eigenvectors [13,14]. In other words, their eigenvectors are known by default. However, our analysis shows that when calculating the matrix elements, the internal symmetry of the problem allows us to avoid Equation (26) and use much simpler and more convenient formulas (see Equations (9), (14) and (23)). In addition, using the symmetry reasons, we obtain the number and positions of the independent values in a given sequence that allow it to be the spectrum of some connection matrix.
Note that it is easy to generalize all the obtained results to the dimensions d > 3.

## Author Contributions

Project proposal, L.L.; mathematical methods, L.L. and B.K.; proofs of statements, L.L. and B.K.; writing the paper, L.L. and B.K. All authors have read and agreed to the published version of the manuscript.

## Funding

This work was supported by the State Program of Scientific Research Institute for System Analysis, Russian Academy of Sciences, project no. 0065-2019-0003.


## Acknowledgments

The authors are grateful to Inna Kaganowa for help when preparing this paper.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Kryzhanovsky, B.; Litinskii, L. Connection-Matrix Eigenvalues in the Ising Model: Taking into Account Interaction with Next-Nearest Neighbors. Dokl. Phys. 2019, 64, 414–418. [Google Scholar] [CrossRef]
2. Litinskii, L.; Kryzhanovsky, B. Eigenvalues of Ising connection matrix with long-range interaction. Phys. A 2020, 558, 124959. [Google Scholar] [CrossRef]
3. Kryzhanovsky, B.; Litinskii, L. Influence of long-range interaction on degeneracy of eigenvalues of connection matrix of d-dimensional Ising system. J. Phys. A Math. Theor. 2020, 53, 475002. [Google Scholar] [CrossRef]
4. Boothby, K.; Bunyk, P.; Raymond, J.; Roy, A. Next-Generation Topology of D-Wave Quantum Processors; Technical Report 14–1026A-C; D-Wave Systems: Burnaby, BC, Canada, 2019; Available online: https://www.dwavesys.com/resources/publications?type=white (accessed on 9 July 2021).
5. Kryzhanovsky, B.; Litinskii, L.; Egorov, V. Analytical solutions for Ising models on high dimensional lattices. arXiv 2021, arXiv:2105.14460. [Google Scholar]
6. Baxter, R.J. Exactly Solved Models in Statistical Mechanics; Academic Press: London, UK, 1982. [Google Scholar]
7. Dotsenko, V.S. Introduction to the Theory of Spin-Glasses and Neural Networks; World Scientific: Singapore, 1994. [Google Scholar]
8. Hertz, J.A.; Krogh, A.S.; Palmer, R.G. Introduction to the Theory of Neural Computation; Addison-Wesley: Redwood City, CA, USA, 1991. [Google Scholar]
9. Hartmann, A.K.; Rieger, H. (Eds.) New Optimization Algorithms in Physics; WILEY-VCH Verlag GmbH & Co.: Weinheim, Germany, 2004. [Google Scholar]
10. Lucas, A. Ising formulation of many NP problems. Front. Phys. 2014, 2, 5. [Google Scholar] [CrossRef]
11. Bouchaud, J.P. Crises and collective socio-economic phenomena: Simple models and challenges. J. Stat. Phys. 2013, 151, 567. [Google Scholar] [CrossRef]
12. Nguyen, H.C.; Zecchina, R.; Berg, J. Inverse statistical problems: From the inverse Ising problem to data science. Adv. Phys. 2017, 66, 197. [Google Scholar] [CrossRef]
13. Bellman, R. Introduction to Matrix Analysis; McGraw-Hill Book Company: New York, NY, USA, 1960. [Google Scholar]
14. Gray, R.M. Toeplitz and Circulant Matrices: A Review; Now Publishers Inc.: Boston, MA, USA; Delft, The Netherlands, 2006; ISBN 1933019239/9781933019239. [Google Scholar]
15. Dixon, J.M.; Tuszynski, J.A.; Nip, M.L.A. Exact eigenvalues of the Ising Hamiltonian in one-, two- and three-dimensions in the absence of a magnetic field. Phys. A 2001, 289, 137–156. [Google Scholar] [CrossRef]
Figure 1. Eigenvalues of the matrices $J^{(k)}$. The vertical line in the middle shows explicitly the mirror symmetry of the graphs.