Magnetic Normal Mode Calculations in Big Systems: A Highly Scalable Dynamical Matrix Approach Applied to a Fibonacci-Distorted Artificial Spin Ice

We present a new formulation of the dynamical matrix method for computing the magnetic normal modes of a large system, resulting in a highly scalable approach. The equation of motion, which takes into account external field, dipolar and ferromagnetic exchange interactions, is rewritten in the form of a generalized eigenvalue problem without any additional approximation. For its numerical implementation, several solvers have been explored, along with preconditioning methods. This reformulation was conceived to extend the study of magnetization dynamics to a broader class of finer-mesh systems, such as three-dimensional, irregular or defective structures, which have recently raised interest among researchers. To test its effectiveness, we applied the method to investigate the magnetization dynamics of a hexagonal artificial spin ice as a function of a geometric distortion parameter following the Fibonacci sequence. We found several important features characterizing the low frequency spin modes as the geometric distortion is gradually increased.


Introduction
In the last decade, the field of Magnonics has been extensively investigated in both its static and dynamic aspects. In this context, periodic networks of magnetic nanoelements (e.g., magnonic crystals, MCs) have been intensively studied [1]. Periodicity is a key property for manipulating the information carriers in these systems, namely spin waves, through Bragg diffraction, both from the experimental point of view (the geometric arrangement of the array elements, as well as the action of any external field, modifies the spin wave dispersions) and from the theoretical one (by exploiting the translational symmetry to reduce the problem size for calculations, and in general by using the powerful formalism developed for Bragg diffraction). In the same years, the field of artificial spin ice (ASI), originally developed to simulate at the nanoscale the behavior of frustrated atomic spin ices [2], has gradually merged into the broader theme of Magnonics due to their common inherent periodic properties [3].
Defect structures have attracted recent interest [4][5][6], with the perspective of identifying effects analogous to those found in semiconductors, or of manipulating magnetic frustration in ASI. In particular, quasicrystals, i.e., aperiodic structures with only rotational symmetry, exhibit diverse behaviors, such as random monopole drift, and have potential applications in cipher encryption [7][8][9]. Clearly, in these systems translational symmetry is absent, and the residual rotational symmetries are modified by non-uniform magnetization textures generated, for example, by the application of an external magnetic field. Hence, in these cases, the reduction of the degrees of freedom of the problem to a single primitive cell is impossible, and a modified approach must be considered for systems that become larger as the required accuracy grows. For such large systems the computation times are likely to diverge and, furthermore, the addressable memory of the machine might not be adequate to hold the problem. Hence, it is necessary to look for smarter, more efficient calculation tools.
Among the different methods for calculating the normal modes of a magnetic medium, those based on micromagnetics are particularly widespread. Micromagnetics considers the overall static and dynamic properties of a medium as consequences of interactions among discretized elements of the magnetization vector field [10]. The discretization can, in turn, be constructed within either a finite-element [11] or a finite-difference [12] approach. While the former is particularly appropriate for systems of curved shape, the latter is easier to deal with when implementing differential equations, although it is limited by the serrated shape imposed by the prism-cell representation.
The fact that the average magnetic properties are represented as interactions among the magnetic moments of the discretized elements makes it possible to fully take into account both dipolar and ferromagnetic exchange interactions among particles of any shape, making such micromagnetic methods general purpose. Still, these approaches may follow different paths. As far as micromagnetic programs are concerned, it is possible to exploit the magnetization evolvers to calculate the dynamic magnetic response to a small external excitation in the time domain, and then extract the mode properties (e.g., frequency and profile) through Fourier analysis. The modes found with this method have significant amplitude (Fourier coefficients) only if they are consistent with the symmetry of the spatial profile of the excitation, so that modes with incompatible symmetry must be calculated in different runs, with a different excitation profile; moreover, the resolution of the modes depends on how fine the frequency sampling interval is set, so that resolving quasi-degenerate modes would require extremely fine intervals, with diverging computation times [13][14][15].
The direct calculation of the magnetic response to an external excitation (polarizability) has also been used to find the frequency of the magnetic modes as resonances (peaks) in the susceptibility spectra [16]. Generally speaking, this approach gives rise to a system of algebraic equations of the form

D δm = h (1)

(non-homogeneous linear system), where the effective fields of the Landau-Lifshitz equation enter the matrix D of the system together with the fixed frequency of the external excitation, the magnetization of each cell (either the Cartesian components or the polar angles) forms the variable vector δm, and the exciting field gives rise to the non-homogeneous term h. Finally, methods based on the reduction of the linearized motion equations to an eigenvalue/eigenvector problem have been proposed [17][18][19][20] and used to study several significant problems, such as spin waves excited by spin current pulses, calculation of permittivity and ferromagnetic resonance absorption spectra, mode softening and asymmetric bandwidth variations, spin waves in antidot lattices, angle resolved band diagrams and also issues in artificial spin ice systems [21][22][23][24][25].
Within this eigenvalue/eigenvector approach, the mathematical problem has the form

H δm = ω C δm (2)

(generalized eigenvalue problem); here C is a sparse matrix whose form depends on the choice of the magnetization coordinates, H contains the effective fields, and ω is the (unknown) frequency of the spin modes. The number of equations in systems (1) and (2) is very large, being twice the number of micromagnetic cells in the magnetic particle (or in the unit cell of the magnonic crystal). Therefore, the use of highly scalable numerical techniques for solving these problems is required, i.e., techniques able to handle a growing number of cells without an overwhelming amount of computational work, thanks to calculation methods that retain the same efficiency in speed and precision at greatly different system sizes. As shown below, thanks to the mathematical properties of the magnetic interactions and without further approximations, the matrix of the system (D or H) can be exactly split into the sum of a sparse matrix, containing local interactions (exchange, Zeeman, anisotropy), plus a dense one for the demagnetizing term, which is long range but has the form of a convolution product (Toeplitz structure). These two properties can be exploited to improve the scalability of the numerical solution of the linear equation system, Equation (1). This is the case with all the micromagnetic packages currently in use for the calculation of the static magnetization or its time evolution (e.g., [11,12]); in contrast, the direct solving methods employed so far for the generalized eigenvalue problem (2) of magnetization dynamics require direct access to the unsplit matrix H, making the above-mentioned properties useless: the full matrix H is neither sparse nor Toeplitz.
An iterative method for solving the generalized eigenvalue problem is proposed in [20], but it is used only for treating the sparse part (in the applications presented therein, the demagnetizing term is always omitted).
In this paper, we show that an iterative method can be used for finding the magnetic eigenmodes of any magnetic system or network, periodic or not. The dynamical matrix method developed by some of us [17] is reformulated here in an original way, and a proper iterative, highly scalable method is identified for the solution of the eigenproblem when both exchange and demagnetizing interactions are taken into account. Moreover, a good preconditioning matrix is found for this specific problem, which improves the computational efficiency by improving convergence, and also opens the way to the use of an additional important class of eigensystem solvers, as discussed below. In this way we show that the aforementioned symmetry properties can be transposed to the dynamical matrix method, obtaining an approach that combines the ability to deal with big systems, as in standard micromagnetic packages, with that of naturally obtaining the eigenmodes of the structure, degenerate or not, without the need for any (specially shaped) external excitation.
Then, we apply the method to a system that is hardly manageable with the ordinary dynamical matrix method, namely an artificial spin ice (ASI) system to which a progressive distortion is applied. Clearly, the distortion makes it impossible to reduce the system to a single primitive cell with periodic boundary conditions, and instead the full, big network (more than 10^6 elemental micromagnetic cells) has to be processed. In particular, we focus on the low frequency modes, formerly found to rule the switching of the macrospins and to be dual to domain wall formation and annihilation [7]. The recent interest in aperiodic systems derives from the need to simulate the (controlled) spin disorder in atomic spin ice due to thermal energy [26]; to provide persistent and extended frustration even in lattices with no intrinsic frustration [6]; and to replicate for spin waves, in magnonics, the role played by defects in semiconductors for electrons [4,27,28]. A system of this type has also been recently studied with a different method [29].

Theoretical Approach
The dynamical matrix method is based on the linearization of the Hamilton equations of motion, allowing one to study the self-oscillations (spin modes) of a magnetic system of arbitrary shape, symmetry and non-uniform ground state, taking into account dipolar and ferromagnetic exchange interactions, external field and magnetic anisotropies [17]. Assuming that the magnetic particle is divided into N identical interacting cells, and that the uniform normalized magnetization in each cell m_i can be represented by its polar and azimuthal angles θ_i and φ_i, the motion equations in the linear regime become

−i C (d δm/dt) = H δm, (3)

where H is the Hessian matrix of the system calculated in the ground state, H_jk = ∂²E/∂u_j ∂u_k with u ∈ {θ, φ}; the energy function E contains all the relevant contributions (Zeeman, exchange, anisotropy, demagnetization),

E = E_Zee + E_exc + E_ani + E_dem,

and δm is the vector of the magnetization fluctuations (variables):

δm = (δθ_1, δφ_1, …, δθ_N, δφ_N)^T. (4)

Equation (3) is derived within a Hamiltonian formalism (Appendix A of Ref. [17]) and is based on the choice of polar and azimuthal angles as independent variables, so that the magnetization and its fluctuation in each cell can be written, respectively, as

m_i = (sin θ_i cos φ_i, sin θ_i sin φ_i, cos θ_i), δm_i = (δθ_i, δφ_i).

The motion Equation (3) takes the form of a generalized eigenvalue problem (Equation (2)), provided that a harmonic time dependence δm ∝ e^(iωt) is assumed. The matrix C is sparse and Hermitian; the matrix H is complex and Hermitian positive definite (being the Hessian of the system calculated at equilibrium). Therefore, the eigenvalues are real and occur in pairs of opposite sign. Some eigensolvers (see below) are able to exploit this symmetry, that is, they compute the solution of Hermitian problems with less storage, a lower computational cost and more accuracy than methods that ignore this property. The Krylov-Schur and generalized Davidson eigensolvers are among them.
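As a sanity check on this spectral structure, the toy sketch below (not the paper's actual matrices) pairs a real symmetric positive-definite stand-in for the Hessian H with a block-Hermitian C whose 2 × 2 blocks are proportional to [[0, i], [−i, 0]] (an assumed form, consistent with the Hamiltonian structure described here), and verifies numerically that the generalized eigenvalues are real and come in ± pairs:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
N = 4  # toy number of cells; 2N variables (delta-theta_i, delta-phi_i)

# Real symmetric positive-definite stand-in for the Hessian H
A = rng.standard_normal((2 * N, 2 * N))
H = A @ A.T + 2 * N * np.eye(2 * N)

# Block-diagonal Hermitian C: one 2x2 block [[0, i], [-i, 0]] per cell,
# scaled by a positive factor playing the role of M_si sin(theta_i)/gamma_i
C = np.zeros((2 * N, 2 * N), dtype=complex)
for i in range(N):
    c = rng.uniform(0.5, 1.5)
    C[2 * i, 2 * i + 1] = 1j * c
    C[2 * i + 1, 2 * i] = -1j * c

# Generalized eigenproblem H dm = omega C dm
omega = eig(H, C, right=False)

# Eigenfrequencies are (numerically) real and occur in +/- pairs
w = np.sort(omega.real)
print("max |Im(omega)|:", float(np.max(np.abs(omega.imag))))
print("omega:", np.round(w, 3))
```

For a real symmetric H the ± pairing follows from complex conjugation of the eigenvalue equation (C has purely imaginary entries); solvers that exploit the Hermitian structure can then compute only the positive branch.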
Here we distinguish the sparse part of the matrix H, namely H_s, from the dense one, H_d:

H = H_s + H_d. (5)

As shown in Appendix A, the H_s matrix contains diagonal or quasi-diagonal interactions (exchange, Zeeman, anisotropy, and part of the demagnetization), while H_d contains the remaining part of the demagnetizing interaction (which can be written as a convolution product). There are two broad classes of methods for solving generalized eigenvalue problems: direct solvers and iterative ones. Direct solvers manipulate the matrix, keeping track of the transformations, until the matrix is reduced to diagonal form. These methods are reliable and very efficient when all the eigenvalues/eigenvectors are needed, but require explicit access to the matrix and a large memory size; storage grows as O(N²) and computation time as O(N³), with N the number of micromagnetic cells. Iterative solvers instead produce sequences that converge to the eigenvalues one by one; they are intrinsically less efficient than direct solvers, but have better scalability and may prove useful when only a small fraction of the eigenvalues is required. In addition, in this case the system matrix is only accessed through its action (multiplication) on a given vector; therefore, (1) these methods do not require explicit storage of the matrix, and (2) the splitting shown in Equation (5) can be usefully exploited. Most eigensolvers aimed at the partial solution of problems in which the matrices are large and sparse perform a Rayleigh-Ritz projection to extract the spectral approximations, that is, they project the problem onto an appropriately built lower-dimensional subspace. The subspace is changed and the process iterated until the solution is found within the user-defined precision; the eigenpairs of the original problem are then obtained by back-transformation.
Eigensolvers differ from each other in which subspace is used, how it is built, and in other technicalities aimed at improving convergence, reducing storage requirements, etc. An iterative procedure requires a starting point in the spectrum; a natural choice is to start from the largest or smallest eigenvalue (in value or in absolute value), passing gradually to the next ones.
For the modes of a magnetic system, the basic operation required by an iterative solver can be written as

H v = H_s v + H_d v, (6)

with v an arbitrary vector. H_s being sparse, its storage size is O(N) and the product H_s v requires O(N) operations. The other term contains a dense matrix, whose matrix-vector product would require, in general, O(N²) operations; however, in the case considered here we have managed to cast it in the form of a convolution product, thus reducing the number of operations required. To see this, consider the j-th component of the vector resulting from the matrix-vector multiplication in Equation (6), with H_d given by Equation (A3):

(H_d v)_j = Σ_k M_sj (∂m_j/∂u_j) · N̄(r_j − r_k) · M_sk (∂m_k/∂u_k) v_k,

where M_si is the saturation magnetization in the i-th cell, N̄(r_j − r_k) the demagnetizing tensor between cells j and k, u stands for either polar angle, and the derivatives are calculated at equilibrium. The introduction of the variables X(r_k) = M_sk (∂m_k/∂u_k) v_k and E(r_j) = M_sj (∂m_j/∂u_j) makes the (discrete) convolution product apparent:

(H_d v)_j = E(r_j) · Σ_k N̄(r_j − r_k) X(r_k).

The storage required for the tensor N̄ and the arrays E and X is O(N), while the number of operations for calculating the convolution product, exploiting the convolution theorem and the fast Fourier transform algorithm, is O(N log N) [30]. When N is large, this is a significant improvement with respect to direct solvers. Direct solvers proved able to reliably determine the eigenmodes of magnetic particles divided into some tens of thousands of cells; the present improvement allows a number of cells two orders of magnitude greater (millions of cells) to be considered.
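The gain from the convolution form can be checked with a toy one-dimensional sketch: a scalar kernel depending only on the cell separation (a hypothetical 1/|r|³ decay standing in for the demagnetizing tensor) is applied to a vector both by the direct O(N²) sum and by zero-padded FFTs, with identical results:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256  # cells along a toy 1D chain

# Scalar stand-in for the kernel N(r_j - r_k): depends only on the cell
# separation (Toeplitz structure); kernel[m] corresponds to separation m-(N-1)
sep = np.arange(-(N - 1), N)
denom = np.where(sep == 0, 1, np.abs(sep) ** 3).astype(float)
kernel = np.where(sep == 0, 0.0, 1.0 / denom)

E = rng.standard_normal(N)  # E(r_j): equilibrium-dependent prefactors
X = rng.standard_normal(N)  # X(r_k): M_sk times derivative terms times v_k

# Direct O(N^2) evaluation of (H_d v)_j = E_j * sum_k K(j - k) X_k
direct = E * np.array(
    [(kernel[(j - np.arange(N)) + N - 1] * X).sum() for j in range(N)]
)

# O(N log N) evaluation: zero-padded FFTs (discrete convolution theorem)
P = 4 * N  # padding >= 3N - 2 avoids circular wrap-around
full = np.fft.irfft(np.fft.rfft(X, P) * np.fft.rfft(kernel, P), P)
fft_result = E * full[N - 1 : 2 * N - 1]

print("max abs difference:", float(np.max(np.abs(direct - fft_result))))
```

The same zero-padding trick extends to the 2D/3D tensor kernel of the actual demagnetizing field, one FFT per tensor component.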
There are many numerical methods available for the iterative solution of an eigenvalue problem, in principle all equally valid but with different degrees of success depending on the numerical characteristics of the specific problem. In fact, iterative eigensolvers do not mathematically guarantee the convergence of all eigenvalues; typically, the selection of a method is based on empirical grounds. A range of different iterative eigensolvers is available from standard numerical libraries; we tested the following methods: power iteration with deflation, subspace iteration with Rayleigh-Ritz projection, Arnoldi with explicit restart and deflation, Lanczos with explicit restart and deflation, Krylov-Schur, generalized Davidson, Jacobi-Davidson, RQCG (conjugate gradient minimization of the Rayleigh quotient) and CISS (contour-integral solver). We found that the Krylov-Schur approach [31] usually gives the best results in finding the spin modes of a magnetic system; the Lanczos method with explicit restart [32] also proved useful for resolving degenerate modes in rare cases. Krylov subspace methods are a widely used class of algorithms for approximating a small subset of the eigenvalues (and corresponding eigenvectors) of large, sparse matrices. In particular, the Krylov-Schur method uses as projection subspace the so-called Krylov subspace associated with the given matrices and a given initial vector; it also incorporates an effective and robust restarting technique. Restarting consists in stopping the algorithm after a defined number of iterations and rerunning it with a new starting vector computed from the recently obtained spectral approximations; this improves the convergence towards the wanted eigenvectors. A common property of iterative eigensolvers working without direct access to the matrix, i.e., based on Equation (6), and without preconditioning, is that they are well suited for finding the extremal (largest or smallest) eigenvalues.
However, they have major difficulties when dealing with interior (i.e., non-extremal) eigenvalues. In our experience, the use of harmonic projection alone (without preconditioning) [33] does not relieve these difficulties in most cases of interest in spin dynamics.
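A minimal matrix-free computation of extremal eigenvalues can be sketched with standard library tools. Here scipy's implicitly restarted Lanczos (ARPACK) stands in for the Krylov-type solvers discussed above; the operator is defined only through its action, with toy stand-ins for the sparse local part and the long-range term (the latter written as a plain O(N²) product for brevity):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, eigsh

N = 100
# Sparse local part (exchange/Zeeman/anisotropy stand-in): tridiagonal stencil
Hs = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(N, N), format="csr")

# Toy long-range kernel standing in for the demagnetizing term; accessed only
# through its action on a vector (in the paper this action is evaluated by FFT)
j = np.arange(N)
K = 0.01 / (1.0 + np.abs(j[:, None] - j[None, :]))

H = LinearOperator((N, N), matvec=lambda v: Hs @ v + K @ v, dtype=float)

# Implicitly restarted Lanczos (ARPACK), a Krylov-subspace method in the same
# family as Krylov-Schur: a few extremal eigenpairs, the matrix never formed
vals, vecs = eigsh(H, k=4, which="SA")
print(np.round(np.sort(vals), 5))
```

The solver only ever calls the `matvec` closure, which is exactly the situation of Equation (6): the splitting into sparse and convolution parts stays hidden behind the operator interface.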
The rate of convergence is strongly dependent on the spectrum, and preconditioning is typically used to alter the spectrum and hence accelerate the convergence of iterative techniques. A preconditioner is a matrix used to modify an (ill-conditioned) eigenproblem, such that the product of its inverse with the matrix of the eigenproblem gives a new matrix closer to diagonal form. The ideal preconditioner would thus be the given matrix itself; however, this is not a practical choice, as applying it would require computing the inverse, an effort equivalent to that of solving the original problem. An effective preconditioner is therefore one that is cheap to invert while still yielding a significant improvement of the preconditioned matrix. For the calculation of spin modes, the matrix H_s of Equation (A2) is an excellent preconditioner: it is rather close to the full matrix H of the eigenproblem, but its sparsity allows its efficient use as a preconditioner.
In addition to improving the convergence of the Krylov-Schur eigensolver, the availability of a good preconditioner also allows the use of a different subclass of iterative eigensolvers, those based on preconditioning. This is particularly important for the calculation of interior eigenvalues, because preconditioned eigensolvers are usually better able to handle this problem. We have used a preconditioned generalized Davidson eigensolver [34], combined with harmonic projection, to attack problems of interest in the spin dynamics of a magnetic particle and determine both the lowest and interior eigenfrequencies.
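The role of the sparse part as a preconditioner can be illustrated with a toy sketch (using LOBPCG from scipy rather than the generalized Davidson solver employed in the paper): the inverse of the sparse part, available through a cheap LU factorization, preconditions the iteration for the full operator, which is again accessed only through its action:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, LinearOperator, lobpcg

rng = np.random.default_rng(2)
N = 150

# "Full" symmetric positive-definite operator: sparse local part plus a small
# dense long-range correction (toy stand-ins for H_s and H_d)
Hs = diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(N, N), format="csc")
j = np.arange(N)
Hd = 0.005 / (1.0 + np.abs(j[:, None] - j[None, :]))
H = LinearOperator((N, N), matvec=lambda v: Hs @ v + Hd @ v, dtype=float)

# Preconditioner: exact inverse of the sparse part only (cheap sparse LU);
# it is close to H because Hd is a small correction
lu = splu(Hs)
M = LinearOperator((N, N), matvec=lu.solve, dtype=float)

X0 = rng.standard_normal((N, 4))
vals, vecs = lobpcg(H, X0, M=M, tol=1e-7, maxiter=400, largest=False)
print(np.round(np.sort(vals), 5))
```

In a realistic setting the LU factorization of H_s is computed once and reused at every iteration, so the cost per step stays close to that of the unpreconditioned solver.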

System Structure
The subject of our investigation is a Kagome artificial spin ice to which we apply a Fibonacci distortion (twist) [35,36], whose strength is gauged by the parameter r (r = 1 stands for no distortion at all). The Kagome artificial spin ice is derived from a honeycomb network whose underlying Bravais lattice is hexagonal, with two primitive vectors a_1 and a_2 of the same length, forming an angle of 60° in the 2D plane, and a two-point basis (A and B), Figure 1a.
The hexagonal lattice is distorted by applying an aperiodic Fibonacci sequence, such that along each primitive-vector direction the honeycomb lattice points are separated by either distance L or distance S, with L/S = r and (L + S)/2 = |a_1| fixed. The sequence is applied along both primitive-vector directions. The distorted Kagome system is particularly significant for our purposes, since for any r > 1 the system is non-periodic from any point of view, the size of the problem is formidable (the number of film cells exceeds 10^6), and only highly scalable methods (i.e., those exploiting the Toeplitz structure of part of the matrix of the motion equations) can solve it in a reasonable time. The advantage of our approach over conventional micromagnetic methods is that it allows us to find all the spin modes, regardless of the relationship of their profile with that of an external excitation (which does not even exist in our case), and to accurately resolve also the degenerate ones. We focus our investigation on the low frequency modes, known to be localized in different areas within the system (especially in film segments or vertex nodes) and consequently to respond differently to the twist. In fact, the specific Fibonacci distortion affects the segment size, and consequently the angles among segments, with a definite impact on the internal field magnitude, mode localization and frequency. However, as is apparent from inspection of Figure 2, the specific method of distortion leaves a few segments unchanged, while heavily modifying others. The magnetization distribution is twisted as a consequence, and so is the internal magnetic field, on which the spin wave mode frequencies depend. The progressive distortion of the magnetization distribution as r is increased lifts the degeneracy of otherwise geometrically equivalent points, as discussed below.
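The sequence of distorted spacings can be generated programmatically. The sketch below assumes the standard Fibonacci substitution rule L → LS, S → L (the paper specifies only that the spacings follow the Fibonacci sequence), with L/S = r and (L + S)/2 = |a_1| held fixed, using |a_1| = 432 nm and r = 1.25 for illustration:

```python
def fibonacci_word(n_iter):
    """Fibonacci word over {L, S} via the substitution L -> LS, S -> L."""
    word = "L"
    for _ in range(n_iter):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

def lattice_positions(word, a1, r):
    """Lattice-point coordinates along one primitive-vector direction.

    L/S = r with the mean spacing (L + S)/2 = a1 held fixed.
    """
    S = 2.0 * a1 / (r + 1.0)
    L = r * S
    xs = [0.0]
    for c in word:
        xs.append(xs[-1] + (L if c == "L" else S))
    return xs

word = fibonacci_word(8)  # word lengths follow the Fibonacci numbers
xs = lattice_positions(word, a1=432.0, r=1.25)  # distances in nm
print(len(word), word[:13])
print(f"last lattice point at {xs[-1]:.1f} nm")
```

The same word is applied independently along both primitive-vector directions to obtain the 2D distorted lattice.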

Calculation Methods
The calculation of spin modes has been performed using the following standard values of the magnetic constants of permalloy: uniform saturation magnetization M_s = 860 G, γ = 1.854 × 10^7 rad/(s Oe), ferromagnetic exchange constant A = 1.3 × 10^−6 erg/cm and no anisotropy. An external field of 1 kOe is applied along the x-axis (horizontally), so that the calculated ground state assumes a quasi Ising-saturated magnetization configuration, i.e., in each segment the magnetization is almost parallel to the local segment axis (Figure 3). The ground states have been calculated with a magnetization evolver [12] starting from different initial states (uniform magnetization, several random magnetization configurations) in order to ensure consistent results. A single fundamental state has been found for each case and used for the subsequent dynamic calculation. The investigated systems (Figure 2) are finite, non-periodic structures. We use right square prism cells with a 10 nm × 10 nm base (the height, 15 nm, coincides with that of the sample), so that the array is formed by a single layer of 1,048,576 cells. Although the exchange length corresponding to our magnetic parameters is 5.3 nm, we safely choose a slightly larger cell size (hence, a slight overestimate of the exchange interaction) as a good compromise between faster calculations and reliable results. The only case extremely sensitive to the detailed balance between exchange and dipolar energies is magnetic vortex formation, which is excluded in our system by the specific aspect ratio of the segments. The segment width is 40 nm and (L + S)/2 = 432 nm, so that each hexagon side is composed of about 4 × 30 cells. The lowest frequency modes are calculated for arrays with r ranging from 1.00 to 1.60, in steps of 0.05.
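As a quick consistency check (a sketch, not part of the paper's workflow), the quoted exchange length follows from the usual CGS expression l_ex = sqrt(A/(2π M_s²)), and the stated cell count corresponds exactly to a 1024 × 1024 grid:

```python
import math

A = 1.3e-6   # ferromagnetic exchange constant, erg/cm
Ms = 860.0   # saturation magnetization, G (emu/cm^3)

# CGS exchange length l_ex = sqrt(2A / (4*pi*Ms^2)) = sqrt(A / (2*pi*Ms^2))
l_ex_nm = math.sqrt(A / (2.0 * math.pi * Ms**2)) * 1e7  # cm -> nm
print(f"exchange length: {l_ex_nm:.2f} nm")  # ~5.3 nm, as quoted in the text

# The single layer of 1,048,576 cells of 10 nm x 10 nm is a 1024 x 1024 grid
assert 1024 * 1024 == 1_048_576
```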

Unperturbed System: Mode Profiles
The calculation performed on this system required about 70 h on the most powerful facility we have access to (the DLX cluster at the University of Kentucky), using a single node with 32 Intel Xeon X7560 processors and 512 GB of memory. For comparison, the largest eigenvalue analysis with a direct algorithm to date (as far as we know) was the solution of a problem of dimension 10^6, half the number of our independent variables, carried out in 2014 on the Japanese K computer at RIKEN. To obtain this result, the K computer employed 88,000 processors drawing a peak power of 12.6 MW, with an annual operating cost of US$10 million [37]. Many different modes are found in the lowest frequency spectrum of the system (0-20 GHz); some typical ones are plotted in Figure 4. Note that, due to the degeneracy and to the presence of other resonances that we consider less interesting and do not study in detail, as discussed below, the modes investigated in this work cannot strictly be considered extremal eigenvalues; rather, they are interior eigenvalues in the mathematical sense, since the selected modes are interspersed within the first 1000, starting from the lowest frequency. As seen in the figure, there are modes located along the hexagon base or lateral edges, and modes localized at the vertices, possibly with several nodes. As shown in [29,38] for a regular (r = 1) lattice, the modes of Figure 4 can be experimentally detected.
The numerical approach used in [38] to obtain a first interpretation of the experimental data (micromagnetic calculation of the response to a small external excitation in the time domain) led to images of the modulus of the mode amplitudes (shown in the supplemental material of the cited paper). However, due to the low resolution, that analysis could not discriminate between modes located in the same segment but with different oscillations (nodes), nor could it resolve the quasi-degeneracies of modes localized in different hexagons (discussed below). These observations highlight the advantages of the extended dynamical matrix approach over the full micromagnetic one.
An approximate correspondence can be drawn between the modes of Figure 4a-d and the modes labeled B, C and D in [38], respectively. Due to the periodicity of the system, each mode occurs in different hexagons at almost the same frequency (apart from border effects due to the finiteness of the sample, which are visible only for the most peripheral modes). Therefore, each mode listed in Figure 4 actually corresponds to a family of eigensolutions with (almost-)degenerate frequencies. In this specific case, since any linear combination of degenerate eigenvectors is still an eigenvector, the modes in each family can be represented in different ways: for example, each mode of a family can be chosen to occupy a single vertex or side, or each mode can extend to the whole array, with different phase relations between the occupied sites. We give our preference to the first representation, which links better with the case r = 1. There are also modes with a different number of nodes along the segments, modes extending over several adjacent hexagons, and hybrid modes, observed when the frequencies of two different modes are close enough to allow their mixing. Although the calculation allows us to find and resolve all the modes, we have chosen to limit ourselves to the strongest and most relevant ones, shown in Figure 4.
In order to get a deeper understanding of the behavior of the spin modes in the hexagonal structure, we have calculated the spin wave dispersion of a single permalloy strip, 40 nm wide, with the same thickness as the Kagome ASI (15 nm) and infinite length, using the standard dynamical matrix method with periodic boundary conditions [39], taking into account dipolar, ferromagnetic exchange and Zeeman contributions. This structure mimics on a larger scale the behavior of the thin-film segments forming the hexagons, whose magnetization is approximately uniform (Figure 3). Given that we are looking for the behavior of spin waves propagating along the strip axis, we considered two configurations: one in which the applied field is parallel to the strip axis, and so is the strip magnetization; the other in which the applied field is tilted 60° with respect to the strip axis, while the magnetization is displaced by 15° from the strip axis (hence, it forms an actual angle of 45° with the field). This happens because the equilibrium magnetization results from the competition between the external field and the shape anisotropy, whose easy axis is directed along the strip; in the second case these have different orientations. These two configurations correspond to the two configurations of the actual ASI segments (apart from a 180° mirror symmetry of some segments), and are studied to extract the general behavior of the frequency dispersion of the spin waves confined to the ASI segments (hexagon sides). Figure 5 shows the results, confirming an almost flat frequency curve as a function of the wavevector. The frequency is plotted versus the wavevector, taken parallel to the stripe (the maximum wavevector hence corresponds to a minimum wavelength of 25 nm). Solid red line: the stripe is horizontal, i.e., aligned with the external field, and the magnetization is assumed parallel to the stripe. Dashed green line: the stripe is tilted 60° with respect to the external field, and the magnetization is uniform and rotated by 15° from the stripe axis, towards the external field. In both cases the dispersion is negative, as expected for propagation exactly, or nearly, parallel to the average magnetization. The inset shows the geometry of field and magnetization in the two cases.
The wavevector (k = 2π/λ) range in Figure 5 corresponds to wavelengths λ > 160 nm, so as to include the typical wavelengths of the resonances considered in Figure 4, located on the sides of the hexagons, which are about 300 nm long. For example, the frequency of the stripe mode with aligned magnetization at k = 21 rad/µm is 17.02 GHz, to be compared with the frequency of the mode of Figure 4d. Within this range the curves appear well separated and rather flat. Therefore, only a weak dependence of the frequency on the wavelength is expected in the Kagome ASI.

Perturbed System: Mode Frequency Dispersion and Bands
The behavior of the selected modes is followed as a function of the parameter r. Despite the progressive deformation of the structure, the modes can still be recognized, and their frequencies are plotted in Figure 6. Since the elements (edges, vertices) of different hexagons are no longer equivalent when r ≠ 1, each mode forms a band, whose width tends to increase with increasing r. This may lead to the superposition of bands, as happens, for r > 1.25, to the band of the mode with one node located on a lateral edge and that of the uniform mode located on a lateral edge.
Two mechanisms contribute to the behavior of the frequency dispersion vs. r. The hexagon sides of a given type, all equal when r = 1, become different both in (1) length and in (2) orientation. As a consequence, the band of each spin mode widens depending on how strongly the mode is affected by these variations. The large increase in the bandwidth of the mode of Figure 4a visible in Figure 6 (about 5 GHz at r = 1.60) shows that this mode is very sensitive to small changes in the geometry of its support. This is typical of localized modes [40], and is due to the strong dependence of the potential well in which the mode is localized on small variations in the shape of the structure. In this case the dependence is due to the second mechanism only, since the intrinsically localized nature of the mode makes it insensitive to the length of the sides. The other modes (Figure 4b-d), which extend along whole hexagon sides, also undergo a bandwidth widening, but to a lesser extent (less than 2 GHz at r = 1.60). In this case both mechanisms contribute, but the flatness of the dispersion curves in Figure 5 shows that the effect of a moderate wavelength variation is minor, so that the second mechanism prevails over the first.

Conclusions
In this paper, we illustrated a new method for studying the dynamics of large single magnetic particles and arrays, overcoming the limitations on system size set by the computation time and memory space that direct solvers require. The relevance of casting a dynamical micromagnetic problem in the form of a generalized eigenvalue problem for improving its scalability has been shown through simulations of a Kagome artificial spin ice with an applied Fibonacci distortion. This study allowed us to identify the main spin modes of this structure, follow their changes as a function of the deformation, and understand their behavior in physical terms.
Several different eigensolvers have been tested; we have found that the best results in finding the lowest spin modes of a magnetic particle are achieved with the Krylov-Schur approach. We have also identified an excellent preconditioner for this type of problem. This opens the way to the treatment of large and/or finely meshed micromagnetic dynamical problems, with fine details and possibly three-dimensional structures, including systems with aperiodicity or various defects.
Author Contributions: All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding: Research at the University of Kentucky was supported by the U.S. National Science Foundation Grant DMR-1506979.

Data Availability Statement:
The data presented in this study are available in this article.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Reformulated Dynamical Matrix Formalism
Following the dynamical matrix approach [17], a generalized eigenvalue problem can be built up from the linearized motion equation of the magnetization. Assuming that the magnetic particle is divided into N identical interacting cells, and that the uniform normalized magnetization in each cell m_j can be represented by its polar and azimuthal angles θ_j and φ_j (the angle of the in-plane component of the magnetization with respect to the horizontal direction, and the angle of the magnetization with respect to the perpendicular direction, respectively), the fluctuations of the magnetization can be used to build the vector of unknowns, Equation (4). The C matrix appearing in Equation (2) is block diagonal, with one 2 × 2 block per cell:

C_i = (M_si sin θ_i / γ_i) [[0, i], [−i, 0]], (A1)

where M_si and γ_i are the saturation magnetization and gyromagnetic ratio in the i-th cell, respectively. Finally, the Hessian of the system contains the following magnetic interactions: exchange, Zeeman, anisotropy, magnetic dipole. In particular, we split the dipolar contribution in two:

∂²E_dip/∂u_j ∂u'_k = δ_jk Σ_l M_sj (∂²m_j/∂u_j ∂u'_j) · N̄(r_j − r_l) · M_sl m_l + M_sj (∂m_j/∂u_j) · N̄(r_j − r_k) · M_sk (∂m_k/∂u'_k),

with u, u' ∈ {θ, φ}. The first term contributes to the sparse part H_s, since it vanishes unless j = k (the second-order partial derivatives act on the same cell); the second term enters H_d and can be written as a convolution product. The sparse part is then the Hessian of the local energy terms, plus this diagonal dipolar term:

(H_s)_ju,ku' = ∂²/∂u_j ∂u'_k [ Σ_i ( −M_si m_i · H_ext − K_i^(1) (m_i · v)² + (A/s²) Σ_(l ∈ N.N.) (1 − m_i · m_l) ) ] + δ_jk Σ_l M_sj (∂²m_j/∂u_j ∂u'_j) · N̄(r_j − r_l) · M_sl m_l. (A2)

Here A is the exchange constant, s the cell size, K^(1) and v the uniaxial anisotropy constant and the versor of its axis, N̄(r_j − r_k) the demagnetizing tensor between cells j and k, and all the derivatives are calculated at equilibrium; N.N. stands for nearest-neighbor cells. Only the dipolar interaction contributes to the dense part of the Hessian:

(H_d)_ju,ku' = M_sj (∂m_j/∂u_j) · N̄(r_j − r_k) · M_sk (∂m_k/∂u'_k), j ≠ k. (A3)