Enhancing the Iterative Smoothed Particle Hydrodynamics Method

Abstract: Motivated by recent research on the iterative approach proposed for the smoothed particle hydrodynamics (ISPH) method, some ideas to improve the process are introduced. The standard procedure is enhanced by iterating on the residuals while preserving the matrix-free nature of the process. The method is appealing, providing reasonable results with disordered data distributions too, and no kernel variations are needed in the approximation. This work moves forward with a novel formulation requiring a lower number of iterations to reach a desired accuracy. The computational procedure is described, and some results are presented to assess the proposed formulation.


Introduction
Mesh-free methods are promising approaches which have emerged in recent years as valid computational alternatives to grid-based ones. Grid-based methods adopt a computational frame made up of nodes, where the field variables are evaluated, related to each other through a predefined nodal connectivity. Mesh-free methods are independent of a mesh and are well suited for problems with complex geometries, or problems which require highly adaptive discretizations, while maintaining a suitable computational effort [1,2,3].
Smoothed Particle Hydrodynamics (SPH) is a popular kernel-based approach introduced in astrophysics [4,5,6,7,8,9] and it is being increasingly used [3,10,11,12,13,14]. The method provides a spatial approximation via a discrete kernel convolution. In [15] the standard procedure has been improved by means of an iterative approach refining the residuals. An approximant is generated by employing strictly positive definite kernel functions and the method, in convergence, improves the results without requiring evenly spaced data distributions. If the linear approximation order is ensured [16,17], the iterative procedure performs better. In a recent paper [18] some results on this improved approach have been presented; however, the algorithm can become prohibitive in the number of iterations. In this work, starting from the linear approximation order, a formulation which guarantees accuracy with a lower number of iterations is proposed and discussed. The paper is organized as follows. In Section 2 the main ideas of the method are presented. In Section 3 the iterative process and the fast formulation are described. In Section 4 numerical validations are reported, choosing the linear approximation order as the starting iteration estimate. In Section 5, remarks and ideas on future work are outlined.

The Standard Approximation
The method approximates a function f at x ∈ R^d, for d ≥ 1, by means of the kernel approximation

f(x) ≈ ∫_Ω f(ξ) K(x, ξ; h) dΩ,   (1)

where ξ = (ξ^(1), ..., ξ^(d)) and K(x, ξ; h) is the kernel function. The spatial influence of the kernel is localized by the smoothing length h. The continuum is decomposed into a set of arbitrarily distributed data sites Ξ = (ξ_j)_{j=1}^N, where (f(ξ_j))_{j=1}^N are the corresponding measurements, and the particle approximation is introduced as

⟨f_h(x)⟩ = Σ_{j=1}^N f(ξ_j) K(x, ξ_j; h) dΩ_j,   (2)

where dΩ_j is the measure of the subdomain Ω_j associated with the datum ξ_j. The particle approximation often loses accuracy, especially on the boundary and with non-uniform data locations [19]. Improvements can be gained by making use of the Taylor series expansion, retaining p terms, multiplying by the kernel function and integrating over Ω [16,17,19]. For p = 1 it means

⟨f_h^1(x)⟩ = ⟨f_h(x)⟩ / ⟨I_h(x)⟩,   (3)

with ⟨I_h(x)⟩ = ∫_Ω K(x, ξ; h) dΩ, and the corresponding discrete formulation is

f_h^1(x) = Σ_{j=1}^N f(ξ_j) K(x, ξ_j; h) dΩ_j / Σ_{j=1}^N K(x, ξ_j; h) dΩ_j.   (4)

By increasing p, the order of accuracy increases, at the cost of solving a linear system for each evaluation point [16,17]. In [15] an iterative matrix-free approach (ISPH) is introduced to improve the standard method. In [18] it is adopted with the normalized approach (4) supplying the starting values for the iterations. In the following section an enhanced iterative formulation (E-ISPH) is proposed. For the sake of clarity, a brief overview of the standard iterative formulation is first given.
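As an illustration, the particle approximation (2) and the normalized discrete formulation (4) can be sketched in a few lines. The following minimal example (Python/NumPy, not the paper's code, and restricted to d = 1 for brevity) assumes a normalized Gaussian kernel and midpoint measures dΩ_j; it shows that the normalized formulation reproduces a constant function exactly, even near the boundary where the plain particle approximation degrades.

```python
import numpy as np

def gaussian_kernel(x, xi, h):
    # Normalized 1D Gaussian kernel K(x, xi; h)
    return np.exp(-((x - xi) / h) ** 2) / (h * np.sqrt(np.pi))

def sph(x, xi, fi, dOmega, h):
    # Standard particle approximation (2)
    return np.sum(fi * gaussian_kernel(x, xi, h) * dOmega)

def sph_normalized(x, xi, fi, dOmega, h):
    # Normalized discrete formulation (4): ratio of weighted sums
    w = gaussian_kernel(x, xi, h) * dOmega
    return np.sum(fi * w) / np.sum(w)

# Unevenly spaced data sites in [0, 1]; constant test function f = 1
xi = np.sort(np.random.default_rng(0).random(50))
fi = np.ones_like(xi)
# Measures dOmega_j from the midpoints of consecutive sites (an assumption here)
edges = np.concatenate(([0.0], (xi[1:] + xi[:-1]) / 2, [1.0]))
dOmega = np.diff(edges)
h = 0.1

plain_val = sph(0.05, xi, fi, dOmega, h)             # degrades near the boundary
norm_val = sph_normalized(0.05, xi, fi, dOmega, h)   # exact for constants
print(plain_val, norm_val)
```

Since (4) is a ratio of two identical weighted sums when f is constant, the normalized value is exactly 1 regardless of data placement, while the plain sum underestimates near the boundary where kernel mass is truncated.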

The Iterative Formulation (ISPH)
Let f be the vector collecting the function values at the data sites in Ξ, k(x) the vector of the kernel values at Ξ referring to the evaluation point x, and Ω the diagonal matrix with significant entries equal to the measures dΩ_j; the SPH approximant (2) can be expressed as

f_h(x) = k(x)^T Ω f.   (5)

By assuming A as the matrix with entries A_ij = K(ξ_i, ξ_j; h) dΩ_j, the residuals are iteratively refined as

f^(n) = f^(n−1) + (f − A f^(n−1)),   f^(0) = f,   (6)

and the corresponding sequence of approximants is

f_h^(n)(x) = k(x)^T Ω f^(n) = k(x)^T Ω Σ_{k=0}^{n} (I − A)^k f.   (7)

For n → ∞, the summation in (7) approximates A^{−1} if and only if ‖I − A‖_2 < 1 [20]; therefore, under this condition the sequence {f_h^(n)(x)} converges to the interpolant P_h(x) = k(x)^T Ω A^{−1} f. The solution for P_h(x) is admitted with strictly positive definite kernel functions on pairwise distinct data sites, and this is the only assumption on the data locations [21,22]. It is not necessary to ask for evenly spaced data in the construction of {f_h^(n)(x)}.
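The residual iteration can be sketched as follows, again as a minimal 1D Python/NumPy illustration under assumed choices (Gaussian kernel, midpoint measures dΩ_j) rather than the paper's setup. In practice only matrix-vector products with A are needed, so the process stays matrix-free in spirit; the dense A is formed below only for compactness.

```python
import numpy as np

rng = np.random.default_rng(1)
xi = np.sort(rng.random(30))          # pairwise distinct data sites in [0, 1]
f = np.sin(2 * np.pi * xi)            # measurements at the data sites
h = 0.05

# Measures dOmega_j and the matrix A with entries K(xi_i, xi_j; h) dOmega_j
edges = np.concatenate(([0.0], (xi[1:] + xi[:-1]) / 2, [1.0]))
dOmega = np.diff(edges)
K = np.exp(-((xi[:, None] - xi[None, :]) / h) ** 2) / (h * np.sqrt(np.pi))
A = K * dOmega[None, :]

# Residual iteration: f^(n) = f^(n-1) + (f - A f^(n-1)), starting from f^(0) = f
f_n = f.copy()
res0 = np.linalg.norm(f - A @ f_n)    # initial residual
for _ in range(200):
    f_n = f_n + (f - A @ f_n)
res = np.linalg.norm(f - A @ f_n)     # residual after 200 refinements
print(res0, res)
```

In convergence A f^(n) → f, i.e. the approximant built from f^(n) interpolates the data; the residual norm shrinks accordingly as the iterations proceed.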

The Enhanced Iterative Formulation (E-ISPH)
In order to speed up the process, the computation is revisited focusing on the partial sums

S^(s) = Σ_{k=0}^{s} (I − A)^k.   (8)

For each n, the summation S^(2^n − 1) of 2^n terms is taken into account. Let S̃^(n) := S^(2^n − 1), i.e.,

S̃^(n) = Σ_{k=0}^{2^n − 1} (I − A)^k.   (9)

By considering that

S^(2s − 1) = S^(s − 1) + (I − A)^s S^(s − 1) = (I + (I − A)^s) S^(s − 1),   (10)

by substituting (10), with s = 2^(n−1), in (9), the following relation holds:

S̃^(n) = (I + (I − A)^(2^(n−1))) S^(2^(n−1) − 1).   (11)

Consequently, the subsequent recursive formula can be written:

S̃^(n) = (I + (I − A)^(2^(n−1))) S̃^(n−1),   S̃^(0) = I.   (12)

Now, (12) is employed in the overall computation, giving rise to the new sequence {f̃^(n)(x)}, with f̃^(n)(x) = k(x)^T Ω S̃^(n) f: n recursive steps account for the 2^n terms of the original summation. The described enhanced formulation is comparable in accuracy to ISPH, requiring a reduced number of iterations. In the next section, the behavior of the fast procedure E-ISPH is reported and compared with the standard SPH and ISPH formulations.
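The doubling recursion can be checked directly against the plain iteration: n recursive steps of E-ISPH must reproduce the summation of 2^n terms, i.e. 2^n − 1 ISPH refinements. A small sketch under the same assumed 1D setting as before (Gaussian kernel, midpoint measures), maintaining the power (I − A)^(2^(n−1)) by repeated squaring:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 25
xi = np.sort(rng.random(N))
f = np.cos(3 * xi)
h = 0.08
edges = np.concatenate(([0.0], (xi[1:] + xi[:-1]) / 2, [1.0]))
dOmega = np.diff(edges)
K = np.exp(-((xi[:, None] - xi[None, :]) / h) ** 2) / (h * np.sqrt(np.pi))
A = K * dOmega[None, :]
B = np.eye(N) - A                       # B = I - A

def isph(f, B, m):
    # Plain iteration: S^(m) f = sum_{k=0}^{m} B^k f, one product per step
    g, term = f.copy(), f.copy()
    for _ in range(m):
        term = B @ term
        g = g + term
    return g

def eisph(f, B, n):
    # Doubling recursion: Stilde^(n) f = (I + B^(2^(n-1))) Stilde^(n-1) f
    g = f.copy()                        # Stilde^(0) f = f
    P = B.copy()                        # P holds B^(2^(n-1)), updated by squaring
    for _ in range(n):
        g = g + P @ g
        P = P @ P
    return g

n = 5
# n doubling steps versus 2^n - 1 plain refinements: same result
diff = np.linalg.norm(eisph(f, B, n) - isph(f, B, 2 ** n - 1))
print(diff)
```

Each doubling step costs a matrix product for the squaring, but the iteration count drops from O(2^n) to n, which is the source of the speed-up reported below.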

Numerical Results
Franke's function

f(x, y) = (3/4) exp(−((9x − 2)² + (9y − 2)²)/4) + (3/4) exp(−(9x + 1)²/49 − (9y + 1)/10) + (1/2) exp(−((9x − 7)² + (9y − 3)²)/4) − (1/5) exp(−(9x − 4)² − (9y − 7)²),   (13)

with d = 2, depicted in Figure 1 and taken from the scattered data literature [23,24], is referred to as the test function. Two sets of data sites are considered: the gridded Ξ_G, composed of regularly distributed points, and the Halton Ξ_H [25], which consists of unevenly distributed points. The haltonset function of MATLAB© is used to generate Ξ_H. The accuracy of the estimates is measured with the root mean square error

RMSE = ( (1/M) Σ_{i=1}^{M} (f(x_i) − f_h(x_i))² )^(1/2),   (14)

by fixing M = 1600 as the number of evaluation points and N = (2^t + 1)², t = 1, 2, ..., as the increasing number of data sites in the unit domain. The kernel function is the Gaussian one. Some results on the accuracy and on the convergence are illustrated. The improvements in the linear approximation are evident in Table 1, where the RMSEs for the standard formulation and the formulation with p = 1 are included, referring to the Ξ_G and Ξ_H data distributions.
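The experimental ingredients can be sketched in Python/NumPy in place of MATLAB's haltonset: a hand-rolled Halton sequence via radical inverses, Franke's function (13), and the RMSE measure over the evaluation grid. The specific sizes below (N = 289 sites, a 40 × 40 grid giving M = 1600) mirror the setup described above; the Halton implementation is an illustrative stand-in, not the paper's code.

```python
import numpy as np

def halton(n, bases=(2, 3)):
    # First n points of the 2D Halton sequence via radical inverses
    pts = np.empty((n, len(bases)))
    for j, b in enumerate(bases):
        for i in range(n):
            x, f, k = 0.0, 1.0 / b, i + 1
            while k > 0:
                x += f * (k % b)
                k //= b
                f /= b
            pts[i, j] = x
    return pts

def franke(x, y):
    # Franke's test function on the unit square
    return (0.75 * np.exp(-((9 * x - 2) ** 2 + (9 * y - 2) ** 2) / 4)
            + 0.75 * np.exp(-(9 * x + 1) ** 2 / 49 - (9 * y + 1) / 10)
            + 0.5 * np.exp(-((9 * x - 7) ** 2 + (9 * y - 3) ** 2) / 4)
            - 0.2 * np.exp(-(9 * x - 4) ** 2 - (9 * y - 7) ** 2))

def rmse(approx, exact):
    # Root mean square error over the evaluation points
    return np.sqrt(np.mean((np.asarray(approx) - np.asarray(exact)) ** 2))

XiH = halton(289)                 # N = (2^4 + 1)^2 = 289 unevenly distributed sites
g = np.linspace(0.0, 1.0, 40)     # 40 x 40 grid: M = 1600 evaluation points
X, Y = np.meshgrid(g, g)
F = franke(X, Y)
print(XiH[0], F.shape)
```

Any of the approximants discussed earlier can then be evaluated at the grid points and compared to F through rmse to reproduce error tables of this kind.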
In Tables 2 and 3, the error behavior adopting the iterative process ISPH, assuming f_h^(0) with p = 1, is shown for Ξ_G and Ξ_H respectively. A significant reduction, for the two data sequences, can be appreciated by increasing N and the number of iterations. In Tables 4 and 5, the RMSEs refer to the enhanced iterative approach E-ISPH for Ξ_G and Ξ_H, showing an accuracy comparable to ISPH while requiring a reduced number of iterations. In Figures 2-4 the RMSE behavior is reported in a loglog plot comparing the SPH, ISPH and E-ISPH approaches. In Figure 5, the time (s) required by ISPH and E-ISPH to reach a comparable accuracy in the approximation is exhibited, in a loglog plot too. All simulations have been executed on a 2.3 GHz 8-core Intel Core i9 processor.

Conclusions
The iterative procedure adopted via residual iterations gives interesting results in SPH approximation, avoiding matrix generation and without changes to the kernel function. A normalized version which guarantees the linear order of approximation is a valid strategy, coupled in this paper with a faster approach requiring fewer iterations to achieve the desired accuracy. Work on extending the procedure to differential operators is in progress, allowing one to adopt the method for a large variety of problems in the applied sciences.