Accelerating Surface Tension Calculation in SPH via Particle Classification and Monte Carlo Integration

Surface tension has a strong influence on the shape of fluid interfaces. We propose a method to calculate the corresponding forces efficiently. In contrast to several previous approaches, we discriminate to this end between surface and non-surface SPH particles. Our method effectively smooths the fluid interface, minimizing its curvature. We make use of an approach inspired by Monte Carlo integration to estimate local normals as well as curvatures, based on which the force can be calculated. We compare different sampling schemes for the Monte Carlo approach, among which a Halton sequence performed best. Our overall technique is applicable to, but not limited to, 2D and 3D simulations, and can be coupled with any common SPH formulation. It outperforms prior approaches with regard to the total computation time per time step in dynamic scenes. Additionally, it is adjustable for higher quality in small-scale scenes with dominant surface tension effects.


Introduction
Surface tension is a phenomenon appearing at the interface of differing media, typically involving a liquid and a gas, for instance at a water-air interface. It results from cohesive forces attracting the molecules of the liquid towards each other. Formally, surface tension is defined as the ratio between the surface force and the distance along which it acts. These forces lead, for instance, to the smoothing of fluid surfaces, which is why they play a vital role in the visual appearance of fluids. Accordingly, computational fluid simulations should include estimations of these processes.
In smoothed particle hydrodynamics (SPH) [1], fluids are discretized into particles. Due to this, the interface, e.g., between water and air, is not exactly defined. Therefore, proper ways of approximating the surface tension forces are required. In SPH approaches, these forces are often computed per particle, based on an estimate of the local normal direction as well as of the local curvature. However, some state-of-the-art methods generalize such force calculations to all particles in the fluid, without taking into consideration whether they are located at the surface or not. Technically, this should not introduce any artifacts, since the forces obtained for non-surface particles are calculated as zero. Nevertheless, computational resources are spent in the process without any effect on the fluid behavior.
Instead of the above, it can be advantageous to first classify particles into surface and non-surface ones, as for instance suggested in [2,3]. Assuming this can be done efficiently, the subsequent surface tension calculation can be accelerated, leading in total to a reduced computation time per simulation step. Related to this notion, a method for SPH surface detection in 2D has been presented in [4]. Its authors classify a particle as part of the surface if a circle centered at the particle position is not fully overlapped by the circles associated with neighboring particles. Inspired by this idea, we propose an extension of the method, with which we first classify particles (in 2D or 3D). For this step, we employ a linear classifier, obtained with machine learning techniques. Once the particles are classified, the local normal and curvature have to be obtained. This is realized by a Monte Carlo approach, in which the geometry is locally sampled to determine local coverage. The approach only requires the neighborhood geometry, which is why it is applicable to currently existing SPH algorithms for fluid simulation. Furthermore, we also suggest adaptive adjustment of the sample resolution according to the time step. Comparing our approach to state-of-the-art methods for surface tension force estimation, the total simulation runtimes were consistently reduced with our approach. As an initial example of our method, see Figure 1: the evolution of two particle sets is depicted, arranged initially as two cubes, following our surface tension calculation. Note the coalescence of both parts, including oscillatory movement over time, while also exhibiting concavities. The final shape equilibrium, minimizing the surface tension energy, is, as expected, a spherical droplet. In addition to the above extensions, we also explored the use of different sampling techniques; employing a pseudo-random Halton sequence instead of a uniform random distribution yielded better performance.
Nevertheless, it should be mentioned that minor drift may appear in zero-gravity test cases due to numerical inaccuracies. This may be reduced by further improvement of the normal estimation, but will require additional investigation.
Figure 1. SPH time evolution of droplets in 3D (initially cuboid) in zero gravity, coalescing into a single spherical droplet. Results of our method (left, at t = 0.0 s, 0.51 s, 0.88 s, and 3.7 s) and comparison to final results obtained with alternative approaches [2,5,6] from the literature (right). Surface tension calculation is accelerated using our method; convex and concave regions can be robustly handled. The shape evolution develops as physically expected. For the comparisons we employed implementations available in the SPlisHSPlasH framework [7]. Please note that the alternative methods did not converge to a spherical droplet; the illustrated final steady-state configurations were reached at the denoted simulation times t_eq (between 11.0 s and 19.0 s).

Related Work
Various approaches to calculate surface tension forces in SPH fluids have been proposed in the past. In earlier work, it was attempted to represent surfaces with a smoothed color field, as seen in [8][9][10][11]. This color field is a scalar field, initially set to one at particle locations and zero everywhere else. It permits obtaining estimates of surface normals and curvatures, calculated as the gradient of the field and as the divergence of the gradient of the field, respectively. However, the technique usually leads to a random assignment of normals for particles far from the surface. Moreover, errors in the curvature values result, and conservation of the fluid momentum is not ensured. The local nature of our method reduces the problem of normal randomness at locations far from the surface, as well as the curvature error.
In [5], the surface tension force is modeled as a sum of cohesion forces between particles in the same fluid phase. However, the equilibrium of these cohesion forces, as found in simulations, does not always correspond to the correct minimal surface area, as one would expect from a surface tension dominated fluid. The method is also prone to clustering of particles on the fluid surface. To avoid such particle clustering, it was suggested in [12] to introduce a repulsion force when particles are too close to each other. This was achieved by manually tuning a force profile according to the particle separation distance. In our method, particle clustering is avoided since we do not use cohesion forces. Related to this, it was stated in [6] that the surface tension force cannot be estimated as a summation of cohesion forces alone, as observed in nature, since SPH particles represent a fluid on a macroscopic level. Instead, the authors suggest combining a cohesion term with a surface minimization term. Thus, their force term minimizes fluid surface area, conserves momentum, and prevents clustering. However, the forces are manually tuned to attract particles in a certain distance range, while repulsing particles that are too close. In contrast, in [13] the curvature minimization problem is first solved on a mesh that is reconstructed from the SPH particles, and the results are later transferred back to the particles. The authors encountered surface waves that could appear due to a mismatch between mesh vertices and underlying SPH particles; the effect could be reduced in a post-processing step.
All the methods above treat all particles equally. However, for non-surface particles the resulting force will be zero, so time is spent on calculations that do not have an effect on the simulation. Hence, it may be beneficial to classify particles first, and then compute forces only for surface particles. One of the first methods that distinguish between surface and non-surface particles is [2]. There, the force is modeled based on the asymmetric neighborhood of particles close to the surface, which leads to asymmetries in the summation of Van der Waals interactions. This yields a force acting on surface particles, proportional to the surface curvature. The work in [3] introduces a method for surface particle classification based on visual occlusion of particles from different viewpoints. However, the method is computationally intensive and cannot tolerate false positives. In [14], surface tension was computed for long, thin objects.
In addition to the above, curvature estimation in general point clouds is also a widely studied topic. In related work, magnitudes proportional to the surface curvature are sometimes computed, but not the exact value itself. This limitation has a similar cause: in general point clouds, too, the surface curvature may not be exactly defined. Moreover, most existing work already assumes the availability of a surface-only point cloud, e.g., [15,16]. In contrast, our work starts with arbitrary particle locations in a volume. Finally, also note the relation of the problem to SPH surface reconstruction, e.g., via distance fields, such as [17,18].

Methods
Following the idea of modeling surface tension with a continuum method in [19], we calculate the surface tension force as f_st^i = −σ κ^i n̂^i, where κ^i and n̂^i are the surface curvature and the surface normal at SPH particle i, respectively (superscripts denote particle indices). Furthermore, σ is a constant surface tension parameter, measured in N/m, which depends on the simulated fluid. As mentioned above, we thus have to approximate the curvature as well as the normal direction per particle. Our proposed method is organized in three major algorithmic steps. First, the particles in an SPH simulation are classified into two groups: surface and non-surface particles. Second, the normal vector is estimated for all surface particles. This makes use of a Monte Carlo technique to locally estimate an integral, taking into account neighboring particles. Due to the probabilistic nature of the Monte Carlo computations, the resulting normal vectors are additionally smoothed. Third, following a similar Monte Carlo strategy, we estimate the local curvature, again only for the classified surface particles, also with a subsequent smoothing step. The described process can be applied to 2D or 3D scenes. In addition, the number of random samples, and thus the accuracy, can be adjusted automatically according to the simulation time step. Employing the computed data, the surface tension forces per particle can be determined. The individual steps are described in detail in the following.

Particle Classification
The main idea of classifying particles is to reduce the computation time of the surface tension calculation. We aim to achieve this by excluding (ideally all) non-surface particles from the calculation. Doing so should not affect the overall result, since for these particles the surface tension force should be zero anyhow. In contrast, it is crucial for the correctness of the result that no surface particle be misclassified (i.e., there should be no false negatives). Incorrect classification of non-surface particles as surface ones (i.e., false positives) should be minimized, but does not affect the correctness of the fluid dynamics.
To properly classify the particles, we experimented with defining various feature spaces. Optimally, these should only depend on the local geometry. As possible features, we examined, for instance, the summation of neighborhood masses, using various weighting kernels. However, it turned out that good results could already be achieved by mapping fluid particles into a simple 2-dimensional feature space. The first component of this space is given by the mass-weighted average distance of particles in a local neighborhood: f_1^i = (Σ_j m^j ‖x^i − x^j‖) / (h Σ_j m^j), where m and x are the masses and positions of particles i and j, respectively. The neighborhood is defined by the (user-selected) SPH kernel radius h; thus each particle i has associated neighbor particles j at distances smaller than h. Please note that we normalize by dividing the mass-weighted average by h, thus making the feature independent of the kernel size. For the second feature dimension, we simply employ the number of neighbors N^i per particle. Next, in order to train a classifier, we have to generate fluid simulation data and determine for each particle which class it belongs to. The latter training data classification step is done employing a similar strategy as followed for our normal and curvature estimation, as outlined in Sections 3.2 and 3.4. We randomly generate samples on a sphere enclosing a particle and determine the coverage of these by neighboring spheres. To achieve high accuracy in this classification, we employ a very large number of samples. In addition, any incorrect classification of a particle as surface (i.e., false positives) in this step can potentially be remedied by checking for full sphere coverage. Further details of the underlying Monte Carlo strategy will be presented below. Simulations to create the training data have been performed using the SPlisHSPlasH framework [7]. Scenes with particle numbers between 2K and 30K are employed.
Different particle configurations, obstacles, boundaries, and gravity forces are used to ensure broad coverage. The thus generated, and initially classified, particles are plotted in our 2-dimensional feature space in Figure 2 (left). Please note that, using the described features, the two particle classes already exhibit a reasonable separation. It becomes apparent that a linear classifier may already suffice for the classification task. For the classification step, machine learning strategies can be employed. Since we initially worked in higher-dimensional feature spaces, we decided to use a neural network classifier. Nevertheless, as discussed above, moving to a 2-dimensional feature space turned out to be sufficient for our purpose. We still employ a neural network as linear classifier; however, a simpler approach, such as support vector machines, would also be adequate. In this context, it should be noted that some recent work explored the use of machine learning in fluid simulation, albeit only for obtaining solutions to the Navier-Stokes equations, rather than for classifying particles (e.g., [20][21][22][23]). In our technique, we effectively obtain a line separating the two classes in the feature space. However, since we strive to minimize (i.e., optimally avoid) false negatives, we opted to shift the line by a distance d along the ordinate, such that no false negatives remain with respect to the training data. The obtained linear classifier (y ≤ kx + d) is then applied in any new fluid simulation, dividing particles into surface and non-surface ones, progressively per time step. As visualized in Figure 2 (right), the selection of the parameter d affects the particle classification, which will be discussed further below.
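As a concrete illustration, the feature computation and the shifted linear classifier described above can be sketched as follows. This is a minimal Python sketch using brute-force neighbor search; the slope k and shift d_shift passed to the classifier are hypothetical placeholders for the trained values:

```python
import numpy as np

def surface_features(i, positions, masses, h):
    """Map particle i into the 2D feature space described above.

    Feature 1: mass-weighted average distance to neighbors, normalized
    by the kernel radius h.  Feature 2: the neighbor count N^i.
    """
    d = np.linalg.norm(positions - positions[i], axis=1)
    nbr = (d < h) & (d > 0.0)          # neighbors j within kernel radius
    if not nbr.any():
        return 0.0, 0                   # isolated particle
    w = masses[nbr]
    f1 = (w * d[nbr]).sum() / (w.sum() * h)
    return f1, int(nbr.sum())

def is_surface(f1, n_nbrs, k, d_shift):
    """Linear classifier y <= k*x + d; the shift d is chosen on training
    data such that no surface particle is misclassified (no false
    negatives), at the cost of some false positives."""
    return n_nbrs <= k * f1 + d_shift
```

In a real SPH solver the neighbor lists already computed for the pressure solve would be reused here, so the feature evaluation adds very little cost.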
Applying this approach in our later tests, we did not encounter any false negatives in those simulations, even with different particle configurations and geometries. Still, false positives do occur.
In the experiments outlined in Section 4, the method yielded on average 0.014% false positives in the Droplet, 5.66% in the Dambreak, and 4.75% in the Crown splash test scenario, respectively. Nevertheless, the method proved to be fast and robust with regard to false negatives. Its performance is three orders of magnitude faster than the timings reported in [3] for a double dambreak scenario.
As a further well-defined test case, we also examined the classification for the example of a hyperbolic paraboloid. For this shape the normals and curvatures are analytically known, with both positive and negative values present. To obtain a matching SPH particle system for the analytic surface, we employed the paraboloid function to define a fluid container as an OBJ triangle-mesh geometry. The bottom of this container geometry is set to the shape of the paraboloid. Given this boundary mesh, we then carried out an SPH simulation using implicit incompressible SPH [24], with a constant step size of Δt = 0.001. The simulation was run until the system converged to a steady state, providing particles in the container, including at the bottom. The particles classified as surface particles in the resulting SPH system are extracted and cropped for further comparison and analysis. Snapshots of the simulation, as well as the cropped region of surface particles at the container's bottom, are visualized in Figure 3 (note that the curvature computed for the particles is also indicated). To visualize the curvature we employ a blue-white-red color map, with an optional arctangent scaling revealing variations in regions of small values. On this dataset, we examined the effect of the shift parameter d in the linear classifier. Figure 4 illustrates how points located near the surface in the cropped section were classified. Results are shown for the paraboloid setup, with increasing d: the smaller the parameter, the fewer surface points are correctly classified (note that all should be classified as surface points for this cropped part). In this special case, a value of about d = 13 performed well, since both the number of classified surface points and the computation time per simulation time step plateaued at this value. Please note that a lower value leads to faster computation times by avoiding re-classification in the Monte Carlo process.
Nevertheless, low values may lead to incorrect classifications, which is why a higher value is preferable.

Normal Calculation
Once the classification has been finalized, we have to compute the surface normals, as well as the curvatures, per particle. Since we do not make use of any smoothed field in the fluid, we have to calculate both values using only the geometry as input. Both calculations follow a similar notion, which is why the general idea of both will be outlined first. The following addresses the 3D case, but the concept applies analogously to 2D. The key idea in both cases is to first assume a sphere S_1 of radius r_1 around a considered surface particle at position x^i. This radius is always selected equal to the SPH kernel size h. Next, additional spheres S_2^j of radius r_2 are considered, with their respective centers given by all neighboring particles at positions x^j (i.e., all surface and non-surface ones combined). For this, the neighborhood of a particle is again given according to the kernel radius h. Also, note that r_2 is usually smaller than r_1. The neighboring spheres S_2^j will overlap the initial sphere S_1, located at particle i, thus leaving a smaller spherical area A_1 that is not overlapped, i.e., not within the neighbor spheres.
Since we work with incompressible or weakly compressible SPH particle distributions, the density of the point cloud is nearly constant. Thus, it can be conjectured that the surface normal n^i at the particle points towards the centroid of the non-overlapped spherical area on the sphere. In addition, as will be discussed in more detail below, we also found that the fraction of the sphere that is not covered is related to the surface curvature at that point. The area of the sphere that is not overlapped by the neighboring ones can be calculated with a spherical integral. However, this integral can be computationally very expensive to determine exactly, which is why we propose to estimate it using a Monte Carlo integration strategy. With regard to the normal computation, we first generate N^i uniformly distributed random sample points p_k on the surface of the sphere S_1 of particle i. Initially, the positions are determined according to a uniform random distribution; below we will also examine other sampling strategies. Of the generated sample points, we will next only consider those that are not inside any neighboring sphere S_2^j. For our following derivations, we represent this with a binary function c, with c(p) = 1 if p lies outside all neighboring spheres S_2^j, and c(p) = 0 otherwise. Based on this, we obtain a first estimate of the surface normal as the normalized mean offset of the uncovered samples: n^i = Σ_k c(p_k)(p_k − x^i) / ‖Σ_k c(p_k)(p_k − x^i)‖. Elements of this computation process are visualized in Figure 5b. Also note that non-surface particles are assigned a zero vector. Due to the probabilistic nature of our method, discontinuities in the estimated normal field may be encountered, especially at lower sampling numbers. However, the normal field should be as smooth as possible on the surface of the fluid. Therefore, we propose to carry out an additional smoothing step.
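The Monte Carlo normal estimation can be sketched as follows. This is a minimal Python sketch under the assumption of brute-force coverage tests against all neighbor positions; an actual SPH implementation would reuse its neighbor lists and a pre-computed sample table:

```python
import numpy as np

def uniform_sphere(n, rng):
    """Uniformly distributed direction vectors on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def mc_normal(x_i, neighbours, r1, r2, n_samples, rng):
    """Monte Carlo estimate of the surface normal at x_i.

    Samples p_k are drawn on the sphere S_1 (radius r1) around x_i;
    samples covered by any neighbor sphere S_2^j (radius r2) are
    discarded, and the mean direction of the uncovered samples is
    taken as the normal estimate.
    """
    p = x_i + r1 * uniform_sphere(n_samples, rng)      # samples on S_1
    if len(neighbours):
        d = np.linalg.norm(p[:, None, :] - neighbours[None, :, :], axis=2)
        uncovered = (d >= r2).all(axis=1)              # binary function c
    else:
        uncovered = np.ones(n_samples, dtype=bool)
    if not uncovered.any():
        return np.zeros(3)         # fully covered -> interior particle
    n = (p[uncovered] - x_i).mean(axis=0)
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else np.zeros(3)
```

For a particle with a single neighbor below it, the uncovered samples concentrate on the upper hemisphere, so the estimate points away from the fluid, as intended.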
First, we compute a weighted average of all neighboring surface particle normals, based on the results obtained in the previous step, employing a weight kernel W, again with kernel radius h: n̄^i = Σ_j W(‖x^i − x^j‖, h) n^j / Σ_j W(‖x^i − x^j‖, h). Also note again that the normals of non-surface particles have been set to zero in the previous step. The final smoothed surface particle normal is then obtained by a weighted average of both computed temporary normals: n̂^i = τ n^i + (1 − τ) n̄^i, where τ is a user-selected interpolation parameter. For all our computations we have set it to 0.5; this yielded, for instance for the paraboloid test case, stable and plausible curvature values in the calculations below. Also, for this geometry, variation of τ did not have a strong influence on the angular error or the computation time. The outcome of the normal smoothing process is illustrated in Figure 5c. Further note that this smoothing step can be repeated several times.
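A single smoothing pass can be sketched as follows. In this minimal Python sketch a simple hat weight stands in for the SPH kernel W (an assumption, not the paper's kernel), and the blend parameter tau corresponds to τ above:

```python
import numpy as np

def smooth_normals(positions, normals, h, tau=0.5):
    """One smoothing pass: blend each raw normal with the weighted
    average of nearby normals.  Interior particles carry zero normals,
    so they drop out of the weighted average automatically and are
    themselves left untouched."""
    out = np.zeros_like(normals)
    for i, x in enumerate(positions):
        if not normals[i].any():
            continue                        # keep interior normals at zero
        d = np.linalg.norm(positions - x, axis=1)
        w = np.maximum(0.0, 1.0 - d / h)    # hat weight in place of W
        avg = (w[:, None] * normals).sum(axis=0)
        n = tau * normals[i] + (1.0 - tau) * avg
        nn = np.linalg.norm(n)
        out[i] = n / nn if nn > 0 else 0.0
    return out
```

Repeated application corresponds to the multiple smoothing steps evaluated in Figure 6.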
To evaluate the accuracy of the proposed normal estimation process, we carried out comparisons between analytically defined and estimated normals. As error measure we determine the angle between the vectors via arccos(n̂^i · n_a), where n̂^i is the estimated normal and n_a the analytically correct one. In our first test case, we obtained the latter for the geometry of a 2D circle; a random 2D point cloud is generated by uniformly sampling the geometry. Next, due to the random nature of our estimation process, we determine as final error value the average of 100 computations. The results of this study are summarized in Figure 6 (left). As can be seen, the average error depends both on the number of samples and on the number of smoothing steps. The higher the number of sampling points, the more accurate the approximation becomes, while additional smoothing improves the estimates by filtering out noise incurred by the representation as a point cloud. A second, more comprehensive analysis in 3D has also been carried out, using the previously described hyperbolic paraboloid test case. We compared the estimated normals to the analytically defined ones. In addition, we also compute the error metric for an alternative sampling approach using pseudo-random Halton sampling (described in detail below). Finally, for further evaluation, we also compute normals on the point cloud via a principal component analysis (PCA) in a local neighborhood, similar to [25]. For the latter, the mean center c of a surface point neighborhood is computed and used to determine the normal ñ^i_PCA as the eigenvector corresponding to the smallest eigenvalue of the local (weighted) covariance matrix C = Σ_j W(‖x^j − c‖, r)(x^j − c)(x^j − c)^T, with W the cubic support kernel and r the support radius. Please note that, in contrast to the Monte Carlo normal, the orientation is not known. For our experiment, we set the orientation according to the Monte Carlo normals.
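The PCA baseline can be sketched as follows. This is a minimal Python sketch; for simplicity it uses uniform weights by default instead of the cubic support kernel, which is an assumption:

```python
import numpy as np

def pca_normal(points, weights=None):
    """Normal as the eigenvector of the (weighted) covariance matrix
    with the smallest eigenvalue.  The orientation (sign) is ambiguous
    and must be fixed externally, e.g., by a Monte Carlo normal."""
    if weights is None:
        weights = np.ones(len(points))     # uniform stand-in for W
    w = weights / weights.sum()
    c = (w[:, None] * points).sum(axis=0)  # weighted mean center
    d = points - c
    # Weighted covariance C = sum_j w_j (x_j - c)(x_j - c)^T
    C = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0)
    evals, evecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    return evecs[:, 0]                     # eigenvector of smallest eigenvalue
```

Note that normalizing the weights does not change the eigenvectors, so the unnormalized covariance sum from the text yields the same normal direction.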
Figure 6 (right) shows the error mean and standard deviation for three sampling methods, examining also the effect of smoothing and of the number of samples, for the paraboloid test case. The angular error is low, below 3° for high sample counts. Employing the PCA normal computation also leads to good accuracy, but takes about 1.6 ms of additional computation time in the paraboloid experiment.

Sampling Scheme
As indicated above, akin to the notion of importance sampling, we explored possible improvements over the naive uniformly distributed random Monte Carlo sampling. The first straightforward improvement is to employ pre-computed instead of run-time generated random spherical direction vectors. The former can be stored in look-up tables. We employed 16,384 single-precision pre-computed direction vectors. Values are accessed with a modulo index operation. Please note that the quality of the results correlates with precision and table size. Secondly, we also tested pseudo-random schemes that provide more homogeneous sample distributions. We tested various Halton sequences [26] as well as blue noise sampling (see Figure 7). For the former, the 2-3 scheme in 2D provided the best results. For the latter we employed the implementation of [27], which realizes a Poisson distribution on Wang tiles [28]. As the best performing scheme we selected the Halton 2-3 (H23) scheme for further analysis. Using a look-up table as well as reducing the number of samples results in a speedup, the latter mainly enabled by the more homogeneous sample distribution. We further examined the effect of the number of samples in this context. For our paraboloid test geometry, we calculated the mean angular error of the Monte Carlo (MC) normals with respect to the analytic normals; in addition, the surface computation time was measured. Both are averaged over 100 steady-state simulation steps. In Figure 8 the results are shown for the uniform distribution (Rnd), blue noise (BN), and H23. To reach a small mean angular error below 5°, Rnd requires about 360 samples (note the values marked by dashed boxes). Using BN, the speedup due to the look-up table can be seen; nevertheless, obtaining errors below 5° still requires 360 samples. Switching to H23 allows reducing the number of samples to 120 for a comparable error, yielding a good overall speedup of 8.4.
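Generating such a Halton 2-3 sample table can be sketched as follows. This minimal Python sketch maps the 2D Halton 2-3 sequence onto the unit sphere via an equal-area mapping (a standard choice, assumed here rather than taken from the paper); the resulting deterministic direction vectors can be stored in a look-up table:

```python
import numpy as np

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton23_sphere(n):
    """Map the Halton 2-3 sequence onto the unit sphere.

    Equal-area mapping: z uniform in [-1, 1] from the base-2 component,
    azimuth from the base-3 component.  Yields a homogeneous,
    deterministic sample set suitable for pre-computation.
    """
    pts = np.empty((n, 3))
    for k in range(1, n + 1):
        u, v = halton(k, 2), halton(k, 3)
        z = 1.0 - 2.0 * u
        phi = 2.0 * np.pi * v
        s = np.sqrt(max(0.0, 1.0 - z * z))
        pts[k - 1] = (s * np.cos(phi), s * np.sin(phi), z)
    return pts
```

At run time, entries of the pre-computed table are fetched with a modulo index operation, as described above.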

Curvature Calculation
The surface curvature of a 3D surface is locally given by two values, known as the principal curvatures [29]. These are defined as the eigenvalues of the shape operator at a point on the surface. By averaging the two we obtain the mean curvature κ. Standard Gaussian and mean curvature estimation techniques tend to fail on point clouds containing noise. We have found that it is possible to establish a direct relation between the mean curvature and the fraction of a sphere that is not covered by neighboring ones, via the process outlined above. We begin by noting that the fraction of the uncovered surface area of a sphere, using the mapping function (2), is given by: λ = (1/4π) ∫₀^{2π} ∫₀^{π} c(x(θ, ϕ)) sin θ dθ dϕ, where x(θ, ϕ) is a sphere surface location with spherical coordinates θ and ϕ. As before, instead of attempting to compute this value exactly, we approximate it for a particle i based on random samples, following a Monte Carlo integration strategy: λ ≈ (1/N) Σ_k c(p_k). As will be seen later, it is possible to estimate κ from λ, which itself can be determined from the randomly sampled points p_k. Please note that λ ∈ [0, 1]. We will first derive the underlying relationship in 2D, and later extend it to 3D.
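The Monte Carlo estimate of λ reuses the coverage test from the normal computation. A minimal Python sketch, assuming a pre-generated set of unit sample directions and brute-force neighbor handling:

```python
import numpy as np

def mc_uncovered_fraction(x_i, neighbours, r1, r2, samples):
    """Estimate lambda: the fraction of the sphere S_1 (radius r1)
    around x_i that is not covered by any neighbor sphere S_2^j
    (radius r2).  `samples` holds pre-generated unit directions."""
    p = x_i + r1 * samples
    if len(neighbours) == 0:
        return 1.0                                  # no neighbors: fully uncovered
    d = np.linalg.norm(p[:, None, :] - neighbours[None, :, :], axis=2)
    return float((d >= r2).all(axis=1).mean())      # mean of c(p_k)
```

For a single neighbor at distance L on the z-axis, the covered region of the unit sphere is the cap z ≥ (1 + L² − r₂²)/(2L), which gives an analytic value to validate the estimator against.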

Relationship in 2D
The following derivation is explained while closely referring to the illustration and notation in Figure 9. We start with assuming in 2D a circular outline (shown on the bottom in gray), representing a shape for which the curvature should be determined. The circle has a radius of R, and thus the sought curvature κ is given in this case analytically by the reciprocal 1/R. However, later the formalism should be applied to any arbitrary shape or curve, based on randomly sampled locations.
First, in order to render our derivation independent of particle size, we will estimate an adjusted curvature parameter κ̂ = hκ, correspondingly considering an adapted radius R̂ = R/h. With this in mind, as a starting point for examining the relationship between λ and κ̂, we will begin with deriving a lower bound λ_min, i.e., in 2D the minimal arc that would not be covered by neighboring circles. For this, first consider a particle i on the circular outline. We associate with this particle again a circle C_1, with radius r_1 and center x^i. Next, consider additional neighboring particles j, akin to what was discussed above; to these again correspond circles C_2^j with centers x^j, all with the same radius r_2 < r_1, overlapping circle C_1. Please note that the maximal overlap will result for those particles j that are also located on the circular outline; in 2D there are two of these, next to particle i. Thus, we have to find the geometrical configuration in which the circle C_2 around such a particle j covers a maximal arc on C_1. When the circles overlap, we can find two intersection points; denoting the outer one as x_I, the maximum coverage results when the vector between x_I and x^j is perpendicular to the vector between x^i and x^j; see also Figure 9 (right). In this situation, the angle between the normal at particle i and the vector between x^i and x_I is given as ϕ. This angle can be obtained from two auxiliary angles α and β, which are defined based on the chord between the particle positions, as depicted in Figure 9. From the perpendicularity condition (right angle at x^j in the triangle spanned by x^i, x^j, and x_I), the chord length follows as ‖x^i − x^j‖ = √(r_1² − r_2²), and β = arcsin(r_2/r_1); the angle α between the normal at x^i and the chord follows from the chord geometry on the circle of radius R. Finally, due to having two neighboring particles in symmetric configuration, we have to consider 2ϕ for the non-covered arc. Overall, we obtain λ_min = 2ϕ/2π. Using the previous relations, we obtain a closed-form solution, independent of the sign of the curvature, in which the adjusted curvature κ̂ is related to the ratio of the radii r_2/r_1 and the minimal non-covered fraction λ_min, which we approximate via random sampling.
Figure 9. Configuration for maximal arc coverage of a circle around a neighbor particle j on a circle around particle i. Left: a second neighbor is indicated as a light grey dotted circle. The arc coverage λ is related to the curvature κ, expressed as 1/R. Right: geometric details for the neighbor particle j.

Relationship in 3D
In 3D we follow a similar derivation. As before, we estimate the ratio of a minimal, uncovered spherical surface area to the complete surface of a sphere. Again, we assume a particle i on this surface, surrounded by several particles j, for which local spheres of radius r_1 and r_2 are again defined. In 3D, the uncovered spherical surface area A is a spherical cap, which is given analytically. A cap on a sphere with radius R, defined by an opening angle ϕ, has the area A(ϕ) = 2πR²(1 − cos ϕ). As in the 2D case, we compute λ_min based on the non-covered surface area: λ_min = A(ϕ)/(4πr_1²) = (1 − cos ϕ)/2, for R = r_1. Thus, rearranging the terms, we can also in 3D express the adjusted curvature κ̂ analytically, again depending on the ratio r_2/r_1 and on λ_min, which we can estimate. For our implementation and the later test scenarios we employed the ratio r_2/r_1 = 0.8, which generally yielded optimal performance. In line with this, in another example scene, i.e., the zero-gravity droplet, artifacts were encountered when reducing the value to 0.7; particles occasionally became disconnected from the surface and remained as outliers. In contrast, the paraboloid case did allow for lower values of the ratio r_2/r_1. Nevertheless, using the above ratio was the most robust across all test cases. To evaluate our curvature estimation method, we compare our approximations with analytically defined mean curvature values. The latter can, for instance, be obtained in closed form for any point on an ellipsoid [30]. Thus, we create an ellipsoidal point cloud, for which we obtain our estimate, and compare it to the ground truth. Due to the stochastic nature of our method, we determine the mean and the standard deviation over 40 measurements. Moreover, since the accuracy again depends on the number of samples, we also tested our method for different sample counts. The results of this validation are compiled in Figure 10.
As can be seen, for smaller curvatures our estimation approaches the correct solution, independent of the curvature sign. Moreover, even for a small number of samples our estimated average curvature is close to the correct solution. Nevertheless, the standard deviation is large for small sample numbers, but it can be reduced by increasing the number of samples. In addition, we found that larger curvatures also lead to larger errors in the estimated mean curvature. This is due to the sphere radius r_1 approaching the scale of the actual surface features.
Finally, it should be noted that our initial development of the method was in 2D. There, the circle segment shown in Figure 9 (left) can be computed analytically and efficiently. In 3D, however, the corresponding computation of a spherical segment becomes more difficult and costly, which is why we introduced the Monte Carlo approach.
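For reference, the analytic 2D computation mentioned above can be sketched via the circle-circle intersection: the half-angle of the arc on the r_1-circle covered by a neighbor circle of radius r_2 at center distance d follows from the law of cosines applied to the chord through the two intersection points. This is our own illustrative formulation, not code from the paper.

```python
import numpy as np

def covered_half_angle_2d(d, r1, r2):
    """Half-angle of the arc on the r1-circle covered by a neighbor
    circle of radius r2 at center distance d (law of cosines).
    Returns pi if the r1-circle is fully covered and 0 if nothing of
    its circumference is covered. (Illustrative sketch; names are ours.)
    """
    if d >= r1 + r2 or d + min(r1, r2) <= max(r1, r2):
        # disjoint circles, or one circle contained in the other
        return np.pi if d + r1 <= r2 else 0.0
    cos_a = (d * d + r1 * r1 - r2 * r2) / (2.0 * d * r1)
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```

With two symmetric neighbors, twice the complementary angle yields the non-covered arc, matching the λ_min = 2ϕ/(2π) relation used in the 2D derivation.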

Curvature Smoothing and Adaptive Sampling
Similar to the normal field, the curvature field should be smooth along the surface. The probabilistic nature of the estimation process may, however, introduce artifacts. Thus, we again suggest applying one or more smoothing steps, averaging the computed curvatures in a local neighborhood. Furthermore, as already seen in Figure 6, the accuracy of Monte Carlo approaches depends on the number of samples. A straightforward approach would be to employ a constant number at all times; however, we have found it advantageous to adapt the number according to the underlying numerical simulation. The idea is inspired by the Courant-Friedrichs-Lewy (CFL) condition [1], which relates the numerical time step, the spatial discretization, and the propagation velocity. Accordingly, solution time steps in SPH algorithms are often adjusted adaptively, commonly based on the forces or velocities of the fluid particles. Along this line, we propose to adjust the number of random sampling points used per time step as N = t_s · C_SD, with time step t_s and a user-selected proportionality constant C_SD. The latter can be considered a sampling density factor, its value representing a trade-off between accuracy and computation speed. We achieved good results by setting this parameter to values between 10,000 and 100,000. Our adaptive sampling makes the total number of samples per particle over a simulation period independent of the numerical time step size.
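The adaptive sample count N = t_s · C_SD can be expressed in a few lines. The clamp to a minimum of roughly 10 samples is our own safeguard, motivated by the observation reported later that the simulation remains stable even near N_i ≈ 10; the time step t_s is assumed to be given in seconds.

```python
def adaptive_sample_count(t_s, c_sd, n_min=10):
    """Number of Monte Carlo samples per particle for time step t_s:
    N = t_s * C_SD, clamped below by n_min. (Sketch; n_min is our own
    safeguard, and units of seconds for t_s are an assumption.)"""
    return max(n_min, int(round(t_s * c_sd)))
```

For example, a 1 ms time step with C_SD = 100,000 yields 100 samples per particle, while halving the time step halves the per-step sample count, keeping the sampling effort per simulated second roughly constant.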

Results
To further evaluate our method, we compare it to approaches employed in prior work for the computation of surface tension, specifically the works by Becker and Teschner (2007) [5] and by Akinci, Akinci, and Teschner (2013) [6]. Furthermore, the surface tension calculations are integrated into different full SPH solvers, specifically weakly compressible SPH (WCSPH) [5], predictive-corrective incompressible SPH (PCISPH) [31], and divergence-free SPH (DFSPH) [32,33], in a reference implementation. As a framework for the comparisons, we employ "SPlisHSPlasH" [7], an open-source environment for physically based fluid simulation, which provides the implementations of the mentioned comparison algorithms. All computations were performed on the CPU only; i.e., no optimizations such as GPU calculations were employed. Furthermore, we obtain computation times for three common test scenes, as also suggested by [34]: Droplet, Crown splash, and Dam break. These scenes cover various dynamic behaviors and also require different time step intervals according to the CFL condition, as listed in Table 1. The timing results of the comparison experiment are provided in Table 2. As can be seen, our method resulted in reduced average total computation times in all cases. Please note that in all cases visually highly similar fluid simulation results were obtained, and no instabilities were encountered. Moreover, the smaller the time step, the better our method performed compared to the other two surface tension calculation methods when computing f_st; this becomes especially evident for the Crown splash scene (Figure 11), which employed the smallest time step and for which significant improvements resulted.
However, for larger time steps, the advantage of employing our proposed approach for calculating f_st is reduced; for instance, in the Droplet scene, for both DFSPH and PCISPH, the surface tension calculation times turned out to be higher for our method; nevertheless, the total computation time per time step still remained lower. We assume this is because, in the other methods, non-surface particles also experience non-zero surface tension forces, which requires additional iterations of the pressure solver to restore the correct fluid density.

Figure 11. Left: Example of a crown splash simulation using DFSPH and our surface tension force estimation (adaptive sampling with C_SD = 10,000, adaptive time step 0.1-1 ms). Right: Example of a dam break simulation using DFSPH and our surface tension force estimation (adaptive sampling with C_SD = 10,000, adaptive time step 0.5-2 ms). The color change illustrates differences in velocity.

Conclusions
We presented a method to accelerate the calculation of surface tension forces in SPH fluid animations. In contrast to most other approaches, we discriminate between surface and non-surface particles. This improves the computation time, since the forces are calculated for just a fraction of the particles. Based on this, we can effectively smooth and minimize the surface of the fluid. The accuracy of our method can be tuned by adjusting the value of C_SD. When the time step is reduced, e.g., according to the CFL condition, the number of sampling points N_i is adjusted as well. Surprisingly, we found that even for a low number, N_i ≈ 10, the simulation remains stable. It is in cases where the time step is small (< 1 ms) that our method offers considerably improved performance over the other tested methods. When the simulation runs with larger time steps (t_s ≈ 5 ms), the advantage is diminished; still, the computation time of our method remains comparable to that of the other algorithms.
A disadvantage we encountered when using a very low sampling resolution is the possible creation of incorrect momentum. The sum of the forces over a closed fluid surface should vanish, e.g., in the Droplet scenario, but for a low-resolution sampling of the integral this is not ensured. This drawback could not be alleviated by employing the pseudo-random Halton 2-3 sampling scheme. However, we noticed that the selection of the parameter d may have an effect on this. Still, more analysis is required to characterize the influence of the parameters on minimizing the momentum effect while remaining efficient and physically correct. An improvement may be possible by combining the Monte Carlo normals with normals obtained from PCA methods, at an increased computational cost. This idea was implemented and will be explored further in future work.
Our approach also includes an estimation of the local mean curvature at the fluid particles; here, the latter can be considered a general point cloud. Since the surface interface of a point cloud is not clearly defined, neither is its curvature. Related works, e.g., [8][9][10], compute the divergence of the gradient of the color field to estimate the curvature, which yields a quantity that is only proportional to the exact curvature. In our method, we obtain an approximation of the curvature based on a spherical integral. The procedure is similar to searching for the spherical surface that locally best fits the point cloud. Our method can be coupled with any other SPH algorithm, since it only takes the geometry as input for the computation. It can be employed to improve the overall SPH computation time when smoothing fluid surfaces in computer graphics applications. It is left for future work to explore applying this type of procedure in other contexts of fluid simulation. As a possible extension, we will explore the performance of the method in the context of multi-scale SPH models, i.e., when different sampling densities are employed. Finally, the source code as well as the test scenes of our method are provided via a pull request to the SPlisHSPlasH git repository [7].