Relaxation limit of the aggregation equation with pointy potential

This work is devoted to the study of a relaxation limit of the so-called aggregation equation with a pointy potential in one space dimension. The aggregation equation is by now widely used to model the dynamics of a density of individuals attracting each other through a potential. When this potential is pointy, solutions are known to blow up in finite time. For this reason, measure-valued solutions have been defined. In this paper, we investigate an approximation of such measure-valued solutions by a relaxation limit in the spirit of Jin and Xin. We study the convergence of this approximation and give a rigorous estimate of the speed of convergence in one dimension with the Newtonian potential. We also investigate the numerical discretization of this relaxation limit by uniformly accurate schemes.


Introduction
The so-called aggregation equation has been widely used to model the dynamics of a population of individuals in interaction. Let W : R → R, sufficiently smooth, be the interaction potential governing the population. Then, in one space dimension, the dynamics of the density of individuals, denoted by ρ, is governed by the following equation, for t > 0 and x ∈ R,

    ∂_t ρ + ∂_x (a[ρ] ρ) = 0,    a[ρ] = −∂_x W * ρ.    (1.1)

Such equations appear in many applications in population dynamics, for instance to describe the collective migration of cells by swarming, the motion of bacteria by chemotaxis, crowd motion, the flocking of birds, or the schooling of fish; see e.g. [25,6,5,31,30,14,18]. From a mathematical point of view, these equations have been widely studied. When the potential W is not smooth enough, it is known that weak solutions may blow up in finite time [1,2]. Thus, the existence of weak (measure) solutions has been investigated in e.g. [7,10].
In this paper, we consider a relaxation limit in the spirit of Jin–Xin [22] of the aggregation equation in one space dimension on R. It is now well established that such modifications allow to regularize the solutions. For a given c > a_∞, where a_∞ denotes a uniform bound on the velocity a[ρ], we introduce the system

    ∂_t ρ + ∂_x σ = 0,
    ∂_t σ + c² ∂_x ρ = (1/ε) (a[ρ]ρ − σ).    (1.2)

This system is complemented with initial data ρ_0 and σ_0 := a[ρ_0]ρ_0. It is clear, at least formally, that when ε → 0 the solution ρ of system (1.2) converges to the solution of the aggregation equation (1.1) (and this actually holds only if c > a_∞). We mention that the aggregation equation may also be derived through a hydrodynamical limit of kinetic equations [14,18,20]. The aim of this work is to study the convergence as ε → 0 of the relaxation system (1.2) towards the aggregation equation. More precisely, we establish a precise estimate of the speed of convergence, and we illustrate it with numerical simulations. These estimates are obtained only in the case of the Newtonian potential in one dimension, W(x) = (1/2)|x|. Indeed, in this particular case we may link the aggregation equation to a scalar conservation law [3,19]. The same link holds for the relaxation system (1.2): denoting

    u(t, x) = 1/2 − ∫_{−∞}^x ρ(t, dy),    v(t, x) = 1/2 − ∫_{−∞}^x σ(t, dy),

where the notation ρ(t, dy) stands for the integral with respect to the probability measure ρ(t), we verify easily that u = −W′ * ρ and ρ = −∂_x u, so that a[ρ] = u. Then, integrating (1.2), we deduce that (u, v) is a solution to

    ∂_t u + ∂_x v = 0,
    ∂_t v + c² ∂_x u = (1/ε) (A(u) − v),    A(u) = u²/2 + 3/8,    (1.3)

which is complemented with initial data u_0 = 1/2 − ∫_{−∞}^x ρ_0(dy) and v_0 = 1/2 − ∫_{−∞}^x σ_0(dy). Clearly, as ε → 0, we expect the solution of the above system to converge to the solution of the following Burgers equation (the additive constant in A does not affect the dynamics):

    ∂_t u + ∂_x (u²/2) = 0.

Introducing the quantities a = v − cu and b = v + cu, (1.3) is equivalent to the diagonalized system

    ∂_t a − c ∂_x a = (1/ε) G(a, b),    (1.4a)
    ∂_t b + c ∂_x b = (1/ε) G(a, b),    (1.4b)

where G(a, b) = A((b − a)/(2c)) − (a + b)/2. We will adapt the techniques developed in [23] to obtain convergence estimates for our system. In order to illustrate this convergence result, numerical discretizations of the relaxation system (1.2) are investigated. The schemes we propose are such that they are uniform
with respect to ε, that is, they satisfy the so-called asymptotic preserving (AP) property [21]. Therefore, in the limit ε → 0, such schemes must be consistent with the aggregation equation. Numerical simulations of solutions of the aggregation equation for pointy potentials have been studied by several authors; see e.g. [20,8,11,10,17,15,9]. In particular, some authors pay attention to recovering the correct behavior of the numerical solutions after the blow-up time. To do so, particular attention must be paid to the definition of the product a[ρ]ρ when ρ is a measure.
In this article, we propose two discretizations of the relaxation system which satisfy the AP property. In a first approach, we propose a simple splitting algorithm where we split the transport part and the right-hand side in system (1.2). It results in a numerical scheme which is very simple to implement and for which we easily verify the AP property. The second approach relies on a well-balanced discretization in the spirit of [16,17]. This scheme is more expensive to implement than the first one, but its numerical solution exhibits less diffusion, as illustrated by our numerical results.
The outline of the paper is the following. In section 2, after recalling some useful notations, we prove our main result: an estimate of the speed of convergence, in the Wasserstein distance W_1 and with respect to ε, of the solutions of the relaxation system (1.2) towards the solution of the aggregation equation (1.1) in the case W(x) = (1/2)|x|. The numerical discretization is investigated in section 3, where two numerical schemes verifying the AP property are proposed. The first scheme is based on a splitting algorithm, whereas the second one relies on a well-balanced discretization. Numerical results and comparisons are provided in section 4.
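Before entering the analysis, the link between the aggregation equation and the scalar conservation law can be made concrete on a toy computation. The sketch below is illustrative and not taken from the paper: for the Newtonian potential W(x) = |x|/2 and a measure ρ given by a few Dirac masses, it checks numerically that the velocity a[ρ] = −W′ * ρ coincides with the primitive u(x) = 1/2 − ρ((−∞, x)) away from the atoms; the atom positions, weights, and sample points are arbitrary choices.

```python
import numpy as np

# Illustrative check (not from the paper): for W(x) = |x|/2, the velocity
# a[rho] = -(W' * rho) coincides with u(x) = 1/2 - rho((-inf, x))
# away from the atoms of rho.

atoms = np.array([-0.5, 0.2, 0.7])     # positions of the Dirac masses
weights = np.array([0.25, 0.25, 0.5])  # masses, summing to 1

def u(x):
    # u(x) = 1/2 minus the mass lying strictly to the left of x
    return 0.5 - weights[atoms < x].sum()

def a(x):
    # a[rho](x) = -(W' * rho)(x), with W'(y) = sgn(y)/2
    return -0.5 * np.sum(weights * np.sign(x - atoms))

for x in [-1.0, 0.0, 0.5, 1.0]:        # sample points away from the atoms
    assert abs(u(x) - a(x)) < 1e-14
```

In particular, u is piecewise constant and nonincreasing, which is the discrete counterpart of ρ = −∂_x u being a nonnegative measure.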

Convergence result

Notations
Before stating and proving our main results, we first recall some useful notations and results. Since we are dealing with conservation laws (in which the total mass is conserved), we will work in a space of probability measures, namely the Wasserstein space of order p ≥ 1, which is the space of probability measures with finite moment of order p:

    P_p(R^N) = { µ nonnegative measure on R^N such that µ(R^N) = 1 and ∫ |x|^p µ(dx) < ∞ }.

This space is endowed with the Wasserstein distance defined by (see e.g. [33,28])

    W_p(µ, ν) = inf_{γ ∈ Γ(µ,ν)} ( ∫∫ |y − x|^p γ(dx, dy) )^{1/p},    (2.5)

where Γ(µ, ν) is the set of measures on R^N × R^N with marginals µ and ν, i.e.

    Γ(µ, ν) = { γ : ∀ ξ ∈ C_0(R^N), ∫ ξ(y_1) γ(dy_1, dy_2) = ∫ ξ dµ and ∫ ξ(y_2) γ(dy_1, dy_2) = ∫ ξ dν },
with C_0(R^N) the set of continuous functions on R^N that vanish at infinity. From a simple minimization argument, we know that in the definition of W_p the infimum is actually a minimum. A map that realizes the minimum in the definition (2.5) of W_p is called an optimal transport plan, the set of which is denoted by Γ_0(µ, ν).
In the one-dimensional framework, we may simplify these definitions. Indeed, any probability measure µ on the real line R can be described in terms of its cumulative distribution function F_µ(x) = µ((−∞, x]), which is a right-continuous and nondecreasing function with F_µ(−∞) = 0 and F_µ(+∞) = 1. Then we can define the generalized inverse F_µ^{−1} of F_µ (or monotone rearrangement of µ) by

    F_µ^{−1}(z) := inf { x ∈ R : F_µ(x) > z };

it is a right-continuous and nondecreasing function as well, defined on [0, 1]. We have, for every nonnegative Borel map ξ,

    ∫_R ξ(x) µ(dx) = ∫_0^1 ξ(F_µ^{−1}(z)) dz.

In particular, µ ∈ P_p(R) if and only if F_µ^{−1} ∈ L^p(0, 1). Moreover, in the one-dimensional setting, there exists a unique optimal transport plan realizing the minimum in (2.5). More precisely, if µ and ν belong to P_p(R), with monotone rearrangements F_µ^{−1} and F_ν^{−1}, this optimal plan is (F_µ^{−1}, F_ν^{−1})_# L_{(0,1)}, where L_{(0,1)} is the restriction of the Lebesgue measure to (0, 1). Thus we have the explicit expression of the Wasserstein distance (see [32,27,33])

    W_p(µ, ν) = ( ∫_0^1 |F_µ^{−1}(z) − F_ν^{−1}(z)|^p dz )^{1/p},

and the map µ → F_µ^{−1} is an isometry between P_p(R) and the convex subset of (essentially) nondecreasing functions of L^p(0, 1).
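The quantile formula above makes the one-dimensional Wasserstein distance straightforward to compute in practice. The following sketch is illustrative (it assumes two discrete measures given by n samples with equal weights 1/n, so that the quantile functions are step functions and the integral reduces to a mean over sorted samples).

```python
import numpy as np

# Illustrative sketch: in 1D, W_p(mu, nu) is the L^p distance between the
# quantile functions F_mu^{-1} and F_nu^{-1}. For two empirical measures
# with n atoms of weight 1/n each, this reduces to matching order statistics.

def wasserstein_1d(xs, ys, p=1):
    xs, ys = np.sort(xs), np.sort(ys)  # monotone rearrangements
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

# Translating both atoms of a two-Dirac measure by 0.2 costs 0.2 in W_1.
d = wasserstein_1d(np.array([-0.5, 0.5]), np.array([-0.3, 0.7]))
assert abs(d - 0.2) < 1e-12
```

The same routine with p = 2 illustrates the isometry statement: the distance is computed entirely on the rearranged samples, i.e. on the quantile functions.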

Convergence estimates
Let us first consider the limit ε → 0 for the system (1.3). Compactness methods have been used in [26] to get L¹_loc convergence in space. However, in order to pass to the aggregation equation, one needs global L¹ convergence, which we prove in the following theorem, along the lines of [23].

Theorem 2.2 There exists a constant C > 0 such that, for any ε > 0, denoting by (u^ε, v^ε) the solution to (1.3) with initial data (u_0, v_0), the following estimate holds:

    ‖u^ε(t) − u(t)‖_{L¹(R)} ≤ C √(εt) TV(u_0),    t ≥ 0,

where u is the entropy solution to the Burgers equation with initial datum u_0.
Proof. Denote by (a^ε, b^ε) the solution to (1.4). So as to obtain entropy inequalities on (a^ε, b^ε), we need monotonicity properties of G: one can check that G(a^ε, b^ε) is decreasing with respect to a^ε and to b^ε if the so-called subcharacteristic condition holds, after a suitable modification of the flux which does not affect the value of (a^ε, b^ε). Now, obtaining entropy inequalities on (a^ε, b^ε) consists in making a comparison with constant-state solutions to (1.4). Namely, letting h(k) be such that G(k, h(k)) = 0, the pair (k, h(k)) is a solution to (1.4). Thus the following system holds: Multiplying (2.7a) by sgn(a^ε − k), (2.7b) by sgn(b^ε − h(k)) and summing yields: Hence, using the monotonicity of G, we get the following entropy inequalities on (a^ε, b^ε): We next estimate the distance to equilibrium: to do so, we formally differentiate this quantity and use (1.4). Integrating in space gives a bound involving positive constants A and B which depend neither on ε nor on time. A Gronwall lemma then gives: where we still denote by C = B/A a constant independent of time and of ε.
Differentiating (2.9) in time and (2.11) in space, and using (2.8), thus yields: (2.12) Then, we estimate ‖u(t) − w^ε(t)‖_{L¹} using Kuznetsov's doubling of variables technique (see e.g. [29] for scalar conservation laws with viscosity and [4] for a more general formalism), in order to combine (2.12) with the Kruzkov inequalities satisfied by the entropy solution u, which read: (2.13) Writing respectively (2.13) at the point (s, x) for κ = w^ε(t, y) and (2.12) at the point (t, y) for κ = u(s, x), we get: Let ω_α and Ω_β be two mollifying kernels. Setting g(s, t, x, y) = ω_α(s − t) Ω_β(x − y) and testing (2.14a) and (2.14b) against g(·, t, ·, y) 1_{[0,T]} and g(s, ·, x, ·) 1_{[0,T]} respectively, and integrating over [0, T] × R, we get on the one hand: (2.17) Then, we write: (2.18) A triangle inequality gives for I_1: with T_1 ≤ Cβ · TV(u_0), the second term T_2 appearing in (2.17), and for the last one we write: and then we use the fact that w^ε is uniformly Lipschitz in L¹(R) with respect to ε. Indeed, one has ∂_t w^ε = (1/(2c)) (h′(a^ε) − 1) ∂_t a^ε, with h′(a^ε) − 1 uniformly bounded with respect to ε since a^ε stays in the compact set [m, M] for all time. In addition, estimating ‖∂_t a^ε(t)‖_{L¹} can be done by reusing (1.4) and (2.10): with C > 0 still independent of time and of ε.
All in all, we get for I_1: and, similarly, for I_2: Going back to (2.18), we obtain: Using a triangle inequality, one can show that: and similarly: We then bound the term RHS from above using this inequality. Finally, we get: which, after optimizing the values of α and β and noticing that TV(a_0), TV(b_0) ≤ C · TV(u_0), gives the announced estimate.

Denoting ρ = −∂_x u, the convergence of u^ε(t) towards u(t) in L¹(R) ensures that ρ(t) is a probability measure. Indeed, since for all ε > 0, ρ^ε = −∂_x u^ε is a nonnegative distribution, so is ρ. The Riesz–Markov theorem then ensures that ρ can be represented by a nonnegative Borel measure. Besides, for a.e. t ≥ 0, u^ε(t) is a nonincreasing function taking values in [0, 1] and hence converges to a certain limit as x goes to +∞. The same holds true for the limit function u(t). But, since u^ε(t) − u(t) ∈ L¹(R), the difference u^ε(t, x) − u(t, x) must vanish as x goes to +∞. Therefore the total mass of ρ(t) is 1.

Numerical discretization
From now on, we denote by ∆t the time step and we introduce a Cartesian mesh of size ∆x. We denote t_n = n∆t for n ∈ N and x_j = j∆x for j ∈ Z. In this section, we extend our framework and consider the aggregation equation (1.1) with an arbitrary pointy potential W satisfying the following conditions:

1. W is even and

In this framework, the convergence of ρ^ε towards ρ for a slightly different problem has also been studied in [18]. Adapting the argument, the convergence still holds provided the subcharacteristic condition c > a_∞ is verified. However, for such general potentials, the authors were not able to obtain the estimates on the speed of convergence stated in Theorem 2.2.
In this section, we propose numerical schemes able to capture the limit ε → 0, that is, satisfying the so-called asymptotic preserving (AP) property. We consider two approaches: the first one is based on a splitting algorithm, the second one on a well-balanced discretization.

A splitting algorithm
A first simple approach to discretize system (1.2) is to use a splitting method. Such a method is known to be convergent and easy to implement, but it introduces numerical diffusion.
Notice that the system (1.2) rewrites, with µ = σ − cρ and ν = σ + cρ, as:

    ∂_t µ − c ∂_x µ = (1/ε) (a[ρ]ρ − σ),
    ∂_t ν + c ∂_x ν = (1/ε) (a[ρ]ρ − σ),    (3.22)

where ρ = (ν − µ)/(2c) and σ = (µ + ν)/2. The idea of the method is to solve in a first step, on (t_n, t_n + ∆t), the system

    ∂_t µ = (1/ε) (a[ρ]ρ − σ),    ∂_t ν = (1/ε) (a[ρ]ρ − σ),

with initial data (µ(t_n), ν(t_n)) = (µ^n, ν^n). We obtain µ^{n+1/2}_j = µ(t_n + ∆t, x_j) and ν^{n+1/2}_j = ν(t_n + ∆t, x_j). Notice that this system may be solved explicitly. Indeed, by adding and subtracting the two equations, we see that ρ (and hence a[ρ]ρ) is left unchanged during this step, while σ relaxes exponentially towards a[ρ]ρ; we deduce after an integration

    ρ^{n+1/2}_j = ρ^n_j,    σ^{n+1/2}_j = a^n_j ρ^n_j + e^{−∆t/ε} (σ^n_j − a^n_j ρ^n_j),    (3.23)

where a^n_j is a discretization of the velocity a[ρ^n](x_j). Then, in a second step, we discretize by a classical finite volume upwind scheme the system

    ∂_t µ − c ∂_x µ = 0,    ∂_t ν + c ∂_x ν = 0.

That is,

    µ^{n+1}_j = µ^{n+1/2}_j + (c∆t/∆x) (µ^{n+1/2}_{j+1} − µ^{n+1/2}_j),    (3.24a)
    ν^{n+1}_j = ν^{n+1/2}_j − (c∆t/∆x) (ν^{n+1/2}_j − ν^{n+1/2}_{j−1}).    (3.24b)

Coming back to the variables ρ and σ, we obtain

    ρ^{n+1}_j = ρ^n_j − (∆t/(2∆x)) (σ^{n+1/2}_{j+1} − σ^{n+1/2}_{j−1}) + (c∆t/(2∆x)) (ρ^n_{j+1} − 2ρ^n_j + ρ^n_{j−1}),    (3.25)

together with the analogous formula for σ^{n+1}_j. The splitting algorithm then consists in the successive application of these two steps.

Lemma 3.1 For any ε > 0, if both the CFL condition c∆t/∆x ≤ 1 and the subcharacteristic condition c ≥ a_∞ hold, then the splitting scheme (3.23)–(3.24) is L¹-stable:

    Σ_j ( |µ^{n+1}_j| + |ν^{n+1}_j| ) ≤ Σ_j ( |µ^n_j| + |ν^n_j| ).

Proof. Under the condition c ≥ a_∞, in the expression of µ^{n+1/2}_j, the coefficient in front of µ^n_j is nonnegative and the one in front of ν^n_j is nonpositive. Similarly, in ν^{n+1/2}_j, the coefficient of µ^n_j is nonpositive and the one in front of ν^n_j is nonnegative. Taking the absolute value and summing therefore yields the bound for the first step. It remains to remark that, provided the CFL condition c∆t/∆x ≤ 1 is verified, (3.24) expresses (µ^{n+1}_j, ν^{n+1}_j) as convex combinations, which gives the result.

Note that similar schemes have also been studied in [24] and proved to converge at rate √∆x. Let us now verify the AP property. When ε → 0, we verify that the equation (3.25) on ρ converges to the following Rusanov discretization of (1.1) (see [15] for numerical simulations using the Rusanov scheme):

    ρ^{n+1}_j = ρ^n_j − (∆t/∆x) ( F_{j+1/2} − F_{j−1/2} ),    F_{j+1/2} = (a^n_{j+1} ρ^n_{j+1} + a^n_j ρ^n_j)/2 − (c/2) (ρ^n_{j+1} − ρ^n_j).    (3.27)

This limiting scheme provides a consistent discretization of (1.1). Indeed, a similar scheme has been extensively studied in [10] using compactness arguments, and the following convergence result has been proved.

Theorem ([10]) Assume that ρ_0 is a probability measure and that the stability conditions c∆t/∆x ≤ 1 and c ≥ a_∞ are satisfied. Let T > 0 and suppose we initialize the scheme (3.27) with ρ^0_j = (1/∆x) ρ_0(C_j), where C_j = [x_{j−1/2}, x_{j+1/2}).
Then, denoting by ρ_∆x the reconstruction given by the scheme (3.27), that is,

    ρ_∆x(t, x) = Σ_{j∈Z} ρ^n_j 1_{C_j}(x)  for t ∈ [t_n, t_{n+1}),

ρ_∆x converges weakly in the sense of measures on [0, T] × R towards the solution ρ of equation (1.1) as ∆x goes to 0.
It has also been proved in [12] that the scheme (3.27) converges at rate √∆x.
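The splitting strategy described above can be sketched in a few lines. The code below is our own illustrative reading of the scheme (grid, parameters, periodic boundary conditions, and the Newtonian potential W(x) = |x|/2 are assumptions, not taken from the paper): step 1 integrates the stiff relaxation exactly with ρ frozen, step 2 performs the upwind transport of the diagonal variables µ and ν.

```python
import numpy as np

# Illustrative sketch of the splitting scheme: exact stiff relaxation of
# sigma towards a[rho]*rho, then upwind transport of mu = sigma - c*rho
# and nu = sigma + c*rho. Periodic boundary conditions for simplicity.

def a_of_rho(rho, x, dx):
    # a[rho] = -(W' * rho) with W'(y) = sgn(y)/2 (Newtonian potential)
    return -0.5 * (np.sign(x[:, None] - x[None, :]) * rho[None, :]).sum(axis=1) * dx

def splitting_step(rho, sigma, x, dx, dt, c, eps):
    # Step 1: exact integration of the relaxation part (rho is unchanged)
    arho = a_of_rho(rho, x, dx) * rho
    sigma = arho + (sigma - arho) * np.exp(-dt / eps)
    # Step 2: upwind transport of the diagonal variables at speeds -c and +c
    mu, nu = sigma - c * rho, sigma + c * rho
    lam = c * dt / dx
    mu = mu + lam * (np.roll(mu, -1) - mu)   # speed -c: upwind to the right
    nu = nu - lam * (nu - np.roll(nu, 1))    # speed +c: upwind to the left
    return (nu - mu) / (2 * c), (mu + nu) / 2  # back to (rho, sigma)

# Usage sketch: a bump aggregating towards its center of mass.
dx = 2.0 / 400
x = np.arange(-1 + dx / 2, 1, dx)
rho = np.exp(-20 * x ** 2); rho /= rho.sum() * dx   # unit mass
sigma = a_of_rho(rho, x, dx) * rho                  # well-prepared data
for _ in range(200):
    rho, sigma = splitting_step(rho, sigma, x, dx, 0.9 * dx, 1.0, 1e-6)
assert abs(rho.sum() * dx - 1.0) < 1e-12  # conservative scheme
```

The parameters satisfy the stability conditions of Lemma 3.1: the CFL number is 0.9 ≤ 1 and, for a unit-mass ρ, a_∞ ≤ 1/2 ≤ c = 1; mass conservation is exact up to rounding because both steps are written in conservation form.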

Well-balanced discretization
Although the splitting method provides a simple way to obtain a discretization which is uniform with respect to the parameter ε, the resulting scheme suffers from strong numerical diffusion and may not have a good large-time behaviour. For this reason, well-balanced schemes have been introduced: a scheme is said to be well-balanced when it preserves equilibria. The method proposed in this section comes from [17].
Let us assume that, for some n ∈ N, the approximation (µ^n_j, ν^n_j)_{j∈Z} of the solution (µ(t_n, x_j), ν(t_n, x_j))_{j∈Z} of (3.22) is known. We construct an approximation at time t_{n+1} using a finite volume upwind discretization of (3.22), with the discretizations H^n_{µ,j}, H^n_{ν,j} of the source terms to be prescribed right afterwards: (3.28)

In order to preserve equilibria, we set H^n_{µ,j} = (1/∆x) ∫_{x_{j−1}}^{x_j} H(µ̄, ν̄) dx, where (µ̄, ν̄) solves the stationary system with incoming boundary conditions on (x_{j−1}, x_j): (3.30)

And, in the same fashion, H^n_{ν,j} = (1/∆x) ∫_{x_j}^{x_{j+1}} H(μ̃, ν̃) dx, where (μ̃, ν̃) is the solution of the stationary system on (x_j, x_{j+1}): (3.31)

Reporting equations (3.30b) and (3.31a) into the discretization of the source terms, we get H^n_{ν,j} = (cε/∆x)(ν̄(x_j) − ν^n_{j−1}) and H^n_{µ,j} = −(cε/∆x)(µ^n_j − μ̃(x_j)). Hence, one may rewrite the scheme (3.28) as: (3.32)

Remark that the stationary system is equivalent to a system on ρ and σ; using the quantities which are constant respectively on (x_j, x_{j+1}) and (x_{j−1}, x_j), one has:

Thus, it turns out that the scheme can be rewritten only in terms of the discretized unknowns and of σ_{j±1/2}: (3.37a)–(3.37b)

However, solving the stationary systems (3.30) and (3.31) involves the resolution of a nonlinear and nonlocal ODE. Instead, we propose an approximation in the spirit of [17].
We replace the nonlinear term in (3.30)–(3.31) by a linear one; recall that the definition of the product a[ρ]ρ should be taken with care [17,10]. In [12], the authors showed that, when discretizing the product a[ρ]ρ, if a[ρ] and ρ are not evaluated at the same point, then the resulting scheme produces the wrong dynamics. To take this into account, we split ρ into one contribution coming from the left and one contribution coming from the right, i.e. we set ρ = ρ_L + ρ_R and σ = σ_L + σ_R, where ρ_L(∆x) = 0 and ρ_R(0) = 0. This implies that ρ(∆x) = ρ_R(∆x) and ρ(0) = ρ_L(0). More precisely, we solve the two following boundary value problems on (0, ∆x): (3.38)

We may solve these linear systems explicitly and, since ρ_L(0) = ρ(0) and ρ_R(∆x) = ρ(∆x), obtain the relations

Notice that we have, where we denote by a_+ = max(0, a) ≥ 0 and a_− = max(0, −a) ≥ 0 the positive and negative parts of a: Using the boundary conditions in (3.30), we have: (3.42) Together with (3.39) and the fact that σ = σ_L + σ_R is constant on [0, ∆x], we get the following 2 × 2 system on the unknowns µ(0), ν(∆x):

Lemma 3.3 (L¹ stability) Under the CFL condition c∆t/∆x ≤ 1 and the subcharacteristic condition c ≥ a_∞, the sequence (µ^n_j, ν^n_j)_{j,n} defined by the scheme (3.47) verifies the following L¹ stability property:

Proof. In each of the combinations in (3.47), the first coefficient is nonnegative under the CFL condition c∆t/∆x ≤ 1, and so is the last one since κ^n_{j±1/2,L} ≥ 0 and κ^n_{j±1/2,R} ≤ 0. Moreover, under the subcharacteristic condition c ≥ a_∞, it holds that −c ≤ κ_{j±1/2,L} + κ_{j±1/2,R} ≤ c, so the remaining coefficient is nonpositive. Thus, applying the triangle inequality and reindexing the sums appropriately, we obtain the desired bound. This concludes the proof.
Lemma 3.4 (Consistency for smooth solutions) Assume that, for all j ∈ Z, the values a_{j±1/2,L/R} are consistent approximations of the velocity. Then, for any ε > 0, the scheme (3.37) is consistent with (1.2), provided that the solutions are smooth enough.
Figure 1 shows that both schemes recover the correct dynamics in the limit ε → 0: for the potential W(x) = |x|/2, one can compute the exact velocity of both Dirac masses for the aggregation equation (1.1) and check that they should be located respectively at x = −0.2 and x = 0.2 at final time T = 1.2.
This test is set up with ε = 10⁻⁷, on a Cartesian mesh of [−1, 1] with 1500 cells, c = 1 and the CFL number c∆t/∆x = 0.9. Both schemes (3.27) and (3.49) display the correct velocity for the Dirac masses, but one can notice that the Rusanov scheme (3.27) exhibits more numerical diffusion. Note that, both schemes being written in conservation form, they preserve the total mass of ρ, which is also verified numerically.
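The reference positions can be recovered by hand. Assuming two Dirac masses of equal weight 1/2 initially located at ∓0.5 (an assumption consistent with the positions quoted above), each atom moves with the velocity a[ρ] evaluated at its position, i.e. at speed 1/4 towards the other atom; the illustrative sketch below reproduces the positions ∓0.2 at T = 1.2.

```python
# Back-of-the-envelope check of the Dirac dynamics for W(x) = |x|/2:
# each mass sees the velocity -(W' * rho) induced by the other one,
# with the convention sgn(0) = 0 at its own location.

def dirac_positions(x_left, x_right, w_left, w_right, T):
    # a[rho](x_i) = -sum_{j != i} w_j * sgn(x_i - x_j) / 2
    v_left = -0.5 * w_right * (1 if x_left > x_right else -1)
    v_right = -0.5 * w_left * (1 if x_right > x_left else -1)
    return x_left + T * v_left, x_right + T * v_right

print(dirac_positions(-0.5, 0.5, 0.5, 0.5, 1.2))  # (-0.2, 0.2)
```

This elementary computation is valid as long as the two atoms have not collided; after collision they merge into a single stationary Dirac mass.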
We then investigate the order of convergence when ∆x goes to 0 with ε fixed, in the Wasserstein distance W_1 (the numerical results are the same for W_2).
After performing tests for several values of ε, it appears that the convergence rate does not depend on the size of ε. Therefore, as an example, we propose simulations at final time T = 0.5, with the same initial data and stability parameters as above, with ε = 2 × 10⁻⁶ for Figure 2 and ε = 10⁻² for Figure 3.
For a fixed value of ε, both schemes seem to converge with order 1/2 with respect to ∆x for the smooth potential W(x) = x²/2 (see Figure 2), whereas they seem to be of order 1 for the potential W(x) = |x|/2 (see Figure 3). This can be explained by the fact that both schemes possess some numerical diffusion, which is somehow counterbalanced by the aggregation phenomenon in the case of a pointy potential, as already observed in [15]. Due to the link with the Burgers equation, this superconvergence phenomenon is directly related to the results of Després [13], which should be rigorously extended to our case (the mere extension to the upwind scheme of [10] for the aggregation equation is not straightforward). Finally, we also verify the well-balanced property of the scheme (3.48) by computing the W_1 distance between the approximated solution at time T = 0.5 and the stationary solution of (1.2) given by:

    ρ(t, x) = ρ_0(x) := (1/(8εc²)) (1 − tanh²(x/(4εc²))).
The test is conducted with ε = 2 × 10⁻⁴, with the exact boundary conditions given by the above formula, and for several values of ∆x. As shown in Figure 4, the scheme (3.48) preserves the above equilibrium for any ∆x (although we have replaced the resolution of the systems (3.30) and (3.31) by linear systems, see (3.38)), while, for the splitting scheme, we recover the linear convergence towards ρ_0, which is, in this case, the exact solution.
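As a quick sanity check of the stationary profile used in this test (an illustrative computation with the parameter values of the experiment), one can verify that ρ_0 has unit mass: the substitution y = x/(4εc²) gives (1/2) ∫ (1 − tanh²(y)) dy = (1/2) [tanh(y)] = 1 over R.

```python
import numpy as np

# Sanity check: the stationary profile
#   rho0(x) = (1 - tanh^2(x/(4*eps*c^2))) / (8*eps*c^2)
# integrates to 1 over R. Parameter values are those of the test above.

eps, c = 2e-4, 1.0
s = 4 * eps * c ** 2                    # width of the profile
x = np.linspace(-0.05, 0.05, 20001)     # the profile is concentrated near 0
rho0 = (1.0 - np.tanh(x / s) ** 2) / (2.0 * s)
mass = np.sum(rho0) * (x[1] - x[0])     # simple quadrature; tails are negligible
assert abs(mass - 1.0) < 1e-6
```

The profile concentrates on a region of width O(εc²), which is why a fine mesh near the origin is needed to resolve it in the convergence tests.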

We now turn to proving entropy inequalities on u^ε. Straightforward computations yield the existence of a constant C > 0 such that, for all a, b ∈ [m, M], one has |h(a) − b| ≤ C|G(a, b)|. We therefore work on the variable w, and this inequality, together with (2.10), gives in turn the result.

Figure 1: Dynamics of two Dirac masses for the potential W(x) = |x|/2 at time T = 1.2.

Figure 2: Order of convergence of the splitting scheme and the well-balanced scheme for the smooth potential W(x) = x²/2.

Figure 3: Order of convergence of the splitting scheme and the well-balanced scheme for the pointy potential W(x) = |x|/2.

Figure 4: Distance to the equilibrium for the splitting scheme and the well-balanced scheme, for the pointy potential W(x) = |x|/2.