Analysis of the Transient Behaviour in the Numerical Solution of Volterra Integral Equations

Abstract: In this paper, the asymptotic behaviour of the numerical solution to Volterra integral equations is studied. In particular, a technique based on an appropriate splitting of the kernel is introduced, which allows one to obtain a vanishing asymptotic (transient) behaviour in the numerical solution, consistently with the properties of the analytical solution, without imposing restrictions on the integration step length.


Introduction
Volterra integral equations (VIEs) of the type
$$x(t) = g(t) + \int_0^t K(t, s)x(s)\,ds, \quad t \in [0, +\infty), \qquad (1)$$
and their discrete version,
$$x(t) = g(t) + \sum_{s=0}^{t-1} K(t, s)x(s), \quad t = 1, 2, \ldots, \quad x(0) \text{ given}, \qquad (2)$$
are significant mathematical models for representing real-life problems involving feedback and control [1,2]. The analysis of their dynamics allows one to describe the phenomena they represent. In [3], the two equations were analysed in the unifying notation of time scales, and some results were obtained under linear perturbation of the kernel. Here, we revise this approach to obtain results on classes of linear discrete equations whose kernel can be split into a well-behaved part (the unperturbed kernel) plus a term that acts as a perturbation. The implications for numerical methods are, in general, not straightforward and usually pass through restrictions on the step length. Nevertheless, here, we overcome this problem and obtain some results on the stability of numerical methods for VIEs. For Equations (1) and (2), we assume that $x(t) = (x_1(t), \ldots, x_d(t))^T \in \mathbb{R}^d$, $g(t) = (g_1(t), \ldots, g_d(t))^T \in \mathbb{R}^d$, and $K(t, s) = \big(K_{ij}(t, s)\big)_{i,j=1,\ldots,d}$ is a $d \times d$ matrix.
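Since x(t) in the discrete Equation (2) depends only on previously computed values, the equation can be advanced by direct recursion. The following minimal sketch uses a hypothetical scalar kernel and forcing (not taken from the paper) chosen so that the solution visibly inherits the vanishing behaviour of g:

```python
# Direct recursion for the scalar discrete Volterra equation
#   x(t) = g(t) + sum_{s=0}^{t-1} K(t, s) x(s),  t = 1, 2, ...
# Illustrative choices (not from the paper): a decaying convolution
# kernel K(t, s) = 0.5 * 2^{-(t-s)} and a vanishing forcing g(t) = 2^{-t}.

def solve_discrete_volterra(g, K, x0, T):
    """Return [x(0), ..., x(T)] for x(t) = g(t) + sum_{s<t} K(t, s) x(s)."""
    x = [x0]
    for t in range(1, T + 1):
        x.append(g(t) + sum(K(t, s) * x[s] for s in range(t)))
    return x

x = solve_discrete_volterra(g=lambda t: 2.0 ** (-t),
                            K=lambda t, s: 0.5 * 2.0 ** (-(t - s)),
                            x0=1.0, T=60)
print(x[60])  # the solution inherits the vanishing behaviour of g
```

Here the kernel sums are uniformly bounded below 1, so the recursion behaves as a contraction with a vanishing forcing term.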
The paper is organised as follows. In Section 2, we introduce the split kernel for Equation (2) and, using a new formulation of Theorem 2 in [3], we provide sufficient conditions on the above-mentioned splitting that imply that the solution vanishes. In Section 3, we propose a reformulation of the (ρ, σ) methods for (1) as discrete Volterra equations and exploit the theory developed in the previous section in order to investigate their numerical stability properties. In Section 4, some applications are described and analysed through the tools developed in Sections 2 and 3, for which we obtain new and more general results on the asymptotic behaviour of the numerical solutions of both linear and nonlinear equations. In Section 5, some numerical examples are reported.

Asymptotics for Discrete Equations
Consider the discrete Volterra Equation (2) with K(t, s) = P(t, s) + Q(t, s), where P and Q are d × d matrices. Let $r_Q(t, s)$ be the resolvent kernel associated with Q(t, s), defined as the solution of the corresponding resolvent equation. The following theorem, which we proved in [3], represents the starting point of our investigation.

Theorem 1. Assume that, for Equation (2) with K(t, s) = P(t, s) + Q(t, s), s = 0, . . . , t, it holds that:

This theorem is particularly interesting when the matrix P in the splitting of the kernel K is such that P(t, s) = 0 for s = M + 1, . . . , t − N − 1, with M and N positive constants and t ≥ N + M + 1. Then, Equation (2) can be rewritten as
$$x(t) = g(t) + \sum_{s=0}^{M} P(t, s)x(s) + \sum_{s=t-N}^{t-1} P(t, s)x(s) + \sum_{s=0}^{t-1} Q(t, s)x(s), \qquad (4)$$
and the following result holds. Here and in the following, the limit of matrices is understood element-wise.

Theorem 2. Consider Equation (4), and assume that:

If there exists a constant G > 0 such that $\|g(t)\| \le G$, then the solution x(t) is bounded, and if $\lim_{t\to\infty} g(t) = 0$, then $\lim_{t\to\infty} x(t) = 0$.
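The splitting of Section 2 can be illustrated on a toy scalar example. All concrete choices below are hypothetical (they are not the paper's kernels): P is supported only on the first M + 1 and last few lags, Q is a small perturbation tail, and the forcing g vanishes, so the solution vanishes as Theorem 2 predicts:

```python
# Sketch of the kernel splitting K = P + Q of Section 2 on a toy scalar
# example (all concrete choices here are hypothetical, for illustration).

M, N = 3, 2

def P(t, s):
    # "banded" part: supported on the lags s <= M or s >= t - N only
    return 0.3 * 2.0 ** (-(t - s)) if (s <= M or s >= t - N) else 0.0

def Q(t, s):
    # perturbation part: a small, summable convolution tail
    return 0.1 / (1 + (t - s)) ** 2

def step(x, g, t):
    # advance x(t) = g(t) + sum_{s<t} (P + Q)(t, s) x(s)
    return g(t) + sum((P(t, s) + Q(t, s)) * x[s] for s in range(t))

x = [1.0]
for t in range(1, 81):
    x.append(step(x, lambda u: 2.0 ** (-u), t))
print(x[80])  # vanishes together with g, consistently with Theorem 2
```

The kernel sums here stay well below 1, which is the kind of contraction property the splitting is designed to expose.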

Background Material on (ρ, σ) Methods
The analysis carried out in the previous section can be effectively applied to the (ρ, σ) methods for systems of VIEs of the form
$$y(t) = f(t) + \int_0^t k(t, s)y(s)\,ds, \quad t \in [0, +\infty). \qquad (5)$$
Here, we consider the numerical solution to (5) obtained by the (ρ, σ) methods with Gregory convolution weights (see, for example, [5–7]):
$$y_n = f(t_n) + h\sum_{j=0}^{n_0-1} w_{nj}\, k(t_n, t_j)y_j + h\sum_{j=n_0}^{n} \omega_{n-j}\, k(t_n, t_j)y_j, \quad n = n_0, n_0 + 1, \ldots, \qquad (6)$$
where $y_n \approx y(t_n)$, with $t_n = nh$ for $n = 0, 1, \ldots$, $h > 0$ is the step size, and $w_{nj}$, $\omega_j$ are the weights. We assume that the weights are non-negative and that $y_0 = f(0)$ and $y_1, \ldots, y_{n_0-1}$, $n_0 \ge 1$, are given starting values.
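A minimal implementation of a direct-quadrature method of this family can be sketched with trapezoidal weights (the lowest-order Gregory rule: $\omega_0 = 1/2$, $\omega_j = 1$ otherwise). The test problem below is manufactured (not one of the paper's examples): with $k(t, s) = \frac{1}{2}e^{-(t-s)}$ one has $\int_0^t k(t, s)e^{-s}\,ds = \frac{t}{2}e^{-t}$, so choosing $f(t) = (1 - t/2)e^{-t}$ makes the exact solution $y(t) = e^{-t}$. In the scalar linear case, the implicit stage can be solved in closed form:

```python
import math

# Trapezoidal direct quadrature for y(t) = f(t) + int_0^t k(t, s) y(s) ds.
# Manufactured test: k(t, s) = 0.5*exp(-(t-s)), f(t) = (1 - t/2)*exp(-t),
# exact solution y(t) = exp(-t).

def trapezoidal_vie(f, k, h, n_steps):
    y = [f(0.0)]                      # y_0 = f(0)
    for n in range(1, n_steps + 1):
        tn = n * h
        lag = 0.5 * k(tn, 0.0) * y[0] + sum(
            k(tn, j * h) * y[j] for j in range(1, n))
        # solve y_n = f(t_n) + h*lag + (h/2) k(t_n, t_n) y_n  for y_n
        y.append((f(tn) + h * lag) / (1.0 - 0.5 * h * k(tn, tn)))
    return y

h = 0.1
y = trapezoidal_vie(f=lambda t: (1.0 - 0.5 * t) * math.exp(-t),
                    k=lambda t, s: 0.5 * math.exp(-(t - s)),
                    h=h, n_steps=200)
print(abs(y[100] - math.exp(-10.0)))  # small discretisation error at t = 10
```

The numerical solution tracks $e^{-t}$ and vanishes at infinity, consistently with the forcing term.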
Moreover, we want to underline some properties of the Gregory convolution weights $\omega_n$ (see, for example, [5,7]), which will be useful in the subsequent sections: the starting weights $w_{nj}$ remain bounded as $n \to \infty$ (7), and there exists a constant $\Omega > 0$ such that $\omega_n \le \Omega$ for every $n$ (8). From now on, we assume that h satisfies a solvability condition ensuring that the matrix $I - h\omega_0\, k(t_n, t_n)$ is invertible, where I is the identity matrix of size d.
Choose $n^* > n_0$ and define the matrices P and Q accordingly, assigning to P the initial quadrature terms ($j \le n^* - 1$) and the most recent ones ($j \ge n - n_0 + 1$), and to Q the remaining convolution part. The (ρ, σ) method (6) can then be written, for $n = n^* + n_0, n^* + n_0 + 1, \ldots$, as follows:

This alternative formulation of the method in terms of the matrices P and Q allows us to analyse its asymptotic properties using the theory developed in the previous section for Equation (4). Indeed, (11) corresponds to the discrete Equation (4) with M and N equal to $n^* - 1$ and $n_0 - 1$, respectively, and Q(n, j) = 0 for j = 0, . . . , M.

Dynamic Behaviour of Numerical Approximations and Applications
In [8], we carried out an analysis of Volterra equations on time scales that allowed us to obtain results on the asymptotic behaviour of the analytical solution of (5) and of its discrete counterpart in hZ, under assumptions (12) and (13), respectively, where $h > 0$, $t_n = nh$, $n = 0, 1, \ldots$, and $\bar{t} = \bar{n}h$. If k(t, s) is non-increasing with respect to s, the bound (13) is certainly implied by (12) for those values of the parameter h such that
$$\sup_{n \ge \bar{n}} \Big( h\,k(t_n, \bar{t}) + \int_{\bar{t}}^{t_n} k(t_n, s)\,ds \Big) \le \alpha < 1.$$
This relation, which allows one to establish a connection between the behaviour of the analytical solution of (5) and that of its discrete counterpart in hZ, does not straightforwardly apply to numerical methods, due to the presence of the weights $w_{nj}$ and $\omega_j$ of the (ρ, σ) methods. This is because the weights can cause the loss of monotonicity, and they may also be greater than 1, so that (13) is not satisfied. In [8], it was proved that if (12), $\sup_{t \in [s, +\infty)} k(t, s) < +\infty$, and $\lim_{t\to\infty} k(t, s) = 0$ for all $s \ge 0$ are satisfied, then the analytical solution y(t) of Equation (5) vanishes at infinity whenever $\lim_{t\to\infty} f(t) = 0$. Moreover, in [9], a restriction of the form (14) on the step size was derived. The bound (14) assures that, when (12) is satisfied, the numerical solution $y_n$ tends to zero as $n \to \infty$ if the step size h is small enough, consistently with the behaviour of y(t).
Theorem 2 in Section 2 allows us to remove the restriction on h given by (14). In order to show this result, which states, in fact, the unconditional stability of the (ρ, σ) methods, we need the following preparatory lemma.

Lemma 1. Assume that:

Proof. Let h > 0 be a fixed value of the step size. Assumption (ii) implies that $k(t, s) \le k(t, \bar{t})$ for any $\bar{t} \le s \le t$. Moreover, $\lim_{t\to+\infty} k(t, \bar{t}) = 0$ because of (i); thus, for any $\varepsilon > 0$, we can choose $t^* > \max\{\bar{t}, \bar{s}\}$ such that

Now, we choose $\varepsilon$ such that $h\varepsilon + \alpha \le \beta < 1$ and $n^*$ such that $n^* h \ge t^*$. Since, by (ii), $k(t_n, s)$ is a non-increasing function of s for each

Theorem 3.
Assume that all the hypotheses of Lemma 1 hold for the kernel k of Equation (5); then, for the numerical approximation to its solution y(t) obtained by the (ρ, σ) method (6), if lim t→+∞ f (t) = 0, one has lim n→+∞ y n = 0.
Proof. For a fixed h > 0, Lemma 1 provides a value $n^* > n_0$ for which $h \sum_{j=n^*}^{n} k(t_n, t_j) \le \beta < 1$, with β a positive constant. Referring to the reformulation (11) of the method, all the assumptions of Theorem 2 are satisfied. Thus, because of property (7) on the asymptotic behaviour of the starting weights and of assumption (i) of Lemma 1, $g(n) = f(t_n) + h \sum_{j=0}^{n_0-1} w_{nj}\, k(t_n, t_j)y_j$ tends to zero as $n \to +\infty$. So, in view of Theorem 2, $y_n$ also vanishes.
Other applications of Theorem 2 concern the equation
$$y(t) = f(t) + \int_0^t k(t - s)\big[y(s) + G(s, y(s))\big]\,ds, \qquad (16)$$
which has been the subject of great attention in the literature (see, for example, [10–12]).
Here and in the following, we assume that Equation (16) is scalar (d = 1), the kernel k = k(t − s) is of convolution type, and G(t, y) is a continuous function for $t \in [0, +\infty)$ and $y \in \mathbb{R}$. A main assumption (see, for example, [13]) that is generally made on the nonlinear term G is that it represents a small perturbation, that is, there exists a function p(t) > 0 such that $|G(t, y)| \le p(t)|y|$. (17)
In order to describe the asymptotic behaviour of $y_n$, we prove the following theorem.

Proof. From (17) and (18), with $p_j = p(t_j)$, $j = 0, 1, \ldots$, consider now the equation for the majorant sequence $\zeta_n$. Since, from (8), the $\omega_n$ are bounded, we have that $\sum_{j=n_0}^{n} \omega_{n-j}|k(t_n - t_j)|p_j \le \Omega \sum_{j=n_0}^{n} |k(t_n - t_j)|p_j$, which is the convolution product of an $l^1$ sequence $(k_n)$ and a vanishing sequence $(p_n)$ and which, therefore, tends to zero as $n \to +\infty$. Hence, for every $\varepsilon > 0$ there exists $\nu$ such that, for all $n > \nu$, $h\sum_{j=n_0}^{n} \omega_{n-j}|k(t_n - t_j)|p_j < \varepsilon$. We choose $\varepsilon > 0$ such that $\alpha + \varepsilon \le \beta < 1$ and $\bar{n}$ such that $\bar{n}h \ge \bar{t}$ in assumption h2). With $n^* \ge \max\{\nu, \bar{n}\}$, the equation for $\zeta_n$ can be written in the more convenient form (11), for which all the assumptions of Theorem 2 hold. Thus, because of property (7) on the asymptotic behaviour of the starting weights and because of the vanishing behaviour of the kernel k, $g(n) = f(t_n) + h\sum_{j=0}^{n_0-1} w_{nj}\, k(t_n - t_j)(1 + p_j)y_j$ tends to zero as $n \to +\infty$. Therefore, $\lim_{n\to\infty} \zeta_n = 0$. This ends the proof because, by the comparison theorem in [14], $|y_n| \le \zeta_n$.

This theorem (Theorem 4) states that the numerical solution $y_n$ of (16) vanishes, for any step size h > 0, when the forcing term f tends to zero. The result is, of course, more interesting if we know that the analytical solution to (16) tends to zero. This can be proved by means of a result that the authors proved in [8]. To be more specific, the assumptions of Theorem 4 here assure that all the hypotheses of Theorem 9 in [8] are satisfied, thus implying that $\lim_{t\to\infty} y(t) = 0$.
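The comparison argument used in this proof can be checked numerically. In the sketch below, the perturbation bound $p(t) = 2/(1+t^2)$ is taken from the paper's second example, while the kernel and the nonlinearity G are hypothetical illustrative choices satisfying $|G(t, y)| \le p(t)|y|$; explicit (rectangle-rule) sums keep the sketch simple:

```python
import math

# Discrete analogue of y = f + k*(y + G(., y)) and of its majorant
# sequence zeta_n built from |k| and p; by induction |y_n| <= zeta_n.

def p(t):
    return 2.0 / (1.0 + t * t)

def G(t, y):
    return p(t) * math.sin(y)          # |sin(y)| <= |y|, so |G| <= p(t)|y|

k = lambda t: 0.4 * math.exp(-t)       # vanishing, summable kernel (hypothetical)
f = lambda t: math.exp(-t * t)         # vanishing forcing term

h, n_steps = 0.1, 300
y, zeta = [f(0.0)], [abs(f(0.0))]
for n in range(1, n_steps + 1):
    tn = n * h
    y.append(f(tn) + h * sum(k(tn - j * h) * (y[j] + G(j * h, y[j]))
                             for j in range(n)))
    zeta.append(abs(f(tn)) + h * sum(abs(k(tn - j * h)) * (1 + p(j * h)) * zeta[j]
                                     for j in range(n)))

dominated = all(abs(y[n]) <= zeta[n] + 1e-12 for n in range(n_steps + 1))
print(dominated, zeta[300])  # y_n is dominated by zeta_n, which vanishes
```

Since the majorant $\zeta_n$ tends to zero, the nonlinear numerical solution $y_n$ is forced to vanish as well, which is exactly the mechanism of the proof.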
The following result, which we prove in the case of scalar equations, represents a generalisation of Theorem 3.1 in [15], where the numerical stability of the (ρ, σ) methods up to order 3 was proved under some restrictions on the step length h. Here, by applying Theorem 2 to the (ρ, σ) method (6), we remove the constraint on the step size and extend the investigation to any method in the (ρ, σ) class.
If, however, assumption (iii)₂ holds instead of (iii)₁, the step size h has to be chosen such that $h\Phi \le \beta_1 < 1$ and $\int_{\bar{n}h}^{+\infty} \varphi(t)\,dt \le \beta_2$, with $\beta_1 + \beta_2 \le \alpha < 1$. So, by Lemma 1 in [9],

Consider now the following convolution equation:
$$x(t) = f(t) - \int_0^t a(t - s)x(s)\,ds, \quad t \ge 0. \qquad (22)$$
Its solution has the form
$$x(t) = f(t) - \int_0^t R(t - s)f(s)\,ds,$$
where the resolvent kernel R is the solution of the equation
$$R(t) = a(t) - \int_0^t R(t - s)a(s)\,ds, \quad t \ge 0. \qquad (23)$$
If the kernel a(t) of Equation (22) is completely monotone, that is, $(-1)^j a^{(j)}(t) > 0$, $j = 0, 1, \ldots$, $t \ge 0$, then (see, e.g., [16]) the resolvent R(t) is also completely monotone. Furthermore, the analytical solution x(t) and its numerical approximation $x_n$ obtained by a (ρ, σ) method both tend to zero as $t \to \infty$ and $n \to \infty$, respectively, when the forcing term f(t) tends to zero (see [6]). We point out that if $\lim_{t\to\infty} a(t) = 0$, then $\lim_{t\to\infty} R(t) = 0$ as well, since R(t) is the solution of the Volterra Equation (23), whose kernel is completely monotone and whose forcing term tends to zero. The significance of completely monotone kernels in Volterra equations is underlined in [13] (p. 27).
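The completely monotone case can be checked numerically. Under the resolvent convention $R = a - a * R$ (one common sign convention, assumed here), the kernel $a(t) = e^{-t}$ has the closed-form resolvent $R(t) = e^{-2t}$ (via Laplace transforms, $\hat{R} = \hat{a}/(1+\hat{a})$), which is indeed completely monotone and vanishing:

```python
import math

# Solve the resolvent equation R(t) = a(t) - int_0^t a(t - s) R(s) ds
# by trapezoidal quadrature for a(t) = exp(-t), whose exact resolvent
# (under this sign convention) is R(t) = exp(-2t).

def resolvent(a, h, n_steps):
    R = [a(0.0)]
    for n in range(1, n_steps + 1):
        tn = n * h
        lag = 0.5 * a(tn) * R[0] + sum(
            a(tn - j * h) * R[j] for j in range(1, n))
        # solve R_n = a(t_n) - h*lag - (h/2) a(0) R_n  for R_n
        R.append((a(tn) - h * lag) / (1.0 + 0.5 * h * a(0.0)))
    return R

h = 0.05
R = resolvent(lambda t: math.exp(-t), h, 200)
err = max(abs(R[n] - math.exp(-2 * n * h)) for n in range(201))
print(err)  # small: the discrete resolvent tracks exp(-2t) and vanishes
```

The discrete resolvent decays like the analytical one, which is the property exploited in the comparison arguments below.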
This equation can be written in terms of the unperturbed solution as (see [13]):

Starting from assumption (17) on the nonlinear term G and from the relation (25), we want to investigate the asymptotic behaviour of the numerical solution to (24) when it is known that $\lim_{n\to\infty} x_n = 0$.

Theorem 6. Consider Equation (24), and assume that (17) holds for the function G and that:
Proof. By assumption 1., the solution x(t) of Equation (22) with a completely monotone kernel satisfies $\lim_{t\to\infty} x(t) = 0$. This also holds true for its numerical approximation (see, for example, [6]). Considering Equation (25), it follows that

Since R(t) is completely monotone and p(t) is bounded, the solution z(t) of Equation (26) satisfies $\lim_{t\to\infty} z(t) = 0$. By the comparison theorem (see, for example, [17]), y(t) also tends to zero. Considering the numerical solution $z_n$ of (26), we want to show, by means of Theorem 5, that $\lim_{n\to\infty} z_n = 0$; then $y_n$ will also vanish. This is true because all the assumptions of Theorem 5 are satisfied. Indeed: (i) $|K(t, s)| = R(t - s)p(s)$; thus, $\frac{\partial}{\partial t}|K(t, s)| = R'(t - s)p(s) < 0$ and $\lim_{t\to\infty} K(t, s) = \lim_{t\to\infty} R(t - s)p(s) = 0$; as a matter of fact, since R is the resolvent of a completely monotone vanishing kernel, it is, in turn, completely monotone and vanishing.

Numerical Examples
In this section, we report some numerical experiments that confirm the theoretical results illustrated in Section 4. For our experiments, we choose illustrative test equations, and we use the (ρ, σ) method (6) with trapezoidal weights.
In our first example, we refer to Equation (5) with the kernel k given by (27) and the forcing term f(t) chosen so that the solution is $y(t) = e^{-t}$. Since f(t) tends to zero as t goes to infinity, all the assumptions of Theorem 3 are satisfied (for example, with $\bar{t} > 3.5$), and thus, both the numerical solution and the continuous one vanish. This is also clear in Figure 1.

Our second example is Equation (16) with the kernel given by (28) and $f(t) = e^{-t^2}$. In Figure 2, we draw the numerical solution obtained with step size h = 0.1, which clearly vanishes at infinity, according to Theorem 4, since all the assumptions are fulfilled with $p(t) = \frac{2}{1+t^2}$ and $\bar{t} > \frac{1}{2}$.

Our third example consists of Equation (5) with the kernel given by (29) and f(t) such that the solution is $y(t) = \frac{1}{t+1}$. According to Theorem 5, with $\varphi(t) = (1 + t)^{-2}$, since f(t) tends to zero, the numerical solution vanishes regardless of the step size h, thus replicating the asymptotic behaviour of the continuous one. This behaviour is shown in Figure 3.

In all our experiments, we used mesh sizes that ensure reasonable accuracy in the numerical solution at finite times. Integration with larger discretisation steps naturally introduces greater errors on finite time intervals, but the numerical solution maintains the expected behaviour at infinity, which confirms the asymptotic-preserving character of the numerical schemes without restrictions on h. This can be observed, for example, in Figure 4, again referring to example (29) with h = 1.
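The robustness under coarse steps can be reproduced on a manufactured analogue of the first example (the paper's kernel (27) is not reproduced here; the kernel and forcing below are hypothetical choices with exact solution $y(t) = e^{-t}$): the trapezoidal direct quadrature is run with the deliberately large step h = 1, and the numerical solution still vanishes at infinity:

```python
import math

# Trapezoidal direct quadrature for (5) with the hypothetical data
# k(t, s) = 0.5*exp(-(t-s)), f(t) = (1 - t/2)*exp(-t), so that the
# exact solution is y(t) = exp(-t). Step size h = 1 is deliberately coarse.

def solve(h, n_steps):
    f = lambda t: (1.0 - 0.5 * t) * math.exp(-t)
    k = lambda t, s: 0.5 * math.exp(-(t - s))
    y = [f(0.0)]
    for n in range(1, n_steps + 1):
        tn = n * h
        lag = 0.5 * k(tn, 0.0) * y[0] + sum(
            k(tn, j * h) * y[j] for j in range(1, n))
        y.append((f(tn) + h * lag) / (1.0 - 0.5 * h * k(tn, tn)))
    return y

coarse = solve(1.0, 40)   # h = 1: less accurate at finite times ...
print(abs(coarse[40]))    # ... but still vanishing at infinity
```

Larger errors appear on finite intervals, but the vanishing asymptotic behaviour is preserved, with no restriction on h.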

Conclusions
Starting from an idea developed in [3], here, we have introduced a technique for the analysis of the vanishing behaviour of the numerical solution to VIEs. This new approach, which is based on suitable splittings of the kernel function, allows one to preserve the character of the analytical solution even in the weighted sums that appear in the method, thus leading to unconditional stability results in many applications of interest.
Author Contributions: Both authors contributed equally to this work. Both authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the INdAM-GNCS 2020 project "Metodi numerici per problemi con operatori non locali".