Comparison of Symbolic Computations for Solving Linear Delay Differential Equations Using the Laplace Transform Method

Abstract: In this paper, we focus on investigating the performance of the mathematical software program Maple and the programming language MATLAB when using these respective platforms to compute the method of steps (MoS) and the Laplace transform (LT) solutions for neutral and retarded linear delay differential equations (DDEs). We computed the analytical solutions obtained by the Laplace transform method and the method of steps. The accuracy of the Laplace method solutions was assessed by comparing them with those obtained by the method of steps. The Laplace transform method requires, among other mathematical tools, the use of the Cauchy residue theorem and the computation of an infinite series. Symbolic computation facilitates the whole process, providing solutions that would be unmanageable by hand. The results obtained here emphasize the fact that symbolic computation is a powerful tool for computing analytical solutions for linear delay differential equations. From a computational viewpoint, we found that the computation time depends on the complexity of the history function, the number of terms used in the LT solution, the number of intervals used in the MoS solution, and the parameters of the DDE. Finally, we found that, for linear non-neutral DDEs, MATLAB symbolic computations were faster than Maple's. However, for linear neutral DDEs, which are often more complex to solve, Maple was faster. Regarding the accuracy of the LT solutions, Maple was, in a few cases, slightly better than MATLAB, but both were highly reliable.


Introduction
Often, engineers and scientists need to choose software specific to their application or modeling purpose. Software platforms differ in features, speed and computational complexity, tolerances, architecture, libraries, ease of use, etc., all of which can affect the accuracy of a model. Here, we investigated the performance of the mathematical software program Maple and the programming language MATLAB when applying the method of steps (MoS) and the Laplace transform (LT) methods to solve linear delay differential equations (DDEs). To make the comparison fair, we used the symbolic computation capabilities offered by these two programs, which allow us to derive the solutions of these DDEs the way we would carry them out by hand, with variables, symbols, functions, and other mathematical formulas. An advantage of using symbolic computation is that it eliminates roundoff errors in intermediate calculations, since parameter variables are carried throughout the calculations using infinite-precision arithmetic [1][2][3]. However, in some cases, using symbolic computations can lead to slower execution times depending on the program used and the process [4][5][6]. Symbolic computation has been used and proposed for solving many different problems [7][8][9][10][11].
In this paper, we focus on comparing the symbolic mathematical computations of Maple and MATLAB used to obtain analytical solutions of linear neutral and non-neutral DDEs via the MoS, briefly described in Section 2.1, and the LT method, described in Section 3. First, we explored two examples of non-neutral DDEs, and then we followed with several examples of neutral DDEs. For each example, we compared: (1) the solutions of the two methods from each software program, (2) the errors associated with each method and software, and (3) the computation times. DDEs have been used in many different fields, including astronomy and epidemiology [17,41–43]. For some applications, such as control or communication systems, the computation time may be a critical factor [44][45][46][47]. All computations were performed on a computer system with an Intel(R) Core(TM) i9-10900K CPU @ 3.70 GHz processor and 64 GB RAM.
The organization of this paper is as follows. In Section 2, preliminaries for DDEs, NDDEs, and the MoS for solving linear DDEs are presented. In Section 3, we briefly present the main aspects of the LT method with the relevant equations and formulas. In Section 4, we compare the symbolic computations to obtain the solution of linear DDEs using the LT method. We present several examples related to DDEs and their corresponding numerical solutions. In Section 5, conclusions regarding the symbolic computations required to obtain the solution of DDEs using the LT method are provided.

Preliminaries
In this section, we present some preliminary definitions that are needed to solve the different linear DDEs presented in this work. Some preliminaries include background concepts, the Lambert function, the residue theorem, and the MoS. Familiarity with the basic concepts of complex analysis is assumed.

Method of Steps (MoS)
The traditional method for solving DDEs is an elementary method called the method of steps [17,34,35,48–50]. It is well known for being a tedious process when worked out by hand, but the complexity can be drastically reduced with symbolic toolboxes in computer programs [15]. Let us consider an NDDE with fixed delay τ:

y′(t) = f(t, y(t), y(t − τ), y′(t − τ)), t ≥ t_0, with y(t) = φ_0(t) for t ∈ [−τ, t_0], (1)

where t_0 is the initial time value, τ is a constant (discrete) delay, and y_0 is some constant. For DDE initial value problems (IVPs), we require a history function, φ_0(t) : [−τ, t_0] → R, and an initial value, y(t_0) = y_0, to solve the problem. This condition implies that DDEs are in fact infinite-dimensional problems, since we define an infinite set of initial conditions on t ∈ [−τ, t_0]. Using integration properties on the interval [t_0, t_0 + τ], the solution is then given by y_1(t), which is a solution to the following problem:

y′_1(t) = f(t, y_1(t), φ_0(t − τ), φ′_0(t − τ)), t ∈ [t_0, t_0 + τ], y_1(t_0) = y_0.

Repeating this process for successive intervals and using the solution over the past interval, we find the following general derivative formula for the nth partition of the IVP (n ∈ N):

y′_n(t) = f(t, y_n(t), y_{n−1}(t − τ), y′_{n−1}(t − τ)), t ∈ [t_0 + (n − 1)τ, t_0 + nτ],

with y_n(t_0 + (n − 1)τ) = y_{n−1}(t_0 + (n − 1)τ). The solutions {y_n(t)}^∞_{n=1} form a piecewise solution of the original NDDE (1) on the interval t ∈ [t_0, t_0 + nτ).
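For illustration, the stepping procedure can be sketched with symbolic computation. The sketch below uses Python's SymPy rather than the Maple and MATLAB code used in the paper, and assumes the simple retarded test equation y′(t) = −y(t − 1) with history φ_0(t) = 1, a hypothetical choice rather than one of the paper's examples:

```python
import sympy as sp

t, s = sp.symbols('t s')
tau = 1
phi0 = sp.Integer(1)  # history phi_0(t) = 1 on [-tau, 0]

# Method of steps for y'(t) = -y(t - tau):
# on each interval [n*tau, (n+1)*tau] the lagged term is already known,
# so the DDE reduces to an ODE solved by direct integration.
pieces = [phi0]
for n in range(3):
    t_n = n * tau
    y_prev = pieces[-1]              # solution on the previous interval
    y_start = y_prev.subs(t, t_n)    # continuity at the interval boundary
    y_next = y_start + sp.integrate(-y_prev.subs(t, s - tau), (s, t_n, t))
    pieces.append(sp.expand(y_next))

print(pieces[1])  # 1 - t                on [0, 1]
print(pieces[2])  # t**2/2 - 2*t + 3/2   on [1, 2]
```

Each pass reduces the DDE to an ODE on one interval, exactly as in the derivation above, and the resulting pieces form the piecewise solution; the symbolic expressions grow in complexity with each step, which is the main practical limitation of the method.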
Even though the MoS provides an analytical solution for DDEs, it has some drawbacks, including time consumption and the complexity of the integrals to be solved over each interval. This method also requires full knowledge of the history before the approach can be used. Since it is a successive process, it can be quite difficult to observe the behavior of the solution or to perform stability analysis of y(t) as t → ∞.

Lambert W Function
From ODEs, we know that the roots of the characteristic equation provide information about the solution of a linear ODE or system of linear ODEs. Oftentimes, this equation is a simple polynomial that is easy to solve with algebraic operations or the quadratic formula. A linear DDE or system of linear DDEs can be characterized in a similar manner by examining its characteristic equation. Take, for example, a linear DDE with discrete time delay τ and initial value y_0. The characteristic polynomial of such a DDE takes the form of a quasi-polynomial due to the introduction of the exponential, i.e., P(λ) = λe^{τλ} − cy_0. By setting P(λ) = 0, there could be an infinite number of poles. Through simplifying, we obtain λe^{τλ} = cy_0, which takes the form of a Lambert W function with the equation form z = W(z)e^{W(z)}, where z is some complex number and W(z) is the multi-valued function solution of the equation. Lambert functions are used when the unknown of an equation appears as both a base and an exponent. Equations such as λe^{τλ} = cy_0 cannot be solved with algebraic, logarithmic, or exponential properties. For certain values of z, this equation can have an infinite number of solutions, called branches of W, denoted by W_k(z) for k ∈ Z. One of the branches, called the principal branch, is analytic at zero and is denoted by W_0 (k = 0). For real numbers, the branches W_0 and W_{−1} can be used to solve the equation ye^y = x with real numbers x and y if x ≥ −1/e. The solution, y, can be found using the following conditions:

y = W_0(x) if y ≥ −1, and y = W_{−1}(x) if y ≤ −1.

We note that the second condition implies that, on the interval −1/e ≤ x < 0, the equation ye^y = x has two real solutions.
In terms of our given problem, the poles of the quasi-polynomial are given by λ_k = (1/τ)W_k(cy_0τ). The analytical solution to the DDE can then be expressed in terms of an infinite number of branches, given as

y(t) = Σ^{∞}_{k=−∞} C_k e^{λ_k t},

where the C_k's are the coefficients determined from the history function, φ_0(t), and the initial value, y_0. One can analyze how changes in the parameters c, y_0, and τ affect the real parts of the eigenvalues of the system and, in turn, how that impacts the stability of the solution. Further details and applications of the Lambert function can be found in the literature, such as [51,52].
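These branch properties are easy to check numerically. The following sketch uses SciPy's `lambertw` (an assumption of this illustration; the paper itself uses Maple and MATLAB) with the hypothetical values cy_0τ = −0.2 and τ = 1:

```python
import numpy as np
from scipy.special import lambertw

# Two real branches on [-1/e, 0): y*e^y = x has two real solutions.
x = -0.2
y_0 = lambertw(x, 0).real    # principal branch W_0, gives y >= -1
y_m1 = lambertw(x, -1).real  # branch W_{-1},        gives y <= -1
for y in (y_0, y_m1):
    assert abs(y * np.exp(y) - x) < 1e-10

# Poles lambda_k = (1/tau)*W_k(c*y0*tau) of P(lambda) = lambda*e^{tau*lambda} - c*y0,
# with the hypothetical values c*y0 = -0.2 and tau = 1.
tau, cy0 = 1.0, -0.2
for k in range(-3, 4):
    lam = lambertw(cy0 * tau, k) / tau
    assert abs(lam * np.exp(tau * lam) - cy0) < 1e-10
```

Every branch index k yields a distinct (generally complex) pole, which is the numerical counterpart of the infinite pole family described above.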

Cauchy's Residue Theorem
The LT method for solving DDEs requires the use of Cauchy's residue theorem, a powerful tool from complex analysis for evaluating certain real integrals, complex contour integrals, and infinite series [53,54]. Here, it is used to find the solution of a DDE when converting from the s-domain back to the t-domain.
Suppose f(z) is an analytic function in the complex region R defined by 0 < |z − z_0| < d, i.e., a punctured neighborhood of a point z_0 ∈ C. Then, z_0 is an isolated singular point of f(z). The function f(z) can be represented by the Laurent series expansion [54]

f(z) = Σ^{∞}_{n=−∞} a_n (z − z_0)^n, with a_n = (1/(2πi)) ∮_C f(z)/(z − z_0)^{n+1} dz,

where the a_n are the coefficients and C is a simple closed contour in R. The principal part of the series is given by the negative-power portion. When the principal part terminates at n = −M, f(z) can be written as [53,54]

f(z) = ψ(z)/(z − z_0)^M,

where M ∈ N is the order of the pole, ψ(z) is an analytic function in R, and ψ(z_0) ≠ 0.

Residues of Poles:
If M = 1, then f(z) has a simple pole at z_0. At a simple pole z_0 ∈ C, the residue of f(z) is given by [54]

Res(f; z_0) = lim_{z→z_0} (z − z_0) f(z).

If the limit does not exist, then there is an essential singularity at z_0. If the limit is zero, then f(z) is analytic at z_0, or z_0 is a removable singularity. If the limit is infinite, then the order of the pole is greater than M = 1 [53,54].
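As a quick sanity check of the simple-pole limit formula, consider the hypothetical function f(z) = 1/(z(z + 2)) in SymPy:

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (z * (z + 2))  # simple poles at z = 0 and z = -2

# Res(f; 0) = lim_{z->0} z*f(z) = 1/2, matching SymPy's built-in residue
r_limit = sp.limit(z * f, z, 0)
r_builtin = sp.residue(f, z, 0)
assert r_limit == r_builtin == sp.Rational(1, 2)
```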
Now, suppose f(z) can be expressed as a rational function, f(z) = N(z)/D(z), where N(z) and D(z) are analytic functions in R and D(z) has a zero at z_0. Let D(z) = (z − z_0)^M D̃(z), where D̃(z) is analytic in R and D̃(z_0) is nonzero. Using the Taylor series expansions of N(z) and D̃(z), the residue at a simple pole (M = 1) is given as [53,54]

Res(f; z_0) = N(z_0)/D′(z_0). (2)

If M ≥ 2, then f(z) has a pole of order M at z_0. From Equation (2), the residue of f(z) at z_0 is given by [53,54]

Res(f; z_0) = (1/(M − 1)!) lim_{z→z_0} d^{M−1}/dz^{M−1} [(z − z_0)^M f(z)].

Residue Theorem: Suppose C is a simple closed contour and let f(z) be an analytic function inside and on C, except at a finite number K of isolated singularities z_1, …, z_K inside C. Then,

∮_C f(z) dz = 2πi Σ^{K}_{n=1} Res(f; z_n).

The use of Cauchy's residue theorem in the LT method comes from the definition of the inverse Laplace transform (ILT):

y(t) = (1/(2πi)) ∫^{c+i∞}_{c−i∞} F(s) e^{st} ds,

where the integration is over a vertical path in the complex plane defined by c = Re(s). From complex analysis, the function e^{st} is entire and, thus, we require the vertical path to be to the right of all of the poles of F(s). In other words, the real parts of all of the poles need to lie to the left of c. Let us denote this vertical path γ, shown in Figure 1. To use the residue theorem to evaluate the above integral, we construct a contour path, C_R, of radius R that, together with γ, encloses all of the poles, traversed in the CCW direction as shown in Figure 1. From this contour and Cauchy's residue theorem, we obtain

∫_γ F(s)e^{st} ds + ∫_{C_R} F(s)e^{st} ds = 2πi Σ_n Res(F(s)e^{st}; s_n),

which implies that

y(t) = Σ_n Res(F(s)e^{st}; s_n) − (1/(2πi)) ∫_{C_R} F(s)e^{st} ds.

Let I_{C_R} = ∫_{C_R} F(s)e^{st} ds. We need to show that lim_{R→∞} I_{C_R} = 0. Achieving this will give

y(t) = Σ_n Res(F(s)e^{st}; s_n).

Let F(s) = N(s)/D(s). For t > 0, we close the contour in the left half-plane, where s = c + Re^{iθ}, θ ∈ [π/2, 3π/2]. Then, ds = iRe^{iθ} dθ, and we obtain

I_{C_R} = ∫^{3π/2}_{π/2} F(s) e^{st} i(s − c) dθ.

We find that |e^{st}| = |e^{ct} e^{Rt(cos(θ)+i sin(θ))}| = e^{ct} e^{Rt cos(θ)} and |i(s − c)| = |i| · |s − c| ≤ |s| + |c| < 2|s| for sufficiently large |s|. Then, we have

|I_{C_R}| ≤ 2e^{ct} ∫^{3π/2}_{π/2} |s F(s)| e^{Rt cos(θ)} dθ.

In the integrand, we note that cos(θ) is negative for θ ∈ (π/2, 3π/2). Then, as R → ∞, we observe that |I_{C_R}| → 0, which implies that lim_{R→∞} I_{C_R} = 0. Hence, we have shown that the ILT of F(s) can be represented using residues and is given by

y(t) = Σ_n Res(F(s)e^{st}; s_n).
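The residue representation of the ILT can be verified on a simple case. The sketch below, in SymPy, uses the hypothetical transform F(s) = 1/((s + 2)(s + 3)), whose poles at s = −2 and s = −3 are simple:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1 / ((s + 2) * (s + 3))  # hypothetical transform with simple poles at -2, -3

# Residue sum: y(t) = sum over poles of Res(F(s)*e^{s*t}; s_n),
# each residue computed with the simple-pole limit formula.
y_res = sum(sp.limit((s - p) * F * sp.exp(s * t), s, p) for p in (-2, -3))

# Compare with SymPy's built-in inverse transform: e^{-2t} - e^{-3t}
y_ilt = sp.inverse_laplace_transform(F, s, t)
assert sp.simplify(y_res - y_ilt) == 0
```

For a quasi-polynomial D(s), the same residue sum runs over infinitely many poles, which is why the LT solutions in this paper are infinite series.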

Laplace Transform (LT) Method
We denote the LT of the solution y(t) as L{y(t)} = Y(s). Then, for a given delay term in the state variable, the LT is given by (taking t_0 = 0)

L{y(t − τ)} = e^{−sτ} Y(s) + e^{−sτ} ∫^{0}_{−τ} e^{−st} φ_0(t) dt. (9)

When the delay is present in the derivative of the state variable, the LT is given by (taking t_0 = 0)

L{y′(t − τ)} = e^{−sτ} [sY(s) − y(0)] + e^{−sτ} ∫^{0}_{−τ} e^{−st} φ′_0(t) dt. (10)

The LT method uses Equations (9) and (10) as part of the process to obtain the solutions of linear NDDEs. For further details, interested readers are referred to [14,15].
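The shifted-state identity can be verified numerically for a particular case. The sketch below (Python/SciPy, not the paper's code) assumes the hypothetical choice y(t) = e^{−t} for t ≥ 0 with matching history φ_0(t) = e^{−t} on [−τ, 0]:

```python
import numpy as np
from scipy.integrate import quad

# Check (t0 = 0):
#   L{y(t - tau)}(s) = e^{-s*tau}*Y(s) + e^{-s*tau} * int_{-tau}^{0} e^{-s*t} phi0(t) dt
tau, s = 1.0, 2.0
y = lambda t: np.exp(-t)     # hypothetical solution for t >= 0
phi0 = lambda t: np.exp(-t)  # matching history on [-tau, 0]

# Left-hand side: direct transform of the shifted function
lhs, _ = quad(lambda t: (phi0(t - tau) if t < tau else y(t - tau)) * np.exp(-s * t), 0, 50)

# Right-hand side: transform of y plus the history integral
Y = 1.0 / (s + 1.0)  # L{e^{-t}} = 1/(s+1)
hist, _ = quad(lambda t: np.exp(-s * t) * phi0(t), -tau, 0)
rhs = np.exp(-s * tau) * (Y + hist)

assert abs(lhs - rhs) < 1e-6
```

Both sides evaluate to e/(s + 1) = e/3 for this choice, confirming that the history integral is exactly the extra contribution introduced by the delay.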

Theory for Linear Non-Neutral DDE Case
First, we consider a simple linear non-neutral DDE, where the solution is derived using the ILT and the residue formula as described in Section 2.3:

y′(t) = a y(t) + b y(t − τ) + g(t), t ≥ 0, with y(t) = φ_0(t) for t ∈ [−τ, 0]. (11)

Considering the homogeneous case and applying the LT to Equation (11) (using Equation (9)), we obtain the LT transform, Y(s) = N(s)/D(s), of y(t), with D(s) = s − a − be^{−sτ}. Setting D(s) = 0, we find that the poles of Y(s) occur at s_k = a + (1/τ)W_k(τbe^{−aτ}) for k ∈ Z. Recall from the definition of the Lambert function that, if the argument τbe^{−aτ} = −1/e, then we encounter a case where the real pole s = a − 1/τ is of order M = 2 (since s_0 = s_{−1} by definition of the W_0 and W_{−1} Lambert branches). If τbe^{−aτ} ≠ −1/e, then the poles s_k are of order M = 1. By applying the ILT, for M = 1, we obtain the solution

y(t) = Σ^{∞}_{k=−∞} c_k e^{s_k t},

where the c_k's are the computed residues. Further details can be found in [14].
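The pole formula can be checked directly with SciPy's `lambertw`, using the hypothetical parameter values a = −1, b = 0.5, τ = 1 (not taken from the paper's examples):

```python
import numpy as np
from scipy.special import lambertw

# Hypothetical parameters for y'(t) = a*y(t) + b*y(t - tau):
a, b, tau = -1.0, 0.5, 1.0
D = lambda s: s - a - b * np.exp(-s * tau)  # characteristic function

# Every Lambert W branch gives a root of D(s):
for k in range(-4, 5):
    s_k = a + lambertw(tau * b * np.exp(-a * tau), k) / tau
    assert abs(D(s_k)) < 1e-9
```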
If the linear non-neutral DDE is non-homogeneous, we obtain the LT

Y(s) = (N(s) + G(s))/D(s),

where G(s) is the LT of the function g(t). We denote the additional poles introduced by G(s) as s_v for v = 1, 2, …, V (assuming that s_k ≠ s_v, ∀k, v, and that all poles are of order M = 1), with corresponding residues c_v. Applying the ILT and the residue theorem, we obtain the solution

y(t) = Σ^{∞}_{k=−∞} c_k e^{s_k t} + Σ^{V}_{v=1} c_v e^{s_v t}.

For more details about the LT, interested readers are referred to [14]. In Section 4.1, we will explore the solutions of both cases (homogeneous and non-homogeneous).

Theory for NDDE Case
Next, we consider a simple NDDE, where the solution is derived using the residue formula as described in Section 2.3:

y′(t) = a y(t) + b y(t − η_1) + c y′(t − η_2) + g(t), t ≥ 0, with y(t) = φ_0(t) for t ∈ [−τ, 0], (16)

where τ = max(η_1, η_2) and η_1, η_2 > 0. Considering the homogeneous case and applying the LT to Equation (16) (using Equations (9) and (10)), we obtain the LT transform, Y(s) = N(s)/D(s), of y(t), with D(s) = s − a − be^{−sη_1} − cse^{−sη_2}. Setting D(s) = 0, we can find the poles s_k of Y(s). Under some general conditions, the poles s_k are of order M = 1. Then, applying the ILT, we obtain the solution

y(t) = Σ^{∞}_{k=−∞} c_k e^{s_k t},

where the c_k's are the computed residues. We can analyze different cases depending on g(t) and on the signs and magnitudes of a, b, c, and η_i for i = 1, 2. For further details, interested readers are referred to [14].
If the NDDE is non-homogeneous, similar properties hold as in the non-neutral case. In Section 4.2, we explore the solution behavior of different delay cases. Further details and analysis can be found in [14].

Comparison of MoS and LT Methods Using Symbolic Computation
Here, we present and discuss different examples of non-neutral DDEs and NDDEs introduced in [14] and apply the MoS and LT methodologies to each example to find the solution of such equations using Maple and MATLAB. First, the solutions obtained by each method are compared between programs to highlight the variances in program computations. Second, the accuracy of the LT method is evaluated against the analytical solution obtained by the MoS for each program. For this section, we introduce the following notation: m denotes the number of intervals used in the MoS solution, and n denotes the number of terms used in the truncated LT series.

Linear Non-Neutral DDE Examples
The two examples presented in this section demonstrate solutions formed from poles of order M = 1. Figures 2 and 3 and Table 1 summarize the solution results, the error comparisons between MoS and LT in each programming language, and the computation times for the corresponding example. Example 1. Let us consider the following DDE:

y′(t) = −15 y(t) + 15 y(t − τ), t ≥ 0, with y(t) = φ_0(t) for t ∈ [−τ, 0], (19)

where τ = 1 and φ_0(t) is a fifth-order polynomial history function. Taking the LT of Equation (19), we obtain the quasi-polynomial D(s) = (s + 15)e^s − 15, which has a real pole at s_0 = 0 with residue c_0 = 2.75 (from Equation (4)). The remaining poles and residues are given by s_k = −15 + W_k(15e^{15}), k ∈ Z\{0}, with the conjugate-pair property s_{−k} = s̄_k, and c_k = N(s_k)/D′(s_k). Applying the ILT and Cauchy's residue theorem, we obtain the LT solution of y(t):

y(t) = c_0 + 2 Σ^{∞}_{k=1} Re(c_k e^{s_k t}).
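The pole structure of this quasi-polynomial is easy to verify numerically; the sketch below uses SciPy's `lambertw` (an illustrative aid, not the paper's Maple/MATLAB code):

```python
import numpy as np
from scipy.special import lambertw

# Quasi-polynomial of Example 1: D(s) = (s + 15)*e^s - 15
D = lambda s: (s + 15) * np.exp(s) - 15

assert abs(D(0.0)) < 1e-12     # real pole s_0 = 0
z = 15 * np.exp(15.0)          # argument of the Lambert W function
for k in (1, 2, 3, -1, -2, -3):
    s_k = -15 + lambertw(z, k)
    assert abs(D(s_k)) < 1e-6  # remaining poles s_k
    # conjugate-pair property: s_{-k} is the complex conjugate of s_k
    assert abs(np.conj(s_k) - (-15 + lambertw(z, -k))) < 1e-9
```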
The solution of Example 1 via MoS (over m = 9 intervals) and LT (n = 5000) in each software program is shown in Figure 2a. We notice that, as t → ∞, y(t) → c_0 = 2.75 since a = −b, s_0 = 0, and Re(s_k) < 0 for each k ∈ Z\{0}. Figure 2b,c demonstrates the variances between the software packages when evaluating the solution via MoS (blue curve) or the LT (red curve). Figure 2d demonstrates the accuracy of the LT compared to the analytical MoS via Maple (blue curve) and MATLAB (red curve). Upon analyzing Figure 2b,d, we note that the MATLAB MoS solution significantly deviates from its LT solution and from the Maple MoS solution. In the MoS process, a piecewise function is produced for each interval, and it becomes harder to integrate and solve the sub-IVPs as the functions become more complicated with each step. In the LT solution, we see that, as t progresses, the variance fluctuations in the solution between programs decrease. Each plot shows that error spikes occur at every t = kτ. The reason is that, around these points, the sines of the truncated nonharmonic series do not play any significant role, while the cosine and exponential terms are essentially equal to 1. Therefore, the approximated series solution loses the ability to approximate the solution close to t = kτ. The spikes can be reduced if we include more terms in the truncated series [14]. The computation times via each method and software are shown in Table 1. Given the number of terms to compute for the LT solution, the computation time was higher for both programs compared to the MoS solutions. However, with the use of MATLAB's symbolic toolbox functions, the computation time for the LT solution was reduced by a factor of four compared to Maple for this particular example.
The solution of Example 2 via MoS (over m = 9 intervals) and LT (n = 5000) and in each software program is shown in Figure 3a. We noticed that, as t → ∞, y(t) → 0 since Re(s k ) < 0 ∀k ∈ Z. Figure 3b,c demonstrates the variances between the software when evaluating the solution via MoS (blue curve) or the LT (red curve). Figure 3d demonstrates the accuracy of the LT compared to the analytical MoS via Maple (blue curve) and MATLAB (red curve).
Compared to Example 1, Figure 3b,d demonstrates that the performance error in MATLAB's MoS solution, relative to the Maple MoS solution and the MATLAB LT solution, decreased. One notable difference between Examples 1 and 2 is the reduced complexity of the history function. In Example 1, the history function was a fifth-order polynomial, whereas, here, the history is a second-order polynomial. In both examples, the LT solution is truncated to 5000 terms. Note that truncation creates ringing in the solution near the history function endpoints. In Figure 3d, the LT error decreases exponentially for both software packages, while, in Figure 2d, we notice that the error has a maximum peak at around t = 8. In some cases, increasing the number of terms reduces the error further and improves the history function approximation, but at the cost of computational complexity and time.
The computation times via each method and software are shown in Table 1. We noticed that Examples 1 and 2 exhibit similar times since we used the same solution structure in both programs, compatible functions between the programs, the same number of terms to be computed, and the same number of intervals to integrate over. Given the number of terms to be computed for the LT solution, the computation time was higher for both programs compared to the MoS solutions. However, with the use of MATLAB's symbolic toolbox functions, the computation time for the LT solution was reduced by a factor of four compared to Maple for both examples.

Linear NDDE Examples
The NDDE examples to be discussed conform to the general form in Equation (16).
These five examples demonstrate solutions formed from poles of order M = 1. Figures 4-8 and Table 2 summarize the solution results, error comparisons between MoS and LT in each programming language, and the computation times for the corresponding example.

Taking the LT of the governing equation and applying the ILT with Cauchy's residue theorem yields the LT solution of y(t) as a residue series, y(t) = Σ_k c_k e^{s_k t}, summed over the poles s_k.
The solution of Example 3 via MoS (over m = 5 intervals) and LT (n = 2000) in each software program is shown in Figure 4a. Figure 4b,c shows that, as t → ∞, the variance between the software packages increases exponentially. Figure 4d indicates that the accuracy of the LT solution improves exponentially as t increases for both programs, with noticeable spikes at every t = kτ. Compared to the non-neutral examples, NDDE solutions have a higher complexity, and thus one software package may be more beneficial to use than another.
Applying the MoS, the analytical solution of y(t) over the interval [−τ, mτ] was obtained, where m is the number of intervals. The characteristic equation of the system is given by D(s) = s + 2 − (1/2)se^{−2s} − e^{−2s}, which has a real pole at s_0 ≈ −0.3466 with residue c_0 ≈ −0.0786. The remaining poles and residues are given by the sequence {s_k}^∞_{k=−∞} for k ∈ Z\{0} (s_{−k} = s̄_k) and c_k = N(s_k)/D′(s_k), respectively. These poles are exactly given by s_k = (ln(c) + 2kπi)/τ ≈ −0.3466 + kπi for k ∈ N (see [14]), where c = 1/2 is the neutral coefficient and τ = 2. Since s_0 < 0 and Re(s_k) < 0 ∀k, the stability analysis shows that the solution is stable. Applying the ILT and Cauchy's residue theorem, we obtained the LT solution of y(t):

y(t) = c_0 e^{s_0 t} + 2 Σ^{∞}_{k=1} Re(c_k e^{s_k t}).
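Because e^{−2s_k} = e^{−ln(1/2)} e^{−2kπi} = 2 at these points, the terms of D(s) cancel exactly, so the pole formula can be confirmed to machine precision (a Python/NumPy sketch, not the paper's code):

```python
import numpy as np

# Characteristic function of Example 4: D(s) = s + 2 - (1/2)*s*e^{-2s} - e^{-2s}
D = lambda s: s + 2 - 0.5 * s * np.exp(-2 * s) - np.exp(-2 * s)

tau = 2.0
for k in range(-5, 6):
    s_k = (np.log(0.5) + 2 * k * np.pi * 1j) / tau  # s_k ≈ -0.3466 + k*pi*i
    assert abs(D(s_k)) < 1e-12
```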
The solution of Example 4 via MoS (over m = 6 intervals) and LT (n = 500) in each software program is shown in Figure 5a. Figure 5b,c shows that the variance between each software is of order 10^{−16} and, as t → ∞, the error decreases, with peak errors at every t = k for k ∈ N. Figure 5d indicates that the accuracy of the LT solution improves as t increases for both programs, with an interesting spike located around t = πτ.
where η_2 = 2 and τ = η_2. This third NDDE example does not contain a state-dependent delay term. The equation is governed by the characteristic quasi-polynomial D(s) = s + 3/2 + (9/10)se^{−2s}, which has a real pole at s_0 ≈ −0.4602 with residue c_0 ≈ −2.2994. The other poles are given by the sequence {s_k}^∞_{k=1} (s_{−k} = s̄_k). For large k, these poles approach s_j = (ln|c| + (2j + 1)πi)/τ ≈ −0.0527 + (2j + 1)(π/2)i for j ∈ N (see [14]), where c = −9/10 is the neutral coefficient. The corresponding residues are given by c_k = N(s_k)/D′(s_k). Applying the ILT and Cauchy's residue theorem, we obtained the LT solution of y(t):

y(t) = c_0 e^{s_0 t} + 2 Σ^{∞}_{k=1} Re(c_k e^{s_k t}).
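The asymptotic character of these poles can be illustrated by refining each asymptotic guess with Newton iterations and observing that the correction shrinks as j grows (a Python/NumPy sketch with a hand-rolled Newton solver, not the paper's code):

```python
import numpy as np

# Characteristic function of Example 5 and its derivative:
D = lambda s: s + 1.5 + 0.9 * s * np.exp(-2 * s)
Dp = lambda s: 1 + 0.9 * np.exp(-2 * s) - 1.8 * s * np.exp(-2 * s)

def newton_root(s):
    # Newton iterations from an asymptotic initial guess
    for _ in range(50):
        s = s - D(s) / Dp(s)
    return s

errs = []
for j in (2, 5, 20):
    guess = (np.log(0.9) + (2 * j + 1) * np.pi * 1j) / 2.0
    root = newton_root(guess)
    assert abs(D(root)) < 1e-10       # a true pole near the asymptotic guess
    errs.append(abs(root - guess))

# the true poles approach the asymptotic formula as j grows
assert errs[0] > errs[1] > errs[2]
```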
The solution of Example 5 via MoS (over m = 10 intervals) and LT (n = 500) in each software program is shown in Figure 6a. Figure 6b,c shows that the variance between each software is of order 10^{−5} via MoS and of order 10^{−15} via LT. From Figure 6d, as t → ∞, the error decreases, with peak errors at every t = k for k ∈ N. This indicates that the accuracy of the LT solution improves as t increases for both programs, with an interesting spike located around t = πτ.
The solution of Example 6 via MoS (over m = 7 intervals) and LT (n = 500) in each software program is shown in Figure 7a. Figure 7b,c shows that the variance between each software is of order 10^{−9} via MoS and of order 10^{−16} via LT. We see deviances in the MoS solutions similar to Example 5 between the two software packages. A larger timespan would be needed to draw conclusions on the similarities of the LT solution between the two software packages. From Figure 7d, as t → ∞, the error decreases, with peak errors at every t = kτ for k ∈ N.
The solution of Example 7 via MoS (over m = 7 intervals) and LT (n = 500) in each software program is shown in Figure 8a. Figure 8b,c shows that the variance between each software is of order 10^{−9} via MoS and of order 10^{−15} via LT. We see deviances in the MoS solutions near the interval boundaries similar to Example 5. Relative to the previous NDDE examples, Example 7 has a larger error in the LT solution vs. the analytical MoS via both software packages, with an order of magnitude of 10^{−3}, as shown in Figure 8d, despite the simplicity of the history function. Nevertheless, we can observe that, as t → ∞, the error decreases.
The computation times via each method and software are shown in Table 2. We make the following notes regarding the computation time results. The computation time depends on the complexity of the history function, the number of terms used in the LT solution, the number of intervals used in the MoS solution, and the magnitude of the parameters of the NDDE. Regarding the MoS solutions, Examples 4 and 5 both have a complicated history function (one is a fourth-order polynomial and the other is periodic), and we observe that both programs took longer to compute the solutions. Regarding the LT solutions, Examples 3 and 7 took the most time to compute via both software packages, even though Example 1 had the most terms. The difference between these examples is that Example 7 consisted of an extra sequence of poles due to the different delays in the state-dependent term and the derivative-of-state-dependent term. Nevertheless, Example 3 was found to be the quickest to compute via MoS for both software packages, whereas Examples 4 and 5 were found to be the quickest to compute via LT. We also note that, in all cases, Maple's computation time was significantly lower than MATLAB's.

Conclusions
In this paper, we investigated the symbolic mathematical computational performance of the software program Maple and the programming language MATLAB when applying the method of steps (MoS) and the Laplace transform (LT) methods to solve neutral and retarded linear DDEs. Of particular interest was comparing how these programs performed. The exact, analytical MoS solution was used to aid in this comparison. However, the main reason why most of the graphs only include the first 5-10 time intervals is that the method of steps often could not compute the solution beyond these intervals. The Laplace solution is an infinite series, which requires one to compute the relevant poles along with their associated residues. We computed the analytical solutions using Maple and MATLAB in order to take advantage of their respective symbolic computation features. The accuracy of the Laplace method solutions was assessed by comparing them with those obtained by the method of steps. The Laplace method produces an analytical solution that can be evaluated at any particular value of time. The results obtained emphasize that symbolic computational software is a powerful tool, capable of computing analytical solutions for linear delay differential equations, including neutral ones. From a computational viewpoint, we found that the computation time depends on the complexity of the history function, the number of terms used in the LT solution, the number of intervals used in the MoS solution, and the DDE parameters. We found that, for linear non-neutral DDEs, MATLAB symbolic computations were faster than those from Maple. However, for linear neutral DDEs, which are often more difficult to solve, Maple was faster. Regarding accuracy, it is important to point out that we compared Maple and MATLAB via both MoS and LT. The differences between the two software packages were smaller for the LT than for the MoS.
In addition, the accuracy of the LT solutions obtained with Maple was, in a few cases, slightly better than that of the ones generated by MATLAB. Both were reliable, but Maple slightly surpasses MATLAB in symbolic computation due to its symbolic nature. Finally, we would like to mention some limitations of the LT method irrespective of the software choice. The LT method is only applicable to linear DDEs due to the definition of the Laplace transform; thus, nonlinear DDEs must be solved by other means. In addition, we implemented the LT for scalar linear DDEs with constant coefficients and a finite number of delays. However, the LT method can also be extended to solve systems of first-order linear DDEs. In this work, we applied the LT method to homogeneous and non-homogeneous DDEs. Future work in progress includes solving linear non-homogeneous DDEs with Dirac delta function inputs, with one related goal being to develop a Green's function type kernel.