Approximate Solutions for a Class of Nonlinear Fredholm and Volterra Integro-Differential Equations Using the Polynomial Least Squares Method

Abstract: We apply the polynomial least squares method to obtain approximate analytical solutions for a very general class of nonlinear Fredholm and Volterra integro-differential equations. The method is relatively simple and straightforward, but its precision for this type of equation is very high, a fact illustrated by the numerical examples presented. The comparison with previous approximations computed for the included test problems emphasizes the method's simplicity and accuracy.


Introduction
Integro-differential equations are important in both pure and applied mathematics, with multiple applications in mechanics, engineering, physics, etc. The behavior and evolution of physical systems in many fields of science and engineering, including, for example, viscoelasticity, evolutionary problems, fluid dynamics, population dynamics and many others, may be successfully modeled by using integro-differential equations of the Fredholm or Volterra type.
The beginning of the theory of integral equations can be attributed to N. H. Abel (1802-1829), who formulated an integral equation in 1823 while studying a problem of mechanics. Since then, many other great mathematicians, including T. Lalescu (who wrote the world's first treatise on the theory of integral equations, in 1911), A. Cauchy (1789-1857), J. Liouville (1809-1882), V. Volterra (1860-1940), I. Fredholm (1866-1927), D. Hilbert and E. Picard, have contributed to the field of integral and integro-differential equations. In the 20th century, the theory of integral equations developed strongly, both with regard to its applications and with regard to the actual methods of computing solutions. Among the main methods used in the study of integral and integro-differential equations, we mention fixed point methods, variational methods, iterative methods, numerical methods and approximate methods.
The class of equations studied in this paper has the following general expression:

p_0(x)·u(x) + p_1(x)·u'(x) + ... + p_n(x)·u^(n)(x) = f(x) + λ_1 · ∫_a^x k_1(x, s) · g_1(s, u(s), u'(s)) ds + λ_2 · ∫_a^b k_2(x, s) · g_2(s, u(s), u'(s)) ds (1)

and, depending on the problem, may have attached a set of boundary conditions of the following type:

Σ_{j=0}^{n-1} (α_ij · u^(j)(a) + β_ij · u^(j)(b)) = γ_i, i = 1, ..., n. (2)

Here, a, b, λ_1, λ_2 and the coefficients α_ij, β_ij, γ_i are constants, and we assume that the functions p_j (j = 0, ..., n), f, k_1, k_2, g_1, g_2 have suitable derivatives on [a, b], such that the problem consisting of Equation (1) together with the set of conditions (2) (if present) admits a solution.
We remark that this class of equations evidently includes both Fredholm- and Volterra-type equations, linear and nonlinear equations, and both integro-differential and integral equations, so it is indeed a very general class of equations.
While the qualitative properties of integro-differential equations are thoroughly studied ([1,2]), apart from a relatively small number of exceptions (mostly test problems, such as the ones included as examples here), the exact solution of a nonlinear integro-differential equation of type (1) cannot be found, so numerical solutions or approximate analytical solutions must be computed. Of these two types of solutions, the approximate analytical ones are usually more useful if any subsequent computation involving the solution must be performed.

The Polynomial Least Squares Method
We associate to the problem (1) and (2) the following operator:

D(u)(x) = p_0(x)·u(x) + p_1(x)·u'(x) + ... + p_n(x)·u^(n)(x) − f(x) − λ_1 · ∫_a^x k_1(x, s) · g_1(s, u(s), u'(s)) ds − λ_2 · ∫_a^b k_2(x, s) · g_2(s, u(s), u'(s)) ds. (3)

Let u_app denote an approximate solution of (1). If we replace the exact solution u of (1) with u_app, then the error corresponding to this replacement can be described by the so-called remainder:

R(x, u_app) = D(u_app)(x), x ∈ [a, b]. (4)

We find approximate polynomial solutions u_app of (1) and (2) on [a, b] such that u_app satisfies the following conditions:

|R(x, u_app)| < ε, for all x ∈ [a, b], (5)

together with the conditions (2) attached to the problem (if present). (6)

Definition 1. An approximate polynomial solution u_app which satisfies the relations (5) and (6) is called an ε-approximate polynomial solution of the problem (1) and (2).

Definition 2.
An approximate polynomial solution u_app satisfying the relation ∫_a^b R²(x, u_app) dx ≤ δ, together with the conditions (6), is called a weak δ-approximate polynomial solution of the problem (1) and (2).

Definition 3.
Consider the sequence of polynomials P_m(x) = a_0 + a_1 x + ... + a_m x^m, a_i ∈ ℝ, i = 0, 1, ..., m, which satisfy the conditions (2) attached to the problem (if present). The sequence of polynomials P_m(x) is called convergent to the solution of the problem (1) and (2) if lim_{m→∞} D(P_m(x)) = 0.
We can prove the following theorem regarding the convergence of the method.

Theorem 1. If T_m(x) denotes a weak ε-approximate polynomial solution of the problem (1) and (2), then the necessary condition for the problem to admit a sequence of polynomials P_m(x) convergent to its solution is as follows:

lim_{m→∞} ∫_a^b R²(x, T_m) dx = 0.

Proof. We compute a weak ε-approximate polynomial solution of the form

u_app(x) = c_0 + c_1 x + ... + c_m x^m. (7)

The constants c_0, c_1, ..., c_m are determined by performing the computations included in the following steps:
• First, we replace the approximate solution (7) in Equation (1), obtaining the following expression:

R(x, c_0, c_1, ..., c_m) = D(u_app)(x). (8)

If we could find constants c⁰_0, c⁰_1, ..., c⁰_m such that R(x, c⁰_0, c⁰_1, ..., c⁰_m) = 0 for all x ∈ [a, b], and if the corresponding expressions of (2) (if included in the problem) were also satisfied, then, by substituting c⁰_0, c⁰_1, ..., c⁰_m in (7), we would find the exact solution of (1) and (2).
• Next, we associate to (1) and (2) the following functional:

J(c_0, c_1, ..., c_m) = ∫_a^b R²(x, c_0, c_1, ..., c_m) dx, (9)

where, if the problem includes the conditions (2), some of the constants c_0, c_1, ..., c_m are computed as functions of the others by using these conditions.
• We compute the values c⁰_0, c⁰_1, ..., c⁰_m that give the minimum of the functional (9):

J(c⁰_0, c⁰_1, ..., c⁰_m) = min J(c_0, c_1, ..., c_m). (10)

Using c⁰_0, c⁰_1, ..., c⁰_m computed at the previous step, we construct the following polynomial:

T_m(x) = c⁰_0 + c⁰_1 x + ... + c⁰_m x^m. (11)

Considering the relations (8)-(11) and the way the coefficients of T_m(x) are computed, it can be deduced that the following inequality holds:

0 ≤ ∫_a^b R²(x, T_m) dx ≤ ∫_a^b R²(x, P_m) dx, for all m ∈ ℕ.

Hence,

0 ≤ lim_{m→∞} ∫_a^b R²(x, T_m) dx ≤ lim_{m→∞} ∫_a^b R²(x, P_m) dx = 0.

From the above limit, we deduce that for every ε > 0 there exists m_0 ∈ ℕ such that, for every m ∈ ℕ with m > m_0, T_m(x) is a weak ε-approximate polynomial solution of (1) and (2).

Remark 1.
We observe that if ũ_app is an ε-approximate polynomial solution of (1) and (2), then ũ_app is also a weak ε² · (b − a)-approximate polynomial solution. However, the converse is not always true. As a consequence, we deduce that the set of weak approximate solutions of (1) and (2) also contains the approximate solutions of the problem.
As a consequence of the previous remark, in order to compute ε-approximate polynomial solutions of the problem (1) and (2) by the polynomial least squares method (from now on denoted as PLSM), we first compute the weak approximate polynomial solutions ũ_app. If |R(x, ũ_app)| < ε, then ũ_app is also an ε-approximate polynomial solution of the problem.

Remark 2.
Regarding the practical implementation of the method, we wish to make the following remarks: • Regarding the choice of the degree of the polynomial approximation: in the computations, we usually start with the lowest degree (i.e., a first degree polynomial) and successively compute higher degree approximations until the error (see the next item) is considered low enough from a practical point of view for the given problem (or, in the case of a test problem, until the error is lower than the error corresponding to the solutions obtained by other methods). Of course, in the case of a test problem whose known solution is a polynomial, one may start directly with the corresponding degree, but this is just a shortcut and by no means necessary when using the method.

• If the exact solution of the problem is not known, as would be the case for a real-life problem, and, as a consequence, the error cannot be computed, then instead of the actual error, we can consider as an estimate of the error the value of the remainder R (4) corresponding to the computed approximation, as mentioned in the previous remark.

• If the problem has an (unknown) exact polynomial solution, it is easy to see when PLSM has found it, since in this case the value of the minimum of the functional is actually 0. In this situation, if we keep increasing the degree (even though there is no point in doing so), the computation yields higher degree coefficients that are actually zero.
• Regarding the choice of the optimization method used for the computation of the minimum of the functional (9): if the solution of the problem is a known polynomial (as in the case of Applications 1, 3, 5 and 6), we usually employ the critical (stationary) points method, because in this way, by using PLSM, we can easily find the exact solution. Such problems are relatively simple ones; the expression of the functional (9) is also not very complicated; and indeed, the solutions can usually be computed even by hand (as in the case of Application 1). In general, no concerns of conditioning or stability arise. However, for a more complicated (real-life) problem, when the solution is not known (or even if the exact solution is known but is not polynomial), we would not use the critical points method. In fact, we would not even use an iterative-type method, but rather a heuristic algorithm, such as differential evolution or simulated annealing. In our experience with this type of problem, even a simple Nelder-Mead-type algorithm works well (as was the case for Applications 2, 4 and 7). In fact, Application 4 includes a small comparison of several optimization methods.
• Finally, we remark that when the solution of the problem is not analytic, the convergence of the PLSM solutions will be slower; in such cases, another basis of functions (wavelets or piecewise polynomials) should be used to control the approximation.
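The degree escalation strategy described above can be sketched in a few lines of Python. The test problem below is a hypothetical stand-in chosen for illustration (it is not one of the paper's applications): u'(x) = 1 + ∫_0^x u(s) ds with u(0) = 0 on [0, 1], whose exact solution is u(x) = sinh(x). Since this stand-in is linear, minimizing the discretised functional J reduces to a linear least squares problem:

```python
import numpy as np

# Hypothetical linear test problem (not from the paper):
#     u'(x) = 1 + \int_0^x u(s) ds,   u(0) = 0,   x in [0, 1],
# with exact solution u(x) = sinh(x). Trial solution
# u_app(x) = c_1 x + ... + c_m x^m (u(0) = 0 is built in), so the
# remainder R is linear in the c_k and J is minimised by least squares.
xs = np.linspace(0.0, 1.0, 201)    # quadrature nodes for J
tol = 1e-12                        # stop once the remainder estimate is small

for m in range(1, 9):
    # column k: contribution of c_k to R(x) = u' - 1 - \int_0^x u ds
    A = np.stack([k * xs**(k - 1) - xs**(k + 1) / (k + 1)
                  for k in range(1, m + 1)], axis=1)
    c, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    J = np.mean((A @ c - 1.0)**2)  # discretised \int_0^1 R^2 dx
    print(f"degree {m}: J = {J:.3e}")
    if J < tol:                    # remainder based stopping criterion
        break

err = np.max(np.abs(np.polyval(np.concatenate([c[::-1], [0.0]]), xs) - np.sinh(xs)))
print(f"max abs error vs sinh(x): {err:.2e}")
```

The loop stops at the first degree for which the remainder-based estimate J falls below the prescribed tolerance; for genuinely nonlinear problems, the least squares step would be replaced by one of the optimizers discussed above.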

Numerical Examples
In this section, we apply PLSM to several well-known test problems, and we compare the solutions obtained by using PLSM with solutions previously computed by means of other methods.
The computations are performed using the Wolfram Mathematica software.

Application 1: First Order Nonlinear Fredholm Integro-Differential Equation
The first application is an initial value problem including a first order nonlinear Fredholm integro-differential equation ([7]):
The problem (12) has the exact solution u_e(x) = x. Since the solution is a polynomial, we expect PLSM to be able to compute this exact solution.
The corresponding remainder (4) and the corresponding functional (10) are easily computed. It is easy to show that the stationary point is c_1 = 1 and that it is indeed a minimum of the functional.
We conclude that, as expected, PLSM can find the exact solution of (12).
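The stationary points computation for a problem of this kind can be reproduced symbolically. The equation below is a hypothetical first order nonlinear Fredholm problem constructed for illustration (the paper's Equation (12) is not reproduced here), built so that its exact solution is u(x) = x: u'(x) = 1 − x/3 + ∫_0^1 x·u(s)² ds, with u(0) = 0.

```python
import sympy as sp

# Hypothetical first order nonlinear Fredholm problem (not the paper's
# Eq. (12)), constructed to have the exact solution u(x) = x:
#     u'(x) = 1 - x/3 + \int_0^1 x * u(s)^2 ds,   u(0) = 0,   x in [0, 1].
x, s, c1 = sp.symbols("x s c1")

u = c1 * x    # first degree trial solution; u(0) = 0 is built in
R = sp.diff(u, x) - (1 - x/3 + sp.integrate(x * u.subs(x, s)**2, (s, 0, 1)))

J = sp.integrate(R**2, (x, 0, 1))                 # least squares functional
stationary = [r for r in sp.solve(sp.diff(J, c1), c1) if r.is_real]
best = min(stationary, key=lambda r: J.subs(c1, r))
print(best, J.subs(c1, best))                     # c1 = 1 with J = 0: u(x) = x
```

The only real stationary point is c_1 = 1, at which the functional vanishes, so the exact solution is recovered, mirroring the computation in Application 1.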

Application 2: Second Order Fredholm Integro-Differential Equation
The second application is a problem including a linear second order Fredholm integro-differential equation ([5]):
The problem (13) has the exact solution u_e(x) = cos(x). Choosing a fifth degree polynomial approximation ũ(x), from the initial conditions it follows that c_0 = 1 and c_1 = 0. Replacing c_0 and c_1 in ũ(x), we compute the remainder (8). We then minimize the corresponding functional J(c_2, c_3, c_4, c_5) (10) (whose expression is too large to be presented here) and obtain the corresponding values of the constants. Computing c_0 and c_1 by using the initial conditions again, we obtain the fifth degree approximation. Figure 1 shows the plot of the absolute error corresponding to this approximation (computed as the absolute value of the difference between the exact solution and the approximation). In the same manner, we compute polynomial approximate solutions of several other degrees, including 7th and 9th degree approximations. Table 1 compares the absolute errors of the approximation obtained by using the operational Tau method ([5]) with those of the approximations obtained by using PLSM.
We remark that the solution presented in [5] is a piecewise constructed function consisting of seven polynomials of 5th degree.
As the comparison clearly shows, the PLSM solutions, even though each consists of a single polynomial (and thus is much easier to work with), offer much better accuracy.
Moreover, Table 1 illustrates the convergence of the method since the error decreases quickly when the degree of the polynomial approximation increases.
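A computation of this kind can be sketched numerically. The equation below is a hypothetical linear second order Fredholm problem constructed for illustration (it is not the paper's Equation (13)), built so that its exact solution is u(x) = cos(x): u''(x) + u(x) = x·∫_0^1 u(s) ds − x·sin(1), with u(0) = 1, u'(0) = 0. As in the application above, the initial conditions fix c_0 = 1 and c_1 = 0; since this stand-in is linear, minimizing the functional over c_2, ..., c_5 reduces to linear least squares:

```python
import numpy as np

# Hypothetical linear second order Fredholm problem (not the paper's
# Eq. (13)), constructed to have the exact solution u(x) = cos(x):
#     u''(x) + u(x) = x \int_0^1 u(s) ds - x sin(1),  u(0) = 1, u'(0) = 0.
# Trial solution: u(x) = 1 + c2 x^2 + ... + c5 x^5 (conditions fix c0, c1).
xs = np.linspace(0.0, 1.0, 201)

# column k: effect of c_k on R(x) = u'' + u - x \int_0^1 u ds + x sin(1)
A = np.stack([k * (k - 1) * xs**(k - 2) + xs**k - xs / (k + 1)
              for k in range(2, 6)], axis=1)
b = -(1.0 - xs * (1.0 - np.sin(1.0)))    # minus the remainder of the fixed part u = 1
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = 1.0 + sum(ck * xs**k for ck, k in zip(c, range(2, 6)))
err = np.max(np.abs(u - np.cos(xs)))
print(f"max abs error vs cos(x): {err:.2e}")
```

Here the least squares fit plays the role of minimizing the functional J(c_2, c_3, c_4, c_5), and the resulting fifth degree polynomial approximates cos(x) on [0, 1].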

Application 3: Volterra Integro-Differential Equation
We consider a Volterra integro-differential equation together with an initial condition ([21]):
The problem (14) has the exact solution u_e(x) = x². Choosing the approximate solution as a second degree polynomial, ũ(x) = c_2 x² + c_1 x + c_0, and following the steps described in the PLSM algorithm, the exact solution of the problem is computed.
We remark that the numerical approximation u_bp computed in [21] using block-pulse and hybrid functions leads to errors between 10^-4 and 10^-6.
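The PLSM steps for such a problem, including the recovery of the exact polynomial solution, can be reproduced symbolically. The equation below is a hypothetical Volterra problem built for illustration (the paper's Equation (14) is not reproduced here), constructed so that its exact solution is u(x) = x²: u'(x) + ∫_0^x u(s) ds = 2x + x³/3, with u(0) = 0.

```python
import sympy as sp

# Hypothetical Volterra integro-differential problem (not the paper's
# Eq. (14)), constructed to have the exact solution u(x) = x^2:
#     u'(x) + \int_0^x u(s) ds = 2x + x^3/3,   u(0) = 0.
x, s = sp.symbols("x s")
c1, c2 = sp.symbols("c1 c2")

u = c1*x + c2*x**2    # second degree trial solution; u(0) = 0 fixes c0 = 0
R = sp.diff(u, x) + sp.integrate(u.subs(x, s), (s, 0, x)) - (2*x + x**3/3)

J = sp.integrate(R**2, (x, 0, 1))    # least squares functional
sol = sp.solve([sp.diff(J, c1), sp.diff(J, c2)], [c1, c2], dict=True)[0]
print(sol)    # c1 = 0, c2 = 1: the exact solution u(x) = x^2 is recovered
```

Since the remainder is linear in the coefficients, the stationarity conditions form a linear system with the unique solution c_1 = 0, c_2 = 1.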

Application 4: Nonlinear Volterra Integral Equation
The next application is a nonlinear Volterra integral equation ([11]):
The problem has the exact solution u_e(x) = e^(-x). Employing the same PLSM steps used in the previous examples, we computed several approximate polynomial solutions. Regarding the computation of the minimum of the functional J (10), we wish to make the following remark: as addressed in Remark 2, the method used to compute the minimum is not specified as a part of the PLSM algorithm. In the previous applications, where the exact solutions were polynomial, it seemed natural to use the critical/stationary points method, since in this way it is easy to find the exact solution. However, when the solution is not known (or known but not a polynomial), it may be preferable to use other types of optimization algorithms, such as, for example, heuristic algorithms. We computed 7th degree polynomial approximations using the stationary points method, a differential evolution algorithm, a simulated annealing algorithm and a Nelder-Mead algorithm. In Table 2, we present the comparison of the absolute errors of these approximations. Since in the case of this problem (and, in fact, also in the case of the other test problems studied) the functional J (10) does not have a particularly complicated expression, the influence of the optimization method is not very strong (but it could be significant if the initial problem is a very difficult one).
Table 2. Absolute errors of the 7th degree approximations for problem (15) corresponding to different optimization methods.
Using PLSM with the Nelder-Mead algorithm, we compute approximations of several different degrees.
In the comparison, we use approximate solutions of the 4th, 5th, 6th and 7th degree. Equation (15) was previously solved by employing the hybrid Taylor block-pulse functions method ([20]) and by employing a combination of the Newton-Kantorovich and Haar wavelet methods ([11]). Table 3 presents the comparison of the best absolute errors corresponding to these methods and to the above approximations computed by PLSM. In Figure 2, we present the absolute errors corresponding to several of the above PLSM approximations. Again, the comparison in Table 3 clearly shows the precision of our method and, together with Figure 2, illustrates its quick convergence.
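An optimizer comparison of the kind summarized in Table 2 can be sketched with SciPy's general purpose minimizers. The equation below is a hypothetical stand-in (the paper's Equation (15) is not reproduced here) with the same exact solution u(x) = e^(-x): u(x) = 1 − ∫_0^x u(s) ds on [0, 1]; note that, unlike the paper's test problem, this stand-in is linear.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution, dual_annealing

# Hypothetical Volterra integral equation (not the paper's Eq. (15)):
#     u(x) = 1 - \int_0^x u(s) ds,   x in [0, 1],
# with exact solution u(x) = exp(-x). We minimise the same functional J
# with three different optimisers and compare the resulting errors.
DEG = 5
xs = np.linspace(0.0, 1.0, 201)

def J(c):
    p = c[::-1]                                  # numpy order: highest power first
    R = np.polyval(p, xs) - 1.0 + np.polyval(np.polyint(p), xs)
    return np.mean(R**2)                         # discretised \int_0^1 R^2 dx

bounds = [(-5.0, 5.0)] * (DEG + 1)
candidates = {
    "Nelder-Mead": minimize(J, np.zeros(DEG + 1), method="Nelder-Mead",
                            options={"adaptive": True, "maxiter": 50000,
                                     "maxfev": 50000, "fatol": 1e-16}).x,
    "differential evolution": differential_evolution(J, bounds, tol=1e-10, seed=0).x,
    "dual annealing": dual_annealing(J, bounds, seed=0).x,
}
errors = {name: np.max(np.abs(np.polyval(c[::-1], xs) - np.exp(-xs)))
          for name, c in candidates.items()}
for name, e in errors.items():
    print(f"{name:>22s}: max abs error = {e:.2e}")
```

As in the paper's comparison, all three optimizers reach comparable accuracy here because the functional is smooth and not particularly complicated; the choice of optimizer matters more for difficult real-life problems.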

Application 5: Volterra-Fredholm Integro-Differential Equation
The next example is a Volterra-Fredholm integro-differential equation together with the corresponding condition ([22]): While the errors of the approximation computed in [22] are of the order of 10^-3, by using PLSM we can find the exact solution of (16), u_e(x) = x².

Application 6: Fourth Order Nonlinear Volterra-Fredholm Integro-Differential Equation
The next example is a nonlinear Volterra-Fredholm integro-differential equation together with the corresponding conditions ([24]): Using PLSM, we can again find the exact solution, u_e(x) = x² + 1.

Application 7: Eighth Order Volterra-Fredholm Integro-Differential Equation
The last example is a Volterra-Fredholm integro-differential equation together with the corresponding conditions ([18,19]): The problem has the exact solution u_e(x) = (1 − x) · e^x. Table 4 shows the comparison between our solutions and the solutions computed in [18] by using the variational iteration method (a 15th degree polynomial) and in [19] by using a projection method based on generalized Bernstein polynomials (15 terms).

Conclusions
The paper presents the polynomial least squares method as a simple and straightforward but efficient and accurate method to calculate approximate polynomial solutions for nonlinear integro-differential equations of the Fredholm and Volterra type.
The main advantages of PLSM are as follows: • The simplicity of the method: the computations involved in PLSM are as straightforward as possible (in fact, in the case of a lower degree polynomial, the computations can easily be carried out by hand; see Application 1).

• The accuracy of the method: this is well illustrated by the applications presented, since by using PLSM we could compute more precise approximations than the ones computed in previous papers. We remark that, even though we only included a handful of (significant) test problems, we actually tested the method on most of the usual test problems for this type of equation. In all the cases in which the solution was a polynomial (which is a frequent case), we could find the exact solution, while in the cases in which the solution was not polynomial, most of the time we were able to find approximations at least as good as (if not better than) the ones computed by other methods.
• The simplicity of the approximation: since the approximations are polynomial, they have the simplest possible form and thus any subsequent computation involving the solution can be performed with ease. While it is true that for some approximation methods working with polynomial approximations the convergence may be very slow, this is not the case here (see, for example, Applications 2, 4 and 7, which are representative of the performance of the method).
We remark that the class of equations presented here is a very general one, including most of the usual Fredholm and Volterra integro-differential problems. However, we also wish to remark that, since the method itself does not really depend on a particular expression of the equation, it could easily be adapted to solve other types of difficult problems.
Author Contributions: All authors contributed equally. All authors have read and agreed to the published version of the manuscript.