A Unified Formulation of Analytical and Numerical Methods for Solving Linear Fredholm Integral Equations

This article is concerned with the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels. A unified treatment of some classes of analytical and numerical classical methods, such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM), is proposed. The problem is formulated as an abstract equation in a Banach space and a solution formula is derived. Then, several approximating schemes are discussed. In all cases, the method yields an explicit, albeit approximate, solution. Several examples are solved to illustrate the performance of the technique.


Introduction
Fredholm integral equations arise in the mathematical modeling of various processes in science and engineering, but also as reformulations of differential boundary value problems in applied mathematics. For example, in [1], a two-dimensional Stokes flow problem through a periodic channel is reformulated into an integral equation over the boundary of the domain and solved numerically; in [2], the solution of several boundary value problems and initial boundary value problems of interest to geomechanics through their reduction to integral equations is described, and many related references are cited; in [3], several different approaches to the transformation of second-order ordinary differential equations into integral equations are presented, and approximate solutions are derived via numerical quadrature methods; in [4], planar problems for Laplace's equation are reformulated as boundary integral equations and then solved numerically.
The general linear Fredholm integral equation of the second kind has the form

u(x) − λ ∫_a^b K(x, t) u(t) dt = f(x), x ∈ [a, b], (1)

where a, b ∈ R, the kernel K(x, t) is a given complex valued and continuous function on [a, b] × [a, b], the input or source function f(x) is assumed to be complex valued and continuous on [a, b], λ is a complex parameter, and u(x) is the unknown continuous function to be determined. In this paper, for simplicity, we confine our investigations to one dimension with x ∈ [a, b] ⊂ R, but the results obtained can be extended to other regions in two or more dimensions. Integral equations of the type (1) have been studied by many researchers over the last century and continue to receive much attention in recent years. For their solution, a variety of methods have been developed; see [5][6][7][8][9] and others. Well-known classical solution techniques are the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM), and the Projection Methods (PM); see, for example, the standard treatises [5][6][7], the traditional articles [10][11][12], and the recent papers [13][14][15][16][17][18][19][20][21]. The DCM has the advantage that it delivers the exact solution in closed form, but its application is limited to special cases where the kernel is separable (degenerate) and the integrals involved can be determined analytically. DKM utilize approximate finite representations for the kernel, and possibly the input function, and they are easy to manage and to perform error analysis. However, as with DCM, when the terms in the degenerate kernel are other than simple functions, then their integration has to be performed numerically. QM are very efficient, particularly when they are combined with Nyström's interpolation, although their error analysis becomes more involved. The formulation of PM is more complicated, while some of these methods can be treated in the same way as the DKM. The above methods are treated separately in the literature.
Here, we present a common formulation suitable for symbolic computations for all of these techniques. Our approach is based on the idea that in all instances, the integral equation in (1) may be cast or reduced to an equation of the form

u(x) − λ Σ_{j=1}^m g_j(x) Ψ_j(u) = f(x), x ∈ [a, b], (2)

where for m ≥ 1, g_j(x), j = 1, 2, . . . , m, are known continuous functions on [a, b] and Ψ_j, j = 1, 2, . . . , m are linear bounded functionals such as definite integrals or sums of values at some points contained in [a, b]. Both {g_j(x)} and {Ψ_j} are obtained through the separation or approximation of the kernel, the approximation of the integral, the unknown function or a combination of them. Equation (2), under certain conditions, which are associated with the existence and uniqueness of the solution of the integral equation in (1), can be solved symbolically to obtain an exact or approximate analytic solution of (1). We implement the proposed method to construct exact closed-form solutions when the kernel K(x, t) is separable, approximate analytic solutions when K(x, t) is not separable, but it can be represented as a truncated power series or an interpolation polynomial, and semi-discrete solutions when the definite integral is replaced by a finite sum by using a quadrature rule. The economy and the efficiency of the method are revealed by solving several test problems from the literature.
The paper is organized as follows. In Section 2, an abstract formulation of the problem in a Banach space is presented and a closed-form solution of (2) is derived. In Sections 3-6, we elaborate on the cases where K(x, t) is separable, K(x, t) is approximated by a power series or a polynomial, the definite integral is replaced by a quadrature formula and the unknown function is approximated by a polynomial, respectively. Several examples are solved in Section 7. Finally, some conclusions are drawn in Section 8.

Formulation in Banach Space
Let X be a complex Banach space of functions and X* the adjoint space of X, i.e., the set of all complex-valued linear bounded functionals Ψ_j : X → C, j ∈ N. Let Ψ = col(Ψ_1, Ψ_2, ..., Ψ_m) be a vector of linear bounded functionals Ψ_j, j = 1, 2, ..., m, and G = λ(g_1, g_2, ..., g_m) a vector of functions g_j ∈ X, j = 1, 2, ..., m. Then Equation (2) may be written in the form

Tu = f, (3)

where the linear operator T : X → X is defined by

Tu = u − GΨ(u), u ∈ X. (4)

In (3) and (4), the components of the vectors G and Ψ are known, f is given and u has to be determined.
To examine the solvability and find the unique solution of (3), we state and prove the theorem below, but first, we explain some formulae and notations which we will use. It is understood that Ψ(u) and Ψ(G) denote the m × 1 column vector col(Ψ_1(u), ..., Ψ_m(u)) and the m × m matrix with entries Ψ_i(λ g_j), respectively. It can be easily verified that Ψ(Gc) = Ψ(G)c, where c = col(c_1, c_2, . . . , c_m) is a constant vector. Bold lowercase and capital letters denote vectors and matrices, respectively, whose elements are numbers. By 0 and I_m, we mark the zero column vector and the identity matrix of order m, respectively.

Theorem 1. Let the linear operator T : X → X be defined by (4). Then T is injective if and only if the matrix V = I_m − Ψ(G) is non-singular, i.e.,

det V = det(I_m − Ψ(G)) ≠ 0. (5)

In this case, the unique solution of Equation (3) is given by

u = T⁻¹ f = f + G V⁻¹ Ψ(f), (6)

where T⁻¹ : X → X denotes the inverse operator of T.
Proof. Suppose V is a non-singular matrix, i.e., det V ≠ 0, and let z ∈ ker T. Then,

z − GΨ(z) = 0, i.e., z = GΨ(z). (7)

Acting by the vector Ψ on both sides of (7), we obtain Ψ(z) = Ψ(G)Ψ(z), i.e., VΨ(z) = 0, which implies that Ψ(z) = 0. Then, from (7) it follows that z = 0 and hence ker T = {0}, which means that the operator T is injective. Conversely, we will prove that if T is an injective operator then det V ≠ 0 or equivalently, if det V = 0 then T is not injective. Let det V = 0. Then, there exists a vector c = col(c_1, c_2, . . . , c_m) ≠ 0 such that Vc = 0. Let the element z = Gc and note that z ≠ 0; otherwise, Ψ(G)c = Ψ(z) = 0 and hence c = Vc + Ψ(G)c = 0, which contradicts the hypothesis. Substituting z into (7), we obtain GΨ(z) = GΨ(G)c = G(c − Vc) = Gc = z, which means that ker T ≠ {0}, and so T is not injective. Assume now that (5) is true. Applying the vector Ψ on both sides of (3), viz. Ψ(u) − Ψ(G)Ψ(u) = Ψ(f), we get VΨ(u) = Ψ(f) and hence Ψ(u) = V⁻¹Ψ(f). Substituting this into (3), we obtain u = f + GV⁻¹Ψ(f), which is Formula (6). □
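The reduction to (2) and its solution via Theorem 1 can be sketched numerically. The code below is a minimal illustration, not the paper's implementation: it assumes the solution formula u = f + G(I_m − Ψ(G))⁻¹Ψ(f), with the functionals Ψ_j realized by a simple composite trapezoidal rule, and tests it on a toy equation u(x) − ∫_0^1 x t u(t) dt = x (with λ = 1 folded into g_1), whose exact solution is u(x) = 3x/2.

```python
import numpy as np

def solve_reduced(f, g_list, psi_list):
    """Solve u - sum_j g_j * Psi_j(u) = f using Theorem 1:
    u = f + G (I_m - Psi(G))^{-1} Psi(f)."""
    m = len(g_list)
    # Matrix Psi(G): entry (i, j) = Psi_i(g_j)
    PsiG = np.array([[psi_list[i](g_list[j]) for j in range(m)]
                     for i in range(m)])
    V = np.eye(m) - PsiG
    if abs(np.linalg.det(V)) < 1e-12:
        raise ValueError("det V = 0: Theorem 1 not applicable")
    c = np.linalg.solve(V, np.array([psi(f) for psi in psi_list]))  # V^{-1} Psi(f)
    return lambda x: f(x) + sum(cj * g(x) for cj, g in zip(c, g_list))

def integral_0_1(y, n=2000):
    # Composite trapezoidal rule on [0, 1], standing in for an exact integral.
    h = 1.0 / n
    return h * (0.5 * (y(0.0) + y(1.0)) + sum(y(k * h) for k in range(1, n)))

# Toy check: u(x) - ∫_0^1 x t u(t) dt = x has exact solution u(x) = 3x/2.
u = solve_reduced(f=lambda x: x,
                  g_list=[lambda x: x],                              # g_1(x) = x
                  psi_list=[lambda y: integral_0_1(lambda t: t * y(t))])
print(abs(u(0.5) - 0.75))  # close to 0 (quadrature error only)
```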

Direct Computational Method (DCM)
In this Section, we consider the ideal case where the kernel K(x, t) in (1) is a separable function, i.e., has the special form

K(x, t) = Σ_{j=1}^m g_j(x) h_j(t), (9)

where the functions g_j(x), h_j(x), j = 1, 2, ..., m, are continuous on [a, b] and preferably, but not necessarily, form linearly independent sets. Substituting (9) into (1), we obtain

u(x) − λ Σ_{j=1}^m g_j(x) ∫_a^b h_j(t) u(t) dt = f(x). (10)

Define the row vector of functions

G = λ(g_1(x), g_2(x), ..., g_m(x)), (11)

and the column vector of linear bounded functionals

Ψ(u) = col(Ψ_1(u), ..., Ψ_m(u)), Ψ_j(u) = ∫_a^b h_j(t) u(t) dt. (12)

By means of (11) and (12) and after setting u = u(x) and f = f(x), Equation (10) may be put in the vector form

u − GΨ(u) = f. (13)

Further, by taking X = C[a, b] and defining the operator T : X → X by Tu = u − GΨ(u), Equation (13) may be cast in the operator form (3), namely

Tu = f. (14)

Provided condition (5) is fulfilled, the unique solution of (14) follows from Theorem 1, and specifically from Formula (6).
If the functions g_j(x), h_j(x), j = 1, 2, . . . , m, and f(x) are such that the evaluation of Ψ(G) and Ψ(f) can be performed by analytic means, i.e., without the use of numerical techniques, then the solution of (14) is the exact closed-form solution of the integral Equation (1).
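The DCM can be sketched on a toy separable problem of our own choosing (not one of the paper's examples): u(x) − (1/2) ∫_0^1 x eᵗ u(t) dt = 1, with g_1(x) = x, h_1(t) = eᵗ, λ = 1/2. Here the moment integrals are known analytically, ∫_0^1 t eᵗ dt = 1 and ∫_0^1 eᵗ dt = e − 1, so the closed form u(x) = 1 + (e − 1)x follows directly from Formula (6):

```python
import math

# DCM on u(x) - (1/2) ∫_0^1 x e^t u(t) dt = 1 (toy, separable, m = 1).
lam = 0.5
PsiG = lam * 1.0                 # Ψ_1(λ g_1) = λ ∫_0^1 e^t · t dt = λ · 1
V = 1.0 - PsiG                   # det V = 1/2 ≠ 0, so Theorem 1 applies
Psif = math.e - 1.0              # Ψ_1(f) = ∫_0^1 e^t · 1 dt = e − 1
c = Psif / V
u = lambda x: 1.0 + lam * c * x  # u = f + G V^{-1} Ψ(f) = 1 + (e − 1) x

# Residual check by crude quadrature: u should satisfy the equation exactly.
n, h = 1000, 1.0 / 1000
integral = h * (0.5 * (math.exp(0) * u(0) + math.exp(1) * u(1))
                + sum(math.exp(k * h) * u(k * h) for k in range(1, n)))
print(abs(u(0.3) - 1.0 - lam * 0.3 * integral))  # ≈ 0 up to quadrature error
```

Because every integral was evaluated analytically, the result is the exact closed-form solution, exactly as the paragraph above describes.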

Degenerate Kernel Methods (DKM)
If the kernel K(x, t) in (1) is not separable, then we can consider approximating it in a way that makes it separable. There are several mathematical means to accomplish this [7]. We discuss here two such common methods for general continuous kernels.

Power Series Approximation
Let us express K(x, t) as a power series in t at a point t_0, namely

K(x, t) = Σ_{k=0}^∞ a_k(x) (t − t_0)^k,

where the coefficients a_k(x), k = 0, 1, . . . , are continuous functions of x on [a, b]. We truncate this series and take the partial sum of the first n + 1 terms, viz.

K_n(x, t) = Σ_{k=0}^n a_k(x) (t − t_0)^k. (15)
We replace the kernel K(x, t) in (1) by (15) to obtain the degenerate integral equation

ũ_n(x) − λ Σ_{k=0}^n a_k(x) ∫_a^b (t − t_0)^k ũ_n(t) dt = f(x), (16)

where ũ_n(x) indicates an approximate solution of (1). By defining the vectors G = λ(a_0(x), a_1(x), ..., a_n(x)) and Ψ(ũ_n) = col(Ψ_1(ũ_n), ..., Ψ_{n+1}(ũ_n)), with Ψ_j(ũ_n) = ∫_a^b (t − t_0)^{j−1} ũ_n(t) dt and ũ_n = ũ_n(x), Equation (16) may be written in the symbolic form

T_s ũ_n = ũ_n − GΨ(ũ_n) = f,

wherein the operator T_s : X → X and X = C[a, b]. Furthermore, to facilitate the computation of Ψ(G) and Ψ(f) without resorting to numerical integration techniques, we can approximate the functions {g_j(x)} and f(x), provided they are analytic in [a, b], by power series of the same type as above.
Note that we will arrive at an equation similar to (22) if we approximate the kernel K(x, t) directly by a finite segment of a double power series, viz.

K(x, t) ≈ Σ_{l=1}^{r+1} Σ_{k=1}^{n+1} a_{(l−1)(k−1)} x^{l−1} t^{k−1},

where the coefficients a_{(l−1)(k−1)} are constants. This shows how involved computations can be handled efficiently by the present formulation.
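The power-series DKM can be sketched on a toy problem of our own choosing (not from the paper): u(x) − (1/4) ∫_0^1 e^{xt} u(t) dt = 1, where the non-separable kernel e^{xt} is replaced by the truncated series Σ_{k=0}^n xᵏtᵏ/k!, so that a_k(x) = xᵏ/k! and Ψ_k(u) = ∫_0^1 tᵏ u(t) dt:

```python
import math
import numpy as np

# Degenerate-kernel (Taylor) sketch: K(x,t) = e^{xt} ≈ Σ_{k=0}^{n} x^k t^k / k!,
# applied to the toy equation u(x) − (1/4) ∫_0^1 e^{xt} u(t) dt = 1.
a, b, lam, n = 0.0, 1.0, 0.25, 6
f = lambda x: 1.0

def integrate(y, m=2000):                  # composite trapezoid on [a, b]
    h = (b - a) / m
    return h * (0.5 * (y(a) + y(b)) + sum(y(a + k * h) for k in range(1, m)))

g = [lambda x, k=k: x ** k / math.factorial(k) for k in range(n + 1)]   # a_k(x)
psi = [lambda y, k=k: integrate(lambda t: t ** k * y(t)) for k in range(n + 1)]

PsiG = np.array([[lam * psi[i](g[j]) for j in range(n + 1)] for i in range(n + 1)])
V = np.eye(n + 1) - PsiG                   # condition (5): det V ≠ 0
c = np.linalg.solve(V, np.array([p(f) for p in psi]))
u_n = lambda x: f(x) + lam * sum(cj * gj(x) for cj, gj in zip(c, g))

# Residual in the ORIGINAL equation (true kernel): only the Taylor truncation
# and quadrature errors remain.
resid = max(abs(u_n(x) - f(x) - lam * integrate(lambda t: math.exp(x * t) * u_n(t)))
            for x in np.linspace(a, b, 21))
print(f"{resid:.1e}")
```

The residual is of the order of the series truncation error on [0, 1]², confirming that the degenerate solution approximately satisfies the original equation.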

Polynomial Approximation
An important and at the same time one of the simplest methods to construct degenerate kernels approximating given continuous ones is via interpolation. From the several kinds of interpolation and the many interpolation basis functions, we choose here polynomial interpolation and in particular the Lagrange formula.
Let the n + 1 distinct ordered points a = t_1 < t_2 < . . . < t_{n+1} = b be chosen in [a, b] and define the degenerate kernel

K_n(x, t) = Σ_{j=1}^{n+1} K(x, t_j) ℓ_j(t), (23)

where

ℓ_j(t) = Π_{i=1, i≠j}^{n+1} (t − t_i)/(t_j − t_i), j = 1, 2, . . . , n + 1,

are known as Lagrange basis functions. By putting (23) into (1) in the place of the kernel K(x, t), we obtain

ũ_n(x) − λ Σ_{j=1}^{n+1} K(x, t_j) ∫_a^b ℓ_j(t) ũ_n(t) dt = f(x). (24)

Specifying the vectors G = λ(K(x, t_1), ..., K(x, t_{n+1})) and Ψ(ũ_n) = col(Ψ_1(ũ_n), ..., Ψ_{n+1}(ũ_n)), with Ψ_j(ũ_n) = ∫_a^b ℓ_j(t) ũ_n(t) dt,

Equation (24) may be written in the symbolic form

T_p ũ_n = ũ_n − GΨ(ũ_n) = f, (25)

where the operator T_p : X → X and X = C[a, b]. In (25), the computation of Ψ(G) and Ψ(f), except in some special cases, has to be performed numerically. To avoid the numerical integration, we may approximate the functions {g_j(x)} and f(x) by polynomials of degree ≤ r, which interpolate these functions at the r + 1 points a = x_1 < x_2 < . . . < x_{r+1} = b, as in (26) and (27). Then, we can substitute (26) into (25) to obtain Equation (28). In addition, we may use (27) to set up the vector G and then, by substituting into (28), obtain Equation (30), where the operator T_pp : X → X and X = C[a, b]. Note that we will arrive at an analogous equation to (30) if bilinear interpolation is used for interpolating the kernel K(x, t) at the Cartesian mesh nodes (x_l, t_j), where l varies from 1 to r + 1 and j varies from 1 to n + 1.
All three Equations (25), (28) and (30) are of the type (3), and thus their unique solutions may be obtained through Theorem 1.
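The interpolation-based DKM can be sketched on a toy problem of our own choosing (not from the paper): u(x) = x + (1/2) ∫_0^1 cos(xt) u(t) dt, with the kernel replaced by its Lagrange interpolant in t on uniform nodes, so g_j(x) = K(x, t_j) and Ψ_j(u) = λ ∫ ℓ_j(t) u(t) dt:

```python
import math
import numpy as np

# DKM via Lagrange interpolation in t: K_n(x,t) = Σ_j K(x, t_j) ℓ_j(t).
# Toy problem: u(x) = x + 0.5 ∫_0^1 cos(x t) u(t) dt.
a, b, lam, n = 0.0, 1.0, 0.5, 6
K = lambda x, t: math.cos(x * t)
f = lambda x: x
nodes = np.linspace(a, b, n + 1)

def ell(j, t):                                   # Lagrange basis ℓ_j(t)
    p = 1.0
    for i, ti in enumerate(nodes):
        if i != j:
            p *= (t - ti) / (nodes[j] - ti)
    return p

def integrate(y, m=2000):                        # composite trapezoid on [a, b]
    h = (b - a) / m
    return h * (0.5 * (y(a) + y(b)) + sum(y(a + k * h) for k in range(1, m)))

g = [lambda x, j=j: K(x, nodes[j]) for j in range(n + 1)]
psi = [lambda y, j=j: lam * integrate(lambda t: ell(j, t) * y(t)) for j in range(n + 1)]

V = np.eye(n + 1) - np.array([[psi[i](g[j]) for j in range(n + 1)]
                              for i in range(n + 1)])
c = np.linalg.solve(V, np.array([p(f) for p in psi]))
u_n = lambda x: f(x) + sum(cj * gj(x) for cj, gj in zip(c, g))

# Residual in the original equation: only interpolation/quadrature error remains.
resid = max(abs(u_n(x) - f(x) - lam * integrate(lambda t: K(x, t) * u_n(t)))
            for x in np.linspace(a, b, 21))
print(f"{resid:.1e}")
```

Since cos(xt) is smooth, a degree-6 interpolant already makes the residual negligibly small, in line with the remarks above on smooth kernels.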

Quadrature Methods (QM)
In this Section, we explore the use of some of the numerical integration techniques to approximate the integral operator in (1) and to thus obtain a semi-discrete equation of the kind (2).
A numerical integration or numerical quadrature formula may be written in the form

∫_a^b y(x) dx = Σ_{j=1}^{n+1} w_j y(x_j) + E_n(y), (31)

where y(x) ∈ C[a, b]. The abscissas x_j, j = 1, 2, . . . , n + 1, usually equally spaced points, and the weights w_j, j = 1, 2, . . . , n + 1, are determined only by the quadrature rule that we apply and do not depend in any way upon the integrand y(x). E_n(y) denotes the quadrature error, which depends upon a, b, n and the value of a higher-order derivative of y(x) at some point between a and b [22]. Using (31), we may express the definite integral in (1) as

∫_a^b K(x, t) u(t) dt = Σ_{j=1}^{n+1} w_j K(x, t_j) u(t_j) + E_n(K, u), (32)

where {t_j} is a set of n + 1 points in [a, b], {w_j} is a specific set of positive weights not depending on x and {t_j}, and E_n(K, u) is an error function which depends upon x as well as a, b, n and the values of higher-order derivatives of K(x, t) and u(t) with respect to t at some point between a and b. Substituting (32) into (1), we find

u(x) − λ Σ_{j=1}^{n+1} w_j K(x, t_j) u(t_j) − λ E_n(K, u) = f(x).

After disregarding the error term, we obtain the semi-discrete equation

ũ_n(x) − λ Σ_{j=1}^{n+1} w_j K(x, t_j) ũ_n(t_j) = f(x), (33)

where ũ_n denotes an approximate solution of u. By specifying the vectors G = λ(w_1 K(x, t_1), ..., w_{n+1} K(x, t_{n+1})) and Ψ(ũ_n) = col(ũ_n(t_1), ..., ũ_n(t_{n+1})),

Equation (33) may be recast into the symbolic form

T_q ũ_n = ũ_n − GΨ(ũ_n) = f, (34)

where the operator T_q : X → X and X = C[a, b]. This equation is of the type (3) and its unique solution for the entire interval [a, b] is obtained by means of Theorem 1, namely

ũ_n = f + G V⁻¹ Ψ(f). (35)

We observe that in (35), the evaluation of Ψ(G) and Ψ(f) consists merely of the computation of the functions g_j(x), j = 1, 2, . . . , n + 1, and f(x) at the quadrature points.
Moreover, Formula (35) corresponds to what is known as the natural interpolation form of Nyström, which is one of the most efficient methods for computing accurate approximate values of the true solution in the entire interval [a, b] from its approximate values at a set of nodes in [a, b]; see [5] or [6] for more details.
As an alternative to this, we may use other interpolating schemes to construct an approximate solution of specific type throughout the interval [a, b]. We consider below two such cases where the functions {g j (x)} and f (x) are replaced by other, simpler functions, such as Taylor series and polynomials.
Let us approximate each of g_j(x), j = 1, 2, . . . , n + 1 and f(x) by partial sums of Taylor series as in (20) and (18), respectively. Then, by means of (21), Equation (34) is carried over to

T_qs ũ_n = ũ_n − GΨ(ũ_n) = f, (36)

where the operator T_qs : X → X and X = C[a, b]. Analogously, we may approximate each of g_j(x), j = 1, 2, . . . , n + 1 and f(x) by interpolating polynomials as in (27) and (26), respectively. Then, by using (29), Equation (34) reduces to

T_qp ũ_n = ũ_n − GΨ(ũ_n) = f, (37)

where the operator T_qp : X → X and X = C[a, b]. Both Equations (36) and (37) are of the form (3), and hence they can be solved explicitly by Theorem 1.
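The quadrature method with Nyström interpolation can be sketched on a manufactured problem of our own choosing (not one of the paper's examples): u(x) − ∫_0^1 x t u(t) dt = eˣ − x has exact solution u(x) = eˣ, since ∫_0^1 t eᵗ dt = 1. Below, the nodal values are obtained from the discrete system and then extended to all x by the natural interpolation form (35), here with the composite trapezoidal rule:

```python
import math
import numpy as np

# Nyström sketch with the composite trapezoidal rule.
a, b = 0.0, 1.0
K = lambda x, t: x * t
f = lambda x: math.exp(x) - x          # manufactured so that u(x) = e^x

def trap_nystrom(n):
    t = np.linspace(a, b, n + 1)
    h = (b - a) / n
    w = np.full(n + 1, h)
    w[0] = w[-1] = h / 2               # trapezoidal weights
    # Nodal system: ũ(t_i) − Σ_j w_j K(t_i, t_j) ũ(t_j) = f(t_i)
    A = np.eye(n + 1) - w[None, :] * np.array([[K(ti, tj) for tj in t] for ti in t])
    un = np.linalg.solve(A, np.array([f(ti) for ti in t]))
    # Natural (Nyström) interpolation extends the solution to all x in [a, b].
    return lambda x: f(x) + sum(wj * K(x, tj) * uj for wj, tj, uj in zip(w, t, un))

errs = [max(abs(trap_nystrom(n)(x) - math.exp(x)) for x in np.linspace(a, b, 101))
        for n in (8, 16, 32)]
print([f"{e:.2e}" for e in errs])      # halving h roughly quarters the error: O(h²)
```

The observed O(h²) convergence over the whole interval, not just at the nodes, illustrates the remark following Formula (35); Simpson weights would give O(h⁴) in the same framework.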

Projection Methods (PM)
Characteristic cases of projection methods are collocation and Galerkin methods. By way of illustration, we consider here the collocation method wherein the unknown function u(x) in (1) is approximated throughout the whole of [a, b] by the interpolating polynomial of degree ≤ n,

ũ_n(x) = Σ_{j=1}^{n+1} ũ_n(x_j) ℓ_j(x), (38)

where ℓ_j(x), j = 1, . . . , n + 1 are the Lagrange basis functions defined in (23), which interpolates u(x) at n + 1 distinct ordered points a = x_1 < x_2 < . . . < x_n < x_{n+1} = b. By using (38), Equation (1) degenerates to

ũ_n(x) − λ Σ_{j=1}^{n+1} [∫_a^b K(x, t) ℓ_j(t) dt] ũ_n(x_j) = f(x). (39)

Setting up the vectors G = λ(∫_a^b K(x, t) ℓ_1(t) dt, ..., ∫_a^b K(x, t) ℓ_{n+1}(t) dt) and Ψ(ũ_n) = col(ũ_n(x_1), ..., ũ_n(x_{n+1})),

Equation (39) may be written in the symbolic form

T_c ũ_n = ũ_n − GΨ(ũ_n) = f, (40)

where the operator T_c : X → X and X = C[a, b]. Furthermore, approximating {g_j(x)} and f(x) by polynomials of degree ≤ n as above, we arrive at

T_cc ũ_n = ũ_n − GΨ(ũ_n) = f, (41)

where the operator T_cc : X → X and X = C[a, b].
Equations (40) and (41) are of the type (3), and so their unique solutions may be obtained by Theorem 1.
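The collocation reduction can be sketched on the same manufactured problem used above for the quadrature methods (our own choice, not from the paper): u(x) − ∫_0^1 x t u(t) dt = eˣ − x, exact solution u(x) = eˣ. Enforcing the equation at the interpolation nodes gives a small linear system for the nodal values ũ_n(x_j):

```python
import math
import numpy as np

# Collocation sketch: ũ(x) = Σ_j u_j ℓ_j(x), equation enforced at the nodes.
a, b, lam, n = 0.0, 1.0, 1.0, 6
K = lambda x, t: x * t
f = lambda x: math.exp(x) - x          # manufactured so that u(x) = e^x
nodes = np.linspace(a, b, n + 1)

def ell(j, t):                         # Lagrange basis ℓ_j
    p = 1.0
    for i, ti in enumerate(nodes):
        if i != j:
            p *= (t - ti) / (nodes[j] - ti)
    return p

def integrate(y, m=2000):              # composite trapezoid on [a, b]
    h = (b - a) / m
    return h * (0.5 * (y(a) + y(b)) + sum(y(a + k * h) for k in range(1, m)))

# Linear system: u_i − λ Σ_j u_j ∫ K(x_i, t) ℓ_j(t) dt = f(x_i).
B = np.array([[integrate(lambda t: K(xi, t) * ell(j, t)) for j in range(n + 1)]
              for xi in nodes])
u_nodes = np.linalg.solve(np.eye(n + 1) - lam * B, np.array([f(x) for x in nodes]))

u_tilde = lambda x: sum(uj * ell(j, x) for j, uj in enumerate(u_nodes))
err = max(abs(u_tilde(x) - math.exp(x)) for x in np.linspace(a, b, 101))
print(f"{err:.1e}")
```

Since the exact solution is smooth, the degree-6 collocation polynomial is accurate over the whole interval; the error is essentially that of polynomial interpolation of eˣ.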

Examples
To clarify the application of the proposed technique and to evaluate its performance, we consider from the literature five example integral equations with known exact solutions and construct approximate explicit solutions in several ways.
We emphasize that the solutions obtained with the proposed procedure have an explicit form. However, to avoid listing large expressions and to be able to compare these solutions, except in some cases, we convert all coefficients to floating point numbers with six decimal places without rounding. For the error estimation between the exact solution u and the approximate solution ũ_n, we use the ∞-norm, i.e.,

ε_n = ‖u − ũ_n‖_∞ = max_{a≤x≤b} |u(x) − ũ_n(x)|.

Example 1. Consider the integral equation (42) [23], where the kernel K(x, t) = sin(x − t) and the input function f(x) = 1 are continuous on [0, π] × [0, π] and [0, π], respectively, and we seek the unique solution u(x) ∈ C[0, π]. We solve this equation exactly as well as approximately.
DCM: Exact solution: Since sin(x − t) = sin x cos t − cos x sin t, i.e., K(x, t) is separable, the integral equation in (42) can be written in the form (10). Following the procedure in Section 3, we set up the vectors G and Ψ. Condition (5) is fulfilled, specifically det V = π²/4 + 1 ≠ 0, and thus from (6) it follows that

u(x) = 1 − (4π sin x + 8 cos x)/(π² + 4),

which is the exact solution of (42).

DKM: Taylor series:
Let us now approximate the kernel K(x, t) by a truncated Taylor series in t about the point 0 where all terms through tⁿ are included, i.e., by K_n(x, t) as in (15) with t_0 = 0. As an illustration, let n = 2. Then, after substituting K_2(x, t), Equation (42) degenerates to an equation for ũ_2. Following the steps in Section 4.1, we specify the vectors G and Ψ and put (42) in the form T_s ũ_2 = ũ_2 − GΨ(ũ_2) = 1.
Solving by (5) and (6), we acquire ũ_2. In a similar manner, other approximate solutions of the same analytic form for higher values of n are derived. We tabulate some of these solutions in Table 1 and compare them against the exact answer. The size of the maximum errors and the error ratios between successive approximations are also given.

QM: Simpson's rule: According to Section 5, let us divide the interval [0, π] into n equal subintervals of length h = π/n, where n is an even integer, consider the n + 1 abscissas x_j = h(j − 1), j = 1, 2, . . . , n + 1, and employ Simpson's formula to approximate the integral in (42). By way of illustration, let n = 2. Then, Equation (42) assumes the semi-discrete form (46). Assembling the vectors G and Ψ as in Section 5, we write Equation (46) as T_q ũ_2 = ũ_2 − GΨ(ũ_2) = 1.
This equation is solved by means of (5) and (6) to obtain ũ_2, which is an approximate solution of (42). Other solutions of the same type for various values of n are recorded in Table 2. Clearly, the size of the error shows that the accuracy of the solutions is O(h⁴), and this accuracy is valid throughout the interval [0, π] and not only at the quadrature nodes.
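The DCM and Simpson computations for Example 1 can be cross-checked numerically. The sketch below assumes the equation reads u(x) = 1 + ∫_0^π sin(x − t) u(t) dt (our reading of (42)); the DCM reference uses the analytically known matrix V = [[1, π/2], [−π/2, 1]] and Ψ(f) = (0, 2), and the Nyström/Simpson solutions are compared against it:

```python
import math
import numpy as np

# DCM reference for (our reading of) Example 1, from sin(x−t) = sin x cos t − cos x sin t.
a, b = 0.0, math.pi
V = np.array([[1.0, math.pi / 2], [-math.pi / 2, 1.0]])   # det V = 1 + π²/4
c = np.linalg.solve(V, np.array([0.0, 2.0]))              # V^{-1} Ψ(f)
u_ref = lambda x: 1.0 + c[0] * math.sin(x) - c[1] * math.cos(x)

def simpson_nystrom(n):                                   # n even
    t = np.linspace(a, b, n + 1)
    w = np.array([1] + [4, 2] * (n // 2 - 1) + [4, 1]) * (b - a) / (3.0 * n)
    A = np.eye(n + 1) - w[None, :] * np.sin(t[:, None] - t[None, :])
    un = np.linalg.solve(A, np.ones(n + 1))
    return lambda x: 1.0 + sum(wj * math.sin(x - tj) * uj
                               for wj, tj, uj in zip(w, t, un))

errs = [max(abs(simpson_nystrom(n)(x) - u_ref(x))
            for x in np.linspace(a, b, 101)) for n in (2, 4, 8, 16)]
print([f"{e:.2e}" for e in errs])        # errors fall roughly like O(h^4)
```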

Example 2.
Consider the integral equation (48) [5].

DKM: Taylor series: We formulate the given integral equation as in (19). Specifically, we replace the kernel K(x, t) and the input function f(x) by finite segments of Taylor series of degree n (r = n) about 0 in t and x, respectively, and solve via (5) and (6). For λ_A = 2 and n = 2, we get ũ_2; analogous solutions are obtained for other, larger values of n, e.g., n = 4 and n = 8. Comparison of these approximate solutions with the exact solution shows the excellent agreement accomplished even with small values of n. Further, the maximum errors between the approximate solutions for various values of n and the exact solution are given in Table 3. The maximum error is located at the point x = 1 in all cases. The results are distinguished for their high accuracy.

DKM: Polynomial interpolation: We follow the procedure in Section 4.2 and approximate K(x, t) and f(x) with interpolating polynomials of degree n (r = n). Expressing the integral equation in (48) in the form (28) and solving by means of Theorem 1 for λ_A = 2, we find

ũ_2(x) = 1.003217 + 0.877792x + 0.847809x²,
ũ_4(x) = 1.000005 + 0.998808x + 0.509797x² + 0.140256x³ + 0.069434x⁴,
ũ_8(x) = 1.000000 + 0.999999x + 0.500000x² + 0.166664x³ + 0.041675x⁴ + 0.008310x⁵ + 0.001425x⁶ + 0.000164x⁷ + 0.000041x⁸.

Table 4 shows the maximum errors between these approximate solutions and the exact solution and the points where they occur. As expected, the results are very accurate. Compared with those obtained by the Taylor series above, they are superior.
QM: Trapezoidal rule: We employ the composite trapezoidal rule to approximate the integral in (48). Following the procedure in Section 5, we may initially express (48) in the form (34). However, to construct an explicit solution of a type such as a polynomial throughout the interval [a, b], we substitute the components of the vector G and the function f(x) with finite segments of the Taylor series of degree r = n in x about 0, as in (36). After solving by means of (5) and (6) for λ_A = 2 and n = 2, we obtain the solution ũ_2; similarly, for n = 4 and n = 8, we acquire the higher-order solutions ũ_4 and ũ_8. The maximum errors between the approximate solutions corresponding to various values of n and the exact solution are listed in Table 5. Moreover, in Table 6, we give the results obtained when λ_A = 50. It is evident that the accuracy is O(h²) in the entire interval [0, 1]. As the value of the parameter λ_A increases, the accuracy improves and the convergence to the exact solution is faster.

Let us now consider the approximating scheme in (37), where we employ the composite trapezoidal rule to discretize the integral and interpolating polynomials of degree r = n to approximate the components of G and the function f(x). For λ_A = 2 and n = 2, n = 4 and n = 8, we obtain the corresponding solutions

ũ_2(x) = 1.131517 + 0.972464x + 0.969511x²,
ũ_4(x) = 1.029984 + 1.025406x + 0.526503x² + 0.145148x³ + 0.072597x⁴,
ũ_8(x) = 1.007331 + 1.006524x + 0.503972x² + 0.168191x³ + 0.042115x⁴ + 0.008410x⁵ + 0.001444x⁶ + 0.000167x⁷ + 0.000041x⁸.
In Table 7, we list the maximum errors between these approximate solutions and the exact solution. Additionally, in Table 8, we present the results for the value of the parameter λ_A = 50. The results are very close to those obtained above with the composite trapezoidal rule and Taylor series.

PM: Collocation: Finally, we solve the integral equation in (48) by employing a projection method, namely the simple collocation method given in Section 6, where the ordinary Lagrange basis functions are used. For λ_A = 1, by means of (41) and polynomials of degree n = 3, n = 4 and n = 5 interpolating u(x) on the equally spaced nodes x_j = (j − 1)/n, j = 1, 2, . . . , n + 1, we obtain the solutions ũ_3, ũ_4 and ũ_5, respectively. In Table 9, we give the maximum errors between these approximate solutions and the exact solution. By way of illustration, we also compare these results with the maximum errors obtained in [24,25], where an advanced quasi-projection method based on B-spline tight framelets is utilized.
Example 3. Consider Love's integral equation (50) [26], where the parameters λ_A ≠ 0 and c > 0. This equation appears in electrostatics. The kernel function (51) is continuous on [0, 1] × [0, 1] with a peak at x = t when c is small. Figure 2 shows the shape of K(x, t) for various values of c and x = 0.5. It is understood that the closer c is to zero, the more difficult it is to construct the solution to the problem. Let λ_A = −0.5, c = 0.1 and let f be such that (50) has the exact solution u(x) = 0.06 − 0.8x + x² [26]. We noticed that Taylor series and interpolation polynomials are generally not efficient in solving problems with kernel functions of the type (51) when c is small (c < 0.5). Therefore, we apply here the quadrature method in Section 5 by using the trapezoidal and Simpson's rules.
QM: Trapezoidal rule: Solving (52) via (5) and (6), we obtain an approximate analytic solution to (50) in the interval [0, 1]; for example, for n = 2, we have ũ_2. In Table 10, we record the maximum errors for various values of n. The maximum error occurs at the point x = 1 except in the instance n = 8, which is a poor approximation anyway, where it is located at x = 0.935.

QM: Simpson's rule: We now utilize the quadrature formula in (45), with n being an even number, and repeat the steps above. As an illustration, for n = 16, we obtain the solution ũ_16. Table 11 shows the errors of the approximate solutions for varying values of n. It is noted that the maximum error occurs at the point x = 1, with the exception of the instances n = 8 and n = 16, where it is located at the points x = 0.95 and x = 0.995, respectively. In the same Table, some results obtained in [26] are also quoted for comparison. The superiority of Simpson's rule is reaffirmed and the agreement with the results of other formulations is confirmed.
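A Nyström/Simpson sketch of a Love-type problem is given below. Two assumptions are ours, not taken from the text: the kernel form K(x, t) = (1/π) c/(c² + (x − t)²), and the source f, which is manufactured numerically here so that u(x) = 0.06 − 0.8x + x² solves the equation exactly, mirroring the test problem above:

```python
import math
import numpy as np

# Love-type equation: assumed kernel (1/π) c / (c² + (x − t)²), λ = −0.5, c = 0.1.
a, b, lam, c = 0.0, 1.0, -0.5, 0.1
K = lambda x, t: (c / math.pi) / (c * c + (x - t) ** 2)
u_exact = lambda x: 0.06 - 0.8 * x + x * x

def integrate(y, m=2000):                  # fine trapezoid, used only to build f
    h = (b - a) / m
    return h * (0.5 * (y(a) + y(b)) + sum(y(a + k * h) for k in range(1, m)))

# Manufacture f so that u_exact satisfies u − λ ∫ K u dt = f exactly.
f = lambda x: u_exact(x) - lam * integrate(lambda t: K(x, t) * u_exact(t))

def nystrom_simpson(n):                    # n even
    t = np.linspace(a, b, n + 1)
    w = np.array([1] + [4, 2] * (n // 2 - 1) + [4, 1]) * (b - a) / (3.0 * n)
    A = np.eye(n + 1) - lam * w[None, :] * np.array([[K(ti, tj) for tj in t] for ti in t])
    un = np.linalg.solve(A, np.array([f(ti) for ti in t]))
    return lambda x: f(x) + lam * sum(wj * K(x, tj) * uj for wj, tj, uj in zip(w, t, un))

errs = [max(abs(nystrom_simpson(n)(x) - u_exact(x)) for x in np.linspace(a, b, 41))
        for n in (8, 16, 32)]
print([f"{e:.1e}" for e in errs])          # accuracy improves once h resolves the peak
```

The coarse grids (h > c) do not resolve the peak at x = t, which is consistent with the poor n = 8 approximation reported above; accuracy recovers as h drops below c.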
Example 4. Consider the integral equation (53). The kernel K(x, t) = xt² − x²t is continuous on [−1, 1] × [−1, 1] and separable. Thus, Equation (53) can be solved exactly by the DCM in Section 3 to obtain u(x) = −(30/133)x + (20/133)··· . We also solve this equation here by the collocation method (PM) explained in Section 6. Specifically, we use Lagrange interpolating polynomials of the type (38) to approximate u(x) and put the integral equation in (53) in the symbolic form (41). For n = 2, n = 3 and n = 4, we obtain

ũ_2(x) = −(6/19)x − (15/19)x²,

and the corresponding solutions ũ_3 and ũ_4, respectively. Notice that for n = 4, as expected, the exact solution was fully recovered. This further validates the procedure in Section 6.

Discussion
The main objective of the present article was to present a unified and versatile procedure suitable for symbolic computations for the construction of approximate analytic solutions to linear Fredholm integral equations of the second kind with general continuous kernels.
It was shown how some of the classical methods such as the Direct Computational Method (DCM), the Degenerate Kernel Methods (DKM), the Quadrature Methods (QM) and the Projection Methods (PM) can be incorporated in the proposed procedure. Additionally, it was demonstrated how complicated calculations such as double power series approximation, interpolation in two dimensions and combinations of different types of approximation can be handled in an effective way.
The technique was tested by solving several examples from the literature. Several approximating schemes for the kernel and the integral operator were used and their accuracy and convergence were evaluated.
In all cases, explicit solutions of high accuracy for the entire interval [a, b] were obtained, which converge to the true solution as n increases. The power series approximation and polynomial interpolation of the kernel yield excellent results when the kernel is smooth with no "peaks". For continuous kernels with "peaks" or kernels of the convolution type, numerical integration of the integral operator is appropriate.
In this paper, the emphasis was on presenting a novel framework for solving integral equations and on establishing its versatility and reliability in practice. A proper convergence and error analysis is postponed to a sequel paper. In the proposed framework, other numerical methods such as piecewise projection methods, Galerkin methods and wavelets methods can be included. The technique can be extended to two or more dimensions.