Combining Nyström Methods for a Fast Solution of Fredholm Integral Equations of the Second Kind

In this paper, we propose a suitable combination of two different Nyström methods, both using the zeros of the same sequence of Jacobi polynomials, in order to approximate the solution of Fredholm integral equations on [−1, 1]. The proposed procedure is cheaper than the Nyström scheme based on only one of the two methods. Moreover, we can successfully manage functions with possible algebraic singularities at the endpoints and kernels with different pathologies. The error of the method is comparable with the error of best polynomial approximation in suitable spaces of functions equipped with the weighted uniform norm. The convergence and the stability of the method are proved, and some numerical tests confirming the theoretical estimates are given.


Introduction
Let the following be a Fredholm Integral Equation (FIE) of the second kind: where ρ is a Jacobi weight, g and k are known functions defined in (−1, 1) and (−1, 1) 2 , respectively, µ is a nonzero real parameter, and f is the unknown function. The kernel k is also allowed to be weakly singular along the diagonal y = x, or it may show other pathologies such as a highly oscillating behaviour or a "nearly singular" factor. The nature of the kernel, together with the presence of the Jacobi weight inside the integral, implies that the solution f can have a singular behaviour at the endpoints of the definition interval (see for instance [1,2]); therefore, the natural choice is to study Equation (1) in suitable spaces of weighted functions. A large number of papers on numerical methods for FIEs is available in the literature, and in the last two decades considerable attention has been devoted, in the case under consideration, to the so-called "global approximation methods". They are essentially based on polynomial approximation and use zeros of orthogonal polynomials (see for instance [3,4] and the references therein). There are also examples of global approximation methods based on equispaced points [5], which are especially convenient when the data are available in discrete form, but are limited to the unweighted case (see [5,6]). Global methods, more or less, behave like the best polynomial approximation of the solution in suitable spaces of weighted functions; consequently, this approximation strategy performs very well in the case of very smooth functions. On the other hand, these methods can converge slowly if the functions are not smooth or if the kernel has pathologies as described above.
Recently, in [4], a new method based on the collocation approach using the so-called Extended Interpolation was proposed in order to reduce the computational effort in the cases where the solution is not very smooth [7]. Moreover, the method delays the computation of the zeros of high-degree polynomials, which becomes progressively unstable as the degree increases.
Following a similar idea, we propose here a Mixed Nyström scheme based on product quadrature rules of the "extended" type, i.e., based on the zeros of the polynomial p m+1 (w)p m (w), where {p n (w)} n denotes the orthonormal sequence with respect to a suitable fixed Jacobi weight w [8]. Using a Nyström scheme instead of a collocation one offers several advantages. First of all, we will use here only one sequence of orthonormal polynomials, while in the collocation method in [4] two different sequences ({p n (w)} n and {p n (ϕ 2 w)} n , where ϕ(x) = √ 1 − x 2 ) were required to obtain optimal Lebesgue constants of the interpolating operators. Secondly, due to its nature, the Nyström strategy with a fixed kernel provides a faster convergence than the collocation approach when the right-hand side g in (1) is not very smooth. Third, a Nyström method based on a product rule allows treating kernel functions having different pathologies.
The idea of the proposed scheme is the following. Consider two sequences of Nyström interpolants { f m } m and { f 2m+1 } m : The first is based on the product rule using the zeros of p m (w), and the second one is based on the extended product rule using the zeros of p m+1 (w)p m (w). Each step of the procedure consists in solving, for a fixed m, the first Nyström method and in using the coefficients defining the corresponding Nyström interpolant f m in order to "reduce" by about one half the computation of the coefficients of the Extended Nyström interpolant f 2m+1 . In other words, we will assume that the two interpolants "coincide" on the zeros of p m (w). This assumption results in solving only a linear system of order m + 1, instead of one of dimension 2m + 1, in order to obtain an approximating function that is comparable with f 2m+1 from the convergence point of view. The outline of the paper is the following. Section 2 contains preliminary notations and a collection of tools needed to introduce the main results stated in Section 3. Here, we present an extended Nyström method and, based on it, the combined algorithm that allows us to solve Equation (1) faster. In Section 4, we provide some computational details for the effective construction of the linear systems. Section 5 concerns the numerical tests, while Section 6 contains the proofs.

Notation and Preliminary Results
Throughout the paper, we use C in order to denote a positive constant, which may have different values at different occurrences, and we write C = C(n, f , . . .) to mean that C > 0 is independent of n, f , . . ..

Function Spaces
Let u be the Jacobi weight defined as follows: We denote by C u the Banach space of the locally continuous functions f on (−1, 1) such that the following limit conditions are satisfied: C u is equipped with the following norm: The limit conditions (2) are necessary in order to assure that the following is the case (see for instance [9]): lim where, denoting by P m the space of all algebraic polynomials of degree at most m, it is the error of best polynomial approximation of f ∈ C u . For smoother functions, we consider the following Sobolev-type subspaces of C u of order r ∈ N, defined as follows: where AC denotes the space of all the functions that are absolutely continuous on every closed subset of (−1, 1) and ϕ(x) := √ 1 − x 2 . The space W r (u) is equipped with the following norm: Finally, by L log + L, we denote the set of all measurable functions f defined in (−1, 1) such that the following is the case: For any bivariate function k(x, y), we will write k y (or k x ) in order to regard k as a univariate function of the variable x (or y) only.
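To make the weighted setting concrete, here is a minimal Python sketch (all names are illustrative, not from the paper) that evaluates a Jacobi weight u = v^{γ,δ} and approximates the weighted uniform norm ‖f u‖∞ on a fine grid of (−1, 1). It illustrates how a function with an algebraic endpoint singularity can nonetheless have a finite weighted norm, which is the point of working in C u :

```python
import numpy as np

def jacobi_weight(x, gamma, delta):
    """Jacobi weight u(x) = (1 - x)^gamma * (1 + x)^delta."""
    return (1.0 - x)**gamma * (1.0 + x)**delta

def weighted_sup_norm(f, gamma, delta, n=10_000):
    """Approximate ||f u||_inf by sampling the open interval (-1, 1)."""
    x = np.linspace(-1.0, 1.0, n + 2)[1:-1]   # drop the endpoints
    return np.max(np.abs(f(x) * jacobi_weight(x, gamma, delta)))

# f has an algebraic singularity at x = 1, yet (f u)(x) -> 0 there
f = lambda x: (1.0 - x)**(-0.25)
norm = weighted_sup_norm(f, gamma=0.5, delta=0.0)
```

Here (f u)(x) = (1 − x)^{1/4}, so the supremum is attained near x = −1 and equals 2^{1/4}, while f itself is unbounded.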

Solvability of the Equation (1) in C u
Let us set the following: Equation (1) can be rewritten in the following form: where I denotes the identity operator. In order to provide sufficient conditions assuring the compactness of the operator K : C u → C u , we need to recall the following definition. For any f ∈ C u and 0 < r ∈ N, the following modulus of smoothness was defined in [10]: where and the following is the case: For any f ∈ W r (u), the modulus Ω r ϕ ( f , t) u is estimated by means of the following inequality (see for instance [11], p. 314): We are now able to state a theorem that guarantees the solvability of Equation (1) in the space C u and whose proof is given in Section 6. Theorem 1. Under the following assumptions, with 0 < s < r and C = C( f ), the operator K : C u → C u is compact. Therefore, if ker(I − K) = 0, for any g ∈ C u , Equation (1) admits a unique solution in C u .

Product Integration Rules
Denoting by {p m (w)} m∈N the system of the orthonormal polynomials with respect to the Jacobi weight w = v α,β , α, β > −1, the polynomial p m (w) is defined as follows: Let {x k := x m,k (w) : k = 1, . . . , m} be the zeros of p m (w), and let the following: be the Christoffel numbers with respect to w. For the following integral: consider the following product integration rule: where According to a consolidated terminology, we will refer to the product integration rule in (5) as the Ordinary Product Rule, only to distinguish it from the extended product integration rule introduced below. Moreover, we recall that {M i (y)} i∈N are known as Modified Moments [12] (see, e.g., [13]).
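As a hedged illustration of a product rule of the form (5): when the kernel is smooth, the coefficients C k (y) can simply be taken as λ k k(x k , y), which collapses the product rule to the Gauss rule for the weight w (this is the "ONG" shortcut also used for comparison in the experiments); for pathological kernels, the C k (y) must instead be built from the Modified Moments, which is not reproduced here. The sketch below uses the Legendre case w = v 0,0 via NumPy's Gauss–Legendre nodes:

```python
import numpy as np

def product_rule_smooth(f, k, y, m):
    """
    Approximate I(f, y) = integral_{-1}^{1} k(x, y) f(x) w(x) dx by
    sum_k C_k(y) f(x_k).  For a SMOOTH kernel we may take
    C_k(y) = lambda_k * k(x_k, y)  (plain Gauss rule); w = v^{0,0} here.
    """
    x, lam = np.polynomial.legendre.leggauss(m)  # zeros of p_m(w), Christoffel numbers
    C = lam * k(x, y)                            # product-rule coefficients C_k(y)
    return np.dot(C, f(x))

# smooth test case: at y = 0 the kernel exp(x*y) is 1, so the integral
# of x^2 over (-1, 1) equals 2/3
approx = product_rule_smooth(lambda x: x**2, lambda x, y: np.exp(x * y), 0.0, 20)
```

The Gauss rule with 20 nodes integrates this polynomial integrand exactly, so `approx` matches 2/3 to machine precision.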
With respect to the stability and the convergence of the previous rule, the following result, useful for our aim, can be deduced from ([9], p. 348) (see also [14]).

Theorem 2.
Under the following assumptions: for any f ∈ C u , we obtain the following bounds: In addition to the previous well-known product rule, we recall the following Extended Product Rule (see [8]), based on the zeros of p m (w)p m+1 (w). Denoting by {y k := x m+1,k (w) : k = 1, . . . , m + 1} the zeros of p m+1 (w), the extended formula is as follows: where and are known as the Generalized Modified Moments (GMMs). With respect to the stability and convergence of the extended quadrature rule (8), we recall the following.

Theorem 3 ([8], Theorem 3.2).
Under the following assumptions: for any f ∈ C u , we obtain the following bounds: and sup with C = C(m, f ).

A Nyström Method
In order to approximate the solution of (1), we recall the following weighted Nyström method based on the product quadrature rule (5). Introducing the sequence {(K m f )(y)} m , where the following is the case: we proceed to solve in C u the following finite dimensional equation in the unknown f m : By multiplying both sides of the previous equation by the weight function u and collocating at the nodes {x i } m i=1 , we reach the following linear system of order m in the unknowns, where {C k (y)} k is defined in (6).
By setting the following: the system (15) can be rewritten in the following matrix form: Details on the matrix D m will be given in Section 4. Once the solution {c* i } m i=1 of the system (16) has been determined, the Ordinary Nyström interpolant takes the following form: About the convergence of this method, the following theorem can be obtained by using weighted arguments in [15]. Theorem 4. Under the assumptions of Theorems 1 and 2, for any g ∈ W r (u), the finite dimensional Equation (14) admits a unique solution f* m ∈ C u such that the following error estimate holds: In what follows, we will refer to this Nyström method as the Ordinary Nyström Method (ONM).
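Since Equation (1) is left in an elided display above, the following sketch fixes one plausible form, µ f(y) − ∫ k(x, y) f(x) w(x) dx = g(y) (the sign and placement of µ are an assumption, as is the smooth-kernel shortcut C k (y) = λ k k(x k , y) with the Legendre weight), and assembles and solves the Nyström system (I-style matrix D m , then c* = D m^{-1} g):

```python
import numpy as np

def ordinary_nystrom(g, k, mu, m):
    """
    Nystrom sketch for  mu*f(y) - integral k(x,y) f(x) w(x) dx = g(y)
    on (-1, 1), with w = v^{0,0} (Legendre) and a smooth kernel so that
    C_k(y) = lambda_k * k(x_k, y).  Returns the nodes x_i and the
    values c_i* ~ f(x_i) solving the collocated linear system.
    """
    x, lam = np.polynomial.legendre.leggauss(m)
    # D[i, j] = mu*delta_ij - C_j(x_i) = mu*delta_ij - lam_j * k(x_j, x_i)
    D = mu * np.eye(m) - lam[None, :] * k(x[None, :], x[:, None])
    c = np.linalg.solve(D, g(x))
    return x, c

# manufactured solution f(x) = x with k(x, y) = x*y and mu = 1:
# integral of x * x over (-1,1) is 2/3, so g(y) = y - (2/3) y = y/3
x, c = ordinary_nystrom(lambda y: y / 3.0, lambda x, y: x * y, 1.0, 8)
```

With this rank-one kernel the discrete system is nonsingular (the only nontrivial eigenvalue of the quadrature operator is 2/3), and the Nyström values c i reproduce f(x i ) = x i exactly up to rounding.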

The Extended Nyström Method
Now, we introduce a Nyström method based on the extended product quadrature rule (8), calling it the Extended Nyström Method (ENM). Proceeding in analogy with the ONM, we begin by constructing the following sequence {( K 2m+1 f )(y)} m : Moreover, we solve in C u the following extended finite dimensional equation in the unknown f 2m+1 : By multiplying both sides of (19) by the weight function u and collocating at the quadrature nodes, we obtain the following linear system: where the following is the case: The coefficients {A k (y)} m k=1 and {B k (y)} m+1 k=1 are defined in (9)-(10). The linear system of order 2m + 1 can be rewritten in the following more convenient block-matrix form: with . Details on the effective construction of the system will be provided in Section 4.
Denoting by [a* m T , b* m+1 T ] T the vector solution of (21), the extended Nyström interpolant takes the following form: With respect to the convergence, we are able to prove the following. Theorem 5. Under the assumptions of Theorems 1 and 3, for any g ∈ W r (u), the finite dimensional Equation (19) admits a unique solution f 2m+1 ∈ C u such that the following error estimate holds:

The Mixed Nyström Method
We observe that, under suitable assumptions, both sequences { f m } m and { f 2m+1 } m uniformly converge to the solution f of (1). Thus, it makes sense to consider a mixed scheme that combines the two methods previously introduced. Therefore:
• By assuming a m ∼ c* m in the linear system (21), we obtain the following: by which we deduce the reduced system of order m + 1 in the only unknown b m+1 : Denoting by b* m+1 its solution, we construct the interpolant f̃ 2m+1 ∼ f 2m+1 :
• Restart the procedure determining f 4m and f 8m+1 , and so on.
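The reduction step above can be sketched in linear-algebra terms. In the snippet below, the block names D11, D12, D21, D22 are hypothetical labels mirroring the block-matrix form of (21): the ordinary solve produces c* of order m, then a m is frozen at c* and only the reduced system of order m + 1 is solved for b. The sanity check constructs a decoupled case in which a = c* holds exactly, so the reduced solve must reproduce the b-part of the full (2m + 1)-order solution:

```python
import numpy as np

def mixed_step(D11, D12, D21, D22, g1, g2):
    """One mixed-scheme step: ordinary solve of order m, then the
    reduced solve of order m+1 with a_m frozen at c*."""
    c = np.linalg.solve(D11, g1)              # ordinary solve, order m
    b = np.linalg.solve(D22, g2 - D21 @ c)    # reduced solve, order m+1
    return c, b

rng = np.random.default_rng(0)
m = 5
D11 = np.eye(m) + 0.1 * rng.standard_normal((m, m))
D12 = np.zeros((m, m + 1))        # decoupled, so that a = c* exactly
D21 = 0.1 * rng.standard_normal((m + 1, m))
D22 = np.eye(m + 1) + 0.1 * rng.standard_normal((m + 1, m + 1))
z = rng.standard_normal(2 * m + 1)            # prescribed solution [a; b]
g1 = D11 @ z[:m] + D12 @ z[m:]
g2 = D21 @ z[:m] + D22 @ z[m:]
c, b = mixed_step(D11, D12, D21, D22, g1, g2)
```

In the genuine scheme a m ∼ c* m only holds approximately, and Theorem 6 quantifies the resulting error; the snippet only isolates the algebra of the reduction.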
The mixed sequence {f̃ n } n of Nyström interpolants is obtained by iterating couples of steps of the types (24)-(26). Denoting by ‖A‖ ∞ = max 1≤i≤n ∑ n j=1 |a ij | the infinity norm of the matrix A ∈ R n×n , the uniform convergence of {f̃ n } n to the solution f ∈ C u of (1) is stated in the following: Theorem 6. Under the assumptions of Theorems 1, 4 and 5, and supposing that the matrix D 2,2 in (25) is invertible with sup n ‖D −1 2,2 ‖ ∞ < ∞, for any g ∈ W r (u), the sequence {f̃ n } n uniformly converges to f ∈ C u , and the following error estimate holds: Remark 2. By comparing (23) with (28), both the sequences obtained by the extended and the mixed Nyström methods uniformly converge to f ∈ C u with the same rate of convergence.
By implementing the Mixed Nyström Method, we gain different advantages, specifically the reduction in the sizes of the involved linear systems.
More precisely, at each step of the mixed scheme, setting m = 2 n , we solve two systems of order m and m + 1. By doing so, the obtained error is comparable with that achieved by solving the two systems of order m and 2m + 1 required by the Ordinary Nyström Method.
Therefore, the computational cost of the global procedure is strongly reduced. Indeed, if we compute the solution of the linear systems by Gaussian Elimination, we save 77.8% of the long operations and 33.2% of the function evaluations.
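The 77.8% figure follows from the classical (2/3)n³ operation count of Gaussian Elimination, as the following arithmetic sketch shows (the 33.2% figure for function evaluations depends on implementation details not reproduced here):

```python
# Gaussian Elimination on an n x n system costs about (2/3) n^3 long
# operations, so at each step the dominant solve costs are:
#   ordinary approach: systems of order m and 2m+1  ->  ~ m^3 + (2m+1)^3
#   mixed approach:    systems of order m and m+1   ->  ~ m^3 + (m+1)^3
# As m grows, the saving tends to 1 - 2/9 = 7/9 ~ 77.8%.
def saving(m):
    ordinary = m**3 + (2 * m + 1)**3
    mixed = m**3 + (m + 1)**3
    return 1.0 - mixed / ordinary

s = saving(1024)
```

For m = 1024 the saving is already within a fraction of a percent of the asymptotic value 7/9.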
Furthermore, the difficulties in the evaluation of the modified moments for "large" degrees are delayed, as well as the instability in the computation of Jacobi polynomial zeros of high degree by the Golub-Welsch algorithm.

Computational Details
Given two integers h, k, with h < k, in this section we use the short notation h : k to denote the set {h, h + 1, . . . , k}. Denoting by I m the identity matrix of order m, the matrix of the linear system (16) takes the following form: It is well known that the system (16) and the finite dimensional Equation (14) are equivalent (see for instance ([16], Theorem 12.7, p. 202)).
About the block-matrix of the system (21), according to the previously introduced matrices, we have the following: and the matrices M 1,1 , M 1,2 , M 2,1 and M 2,2 are as follows: Remark 3. The entries of the matrices in (29) require the computation of the GMMs. As usual, the ordinary Modified Moments (MMs), which depend on the specific kernel under consideration, are often derived by suitable recurrence relations (see, e.g., [13]). In [8], a general scheme for deriving GMMs starting from MMs was proposed. Alternatively, for very smooth kernels, Gaussian rules can also be used. In any case, the global algorithm can be organized in such a manner that the matrices in (29), which require the most expensive computational effort, are computed once for a given couple (m, m + 1).

Numerical Experiments
Now we propose some tests showing the numerical results obtained by approximating the solution of equations of the type (1) by the mixed sequence {f̃ n } n in (27). We will compare the results with those attained by the corresponding ordinary sequence used in the standard method and, in Example 2, also with those achieved by the mixed collocation method proposed in [4]. Indeed, in this test, the kernel is moderately smooth, and the convergence conditions of both methods are satisfied.
We have selected right-hand sides g possessing different regularities and kernels k presenting some kinds of drawback, such as a highly oscillating behaviour combined with a "nearly" singular fixed point, or a weak singularity. In each test, we will report both the common weight w used in the construction of the quadrature formulae and the weight u defining the space to which f belongs.
For an effective comparison between the ordinary and the extended sequences on the same number of nodes, we have considered the following sequences: Since the solution f is unknown, we regard as exact the values attained by the approximating function f N for a sufficiently large N. We remark that N = 1024 turns out to be a suitable choice for the functions under consideration. In the tables, for increasing n, we will report the weighted maximum errors attained by the ordinary and the mixed sequences at the set (z i ) i=1,...,M , M = 1000, of equally spaced points of (−1, 1), by setting In each table, the first and third columns contain the size of the ordinary system (o.l.s.) and its condition number cond one , related to the ordinary sequence. In the fourth and sixth columns, the sizes of the couple of linear systems of the mixed scheme (m.l.s.) and the condition number cond mix of the "reduced" system (25) are reported. All the condition numbers have been computed with respect to the infinity norm.
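The tabulated quantity can be sketched as follows (names are illustrative; f_ref plays the role of the reference f N ): the weighted maximum error max i |f n (z i ) − f N (z i )| u(z i ) over M equispaced points of the open interval (−1, 1):

```python
import numpy as np

def weighted_max_error(f_approx, f_ref, u, M=1000):
    """Weighted maximum error over M equispaced points of (-1, 1)."""
    z = np.linspace(-1.0, 1.0, M + 2)[1:-1]   # equispaced, endpoints excluded
    return np.max(np.abs(f_approx(z) - f_ref(z)) * u(z))

# toy check with u = v^{1/2,1/2}: a uniform perturbation of size 1e-3
# is weighted by u(z) <= 1, so the error is at most 1e-3
u = lambda x: np.sqrt(1.0 - x**2)
err = weighted_max_error(lambda x: np.sin(x) + 1e-3, np.sin, u)
```

The weighting matters near the endpoints: errors there are damped by u, which is exactly why the experiments measure the error in the weighted uniform norm of C u .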
Finally, in order to contain a possible moderate loss of accuracy in computing the GMMs, we have carried out their construction by using the software Wolfram Mathematica 12.1 in quadruple precision. All the other computations have been performed in double machine precision eps ∼ 2.220446049250313 × 10 −16 . Example 1. Let us consider the following equation: In this case g ∈ W 3 (u) and, according to Theorem 6, which holds since all the assumptions are satisfied, the errors are O(m −3 ); the numerical results reported in Table 1 are even better. All the linear systems are well conditioned, the ordinary condition numbers being slightly smaller than the mixed ones. The weighted absolute errors by ONM and MNM are displayed in Figure 1. In Tables 2 and 3, we report the results achieved by the mixed and ordinary Nyström methods and those obtained by the mixed and ordinary collocation methods in [4]. Indeed, the assumptions assuring stability and convergence for all the methods are satisfied; hence, the comparison makes sense.
We denote by E one n and E mix n the weighted maximum error attained by the Ordinary Collocation Method (OCM) and the Mixed Collocation Method (MCM) in [4] at the set (z i ) i=1,...,M , M = 1000 of equally spaced points of (−1, 1), respectively.
The results show that both the Nyström methods behave better than the collocation ones, and this is quite common in cases such as the one under consideration. Indeed, even if the solution f ∈ W 2 (u) (since g ∈ W 2 (u)), the rate of convergence of the collocation approach depends on both the approximations of the integral operator and the right-hand side. On the contrary, the order of convergence of the Nyström method depends essentially on the smoothness of the kernel. This is one of the reasons why in these cases the Nyström approach produces better results than the collocation one, as also announced in the Introduction.
In this test, the kernel k(x, y) = cos(250x) presents a fast oscillating behaviour. Its graph is reported in Figure 2. Hence, the product formula allows us to overcome the drawbacks deriving from the use of the Gauss-Jacobi rule. About the rate of convergence, since g ∈ W 5 (u), we expect the errors to be O(m −5 ). We have reported the values attained by ONM and MNM at three different points of the interval (−1, 1) (Table 4) and the maximum errors on the entire interval (Table 5). In all cases, the theoretical estimates are attained. Moreover, for both methods the condition numbers of the linear systems are comparable.
In this case, ρ and u satisfy the assumptions for the convergence of the Nyström methods, while they do not satisfy those of the collocation methods [4]. We recall that the convergence of both the collocation methods requires smoother kernels and more restrictive assumptions on the weights. About the rate of convergence, since g ∈ W 4 (u), we expect the errors to be O(m −4 ). Moreover, we have chosen this test to propose a comparison with the Nyström method obtained by approximating the coefficients {C k (y)} m k=1 in (5) by Gaussian rules. We will refer to this procedure as the Ordinary Nyström method by Gaussian rule (shortly, ONG). We point out that the nature of the kernel k makes this comparison possible, since the computation of the coefficients by the Gauss-Jacobi rule can be performed. Thus, in Table 6, in addition to the results by the mixed and ordinary Nyström methods, in the last two columns we report the maximum weighted errors attained by the ONG at the same set of nodes (z i ) i=1,...,M , M = 1000, and the condition numbers of the corresponding linear systems. They are denoted by E ONG n and cond ONG , respectively. The results by ONM and MNM are slightly better than the expected accuracy, and the condition numbers of the mixed linear systems are a little lower than their ordinary counterparts. With respect to the ONG method, as we can observe, the errors stagnate.
This test deals with a kernel that is the product of a periodic function having a high frequency and a "nearly" singular function. Such kernels, treated in the bidimensional case in [17], appear for instance in the solution of problems of propagation in uniform wave-guides with non-perfect conductors [18]. The graph of the kernel is given in Figure 3. With respect to the results of the equation, since g ∈ W r (u) for every r ≥ 1, a very fast convergence is expected, and this is confirmed by the numerical results reported in Table 7. The mixed condition numbers are significantly smaller than the ordinary ones. The graph of the weighted solution f u is provided in Figure 4.

Proposition 1 (Weak Jackson Inequality).
Let f ∈ C u be such that ∫ 0 1 (Ω r ϕ ( f , t) u /t) dt < ∞, with 1 ≤ r ∈ N. Then, the following inequality holds: where m > r and C = C(m, f ).
Proof of Theorem 1. Observing that the following is the case: the boundedness of the operator K is proved by using the first assumption of (4). Moreover, lim m E m (K f ) u = 0. Then, by using the weak Jackson inequality and (4), we obtain the following bound: Thus, the theorem follows.

Theorem 7.
Let (X , ‖ · ‖) be a Banach space. Assume K : X → X to be a bounded compact operator and K m : X → X , m ∈ N, to be a sequence of given bounded operators with lim m→∞ ‖K f − K m f ‖ = 0 for all f ∈ X . Consider the following operator equations: where I is the identity operator in X and g ∈ X . If lim m→∞ ‖(K − K m )K m ‖ = 0, i.e., the sequence {K m } m is collectively compact, then, for all sufficiently large m, (I − K m ) −1 exists and is uniformly bounded, with the following.
Moreover, denoting by f * ∈ X and f * m ∈ X the unique solutions of (32) and (33), respectively, the following holds.
Proof of Theorem 5. The invertibility of I − K follows under the assumptions of Theorem 1, while the uniform boundedness of { K 2m+1 ( f )} m and the following limit condition hold under the assumptions of Theorem 3.
It remains to be proven that the sequence { K 2m+1 } m is collectively compact. This is equivalent (see ([15], p. 114) and ([19], p. 44)) to proving that lim Hence, noting that the following is the case: the thesis is easily proved by using (34) and the uniform boundedness of the operator K.
Proof of Theorem 6. In order to prove (28), we first define the following sequence: obtained by composing two sub-sequences of those defined by the Nyström methods ONM in (17) and ENM in (22). As proved in Theorems 4 and 5, under the assumptions of Theorems 1, 4 and 5, these sequences are both convergent to the unique solution of the integral Equation (1). Therefore, all the sub-sequences converge to the same limit function f , and the speed of convergence is the same. Consequently, we can conclude that, with m = 2 n , the following is the case: Hence, what remains to be done in order to obtain (28) is to estimate the following distance: From the definition of the two polynomial sequences, we can write the following: We immediately recognize that, by the definition of a* m and c* m , using the estimates (18) and (23), we obtain the following.
Consequently, the following is the case: where ‖d‖ ∞ = max k |d k |, d = [d 1 , d 2 , . . . , d m ] T , denotes the infinity norm in R m . Now, we remark that, by (21) and (25), and under the assumption that D 2,2 is invertible, the following identity holds true: Therefore, we have the following: If we denote by D 2m+1 the matrix of coefficients in (21), we note that D 2,1 is a submatrix of it. By using standard arguments (see for instance [15]), it is possible to show that ‖D 2m+1 ‖ ∞ ≤ ‖I − K 2m+1 ‖ C u →C u , and the operator norm at the right-hand side is uniformly bounded with respect to m, since the sequence { K 2m+1 } m is bounded in virtue of (12). Therefore, since we are assuming that sup m ‖D −1 2,2 ‖ ∞ < ∞, we can conclude that the following is the case: Hence, by (36), we can obtain the following: and (28) follows by (37) and the estimate (12).

Conclusions
In this research, we have proposed a global Nyström method involving ordinary and extended product integration rules, both based on Jacobi zeros. Owing to the nature of the method, we can handle FIEs with kernels presenting some kinds of pathological behaviour, since the coefficients of the rules are exactly computed via recurrence relations. The method employs two different discrete sequences, namely the ordinary and the extended sequences, that are suitably mixed to strongly reduce the computational effort required by the ordinary Nyström method. Advantages are achieved with respect to the mixed collocation method in [4] from different points of view, which can be summarised as follows: we can treat FIEs having less regular kernels, under wider assumptions, and obtain a better rate of convergence. Such improvements have been shown by means of some numerical tests. In particular, Example 2 shows how the mixed Nyström method provides a better performance than the mixed collocation one in [4]. Furthermore, Example 4 shows how the assumptions of the mixed Nyström method are wider than those of the above mentioned mixed collocation one. Both methods allow us to reduce the sizes of the involved linear systems but require the computation of Modified and Generalized Modified Moments. In any case, once the kernel k and the order m are given, the algorithm can be organized by pre-computing the matrix of the system. Moreover, once the Modified Moments are given, the Generalized Modified Moments can always be deduced by a suitable recurrence relation (see, e.g., [8]). Therefore, the global process has a general applicability and only requires the assumptions of convergence to be satisfied. With respect to the Modified Moments, they can be computed through recurrence relations (see, e.g., [13]). However, when these relations are unstable, the Modified Moments can be accurately computed by suitable numerical methods.
For instance, in the case of highly oscillating or nearly singular kernels, this approach has been successfully applied by implementing "dilation" techniques [20,21]. The higher cost represents a well-known limitation of the classical Nyström methods based on product integration rules. They are more expensive since the coefficients of the rule, which can handle many different pathological kernels, have to be computed "exactly". On the other hand, this extra effort is amply repaid by the better performance with respect to other cheaper procedures. Finally, establishing whether the convergence conditions are also necessary is still an open problem. This will be a subject for further investigations.