1. Introduction
An important practical goal of computational mathematics is to develop the best, that is, the fastest and least expensive, ways of solving mathematical problems; in short, to optimize computational algorithms. The optimization of computational algorithms is well illustrated by the construction of cubature and quadrature formulas in the functional formulation. In this formulation, we consider functions belonging to some Banach space B. It is assumed that this space is embedded in the space of continuous functions defined in the domain . The integral of a function with a weight function over this region is a linear functional on B. Its approximate expression is another linear functional, and the error functional [1,2] of the cubature formula is also linear.
The problem of constructing a cubature formula in the functional formulation consists of finding a functional (1) whose norm in the space is minimal.
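For orientation, the display below sketches the generic shape this formulation takes in the Sobolev-style literature: the cubature sum, its error functional, and the minimization of the norm of that functional. The symbols p, C_nu, x_nu, epsilon_Omega, and delta (weight, coefficients, nodes, indicator function of the domain, and Dirac delta) are generic notation assumed here; the exact form of (1) in this paper may differ in detail.

```latex
% A hedged sketch of the functional formulation (generic notation, not a restatement of (1)):
\int_{\Omega} p(x)\,\varphi(x)\,dx \;\cong\; \sum_{\nu=1}^{N} C_{\nu}\,\varphi(x_{\nu}),
\qquad
(\ell,\varphi) = \int_{\Omega} p(x)\,\varphi(x)\,dx - \sum_{\nu=1}^{N} C_{\nu}\,\varphi(x_{\nu}),
\qquad
\ell(x) = p(x)\,\varepsilon_{\Omega}(x) - \sum_{\nu=1}^{N} C_{\nu}\,\delta(x - x_{\nu}),
\qquad
\|\ell\|_{B^{*}} = \sup_{\|\varphi\|_{B}\le 1}\bigl|(\ell,\varphi)\bigr| \;\to\; \min_{C_{\nu},\,x_{\nu}} .
```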
Studies of optimal and asymptotic cubature formulas are found in [1,2,3,4,5,6,7,8,9,10,11]. Optimization studies of quadrature formulas are presented in [12,13,14].
Currently, there are various methods for constructing optimal approximate integration formulas: the spline method, the function method and the Sobolev method.
In recent years, significant progress has been made using the Sobolev method in developing optimal quadrature formulas and analyzing their error estimates for the approximate evaluation of integrals of regular functions [15,16,17,18,19,20,21,22,23,24], singular integrals [25,26], fractional integrals [27], and integrals of rapidly oscillating functions [28].
In this paper, the construction of composite optimal quadrature formulas with weight in the Sobolev space is studied using the variational method. Here, the square of the norm of the error functional of composite quadrature formulas with a weight function is computed using the extremal function. By minimizing this norm with respect to the coefficients, a system of algebraic equations is derived. The uniqueness of the solution to this system is established. This approach allows the optimal coefficients of quadrature formulas involving a weight function to be determined.
In the case where the weight function is equal to one, the coefficients of the well-known Euler–Maclaurin quadrature formula are obtained from the general formula for the optimal coefficients.
The developed optimal quadrature formula is applied to obtain approximate solutions of certain linear Fredholm integral equations of the second kind, and the results are compared with those presented in [29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46].
2. Composite Quadrature Formulas of Hermite Type
Let us consider quadrature formulas of the form
in the space . Here, is the space of functions whose generalized derivative is square integrable on the interval [0, 1]; is the weight function; are the coefficients; are the nodes of the quadrature formula; and
Here, the integral may be regular, singular, fractional, or strongly oscillating.
The error of the quadrature formula is the difference between the integral and the sum
where is the characteristic function of the interval [0, 1], is Dirac’s delta function, and is the error functional of the quadrature Formula (3).
The functional of the form (4) is defined in the space , i.e., this functional belongs to the conjugate space , and thus we have
The problem of constructing an optimal quadrature formula of the form (3) with the error functional (4) in the space consists of finding the value of at fixed nodes .
In Formula (6), is the extremal function of the quadrature Formula (3) in the space .
Theorem 1. The extremal function of the error functional in the space is of the form (7), where is Green’s function of the operator , i.e., ; is some polynomial of degree .
At , i.e., for the functional , the extremal function was found in [1,2]. For any , Theorem 1 is proved in [8].
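For reference, in the Sobolev-space setting of [1,2,8] the extremal function is commonly written as a convolution of the error functional with the Green's function of the operator d^{2m}/dx^{2m}, together with an identity linking it to the norm of the functional. The display below is a hedged sketch of that standard form (the operator, the kernel G_m, and the polynomial term P_{m-1} are assumptions here); it is not a verbatim restatement of Formula (7).

```latex
% Standard form of the extremal function in L_2^{(m)}(0,1) (sketch; notation assumed):
\psi_{\ell}(x) = (-1)^{m}\,\ell(x) * G_{m}(x) + P_{m-1}(x),
\qquad
G_{m}(x) = \frac{x^{2m-1}\operatorname{sgn} x}{2\,(2m-1)!},
\qquad
\frac{d^{2m}}{dx^{2m}}\, G_{m}(x) = \delta(x),
\qquad
\bigl\|\ell \mid L_2^{(m)*}\bigr\|^{2} = (\ell,\psi_{\ell}).
```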
Since is a Hilbert space, the norm of the error functional is connected to the function through the following relation:
In addition, there is an equality
By substituting the extremal function given in Formula (7) into equality (9) and taking into account Expression (5), a series of calculations yields the squared norm of the error functional (4) corresponding to the quadrature Formula (3).
where
Recall that the coefficients in equality (10) must satisfy the system of linear equations (12), which is equivalent to the accuracy conditions (5) of the quadrature formula for polynomials of degree less than m. In system (12),
Now let us formulate the conditions for a minimum of the quadratic function on the set of vectors subject to relations (5). For this purpose, we apply the method of Lagrange multipliers.
Let us compose the Lagrange function
Equating to zero the partial derivatives of with respect to and , we obtain
where
In the system of Equations (13) and (14), the unknown variables are and . The solution to this system corresponds to a stationary point of the function , which we denote by and . According to the theory of constrained extrema, there is a sufficient condition ensuring that this solution represents a conditional minimum of on the manifold defined by (5): the associated quadratic form must be positive definite on the set of vectors , which yields the requirement
In matrix form system (16) has the form
We proceed to prove that in the considered case, the quadratic form (15) is positive definite.
Lemma 1. For any nonzero vector C belonging to the subspace defined by , the function takes strictly positive values.
Proof of Lemma 1. From the definition of the Lagrange function
, and by utilizing equality (15), it directly follows that…
Consider the following functional
It is known that, by condition (6), this functional belongs to , i.e., .
To this functional there corresponds an extremal function , which is a solution of Equation (20).
The solution of Equation (20) has the form
The square of the norm of the function in coincides with the form
It follows that for nonzero , the function is strictly positive, i.e., the positivity of for such C follows from the positivity of the norm in .
Lemma 1 is proved completely. □
Lemma 2. If the matrix S of system (16) has a right inverse, then the matrix Q of the system (13) and (14) is non-degenerate.
Proof of Lemma 2. Let us write the homogeneous system corresponding to systems (13) and (14) in the following form
where
Let us denote by G the matrix of quadratic form (18), and write the homogeneous systems (22) and (23) in the following form
Now we prove that the only solution of the homogeneous system (24) is identically zero, i.e., and .
Let be the solution of system (24).
Consider the functional corresponding to the vector
Clearly,
.
Let us take the following as the extremal function for the functional
:
This is possible because
and is a solution of Equation
The system of Equations (24) means that takes zero values at all nodes , i.e., , when . Then, for the norm in of the functional , we have
which is possible only at . Taking this into account, from (24) we obtain
By assumption, the matrix S has a right inverse; then has a left inverse. Hence , and from (25) it follows that .
Lemma 2 is proved completely. □
Thus, systems (13) and (14) have a unique solution. Therefore, the vector delivers a local minimum to the quadratic function on the set of solutions of system (5). The following theorem follows directly from Theorem 1 and Lemmas 1 and 2.
Theorem 2. Let the error functional of the quadrature Formula (3) be defined in the space , such that it vanishes on all polynomials of degree less than m and is optimal in the sense that, among all functionals of the form (4) with fixed nodes , it has the minimal norm in . Under these conditions, there exists a function that satisfies the corresponding equation and that vanishes, together with its derivatives of order , at the nodes , i.e., , and belongs to .
Theorem 2 generalizes a theorem of I. Babuška [47], i.e., it is proved that the extremal function of the error functional vanishes at the nodes .
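The constrained minimization behind systems (13) and (14) can be illustrated numerically. The sketch below is not the paper's system: it takes placeholder matrices G (the quadratic form), S (the constraints, playing the role of the matrix in (16)), and right-hand sides b, g, checks that G is positive definite on the kernel of S (the content of Lemma 1), and then solves the Lagrange-multiplier block system whose matrix plays the role of Q in Lemma 2.

```python
import numpy as np

# Minimal sketch of the Lagrange-multiplier construction of systems (13)-(14):
# minimize (1/2) C^T G C - b^T C  subject to  S C = g.
# G, S, b, g are placeholder data, not the matrices of the paper.
rng = np.random.default_rng(0)
n, k = 6, 2                                   # number of coefficients / constraints
A = rng.standard_normal((n, n))
G = A @ A.T + n * np.eye(n)                   # symmetric positive definite quadratic form
S = rng.standard_normal((k, n))               # full-rank constraint matrix (analogue of S)
b, g = rng.standard_normal(n), rng.standard_normal(k)

# Lemma 1 analogue: G is positive definite on the kernel of S.
_, _, Vt = np.linalg.svd(S)
Z = Vt[k:].T                                  # orthonormal basis of ker S
assert np.all(np.linalg.eigvalsh(Z.T @ G @ Z) > 0)

# Lemma 2 analogue: the block (KKT) matrix is nonsingular, so the stationarity
# conditions of the Lagrange function have a unique solution.
Q = np.block([[G, S.T],
              [S, np.zeros((k, k))]])
sol = np.linalg.solve(Q, np.concatenate([b, g]))
C_opt, lam = sol[:n], sol[n:]

print("constraint residual  :", np.linalg.norm(S @ C_opt - g))
print("stationarity residual:", np.linalg.norm(G @ C_opt - b + S.T @ lam))
```

Positive definiteness of G on the kernel of S is exactly what makes the block matrix nonsingular, so the linear solve returns the unique constrained minimizer; this mirrors the roles of Lemmas 1 and 2.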
4. Method for Constructing Weighted Optimal Quadrature Formulas Incorporating Derivatives
Let .
Our approach to constructing optimal quadrature formulas with derivatives proceeds as follows. Initially, for , that is, within the space , by minimizing the squared norm of the error functional (31) with respect to the coefficients (for ) under the constraints given by (28), we derive the following system for determining :
Here, (33) is obtained from (28) when . Systems (32) and (33) were solved in [8], i.e., there the optimal coefficients in the space were found. The application of this quadrature formula to linear integral equations is given in [48].
Next, let us consider the case m = 2. For this purpose, by substituting the found optimal coefficients into (31) and then minimizing the square of the norm with respect to the coefficients in the space , we obtain the optimal coefficients .
Continuing this method, we successively find the optimal coefficients . Substituting these coefficients into (31) and minimizing the square of the norm with respect to the coefficients in the space , we obtain a system for finding the optimal coefficients :
Here
5. Optimal Coefficients of Weighted Quadrature Formulas with Derivative
We now solve the system of linear algebraic Equations (34) and (35) with respect to , the optimal coefficients of weighted quadrature formulas with derivatives.
Let us rewrite systems (34) and (35) in the following form
Here, is a constant, and is determined by equality (36),
The coefficients of the first term of the first equation depend only on the difference . Equations of this kind in the continuous case, where the sum is replaced by an integral, are called Wiener–Hopf equations. As is typical for Wiener–Hopf equations, we assume that is defined everywhere, i.e., , and is equal to zero if .
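For readers less familiar with this terminology, the display below sketches the generic discrete convolution equation of Wiener–Hopf type and its continuous analogue; the kernel K and the right-hand side g here are generic symbols, not the specific functions appearing in systems (37) and (38).

```latex
% Generic Wiener--Hopf-type equations (illustrative notation only):
\sum_{\gamma=0}^{N} K\bigl(h\beta - h\gamma\bigr)\, C_{\gamma} = g(h\beta),
\quad \beta = 0,1,\dots,N,
\qquad\text{and}\qquad
\int_{0}^{\infty} K(x - y)\, C(y)\, dy = g(x), \quad x \ge 0 .
```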
Let us further assume . Then systems (37) and (38) can be written in the form of convolution equations
We now proceed to the actual solution of systems (39) and (41). For this purpose, instead of we introduce the following function
Next, we define the function when and .
Let , then by virtue of (40) we obtain
is unknown.
Let , then
is unknown.
So, we have defined a function for all values of :
Since in (42) at and the left- and right-hand sides coincide, we have
Then we obtain a new representation of the function in :
Now we will need the well-known formula from [9]:
where
By virtue of Formulas (43) and (44), we find the optimal coefficients at :
Hence, using Formula (45), we obtain
Thus, the following theorem is proved.
Theorem 3. The optimal coefficients of the quadrature formula of the form (26) in the Sobolev space are determined by Formulas (47)–(49).
From Theorem 3, when we obtain:
Corollary 1. The optimal coefficients of the quadrature formula of the form (26) at in the Sobolev space are defined by the formulas where are the Bernoulli numbers.
Corollary 1 demonstrates that the optimal coefficients coincide with those of the classical Euler–Maclaurin quadrature formula. The optimality of the Euler–Maclaurin quadrature formula in the space has been established in [8,9].
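Since Corollary 1 and Theorem 4 concern the classical Euler–Maclaurin formula, a small numerical check is easy to set up. The sketch below applies the textbook Euler–Maclaurin correction (trapezoidal sum plus Bernoulli-number end corrections with odd-order derivatives) to a smooth test integrand; the test function, the truncation order K, and the use of sympy are choices made here for illustration, not part of the paper.

```python
import math
import sympy as sp

# Textbook Euler-Maclaurin approximation of int_0^1 f(x) dx:
# trapezoidal sum + sum_{k=1}^{K} B_{2k}/(2k)! * h^{2k} * (f^{(2k-1)}(0) - f^{(2k-1)}(1)).
x = sp.Symbol("x")
f_expr = sp.exp(x) * sp.cos(x)            # smooth test integrand (an illustrative choice)
f = sp.lambdify(x, f_expr)

def euler_maclaurin(N=10, K=3):
    h = 1.0 / N
    trap = h * (0.5 * f(0.0) + sum(f(i * h) for i in range(1, N)) + 0.5 * f(1.0))
    corr = 0.0
    for k in range(1, K + 1):
        d = sp.diff(f_expr, x, 2 * k - 1)          # (2k-1)-th derivative of f
        B2k = sp.bernoulli(2 * k)                  # Bernoulli number B_{2k}
        corr += float(B2k) / math.factorial(2 * k) * h ** (2 * k) \
                * float(d.subs(x, 0) - d.subs(x, 1))
    return trap + corr

exact = float(sp.integrate(f_expr, (x, 0, 1)))
print("N=10, K=3 error:", abs(euler_maclaurin(10, 3) - exact))
print("N=20, K=3 error:", abs(euler_maclaurin(20, 3) - exact))
```

With the end corrections included, the error drops far below the plain trapezoidal O(h^2) rate, which is the behaviour Theorem 4 quantifies through the norm of the error functional.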
For completeness, we give the following theorem.
Theorem 4. The Euler–Maclaurin quadrature formula with the error functional is the optimal quadrature formula in the Sobolev space . The square of the norm of the error functional of the optimal Euler–Maclaurin quadrature formula is defined by the following equality. Here, are the Bernoulli numbers, , .
From Theorem 3, after some simplifications, we obtain the following results in the spaces .
Theorem 5. In the Sobolev space , the following quadrature formula is the optimal quadrature formula, where
Theorem 6. In the Sobolev space , the optimal quadrature formula is the following formula. Here, are determined from Theorem 5, and the optimal coefficients are computed by the formulas below:
Theorem 7. The optimal coefficients of the quadrature formula of the form in the space are defined by the formulas
6. Application of the Quadrature Formula with Derivatives to Linear Fredholm Equations of the Second Kind
Consider the following linear Fredholm equation of the second kind
where is the kernel of the integral equation, is the right-hand side, is the parameter of the integral equation, and is the unknown function to be determined.
Various numerical techniques, such as collocation methods, projection methods, and Galerkin methods, have been proposed for approximating the solution of Equation (50). These methods have been thoroughly studied with respect to their stability and convergence within appropriate function spaces, taking into account the smoothness characteristics of both the kernel K and the right-hand side function f (see, for example, [49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67]).
To solve Equation (50), we apply the quadrature Formula (26) and pass to a difference grid in the argument x.
Here, are the optimal coefficients of the quadrature formula and is the grid spacing.
In the system of Equations (51), the number of equations is , and the number of unknowns is , i.e., in addition to the unknown function, its derivatives at the nodal points participate in the system of equations. To resolve this, we differentiate Equation (50) times with respect to the argument x, and we have
Now applying the quadrature formula to system (52), we obtain
Thus, we obtain a system of linear algebraic equations with respect to . The values of the desired function are then determined by solving this system.
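As an illustration of this quadrature (Nystrom-type) approach, the sketch below discretizes a second-kind Fredholm equation u(x) = f(x) + lambda * int_0^1 K(x,t) u(t) dt on a uniform grid. It deliberately uses simple trapezoidal weights as a stand-in for the optimal coefficients of Formula (26), and a kernel and right-hand side chosen so that the exact solution is known; none of these choices come from the paper.

```python
import numpy as np

# Nystrom-type discretization of u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt.
# Trapezoidal weights stand in for the optimal coefficients of the paper;
# the kernel, right-hand side, and exact solution are illustrative choices.
lam = 0.5
K = lambda x, t: x * t                        # simple degenerate kernel
u_exact = lambda x: x                         # intended exact solution
# Right-hand side consistent with u_exact: f(x) = x - lam * x / 3.
f = lambda x: x - lam * x / 3.0

N = 16
xs = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N
w = np.full(N + 1, h)
w[0] = w[-1] = h / 2                          # trapezoidal quadrature weights

# Linear system (I - lam * K(x_i, x_j) * w_j) u_j = f(x_i) at the nodes.
A = np.eye(N + 1) - lam * K(xs[:, None], xs[None, :]) * w[None, :]
u = np.linalg.solve(A, f(xs))

max_abs_err = np.max(np.abs(u - u_exact(xs)))
print("maximum absolute error:", max_abs_err)
```

Replacing the trapezoidal weights with the optimal coefficients of Theorem 3, and augmenting the system with the differentiated Equation (52) as in (53), mirrors the construction described above; this sketch only shows the basic structure of the resulting linear system and of the maximum-absolute-error measure used in Section 7.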
7. Numerical Results
The following examples illustrate the solution of integral equations using the quadrature method based on the optimal quadrature formula with derivatives for . The obtained results are compared with the exact solutions and with those reported by other researchers, using the absolute error as the measure of accuracy.
Here, is the maximum absolute error, is the approximate solution, and is the exact solution.
It should be noted that a program implementing the above algorithm was written in the Maple language. All calculations were performed using 32 significant digits.
Example 1. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (54) is as follows:
The results for this example are summarized in Table 1.
Here ES is the exact solution, OQF is the optimal quadrature formula and MAE is the maximum absolute error.
The authors of [29] solved this example using a modified multistage average integral method and obtained a result with the maximum absolute error for equidistant collocation nodes.
Using the method of integral replacement with the twelfth-order quadrature formula, the authors of [45] obtained a result with the maximum absolute error at integration intervals.
Our method gave a result with a maximum absolute error of at .
Example 2. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (55) is as follows:
The results for this example are shown in Table 2.
To solve the Fredholm integral equation of the second kind, the authors of [30] developed a method using spline technology. Using this method, they obtained a result for this example with a maximum absolute error of at . Our method gave a result with a maximum absolute error of at .
Example 3. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (56) is as follows:
The results for this example are given in Table 3.
The authors of [42] developed a graph-theoretic polynomial approach using Hosoya polynomials and solved integral Equation (56). They obtained a result with the maximum absolute error at .
The authors of [43] solved integral Equation (56) and obtained a result with the maximum absolute error at .
Our method gave a result with a maximum absolute error of at .
Example 4. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (57) is as follows:
The results for this example are given in Table 4.
To solve integral Equation (57), the author of [32] applied the polynomial method using Boubaker polynomials and obtained a result with the maximum absolute error at .
The authors of [36] also applied the polynomial method. They used the Touchard polynomials and obtained a result with the maximum absolute error at for .
Using the Gauss–Lobatto quadrature formula, the author of [41] obtained good results. The maximum absolute error was at and .
The authors of [44] applied the polynomial method using Bernstein polynomials to solve integral Equation (57) and obtained a result with the maximum absolute error at and .
Our method gave a result with a maximum absolute error of at .
Example 5. In (50), . Then the integral Equation (50) takes the following form:
The exact solution of integral Equation (58) is as follows:
The results for this example are shown in Table 5.
The authors of [33] solved integral Equation (58) using the integral method of mean values and obtained a result with the maximum absolute error at .
Our method gave a result with a maximum absolute error of at .
Example 6. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (59) is as follows:
The results for this example are given in Table 6.
The authors of [34] solved integral Equation (59) by applying a new type of spline function of fractional order and obtained a result with the maximum absolute error at and . Our method gave a result with a maximum absolute error of at .
Example 7. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (60) is as follows:
The results for this example are given in Table 7.
The authors of [35] solved integral Equation (60) based on a special representation of vector forms of triangular functions and obtained a result with the maximum absolute error at .
The authors of [39], using a combination of Taylor series and block-pulse functions, solved the same integral Equation (60). They obtained a result with the maximum absolute error at .
The authors of [40] solved integral Equation (60) using Chebyshev polynomial approximation and obtained a result with the maximum absolute error at .
With a scheme based on Legendre polynomials and Legendre wavelets, the authors of [43] solved integral Equation (60) and obtained a result with the maximum absolute error at , and .
Our method gave results with maximum absolute errors ranging from to at .
Example 8. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (61) is as follows:
The results for this example are shown in Table 8.
The author of [37] solved integral Equation (61) using the Pell–Lucas series method and obtained a result with the maximum absolute error at .
Our method gave a result with a maximum absolute error of at .
Example 9. In (50), . Then the integral Equation (50) takes the following form:
The exact solution of integral Equation (62) is as follows:
The results for this example are shown in Table 9.
The authors of [38] solved integral Equation (62) using the Taylor series expansion method and obtained a result with the maximum absolute error at and .
Our method gave a result with a maximum absolute error of at .
Example 10. In (50), . Then integral Equation (50) takes the following form:
The exact solution of integral Equation (63) is as follows:
The results for this example are shown in Table 10.
The authors of [38] solved integral Equation (63) and obtained a result with the maximum absolute error at and .
The authors of [39], in addition to integral Equation (60), also solved integral Equation (63) and obtained a result with the maximum absolute error at .
The authors of [46] solved integral Equation (63) using general Legendre wavelets and obtained a result with the maximum absolute error at and . Our method gave a result with a maximum absolute error of at .