
# On Some Iterative Numerical Methods for Mixed Volterra–Fredholm Integral Equations

by
Sanda Micula
Department of Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
Symmetry 2019, 11(10), 1200; https://doi.org/10.3390/sym11101200
Submission received: 6 September 2019 / Revised: 21 September 2019 / Accepted: 23 September 2019 / Published: 24 September 2019
(This article belongs to the Special Issue Fixed Point Theory and Computational Analysis with Applications)

## Abstract

In this paper, we propose a class of simple numerical methods for approximating solutions of one-dimensional mixed Volterra–Fredholm integral equations of the second kind. These methods are based on fixed point results for the existence and uniqueness of the solution (results which also provide successive iterations of the solution) and suitable cubature formulas for the numerical approximations. We discuss in detail a method using Picard iteration and the two-dimensional composite trapezoidal rule, giving convergence conditions and error estimates. The paper concludes with numerical experiments and a discussion of the methods proposed.
MSC:
45B05; 45D05; 45P05; 47H09; 65D32

## 1. Introduction

Integral equations play an important role in the area of applied mathematics, as they arise from a variety of physical, engineering and biological problems.
In this work, we consider mixed Volterra–Fredholm integral equations (MVFIEs) of the form
$$u(x) = \int_a^x \int_a^b K\bigl(x, t, s, u(s)\bigr)\, ds\, dt + f(x), \qquad x \in [a, b],$$
where $K \in C\bigl([a,b] \times [a,b] \times [a,b] \times \mathbb{R}\bigr)$ and $f \in C[a,b]$; further assumptions on $K$ and $f$ are made below. Such equations arise in many applications in physics, fluid dynamics, electrodynamics, and biology. Various formulations of boundary value problems with Neumann, Dirichlet, or mixed boundary conditions reduce to such integral equations. They also provide mathematical models for the development of an epidemic and for numerous other physical and biological problems.
Over time, especially recently, there have been many papers devoted to studying these equations and their properties. Procedures for approximating their solutions numerically have been developed via collocation methods ([1,2,3]), CAS wavelets ([4]), Taylor expansion methods ([5]), block-pulse functions ([6]), linear programming ([7]), spectral methods ([8]), etc. For more considerations on mixed integral equations, see, e.g., [9,10].
The aim of this paper is to present a class of simple yet reliable numerical methods for approximating the solution of MVFIEs, using a combination of fixed point results for the existence and uniqueness of the solution and a suitable cubature formula for the numerical approximation of the successive iterates.
The rest of the paper is organized as follows. In Section 2, we analyze the solvability of Equation (1), using results from fixed point theory. In Section 3, we develop a numerical procedure for approximating the solution of Equation (1), when an appropriate cubature formula is used, discussing conditions for convergence and giving error estimates. In particular, we analyze in detail the convergence and give an error bound for the case when the two-dimensional composite trapezoidal rule is used. Section 4 contains numerical experiments that show the applicability of the described method. In Section 5, we draw conclusions and discuss future research ideas.

## 2. Solvability of the MVFIE in Equation (1)

We study the solvability of Equation (1) using fixed point theory. We recall the main results for fixed points on a Banach space.
Definition 1.
Let $(X, \|\cdot\|)$ be a Banach space. A mapping $F: X \to X$ is called a *q*-contraction if there exists $0 < q < 1$ such that
$$\|Fu - Fv\| \le q\,\|u - v\|$$
for all $u, v \in X$.
We have the classical result, the contraction principle on a Banach space.
Theorem 1.
Let $(X, \|\cdot\|)$ be a Banach space and $F: X \to X$ a *q*-contraction. Then:
(a) the equation $u = Fu$ has exactly one solution $u^* \in X$;
(b) the sequence of successive approximations $u_{n+1} = F u_n$, $n \in \mathbb{N}$, converges to the solution $u^*$ for any choice of initial point $u_0 \in X$; and
(c) the error estimate
$$\|u_n - u^*\| \le \frac{q^n}{1 - q}\,\|u_1 - u_0\|$$
holds for every $n \in \mathbb{N}$.
Remark 1.
Theorem 1 still holds if $X$ is replaced by any closed subset $Y \subseteq X$ satisfying $F(Y) \subseteq Y$. This is important below (see Remark 2): to approximate the solution of the MVFIE, we want to apply this fixed point result locally (on such a subset $Y$) rather than globally, on the entire space $X$.
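To make the role of Theorem 1 concrete, the successive approximations of part (b) and the a priori bound of part (c) can be illustrated on a simple scalar contraction. The sketch below (in Python; the function name `picard` and the test map are ours, not from the paper) uses $F(u) = \cos u$, which is a *q*-contraction with $q = \sin 1 < 1$, since $F$ maps $\mathbb{R}$ into $[-1,1]$ and $|F'(u)| = |\sin u| \le \sin 1$ there:

```python
import math

def picard(F, u0, n):
    """Successive approximations u_{k+1} = F(u_k), as in Theorem 1(b)."""
    u = u0
    for _ in range(n):
        u = F(u)
    return u

# F(u) = cos(u) maps R into [-1, 1], where |F'(u)| = |sin(u)| <= sin(1) < 1,
# so F is a q-contraction with q = sin(1).
q = math.sin(1.0)
u0 = 0.0
u1 = math.cos(u0)
n = 50
u_n = picard(math.cos, u0, n)

# a priori bound from Theorem 1(c): |u_n - u*| <= q^n / (1 - q) * |u_1 - u_0|
bound = q ** n / (1.0 - q) * abs(u1 - u0)
residual = abs(u_n - math.cos(u_n))  # small residual indicates convergence
```

After 50 iterations the residual lies far below the (pessimistic) a priori bound, in line with Theorem 1(c).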
We apply Theorem 1 to the MVFIE in Equation (1). To this end, we define the integral operator $F: C[a,b] \to C[a,b]$ associated with Equation (1) by
$$Fu(x) := \int_a^x \int_a^b K\bigl(x, t, s, u(s)\bigr)\, ds\, dt + f(x).$$
Then, finding a solution of the integral equation in Equation (1) is equivalent to finding a fixed point of the associated operator $F$:
$$u = Fu.$$
To use Theorem 1 (more precisely, Remark 1), we consider the space $X = C[a,b]$ equipped with the Chebyshev norm
$$\|u\| := \max_{x \in [a,b]} |u(x)|, \qquad u \in X,$$
and the ball $B_R := \{u \in C[a,b] : \|u - f\| \le R\}$, for some $R > 0$. Then, $(X, \|\cdot\|)$ is a Banach space and $B_R \subseteq X$ is a closed, convex subset. Thus, we have:
Theorem 2.
Let $K \in C\bigl([a,b] \times [a,b] \times [a,b] \times \mathbb{R}\bigr)$ and $f \in C[a,b]$. Assume that:
(i) there exists a constant $L > 0$ such that
$$|K(x,t,s,u) - K(x,t,s,v)| \le L\,|u - v|,$$
for all $x, t, s \in [a,b]$ and all $u, v \in [R_1 - R, R_2 + R]$, where $R_1 := \min_{x \in [a,b]} f(x)$ and $R_2 := \max_{x \in [a,b]} f(x)$;
(ii)
$$M_K (b-a)^2 \le R,$$
where $M_K := \max |K(x,t,s,u)|$ over all $x, t, s \in [a,b]$ and all $u \in [R_1 - R, R_2 + R]$;
(iii)
$$q := L (b-a)^2 < 1.$$
Then:
(a) Equation (3) has exactly one solution $u^* \in B_R$;
(b) the sequence of successive approximations
$$u_{n+1} = F u_n, \qquad n = 0, 1, \dots$$
converges to the solution $u^*$ for any choice of initial point $u_0 \in B_R$; and
(c) the error estimate
$$\|u_n - u^*\| \le \frac{q^n}{1 - q}\,\|u_1 - u_0\|$$
holds for every $n \in \mathbb{N}$.
Proof.
Let $u \in B_R$ be arbitrary. Since $\|u - f\| \le R$, it follows that $u(x) \in [R_1 - R, R_2 + R]$ for all $x \in [a,b]$. Then, for every fixed $x \in [a,b]$, we have
$$|Fu(x) - f(x)| \le \int_a^x \int_a^b \bigl|K\bigl(x, t, s, u(s)\bigr)\bigr|\, ds\, dt \le M_K (b-a)^2.$$
Thus, by Equation (5), $Fu \in B_R$.
Next, again for every fixed $x \in [a,b]$, by Equation (4), we have
$$|Fu(x) - Fv(x)| \le \int_a^x \int_a^b \bigl|K\bigl(x, t, s, u(s)\bigr) - K\bigl(x, t, s, v(s)\bigr)\bigr|\, ds\, dt \le L\,\|u - v\| \int_a^x \int_a^b ds\, dt \le q\,\|u - v\|.$$
Then,
$$\|Fu - Fv\| \le q\,\|u - v\|$$
and, by Equation (6), all the conclusions follow from Theorem 1 and Remark 1. □
Remark 2.
Let us discuss the Lipschitz condition in Equation (4). One case where it is relatively easy to verify is when $\left|\frac{\partial K}{\partial u}\right|$ is bounded: any bound on this partial derivative is then a Lipschitz constant $L$ for the kernel. Still, this condition can be quite restrictive if imposed on the entire space; it is much more relaxed when the partial derivative has to be bounded only locally, which is the reason we use a local fixed point result instead of a global one. We use this in our numerical examples. Of course, there exist functions with unbounded derivatives that still satisfy Equation (4). Even some non-differentiable kernels (for instance, some kernels with weak singularities) can verify a Lipschitz condition (see, e.g., [1]).
For more considerations on fixed point results, see, e.g., [11].

## 3. Numerical Approximation of the Solution

Now, we have an iterative procedure, given by Equation (7), for approximating the true solution $u^*$. However, to use it, we have to approximate the integrals numerically and give approximations of the iterative solution at some mesh points. We consider a cubature formula
$$\int_a^b \int_c^d \varphi(t, s)\, ds\, dt = \sum_{i=0}^{m_1} \sum_{j=0}^{m_2} a_{ij}\, \varphi(t_i, s_j) + R_\varphi,$$
with nodes $a = t_0 < t_1 < \dots < t_{m_1} = b$, $c = s_0 < s_1 < \dots < s_{m_2} = d$, coefficients $a_{ij} \in \mathbb{R}$, $i = 0, 1, \dots, m_1$, $j = 0, 1, \dots, m_2$, and for which the remainder satisfies
$$|R_\varphi| \le M,$$
for some $M > 0$, with $M \to 0$ as $m_1, m_2 \to \infty$.
Since we need to integrate on $[a, x] \times [a, b]$, $x \in [a, b]$, we use Equation (9) in the following way. Let $a = x_0 < x_1 < \dots < x_m = b$ be a partition of $[a, b]$ and let $u_0 = \tilde{u}_0 \equiv f$ be the initial approximation. Then, we use the iteration in Equation (7) and the numerical integration scheme in Equation (9) to approximate $u_n(x_k)$ by $\tilde{u}_n(x_k)$, for $k = \overline{0, m}$ and $n = 0, 1, \dots$. We have the following approximations:
$$u_1(x_k) = \int_a^{x_k} \int_a^b K\bigl(x_k, t, s, f(s)\bigr)\, ds\, dt + f(x_k) = \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, f(x_j)\bigr) + R_K + f(x_k) = \tilde{u}_1(x_k) + \tilde{R}_1,$$
where
$$\tilde{u}_1(x_k) = \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, f(x_j)\bigr) + f(x_k).$$
Then, denoting the maximum error at the nodes by
$$e(u_1, \tilde{u}_1) := \max_{x_k \in [a,b]} \bigl|u_1(x_k) - \tilde{u}_1(x_k)\bigr|,$$
we have, by Equation (10),
$$e(u_1, \tilde{u}_1) \le |\tilde{R}_1| \le M.$$
We proceed further in a similar way to get
$$\begin{aligned} u_2(x_k) &= \int_a^{x_k} \int_a^b K\bigl(x_k, t, s, u_1(s)\bigr)\, ds\, dt + f(x_k) \\ &= \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, u_1(x_j)\bigr) + R_K + f(x_k) \\ &= \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, \tilde{u}_1(x_j) + \tilde{R}_1\bigr) + R_K + f(x_k) \\ &= \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, \tilde{u}_1(x_j)\bigr) + f(x_k) + \tilde{R}_2 = \tilde{u}_2(x_k) + \tilde{R}_2, \end{aligned}$$
where
$$\tilde{u}_2(x_k) = \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, \tilde{u}_1(x_j)\bigr) + f(x_k).$$
Thus, the values $\tilde{u}_2(x_k)$ can be computed from the previous iteration.
Now, let us estimate the error. To this end, denote $\gamma := L \sum_{i=0}^{m} \sum_{j=0}^{m} |a_{ij}|$. Then, by Equation (11), we have
$$e(u_2, \tilde{u}_2) \le |\tilde{R}_2| \le \sum_{i=0}^{k} \sum_{j=0}^{m} |a_{ij}|\, L\, |\tilde{R}_1| + |R_K| \le L M \sum_{i=0}^{m} \sum_{j=0}^{m} |a_{ij}| + M = M (1 + \gamma).$$
Similarly, denoting
$$\tilde{u}_n(x_k) = \sum_{i=0}^{k} \sum_{j=0}^{m} a_{ij}\, K\bigl(x_k, x_i, x_j, \tilde{u}_{n-1}(x_j)\bigr) + f(x_k), \qquad k = 0, 1, \dots, m,$$
inductively, we get
$$e(u_n, \tilde{u}_n) \le |\tilde{R}_n| \le \gamma\, |\tilde{R}_{n-1}| + M \le M \bigl(1 + \gamma + \dots + \gamma^{n-1}\bigr).$$
Now, we can give an error estimate for our approximations.
Theorem 3.
Assume the conditions of Theorem 2 hold. Further, assume that the coefficients of the cubature in Equation (9) satisfy the condition
$$\gamma = L \sum_{i=0}^{m} \sum_{j=0}^{m} |a_{ij}| < 1.$$
Then, for the true solution $u^*$ of Equation (3) and the approximations $\tilde{u}_n$ given by Equation (14), the error estimate
$$e(\tilde{u}_n, u^*) \le \frac{q^n}{1 - q}\,\|u_1 - u_0\| + \frac{M}{1 - \gamma}$$
holds for every $n \in \mathbb{N}$.
Proof.
For every $k = 0, \dots, m$ and $n = 0, 1, \dots$, we have
$$\bigl|u^*(x_k) - \tilde{u}_n(x_k)\bigr| \le \bigl|u^*(x_k) - u_n(x_k)\bigr| + \bigl|u_n(x_k) - \tilde{u}_n(x_k)\bigr|.$$
The assertion then follows from Equations (15) and (16) and Theorem 2. □

#### Using the Trapezoidal Rule

Thus, to approximate the iterates $u_n(x_k)$, we can choose any numerical integration formula that satisfies the condition in Equation (16). Next, we propose one simple such formula, the two-dimensional composite trapezoidal rule:
$$\begin{aligned} \int_a^b \int_c^d \varphi(t, s)\, ds\, dt = \frac{(b-a)(d-c)}{4 m_1 m_2} \Bigl[ & \varphi(a, c) + \varphi(b, c) + \varphi(a, d) + \varphi(b, d) \\ & + 2 \sum_{i=1}^{m_1 - 1} \bigl(\varphi(t_i, c) + \varphi(t_i, d)\bigr) + 2 \sum_{j=1}^{m_2 - 1} \bigl(\varphi(a, s_j) + \varphi(b, s_j)\bigr) \\ & + 4 \sum_{i=1}^{m_1 - 1} \sum_{j=1}^{m_2 - 1} \varphi(t_i, s_j) \Bigr] + R_\varphi, \end{aligned}$$
with equidistant nodes $t_i = a + \frac{b-a}{m_1}\, i$, $s_j = c + \frac{d-c}{m_2}\, j$, $i = \overline{0, m_1}$, $j = \overline{0, m_2}$. The remainder is given by
$$R_\varphi = -\left[ \frac{(b-a)^3 (d-c)}{12\, m_1^2}\, \varphi^{(2,0)}(\xi, \eta_1) + \frac{(b-a)(d-c)^3}{12\, m_2^2}\, \varphi^{(0,2)}(\xi_1, \eta) + \frac{(b-a)^3 (d-c)^3}{144\, m_1^2 m_2^2}\, \varphi^{(2,2)}(\xi, \eta) \right],$$
with $\xi, \xi_1 \in (a, b)$, $\eta, \eta_1 \in (c, d)$, where the notation $\varphi^{(\alpha, \beta)}(t, s) = \frac{\partial^{\alpha + \beta} \varphi}{\partial t^\alpha\, \partial s^\beta}(t, s)$ is used.
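The composite rule just described is easy to implement directly. A minimal Python sketch follows (the function name `trapezoid_2d` is ours, not from the paper); it assembles the weights as a tensor product of the one-dimensional trapezoidal weights, which reproduces the bracketed sum above:

```python
import numpy as np

def trapezoid_2d(phi, a, b, c, d, m1, m2):
    """Two-dimensional composite trapezoidal rule on [a,b] x [c,d].

    Equivalent to the bracketed sum in the text: corner nodes get weight 1,
    edge nodes weight 2, interior nodes weight 4, all scaled by
    (b-a)(d-c) / (4 m1 m2)."""
    t = np.linspace(a, b, m1 + 1)
    s = np.linspace(c, d, m2 + 1)
    wt = np.ones(m1 + 1); wt[0] = wt[-1] = 0.5   # 1D trapezoidal weights in t
    ws = np.ones(m2 + 1); ws[0] = ws[-1] = 0.5   # 1D trapezoidal weights in s
    T, S = np.meshgrid(t, s, indexing="ij")
    W = np.outer(wt, ws)                          # tensor-product weights
    h = (b - a) / m1 * (d - c) / m2
    return h * np.sum(W * phi(T, S))
```

The rule is exact for integrands that are affine in each variable, and for smooth integrands the error decays at the $O(m_1^{-2} + m_2^{-2})$ rate of the remainder above.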
For a fixed $m$, we consider the nodes $x_k = a + \frac{b-a}{m}\, k$, $k = \overline{0, m}$. Then, we have
$$\begin{aligned} \int_a^{x_k} \int_a^b K\bigl(x_k, t, s, u_n(s)\bigr)\, ds\, dt = \frac{(x_k - a)(b-a)}{4 k m} \Bigl[ & K\bigl(x_k, a, a, u_n(a)\bigr) + K\bigl(x_k, x_k, a, u_n(a)\bigr) \\ & + K\bigl(x_k, a, b, u_n(b)\bigr) + K\bigl(x_k, x_k, b, u_n(b)\bigr) \\ & + 2 \sum_{i=1}^{k-1} \Bigl( K\bigl(x_k, x_i, a, u_n(a)\bigr) + K\bigl(x_k, x_i, b, u_n(b)\bigr) \Bigr) \\ & + 2 \sum_{j=1}^{m-1} \Bigl( K\bigl(x_k, a, x_j, u_n(x_j)\bigr) + K\bigl(x_k, x_k, x_j, u_n(x_j)\bigr) \Bigr) \\ & + 4 \sum_{i=1}^{k-1} \sum_{j=1}^{m-1} K\bigl(x_k, x_i, x_j, u_n(x_j)\bigr) \Bigr] + R_K, \end{aligned}$$
for each $k = 0, 1, \dots, m$.
Now, $\frac{x_k - a}{k} = \frac{b-a}{m}$; thus, in this case, $\gamma \le L (b-a)^2 = q$, which is already assumed to be strictly less than 1, by Equation (6).
To find $M$ from Equation (10), we have to find bounds for $K^{(2,0)}\bigl(x, t, s, u_n(s)\bigr)$, $K^{(0,2)}\bigl(x, t, s, u_n(s)\bigr)$ and $K^{(2,2)}\bigl(x, t, s, u_n(s)\bigr)$, as functions of $t$ and $s$. We have (suppressing the arguments $\bigl(x, t, s, u_n(s)\bigr)$ of $K$ on the right-hand sides)
$$\begin{aligned} K^{(2,0)}\bigl(x, t, s, u_n(s)\bigr) &= \frac{\partial^2 K}{\partial t^2}, \\ K^{(0,2)}\bigl(x, t, s, u_n(s)\bigr) &= \frac{\partial^2 K}{\partial s^2} + 2\, \frac{\partial^2 K}{\partial s\, \partial u}\, u_n'(s) + \frac{\partial^2 K}{\partial u^2}\, \bigl(u_n'(s)\bigr)^2 + \frac{\partial K}{\partial u}\, u_n''(s), \\ K^{(2,2)}\bigl(x, t, s, u_n(s)\bigr) &= \frac{\partial^4 K}{\partial t^2\, \partial s^2} + 2\, \frac{\partial^4 K}{\partial t^2\, \partial s\, \partial u}\, u_n'(s) + \frac{\partial^4 K}{\partial t^2\, \partial u^2}\, \bigl(u_n'(s)\bigr)^2 + \frac{\partial^3 K}{\partial t^2\, \partial u}\, u_n''(s), \end{aligned}$$
and
$$\begin{aligned} u_n(x) &= \int_a^x \int_a^b K\bigl(x, t, s, u_{n-1}(s)\bigr)\, ds\, dt + f(x), \\ u_n'(x) &= \int_a^b K\bigl(x, x, s, u_{n-1}(s)\bigr)\, ds + f'(x), \\ u_n''(x) &= \int_a^b \left( \frac{\partial K}{\partial x}\bigl(x, x, s, u_{n-1}(s)\bigr) + \frac{\partial K}{\partial t}\bigl(x, x, s, u_{n-1}(s)\bigr) \right) ds + f''(x). \end{aligned}$$
From the calculations above, it is clear that, if $K$ is a $C^4$ function with bounded fourth-order partial derivatives and $f$ is a $C^2$ function with bounded second-order derivatives, then there exists a constant $C > 0$, independent of $n$ or $m$, such that
$$|R_K| \le \frac{(b-a)^4}{12 m^2}\, C =: M.$$
Thus, when the trapezoidal rule is used, we have the following approximation result:
Theorem 4.
Assume the conditions of Theorem 2 hold with $K \in C^4\bigl([a,b] \times [a,b] \times [a,b] \times \mathbb{R}\bigr)$ and $f \in C^2[a,b]$. Then, for the true solution $u^*$ of Equation (3) and the approximations $\tilde{u}_n$ given by Equation (14), for all $n = 1, 2, \dots$ and any $m \in \mathbb{N}^*$, we have the error estimate
$$e(\tilde{u}_n, u^*) \le \frac{q^n}{1 - q}\,\|u_1 - u_0\| + \frac{(b-a)^4}{12 m^2}\, \frac{C}{1 - \gamma},$$
with $C$ given in Equation (21).
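The scheme of this section, Picard iteration combined with the composite trapezoidal cubature at the nodes $x_k$, can be sketched compactly. The following Python sketch (the name `solve_mvfie` and the test kernel are ours, not from the paper; it assumes $K$ is vectorized in its third and fourth arguments) computes the nodal approximations $\tilde{u}_n(x_k)$:

```python
import numpy as np

def solve_mvfie(K, f, a, b, m=24, n_iter=10):
    """Picard iteration + 2D composite trapezoidal rule for
        u(x) = int_a^x int_a^b K(x, t, s, u(s)) ds dt + f(x),
    returning the nodes x_k and the nodal approximations."""
    x = np.linspace(a, b, m + 1)        # nodes x_0, ..., x_m
    h = (b - a) / m
    ws = np.ones(m + 1)                 # trapezoidal weights in s on [a, b]
    ws[0] = ws[-1] = 0.5
    u = f(x)                            # initial approximation u_0 = f
    for _ in range(n_iter):
        v = np.empty_like(u)
        v[0] = f(x[0])                  # the integral over [a, a] vanishes
        for k in range(1, m + 1):
            wt = np.ones(k + 1)         # trapezoidal weights in t on [a, x_k]
            wt[0] = wt[-1] = 0.5
            # inner trapezoidal sum over s, for each outer node t_i = x_i
            inner = np.array([h * np.dot(ws, K(x[k], x[i], x, u))
                              for i in range(k + 1)])
            v[k] = h * np.dot(wt, inner) + f(x[k])
        u = v
    return x, u
```

As a quick sanity check (our own test case, not from the paper): for the linear kernel $K(x,t,s,u) = u/4$ on $[0,1]$ with $f(x) = 1 - x/4$, the exact solution is $u^*(x) \equiv 1$ and $q = 1/4 < 1$, so the computed nodal values converge to 1.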

## 4. Numerical Experiments

Example 1.
Consider the nonlinear MVFIE
$$u(x) = \frac{1}{2} \int_0^x \int_0^{\pi/4} \sin t\, \bigl(1 - u^2(s)\bigr)\, s\, ds\, dt + \frac{\pi - 2}{16} \left[ 1 + \left( \frac{16 \sqrt{2}}{\pi - 2} - 1 \right) \cos x \right], \qquad x \in \Bigl[0, \frac{\pi}{4}\Bigr],$$
with exact solution $u^*(x) = \sqrt{2} \cos x$.
We have $K(x, t, s, u) = \frac{s}{2}\, \sin t\, (1 - u^2)$ and $f(x) = \frac{\pi - 2}{16} \left[ 1 + \left( \frac{16 \sqrt{2}}{\pi - 2} - 1 \right) \cos x \right]$. On $\bigl[0, \frac{\pi}{4}\bigr]$, $f$ is a decreasing function, thus $R_1 = f(\pi/4) \approx 1.0209$ and $R_2 = f(0) = \sqrt{2}$. In addition, on $\bigl[0, \frac{\pi}{4}\bigr]$, $0 \le \sin t \le \frac{1}{\sqrt{2}}$. If we choose $R = 1$, then
$$\max_{u \in [R_1 - R,\, R_2 + R]} |1 - u^2| = \bigl|1 - (R_2 + R)^2\bigr| = 2 \bigl(1 + \sqrt{2}\bigr).$$
Thus, for $x, t, s \in \bigl[0, \frac{\pi}{4}\bigr]$ and $u \in [R_1 - R, R_2 + R]$, we have
$$M_K (b-a)^2 \le \frac{\pi}{8} \cdot \frac{1}{\sqrt{2}} \cdot 2 \bigl(1 + \sqrt{2}\bigr) \cdot \left( \frac{\pi}{4} \right)^2 = \left( \frac{\pi}{4} \right)^3 \left( 1 + \frac{1}{\sqrt{2}} \right) \approx 0.827 < R.$$
Now, to find the Lipschitz constant $L$, we compute $\frac{\partial K}{\partial u} = -s u \sin t$. Then, for $x, t, s \in \bigl[0, \frac{\pi}{4}\bigr]$ and $u \in [R_1 - R, R_2 + R]$, we get
$$L (b-a)^2 \le \frac{\pi}{4} \cdot \frac{1}{\sqrt{2}} \cdot (R_2 + R) \cdot \left( \frac{\pi}{4} \right)^2 = \left( \frac{\pi}{4} \right)^3 \left( 1 + \frac{1}{\sqrt{2}} \right) \approx 0.827 < 1.$$
Thus, the conditions of Theorem 4 are satisfied.
We use the trapezoidal rule with $m = 12$ and $m = 24$, with the corresponding nodes $x_k = \frac{\pi}{4m}\, k$, $k = \overline{0, m}$. Table 1 contains the errors $e(u^*, \tilde{u}_n)$, with initial approximation $u_0(x) = f(x)$.
Example 2.
Next, consider the nonlinear MVFIE
$$u(x) = \frac{1}{4} \int_0^x \int_0^1 t\, e^{x - 2s}\, u^2(s)\, ds\, dt + \frac{1}{600}\, x (120 - x)\, e^x, \qquad x \in [0, 1],$$
whose true solution is $u^*(x) = \frac{1}{5}\, x e^x$.
In this case, $K(x, t, s, u) = \frac{1}{4}\, e^x\, t\, e^{-2s}\, u^2$, $f(x) = \frac{1}{600}\, x (120 - x)\, e^x$ and $\frac{\partial K}{\partial u} = \frac{1}{2}\, e^x\, t\, e^{-2s}\, u$. Now $R_1 = 0$ and $R_2 \approx 0.5391$.
Choose again $R = 1$. Then, for $x, t, s \in [0, 1]$ and $u \in [R_1 - R, R_2 + R]$, we have
$$M_K (b-a)^2 \le \frac{1}{4} (R_2 + R)^2 \approx 0.5922 \le R \qquad \text{and} \qquad L (b-a)^2 \le \frac{1}{2} (R_2 + R) \approx 0.7695 < 1,$$
thus Theorem 4 applies.
For $m = 12$ and $m = 24$, we use the trapezoidal rule with nodes $x_k = \frac{1}{m}\, k$, $k = \overline{0, m}$. In Table 2, we give the errors $e(u^*, \tilde{u}_n)$, with initial approximation $u_0(x) = f(x)$.
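The scheme is small enough to re-run as a cross-check of Table 2. The Python sketch below is our own code (not from the paper); it applies Picard iteration with the composite trapezoidal rule to Example 2, using the same $m = 24$ and $n = 10$ as the last row of Table 2, and compares the nodal values with the exact solution $u^*(x) = \frac{1}{5} x e^x$. The resulting maximum nodal error should be of the order of the tabulated value:

```python
import numpy as np

# Example 2: u(x) = (1/4) int_0^x int_0^1 t e^{x-2s} u(s)^2 ds dt
#                    + x (120 - x) e^x / 600,  on [0, 1]
K = lambda x, t, s, u: 0.25 * t * np.exp(x - 2.0 * s) * u ** 2
f = lambda x: x * (120.0 - x) * np.exp(x) / 600.0
exact = lambda x: 0.2 * x * np.exp(x)           # true solution u*(x) = (1/5) x e^x

m, n_iter = 24, 10                              # as in the last row of Table 2
x = np.linspace(0.0, 1.0, m + 1)
h = 1.0 / m
ws = np.ones(m + 1); ws[0] = ws[-1] = 0.5       # trapezoidal weights in s
u = f(x)                                        # initial approximation u_0 = f
for _ in range(n_iter):
    v = np.empty_like(u)
    v[0] = f(x[0])                              # integral over [0, 0] vanishes
    for k in range(1, m + 1):
        wt = np.ones(k + 1); wt[0] = wt[-1] = 0.5
        inner = np.array([h * np.dot(ws, K(x[k], x[i], x, u))
                          for i in range(k + 1)])
        v[k] = h * np.dot(wt, inner) + f(x[k])
    u = v

err = np.max(np.abs(u - exact(x)))              # maximum nodal error e(u*, u~_n)
```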

## 5. Conclusions

In this paper, we developed a class of iterative numerical methods for the solution of one-dimensional mixed Volterra–Fredholm integral equations of the second kind. These methods combine successive approximations of the analytical solution (provided by fixed point theory) with numerical approximations of the integrals involved in the iterates. Many results in fixed point theory can be used, as long as they provide an iterative procedure that converges to the true analytic solution. In addition, we described the conditions that a quadrature rule has to satisfy in order to guarantee the convergence of the approximations of the solution at the mesh points to the true values at those points; again, many such numerical schemes can be employed. We chose the contraction principle for the first part and the two-dimensional composite trapezoidal rule for the numerical integration of the iterates. These choices were made in order to obtain a fairly simple numerical method: simple in the proof of its convergence, in the conditions to be met and, especially, in its implementation. Most mathematical software packages have the trapezoidal rule built in, so very little further coding is required. Another advantage of these types of methods, compared to, e.g., collocation, Galerkin or Nyström methods (or other numerical methods that seek the solution in a certain form, substitute it into the equation and then force the equation to hold at some mesh points), is that they do not lead to a system of algebraic equations, which is oftentimes ill-conditioned and, thus, difficult to solve. Although the method presented is fairly easy to use and implement, as the numerical examples show, it gives good approximations even with a relatively small number of iterations and of quadrature nodes.
Similar ideas can be used for other types of mixed integral equations, with more complicated kernel functions, such as kernels with some type of singularity or kernels with modified arguments. In addition, other fixed point successive approximation results can be considered (such as Mann iteration or Krasnoselskii iteration), which, under certain conditions, may converge faster than Picard iteration. To increase the speed of convergence of the method, other, higher-order numerical integration schemes can also be used (such as spline quadratures, see, e.g., [12,13,14]), as long as they satisfy the convergence conditions of Theorem 3.

## Funding

This research received no external funding.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Brunner, H. Collocation Methods for Volterra Integral and Related Functional Differential Equations; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
2. Hafez, R.M.; Doha, E.H.; Bhrawy, A.H.; Băleanu, D. Numerical Solutions of Two-Dimensional Mixed Volterra-Fredholm Integral Equations Via Bernoulli Collocation Method. Rom. J. Phys. 2017, 62, 111. [Google Scholar]
3. Ordokhani, Y.; Razzaghi, M. Solution of nonlinear Volterra–Fredholm–Hammerstein integral equations via a collocation method and rationalized Haar functions. Appl. Math. Lett. 2008, 21, 4–9. [Google Scholar] [CrossRef]
4. Ezzati, R.; Najafalizadeh, S. Numerical Methods for Solving Linear and Nonlinear Volterra-Fredholm Integral Equations by Using Cas Wavelets. World Appl. Sci. J. 2012, 18, 1847–1854. [Google Scholar]
5. Chen, Z.; Jiang, W. An Approximate Solution for a Mixed Linear Volterra-Fredholm Integral Equations. Appl. Math. Lett. 2012, 25, 1131–1134. [Google Scholar] [CrossRef]
6. Mashayekhi, S.; Razzaghi, M.; Tripak, O. Solution of the Nonlinear Mixed Volterra-Fredholm Integral Equations by Hybrid of Block-Pulse Functions and Bernoulli Polynomials. Sci. World J. 2014, 2014, 1–8. [Google Scholar] [CrossRef] [PubMed]
7. Hasan, P.M.A.; Suleiman, N.A. Numerical Solution of Mixed Volterra-Fredholm Integral Equations Using Linear Programming Problem. Appl. Math. 2018, 8, 4–45. [Google Scholar]
8. Aziz, I.; Islam, S. New algorithms for the numerical solution of nonlinear Fredholm Volterra integral equations using Haar wavelets. J. Comput. Appl. Math. 2013, 239, 333–345. [Google Scholar] [CrossRef]
9. Atkinson, K.E. The Numerical Solution of Integral Equations of the Second Kind, Cambridge Monographs on Applied and Computational Mathematics, 4; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
10. Wazwaz, A.M. Linear and Nonlinear Integral Equations, Methods and Applications; Higher Education Press: Beijing, China; Springer: New York, NY, USA, 2011. [Google Scholar]
11. Berinde, V. Iterative Approximation of Fixed Points, Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2007. [Google Scholar]
12. Barton, M.; Calo, V.M. Gauss-Galerkin quadrature rules for quadratic and cubic spline spaces and their application to isogeometric analysis. Comput. Aided Des. 2017, 82, 57–67. [Google Scholar] [CrossRef]
13. Hiemstra, R.; Calabro, F.; Schillinger, D.; Hughes, T.J.R. Optimal and reduced quadrature rules for tensor product and hierarchically refined splines in isogeometric analysis. Comput. Method Appl. Mech. Eng. 2017, 316, 966–1004. [Google Scholar] [CrossRef] [Green Version]
14. Sablonnière, P. Univariate spline quasi-interpolants and applications to numerical analysis. Rend. Del Semin. Mat. 2005, 63, 211–222. [Google Scholar]
Table 1. Errors for Example 1.

| $n \backslash m$ | 12 | 24 |
|---:|:---:|:---:|
| 1 | $1.061540 \times 10^{-1}$ | $1.923056 \times 10^{-2}$ |
| 5 | $9.798965 \times 10^{-4}$ | $2.386057 \times 10^{-4}$ |
| 10 | $1.318075 \times 10^{-4}$ | $3.071171 \times 10^{-5}$ |
Table 2. Errors for Example 2.

| $n \backslash m$ | 12 | 24 |
|---:|:---:|:---:|
| 1 | $4.110239 \times 10^{-2}$ | $1.260717 \times 10^{-2}$ |
| 5 | $2.253881 \times 10^{-4}$ | $5.973695 \times 10^{-5}$ |
| 10 | $1.521347 \times 10^{-5}$ | $3.273994 \times 10^{-6}$ |
