
A Hybrid Function Approach to Solving a Class of Fredholm and Volterra Integro-Differential Equations

Aline Hosry, Roger Nakad and Sachin Bhalekar
1 Department of Mathematics, Faculty of Science II, Lebanese University, Fanar P.O. Box 90656, El Metn, Lebanon
2 Department of Mathematics and Statistics, Faculty of Natural and Applied Sciences, Notre Dame University-Louaizé, Zouk Mikael P.O. Box 72, Lebanon
3 Department of Mathematics, Shivaji University, Vidyanagar, Kolhapur 416004, India
4 School of Mathematics and Statistics, University of Hyderabad, Hyderabad 500046, India
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2020, 25(2), 30; https://doi.org/10.3390/mca25020030
Submission received: 4 March 2020 / Revised: 12 May 2020 / Accepted: 22 May 2020 / Published: 27 May 2020
(This article belongs to the Section Natural Sciences)

Abstract: In this paper, we use a numerical method that involves hybrid and block-pulse functions to approximate solutions of systems of a class of Fredholm and Volterra integro-differential equations. The key point is to derive a new approximation for the derivatives of the solutions and then reduce the integro-differential equation to a system of algebraic equations that can be solved using classical methods. Some numerical examples are presented to show the efficiency and validity of the proposed method.

1. Introduction

It is well known that integral equations govern many mathematical models of various phenomena in mathematics, physics, economics, biology, engineering and other fields. Several explanatory examples of such models can be found in the literature, and many problems of mathematical physics, applied mathematics and engineering are reduced to Volterra–Fredholm integral equations. More precisely, researchers have explored integro-differential equations in various fields of science such as physics [1], biology [2] and engineering [3,4], and in numerous applications such as heat transfer, neurosciences [5], diffusion processes, neutron diffusion, biological species [6,7], biomechanics, economics, electrical engineering, electrodynamics, electrostatics, filtration theory, fluid dynamics, game theory, oscillation theory, queuing theory [8], airfoil theory [9], elastic contact problems [10,11], fracture mechanics [12], combined infrared radiation and molecular conduction [13], and so on.
In recent years, many different basis functions have been used to estimate the solution of integral equations, such as orthogonal functions and wavelets. The orthogonal functions are commonly classified into three families: piecewise constant orthogonal functions (e.g., Walsh, Haar, block-pulse), orthogonal polynomials (e.g., Legendre, Laguerre, Chebyshev) and the sine–cosine functions of the Fourier series. For instance, many authors have investigated the general k-th order integro-differential equation
$$ y^{(k)}(t) + l(t)\, y(t) + \int_a^b g(t,s)\, y^{(m)}(s)\, ds = f(t), \qquad (1) $$
with initial conditions
$$ y(a) = a_0,\ \ldots,\ y^{(n-1)}(a) = a_{n-1}, $$
where $a_0, \ldots, a_{n-1}$ are real constants, $k, m$ are positive integers with $m < k$, the functions $l, f, g$ are given and $y(t)$ is the solution to be determined. In [14], the authors applied the homotopy perturbation method to solve Equation (1), while in [15,16], the authors changed the equation to an ordinary integro-differential equation and applied the variational iteration method to solve it so that the Lagrange multipliers can be effectively identified. Using the operational matrix of derivatives of hybrid functions, a numerical method has been presented in [17] to solve Equation (1). In [18], Hemeda used the iterative method introduced in [19] to solve the more general equation
$$ y^{(k)}(t) + l(t)\, y(t) + \int_a^b g(t,s)\, y^{(n)}(s)\, y^{(m)}(s)\, ds = f(t), \qquad (2) $$
where $n, m < k$.
In this paper, we use block-pulse and hybrid functions to approximate solutions $y(t)$ of the Fredholm integro-differential system given by
$$ y(t) + \lambda \int_0^1 k(t,s)\, y^{(m)}(s)\, y^{(n)}(s)\, ds = f(t), \qquad y(0) = a_0,\ \ldots,\ y^{(l)}(0) = a_l, \qquad (3) $$
and solutions $y(t)$ of the Volterra integro-differential system given by
$$ y(t) + \beta \int_0^t g(t,s)\, y^{(m)}(s)\, y^{(n)}(s)\, ds = f(t), \qquad y(0) = a_0,\ \ldots,\ y^{(l)}(0) = a_l. \qquad (4) $$
Here $m, n$ are positive integers, $l = \max(m,n) - 1$, $a_0, \ldots, a_l$ are initial conditions, the parameters $\beta, \lambda$ and the functions $k(t,s)$, $g(t,s)$ and $f(t)$ are known and belong to $L^2[0,1)$. The function $y(t)$, as well as its derivatives $y^{(n)}$ and $y^{(m)}$, are unknown. We point out that System (3) is a particular case of Equation (2).
Hybrid functions have been applied extensively for solving differential systems and proved to be a useful mathematical tool. The pioneering work via hybrid functions was led by the authors in [20,21], who first derived an operational matrix for the integral of the hybrid function vector, and paved the way for the hybrid function analysis of the dynamic systems. Since then, the hybrid functions’ approach has been improved and used to approximate differential equations or systems (see [22,23,24,25,26,27,28] and the references therein).
The novelty and the key point in solving Systems (3) and (4) are to use some useful properties of hybrid functions to derive a new approximation $Y^{(n)}$ of the derivative $y^{(n)}(t)$ of order $n$ of the solution $y(t)$ (see Lemma 1). Hence, Systems (3) and (4) can be converted into reduced algebraic systems.
For arbitrary positive integers $q$ and $r$, the set $\{b_{km}(t)\}$, $k = 1, 2, \ldots, q$, $m = 0, 1, \ldots, r-1$, of hybrid functions will be used to approximate the solution $y_{r-1}(t)$ of the given system of integro-differential equations. This approximate solution will involve Legendre polynomials of degree $r-1$ defined on $q$ subintervals of $[0,1]$.
This paper is organized as follows: in Section 2, we introduce hybrid functions and their properties. In Section 3, we describe the method for approximating solutions of the Fredholm and Volterra integro-differential Systems (3) and (4). An upper bound of the error is given in Section 4, and finally numerical results are reported in Section 5.

2. Preliminaries

In this section, we define the Legendre polynomials $p_m(t)$, as well as block-pulse and hybrid functions. We also recall the approximation of functions in the Hilbert space $L^2[-1,1]$.
The Legendre polynomials $p_m(t)$ are polynomials of degree $m$ defined on the interval $[-1,1]$ by
$$ p_m(t) = \sum_{k=0}^{M} (-1)^k\, \frac{(2m-2k)!}{2^m\, k!\, (m-k)!\, (m-2k)!}\, t^{m-2k}, \qquad m \in \mathbb{N}, $$
where $M = \frac{m}{2}$ if $m$ is even, and $M = \frac{m-1}{2}$ if $m$ is odd.
Equivalently, the Legendre polynomials are given by the recursive formula (see [6,23,29,30])
$$ p_0(t) = 1, \qquad p_1(t) = t, \qquad p_{m+1}(t) = \frac{2m+1}{m+1}\, t\, p_m(t) - \frac{m}{m+1}\, p_{m-1}(t), \qquad m = 1, 2, 3, \ldots $$
The set $\{p_m(t);\ m = 0, 1, \ldots\}$ is a complete orthogonal system in $L^2[-1,1]$.
Definition 1.
[6,23,29] For an arbitrary positive integer $q$, let $\{b_k(t)\}_{k=1}^{q}$ be the finite set of block-pulse functions on the interval $[0,1)$ defined by
$$ b_k(t) = \begin{cases} 1, & \text{if } \frac{k-1}{q} \le t < \frac{k}{q}, \\ 0, & \text{elsewhere}. \end{cases} $$
The block-pulse functions are disjoint and have the property of orthogonality on $[0,1)$, since for $i, j = 1, 2, \ldots, q$, we have:
$$ b_i(t)\, b_j(t) = \begin{cases} 0, & \text{if } i \ne j, \\ b_i(t), & \text{if } i = j, \end{cases} $$
and
$$ \langle b_i(t), b_j(t) \rangle = \begin{cases} 0, & \text{if } i \ne j, \\ \frac{1}{q}, & \text{if } i = j, \end{cases} $$
where $\langle \cdot\,, \cdot \rangle$ is the scalar product given by $\langle f, g \rangle = \int_0^1 f(t)\, g(t)\, dt$, for any functions $f, g \in L^2[0,1)$.
Definition 2.
[6,23,29,30] Let $r$ be an arbitrary positive integer. The set of hybrid functions $\{b_{km}(t)\}$, $k = 1, 2, \ldots, q$, $m = 0, 1, \ldots, r-1$, where $k$ is the order for block-pulse functions, $m$ is the order for Legendre polynomials and $t$ is the normalized time, is defined on the interval $[0,1)$ as
$$ b_{km}(t) = \begin{cases} p_m(2qt - 2k + 1), & \text{if } \frac{k-1}{q} \le t < \frac{k}{q}, \\ 0, & \text{elsewhere}. \end{cases} $$
Since $b_{km}(t)$ is the combination of Legendre polynomials and block-pulse functions, which are both complete and orthogonal, the set of hybrid functions is a complete orthogonal system in $L^2[0,1)$.
We are now able to define the vector function $B(t)$ of hybrid functions on $[0,1)$ by
$$ B(t) = \left[ B_1^T(t), \ldots, B_q^T(t) \right]^T, $$
where $B_i(t) = \left[ b_{i0}(t), \ldots, b_{i(r-1)}(t) \right]^T$, for $i = 1, 2, \ldots, q$, and $V^T$ denotes the transpose of a vector $V$.
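As a concrete illustration of Definition 2 and of the vector $B(t)$, the following minimal Python sketch (not the authors' code; it assumes NumPy and SciPy are available) evaluates the hybrid basis vector at a point.

```python
import numpy as np
from scipy.special import eval_legendre

def hybrid_basis(t, q, r):
    """Return the rq-vector B(t) whose nonzero entries are
    b_{km}(t) = P_m(2qt - 2k + 1) for the block k containing t."""
    B = np.zeros(q * r)
    k = min(int(np.floor(q * t)) + 1, q)   # block with (k-1)/q <= t < k/q
    x = 2.0 * q * t - 2.0 * k + 1.0        # map that block onto [-1, 1]
    for m in range(r):
        B[(k - 1) * r + m] = eval_legendre(m, x)
    return B

# For q = 1, r = 2 this reproduces B(t) = (1, 2t - 1)^T used in Example 1, Case 1:
print(hybrid_basis(0.3, q=1, r=2))   # [ 1.  -0.4]
```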
Function approximation [6,23,29,30]: every function $f(t) \in L^2[0,1)$ can be approximated as
$$ f(t) \approx \sum_{k=1}^{q} \sum_{m=0}^{r-1} f_{km}\, b_{km}(t), $$
where
$$ f_{km} = \frac{\langle f(t), b_{km}(t) \rangle}{\langle b_{km}(t), b_{km}(t) \rangle}, \qquad k = 1, \ldots, q, \quad m = 0, \ldots, r-1. $$
Thus,
$$ f(t) \approx F^T B(t) = B^T(t)\, F, \qquad (5) $$
where $F$ is the $rq \times 1$ column vector having the $f_{km}$ as entries. In a similar way, any function $g(t,s) \in L^2\big([0,1) \times [0,1)\big)$ can be approximated as
$$ g(t,s) \approx B^T(t)\, G\, B(s), \qquad (6) $$
where $G = (g_{ij})$ is an $rq \times rq$ matrix given by
$$ g_{ij} = \frac{\big\langle B_{(i)}(t),\, \langle g(t,s), B_{(j)}(s) \rangle \big\rangle}{\langle B_{(i)}(t), B_{(i)}(t) \rangle\, \langle B_{(j)}(s), B_{(j)}(s) \rangle}, \qquad i, j = 1, 2, \ldots, rq, $$
and $B_{(i)}(t)$ (resp. $B_{(j)}(s)$) denotes the $i$-th component (resp. the $j$-th component) of $B(t)$ (resp. $B(s)$).
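For readers who want to reproduce the coefficient vector $F$ of (5) and the matrix $G$ of (6) numerically, the sketch below (an illustration under stated assumptions, not the authors' implementation; it assumes NumPy/SciPy) evaluates the inner products by quadrature and uses $\langle b_{km}, b_{km} \rangle = 1/(q(2m+1))$.

```python
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.special import eval_legendre

def coeff_vector(f, q, r):
    """Hybrid coefficients F of a function f on [0,1), as in (5)."""
    F = np.zeros(q * r)
    for k in range(1, q + 1):
        a, b = (k - 1) / q, k / q
        for m in range(r):
            val, _ = quad(lambda t: f(t) * eval_legendre(m, 2*q*t - 2*k + 1), a, b)
            F[(k - 1) * r + m] = val * q * (2*m + 1)   # divide by <b_km, b_km>
    return F

def coeff_matrix(g, q, r):
    """Hybrid coefficient matrix G of a kernel g(t, s), as in (6)."""
    n = q * r
    G = np.zeros((n, n))
    for i in range(n):
        ki, mi = divmod(i, r)
        for j in range(n):
            kj, mj = divmod(j, r)
            val, _ = dblquad(
                lambda s, t: g(t, s)
                * eval_legendre(mi, 2*q*t - 2*(ki + 1) + 1)
                * eval_legendre(mj, 2*q*s - 2*(kj + 1) + 1),
                ki / q, (ki + 1) / q,      # t-limits (outer)
                kj / q, (kj + 1) / q)      # s-limits (inner)
            G[i, j] = val * q * (2*mi + 1) * q * (2*mj + 1)
    return G
```

For instance, the matrices of Example 1 below could be approximated by coeff_matrix(lambda t, s: np.exp(t - s), q, r) and coeff_vector(lambda t: np.exp(t + 1), q, r).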
Operational matrix of integration [6,23,29,30]: The integration of the vector function $B(t)$ may be approximated by $\int_0^t B(s)\, ds \approx P\, B(t)$, where $P$ is an $rq \times rq$ matrix known as the operational matrix of integration and given by
$$ P = \begin{pmatrix} E & H & H & \cdots & H \\ 0 & E & H & \cdots & H \\ 0 & 0 & E & \cdots & H \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & E \end{pmatrix}, $$
where $H$ and $E$ are $r \times r$ matrices defined by
$$ H = \frac{1}{q} \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix} $$
and
$$ E = \frac{1}{2q} \begin{pmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 \\ -\frac{1}{3} & 0 & \frac{1}{3} & 0 & \cdots & 0 & 0 & 0 \\ 0 & -\frac{1}{5} & 0 & \frac{1}{5} & \cdots & 0 & 0 & 0 \\ \vdots & & & & \ddots & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -\frac{1}{2r-3} & 0 & \frac{1}{2r-3} \\ 0 & 0 & 0 & 0 & \cdots & 0 & -\frac{1}{2r-1} & 0 \end{pmatrix}. $$
The integration of two hybrid functions [6,23,29,30]: the integration of the cross product of two hybrid function vectors is given by $L = \int_0^1 B(t)\, B^T(t)\, dt$, where $L$ is the $rq \times rq$ diagonal matrix defined by
$$ L = \begin{pmatrix} D & 0 & \cdots & 0 \\ 0 & D & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & D \end{pmatrix}, $$
where $D$ is the $r \times r$ matrix given by
$$ D = \frac{1}{q} \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & \frac{1}{3} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{2r-1} \end{pmatrix}. $$
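The block structure of $P$ and $L$ translates directly into code; the following sketch (assuming NumPy; not the authors' code) assembles both matrices for given $q$ and $r$.

```python
import numpy as np

def operational_matrices(q, r):
    """Return (P, L) built from the blocks E, H and D defined above."""
    H = np.zeros((r, r)); H[0, 0] = 1.0 / q
    E = np.zeros((r, r))
    E[0, 0] = 1.0
    if r > 1:
        E[0, 1] = 1.0
    for m in range(1, r):
        E[m, m - 1] = -1.0 / (2*m + 1)
        if m + 1 < r:
            E[m, m + 1] = 1.0 / (2*m + 1)
    E /= 2.0 * q
    P = np.zeros((q*r, q*r))
    for i in range(q):
        P[i*r:(i+1)*r, i*r:(i+1)*r] = E          # diagonal blocks E
        for j in range(i + 1, q):
            P[i*r:(i+1)*r, j*r:(j+1)*r] = H      # blocks H above the diagonal
    D = np.diag([1.0 / (q * (2*m + 1)) for m in range(r)])
    L = np.kron(np.eye(q), D)                    # block-diagonal L
    return P, L

# For r = 2, q = 1 this gives P = [[1/2, 1/2], [-1/6, 0]] and L = diag(1, 1/3).
```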
The matrix $\tilde{C}$ associated to a vector $C$ [6,22,23,30]: for any $rq \times 1$ vector $C$, we define the $rq \times rq$ matrix $\tilde{C}$ such that
$$ B(t)\, B^T(t)\, C = \tilde{C}\, B(t). $$
$\tilde{C}$ is called the coefficient matrix. In [22], Hsiao computed the matrix $\tilde{C}$ for $r = 2$ and $q = 8$, while the authors in [6] considered the case of $r = 4$ and $q = 3$.
The vector $\hat{S}$ associated to a matrix $S$: for any $rq \times rq$ matrix $S$, we define the $1 \times rq$ row vector $\hat{S}$ such that $B^T(t)\, S\, B(t) = \hat{S}\, B(t)$. For instance, let $S$ be a $12 \times 12$ matrix with coefficients $s_{11}, s_{12}, \ldots, s_{(12)(11)}, s_{(12)(12)}$. After developing and comparing the two sides of the equation $B^T(t)\, S\, B(t) = \hat{S}\, B(t)$, we deduce that the $1 \times 12$ row vector $\hat{S}$ is given by:
$$ \hat{S} = \Big( s_{11} + \tfrac{1}{3}s_{22} + \tfrac{1}{5}s_{33},\;\; s_{12} + s_{21} + \tfrac{2}{5}s_{23} + \tfrac{2}{5}s_{32},\;\; s_{13} + s_{31} + \tfrac{2}{3}s_{22} + \tfrac{2}{7}s_{33},\;\; s_{44} + \tfrac{1}{3}s_{55} + \tfrac{1}{5}s_{66},\;\; s_{45} + s_{54} + \tfrac{2}{5}s_{56} + \tfrac{2}{5}s_{65},\;\; s_{46} + s_{64} + \tfrac{2}{3}s_{55} + \tfrac{2}{7}s_{66},\;\; s_{77} + \tfrac{1}{3}s_{88} + \tfrac{1}{5}s_{99},\;\; s_{78} + s_{87} + \tfrac{2}{5}s_{89} + \tfrac{2}{5}s_{98},\;\; s_{79} + s_{97} + \tfrac{2}{3}s_{88} + \tfrac{2}{7}s_{99},\;\; s_{(10)(10)} + \tfrac{1}{3}s_{(11)(11)} + \tfrac{1}{5}s_{(12)(12)},\;\; s_{(10)(11)} + s_{(11)(10)} + \tfrac{2}{5}s_{(11)(12)} + \tfrac{2}{5}s_{(12)(11)},\;\; s_{(10)(12)} + s_{(12)(10)} + \tfrac{2}{3}s_{(11)(11)} + \tfrac{2}{7}s_{(12)(12)} \Big). $$
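The matrices $\tilde{C}$ and $\hat{S}$ can also be generated numerically rather than by hand. One possible approach (a sketch under the assumption that NumPy and the hybrid_basis helper from the earlier sketch are available) is a Galerkin projection onto the hybrid basis, with Gauss–Legendre quadrature on each subinterval.

```python
import numpy as np

def _nodes(q, n_gauss=8):
    """Gauss-Legendre nodes and weights collected over all q subintervals."""
    x, w = np.polynomial.legendre.leggauss(n_gauss)
    ts = np.concatenate([0.5/q * x + (k - 0.5)/q for k in range(1, q + 1)])
    ws = np.tile(0.5/q * w, q)
    return ts, ws

def coefficient_matrix(C, q, r):
    """C~ with B(t) B(t)^T C ~ C~ B(t), truncated to the hybrid space."""
    ts, ws = _nodes(q)
    Bt = np.array([hybrid_basis(t, q, r) for t in ts])          # (nodes, rq)
    norms = np.tile([1.0/(q*(2*m + 1)) for m in range(r)], q)   # <b_l, b_l>
    num = (Bt * (Bt @ C)[:, None] * ws[:, None]).T @ Bt         # int b_i (B^T C) b_l dt
    return num / norms[None, :]

def hat_vector(S, q, r):
    """S^ with B(t)^T S B(t) ~ S^ B(t)."""
    ts, ws = _nodes(q)
    Bt = np.array([hybrid_basis(t, q, r) for t in ts])
    norms = np.tile([1.0/(q*(2*m + 1)) for m in range(r)], q)
    vals = np.einsum('ni,ij,nj->n', Bt, S, Bt)                  # quadratic form at nodes
    return (ws * vals) @ Bt / norms
```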

3. Main Results

In this section, we approximate solutions $y(t)$ of Systems (3) and (4). For this, we need the approximation of $y^{(n)}(t)$.
Lemma 1.
Let $y(t)$ be a function and consider its approximation $y_{r-1}(t) = Y^T B(t) = B^T(t)\, Y$. If $Y^{(n)}$ denotes the approximation of $y^{(n)}(t)$, then for any $n \ge 1$, we have:
$$ Y^{(n)} = J^n\, Y - \sum_{k=1}^{n} J^k\, Y_0^{(n-k)}, $$
where $J = (P^T)^{-1}$ and $Y_0^{(i)}$ are the approximations of the initial conditions $y^{(i)}(0)$, for $i = 0, \ldots, n-1$.
Proof. 
By the Fundamental Theorem of Calculus, we have
$$ y(t) = \int_0^t y'(s)\, ds + y(0). $$
Approximating $y(t)$, $y'(t)$ and $y(0)$, we get
$$ Y^T B(t) \approx \int_0^t (Y^{(1)})^T B(s)\, ds + Y_0^T B(t) \approx (Y^{(1)})^T \int_0^t B(s)\, ds + Y_0^T B(t) \approx (Y^{(1)})^T P\, B(t) + Y_0^T B(t) = \left( (Y^{(1)})^T P + Y_0^T \right) B(t). $$
Thus, $Y^T = (Y^{(1)})^T P + Y_0^T$ and so $Y = P^T Y^{(1)} + Y_0$, giving that $Y^{(1)} = J\,(Y - Y_0)$, and the result is true for $n = 1$.
By induction, assume that the result is true for $n$ and prove it for $n+1$. We have
$$ Y^{(n+1)} = J \left( Y^{(n)} - Y_0^{(n)} \right) = J \left( J^n Y - \sum_{k=1}^{n} J^k Y_0^{(n-k)} - Y_0^{(n)} \right) = J^{n+1} Y - \sum_{k=1}^{n} J^{k+1} Y_0^{(n-k)} - J\, Y_0^{(n)} = J^{n+1} Y - \sum_{k=0}^{n} J^{k+1} Y_0^{(n-k)} = J^{n+1} Y - \sum_{k=1}^{n+1} J^k Y_0^{(n+1-k)}, $$
which is the desired result. □
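In code, the lemma is most conveniently applied through its recursive form $Y^{(k)} = J\,(Y^{(k-1)} - Y_0^{(k-1)})$; the following short sketch (assuming NumPy; an illustration, not the authors' code) does exactly that.

```python
import numpy as np

def derivative_coefficients(Y, Y0_list, P):
    """Given Y and the list [Y0^{(0)}, ..., Y0^{(n-1)}] of initial-condition
    coefficient vectors, return [Y^{(1)}, ..., Y^{(n)}] as in Lemma 1."""
    J = np.linalg.inv(P.T)
    out, current = [], np.asarray(Y, dtype=float)
    for Y0 in Y0_list:
        current = J @ (current - Y0)   # Y^{(k)} = J (Y^{(k-1)} - Y0^{(k-1)})
        out.append(current)
    return out
```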
We are now ready to approximate solutions of Systems (3) and (4).

3.1. Approximated Solution of the Fredholm Integro-Differential System (3)

Using the approximations (5) and (6) of functions of one and two variables, System (3) can be approximated as
$$ \begin{aligned} & B^T(t)\, Y + \lambda \int_0^1 B^T(t)\, K\, B(s)\; B^T(s)\, Y^{(m)}\; B^T(s)\, Y^{(n)}\, ds = B^T(t)\, F \\ \Longrightarrow\;& Y + \lambda K \int_0^1 B(s)\, B^T(s)\, Y^{(m)}\; B^T(s)\, Y^{(n)}\, ds = F \\ \Longrightarrow\;& Y + \lambda K \int_0^1 \widetilde{Y^{(m)}}\, B(s)\; B^T(s)\, Y^{(n)}\, ds = F \\ \Longrightarrow\;& Y + \lambda K\, \widetilde{Y^{(m)}} \left( \int_0^1 B(s)\, B^T(s)\, ds \right) Y^{(n)} = F \\ \Longrightarrow\;& Y + \lambda K\, \widetilde{Y^{(m)}}\, L\, Y^{(n)} = F. \end{aligned} $$
Using Lemma 1, the last equation becomes
$$ Y + \lambda K\, \widetilde{\left( J^m Y - \sum_{k=1}^{m} J^k Y_0^{(m-k)} \right)}\, L \left( J^n Y - \sum_{k=1}^{n} J^k Y_0^{(n-k)} \right) = F. \qquad (7) $$
This is a nonlinear system of $rq$ equations in $rq$ unknowns, which can be solved by any iterative method.
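As an illustration of how (7) can be solved in practice, the sketch below (assumptions: NumPy/SciPy, and the coefficient_matrix helper from the earlier sketch) hands the residual of (7) to scipy.optimize.fsolve.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_fredholm(K, F, Y0_list, lam, m, n, P, L, q, r):
    """Solve Y + lam * K * (Y^(m))~ * L * Y^(n) = F for the coefficients Y."""
    J = np.linalg.inv(P.T)

    def deriv(Y, order):
        cur = Y
        for i in range(order):
            cur = J @ (cur - Y0_list[i])     # Lemma 1, applied recursively
        return cur

    def residual(Y):
        Ym, Yn = deriv(Y, m), deriv(Y, n)
        return Y + lam * K @ coefficient_matrix(Ym, q, r) @ (L @ Yn) - F

    return fsolve(residual, F)               # F is a convenient starting guess
```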

3.2. Approximated Solution of the Volterra Integro-Differential System (4)

Using the approximations (5) and (6), System (4) can be approximated as:
$$ \begin{aligned} & Y^T B(t) + \beta \int_0^t B^T(t)\, G\, B(s)\; B^T(s)\, Y^{(m)}\; B^T(s)\, Y^{(n)}\, ds = F^T B(t) \\ \Longrightarrow\;& Y^T B(t) + \beta\, B^T(t)\, G \int_0^t B(s)\, B^T(s)\, Y^{(m)}\; B^T(s)\, Y^{(n)}\, ds = F^T B(t) \\ \Longrightarrow\;& Y^T B(t) + \beta\, B^T(t)\, G\, \widetilde{Y^{(m)}} \int_0^t B(s)\, B^T(s)\, Y^{(n)}\, ds = F^T B(t) \\ \Longrightarrow\;& Y^T B(t) + \beta\, B^T(t)\, G\, \widetilde{Y^{(m)}}\, \widetilde{Y^{(n)}} \int_0^t B(s)\, ds = F^T B(t) \\ \Longrightarrow\;& Y^T B(t) + \beta\, B^T(t)\, G\, \widetilde{Y^{(m)}}\, \widetilde{Y^{(n)}}\, P\, B(t) = F^T B(t). \end{aligned} $$
Consider the matrix $S := G\, \widetilde{Y^{(m)}}\, \widetilde{Y^{(n)}}\, P$. We obtain
$$ Y^T B(t) + \beta\, B^T(t)\, S\, B(t) = F^T B(t) \;\Longrightarrow\; Y^T B(t) + \beta\, \hat{S}\, B(t) = F^T B(t). $$
Hence,
$$ Y^T + \beta\, \hat{S} = F^T. \qquad (8) $$
Finally, using Lemma 1 for Y ( m ) and Y ( n ) , we get a nonlinear system which can be solved by any iterative method. We use Wolfram Mathematica commands “Solve” and “NSolve” to find the solution of such a nonlinear system. These commands use a suitable iterative method to solve the problem. “For systems of algebraic equations, NSolve computes a numerical Gröbner basis using an efficient monomial ordering, then uses eigensystem methods to extract numerical roots” [31].
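A corresponding sketch for the Volterra case (same assumptions as the Fredholm sketch, with coefficient_matrix and hat_vector from the preliminaries section; an illustration outside Mathematica, not the authors' code) builds $S = G\, \widetilde{Y^{(m)}}\, \widetilde{Y^{(n)}}\, P$ inside the residual of (8).

```python
import numpy as np
from scipy.optimize import fsolve

def solve_volterra(G, F, Y0_list, beta, m, n, P, q, r):
    """Solve Y + beta * S^(Y)^T = F, with S = G (Y^(m))~ (Y^(n))~ P."""
    J = np.linalg.inv(P.T)

    def deriv(Y, order):
        cur = Y
        for i in range(order):
            cur = J @ (cur - Y0_list[i])
        return cur

    def residual(Y):
        Ym, Yn = deriv(Y, m), deriv(Y, n)
        S = G @ coefficient_matrix(Ym, q, r) @ coefficient_matrix(Yn, q, r) @ P
        return Y + beta * hat_vector(S, q, r) - F

    return fsolve(residual, F)
```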

4. Error Analysis

We assume that the function $y(t)$ is sufficiently smooth on the interval $[0,1]$. Suppose that $t_0, t_1, \ldots, t_\mu$ are the roots of the shifted Chebyshev polynomial of degree $\mu+1$ on $[0,1]$, and let $P_\mu(t)$ be the polynomial that interpolates $y(t)$ at the nodes $t_i$, $0 \le i \le \mu$. The error in the interpolation is given in [32] by
$$ y(t) - P_\mu(t) = \frac{y^{(\mu+1)}(\delta)}{(\mu+1)!}\, \prod_{i=0}^{\mu} (t - t_i), $$
for some $\delta \in [0,1]$. This shows that
$$ \left| y(t) - P_\mu(t) \right| \le \frac{M}{2^{2\mu+1}\, (\mu+1)!}, \qquad (9) $$
where $M = \max_{t \in [0,1]} \left| y^{(\mu+1)}(t) \right|$.
We recall here that the $L^2$ norm of a function $y : [0,1] \to \mathbb{R}$ is given by $\|y\|_2 = \left( \int_0^1 y^2(t)\, dt \right)^{1/2}$.
Theorem 1.
If $y_\mu(t) = B^T(t)\, Y$ is the best approximation of the solution $y(t)$ obtained using Legendre polynomials, then
$$ \| y(t) - y_\mu(t) \|_2 \le \frac{M}{2^{2\mu+1}\, (\mu+1)!}, \qquad (11) $$
where $M = \max_{t \in [0,1]} \left| y^{(\mu+1)}(t) \right|$.
Proof. 
Let $X_\mu$ be the space of all polynomials of degree less than or equal to $\mu$. Since $y_\mu$ is the best approximation to $y$, we have $\| y - y_\mu \|_2 \le \| y - g \|_2$ for any polynomial $g$ in $X_\mu$. Therefore, by using (9), we get
$$ \| y - y_\mu \|_2^2 = \int_0^1 \left( y(t) - y_\mu(t) \right)^2 dt \le \int_0^1 \left( y(t) - P_\mu(t) \right)^2 dt \le \left( \frac{M}{2^{2\mu+1}(\mu+1)!} \right)^2 \int_0^1 dt = \left( \frac{M}{2^{2\mu+1}(\mu+1)!} \right)^2. $$
The result follows by taking the square root of both sides. □
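The bound of Theorem 1 is easy to evaluate; the snippet below (plain Python, a sketch with $M$ and $\mu$ supplied by the user) reproduces the numerical error bounds quoted in the examples of Section 5.

```python
from math import e, factorial

def error_bound(M, mu):
    """Right-hand side of (11): M / (2^(2*mu+1) * (mu+1)!)."""
    return M / (2**(2*mu + 1) * factorial(mu + 1))

print(error_bound(e, 1))   # ~0.1699, Example 1, Case 1 (mu = 1, M = e)
print(error_bound(e, 2))   # ~0.01416, Example 1, Case 2 (mu = 2, M = e)
```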

5. Numerical Examples

In this section, we apply the methods described in Section 3 to some numerical examples to solve Systems (3) and (4).
Example 1.
Consider the following Fredholm integro-differential system
$$ y(t) + \int_0^1 e^{t-s}\, y(s)\, y'(s)\, ds = e^{t+1}, \qquad y(0) = 1. \qquad (12) $$
Comparing with the standard form of System (3), we get $\lambda = 1$, $k(t,s) = e^{t-s}$, $m = 0$, $n = 1$, $f(t) = e^{t+1}$, $l = 0$ and $a_0 = 1$.
Case 1: First, we consider $r = 2$ and $q = 1$. It can be verified that $B(t) = (1, 2t-1)^T$ and
$$ K = \begin{pmatrix} e + \frac{1}{e} - 2 & 3\left( e + \frac{3}{e} - 4 \right) \\[4pt] 3\left( -e + 4 - \frac{3}{e} \right) & 9\left( 6 - e - \frac{9}{e} \right) \end{pmatrix}. $$
The matrix approximations $F$ of the function $f(t) = e^{t+1}$ and $Y_0$ of $y(0)$ are respectively given by
$$ F = \begin{pmatrix} e^2 - e \\ -3e^2 + 9e \end{pmatrix} \qquad \text{and} \qquad Y_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. $$
From Equation (7), we deduce
$$ Y = \begin{pmatrix} e - 1 \\ -3e + 9 \end{pmatrix}. $$
Using the approximation $y_1(t) = Y^T B(t) = B^T(t)\, Y$, we get $y_1(t) = 4e - 10 + (18 - 6e)\, t$. In Figure 1, we compare this approximate solution $y_1(t)$ with the exact solution $e^t$. The absolute errors at various values of $t$ are shown in Table 1.
This is a degree 1 approximation, therefore $\mu = 1$. Further, $M = \max_{[0,1]} |y''(t)| = e$. The error estimate obtained by using (11) is $M/2^4 \approx 0.169893$. It can be checked from Table 1 that our computed values are less than this error bound.
Case 2: Now, we consider $r = 3$ and $q = 4$. The $12 \times 12$ matrix $K$ is given by:
$$ K = \begin{pmatrix}
1.0052 & -0.1255 & 0.0052 & 0.7829 & -0.0978 & 0.0041 & 0.6097 & -0.0761 & 0.0032 & 0.4748 & -0.0593 & 0.0025 \\
0.1255 & -0.0157 & 0.0007 & 0.0978 & -0.0122 & 0.0005 & 0.0761 & -0.0095 & 0.0003 & 0.0593 & -0.0074 & 0.0003 \\
0.0052 & -0.0007 & 0.0 & 0.0040 & -0.0005 & 0.0 & 0.0032 & -0.0004 & 0.0 & 0.0025 & -0.0003 & 0.0 \\
1.2907 & -0.1612 & 0.0067 & 1.0052 & -0.1255 & 0.0052 & 0.7829 & -0.0978 & 0.0040 & 0.6097 & -0.0761 & 0.0032 \\
0.1612 & -0.0202 & 0.0008 & 0.1255 & -0.0157 & 0.0007 & 0.0978 & -0.0122 & 0.0005 & 0.0761 & -0.0095 & 0.0004 \\
0.0067 & -0.0008 & 0.0 & 0.0052 & -0.0007 & 0.0 & 0.0041 & -0.0005 & 0.0 & 0.0032 & -0.0004 & 0.0 \\
1.6573 & -0.2070 & 0.0086 & 1.2907 & -0.1612 & 0.0067 & 1.0052 & -0.1255 & 0.0052 & 0.7829 & -0.0978 & 0.0040 \\
0.2070 & -0.0258 & 0.0011 & 0.1612 & -0.0201 & 0.0008 & 0.1255 & -0.0157 & 0.0007 & 0.0978 & -0.0122 & 0.0005 \\
0.0086 & -0.0011 & 0.0 & 0.0067 & -0.0008 & 0.0 & 0.0052 & -0.0007 & 0.0 & 0.0041 & -0.0005 & 0.0 \\
2.1281 & -0.2658 & 0.0111 & 1.6573 & -0.2070 & 0.0086 & 1.2907 & -0.1612 & 0.0067 & 1.0052 & -0.1255 & 0.0052 \\
0.2657 & -0.0332 & 0.0014 & 0.2070 & -0.0258 & 0.0011 & 0.1612 & -0.0201 & 0.0008 & 0.1255 & -0.0157 & 0.0007 \\
0.0111 & -0.0014 & 0.0 & 0.0086 & -0.0011 & 0.0 & 0.0067 & -0.0008 & 0.0 & 0.0052 & -0.0007 & 0.0
\end{pmatrix}. $$
Other approximations in this case are as below:
$$ B(t) = \Big( \chi_{[0,1/4)},\; (-1+8t)\,\chi_{[0,1/4)},\; (1-24t+96t^2)\,\chi_{[0,1/4)},\; \chi_{[1/4,1/2)},\; (-3+8t)\,\chi_{[1/4,1/2)},\; (13-72t+96t^2)\,\chi_{[1/4,1/2)},\; \chi_{[1/2,3/4)},\; (-5+8t)\,\chi_{[1/2,3/4)},\; (37-120t+96t^2)\,\chi_{[1/2,3/4)},\; \chi_{[3/4,1)},\; (-7+8t)\,\chi_{[3/4,1)},\; (73-168t+96t^2)\,\chi_{[3/4,1)} \Big)^T $$
(where $\chi_A$ is the characteristic function of a set $A$),
$$ F = \big( 3.08824,\; 0.385629,\; 0.0160607,\; 3.96538,\; 0.495157,\; 0.0206224,\; 5.09165,\; 0.635795,\; 0.0264796,\; 6.53781,\; 0.816377,\; 0.0340005 \big)^T, $$
$$ Y_0 = \big( 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0 \big)^T, $$
and
$$ Y = \big( 1.1361,\; 0.141865,\; 0.00590841,\; 1.45878,\; 0.182158,\; 0.00758655,\; 1.87312,\; 0.233896,\; 0.00974132,\; 2.40513,\; 0.300328,\; 0.0125081 \big)^T. $$
The approximate solution of (12) is given by
$$ y_2(t) = 1.1361\, \chi_{[0,1/4)} + 1.45878\, \chi_{[1/4,1/2)} + 1.87312\, \chi_{[1/2,3/4)} + 2.40513\, \chi_{[3/4,1)} + 0.3\, (-7+8t)\, \chi_{[3/4,1)} + 0.233896\, (-5+8t)\, \chi_{[1/2,3/4)} + 0.182158\, (-3+8t)\, \chi_{[1/4,1/2)} + 0.14187\, (-1+8t)\, \chi_{[0,1/4)} + 0.0125\, (73-168t+96t^2)\, \chi_{[3/4,1)} + 0.0097\, (37-120t+96t^2)\, \chi_{[1/2,3/4)} + 0.00759\, (13-72t+96t^2)\, \chi_{[1/4,1/2)} + 0.00591\, (1-24t+96t^2)\, \chi_{[0,1/4)}. $$
Figure 2 shows the graphs of the approximate solution $y_2(t)$ and the exact solution $e^t$. It can be observed that, in this case, the approximate solution is in close agreement with the exact solution. The absolute errors at various values of $t$ are given in Table 2. Further, using (11), the error bound is 0.01416; Table 2 shows that the largest computed error is 0.000240649, well below this bound.
Example 2.
Consider the following Volterra integro-differential system
$$ y(t) + \int_0^t \sin(t-s)\, y(s)\, y'(s)\, ds = 2t^3 + t^2 - 12t + 12\sin(t), \qquad y(0) = 0. \qquad (13) $$
Comparing with the standard form of System (4), we get $\beta = 1$, $g(t,s) = \sin(t-s)$, $m = 0$, $n = 1$, $f(t) = 2t^3 + t^2 - 12t + 12\sin(t)$, $l = 0$ and $a_0 = 0$. We take $r = 3$ and $q = 4$. Following the procedure described in Section 3, we get
$$ F = \big( 0.0208496,\; 0.0312848,\; 0.0104457,\; 0.146854,\; 0.0951443,\; 0.0109877,\; 0.406543,\; 0.166108,\; 0.0129514,\; 0.824576,\; 0.255306,\; 0.0171862 \big)^T, $$
$$ G = \begin{pmatrix}
0 & -0.1245 & 0 & -0.2461 & -0.1206 & 0.0013 & -0.4769 & -0.1092 & 0.0025 & -0.6781 & -0.0911 & 0.0035 \\
0.1245 & 0 & -0.0006 & 0.1206 & -0.0039 & -0.0006 & 0.1092 & -0.0075 & -0.0006 & 0.0912 & -0.0106 & -0.0005 \\
0 & 0.0006 & 0 & 0.0013 & 0.0006 & 0 & 0.0025 & 0.0006 & 0 & 0.0035 & 0.0005 & 0 \\
0.2461 & -0.1206 & -0.0013 & 0 & -0.1245 & 0 & -0.2461 & -0.1206 & 0.0013 & -0.4769 & -0.1092 & 0.0025 \\
0.1206 & 0.0039 & -0.0006 & 0.1245 & 0 & -0.0006 & 0.1206 & -0.0039 & -0.0006 & 0.1092 & -0.0075 & -0.0006 \\
-0.0013 & 0.0006 & 0 & 0 & 0.0006 & 0 & 0.0013 & 0.0006 & 0 & 0.0025 & 0.0006 & 0 \\
0.4769 & -0.1092 & -0.0025 & 0.2461 & -0.1206 & -0.0013 & 0 & -0.1245 & 0 & -0.2461 & -0.1206 & 0.0013 \\
0.1092 & 0.0075 & -0.0006 & 0.1206 & 0.0039 & -0.0006 & 0.1245 & 0 & -0.0006 & 0.1206 & -0.0039 & -0.0006 \\
-0.0025 & 0.0006 & 0 & -0.0013 & 0.0006 & 0 & 0 & 0.0006 & 0 & 0.0013 & 0.0006 & 0 \\
0.6781 & -0.0911 & -0.0035 & 0.4769 & -0.1092 & -0.0025 & 0.2461 & -0.1206 & -0.00128 & 0 & -0.1245 & 0 \\
0.0911 & 0.0106 & -0.0005 & 0.1092 & 0.0075 & -0.0006 & 0.1206 & 0.0039 & -0.0006 & 0.1245 & 0 & -0.0006 \\
-0.0035 & 0.0005 & 0 & -0.0025 & 0.0006 & 0 & -0.0013 & 0.0006 & 0 & 0 & 0.0006 & 0
\end{pmatrix}, $$
$$ Y_0 = \big( 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 \big)^T, $$
and $B(t)$ is the same as given in the previous example. Using Equation (8), we get
$$ Y = \big( 0.0208333,\; 0.0312487,\; 0.0104375,\; 0.145833,\; 0.0937492,\; 0.0104781,\; 0.395833,\; 0.15625,\; 0.0105145,\; 0.770834,\; 0.218753,\; 0.010543 \big)^T. $$
The approximate solution of System (13) is given by
$$ y_2(t) = 0.0208333\, \chi_{[0,1/4)} + 0.145833\, \chi_{[1/4,1/2)} + 0.395833\, \chi_{[1/2,3/4)} + 0.770834\, \chi_{[3/4,1)} + 0.218753\, (-7+8t)\, \chi_{[3/4,1)} + 0.15625\, (-5+8t)\, \chi_{[1/2,3/4)} + 0.0937492\, (-3+8t)\, \chi_{[1/4,1/2)} + 0.0312487\, (-1+8t)\, \chi_{[0,1/4)} + 0.010543\, (73-168t+96t^2)\, \chi_{[3/4,1)} + 0.0105145\, (37-120t+96t^2)\, \chi_{[1/2,3/4)} + 0.0104781\, (13-72t+96t^2)\, \chi_{[1/4,1/2)} + 0.0104375\, (1-24t+96t^2)\, \chi_{[0,1/4)}. $$
The approximate solution y 2 ( t ) is compared with the exact solution t 2 in Figure 3. Table 3 shows the absolute error in the solution at different values of t.
Using (11), the error bound for this example is approximately 0.0104167, while the largest error in Table 3 is 0.0000975226.
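To check the tabulated errors independently, one can evaluate the piecewise expression for $y_2(t)$ directly. The sketch below (assuming NumPy; the coefficients are the rounded entries of $Y$ above, so the values only approximately reproduce Table 3) compares $y_2(t)$ with the exact solution $t^2$.

```python
import numpy as np

coef = np.array([[0.0208333, 0.0312487, 0.0104375],
                 [0.145833,  0.0937492, 0.0104781],
                 [0.395833,  0.15625,   0.0105145],
                 [0.770834,  0.218753,  0.010543 ]])

def y2(t, q=4):
    k = min(int(np.floor(q * t)) + 1, q)     # block containing t
    x = 2*q*t - 2*k + 1                      # local Legendre variable in [-1, 1]
    c0, c1, c2 = coef[k - 1]
    return c0 + c1*x + c2*(3*x**2 - 1)/2     # degree-0,1,2 Legendre terms

for t in (0.1, 0.5, 0.9):
    print(t, abs(y2(t) - t**2))              # errors of the order reported in Table 3
```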

Comparison with Other Methods

In this section, we compare our results with existing methods, namely the Daftardar–Gejji–Jafari (DGJ) method [33], the Adomian decomposition method (ADM) [34] and quadrature methods [35,36].
The DGJ algorithm for the solution of Equation (12) is $y_0 = e^{t+1}$ and $y_{n+1} = y_0 - \int_0^1 e^{t-s}\, y_n(s)\, y_n'(s)\, ds$, $n = 0, 1, 2, \ldots$. The $k$-term approximate solution is given by $y_{k-1}$. Therefore, the three-term solution is $y(t) = e^{(1+t)} \left( 1 - (-1+e)\, e\, (1+e-e^2)^2 \right)$.
The ADM algorithm for the solution of Equation (12) is $y_0 = e^{t+1}$ and $y_{n+1} = -\int_0^1 e^{t-s}\, \frac{1}{2} \frac{d A_n(s)}{ds}\, ds$, $n = 0, 1, 2, \ldots$, where the $A_n(s)$ are the Adomian polynomials [34] of the function $y^2(s)$. The $k$-term approximate solution using ADM is given by $\sum_{n=0}^{k-1} y_n$. Therefore, the three-term solution is $y(t) = e^{(1+t)} \left( 1 + e + e^2 - 4e^3 + 2e^4 \right)$.
The comparison of these solutions with the exact solution in Figure 4 shows that these methods diverge for this example.
Let us divide the interval $[0,1]$ into $n$ equal sub-intervals of length $h$. The approximation of the function $y$ at the node $jh$ is denoted by $y_j$. We take $n$ to be an even natural number. Simpson's quadrature rule [36] applied to Equation (12) provides the following expression:
$$ y_j = \frac{1}{2}(e+1)\, e^{jh} - \frac{h}{6} \left[ e^{jh}\, y_0^2 + 4 \sum_{k=1}^{n/2} e^{(j-2k+1)h}\, y_{2k-1}^2 + 2 \sum_{k=1}^{n/2-1} e^{(j-2k)h}\, y_{2k}^2 + e^{(j-n)h}\, y_n^2 \right], $$
where $y_0 = 1$ and $j = 1, 2, \ldots, n$. This algorithm provides a solution in agreement with the exact one. We compare the absolute errors at various points with $h = 0.05$ and $n = 20$ in Table 4.
It can be observed from Table 2 and Table 4 that our method has a smaller error than the quadrature method.
Similarly, we compare our results for Equation (13) with DGJ, ADM and the trapezium quadrature rule.
The 2-term approximate solutions of this example using DGJ and ADM provide the same expression, given by
$$ y(t) = -312 - 2160t + 157t^2 + 336t^3 - 10t^4 - 12t^5 + 3\left( 104 - 15t + t^2 + 2t^3 \right) \cos(t) + \left( 2157 - 3t + 27t^2 - 2t^3 - 3t^4 \right) \sin(t) + 24 \sin(2t). $$
As in Equation (12), we divide the interval [ 0 , 1 ] into n equal intervals, where n is a natural number. The trapezium quadrature formula to compute the solution of Equation (13) is
$$ y_j = u_j - \frac{h}{4} \left[ y_j^2 + \cos(jh)\, y_0^2 + 2 \sum_{k=1}^{j-1} \cos((j-k)h)\, y_k^2 \right], $$
where $u_j = 2(jh)^3 + (jh)^2 - 12jh + 12\sin(jh)$ and $y_0 = 0$. We take $h = 0.05$ and $n = 20$.
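For completeness, a possible implementation of the trapezium scheme above (a sketch assuming NumPy; at each node the update is a scalar quadratic in $y_j$, solved here with its positive root) is:

```python
import numpy as np

def trapezium_volterra(n=20, h=0.05):
    y = np.zeros(n + 1)                      # y_0 = 0
    for j in range(1, n + 1):
        t = j * h
        u = 2*t**3 + t**2 - 12*t + 12*np.sin(t)
        c = (h/4) * (np.cos(t) * y[0]**2
                     + 2*sum(np.cos((j - k)*h) * y[k]**2 for k in range(1, j)))
        # y_j + (h/4) y_j^2 = u - c, a quadratic equation in y_j
        y[j] = (-1 + np.sqrt(1 + h*(u - c))) / (h/2)
    return y

approx = trapezium_volterra()
print(abs(approx[10] - 0.5**2))              # error at t = 0.5, to compare with Table 5
```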
It is observed that all these methods provide solutions which match the exact solution. We compare the absolute errors in these solutions at various points in Table 5.
It is observed that the error in our method is smaller at most of the points.

6. Conclusions

In this work, we have discussed an efficient method for solving a class of Fredholm and Volterra integro-differential equations. Our method is based on a new approximation for the derivatives of the equations' solutions using hybrid and block-pulse functions. The absolute errors reported in the tables show that the approximate solution is in good agreement with the exact solution. It is verified in each example that the practical error of our method is less than the theoretical error bound. This method is a simple, efficient and powerful mathematical tool for finding the numerical solution of certain classes of Fredholm and Volterra integro-differential equations. The approximate solutions are found using computer code written in Wolfram Mathematica. The method is computationally attractive, and its application is demonstrated through illustrative examples. For future research, this method can be extended, with additional work, to various kinds of problems such as higher dimensional problems, stochastic integro-differential equations and partial integro-differential equations of fractional order. The comparison with other methods, namely DGJ, ADM and quadrature rules, shows that our method works equally well for Fredholm and Volterra equations and produces results with a smaller error.

Author Contributions

A.H. and R.N. conceived and designed the experiments; S.B. performed the experiments; A.H. and R.N. analyzed the data; S.B. contributed materials and analysis tools; A.H. and R.N. wrote the paper. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

Sachin Bhalekar acknowledges the Science and Engineering Research Board (SERB), New Delhi, India for the Research Grant (Ref. MTR/2017/000068) under the Mathematical Research Impact-Centric Support (MATRICS) Scheme.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bloom, F. Asymptotic bounds for solutions to a system of damped integro-differential equations of electromagnetic theory. J. Math. Anal. Appl. 1980, 73, 524–542.
2. Holmaker, K. Global asymptotic stability for a stationary solution of a system of integro-differential equations describing the formation of liver zones. SIAM J. Math. Anal. 1993, 24, 116–128.
3. Abdou, M.A. On asymptotic methods for Fredholm–Volterra integral equation of the second kind in contact problems. J. Comput. Appl. Math. 2003, 154, 431–446.
4. Forbes, L.K.; Crozier, S.; Doddrell, D.M. Calculating current densities and fields produced by shielded magnetic resonance imaging probes. SIAM J. Appl. Math. 1997, 57, 401–425.
5. Delves, L.M.; Mohamed, J.L. Computational Methods for Integral Equations; Cambridge University Press: Cambridge, UK, 1985.
6. Basirat, B.; Maleknejad, K.; Hashemizadeh, E. Operational matrix approach for the nonlinear Volterra–Fredholm integral equations: Arising in physics and engineering. Int. J. Phys. Sci. 2012, 7, 226–233.
7. Wazwaz, A.M. A First Course in Integral Equations; World Scientific: Singapore, 1997.
8. Polyanin, A.D.; Manzhirov, A.V. Handbook of Integral Equations, 2nd ed.; Chapman and Hall/CRC Press: London, UK, 2008.
9. Golberg, M.A. The convergence of a collocations method for a class of Cauchy singular integral equations. J. Math. Appl. 1984, 100, 500–512.
10. Kovalenko, E.V. Some approximate methods for solving integral equations of mixed problems. Probl. Math. Mech. 1989, 53, 85–92.
11. Semetanian, B.J. On an integral equation for axially symmetric problem in the case of an elastic body containing an inclusion. J. Appl. Math. Mech. 1991, 55, 371–375.
12. Willis, J.R.; Nemat-Nasser, S. Singular perturbation solution of a class of singular integral equations. Quart. Appl. Math. 1990, XLVIII, 741–753.
13. Frankel, J. A Galerkin solution to regularized Cauchy singular integro-differential equation. Q. Appl. Math. 1995, 52, 145–258.
14. Golbabai, A.; Javidi, M. Application of He's homotopy perturbation method for nth-order integro-differential equations. Appl. Math. Comput. 2007, 190, 1409–1416.
15. Shang, X.; Han, D. Application of the variational iteration method for solving nth-order integro-differential equations. J. Comput. Appl. Math. 2010, 234, 1442–1447.
16. Abbasbandy, S.; Shivanian, E. Application of variational iteration method for nth-order integro-differential equations. Zeitschrift für Naturforschung A 2009, 64, 439–444.
17. Hou, J.; Yang, C. Numerical method in solving Fredholm integro-differential equations by using hybrid function operational matrix of derivative. J. Inf. Comput. Sci. 2013, 10, 2757–2764.
18. Hemeda, A.A. New iterative method: Application to nth-order integro-differential equations. Int. Math. Forum 2012, 7, 2317–2332.
19. Daftardar-Gejji, V.; Jafari, H. An iterative method for solving non-linear functional equations. J. Math. Anal. Appl. 2006, 316, 753–763.
20. Marzban, H.R.; Razzaghi, M. Optimal control of linear delay systems via hybrid of block-pulse and Legendre polynomials. J. Franklin Inst. 2004, 341, 279–293.
21. Razzaghi, M.; Marzban, H.R. A hybrid analysis direct method in the calculus of variations. Int. J. Comput. Math. 2000, 75, 259–269.
22. Hsiao, C.H. Hybrid function method for solving Fredholm and Volterra integral equations of the second kind. J. Comput. Appl. Math. 2009, 230, 59–68.
23. Maleknejad, K.; Basirat, B.; Hashemizadeh, E. Hybrid Legendre polynomials and block-pulse functions approach for nonlinear Volterra–Fredholm integro-differential equations. Comput. Math. Appl. 2011, 61, 282–288.
24. Mirzaee, F. Numerical solution of system of linear integral equations via improvement of block-pulse functions. J. Math. Model. 2016, 4, 133–159.
25. Mirzaee, F.; Alipour, S. Approximate solution of nonlinear quadratic integral equations of fractional order via piecewise linear functions. J. Comput. Appl. Math. 2018, 331, 217–227.
26. Mirzaee, F.; Alipour, S.; Samadyar, N. Numerical solution based on hybrid of block-pulse and parabolic functions for solving a system of nonlinear stochastic Itô–Volterra integral equations of fractional order. J. Comput. Appl. Math. 2019, 349, 157–171.
27. Mirzaee, F.; Hoseini, S.F. A new collocation approach for solving systems of high-order linear Volterra integro-differential equations with variable coefficients. Appl. Math. Comput. 2017, 311, 272–282.
28. Mirzaee, F.; Hoseini, S.F. Hybrid functions of Bernstein polynomials and block-pulse functions for solving optimal control of the nonlinear Volterra integral equations. Indag. Math. 2016, 27, 835–849.
29. Hashemizadeh, E.; Basirat, B. An efficient computational method for the system of linear Volterra integral equations by means of hybrid functions. Math. Sci. 2011, 5, 355–368.
30. Shojaeizadeh, T.; Abadi, Z.; Golpar Raboky, E. Hybrid functions approach for solving Fredholm and Volterra integral equations. J. Prime Res. Math. 2009, 5, 124–132.
31. Which Method Does Mathematica Use in NSolve for One Variable Functions? Available online: https://community.wolfram.com/groups/-/m/t/116219 (accessed on 22 May 2020).
32. Stewart, G.W. Afternotes on Numerical Analysis; SIAM: Philadelphia, PA, USA, 1996; Volume 49.
33. Kumar, M.; Jhinga, A.; Daftardar-Gejji, V. New algorithm for solving non-linear functional equations. Int. J. Appl. Comput. Math. 2020, 6, 26.
34. Adomian, G. Solving Frontier Problems of Physics: The Decomposition Method; Springer Science and Business Media: Dordrecht, The Netherlands, 2013; Volume 60.
35. Van der Houwen, P.J.; Wolkenfelt, P.H.M. On the stability of multistep formulas for Volterra integral equations of the second kind. Computing 1980, 24, 341–347.
36. Ray, S.S.; Sahu, P.K. Numerical methods for solving Fredholm integral equations of second kind. Abstr. Appl. Anal. 2013, 2013, 426916.
Figure 1. Comparison of approximate and exact solutions of System (12) for the case r = 2 and q = 1.
Figure 2. Comparison of approximate and exact solutions of System (12) for the case r = 3 and q = 4.
Figure 3. Comparison of approximate and exact solutions of System (13) for the case r = 3 and q = 4.
Figure 4. Comparison of DGJ, ADM and exact solutions of System (12).
Table 1. Absolute errors in the solution of System (12) with r = 2 and q = 1.

t     Error       t     Error
0.0   0.120825    0.5   0.08146
0.1   0.05579     0.6   0.07827
0.2   0.00182     0.7   0.05683
0.3   0.03992     0.8   0.01525
0.4   0.06816     0.9   0.04861
Table 2. Absolute errors in the solution of System (12) with r = 3 and q = 4.

t     Error           t     Error
0.0   0.000145961     0.5   0.000240649
0.1   0.0000409679    0.6   0.0000675446
0.2   0.0000553281    0.7   0.0000912206
0.3   0.0000656897    0.8   0.000108304
0.4   0.0000536172    0.9   0.0000883998
Table 3. Absolute errors in the solution of System (13) with r = 3 and q = 4.

t     Error           t     Error
0.0   0.0000221689    0.5   0.0000975226
0.1   8.9 × 10^-6     0.6   0.0000429716
0.2   4.99 × 10^-8    0.7   4.32 × 10^-6
0.3   … × 10^-6       0.8   3.76 × 10^-6
0.4   0.0000271471    0.9   0.0000547743
Table 4. Absolute errors in the solution of System (12) with the quadrature method and h = 0.05, n = 20.

t     Error         t     Error
0.0   0             0.5   0.0214405
0.1   0.014372      0.6   0.0236955
0.2   0.0158835     0.7   0.0261875
0.3   0.017554      0.8   0.0289417
0.4   0.0194002     0.9   0.0319855
Table 5. Absolute errors in the solutions of System (13) obtained by using DGJ/ADM and the quadrature method.

t     DGJ/ADM Error      Quadrature Rule Error     t     DGJ/ADM Error     Quadrature Rule Error
0.0   0                  0                         0.5   0.0000486807      0.0000509481
0.1   1.24837 × 10^-10   4.06185 × 10^-7           0.6   0.000209306       0.0000867569
0.2   3.19707 × 10^-8    3.30782 × 10^-6           0.7   0.000718813       0.000134977
0.3   8.18704 × 10^-7    0.0000111665              0.8   0.00209495        0.000196122
0.4   8.17149 × 10^-6    0.0000263365              0.9   0.00538832        0.000269884

