Article

Solving the Integral Differential Equations with Delayed Argument by Using the DTM Method

1 Department of Mathematics Applications and Methods for Artificial Intelligence, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
2 Department of Mathematics, University of Hafr Al Batin, Hafr Al Batin 31991, Saudi Arabia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(11), 4124; https://doi.org/10.3390/s22114124
Submission received: 7 May 2022 / Revised: 23 May 2022 / Accepted: 27 May 2022 / Published: 29 May 2022

Abstract: Recently, much attention has been paid to research connected with wireless sensor networks and the industrial internet of things. Theoretical results in these fields are subsequently applied in practice, in areas such as smart industry, smart devices, smart homes and smart transportation. There is therefore a need for new techniques for solving the problems described by the corresponding equations, including differential equations, integral equations and integro-differential equations. The subject of this paper is a method for solving integro-differential equations with a retarded (delayed) argument. The proposed procedure is based on the Taylor differential transformation, which converts the given integro-differential equation into a system of algebraic (often nonlinear) equations. The described method is efficient and relatively simple to use; however, the high degree of generality and complexity of the problems defined by such equations makes it impossible to obtain a general form of their solution and enforces an individual approach to each equation, which does not diminish the benefits associated with the use of the method.

1. Introduction

In recent times, wireless sensor networks and the industrial internet of things have been dynamically developing fields of research. Theoretical results connected with these topics are subsequently used in practical applications, in areas such as smart industry, smart devices, smart homes and smart transportation. There is therefore a need to search for new techniques for solving the problems described by the appropriate equations, including differential equations, integral equations and, in particular, integro-differential equations with a retarded argument.
Differential equations and integral equations with a retarded argument find applications in many areas of science, among others in mechanics, biophysics and control theory [1,2,3,4]. The integro-differential equation with a retarded argument in its full general form is rarely encountered in mathematical models of real processes, but specific cases of this equation are used in the description of phenomena with viscoelastic effects [5,6], in population dynamics, for example in predator-prey models [7], as well as in modeling problems connected with wireless sensor networks and the internet of things [8]. Analyzing the subject literature, it is quite difficult to find practical papers concerning integral and integro-differential equations of this kind. The Taylor differential transformation, thanks to its properties, often significantly simplifies the investigated problem, and therefore it has a number of applications in mathematics, engineering and technology. The authors of this paper have applied the Taylor differential transformation to solving, among others, nonlinear ordinary differential equations, variational problems, systems of nonlinear ordinary differential equations, integro-differential equations, the Stefan problem, as well as integral equations [9,10,11]. The discussed transformation is also used by many authors for solving various problems, for example numerous kinds of differential equations (ordinary and partial) and integral equations; there also exist a few well-known papers devoted to differential equations with a retarded argument [1,12,13,14,15,16,17,18].
The Taylor differential transformation has been known for a long time, but only recently have the methods based on it become practical, due to the development of computers and programs enabling symbolic calculations. For example, in [19] the authors apply the differential transform method to find the analytic solution of some delay differential equations with the help of the computer algebra system Mathematica, whereas in [20] the DTM is compared with another iterative method, the Daftardar-Gejji and Jafari method, in solving differential equations. In this paper we also apply the Mathematica software, version 12.2 [21,22].
In this paper we discuss integro-differential equations with a delayed (retarded) argument of the following form:
$$f\big(x,\, y(x+\alpha_0),\, y'(x+\alpha_1),\, \ldots,\, y^{(k)}(x+\alpha_k)\big) + \sum_{i=1}^{n} \int_{x_1}^{x_2} g_i\big(x,\, y(x+\beta_i)\big)\,dx + \sum_{i=1}^{m} \int_{x_0}^{x} h_i\big(t,\, y(t+\gamma_i)\big)\,dt = 0,\tag{1}$$
with conditions
$$y^{(i)}(\varepsilon_i) = \zeta_i, \qquad i = 0, 1, \ldots, k-1,\tag{2}$$
where $a \le x \le b$; $x+\alpha_i,\; x+\beta_j,\; t+\gamma_l \in [a,b]$ for $0 \le i \le k$, $1 \le j \le n$, $1 \le l \le m$; $x_0, x_1, x_2 \in [a,b]$; the functions $f$, $g_i$, $i = 1, 2, \ldots, n$, and $h_j$, $j = 1, 2, \ldots, m$, are continuous in $[a,b] \times \mathbb{R}$. Similarly, $\varepsilon_i \in [a,b]$, $\zeta_i \in \mathbb{R}$, $0 \le i \le k-1$.
In Equation (1) the functions $f$, $g_i$ and $h_i$ can be nonlinear, whereas the integrals occurring in this equation can take the particular forms of the integrals appearing in the Fredholm and Volterra integral equations, or they can have such form only with respect to the limits of integration, while the integrands $g_i$ and $h_i$ can also be nonlinear with respect to the sought function $y(x)$.
Equation (1) will be solved with the aid of the Taylor differential transformation, which, due to its properties, transforms this equation into a respective system of algebraic (often nonlinear) equations. The unknowns in this system are the coefficients of the Taylor series expansion of the sought function. After solving the developed system of equations, the approximate solution of the investigated equation can be determined.
Equation (1) in its full form appears rather rarely in mathematical models describing physical phenomena. One can find, however, specific cases of this equation in the description of phenomena with viscoelastic effects. The first papers concerning this problem were developed in the 19th century (by Boltzmann, Maxwell and Kelvin) and were devoted to investigating the creep and recovery phenomena in various materials. This theory evolved in the 20th century thanks to the research on polymers, and this development continues today. For example, paper [5] includes many models describing the viscoelasticity phenomenon, whereas in paper [6] one can find the Laederman model describing the reaction of fiber creep to predetermined tensions. This connection is expressed by the relation
$$\varepsilon(t) = J_{0,\sigma}\,\sigma(t) + \int_{0}^{t} J(t-\tau)\,\frac{d\big(g(\sigma(\tau))\big)}{d\tau}\,d\tau,$$
where $\varepsilon$ denotes the deformation, $J_{0,\sigma}$ is the initial creep compliance, $J$ is the creep compliance, $\sigma$ denotes the tension, and $g(\sigma)$ is the tension function.
Particular cases of Equation (1) can also be found in population dynamics, for example in the predator-prey model, described first by Volterra [7] with the aid of the following system of equations (a similar issue was considered, e.g., in work [23]):
$$N_1'(t) = \big(b_1 - a_1\,N_2(t)\big)\,N_1(t), \qquad N_2'(t) = \Big(a_2\,N_1(t) - b_2 + \int_{-r}^{0} k(s)\,N_1(t+s)\,ds\Big)\,N_2(t),\tag{3}$$
where $N_1$ and $N_2$ denote the sizes of the prey and predator populations, respectively.
Model (3), modified by Brelot in [24], takes the form
$$y'(x) = y(x)\Big(r_1 - a_{11}\,y(x) - a_{12}\int_{-\infty}^{x} F(x-s)\,z(s)\,ds\Big), \qquad z'(x) = z(x)\Big(a_{21}\int_{-\infty}^{x} G(x-s)\,y(s)\,ds - r_2 - a_{22}\,z(x)\Big),\tag{4}$$
where $r_1$ denotes the growth rate of the prey population in the absence of predators, $a_{11}$ is the self-regulation constant of the prey population, $a_{12}$ describes the predation of the prey by predators, $r_2$ denotes the death rate of predators in the absence of prey, $a_{21}$ is the conversion rate of predators, $a_{22}$ describes the intraspecific competition between the predators, and the functions $F$ and $G$ are the retardation functions responsible for including the prey and predators' history in the model. A specific case of System (4) is solved in Example 5.
The goal of this paper is to discuss specific cases of integro-differential equations with a retarded argument solved by using the new approach. Deriving a solution formula for the general form of the considered equation is impossible, but the discussed examples explain how to deal with these problems.
The novelty of this paper is a new approach to the solution of the equation in the general form (1), based on the Taylor differential transformation. The contribution of the presented research is the development of an efficient technique for the solution of integro-differential equations with a retarded (delayed) argument with the use of the Taylor differential transformation, and showing that the presented solution technique is efficient and relatively simple to use.
In Section 1 we introduce the equations being the object of our interest. Section 2 includes the theoretical description of the Taylor transformation together with its properties, which are particularly useful in the introduced solution procedure; selected properties are proved. In Section 3 we explain the details of the solution method, and in Section 4 we illustrate the theoretical description with five examples. Section 5 contains some comments and conclusions.

2. The Taylor Transformation

Let us assume that we consider only such functions of the real variable $x$, defined in some region $X \subset \mathbb{R}$, which can be expanded into the Taylor series within some neighborhood of a point $\alpha \in X$. We call such functions originals and denote them by small letters of the Latin alphabet, for example $f$, $y$, $u$, $v$, $w$, and so on. Thus, if the function $y$ is an original, then the following equality holds:
$$y(x) = \sum_{k=0}^{\infty} \frac{y^{(k)}(\alpha)}{k!}\,(x-\alpha)^k,\tag{5}$$
where $\alpha \in X$ is the point in the neighborhood of which the function $y$ is expanded into the Taylor series. To each original we assign a function $Y_\alpha$ of a nonnegative integer argument $k = 0, 1, 2, \ldots$, according to the formula
$$Y_\alpha(k) = \frac{y^{(k)}(\alpha)}{k!}, \qquad k = 0, 1, 2, \ldots\tag{6}$$
The function Y α is called the image of the function y, the T α –function of the function y or the transform of the function y, and the discussed transformation is called the Taylor transformation.
Obviously, having the $T_\alpha$-function $Y_\alpha$, one can find, according to Formulas (5) and (6), the corresponding original in the form of its expansion into the Taylor series, that is,
$$y(x) = \sum_{k=0}^{\infty} Y_\alpha(k)\,(x-\alpha)^k, \qquad x, \alpha \in X.\tag{7}$$
Transformation (6), assigning to each original its image, is called the direct transformation, whereas Transformation (7), assigning the corresponding original to the image, is called the inverse transformation. The connection between these two transformations will be denoted by means of the following symbols:
$$Y_\alpha(k) = T[y(x);\, k, \alpha],$$
for the direct transformation and
$$y(x) = T^{-1}[Y_\alpha(k);\, x],$$
for the inverse transformation, where T and T 1 are the symbols of the proper transformations.
In this notation, for example, for the function $y(x) = \cosh x$ and $\alpha = 0$ we have
$$Y_0(k) = T[\cosh x;\, k, 0] = T\Big[\frac{1}{2}\sum_{k=0}^{\infty}\frac{1+(-1)^k}{k!}\,x^k;\, k, 0\Big] = \frac{1+(-1)^k}{2\,k!},$$
where $k = 0, 1, 2, \ldots$. In the case of the inverse transformation for the above function we get
$$y(x) = T^{-1}[Y_0(k);\, x] = T^{-1}\Big[\frac{1+(-1)^k}{2\,k!};\, x\Big] = \frac{1}{2}\sum_{k=0}^{\infty}\frac{1+(-1)^k}{k!}\,x^k = \cosh x.$$
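The pair of direct and inverse transformations above is easy to exercise numerically. The following sketch (a Python illustration written for this presentation — the helper names are ours, not part of any DTM library) builds the T-function of $\cosh x$ from the closed formula and sums the inverse transformation (7):

```python
import math

def Y_cosh(k):
    # Direct transformation of cosh x at alpha = 0: Y0(k) = (1 + (-1)^k) / (2 k!)
    return (1 + (-1) ** k) / (2 * math.factorial(k))

def inverse_transform(Y, x, N=30):
    # Inverse transformation (7): partial sum of y(x) = sum_k Y(k) x^k
    return sum(Y(k) * x ** k for k in range(N + 1))

assert abs(inverse_transform(Y_cosh, 0.7) - math.cosh(0.7)) < 1e-12
print("T and T^(-1) agree on cosh x")
```

Truncating the inverse sum at N = 30 already recovers $\cosh 0.7$ to machine precision, since the coefficients decay factorially.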
The Taylor transformation possesses a number of properties which make the application of this tool quite simple, with the aid of computational platforms offering symbolic calculations, such as Mathematica.
In the current paper we use a specific case of the Taylor series, the Maclaurin series; therefore the index $\alpha$, constantly equal to zero, will be omitted henceforward.
Particularly, the following properties are very useful [1,13,14,25,26]:
$$T[x^n;\, k] = \delta(k-n) = \begin{cases} 1, & k = n, \\ 0, & k \ne n, \end{cases}\tag{8}$$
$$T[e^{ax};\, k] = \frac{a^k}{k!},\tag{9}$$
$$T[\sin(ax);\, k] = \frac{a^k}{k!}\,\sin\frac{\pi k}{2},\tag{10}$$
$$T[c \cdot u(x);\, k] = c \cdot U(k),\tag{11}$$
$$T[u(x) \pm w(x);\, k] = U(k) \pm W(k),\tag{12}$$
$$T[x^n \cdot u(x);\, k] = \begin{cases} 0, & k = 0, 1, \ldots, n-1, \\ U(k-n), & k \ge n, \end{cases}\tag{13}$$
$$T\Big[\int_{x_0}^{x} u(t)\,dt;\, k\Big] = \begin{cases} 0, & k = 0, \\ \dfrac{U(k-1)}{k}, & k \ge 1, \end{cases}\tag{14}$$
$$T\Big[u(x)\int_{x_0}^{x} w(t)\,dt;\, k\Big] = \begin{cases} 0, & k = 0, \\ \displaystyle\sum_{r=0}^{k-1} \frac{U(r)\,W(k-r-1)}{k-r}, & k \ge 1, \end{cases}\tag{15}$$
$$T\Big[u(x)\int_{x_0}^{x} v(t)\,w(t)\,dt;\, k\Big] = \begin{cases} 0, & k = 0, \\ \displaystyle\sum_{i=0}^{k-1}\sum_{j=0}^{k-i-1} \frac{U(i)\,V(j)\,W(k-i-j-1)}{k-i}, & k \ge 1, \end{cases}\tag{16}$$
where $a, c \in \mathbb{R}$, $n \in \mathbb{N} \cup \{0\}$, and $k = 0, 1, 2, \ldots$
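Properties of this kind can be checked mechanically against known transforms. A small Python check of our own (choosing $u = w = e^x$, because then $u(x)\int_0^x w(t)\,dt = e^{2x} - e^x$ has the known transform $(2^k - 1)/k!$):

```python
import math

fact = math.factorial
U = lambda k: 1 / fact(k)   # T[e^x; k] = 1/k!, Property (9) with a = 1
W = U

def prop15(k):
    # Property (15): T[u(x) * int_0^x w(t) dt; k] for k >= 1
    return sum(U(r) * W(k - r - 1) / (k - r) for r in range(k))

# With u = w = e^x we have u(x) * int_0^x w(t) dt = e^(2x) - e^x,
# whose transform is (2^k - 1)/k!.
for k in range(1, 10):
    assert abs(prop15(k) - (2 ** k - 1) / fact(k)) < 1e-12
print("Property (15) verified for k = 1..9")
```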
Besides the "classic" Properties (8)–(16), the Taylor differential transformation possesses a number of properties corresponding to the retarded argument. Some of them are presented below (the derivations of these formulas can be found, among others, in [1], whereas some of them, like for example Formula (33), were developed by the authors):
$$T[y(x+\alpha);\, k] = \sum_{r=k}^{N} \binom{r}{k}\,\alpha^{r-k}\,Y(r),\tag{17}$$
$$T[y(x)\,z(x+\alpha);\, k] = \sum_{r=0}^{k}\sum_{k_1=k-r}^{N} \binom{k_1}{k-r}\,\alpha^{k_1-k+r}\,Y(r)\,Z(k_1),\tag{18}$$
$$T[y(x+\alpha)\,z(x+\beta);\, k] = \sum_{r=0}^{k}\sum_{k_1=r}^{N}\sum_{k_2=k-r}^{N} \binom{k_1}{r}\binom{k_2}{k-r}\,\alpha^{k_1-r}\,\beta^{k_2-k+r}\,Y(k_1)\,Z(k_2),\tag{19}$$
$$T[y(x)\,z(x)\,w(x+\alpha);\, k] = \sum_{r=0}^{k}\sum_{r_1=0}^{k-r}\sum_{k_1=k-r-r_1}^{N} \binom{k_1}{k-r-r_1}\,\alpha^{k_1-k+r+r_1}\,Y(r)\,Z(r_1)\,W(k_1),\tag{20}$$
$$T[y(x)\,z(x+\alpha)\,w(x+\beta);\, k] = \sum_{r=0}^{k}\sum_{r_1=0}^{k-r}\sum_{k_1=r_1}^{N}\sum_{k_2=k-r-r_1}^{N} \binom{k_1}{r_1}\binom{k_2}{k-r-r_1}\,\alpha^{k_1-r_1}\,\beta^{k_2-k+r+r_1}\,Y(r)\,Z(k_1)\,W(k_2),\tag{21}$$
$$T[y'(x+\alpha);\, k] = (k+1)\sum_{k_1=k+1}^{N} \binom{k_1}{k+1}\,\alpha^{k_1-k-1}\,Y(k_1),\tag{22}$$
$$T[y(x)\,z^{(n)}(x+\alpha);\, k] = \sum_{k_1=0}^{k}\sum_{r=k-k_1+n}^{N} \frac{(k-k_1+n)!}{(k-k_1)!}\binom{r}{k-k_1+n}\,\alpha^{r-k+k_1-n}\,Y(k_1)\,Z(r),\tag{23}$$
$$T[y^{(n)}(x+\alpha)\,z^{(m)}(x+\beta);\, k] = \sum_{k_1=0}^{k}\sum_{r_1=k_1+n}^{N}\sum_{r_2=k-k_1+m}^{N} \frac{(k_1+n)!}{k_1!}\,\frac{(k-k_1+m)!}{(k-k_1)!}\binom{r_1}{k_1+n}\binom{r_2}{k-k_1+m}\,\alpha^{r_1-k_1-n}\,\beta^{r_2-k+k_1-m}\,Y(r_1)\,Z(r_2),\tag{24}$$
$$T[y(x)\,z^{(n)}(x+\alpha)\,w^{(m)}(x+\beta);\, k] = \sum_{r=0}^{k}\sum_{k_1=0}^{k-r}\sum_{r_1=k_1+n}^{N}\sum_{r_2=k-r-k_1+m}^{N} \frac{(k_1+n)!}{k_1!}\,\frac{(k-r-k_1+m)!}{(k-r-k_1)!}\binom{r_1}{k_1+n}\binom{r_2}{k-r-k_1+m}\,\alpha^{r_1-k_1-n}\,\beta^{r_2-k+r+k_1-m}\,Y(r)\,Z(r_1)\,W(r_2),\tag{25}$$
$$T\Big[\int_{x_0}^{x} y(t+\alpha)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{k_1=k-1}^{N} \binom{k_1}{k-1}\,\alpha^{k_1-k+1}\,Y(k_1), & k \ge 1, \end{cases}\tag{26}$$
$$T\Big[\int_{x_0}^{x} y(t)\,z(t+\alpha)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{k_1=r}^{N} \binom{k_1}{r}\,\alpha^{k_1-r}\,Y(k-r-1)\,Z(k_1), & k \ge 1, \end{cases}\tag{27}$$
$$T\Big[\int_{x_0}^{x} y(t+\alpha)\,z(t+\beta)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{k_1=r}^{N}\sum_{k_2=k-r-1}^{N} \binom{k_1}{r}\binom{k_2}{k-r-1}\,\alpha^{k_1-r}\,\beta^{k_2-k+r+1}\,Y(k_1)\,Z(k_2), & k \ge 1, \end{cases}\tag{28}$$
$$T\Big[\int_{x_0}^{x} y(t)\,z(t+\alpha)\,w(t+\beta)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{r_1=0}^{k-r-1}\sum_{k_1=r_1}^{N}\sum_{k_2=k-r-r_1-1}^{N} \binom{k_1}{r_1}\binom{k_2}{k-r-r_1-1}\,\alpha^{k_1-r_1}\,\beta^{k_2-k+r+r_1+1}\,Y(r)\,Z(k_1)\,W(k_2), & k \ge 1, \end{cases}\tag{29}$$
$$T\Big[y(x)\int_{x_0}^{x} z(t+\alpha)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \displaystyle\sum_{r=0}^{k-1}\sum_{k_1=k-r-1}^{N} \binom{k_1}{k-r-1}\,\frac{\alpha^{k_1-k+r+1}}{k-r}\,Y(r)\,Z(k_1), & k \ge 1, \end{cases}\tag{30}$$
$$T\Big[y(x)\int_{x_0}^{x} z(t)\,w(t+\alpha)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \displaystyle\sum_{r=0}^{k-1}\sum_{r_1=0}^{k-r-1}\sum_{k_1=k-r-r_1-1}^{N} \binom{k_1}{k-r-r_1-1}\,\frac{\alpha^{k_1-k+r+r_1+1}}{k-r}\,Y(r)\,Z(r_1)\,W(k_1), & k \ge 1, \end{cases}\tag{31}$$
$$T\Big[y(x)\int_{x_0}^{x} z(t+\alpha)\,w(t+\beta)\,dt;\, k\Big] = \begin{cases} 0, & k=0, \\ \displaystyle\sum_{r=0}^{k-1}\sum_{r_1=0}^{k-r-1}\sum_{k_1=r_1}^{N}\sum_{k_2=k-r-r_1-1}^{N} \binom{k_1}{r_1}\binom{k_2}{k-r-r_1-1}\,\frac{\alpha^{k_1-r_1}\,\beta^{k_2-k+r+r_1+1}}{k-r}\,Y(r)\,Z(k_1)\,W(k_2), & k \ge 1, \end{cases}\tag{32}$$
$$T\Big[\int_{0}^{x} \frac{y(t+\alpha)}{z(t)}\,dt;\, k\Big] = F(k) = \begin{cases} 0, & k=0, \\ \dfrac{1}{k\,Z(0)}\Bigg(\displaystyle\sum_{r=k-1}^{N} \binom{r}{k-1}\,\alpha^{r-k+1}\,Y(r) - \sum_{r=0}^{k-2} (r+1)\,F(r+1)\,Z(k-1-r)\Bigg), & k \ge 1, \end{cases}\tag{33}$$
where $\alpha, \beta \in \mathbb{R}$, $n, m \in \mathbb{N}$, $k = 0, 1, 2, \ldots$, and $N \to \infty$.
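The shift property (17) is the workhorse of the whole list, so it is worth a quick numerical sanity check. In the Python sketch below (our own illustration) we take $y(x) = e^x$, for which $T[y(x+\alpha); k] = e^\alpha/k!$ in closed form, and compare it with the truncated sum of (17):

```python
from math import comb, exp, factorial

def shifted_T(Y, alpha, k, N=40):
    # Property (17): T[y(x+alpha); k] = sum_{r=k}^{N} C(r, k) alpha^(r-k) Y(r)
    return sum(comb(r, k) * alpha ** (r - k) * Y(r) for r in range(k, N + 1))

Y = lambda r: 1 / factorial(r)   # y(x) = e^x, so T[y; r] = 1/r!
alpha = 0.3
for k in range(6):
    # e^(x+alpha) = e^alpha * e^x, hence T[y(x+alpha); k] = e^alpha / k!
    assert abs(shifted_T(Y, alpha, k) - exp(alpha) / factorial(k)) < 1e-12
print("Property (17) verified for k = 0..5")
```

The truncation at N = 40 is harmless here because the tail of the sum again decays factorially.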
As an example, we prove the correctness of Formula (33). For this purpose we first prove some auxiliary identities (in all these lemmas we assume that the occurring functions are originals).
Lemma 1 (see (14)). Let $f(x) = \int_0^x y(t)\,dt$. Then we have
$$F(k) = \begin{cases} 0, & k=0, \\ \dfrac{Y(k-1)}{k}, & k \ge 1. \end{cases}$$
Proof. 
Using the expansion of function y into the Taylor (Maclaurin) series, we have
$$f(x) = \int_0^x \sum_{k=0}^{\infty} Y(k)\,t^k\,dt = \int_0^x \big(Y(0) + Y(1)\,t + Y(2)\,t^2 + \cdots\big)\,dt = Y(0)\,x + \frac{Y(1)}{2}\,x^2 + \frac{Y(2)}{3}\,x^3 + \cdots + \frac{Y(k-1)}{k}\,x^k + \cdots,$$
hence we get the thesis
$$F(k) = \begin{cases} 0, & k=0, \\ \dfrac{Y(k-1)}{k}, & k \ge 1. \end{cases} \;\square$$
Lemma 2 (see (17)). Let $f(x) = y(x+\alpha)$. Then we have
$$F(k) = \sum_{r=k}^{N} \binom{r}{k}\,\alpha^{r-k}\,Y(r), \qquad k \ge 0, \; N \to \infty.$$
Proof. 
Using the expansion of function y into the Taylor series, we have
$$\begin{aligned} f(x) &= \sum_{k=0}^{\infty} Y(k)\,(x+\alpha)^k = Y(0) + Y(1)(x+\alpha) + Y(2)(x+\alpha)^2 + Y(3)(x+\alpha)^3 + \cdots \\ &= \big(Y(0) + \alpha\,Y(1) + \alpha^2\,Y(2) + \alpha^3\,Y(3) + \alpha^4\,Y(4) + \cdots\big) + \big(Y(1) + 2\alpha\,Y(2) + 3\alpha^2\,Y(3) + 4\alpha^3\,Y(4) + \cdots\big)\,x \\ &\quad + \Big(Y(2) + \binom{3}{2}\alpha\,Y(3) + \binom{4}{2}\alpha^2\,Y(4) + \binom{5}{2}\alpha^3\,Y(5) + \cdots\Big)\,x^2 + \cdots + \Big(\binom{k}{k}Y(k) + \binom{k+1}{k}\alpha\,Y(k+1) + \binom{k+2}{k}\alpha^2\,Y(k+2) + \cdots\Big)\,x^k + \cdots, \end{aligned}$$
hence we get
$$F(k) = \binom{k}{k}Y(k) + \binom{k+1}{k}\alpha\,Y(k+1) + \binom{k+2}{k}\alpha^2\,Y(k+2) + \cdots = \sum_{r=k}^{N} \binom{r}{k}\,\alpha^{r-k}\,Y(r), \qquad k \ge 0, \; N \to \infty. \;\square$$
Lemma 3. If $f(x) = \dfrac{y(x)}{z(x)}$, $z(0) \ne 0$, then
$$F(k) = \frac{1}{Z(0)}\Big(Y(k) - \sum_{r=0}^{k-1} F(r)\,Z(k-r)\Big).$$
Proof. 
Let $y(x) = \sum_{k=0}^{\infty} Y(k)\,x^k$ and $z(x) = \sum_{k=0}^{\infty} Z(k)\,x^k$; then we have
$$f(x) = \frac{y(x)}{z(x)} = \sum_{k=0}^{\infty} F(k)\,x^k,\tag{34}$$
where the coefficients $F(k)$, $k \ge 0$, are unknown. Since Relation (34) implies that $f(x)\,z(x) = y(x)$, we have
$$\big(F(0) + F(1)\,x + F(2)\,x^2 + \cdots\big)\big(Z(0) + Z(1)\,x + Z(2)\,x^2 + \cdots\big) = Y(0) + Y(1)\,x + Y(2)\,x^2 + \cdots$$
and by comparing the coefficients associated to the respective powers of x, we obtain successively
$$F(0)\,Z(0) = Y(0) \;\Rightarrow\; F(0) = \frac{Y(0)}{Z(0)},$$
$$F(0)\,Z(1) + F(1)\,Z(0) = Y(1) \;\Rightarrow\; F(1) = \frac{Y(1) - F(0)\,Z(1)}{Z(0)},$$
$$F(0)\,Z(2) + F(1)\,Z(1) + F(2)\,Z(0) = Y(2) \;\Rightarrow\; F(2) = \frac{Y(2) - \big(F(0)\,Z(2) + F(1)\,Z(1)\big)}{Z(0)}, \;\ldots,$$
$$F(0)\,Z(k) + F(1)\,Z(k-1) + F(2)\,Z(k-2) + \cdots + F(k-1)\,Z(1) + F(k)\,Z(0) = Y(k),$$
hence we get
$$F(k) = \frac{Y(k) - \big(F(0)\,Z(k) + F(1)\,Z(k-1) + F(2)\,Z(k-2) + \cdots + F(k-1)\,Z(1)\big)}{Z(0)} = \frac{1}{Z(0)}\Big(Y(k) - \sum_{r=0}^{k-1} F(r)\,Z(k-r)\Big), \qquad k \ge 1.$$
Since for $k = 0$ the last sum is "empty", the obtained formula holds true for all integers $k \ge 0$. Thus we have
$$F(k) = \frac{1}{Z(0)}\Big(Y(k) - \sum_{r=0}^{k-1} F(r)\,Z(k-r)\Big), \qquad k \ge 0,$$
which is the thesis of the lemma. □
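Lemma 3 is itself a useful little algorithm: it converts series division into a forward recurrence. The Python sketch below (our own illustration) applies it to $f = \sin/\cos$ and recovers the Maclaurin coefficients of $\tan x$:

```python
import math

Y = lambda k: math.sin(math.pi * k / 2) / math.factorial(k)   # T[sin x; k], Property (10)
Z = lambda k: math.cos(math.pi * k / 2) / math.factorial(k)   # T[cos x; k]

def quotient_T(Y, Z, N):
    # Lemma 3: F(k) = (Y(k) - sum_{r=0}^{k-1} F(r) Z(k-r)) / Z(0)
    F = []
    for k in range(N + 1):
        F.append((Y(k) - sum(F[r] * Z(k - r) for r in range(k))) / Z(0))
    return F

F = quotient_T(Y, Z, 5)   # transform of tan x = x + x^3/3 + 2x^5/15 + ...
assert abs(F[1] - 1) < 1e-12
assert abs(F[3] - 1 / 3) < 1e-12
assert abs(F[5] - 2 / 15) < 1e-12
print("Lemma 3 reproduces the Maclaurin coefficients of tan x")
```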
Theorem 1 (see (33)). Let $f(x) = \int_0^x \dfrac{y(t+\alpha)}{z(t)}\,dt$, $z(0) \ne 0$. Then $F(0) = 0$ and for $k \ge 1$ we have
$$F(k) = \frac{1}{k\,Z(0)}\Bigg(\sum_{r=k-1}^{N} \binom{r}{k-1}\,\alpha^{r-k+1}\,Y(r) - \sum_{r=0}^{k-2} (r+1)\,F(r+1)\,Z(k-1-r)\Bigg).$$
Proof. 
Applying Lemmas 2 and 3 we can write that if $g(x) = \dfrac{y(x+\alpha)}{z(x)}$, $z(0) \ne 0$, then
$$G(k) = \frac{1}{Z(0)}\Big(Y_\alpha(k) - \sum_{r=0}^{k-1} G(r)\,Z(k-r)\Big),$$
where $Y_\alpha(k)$ is the T-function of the function $y(x+\alpha)$, that is (see Lemma 2):
$$Y_\alpha(k) = \sum_{r=k}^{N} \binom{r}{k}\,\alpha^{r-k}\,Y(r), \qquad k \ge 0, \; N \to \infty.$$
Thus we have
$$G(k) = \frac{1}{Z(0)}\Big(\sum_{r=k}^{N} \binom{r}{k}\,\alpha^{r-k}\,Y(r) - \sum_{r=0}^{k-1} G(r)\,Z(k-r)\Big), \qquad k \ge 0, \; N \to \infty.$$
Now, since $f(x) = \int_0^x g(t)\,dt$, Lemma 1 yields $F(0) = 0$ and for $k \ge 1$:
$$F(k) = \frac{1}{Z(0)}\,\frac{1}{k}\Bigg(\sum_{r=k-1}^{N} \binom{r}{k-1}\,\alpha^{r-(k-1)}\,Y(r) - \sum_{r=0}^{k-2} H_{k,r}\Bigg), \qquad H_{k,r} = G(r)\,Z(k-1-r).$$
One should pay special attention to the sum of the elements $H_{k,r}$. We cannot simply substitute $k-1$ in place of $k$ there, because the coefficients $F$ occur in this sum; shifting the indices would lead to negative indices. The shift must instead be compensated by appropriate multipliers, since $G(r) = (r+1)\,F(r+1)$. Hence we finally get for $k \ge 1$:
$$F(k) = \frac{1}{k\,Z(0)}\Bigg(\sum_{r=k-1}^{N} \binom{r}{k-1}\,\alpha^{r-(k-1)}\,Y(r) - \sum_{r=0}^{k-2} (r+1)\,F(r+1)\,Z(k-1-r)\Bigg),$$
which ends the proof of theorem. □
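Formula (33) can be smoke-tested on a case with a known answer. In this Python sketch of ours we take $y(t) = z(t) = e^t$ and $\alpha = 1$, so the integrand is $e^{t+1}/e^t = e$ and $f(x) = e\,x$, i.e., $F(1) = e$ and every other coefficient vanishes:

```python
from math import comb, e, factorial

Y = lambda r: 1 / factorial(r)   # y(t) = e^t
Z = Y                            # z(t) = e^t
alpha, N = 1.0, 40

# Formula (33) for f(x) = int_0^x y(t+alpha)/z(t) dt; here the integrand
# equals the constant e, so f(x) = e*x.
F = [0.0]                        # F(0) = 0
for k in range(1, 8):
    s1 = sum(comb(r, k - 1) * alpha ** (r - k + 1) * Y(r) for r in range(k - 1, N + 1))
    s2 = sum((r + 1) * F[r + 1] * Z(k - 1 - r) for r in range(k - 1))
    F.append((s1 - s2) / (k * Z(0)))

assert abs(F[1] - e) < 1e-10
assert all(abs(c) < 1e-10 for c in F[2:])
print("Formula (33) verified on f(x) = e*x")
```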

3. Method of Solution

Besides the properties listed above, for solving the equations of the considered kind we also use the initial Conditions (2). To use them efficiently, one should notice that
$$y(x) = \sum_{k=0}^{\infty} \frac{y^{(k)}(x_0)}{k!}\,(x-x_0)^k \;\Rightarrow\; y^{(n)}(x) = \sum_{k=0}^{\infty} \frac{y^{(k+n)}(x_0)}{k!}\,(x-x_0)^k,$$
or else
$$y(x) = \sum_{k=0}^{\infty} Y(k)\,(x-x_0)^k \;\Rightarrow\; y^{(n)}(x) = \sum_{k=0}^{\infty} \frac{(k+n)!}{k!}\,Y(k+n)\,(x-x_0)^k,$$
hence, if we take $x_0 = 0$, we get
$$y^{(n)}(\beta) = \gamma \;\Longleftrightarrow\; \sum_{k=0}^{\infty} \frac{(k+n)!}{k!}\,Y(k+n)\,\beta^k = \gamma.\tag{35}$$
Let us observe that in Formula (27), and in the similar formulas as well, the lower limit of integration is equal to the point around which the given function is expanded into the Taylor series. According to the agreement mentioned before, in this paper we take $x_0 = 0$. If in Equation (1) the lower limit of the Volterra-type integral is different from zero, then it is enough to notice that (independently of whether $a < 0$, $a = 0$, or $a > 0$), if
$$u(x) = \int_a^x h\big(t, y(t+\beta)\big)\,dt = \int_a^0 h\big(t, y(t+\beta)\big)\,dt + \int_0^x h\big(t, y(t+\beta)\big)\,dt = \lambda + v(x),$$
where
$$\lambda = \int_a^0 h\big(t, y(t+\beta)\big)\,dt \in \mathbb{R},$$
$$v(x) = \int_0^x h\big(t, y(t+\beta)\big)\,dt,$$
and
$$T[v(x);\, k] = \begin{cases} 0, & k=0, \\ V(k), & k \ge 1, \end{cases}$$
then
$$T[u(x);\, k] = \begin{cases} \lambda, & k=0, \\ V(k), & k \ge 1. \end{cases}\tag{36}$$
Obviously, in Formulas (17)–(33) we cannot take infinite sums. Instead, the summation is executed up to a fixed $N \in \mathbb{N}$. We then obtain $N+1$ equations with the unknowns $Y(0), Y(1), \ldots, Y(N)$. After solving this system of equations we receive the approximate solution
$$y_N(x) = \sum_{k=0}^{N} Y(k)\,x^k.$$

4. Examples

Example 1.
We consider the equation
$$y'(x-1)\,y'(x+1) + 9x\,y(x) - \frac{21}{20}\int_{-1}^{1} y(x)\,y(x-1)\,dx + 4\int_{0}^{x} \big(y(t) - y(t-1)\big)\,dt = 4x\,(3-x^2),\tag{37}$$
with the condition
$$y(-2) = 0,\tag{38}$$
the exact solution of which is given by the function $y(x) = x\,(4-x^2)$.
Equation (37), under the assumption $\frac{21}{20}\int_{-1}^{1} y(x)\,y(x-1)\,dx = \lambda \in \mathbb{R}$ and after applying, among others, Properties (8), (11), (12), (13), (14), (24) and (26), transforms to the form
$$\begin{aligned} &\sum_{r=0}^{k}\sum_{k_1=r+1}^{N}\sum_{k_2=k-r+1}^{N} (r+1)(k-r+1)\binom{k_1}{r+1}\binom{k_2}{k-r+1}(-1)^{k_1-r-1}\,Y(k_1)\,Y(k_2) \\ &\quad + \begin{cases} 0, & k=0, \\ 9\,Y(k-1) + \dfrac{4\,Y(k-1)}{k}, & k \ge 1 \end{cases} \;-\; \lambda\,\delta(k) \;-\; 4\begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{k_1=k-1}^{N}\binom{k_1}{k-1}(-1)^{k_1-k+1}\,Y(k_1), & k \ge 1 \end{cases} \\ &= 12\,\delta(k-1) - 4\,\delta(k-3),\end{aligned}\tag{39}$$
with the following equation included, on the grounds of the initial Condition (38) and Formula (35):
$$\sum_{i=0}^{N} Y(i)\,(-2)^i = 0.\tag{40}$$
Taking $N = 3$, Equation (39) leads to a system of equations of the form
$$\begin{aligned} &-\lambda + (Y(1))^2 - 4\,(Y(2))^2 + 6\,Y(1)\,Y(3) + 9\,(Y(3))^2 = 0, \\ &9\,Y(0) + 4\big(Y(1)(1+Y(2)) + Y(3) - Y(2)(1+3\,Y(3))\big) = 12, \\ &4\,Y(2) + 4\,(Y(2))^2 - 6\,Y(3)(1+3\,Y(3)) + 3\,Y(1)(3+2\,Y(3)) = 0, \\ &4\,Y(3) + 3\,Y(2)(3+4\,Y(3)) = -4, \\ &Y(0) - 2\,Y(1) + 4\,Y(2) - 8\,Y(3) = 0.\end{aligned}\tag{41}$$
Solving the system of Equations (41), we get three groups of solutions. After appropriate verification (we assess the quality of the obtained approximate solution by substituting it into Equation (37)), it turns out that the best solution is $Y(0) = 0$, $Y(1) = 4$, $Y(2) = 0$, $Y(3) = -1$ and $\lambda = 1$, that is, the function $y_3(x) = 4x - x^3$, which is the exact solution. For $N > 3$ we obtain the same values of $\lambda$ and $Y(i)$, $0 \le i \le 3$, and $Y(i) = 0$ for $i \ge 4$, that is, the same exact solution. It is worth noting that we did not need to use the definition of $\lambda$ here.
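The claim that $y(x) = x(4-x^2)$ satisfies Equation (37) can be cross-checked directly. The Python sketch below is our own illustration: it evaluates the left-hand side of (37) with Simpson quadrature standing in for the exact integrals and compares it with the right-hand side at several points:

```python
import math

y  = lambda x: x * (4 - x ** 2)   # exact solution of Example 1
yp = lambda x: 4 - 3 * x ** 2     # y'(x)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even); also works when b < a
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def lhs(x):
    I1 = simpson(lambda t: y(t) * y(t - 1), -1, 1)   # definite (Fredholm-type) integral
    I2 = simpson(lambda t: y(t) - y(t - 1), 0, x)    # Volterra-type integral
    return yp(x - 1) * yp(x + 1) + 9 * x * y(x) - 21 / 20 * I1 + 4 * I2

for x in (-1.5, 0.4, 1.2):
    assert abs(lhs(x) - 4 * x * (3 - x ** 2)) < 1e-6
print("y(x) = x(4 - x^2) satisfies Equation (37)")
```

Incidentally, the same run confirms $\frac{21}{20}\int_{-1}^{1} y(x)\,y(x-1)\,dx = 1$, the value of $\lambda$ found above.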
Example 2.
We consider the equation
$$\Big(y''\big(x-\tfrac{\pi}{2}\big)\Big)^2 - 2\pi\,y'\big(x-\tfrac{\pi}{2}\big) + \frac{1}{2}\int_{0}^{x} \Big(y\big(t+\tfrac{\pi}{2}\big) - y\big(t-\tfrac{\pi}{2}\big)\Big)^2\,dt = \frac{1}{2}\,(4+\pi^2)\,x + \cos 2x - \sin 2x,\tag{42}$$
for $-\frac{\pi}{2} \le x \le \frac{\pi}{2}$, with the conditions
$$y\Big(\frac{\pi}{4}\Big) = \frac{\pi}{4}, \qquad y'(0) = 0,\tag{43}$$
the exact solution of which is given by the function $y(x) = x - \sin x + \cos x$.
Equation (42), as a result of applying, among others, Properties (8), (10)–(12), (22)–(26) and (28), and using the initial Condition (43) ($Y(1) = 0$), develops into the form
$$\begin{aligned} &\sum_{r=0}^{k}\sum_{k_1=r+2}^{N}\sum_{k_2=k-r+2}^{N} (r+1)(r+2)(k-r+1)(k-r+2)\binom{k_1}{r+2}\binom{k_2}{k-r+2}\Big(\!-\frac{\pi}{2}\Big)^{k_1+k_2-k-4} Y(k_1)\,Y(k_2) \\ &\quad - 2\pi\,(k+1)\sum_{k_1=k+1}^{N}\binom{k_1}{k+1}\Big(\!-\frac{\pi}{2}\Big)^{k_1-k-1} Y(k_1) \\ &\quad + \frac{1}{2}\begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{k_1=r}^{N}\sum_{k_2=k-r-1}^{N}\binom{k_1}{r}\binom{k_2}{k-r-1}\Big(\frac{\pi}{2}\Big)^{k_1+k_2-k+1} Y(k_1)\,Y(k_2), & k \ge 1 \end{cases} \\ &\quad - \begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{k_1=r}^{N}\sum_{k_2=k-r-1}^{N}\binom{k_1}{r}\binom{k_2}{k-r-1}\Big(\frac{\pi}{2}\Big)^{k_1-r}\Big(\!-\frac{\pi}{2}\Big)^{k_2-k+r+1} Y(k_1)\,Y(k_2), & k \ge 1 \end{cases} \\ &\quad + \frac{1}{2}\begin{cases} 0, & k=0, \\ \dfrac{1}{k}\displaystyle\sum_{r=0}^{k-1}\sum_{k_1=r}^{N}\sum_{k_2=k-r-1}^{N}\binom{k_1}{r}\binom{k_2}{k-r-1}\Big(\!-\frac{\pi}{2}\Big)^{k_1+k_2-k+1} Y(k_1)\,Y(k_2), & k \ge 1 \end{cases} \\ &= \frac{2^k}{k!}\Big(\cos\frac{k\pi}{2} - \sin\frac{k\pi}{2}\Big) + \frac{1}{2}\,(4+\pi^2)\,\delta(k-1),\end{aligned}\tag{44}$$
with the following equation included, based on the initial Condition (43) and Formula (35):
$$\sum_{i=0}^{N} Y(i)\,\Big(\frac{\pi}{4}\Big)^i = \frac{\pi}{4}.\tag{45}$$
The unknowns of this system are $Y(0), Y(2), Y(3), \ldots, Y(N)$. Taking $N = 5$ in Relations (44) and (45) we get a system of equations, the solution of which is $Y(0) = 0.9759$, $Y(2) = -0.4389$, $Y(3) = 0.1656$, $Y(4) = 0.0071$, $Y(5) = -0.0091$, which gives
$$y_5(x) = 0.9759 - 0.4389\,x^2 + 0.1656\,x^3 + 0.0071\,x^4 - 0.0091\,x^5,$$
whereas by taking $N = 9$ we obtain the approximate solution $y_9(x) = \sum_{i=0}^{9} Y(i)\,x^i$:
$$y_9(x) = 0.9953 - 0.4923\,x^2 + 0.177\,x^3 + 0.0296\,x^4 - 0.0128\,x^5 - 0.0021\,x^6 + 0.0006\,x^7 + 0.0002\,x^8 + 0.0000\,x^9.$$
Plots of the obtained solutions, together with plots of the absolute errors $\Delta_N(x)$ of these solutions,
$$\Delta_N(x) = \big|y(x) - y_N(x)\big|,$$
are presented in Figure 1, Figure 2, Figure 3 and Figure 4. In Figure 1 and Figure 3 the solid green line represents the exact solution, whereas the dashed red line marks the approximate solution. The errors Δ are displayed in Figure 2 and Figure 4.
Example 3.
Let us consider the equation
$$e^{-2x}\,y'(x-1)\,y''(x+1) - \frac{15}{4}\int_{-1}^{1} (x+1)\,y(x+1)\,dx + \frac{6}{e}\int_{-1}^{x} (t+2)\,e^{-t}\,y(t+1)\,dt = x\,(2x^2 + 4x - 13),\tag{46}$$
for $-2 \le x \le 2$, with the conditions
$$y(0) = -2, \qquad y'(-1) = -\frac{2}{e},\tag{47}$$
the exact solution of which is defined by the function $y(x) = (x-2)\,e^{x}$.
Equation (46), under the assumption
$$\frac{15}{4}\int_{-1}^{1} (x+1)\,y(x+1)\,dx = \lambda_1 \in \mathbb{R}$$
and after applying Property (36), as well as under the assumption
$$\frac{6}{e}\int_{-1}^{0} (x+2)\,e^{-x}\,y(x+1)\,dx = \lambda_2 \in \mathbb{R}$$
and after applying, among others, Properties (8), (9), (11), (12), (13), (25) and (27), transforms into the form
$$\begin{aligned} &\sum_{r=0}^{k}\sum_{r_1=0}^{k-r}\sum_{k_1=r_1+1}^{N}\sum_{k_2=k-r-r_1+2}^{N} (r_1+1)(k-r-r_1+1)(k-r-r_1+2)\binom{k_1}{r_1+1}\binom{k_2}{k-r-r_1+2}(-1)^{k_1-r_1-1}\,\frac{(-2)^r}{r!}\,Y(k_1)\,Y(k_2) \\ &\quad + \frac{6}{e}\begin{cases} 0, & k=0, \\ \displaystyle\sum_{r=0}^{k-1}\sum_{r_1=0}^{k-r-1}\sum_{k_1=k-r-r_1-1}^{N} \binom{k_1}{k-r-r_1-1}\big(\delta(r-1) + 2\,\delta(r)\big)\,\frac{(-1)^{r_1}}{k\cdot r_1!}\,Y(k_1), & k \ge 1 \end{cases} \\ &\quad + (\lambda_2 - \lambda_1)\,\delta(k) = 2\,\delta(k-3) + 4\,\delta(k-2) - 13\,\delta(k-1),\end{aligned}\tag{48}$$
where $Y(0) = -2$ on the basis of Condition (47), with the following equation included, on the grounds of the initial Condition (47) and Formula (35):
$$\sum_{i=0}^{N-1} (i+1)\,Y(i+1)\,(-1)^i = -\frac{2}{e},\tag{49}$$
and the definitions of the constants $\lambda_1$ and $\lambda_2$ given below:
$$\frac{15}{4}\int_{-1}^{1} (x+1)\sum_{i=0}^{N} Y(i)\,(x+1)^i\,dx = \lambda_1, \qquad \frac{6}{e}\int_{-1}^{0} (x+2)\,e^{-x}\sum_{i=0}^{N} Y(i)\,(x+1)^i\,dx = \lambda_2.\tag{50}$$
Taking a fixed value of $N$ in Equations (48)–(50), we get a system of equations with the unknowns $Y(1), Y(2), \ldots, Y(N)$. Taking $N = 6$, we receive the following approximate solution
$$y_6(x) = -2 - 0.9159\,x - 0.037\,x^2 + 0.1063\,x^3 + 0.0901\,x^4 + 0.0394\,x^5 + 0.0082\,x^6\tag{51}$$
with $\lambda_1 = -14.8719$ and $\lambda_2 = -12.9006$ (their exact values are $\lambda_1 = -15$, $\lambda_2 = -13$), whereas for $N = 9$ we obtain the following approximate solution
$$y_9(x) = -2 - 1.0049\,x + 0.002\,x^2 + 0.1701\,x^3 + 0.083\,x^4 + 0.0242\,x^5 + 0.0054\,x^6 + 0.001\,x^7 + 0.0002\,x^8 + 0.00002\,x^9\tag{52}$$
with $\lambda_1 = -15.0077$ and $\lambda_2 = -13.006$.
Plots of the approximate Solutions (51) and (52), the exact solution and the absolute errors Δ N ( x ) of these approximations are displayed in Figure 5, Figure 6, Figure 7 and Figure 8 (the notation of figures is the same as previously).
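Taking Equation (46) in the form $e^{-2x}\,y'(x-1)\,y''(x+1) - \frac{15}{4}\int_{-1}^{1}(x+1)\,y(x+1)\,dx + \frac{6}{e}\int_{-1}^{x}(t+2)\,e^{-t}\,y(t+1)\,dt = x(2x^2+4x-13)$ (our reconstruction of the garbled display), the stated exact solution $y(x) = (x-2)\,e^x$ can be verified numerically. A Python sketch of ours:

```python
import math

y   = lambda x: (x - 2) * math.exp(x)   # exact solution of Example 3
yp  = lambda x: (x - 1) * math.exp(x)   # y'(x)
ypp = lambda x: x * math.exp(x)         # y''(x)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even); also works when b < a
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def lhs(x):
    I1 = simpson(lambda t: (t + 1) * y(t + 1), -1, 1)
    I2 = simpson(lambda t: (t + 2) * math.exp(-t) * y(t + 1), -1, x)
    return math.exp(-2 * x) * yp(x - 1) * ypp(x + 1) - 15 / 4 * I1 + 6 / math.e * I2

for x in (-1.0, 0.5, 1.5):
    assert abs(lhs(x) - x * (2 * x ** 2 + 4 * x - 13)) < 1e-5
print("y(x) = (x - 2) e^x satisfies Equation (46)")
```

The same quadratures reproduce $\lambda_1 = -15$ and $\lambda_2 = -13$, the exact values quoted above.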
Example 4.
Now we take into consideration the following equation
$$\Big(e^{-x} + y''(x-1)\Big)\Big(e^{-x} - y''(x+1)\Big) + 2\int_{0}^{1} y(x)\,y(x-1)\,dx + e\int_{0}^{x} \frac{1 - y(t-1)}{1 - y(t)}\,dt = 2 + x - 2\sinh 2x,\tag{53}$$
for $-2 \le x \le 2$, with the conditions
$$y(0) = 0, \qquad y'(0) = -1,\tag{54}$$
the exact solution of which is described by the function $y(x) = 1 - e^{x}$.
Equation (53), as a result of applying Property (36), that is, assuming that
$$2\int_{0}^{1} y(x)\,y(x-1)\,dx = \lambda \in \mathbb{R},$$
and as a result of using, among others, Properties (8), (9), (11), (12), (23), (24) and (33), develops into the form
$$\begin{aligned} &\sum_{r=0}^{k}\sum_{k_1=k-r+2}^{N} \frac{(k-r+2)!}{(k-r)!}\binom{k_1}{k-r+2}(-1)^{k_1-k+r-2}\,\frac{(-1)^r}{r!}\,Y(k_1) - \sum_{r=0}^{k}\sum_{k_1=k-r+2}^{N} \frac{(k-r+2)!}{(k-r)!}\binom{k_1}{k-r+2}\,\frac{(-1)^r}{r!}\,Y(k_1) \\ &\quad - \sum_{r=0}^{k}\sum_{k_1=r+2}^{N}\sum_{k_2=k-r+2}^{N} \frac{(r+2)!}{r!}\,\frac{(k-r+2)!}{(k-r)!}\binom{k_1}{r+2}\binom{k_2}{k-r+2}(-1)^{k_1-r-2}\,Y(k_1)\,Y(k_2) \\ &\quad + \lambda\,\delta(k) + e\,F(k) = 2\,\delta(k) + \delta(k-1) - \frac{2^k}{k!},\end{aligned}\tag{55}$$
where
$$F(k) = \begin{cases} 0, & k=0, \\ \dfrac{1}{k\,(1-Y(0))}\Bigg(\displaystyle\sum_{k_1=k-1}^{N}\binom{k_1}{k-1}(-1)^{k_1-k+1}\big(\delta(k_1) - Y(k_1)\big) - \sum_{r=0}^{k-2}(r+1)\,F(r+1)\,\big(\delta(k-r-1) - Y(k-r-1)\big)\Bigg), & k \ge 1, \end{cases}$$
where $Y(0) = 0$ and $Y(1) = -1$ on the basis of Conditions (54), with the following definition of the constant $\lambda$ included:
$$2\int_{0}^{1}\Bigg(\sum_{i=0}^{N} Y(i)\,x^i\Bigg)\Bigg(\sum_{i=0}^{N} Y(i)\,(x-1)^i\Bigg)dx = \lambda.\tag{56}$$
Assuming in (55) and (56) a fixed value of $N$, we get a system of equations with the unknowns $Y(2), Y(3), \ldots, Y(N)$. The value $N = 5$ leads to the approximate solution
$$y_5(x) = -x - 0.5043\,x^2 - 0.1791\,x^3 - 0.0503\,x^4 - 0.0075\,x^5\tag{57}$$
with $\lambda = -0.3524$ (the exact value is $\lambda = 2 + \frac{1}{e} - e \approx -0.3504$), whereas for $N = 9$ we obtain the approximate solution of the form
$$y_9(x) = -x - 0.4994\,x^2 - 0.1662\,x^3 - 0.0417\,x^4 - 0.0085\,x^5 - 0.0014\,x^6 - 0.0002\,x^7 - 0.00003\,x^8 - 3\cdot 10^{-6}\,x^9\tag{58}$$
with $\lambda = -0.350359$.
Plots of the approximate Solutions (57) and (58), the exact solution and the absolute errors Δ N ( x ) of these approximations are displayed in Figure 9, Figure 10, Figure 11 and Figure 12 (the previous notation of figures is kept).
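With the reading adopted above — Equation (53) as $(e^{-x}+y''(x-1))(e^{-x}-y''(x+1)) + 2\int_0^1 y(x)\,y(x-1)\,dx + e\int_0^x \frac{1-y(t-1)}{1-y(t)}\,dt = 2 + x - 2\sinh 2x$ — the exact solution $y(x) = 1 - e^x$ can be checked numerically. A Python sketch of ours, with Simpson quadrature standing in for the integrals:

```python
import math

y   = lambda x: 1 - math.exp(x)   # exact solution of Example 4
ypp = lambda x: -math.exp(x)      # y''(x)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even); also works when b < a
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def lhs(x):
    lam = 2 * simpson(lambda t: y(t) * y(t - 1), 0, 1)            # = 2 + 1/e - e
    I = simpson(lambda t: (1 - y(t - 1)) / (1 - y(t)), 0, x)      # integrand = 1/e
    term1 = (math.exp(-x) + ypp(x - 1)) * (math.exp(-x) - ypp(x + 1))
    return term1 + lam + math.e * I

for x in (-1.5, 0.25, 1.0):
    assert abs(lhs(x) - (2 + x - 2 * math.sinh(2 * x))) < 1e-6
print("y(x) = 1 - e^x satisfies Equation (53)")
```

The quadrature also reproduces $\lambda = 2 + 1/e - e \approx -0.3504$, the exact value quoted above.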
Example 5.
Let us solve a specific case of the system of Equations (4), in which we take: $r_1 = -1$, $a_{11} = a_{12} = 1$, $a_{21} = 0$, $r_2 = 1$, $a_{22} = -1$ and $F(x) = e^{-x}$. Then the system takes the form
$$y'(x) = y(x)\Big(\!-1 - y(x) - \int_{0}^{x} e^{-(x-s)}\,z(s)\,ds\Big), \qquad z'(x) = z(x)\,\big(z(x) - 1\big),\tag{59}$$
for $0 \le x \le 2$, with the conditions
$$y(0) = \frac{1}{2}, \qquad z(0) = 1,\tag{60}$$
the exact solution of which is given by the functions $y(x) = \dfrac{e^{-2x}}{1 + e^{-x}}$ and $z(x) = 1$.
The system of Equations (59), after applying the properties of the Taylor transformation (see (16), where $u(x) = y(x)\,e^{-x}$), can be written in the following form:
$$(k+1)\,Y(k+1) = -Y(k) - \sum_{r=0}^{k} Y(r)\,Y(k-r) - \sum_{i=0}^{k-1}\sum_{j=0}^{k-i-1}\sum_{r=0}^{i} \frac{(-1)^{i-r}\,Y(r)\,Z(k-i-j-1)}{(k-i)\,(i-r)!\,j!}, \qquad (k+1)\,Z(k+1) = \sum_{r=0}^{k} Z(r)\,Z(k-r) - Z(k).\tag{61}$$
Using Conditions (60) and taking in Relations (61) the successive values of $k \ge 0$, we get in turn:
for $k = 0$: $Y(1) = -\frac{3}{4}$, $Z(1) = 0$;
for $k = 1$: $2\,Y(2) = 1$, $2\,Z(2) = 0$, hence $Y(2) = \frac{1}{2}$, $Z(2) = 0$;
for $k = 2$: $3\,Y(3) = -\frac{9}{16}$, $3\,Z(3) = 0$, hence $Y(3) = -\frac{3}{16}$, $Z(3) = 0$;
for $k = 3$: $4\,Y(4) = \frac{1}{6}$, $4\,Z(4) = 0$, hence $Y(4) = \frac{1}{24}$, $Z(4) = 0$, etc.
Limiting the values of $k$ to $k = 5$ we get the approximate solution
$$y_5(x) = \frac{1}{2} - \frac{3}{4}\,x + \frac{1}{2}\,x^2 - \frac{3}{16}\,x^3 + \frac{1}{24}\,x^4 - \frac{1}{160}\,x^5, \qquad z_5(x) = 1.\tag{62}$$
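The recurrence (61) is straightforward to run directly. The sketch below (our own Python illustration, using the reading of (61) and of the exact solution given above) generates the coefficients and compares the truncated series with $y(x) = e^{-2x}/(1+e^{-x})$:

```python
import math

def taylor_coeffs(N):
    # Recurrence (61) with initial values Y(0) = 1/2, Z(0) = 1 from (60)
    Y, Z = [0.5], [1.0]
    for k in range(N):
        s1 = sum(Y[r] * Y[k - r] for r in range(k + 1))
        s3 = sum((-1) ** (i - r) * Y[r] * Z[k - i - j - 1]
                 / ((k - i) * math.factorial(i - r) * math.factorial(j))
                 for i in range(k)
                 for j in range(k - i)
                 for r in range(i + 1))
        Y.append((-Y[k] - s1 - s3) / (k + 1))
        s2 = sum(Z[r] * Z[k - r] for r in range(k + 1))
        Z.append((s2 - Z[k]) / (k + 1))
    return Y, Z

Y, Z = taylor_coeffs(8)
# First coefficients match the hand computation: -3/4, 1/2, -3/16, 1/24, -1/160
assert abs(Y[1] + 3 / 4) < 1e-12 and abs(Y[3] + 3 / 16) < 1e-12
assert all(abs(z) < 1e-12 for z in Z[1:])        # z(x) = 1 is reproduced exactly

x = 0.3
approx = sum(c * x ** k for k, c in enumerate(Y))
exact = math.exp(-2 * x) / (1 + math.exp(-x))
assert abs(approx - exact) < 1e-6
print("recurrence (61) reproduces the Taylor coefficients of the exact solution")
```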
Since function z is reconstructed exactly, Figure 13 presents only the comparison of the exact solution y and the approximate solution y 5 (the notation of figures is the same as in the previous examples) together with the graph of absolute error Δ 5 ( x ) of this approximation. Next, Figure 14 shows the corresponding graphs for the approximate solution y 10 .
It is interesting that in the successive steps we obtain the exact values of the coefficients of the Taylor series expansions of the functions y(x) and z(x), which means that, if only we were able to discover this pattern, we could obtain the exact solution of System (59).

5. Conclusions

This paper discusses the possibility of applying the Taylor differential transformation to solving integro-differential equations with a retarded argument. The investigated equations are important because they appear in various mathematical, technical and engineering problems, and, at the same time, solving them is not an easy task. Thus, the contribution of the presented research is the development of an efficient technique for solving nonlinear equations which may arise in areas of growing interest, such as wireless sensor networks and the industrial internet of things. The examples presented in this paper show that the solution technique based on the Taylor differential transformation is efficient for equations of the considered kind, and its additional advantage is the simplicity of its application. The discussed method is well applicable when the sought function is expandable into the Taylor series in the considered region. In all cases examined in the current paper and elsewhere, a satisfactory solution could be found whenever this assumption was satisfied. In the future, we plan to compare the discussed method with similar methods based on other transformations, such as the Laplace transform. Moreover, we plan to apply the DTM to problems described by equations with fractional derivatives.

Author Contributions

All authors contributed equally to the research. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Exact solution and the approximate solution y_5(x).
Figure 2. Error Δ_5 of approximation y_5(x).
Figure 3. Exact solution and the approximate solution y_9(x).
Figure 4. Error Δ_9 of approximation y_9(x).
Figure 5. Exact solution and the approximate solution y_6(x).
Figure 6. Error Δ_6 of approximation y_6(x).
Figure 7. Exact solution and the approximate solution y_9(x).
Figure 8. Error Δ_9 of approximation y_9(x).
Figure 9. Exact solution and the approximate solution y_5(x).
Figure 10. Error Δ_5 of approximation y_5(x).
Figure 11. Exact solution and the approximate solution y_9(x).
Figure 12. Error Δ_9 of approximation y_9(x).
Figure 13. Exact solution and the approximate solution y_5(x) together with the error Δ of this approximation.
Figure 14. Exact solution and the approximate solution y_10(x) together with the error Δ of this approximation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hetmaniok, E.; Pleszczyński, M.; Khan, Y. Solving the Integral Differential Equations with Delayed Argument by Using the DTM Method. Sensors 2022, 22, 4124. https://doi.org/10.3390/s22114124
