Article

On Construction of Solutions of Linear Differential Systems with Argument Deviations of Mixed Type

by András Rontó 1,2,* and Natálie Rontóová 2

1 Institute of Mathematics, Czech Academy of Sciences, Brno branch, Žižkova 22, 616 62 Brno, Czech Republic
2 Faculty of Business and Management, Brno University of Technology, Kolejní 4, 612 00 Brno, Czech Republic
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(10), 1740; https://doi.org/10.3390/sym12101740
Submission received: 14 September 2020 / Revised: 7 October 2020 / Accepted: 15 October 2020 / Published: 21 October 2020
(This article belongs to the Special Issue Nonlinear Oscillations and Boundary Value Problems)

Abstract

We show the use of parametrization techniques and successive approximations for the effective construction of solutions of linear boundary value problems for differential systems with multiple argument deviations. The approach is illustrated with a numerical example.

1. Introduction

In several applications, it is desirable to have effective tools allowing one to construct solutions of functional differential equations under various boundary conditions. Many real processes are modelled by systems of equations with argument deviations (see, e.g., [1,2,3] and references therein). For delay equations, the classical method of steps [1] allows one to construct the solution of the Cauchy problem by extending it from the initial interval in a stepwise manner; in this way, an ordinary differential equation is solved at every step, with every preceding part of the curve serving as a history function for the next one. This technique, together with the ODE solvers available in mathematical software, is commonly used in the practical analysis of dynamic models based on equations with retarded argument under initial conditions (e.g., in economic models [4,5,6]).
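For readers less familiar with the method of steps, the following minimal Python sketch illustrates it on a test delay equation; the equation $u'(t) = -u(t-1)$ with constant history $u \equiv 1$ on $[-1,0]$ and the explicit Euler step are assumptions chosen purely for illustration and do not appear in this paper.

```python
# Method of steps for u'(t) = -u(t - 1), u(t) = 1 on [-1, 0] (illustrative only).
import numpy as np

h = 0.001                          # step size
lag = int(round(1.0 / h))          # number of grid points per delay length

# grid covering the history interval [-1, 0] and the integration window [0, 3]
t = np.arange(-1.0, 3.0 + h, h)
u = np.ones_like(t)                # history: u(t) = 1 for t in [-1, 0]

start = lag                        # index of t = 0
for k in range(start, t.size - 1):
    # the delayed value u(t_k - 1) is already known from the preceding interval,
    # so each unit interval reduces to an ordinary (here: Euler) integration step
    u[k + 1] = u[k] + h * (-u[k - lag])

# on [0, 1] the exact solution is 1 - t; compare the computed values there
print(abs(u[start:2 * start + 1] - (1.0 - t[start:2 * start + 1])).max())
```

The stepwise extension works only because the delayed argument always points into the already-computed past; this is exactly the Volterra property that fails for the mixed-type deviations considered below.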
In certain cases, boundary conditions are more complicated and deviations are of mixed type (i.e., the equations involve both retarded and advanced terms [7,8], or deviations of neither type), which, in particular, makes it impossible to apply the method of steps due to the absence of the Volterra property of the corresponding operator. The aim of this paper is to show that the techniques suggested in [9] for boundary value problems for ordinary differential equations can, under certain assumptions, be adapted for application to functional differential equations covering, in particular, the case of deviations of mixed type and general boundary conditions. As a result, one can formulate a scheme for the effective construction of approximations to solutions of boundary value problems for functional differential equations, which also allows one, in theory, to establish the solvability in a rigorous manner. Here, we consider the linear case and deal with the construction of approximate solutions only. The techniques are rather flexible and can be used in relation to other problems. Although the approach is not explicitly designed for partial differential equations, it still might be used in cases where systems of equations with one independent variable arise (e.g., equations obtained when a symmetry reduction is possible, or equations related to the inverse scattering transform or discretization [10,11]). Unlike methods used for integrable systems (e.g., the Hirota bilinear method [12]), the technique is aimed at obtaining approximate solutions without prior knowledge of the structure of the solution set.
We consider the system of n linear functional differential equations
$$u'(t) = (lu)(t) + q(t), \qquad t \in [a,b],$$
under the linear boundary conditions
h ( u ) = d ,
where $u = (u_i)_{i=1}^{n}$, $-\infty < a < b < \infty$, $l : C([a,b],\mathbb{R}^n) \to L([a,b],\mathbb{R}^n)$ is a linear bounded operator, $h : C([a,b],\mathbb{R}^n) \to \mathbb{R}^n$ is a bounded linear vector-valued functional, $d = \operatorname{col}(d_1,\dots,d_n)$ and $q \in L([a,b],\mathbb{R}^n)$ are fixed. System (1) can be rewritten in coordinates as
$$u_i'(t) = \sum_{j=1}^{n} (l_{ij}u_j)(t) + q_i(t), \qquad t \in [a,b],\ i = 1,\dots,n,$$
where $l_{ij} : C([a,b],\mathbb{R}) \to L([a,b],\mathbb{R})$ are the components of the operator $l = \operatorname{col}(l_1,\dots,l_n)$ defined by the equalities
$$l_{ij}v := l_i(v e_j), \qquad i,j = 1,2,\dots,n,$$
for v from C ( [ a , b ] , R ) (here, e j stands for the jth unit vector). The difference between (1) and (3) is a matter of notation: given system (3), it can be rewritten as (1) by setting
$$lu = \operatorname{col}\Bigl(\sum_{j=1}^{n} l_{1j}(u_j),\ \sum_{j=1}^{n} l_{2j}(u_j),\ \dots,\ \sum_{j=1}^{n} l_{nj}(u_j)\Bigr)$$
for $u \in C([a,b],\mathbb{R}^n)$. Here, we consider the case where the component operators have values in $L([a,b],\mathbb{R})$, i.e., the coefficients in the equation are essentially bounded, which excludes the presence of integrable singularities (e.g., $1/\sqrt{t}$, $t \in (0,1]$).
System (3) covers, in particular, the system of differential equations with argument deviations of the form
$$u_i'(t) = \sum_{j=1}^{n} p_{ij}(t)\,u_j(\tau_{ij}(t)) + q_i(t), \qquad t \in [a,b],\ i = 1,\dots,n,$$
where $\tau_{ij} : [a,b] \to [a,b]$ are measurable functions and $p_{ij} : [a,b] \to \mathbb{R}$ and $q_i : [a,b] \to \mathbb{R}$, $i,j = 1,\dots,n$, are from $L([a,b],\mathbb{R})$. A particular case of (6) is, e.g., the well-known pantograph equation, which arises independently in several applied problems of various nature (see, e.g., [13,14] and references therein). The assumption that $\bigcup_{i,j=1}^{n} \tau_{ij}([a,b]) \subseteq [a,b]$ is not a restriction of generality because the setting involving initial functions can be reduced to the present one by a suitable transformation [15]. The retarded character of the equation is not assumed (in particular, for (6), it is not required that $\tau_{ij}(t) \le t$ for all $i$, $j$) and, therefore, the method of steps is inapplicable.
We use the following notation. For any vectors $\xi = \operatorname{col}(\xi_1,\dots,\xi_n)$ and $\eta = \operatorname{col}(\eta_1,\dots,\eta_n)$, we write $|\xi| = \operatorname{col}(|\xi_1|,\dots,|\xi_n|)$ and
$$\max\{\xi,\eta\} = \operatorname{col}\bigl(\max\{\xi_1,\eta_1\},\dots,\max\{\xi_n,\eta_n\}\bigr).$$
The inequalities between vectors, as well as minima and maxima of vector functions, are understood likewise in the componentwise sense. $r(A)$ denotes the spectral radius of a matrix $A$. For an essentially bounded function $v : [a,b] \to \mathbb{R}^n$ and an interval $J \subseteq [a,b]$, we put
$$\delta_J(v) := \operatorname*{ess\,sup}_{t \in J} v(t) - \operatorname*{ess\,inf}_{t \in J} v(t).$$
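All vector operations above act coordinate by coordinate. The following short Python fragment (numpy assumed; the sample vectors and the sample function are invented for illustration) shows the conventions $|\xi|$, $\max\{\xi,\eta\}$ and $\delta_J(v)$ in computational form.

```python
# Componentwise conventions (7), (8) on sampled data (illustrative only).
import numpy as np

xi  = np.array([ 1.0, -2.0, 0.5])
eta = np.array([-3.0,  4.0, 0.0])
print(np.abs(xi))                  # col(|xi_1|, ..., |xi_n|)
print(np.maximum(xi, eta))         # col(max{xi_i, eta_i}, i = 1, ..., n)

def delta(v):
    """delta_J(v) for a vector function sampled on a grid (rows = components):
    componentwise (essential) supremum minus infimum over the interval J."""
    return v.max(axis=1) - v.min(axis=1)

s = np.linspace(0.0, 1.0, 101)
v = np.vstack([np.sin(4 * s**2), s - s**2])   # a sample two-component function
print(delta(v))
```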

2. Auxiliary Problems with Two-Point Conditions

The idea is to use a suitable parametrization. Motivated by [9], we will seek a solution of (1), (2) among all solutions of Equation (1) under the simplest two-point conditions
u ( a ) = ξ , u ( b ) = η
with unfixed boundary values ξ and η. Under suitable assumptions, every problem (1), (9) is shown to be uniquely solvable and its solution is constructed as the limit of certain iterations $\{u_m(\cdot,\xi,\eta) : m \ge 0\}$. The vectors $\xi = \operatorname{col}(\xi_1,\xi_2,\dots,\xi_n)$ and $\eta = \operatorname{col}(\eta_1,\eta_2,\dots,\eta_n)$ are parameters whose values remain unknown at the moment; they should be chosen appropriately in order to satisfy condition (2).
Let us fix closed bounded connected sets D 0 and D 1 in R n and focus on the solutions of problem (1), (2) with the values at the endpoints such that
$$u(a) \in D_0, \qquad u(b) \in D_1.$$
To treat solutions with properties (10), we carry out “freezing” at the endpoints by passing temporarily to conditions (9). Thus, instead of (1), (2) we will consider the parametrised problem (1), (9). There are two (or, in coordinates, 2 n ) degrees of freedom there and we will see that one can, in a sense, go back to the original problem by choosing the values of the parameters appropriately.

3. Iteration Process

Let $L : L([a,b],\mathbb{R}^n) \to L([a,b],\mathbb{R}^n)$ be the continuous linear operator defined by the equality
$$(Ly)(t) := \int_a^t y(s)\,ds - \frac{t-a}{b-a}\int_a^b y(s)\,ds, \qquad t \in [a,b],$$
for any $y \in L([a,b],\mathbb{R}^n)$. Let us fix arbitrary values of ξ and η, put
q ˜ : = L q ,
and define the sequence of functions $\{u_m(\cdot,\xi,\eta) : m \ge 0\}$ by setting
$$u_0(t,\xi,\eta) = \Bigl(1 - \frac{t-a}{b-a}\Bigr)\xi + \frac{t-a}{b-a}\,\eta,$$
$$u_m(t,\xi,\eta) = u_0(t,\xi,\eta) + \tilde q(t) + \bigl(L\,l\,u_{m-1}(\cdot,\xi,\eta)\bigr)(t), \qquad t \in [a,b],$$
for $m \ge 1$. When no confusion may arise, for the sake of convenience, we will sometimes omit the arguments ξ, η in $u_m(t,\xi,\eta)$ and write simply $u_m(t)$, assuming the dependence on the parameters implicitly. The sequence is obviously related to the iterative solution of the equation
$$u = u_0 + L\,lu + \tilde q.$$
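For illustration only, the next Python sketch discretizes the operator $L$ from (11) and runs a few iterations (13), (14) on a uniform grid with trapezoidal quadrature; the scalar functional term $(lu)(t) = u(1-t)$, the forcing term and the parameter values used below are made-up examples, not taken from the paper.

```python
# Discretized operator L from (11) and the recurrence (13), (14) (illustrative).
import numpy as np
from scipy.integrate import cumulative_trapezoid

a, b = 0.0, 1.0
t = np.linspace(a, b, 1001)

def L_op(y):
    # (L y)(t) = int_a^t y ds - (t - a)/(b - a) * int_a^b y ds;
    # note that (L y)(a) = (L y)(b) = 0, which is what keeps u_m(a) = xi and
    # u_m(b) = eta at every step of the iteration (14)
    cum = cumulative_trapezoid(y, t, initial=0.0)
    return cum - (t - a) / (b - a) * cum[-1]

def l_op(u):
    return np.interp(1.0 - t, t, u)      # example functional term u(1 - t)

q = np.cos(t)                             # example forcing term
xi, eta = 1.0, -1.0
u0 = (1.0 - (t - a) / (b - a)) * xi + (t - a) / (b - a) * eta
u = u0.copy()
for _ in range(20):                       # u_m = u_0 + L(l u_{m-1} + q), since Lq = q~
    u = u0 + L_op(l_op(u) + q)
print(u[0] - xi, u[-1] - eta)             # both boundary values stay fixed
```

Here the norm of the example operator equals 1 < 2/(b − a), so condition (20) below holds and the iterations converge; the snippet is only meant to show the mechanics of (13), (14).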
The reason for choosing it in this particular way is explained by the following statement, which is an immediate consequence of (14).
Lemma 1.
Let ξ and η be fixed. If $u_m(\cdot,\xi,\eta) \to u$ as $m \to +\infty$ on $[a,b]$, then $u$ satisfies the two-point conditions (9) and the equation
$$u'(t) = (lu)(t) + q(t) + \frac{1}{b-a}\Bigl(\eta - \xi - \int_a^b (lu)(s)\,ds - \int_a^b q(s)\,ds\Bigr), \qquad t \in [a,b].$$
Proof. 
The function u satisfies the integral Equation (15) and, in particular, is absolutely continuous. By (11) and (13), Equation (15) is rewritten as
$$u(t) = \xi + \frac{t-a}{b-a}(\eta-\xi) + \int_a^t (lu)(s)\,ds + \int_a^t q(s)\,ds - \frac{t-a}{b-a}\Bigl(\int_a^b (lu)(s)\,ds + \int_a^b q(s)\,ds\Bigr),$$
the differentiation of which yields (16). Since obviously u m ( a , ξ , η ) = ξ and u m ( b , ξ , η ) = η for all m, the fulfilment of conditions (9) is verified by passing to the limit in (14). □
The idea of the approach is briefly this: Equation (16) differs from (1) by a finite-dimensional term, and after its ultimate “removal” by adjusting ξ appropriately, we find ourselves among the solutions of (1), (9); then we should try to choose η so that (2) is satisfied. In order to proceed according to this scheme, we need to ensure the convergence of the sequence of functions (14).

4. Applicability Conditions

Let $L$ be the matrix constituted by the norms of the components of $l$ as operators from $C([a,b],\mathbb{R})$ to $L([a,b],\mathbb{R})$:
$$L := \begin{pmatrix} \|l_{11}\| & \|l_{12}\| & \cdots & \|l_{1n}\| \\ \|l_{21}\| & \|l_{22}\| & \cdots & \|l_{2n}\| \\ \vdots & \vdots & \ddots & \vdots \\ \|l_{n1}\| & \|l_{n2}\| & \cdots & \|l_{nn}\| \end{pmatrix}.$$
All the components of the matrix $L$ are well defined due to the boundedness of $l$ as a mapping from $C([a,b],\mathbb{R}^n)$ to $L([a,b],\mathbb{R}^n)$. It follows immediately from (4) and (18) that the componentwise inequality
$$|(lu)(t)| \le L \max_{s \in [a,b]} |u(s)|$$
holds for a.e. $t \in [a,b]$ and arbitrary $u$ from $C([a,b],\mathbb{R}^n)$.
Theorem 1.
Assume that the spectral radius of L satisfies the inequality
$$r(L) < \frac{2}{b-a}.$$
Then, for any fixed $\xi \in D_0$, $\eta \in D_1$, the sequence $\{u_m(\cdot,\xi,\eta) : m \ge 0\}$ converges to a certain function $u(\cdot,\xi,\eta)$ as $m \to +\infty$ uniformly on $[a,b]$. The function $u(\cdot,\xi,\eta)$ satisfies conditions (9) and the differential equation
$$u'(t) = (lu)(t) + q(t) + \frac{1}{b-a}\,\Delta(\xi,\eta), \qquad t \in [a,b],$$
where
$$\Delta(\xi,\eta) := \eta - \xi - \int_a^b \bigl(l\,u(\cdot,\xi,\eta)\bigr)(s)\,ds - \int_a^b q(s)\,ds.$$
The function u ( · , ξ , η ) is the unique solution of (21) with the initial condition u ( a ) = ξ and also the unique solution of Equation (17). The function u ( · , ξ , η ) satisfies the original Equation (1) if and only if ξ and η are chosen so that
Δ ( ξ , η ) = 0 .
The form of iteration sequence (14) is chosen specifically according to the two-point separated boundary conditions (9), so that (9) is automatically satisfied on every step. The smallness of r ( L ) (assumption (20)) ensures the unique solvability of all the auxiliary initial value problems (16) and (24),
u ( a ) = ξ ,
which is also helpful in cases where the initial value problem (1), (24) for the original equation is not uniquely solvable. It should be noted that the last mentioned issue may arise even for very simple equations of type (6); for example, it is easy to verify that if (1) has the form
$$u'(t) = \frac{u(b)}{b-a} + q(t), \qquad t \in [a,b],$$
then the initial value problem (25), (24) is not uniquely solvable (it has infinitely many solutions if $\xi = -\int_a^b q(s)\,ds$ and no solution for $\xi \ne -\int_a^b q(s)\,ds$), and the corresponding Picard iterations diverge. However, the iterations defined according to equalities (13) and (14) can be used, since assumption (20) is satisfied ($L = (b-a)^{-1}$ in this case).
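A short check, consistent with the sign convention adopted in (25) above, shows where the non-uniqueness comes from. Integrating (25) over $[a,b]$ gives
$$u(b) - u(a) = \int_a^b u'(t)\,dt = u(b) + \int_a^b q(t)\,dt, \qquad\text{i.e.}\qquad u(a) = -\int_a^b q(s)\,ds,$$
so the initial value is forced regardless of $u(b)$. Conversely, if $\xi = -\int_a^b q(s)\,ds$, then for every constant $c$ the function $u(t) = \xi + c\,\frac{t-a}{b-a} + \int_a^t q(s)\,ds$ satisfies (25) and (24) with $u(b) = c$, which yields infinitely many solutions.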
Equation (21) clearly resembles (16) with the difference that Δ ( ξ , η ) appearing in (21), where the limit function is directly involved, is not a functional term. Equation (21) can thus be regarded as a perturbation of equation (1) with a constant forcing term,
$$u'(t) = (lu)(t) + q(t) + \mu, \qquad t \in [a,b],$$
the value of which is related to the two-point condition (9).
Theorem 2.
Assume that condition (20) holds and ξ, η are fixed. Then the solution of (26) with the initial value (24) satisfies the condition
u ( b ) = η
if and only if
$$\mu = \frac{1}{b-a}\,\Delta(\xi,\eta)$$
with Δ given by (22).
Proof. 
The proof is similar to that of ([9] Theorem 7). If μ in (26) has form (28), then the function u = u ( · , ξ , η ) , which is well defined by Theorem 1, is the unique solution of the Cauchy problem (26), (24). By Lemma 1, this function also satisfies condition (27).
Conversely, let v be a function satisfying (26) and (24) with a certain value of μ . The integration of (26) gives the representation
$$v(t) = \xi + \int_a^t (lv)(s)\,ds + \int_a^t q(s)\,ds + \mu\,\frac{t-a}{b-a}, \qquad t \in [a,b],$$
whence
$$(b-a)\,\mu = v(b) - \xi - \int_a^b (lv)(s)\,ds - \int_a^b q(s)\,ds.$$
Combining (29) and (30) and assuming that v ( b ) = η , we get
$$v(t) = \xi + \int_a^t (lv)(s)\,ds + \int_a^t q(s)\,ds + \frac{t-a}{b-a}\Bigl(\eta - \xi - \int_a^b (lv)(s)\,ds - \int_a^b q(s)\,ds\Bigr),$$
or, which is the same,
$$v(t) = \xi + \frac{t-a}{b-a}(\eta-\xi) + \int_a^t (lv)(s)\,ds + \int_a^t q(s)\,ds - \frac{t-a}{b-a}\Bigl(\int_a^b (lv)(s)\,ds + \int_a^b q(s)\,ds\Bigr)$$
for t [ a , b ] . Equation (31) coincides with (17) and thus v is a solution of (17). Since, by Theorem 1, the function u = u ( · , ξ , η ) is the unique solution of equation (17), it follows that v = u , v ( b ) = u ( b ) = η and, hence, by (30), μ is necessarily of form (28). □
Theorem 3.
Under assumption (20), the limit function u ( · , ξ , η ) of sequence (14) is a solution of the original problem (1), (2) if and only if the vectors ξ and η satisfy the system of 2 n equations
Δ ( ξ , η ) = 0 , h ( u ( · , ξ , η ) ) = d .
Proof. 
It is sufficient to apply Theorem 1. Equations (32) bring us from the auxiliary two-point parametrised conditions back to the given functional conditions (2). □
Combining these statements, one can formulate the following theorem.
Theorem 4.
Assume condition (20). If there exist $(\xi,\eta) \in D_0 \times D_1$ for which equations (32) are satisfied, then the boundary value problem (1), (2) has a solution $u(\cdot)$ such that $u(a) = \xi$ and $u(b) = \eta$. Conversely, if problem (1), (2) has a solution $u(\cdot)$, then the values $\xi = u(a)$ and $\eta = u(b)$ satisfy (32).
Proof. 
Indeed, if ( ξ , η ) satisfy (32) then, by Theorem 3, the function u ( · , ξ , η ) is a solution of problem (1), (2). By Theorem 1, this function satisfies the two-point conditions (9).
Assume now that u is a solution of problem (1), (2) and put ξ = u ( a ) , η = u ( b ) . Then u is a solution of the initial value problem (26), (24) with μ = 0 and, furthermore,
u = u ( · , ξ , η )
because the solution of this problem is unique (Theorem 1). By Theorem 2, it follows that (23) holds. Since, by assumption, $u$ satisfies (2), it is obvious from the last equality that $h(u(\cdot,\xi,\eta)) = d$, i.e., (32) holds. □
The boundary value problem (1), (2) is thus theoretically reduced to the solution of Equation (32) in the variables ξ and η .

5. Proof of Theorem 1

We need some technical statements.
Lemma 2.
For any essentially bounded function y : [ a , b ] R n ,
$$|(Ly)(t)| \le \tfrac{1}{2}\,\alpha_1(t)\,\delta_{[a,b]}(y), \qquad t \in [a,b].$$
This is a modified version of Lemma 3 from [16]; the proof proceeds along the same lines. Here, notation (8) is used and $\alpha_1$ is defined by
$$\alpha_1(t) = 2(t-a)\Bigl(1 - \frac{t-a}{b-a}\Bigr), \qquad t \in [a,b].$$
The inequality in (34) and similar relations below are componentwise.
Lemma 3.
The estimate
$$\max_{t \in [a,b]} |u_{m+1}(t,\xi,\eta) - u_m(t,\xi,\eta)| \le \frac{(b-a)^{m+1}}{2^{m+2}}\,L^{m}\,\delta_{[a,b]}\bigl((lu_0)(\cdot,\xi,\eta)\bigr)$$
holds for $m = 0, 1, \dots$.
Proof. 
Since $\max_{t \in [a,b]} \alpha_1(t) = \frac{1}{2}(b-a)$, it follows from Lemma 2 that
$$|u_1(t) - u_0(t)| = |(L\,lu_0)(t)| \le \tfrac{1}{2}\,\alpha_1(t)\,\delta_{[a,b]}(lu_0) \le \frac{b-a}{4}\,\delta_{[a,b]}(lu_0)$$
and (35) thus holds for $m = 0$. Assume that the required estimate (35) holds for a certain $m = m_0$. It is clear from (8) that
$$\delta_{[a,b]}(lu) \le 2\,\operatorname*{ess\,sup}_{t \in [a,b]} |(lu)(t)|$$
for any u. Then, by Lemma 2,
$$|u_{m_0+2}(t) - u_{m_0+1}(t)| = \bigl|\bigl(L\,l(u_{m_0+1} - u_{m_0})\bigr)(t)\bigr| \le \tfrac{1}{2}\,\alpha_1(t)\,\delta_{[a,b]}\bigl(l(u_{m_0+1} - u_{m_0})\bigr) \le \frac{b-a}{4}\,\delta_{[a,b]}\bigl(l(u_{m_0+1} - u_{m_0})\bigr) \le \frac{b-a}{2}\,\operatorname*{ess\,sup}_{s \in [a,b]} \bigl|l(u_{m_0+1} - u_{m_0})(s)\bigr|.$$
By virtue of (19) and (35) with m = m 0 , we have
$$\operatorname*{ess\,sup}_{s \in [a,b]} \bigl|l(u_{m_0+1} - u_{m_0})(s)\bigr| \le L \max_{\tau \in [a,b]} |u_{m_0+1}(\tau) - u_{m_0}(\tau)| \le L\,\frac{(b-a)^{m_0+1}}{2^{m_0+2}}\,L^{m_0}\,\delta_{[a,b]}(lu_0) = \frac{(b-a)^{m_0+1}}{2^{m_0+2}}\,L^{m_0+1}\,\delta_{[a,b]}(lu_0)$$
and, therefore, (37) yields
$$|u_{m_0+2}(t) - u_{m_0+1}(t)| \le \frac{b-a}{2}\cdot\frac{(b-a)^{m_0+1}}{2^{m_0+2}}\,L^{m_0+1}\,\delta_{[a,b]}(lu_0) = \frac{(b-a)^{m_0+2}}{2^{m_0+3}}\,L^{m_0+1}\,\delta_{[a,b]}(lu_0).$$
Relation (38) means that estimate (35) holds with $m = m_0 + 1$ and hence, due to the arbitrariness of $m_0$, for every $m \ge 0$. □
To prove Theorem 1, it is sufficient to use Lemma 3. Indeed, estimate (35) implies that
$$|u_{m+k}(t) - u_m(t)| \le \sum_{j=1}^{k} |u_{m+j}(t) - u_{m+j-1}(t)| \le \sum_{j=1}^{k} \frac{(b-a)^{m+j}}{2^{m+j+1}}\,L^{m+j-1}\,\delta_{[a,b]}(lu_0) = \frac{(b-a)^{m+1}}{2^{m+2}}\,L^{m} \sum_{i=0}^{k-1} \frac{(b-a)^i}{2^i}\,L^{i}\,\delta_{[a,b]}(lu_0) \le \frac{(b-a)^{m+1}}{2^{m+2}}\,L^{m} \sum_{i=0}^{\infty} \frac{(b-a)^i}{2^i}\,L^{i}\,\delta_{[a,b]}(lu_0)$$
for any $k \ge 1$. The assumption imposed on $l$ ensures that $lu_0 \in L([a,b],\mathbb{R}^n)$ and, therefore, $\delta_{[a,b]}(lu_0) < +\infty$. Due to assumption (20), we have $2^{-m}(b-a)^m L^m \to 0$ as $m \to \infty$, and the Neumann series $\sum_{m=0}^{\infty} 2^{-m}(b-a)^m L^m$ converges. It then follows from (39) that $\{u_m(\cdot,\xi,\eta) : m \ge 0\}$ is a Cauchy sequence in $C([a,b],\mathbb{R}^n)$ and thus converges to $u(\cdot,\xi,\eta)$ uniformly on $[a,b]$ (in fact, due to the boundedness of $D_0$ and $D_1$, the convergence is also uniform with respect to $(\xi,\eta) \in D_0 \times D_1$). Passing to the limit as $m \to \infty$ in (14), we find that $u = u(\cdot,\xi,\eta)$ is a solution of (15). The proof of the uniqueness is standard: if (15) has another solution $v$, then $v - u = L\,l(v-u)$, whence, by virtue of Lemma 2 and inequalities (19) and (36),
$$\max_{t \in [a,b]} |v(t) - u(t)| \le \frac{b-a}{4}\,\delta_{[a,b]}(l(v-u)) \le \frac{b-a}{2}\,\operatorname*{ess\,sup}_{t \in [a,b]} |l(v-u)(t)| \le \frac{b-a}{2}\,L \max_{t \in [a,b]} |v(t) - u(t)|,$$
and therefore, in view of condition (20), v coincides with u.

6. Some Estimates

Passing to the limit in (39) as $k \to \infty$ gives the usual estimate
$$\max_{t \in [a,b]} |u(t,\xi,\eta) - u_m(t,\xi,\eta)| \le \frac{b-a}{4}\,Q^{m}(1_n - Q)^{-1}\,\delta_{[a,b]}\bigl((lu_0)(\cdot,\xi,\eta)\bigr),$$
where $1_n$ is the unit matrix of dimension $n$ and $Q$ is given by
$$Q := \tfrac{1}{2}(b-a)\,L.$$
Lemma 4.
For all $\{\xi,\bar\xi\} \subset D_0$, $\{\eta,\bar\eta\} \subset D_1$, $m \ge 1$ and $t \in [a,b]$, the estimates
$$|u_m(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le \sum_{k=0}^{m-1} Q^{k}\,|u_0(t,\xi-\bar\xi,\eta-\bar\eta)| + \frac{b-a}{4}\,Q^{m-1}\,\delta_{[a,b]}\bigl(l\,u_0(\cdot,\xi-\bar\xi,\eta-\bar\eta)\bigr)$$
and
$$|u_m(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le \sum_{k=0}^{m} Q^{k} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\}$$
hold.
Recall that all the inequalities are understood in the componentwise sense. In (43), notation (7) is used.
Proof. 
Fix ξ , ξ ¯ and η , η ¯ and put
$$w(t) := u_0(t,\xi-\bar\xi,\eta-\bar\eta), \qquad t \in [a,b].$$
The proof is carried out by induction. It follows immediately from (13) that
$$u_0(t,\xi,\eta) - u_0(t,\bar\xi,\bar\eta) = u_0(t,\xi-\bar\xi,\eta-\bar\eta),$$
and therefore, by virtue of (15) and Lemma 2,
$$|u_1(t,\xi,\eta) - u_1(t,\bar\xi,\bar\eta)| = \bigl|u_0(t,\xi,\eta) - u_0(t,\bar\xi,\bar\eta) + \bigl(L\,l[u_0(\cdot,\xi,\eta) - u_0(\cdot,\bar\xi,\bar\eta)]\bigr)(t)\bigr| = |w(t) + (L\,lw)(t)| \le |w(t)| + \frac{b-a}{4}\,\delta_{[a,b]}(lw),$$
which means that (42) holds with m = 1 . Assume that (42) is satisfied for some arbitrarily fixed m. Then, by (14), (19), (36) and (41), we have
$$\begin{aligned} |u_{m+1}(t,\xi,\eta) - u_{m+1}(t,\bar\xi,\bar\eta)| &= \bigl|u_0(t,\xi,\eta) - u_0(t,\bar\xi,\bar\eta) + \bigl(L\,l[u_m(\cdot,\xi,\eta) - u_m(\cdot,\bar\xi,\bar\eta)]\bigr)(t)\bigr| \\ &\le |w(t)| + \frac{b-a}{4}\,\delta_{[a,b]}\bigl(l(u_m(\cdot,\xi,\eta) - u_m(\cdot,\bar\xi,\bar\eta))\bigr) \\ &\le |w(t)| + \frac{b-a}{2}\,\operatorname*{ess\,sup}_{s \in [a,b]} \bigl|l(u_m(\cdot,\xi,\eta) - u_m(\cdot,\bar\xi,\bar\eta))(s)\bigr| \\ &\le |w(t)| + \frac{b-a}{2}\,L \max_{s \in [a,b]} |u_m(s,\xi,\eta) - u_m(s,\bar\xi,\bar\eta)| \\ &\le |w(t)| + Q\sum_{k=0}^{m-1} Q^{k}\,|w(t)| + \frac{b-a}{4}\,Q\,Q^{m-1}\,\delta_{[a,b]}(lw) = \sum_{k=0}^{m} Q^{k}\,|w(t)| + \frac{b-a}{4}\,Q^{m}\,\delta_{[a,b]}(lw), \end{aligned}$$
whence, recalling (44), we see that (42) holds for the value m + 1 .
Inequality (43) is a consequence of (42). Indeed, according to (13), the graph of every component u 0 , i ( · , ξ , η ) of u 0 ( · , ξ , η ) is a straight line segment joining the points ( a , ξ i ) and ( b , η i ) , whence it follows that
$$\max_{t \in [a,b]} |u_{0,i}(t,\xi,\eta)| = \max\{|\xi_i|,|\eta_i|\}$$
for all i = 1 , 2 , , n . Using notation (7), we can rewrite this in the componentwise form
$$\max_{t \in [a,b]} |u_0(t,\xi,\eta)| = \max\{|\xi|,|\eta|\}.$$
Then, by (44),
$$|w(t)| \le \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\}, \qquad t \in [a,b].$$
Using (42) and taking (19), (36), (48) into account, we get
$$|u_m(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le \sum_{k=0}^{m-1} Q^{k}\,|w(t)| + \frac{b-a}{4}\,Q^{m-1}\,\delta_{[a,b]}(lw) \le \sum_{k=0}^{m-1} Q^{k}\,|w(t)| + \frac{b-a}{2}\,Q^{m-1}\,L \max_{s \in [a,b]} |w(s)| \le \sum_{k=0}^{m-1} Q^{k} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\} + Q^{m} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\},$$
i.e., (43) holds. □
Lemma 5.
The estimate
$$|u(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le \sum_{j=0}^{m-1} Q^{j} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\} + Q^{m} \max_{s \in [a,b]} |u(s,\xi,\eta) - u_0(s,\bar\xi,\bar\eta)|$$
holds for all $\{\xi,\bar\xi\} \subset D_0$, $\{\eta,\bar\eta\} \subset D_1$, $m \ge 1$ and $t \in [a,b]$.
Proof. 
According to (14), the function u m ( · , ξ ¯ , η ¯ ) satisfies the recurrence relation
$$u_m(t,\bar\xi,\bar\eta) = u_0(t,\bar\xi,\bar\eta) + \tilde q(t) + \bigl(L\,l\,u_{m-1}(\cdot,\bar\xi,\bar\eta)\bigr)(t)$$
and furthermore, by Theorem 1, the limit function u ( · , ξ , η ) satisfies Equation (15):
$$u(t,\xi,\eta) = u_0(t,\xi,\eta) + \tilde q(t) + \bigl(L\,l\,u(\cdot,\xi,\eta)\bigr)(t)$$
for t [ a , b ] . Define w by (44) and put
$$y_j(t) := u(t,\xi,\eta) - u_j(t,\bar\xi,\bar\eta), \qquad t \in [a,b],\ j = 0,1,\dots,m.$$
Then, combining (50) and (51), we notice that
$$y_m(t) = w(t) + (L\,l\,y_{m-1})(t), \qquad t \in [a,b].$$
The sequential application of (53) gives
$$y_m(t) = w(t) + (L\,lw)(t) + \dots + \bigl((L\,l)^{m-1}w\bigr)(t) + \bigl((L\,l)^{m}y_0\bigr)(t), \qquad t \in [a,b].$$
By analogy to (45), (46), using (19), (36) and (48), we get
$$|(L\,lw)(t)| \le \frac{b-a}{4}\,\delta_{[a,b]}(lw),$$
$$\bigl|\bigl((L\,l)^{2}w\bigr)(t)\bigr| \le \frac{b-a}{4}\,\delta_{[a,b]}(l\,L\,lw) \le \frac{b-a}{2}\,\operatorname*{ess\,sup}_{s \in [a,b]} |(l\,L\,lw)(s)| \le \frac{b-a}{2}\,L \max_{s \in [a,b]} |(L\,lw)(s)| = Q \max_{s \in [a,b]} |(L\,lw)(s)| \le \frac{b-a}{4}\,Q\,\delta_{[a,b]}(lw)$$
and, similarly,
$$\bigl|\bigl((L\,l)^{j}w\bigr)(t)\bigr| \le \frac{b-a}{4}\,Q^{j-1}\,\delta_{[a,b]}(lw), \qquad t \in [a,b],$$
for j = 1 , 2 , , m . In view of (48), it follows from (55) that
$$\bigl|\bigl((L\,l)^{j}w\bigr)(t)\bigr| \le \frac{b-a}{4}\,Q^{j-1}\,\delta_{[a,b]}(lw) \le \frac{b-a}{2}\,Q^{j-1}\,L \max_{s \in [a,b]} |w(s)| \le Q^{j} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\}$$
for $1 \le j \le m$, $t \in [a,b]$. Equality (54) then yields
$$|y_m(t)| \le \sum_{j=0}^{m-1} Q^{j} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\} + Q^{m} \max_{s \in [a,b]} |y_0(s)|,$$
which in view of (52), coincides with (49). □
The second term in (49), which involves the unknown function $u(\cdot,\xi,\eta)$, tends to zero as $m$ grows due to condition (20) and the presence of the multiplier $Q^m$; the difference of the values of $u_m$ with respect to the parameters is thus estimated mainly by $\sum_{j=0}^{m-1} Q^j$ multiplied by the size of the perturbation. The next lemma provides an alternative estimate of this type.
Lemma 6.
Let $\{\xi,\bar\xi\} \subset D_0$, $\{\eta,\bar\eta\} \subset D_1$ and $m \ge 1$ be fixed. Then
$$\max_{t \in [a,b]} |u(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le Q^{m+1}(1_n - Q)^{-1} \max\{|\xi|,|\eta|\} + \sum_{j=0}^{m} Q^{j} \max\{|\eta-\bar\eta|,|\xi-\bar\xi|\}.$$
Proof. 
In view of (19), (36), and (47), it follows from (40) that
$$\max_{t \in [a,b]} |u(t,\xi,\eta) - u_m(t,\xi,\eta)| \le \frac{b-a}{4}\,Q^{m}(1_n - Q)^{-1}\cdot 2\operatorname*{ess\,sup}_{s \in [a,b]} \bigl|\bigl(l\,u_0(\cdot,\xi,\eta)\bigr)(s)\bigr| \le \frac{b-a}{2}\,Q^{m}(1_n - Q)^{-1}\,L \max_{s \in [a,b]} |u_0(s,\xi,\eta)| = Q^{m+1}(1_n - Q)^{-1} \max\{|\xi|,|\eta|\}.$$
Estimating the left-hand side of (56) as
$$|u(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)| \le |u(t,\xi,\eta) - u_m(t,\xi,\eta)| + |u_m(t,\xi,\eta) - u_m(t,\bar\xi,\bar\eta)|$$
and using (57) and inequality (43) of Lemma 4, we arrive at (56). □

7. Practical Realisation

Theorems 1 and 2 suggest replacing the boundary value problem (1), (2) by the system of $2n$ equations (32). These equations are usually referred to as determining equations because their roots determine the solutions of the original problem among the solutions of the auxiliary ones (i.e., we obtain a solution once the values of ξ and η are found). The main difficulty here is the fact that $u(\cdot,\xi,\eta)$ is known explicitly in exceptional cases only, and thus system (32), in general, cannot be constructed explicitly. This complication can be resolved by using the approximate determining systems of the form
Δ m ( ξ , η ) = 0 , h ( u m ( · , ξ , η ) ) = d ,
where $m \ge 0$ is fixed and the function $\Delta_m : D_0 \times D_1 \to \mathbb{R}^n$ is given by the formula
$$\Delta_m(\xi,\eta) := \eta - \xi - \int_a^b \bigl(l\,u_m(\cdot,\xi,\eta)\bigr)(s)\,ds - \int_a^b q(s)\,ds$$
for arbitrary $\xi \in D_0$, $\eta \in D_1$. The function $\Delta_m$ is obtained after $m$ iterations and thus, in contrast to system (32), Equations (58) involve only functions that are constructed in a finite number of steps.
The uniform convergence of functions (14) to u ( · , ξ , η ) implies that systems (32) and (58) are close enough to one another for m sufficiently large and, under suitable conditions, the solvability of the mth approximate determining system (58) can be used to prove that of (32). Existence theorems can be formulated by analogy to [17] (we do not discuss this kind of statements here). A practical scheme of analysis of the boundary value problem (1), (2) along these lines can be described as follows.
  • Solve the zeroth approximate determining system
    Δ 0 ( ξ , η ) = 0 , h ( u 0 ( · , ξ , η ) ) = d .
    This approximate determining system has the simplest form and its root ( ξ ( 0 ) , η ( 0 ) ) serves as a rough approximation of the unknown values of ( ξ , η ) . The function
    $$U_0(t) := u_0(t,\xi^{(0)},\eta^{(0)}), \qquad t \in [a,b],$$
    is the zeroth approximation of the solution we are looking for. This approximation is always linear (this is the straight line segment joining the points ( a , ξ ( 0 ) ) and ( b , η ( 0 ) ) ; see (13)) and its construction is easy because the iteration is not carried out yet.
  • Analytically construct the function u 1 ( · , ξ , η ) according to the recurrence Formula (14), keeping ξ and η as parameters. Numerically solve the corresponding first approximate determining system
    Δ 1 ( ξ , η ) = 0 , h ( u 1 ( · , ξ , η ) ) = d
    in a neighbourhood of ( ξ ( 0 ) , η ( 0 ) ) and find its root ( ξ ( 1 ) , η ( 1 ) ) . Substitute the values ( ξ ( 1 ) , η ( 1 ) ) into (14) for m = 1 and construct the first approximation
    $$U_1(t) := u_1(t,\xi^{(1)},\eta^{(1)}), \qquad t \in [a,b].$$
  • Choose a certain $m_0 \ge 1$ and continue by analogy to step 2 for $m = 1,2,\dots,m_0$ by analytically constructing the functions $u_m(\cdot,\xi,\eta)$, $m = 1,2,\dots,m_0$. Computer algebra systems are very helpful for this purpose. Numerically solve every $m$th approximate determining system in a neighbourhood of the root of the $(m-1)$th one, $(\xi^{(m-1)},\eta^{(m-1)})$, $m = 1,2,\dots,m_0$. Collect the values $(\xi^{(m)},\eta^{(m)})$, $m = 1,2,\dots,m_0$, into a table, construct the approximations
    $$U_m(t) := u_m(t,\xi^{(m)},\eta^{(m)}), \qquad t \in [a,b],$$
    for m = 1 , 2 , , m 0 , and draw their graphs. By construction, we always have
    U m ( a ) = ξ ( m ) , U m ( b ) = η ( m ) .
    Multiple roots of system (58) usually indicate the existence of multiple solutions of the problem. In such cases, in order to select a particular one, we specify a suitable neighbourhood when solving the approximate determining equations numerically.
  • Analyse the results of step 3 and decide whether the computation should be continued.
The approach involves both an analytic part (the construction of iterations with parameters according to equalities (13) and (14)) and numerical computation, when the approximate determining equations (58) are solved. We can (and, in this approach, are generally encouraged to) start with step 1 without knowing anything about the solvability of the problem, because a hint on the existence of a solution and its localisation is likely to be obtained in the course of computation.
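Since the problem (1), (2) considered here is linear, each approximate determining system (58) is an affine equation $F(z) = Az + c = 0$ in $z = (\xi,\eta)$ and can be assembled numerically by probing $F$ at the origin and at the unit vectors. The following small sketch (numpy assumed) shows this assembly; the map $F$ used in it is a placeholder introduced only to make the snippet runnable and does not come from the paper.

```python
# Assembling and solving an affine determining system F(z) = A z + c = 0.
import numpy as np

def solve_affine(F, dim):
    c = F(np.zeros(dim))                              # c = F(0)
    A = np.column_stack([F(e) - c for e in np.eye(dim)])   # columns A e_k
    return np.linalg.solve(A, -c)

# toy affine map standing in for z -> (Delta_m(z), h(u_m(., z)) - d)
M = np.array([[2.0, 1.0], [0.0, 3.0]])
rhs = np.array([1.0, -6.0])
root = solve_affine(lambda z: M @ z + rhs, 2)
print(root, M @ root + rhs)        # residual is (numerically) zero
```

For nonlinear problems this shortcut is unavailable and the determining systems have to be solved by a general root-finding routine, as in the sketch given for the example of Section 8 below.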
When analysing the results obtained at step 3, the following main scenarios may be observed:
  • Clear signs of convergence and a good degree of coincidence ( U m 0 satisfies the set accuracy requirements; it remains only to check the solvability rigorously as mentioned above).
  • There are signs of convergence but the accuracy requirements are not met (continue the computation with m = m 0 + 1 ).
  • There are signs of divergence, usually accompanied by a failure to solve some of the equations (either there is no solution or the convergence boundary has been crossed; the scheme is inapplicable).
  • Failure to carry out symbolic computations at a certain point (software or hardware limitations; some simplifications should be used).
The main restriction is assumption (20), without which the presented proof fails and the procedure may diverge. When condition (20) is not satisfied, to exclude scenario 3, the use of the interval division technique [18] can be suggested, for which purpose the auxiliary two-point conditions of form (9) are very suitable. In the case of difficulties with symbolic computation, polynomial approximations can be used by analogy to [19]. We do not discuss these two issues here in more detail.
The accuracy of approximation can be checked by substituting $U_m$ into the equation and computing the residual functions $R_m = \operatorname{col}(R_{m,1},R_{m,2},R_{m,3})$ for $m = 1,2,\dots,m_0$, where
$$R_{m,i}(t) := U_{m,i}'(t) - (l_i U_m)(t) - q_i(t), \qquad i = 1,2,3.$$
By assumption, the operator $l$ in Equation (1) has its range among essentially bounded functions and, hence, one may require the smallness of the values $\operatorname*{ess\,sup}_{t \in [a,b]} |R_{m,i}(t)|$, $i = 1,2,3$.
It follows from inequality (56) of Lemma 6 that the estimate
$$\max_{t \in [a,b]} |u^*(t) - U_m(t)| \le Q^{m+1}(1_n - Q)^{-1} \max\{|u^*(a)|,|u^*(b)|\} + (1_n - Q)^{-1} \max\{|u^*(b) - \eta^{(m)}|,|u^*(a) - \xi^{(m)}|\}$$
holds for all $m \ge 1$, where $u^*$ is an exact solution of problem (1), (2) (provided it exists), the values $(\xi^{(m)},\eta^{(m)})$ are the roots of the $m$th approximate determining system (58), and $U_m$ is the corresponding $m$th approximation (64). In other words, according to (67), the convergence of the roots of the approximate determining equations to the corresponding values of a particular exact solution $u^*$ at the points $a$ and $b$ guarantees the approximation of $u^*$ by $U_m$, whose quality improves as $m$ grows.

8. A Numerical Example

To show the practical realisation of the above scheme, let us consider the problem of solving the system of differential equations with argument deviations
$$\begin{aligned} u_1'(t) &= \tfrac{1}{6}\,u_1(1-t) + \tfrac{2}{3}(t-1)\,u_3(1-t) + q_1(t),\\ u_2'(t) &= -\beta\,u_3\bigl(\tfrac{t}{3}\bigr) + q_2(t),\\ u_3'(t) &= u_2(t) - 16\,u_2(\tau(t)) + q_3(t), \qquad t \in [0,1], \end{aligned}$$
where $q_1(t) := -\tfrac{11}{12} - \tfrac{1}{72}\sin 4t^2$, $q_2(t) := \tfrac{\beta t}{144} + 8t + \beta\cos\tfrac{4t^2}{9} + \tfrac{1}{18}$, $q_3(t) := 4\bigl(\sin 4t^2\bigr)^2 + \tfrac{2}{9}\sin 4t^2 + \tfrac{t}{6}$, $\tau$ is the function $\tau : [0,1] \to [0,1]$ given by the formula
$$\tau(t) := \tfrac{1}{4}\bigl(t + \sin 4t^2\bigr), \qquad t \in [0,1],$$
and β is a positive constant, under the non-local boundary conditions
$$u_1(1) = -\tfrac{5}{12}, \qquad \int_0^1 u_2(s)\,ds = \tfrac{979}{720}, \qquad u_3(0) = 1.$$
The given particular form of the forcing terms q 1 , q 2 and q 3 in (68) is chosen for the purpose of checking explicitly an exact solution, which is known to be
$$u_1^*(t) = \tfrac{1}{12}\sin\bigl(4(1-t)^2\bigr) + \tfrac{t^2}{12} - t + \tfrac{1}{2}, \qquad u_2^*(t) = 4t^2 + \tfrac{t}{18} - \tfrac{1}{720}, \qquad u_3^*(t) = \cos 4t^2 + \tfrac{t}{48}$$
in this case.
Along with the retarded term containing the deviation $t \mapsto \tfrac{1}{3}t$, the right-hand side of system (68) also involves terms with the argument reflection $t \mapsto 1-t$ and the argument transformation $\tau$, for which the function $t \mapsto t - \tau(t)$ has multiple sign changes. The latter two are deviations of mixed type (i.e., of neither retarded nor advanced type).
System (68) is obviously of type (6) with $n = 3$, $a = 0$, $b = 1$, $l_{12} = l_{21} = l_{22} = l_{31} = l_{33} = 0$, $(l_{11}v)(t) := \tfrac{1}{6}v(1-t)$, $(l_{13}v)(t) := \tfrac{2}{3}(t-1)v(1-t)$, $(l_{23}v)(t) := -\beta v\bigl(\tfrac{1}{3}t\bigr)$, $(l_{32}v)(t) := v(t) - 16v(\tau(t))$ for $t \in [0,1]$, $v \in L([0,1],\mathbb{R})$, while the boundary conditions (69) can be rewritten as (2) with $h(u) := \operatorname{col}\bigl(u_1(1),\ \int_0^1 u_2(s)\,ds,\ u_3(0)\bigr)$ and $d = \operatorname{col}\bigl(-\tfrac{5}{12},\ \tfrac{979}{720},\ 1\bigr)$. It is easy to verify that the corresponding operator $l$ (see formula (5)) satisfies the componentwise inequality (19) for all $u \in C([0,1],\mathbb{R}^3)$ with the matrix
$$L = \begin{pmatrix} \tfrac{1}{6} & 0 & \tfrac{2}{3} \\ 0 & 0 & \beta \\ 0 & 17 & 0 \end{pmatrix}.$$
Since $r(L) = \max\bigl\{\tfrac{1}{6},\ \sqrt{17\beta}\bigr\}$, it follows that condition (20) holds if
$$\beta < \tfrac{4}{17} \approx 0.235.$$
Let us choose, e.g., $\beta := 0.23$ and proceed as described in Section 7 (note that condition (20) is only sufficient; in particular, numerical experiments with larger values of β show that the convergence is still observed for values of β close to 1.1).
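For convenience, the applicability condition can also be checked numerically; the following short fragment (numpy assumed, added here only as an illustration) evaluates the spectral radius of the matrix given above and compares it with the bound $2/(b-a) = 2$.

```python
# Numerical check of condition (20) for the matrix L of this example.
import numpy as np

beta = 0.23
L = np.array([[1.0/6.0,  0.0, 2.0/3.0],
              [0.0,      0.0, beta   ],
              [0.0,     17.0, 0.0    ]])
r = max(abs(np.linalg.eigvals(L)))
print(r, np.sqrt(17 * beta), r < 2.0)   # r(L) ~ 1.977 < 2, so (20) holds
```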
We start from the zeroth approximation defined according to formula (13). The linearity of the functional differential system (68) and of the boundary conditions (69) implies that the approximate determining systems are linear. To construct the zeroth approximate determining system (60), one only has to substitute (13) into (59) with $m = 0$. After some computation we find that, in this case, (60) has the form
$$\begin{gathered} \eta_1 = -\tfrac{5}{12}, \qquad \xi_3 = 1, \qquad \xi_2 + \eta_2 = \tfrac{979}{360},\\ \tfrac{1}{12}\eta_1 - \tfrac{2}{9}\eta_3 + \tfrac{1}{12}\xi_1 - \tfrac{1}{9}\xi_3 - \tfrac{S(2)}{144} - \tfrac{11}{12} = \eta_1 - \xi_1,\\ -\tfrac{23}{120}\xi_3 - \tfrac{23}{600}\eta_3 + \tfrac{69}{200}\,C\bigl(\tfrac{2}{3}\bigr) + \tfrac{38941}{9600} = \eta_2 - \xi_2,\\ 2\bigl(\xi_2 - \eta_2 + \tfrac{1}{18}\bigr)S(2) - \tfrac{C(2\sqrt{2})}{\sqrt{2}} - \tfrac{27}{2}\xi_2 - \tfrac{3}{2}\eta_2 + \tfrac{25}{12} = \eta_3 - \xi_3, \end{gathered}$$
where $S(t) := \int_0^t \sin s^2\,ds$ and $C(t) := \int_0^t \cos s^2\,ds$ are the Fresnel-type integrals. System (71) consists of six equations in the six variables $\xi_i$, $\eta_i$, $i = 1,2,3$, which, according to (65), have the meaning of approximate values of $u_i(0)$, $u_i(1)$, $i = 1,2,3$. Solving system (71), we find its root
$$\xi_1^{(0)} \approx 0.63083, \quad \xi_2^{(0)} \approx -0.68261, \quad \xi_3^{(0)} = 1, \quad \eta_1^{(0)} = -\tfrac{5}{12}, \quad \eta_2^{(0)} \approx 3.40206, \quad \eta_3^{(0)} \approx 0.14388.$$
Substituting the values (72) into formula (13) according to (61), we obtain the zeroth approximation $U_0 = \operatorname{col}(U_{0,1},U_{0,2},U_{0,3})$, whose components $U_{0,i}$ are the straight line segments joining the points $(0,\xi_i^{(0)})$ and $(1,\eta_i^{(0)})$, $i = 1,2,3$. Although this linear approximation is very rough, it still provides some useful information on the localisation of the solution. In Figure 1, its graph is drawn together with that of the known exact solution (70). In particular, we have (see also Table 1) that
$$\xi_1^{(0)} - u_1^*(0) \approx 0.194, \quad \xi_2^{(0)} - u_2^*(0) \approx -0.681, \quad \xi_3^{(0)} - u_3^*(0) = 0, \quad \eta_1^{(0)} - u_1^*(1) = 0, \quad \eta_2^{(0)} - u_2^*(1) \approx -0.652, \quad \eta_3^{(0)} - u_3^*(1) \approx 0.777.$$
The values of ξ 3 and η 1 in (71) are determined directly due to the form of the boundary conditions (69).
Furthermore, better approximations are obtained when we proceed to higher order iterations according to formula (14). For problem (68), (69), by virtue of (11), this formula means that the functions u m = col ( u m , 1 , u m , 2 , u m , 3 ) , m 1 , are constructed according to the recurrence relations
$$\begin{aligned} u_{m,1}(t,\xi,\eta) ={}& u_{0,1}(t,\xi,\eta) + \tilde q_1(t) + \frac{1}{6}\int_0^t u_{m-1,1}(1-s,\xi,\eta)\,ds + \frac{2}{3}\int_0^t (s-1)\,u_{m-1,3}(1-s,\xi,\eta)\,ds\\ &- \frac{t}{6}\int_0^1 u_{m-1,1}(1-s,\xi,\eta)\,ds - \frac{2t}{3}\int_0^1 (s-1)\,u_{m-1,3}(1-s,\xi,\eta)\,ds,\\ u_{m,2}(t,\xi,\eta) ={}& u_{0,2}(t,\xi,\eta) + \tilde q_2(t) - \beta\int_0^t u_{m-1,3}\bigl(\tfrac{s}{3},\xi,\eta\bigr)\,ds + \beta t\int_0^1 u_{m-1,3}\bigl(\tfrac{s}{3},\xi,\eta\bigr)\,ds,\\ u_{m,3}(t,\xi,\eta) ={}& u_{0,3}(t,\xi,\eta) + \tilde q_3(t) + \int_0^t u_{m-1,2}(s,\xi,\eta)\,ds - 16\int_0^t u_{m-1,2}(\tau(s),\xi,\eta)\,ds\\ &- t\int_0^1 u_{m-1,2}(s,\xi,\eta)\,ds + 16t\int_0^1 u_{m-1,2}(\tau(s),\xi,\eta)\,ds \end{aligned}$$
for $t \in [0,1]$, $m = 1,2,\dots$, where $u_0 = \operatorname{col}(u_{0,1},u_{0,2},u_{0,3})$ is the starting approximation (13) and the functions $\tilde q_i$, $i = 1,2,3$, are computed by formula (12). A direct computation shows that, in this case, the $\tilde q_i$, $i = 1,2,3$, have the form
$$\begin{aligned} \tilde q_1(t) &= -\frac{S(2t) - t\,S(2)}{144},\\ \tilde q_2(t) &= \frac{115223}{28800}\,t^2 - \frac{115223}{28800}\,t + \frac{69}{200}\Bigl(C\bigl(\tfrac{2}{3}t\bigr) - t\,C\bigl(\tfrac{2}{3}\bigr)\Bigr),\\ \tilde q_3(t) &= \frac{S(2t) - t\,S(2)}{9} + \frac{t\,C(2\sqrt{2}) - C(2\sqrt{2}\,t)}{\sqrt{2}} + \frac{t(t-1)}{12}, \qquad t \in [0,1]. \end{aligned}$$
The values $\xi_i$ and $\eta_i$, $i = 1,2,3$, are kept as parameters when passing to the next iteration. The main computational work is to construct the functions $u_{m,i}(\cdot,\xi,\eta)$, $i = 1,2,3$, analytically, for which purpose it is natural to use computer algebra systems.
Here, when carrying out computations according to the above recurrence relations, we use MAPLE and additionally simplify the task by using polynomial approximations in the spirit of [19] (more precisely, 9th order polynomials over the Chebyshev nodes; the interpolation nodes used in the course of computation are marked on the graphs). Carried out without interpolation (i.e., literally according to Section 7), the procedure allows one to achieve the required accuracy in fewer steps but is computationally more expensive.
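For readers who prefer a purely numerical experiment, the following sketch implements the same scheme for problem (68), (69) in Python. It is not the authors' Maple/Chebyshev implementation: the uniform grid, trapezoidal quadrature, linear interpolation for the deviated arguments and scipy's fsolve for the determining systems are assumptions made here solely for illustration.

```python
# Numerical sketch of the scheme of Section 7 for problem (68), (69).
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import fsolve

beta = 0.23
t = np.linspace(0.0, 1.0, 2001)                 # grid on [a, b] = [0, 1]
tau = 0.25 * (t + np.sin(4.0 * t**2))           # argument transformation tau

# forcing terms q_1, q_2, q_3 of system (68)
q = np.vstack([
    -11.0/12.0 - np.sin(4.0 * t**2) / 72.0,
    beta * t / 144.0 + 8.0 * t + beta * np.cos(4.0/9.0 * t**2) + 1.0/18.0,
    4.0 * np.sin(4.0 * t**2)**2 + (2.0/9.0) * np.sin(4.0 * t**2) + t / 6.0,
])

def l_op(u):
    # the operator l of system (68); deviated arguments are interpolated
    u1 = lambda s: np.interp(s, t, u[0])
    u2 = lambda s: np.interp(s, t, u[1])
    u3 = lambda s: np.interp(s, t, u[2])
    return np.vstack([u1(1.0 - t) / 6.0 + 2.0/3.0 * (t - 1.0) * u3(1.0 - t),
                      -beta * u3(t / 3.0),
                      u2(t) - 16.0 * u2(tau)])

def L_op(y):
    # (L y)(t) = int_0^t y ds - t * int_0^1 y ds, applied row by row, cf. (11)
    cum = cumulative_trapezoid(y, t, axis=1, initial=0.0)
    return cum - np.outer(cum[:, -1], t)

def u_m(xi, eta, m):
    # successive approximations (13), (14): u_k = u_0 + L(l u_{k-1} + q)
    u0 = np.outer(xi, 1.0 - t) + np.outer(eta, t)
    u = u0.copy()
    for _ in range(m):
        u = u0 + L_op(l_op(u) + q)
    return u

d = np.array([-5.0/12.0, 979.0/720.0, 1.0])     # right-hand side of (69)

def determining(z, m):
    # the 2n = 6 approximate determining equations (58)
    xi, eta = z[:3], z[3:]
    u = u_m(xi, eta, m)
    delta = eta - xi - np.trapz(l_op(u) + q, t, axis=1)
    h = np.array([u[0, -1], np.trapz(u[1], t), u[2, 0]])
    return np.concatenate([delta, h - d])

z = np.zeros(6)
for m in range(7):                              # m = 0, 1, ..., 6
    # each system is solved near the root of the previous one (warm start)
    z = fsolve(lambda zz: determining(zz, m), z)
    print(m, np.round(z, 6))

U = u_m(z[:3], z[3:], 6)                        # sixth approximation U_6
R = np.gradient(U, t, axis=1) - l_op(U) - q     # residual, cf. (66)
print(np.abs(R).max())
```

With a sufficiently fine grid, the printed roots can be expected to approach the values collected in Table 1, and the last line plays the role of the residual check (66); the sketch is meant only to make the mechanics of the scheme explicit.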
The rough zeroth approximation is improved when we start the iteration. To obtain the first approximation (63), we need to construct the functions u 1 , i ( · , ξ , η ) , i = 1 , 2 , 3 , and the corresponding first approximate determining system (62). Constructing system (62) and solving it numerically, we get the values
$$\xi_1^{(1)} \approx 0.51673, \quad \xi_2^{(1)} \approx -0.01895, \quad \xi_3^{(1)} = 1, \quad \eta_1^{(1)} = -\tfrac{5}{12}, \quad \eta_2^{(1)} \approx 4.08704, \quad \eta_3^{(1)} \approx -0.41232$$
and, by (63), obtain the first approximation $U_1 = \operatorname{col}(U_{1,1},U_{1,2},U_{1,3})$, the graph of which is shown in Figure 2. Checking the residual function $R_1$ corresponding to $U_1$ (Formula (66)) in Figure 3, we see that already the first approximation provides a reasonable degree of accuracy. Higher order approximations are obtained by repeating these steps for more iterations.
Figure 4 shows the graphs of several further approximations. We can observe that, starting from the third one, the graphs in fact coincide with one another. The maximal value of the residual $t \mapsto U_5'(t) - (lU_5)(t) - q(t)$ of the fifth approximation $U_5$ is about 0.008 (see Figure 5). The result is refined further when we continue the computation with more iterations. In this particular case, when interpolation is additionally used, this also depends on the number of nodes; increasing it from 9 to, e.g., 11, we get the sixth approximation $U_6$ with a residual not exceeding 0.00025 in every component. This is seen in Figure 6 (especially Figure 6c), where the graphs of the components of $R_m$, $1 \le m \le 6$, are shown. The graph of $U_6$ does not optically differ from that of $U_5$ presented in Figure 4.
The values $\xi^{(m)}$, $\eta^{(m)}$ obtained by numerically solving the approximate determining equations (58) for $0 \le m \le 10$ (with interpolation on 11 nodes) are collected in Table 1. In the last row of the table, for the sake of comparison, we give the values $u_i^*(0)$, $u_i^*(1)$, $i = 1,2,3$, of the exact solution (70).
The approximations (64) are constructed in an analytic form; e.g., for the above-mentioned U 6 , using Maple we obtain the explicit formulae
$$\begin{aligned} U_{6,1}(t) \approx{}& -4.042408471\,t^{11} + 20.46952618\,t^{10} - 37.43787653\,t^{9} + 17.54308868\,t^{8} + 38.65671249\,t^{7}\\ &- 69.52449717\,t^{6} + 43.70459764\,t^{5} - 5.021255652\,t^{4} - 6.515215520\,t^{3} + 1.878014779\,t^{2}\\ &- 0.5643854438\,t + 0.4370323501,\\ U_{6,2}(t) \approx{}& 0.00004217847108\,t^{11} - 0.0003918694568\,t^{10} + 0.001582265878\,t^{9} - 0.003636034946\,t^{8}\\ &+ 0.005193027536\,t^{7} - 0.004701252907\,t^{6} + 0.002625994458\,t^{5} - 0.0008335495560\,t^{4}\\ &+ 0.0001241374492\,t^{3} + 3.999981974\,t^{2} + 0.055555553\,t - 0.001384503,\\ U_{6,3}(t) \approx{}& 69.00767411\,t^{11} - 297.1865115\,t^{10} + 515.3651026\,t^{9} - 495.0362578\,t^{8} + 310.1738752\,t^{7}\\ &- 122.453762\,t^{6} + 30.80698530\,t^{5} - 12.71224445\,t^{4} + 0.3960898466\,t^{3} - 0.01470825\,t^{2}\\ &+ 0.02088904869\,t + 1 \end{aligned}$$
for all $t \in [0,1]$.
The accuracy of the approximation of $u^*$ by $U_6$ can be verified in Figure 7, where the graphs of the corresponding error functions $t \mapsto |U_{6,i}(t) - u_i^*(t)|$, $i = 1,2,3$, are shown. We can see that the maximal value of the absolute error does not exceed $10^{-4}$.

Author Contributions

Investigation, A.R. and N.R.; Methodology, A.R.; Software, N.R.; Writing—original draft, A.R.; Writing—review and editing, N.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by RVO: 67985840 and Grant FP-S-20-6376 of the Internal Grant Agency at Brno University of Technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kolmanovskiĭ, V.; Myshkis, A. Applied Theory of Functional-Differential Equations; Mathematics and its Applications (Soviet Series); Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1992; Volume 85, p. xvi+234.
  2. Gopalsamy, K. Stability and Oscillations in Delay Differential Equations of Population Dynamics; Mathematics and its Applications; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1992; Volume 74, p. xii+501.
  3. Ruan, S. Delay differential equations in single species dynamics. In Delay Differential Equations and Applications; NATO Science Series II: Mathematics, Physics and Chemistry; Springer: Dordrecht, The Netherlands, 2006; Volume 205, pp. 477–517.
  4. Varyšová, T.; Novotná, V. Solution of dynamical macroeconomic model using modern methods. J. East. Eur. Res. Bus. Econ. 2015, 2015, 1–9.
  5. Bobalová, M.; Novotná, V. The use of functional differential equations in the model of the meat market with supply delay. Procedia Soc. Behav. Sci. 2015, 213, 74–79.
  6. Novotná, V. Numerical solution of the inventory balance delay differential equation. Int. J. Eng. Bus. Manag. 2015, 7, 7.
  7. Novotná, V.; Půža, B. On the construction of solutions of general linear boundary value problems for systems of functional differential equations. Miskolc Math. Notes 2018, 19, 1063–1078.
  8. Manzanilla, R.; Mármol, L.G.; Vanegas, C.J. On the controllability of a differential equation with delayed and advanced arguments. Abstr. Appl. Anal. 2010, 2010, 307409.
  9. Rontó, A.; Rontó, M.; Varha, J. A new approach to non-local boundary value problems for ordinary differential systems. Appl. Math. Comput. 2015, 250, 689–700.
  10. Bluman, G.W.; Kumei, S. Symmetries and Differential Equations; Applied Mathematical Sciences; Springer: New York, NY, USA, 1989; Volume 81, p. xiv+412.
  11. Mao, H.; Liu, Q.P. The short pulse equation: Bäcklund transformations and applications. Stud. Appl. Math. 2020, 1–21.
  12. Zhang, H.; Liu, D.Y. Localized waves and interactions for the high dimensional nonlinear evolution equations. Appl. Math. Lett. 2020, 102, 106102.
  13. Ockendon, J.R.; Tayler, A.B. The dynamics of a current collection system for an electric locomotive. Proc. R. Soc. Lond. A 1971, 322, 447–468.
  14. Zaidi, A.A.; Van Brunt, B.; Wake, G.C. Solutions to an advanced functional partial differential equation of the pantograph type. Proc. R. Soc. A 2015, 471, 20140947.
  15. Azbelev, N.V.; Maksimov, V.P.; Rakhmatullina, L.F. Introduction to the Theory of Functional-Differential Equations; Nauka: Moscow, Russia, 1991; (In Russian, with an English Summary).
  16. Rontó, M.; Mészáros, J. Some remarks on the convergence of the numerical-analytical method of successive approximations. Ukrain. Math. J. 1996, 48, 101–107.
  17. Rontó, A.; Rontó, M.; Shchobak, N. On finding solutions of two-point boundary value problems for a class of non-linear functional differential systems. Electron. J. Qual. Theory Differ. Eq. 2012, 1–17.
  18. Rontó, A.; Rontó, M.; Shchobak, N. Notes on interval halving procedure for periodic and two-point problems. Bound. Value Probl. 2014, 2014, 1–20.
  19. Rontó, A.; Rontó, M.; Shchobak, N. Parametrisation for boundary value problems with transcendental non-linearities using polynomial interpolation. Electron. J. Qual. Theory Differ. Eq. 2018, 1–22.
Figure 1. The zeroth approximation for problems (68) and (69).
Figure 2. The first approximation for problems (68) and (69).
Figure 3. Residual of the first approximation.
Figure 4. Several further approximations for problems (68) and (69).
Figure 5. Residual of the fifth approximation.
Figure 6. Residuals of the first six approximations (11 nodes).
Figure 7. Error of the sixth approximation (11 nodes).
Table 1. Values of the parameters computed from the approximate determining equations with 0 ≤ m ≤ 10 (11 nodes).

m | ξ1 | ξ2 | ξ3 | η1 | η2 | η3
0 | 0.630827 | −0.682613 | 1 | −0.416667 | 3.40206 | 0.143882
1 | 0.516729 | −0.0189486 | 1 | −0.416667 | 4.08704 | −0.412343
2 | 0.486264 | 0.00196245 | 1 | −0.416667 | 4.04751 | −0.675279
3 | 0.425443 | −0.0020287 | 1 | −0.416667 | 4.05544 | −0.624705
4 | 0.439424 | −0.00126683 | 1 | −0.416667 | 4.05392 | −0.634359
5 | 0.43643 | −0.00141226 | 1 | −0.416667 | 4.05421 | −0.632516
6 | 0.437032 | −0.0013845 | 1 | −0.416667 | 4.05416 | −0.632868
7 | 0.436914 | −0.0013898 | 1 | −0.416667 | 4.05417 | −0.632801
8 | 0.436937 | −0.00138879 | 1 | −0.416667 | 4.05417 | −0.632813
9 | 0.436932 | −0.00138898 | 1 | −0.416667 | 4.05417 | −0.632811
10 | 0.436933 | −0.00138895 | 1 | −0.416667 | 4.05417 | −0.632811
exact (u*) | 0.436933 | −0.00138889 | 1 | −0.416667 | 4.05417 | −0.63281
