On Construction of Solutions of Linear Differential Systems with Argument Deviations of Mixed Type

Abstract: We show the use of parametrization techniques and successive approximations for the effective construction of solutions of linear boundary value problems for differential systems with multiple argument deviations. The approach is illustrated with a numerical example.


Introduction
In several applications, it is desirable to have effective tools allowing one to construct solutions of functional differential equations under various boundary conditions. Many real processes are modelled by systems of equations with argument deviations (see, e.g., [1][2][3] and references therein). For delay equations, the classical method of steps [1] allows one to construct the solution of the Cauchy problem by extending it from the initial interval in a stepwise manner; in this way, an ordinary differential equation is solved at every step, with every preceding part of the curve serving as a history function for the next one. This technique, together with the ODE solvers available in mathematical software, is commonly used in the practical analysis of dynamic models based on equations with retarded argument under initial conditions (e.g., in economic models [4][5][6]).
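For an initial value problem with a single constant delay, the stepwise extension just described is straightforward to implement numerically. The following is a minimal sketch (not part of the method developed in this paper), using an explicit Euler integrator with a step chosen to divide the delay, so that the delayed value is always a stored grid point; the test equation u′(t) = −u(t − 1) with u ≡ 1 on [−1, 0] is a standard textbook example.

```python
import numpy as np

def solve_dde_steps(f, history, delay, t_end, dt=1e-3):
    """Method of steps for u'(t) = f(t, u(t), u(t - delay)),
    with u = history on [-delay, 0]. The step dt is adjusted to divide
    the delay exactly, so u(t - delay) is always a stored grid value."""
    n_delay = int(round(delay / dt))
    dt = delay / n_delay
    n_steps = int(round(t_end / dt))
    t = np.linspace(-delay, t_end, n_delay + n_steps + 1)
    u = np.empty_like(t)
    u[: n_delay + 1] = [history(s) for s in t[: n_delay + 1]]
    for k in range(n_delay, n_delay + n_steps):
        # explicit Euler step; u[k - n_delay] is the delayed value u(t_k - delay)
        u[k + 1] = u[k] + dt * f(t[k], u[k], u[k - n_delay])
    return t, u

# u'(t) = -u(t - 1) with u ≡ 1 on [-1, 0]; exactly, u(t) = 1 - t on [0, 1]
t, u = solve_dde_steps(lambda t, u, ud: -ud, lambda s: 1.0, delay=1.0, t_end=2.0)
```

On [0, 1] the delayed argument falls into the initial interval, so the right-hand side is known and each step is an ordinary integration, exactly as in the stepwise construction described above; this is also what breaks down for deviations of mixed type.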
In certain cases, boundary conditions are more complicated and deviations are of mixed type (i.e., equations involve both retarded and advanced terms [7,8] or deviations of neither type), which, in particular, makes it impossible to apply the method of steps due to the absence of the Volterra property of the corresponding operator. The aim of this paper is to show that the techniques suggested in [9] for boundary value problems for ordinary differential equations can, under certain assumptions, be adapted to functional differential equations covering, in particular, the case of deviations of mixed type and general boundary conditions. As a result, one can formulate a scheme for the effective construction of approximations to solutions of boundary value problems for functional differential equations, which also allows one to establish the solvability in a theoretically rigorous manner. Here, we consider the linear case and deal with the construction of approximate solutions only. The techniques are rather flexible and can be used in relation to other problems. Although the approach is not explicitly designed for partial differential equations, it still might be used in the cases where systems of equations with one independent variable arise (e.g., equations obtained when a symmetry reduction is possible, equations related to the inverse scattering transform or discretization [10,11]). Unlike methods used for integrable systems (e.g., the Hirota bilinear method [12]), the technique is aimed at obtaining approximate solutions without previous knowledge of the structure of the solution set. We consider the system of n linear functional differential equations

u′(t) = (lu)(t) + q(t),  t ∈ [a, b],  (1)

under the linear boundary conditions

h(u) = d,  (2)

where u = (u_i)_{i=1}^n, −∞ < a < b < ∞, l : C([a, b], R^n) → L_∞([a, b], R^n) is a linear bounded operator, h : C([a, b], R^n) → R^n is a bounded linear vector-valued functional, and d = col(d_1, …, d_n) and q ∈ L_∞([a, b], R^n) are fixed.
System (1) can be rewritten in coordinates as

u_i′(t) = Σ_{j=1}^n (l_{ij} u_j)(t) + q_i(t),  i = 1, …, n,  (3)

where l_{ij} : C([a, b], R) → L_∞([a, b], R) are the components of the operator l = col(l_1, …, l_n) defined by the equalities

l_{ij} v := l_i(v e_j)  (4)

for v from C([a, b], R) (here, e_j stands for the jth unit vector). The difference between (1) and (3) is a matter of notation: given system (3), it can be rewritten as (1) by setting (lu)(t) := col((l_1 u)(t), …, (l_n u)(t)) for u ∈ C([a, b], R^n). Here, we consider the case where the component operators have values in L_∞([a, b], R), i.e., the coefficients in the equation are essentially bounded, which excludes the presence of integrable singularities (e.g., 1/√t, t ∈ (0, 1]). System (3) covers, in particular, systems of differential equations with argument deviations of the form

u_i′(t) = Σ_{j=1}^n p_{ij}(t) u_j(τ_{ij}(t)) + q_i(t),  i = 1, …, n,  (6)

a particular case of which is, e.g., the well-known pantograph equation, which arises independently in several applied problems of various nature (see, e.g., [13,14] and references therein). The assumption that the argument deviations τ_{ij} map [a, b] into itself is not a restriction of generality because the setting involving initial functions can be reduced to the present one by a suitable transformation [15]. The retarded character of the equation is not assumed (in particular, for (6), it is not required that τ_{ij}(t) ≤ t for all i, j) and, therefore, the method of steps is inapplicable. We use the following notation. For any vectors ξ = col(ξ_1, …, ξ_n), η = col(η_1, …, η_n), we write |ξ| = col(|ξ_1|, …, |ξ_n|) and max{ξ, η} = col(max{ξ_1, η_1}, …, max{ξ_n, η_n}).
The inequalities between vectors, as well as minima and maxima of vector functions, are understood likewise in the componentwise sense. r(A) denotes the spectral radius of a matrix A. For an essentially bounded function v : [a, b] → R^n and an interval J ⊆ [a, b], we put ‖v‖_J := ess sup_{t ∈ J} |v(t)|, the essential supremum being taken componentwise.

Auxiliary Problems with Two-Point Conditions
The idea is to use a suitable parametrization. Motivated by [9], we will seek a solution of (1), (2) among all solutions of Equation (1) under the simplest two-point conditions

u(a) = ξ,  u(b) = η,  (9)

with unfixed boundary values ξ and η. Under suitable assumptions, every problem (1), (9) is shown to be uniquely solvable and its solution is constructed as the limit of certain iterations {u_m(·, ξ, η) : m ≥ 0}. The vectors ξ = col(ξ_1, ξ_2, …, ξ_n) and η = col(η_1, η_2, …, η_n) are parameters whose values remain unknown at the moment; they should be chosen appropriately in order to satisfy condition (2). Let us fix closed bounded connected sets D_0 and D_1 in R^n and focus on the solutions of problem (1), (2) with the values at the endpoints such that

u(a) ∈ D_0,  u(b) ∈ D_1.  (10)

To treat solutions with properties (10), we carry out "freezing" at the endpoints by passing temporarily to conditions (9). Thus, instead of (1), (2), we will consider the parametrised problem (1), (9). There are two (or, in coordinates, 2n) degrees of freedom there, and we will see that one can, in a sense, go back to the original problem by choosing the values of the parameters appropriately.

Iteration Process
Let the operator L be defined by

(Ly)(t) := ∫_a^t y(s) ds − ((t − a)/(b − a)) ∫_a^b y(s) ds,  t ∈ [a, b],  (11)

for any y ∈ L_∞([a, b], R^n). Let us fix arbitrary values of ξ and η, put

q̃ := Lq,  (12)

and define the sequence of functions {u_m(·, ξ, η) : m ≥ 0} by setting

u_0(t, ξ, η) := ξ + ((t − a)/(b − a))(η − ξ),  (13)

u_m(t, ξ, η) := u_0(t, ξ, η) + (L l u_{m−1}(·, ξ, η))(t) + q̃(t)  (14)

for m ≥ 1. When no confusion may arise, for the sake of convenience, we will sometimes omit the arguments ξ, η in u_m(t, ξ, η) and write simply u_m(t), assuming the dependence on the parameters implicitly. The sequence is obviously related to the iterative solution of the equation

u = u_0(·, ξ, η) + L l u + q̃.  (15)

The reason for choosing it in this particular way is explained by the following statement, which is an immediate consequence of (14).
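In discretised form, the iteration is easy to carry out. The sketch below assumes the standard parametrised form of the scheme, namely u_0(t) = ξ + ((t − a)/(b − a))(η − ξ) and u_m = u_0 + L(l u_{m−1} + q) with (Ly)(t) = ∫_a^t y(s) ds − ((t − a)/(b − a)) ∫_a^b y(s) ds, and uses, purely for illustration, the scalar equation u′(t) = 0.5 u(1 − t) + q(t) on [0, 1] with an argument reflection (a deviation of mixed type); the grid, the trapezoidal quadrature, and the iteration count are implementation choices.

```python
import numpy as np

# Discretised sketch of the iteration u_m = u_0 + L(l u_{m-1} + q), assuming
# u_0(t) = xi + (t - a)/(b - a)*(eta - xi) and
# (Ly)(t) = int_a^t y(s) ds - (t - a)/(b - a) * int_a^b y(s) ds.
# Illustration: scalar equation u'(t) = 0.5*u(1 - t) + q(t) on [0, 1].
a, b, N = 0.0, 1.0, 2001
t = np.linspace(a, b, N)
dt = t[1] - t[0]

def cumint(y):
    """Cumulative trapezoidal integral of y from a to t."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) / 2.0) * dt
    return out

def L_op(y):
    """The operator L; note that (Ly)(a) = (Ly)(b) = 0."""
    Y = cumint(y)
    return Y - (t - a) / (b - a) * Y[-1]

def l_op(u):
    """(lu)(t) = 0.5*u(1 - t): an argument reflection (mixed-type deviation)."""
    return 0.5 * u[::-1]

def u_m(xi, eta, q, m):
    """m-th member of the iteration sequence for fixed parameters xi, eta."""
    u0 = xi + (t - a) / (b - a) * (eta - xi)
    u = u0.copy()
    for _ in range(m):
        u = u0 + L_op(l_op(u) + q)
    return u

u = u_m(xi=1.0, eta=2.0, q=np.zeros(N), m=30)
```

Whatever ξ and η are, every iterate satisfies the two-point conditions u(a) = ξ, u(b) = η exactly, because L vanishes at both endpoints; this is the point of the construction.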

Lemma 1.
Let ξ and η be fixed. If u_m(·, ξ, η) ⇒ u as m → +∞ on [a, b], then u satisfies the two-point conditions (9) and the equation

u′(t) = (lu)(t) + q(t) + (1/(b − a)) (η − ξ − ∫_a^b ((lu)(s) + q(s)) ds).  (16)

Proof. The function u satisfies the integral Equation (15) and, in particular, is absolutely continuous. By (11) and (13), Equation (15) can be rewritten as

u(t) = ξ + ∫_a^t ((lu)(s) + q(s)) ds + ((t − a)/(b − a)) (η − ξ − ∫_a^b ((lu)(s) + q(s)) ds),

the differentiation of which yields (16). Since, obviously, u_m(a, ξ, η) = ξ and u_m(b, ξ, η) = η for all m, the fulfilment of conditions (9) is verified by passing to the limit in (14).
The idea of the approach is briefly this: Equation (16) differs from (1) by a finite-dimensional term and, after its ultimate "removal" by adjusting ξ appropriately, we are among the solutions of (1), (9); then we should try to choose η so that (2) is satisfied. In order to proceed according to this scheme, one needs to ensure the convergence of the sequence of functions (14).

Applicability Conditions
Let L be the matrix constituted by the norms of the components of l as operators from C([a, b], R) to L_∞([a, b], R), i.e.,

L := (‖l_{ij}‖)_{i,j=1}^n.  (18)

All the components of the matrix L are well defined due to the boundedness of l as a mapping from C([a, b], R^n) to L_∞([a, b], R^n). It follows immediately from (4) and (18) that the componentwise inequality

|(lu)(t)| ≤ L max_{s ∈ [a, b]} |u(s)|  (19)

holds for a.e. t ∈ [a, b] and arbitrary u from C([a, b], R^n).
Theorem 1. Assume that the spectral radius of L satisfies inequality (20). Then, for any fixed (ξ, η) ∈ D_0 × D_1, the sequence (14) converges uniformly on [a, b] to a limit function u_∞(·, ξ, η). The function u_∞(·, ξ, η) satisfies conditions (9) and the differential equation

u′(t) = (lu)(t) + q(t) + Δ(ξ, η),  (21)

where

Δ(ξ, η) := (1/(b − a)) (η − ξ − ∫_a^b ((l u_∞(·, ξ, η))(s) + q(s)) ds).  (22)

The function u_∞(·, ξ, η) is the unique solution of (21) with the initial condition u(a) = ξ and also the unique solution of Equation (17). The function u_∞(·, ξ, η) satisfies the original Equation (1) if and only if ξ and η are chosen so that

Δ(ξ, η) = 0.  (23)

The form of iteration sequence (14) is chosen specifically according to the two-point separated boundary conditions (9), so that (9) is automatically satisfied at every step. The smallness of r(L) (assumption (20)) ensures the unique solvability of all the auxiliary initial value problems (16) and (24), which is also helpful in cases where the initial value problem (1), (24) for the original equation is not uniquely solvable. It should be noted that the last-mentioned issue may arise even for very simple equations of type (6); for example, it is easy to verify that if (1) has the form (25), then the initial value problem (25), (24) is not uniquely solvable (it has infinitely many solutions if ξ = ∫_a^b q(s) ds and no solution for ξ ≠ ∫_a^b q(s) ds) and the corresponding Picard iterations diverge. However, the iterations defined according to equalities (13) and (14) can be used, since assumption (20) guarantees their convergence. Equation (21) clearly resembles (16), with the difference that Δ(ξ, η) appearing in (21), where the limit function is directly involved, is not a functional term. Equation (21) can thus be regarded as a perturbation of Equation (1) by a constant forcing term, the value of which is related to the two-point condition (9).
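In practice, the hypothesis on r(L) is verified by assembling the matrix of coefficient bounds and computing its spectral radius. The matrix below is a hypothetical 2 × 2 example, not taken from this paper, and the admissible threshold for r(L) depends on the constant in (20); only the computation of r(L) itself is shown.

```python
import numpy as np

# Hypothetical bounds L_ij >= ess sup |p_ij| for a 2x2 system of type (6)
L = np.array([[0.1, 0.2],
              [0.3, 0.1]])

# spectral radius r(L) = max |eigenvalue|
r = max(abs(np.linalg.eigvals(L)))
# for this matrix the eigenvalues are 0.1 ± sqrt(0.06), so r(L) ≈ 0.3449
```

Since L has nonnegative entries, r(L) can only decrease when the coefficient bounds are sharpened, so crude sup-norm estimates are safe (if pessimistic) for checking a smallness condition of this kind.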

Theorem 2.
Assume that condition (20) holds and that ξ and η are fixed. Then the solution of (26) with the initial value (24) satisfies condition (27) if and only if (28) holds, with Δ given by (22).

Theorem 3.
Under assumption (20), the limit function u_∞(·, ξ, η) of sequence (14) is a solution of the original problem (1), (2) if and only if the vectors ξ and η satisfy the system of 2n equations

Δ(ξ, η) = 0,  h(u_∞(·, ξ, η)) = d.  (32)

Proof. It is sufficient to apply Theorem 1. Equations (32) bring us from the auxiliary two-point parametrised conditions back to the given functional conditions (2).
The boundary value problem (1), (2) is thus theoretically reduced to the solution of Equations (32) in the variables ξ and η.

Proof of Theorem 1
We need some technical statements.

Lemma 2. For any essentially bounded function y : [a, b] → R^n, the estimate

|(Ly)(t)| ≤ α_1(t) ‖y‖_{[a,b]},  t ∈ [a, b],  (34)

holds.
This is a modified version of Lemma 3 from [16]; the proof proceeds along the same lines. Here, notation (8) is used and α_1 is defined by

α_1(t) := 2(t − a)(1 − (t − a)/(b − a)),  t ∈ [a, b].

The inequality in (34) and similar relations below are componentwise.
To prove Theorem 1, it is sufficient to use Lemma 3. Indeed, estimate (35) implies that the corresponding bound on the differences of iterations holds for any k ≥ 1. The assumption imposed on l ensures that lu_0 ∈ L_∞([a, b], R^n) and, therefore, the sequence {u_m(·, ξ, η) : m ≥ 0} is a Cauchy sequence in C([a, b], R^n) and thus converges to u_∞(·, ξ, η) uniformly on [a, b] (in fact, due to the boundedness of D_0 and D_1, the convergence is also uniform with respect to (ξ, η) ∈ D_0 × D_1). Passing to the limit as m → ∞ in (14), we find that u = u_∞(·, ξ, η) is a solution of (15). The proof of the uniqueness is standard: if (15) has another solution v, then v − u = L l(v − u), whence, by virtue of Lemma 2 and inequalities (19) and (36), the difference v − u is majorised by terms tending to zero and, therefore, in view of condition (20), v coincides with u.

Some Estimates
Passing to the limit in (39) as k → ∞ gives the usual error estimate, where Q is the corresponding majorant matrix; similar estimates for the differences of successive approximations also hold.
Recall that all the inequalities are understood in the componentwise sense. In (43), notation (7) is used.
In view of assumption (20) and the presence of the multiplier Q^m, the second term in (49), involving the unknown function u_∞(·, ξ, η), tends to zero as m grows; the difference of the values of u_m with respect to the parameters is mainly estimated by ∑_{j=0}^{m−1} Q^j multiplied by the size of the perturbation. The next lemma provides an alternative estimate of this type.
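The role played by the smallness condition in these majorant estimates can be seen on a toy matrix: when r(Q) < 1, the powers Q^m decay and the partial sums ∑_{j<m} Q^j converge to (I − Q)^{-1} (the Neumann series), which is what keeps the bounds finite. The matrix below is illustrative only.

```python
import numpy as np

# Toy illustration: r(Q) < 1 forces Q^m -> 0 and sum_j Q^j -> (I - Q)^{-1}.
Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])
r = max(abs(np.linalg.eigvals(Q)))           # here r(Q) = 0.5 (eigenvalues 0.5, 0.1)
Qm = np.linalg.matrix_power(Q, 50)           # ~ r(Q)^50, negligible
S = sum(np.linalg.matrix_power(Q, j) for j in range(200))
neumann_err = np.abs(S - np.linalg.inv(np.eye(2) - Q)).max()
```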

Practical Realisation
Theorems 1 and 2 suggest replacing the boundary value problem (1), (2) by the system of 2n Equations (32). These equations are usually referred to as determining equations because their roots determine the solutions of the original problem among the solutions of the auxiliary ones (i.e., we obtain a solution once the values of ξ and η are found). The main difficulty here is the fact that u_∞(·, ξ, η) is known in exceptional cases only, and thus system (32), in general, cannot be constructed explicitly. This complication can be resolved by using the approximate determining systems of the form

Δ_m(ξ, η) = 0,  h(u_m(·, ξ, η)) = d,  (58)

where m ≥ 0 is fixed and the function Δ_m : D_0 × D_1 → R^n is given by the formula

Δ_m(ξ, η) := (1/(b − a)) (η − ξ − ∫_a^b ((l u_m(·, ξ, η))(s) + q(s)) ds)

for arbitrary ξ ∈ D_0, η ∈ D_1. The function Δ_m is obtained after m iterations and thus, in contrast to system (32), Equations (58) involve only functions that are constructed in a finite number of steps. The uniform convergence of functions (14) to u_∞(·, ξ, η) implies that systems (32) and (58) are close to one another for m sufficiently large and, under suitable conditions, the solvability of the mth approximate determining system (58) can be used to prove that of (32). Existence theorems can be formulated by analogy to [17] (we do not discuss statements of this kind here). A practical scheme of analysis of the boundary value problem (1), (2) along these lines can be described as follows.

1.
Solve the zeroth approximate determining system, i.e., system (58) with m = 0. This approximate determining system has the simplest form, and its root (ξ^(0), η^(0)) serves as a rough approximation of the unknown values of (ξ, η). The function U_0 := u_0(·, ξ^(0), η^(0)) is the zeroth approximation of the solution we are looking for. This approximation is always linear (it is the straight line segment joining the points (a, ξ^(0)) and (b, η^(0)); see (13)), and its construction is easy because the iteration has not been carried out yet.

2.
Analytically construct the function u_1(·, ξ, η) according to the recurrence Formula (14), keeping ξ and η as parameters. Numerically solve the corresponding first approximate determining system in a neighbourhood of (ξ^(0), η^(0)) and find its root (ξ^(1), η^(1)). Substitute the values (ξ^(1), η^(1)) into (14) for m = 1 and construct the first approximation U_1 := u_1(·, ξ^(1), η^(1)).

3.

Proceeding similarly, construct the approximations U_m := u_m(·, ξ^(m), η^(m)) for m = 1, 2, …, m_0, and draw their graphs. By construction, we always have U_m(a) = ξ^(m) and U_m(b) = η^(m). Multiple roots of system (58) usually indicate the existence of multiple solutions of the problem. In such cases, in order to select a particular one, we specify a suitable neighbourhood when solving the approximate determining equations numerically.

4.
Analyse the results of step 3 and decide whether the computation should be continued.
The approach involves both the analytic part (construction of iterations with parameters according to equalities (13) and (14)) and the numerical computation, when the approximate determining Equation (58) are solved. We can (and, in this approach, are generally encouraged to) start with step 1 without knowing anything about the solvability of the problem, because a hint on the existence of a solution and its localisation is likely to be obtained in the course of computation.
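For a linear problem, the approximate determining system (58) is affine in (ξ, η), so the numerical part of the scheme reduces to linear algebra once Δ_m and h(u_m) can be evaluated. The sketch below applies steps 1–3 to a constructed scalar example (not the system studied in this paper): u′(t) = 0.5 u(1 − t) + q(t) on [0, 1] with the boundary condition u(0) + u(1) = 1, where q(t) = 0.5 + 0.5t is chosen so that u*(t) = t is an exact solution; the forms of u_0, L, and Δ_m assumed here are those of the standard parametrised scheme.

```python
import numpy as np

# Worked sketch of steps 1-3 for u'(t) = 0.5*u(1 - t) + q(t), t in [0, 1],
# with boundary condition h(u) = u(0) + u(1) = 1. With q(t) = 0.5 + 0.5*t,
# u*(t) = t is exact: u*'(t) = 1 = 0.5*(1 - t) + q(t).
a, b, N, m = 0.0, 1.0, 2001, 8
t = np.linspace(a, b, N)
dt = t[1] - t[0]
q = 0.5 + 0.5 * t

def cumint(y):
    """Cumulative trapezoidal integral from a."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) / 2.0) * dt
    return out

def u_m(xi, eta):
    """m-th iterate of u = u0 + L(lu + q) with the assumed L and u0."""
    u0 = xi + (t - a) / (b - a) * (eta - xi)
    u = u0.copy()
    for _ in range(m):
        y = 0.5 * u[::-1] + q          # (lu)(t) + q(t); reflection deviation
        Y = cumint(y)
        u = u0 + Y - (t - a) / (b - a) * Y[-1]
    return u

def F(xi, eta):
    """Approximate determining map: (Delta_m, h(u_m) - d)."""
    u = u_m(xi, eta)
    integral = cumint(0.5 * u[::-1] + q)[-1]
    delta = (eta - xi - integral) / (b - a)
    return np.array([delta, u[0] + u[-1] - 1.0])

# F is affine in (xi, eta): recover it from three evaluations and solve F = 0
F0 = F(0.0, 0.0)
A = np.column_stack([F(1.0, 0.0) - F0, F(0.0, 1.0) - F0])
xi, eta = np.linalg.solve(A, -F0)
U = u_m(xi, eta)                        # m-th approximation of the solution
```

For nonlinear boundary conditions, or when (58) is constructed symbolically, the same root would instead be located by a numerical solver started near (ξ^(0), η^(0)), as described in step 2.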
When analysing the results obtained at step 3, the following main scenarios may be observed:

1.

Clear signs of convergence and a good degree of coincidence (U_{m_0} satisfies the set accuracy requirements; it remains only to check the solvability rigorously, as mentioned above).

2.
There are signs of convergence but the accuracy requirements are not met (continue the computation with m = m 0 + 1).

3.
There are signs of divergence, usually accompanied by a failure to solve some of the equations (either there is no solution or the boundary of the convergence domain has been crossed; the scheme is inapplicable).

4.
Failure to carry out symbolic computations at a certain point (software or hardware limitations; some simplifications should be used).
The main restriction is assumption (20), without which the presented proof fails and the procedure may diverge. When condition (20) is not satisfied, to exclude scenario 3, the use of the interval division technique [18] can be suggested, for which purpose the auxiliary two-point conditions of form (9) are very suitable. In the case of difficulties with symbolic computation, polynomial approximations can be used by analogy to [19]. We do not discuss these two issues here in more detail.
The accuracy of approximation can be checked by substituting U_m into the equation and computing the residual functions R_m = col(R_{m,1}, …, R_{m,n}) for m = 1, 2, …, m_0, where

R_m(t) := U_m′(t) − (lU_m)(t) − q(t),  t ∈ [a, b].

Since the operator l in Equation (1) is bounded, estimate (67) holds for all m ≥ 1, where u* is an exact solution of problem (1), (2) (provided that it exists), the values (ξ^(m), η^(m)) are the roots of the mth approximate determining system (58), and U_m is the corresponding mth approximation (64). In other words, according to (67), the convergence of the roots of the approximate determining equations to the values of a particular exact solution u* at the points a and b guarantees the approximation of u* by U_m, the quality of which grows with m.
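On a grid, this residual check amounts to numerical differentiation and substitution. A minimal sketch for the same constructed scalar example u′(t) = 0.5 u(1 − t) + q(t) on [0, 1] with q(t) = 0.5 + 0.5t (so that u*(t) = t is exact, an assumption of this illustration): the residual of the exact solution is at round-off level, while a deliberately wrong candidate shows an O(1) residual.

```python
import numpy as np

# Residual R(t) = U'(t) - (lU)(t) - q(t) for u'(t) = 0.5*u(1 - t) + q(t)
# on [0, 1], with q(t) = 0.5 + 0.5*t (so that u*(t) = t is exact).
t = np.linspace(0.0, 1.0, 2001)
q = 0.5 + 0.5 * t

def residual(U):
    dU = np.gradient(U, t)             # second-order finite differences
    return dU - 0.5 * U[::-1] - q      # U[::-1] realises U(1 - t) on this grid

res_exact = np.abs(residual(t)).max()          # candidate U = u* = t
res_wrong = np.abs(residual(2.0 * t)).max()    # deliberately wrong candidate
```

When U_m is available in closed form (as in the symbolic construction above), U_m′ can of course be computed exactly instead of by finite differences.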

A Numerical Example
To show the practical realisation of the above scheme, let us consider the problem of solving the system of differential equations with argument deviations (68), where

q_1(t) := −11/12 − (1/72) sin 4t²,
q_2(t) := β/144 + 8t + β cos((4/9) t²) + 1/18,
q_3(t) := 4 (sin 4t²)² + (2/9) sin 4t² + … .

The given particular form of the forcing terms q_1, q_2 and q_3 in (68) is chosen for the purpose of an explicit check against an exact solution, which is known in this case. Along with the retarded term containing the deviation t → (1/3)t, the right-hand side of system (68) involves also terms with the argument reflection t → 1 − t and an argument transformation τ for which the function t → t − τ(t) has multiple sign changes. The latter two are deviations of mixed type. Following the scheme described above, we construct the first approximate determining system (62). Solving system (62) numerically, we get, in particular, the value ξ^(1)_3 ≈ −0.41232 and, by (63), obtain the first approximation U_1 = col(U_{1,1}, U_{1,2}, U_{1,3}), the graph of which is shown in Figure 2. Checking the residual function R_1 corresponding to U_1 (Formula (66)) in Figure 3, we see that already the first approximation provides a reasonable degree of accuracy. Higher-order approximations are obtained by repeating these steps with more iterations. Figure 4 shows the graphs of several further approximations. We can observe that, starting from the third one, the graphs in fact coincide with one another. The maximal value of the residual t → U_5′(t) − (lU_5)(t) − q(t) of the fifth approximation U_5 is about 0.008 (see Figure 5). The result is refined further when we continue the computation with more iterations. In this particular case, where interpolation is additionally used, the accuracy depends also on the number of nodes; increasing it from 9 to, e.g., 11, we get the sixth approximation U_6 with the residual not exceeding 0.00025 in every component.
This is seen in Figure 6 (especially Figure 6c), where the graphs of the components of R_m, 1 ≤ m ≤ 6, are shown. The graph of U_6 is visually indistinguishable from that of U_5 presented in Figure 4.
The values ξ^(m), η^(m) obtained by numerically solving the approximate determining Equations (58) for 0 ≤ m ≤ 10 (with interpolation on 11 nodes) are collected in Table 1. In the last row of the table, for the sake of comparison, we give the values u*_i(0), u*_i(1), i = 1, 2, 3, of the exact solution (70).

Conflicts of Interest:
The authors declare no conflict of interest.