Analysis of the Sign of the Solution for Certain Second-Order Periodic Boundary Value Problems with Piecewise Constant Arguments

Abstract: We find sufficient conditions for the unique solution of certain second-order boundary value problems to have a constant sign. To this purpose, we use the expression, in terms of a Green's function, of the unique solution for impulsive linear periodic boundary value problems associated with second-order differential equations with a functional dependence given by a piecewise constant function. Our analysis relies on the study of the sign of the Green's function.


Introduction
Delay differential equations find interesting applications in fields such as biology, physiology, and physics. Some interesting and recent applications of this type of equation appear, for instance, when proposing a mathematical model for electrohydraulic servomechanisms [1], absorption complexities in pharmacokinetics [2], the approximation of Fitzhugh-Nagumo and Hodgkin-Huxley models for action potential generation in excitable cells [3], tick populations with diapause [4], or thermal models for variable pipe flow [5].
The study of the properties of solutions to such models is a relevant area of research [6]. In particular, the existence of solutions with a constant sign is important from the point of view of the interpretation of biological models. Indeed, the non-negativity of the solutions [7] is essential for the coherence between the mathematical expressions and the magnitudes that, in this context, normally refer to positive values, such as the number of individuals in a certain population or the concentration of a certain substance. Recent work [8] examines in depth the problem of the existence of non-negative solutions to linear autonomous functional differential equations. See also [9] for order-preserving evolution operators of functional differential equations.
This paper is devoted to delay differential equations with a piecewise constant delay given by the greatest integer function. The appearance of such delays can be interpreted as the dependence of the rate of change of the variable of interest not only on the present state but also on some previous (equidistributed with respect to time) memorized values of the state (see [10], where a Liénard-type differential equation with piecewise constant delays is considered).
The study of second-order differential equations independent of the first-order derivative x′, with a functional dependence given by a piecewise constant argument, goes back to references [11][12][13][14][15]. There, the functional dependence considered is the greatest integer function. In [16], the authors consider boundary value problems for second-order differential equations with deviating arguments and, in [17], the uniqueness issue is addressed for second-order linear functional differential equations.
In particular, the periodic boundary value problem: is studied in [15], where T > 0 and [t] denotes the floor of t, that is, the greatest integer less than or equal to t. References [18,19] are devoted to similar problems for first-order differential equations. In order to consider a problem where a delay term also affects the derivative of the function, the existence of solutions for second-order functional differential equations with piecewise constant arguments of the form: is considered in [20] by studying the existence of solutions for the impulsive linear periodic boundary value problems: where s ∈ J, a, b, c, d, λ ∈ R, T > 0, and x′(z−), x′(z+) denote, respectively, the one-sided left and right limits of x′ at the point z ∈ J.
Although the expression of the solution to (1) can be obtained directly from the solution of (2), which represents the Green's function, it is quite tedious to find sufficient conditions for the Green's function to have a constant sign. In this paper, we prove the existence of non-negative (resp., non-positive) solutions to (1) under appropriate conditions. Some other references dealing with similar problems are [21], where the existence and stability of periodic solutions for a quasilinear differential equation with piecewise constant arguments are studied, or [22], where the stability of the null solution of the corresponding quasilinear equation is analyzed. In [23], a similar study is carried out to obtain the Green's function that expresses the unique solution to a second-order nonautonomous differential equation with piecewise constant arguments.
The objectives of the main results in the paper are summarized as follows:
• Theorems 1-3 give sufficient conditions for the unique solution to problem (2) with s = 0 or s = T to have a constant sign. The technical Lemmas A2-A6 allow us to interpret two of the sufficient conditions directly in terms of the coefficients a, b, c, d of the linear problem (see Remarks 1 and 2). The remaining condition is algebraically manipulated in Lemma 1, Remark 3, Lemma 2, and Remark 4;
• An analogous study can be made for 0 < s < T with s ∈ Z+; see Theorem 4, Lemma 3, Theorem 5, and Remark 5;
• To significantly improve these first results, it is important to determine which planar regions of the type S := {(x, y) ∈ R² : x ≥ 0, y ≥ −Mx}, for a certain M > 0, are such that their elements are mapped into the interval [0, +∞) under the scalar product with a certain vector (h1(z), h2(z)), which will be defined in relation to the differential problem. Lemma 4 is helpful to give an answer; therefore, some of the results are extended in the sense that the required conditions are reduced to some sign properties of the functions h1 and h2, together with the fact that certain iterates remain in S (see Theorems 6 and 7);
• In order to study the sign of the solution to (1), we need to obtain similar results for problem (2) when 0 < s < T, s ∈ (n, n + 1), for some n ∈ Z+ (see Theorem 8). Remarks 6-8 provide some translations of the sufficient conditions given, by using the technical lemmas in the appendices or by direct manipulation;
• Finally, Theorems 9 and 10 provide sufficient conditions for the unique solution to problem (1) to have a constant sign on J.
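The scalar-product condition on the region S described above admits a simple finite reduction: since S is the convex cone generated by the two extreme rays (1, −M) and (0, 1), a vector (h1, h2) has non-negative scalar product with every element of S exactly when h2 ≥ 0 and h1 − Mh2 ≥ 0 (this is the kind of reduction behind Lemma 4). A minimal sketch of this check, with purely illustrative numbers:

```python
def in_S(v, M):
    """Membership in S = {(x, y) : x >= 0, y >= -M*x}."""
    x, y = v
    return x >= 0 and y >= -M * x

def scalar_product_nonneg_on_S(h1, h2, M):
    """(h1, h2).(x, y) >= 0 for all (x, y) in S  iff  the scalar product
    is >= 0 on the two extreme rays (1, -M) and (0, 1) of the cone S."""
    return h1 - M * h2 >= 0 and h2 >= 0

M = 2.0
assert in_S((1.0, -1.5), M) and not in_S((1.0, -2.5), M)
# (3, 1): 3 - 2*1 = 1 >= 0 and 1 >= 0, so the product is non-negative on all of S
assert scalar_product_nonneg_on_S(3.0, 1.0, M)
# (1, 1): fails on the extreme ray (1, -M), since 1*1 + 1*(-2) = -1 < 0
assert not scalar_product_nonneg_on_S(1.0, 1.0, M)
```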
Our study allows the establishment of some comparison results for boundary value problems related to second-order functional differential equations, enabling the application of some techniques to study the existence of solutions to second-order nonlinear problems.
The paper is organized as follows. In Section 2, the main results on the existence of non-negative (resp., non-positive) solutions to problems (2) and (1) are given. These results are based on some preliminary and technical results that are placed in the appendices for simplicity. In Section 3, some examples are shown to illustrate the applicability of the main results. At the end, Appendices A-C can be found. The first one is devoted to recalling some preliminary results from [20], which are essential to the procedure since they provide the expression of the solution to the problems of interest; the second and third ones focus on the sign properties of three auxiliary functions, namely h1, h2, and g, and their derivatives, which are crucial to decide the sign of the Green's function of problem (1). The proofs of these auxiliary lemmas have been placed in the appendices in order to improve the readability of the work, keeping the focus on the main results.

Main Results
We start by introducing some additional notation that is essential to our procedure. We denote Z+ := N ∪ {0}. Define the functions h1, h2, and g by the respective expressions: In the previous definitions, we denote, for b ≠ 0 and a² > 4b, For s ∈ J, we consider E_s, the class of functions y : R −→ R which satisfy the conditions: (i) y is continuous for t ∈ R; (ii) y′ is continuous for t ∈ R \ {s}, and there exist y′(s−), y′(s+) ∈ R; (iii) y″ exists and is continuous for t ∈ [n, n + 1) \ {s}, n ∈ Z, and there exist y″(s−), y″(s+), y″(n−) ∈ R, for all n ∈ Z.
In condition (iii), y″(z−) and y″(z+) denote, respectively, the one-sided left and right limits of y″ at the point z ∈ J.

Definition 1 (c.f. Definition 1, [20]). For s ∈ J, y : R −→ R is a solution to problem (2) if y ∈ E_s and satisfies the conditions in (2), taking y′(s) = y′(s+) and y″(t) = y″(t+), for all t ∈ Z ∪ {s}.

Theorem 1.
Assume that condition: holds, that is, the matrix: For s = 0 or s = T, the unique solution to problem (2) is non-negative if the following conditions hold: (iii) The elements in the second column of the matrix

Proof. It is obvious from the expression of the unique solution given in Theorem 3.1 of [20]:

• If a = b = 0: d ≤ 0 and c ≤ 1;
• One of the following conditions holds: and one of the following conditions holds:
* √(b − a²/4) ≤ π/2 and a/2 + c ≤ 0; or
* √(b − a²/4) < π/2 and 0 < a/2 + c

Theorem 3. Theorems 1 and 2 are still valid if we replace condition (ii) by the more general condition:

Remark 2.
Taking into account Theorem 3, it is possible to improve the conditions given in Remark 1 in the last case. According to Lemma A6, conditions (i) and (ii) are fulfilled for the case b ≠ 0, a² < 4b, if: one of the following conditions holds: ≤ π, and one of the following conditions holds: , and a/2 + c > 0, provided that B ≥ 0. In the last two options, we consider: where, Note that: In Figure 1, we illustrate how the last cases include the consideration of different situations not covered by the estimates in Remark 1. Indeed, taking R = 3π/2, a = 7, b = R² + a²/4, and c = −2.5, it is satisfied that b ≠ 0, a² < 4b, a/2 + c > 0, and B ≥ 0, and the function h2 is non-negative on (0, 1), with h2(1) > 0.
Proof. We use that, for A = (r1 r2; r3 r4) with det A = r1 r4 − r2 r3 ≠ 0, the first row of A⁻¹ is (r4/det A, −r2/det A), and the second row of A⁻¹ is (−r3/det A, r1/det A).
Thus, the elements in the second column of A⁻¹ are given by: has the first component less than or equal to 1 and the second component non-negative. However, in the context of Theorem 1, due to the validity of (i) and (ii) (or, more generally, of (i) and the condition in Theorem 3), it is clear that the first part in (5) holds. Assuming also that the first component in (7) is less than or equal to 1, then the opposite inequality to (6) holds. Thus, by Lemma 2, these two conditions imply the validity of (iii).
On the other hand, if det(I − H(T − [T]) C^[T]) > 0, (iii) holds if the first component in (7) is greater than or equal to 1 and the second component in (7) is null.
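The manipulations above rest on the closed form of the inverse of a 2 × 2 matrix: the second column of A⁻¹ is (−r2/det A, r1/det A)ᵀ. A quick numerical sanity check of this formula (the matrix entries are illustrative only):

```python
def inv2(r1, r2, r3, r4):
    """Closed-form inverse of the 2x2 matrix A = [[r1, r2], [r3, r4]]."""
    det = r1 * r4 - r2 * r3
    return [[r4 / det, -r2 / det],
            [-r3 / det, r1 / det]]

A = [[2.0, -1.0], [1.0, 3.0]]           # det A = 2*3 - (-1)*1 = 7
Ainv = inv2(2.0, -1.0, 1.0, 3.0)
second_col = (Ainv[0][1], Ainv[1][1])   # (-r2/det, r1/det) = (1/7, 2/7)
assert abs(second_col[0] - 1.0 / 7.0) < 1e-15
assert abs(second_col[1] - 2.0 / 7.0) < 1e-15
# A @ Ainv must be the identity matrix
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```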
Next, we study the sign of the solution to problem (2) for 0 < s < T with s ∈ Z+.

Theorem 4. Assume that condition (4) holds. For 0 < s < T, s ∈ Z+, the unique solution to problem (2) is non-negative if the following conditions hold: (i) in Theorem 1, (ii) in Theorem 3 (see Remarks 1 and 2), and (iii) The elements in the second column of the matrix:

Proof. It is deduced from the expression of the unique solution to (2) given in Theorem A4 (Theorem 3.1 of [20]): Note that V0 ≥ 0 (non-negative components) implies that C_s V0 + (0, 1)ᵀ ≥ 0, that is, this vector also has non-negative components.
Lemma 3. Assume one of the following hypotheses: has non-negative components.

In particular, this last condition is valid if

> 0 and the following numbers are non-negative:

Proof. Under these particular conditions, and using the same procedure as in the proof of Lemma 1, we prove that [I − H(T − [T]) C^[T]]⁻¹ has non-negative components, since, obviously,

Theorem 5. Assume that condition (4) holds. For 0 < s < T, s ∈ Z+, the unique solution to problem (2) is non-positive if the following conditions hold: (i) in Theorem 1, (ii) in Theorem 3 (see Remarks 1 and 2), and (iv) The elements of V0, the second column of the matrix: are non-positive, and the elements of C_s V0 + (0, 1)ᵀ are non-positive.
Next, we give some extensions of Theorems 1 and 2 by virtue of the following lemma.
Theorem 6. Assume that condition (4) holds. For s = 0 or s = T, the unique solution to problem (2) is non-negative if the following conditions hold: The vector V0 given by:

Theorem 7. Assume that condition (4) holds. For s = 0 or s = T, the unique solution to problem (2) is non-positive if the following conditions hold:

Similar considerations can be made for Theorems 4 and 5. We omit them since they are not required for the study of the sign of the nonhomogeneous Equation (1).
> 0, and one of the following conditions holds: * > 0, and one of the following conditions holds: , and , and one of the following conditions holds: ≥ 0, and one of the following restrictions holds: > 0, and one of the following restrictions holds: R ∈ (π, 3π/2) and a/2 + c > R cot(R) > 0; or R = 3π/2 and a/2 + c > 0. * a/2 + c = 0 and R ≤ π/2.
Remark 7. According to Lemmas A7-A11, Condition (III) in Theorem 8 is satisfied if one of the following conditions holds:

Remark 8. If T ∈ (0, 1), Conditions (IV) and (V) in Theorem 8 reduce to V0 ∈ S. If T ≥ 1, then several successive iterates by C of the vectors in (IV) and (V) should belong to the set S.
To show the first iteration, suppose that h1(1) > 0, h2 > 0 on (0, 1), and h2′(1) + M h2(1) > 0. Let V = (x, y) ∈ S. Then V is such that CV ∈ S if and only if: so that the first inequality in (9) holds. On the other hand, if the second inequality in (9) holds. Hence, the interesting set for one iteration is: If we define the (finite) sequence (assuming that the denominators are positive): the interesting set for iterations up to order l is:

If we are interested in analyzing the sign of the solution to the second-order functional differential equation with piecewise constant arguments (1), which exists and is unique under the appropriate hypotheses, we need to determine the sign of K(t, 0) and K(t, s) on [0, T] × [0, T], except at most on a set of measure zero. This is a consequence of the fact that the expression of K(t, s), the solution to problem (2), represents the Green's function for the solution to problem (1), as established in Theorem 3.2 of [20].
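The iterated condition of Remark 8, namely that V0, C V0, …, C^l V0 all remain in S, can be checked mechanically. In the sketch below the matrix C and the vector V0 are hypothetical placeholders (in the paper, C is built from h1, h2, and their derivatives); only the test itself is illustrated:

```python
def in_S(v, M):
    """Membership in S = {(x, y) : x >= 0, y >= -M*x}."""
    x, y = v
    return x >= 0 and y >= -M * x

def iterates_stay_in_S(C, v, M, l):
    """True iff v, Cv, ..., C^l v all belong to S (cf. Remark 8)."""
    x, y = v
    for _ in range(l + 1):
        if not in_S((x, y), M):
            return False
        x, y = C[0][0] * x + C[0][1] * y, C[1][0] * x + C[1][1] * y
    return True

# hypothetical matrix and starting vector, for illustration only
C = [[1.0, 0.5], [-0.3, 0.8]]
V0 = (1.0, 0.2)
assert iterates_stay_in_S(C, V0, M=2.0, l=4)       # the first five vectors stay in S
assert not iterates_stay_in_S(C, V0, M=2.0, l=5)   # the sixth iterate leaves S
```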
Theorem 9. Suppose that hypothesis (4) holds. If σ ∈ Λ is non-negative on J, λ ≥ 0, conditions (I) and (II) in Theorem 6 hold, and conditions (III), (IV), and (V) in Theorem 8 hold [it is assumed that (IV) and (V) are fulfilled for all 0 < s < T, with s ∈ (n, n + 1), for some n ∈ Z+], then the unique solution to problem (1) is non-negative on J.
Proof. This result is obtained by applying Theorems 6 and 8, and using that the unique solution to problem (1) is, according to Theorem A5 (Theorem 3.2 of [20]), given by: where, for all s ∈ J, K(·, s) is the unique solution to problem (2).

Theorem 10. Suppose that hypothesis (4) holds. Assume also that conditions (I) and (II) in Theorem 6 hold and conditions (III), (IV), and (V) in Theorem 8 hold [it is assumed that (IV) and (V) are fulfilled for all 0 < s < T, with s ∈ (n, n + 1), for some n ∈ Z+]. If σ ∈ Λ is non-positive on J and λ ≤ 0, then the unique solution to problem (1) is non-positive on J.
Remark 9. Note that we only need to impose Condition (V) in Theorems 9 and 10 if T > n + 1, since the value of K(T, s) is not required.

Examples
Finally, we present some examples to illustrate the main results.

Example 1. Consider b = 0, a ≠ 0, and 0 < T < 1. We check that the conditions in Theorem 1 hold. Indeed, according to Remark 1, conditions (i) and (ii) are true for d ≤ 0 and one of the following hypotheses: For condition (iii), using that [T] = 0, the matrix: Note that det(I − H(T)) = d[T − (1 − e^{−aT})/a] = 0 if and only if d = 0, due to a ≠ 0. Suppose that d ≠ 0. According to Lemma 1, for the validity of (5) and (6), we only have to check that: and Indeed, we have to impose that: We take into account that the function ν(z) = z − (1 − e^{−z}) is positive for z ≠ 0, because ν(0) = 0 and ν′(z) = 1 − e^{−z} has the sign of z, so that ν attains its strict global minimum at 0. Then ν(aT) > 0, since a ≠ 0, and hence the condition we must assume for the validity of (12) is: which is trivially true independently of the value of a ≠ 0. Thus, condition (12) is trivially satisfied. On the other hand, for the validity of (10), we must assume that: This proves that, under condition (13), hypotheses (5) and (6) hold. Therefore, after analyzing the cases that lead to incompatible conditions, it is possible to affirm that, if a < 0, b = 0, c = 0, d < 0, and 0 < T < 1, then the unique solution to problem (2) is non-negative, for s = 0 or s = T.
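The positivity of ν used above is easy to confirm numerically; the sketch below simply evaluates ν(z) = z − (1 − e^{−z}) on both sides of the origin:

```python
import math

def nu(z):
    """nu(z) = z - (1 - exp(-z)): nu(0) = 0, and nu is strictly convex
    with a critical point at z = 0, so nu(z) > 0 whenever z != 0."""
    return z - (1.0 - math.exp(-z))

assert nu(0.0) == 0.0
assert all(nu(z) > 0 for z in (-5.0, -1.0, -1e-3, 1e-3, 1.0, 5.0))
```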
Condition (4) is satisfied due to d ≠ 0 (see Example 1). Condition (I) is fulfilled (see Remark 6), since > 0 is valid by the choice of the constants. On the other hand, condition (III) is trivially satisfied by virtue of Remark 7, since T ∈ (0, 1) and by the choice of the constants.
Note that it is not true that all the elements of the matrix are non-negative, since its element (2, 1) is negative: −1/T < 0. Next, we check condition (II). Consider V0 given by: , and check that V0 ∈ S. In this case, Let ϕ(z) := (a − dz)/(1 − e^{−az}), z ∈ (0, 1). The sign of ϕ coincides with the sign of the function ψ, defined on [0, 1] by ψ(z) = d − a² + adz − d e^{az}. The derivative ψ′(z) = da(1 − e^{az}) is negative and ψ(0) = −a² < 0. Hence ψ < 0 on [0, 1] and ϕ < 0 on (0, 1). In consequence, Hence, V0 ∈ S, since 1/(dT) > 0 and Indeed, this inequality is equivalent to: and also to: which is trivially valid, since the left-hand side is negative and the right-hand side is positive.
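The argument for ψ can be replayed numerically. The constants below are hypothetical (the example's actual a and d are not repeated here), chosen only so that ψ′(z) = da(1 − e^{az}) is negative for z > 0:

```python
import math

a, d = 1.0, 1.0   # hypothetical values with d*a*(1 - e^{a z}) < 0 for z > 0

def psi(z):
    """psi(z) = d - a^2 + a*d*z - d*exp(a*z), as in the example above."""
    return d - a**2 + a * d * z - d * math.exp(a * z)

assert psi(0.0) == -a**2                             # psi(0) = -a^2 < 0
assert all(psi(k / 100.0) < 0 for k in range(101))   # psi < 0 on [0, 1]
```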
On the other hand, we check (IV), proving that V0, given by: belongs to S, for every s ∈ (0, T). Indeed, V0 ∈ S, since 1/(dT) > 0 and which is rewritten as: , which is trivially valid, since the left-hand side is negative and the right-hand side is positive. Therefore, if σ ∈ Λ is non-negative on J and λ ≥ 0, then the unique solution to problem (1) is non-negative on J.

Acknowledgments:
We thank the editor and the anonymous referees for their interesting comments and suggestions.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Some Preliminary Results
For the proof of the results in this appendix, see [20]. Note that, for σ ≡ 0 and λ = 0, the trivial function satisfies the conditions in (1).
Lemma A1 (c.f. Lemma 1, [20]). The solutions of the initial value problem: where v_n, v′_n ∈ R, are obtained in terms of a and b as: where h1 and h2 are defined at the beginning of Section 2.
Note that a necessary (but not sufficient) condition for the validity of (4) is b + d ≠ 0.
If problem (2) has a unique solution, for s ∈ J fixed, we denote it by K(t, s), the value of the solution for problem (2) at the point t.
Recall that function g is defined in (3).
Theorem A4 (c.f. Theorem 3.1, [20]). For s ∈ J fixed, problem (2) has a unique solution if and only if condition (4) holds, that is, the matrix: In such a case, the solution is given on J by the following expression: • If s = 0 or s = T, where, • If 0 < s < T, s ∈ (n, n + 1), for some n ∈ Z+, where, for T < n + 1, and, for n + 1 ≤ T,

Theorem A5 (c.f. Theorem 3.2, [20]). If hypothesis (4) holds, problem (1) has a unique solution, for all σ ∈ Λ (see Definition 2) and λ ∈ R, which can be obtained by the expression: where, for all s ∈ J, K(·, s) is the unique solution to (2).
(1 − e^{−as}), we obtain: hence the sign of h2 is the sign of ψ.
Using that: we obtain that: The conclusion is obtained taking into account that α > β, thus e^{αs} − e^{βs} > 0, for every s, and β − α < 0.
Proof. Let R := √(b − a²/4). By Lemma A1, since: and 4R² + a² = 4b, we have: Note that, in this case, 0 ≤ a² < 4b, thus b > 0. The study of the sign of h1 is reduced to the study of the sign of b + d and of the function s → sin(Rs). The assertions concerning h1 are, therefore, valid.
Notice that we consider R = √(b − a²/4) ≤ π in order to keep the sign of sin(√(b − a²/4) s) positive on (0, 1). Now, we consider: and denote again R := √(b − a²/4); thus, the derivative of h2 is the function: To study the sign of h2′, we analyze the sign of ϕ.
Taking into account that, in this case, then ϕ > 0 on [0, s1) and ϕ < 0 on (s1, 1); hence, the same behavior follows for h2′, and thus h2′ has a change of sign in (0, 1).
For √(b − a²/4) ≤ 3π/2, only s0 and s1 may belong to (0, 1). The value of R := √(b − a²/4) defined in the proof of Lemma A6 can be arbitrarily large, in such a way that there could be many zeros of h2′ in (0, 1). In this sense, the following remarks are of interest, where it is denoted: , and ϕ(s) := cos(Rs) − N sin(Rs).
On the other hand, for k ≥ 1, s_k > (2k − 1)π/(2R) ≥ π/(2R) > 0. In consequence, for the validity of s_k < 1 it is necessary that R > (2k − 1)π/2. Besides, the condition s_k < 1 is satisfied if arctan(1/N) + kπ < R, that is, We distinguish two cases:

Case 1: N > 0. In this case, arctan(1/N) > 0, and we are interested in the odd values of k. Hence, necessarily, R − kπ > 0 (k < R/π), and we distinguish again two subcases:

Subcase 1.1: If R − kπ ≥ π/2, (A14) is trivially satisfied, and the values of k are those with R − π/2 ≥ kπ, that is, which is written as Summarizing, for N > 0 (a/2 + c > 0), the admissible values are s_k with 1 ≤ k ≤ R/π − 1/2, or, in the case a/2 + c > R cot(R) > 0, the values of k ≥ 1 such that R/π − 1/2 < k < R/π, that is, R ∈ (kπ, (2k + 1)π/2). As indicated, to study the sign of h2, we are interested in the values of this function at the corresponding points s_k in (0, 1) with k odd.

Case 2: N < 0 (a/2 + c < 0). In this case, the values of k of interest for the study of the sign of h2 are even (and positive, since s0 < 0). We check that the values of k with s_k ∈ (0, 1) are:
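The candidate points can be enumerated directly: from the inequality arctan(1/N) + kπ < R used above, one reads off s_k = (arctan(1/N) + kπ)/R, the zeros of ϕ(s) = cos(Rs) − N sin(Rs). A sketch with illustrative values of N and R:

```python
import math

def s_k_in_unit_interval(N, R):
    """Zeros s_k = (arctan(1/N) + k*pi)/R of phi(s) = cos(Rs) - N*sin(Rs)
    lying in (0, 1), returned as (k, s_k) pairs."""
    out, k = [], 0
    while True:
        s = (math.atan(1.0 / N) + k * math.pi) / R
        if s >= 1.0:
            return out
        if s > 0.0:
            out.append((k, s))
        k += 1

N, R = 2.0, 1.5 * math.pi
for k, s in s_k_in_unit_interval(N, R):
    # each s_k is indeed a zero of phi
    assert abs(math.cos(R * s) - N * math.sin(R * s)) < 1e-12
# for these parameters only k = 0 and k = 1 give points in (0, 1)
assert [k for k, _ in s_k_in_unit_interval(N, R)] == [0, 1]
```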
Remark A3. In the case b ≠ 0, a² < 4b, note that h2′(s) = e^{−as/2} ϕ(s), for every s, where the function ϕ given in the proof of Lemma A6 is 2π/R-periodic. However, h2′ is not periodic if a ≠ 0. In fact, for a > 0, the oscillations of h2′ are damped and, for a < 0, the oscillations have increasing amplitude. For instance, in Figure A3, we show the graph of the function h2′(x) = e^{−ax/2} ϕ(x), for a = 3, R = 1, and N = 2. We obtain h2(s) = h2(0) + ; see Figure A3 for the corresponding regions in the particular case where a = 3, R = 1, and N = 2. Taking µ(s) = e^{−as/2} sin(Rs) − c ∫_0^s e^{−ar/2} sin(Rr) dr, and proceeding similarly to Remark A2, conditions (A12) and (A13) can be written with this formulation in the following equivalent way: e^{−a s_k/2} sin(R s_k) ≥ c ∫_0^{s_k} e^{−ar/2} sin(Rr) dr, for every k odd with s_k ∈ (0, 1), if N > 0, and e^{−a s_k/2} sin(R s_k) ≥ c ∫_0^{s_k} e^{−ar/2} sin(Rr) dr, for every k even with s_k ∈ (0, 1), if N < 0.
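The damping described in Remark A3 can be observed numerically: with a = 3, R = 1, N = 2 (the same illustrative values as in Figure A3), the amplitude of e^{−ax/2}(cos Rx − N sin Rx) over one period of ϕ dominates its amplitude over the next period:

```python
import math

a, R, N = 3.0, 1.0, 2.0

def f(x):
    """e^{-a x/2} * (cos(Rx) - N*sin(Rx)), the function of Remark A3."""
    return math.exp(-a * x / 2.0) * (math.cos(R * x) - N * math.sin(R * x))

period = 2.0 * math.pi / R
grid = [k * period / 1000.0 for k in range(1001)]
amp_first = max(abs(f(x)) for x in grid)              # over [0, 2*pi/R]
amp_second = max(abs(f(x + period)) for x in grid)    # over [2*pi/R, 4*pi/R]
assert amp_second < amp_first   # for a > 0 the oscillations are damped
```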

Proof.
Obvious from the expression of g, g(z) = z.