The Sign of the Green Function of an n-th Order Linear Boundary Value Problem

This paper provides results on the sign of the Green function (and its partial derivatives) of an n-th order boundary value problem subject to a wide set of homogeneous two-point boundary conditions. The dependence of the absolute value of the Green function and some of its partial derivatives on the extremes where the boundary conditions are set is also assessed.


Introduction
Let J be a compact interval in R and let us consider the real disfocal differential operator L : C^n(J) → C(J) defined by Ly = y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + · · · + a_0(x) y(x), x ∈ J, where a_j(x) ∈ C(J), 0 ≤ j ≤ n − 1. Following Eloe and Ridenhour [1], let Ω_l be the set whose members are collections of l different ordered integer indices i such that 0 ≤ i ≤ n − 1, let k ∈ N be such that 1 ≤ k ≤ n − 1, let α ∈ Ω_k be the set {α_1, . . . , α_k} and β ∈ Ω_{n−k} be the set {β_1, . . . , β_{n−k}}, both associated to the homogeneous boundary conditions y^(α_i)(a) = 0, α_i ∈ α, and y^(β_i)(b) = 0, β_i ∈ β, where [a, b] ⊂ J. Throughout this paper we will impose the condition that, for any integer m such that 1 ≤ m ≤ n, at least m terms of the sequence α_1, . . . , α_k, β_1, . . . , β_{n−k} are less than m. Due to their resemblance to the conditions defined by Butler and Erbe in [2], we will call them admissible boundary conditions (note that (2) and (3) are not exactly the same boundary conditions defined by Butler and Erbe, since the latter applied to the so-called quasiderivatives of y(x) and not to its derivatives). In particular, if for every integer m such that 1 ≤ m ≤ p + 1, exactly m terms of the sequence α_1, . . . , α_k, β_1, . . . , β_{n−k} are less than m, we will say that the boundary conditions are p-alternate. In the case p = n − 1 we will call the boundary conditions strongly admissible. The admissible conditions cover well-known cases like conjugate boundary conditions (α_1 = 0, α_2 = 1, . . . , α_k = k − 1 and β_1 = 0, β_2 = 1, . . . , β_{n−k} = n − k − 1), focal boundary conditions (right focal with α_1 = 0, α_2 = 1, . . . , α_k = k − 1 and β_1 = k, β_2 = k + 1, . . . , β_{n−k} = n − 1, or left focal with α_1 = n − k, α_2 = n − k + 1, . . . , α_k = n − 1 and β_1 = 0, β_2 = 1, . . . , β_{n−k} = n − k − 1) and many others. The focal boundary conditions are also strongly admissible (or (n − 1)-alternate).
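The admissibility and p-alternate conditions above are purely combinatorial, so they can be checked mechanically. The following minimal Python sketch (the function names are ours, not the paper's) tests whether a pair (α, β) is admissible and computes the largest p for which the conditions are p-alternate, returning −1 when they are not even 0-alternate:

```python
def count_less(indices, m):
    # number of terms of the combined sequence that are less than m
    return sum(1 for i in indices if i < m)

def is_admissible(alpha, beta, n):
    # at least m terms of alpha_1..alpha_k, beta_1..beta_{n-k} below m, for all m
    seq = tuple(alpha) + tuple(beta)
    return all(count_less(seq, m) >= m for m in range(1, n + 1))

def alternate_order(alpha, beta, n):
    # largest p such that exactly m terms are below m for every m <= p + 1
    seq = tuple(alpha) + tuple(beta)
    p = -1
    for m in range(1, n + 1):
        if count_less(seq, m) == m:
            p = m - 1
        else:
            break
    return p
```

For n = 4, conjugate conditions α = {0, 1}, β = {0, 1} come out admissible but not alternate, while right focal conditions α = {0, 1}, β = {2, 3} come out 3-alternate, i.e., strongly admissible, matching the examples in the text.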
The purpose of this paper is to provide results on the sign of G(x, t), the Green function associated to the problem (4), as well as of some of its partial derivatives with respect to x, both in the interval (a, b) and at the extremes a and b. We will also analyze the dependence of the absolute value of G(x, t) and its derivatives on the extremes a and b. In this sense, this paper represents an extension of the work by Eloe and Ridenhour [1], which in turn extended previous results from Peterson [3,4], Elias [5] and Peterson and Ridenhour [6]. Note that the disfocality of L on [a, b], according to Nehari [7], implies that y(x) ≡ 0 is the only solution of Ly = 0 satisfying y^(i)(x_i) = 0, i = 0, 1, 2, . . . , n − 1, with x_i ∈ [a, b], and also guarantees the existence of the Green function of (4).
It is well known (see for instance [8], Chapter 3) that problems of the type (5), that is, Ly = f subject to the boundary conditions (2) and (3), with f ∈ C[a, b] being an input function, have a solution given by y(x) = ∫_a^b G(x, t) f(t) dt. Therefore, the knowledge of the sign of G(x, t) and its derivatives can provide information on the sign of the solution y(x) and of these same derivatives, at least when f does not change sign on (a, b). This was already used by Eloe and Ridenhour in [1] to show that a clamped beam is stiffer than a simply supported beam. Likewise, the evolution of G(x, t) as a or b vary can also provide insights on the dependence of the value of y(x) on these extremes, and can allow comparing the effect of a longer separation of the extremes when the same input function f is applied to a system modeled by (5).
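For the simplest second-order instance, Ly = y″ with conjugate conditions y(0) = y(1) = 0 on [0, 1], the Green function is known in closed form, G(x, t) = x(t − 1) for x ≤ t and t(x − 1) for x ≥ t, and it is negative on (0, 1)². The sketch below (our own illustration, not taken from the paper) evaluates y(x) = ∫₀¹ G(x, t) f(t) dt by a midpoint rule and recovers the sign prediction: a nonnegative f yields a nonpositive y:

```python
def green_conjugate(x, t):
    # Green function of y'' = f, y(0) = y(1) = 0; negative on (0,1)^2
    return x * (t - 1.0) if x <= t else t * (x - 1.0)

def solve(f, x, m=2000):
    # y(x) = integral_0^1 G(x,t) f(t) dt via the composite midpoint rule
    h = 1.0 / m
    return sum(green_conjugate(x, (j + 0.5) * h) * f((j + 0.5) * h)
               for j in range(m)) * h

f = lambda t: 1.0        # constant, nonnegative input function
y_mid = solve(f, 0.5)    # exact solution of y'' = 1 is y = x(x-1)/2, so about -0.125
```

Here f ≥ 0 and G < 0 on (0, 1)², so the computed solution is nonpositive, as the sign argument in the text predicts.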
The knowledge about the sign of G(x, t) is also useful to find information about the eigenvalues and eigenfunctions of the general problem Ly = λ ∑_{l=0}^{µ} c_l(x) y^(l)(x), x ∈ (a, b), y^(α_i)(a) = 0, α_i ∈ α; y^(β_i)(b) = 0, β_i ∈ β, (6) with µ ≤ n − 1 and c_l(x) ∈ C(J) for 0 ≤ l ≤ µ. These problems are tackled by converting them into the equivalent integral problem (7), where M is the integral operator defined through the Green function G(x, t). If the partial derivative of G(x, t) of the highest order whose sign is constant on (a, b) is not lower than µ, it is possible to define a cone P associated to that partial derivative such that MP ⊂ P and, with the help of the cone theory elaborated by Krein and Rutman [9] and Krasnosel'skii [10], prove that there exists a solution of (7) associated to the smallest eigenvalue λ. Moreover, it is possible to determine some properties of λ and even compare the values of λ for different boundary conditions.
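As an illustration of why the sign of G matters here, consider the model case −y″ = λy, y(0) = y(1) = 0, whose Green function G(x, t) = x(1 − t) for x ≤ t (and t(1 − x) for x ≥ t) is positive on (0, 1)². Positivity makes the cone of nonnegative functions invariant under the integral operator M, so power iteration on a discretized M converges to the principal eigenpair, and the smallest λ of the differential problem is the reciprocal of the largest eigenvalue of M (here π²). This is a numerical sketch of ours, with arbitrary discretization and iteration counts, not a construction from the paper:

```python
import math

def green_neg(x, t):
    # Green function of -y'' = f, y(0) = y(1) = 0; positive on (0,1)^2
    return x * (1.0 - t) if x <= t else t * (1.0 - x)

m = 100
h = 1.0 / m
xs = [(j + 0.5) * h for j in range(m)]
# Nystrom (midpoint) discretization of (My)(x) = integral_0^1 G(x,t) y(t) dt
K = [[green_neg(x, t) * h for t in xs] for x in xs]

y = [1.0] * m                    # any element of the positive cone
for _ in range(60):              # power iteration: positivity keeps y >= 0
    y = [sum(K[i][j] * y[j] for j in range(m)) for i in range(m)]
    mu = max(y)                  # eigenvalue estimate in the sup norm
    y = [v / mu for v in y]

smallest_lambda = 1.0 / mu       # approximately pi**2 for this model problem
```

The iterates stay in the cone precisely because the kernel has constant sign, which is the mechanism behind the condition MP ⊂ P invoked in the text.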
The non-linear version of (6), namely problem (9), subject to different homogeneous, mixed or integral boundary conditions (see for instance [18,19]), is also usually addressed by converting it into an equivalent integral problem. In most of these problems, the information about the sign of the Green function is relevant to apply other tools (fixed-point theorems, the upper and lower solutions method, fixed-point index theory, etc.) to determine the existence of a solution. In some of them, the knowledge of the sign of the partial derivatives can help to achieve the same goal ([18,20,21]).
As for physical applicability, problems of the types (5), (6) and (9) appear in many situations, like the study of the deflections of beams, both straight ones with non-homogeneous cross-sections in free vibration (which are subject to the fourth-order linear Euler–Bernoulli equation) and curved ones with different shapes. An account of these and other applications can be found in [22], Chapter IV.
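The beam comparison mentioned above can be checked directly in the uniform-load case. For EI y⁗ = w on [0, 1] with w = EI = 1, the classical closed-form deflections are y(x) = (x⁴ − 2x³ + x)/24 for a simply supported beam (y = y″ = 0 at both ends) and y(x) = x²(1 − x)²/24 for a clamped beam (y = y′ = 0 at both ends); their difference is x(1 − x)/24 ≥ 0, so the clamped beam deflects less everywhere. A small sketch using these standard textbook formulas (not code from the paper):

```python
def y_ss(x):
    # simply supported beam: y'''' = 1, y(0) = y(1) = 0, y''(0) = y''(1) = 0
    return (x**4 - 2 * x**3 + x) / 24.0

def y_cl(x):
    # clamped beam: y'''' = 1, y(0) = y(1) = 0, y'(0) = y'(1) = 0
    return x**2 * (1 - x)**2 / 24.0

# midspan deflections are 5/384 and 1/384: the clamped beam is five times
# stiffer at midspan under the same uniform load
ratio = y_ss(0.5) / y_cl(0.5)
```

Both deflections come from the same input f ≡ 1; only the boundary conditions (and hence the Green function) differ, which is exactly the kind of comparison the sign results enable.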
Throughout the paper we will use the notations G(α, β, x, t) and G_{a,b}(x, t) (and further G_{a,b}(α, β, x, t)) when we want to highlight the dependence of the Green function of (4) on the boundary conditions (α, β) and on the extremes a, b, respectively. That will be particularly useful when we manipulate Green functions subject to different boundary conditions or different extremes. We will denote by H(x, t) and I(x, t) the partial derivatives of G(α, β, x, t) with respect to the extremes b and a, respectively, that is, H(x, t) = ∂G_{a,b}(α, β, x, t)/∂b and I(x, t) = ∂G_{a,b}(α, β, x, t)/∂a. We will say that a, b are interior to A, B if A ≤ a < b ≤ B and A < a or b < B. We will use the expression card{D} to denote the number of elements (or cardinality) of the set D.
Likewise, if we assume that y is a function with an (n − 1)-th derivative in [a, b], we will make use of the following nomenclature associated to (α, β):
• K(α, β) is the minimum derivative order of y(x) for which the boundary conditions (α, β) specify that y^(i)(a) = 0 or y^(i)(b) = 0 for i = K(α, β) + 1, . . . , n − 1, with K(α, β) = n − 1 if both y^(n−1)(a) = 0 and y^(n−1)(b) = 0.
• m(α, i) is the number of derivatives of y of order equal to or higher than i which the boundary conditions α do not specify to be zero at a.
• n(β, i) is the number of derivatives of y of order higher than i which the boundary conditions β do specify to be zero at b.
• S(α) is the sum of all indices of α. Likewise, S(β) is the sum of all indices of β.
Note that if β_B ∉ α then the boundary conditions are p-alternate with p > β_B, whereas if β_B ∈ α and β_B > 0 then the boundary conditions are (β_B − 1)-alternate.
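These combinatorial quantities are easy to compute. The sketch below (function names are ours) implements K(α, β), m(α, i), n(β, i) and S as defined above; the value returned when every order 0, . . . , n − 1 is pinned at some extreme is our own reading of the definition, flagged as such in the code:

```python
def K_of(alpha, beta, n):
    # minimum K such that every order K+1, ..., n-1 is specified zero at a or b;
    # by the paper's convention, K = n-1 when y^(n-1) vanishes at both extremes
    if (n - 1) in alpha and (n - 1) in beta:
        return n - 1
    pinned = set(alpha) | set(beta)
    free = [i for i in range(n) if i not in pinned]
    return max(free) if free else -1   # -1 when every order is pinned (our reading)

def m_of(alpha, i, n):
    # derivatives of order >= i NOT required to vanish at a
    return sum(1 for j in range(i, n) if j not in alpha)

def n_of(beta, i):
    # derivatives of order > i required to vanish at b
    return sum(1 for j in beta if j > i)

def S(indices):
    # sum of all indices of the condition set
    return sum(indices)
```

For instance, with n = 4, the conditions α = {0, 1}, β = {2} (n − 1 conditions, as used for the auxiliary function g_1 later in the paper) give K = 3 = n − 1, consistent with the claim K(α, β′) = n − 1 in the proof of Theorem 3.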
As for the organization of the paper, Section 2 provides its main results. Concretely, in Section 2.1 we will tackle the general case of admissible boundary conditions, in Section 2.2 we will prove some additional results associated to p-alternate boundary conditions, and in Section 2.3 we will cover the strongly admissible boundary conditions. Finally, in Section 3 we will draw some conclusions.

The Sign of the Green Function and Its Derivatives in the Admissible Case
In this subsection, we will prove some basic results concerning the sign of the Green function of the problem (4) and its derivatives, as well as comparisons of their absolute values when the extremes a and b vary. To this end, it is convenient to recall a couple of results from Eloe and Ridenhour, which we will state (slightly modified to use our notation) for completeness.
If β_1 = 0, then for i = 0, . . . , α_1, the analogous sign results hold. These theorems, although of considerable scope, unfortunately do not yield information on the sign of all the partial derivatives of G(x, t) at the extremes a and b, knowledge of which is necessary for the application of cone theory to the eigenvalue problem (6) mentioned in the Introduction, as well as for the analysis of the strongly admissible case (see Section 2.3). Likewise, they do not cover the dependence of G(x, t) on the extremes a and b when either α_k or β_{n−k} is equal to n − 1. These shortcomings, and the lack of explicit proofs of these theorems in [1] (the reader is left to obtain them following the techniques devised by the authors in previous sections of that paper), lead us to dedicate this subsection to reproducing what we suppose were the steps used by Eloe and Ridenhour to obtain Theorems 1 and 2, as well as to proving the missing results (see Remark 2 for some examples of the latter).

Lemma 1.
Let us assume that L is disfocal on [a, b] and that y(x) is a nontrivial solution of Ly = 0 which satisfies the n − 1 homogeneous boundary conditions (16). Let us also assume that the hypothesis (17) holds. Then y(x) is essentially unique (up to a multiplicative constant) and satisfies:
1. Neither y(x) nor any of its derivatives of order lower than K(α, β) + 1 vanish at a or b, apart from the derivatives appearing in (16).
2.
Proof. Following the argumentation of [6], let us denote by l_j, r_j the following values. We will show by induction that the number of zeroes of y^(j)(x) in the interval (a, b) (let us name it z_j(a, b)) is at least l_j + r_j − j. For j = 0 it is straightforward, so let us assume that the hypothesis holds for j − 1. If we consider the possible zeroes of y^(j−1)(x) at a or b, Rolle's theorem provides the corresponding lower bound for z_j(a, b); from the definition of l_j, r_j, this result also bounds the number of zeroes of y^(j)(x) on (a, b). With this in mind, it is immediate to see that the condition (17) translates into (19), whereas the definition of K(α, β) implies (20). The key insight for the rest of the proof is that any additional zero of y^(i)(x) on [a, b] for i = 0, . . . , K(α, β), not forced by the homogeneous boundary conditions nor by Rolle's theorem, will imply, again by Rolle's theorem, that z_{K(α,β)}[a, b] ≥ 1, which together with (19) and (20) gives the n zeroes required by the definition of disfocality. Since L is disfocal on [a, b] by hypothesis, such an additional zero would mean y ≡ 0. This proves properties 1 and 2 (the p-alternate condition guarantees that only one homogeneous boundary condition — at either a or b — is set on each derivative up to the p-th one, so these boundary conditions cannot force, at least via Rolle's theorem, any zeroes in (a, b) in the derivatives up to the (p + 1)-th one), as well as the fact that y is essentially unique up to a multiplicative constant (if there were two essentially different solutions y_1 and y_2, one could create a nontrivial linear combination y_3 of them with an additional zero, which would force y_3 ≡ 0). As for property 3, if i + 1 ≤ K(α, β) then the number of zeroes of y^(i+1)(x) on (a, b) must be finite (otherwise, from Rolle's theorem we would end up with a zero of y^(K(α,β))(x) on (a, b) and the disfocality of L on [a, b] would force y ≡ 0), and there must be an ε > 0 such that y^(i+1)(x) ≠ 0 on (a, a + ε). To prove property 4, let x_i be the first zero of y^(i)(x) in (a, b), which exists by (19).
There cannot be any zeroes of y^(i+1)(x) on (a, x_i) since, by the previous argumentation, this would again imply a zero of y^(K(α,β))(x) on (a, b) and therefore y ≡ 0. The proof of properties 5 and 6 is similar to that of properties 3 and 4, respectively.

Remark 1.
It is important to stress that the results 3–6 of the previous Lemma only apply if i ≤ K(α, β) − 1. If y^(K(α,β))(x) = 0 on [a, b], we cannot deduce anything about the zeroes of the higher derivatives of y(x) on [a, b], as the disfocality condition would already not be met in y^(K(α,β)).

Theorem 3.
Let us assume that the boundary conditions (α, β), with α ∈ Ω_k and β ∈ Ω_{n−k}, are admissible. Then one has (21) and (22). In addition:
2. If …
Proof. Let us note first that the admissibility of the boundary conditions imposes that α_1 = 0 or β_1 = 0.
We will focus initially on the case α_1 = 0, for which we will follow a similar approach to that used in [1], Lemma 2.4. Thus, as a starting point, let us fix t ∈ [a, b] and let us consider the boundary conditions (α, β) with α = {0, . . . , k − 1}, which (as it is straightforward to show) are always admissible regardless of the value of k and β. From [1], Lemma 2.4 one has (22), and from [1], Theorem 2.1 one gets (23). If k < n − 1, we can pick new boundary conditions (α′, β′) with α′ = {0, . . . , k} and β′ = β \ {β_{n−k}} (that is, β′ = {β_1, . . . , β_{n−k−1}}), for which [1], Theorem 2.1 gives again (25). We can build the function g_1(x) = G(α′, β′, x, t) − G(α, β, x, t), which is n-times continuously differentiable (the difference of the Green functions compensates the discontinuity of their (n − 1)-th partial derivatives with respect to x at x = t) and satisfies (26). From (25) and (27) one gets (28). The boundary conditions of g_1 are (α, β′). It is straightforward to prove that K(α, β′) = n − 1 and that g_1 satisfies the hypothesis (17) of Lemma 1 for 1, . . . , n − 1. In consequence, one can apply properties 1 and 4 of Lemma 1 to g_1 and, taking (28) into account, obtain the sign of g_1 and of its derivatives at a. From here and (26) one has the corresponding result for G(α, β, x, t). This argument can be repeated recursively to obtain the general inequality, which is (21). Next, we will proceed by induction over S(α). Thus, let us consider admissible (but not strongly admissible) boundary conditions (α, β) with α ∈ Ω_k and β ∈ Ω_{n−k}, and let us define new conditions (α′, β) by taking α and replacing the homogeneous boundary condition α_i by α_k + 1 (that is, α′ specifies y^(α_k+1)(a) = 0 instead of y^(α_i)(a) = 0). Let us assume that (α′, β) are also admissible.
The function g_2(x) = G(α′, β, x, t) − G(α, β, x, t) is n-times continuously differentiable. Let (α″, β) be the homogeneous boundary conditions satisfied by g_2, with α″ ∈ Ω_{k−1}. We will now prove that the required inequality holds and that g_2 complies with the hypotheses of Lemma 1 for K(α″, β).
If β_{n−k} ≥ α_i, the property is straightforward, as (α″, β) are also admissible.
Let us focus now on the case α_1 > 0, β_1 = 0. For that we will consider the function G′(x, t) = G(b + a − x, b + a − t), which, as one can readily show (see e.g., [8], Chapter 3, page 105), is the Green function of the problem (4) with L replaced by the operator L′ defined by L′y = y^(n)(x) − a_{n−1}(b + a − x) y^(n−1)(x) + · · · + (−1)^n a_0(b + a − x) y(x).
Since β_1 = 0 is a boundary condition applied at a, G′ satisfies the hypotheses of the first part of this theorem. Thus, from (21), (22) and (23) one gets (60), (61) and (62), and these, together with the relationship between G′ and G, finally yield (64) and (65). From (64) and (65) one readily gets (21) and (22), respectively. Next, we will assess the dependence of G(x, t) and some of its partial derivatives on the extremes a and b.
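The reflection trick above can be sanity-checked numerically in the simplest case n = 2, Ly = y″, with conjugate conditions on [0, 1] (so a_1 = a_0 = 0 and L′ = L): since the conjugate conditions are symmetric under the reflection, the Green function must be invariant under (x, t) → (b + a − x, b + a − t). A quick check of ours:

```python
def G(x, t):
    # Green function of y'' = f, y(0) = y(1) = 0
    return x * (t - 1.0) if x <= t else t * (x - 1.0)

# Conjugate conditions are preserved by the reflection x -> 1-x, t -> 1-t,
# so G'(x, t) = G(1-x, 1-t) must coincide with G(x, t) itself.
pts = [(0.1, 0.7), (0.4, 0.2), (0.5, 0.5), (0.9, 0.3)]
invariant = all(abs(G(x, t) - G(1 - x, 1 - t)) < 1e-12 for x, t in pts)
```

For asymmetric conditions (e.g., right focal), the same reflection maps the Green function onto that of the mirrored (left focal) problem instead, which is how the case α_1 > 0, β_1 = 0 is reduced to the already-proved one.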

Lemma 2. For fixed t ∈ [a, b], H(x, t) is the solution of the problem (67).
Likewise, I(x, t) is the solution of the problem (68).
Proof. The proof of (67) follows the same steps as that of [13], Lemma 3.3 with x_1 = a and k = 1, and will not be repeated. The proof of (68) is similar.

Theorem 4.
Let us assume that (α, β) are admissible boundary conditions. If α_1 = 0 and either …, with at least one l ∉ β such that 0 ≤ l ≤ n − 1 and …, then (71) and (72) hold. If α_1 > 0 and either …, then (74) holds.
Proof. Let us suppose first that α_1 = 0. Note that if β_i + 1 ∈ β then h_{β_i}(x, t) ≡ 0, due to the disfocality of L on [a, b]. That implies that we only need to take into account those β_i such that β_i + 1 ∉ β. If β_{n−k} < n − 1, then β_i < n − 1 for 1 ≤ i ≤ n − k and we can apply (22) and (75) to obtain a bound which, combined with properties 2 (as commented at the end of the Introduction, the homogeneous boundary conditions in (75) are at least (β_B − 1)-alternate), 5 and 6 of Lemma 1, and the properties of n(β, j), gives (77) and (78). From (77) and (78), the facts that β_B ≤ β_i and n(β, j) = n − k for j < β_1, and the decomposition of H(x, t) in terms of the h_{β_i}(x, t), one gets (71) and (72).
On the contrary, if β_{n−k} = n − 1, then (77) and (78) hold for all h_{β_i} except for h_{β_{n−k}}, since in that case one needs the sign of ∂^n G(x, t)/∂x^n, which Theorem 3 does not yield. In that case we need to resort to the definition of L. Thus, from (1) and (4) one has the expression of ∂^n G(x, t)/∂x^n in terms of the lower-order derivatives, and from (22) the required sign follows. The proof of (74) in the case α_1 > 0 can be done following the same reasoning.
If α_1 > 0 then …
Proof. The proof is immediate from Theorem 4.

Theorem 5.
Let us assume that (α, β) are admissible boundary conditions. If α_1 = 0 and either … If α_1 > 0 and either …, with at least one l ∉ α such that 0 ≤ l ≤ n − 1 and (−1)^{m(α,l)} a_l(a) < 0, and …
Proof. The proof is similar to that of Theorem 4.
Remark 5. If α_1 > 0 then … In that case the statement (i) of [1], Theorem 3.4 (see (14)) does not seem to be valid for l = β_1 and b_1 = b_2, unless an approach not based on the sign of I and its derivatives was used by the authors to prove that assertion. The lack of an explicit proof of that theorem complicates any further analysis, but one cannot help having the impression that the statement is incorrect. The same comment is applicable to the statement (ii) of [1], Theorem 3.4 in the case α_1 > 0, a_1 = a_2 (see (15)), which seems only valid for l = 0, . . . , β_B and not for l = α_1 when β_B = α_1.

The Case of p-Alternate Boundary Conditions
When the boundary conditions are p-alternate, the lack of simultaneous boundary conditions at a and b for any derivative of order lower than p suggests that there is no need for the immediately higher derivative to change sign on (a, b), at least as a consequence of Rolle's theorem. The following theorem shows that this is, to some extent, the case under certain hypotheses.
Theorem 6. Let us assume that (α, β) are p-alternate admissible boundary conditions.
If α_1 = 0 and either …, with at least one l ∉ β such that 0 ≤ l ≤ n − 1 and …, then (94) and (95) hold and, if β_{n−k}, p > β_B, also (96). If α_1 > 0 and either …, with at least one l ∉ α such that 0 ≤ l ≤ n − 1 and …, then, if α_k, p > α_A, the analogous results hold.
Proof. Let us tackle the case α_1 = 0 first. From Theorem 3, concretely (23), we already know that (94) holds for 0 ≤ j ≤ β_1 (note that n(β, β_1) = n − k − 1). Next, let us assume that x > t. From the definition of H one has (102). G_{a,x}(α, β, x, t) is the Green function of the problem (4) when b = x, so it satisfies the boundary conditions related to β at x, that is, (103). On the other hand, from the hypotheses and Theorem 4 it follows that (104). From (102), (103) and (104) one finally gets (95) for x > t and β_1 ≤ j ≤ β_B. Let us focus now on the case x ≤ t. As before, one has (105). G_{a,t}(α, β, x, t) is the Green function of the problem (4) when b = t, so it satisfies the boundary conditions related to β at t. If n − 1 ∉ β, G_{a,t}(α, β, x, t) is n-times continuously differentiable in (a, t), satisfies LG_{a,t}(α, β, x, t) = 0 for x ∈ (a, t) and n homogeneous boundary conditions at a and b. Since L is disfocal on [a, b], it is also disfocal on [a, t), and therefore G_{a,t}(α, β, x, t) ≡ 0 for x ∈ [a, t). From here, (104) and (105) one gets (95). On the contrary, if n − 1 ∈ β, from the properties of the Green function (see [8], Chapter 3, page 105, property (ii)) it is straightforward to show that G_{a,t}(α, β, x, t) is n-times continuously differentiable on (a, t), satisfies LG_{a,t}(α, β, x, t) = 0 for x ∈ (a, t), n − 1 homogeneous boundary conditions at a and b, and the boundary condition (107). As noted in the Introduction, p ≥ β_B − 1. We can apply properties 2, 5 and 6 of Lemma 1 to (107), as well as the definition of n(β, j), to yield (108). From (104), (105) and (108) one gets (95) for the case x ≤ t.
To address (96), let us note that if both β_{n−k}, p > β_B, then β_B ∉ α, β_B + 1 ∈ α and β_B + 1 ∉ β, due to the definition of β_B and the p-alternate property of the boundary conditions (α, β). In that case we can define the boundary conditions (α, β̂) by adding β_B + 1 to β and removing β_{n−k} from it, that is, β̂ = (β \ {β_{n−k}}) ∪ {β_B + 1}. Then, for fixed t ∈ [a, b], the function g_4(x) = G(α, β̂, x, t) − G(α, β, x, t) is n-times continuously differentiable on [a, b] and satisfies (110). From (22) and (110) it follows that (111) holds. Applying property 2 of Lemma 1 to (111) (note that p ≥ β_B + 1) one has (112). Likewise, applying (95) to G(α, β̂, x, t) (note that β̂_{n−k} < n − 1) one obtains an inequality which is also (114). Combining (112) and (114) one finally gets (96). Note also that m(α, l) = n(β, l). From here, Theorem 3 and the hypotheses (116) and (118), one gets that ∂^n G(x, t)/∂x^n ≥ 0 otherwise.
Next, let us do a similar comparison for the partial derivatives of lower order. If n − 1 ∈ α, from Taylor's theorem there must be a δ > 0 such that (120) holds. Applying Taylor's theorem recursively and taking into account (21), one proves that there exists a δ_1 > 0 such that the signs on (a, a + δ_1) are those given by (124). As for b, (22) already gives the sign at b. Applying again Taylor's theorem recursively and taking into account (22), one has that there must be a δ_2 > 0 such that (123), (125), (126) and (127) hold. From (123) and (125) it is clear that ∂^{n−1} G(x, t)/∂x^{n−1} has the same (positive, in this case) sign on x ∈ (a, a + δ_1) ∪ (b − δ_2, b]. We can prove by induction that this same-sign property is valid for all partial derivatives of lower order, namely, that the signs given by (124), (126) and (127) are the same for each partial derivative. Thus, let us suppose that the sign of the partial derivative of order l + 1 is the same in the neighborhoods of a and b, and is given by (124). If l ∈ β, then by Taylor's theorem the sign of the derivative of order l must be the opposite of the sign of the derivative of order l + 1 in the neighborhood of b.
Likewise, m(α, l) = m(α, l + 1) + 1, so from (124) the sign of the derivative of order l must also be the opposite of the sign of the derivative of order l + 1 in the neighborhood of a. Therefore, the signs of the partial derivatives of order l must coincide in the proximity of a and b. Likewise, if l ∈ α, then by Taylor's theorem the sign of the derivative of order l must be the same as the sign of the derivative of order l + 1 in the neighborhood of a, whereas the sign of the derivative of order l at b is given by (−1)^{n(β,l)}. If l + 1 ∉ β, then from (126), and since n(β, l) = n(β, l + 1), the sign of the derivative of order l + 1 at b must also coincide with that of the derivative of order l at b. If l + 1 ∈ β, then n(β, l) = n(β, l + 1) + 1, so from (127) the sign of the derivative of order l + 1 at b must also coincide with that of the derivative of order l at b. That means, again, that the signs of the partial derivatives of G(x, t) of order l must coincide in the neighborhoods of a and b.
A similar reasoning can be done for the case n − 1 ∈ β, leading to the same conclusions.
Once we have that the signs of the partial derivatives of G(x, t) in the vicinity of a and b are the same, regardless of the order, and knowing already from Theorem 6 (note that the strongly admissible conditions are (n − 1)-alternate) that the sign of ∂^i G(x, t)/∂x^i is constant on (a, b) for the lower orders, and determined by (124) in all cases (it is straightforward to check), it remains to prove that the sign of ∂^i G(x, t)/∂x^i is constant on (a, b) for the rest of the values of i up to n − 1. We will do it by contradiction. Thus, let us suppose that there is an order l for which ∂^l G(x, t)/∂x^l changes sign on (a, b). Since the sign in the vicinity of the extremes is the same, there must be at least an even number of sign changes on (a, b). Let us call x_{1,l} the minimum of these points and x_{2,l} the maximum of these points. Clearly the sign of ∂^l G(x, t)/∂x^l must be the same for x ∈ (a, x_{1,l}) and x ∈ (x_{2,l}, b), and be given by (124).
Let us assume that {l, . . . , n − 1} ⊂ α. Then by Rolle's theorem we can obtain a sequence of zeroes x_{1,j}, j = l, . . . , n − 1, such that x_{1,l} > x_{1,l+1} > . . . > x_{1,n−2} > a, for which the sign of ∂^j G(x, t)/∂x^j is constant on (a, x_{1,j}), and again given by (124). Since ∂^{n−1} G(x, t)/∂x^{n−1} has a discontinuity at x = t, there must be a smallest point x_{1,n−1} < x_{1,n−2} where there is a change of sign of ∂^{n−1} G(x, t)/∂x^{n−1} from positive (see (124)) to negative; but from (120) it is clear that such a point cannot be x_{1,n−1} = t, so it must be a zero of ∂^{n−1} G(x, t)/∂x^{n−1}. From the mean value theorem there must exist an x* ∈ (a, x_{1,n−1}) such that ∂^n G(x*, t)/∂x^n < 0. However, the above reasoning implies that the sign of all the partial derivatives of orders from l to n − 1 is given by (124) for x ∈ (a, x_{1,n−1}), and from (116) that also means that the sign of ∂^n G(x, t)/∂x^n must be non-negative for all x ∈ (a, x_{1,n−1}), which is a contradiction.

Remark 6.
If a_l(a) = 0 for all l ∉ α, then the hypothesis (117) of Theorem 7 can be replaced by any combination of the a_l(x) that guarantees ∂^n G(x, t)/∂x^n > 0 for x ∈ (a, a + δ). Likewise, if a_l(b) = 0 for all l ∉ β, then the hypothesis (118) of Theorem 7 can be replaced by any combination of the a_l(x) that guarantees ∂^n G(x, t)/∂x^n > 0 for x ∈ (b − δ, b).

Remark 7.
One cannot help wondering if, with the right combinations of signs of the a_l(x) in [a, b], it is possible to guarantee the conservation of the sign of each partial derivative of G with respect to x in [a, b] regardless of how the α_j and β_j alternate in the case of strongly admissible conditions (that is, without imposing Conditions 1 and 2 of Theorem 7). Even though that assertion looks quite plausible, its proof has eluded the authors so far.

Discussion
The results presented in this paper provide information about the sign of the Green function of the problem (4) and its derivatives, and about their dependence on the extremes a and b, when the two-point boundary conditions are admissible, a property which encompasses many types of boundary conditions usually covered in the literature (for instance, conjugate or focal boundary conditions). By doing so, this paper extends (and, to a small degree, corrects, as discussed in Remark 5) the results of Eloe and Ridenhour in [1], a fine piece of Green function theory that is considered a reference on the subject. The paper goes further to address the p-alternate and strongly admissible cases, for which results on the signs of higher derivatives on the interval are provided. Thus, whilst both [1] and Section 2.1 yield sign results only for derivatives up to the max(α_1, β_1)-th order, in the p-alternate case they are supplied for derivatives up to the (α_A + 1)-th (if α_1 > 0) and (β_B + 1)-th (if α_1 = 0) orders, and in the case of strongly admissible conditions, for derivatives up to the (n − 1)-th order. As stated in the Introduction, this is relevant since the maximum value of the integer µ of the problem (6) which allows a cone-based approach is limited by the order of the highest derivative of G(x, t) with constant sign, so that finding results for higher derivatives of G(x, t) increases the applicability of the cone theory to such problems.
One question that is left open is whether it is possible to find conditions on the signs of the coefficients of L which guarantee a constant sign of every derivative of G(x, t) on (a, b) up to the (n − 1)-th order, for any strongly admissible boundary conditions. We hypothesize an affirmative answer, but a proper proof is still pending.
To conclude, other areas that could benefit from an extension of these sign results are those of boundary conditions mixing different derivatives and those with integral conditions. The determination of the sign of the Green function of fractional boundary value problems is also a topic that has raised interest recently, as part of more sophisticated mechanisms to find solutions of other related non-linear fractional boundary value problems (see for instance [23–26]). However, there is a lot to do in this area, since most of these cases require the explicit calculation of the associated Green function, and this calculation is only possible in the simplest ones. A more generic approach that provides sign information without having to solve the fractional differential equations, similar to the one presented here, would therefore be very welcome.

Conflicts of Interest:
The authors declare no conflict of interest.