Accurate Estimations of Any Eigenpairs of N-th Order Linear Boundary Value Problems

Abstract: This paper provides a method to bound and calculate any eigenvalues and eigenfunctions of n-th order boundary value problems with sign-regular kernels subject to two-point boundary conditions. The method is based on the selection of a particular type of cone for each eigenpair to be determined, the recursive application of the operator associated with the equivalent integral problem to functions belonging to such a cone, and the calculation of the Collatz–Wielandt numbers of the resulting functions.


Introduction
Let J be a compact interval in R and let us consider the real differential operator L, disconjugate on J and defined by: Ly = y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + · · · + a_0(x) y(x), x ∈ J, where a_j(x) ∈ C^j(J) for 0 ≤ j ≤ n − 1.
If all the signs ε_p (for p = 1, 2, . . .) are equal to +1, then a sign-regular kernel is called totally non-negative, whereas a strongly sign-regular kernel is called oscillatory. For the Green functions of two-point boundary problems like that of (6), sign-regularity is equivalent to strong sign-regularity (see [1]), and condition (7) needs only to hold for p = 1, . . . , n − 1 (see [3], Condition A).
Throughout the paper, we will assume that the Green function of (6) is sign-regular. The interest of sign-regular Green functions resides in the Sturm–Liouville-like properties that they give to the boundary value problem (2) and (3), namely (see [3]):
1. The eigenvalues λ_i (i = 1, 2, . . .) are real and simple, and can be ordered so that |λ_1| < |λ_2| < · · ·.
2. The eigenfunction ϕ_i (i = 1, 2, . . .), corresponding to each λ_i, has exactly i − 1 zeroes in (a, b), all of which are simple. Moreover, the zeroes of ϕ_i and ϕ_{i+1} alternate (i = 2, 3, . . .). At the extremes a, b, all the eigenfunctions ϕ_i have zeroes of the order exactly imposed by the boundary conditions.
3. Each non-trivial linear combination c_r ϕ_r + · · · + c_l ϕ_l with r ≤ l has at least r − 1 nodal zeroes (that is, zeroes where the function changes its sign) and at most l − 1 zeroes in I_1, where I_1 is the interval obtained from [a, b] by removing a if k_m = 0 and b if k_n = 0, and where the zeroes which are antinodes (that is, zeroes where the function does not change its sign) are counted twice.
There is a fourth property of the eigenfunctions ϕ_i, which requires the introduction of the following definition (see [4], Chapter 3, Section 5): Definition 1. A system of continuous functions y_1(x), y_2(x), . . . , y_p(x), x ∈ I, is called a Chebyshev system on the interval I if every non-trivial linear combination of these functions vanishes on the interval I at most p − 1 times. Likewise, a (finite or infinite) sequence of functions is a Markov sequence on the interval I if, for every p (p = 1, 2, . . .), the functions y_1(x), y_2(x), . . . , y_p(x) form a Chebyshev system on I.
The problem (2)–(7) appears in the analysis of the vibrations of a loaded continuum, but the associated theory can be applied to multiple differential problems as long as the boundary conditions imply the sign-regularity of the Green function G(x, t). In particular, in [5] (Appendix D), one can find multiple examples of differential problems of the type (2) in the theory of fluid dynamics, problems whose Green function satisfies (7) under some boundary conditions. Likewise, [6] contains many examples of physical and biological problems with sign-regular kernels satisfying (7)–(9).
The eigenvalue problem (2) with a sign-regular kernel has been studied thoroughly in the literature. It was Kellogg [7][8][9] who first studied symmetric totally non-negative kernels satisfying condition (7). The non-symmetric case was developed by Gantmakher and Krein in [10][11][12]. Karlin obtained new results by attacking the problem from the theory of spline interpolation and Chebyshev and Markov systems [2,13]. Other important breakthroughs were achieved by Levin and Stepanov [1], who extended the results to the sign-regular case, and Borovskikh and Pokornyi [14], who applied them to discontinuous kernels. Later, Stepanov provided necessary and sufficient conditions for the Green function of (6) to be sign-regular in [3]. The research on this topic was continued by Pokornyi and his collaborators due to its relationship with the theory of differential equations in networks [6,15]. Some more recent contributions include [16,17].
While the aforementioned papers cover multiple aspects of the theory of sign-regular kernels and the properties of the solutions of (2), none of them, as far as the authors are aware, seems to have attempted to use these properties to calculate the eigenvalues and eigenfunctions. That will be the purpose of this paper, which will provide an iterative procedure to:

1. Bound and calculate any eigenvalue λ_i, and
2. Calculate the associated eigenfunction ϕ_i, with as much precision as desired.

Our approach will make use of Krein–Rutman cone theory, which was also employed by many of the papers mentioned before, in the following manner:

1. Defining a Banach space and a cone of functions, and picking a function u which belongs to it. Concretely, there will be a Banach space and a cone for each eigenvalue λ_i to be determined.
2. Calculating M^j u iteratively, where M^j is the composition of M with itself j − 1 times.
3. Calculating the so-called Collatz–Wielandt numbers of M^j u in that cone, for different values of j. These numbers are bounds for the inverse of the eigenvalue λ_i, and converge to it as the iteration index j grows.
4. Determining the eigenfunctions ϕ_i from M^j u.
The procedure requires a sequential calculation of the eigenpairs; that is, in order to calculate λ_i and ϕ_i, one has to run the process first for the eigenpairs associated with eigenvalues of smaller absolute value.
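In finite dimensions, where the operator M becomes a matrix and the adjoint eigenfunctions become left eigenvectors, the sequential scheme above reduces to power iteration with deflation. The following sketch is our own illustration, not the paper's algorithm; the matrix, seed, and iteration count are arbitrary choices, and the matrix is symmetric so that the adjoint eigenvectors coincide with the eigenvectors themselves:

```python
import numpy as np

def power_eig(A, deflate=(), iters=200):
    """Power iteration; `deflate` holds already-computed eigenvectors whose
    components are projected out at every step (the sequential calculation)."""
    rng = np.random.default_rng(0)
    u = rng.random(A.shape[0])
    for _ in range(iters):
        for phi in deflate:                      # enforce orthogonality to the
            u -= (u @ phi) / (phi @ phi) * phi   # previously found eigenpairs
        u = A @ u
        u /= np.linalg.norm(u)
    lam = u @ A @ u / (u @ u)                    # Rayleigh quotient estimate
    return lam, u

A = np.array([[2.0, 1.0], [1.0, 2.0]])           # eigenvalues 3 and 1
lam1, phi1 = power_eig(A)                        # dominant eigenpair first
lam2, phi2 = power_eig(A, deflate=[phi1])        # then the next one, deflated
```

Running the second call without the `deflate` argument would simply reproduce the dominant eigenpair, which is why the sequential order matters.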
For self-completeness, let us recall that, given a Banach space B, a cone P ⊂ B is a non-empty closed set satisfying the following conditions:

1. If u, v ∈ P, then cu + dv ∈ P for any real numbers c, d ≥ 0. Note that this condition implies that 0 ∈ P.
2. If u ∈ P and −u ∈ P, then u = 0.

A cone P is reproducing if any y ∈ B can be expressed as y = u − v with u, v ∈ P. The existence of a cone in a Banach space B allows the definition of a partial ordering in that Banach space by setting u ≤ v if and only if v − u ∈ P. Thus, we will say that the operator M is u_0-positive if there exist an integer q > 0 and a u_0 ∈ P such that for any v ∈ P\{0} one can find positive constants δ_1, δ_2 such that δ_1 u_0 ≤ M^q v ≤ δ_2 u_0 (note that δ_1 and δ_2 will not be the same for all v). We will denote by int(P) the interior of the cone P, provided that it exists. Let us note that if M^q maps P\{0} into int(P), then M is u_0-positive, with u_0 being any member of int(P).
Following the Forster–Nagy definition [18], if u ∈ P\{0}, the upper and lower Collatz–Wielandt numbers are defined, respectively, as:

r̄(M, u) = inf{c > 0 : Mu ≤ cu},    r(M, u) = sup{c ≥ 0 : Mu ≥ cu}.

They are called upper and lower Collatz–Wielandt numbers as they extend the estimates for the spectral radius of a non-negative matrix given by L. Collatz [19] and H. Wielandt [20]. We will also write them as r̄(M, u, P) and r(M, u, P) when we want to make an explicit reference to the concrete cone P in which they are calculated.
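For a matrix with non-negative entries and the cone of component-wise non-negative vectors, these numbers reduce to the classical Collatz bounds: the minimum and maximum of the component ratios (Au)_i/u_i bracket the spectral radius, and a power-method step tightens the bracket. A small numeric illustration (the matrix and test vector are arbitrary choices of ours):

```python
import numpy as np

def collatz_wielandt(A, u):
    """Lower and upper Collatz-Wielandt numbers of a positive vector u for the
    cone of component-wise non-negative vectors: extremes of (Au)_i / u_i."""
    ratios = (A @ u) / u
    return ratios.min(), ratios.max()

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # spectral radius 3
u = np.array([1.0, 0.5])                 # arbitrary positive starting vector
low0, up0 = collatz_wielandt(A, u)       # first bracket of the spectral radius
low1, up1 = collatz_wielandt(A, A @ u)   # one power-method step: tighter bracket
rho = max(abs(np.linalg.eigvals(A)))
```

Here low0 = 2.5 and up0 = 4 already bracket ρ(A) = 3, and the iterated bracket [low1, up1] = [2.8, 3.25] is strictly tighter.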
The properties of r̄(M, u) and r(M, u) and their relationship with the spectral radius of the operator M have been studied by several authors, starting with Marek [21,22], Forster and Nagy [18], who corrected some previous mistakes from Marek, and Marek again [23]. The concept has been extended to multiple types of operators, Banach spaces and cones. The references [24][25][26][27] include a good account of recent results.
The use of r(M, M j u) and r(M, M j u) to bound and estimate the principal eigenvalue of a boundary value problem of the type (3) and (4) seems to date from Webb [28], who applied it to define conditions for the existence of solutions to non-linear boundary value problems. Chang [25] proved it for other 1-homogeneous non-linear differential problems like p-Laplace systems, calling it the power method, so it is possible that it was known and used before. Later, the authors used it in [29][30][31] to determine the solvability of boundary value problems and in [32] to bound and estimate the principal eigenvalue of boundary value problems including higher derivatives, for which some results on the sign of the derivatives of the Green function were needed. However, to the knowledge of the authors, it has never been applied to determine the value of other eigenpairs apart from the principal one. This paper took inspiration from an algorithm used to calculate eigenvalues of an oscillatory matrix described in [4] (Appendix 1), which is also based on the use of the Collatz-Wielandt numbers in different cones.
The organization of the paper is as follows. In Section 2, the main results will be presented. In particular, Section 2.1 will introduce the Banach space and the cones to be used for each λ_i, and will prove the convergence of the Collatz–Wielandt numbers of M^j u in such cones. Section 2.2 will yield a procedure to find a function u belonging to each cone, and will show how to simplify the calculation of the Collatz–Wielandt numbers. Section 3 will give an example of how to apply the previous theory to calculate several eigenpairs of a boundary value problem. Finally, Section 4 will draw some conclusions.

Main Results
2.1. The Procedure to Calculate λ_p and ϕ_p

2.1.1. Some Preliminaries

The fact that a_j(x) ∈ C^j[a, b] allows the definition of the operator L* adjoint to L, namely [33] (Chapter 11, Section 1):

L*y = (−1)^n y^(n)(x) + (−1)^{n−1} (a_{n−1}(x) y(x))^(n−1) + · · · + a_0(x) y(x), x ∈ [a, b]. (11)

Accordingly, let us consider the eigenvalue problem adjoint to (2), where the U*_i are the boundary conditions adjoint to the U_i (see [33], Chapter 11, Theorem 3.1, for a definition of adjoint boundary conditions). From the properties of adjoint eigenvalue problems, it is well known [33] (Chapter 12, Theorem 5.2) that the eigenvalues λ̄_i of (12) are the complex conjugates of those of (2) and (3) (that is, λ̄_i = λ*_i), and that its eigenfunctions ψ_i form a biorthogonal system with the ϕ_i, namely that:

⟨ψ_i, ϕ_j⟩ = 0 for i ≠ j. (13)

Given that the eigenvalues λ_i are real, obviously the λ̄_i are also real and λ̄_i = λ_i. Next, given a set of p functions y_i(x) ∈ C[a, b] for i = 1, . . . , p, let us introduce the notation:

det(y_i(x_k)), i, k = 1, . . . , p. (14)

The determinant (14) has very interesting symmetry properties. In particular, its value in [a, b]^p is determined by its value in the simplex Ω = {(x_1, . . . , x_p) : a ≤ x_1 < · · · < x_p ≤ b}, as can easily be shown using the properties of determinants. We will use this simplex frequently in the rest of the paper, together with the related simplex Ω* = {(x_1, . . . , x_p) ∈ Ω : x_1, . . . , x_p ∈ I_1}.
Let u ∈ C[a, b]. We will denote by ∆_p(u; x_1, . . . , x_p) (or simply ∆_p(u)) the function given by the determinant (14) of the matrix whose rows are ϕ_1, . . . , ϕ_{p−1}, u evaluated at the points x_1, . . . , x_p, (15), where the ϕ_i are the eigenfunctions of (2). As before, the value of ∆_p(u) is given by its value in the simplex Ω. Now we are in a position to define the Banach spaces and cones needed by our method. Thus, for each index p, we will define the Banach space B_p as the subspace of functions of C[a, b] orthogonal to ψ_1, . . . , ψ_{p−1}, subject to the sup norm. Since C[a, b] is complete with regard to the sup norm and the functional ⟨ψ_i, y⟩ is linear and bounded for each ψ_i, and therefore continuous, it is straightforward to show that B_p is also complete and therefore a proper Banach space. Likewise, the cone P_p will be defined by:

P_p = {y ∈ B_p : ∆_p(y) ≥ 0 in Ω̄},

where Ω̄ is the closure of Ω. Similarly, the Banach space B̄_p will be defined by replacing C[a, b] with AC^m[a, b] in the definition of B_p, where AC^m[a, b] is the space of functions whose m-th derivative is absolutely continuous in [a, b]. This space will be endowed with the corresponding norm. Finally, linked to B̄_p, the cone P̄_p will be given by:

P̄_p = {y ∈ B̄_p : ∆_p(y) ≥ 0 in Ω̄}. (19)

Lemma 1. The cones P_p and P̄_p are actually cones.
Proof. From property 3 of the eigenfunctions of (2), (13) and the definition of ∆_p in (15), it is clear that either ϕ_p or −ϕ_p belongs to both cones, so they are not empty. From the properties of determinants, if both ∆_p(y) ≥ 0 and ∆_p(−y) ≥ 0 in Ω̄, then ∆_p(y) ≡ 0 in Ω̄. This is only possible if y is a linear combination of ϕ_i for i = 1, . . . , p − 1. As the definitions of B_p and B̄_p require y to be orthogonal to the adjoint eigenfunctions ψ_i, i = 1, . . . , p − 1, (13) leaves y ≡ 0 as the only alternative. This completes the proof.

The Operator M_p and Its Properties in the Cones
Let us introduce the operator M_p associated with the equivalent integral problem, defined by:

M_p u(x) = ∫_a^b G(x, t) r(t) u(t) dt, x ∈ [a, b]. (20)

The operator M_p has some interesting properties in the cone P_p, such as, for instance, its positivity.
Theorem 1. The operator M p maps P p into itself.
Proof. From (5), (6) and (20), it is clear that M_p u is well defined, and incidentally the resulting function satisfies the boundary conditions (3). Moreover, the Green function of the adjoint problem (12) is exactly G*(x, t) = G(t, x) (see for instance [33], Chapter 11, Theorem 4.2), which yields ⟨ψ_i, M_p u⟩ = (1/λ_i) ⟨ψ_i, u⟩ for any u ∈ B_p. That implies that ⟨ψ_i, M_p u⟩ = 0 for i = 1, . . . , p − 1, and therefore M_p maps B_p into itself. Next, we will prove that ∆_p(M_p u) ≥ 0 in Ω̄ when ∆_p(u) ≥ 0 in the same set. We will show first that, in fact, ∆_p(M_p u) ≠ 0 for x ∈ Ω* in that case. Thus, let us assume that, on the contrary, ∆_p(M_p u) vanishes at some point (x_1, . . . , x_p) ∈ Ω*. From (14), it follows that there is a linear combination of ϕ_1, . . . , ϕ_{p−1}, M_p u,

y = c_1 ϕ_1 + · · · + c_{p−1} ϕ_{p−1} + c_p M_p u, (21)

which vanishes at least at x = x_1, . . . , x_p ∈ I_1.
In order for the Green function of (6) to be sign-regular, it is necessary that the equation Ly = 0 be disconjugate on [a, b], that is, that no solution of such an equation have n zeroes in [a, b] (see, for instance, [3], p. 1690). In that case, a result from Pólya [34] allows factoring L into first order differential operators, with Ly = L_n y. Let us assume that the number of zeroes of the function y of (21) is finite in (a, b). Following [3] (Section 5), let S(d_0, . . . , d_n) denote the number of sign changes in the sequence d_0, . . . , d_n of non-zero real numbers, let σ(f) be the number of sign changes of f(x) in (a, b), and for the function y define the corresponding one-sided limit. Using (22) and Rolle's theorem, Stepanov proved [3] (Lemma 5.1) an inequality (24) relating these counts; however, if one reviews the proof of that lemma, one can easily conclude that a sharper inequality (25) in fact holds, where z(y) is the number of zeroes of y in (a, b). Stepanov used (24) in several lemmata of the same paper [3] (Lemmata 6.2, 6.3 and 6.5) to prove the sufficiency of his Theorem 1.3 for the Green function of (6) to be sign-regular on [a, b], a theorem which Stepanov had previously shown [3] (p. 1713) to be also necessary for the sign-regularity. One can repeat the same arguments of such lemmata, using inequality (25) instead, mutatis mutandis, to show that

σ(Ly) ≥ z(y) ≥ p. (26)
Since [3] (Theorem 1.3) is also a necessary condition, any sign-regular Green function must fulfil (26) for y having p zeroes in (a, b). Recalling the form of y in (21), the function

v(x) = c_1 λ_1 ϕ_1(x) + · · · + c_{p−1} λ_{p−1} ϕ_{p−1}(x) + c_p u(x)

must change its sign at least p times in (a, b), exactly at the same points as Ly, given that r(x) is piecewise continuous and positive. Let (x_1, . . . , x_p) be such points and let us build the matrix A whose rows are the values of ϕ_1, . . . , ϕ_{p−1}, u at the points x_1, . . . , x_p, whose determinant |A| is obviously zero. Given that ϕ_1, . . . , ϕ_{p−1} is a Chebyshev system on I_1, the matrix of the values of ϕ_1, . . . , ϕ_{p−1} at x_1, . . . , x_{p−1} has rank p − 1, and therefore A must also have rank p − 1 (the difference between both matrices is one row and one column). That means that the null space N of A, namely the subspace of vectors C ∈ R^p such that C^T A = 0, has dimension 1, and the vector composed of the coefficients of v belongs to N. Expanding the determinant ∆_p(u; x_1, . . . , x_{p−1}, x) along its last column, one obtains a linear combination of ϕ_1(x), . . . , ϕ_{p−1}(x), u(x) that vanishes at x_1, . . . , x_p ∈ I_1. Therefore, the vector composed of the coefficients of that linear combination (namely, the cofactors of the last column) must also belong to the subspace N and be a multiple of the vector of coefficients of ϕ_1(x), . . . , ϕ_{p−1}(x), u(x) in v(x); that is, v(x) must be a multiple of ∆_p(u; x_1, . . . , x_{p−1}, x). As v(x) changes its sign at x = x_p, ∆_p(u; x_1, . . . , x_{p−1}, x) must also change its sign at x = x_p. A similar conclusion can be obtained if y in (21) has infinitely many zeroes in (a, b). Thus, we have proven that if ∆_p(M_p u) vanishes at x′ ∈ Ω*, then ∆_p(u) must change sign at least at a point x″ ∈ Ω*, so u cannot belong to P_p. In summary, if u ∈ P_p, then ∆_p(M_p u) ≠ 0 in Ω*. It remains to be proven that ∆_p(M_p u) and ∆_p(u) have the same sign, namely that ∆_p(M_p u) > 0 in Ω* when ∆_p(u) > 0 in the same set.
This follows from the expression (see, for instance, [4], Chapter 4, Section 3, Equation (62)) relating ∆_p(M_p u) and ∆_p(u) through the compound kernel of G(x, t). The left-hand side of (32) is exactly ε_p ∆_p(M_p u) |λ_1 · · · λ_{p−1}|. From the sign-regularity of G(x, t), (7) and (32), one has that the signs of ∆_p(M_p u) and ∆_p(u) coincide in Ω*. This completes the proof.
Although Theorem 1 can be used to obtain some information about the nature of the eigenvalues λ_p, it does not provide any indication about the relationship between the Collatz–Wielandt numbers and λ_p. A first step in that direction can be made if we can find a solid cone K_p contained in P_p (P_p is not solid, as per its definition) which is mapped by M_p into itself, as the next theorem will show. For that, we need to introduce the notion of weak irreducibility (see [35], Definition 7.5): Definition 2. Let P be a solid cone. We say that M_p is weakly irreducible if the boundary ∂P of P contains no eigenvectors of M_p pertaining to non-negative eigenvalues. Theorem 2. Let K_p be a solid cone such that K_p ⊂ P_p. If M_p maps K_p into itself, then ϕ_p ∈ int(K_p) and for any u ∈ K_p\{0} one has

1/|λ_p| ≤ r̄(M_p, M_p^j u, K_p), (33)

lim_{j→∞} r̄(M_p, M_p^j u, K_p) = 1/|λ_p|, (34)

and

lim_{j→∞} |λ_p|^j M_p^j u = f(u) ϕ_p, (35)

where f(u) is a non-zero linear functional dependent on u and ϕ_p.
Proof. Let us first prove that M_p is weakly irreducible in K_p. From property 2 of the sign-regular problems (see the Introduction), any non-trivial linear combination of ϕ_1, . . . , ϕ_{p−1}, ϕ_i with i > p, where the coefficients of ϕ_1, . . . , ϕ_{p−1} are zero, must have i − 1 zeroes in I_1. That implies that ∆_p(ϕ_i) must vanish in at least i − p points of Ω*. Using an argument similar to that of Theorem 1, one has that ∆_p(ϕ_i) must change its sign in Ω*, so ϕ_i ∉ P_p (ergo ϕ_i ∉ K_p) for i > p, and therefore M_p is weakly irreducible in K_p according to Definition 2.
Next, in the Banach space B_p it is clear that r(M_p) = 1/|λ_p| > 1/|λ_i| for i > p. We can apply [35] (Theorem 7.7) to conclude that 1/|λ_p| is a simple eigenvalue of M_p with an eigenvector ϕ_p ∈ int(K_p). From here and [25] (Lemma 1.16), one gets (33). Likewise, following the same reasoning as in [32] (Theorems 6 and 7), one can get to (35) and, noting that for some j_0 > 0, M_p^j u ∈ int(K_p) for all j ≥ j_0, also to (34).

The Cone P̄_p
The previous theorem does not offer any hints for finding the solid cone K_p, nor does it indicate any relationship between r(M_p, M_p^j u, K_p) and λ_p beyond the fact that the upper Collatz–Wielandt number converges to 1/|λ_p|. To determine such a relationship, the solid cone K_p must be such that M_p maps it (excluding the zero element) into its interior. As it turns out, under certain conditions, the cone P̄_p defined in (19) is solid and satisfies that property with regard to M_p.
To establish that, let us start by identifying the interior of P̄_p. Although one could be tempted to think that int(P̄_p) is merely composed of the functions y ∈ B̄_p such that ∆_p(y) > 0 in Ω*, in the end this is only a necessary condition, as one must pay attention to the value of ∆_p(y) in the vicinity of those parts of the closure of Ω* where ∆_p(y) necessarily vanishes, namely when x_1 is close to a if a ∉ I_1, when x_p is close to b if b ∉ I_1, and when several values x_i converge simultaneously to the same point x*.
The next Lemma will give the value of ∆ p (y) in the latter case.

Lemma 2. Let us suppose that
Proof. Noting that 1 + 2 + · · · + l = l(l + 1)/2, Taylor's formula for multivariate functions allows expressing ∆_p(u; x_1, . . . , x_p), when x_{i+1}, . . . , x_{i+l} are in a neighborhood of x_i of radius δ, as (37). From the properties of determinants, (14) and (15), it is clear that all the terms in (37) where the orders of the partial derivatives with respect to two different x_j coincide vanish, which yields (38), where K is the set of all permutations of the indexes (1, 2, . . . , l). Let us denote by s(j_1, j_2, . . . , j_l) the signature of the l-tuple (j_1, j_2, . . . , j_l) (the signature of a tuple is defined to be +1 whenever the reordering (1, 2, . . . , l) can be achieved by successively interchanging two entries of the tuple an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges). As the different partial derivatives appearing in (38) are continuous in x_{i+1}, . . . , x_{i+l} (ϕ_i, u ∈ C^l[a, b] by hypothesis) and are calculated at the same point x_{i+j} = x_i, we can exchange their order just by taking into account the impact of such a change in the determinant (14) (it is an exchange of rows), which leads us to (39). Equations (38) and (39) give (40). The expression within the sum in (40) has exactly the form of the Vandermonde determinant, whose value, as is well known, is ∏_{i≤k<j≤i+l} (x_j − x_k). From here and (40), one gets (36).
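The Vandermonde identity invoked at the end of the proof can be checked numerically; the points below are an arbitrary example of ours:

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0])
V = np.vander(x, increasing=True)      # rows (1, x_i, x_i^2): a Vandermonde matrix
det = np.linalg.det(V)
# prod_{k < j} (x_j - x_k): the closed form used in the proof
prod = np.prod([x[j] - x[k] for k, j in combinations(range(len(x)), 2)])
```

For these points both quantities equal (2 − 1)(4 − 1)(4 − 2) = 6.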
Given that a ≤ x_1 < . . . < x_p ≤ b for (x_1, . . . , x_p) ∈ Ω, the consequence of Lemma 2 is that, if α is the lowest-order derivative which the boundary conditions at a do not require to vanish, β is the lowest-order derivative which the boundary conditions at b do not require to vanish, and ϕ_i, u ∈ AC^p[a, b], the interior of P̄_p is defined by (41). Remark 1. By the definition of M_p and G(x, t), it is clear that ϕ_i, M_p u ∈ AC^{n−1}[a, b]. However, if p > n − 1, one cannot guarantee that ϕ_i, M_p u ∈ AC^p[a, b], or even the mere existence of int(P̄_p), without imposing extra conditions on u, r and the coefficients a_j of L in (1). The next theorem will display some sufficient conditions for that.

Theorem 3. Let us suppose that either p < n, or p ≥ n and r(x), a_j(x) ∈ AC^{p−n}[a, b] for j = 0, . . . , n − 1. Let q be the lowest integer greater than 1 such that q · n > p. Then P̄_p is solid and M_p^q maps P̄_p\{0} into int(P̄_p). In addition, if u ∈ P̄_p\{0}, then

r(M_p, M_p^j u, P̄_p) ≤ 1/|λ_p| ≤ r̄(M_p, M_p^j u, P̄_p), (42)

lim_{j→∞} r(M_p, M_p^j u, P̄_p) = lim_{j→∞} r̄(M_p, M_p^j u, P̄_p) = 1/|λ_p|, (43)

and

lim_{j→∞} |λ_p|^j M_p^j u = f(u) ϕ_p, (44)

where f(u) is a non-zero linear functional dependent on u and ϕ_p.
Proof. From (1) and (2), one has (45). Therefore, in order for (M_p^q u)^(n)(x) to belong to AC^{p−n}[a, b], it suffices that M_p^{q−1} u(x), r(x), a_j(x) ∈ AC^{p−n}[a, b] for j = 0, . . . , n − 1, which is guaranteed by the hypotheses and the fact that q · n > p and M_p u ∈ AC^{n−1}[a, b]. With this, following the same steps as in Theorem 1, it is straightforward to show that M_p^q maps B̄_p into B̄_p, and that ∆_p(M_p^q u) > 0 for x ∈ Ω* provided that u ∈ P̄_p\{0}, which covers the conditions of the first line of (41).
Let us focus now on the condition of the second line of (41), related to the derivative of ∆_p(M_p^q u) at a when a ∉ I_1. From (32), one has (46). As q > 1, ∆_p(M_p^{q−1} u) > 0 in Ω* according to Theorem 1. For that reason, the key to guaranteeing the positivity of the α-th partial derivative at a lies in the value of the determinant of the matrix (47). Let us denote by K_p(t_1; x_2, . . . , x_p) the matrix (48), whose determinant we will write as |K_p|. Using Taylor's formula, when x_1 is in the neighborhood of a one has (49), so that the matrix K_p(t_1; x_2, . . . , x_p) must also be sign-regular, with ε_p |K_p(t_1; x_2, . . . , x_p)| ≥ 0 for a < t_1 < x_2 < · · · < x_p < b as per (7). We will prove, in fact, that ε_p |K_p(t_1; x_2, . . . , x_p)| > 0. We will proceed by induction, following the ideas of [1]. Thus, from [1] (Equation (12.12)), we know the corresponding result for the first step. Let us assume that ε_{p−1} |K_{p−1}(t_1; x_2, . . . , x_{p−1})| > 0 for a < t_1 < x_2 < · · · < x_{p−1} < b and ε_p |K_p(t_1; x_2, . . . , x_p)| = 0 for some a < t_1 < x_2 < · · · < x_{p−1} < x_p < b. If we introduce an additional pair (x*, x*) such that a < t_1 < x* < x_2, by the same argument as before on the sign-regularity of G(x, t), the matrix K_{p+1}(t_1; x*, x_2, . . . , x_p) must be sign-regular too, with ε_{p+1} |K_{p+1}(t_1; x*, x_2, . . . , x_p)| ≥ 0. K_{p+1}(t_1; x*, x_2, . . . , x_p) is therefore a (p + 1) × (p + 1) sign-regular matrix whose first row is composed of terms of the type ∂^α G(a, t_1)/∂x^α and ∂^α G(a, x_j)/∂x^α, while the rest of the rows form a p × (p + 1) sign-regular matrix whose last p columns are linearly independent (their determinant does not vanish as per (9)). Accordingly, its rank is p, and the rank of K_{p+1}(t_1; x*, x_2, . . . , x_p) must be at least p too.
In the same way, one can find that the rank of K_p(t_1; x_2, . . . , x_p), whose determinant vanishes, is p − 1, with its last p − 1 rows linearly independent. Since the rank of K_{p−1}(t_1; x_2, . . . , x_{p−1}) is p − 1 by the induction hypothesis, one can apply [1] (Lemma 2) to conclude that the rank of K_{p+1}(t_1; x*, x_2, . . . , x_p) equals p − 1, contradicting the previous assertion. Therefore, |K_p(t_1; x_2, . . . , x_p)| cannot be zero for a < t_1 < x_2 < · · · < x_p < b and, due to its sign-regularity, ε_p |K_p(t_1; x_2, . . . , x_p)| > 0. By continuity, the matrix (47) must have a determinant of sign ε_p for t_i ∈ (x_i − δ, x_i + δ), i = 2, . . . , p. From here and (46), the condition of the second line of (41) follows. The condition of the third line of (41), with respect to the derivatives at b if b ∉ I_1, can be proven in the same way.
As for the last condition of (41), let us assume that it fails for an x* ∈ Ω*. From here, (14) and (15), one has that there exists a linear combination of ϕ_1, . . . , ϕ_{p−1}, M_p^q u,

w(x) = d_1 ϕ_1 + · · · + d_{p−1} ϕ_{p−1} + d_p M_p^q u,

with p zeroes in I_1, counting their multiplicities (there must be a multiple zero of order l + 1 at x*). Using a similar argument to that of Theorem 1, one has that the corresponding combination of ϕ_1, . . . , ϕ_{p−1}, M_p^{q−1} u must change its sign at least p times in (a, b), exactly at the same points as Lw. Let these points be x_1, . . . , x_p. This means that the function ∆_p(M_p^{q−1} u; x_1, . . . , x_{p−1}, x) must change its sign at x = x_p, and therefore u cannot belong to P̄_p. This completes the proof that M_p^q(P̄_p\{0}) ⊂ int(P̄_p), that is, that M_p is u_0-positive in P̄_p. Equations (42)–(44) follow now from [32] (Theorems 6 and 7). To clarify how to calculate them, let us recall that, from (10), (14), (15) and (19), for the cone P̄_p they can be expressed as:

r̄(M_p, u, P̄_p) = inf{c > 0 : c ∆_p(u) − ∆_p(M_p u) ≥ 0 in Ω̄},

r(M_p, u, P̄_p) = sup{c ≥ 0 : ∆_p(M_p u) − c ∆_p(u) ≥ 0 in Ω̄},

provided that ∆_p(u) ≥ 0 in Ω̄. Therefore, their calculation requires a comparison of two functions in the simplex Ω̄ ⊂ [a, b]^p.
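In the discrete analogue (the matrix algorithm of [4], Appendix 1, that inspired this paper), the determinants ∆_p correspond to p × p minors, that is, to the p-th compound matrix, and the Collatz–Wielandt ratios on it bracket the product λ_1 · · · λ_p. A sketch of ours for p = 2 and a 3 × 3 oscillatory matrix with known spectrum (the matrix and iteration count are arbitrary choices), where the ratio converges to λ_1 λ_2 and division by λ_1 recovers λ_2:

```python
import numpy as np
from itertools import combinations

def compound(A, p):
    """p-th compound matrix of A: all p x p minors, indices in lexicographic order."""
    idx = list(combinations(range(A.shape[0]), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])          # eigenvalues 2 + sqrt(2), 2, 2 - sqrt(2)
C2 = compound(A, 2)                      # spectral radius of C2 is lambda_1 * lambda_2
u = np.ones(C2.shape[0])
for _ in range(100):                     # power iteration on the second compound
    u = C2 @ u
    u /= np.linalg.norm(u)
ratios = (C2 @ u) / u                    # Collatz-Wielandt ratios, nearly constant now
lam1 = 2.0 + np.sqrt(2.0)
lam2 = ratios.max() / lam1               # recover lambda_2 = 2
```

After the iteration the upper and lower ratios coincide to machine precision, mirroring the convergence in (43).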
We will close this subsection by showing that, in practice, the function u does not need to be orthogonal to the adjoint eigenfunctions ψ i for i = 1, . . . , p − 1 for Equations (42)-(43) to be valid. Proof. If we decompose v as: it follows that u ∈ B p and

Remark 3. The property of the previous theorem that fails to hold for the function u, due to the lack of orthogonality with ψ_i, i = 1, . . . , p − 1, is precisely (44), since the term in (58) associated with 1/λ_1 grows bigger in absolute value than the rest of the terms as the iteration index j grows.

The Calculation of the Adjoint Eigenfunctions ψ i
The application of the method described in Remark 2 for different values of p requires knowledge of the eigenfunctions ϕ_i, i = 1, . . . , p − 1. Although Theorem 4 stated that one can start the iteration M_p^j v with a function v not orthogonal to the adjoint eigenfunctions ψ_i, i = 1, . . . , p − 1, such orthogonality is necessary in order to use (44) to determine ϕ_p, and to employ the latter in the calculation of λ_i, ϕ_i for i > p. This implies that knowledge of the ψ_i must also be obtained as p increases.
The process to obtain ψ_p is very similar to that followed for ϕ_p. To start with, the sign-regularity of G(x, t) ensures the sign-regularity of G*(x, t), where G*(x, t) is the Green function of the adjoint problem. This is due to the fact that G*(x, t) = G(t, x) (see [33], Chapter 11, Theorem 4.2), so that, if G(x, t) satisfies (7)–(9), it is immediately shown that these conditions hold for G(t, x) too.
Next, one has to define the Banach space B*_p, subject to the sup norm ∥y∥ = sup{|y(x)|, x ∈ [a, b]}, and the cone P*_p. A possible solution to this problem is to interpolate u by means of linear splines. Thus, let us assume a partition {ẋ_l} of [a, b], with t points and a mesh size h = max{ẋ_{l+1} − ẋ_l, l = 1, . . . , t − 1}. The linear spline û(x) defines a function continuous on [a, b], whose interpolation error e(x) = |u(x) − û(x)| in each subinterval, if u ∈ C²[ẋ_l, ẋ_{l+1}], is given by:

e(x) = |u″(ξ)| (x − ẋ_l)(ẋ_{l+1} − x)/2, x ∈ [ẋ_l, ẋ_{l+1}],

with ξ depending on x, and can be bounded by:

e(x) ≤ (h²/8) max |u″|. (71)

From (71), it follows that if the size h of the mesh is small, the interpolation error will also be small.
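The bound (71) is easy to verify numerically. The following sketch is our own test case, not the paper's: u(x) = sin x on [0, π], so max|u″| = 1, interpolated on a uniform mesh; it confirms both that the bound holds and that it is nearly attained:

```python
import numpy as np

a, b, t = 0.0, np.pi, 21                      # uniform partition with t points
mesh = np.linspace(a, b, t)
h = mesh[1] - mesh[0]                         # mesh size
fine = np.linspace(a, b, 10001)               # fine grid to sample the error
u_hat = np.interp(fine, mesh, np.sin(mesh))   # the linear spline u-hat
err = np.max(np.abs(np.sin(fine) - u_hat))    # sup of the interpolation error
bound = h**2 / 8                              # (71) with max|u''| = 1
```

The observed error sits just below the bound, because |u″| is close to its maximum near x = π/2 where the largest deviation occurs.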
The advantage of the use of splines is that it allows reducing the determination of ∆_p(u) to calculations over the t points of the mesh, that is, to vectors composed of the values of ϕ_i, i = 1, . . . , p − 1, and u at the points ẋ_l. In [4] (Chapter 5, Section 3), one can find a couple of lemmata, Lemma 2 and Lemma 3, which allow constructing a vector {v(ẋ_l)} ∈ R^t which forms a Markov system of vectors with the vectors {ϕ_i(ẋ_l)} ∈ R^t, i = 1, . . . , p − 1. To do so, it suffices to select (more or less randomly) p − 1 values for the points v(ẋ_1), . . . , v(ẋ_{p−1}) and pick a value v(ẋ_p) such that the inequality (72) holds, which is always possible if ẋ_1 ∈ I_1, given (73), as ϕ_1, . . . , ϕ_{p−1} form a Chebyshev system on I_1. If ẋ_1 ∉ I_1, then the determinant (72) will vanish regardless of the value of ẋ_i, i = 2, . . . , p, so we must start the process by calculating ẋ_{p+1} such that (74) holds. The values of the following coefficients v(ẋ_j) can be determined using a similar inequality (75).

An Example

In order to apply the procedure described in this paper, one must verify that G(x, t) is in fact a sign-regular kernel. In [3], one can find several theorems (Theorems 1.1–1.3) that provide algorithmically effective conditions for such an identification. However, in this case it is easier to use the following theorem of Kalafati–Gantmakher–Krein–Karlin (see for instance [13], Theorem 4, or [1], Theorem 8): Then (−1)^{n−m} G(x, t) is an oscillatory kernel provided that the boundary value problem is non-singular.
The problem (78) has the form (22) whereas B is just the matrix B = 1 0 0 0 .
The only non-zero minor of order m = 3 of A is shown below, whereas the only non-zero minor of order n − m = 1 of B equals 1. In addition, the homogeneous boundary conditions are poised in Elias' sense [36] (that is, the number of boundary conditions set on derivatives of an order lower than t is at least t, for t = 1, . . . , n). This implies that λ = 0 is not an eigenvalue and the problem is not singular [36] (Lemma 10.3). Accordingly, one can apply the Kalafati–Gantmakher–Krein–Karlin theorem and conclude that −G(x, t) of (79) is an oscillatory kernel, that is, ε_i = (−1)^i for all i. Given that the coefficients of (78) are infinitely continuously differentiable, one can apply the method described in the previous sections to determine all eigenfunctions and eigenvalues. As an example, we will calculate λ_1 and λ_2, as well as the corresponding eigenfunctions ϕ_1 and ϕ_2, and the adjoint eigenfunctions ψ_1 and ψ_2.
The operator M_p can be calculated explicitly from the Green function G(x, t) of (80). The Green function G*(x, t) of the problem adjoint to (78) is linked to G(x, t) of (80) by G*(x, t) = G(t, x). The resulting eigenfunctions ϕ_1 and ϕ_2, as well as the adjoint eigenfunctions ψ_1 and ψ_2, are shown in Figure 1. They have been normalized in the sup norm. It is worth remarking on two phenomena observed during the numerical calculations:

• The calculation of ∆_p is very sensitive to rounding errors when x_i, . . . , x_{i+p} are close to the extremes a or b, if there are homogeneous boundary conditions set at these. The reason is that the values of ∆_p are zero or almost zero there. At these points of the partition, it makes sense to replace the calculation of ∆_p by the calculation of the equivalent determinant composed of the lowest-order derivatives of ϕ_i and M_p^j u which do not vanish at the extreme.

• If u is not exactly orthogonal to ψ_i, i = 1, . . . , p − 1, beyond a certain iteration it can happen that, in the decomposition of M_p^j u as a sum of terms of the form (1/λ_i^j)⟨u, ψ_i⟩ϕ_i, the terms associated with those i < p for which ⟨u, ψ_i⟩ ≠ 0 start to reach a size similar to that of the term (1/λ_p^j)⟨u, ψ_p⟩ϕ_p, as anticipated by Remark 3. Further iterations will then make M_p^j u diverge from ϕ_p and approach the eigenfunctions ϕ_i, i < p, for which ⟨u, ψ_i⟩ ≠ 0. Precise orthogonality is therefore key to the accuracy of the method.
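The drift described in this second phenomenon can be mitigated by re-orthogonalizing the iterate against ψ_1, . . . , ψ_{p−1} at every step rather than only at the start. A minimal sketch on a discretized operator (a matrix acting on mesh values) follows; the function name, the quadrature-weight treatment of the inner product, and the deflation-at-every-step strategy are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np

def deflated_power_iteration(M, phis, psis, w, u0, iters=300):
    """Approximate the p-th eigenvector of the discretized operator M by
    power iteration, projecting out at every step the components along the
    known eigenvectors phi_1..phi_{p-1} via the adjoint eigenvectors
    psi_1..psi_{p-1} (biorthogonality <phi_i, psi_j> = 0 for i != j).
    M: t x t matrix, w: quadrature weights for the discrete inner product.
    A minimal sketch; names are illustrative."""
    u = u0.astype(float).copy()
    for _ in range(iters):
        for phi, psi in zip(phis, psis):
            # remove the component of u along phi (oblique projection)
            u -= ((w * psi) @ u) / ((w * psi) @ phi) * phi
        u = M @ u
        u /= np.max(np.abs(u))        # normalize in the sup norm
    return u
```

Repeating the projection inside the loop keeps the rounding-error components along ϕ_i, i < p, from growing back, at the cost of p − 1 extra inner products per iteration.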

Discussion
The results presented in this paper allow finding the n smallest eigenvalues (and their associated eigenfunctions) of boundary value problems with sign-regular Green functions, as well as the subsequent ones, provided that certain conditions on the functions a_j(x) of L and on r(x) (namely, the absolute continuity of their derivatives) are met.
The procedure is sequential in the sense that it must be run for the first p − 1 eigenvalues before it can be used to calculate the p-th one.
For each p, the procedure can be summarized in the following algorithm, which assumes knowledge of the p − 1 previous eigenfunctions ϕ_i and of the p − 1 previous adjoint eigenfunctions ψ_i. Care must be taken in the calculation of the Collatz–Wielandt numbers, so that the difference between M_p^{j+1} u and ωM_p^j u is not zero at any point of the mesh, although a proper analysis of the effect of the interpolation error still needs to be performed.
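On a discrete mesh, the Collatz–Wielandt numbers reduce to the minimum and maximum of the pointwise ratio between consecutive iterates. The following minimal sketch assumes a positive matrix discretization M and an iterate u strictly positive on the mesh, i.e. the situation of the dominant eigenpair p = 1 (for higher p, the ratios are taken for iterates inside the corresponding cone); recall that in the setting of the paper M ϕ_i = (1/λ_i)ϕ_i, so the bracketed quantity is the reciprocal of the eigenvalue.

```python
import numpy as np

def collatz_wielandt_bounds(M, u):
    """Collatz-Wielandt numbers of the iterate u under the discretized
    operator M: for strictly positive u, the minimum and maximum of the
    pointwise ratio (M u) / u bracket the dominant eigenvalue of M.
    A sketch on a discrete mesh; in the paper the ratio is evaluated
    at the spline mesh points."""
    ratios = (M @ u) / u
    return ratios.min(), ratios.max()
```

Each further application of M tightens the bracket monotonically, which is what makes the recursive application of the operator converge to the eigenvalue.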
In any case, we believe the method can be a practical alternative for the calculation of eigenpairs, especially for lower values of p, and also a source for later work, since an aspect not stressed in this paper is that this approach also allows establishing the existence of the eigenvalues λ_p, the Markov character of the sequence ϕ_i and, under the right conditions on a_j(x) and r(x), the algebraic and geometric simplicity of each eigenvalue and the fact that its absolute value differs from those of the others. These properties are widely known from the previous literature (which is why our focus has been more practical, on the determination of λ_p), but it is worth highlighting that the approach used here differs from that of the classical papers [1,4,14], to give a few examples. Those works based their analysis on expressions such as (32), whose iterative application p times leads to strictly positive kernels, and applied them to the cone of positive functions in C[a, b]^p, making use of Krein–Rutman cone theory and other classical results of Schur. While that approach has the advantage of not imposing extra conditions on a_j(x) and r(x), it does not lend itself easily to working with cone interiors, which are key for the calculation of the Collatz–Wielandt numbers (or rather, for their relationship with the eigenvalue λ_p), as Theorems 2 and 3 show.
To complete this paper, let us mention several areas of interest for future research:
• Explore ways of extending the procedure to the case in which r(x) = 0 on a set of points of [a, b]. The effect of this is that the set I_1, on which the eigenfunctions ϕ_i form a Chebyshev system, does not contain the points where r(x) vanishes, complicating the extension of some of the results presented here;
• Analyze the effect of the interpolation error incurred in the calculation of each eigenvalue λ_p by evaluating the Collatz–Wielandt numbers only at the points of the mesh {ẋ_l};
• Simplify or categorize the conditions defined by Stepanov for the sign-regularity of the Green function [3], so that their validation does not always require the calculation of Wronskians of the solutions of Ly = 0 under certain boundary conditions. This would allow an easier identification of sign-regular problems to which the procedures of this manuscript can be applied;
• Last but not least, we have made use of the cone P_p because it allows fixing conditions for such a cone to be solid and for M_p to map P_p into its interior. However, this does not exclude the existence of other solid cones K_p on which to apply Theorem 2. It would be very interesting to find examples of these, in order to relax the hypotheses that P_p imposes on r and a_j.