Article

Accurate Estimations of Any Eigenpairs of N-th Order Linear Boundary Value Problems

1 Vodafone Spain, Avda. América 115, 28042 Madrid, Spain
2 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2021, 9(21), 2663; https://doi.org/10.3390/math9212663
Submission received: 18 September 2021 / Revised: 12 October 2021 / Accepted: 15 October 2021 / Published: 21 October 2021
(This article belongs to the Special Issue Mathematical Models and Methods in Engineering and Social Sciences)

Abstract

This paper provides a method to bound and calculate any eigenvalues and eigenfunctions of n-th order boundary value problems with sign-regular kernels subject to two-point boundary conditions. The method is based on the selection of a particular type of cone for each eigenpair to be determined, the recursive application of the operator associated to the equivalent integral problem to functions belonging to such a cone, and the calculation of the Collatz–Wielandt numbers of the resulting functions.

1. Introduction

Let J be a compact interval in R and let us consider the real differential operator L disconjugate on J and defined by:
$Ly = y^{(n)}(x) + a_{n-1}(x)y^{(n-1)}(x) + \cdots + a_0(x)y(x), \quad x \in J,$
where $a_j(x) \in C^j(J)$ for $0 \le j \le n-1$.
In this paper, we will address the eigenvalue problem:
$Ly = \lambda r(x) y, \quad x \in [a,b] \subseteq J,$
subject to the boundary conditions
$U_i(y) \equiv y^{(k_i)}(a) + \sum_{k<k_i} \gamma_{ik} y^{(k)}(a) = 0 \quad (i = 1, \dots, m), \qquad U_i(y) \equiv y^{(k_i)}(b) + \sum_{k<k_i} \gamma_{ik} y^{(k)}(b) = 0 \quad (i = m+1, \dots, n),$
where $1 \le m \le n-1$, $n-1 \ge k_1 > \cdots > k_m \ge 0$, $n-1 \ge k_{m+1} > \cdots > k_n \ge 0$, and $r(x)$ is a positive function piecewise continuous on $[a,b]$.
If the problem L y = 0 subject to the boundary conditions (3) does not have a non-trivial solution (namely, if λ = 0 is not an eigenvalue of (2) and (3)), then the problem (2) and (3) is equivalent to the integral eigenvalue problem:
$y = \lambda M y,$
where $M$ is the operator $C[a,b] \to AC^{n-1}[a,b]$ defined by:
$Mu = \int_a^b G(x,t)\, r(t)\, u(t)\,dt, \quad x \in [a,b],$
and $G(x,t)$ is the Green function of the problem,
$LG(x,t) = 0, \ x \in (a,t)\cup(t,b); \quad U_i(G(\cdot,t)) \equiv \frac{\partial^{k_i} G(a,t)}{\partial x^{k_i}} + \sum_{k<k_i}\gamma_{ik}\frac{\partial^{k} G(a,t)}{\partial x^{k}} = 0 \ (i = 1, \dots, m), \quad U_i(G(\cdot,t)) \equiv \frac{\partial^{k_i} G(b,t)}{\partial x^{k_i}} + \sum_{k<k_i}\gamma_{ik}\frac{\partial^{k} G(b,t)}{\partial x^{k}} = 0 \ (i = m+1, \dots, n).$
Let us introduce the naming convention:
$G\begin{pmatrix} x_1, & x_2, & \dots, & x_p \\ t_1, & t_2, & \dots, & t_p \end{pmatrix} = \begin{vmatrix} G(x_1,t_1) & \cdots & G(x_1,t_p) \\ \vdots & & \vdots \\ G(x_p,t_1) & \cdots & G(x_p,t_p) \end{vmatrix},$
where $|\cdot|$ is the determinant of the matrix.
According to [1,2], $G(x,t)$ is called a sign-regular kernel of the integral Equations (4) and (5) if the following condition is satisfied for a sequence of real numbers $\epsilon_1, \epsilon_2, \dots$, each one taking either the value $+1$ or the value $-1$, for $p = 1, 2, \dots$:
$\epsilon_p\, G\begin{pmatrix} x_1, & x_2, & \dots, & x_p \\ t_1, & t_2, & \dots, & t_p \end{pmatrix} \ge 0, \quad a < x_1 < x_2 < \cdots < x_p < b, \quad a < t_1 < t_2 < \cdots < t_p < b.$
A sign-regular kernel is called strongly sign-regular if, in addition, it satisfies the following two conditions:
$\epsilon_1 G(t,s) > 0 \quad (a < t < b,\ a < s < b),$
$\epsilon_p\, G\begin{pmatrix} x_1, & x_2, & \dots, & x_p \\ x_1, & x_2, & \dots, & x_p \end{pmatrix} > 0 \quad (a < x_1 < x_2 < \cdots < x_p < b,\ p = 1, 2, \dots).$
If all the ϵ p (for p = 1 , 2 , ) are equal to + 1 , then a sign-regular kernel is called totally non-negative, whereas a strongly sign-regular kernel is called oscillatory. For the Green functions of two-point boundary problems like that of (6), sign-regularity is equivalent to strong sign-regularity (see [1]), and the condition (7) needs only to hold for p = 1 , , n 1 (see [3], Condition A).
Throughout the paper, we will assume that the Green function of (6) is sign-regular.
The interest of sign-regular Green functions resides in the Sturm–Liouville-like properties that they give to the boundary value problem (2) and (3), namely (see [3]):
  • The problem (2) and (3) has countably infinitely many different eigenvalues $\lambda_1, \lambda_2, \dots$, which are real, algebraically and geometrically simple, and can be ordered as $0 < \epsilon_0\epsilon_1\lambda_1 < \epsilon_1\epsilon_2\lambda_2 < \cdots$, where $\epsilon_0 = 1$ and $\epsilon_1, \epsilon_2, \dots$ are the same numbers as in (7).
  • The eigenfunction $\varphi_i(x)$ ($i = 1, 2, \dots$), corresponding to each $\lambda_i$, has exactly $i-1$ zeroes in $(a,b)$, all of which are simple. Moreover, the zeroes of $\varphi_i$ and $\varphi_{i+1}$ alternate ($i = 2, 3, \dots$). At the extremes $a$, $b$, all the eigenfunctions $\varphi_i$ have zeroes of the order exactly imposed by the boundary conditions.
  • Each non-trivial linear combination $c_r\varphi_r + \cdots + c_l\varphi_l$ with $r \le l$ has at least $r-1$ nodal zeroes (that is, zeroes where the function changes its sign) and at most $l-1$ zeroes in $I_1$, where $I_1$ is the interval obtained from $[a,b]$ by removing $a$ if $k_m = 0$ and $b$ if $k_n = 0$, and the zeroes which are antinodes (that is, zeroes where the function does not change its sign) are counted twice.
There is a fourth property of the eigenfunctions φ i , which requires the introduction of the following definition (see [4], Chapter 3, Section 5):
Definition 1.
A system of continuous functions,
$y_1(x), y_2(x), \dots, y_p(x), \quad x \in I,$
is called a Chebyshev system on the interval $I$ if every linear combination of these functions
$y(x) = \sum_{i=1}^p c_i y_i(x), \qquad \sum_{i=1}^p c_i^2 > 0,$
vanishes on the interval $I$ at most $p-1$ times. Likewise, a sequence of (finite or infinite) functions
$y_1(x), y_2(x), y_3(x), \dots$
is a Markov sequence within the interval $I$ if for every $p$ ($p = 1, 2, \dots$), the functions $y_1(x), y_2(x), \dots, y_p(x)$ form a Chebyshev system on $I$.
According to [3], the eigenfunctions φ 1 , φ 2 , of the problem (2) form a Markov sequence on [ a , b ] within I 1 .
The problem (2)–(7) appears in the analysis of the vibrations of a loaded continuum, but the associated theory can be applied to multiple differential problems as long as the boundary conditions imply the sign-regularity of the Green function G ( x , t ) . In particular, in [5] (Appendix D), one can find multiple examples of differential problems of the type (2) in the theory of fluid dynamics, problems whose Green function satisfies (7) under some boundary conditions. Likewise, [6] contains many examples of physical and biological problems with sign regular kernels satisfying (7)–(9).
The eigenvalue problem (2) with a sign-regular kernel has been studied thoroughly in the literature. It was Kellogg [7,8,9] who first assessed symmetric totally non-negative kernels satisfying condition (7). The non-symmetric case was developed by Gantmakher and Krein in [10,11,12]. Karlin obtained new results by attacking the problem from the theory of spline interpolation and Chebyshev and Markov systems [2,13]. Other important breakthroughs were achieved by Levin and Stepanov [1], who extended the results to the sign-regular case, and Borovskikh and Pokornyi [14], who applied them to discontinuous kernels. Later, Stepanov provided necessary and sufficient conditions for the Green function of (6) to be sign-regular in [3]. The research on this topic was continued by Pokornyi and his collaborators due to its relationship with the theory of differential equations in networks [6,15]. Some more recent contributions include [16,17].
While the aforementioned papers cover multiple aspects of the theory of sign-regular kernels and the properties of the solutions of (2), none of them seems to have attempted to use them to calculate its eigenvalues and eigenfunctions, as far as the authors are aware. That will be the purpose of this paper, which will provide an iterative procedure to:
  • Bound and calculate any eigenvalues λ i and
  • Calculate the associated eigenfunctions φ i ,
with as much precision as desired.
Our approach will make use of Krein–Rutman cone theory, which was also employed by many of the papers mentioned before, in the following manner:
  • Defining a Banach space and a cone of functions, and picking up a function u which belongs to it. Concretely, there will be a Banach space and cone for each eigenvalue λ i to be determined.
  • Calculating $M^j u$ iteratively, where $M^j$ is the composition of $M$ with itself $j-1$ times, $\underbrace{M \circ M \circ \cdots \circ M}_{j \text{ times}}$.
  • Calculating the so-called Collatz–Wielandt numbers of M j u in that cone, for different values of j. These numbers are bounds for the inverse of the eigenvalue λ i , and converge to this as the iteration index j grows.
  • Determining the eigenfunctions φ i from M j u .
The procedure requires a sequential calculation of the eigenpairs, that is, in order to calculate λ i and φ i , one has to run the process for the eigenpairs associated with eigenvalues of smaller absolute value.
For self-completeness, let us recall that, given a Banach space B, a cone P B is a non-empty closed set defined by the conditions:
  • If $u, v \in P$, then $cu + dv \in P$ for any real numbers $c, d \ge 0$. Note that this condition implies that $0 \in P$.
  • If $u \in P$ and $-u \in P$, then $u = 0$.
A cone $P$ is reproducing if any $y \in B$ can be expressed as $y = u - v$ with $u, v \in P$. The existence of a cone in a Banach space $B$ allows the definition of a partial ordering relationship in that Banach space by setting $u \le v$ if and only if $v - u \in P$. Thus, we will say that the operator $M$ is $u_0$-positive if there exists an integer $q > 0$ and a $u_0 \in P$ such that for any $v \in P \setminus \{0\}$ one can find positive constants $\delta_1, \delta_2$ such that $\delta_1 u_0 \le M^q v \le \delta_2 u_0$ (note that $\delta_1$ and $\delta_2$ will not be the same for all $v$). We will denote by $\mathrm{int}(P)$ the interior of the cone $P$, provided that it exists. Let us note that if $M^q$ maps $P \setminus \{0\}$ into $\mathrm{int}(P)$, then it is $u_0$-positive, with $u_0$ being any member of $\mathrm{int}(P)$.
Following the Forster–Nagy definition [18], if u P { 0 } , the upper and lower Collatz–Wielandt numbers are defined, respectively, as:
$\overline{r}(M,u) = \inf\{w \in \mathbb{R} : Mu \le wu\}, \qquad \underline{r}(M,u) = \sup\{w \in \mathbb{R} : wu \le Mu\}.$
They are called upper and lower Collatz–Wielandt numbers as they extend the estimates for the spectral radius of a non-negative matrix given by L. Collatz [19] and H. Wielandt [20]. We will write them too as r ¯ ( M , u , P ) and r ̲ ( M , u , P ) when we want to make an explicit reference to the concrete cone P in which they are calculated.
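In the finite-dimensional setting that motivated these numbers, $P$ is the cone of componentwise nonnegative vectors and the Collatz–Wielandt numbers of a nonnegative matrix at a positive vector are simply the smallest and largest componentwise ratios of $Av$ to $v$, which bracket the spectral radius. The following Python sketch only illustrates that classical matrix case; the matrix and starting vector are arbitrary choices, not taken from this paper.

import numpy as np

def collatz_wielandt(A, v):
    """Lower and upper Collatz-Wielandt numbers of a nonnegative matrix A at a
    positive vector v: min_i (Av)_i / v_i <= rho(A) <= max_i (Av)_i / v_i."""
    ratios = (A @ v) / v
    return ratios.min(), ratios.max()

A = np.array([[2.0, 1.0], [1.0, 3.0]])        # an arbitrary nonnegative example
v = np.ones(2)
for j in range(15):                           # iterate A^j v: the bounds tighten
    lo, hi = collatz_wielandt(A, v)
    v = A @ v
    v /= v.max()                              # rescale to avoid overflow
print(lo, hi, max(abs(np.linalg.eigvals(A))))  # lo <= spectral radius <= hi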
The properties of r ̲ ( M , u ) and r ¯ ( M , u ) and their relationship with the spectral radius of the operator M have been studied by several authors, starting with Marek [21,22], Forster and Nagy [18], who corrected some previous mistakes from Marek, and Marek again [23]. The concept has been extended to multiple types of operators, Banach spaces and cones. The references [24,25,26,27], include a good account of recent results.
The use of r ̲ ( M , M j u ) and r ¯ ( M , M j u ) to bound and estimate the principal eigenvalue of a boundary value problem of the type (3) and (4) seems to date from Webb [28], who applied it to define conditions for the existence of solutions to non-linear boundary value problems. Chang [25] proved it for other 1-homogeneous non-linear differential problems like p-Laplace systems, calling it the power method, so it is possible that it was known and used before. Later, the authors used it in [29,30,31] to determine the solvability of boundary value problems and in [32] to bound and estimate the principal eigenvalue of boundary value problems including higher derivatives, for which some results on the sign of the derivatives of the Green function were needed. However, to the knowledge of the authors, it has never been applied to determine the value of other eigenpairs apart from the principal one.
This paper took inspiration from an algorithm used to calculate eigenvalues of an oscillatory matrix described in [4] (Appendix 1), which is also based on the use of the Collatz–Wielandt numbers in different cones.
The organization of the paper is as follows. In Section 2, the main results will be presented. In particular, Section 2.1 will introduce the Banach space and the cones to be used for each $\lambda_i$, and will prove the convergence of the Collatz–Wielandt numbers of $M^j u$ in such cones. Section 2.2 will yield a procedure to find a function $u$ belonging to each cone, and will show how to simplify the calculation of the Collatz–Wielandt numbers. Section 3 will give an example of how to apply the previous theory to calculate several eigenpairs of a boundary value problem. Finally, Section 4 will draw some conclusions.

2. Main Results

2.1. The Procedure to Calculate λ p and φ p

2.1.1. Some Preliminaries

The fact that a j ( x ) C j [ a , b ] allows definition of the operator L adjoint to L, namely [33] (Chapter 11, Section 1):
$L^*y = (-1)^n y^{(n)}(x) + (-1)^{n-1}\big(a_{n-1}(x)y(x)\big)^{(n-1)} + \cdots + a_0(x)y(x), \quad x \in [a,b].$
Accordingly, let us consider the eigenvalue problem adjoint to (2)
$L^*y = \tilde{\lambda}\, r(x)\, y, \quad x \in [a,b]; \qquad U_i^*(y) = 0, \quad i = 1, \dots, n,$
where $U_i^*$ are the boundary conditions adjoint to $U_i$ (see [33] (Chapter 11, Theorem 3.1) for a definition of adjoint boundary conditions). From the properties of the adjoint eigenvalue problems, it is well known [33] (Chapter 12, Theorem 5.2) that the eigenvalues $\tilde{\lambda}_i$ of (12) are the complex conjugates of those of (2) and (3) (that is, $\tilde{\lambda}_i = \overline{\lambda_i}$), and that its eigenfunctions $\psi_i$ form a biorthogonal system with $\varphi_i$, namely that:
$\langle \psi_i, \varphi_j \rangle = \int_a^b \psi_i(x)\varphi_j(x)\, r(x)\,dx = \delta_{ij}.$
Given that the eigenvalues $\lambda_i$ are real, obviously $\tilde{\lambda}_i$ are also real and $\tilde{\lambda}_i = \lambda_i$.
Next, given a set of $p$ functions $y_i(x) \in C[a,b]$ for $i = 1, \dots, p$, let us introduce the notation:
$\Delta\begin{pmatrix} y_1 & y_2 & \cdots & y_p \\ x_1 & x_2 & \cdots & x_p \end{pmatrix} = \begin{vmatrix} y_1(x_1) & \cdots & y_1(x_p) \\ \vdots & & \vdots \\ y_p(x_1) & \cdots & y_p(x_p) \end{vmatrix}, \quad (x_1, \dots, x_p) \in [a,b]^p \subset \mathbb{R}^p.$
The determinant (14) has very interesting symmetry properties. In particular, its value in $[a,b]^p$ is determined by its value in the simplex $\Omega = \{(x_1, \dots, x_p) : a \le x_1 < \cdots < x_p \le b\}$, as can be easily shown using the properties of determinants. We will use this simplex frequently in the rest of the paper, together with the related simplex $\Omega' = \{(x_1, \dots, x_p) \in \Omega : x_1, \dots, x_p \in I_1\}$.
Let u C [ a , b ] . We will denote by Δ p ( u ; x 1 , , x p ) (or simply Δ p ( u ) ) the function:
$\Delta\begin{pmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_{p-1} & u \\ x_1 & x_2 & \cdots & x_{p-1} & x_p \end{pmatrix}, \quad (x_1, \dots, x_p) \in [a,b]^p \subset \mathbb{R}^p,$
where φ i are the eigenfunctions of (2). As before, the value of Δ p ( u ) is given by its value in the simplex Ω .
Now we are in a position to define the Banach spaces and cones needed by our method. Thus, for each index p, we will define the Banach space B p as:
$B_p = \{ y \in C[a,b] : \langle \psi_i, y \rangle = 0,\ i = 1, \dots, p-1 \},$
subject to the sup norm $\|y\| = \sup\{|y(x)|,\ x \in [a,b]\}$. Since $C[a,b]$ is complete with regards to the sup norm and the functional $\langle \psi_i, y \rangle$ is linear and bounded for each $\psi_i$, and therefore continuous, it is straightforward to show that $B_p$ is also complete and therefore a proper Banach space.
Likewise, the cone $P_p$ will be defined by:
$P_p = \{ y \in B_p : \Delta_p(y) \ge 0,\ (x_1, \dots, x_p) \in \overline{\Omega} \},$
where $\overline{\Omega}$ is the closure of $\Omega$.
Similarly, the Banach space $B'_p$ will be defined as:
$B'_p = \{ y \in AC^{\max(n-1,p)}[a,b] : \langle \psi_i, y \rangle = 0,\ i = 1, \dots, p-1;\ U_i(y) \equiv y^{(k_i)}(a) + \sum_{k<k_i}\gamma_{ik}y^{(k)}(a) = 0,\ i = 1, \dots, m;\ U_i(y) \equiv y^{(k_i)}(b) + \sum_{k<k_i}\gamma_{ik}y^{(k)}(b) = 0,\ i = m+1, \dots, n \},$
where $AC^m[a,b]$ is the space of functions whose $m$-th derivative is absolutely continuous in $[a,b]$. This space will be endowed with the norm $\|y\| = \max\{\sup\{|y^{(i)}(x)|,\ x \in [a,b]\},\ i = 0, \dots, \max(n-1,p)\}$, for $p = 0, 1, \dots$. As before, it is straightforward to show that $B'_p$ is a Banach space.
Finally, linked to $B'_p$, the cone $P'_p$ will be given by:
$P'_p = \{ y \in B'_p : \Delta_p(y) \ge 0,\ (x_1, \dots, x_p) \in \overline{\Omega} \}.$
Lemma 1.
The cones $P_p$ and $P'_p$ are actually cones.
Proof. 
From property 3 of the eigenfunctions of (2), (13) and the definition of $\Delta_p$ in (15), it is clear that either $\varphi_p$ or $-\varphi_p$ belongs to both cones, so they are not empty. From the properties of the determinants one also has:
$\Delta_p(cy + dz) = c\,\Delta_p(y) + d\,\Delta_p(z),$
so if $c, d > 0$ and $\Delta_p(y), \Delta_p(z) \ge 0$ for $x \in \overline{\Omega}$, then obviously $\Delta_p(cy + dz) \ge 0$ for $x \in \overline{\Omega}$. Last, but not least, if $\Delta_p(y) \ge 0$ and $\Delta_p(-y) \ge 0$ for $x \in \overline{\Omega}$, then $\Delta_p(y) \equiv 0$ in $\overline{\Omega}$. This is only possible if $y$ is a linear combination of $\varphi_i$ for $i = 1, \dots, p-1$. As the definition of $B_p$ and $B'_p$ requires $y$ to be orthogonal to the adjoint eigenfunctions $\psi_i$, $i = 1, \dots, p-1$, (13) leaves $y \equiv 0$ as the only alternative. This completes the proof. □

2.1.2. The Operator M p and Its Properties in the Cones

Let us introduce the operator M p , defined by:
$M_p = \epsilon_{p-1}\epsilon_p\, M.$
The operator M p has some interesting properties in the cone P p , such as, for instance, its positive character.
Theorem 1.
The operator M p maps P p into itself.
Proof. 
From (5), (6) and (20), it is clear that $M_p$ maps $C[a,b]$ into $AC^{n-1}[a,b] \subset C[a,b]$, and incidentally the resulting function satisfies the boundary conditions (3). Moreover,
$\langle \psi_i, M_p u \rangle = \epsilon_{p-1}\epsilon_p \int_a^b \psi_i(x) r(x) \int_a^b G(x,t) u(t) r(t)\,dt\,dx =$
$\epsilon_{p-1}\epsilon_p \int_a^b u(t) r(t) \int_a^b G(x,t) \psi_i(x) r(x)\,dx\,dt.$
The Green function of the adjoint problem (12) is exactly $G^*(x,t) = G(t,x)$ (see for instance [33], Chapter 11, Theorem 4.2), which yields
$\langle \psi_i, M_p u \rangle = \epsilon_{p-1}\epsilon_p \int_a^b u(t) r(t) \int_a^b G^*(t,x) \psi_i(x) r(x)\,dx\,dt = \epsilon_{p-1}\epsilon_p \int_a^b u(t) r(t) \frac{\psi_i(t)}{\lambda_i}\,dt = 0,$
for any $u \in B_p$. That implies that $\langle \psi_i, M_p u \rangle = 0$ for $i = 1, \dots, p-1$, and therefore $M_p$ maps $B_p$ into itself.
Next, we will prove that $\Delta_p(M_p u) \ge 0$ in $\overline{\Omega}$ when $\Delta_p(u) \ge 0$ in the same set. We will show first that, in fact, $\Delta_p(M_p u) \ne 0$ for $x \in \Omega$ in that case. Thus, let us assume that, on the contrary, $\Delta_p(M_p u) = 0$ at some $x \in \Omega$, that is, there exist $(x_1, \dots, x_p) \in \Omega$ such that $\Delta_p(M_p u; x_1, \dots, x_p) = 0$. From (14), it follows that there is a linear combination of $\varphi_1, \dots, \varphi_{p-1}, M_p u$:
$y(x) = c_1\varphi_1(x) + \cdots + c_{p-1}\varphi_{p-1}(x) + c_p M_p u(x),$
which vanishes at least at $x = x_1, \dots, x_p \in I_1$.
In order for the Green function (6) to be sign-regular, it is necessary that the equation $Lz = 0$ is disconjugate on $[a,b]$, that is, no solution of such an equation can have $n$ zeroes in $[a,b]$ (see, for instance, [3], p. 1690). In that case, a result from Polya [34] allows factoring $L$ into first-order differential operators as follows:
$L_0 y = \rho_0 y,$
$L_i y = \rho_i (L_{i-1} y)', \quad i = 1, \dots, n,$
$\mathrm{and} \quad L y = L_n y,$
where $\rho_i > 0$, $\rho_0\rho_1\cdots\rho_n = 1$ and $\rho_i \in C^{n-i}[a,b]$.
Let us assume that the number of zeroes of y of (21) is finite in ( a , b ) . Following [3] (Section 5), let S ( d 0 , , d n ) denote the number of sign changes in the sequence d 0 , , d n of non-zero real numbers and let σ ( f ) be the number of sign changes of f ( x ) in ( a , b ) . For the function y, we define the one-sided limit:
$S(y,x) = \lim_{\xi \to x} S\big(L_0 y(\xi), L_1 y(\xi), \dots, L_n y(\xi)\big).$
Using (22) and Rolle's theorem, Stepanov proved [3] (Lemma 5.1) that:
$\sigma(Ly) \ge \sigma(y) + S(y,b) - S(y,a).$
However, if one reviews the proof of such a lemma, one can easily conclude that in fact:
$\sigma(Ly) \ge z(y) + S(y,b) - S(y,a),$
where $z(y)$ is the number of zeroes of $y$ in $(a,b)$. Stepanov used (24) in several lemmata of the same paper [3] (Lemmata 6.2, 6.3 and 6.5) to prove the sufficiency of his [3] (Theorem 1.3) for the Green function (6) to be sign regular on $[a,b]$, a theorem which Stepanov had previously ([3], p. 1713) also shown to be necessary for the sign regularity. Such lemmata essentially proved
$\sigma(Ly) \ge \sigma(y),$
and one can repeat their same arguments, using inequality (25) instead and mutatis mutandis, to show that
$\sigma(Ly) \ge z(y) \ge p.$
Since [3] (Theorem 1.3) is also a necessary condition, any sign regular Green function must fulfil (26) for y having p zeroes in ( a , b ) . Recalling the form of y in (21), then the function:
$Ly(x) = r(x)\big(c_1\lambda_1\varphi_1(x) + \cdots + c_{p-1}\lambda_{p-1}\varphi_{p-1}(x) + c_p\epsilon_p\epsilon_{p-1}u(x)\big),$
must change its sign at least p times in ( a , b ) , exactly at the same points as
$v(x) = c_1\lambda_1\varphi_1(x) + \cdots + c_{p-1}\lambda_{p-1}\varphi_{p-1}(x) + c_p\epsilon_p\epsilon_{p-1}u(x),$
given that $r(x)$ is piecewise continuous and positive. Let $(x_1, \dots, x_p)$ be such points and let us build the matrix:
$A = \begin{pmatrix} \varphi_1(x_1) & \cdots & \varphi_1(x_{p-1}) & \varphi_1(x_p) \\ \vdots & & \vdots & \vdots \\ \varphi_{p-1}(x_1) & \cdots & \varphi_{p-1}(x_{p-1}) & \varphi_{p-1}(x_p) \\ u(x_1) & \cdots & u(x_{p-1}) & u(x_p) \end{pmatrix},$
whose determinant $|A|$ is obviously zero. Given that $\varphi_1, \dots, \varphi_{p-1}$ is a Chebyshev system on $I_1$, the matrix
$\begin{pmatrix} \varphi_1(x_1) & \cdots & \varphi_1(x_{p-1}) \\ \vdots & & \vdots \\ \varphi_{p-1}(x_1) & \cdots & \varphi_{p-1}(x_{p-1}) \end{pmatrix}$
has rank $p-1$, and therefore $A$ must also have rank $p-1$ (the difference between both matrices is one row and one column). That means that the null space $N$ of $A$, namely the subspace of vectors $C \in \mathbb{R}^p$ such that $C^T A = 0$, has dimension 1, and the vector composed of the coefficients of $\varphi_1(x), \dots, \varphi_{p-1}(x), u(x)$ in (27) belongs to $N$.
Expanding the determinant:
$\begin{vmatrix} \varphi_1(x_1) & \cdots & \varphi_1(x_{p-1}) & \varphi_1(x) \\ \vdots & & \vdots & \vdots \\ \varphi_{p-1}(x_1) & \cdots & \varphi_{p-1}(x_{p-1}) & \varphi_{p-1}(x) \\ u(x_1) & \cdots & u(x_{p-1}) & u(x) \end{vmatrix}$
along its last column, one obtains a linear combination of $\varphi_1(x), \dots, \varphi_{p-1}(x), u(x)$ that vanishes at $x_1, \dots, x_p \in I_1$ and which equals $\Delta_p(u; x_1, \dots, x_{p-1}, x)$ for any $x \in (a,b)$. Therefore, the vector composed of the coefficients of that linear combination (namely, the cofactors of the last column of (31)) must also belong to the subspace $N$ and be a multiple of the coefficients of $\varphi_1(x), \dots, \varphi_{p-1}(x), u(x)$ in (27), that is, $v(x)$ of (28) must be a multiple of $\Delta_p(u; x_1, \dots, x_{p-1}, x)$. As $v(x)$ changes its sign at $x = x_p$, $\Delta_p(u; x_1, \dots, x_{p-1}, x)$ must also change its sign at $x = x_p$. A similar conclusion can be obtained if $y$ in (21) has infinitely many zeroes in $(a,b)$. Thus, we have proven that if $\Delta_p(M_p u)$ vanishes at some $x \in \Omega$, then $\Delta_p(u)$ must change sign at least at a point of $\Omega$, so $u$ cannot belong to $P_p$. In summary, if $u \in P_p$, $\Delta_p(M_p u) \ne 0$ in $\Omega$.
It remains to be proven that Δ p ( M p u ) and Δ p ( u ) have the same sign, namely that Δ p ( M p u ) > 0 in Ω when Δ p ( u ) > 0 in the same set. This follows from the expression (see, for instance, [4], Chapter 4, Section 3, Equation (62)),
$\Delta\begin{pmatrix} M\varphi_1 & M\varphi_2 & \cdots & M\varphi_{p-1} & Mu \\ x_1 & x_2 & \cdots & x_{p-1} & x_p \end{pmatrix} = \int_a^b \int_a^{t_p} \cdots \int_a^{t_2} G\begin{pmatrix} x_1, & x_2, & \dots, & x_p \\ t_1, & t_2, & \dots, & t_p \end{pmatrix} \Delta\begin{pmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_{p-1} & u \\ t_1 & t_2 & \cdots & t_{p-1} & t_p \end{pmatrix} r(t_1) \cdots r(t_p)\,dt_1 \cdots dt_p.$
The left hand side of (32) is exactly $\frac{\epsilon_p\,\Delta_p(M_p u)}{|\lambda_1 \cdots \lambda_{p-1}|}$. From the sign-regularity of $G(x,t)$, (7) and (32), one has that the signs of $\Delta_p(M_p u)$ and $\Delta_p(u)$ coincide in $\Omega$. This completes the proof. □
Although Theorem 1 can be used to obtain some information about the nature of the eigenvalues $\lambda_p$, it does not provide any indication about the relationship between the Collatz–Wielandt numbers and $\lambda_p$. A first step in that direction can be made if we can find a solid cone $K_p$ contained in $P_p$ ($P_p$ is not solid, as per its definition) which is mapped by $M_p$ into itself, as the next theorem will show. For that, we need to introduce the notion of weak irreducibility (see [35], Definition 7.5):
Definition 2.
Let $P$ be a solid cone. We say that $M_p$ is weakly irreducible if the boundary $\partial P$ of $P$ contains no eigenvectors of $M_p$ pertaining to nonnegative eigenvalues.
Theorem 2.
Let $K_p$ be a solid cone such that $K_p \subset P_p$. If $M_p$ maps $K_p$ into itself, then $\varphi_p \in \mathrm{int}(K_p)$ and for any $u \in K_p \setminus \{0\}$ one has
$\underline{r}(M_p, M_p^j u, K_p) \le \frac{1}{|\lambda_p|}, \quad j = 1, \dots,$
$\lim_{j \to \infty} \underline{r}(M_p, M_p^j u, K_p) = \lim_{j \to \infty} \overline{r}(M_p, M_p^j u, K_p) = \frac{1}{|\lambda_p|},$
and
$\lim_{j \to \infty} |\lambda_p|^j M_p^j u = f(u)\varphi_p,$
where $f(u)$ is a non-zero linear functional dependent on $u$ and $\varphi_p$.
Proof. 
Let us first prove that M p is weakly irreducible in K p .
From the property 2 of the sign regular problems (see the Introduction), any non-trivial linear combination of $\varphi_1, \dots, \varphi_{p-1}, \varphi_i$ with $i > p$, where the coefficients of $\varphi_1, \dots, \varphi_{p-1}$ are zero, must have $i-1$ zeroes in $I_1$. That implies that $\Delta_p(\varphi_i)$ must vanish in at least $i-p$ points in $\Omega$. Using an argument similar to that used in Theorem 1, one has that $\Delta_p(\varphi_i)$ must change its sign in $\Omega$, so $\varphi_i \notin P_p$ (ergo $\varphi_i \notin K_p$) for $i > p$, and therefore $M_p$ is weakly irreducible in $K_p$ according to Definition 2.
Next, in the Banach space $B_p$ it is clear that $r(M_p) = \frac{1}{|\lambda_p|} > \frac{1}{|\lambda_i|}$ for $i > p$. We can apply [35] (Theorem 7.7) to conclude that $\frac{1}{|\lambda_p|}$ is a simple eigenvalue of $M_p$ with an eigenvector $\varphi_p \in \mathrm{int}(K_p)$. From here and [25] (Lemma 1.16), one gets (33). Likewise, following the same reasoning as in [32] (Theorems 6 and 7), one can get to (35) and, noting that for some $j_0 > 0$, $M_p^j u \in \mathrm{int}(K_p)$ for all $j \ge j_0$, also to (34). □

2.1.3. The Cone $P'_p$

The previous theorem does not offer any hints for finding the solid cone $K_p$, nor does it indicate any relationship between $\overline{r}(M_p, M_p^j u, K_p)$ and $\lambda_p$ beyond the fact that the upper Collatz–Wielandt number converges to $\frac{1}{|\lambda_p|}$. To determine such a relationship, the solid cone $K_p$ must be such that $M_p$ maps it (excluding the zero element) to its interior. As it turns out, under certain conditions, the cone $P'_p$ defined in (19) is solid and satisfies that property with regards to $M_p$.
To establish that, let us start by identifying the interior of $P'_p$. Although one could be tempted to think that $\mathrm{int}(P'_p)$ is merely composed of the functions $y \in B'_p$ such that $\Delta_p(y) > 0$ in $\Omega$, in the end this is only a necessary condition, as one must pay attention to the value of $\Delta_p(y)$ in the vicinity of the points of the closure of $\Omega$ where in fact $\Delta_p(y)$ vanishes, namely when $x_1$ is close to $a$ if $a \notin I_1$, when $x_p$ is close to $b$ if $b \notin I_1$, and when several values $x_i$ converge simultaneously to the same point $x^*$.
The next Lemma will give the value of Δ p ( y ) in the latter case.
Lemma 2.
Let us suppose that $\varphi_i, u \in AC^l[a,b]$. If $x_{i+1}, \dots, x_{i+l} \in (x_i - \delta, x_i + \delta)$ for $\delta > 0$, then
$\Delta_p(u; x_1, \dots, x_p) = \frac{1}{\prod_{j=1}^l j!} \left.\frac{\partial^l}{\partial x_{i+l}^l}\frac{\partial^{l-1}}{\partial x_{i+l-1}^{l-1}} \cdots \frac{\partial \Delta_p(u)}{\partial x_{i+1}}\right|_{x_i = \cdots = x_{i+l}} \prod_{i \le k < j \le i+l} (x_j - x_k) + o\big(\delta^{\frac{l(l+1)}{2}}\big).$
Proof. 
Noting that $1 + 2 + \cdots + l = \frac{l(l+1)}{2}$, Taylor's formula for multivariate functions allows expressing $\Delta_p(u; x_1, \dots, x_p)$, when $x_{i+1}, \dots, x_{i+l}$ are in a neighborhood of $x_i$ of radius $\delta$, as:
$\Delta_p(u; x_1, \dots, x_p) = \left.\Delta_p(u)\right|_{x_i = \cdots = x_{i+l}} + \sum_{j=1}^l \left.\frac{\partial \Delta_p(u)}{\partial x_{i+j}}\right|_{x_i = \cdots = x_{i+l}} (x_{i+j} - x_i) + \frac{1}{2}\sum_{j=1}^l \sum_{k=1}^l \left.\frac{\partial^2 \Delta_p(u)}{\partial x_{i+j}\,\partial x_{i+k}}\right|_{x_i = \cdots = x_{i+l}} (x_{i+j} - x_i)(x_{i+k} - x_i) + \cdots + \frac{1}{\left(\frac{l(l+1)}{2}\right)!} \underbrace{\sum_{j=1}^l \cdots \sum_{q=1}^l}_{\frac{l(l+1)}{2}\ \text{summation symbols}} \left.\frac{\partial^{\frac{l(l+1)}{2}} \Delta_p(u)}{\partial x_{i+j} \cdots \partial x_{i+q}}\right|_{x_i = \cdots = x_{i+l}} (x_{i+j} - x_i) \cdots (x_{i+q} - x_i) + o\big(\delta^{\frac{l(l+1)}{2}}\big).$
From the properties of the determinants, (14) and (15), it is clear that all the terms in (37) where the orders of the partial derivatives with respect to two different $x_j$ coincide vanish, which yields:
$\Delta_p(u; x_1, \dots, x_p) = \sum_{(j_1, \dots, j_l) \in K} \frac{1}{\prod_{j=1}^l j!} \left.\frac{\partial^{j_l}}{\partial x_{i+l}^{j_l}}\frac{\partial^{j_{l-1}}}{\partial x_{i+l-1}^{j_{l-1}}} \cdots \frac{\partial^{j_1} \Delta_p(u)}{\partial x_{i+1}^{j_1}}\right|_{x_i = \cdots = x_{i+l}} (x_{i+l} - x_i)^{j_l} \cdots (x_{i+1} - x_i)^{j_1} + o\big(\delta^{\frac{l(l+1)}{2}}\big),$
where $K$ is the set of all permutations of the indexes $(1, 2, \dots, l)$.
Let us denote by $s(j_1, j_2, \dots, j_l)$ the signature of the $l$-tuple $(j_1, j_2, \dots, j_l)$ (the signature of a tuple is defined to be $+1$ whenever the reordering $(1, 2, \dots, l)$ can be achieved by successively interchanging two entries of the tuple an even number of times, and $-1$ whenever it can be achieved by an odd number of such interchanges). As the different partial derivatives appearing in (38) are continuous in $x_{i+1}, \dots, x_{i+l}$ ($\varphi_i, u \in C^l[a,b]$ by hypothesis) and are calculated at the same point $x_{i+j} = x_i$, we can exchange their order just by taking into account the impact of such a change in the determinant (14) (it is an exchange of rows), which leads us to:
$\left.\frac{\partial^{j_l}}{\partial x_{i+l}^{j_l}}\frac{\partial^{j_{l-1}}}{\partial x_{i+l-1}^{j_{l-1}}} \cdots \frac{\partial^{j_1} \Delta_p(u)}{\partial x_{i+1}^{j_1}}\right|_{x_i = \cdots = x_{i+l}} = (-1)^{s(j_1, j_2, \dots, j_l)} \left.\frac{\partial^l}{\partial x_{i+l}^l}\frac{\partial^{l-1}}{\partial x_{i+l-1}^{l-1}} \cdots \frac{\partial \Delta_p(u)}{\partial x_{i+1}}\right|_{x_i = \cdots = x_{i+l}}.$
Equations (38) and (39) give:
$\Delta_p(u; x_1, \dots, x_p) = \frac{1}{\prod_{j=1}^l j!} \left.\frac{\partial^l}{\partial x_{i+l}^l}\frac{\partial^{l-1}}{\partial x_{i+l-1}^{l-1}} \cdots \frac{\partial \Delta_p(u)}{\partial x_{i+1}}\right|_{x_i = \cdots = x_{i+l}} \sum_{(j_1, \dots, j_l) \in K} (-1)^{s(j_1, j_2, \dots, j_l)} (x_{i+1} - x_i)^{j_1} \cdots (x_{i+l} - x_i)^{j_l} + o\big(\delta^{\frac{l(l+1)}{2}}\big).$
The expression within the sum in (40) has exactly the form of the Vandermonde determinant
$\begin{vmatrix} x_{i+1} - x_i & \cdots & x_{i+l} - x_i \\ \vdots & & \vdots \\ (x_{i+1} - x_i)^l & \cdots & (x_{i+l} - x_i)^l \end{vmatrix},$
whose value, as is well known, is $\prod_{i \le k < j \le i+l} (x_j - x_k)$. From here and (40), one gets (36). □
Given that $a \le x_1 < \cdots < x_p \le b$ for $(x_1, \dots, x_p) \in \Omega$, the consequence of Lemma 2 is that, if $\alpha$ is the lowest order of derivative at $a$ which the boundary conditions do not force to vanish, $\beta$ is the lowest order of derivative at $b$ which the boundary conditions do not force to vanish, and $\varphi_i, u \in AC^p[a,b]$, the interior of $P'_p$ is given by:
$\mathrm{int}(P'_p) = \Big\{ y \in B'_p : \Delta_p(y) > 0,\ (x_1, \dots, x_p) \in \Omega;\ \left.\frac{\partial^\alpha \Delta_p(y)}{\partial x_1^\alpha}\right|_{x_1 = a} > 0,\ (a, x_2, \dots, x_p) \in \Omega,\ \text{if } a \notin I_1;\ (-1)^\beta \left.\frac{\partial^\beta \Delta_p(y)}{\partial x_p^\beta}\right|_{x_p = b} > 0,\ (x_1, x_2, \dots, b) \in \Omega,\ \text{if } b \notin I_1;\ \left.\frac{\partial^l}{\partial x_{i+l}^l}\frac{\partial^{l-1}}{\partial x_{i+l-1}^{l-1}} \cdots \frac{\partial \Delta_p(y)}{\partial x_{i+1}}\right|_{x_i = \cdots = x_{i+l} = x^*} > 0,\ i = 1, \dots, p,\ l = 1, \dots, p-i \Big\}.$
Remark 1.
By the definition of $M_p$ and $G(x,t)$, it is clear that $\varphi_i, M_p u \in AC^{n-1}[a,b]$. However, if $p > n-1$, one cannot guarantee that $\varphi_i, M_p u \in AC^p[a,b]$, or even the mere existence of $\mathrm{int}(P'_p)$, without imposing extra conditions on $u$, $r$ and the coefficients $a_j$ of $L$ in (1). The next theorem will display some sufficient conditions for that.
Theorem 3.
Let us suppose that either $p < n$, or $p \ge n$ and $r(x), a_j(x) \in AC^{p-n}[a,b]$ for $j = 0, \dots, n-1$. Let $q$ be the lowest integer greater than 1 such that $q \cdot n > p$. Then $P'_p$ is solid and $M_p^q$ maps $P'_p \setminus \{0\}$ into $\mathrm{int}(P'_p)$. In addition, if $u \in P'_p \setminus \{0\}$, then,
$\underline{r}(M_p, M_p^j u) \le \frac{1}{|\lambda_p|} \le \overline{r}(M_p, M_p^j u), \quad j = q, \dots,$
$\lim_{j \to \infty} \underline{r}(M_p, M_p^j u) = \lim_{j \to \infty} \overline{r}(M_p, M_p^j u) = \frac{1}{|\lambda_p|},$
and
$\lim_{j \to \infty} |\lambda_p|^j M_p^j u = f(u)\varphi_p,$
where $f(u)$ is a non-zero linear functional dependent on $u$ and $\varphi_p$.
Proof. 
From (1) and (2), one has:
$(M_p^q u)^{(n)}(x) = \epsilon_{p-1}\epsilon_p\, r(x)\, M_p^{q-1}u(x) - \sum_{i=0}^{n-1} a_i(x)\,(M_p^q u)^{(i)}(x), \quad x \in (a,b).$
Therefore, in order for $(M_p^q u)^{(n)}(x)$ to belong to $AC^{p-n}[a,b]$, it suffices that $M_p^{q-1}u(x), r(x), a_j(x) \in AC^{p-n}[a,b]$ for $j = 0, \dots, n-1$, which is guaranteed by the hypotheses and the fact that $q \cdot n > p$ and $M_p u \in AC^{n-1}[a,b]$. With this, following the same steps as in Theorem 1 it is straightforward to show that $M_p^q$ maps $B'_p$ into $B'_p$, and that $\Delta_p(M_p^q u) > 0$ for $x \in \Omega$ provided that $u \in P'_p \setminus \{0\}$, which covers the conditions of the first line of (41).
Let us focus now on the condition of the second line of (41), related to the derivative of $\Delta_p(M_p^q u)$ at $a$ when $a \notin I_1$. From (32), one has:
$\frac{\epsilon_p}{|\lambda_1 \cdots \lambda_{p-1}|} \frac{\partial^\alpha \Delta_p(M_p^q u; a, x_2, \dots, x_p)}{\partial x_1^\alpha} = \frac{\partial^\alpha}{\partial x_1^\alpha} \Delta\begin{pmatrix} M\varphi_1 & M\varphi_2 & \cdots & M\varphi_{p-1} & M(M_p^{q-1}u) \\ a & x_2 & \cdots & x_{p-1} & x_p \end{pmatrix} = \int_a^b \cdots \int_a^{t_2} \begin{vmatrix} \frac{\partial^\alpha G(a,t_1)}{\partial x_1^\alpha} & \cdots & \frac{\partial^\alpha G(a,t_p)}{\partial x_1^\alpha} \\ G(x_2,t_1) & \cdots & G(x_2,t_p) \\ \vdots & & \vdots \\ G(x_p,t_1) & \cdots & G(x_p,t_p) \end{vmatrix} \Delta\begin{pmatrix} \varphi_1 & \varphi_2 & \cdots & \varphi_{p-1} & M_p^{q-1}u \\ t_1 & t_2 & \cdots & t_{p-1} & t_p \end{pmatrix} r(t_1) \cdots r(t_p)\,dt_1 \cdots dt_p = \int_a^b \cdots \int_a^{t_2} \begin{vmatrix} \frac{\partial^\alpha G(a,t_1)}{\partial x_1^\alpha} & \cdots & \frac{\partial^\alpha G(a,t_p)}{\partial x_1^\alpha} \\ G(x_2,t_1) & \cdots & G(x_2,t_p) \\ \vdots & & \vdots \\ G(x_p,t_1) & \cdots & G(x_p,t_p) \end{vmatrix} \Delta_p(M_p^{q-1}u; t_1, \dots, t_p)\, r(t_1) \cdots r(t_p)\,dt_1 \cdots dt_p.$
As $q > 1$, $\Delta_p(M_p^{q-1}u) > 0$ in $\Omega$ according to Theorem 1. For that reason, the key to guaranteeing the positivity of the $\alpha$-th partial derivative at $a$ lies in the value of the determinant of the matrix:
$\begin{pmatrix} \frac{\partial^\alpha G(a,t_1)}{\partial x_1^\alpha} & \cdots & \frac{\partial^\alpha G(a,t_p)}{\partial x_1^\alpha} \\ G(x_2,t_1) & \cdots & G(x_2,t_p) \\ \vdots & & \vdots \\ G(x_p,t_1) & \cdots & G(x_p,t_p) \end{pmatrix}.$
Let us denote by $K_p(t_1; x_2, \dots, x_p)$ the matrix:
$K_p(t_1; x_2, \dots, x_p) = \begin{pmatrix} \frac{\partial^\alpha G(a,t_1)}{\partial x_1^\alpha} & \frac{\partial^\alpha G(a,x_2)}{\partial x_1^\alpha} & \cdots & \frac{\partial^\alpha G(a,x_p)}{\partial x_1^\alpha} \\ G(x_2,t_1) & G(x_2,x_2) & \cdots & G(x_2,x_p) \\ \vdots & \vdots & & \vdots \\ G(x_p,t_1) & G(x_p,x_2) & \cdots & G(x_p,x_p) \end{pmatrix},$
whose determinant we will write as $|K_p|$. Using Taylor's formula, when $x_1$ is in the neighborhood of $a$ one has:
$G\begin{pmatrix} x_1, & x_2, & \dots, & x_p \\ t_1, & x_2, & \dots, & x_p \end{pmatrix} = \begin{vmatrix} \frac{\partial^\alpha G(a,t_1)}{\partial x_1^\alpha} & \frac{\partial^\alpha G(a,x_2)}{\partial x_1^\alpha} & \cdots & \frac{\partial^\alpha G(a,x_p)}{\partial x_1^\alpha} \\ G(x_2,t_1) & G(x_2,x_2) & \cdots & G(x_2,x_p) \\ \vdots & \vdots & & \vdots \\ G(x_p,t_1) & G(x_p,x_2) & \cdots & G(x_p,x_p) \end{vmatrix} \frac{(x_1 - a)^\alpha}{\alpha!} + o\big((x_1 - a)^\alpha\big),$
so that the matrix $K_p(t_1; x_2, \dots, x_p)$ must also be sign regular, with $\epsilon_p |K_p(t_1; x_2, \dots, x_p)| \ge 0$ for $a < t_1 < x_2 < \cdots < x_p < b$ as per (7). We will prove in fact that
$\epsilon_p |K_p| > 0, \quad a < t_1 < x_2 < \cdots < x_p < b.$
We will proceed by induction, following the ideas of [1]. Thus, from [1] (Equation (12.12)), we know that:
$\epsilon_1 \frac{\partial^\alpha G(a,t)}{\partial x^\alpha} > 0, \quad a < t < b.$
Let us assume that $\epsilon_{p-1}|K_{p-1}(t_1; x_2, \dots, x_{p-1})| > 0$ for $a < t_1 < x_2 < \cdots < x_{p-1} < b$, and that $\epsilon_p |K_p(t_1; x_2, \dots, x_p)| = 0$ for some $a < t_1 < x_2 < \cdots < x_{p-1} < x_p < b$. If we introduce an additional pair $(x, x)$ such that $a < t_1 < x < x_2$, by the same argument as before on the sign regularity of $G(x,t)$ one must have that the matrix $K_{p+1}(t_1; x, x_2, \dots, x_p)$ must be sign regular too, with $\epsilon_{p+1}|K_{p+1}(t_1; x, x_2, \dots, x_p)| \ge 0$.
$K_{p+1}(t_1; x, x_2, \dots, x_p)$ is therefore a $(p+1) \times (p+1)$ sign regular matrix whose first row is composed of terms of the type $\frac{\partial^\alpha G(a,t_1)}{\partial x^\alpha}$ and $\frac{\partial^\alpha G(a,x_j)}{\partial x^\alpha}$, while the rest of the rows form the matrix:
$\begin{pmatrix} G(x,t_1) & G(x,x) & G(x,x_2) & \cdots & G(x,x_p) \\ G(x_2,t_1) & G(x_2,x) & G(x_2,x_2) & \cdots & G(x_2,x_p) \\ \vdots & \vdots & \vdots & & \vdots \\ G(x_p,t_1) & G(x_p,x) & G(x_p,x_2) & \cdots & G(x_p,x_p) \end{pmatrix},$
which is $p \times (p+1)$ and sign regular, and whose last $p$ columns are linearly independent (their determinant does not vanish as per (9)). Accordingly its rank is $p$, and the rank of $K_{p+1}(t_1; x, x_2, \dots, x_p)$ must be at least $p$ too.
In the same way, one can find that the rank of $K_p(t_1; x_2, \dots, x_p)$, whose determinant vanishes, is $p-1$, with its last $p-1$ rows linearly independent. Since the rank of $K_{p-1}(t_1; x_2, \dots, x_{p-1})$ is $p-1$ by the induction hypothesis, one can apply [1] (Lemma 2) to conclude that the rank of $K_{p+1}(t_1; x, x_2, \dots, x_p)$ equals $p-1$, contradicting the previous assertion. Therefore, $|K_p(t_1; x_2, \dots, x_p)|$ cannot be zero for $a < t_1 < x_2 < \cdots < x_p < b$ and, due to its sign regularity, $\epsilon_p |K_p(t_1; x_2, \dots, x_p)| > 0$. By continuity, the matrix (47) must have a determinant of sign $\epsilon_p$ for $t_i \in (x_i - \delta, x_i + \delta)$, $i = 2, \dots, p$. From here and (46) one gets $\left.\frac{\partial^\alpha \Delta_p(M_p^q u)}{\partial x_1^\alpha}\right|_{x_1 = a} > 0$ for $(a, x_2, \dots, x_p) \in \Omega$.
The condition of the third line of (41), with respect to the derivatives at $b$ if $b \notin I_1$, can be proven in the same way.
As for the last condition of (41), let us assume that:
$\left.\frac{\partial^l}{\partial x_{i+l}^l}\frac{\partial^{l-1}}{\partial x_{i+l-1}^{l-1}} \cdots \frac{\partial \Delta_p(M_p^q u)}{\partial x_{i+1}}\right|_{x_i = \cdots = x_{i+l} = x^*} = 0,$
for an $x^* \in \Omega$. From here, (14) and (15), one has that there exists a linear combination of $\varphi_1, \dots, \varphi_{p-1}, M_p^q u$,
$w(x) = d_1\varphi_1 + \cdots + d_{p-1}\varphi_{p-1} + d_p M_p^q u,$
with $p$ zeroes on $I_1$, counting their multiplicities (there must be a multiple zero of order $l+1$ at $x^*$). Using a similar argument to that of Theorem 1, one has that the function
$Lw(x) = r(x)\big(d_1\lambda_1\varphi_1 + \cdots + d_{p-1}\lambda_{p-1}\varphi_{p-1} + d_p\epsilon_{p-1}\epsilon_p M_p^{q-1}u\big)$
must change its sign at least $p$ times in $(a,b)$, exactly at the same points as:
$y(x) = d_1\lambda_1\varphi_1 + \cdots + d_{p-1}\lambda_{p-1}\varphi_{p-1} + d_p\epsilon_{p-1}\epsilon_p M_p^{q-1}u.$
Let these points be $x_1, \dots, x_p$. This means that the function $\Delta_p(M_p^{q-1}u; x_1, \dots, x_{p-1}, x)$ must change its sign at $x = x_p$ and therefore $u$ cannot belong to $P'_p$. This completes the proof that $M_p^q(P'_p \setminus \{0\}) \subset \mathrm{int}(P'_p)$, that is, that $M_p$ is $u_0$-positive in $P'_p$. Equations (42)–(44) follow now from [32] (Theorems 6 and 7). □
Remark 2.
Theorem 3 shows that the Collatz–Wielandt numbers $\underline{r}(M_p, M_p^j u)$ and $\overline{r}(M_p, M_p^j u)$ are lower and upper bounds of the inverse of $|\lambda_p|$, which converge to it as the iteration index $j$ grows. Therefore, they allow determining $\lambda_p$ with as much precision as desired, as the error in the approximation is bounded by the difference $\overline{r}(M_p, M_p^j u) - \underline{r}(M_p, M_p^j u)$.
To clarify how to calculate them, let us recall that, from (10), (14), (15) and (19), for the cone $P'_p$ they can be expressed as:
$\overline{r}(M_p, M_p^j u) = \inf\{ w \in \mathbb{R} : \Delta_p(M_p^{j+1}u) \le w\,\Delta_p(M_p^j u),\ x \in \Omega \}, \quad j \ge q,$
and
$\underline{r}(M_p, M_p^j u) = \sup\{ w \in \mathbb{R} : w\,\Delta_p(M_p^j u) \le \Delta_p(M_p^{j+1}u),\ x \in \Omega \}, \quad j \ge q,$
provided that $\Delta_p(u) \ge 0$ in $\Omega$. Therefore, their calculation requires a comparison of two functions in the simplex $\Omega \subset [a,b]^p$.
We will close this subsection by showing that, in practice, the function u does not need to be orthogonal to the adjoint eigenfunctions ψ i for i = 1 , , p 1 for Equations (42)–(43) to be valid.
Theorem 4.
Under the conditions of Theorem 3, let us suppose that $v \in C[a,b]$ and it is not identically zero. If $\Delta_p(v) \ge 0$ in $\Omega$, then its Collatz–Wielandt numbers satisfy (42)–(43).
Proof. 
If we decompose $v$ as:
$v = \sum_{i=1}^{p-1} \langle \psi_i, v \rangle \varphi_i + u,$
it follows that $u \in B_p$ and
$\Delta_p(u) = -\sum_{i=1}^{p-1} \langle \psi_i, v \rangle \Delta_p(\varphi_i) + \Delta_p(v) = \Delta_p(v) \ge 0,$
for $x \in \Omega$, that is, $u \in P_p$. Likewise,
$M_p^j v = (\epsilon_p\epsilon_{p-1})^j \sum_{i=1}^{p-1} \frac{\langle \psi_i, v \rangle}{\lambda_i^j} \varphi_i + M_p^j u,$
so $\Delta_p(M_p^j v) = \Delta_p(M_p^j u)$ and therefore $\underline{r}(M_p, M_p^j u) = \underline{r}(M_p, M_p^j v)$ and $\overline{r}(M_p, M_p^j v) = \overline{r}(M_p, M_p^j u)$, for $j = 1, \dots$. □
Remark 3.
The property that fails to hold for the function $v$ of the previous theorem, due to the lack of orthogonality with $\psi_i$, $i = 1, \dots, p-1$, is precisely (44), since the term in (58) associated to $\frac{1}{\lambda_1}$ gets bigger in absolute value than the rest of the terms as the iteration index $j$ grows.

2.1.4. The Calculation of the Adjoint Eigenfunctions ψ i

The application of the method described in Remark 2 for different values of $p$ requires knowledge of the eigenfunctions $\varphi_i$, $i = 1, \dots, p-1$. Although Theorem 4 stated that one can start the iteration $M_p^j v$ with a function $v$ not orthogonal to the adjoint eigenfunctions $\psi_i$, $i = 1, \dots, p-1$, such an orthogonality is necessary in order to use (44) to determine $\varphi_p$, and to employ the latter in the calculation of $\lambda_i$, $\varphi_i$ for $i > p$. This implies that knowledge of $\psi_i$ must also be obtained as $p$ increases.
The process to obtain $\psi_p$ is very similar to that followed for $\varphi_p$. To start with, the sign regularity of $G(x,t)$ ensures the sign regularity of $G^*(x,t)$, where $G^*(x,t)$ is the Green function of the adjoint problem,
$L^* y = 0, \quad x \in [a,b]; \qquad U_i^*(y) = 0, \quad i = 1, \dots, n.$
This is due to the fact that $G^*(x,t) = G(t,x)$ (see [33], Chapter 11, Theorem 4.2), so that, if $G(x,t)$ satisfies (7)–(9), it is immediately shown that these conditions hold for $G(t,x)$ too.
Next, one has to define the Banach space,
$B_p^* = \{ y \in C[a,b] : \langle \varphi_i, y \rangle = 0,\ i = 1, \dots, p-1 \},$
subject to the sup norm $\|y\| = \sup\{|y(x)|,\ x \in [a,b]\}$, the cone $P_p^*$
$P_p^* = \{ y \in B_p^* : \Delta_p^*(y) \ge 0,\ (x_1, \dots, x_p) \in \overline{\Omega} \},$
where $\Delta_p^*(y)$ is defined by:
$\Delta_p^*(y) = \Delta\begin{pmatrix} \psi_1 & \psi_2 & \cdots & \psi_{p-1} & y \\ x_1 & x_2 & \cdots & x_{p-1} & x_p \end{pmatrix}, \quad (x_1, \dots, x_p) \in [a,b]^p \subset \mathbb{R}^p,$
the Banach space ${B'}_p^*$
${B'}_p^* = \{ y \in AC^{\max(n-1,p)}[a,b] : \langle \varphi_i, y \rangle = 0,\ i = 1, \dots, p-1;\ U_i^*(y) = 0,\ i = 1, \dots, n \},$
and the cone ${P'}_p^*$
${P'}_p^* = \{ y \in {B'}_p^* : \Delta_p^*(y) \ge 0,\ (x_1, \dots, x_p) \in \overline{\Omega} \}.$
The operator $M_p^*$ is defined by:
$M_p^* w(x) = \epsilon_p\epsilon_{p-1} \int_a^b G(t,x)\, w(t)\, r(t)\,dt, \quad x \in [a,b],$
given that the Green function of the adjoint problem (12) is exactly $G(t,x)$.
However, the conditions on $a_j(x)$ required for ${P'}_p^*$ to be solid, and for $M_p^*$ to map ${P'}_p^* \setminus \{0\}$ into $\mathrm{int}({P'}_p^*)$, are stronger, as the next theorem will show:
Theorem 5.
Let us suppose that either $p < n$ and $a_j(x) \in AC^j[a,b]$ for $j = 0, \dots, n-1$, or $p \ge n$, $r(x) \in AC^{p-n}[a,b]$ and $a_j(x) \in AC^{p+j-n}[a,b]$ for $j = 0, \dots, n-1$. Let $q$ be the lowest integer greater than 1 such that $q \cdot n > p$. Then ${P'}_p^*$ is solid and $(M_p^*)^q$ maps ${P'}_p^* \setminus \{0\}$ into $\mathrm{int}({P'}_p^*)$. In addition, if $w \in {P'}_p^* \setminus \{0\}$, then,
$\underline{r}(M_p^*, (M_p^*)^j w) \le \frac{1}{|\lambda_p|} \le \overline{r}(M_p^*, (M_p^*)^j w), \quad j = q, \dots,$
$\lim_{j \to \infty} \underline{r}(M_p^*, (M_p^*)^j w) = \lim_{j \to \infty} \overline{r}(M_p^*, (M_p^*)^j w) = \frac{1}{|\lambda_p|},$
and
$\lim_{j \to \infty} |\lambda_p|^j (M_p^*)^j w = g(w)\psi_p,$
where $g(w)$ is a non-zero linear functional dependent on $w$ and $\psi_p$.
Proof. 
The proof is essentially the same as that of Theorem 3, changing the references to $\varphi_i$, $\Delta_p$ and $M_p$, to $\psi_i$, $\Delta_p^*$ and $M_p^*$, respectively. □
Remark 4.
Theorem 5 implies that, in practice, one needs $r(x) \in AC^{p-n}[a,b]$ and $a_j(x) \in AC^{p+j-n}[a,b]$ for $j = 0, \dots, n-1$ in order to be able to apply the procedure to determine $\lambda_p$, $\varphi_p$ and $\psi_p$ for values of $p$ higher than $n$.

2.2. Some Practical Considerations for the Application of the Procedure

2.2.1. The Selection of the Starting Function u

A key aspect of the procedure described before is the selection of a function $u$ belonging to the cone $P_p$. Whereas continuity on $[a,b]$ is a condition easy to meet, and for the orthogonality with the adjoint eigenfunctions $\psi_i$ one can follow an approach similar to that of Theorem 4, the satisfaction of the condition $\Delta_p(u) \ge 0$ in $\Omega$ is not so straightforward.
A possible solution to this problem is to interpolate $u$ by means of linear splines. Thus, let us assume a partition $\{\dot{x}_l\}$ of $[a,b]$, with $t$ points and a mesh size $h = \max\{\dot{x}_{l+1} - \dot{x}_l,\ l = 1, \dots, t-1\}$. The linear spline $\hat{u}$ is defined by:
$\hat{u}(x) = u(\dot{x}_l)\frac{\dot{x}_{l+1} - x}{\dot{x}_{l+1} - \dot{x}_l} + u(\dot{x}_{l+1})\frac{x - \dot{x}_l}{\dot{x}_{l+1} - \dot{x}_l}, \quad x \in (\dot{x}_l, \dot{x}_{l+1}).$
The linear spline $\hat{u}(x)$ defines a function continuous on $[a,b]$, whose interpolation error $e(x) = |u(x) - \hat{u}(x)|$ in each subinterval, if $u \in C^2[\dot{x}_l, \dot{x}_{l+1}]$, is given by:
$e(x) = \frac{(\dot{x}_{l+1} - x)(x - \dot{x}_l)}{2}\,|u''(\xi)|, \quad x, \xi \in (\dot{x}_l, \dot{x}_{l+1}),$
with $\xi$ depending on $x$, and can be bounded by:
$\|e\| \le \frac{(\dot{x}_{l+1} - \dot{x}_l)^2}{8}\max\{|u''(\xi)|,\ \xi \in (\dot{x}_l, \dot{x}_{l+1})\}.$
From (71), it follows that if the mesh size $h$ is small, the interpolation error will also be small.
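As a quick numerical illustration of the error bound above, the following sketch builds the linear spline of a smooth test function with numpy.interp on a uniform mesh and checks the measured error against $h^2\max|u''|/8$; the test function and the mesh size are arbitrary choices made only for this example.

import numpy as np

a, b, t = 1.0, 2.0, 11                    # uniform partition of [a, b] with t points
mesh = np.linspace(a, b, t)
h = mesh[1] - mesh[0]

u = lambda x: np.sin(3.0 * x)             # smooth test function
u2 = lambda x: -9.0 * np.sin(3.0 * x)     # its second derivative

xs = np.linspace(a, b, 2001)              # fine grid on which the error is sampled
u_hat = np.interp(xs, mesh, u(mesh))      # piecewise linear interpolant
err = np.max(np.abs(u(xs) - u_hat))
bound = h ** 2 / 8.0 * np.max(np.abs(u2(xs)))
print(err, bound, err <= bound)           # the measured error respects the bound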
The advantage of the use of splines is that it allows reducing the determination of $\Delta_p(u)$ to calculations over the $t$ points of the mesh, that is, to vectors composed of the values of $\varphi_i$, $i = 1, \dots, p-1$, and $u$ at the points $\dot{x}_l$. In [4] (Chapter 5, Section 3), one can find a couple of Lemmata, Lemma 2 and Lemma 3, which allow constructing a vector $\{v(\dot{x}_l)\} \in \mathbb{R}^t$ which forms a Markov system of vectors with the vectors $\{\varphi_i(\dot{x}_l)\} \in \mathbb{R}^t$, $i = 1, \dots, p-1$. To do so, it suffices to select (more or less randomly) $p-1$ values for the points $v(\dot{x}_1), \dots, v(\dot{x}_{p-1})$ and pick a value $v(\dot{x}_p)$ such that the following inequality holds:
$\begin{vmatrix} \varphi_1(\dot{x}_1) & \cdots & \varphi_{p-1}(\dot{x}_1) & v(\dot{x}_1) \\ \varphi_1(\dot{x}_2) & \cdots & \varphi_{p-1}(\dot{x}_2) & v(\dot{x}_2) \\ \vdots & & \vdots & \vdots \\ \varphi_1(\dot{x}_p) & \cdots & \varphi_{p-1}(\dot{x}_p) & v(\dot{x}_p) \end{vmatrix} > 0,$
which is always possible if $\dot{x}_1 \in I_1$, given that:
$\begin{vmatrix} \varphi_1(\dot{x}_1) & \cdots & \varphi_{p-1}(\dot{x}_1) \\ \varphi_1(\dot{x}_2) & \cdots & \varphi_{p-1}(\dot{x}_2) \\ \vdots & & \vdots \\ \varphi_1(\dot{x}_{p-1}) & \cdots & \varphi_{p-1}(\dot{x}_{p-1}) \end{vmatrix} > 0,$
as $\varphi_1, \dots, \varphi_{p-1}$ form a Chebyshev system on $I_1$. If $\dot{x}_1 \notin I_1$, then the determinant (72) will vanish regardless of the value of $v(\dot{x}_i)$, $i = 2, \dots, p$, so we must start the process by calculating $v(\dot{x}_{p+1})$ such that:
$\begin{vmatrix} \varphi_1(\dot{x}_2) & \cdots & \varphi_{p-1}(\dot{x}_2) & v(\dot{x}_2) \\ \varphi_1(\dot{x}_3) & \cdots & \varphi_{p-1}(\dot{x}_3) & v(\dot{x}_3) \\ \vdots & & \vdots & \vdots \\ \varphi_1(\dot{x}_{p+1}) & \cdots & \varphi_{p-1}(\dot{x}_{p+1}) & v(\dot{x}_{p+1}) \end{vmatrix} > 0.$
The values of the following coefficients $v(\dot{x}_j)$ can be determined using a similar inequality:
$\begin{vmatrix} \varphi_1(\dot{x}_i) & \cdots & \varphi_{p-1}(\dot{x}_i) & v(\dot{x}_i) \\ \varphi_1(\dot{x}_{i+1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+1}) & v(\dot{x}_{i+1}) \\ \vdots & & \vdots & \vdots \\ \varphi_1(\dot{x}_{i+p-1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+p-1}) & v(\dot{x}_{i+p-1}) \end{vmatrix} > 0.$
Once all entries $v(\dot{x}_l)$ have been selected, one has to make the spline defined by $\{v(\dot{x}_l)\}$ orthogonal to the adjoint eigenfunctions $\psi_i$, also constructed as splines. The way to do it is by calculating the inner product:
$\langle \psi_i, v \rangle = \sum_{l=1}^{t-1} \int_{\dot{x}_l}^{\dot{x}_{l+1}} \left( v(\dot{x}_l)\frac{\dot{x}_{l+1} - x}{\dot{x}_{l+1} - \dot{x}_l} + v(\dot{x}_{l+1})\frac{x - \dot{x}_l}{\dot{x}_{l+1} - \dot{x}_l} \right)\left( \psi_i(\dot{x}_l)\frac{\dot{x}_{l+1} - x}{\dot{x}_{l+1} - \dot{x}_l} + \psi_i(\dot{x}_{l+1})\frac{x - \dot{x}_l}{\dot{x}_{l+1} - \dot{x}_l} \right) r(x)\,dx,$
for $i < p$, and putting
$u(\dot{x}_l) = v(\dot{x}_l) - \sum_{i=1}^{p-1} \langle \psi_i, v \rangle \varphi_i(\dot{x}_l), \quad l = 1, \dots, t.$
From (69) and (75), one can build the linear spline $\hat{u}(x)$ to be used as the starting point for the calculations of $M_p^j \hat{u}$ and $\Delta_p(M_p^j \hat{u})$.
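The following Python sketch mirrors this construction under simplifying assumptions: the previously computed eigenfunctions and adjoint eigenfunctions are supplied as arrays of nodal values (phis, psis, both hypothetical names), the eigenfunctions are assumed to be signed so that their consecutive minors are positive, the vector v is adjusted window by window in a crude way rather than with the exact Lemmata of [4], and the weighted inner products of the orthogonalization step are approximated by the trapezoidal rule instead of the exact piecewise-linear integrals.

import numpy as np

def starting_function(mesh, r_vals, phis, psis):
    """Nodal values of a candidate starting function u (a rough sketch).
    phis, psis: lists of p-1 arrays with the nodal values of the previous
    eigenfunctions and adjoint eigenfunctions; r_vals: the weight r on the mesh."""
    p = len(phis) + 1
    rng = np.random.default_rng(0)
    v = rng.uniform(0.5, 1.5, size=mesh.size)        # initial guess for v
    # Make every determinant built on p consecutive nodes positive, assuming the
    # consecutive (p-1) x (p-1) minors of the phis are already positive.
    for i in range(mesh.size - p + 1):
        idx = np.arange(i, i + p)
        cols = [phi[idx] for phi in phis]
        while np.linalg.det(np.column_stack(cols + [v[idx]])) <= 0.0:
            v[idx[-1]] += 1.0                         # raising the last entry raises the minor
    # Orthogonalize against the adjoint eigenfunctions with the weighted inner
    # product <psi_i, v> = int psi_i v r dx (composite trapezoidal rule).
    u = v.copy()
    for psi, phi in zip(psis, phis):
        f = psi * v * r_vals
        coef = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(mesh)))
        u -= coef * phi
    return u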
A similar procedure can be used to find a starting function $w$ for the calculations of $(M_p^*)^j \hat{w}$ and $\Delta_p^*((M_p^*)^j \hat{w})$ in the adjoint problem, taking into consideration $\{\psi_i(\dot{x}_l)\}$ and the orthogonality of $w$ and $\varphi_i$, $i = 1, \dots, p-1$.

2.2.2. How to Simplify the Calculation of the Collatz-Wielandt Numbers

The Lemmata of [4] that simplified the determination of the starting function $\hat{u}$ are based on a theorem by Fekete ([4], Theorem 8), which relates the minors of matrices whose columns are long vectors (in our case, $\{\varphi_i(\dot{x}_l)\}$, $i = 1, \dots, p-1$, and $\{M_p^j u(\dot{x}_l)\}$) with the minors made of consecutive entries of these vectors (see (74)). Given that the calculation of the Collatz–Wielandt numbers basically requires finding $\omega$ such that $\Delta_p(M_p^{j+1}u - \omega M_p^j u; \dot{x})$ is greater or lower than zero for all $\dot{x} \in \Omega$, we can also exploit this property to reduce the number of combinations of points $\dot{x}_j$ in the simplex $\Omega$ where that determinant has to be calculated.
Thus, $\underline{r}(M_p, M_p^j u)$ will be given by the supremum of $\omega$ such that:
$\begin{vmatrix} \varphi_1(\dot{x}_i) & \cdots & \varphi_{p-1}(\dot{x}_i) & M_p^{j+1}u(\dot{x}_i) - \omega M_p^j u(\dot{x}_i) \\ \varphi_1(\dot{x}_{i+1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+1}) & M_p^{j+1}u(\dot{x}_{i+1}) - \omega M_p^j u(\dot{x}_{i+1}) \\ \vdots & & \vdots & \vdots \\ \varphi_1(\dot{x}_{i+p-1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+p-1}) & M_p^{j+1}u(\dot{x}_{i+p-1}) - \omega M_p^j u(\dot{x}_{i+p-1}) \end{vmatrix} > 0, \quad i = 1, \dots, t-p+1,$
whereas $\overline{r}(M_p, M_p^j u)$ will be given by the infimum of $\omega$ such that:
$\begin{vmatrix} \varphi_1(\dot{x}_i) & \cdots & \varphi_{p-1}(\dot{x}_i) & M_p^{j+1}u(\dot{x}_i) - \omega M_p^j u(\dot{x}_i) \\ \varphi_1(\dot{x}_{i+1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+1}) & M_p^{j+1}u(\dot{x}_{i+1}) - \omega M_p^j u(\dot{x}_{i+1}) \\ \vdots & & \vdots & \vdots \\ \varphi_1(\dot{x}_{i+p-1}) & \cdots & \varphi_{p-1}(\dot{x}_{i+p-1}) & M_p^{j+1}u(\dot{x}_{i+p-1}) - \omega M_p^j u(\dot{x}_{i+p-1}) \end{vmatrix} < 0, \quad i = 1, \dots, t-p+1.$
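A sketch of how these last two criteria can be evaluated in practice: for a trial $\omega$, form the nodal values of $M_p^{j+1}u - \omega M_p^j u$, build the determinant of every window of $p$ consecutive mesh points, and locate the supremum and infimum by bisection in $\omega$. The bracketing interval of the bisection and all array names are assumptions of this sketch, not prescriptions of the paper.

import numpy as np

def consecutive_minors_have_sign(phis, g, sign):
    """True if every determinant on p consecutive mesh points, with columns
    phi_1, ..., phi_{p-1}, g, has the prescribed sign (+1 or -1)."""
    p = len(phis) + 1
    for i in range(g.size - p + 1):
        idx = np.arange(i, i + p)
        m = np.column_stack([phi[idx] for phi in phis] + [g[idx]])
        if sign * np.linalg.det(m) <= 0.0:
            return False
    return True

def cw_bounds(phis, Mju, Mj1u, w_lo=0.0, w_hi=1.0, tol=1e-12):
    """Bisection sketch for the lower and upper Collatz-Wielandt numbers:
    the largest w keeping all minors of M^{j+1}u - w M^j u positive, and the
    smallest w making them all negative. [w_lo, w_hi] must bracket both."""
    def bisect(sign):
        lo, hi = w_lo, w_hi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            ok = consecutive_minors_have_sign(phis, Mj1u - mid * Mju, sign)
            if sign > 0:
                lo, hi = (mid, hi) if ok else (lo, mid)
            else:
                lo, hi = (lo, mid) if ok else (mid, hi)
        return 0.5 * (lo + hi)
    return bisect(+1), bisect(-1)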

3. An Example

Let us consider the problem:
$y^{(4)} - \lambda\frac{1}{x}y = 0, \quad x \in [1,2]; \qquad y(1) = y''(1) = y^{(3)}(1) = y(2) = 0,$
which matches the structure (1)–(3) since $r(x) = \frac{1}{x} > 0$. The Green function of the associated problem:
$\frac{\partial^4 G(x,t)}{\partial x^4} = 0, \quad x \in [1,2]; \qquad G(1,t) = \frac{\partial^2 G(1,t)}{\partial x^2} = \frac{\partial^3 G(1,t)}{\partial x^3} = G(2,t) = 0,$
is defined by:
$G(x,t) = \begin{cases} -\dfrac{(2-t)^3(x-1)}{6}, & x \in [1,t), \\ -\dfrac{(2-t)^3(x-1)}{6} + \dfrac{(x-t)^3}{6}, & x \in (t,2]. \end{cases}$
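For the numerical illustrations below, this Green function can be coded directly; the snippet is a plain transcription of the formula above and nothing more.

def green(x, t):
    """Green function of y'''' = 0 on [1, 2] with y(1) = y''(1) = y'''(1) = y(2) = 0."""
    g = -(2.0 - t) ** 3 * (x - 1.0) / 6.0
    if x > t:
        g += (x - t) ** 3 / 6.0
    return g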
In order to apply the procedure described in this paper, one must verify that G ( x , t ) is in fact a sign-regular kernel. In [3], one can find several theorems (Theorems 1.1–1.3) that provide algorithmically effective conditions for such an identification. However, in this case it is easier to use the following theorem of Kalafati–Gantmakher–Krein–Karlin (see for instance [13], Theorem 4 or [1], Theorem 8):
Theorem 6.
Suppose that the operator $L$ on $[a,b]$ has the form (22) and the boundary conditions are:
$\sum_{k=1}^n \alpha_{ik} L_{k-1}y(a) = 0, \quad i = 1, \dots, m, \qquad \sum_{k=1}^n \beta_{jk} L_{k-1}y(b) = 0, \quad j = 1, \dots, n-m.$
Suppose also that all the non-zero minors of order $m$ of the matrix
$A = \big((-1)^k \alpha_{ik}\big), \quad i = 1, \dots, m,\ k = 1, \dots, n,$
have the same sign and the same holds for the minors of order $(n-m)$ of the matrix
$B = \big(\beta_{jk}\big), \quad j = 1, \dots, n-m,\ k = 1, \dots, n.$
Then $(-1)^{n-m}G(x,t)$ is an oscillatory kernel provided that the boundary value problem is non-singular.
The problem (78) has the form (22) with $\rho_0 = \rho_1 = \rho_2 = \rho_3 = 1$, so that $L_i y(x) = y^{(i)}(x)$. The resulting matrix $A$ is
$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$
whereas $B$ is just the matrix
$B = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}.$
The only non-zero minor of order $m = 3$ of $A$ is
$\begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 1,$
whereas the only non-zero minor of order $n - m = 1$ of $B$ equals 1. In addition, the homogeneous boundary conditions are poised in Elias' sense [36] (that is, the number of boundary conditions set on derivatives of an order lower than $t$ is at least $t$, for $t = 1, \dots, n$). This implies that $\lambda = 0$ is not an eigenvalue and the problem is not singular [36] (Lemma 10.3). Accordingly, one can apply the Kalafati–Gantmakher–Krein–Karlin theorem and conclude that $-G(x,t)$ of (79) is an oscillatory kernel, that is, $\epsilon_i = (-1)^i$ for all $i$.
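As an independent numerical sanity check (not required by the theory), one can sample the sign condition on a few interior points, reusing the green() helper above: the determinants built on equal sets of ordered points should have the sign $(-1)^p$. The sample points and orders below are arbitrary choices for this illustration.

import numpy as np
from itertools import combinations

def kernel_minor(xs):
    """Determinant G(x_1, ..., x_p; x_1, ..., x_p) of the Green function above."""
    return np.linalg.det(np.array([[green(x, t) for t in xs] for x in xs]))

pts = np.linspace(1.05, 1.95, 7)                 # interior sample points
for p in range(1, 4):
    ok = all((-1) ** p * kernel_minor(xs) > 0.0
             for xs in combinations(pts, p))
    print(p, ok)                                 # expect True for each p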
Given that the coefficients of (78) are infinitely continuously differentiable, one can apply the method described in previous sections to determine all eigenfunctions and eigenvalues. As an example, we will calculate λ 1 and λ 2 , as well as the corresponding eigenfunctions φ 1 and φ 2 , and the adjoint eigenfunctions ψ 1 and ψ 2 .
The operator M p can be calculated as:
$M_p u = \frac{x-1}{6}\int_1^x \frac{(2-t)^3}{t}u(t)\,dt - \frac{x^3}{6}\int_1^x \frac{u(t)}{t}\,dt + \frac{x^2}{2}\int_1^x u(t)\,dt - \frac{x}{2}\int_1^x t\,u(t)\,dt + \frac{1}{6}\int_1^x t^2 u(t)\,dt + \frac{x-1}{6}\int_x^2 \frac{(2-t)^3}{t}u(t)\,dt, \quad x \in [1,2].$
The Green function $G^*(x,t)$ of the problem adjoint to (78) is linked to $G(x,t)$ of (80) by $G^*(x,t) = G(t,x)$. Therefore,
$G^*(x,t) = \begin{cases} -\dfrac{(2-x)^3(t-1)}{6}, & t \in [1,x), \\ -\dfrac{(2-x)^3(t-1)}{6} - \dfrac{(x-t)^3}{6}, & t \in (x,2], \end{cases}$
and
$M_p^* u = \frac{(2-x)^3}{6}\int_1^2 \frac{t-1}{t}u(t)\,dt + \frac{x^3}{6}\int_x^2 \frac{u(t)}{t}\,dt - \frac{x^2}{2}\int_x^2 u(t)\,dt + \frac{x}{2}\int_x^2 t\,u(t)\,dt - \frac{1}{6}\int_x^2 t^2 u(t)\,dt, \quad x \in [1,2].$
Since $p \le 2$ and $n = 4$, in all cases it will suffice to use $q = 1$ ($q \cdot n = 1 \times 4 = 4 > 3 \ge p$).
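For $p = 1$, $\Delta_1(u) = u$ and the cone is simply that of the nonnegative continuous functions, so the Collatz–Wielandt numbers are the extrema of the pointwise ratio of two consecutive iterates. The sketch below applies $M_1 = -M$ by composite trapezoidal quadrature of the Green function coded above, instead of the closed-form antiderivatives of the displayed formula; the mesh size and the number of iterations are arbitrary choices, and the printed bounds should tighten around $1/|\lambda_1|$.

import numpy as np

mesh = np.linspace(1.0, 2.0, 401)
dx = mesh[1] - mesh[0]
wts = np.full(mesh.size, dx)
wts[[0, -1]] = dx / 2.0                             # trapezoidal weights
r = 1.0 / mesh                                      # r(x) = 1/x
K = np.array([[-green(x, t) for t in mesh] for x in mesh])  # kernel of M_1 = -M

def apply_M1(u_vals):
    """(M_1 u)(x_i) = sum_k [-G(x_i, t_k)] u(t_k) r(t_k) wts_k."""
    return K @ (u_vals * r * wts)

u = np.ones(mesh.size)                              # Delta_1(u) = u >= 0
for j in range(25):
    Mu = apply_M1(u)
    ratios = Mu[1:-1] / u[1:-1]                     # skip the endpoints forced to zero
    r_low, r_up = ratios.min(), ratios.max()
    u = Mu / Mu.max()                               # rescale the iterate
print(r_low, r_up)                                  # bracket of 1/|lambda_1|
print(1.0 / r_up, 1.0 / r_low)                      # bracket of |lambda_1| (about 163.4, cf. Table 1)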
The execution of the procedure, using a partition of $[1,2]$ with size $h = 10^{-6}$, gives the Collatz–Wielandt numbers associated to $p = 1$ and $p = 2$, which are displayed in Table 1. According to them, one deduces that $\lambda_1 = -163.36711$ and $\lambda_2 = -5229.8041$.
The resulting eigenfunctions $\varphi_1$ and $\varphi_2$, as well as the adjoint eigenfunctions $\psi_1$ and $\psi_2$, are shown in Figure 1. They have been normalized to unit sup norm.
It is worth remarking on two phenomena observed during the numerical calculations:
  • The calculation of $\Delta_p$ is very sensitive to rounding errors when $x_i, \dots, x_{i+p}$ are close to the extremes $a$ or $b$, if there are homogeneous boundary conditions set at these points. The reason for that is that the values of $\Delta_p$ are zero or almost zero there. At these points of the partition, it makes sense to replace the calculation of $\Delta_p$ by the calculation of the equivalent determinant composed of the lowest-order derivatives of $\varphi_i$ and $M_p^j u$ which do not vanish at the extreme.
  • If $u$ is not exactly orthogonal to $\psi_i$, $i = 1, \dots, p-1$, beyond a certain iteration it can happen that, in the decomposition of $M^j u$ as a sum of terms of the form $\frac{1}{\lambda_i^j}\langle u, \psi_i \rangle \varphi_i$, the terms associated with those $i < p$ for which $\langle u, \psi_i \rangle \ne 0$ start to get a size similar to that of the term $\frac{1}{\lambda_p^j}\langle u, \psi_p \rangle \varphi_p$, as anticipated by Remark 3. Further iterations will make $M_p^j u$ diverge from $\varphi_p$ and get closer to the eigenfunctions $\varphi_i$, $i < p$, for which $\langle u, \psi_i \rangle \ne 0$. The precise orthogonality is therefore key for the accuracy of the method.

4. Discussion

The results presented in this paper allow finding the n smallest eigenvalues (and their associated eigenfunctions) of boundary value problems with sign-regular Green functions, as well as the following ones provided that certain conditions on the functions $a_j(x)$ of $L$ and $r(x)$ (namely, the absolute continuity of their derivatives) are met.
The procedure is sequential in the sense that it requires running it for the first p 1 eigenvalues in order to use it to calculate the p-th one.
For each p, it can be summarized in the following algorithm, which assumes knowledge of the $p-1$ previous eigenfunctions $\varphi_i$ and the $p-1$ previous adjoint eigenfunctions $\psi_i$ (a schematic code sketch is given after the list):
  • Calculate $q > 1$ such that $q \cdot n > p$;
  • Select $u \in P_p \setminus \{0\}$ using the process described in Section 2.2.1, so that $u$ is orthogonal to $\psi_i$, $i = 1, \dots, p-1$;
  • Calculate $M_p^j u$ for $j \ge q$;
  • Calculate the Collatz–Wielandt numbers using (76) and (77) and the considerations described in Section 2.2.2. These will be bounds for the inverse of the absolute value of the eigenvalue $\lambda_p$, whose sign is determined by $\epsilon_{p-1}$ and $\epsilon_p$ of (7), the error in the calculation being given by the difference $\overline{r}(M_p, M_p^j u) - \underline{r}(M_p, M_p^j u)$. Due to the convergence of the Collatz–Wielandt numbers as $j$ increases, the eigenvalue $\lambda_p$ can be estimated with as much accuracy as desired;
  • The quotient $\frac{M_p^j u}{\overline{r}(M_p, M_p^j u)^j}$ will converge to $\varphi_p$ as the iteration index $j$ grows;
  • Select $w \in P_p^* \setminus \{0\}$ using the process described in Section 2.2.1, so that $w$ is orthogonal to $\varphi_i$, $i = 1, \dots, p-1$;
  • Calculate $(M_p^*)^j w$ for $j \ge q$. The quotient $\frac{(M_p^*)^j w}{\overline{r}(M_p, M_p^j u)^j}$ will converge to $\psi_p$ as the iteration index $j$ grows;
  • Once obtained, this $\psi_p$ will have to be normalized by dividing it by $\langle \varphi_p, \psi_p \rangle$ so as to satisfy (13).
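The list above can be condensed into the following schematic driver, which reuses the starting_function and cw_bounds sketches from Section 2.2 and assumes that a routine apply_Mp is available that returns the nodal values of $M_p u$ (for instance, by quadrature of the Green function, as in the $p = 1$ sketch of Section 3). It is a minimal sketch under those assumptions, it returns only $|\lambda_p|$ (the sign comes from $\epsilon_{p-1}\epsilon_p$), and it is not a definitive implementation of the method.

import numpy as np

def eigenpair_p(apply_Mp, phis, psis, mesh, r_vals, q=1, iters=30):
    """Schematic driver for one eigenpair: returns an estimate of |lambda_p|,
    the normalized iterate approximating phi_p, and the last Collatz-Wielandt
    bounds. phis, psis: nodal values of the p-1 previously computed (adjoint)
    eigenfunctions; all names are assumptions of this sketch."""
    u = starting_function(mesh, r_vals, phis, psis)   # Section 2.2.1 sketch
    for _ in range(q):                                # reach M_p^q u
        u = apply_Mp(u)
    r_low = r_up = None
    for _ in range(iters):
        Mu = apply_Mp(u)
        r_low, r_up = cw_bounds(phis, u, Mu)          # Section 2.2.2 sketch
        u = Mu / np.max(np.abs(Mu))                   # keep the iterates well scaled
    lam_abs = 2.0 / (r_low + r_up)                    # midpoint estimate of |lambda_p|
    phi_p = u / np.max(np.abs(u))                     # sup-norm normalized approximation
    return lam_abs, phi_p, (r_low, r_up)

The adjoint eigenfunction $\psi_p$ would be obtained in the same way from the iterates $(M_p^*)^j w$, and the default bisection bracket used inside cw_bounds may need to be widened for problems where $1/|\lambda_p|$ exceeds it.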
The method, however, also has some limitations, mainly:
  • For $p > n-1$ it requires some absolute continuity conditions on $r(x)$ and $a_j(x)$ in order to be applicable, in addition to those required for the existence of the adjoint problem;
  • The method depends on the accuracy of the calculation of the $p-1$ previous eigenfunctions, given that the determinant $\Delta_p(u)$ depends on them. For greater values of $p$ one can expect more accumulated errors in $\varphi_i$, $i < p$, and potentially bigger errors in $\lambda_p$ and $\varphi_p$;
  • As the size of the determinant $\Delta_p(M_p^j u)$ grows with $p$, the computations become more complicated as $p$ increases. The use of optimized algorithms for the calculation of the determinants is key to reducing this problem;
  • In a practical scenario, the calculation of the Collatz–Wielandt numbers needs to be performed on a mesh of the simplex $\Omega$, as described in Section 2.2.2. This raises some questions about the validity of these numbers at other points of the simplex. The problem can be addressed by avoiding the use of the supremum and infimum in the calculation of the Collatz–Wielandt numbers, so that the difference between $M_p^{j+1}u$ and $\omega M_p^j u$ is not zero at any point of the mesh, but a proper analysis of the effect of the interpolation error needs to be performed.
In any case, the authors believe it can be a practical alternative for the calculation of eigenpairs, especially for lower values of $p$, and also a source for later work, since an aspect not stressed in this paper is that this approach also allows determining the existence of the eigenvalues $\lambda_p$, the Markov character of the sequence $\varphi_i$ and, with the right conditions on $a_j(x)$ and $r(x)$, the algebraic and geometric simplicity of each eigenvalue and the fact that its absolute value differs from those of the others. These properties are widely known from the previous literature (the reason our focus has been more practical, on the determination of $\lambda_p$), but in this sense it is worth highlighting that the approach used here differs from that used in the classical papers [1,4,14], to give a few examples. These based their analysis on expressions such as (32), whose iterative application $p$ times leads to strictly positive kernels, and applied it to the cone of positive functions in $C([a,b]^p)$, making use of Krein–Rutman cone theory and other classical results of Schur. While this approach has the advantage of not imposing extra conditions on $a_j(x)$ and $r(x)$, it does not lend itself easily to working with cone interiors, which are key for the calculation of the Collatz–Wielandt numbers (or rather, for their relationship with the eigenvalue $\lambda_p$), as Theorems 2 and 3 show.
To complete this paper, let us mention several areas of interest for future research:
  • Explore ways of extending the procedure to the case r ( x ) = 0 in a set of points of [ a , b ] . The effect of this is that the set I 1 , on which the eigenfunctions φ i form a Chebyshev system, does not contain these points where r ( x ) vanishes, complicating the extensions of some of the results presented here;
  • Analyze the effect of the interpolation error committed in the calculation of each eigenvalue λ p by performing the calculation of Collatz–Wielandt numbers only in the points of the mesh { x ˙ l } ;
  • Simplify or categorize the conditions defined by Stepanov for the sign-regularity of the Green function [3] so that their validation does not always require the calculation of Wronskians of the solutions of L y = 0 under certain boundary conditions. This would allow an easier identification of sign-regular problems, where the procedures of this manuscript can be applied;
  • Last but not least, we have made use of the cone P p because it allows fixing conditions for such a cone to be solid and for M p to map P p into its interior. However, this does not exclude the existence of other solid cones K p to which Theorem 2 can be applied. It would be very interesting to find examples of these, in order to relax the hypotheses that P p imposes on r and a j .

Author Contributions

Conceptualization, P.A.; methodology, P.A. and L.J.; investigation, P.A.; validation, P.A. and L.J.; writing–original draft preparation, P.A.; writing–review and editing, L.J.; visualization, P.A. and L.J.; supervision, P.A.; project administration, L.J.; funding acquisition, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Levin, A.Y.; Stepanov, G.D. One-dimensional boundary-value problems with operators that do not decrease the number of sign changes. Uspekhi Mat. Nauk. 1975, 30, 245–276. [Google Scholar]
  2. Karlin, S. Total Positivity; Stanford University Press: Stanford, CA, USA, 1968. [Google Scholar]
  3. Stepanov, G.D. Effective criteria for the strong sign-regularity and the oscillation property of the Green’s functions of two-point boundary-value problems. Sb. Math. 1997, 188, 1687–1728. [Google Scholar] [CrossRef]
  4. Gantmacher, F.R.; Krein, M.G. Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, Revised ed.; AMS Chelsea Publishing: Providence, RI, USA, 2002. [Google Scholar] [CrossRef]
  5. Joseph, D.D. Stability of Fluid Motions I; Springer Tracts in Natural Philosophy; Springer: Berlin/Heidelberg, Germany, 1976; Volume 27. [Google Scholar]
  6. Pokornyi, Y.V.; Borovskikh, A.V. Differential equations on networks (geometric graphs). J. Math. Sci. 2004, 119, 691–718. [Google Scholar] [CrossRef]
  7. Kellogg, O.D. The oscillation of functions of an orthogonal set. Am. J. Math. 1916, 38, 1–5. [Google Scholar] [CrossRef]
  8. Kellogg, O.D. Orthogonal function sets arising from integral equations. Am. J. Math. 1918, 40, 145–154. [Google Scholar] [CrossRef]
  9. Kellogg, O.D. Interpolation properties of orthogonal sets of solutions of differential equations. Am. J. Math. 1918, 40, 220–234. [Google Scholar] [CrossRef]
  10. Gantmacher, F.R.; Krein, M.G. On a class of determinants in connection with Kellogg’s integral kernels. Mat. Sb. 1933, 40, 501–508. [Google Scholar]
  11. Gantmacher, F.R. On non-symmetric Kellogg kernels. Dokl. Akad. Nauk. SSSR 1936, 10, 3–5. [Google Scholar]
  12. Gantmacher, F.R.; Krein, M.G. Sur les matrices oscillatoires et completement non negatives. Compos. Math. 1937, 4, 445–476. [Google Scholar]
  13. Karlin, S. Total positivity, Interpolation by splines and Green’s functions for ordinary differential equations. J. Approx. Theory 1971, 4, 91–112. [Google Scholar] [CrossRef] [Green Version]
  14. Borovskikh, A.V.; Pokornyi, Y.V. Chebyshev-Haar systems in the theory of discontinuous Kellogg kernels. Uspekhi Mat. Nauk. 1994, 49, 3–42. [Google Scholar] [CrossRef]
  15. Pokornyi, Y.V. On the spectrum of certain problems on graphs. Uspekhi Mat. Nauk. 1987, 42, 128–129. [Google Scholar]
  16. Vladimirov, A.A. On the problem of oscillation properties of positive differential operators with singular coefficients. Math. Notes 2016, 100, 790–795. [Google Scholar] [CrossRef]
  17. Kulaev, R.C. On the oscillation property of Green’s function of a fourth-order discontinuous boundary-value problem. Math. Notes 2016, 100, 391–402. [Google Scholar] [CrossRef]
  18. Forster, K.H.; Nagy, B. On the Collatz-Wielandt numbers and the local spectral radius of a nonnegative operator. Linear Algebra Its Appl. 1989, 120, 193–205. [Google Scholar] [CrossRef] [Green Version]
  19. Collatz, L. Einschliessungssatze fur charakteristische Zahlen von Matrizen. Math. Z. 1942, 48, 221–226. [Google Scholar] [CrossRef]
  20. Wielandt, H. Unzerlegbare, nicht negative Matrizen. Math. Z. 1950, 52, 642–648. [Google Scholar] [CrossRef]
  21. Marek, I.; Varga, R.S. Nested Bounds for the Spectral Radius. Numer. Math. 1969, 14, 49–70. [Google Scholar] [CrossRef]
  22. Marek, I. Frobenius theory of positive operators: Comparison theorems and applications. SIAM J. Appl. Math. 1970, 19, 607–628. [Google Scholar] [CrossRef]
  23. Marek, I. Collatz-Wielandt Numbers in General Partially Ordered Spaces. Linear Algebra Its Appl. 1992, 173, 165–180. [Google Scholar] [CrossRef] [Green Version]
  24. Akian, M.; Gaubert, S.; Nussbaum, R. A Collatz-Wielandt characterization of the spectral radius of order-preserving homogeneous maps on cones. arXiv 2014, arXiv:1112.5968. [Google Scholar]
  25. Chang, K.C. Nonlinear extensions of the Perron-Frobenius theorem and the Krein-Rutman theorem. J. Fixed Point Theory Appl. 2014, 15, 433–457. [Google Scholar] [CrossRef]
  26. Thieme, H.R. Spectral radii and Collatz-Wielandt numbers for homogeneous order-preserving maps and the monotone companion norm. In Ordered Structures and Applications; Trends in Mathematics; de Jeu, M., de Pagter, B., van Gaans, O., Veraar, M., Eds.; Birkhauser: Cham, Switzerland, 2016; pp. 415–467. [Google Scholar] [CrossRef]
  27. Chang, K.C.; Wang, X.; Wu, X. On the spectral theory of positive operators and PDE applications. Discret. Contin. Dyn. Syst. 2020, 40, 3171–3200. [Google Scholar] [CrossRef] [Green Version]
  28. Webb, J.R.L. Estimates of eigenvalues of linear operators associated with nonlinear boundary value problems. Dyn. Syst. Appl. 2014, 23, 415–430. [Google Scholar]
  29. Almenar, P.; Jódar, L. Solvability of N-th order boundary value problems. Int. J. Differ. Equ. 2015, 2015, 1–19. [Google Scholar] [CrossRef] [Green Version]
  30. Almenar, P.; Jódar, L. Improving results on solvability of a class of n-th order linear boundary value problems. Int. J. Differ. Equ. 2016, 2016, 1–10. [Google Scholar] [CrossRef]
  31. Almenar, P.; Jódar, L. Solvability of a class of n-th order linear focal problems. Math. Model. Anal. 2017, 22, 528–547. [Google Scholar] [CrossRef] [Green Version]
  32. Almenar, P.; Jódar, L. Estimation of the smallest eigenvalue of an n-th order linear boundary value problem. Math. Methods Appl. Sci. 2021, 44, 4491–4514. [Google Scholar] [CrossRef]
  33. Coddington, E.A.; Levinson, N. Theory of Ordinary Differential Equations; Tata McGraw-Hill: New Delhi, India, 1987. [Google Scholar]
  34. Pólya, G. On the mean value theorem corresponding to a given linear homogeneous differential operator. Trans. Am. Math. Soc. 1924, 24, 312–324. [Google Scholar] [CrossRef]
  35. Li, D.; Jia, M. A dynamical approach to the Perron-Frobenius theory and generalized Krein-Rutman type theorems. J. Math. Anal. Appl. 2021, 496, 1–22. [Google Scholar] [CrossRef]
  36. Elias, U. Oscillation Theory of Two-Term Differential Equations; Mathematics and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997; Volume 396. [Google Scholar]
Figure 1. Eigenfunctions v 1 and v 2 and adjoint eigenfunctions w 1 and w 2 .
Table 1. Calculation of Collatz–Wielandt numbers for the first and second eigenvalue.

j    r̲ ( M 1 , M 1 j u )    r̄ ( M 1 , M 1 j u )    r̲ ( M 2 , M 2 j u )    r̄ ( M 2 , M 2 j u )
1    0.0059445              0.0074422              0.0000013              0.0006649
2    0.0061165              0.0062012              0.0001568              0.000287
3    0.006121               0.0061244              0.0001863              0.0002122
4    0.0061212              0.0061213              0.0001902              0.0001961
5    0.0061212              0.0061212              0.0001907              0.0001917
6    0.0061212              0.0061212              0.0001912              0.0001912
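As a usage illustration of the sketch given after the list of limitations, the convergence pattern of Table 1 (lower and upper numbers approaching a common limit) can be reproduced on a classical second-order example whose Green's function is known in closed form. The example problem, the mesh size and the tolerance below are placeholders, not data from this paper; the relationship between the common limit and 1 / λ 1 follows from the integral formulation y = λ M y .

```python
import numpy as np

# Hypothetical example: Green's function of -y'' with y(0) = y(1) = 0 on [0, 1],
# for which the largest eigenvalue of M is 1/pi^2 (so lambda_1 = pi^2).
def G(x, t):
    return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

r = lambda t: np.ones_like(t)

# Iterate until the bracket is tight, as in Table 1, then report the estimate;
# collatz_wielandt_bounds is the sketch defined earlier.
for lo, up in collatz_wielandt_bounds(G, r, 0.0, 1.0, n_iter=20):
    if up - lo < 1e-6 * up:
        break
print("1/lambda_1 bracketed by", lo, up)       # both close to 1/pi^2 ~ 0.1013
print("lambda_1 estimate:", 2.0 / (lo + up))   # close to pi^2 ~ 9.8696
```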