Constant Sign Solutions to Linear Fractional Integral Problems and Their Applications to the Monotone Method

Abstract: This manuscript provides some results concerning the sign of solutions for linear fractional integral equations with constant coefficients. This information is later used to prove the existence of solutions to some nonlinear problems, together with underestimates and overestimates. These results are obtained after applying suitable modifications to the classical process of monotone iterative techniques. Finally, we provide an example where we prove the existence of solutions, and we compute some estimates.


Introduction
The theory of equations of arbitrary order has been proposed as an adequate framework to deal with the heterogeneity and memory effects present in many physical phenomena [1,2]. The study of fractional integral equations is relevant by itself and also for the study of the properties of the solutions to fractional differential equations. Linear problems for fractional equations can be addressed by passing to integer-order equations [3,4]. On the other hand, for nonlinear problems, one interesting approach is the development of iterative techniques based on the use of upper and lower solutions [5].
The main purpose of this manuscript is to provide some results concerning estimations of solutions to nonlinear fractional integral problems. The paper is structured as follows.
In the second section, we introduce some basic concepts involving fractional calculus, together with some fundamental and useful results. In the third section, we describe several theorems providing conditions ensuring that certain linear fractional integral equations with constant coefficients have nonnegative solutions. In the fourth section, we use the previous results to adapt the classical idea of the monotone iterative technique [5,6] for this case. In the last section, we give an example of application in a specific nonlinear equation.

Preliminaries
In this section we introduce some definitions and notation that will be used for the rest of the document. These concepts are, essentially, the fundamental notions of fractional calculus, together with some theorems involving them. These introductory results focus on conditions ensuring the existence and uniqueness of a solution to a fractional integral or differential equation.
We will assume that the reader is familiar with the basic notions of Banach spaces, together with their classical notation, for instance, the space $L^1[0,b]$. Readers interested in Banach spaces can consult [7,8].
In the rest of the document, $[0,b] \subset \mathbb{R}$ will denote a compact interval. In particular, any equation equating two expressions depending on $t$ will be understood to hold if, and only if, it holds for every $t \in [0,b]$ except, at most, on a set of measure zero. It is important to notice that, when both expressions involved in the equation are continuous, if the equality holds for every $t \in [0,b]$ except on a set of measure zero, then it has to hold for every $t \in [0,b]$.
Thus, when we talk about nonnegative functions in $L^1[0,b]$, we have to understand that we are describing an element of the quotient space that admits a nonnegative representative. Analogously, a nondecreasing function in $L^1[0,b]$ will describe an element of the quotient space that admits a nondecreasing representative. In this framework, we recall the Dominated Convergence Theorem for the Lebesgue integral [9].
Theorem 1 (Lebesgue Dominated Convergence). Consider a sequence of measurable functions $f_n$ on $[0,b]$ converging pointwise to a function $f$. If there is an integrable function $g$ such that $|f_n| \le g$, then $f$ is integrable and, moreover, $f$ is the limit of the sequence $(f_n)_{n\in\mathbb{N}}$ with respect to the $L^1[0,b]$ norm.
On the other hand, we shall introduce some basic concepts and notation concerning fractional calculus. Further information about fractional calculus, additional to that provided here, can be consulted in [1,2]. We begin by reproducing the definition of the Riemann-Liouville fractional integral, which is a natural generalization of the Cauchy formula for repeated integration.

Definition 1.
We define the Riemann–Liouville fractional integral of a function $f \in L^1(a,b)$ with initial point $a \in \mathbb{R}$ and order $\delta \in \mathbb{R}^+$ as
$$I^{\delta}_{a^+} f(t) := \frac{1}{\Gamma(\delta)} \int_a^t (t-s)^{\delta-1} f(s)\,ds,$$
where the function $\Gamma(\delta)$, for $\delta > 0$, is defined as
$$\Gamma(\delta) := \int_0^{+\infty} s^{\delta-1} e^{-s}\,ds.$$
Moreover, the previous definition is extended to $\delta = 0$ as $I^{0}_{a^+} := \mathrm{Id}$.
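As a numerical illustration of the definition (ours, not part of the paper's formal development), the Riemann–Liouville integral can be approximated by a simple midpoint quadrature; the grid size and the test function $f \equiv 1$ below are arbitrary choices. For $f \equiv 1$, the definition gives $I^{\delta}_{0^+}1(t) = t^{\delta}/\Gamma(1+\delta)$, which the quadrature should reproduce.

```python
from math import gamma

def rl_integral(f, delta, t, n=20000):
    """Midpoint-rule approximation of the Riemann-Liouville integral
    (1 / Gamma(delta)) * int_0^t (t - s)**(delta - 1) * f(s) ds."""
    h = t / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h  # midpoint of the k-th subinterval
        total += (t - s) ** (delta - 1) * f(s)
    return total * h / gamma(delta)

# For f == 1 the definition gives I^delta 1 (t) = t**delta / Gamma(1 + delta).
approx = rl_integral(lambda s: 1.0, 1.5, 1.0)
exact = 1.0 / gamma(2.5)
```

For $\delta > 1$ the kernel $(t-s)^{\delta-1}$ is bounded and the midpoint rule converges quickly; for $0 < \delta < 1$ the kernel is weakly singular at $s = t$ and a product-integration rule would be preferable.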

Remark 1.
The main results involving the Riemann–Liouville fractional integral are that it is a continuous linear operator from $L^1[0,b]$ to itself and that it forms a continuous semigroup with respect to $\delta$, satisfying the law of additivity of orders $I^{\delta_2}_{a^+} I^{\delta_1}_{a^+} = I^{\delta_1+\delta_2}_{a^+}$. We also summarize the results in [3,4] about the existence and uniqueness of solutions to linear fractional integral problems in the following theorem.
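For completeness, the additivity of orders follows from a standard computation via Fubini's theorem and the Beta function; the following sketch reproduces the classical argument (it is not taken verbatim from this paper):

```latex
I^{\delta_2}_{a^+} I^{\delta_1}_{a^+} f(t)
  = \frac{1}{\Gamma(\delta_1)\Gamma(\delta_2)}
    \int_a^t \left( \int_s^t (t-u)^{\delta_2-1}(u-s)^{\delta_1-1}\,du \right) f(s)\,ds
  = \frac{B(\delta_1,\delta_2)}{\Gamma(\delta_1)\Gamma(\delta_2)}
    \int_a^t (t-s)^{\delta_1+\delta_2-1} f(s)\,ds
  = I^{\delta_1+\delta_2}_{a^+} f(t),
```

where the inner integral is evaluated via the substitution $u = s + (t-s)v$, which produces $(t-s)^{\delta_1+\delta_2-1} B(\delta_1,\delta_2)$, and the identity $B(\delta_1,\delta_2) = \Gamma(\delta_1)\Gamma(\delta_2)/\Gamma(\delta_1+\delta_2)$ closes the computation.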

Inequalities
For the sake of simplicity, we rewrite (1) as
$$(T + \mathrm{Id})\,y(t) = \sigma_0(t), \qquad (2)$$
where $T := A_1 I^{\delta_1}_{0^+} + \cdots + A_n I^{\delta_n}_{0^+}$, $T^+$ is the part of the previous sum involving positive coefficients, $T^-$ is the part involving negative coefficients, and $\sigma_0(t)$ denotes the source term. The question is how to impose conditions on $T$ and $\sigma_0(t)$ ensuring that $y(t)$, the unique solution to the equation, is nonnegative on $[0,b]$. In the first subsection, we will state and prove a theorem for the case of negative coefficients, $T^+ = 0$. In the second subsection, we will develop a theorem for the case of positive coefficients, $T^- = 0$. Finally, a combined argument will allow us to give a result for the general case in which $T^+ \neq 0$ and $T^- \neq 0$ simultaneously.
3.1. The Particular Case $T^+ = 0$

As we said before, we initially want to establish a theorem that applies to the case $T = T^-$.

Theorem 3. If $A_1, \ldots, A_n < 0$ and $\sigma_0(t), \eta_0(t)$ are functions in $L^1[0,b]$ such that $\sigma_0(t) \ge \eta_0(t)$ at almost every point, then the unique solution of Equation (2) associated with $\sigma_0(t)$ is greater than or equal, at almost every point, to the one associated with $\eta_0(t)$.
We will deduce the previous theorem after developing some partial results. We begin with the following lemma, which gives a nonnegativity result provided that the source term is continuous and nonnegative.

Proof of Lemma 1. First, we show that $y$ is continuous. It can be shown inductively that, for all $n \in \mathbb{Z}^+$,
$$(T + \mathrm{Id})\Big(y(t) - \sum_{k=0}^{n-1} (-1)^k T^k \sigma(t)\Big) = (-1)^n T^n \sigma(t).$$
If $y$ had a zero, the infimum property over the set of zeros of $y$, together with the continuity of $y$, would allow us to choose the first zero $t_0 \in [0,b]$. However, the evaluation of (2) at $t_0$ gives a contradiction: on the one hand, $(T + \mathrm{Id})\,y(t_0) = T y(t_0) = \sigma_0(t_0) \ge 0$; on the other hand, $T$ is a negative operator, $y$ is nonnegative on $[0, t_0]$ and strictly positive on some interval $[0, \delta]$, so $(T + \mathrm{Id})\,y(t_0) = T y(t_0) < 0$.
, then the unique solution of Equation (2) associated with the source term $\sigma_0(t)$ is greater than or equal, at every point, to the one associated with $\eta_0(t)$. Perturbing the source term to $\sigma_0(t) + \varepsilon$ in (2), where $\varepsilon > 0$, we get that the associated solution $y_\varepsilon(t)$ is nonnegative. Moreover, $(T + \mathrm{Id})^{-1}$ is continuous (recall that $T + \mathrm{Id}$ is a continuous linear bijection between Banach spaces), so $y_\varepsilon(t)$ converges to $y(t)$ in the $L^1[0,b]$ norm. Consequently, $y(t)$ is essentially nonnegative and, since it is continuous, $y(t)$ is nonnegative. It is also straightforward to check that Corollary 1 is still valid if $\sigma_0(0) \ge \eta_0(0)$.
If we replace the source term $\sigma_0(t)$ by a perturbation $\sigma_{0,\varepsilon}(t)$ in (2) such that $\|\sigma_{0,\varepsilon} - \sigma_0\| < \varepsilon$, the associated solution $y_\varepsilon(t)$ will be a nonnegative continuous function. As before, $y_\varepsilon(t)$ will converge to the solution $y(t)$ in the $L^1[0,b]$ norm. Consequently, we deduce that $y(t)$ is essentially nonnegative. It is also straightforward to check, from the previous arguments, that Corollary 1 can be adapted to the case where $\sigma_0, \eta_0$ are in $L^1[0,b]$, implying Theorem 3.

3.2. The Particular Case $T^- = 0$
Now, we describe a theorem that applies to the case where the operator $T$ is positive, i.e., $T^- = 0$. The main idea is to describe the solution to Equation (2) as a functional series. The convergence of this series will be guaranteed by the first hypothesis in Theorem 4. Moreover, we will see that the solution is positive by checking that the sum of the term at position $2k-1$ with the term at position $2k$ is positive for any $k \in \mathbb{Z}^+$. This will be ensured by the second hypothesis in Theorem 4.

Theorem 4. If $A_1, \ldots, A_n > 0$ and $\sigma_0(t)$ is an essentially nonnegative integrable function on $[0,b]$, then the unique solution to Equation (2) is nonnegative, provided that the following conditions hold: $\|T\| < 1$ and $\sigma_0(t) - T\sigma_0(t) \ge 0$ at almost every point of $[0,b]$.

Proof of Theorem 4. From (2), it is possible to deduce the equation $y(t) = \sigma_0(t) - Ty(t)$. Writing $\sigma_{j+1}(t) := -T\sigma_j(t)$, one can proceed inductively and, in fact, the following identity will hold for any natural $m \in \mathbb{N}$:
$$y(t) = \sigma_0(t) + \sigma_1(t) + \cdots + \sigma_m(t) + (-1)^{m+1}\,T^{m+1} y(t),$$
where the last term tends to zero when $m$ goes to infinity, because $\|T\| < 1$. In fact, if $\sigma_0$ is essentially bounded, then the sequence converges uniformly. Provided that $S(t) := \sum_{j=0}^{\infty} \sigma_j(t)$ converges, the continuity of $T$ ensures that $(T + \mathrm{Id})(y - S)(t) = 0$, which, due to Theorem 2, has a unique solution, which has to be the trivial one. So, in summary, provided that $S(t)$ exists, we have that $y(t) = S(t)$.
As we said before, the condition $\|T\| < 1$ is enough to ensure that $S(t)$ converges. It is enough to check that the Cauchy condition holds for the sequence of partial sums, as $L^1[0,b]$ is complete. To check the Cauchy condition, we have to see that $\|\sigma_n + \cdots + \sigma_m\|$ can be arbitrarily small if $N \in \mathbb{N}$ is big enough and $m > n \ge N$. We use the trivial bound
$$\|\sigma_n + \cdots + \sigma_m\| \le \sum_{j=n}^{m} \|T\|^j \|\sigma_0\|.$$
However, the last quantity can be bounded from above by $\|\sigma_0\|\,\|T\|^N / (1 - \|T\|)$, which can be arbitrarily small if $N$ is big enough. In conclusion, $S(t)$ converges, and it remains to prove that it is nonnegative. To complete this task, we will use the remaining hypothesis.
It is straightforward to see that $\sigma_j$ is nonnegative when $j$ is even and nonpositive when $j$ is odd. Thus, to prove that $S$ is nonnegative, it suffices to show that $\sigma_{2j} + \sigma_{2j+1} \ge 0$ for any $j \ge 0$. However, $\sigma_{2j} + \sigma_{2j+1} = T^{2j}(\sigma_0 + \sigma_1)$ and, since $T$ is a positive operator, it is enough to show that $\sigma_0(t) + \sigma_1(t) \ge 0$. This is immediate, as it is exactly the last condition in Theorem 4.
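To make the series construction concrete, the following sketch (our illustration, not from the paper) discretizes a one-term operator $T = A\,I^{\delta}_{0^+}$ with a product-rectangle rule and sums $\sigma_0 - T\sigma_0 + T^2\sigma_0 - \cdots$. All numerical parameters ($A = 1$, $\delta = 3/2$, $b = 3/5$, the grid size, and $\sigma_0(t) = t$) are hypothetical choices satisfying $\|T\| < 1$ with $\sigma_0$ nonnegative and nondecreasing.

```python
from math import gamma

def frac_matrix(delta, b, n):
    """Lower-triangular discretization of I^delta_{0+} at the grid points
    t_i = (i + 1) * b / n, treating the input as piecewise constant."""
    h = b / n
    c = 1.0 / gamma(delta + 1)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        ti = (i + 1) * h
        for k in range(i + 1):
            M[i][k] = c * ((ti - k * h) ** delta - (ti - (k + 1) * h) ** delta)
    return M

def apply_op(M, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

n, b, delta, A = 200, 0.6, 1.5, 1.0          # hypothetical data with ||T|| < 1
T = [[A * x for x in row] for row in frac_matrix(delta, b, n)]
sigma = [(i + 1) * b / n for i in range(n)]  # sigma_0(t) = t: nonneg., nondecreasing

# Neumann series y = sigma_0 - T sigma_0 + T^2 sigma_0 - ...
term, y = sigma[:], sigma[:]
for _ in range(80):
    term = [-x for x in apply_op(T, term)]   # term <- -T(term)
    y = [yi + ti for yi, ti in zip(y, term)]

Ty = apply_op(T, y)
residual = max(abs(yi + tyi - si) for yi, tyi, si in zip(y, Ty, sigma))
nonneg = min(y) >= 0.0
```

The partial sums converge geometrically at rate $\|T\|$, so after 80 terms the discrete residual of $(T + \mathrm{Id})y = \sigma_0$ is at rounding level, and the computed solution is nonnegative, as the theorem predicts for a nondecreasing source term.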
Before facing the general case, where $T^+, T^- \ne 0$, we will discuss the two hypotheses in Theorem 4. On the one hand, we rewrite the first hypothesis in computable terms. On the other hand, we give a condition that is more restrictive than the second hypothesis, but much easier to check.
Remark 4 (About the first condition in Theorem 4). It is easy to compute explicitly the value of $\|T\|$, namely
$$\|T\| = \sum_{i=1}^{n} A_i \frac{b^{\delta_i}}{\Gamma(1+\delta_i)}.$$
First, we show that the previously mentioned quantity is an upper bound for $\|T\|$: computing $\|I^{\delta_i}_{0^+} f\|_{L^1}$ directly, together with an application of Hölder's inequality, yields $\|I^{\delta_i}_{0^+} f\| \le \frac{b^{\delta_i}}{\Gamma(1+\delta_i)}\,\|f\|$. To conclude, we show that the previous upper bound is optimal. Given $\varepsilon \in (0,b)$, we define the function $\chi_\varepsilon$ that takes the value $\frac{1}{\varepsilon}$ on $[0,\varepsilon]$ and $0$ on $(\varepsilon, b]$. It is obvious that $\|\chi_\varepsilon\| = 1$ and that $\|T\| \ge \|T\chi_\varepsilon\|$, and it is straightforward to compute this lower bound explicitly. Finally, as $\varepsilon$ goes to zero, the lower bound tends to the upper bound $\sum_{i=1}^{n} A_i \frac{b^{\delta_i}}{\Gamma(1+\delta_i)}$, and we conclude that the first hypothesis in Theorem 4 can be rewritten as
$$\sum_{i=1}^{n} A_i \frac{b^{\delta_i}}{\Gamma(1+\delta_i)} < 1.$$

Remark 5 (About the second condition in Theorem 4). Due to Hölder's inequality, $T\sigma_0(t) \le \|\sigma_0\|_{L^\infty[0,t]} \sum_{i=1}^{n} A_i \frac{t^{\delta_i}}{\Gamma(1+\delta_i)}$, so the second hypothesis is an immediate consequence of
$$\sigma_0(t) \ge \|\sigma_0\|_{L^\infty[0,t]} \sum_{i=1}^{n} A_i \frac{t^{\delta_i}}{\Gamma(1+\delta_i)},$$
which is easier to check. Moreover, if $\sigma_0$ is a nondecreasing function, $\|\sigma_0\|_{L^\infty[0,t]} = \sigma_0(t)$ for a suitable representative, and this new condition just means
$$\sum_{i=1}^{n} A_i \frac{t^{\delta_i}}{\Gamma(1+\delta_i)} \le 1,$$
which was already included in the first hypothesis. We observe that, for this simplification, we have also used that $\sigma_0$ is nonnegative. Hence, if $\sigma_0$ is a nondecreasing function, the second hypothesis in Theorem 4 can be removed.
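In computable terms, the norm formula from Remark 4 is a one-liner; the coefficients, orders, and interval length below are hypothetical illustrative values, not data from the text.

```python
from math import gamma

def operator_norm(coeffs, orders, b):
    """||T|| for T = sum_i A_i * I^{delta_i}_{0+} acting on L^1[0, b],
    following Remark 4: sum_i A_i * b**delta_i / Gamma(1 + delta_i)."""
    return sum(A * b ** d / gamma(1 + d) for A, d in zip(coeffs, orders))

# Hypothetical data: A = (2, 3), delta = (3/2, 1/2), b = 1/100.
value = operator_norm([2.0, 3.0], [1.5, 0.5], 0.01)
# value is about 0.34 < 1, so the first hypothesis of Theorem 4 holds here.
```

Since the norm grows with $b$, this formula also determines the largest interval length on which the contraction condition $\|T\| < 1$ can be guaranteed for given coefficients.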
3.3. The General Case Where $T^+, T^- \ne 0$

We will provide the proof of the following result, which allows $T$ to have, simultaneously, positive and negative coefficients. The theorem obtained in this section will not be used in the rest of the document, although it is interesting by itself.

Theorem 5.
If $\sigma_0(t)$ is an essentially nonnegative integrable function on $[0,b]$, then the unique solution to (2) is nonnegative, provided that the following conditions hold: $\|T^-\| < 1$ and the least integral order $\delta_i$ appearing in $T^-$ is greater than or equal to $1$.
We provide the proof for Theorem 5. The idea is similar to the one used for the proof of Theorem 4, but with some additional considerations.
Proof of Theorem 5. From Equation (2), it is straightforward to deduce
$$(T^+ + \mathrm{Id})\,y(t) = \mu_0(t) - T^- y(t),$$
where $r_0(t)$ is the unique solution to the equation $(T^+ + \mathrm{Id})\,y(t) = \sigma_0(t)$, and where we have renamed $\mu_0(t) := \sigma_0(t)$. We can ensure that $r_0(t)$ is positive because of Theorem 4 and the first two hypotheses, so $\mu_1(t) := -T^- r_0(t)$ will also be positive and, furthermore, nondecreasing, since the least order in $-T^-$ is greater than or equal to $1$.
One can proceed inductively and, in fact, the following identity will hold for any natural $m \in \mathbb{N}$:
$$y(t) = \sum_{j=0}^{m} r_j(t) + y_{m+1}(t), \qquad (T + \mathrm{Id})\,y_{m+1}(t) = \mu_{m+1}(t),$$
where $r_j(t)$ is the unique solution to the problem $(T^+ + \mathrm{Id})\,r_j(t) = \mu_j(t)$, which means $r_j(t) = (T^+ + \mathrm{Id})^{-1}\mu_j(t)$, and where $\mu_{j+1}(t) = -T^-(r_j(t))$. This inductive construction also shows that every $\mu_j(t)$ and $r_j(t)$ is positive. This claim can be immediately proven by induction, taking into account Remark 5 and that the least integral order in $-T^-$ is greater than or equal to one. Now, we should ensure that $\mu_m(t)$ tends to $0$ in $L^1[0,b]$ as $m$ goes to infinity, together with the convergence of $R(t) := \sum_{j=0}^{\infty} r_j(t)$. As $r_m$ is nonnegative for any $m \in \mathbb{N}$, it is trivial to obtain the bounds
$$\|r_j\| \le \|\mu_j\|, \qquad \|\mu_{j+1}\| \le \|T^-\|\,\|\mu_j\| \le \|T^-\|^{j+1}\,\|\sigma_0\|.$$
From these bounds, since $\|T^-\| < 1$, one can emulate the proof of Theorem 4 and conclude that $R(t)$ converges, that it is positive (it is a sum of positive addends), and that it is the unique solution to (2).

Nonlinear Problem
In this section, we consider the nonlinear fractional integral equation
$$x(t) = f\big(t, I^{\delta_1}_{0^+} x(t), \ldots, I^{\delta_n}_{0^+} x(t)\big). \qquad (4)$$
Next, we develop a suitable version of the method of upper and lower solutions for problem (4), via the results obtained in Section 3.2. It is obvious that the solutions to (4) coincide with those of the problem
$$\big(A_1 I^{\delta_1}_{0^+} + \cdots + A_n I^{\delta_n}_{0^+} + \mathrm{Id}\big)\,x(t) = \big(A_1 I^{\delta_1}_{0^+} + \cdots + A_n I^{\delta_n}_{0^+}\big)\,x(t) + f\big(t, I^{\delta_1}_{0^+} x(t), \ldots, I^{\delta_n}_{0^+} x(t)\big),$$
for any fractional operator $A_1 I^{\delta_1}_{0^+} + \cdots + A_n I^{\delta_n}_{0^+}$.
Similarly, we say that a function $\beta \in L^1[0,b]$ is an upper solution for problem (4) if the following conditions are satisfied:

(II) There exist coefficients $A_1, \ldots, A_n > 0$ and orders $\delta_1 > \cdots > \delta_n > 0$ such that the function $g_{\alpha,\beta}$ is nonnegative and nondecreasing for every $t \in [0,b]$.

(III) The operator $T := A_1 I^{\delta_1}_{0^+} + \cdots + A_n I^{\delta_n}_{0^+}$, associated with the constants given in (II), fulfils $\|T\| < 1$, which is the first hypothesis in Theorem 4 (in this sense, recall Remark 5).

Theorem 7.
Suppose that $f : [0,b] \times \mathbb{R}^n \to \mathbb{R}$ is continuous and that Conditions (I) and (III) in Theorem 6 hold. Suppose, also, that:

(II*) There exist coefficients $A_1, \ldots, A_n > 0$ and orders $\delta_1 > \cdots > \delta_n > 0$ such that the function $g_{\eta,\xi}$ admits a nonnegative and nondecreasing representative for any choice of functions $\eta, \xi$ with $\alpha \le \eta \le \xi \le \beta$.

Proof. We consider the functional interval $[\alpha,\beta] := \{x \in L^1[0,b] : \alpha \le x \le \beta\}$.
For each fixed source term, depending on $\eta \in [\alpha,\beta]$, we consider the following linear problem:
$$(T + \mathrm{Id})\,x(t) = T\eta(t) + f\big(t, I^{\delta_1}_{0^+}\eta(t), \ldots, I^{\delta_n}_{0^+}\eta(t)\big). \qquad (5)$$
If we denote the right-hand side of (5) by $\sigma_\eta$, we can define the operator $S$ as the map taking each $\eta \in [\alpha,\beta]$ to the unique solution to (5) with source term $\sigma_\eta$. It is clear that a function in $L^1[0,b]$ is a fixed point of $S$ if and only if it is a solution to (4).
For the construction of the sequences described in the statement of the theorem, we choose $\alpha_0 = \alpha$, and $\alpha_1$ is the unique solution to (5) associated with $\sigma_\alpha$. The sequences $(\alpha_n)_{n\in\mathbb{N}}$ and $(\beta_n)_{n\in\mathbb{N}}$ are, therefore, defined via the recurrence relation
$$\alpha_n = S(\alpha_{n-1}), \qquad \beta_n = S(\beta_{n-1}), \qquad \forall n \ge 1.$$
We prove the following properties:
(i) $S$ is nondecreasing on the functional interval $[\alpha,\beta]$.
(ii) The operator $S$ maps the interval $[\alpha,\beta]$ into itself.
(iii) $(\alpha_n)_{n\in\mathbb{N}}$ converges to $\rho$ and $(\beta_n)_{n\in\mathbb{N}}$ converges to $\gamma$.
(iv) $\rho$ and $\gamma$ are the extremal solutions to (4) in the functional interval $[\alpha,\beta]$.
Now, to prove (iii), note that $(\alpha_n)_{n\in\mathbb{N}}$ is nondecreasing and $(\beta_n)_{n\in\mathbb{N}}$ is nonincreasing. Indeed, we have seen that $\alpha \le S(\alpha) \le S(\beta) \le \beta$ and that $S$ is nondecreasing. Thus, we have that $\alpha \le S(\alpha) \le S^2(\alpha) \le S^2(\beta) \le S(\beta) \le \beta$. If we apply this argument inductively, we derive the monotonicity of the sequences $(\alpha_n)_{n\in\mathbb{N}}$ and $(\beta_n)_{n\in\mathbb{N}}$. We need to prove that both sequences are convergent. Without loss of generality, we develop our argument for the sequence $(\alpha_n)_{n\in\mathbb{N}}$.
We see that $|\alpha_n| \le |\alpha| + |\beta|$ for any $n \in \mathbb{N}$, so the sequence $(\alpha_n)_{n\in\mathbb{N}}$ is uniformly bounded by an integrable function.
Due to Lebesgue's Dominated Convergence Theorem, we conclude that $(\alpha_n)_{n\in\mathbb{N}}$ converges in the $L^1$ norm to some $\rho \in L^1[0,b]$. Analogously, we deduce that $(\beta_n)_{n\in\mathbb{N}}$ converges to some $\gamma \in L^1[0,b]$.
Finally, to prove (iv), let $x$ be a solution to (4) such that $\alpha \le x \le \beta$. Since $S$ is nondecreasing and $S(x) = x$, we conclude that $\alpha_n = S^n(\alpha) \le S^n(x) = x \le S^n(\beta) = \beta_n$. Thus, we have that $\alpha_n \le x \le \beta_n$ for every $n \in \mathbb{N}$. This implies, after taking limits as $n \to \infty$, that $\rho \le x \le \gamma$, showing that $\rho$ and $\gamma$ are the extremal solutions in $[\alpha,\beta]$.
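The sandwiching mechanism behind (i)–(iv) can be seen in a toy scalar analogue (our illustration, not the paper's operator $S$): for a nondecreasing map with a fixed point, iterates from a lower solution increase toward it and iterates from an upper solution decrease toward it. The map $S(x) = \sqrt{x+2}$, with fixed point $x^* = 2$, is an arbitrary hypothetical choice.

```python
from math import sqrt

def S(x):
    # nondecreasing map on [0, 3] with unique fixed point x* = 2
    return sqrt(x + 2)

alpha, beta = 0.0, 3.0   # alpha <= S(alpha) and S(beta) <= beta: lower/upper solutions
for _ in range(80):
    alpha, beta = S(alpha), S(beta)
# alpha increases to 2, beta decreases to 2, and alpha <= 2 <= beta throughout
```

Monotonicity of $S$ guarantees that the two iterated sequences never cross the fixed point, which is exactly the property used in steps (iii) and (iv) above.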
It is obvious that M(η)(t) is measurable, once η is fixed. Moreover, due to the Lipschitz condition, it is straightforward to see that it is also absolutely integrable. Now we need to prove that M is continuous. We consider a sequence η n converging to η in the L 1 [0, b] norm. We need to show that M(η n ) converges to M(η) in the L 1 [0, b] norm.
Since $f$ is continuous and the intervals $[I^{\delta_i}_{0^+}\alpha, I^{\delta_i}_{0^+}\beta]$ are bounded, $f$ is uniformly continuous on the relevant compact set. Thus, given any $\varepsilon > 0$, we can consider $m \in \mathbb{N}$ large enough so that a direct estimate with the $L^1$ norm becomes smaller than $\varepsilon$. The previous arguments, replacing $(\eta_{m_1}, \eta_{m_2})$ by $(\eta, \eta_m)$, show that $\|M(\eta_m) - M(\eta)\|$ tends to zero as $m \to \infty$, implying that $M(\eta_m)$ converges to $M(\eta)$ and that $M(\eta)$ lies in $L^1[0,b]$.

An Example
In this final section, we provide an example of application of the previous results. We will obtain a specific value of $b$ ensuring the existence of solutions in $L^1[0,b]$, lying between a lower and an upper solution, to the following problem:
$$x(t) = f\big(t, I^{3/2}_{0^+}x(t), I^{5/4}_{0^+}x(t)\big) := \Big(1 - \Gamma\big(\tfrac{5}{2}\big)\,I^{3/2}_{0^+}x(t)\Big)\Big(1 - \Gamma\big(\tfrac{9}{4}\big)\,I^{5/4}_{0^+}x(t)\Big).$$
To check (I), observe that the constant functions $\alpha = 0$ and $\beta = 1$ are, respectively, a lower and an upper solution to our equation for $b = 1$.
Of course, this function is nonnegative and nondecreasing. On the other hand, we have again that $\beta(t) - f\big(t, I^{3/2}_{0^+}\beta(t), I^{5/4}_{0^+}\beta(t)\big) \ge 0$. To check (II*), consider the perturbation operator
$$T = A_1\,I^{\delta_1}_{0^+} + A_2\,I^{\delta_2}_{0^+} = \Gamma\big(\tfrac{5}{2}\big)\,I^{3/2}_{0^+} + \Gamma\big(\tfrac{9}{4}\big)\,I^{5/4}_{0^+}.$$
Then (II*) is fulfilled, since $\xi \ge \eta$ implies that each factor of the previous product is nonnegative and nondecreasing. Of course, (II*) implies (II).
To check (III), we compute $\|T\|$ via Remark 4. We get that
$$\|T\| = \Gamma\big(\tfrac{5}{2}\big)\frac{b^{3/2}}{\Gamma(\tfrac{5}{2})} + \Gamma\big(\tfrac{9}{4}\big)\frac{b^{5/4}}{\Gamma(\tfrac{9}{4})} = b^{3/2} + b^{5/4}.$$
It is easy to see numerically that $b = \tfrac{3}{5}$ implies $\|T\| < 1$. Finally, (IV) holds trivially, in virtue of Remark 6: the integral orders associated with the fractional operator are greater than or equal to one, the upper and lower solutions are bounded, and the function $f(t, a_1, a_2) = \big(1 - \Gamma(\tfrac{5}{2})\,a_1\big)\big(1 - \Gamma(\tfrac{9}{4})\,a_2\big)$ is continuously differentiable. Thus, the previous problem is under the hypotheses of Theorem 7 when $b = \tfrac{3}{5}$. In particular, we know that the problem has at least one solution defined on $[0, \tfrac{3}{5}]$, whose image lies in the interval $[0,1]$.
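As a quick numerical check of this choice of $b$ (our computation): since $A_1 = \Gamma(\tfrac{5}{2})$ and $A_2 = \Gamma(\tfrac{9}{4})$ here, the Gamma factors in the norm formula of Remark 4 cancel, leaving $\|T\| = b^{3/2} + b^{5/4}$.

```python
from math import gamma

b = 3 / 5
A1, A2 = gamma(5 / 2), gamma(9 / 4)
# Remark 4: ||T|| = A1 * b**(3/2) / Gamma(5/2) + A2 * b**(5/4) / Gamma(9/4)
norm_T = A1 * b ** 1.5 / gamma(1 + 1.5) + A2 * b ** 1.25 / gamma(1 + 1.25)
# The Gamma factors cancel, so ||T|| = b**1.5 + b**1.25, which is about 0.993 < 1.
```

The bound is fairly tight: already at $b \approx 0.61$ the sum $b^{3/2} + b^{5/4}$ exceeds $1$, so $b = \tfrac{3}{5}$ is close to the largest interval length this argument allows.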
Author Contributions: All authors contributed equally to every part of this work. All authors have read and agreed to the published version of the manuscript.