Distributed-Order Non-Local Optimal Control

Distributed-order fractional non-local operators were introduced and studied by Caputo at the end of the 20th century. They generalize fractional-order derivatives/integrals in the sense that such operators are defined by a weighted integral of different orders of differentiation over a certain range. The subject of distributed-order non-local derivatives is currently under strong development due to its applications in modeling complex real-world phenomena. Fractional optimal control theory deals with the optimization of a performance index functional subject to a fractional control system. One of the most important results in classical and fractional optimal control is the Pontryagin Maximum Principle, which gives a necessary optimality condition that every solution to the optimization problem must verify. In our work, we extend fractional optimal control theory by considering dynamical system constraints depending on distributed-order fractional derivatives. Precisely, we prove a weak version of Pontryagin's maximum principle and a sufficient optimality condition under appropriate convexity assumptions.


Introduction
Distributed-order fractional operators were introduced and studied by Caputo at the end of the previous century [1,2]. They can be seen as a generalization of fractional-order derivatives/integrals in the sense that these operators are defined by a weighted integral of different orders of differentiation over a certain range. The subject gained more interest at the beginning of the current century, among researchers from different mathematical disciplines, through attempts to solve differential equations with distributed-order derivatives [3][4][5][6]. Moreover, at the same time, in the domain of applied mathematics, distributed-order fractional operators started to be used, in a satisfactory way, to describe complex phenomena modeling real-world problems: see, for instance, works in viscoelasticity [7,8] and in diffusion [9]. Today, the study of distributed-order systems with fractional derivatives is a hot subject: see, e.g., [10][11][12] and references therein.
Fractional optimal control deals with optimization problems involving fractional differential equations as well as a performance index functional. One of the most important results is the Pontryagin Maximum Principle, which gives a first-order necessary optimality condition that every solution to the dynamic optimization problem must verify. By applying such a result, it is possible to find and identify candidate solutions to the optimal control problem. For the state of the art on fractional optimal control we refer the reader to [13][14][15] and references therein. Recently, distributed-order fractional problems of the calculus of variations were introduced and investigated in [16]. Here, our main aim is to extend the distributed-order fractional Euler-Lagrange equation of [16] to the Pontryagin setting (see Remark 2).
Regarding optimal control for problems with distributed-order fractional operators, results are rare and reduce to the following two papers: [17] and [18]. Both works develop numerical methods while, in contrast, here we are interested in analytical results (not in numerical approaches). Moreover, our results are new and bring new insights. Indeed, in [17] the problem is considered with Riemann-Liouville distributed derivatives, while in our case we consider optimal control problems with Caputo distributed derivatives. An inconsistency in [17] should also be noted: when one defines the control system with a Riemann-Liouville derivative, then a Caputo derivative should appear in the adjoint system; when one considers optimal control problems with a control system with Caputo derivatives, then the adjoint equation should involve a Riemann-Liouville operator, as a consequence of integration by parts (cf. Lemma 1). This inconsistency has been corrected in [18], where optimal control problems with Caputo distributed derivatives (as we consider here) are studied. Unfortunately, there is still an inconsistency in the necessary optimality conditions of both [17] and [18]: the transversality conditions are written there exactly as in the classical case, with the multiplier vanishing at the end of the interval, while the correct condition, as we prove in our Theorem 1, should involve a distributed integral operator: see condition (3).
The text is organized as follows. We begin by recalling definitions and necessary results from the literature in Section 2 of preliminaries. Our original results are then given in Section 3. More precisely, we consider fractional optimal control problems where the dynamical system constraints depend on distributed-order fractional derivatives. We prove a weak version of Pontryagin's maximum principle for the considered distributed-order fractional problems (see Theorem 1) and investigate a Mangasarian-type sufficient optimality condition (see Theorem 2). Examples illustrating the usefulness of the obtained results are given (see Examples 1 and 2). We end with Section 4 of conclusions, mentioning also some possibilities of future research.

Preliminaries
In this section, we recall necessary results and fix notations. We assume the reader to be familiar with the standard Riemann-Liouville and Caputo fractional calculi [19,20].
Let α be a real number in [0, 1] and let ψ be a non-negative continuous function defined on [0, 1] such that
$$ \int_0^1 \psi(\alpha)\, d\alpha > 0. $$
This function ψ will act as a distribution of the order of differentiation.
Definition 1 (See [1]). The left and right-sided Riemann-Liouville distributed-order fractional derivatives of a function x : [a, b] → R are defined, respectively, by
$$ D^{\psi(\cdot)}_{a+} x(t) = \int_0^1 \psi(\alpha)\, D^{\alpha}_{a+} x(t)\, d\alpha \quad \text{and} \quad D^{\psi(\cdot)}_{b-} x(t) = \int_0^1 \psi(\alpha)\, D^{\alpha}_{b-} x(t)\, d\alpha, $$
where $D^{\alpha}_{a+}$ and $D^{\alpha}_{b-}$ are, respectively, the left and right-sided Riemann-Liouville fractional derivatives of order α.
Definition 2 (See [1]). The left and right-sided Caputo distributed-order fractional derivatives of a function x : [a, b] → R are defined, respectively, by
$$ {}^C D^{\psi(\cdot)}_{a+} x(t) = \int_0^1 \psi(\alpha)\, {}^C D^{\alpha}_{a+} x(t)\, d\alpha \quad \text{and} \quad {}^C D^{\psi(\cdot)}_{b-} x(t) = \int_0^1 \psi(\alpha)\, {}^C D^{\alpha}_{b-} x(t)\, d\alpha, $$
where ${}^C D^{\alpha}_{a+}$ and ${}^C D^{\alpha}_{b-}$ are, respectively, the left and right-sided Caputo fractional derivatives of order α.
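For readers who wish to experiment numerically, the weighted integral in Definition 2 can be approximated by quadrature. The sketch below (function names and discretization are our own illustration, not from the literature) uses the known closed form ${}^C D^{\alpha}_{0+}\, t = t^{1-\alpha}/\Gamma(2-\alpha)$ for the identity function x(t) = t:

```python
import math

def caputo_deriv_identity(t, alpha):
    # Closed form for x(t) = t:  ^C D^alpha_{0+} x(t) = t^(1-alpha) / Gamma(2-alpha)
    return t ** (1.0 - alpha) / math.gamma(2.0 - alpha)

def distributed_caputo_identity(t, psi, n=2000):
    # Midpoint-rule approximation of  int_0^1 psi(alpha) ^C D^alpha_{0+} x(t) dalpha
    h = 1.0 / n
    return sum(psi(h * (k + 0.5)) * caputo_deriv_identity(t, h * (k + 0.5)) * h
               for k in range(n))

# Example distribution psi(alpha) = 2*alpha: non-negative, with integral 1 over [0, 1]
value = distributed_caputo_identity(1.0, lambda a: 2.0 * a)
```

Since $1 \le 1/\Gamma(2-\alpha) \le 1.13$ on [0, 1], the computed value lies slightly above 1, as expected.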
As noted in [16], the Riemann-Liouville and the Caputo distributed-order fractional derivatives are related by
$$ {}^C D^{\psi(\cdot)}_{a+} x(t) = D^{\psi(\cdot)}_{a+} x(t) - x(a) \int_0^1 \psi(\alpha)\, \frac{(t-a)^{-\alpha}}{\Gamma(1-\alpha)}\, d\alpha. $$
Along the text, we use the notation
$$ I^{\psi(\cdot)}_{b-} x(t) := \int_0^1 \psi(\alpha)\, I^{1-\alpha}_{b-} x(t)\, d\alpha, $$
where $I^{1-\alpha}_{b-}$ represents the right Riemann-Liouville fractional integral of order $1-\alpha$. The next result, an integration by parts formula, has an essential role in the proofs of our main results, that is, in the proofs of Theorems 1 and 2.

Lemma 1 (Integration by parts; see [16]). Let x be a continuous function and g a continuously differentiable function on [a, b]. Then
$$ \int_a^b g(t)\, {}^C D^{\psi(\cdot)}_{a+} x(t)\, dt = \Big[ x(t)\, I^{\psi(\cdot)}_{b-} g(t) \Big]_{t=a}^{t=b} + \int_a^b x(t)\, D^{\psi(\cdot)}_{b-} g(t)\, dt. $$
Next, we recall the standard notion of concave function, which will be used in Section 3.3.

Definition 3 (See [21]). A function h : R^n → R is concave if
$$ h(\beta \theta_1 + (1-\beta)\theta_2) \ge \beta h(\theta_1) + (1-\beta) h(\theta_2) $$
for all β ∈ [0, 1] and for all θ₁, θ₂ in R^n.

Lemma 2 (See [21]). Let h : R^n → R be a continuously differentiable function. Then h is a concave function if and only if it satisfies the so-called gradient inequality:
$$ h(\theta_1) - h(\theta_2) \le \nabla h(\theta_2) \cdot (\theta_1 - \theta_2) \quad \text{for all } \theta_1, \theta_2 \in \mathbb{R}^n. $$
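The gradient inequality of Lemma 2 is easy to check numerically for a concrete concave function; the following sketch (our own illustration) samples h(θ) = −θ² on a grid of points:

```python
def h(x):
    # A smooth concave function on R
    return -(x * x)

def grad_h(x):
    # Its derivative
    return -2.0 * x

# Gradient inequality for concave h:  h(y) - h(x) <= grad_h(x) * (y - x).
# Here it reduces to 0 <= (x - y)^2, so it holds for every pair of points.
points = [i / 10.0 for i in range(-30, 31)]
ok = all(h(y) - h(x) <= grad_h(x) * (y - x) + 1e-12
         for x in points for y in points)
```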
Finally, we recall a fractional version of Gronwall's inequality, which will be useful to prove continuity of solutions in Section 3.1.

Lemma 3 (See [22]). Let α be a positive real number and let a(·), b(·), and u(·) be non-negative continuous functions on [0, T] with b(·) monotonic increasing on [0, T). If
$$ u(t) \le a(t) + b(t) \int_0^t (t-s)^{\alpha-1} u(s)\, ds, $$
then
$$ u(t) \le a(t) + \int_0^t \left[ \sum_{n=1}^{\infty} \frac{\big( b(t)\Gamma(\alpha) \big)^n}{\Gamma(n\alpha)} (t-s)^{n\alpha-1} a(s) \right] ds, \quad t \in [0, T). $$
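The series appearing in the conclusion of the fractional Gronwall inequality is of Mittag-Leffler type. As a quick sanity check (helper names are ours), one can evaluate $E_\alpha(x) = \sum_{n \ge 0} x^n / \Gamma(n\alpha + 1)$ by partial sums; for α = 1 it reduces to the exponential function:

```python
import math

def mittag_leffler(alpha, x, terms=80):
    # Partial sum of the Mittag-Leffler function E_alpha(x) = sum_{n>=0} x^n / Gamma(n*alpha + 1).
    # The factorial-type growth of Gamma(n*alpha + 1) makes the series converge for every x.
    return sum(x ** n / math.gamma(n * alpha + 1) for n in range(terms))

# Sanity check: E_1(1) = e
val = mittag_leffler(1.0, 1.0)
```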

Main Results
The basic problem of optimal control we consider in this work, denoted by (BP), consists in finding a piecewise continuous control u ∈ PC and the corresponding piecewise smooth state trajectory x ∈ PC¹, solution of the distributed-order non-local variational problem
$$ J[x(\cdot), u(\cdot)] = \int_a^b L(t, x(t), u(t))\, dt \longrightarrow \max, $$
$$ {}^C D^{\psi(\cdot)}_{a+} x(t) = f(t, x(t), u(t)), \qquad x(a) = x_a, $$
where the functions L and f, both defined on [a, b] × R × R, are assumed to be continuously differentiable in all their three arguments: L ∈ C¹, f ∈ C¹. Our main contribution is to prove necessary (Section 3.2) and sufficient (Section 3.3) optimality conditions.
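Assuming the performance index has the standard Lagrange form $J[x, u] = \int_a^b L(t, x(t), u(t))\, dt$, it can be evaluated numerically by a simple Riemann sum; the discretization and names below are our own sketch, not part of the theory:

```python
def objective(L, x, u, a=0.0, b=1.0, n=10000):
    # Midpoint-rule approximation of J[x, u] = int_a^b L(t, x(t), u(t)) dt
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        total += L(t, x(t), u(t)) * h
    return total

# Example: L(t, x, u) = x^2 + u^2 with x(t) = t and u(t) = 1 on [0, 1];
# the exact value is int_0^1 (t^2 + 1) dt = 4/3
J = objective(lambda t, x, u: x * x + u * u, lambda t: t, lambda t: 1.0)
```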

Sensitivity analysis
Before we can prove necessary optimality conditions for problem (BP), we need to establish continuity and differentiability results on the state solutions for any control perturbation (Lemmas 4 and 5), which are then used in Section 3.2. The proof of Lemma 4 makes use of the following mean value theorem for integration, which can be found in any calculus textbook (see, e.g., Lemma 1 of [23]): if F : [0, 1] → R is a continuous function and ψ is an integrable function that does not change sign on the interval, then there exists a number $\bar\alpha \in [0, 1]$ such that
$$ \int_0^1 \psi(\alpha) F(\alpha)\, d\alpha = F(\bar\alpha) \int_0^1 \psi(\alpha)\, d\alpha. $$

Lemma 4 (Continuity of solutions). Let $u_\varepsilon$ be a control perturbation around the optimal control u*, that is, for all t ∈ [a, b],
$$ u_\varepsilon(t) = u^*(t) + \varepsilon h(t), $$
where h(·) ∈ PC is a variation and ε ∈ R, and let $x_\varepsilon$ be the corresponding state trajectory. Then $x_\varepsilon$ converges to the optimal state trajectory x* when ε tends to zero.
Proof. Starting from the definition, we have, for all t ∈ [a, b],
$$ {}^C D^{\psi(\cdot)}_{a+} x_\varepsilon(t) = f(t, x_\varepsilon(t), u_\varepsilon(t)) \quad \text{and} \quad {}^C D^{\psi(\cdot)}_{a+} x^*(t) = f(t, x^*(t), u^*(t)). $$
Then, by linearity,
$$ {}^C D^{\psi(\cdot)}_{a+} (x_\varepsilon - x^*)(t) = f(t, x_\varepsilon(t), u_\varepsilon(t)) - f(t, x^*(t), u^*(t)) $$
and it follows, by definition of the distributed operator, that
$$ \int_0^1 \psi(\alpha)\, {}^C D^{\alpha}_{a+} (x_\varepsilon - x^*)(t)\, d\alpha = f(t, x_\varepsilon(t), u_\varepsilon(t)) - f(t, x^*(t), u^*(t)). $$
Now, using the mean value theorem for integration, and denoting $m := \int_0^1 \psi(\alpha)\, d\alpha$, we obtain that there exists an $\bar\alpha \in [0, 1]$ such that
$$ m\, {}^C D^{\bar\alpha}_{a+} (x_\varepsilon - x^*)(t) = f(t, x_\varepsilon(t), u_\varepsilon(t)) - f(t, x^*(t), u^*(t)). $$
Clearly, one has
$$ x_\varepsilon(t) - x^*(t) = \frac{1}{m}\, I^{\bar\alpha}_{a+} \big[ f(t, x_\varepsilon(t), u_\varepsilon(t)) - f(t, x^*(t), u^*(t)) \big], $$
which leads to
$$ |x_\varepsilon(t) - x^*(t)| \le \frac{1}{m \Gamma(\bar\alpha)} \int_a^t (t-s)^{\bar\alpha - 1} \big| f(s, x_\varepsilon(s), u_\varepsilon(s)) - f(s, x^*(s), u^*(s)) \big|\, ds. $$
Moreover, because f is Lipschitz-continuous, we have
$$ \big| f(s, x_\varepsilon(s), u_\varepsilon(s)) - f(s, x^*(s), u^*(s)) \big| \le K_1 |x_\varepsilon(s) - x^*(s)| + K_2 |\varepsilon|\, |h(s)|. $$
By setting K = max{K₁, K₂}, it follows that
$$ |x_\varepsilon(t) - x^*(t)| \le \frac{K |\varepsilon|}{m \Gamma(\bar\alpha)} \int_a^t (t-s)^{\bar\alpha - 1} |h(s)|\, ds + \frac{K}{m \Gamma(\bar\alpha)} \int_a^t (t-s)^{\bar\alpha - 1} |x_\varepsilon(s) - x^*(s)|\, ds. $$
Now, by applying Lemma 3 (the fractional Gronwall inequality) with $u(t) = |x_\varepsilon(t) - x^*(t)|$, $b(t) \equiv K / (m \Gamma(\bar\alpha))$ and $a(t)$ given by the first term on the right-hand side, it follows that
$$ |x_\varepsilon(t) - x^*(t)| \le a(t) + \int_a^t \left[ \sum_{n=1}^{\infty} \frac{(K/m)^n}{\Gamma(n\bar\alpha)} (t-s)^{n\bar\alpha - 1} a(s) \right] ds. $$
The series in the last inequality is a Mittag-Leffler function and thus convergent. Since a(t) is proportional to |ε|, by taking the limit when ε tends to zero we obtain the desired result:
$$ \lim_{\varepsilon \to 0} x_\varepsilon(t) = x^*(t), \quad t \in [a, b]. $$

Lemma 5 (Differentiability of the perturbed trajectory). There exists a function η defined on [a, b] such that
$$ x_\varepsilon(t) = x^*(t) + \varepsilon \eta(t) + o(\varepsilon). $$

Proof. Since f ∈ C¹, we have that
$$ f(t, x_\varepsilon, u_\varepsilon) = f(t, x^*, u^*) + \frac{\partial f}{\partial x}(t, x^*, u^*) (x_\varepsilon - x^*) + \frac{\partial f}{\partial u}(t, x^*, u^*) (u_\varepsilon - u^*) + R, $$
where R denotes the residue of the first-order Taylor expansion. Observe that $u_\varepsilon - u^* = \varepsilon h(t)$ and $u_\varepsilon \to u^*$ when ε → 0 and, by Lemma 4, we have $x_\varepsilon \to x^*$ when ε → 0. Thus, the residue term can be expressed in terms of ε only, that is, the residue is o(ε). Therefore, we have
$$ {}^C D^{\psi(\cdot)}_{a+} x_\varepsilon(t) = f(t, x^*, u^*) + \frac{\partial f}{\partial x}(t, x^*, u^*)(x_\varepsilon - x^*) + \varepsilon \frac{\partial f}{\partial u}(t, x^*, u^*) h(t) + o(\varepsilon). $$
We want to prove the existence of the limit $\lim_{\varepsilon \to 0} \frac{x_\varepsilon - x^*}{\varepsilon} =: \eta$, that is, to prove that $x_\varepsilon(t) = x^*(t) + \varepsilon \eta(t) + o(\varepsilon)$. This is indeed the case, since η is the solution of the distributed-order fractional differential equation
$$ {}^C D^{\psi(\cdot)}_{a+} \eta(t) = \frac{\partial f}{\partial x}(t, x^*(t), u^*(t))\, \eta(t) + \frac{\partial f}{\partial u}(t, x^*(t), u^*(t))\, h(t), \qquad \eta(a) = 0. $$
The intended result is proved.
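The differentiability result of Lemma 5 can be illustrated in the classical limit where the distribution ψ concentrates at α = 1, so that the control system reduces to x'(t) = u(t) for the toy dynamics f(t, x, u) = u (this special case and all names below are our own illustration). For linear dynamics the variation η is exactly the integral of the control variation h, and (x_ε − x*)/ε matches η for any ε:

```python
import math

def euler_trajectory(u, a=0.0, b=1.0, n=1000):
    # Forward Euler for x'(t) = u(t), x(a) = 0  (classical limit of the control system)
    step = (b - a) / n
    x = [0.0]
    for k in range(n):
        x.append(x[-1] + step * u(a + k * step))
    return x

u_star = lambda t: 1.0           # reference control
h_var = lambda t: math.cos(t)    # control variation h(t)
eps = 1e-2

x_star = euler_trajectory(u_star)
x_eps = euler_trajectory(lambda t: u_star(t) + eps * h_var(t))
eta = euler_trajectory(h_var)    # for linear dynamics, eta(t) = int_a^t h(s) ds

# (x_eps - x_star) / eps coincides with eta up to floating-point rounding
max_gap = max(abs((xe - xs) / eps - e) for xe, xs, e in zip(x_eps, x_star, eta))
```

For nonlinear f the identity holds only up to the o(ε) term of Lemma 5, which is precisely the content of the lemma.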

Pontryagin's maximum principle for the distributed-order problem
The following result is a necessary condition of Pontryagin type [24] for the basic distributed-order non-local optimal control problem (BP).

Theorem 1 (Pontryagin Maximum Principle for (BP)). If (x*(·), u*(·)) is an optimal pair for (BP), then there exists λ ∈ PC¹, called the adjoint function, such that the following conditions hold for all t in the interval [a, b]:
• the optimality condition
$$ \frac{\partial L}{\partial u}(t, x^*(t), u^*(t)) + \lambda(t) \frac{\partial f}{\partial u}(t, x^*(t), u^*(t)) = 0; \quad (1) $$
• the adjoint equation
$$ D^{\psi(\cdot)}_{b-} \lambda(t) = \frac{\partial L}{\partial x}(t, x^*(t), u^*(t)) + \lambda(t) \frac{\partial f}{\partial x}(t, x^*(t), u^*(t)); \quad (2) $$
• the transversality condition
$$ I^{\psi(\cdot)}_{b-} \lambda(b) = 0. \quad (3) $$

Proof. Let (x*(·), u*(·)) be a solution to problem (BP), h(·) ∈ PC be a variation, and ε a real constant. Define $u_\varepsilon(t) = u^*(t) + \varepsilon h(t)$, so that $u_\varepsilon \in PC$. Let $x_\varepsilon$ be the state corresponding to the control $u_\varepsilon$, that is, the state solution of
$$ {}^C D^{\psi(\cdot)}_{a+} x_\varepsilon(t) = f(t, x_\varepsilon(t), u_\varepsilon(t)), \qquad x_\varepsilon(a) = x_a. $$
Note that $u_\varepsilon(t) \to u^*(t)$ for all t ∈ [a, b] whenever ε → 0. Furthermore,
$$ \frac{\partial u_\varepsilon}{\partial \varepsilon}(t) = h(t). $$
Something similar is also true for $x_\varepsilon$: because f ∈ C¹, it follows from Lemma 4 that, for each fixed t, $x_\varepsilon(t) \to x^*(t)$ when ε → 0 and, by Lemma 5, the derivative
$$ \eta(t) = \frac{\partial x_\varepsilon}{\partial \varepsilon}(t) \Big|_{\varepsilon = 0} $$
exists for each t. The objective functional along the perturbed pair defines the function
$$ \varphi(\varepsilon) := J[x_\varepsilon, u_\varepsilon] = \int_a^b L(t, x_\varepsilon(t), u_\varepsilon(t))\, dt. \quad (4) $$
Next, we introduce the adjoint function λ. Let λ(·) be in PC¹, to be determined. By the integration by parts formula (Lemma 1),
$$ \int_a^b \lambda(t)\, {}^C D^{\psi(\cdot)}_{a+} x_\varepsilon(t)\, dt = \Big[ x_\varepsilon(t)\, I^{\psi(\cdot)}_{b-} \lambda(t) \Big]_{t=a}^{t=b} + \int_a^b x_\varepsilon(t)\, D^{\psi(\cdot)}_{b-} \lambda(t)\, dt $$
and, since ${}^C D^{\psi(\cdot)}_{a+} x_\varepsilon(t) = f(t, x_\varepsilon(t), u_\varepsilon(t))$,
$$ \int_a^b \Big[ \lambda(t) f(t, x_\varepsilon(t), u_\varepsilon(t)) - x_\varepsilon(t)\, D^{\psi(\cdot)}_{b-} \lambda(t) \Big] dt - \Big[ x_\varepsilon(t)\, I^{\psi(\cdot)}_{b-} \lambda(t) \Big]_{t=a}^{t=b} = 0. \quad (5) $$
Adding this zero to the expression $J[x_\varepsilon, u_\varepsilon]$ gives, by (4),
$$ \varphi(\varepsilon) = \int_a^b \Big[ L(t, x_\varepsilon(t), u_\varepsilon(t)) + \lambda(t) f(t, x_\varepsilon(t), u_\varepsilon(t)) - x_\varepsilon(t)\, D^{\psi(\cdot)}_{b-} \lambda(t) \Big] dt - \Big[ x_\varepsilon(t)\, I^{\psi(\cdot)}_{b-} \lambda(t) \Big]_{t=a}^{t=b}. $$
Since the process (x*, u*) = (x₀, u₀) is assumed to be a maximizer of problem (BP), the derivative of φ(ε) with respect to ε must vanish at ε = 0, that is,
$$ \varphi'(0) = \int_a^b \left[ \left( \frac{\partial L}{\partial x} + \lambda(t) \frac{\partial f}{\partial x} - D^{\psi(\cdot)}_{b-} \lambda(t) \right) \eta(t) + \left( \frac{\partial L}{\partial u} + \lambda(t) \frac{\partial f}{\partial u} \right) h(t) \right] dt - \Big[ \eta(t)\, I^{\psi(\cdot)}_{b-} \lambda(t) \Big]_{t=a}^{t=b} = 0, $$
where the partial derivatives of L and f, with respect to x and u, are evaluated at (t, x*(t), u*(t)).

Rearranging the terms and using (5), introducing the Hamiltonian H(t, x, u, λ) := L(t, x, u) + λ f(t, x, u) and noting that η(a) = 0 (because $x_\varepsilon(a) = x_a$ for every ε), we obtain that
$$ \int_a^b \left[ \left( \frac{\partial H}{\partial x} - D^{\psi(\cdot)}_{b-} \lambda(t) \right) \eta(t) + \frac{\partial H}{\partial u}\, h(t) \right] dt - \eta(b)\, I^{\psi(\cdot)}_{b-} \lambda(b) = 0, $$
where the partial derivatives of H are evaluated at (t, x*(t), u*(t), λ(t)). Given the adjoint equation (2) and the transversality condition (3), it yields
$$ \int_a^b \frac{\partial H}{\partial u}(t, x^*(t), u^*(t), \lambda(t))\, h(t)\, dt = 0 $$
and, by the fundamental lemma of the calculus of variations [25], we have the optimality condition (1). This concludes the proof.

Remark 1.
If we change the basic optimal control problem (BP) by replacing the boundary condition given on the state variable at initial time, x(a) = x_a, with a terminal condition, then the optimality condition and the adjoint equation of the Pontryagin Maximum Principle (Theorem 1) remain exactly the same. Changes appear only in the transversality condition:
• a boundary condition at final/terminal time, that is, fixing the value x(b) = x_b with x(a) remaining free, leads to
$$ I^{\psi(\cdot)}_{b-} \lambda(a) = 0; $$
• in case no boundary conditions are given (i.e., both x(a) and x(b) are free), then the distributed integral operator vanishes at both ends:
$$ I^{\psi(\cdot)}_{b-} \lambda(a) = 0 = I^{\psi(\cdot)}_{b-} \lambda(b). $$
Remark 2. If f(t, x, u) = u, that is, ${}^C D^{\psi(\cdot)}_{a+} x(t) = u(t)$, then our problem (BP) gives a basic problem of the calculus of variations, in the distributed-order fractional sense of [16]. In this very particular case, we obtain from our Theorem 1 the Euler-Lagrange equation of [16] (cf. Theorem 2 of [16]).

Remark 3.
Our distributed-order fractional optimal control problem (BP) can be easily extended to the vector setting. Precisely, let x := (x₁, . . . , x_n) and u := (u₁, . . . , u_m) with (n, m) ∈ N² such that m ≤ n, and let the functions f : [a, b] × R^n × R^m → R^n and L : [a, b] × R^n × R^m → R be continuously differentiable with respect to all their components. If (x*, u*) is an optimal pair, then, with λ := (λ₁, . . . , λ_n) and the Hamiltonian H := L + λ · f, the following conditions hold for t ∈ [a, b]:
• the optimality conditions
$$ \frac{\partial H}{\partial u_i}(t, x^*(t), u^*(t), \lambda(t)) = 0, \quad i = 1, \ldots, m; $$
• the adjoint equations
$$ D^{\psi(\cdot)}_{b-} \lambda_j(t) = \frac{\partial H}{\partial x_j}(t, x^*(t), u^*(t), \lambda(t)), \quad j = 1, \ldots, n; $$
• the transversality conditions
$$ I^{\psi(\cdot)}_{b-} \lambda_j(b) = 0, \quad j = 1, \ldots, n. $$

Definition 4. The candidates to solutions of (BP), obtained by the application of our Theorem 1, will be called (Pontryagin) extremals.
We now illustrate the usefulness of our Theorem 1 with an example.

Sufficient condition for global optimality
We now prove a Mangasarian-type theorem for the distributed-order fractional optimal control problem (BP).

Theorem 2 (Mangasarian-type sufficient optimality condition for (BP)). Consider the basic distributed-order fractional optimal control problem (BP) and let (x̃(·), ũ(·), λ(·)) satisfy the conditions of Theorem 1. If L and f are concave as functions of the pair (x, u) and λ(t) ≥ 0 for all t ∈ [a, b], then (x̃, ũ) is a global maximizer of (BP), that is,
$$ J[\tilde{x}, \tilde{u}] \ge J[x, u] $$
for any admissible pair (x, u).

Proof. Because L is concave as a function of x and u, we have from Lemma 2 that
$$ L(t, x(t), u(t)) - L(t, \tilde{x}(t), \tilde{u}(t)) \le \frac{\partial L}{\partial x}(t, \tilde{x}(t), \tilde{u}(t)) \big( x(t) - \tilde{x}(t) \big) + \frac{\partial L}{\partial u}(t, \tilde{x}(t), \tilde{u}(t)) \big( u(t) - \tilde{u}(t) \big) $$
for any control u and its associated trajectory x. This gives
$$ J[x, u] - J[\tilde{x}, \tilde{u}] \le \int_a^b \left[ \frac{\partial L}{\partial x}(t, \tilde{x}(t), \tilde{u}(t)) \big( x(t) - \tilde{x}(t) \big) + \frac{\partial L}{\partial u}(t, \tilde{x}(t), \tilde{u}(t)) \big( u(t) - \tilde{u}(t) \big) \right] dt. $$
From the adjoint equation (2), we have
$$ \frac{\partial L}{\partial x}(t, \tilde{x}(t), \tilde{u}(t)) = D^{\psi(\cdot)}_{b-} \lambda(t) - \lambda(t) \frac{\partial f}{\partial x}(t, \tilde{x}(t), \tilde{u}(t)). $$
From the optimality condition (1), we know that
$$ \frac{\partial L}{\partial u}(t, \tilde{x}(t), \tilde{u}(t)) = -\lambda(t) \frac{\partial f}{\partial u}(t, \tilde{x}(t), \tilde{u}(t)). $$
Substituting these two identities into the previous estimate and using the integration by parts formula (Lemma 1), together with the transversality condition (3) and the equality x(a) = x̃(a) = x_a, we conclude that
$$ \int_a^b D^{\psi(\cdot)}_{b-} \lambda(t) \big( x(t) - \tilde{x}(t) \big)\, dt = \int_a^b \lambda(t)\, {}^C D^{\psi(\cdot)}_{a+} (x - \tilde{x})(t)\, dt = \int_a^b \lambda(t) \big[ f(t, x(t), u(t)) - f(t, \tilde{x}(t), \tilde{u}(t)) \big]\, dt $$
and, therefore,
$$ J[x, u] - J[\tilde{x}, \tilde{u}] \le \int_a^b \lambda(t) \left[ f(t, x, u) - f(t, \tilde{x}, \tilde{u}) - \frac{\partial f}{\partial x} \big( x - \tilde{x} \big) - \frac{\partial f}{\partial u} \big( u - \tilde{u} \big) \right] dt \le 0, $$
where the last inequality follows from the concavity of f and the non-negativity of λ. The proof is complete.
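In the classical limit (ψ concentrated at α = 1) the sufficiency mechanism of Theorem 2 can be seen on a toy problem of our own: maximize J = ∫₀¹ −(x² + u²) dt subject to x' = u, x(0) = 0. Here L is concave, f is linear (hence concave), the candidate from the necessary conditions is x̃ ≡ ũ ≡ 0 with λ ≡ 0, and no admissible pair can beat its value J = 0:

```python
def run(u, n=2000):
    # Forward Euler for x' = u, x(0) = 0, plus a Riemann sum of J = int_0^1 -(x^2 + u^2) dt
    step = 1.0 / n
    x, J = 0.0, 0.0
    for k in range(n):
        uk = u(k * step)
        J += -(x * x + uk * uk) * step
        x += step * uk
    return J

J_extremal = run(lambda t: 0.0)                # the candidate singled out by Theorem 1
J_perturbed = run(lambda t: 0.5 * (1.0 - t))   # an arbitrary admissible control
```

Any non-zero control makes the integrand strictly negative somewhere, so the perturbed objective is strictly below the extremal value, as Theorem 2 guarantees.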

Example 2.
The extremal (x̃, ũ, λ) given in Example 1 is a global minimizer for problem (7). This is easily checked from Theorem 2, since the Hamiltonian defined in (8) is a concave function with respect to both variables x and u and, furthermore, λ(t) ≡ 0. In Figure 1, we give the plots of the optimal solution to problem (7).

Conclusion
In this paper, we investigated fractional optimal control problems depending on distributed-order fractional operators. We proved a necessary optimality condition of Pontryagin type and a Mangasarian-type sufficient optimality condition. The new results were illustrated with an example. As future work, it would be interesting to develop proper numerical approaches to solve optimal control problems with distributed-order fractional derivatives. In this direction, the approaches found in [17] and [18] can be easily adapted.

Figure 1. The optimal control u* and corresponding optimal state variable x*, solution of problem (7).