Optimal Control of Insect Populations

Abstract: We consider some optimal control problems for systems governed by linear parabolic PDEs with local controls that can move along the domain region Ω of the plane. We prove the existence of optimal paths and also deduce the first-order necessary optimality conditions, using the Dubovitskii-Milyutin formalism, which leads to an iterative algorithm of fixed-point kind. This problem may be considered as a model for the control of a mosquito population in a given region by using moving insecticide-spreading devices. In this situation, an optimal control is any trajectory or path that the spreading device must follow in order to reduce the population as much as possible at a reasonable, not too expensive, cost. We illustrate our results by presenting some numerical experiments.


Introduction
In this paper, we present and solve, theoretically and numerically, some optimal control problems concerning the extinction of mosquito populations by using moving devices whose main role is to spread insecticide. The controls are the trajectories or paths followed by the devices, and the states are the resulting mosquito population densities. Thus, the unknowns in the considered problems are couples (γ, u), where γ is a curve in the plane (see Figure 1) and u describes the size and spatial distribution of the mosquito population. The goal is to compute a trajectory leading to a minimal cost, measured in terms of operational costs and total population up to a final time.
The main simplifying assumptions for the model are the following:

Assumption 1. Mosquitoes grow at a known, not necessarily constant rate a = a(x, t) and diffuse at a known constant rate α.
Assumption 2. The insecticide immediately kills a fixed fraction of the population at a rate that decreases with the distance to the spreading source.

Assumption 3. There are no obstacles for the admissible trajectories.

These assumptions can be relaxed in several ways. For the problems considered in this paper, we prove the existence of an optimal solution, that is, a trajectory that leads to the best possible status of the system. We also characterize the optimal control-state pairs by appropriate first-order optimality conditions. As usual, these form a coupled system that must be satisfied by any optimal control, its associated state, and an additional variable (the adjoint state), and must be viewed as a necessary condition for optimality.
In this paper, this characterization is obtained by using the so-called Dubovitskii-Milyutin techniques (see, for instance, Girsanov [1]). These rely on the following basic idea: at a local minimizer, the set of descent directions for the functional to be minimized must be disjoint from the intersection of the cones of admissible directions determined by the restrictions. Accordingly, in view of the Hahn-Banach Theorem, there exist elements in the associated dual cones, not all zero, whose sum vanishes. This algebraic condition is in fact the Euler-Lagrange equation for the problem in question. In order to apply the method in our context, we must carry out the task of identifying all these (primal and dual) cones.
This formalism has been applied with success to several optimal control problems for PDEs, including the FitzHugh-Nagumo equation (Brandao et al. [2]), solidification (Boldrini et al. [3]), the evolutionary Boussinesq model (Boldrini et al. [4]), the heat equation (Gayte et al. [5]) and, also, some ecological systems that model the interaction of three species (Coronel et al. [6]).
We remark that part of the results presented here originate in the Ph.D. thesis of Araujo [7]. This paper is organized as follows: Section 2 is devoted to describing the main achievements. The next two sections are devoted to the rigorous proof of the optimality conditions; Section 3 contains some preparatory material and Section 4 deals with the main part of the proof. In Section 5, we deduce optimality conditions for a similar problem, including in this case a restriction on the norm of the admissible trajectories. Once the optimality conditions are established, they can be used to devise numerical schemes to effectively compute suitable approximations of optimal trajectories; this is explained in detail in Section 6. In Section 7, we present the results of numerical experiments performed with the scheme described in the previous section. Finally, in Section 8, we present our conclusions and comment on further lines of investigation.

Main Achievements
Let Ω ⊂ R² be a non-empty bounded connected open set with boundary ∂Ω such that either ∂Ω is of class C² or Ω is a convex polygon, and let T > 0 be a given time. We want to find a curve γ* : [0, T] → R² solving (1), where V := { γ ∈ H¹(0, T)² : γ(0) = 0 } is the space of admissible controls and F is the cost functional (2), in which γ̇ is the time derivative of γ and u = u(x, t) is the associated state, that is, the unique solution to problem (3). The parameters µ_0 ≥ 0, µ_1 > 0, and µ_2 > 0 are given constants.

Concerning the interpretation of the cost functional (2), the function γ determines the trajectory of the device and γ̇ is its corresponding velocity; moreover, the same path (the geometric locus of the trajectory) can be traveled at different speeds, leading to different operational costs. Since ‖γ‖_{L²(0,T)} is a measure of the size of γ and ‖γ̇‖_{L²(0,T)} is a measure of the velocity γ̇, the quantity µ_0 ‖γ‖²_{L²(0,T)} + µ_1 ‖γ̇‖²_{L²(0,T)} can be regarded as a measure of the operational costs associated with the trajectory γ. The parameters µ_0 and µ_1 weight the relative importance attributed to the size and the velocity of a trajectory in those operational costs. On the other hand, ‖u‖_{L²(Q)} is a measure of the size of the mosquito population and µ_2 weights the importance attributed to the decrease of that population. It is expected that an increase of the operational effort improves the effectiveness of the process and consequently reduces the mosquito population, while a decrease of the operational effort leads to an increase of the mosquito population. Therefore, once the parameters µ_0, µ_1, and µ_2 are given, the minimization of F leads to a trajectory that balances the reduction of the mosquito population against the operational costs.
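As a concrete illustration of how the three terms of (2) combine, here is a minimal numerical sketch (Python); the function name `cost_functional`, the uniform sampling layout, and the rectangle-rule quadrature are our own illustrative choices, not the paper's code:

```python
# Hedged sketch: discrete evaluation of a cost of the form
#   F = mu0*||gamma||^2_{L2(0,T)} + mu1*||gamma'||^2_{L2(0,T)} + mu2*||u||^2_{L2(Q)}
# using a simple rectangle rule; all discretization choices are illustrative.
import numpy as np

def cost_functional(gamma, u, dt, dx, mu0=1.0, mu1=1.0, mu2=1.0):
    """gamma: (N+1, 2) trajectory samples; u: (N+1, m, m) state samples."""
    gamma = np.asarray(gamma, dtype=float)
    u = np.asarray(u, dtype=float)
    gdot = np.diff(gamma, axis=0) / dt              # forward-difference velocity
    size = np.sum(gamma**2) * dt                    # ~ ||gamma||^2_{L2(0,T)}
    speed = np.sum(gdot**2) * dt                    # ~ ||gamma_dot||^2_{L2(0,T)}
    population = np.sum(u**2) * dt * dx * dx        # ~ ||u||^2_{L2(Q)}
    return mu0 * size + mu1 * speed + mu2 * population
```

Increasing µ_2 relative to µ_0 and µ_1 in such a routine makes the population term dominate, which is the trade-off discussed above.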
We remark that the values of µ 0 , µ 1 and µ 2 have a large influence on the shape of a possible optimal trajectory.
In short, F(γ) is thus the sum of three terms, respectively related (but not identical) to the length of the path travelled by the device, its speed along (0, T), and the resulting population of insects. A possibly more realistic cost functional is indicated in Section 8.
In (3), α > 0, b > 0, a = a(x, t) is a non-negative function, and k : R² → R is a positive C¹-function. The coefficient a is related to insect proliferation and is standard in population dynamics. In fact, in (3), the mosquito population u is assumed to have a space-time dependent Malthusian growth; this means that the population growth rate at each position and time is proportional to the number of individuals. The model admits a non-constant a to cope with geographical places and times with different effects; for instance, more favorable growth rates occur in places with bodies of water or during rainy seasons.
As we mentioned in the Introduction, we assume that the insecticide immediately kills a fraction of the mosquito population present at a position x and time t, with a rate that is proportional to the insecticide concentration at that same position and time. At time t, the spreading device is at position γ(t) and spreads a cloud of insecticide around it; the resulting insecticide concentration at any point x then depends on the position of x relative to γ(t) (we expect this concentration to decay with the distance from the device). The spatial distribution of the concentration is then mathematically described as k(x − γ(t)) c_max, where c_max is the maximal spreading concentration capacity of the device and k is a known C¹-function such that 0 ≤ k ≤ 1 and k(0) = 1. The associated effective killing rate of the mosquito population is proportional to the product of the insecticide concentration and the mosquito population, that is, f k(x − γ(t)) c_max u(x, t) for some proportionality factor f. By introducing b = f c_max, we get that the killing rate at position x and time t is b k(x − γ(t)) u(x, t).
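Putting Assumptions 1 and 2 together, the state system (3) plausibly has the following form; this is a hedged reconstruction from the surrounding description (diffusion at rate α, Malthusian growth a u, killing term b k(x − γ(t)) u, and the Neumann condition mentioned in Section 6), not a verbatim copy of (3):

```latex
\begin{aligned}
u_t - \alpha\,\Delta u + b\,k(x-\gamma(t))\,u &= a(x,t)\,u &&\text{in } \Omega\times(0,T),\\
\partial_n u &= 0 &&\text{on } \partial\Omega\times(0,T),\\
u(x,0) &= u_0(x) &&\text{in } \Omega.
\end{aligned}
```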
In our present analysis it is not strictly needed, but it is natural to view k as an approximation of the Dirac distribution. For instance, it is meaningful to take k(z) := k_0 e^{−|z|²/σ} for some k_0, σ > 0; this means that the device action in (3) at time t is maximal at x = γ(t) and negligible far from γ(t). In this respect, see the choice of k in the numerical experiments in Section 7.
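With this Gaussian choice, the kernel and the induced killing rate can be sketched as follows (Python; the values k_0 = 1, σ = 0.5, and b used here are illustrative assumptions):

```python
# Hedged sketch of the kernel k(z) = k0*exp(-|z|^2/sigma) and the induced
# killing rate b*k(x - gamma(t))*u; k0 = 1, sigma = 0.5, and b are assumed values.
import numpy as np

def k(z, k0=1.0, sigma=0.5):
    z = np.asarray(z, dtype=float)
    return k0 * np.exp(-np.sum(z**2, axis=-1) / sigma)

def killing_rate(x, gamma_t, u, b=1.0):
    """Effective removal rate b*k(x - gamma(t))*u(x, t)."""
    return b * k(np.asarray(x, dtype=float) - np.asarray(gamma_t, dtype=float)) * u
```

Note that k(0) = k_0 = 1 and 0 ≤ k ≤ 1, consistent with the normalization above.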
Of course, we can consider more elaborate models and include additional (nonlinear) terms in (3) related to competition; this is not done here just for simplicity of exposition.
Any solution (γ * , u * ) to (1) provides an optimal trajectory and the associated population of mosquitoes. The existence of such a pair (γ * , u * ) is established below, see Section 4.
As already mentioned, we also provide in this paper a characterization of optimal control-state pairs. It will be seen that, in the particular case of (1), the main consequence of the Dubovitskii-Milyutin method is the following: if γ* is an optimal control and u* is the associated state, there exists p* (the adjoint state) such that the optimality conditions (4)-(6) below are satisfied.

Preliminaries
As usual, D(Ω) will stand for the space of functions in C^∞(Ω) with compact support in Ω, and W^{r,p}(Ω) and H^m(Ω) = W^{m,2}(Ω) will stand for the usual Sobolev spaces; see Adams [8] for their definitions and properties. The gradient and Laplace operators will be denoted by ∇ and ∆, respectively.
The constants µ_0 ≥ 0, µ_1 > 0, µ_2 > 0, α > 0, and b > 0 and the functions k, u_0, and a are given and satisfy the assumptions above. In the sequel, C will denote a generic positive constant and the symbol ⟨· , ·⟩ will stand for several duality pairings.
Very frequently, some standard spaces of vector-valued functions will be needed; for the main results concerning these spaces, we refer, for instance, to [9]. Let us just recall one of them, sometimes called the Lions-Peetre Embedding Theorem; see ([10], p. 13). For any Banach space B, we will denote by ‖·‖_B the corresponding norm; its topological dual space will be denoted by B′. Recall that, if K ⊂ B is a cone, the associated dual cone is K* := { f ∈ B′ : ⟨f, e⟩ ≥ 0 for all e ∈ K }. Let A ⊂ B be given and let us assume that e_0 ∈ A. It will be said that a nonzero linear form f ∈ B′ is a support functional for A at e_0 if ⟨f, e⟩ ≥ ⟨f, e_0⟩ for all e ∈ A. Let Z be a Banach space and let M : B → Z be a mapping. For any e_0, e_1 ∈ B, we will denote by M′(e_0)e_1 the Gateaux derivative of M at e_0 in the direction e_1, whenever it exists. For obvious reasons, in the particular case Z = R, this quantity will be written in the form ⟨M′(e_0), e_1⟩.
The following result is known as the Aubin-Lions Compactness Theorem; in fact, the version below was given by Simon in [11] (p. 85, Corollary 4):

Lemma 2. Let X, B, and Y be Banach spaces such that X ⊂ B ⊂ Y with continuous embeddings, the first of them being compact, and assume that 0 < T < +∞. Then the corresponding embeddings of the associated spaces of time-dependent functions are compact.

The following result (Theorem 1) guarantees the existence and uniqueness of a state for each control γ in the space of admissible controls V.
For completeness, let us also recall an existence-uniqueness result for the adjoint system: it possesses exactly one solution p ∈ W^{2,1}_{2,N}(Q), which satisfies an estimate with C as in Theorem 1.
We finish this section by recalling the Dubovitskii-Milyutin method. The presentation is similar to that in Boldrini et al. [3].
Let X be a Banach space and let J : X → R be a given functional. Consider the extremal problem (8) of minimizing J over the intersection of the sets Q_ℓ (ℓ = 1, . . . , n + 1), which are by definition the restriction sets. It is assumed that (9) holds. There are many situations where (9) holds; in particular, the following is a typical setting:

• For any 1 ≤ i ≤ n, Q_i is an inequality restriction set of the form Q_i = { e ∈ X : p_i(e) ≤ a_i }, where the p_i : X → R are continuous seminorms and the a_i > 0.
• Q_{n+1} is the equality restriction set.

The following theorem is a generalized version of the Dubovitskii-Milyutin principle and will be used in Sections 4 and 5:

Theorem 3. Let e_0 ∈ ⋂_{i=1}^{n+1} Q_i be a local minimizer of problem (8). Let DC_0 be the decreasing cone of the cost functional J at e_0, let FC_i be the feasible (or admissible) cone of Q_i at e_0 for 1 ≤ i ≤ n, and let TC be the tangent cone to Q_{n+1} at e_0. Suppose that:

1. The cones DC_0 and FC_i (1 ≤ i ≤ n) are non-empty, open, and convex.
2. The cone TC is non-empty, closed, and convex.

Then there exist f_0 ∈ [DC_0]*, f_i ∈ [FC_i]* (1 ≤ i ≤ n), and f_{n+1} ∈ [TC]*, not all zero, such that f_0 + f_1 + · · · + f_{n+1} = 0.

In order to identify the decreasing, feasible, and tangent cones, we will use the following well known definitions and facts:

• Suppose that J is Gateaux-differentiable at e ∈ X. Then, the decreasing cone of J at e is open and convex and is given by DC = { h ∈ X : ⟨J′(e), h⟩ < 0 }, where ⟨· , ·⟩ stands for the usual duality product associated with X and X′.
• Suppose that the set Q ⊂ X is convex, Int Q ≠ ∅, and e ∈ Q. Then, the feasible cone of Q at e is open and convex and is given by FC = { λ(q − e) : q ∈ Int Q, λ > 0 }.
• Finally, we have the celebrated Ljusternik Inverse Function Theorem; see, for instance, ([12], p. 167). The statement is the following: suppose that X and Y are Banach spaces, M : X → Y is a mapping, and the set Q := { e ∈ X : M(e) = 0 } is non-empty; in addition, suppose that e_0 ∈ Q, M is strictly differentiable at e_0, and R(M′(e_0)) = Y. Then, M maps a neighborhood of e_0 onto a neighborhood of 0 and the tangent cone to Q at e_0 is the closed subspace TC = { h ∈ X : M′(e_0)h = 0 }.

A First Optimal Control Problem
In this section, we will consider the optimal control problem (1)-(3). We will introduce an equivalent reformulation as an extremal problem of the kind (8). Then, we will prove that there exist optimal control-state pairs. Finally, we will use Theorem 3 to deduce appropriate (first order) necessary optimality conditions.
Our problem is the following: minimize J(γ, u) over U_ad (this is problem (10)), where J is given by (2) for any (γ, u) ∈ V × L²(Q), and the set of admissible control-state pairs U_ad consists of the pairs (γ, u) with γ ∈ V and u the associated state, that is, the solution to (3). One has:

Theorem 4. The extremal problem (10) possesses at least one solution.
Proof. Let us first check that U_ad is non-empty.

Let us now see that U_ad is sequentially weakly closed in V × L²(Q). Thus, assume that (γ_n, u_n) ∈ U_ad for all n, γ_n → γ weakly in V, and u_n → u weakly in L²(Q). Then, γ_n converges uniformly to γ on [0, T]. Since u_n is the unique solution to (3) with γ replaced by γ_n, we necessarily have that u is the state associated with γ, that is, (γ, u) ∈ U_ad; hence U_ad is sequentially weakly closed. Obviously, J(γ, u) is a well-defined real number for any (γ, u) ∈ U_ad. Furthermore, J : V × L²(Q) → R is continuous and (strictly) convex; consequently, J is sequentially weakly lower semicontinuous in V × L²(Q).

Finally, J is coercive. Therefore, J attains its minimum in the weakly closed set U_ad, and the result holds.

Remark 1.
Note that, although J is strictly convex, we cannot guarantee the uniqueness of the solution to (10), since the admissible set U_ad is not necessarily convex. See, however, Remark 3 for a discussion on uniqueness.
We will provide a proof based on the Dubovitskii-Milyutin formalism, that is, on Theorem 3. First, let us establish some technical results.
First of all, the functional J : V × L²(Q) → R is Gateaux-differentiable. Concerning the mapping M, we have the following: (i) for any (γ, u) ∈ V × W^{2,1}_{2,N}(Q), M is Gateaux-differentiable at (γ, u); (ii) the mapping M is C¹ in a neighborhood of (γ, u) and, furthermore, M′(γ, u) is surjective.
Proof. In order to prove (i), we use the definition of the Gâteaux derivative. In view of the assumptions on k, (15), and Lebesgue's Theorem, the desired conclusion follows.

We will now prove (ii). It is clear that, for any (γ, u) ∈ V × W^{2,1}_{2,N}(Q), M′(γ, u) is a well-defined continuous linear operator on V × W^{2,1}_{2,N}(Q). Let (β, v) ∈ V × W^{2,1}_{2,N}(Q) be given, and assume that (γ_n, u_n) ∈ V × W^{2,1}_{2,N}(Q) for all n and (γ_n, u_n) → (γ, u) strongly in V × W^{2,1}_{2,N}(Q) as n → +∞. From the hypotheses on k and the fact that γ_n → γ uniformly on [0, T], we obtain (16). On the other hand, using again the hypotheses on k and the uniform convergence of γ_n, we can also deduce (17). From (16) and (17), we conclude that M is C¹ in a neighborhood of (γ, u). It remains to check that M′(γ, u) is surjective. However, this is easy: it suffices to first choose β ∈ V arbitrarily and then solve the corresponding linear problem for v; obviously, the resulting couple (β, v) satisfies (18).
From this lemma and Ljusternik's Theorem, we obtain the following characterization of the tangent cone to U_ad:

Lemma 5. Let (γ, u) be given in U_ad. Then, the tangent cone TC to U_ad at (γ, u) is the kernel of M′(γ, u), and the associated dual cone is the corresponding annihilator.

We can now present the proof of Theorem 5.
Proof of Theorem 5. Let (γ*, u*) ∈ V × W^{2,1}_2(Q) be a solution to (10). The cone of decreasing directions of J at (γ*, u*) is given as above. In addition, since (γ*, u*) ∈ U_ad, the cone of tangent directions to U_ad at this point is furnished by Lemma 5, whence there must exist G_0 = −λ_0 J′(γ*, u*) ∈ [DC]* and G = (η, ζ) ∈ [TC]*, not simultaneously zero, whose sum vanishes. Since λ_0 ≥ 0 and G_0 and G cannot be simultaneously zero, we necessarily have λ_0 > 0, and it can be assumed that λ_0 = 1. Accordingly, the corresponding identity holds for all (β, v) ∈ V × W^{2,1}_{2,N}(Q). In particular, we see that (20) holds. Now, let γ ∈ V be an arbitrary admissible control and let w be the unique solution to the linear system (21). It follows from Lemma 5 that (γ, w) ∈ TC, whence (22) holds. Let us introduce the adjoint system (5) and let us denote by p* its unique solution. By multiplying (21) by −p* and (5) by w, summing, integrating with respect to x and t, and performing integrations by parts, we easily get (23). Taking into account (22), we thus find (24). However, this is just the weak formulation of the boundary problem (6). Indeed, standard arguments show that γ* solves (24) if and only if γ* ∈ H¹(0, T)², the second-order integro-differential equation holds in the distributional sense in (0, T), γ*(0) = 0, and γ̇*(T) = 0. Thus, the triplet (γ*, u*, p*) satisfies (4)-(6). This ends the proof.

Remark 2.
There are other ways to prove Theorem 5. For instance, it is possible to apply an argument relying on Lagrange multipliers, starting from the Lagrangian L(γ, u, p) := J(γ, u) + ⟨p, M(γ, u)⟩.

Remark 3.
As already said, in general, there is no reason to expect uniqueness. However, in view of Theorem 5, it is reasonable to believe that, under appropriate assumptions on b and k, the solution is unique. Indeed, taking into account that k is uniformly bounded in R², if b is sufficiently small, (γ_i, u_i) is an optimal pair for i = 1, 2, and one sets γ := γ_1 − γ_2 and u := u_1 − u_2, the following is not difficult to prove:

• The u_i are uniformly bounded (for instance) in L²(Q) by a constant of the form e^{C(1+b)}.
• u and p are bounded in L²(Q) by a constant of the form b e^{C(1+b)} ‖γ‖_{L∞}.
• γ is bounded in L∞(0, T) by a constant of the form b e^{C(1+b)} ‖γ‖_{L∞}.
In these estimates, C depends on ‖k‖_{W^{1,∞}(R²)} and on the other data of the problem, but is independent of b. Consequently, if b is small enough, the solution to the optimal control problem is unique.

A Second Optimal Control Problem
This section deals with a second, more realistic optimal control problem. Specifically, we will analyze the constrained problem (25), where B ⊂ V is the closed ball of radius R_0 and F is given by (2). Here, R_0 > 0 is a prescribed constant. We can interpret the constraint γ ∈ B as a limitation on the positions and the speed of the device; roughly speaking, a solution to (25) furnishes a strategy that leads to a minimal insect population (in the L² sense) with few resources. For this problem, we will again prove the existence of optimal controls, and we will find the optimality conditions furnished by the Dubovitskii-Milyutin formalism.
Thus, let γ* be an optimal control for this new problem, let u* and p* be the associated state and adjoint state, and let us introduce some convenient additional notation. Then, it will be shown that the associated first-order optimality conditions are (4), (5), and (28). The problem can be rewritten in the form (29), where J is given by (2) and Z_ad is the corresponding set of admissible pairs. Recall that M = (M_1, M_2) is given by (12). One has:

Theorem 6. The extremal problem (29) possesses at least one solution.
Proof. We will argue as in the proof of Theorem 5.
First, notice that Z_ad can be written in a convenient product form. Let (γ*, u*) ∈ V × W^{2,1}_2(Q) be a solution to (29). Then, the feasible cone of B × L²(Q) at (γ*, u*) can be described in terms of a support functional h; recall that the latter means that h ∈ V and the associated support inequality holds. From Theorem 3, we now obtain the corresponding Euler-Lagrange identity, with multipliers λ_0 ≥ 0, (η, ζ), and µ. As before, we necessarily have λ_0 > 0. Indeed, if λ_0 = 0, only the remaining multipliers would appear in the identity. However, for any γ ∈ V, we can always construct w in W^{2,1}_{2,N}(Q) such that (γ, w) ∈ TC. Hence, we would have ⟨(η, ζ), (γ, w)⟩ = 0 and µ⟨h, γ⟩ = 0 for all γ ∈ V, which is impossible.
We can thus assume that λ_0 = 1; the Euler-Lagrange identity then yields, in particular, (31). Let us consider again the adjoint system (5) and let us denote by p* its unique solution. Let γ ∈ B be an arbitrary admissible control and let w be the unique solution to (21). Since (γ, w) ∈ TC and we still have (23), we obtain the corresponding identity. Taking into account that γ is arbitrary in B and that (31) holds, we deduce that (28) is satisfied, and the result is proved.

A Numerical Scheme
In this section, we present a numerical scheme to find an approximate solution to (1)-(3). To this purpose, we will use the optimality conditions (4)-(6).
Obviously, we will have to approximate the equations in time and space. With respect to the time variable, we will use finite differences, taking into account the following:

• Since equation (4) is parabolic, in order to guarantee unconditional stability, we discretize it in time by using a backward Euler method. For the same reason, we discretize the adjoint system (5), which evolves backwards in time, by using a forward Euler method.
• Since (6) is a second-order two-point boundary value problem, we approximate its time derivatives with a standard (centered) finite difference scheme.
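To illustrate why the implicit choice matters, here is a small self-contained sketch (Python); the one-dimensional setting, the coefficients, and the function `backward_euler_heat` are our own illustrative stand-ins, not the scheme actually used for (4):

```python
# Illustration of the stability remark: implicit (backward) Euler for a 1D heat
# equation with homogeneous Neumann conditions stays bounded even for large
# alpha*dt/dx^2, a regime in which explicit (forward) Euler would blow up.
import numpy as np

def backward_euler_heat(u0, alpha, dt, dx, steps):
    m = len(u0)
    L = np.zeros((m, m))                        # Neumann Laplacian stencil
    for i in range(m):
        L[i, i] = -2.0
        L[i, max(i - 1, 0)] += 1.0              # reflected at the left end
        L[i, min(i + 1, m - 1)] += 1.0          # reflected at the right end
    A = np.eye(m) - (alpha * dt / dx**2) * L    # (I - dt*alpha*Lap) u^{j+1} = u^j
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, u)
    return u
```

With alpha*dt/dx² = 10, far beyond the explicit stability limit 1/2, the computed solution remains non-negative and conserves total mass, which is the unconditional stability invoked above.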
Let N be a positive (large) integer and let us consider a partition Λ N = {0 = t 0 , t 1 , . . . , t N = T} of the time interval [0, T] in N subintervals. For simplicity, we assume that this partition is uniform, with time step ∆t := T/N.
On the other hand, the approximation in space will be performed via a finite element method. Thus, let T_h be a triangular mesh of a polygonal approximation Ω_h of Ω and let us denote by V_h a suitable finite element space associated with T_h. For instance, V_h can be the usual P_1-Lagrange space, formed by the continuous piecewise linear functions on Ω_h.
Then, given an approximation of the control, the corresponding fully discrete systems are obtained. In the last of them, (36), D_h denotes a suitable discrete operator associated with the time derivative. In our code, to preserve second-order approximation in time, we took a centered difference and, thus, the required Neumann boundary condition can be imposed just by using a reduced form of the finite difference operator.

An Iterative Algorithm for Fixed N and T h
The previous finite-dimensional system is nonlinear and, consequently, cannot be solved exactly. Accordingly, an iterative algorithm has been devised to compute a solution. It is the following:

Base step: Choose tolerances ε_outer > 0 and ε_inner > 0; starting with γ^N_{h,0} ≡ 0, proceed recursively for n = 1, 2, . . . in an outer iteration scheme as follows:

First step: Find u^N_{h,n}. Since γ^N_{h,n−1}(t_i) is known, advance in time to find u^N_{h,n}(t_i), for i = 1, . . . , N, by successively solving the corresponding linear problems.

Second step: Find p^N_{h,n}. Since γ^N_{h,n−1}(t_i) and u^N_{h,n} are known, proceed backwards in time to find p^N_{h,n}(t_i), for i = N − 1, . . . , 0, by solving the associated problems.

Third step: Find γ^N_{h,n}. This is done by applying an inner iteration scheme. Thus, starting with γ̃_0 = γ^N_{h,n−1}, find recursively {γ̃_m}_{m≥1} by repeating the corresponding step for m = 1, 2, . . ., with a meaning similar to the above for D_h γ̃_m(t_N) = 0. This is a finite linear system for γ̃_m = [γ̃_m(t_1), . . . , γ̃_m(t_N)], where the coefficient matrix is independent of m and can thus be inverted only once, at the beginning of the process.
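The "invert only once" remark can be sketched as follows (Python); the function name `make_inner_solver` and the random stand-in matrix are hypothetical, not the paper's actual discrete system:

```python
# Sketch: the inner-iteration coefficient matrix does not depend on m, so it is
# inverted a single time and reused for each iterate. The matrix and right-hand
# sides below are random stand-ins, not the actual discrete trajectory system.
import numpy as np

def make_inner_solver(A):
    """Invert the fixed coefficient matrix once; each later solve is a product."""
    A_inv = np.linalg.inv(A)                # done once, before the inner loop
    return lambda rhs: A_inv @ rhs          # cheap solve per inner iteration

rng = np.random.default_rng(0)
A = np.eye(40) + 0.05 * rng.standard_normal((40, 40))
solve = make_inner_solver(A)

g = rng.standard_normal(40)
for m in range(5):                          # inner iterations reuse the inverse
    g = solve(g)
```

In practice an LU factorization would be preferred to an explicit inverse, but the point is the same: the expensive step is performed once, outside the loop.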
The relative stopping criterion for the inner iteration is ‖γ̃_m − γ̃_{m−1}‖ < ε_inner ‖γ̃_{m−1}‖, where ‖·‖ denotes the usual discrete L²-norm. When this criterion is satisfied, take γ^N_{h,n} = γ̃_m and test the outer iteration: if the relative change from γ^N_{h,n−1} to γ^N_{h,n} is below ε_outer, take u^N_{h,n}, p^N_{h,n}, and γ^N_{h,n} as the computed approximate solutions. Otherwise, increase n by one and go back to the First step.
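A minimal version of such a relative test (Python; the plain Euclidean norm stands in for the discrete L²-norm):

```python
# Relative stopping test ||g_new - g_old|| < eps * ||g_old||, as used for both
# the inner and the outer iterations above.
import numpy as np

def converged(g_new, g_old, eps):
    g_new = np.asarray(g_new, dtype=float)
    g_old = np.asarray(g_old, dtype=float)
    return bool(np.linalg.norm(g_new - g_old) < eps * np.linalg.norm(g_old))
```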

Acceleration of Algorithm
To accelerate the process, ε_inner can be taken larger than ε_outer. In addition, to get a better resolution, in the final iterative scheme, we have allowed the time step to decrease by increasing the number of time intervals N.
The resulting iterates are as follows:

Base step: Choose tolerances ε_γ > 0, ε_outer > 0, ε_inner > 0 and an initial number of discrete time intervals N_0 ≥ 1.

First step:
Take N = N_0 and apply the previous iterative algorithm to obtain u^N_h, p^N_h, and γ^N_h. Define γ_before = γ^{N_0}_h.

Second step:
Double the value of N and apply the previous algorithm to obtain new u^N_h, p^N_h, and γ^N_h.
Third step: Compare γ^N_h and γ_before. If the relative change is below ε_γ, stop the iterates and take as approximate solutions u* = u^N_{h,n}, p* = p^N_{h,n}, and γ* = γ^N_{h,n}.

Otherwise, set γ_before = γ^N_h and go back to the Second step, repeating the process.
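The doubling strategy of the Second and Third steps can be sketched as follows (Python); `solve_for` is a hypothetical stand-in for the full algorithm of Section 6.1, chosen only so that its output converges as N grows:

```python
# Sketch of the time-step refinement loop: double N until the relative change
# between consecutive trajectories drops below a tolerance eps_gamma.
import numpy as np

def solve_for(N):
    t = np.linspace(0.0, 1.0, 101)
    return np.sin(np.pi * t) * (1.0 + 1.0 / N)   # stand-in with O(1/N) error

def refine_in_time(N0=25, eps_gamma=1e-2, max_doublings=10):
    N = N0
    gamma_before = solve_for(N)
    for _ in range(max_doublings):
        N *= 2
        gamma = solve_for(N)
        rel = np.linalg.norm(gamma - gamma_before) / np.linalg.norm(gamma_before)
        if rel < eps_gamma:
            return N, gamma
        gamma_before = gamma
    return N, gamma_before
```

Because the stand-in error halves with each doubling, the loop terminates after a few rounds.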

Numerical Experiments
In order to illustrate the behavior of the previous algorithm, let us present the results of some experiments with Ω = [−2.5, 2.5] × [−2.5, 2.5].
Let us denote by 1_ω the indicator function of ω for any ω ⊂ Ω; we consider an initial mosquito population proportional to 1_{B_1} + 1_{B_2}, where B_1 and B_2 are, respectively, the circle of center (−1.5, 1.5) and radius 0.5 and the circle of center (1.0, 1.0) and radius 0.5. This means that the mosquito population is initially concentrated in two disjoint circles with the same radius and the same amount of population; see Figure 2. We also consider appropriate values for the parameters and functions in (3). For the stopping criteria, we have taken ε_outer = 0.05 and ε_inner = 0.2.
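On a uniform grid, this initial datum can be assembled as follows (Python sketch); the grid resolution m and the amplitude c are assumed values, since the actual amplitude is not shown above:

```python
# Initial datum proportional to 1_{B1} + 1_{B2} on a uniform grid over
# Omega = [-2.5, 2.5]^2; m and c are illustrative assumptions.
import numpy as np

def initial_population(m=101, c=1.0):
    s = np.linspace(-2.5, 2.5, m)
    X, Y = np.meshgrid(s, s)
    in_B1 = (X + 1.5)**2 + (Y - 1.5)**2 <= 0.5**2   # disk centered at (-1.5, 1.5)
    in_B2 = (X - 1.0)**2 + (Y - 1.0)**2 <= 0.5**2   # disk centered at (1.0, 1.0)
    return c * (in_B1.astype(float) + in_B2.astype(float))
```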
The computations have been performed by using the software FreeFem++ [13], and all figures were made with Octave [14].

Convergence Behavior
In this section, we test the convergence of the algorithm described in Section 6.1. We take a(x, t) ≡ 1.0 in (3) and µ_0 = µ_1 = µ_2 = 1. The numerical solution of the optimality system (4)-(6) is computed for several discretization parameters. Moreover, we consider three different regular meshes T_{h_k}, k = 1, 2, 3, with m × m lateral nodes. Table 1 shows the size h_k, the number of vertices n_v, and the number of triangles n_T for each mesh. With no control, i.e., b = 0 in (3), the solution evolves to a final state that is displayed in Figure 3a,c,e for T = 2, 4, and 6, respectively. In these figures, we observe that the mosquito population spreads and increases in Ω. When the control is applied, due to the initial distribution of the population, we expect to obtain a trajectory that starts traveling towards the nearest cluster of mosquitoes (the circle B_2 in the present case) and then changes course towards the farthest cluster (B_1). This is indeed what is found in each case, as can be seen in Figure 3b,d,f, where, besides the trajectories, the corresponding computed optimal states are also shown at the final time. Table 2 reports the minimal and maximal values of the state u in all cases depicted in Figure 3; Table 3 presents the required number of outer iterations and the obtained values of the cost functional F for the three considered meshes.
From the results in Table 3, we can observe that, for T = 2 and T = 4, the change in the cost is below 4% for the third mesh and N = 200. For this reason, we select these parameters to perform the simulations in the following sections. To keep the computational cost at a reasonable level, we also use the same parameters for T = 6, where the relative change is about 13%.

Influence of the Functional Weights
In this section, we study the influence of the functional weights. As in the previous section, we take a(x, t) ≡ 1.0. Table 4 presents the required number of outer iterations and the values of the cost functional in four cases, I, II, III, and IV, for three values of the final time: T = 2, 4, and 6. Calculations were made by using N = 200 time steps and the third spatial mesh of Table 1. From the results in Table 4, we see that the cost is larger in case I for small T. This seems to indicate that, when we assign major relevance to the remaining population, short-time operations are not satisfactory. For larger T, the device cost becomes more important. We see, however, that, as T grows, the costs tend to equalize (and the particular values of the µ_i seem to lose relevance). Figures 4-6 depict the trajectories and the states u at T = 2, 4, and 6, respectively, for all cases described in Table 4. The minimal and maximal values of the mosquito population u for each case are reported in Table 5.

Two Examples with a(·, ·) Variable
In this section, we present the calculations obtained by considering the following two different definitions of the function a:

a(x, t) = a_0 max{1 − √(x² + y²), 0},   (40)

a(x, t) = a_0 max{1 − t, 0},   (41)
where a_0 is a positive constant associated with the proliferation speed of the insect population. With these choices, we study separately the influence of a on the spatial and temporal behavior of the state u and the control γ. Calculations were carried out for T = 2 with N = 200 time steps, the third spatial mesh of Table 1, and suitable values of a_0 and of the weights of the functional F. Figure 7 depicts the uncontrolled state, the controlled state, and the corresponding trajectory γ for T = 2 in the case of the a(x, t) defined in (40), for three different values of a_0: 1 (Figure 7a,b), 2 (Figure 7c,d), and 4 (Figure 7e,f). We observe that the optimal trajectories are qualitatively similar in all cases and stay close to the initial point. This is because the non-zero values of a (where the insects proliferate) depend on the spatial coordinate and are located in the unit ball centered at that point. Figure 8 depicts the same quantities for the a(x, t) defined in (41), again for a_0 = 1 (Figure 8a,b), 2 (Figure 8c,d), and 4 (Figure 8e,f). In these cases, we can observe the influence of the time variable t: the length of the trajectory γ increases with the value of a_0, approaching the behavior of the case a ≡ 1 studied in Section 7.1.
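The two rate choices can be coded directly (Python); the radial grouping √(x² + y²) in the first formula is our reading of (40), consistent with the support being the unit ball, but it is an assumption:

```python
# The two proliferation-rate choices (40)-(41); the sqrt grouping in (40) is a
# hedged reconstruction, as noted in the lead-in.
import numpy as np

def a_spatial(x, y, a0=1.0):
    """Rate (40): supported in the unit disk around the origin."""
    return a0 * np.maximum(1.0 - np.sqrt(x**2 + y**2), 0.0)

def a_temporal(t, a0=1.0):
    """Rate (41): active only for t < 1."""
    return a0 * np.maximum(1.0 - t, 0.0)
```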
The respective minimal and maximal values of the uncontrolled and controlled states corresponding to Figures 7 and 8 are reported in Table 7. The influence of the control γ on the state u, decreasing both its minimal and maximal values, is clearly observed.

Conclusions
We have performed a rigorous analysis of an optimal control problem concerning the spreading of mosquito populations; the optimality conditions have been used to devise a suitable numerical scheme and compute an optimal trajectory. The success in completing these tasks with the help of appropriate theoretical and numerical tools seems to indicate that a similar analysis can be performed in other, more complex cases. For instance, it may be more natural to consider, instead of (2), a cost functional whose three terms can be respectively viewed as measures of the true length of the path traveled by the device, the total fuel needed in the process, and the total mosquito population along (0, T). The L¹ norms of γ, γ̇, and u represent quantities more adequate to the model, although the analysis of the corresponding control problem is more involved.
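One plausible concrete form of such a functional, written with the L¹ norms mentioned above and the weights of (2) (a hedged sketch, not the paper's formula):

```latex
F_1(\gamma) \;=\; \mu_0 \int_0^T |\gamma(t)|\,dt
            \;+\; \mu_1 \int_0^T |\dot\gamma(t)|\,dt
            \;+\; \mu_2 \int_0^T\!\!\int_\Omega |u(x,t)|\,dx\,dt.
```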
Other realistic situations can also be taken into account. For instance, a very interesting setup appears when there are obstacles to the admissible trajectories. In addition, instead of the Malthusian growth rate for the mosquito population assumed in the present work, we could assume a Verhulstian or Gompertzian growth rate. The corresponding models and their associated control problems are being investigated at present.