
# Sufficiency for Purely Essentially Bounded Optimal Controls

by
Gerardo Sánchez Licea
Symmetry 2020, 12(2), 238; https://doi.org/10.3390/sym12020238
Submission received: 13 December 2019 / Revised: 10 January 2020 / Accepted: 17 January 2020 / Published: 4 February 2020

## Abstract

For optimal control problems of Bolza with variable and free end-points, nonlinear dynamics, nonlinear isoperimetric inequality and equality restrictions, and nonlinear pointwise mixed time-state-control inequality and equality constraints, sufficient conditions for strong minima are derived. The algorithm used to prove the main theorem of the paper includes a crucial symmetric inequality, making this technique a self-contained method that is independent of classical concepts such as embedding theorems from ordinary differential equations, Mayer fields, Riccati equations, or Hamilton–Jacobi theory. Moreover, the sufficiency theory given in this article is able to detect discontinuous solutions, that is, solutions which need to be neither continuous nor piecewise continuous but only essentially bounded.

## 1. Introduction

In [1], we studied the following nonparametric calculus of variations problem, denoted by $( P ¯ )$, which consists in minimizing a functional of the form
$J ( x ) : = ℓ ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L ( t , x ( t ) , x ˙ ( t ) ) d t$
over all $x : [ t 0 , t 1 ] → R n$ absolutely continuous satisfying the constraints
Elements x in $X : = A C ( [ t 0 , t 1 ] ; R n )$ are called arcs or trajectories, and a trajectory x is admissible if it satisfies the constraints. Here, $c ( t , x , x ˙ )$ denotes either $L ( t , x , x ˙ )$, $L γ ( t , x , x ˙ )$ $( γ = 1 , … , K )$, $ϕ ( t , x , x ˙ )$ or any of its partial derivatives of order less than or equal to two with respect to x and $x ˙$, where $ϕ = ( ϕ 1 , … , ϕ s )$ determines the set of mixed-constraints
$R ¯ : = { ( t , x , x ˙ ) ∈ [ t 0 , t 1 ] × R n × R n ∣ ϕ α ( t , x , x ˙ ) ≤ 0 ( α ∈ R ) , ϕ β ( t , x , x ˙ ) = 0 ( β ∈ S ) } ,$
with $R : = { 1 , … , r }$ and $S : = { r + 1 , … , s }$ $( r = 0 , 1 , … , s )$. If $r = 0$, then $R = ∅$ and we disregard statements involving $ϕ α$. Similarly, if $r = s$, then $S = ∅$ and we disregard statements involving $ϕ β$.
The main novelty of the work in [1] is that the sets $B i$ $( i = 0 , 1 )$ are any subsets of $R n$ satisfying a crucial relation
$B 0 × B 1 ⊂ Ψ ( R n ) ,$
where $Ψ$ is an adequately selected $C 2$ function. Thus, a novelty of the sufficiency results given in [1] concerns the fact that the end-points $x ( t i )$, $( i = 0 , 1 )$, are not only variable end-points lying in a smooth manifold determined by some functions and equalities or inequalities of the form
$Φ ( x ( t 0 ) , x ( t 1 ) ) = 0 or Φ ( x ( t 0 ) , x ( t 1 ) ) ≤ 0 ,$
but also completely free, in the sense that $x ( t i ) ∈ B i$ $( i = 0 , 1 )$.
In this paper, we generalize the results of [1] to a general optimal control setting by establishing and proving two new sufficiency results for strong minima and for optimal control problems of Bolza with variable and free end-points, nonlinear dynamics, nonlinear isoperimetric inequality and equality restrictions, and nonlinear mixed time-state-control pointwise inequality and equality constraints. Concretely, the nonparametric optimal control problem we deal with, denoted by $( P ¯ )$, consists in minimizing a functional
$J ( x , u ) : = ℓ ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L ( t , x ( t ) , u ( t ) ) d t$
over all $( x , u ) ∈ A$ satisfying the constraints
$x ( t i ) ∈ B i for i = 0 , 1 . x ˙ ( t ) = g ( t , x ( t ) , u ( t ) ) ( a . e . in [ t 0 , t 1 ] ) . J i ( x , u ) : = ℓ i ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L i ( t , x ( t ) , u ( t ) ) d t ≤ 0 ( i = 1 , … , k ) . J j ( x , u ) : = ℓ j ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L j ( t , x ( t ) , u ( t ) ) d t = 0 ( j = k + 1 , … , K ) . ( t , x ( t ) , u ( t ) ) ∈ R ¯ ( t ∈ [ t 0 , t 1 ] ) .$
Here, elements $( x , u )$ in $A : = A C ( [ t 0 , t 1 ] ; R n ) × L ∞ ( [ t 0 , t 1 ] ; R m )$ are called processes, a process $( x , u )$ is admissible if it satisfies the constraints, and the set $R ¯$ is defined by the set of mixed time-state-control constraints
$R ¯ : = { ( t , x , u ) ∈ [ t 0 , t 1 ] × R n × R m ∣ ϕ α ( t , x , u ) ≤ 0 ( α ∈ R ) , ϕ β ( t , x , u ) = 0 ( β ∈ S ) } .$
Even though the current optimal control problem has a statement similar to that of the calculus of variations problem posed in [1], and even though the sufficiency approach presented in this paper parallels the one studied in [1], it is crucial to identify the dissimilarities. For instance, functions such as $L ( t , x , u )$, $L γ ( t , x , u )$ $( γ = 1 , … , K )$, $g ( t , x , u )$ or $ϕ ( t , x , u )$ have as their third independent variable a control u whose role, in general, is not that of the derivative of the trajectories x. Moreover, the motions $x ˙$ of the absolutely continuous trajectories x are controlled by a nonlinear dynamics g, that is, $x ˙$ and g must satisfy the relation
$x ˙ ( t ) = g ( t , x ( t ) , u ( t ) ) a . e . in [ t 0 , t 1 ] .$
When $g ( t , x , u ) ≢ u$, the optimal control theory of this paper lies beyond the scope of the sufficiency theory given in [1] (see Examples 3.3 and 3.4 of Section 3); in particular, the solutions provided in this paper cannot be obtained from the results of [1].
On the other hand, let us mention that the proof of the main sufficiency theorem of the article strongly depends upon a fundamental equality, commonly called the transversality condition, which is inherited from the calculus of variations theory, and a fundamental symmetric inequality condition which arises from the original algorithm used to prove the previously mentioned sufficiency result. It is worth mentioning that this method has a self-contained nature and is independent of classical or alternative sufficiency techniques frequently used in optimal control. Some of these approaches can be found in [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. To give a brief overview of some of these treatments, let us mention that, in [2], sufficiency is obtained by means of the construction of a bounded solution to a matrix-valued Riccati equation; in [3], a verification function satisfying the Hamilton–Jacobi equation and a quadratic function satisfying a Hamilton–Jacobi inequality become fundamental tools to develop sufficiency; in [4], the insertion of the optimal control problem in a Banach space becomes a fundamental component of the corresponding sufficiency theory; in [5], an alternative algorithm involving convexity arguments provides sufficient conditions for local minima in the calculus of variations; in [6], an indirect method together with a generalized Jacobi theory based on conjugate points provides sufficiency for local minima in an unconstrained optimal control problem of Lagrange with fixed end-points; and, in [7], a two-norm approach yields a theory which not only provides sufficiency in certain classes of optimal control problems but whose technique also allows measuring the deviation between the cost of any admissible process and the cost of the candidate optimal control by means of the classical norm of the Banach space $L 2$.
It is worth mentioning that the optimal control sufficiency theories having the same degree of applicability as that studied in this paper generally depend upon the hypothesis of continuity of the proposed optimal controls (see, for example, [2,3,4,5,6,7,8,10,12,13,14,15,16,19,20,21,23]), where that crucial assumption is an indispensable device in the corresponding sufficiency treatments. A distinctive feature of the new sufficiency theory presented in this paper is its applicability to optimal control problems in which the proposed optimal control need not satisfy that hypothesis. In particular, in Section 3, we solve an optimal control problem in which the admissible process satisfying all conditions of the new corollary has a discontinuous optimal control, that is, the latter is neither continuous nor piecewise continuous but only essentially bounded. Additionally, it is important to point out that the conclusions furnished in the examples of Section 3 cannot be obtained by a simple inspection of the constraints which must be satisfied by feasible processes; in other words, the examples given in Section 3 show how one of the new sufficiency results of this article fulfills the principal characteristic which a sufficiency theorem must have, namely, the ability to detect solutions whose nature is neither trivial nor evident.
Some optimal control treatments with a lesser degree of generality than the one studied in this article, and with no continuity assumptions on the proposed optimal controls, can be found in [24]. There, an optimal control problem of Lagrange with fixed end-points, nonlinear dynamics, and equality control constraints is studied. The main novelty of the work in [24] is precisely the removal of the continuity assumption on the proposed optimal controls in the main sufficiency theorem of that paper. Additionally, this result has been generalized in [25] to optimal control problems containing equality restrictions depending not only on the controls but also on the time and the states. Moreover, sufficient conditions for weak minima for a fixed end-points optimal control problem of Lagrange containing inequality and equality constraints in the controls, with no continuity assumptions on the optimal controls, can be found in [26].
The main features of the new sufficiency theorems of this paper can be outlined as follows: given an admissible process which needs to be neither continuous nor piecewise continuous but only essentially bounded, the ingredients of the new sufficiency results of this article are two crucial first-order sufficient conditions involving the Hamiltonian of the problem, the classical transversality condition, an essential symmetric inequality which arises from the properties of the original algorithm used to prove the main theorem of the article, a condition similar to the necessary condition of Legendre–Clebsch, the positivity of the second variation on a cone of critical directions, and three conditions involving some Weierstrass excess functions.
The paper is organized as follows. In Section 2, we pose the parametric optimal control problem we deal with, together with some basic definitions and the statement of the main result of the article. In Section 3, we state the nonparametric optimal control problem we study, some basic definitions, a corollary that is also one of the main results of the paper, and two examples that show how even the nonexpert can manage to apply the result. Section 4 is devoted to stating three auxiliary lemmas, on which the proof of the theorem is strongly based. Section 5 is dedicated to the proof of the main theorem of the article, that is, the proof of Theorem 1. In Section 6, we prove the lemmas given in Section 4, and in the final section we present some auxiliary results that are helpful in solving Example 1 of Section 3.

## 2. A Parametric Problem of Bolza and the Main Result

Suppose we are given an interval $T : = [ t 0 , t 1 ]$ in $R$, and functions $l ( b ) : R p → R$, $l γ ( b ) : R p → R$ $( γ = 1 , … , K )$, $Ψ i ( b ) : R p → R n$ $( i = 0 , 1 )$, $L ( t , x , u ) : T × R n × R m → R$, $L γ ( t , x , u ) : T × R n × R m → R$ $( γ = 1 , … , K )$, $f ( t , x , u ) : T × R n × R m → R n$, and $φ ( t , x , u ) : T × R n × R m → R s$. Set
$R : = { ( t , x , u ) ∈ T × R n × R m ∣ φ α ( t , x , u ) ≤ 0 ( α ∈ R ) , φ β ( t , x , u ) = 0 ( β ∈ S ) }$
where $R : = { 1 , … , r }$ and $S : = { r + 1 , … , s }$ $( r = 0 , 1 , … , s )$. If $r = 0$, then $R = ∅$ and we disregard statements involving $φ α$. Similarly, if $r = s$, then $S = ∅$ and we disregard statements involving $φ β$.
It is assumed throughout the paper that L, $L γ$ $( γ = 1 , … , K )$, f and $φ = ( φ 1 , … , φ s )$ have first and second derivatives with respect to x and u. Moreover, we assume that the functions l, $l γ$ $( γ = 1 , … , K )$ and $Ψ i$ $( i = 0 , 1 )$ are of class $C 2$ on $R p$. In addition, if we denote by $c ( t , x , u )$ either $L ( t , x , u )$, $L γ ( t , x , u )$ $( γ = 1 , … , K )$, $f ( t , x , u )$, $φ ( t , x , u )$ or any of its partial derivatives of order less than or equal to two with respect to x and u, we assume that, if $C$ is any bounded subset of $T × R n × R m$, then $| c ( C ) |$ is a bounded subset of $R$. Additionally, we assume that, if ${ ( Γ q , Λ q ) }$ is any sequence in $A C ( T ; R n ) × L ∞ ( T ; R m )$ such that, for some $Υ ⊂ T$ measurable and some $( Γ 0 , Λ 0 ) ∈ A C ( T ; R n ) × L ∞ ( T ; R m )$, $( Γ q ( t ) , Λ q ( t ) ) → ( Γ 0 ( t ) , Λ 0 ( t ) )$ uniformly on $Υ$, then, for all $q ∈ N$, $c ( t , Γ q ( t ) , Λ q ( t ) )$ is measurable on $Υ$ and
$c ( t , Γ q ( t ) , Λ q ( t ) ) → c ( t , Γ 0 ( t ) , Λ 0 ( t ) ) uniformly on Υ .$
Note that all conditions above concerning the functions L, $L γ$ $( γ = 1 , … , K )$, f and $φ$, are satisfied if the functions L, $L γ$ $( γ = 1 , … , K )$, f and $φ$ and their first and second derivatives with respect to x and u are continuous on $T × R n × R m$.
Define
$X : = A C ( T ; R n ) = { x : T → R n ∣ x is absolutely continuous on T } , U η : = L ∞ ( T ; R η ) = { u : T → R η ∣ u is essentially bounded on T } .$
Here, the natural number $η$ denotes the dimension of the codomain of the controls $u : T → R m$ or of the multipliers $μ : T → R s$ associated to the mixed pointwise constraints.
Set
$A : = X × U m × R p .$
We use the notation $z b$ to denote any element $z b : = ( z , b ) = ( x , u , b ) ∈ A$. The parametric optimal control problem we deal with, denoted by (P), is that of minimizing the functional
$I ( z b ) : = l ( b ) + ∫ t 0 t 1 L ( t , x ( t ) , u ( t ) ) d t$
over all $z b ∈ A$ satisfying the constraints
$x ˙ ( t ) = f ( t , x ( t ) , u ( t ) ) ( a . e . in T ) . x ( t i ) = Ψ i ( b ) for i = 0 , 1 . I i ( z b ) : = l i ( b ) + ∫ t 0 t 1 L i ( t , x ( t ) , u ( t ) ) d t ≤ 0 ( i = 1 , … , k ) . I j ( z b ) : = l j ( b ) + ∫ t 0 t 1 L j ( t , x ( t ) , u ( t ) ) d t = 0 ( j = k + 1 , … , K ) . ( t , x ( t ) , u ( t ) ) ∈ R ( t ∈ T ) .$
The elements $b = ( b 1 , … , b p ) ∗$ (the notation * denotes transpose) are called parameters, the elements $z b$ in $A$ are called processes, and a process $z b$ is admissible if it satisfies the constraints. The notation $z 0 b 0$ refers to an element $( z 0 , b 0 ) = ( x 0 , u 0 , b 0 ) ∈ A$.
Let us now introduce some definitions that are used throughout the paper.
• A process $z 0 b 0$ solves $( P )$ if it is admissible and $I ( z 0 b 0 ) ≤ I ( z b )$ for all admissible processes $z b$. An admissible process $z 0 b 0$ is called a strong minimum of (P) if it is a minimum of I relative to the following norm
$∥ z b ∥ : = | b | + sup t ∈ T | x ( t ) | = | b | + ∥ x ∥ C ,$
that is, if for some $ϵ > 0$, $I ( z 0 b 0 ) ≤ I ( z b )$ for all admissible processes satisfying $∥ z b − z 0 b 0 ∥ < ϵ$.
• For all $( x , u ) ∈ X × U m$, we use the notation $( z ˜ ( t ) )$ to represent $( t , x ( t ) , u ( t ) )$. In addition, $( z ˜ 0 ( t ) )$ represents $( t , x 0 ( t ) , u 0 ( t ) )$.
• Given K real numbers $λ 1 , … , λ K$, consider the functional $I 0 : A → R$ defined by
$I 0 ( z b ) : = I ( z b ) + ∑ γ = 1 K λ γ I γ ( z b ) = l 0 ( b ) + ∫ t 0 t 1 L 0 ( z ˜ ( t ) ) d t ,$
where $l 0 : R p → R$ is given by
$l 0 ( b ) : = l ( b ) + ∑ γ = 1 K λ γ l γ ( b ) ,$
and $L 0 : T × R n × R m → R$ is given by
$L 0 ( t , x , u ) : = L ( t , x , u ) + ∑ γ = 1 K λ γ L γ ( t , x , u ) .$
• Given $λ 1 , … , λ K$, for all $( t , x , u , ρ , μ ) ∈ T × R n × R m × R n × R s$, define the Hamiltonian of the problem by
$H ( t , x , u , ρ , μ ) : = 〈 ρ , f ( t , x , u ) 〉 − L 0 ( t , x , u ) − 〈 μ , φ ( t , x , u ) 〉 ,$
where $ρ ∈ R n$ denotes the adjoint variable and $μ ∈ R s$ is the associated multiplier of the mixed time-state-control constraints.
• Given $( ρ , μ ) ∈ X × U s$, and $λ 1 , … , λ K$, for all $( t , x , u ) ∈ T × R n × R m$, define the following function associated to the Hamiltonian,
$F 0 ( t , x , u ) : = − H ( t , x , u , ρ ( t ) , μ ( t ) ) − 〈 ρ ˙ ( t ) , x 〉 .$
• Given $( ρ , μ ) ∈ X × U s$ and $λ 1 , … , λ K$, define $J 0 : A → R$ by
$J 0 ( z b ) : = 〈 ρ ( t 1 ) , x ( t 1 ) 〉 − 〈 ρ ( t 0 ) , x ( t 0 ) 〉 + l 0 ( b ) + ∫ t 0 t 1 F 0 ( z ˜ ( t ) ) d t .$
• The notation $w β$ refers to any element $( y , v , β )$ in $X × L 2 ( T ; R m ) × R p$.
• For any $z b ∈ A$ and any $w β ∈ X × L 2 ( T ; R m ) × R p$ consider the first variation of $I γ$ $( γ = 1 , … , K )$ with respect to $z b$ over $w β$ which is given by
$I γ ′ ( z b ; w β ) : = l γ ′ ( b ) β + ∫ t 0 t 1 { L γ x ( z ˜ ( t ) ) y ( t ) + L γ u ( z ˜ ( t ) ) v ( t ) } d t .$
• For all $( t , x , u ) ∈ T × R n × R m$, denote by
$I a ( t , x , u ) : = { α ∈ R ∣ φ α ( t , x , u ) = 0 } ,$
the set of active indices of $( t , x , u )$ with respect to the mixed inequality constraints.
• For all $z b ∈ A$, denote by
$i a ( z b ) : = { i = 1 , … , k ∣ I i ( z b ) = 0 } ,$
the set of active indices of $z b$ with respect to the isoperimetric inequality constraints.
• Given $z b ∈ A$, let $Y ( z b )$ be the set of all $w β ∈ X × L 2 ( T ; R m ) × R p$ satisfying
$y ˙ ( t ) = f x ( z ˜ ( t ) ) y ( t ) + f u ( z ˜ ( t ) ) v ( t ) ( a . e . in T ) , y ( t i ) = Ψ i ′ ( b ) β ( i = 0 , 1 ) , I i ′ ( z b ; w β ) ≤ 0 ( i ∈ i a ( z b ) ) , I j ′ ( z b ; w β ) = 0 ( j = k + 1 , … , K ) , φ α x ( z ˜ ( t ) ) y ( t ) + φ α u ( z ˜ ( t ) ) v ( t ) ≤ 0 ( a . e . in T , α ∈ I a ( z ˜ ( t ) ) ) , φ β x ( z ˜ ( t ) ) y ( t ) + φ β u ( z ˜ ( t ) ) v ( t ) = 0 ( a . e . in T , β ∈ S ) .$
The set $Y ( z b )$ is called the cone of critical directions along $z b$.
• Given $( ρ , μ ) ∈ X × U s$, and $λ 1 , … , λ K$, for any $z b ∈ A$ and any $w β ∈ X × L 2 ( T ; R m ) × R p$, we define the second variation of $J 0$ with respect to $z b$ over $w β$, by
$J 0 ″ ( z b ; w β ) : = 〈 l 0 ″ ( b ) β , β 〉 + ∫ t 0 t 1 2 Ω 0 ( z ; t , y ( t ) , v ( t ) ) d t ,$
where, for all $( t , y , v ) ∈ T × R n × R m$,
$2 Ω 0 ( z ; t , y , v ) : = 〈 y , F 0 x x ( z ˜ ( t ) ) y 〉 + 2 〈 y , F 0 x u ( z ˜ ( t ) ) v 〉 + 〈 v , F 0 u u ( z ˜ ( t ) ) v 〉 .$
• Denote by $E 0$ the Weierstrass excess function of $F 0$, given by
$E 0 ( t , x , u , v ) : = F 0 ( t , x , v ) − F 0 ( t , x , u ) − F 0 u ( t , x , u ) ( v − u ) .$
• Similarly, the Weierstrass excess function of $L γ$ $( γ = 1 , … , K )$ is given by
$E γ ( t , x , u , v ) : = L γ ( t , x , v ) − L γ ( t , x , u ) − L γ u ( t , x , u ) ( v − u ) .$
• For all $π = ( π 1 , … , π n ) ∗ ∈ R n$ or $π = ( π 1 , … , π m ) ∗ ∈ R m$, set
$V ( π ) : = ( 1 + | π | 2 ) 1 / 2 − 1 .$
• For all $x ∈ X$ and all $u ∈ L 1 ( T ; R m )$, define
$Q ( x , u ) : = max { Q 1 ( x ) , Q 2 ( u ) }$
where
$Q 1 ( x ) : = ∫ t 0 t 1 V ( x ˙ ( t ) ) d t and Q 2 ( u ) : = ∫ t 0 t 1 V ( u ( t ) ) d t .$
• For all $x ∈ X$ and all $u ∈ L 1 ( T ; R m )$, define
$D ( x , u ) : = max { D 1 ( x ) , D 2 ( x , u ) }$
where
$D 1 ( x ) : = V ( x ( t 0 ) ) + Q 1 ( x ) and D 2 ( x , u ) : = V ( x ( t 0 ) ) + Q 2 ( u ) .$
• As mentioned above, the symbol * denotes transpose.
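To make the auxiliary functionals above concrete, the following sketch (ours, for illustration only; trajectories are sampled on a uniform grid and the integrals are approximated by a trapezoidal rule, assumptions that are not part of the paper) implements V, the integral functionals behind Q and D, and a generic Weierstrass excess function for a scalar control:

```python
import math

def V(pi):
    """V(pi) = (1 + |pi|^2)^{1/2} - 1 for a vector pi; zero only at pi = 0."""
    return math.sqrt(1.0 + sum(p * p for p in pi)) - 1.0

def integral_V(samples, dt):
    """Trapezoidal approximation of the integral of V over sampled values."""
    vals = [V(p) for p in samples]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def Q(xdot_samples, u_samples, dt):
    """Q(x, u) = max{Q1(x), Q2(u)} with Q1, Q2 the integrals of V."""
    return max(integral_V(xdot_samples, dt), integral_V(u_samples, dt))

def D(x_t0, xdot_samples, u_samples, dt):
    """D(x, u) = max{D1, D2}, each adding V(x(t0)) to the matching integral."""
    q1 = integral_V(xdot_samples, dt)
    q2 = integral_V(u_samples, dt)
    return max(V(x_t0) + q1, V(x_t0) + q2)

def excess(F, Fu, t, x, u, v):
    """Weierstrass excess E(t,x,u,v) = F(t,x,v) - F(t,x,u) - Fu(t,x,u)(v-u)."""
    return F(t, x, v) - F(t, x, u) - Fu(t, x, u) * (v - u)

# For F(t, x, u) = u^2 (scalar control) the excess is (v - u)^2 >= 0,
# the classical signature of convexity in u.
F = lambda t, x, u: u ** 2
Fu = lambda t, x, u: 2 * u
assert excess(F, Fu, 0.0, 0.0, 1.0, 3.0) == 4.0
```

For the zero trajectory every sample vanishes and Q = D = 0, consistent with V vanishing only at the origin; for vector controls the product `Fu * (v - u)` would become an inner product.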
The following theorem is the main result of the article. This theorem gives sufficient conditions for a strong minimum of problem (P). Hypothesis (i) of Theorem 1 is commonly called the transversality condition; Hypothesis (ii) is a symmetric inequality which arises from the properties of the original proof of the theorem; Hypothesis (iii) is a condition similar to the necessary condition of Legendre–Clebsch; Hypothesis (iv) is the positivity of the second variation on the cone of critical directions; and Hypothesis (v) involves three conditions related to the Weierstrass excess functions. Note that the proposed optimal control need be neither continuous nor piecewise continuous but only essentially bounded.
Theorem 1.
Let $z 0 b 0$ be an admissible process. Assume that $I a ( z ˜ 0 ( · ) )$ is piecewise constant on T, and there exist $( ρ , μ ) ∈ X × U s$ with $μ α ( t ) ≥ 0$ and $μ α ( t ) φ α ( z ˜ 0 ( t ) ) = 0$ $( α ∈ R , t ∈ T )$, two positive numbers $δ , ϵ$, and multipliers $λ 1 , … , λ K$ with $λ i ≥ 0$ and $λ i I i ( z 0 b 0 ) = 0$ $( i = 1 , … , k )$ such that
$ρ ˙ ( t ) = − H x ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ( a . e . i n T ) ,$
$H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ( t ∈ T ) ,$
and the following holds
(i)
$l 0 ′ ∗ ( b 0 ) + Ψ 1 ′ ∗ ( b 0 ) ρ ( t 1 ) − Ψ 0 ′ ∗ ( b 0 ) ρ ( t 0 ) = 0$.
(ii)
$ρ ∗ ( t 1 ) Ψ 1 ″ ( b 0 ; β ) − ρ ∗ ( t 0 ) Ψ 0 ″ ( b 0 ; β ) ≥ 0 for all β ∈ R p$.
(iii)
$H u u ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ≤ 0$$( a . e . in T )$.
(iv)
$J 0 ″ ( z 0 b 0 ; w β ) > 0$ for all nonnull $w β ∈ Y ( z 0 b 0 )$.
(v)
For all $z b$ admissible with $∥ x − x 0 ∥ C < ϵ$,
a.
$E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) ≥ 0$$( a . e . in T )$.
b.
$∫ t 0 t 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ δ Q ( z − z 0 )$.
c.
$∫ t 0 t 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ δ | ∫ t 0 t 1 E γ ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t |$$( γ = 1 , … , K )$.
Then, for some $θ 1 , θ 2 > 0$ and all admissible processes $z b$ satisfying $∥ z b − z 0 b 0 ∥ < θ 1$,
$I ( z b ) ≥ I ( z 0 b 0 ) + θ 2 min { | b − b 0 | 2 , D ( z − z 0 ) } .$
In particular, $z 0 b 0$ is a strong minimum of (P).

## 3. A Nonparametric Problem of Bolza

Suppose we are given an interval $T : = [ t 0 , t 1 ]$ in $R$, two sets $B 0 , B 1 ⊂ R n$ and functions $ℓ ( x 1 , x 2 ) : R n × R n → R$, $ℓ γ ( x 1 , x 2 ) : R n × R n → R$ $( γ = 1 , … , K )$, $L ( t , x , u ) : T × R n × R m → R$, $L γ ( t , x , u ) : T × R n × R m → R$ $( γ = 1 , … , K )$, $g ( t , x , u ) : T × R n × R m → R n$, and $ϕ ( t , x , u ) : T × R n × R m → R s$. Set
$R ¯ : = { ( t , x , u ) ∈ T × R n × R m ∣ ϕ α ( t , x , u ) ≤ 0 ( α ∈ R ) , ϕ β ( t , x , u ) = 0 ( β ∈ S ) }$
where $R : = { 1 , … , r }$ and $S : = { r + 1 , … , s }$ $( r = 0 , 1 , … , s )$. If $r = 0$, then $R = ∅$ and we disregard statements involving $ϕ α$. Similarly, if $r = s$, then $S = ∅$ and we disregard statements involving $ϕ β$.
It is assumed throughout this section that $L$, $L γ$ $( γ = 1 , … , K )$, g and $ϕ = ( ϕ 1 , … , ϕ s )$ have first and second derivatives with respect to x and u. Moreover, we assume that the functions $ℓ$, $ℓ γ$ $( γ = 1 , … , K )$ are of class $C 2$ on $R n × R n$. In addition, if we denote by $c ( t , x , u )$ either $L ( t , x , u )$, $L γ ( t , x , u )$ $( γ = 1 , … , K )$, $g ( t , x , u )$, $ϕ ( t , x , u )$ or any of its partial derivatives of order less than or equal to two with respect to x and u, we assume that all the assumptions posed in Section 2 in the statement of the problem are satisfied.
As in Section 2, $X$ denotes the set of absolutely continuous functions mapping T to $R n$ and $U η : = L ∞ ( T ; R η )$ the set of essentially bounded functions mapping T to $R η$. Set $A : = X × U m$.
The nonparametric optimal control problem we deal with, denoted by $( P ¯ )$, consists in minimizing the functional
$J ( x , u ) : = ℓ ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L ( t , x ( t ) , u ( t ) ) d t$
over all $( x , u ) ∈ A$ satisfying the constraints
$x ( t i ) ∈ B i for i = 0 , 1 . x ˙ ( t ) = g ( t , x ( t ) , u ( t ) ) ( a . e . in T ) . J i ( x , u ) : = ℓ i ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L i ( t , x ( t ) , u ( t ) ) d t ≤ 0 ( i = 1 , … , k ) . J j ( x , u ) : = ℓ j ( x ( t 0 ) , x ( t 1 ) ) + ∫ t 0 t 1 L j ( t , x ( t ) , u ( t ) ) d t = 0 ( j = k + 1 , … , K ) . ( t , x ( t ) , u ( t ) ) ∈ R ¯ ( t ∈ T ) .$
The elements $( x , u )$ in A are called processes, and a process $( x , u )$ is admissible if it satisfies the constraints.
A process $( x 0 , u 0 )$ solves $( P ¯ )$ if it is admissible and $J ( x 0 , u 0 ) ≤ J ( x , u )$ for all admissible processes $( x , u )$. An admissible process $( x 0 , u 0 )$ is called a strong minimum of $( P ¯ )$ if it is a minimum of $J$ relative to the norm
$∥ x ∥ : = sup t ∈ T | x ( t ) | ,$
that is, if for some $ϵ > 0$, $J ( x 0 , u 0 ) ≤ J ( x , u )$ for all admissible processes satisfying $∥ x − x 0 ∥ < ϵ$.
Let $Ψ : R n → R n × R n$ be any function of class $C 2$ such that $B 0 × B 1 ⊂ Ψ ( R n )$. Associate the nonparametric problem $( P ¯ )$ with the parametric problem of Section 2, which we denote by $( P Ψ )$, that is, $( P Ψ )$ is the parametric problem given in Section 2, with $p = n$, $l = ℓ ∘ Ψ$, $l γ = ℓ γ ∘ Ψ$ $( γ = 1 , … , K )$, $L = L$, $L γ = L γ$ $( γ = 1 , … , K )$, $f = g$, $φ = ϕ$, and $Ψ 0 , Ψ 1$ the components of $Ψ$, that is, $Ψ = ( Ψ 0 , Ψ 1 )$. Recall that the notation $z b$ means $( x , u , b )$ where $b ∈ R n$ is a parameter.
Lemma 1.
The following is satisfied:
(i)
$z b$ is an admissible process of $( P Ψ )$ if and only if $z = ( x , u )$ is an admissible process of $( P ¯ )$ and $b ∈ Ψ − 1 ( x ( t 0 ) , x ( t 1 ) )$.
(ii)
If $z b$ is an admissible process of $( P Ψ )$, then
$J ( x , u ) = I ( z b ) .$
(iii)
If $z 0 b 0$ is a solution of $( P Ψ )$, then $( x 0 , u 0 )$ is a solution of $( P ¯ )$.
Proof.
Copy the proof of Lemma 3.1 of [1]. □
The following corollary, which is a consequence of Theorem 1 and Lemma 1, provides a set of sufficient conditions for problem $( P ¯ )$. Once again, it is worth observing that the control of the process proposed as a strong minimum need be neither continuous nor piecewise continuous but only essentially bounded.
Corollary 1.
Let $Ψ : R n → R n × R n$ be any function of class $C 2$ such that $B 0 × B 1 ⊂ Ψ ( R n )$ and let $( P Ψ )$ be the parametric problem defined in the previous paragraph of Lemma 1. Let $z 0 b 0$ be an admissible process of $( P Ψ )$. Assume that $I a ( z ˜ 0 ( · ) )$ is piecewise constant on T, and there exist $( ρ , μ ) ∈ X × U s$ with $μ α ( t ) ≥ 0$ and $μ α ( t ) φ α ( z ˜ 0 ( t ) ) = 0$ $( α ∈ R , t ∈ T )$, two positive numbers $δ , ϵ$, and multipliers $λ 1 , … , λ K$ with $λ i ≥ 0$ and $λ i I i ( z 0 b 0 ) = 0$ $( i = 1 , … , k )$ such that
$ρ ˙ ( t ) = − H x ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ( a . e . in T ) ,$
$H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ( t ∈ T ) ,$
and the following holds
(i)
$l 0 ′ ∗ ( b 0 ) + Ψ 1 ′ ∗ ( b 0 ) ρ ( t 1 ) − Ψ 0 ′ ∗ ( b 0 ) ρ ( t 0 ) = 0$.
(ii)
$ρ ∗ ( t 1 ) Ψ 1 ″ ( b 0 ; β ) − ρ ∗ ( t 0 ) Ψ 0 ″ ( b 0 ; β ) ≥ 0 for all β ∈ R n$.
(iii)
$H u u ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ≤ 0$$( a . e . in T )$.
(iv)
$J 0 ″ ( z 0 b 0 ; w β ) > 0$ for all nonnull $w β ∈ Y ( z 0 b 0 )$.
(v)
For all $z b$ admissible with $∥ x − x 0 ∥ < ϵ$,
a.
$E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) ≥ 0$$( a . e . in T )$.
b.
$∫ t 0 t 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ δ Q ( z − z 0 )$.
c.
$∫ t 0 t 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ δ | ∫ t 0 t 1 E γ ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t |$$( γ = 1 , … , K )$.
Then, $( x 0 , u 0 )$ is a strong minimum of $( P ¯ )$.
Now, we illustrate by means of two examples the properties of the sufficiency theory developed in this article. In Example 1, we solve an inequality-constrained nonparametric optimal control problem $( P ¯ )$ with a completely free final end-point in which the proposed optimal control is neither continuous nor piecewise continuous but only essentially bounded and, moreover, for some $( ρ , μ ) ∈ X × U 4$ the element $( x 0 , u 0 , ρ , μ )$ satisfies the first-order sufficient conditions
$ρ ˙ ( t ) = − H x ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ( a . e . in T ) , H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ( t ∈ T ) ,$
as well as Conditions (i)–(v) of Corollary 1; in this way, $( x 0 , u 0 )$ becomes a strong minimum of $( P ¯ )$.
Example 1.
Let $u 02 : [ 0 , 1 ] → R$ be given by
$u_{02}(t) := \begin{cases} 1, & t = 0, \\ 1, & t \in \bigcup_{j=1}^{\infty} \left[ \frac{1}{2j}, \frac{1}{2j-1} \right], \\ -1, & t \in \bigcup_{j=2}^{\infty} \left( \frac{1}{2j-1}, \frac{1}{2j-2} \right). \end{cases}$
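Reading the fractions in the display above as $1 / ( 2 j )$, $1 / ( 2 j − 1 )$, and $1 / ( 2 j − 2 )$ (with this reading the intervals tile $( 0 , 1 ]$), the control $u 02$ can be evaluated pointwise by the following sketch of ours (floating-point endpoints are handled only approximately):

```python
import math

def u02(t):
    """Sketch of the essentially bounded control u_{02} on [0, 1]:
    +1 at t = 0 and on each closed interval [1/(2j), 1/(2j-1)], j = 1, 2, ...;
    -1 on each open interval (1/(2j-1), 1/(2j-2)), j = 2, 3, ...
    """
    if t == 0.0:
        return 1.0
    r = 1.0 / t
    # Endpoints t = 1/n belong to the closed intervals, where u02 = +1.
    if r == math.floor(r):
        return 1.0
    # Otherwise t lies in (1/n, 1/(n-1)) with n = ceil(1/t): the value is +1
    # when n is even (t inside some [1/(2j), 1/(2j-1)]) and -1 when n is odd.
    return 1.0 if math.ceil(r) % 2 == 0 else -1.0

assert u02(0.75) == 1.0   # t in [1/2, 1]
assert u02(0.4) == -1.0   # t in (1/3, 1/2)
```

The sign switches accumulate at t = 0, so u02 has no one-sided limit there and cannot be piecewise continuous, yet it is measurable with |u02| = 1 everywhere.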
Consider the nonparametric optimal control problem $( P ¯ )$ of minimizing
$J ( x , u ) : = x 2 ( 1 ) + ∫ 0 1 { u 1 ( t ) + u 1 2 ( t ) cos ( 2 π u 2 ( t ) ) − x 2 ( t ) } d t$
over all $( x , u ) ∈ A$ satisfying the constraints
$x ( 0 ) ∈ { 0 } , x ( 1 ) ∈ R . x ˙ ( t ) = − u 1 ( t ) u 2 ( t ) − 1 2 u 1 ( t ) ( a . e . in [ 0 , 1 ] ) . J 1 ( x , u ) : = ∫ 0 1 { ( 1 / 2 ) ( u 1 ( t ) u 2 ( t ) + ( 1 / 2 ) u 1 ( t ) ) 2 − u 1 2 ( t ) } d t ≤ 0 . ( t , x ( t ) , u ( t ) ) ∈ R ¯ ( t ∈ [ 0 , 1 ] )$
where
$R ¯ : = { ( t , x , u ) ∈ [ 0 , 1 ] × R × R 2 ∣ u 1 ≥ 0 , u 2 − u 02 ( t ) ≤ 1 , u 02 ( t ) − u 2 ≤ 1 , u 2 2 = 1 } ,$
$A : = X × U 2 .$
For this problem, we consider the data of the nonparametric problem given in this section which are given by $T = [ 0 , 1 ]$, $n = 1$, $m = 2$, $r = 3$, $s = 4$, $k = 1$, $K = 1$, $B 0 = { 0 }$, $B 1 = R$, $ℓ ( x 1 , x 2 ) = x 2 2$, $L ( t , x , u ) = u 1 + u 1 2 cos ( 2 π u 2 ) − x 2$, $L 1 ( t , x , u ) = 1 2 ( u 1 u 2 + 1 2 u 1 ) 2 − u 1 2$, $g ( t , x , u ) = − u 1 u 2 − 1 2 u 1$, $ϕ 1 ( t , x , u ) = − u 1$, $ϕ 2 ( t , x , u ) = u 2 − u 02 ( t ) − 1$, $ϕ 3 ( t , x , u ) = u 02 ( t ) − u 2 − 1$ and $ϕ 4 ( t , x , u ) = 1 − u 2 2$.
As one readily verifies, the functions $ℓ$, $L$, $L 1$, g, and $ϕ = ( ϕ 1 , ϕ 2 , ϕ 3 , ϕ 4 )$ satisfy all the assumptions posed in this section in the statement of the problem.
Moreover, it is evident that the process $z 0 = ( x 0 , u 0 ) = ( x 0 , u 01 , u 02 )$ with $x 0 ≡ 0$, $u 01 ≡ 0$ and $u 02$ given above, is admissible of $( P ¯ )$. Let $Ψ : R → R × R$ be defined by $Ψ ( b ) : = ( 0 , b )$. Clearly, $Ψ$ is $C 2$ in $R$ and $B 0 × B 1 ⊂ Ψ ( R )$. The associated parametric problem of Section 2 denoted by $( P Ψ )$ has the following data, $p = 1$, $l = ℓ ∘ Ψ$, $L = L$, $L 1 = L 1$, $f = g$, $φ = ϕ$, and $Ψ 0$, $Ψ 1$ the components of $Ψ$, that is, $Ψ = ( Ψ 0 , Ψ 1 )$ with $Ψ 0 ( b ) = 0$ and $Ψ 1 ( b ) = b$ $( b ∈ R )$. Recall that the notation $z b$ means $( x , u , b )$ where $b ∈ R$ is a parameter.
Observe that if we set $b 0 : = 0$, then $z 0 b 0 = ( x 0 , u 0 , b 0 ) ≡ ( x 0 , u 0 , 0 )$ is admissible of $( P Ψ )$ and $u 0$ is neither continuous nor piecewise continuous but only essentially bounded. In addition, clearly $I a ( z ˜ 0 ( · ) ) = { 1 }$ is constant on T. Let $ρ ≡ 0$, $μ = ( μ 1 , μ 2 , μ 3 , μ 4 ) ≡ ( 1 , 0 , 0 , 0 )$ and note that $( ρ , μ ) ∈ X × U 4$, $μ α ( t ) ≥ 0$, and $μ α ( t ) φ α ( z ˜ 0 ( t ) ) = 0$ $( α ∈ R = { 1 , 2 , 3 } , t ∈ T )$. In addition, if we set $λ 1 : = 0$, then $λ 1 ≥ 0$ and $λ 1 I 1 ( z 0 b 0 ) = 0$.
Now, observe that the Hamiltonian H is given by
$H ( t , x , u , ρ , μ ) = − ρ u 1 u 2 − 1 2 ρ u 1 − u 1 − u 1 2 cos ( 2 π u 2 ) + x 2 + μ 1 u 1 − μ 2 [ u 2 − u 02 ( t ) − 1 ] − μ 3 [ u 02 ( t ) − u 2 − 1 ] − μ 4 [ 1 − u 2 2 ] ,$
and note that
$H u ( t , x , u , ρ , μ ) = ( − ρ u 2 − ( 1 / 2 ) ρ − 1 − 2 u 1 cos ( 2 π u 2 ) + μ 1 , − ρ u 1 + 2 π u 1 2 sin ( 2 π u 2 ) − μ 2 + μ 3 + 2 μ 4 u 2 ) ∗ ,$
$H x ( t , x , u , ρ , μ ) = 2 x .$
As one readily verifies, for all $t ∈ T$,
$H x ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 and H u ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = ( 0 , 0 )$
and thus, for all $t ∈ T$,
$ρ ˙ ( t ) = − H x ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) and H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ,$
that is, $( x 0 , u 0 , ρ , μ )$ satisfies the first-order sufficient conditions of Corollary 1. Since $Ψ 0 ( b ) = 0$, $Ψ 1 ( b ) = b$, $l 0 ( b ) = b 2$ $( b ∈ R )$, then
$l 0 ′ ( b 0 ) + Ψ 1 ′ ( b 0 ) ρ ( 1 ) − Ψ 0 ′ ( b 0 ) ρ ( 0 ) = 0$
and thus Condition (i) of Corollary 1 is satisfied. In addition, as one readily verifies,
$ρ ( 1 ) Ψ 1 ″ ( b 0 ; β ) − ρ ( 0 ) Ψ 0 ″ ( b 0 ; β ) = 0 for all β ∈ R$
and thus the condition of symmetry (ii) of Corollary 1 is fulfilled.
Now, for all $( t , x , u ) ∈ T × R × R 2$, we have
$H ( t , x , u , ρ ( t ) , μ ( t ) ) = − u 1 2 cos ( 2 π u 2 ) + x 2$
and thus, for all $t ∈ T$,
$〈 H u u ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) h , h 〉 = − 2 h 1 2 ≤ 0$
which in turn implies that $( x 0 , u 0 , ρ , μ )$ satisfies Condition (iii) of Corollary 1.
In addition, for all $( t , x , u ) ∈ T × R × R 2$, we have
$F 0 ( t , x , u ) = − H ( t , x , u , ρ ( t ) , μ ( t ) ) − ρ ˙ ( t ) x = u 1 2 cos ( 2 π u 2 ) − x 2 ,$
and, for all $t ∈ T$,
$f x ( z ˜ 0 ( t ) ) = 0 , f u ( z ˜ 0 ( t ) ) = ( − u 02 ( t ) − 1 2 , 0 ) ,$
$φ 1 x ( z ˜ 0 ( t ) ) = 0 , φ 1 u ( z ˜ 0 ( t ) ) = ( − 1 , 0 ) , φ 4 x ( z ˜ 0 ( t ) ) = 0 , φ 4 u ( z ˜ 0 ( t ) ) = ( 0 , − 2 u 02 ( t ) ) .$
Since $Y ( z 0 b 0 )$ is given by all $w β ∈ X × L 2 ( T ; R 2 ) × R$ satisfying $y ( 0 ) = 0$, $y ( 1 ) = β$, $y ˙ ( t ) = ( − u 02 ( t ) − 1 2 ) v 1 ( t )$, and $− v 1 ( t ) ≤ 0$, $v 2 ( t ) = 0$ $( a . e . in T )$, the fact that $l 0 ″ ( b 0 ) = 2$ and, for all $t ∈ T$,
$F 0 x x ( z ˜ 0 ( t ) ) = − 2 , F 0 x u ( z ˜ 0 ( t ) ) = ( 0 , 0 ) , F 0 u u ( z ˜ 0 ( t ) ) = 2 0 0 0 ,$
then, for all $w β ∈ Y ( z 0 b 0 )$,
$J 0 ″ ( z 0 b 0 ; w β ) = 2 β 2 + 2 ∫ 0 1 { v 1 2 ( t ) − y 2 ( t ) } d t = 2 β 2 + 2 ∫ 0 1 y ˙ 2 ( t ) ( u 02 ( t ) + 1 2 ) 2 − y 2 ( t ) d t ≥ 2 β 2 + 2 ∫ 0 1 { ( 4 / 9 ) y ˙ 2 ( t ) − y 2 ( t ) } d t .$
From the calculus of variations theory and Appendix A, it follows that the integral
$∫ 0 1 { ( 4 / 9 ) y ˙ 2 ( t ) − y 2 ( t ) } d t$
is greater than zero for all nonnull $y : [ 0 , 1 ] → R$ absolutely continuous with $y ˙ ∈ L 2 ( [ 0 , 1 ] ; R )$ satisfying $y ( 0 ) = y ( 1 ) = 0$. Consequently,
$J 0 ″ ( z 0 b 0 ; w β ) > 0$
for all nonnull $w β ∈ Y ( z 0 b 0 )$, and thus Condition (iv) of Corollary 1 is verified.
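The positivity of the integral used above admits a quick numerical sanity check. On the sine modes $y k ( t ) = sin ( k π t )$, which vanish at both end-points, the integral evaluates exactly to $( ( 4 / 9 ) k 2 π 2 − 1 ) / 2 > 0$, and a Fourier sine expansion reduces the general case to these modes. The sketch below (an illustration only, not part of the proof; the grid size is an arbitrary choice) confirms this by quadrature:

```python
import math

def jacobi_integral(y, dy, n=20000):
    # Trapezoidal approximation of  int_0^1 { (4/9) y'(t)^2 - y(t)^2 } dt.
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * h * ((4.0 / 9.0) * dy(t) ** 2 - y(t) ** 2)
    return total

# Sine modes y_k(t) = sin(k*pi*t) vanish at both end-points; the exact value
# of the integral on such a mode is ((4/9) k^2 pi^2 - 1) / 2 > 0 for k >= 1.
for k in (1, 2, 3):
    num = jacobi_integral(lambda t: math.sin(k * math.pi * t),
                          lambda t: k * math.pi * math.cos(k * math.pi * t))
    exact = ((4.0 / 9.0) * (k * math.pi) ** 2 - 1.0) / 2.0
    assert num > 0.0 and abs(num - exact) < 1e-3
```

The check succeeds because $( 4 / 9 ) π 2 ≈ 4.39 > 1$, so even the lowest mode has a strictly positive value.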
Now, if $z b$ is admissible, for all $t ∈ T$, we have
$E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) = u 1 2 ( t ) cos ( 2 π u 2 ( t ) ) = u 1 2 ( t ) cos ( 2 π u 02 ( t ) ) = u 1 2 ( t ) .$
Therefore, if $z b$ is admissible, for all $t ∈ T$,
$E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) ≥ 0 .$
In addition, note that if $z b$ is admissible, for all $t ∈ T$,
$u ( t ) − u 0 ( t ) = ( u 1 ( t ) − u 01 ( t ) , u 2 ( t ) − u 02 ( t ) ) = ( u 1 ( t ) , u 02 ( t ) − u 02 ( t ) ) = ( u 1 ( t ) , 0 ) ,$
and hence
$∫ 0 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t = ∫ 0 1 u 1 2 ( t ) d t ≥ ∫ 0 1 V ( u 1 ( t ) ) d t = ∫ 0 1 V ( u ( t ) − u 0 ( t ) ) d t = Q 2 ( u − u 0 ) .$
Additionally, if $z b$ is admissible, we have
$∫ 0 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t = ∫ 0 1 u 1 2 ( t ) d t ≥ ∫ 0 1 1 2 ( u 1 ( t ) u 2 ( t ) + 1 2 u 1 ( t ) ) 2 d t = ∫ 0 1 1 2 ( x ˙ ( t ) − x ˙ 0 ( t ) ) 2 d t ≥ ∫ 0 1 V ( x ˙ ( t ) − x ˙ 0 ( t ) ) d t = Q 1 ( x − x 0 ) .$
By Equations (2) and (3), if $z b$ is admissible,
$∫ 0 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ max { Q 1 ( x − x 0 ) , Q 2 ( u − u 0 ) } = Q ( x − x 0 , u − u 0 ) = Q ( z − z 0 ) .$
Finally, note that, if $z b$ is admissible, for all $t ∈ T$,
$E 1 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) = 1 2 ( u 1 ( t ) u 2 ( t ) + 1 2 u 1 ( t ) ) 2 − u 1 2 ( t ) .$
Consequently, if $z b$ is admissible,
$∫ 0 1 E 1 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t = ∫ 0 1 { 1 2 ( u 1 ( t ) u 2 ( t ) + 1 2 u 1 ( t ) ) 2 − u 1 2 ( t ) } d t = ∫ 0 1 { 1 2 ( u 1 ( t ) u 02 ( t ) + 1 2 u 1 ( t ) ) 2 − u 1 2 ( t ) } d t = ∫ 0 1 { 1 2 u 1 2 ( t ) ( u 02 ( t ) + 1 2 ) 2 − u 1 2 ( t ) } d t ≤ ∫ 0 1 u 1 2 ( t ) | 1 2 ( u 02 ( t ) + 1 2 ) 2 − 1 | d t ≤ ∫ 0 1 u 1 2 ( t ) d t = ∫ 0 1 E 0 ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t .$
Thus, by Equations (1), (4), and (5), Conditions (v)(a)–(c) of Corollary 1 are satisfied with any $ϵ > 0$ and $δ = 1$. By Corollary 1, $( x 0 , u 0 )$ is a strong minimum of $( P ¯ )$.
In Example 2, we solve an inequality-constrained nonparametric optimal control problem $( P ¯ )$ with a completely free initial end-point for which, for some $( ρ , μ ) ∈ X × U 2$, an element $( x 0 , u 0 , ρ , μ )$ satisfies the first-order sufficient conditions
$ρ ˙ ( t ) = − H x ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ( a . e . in T ) , H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ( t ∈ T ) ,$
together with Conditions (i)–(v) of Corollary 1, so that $( x 0 , u 0 )$ becomes a strong minimum of $( P ¯ )$.
Since isoperimetric constraints are not imposed in Example 2, l, L, F, E, and $J ″$ correspond to $l 0$, $L 0$, $F 0$, $E 0$, and $J 0 ″$, respectively.
Example 2.
Consider the nonparametric optimal control problem $( P ¯ )$ of minimizing
$J ( x , u ) : = x 3 ( 0 ) + ∫ 0 1 { 2 u 1 2 ( t ) + u 2 2 ( t ) + 2 u 1 4 ( t ) + u 2 4 ( t ) − sinh x ( t ) } d t$
over all $( x , u ) ∈ A$ satisfying the constraints
$x ( 0 ) ∈ R , x ( 1 ) ∈ { 0 } . x ˙ ( t ) = sin 2 u 1 ( t ) − sin 2 u 2 ( t ) + u 1 2 ( t ) ( a . e . in [ 0 , 1 ] ) . ( t , x ( t ) , u ( t ) ) ∈ R ¯ ( t ∈ [ 0 , 1 ] )$
where
$R ¯ : = { ( t , x , u ) ∈ [ 0 , 1 ] × R × R 2 ∣ 2 x − u 1 2 − u 2 2 ≤ 0 , sin 2 u 1 − sin 2 u 2 ≤ 0 } ,$
$A : = X × U 2 .$
For this problem, we consider the data of the nonparametric problem given in this section, which are given by $T = [ 0 , 1 ]$, $n = 1$, $m = 2$, $r = 2$, $s = 2$, $k = 0$, $K = 0$, $B 0 = R$, $B 1 = { 0 }$, $ℓ ( x 1 , x 2 ) = x 1 3$, $L ( t , x , u ) = 2 u 1 2 + u 2 2 + 2 u 1 4 + u 2 4 − sinh x$, $g ( t , x , u ) = sin 2 u 1 − sin 2 u 2 + u 1 2$, $ϕ 1 ( t , x , u ) = 2 x − u 1 2 − u 2 2$, and $ϕ 2 ( t , x , u ) = sin 2 u 1 − sin 2 u 2$.
As one readily verifies, the functions $L$, g, $ϕ = ( ϕ 1 , ϕ 2 )$ and their first and second derivatives with respect to x and u are continuous on $T × R × R 2$. In addition, the function $ℓ$ is $C 2$ in $R × R$.
Moreover, it is evident that the process $z 0 = ( x 0 , u 0 ) ≡ ( 0 , 0 , 0 )$ is admissible for $( P ¯ )$. Let $Ψ : R → R × R$ be defined by $Ψ ( b ) : = ( b , 0 )$. Clearly, $Ψ$ is $C 2$ in $R$ and $B 0 × B 1 ⊂ Ψ ( R )$. The associated parametric problem of this section, denoted by $( P Ψ )$, has the following data: $p = 1$, $l = ℓ ∘ Ψ$, $L = L$, $f = g$, $φ = ϕ$, and $Ψ 0$, $Ψ 1$ the components of $Ψ$, that is, $Ψ = ( Ψ 0 , Ψ 1 )$ with $Ψ 0 ( b ) = b$ and $Ψ 1 ( b ) = 0$ $( b ∈ R )$.
Observe that, if we set $b 0 : = 0$, then $z 0 b 0 = ( x 0 , u 0 , b 0 ) ≡ ( 0 , 0 , 0 )$ is admissible for $( P Ψ )$. In addition, clearly $I a ( z ˜ 0 ( · ) ) = { 1 , 2 }$ is constant on T. Let $ρ ( t ) : = − t$ $( t ∈ T )$, $μ = ( μ 1 , μ 2 ) ≡ ( 0 , 0 )$ and note that $( ρ , μ ) ∈ X × U 2$, $μ α ( t ) ≥ 0$ and $μ α ( t ) φ α ( z ˜ 0 ( t ) ) = 0$ $( α ∈ R = { 1 , 2 } , t ∈ T )$.
Now, observe that the Hamiltonian H is given by
$H ( t , x , u , ρ , μ ) = ρ sin 2 u 1 − ρ sin 2 u 2 + ρ u 1 2 − 2 u 1 2 − u 2 2 − 2 u 1 4 − u 2 4 + sinh x − μ 1 [ 2 x − u 1 2 − u 2 2 ] − μ 2 [ sin 2 u 1 − sin 2 u 2 ] ,$
and note that
$H u ( t , x , u , ρ , μ ) = 2 ρ sin u 1 cos u 1 + 2 ρ u 1 − 4 u 1 − 8 u 1 3 + 2 μ 1 u 1 − 2 μ 2 sin u 1 cos u 1 − 2 ρ sin u 2 cos u 2 − 2 u 2 − 4 u 2 3 + 2 μ 1 u 2 + 2 μ 2 sin u 2 cos u 2 ∗ ,$
$H x ( t , x , u , ρ , μ ) = cosh x − 2 μ 1 .$
As one readily verifies, for all $t ∈ T$,
$ρ ˙ ( t ) = − H x ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) and H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0$
and thus $( x 0 , u 0 , ρ , μ )$ satisfies the first-order sufficient conditions of Corollary 1. Since $Ψ 0 ( b ) = b$, $Ψ 1 ( b ) = 0$, and $l ( b ) = b 3$ $( b ∈ R )$, then
$l ′ ( b 0 ) + ρ ( 1 ) Ψ 1 ′ ( b 0 ) − ρ ( 0 ) Ψ 0 ′ ( b 0 ) = 0$
and thus Condition (i) of Corollary 1 is satisfied. In addition, as one readily verifies,
$ρ ( 1 ) Ψ 1 ″ ( b 0 ; β ) − ρ ( 0 ) Ψ 0 ″ ( b 0 ; β ) = 0 for all β ∈ R$
and thus the symmetric Condition (ii) of Corollary 1 is fulfilled.
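These stationarity computations are easy to sanity-check numerically. The sketch below (an illustration only; the step size h is an arbitrary choice) implements the Hamiltonian of this example with $ρ ( t ) = − t$ and $μ ≡ ( 0 , 0 )$, and verifies by central differences that $H u = ( 0 , 0 )$ and $ρ ˙ ( t ) = − 1 = − H x$ along the candidate process:

```python
import math

def H(t, x, u1, u2, rho, mu1, mu2):
    # Hamiltonian of this example (multipliers rho, mu1, mu2).
    return (rho * math.sin(u1) ** 2 - rho * math.sin(u2) ** 2 + rho * u1 ** 2
            - 2 * u1 ** 2 - u2 ** 2 - 2 * u1 ** 4 - u2 ** 4 + math.sinh(x)
            - mu1 * (2 * x - u1 ** 2 - u2 ** 2)
            - mu2 * (math.sin(u1) ** 2 - math.sin(u2) ** 2))

h = 1e-6  # finite-difference step (arbitrary choice)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    rho = -t  # the proposed costate rho(t) = -t
    # Central differences at the candidate (x0, u0) = (0, (0, 0)), mu = (0, 0).
    Hu1 = (H(t, 0, h, 0, rho, 0, 0) - H(t, 0, -h, 0, rho, 0, 0)) / (2 * h)
    Hu2 = (H(t, 0, 0, h, rho, 0, 0) - H(t, 0, 0, -h, rho, 0, 0)) / (2 * h)
    Hx = (H(t, h, 0, 0, rho, 0, 0) - H(t, -h, 0, 0, rho, 0, 0)) / (2 * h)
    assert abs(Hu1) < 1e-8 and abs(Hu2) < 1e-8  # H_u = (0, 0)
    assert abs(Hx - 1.0) < 1e-6                 # rho'(t) = -1 = -H_x(t)
```

The derivatives in $u$ vanish exactly because H is even in each control component at the candidate, while $H x = cosh 0 = 1$ matches $− ρ ˙ ( t )$.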
Now,
$H ( t , x , u , ρ ( t ) , μ ( t ) ) = − t sin 2 u 1 + t sin 2 u 2 − t u 1 2 − 2 u 1 2 − u 2 2 − 2 u 1 4 − u 2 4 + sinh x$
and thus, for all $t ∈ T$,
$〈 H u u ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) h , h 〉 = − [ 4 t + 4 ] h 1 2 + [ 2 t − 2 ] h 2 2 ≤ 0$
which in turn implies that $( x 0 , u 0 , ρ , μ )$ satisfies Condition (iii) of Corollary 1.
In addition, for all $( t , x , u ) ∈ T × R × R 2$, we have
$F ( t , x , u ) = − H ( t , x , u , ρ ( t ) , μ ( t ) ) − ρ ˙ ( t ) x = t sin 2 u 1 − t sin 2 u 2 + t u 1 2 + 2 u 1 2 + u 2 2 + 2 u 1 4 + u 2 4 + x − sinh x$
and, for all $t ∈ T$,
$f x ( z ˜ 0 ( t ) ) = 0 , f u ( z ˜ 0 ( t ) ) = ( 0 , 0 ) ,$
$φ x ( z ˜ 0 ( t ) ) = 2 0 , φ u ( z ˜ 0 ( t ) ) = 0 0 0 0 .$
Since $Y ( z 0 b 0 )$ is given by all $w β ∈ X × L 2 ( T ; R 2 ) × R$ satisfying $y ( 0 ) = β$, $y ( 1 ) = 0$, $y ˙ ( t ) = 0$, $y ( t ) ≤ 0$ $( a . e . in T )$, the fact that $l ″ ( b 0 ) = 0$ and, for all $t ∈ T$,
$F x x ( z ˜ 0 ( t ) ) = 0 , F x u ( z ˜ 0 ( t ) ) = ( 0 , 0 ) , F u u ( z ˜ 0 ( t ) ) = 4 t + 4 0 0 2 − 2 t ,$
then, for all $w β ∈ Y ( z 0 b 0 )$,
$J ″ ( z 0 b 0 ; w β ) = ∫ 0 1 { [ 4 t + 4 ] v 1 2 ( t ) + [ 2 − 2 t ] v 2 2 ( t ) } d t .$
Consequently,
$J ″ ( z 0 b 0 ; w β ) > 0$
for all nonnull $w β ∈ Y ( z 0 b 0 )$ and thus Condition (iv) of Corollary 1 is verified.
Now, note that, for all $t ∈ T$,
$E ( t , x ( t ) , u 0 ( t ) , u ( t ) ) = t sin 2 u 1 ( t ) − t sin 2 u 2 ( t ) + t u 1 2 ( t ) + 2 u 1 2 ( t ) + u 2 2 ( t ) + 2 u 1 4 ( t ) + u 2 4 ( t ) .$
Since, for all $t ∈ T$, the function $Φ ( u 2 ) : = u 2 2 − t sin 2 u 2$ is nonnegative for all $u 2 ∈ R$, Condition (v)(a) of Corollary 1 is satisfied for any $ϵ > 0$.
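The claim on $Φ$ follows from $| sin u 2 | ≤ | u 2 |$ and $0 ≤ t ≤ 1$; the grid scan below (an illustration only; the grid is an arbitrary choice) confirms it numerically:

```python
import math

def Phi(t, u2):
    # Phi(u2) = u2^2 - t sin^2(u2); nonnegative since t <= 1 and |sin u2| <= |u2|.
    return u2 ** 2 - t * math.sin(u2) ** 2

worst = min(Phi(i / 50.0, -5.0 + j / 100.0)
            for i in range(51) for j in range(1001))
assert worst >= 0.0  # equality is attained at u2 = 0
```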
To verify Condition (v)(b) of Corollary 1, note first that, for all $π ∈ R$, $V ( π ) ≤ | π | 2 / 2$, and thus, for all $z b$ admissible and all $t ∈ T$,
$E ( t , x ( t ) , u 0 ( t ) , u ( t ) ) ≥ 2 u 1 4 ( t ) + u 2 4 ( t ) ≥ sin 4 u 1 ( t ) + sin 4 u 2 ( t ) + u 1 4 ( t ) ≥ [ sin 2 u 2 ( t ) − sin 2 u 1 ( t ) ] 2 + u 1 4 ( t ) ≥ 1 4 [ | sin 2 u 2 ( t ) − sin 2 u 1 ( t ) | + u 1 2 ( t ) ] 2 = 1 4 | sin 2 u 2 ( t ) − sin 2 u 1 ( t ) + u 1 2 ( t ) | 2 ≥ 1 4 | sin 2 u 2 ( t ) − sin 2 u 1 ( t ) − u 1 2 ( t ) | 2 = 1 4 | sin 2 u 1 ( t ) − sin 2 u 2 ( t ) + u 1 2 ( t ) | 2 ≥ 1 2 V ( sin 2 u 1 ( t ) − sin 2 u 2 ( t ) + u 1 2 ( t ) ) = 1 2 V ( x ˙ ( t ) ) = 1 2 V ( x ˙ ( t ) − x ˙ 0 ( t ) ) .$
Consequently, for any $z b$ admissible,
$∫ 0 1 E ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ 1 2 Q 1 ( x − x 0 ) .$
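The decisive pointwise step in the chain above is the elementary bound $2 u 1 4 + u 2 4 ≥ 1 4 ( | sin 2 u 2 − sin 2 u 1 | + u 1 2 ) 2$, combined with $V ( π ) ≤ | π | 2 / 2$. A random-sampling check of the bound (an illustration only; the sampling range and seed are arbitrary choices):

```python
import math
import random

def lhs(u1, u2):
    return 2 * u1 ** 4 + u2 ** 4

def rhs(u1, u2):
    # (1/4) * (|sin^2 u2 - sin^2 u1| + u1^2)^2
    a = abs(math.sin(u2) ** 2 - math.sin(u1) ** 2)
    return 0.25 * (a + u1 ** 2) ** 2

random.seed(0)
for _ in range(100000):
    u1 = random.uniform(-10.0, 10.0)
    u2 = random.uniform(-10.0, 10.0)
    assert lhs(u1, u2) >= rhs(u1, u2) - 1e-12
```

The bound follows from $u 4 ≥ sin 4 u$, $a 2 + b 2 ≥ ( a − b ) 2$ for $a b ≥ 0$, and $a 2 + b 2 ≥ 1 2 ( a + b ) 2$.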
Now, observe that for any $z b$ admissible,
$∫ 0 1 E ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ ∫ 0 1 { u 1 2 ( t ) + u 2 2 ( t ) } d t + ∫ 0 1 t { sin 2 u 1 ( t ) − sin 2 u 2 ( t ) + u 1 2 ( t ) } d t = ∫ 0 1 | u ( t ) | 2 d t + ∫ 0 1 t x ˙ ( t ) d t = ∫ 0 1 | u ( t ) | 2 d t + ∫ 0 1 − x ( t ) d t ≥ ∫ 0 1 | u ( t ) | 2 d t − 1 2 ∫ 0 1 { u 1 2 ( t ) + u 2 2 ( t ) } d t = 1 2 ∫ 0 1 | u ( t ) | 2 d t ≥ ∫ 0 1 V ( u ( t ) ) d t = ∫ 0 1 V ( u ( t ) − u 0 ( t ) ) d t = Q 2 ( u − u 0 ) .$
With this in mind and Equation (6), it follows that, for any $ϵ > 0$ and for any $z b$ admissible with $∥ x − x 0 ∥ < ϵ$,
$∫ 0 1 E ( t , x ( t ) , u 0 ( t ) , u ( t ) ) d t ≥ 1 2 max { Q 1 ( x − x 0 ) , Q 2 ( u − u 0 ) } = 1 2 Q ( x − x 0 , u − u 0 ) = 1 2 Q ( z − z 0 ) .$
Therefore, Condition (v)(b) of Corollary 1 is verified for any $ϵ > 0$ and $δ = 1 2$. Since $k = K = 0$, it is evident that Condition (v)(c) of Corollary 1 is also satisfied with any $ϵ > 0$ and $δ$ given above. By Corollary 1, $( x 0 , u 0 )$ is a strong minimum of $( P ¯ )$.

## 4. Auxiliary Results

In this section, we state three auxiliary results, which are used to prove Theorem 1. The proofs of these results are given in Section 6.
Throughout this section, we assume that we are given an element $z 0 : = ( x 0 , u 0 ) ∈ X × L 1 ( T ; R m )$ and a sequence ${ z q : = ( x q , u q ) }$ in $X × L 1 ( T ; R m )$ such that
$lim q → ∞ D ( z q − z 0 ) = 0 and d q : = [ 2 D ( z q − z 0 ) ] 1 / 2 > 0 ( q ∈ N ) .$
For all $q ∈ N$ and $t ∈ T$, define
$y q ( t ) : = x q ( t ) − x 0 ( t ) d q and v q ( t ) : = u q ( t ) − u 0 ( t ) d q .$
For all $q ∈ N$ and for almost all $t ∈ T$, define
$W q ( t ) : = max { W 1 q ( t ) , W 2 q ( t ) }$
where
$W 1 q ( t ) : = [ 1 + 1 2 V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) ] 1 / 2 ( a . e . in T ) , W 2 q ( t ) : = [ 1 + 1 2 V ( u q ( t ) − u 0 ( t ) ) ] 1 / 2 ( t ∈ T ) .$
Lemma 2.
For some $v 0 ∈ L 2 ( T ; R m )$ and some subsequence of ${ z q }$, again denoted by ${ z q }$, ${ v q }$ converges weakly to $v 0$ in $L 1 ( T ; R m )$. Moreover, ${ u q }$ converges almost uniformly to $u 0$ on T in the sense that, for any $ϵ > 0$, there exists $Υ ϵ ⊂ T$ measurable with $m ( Υ ϵ ) < ϵ$ such that $u q ( t ) → u 0 ( t )$ uniformly on $T ∖ Υ ϵ$.
Lemma 3.
There exist $σ 0 ∈ L 2 ( T ; R n )$, $y ¯ 0 ∈ R n$, and a subsequence of ${ z q }$, again denoted by ${ z q }$, such that ${ y ˙ q }$ converges weakly in $L 1 ( T ; R n )$ to $σ 0$. Moreover, if $y 0 ( t ) : = y ¯ 0 + ∫ t 0 t σ 0 ( τ ) d τ$ $( t ∈ T )$, then $y q ( t ) → y 0 ( t )$ uniformly on T.
Lemma 4.
Suppose $Υ ⊂ T$ is measurable and $W q ( t ) → 1$ uniformly on Υ. Let $R q , R 0 ∈ L ∞ ( Υ ; R m × m )$, assume that $R q ( t ) → R 0 ( t )$ uniformly on Υ, $R 0 ( t ) ≥ 0$ $( t ∈ Υ )$, and let $v 0$ be the function considered in Lemma 2. Then,
$lim inf q → ∞ ∫ Υ 〈 R q ( t ) v q ( t ) , v q ( t ) 〉 d t ≥ ∫ Υ 〈 R 0 ( t ) v 0 ( t ) , v 0 ( t ) 〉 d t .$
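Lemma 4 is a weak lower-semicontinuity statement for the quadratic forms $∫ Υ 〈 R q v q , v q 〉$, and the inequality can be strict. A classical illustration (not the proof of the lemma) is the oscillating sequence $v q ( t ) = sign ( sin ( 2 π q t ) )$, which converges weakly to $v 0 ≡ 0$ while the quadratic form with $R ≡ 1$ stays equal to one:

```python
import math

def vq(q, t):
    # Rapidly oscillating sign function: converges weakly to 0 in L^2(0,1).
    return 1.0 if math.sin(2.0 * math.pi * q * t) >= 0.0 else -1.0

def quad_form(q, n=100000):
    # Midpoint approximation of int_0^1 vq(t)^2 dt (quadratic form with R = 1).
    return sum(vq(q, (i + 0.5) / n) ** 2 for i in range(n)) / n

def pairing(q, n=100000):
    # Midpoint approximation of int_0^1 vq(t) dt (weak pairing with psi = 1).
    return sum(vq(q, (i + 0.5) / n) for i in range(n)) / n

assert abs(quad_form(8) - 1.0) < 1e-12  # the forms stay at 1 for every q
assert abs(pairing(8)) < 1e-3           # the pairings tend to 0
# Hence liminf of the forms is 1 > 0, the value of the form at the weak limit.
```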

## 5. Proof of Theorem 1

The proof of Theorem 1 is divided into three lemmas. In Lemmas 5–7, we assume that all hypotheses of Theorem 1 are satisfied. Before stating the lemmas, we introduce some definitions.
First, note that, given $x = ( x 1 , … , x n ) ∗ ∈ R n$ and $b = ( b 1 , … , b p ) ∗ ∈ R p$, if we define $x i , b j ∈ R n + p$ by $x i : = ( x 1 , … , x n , 0 , … , 0 ) ∗$ and $b j : = ( 0 , … , 0 , b 1 , … , b p ) ∗$, then
$x i + b j = ( x 1 , … , x n , b 1 , … , b p ) ∗ = x b ∈ R n + p .$
Define $F ˜ 0 : T × R n + p × R m → R$ by
$F ˜ 0 ( t , ξ , u ) : = l 0 ( ξ n + 1 , … , ξ n + p ) t 1 − t 0 + F 0 ( t , ξ 1 , … , ξ n , u ) .$
Observe that the Weierstrass excess function $E ˜ 0 : T × R n + p × R m × R m → R$ of $F ˜ 0$ is given by
$E ˜ 0 ( t , ξ , u , v ) : = F ˜ 0 ( t , ξ , v ) − F ˜ 0 ( t , ξ , u ) − F ˜ 0 u ( t , ξ , u ) ( v − u ) .$
It is clear that, for all $( t , x , u , v ) ∈ T × R n × R m × R m$ and all $b ∈ R p$,
$E ˜ 0 ( t , x i + b j , u , v ) = E 0 ( t , x , u , v ) .$
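The role of the excess function is clearest when the integrand is convex in u: the excess is then the gap between the integrand at v and its tangent line at u, hence pointwise nonnegative. A minimal scalar sketch with a hypothetical convex integrand $F ( u ) = u 4$ (an illustration only, not a function of the paper):

```python
def F(u):
    return u ** 4  # hypothetical convex integrand (not from the paper)

def dF(u):
    return 4 * u ** 3

def excess(u, v):
    # Weierstrass excess: gap between F at v and the tangent line at u.
    return F(v) - F(u) - dF(u) * (v - u)

grid = [-2.0 + 0.1 * i for i in range(41)]
assert min(excess(u, v) for u in grid for v in grid) >= 0.0
```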
Define
$J ˜ 0 ( z b ) : = 〈 ρ ( t 1 ) , x ( t 1 ) 〉 − 〈 ρ ( t 0 ) , x ( t 0 ) 〉 + ∫ t 0 t 1 F ˜ 0 ( t , x ( t ) i + b j , u ( t ) ) d t .$
We have that $J 0 ( z b ) = J ˜ 0 ( z b )$ for all $z b ∈ A$, and
$J ˜ 0 ( z b ) = J ˜ 0 ( z 0 b 0 ) + J ˜ 0 ′ ( z 0 b 0 ; z b − z 0 b 0 ) + K ˜ 0 ( z 0 b 0 ; z b ) + E ˜ 0 ( z 0 b 0 ; z b )$
where
$E ˜ 0 ( z 0 b 0 ; z b ) : = ∫ t 0 t 1 E ˜ 0 ( t , x ( t ) i + b j , u 0 ( t ) , u ( t ) ) d t , K ˜ 0 ( z 0 b 0 ; z b ) : = ∫ t 0 t 1 { M ˜ 0 ( t , x ( t ) i + b j ) + 〈 u ( t ) − u 0 ( t ) , N ˜ 0 ( t , x ( t ) i + b j ) 〉 } d t , J ˜ 0 ′ ( z 0 b 0 ; z b − z 0 b 0 ) : = 〈 ρ ( t 1 ) , x ( t 1 ) − x 0 ( t 1 ) 〉 − 〈 ρ ( t 0 ) , x ( t 0 ) − x 0 ( t 0 ) 〉 + ∫ t 0 t 1 { F ˜ 0 ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) ( [ x ( t ) − x 0 ( t ) ] i + [ b − b 0 ] j ) + F ˜ 0 u ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) ( u ( t ) − u 0 ( t ) ) } d t ,$
and $M ˜ 0$, $N ˜ 0$ are given by
$M ˜ 0 ( t , x i + b j ) : = F ˜ 0 ( t , x i + b j , u 0 ( t ) ) − F ˜ 0 ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) − F ˜ 0 ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) ( [ x − x 0 ( t ) ] i + [ b − b 0 ] j ) ,$
$N ˜ 0 ( t , x i + b j ) : = F ˜ 0 u ∗ ( t , x i + b j , u 0 ( t ) ) − F ˜ 0 u ∗ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) .$
We have,
$M ˜ 0 ( t , x i + b j ) = 1 2 〈 [ x − x 0 ( t ) ] i + [ b − b 0 ] j , P ˜ 0 ( t , x i + b j ) ( [ x − x 0 ( t ) ] i + [ b − b 0 ] j ) 〉 ,$
$N ˜ 0 ( t , x i + b j ) = Q ˜ 0 ( t , x i + b j ) ( [ x − x 0 ( t ) ] i + [ b − b 0 ] j ) ,$
where
$P ˜ 0 ( t , x i + b j ) : = 2 ∫ 0 1 ( 1 − λ ) F ˜ 0 ξ ξ ( t , [ x 0 ( t ) + λ ( x − x 0 ( t ) ) ] i + [ b 0 + λ ( b − b 0 ) ] j , u 0 ( t ) ) d λ , Q ˜ 0 ( t , x i + b j ) : = ∫ 0 1 F ˜ 0 u ξ ( t , [ x 0 ( t ) + λ ( x − x 0 ( t ) ) ] i + [ b 0 + λ ( b − b 0 ) ] j , u 0 ( t ) ) d λ .$
Lemma 5.
For some $ν , ζ > 0$ $( ζ ≤ ϵ )$ and any admissible process $z b$ satisfying $∥ z b − z 0 b 0 ∥ < ζ$,
$E ˜ 0 ( z 0 b 0 ; z b ) ≥ δ [ D ( z − z 0 ) − V ( x ( t 0 ) − x 0 ( t 0 ) ) ] , | K ˜ 0 ( z 0 b 0 ; z b ) | ≤ ν ∥ z b − z 0 b 0 ∥ [ 1 + D ( z − z 0 ) ] .$
Proof.
Keeping in mind the definitions of $Q i$, $D i$ $( i = 1 , 2 )$, $Q$ and D, copy the proof of Lemma 5.1 of [1]. □
Lemma 6.
If the conclusion of Theorem 1 is false, then there exists a sequence ${ z b q q }$ of admissible processes such that
$lim q → ∞ D ( z q − z 0 ) = 0 and d q : = [ 2 D ( z q − z 0 ) ] 1 / 2 > 0 ( q ∈ N ) .$
Proof.
Observing that $D ( z q − z 0 ) = D ( x q − x 0 , u q − u 0 ) = 0$ if and only if $x q = x 0$ and $u q = u 0$, copy the proof of Lemma 5.2 of [1]. □
Lemma 7.
If the conclusion of Theorem 1 is false, then Condition (iv) of Theorem 1 is false.
Proof.
Let ${ z b q q }$ be the sequence of admissible processes given in Lemma 6. Then,
$lim q → ∞ D ( z q − z 0 ) = 0 and d q = [ 2 D ( z q − z 0 ) ] 1 / 2 > 0 ( q ∈ N ) .$
Case (1): Suppose first that the sequence ${ ( b q − b 0 ) / d q }$ is bounded in $R p$.
For all $q ∈ N$ and $t ∈ T$, define
$y q ( t ) : = x q ( t ) − x 0 ( t ) d q , v q ( t ) : = u q ( t ) − u 0 ( t ) d q ,$
$ω q ( t ) : = y q ( t ) i + b q − b 0 d q j .$
By Lemma 2, there exist $v 0 ∈ L 2 ( T ; R m )$ and a subsequence of ${ z q }$, again denoted by ${ z q }$, such that ${ v q }$ converges weakly in $L 1 ( T ; R m )$ to $v 0$. By Lemma 3, there exist $σ 0 ∈ L 2 ( T ; R n )$, $y ¯ 0 ∈ R n$, and a subsequence of ${ z q }$, again denoted by ${ z q }$, such that, if $y 0 ( t ) : = y ¯ 0 + ∫ t 0 t σ 0 ( τ ) d τ$ $( t ∈ T )$, then
$lim q → ∞ y q ( t ) = y 0 ( t ) uniformly on T .$
Since the sequence ${ ( b q − b 0 ) / d q }$ is bounded in $R p$, we may assume that there exists some $β 0 ∈ R p$ such that
$lim q → ∞ b q − b 0 d q = β 0 .$
First, we show that, for $i = 0 , 1$,
$y 0 ( t i ) = Ψ i ′ ( b 0 ) β 0 .$
Note first that for $i = 0 , 1$ and all $q ∈ N$, we have that
$y q ( t i ) = ∫ 0 1 Ψ i ′ ( b 0 + λ [ b q − b 0 ] ) ( b q − b 0 ) d q d λ .$
By Equations (9), (10) and (12), we obtain Equation (11). Now, we claim that
$J 0 ″ ( z 0 b 0 ; w 0 β 0 ) ≤ 0 and w 0 β 0 = ( y 0 , v 0 , β 0 ) ≢ ( 0 , 0 , 0 ) .$
To prove it, observe that by Equations (8)–(10),
$lim q → ∞ M ˜ 0 ( t , x q ( t ) i + b q j ) d q 2 = lim q → ∞ 1 2 〈 ω q ( t ) , P ˜ 0 ( t , x q ( t ) i + b q j ) ω q ( t ) 〉 = 1 2 〈 y 0 ( t ) i + β 0 j , F ˜ 0 ξ ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ] 〉 ,$
$lim q → ∞ N ˜ 0 ( t , x q ( t ) i + b q j ) d q = lim q → ∞ Q ˜ 0 ( t , x q ( t ) i + b q j ) ω q ( t ) = F ˜ 0 u ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ]$
both uniformly on T. This fact, together with Lemma 2, implies that
$lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 = 1 2 ∫ t 0 t 1 { 〈 y 0 ( t ) i + β 0 j , F ˜ 0 ξ ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ] 〉 + 2 〈 v 0 ( t ) , F ˜ 0 u ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ] 〉 } d t .$
Since $( x 0 , u 0 , ρ , μ )$ satisfies the first-order sufficient conditions
$ρ ˙ ( t ) = − H x ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) ( a . e . in T ) , H u ∗ ( z ˜ 0 ( t ) , ρ ( t ) , μ ( t ) ) = 0 ( t ∈ T ) ,$
and by Condition (i) of Theorem 1, it follows that
$lim q → ∞ J ˜ 0 ′ ( z 0 b 0 ; z b q q − z 0 b 0 ) d q 2 = lim q → ∞ 1 d q 2 [ 〈 ρ ( t 1 ) , x q ( t 1 ) − x 0 ( t 1 ) 〉 − 〈 ρ ( t 0 ) , x q ( t 0 ) − x 0 ( t 0 ) 〉 + l 0 ′ ( b 0 ) ( b q − b 0 ) ] = lim q → ∞ 1 d q 2 [ ρ ∗ ( t 1 ) ( Ψ 1 ( b q ) − Ψ 1 ( b 0 ) − Ψ 1 ′ ( b 0 ) ( b q − b 0 ) ) − ρ ∗ ( t 0 ) ( Ψ 0 ( b q ) − Ψ 0 ( b 0 ) − Ψ 0 ′ ( b 0 ) ( b q − b 0 ) ) ] = lim q → ∞ 1 d q 2 ∫ 0 1 ∑ i = 0 1 ( − 1 ) i + 1 ( 1 − λ ) ρ ∗ ( t i ) Ψ i ″ ( b 0 + λ [ b q − b 0 ] ; b q − b 0 ) d λ = 1 2 [ ρ ∗ ( t 1 ) Ψ 1 ″ ( b 0 ; β 0 ) − ρ ∗ ( t 0 ) Ψ 0 ″ ( b 0 ; β 0 ) ] .$
Consequently, by Equation (7), the fact that
$J 0 ( z b q q ) − J 0 ( z 0 b 0 ) < min | b q − b 0 | 2 q , D ( z q − z 0 ) q ,$
Equation (15) and Condition (ii) of Theorem 1,
$0 ≥ lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + lim inf q → ∞ E ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 .$
Now, let us show that
$lim inf q → ∞ E ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 ≥ 1 2 ∫ t 0 t 1 〈 v 0 ( t ) , F ˜ 0 u u ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) v 0 ( t ) 〉 d t .$
To this end, let $Υ$ be a measurable subset of T such that $u q ( t ) → u 0 ( t )$ uniformly on $Υ$. For all $q ∈ N$ and $t ∈ Υ$, we have that
$1 d q 2 E ˜ 0 ( t , x q ( t ) i + b q j , u 0 ( t ) , u q ( t ) ) = 1 2 〈 v q ( t ) , R q ( t ) v q ( t ) 〉 ,$
where
$R q ( t ) : = 2 ∫ 0 1 ( 1 − λ ) F ˜ 0 u u ( t , x q ( t ) i + b q j , u 0 ( t ) + λ [ u q ( t ) − u 0 ( t ) ] ) d λ .$
Clearly,
$lim q → ∞ R q ( t ) = R 0 ( t ) : = F ˜ 0 u u ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) uniformly on Υ .$
By Condition (iii) of Theorem 1, we have
$F ˜ 0 u u ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) = R 0 ( t ) ≥ 0 ( t ∈ Υ ) .$
For all $q ∈ N$ and almost all $t ∈ T$, define
$W q ( t ) : = max { W 1 q ( t ) , W 2 q ( t ) }$
where
$W 1 q ( t ) : = [ 1 + 1 2 V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) ] 1 / 2 ( a . e . in T ) ,$
$W 2 q ( t ) : = [ 1 + 1 2 V ( u q ( t ) − u 0 ( t ) ) ] 1 / 2 ( t ∈ T ) .$
By the fact that
$∥ z b q q − z 0 b 0 ∥ < min { ζ , 1 / q } ,$
and the admissibility of $z b q q$, $W q ( t ) → 1$ uniformly on $Υ$. With this in mind, and since by (v)(a) of Theorem 1 for all $q ∈ N$,
$E 0 ( t , x q ( t ) , u 0 ( t ) , u q ( t ) ) ≥ 0 ( a . e . in T ) ,$
by (18) and Lemma 4,
$lim inf q → ∞ E ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 = lim inf q → ∞ 1 d q 2 ∫ t 0 t 1 E ˜ 0 ( t , x q ( t ) i + b q j , u 0 ( t ) , u q ( t ) ) d t = lim inf q → ∞ 1 d q 2 ∫ t 0 t 1 E 0 ( t , x q ( t ) , u 0 ( t ) , u q ( t ) ) d t ≥ lim inf q → ∞ 1 d q 2 ∫ Υ E 0 ( t , x q ( t ) , u 0 ( t ) , u q ( t ) ) d t = lim inf q → ∞ 1 d q 2 ∫ Υ E ˜ 0 ( t , x q ( t ) i + b q j , u 0 ( t ) , u q ( t ) ) d t = 1 2 lim inf q → ∞ ∫ Υ 〈 v q ( t ) , R q ( t ) v q ( t ) 〉 d t ≥ 1 2 ∫ Υ 〈 v 0 ( t ) , R 0 ( t ) v 0 ( t ) 〉 d t .$
As $Υ$ can be chosen to differ from T by a set of arbitrarily small measure and the function
$t ↦ 〈 v 0 ( t ) , R 0 ( t ) v 0 ( t ) 〉$
belongs to $L 1 ( T ; R )$, this inequality holds when $Υ = T$, and this establishes Equation (17). With this in mind, by Equations (14) and (16), we have
$0 ≥ ∫ t 0 t 1 { 〈 v 0 ( t ) , F ˜ 0 u u ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) v 0 ( t ) 〉 + 2 〈 v 0 ( t ) , F ˜ 0 u ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ] 〉 + 〈 y 0 ( t ) i + β 0 j , F ˜ 0 ξ ξ ( t , x 0 ( t ) i + b 0 j , u 0 ( t ) ) [ y 0 ( t ) i + β 0 j ] 〉 } d t = 〈 l 0 ″ ( b 0 ) β 0 , β 0 〉 + ∫ t 0 t 1 { 〈 v 0 ( t ) , F 0 u u ( z ˜ 0 ( t ) ) v 0 ( t ) 〉 + 2 〈 v 0 ( t ) , F 0 u x ( z ˜ 0 ( t ) ) y 0 ( t ) 〉 + 〈 y 0 ( t ) , F 0 x x ( z ˜ 0 ( t ) ) y 0 ( t ) 〉 } d t = 〈 l 0 ″ ( b 0 ) β 0 , β 0 〉 + ∫ t 0 t 1 2 Ω 0 ( z 0 ; t , y 0 ( t ) , v 0 ( t ) ) d t = J 0 ″ ( z 0 b 0 ; w 0 β 0 ) .$
Now, let us show that $w 0 β 0 ≢ ( 0 , 0 , 0 )$. By Equation (16), the first conclusion of Lemma 5, and the fact that $V ( π ) ≤ | π | 2 / 2$ for all $π ∈ R n$, we have
$0 ≥ lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + δ 2 − δ 2 lim sup q → ∞ | x q ( t 0 ) − x 0 ( t 0 ) | 2 d q 2 = lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + δ 2 − δ 2 lim sup q → ∞ | Ψ 0 ( b q ) − Ψ 0 ( b 0 ) | 2 d q 2 = lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + δ 2 − δ 2 lim sup q → ∞ ∫ 0 1 Ψ 0 ′ ( b 0 + λ [ b q − b 0 ] ) b q − b 0 d q d λ 2 = lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + δ 2 − δ 2 | Ψ 0 ′ ( b 0 ) β 0 | 2 = lim q → ∞ K ˜ 0 ( z 0 b 0 ; z b q q ) d q 2 + δ 2 − δ 2 | y 0 ( t 0 ) | 2 .$
With this in mind and Equation (14), the assumption $w 0 β 0 ≡ ( 0 , 0 , 0 )$ contradicts the positivity of $δ$, and this establishes Equation (13). Now, let us show that
$y ˙ 0 ( t ) = f x ( z ˜ 0 ( t ) ) y 0 ( t ) + f u ( z ˜ 0 ( t ) ) v 0 ( t ) ( a . e . in T ) .$
Observe that for all $q ∈ N$,
$y ˙ q ( t ) = A q ( t ) y q ( t ) + B q ( t ) v q ( t ) ( a . e . in T )$
where
$A q ( t ) = ∫ 0 1 f x ( t , x 0 ( t ) + λ [ x q ( t ) − x 0 ( t ) ] , u 0 ( t ) + λ [ u q ( t ) − u 0 ( t ) ] ) d λ ,$
$B q ( t ) = ∫ 0 1 f u ( t , x 0 ( t ) + λ [ x q ( t ) − x 0 ( t ) ] , u 0 ( t ) + λ [ u q ( t ) − u 0 ( t ) ] ) d λ .$
Choose $Υ ⊂ T$ measurable such that
$A q ( t ) → f x ( z ˜ 0 ( t ) ) , B q ( t ) → f u ( z ˜ 0 ( t ) )$
both uniformly on $Υ$. As $y q ( t ) → y 0 ( t )$ uniformly on $Υ$ and ${ v q }$ converges weakly in $L 1 ( Υ ; R m )$ to $v 0$, it follows that ${ y ˙ q }$ converges weakly in $L 1 ( Υ ; R n )$ to $f x ( z ˜ 0 ( t ) ) y 0 ( t ) + f u ( z ˜ 0 ( t ) ) v 0 ( t )$. By Lemma 3, ${ y ˙ q }$ converges weakly in $L 1 ( Υ ; R n )$ to $σ 0 = y ˙ 0$. Then,
$y ˙ 0 ( t ) = f x ( z ˜ 0 ( t ) ) y 0 ( t ) + f u ( z ˜ 0 ( t ) ) v 0 ( t ) ( t ∈ Υ ) .$
As $Υ$ can be chosen to differ from T by a set of arbitrarily small measure, there cannot exist a subset of T of positive measure where the functions $y 0$ and $v 0$ do not satisfy the differential equation $y ˙ 0 ( t ) = f x ( z ˜ 0 ( t ) ) y 0 ( t ) + f u ( z ˜ 0 ( t ) ) v 0 ( t )$, and thus, Equation (19) is verified.
Now, we claim that
i.
$I i ′ ( z 0 b 0 ; w 0 β 0 ) ≤ 0 ( i ∈ i a ( z 0 b 0 ) )$.
ii.
$I j ′ ( z 0 b 0 ; w 0 β 0 ) = 0 ( j = k + 1 , … , K )$.
iii.
$φ α x ( z ˜ 0 ( t ) ) y 0 ( t ) + φ α u ( z ˜ 0 ( t ) ) v 0 ( t ) ≤ 0 ( a . e . in T , α ∈ I a ( z ˜ 0 ( t ) ) )$.
iv.
$φ β x ( z ˜ 0 ( t ) ) y 0 ( t ) + φ β u ( z ˜ 0 ( t ) ) v 0 ( t ) = 0 ( a . e . in T , β ∈ S )$.
As one readily verifies, Conditions (i)–(iv) above are obtained if one simply copies the proofs of Equations (22)–(29) of [1].
Consequently, from Equations (11) and (19) and Conditions (i)–(iv) above, it follows that $w 0 β 0 ∈ Y ( z 0 b 0 )$. This fact together with Equation (13) contradicts Condition (iv) of Theorem 1.
Case (2): Now, suppose that the sequence ${ ( b q − b 0 ) / d q }$ is not bounded. Then, passing to a subsequence if necessary, we may assume that
$lim q → ∞ | b q − b 0 | / d q = + ∞ .$
In this case, if one copies the proofs from Equations (31)–(43) of [1], then one obtains that for some $β ¯ 0 ∈ R p$ with $| β ¯ 0 | = 1$,
a.
$Ψ i ′ ( b 0 ) β ¯ 0 = 0 ( i = 0 , 1 )$.
b.
$J 0 ″ ( z 0 b 0 ; 0 β ¯ 0 ) ≤ 0 .$
c.
$I i ′ ( z 0 b 0 ; 0 β ¯ 0 ) ≤ 0 ( i ∈ i a ( z 0 b 0 ) )$.
d.
$I j ′ ( z 0 b 0 ; 0 β ¯ 0 ) = 0 ( j = k + 1 , … , K )$.
Consequently, Conditions (a)–(d) above contradict Condition (iv) of Theorem 1 and this completes the proof of Theorem 1. □

## 6. Proof of Lemmas 2–4

Proof of Lemma 2.
Observe that for all $π ∈ R m$, $V ( π ) ( 2 + V ( π ) ) = | π | 2$. For all $q ∈ N$, we have
$∫ t 0 t 1 | v q ( t ) | 2 W q 2 ( t ) d t ≤ 1 d q 2 ∫ t 0 t 1 | u q ( t ) − u 0 ( t ) | 2 1 + 1 2 V ( u q ( t ) − u 0 ( t ) ) d t = 1 D ( z q − z 0 ) ∫ t 0 t 1 | u q ( t ) − u 0 ( t ) | 2 2 + V ( u q ( t ) − u 0 ( t ) ) d t = 1 D ( z q − z 0 ) ∫ t 0 t 1 V ( u q ( t ) − u 0 ( t ) ) d t = Q 2 ( u q − u 0 ) D ( z q − z 0 ) ≤ D 2 ( z q − z 0 ) D ( z q − z 0 ) ≤ 1 .$
Then, there exist $v 0 ∈ L 2 ( T ; R m )$ and a subsequence of ${ z q }$, again denoted by ${ z q }$, such that ${ v q / W q }$ converges weakly in $L 2 ( T ; R m )$ to $v 0$. As for $i = 1 , 2$, $W i q 2 ( t ) ≥ W i q ( t ) ≥ 1$ for all $q ∈ N$ and for almost all $t ∈ T$, we have
$0 ≤ ∫ t 0 t 1 [ W i q ( t ) − 1 ] d t ≤ ∫ t 0 t 1 [ W i q 2 ( t ) − 1 ] d t ≤ max ∫ t 0 t 1 V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) d t , ∫ t 0 t 1 V ( u q ( t ) − u 0 ( t ) ) d t = max { Q 1 ( x q − x 0 ) , Q 2 ( u q − u 0 ) } = Q ( x q − x 0 , u q − u 0 ) ≤ D ( z q − z 0 ) .$
Thus, it follows that
$lim q → ∞ ∫ t 0 t 1 [ W q ( t ) − 1 ] d t = lim q → ∞ ∫ t 0 t 1 [ W q 2 ( t ) − 1 ] d t = 0 .$
Note also that
$∫ t 0 t 1 [ W q ( t ) − 1 ] 2 d t = ∫ t 0 t 1 [ W q 2 ( t ) − 1 ] d t − 2 ∫ t 0 t 1 [ W q ( t ) − 1 ] d t .$
Then, for any $ψ ∈ L ∞ ( T ; R m )$,
$lim q → ∞ ∥ ψ W q − ψ ∥ 2 = 0 ,$
and thus
$lim q → ∞ ∫ t 0 t 1 〈 ψ ( t ) , v q ( t ) 〉 d t = lim q → ∞ ∫ t 0 t 1 ψ ( t ) W q ( t ) , v q ( t ) W q ( t ) d t = ∫ t 0 t 1 〈 ψ ( t ) , v 0 ( t ) 〉 d t .$
Therefore, ${ v q }$ converges weakly in $L 1 ( T ; R m )$ to $v 0$.
Now, let us show that $u q ( t ) → u 0 ( t )$ almost uniformly on T. For all $t ∈ T$, define
$W ( t ) : = [ 1 + 1 2 V ( u ( t ) ) ] 1 / 2 .$
Observe that
$∫ t 0 t 1 2 W 2 ( t ) d t = 2 t 1 − 2 t 0 + ∫ t 0 t 1 V ( u ( t ) ) d t = 2 t 1 − 2 t 0 + Q 2 ( u ) ≤ 2 t 1 − 2 t 0 + D ( x , u ) ,$
$∫ t 0 t 1 | u ( t ) | 2 2 W 2 ( t ) d t = ∫ t 0 t 1 | u ( t ) | 2 2 + V ( u ( t ) ) d t = ∫ t 0 t 1 V ( u ( t ) ) d t = Q 2 ( u ) ≤ D ( x , u ) .$
From these relations, we have
$∥ u ∥ 1 2 ≤ ∫ t 0 t 1 | u ( t ) | 2 2 W 2 ( t ) d t ∫ t 0 t 1 2 W 2 ( t ) d t ≤ D ( x , u ) [ 2 t 1 − 2 t 0 + D ( x , u ) ] .$
Consequently, $∥ u q − u 0 ∥ 1 → 0$ and thus some subsequence of ${ u q }$ converges almost uniformly to $u 0$ on T. □
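The identity $V ( π ) ( 2 + V ( π ) ) = | π | 2$ noted at the start of this proof, together with the bound $V ( π ) ≤ | π | 2 / 2$ used earlier, pins down (for $V ≥ 0$) the closed form $V ( π ) = ( 1 + | π | 2 ) 1 / 2 − 1$. A quick scalar check (an illustration only):

```python
import math

def V(p):
    # Closed form consistent with V(p)(2 + V(p)) = |p|^2 and V(p) >= 0.
    return math.sqrt(1.0 + p * p) - 1.0

for i in range(-500, 501):
    p = i / 10.0
    assert abs(V(p) * (2.0 + V(p)) - p * p) < 1e-6 * (1.0 + p * p)
    assert V(p) <= p * p / 2.0 + 1e-12  # the bound V(p) <= |p|^2 / 2
```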
Proof of Lemma 3.
For all $q ∈ N$, define
$c q : = [ 1 + 1 2 V ( x q ( t 0 ) − x 0 ( t 0 ) ) ] 1 / 2 .$
For all $q ∈ N$, note that
$| y q ( t 0 ) | 2 c q 2 + ∫ t 0 t 1 | y ˙ q ( t ) | 2 W q 2 ( t ) d t ≤ | x q ( t 0 ) − x 0 ( t 0 ) | 2 d q 2 [ 1 + 1 2 V ( x q ( t 0 ) − x 0 ( t 0 ) ) ] + 1 d q 2 ∫ t 0 t 1 | x ˙ q ( t ) − x ˙ 0 ( t ) | 2 1 + 1 2 V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) d t = | x q ( t 0 ) − x 0 ( t 0 ) | 2 D ( z q − z 0 ) [ 2 + V ( x q ( t 0 ) − x 0 ( t 0 ) ) ] + 1 D ( z q − z 0 ) ∫ t 0 t 1 | x ˙ q ( t ) − x ˙ 0 ( t ) | 2 2 + V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) d t = 1 D ( z q − z 0 ) V ( x q ( t 0 ) − x 0 ( t 0 ) ) + ∫ t 0 t 1 V ( x ˙ q ( t ) − x ˙ 0 ( t ) ) d t = D 1 ( x q − x 0 ) D ( z q − z 0 ) ≤ 1 .$
Clearly, $lim q → ∞ c q = 1$. Then, there exist a subsequence of ${ z q }$, again denoted by ${ z q }$, some $y ¯ 0 ∈ R n$, and some $σ 0 ∈ L 2 ( T ; R n )$ such that
$lim q → ∞ y q ( t 0 ) c q = lim q → ∞ y q ( t 0 ) = y ¯ 0 ,$
${ y ˙ q / W q } converges weakly in L 2 ( T ; R n ) to σ 0 .$
Thus,
$lim q → ∞ y q ( t 0 ) = y ¯ 0 ,$
${ y ˙ q } converges weakly in L 1 ( T ; R n ) to σ 0 .$
Hence, ${ y ˙ q }$ is equi-integrable on T and therefore the sequence ${ y q }$ is equi-continuous on T. Thus, if $y 0 ( t ) : = y ¯ 0 + ∫ t 0 t σ 0 ( τ ) d τ$, then
$lim q → ∞ y q ( t ) = lim q → ∞ y q ( t 0 ) + lim q → ∞ ∫ t 0 t y ˙ q ( τ ) d τ = y 0 ( t ) uniformly on T .$
□
Proof of Lemma 4.
Copy the proof of Lemma 4.2 of [1]. □

## Funding

This research received no external funding.

## Acknowledgments

The author is grateful to Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México, for the support given by the project PAPIIT-IN102220. In addition, the author is grateful to the three anonymous referees for the encouraging comments made in the review.

## Conflicts of Interest

The author declares no conflict of interest.

## Appendix A

Suppose we are given a continuous function $L ( t , x , u ) : [ 0 , 1 ] × R × R → R$ having first- and second-order continuous partial derivatives with respect to x and u. For all $x ∈ A C ( [ 0 , 1 ] ; R )$ with $x ˙ ∈ L ∞ ( [ 0 , 1 ] ; R )$ and all $y ∈ A C ( [ 0 , 1 ] ; R )$ with $y ˙ ∈ L 2 ( [ 0 , 1 ] ; R )$, define
$I ″ ( x , y ) : = ∫ 0 1 { L x x ( x ˜ ( t ) ) y 2 ( t ) + 2 L x u ( x ˜ ( t ) ) y ( t ) y ˙ ( t ) + L u u ( x ˜ ( t ) ) y ˙ 2 ( t ) } d t$
where as usual $( x ˜ ( t ) )$ represents $( t , x ( t ) , x ˙ ( t ) )$. The functional $I ″ ( x , y )$ is commonly called the second variation of I along x in the direction y where I is given by
$I ( x ) : = ∫ 0 1 L ( t , x ( t ) , x ˙ ( t ) ) d t .$
Theorem A1.
Set
$Y : = { y ∈ A C ( [ 0 , 1 ] ; R ) ∣ y ˙ ∈ L 2 ( [ 0 , 1 ] ; R ) , y ( 0 ) = y ( 1 ) = 0 }$
and let $x 0 ∈ A C ( [ 0 , 1 ] ; R )$ with $x ˙ 0 ∈ L ∞ ( [ 0 , 1 ] ; R )$. Then,
$I ″ ( x 0 , y ) ≥ 0 f o r a l l y ∈ Y$
if and only if
$I ″ ( x 0 , y ) ≥ 0 f o r a l l y ∈ C 1 ( [ 0 , 1 ] ; R ) w i t h y ( 0 ) = y ( 1 ) = 0 .$
Proof.
$⟹ :$ It is trivial.
$⟸ :$ Let $y ∈ Y$ be given. For all $q ∈ N$, let $y ˜ q ∈ C ( [ 0 , 1 ] ; R )$ with $y ˜ q ( 0 ) = 0$ such that
$∥ y ˜ q − y ˙ ∥ 2 < 1 q .$
Define $y q : [ 0 , 1 ] → R$ by
$y q ( t ) : = ∫ 0 t y ˜ q ( τ ) d τ .$
Then,
$y ˙ q ( t ) = y ˜ q ( t ) ( t ∈ [ 0 , 1 ] ) .$
Therefore, for all $q ∈ N$,
$∥ y ˙ q − y ˙ ∥ 2 < 1 q .$
Thus, for all $t ∈ T$ and $q ∈ N$,
$| y q ( t ) − y ( t ) | = ∫ 0 t y ˙ q ( τ ) d τ − ∫ 0 t y ˙ ( τ ) d τ ≤ ∫ 0 1 | y ˙ q ( t ) − y ˙ ( t ) | d t ≤ ∥ y ˙ q − y ˙ ∥ 2 .$
Hence,
$lim q → ∞ ∥ y q − y ∥ C = 0 .$
Consequently,
$lim q → ∞ I ″ ( x 0 , y q ) = I ″ ( x 0 , y ) .$
Since $y ( 0 ) = y ( 1 ) = 0$, by Equation (A1), there exists $y ¯ q ∈ C 1 ( [ 0 , 1 ] ; R )$ with $y ¯ q ( 0 ) = y ¯ q ( 1 ) = 0$ such that
$| I ″ ( x 0 , y q ) − I ″ ( x 0 , y ¯ q ) | < 1 q ( q ∈ N ) .$
With this in mind and by hypothesis,
$I ″ ( x 0 , y ) = lim q → ∞ I ″ ( x 0 , y ¯ q ) ≥ 0 .$
□
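The crux of the proof is that the uniform distance between $y q$ and y is controlled by the $L 2$ distance between their derivatives (Cauchy–Schwarz on an interval of length one). A small numerical sketch, with hypothetical names and a made-up pair of derivatives, illustrates the bound:

```python
import math

# ydot: a discontinuous, merely bounded derivative; ydot_q: a continuous
# ramp approximation of it (steepness parameter q).
ydot = lambda t: 1.0 if t < 0.5 else -1.0
q = 50
ydot_q = lambda t: max(-1.0, min(1.0, -2.0 * q * (t - 0.5)))

n = 10000
h = 1.0 / n
y = yq = 0.0            # primitives y(t), y_q(t), both starting at 0
sup_diff = 0.0          # running value of max over t of |y_q(t) - y(t)|
l2_sq = 0.0             # running value of the integral of (ydot_q - ydot)^2
for i in range(n):
    t = i * h
    y += ydot(t) * h    # left Riemann sums for y(t) = integral of ydot on [0, t]
    yq += ydot_q(t) * h
    sup_diff = max(sup_diff, abs(yq - y))
    l2_sq += (ydot_q(t) - ydot(t)) ** 2 * h
l2 = math.sqrt(l2_sq)
# Observed: sup_diff <= l2, matching |y_q(t) - y(t)| <= ||ydot_q - ydot||_2,
# and both quantities shrink as the steepness q grows.
```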
Theorem A2.
Let $L : [ 0 , 1 ] × R × R → R$ be given by
$L ( t , x , u ) : = ( 4 / 9 ) u 2 − x 2 .$
Let (P) be the problem of minimizing
$I ( x ) : = ∫ 0 1 L ( t , x ( t ) , x ˙ ( t ) ) d t$
over all $x ∈ Y$, where $Y$ is the set defined in Theorem A1. Then, $I ( x ) > 0$ for all $x ∈ Y$ with $x ≠ 0$.
To prove Theorem A2, we make use of the following results and definitions.
Definition A1.
A function $x 0 : [ 0 , 1 ] → R$ satisfies a Lipschitz condition in $[ 0 , 1 ]$ if there exists a positive number M such that
$| x 0 ( τ ) − x 0 ( t ) | ≤ M | τ − t |$ for all $τ , t ∈ [ 0 , 1 ]$.
If the function $x 0$ satisfies a Lipschitz condition in $[ 0 , 1 ]$, we write $x 0 ∈ L i p [ 0 , 1 ]$.
Lemma A1.
Let $x 0 ∈ L i p [ 0 , 1 ]$ satisfy the integral form of the Euler equation, where for almost all $t ∈ [ 0 , 1 ]$, the function $u ↦ L ( t , x 0 ( t ) , u )$ is strictly convex. Then, $x 0$ is $C 1$ in $[ 0 , 1 ]$.
Lemma A1 is precisely Theorem 15.9 of [10].
Lemma A2.
Let $L ( t , x , u )$ have the form $f ( t , x ) + g ( u )$, where f and g are continuously differentiable and, for some constant $c > 0$, the function g satisfies
$| g ′ ( u ) | ≤ c ( 1 + | u | + | g ( u ) | )$ for all $u ∈ R$.
Then, any weak local solution of (P) satisfies the integral form of the Euler equation.
Lemma A2 is precisely Exercise 16.14 of [10].
Definition A2.
We say that L has Nagumo growth along $x 0$ if there exists a function $θ : [ 0 , + ∞ ) → R$ satisfying
$lim t → ∞ θ ( t ) / t = + ∞ ,$
such that
$t ∈ [ 0 , 1 ] , u ∈ R ⟹ L ( t , x 0 ( t ) , u ) ≥ θ ( | u | ) .$
Definition A3.
The Lagrangian L is autonomous when L does not depend on the t variable.
Theorem A3.
Let $x 0 ∈ A C ( [ 0 , 1 ] ; R )$ be a strong local minimizer for problem (P), where the Lagrangian is continuous, autonomous, convex in u, and has Nagumo growth along $x 0$. Then, $x 0$ is Lipschitz in $[ 0 , 1 ]$.
Theorem A3 is precisely Theorem 16.18 of [10].
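For orientation, note the following standard side computation (it is consistent with, though not quoted from, [10]): for $C 1$ arcs, the Euler equation of the Lagrangian of Theorem A2 reads
$d / d t [ ( 8 / 9 ) x ˙ 0 ( t ) ] = − 2 x 0 ( t ) ,$ that is, $x ¨ 0 ( t ) + ( 9 / 4 ) x 0 ( t ) = 0 ,$
whose solutions are $x 0 ( t ) = A sin ( ( 3 / 2 ) t ) + B cos ( ( 3 / 2 ) t )$. The boundary conditions $x 0 ( 0 ) = x 0 ( 1 ) = 0$ force $B = 0$ and $A sin ( 3 / 2 ) = 0$; since $sin ( 3 / 2 ) ≠ 0$, the only $C 1$ extremal in $Y$ is $x 0 ≡ 0$.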
Proof of Theorem A2.
If we set
$f ( t , x ) : = − x 2 and g ( u ) : = ( 4 / 9 ) u 2 ( t ∈ [ 0 , 1 ] , x , u ∈ R ) ,$
we have that f and g are continuously differentiable and for the constant $c = 8 / 9$, the function g satisfies
$| g ′ ( u ) | ≤ c ( 1 + | u | + | g ( u ) | ) for all u ∈ R .$
Indeed, $g ′ ( u ) = ( 8 / 9 ) u$ and hence, Equation (A2) turns out to be
$( 8 / 9 ) | u | ≤ ( 8 / 9 ) ( 1 + | u | + ( 4 / 9 ) u 2 ) for all u ∈ R$
which is always true, since the right-hand side exceeds the left-hand side by $( 8 / 9 ) ( 1 + ( 4 / 9 ) u 2 ) > 0$. Therefore, suppose, to the contrary, that $x 0 ∈ Y$, $x 0 ≠ 0$ and
$∫ 0 1 { ( 4 / 9 ) x ˙ 0 2 ( t ) − x 0 2 ( t ) } d t = 0 ,$
then, from the classical calculus of variations theory and by Theorem A1, the integral I of Theorem A2 affords a global minimum at the arc $x = x 0$. By Lemma A2, $x 0$ satisfies the integral form of the Euler equation. Now, define $θ : [ 0 , + ∞ ) → R$ by
$θ ( t ) : = ( 4 / 9 ) t 2 − K$
where K is such that
$x 0 2 ( t ) ≤ K ( t ∈ [ 0 , 1 ] ) .$
We have that
$lim t → ∞ θ ( t ) / t = lim t → ∞ [ ( 4 / 9 ) t 2 − K ] / t = + ∞ ,$
and, moreover,
$t ∈ [ 0 , 1 ] , u ∈ R ⟹ ( 4 / 9 ) u 2 − x 0 2 ( t ) ≥ ( 4 / 9 ) u 2 − K = θ ( | u | ) .$
Consequently, $L ( t , x , u ) = ( 4 / 9 ) u 2 − x 2$ has Nagumo growth along $x 0$. Clearly, the Lagrangian L is continuous, autonomous, and convex in u. Since $x 0 ∈ A C ( [ 0 , 1 ] ; R )$ is a strong local minimizer of (P) and L has Nagumo growth along $x 0$, Theorem A3 implies that $x 0$ is Lipschitz in $[ 0 , 1 ]$. By Lemma A1, since for almost all $t ∈ [ 0 , 1 ]$ the function
$u ↦ L ( t , x 0 ( t ) , u )$
is strictly convex, $x 0$ is $C 1$ in $[ 0 , 1 ]$. Thus, once again from the classical calculus of variations theory, it follows that
$∫ 0 1 { ( 4 / 9 ) x ˙ 0 2 ( t ) − x 0 2 ( t ) } d t > 0 ,$
contradicting our assumption that this integral vanishes. This contradiction proves the theorem. □
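As a numerical illustration of Theorem A2 (a cross-check only, not part of the argument; the arcs below are arbitrarily chosen), evaluating I at a few nonzero members of $Y$ yields strictly positive values:

```python
import math

# Trapezoidal approximation of I(x) = integral over [0,1] of {(4/9)xdot^2 - x^2} dt.
def I(x, xdot, n=4000):
    h = 1.0 / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        s += (0.5 if i in (0, n) else 1.0) * ((4.0 / 9.0) * xdot(t) ** 2 - x(t) ** 2)
    return s * h

# A few nonzero arcs x with x(0) = x(1) = 0, given with their derivatives.
arcs = [
    (lambda t: math.sin(math.pi * t),       lambda t: math.pi * math.cos(math.pi * t)),
    (lambda t: t * (1.0 - t),               lambda t: 1.0 - 2.0 * t),
    (lambda t: math.sin(2.0 * math.pi * t), lambda t: 2.0 * math.pi * math.cos(2.0 * math.pi * t)),
]
values = [I(x, xd) for x, xd in arcs]
# For x = sin(pi t): I(x) = (2/9)pi^2 - 1/2, approx. 1.693;
# for x = t(1 - t): I(x) = 31/270, approx. 0.115. All values are positive.
```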

## References

1. Sánchez Licea, G. Sufficiency for singular trajectories in the calculus of variations. AIMS Math. 2019, 5, 111–139. [Google Scholar] [CrossRef]
2. Maurer, H.; Oberle, H.J. Second order sufficient conditions for optimal control problems with free final time: The Riccati approach. SIAM J. Control Optim. 2002, 41, 380–403. [Google Scholar] [CrossRef]
3. Maurer, H.; Pickenhain, S. Second order sufficient conditions for control problems with mixed control-state constraints. J. Optim. Theory Appl. 1995, 86, 649–667. [Google Scholar] [CrossRef]
4. Stefani, G.; Zezza, P.L. Optimality conditions for a constrained optimal control problem. SIAM J. Control Optim. 1996, 34, 635–659. [Google Scholar] [CrossRef]
5. Loewen, P.D. Second-order sufficiency criteria and local convexity for equivalent problems in the calculus of variations. J. Math. Anal. Appl. 1990, 146, 512–522. [Google Scholar] [CrossRef] [Green Version]
6. Rosenblueth, J.F. Variational conditions and conjugate points for the fixed-endpoint control problem. IMA J. Math. Control Inf. 1999, 16, 147–163. [Google Scholar] [CrossRef]
7. Maurer, H. First and second order sufficient optimality conditions in mathematical programming and optimal control. In Mathematical Programming at Oberwolfach. Mathematical Programming Studies; Springer: Berlin/Heidelberg, Germany, 1981; Volume 14, pp. 163–177. [Google Scholar]
8. Agrachev, A.; Stefani, G.; Zezza, P.L. A Hamiltonian approach to strong minima in optimal control. In Proceedings of the AMS Proceedings of Differential Geometry and Control, Boulder, CO, USA, 29 June–19 July 1997; AMS: Providence, RI, USA, 1997; pp. 11–22. [Google Scholar]
9. Agrachev, A.; Stefani, G.; Zezza, P.L. Strong optimality for a bang-bang trajectory. SIAM J. Control Optim. 2002, 41, 991–1014. [Google Scholar] [CrossRef] [Green Version]
10. Clarke, F.H. Functional Analysis, Calculus of Variations and Optimal Control; Springer: London, UK, 2013. [Google Scholar]
11. Felgenhauer, U. Weak and strong optimality conditions for constrained control problems with discontinuous control. J. Math. Anal. Appl. 2001, 110, 361–387. [Google Scholar] [CrossRef]
12. Hestenes, M.R. Calculus of Variations and Optimal Control Theory; John Wiley: New York, NY, USA, 1966. [Google Scholar]
13. Malanowski, K. Sufficient optimality conditions for optimal control subject to state constraints. SIAM J. Control Optim. 1997, 35, 205–227. [Google Scholar] [CrossRef]
14. Malanowski, K.; Maurer, H. Sensitivity analysis for parametric control problems with control-state constraints. Comput. Optim. Appl. 1996, 5, 253–283. [Google Scholar] [CrossRef]
15. Malanowski, K.; Maurer, H.; Pickenhain, S. Second order sufficient conditions for state-constrained optimal control problems. J. Optim. Theory Appl. 2004, 123, 595–617. [Google Scholar] [CrossRef]
16. Maurer, H. Sufficient conditions and sensitivity analysis for economic control problems. Ann. Oper. Res. 1999, 88, 3–14. [Google Scholar] [CrossRef]
17. Maurer, H.; Osmolovskii, N.P. Second order sufficient conditions for time optimal bang-bang control problems. SIAM J. Control Optim. 2004, 42, 2239–2263. [Google Scholar] [CrossRef] [Green Version]
18. Maurer, H.; Osmolovskii, N.P. Second order sufficient optimality conditions for a control problem with continuous and bang-bang control components: Riccati approach. IFIP Adv. Inf. Commun. Technol. 2007, 312, 411–429. [Google Scholar]
19. Maurer, H.; Pesh, H.J. Solution differentiability for parametric nonlinear control problems with control-state constraints. J. Optim. Theory Appl. 1995, 86, 285–309. [Google Scholar] [CrossRef] [Green Version]
20. Milyutin, A.A.; Osmolovskii, N.P. Calculus of Variations and Optimal Control; American Mathematical Society: Providence, RI, USA, 1998. [Google Scholar]
21. Osmolovskii, N.P. Second order sufficient conditions for an extremum in optimal control. Control Cybern. 2002, 31, 803–831. [Google Scholar]
22. Osmolovskii, N.P. Sufficient quadratic conditions of extremum for discontinuous controls in optimal control problems with mixed constraints. J. Math. Sci. 2011, 173, 1–106. [Google Scholar] [CrossRef]
23. Osmolovskii, N.P. Second-order sufficient optimality conditions for control problems with linearly independent gradients of control constraints. ESAIM Control Optim. Calc. Var. 2012, 18, 452–482. [Google Scholar] [CrossRef] [Green Version]
24. Rosenblueth, J.F.; Sánchez Licea, G. Sufficiency and singularity in optimal control. IMA J. Math. Control Inf. 2013, 30, 37–65. [Google Scholar] [CrossRef]
25. Sánchez Licea, G. Relaxing strengthened Legendre-Clebsch condition. SIAM J. Control Optim. 2013, 51, 3886–3902. [Google Scholar] [CrossRef]
26. Sánchez Licea, G. Sufficiency for essentially bounded controls which do not satisfy the strengthened Legendre-Clebsch condition. Appl. Math. Sci. 2018, 12, 1297–1315. [Google Scholar]
