Article

Multivariate Optimal Control with Payoffs Defined by Submanifold Integrals

by Andreea Bejenaru *,† and Constantin Udriste †
Department of Mathematics and Informatics, University Politehnica of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2019, 11(7), 893; https://doi.org/10.3390/sym11070893
Submission received: 7 June 2019 / Revised: 30 June 2019 / Accepted: 4 July 2019 / Published: 8 July 2019
(This article belongs to the Special Issue Geometry of Submanifolds and Homogeneous Spaces)

Abstract:
This paper adapts multivariate optimal control theory to a Riemannian setting. In this sense, a coherent correspondence between the key elements of a standard optimal control problem and several basic geometric ingredients is created, with the purpose of generating a geometric version of Pontryagin's maximum principle. More precisely, the local coordinates on a Riemannian manifold play the role of evolution variables ("multitime"), the Riemannian structure and the corresponding Levi–Civita linear connection become state variables, while the control variables are represented by objects with the properties of the Riemann curvature tensor field. Moreover, the constraints are provided by the second order partial differential equations describing the dynamics of the Riemannian structure. The shift from formal analysis to optimal Riemannian control deeply takes into account the symmetries (or anti-symmetries) these geometric elements or equations rely on. In addition, various submanifold integral cost functionals are considered as controlled payoffs.

1. Introduction

For many centuries, researchers have been preoccupied with finding the perfect description for geometric objects (curves, surfaces, and others) with some optimizing features. Accordingly, important problems were phrased and solved. Among these, let us recall:
- The Plateau problem concerning the existence of minimal surfaces with isoperimetric constraints;
- The minimal submanifolds as solutions for the volume optimizing problem;
- The harmonic maps resulting from optimizing the energy functional;
- Dirichlet’s principle, which identifies the minimizers of the Dirichlet’s energy with the solutions of a Poisson equation subject to boundary constraints;
- Fermat's principle, which states that the path followed by a ray of light is the one taking the least time;
- Hilbert’s isoperimetric problem, stating that the Einstein manifolds are minimizers for the total scalar curvature, with isoperimetric constraints;
- Dieudonne–Rashevsky type problems referring to optimization of multiple integral cost functionals with first order partial differential equations constraints, with applicability in elasticity (the torsion of a prismatic bar), population dynamics (age structure related models), image processing, and others.
Many of these important problems were solved using the calculus of variations. Nevertheless, in the last few decades, optimal control theory has benefited from consistent development, providing an improvement of the variational techniques and, ultimately, replacing them. Moreover, an important step forward in optimal control was made by increasing the dimension of the time variable.
Motivated by this mathematical trend, we regard as necessary a consistent approach to optimal control theory in a geometric framework, since it should be suitable for reanalyzing the classical examples presented above, as well as for defining and solving relevant new problems. The basic objective of this paper is to answer the following questions: Is it possible to provide a unitary approach to optimal control which could lead to general tools or formulas for solving all the mentioned problems and possibly others? What are convenient ways to phrase optimal control problems in the Riemannian context (more precisely, what type of cost functional could be considered)? Which geometric elements will play the key roles of (multi)time, state, and control variables? Which geometric elements interfere in the constraints? What is the geometric significance of the co-state variables?
The main results of our study are Theorem 1 and Corollaries 1–4, containing a formal approach to Pontryagin's maximum principle (see [1,2,3,4]) for multivariate optimal control problems with various types of submanifold integral payoffs. Later, in Corollaries 5–9, they are rephrased for a new class of geometric optimal control problems, continuing the ideas from the paper [5]. Not least, Example 2 reconsiders Hilbert's isoperimetric problem in this newly provided setting, while Example 3 provides an additional argument for the utility of this geometric approach. We point out that our Riemannian optimal control is completely distinct from the geometric optimal control described in [6,7,8], where the evolution variable plays the classical role (time variable), while the state and control variables are assumed to lie on differentiable manifolds.
Our source of inspiration and the research tools cover the following topics:
- Classical optimal control, meaning the original optimal control theory involving a unique time variable, a cost functional including, in general, a running payoff and a terminal payoff, a set of dynamic constraints expressed by ordinary differential equations, and static constraints expressed generally by inequalities ([9,10,11,12]);
- Various statements of Pontryagin's maximum principle, via a properly defined Hamiltonian ([1,2,3,4,13]);
- Multivariate optimal control, initially considered in connection with Dieudonne–Rashevsky problems which involve payoff functionals expressed via multiple integrals and dynamic constraints expressed by first order partial differential equations (see [14,15,16,17,18,19]);
- Differential geometry under its general aspects, but, more importantly, Riemannian geometry; the most important elements we borrow from Riemannian geometry are the Riemannian metric, the Levi–Civita linear connection, the curvature tensor field, and the equations describing the way they connect (see [20,21]).
A first attempt in the direction of Riemannian optimal control was related to solving two flow-type optimal control problems in the Riemannian setting: the total divergence of a fixed vector field and the total Laplacian (the gradient flux) of a fixed differentiable function. In both cases, the cost functional was a multiple integral functional (a Riemannian extension of Dieudonne–Rashevsky type problems). This paper extends these ideas by varying the considered type of cost functionals and by considering second order geometric dynamics.
Reaching the above ideas, as well as the ideas developed throughout this paper, was possible after a consistent analysis of multivariate optimal control problems, from different points of view and more extensively than the preliminary approach initiated by Cesari [14] for Dieudonne–Rashevsky problems. For instance, multivariate optimal control achieved new dimensions by considering other types of cost functionals (stochastic integrals [22], curvilinear-type integrals [23], or mixed payoffs containing both multiple and curvilinear integrals [24]), various types of evolution dynamics (second order partial differential equations, nonholonomic constraints [25]), and different working techniques (multivariate dynamic programming [26], multivariate needle-shaped variations [24,27]). The applicative features of the multivariate Pontryagin maximum principle were emphasized in [5], where the minimal submanifolds, the harmonic maps, and the Plateau problem were approached in this new light. In addition, multivariate controllability- and observability-related features were studied in [28], while [29] provides a comparative analysis of various types of cost functionals.
In optimal control issues, the variables involved play distinct roles. In this case, the states represent entities with geometric features (Riemannian metric, linear connection, etc.), and the local coordinates of the manifold are variables of evolution. Usually, an object having the properties of the curvature tensor field plays the role of the control element.
The rest of the paper is organized as follows. Section 2 contains a formal overview of multivariate optimal control theory, introducing the specific terminology and establishing the methodology. Section 3 is a review of geometric elements. Section 4 contains the main results, derived from applying the technical results from Section 2 to the geometric framework provided in Section 3. Section 5 contains the conclusions.

2. Optimal Control Formalism

2.1. Single-Time Case

We start by recalling the standard statement of an optimal control problem in its simplest form, namely using a one-dimensional evolution variable. The purpose of this is just to fix the specific terminology and techniques. Later, these elements will be adapted to multi-dimensional evolution variables and, ultimately, to geometric objects, by properly identifying the role of each of them.
Formally, an optimal control problem refers to finding:
$$\max_{c(\cdot)\in\mathcal{C}} J(c(\cdot)) = \int_{t_0}^{T} X^0(x, s(x), c(x))\,dx + \chi(T, s(T)),$$
subject to:
$$s(t_0) = s_0 \in S;\qquad \dot{s}(x) = X(x, s(x), c(x)),\quad x \in [t_0, T].$$
The nature or the meaning of the elements involved in the expressions above are as follows:
  • The real number T is called the final time or horizon; $t_0$ is called the initial time. Usually, $x \in [t_0, T]$ represents the time variable, but this comes just from the fact that the optimal control problems which originated this theory used to have temporal evolutions. We prefer to instead call them evolution variables, since this terminology is more compatible with the idea of increasing the dimension (we have even avoided denoting it by t);
  • $U \subseteq \mathbb{R}^k$ is called the set of control variables. A function $c: [t_0, T] \to U$ is called a control strategy. Sometimes there are additional requirements concerning the control strategies (for instance, a local integrability condition or static constraints), resulting in the set of admissible strategies $\mathcal{C}$;
  • $S \subseteq \mathbb{R}^m$ is called the set of state variables. For a given control strategy $c(\cdot)$ and a given initial state $s_0 \in S$, the solution $s(\cdot) = s(\cdot; s_0, c(\cdot))$ of the evolution equation is called the state trajectory;
  • $X^0: [t_0, T] \times S \times U \to \mathbb{R}$ is called the instantaneous performance index. Moreover, $\chi: [t_0, T] \times S \to \mathbb{R}$ is called the payoff from the final state;
  • The functional J on the set of admissible control strategies is called the cost functional or payoff functional.
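To make the single-time formalism concrete, the following minimal numerical sketch (our own toy problem, not taken from the paper) applies the maximum-principle recipe to $\max \int_0^1 -(s^2 + c^2)\,dx$ with $\dot{s} = c$, $s(0) = 1$ and $\chi = 0$. Here $H = -s^2 - c^2 + pc$, the optimality condition $\partial H/\partial c = 0$ gives $c^* = p/2$, the adjoint equation is $\dot{p} = -\partial H/\partial s = 2s$, and the transversality condition is $p(1) = 0$; the resulting two-point boundary value problem is solved by shooting on $p(0)$, whose exact value is $-2\tanh 1$.

```python
import math

def rk4(f, y, x, h):
    """One classical Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def hamiltonian_system(x, y):
    s, p = y
    c = p / 2          # optimality condition dH/dc = -2c + p = 0
    return [c,         # state equation   ds/dx = dH/dp = c
            2 * s]     # adjoint equation dp/dx = -dH/ds = 2s

def shoot(p0, s0=1.0, steps=1000):
    """Integrate from x=0 to x=1 and return the terminal costate p(1)."""
    y, h = [s0, p0], 1.0/steps
    for k in range(steps):
        y = rk4(hamiltonian_system, y, k*h, h)
    return y[1]

# Bisection on p(0) so that the transversality condition p(1) = 0 holds;
# the initial bracket [-5, 0] has shoot(-5) < 0 < shoot(0).
lo, hi = -5.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
p0 = (lo + hi) / 2

print(p0, -2*math.tanh(1))
```

The shooting value agrees with the closed-form solution $s(x) = \cosh(1-x)/\cosh 1$, $p = 2\dot{s}$, for which $p(0) = -2\tanh 1$.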

2.2. Multivariate Case

This section is dedicated to featuring the general aspects of multivariate optimal control (in a Euclidean setting). The basic ingredients are $N \subseteq \mathbb{R}^n$ with global coordinates $(x^1, \ldots, x^n)$, $S \subseteq \mathbb{R}^m$ with global coordinates $(s^1, \ldots, s^m)$, and $U \subseteq \mathbb{R}^k$ with global coordinates $(c^1, \ldots, c^k)$. Let us denote by D a bounded Lipschitz domain of a p-dimensional submanifold of N, with a $(p-1)$-dimensional oriented boundary $\partial D$. In particular, when $p = n$ we denote by Ω a bounded Lipschitz domain in N; when $p = n-1$ we denote by Σ a bounded oriented hyper-surface; and we use C to denote a differentiable curve in N with given endpoints $x_i$ and $x_f$.
Let $X = (X_i^\alpha): N \times S \times U \to \mathbb{R}^{mn}$ be a $C^1$ tensor field. For a given control function $c: N \to U$, we define the following completely integrable evolution system:
$$\frac{\partial s^\alpha}{\partial x^i}(x) = X_i^\alpha(x, s(x), c(x)),\quad x \in N.\qquad (1)$$
The multivariate evolution system in Equation (1) is used as constraint when we want to optimize various integral-type cost functionals.
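Complete integrability of the system in Equation (1) amounts to the equality of mixed second-order partials of $s$ along solutions. A small numerical sketch (toy data of our own choosing, $n = 2$, $m = 1$, with $X$ independent of $s$, so integrability reduces to $\partial X_1/\partial x^2 = \partial X_2/\partial x^1$):

```python
# Toy check of complete integrability for ds/dx^i = X_i(x), n = 2, m = 1.
# When X_i does not depend on s, integrability reduces to the equality of
# mixed partials: dX_1/dx^2 == dX_2/dx^1.

def partial(f, x, i, h=1e-6):
    """Central finite-difference partial derivative of f at the point x."""
    xp, xm = list(x), list(x)
    xp[i] += h; xm[i] -= h
    return (f(xp) - f(xm)) / (2*h)

# integrable choice: X_1 = x^2, X_2 = x^1, with solution s = x^1 x^2 + s_0
X = [lambda x: x[1], lambda x: x[0]]
# non-integrable choice: Y_1 = x^2, Y_2 = -x^1 (mixed partials differ by 2)
Y = [lambda x: x[1], lambda x: -x[0]]

pt = [0.7, -1.3]
gap_X = partial(X[0], pt, 1) - partial(X[1], pt, 0)
gap_Y = partial(Y[0], pt, 1) - partial(Y[1], pt, 0)
print(gap_X, gap_Y)
```

The first gap vanishes (the system admits the solution $s = x^1 x^2 + s_0$), while the second equals 2, so no state function can satisfy the perturbed system.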
Problem 1.
p-Dimensional integral cost functional.
This section reflects the most general expression of multivariate optimal control problems, by considering p-dimensional domains in N and cost functionals defined as integrals on these domains.
Denote:
$$\mathcal{I}_\sigma = \{(i_1 i_2 \ldots i_{n-\sigma}) \mid 1 \le i_1 < i_2 < \cdots < i_{n-\sigma} \le n\},\quad \sigma = \overline{1, n-1},\ p \le n-1;\qquad \mathcal{I}_p = \emptyset,\ p = n.$$
We define the cost functional:
$$J_D[c(\cdot)] = \int_D \sum_{I \in \mathcal{I}_p} X_I(x, s(x), c(x))\, dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \chi_I(x, s(x))\, dx^I,$$
where, if $I = (i_1 i_2 \ldots i_{n-p})$, then $dx^I$ is the p-form resulting from the iterated interior product of the n-form $dx$ with the vector fields $\partial_{i_{n-p}}, \ldots, \partial_{i_1}$.
The corresponding control Hamiltonian $(n-p)$-form has the components:
$$H_I(x, s, p, c) = X_I(x, s, c) + p_\alpha^{Is} X_s^\alpha(x, s, c),\quad I \in \mathcal{I}_p.$$
In order to keep the expressions as simple as possible, let us introduce the following notations: given a multi-index $I = (i_1 i_2 \ldots i_{n-p}) \in \mathcal{I}_p$, let $\partial_I = \partial_{i_1} \wedge \partial_{i_2} \wedge \cdots \wedge \partial_{i_{n-p}}$; let G denote the induced inner product on the exterior algebra of vector fields; let $N_1 \wedge \cdots \wedge N_{n-p}$ be the cross (wedge) product of the normal distribution on the submanifold D; and let $\{\eta_1, \eta_2, \ldots, \eta_{n-p+1}\}$ denote a normal distribution on $\partial D$.
Theorem 1. (Multivariate maximum principle for p-dimensional integral cost functional)
Suppose $c^*(\cdot)$ is optimal for $\max_{c(\cdot)} J_D$ subject to Equation (1), and $s^*(\cdot)$ is the corresponding optimal n-sheet. Then there exists a costate mapping $p^* = (p_\alpha^{*I}): D \to \mathbb{R}^{m\binom{n}{n-p}}$, with $p_\alpha^{*I} = -p_\alpha^{*\tau(I)}$ for every transposition $\tau(I)$ of the multi-index $I \in \mathcal{I}_p$, such that the following equations are satisfied:
  • State equations:
    $$\frac{\partial s^{*\alpha}}{\partial x^i} = \frac{\partial H_I}{\partial p_\alpha^{Ii}}(x, s^*, p^*, c^*),\quad I \in \mathcal{I}_p,\ i = \overline{1,n},\ \alpha = \overline{1,m},\ x \in D;$$
  • Adjoint equations:
    $$G\!\left(\left[\frac{\partial p_\alpha^{*Is}}{\partial x^s} + \frac{\partial H_I}{\partial s^\alpha}(x, s^*, p^*, c^*)\right]\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad \alpha = \overline{1,m},\ x \in D;$$
  • Optimality conditions:
    $$G\!\left(\frac{\partial H_I}{\partial c^a}(x, s^*, p^*, c^*)\,\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad a = \overline{1,k},\ x \in D;$$
  • Boundary conditions:
    $$G\!\left(\left[p_\alpha^{*I} + \frac{\partial \chi_I}{\partial s^\alpha}\right]\partial_I,\; \eta_1 \wedge \cdots \wedge \eta_{n-p+1}\right) = 0,\quad \alpha = \overline{1,m},\ x \in \partial D.$$
Proof. 
If $c^*(\cdot)$ is an optimal control, consider a variation $c_\epsilon(\cdot) = c^*(\cdot) + \epsilon\, v(\cdot)$, $\epsilon \in (-\epsilon_0, \epsilon_0)$. This generates a variational state $s_\epsilon(\cdot)$, with $\left.\dfrac{d s_\epsilon^\alpha}{d\epsilon}\right|_{\epsilon=0} = \tau^\alpha$, and a cost function:
$$\begin{aligned}
J_\epsilon = J_D[c_\epsilon(\cdot)] &= \int_D \sum_{I \in \mathcal{I}_p} X_I(x, s_\epsilon, c_\epsilon)\, dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \chi_I(x, s_\epsilon)\, dx^I \\
&= \int_D \sum_{I \in \mathcal{I}_p} \left[H_I(x, s_\epsilon, p, c_\epsilon) - p_\alpha^{Is} X_s^\alpha(x, s_\epsilon, c_\epsilon)\right] dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \chi_I(x, s_\epsilon)\, dx^I \\
&= \int_D \sum_{I \in \mathcal{I}_p} \left[H_I(x, s_\epsilon, p, c_\epsilon) - p_\alpha^{Is} \frac{\partial s_\epsilon^\alpha}{\partial x^s}\right] dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \chi_I(x, s_\epsilon)\, dx^I \\
&= \int_D \sum_{I \in \mathcal{I}_p} \left[H_I(x, s_\epsilon, p, c_\epsilon) - \frac{\partial}{\partial x^s}\!\left(p_\alpha^{Is} s_\epsilon^\alpha\right) + \frac{\partial p_\alpha^{Is}}{\partial x^s}\, s_\epsilon^\alpha\right] dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \chi_I(x, s_\epsilon)\, dx^I \\
&= \int_D \sum_{I \in \mathcal{I}_p} \left[H_I(x, s_\epsilon, p, c_\epsilon) + \frac{\partial p_\alpha^{Is}}{\partial x^s}\, s_\epsilon^\alpha\right] dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \left[p_\alpha^{I}\, s_\epsilon^\alpha + \chi_I(x, s_\epsilon)\right] dx^I.
\end{aligned}$$
Since $c^*$ is an optimal solution, it follows that $\epsilon = 0$ is a critical point for $\epsilon \mapsto J_\epsilon$. That is:
$$0 = \int_D \sum_{I \in \mathcal{I}_p} \left[\left(\frac{\partial H_I}{\partial s^\alpha}(x, s^*, p, c^*) + \frac{\partial p_\alpha^{Is}}{\partial x^s}\right)\tau^\alpha + \frac{\partial H_I}{\partial c^a}(x, s^*, p, c^*)\, v^a\right] dx^I + \int_{\partial D} \sum_{I \in \mathcal{I}_{p-1}} \left[p_\alpha^{I} + \frac{\partial \chi_I}{\partial s^\alpha}(x, s^*)\right]\tau^\alpha\, dx^I.$$
Choosing the costate tensor $p^*$ as a solution of the adjoint partial differential equations system:
$$G\!\left(\left[\frac{\partial p_\alpha^{Is}}{\partial x^s} + \frac{\partial H_I}{\partial s^\alpha}(x, s^*, p, c^*)\right]\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad \alpha = \overline{1,m},\ x \in D,$$
with the boundary condition:
$$G\!\left(\left[p_\alpha^{I} + \frac{\partial \chi_I}{\partial s^\alpha}\right]\partial_I,\; \eta_1 \wedge \cdots \wedge \eta_{n-p+1}\right) = 0,\quad x \in \partial D,$$
we find:
$$\int_D \sum_{I \in \mathcal{I}_p} \frac{\partial H_I}{\partial c^a}(x, s^*, p^*, c^*)\, v^a\, dx^I = 0,\quad \forall\, v^a,$$
leading to the optimality conditions:
$$G\!\left(\frac{\partial H_I}{\partial c^a}(x, s^*, p^*, c^*)\,\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad a = \overline{1,k},\ x \in D. \qquad \square$$
Remark 1.
A better way to phrase the optimality conditions is given by the inequality:
$$\int_D \left[H_I(x, s^*, p^*, c^*) - H_I(x, s^*, p^*, c)\right] dx^I \ge 0,\quad \forall\, c(\cdot) \in \mathcal{C}.$$
A proof leading to this condition is based on needle-shaped control variations and, besides providing a more general formula, it also allows the control variables to reach boundary values (for more details about this technique, see [5,24]). Moreover, this expression is preferable when the Hamiltonians are linear with respect to the control variables, i.e., $H_I(x, s, p, c) = \sigma_I(x, s, p) \cdot c + \psi_I(x, s, p)$. If such is the case, two behaviors are possible. If $\sigma(x, s(x), p(x)) = 0$ almost everywhere on D, the problem is control-free and the optimal solutions are of singular type. Otherwise, the optimal control is bang-bang, meaning that it switches abruptly between boundary values.
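As an illustration of the bang-bang alternative (our own toy problem, not from the paper): maximize $\int_0^1 s\,dx$ with $\dot{s} = c$, $s(0) = 0$ and the static constraint $|c| \le 1$. The Hamiltonian $H = s + pc$ is linear in $c$ with switching function $\sigma = p$; the adjoint equation $\dot{p} = -1$ with $p(1) = 0$ gives $p(x) = 1 - x > 0$ on $[0, 1)$, so the optimal control sits on the boundary value $c^* = +1$ throughout.

```python
# Bang-bang control for: maximize ∫₀¹ s dx, ds/dx = c, s(0) = 0, |c| ≤ 1.
# H = s + p*c is linear in c, so c* = sign(p) wherever the switching
# function p is nonzero.  Adjoint: dp/dx = -dH/ds = -1, p(1) = 0.

def p(x):                 # costate, solved in closed form
    return 1.0 - x

def c_star(x):            # bang-bang law: boundary value selected by sign(p)
    return 1.0 if p(x) > 0 else -1.0

# forward-Euler rollout of the state under the bang-bang control
N = 10_000
h, s, payoff = 1.0/N, 0.0, 0.0
for k in range(N):
    x = k * h
    payoff += s * h       # accumulate ∫ s dx
    s += c_star(x) * h    # state update ds = c dx

print(s, payoff)
```

The rollout reproduces the optimal trajectory $s(x) = x$: the final state tends to 1 and the payoff to $\int_0^1 x\,dx = 1/2$, with no switching point since $p$ never changes sign on $[0, 1)$.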
Problem 2.
Multiple integral cost functional.
This is a particular case of the general one analyzed above, since $\Omega \subseteq N$ can be considered as a domain of maximal dimension $p = n$. The general expression for a multiple integral cost functional is:
$$J_\Omega[c(\cdot)] = \int_\Omega X(x, s(x), c(x))\, dx + \int_{\partial\Omega} \chi_l(x, s(x))\, dx_l,$$
and the corresponding Hamiltonian function (0-form) is:
$$H(x, s, p, c) = X(x, s, c) + p_\alpha^s X_s^\alpha(x, s, c).$$
Then, Theorem 1 reads as in the following Corollary.
Corollary 1. (Multitime maximum principle for multiple integral cost functional)
Suppose $c^*(\cdot)$ is an optimal solution of the control problem $\max_{c(\cdot)} J_\Omega$ subject to Equation (1), and $s^*(\cdot)$ is the corresponding optimal state. Then there exists a costate tensor $p^* = (p_\alpha^{*i}): \Omega \to \mathbb{R}^{mn}$ satisfying:
  • State equations:
    $$\frac{\partial s^{*\alpha}}{\partial x^i} = \frac{\partial H}{\partial p_\alpha^i}(x, s^*, p^*, c^*),\quad x \in \Omega,\ \alpha = \overline{1,m},\ i = \overline{1,n};$$
  • Adjoint equations:
    $$\frac{\partial p_\alpha^{*s}}{\partial x^s} = -\frac{\partial H}{\partial s^\alpha}(x, s^*, p^*, c^*),\quad x \in \Omega,\ \alpha = \overline{1,m};$$
  • Optimality conditions:
    $$\frac{\partial H}{\partial c^a}(x, s^*, p^*, c^*) = 0,\quad x \in \Omega,\ a = \overline{1,k};$$
  • Boundary conditions:
    $$\left[p_\alpha^{*l} + \frac{\partial \chi_l}{\partial s^\alpha}(x, s^*)\right]\frac{\partial}{\partial x^l} \in T_x\partial\Omega,\quad x \in \partial\Omega,\ \alpha = \overline{1,m}.$$
Problem 3.
Hyper-surface integral cost functional.
When considering a domain Σ of dimension $p = n-1$, the cost functional becomes:
$$J_\Sigma[c(\cdot)] = \int_\Sigma X_l(x, s(x), c(x))\, dx_l + \int_{\partial\Sigma} \sum_{1 \le i < j \le n} \chi_{ij}(x, s(x))\, dx_{ij},$$
where $dx_l = i_{\partial_l}\, dx$ and $dx_{ij} = i_{\partial_j}\, dx_i$.
As in the previous paragraphs, the multitime maximum principle involves an appropriate Hamiltonian vector field with components:
$$H_l(x, s, p, c) = X_l(x, s, c) + p_\alpha^{ls} X_s^\alpha(x, s, c),$$
and Theorem 1 leads to the next statement.
Corollary 2. (Multitime maximum principle for hyper-surface integral cost functional)
Suppose $c^*(\cdot)$ is optimal for $\max_{c(\cdot)} J_\Sigma$ subject to Equation (1), and $s^*(\cdot)$ is the corresponding optimal n-sheet. Then there exists a co-state mapping $p^* = (p_\alpha^{*ij}): \Sigma \to \mathbb{R}^{m\binom{n}{2}}$, with $p_\alpha^{*ij} = -p_\alpha^{*ji}$, satisfying:
  • State equations:
    $$\frac{\partial s^{*\alpha}}{\partial x^i} = \frac{\partial H_l}{\partial p_\alpha^{li}}(x, s^*, p^*, c^*),\quad i = \overline{1,n},\ \alpha = \overline{1,m},\ x \in \Sigma;$$
  • Adjoint equations:
    $$\left[\frac{\partial p_\alpha^{*ls}}{\partial x^s} + \frac{\partial H_l}{\partial s^\alpha}(x, s^*, p^*, c^*)\right]\frac{\partial}{\partial x^l} \in T_x\Sigma,\quad \alpha = \overline{1,m},\ x \in \Sigma;$$
  • Optimality conditions:
    $$\frac{\partial H_l}{\partial c^a}(x, s^*, p^*, c^*)\,\frac{\partial}{\partial x^l} \in T_x\Sigma,\quad a = \overline{1,k},\ x \in \Sigma;$$
  • Boundary condition:
    $$G\!\left(\sum_{1 \le i < j \le n}\left[p_\alpha^{*ij} + \frac{\partial \chi_{ij}}{\partial s^\alpha}(x, s^*)\right]\frac{\partial}{\partial x^i} \wedge \frac{\partial}{\partial x^j},\; \eta_1 \wedge \eta_2\right) = 0,\quad x \in \partial\Sigma,$$
    where G denotes the induced inner product on the exterior algebra of vector fields and $\eta_1 \wedge \eta_2$ is the cross (wedge) product of the normal distribution $\{\eta_1, \eta_2\}$ on $\partial\Sigma$.
Problem 4.
Curvilinear integral cost functional.
When the selected domain is a curve C, the corresponding dimension is $p = 1$. The expression of the curvilinear integral cost functional is:
$$J_C[c(\cdot)] = \int_C X_l(x, s(x), c(x))\, dx^l + \chi(x_f, s(x_f)) - \chi(x_i, s(x_i)),$$
where $x_i$ and $x_f$ are the endpoints of C.
The corresponding Hamiltonian is a 1-form with components:
$$H_l(x, s, p, c) = X_l(x, s, c) + p_\alpha X_l^\alpha(x, s, c),$$
leading to the following statement of the maximum principle.
Corollary 3. (Multitime maximum principle for curvilinear integral cost functional)
Suppose $c^*(\cdot)$ is an optimal solution of the control problem $\max_{c(\cdot)} J_C$ subject to Equation (1), and $s^*(\cdot)$ is the corresponding optimal state. Then there exists a costate mapping $p^* = (p_\alpha^*): C \to \mathbb{R}^m$ satisfying:
  • State equations:
    $$\frac{\partial s^{*\alpha}}{\partial x^l} = \frac{\partial H_l}{\partial p_\alpha}(x, s^*, p^*, c^*),\quad x \in C,\ \alpha = \overline{1,m},\ l = \overline{1,n};$$
  • Adjoint equations:
    $$\delta^{ls}\left[\frac{\partial p_\alpha^*}{\partial x^l} + \frac{\partial H_l}{\partial s^\alpha}(x, s^*, p^*, c^*)\right]\frac{\partial}{\partial x^s} \in TC,\quad x \in C,\ \alpha = \overline{1,m};$$
  • Optimality conditions:
    $$\delta^{ls}\,\frac{\partial H_l}{\partial c^a}(x, s^*, p^*, c^*)\,\frac{\partial}{\partial x^s} \in TC,\quad x \in C,\ a = \overline{1,k};$$
  • Terminal conditions:
    $$p_\alpha^*(x_f) = \frac{\partial \chi}{\partial s^\alpha}(x_f, s^*(x_f));\qquad p_\alpha^*(x_i) = \frac{\partial \chi}{\partial s^\alpha}(x_i, s^*(x_i)),\quad \alpha = \overline{1,m}.$$
Problem 5.
Evolution equations with symmetries.
The previous sections phrased optimal control conditions for the evolution system in Equation (1) and for different types of integral costs. This section instead aims to describe the optimal control behavior when dealing with an evolution system supporting some sort of symmetry. Assume that the dimension of the considered domain D is $p \ge 2$. We define:
  • A symmetric-type evolution system:
    $$\frac{\partial s_j^\alpha}{\partial x^i}(x) + \frac{\partial s_i^\alpha}{\partial x^j}(x) = X_{ij}^\alpha(x, s(x), c(x)) + X_{ji}^\alpha(x, s(x), c(x));\qquad (2)$$
  • An antisymmetric-type evolution system:
    $$\frac{\partial s_j^\alpha}{\partial x^i}(x) - \frac{\partial s_i^\alpha}{\partial x^j}(x) = X_{ij}^\alpha(x, s(x), c(x)) - X_{ji}^\alpha(x, s(x), c(x)).\qquad (3)$$
The multivariate maximum principles (necessary conditions) corresponding to the optimal control problems $\max_{c(\cdot)} J_D$ subject to Equation (2) and $\max_{c(\cdot)} J_D$ subject to Equation (3) connect the existence of an optimal control $c^*$ to co-state mappings $p^* = (p_\alpha^{*I})$, $I \in \mathcal{I}_{p-2}$, with some symmetry particularities:
$(p_1)$ in the case of the symmetric-type evolution system, $p^I = -p^{\tau(I)}$ for each transposition of the multi-index $I \in \mathcal{I}_{p-2}$, except the transposition $\tau_0$ of the last two elements of the multi-index, for which $p^I = p^{\tau_0(I)}$;
$(p_2)$ in the case of the antisymmetric-type evolution system, $p^I = -p^{\tau(I)}$ for each transposition of the multi-index $I \in \mathcal{I}_{p-2}$, with no exceptions.
These costate mappings allow the definition of the Hamiltonian $(n-p)$-form with components:
$$H_I(x, s, p, c) = X_I(x, s, c) + p_\alpha^{Iij} X_{ij}^\alpha(x, s, c),\quad I \in \mathcal{I}_p.$$
Using their symmetries, similar arguments as in the proof of Theorem 1 lead to the outcome stated below.
Corollary 4. (Multitime maximum principle for symmetric/antisymmetric evolution equations)
Suppose $c^*(\cdot)$ is optimal for $\max_{c(\cdot)} J_D$ subject to Equation (2), or to Equation (3), and that $s^*(\cdot)$ is the corresponding optimal n-sheet. Then there exists a co-state mapping $p^*$ with property $(p_1)$, respectively $(p_2)$, satisfying:
  • State equations:
    $$\frac{\partial s_j^{*\alpha}}{\partial x^i} \pm \frac{\partial s_i^{*\alpha}}{\partial x^j} = \left[\frac{\partial H_I}{\partial p_\alpha^{Iij}} \pm \frac{\partial H_I}{\partial p_\alpha^{Iji}}\right](x, s^*, p^*, c^*),\quad x \in D,\ I \in \mathcal{I}_p,\ i, j = \overline{1,n},\ \alpha = \overline{1,m};$$
  • Adjoint equations:
    $$G\!\left(\left[\frac{\partial p_\alpha^{*Isi}}{\partial x^s} + \frac{\partial H_I}{\partial s_i^\alpha}(x, s^*, p^*, c^*)\right]\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad x \in D,\ i = \overline{1,n},\ \alpha = \overline{1,m};$$
  • Optimality conditions:
    $$G\!\left(\frac{\partial H_I}{\partial c^a}(x, s^*, p^*, c^*)\,\partial_I,\; N_1 \wedge \cdots \wedge N_{n-p}\right) = 0,\quad x \in D,\ a = \overline{1,k};$$
  • Boundary conditions:
    $$G\!\left(\left[p_\alpha^{*Il} + \frac{\partial \chi_I}{\partial s_l^\alpha}(x, s^*)\right]\partial_I,\; \eta_1 \wedge \cdots \wedge \eta_{n-p+1}\right) = 0,\quad x \in \partial D,\ \alpha = \overline{1,m},\ l = \overline{1,n}.$$

3. Basics on Riemannian Geometry

Let $(M, g)$ be a Riemannian manifold and $(x^1, \ldots, x^n)$ be local coordinates on M. A basic result in Riemannian geometry ([20,21]) states the existence of the Levi–Civita connection, i.e., the unique torsion-free ($\nabla_X Y - \nabla_Y X = [X, Y]$) and metric compatible ($\nabla g = 0$) linear connection associated to g.
In coordinates, the Levi–Civita connection can be described using the Christoffel symbols $\Gamma = (\Gamma_{ij}^k)$. The torsion-free condition is then equivalent to the symmetry property $\Gamma_{ij}^k = \Gamma_{ji}^k$, while the compatibility with the metric is given by the following partial differential equations:
$$\frac{\partial g_{ij}}{\partial x^k}(x) = g_{ps}(x)\left[\delta_i^p\, \Gamma_{jk}^s(x) + \delta_j^p\, \Gamma_{ik}^s(x)\right],\quad i, j, k = 1, \ldots, n,\qquad (4)$$
or, equivalently,
$$\frac{\partial g^{ij}}{\partial x^k}(x) = -g^{ps}(x)\left[\delta_p^i\, \Gamma_{sk}^j(x) + \delta_p^j\, \Gamma_{sk}^i(x)\right],\quad i, j, k = 1, \ldots, n,\qquad (5)$$
where $g^{-1} = (g^{ij})$ is the dual metric tensor field, i.e., $g^{is} g_{sj} = \delta_j^i$, $i, j = 1, \ldots, n$.
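The compatibility condition (4) can be verified numerically on a familiar example (our own illustration, not from the paper): the unit 2-sphere with coordinates $(\theta, \varphi)$, metric $g = \mathrm{diag}(1, \sin^2\theta)$, and Levi–Civita Christoffel symbols $\Gamma_{\varphi\varphi}^\theta = -\sin\theta\cos\theta$, $\Gamma_{\theta\varphi}^\varphi = \Gamma_{\varphi\theta}^\varphi = \cot\theta$. The helper names below are our own.

```python
import math

# Unit 2-sphere, coordinates x = (theta, phi); metric g = diag(1, sin^2 theta).
def g(x):
    th = x[0]
    return [[1.0, 0.0], [0.0, math.sin(th)**2]]

def Gamma(x):
    """Christoffel symbols G[k][i][j] = Γ^k_{ij} of the Levi-Civita connection."""
    th = x[0]
    G = [[[0.0]*2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th)*math.cos(th)              # Γ^θ_{φφ}
    G[1][0][1] = G[1][1][0] = math.cos(th)/math.sin(th)  # Γ^φ_{θφ} = Γ^φ_{φθ}
    return G

def dg(x, i, j, k, h=1e-6):
    """Central finite difference for ∂g_ij/∂x^k."""
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    return (g(xp)[i][j] - g(xm)[i][j]) / (2*h)

x = [0.8, 0.3]
gx, Gx = g(x), Gamma(x)
# metric compatibility, Equation (4): ∂g_ij/∂x^k = g_is Γ^s_jk + g_js Γ^s_ik
max_gap = max(
    abs(dg(x, i, j, k)
        - sum(gx[i][s]*Gx[s][j][k] + gx[j][s]*Gx[s][i][k] for s in range(2)))
    for i in range(2) for j in range(2) for k in range(2))
print(max_gap)   # ≈ 0 up to finite-difference error
```

All eight components of Equation (4) hold simultaneously; the only nontrivial one is $\partial g_{\varphi\varphi}/\partial\theta = 2\sin\theta\cos\theta = 2\,g_{\varphi\varphi}\Gamma_{\varphi\theta}^\varphi$.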
Moreover, a second order covariant differentiation of the Riemannian structure g generates the Riemann curvature $(1,3)$-tensor field:
$$R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z,$$
which, in terms of local coordinates $R = (R_{kij}^l)$, is defined by:
$$R_{kij}^l = \frac{\partial \Gamma_{kj}^l}{\partial x^i} - \frac{\partial \Gamma_{ki}^l}{\partial x^j} + \Gamma_{kj}^s \Gamma_{si}^l - \Gamma_{ki}^s \Gamma_{sj}^l,\quad i, j, k, l = 1, \ldots, n,$$
or, equivalently:
$$\frac{\partial \Gamma_{kj}^l}{\partial x^i} - \frac{\partial \Gamma_{ki}^l}{\partial x^j} = R_{kij}^l - \Gamma_{kj}^s \Gamma_{si}^l + \Gamma_{ki}^s \Gamma_{sj}^l,\quad i, j, k, l = 1, \ldots, n.\qquad (6)$$
Lowering the index via the metric g allows the introduction of the Riemann curvature $(0,4)$-type tensor field $R_{ijkl} = g_{is} R_{jkl}^s$, having the symmetry properties:
$$R_{ijkl} = R_{klij};\qquad R_{ijkl} = -R_{jikl},\qquad (7)$$
and satisfying the Bianchi identities:
$$R_{ijkl} + R_{iklj} + R_{iljk} = 0;\qquad R_{ijkl,r} + R_{ijlr,k} + R_{ijrk,l} = 0,\qquad (8)$$
where a comma denotes the covariant derivative. We introduce the set of curvature-like tensor fields:
$$\mathcal{CT}_4^0 = \{\, T_{ijkl} \mid \text{with the properties from relations (7) and (8)} \,\}.$$
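In the same spirit, the symmetry properties (7) and the first Bianchi identity from (8) can be checked numerically for the unit 2-sphere, building $R_{kij}^l$ directly from the coordinate formula above (a sketch with our own helper names; the expected nonzero component is $R_{\varphi\theta\varphi}^\theta = \sin^2\theta$):

```python
import math

# Unit 2-sphere: metric and Christoffel symbols in coordinates (theta, phi).
def metric(x):
    return [[1.0, 0.0], [0.0, math.sin(x[0])**2]]

def Gamma(x):
    th = x[0]
    G = [[[0.0]*2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th)*math.cos(th)
    G[1][0][1] = G[1][1][0] = math.cos(th)/math.sin(th)
    return G

def dGamma(x, l, k, j, i, h=1e-5):
    """Central difference for ∂Γ^l_{kj}/∂x^i."""
    xp, xm = list(x), list(x)
    xp[i] += h; xm[i] -= h
    return (Gamma(xp)[l][k][j] - Gamma(xm)[l][k][j]) / (2*h)

def riemann(x, n=2):
    """R[l][k][i][j] = ∂Γ^l_{kj}/∂x^i − ∂Γ^l_{ki}/∂x^j + Γ^s_{kj}Γ^l_{si} − Γ^s_{ki}Γ^l_{sj}."""
    G = Gamma(x)
    return [[[[dGamma(x, l, k, j, i) - dGamma(x, l, k, i, j)
               + sum(G[s][k][j]*G[l][s][i] - G[s][k][i]*G[l][s][j]
                     for s in range(n))
               for j in range(n)] for i in range(n)]
             for k in range(n)] for l in range(n)]

x = [0.9, 0.4]
gx, R = metric(x), riemann(x)
# lower the first index: R_ijkl = g_is R^s_jkl
Rl = [[[[sum(gx[i][s]*R[s][j][k][l] for s in range(2))
         for l in range(2)] for k in range(2)] for j in range(2)]
      for i in range(2)]

rng = range(2)
gap_value = abs(R[0][1][0][1] - math.sin(x[0])**2)   # R^θ_{φθφ} = sin²θ
gap_sym = max(abs(Rl[i][j][k][l] - Rl[k][l][i][j])
              for i in rng for j in rng for k in rng for l in rng)
gap_anti = max(abs(Rl[i][j][k][l] + Rl[j][i][k][l])
               for i in rng for j in rng for k in rng for l in rng)
gap_bianchi = max(abs(Rl[i][j][k][l] + Rl[i][k][l][j] + Rl[i][l][j][k])
                  for i in rng for j in rng for k in rng for l in rng)
print(gap_value, gap_sym, gap_anti, gap_bianchi)
```

All four gaps vanish up to finite-difference error, so the computed tensor is indeed an element of $\mathcal{CT}_4^0$ as far as the algebraic identities are concerned.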
In the following, we shall switch the order of the geometric ingredients. Given a $(0,4)$-tensor field $R = (R_{ijkl})$ in $\mathcal{CT}_4^0$, we ask ourselves whether there exist a linear connection Γ and a Riemannian structure g on M satisfying Equations (4) and (6), respectively Equations (5) and (6). More precisely, adding the initial conditions:
$$g_{ij}(x_0) = \eta_{ij},\qquad \Gamma_{ij}^k(x_0) = \gamma_{ij}^k,$$
we consider the relations in Equations (4) and (6), respectively Equations (5) and (6), as controlled evolution laws, and we shall call them second order metric compatibility evolution systems.
Hereafter, the metric tensor $g = (g_{ij})$ and the linear connection $\Gamma = (\Gamma_{ij}^k)$ will denote symmetric state objects, the local coordinates $x = (x^1, \ldots, x^n)$ will play the role of the evolution variables, and the tensor field $R = (R_{kij}^l)$ will denote a control object with symmetries.
The partial differential equations system provided by Equations (4) and (6) has solutions if and only if the complete integrability conditions:
$$\frac{\partial}{\partial x^l}\left[g_{ps}\left(\delta_i^p \Gamma_{jk}^s + \delta_j^p \Gamma_{ik}^s\right)\right] = \frac{\partial}{\partial x^k}\left[g_{ps}\left(\delta_i^p \Gamma_{jl}^s + \delta_j^p \Gamma_{il}^s\right)\right];$$
$$0 = \frac{\partial}{\partial x^p}\left[R_{kij}^l - \Gamma_{kj}^s \Gamma_{si}^l + \Gamma_{ki}^s \Gamma_{sj}^l\right] + \frac{\partial}{\partial x^i}\left[R_{kjp}^l - \Gamma_{kp}^s \Gamma_{sj}^l + \Gamma_{kj}^s \Gamma_{sp}^l\right] + \frac{\partial}{\partial x^j}\left[R_{kpi}^l - \Gamma_{ki}^s \Gamma_{sp}^l + \Gamma_{kp}^s \Gamma_{si}^l\right]$$
are satisfied. Explicitly, this means $R_{ijkl} = -R_{jikl}$ and $R_{ijkl,r} + R_{ijlr,k} + R_{ijrk,l} = 0$. These relations are among the properties of $R = (R_{ijkl})$, since we have assumed R to be described by the conditions in Equations (7) and (8).

4. Riemannian Optimal Control

In order to motivate our further approach, we provide the following example from [5], which proves that some problems turn out to be very interesting optimal control issues when they are properly stated and the roles of the involved variables are properly assigned.
Example 1.
If D is a compact set of $\mathbb{R}^m = \{(t^1, \ldots, t^m)\}$, with a piecewise smooth $(m-1)$-dimensional boundary $\partial D$, then its volume can be expressed as follows:
$$V(D) = \int_D dt = \frac{1}{m}\int_{\partial D} \delta_{\alpha\beta}\, t^\alpha N^\beta\, d\sigma,$$
where N denotes the exterior unit normal vector field on the boundary. On the other hand, by taking a parametrization of $\partial D$ with parameter domain $U \subset \mathbb{R}^{m-1} = \{(\eta^1, \ldots, \eta^{m-1})\}$, the area of the boundary surface is:
$$A(\partial D) = \int_{\partial D} d\sigma = \int_U \sqrt{\delta_{\alpha\beta} N^\alpha N^\beta}\, d\eta,$$
where N stands for the exterior normal vector field; hence $d\sigma = \|N\|\, d\eta$.
Let us show that, of all solids having a given surface area, the sphere is the one that has the greatest volume. To prove this statement, we take the normal vector field N as a control and we formulate the multivariate optimal control problem with (static) isoperimetric constraint:
$$\max_N \int_U \delta_{\alpha\beta}\, t^\alpha N^\beta\, d\eta\quad \text{subject to}\quad \int_U \sqrt{\delta_{\alpha\beta} N^\alpha N^\beta}\, d\eta = \text{const.}$$
The corresponding Hamiltonian is:
$$H(t, p, N) = \delta_{\alpha\beta}\, t^\alpha N^\beta + p\,\sqrt{\delta_{\alpha\beta} N^\alpha N^\beta},\quad p = \text{const.},$$
and the optimality conditions lead to:
$$0 = \frac{\partial H}{\partial N^\beta} = t_\beta + p\,\frac{N_\beta}{\|N\|}\quad \text{on } \partial D,$$
which, knowing that $\|N\| = 1$, describes the boundary of D as the solution set of $\|t\|^2 = p^2$. Hence D is precisely the ball of radius $|p|$.
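The conclusion of Example 1 admits a quick numerical sanity check (our own illustration, $m = 3$): among a sphere, a cube, and a 2:1 rectangular box of equal surface area A, the sphere has the largest volume, the closed form being $V_{\text{sphere}} = A^{3/2}/(6\sqrt{\pi})$.

```python
import math

A = 10.0   # common surface area for all three solids

# sphere: A = 4*pi*r^2  =>  V = (4/3)*pi*r^3 = A^{3/2} / (6*sqrt(pi))
r = math.sqrt(A / (4*math.pi))
V_sphere = 4/3 * math.pi * r**3

# cube with edge a: A = 6*a^2  =>  V = a^3
a = math.sqrt(A / 6)
V_cube = a**3

# 2:1 box with edges (b, b, 2b): A = 2*b^2 + 4*(2*b^2) = 10*b^2, V = 2*b^3
b = math.sqrt(A / 10)
V_box = 2 * b**3

print(V_sphere, V_cube, V_box)   # strictly decreasing sequence
```

The ordering $V_{\text{sphere}} > V_{\text{cube}} > V_{\text{box}}$ illustrates that volume decreases as the shape deviates from the ball, consistent with the isoperimetric conclusion above.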
If $(M, g)$ is an n-dimensional Riemannian manifold, let $x = (x^1, \ldots, x^n)$ denote the local coordinates relative to a fixed local map $(V, h)$. We use the same notations as in the formal case: Ω is a bounded Lipschitz domain of M, with oriented boundary $\partial\Omega$; Σ is a bounded oriented hyper-surface; and C denotes a differentiable curve on M with given endpoints $x_i$ and $x_f$.
We shall further consider several types of cost functionals.
I. Curvature related functionals
  • Multiple integral-type functional:
    $$J_\Omega[R(\cdot)] = \int_\Omega X(x, g(x), \Gamma(x), R(x))\, dx + \int_{\partial\Omega} \chi_i(x, g(x), \Gamma(x))\, dx_i,$$
    where $dx = dx^1 \wedge \cdots \wedge dx^n$ denotes the canonical differential n-form on M and $dx_l = i_{\partial_l}\, dx$, with $i_X$ denoting the interior product of a differential form with respect to a vector field X.
  • Hyper-surface integral-type functional:
    $$J_\Sigma[R(\cdot)] = \int_\Sigma X_l(x, g(x), \Gamma(x), R(x))\, dx_l + \int_{\partial\Sigma} \sum_{1 \le i < j \le n} \chi_{ij}(x, g(x), \Gamma(x))\, dx_{ij},$$
    where $dx_{ij} = i_{\partial_j}\, dx_i$.
  • Path independent curvilinear integral-type cost:
    $$J_C[R(\cdot)] = \int_C X_l(x, g(x), \Gamma(x), R(x))\, dx^l + \chi(x_f, g(x_f), \Gamma(x_f)) - \chi(x_i, g(x_i), \Gamma(x_i)).$$
II. Connection related functionals
  • Multiple integral-type functional:
    $$J_\Omega[\Gamma(\cdot)] = \int_\Omega X(x, g(x), \Gamma(x))\, dx + \int_{\partial\Omega} \chi_i(x, g(x))\, dx_i;$$
  • Hyper-surface integral-type functional:
    $$J_\Sigma[\Gamma(\cdot)] = \int_\Sigma X_l(x, g(x), \Gamma(x))\, dx_l + \int_{\partial\Sigma} \sum_{1 \le i < j \le n} \chi_{ij}(x, g(x))\, dx_{ij};$$
  • Path independent curvilinear integral-type cost:
    $$J_C[\Gamma(\cdot)] = \int_C X_l(x, g(x), \Gamma(x))\, dx^l + \chi(x_f, g(x_f)) - \chi(x_i, g(x_i)).$$
Definition 1.
The problem of maximizing (minimizing) one of the cost functionals $J_\Omega$, $J_\Sigma$, $J_C$, subject to one of the metric evolution systems given by Equations (4) and (6) or Equations (5) and (6), is called a Riemannian optimal control problem.
All the outcomes obtained in connection with the functionals above are in fact the expressions from Corollaries 1–3, for the particular choice of state variables $s = (g, \Gamma)$ and control variables $c = R$ (or $s = g$ and $c = \Gamma$ if the curvature tensor is not involved at all). Since the main ingredients of this Riemannian optimal control problem (the state variables, the control variables, and the evolution constraints) have some sort of symmetry, we shall derive adapted multitime maximum principles, based on co-state variables with symmetries as in Corollary 4. In the following, we list these outcomes, together with the Hamiltonians they rely on.

4.1. Riemannian Control with Multiple Integral Cost Functional

Problem 6.
Optimize $J_\Omega[R(\cdot)]$ subject to Equations (5) and (6).
For that, let us consider Lagrange multipliers of type $p_{ij}^k = p_{ji}^k$ and $q_s^{kij} = -q_s^{kji}$, and the control Hamiltonian:
$$H(x, g, \Gamma, R, p, q) = X(x, g, \Gamma, R) - g^{is}\,\Gamma_{sk}^j\, p_{ij}^k + q_s^{kij}\left[\frac{1}{2} R_{kij}^s - \Gamma_{kj}^p \Gamma_{pi}^s\right].$$
Corollary 5.
Suppose the tensor field $R^*(\cdot)$ is an optimal solution for $\max_{R(\cdot)} J_\Omega[R(\cdot)]$, constrained by the evolution laws in Equations (5) and (6), and that $g^*(\cdot)$ and $\Gamma^*(\cdot)$ are the corresponding optimal Riemannian structure and optimal linear connection, respectively. Then there exist dual objects $p^* = (p_{ij}^{*k} = p_{ji}^{*k})$ and $q^* = (q_s^{*kij} = -q_s^{*kji})$ satisfying:
  • The state equations:
    $$\frac{\partial g^{*ij}}{\partial x^k} = \frac{\partial H}{\partial p_{ij}^k} + \frac{\partial H}{\partial p_{ji}^k},\qquad \frac{\partial \Gamma_{kj}^{*s}}{\partial x^i} - \frac{\partial \Gamma_{ki}^{*s}}{\partial x^j} = \frac{\partial H}{\partial q_s^{kij}} - \frac{\partial H}{\partial q_s^{kji}};$$
  • The adjoint equations:
    $$\frac{\partial p_{ij}^{*k}}{\partial x^k} + \frac{\partial H}{\partial g^{ij}} + \frac{\partial H}{\partial g^{ji}} = 0,\qquad \frac{\partial q_s^{*ikj}}{\partial x^k} + \frac{\partial q_s^{*jki}}{\partial x^k} + \frac{\partial H}{\partial \Gamma_{ij}^s} + \frac{\partial H}{\partial \Gamma_{ji}^s} = 0;$$
  • The optimality conditions:
    $$\frac{\partial H}{\partial R_{kij}^s} - \frac{\partial H}{\partial R_{kji}^s} = 0;$$
  • The boundary conditions:
    $$p_{ij}^{*k}\big|_{\partial\Omega} = -\left[\frac{\partial \chi_k}{\partial g^{ij}} + \frac{\partial \chi_k}{\partial g^{ji}}\right]_{\partial\Omega},\qquad \left[q_s^{*ikj} + q_s^{*jki}\right]_{\partial\Omega} = -\left[\frac{\partial \chi_k}{\partial \Gamma_{ij}^s} + \frac{\partial \chi_k}{\partial \Gamma_{ji}^s}\right]_{\partial\Omega}.$$
Remark 2.
If $(g^*, \Gamma^*, R^*)$ is an optimal solution with corresponding dual objects $(p^*, q^*)$ and
$$H^{*i}_{\;j} = H^i_{\;j}(g^*, \Gamma^*, R^*, p^*, q^*) = X(g^*, \Gamma^*, R^*)\,\delta^i_j - g^{*ks}\,\Gamma^{*l}_{\;sj}\,p^{*i}_{\;kl} + q_s^{\;*kil}\,\frac{\partial \Gamma^{*s}_{\;kl}}{\partial x^j}$$
is an autonomous anti-trace Hamiltonian, then the following conservation law is satisfied:
$$D_i H^{*i}_{\;j} = 0, \quad j = \overline{1,n}.$$
Example 2.
(Hilbert’s isoperimetric problem) Consider the functional I[R(·)] = ρ(Ω), where $\rho(\Omega) = \int_\Omega \rho\, dv$ denotes the total scalar curvature. Therefore, we try to minimize $I[R(\cdot)] = \int_\Omega g^{ij}\,R^k_{\;ikj}\,\sqrt{g}\; dx$, subject to the controlled evolution system defined by Equations (5) and (6) and to the isoperimetric constraint vol(Ω) = C. We start by introducing a Lagrangian functional:
$$J_\Omega[R(\cdot)] = \rho(\Omega) - \lambda\,\mathrm{vol}(\Omega) = \int_\Omega \big(g^{ij}\,R^k_{\;ikj} - \lambda\big)\sqrt{g}\; dx.$$
We may identify $X(x,g,\Gamma,R) = \big(g^{ij}\,R^k_{\;ikj} - \lambda\big)\sqrt{g}$ and $\chi^k(x,g,\Gamma) = 0$. The corresponding Hamiltonian density is
$$H(x,g,\Gamma,R,p,q) = \big(g^{ij}\,R^k_{\;ikj} - \lambda\big)\sqrt{g} - g^{is}\,\Gamma^j_{\;sk}\,p^k_{\;ij} + q_l^{\;kij}\Big(\tfrac{1}{2}\,R^l_{\;kij} - \Gamma^s_{\;kj}\,\Gamma^l_{\;si}\Big).$$
Denoting $\sigma_l^{\;kij}(g,\Gamma,p,q) = \tfrac{1}{2}\big[q_l^{\;kij} + \big(g^{kj}\,\delta^i_l - g^{ki}\,\delta^j_l\big)\sqrt{g}\big]$ and $\psi(g,\Gamma,p,q) = -\lambda\sqrt{g} - g^{is}\,\Gamma^j_{\;sk}\,p^k_{\;ij} - q_l^{\;kij}\,\Gamma^s_{\;kj}\,\Gamma^l_{\;si}$, we may rewrite the autonomous Hamiltonian as
$$H(g,\Gamma,R,p,q) = \sigma_l^{\;kij}(g,\Gamma,p,q)\,R^l_{\;kij} + \psi(g,\Gamma,p,q),$$
which is linear with respect to the control variables. For bang-bang optimal control, we impose $\|R(\cdot)\| \le M$, where the norm is the Riemannian one. To reason in the sense of singular optimal control, we need $\sigma(x) = 0$ for all $x \in \Omega_1 \subseteq \Omega$. Therefore, the optimal solutions may exhibit both bang-bang and singular sub-sheets, as described in Remark 1.
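In a scalar caricature, a control-linear Hamiltonian $H = \sigma u + \psi$ is maximized over $|u| \le M$ by a bang-bang control wherever the switching function σ is nonzero, while it leaves the control undetermined wherever σ vanishes; a minimal sketch (the function name and the scalar setting are ours, for illustration only):

```python
def switching_control(sigma, M):
    """Pointwise maximizer of the control-linear Hamiltonian H = sigma*u + psi
    over the ball |u| <= M; returns None on a singular arc, where maximizing H
    gives no information and extra conditions must be imposed."""
    if sigma > 0:
        return M       # upper bang
    if sigma < 0:
        return -M      # lower bang
    return None        # singular: sigma vanishes

print(switching_control(0.3, 1.0))    # 1.0
print(switching_control(-2.0, 1.0))   # -1.0
print(switching_control(0.0, 1.0))    # None
```

In the tensorial problem above, σ plays exactly this role of a switching function, and the singular sub-sheets are the regions where it vanishes identically.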
Let us search for singular solutions (see [9]), that is: (i) $\sigma(x) \equiv 0$ and (ii) the conservation law for the autonomous anti-trace Hamiltonian is satisfied.
The first condition, combined with the antisymmetry property of q, provides:
$$q_l^{\;kij} = \big(g^{ki}\,\delta^j_l - g^{kj}\,\delta^i_l\big)\sqrt{g}.$$
In addition, the singular solution also satisfies the adjoint partial differential equations system:
$$\frac{\partial p^k_{\;ij}}{\partial x^k} = p^k_{\;is}\,\Gamma^s_{\;jk} + p^k_{\;js}\,\Gamma^s_{\;ik} - 2\,R^k_{\;ikj}\,\sqrt{g} + (\rho - \lambda)\,g_{ij}\,\sqrt{g}$$
and:
$$\frac{\partial q_l^{\;ikj}}{\partial x^k} + \frac{\partial q_l^{\;jki}}{\partial x^k} = g^{si}\,p^j_{\;sl} + g^{sj}\,p^i_{\;sl} + \Gamma^i_{\;sk}\,q_l^{\;sjk} + \Gamma^k_{\;sl}\,q_k^{\;jsi} + \Gamma^j_{\;sk}\,q_l^{\;sik} + \Gamma^k_{\;sl}\,q_k^{\;isj}.$$
Replacing q in the latter leads to $g^{si}\,p^j_{\;sl} + g^{sj}\,p^i_{\;sl} = 0$, with the solution
$$p^k_{\;ij} = 0.$$
Finally, by substituting p in the first adjoint set of equations, we obtain $R^k_{\;ikj} = \frac{\rho-\lambda}{2}\,g_{ij}$, that is, the Einstein equation in vacuum $\mathrm{Ric}_{ij} = \frac{\lambda}{n-2}\,g_{ij}$.
Moreover, the autonomous anti-trace Hamiltonian is:
$$H^{*i}_{\;j} = \Big[(\rho-\lambda)\,\delta^i_j + g^{*ki}\,\frac{\partial \Gamma^{*s}_{\;ks}}{\partial x^j} - g^{*ks}\,\frac{\partial \Gamma^{*i}_{\;ks}}{\partial x^j}\Big]\sqrt{g^*}$$
and the conservation law $D_i H^{*i}_{\;j} = 0$ is satisfied by the Einstein structure; therefore, Einstein manifolds are singular critical points for the total scalar curvature functional with isoperimetric constraints.
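The final step — passing from $\mathrm{Ric}_{ij} = \frac{\rho-\lambda}{2}\,g_{ij}$ to the Einstein constant $\frac{\lambda}{n-2}$ by tracing with $g^{ij}$ — can be checked symbolically (a small sympy sketch; the symbol names are ours):

```python
import sympy as sp

n, lam, c = sp.symbols('n lambda c')

# Ric_ij = c g_ij with c = (rho - lambda)/2; tracing with g^{ij} (g^{ij} g_ij = n)
# gives rho = n*c, so the coefficient c must satisfy c = (n*c - lambda)/2.
c_val = sp.solve(sp.Eq(c, (n*c - lam)/2), c)[0]

# The unique solution is the Einstein constant lambda/(n - 2) stated above.
print(sp.simplify(c_val - lam/(n - 2)))   # 0
```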
Problem 7.
Optimize $J_\Omega[\Gamma(\cdot)]$ subject to Equation (5).
The corresponding Hamiltonian has a simplified expression:
$$H(x,g,\Gamma,p) = X(x,g,\Gamma) - g^{is}\,\Gamma^j_{\;sk}\,p^k_{\;ij}$$
and the multitime maximum principle is described by the following Corollary.
Corollary 6.
Suppose the linear connection Γ*(·) is an optimal solution for $\max_{\Gamma(\cdot)} J_\Omega[\Gamma(\cdot)]$ subject to Equation (5), and that g*(·) is the corresponding optimal Riemannian structure. Then there exists a dual object $p^* = (p^{*k}_{\;ij} = p^{*k}_{\;ji})$ satisfying:
  • The state equations:
    $$\frac{\partial g^{*ij}}{\partial x^k} = \frac{\partial H}{\partial p^k_{\;ij}} + \frac{\partial H}{\partial p^k_{\;ji}};$$
  • The adjoint equations:
    $$\frac{\partial p^{*k}_{\;ij}}{\partial x^k} + \frac{\partial H}{\partial g^{ij}} + \frac{\partial H}{\partial g^{ji}} = 0;$$
  • The optimality conditions:
    $$\frac{\partial H}{\partial \Gamma^k_{\;ij}} + \frac{\partial H}{\partial \Gamma^k_{\;ji}} = 0;$$
  • The boundary conditions:
    $$p^{*k}_{\;ij}\Big|_{\partial\Omega} = \Big(\frac{\partial \chi^k}{\partial g^{ij}} + \frac{\partial \chi^k}{\partial g^{ji}}\Big)\Big|_{\partial\Omega}.$$
Example 3.
Consider the least squares Lagrangian-type cost functional:
$$J[\Gamma] = \frac{1}{2}\int_\Omega g^{ij}\,\Gamma^k_{\;is}\,\Gamma^s_{\;jk}\; dx,$$
which measures the mean square deviation of the tensor Γ − Γ₀, where Γ is a linear connection and Γ₀ = 0 is the Euclidean linear connection. The corresponding Hamiltonian density is:
$$H = \frac{1}{2}\,g^{ij}\,\Gamma^k_{\;is}\,\Gamma^s_{\;jk} - g^{is}\,\Gamma^j_{\;sk}\,p^k_{\;ij}$$
and, according to Corollary 6, we have:
  • The optimality conditions: $g^{is}\,\Gamma^j_{\;sk} + g^{js}\,\Gamma^i_{\;sk} - g^{is}\,p^j_{\;sk} - g^{js}\,p^i_{\;sk} = 0$, leading to the general solution
    $$p^k_{\;ij} = \Gamma^k_{\;ij} - \gamma^k_{\;ij}, \quad \text{or, invariantly,} \quad p = \Gamma - T,$$
    where $T = (\gamma^k_{\;ij})$ is a (1,2) symmetric tensor field satisfying the anti-symmetry condition $g^{is}\,\gamma^j_{\;sk} + g^{js}\,\gamma^i_{\;sk} = 0$;
  • The boundary conditions $p^k_{\;ij}\big|_{\partial\Omega} = 0$, which, by substituting p, lead to
    $$\gamma^k_{\;ij}\big|_{\partial\Omega} = \Gamma^k_{\;ij}\big|_{\partial\Omega};$$
  • The adjoint equations:
    $$\frac{\partial p^k_{\;ij}}{\partial x^k} = -\Gamma^k_{\;is}\,\Gamma^s_{\;jk} + \Gamma^k_{\;is}\,p^s_{\;jk} + \Gamma^k_{\;js}\,p^s_{\;ik},$$
    rewritten, after substituting p, as $\frac{\partial \Gamma^k_{\;ij}}{\partial x^k} - \frac{\partial \gamma^k_{\;ij}}{\partial x^k} = \Gamma^k_{\;is}\,\Gamma^s_{\;jk} - \gamma^k_{\;is}\,\Gamma^s_{\;jk} - \gamma^k_{\;js}\,\Gamma^s_{\;ik}$, or $\frac{\partial \Gamma^k_{\;ij}}{\partial x^k} - \Gamma^k_{\;is}\,\Gamma^s_{\;jk} = (\mathrm{Div}\,T)_{ij}$, or, even better,
    $$\mathrm{Ric}_{ij} + \nabla_i\,\frac{\partial \ln\sqrt{g}}{\partial x^j} = (\mathrm{Div}\,T)_{ij}.$$
    In particular, by taking γ = 0, it follows that manifolds satisfying
    $$\mathrm{Ric} = -\nabla\, d \ln\sqrt{g}$$
    are critical points for the functional
    $$J[\Gamma(\cdot)] = \frac{1}{2}\int_\Omega g^{ij}\,\Gamma^k_{\;is}\,\Gamma^s_{\;jk}\; dx.$$
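For a concrete feel of this cost density, the following sympy sketch (ours; the polar-coordinate example is not from the paper) computes the Christoffel symbols of the Euclidean plane metric in polar coordinates and evaluates the integrand $g^{ij}\,\Gamma^k_{\;is}\,\Gamma^s_{\;jk}$:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])    # Euclidean metric in polar coordinates
ginv = g.inv()
dim = 2

# Levi-Civita Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sum(ginv[k, l]*(sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
                           - sp.diff(g[i, j], coords[l]))/2 for l in range(dim))
           for j in range(dim)] for i in range(dim)] for k in range(dim)]

assert Gamma[0][1][1] == -r           # Gamma^r_{theta theta} = -r
assert Gamma[1][0][1] == 1/r          # Gamma^theta_{r theta}  = 1/r

# Cost density of the least squares functional
X = sum(ginv[i, j]*Gamma[k][i][s]*Gamma[s][j][k]
        for i in range(dim) for j in range(dim)
        for k in range(dim) for s in range(dim))
print(sp.simplify(X))                 # -1/r**2
```

Note that the density is not sign-definite (here it is negative), since $g^{ij}\,\Gamma^k_{\;is}\,\Gamma^s_{\;jk}$ is a full contraction rather than a sum of squares.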

4.2. Riemannian Control with Hypersurface Integral-Type Cost Functional

Problem 8.
Optimize $J_\Sigma[R(\cdot)]$ subject to Equations (5) and (6).
Let us consider Lagrange multipliers of type $p^{lk}_{\;ij} = p^{lk}_{\;ji} = -p^{kl}_{\;ij}$ and $q_s^{\;lkij} = -q_s^{\;lkji} = -q_s^{\;klij}$, and the control Hamiltonian vector field:
$$H^l(x,g,\Gamma,R,p,q) = X^l(x,g,\Gamma,R) - g^{is}\,\Gamma^j_{\;sk}\,p^{lk}_{\;ij} + q_s^{\;lkij}\Big(\tfrac{1}{2}\,R^s_{\;kij} - \Gamma^p_{\;kj}\,\Gamma^s_{\;pi}\Big).$$
Corollary 7.
Suppose the tensor field R*(·) is an optimal solution for $\max_{R(\cdot)} J_\Sigma[R(\cdot)]$ subject to Equations (5) and (6), and that g*(·) and Γ*(·) are the corresponding optimal Riemannian structure and optimal linear connection, respectively. Then there exist dual objects $p^* = (p^{*lk}_{\;ij} = p^{*lk}_{\;ji} = -p^{*kl}_{\;ij})$ and $q^* = (q_s^{\;*lkij} = -q_s^{\;*lkji} = -q_s^{\;*klij})$ satisfying:
  • The state equations:
    $$\frac{\partial g^{*ij}}{\partial x^k} = \frac{\partial H^l}{\partial p^{lk}_{\;ij}} + \frac{\partial H^l}{\partial p^{lk}_{\;ji}}, \quad l = \overline{1,n}, \qquad \frac{\partial \Gamma^s_{\;kj}}{\partial x^i} - \frac{\partial \Gamma^s_{\;ki}}{\partial x^j} = \frac{\partial H^l}{\partial q_s^{\;lkij}} - \frac{\partial H^l}{\partial q_s^{\;lkji}};$$
  • The adjoint equations:
    $$\Big(\frac{\partial p^{*lk}_{\;ij}}{\partial x^k} + \frac{\partial H^l}{\partial g^{ij}} + \frac{\partial H^l}{\partial g^{ji}}\Big)\frac{\partial}{\partial x^l} \in T_x\Sigma, \qquad \Big(\frac{\partial q_s^{\;*likj}}{\partial x^k} + \frac{\partial q_s^{\;*ljki}}{\partial x^k} + \frac{\partial H^l}{\partial \Gamma^s_{\;ij}} + \frac{\partial H^l}{\partial \Gamma^s_{\;ji}}\Big)\frac{\partial}{\partial x^l} \in T_x\Sigma;$$
  • The optimality conditions:
    $$\Big(\frac{\partial H^l}{\partial R^s_{\;kij}} - \frac{\partial H^l}{\partial R^s_{\;kji}}\Big)\frac{\partial}{\partial x^l} \in T_x\Sigma;$$
  • The boundary conditions:
    $$G\Big(\Big[p^{*lk}_{\;ij} - \Big(\frac{\partial \chi^{lk}}{\partial g^{ij}} + \frac{\partial \chi^{lk}}{\partial g^{ji}}\Big)\Big]\frac{\partial}{\partial x^l}\wedge\frac{\partial}{\partial x^k},\; \eta_1\wedge\eta_2\Big) = 0, \qquad G\Big(\Big[q_s^{\;*likj} + q_s^{\;*ljki} - \Big(\frac{\partial \chi^{lk}}{\partial \Gamma^s_{\;ij}} + \frac{\partial \chi^{lk}}{\partial \Gamma^s_{\;ji}}\Big)\Big]\frac{\partial}{\partial x^l}\wedge\frac{\partial}{\partial x^k},\; \eta_1\wedge\eta_2\Big) = 0,$$
    where G denotes the induced inner product on the exterior algebra of vector fields and $\eta_1\wedge\eta_2$ is the exterior product of the normal distribution $\{\eta_1, \eta_2\}$ of Σ.
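On decomposable bivectors, the induced inner product G is the Gram determinant of the pairwise metric products, $G(a\wedge b,\, c\wedge d) = \det\begin{pmatrix}\langle a,c\rangle & \langle a,d\rangle\\ \langle b,c\rangle & \langle b,d\rangle\end{pmatrix}$; a minimal numeric sketch (ours; it defaults to the Euclidean metric, and the function name is hypothetical):

```python
import numpy as np

def G2(a, b, c, d, g=None):
    """Induced inner product of the simple bivectors a^b and c^d:
    the Gram determinant of the pairwise metric products <.,.>_g."""
    g = np.eye(len(a)) if g is None else g
    ip = lambda u, v: u @ g @ v
    return np.linalg.det(np.array([[ip(a, c), ip(a, d)],
                                   [ip(b, c), ip(b, d)]]))

e1, e2, e3 = np.eye(3)
print(G2(e1, e2, e1, e2))   # 1.0  (unit bivector against itself)
print(G2(e1, e2, e1, e3))   # 0.0  (orthogonal bivectors)
```

The boundary conditions above thus require the bracketed bivector-valued expressions to be G-orthogonal to the normal bivector $\eta_1\wedge\eta_2$ of Σ.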
Problem 9.
Optimize $J_\Sigma[\Gamma(\cdot)]$ subject to Equation (5).
The corresponding Hamiltonian is:
$$H^l(x,g,\Gamma,p) = X^l(x,g,\Gamma) - g^{is}\,\Gamma^j_{\;sk}\,p^{lk}_{\;ij}$$
and the multivariate maximum principle is described in the following statement.
Corollary 8.
Suppose the linear connection Γ*(·) is an optimal solution for $\max_{\Gamma(\cdot)} J_\Sigma[\Gamma(\cdot)]$ subject to Equation (5), and that g*(·) is the corresponding optimal Riemannian structure. Then there exists a dual object $p^* = (p^{*lk}_{\;ij} = p^{*lk}_{\;ji} = -p^{*kl}_{\;ij})$ satisfying:
  • State equations:
    $$\frac{\partial g^{*ij}}{\partial x^k} = \frac{\partial H^l}{\partial p^{lk}_{\;ij}} + \frac{\partial H^l}{\partial p^{lk}_{\;ji}} \quad (\text{no sum on } l);$$
  • Adjoint equations:
    $$\Big(\frac{\partial p^{*lk}_{\;ij}}{\partial x^k} + \frac{\partial H^l}{\partial g^{ij}} + \frac{\partial H^l}{\partial g^{ji}}\Big)\frac{\partial}{\partial x^l} \in T_x\Sigma;$$
  • Optimality conditions:
    $$\Big(\frac{\partial H^l}{\partial \Gamma^k_{\;ij}} + \frac{\partial H^l}{\partial \Gamma^k_{\;ji}}\Big)\frac{\partial}{\partial x^l} \in T_x\Sigma;$$
  • Boundary conditions:
    $$G\Big(\Big[p^{*lk}_{\;ij} - \Big(\frac{\partial \chi^{lk}}{\partial g^{ij}} + \frac{\partial \chi^{lk}}{\partial g^{ji}}\Big)\Big]\frac{\partial}{\partial x^l}\wedge\frac{\partial}{\partial x^k},\; \eta_1\wedge\eta_2\Big) = 0.$$

4.3. Riemannian Control with Curvilinear Integral Cost Functional

The natural expression of the dual mapping needed to phrase the optimality conditions requires curvature-free Hamiltonians; therefore, if the cost functional is of curvilinear type, we can only analyze optimal control problems depending on the connection. More precisely, we analyze the problem of optimizing $J_C[\Gamma(\cdot)]$ subject to Equation (5). The corresponding Hamiltonian 1-form is:
$$H_l(x,g,\Gamma,p) = X_l(x,g,\Gamma) - g^{is}\,\Gamma^j_{\;sl}\,p_{ij}$$
and the corresponding multivariate maximum principle is described by the following statement.
Corollary 9.
Suppose the linear connection Γ*(·) is an optimal control solution for $\max_{\Gamma(\cdot)} J_C[\Gamma(\cdot)]$ subject to Equation (5), and that g*(·) is the corresponding optimal Riemannian structure. Then there exists a dual tensor field $p^* = (p^*_{ij} = p^*_{ji})$ satisfying:
  • The state equations:
    $$\frac{\partial g^{*ij}}{\partial x^l} = \frac{\partial H_l}{\partial p_{ij}} + \frac{\partial H_l}{\partial p_{ji}};$$
  • The adjoint equations:
    $$g^{ls}\Big(\frac{\partial p^*_{ij}}{\partial x^l} + \frac{\partial H_l}{\partial g^{ij}} + \frac{\partial H_l}{\partial g^{ji}}\Big)\frac{\partial}{\partial x^s} \in T_xC;$$
  • The optimality conditions:
    $$g^{ls}\Big(\frac{\partial H_l}{\partial \Gamma^k_{\;ij}} + \frac{\partial H_l}{\partial \Gamma^k_{\;ji}}\Big)\frac{\partial}{\partial x^s} \in T_xC;$$
  • The terminal conditions:
    $$p^*_{ij}(x) = \Big(\frac{\partial \chi}{\partial g^{ij}} + \frac{\partial \chi}{\partial g^{ji}}\Big)(x), \quad x \in \{x_i, x_f\}.$$

5. Conclusions

The idea of finding optimal Riemannian structures for geometrically meaningful integrals has classical roots. Nevertheless, the well-known Riemannian optimization approaches refer only to particular problems (such as Hilbert’s problem or Plateau’s problem), and the results are generally obtained via the calculus of variations. This paper adapted multivariate optimal control techniques to general Riemannian optimization problems in order to derive a Hamiltonian approach. The cost functionals considered here were multiple, curvilinear, or hypersurface-type integrals. Necessary optimality conditions were described. Furthermore, Hilbert’s classical isoperimetric problem was solved in a Hamiltonian manner, together with another new example.

Author Contributions

The conceptualization, formal analysis, and validation of the content were done with equal participation of both authors.

Funding

This research was funded by Balkan Society of Geometers, Bucharest, Romania.

Acknowledgments

Many thanks to the Balkan Society of Geometers, who funded this research. The subject of this paper was scientifically supported by the University Politehnica of Bucharest, and by the Academy of Romanian Scientists.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abbas, H.; Kim, Y.; Siegel, J.B.; Rizzo, D.M. Synthesis of Pontryagin’s Maximum Principle Analysis for Speed Profile Optimization of All-Electric Vehicles. J. Dyn. Sys. Meas. Control 2019, 141, 071004.
  2. Ali, H.M.; Pereira, F.L.; Gama, S.M.A. A new approach to the Pontryagin maximum principle for nonlinear fractional optimal control problems. Math. Methods Appl. Sci. 2016, 39, 3640–3649.
  3. Avakov, E.R.; Magaril-Il’yaev, G.G. Generalized Maximum Principle in Optimal Control. Doklady Math. 2018, 98, 575–578.
  4. Ross, I.M. A Primer on Pontryagin’s Principle in Optimal Control, 2nd ed.; Collegiate Publishers: San Francisco, CA, USA, 2015.
  5. Udrişte, C. Minimal submanifolds and harmonic maps through multitime maximum principle. Balkan J. Geom. Appl. 2013, 18, 69–82.
  6. Agrachev, A.; Sachkov, Y.L. Control theory from the geometric viewpoint. In Encyclopedia of Mathematical Sciences; Springer: Berlin, Germany, 2004; Volume 87.
  7. Schattler, H.; Ledzewicz, U. Geometric Optimal Control: Theory, Methods and Examples. In Interdisciplinary Applied Mathematics; Springer: New York, NY, USA, 2012.
  8. Zhu, J.; Trelat, E.; Cerf, M. Geometric optimal control and applications to aerospace. Pac. J. Math. Ind. 2017, 9, 8.
  9. Evans, L.C. An Introduction to Mathematical Optimal Control Theory; Lecture Notes; Department of Mathematics, University of California: Berkeley, CA, USA, 2010.
  10. Baillieul, J.; Willems, J.C. Mathematical Control Theory; Springer Science and Business Media: New York, NY, USA, 1999.
  11. Lee, E.B.; Markus, L. Foundations of Optimal Control Theory; Wiley: Hoboken, NJ, USA, 1967.
  12. Macki, J.; Strauss, A. Introduction to Optimal Control; Springer: New York, NY, USA, 1982.
  13. Pontriaguine, L.; Boltianski, V.; Gamkrelidze, R.; Michtchenko, E. Théorie Mathématique des Processus Optimaux; MIR: Moscow, Russia, 1974.
  14. Cesari, L. Optimization with partial differential equations in Dieudonne-Rashevsky form and conjugate problems. Arch. Rat. Mech. Anal. 1969, 33, 339–357.
  15. Klotzler, R. On Pontrjagin’s maximum principle for multiple integrals. Beitrage zur Analysis 1976, 8, 67–75.
  16. Mititelu, Ş.; Pitea, A.; Postolache, M. On a class of multitime variational problems with isoperimetric constraints. U.P.B. Sci. Bull. Ser. A Appl. Math. Phys. 2010, 72, 31–40.
  17. Pickenhain, S.; Wagner, M. Pontryagin Principle for State-Constrained Control Problems Governed by First-Order PDE System. J. Optim. Theory Appl. 2000, 107, 297–330.
  18. Rund, H. Pontrjagin functions for multiple integral control problems. J. Optim. Theory Appl. 1976, 18, 511–520.
  19. Wagner, M. Pontryagin Maximum Principle for Dieudonne-Rashevsky Type Problems Involving Lipschitz Functions. Optimization 1999, 46, 165–184.
  20. Kobayashi, S.; Nomizu, K. Foundations of Differential Geometry. In Interscience Tracts in Pure and Applied Mathematics; John Wiley and Sons Inc.: New York, NY, USA, 1963.
  21. Lee, J.M. Introduction to Riemannian Manifolds, 2nd ed.; Graduate Texts in Mathematics; Springer: Cham, Switzerland, 2019.
  22. Udrişte, C. Multitime stochastic control theory. In Proceedings of the Selected Topics on Circuits, Systems, Electronics, Control and Signal Processing, Cairo, Egypt, 29–31 December 2007; pp. 171–176.
  23. Udrişte, C.; Ţevy, I. Multitime Dynamic Programming for Curvilinear Integral Actions. J. Optim. Theory Appl. 2010, 146, 189–207.
  24. Bejenaru, A.; Udrişte, C. Multitime optimal control and equilibrium deformations. In Recent Researches in Hydrology, Geology and Continuum Mechanics; WSEAS Press: Cambridge, UK, 2011; pp. 126–136.
  25. Udrişte, C. Nonholonomic approach of multitime maximum principle. Balkan J. Geom. Appl. 2009, 14, 101–116.
  26. Udrişte, C.; Ţevy, I. Multitime Dynamic Programming for Multiple Integral Actions. J. Glob. Optim. 2011, 51, 345–360.
  27. Udrişte, C.; Bejenaru, A. Multitime optimal control with area integral costs on boundary. Balkan J. Geom. Appl. 2011, 16, 138–154.
  28. Udrişte, C. Multitime controllability, observability and bang-bang principle. J. Optim. Theory Appl. 2008, 139, 141–157.
  29. Udrişte, C. Equivalence of multitime optimal control problems. Balkan J. Geom. Appl. 2010, 15, 155–162.
