Optimal Control Theory: Introduction to the Special Issue

Optimal control theory is a modern extension of the classical calculus of variations. Converting a calculus of variations problem into an optimal control problem requires one more conceptual extension: the addition of control variables to the state equations. While the main result of the calculus of variations was the Euler equation, the Pontryagin maximum principle is the main result of optimal control theory. The maximum principle was developed by a group of Russian mathematicians in the 1950s and gives necessary conditions for optimality in a wide range of dynamic optimization problems. At present, for deterministic control models described by ordinary differential equations, the Pontryagin maximum principle is used as often as Bellman's dynamic programming method.
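As a concrete reminder (standard textbook notation, not tied to any particular paper in this Issue), for the problem of minimizing the integral of L(x, u) over [0, T] subject to x' = f(x, u), the maximum principle asserts the existence of an adjoint (costate) function psi such that, along the optimal pair (x*, u*):

```latex
H(x,u,\psi) = \psi^{\top} f(x,u) - L(x,u), \qquad
\dot{\psi} = -\left.\frac{\partial H}{\partial x}\right|_{(x^{*},\,u^{*})}, \qquad
u^{*}(t) \in \operatorname*{arg\,max}_{u \in U} H\bigl(x^{*}(t),\, u,\, \psi(t)\bigr).
```

Here the pointwise maximization of the Hamiltonian H over the admissible control set U plays the role that the Euler equation plays in the classical calculus of variations.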
An optimal control problem comprises the calculation of the optimal control and the synthesis of the optimal control system. The optimal control is, as a rule, calculated by numerical methods for finding the extremum of an objective function or by solving a two-point boundary value problem for a system of differential equations. From a mathematical point of view, the synthesis of optimal control is a nonlinear programming problem in function spaces.
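As a minimal sketch of the two-point boundary value approach (a textbook scalar example added here for illustration, not taken from any paper in this Issue): minimize the integral of x² + u² over [0, 1] subject to x' = u, x(0) = 1, with x(1) free. The maximum principle gives u = psi/2 together with the adjoint equation psi' = 2x and the transversality condition psi(1) = 0, a boundary value problem that SciPy's `solve_bvp` handles directly:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hamiltonian system for minimizing ∫ (x^2 + u^2) dt with x' = u, x(0) = 1,
# and x(1) free: maximizing H = psi*u - x^2 - u^2 over u gives u = psi/2,
# and the adjoint equation is psi' = 2x.
def rhs(t, y):
    x, psi = y
    return np.vstack([psi / 2.0, 2.0 * x])

def bc(ya, yb):
    # x(0) = 1; transversality: psi(1) = 0 because the terminal state is free
    return np.array([ya[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
y0 = np.zeros((2, t.size))          # crude initial guess for (x, psi)
sol = solve_bvp(rhs, bc, t, y0)
x1 = sol.sol(1.0)[0]                # optimal state at t = 1
print(round(x1, 4))
```

The computed value matches the closed-form optimal trajectory x(t) = cosh(1 − t)/cosh(1), for which x(1) = 1/cosh(1) ≈ 0.648.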
This Special Issue gathers research focused on the development of novel analytical and numerical methods for solving optimal control and dynamic optimization problems, including problems with changing or incomplete information about the objects under study, with applications to medicine, infectious diseases, and economic and physical phenomena. Investigations of new classes of optimization problems, optimal control of nonlinear systems, and the task of reconstructing input signals are also presented. For example, the Issue includes articles that develop new algorithms implementing principles of regularization through constructive iterative procedures, as well as papers that create optimal control models able to accumulate experience and improve their performance on this basis (so-called learning optimal control systems). Finally, applied articles focused on control models of economic, physical, medical, or environmental processes, or on resource allocation over a specified time interval or an infinite planning horizon, are also presented.
The original research articles of this Issue reflect new advances in optimal control and differential games; deterministic and stochastic control processes; and combined methods for the synthesis of both deterministic and stochastic systems with full information about parameters, states, and perturbations. The Issue collects papers that apply analytical methods to various problems of optimal control and its evaluation, as well as applications of optimal control and differential games to the description of complex nonlinear phenomena.
A short summary of the manuscripts, arranged in alphabetical order of first authors, is given below.
Within the framework of the above, Arias-Castro, Martinez-Romero and Vasilieva [1] focus on the design and analysis of short-term control intervention measures seeking to suppress local populations of Aedes aegypti mosquitoes, the major transmitters of dengue and other vector-borne infections. In addition to traditional measures involving the spraying of larvicides and/or insecticides, a biological control based on the deliberate introduction of predacious species feeding on the aquatic stages of mosquitoes is included. From a methodological standpoint, such a study relies on the application of the optimal control modeling framework in combination with cost-effectiveness analysis. This approach not only enables the design of optimal strategies for external control intervention but also allows for assessment of their performance in terms of the cost-benefit relationship. By examining numerous scenarios derived from combinations of chemical and biological control measures, attempts are made to find out whether the presence of predacious species at the mosquito breeding sites may (partially) replace the common practices of larvicide/insecticide spraying and thus reduce their negative impact on non-target organisms. As a result, two strategies exhibiting the best metrics of cost-effectiveness and providing some useful insights for their possible implementation in practical settings are identified.
Arguchintsev and Poplevko [2] deal with an optimal control problem for a linear system of first-order hyperbolic equations whose right-hand side is determined by controlled bilinear ordinary differential equations. These ordinary differential equations are linear with respect to the state functions, with controlled coefficients. Such problems arise in the simulation of some processes of chemical technology and population dynamics. The problem is reduced to an optimal control problem for a system of ordinary differential equations. The reduction is based on non-classical exact increment formulas for the objective function. This approach allows one to use efficient optimal control methods for the analysis of the resulting problem.
Aseev and Katsumoto [3] develop a new dynamic model of optimal investments in R&D and manufacturing for a technological leader competing with a large number of identical followers on the market of a technological product. The model is formulated as an infinite-time-horizon stochastic optimization problem. The evolution of new generations of the product is treated as a Poisson-type cyclic stochastic process. The technology spillover effect acts as a driving force of technological change. The authors show that the original probabilistic problem faced by the leader can be reduced to a deterministic one. This result makes it possible to perform analytical studies and numerical calculations.
Pursuit-evasion games are used to define guidance strategies for multi-agent planning problems. Although optimal strategies exist for deterministic scenarios, when information about the opposing players is imperfect, it is important to evaluate the effect of uncertainties on the estimated variables. Battistini [4] proposes a method to characterize the game space of a pursuit-evasion game from a stochastic perspective. The Mahalanobis distance is used as a metric to determine the levels of confidence in the estimation of the Zero Effort Miss across the capture zone. This information can be used to gain insight into the guidance strategy.
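A minimal sketch of the metric itself (the variable names and numbers below are hypothetical, chosen only to illustrate the computation, and are not taken from [4]):

```python
import numpy as np

# Illustrative only: zem_est, zem_cov, and the test point are made-up values.
zem_est = np.array([1.0, -0.5])          # estimated Zero Effort Miss (mean)
zem_cov = np.array([[0.25, 0.05],
                    [0.05, 0.10]])       # estimation-error covariance
point   = np.array([0.0, 0.0])           # e.g., a point in the capture zone

# Mahalanobis distance: sqrt((p - m)^T * C^{-1} * (p - m))
d = point - zem_est
m_dist = float(np.sqrt(d @ np.linalg.inv(zem_cov) @ d))
print(round(m_dist, 4))
```

A small Mahalanobis distance means the point lies within a high-confidence region of the ZEM estimate, once the correlation structure of the estimation error is accounted for, which a plain Euclidean distance would ignore.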
Chica-Pedraza, Mojica-Nava and Cadena-Muñoz [5] consider Multi-Agent Systems (MASs), which have been used to solve several optimization problems in control systems. MASs make it possible to understand the interactions between agents and the complexity of the system, thus generating functional models that are closer to reality. However, these approaches assume that information between agents is always available, which amounts to a full-information model. Approaches that tackle scenarios where information constraints are a relevant issue have been growing in importance. In this sense, game theory appears as a useful technique that uses the concept of a strategy to analyze the interactions of the agents and to maximize agent outcomes. The authors propose a distributed control method of learning that allows one to analyze the effect of exploration in a MAS. The dynamics obtained use Q-learning from reinforcement learning as a way to include the concept of exploration in the classic exploration-less replicator dynamics equation. The Boltzmann distribution is then used to introduce the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling the behavior of agents. This distributed approach can be used in several engineering applications where communication constraints between agents must be considered. The behavior of the proposed method is analyzed using a smart grid application for validation purposes. The results show that, despite the lack of full information about the system, by controlling some parameters of the method it behaves similarly to traditional centralized approaches.
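The exploration mechanism referred to here can be sketched as Boltzmann (softmax) selection over Q-values, where a temperature parameter interpolates between exploration and exploitation; the function and values below are our own illustrative assumptions, not code from [5]:

```python
import numpy as np

def boltzmann_probs(q_values, temperature):
    """Boltzmann (softmax) distribution over actions given their Q-values."""
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

q = [1.0, 2.0, 0.5]               # hypothetical Q-values for three actions
p_hot = boltzmann_probs(q, temperature=10.0)   # high T: near-uniform (explore)
p_cold = boltzmann_probs(q, temperature=0.1)   # low T: near-greedy (exploit)
print(p_hot, p_cold)
```

In the paper's setting, such a distribution perturbs the exploration-less replicator dynamics so that low-probability strategies are still occasionally sampled, with the temperature acting as the control knob.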
Grigorenko and Luk'yanova [6] deal with a model for a one-sector economy of production funds acquisition, which includes two differential links of the zero order and two series-connected inertial links. The zero-order differential links correspond to the equations of the Ramsey model. These equations contain a bounded scalar control, which determines the distribution of the available funds into two parts: investment and consumption. The two series-connected inertial links describe the dynamics of the changes in the volume of actual production at the current production capacity. For the considered control system, the problem is posed of maximizing the average consumption value over a given time interval. The properties of the optimal control are established analytically using the Pontryagin maximum principle. The cases are highlighted when such a control is bang-bang, as well as the cases when, along with bang-bang (non-singular) portions, the control can contain a singular arc; in the latter case, the concatenation of singular and non-singular portions is carried out using chattering. A bang-bang suboptimal control is presented that is close to the optimal one according to the given quality criterion. A positional terminal control is proposed as a first approximation for numerically finding a suboptimal control with a given deviation of the objective function from the optimal value.
N. Hritonenko, V. Hritonenko and Yatsenko [7] formulate and study a nonlinear game of several symmetric countries that produce, pollute, and spend part of their revenue on pollution mitigation and environmental adaptation. The optimal emission, adaptation, and mitigation investments are analyzed in both Nash equilibrium and cooperative cases. Modeling assumptions and outcomes are compared to other publications in this fast-developing area of environmental economics. In particular, this analysis implies that: (a) mitigation is more effective than adaptation in a crowded multi-country world; (b) mitigation increases the effectiveness of adaptation; (c) the optimal ratio between mitigation and adaptation investments in the competitive case is larger for more productive countries and is smaller when more countries are involved in the game.
Idczak and Walczak [8] derive an extremum principle. It can be treated as an intermediate result between the celebrated smooth-convex extremum principle due to Ioffe and Tikhomirov and the Dubovitskii-Milyutin theorem. The proof of this principle is based on a simple generalization of Fermat's theorem, the smooth-convex extremum principle, and the local implicit function theorem.
CAR T-cell immunotherapy is a new development in the treatment of leukemia, promising a new era in oncology. So far, however, this procedure helps only 50-90% of patients and, like other cancer treatments, has serious side effects. Khailov, Grigorieva and Klimenkova [9] propose a controlled model for leukemia treatment to explore possible ways to improve immunotherapy methodology. This model is described by four nonlinear differential equations with two bounded controls, which are responsible for the rate of injection of chimeric cells and for the dosage of the drug that suppresses the so-called "cytokine storm". The optimal control problem of minimizing the cancer cells and the activity of the cytokine is stated and solved using the Pontryagin maximum principle. Five possible optimal control scenarios are predicted analytically by investigating the behavior of the switching functions. Interesting results are presented that explain why, within the model, therapies with rest intervals (for example, stopping injections in the middle of the treatment interval) are more effective than therapies with continuous injections.
Korytowski and Szymkat [10] propose an elementary approach to a class of optimal control problems with pathwise state constraints. Based on spike variations of control, it yields simple proofs and constructive necessary conditions, including some new characterizations of the corresponding optimal controls.
Zaslavski [11] studies the structure of trajectories of discrete disperse dynamical systems with a Lyapunov function, which are generated by set-valued mappings. A weak version of the turnpike property is established that holds for all trajectories of such dynamical systems that are of sufficient length. This result is usually true for models of economic growth, which are prototypes of such dynamical systems.
The articles of this Special Issue will be of interest not only to specialists in the fields of optimal control, differential games, optimization, and their applications, but also to those who aspire to become such specialists, in particular graduate students.

Conflicts of Interest:
The author declares no conflict of interest.