
# Optimal Control and Positional Controllability in a One-Sector Economy

Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119992 Moscow, Russia
* Author to whom correspondence should be addressed.
Games 2021, 12(1), 11; https://doi.org/10.3390/g12010011
Received: 26 November 2020 / Revised: 8 January 2021 / Accepted: 20 January 2021 / Published: 1 February 2021

## Abstract

A model of production funds acquisition, which includes two differential links of the zero order and two series-connected inertial links, is considered in a one-sector economy. The zero-order differential links correspond to the equations of the Ramsey model. These equations contain a scalar bounded control, which determines the distribution of the available funds into two parts: investment and consumption. The two series-connected inertial links describe the dynamics of the changes in the volume of the actual production at the current production capacity. For the considered control system, the problem is posed of maximizing the average consumption value over a given time interval. The properties of the optimal control are established analytically using the Pontryagin maximum principle. The cases are highlighted in which such control is bang-bang, as well as the cases in which, along with bang-bang (non-singular) portions, the control can contain a singular arc. The concatenation of singular and non-singular portions is carried out via chattering. A bang-bang suboptimal control is presented, which is close to the optimal one according to the given quality criterion. A positional terminal control is proposed as the first approximation in the numerical search for a suboptimal control with a given deviation of the objective function from the optimal value. The obtained results are confirmed by the corresponding numerical calculations.

## 1. Introduction

The elements of the zero order (multiplier, accelerator), first order (inertial links), and second order are often found in nonlinear models of economic dynamics. The second-order link can be represented by an oscillatory link or by two connected inertial links. An example of a mathematical model containing inertial links is one that takes into account the delay in the introduction of funds in the problem of optimizing the accumulation rate for the Ramsey model.
In this article, the following model for the development of the introduced production capacities is considered:
$$\begin{aligned}
&\dot k(t) = u(t)\, f(v(t)) - \mu k(t), \quad t \in [0, T],\\
&\ddot v(t) + 4\tau^{-1}\dot v(t) + 4\tau^{-2} v(t) = 4\tau^{-2} k(t),\\
&\dot y(t) = e^{-\delta t}\,(1 - u(t))\, f(v(t)),\\
&k(0) = k_0 > 0, \quad k(T) \ge k_T > 0, \quad v(0) = v_0 \ge 0, \quad \dot v(0) = \dot v_0 \ge 0, \quad y(0) = 0,
\end{aligned} \tag{1}$$
where $k(t)$ is the volume of the entire production capacity at time $t$, and $v(t)$ represents the volume of actual production at production capacity $k(t)$ ($v(t) \le k(t)$). Here, $y(t)$ is the total consumption at time $t$, $u(t)$ defines a control function, $f(v)$ represents a production function, and $\mu$, $\tau$, $\delta$ are positive parameters.
The first and third equations of system (1) refer to dynamic links. The first equation describes the dynamics of changes in the volume of the introduced production capacities, and the third equation sets the dynamics of the consumption process. The second equation refers to the oscillatory link characterizing the dynamics of the development of the introduced production capacities.
The equations of the dynamical change in the introduced production capacities of the Ramsey model [1,2,3,4,5] are used as dynamic links. They contain a scalar bounded control $u ( t )$, which defines the distribution of the available funds into two parts: investment and consumption.
The optimization problem consists of finding a control $u(t)$ that transfers system (1) from a given initial position to a position $k(T) = k_T$ with the maximum value of $y(T)$, which is an indicator of the average consumption value over the given time interval $[0, T]$.
The solution of the optimal control problem for this version of the model provides important information about the extreme value of the objective function and the type of optimal control. This control can contain singular controls of the third order, the direct implementation of which in economic practice can be complicated by the presence of chattering regimes. Consideration of suboptimal controls overcomes this difficulty. However, the question arises of finding a suboptimal control for which the deviation in the objective function satisfies the given constraint. The search for such a control can be realized numerically and the initial approximation is chosen in the form of a positional terminal control. The obtained properties of the optimal, suboptimal, and terminal controls for the considered model are confirmed by corresponding numerical calculations.

## 2. Statement of the Dynamic Model and Maximization Problem

Let us transform system (1) by introducing new phase variables: $v_1(t)$ is the volume of the actual production at production capacity $k(t)$, and $v_2(t)$ is the volume of the actual production at production capacity $v_1(t)$. In addition, we assume that the increase in production is proportional to the under-utilized capacity. All this makes it possible to rewrite the second-order differential equation in system (1) as two first-order differential equations. As a result, on a given time interval $[0, T]$, we have the following system of differential equations:
$$\begin{aligned}
\dot k(t) &= u(t)\, f(v_2(t)) - \mu k(t),\\
\dot v_1(t) &= 2\tau^{-1}\big(k(t) - v_1(t)\big),\\
\dot v_2(t) &= 2\tau^{-1}\big(v_1(t) - v_2(t)\big),\\
\dot y(t) &= e^{-\delta t}\,(1 - u(t))\, f(v_2(t))
\end{aligned} \tag{2}$$
with the initial and final phase constraints:
$$k(0) = k_0 > 0, \quad k(T) \ge k_T > 0, \quad v_1(0) = v_{01} \ge 0, \quad v_2(0) = v_{02} \ge 0, \quad y(0) = 0, \tag{3}$$
where $k(t)$, $v_1(t)$, $v_2(t)$, $y(t)$ are the phase variables; $k_0$, $v_{01}$, $v_{02}$ are the corresponding initial conditions; and $\delta$, $\mu$, $\tau$, $T$, $k_T$ are positive parameters.
Here, $f ( x )$ is a neoclassical production function [3,5], that is, a linearly homogeneous production function satisfying the conditions:
$$f(0) = 0, \quad \dot f(x) > 0, \quad \ddot f(x) < 0, \quad \lim_{x \to +\infty} f(x) = +\infty, \quad \lim_{x \to +0} \dot f(x) = +\infty, \quad \lim_{x \to +\infty} \dot f(x) = 0.$$
Next, $u ( t )$ is a control function that obeys the following constraints:
$$0 \le u(t) \le 1. \tag{4}$$
We consider that the set of all admissible controls $\Omega(T)$ consists of all Lebesgue measurable functions $u(t)$ that satisfy inequalities (4) for almost all $t \in [0, T]$.
Model (2) of the dynamics of the economic quantities includes four phase variables, the first of which is the volume of the entire production capacity, the fourth is specific consumption, and the third is the amount of funds available for use at the current time t [1,5,6].
Now, let us consider for system (2) on the set of admissible controls $Ω ( T )$ the following maximization problem:
$$J(u(\cdot)) = y(T) \to \max_{u(\cdot) \in \Omega(T)}. \tag{5}$$
The objective function in (5) means the maximization of the average amount of funds allocated for consumption over the given time interval $[0, T]$.
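As an illustrative sketch (not part of the paper), system (2) can be integrated numerically for any admissible control. The square-root production function, the explicit Euler scheme, the step count, and the constant control below are all assumptions made for the example:

```python
import numpy as np

def simulate(u_func, k0=0.1, v01=0.1, v02=0.1, T=100.0,
             mu=0.2, delta=0.01, tau=1.0, n_steps=20000):
    """Explicit-Euler integration of system (2) for a given control u(t).

    Assumes f(v) = sqrt(v); returns the terminal state (k, v1, v2, y).
    """
    dt = T / n_steps
    k, v1, v2, y = k0, v01, v02, 0.0
    for i in range(n_steps):
        t = i * dt
        u = min(max(u_func(t), 0.0), 1.0)   # enforce constraint (4)
        f = np.sqrt(max(v2, 0.0))
        k_new = k + dt * (u * f - mu * k)
        v1_new = v1 + dt * (2.0 / tau) * (k - v1)
        v2_new = v2 + dt * (2.0 / tau) * (v1 - v2)
        y_new = y + dt * np.exp(-delta * t) * (1.0 - u) * f
        k, v1, v2, y = k_new, v1_new, v2_new, y_new
    return k, v1, v2, y

kT_, v1T_, v2T_, yT_ = simulate(lambda t: 0.5)  # constant control u ≡ 0.5
```

Consistent with the lemmas of the next section, all components of the computed trajectory remain positive.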

## 3. Properties of Solution of System (2)

Note that the system of Equation (2) satisfies the conditions of the Cauchy–Peano existence theorem, and hence the solution $(k(t), v_1(t), v_2(t), y(t))$ of this system corresponding to a control $u(\cdot) \in \Omega(T)$ exists (at least locally) in a neighborhood of the initial point $(k_0, v_{01}, v_{02}, 0)$.
Now, the lower bounds for the components of the solution to system (2) are established using the following lemma.
Lemma 1.
Let $u ( t )$ be an arbitrary admissible control and
$$z(t) = (z_1(t), z_2(t), z_3(t), z_4(t)) = (k(t), v_1(t), v_2(t), y(t))$$
be the corresponding solution of system (2) defined for $t ≥ 0$. Then the following relationships are true:
$$z_1(t) > e^{-\mu T} k_0 > 0 \ \text{for}\ k_0 > 0; \quad z_2(t) > e^{-2\tau^{-1} T} v_{01} > 0 \ \text{for}\ v_{01} > 0; \quad z_3(t) > e^{-2\tau^{-1} T} v_{02} > 0 \ \text{for}\ v_{02} > 0; \quad z_4(t) \ge 0 \ \text{for}\ y_0 = 0.$$
Proof.
According to the first equation of system (2), due to the conditions for the function $f ( v 2 )$ and the restrictions on the control $u ( t )$, we have the following relationship:
$$\dot z_1(t) = u(t)\, f(z_3(t)) - \mu z_1(t) \ge -\mu z_1(t).$$
Integrating it, we obtain the required inequality: $z_1(t) \ge e^{-\mu T} z_1(0) > 0$.
Due to this inequality, we have similar expression for the second equation of this system:
$$\dot z_2(t) = 2\tau^{-1}\big(z_1(t) - z_2(t)\big) > -2\tau^{-1} z_2(t).$$
Hence, we find that $z_2(t) \ge e^{-2\tau^{-1} T} z_2(0) > 0$.
Similarly, from the last inequality and the third equation of the system, we obtain:
$$\dot z_3(t) = 2\tau^{-1}\big(z_2(t) - z_3(t)\big) > -2\tau^{-1} z_3(t).$$
Then it implies that $z_3(t) \ge e^{-2\tau^{-1} T} z_3(0) > 0$.
Finally, due to the neoclassical conditions on the function $f(v_2)$ and the restrictions on the control $u(t)$, we easily conclude that $z_4(t) \ge 0$. □
Now, let us introduce the following positive values:
$$\bar z_1(T) = e^{-\mu T} k_0, \quad \bar z_2(T) = e^{-2\tau^{-1} T} v_{01}, \quad \bar z_3(T) = e^{-2\tau^{-1} T} v_{02}.$$
The next lemma shows that an arbitrary solution of system (2) exists on the entire time interval $[ 0 , T ]$.
Lemma 2.
The solution
$$z(t) = (z_1(t), z_2(t), z_3(t), z_4(t)) = (k(t), v_1(t), v_2(t), y(t))$$
of system (2) corresponding to the control $u ( · ) ∈ Ω ( T )$ is defined for all $t ∈ [ 0 , T ]$.
Proof.
Let us show that for any admissible control $u(t)$, the corresponding solution of system (2) cannot go to infinity over a finite time interval. We assume that this solution is defined on a certain interval $\Delta$. It follows from the neoclassical conditions that the function $f(x)/x$ decreases monotonically from $+\infty$ to 0. Indeed, L'Hôpital's rule for calculating limits implies that
$$\lim_{x \to +0} \frac{f(x)}{x} = \lim_{x \to +0} \dot f(x) = +\infty, \qquad \lim_{x \to +\infty} \frac{f(x)}{x} = \lim_{x \to +\infty} \dot f(x) = 0.$$
Now, let us verify directly the monotone decrease of the function $f ( x ) / x$. Since
$$\frac{d}{dx}\,\frac{f(x)}{x} = \frac{\dot f(x)\, x - f(x)}{x^2},$$
it suffices to establish the validity of the inequality $\dot f(x)\, x < f(x)$ for all $x > 0$. For this, we use the Newton–Leibniz formula $f(x) = \int_0^x \dot f(y)\, dy$. Since the function $\dot f(x)$ is decreasing (see the inequality $\ddot f(x) < 0$ in the neoclassical conditions), we obtain the inequality $\dot f(y) > \dot f(x)$ for all $y \in [0, x)$. In that case,
$$f(x) = \int_0^x \dot f(y)\, dy > \int_0^x \dot f(x)\, dy = \dot f(x)\, x.$$
Thus, the monotonic decrease of the function $f ( x ) / x$ from $+ ∞$ to 0 is established.
Due to the proven property of function $f ( x ) / x$, we have the inequality:
$$\frac{f(z_3)}{z_3} \le \frac{f(\bar z_3(T))}{\bar z_3(T)}, \quad z_3 \in [\bar z_3(T), +\infty),$$
and
$$u f(v_2) - \mu k < u f(z_3) \le \frac{f(\bar z_3(T))}{\bar z_3(T)}\, z_3.$$
Thus, for all $t ∈ Δ$ the following inequality holds:
$$\dot z_1(t) \le C(\bar z_3(T))\, z_3(t),$$
where $C ( · )$ is a positive constant.
Since
$$\dot z_2 = 2\tau^{-1}(z_1 - z_2) \le 2\tau^{-1} z_1, \quad \dot z_3 = 2\tau^{-1}(z_2 - z_3) \le 2\tau^{-1} z_2, \quad \dot z_4 = e^{-\delta t}(1 - u)\, f(z_3) \le \frac{f(\bar z_3(T))}{\bar z_3(T)}\, z_3,$$
then there is a positive constant $C 1$ such that for all $z i > 0$, $i = 1 , 2 , 3$ and $z 4 ≥ 0$, $u ∈ [ 0 , 1 ]$ the following estimate is valid:
$$(z, F(z, u)) \le C_1\big(1 + (z, z)\big).$$
Here, $F(z, u)$ is the right-hand side of system (2), and $(z, F(z, u))$, $(z, z)$ are the corresponding scalar products of the vector $z$ with $F(z, u)$ and with itself. Therefore, according to the theorem on the continuation of solutions of differential equations, the solution $(k(t), v_1(t), v_2(t), y(t))$ of system (2) exists on the entire time interval $[0, T]$. □
The last lemma justifies the uniqueness of the solution to the system (2).
Lemma 3.
The solution to system (2) is unique.
Proof.
Let us introduce the set $\Gamma = \{(z_1, z_2, z_3) : z_i > 0,\ i = 1, 2, 3\}$. Then, when the initial conditions satisfy $(z_1(0), z_2(0), z_3(0)) \in \Gamma$, the first three components of the solution to system (2) satisfy the estimates from Lemma 1, that is, $(z_1(t), z_2(t), z_3(t)) \in \Gamma$ for all $t \in (0, T]$. Due to the neoclassical conditions on the function $f(x)$, the functions $f(x)$, $\dot f(x)$ are defined and continuous for all $x \ge 0$. Thus, for the first three components of the right-hand side of system (2), the conditions of the uniqueness theorem are satisfied. The right-hand side of the fourth equation of the system also satisfies the conditions of this uniqueness theorem, and therefore the solution to system (2) is unique. □
Lemmas 1–3 imply that for a neoclassical function $f(x)$ and for any control $u(\cdot) \in \Omega(T)$, a solution to system (2) exists, is unique, and is defined for all $t \in [0, T]$.
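The properties used in the proofs above can be sanity-checked numerically. The sketch below is an illustration (assuming the concave function $f(x) = \sqrt{x}$, which is not fixed by the lemmas themselves): it verifies the monotone decrease of $f(x)/x$ and the inequality $\dot f(x)\, x < f(x)$ from the proof of Lemma 2.

```python
import numpy as np

# Illustrative check with the concave production function f(x) = sqrt(x):
# the ratio f(x)/x must decrease monotonically, and f'(x)*x < f(x) must hold
# for x > 0, as established in the proof of Lemma 2.
x = np.linspace(0.1, 100.0, 1000)
f = np.sqrt(x)
fdot = 0.5 / np.sqrt(x)

ratio_decreasing = bool(np.all(np.diff(f / x) < 0))
dominated = bool(np.all(fdot * x < f))
```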
Below we give a solution to the Problem (2) and (5) which was found on the basis of the Pontryagin maximum principle [8,9] in the class of Lebesgue measurable functions satisfying inequalities (4) for almost all $t ∈ [ 0 , T ]$.

## 4. Pontryagin Maximum Principle

For the Problem (2) and (5), the Hamiltonian and Lagrangian of the phase constraints (3) have the form:
$$\begin{aligned}
H(t, k, v_1, v_2, y, u, \psi_1, \psi_2, \psi_3, \psi_4) ={}& (u f(v_2) - \mu k)\psi_1 + 2\tau^{-1}(k - v_1)\psi_2 + 2\tau^{-1}(v_1 - v_2)\psi_3 + e^{-\delta t}(1 - u) f(v_2)\psi_4\\
={}& u f(v_2)\big(\psi_1 - e^{-\delta t}\psi_4\big) - \mu k \psi_1 + 2\tau^{-1}(k - v_1)\psi_2 + 2\tau^{-1}(v_1 - v_2)\psi_3 + e^{-\delta t} f(v_2)\psi_4,\\
l(k(0), v_1(0), v_2(0), y(0), k(T), v_1(T), v_2(T), a_0, \dots, a_5) ={}& -a_0 y(T) + a_1(k(0) - k_0) + a_2(k(T) - k_T)\\
&+ a_3(v_1(0) - v_{01}) + a_4(v_2(0) - v_{02}) + a_5(y(0) - 0).
\end{aligned} \tag{6}$$
Here, $\psi_i$, $i = 1, \dots, 4$ are the adjoint variables, and $a_i$, $i = 0, \dots, 5$ are Lagrange multipliers.
The extremal control $u * ( t )$ is defined by the relationship:
$$u^*(t) = \begin{cases} 1, & \eta(t) > 0,\\ u_{\mathrm{sing}} \in [0, 1], & \eta(t) \equiv 0,\\ 0, & \eta(t) < 0, \end{cases} \tag{7}$$
where $\eta(t) = \psi_1(t) - e^{-\delta t}\psi_4(t)$ is the switching function.
The adjoint system is given by the following system of differential equations:
$$\begin{aligned}
\dot\psi_1(t) &= -H'_k = \mu\psi_1(t) - 2\tau^{-1}\psi_2(t),\\
\dot\psi_2(t) &= -H'_{v_1} = 2\tau^{-1}\big(\psi_2(t) - \psi_3(t)\big),\\
\dot\psi_3(t) &= -H'_{v_2} = -u(t)\,\dot f(v_2(t))\big(\psi_1(t) - e^{-\delta t}\psi_4(t)\big) + 2\tau^{-1}\psi_3(t) - e^{-\delta t}\dot f(v_2(t))\psi_4(t),\\
\dot\psi_4(t) &= -H'_y = 0,
\end{aligned} \tag{8}$$

where the partial derivatives of $H$ are evaluated along $(t, k(t), v_1(t), v_2(t), y(t), u(t), \psi_1(t), \psi_2(t), \psi_3(t), \psi_4(t))$,
which satisfy the boundary conditions:
$$\begin{aligned}
&\psi_1(0) = l'_{k(0)} = a_1, \quad \psi_1(T) = -l'_{k(T)} = -a_2,\\
&\psi_2(0) = l'_{v_1(0)} = a_3, \quad \psi_2(T) = -l'_{v_1(T)} = 0,\\
&\psi_3(0) = l'_{v_2(0)} = a_4, \quad \psi_3(T) = -l'_{v_2(T)} = 0,\\
&\psi_4(0) = l'_{y(0)} = a_5, \quad \psi_4(T) = -l'_{y(T)} = a_0 \ge 0.
\end{aligned} \tag{9}$$
We note that relationships (8) and (9) are written in terms of the corresponding partial derivatives of the Hamiltonian and the Lagrangian defined in (6).
The equation for $\psi_4(t)$ in system (8) and the corresponding transversality conditions in (9) imply that $\psi_4(t) \equiv a_0 = a_5 \ge 0$. Moreover, an abnormal case occurs when $a_0 = 0$, and a normal case corresponds to the equality $a_0 = 1$. In the abnormal case, we have $\psi_4(t) \equiv 0$. It is proven by contradiction that then, in accordance with the maximum principle [8,9], the switching function $\eta(t)$ cannot take a zero value on a set of nonzero measure, which leads to a correction of relationship (7). As a result, Formula (7) for the extremal control $u^*(t)$ takes the form:
$$u^*(t) = \begin{cases} 1, & \eta(t) \ge 0,\\ 0, & \eta(t) < 0. \end{cases}$$
Let us consider further the normal case of the optimal control problem (2), (5), when $a_0 = 1$. In this case, we have $\psi_4(t) \equiv 1$.
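Evaluating the extremal control from the sign of the switching function is mechanical once $\eta(t)$ is known. A minimal sketch of the bang-bang branch follows; the switching function used here is hypothetical (the true $\eta(t)$ requires integrating the adjoint system (8)):

```python
import numpy as np

def extremal_control(eta):
    """Bang-bang law: u* = 1 where the switching function is nonnegative
    and u* = 0 where it is negative (the corrected form of (7))."""
    return 1.0 if eta >= 0.0 else 0.0

# Hypothetical switching function with a single zero at t = 3,
# purely for illustration.
ts = np.linspace(0.0, 10.0, 11)
u_star = [extremal_control(3.0 - t) for t in ts]
```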

#### Singular Regime

System (2) is such that, on the one hand, it is linear in the control $u(t)$ and, on the other hand, it depends explicitly on time $t$ through the factor $e^{-\delta t}$ (it is non-autonomous). In addition, the problem under study (5) is nonstandard, because it is a maximization problem rather than the traditional minimization problem. However, these differences do not prevent us from applying the well-known theory of singular arcs [10,11]. This is due to the fact that:
(i)
performing a standard change of variables $\dot\chi = 1$, $\chi(0) = 0$, where $\chi$ is a new auxiliary variable, leads, on the one hand, to an increase by one in the order of the original system (2) and, on the other hand, makes such a system autonomous;
(ii)
introducing a new objective function $J ˜ ( u ( · ) ) = − J ( u ( · ) )$ leads the considered problem (5) to the problem on minimum:
$$\tilde J(u(\cdot)) = -y(T) \to \min_{u(\cdot) \in \Omega(T)}.$$
Thus, the original maximization problem (5) is easily transformed to the one for which the indicated theory is applicable. Therefore, further we will not explicitly follow the steps described above, but will imply them.
A singular regime takes place if the switching function satisfies $\eta(t) = \psi_1(t) - e^{-\delta t} = 0$ identically for all $t \in [t_1, t_2]$. This regime is of order 3 (see [10,11]) and is characterized by the following relationships:
$$\begin{aligned}
&\psi_1(t) = e^{-\delta t}, \quad \psi_2(t) = 0.5\tau(\mu + \delta)e^{-\delta t}, \quad \psi_3(t) = 0.5\tau(\mu + \delta)(1 + 0.5\tau\delta)e^{-\delta t},\\
&\dot f(v_2^{\mathrm{sing}}) = (\mu + \delta)(1 + 0.5\tau\delta)^2, \quad v_1^{\mathrm{sing}} = v_2^{\mathrm{sing}}, \quad k^{\mathrm{sing}} = 2v_1^{\mathrm{sing}} - v_2^{\mathrm{sing}}, \quad u^{\mathrm{sing}} = \frac{\mu k^{\mathrm{sing}}}{f(v_2^{\mathrm{sing}})},
\end{aligned} \tag{10}$$
which imply that the first five consecutive derivatives of the switching function $\eta(t)$ vanish on the segment $[t_1, t_2]$ and only the sixth derivative has a nonzero term depending on the control $u(t)$.
Let us denote by $\hat v_2$ the solution of the equation $f(\hat v_2) = \mu \hat v_2$. Due to the concavity of the function $f(v_2)$, we have $\dot f(\xi_1) > \dot f(\xi_2)$ if $\xi_1 < \xi_2$. Thus, the chain of inequalities
$$\dot f(v_2^{\mathrm{sing}}) = (\mu + \delta)(1 + 0.5\tau\delta)^2 > \mu > \dot f(\hat v_2)$$
implies $v_2^{\mathrm{sing}} < \hat v_2$, which means that the control of the singular regime, $u^{\mathrm{sing}} \in [0, 1]$, is admissible. For the control $u^{\mathrm{sing}}$, the necessary optimality conditions for a singular control (the Kopp–Moyer conditions) are satisfied. Due to the neoclassical conditions $f(v_2) > 0$ and $\ddot f(v_2) < 0$, we obtain the relationship:
$$\frac{\partial}{\partial u}\,\frac{d^6}{dt^6}\,\frac{\partial H}{\partial u} = -2\tau^{-1} e^{-\delta t}\, \ddot f(v_2^{\mathrm{sing}})\, f^2(v_2^{\mathrm{sing}}) > 0. \tag{11}$$
Thus, a singular regime can be part of an extremal control. Moreover, the singular set consists of the phase and adjoint variables $(k, v_1, v_2, \psi_1, \psi_2, \psi_3)$ satisfying equalities (10). Since the singular control has order 3 and the strengthened Kopp–Moyer condition (11) is satisfied on the corresponding singular portion, the concatenation of non-singular and singular portions is carried out by chattering. This means that there is an infinite number of switchings between the values 0 and 1 accumulating at the point of concatenation of such portions [11,12].
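For concreteness, the admissibility chain above can be evaluated numerically. The check below is an illustration using the parameter values of Section 5 and the assumption $f(v) = \sqrt{v}$, so that $\dot f(v) = 1/(2\sqrt{v})$ and $\hat v_2 = 1/\mu^2$:

```python
import math

mu, delta, tau = 0.2, 0.01, 1.0  # parameter values from Section 5

# Slope prescribed on the singular arc by (10).
slope_sing = (mu + delta) * (1.0 + 0.5 * tau * delta) ** 2

# v_hat solves f(v) = mu * v; with f(v) = sqrt(v) this gives v_hat = 1/mu**2.
v_hat = 1.0 / mu ** 2
slope_hat = 0.5 / math.sqrt(v_hat)

# Chain of inequalities guaranteeing v_2^sing < v_hat and u_sing in [0, 1].
chain_holds = slope_sing > mu > slope_hat
```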
Lemma 4.
In Problem (2), (5), the optimal control $u^*(t)$ has the form (7). It may contain a singular regime of order 3, whose explicit form is given by the corresponding formulas in (10). The concatenation of non-singular and singular portions is performed using chattering.
Based on the obtained characteristics of the extremal control, we numerically construct a control containing chattering, and then, based on its structure, we construct an approximation in the form of a suboptimal control. For this control, the boundary conditions for the phase trajectory are satisfied, it contains a finite number of switchings between the boundary values 0 and 1, as well as intermediate constant values. Finally, the corresponding value of the objective function differs from its optimal value by some acceptable value.

## 5. Numerical Results

Here, for Problem (2) and (5), we present the results of numerical calculations using BOCOP [13] for the following values of its parameters:
$$f(v_2) = \sqrt{v_2}, \quad \tau = 1.0, \quad \mu = 0.2, \quad \delta = 0.01, \quad T = 100.$$
For these parameter values, it is easy to calculate that
$$k^{\mathrm{sing}} = 25.0, \quad v_2^{\mathrm{sing}} = 5.3947.$$
In Figure 1, the graphs $k * ( t )$, $v 1 * ( t )$, $v 2 * ( t )$, $u * ( t )$ of the corresponding optimal solution are given for the following boundary conditions:
$$k_0 = 0.1, \quad k_T = 21.0, \quad v_{01} = 0.1, \quad v_{02} = 0.1.$$
The optimal value of the objective function is $y * ( T ) = 58.6602$.
Figure 1 shows the behavior of the optimal phase variables $k * ( t )$, $v 1 * ( t )$, $v 2 * ( t )$, and optimal control $u * ( t )$ as functions of time, as well as the qualitative features of the behavior of the trajectory $v 2 * ( k * )$ as a whole and when it is found on a singular set.

## 6. Suboptimal Control

A value of the objective function close to the optimal one can be achieved using a control with a simpler structure of the form:
$$\tilde u(t) = \tilde u(t;\ t_1, t_2, t_3, t_4, \hat u) = \begin{cases} 1, & t \in [0, t_1],\\ 0, & t \in (t_1, t_2],\\ \hat u, & t \in (t_2, t_3],\\ 0, & t \in (t_3, t_4],\\ 1, & t \in (t_4, T], \end{cases} \tag{12}$$
where $t_1$, $t_2$, $t_3$, $t_4$, $\hat u$ are variable parameters.
Let us consider the following extremal problem:
$$J = J\big(\tilde u(\,\cdot\,;\ t_1, t_2, t_3, t_4, \hat u)\big) \to \max_{\substack{0 \le t_1 \le t_2 \le t_3 \le t_4 \le T,\\ \hat u \in [0, 1]}} \tag{13}$$
subject to the following restrictions:
$$\begin{aligned}
J &= \int_0^T e^{-\delta t}\big(1 - \tilde u(t;\ t_1, t_2, t_3, t_4, \hat u)\big)\, f(v_2(t))\, dt,\\
\dot{\tilde k}(t) &= \tilde u(t;\ t_1, t_2, t_3, t_4, \hat u)\, f(v_2(t)) - \mu \tilde k(t),\\
\dot v_1(t) &= 2\tau^{-1}\big(\tilde k(t) - v_1(t)\big), \qquad \dot v_2(t) = 2\tau^{-1}\big(v_1(t) - v_2(t)\big),\\
\tilde k(0) &= k(0) = k_0, \quad \tilde k(T) = k(T), \quad v_1(0) = v_{01}, \quad v_2(0) = v_{02}.
\end{aligned}$$
According to numerical calculations in BOCOP [13], the solution to the extremal problem (13) with the parametrically specified control (12) has the form:
$$t_1 = 16.3129, \quad t_2 = 27.9977, \quad t_3 = 87.06082, \quad t_4 = 87.147, \quad \hat u = 0.621105.$$
Figure 2 shows the graphs of the approximate suboptimal control $\tilde u(t)$ defined by (12) and of the corresponding solution $\tilde k(t)$, for the same boundary conditions, as functions of time. The value of the objective function for this control is $y(T) = 56.9703$.
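As a reproducibility sketch (not the authors' BOCOP setup), control (12) with the parameters above can be simulated directly. The Euler scheme, the step count, and the production function $f(v) = \sqrt{v}$ are assumptions of this sketch, so the resulting objective value only approximates the reported 56.9703:

```python
import numpy as np

# Switching times and intermediate level reported in the text.
t1, t2, t3, t4, u_hat = 16.3129, 27.9977, 87.06082, 87.147, 0.621105
mu, delta, tau, T = 0.2, 0.01, 1.0, 100.0  # parameters from Section 5

def u_tilde(t):
    """Piecewise-constant suboptimal control (12)."""
    if t <= t1:
        return 1.0
    if t <= t2:
        return 0.0
    if t <= t3:
        return u_hat
    if t <= t4:
        return 0.0
    return 1.0

n_steps = 200_000
dt = T / n_steps
k, v1, v2, y = 0.1, 0.1, 0.1, 0.0  # boundary data from Section 5
for i in range(n_steps):
    t = i * dt
    u = u_tilde(t)
    f = np.sqrt(max(v2, 0.0))
    k, v1, v2, y = (k + dt * (u * f - mu * k),
                    v1 + dt * (2.0 / tau) * (k - v1),
                    v2 + dt * (2.0 / tau) * (v1 - v2),
                    y + dt * np.exp(-delta * t) * (1.0 - u) * f)
```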

## 7. Position Controller for the Terminal Control Problem

The numerical solution of a suboptimal control problem (a control that does not contain chattering and guarantees a value of the objective function close to the optimal one), for example, by the method of local variations, requires a control of the first approximation. As such an approximation, one can use a positional controller (i.e., one depending on the phase variables) for the terminal control problem for system (2), found by the method of analytical construction of aggregated controllers described in [14]. We note that the terminal control problem is the problem of transferring the phase state of the considered system from a given initial position to the required final position over some time interval.
To construct it, we introduce an additional phase variable $ξ$ and transform the system (2) to a form that does not contain constraints (4) on the control function:
$$\begin{aligned}
&\dot k(t) = \big(0.5 + \pi^{-1}\arctan(b\xi(t))\big)\, f(v_2(t)) - \mu k(t), \quad k(0) = k_0 > 0, \quad |k(T) - k_T| \le \epsilon,\\
&\dot\xi(t) = z(t), \quad \xi(0) = \xi_0,\\
&\dot v_1(t) = 2\tau^{-1}\big(k(t) - v_1(t)\big), \quad v_1(0) = v_{01} \ge 0,\\
&\dot v_2(t) = 2\tau^{-1}\big(v_1(t) - v_2(t)\big), \quad v_2(0) = v_{02} \ge 0,\\
&\dot y(t) = e^{-\delta t}\Big(1 - \big(0.5 + \pi^{-1}\arctan(b\xi(t))\big)\Big)\, f(v_2(t)), \quad y(0) = 0,
\end{aligned} \tag{14}$$
where T is the non-fixed moment of the end of the process, and $ϵ$, b are positive constants.
Let us choose the value
$$\psi_1 = 0.5 + \pi^{-1}\arctan(b\xi) + z_1(k, v_2) \tag{15}$$
as the first macro-variable. Here, $z 1 ( k , v 2 )$ is the intermediate control.
Then we synthesize control $z ( k , v 1 , v 2 , ξ )$ using a functional equation:
$$\theta_1\dot\psi_1 + \psi_1 = 0. \tag{16}$$
Then, substituting $ψ 1$ from (15) into Formula (16), we obtain an equation containing z:
$$\theta_1\left[\frac{b z}{\pi\big(1 + (b\xi)^2\big)} + \frac{\partial z_1}{\partial k}\dot k + \frac{\partial z_1}{\partial v_2}\dot v_2\right] + \psi_1 = 0. \tag{17}$$
From (17), the control z can be found as
$$z = z(\xi, v_1, v_2, k) = -\frac{\pi\big(1 + (b\xi)^2\big)}{b}\left[\frac{\partial z_1}{\partial k}\Big(\big(0.5 + \pi^{-1}\arctan(b\xi)\big) f(v_2) - \mu k\Big) + \frac{\partial z_1}{\partial v_2}\, 2\tau^{-1}(v_1 - v_2) + \theta_1^{-1}\psi_1\right]. \tag{18}$$
Control (18) transfers the phase vector to the neighborhood of the manifold $ψ 1 = 0$ (15), the motion along which is described by the differential equations:
$$\begin{aligned}
\dot k(t) &= -z_1(t)\, f(v_2(t)) - \mu k(t), \qquad \dot\xi(t) = z(t),\\
\dot v_1(t) &= 2\tau^{-1}\big(k(t) - v_1(t)\big), \qquad \dot v_2(t) = 2\tau^{-1}\big(v_1(t) - v_2(t)\big),\\
\dot y(t) &= e^{-\delta t}\big(1 + z_1(t)\big)\, f(v_2(t)).
\end{aligned}$$
We will find the intermediate control $z 1$ from the equation
$$\theta_2\dot\psi_2 + \psi_2 = 0, \tag{19}$$
by introducing the second macro-variable
$$\psi_2 = k - k_T.$$
Using $ψ 2$ and Equation (19), we find the control
$$z_1 = \frac{1}{f(v_2)}\left(\frac{k - k_T}{\theta_2} - \mu k\right) \tag{20}$$
that transfers the phase vector to the manifold $ψ 2 = 0$. The movement along it is described by the equation:
$$\dot k = -\frac{k - k_T}{\theta_2}. \tag{21}$$
The solution to such an equation has the following property:
$$k(t) \to k_T \quad \text{for} \quad t \to +\infty. \tag{22}$$
Substituting $z 1$ from (20) into the Formula (18), we find the desired control law
$$\begin{aligned}
z = z(\xi, v_1, v_2, k) = -\frac{\pi\big(1 + (b\xi)^2\big)}{b}\bigg[&\frac{1}{f(v_2)}\big(\theta_2^{-1} - \mu\big)\Big(\big(0.5 + \pi^{-1}\arctan(b\xi)\big) f(v_2) - \mu k\Big)\\
&- \frac{\partial f}{\partial v_2}\big(f(v_2)\big)^{-2}\left(\frac{k - k_T}{\theta_2} - \mu k\right) 2\tau^{-1}(v_1 - v_2) + \theta_1^{-1}\psi_1\bigg]. 
\end{aligned} \tag{23}$$
The behavior of the phase variable $k ( t )$ under control (23), starting from a certain finite time moment $T 1 < T$, is determined by the Equation (21) and, thus, we obtain the limiting relationship (22).
Moreover, the phase variable $v 2 ( t )$ satisfies the equation
$$\ddot v_2 + 4\tau^{-1}\dot v_2 + 4\tau^{-2} v_2 = 4\tau^{-2} k.$$
Taking into account the limiting relationship (22) of the phase variable $k ( t )$, starting from a certain time moment $T 2$, $T 2 > T 1$, $T 2 < T$, the variable $v 2 ( t )$ tends to $k ( t )$. The choice of parameters $ϵ$, b, $θ 1$, $θ 2$ affects the values of the time moments $T 1$ and $T 2$.
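The exponential relaxation described by (21) and (22), which the controller enforces on the manifold $\psi_2 = 0$, can be illustrated directly. In the sketch below, $\theta_2$, the horizon, and the step size are arbitrary illustrative values, not taken from the paper:

```python
import math

k_T, theta2 = 21.0, 5.0   # target capital from Section 5; theta2 is illustrative
k0 = 0.1
dt, t_end = 0.01, 100.0

# Explicit-Euler integration of (21): k' = -(k - k_T) / theta2.
k = k0
for _ in range(int(t_end / dt)):
    k += dt * (-(k - k_T) / theta2)

# Closed-form solution of (21) for comparison.
k_exact = k_T + (k0 - k_T) * math.exp(-t_end / theta2)
```

Both values are within rounding of $k_T$, in line with the limiting relationship (22).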
Lemma 5.
For given positive parameters $\mu$, $\xi_0$, $k_0$, $k_T$, $v_{01}$, $v_{02}$, $\delta$, $\tau$, there exist parameters $T_1$, $T_2$, $T$ depending on $\epsilon$, $b$, $\theta_1$, $\theta_2$ for which the control (23) solves the terminal control problem for system (14).
The use of control (23) as the first approximation in the numerical determination of control by the method of local variations made it possible to obtain the final continuous control $u ( t )$, for which the deviation of the objective function from the optimal value is 2%. From the last equation of system (14), it is possible to obtain the value of $y ( T )$ under control (23). Graphs of the solutions $k ( t )$ and $v 2 ( t )$ to system (14) and trajectory $v 2 ( k )$ under control (23) for system parameters from Section 5 are shown in Figure 3.

## 8. Conclusions

In this article, on a given time interval, the Ramsey model was used to describe the change in the fixed assets of a one-sector economy. It included a bounded control function that divided the available funds into investment and consumption. In addition, differential equations were added to this model to reflect the dynamics of the development of the introduced production capacities. For this modified Ramsey model, an optimal control problem was posed, which consisted in maximizing the average consumption over a given period of time. For an analytical description of the properties of the corresponding optimal control, the Pontryagin maximum principle was applied. Situations were found in which this control was a bang-bang function, as well as situations in which it could contain a singular regime concatenated with non-singular bang-bang portions via chattering. Since such a phenomenon as chattering cannot be implemented in real economic processes, an approach was proposed for finding a suboptimal control close to the optimal one in terms of the considered objective function. This approach requires a good initial approximation, for which the article proposed to use a positional terminal control. The corresponding numerical calculations were carried out and showed the effectiveness of the proposed approach.

## Author Contributions

Conceptualization, N.G. and L.L.; methodology, N.G. and L.L.; software, L.L.; validation, N.G. and L.L.; formal analysis, L.L.; investigation, N.G. and L.L.; resources, N.G. and L.L.; data curation, not applicable; writing–original draft preparation, N.G. and L.L.; writing–review and editing, N.G. and L.L.; visualization, N.G. and L.L.; supervision, N.G.; project administration, not applicable; funding acquisition, not applicable. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Kolemaev, V.A. Mathematical Economics; Yuniti-Dana: Moscow, Russia, 2015.
2. Grass, D.; Caulkins, J.P.; Feichtinger, G.; Tragler, G.; Behrens, D.A. Optimal Control of Nonlinear Processes: With Applications in Drugs, Corruption, and Terror; Springer: Berlin/Heidelberg, Germany, 2008.
3. Ramsey, F.P. A mathematical theory of saving. Econ. J. 1928, 38, 543–559.
4. Seierstad, A.; Sydsæter, K. Optimal Control Theory with Economic Applications; North-Holland: Amsterdam, The Netherlands, 1987.
5. Shell, K. Applications of Pontryagin’s maximum principle to economics. In Mathematical Systems Theory and Economics 1; Springer: Berlin, Germany, 1969; pp. 241–292.
6. Spear, S.E.; Young, W. Optimum savings and optimal growth: The Ramsey–Malinvaud–Koopmans nexus. Macroecon. Dyn. 2014, 18, 215–243.
7. Pontryagin, L.S. Ordinary Differential Equations; Addison-Wesley: London, UK, 1962.
8. Pontryagin, L.S.; Boltyanskii, V.G.; Gamkrelidze, R.V.; Mishchenko, E.F. Mathematical Theory of Optimal Processes; John Wiley & Sons: New York, NY, USA, 1962.
9. Vasiliev, F.P. Optimization Methods; Factorial Press: Moscow, Russia, 2002.
10. Afanasiev, V.N.; Kolmanovskii, V.; Nosov, V.R. Mathematical Theory of Control Systems Design; Springer: Dordrecht, The Netherlands, 1996.
11. Schättler, H.; Ledzewicz, U. Optimal Control for Mathematical Models of Cancer Therapies: An Application of Geometric Methods; Springer: New York, NY, USA, 2015.
12. Zelikin, M.I.; Borisov, V.F. Theory of Chattering Control: With Applications to Astronautics, Robotics, Economics and Engineering; Birkhäuser: Boston, MA, USA, 1994.
13. Bonnans, F.; Martinon, P.; Giorgi, D.; Grélard, V.; Maindrault, S.; Tissot, O.; Liu, J. BOCOP 2.2.0—User Guide. Available online: http://bocop.org (accessed on 27 June 2019).
14. Kolesnikov, A.; Veselov, G.; Kolesnikov, A.; Monti, A.; Ponci, F.; Santi, E.; Dougal, R. Synergetic synthesis of Dc-Dc boost converter controllers: Theory and experimental analysis. In Proceedings of the 17th Annual IEEE Applied Power Electronics Conference and Exposition, Dallas, TX, USA, 10–14 March 2002; pp. 409–415.
Figure 1. Upper row: graphs of the optimal solutions $k * ( t )$, $v 1 * ( t )$ and $v 2 * ( t )$; lower row: the graph of the optimal control $u * ( t )$, the phase portrait $v 2 * ( k * )$, and the phase portrait $v 2 * ( k * )$ in the neighborhood of the singular arc.
Figure 2. Graph of the approximating control $u ˜ ( t )$ (left) and graph of the corresponding solution $k ˜ ( t )$ (right).
Figure 3. Graphs of solutions $k ( t )$ and $v 2 ( t )$ to system (14) and trajectory $v 2 ( k )$ under control (23).

## Share and Cite

Grigorenko, N.; Luk’yanova, L. Optimal Control and Positional Controllability in a One-Sector Economy. Games 2021, 12, 11. https://doi.org/10.3390/g12010011