1. Introduction
Minimizing the duration of transient processes in mechanical and oscillatory systems is one of the fundamental problems of modern control theory. Classical optimal control methods, developed within the framework of Pontryagin’s maximum principle and dynamic programming [1,2,3,4], provide a rigorous theoretical foundation for studying the structure of optimal controls. However, their direct application to nonlinear oscillators with bounded parametric control is highly nontrivial. Existing solutions to optimal frequency control problems largely fall into two categories: rigorous analytical solutions for linearized models and numerical approximation methods for nonlinear systems.
A large class of contemporary applied problems is associated with swing suppression and fast load transportation in crane systems. In recent years, a wide range of highly efficient command-shaping and active vibration suppression methods has been developed. In particular, [5,6] propose optimization-based input-shaping and MPC-based control algorithms for overhead cranes. Enhanced swing-suppression strategies, such as Negative Zero-Vibration schemes and low-pass-filter-based methods, are studied in [7,8]. Phase-planning and optimal anti-sway methods for tower and container cranes are presented in [9,10,11]. These results show that controlling the frequency or parameters of the oscillatory dynamics is a key mechanism for accelerating motion while maintaining stability.
In parallel, there has been active development in the analytical and semi-analytical study of nonlinear oscillators. Methods for estimating periods and frequencies, models with fractal–fractional operators, and various refinements of He’s frequency formulation are analyzed in [12,13,14]. Robust and optimal control methods for fractional nonlinear models are presented in [15], while modern variational and Hamiltonian-based computational techniques are discussed in [16]. Despite their high accuracy in describing the dynamics, these approaches generally do not yield a fully analytical solution to the minimum-time problem for nonlinear pendulums with a bounded controllable frequency.
Significant progress has also been achieved in dynamic programming and adaptive dynamic programming (ADP), including value-iteration and constrained-cost schemes for nonlinear systems [17,18]. Applications of ADP to systems with state delays are presented in [19]. Classical works on ADP and reinforcement-learning-based optimal control [20,21] provide powerful numerical tools for solving constrained optimal control problems; however, they do not supply explicit analytical switching conditions either.
Modern optimal control techniques are particularly in demand in power systems, mechatronics, and space engineering. Applications of Neural ODE methods to frequency stabilization in power systems are investigated in [22]; nonlinear frequency regulation in microgrids is studied in [23]; and swing suppression in flexible space structures is addressed in [24]. These directions demonstrate a growing interest in controlling oscillator-like parameters in complex real-world systems.
For linear harmonic systems with parametric excitation and viscous friction, rigorous analytical solutions for the structure of optimal controls have been obtained in [25,26]. While linear theories provide clear switching structures, they fail to capture the amplitude-dependent period variations inherent in nonlinear pendulums. Conversely, numerical methods handle nonlinearity but often lack the insight of an explicit control structure.
Thus, there is a gap between, on the one hand, the highly developed swing-suppression methods for quasi-linear models [5,6,7,8], analytical and semi-empirical techniques for nonlinear oscillators [12,13,14,15,16], and numerical ADP/MPC-based methods [17,18,21,22], and, on the other hand, the absence of a strict analytical solution to the minimum-time control problem for a nonlinear pendulum when the frequency acts as the control parameter.
The aim of this work is to help fill this gap. We consider a nonlinear pendulum-type oscillator whose natural frequency, varying within a prescribed range, serves as the control input, and we study the problem of minimizing the transfer time between two rest states. Based on Pontryagin’s maximum principle and Bellman’s principle of optimality, we rigorously decompose the motion into semi-oscillations, show that the optimal control on each semi-oscillation is bang–bang with at most two switchings, derive analytical formulas for the semi-oscillation duration and switching conditions, and finally reduce the global problem to a finite-dimensional optimization problem.
The structure of the paper is as follows. Section 2 formulates the problem statement. Section 3 introduces the auxiliary problem for a single semi-oscillation. Section 4 derives the structure of the optimal control on a single semi-oscillation via Pontryagin’s maximum principle. Section 5 investigates the conditions for the existence of a solution, and Section 6 provides analytical expressions for the optimal time. Sections 7 and 8 construct the global minimum-time trajectory using Bellman’s principle and present numerical results together with a comparison to the linear system. Section 9 concludes and discusses possible directions for further development of the model.
2. Problem Statement
We consider a nonlinear pendulum-type oscillator with a time-varying natural frequency ω(t) serving as the control input. Its dynamics are described by

ẍ(t) + ω²(t) sin x(t) = 0, t ∈ [0, T], (1)

where x(t) is the angular displacement (state coordinate), ẋ(t) is the angular velocity, ẍ(t) is the angular acceleration, and T is the (unknown) final time of motion.
The control input is the frequency function ω(t), which is subject to the bounds

0 < ω₁ ≤ ω(t) ≤ ω₂, (2)

where ω₁ and ω₂ are fixed constants.
The initial and terminal states are specified by

x(0) = x₀, ẋ(0) = 0, x(T) = x_T, ẋ(T) = 0, (3)

where x₀ ≠ 0 and x_T ≠ 0. The point x = 0, ẋ = 0 is an equilibrium of system (1) for any admissible frequency ω(t), since the control enters the equation multiplicatively. Hence a direct transfer through the equilibrium state (0, 0) is impossible and is not considered.
The goal of the study is to transfer the system from the initial state to the terminal state, defined in (3), in minimal time, subject to the constraint (2). The quantities to be determined are the optimal terminal time T and the optimal control ω(t). Collecting all conditions together, we arrive at the following minimum-time optimal control problem [1]:

T → min, ẍ + ω²(t) sin x = 0, ω₁ ≤ ω(t) ≤ ω₂,
x(0) = x₀, ẋ(0) = 0, x(T) = x_T, ẋ(T) = 0. (4)
In what follows, we restrict attention to solutions satisfying

|x(t)| < π, t ∈ [0, T].

This assumption rules out phase wrapping (slipping), so that the equilibrium position remains at x = 0. Allowing phase slipping/rotations would require handling trajectories that cross the separatrix and may involve qualitatively different dynamics; this would require extending the analysis.
To solve problem (4), we adopt an approach analogous to that used in [25,26] for optimal control of a linear oscillator. Exploiting the oscillatory nature of the optimal trajectory, the optimal control problem is decomposed into a sequence of similar subproblems, each corresponding to a single semi-oscillation of the optimal trajectory. First, using Pontryagin’s maximum principle [1], we solve the problem for one semi-oscillation. Then, by applying the dynamic programming method [4], we obtain a solution to the global problem.
3. Optimal Control on a Single Semi-Oscillation
Exploiting the oscillatory nature of the trajectories of system (4) for any admissible control and the symmetry with respect to the origin in the phase plane, we decompose the global motion into separate semi-oscillations. By Bellman’s principle of optimality, each semi-oscillation of a globally optimal trajectory must itself be optimal for an appropriately posed two-point boundary-value problem.
In this section, we introduce such an auxiliary problem corresponding to a single semi-oscillation and formulate it as a minimum-time optimal control problem.
We consider a motion that starts at rest at a positive angular displacement x(0) = A and ends at rest at a negative angular displacement x(T) = −B, with strictly decreasing angle along the way. This corresponds to one semi-oscillation of the pendulum-like system and leads to the following auxiliary minimum-time optimal control problem:

T → min, ẍ + ω²(t) sin x = 0, ω₁ ≤ ω(t) ≤ ω₂,
x(0) = A, ẋ(0) = 0, x(T) = −B, ẋ(T) = 0, ẋ(t) < 0 for t ∈ (0, T), (5)

with A > 0 and B > 0. The monotonicity condition ensures that x(t) is strictly decreasing on (0, T) and indeed describes a single semi-oscillation from the amplitude A to the amplitude B. In other words, the trajectory passes from a right-hand rest position to a left-hand rest position without any additional turning points.
The boundary conditions in (5) prescribe both the coordinate and the velocity at the endpoints (here, zero velocity). They play a crucial role for two reasons:
- They guarantee that individual semi-oscillations can be smoothly concatenated into a single global trajectory of problem (4), with continuity of both state and velocity at the junction points.
- They allow us to invoke Bellman’s principle of optimality for the original problem (4): if a trajectory solves (4) optimally, then each of its semi-oscillations must be optimal for the corresponding auxiliary problem (5).
Thus, understanding the optimal control for the auxiliary problem (5) is a central step in solving the global minimum-time problem (4). In the next section, we apply Pontryagin’s maximum principle to (5), derive the structure of the optimal control on a single semi-oscillation, and determine the admissible bang–bang switching patterns.
4. Application of Pontryagin’s Maximum Principle to the Single Semi-Oscillation Problem
We apply Pontryagin’s maximum principle [1,2] (PMP) to problem (5). PMP is a fundamental necessary condition for optimality in control theory. It reduces the problem of finding an optimal control to the problem of maximizing a specific function, known as the Hamiltonian, at every instant of time. First we introduce the substitution v = ẋ, which allows us to express (5) in the form of a first-order system:

ẋ = v, v̇ = −ω²(t) sin x, (6)

with the boundary conditions and constraints of (5). We write the Pontryagin function (Hamiltonian):

H = ψ₁ v − ψ₂ ω² sin x,

and denote its upper bound over the admissible controls:

M = max over ω ∈ [ω₁, ω₂] of H.

According to PMP, if x(t), v(t), and ω(t) constitute a solution to the optimal control problem (6), then the following three conditions are satisfied:
- (I) There exists a continuous adjoint pair (ψ₁(t), ψ₂(t)), not identically zero (i.e., ψ₁ and ψ₂ do not vanish simultaneously for all t), satisfying the adjoint dynamics:

ψ̇₁ = ψ₂ ω² cos x, ψ̇₂ = −ψ₁. (7)
- (II) For every t ∈ [0, T], the control ω(t) maximizes the Hamiltonian, i.e.:

H(x(t), v(t), ψ₁(t), ψ₂(t), ω(t)) = M. (8)
- (III) For all t ∈ [0, T], the maximized Hamiltonian is non-negative: M ≥ 0.
From condition (8) the optimal control is obtained in the form:

ω(t) = ω₂ if ψ₂(t) sin x(t) < 0, ω(t) = ω₁ if ψ₂(t) sin x(t) > 0. (9)
The maximum condition in (9) could, in principle, yield a singular arc characterized by ψ₂(t) sin x(t) ≡ 0 on a time interval of positive length. However, we will prove that this singular case is not admissible. Arguing by contradiction, we suppose such an interval exists. A direct consequence is that the optimal control ω(t) becomes indeterminate from the maximum principle alone on this interval.
The continuity of ψ₂ and x implies that the identity ψ₂ sin x ≡ 0 can only arise if either ψ₂ is identically zero or x is identically zero (since |x| < π) over some time interval.
If ψ₂ ≡ 0, then its derivative must also vanish: ψ̇₂ ≡ 0. Substituting this into the second adjoint equation from system (7) forces ψ₁ ≡ 0. Consequently, both adjoint variables vanish identically, which violates the non-triviality condition (I) of the maximum principle.
The case x ≡ 0 implies ẋ ≡ 0, which directly contradicts the strict negativity condition ẋ(t) < 0 on the interval (0, T), thus rendering it impossible.
Consequently, we arrive at the following proposition: the optimal control is bang–bang, taking only the values ω₁ or ω₂, as determined by the sign of the switching function ψ₂ sin x. We may disregard the singular case where this product equals zero, since the control’s value at isolated points (or at any finite set of points) affects neither the integral properties nor the resulting trajectory of the system.
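This endpoint structure can be checked by brute force. In the sketch below the Hamiltonian is taken in the form H = ψ₁v − ψ₂ω² sin x (the overall sign convention is an assumption of this illustration; flipping it merely swaps the two cases), and the maximizer over ω ∈ [ω₁, ω₂] is confirmed to be the boundary value selected by the sign of ψ₂ sin x:

```python
import numpy as np

# For H(omega) = psi1*v - psi2*omega**2*sin(x), the omega**2-coefficient is
# c = -psi2*sin(x), so H is monotone in omega**2 and the maximizer over
# [omega1, omega2] is omega2 when c > 0 and omega1 when c < 0.
omega1, omega2 = 1.0, 2.0
omegas = np.linspace(omega1, omega2, 1001)
rng = np.random.default_rng(0)
for _ in range(200):
    psi1, psi2, v, x = rng.uniform(-1.0, 1.0, size=4) * np.array([1, 1, 1, 3])
    if abs(psi2 * np.sin(x)) < 1e-9:        # skip (measure-zero) singular draws
        continue
    H = psi1 * v - psi2 * omegas ** 2 * np.sin(x)
    expected = omega2 if psi2 * np.sin(x) < 0 else omega1
    assert omegas[np.argmax(H)] == expected
```

Because H is affine in ω², interior values of ω are never optimal, which is the bang–bang property stated above.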
Finally, let us examine condition (III), which is particularly informative at the endpoints t = 0 and t = T. At t = 0 this condition takes the form:

max over ω ∈ [ω₁, ω₂] of ( −ψ₂(0) ω² sin A ) ≥ 0, (10)

and similarly at t = T:

max over ω ∈ [ω₁, ω₂] of ( −ψ₂(T) ω² sin(−B) ) ≥ 0. (11)

The specified boundary conditions, ẋ(0) = ẋ(T) = 0, along with the positivity of the control ω(t) and the state constraints 0 < A < π and 0 < B < π, lead to further auxiliary constraints. These are obtained directly from Equations (10) and (11):

ψ₂(0) ≤ 0, ψ₂(T) ≥ 0. (12)
We now investigate the number of possible switching points of the optimal control. From Formula (9) we see that a switching is only possible at points where either ψ₂ = 0 or sin x = 0. On a single semi-oscillation, the trajectory x(t) crosses zero exactly once. We therefore study the function ψ₂(t) and determine the maximal possible number of its zeros on the interval [0, T]. Combining system (5) with the adjoint system (7), we obtain the following system of two differential equations, which governs the adjoint variable ψ₂:

ẍ + ω² sin x = 0, ψ̈₂ + ω² (cos x) ψ₂ = 0. (13)
We will show that, on any interval where x keeps a constant sign, the function ψ₂ can have at most one zero. Assume the contrary, and suppose that there exist at least two points at which ψ₂ vanishes (see Figure 1). In the absence of switchings, the control is constant, and the solution of the first equation of system (13) is a periodic function. Consider an interval of length equal to half of this period, bounded by two consecutive zeros of x (for the auxiliary optimal control problem, we are in fact interested in an even shorter interval, corresponding to one quarter of the period). Note that the second equation of system (13) can also be regarded as an oscillation equation with a time-varying frequency determined by x(t). Accordingly, the minimal distance between consecutive zeros of ψ₂ is achieved at the maximal frequency, that is, at the maximal value of the coefficient ω² cos x (see the Sturm–Picone comparison theorem [27]), which corresponds to the minimal possible value of |x|. This implies that, in order to attain the minimal distance between zeros of ψ₂, its first zero must coincide with a zero of x, as shown in Figure 1. This configuration corresponds to the following Cauchy problem:

ẍ + ω² sin x = 0, x(0) = 0, ẋ(0) = a,
ψ̈₂ + ω² (cos x) ψ₂ = 0, ψ₂(0) = 0, ψ̇₂(0) = 1. (14)

Note that the placement of the common zero at t = 0 relies on the symmetry of the first equation with respect to the point x = 0 and the evenness of the function cos x, while the condition ψ̇₂(0) = 1 follows from the possibility of normalizing the adjoint variable.
We also note that we are interested in an interval where the control keeps a constant value. In this case, by rescaling time one can eliminate ω from the equations, so we may assume ω = 1. Hence system (14) depends only on the single unknown parameter a. We now study this dependence.
Let t_x denote the first nonzero instant such that x(t_x) = 0, and let t_ψ denote the first nonzero instant such that ψ₂(t_ψ) = 0 (see Figure 1). We compute these quantities as functions of a by numerically solving the Cauchy problem (14) and plotting the resulting dependencies.
The numerical results in Figure 2 show that the configuration depicted in Figure 1 is impossible, since the distance between consecutive zeros of ψ₂ is always larger than the distance between consecutive zeros of x. Moreover, both distances tend to π as a → 0, because in this limit the dynamics approach those of the corresponding linear system, for which these distances are equal [25,26]. For larger values of a (specifically a ≥ 2, the separatrix level) one observes a transition from oscillatory to rotational motion, which is incompatible with the constraint |x| < π.
Thus, we have shown that in problem (5) the optimal control can have at most one switching in each region where x keeps a constant sign, and therefore at most three switchings in total (two at zeros of ψ₂ and one at the zero of x).
Furthermore, the case of three switchings is also impossible: in that situation we would have ψ₂(0) ≠ 0 and ψ₂(T) ≠ 0, and from conditions (9) and (12) it follows that the optimal control near the initial and terminal segments must take the same value ω₂, which contradicts the odd number of switchings.
As a result, taking into account condition (12), we conclude that the optimal control on a single semi-oscillation can only have one of the patterns shown in Figure 3. The remaining three control types, which do not satisfy all the conditions of the maximum principle, are depicted in Figure 4. Note that this selection of admissible control patterns coincides with that of the problem studied in [25,26], since the signs of sin x and x coincide on the interval (−π, π).
We have determined the structure of the optimal control on a single semi-oscillation. It remains to identify for which boundary conditions problem (5) admits a solution, and which control type corresponds to the given boundary values.
5. Existence of Solution for One Semi-Oscillation
Let us consider the question of the existence of a solution for one semi-oscillation, i.e., for which boundary values there exists a control that transfers the system from the initial state to the final state in one semi-oscillation.
We transform the first equation of problem (5), taking into account that the control ω(t) is a piecewise constant function. On an interval where the control is constant, we multiply the equation by ẋ:

ẋ ẍ + ω² (sin x) ẋ = 0.

Noting that ẋ ẍ = d(ẋ²/2)/dt and (sin x) ẋ = −d(cos x)/dt, we obtain the equation:

d/dt ( ẋ²/2 − ω² cos x ) = 0.

Integrating the last equation with an integration constant h, we have:

ẋ²/2 − ω² cos x = h. (15)

Integrating the obtained equation once more on any interval [τ₁, τ₂] where the control is constant, we arrive at an expression for time in terms of an elliptic integral (see [28]):

τ₂ − τ₁ = −∫ from x(τ₁) to x(τ₂) of dx / √(2(h + ω² cos x)). (16)

Here we have a minus sign because the function x(t) is decreasing. Note also that the function x(t) enters through the upper limit of the elliptic integral, which cannot be evaluated analytically.
The constant h in (15) is determined from the differentiability condition of the function x(t) on [0, T] and the satisfaction of the boundary conditions.
Now, based on what has been said, we determine the missing parameter values (the constant h in (15) and the switching points) for the optimal control and optimal trajectory. Let us first consider control type 4 from Figure 3, which also includes type 3 and type 5 as limiting cases of the switching instants. In this case, the optimal control ω(t) has the form (17): it is piecewise constant, taking one of the boundary values ω₁, ω₂ on each of the intervals [0, t₁), [t₁, t₂), [t₂, T], with distinct values on adjacent intervals, where the switching points t₁ and t₂ are unknown; it is only known that 0 ≤ t₁ ≤ t₂ ≤ T.
Next, using the form of control (17) and the boundary conditions from (5), we determine the values of the constant h in Equation (15) on each interval where the control is constant. On the interval [0, t₁], the conditions x(0) = A and ẋ(0) = 0 give h = −ω²(0) cos A. On the interval [t₂, T], the conditions x(T) = −B and ẋ(T) = 0 give h = −ω²(T) cos B. On the interval [t₁, t₂], using relation (15) together with the continuity and differentiability of the function x(t) at the switching points, we obtain system (18), which couples the remaining constant h with the switching coordinates x(t₁) and x(t₂).
Introducing notation for these switching coordinates, we write the solution of system (18) in the form (19). Note that the second switching moment is defined implicitly by the second equation of system (19).
Similarly, we consider control type 2 from Figure 3, which includes control type 1 as a limiting case. We obtain Equations (20) for the constant h and the switching point t₁.
We express B from (20) and (19) as a function of the switching coordinate. Before the zero crossing, the trajectory x(t) decreases from A to 0 and the resulting value of B is monotonically increasing in this coordinate; after the zero crossing, |x(t)| increases from 0 to B and the dependence is again monotonically increasing (see Figure 5).
From the monotonicity of this dependence, we obtain that for a fixed value of A, the minimum value of B is achieved on control type 1 from Figure 3 and equals

B_min(A) = arccos( 1 − (ω₁²/ω₂²)(1 − cos A) ).

The maximum value of B is achieved on control type 5 from Figure 3 and equals

B_max(A) = arccos( 1 − (ω₂²/ω₁²)(1 − cos A) ).

The case (ω₂²/ω₁²)(1 − cos A) ≥ 2, in which the latter expression is no longer defined, corresponds to a situation where phase slip is possible and is not considered in this work. Thus, the optimal control problem for one semi-oscillation has a solution in the case:

B_min(A) ≤ B ≤ B_max(A). (21)
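Under the energy relation (15), such bounds arise from a single switch performed exactly at the zero crossing, where the kinetic energy is rescaled by the frequency jump. The sketch below (a direct simulation with illustrative names; the arccos expressions are the bounds as reconstructed here) checks this by simulation:

```python
import numpy as np

def amplitude_after_switch(A, om_first, om_second, dt=2e-4):
    """Integrate x'' + omega^2*sin(x) = 0 from rest at A > 0, applying om_first
    while x > 0 and om_second after the zero crossing; return the amplitude
    |x| at the next turning point (where the velocity vanishes)."""
    def f(s):
        x, v = s
        om = om_first if x > 0 else om_second
        return np.array([v, -om ** 2 * np.sin(x)])
    s = np.array([A, 0.0])
    while True:
        prev = s
        k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        if prev[1] < 0 <= s[1]:             # turning point reached
            return abs(prev[0])

om1, om2, A = 1.0, 2.0, 1.0
B_lo = np.arccos(1 - (om1 / om2) ** 2 * (1 - np.cos(A)))
B_hi = np.arccos(1 - (om2 / om1) ** 2 * (1 - np.cos(A)))
assert abs(amplitude_after_switch(A, om1, om2) - B_lo) < 1e-3   # slow, then fast
assert abs(amplitude_after_switch(A, om2, om1) - B_hi) < 1e-3   # fast, then slow
```

Switching from the low to the high frequency at the crossing shrinks the amplitude, while the opposite order grows it, which matches the role of the extreme control types.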
Figure 6 shows examples of the domains of parameter values A and B satisfying condition (21) for different values of the frequency ratio ω₂/ω₁.
6. Solution of the Optimal Control Problem for One Semi-Oscillation
In the previous section, constraints on the boundary conditions were obtained under which the system is controllable for one semi-oscillation. From the monotonicity properties established there, we obtain that the type of optimal control and the switching moments are uniquely determined by the boundary conditions.
For the case B ≥ A we have control type 3, 4, or 5 from Figure 3. In this case, from Formulas (16) and (19), we obtain the solution of the optimal control problem (5): the optimal control, with switching moments determined by Formula (22), and the time T, determined by Formula (24). Note that the switching moment is defined implicitly by the last equation in (22), and the optimal trajectory x(t) can be found, for example, by numerical integration of the differential equation in (4).
For the case B < A we have control type 1 or 2 from Figure 3. From Formulas (16) and (20), we obtain the solution of problem (5), where T is the optimal time, determined by Formula (26), together with an explicit formula for the optimal control.
The structure of the optimal control for auxiliary problem (5) is thus determined by the position of B relative to A, B_min(A), and B_max(A):
Type 2 for B_min(A) ≤ B < A.
Type 3 for B = A.
Type 4 for A < B < B_max(A).
Type 5 for B = B_max(A).
Otherwise, the problem has no solution.
Figure 7 presents a graph of the optimal time function for one semi-oscillation, given by Formulas (24) and (26), for all values of A and B satisfying condition (21).
7. Solution to the Main Optimal Control Problem
Let x(t) be an arbitrary trajectory (not necessarily optimal) that fulfills the equation and boundary conditions of the main problem (4) and is composed of an unknown number n of semi-oscillations (Figure 8). The time moments at which the derivative ẋ becomes zero are denoted by 0 = t₀ < t₁ < … < tₙ = T, with the corresponding amplitudes given by Aₖ = |x(tₖ)|, k = 0, …, n.
According to the Bellman optimality principle, on each semi-oscillation interval [tₖ₋₁, tₖ] the trajectory must be optimal, that is, a solution to problem (5). Otherwise, the trajectory over [tₖ₋₁, tₖ] could be replaced with the solution to auxiliary problem (5), which would reduce the total time. We can write the expression for the total time as the sum of the optimal times for each semi-oscillation, using Formulas (24) and (26) from the previous section:

T = T₁(A₀, A₁) + T₁(A₁, A₂) + … + T₁(Aₙ₋₁, Aₙ), (28)

where T₁(A, B) denotes the optimal time of one semi-oscillation between the amplitudes A and B.
Thus, the problem reduces to an optimization where the objective is to find the number of semi-oscillations n and the intermediate amplitude values Aₖ (k = 1, …, n − 1) that yield the minimal total time, while simultaneously satisfying the constraints

B_min(Aₖ₋₁) ≤ Aₖ ≤ B_max(Aₖ₋₁), k = 1, …, n, (29)

which guarantee the existence of a trajectory on each of the semi-oscillations (see Formula (21)).
Thus, the optimal control problem (4) is reduced to the finite-dimensional minimization problem (28) and (29). Owing to the complexity of the resulting expressions, this minimization problem is addressed using a numerical approach based on dynamic programming.
Let us fix the initial state x₀ and solve the problem for various terminal states x_T. We introduce the so-called Bellman function W(x_T), which is equal to the optimal time in problem (4). To find this function, we will apply the dynamic programming method and consider the following iterative process. Let us define the function

W₀(x_T) = 0 for x_T = x₀, and W₀(x_T) = +∞ otherwise.

This initial function defines the optimal time for 0 semi-oscillations. Next, for k ≥ 1, we define the iterative process by the formula

Wₖ(x_T) = min( Wₖ₋₁(x_T), min over admissible A of [ Wₖ₋₁(A) + T₁(A, x_T) ] ), (30)

where the inner minimum is taken over amplitudes A from which the final semi-oscillation to x_T is feasible in the sense of (21). The function Wₖ defines the optimal time in problem (4) in no more than k semi-oscillations.
Consequently, the iterative scheme (30) allows us to determine the global minimum time for transferring the oscillator from a fixed initial state x₀ to any admissible terminal state x_T. By continuing the iterations, we identify the optimal number of semi-oscillations and the corresponding sequence of intermediate amplitudes, thereby fully solving the finite-dimensional reduction (28) and (29) of the original optimal control problem.
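The iterative scheme can be prototyped on a coarse amplitude grid. In the sketch below the exact semi-oscillation time and the feasibility condition (21) are replaced by simple hypothetical stand-ins (a constant step time and a geometric amplitude band; all names are illustrative), purely to exercise the recursion (30):

```python
import numpy as np

def dp_min_time(x0, grid, t_semi, feasible, n_iter=8):
    """Value iteration in the spirit of scheme (30):
    W_{k+1}(B) = min( W_k(B), min_A [ W_k(A) + t_semi(A, B) ] )
    over amplitudes A from which the semi-oscillation A -> B is feasible."""
    W = np.full(len(grid), np.inf)
    W[np.argmin(np.abs(grid - x0))] = 0.0      # zero semi-oscillations
    history = [W.copy()]
    for _ in range(n_iter):
        W_new = W.copy()
        for j, B in enumerate(grid):
            for i, A in enumerate(grid):
                if np.isfinite(W[i]) and feasible(A, B):
                    W_new[j] = min(W_new[j], W[i] + t_semi(A, B))
        W = W_new
        history.append(W.copy())
    return W, history

# Hypothetical stand-ins (NOT the paper's formulas): a constant step time and
# a geometric feasibility band B/A in [omega1/omega2, omega2/omega1].
omega1, omega2 = 1.0, 2.0
t_semi = lambda A, B: np.pi / omega1
feasible = lambda A, B: omega1 / omega2 <= B / A <= omega2 / omega1
grid = np.linspace(0.1, 2.5, 60)
W, hist = dp_min_time(1.0, grid, t_semi, feasible)
assert W[np.argmin(np.abs(grid - 1.0))] == 0.0
for Wa, Wb in zip(hist, hist[1:]):
    assert np.all(Wb <= Wa)                    # iterations only improve W
```

Replacing the stand-ins with the elliptic-integral time from (24) and (26) and the bounds from (21) yields the actual computation reported in the next section.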
8. Numerical Calculations and Comparison with the Linear Case
We consider the numerical implementation of the iterative method defined by Formula (30). The calculations were performed using the Python 3.11 programming language and the standard libraries NumPy 2.3 and Matplotlib 3.8.0. The average computation time for solving the finite-dimensional optimization problem on a standard desktop computer was less than 1 min. This high efficiency is achieved because the semi-oscillation durations are computed via fast elliptic-integral libraries, avoiding the need for computationally expensive numerical integration of differential equations during the optimization phase. Fixed values of the frequency bounds ω₁ and ω₂ were used, along with a fixed initial state x₀. The results of the first ten iterations are shown in Figure 9. The values of the function Wₖ were approximated by its values at the nodes of a uniform grid on the admissible interval of terminal states.
Using the dynamic programming method, we can also construct a family of optimal trajectories emanating from a given initial point x₀ and terminating at all possible final values x_T, for a prescribed maximum number of semi-oscillations. This is because, at each step of the iterative scheme (30), in addition to the value of the Bellman function, the minimization procedure actually also yields the optimal control (its type and switching times) on the current k-th semi-oscillation.
The computational results for at most three semi-oscillations are shown in Figure 10 (trajectories as functions of time) and Figure 11 (phase portrait of the trajectories). Here, the segments of the trajectories are highlighted in red or blue according to the active control value, ω₁ or ω₂. In the phase portrait (Figure 11), arrows additionally indicate the direction of motion corresponding to increasing time t.
These figures illustrate the overall behavior of the optimal trajectories and the regions of constant control in the phase space.
Next, in order to study in more detail the properties of the optimal control in problem (4), we consider three additional examples for prescribed initial and terminal states.
In the first example we choose particular values of ω₁, ω₂, x₀, and x_T. The optimal control and the corresponding optimal trajectory are shown in Figure 12. Note that the types of control on the first and second semi-oscillations do not coincide. On the first semi-oscillation we have a type 2 control, while on the second we have a type 4 control (see Figure 3). The amplitudes of the semi-oscillations also vary non-monotonically. Here we observe a substantial difference from the linear case [25,26], where the control was a periodic function and the amplitudes of the semi-oscillations formed a geometric progression.
In the second example we choose another set of values of ω₁, ω₂, x₀, and x_T. The optimal control and the corresponding optimal trajectory are shown in Figure 13. In this case, for relatively small values of the variable x we have sin x ≈ x, and the optimal control, although not strictly periodic, is close to periodic.
In the last example we choose a third set of values of ω₁, ω₂, x₀, and x_T. The optimal control and the corresponding optimal trajectory are shown in Figure 14. In this case, for relatively large values of the variable x, the optimal control deviates strongly from periodic behavior. There is a noticeable difference in the durations of individual semi-oscillations.
9. Conclusions
In this work, we solved the minimum-time optimal control problem for a nonlinear pendulum-type oscillator, where the control parameter is its natural frequency, constrained to the interval [ω₁, ω₂]. The objective was to transfer the system from one arbitrary rest state to another in the shortest possible time.
By applying Pontryagin’s maximum principle and Bellman’s principle of optimality, we decomposed the original problem into a sequence of similar subproblems corresponding to single semi-oscillations. For each such subproblem, it was rigorously shown that the optimal control is of relay (bang–bang) type and contains no more than two switchings. This, in turn, reduces the dynamics on each semi-oscillation to three segments with constant control.
A key analytical result is the derivation of expressions for the switching times and the total duration of a semi-oscillation in terms of elliptic integrals. This made it possible to reduce the original infinite-dimensional optimal control problem to a finite-dimensional minimization problem, namely the search for an optimal sequence of intermediate semi-oscillation amplitudes. The resulting finite-dimensional problem was solved numerically using dynamic programming, which enabled us to construct the Bellman function (the optimal-time function) and, simultaneously, to recover the optimal control in the original problem.
Numerical examples confirmed the effectiveness of the proposed approach. A particularly important outcome is the demonstration of qualitative differences from the analogous linear problem (where sin x is replaced by x). In the nonlinear case, the optimal control is generally non-periodic: as the computations show, the duration of semi-oscillations and the control structure (switching type) may vary irregularly with the current amplitude, reflecting the intrinsic nonlinearity of the pendulum. From a numerical perspective, it is natural to compare the proposed dynamic programming scheme with state-of-the-art direct optimal control and NMPC solvers [29,30,31].
Overall, the paper provides a structurally explicit and computationally efficient solution to the minimum-time frequency control problem for a nonlinear oscillator, combining PMP-based switching analysis with elliptic-integral timing and dynamic programming for multi-semi-oscillation transfers. This work lays a theoretical foundation for further research. The most promising directions include:
Incorporating dissipative forces into the model, primarily Coulomb and viscous friction, which will complicate the Hamiltonian but bring the model significantly closer to real physical systems.
Introducing constraints on the rate of change of the control (slew-rate constraints), i.e., bounds on |ω̇(t)|, which is more realistic from an engineering point of view.
Extending the proposed approach to systems with multiple degrees of freedom, such as spherical pendulums or models of robotic manipulators.