Time-Optimal Motions of a Mechanical System with Viscous Friction

Optimal control is a critical tool for robotic mechanical systems, facilitating the precise manipulation of dynamic processes. These processes are described by differential equations governed by a control function, leading to a time-optimal problem with a bilinear structure. Our study applies the classical approach, complemented by Pontryagin's Maximum Principle (PMP), to this inverse optimal problem. The objective is to derive an exact piecewise control function that manages the trajectory while accounting for viscous friction. Our simulations demonstrate that the proposed control law markedly diminishes the oscillations induced by the boundary conditions. This research also delineates the reachability set and determines the minimal time required for the process. The findings include an exact analytical solution of the stated control problem.


Introduction
This paper explores time-optimal control of systems subject to viscous friction, a topic of importance across domains ranging from robotics to economic systems. It particularly examines the application of Pontryagin's Maximum Principle (PMP) to provide the fundamental understanding necessary for mastering time-optimal control strategies [1][2][3][4]. The literature cited spans theoretical frameworks and practical applications, highlighting both the complexities of, and strategies for, optimizing control processes to enhance time efficiency.
Incorporating the damping term, denoted as µ > 0, significantly increases the complexity of solving the optimal problem and of understanding the dynamics of the system. Despite this complexity, including the damping term is vital for developing methods to experimentally determine modal characteristics such as eigenmodes, eigenfrequencies, and generalized masses. The cited references [5][6][7] specifically address the behavior of the damped system for computational and, more importantly, experimental analysis purposes. It is well known that transient simulation of systems with friction requires excessive computational power due to the nonlinear constitutive laws and the high stiffnesses involved. In Ref. [8], the authors proposed control laws for friction dampers which maximize energy dissipation in an instantaneous sense by modulating the normal force at the friction interface. Besides optimization of the mechanical design or various types of passive damping treatments, active structural vibration control concepts are efficient means to reduce unwanted vibrations [9]. The conclusion from this broad survey is that the system model and the friction model are fundamentally coupled and cannot be chosen independently.
Viscous dampers work by converting mechanical energy of motion (kinetic energy) into heat through viscous fluids. As part of the damping process, they oppose relative motion through fluid resistance, effectively controlling the speed and motion of connected components. Viscous dampers are essential for managing dynamic systems where control of movement and stability is necessary, making them indispensable in many high-stakes environments such as automotive engineering and structural design. These dampers are increasingly sophisticated, incorporating technologies like electrorheological and magnetorheological fluids, which allow for variable stiffness and damping properties. This adaptability enhances their ability to mitigate vibrations across various earthquake intensities [10]. By integrating dampers into the structural design using mathematical models, engineers can significantly improve a building's ability to absorb and dissipate energy during earthquakes. This includes detailed discussions of the calculation of damping coefficients and their impact on the building's overall dynamic response to seismic events [11]. It is noteworthy that optimizing this type of damper (friction damper) remains a relatively unexplored subject worldwide, which highlights the innovative nature of our paper and serves as the driving motivation for our research.
Within the sphere of optimal control, the time-varying harmonic oscillator garners particular interest for its ability to reach designated energy levels effectively. Systems that are linear in their variables and have a bounded control u(t) on the right-hand side of (1) often resort to a bang-bang control strategy. The oscillations in such a system differ significantly both from the natural oscillations of a system described by an equation with constant coefficients and from the forced oscillations driven by an external force that depends only on time. This approach toggles the system's excitation between two extremes at precisely calculated switching instants, which mark the moments of control adjustment. These instants are represented by a switching curve in the state space, directing the oscillator's management for any given state combination (position x(t) and velocity ẋ(t)). An extensive examination of time-optimality for both undamped and damped harmonic oscillators, including simulations that illustrate their practicality, is given in references [12,13].
A complex nonlinear system under state feedback control with a time delay, corresponding to two coupled nonlinear oscillators with parametric excitation, is investigated by an asymptotic perturbation method based on Fourier expansion and time rescaling in [14]. Given that the present investigation focuses on optimal control of the coefficient ω(t) with u(t) = 0, the problem assumes a bilinear form. In engineering, particularly in nonlinear dynamics, parametric excitation is used to control vibrations in complex mechanical systems. The pendulum with periodically varying length, which is also treated as a simple model of a child's swing, is investigated in [15]. Simulations were performed in [16] on a double obstacle pendulum system to investigate the effects of various parameters, including the positions and number of obstacle pins and the initial release angles, on the pendulum's motion. The pendulum with vertically oscillating support and the pendulum with periodically varying length were considered as two forced dissipative pendulum systems, with a view to comparing their behavior [17]. The varying-length pendulum is studied in [18] to address the damping of its oscillations using a conveniently generated Coriolis force. By applying the homotopy analysis method to the governing equation of the pendulum, a closed-form approximate solution was obtained in [19].
The damping results in prolonged oscillations until equilibrium is reached. Adjusting the control coefficient ω(t) can expedite the damping process. Time-optimal control problems, known for their inverse character, are prone to instability [20], which challenges traditional analytical approaches and necessitates regularization of solutions. To complement complex analytical solutions, numerical methods are employed, offering a tangible presentation of results. This research derives an analytical solution for the control function ω(t) and the optimal duration of the process across a wide range of parameters. It also introduces bang-bang relay-type controls and defines the system's reachability set.
Moreover, the paper underscores the critical role of time-optimal control in contemporary industrial and technological settings, stressing the need for robust solutions where time efficiency is pivotal to the sustainability of robotic systems [21].
To summarize, we address a time-optimal control problem that must primarily be solved analytically. The problem can, of course, be solved numerically, but this is difficult over a wide range of parameters. This research examines whether the time-optimal process exhibits periodicity and whether the control function is symmetric over its period, with findings confirming the former and refuting the latter. This focus on the control coefficient ω(t) opens new avenues of inquiry, especially concerning the periodicity of the optimal process and the nature of the control function, both confirming and challenging established assumptions.
The rest of this paper is organized as follows. Section 2 contains the formulation of the optimal control problem. Section 3 contains a preliminary study of the controlled system and reveals some of its properties. Section 4 treats the local properties of the problem and applies the maximum principle (PMP) to a single semi-oscillation. Section 5 establishes the global properties of the optimal solution. Section 6 presents the main result of the study, a step-by-step optimization algorithm for solving the problem, together with numerical examples and a discussion of the results obtained. Section 7 summarizes the paper.

Optimal Control Problem Statement
Let us consider the optimal control problem of a mechanical system (2), where x(t) is the coordinate and ω(t) is the unknown frequency of the external controlling action, subject to determination. The minimum in the problem is sought in the class of piecewise-continuous functions ω(t). Here, µ is the coefficient of viscous friction, with 0 < µ < 2ω_0. If this condition is violated, subsequent analysis is also possible, but we have not pursued it, as we believe it is of limited technical interest. The case A < 0 reduces to a change of sign of the variable x(t).
In this setting, the problem is not symmetric with respect to time inversion because of friction.

General Properties of the Controlled System (2)
With any permissible control ω(t), the trajectory x(t) of the controlled system (2) oscillates around the origin with successive intervals of monotonic increase and decrease (Figure 1). The amplitude and duration of each oscillation can vary, depending on the chosen (typically discontinuous) control function ω(t). Indeed, if the conditions ẋ(t*) = 0 and x(t*) ≠ 0 are satisfied at some moment t* ∈ [0, T], the differential equation of problem (2) gives ẍ(t) = −µẋ(t) − ω²(t)x(t). Given that the functions x(t) and ẋ(t) are continuous, the sign of the second derivative will be opposite to the sign of x(t) in a small vicinity of the point t*, except, possibly, at a finite number of discontinuity points of the function ω(t). This implies that, for x(t*) > 0, the trajectory has a local maximum at t*, and, for x(t*) < 0, a local minimum.
From the boundary conditions, the speeds ẋ(0) and ẋ(T) at the initial and final moments of time equal zero, a situation that occurs only at the extreme points of the oscillatory process. These moments in time are denoted as t_i (Figure 1), and the time intervals t ∈ [t_i, t_{i+1}] are referred to as semi-oscillations. It follows that the optimal trajectory comprises a whole number N of semi-oscillations, N being even when B > 0 and odd when B < 0 (with A > 0). To investigate the total optimal control problem, let us divide the trajectory into separate semi-oscillations and first solve the problem for one semi-oscillation t ∈ [t_i, t_{i+1}]. We denote the initial values A_i = x(t_i). Utilizing the linearity and homogeneity of the differential equation allows the variable x(t) to be normalized by dividing it by its initial value A_i. It is also taken into account that the coefficient of friction µ is independent of time t, meaning the initial moment can be taken as zero. This approach transforms all subproblems (3) for i = 0, . . ., N − 1 into a unified auxiliary mini-problem of optimal control (4). Given problem (2) and knowing the numbers t_i and A_i, the optimal time in problem (4) is exactly T_i = t_{i+1} − t_i, and the optimal trajectories and control in the auxiliary problem (4) coincide with the optimal trajectories and control of problem (2) over the interval [t_i, t_{i+1}] [1]. It will be demonstrated below that the optimal process breaks down into individual equal time intervals, calculated from analytical formulas.
Furthermore, for convenience in solving (4), the notations x and ω will be used instead of x_i and ω_i.
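The oscillation pattern described in this section can be checked numerically. The sketch below is our own illustrative code (not from the paper): it integrates ẍ + µẋ + ω²(t)x = 0 with a fixed-step RK4 scheme under a randomly chosen piecewise-constant control ω(t) ∈ {ω_0, 1} and records x at every sign change of the velocity:

```python
import random

def extrema_under_random_control(mu, w0, T, dt=1e-3, seed=1):
    """Integrate x'' + mu*x' + w(t)^2*x = 0, x(0) = 1, v(0) = 0, under a
    random piecewise-constant control w(t) in {w0, 1}; return the list of
    x-values at the points where the velocity changes sign (the extrema)."""
    rng = random.Random(seed)
    switches = sorted(rng.uniform(0, T) for _ in range(6))
    def w(t):                       # control toggles at the random switches
        return w0 if sum(1 for s in switches if s <= t) % 2 else 1.0
    def acc(t, x, v):
        return -mu * v - w(t) ** 2 * x
    x, v, t, extrema = 1.0, 0.0, 0.0, []
    while t < T:
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + dt/2*k1v, acc(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
        nx = x + dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        nv = v + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        if v * nv < 0:              # velocity sign change -> local extremum
            extrema.append(nx)
        x, v, t = nx, nv, t + dt
    return extrema
```

For µ = 0.05 and ω_0 = 0.75 (the parameter values of Figure 1), the recorded extrema alternate in sign, in agreement with the alternation of maxima and minima established above.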

Solution of the Optimal Control Problem for a Single Semi-Oscillation
In the previous section, it was demonstrated how the initial problem (2) can be resolved by first solving the auxiliary problem (5) and finding the dependency of the optimal time T on the terminal value C.
Here, the condition ẋ(t) < 0 denotes the monotonicity of the trajectory x(t), which corresponds to one semi-oscillation.
First, the question of controllability will be examined, and the range of values for C for which problem (5) has a solution will be defined.
The following notations will be introduced. The largest value x_max = |x(T)| can be attained with the control (6) because, under such a control, acceleration is maximized when x(t) ≥ 0 and deceleration is minimized when x(t) < 0.
Similarly, the smallest value x_min = |x(T)| can be reached analogously with the control (7). Solving the differential equation with the boundary conditions of system (5) under control (6) or (7) yields the estimates (8) and (9). To apply the PMP [1], we introduce the notation ẋ(t) = v(t) and rewrite (5) in the form of a system of first-order differential equations (10). Now, we let the terminal value C satisfy condition (8), which ensures the controllability of the system.
We write the Pontryagin function H and denote its upper boundary. If x(t), v(t), and ω(t) constitute a solution to the optimal control problem (10), then the following three conditions are satisfied: (I) There exist continuous functions ψ_1(t) and ψ_2(t), never simultaneously zero, which solve the adjoint system (11). (II) For any t ∈ [0, T], the maximum condition (12) is satisfied. (III) For any t ∈ [0, T], a specific inequality holds. From condition (12) for the maximum of the function H, the optimal control is obtained in the form (13). Let us show that the case of singular control in Formula (13), specifically when ψ_2(t)x(t) ≡ 0 over a time interval of non-zero length, is impossible, assuming the opposite. This means considering the existence of a time interval during which ψ_2(t)x(t) ≡ 0. In such an interval, determining the value of the optimal control from the maximum condition would not be feasible.
Given the continuity of the functions ψ_2(t) and x(t), either ψ_2(t) ≡ 0 over some interval or x(t) ≡ 0 over a certain time period.
In the scenario where x(t) ≡ 0, it follows that v(t) = ẋ(t) ≡ 0. Such a case is impossible, as the controlled system cannot remain in the zero state under any control value, given that the control-dependent term of the differential equation of system (5) would also vanish.
This reasoning leads to the following statement: Lemma 1. Optimal control ω(t) is limited to only two values, 1 and ω_0, dictated by the sign of the product ψ_2(t)x(t). Disregarding the case where this product equals zero is justified by the fact that the control value at a single point, or at a finite number of points, has no impact on the trajectory of the controlled system. Now, we consider condition (III). It is of greatest interest at the values t = 0 and t = T.
At t = 0, the condition takes the form (14); at t = T, the form (15). Given the boundary conditions v(0) = v(T) = 0, and considering that the control value ω(t) is always positive, with x(0) > 0 and x(T) < 0, the additional conditions (16) are derived from (14) and (15). Now, we explore the potential form of the optimal control and the number of switches. It is already known that the value of the optimal control is determined by the sign of the product ψ_2(t)x(t).
The trajectory x(t), due to its monotonic nature, crosses zero only once; this moment in time is denoted as τ.
Thus, the control ω(t) may only change its value at the point τ and at points where the sign of the adjoint variable ψ_2(t) changes. If both x(t) and ψ_2(t) change sign simultaneously at τ, the control value remains unchanged.
Firstly, consider an interval of time where ω(t) ≡ 1. Then, the general solution x(t) of the differential equation of system (4), together with ψ_2(t) of the adjoint system, takes the form (17), where the constants C_1, C_2, C_3, and C_4 are determined from the boundary conditions on the interval of constant control. The value of the adjoint variable ψ_1(t) is of no interest, as it does not enter Formula (13). Now, consider an interval of time during which ω(t) ≡ ω_0. Similarly, one obtains the form (18), where the constants D_1, D_2, D_3, and D_4 are likewise determined from the boundary conditions.
It is now shown that the adjoint variable ψ_2(t) vanishes at most once within the interval [0, τ] and at most once within [τ, T]. For instance, let ψ_2(ξ_1) = ψ_2(ξ_2) = 0, where 0 ≤ ξ_1 < ξ_2 ≤ τ. Then, within the interval [ξ_1, ξ_2], the control value does not change, and this leads to a contradiction with Formulas (17) and (18), because the distance between zeros of the function ψ_2(t) (for example, π/β_1 for Formula (17)) exceeds the maximum length of an interval on which x(t) is of constant sign and monotonic. Thus, the following is proven: Lemma 2. In problem (5), the optimal control can have no more than one switch in each of the intervals [0, τ] and [τ, T].
The function ψ_2(t) has a continuous derivative (as the right-hand side of the second equation of the adjoint system (11) is continuous) and vanishes no more than twice within the interval [0, T]; moreover, these zeros cannot both lie within the same subinterval [0, τ] or [τ, T]. This leads to 10 different cases (Figure 2) of sign changes of the function ψ_2(t) over the interval [0, T]. Dashed gray lines in the figure indicate scenarios that contradict the PMP, while solid red lines indicate cases in which no contradiction with the PMP is found. A detailed analysis of these cases is provided. If ψ_2(τ) = 0, then ψ_2(t) ≠ 0 for t ≠ τ, leading to cases (1) and (2). In case (1), a constant control equal to 1 is maintained throughout the entire time interval. Case (2) is not possible, as ψ_2(T) < 0 does not satisfy condition (16).
After analyzing cases (1)–(10), the following statement holds: Lemma 3. The optimal (bang-bang) control in problem (5) can be of one of the five types represented in Figure 3. Note that all types of control satisfying the maximum principle (illustrated in Figure 3) differ in the length of the segment where the control value equals ω_0 and in its placement relative to the point τ.
Introducing the parameter s = ξ − τ, the values of τ and T can be determined uniquely from the equation and three of the boundary conditions (excluding the condition x(T) = C) of problem (5) by substituting the corresponding control. This yields the end time T(s) and the terminal value C(s) = x(T(s)) as functions of the unknown parameter s.
Control type 3 (illustrated in Figure 3) corresponds to s = 0, and control type 1 to the smallest value s_min = −(π − φ_2)/β_2 < 0. For control type 5, the largest value s_max = φ_2/β_2 is obtained as the longest possible duration of motion under the constant control ω(t) ≡ ω_0, that is, s_max = T − τ, with the moments τ and T derived from Formula (18) and the conditions x(τ) = 0, ẋ(T) = 0, x(T) < 0, T > τ, minimizing T − τ. Similarly, from Formula (18), the smallest value s_min = −τ is obtained. Controls of types 2 and 4 correspond to intermediate values of s within the intervals (s_min, 0) and (0, s_max).
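Assuming β_2 = sqrt(ω_0² − µ²/4) and φ_2 = arctan(2β_2/µ) (consistent with the notation introduced, but elided, in Section 3), the bounds of the parameter s can be evaluated directly; the short sketch below uses the parameters of Example 1 (µ = 0.1, ω_0 = 0.5) purely for illustration:

```python
import math

mu, w0 = 0.1, 0.5                      # parameters of Example 1 (assumed)
b2 = math.sqrt(w0 ** 2 - mu ** 2 / 4)  # beta_2: damped frequency for w = w0
phi2 = math.atan(2 * b2 / mu)          # phi_2 = arctan(2*beta_2/mu) (assumed)

s_max = phi2 / b2                      # type-5 control: longest w0 segment
s_min = -(math.pi - phi2) / b2         # type-1 control: s_min = -tau
```

For these values, s_max ≈ 2.96 and s_min ≈ −3.36, i.e. the type-1 trajectory spends the whole interval [0, τ] ≈ [0, 3.36] under ω ≡ ω_0.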
Knowing the switching moments of the control and having the analytical solutions (17) and (18), the end time T and the terminal trajectory value x(T) can be explicitly calculated as functions of the parameter s.
Let us consider s ∈ [0, φ_2/β_2]; then, for t ∈ [0, τ), ω(t) ≡ 1, and x(t) follows from Formula (17) and the initial conditions. Subsequently, for t ∈ [τ, τ + s), ω(t) ≡ ω_0, and from Formula (18) and the continuity of ẋ(t) at t = τ, one similarly obtains x(t) = −(1/β_2) e^{−µt/2} sin(β_2(t − τ)). Finally, for t ∈ [τ + s, T], ω(t) ≡ 1, and from Formula (17) and the continuity of ẋ(t) at t = τ + s, expression (19) is found. From Formula (19) and the condition ẋ(T) = 0, the end moment of time (20) is obtained. Simplifying Expressions (19) and (20), ultimately, for s ∈ [0, s_max], one obtains Formula (21); conducting analogous calculations for the case s ∈ [s_min, 0), one obtains Formula (22). We note that Formulas (21) and (22) parametrically define a curve T(C) depicting the dependency of the end time on the terminal value C under controls that satisfy the maximum principle. The parametric formulation allows the first two derivatives of T(C) to be calculated as functions of the variable C, and the properties of the function T(C) listed in Lemma 4 are thus established. Investigating these properties, it was found that each permissible terminal value C corresponds to a unique control satisfying the PMP. Therefore, the following statement holds: Lemma 5. The function T(C), defined by Formulas (21) and (22), determines the optimal time in problem (5).
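The parametric dependency (T(s), C(s)) on the branch s ∈ [0, s_max] can also be traced numerically as a cross-check of Formulas (21) and (22). The sketch below is our own illustrative code (the function name and the fixed-step RK4 scheme with sign-change event detection are not from the paper):

```python
def branch_point(mu, w0, s, dt=1e-4):
    """One semi-oscillation under a type 3-5 control: w = 1 until the zero
    crossing tau, then w = w0 for a duration s (or until v vanishes), then
    w = 1 until v = 0.  Returns (T, C) = (end time, terminal value x(T))."""
    def step(x, v, w):
        def acc(x, v): return -mu * v - w * w * x
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + dt/2*k1v, acc(x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v)
        return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
                v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))
    x, v, t = 1.0, 0.0, 0.0
    while x > 0:                    # phase 1: w = 1 until x crosses zero
        x, v = step(x, v, 1.0); t += dt
    tau = t
    while t < tau + s and v < 0:    # phase 2: w = w0 for a duration s
        x, v = step(x, v, w0); t += dt
    while v < 0:                    # phase 3: w = 1 until the extremum
        x, v = step(x, v, 1.0); t += dt
    return t, x
```

For µ = 0.1 and ω_0 = 0.5, `branch_point` with s = 0 returns T ≈ 3.15 and C ≈ −0.85 (the switchless type-3 control), while a large s reproduces the extreme value C ≈ −1.59; both agree with Example 1 below.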

Example 1
Consider an example with the given parameters µ = 0.1 and ω_0 = 0.5. It is calculated that x_min ≈ 0.39 and x_max ≈ 1.59. From (21), it is found that x* = −C(0) ≈ 0.85. Figure 4 illustrates the graph of the function T(C), demonstrating how the optimal time varies with the terminal value C within the admissible range. It has been demonstrated that each value of s corresponds unequivocally to a specific optimal control and optimal trajectory, leading to a particular terminal point C(s). Different optimal trajectories, corresponding to the various types of controls, are presented in Figure 5. Controls of types 1 and 5 correspond to trajectories reaching the extreme points of the reachability set. Control of type 2 corresponds to the upper branch of the T(C) curve (the left branch in Figure 4). Control of type 3, which has no switches, corresponds to the trajectory with the minimum possible time. Control of type 4 corresponds to the lower branch of the T(C) curve (the right branch in Figure 4).
The trajectories are constructed for the given parameter values, but the general character of the figure does not change with different parameter values.
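The values x_min and x_max of this example can be reproduced by direct simulation of the extreme controls (6) and (7). The sketch below is our own illustrative code; it integrates one semi-oscillation with the control switching at the zero crossing of x:

```python
def extreme_amplitude(mu, w0, maximize, dt=1e-4):
    """|x(T)| after one semi-oscillation from x(0) = 1, v(0) = 0 under the
    extreme bang-bang controls: for x_max take w = 1 while x >= 0 and
    w = w0 while x < 0; swap the two values for x_min."""
    x, v = 1.0, 0.0
    while not (x < 0 and v >= 0):   # stop at the opposite extremum
        if maximize:
            w = 1.0 if x >= 0 else w0
        else:
            w = w0 if x >= 0 else 1.0
        def acc(x, v): return -mu * v - w * w * x
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + dt/2*k1v, acc(x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v)
        x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    return abs(x)
```

With µ = 0.1 and ω_0 = 0.5, this yields approximately 1.59 and 0.39, matching the values of the example.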

Solution to the General Timing Optimal Problem
We apply the results of the previous section to solve the original problem (2). Let us first explore the question of controllability and determine under what boundary conditions A and B the system is controllable.
Using the estimate (8), we obtain an estimate for x(T) depending on the number of semi-oscillations N. Thus, the following lemma holds: Lemma 6. The system is controllable if and only if there exists an even natural number N (for B > 0) or an odd natural number N (for B < 0) such that condition (24) holds. Since φ_2 ∈ (0, π/2) and ω_0 < 1, it follows that x_min < 1 and x_min^N → 0 as N → +∞. Therefore, the system will be controllable for any non-zero values of A and B provided that x_max > 1.
Utilizing Formula (9), this inequality can be expressed in the form (25). Having resolved the question of controllability, we now return to the original optimal control problem (2). Given x(t), the solution of the optimal control problem (2), let us consider two consecutive semi-oscillations t ∈ [t_{i−1}, t_{i+1}]. This segment of the optimal trajectory satisfies the boundary conditions of the original problem and must itself be optimal. Using the results of the previous section and normalizing the variable x(t), the time for this segment can be expressed as the sum of the optimal times of the two semi-oscillations. We fix A_{i+1} and A_{i−1} (noting that they have the same sign) and find the minimum of this expression over the variable A_i. Denoting D = A_{i+1}/A_{i−1} and introducing the new variable q = A_i/A_{i−1}, the time t_{i+1} − t_{i−1} can be expressed by the function g(q) = T(q) + T(D/q), where T(C) is parametrically defined by Formulas (21) and (22). Let q and D/q belong to the domain of definition of the function T(C). We find the first derivative of the function g(q); it is easy to notice that this derivative vanishes at the point q* = −√D. We then compute the second derivative (26) at the point q*. Given that −√D ∈ (−x_max, −x*], the function T(C) decreases and is concave down; therefore, all terms in this expression are positive, and the point found is a point of minimum. At the boundary points of the domain of definition, the function T(C) is not differentiable, but, in this case, there exists a unique control (either (6) or (7)) leading the controlled system to its extreme position. For the remaining values of −√D, the positivity of expression (26) follows from lengthy algebraic manipulations using the parametric setting of the function T(C) via Formulas (21) and (22).
We have shown that the numbers A_{i−1}, A_i, and A_{i+1} form a geometric progression. Applying this reasoning along the entire trajectory, we obtain the following statement: Lemma 7. The numbers A_i of the optimal process satisfy condition (27), where the number of semi-oscillations is the smallest N satisfying Lemma 6.

Since the ratio A_{i+1}/A_i is constant along the optimal trajectory, the optimal control on each segment [t_i, t_{i+1}] is the same. Hence, if the number of semi-oscillations required to reach the end point is greater than one, the optimal control is a periodic function whose period is one semi-oscillation.

Main Result
Thus, necessary and sufficient conditions have been determined for the optimal control problem to have a solution. The controlled process is oscillatory in nature, and a formula has been obtained for the number of semi-oscillations. It has been proven that the amplitudes of the semi-oscillations form a geometric progression. We now combine all the statements proven above (Lemmas 1–7) into a step-by-step algorithm for solving the original problem (2).

1.
Determine whether the problem has a solution and find the number of semi-oscillations N from condition (24). Note that the problem has a solution for any A > 0 and any B ≠ 0 provided that condition (25) is satisfied.

2.
Calculate the denominator of the geometric progression C* = A_{i+1}/A_i using Formula (27). This value determines how much the amplitude changes over one semi-oscillation.

3.
Using the parametric setting (21) and (22) of the function T(C) and the value C* found in the previous step, calculate the value of the parameter s* as the solution of the equation C(s*) = C* and the duration of one semi-oscillation T* = T(C*). The optimal time in problem (2) is then T = NT*.

4.
The value s* = ξ − τ uniquely determines the type of optimal control for one semi-oscillation (Figure 3) and allows the number and position of the switching points within one semi-oscillation to be determined.
In the case of s* > 0, we have optimal control of type 4 or 5. Within one semi-oscillation, the moment of the first switch is τ = (π − φ_1)/β_1; then, if s* < s_max, the second switching moment is ξ = τ + s*. In the case of s* = 0, there is no switching moment (optimal control of type 3). In the case of s* < 0, the optimal control is of type 1 or 2. Here, the second switching moment τ = T* − φ_1/β_1 is calculated first, and then the first switching moment ξ = τ + s* is found. Subsequently, the control values repeat periodically on each semi-oscillation. Thus, we obtain the optimal control and the optimal trajectory over the entire segment t ∈ [0, T].
Example 2
For the same parameters µ = 0.1 and ω_0 = 0.5, Equation (9) gives x_min ≈ 0.39 and x_max ≈ 1.59, and, from (21), x* = −C(0) ≈ 0.85. Since x_max > 1, the problem has a solution for any boundary conditions with these values of µ and ω_0.
From (24), it is determined that the end point is reachable within N = 3 semi-oscillations. Further, according to Formula (27), C* = −(1/4)^{1/3} ≈ −0.63. This value of C* corresponds to Formula (22) and an optimal control of type 2, from which s* ≈ −1.09, T* ≈ 3.35, and T = 3T* ≈ 10.05. The second switching moment is τ ≈ 1.83 and the first switching moment is ξ ≈ 0.74. The optimal trajectory and phase portrait are shown in Figure 6.
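The numbers of this example can be reproduced end-to-end with a numerical sketch of the algorithm. This is our own illustrative code: the boundary data A = 1, B = −1/4 are an assumption inferred from C*³ = B/A, the bounds x_min and x_max are taken from the example, and the type-2 switching time is found by bisection on a simulated semi-oscillation instead of the closed-form Formulas (21) and (22):

```python
def type2_semi_osc(mu, w0, xi, dt=1e-4):
    """One semi-oscillation under a type-2 control: w = 1 on [0, xi),
    w = w0 from xi until the zero crossing tau, and w = 1 afterwards.
    Returns (tau, T, C) for x(0) = 1, v(0) = 0."""
    def step(x, v, w):
        def acc(x, v): return -mu * v - w * w * x
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + dt/2*k1v, acc(x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v)
        return (x + dt/6*(k1x + 2*k2x + 2*k3x + k4x),
                v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))
    x, v, t = 1.0, 0.0, 0.0
    while t < xi and x > 0:          # phase 1: w = 1
        x, v = step(x, v, 1.0); t += dt
    while x > 0:                     # phase 2: w = w0 until x = 0
        x, v = step(x, v, w0); t += dt
    tau = t
    while v < 0:                     # phase 3: w = 1 until v = 0
        x, v = step(x, v, 1.0); t += dt
    return tau, t, x

mu, w0 = 0.1, 0.5
A, B = 1.0, -0.25                    # assumed boundary data for Example 2
x_min, x_max = 0.39, 1.59            # reachable amplitude range (Example 1)

# Steps 1-2: number of semi-oscillations N and progression ratio C*.
ratio = abs(B) / A
N = 2 if B > 0 else 1                # parity is fixed by the sign of B
while not (x_min ** N <= ratio <= x_max ** N):
    N += 2                           # keep the parity, enlarge N
C_star = -ratio ** (1.0 / N)         # ~ -0.63

# Steps 3-4: bisection on the switching time xi so that C(xi) = C*.
lo, hi = 0.0, 1.7                    # |C| grows with xi on this branch
for _ in range(40):
    mid = (lo + hi) / 2
    if type2_semi_osc(mu, w0, mid, dt=1e-3)[2] > C_star:
        lo = mid                     # |C| still too small: switch later
    else:
        hi = mid
xi = (lo + hi) / 2
tau, T_star, C = type2_semi_osc(mu, w0, xi)
s_star = xi - tau                    # ~ -1.09;  T = N * T_star ~ 10.05
```

Running this reproduces N = 3, C* ≈ −0.63, s* ≈ −1.09, T* ≈ 3.35, and T = 3T* ≈ 10.05, in agreement with the values above.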

Example 3
Using the obtained result on the periodicity of the optimal control, we can construct the reachability set and the optimal trajectories for the case when the end point is reachable within no more than three semi-oscillations, for µ = 0.2 and ω_0 = 0.75 (Figure 7). Since the optimal control is a periodic function, the optimal trajectories are first constructed for several selected values of the parameter s on one semi-oscillation using the algorithm given above; the control on one semi-oscillation is then repeated on the next two semi-oscillations. For this example, three semi-oscillations are considered, but the process can easily be extended to any number of semi-oscillations.
It is important to note the discontinuity of the optimal-time curve T(C) in the case of more than one semi-oscillation: a small change in the boundary conditions can lead to a significant change in the optimal time by increasing the number of semi-oscillations of the optimal trajectory. We also note that, for these parameter values, condition (25) is not satisfied, and the optimal control problem is therefore not solvable for arbitrary boundary conditions. Figure 7 shows that all optimal trajectories are damped oscillations and that, although the optimal control is a periodic function, the period differs between trajectories.

Conclusions
In conclusion, this study presents an insightful examination of a bilinear optimal control problem, with a particular emphasis on the coefficient modulation.Through rigorous analysis, it has been established that optimal control is non-singular, the optimal process exhibits periodic characteristics with oscillatory behavior, and the amplitudes form a geometric progression.Furthermore, it was determined that while the optimal process itself is indeed periodic, the control function does not retain symmetry within a single period.
The implications of these findings extend to the broader realm of control theory and its applications in engineering and physics, offering a new perspective on the nature of bilinear control systems.The periodicity of the optimal process suggests potential for efficient energy usage and system stabilization in various applications, from mechanical systems to electrical circuits.
However, the lack of symmetry in the control function within the period underscores the complexity of bilinear control systems and indicates that intuition alone may not be sufficient to predict the system behavior.Future research may explore the nuances of this asymmetry and its impact on system performance.
As far as the zero-speed boundary conditions are concerned, our study focuses on mechanisms that perform full oscillations; therefore, such boundary conditions can always be used for the fastest damping.
The analytical solution obtained in the paper allows for the precise determination of the switching moments, as well as the amplitudes and the total optimal time of the process.This paper contributes to the ongoing discourse in control theory, providing a foundation for subsequent studies to build upon.The results underscore the necessity for a nuanced approach to control strategy development, especially in systems where time-optimality is a paramount consideration.The methodologies and findings herein have practical implications for designing more efficient and robust control systems in the future.
Author Contributions: Conceptualization, V.T.; investigation, D.K.All authors have read and agreed to the published version of the manuscript.

Figure 1 .
Figure 1. An example of the trajectory of the controlled system (2) under the action of a bang-bang control ω(t) for the case µ = 0.05, ω_0 = 0.75.

Figure 2 .
Figure 2. Cases (1)–(10) of sign changes of the function ψ_2(t). Dashed gray lines represent situations that do not satisfy the PMP; solid red lines represent cases that do not contradict the PMP.

Figure 3 .
Figure 3. All possible variants of optimal control encountered in problem (10).

Lemma 4.
Formulas (21) and (22): 1. Uniquely determine the function T(C), defined for C ∈ [−x_max, −x_min]. 2. The function T(C) is continuous for C ∈ [−x_max, −x_min]. 3. The function T(C) is differentiable for C ∈ (−x_max, −x_min); at the endpoints of the interval the derivative is infinite, while at the point corresponding to the parameter s = 0 the derivative equals zero. Let x* = −C(0). 4. The function T(C) decreases on the interval C ∈ [−x_max, −x*] and increases on the interval C ∈ [−x*, −x_min]. 5. The second derivative of the function T(C) is negative on the intervals C ∈ (−x_max, −x*) ∪ (−x*, −x_min), which signifies that T(C) is concave down for C ∈ [−x_max, −x_min].
Remark 1. The constancy of the sign of the second derivative was established by symbolic computation in Wolfram Mathematica.

Figure 5 .
Figure 5. The reachability set of optimal trajectories and the control switching points for various values of C in the case of a single oscillation; µ = 0.1, ω_0 = 0.5.

Figure 7 .
Figure 7. The reachability set of optimal trajectories and the control switching points for various values of C in the case of no more than three semi-oscillations; µ = 0.2, ω_0 = 0.75.