Article

A Novel Prescribed-Time Convergence Acceleration Algorithm with Time Rescaling

1 College of Mathematics and System Science, Xinjiang University, Urumqi 830047, China
2 School of Mathematics and Statistics, YiLi Normal University, Yining 835000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(2), 251; https://doi.org/10.3390/math13020251
Submission received: 31 October 2024 / Revised: 19 December 2024 / Accepted: 28 December 2024 / Published: 13 January 2025

Abstract

In machine learning, processing large datasets is unavoidable, and one important approach is to design algorithms that converge to the optimal solution of the underlying optimization problem. Most existing acceleration algorithms exhibit only asymptotic convergence. In order to ensure that the optimization problem converges to the optimal solution within a prescribed time, a novel prescribed-time convergence acceleration algorithm with time rescaling is presented in this paper. Two prescribed-time acceleration algorithms are constructed by introducing time rescaling, and they are applied to unconstrained optimization problems and to optimization problems with equality constraints. Several key theorems are given, and the convergence of the acceleration algorithms is proven using the Lyapunov function method. Finally, we provide numerical simulations to verify the effectiveness and rationality of our theoretical results.

1. Introduction

Machine learning stands as a predominant approach to addressing numerous artificial intelligence challenges in the present day. It has a wide range of applications across various domains, including but not limited to computer vision, natural language processing, secure communication, and search technologies. With the development of the Internet, dealing with huge datasets has become a topic of concern for scholars. As a result, many requirements are placed on the convergence speed of machine learning, and gradient descent-based algorithms naturally attract attention.
The origins of the earliest accelerated algorithms [1] can be traced to the heavy-ball method proposed by Polyak in 1964, which achieved a local linear convergence rate through spectral analysis. Since global convergence was difficult to guarantee for the heavy-ball method, Nesterov later proposed the Nesterov accelerated gradient (NAG) method [2] using estimate sequences in 1983, which reduced the complexity of the classical gradient descent method and achieved the worst-case convergence rate of $O(1/k^2)$ for minimizing smooth convex functions. To further advance the development of acceleration algorithms, Nesterov proposed methods with a convergence rate of $O(1/k^2)$ for a class of unconstrained smooth convex minimization problems [3]. A universal method for developing optimal algorithms aimed at minimizing smooth convex functions was presented in [4]. Thereafter, the heavy-ball method and the accelerated gradient method attracted the attention of numerous scholars. In 1994, Pierro et al. [5] proposed a method to speed up iterative algorithms for solving symmetric linear complementarity problems. At the same time, Arihiro et al. [6] proposed an enhancement to the error backpropagation algorithm, widely used in multilayer neural networks, by incorporating prediction to improve its speed. However, these methods did not garner significant interest within the machine learning community. It was not until Beck and Teboulle [7] introduced the accelerated proximal gradient (APG) method in 2009, aimed at solving composite optimization problems, including sparse and low-rank models, that the machine learning community began to take significant notice. This method was an extension of the work in [4] and was simpler than that in [8]. As it happens, sparse and low-rank models are common in machine learning, and the APG method has been widely adopted in the field.
However, one of the drawbacks of the methods based on Nesterov’s method was that they exhibited an oscillatory behavior, which could seriously slow down their convergence speed. Many scholars have made extensive attempts to address this problem. B. O’Donoghue and E. Candès [9] introduced a function restart strategy and a gradient restart strategy to enhance the convergence speed of Nesterov’s method. Further, Nguyen et al. [10] proposed the accelerated residual method, which could be regarded as a finite-difference approximation of the second-order ODE system. The method was shown to be superior to Nesterov’s method and was extended to a large class of accelerated residual methods. These strategies successfully mitigated the oscillatory convergence associated with Nesterov’s method. The convergence of the aforementioned accelerated algorithms is mainly asymptotic, i.e., the solution to the optimization problem or the ODE problem is obtained as time tends to infinity.
It is well known that finite-time convergence has made great progress in dynamical systems, where the convergence time is linked to the system’s initial conditions. In addition, there is another type of convergence: fixed-time convergence. Fixed-time convergence is independent of the initial value and enables the estimation of the upper limit of the settling time without relying on any data regarding the initial conditions. While fixed-time convergence offers numerous benefits for estimating when a process will stop, it cannot establish a straightforward and lucid link between the control parameters and the target maximum stopping time. This frequently results in an overestimation of the stopping time, which, in turn, misrepresents the system’s performance. Additionally, the stopping time cannot be directly adjusted in finite-time or fixed-time convergence scenarios since it is also influenced by the design parameters of other control systems. In order to address the challenge of excessive estimations regarding the stopping time and to reduce the dependence of the stopping time on design parameters, a prescribed-time convergence has been developed [11]. In other words, the system is capable of achieving stability within a predetermined timeframe, irrespective of the initial conditions. It should be highlighted that the integration of prescribed-time convergence with optimization problems is likewise a fascinating area of study [12,13,14].
In practice, many dynamical systems are related to time. Time rescaling is a concept that involves time transformation. Within the realm of non-autonomous dissipative dynamic systems, adjusting the time parameter is a simple yet effective approach to expedite the convergence of system trajectories. As noted in the unconstrained minimization problem in [15,16,17] and the linear constrained minimization problem in [18,19], the time-rescaling parameter has the effect of further increasing the rate of convergence of the objective function values along the trajectory. Balhag et al. [20] developed fast methods for convex unconstrained optimization by leveraging inertial dynamics that combine viscous damping and Hessian-driven damping with time rescaling. In their work [21], Hulett et al. introduced the time-rescaling function, which resulted in improved convergence rates. This achievement can also be considered a further development of the time-rescaling approach that was presented for constrained scenarios in [15,16].
Based on the above facts, to ensure that the optimization problem converges to the optimal solution within the prescribed time, a novel prescribed-time convergence acceleration algorithm with time rescaling is proposed. A distinctive aspect of this paper is the utilization of time rescaling to integrate the concept of prescribed time with second-order systems for tackling optimization problems. This enables the optimization problem to achieve convergence to the optimal value within the prescribed time. Several second-order systems with respect to time $t$ for unconstrained optimization problems and optimization problems with equality constraints were presented in [22,23,24,25]. Under these second-order systems, the above optimization problems yielded asymptotic convergence. In contrast to [22,23,24,25], the contributions of this paper are as follows:
(1) We obtain the prescribed-time convergence rate $e^{-a\cdot M(t)}$, where $0 < a < +\infty$, $d(s)$ is a positive function, $T$ is the prescribed time, and as $t \to T$, we have
$$M(t) = \int_0^t d(s)\,ds \to +\infty.$$
(2) In some cases, the use of time rescaling improves the convergence of the optimization algorithm. So, we use $\alpha(s)$ to transform the time $t$ into an integral form, i.e., $t = t(\delta) = \int_0^\delta \alpha(s)\,ds$, where $\alpha(s)$ is a continuous positive function. In this way, the second-order system we construct becomes more flexible, and the convergence is improved to a certain extent. We give different choices of $\alpha(s)$ and verify the validity of the results with numerical simulations.
In addition, we compare this paper with the literature [15,16,18,20], as shown in Table 1. To simplify, ➀ and ➁ are used to represent smooth and non-smooth objective functions, respectively; ➂, ➃, and ➄ are used to represent the unconstrained optimization problem, the optimization problem with equality constraints, and the structured convex optimization problem, respectively; ➅ is used to represent "Accelerate"; and ➆ and ➇ are used to represent "converge within the prescribed time, i.e., as $t \to T$, $f(x) \to f(x^\star)$" and "asymptotic convergence, i.e., as $t \to +\infty$, $f(x) \to f(x^\star)$", respectively.
The subsequent sections of this article are structured as follows. Section 2 provides a concise overview of the fundamental concepts utilized throughout this paper. In Section 3 and Section 4, we design new algorithms that allow unconstrained optimization problems and optimization problems with equality constraints to converge to the optimum within the prescribed time. We give corresponding examples for different time rescalings in Section 5. Numerical simulations are given in Section 6. Conclusions and suggestions for future work are given in Section 7.

2. Preliminaries

Consider the Hilbert space $V$, and let $f: V \to \mathbb{R}$ be a proper, $\mu$-strongly convex, differentiable smooth function. Furthermore, the space $V$ is equipped with an inner product $\langle\cdot,\cdot\rangle$ and the corresponding norm $\|\cdot\|$. The notation $\langle\cdot,\cdot\rangle$ also represents the duality pairing between $V^*$ and $V$, where $V^*$ is the continuous dual space of $V$, equipped with the standard dual norm $\|\cdot\|_*$. For simplicity, we consider the real vector space $\mathbb{R}^n$, whose Euclidean norm is denoted as $\|x\|$ for $x \in \mathbb{R}^n$. For a given function $f$, let $x^\star$ be the optimal point of the optimization problem, and let the optimal value be $f(x^\star)$. In addition, the lemmas used in this paper are provided below.
Lemma 1
([23]). For any $u = u(t)$, $v = v(t)$, $w = w(t) \in \mathbb{R}^n$, we have
$$\langle u - w,\; w - v\rangle = \frac{1}{2}\left(\|u - v\|^2 - \|u - w\|^2 - \|w - v\|^2\right).$$
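As a quick numerical sanity check (our illustration, not part of the original text), Lemma 1's three-point identity can be verified for arbitrary random vectors:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
u, v, w = rng.standard_normal((3, 5))  # three arbitrary vectors in R^5

lhs = np.dot(u - w, w - v)
rhs = 0.5 * (np.linalg.norm(u - v) ** 2
             - np.linalg.norm(u - w) ** 2
             - np.linalg.norm(w - v) ** 2)
# <u - w, w - v> = (||u - v||^2 - ||u - w||^2 - ||w - v||^2) / 2
assert np.isclose(lhs, rhs)
```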
Lemma 2
([23]). If $f$ is a $\mu$-strongly convex differentiable function, then for any $x = x(t)$, $y = y(t) \in \Omega$, where $\Omega$ is the domain of definition of $f$, we have
$$f(x) - f(y) \ge \langle\nabla f(y),\; x - y\rangle + \frac{\mu}{2}\,\|x - y\|^2.$$
Assumption 1.
The function $\alpha(s)$ is a continuous positive function, i.e., $\alpha(s) > 0$.
Assumption 2.
When $\alpha(s) > 0$ holds, let the relationship between $t$ and $\delta$ be
$$t = t(\delta) = \int_0^\delta \alpha(s)\,ds,$$
i.e., (1) is a time rescaling. The above equation satisfies that as $\delta \to +\infty$, $t \to T$, where $T$ is a positive number. We have
$$t'(\delta) = \alpha(\delta).$$
Also, the inverse function
$$\delta = \delta(t) = t^{-1}(t(\delta))$$
obtained from the above equation satisfies that as $t \to T$, $\delta \to +\infty$.
Assumption 3.
When $\alpha(s) > 0$ holds, let the function
$$m(\delta) = \int_0^\delta \frac{1}{\alpha(s)}\,ds$$
satisfy that as $\delta \to +\infty$, $m(\delta) \to +\infty$.
Assumption 4.
In the case of $\alpha(\delta) > 0$ and (1),
$$\alpha(\delta)\cdot d(t) = \frac{1}{\alpha(\delta)}$$
holds.
Assumption 5.
When $d(s) > 0$ is satisfied, let the function
$$M(t) = \int_0^t d(s)\,ds$$
satisfy that as $t \to T$, $M(t) \to +\infty$.
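For concreteness, one choice of time rescaling that satisfies Assumptions 1–5 is the exponential choice $\alpha(s) = T\,e^{-s}$ (an illustration of ours; the paper builds its own family of examples in Section 5). It gives $t(\delta) = T(1 - e^{-\delta})$, so $t \to T$ as $\delta \to +\infty$; $m(\delta) = (e^{\delta} - 1)/T \to +\infty$; and Assumption 4 yields $d(t) = 1/(T - t)^2$ with $M(t) = 1/(T - t) - 1/T \to +\infty$ as $t \to T$. A short numerical check:

```python
import math

T = 2.0                                            # prescribed time (illustrative)
alpha = lambda s: T * math.exp(-s)                 # Assumption 1: alpha(s) > 0
t_of_delta = lambda dl: T * (1.0 - math.exp(-dl))  # t(delta) = integral of alpha
m_of_delta = lambda dl: (math.exp(dl) - 1.0) / T   # m(delta) = integral of 1/alpha
d_of_t = lambda t: 1.0 / (T - t) ** 2              # Assumption 4: d = 1/alpha(delta)^2
M = lambda t: 1.0 / (T - t) - 1.0 / T              # M(t) = integral of d on [0, t]

# Assumption 2: t(delta) -> T as delta -> +infinity
assert abs(t_of_delta(50.0) - T) < 1e-12
# Assumption 3: m(delta) -> +infinity
assert m_of_delta(50.0) > 1e19
# Assumption 4: alpha(delta) * d(t(delta)) = 1/alpha(delta)
delta = 1.3
assert math.isclose(alpha(delta) * d_of_t(t_of_delta(delta)), 1.0 / alpha(delta))
# Assumption 5: M(t) -> +infinity as t -> T
assert M(T - 1e-9) > 1e8
```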
In [23], for the unconstrained optimization problem
$$\min_{x(t)\in V} f(x(t))$$
of a smooth function $f$ on the entire space $V$, the second-order ODE constructed for the optimization problem is
$$\gamma(t)\cdot\ddot{x}(t) + (\mu + \gamma(t))\cdot\dot{x}(t) + \nabla f(x(t)) = 0.$$
The aforementioned system can be transformed into the following system of first-order ordinary differential equations (ODEs):
$$\dot{x}(t) = v(t) - x(t), \qquad \gamma(t)\cdot\dot{v}(t) = \mu\,\big(x(t) - v(t)\big) - \nabla f(x(t)).$$
Meanwhile,
$$\dot{\gamma}(t) = \mu - \gamma(t).$$
In [25], the author used primal-dual methods to study the linearly constrained convex optimization problem
$$\min\; f(x(t)) \quad \text{s.t.}\quad A x(t) = b.$$
Specifically, when $f$ is a smooth function, the system constructed for the optimization problem containing the equality constraints is
$$\theta(t)\cdot\dot{\lambda}(t) = \nabla_\lambda L_\beta(v(t), \lambda(t)), \qquad \dot{x}(t) = v(t) - x(t), \qquad \gamma(t)\cdot\dot{v}(t) = \mu_\beta\,\big(x(t) - v(t)\big) - \nabla_x L_\beta(x(t), \lambda(t)).$$
Meanwhile,
$$\dot{\theta}(t) = -\theta(t), \qquad \dot{\gamma}(t) = \mu_\beta - \gamma(t).$$
Different second-order systems are constructed for the above two types of optimization problems. Then, by introducing different Lyapunov functions, both optimization problems reach asymptotic convergence, i.e., as $t \to +\infty$, $f(x) \to f(x^\star)$.
Remark 1.
Under certain conditions, the authors of [26] used the generalized finite-time gain function with respect to the variable $\tau$, obtaining that as $\tau \to T$, $t \to +\infty$. The result obtained is also asymptotically convergent. Inspired by the literature [26], we modify the above two systems to obtain that as $t \to T$, $f(x) \to f(x^\star)$, where $T$ is the prescribed time.
Remark 2.
Instead of directly giving the second-order system with respect to the variable t, we first use the time rescaling in [23] to construct the system with respect to the variable δ. By establishing the coefficient relationship between the two systems and applying this relationship, we indirectly obtain the second-order system with respect to the variable t.

3. For Unconstrained Optimization Problems

In this subsection, we construct a category of second-order systems designed to address the unconstrained optimization issue (9), ensuring that the solution converges to the optimal outcome within the prescribed time under the influence of these second-order systems.
We consider the unconstrained optimization problem
$$\min_{x\in\mathbb{R}^n} f(x),$$
where $x = x(t) \in \mathbb{R}^n$, $f: \mathbb{R}^n \to \mathbb{R}$, and $f$ is a $\mu$-strongly convex differentiable smooth function. Let its optimal solution be $x^\star$.
Based on the ODE theory, which can provide deeper insights into optimization, we aim to design a second-order system of the following form:
$$\dot{x}(t) = a\cdot d(t)\cdot\big(-x(t) + v(t)\big), \qquad \dot{v}(t) = a\cdot d(t)\cdot\left[\frac{\mu}{\gamma(t)}\cdot\big(x(t) - v(t)\big) - \frac{1}{\gamma(t)}\cdot\nabla f(x(t))\right],$$
where $t \in [0, T)$, $\gamma(t)$ is a positive function, $T > 0$, $0 < a < +\infty$, $\gamma(0) = \gamma_0 > 0$, and
$$\dot{\gamma}(t) = a\cdot d(t)\cdot(\mu - \gamma(t)).$$
Additionally, our objective is to identify an appropriate $d(t)$ such that (9) achieves convergence to the optimal solution within the prescribed time under the influence of Equation (9a,b). Specifically, as $t \to T$, we require that $f(x(t)) \to f(x^\star)$. In order to find the right $d(t)$, we proceed as described below.
Firstly, the variable transformation is used to change the optimization problem (9) into the following equivalent optimization problem (11). The details are given below.
Using the relationship between $t$ and $\delta$, we can obtain
$$x(t) = x(t(\delta)) = y(\delta).$$
By substituting (10) into (9), we obtain the optimization problem
$$\min_{y\in\mathbb{R}^n} f(y),$$
which is equivalent to (9), where $y = y(\delta)$, $f: \mathbb{R}^n \to \mathbb{R}$, and $f$ is a $\mu$-strongly convex differentiable smooth function. The optimal value of the optimization problem (11) is also $f(x^\star)$.
Secondly, under certain conditions, a second-order system is constructed to solve the optimization problem (11), so that (11) converges asymptotically to the optimal solution under the action of the system. The details are given below.
Inspired by [23], a second-order system is constructed for the optimization problem (11) as follows:
$$y'(\delta) = a\cdot h(\delta)\cdot\big(-y(\delta) + w(\delta)\big), \qquad w'(\delta) = a\cdot h(\delta)\cdot\left[\frac{\mu}{p(\delta)}\cdot\big(y(\delta) - w(\delta)\big) - \frac{1}{p(\delta)}\cdot\nabla f(y(\delta))\right],$$
where $\delta \in [0, +\infty)$, $p(\delta)$ and $h(\delta)$ are positive functions, $0 < a < +\infty$, and $p(0) = p_0 = \gamma_0 > 0$. Additionally, we have the equation
$$p'(\delta) = a\cdot h(\delta)\cdot(\mu - p(\delta)).$$
Next, by applying the variable transformation
$$w(\delta) = v(t(\delta)), \qquad p(\delta) = \gamma(t(\delta)),$$
along with (2), (9a,b), and (11a,b), we obtain
$$y'(\delta) = \dot{x}(t)\cdot t'(\delta) = \alpha(\delta)\cdot\dot{x}(t), \qquad w'(\delta) = \dot{v}(t)\cdot t'(\delta) = \alpha(\delta)\cdot\dot{v}(t), \qquad p'(\delta) = \dot{\gamma}(t)\cdot t'(\delta) = \alpha(\delta)\cdot\dot{\gamma}(t).$$
Finally, after analyzing (9a,b) and (11a,b), we find that
$$h(\delta) = \alpha(\delta)\cdot d(t).$$
By substituting Assumption 4 into the above equation, we find that
$$h(\delta) = \frac{1}{\alpha(\delta)}$$
holds. Therefore, for the optimization problem (11), we use $\alpha(\delta)$ to construct a second-order system (14a,b):
$$y'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot\big(-y(\delta) + w(\delta)\big), \qquad w'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot\left[\frac{\mu}{p(\delta)}\cdot\big(y(\delta) - w(\delta)\big) - \frac{1}{p(\delta)}\cdot\nabla f(y(\delta))\right],$$
where $\delta \in [0, +\infty)$, $p(\delta)$ is a positive function, $0 < a < +\infty$, and $p(0) = p_0 = \gamma_0 > 0$. Additionally, we have the equation
$$p'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot(\mu - p(\delta)).$$
Next, we show that the optimization problem (11) converges asymptotically to the optimal value under system (14a,b), where α ( δ ) satisfies Assumptions 1–4.
Theorem 1.
The optimization problem (11) converges asymptotically to the optimal value under system (14a,b), where α ( δ ) satisfies Assumptions 1–4.
Proof. 
We construct the Lyapunov function as follows:
$$L(\delta) = f(y(\delta)) - f(x^\star) + \frac{p(\delta)}{2}\cdot\|w(\delta) - x^\star\|^2.$$
Differentiating the above equation with respect to $\delta$ yields
$$L'(\delta) = \langle\nabla f(y(\delta)),\; y'(\delta)\rangle + \frac{p'(\delta)}{2}\cdot\|w(\delta) - x^\star\|^2 + p(\delta)\cdot\langle w'(\delta),\; w(\delta) - x^\star\rangle.$$
By substituting (14a,b) into the above equation and using Lemmas 1 and 2, we obtain
$$L'(\delta) \le -a\cdot\frac{1}{\alpha(\delta)}\cdot L(\delta).$$
Furthermore, we have
$$L(\delta) \le L(0)\cdot e^{-a\cdot\int_0^\delta \frac{1}{\alpha(s)}\,ds} = L(0)\cdot e^{-a\cdot m(\delta)}.$$
By applying Assumption 3 to the above equation, we can obtain $L(\delta) \to 0$. So, as $\delta \to +\infty$, we can obtain $f(y(\delta)) \to f(x^\star)$.    □
In summary, we can ensure that the optimization problem (11) asymptotically converges to the optimal solution by using the second-order system (14a,b) derived from Assumptions 1–4. Although we can obtain the expression of d ( t ) from Assumption 4, substituting this d ( t ) into system (9a,b) does not guarantee that the optimization problem (9) converges to the optimal solution within the prescribed time T.
Thirdly, under certain conditions, a second-order system is constructed to solve the optimization problem (9) so that (9) converges to the optimal solution within the prescribed time T.
Theorem 2.
When α ( δ ) and d ( t ) satisfy Assumptions 1–5, the optimization problem (9) can converge to the optimal solution within the prescribed time under the action of system (9a,b).
Proof. 
From Assumptions 1–4, we can derive $d(t)$. In addition, it is known from Assumption 2 that $t \to T$ as $\delta \to +\infty$, and vice versa.
For (9a,b), the adaptive Lyapunov function is
$$L(t) = f(x(t)) - f(x^\star) + \frac{\gamma(t)}{2}\cdot\|v(t) - x^\star\|^2,$$
where (16) uses the fact that $\gamma(t)$ is a positive function. Differentiating the above equation with respect to $t$ yields
$$L'(t) = \langle\nabla f(x(t)),\; \dot{x}(t)\rangle + \frac{\dot{\gamma}(t)}{2}\cdot\|v(t) - x^\star\|^2 + \gamma(t)\cdot\langle\dot{v}(t),\; v(t) - x^\star\rangle.$$
By substituting (9a,b) into the above equation and using Lemmas 1 and 2, we obtain
$$L'(t) \le -a\cdot d(t)\cdot L(t).$$
Further, we have
$$L(t) \le L(0)\cdot e^{-a\cdot\int_0^t d(s)\,ds} = L(0)\cdot e^{-a\cdot M(t)},$$
so we obtain
$$f(x(t)) - f(x^\star) \le L(0)\cdot e^{-a\cdot M(t)}.$$
By applying Assumption 5 to the above equation, we can conclude that $f(x(t)) \to f(x^\star)$.    □
Thus, under Assumptions 1–5, we turn the optimization problem (9) of the strongly convex objective function into an equivalent optimization problem (11). The latter converges asymptotically to the optimal solution under the action of system (14a,b), while the former converges to the optimal solution within the prescribed time under the action of system (9a,b).
For the unconstrained optimization problem (9), our algorithm is summarized below (Algorithm 1).
Algorithm 1: The Euler method that accelerates the convergence of the unconstrained optimization problem (9) within the prescribed time
Input: $a > 0$, $\gamma_0 > 0$, $\mu > 0$, $T > 0$, $x_0 \in \mathbb{R}^n$, $v_0 \in \mathbb{R}^n$
1. for $k = 1, 2, \ldots, K$
2. $x_{k+1} = h\cdot a\cdot d_k\cdot(-x_k + v_k) + x_k$
3. $v_{k+1} = h\cdot a\cdot d_k\cdot\left[\frac{\mu}{\gamma_k}\cdot(x_k - v_k) - \frac{1}{\gamma_k}\cdot\nabla f(x_k)\right] + v_k$
4. $\gamma_{k+1} = h\cdot a\cdot d_k\cdot(\mu - \gamma_k) + \gamma_k$
5. end for
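The loop above is a forward-Euler discretization of system (9a,b). A minimal runnable sketch, under illustrative choices of ours that the paper does not fix (quadratic objective $f(x) = \frac{1}{2}\|x - c\|^2$ with $\mu = 1$, prescribed time $T = 1$, and $d(t) = 1/(T - t)^2$, which satisfies Assumptions 1–5 for $\alpha(s) = T e^{-s}$):

```python
import numpy as np

# Illustrative instance (our choice, not from the paper): f(x) = 0.5*||x - c||^2
# is differentiable and 1-strongly convex, so mu = 1 and f(x*) = 0 at x* = c.
c = np.array([1.0, 2.0])
f = lambda x: 0.5 * np.dot(x - c, x - c)
grad_f = lambda x: x - c

T, a, mu = 1.0, 1.0, 1.0           # prescribed time, gain, strong-convexity modulus
d = lambda t: 1.0 / (T - t) ** 2   # Assumption 5: M(t) -> +infinity as t -> T

h = 1e-3                                       # Euler step size
x, v, gamma = np.zeros(2), np.zeros(2), 1.0    # x_0, v_0, gamma_0

# d(t) blows up at t = T, so this fixed-step sketch integrates only over [0, 0.9*T].
for k in range(900):
    dk = d(k * h)
    x_new = x + h * a * dk * (-x + v)
    v_new = v + h * a * dk * ((mu / gamma) * (x - v) - grad_f(x) / gamma)
    gamma = gamma + h * a * dk * (mu - gamma)
    x, v = x_new, v_new

print(f(x))  # objective gap f(x_K) - f(x*) is already small well before t = T
```

Since $d(t)$ is unbounded at the prescribed time, any fixed-step Euler scheme must stop shortly before $t = T$; the rate $e^{-a\,M(t)}$ means the gap is already negligible at $t = 0.9T$ in this sketch.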
Remark 3.
When a = 1 and α ( δ ) = 1 , (9a,b) and (14a,b) become asymptotic systems that solve the unconstrained optimization problems in [23].

4. For Optimization Problems with Equality Constraints

In this subsection, we consider optimization problems with equality constraints:
$$\min\; f(x) \quad \text{s.t.}\quad Ax = b,$$
where $x = x(t) \in \mathbb{R}^n$, $f: \mathbb{R}^n \to \mathbb{R}$ (with $f$ being a $\mu$-strongly convex differentiable smooth function), $A \in \mathbb{R}^{m\times n}$, and $b \in \mathbb{R}^m$. The Lagrangian function for problem (17) is
$$L(x(t), \lambda(t)) = f(x(t)) + \langle\lambda(t),\; A x(t) - b\rangle.$$
Let $(x^*, \lambda^*)$ be the saddle point of $L(x(t), \lambda(t))$; thus,
$$L(x^*, \lambda(t)) \le L(x^*, \lambda^*) \le L(x(t), \lambda^*).$$
Here, $x^*$ is the optimal point of problem (17), that is, $x^* = x^\star$.
Based on the ODE theory, we want to design a second-order system that has the following form:
$$\dot{x}(t) = a\cdot d(t)\cdot\big(v(t) - x(t)\big), \qquad \dot{v}(t) = a\cdot d(t)\cdot\left[\frac{\mu}{\gamma(t)}\cdot\big(x(t) - v(t)\big) - \frac{1}{\gamma(t)}\cdot\nabla_x L(x(t), \lambda(t))\right], \qquad \dot{\lambda}(t) = a\cdot d(t)\cdot\frac{1}{\beta(t)}\cdot\nabla_\lambda L(v(t), \lambda(t)),$$
where $t \in [0, T)$, $\gamma(t)$ and $\beta(t)$ are positive functions, $T > 0$, $0 < a < +\infty$, $\gamma(0) = \gamma_0 > 0$, $0 < \beta(0) = \beta_0 < +\infty$, and
$$\beta(t) = \beta_0\cdot e^{-a\cdot\int_0^t d(s)\,ds}, \qquad \dot{\gamma}(t) = a\cdot d(t)\cdot(\mu - \gamma(t)).$$
Clearly, from the first equation of (17b), we obtain
$$\dot{\beta}(t) = -a\cdot d(t)\cdot\beta(t).$$
Further, our aim is also to select a suitable $d(t)$ so that (17) can converge to the optimal solution within the prescribed time under the action of (17a,b), that is, $f(x(t)) \to f(x^\star)$ as $t \to T$. To determine the appropriate $d(t)$, our work is divided into the steps described below.
Firstly, the variable transformation (10) is used to convert the optimization problem (17) into the following equivalent optimization problem (18). The details are given below.
By substituting (10) into (17), we obtain the following optimization problem:
$$\min\; f(y) \quad \text{s.t.}\quad A y = b,$$
where $y = y(\delta)$, $f: \mathbb{R}^n \to \mathbb{R}$ (with $f$ being a $\mu$-strongly convex differentiable smooth function), $A \in \mathbb{R}^{m\times n}$, and $b \in \mathbb{R}^m$. The optimal solution of the optimization problem (18) is also $x^\star$. The Lagrangian function of problem (18) is
$$L(y(\delta), l(\delta)) = f(y(\delta)) + \langle l(\delta),\; A y(\delta) - b\rangle,$$
where $(x^*, \lambda^*)$ is the saddle point of $L(y(\delta), l(\delta))$.
Secondly, under certain conditions, a second-order system is constructed to solve the optimization problem (18) so that (18) converges asymptotically to the optimal solution under the action of the system. The details are given below.
Inspired by [25], a second-order system is constructed for the optimization problem (18):
$$y'(\delta) = a\cdot h(\delta)\cdot\big(w(\delta) - y(\delta)\big), \qquad w'(\delta) = a\cdot h(\delta)\cdot\left[\frac{\mu}{p(\delta)}\cdot\big(y(\delta) - w(\delta)\big) - \frac{1}{p(\delta)}\cdot\nabla_y L(y(\delta), l(\delta))\right], \qquad l'(\delta) = a\cdot h(\delta)\cdot\frac{1}{u(\delta)}\cdot\nabla_l L(w(\delta), l(\delta)),$$
where $\delta \in [0, +\infty)$, $p(\delta)$ and $u(\delta)$ are positive functions, $0 < a < +\infty$, $p(0) = p_0 = \gamma_0 > 0$, $u(0) = u_0 = \beta_0$, and
$$u(\delta) = u_0\cdot e^{-a\cdot\int_0^\delta \frac{1}{\alpha(s)}\,ds}, \qquad p'(\delta) = a\cdot h(\delta)\cdot(\mu - p(\delta)).$$
Clearly, from the first equation of (18b), we obtain
$$u'(\delta) = -a\cdot\frac{1}{\alpha(\delta)}\cdot u(\delta).$$
In addition to (10) and (12), we apply the variable transformation
$$u(\delta) = \beta(t(\delta)), \qquad l(\delta) = \lambda(t(\delta))$$
to obtain
$$y'(\delta) = \alpha(\delta)\cdot\dot{x}(t), \qquad w'(\delta) = \alpha(\delta)\cdot\dot{v}(t), \qquad l'(\delta) = \alpha(\delta)\cdot\dot{\lambda}(t), \qquad u'(\delta) = \alpha(\delta)\cdot\dot{\beta}(t), \qquad p'(\delta) = \alpha(\delta)\cdot\dot{\gamma}(t).$$
After analyzing (17a,b) and (18a,b), we find that
$$h(\delta) = \alpha(\delta)\cdot d(t).$$
By substituting Assumption 4 into the above equation, we find that
$$h(\delta) = \frac{1}{\alpha(\delta)}$$
holds. Therefore, for the optimization problem (18), we use $\alpha(\delta)$ to construct a second-order system:
$$y'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot\big(w(\delta) - y(\delta)\big), \qquad w'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot\left[\frac{\mu}{p(\delta)}\cdot\big(y(\delta) - w(\delta)\big) - \frac{1}{p(\delta)}\cdot\nabla_y L(y(\delta), l(\delta))\right], \qquad l'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot\frac{1}{u(\delta)}\cdot\nabla_l L(w(\delta), l(\delta)),$$
where $\delta \in [0, +\infty)$, $p(\delta)$ and $u(\delta)$ are positive functions, $0 < a < +\infty$, $p(0) = p_0 = \gamma_0 > 0$, $u(0) = u_0 = \beta_0$, and
$$u(\delta) = u_0\cdot e^{-a\cdot\int_0^\delta \frac{1}{\alpha(s)}\,ds}, \qquad p'(\delta) = a\cdot\frac{1}{\alpha(\delta)}\cdot(\mu - p(\delta)).$$
Next, we show that the optimization problem (18) converges asymptotically to the optimal value under system (20a,b), where α ( δ ) satisfies Assumptions 1–4.
Theorem 3.
The optimization problem (18) converges asymptotically to the optimal value under system (20a,b), where α ( δ ) satisfies Assumptions 1–4.
Proof. 
We construct the Lyapunov function as follows:
$$G(\delta) = L(y(\delta), \lambda^*) - L(x^*, l(\delta)) + \frac{p(\delta)}{2}\cdot\|w(\delta) - x^*\|^2 + \frac{u(\delta)}{2}\cdot\|l(\delta) - \lambda^*\|^2.$$
Differentiating the above equation with respect to $\delta$ yields
$$G'(\delta) = \langle\nabla_y L(y(\delta), \lambda^*),\; y'(\delta)\rangle + \frac{p'(\delta)}{2}\cdot\|w(\delta) - x^*\|^2 + p(\delta)\cdot\langle w'(\delta),\; w(\delta) - x^*\rangle + \frac{u'(\delta)}{2}\cdot\|l(\delta) - \lambda^*\|^2 + u(\delta)\cdot\langle l'(\delta),\; l(\delta) - \lambda^*\rangle,$$
and
$$\nabla_l L(x^*, l(\delta)) = 0$$
is used to ensure that the above equation holds. By substituting (20a,b) into the above equation, we obtain
$$G'(\delta) = I_1 + I_2,$$
where
$$I_1 = a\cdot\frac{1}{\alpha(\delta)}\cdot\langle\nabla_y L(y(\delta), \lambda^*),\; w(\delta) - y(\delta)\rangle - a\cdot\frac{1}{\alpha(\delta)}\cdot\langle\nabla_y L(y(\delta), l(\delta)),\; w(\delta) - x^*\rangle,$$
$$I_2 = a\cdot\frac{1}{\alpha(\delta)}\cdot\frac{\mu}{2}\cdot\|w(\delta) - x^*\|^2 - a\cdot\frac{1}{\alpha(\delta)}\cdot\frac{p(\delta)}{2}\cdot\|w(\delta) - x^*\|^2 + a\cdot\frac{1}{\alpha(\delta)}\cdot\mu\cdot\langle y(\delta) - w(\delta),\; w(\delta) - x^*\rangle - a\cdot\frac{1}{\alpha(\delta)}\cdot\frac{u(\delta)}{2}\cdot\|l(\delta) - \lambda^*\|^2 + a\cdot\frac{1}{\alpha(\delta)}\cdot\langle A w(\delta) - b,\; l(\delta) - \lambda^*\rangle.$$
From
$$\nabla_y L(y(\delta), l(\delta)) = \nabla_y L(y(\delta), \lambda^*) + A^\top\big(l(\delta) - \lambda^*\big),$$
and since $f$ is a strongly convex function, we have
$$\langle\nabla_y L(y(\delta), \lambda^*),\; -y(\delta) + x^*\rangle \le L(x^*, \lambda^*) - L(y(\delta), \lambda^*) - \frac{\mu}{2}\cdot\|y(\delta) - x^*\|^2,$$
which leads to
$$I_1 \le a\cdot\frac{1}{\alpha(\delta)}\cdot\left[L(x^*, \lambda^*) - L(y(\delta), \lambda^*) - \frac{\mu}{2}\cdot\|y(\delta) - x^*\|^2\right] - a\cdot\frac{1}{\alpha(\delta)}\cdot\langle A w(\delta) - b,\; l(\delta) - \lambda^*\rangle.$$
By substituting the above equation into (22), we have
$$G'(\delta) \le -a\cdot\frac{1}{\alpha(\delta)}\cdot G(\delta).$$
Further, we obtain
$$G(\delta) \le G(0)\cdot e^{-a\cdot\int_0^\delta \frac{1}{\alpha(s)}\,ds} = G(0)\cdot e^{-a\cdot m(\delta)}.$$
Assuming that the initial point is not the optimal point, it is clear that $G(0) > 0$. By applying Assumption 3 to the above equation, we can deduce that $G(\delta) \to 0$. However, this alone does not yet establish the desired convergence of the objective value, which requires further work. The details are given below.
By substituting (21) into (23), we obtain
$$L(y(\delta), \lambda^*) - L(x^*, l(\delta)) = f(y(\delta)) - f(x^*) + \langle\lambda^*,\; A y(\delta) - b\rangle \le G(0)\cdot e^{-a\cdot m(\delta)},$$
$$\|l(\delta) - \lambda^*\|^2 \le \frac{2\,G(0)\cdot e^{-a\cdot m(\delta)}}{u(\delta)}.$$
Let
$$H(\delta) = l(\delta) - \frac{1}{u(\delta)}\cdot\big(A y(\delta) - b\big).$$
From the expression for $u(\delta)$ in (20b), we obtain
$$\left(\frac{1}{u(\delta)}\right)' = a\cdot\frac{1}{\alpha(\delta)}\cdot\frac{1}{u(\delta)}.$$
Differentiating (26) with respect to $\delta$ and substituting the first and third equations of (20a) yields $H'(\delta) = 0$. So, we have
$$H(\delta) = H(0) = l_0 - \frac{1}{u_0}\,\big(A y_0 - b\big).$$
From (26), we have $A y(\delta) - b = u(\delta)\cdot\big(l(\delta) - H(\delta)\big)$, so
$$\|A y(\delta) - b\| = u(\delta)\cdot\|l(\delta) - H(0)\| = u(\delta)\cdot\left\|l(\delta) - \lambda^* + \lambda^* - l_0 + \frac{1}{u_0}\big(A y_0 - b\big)\right\| \le C_1\cdot e^{-a\cdot m(\delta)},$$
where $C_1 = \sqrt{2\,G(0)\,u_0} + u_0\cdot\|\lambda^* - l_0\| + \|A y_0 - b\|$. From the above equation and (25), we achieve
$$f(y(\delta)) - f(x^*) \le -\langle\lambda^*,\; A y(\delta) - b\rangle + G(0)\cdot e^{-a\cdot m(\delta)}.$$
Further, we have
$$\big|f(y(\delta)) - f(x^*)\big| = \big|f(y(\delta)) - f(x^\star)\big| \le C_2\cdot e^{-a\cdot m(\delta)},$$
where $C_2 = \|\lambda^*\|\cdot C_1 + G(0)$, so
$$\big|f(y(\delta)) - f(x^\star)\big| \le C_2\cdot e^{-a\cdot m(\delta)}.$$
Since $G(0) > 0$ holds, substituting Assumption 3 into the above equation yields $f(y(\delta)) \to f(x^\star)$.    □
In summary, we can ensure that the optimization problem (18) asymptotically converges to the optimal solution by using the second-order system (20a,b) constructed based on Assumptions 1–4. Although we can derive the expression for d ( t ) by substituting α ( δ ) into Assumption 4, substituting the obtained d ( t ) into system (17a,b) does not guarantee that the optimization problem (17) will converge to the optimal solution within the prescribed time T.
Thirdly, under certain conditions, a second-order system is constructed to solve the optimization problem (17) so that (17) converges to the optimal solution within the prescribed time T.
Theorem 4.
When α ( δ ) and d ( t ) satisfy Assumptions 1–5, the optimization problem (17) can converge to the optimal solution within the prescribed time under the action of system (17a,b).
Proof. 
From Assumptions 1–4, we are able to derive $d(t)$. In addition, according to Assumption 2, as $\delta \to +\infty$, $t \to T$, and vice versa.
For (17), the adaptive Lyapunov function is
$$G(t) = L(x(t), \lambda^*) - L(x^*, \lambda(t)) + \frac{\gamma(t)}{2}\cdot\|v(t) - x^*\|^2 + \frac{\beta(t)}{2}\cdot\|\lambda(t) - \lambda^*\|^2.$$
Differentiating the above equation with respect to $t$ yields
$$G'(t) = \langle\nabla_x L(x(t), \lambda^*),\; \dot{x}(t)\rangle + \frac{\dot{\gamma}(t)}{2}\cdot\|v(t) - x^*\|^2 + \gamma(t)\cdot\langle\dot{v}(t),\; v(t) - x^*\rangle + \frac{\dot{\beta}(t)}{2}\cdot\|\lambda(t) - \lambda^*\|^2 + \beta(t)\cdot\langle\dot{\lambda}(t),\; \lambda(t) - \lambda^*\rangle,$$
where
$$\nabla_\lambda L(x^*, \lambda(t)) = 0$$
is used so that the above equation holds. By substituting (17) into the above equation, we obtain
$$G'(t) = I_3 + I_4,$$
where
$$I_3 = a\cdot d(t)\cdot\langle\nabla_x L(x(t), \lambda^*),\; v(t) - x(t)\rangle - a\cdot d(t)\cdot\langle\nabla_x L(x(t), \lambda(t)),\; v(t) - x^*\rangle,$$
$$I_4 = a\cdot d(t)\cdot\frac{\mu}{2}\cdot\|v(t) - x^*\|^2 - a\cdot d(t)\cdot\frac{\gamma(t)}{2}\cdot\|v(t) - x^*\|^2 + a\cdot d(t)\cdot\mu\cdot\langle x(t) - v(t),\; v(t) - x^*\rangle - a\cdot d(t)\cdot\frac{\beta(t)}{2}\cdot\|\lambda(t) - \lambda^*\|^2 + a\cdot d(t)\cdot\langle A v(t) - b,\; \lambda(t) - \lambda^*\rangle.$$
From
$$\nabla_x L(x(t), \lambda(t)) = \nabla_x L(x(t), \lambda^*) + A^\top\big(\lambda(t) - \lambda^*\big),$$
and since $f$ is a strongly convex function with
$$\langle\nabla_x L(x(t), \lambda^*),\; -x(t) + x^*\rangle \le L(x^*, \lambda^*) - L(x(t), \lambda^*) - \frac{\mu}{2}\cdot\|x(t) - x^*\|^2,$$
we obtain
$$I_3 \le a\cdot d(t)\cdot\left[L(x^*, \lambda^*) - L(x(t), \lambda^*) - \frac{\mu}{2}\cdot\|x(t) - x^*\|^2\right] - a\cdot d(t)\cdot\langle A v(t) - b,\; \lambda(t) - \lambda^*\rangle.$$
By substituting the above equation into (28), we have
$$G'(t) \le -a\cdot d(t)\cdot G(t).$$
Further, we obtain
$$G(t) \le G(0)\cdot e^{-a\cdot\int_0^t d(s)\,ds} = G(0)\cdot e^{-a\cdot M(t)}.$$
Assuming that the initial point is not the optimal point, it is clear that $G(0) > 0$. By applying Assumption 5 to the above equation, we can obtain $G(t) \to 0$. However, the desired convergence of the objective value has not yet been established, which requires further work.
By substituting (27) into (29), we obtain
$$L(x(t), \lambda^*) - L(x^*, \lambda(t)) = f(x(t)) - f(x^*) + \langle\lambda^*,\; A x(t) - b\rangle \le G(0)\cdot e^{-a\cdot M(t)},$$
$$\|\lambda(t) - \lambda^*\|^2 \le \frac{2\,G(0)\cdot e^{-a\cdot M(t)}}{\beta(t)}.$$
Let
$$H(t) = \lambda(t) - \frac{1}{\beta(t)}\cdot\big(A x(t) - b\big).$$
From the expression for $\beta(t)$, we obtain
$$\left(\frac{1}{\beta(t)}\right)' = a\cdot d(t)\cdot\frac{1}{\beta(t)}.$$
Differentiating (32) with respect to $t$ and substituting (17a) yields $H'(t) = 0$. So, we have
$$H(t) = H(0) = \lambda_0 - \frac{1}{\beta_0}\,\big(A x_0 - b\big).$$
From (32), we have $A x(t) - b = \beta(t)\cdot\big(\lambda(t) - H(t)\big)$, so
$$\|A x(t) - b\| = \beta(t)\cdot\|\lambda(t) - H(0)\| = \beta(t)\cdot\left\|\lambda(t) - \lambda^* + \lambda^* - \lambda_0 + \frac{1}{\beta_0}\big(A x_0 - b\big)\right\| \le C_1\cdot e^{-a\cdot M(t)},$$
where $C_1 = \sqrt{2\,G(0)\,\beta_0} + \beta_0\cdot\|\lambda^* - \lambda_0\| + \|A x_0 - b\|$. And from the above equation and (28), we obtain
f(x(t)) − f(x*) ≤ −⟨λ*, Ax(t) − b⟩ + G(0)·e^(−a·M(t)).
Further, we have
|f(x(t)) − f(x*)| ≤ ‖λ*‖·‖Ax(t) − b‖ + G(0)·e^(−a·M(t)) ≤ C₂·e^(−a·M(t)),
where C₂ = ‖λ*‖·C₁ + G(0), so
|f(x(t)) − f(x*)| ≤ C₂·e^(−a·M(t)).
Since G(0) > 0 holds, applying Assumption 5 to the above inequality yields f(x(t)) → f(x*) as t → T.    □
Thus, under Assumptions 1–5, we transform the optimization problem (17) into an equivalent optimization problem (18). The latter converges asymptotically to the optimal solution under the action of system (20a,b), while the former converges to the optimal solution within the prescribed time under the action of system (17a,b).
For the optimization problem (17), our algorithm is summarized below (Algorithm 2).
Algorithm 2: The Euler method that accelerates the convergence of the optimization problem (17) within the prescribed time
Input: a > 0, γ₀ > 0, β₀ > 0, μ > 0, T > 0, step size h > 0, A ∈ R^(m×n), b ∈ R^m, x₀ ∈ R^n, v₀ ∈ R^n, λ₀ ∈ R^m.
1. for k = 1, 2, …, K
2. x₍k+1₎ = x_k + h·a·d_k·(v_k − x_k)
3. v₍k+1₎ = v_k + h·a·d_k·( (μ/γ_k)·(x_k − v_k) − (1/γ_k)·(∇f(x_k) + Aᵀλ_k) )
4. λ₍k+1₎ = λ_k + h·a·d_k·(1/β_k)·(Av_k − b)
5. β₍k+1₎ = β₀·e^(−a·∫₀^(t_k) d(s) ds) = β₀·e^(−a·M_k)
6. γ₍k+1₎ = γ_k + h·a·d_k·(μ − γ_k)
end for
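As an illustration, the loop above can be sketched in Python. This is a non-authoritative sketch: the step size h, the iteration count, the left Riemann sum used for M_k, and the test problem at the bottom (min ½‖x‖² subject to x₁ + x₂ = 1, which is not one of the paper's cases) are all our own assumptions.

```python
import numpy as np

def euler_prescribed_time(grad_f, A, b, x0, v0, lam0, d,
                          a=0.5, mu=1.0, gamma0=1.0, beta0=1.0,
                          h=5e-5, steps=60000):
    """Explicit Euler sketch of Algorithm 2 for system (17a,b)."""
    x, v, lam = (np.asarray(z, dtype=float) for z in (x0, v0, lam0))
    gamma, beta, M = gamma0, beta0, 0.0
    for k in range(steps):
        dk = d(k * h)
        x_new = x + h * a * dk * (v - x)                              # step 2
        v_new = v + h * a * dk * ((mu / gamma) * (x - v)
                                  - (grad_f(x) + A.T @ lam) / gamma)  # step 3
        lam_new = lam + h * a * dk * (A @ v - b) / beta               # step 4
        M += h * dk                   # left Riemann sum for M_k = ∫_0^{t_k} d(s) ds
        beta = beta0 * np.exp(-a * M)                                 # step 5
        gamma = gamma + h * a * dk * (mu - gamma)                     # step 6
        x, v, lam = x_new, v_new, lam_new
    return x, lam

# Hypothetical test problem: min 0.5*||x||^2  s.t.  x1 + x2 = 1, solution x* = (0.5, 0.5).
# We use Example 2's rescaling d_2(t) = T^4/(T - t)^4 with T = 6 and stop at t = 3 < T,
# because d(t) blows up at T and a fixed explicit Euler step is only stable short of T.
T = 6.0
d2 = lambda t: T**4 / (T - t)**4
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = euler_prescribed_time(lambda z: z, A, b,
                               x0=[2.0, 0.0], v0=[2.0, 0.0], lam0=[0.0], d=d2)
```

Following the dynamics all the way to the prescribed time would require an adaptive step size; the fixed-step run above only illustrates that the iterates head toward the optimizer well before T.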
Remark 4.
When a = 1 and α ( δ ) = 1 , (17a,b) and (20) become asymptotic systems that solve optimization problems with equality constraints in [25].

5. Examples

In this section, we construct functions α_i(s), i = 1, 2, 3, 4, based on linear functions, exponential functions, and the Pearl function, and correspondingly define the functions d_i(t), i = 1, 2, 3, 4. We prove that these functions satisfy Assumptions 1–5. Then, we prove that substituting the corresponding d_i(t), i = 1, 2, 3, 4, into system (9a,b) or system (17a,b) ensures that the unconstrained optimization problem (9) and the optimization problem with equality constraints (17) converge to the optimal solution within the prescribed time T. The details are given below.
Example 1.
We construct
α₁(s) = (2β − 1)·T/(1 + s)^(2β)
based on linear functions, where β > 1/2, and
d₁(t) = (T/(T − t))^(4β/(2β−1))/((2β − 1)²·T²).
Clearly, α₁(s) > 0, so Assumption 1 holds, and
α₁(δ) = (2β − 1)·T/(1 + δ)^(2β).
From t = t(δ) = ∫₀^δ α₁(s) ds, we know that
t = T·( 1 − 1/(1 + δ)^(2β−1) ).
Since β > 1/2 and δ ∈ [0, +∞), it follows that t ≥ 0. Clearly, as δ → +∞, t = t(δ) → T, and vice versa, so Assumption 2 holds. Next, we compute
m₁(δ) = ∫₀^δ 1/α₁(s) ds = ( (1 + δ)^(2β+1) − 1 )/( (4β² − 1)·T ).
Clearly, as δ → +∞, m₁(δ) → +∞, so Assumption 3 holds. By substituting t = t(δ) into d₁(t), we obtain
α₁(δ)·d₁(t(δ)) = 1/α₁(δ),
thus Assumption 4 holds. Let
Q₁(t) = (1/(T − t))^((2β+1)/(2β−1)) − (1/T)^((2β+1)/(2β−1))
and then
M₁(t) = ∫₀ᵗ d₁(s) ds = ( T^(2/(2β−1))/(4β² − 1) )·Q₁(t).
Clearly, as t → T, M₁(t) → +∞, so Assumption 5 holds. Next, we show that γ(t) is a positive function. By substituting d₁(t) into (9b) or (17b), we obtain
γ(t) = μ − (μ − γ₀)·e^(−a·( T^(2/(2β−1))/(4β² − 1) )·Q₁(t)) = μ − (μ − γ₀)·e^(−a·M₁(t)).
Since the coefficient μ − γ₀ of the exponential has a fixed sign, by considering the cases μ = γ₀, μ > γ₀, and μ < γ₀ and examining the graph of γ(t), we find that min{γ₀, μ} ≤ γ(t) ≤ max{γ₀, μ}. As t → T, γ(t) → μ. Since γ₀ > 0 and μ > 0, we conclude that γ(t) is a positive function.
Case 1: For the unconstrained optimization problem (9), we use (16) as the Lyapunov function. Taking the derivative of (16) with respect to the variable t, a further calculation gives L′(t) ≤ −a·d₁(t)·L(t). Thus, as t → T, we have f(x(t)) → f(x*).
Case 2: For the optimization problem with equality constraints, we still have that γ ( t ) and β ( t ) are positive functions. Using (27) as the Lyapunov function and taking the derivative of (27) with respect to the variable t, we obtain
G′(t) ≤ −a·d₁(t)·G(t).
Further, we have
G(t) ≤ G(0)·e^(−a·( T^(2/(2β−1))/(4β² − 1) )·Q₁(t)) = G(0)·e^(−a·M₁(t)),
where β > 1/2. Assuming that the selected initial point is not the optimal point, it is clear that G(0) > 0. So, as t → T, G(t) → 0. We also obtain
(1/β(t))′ = a·d₁(t)·(1/β(t)).
Further, as t → T, we have f(x(t)) → f(x*).
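These closed-form identities are easy to check numerically. The following sketch (with the hypothetical choices T = 6 and β = 1; any T > 0 and β > 1/2 work) verifies Assumptions 2 and 4 and cross-checks t(δ) by quadrature:

```python
import numpy as np

T, beta = 6.0, 1.0
alpha1 = lambda s: (2*beta - 1) * T / (1 + s) ** (2*beta)
d1 = lambda t: (T / (T - t)) ** (4*beta / (2*beta - 1)) / ((2*beta - 1)**2 * T**2)
t_of = lambda delta: T * (1 - 1 / (1 + delta) ** (2*beta - 1))  # closed form of ∫_0^δ α₁(s) ds

# Assumption 2: t(δ) → T as δ → +∞
assert abs(t_of(1e6) - T) < 1e-3
# Assumption 4: α₁(δ)·d₁(t(δ)) = 1/α₁(δ)
for delta in (0.0, 1.0, 10.0, 100.0):
    assert np.isclose(alpha1(delta) * d1(t_of(delta)), 1 / alpha1(delta))
# cross-check the closed form of t(δ) against trapezoidal quadrature of α₁
s = np.linspace(0.0, 50.0, 200001)
y = alpha1(s)
quad = float(np.sum((y[1:] + y[:-1]) * np.diff(s)) / 2)
assert abs(quad - t_of(50.0)) < 1e-4
```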
Example 2.
If we choose
α₂(s) = T²/(T + s)²
and
d₂(t) = T⁴/(T − t)⁴,
we can still get the same conclusion.
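This pair can be verified in the same way as Example 1; integrating α₂ gives the closed form t(δ) = T·δ/(T + δ), and a quick numerical check of Assumptions 2 and 4 (with the arbitrary choice T = 8) confirms the construction:

```python
T = 8.0
alpha2 = lambda s: T**2 / (T + s)**2
d2 = lambda t: T**4 / (T - t)**4
t_of = lambda delta: T * delta / (T + delta)      # closed form of ∫_0^δ α₂(s) ds

# Assumption 4: α₂(δ)·d₂(t(δ)) = 1/α₂(δ)
for delta in (0.0, 1.0, 7.5, 120.0):
    assert abs(alpha2(delta) * d2(t_of(delta)) - 1 / alpha2(delta)) < 1e-9
# Assumption 2: t(δ) → T as δ → +∞
assert abs(t_of(1e9) - T) < 1e-7
```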
Example 3.
Based on the exponential function, we construct
α₃(s) = k·T·e^(−k·s)
where k > 0. Clearly, α₃(s) > 0, and
α₃(δ) = k·T·e^(−k·δ).
In addition,
d₃(t) = 1/(k²·(T − t)²).
We are able to show that problems (9) and (17) converge to the optimal solution within the prescribed time T under systems (9a,b) and (17a,b), respectively.
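For instance, integrating α₃ gives t(δ) = T·(1 − e^(−k·δ)), so T − t = T·e^(−k·δ) and d₃(t(δ)) = 1/α₃(δ)²; a short numerical check with the Case 5 values T = 6.5 and k = 0.9:

```python
import math

T, k = 6.5, 0.9
alpha3 = lambda s: k * T * math.exp(-k * s)
d3 = lambda t: 1.0 / (k**2 * (T - t)**2)
t_of = lambda delta: T * (1.0 - math.exp(-k * delta))   # closed form of ∫_0^δ α₃(s) ds

# Assumption 4: α₃(δ)·d₃(t(δ)) = 1/α₃(δ)
for delta in (0.0, 0.5, 2.0, 8.0):
    assert math.isclose(alpha3(delta) * d3(t_of(delta)), 1 / alpha3(delta), rel_tol=1e-9)
# Assumption 2: t(δ) → T as δ → +∞
assert abs(t_of(40.0) - T) < 1e-12
```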
Example 4.
We construct the following function based on the Pearl function
α₄(s) = ( 2T/ln((1+b)/(1−b)) ) · 1/( (1/b²)·e^(b·s) − e^(−b·s) ),
where 0 < b < 1 . Clearly, we have
α₄(δ) = ( 2T/ln((1+b)/(1−b)) ) · 1/( (1/b²)·e^(b·δ) − e^(−b·δ) ).
In addition, we construct
d₄(t) = ( (ln((1+b)/(1−b)))²/(4b²T²) ) · ( ( ((1+b)/(1−b))^(1−t/T) + 1 )/( ((1+b)/(1−b))^(1−t/T) − 1 ) − ( ((1+b)/(1−b))^(1−t/T) − 1 )/( ((1+b)/(1−b))^(1−t/T) + 1 ) )².
Let us first prove that α₄(s) > 0. Since 0 < b < 1, we have (1+b)/(1−b) > 1, and hence
ln((1+b)/(1−b)) > 0,
so the first factor in (33) is positive. The second factor in (33) is decreasing in s and tends to 0 as s → +∞, so it is always positive. Therefore, α₄(s) > 0 is true, which satisfies Assumption 1. From t = t(δ) = ∫₀^δ α₄(s) ds, we know that
t = t(δ) = ∫₀^δ α₄(s) ds = ( T/ln((1+b)/(1−b)) ) · ln( ((1+b)/(1−b)) · ( (1/b)·e^(b·δ) − 1 )/( (1/b)·e^(b·δ) + 1 ) ).
Let
g(δ) = ((1+b)/(1−b)) · ( (1/b)·e^(b·δ) − 1 )/( (1/b)·e^(b·δ) + 1 ),
where δ ∈ [0, +∞) and 0 < b < 1, so that (1+b)/(1−b) > 1. We now compute the derivative
g′(δ) = ((1+b)/(1−b)) · 2·e^(b·δ)/( (1/b)·e^(b·δ) + 1 )² > 0,
which shows that g(δ) is strictly increasing.
When δ = 0, g(0) = 1, so g(δ) ≥ 1 and t = t(δ) ≥ 0. And as δ → +∞, ( (1/b)·e^(b·δ) − 1 )/( (1/b)·e^(b·δ) + 1 ) → 1 and g(δ) → (1+b)/(1−b). So, t = t(δ) → T. Further, we obtain
δ(t) = ln( ( b·(H + 1)/(H − 1) )^(1/b) ),
where
H = ((1+b)/(1−b))^(1−t/T) = ( e^(b·δ) + b )/( e^(b·δ) − b ).
From the above equation, we know that
b·(H + 1)/(H − 1) = e^(b·δ).
As t → T, H → 1. So, as t → T, δ(t) → +∞, which satisfies Assumption 2. Next,
m₄(δ) = ∫₀^δ 1/α₄(s) ds = ( ln((1+b)/(1−b))/(2bT) ) · ( (1/b²)·e^(b·δ) + e^(−b·δ) − 1 − 1/b² ).
From the above equation, it follows that as δ → +∞, m 4 ( δ ) → +∞. So, Assumption 3 holds. By substituting t = t ( δ ) into d 4 ( t ) , we obtain
α₄(δ)·d₄(t(δ)) = 1/α₄(δ)
which clearly satisfies Assumption 4. In addition,
M₄(t) = ∫₀ᵗ d₄(s) ds = ( ln((1+b)/(1−b))·(1−b)²/(2b³T) ) · ( ((1+b)/(1−b))² − ((1+b)/(1−b))^(2(1−t/T)) )/( ((1+b)/(1−b))^(2(1−t/T)) − 1 ).
Clearly, as t → T, M₄(t) → +∞, which satisfies Assumption 5. Therefore, using a similar process as above, we can conclude that both the unconstrained optimization problem (9) and the optimization problem (17) with equality constraints converge to the optimal solution within the prescribed time T.
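The Pearl-function construction can be verified numerically as well. The sketch below (with the Case 4 values T = 9.5 and b = 1/2) checks Assumptions 2 and 4 and also confirms, by quadrature of d₄, the closed-form expression for M₄ that it uses:

```python
import math

T, b = 9.5, 0.5
C = (1 + b) / (1 - b)
lnC = math.log(C)

alpha4 = lambda s: (2 * T / lnC) / (math.exp(b*s) / b**2 - math.exp(-b*s))
t_of = lambda u: (T / lnC) * math.log(C * (math.exp(b*u)/b - 1) / (math.exp(b*u)/b + 1))

def d4(t):
    H = C ** (1 - t / T)
    return (lnC**2 / (4 * b**2 * T**2)) * ((H + 1)/(H - 1) - (H - 1)/(H + 1))**2

# Assumption 4: α₄(δ)·d₄(t(δ)) = 1/α₄(δ)
for delta in (0.1, 1.0, 3.0, 10.0):
    assert math.isclose(alpha4(delta) * d4(t_of(delta)), 1 / alpha4(delta), rel_tol=1e-9)
# Assumption 2: t(δ) → T as δ → +∞
assert abs(t_of(60.0) - T) < 1e-10
# M₄(t) = ∫_0^t d₄(s) ds by trapezoidal quadrature matches the closed form used here
t_end, n = 0.5 * T, 100000
hs = t_end / n
quad = sum(0.5 * (d4(i*hs) + d4((i+1)*hs)) * hs for i in range(n))
H2 = C ** (2 * (1 - t_end / T))
closed = (lnC * (1 - b)**2 / (2 * b**3 * T)) * (C**2 - H2) / (H2 - 1)
assert abs(quad - closed) < 1e-6
```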

6. Numerical Results

On a Windows 8.1 system, we utilized MATLAB R2018a to carry out the corresponding numerical simulations.
By selecting different α ( δ ) , d ( s ) , and varying the parameter a, we considered system (9a,b) with the variable t and system (14a,b) with the variable δ for the unconstrained optimization problems (9) and (11), respectively. We also considered system (17a,b) with the variable t and system (20a,b) with the variable δ for optimization problems (17) and (18), respectively.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 show the unconstrained optimization problems, where Figure 1, Figure 2, Figure 5 and Figure 6 show systems with the variable t, and Figure 3, Figure 4, Figure 7 and Figure 8 show systems with the variable δ . Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 show the optimization problems with equality constraints, where Figure 9, Figure 10, Figure 13 and Figure 14 show systems with the variable t, and Figure 11, Figure 12, Figure 15 and Figure 16 show systems with the variable δ . The findings indicate that both systems converged to the same optimal solution for the strongly convex objective function.
Case 3: When μ = 0.5, a = 2, β = 1, T = 6, and A = [1 0; 0 0.5], we choose α₁(δ) and d₁(s), and we consider the minimum of the strongly convex function f = (1/2)·xᵀAx.
Figure 1 and Figure 2 show the variations in the variables x and f ( x ( t ) ) with respect to t when system (9a,b) was applied to solve the problem.
Figure 3 and Figure 4 show the variations in the variables y and f(y( δ )) with respect to δ when system (14a,b) was applied to solve the problem.
Remark 5.
The optimization problem reached the optimal solution x = ( 0.0000 , 0.0000 ) within the prescribed time T = 6 under the action of system (9a,b), and the equivalent optimization problem converged asymptotically to the same optimal solution under the action of system (14a,b).
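This behavior can be reproduced with a short script. The sketch below integrates an explicit Euler discretization of our reading of system (9a,b), namely the form suggested by Algorithm 2 with the multiplier terms dropped (this reading, the step size, and the initial points are our own assumptions):

```python
import numpy as np

# Case 3 data: f(x) = 0.5 * x^T A x with A = [1 0; 0 0.5], mu = 0.5, a = 2, beta = 1, T = 6.
# Assumed dynamics:
#   x' = a*d(t)*(v - x),  v' = a*d(t)*((mu/gamma)*(x - v) - grad_f(x)/gamma),
#   gamma' = a*d(t)*(mu - gamma).
A = np.diag([1.0, 0.5])
mu, a, beta, T = 0.5, 2.0, 1.0, 6.0
d1 = lambda t: (T / (T - t)) ** (4*beta / (2*beta - 1)) / ((2*beta - 1)**2 * T**2)

h, steps = 1e-4, 54000          # integrate on [0, 5.4]; stop short of T for stability
x = np.array([2.0, 1.0])
v = np.array([2.0, 1.0])
gamma = 1.0
for k in range(steps):
    dk = d1(k * h)
    x, v, gamma = (x + h*a*dk*(v - x),
                   v + h*a*dk*((mu/gamma)*(x - v) - (A @ x)/gamma),
                   gamma + h*a*dk*(mu - gamma))
# by t = 5.4 the iterate is already extremely close to the optimum x* = (0, 0)
```

Because d₁(t) is unbounded at T, the fixed-step run stops slightly short of the prescribed time; an adaptive step would be needed to follow the dynamics all the way to T.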
Case 4: When μ = 1, a = 3, T = 9.5, A = [4 1 2; 1 5 3; 2 3 6], c = [1 1 0], d = 1, and b = 1/2, we choose α₄(δ) and d₄(s), and we consider the minimum of the strongly convex function f = (1/2)·xᵀAx + cx + d.
Figure 5 and Figure 6 show the variations in the variables x and f ( x ( t ) ) with respect to t when system (9a,b) was applied to solve the problem.
Figure 7 and Figure 8 show the variations in the variables y and f ( y ( δ ) ) with respect to δ when system (14a,b) was applied to solve the problem.
Remark 6.
The optimization problem reached the optimal solution x = ( 0.3000 , 0.2857 , 0.0429 ) within the prescribed time T = 9.5 under the action of system (9a,b), and the equivalent optimization problem converged asymptotically to the same optimal solution under the action of system (14a,b).
Case 5: When μ = 1, a = 0.5, T = 6.5, k = 0.9, A = [1 0; 0 3], and b = [1; 2.5], we choose α₃(δ) and d₃(s), and we consider the minimum of the strongly convex function f = (1/2)·xᵀAx.
Figure 9 and Figure 10 show the variations in the variables x and f ( x ( t ) ) with respect to t when system (17a,b) was applied to solve the problem.
Figure 11 and Figure 12 show the variations in the variables y and f ( y ( δ ) ) with respect to δ when system (20a,b) was applied to solve the problem.
Remark 7.
Within the allowable error range, the optimization problem reached the optimal solution x = ( 1.0751 , 0.2200 ) within the prescribed time T = 6.5 under the action of system (17a,b), and the equivalent optimization problem converged asymptotically to the same optimal solution under the action of system (20a,b).
Case 6: When μ = 0.35, a = 0.8, T = 8, A = [1.2 1.1 0.3; 0.2 0.3 0.1; 1.5 0.4 0.5], and b = [1.1; 3; 2], we choose α₂(δ) and d₂(s), and we consider the minimum of the strongly convex function f = (1/2)·(x − 1)².
Figure 13 and Figure 14 show the variations in the variables x and f ( x ( t ) ) with respect to t when system (17a,b) was applied to solve the problem.
Figure 15 and Figure 16 show the variations in the variables y and f ( y ( δ ) ) with respect to δ when system (20a,b) was applied to solve the problem.
Remark 8.
The optimization problem reached the optimal solution x = ( 1.0214 , 0.1111 , 0.0008 ) within the prescribed time T = 8 under the action of system (17a,b), and the equivalent optimization problem converged asymptotically to the same optimal solution under the action of system (20a,b).

7. Conclusions and Future Work

For the unconstrained optimization problem with a strongly convex objective function and the optimization problem with equality constraints, we develop a novel prescribed-time convergence acceleration algorithm with time rescaling. Our basic idea is to construct different second-order systems under certain conditions so that the two types of optimization problems converge to the optimal solution within the prescribed time T. These systems are more flexible than traditional exponential asymptotic convergence and significantly improve the convergence of the algorithm.
The advantage of our model is that it transforms the time t into an integral of α ( s ) ( α ( s ) > 0 ), i.e., t = t ( δ ) = 0 δ α ( s ) d s . In this way, the second-order system we construct becomes more flexible, and convergence is achieved within the prescribed time T. The limitation of this paper is that our algorithm is only applicable to strongly convex functions, which limits its applicability to general convex and quasi-convex functions. Our future work will focus on the following:
(1) We will investigate the convergence of optimization problems with general convex and quasi-convex objective functions within the prescribed time T under the constraints of general convex sets or inequalities.
(2) In addition to the Euler method, we also hope to use the Runge–Kutta method and other methods to discretize the system and ensure that it converges to the optimal solution within the prescribed time T.
(3) We will explore acceleration algorithms for distributed optimization problems that converge within the prescribed time T.

Author Contributions

X.M.: Conceptualization, Funding acquisition, Methodology, Validation, Formal analysis, Writing—original draft. P.Z.: Supervision, Validation. H.J.: Writing—review & editing. Z.Y.: Software, Validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (62163035), Technology Development Guided by the Central Government (ZYYD2022A05), the Tianshan Talent Training Program (2022TSYCLJ0004), the Ministry of Science and Technology Base and Talent Special Project—Third Xinjiang Scientific Expedition Project (2021xjkk1404), and Natural Science Foundation of Xinjiang Uygur Autonomous Region (grant No. 2022D01C45).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  2. Nesterov, Y.E. A method of solving a convex programming problem with convergence rate O(1/k²). Proc. USSR Acad. Sci. 1983, 269, 543–547. [Google Scholar]
  3. Nesterov, Y.E. One class of methods of unconditional minimization of a convex function, having a high rate of convergence. USSR Comput. Math. Math. Phys. 1985, 24, 80–82. [Google Scholar] [CrossRef]
  4. Nesterov, Y.E. An approach to the construction of optimal methods for minimization of smooth convex function. Èkonom. Mat. Metody 1988, 24, 509–517. [Google Scholar]
  5. Pierro, A.; Lopes, J. Accelerating iterative algorithms for symmetric linear complementarity problems. Int. J. Comput. Math. 1994, 50, 35–44. [Google Scholar] [CrossRef]
  6. Kawamura, A.; Fujino, S.; Ae, T. Acceleration by prediction for error back-propagation algorithm of neural network. Syst. Comput. Jpn. 1994, 25, 78–87. [Google Scholar]
  7. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  8. Nesterov, Y.E. Gradient Methods for Minimizing Composite Objective Function; Technical Report Discussion Paper, 2007/76; Center for Operations Research and Econometrics (CORE): Leuven, Belgium, 2007. [Google Scholar]
  9. O’Donoghue, B.; Cand, E. Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 2015, 15, 715–732. [Google Scholar] [CrossRef]
  10. Nguyen, N.C.; Fernandez, P.; Freund, R.M.; Peraire, J. Accelerated residual methods for the iterative solution of systems of equations. SIAM J. Sci. Comput. 2018, 40, A3157–A3179. [Google Scholar] [CrossRef]
  11. Song, Y.; Wang, Y.; Holloway, J.; Krstic, M. Time-varying feedback for regulation of normal-form nonlinear systems in prescribed finite time. Automatica 2017, 83, 243–251. [Google Scholar] [CrossRef]
  12. Li, H.; Zhang, M.; Yin, Z.; Zhao, Q.; Xi, J.; Zheng, Y. Prescribed-time distributed optimization problem with constraints. ISA Trans. 2024, 148, 255–263. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, L.; Liu, P.; Teng, Z.; Zhang, L.; Fang, Y. Predefined-time position tracking optimization control with prescribed performance of the induction motor based on observers. ISA Trans. 2024, 147, 187–201. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, Y.; Chadli, M.; Xiang, Z. Prescribed-Time Adaptive Fuzzy Optimal Control for Nonlinear Systems. IEEE Trans. Fuzzy Syst. 2024, 32, 2403–2412. [Google Scholar] [CrossRef]
  15. Attouch, H.; Chbani, Z.; Riahi, H. Fast proximal methods via time scaling of damped inertial dynamics. SIAM J. Optim. 2019, 29, 2227–2256. [Google Scholar] [CrossRef]
  16. Attouch, H.; Chbani, Z.; Riahi, H. Fast convex optimization via time scaling of damped inertial gradient dynamics. Pure Appl. Funct. Anal. 2021, 6, 1081–1117. [Google Scholar]
  17. Attouch, H.; Chbani, Z.; Fadili, J.; Riahi, H. First-order optimization algorithms via inertial systems with Hessian driven damping. Math. Program. 2022, 193, 113–155. [Google Scholar] [CrossRef]
  18. Attouch, H.; Chbani, Z.; Fadili, J.; Riahi, H. Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics. J. Optim. Theory Appl. 2022, 193, 704–736. [Google Scholar] [CrossRef]
  19. He, X.; Hu, R.; Fang, Y.P. Inertial primal-dual dynamics with damping and scaling for linearly constrained convex optimization problems. Appl. Anal. 2022, 102, 4114–4139. [Google Scholar] [CrossRef]
  20. Balhag, A.; Chbani, Z.; Attouch, H. Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling. Evol. Equ. Control Theory 2022, 11, 487–514. [Google Scholar]
  21. Hulett, D.A.; Nguyen, D.-K. Time Rescaling of a Primal-Dual Dynamical System with Asymptotically Vanishing Damping. Appl. Math. Optim. 2023, 88, 27. [Google Scholar] [CrossRef]
  22. Luo, H. A primal-dual flow for affine constrained convex optimization. ESAIM Control. Calc. Var. 2022, 28, 1–34. [Google Scholar] [CrossRef]
  23. Luo, H.; Chen, L. From differential equation solvers to accelerated first-order methods for convex optimization. Math. Program. 2021, 195, 735–781. [Google Scholar] [CrossRef]
  24. Chen, L.; Luo, H. A unified convergence analysis of first order convex optimization methods via strong Lyapunov functions. arXiv 2021, arXiv:2108.00132. [Google Scholar]
  25. Luo, H. Accelerated primal-dual methods for linearly constrained convex optimization problems. arXiv 2021, arXiv:2109.12604v2. [Google Scholar]
  26. Tran, D.; Yucelen, T. Finite-time control of perturbed dynamical systems based on a generalized time transformation approach. Syst. Control Lett. 2020, 136, 104605. [Google Scholar] [CrossRef]
Figure 1. Change in x(t) with respect to t.
Figure 2. Change in f(x(t)) with respect to t.
Figure 3. Change in y(δ) with respect to δ.
Figure 4. Change in f(y(δ)) with respect to δ.
Figure 5. Change in x(t) with respect to t.
Figure 6. Change in f(x(t)) with respect to t.
Figure 7. Change in y(δ) with respect to δ.
Figure 8. Change in f(y(δ)) with respect to δ.
Figure 9. Change in x(t) with respect to t.
Figure 10. Change in f(x(t)) with respect to t.
Figure 11. Change in y(δ) with respect to δ.
Figure 12. Change in f(y(δ)) with respect to δ.
Figure 13. Change in x(t) with respect to t.
Figure 14. Change in f(x(t)) with respect to t.
Figure 15. Change in y(δ) with respect to δ.
Figure 16. Change in f(y(δ)) with respect to δ.
Table 1. Different algorithmic models (type and acceleration principle).

[15]: by time discretization of inertial gradient dynamics that have been rescaled in time.
[16]: for the damped inertial gradient system (IGS)_(γ,β), using the function Γ(t).
[18]: by adjusting the three time-varying parameters in the temporally rescaled inertial augmented Lagrangian system and applying Lyapunov analysis.
[20]: by adjusting the three time-varying parameters in the inertial gradient system (IGS)_(γ,β,b) and applying Lyapunov analysis.
Our model (➂, ➃): by transforming the time t into the form of an integral, i.e., t = t(δ) = ∫₀^δ α(s) ds, where α(s) > 0.
Mei, X.; Zhang, P.; Jiang, H.; Yu, Z. A Novel Prescribed-Time Convergence Acceleration Algorithm with Time Rescaling. Mathematics 2025, 13, 251. https://doi.org/10.3390/math13020251