Article

Momentum-Based Adaptive Laws for Identification and Control

School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0150, USA
*
Author to whom correspondence should be addressed.
Aerospace 2024, 11(12), 1017; https://doi.org/10.3390/aerospace11121017
Submission received: 29 October 2024 / Revised: 4 December 2024 / Accepted: 9 December 2024 / Published: 11 December 2024
(This article belongs to the Special Issue Challenges and Innovations in Aircraft Flight Control)

Abstract

In this paper, we develop momentum-based adaptive update laws for parameter identification and control to improve parameter estimation error convergence and control system performance for uncertain dynamical systems. Specifically, we introduce three novel continuous-time, momentum-based adaptive estimation and control algorithms and evaluate their effectiveness via several numerical examples. Our proposed adaptive architectures show faster parameter convergence rates as compared to the classical gradient descent and model reference adaptive control methods.

1. Introduction

One of the fundamental problems in feedback control design is the ability to address discrepancies between system models and real-world systems. To this end, adaptive control has been developed to address the problem of system uncertainty in control-system design [1,2,3,4]. In particular, adaptive control is based on constant linearly parameterized system uncertainty models of a known structure but unknown variation and, in the case of indirect adaptive control, combines an online parameter estimation algorithm with a control law to improve performance in the face of system uncertainties. More specifically, indirect adaptive controllers utilize parameter update laws to identify unknown system parameters and adjust feedback gains to account for system uncertainty.
The parameter estimation algorithms that have been typically used in adaptive control are predicated on two gradient descent type methods that update the system parameter errors in the direction that minimizes a prediction error for a specified cost function. Namely, for a gradient descent flow algorithm, the cost function involves an instantaneous prediction error between an estimated system output and the actual system output of the most recent data, whereas for an integral gradient descent algorithm the cost function captures an aggregate of the past prediction errors in an integral form while placing more emphasis on recent measurements [2].
Additionally, the recursive least squares (RLS) algorithm [2] has also been used for parameter estimation and has been shown to give superior convergence rates as compared to the classical gradient descent algorithms. RLS algorithms can be viewed as a gradient descent method with a time-varying learning rate [5]. For static (i.e., time-invariant) cost functions, momentum-based methods, also known as higher-order tuners, such as the Nesterov method, have been shown to achieve faster parameter error convergence as compared to traditional gradient descent algorithms [6]. Since faster convergence can lead to improved transient system performance, there has been a significant interest in extending momentum-based methods to time-varying cost functions, which naturally appear in adaptive control formulations [7,8,9].
Specifically, in [7] momentum-based architectures were merged with the standard gradient descent method using a variational approach to guarantee boundedness for the parameter estimation and model reference adaptive control problems involving time-varying regressors. Higher-order tuners were also shown to provide an additional degree of freedom in the design of adaptive control laws [10,11]. Given that integral gradient and recursive least squares algorithms provide superior noise rejection properties and improved system performance to that of standard gradient descent methods, there has been a recent interest in integrating momentum-based architectures into adaptive laws for parameter identification and control [5,7].
In this paper, we develop new continuous-time, momentum-based adaptive laws for identification and control by augmenting higher-order tuners into the integral gradient, recursive least squares and composite gradient algorithms. Specifically, in Section 2, we first review the existing gradient-based and momentum-based update laws and introduce three new higher-order tuner architectures. Next, in Section 3, we show how these update laws can be applied for parameter identification, and then, in Section 4, we introduce a momentum-based recursive least squares (MRLS) algorithm for model reference adaptive control. Finally, Section 5 provides several numerical examples that highlight the improved parameter error convergence rate of the proposed momentum-based update laws.
The notation used in this paper is standard. Specifically, we write ℝⁿ to denote the set of n × 1 real column vectors, ℝ^{n×m} to denote the set of n × m real matrices, and (·)ᵀ to denote transpose. The equi-induced 2-norm of a matrix Q ∈ ℝ^{m×n} is denoted by ‖Q‖ ≜ √(λ_max(QᵀQ)) = σ_max(Q), where λ_max(·) denotes the maximum eigenvalue and σ_max(·) denotes the maximum singular value. We write λ_min(·) (resp., σ_min(·)) for the minimum eigenvalue (resp., minimum singular value), V_x(x) for the gradient of a scalar-valued function V with respect to a vector-valued variable x, and f̂(s) for the Laplace transform of f(·), that is, f̂(s) ≜ L{f(t)} = ∫₀^∞ e^{−st} f(t) dt. Finally, we write R(s) to denote the set of proper rational functions with coefficients in ℝ (i.e., SISO proper rational transfer functions), ℝ[s] to denote the set of polynomials with real coefficients, L₂ to denote the space of square-integrable Lebesgue measurable functions on [0, ∞), and L∞ to denote the space of bounded Lebesgue measurable functions on [0, ∞).

2. From First-Order to Higher-Order Tuners for Parameter Estimation

Consider the system given by
y(t) = θ*ᵀϕ(t),  (1)
where, for t ≥ 0, y(t) ∈ ℝ is the system output, ϕ(t) ∈ ℝᴺ is a time-varying regressor vector, and θ* ∈ ℝᴺ is an unknown system parameter vector. Here we assume that ϕ(·) is bounded, and hence, can be normalized via the scaling factor 1 + ‖ϕ(t)‖ so that, with a minor abuse of notation, ‖ϕ(t)‖ ≤ 1, t ≥ 0. Since θ* is unknown, we design an estimator of the form yₑ(t) = θᵀ(t)ϕ(t), where θ(t), t ≥ 0, is an estimate of θ*, so that the prediction error between the estimated output yₑ(t), t ≥ 0, and the actual system output y(t), t ≥ 0, is given by
e(t) ≜ yₑ(t) − y(t) = θᵀ(t)ϕ(t) − y(t) = θ̃ᵀ(t)ϕ(t),  (2)
where θ̃(t) ≜ θ(t) − θ* is the parameter estimation error.
For the error system (2), consider the quadratic loss cost function
L(θ, t) = ½ e²(t)  (3)
and note that the gradient of L with respect to the parameter estimate θ is given by ∂L(θ, t)/∂θ = ϕ(t)e(t). In this case, the gradient flow algorithm is expressed as
θ̇(t) = −γ ∂L(θ, t)/∂θ = −γ ϕ(t)e(t), θ(0) = θ₀, t ≥ 0,  (4)
where γ > 0 is a weighting gain parameter. As shown in [7], e ∈ L∞ ∩ L₂ for all γ > 0.
It is well known that the rate of convergence of the gradient flow algorithm (4) can be slow [6]. However, if the regressor vector ϕ(·) satisfies a persistency of excitation condition, then θ(t) converges to θ* exponentially with a convergence rate proportional to γ. Although in this case the convergence rate can improve significantly, large values of γ can result in a stiff initial value problem for (4), necessitating progressively smaller sampling times for implementing (4) on a digital computer. Furthermore, small sampling times can increase the sensitivity of the adaptive law to measurement noise and modeling errors [2]. To address these limitations, momentum-based accelerated algorithms predicated on higher-order tuner laws are introduced in [7] to expedite parameter estimation error convergence.
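As a concrete illustration of the discussion above, the gradient flow update (4) can be simulated with a forward-Euler discretization. The two-parameter regressor, the gains, and the step size below are illustrative assumptions, not values from the paper; a minimal sketch:

```python
# Minimal sketch (assumed example, not the paper's code): forward-Euler
# simulation of the gradient flow algorithm (4) with a persistently
# exciting regressor normalized by 1 + ||phi(t)||.
import numpy as np

def gradient_flow(theta_star, gamma=5.0, dt=1e-3, T=20.0):
    theta = np.zeros_like(theta_star)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])  # persistently exciting
        phi = phi / (1.0 + np.linalg.norm(phi))       # normalization, ||phi|| <= 1
        e = (theta - theta_star) @ phi                # prediction error (2)
        theta = theta - dt * gamma * phi * e          # Euler step of (4)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = gradient_flow(theta_star)
```

Increasing gamma speeds up convergence in this sketch, but, as noted above, eventually makes the discretized update stiff and forces a smaller step size dt.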
In [12], an array of accelerated methods for continuous-time systems is derived using a Lagrangian variational formulation. Building on this work, [7] extends the variational framework of [12] to address a time-varying instantaneous quadratic loss function of the form given by (3). Specifically, a Lagrangian of the form
L(θ, θ̇, t) = e^{∫_{t₀}^{t} βN(s) ds} (1/(2βN(t))) [θ̇ᵀ(t)θ̇(t) − γβN(t) e(t)²]  (5)
is introduced, where N(t) ≜ 1 + μϕᵀ(t)ϕ(t), μ > 0, β > 0 is a friction coefficient, and γ > 0, and the functional
J(θ) = ∫_T L(θ, θ̇, t) dt,  (6)
where T is a finite-time interval, is considered. A necessary condition for θ minimizing (6) is shown in [7] to satisfy
θ̈(t) + (βN(t) − Ṅ(t)/N(t)) θ̇(t) = −γβN(t) ϕ(t)e(t), θ̇(0) = θ̇₀, θ(0) = θ₀, t ≥ 0.  (7)
Note that (7) can be rewritten as
V̇(t) = −γ ϕ(t)e(t), V(0) = V₀, t ≥ 0,  (8)
θ̇(t) = −β [θ(t) − V(t)] N(t), θ(0) = θ₀,  (9)
where V is an auxiliary parameter and V₀ = (1/(βN(0))) θ̇₀ + θ₀. The stability properties of (8) and (9) are given in [7]. Taking the limiting case as β → ∞, the gradient flow algorithm (4) can be recovered as a special case of (8) and (9).
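Under the same illustrative setup (the regressor, gains, and step size are assumptions), the higher-order tuner (8) and (9) can be sketched as:

```python
# Minimal sketch of the momentum-based tuner (8)-(9) with
# N(t) = 1 + mu*phi(t)^T phi(t), integrated by forward Euler.
import numpy as np

def momentum_tuner(theta_star, gamma=2.0, beta=2.0, mu=2.5, dt=1e-3, T=40.0):
    theta = np.zeros_like(theta_star)
    V = np.zeros_like(theta_star)                  # auxiliary parameter
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        e = (theta - theta_star) @ phi
        N = 1.0 + mu * (phi @ phi)
        V = V - dt * gamma * phi * e               # (8)
        theta = theta - dt * beta * (theta - V) * N    # (9)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = momentum_tuner(theta_star)
```

Letting beta grow large makes theta track the auxiliary parameter V almost instantaneously, which recovers the gradient flow (4) as described in the text.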
Rather than relying solely on the instantaneous system measurement y ( t ) , t 0 , the loss cost function (3) can be modified to incorporate a weighted sum of past system measurements. Specifically, we can consider loss cost functions in the form of (see [2])
L(θ, t) = ½ ∫₀ᵗ e^{−ν(t−τ)} [y(τ) − θᵀ(t)ϕ(τ)]² dτ,  (10)
where ν > 0 is a forgetting factor that prevents the degeneration of system information updates in some direction by placing more emphasis on the more recent measurements. In this case, the gradient of the loss cost function (10) is given by
∂L(θ, t)/∂θ = −∫₀ᵗ e^{−ν(t−τ)} [y(τ) − θᵀ(t)ϕ(τ)] ϕ(τ) dτ = R(t)θ(t) + Q(t),  (11)
where
R(t) ≜ ∫₀ᵗ e^{−ν(t−τ)} ϕ(τ)ϕᵀ(τ) dτ,  (12)
Q(t) ≜ −∫₀ᵗ e^{−ν(t−τ)} y(τ)ϕ(τ) dτ.  (13)
Now, setting
θ̇(t) = −γ ∂L(θ, t)/∂θ, θ(0) = θ₀, t ≥ 0,  (14)
we obtain the integral gradient algorithm ([2])
θ̇(t) = −γ [R(t)θ(t) + Q(t)], θ(0) = θ₀, t ≥ 0,  (15)
Ṙ(t) = −νR(t) + ϕ(t)ϕᵀ(t), R(0) = 0,  (16)
Q̇(t) = −νQ(t) − y(t)ϕ(t), Q(0) = 0.  (17)
The stability properties of (15)–(17) are addressed in [2] ([Thm. 3.6.7]).
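A sketch of the integral gradient algorithm (15)–(17) under the same assumed regressor, with forward-Euler integration and illustrative gains:

```python
# Minimal sketch of the integral gradient algorithm (15)-(17): R and Q
# aggregate exponentially discounted past data with forgetting factor nu.
import numpy as np

def integral_gradient(theta_star, gamma=5.0, nu=0.5, dt=1e-3, T=20.0):
    n = theta_star.size
    theta = np.zeros(n)
    R = np.zeros((n, n))
    Q = np.zeros(n)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        y = theta_star @ phi                           # measurement (1)
        theta = theta - dt * gamma * (R @ theta + Q)   # (15)
        R = R + dt * (-nu * R + np.outer(phi, phi))    # (16)
        Q = Q + dt * (-nu * Q - y * phi)               # (17)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = integral_gradient(theta_star)
```

Since y(τ) = θ*ᵀϕ(τ) here, R(t)θ(t) + Q(t) = R(t)θ̃(t), so the update descends an aggregate of past prediction errors rather than the instantaneous one.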
Next, motivated by the variational formulation of [7] we introduce a momentum-based integral gradient algorithm. Specifically, defining the Lagrangian
L(θ, θ̇, t) = e^{∫_{t₀}^{t} βN_i(s) ds} [ (1/(βN_i(t))) ½ θ̇ᵀ(t)θ̇(t) − γβN_i(t) ½ ∫₀ᵗ e^{−ν(t−τ)} (y(τ) − θᵀ(t)ϕ(τ))² dτ ],  (18)
where N_i(t) ≜ 1 + μ‖R(t)‖, μ > 0, and using the Euler–Lagrange equation, a necessary condition for θ minimizing (6) yields
θ̈(t) + (βN_i(t) − Ṅ_i(t)/N_i(t)) θ̇(t) = −γβN_i(t) [R(t)θ(t) + Q(t)], θ̇(0) = θ̇₀, θ(0) = θ₀, t ≥ 0,  (19)
where, for t ≥ 0, R(t) and Q(t) are given by (12) and (13). Note that (19) can be rewritten as
V̇(t) = −γ [R(t)θ(t) + Q(t)], V(0) = V₀, t ≥ 0,  (20)
θ̇(t) = −β [θ(t) − V(t)] N_i(t), θ(0) = θ₀,  (21)
Ṙ(t) = −νR(t) + ϕ(t)ϕᵀ(t), R(0) = 0,  (22)
Q̇(t) = −νQ(t) − y(t)ϕ(t), Q(0) = 0,  (23)
where V is an auxiliary parameter and V₀ = (1/(βN_i(0))) θ̇₀ + θ₀. Furthermore, (20) and (21) can be rewritten in terms of the parameter estimation error θ̃(t), t ≥ 0, and the auxiliary parameter estimation error Ṽ(t) ≜ V(t) − θ* as
Ṽ̇(t) = −γ R(t)θ̃(t), Ṽ(0) = Ṽ₀, t ≥ 0,  (24)
θ̃̇(t) = −β [θ̃(t) − Ṽ(t)] N_i(t), θ̃(0) = θ̃₀.  (25)
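The momentum-based integral gradient algorithm (20)–(23) can be sketched under the same assumed regressor, with mu chosen larger than 2*gamma/beta as in Theorem 1 below:

```python
# Minimal sketch of the momentum-based integral gradient algorithm
# (20)-(23), integrated by forward Euler; all numerical values are
# illustrative assumptions, with mu > 2*gamma/beta.
import numpy as np

def momentum_integral_gradient(theta_star, gamma=1.0, beta=2.0, mu=1.5,
                               nu=0.5, dt=1e-3, T=30.0):
    n = theta_star.size
    theta = np.zeros(n)
    V = np.zeros(n)
    R = np.zeros((n, n))
    Q = np.zeros(n)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        y = theta_star @ phi
        N_i = 1.0 + mu * np.linalg.norm(R, 2)          # N_i = 1 + mu*||R(t)||
        V = V - dt * gamma * (R @ theta + Q)           # (20)
        theta = theta - dt * beta * (theta - V) * N_i  # (21)
        R = R + dt * (-nu * R + np.outer(phi, phi))    # (22)
        Q = Q + dt * (-nu * Q - y * phi)               # (23)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = momentum_integral_gradient(theta_star)
```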
The following definition and lemmas are needed for the main results of this section.
Definition 1. 
A vector signal ϕ : ℝ → ℝᴺ is persistently excited (PE) if there exist a positive constant ρ and a finite time δ > 0 such that
ρ I_N ≤ ∫_t^{t+δ} ϕ(τ)ϕᵀ(τ) dτ, t ≥ 0.  (26)
Lemma 1. 
Consider the system (22). Then, the following statements hold:
( i )
R(t) ≤ (1/ν) I_N, t ≥ 0.  (27)
( i i ) If ϕ(·) is persistently excited, then
e^{−νδ} ρ I_N ≤ R(t), t ≥ δ.  (28)
Proof. 
To show ( i ), note that it follows from (12) and ϕ(t)ϕᵀ(t) ≤ I_N, t ≥ 0, that
R(t) ≤ ∫₀ᵗ e^{−ν(t−τ)} I_N dτ ≤ (1/ν) I_N, t ≥ 0,
which proves (27). Next, to show ( i i ), note that if ϕ(·) is PE, then
R(t) = ∫₀^{t−δ} e^{−ν(t−τ)} ϕ(τ)ϕᵀ(τ) dτ + ∫_{t−δ}^{t} e^{−ν(t−τ)} ϕ(τ)ϕᵀ(τ) dτ ≥ ∫_{t−δ}^{t} e^{−ν(t−τ)} ϕ(τ)ϕᵀ(τ) dτ ≥ e^{−νδ} ∫_{t−δ}^{t} ϕ(τ)ϕᵀ(τ) dτ ≥ e^{−νδ} ρ I_N, t ≥ δ.  (29)
This proves the lemma. □
Lemma 2 
([3]). Let f : [0, ∞) → ℝⁿ. If f ∈ L₂ ∩ L∞ and ḟ ∈ L∞, then lim_{t→∞} f(t) = 0.
Theorem 1. 
Let μ > 2γ/β, and consider the momentum-based integral gradient algorithm (20)–(23). Then, the following statements hold:
( i ) θ ∈ L∞; θ̇, V̇, and (θ − V) ∈ L∞ ∩ L₂; and lim_{t→∞} θ̇(t) = 0.
( i i ) If ϕ(·) is persistently excited, then θ(t) converges to θ* exponentially as t → ∞ with a convergence rate of a/2, where a ≜ min{e^{−νδ}ργ, 2β}.
Proof. 
To show ( i ), consider the positive-definite function V(Ṽ, θ̃) ≜ ṼᵀṼ + (θ̃ − Ṽ)ᵀ(θ̃ − Ṽ) and note that
V̇(Ṽ, θ̃, t) = 2ṼᵀṼ̇ + 2(θ̃ − Ṽ)ᵀ(θ̃̇ − Ṽ̇)
= −2γṼᵀR(t)θ̃ − 2β‖θ̃ − Ṽ‖²(1 + μ‖R(t)‖) + 2γ(θ̃ − Ṽ)ᵀR(t)θ̃
= −2γθ̃ᵀR(t)θ̃ − 2β‖θ̃ − Ṽ‖²(1 + μ‖R(t)‖) + 4γ(θ̃ − Ṽ)ᵀR^{1/2}(t)R^{1/2}(t)θ̃
≤ −γθ̃ᵀR(t)θ̃ − γ‖R^{1/2}(t)θ̃ − 2R^{1/2}(t)(θ̃ − Ṽ)‖² − 2β‖θ̃ − Ṽ‖²
≤ −γθ̃ᵀR(t)θ̃ − 2β‖θ̃ − Ṽ‖²,  (30)
where the first inequality uses μ > 2γ/β and ‖R^{1/2}(t)(θ̃ − Ṽ)‖² ≤ ‖R(t)‖‖θ̃ − Ṽ‖². Since R(t) ≥ 0, t ≥ 0, (30) implies V̇(Ṽ, θ̃, t) ≤ 0, t ≥ 0. Thus, Ṽ, θ̃, and (θ̃ − Ṽ) ∈ L∞.
Next, integrating (30) over [0, ∞) yields
∫₀^∞ [γ‖R^{1/2}(τ)θ̃(τ)‖² + 2β‖θ̃(τ) − Ṽ(τ)‖²] dτ ≤ V(Ṽ₀, θ̃₀) − lim_{t→∞} V(Ṽ(t), θ̃(t)) < ∞,
and hence, R^{1/2}θ̃ ∈ L₂ and (θ̃ − Ṽ) ∈ L∞ ∩ L₂. Since (27) implies that R ∈ L∞, it follows that R^{1/2}θ̃ ∈ L∞. Now, it follows from (24) and (25) that Ṽ̇, θ̃̇ ∈ L∞ ∩ L₂. Finally, since Ṙ ∈ L∞, it follows that Ṽ̈, θ̃̈ ∈ L∞, and hence, by Lemma 2, lim_{t→∞} θ̇(t) = 0.
Next, to show ( i i ), it follows from (28) and (30) that
V̇(Ṽ, θ̃, t) ≤ −min{e^{−νδ}ργ, 2β} V(Ṽ(t), θ̃(t)), t ≥ δ,
which implies that
V(Ṽ(t), θ̃(t)) ≤ V(Ṽ₀, θ̃₀) e^{−a(t−δ)}, t ≥ δ.  (31)
Now, (31) implies that, for all t ≥ δ,
‖Ṽ(t)‖ ≤ √(V(Ṽ₀, θ̃₀)) e^{−(a/2)(t−δ)}, ‖θ̃(t) − Ṽ(t)‖ ≤ √(V(Ṽ₀, θ̃₀)) e^{−(a/2)(t−δ)},
and hence, using the triangle inequality ‖θ̃‖ ≤ ‖θ̃ − Ṽ‖ + ‖Ṽ‖, it follows that
‖θ̃(t)‖ ≤ 2√(V(Ṽ₀, θ̃₀)) e^{−(a/2)(t−δ)}, t ≥ δ,  (32)
which proves that θ̃(t) converges to 0 exponentially as t → ∞ with a convergence rate of a/2. □
Remark 1. 
Note that, unlike the gradient flow and recursive least squares algorithms, the momentum-based integral gradient algorithm (20)–(23) does not require the assumption ϕ̇(·) ∈ L∞ in order to guarantee that lim_{t→∞} θ̇(t) = 0 [7].
Remark 2. 
Note that a time-varying forgetting factor can also be employed in (22) and (23). In this case, the PE condition can be replaced with a less restrictive excitation condition in which (26) is only required to hold over some finite time interval rather than for all t ≥ 0 [13].
Next, we consider the cost function
J(θ, t) = ∫₀ᵗ { [y(τ) − ϕᵀ(τ)θ(t)]² − [θ(t) − θ(τ)]ᵀ F(τ) [θ(t) − θ(τ)] } dτ + [θ(t) − θ(0)]ᵀ Γ₀⁻¹ [θ(t) − θ(0)],  (33)
where, for all t ≥ 0, F(t) is an N × N nonnegative-definite matrix function called the general forgetting matrix and Γ₀ is an N × N positive-definite matrix. It can be shown that a necessary condition for θ minimizing (33) gives the recursive least squares algorithm ([14])
θ̇(t) = −Γ(t)ϕ(t)e(t), θ(0) = θ₀, t ≥ 0,  (34)
Γ̇(t) = −Γ(t)[ϕ(t)ϕᵀ(t) − F(t)]Γ(t), Γ(0) = Γ₀.  (35)
Note that setting F(t) ≡ 0 and F(t) = νH(t) = νΓ⁻¹(t), where Γ(t) is positive-definite for all t ≥ 0 (see [2]), we recover, respectively, the pure recursive least squares and the recursive least squares with exponential forgetting algorithms discussed in [2].
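For the exponential-forgetting case F(t) = νH(t), equation (35) reduces to Γ̇ = νΓ − ΓϕϕᵀΓ, which can be sketched under the same assumed regressor (initial gain and forgetting factor are illustrative):

```python
# Minimal sketch of recursive least squares (34)-(35) with exponential
# forgetting F(t) = nu*H(t), i.e., Gamma_dot = nu*Gamma - Gamma phi phi^T Gamma.
import numpy as np

def rls_forgetting(theta_star, nu=0.5, dt=1e-3, T=20.0):
    n = theta_star.size
    theta = np.zeros(n)
    Gamma = 10.0 * np.eye(n)                   # Gamma(0) = Gamma_0 > 0
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        e = (theta - theta_star) @ phi
        theta = theta - dt * (Gamma @ phi) * e         # (34)
        Gamma = Gamma + dt * (nu * Gamma
                              - Gamma @ np.outer(phi, phi) @ Gamma)  # (35)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = rls_forgetting(theta_star)
```

The matrix gain Gamma acts as the time-varying learning rate mentioned in the Introduction [5].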
Note that the RLS algorithms (34) and (35) do not involve a gradient flow architecture, and the variational formulation cannot be used to generate a momentum-based recursive least squares (MRLS) algorithm. Despite this fact, higher-order tuner laws can still be incorporated into the RLS architecture to give the MRLS architecture
V̇(t) = −Γ(t)ϕ(t)e(t), V(0) = V₀, t ≥ 0,  (36)
θ̇(t) = −β(θ(t) − V(t))N_r(t), θ(0) = θ₀,  (37)
Γ̇(t) = −Γ(t)[ϕ(t)ϕᵀ(t) − F(t)]Γ(t), Γ(0) = Γ₀,  (38)
where β > 0 and N_r(t) ≜ 1 + μϕᵀ(t)Γ(t)ϕ(t) = 1 + μ‖Γ^{1/2}(t)ϕ(t)‖², μ > 0. Note that (36) and (37) can be rewritten as
Ṽ̇(t) = −Γ(t)ϕ(t)e(t), Ṽ(0) = Ṽ₀, t ≥ 0,  (39)
θ̃̇(t) = −β(θ̃(t) − Ṽ(t))N_r(t), θ̃(0) = θ̃₀.  (40)
Noting that H(t) = Γ⁻¹(t) and using the fact that
d[H(t)Γ(t)]/dt = dI_N/dt = 0 = Ḣ(t)Γ(t) + H(t)Γ̇(t),  (41)
it follows that
Ḣ(t) = −F(t) + ϕ(t)ϕᵀ(t), H(0) = Γ₀⁻¹, t ≥ 0.  (42)
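A sketch of the MRLS architecture (36)–(38) with F(t) = νH(t), under the same assumed regressor and with mu > 2/beta as required by Theorem 2 below:

```python
# Minimal sketch of the momentum-based recursive least squares (MRLS)
# algorithm (36)-(38) with F(t) = nu*H(t) and N_r = 1 + mu*phi^T Gamma phi.
import numpy as np

def mrls(theta_star, beta=2.0, mu=1.5, nu=0.5, dt=1e-3, T=20.0):
    n = theta_star.size
    theta = np.zeros(n)
    V = np.zeros(n)
    Gamma = 10.0 * np.eye(n)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        e = (theta - theta_star) @ phi
        N_r = 1.0 + mu * (phi @ Gamma @ phi)
        V = V - dt * (Gamma @ phi) * e                   # (36)
        theta = theta - dt * beta * (theta - V) * N_r    # (37)
        Gamma = Gamma + dt * (nu * Gamma
                              - Gamma @ np.outer(phi, phi) @ Gamma)  # (38)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = mrls(theta_star)
```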
Lemma 3. 
Consider the system (42) with F(t) = νH(t). Then, the following statements hold:
( i )
H(t) ≤ α₂ I_N, t ≥ 0,  (43)
where α₂ ≜ λ_max(Γ₀⁻¹) + 1/ν.
( i i ) If ϕ(·) is persistently excited, then
α₁ I_N ≤ H(t), t ≥ 0,  (44)
where α₁ ≜ min{e^{−νδ}ρ, e^{−νδ}λ_min(Γ₀⁻¹)}.
Proof. 
To show ( i ), note that it follows from (42) with F(t) = νH(t) that
H(t) = e^{−νt}Γ₀⁻¹ + ∫₀ᵗ e^{−ν(t−τ)} ϕ(τ)ϕᵀ(τ) dτ.  (45)
Now, using ϕ(t)ϕᵀ(t) ≤ I_N, t ≥ 0, yields
H(t) ≤ Γ₀⁻¹ + ∫₀ᵗ e^{−ν(t−τ)} I_N dτ ≤ λ_max(Γ₀⁻¹) I_N + (1/ν) I_N, t ≥ 0,  (46)
which proves (43).
Next, to show ( i i ), note that if ϕ(·) is PE, then
H(t) ≥ e^{−νt}Γ₀⁻¹ ≥ e^{−νδ}λ_min(Γ₀⁻¹) I_N, 0 ≤ t < δ,  (47)
and
H(t) ≥ e^{−νδ} ∫_{t−δ}^{t} ϕ(τ)ϕᵀ(τ) dτ ≥ e^{−νδ}ρ I_N, t ≥ δ.  (48)
Thus,
α₁ I_N ≤ H(t), t ≥ 0.
This proves the lemma. □
Theorem 2. 
Let μ > 2/β, and consider the momentum-based recursive least squares algorithm (36)–(38). Then, e(·) given by (2) satisfies e ∈ L₂. Furthermore, the following statements hold:
( i ) If there exists α₁ > 0 such that (44) holds, then V, θ ∈ L∞. If, in addition, ϕ̇ ∈ L∞, then lim_{t→∞} θ̇(t) = 0 and lim_{t→∞} e(t) = 0.
( i i ) If F(t) = νH(t) and ϕ(·) is persistently excited, then the zero solution (Ṽ(t), θ̃(t)) ≡ (0, 0) to (39) and (40) is exponentially stable.
Proof. 
Consider the function V(Ṽ, θ̃, t) ≜ ṼᵀH(t)Ṽ + (θ̃ − Ṽ)ᵀH(t)(θ̃ − Ṽ) and note that
V̇(Ṽ, θ̃, t) = 2ṼᵀH(t)Ṽ̇ + ṼᵀḢ(t)Ṽ + 2(θ̃ − Ṽ)ᵀH(t)(θ̃̇ − Ṽ̇) + (θ̃ − Ṽ)ᵀḢ(t)(θ̃ − Ṽ)
= −2Ṽᵀϕ(t)e − ṼᵀF(t)Ṽ + [Ṽᵀϕ(t)]² − 2β(θ̃ − Ṽ)ᵀH(t)(θ̃ − Ṽ)N_r(t) + 2(θ̃ − Ṽ)ᵀϕ(t)e − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) + [(θ̃ − Ṽ)ᵀϕ(t)]²
= −ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) − e² + 2(θ̃ − Ṽ)ᵀϕ(t)e + 2[(θ̃ − Ṽ)ᵀϕ(t)]² − 2β(θ̃ − Ṽ)ᵀH(t)(θ̃ − Ṽ)[1 + μ‖Γ^{1/2}(t)ϕ(t)‖²],
where we used H(t)Γ(t) = I_N and Ṽᵀϕ(t) = e − (θ̃ − Ṽ)ᵀϕ(t). Now, by the Cauchy–Schwarz inequality,
|(θ̃ − Ṽ)ᵀϕ(t)| = |(θ̃ − Ṽ)ᵀH^{1/2}(t)Γ^{1/2}(t)ϕ(t)| ≤ ‖H^{1/2}(t)(θ̃ − Ṽ)‖ ‖Γ^{1/2}(t)ϕ(t)‖,
and hence, since βμ > 2,
V̇(Ṽ, θ̃, t) ≤ −ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) − 2β‖H^{1/2}(t)(θ̃ − Ṽ)‖² − [‖H^{1/2}(t)(θ̃ − Ṽ)‖‖Γ^{1/2}(t)ϕ(t)‖ − |e|]² − ‖H^{1/2}(t)(θ̃ − Ṽ)‖²‖Γ^{1/2}(t)ϕ(t)‖² ≤ 0, t ≥ 0.  (49)
Next, integrating (49) over [0, ∞) and noting that e² ≤ 2[‖H^{1/2}(θ̃ − Ṽ)‖‖Γ^{1/2}ϕ‖ − |e|]² + 2‖H^{1/2}(θ̃ − Ṽ)‖²‖Γ^{1/2}ϕ‖² yields
∫₀^∞ e²(τ) dτ ≤ 2[V(Ṽ₀, θ̃₀, 0) − lim_{t→∞} V(Ṽ(t), θ̃(t), t)] < ∞,
and hence, e ∈ L₂.
To show ( i ), note that (49) and (44) imply that Ṽ ∈ L∞ and (θ̃ − Ṽ) ∈ L∞, and hence, V, θ ∈ L∞. In addition, (37) implies that θ̇ ∈ L∞, and for ϕ̇ ∈ L∞, it follows from (2) and Lemma 2 that lim_{t→∞} θ̇(t) = 0 and lim_{t→∞} e(t) = 0.
Finally, to show ( i i ), note that if ϕ(·) is persistently excited, then (44) holds by Lemma 3. Now, using the fact that
‖Ṽ‖² + ‖Ṽ − θ̃‖² = [Ṽᵀ, θ̃ᵀ] P [Ṽᵀ, θ̃ᵀ]ᵀ,  (50)
where P ≜ [ 2I_N, −I_N ; −I_N, I_N ] is a positive-definite matrix with eigenvalues (3 − √5)/2 and (3 + √5)/2 (each of multiplicity N), it follows from the Schur decomposition that
((3 − √5)/2)(‖Ṽ‖² + ‖θ̃‖²) ≤ ‖Ṽ‖² + ‖Ṽ − θ̃‖² ≤ ((3 + √5)/2)(‖Ṽ‖² + ‖θ̃‖²),  (51)
and hence,
((3 − √5)/2) α₁ (‖Ṽ‖² + ‖θ̃‖²) ≤ V(Ṽ, θ̃, t) ≤ ((3 + √5)/2) α₂ (‖Ṽ‖² + ‖θ̃‖²).  (52)
In this case, (49) gives
V̇(Ṽ, θ̃, t) ≤ −ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) ≤ −α₁ν(‖Ṽ‖² + ‖Ṽ − θ̃‖²) ≤ −((3 − √5)/2) α₁ν (‖Ṽ‖² + ‖θ̃‖²),  (53)
which shows that the zero solution ( V ˜ ( t ) , θ ˜ ( t ) ) ( 0 , 0 ) to (39) and (40) is exponentially stable [15] ([Theorem 4.6]). □
Remark 3. 
Note that setting
F(t) = νH(t) if H(t) ≥ α₁ I_N, and F(t) = 0 otherwise,  (54)
guarantees that (44) holds [14].
The constraints on the parameter μ in Theorems 1 and 2 limit the amount of momentum that can be applied, thereby limiting the potential advantage gained by using a momentum-based algorithm. In the context of static optimization, Lyapunov-like functions are typically constructed as the sum of two terms; one term representing the norm of the parameter estimation error and the other term corresponding to the loss cost function [12]. This generalized structure allows for adjustments to the parameter estimate θ that may not decrease ‖Ṽ‖ or ‖θ̃‖ but can reduce the loss cost function L(θ̃, t), as shown in Figure 1.
In order to incorporate (10) in our Lyapunov function, we introduce the momentum-based composite gradient algorithm
V̇(t) = −Γ(t)ϕ(t)e(t) − (β/2)Γ(t)N_r(t)[R(t)θ(t) + Q(t)], V(0) = V₀, t ≥ 0,  (55)
θ̇(t) = −β[θ(t) − V(t)]N_r(t), θ(0) = θ₀,  (56)
Γ̇(t) = −Γ(t)[ϕ(t)ϕᵀ(t) − F(t)]Γ(t), Γ(0) = Γ₀,  (57)
where R ( · ) and Q ( · ) are given by (12) and (13). Note that this composite update law includes an integral gradient term as well as a recursive least squares term in (55).
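The composite architecture (55)–(57) can be sketched by combining the pieces of the earlier algorithms; the regressor, gains, and initial conditions below are again illustrative assumptions:

```python
# Minimal sketch of the momentum-based composite gradient algorithm
# (55)-(57), with R and Q propagated as in (12)-(13) and F(t) = nu*H(t).
import numpy as np

def momentum_composite(theta_star, beta=2.0, mu=1.5, nu=0.5, dt=1e-3, T=20.0):
    n = theta_star.size
    theta = np.zeros(n)
    V = np.zeros(n)
    Gamma = 10.0 * np.eye(n)
    R = np.zeros((n, n))
    Q = np.zeros(n)
    for k in range(int(T / dt)):
        t = k * dt
        phi = np.array([np.sin(t), np.cos(2.0 * t)])
        phi = phi / (1.0 + np.linalg.norm(phi))
        y = theta_star @ phi
        e = theta @ phi - y
        N_r = 1.0 + mu * (phi @ Gamma @ phi)
        V = V - dt * ((Gamma @ phi) * e
                      + 0.5 * beta * N_r * (Gamma @ (R @ theta + Q)))  # (55)
        theta = theta - dt * beta * (theta - V) * N_r                  # (56)
        Gamma = Gamma + dt * (nu * Gamma
                              - Gamma @ np.outer(phi, phi) @ Gamma)    # (57)
        R = R + dt * (-nu * R + np.outer(phi, phi))
        Q = Q + dt * (-nu * Q - y * phi)
    return theta

theta_star = np.array([2.0, -1.0])
theta_hat = momentum_composite(theta_star)
```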
Theorem 3. 
Let μ > 2/β, and consider the momentum-based composite gradient algorithm (55)–(57). Then, the following statements hold:
( i ) If there exists α₁ > 0 such that (44) holds, then θ ∈ L∞; θ̇, V̇, and (θ − V) ∈ L∞ ∩ L₂; and lim_{t→∞} θ̇(t) = 0.
( i i ) If F(t) = νH(t) and ϕ(·) is persistently excited, then the zero solution (Ṽ(t), θ̃(t)) ≡ (0, 0) to (55)–(57) is exponentially stable.
Proof. 
This proof is similar to the proof of Theorem 2. Namely, consider the positive-definite function
V(Ṽ, θ̃, t) ≜ ṼᵀH(t)Ṽ + (θ̃ − Ṽ)ᵀH(t)(θ̃ − Ṽ) + 2L(θ̃, t)  (58)
satisfying V(0, 0, t) = 0, t ≥ 0, and note that
L̇(θ̃, t) = (∂L/∂θ̃)θ̃̇ + ∂L/∂t = θ̃ᵀR(t)θ̃̇ − νL + ½e(t)².  (59)
Now, using (49) and (59), note that
V̇(Ṽ, θ̃, t) ≤ −e² − ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) − βN_r(t)ṼᵀR(t)θ̃ + βN_r(t)(θ̃ − Ṽ)ᵀR(t)θ̃ − 2βN_r(t)θ̃ᵀR(t)(θ̃ − Ṽ) − 2νL + e²
= −ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) − βN_r(t)θ̃ᵀR(t)θ̃ − 2νL
≤ −ṼᵀF(t)Ṽ − (θ̃ − Ṽ)ᵀF(t)(θ̃ − Ṽ) − βθ̃ᵀR(t)θ̃ − 2νL ≤ 0.  (60)
( i ) now follows using similar arguments as in the proof of Theorem 2.
To show ( i i ), note that if ϕ(·) is persistently excited, then (44) and (51) hold, and hence, using the fact that 2L(θ̃, t) ≤ (1/ν)‖θ̃‖², we have
((3 − √5)/2) α₁ (‖Ṽ‖² + ‖θ̃‖²) ≤ V(Ṽ, θ̃, t) ≤ ((3 + √5)/2) α₂ (‖Ṽ‖² + ‖θ̃‖²) + (1/ν)‖θ̃‖² ≤ [((3 + √5)/2) α₂ + 1/ν] (‖Ṽ‖² + ‖θ̃‖²).  (61)
Now, using (49) and (60), it follows that the zero solution (Ṽ(t), θ̃(t)) ≡ (0, 0) to (55) and (56) is exponentially stable [15] ([Theorem 4.6]). □

3. System Parameter Identification

In this section, we address the problem of parameter identification using the first- and higher-order tuner algorithms developed in Section 2. First, the following definition is needed.
Definition 2 
([2]). A signal u : [0, ∞) → ℝ is stationary and sufficiently rich of order n if the following statements hold:
( i ) The limit
lim_{T→∞} (1/T) ∫_{t₀}^{t₀+T} u(τ)u(t + τ) dτ  (62)
exists uniformly in t₀.
( i i ) The support of the spectral measure S_u(ω), ω ∈ ℝ, of u contains at least n points.
Consider the stable single-input, single-output (SISO) plant, with output y ( · ) and input u ( · ) being the only available signals for measurement, given by
ẋ(t) = Ax(t) + Bu(t), x(0) = x₀, t ≥ 0,  (63)
y(t) = Cᵀx(t),  (64)
where x(t) ∈ ℝⁿ, t ≥ 0, A ∈ ℝⁿˣⁿ, and B, C ∈ ℝⁿ. Since there are an infinite number of triples (A, B, C) yielding the same input–output measurements, we cannot uniquely determine the n² + 2n coefficients in (A, B, C). However, we can determine the n + m + 1 parameters corresponding to the coefficients of the stable and strictly proper rational transfer function G ∈ R(s) given by
G(s) = Cᵀ(sI_n − A)⁻¹B = ŷ(s)/û(s) = (b_m sᵐ + b_{m−1}s^{m−1} + ⋯ + b₀)/(sⁿ + a_{n−1}s^{n−1} + ⋯ + a₀),  (65)
where, for s ∈ ℂ, ŷ(s) is the system output, û(s) is the system input, and the coefficients a_i, i = 0, …, n − 1, and b_j, j = 0, …, m, where m ≤ n − 1, are unknown system parameters. The goal of the system parameter identification problem is to identify the system parameter vector
θ* = [b_m, b_{m−1}, …, b₀, a_{n−1}, …, a₀]ᵀ  (66)
containing the unknown coefficients of the plant transfer function, which satisfies
sⁿŷ(s) = θ*ᵀŶ(s),  (67)
where Ŷ(s) ≜ [α_mᵀ(s)û(s), −α_{n−1}ᵀ(s)ŷ(s)]ᵀ and α_i(s) ≜ [sⁱ, s^{i−1}, …, 1]ᵀ.
Note that (67) has the form of (1) in the frequency domain; however, in most applications the only signals available for measurement are the input u(t), t ≥ 0, and the output y(t), t ≥ 0, and not their derivatives. To address this, we filter (67) through the stable filter 1/Λ(s), where Λ(s) is an arbitrary monic Hurwitz polynomial of degree n, to obtain, for y(0) = 0,
ẑ(s) = θ*ᵀϕ̂(s),  (68)
where ẑ(s) ≜ sⁿŷ(s)/Λ(s) and ϕ̂(s) ≜ [α_mᵀ(s)û(s)/Λ(s), −α_{n−1}ᵀ(s)ŷ(s)/Λ(s)]ᵀ. The time signals ϕ(t), t ≥ 0, and z(t), t ≥ 0, can be generated by the state equations ([2])
ϕ̇₀(t) = Λ_c ϕ₀(t) + b_c u(t), ϕ₀(0) = 0, t ≥ 0,  (69)
ϕ₁(t) = P₀ ϕ₀(t),  (70)
ϕ̇₂(t) = Λ_c ϕ₂(t) − b_c y(t), ϕ₂(0) = 0,  (71)
ϕ(t) = [ϕ₁ᵀ(t), ϕ₂ᵀ(t)]ᵀ,  (72)
z(t) = y(t) + λᵀϕ₂(t),  (73)
where, for t ≥ 0, ϕ₀(t) ∈ ℝⁿ, ϕ₁(t) ∈ ℝ^{m+1}, ϕ₂(t) ∈ ℝⁿ, and
Λ_c = [ −λᵀ ; I_{n−1} | 0_{(n−1)×1} ], b_c = [1, 0, …, 0]ᵀ,  (74)
P₀ = [0_{(m+1)×(n−m−1)} | I_{m+1}] ∈ ℝ^{(m+1)×n}, λ = [λ_{n−1}, λ_{n−2}, …, λ₀]ᵀ,  (75)
and where det(sI_n − Λ_c) = Λ(s) = sⁿ + λᵀα_{n−1}(s).
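To make the identification setup concrete, the sketch below uses an assumed first-order example (all plant and filter values are illustrative): for G(s) = b₀/(s + a₀), we have n = 1 and m = 0, so ϕ = [û/Λ, −ŷ/Λ]ᵀ, z = y + λ₀ϕ₂, and the gradient flow estimator is driven by z = θᵀϕ:

```python
# Minimal sketch (assumed first-order example): identify theta* = [b0, a0]
# for the plant y' = -a0*y + b0*u from u and y only, using the filter
# Lambda(s) = s + lam and the gradient flow estimator on z(t).
import numpy as np

a0, b0 = 1.0, 3.0              # "unknown" plant parameters (n = 1, m = 0)
lam = 2.0                      # Lambda(s) = s + lam, Hurwitz
gamma, dt, T = 5.0, 1e-3, 40.0

y, phi1, phi2 = 0.0, 0.0, 0.0  # y(0) = 0, as required for (68)
theta = np.zeros(2)
for k in range(int(T / dt)):
    t = k * dt
    u = np.sin(t)              # sufficiently rich of order n + m + 1 = 2
    phi = np.array([phi1, phi2])
    z = y + lam * phi2                       # (73)
    e = theta @ phi - z                      # prediction error
    theta = theta - dt * gamma * phi * e     # gradient flow update
    y = y + dt * (-a0 * y + b0 * u)          # plant
    phi1 = phi1 + dt * (-lam * phi1 + u)     # (69)-(70)
    phi2 = phi2 + dt * (-lam * phi2 - y)     # (71)

theta_star = np.array([b0, a0])
```

Any of the momentum-based update laws of Section 2 could be substituted for the gradient flow step above without changing the filter construction.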
Theorem 4. 
Assume that G ∈ R(s) given by (65) has no pole-zero cancellations and y(0) = 0. If u is stationary and sufficiently rich of order n + m + 1, then the adaptive laws (20)–(23), (36)–(38), and (55)–(57) guarantee that θ(t) converges to θ* as t → ∞.
Proof. 
The proof follows from Theorems 1, 2, and 3 by noting that if u ( · ) is stationary and sufficiently rich of order n + m + 1 and G has no pole-zero cancellations, then ϕ ( · ) is PE [2] ([Thm. 5.2.4]). □
Remark 4. 
Note that if y(0) ≠ 0, then (68) would involve an additional bias error term. As shown in [2], this term converges to the origin exponentially fast, and hence, Theorem 4 remains valid.

4. Momentum-Based Recursive Least Squares and Model Reference Adaptive Control

Recursive least squares type algorithms were first introduced in adaptive control in [16] and extended in [2,17,18]. In this section, we show how the MRLS algorithm proposed in Section 2 can be extended to the problem of model reference adaptive control for systems with relative degree one.
Consider the linear SISO system given by
ŷ_p(s) = G(s)û_p(s),  (76)
where G(s) = k_p N_p(s)/D_p(s) ∈ R(s), N_p(s), D_p(s) ∈ ℝ[s] are unknown polynomials, k_p is an unknown constant, û_p(s) is the control system input, and ŷ_p(s) is the system output. As in [18], we make the following assumptions.
( i ) G ∈ R(s) is minimum phase and has relative degree one.
( i i ) The degree of D p ( s ) is n.
( i i i ) The sign of k p is known.
The control objective is to design an appropriate control law u p ( t ) , t 0 , such that all the signals of the closed-loop system are bounded and y p ( t ) , t 0 , tracks the output y r ( t ) , t 0 , of a reference model given by
ŷ_r(s) = M(s)r̂(s),  (77)
where M(s) ≜ k_r/(s + a_r) ∈ R(s), a_r > 0 and k_r are known constants, and r̂(s) is the Laplace transform of a bounded piecewise continuous reference signal r(t), t ≥ 0.
To address this problem, consider the filter system ([2])
v̇₁(t) = F_c v₁(t) + g_c u_p(t), v₁(0) = v₁₀, t ≥ 0,  (78)
v̇₂(t) = F_c v₂(t) + g_c y_p(t), v₂(0) = v₂₀,  (79)
where, for t ≥ 0, v₁(t) ∈ ℝ^{n−1}, v₂(t) ∈ ℝ^{n−1},
F_c = [ −λ̂ᵀ ; I_{n−2} | 0_{(n−2)×1} ] ∈ ℝ^{(n−1)×(n−1)}, λ̂ = [λ_{n−2}, λ_{n−3}, …, λ₀]ᵀ,  (80)
g_c = [1, 0, …, 0]ᵀ ∈ ℝ^{n−1}, and det(sI_{n−1} − F_c) = s^{n−1} + λ̂ᵀα_{n−2}(s) = F(s), where F(s) ∈ ℝ[s] is an arbitrary monic Hurwitz polynomial of degree n − 1. Here, we use v₁(t), t ≥ 0, and v₂(t), t ≥ 0, to form the regressor vector
ϕ(t) = [v₁ᵀ(t), v₂ᵀ(t), y_p(t), r(t)]ᵀ ∈ ℝ²ⁿ.  (81)
Note that the existence of a constant parameter vector θ* = [θ₁*ᵀ, θ₂*ᵀ, θ₃*, θ₄*]ᵀ ∈ ℝ²ⁿ such that the transfer function of the SISO system (76) with û_p(s) = θ*ᵀϕ̂(s), where ϕ̂(s) = [α_{n−2}ᵀ(s)û_p(s)/F(s), α_{n−2}ᵀ(s)ŷ_p(s)/F(s), ŷ_p(s), r̂(s)]ᵀ, matches the reference model transfer function M(s) ∈ R(s) is guaranteed by the choice of M(s) and Assumptions ( i )–( i i i ).
Next, consider the control law
u ^ p ( s ) = L ( s ) θ ^ T ( s ) ξ ^ ( s ) ,
where θ ^ ( s ) C 2 n , L ( s ) s + λ 1 , λ 1 > 0 , and ξ ^ ( s ) L 1 ( s ) ϕ ^ ( s ) C 2 n , and note that the tracking error e ^ 1 ( s ) y ^ p ( s ) y ^ r ( s ) satisfies
e ^ 1 ( s ) = M ( s ) L ( s ) k * [ u ^ p ( s ) θ * T ϕ ^ ( s ) ] = M ( s ) L ( s ) k * θ ˜ ^ T ( s ) ξ ^ ( s ) ,
where k * = k p k r . Note that (83) has a nonminimal state-space realization given by
e ˙ 2 ( t ) = A m e 2 ( t ) + B m k * θ ˜ T ( t ) ξ ( t ) , e 2 ( 0 ) = e 20 , t 0 ,
e 1 ( t ) = C m e ( t ) + k r k * θ ˜ T ( t ) ξ ( t ) ,
where e 2 ( t ) R 3 n 2 , A m R ( 3 n 2 ) × ( 3 n 2 ) , B m = R ( 3 n 2 ) × 1 , and C m = R 1 × ( 3 n 2 ) . Furthermore, note that
$M(s)L(s) = \frac{k_r(s + \lambda_1)}{s + a_r} = \frac{k_r(\lambda_1 - a_r)}{s + a_r} + k_r, \qquad (86)$
where $\lambda_1 > a_r$, and $\frac{k_r(\lambda_1 - a_r)}{s + a_r}$ is a strictly positive real transfer function. In this case, it follows from the Meyer–Kalman–Yakubovich lemma [2] (Lem. 3.5.4) that there exist $(3n-2)\times(3n-2)$ positive-definite matrices $P > 0$ and $Q > 0$ such that
$A_m^T P + P A_m = -2Q, \qquad (87)$
$P B_m = C_m^T. \qquad (88)$
Since for every nonsingular matrix $S \in \mathbb{R}^{(3n-2)\times(3n-2)}$ and constant $\alpha > 0$ the realizations $(A, B, C)$ and $(SAS^{-1}, \alpha SB, \alpha^{-1}CS^{-1})$ are equivalent, choosing $S = Q^{1/2}$ and $\alpha = \|C_m\|$ we can ensure that $(A_m, B_m, C_m)$ is a realization that satisfies (87) and (88) with $Q = I_{3n-2}$ and with $C_m$ normalized so that $\|C_m\| = 1$.
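The partial-fraction split of $M(s)L(s)$ in (86) can be spot-checked numerically at a few complex frequencies; the values of $k_r$, $a_r$, and $\lambda_1$ below are illustrative choices, not taken from the paper:

```python
# Numerical spot-check of the decomposition
#   M(s)L(s) = k_r(s + lam1)/(s + a_r) = k_r + k_r(lam1 - a_r)/(s + a_r).
k_r, a_r, lam1 = 5.0, 0.3, 2.0        # illustrative constants with lam1 > a_r
for s in (0.5j, 3.0j, -1.0 + 2.0j):   # sample points in the complex plane
    lhs = k_r * (s + lam1) / (s + a_r)
    rhs = k_r + k_r * (lam1 - a_r) / (s + a_r)
```

The identity holds because $k_r(s + \lambda_1) = k_r(s + a_r) + k_r(\lambda_1 - a_r)$.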
Next, consider the recursive least-squares algorithm given by
$\dot{\theta}(t) = -\alpha\,\mathrm{sgn}(k^*)\Gamma(t)\xi(t)e_1(t), \quad \theta(0) = \theta_0, \quad t \geq 0, \qquad (89)$
$\dot{\Gamma}(t) = \nu\Gamma(t) - \Gamma(t)\xi(t)\xi^T(t)\Gamma(t), \quad \Gamma(0) = \Gamma_0, \qquad (90)$
where $\mathrm{sgn}(\sigma) \triangleq \sigma/|\sigma|$, $\sigma \neq 0$, $\mathrm{sgn}(0) \triangleq 0$, $\alpha > 0$, and $\nu \geq 0$. Note that for $\nu = 0$, (89) and (90) recover the recursive least-squares algorithm of [19].
Here, we modify the recursive least-squares algorithm (89) and (90) to construct our MRLS algorithm as
$\dot{V}(t) = -\gamma\,\mathrm{sgn}(k^*)\Gamma(t)\xi(t)e_1(t), \quad V(0) = V_0, \quad t \geq 0, \qquad (91)$
$\dot{\theta}(t) = -\beta(\theta(t) - V(t))N_m(t), \quad \theta(0) = \theta_0, \qquad (92)$
$\dot{\Gamma}(t) = \nu\Gamma(t) - \Gamma(t)\xi(t)\xi^T(t)\Gamma(t), \quad \Gamma(0) = \Gamma_0, \qquad (93)$
where $\beta > 0$, $\gamma > 0$, $\alpha > 0$, $\nu \geq 0$,
$N_m(t) \triangleq 1 + \mu\,\xi^T(t)\Gamma(t)\xi(t) = 1 + \mu\|\Gamma^{1/2}(t)\xi(t)\|^2,$
and $\mu > 0$. Note that (91) and (92) can be rewritten in terms of the error parameters $\tilde{V}(t)$, $t \geq 0$, and $\tilde{\theta}(t)$, $t \geq 0$, as
$\dot{\tilde{V}}(t) = -\gamma\,\mathrm{sgn}(k^*)\Gamma(t)\xi(t)e_1(t), \quad \tilde{V}(0) = V_0 - V^*, \quad t \geq 0, \qquad (94)$
$\dot{\tilde{\theta}}(t) = -\beta(\tilde{\theta}(t) - \tilde{V}(t))N_m(t), \quad \tilde{\theta}(0) = \theta_0 - \theta^*. \qquad (95)$
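For simulation purposes, the MRLS laws (91)–(93) can be discretized with an explicit Euler step, as in the following sketch. The step size `dt` and the sign conventions (negative-gradient updates, the usual choice for Lyapunov-based adaptive laws) are our modeling choices; the theory itself is stated in continuous time:

```python
import numpy as np

def mrls_step(theta, V, Gamma, xi, e1, sgn_kstar,
              gamma=1.0, beta=1.0, mu=5.0, nu=0.3, dt=1e-3):
    """One explicit-Euler step of the MRLS update laws (91)-(93).

    Default gains mirror the numerical example of Section 5.2; dt is a
    discretization choice for simulation only.
    """
    Nm = 1.0 + mu * float(xi @ Gamma @ xi)                  # normalization
    V_next = V - dt * gamma * sgn_kstar * (Gamma @ xi) * e1
    theta_next = theta - dt * beta * (theta - V) * Nm       # momentum pull
    Gamma_next = Gamma + dt * (nu * Gamma
                               - Gamma @ np.outer(xi, xi) @ Gamma)
    return theta_next, V_next, Gamma_next

# With zero regressor and zero tracking error, theta is simply pulled
# toward V while Gamma grows at the forgetting rate nu.
theta = np.array([1.0, 0.0])
V = np.array([0.0, 0.0])
Gamma = np.eye(2)
theta1, V1, Gamma1 = mrls_step(theta, V, Gamma,
                               xi=np.zeros(2), e1=0.0, sgn_kstar=1.0)
```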
Theorem 5. 
Consider the SISO system (76) with the control law (82) and the MRLS algorithm given by (91)–(93) with $\alpha > \frac{1}{|k_r||k^*|}$ and $\mu > \frac{1}{\beta}\left(1 + \frac{2\alpha|k^*|}{\zeta}\right)$, where $\zeta \triangleq \min\{1, \frac{1}{2k_r}\}$. Then $e_2 \in \mathcal{L}_2 \cap \mathcal{L}_\infty$ and $\tilde{\theta}^T\xi \in \mathcal{L}_2$. If, in addition, $\nu = 0$, then $\tilde{\theta}, \tilde{V} \in \mathcal{L}_\infty$ and $\lim_{t\to\infty} e_1(t) = 0$.
Proof. 
Consider the function
$V(\tilde{\theta}, \tilde{V}, e_2, t) \triangleq \alpha^{-1}|k^*|\tilde{V}^T H(t)\tilde{V} + \alpha^{-1}|k^*|(\tilde{\theta} - \tilde{V})^T H(t)(\tilde{\theta} - \tilde{V}) + e_2^T P e_2, \qquad (96)$
where $H(t) \triangleq \Gamma^{-1}(t)$ and $P > 0$ satisfies (87) and (88), and note that
$\dot{V}(\tilde{\theta}, \tilde{V}, e_2, t) = 2e_2^T P\dot{e}_2 + 2|k^*|\alpha^{-1}\tilde{V}^T H(t)\dot{\tilde{V}} + |k^*|\alpha^{-1}\tilde{V}^T\dot{H}(t)\tilde{V} + |k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T\dot{H}(t)(\tilde{\theta}-\tilde{V}) + 2|k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T H(t)(\dot{\tilde{\theta}}-\dot{\tilde{V}})$
$= -2\|e_2\|^2 + 2e_2^T P B_m k^*\tilde{\theta}^T\xi(t) - 2k^*\tilde{V}^T\xi(t)e_1 + |k^*|\alpha^{-1}\tilde{V}^T\dot{H}(t)\tilde{V} + |k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T\dot{H}(t)(\tilde{\theta}-\tilde{V}) - 2\beta|k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T H(t)(\tilde{\theta}-\tilde{V})N_m(t) + 2k^*(\tilde{\theta}-\tilde{V})^T\xi(t)e_1$
$= -2\|e_2\|^2 + 2(C_m e_2)k^*\tilde{\theta}^T\xi(t) - 2k^*\tilde{V}^T\xi(t)e_1 + |k^*|\alpha^{-1}\tilde{V}^T\dot{H}(t)\tilde{V} + |k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T\dot{H}(t)(\tilde{\theta}-\tilde{V}) - 2\beta|k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T H(t)(\tilde{\theta}-\tilde{V})N_m(t) + 2k^*(\tilde{\theta}-\tilde{V})^T\xi(t)e_1$
$= -2\|e_2\|^2 + 2(e_1 - k_r k^*\tilde{\theta}^T\xi(t))k^*\tilde{\theta}^T\xi(t) - 2k^*\tilde{V}^T\xi(t)e_1 + |k^*|\alpha^{-1}\tilde{V}^T[-F(t) + \xi(t)\xi^T(t)]\tilde{V} + |k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T[-F(t) + \xi(t)\xi^T(t)](\tilde{\theta}-\tilde{V}) - 2\beta|k^*|\alpha^{-1}(\tilde{\theta}-\tilde{V})^T H(t)(\tilde{\theta}-\tilde{V})N_m(t) + 2k^*(\tilde{\theta}-\tilde{V})^T\xi(t)e_1$
$= -2\|e_2\|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)\tilde{V}\|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 - 2k_r(k^*\tilde{\theta}^T\xi(t))^2 + 4e_1 k^*(\tilde{\theta}-\tilde{V})^T\xi(t) + |k^*|\alpha^{-1}|\tilde{V}^T\xi(t)|^2 + |k^*|\alpha^{-1}|(\tilde{\theta}-\tilde{V})^T\xi(t)|^2 - 2\beta|k^*|\alpha^{-1}\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 N_m(t), \quad t \geq 0. \qquad (97)$
Next, using the Cauchy–Schwarz and Young inequalities we obtain
$e_1^2 = e_2^T C_m^T C_m e_2 + (k_r k^*\tilde{\theta}^T\xi(t))^2 + 2k_r k^*\tilde{\theta}^T\xi(t)\,e_2^T C_m^T$
$\leq \|C_m\|^2\|e_2\|^2 + (k_r k^*\tilde{\theta}^T\xi(t))^2 + 2|k_r k^*\tilde{\theta}^T\xi(t)|\,\|e_2\|\,\|C_m\|$
$\leq 2\|C_m\|^2\|e_2\|^2 + 2(k_r k^*\tilde{\theta}^T\xi(t))^2$
$= 2\|e_2\|^2 + 2(k_r k^*\tilde{\theta}^T\xi(t))^2. \qquad (98)$
Now, using (98) along with the triangle inequality $|\tilde{V}^T\xi(t)|^2 \leq |\tilde{\theta}^T\xi(t)|^2 + |(\tilde{\theta}-\tilde{V})^T\xi(t)|^2$, $t \geq 0$, it follows from (97) that
$\dot{V}(\tilde{\theta}, \tilde{V}, e_2, t) \leq -2(1-\zeta)\|e_2\|^2 - \zeta e_1^2 + 2\zeta(k_r k^*\tilde{\theta}^T\xi(t))^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)\tilde{V}\|^2 - 2k_r(k^*\tilde{\theta}^T\xi(t))^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 + 4|e_1|\,|k^*|\,|(\tilde{\theta}-\tilde{V})^T\xi(t)| + |k^*|\alpha^{-1}[|\tilde{\theta}^T\xi(t)|^2 + |(\tilde{\theta}-\tilde{V})^T\xi(t)|^2] + |k^*|\alpha^{-1}|(\tilde{\theta}-\tilde{V})^T\xi(t)|^2 - 2\beta|k^*|\alpha^{-1}\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 N_m(t)$
$\leq -2(1-\zeta)\|e_2\|^2 + |k^*|(-2k_r|k^*| + 2\zeta k_r^2|k^*| + \alpha^{-1})|\tilde{\theta}^T\xi(t)|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)\tilde{V}\|^2 - \left[\sqrt{\zeta}|e_1| - \frac{2|k^*|}{\sqrt{\zeta}}|(\tilde{\theta}-\tilde{V})^T\xi(t)|\right]^2 + \left(2|k^*|\alpha^{-1} + \frac{4|k^*|^2}{\zeta}\right)|(\tilde{\theta}-\tilde{V})^T\xi(t)|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 - 2\beta|k^*|\alpha^{-1}\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 N_m(t)$
$\leq -2(1-\zeta)\|e_2\|^2 + |k^*|(-2k_r|k^*| + 2\zeta k_r^2|k^*| + \alpha^{-1})|\tilde{\theta}^T\xi(t)|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)\tilde{V}\|^2 + \left(2|k^*|\alpha^{-1} + \frac{4|k^*|^2}{\zeta} - 2\beta\mu|k^*|\alpha^{-1}\right)\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2\|\Gamma^{1/2}(t)\xi(t)\|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 - 2\beta|k^*|\alpha^{-1}\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2$
$\leq -2(1-\zeta)\|e_2\|^2 + |k^*|(-2k_r|k^*| + 2\zeta k_r^2|k^*| + \alpha^{-1})|\tilde{\theta}^T\xi(t)|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)\tilde{V}\|^2 - |k^*|\alpha^{-1}\|F^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2 - 2\beta|k^*|\alpha^{-1}\|H^{1/2}(t)(\tilde{\theta}-\tilde{V})\|^2$
$\leq 0, \quad t \geq 0, \qquad (99)$
which shows that $V(\tilde{\theta}, \tilde{V}, e_2, t) \in \mathcal{L}_\infty$. Hence, $\tilde{V}^T H(t)\tilde{V} \in \mathcal{L}_\infty$, $(\tilde{\theta}-\tilde{V})^T H(t)(\tilde{\theta}-\tilde{V}) \in \mathcal{L}_\infty$, $t \geq 0$, and $e_2^T P e_2 \in \mathcal{L}_\infty$. Next, integrating $\dot{V}$ over $[0, \infty)$ yields
$\int_0^\infty \|e_2(\tau)\|^2\,d\tau \leq V(\tilde{V}_0, \tilde{\theta}_0, e_{20}, 0) - \lim_{t\to\infty} V(\tilde{V}(t), \tilde{\theta}(t), e_2(t), t) < \infty, \qquad (100)$
and hence, $e_2 \in \mathcal{L}_2$. Similarly, $\tilde{\theta}^T\xi \in \mathcal{L}_2$ and, since $P > 0$, $e_2 \in \mathcal{L}_\infty$.
Finally, if $\nu = 0$, then $H(t)$ satisfies
$\dot{H}(t) = \xi(t)\xi^T(t), \quad H(0) = \Gamma_0^{-1}, \quad t \geq 0,$
or, equivalently,
$H(t) = \Gamma_0^{-1} + \int_0^t \xi(\tau)\xi^T(\tau)\,d\tau.$
Hence, $H(t) \geq \Gamma_0^{-1} > 0$, $t \geq 0$, and thus, $\tilde{V}, \tilde{\theta} \in \mathcal{L}_\infty$. Now, since $e_1, \tilde{\theta} \in \mathcal{L}_\infty$, (85) implies that $\dot{e}_1 \in \mathcal{L}_\infty$ and, by Lemma 2, $\lim_{t\to\infty} e_1(t) = 0$. □
Remark 5. 
Note that unlike many MRAC schemes (e.g., [20]), (91)–(93) does not necessitate a projection operator to guarantee boundedness of the tracking error. However, unlike [19], wherein only a lower bound on $|k^*|$ is necessary, we require knowledge of both a lower and an upper bound for $|k^*|$.
Remark 6. 
It is important to note that there exists an alternative MRAC framework in the literature, known as normalized adaptive laws [2], whose design is not based on the tracking error $e_1(\cdot)$ but rather on a normalized estimation error of a particular parametrization of the plant. For this framework, the momentum-based integral gradient algorithm (20)–(23), the momentum-based recursive least-squares algorithm (36)–(38), and the momentum-based composite gradient algorithm (55)–(57) can be used directly for strictly proper plants without a relative degree restriction. In this case, the parametrization of the ideal controller is given by $\hat{u}(s) = \theta^{*T}\hat{\phi}(s)$, where $\theta^*$ satisfies $\hat{z}(s) = \theta^{*T}\hat{\phi}(s)$, $\hat{z}(s) = M(s)\hat{u}_p(s)$, and $\hat{\phi}(s) = [M(s)\hat{v}_1^T(s), \hat{v}_2^T(s), M(s)\hat{y}_p(s), \hat{y}_p(s)]^T$.

5. Illustrative Numerical Examples

5.1. System Parameter Identification

Consider the third-order transfer function representing a servo control system for the pitch control of an aircraft given by
$\hat{y}(s) = \frac{b_1 s + b_0}{s^3 + a_2 s^2 + a_1 s + a_0}\hat{u}(s),$
where $\hat{y}(s) \in \mathbb{C}$ is the pitch angle, $\hat{u}(s) \in \mathbb{C}$ is the system input, and $b_1, a_2, a_1, a_0 > 0$ and $b_0 \neq 0$ are the unknown plant parameters. Note that (68) gives $\hat{z}(s) = \frac{s^3}{\Lambda(s)}\hat{y}(s)$ with $\theta^* = [b_1, b_0, a_2, a_1, a_0]^T$ and $\hat{\phi}(s) = \left[\frac{[s, 1]}{\Lambda(s)}\hat{u}(s), -\frac{[s^2, s, 1]}{\Lambda(s)}\hat{y}(s)\right]^T$.
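The linear parametrization can be verified in the frequency domain: evaluating the plant at a test frequency, $\hat{z}(s)$ must agree with $\theta^{*T}\hat{\phi}(s)$. The sketch below does this for the example's parameter values, carrying the minus sign in the output portion of $\hat{\phi}$, as the identity $s^3\hat{y} = (b_1 s + b_0)\hat{u} - (a_2 s^2 + a_1 s + a_0)\hat{y}$ requires; the test frequency is an arbitrary choice of ours:

```python
import numpy as np

# Plant parameters from the pitch-control example.
a0, a1, a2, b0, b1 = 0.1774, 2.072, 0.739, 0.1774, 1.151
theta_star = np.array([b1, b0, a2, a1, a0])

s = 2.0j                         # arbitrary test frequency on the jw-axis
Lam = (s + 1.0) ** 3             # Lambda(s) = (s + 1)^3
u = 1.0 + 0.0j                   # unit input
y = (b1 * s + b0) / (s**3 + a2 * s**2 + a1 * s + a0) * u  # plant output

z = s**3 / Lam * y               # filtered "output" z(s) = s^3/Lambda(s) y(s)
phi = np.array([s / Lam * u, 1 / Lam * u,                 # input part
                -s**2 / Lam * y, -s / Lam * y, -1 / Lam * y])  # output part
```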
Let $a_0 = 0.1774$, $a_1 = 2.072$, $a_2 = 0.739$, $b_0 = 0.1774$, $b_1 = 1.151$, $u(t) = U[\cos(4t) + \cos(2t) + \sin(4t) + \sin(t)]$, $U > 0$, and choose $\Lambda(s) = (s+1)^3$. Note that $u(\cdot)$ is a stationary and sufficiently rich signal of order five, and hence, Theorem 4 guarantees that the estimated parameter vector $\theta(t)$ converges exponentially to $\theta^*$ as $t \to \infty$ using both the momentum-based integral gradient algorithm (20)–(23) and the MRLS algorithm (36)–(38).
First, we compare the performance of the RLS algorithm (34) and (35) with the MRLS algorithm (36)–(38). For this comparison, we set $U = 10$, $\Gamma(0) = 3000 I_5$, $\nu = 0.1$, $\beta = 0.1$, $\mu = 2\beta$, $\theta(0) = [0, 0, 0, 0, 0]^T$, and $V(0) = [0, 0, 0, 0, 0]^T$. Figure 2 shows the system parameter estimates versus time. It can be seen that for both the RLS and the MRLS algorithms the parameter estimate $\theta(t)$ converges to $\theta^*$, as expected by Theorem 2. Moreover, as seen in Figure 3, the MRLS algorithm provides faster convergence of the system parameters to their true values as compared to the standard RLS algorithm.
Next, we compare the momentum-based composite gradient algorithm (55)–(57) with the composite gradient algorithm given by
$\dot{\theta}(t) = \Gamma(t)\phi(t)e(t) - \frac{\beta_2\Gamma(t)}{N_r(t)}[R(t)\theta(t) + Q(t)], \quad \theta(0) = \theta_0, \quad t \geq 0.$
Let $U = 1$, $\Gamma(0) = 1000 I_5$, $\nu = 0.2$, $\beta = 0.1$, $\mu = 2\beta$, $\theta(0) = [0, 0, 0, 0, 0]^T$, and $V(0) = [0, 0, 0, 0, 0]^T$. Figure 4 shows the system parameter estimates versus time. It can be seen that for both the composite gradient and the momentum-based composite gradient algorithms the parameter estimate $\theta(t)$ converges to $\theta^*$, as expected by Theorem 3. However, as seen in Figure 5, our proposed algorithm provides faster convergence of the system parameters to their true values. In particular, the momentum-based composite gradient algorithm settles around a value of $10^{-3}$ at $t = 16$ s as compared to 20 s for the composite gradient algorithm. Note that the improvement in convergence rate due to the addition of momentum is more pronounced for the composite algorithm than for the RLS algorithm.

5.2. Momentum-Based Recursive Least Squares and Model Reference Adaptive Control

Here, we consider the short-term dynamics of the aircraft as an example, as shown in Figure 6, where α is the angle of attack, γ is the flight path angle, and U e is the equilibrium linear axial velocity. The transfer function describing the short-term dynamics of the reduced-order longitudinal state equation of the aircraft with angle of attack output and elevator deflection input η is given by [21]
$\frac{\hat{\alpha}(s)}{\hat{\eta}(s)} = \frac{k_p(s + b_0)}{s^2 + a_1 s + a_0},$
where $k_p = z_\eta/U_e$, $b_0 = U_e m_\eta/z_\eta$, $a_1 = -(m_q + z_w)$, and $a_0 = m_q z_w - m_w U_e$, and $m_\eta$, $m_w$, and $m_q$ are the concise partial derivatives (see [21]) of the pitching moment with respect to the elevator angle $\eta$, the normal velocity $w$, and the pitch rate $q$. Moreover, $z_\eta$ and $z_w$ denote the concise partial derivatives of the normal force on the aircraft with respect to the elevator angle $\eta$ and the normal velocity $w$. Here, we set $k_p = 0.1601$, $b_0 = 71.9844$, $a_1 = 5.0101$, and $a_0 = 12.9988$ [21].
The desired system performance is to track the reference output
$\hat{y}_r(s) = \frac{5}{s + 0.3}\hat{r}(s),$
where $\hat{r}(s)$ is a reference command signal. Next, we use the filter system (78) and (79) with $F_c = -2$ and $g_c = 1$, and the control law (82), whose state-space realization is given by
$\dot{\xi}(t) = -\lambda_1\xi(t) + \phi(t), \quad \xi(0) = 0, \quad t \geq 0,$
$u_p(t) = \theta^T(t)\phi(t) + \dot{\theta}^T(t)\xi(t),$
where $\phi(t) = [v_1(t), v_2(t), y_p(t), r(t)]^T$. For our simulations, we select the reference signal to be a square wave with frequency $2\pi$ and amplitude 1. Furthermore, the system initial conditions are set to $y(0) = 0$, $y_p(0) = 0$, $\dot{y}_p(0) = 0$, and $\Gamma(0) = 300 I_4$.
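A quick sanity check of this realization: with the adaptive gains frozen ($\dot{\theta} = 0$) and a constant regressor, the filter state $\xi$ should settle at $\phi/\lambda_1$, consistent with $\xi = L^{-1}(s)\phi$ for $L(s) = s + \lambda_1$. A minimal Euler simulation, with illustrative numbers of our own choosing:

```python
import numpy as np

lam1, dt = 2.0, 1e-3
theta = np.array([0.5, -1.0, 0.2, 1.0])   # frozen gains (theta_dot = 0)
phi = np.array([1.0, 2.0, -1.0, 0.5])     # constant regressor for the check
xi = np.zeros(4)

for _ in range(10_000):                   # simulate 10 s
    u_p = theta @ phi + np.zeros(4) @ xi  # control law with theta_dot = 0
    xi = xi + dt * (-lam1 * xi + phi)     # xi_dot = -lam1 * xi + phi
```

With $\dot{\theta} = 0$ the control reduces to $u_p = \theta^T\phi$, and $\xi$ converges to $\phi/\lambda_1$ with time constant $1/\lambda_1$.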
Next, we compare the RLS algorithm (89) and (90) with our proposed MRLS algorithm (91)–(93). For this comparison, we set $\lambda_1 = 2$, $\alpha = 1$, $\beta = 1$, $\mu = 5$, $\gamma = 1$, $\nu = 0.3$, $\theta(0) = [0, 0, 0, 0]^T$, and $V(0) = [0, 0, 0, 0]^T$. Note that all the conditions of Theorem 5 are satisfied. Figure 7 shows the system parameters versus time for the MRLS and the RLS algorithms. It can be seen that in both cases the system parameters converge to the true values, with the MRLS algorithm providing faster convergence than the standard RLS update law. Figure 8 shows the moving average (MA) of the absolute value of the tracking error $e_1(\cdot)$ with a sliding window of period $2\pi$. Note that the MRLS algorithm provides slightly better tracking accuracy than the RLS algorithm after the first 10 s. Finally, we note that the difference in runtime complexity between the algorithms addressed in this paper is negligible.

6. Conclusions

In this paper, we developed three new momentum-based update laws for online parameter identification and model reference adaptive control. Specifically, we augmented higher-order tuner architectures into the integral gradient, recursive least squares, and composite gradient algorithms to achieve faster error convergence of the system parameters. Several numerical examples were provided to show the efficacy of the proposed approach. Future work will focus on developing new adaptive update laws for identification and control that guarantee finite time and fixed time convergence of the system parameters.

Author Contributions

L.S.: Conceptualization, Formal analysis, Software, Visualization, Writing - original draft. W.M.H.: Conceptualization, Formal analysis, Writing - review and editing, Supervision, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Air Force Office of Scientific Research under Grant FA9550-20-1-0038.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Åström, K.J.; Wittenmark, B. Adaptive Control; Dover Publications: Mineola, NY, USA, 2008. [Google Scholar]
  2. Ioannou, P.; Sun, J. Robust Adaptive Control; Dover Publications: Garden City, NY, USA, 2012. [Google Scholar]
  3. Narendra, K.; Annaswamy, A.M. Stable Adaptive Systems; Prentice-Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  4. Krstic, M.; Kanellakopoulos, I.; Kokotovic, P. Nonlinear and Adaptive Control Design; Wiley: New York, NY, USA, 1995. [Google Scholar]
  5. Cui, Y.; Annaswamy, A.M. Discrete-Time High Order Tuner with A Time-Varying Learning Rate. In Proceedings of the 2023 American Control Conference, San Diego, CA, USA, 31 May–2 June 2023; pp. 2993–2998. [Google Scholar]
  6. Nesterov, Y. Introductory Lectures on Convex Optimization; Springer: New York, NY, USA, 2004. [Google Scholar]
  7. Gaudio, J.E.; Gibson, T.E.; Annaswamy, A.M.; Bolender, M.A. Provably Correct Learning Algorithms in the Presence of Time-Varying Features Using a Variational Perspective. arXiv 2019, arXiv:1903.04666. [Google Scholar]
  8. Boffi, N.M.; Slotine, J.J.E. Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction. Neural Comput. 2021, 33, 590–673. [Google Scholar] [CrossRef] [PubMed]
  9. Online accelerated data-driven learning for optimal feedback control of discrete-time partially uncertain systems. Int. J. Adapt. Control Signal Process. 2023, 38, 848–876.
  10. Costa, R.R. Model-reference adaptive control with high-order parameter tuners. In Proceedings of the 2022 American Control Conference, Atlanta, GA, USA, 8–10 June 2022; pp. 3370–3375. [Google Scholar]
  11. Costa, R.R. Least-squares model-reference adaptive control with high-order parameter tuners. Automatica 2024, 163, 111544. [Google Scholar] [CrossRef]
  12. Wibisono, A.; Wilson, A.C.; Jordan, M.I. A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. USA 2016, 113, 7351–7358. [Google Scholar] [CrossRef] [PubMed]
  13. Cho, N.; Shin, H.S.; Kim, Y.; Tsourdos, A. Composite Model Reference Adaptive Control with Parameter Convergence Under Finite Excitation. IEEE Trans. Autom. Control 2018, 63, 811–818. [Google Scholar] [CrossRef]
  14. Shaferman, V.; Schwegel, M.; Glück, T.; Kugi, A. Continuous-time least-squares forgetting algorithms for indirect adaptive control. Eur. J. Control 2021, 62, 105–112. [Google Scholar] [CrossRef]
  15. Haddad, W.M.; Chellaboina, V. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach; Princeton University Press: Princeton, NJ, USA, 2008. [Google Scholar]
  16. Goodwin, G.; Mayne, D. A parameter estimation perspective of continuous time model reference adaptive control. Automatica 1987, 23, 57–70. [Google Scholar] [CrossRef]
  17. Gaudio, J.E.; Annaswamy, A.M.; Lavretsky, E.; Bolender, M.A. Parameter Estimation in Adaptive Control of Time-Varying Systems Under a Range of Excitation Conditions. IEEE Trans. Autom. Control 2022, 67, 5440–5447. [Google Scholar] [CrossRef]
  18. Costa, R.R. Lyapunov design of least-squares model-reference adaptive control. IFAC-PapersOnLine 2020, 53, 3797–3802. [Google Scholar] [CrossRef]
  19. Costa-Gomes, M.A.; Crawford, V.P.; Iriberri, N. Comparing models of strategic thinking in Van Huyck, Battalio, and Beil’s coordination games. J. Eur. Econ. Assoc. 2009, 7, 365–376. [Google Scholar] [CrossRef]
  20. Naik, S.; Kumar, P.; Ydstie, B. Robust continuous-time adaptive control by parameter projection. IEEE Trans. Autom. Control 1992, 37, 182–197. [Google Scholar] [CrossRef]
  21. Cook, M. Flight Dynamics Principles; Elsevier: Oxford, UK, 2007. [Google Scholar]
Figure 1. Visualization of momentum-based learning for an ill-conditioned problem; $\tilde{\theta}$ is updated in a direction that increases $\|\tilde{\theta}\|$ while still decreasing the cost function.
Figure 2. Parameter estimate values θ ( t ) versus time using the RLS and MRLS algorithms. In both cases the parameter estimate values converge to the true values with the MRLS algorithm converging faster.
Figure 3. Norm of the parameter estimate error $\|\tilde{\theta}(t)\|$ versus time using the RLS and MRLS algorithms.
Figure 4. Parameter estimate values $\theta(t)$ versus time using the composite gradient and the momentum-based composite gradient algorithms. In both cases the parameter estimates converge to the true values, with the momentum-based composite gradient algorithm converging slightly faster.
Figure 5. Norm of the parameter estimate error $\|\tilde{\theta}(t)\|$ versus time using the composite and the momentum-based composite algorithms.
Figure 6. Schematic of an aircraft.
Figure 7. Parameter estimate values θ ( t ) versus time using the RLS and MRLS algorithms. In both cases the parameter estimate values converge to the true values with the MRLS algorithm converging faster.
Figure 8. Moving average of the absolute value of the error $e_1(t) = y_p(t) - y_r(t)$ for the RLS and MRLS algorithms with a sliding window of $2\pi$. The tracking error for the MRAC scheme predicated on the MRLS algorithm decreases faster than that predicated on the RLS algorithm.

Somers, L.; Haddad, W.M. Momentum-Based Adaptive Laws for Identification and Control. Aerospace 2024, 11, 1017. https://doi.org/10.3390/aerospace11121017