Article

A Robust Control Framework for Direct Adaptive State Estimation with Known Inputs for Linear Time-Invariant Dynamic Systems

Department of Mechanical Engineering, Texas A&M University, College Station, TX 77840, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(12), 6657; https://doi.org/10.3390/app15126657
Submission received: 2 April 2025 / Revised: 3 June 2025 / Accepted: 7 June 2025 / Published: 13 June 2025
(This article belongs to the Section Mechanical Engineering)

Abstract

Many dynamic systems experience unwanted actuation caused by an unknown exogenous input. Typically, when these exogenous inputs are bounded and stochastic and a basis set cannot be identified, a Kalman-like estimator may suffice for state estimation, provided there is minimal uncertainty regarding the true system dynamics. However, such exogenous inputs can encompass environmental factors that constrain and influence system dynamics and overall performance. These environmental factors can modify the system’s internal interactions and constitutive constants. The proposed control scheme examines the case where the true system’s plant changes due to environmental or health factors while the system is driven by bounded stochastic variations. This approach updates the reference model by utilizing the input and output of the true system. Lyapunov stability analysis guarantees that both internal and external error states will converge to a neighborhood around zero asymptotically, provided the assumptions and constraints of the proof are satisfied.

1. Introduction

Dynamic systems can often experience unwanted actuation caused by an exogenous input. For aerospace systems, the input can be a gust of wind, changes in atmospheric pressure, and variation in temperature, among other factors [1,2,3]. These outside conditions can affect the dynamics and overall performance of the system. Two main methods that deal with exogenous inputs are disturbance accommodating control (DAC) and robustness techniques. Several variations of DAC exist; one notable technique involves identifying the disturbance’s basis and utilizing adaptive control to determine the appropriate linear combination of the basis to remove the unknown input [4,5]. Notably, the adaptive control method of DAC is derived from a fixed gain method, which requires solving a set of matching conditions—also known as solvability conditions—before implementing DAC [6]. The fixed gain method requires significant knowledge of the system. Conversely, DAC can be implemented for the adaptive case if system stability conditions are satisfied. An alternative viewpoint is treating disturbances as unknown inputs to the system, for which an observer-estimator can be formulated to estimate the disturbance [7,8].
In many instances, the exogenous unknown input behaves stochastically, and the basis set of the disturbance remains unidentified. In 2002, Fuentes and Balas explored the case of Robust Adaptive DAC, given that a set of known and unknown input basis functions exists [9]. The emergence of robustness techniques stemmed from a limitation of Luenberger-like estimators: their inability to adequately account for stochastic noise [10,11]. This limitation ultimately led to the development of traditional Kalman-like filters, which operate under the assumption that the stochastic variation in the unknown input follows a Gaussian distribution centered around zero [12,13]. For Luenberger and Kalman-like estimators to work, there must be minimal uncertainty about the true system. The sensitivity of Luenberger and Kalman-like estimators to such uncertainty led to the creation of plant-robust techniques [14,15].
The proposed control technique extends our original findings by accounting for unknown-input stochastic variation, where the input basis and plant model uncertainty are unknown [16]. The control technique can accommodate significant changes to the true system’s plant, as defined in the derivation, which may arise from environmental factors, aging, or usage effects. Changes to the plant can involve modifications to internal interactions and constitutive constants. In 2022, Griffith and Balas developed an open-loop coupled approach for robust adaptive plant and input estimation; plant estimation depends on adaptive control, and input estimation uses a fixed gain approach [17]. The proposed control technique is a closed-loop approach, enabling the decoupling of plant and input estimation. The closed-loop method allows access to the true system’s input and output to update the reference model adaptively.
The proposed control scheme is designed for a generic system and can, in theory, be implemented in any system that falls within the assumptions and constraints of the proof. Nonetheless, practical and numerical limitations may arise, some of which will be mentioned later in the text. While the proof is presented for globally linear time-invariant (LTI) systems, the proposed observer-estimator can also be applied to local LTI systems, provided that the assumptions and constraints of both the local LTI system and the derived theorem are satisfied. Given that it is common engineering practice to linearize non-linear systems around an operating point, the proposed scheme is well suited to aid state estimation in such cases [18,19]. Moreover, the proposed method can either complement or replace gain estimation scheduling techniques, which are often employed to estimate the state of a system under varying operational conditions or non-linear dynamics [20]. Gain estimation scheduling can be computationally intensive, especially when a large number of operating points must be considered, and it often requires detailed knowledge of the system dynamics.
Although the required decomposition of the true plant may seem restrictive, it is one of the method’s strongest attributes. This becomes especially apparent when the true system is derived from a non-linear model linearized at a nominal operating point. If the system subsequently undergoes a health-related change—such that the original non-linear model no longer accurately captures the dynamics—the theory still guarantees that the error states will converge to a neighborhood around zero, provided that the altered system continues to satisfy the structured decomposition. However, it is essential to note that there may still be numerical and practical limitations. While the theorem is often framed in the context of health monitoring, it can also be interpreted as an adaptive observer that tracks changes in the true system across different localized operating regimes.
Additionally, given that the proposed estimator is non-invasive—meaning that none of the estimated states are fed back into the true system—the control scheme can be utilized without adversely affecting the true system. The robustness proof depends on two stability criteria: Strictly Positive Real (SPR) and Almost Strictly Dissipative (ASD). For a formal definition, along with SPR and ASD assumptions and constraints, refer to [16,21].

2. Main Result

The work presented herein aims to determine the internal and external state of the true physical system given a form of health change. In this context, a health change constitutes any deviation that can be captured within the prescribed decomposition of the true plant, typically manifesting as variations in constitutive constants or internal interactions. Such changes may arise due to environmental influences, aging, operational wear, or shifts in the system’s operating point. The initial model and the true system may differ substantially; as long as the true system conforms to the prescribed decomposition, the proposed proof guarantees that the estimation error will converge to a neighborhood around zero. However, there may exist real-world limitations of the theorem, some of which are mentioned in the discussion section of the paper.
This study extends our previous findings by considering the case where the true system experiences stochasticities and non-linearities in the dynamics, summarized in Theorem 1. In contrast, earlier results only considered model uncertainties [16]. Here, given an initial model of the true plant, states are estimated using known inputs and the true system’s external state. Due to the inherent stochasticities, state error can only be guaranteed to converge in a neighborhood around zero ( R * ) . In the absence of such stochasticities, our prior results demonstrate that the estimation error asymptotically converges to zero [16].
Although presenting the main theorem prior to introducing the true system and the proposed observer may seem unconventional, this structure is intended to provide a high-level overview before delving into the technical details. The control diagram for Theorem 1 is expressed in Figure 1.
Theorem 1. 
Output Feedback on the Reference Model for Robust Adaptive Plant and State Estimation.
Consider the following state error system:
$$\dot e_x = (A_m - KC)\,e_x + B\underbrace{(L - L^*)}_{\Delta L}\,y + v, \qquad \hat e_y = C e_x, \qquad \Delta\dot L = \dot L = -\hat e_y\, y^*\gamma_y - aL,$$
where $e_x$ is the estimated internal state error, $\hat e_y$ is the estimated external state error, $L^*$ is the fixed-correction matrix, $\Delta L$ is the variability-uncertainty term, $K$ is a fixed gain, $\gamma_y$ is an interaction tuning term, $a$ is a positive scalar feedback term, and $v$ represents the unknown stochastic variations present in the system.
Given:
1. 
The triples of ( A , B , C ) and ( A m , B , C ) are ASD and SPR, respectively.
2. 
A model plant ( A m ) must exist.
3. 
Input matrix ( B ) and output matrix ( C ) are known.
4. 
Allow $A \in \operatorname{sp}\{A_m, BL^*C\} \ni A = A_m + BL^*C$.
5. 
The sets of eigenvalues $(\sigma)$ of the true and reference plants are stable, i.e., $\operatorname{Re}(\sigma\{A\}) < 0$ and $\operatorname{Re}(\sigma\{A_m\}) < 0$.
6. 
Input ( u ) must be bounded and continuous.
7. 
Allow the stochastic variation $(v)$ to be bounded $\ni \exists\, M_v > 0 \ni \sup_{t\ge 0}\|v(t)\| \le M_v < \infty$.
8. 
Allow $a$ to be a scalar value $\ni a \triangleq \frac{\lambda_{\min}(Q_x)}{2\lambda_{\max}(P_x)} > 0$, where $\lambda_{\min}(Q_x)$ is the minimum eigenvalue of matrix $Q_x$ and $\lambda_{\max}(P_x)$ is the maximum eigenvalue of matrix $P_x$.
9. 
Allow $\operatorname{tr}\big((L^*)^*L^*\big)^{1/2}$ to be bounded $\ni \exists\, M_k > 0 \ni \operatorname{tr}\big((L^*)^*L^*\big)^{1/2} \le M_k$.
10. 
Allow $\gamma_y$ to satisfy $\operatorname{tr}(\gamma_y^{-1}) \le \frac{M_v}{aM_k^2}$.
If these conditions are met, then $\limsup_{t\to\infty}\|e_x\| \le \frac{\big(1 + \lambda_{\max}(P_x)^{1/2}\big)M_v}{a\,\lambda_{\min}(P_x)^{1/2}} \equiv R^*$ asymptotically, where $R^*$ is the radius of confidence around zero. $\Delta L$ is guaranteed to be bounded; however, there is no guarantee of $\Delta L$ converging.
Figure 1. Control diagram for adaptive plant and state estimation accounting stochastic variations and environmental/health changes in the true system using known inputs and outputs.

2.1. Defining True System Dynamics

This case study assumes the true plant dynamics are linear time-invariant and either weak stochasticities or non-linearities ( v x ) are bounded and present in the dynamics:
$$\text{True System:}\qquad \dot x = Ax + Bu + v_x, \qquad y = Cx.$$
The input matrix ( B ) , output matrix ( C ) , output (external) state ( y ) , and input ( u ) are assumed to be known and well understood. The true plant ( A ) is assumed to have stable eigenvalues ( Re ( σ { A } ) < 0 ) , also referred to as a Hurwitz stable matrix. Given that the true plant experiences a health change, the true plant dynamics related to internal interactions and constitutive constants are unknown, but an initial plant model ( A m ) exists.
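The true-system dynamics above can be simulated directly; below is a minimal forward-Euler sketch with a hypothetical stable two-state plant and a bounded stochastic term (all numerical values are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical stable 2-state LTI system (illustrative, not the paper's example):
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])      # Hurwitz: eigenvalues -2 and -3
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0
x = np.zeros(2)
traj = np.empty((int(T / dt), 2))
for k in range(traj.shape[0]):
    t = k * dt
    u = np.array([1.0 + 0.5 * np.sin(t)])        # known, bounded, continuous input
    v_x = rng.uniform(-0.25, 0.25, size=2)       # bounded stochasticity, |v_x| <= 0.25
    x = x + dt * (A @ x + B @ u + v_x)           # Euler step of x' = Ax + Bu + v_x
    traj[k] = x
y = traj @ C.T                                   # external state y = Cx
```

Because $A$ is Hurwitz and both $u$ and $v_x$ are bounded, the simulated trajectory stays bounded, mirroring the assumptions of Theorem 1.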

2.2. Overview of Updating the Reference Model

The following sections will derive a control law to minimize the state error between the true (Equation (2)) and model (Equation (3)) systems given the presence of stochasticities in the true system. Note that both the true and model systems match in dimensions but may differ in internal interactions and constitutive constants. The eigenvalues of the initial model plant ( A m ) are stable ( Re ( σ { A m } ) < 0 ) and the weak stochasticities ( v x ) are unknown, bounded, and not accounted for in the reference model. Accounting for stochasticities in the true system adds an additional layer of complexity, potentially making the reference model dynamics less reliable.
$$\text{Reference Model:}\qquad \dot x_m = A_m x_m + Bu, \qquad y_m = C x_m$$
As a broad overview, the reference model uses both the input and output of the true system to modify its dynamics, aiming to minimize the estimated state errors $\{e_x, \hat e_y\}$. To update the reference model, the true system’s plant is assumed to be decomposable into two components: the initial plant model $(A_m)$ and the plant correction term $(BL^*C)$:
$$A = A_m + BL^*C.$$
The decomposition of the true plant is structured so that the initial model can be updated and corrected in the estimator via a correction term ( L ) :
$$L(t) = \Delta L(t) + L^*,$$
where $\Delta L$ is the variability-uncertainty term. As derived later, the correction term $(L)$ depends on the output state $(y)$ and the output error $(\hat e_y)$ between the model and true systems to adaptively update. By adjusting the reference dynamics, both internal and external errors can be guaranteed to converge to a neighborhood around zero asymptotically. Additionally, for the derived control scheme to be feasible, the triple $(A, B, C)$ must be ASD $\ni A = A_m + BL^*C$, and $(A_m, B, C)$ must be SPR.

2.3. Estimated State Error

Internal state ( x ) information is often blended into a linear combination or omitted from the external state ( y ) . Therefore, an estimator is required to capture the full state vector ( x ) . However, the estimator complexity increases when the true system undergoes a health change and when bounded stochasticities exist in the system dynamics. Given the initial plant ( A m ) and using the true system’s external state, an estimator-observer can be created:
$$\text{Estimator:}\qquad \dot{\hat x} = A_m\hat x + B(u + Ly), \qquad \hat y = C\hat x.$$
Note that the estimator and true systems are actuated using the same input ( u ) .
To minimize the error between both the estimator and the true systems, consider the following error states:
$$e_x = \hat x - x, \qquad \hat e_y = C e_x.$$
If both internal and external error states converge to zero $\ni \{e_x, \hat e_y\} \xrightarrow{t\to\infty} 0$, the true plant states have been numerically captured. To guarantee $\{e_x, \hat e_y\} \xrightarrow{t\to\infty} 0$, the error state dynamics must be investigated:
$$\dot e_x = \dot{\hat x} - \dot x = A_m\hat x + B(u + Ly) - \big(\underbrace{A}_{A_m + BL^*C}x + Bu + v_x\big) = A_m e_x + B\underbrace{(L - L^*)}_{\Delta L}\,y - v_x = A_m e_x + B\underbrace{\Delta L\, y}_{w_x} + \underbrace{v}_{=\,-v_x}.$$
Therefore, the state error equations can be written as follows:
$$\text{Error Equations:}\qquad \dot e_x = A_m e_x + B w_x + v, \qquad \hat e_y = C e_x,$$
where $w_x = \Delta L\, y$ and $v = -v_x$. In addition, the estimator can be modified to use a fixed gain $(K)$:
$$\text{Estimator w/ Fixed Gain:}\qquad \dot{\hat x} = A_m\hat x + B(u + Ly) + K(y - \hat y), \qquad \hat y = C\hat x.$$
Resulting in the following error equations:
$$\text{Error Equations w/ Fixed Gain:}\qquad \dot e_x = (A_m - KC)\,e_x + B w_x + v, \qquad \hat e_y = C e_x.$$
In order to use the fixed gain $(K)$, $(A_m, C)$ must be observable, and the selected fixed gain $(K)$ must satisfy $\operatorname{Re}(\sigma\{A_m - KC\}) < 0$.
Independent of the chosen estimator, $\{e_x, \hat e_y\}$ cannot be guaranteed to converge to zero because of the two residual terms $\{Bw_x, v\}$ in the error equations. An additional argument is needed to remove and bound the residual terms.
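The two feasibility checks for the fixed-gain estimator—observability of $(A_m, C)$ and $\operatorname{Re}(\sigma\{A_m - KC\}) < 0$—can be verified numerically; a sketch using a hypothetical second-order model (all matrices below are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical model pair (A_m, C) used to illustrate the fixed-gain checks.
A_m = np.array([[-2.0, 1.0],
                [0.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Observability matrix [C; C A_m] must have full rank n.
O = np.vstack([C, C @ A_m])
assert np.linalg.matrix_rank(O) == A_m.shape[0]

# A simple hand-picked output-injection gain; any K with
# Re(sigma(A_m - K C)) < 0 is admissible.
K = np.array([[2.0], [1.0]])
closed = A_m - K @ C
assert np.all(np.real(np.linalg.eigvals(closed)) < 0)
```

In practice the gain would be designed (e.g., by pole placement or LQR, as in Section 3.3); the hand-picked $K$ here only demonstrates the admissibility test.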

2.4. Lyapunov Stability for the Estimated State Error

Due to the presence of residual terms $\{Bw_x, v\}$ in the error equations, $\{e_x, \hat e_y\} \xrightarrow{t\to\infty} 0$ cannot be guaranteed. To address this, Lyapunov stability analysis will be employed to minimize and, where possible, eliminate the error states. In this analysis, the internal error state $(e_x)$ will be minimized by considering an energy-like function for the error system, assumed to take real scalar values:
$$V_e(e_x) = \tfrac12\, e_x^* P_x e_x; \qquad P_x > 0,$$
where $(\cdot)^*$ is the conjugate-transpose operator. Here, $P_x$ is a positive-definite $\big(\operatorname{Re}(\sigma\{P_x\}) > 0\big)$ and self-adjoint $\big(P_x = P_x^*\big)$ matrix, compactly written as $P_x > 0$. To understand how the error state in the energy-like function evolves with time, take the time derivative of $V_e$ and substitute the error dynamics:
$$2\dot V_e = \dot e_x^* P_x e_x + e_x^* P_x \dot e_x = (A_m e_x + Bw_x + v)^* P_x e_x + e_x^* P_x (A_m e_x + Bw_x + v) = e_x^*\big(A_m^* P_x + P_x A_m\big)e_x + 2\,e_x^* P_x B w_x + 2\underbrace{v^* P_x e_x}_{(v,\,P_x e_x)}.$$
The inner product operator is denoted as $(\cdot\,,\cdot)$.
Consider the SPR condition modified for the reference model:
$$A_m^* P_x + P_x A_m < -Q_x, \qquad P_x B = C^*,$$
where Q x > 0 . Applying the reference model SPR condition on V ˙ e ,
$$\dot V_e = -\tfrac12\, e_x^* Q_x e_x + \underbrace{e_x^* C^*}_{\hat e_y^*} w_x + (v, P_x e_x) = -\tfrac12\, e_x^* Q_x e_x + (\hat e_y, w_x) + (P_x e_x, v).$$
Removing the residual terms $\{(\hat e_y, w_x), (P_x e_x, v)\}$ in Equation (15) results in $\dot V_e \le 0$.
In order to remove or compensate the two residual terms in Equation (15), consider another Lyapunov energy-like function:
$$V_L(\Delta L) = \tfrac12\operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta L^*\big); \qquad \gamma_y > 0.$$
To determine the energy-like time rate of change of V L , take the time derivative of V L :
$$\dot V_L = \operatorname{tr}\big(\Delta\dot L\,\gamma_y^{-1}\Delta L^*\big) = \operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta\dot L^*\big).$$
A control law is defined for the time rate of change of plant correction variance ( L ˙ ) :
$$\dot L = -\hat e_y\, y^*\gamma_y - aL,$$
where a is a scalar. Additional constraints for a will be derived later. The term a L in Equation (18) functions as a feedback filter. Substituting Equation (18) into Equation (17):
$$\dot V_L = \operatorname{tr}\big(\underbrace{(-\hat e_y y^*\gamma_y - aL)}_{\dot L}\,\gamma_y^{-1}\Delta L^*\big) = -\operatorname{tr}\big(\hat e_y\underbrace{y^*\Delta L^*}_{w_x^*}\big) - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big) = -\underbrace{\operatorname{tr}\big(w_x^*\hat e_y\big)}_{(w_x,\,\hat e_y)\,=\,(\hat e_y,\,w_x)} - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big) = -(\hat e_y, w_x) - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big).$$
Therefore, the closed-loop Lyapunov function for the error system can be written as follows:
$$V_{eL}(e_x, \Delta L) = V_e(e_x) + V_L(\Delta L) = \tfrac12\, e_x^* P_x e_x + \tfrac12\operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta L^*\big).$$
Following, the closed-loop energy-like time rate of change for the error system follows:
$$\dot V_{eL}(e_x, \Delta L) = \dot V_e(e_x) + \dot V_L(\Delta L) = -\tfrac12\, e_x^* Q_x e_x + (\hat e_y, w_x) + (P_x e_x, v) - (\hat e_y, w_x) - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big) = -\tfrac12\, e_x^* Q_x e_x + (P_x e_x, v) - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big).$$
While adaptive feedback has eliminated some of the residual terms in the closed-loop time derivative of the Lyapunov function, as demonstrated in Equation (21), residual terms still exist such that $\dot V_{eL} \le 0$ is not guaranteed. As a result, the Barbalat–Lyapunov Lemma cannot be applied and therefore does not guarantee $e_x \xrightarrow{t\to\infty} 0$ asymptotically. An additional argument will be presented to establish an upper bound on the estimated state error term, confining both the internal and external error to a neighborhood around zero.
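In discrete time, the adaptive law of Equation (18) reduces to a simple matrix update; a minimal sketch with hypothetical dimensions, gains, and signal values (all illustrative, not the paper's):

```python
import numpy as np

# One forward-Euler step of the adaptive law  Ldot = -e_y_hat y* gamma_y - a L,
# sketched for a system with 2 outputs.
gamma_y = np.eye(2)        # interaction tuning term, gamma_y > 0
a = 0.1                    # scalar feedback-filter term
dt = 1e-3

L = np.zeros((2, 2))                      # correction term, L(0) = 0
e_y_hat = np.array([[0.3], [-0.1]])       # current estimated output error (column)
y = np.array([[1.0], [2.0]])              # current true-system output (column)

L_dot = -e_y_hat @ y.T @ gamma_y - a * L  # adaptive law (Equation (18))
L = L + dt * L_dot                        # Euler integration step
```

With $L(0) = 0$ the feedback-filter term $-aL$ vanishes on the first step, so the update is driven purely by the outer product of the output error and the output.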

2.5. Robustness Lyapunov Analysis for Error System

To determine the radius of convergence ( R * ) for the estimated state error ( e x ) , each component from the right-hand side (RHS) of Equation (21) must be bounded. The following sections will derive bounds for each component on the RHS of Equation (21) by combining Sylvester’s Inequality, the Cauchy–Schwarz Inequality, and algebraic manipulation.

2.5.1. Upper Bounding $-\tfrac12 e_x^* Q_x e_x$

In order to find the upper bound of $-\tfrac12 e_x^* Q_x e_x$, first consider bounding the Lyapunov function in Equation (12) using Sylvester’s Inequality:
$$\frac{\lambda_{\min}(P_x)}{2}\,\|e_x\|^2 \le V_e = \tfrac12\, e_x^* P_x e_x \le \frac{\lambda_{\max}(P_x)}{2}\,\|e_x\|^2,$$
where { λ min ( P x ) , λ max ( P x ) } are the minimum and maximum eigenvalues of matrix P x , respectively, and e x is the norm of the internal state error ( e x ) . Continuing, V e can be upper-bounded via:
$$V_e \le \frac{\lambda_{\max}(P_x)}{2}\,\|e_x\|^2.$$
From Equation (23), e x 2 can be solved for:
$$\frac{2V_e}{\lambda_{\max}(P_x)} \le \|e_x\|^2.$$
Moving forward, consider using Sylvester’s inequality to bound the $-\tfrac12 e_x^* Q_x e_x$ term from Equation (21):
$$\frac{\lambda_{\min}(Q_x)}{2}\,\|e_x\|^2 \le \tfrac12\, e_x^* Q_x e_x \le \frac{\lambda_{\max}(Q_x)}{2}\,\|e_x\|^2,$$
where { λ min ( Q x ) , λ max ( Q x ) } are the minimum and maximum eigenvalues of matrix Q x , respectively. Therefore, the upper-bound of Equation (25) follows:
$$-\tfrac12\, e_x^* Q_x e_x \le -\frac{\lambda_{\min}(Q_x)}{2}\,\|e_x\|^2.$$
Substituting the bound on $\|e_x\|^2$ from Equation (24) into Equation (26) results in the following:
$$-\tfrac12\, e_x^* Q_x e_x \le -\frac{\lambda_{\min}(Q_x)}{2}\cdot\underbrace{\frac{2V_e}{\lambda_{\max}(P_x)}}_{\le\,\|e_x\|^2} = -\underbrace{\frac{\lambda_{\min}(Q_x)}{\lambda_{\max}(P_x)}}_{2a\,>\,0}\,V_e,$$
where $a \triangleq \frac{\lambda_{\min}(Q_x)}{2\lambda_{\max}(P_x)} > 0$ is a scalar. Substituting Equation (27) into Equation (21),
$$\dot V_{eL} \le -2aV_e + (P_x e_x, v) - a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big),$$
results in an updated closed-loop energy-like time rate of change for the error system.
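The scalar $a$ can be computed once $P_x$ is known; below is a sketch that solves the Lyapunov equality $A_m^* P_x + P_x A_m = -Q_x$ by vectorization and then forms $a = \lambda_{\min}(Q_x)/(2\lambda_{\max}(P_x))$. The model matrix here is a hypothetical stand-in, not the paper's:

```python
import numpy as np

# Hypothetical stable model plant and Q_x = I.
A_m = np.array([[-2.0, 1.0],
                [0.0, -3.0]])
n = A_m.shape[0]
Q_x = np.eye(n)

# vec(A^T P + P A) = (I (x) A^T + A^T (x) I) vec(P) for column-major vec.
M = np.kron(np.eye(n), A_m.T) + np.kron(A_m.T, np.eye(n))
P_x = np.linalg.solve(M, -Q_x.flatten(order="F")).reshape(n, n, order="F")
P_x = 0.5 * (P_x + P_x.T)            # symmetrize against round-off

eigP = np.linalg.eigvalsh(P_x)       # ascending eigenvalues of P_x
a = np.min(np.linalg.eigvalsh(Q_x)) / (2 * np.max(eigP))
```

Since $A_m$ is Hurwitz and $Q_x > 0$, the solution $P_x$ is positive definite, so $a > 0$ as required by the theorem.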

2.5.2. Upper Bounding $-a\operatorname{tr}(L\gamma_y^{-1}\Delta L^*)$

From Equation (28), let us focus on the $-a\operatorname{tr}(L\gamma_y^{-1}\Delta L^*)$ term. Recall $L = \Delta L + L^*$, shown in Equation (5). Therefore, $-a\operatorname{tr}(L\gamma_y^{-1}\Delta L^*)$ can be alternatively expressed as follows:
$$-a\operatorname{tr}\big(L\gamma_y^{-1}\Delta L^*\big) = -a\operatorname{tr}\big(\underbrace{(\Delta L + L^*)}_{L}\gamma_y^{-1}\Delta L^*\big) = -a\underbrace{\operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta L^*\big)}_{2V_L} - a\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big) = -2aV_L - a\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big).$$
Substituting Equation (29) into Equation (28):
$$\dot V_{eL} \le -2aV_e + (P_x e_x, v) - 2aV_L - a\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big) = -2a\underbrace{(V_e + V_L)}_{V_{eL}} + (P_x e_x, v) - a\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big).$$
Alternatively, Equation (30) can be written as follows:
$$\dot V_{eL} + 2aV_{eL} \le (P_x e_x, v) - a\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big).$$

2.5.3. Upper Bounding $(P_x e_x, v)$

The right-hand side of Equation (31) can be upper bounded by taking the absolute value of terms:
$$\dot V_{eL} + 2aV_{eL} \le |(P_x e_x, v)| + a\big|\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big)\big|.$$
To further bound $\{|(P_x e_x, v)|, a|\operatorname{tr}(L^*\gamma_y^{-1}\Delta L^*)|\}$, recall the Cauchy–Schwarz inequality: $|(u, v)| \le (u, u)^{1/2}(v, v)^{1/2}$. Applying the Cauchy–Schwarz Inequality to $|(P_x e_x, v)|$:
$$|(P_x e_x, v)| = |e_x^* P_x v| = \big|e_x^* P_x^{1/2}P_x^{1/2}v\big| = \big|\big(P_x^{1/2}e_x,\, P_x^{1/2}v\big)\big| \le \underbrace{(P_x e_x, e_x)^{1/2}}_{(2V_e)^{1/2}\,\le\,(2V_{eL})^{1/2}}(P_x v, v)^{1/2} \le (2V_{eL})^{1/2}(P_x v, v)^{1/2}.$$
From Equation (33), the ( P x v , v ) 1 2 term can be further bounded by applying Sylvester’s Inequality:
$$\lambda_{\min}(P_x)^{1/2}\,\|v\| \le (P_x v, v)^{1/2} \le \lambda_{\max}(P_x)^{1/2}\,\|v\|.$$
Therefore, the upper-bound of ( P x v , v ) 1 2 follows:
$$(P_x v, v)^{1/2} \le \lambda_{\max}(P_x)^{1/2}\,\|v\|.$$
Given that the norm of the bounded-stochastic waveform $(v)$ can vary with time, the upper bound of $\|v\|$ can be expressed as a supremum $(\sup)$ such that
$$\exists\, M_v > 0 \ni \sup_{t\ge 0}\|v(t)\| \le M_v.$$
Using the result from Equation (36), the upper-bound of ( P x v , v ) 1 2 from Equation (35) can be expressed as follows:
$$(P_x v, v)^{1/2} \le \lambda_{\max}(P_x)^{1/2}M_v.$$
Therefore, using the results from Equations (33) and (37), the upper-bound of | ( P x e x , v ) | can be expressed as follows:
$$|(P_x e_x, v)| \le (2V_{eL})^{1/2}(P_x v, v)^{1/2} \le (2V_{eL})^{1/2}\lambda_{\max}(P_x)^{1/2}M_v.$$

2.5.4. Upper Bounding $|\operatorname{tr}(L^*\gamma_y^{-1}\Delta L^*)|$

Next, let us consider bounding the $|\operatorname{tr}(L^*\gamma_y^{-1}\Delta L^*)|$ term from Equation (32). Using the Cauchy–Schwarz inequality, alternatively written as $|\operatorname{tr}(AB^*)| \le \operatorname{tr}(AA^*)^{1/2}\operatorname{tr}(BB^*)^{1/2}$, on $|\operatorname{tr}(L^*\gamma_y^{-1}\Delta L^*)|$:
$$\big|\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big)\big| \le \operatorname{tr}\big(L^*\gamma_y^{-1}(L^*)^*\big)^{1/2}\underbrace{\operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta L^*\big)^{1/2}}_{(2V_L)^{1/2}\,\le\,(2V_{eL})^{1/2}} \le \operatorname{tr}\big(L^*\gamma_y^{-1}(L^*)^*\big)^{1/2}(2V_{eL})^{1/2}.$$
Using the trace operator ( tr ) property of invariant under circular shifts (i.e., tr ( A B C ) = tr ( C A B ) = tr ( B C A ) ), the tr ( L * γ y 1 L * ) 1 2 term from Equation (39) can be rewritten as follows:
$$\operatorname{tr}\big(L^*\gamma_y^{-1}(L^*)^*\big)^{1/2} = \operatorname{tr}\big((L^*)^*L^*\gamma_y^{-1}\big)^{1/2}.$$
Equation (40) can be upper bounded by applying the Cauchy–Schwarz inequality:
$$\operatorname{tr}\big((L^*)^*L^*\gamma_y^{-1}\big)^{1/2} \le \big|\operatorname{tr}\big((L^*)^*L^*\gamma_y^{-1}\big)\big|^{1/2} \le \operatorname{tr}\big((L^*)^*L^*(L^*)^*L^*\big)^{1/4}\operatorname{tr}\big(\gamma_y^{-1}\gamma_y^{-1}\big)^{1/4} \le \operatorname{tr}\big((L^*)^*L^*\big)^{1/2}\operatorname{tr}\big(\gamma_y^{-1}\big)^{1/2}.$$
For notation purposes, allow the following assumptions:
$$\exists\, M_k > 0 \ni \operatorname{tr}\big((L^*)^*L^*\big)^{1/2} \le M_k$$
and
$$\operatorname{tr}\big(\gamma_y^{-1}\big) \le \frac{M_v}{aM_k^2}.$$
Substituting Equations (42) and (43) in Equation (41),
$$\operatorname{tr}\big((L^*)^*L^*\gamma_y^{-1}\big)^{1/2} \le \operatorname{tr}\big((L^*)^*L^*\big)^{1/2}\operatorname{tr}\big(\gamma_y^{-1}\big)^{1/2} \le M_k\sqrt{\frac{M_v}{aM_k^2}} = \sqrt{\frac{M_v}{a}}.$$
Following, Equation (44) can be substituted in Equation (39):
$$\big|\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big)\big| \le \operatorname{tr}\big(L^*\gamma_y^{-1}(L^*)^*\big)^{1/2}(2V_{eL})^{1/2} \le \sqrt{\frac{M_v}{a}}\,(2V_{eL})^{1/2}.$$
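The tuning constraint of Equation (43) can be checked mechanically when selecting $\gamma_y$; a sketch with placeholder values for $M_v$, $a$, and $M_k$ (all illustrative):

```python
import numpy as np

# Checking tr(gamma_y^{-1}) <= M_v / (a M_k^2) for a candidate gamma_y = c I.
M_v = 0.25         # bound on the stochastic variation (placeholder)
a = 0.1            # scalar feedback term (placeholder)
M_k = 2.0          # bound on tr((L*)* L*)^{1/2} (placeholder)

budget = M_v / (a * M_k**2)     # admissible trace budget
# For gamma_y = c I_2, tr(gamma_y^{-1}) = 2 / c; pick c with margin 2x.
c = 2 * 2 / budget
gamma_y = c * np.eye(2)
assert np.trace(np.linalg.inv(gamma_y)) <= budget
```

Scaling $\gamma_y$ up shrinks $\operatorname{tr}(\gamma_y^{-1})$, so the constraint always admits a sufficiently large tuning matrix, at the cost of slower adaptation.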

2.5.5. Combining Bounds to Determine Radius of Convergence $(R^*)$

Using the result from Equations (38) and (45) in Equation (32) results in the following:
$$\dot V_{eL} + 2aV_{eL} \le |(P_x e_x, v)| + a\big|\operatorname{tr}\big(L^*\gamma_y^{-1}\Delta L^*\big)\big| \le \lambda_{\max}(P_x)^{1/2}M_v(2V_{eL})^{1/2} + a\sqrt{\frac{M_v}{a}}\,(2V_{eL})^{1/2} \le \big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v(2V_{eL})^{1/2}.$$
Following, Equation (46) can also be expressed as follows:
$$\dot V_{eL} + 2aV_{eL} \le V_{eL}^{1/2}\,\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v.$$
Consider an alternative expression for the left-hand side of Equation (47):
$$\frac{d}{dt}\big(2e^{at}V_{eL}^{1/2}\big) = e^{at}V_{eL}^{-1/2}\dot V_{eL} + 2a\,e^{at}V_{eL}^{1/2} = \frac{\dot V_{eL} + 2aV_{eL}}{V_{eL}^{1/2}}\,e^{at}.$$
Therefore, Equation (47) can be expressed as follows:
$$\frac{d}{dt}\big(2e^{at}V_{eL}^{1/2}\big) \le \sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v\,e^{at}.$$
Integrating and evaluating both sides of Equation (49) from 0 to τ results in the following:
$$2e^{a\tau}V_{eL}(\tau)^{1/2} - 2V_{eL}(0)^{1/2} \le \frac{\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{a}\big(e^{a\tau} - 1\big).$$
Solving for V e L ( τ ) 1 2 :
$$V_{eL}(\tau)^{1/2} \le e^{-a\tau}V_{eL}(0)^{1/2} + \frac{\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{2a}\big(1 - e^{-a\tau}\big).$$
From Equation (51), $V_{eL}^{1/2}$ is bounded for all $\tau \ge 0$, which indicates that $V_{eL}$ is also bounded. Given that $V_{eL}$ is a function of $\{e_x, \Delta L\}$, it follows that both $\{e_x, \Delta L\}$ are likewise bounded.
Continuing, V e L ( τ ) 1 2 can be lower bounded by considering the following:
$$\frac{\lambda_{\min}(P_x)^{1/2}}{\sqrt 2}\,\|e_x\| \le V_e(\tau)^{1/2} \le \big(V_e(\tau) + V_L(\tau)\big)^{1/2} = V_{eL}(\tau)^{1/2}.$$
Substituting the lower bound from Equation (52) into Equation (51),
$$\frac{\lambda_{\min}(P_x)^{1/2}}{\sqrt 2}\,\|e_x\| \le e^{-a\tau}V_{eL}(0)^{1/2} + \frac{\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{2a}\big(1 - e^{-a\tau}\big).$$
Solving for e x results in the following:
$$\|e_x\| \le \frac{\sqrt 2\,V_{eL}(0)^{1/2}}{\lambda_{\min}(P_x)^{1/2}}\,e^{-a\tau} + \frac{\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{a\,\lambda_{\min}(P_x)^{1/2}}\big(1 - e^{-a\tau}\big).$$
Taking the supremum of e x as time approaches infinity results in the following:
$$\limsup_{t\to\infty}\|e_x\| \le \frac{\big(1 + \lambda_{\max}(P_x)^{1/2}\big)M_v}{a\,\lambda_{\min}(P_x)^{1/2}} \equiv R^*.$$
This upper-bounds the error-state norm by a radius of convergence $(R^*)$ around a neighborhood of zero.
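To make the bound concrete, the radius can be evaluated numerically; a small sketch with placeholder values for $P_x$ and $M_v$ (illustrative only, not taken from the paper):

```python
import numpy as np

# Evaluate R* = (1 + sqrt(lambda_max(P_x))) M_v / (a sqrt(lambda_min(P_x)))
# for illustrative placeholder values.
P_x = np.array([[0.3, 0.05],
                [0.05, 0.2]])
M_v = 0.25

eig = np.linalg.eigvalsh(P_x)        # ascending eigenvalues
lam_min, lam_max = eig[0], eig[-1]
a = 1.0 / (2 * lam_max)              # a = lambda_min(Q_x)/(2 lambda_max(P_x)) with Q_x = I

R_star = (1 + np.sqrt(lam_max)) * M_v / (a * np.sqrt(lam_min))
```

The expression shows the trade-off directly: a larger decay rate $a$ or a smaller disturbance bound $M_v$ shrinks the neighborhood of convergence.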
Alternatively, if the following adaptive law is considered,
$$\dot L = -\hat e_y\, y^*\gamma_y,$$
where $aL = 0$ from Equation (18), the robustness proof can be modified accordingly, resulting in the following radius of convergence:
$$\limsup_{t\to\infty}\|e_x\| \le \frac{\lambda_{\max}(P_x)^{1/2}M_v}{a\,\lambda_{\min}(P_x)^{1/2}} \equiv R^*.$$
This version of the robustness proof has looser constraints, as Equation (43) need not be satisfied. Regardless of the adaptive law selected, the robustness proof can be extended without loss of generality to consider the error system with fixed gains, as in Equation (11).

2.5.6. Determining the Bounds for $\Delta L$

In order to determine a radius of convergence for $\Delta L$, recall $V_L = \tfrac12\operatorname{tr}(\Delta L\,\gamma_y^{-1}\Delta L^*)$ from Equation (16). Using the cyclic property of the trace, $V_L$ can be alternatively expressed as follows:
$$V_L = \tfrac12\operatorname{tr}\big(\Delta L^*\Delta L\,\gamma_y^{-1}\big).$$
Applying the Cauchy–Schwarz inequality on Equation (58),
$$\tfrac12\operatorname{tr}\big(\Delta L^*\Delta L\,\gamma_y^{-1}\big) \le \tfrac12\operatorname{tr}\big(\Delta L^*\Delta L\,\Delta L^*\Delta L\big)^{1/2}\operatorname{tr}\big(\gamma_y^{-1}\gamma_y^{-1}\big)^{1/2} \le \tfrac12\underbrace{\operatorname{tr}\big(\Delta L^*\Delta L\big)}_{\|\Delta L\|_F^2}\operatorname{tr}\big(\gamma_y^{-1}\big),$$
where $\|\Delta L\|_F$ is the Frobenius norm of $\Delta L$. Given that $\|\Delta L\|_F$ can be lower-bounded by additional norms, the norm of $\Delta L$ must be restricted to $\|\cdot\|_F$. Using the previous assumption from Equation (43) on Equation (59) results in the following:
$$V_L \le \tfrac12\,\|\Delta L\|_F^2\,\underbrace{\frac{M_v}{aM_k^2}}_{\operatorname{tr}(\gamma_y^{-1})}.$$
Therefore, V L 1 2 can be expressed as follows:
$$V_L^{1/2} \le \frac{1}{\sqrt 2}\,\|\Delta L\|_F\,\frac{M_v^{1/2}}{a^{1/2}M_k}.$$
Meaning, Equation (51) can alternatively be lower bounded by V L 1 2 :
$$\frac{1}{\sqrt 2}\,\|\Delta L\|_F\,\frac{M_v^{1/2}}{a^{1/2}M_k} \le V_L^{1/2} \le V_{eL}(\tau)^{1/2} \le e^{-a\tau}V_{eL}(0)^{1/2} + \frac{\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{2a}\big(1 - e^{-a\tau}\big).$$
Solving for L F in Equation (62):
$$\|\Delta L\|_F \le \frac{\sqrt 2\,a^{1/2}M_k}{M_v^{1/2}}\,e^{-a\tau}V_{eL}(0)^{1/2} + \big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_k\big(1 - e^{-a\tau}\big).$$
Taking the supremum of L F as time approaches infinity results in the following:
$$\limsup_{t\to\infty}\|\Delta L\|_F \le \big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_k \equiv R^{**}.$$
Therefore, the correction-variance norm is upper-bounded by a radius of convergence $R^{**}$. Alternatively, if the control law in Equation (56) is considered, the supremum of $\|\Delta L\|_F$ as time approaches infinity results in the following:
$$\limsup_{t\to\infty}\|\Delta L\|_F \le \lambda_{\max}(P_x)^{1/2}M_k \equiv R^{**}.$$

2.5.7. Alternative $\Delta L$ Bound

As detailed in the previous section, the bound on $\Delta L$ depends on the assumption outlined in Equation (43). Consider the case where the adaptive law is used without the feedback filter $(aL = 0)$, as detailed in Equation (56), and the Equation (43) constraint is removed. Applying Sylvester’s Inequality to $\gamma_y^{-1}$,
$$\lambda_{\min}\big(\gamma_y^{-1}\big) \le \gamma_y^{-1} \le \lambda_{\max}\big(\gamma_y^{-1}\big).$$
Therefore, V L can be alternatively bounded:
$$\frac{\lambda_{\min}(\gamma_y^{-1})}{2}\,\|\Delta L\|_F^2 \le V_L = \tfrac12\operatorname{tr}\big(\Delta L\,\gamma_y^{-1}\Delta L^*\big) \le \frac{\lambda_{\max}(\gamma_y^{-1})}{2}\,\|\Delta L\|_F^2.$$
Meaning, the lower bound of V L 1 2 can be expressed as follows:
$$\sqrt{\frac{\lambda_{\min}(\gamma_y^{-1})}{2}}\,\|\Delta L\|_F \le V_L^{1/2}.$$
Using the result from Equation (68) to lower bound Equation (51):
$$\sqrt{\frac{\lambda_{\min}(\gamma_y^{-1})}{2}}\,\|\Delta L\|_F \le V_L^{1/2} \le V_{eL}(\tau)^{1/2} \le e^{-a\tau}V_{eL}(0)^{1/2} + \frac{\sqrt 2\,\big(\lambda_{\max}(P_x)^{1/2} + 1\big)M_v}{2a}\big(1 - e^{-a\tau}\big).$$
Solving and taking the supremum of L from Equation (69) as time approaches infinity results in the following:
$$\limsup_{t\to\infty}\|\Delta L\|_F \le \frac{\lambda_{\max}(P_x)^{1/2}\,M_v}{a\,\lambda_{\min}(\gamma_y^{-1})^{1/2}} \equiv R^{**}.$$
There are several bounds for $\Delta L$, as depicted in Equations (64), (65), and (70); however, they all depend on the assumptions and constraints of the control problem.

2.6. Main Result Summarized

In conclusion, given that the true system experiences a health change and weak, bounded stochasticities $(v_x)$ exist in the true system, the robustness proof with the adaptive law (Equation (71)) guarantees that the internal state error converges to a radius $(R^*)$ around the neighborhood of zero. However, the stability proof is only valid when the triples of the model $(A_m, B, C)$ and true system $(A, B, C)$ are SPR and ASD, respectively, such that the true plant can be written as $A = A_m + BL^*C$. The proof does guarantee the plant variance $(\Delta L)$ to be bounded; however, there is no guarantee of $\Delta L \xrightarrow{t\to\infty} 0$. If $\Delta L \xrightarrow{t\to\infty} 0$ numerically, the true system’s dynamics or some energy equivalence has been numerically captured. The control diagram for the proof is shown in Figure 1.
$$\text{Adaptive Estimation Law:}\qquad \Delta\dot L = \dot L = -\hat e_y\, y^*\gamma_y - aL$$

3. Illustrative Example

This section will demonstrate an illustrative example implementing the control diagram, Figure 1, for state estimation given uncertainty in the model plant and stochasticities $(v_x)$ existing in the true dynamics. True and model values for this example are modified from [17]. All initial values for $\{x(0), \hat x(0), L(0)\}$ are equal to zero, while the values for $v_x$ range between $\pm 0.25$. While the illustrative example may be perceived as limited, the full capacity of the theorem can be constrained by the specific application and the local uncertainties related to certain operational points. See Appendix A for an alternative example, where the true plant is altered and the fixed gain in the estimator is not considered. Note the assumptions and constraints for this control scheme detailed in Theorem 1.

3.1. State Space Representations for Reference and True Systems

In order to implement the proposed control scheme, an initial model plant ( A m ) must exist. Allow the model system, as defined in Equation (3), to have the following properties:
$$A_m = \begin{bmatrix} -7 & 2 & 4\\ -2 & -1 & 2\\ -2 & 2 & -1 \end{bmatrix};\qquad B = \begin{bmatrix} 0 & 1\\ 0.7 & 0\\ 2 & 0 \end{bmatrix};\qquad C = \begin{bmatrix} 0.5 & 0 & 1\\ 1.2 & 0 & 0.25 \end{bmatrix};\qquad x(0) = 0.$$
For the control approach to be viable, as defined in Theorem 1, allow $A \in \operatorname{sp}\{A_m, BL^*C\} \ni A = A_m + BL^*C$. For this example, allow the fixed correction matrix $(L^*)$ to follow:
$$L^* = \begin{bmatrix} L^*_{11} & L^*_{12}\\ L^*_{21} & L^*_{22} \end{bmatrix} = \begin{bmatrix} -3 & 4\\ -1 & -5 \end{bmatrix}.$$
Therefore, the true system, as defined in Equation (2), is written as follows:
$$A = A_m + B L^* C = \begin{bmatrix} -13.5 & 2 & 1.75 \\ 0.31 & -1 & 0.6 \\ 4.6 & 2 & -5 \end{bmatrix};\quad B = \begin{bmatrix} 0 & 1 \\ 0.7 & 0 \\ 2 & 0 \end{bmatrix};\quad C = \begin{bmatrix} 0.5 & 0 & 1 \\ 1.2 & 0 & 0.25 \end{bmatrix};\quad x(0) = 0.$$
Allow the stochasticities to be bounded such that the supremum of $\lVert v_x \rVert$ is 0.25. Recall that the internal interactions and constitutive constants of the true system’s plant ( $A$ ) are unknown, but an initial model ( $A_m$ ) exists such that the plant dimensions correspond. Further, the input and output matrices $\{ B, C \}$ are known.
Given a unit step input to both the model and true systems, defined by the properties in Equations (72) and (74), respectively, discrepancies arise in the steady-state output responses, as shown in Figure 2. The variation in the output response becomes particularly evident when comparing the eigenvalues of the model and the true plant:
$$\sigma\{A_m\} = \{-1, -3, -5\} \qquad \sigma\{A\} \approx \{-0.5, -4.6, -14.4\}.$$
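As a concrete cross-check, the spectra and the required decomposition can be verified numerically. In this sketch, the sign conventions of the matrices are inferred from the reported eigenvalues and the decomposition constraint, so treat them as a reconstruction rather than the authors' exact typeset values:

```python
import numpy as np

# Plant matrices as reconstructed in Section 3.1 (sign pattern is an
# inferred assumption, recovered from the stated eigenvalues and the
# decomposition A = Am + B L* C).
Am = np.array([[-7.0, 2.0, 4.0], [-2.0, -1.0, 2.0], [-2.0, 2.0, -1.0]])
B = np.array([[0.0, 1.0], [0.7, 0.0], [2.0, 0.0]])
C = np.array([[0.5, 0.0, 1.0], [1.2, 0.0, 0.25]])
Lstar = np.array([[-3.0, 4.0], [-1.0, -5.0]])

A = Am + B @ Lstar @ C   # decomposition required by Theorem 1

print(np.sort(np.linalg.eigvals(Am).real))   # model spectrum: -5, -3, -1
print(np.sort(np.linalg.eigvals(A).real))    # true spectrum: approx -14.4, -4.6, -0.5
```

Both systems are stable, yet their spectra differ by roughly an order of magnitude in the fastest mode, which is what produces the steady-state output discrepancy seen in Figure 2.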

3.2. Defining the Known Input ( u )

The estimator and true systems are actuated using the same known bounded and continuous input ( u ) . For this example, allow the known input to be defined as follows:
$$u = \begin{bmatrix} 3 + 1.2\sin(t) \\ 2 + \cos(4t) \end{bmatrix}.$$
This input may represent a known disturbance or any arbitrary excitation used to query the system. Alternatively, the selected input can correspond to the control input that regulates the true system response. Crucially, both the true system and the estimator must be subjected to the same input, beyond the stochastic variation acting solely on the true system.
Figure 2. True ( y ) and reference ( y m ) system’s output response given a unit-step input ( u ) .

3.3. Adaptive State Estimation

The following section implements the control scheme outlined in Theorem 1 and illustrated in Figure 1. The proof demonstrates that the proposed control scheme can be executed with or without a fixed gain ( $K$ ) in the estimator. Utilizing a fixed gain can offer certain advantages, as it can influence the convergence of the state error. Both cases presented will utilize the same fixed gain, which is derived using a Linear Quadratic Regulator with $Q = I_3$ and $R = 1$. In contrast, the adaptive laws will exhibit slight variations, as noted in Equations (56) and (71). Furthermore, the interaction tuning term ( $\gamma_y$ ) controls the estimator’s sensitivity to output error, directly impacting how the error state converges. There are numerical limits for $\gamma_y$; if it is increased excessively, the estimator can diverge. For this example, $\gamma_y = I_2$.

3.3.1. Adaptive Control Scheme Without a Feedback  Filter ( a L = 0 )

The control scheme illustrated in Figure 1 is implemented with a fixed gain ( $K$ ) and the adaptive law outlined in Equation (56), where $\gamma_y = I$ and $a_L = 0$. As illustrated in Figure 3 and Figure 4, both internal and external state errors converge to a radius around zero. Although the proof establishes that the internal and external error states will converge to a neighborhood around zero, it only assures that the variance matrix ( $L$ ) will remain bounded. In this particular example, numerical results indicate that $L$ converges such that $L \xrightarrow{t\to\infty} L^*$, as shown in Figure 5. Consequently, as $L \to L^*$, the internal error radius further reduces to a smaller neighborhood around zero, as seen in Figure 3b.
Due to the stochastic variations in the true system, values of L will show variability as time approaches infinity. In order to compare the estimated plant with the true system, the last 20 s of L’s simulated data were averaged, denoted by L a v g . Consequently, the L a v g for this example is as follows:
$$L_{avg} \approx \begin{bmatrix} -3 & 4 \\ -1.02 & -4.95 \end{bmatrix}.$$
Therefore, the estimated plant ( A e s t ) can be expressed as follows:
$$A_{est} \approx A_m + B L_{avg} C \approx \begin{bmatrix} -13.46 & 2 & 1.74 \\ 0.32 & -1 & 0.60 \\ 4.62 & 2 & -5 \end{bmatrix},$$
where the eigenvalues of $A_{est}$ are $\sigma\{A_{est}\} \approx \{-0.51, -4.64, -14.32\}$. Observe the similarities in internal interactions and constitutive constants between the true and estimated plants, as shown in Equation (74) and Equation (78), respectively.
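The estimated plant follows directly from the averaged gain. Using the reconstructed sign conventions (an assumption), this can be confirmed in a few lines:

```python
import numpy as np

# Reconstructed model matrices (signs inferred, treated as an assumption).
Am = np.array([[-7.0, 2.0, 4.0], [-2.0, -1.0, 2.0], [-2.0, 2.0, -1.0]])
B = np.array([[0.0, 1.0], [0.7, 0.0], [2.0, 0.0]])
C = np.array([[0.5, 0.0, 1.0], [1.2, 0.0, 0.25]])

L_avg = np.array([[-3.0, 4.0], [-1.02, -4.95]])   # averaged adaptive gain
A_est = Am + B @ L_avg @ C
print(A_est.round(2))   # close to the A_est reported in Equation (78)
```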
Figure 3. Both sub-figures (a,b) illustrate the internal state error ( e x ) converging to a radius ( R * ) around a neighborhood of zero; however, they do so at two different time histories. The implemented control scheme used a fixed gain and a L = 0 .
Figure 4. External state error ( e ^ y ) converging to a radius around zero with the use of the fixed gain and a L = 0 .
Figure 5. Adaptive Gain L ( t ) converging to L * with the use of the fixed gain ( K 0 ) and a L = 0 .

3.3.2. Adaptive Control Scheme with a Feedback Filter ( a L 0 )

In this section, the control scheme illustrated in Figure 1 and the adaptive law (Equation (71)) are implemented using a fixed gain (i.e., $K \neq 0$ ), $\gamma_y = I$, and $a_L = 0.05$. As shown in Figure 6 and Figure 7, both internal and external state errors converge to a radius around zero. Although there is variability in $L$ as time approaches infinity (Figure 8), the average $L$ term ( $L_{avg}$ ) as time approaches infinity follows:
$$L_{avg} \approx \begin{bmatrix} -0.78 & -0.38 \\ -2.77 & -1.4 \end{bmatrix}.$$
Consequently, this leads to the following estimated true plant matrix ( A e s t ) :
$$A_{est} \approx A_m + B L_{avg} C \approx \begin{bmatrix} -10.08 & 2 & 0.88 \\ -2.59 & -1 & 1.39 \\ -3.70 & 2 & -2.75 \end{bmatrix},$$
where the eigenvalues are given by $\sigma\{A_{est}\} \approx \{-0.94, -3.72, -9.17\}$. Although the estimated plant matrix does not equal the true plant, $A_{est}$ gives better intuition into the true plant ( $A$ ) than the initial model plant ( $A_m$ ) and grants $\sup_{t \geq 0} \lVert e_x \rVert = R^*$. For this example, it is important to note that as $t \to \infty$, there were no noteworthy changes in the convergence error radius for both internal and external error states, nor in $L$.
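The same decomposition check applies to the feedback-filter case. With the reconstructed sign conventions (an assumption), the averaged gain again reproduces the reported estimated plant, and the resulting matrix is stable:

```python
import numpy as np

# Reconstructed model matrices (signs inferred, treated as an assumption).
Am = np.array([[-7.0, 2.0, 4.0], [-2.0, -1.0, 2.0], [-2.0, 2.0, -1.0]])
B = np.array([[0.0, 1.0], [0.7, 0.0], [2.0, 0.0]])
C = np.array([[0.5, 0.0, 1.0], [1.2, 0.0, 0.25]])

L_avg = np.array([[-0.78, -0.38], [-2.77, -1.4]])  # averaged gain, a_L = 0.05 case
A_est = Am + B @ L_avg @ C
print(A_est.round(2))   # close to the A_est reported in Equation (80)
```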
Note the similarities in the state error responses when implementing the adaptive control scheme both with and without the feedback filter. This is evident in the internal error comparison shown in Figure 3a and Figure 6, as well as the external error comparison in Figure 4 and Figure 7. Although the adaptive estimation law differs (as given by Equations (56) and (71)), the error states still converge to a neighborhood around zero. In the case without the feedback filter ( $a_L = 0$ ), since $L \xrightarrow{t\to\infty} L^*$, the estimation error converges to a smaller-radius neighborhood around zero. This level of reduction in error is not observed in the cases with the feedback filter ( $a_L \neq 0$ ).
Figure 6. Internal state error ( e x ) converging to a radius around zero with the use of fixed gain and a L 0 .
Figure 7. External state error ( $\hat{e}_y$ ) converging to a radius around zero with the use of the fixed gain and $a_L \neq 0$.
Figure 8. Adaptive Gain $L(t)$ approaching a steady state with the use of a fixed gain and $a_L \neq 0$.

3.3.3. Kalman–Bucy Filter (Kalman Filter in Continuous Time)

In both Section 3.3.1 and Section 3.3.2, the error dynamics for the proposed adaptive state estimator are presented in two variations: without and with the feedback filter. This section depicts the error dynamics without implementing the adaptive estimator; the fixed gain value ( K ) in the estimator remains unchanged compared to Section 3.3.1 and Section 3.3.2. As illustrated in Figure 9a,b, the lack of adaptation in the reference model results in significantly larger internal and external estimation errors compared to those observed when the adaptive control scheme is applied.

4. Discussion

The theorem and illustrative example presented in this paper pertain to global LTI systems. However, since many real-world problems are inherently non-linear, the proposed theorem can also be applied to any local linear approximation of a non-linear system, provided that the assumptions and constraints of both the theorem and the linear approximation are met.
While there are theoretical hard constraints, such as that the decomposition of the true plant must follow A = A m + B L * C , these constraints are essential for the proof. Future research could explore ways to relax or circumvent these constraints. Moreover, the decomposition of the true plant ( A ) may appear unrealistic for many applications; however, if additional actuators and sensors can be incorporated, thereby rendering the true system over-controllable and over-observable, the feasibility of satisfying the decomposition improves. If adding extra sensors is not an option, employing sensor blending or fusion techniques can aid in meeting the decomposition constraints. Sensor blending or fusion techniques are routinely used in signal processing and state estimation applications [22,23]. Although the necessary decomposition of the true system may initially seem restrictive, given an initial model, the theory enables the error states to converge to a neighborhood around zero, regardless of how large or small the values inside L * are. In practice, numerical limitations may exist in the theorem due to the integration of large errors.
Knowledge of the input and output matrices $\{ B, C \}$ is essential for the theoretical development and stability proof. This assumption is common in control design, as the input matrix ( $B$ ) denotes how the input ( $u$ ) influences the system dynamics, and the output matrix ( $C$ ) specifies the linear combination of the internal states that are measured (observed). Practical issues may arise if these matrices degrade over time or with use, but those types of degradation are outside the scope of this work.
The current design of the proposed estimator is non-invasive, meaning that none of the estimated states are fed back into the true system. As a result, the estimation scheme can be implemented without any risk of harming the true system. One potential application of the estimator is to operate in conjunction with governing inputs that regulate the true system response. This parallel implementation can assist in monitoring or refining the equations of motion that govern the true system dynamics. A significant change in a specific constitutive constant or internal interaction could alert an operator to inspect a particular area for potential failures or indicate that a component within the system needs to be replaced.
The theorem only guarantees an asymptotic convergence of the error states, which may limit the potential real-time, online use. In principle, the theorem can be applied to both fast and slow dynamics systems. However, several factors may limit the practical implementation of the theorem. These include the accuracy of the initial system model, the operational range of the true system, and whether the estimated states are used in control design. For instance, consider a scenario involving a mechatronic system with a poorly identified initial model, where the goal is trajectory tracking using the estimated states. In such cases, the implementation of the proposed theorem may not be feasible for online use. However, the scenario can still be evaluated in an offline setting to assess its performance. Additionally, incorporating a fixed gain term ( K ) in the estimator can aid or hinder state estimation, depending on how the fixed gain term is tuned. However, the proof indicates that the inclusion of this fixed term is not necessary for the state estimation error to converge to a neighborhood around zero.
The outlined theorem in this paper pertains to accounting for model uncertainty and stochastic noise. If the noise term is removed ( $v_x = 0$ ), the need for the robust analysis subsides. From here, our previous finding, outlined in [16], can be used to ensure $\{ e_x, \hat{e}_y \} \xrightarrow{t\to\infty} 0$. However, the result in [16] can be extended by considering the control law defined in Equation (18), which truncates the Lyapunov analysis expressed in Equation (32) to the following:
$$\dot{V}_{e_L} + 2aV_{e_L} \leq a\left|\operatorname{tr}\!\left(L^{*}\gamma_y^{-1}L^{\ast}\right)\right| \leq \underbrace{a\left[\operatorname{tr}\!\left(L^{*}L^{*\ast}\right)\right]^{\frac{1}{2}}\left[\operatorname{tr}\!\left(\gamma_y^{-1}\right)\right]^{\frac{1}{2}}}_{\triangleq\, M_L}\left(2V_{e_L}\right)^{\frac{1}{2}} = M_L\left(2V_{e_L}\right)^{\frac{1}{2}},$$
where M L is a scalar value greater than zero ( M L > 0 ) . Following a similar analysis described earlier in this text, the error state norm is bounded by the following radius of convergence, denoted as R * :
$$\limsup_{t\to\infty} \lVert e_x \rVert \leq \frac{M_L}{a\,\lambda_{\min}(P_x)} \triangleq R^{*},$$
where R * represents a radius around a neighborhood of zero.

5. Conclusions

The true system is influenced by stochastic variations and health changes, resulting in alterations in internal interactions and constitutive constants. If these health changes are not considered, discrepancies may arise between the true system and the reference model. The proposed control scheme addresses these issues by updating the reference model utilizing the input and output of the true system. The stability proof ensures that both the internal and external error states will converge within a defined radius around zero. The size of this radius of convergence is contingent upon the extent of the stochastic variations. A potential application of the estimator is to operate in parallel with the system inputs that regulate the response. This parallel configuration can assist in monitoring the health status of the true system, potentially alerting operators when specific internal interactions or constitutive parameters deviate from their nominal values. Future work will focus on relaxing certain Lyapunov-based stability constraints—most notably, the requirement for a specific decomposition of the true plant.

Author Contributions

Writing—original draft, K.F.; Supervision, M.B. and J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A: True Plant
A_m: Model Plant
ASD: Almost Strictly Dissipative
B: Input Matrix
B L* C: Ideal Plant Correction Term
∈: Belongs
C: Output Matrix
(·)*: Conjugate Transpose
^: Estimate
∀: For All
L*: Fixed Correction Matrix
L: Variance Matrix
‖·‖: Norm
SPR: Strictly Positive Real
σ: Set of Eigenvalues
∋: Such that
R*: Radius of Convergence
Re: Real
∃: There Exists
⟨·,·⟩: Inner Product
u: Input
x: Internal State (Full State Vector)
y: External (Output) State

Appendix A. Alternative Illustrative Example

This section adopts the same architecture outlined in Section 3, with modifications to the parameters of the true system and the removal of the fixed gain assumption in the estimator. All other estimation parameters and inputs remain unchanged unless otherwise specified. The purpose of this example is to demonstrate that the theorem remains applicable beyond local uncertainty of the model plant.

Appendix A.1. Defining the True System Plant

Recall for the adaptive estimator to be utilized, the true plant must follow a specific decomposition: A = A m + B L * C . In this example, consider an alternative value for L * given by the following:
$$L^* = \begin{bmatrix} -74 & 20 \\ -6 & -50 \end{bmatrix}.$$
Therefore, the true plant can be represented as follows:
$$A = \begin{bmatrix} -70 & 2 & -14.5 \\ -11.1 & -1 & -46.3 \\ -28 & 2 & -139 \end{bmatrix},$$
where the eigenvalues of the true plant ( $A$ ) are given by $\sigma\{A\} \approx \{-1.72, -64.70, -143.6\}$. Note the dependencies between the model and true plant, as defined in Equation (72) and Equation (A2), respectively. For reference, the eigenvalues of the model plant ( $A_m$ ) are $\sigma\{A_m\} = \{-1, -3, -5\}$.
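As in the main text, the altered decomposition can be checked numerically. The sign conventions below are inferred from the reported eigenvalues and the decomposition constraint, so treat them as a reconstruction:

```python
import numpy as np

# Reconstructed model matrices (signs inferred, treated as an assumption).
Am = np.array([[-7.0, 2.0, 4.0], [-2.0, -1.0, 2.0], [-2.0, 2.0, -1.0]])
B = np.array([[0.0, 1.0], [0.7, 0.0], [2.0, 0.0]])
C = np.array([[0.5, 0.0, 1.0], [1.2, 0.0, 0.25]])
Lstar = np.array([[-74.0, 20.0], [-6.0, -50.0]])   # alternative correction matrix

A = Am + B @ Lstar @ C
print(A)                                  # matches Equation (A2)
print(np.sort(np.linalg.eigvals(A).real)) # fast, well-separated stable modes
```

The much larger correction matrix produces a true plant far from the model, which is exactly the global-uncertainty regime this appendix is meant to exercise.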

Appendix A.2. Adaptive Control Scheme Without a Feedback Filter (aL = 0) and Without a Fixed Gain (K = 0)

The control scheme illustrated in Figure 1 is implemented without a fixed gain ( $K = 0$ ) and with the adaptive law without the feedback filter ( $a_L = 0$ ), as described in Equation (56). Additionally, the interaction tuning term ( $\gamma_y$ ) and the input ( $u$ ) remain the same as in the previous example.
As illustrated in Figure A1 and Figure A2, both internal and external errors converge to a radius around zero. Due to the theorem only guaranteeing asymptotic convergence and the relatively large discrepancy between the initial model and the true system, it does take a relatively significant time for the error states to approach zero. Nonetheless, the model is updated so the error states converge to a radius around zero.
As discussed in the main text, if there is a large discrepancy between the model and the true system, this theorem may not be realistic in an online setting for fast dynamic systems, because the time required for the error states to approach zero can be substantial. Conversely, the theorem could be beneficial offline, where extended time frames can be simulated quickly.
Furthermore, due to the true system having eigenvalues with large negative real parts, the true system response has a smaller magnitude than the previous example using the same input. Consequently, applying the same input leads to slower observable error decay. For this specific case, increasing the magnitude of the input scales the true system’s response accordingly, allowing the adaptive estimator to adjust rapidly and causing the error to subside faster.
Figure A1. Sub-figures (ac) illustrate the internal state error ( e x ) converging to zero over varying time scales. Due to the relatively large discrepancies between the model and the true plant, the internal error requires a significant amount of time to converge to a small neighborhood around zero. The implemented control scheme uses neither a fixed gain ( K = 0 ) nor the feedback filter on the adaptive estimator law ( a L = 0 ) .
Although there exists variability in the estimated $L$’s data as time approaches infinity, as shown in Figure A3, the last 20 s of simulated data were averaged, denoted by $L_{avg}$, resulting in the following:
$$L_{avg} \approx \begin{bmatrix} -74.05 & 19.96 \\ -4.64 & -48.45 \end{bmatrix}.$$
Therefore, the estimated plant ( A a v g ) can be expressed as follows:
$$A_{avg} \approx \begin{bmatrix} -67.45 & 2 & -12.75 \\ -11.16 & -1 & -46.34 \\ -28.16 & 2 & -139.13 \end{bmatrix},$$
where the eigenvalues of $A_{avg}$ follow: $\sigma\{A_{avg}\} \approx \{-1.72, -62.91, -142.95\}$. Compare the similarities between the true and estimated plants, denoted by Equations (A2) and (A4), respectively.
This example illustrates that model uncertainty can be either local or global. As long as the uncertainty adheres to the specified decomposition, the error states can be shown to converge to the neighborhood around zero. Furthermore, the decay of the error states is influenced by the true system input, emphasizing the role of excitation in the adaptation process.
Figure A2. Sub-figures (ac) illustrate the external state error ( e ^ y ) converging to zero over varying time scales.
Figure A3. Sub-figures (a,b) illustrate the Adaptive Gain $L(t)$ converging to $L^*$ without the use of the fixed gain ( $K = 0$ ) and $a_L = 0$, across different time scales.

References

  1. Cook, R.G.; Palacios, R.; Goulart, P. Robust gust alleviation and stabilization of very flexible aircraft. AIAA J. 2013, 51, 330–340. [Google Scholar] [CrossRef]
  2. Von Karman, T. Compressibility effects in aerodynamics. J. Spacecr. Rocket. 2003, 40, 992–1011. [Google Scholar] [CrossRef]
  3. Chen, S.; Gojon, R.; Mihaescu, M. High-temperature effects on aerodynamic and acoustic characteristics of a rectangular supersonic jet. In Proceedings of the 2018 AIAA/CEAS Aeroacoustics Conference, Atlanta, Georgia, 25–29 June 2018; p. 3303. [Google Scholar]
  4. Balas, M.; Fuentes, R.; Erwin, R. Adaptive control of persistent disturbances for aerospace structures. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Monterey, CA, USA, 5–8 August 2000; p. 3952. [Google Scholar]
  5. Fuentes, R.J.; Balas, M.J. Direct adaptive disturbance accommodation. In Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No. 00CH37187), Sydney, NSW, Australia, 12–15 December 2000; Volume 5, pp. 4921–4925. [Google Scholar]
  6. Wang, N.; Wright, A.D.; Balas, M.J. Disturbance accommodating control design for wind turbines using solvability conditions. J. Dyn. Syst. Meas. Control 2017, 139, 041007. [Google Scholar] [CrossRef]
  7. Chen, J.; Patton, R.J.; Zhang, H.Y. Design of unknown input observers and robust fault detection filters. Int. J. Control 1996, 63, 85–105. [Google Scholar] [CrossRef]
  8. Guan, Y.; Saif, M. A novel approach to the design of unknown input observers. IEEE Trans. Autom. Control 1991, 36, 632–635. [Google Scholar] [CrossRef]
  9. Fuentes, R.J.; Balas, M.J. Robust model reference adaptive control with disturbance rejection. In Proceedings of the 2002 American Control Conference (IEEE Cat. No. CH37301), Anchorage, AK, USA, 8–10 May 2002; Volume 5, pp. 4003–4008. [Google Scholar]
  10. Luenberger, D.G. Observing the State of a Linear System. IEEE Trans. Mil. Electron. 1964, 8, 74–80. [Google Scholar] [CrossRef]
  11. Luenberger, D. An introduction to observers. IEEE Trans. Autom. Control 1971, 16, 596–602. [Google Scholar] [CrossRef]
  12. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  13. Kalman, R.E.; Bucy, R.S. New Results in Linear Filtering and Prediction Theory. J. Basic Eng. 1961, 83, 95–108. [Google Scholar] [CrossRef]
  14. Doyle, J.C. A review of μ for case studies in robust control. IFAC Proc. Vol. 1987, 20, 365–372. [Google Scholar] [CrossRef]
  15. Nagpal, K.M.; Khargonekar, P.P. Filtering and smoothing in an H/sup infinity/setting. IEEE Trans. Autom. Control 1991, 36, 152–166. [Google Scholar] [CrossRef]
  16. Fuentes, K.; Balas, M.J.; Hubbard, J. A Control Framework for Direct Adaptive Estimation With Known Inputs for LTI Dynamical Systems. In Proceedings of the AIAA SCITECH 2025 Forum, Orlando, FL, USA, 6–10 January 2025; p. 2796. [Google Scholar]
  17. Griffith, T.; Gehlot, V.P.; Balas, M.J. Robust adaptive unknown input estimation with uncertain system Realization. In Proceedings of the AIAA SCITECH 2022 Forum, San Diego, CA, USA, 3–7 January 2022; p. 611. [Google Scholar]
  18. Rigatos, G.G. A Derivative-Free Kalman Filtering Approach to State Estimation-Based Control of Nonlinear Systems. IEEE Trans. Ind. Electron. 2012, 59, 3987–3997. [Google Scholar] [CrossRef]
  19. Jo, N.H.; Seo, J.H. Input output linearization approach to state observer design for nonlinear system. IEEE Trans. Autom. Control 2000, 45, 2388–2393. [Google Scholar] [CrossRef]
  20. Leith, D.J.; Leithead, W.E. Survey of gain-scheduling analysis and design. Int. J. Control 2000, 73, 1001–1025. [Google Scholar] [CrossRef]
  21. Balas, M.; Fuentes, R. A non-orthogonal projection approach to characterization of almost positive real systems with an application to adaptive control. In Proceedings of the 2004 American Control Conference, Boston, MA, USA, 30 June–2 July 2004; Volume 2, pp. 1911–1916. [Google Scholar]
  22. Sun, S.L.; Deng, Z.L. Multi-sensor optimal information fusion Kalman filter. Automatica 2004, 40, 1017–1023. [Google Scholar] [CrossRef]
  23. Wang, S.; Yi, S.; Zhao, B.; Li, Y.; Li, S.; Tao, G.; Mao, X.; Sun, W. Sowing Depth Monitoring System for High-Speed Precision Planters Based on Multi-Sensor Data Fusion. Sensors 2024, 24, 6331. [Google Scholar] [CrossRef] [PubMed]
Figure 9. Sub-figures (a,b) illustrate the internal state error ( e x ) and external state error ( e ^ y ) , respectively, in the absence of reference model updating to account for changes in the true system health. Both error states exhibit significantly larger estimation errors compared to their counterparts obtained using the adaptive scheme.