Article

A Least-Squares Control Strategy for Asymptotic Tracking and Disturbance Rejection Using Tikhonov Regularization and Cascade Iteration

Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX 79409, USA
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3707; https://doi.org/10.3390/math13223707
Submission received: 29 September 2025 / Revised: 6 November 2025 / Accepted: 12 November 2025 / Published: 19 November 2025
(This article belongs to the Section E2: Control Theory and Mechanics)

Abstract

This paper presents a comprehensive strategy for tracking and disturbance rejection in both lumped and distributed parameter systems, with a focus on infinite-dimensional input and output spaces. Building on the geometric theory of regulation, the proposed methodology employs a cascade algorithm coupled with Tikhonov regularization to derive control laws that iteratively improve tracking accuracy. Unlike traditional optimal control approaches, the framework minimizes the limsup in time of the tracking error norm rather than a quadratic cost function. The framework also applies to over- and under-determined systems. We provide theoretical insights, detailed algorithmic formulations, and numerical simulations to demonstrate the effectiveness and generality of the method. Results indicate that the cascade controls asymptotically approximate the classical optimal control solutions, with limitations addressed through rigorous error analysis. Applications cover diverse scenarios with both finite- and infinite-dimensional input and output spaces, showcasing the versatility of the approach.

1. Introduction

Tracking and disturbance rejection are fundamental objectives in control design, particularly for systems governed by partial differential equations (PDEs) [1,2,3]. This work presents a general strategy applicable to both lumped and distributed parameter systems, primarily focusing on systems modeled by PDEs. The proposed method generates a sequence of control laws by solving a cascade of easily manageable control problems. At each step, tracking accuracy is progressively enhanced through least-squares error minimization.
The foundation of the cascade algorithm, developed over a series of studies [4,5,6], lies in the theory of geometric regulation [1,7,8,9,10,11] and related works. Many of the authors’ previous studies have assumed finite-dimensional input and output spaces, with the additional constraint that the system transfer function at zero is invertible. A significant contribution of this work is the removal of such restrictions, enabling the algorithm to handle input and output spaces of arbitrary dimensionality. This generality allows the method to produce approximate controls even for infinite-dimensional input and output spaces and for over- or under-determined systems in the finite-dimensional case.
In this work, we do not claim that the resulting controls are optimal in the classical sense, as achieved through LQR or LQG methodologies [12,13], since we do not consider a quadratic time integral cost function. Instead, our goal is to generate suitable controls that enable the derivation of error estimates under straightforward smoothness assumptions and large-time supremum norm bounds for both the reference and disturbance signals. This approach accommodates extremely general signals that may have transient oscillations for a time but ultimately transition into smooth bounded motion. Most importantly, we do not require that these signals be generated by an exogenous system as in the classical theory.
In this work, we adopt a cost function based on the large-time supremum norm of the error rather than on an integral quadratic form. The resulting control algorithm is computationally efficient, simple to implement, and geared toward accurate long-term tracking and disturbance rejection. A notable advantage of this formulation is that the error at each step can be written in terms of explicit iterated convolution integrals (see [4,5,6]), which yield a priori estimates that may be computed offline. This perspective shifts the focus from traditional optimality to long-term performance, and it bears similarities to the discounted cost formulations used for infinite-horizon tracking problems [14].
Through several examples, we observe that the sequence of controls derived from the proposed algorithm converges in time to the classical optimal control solution. Here we note that standard optimal control problems for parabolic systems typically involve backward parabolic adjoint systems. Solving these problems often requires iterative forward–backward solutions or space–time discretization [15,16,17,18], which are computationally intensive. To alleviate this burden, approaches such as Receding-Horizon Control [19,20], ad hoc time-stepping schemes [21], and preconditioning techniques [22] have been proposed.
Finally, this study focuses on bounded input and output operators. However, the methodology can be extended to cases involving unbounded inputs and outputs, such as boundary control and sensing. Such generalizations require additional effort and will be addressed in future work.
The paper is organized as follows: Section 2 provides a statement of the problem, describing the plant as a controlled dynamical system with control inputs and outputs defined in an infinite-dimensional Hilbert space. The primary Control Objectives addressed in this work are formulated in Problems 1 and 2.
In Section 3, we present the approximate cascade controller. The initial step, referred to as the 0th-order controller, is given in (14) and (15), while the general jth-order controller is defined in (17) and (18). The motivation and derivation of the controller are discussed in Appendix A. Although not immediately apparent, this control strategy is rooted in obtaining approximate solutions to the regulator equations in the context of geometric regulation. Due to space limitations, details of the derivation process (e.g., β regularization) are omitted here but thoroughly covered in prior works [1,4,5,6,23].
In Appendix A, Equation (A6), we present an initial form for the 0th-order controller and show why it is unsuitable. By substituting this control into equation (A1) and applying regularization and simplification, we obtain the final form (A17)–(A19), which leads to the results in (15).
At first glance, solving the dynamical systems in the controllers may seem numerically challenging. However, Section 4 presents a straightforward and practical algorithm for solving all cascade dynamical systems. This algorithm enables solutions using off-the-shelf finite element software, eliminating the need for complex numerical programming. In this section, we focus on the final results, leaving the detailed derivation of the Formulas (21)–(23) for the 0th-order equations and (25)–(27) for the jth-order equations to Appendix B.
A significant advantage of our control strategy is its ability to provide explicit formulas for the errors at each cascade step. These formulas express the errors in terms of the reference signals, disturbances, system parameters, the regularization parameter $\beta$, and the Tikhonov regularization parameter $\alpha$. In Section 5, we provide these explicit error formulas: (35) for the 0th step and (38) and (39) for the general $n$th-order step, where $n \ge 1$. As with the dynamical systems, the derivations of these results involve lengthy discussions, which are provided in Appendix D.
Section 6 presents a detailed analysis of the errors in the special case of finite-dimensional input and output spaces. In this scenario, sending the Tikhonov regularization parameter α to zero is possible, effectively replacing the Tikhonov regularization with the Moore–Penrose pseudoinverse. This approach enables a detailed examination of errors at each cascade step and an analysis of the overall cascade error structure. The ability to send α to zero critically depends on Lemma 1, whose proof is provided in Appendix C.
In Section 7, we consider a common situation in applications to partial differential equations with infinite-dimensional input and output spaces, in which the input map is given by an extension operator and the output map is given by a restriction operator. Here, the state operator $A$ is a coercive elliptic operator acting in the state space $L^2(\Omega)$, where $\Omega$ is a bounded domain in $\mathbb{R}^n$, and the control input $u(x,t)$ acts on a subdomain $\Omega_b \subset \Omega$. In contrast, the output $y(x,t)$ corresponds to the solution of the differential equation evaluated on a subset $\Omega_c \subset \Omega$. Lemma 3 in Section 7 shows that, in the case of infinite-dimensional input and output spaces, the only way one can achieve error zeroing is in the colocated case, i.e., $\Omega_b = \Omega_c$.
In Section 8, we present numerical simulations covering a broad range of scenarios with both finite- and infinite-dimensional input/output spaces. Example 1 considers a two-input/two-output system, while Example 2 addresses an under-determined case where error zeroing fails. The remainder of our numerical examples are concerned with the most challenging case of infinite-dimensional input and output spaces. We then study a one-dimensional parabolic system with non-colocated input/output spaces (Example 3) and its colocated counterpart (Example 4). Finally, parabolic problems are examined in a two-dimensional spatial domain: a non-colocated case (Example 5) and a colocated case (Example 6).
As discussed above, many lengthy technical calculations have been moved to Appendix A, Appendix B, Appendix C, Appendix D, and Appendix E so as not to detract from the main points of the paper.

2. Statement of Problem

We consider a control system with state operator $A$, assumed to be sectorial with compact resolvent, that generates an exponentially stable $C_0$ semigroup on the Hilbert space $Z$ with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\| = \langle \cdot, \cdot \rangle^{1/2}$. Here, $B_{in}: U \to Z$ is the bounded input operator, and $C: Z \to Y$ is the bounded output operator, where $U$ and $Y$ are the control input and output Hilbert spaces, respectively. Also, $B_d: D \to Z$ is the bounded disturbance input operator, where $D$ is the disturbance input space.
In this work, we emphasize that only bounded input, disturbance, and output operators are considered. However, after many numerical simulations, we have observed that the methodology presented here can be extended to cases involving unbounded inputs, disturbances, and outputs, such as boundary control and sensing. The analysis with such operators would require additional effort and will be addressed in future work.
Consider the control system in the Hilbert state space $Z$:
$$ z_t = A z + B_d d + B_{in} u, \tag{1} $$
$$ z(0) = z_0, \tag{2} $$
$$ y(t) = C z(t). \tag{3} $$
Problem 1.
Given a reference signal $r \in C_b^\infty(\mathbb{R}^+, Y)$ and disturbance $d \in C_b^\infty(\mathbb{R}^+, D)$, find a control $u$ that minimizes the limsup of the error, defined by $e(t) = r(t) - Cz(t)$. Namely, we want to minimize
$$ \overline{\lim}\, \|e(t)\|_Y. $$
Here, $C_b^\infty(\mathbb{R}^+, X)$ denotes the space of infinitely differentiable $X$-valued functions with all derivatives bounded on $\mathbb{R}^+$. More generally, we will also consider $C_b^N(\mathbb{R}^+, X)$, the Banach space of $N$-times continuously differentiable $X$-valued functions with all derivatives bounded on $\mathbb{R}^+$, and the notation $\overline{\lim}$ means
$$ \overline{\lim}\, \|\phi\|_X = \lim_{T \to \infty} \sup_{t \ge T} \|\phi(t)\|_X, \quad \text{for } \phi \in C(\mathbb{R}^+, X). $$
It is desirable to choose the control $u(t)$ so that the error $e(t) = r(t) - Cz(t)$ vanishes asymptotically, which is often unachievable in classical optimal control.
We aim for a general setting, allowing $U$ and $Y$ to be either finite- or infinite-dimensional input and output spaces. In the infinite-dimensional case, the problem becomes classically ill-posed, as the transfer function $G = C(-A)^{-1}B_{in}$ is not boundedly invertible. In particular, $G$ is a compact operator, due to our assumptions on $A$, $B_{in}$, and $C$; hence, it is non-invertible. Even in the finite-dimensional case, systems may be over- or under-determined, leading to additional challenges.
The Moore–Penrose pseudoinverse provides an ideal approximate control when available. However, it may not exist if the transfer function has a non-trivial null space or lacks a closed range. Such problems are generally challenging to solve, and solving for the control often leads to ill-posed inverse problems.
Therefore, to regularize the ill-posed problems, we employ Tikhonov regularization with a small penalty $\alpha > 0$. The main theoretical problem we aim to address is the following.
Problem 2
(Control Objective). For the system (1)–(3), with given reference signal $r \in C_b^N(\mathbb{R}^+, Y)$ and disturbance $d \in C_b^N(\mathbb{R}^+, D)$, our objective is to choose a control $u$ that minimizes the limsup of the norm of the tracking error $e(t) = r(t) - Cz(t)$.
Consider $\alpha > 0$, and define
$$ J_\alpha(z_0, r, d, u) := \overline{\lim}\, \|e\|_Y^2 + \alpha\, \|u\|_U^2. $$
For given $z_0$, $r$, and $d$, our objective is to find $u$ attaining
$$ \inf\, \{ J_\alpha(u) : u \in S \}, $$
where $S$ is the constraint set
$$ S = L^\infty(0, \infty; U). $$
A direct approach to Control Problem 2 again results in an ill-posed dynamical system. We introduce a second stabilization parameter $\beta \in (0,1)$ to regularize the system, producing an approximate control $u^0$. It is important to note that $u^0$ differs from $u$ and may deviate significantly, potentially leading to an error $e^0$ larger than the desired $e$. This observation motivates the development of a cascade of control problems, where the control objective at each iteration is to reduce the error generated in the previous iteration, eventually converging to $e$.
The rationale and details behind the introduction of the regularization parameters α and β are thoroughly explained in Appendix A. In the next section, we provide key definitions and a concise summary of the cascade controller algorithm described in Appendix A.
In the definition of the cost functional $J_\alpha$ in Problem 2, the parameter $\alpha$ has been taken as a scalar for simplicity. In a more general setting, $\alpha$ can be replaced by a symmetric positive definite weighting matrix $\mathcal{A}$, thereby assigning distinct relative weights to the different control components or introducing cross-coupling between them. Analogously, a weighting matrix $\Gamma$ can be introduced in the error norm to emphasize or de-emphasize specific components of the tracking error. In the infinite-dimensional framework, these ideas naturally extend to non-uniform weight functions or tensor-valued operators acting on the input and output spaces, allowing spatially varying or direction-dependent penalization. We emphasize that such matrix weights and non-uniform weighting functions can be seamlessly incorporated into the definition of the underlying norms, rendering the present formulation immediately extensible to these more general cases. In the remainder of the paper and in all numerical examples, we restrict attention to identity and uniform weights; however, this choice entails no loss of generality, and all theoretical results remain valid for the weighted case.

3. Approximate Cascade Controller

We begin by setting some notation. Denote by $G$ the transfer function for $(A, B_{in}, C)$ evaluated at $0$; namely, $G: U \to Y$ is given by
$$ G = C(-A)^{-1} B_{in}. $$
Recall that if $U$ and $Y$ are infinite-dimensional, then even if $G$ is injective ($N(G) = \{0\}$), $G^*G$ has an unbounded inverse; thus, the Moore–Penrose pseudoinverse cannot be directly computed. Indeed, since $G^*G$ is a compact operator, its inverse is unbounded unless the input and output spaces are finite-dimensional. In what follows, the $*$ notation refers to the Hilbert space adjoint operator; definitions for the bounded case are given in Chapter 2, page 31, and for the unbounded case in Chapter 10, page 305, of [24]; see also T. Kato [25].
For $\alpha > 0$, define
$$ R_\alpha := (\alpha I + G^* G)^{-1} G^* : Y \to U, \tag{9} $$
which forms the basis for applying Tikhonov regularization, as discussed in detail in [26,27,28,29,30]. Then, for $0 < \beta < 1$, define
$$ R_{\alpha,\beta} := (\alpha I + \beta G^* G)^{-1} G^*. \tag{10} $$
Next, we simplify the notation by defining
$$ B_{\alpha,\beta} := B_{in} R_{\alpha,\beta}, \tag{11} $$
$$ A_{\alpha,\beta} := A - (1-\beta) B_{\alpha,\beta} C, \tag{12} $$
$$ I_{\alpha,\beta} := I + \beta B_{\alpha,\beta} C A^{-1}. \tag{13} $$
A detailed discussion on the appropriate choice of $\beta$ will be provided later, ensuring that the operator $A_{\alpha,\beta}$, as defined in (12), generates an exponentially stable analytic semigroup.
With the above notations, we can introduce the first step in the cascade controller given by the dynamical system
$$ z_t^0 = A_{\alpha,\beta} z^0 + I_{\alpha,\beta} B_d d(t) + B_{\alpha,\beta} r(t), \qquad z^0(0) = z_0, \tag{14} $$
which produces the first approximate control
$$ u^0(t) = R_{\alpha,\beta} \left( r(t) - (1-\beta) C z^0 + \beta C A^{-1} B_d d(t) \right). \tag{15} $$
We then obtain the first approximate error
$$ e^0(t) = r(t) - C z^0(t). \tag{16} $$
The derivation of the controller system (14) and (15) is presented in Appendix A. We note that (14) and (15) constitute a dynamic feedback controller that depends only on $r(t)$, $d(t)$, and $z_0$.
In general, the control $u^0$ will not be accurate enough to solve our tracking problem, due to the effect of the $\beta$ regularization parameter. Therefore, we propose a sequence, or cascade, of controllers, where each stage improves the approximate control and reduces the error. Each stage solves a new approximate tracking problem, using the previous step's error as the reference signal.
Namely, for a fixed $n \in \mathbb{Z}^+$ and $1 \le j \le n$, define new variables $z^j$, $u^j$, and the $j$th cascade controller as follows:
$$ z_t^j = A_{\alpha,\beta} z^j + B_{\alpha,\beta} e^{j-1}(t), \qquad z^j(0) = 0, \tag{17} $$
which produces a new control law $u^j$ given by
$$ u^j = R_{\alpha,\beta} \left( e^{j-1}(t) - (1-\beta) C z^j \right), \tag{18} $$
and this, in turn, produces a new cascade error $e^j$ given by
$$ e^j(t) = e^{j-1}(t) - C z^j(t). $$
Furthermore, all the coefficients in the controllers (14) and (15), as well as (17) and (18), can be "precomputed" in terms of $R_{\alpha,\beta}$, $\beta$, $B_{in}$, $C$, $B_d$, and $A^{-1} B_d$.
The $n$th step cascade control used in the plant (1)–(3) is given by the sum
$$ u^n = \sum_{j=0}^{n} u^j. $$
Since the initial conditions in (2) and (14) coincide and the semigroup generated by $A$ is assumed to be exponentially stable, linearity implies that if the control $u^n$ is used in the plant (1), then the state $z$ and error $e(t)$ satisfy
$$ z(t) = \sum_{j=0}^{n} z^j(t), \quad \text{and} \quad e(t) = e^n(t) \ \text{ for all } t \ge 0. $$
A key advantage of cascade controllers is that they enable a detailed analysis of the errors $e^j(t)$, leading to explicit formulas for the errors. For finite-dimensional input and output spaces, these formulas yield concise error estimates, as presented in Theorem 1.
Remark 1.
The term cascade here does not refer to the classical two-loop cascade control architecture used in process or networked control systems [31,32]. Instead, it denotes an iterative correction scheme in which each stage of the algorithm uses the tracking error from the previous stage as input, progressively refining the control and reducing the error within each cascade step.
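To make the iteration concrete, the following is a small finite-dimensional sketch of the cascade (14), (15), (17), and (18) evaluated at steady state for a constant reference and zero disturbance. For constant signals, the steady states of (14) and (17) reduce to linear solves. All matrices and parameter values below are illustrative assumptions, not taken from the paper's examples.

```python
import numpy as np

# Finite-dimensional sketch of the cascade iteration at steady state for a
# constant reference and zero disturbance.  All matrices and parameters are
# illustrative assumptions, not taken from the paper's examples.
A = np.array([[-2.0, 0.3], [0.1, -3.0]])    # stable state matrix
B_in = np.array([[1.0, 0.2], [0.0, 1.0]])   # input operator
C = np.array([[1.0, 0.0], [0.1, 1.0]])      # output operator
alpha, beta = 1e-4, 0.5

# Transfer function at zero and the regularized operators of Section 3
G = C @ np.linalg.solve(-A, B_in)                                 # G = C(-A)^{-1} B_in
R_ab = np.linalg.solve(alpha * np.eye(2) + beta * G.T @ G, G.T)   # R_{alpha,beta}
B_ab = B_in @ R_ab                                                # B_{alpha,beta}
A_ab = A - (1.0 - beta) * B_ab @ C                                # A_{alpha,beta}

r = np.array([1.0, -0.5])                   # constant reference signal

# 0th step: steady state of (14), then the error e^0 = r - C z^0
z0 = np.linalg.solve(-A_ab, B_ab @ r)
e = r - C @ z0
errs = [np.linalg.norm(e)]

# jth steps: steady states of (17), each driven by the previous error
for _ in range(4):
    zj = np.linalg.solve(-A_ab, B_ab @ e)
    e = e - C @ zj
    errs.append(np.linalg.norm(e))

print(errs)
```

With $G$ invertible and $\alpha$ small, each pass contracts the steady-state error by a factor of order $\alpha$, so the printed norms decrease geometrically, in line with the error-zeroing behavior discussed in Section 6.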

4. A Practical Solution Algorithm for the Controllers

This section presents a simple algorithm for solving the dynamical systems (14) and (17). The primary difficulty lies in handling the expression $R_{\alpha,\beta}$. One common approach is to use the Singular Value Decomposition (SVD) of the bounded linear operator $G^* G$. The SVD is useful for theoretical results; however, explicitly determining the singular values and corresponding eigenfunctions in the infinite-dimensional case is rarely feasible. Therefore, it is advantageous to develop a numerical algorithm that can be implemented using standard off-the-shelf finite element software.
Appendix B presents a procedure that leads to the following algorithm for solving (14) and obtaining $u^0$:
$$ z_t^0 = A z^0 + B_d d + \frac{1}{\alpha} B_{in} B_{in}^* X^0, \tag{21} $$
$$ 0 = A^* X^0 + C^* \left( r - (1-\beta) C z^0 - \beta C Y^0 \right), \tag{22} $$
$$ 0 = A Y^0 + B_d d + \frac{1}{\alpha} B_{in} B_{in}^* X^0, \tag{23} $$
where the approximate control $u^0$ is given by
$$ u^0 = \frac{1}{\alpha} B_{in}^* X^0. \tag{24} $$
Similarly, for the systems (17) with controls (18), we have
$$ z_t^j = A z^j + \frac{1}{\alpha} B_{in} B_{in}^* X^j, \tag{25} $$
$$ 0 = A^* X^j + C^* \left( e^{j-1} - (1-\beta) C z^j - \beta C Y^j \right), \tag{26} $$
$$ 0 = A Y^j + \frac{1}{\alpha} B_{in} B_{in}^* X^j, \tag{27} $$
where the approximate cascade controls $u^j$ are given by
$$ u^j = \frac{1}{\alpha} B_{in}^* X^j. \tag{28} $$
Finally, the $n$th step control to be used in the plant is given by
$$ u^n = \frac{1}{\alpha} B_{in}^* \sum_{j=0}^{n} X^j. \tag{29} $$
At each cascade level, we solve one parabolic PDE for $z^i(x,t)$ together with the constraint equations for the coupled fields $(X^i(x,t), Y^i(x,t))$. Although $X^i$ and $Y^i$ lack explicit time derivatives, they are algebraically coupled to $z^i(x,t)$ and are computed simultaneously via standard time marching from $t_{k-1}$ to $t_k$. We implemented this on several platforms, including COMSOL Multiphysics 6.1 and in-house FEM software (MATLAB R2025b, and FEMuS, available on GitHub, https://github.com/eaulisa/MyFEMuS, accessed on 11 November 2025), all showing similar behavior; all results reported in Section 8 use COMSOL, and the implementation is straightforward. A detailed scheme is given in Figure 1. Unlike classical optimal control, which requires a backward-in-time parabolic adjoint equation over $[0, T]$ [17,18], our scheme advances only forward in time, simplifying computation.
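In finite dimensions, the equivalence between the marching system (21)–(23) and the closed-form controller (15) can be checked directly: given any state $z$, solving the algebraic subsystem (22)–(23) for $(X^0, Y^0)$ and forming $u^0 = \frac{1}{\alpha}B_{in}^* X^0$ should reproduce the control given by (15). The sketch below does exactly this; the randomly generated matrices are illustrative assumptions.

```python
import numpy as np

# Consistency check, in finite dimensions, between the marching system
# (21)-(23) and the closed-form controller (15).  Matrices are random
# illustrative assumptions.
rng = np.random.default_rng(0)
n = 3
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well inside the left half-plane
B_in = rng.standard_normal((n, n))
B_d = rng.standard_normal((n, 1))
C = rng.standard_normal((n, n))
alpha, beta = 1e-2, 0.4

z = rng.standard_normal(n)    # current state (arbitrary)
r = rng.standard_normal(n)    # reference sample
d = rng.standard_normal(1)    # disturbance sample

# Solve the algebraic subsystem (22)-(23) for (X^0, Y^0) with z held fixed:
#   0 = A^T X + C^T ( r - (1-beta) C z - beta C Y )
#   0 = A Y + B_d d + (1/alpha) B_in B_in^T X
M = np.block([[A.T, -beta * C.T @ C],
              [B_in @ B_in.T / alpha, A]])
rhs = np.concatenate([-C.T @ (r - (1.0 - beta) * C @ z), -B_d @ d])
X = np.linalg.solve(M, rhs)[:n]
u_alg = B_in.T @ X / alpha    # control produced by the marching algorithm

# Closed-form control (15): u^0 = R_{alpha,beta}( r - (1-beta) C z + beta C A^{-1} B_d d )
G = C @ np.linalg.solve(-A, B_in)
R_ab = np.linalg.solve(alpha * np.eye(n) + beta * G.T @ G, G.T)
u_form = R_ab @ (r - (1.0 - beta) * C @ z + beta * C @ np.linalg.solve(A, B_d @ d))
print(np.max(np.abs(u_alg - u_form)))
```

The two controls agree to rounding error, which is the mechanism that lets the coupled $(z, X, Y)$ solve replace an explicit computation of $R_{\alpha,\beta}$ in FEM software.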

5. Explicit Formulas for the Errors

The analysis of the error $e^0(t)$ is carried out in Appendix D by applying the variation of parameters formula to the dynamical system (14), followed by integration by parts.
Define the operator
$$ L_\alpha := (I - G R_\alpha), \tag{30} $$
where $R_\alpha$ is defined in (9). Denote by $L$ the orthogonal projection of the output space $Y$ onto the null space of $G G^*$, $N(G G^*) = N(G^*) = R(G)^\perp$. Then, $(I - L)$ is the orthogonal projection onto $\overline{R(G)}$, and $I = L + (I - L)$. Using this orthogonal decomposition, it follows that $L_\alpha$ can be written as
$$ L_\alpha = L + M_\alpha, \qquad M_\alpha := \alpha (\alpha I + G G^*)^{-1} (I - L). \tag{31} $$
Notice that
$$ Y = N(G^*) \oplus \tilde{Y}, \quad \text{where } \tilde{Y} = (I - L) Y. $$
The key properties of the operator $M_\alpha : Y \to Y$ are contained in the following lemma.
Lemma 1.
Recall our formula for $M_\alpha$ from (31). For any $\psi \in Y$ (whether $Y$ is finite- or infinite-dimensional), we have $(I - L)\psi \in \tilde{Y}$, and
$$ \lim_{\alpha \to 0} M_\alpha \psi = 0. $$
In the case where the number of inputs and outputs is finite, we can say more. Namely, there exists $\sigma_c > 0$ such that
$$ \|M_\alpha\| \le \frac{\alpha}{2 \sigma_c}, $$
so that $M_\alpha$ converges to zero in the operator norm as $\alpha \to 0$.
The proof of Lemma 1 is provided in Appendix C.
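A small finite-dimensional sketch illustrates Lemma 1: for a tall matrix $G$, the projection $L$ onto $N(G^*)$ annihilates $M_\alpha$, and the operator norm of $M_\alpha$ from (31) decays linearly in $\alpha$. The particular $3 \times 2$ matrix is an illustrative assumption.

```python
import numpy as np

# Sketch of Lemma 1 for a finite-dimensional G whose adjoint has a
# nontrivial null space.  The 3x2 matrix is an illustrative assumption.
G = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])       # U = R^2, Y = R^3, so R(G) is a plane in Y

# L = orthogonal projection of Y onto N(G^*) = R(G)^perp
L = np.eye(3) - G @ np.linalg.pinv(G)

norms = []
for alpha in [1e-1, 1e-2, 1e-3]:
    # M_alpha = alpha (alpha I + G G^*)^{-1} (I - L), as in (31)
    M_a = alpha * np.linalg.solve(alpha * np.eye(3) + G @ G.T, np.eye(3) - L)
    assert np.allclose(L @ M_a, 0.0, atol=1e-10)   # L M_alpha = 0
    norms.append(np.linalg.norm(M_a, 2))

print(norms)   # operator norms decay approximately linearly in alpha
```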
Let us define
$$ K(t) := C(-A_{\alpha,\beta})^{-1} e^{A_{\alpha,\beta} t} B_{\alpha,\beta}, \qquad K_d(t) := C(-A_{\alpha,\beta})^{-1} e^{A_{\alpha,\beta} t} I_{\alpha,\beta} B_d, $$
and
$$ S_0 := I, \qquad S_i := \left( M_\alpha + K * \tfrac{d}{dt} \right) S_{i-1} = \left( M_\alpha + K * \tfrac{d}{dt} \right)^i, \quad i \in \mathbb{Z}^+. $$
Note that the operators $M_\alpha$ and $K * \frac{d}{dt}$ do not commute. Nevertheless, employing the formulas presented in Appendix D, we can write
$$ \begin{aligned} e^0(t) &= P(t) + L_\alpha \left[ r(t) + C A^{-1} B_d d(t) \right] + (K * r')(t) + (K_d * d')(t) \\ &= P(t) + L_\alpha r(t) + (K * r')(t) + L_\alpha C A^{-1} B_d d(t) + (K_d * d')(t) \\ &= P(t) + (L + S_1) r(t) + L_\alpha C A^{-1} B_d d(t) + (K_d * d')(t) \\ &= e_r^0(t) + e_d^0(t), \end{aligned} \tag{35} $$
with
$$ e_r^0(t) := P(t) + (L + S_1) r(t), $$
$$ e_d^0(t) := P(t) + L_\alpha C A^{-1} B_d d(t) + (K_d * d')(t). $$
Here and in what follows, $P(t)$ denotes a generic exponentially decaying function that depends on the initial data and other known variables. At each iteration, $P(t)$ may change, but for simplicity, we do not track these changes and simply redefine $P(t)$ accordingly.
From (35), we note that the operator $L_\alpha = I - G R_\alpha$ in (30) plays a central role in the error $e^0(t)$, as well as in the errors $e^n(t)$ for $n \ge 1$. Appendix D, specifically Lemmas A2 and A3, contains the derivation of the following expressions for $e^n(t)$. For all $n \ge 1$, we have
$$ e^n(t) = e_r^n(t) + e_d^n(t), $$
with
$$ e_r^n(t) = P(t) + \left[ L \sum_{p=0}^{n} S_p + S_{n+1} \right] r(t), \tag{38} $$
$$ e_d^n(t) = P(t) + \left[ L \sum_{p=0}^{n-1} S_p + S_n \right] e_d^0(t). \tag{39} $$
Since $L M_\alpha = 0$ (cf. Remark A1 in Appendix D), it is possible to rewrite the generic Formulas (38) and (39) using the following relation:
$$ L S_i = L \left( M_\alpha + K * \tfrac{d}{dt} \right) S_{i-1} = L \left( K * \tfrac{d}{dt} \right) S_{i-1}, \quad i \in \mathbb{Z}^+. $$

6. Error Estimates for Finite-Dimensional Input and Output Spaces

For Problem 1, Tikhonov regularization, as described in Appendix A, is not the only possible optimization method. An alternative approach is the least-squares minimal solution given by the Moore–Penrose pseudoinverse. However, Tikhonov regularization appears to be the better choice for control systems governed by partial differential equations with infinite-dimensional input and output spaces $U$ and $Y$. For finite-dimensional $U$ and $Y$, which are common in many applications, it is useful to revisit the constructions of Section 3, since $R_\alpha$, as defined in (9), then admits a simpler formula.
From (9), assuming $(G^* G)$ is invertible, as $\alpha \to 0$ we have
$$ \lim_{\alpha \to 0} R_\alpha = (G^* G)^{-1} G^* =: G^\dagger, $$
where $G^\dagger$ denotes the Moore–Penrose pseudoinverse. According to Lemma 1, $M_\alpha \to 0$ as $\alpha \to 0$, so $L_\alpha = L + M_\alpha$, defined in (31), converges to $L$.
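This limit is easy to observe numerically: for a full-column-rank $G$, the Tikhonov operators $R_\alpha$ of (9) approach the pseudoinverse at a rate proportional to $\alpha$. The matrix below is an illustrative assumption.

```python
import numpy as np

# Tikhonov approximations R_alpha from (9) converging to the Moore-Penrose
# pseudoinverse as alpha -> 0, for an over-determined (tall, full column
# rank) G.  The matrix is an illustrative assumption.
G = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, 1.0]])
G_dag = np.linalg.pinv(G)        # equals (G^*G)^{-1}G^* for full column rank

gaps = []
for alpha in [1e-1, 1e-3, 1e-5]:
    R_a = np.linalg.solve(alpha * np.eye(2) + G.T @ G, G.T)
    gaps.append(np.linalg.norm(R_a - G_dag, 2))

print(gaps)   # the gap shrinks proportionally to alpha
```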
Our main goal in this section is to estimate $\overline{\lim}\, \|e^n(t)\|$. To this end, assuming that we have passed $\alpha$ to zero, let us define
$$ \bar e_n(t) := \bar e_n^r(t) + \bar e_n^d(t) = \lim_{\alpha \to 0} e_r^n(t) + \lim_{\alpha \to 0} e_d^n(t), $$
with $e_r^n(t)$ defined in (38) and $e_d^n(t)$ defined in (39). In Appendix E, Equation (A68), we show that
$$ \bar e_n = \lim_{\alpha \to 0} (e_r^n + e_d^n) = P + L \left[ \sum_{j=0}^{n} (K*)^j r^{(j)} + C A^{-1} B_d d + \sum_{j=1}^{n} (K*)^{j-1} K_d * d^{(j)} \right] + (K*)^{n+1} r^{(n+1)} + (K*)^{n} K_d * d^{(n+1)}. $$
For $j \ge 1$, define
$$ H_0(t) = r(t) + C A^{-1} B_d d(t) \quad \text{and} \quad H_j(t) = (K*)^j r^{(j)}(t) + (K*)^{j-1} K_d * d^{(j)}(t), \tag{42} $$
and then the error $\bar e_n(t)$ can be written explicitly as
$$ \bar e_n(t) := P(t) + L \sum_{j=0}^{n} H_j(t) + H_{n+1}(t). \tag{43} $$
Remark 2.
In deriving the error formula (43) for $\alpha \to 0$, we have not explicitly included the dependence of the operators $K$ and $K_d$ on $\alpha$. Explicit formulas for these operators are provided later in Equation (47) and Theorem 1.
In the case of finite-dimensional input and output spaces, the projection operator $L$ takes the form
$$ L = (I - G G^\dagger) = (I - G (G^* G)^{-1} G^*), $$
and we have
$$ G^* (I - G G^\dagger) = (G^* - G^*) = 0. $$
Remark 3.
One important special case occurs when $n_b = n_c$ and $G$ is invertible. In this case,
$$ G^\dagger = (G^* G)^{-1} G^* = G^{-1} (G^*)^{-1} G^* = G^{-1}. $$
This case has been treated in detail in the earlier works [1,4,5,6] and references therein. In particular, the steps required to compute the limsup of the norms of the iterated errors described below are not repeated here; rather, we refer to these works for the specific details. A critical observation in the case that $G$ is invertible is that
$$ L = (I - G G^\dagger) = (I - G G^{-1}) = 0. $$
As a result, in this case, it can be shown that the controls obtained from the iterative process are always error zeroing.
Considering the more general analysis of the errors in the case of finite-dimensional input and output spaces, we now estimate
$$ \overline{\lim}\, \|\bar e_n(t)\| \le \overline{\lim}\, \|\bar e_n^r(t)\| + \overline{\lim}\, \|\bar e_n^d(t)\|. $$
For both $\bar e_n^r(t)$ and $\bar e_n^d(t)$, estimates of these expressions require estimating the limsup of iterated convolutions. This procedure has already been addressed, and the interested reader can consult [1,4,5,6,23]. Here, we present only the essential results.
Definition 1.
For $I \subset \mathbb{R}^+$ a fixed interval, the space $C_b^N(I, \mathbb{R})$, consisting of real-valued functions that are $N$-times continuously differentiable on $I$ and whose derivatives up to order $N$ are bounded, is a Banach space with the norm
$$ \|\varphi(\cdot)\|_{I,N} = \max_{0 \le j \le N} \sup_{t \in I} |\varphi^{(j)}(t)|. $$
In the vector-valued case, let $\varphi(\cdot) = [\varphi_1(\cdot), \varphi_2(\cdot), \ldots, \varphi_m(\cdot)]^T$, so that $\varphi(\cdot) \in C_b^N(I, \mathbb{R}^m)$. We define its norm by
$$ \|\varphi(\cdot)\|_{I,N} = \sup_{1 \le i \le m} \|\varphi_i(\cdot)\|_{I,N}. $$
For any interval $I \subset \mathbb{R}^+$, the reference $r(\cdot)$ and disturbance $d(\cdot)$ signals are smooth, bounded, vector-valued functions in $C_b^N(I, Y)$ and $C_b^N(I, D)$, respectively. We also note that, for any $0 \le j < N$,
$$ \|r\|_{I,j} \le \|r\|_{I,j+1} \le \|r\|_{I,N} \quad \text{and} \quad \|d\|_{I,j} \le \|d\|_{I,j+1} \le \|d\|_{I,N}. $$
Remark 4.
In establishing our estimates for the limsup of the errors, we are particularly interested in the supremum norm taken over time intervals of the form $I = [T, \infty)$ for large $T > 0$. This allows us to consider reference and disturbance signals that may have large excursions for some time but eventually settle into more stable oscillatory behavior. To simplify the notation in this case, we write
$$ \overline{\lim}\, \|\varphi(\cdot)\|_N = \lim_{T \to \infty} \|\varphi(\cdot)\|_{[T,\infty),N}. $$
To describe the error estimates in the special case considered in this section, we also need the form of various operators as $\alpha \to 0$. From (12), we have
$$ \lim_{\alpha \to 0} A_{\alpha,\beta} = \lim_{\alpha \to 0} \left[ A - (1-\beta) B_{in} (\alpha I + \beta G^* G)^{-1} G^* C \right] = A - \frac{1-\beta}{\beta} B_{in} G^\dagger C, $$
and from (13), we have
$$ \lim_{\alpha \to 0} I_{\alpha,\beta} = \lim_{\alpha \to 0} \left[ I + \beta B_{in} (\alpha I + \beta G^* G)^{-1} G^* C A^{-1} \right] = I + B_{in} G^\dagger C A^{-1}. $$
This leads to the definitions
$$ A_\beta := A - \frac{1-\beta}{\beta} B_{in} G^\dagger C, \qquad I_0 := I + B_{in} G^\dagger C A^{-1}. \tag{47} $$
Using these formulas, we obtain the form of $K(t)$ and $K_d(t)$ for $\alpha = 0$ given in (48) in the following theorem.
Theorem 1.
Let
$$ K(t) = \frac{1}{\beta}\, C (-A_\beta)^{-1} e^{A_\beta t} B_{in} G^\dagger, \qquad K_d(t) = C (-A_\beta)^{-1} e^{A_\beta t} I_0 B_d, \tag{48} $$
where $A_\beta$ and $I_0$ are given in (47), and set
$$ D = \int_0^\infty \|K(t)\|\, dt, \qquad D_d = \int_0^\infty \|K_d(t)\|\, dt. $$
For any $N \ge 0$ and any reference and disturbance signals $r(\cdot) \in C_b^N(\mathbb{R}^+, \mathbb{R}^{n_b})$ and $d(\cdot) \in C_b^N(\mathbb{R}^+, \mathbb{R}^{n_c})$, define
$$ \mathcal{D} = \frac{D_d}{D}, \qquad C_N = \overline{\lim}\, \|r\|_N + \mathcal{D}\, \overline{\lim}\, \|d\|_N. $$
Then, we have
$$ \overline{\lim}\, \|H_N(t)\| \le D^N C_N. $$
Proof. 
Using the definition of $H_N$ in (42), we have
$$ \overline{\lim}\, \|H_N(t)\| \le \overline{\lim}\, \|(K*)^N r^{(N)}(t)\| + \overline{\lim}\, \|(K*)^{N-1} K_d * d^{(N)}(t)\| \le D^N\, \overline{\lim}\, \|r\|_N + D^{N-1} D_d\, \overline{\lim}\, \|d\|_N = D^N C_N. $$
The proof that
$$ \overline{\lim}\, \|(K*)^N r^{(N)}(t)\| \le D^N\, \overline{\lim}\, \|r\|_N \quad \text{and} \quad \overline{\lim}\, \|(K*)^{N-1} K_d * d^{(N)}(t)\| \le D^{N-1} D_d\, \overline{\lim}\, \|d\|_N $$
involves a lengthy series of calculations with estimates of repeated convolution integrals. The details of this analysis can be found in [5]. □
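The building block of these estimates is the elementary bound $\sup_t |(K * r')(t)| \le D\, \sup_t |r'(t)|$ with $D = \int_0^\infty \|K(t)\|\, dt$, which can be checked numerically in a scalar sketch. The exponential kernel and the signal below are illustrative assumptions.

```python
import numpy as np

# Scalar sketch of the convolution estimates behind Theorem 1:
# sup_t |(K * r')(t)| <= D sup_t |r'(t)|, with D = int_0^inf |K(t)| dt.
# The kernel and signal below are illustrative assumptions.
a, c = 1.5, 0.8
t = np.linspace(0.0, 40.0, 40001)
dt = t[1] - t[0]
K = c * np.exp(-a * t)                        # exponentially decaying kernel
D = c / a                                     # exact value of int_0^inf |K|

r_dot = 2.0 * np.cos(2.0 * t) - 1.5 * np.sin(5.0 * t)   # derivative of a reference signal

conv = np.convolve(K, r_dot)[: t.size] * dt   # (K * r')(t) on the grid
lhs = np.max(np.abs(conv))
rhs = D * np.max(np.abs(r_dot))
print(lhs, rhs)                               # lhs stays below the a priori bound rhs
```

For oscillatory signals, the convolution bound is typically far from tight, which is why the geometric factor $(D\delta)^N$ in Corollary 1 is a worst-case estimate.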
A frequently encountered case in the analysis of tracking and disturbance rejection of sinusoidal signals leads directly to the following corollary.
Corollary 1.
For reference signal $r(\cdot)$ and disturbance $d(\cdot)$, assume that there exist constants $\delta > 0$ and $\bar C > 0$ such that
$$ M_N = \max\left\{ \overline{\lim}\, \|r\|_N,\ \overline{\lim}\, \|d\|_N \right\} \le \bar C \delta^N \quad \text{with} \quad \delta D < 1, $$
for all $N \in \mathbb{Z}^+$. Then
$$ C_N \le (1 + \mathcal{D})\, \bar C \delta^N, $$
and, for sufficiently large $N$, we have
$$ \overline{\lim}\, \|H_N\| \le \bar C (1 + \mathcal{D}) (D \delta)^N. $$
Therefore, from (43), we have
$$ \lim_{n \to \infty} \overline{\lim}\, \|\bar e_n\| = \overline{\lim}\, \left\| L \sum_{j=0}^{\infty} H_j \right\|, $$
where the infinite sum converges due to the geometric convergence of the $H_j$. Furthermore, in the special case that $G^\dagger = G^{-1}$, so that $L = 0$, we see that the problem is "error zeroing", i.e.,
$$ \lim_{n \to \infty} \overline{\lim}\, \|\bar e_n\| = 0. $$
In particular, we have from (52)
$$ \overline{\lim}\, \|\bar e_n(t)\| = \overline{\lim}\, \|H_{n+1}(\cdot)\| \le \bar C (1 + \mathcal{D}) (D \delta)^{n+1} \longrightarrow 0 \quad \text{as } n \to \infty. $$

7. Infinite-Dimensional Colocated Input and Output Spaces

A reasonably general situation with infinite-dimensional input and output spaces arises in the control of partial differential equations defined on a bounded domain $\Omega \subset \mathbb{R}^n$, with state space $Z = L^2(\Omega)$. In this case, the operator $A$ is often a coercive elliptic operator with smooth coefficients. We assume that the control $u = u(x,t)$ acts through a subdomain $\Omega_b \subset \Omega$, while the measured output is taken on another subdomain $\Omega_c \subset \Omega$. The control operator $B_{in}$ maps the input space $U = L^2(\Omega_b)$ to the state space $Z = L^2(\Omega)$, and the measured output operator maps $Z = L^2(\Omega)$ to the output space $Y = L^2(\Omega_c)$. In our examples involving infinite-dimensional input and output spaces, we will use the following operators to facilitate the definition of the input and output maps.
Definition 2.
Let $\Omega_a \subset \Omega$. Then, we define:
1. 
The Restriction Operator $R_a : L^2(\Omega) \to L^2(\Omega_a)$, defined for $\varphi \in L^2(\Omega)$ by
$$ R_a \varphi(x) = \varphi(x) \quad \text{for } x \in \Omega_a, $$
or, more simply, $R_a \varphi(x) = \varphi(x)|_{x \in \Omega_a}$.
2. 
The Extension Operator $E_a : L^2(\Omega_a) \to L^2(\Omega)$, defined for $\psi \in L^2(\Omega_a)$ by
$$ E_a \psi(x) = \begin{cases} \psi(x), & x \in \Omega_a, \\ 0, & x \in \Omega \setminus \Omega_a. \end{cases} $$
Using these definitions, for φ L 2 ( Ω b ) , we can write
B in φ ( x ) = E b φ ( x ) .
Similarly, for ψ Z , define
C ψ ( x ) = R c ψ ( x ) .
In addition, we define B d : L 2 ( Ω d ) L 2 ( Ω ) for φ L 2 ( Ω d ) by
B d φ ( x ) = E d φ ( x ) .
Then, the operators B in , B d , and C are bounded operators from their respective domains to their range spaces. Similarly, for the adjoint operators, we have
B in * ψ ( x ) = R b ψ ( x ) ,
C * φ ( x ) = E c φ ( x ) .
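The restriction and extension operators and their adjoint relation are easy to realize on a grid. The following sketch (our own discretization, not the authors' code; the subdomain $\Omega_a = [0.3, 0.5]$ is illustrative) implements $R_a$ and $E_a$ on a uniform grid and checks the adjoint identity $\langle E_a\psi, \varphi\rangle_{\Omega} = \langle \psi, R_a\varphi\rangle_{\Omega_a}$, which is the mechanism behind $B_{\mathrm{in}}^* = R_b$ and $C^* = E_c$:

```python
import numpy as np

# Sketch (not the authors' code): discretize Omega = [0, 1] on a uniform grid
# and realize R_a (restriction) and E_a (extension by zero) as index
# operations; the L^2 inner products use a simple quadrature weight dx.
n = 1000
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
mask = (x >= 0.3) & (x <= 0.5)          # Omega_a = [0.3, 0.5], illustrative

def R_a(phi):                            # L^2(Omega) -> L^2(Omega_a)
    return phi[mask]

def E_a(psi):                            # L^2(Omega_a) -> L^2(Omega), zero outside
    out = np.zeros(n)
    out[mask] = psi
    return out

phi = np.sin(np.pi * x)                  # arbitrary test functions
psi = np.ones(mask.sum())

# Adjoint relation <E_a psi, phi>_Omega = <psi, R_a phi>_{Omega_a}
lhs = dx * np.dot(E_a(psi), phi)
rhs = dx * np.dot(psi, R_a(phi))
print(abs(lhs - rhs))                    # ~ 0 up to rounding
```

Note also that $R_a E_a$ is the identity on $L^2(\Omega_a)$, while $E_a R_a$ is multiplication by $\chi_{\Omega_a}$ on $L^2(\Omega)$.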
In this case, we can easily express formulas for the operators $G$ and $G^*$ as
$$G = C(-A)^{-1}B_{\mathrm{in}} : L^2(\Omega_b) \to L^2(\Omega_c), \qquad G^* = B_{\mathrm{in}}^*(-A^*)^{-1}C^* : L^2(\Omega_c) \to L^2(\Omega_b).$$
In general, the sub-regions $\Omega_b$ and $\Omega_c$ may overlap, but if we assume that the interior of $\Omega_c \setminus \Omega_b$ is not empty, then $G^*$ is not injective, i.e., $N(G^*) \neq \{0\}$. In this case, the operator $L$, i.e., the orthogonal projection of $Y$ onto the null space of $GG^*$, $N(GG^*) = N(G^*) = R(G)^\perp$, in (31) is not zero. Therefore, as we have seen in the previous section, the minimization does not yield an error-zeroing solution but the least-squares solution restricted to $N(GG^*)$.
In the following, we analyze the colocated case when Ω b = Ω c .
Assumption 1
(Colocated Input–Output Case). The operator $A$ is a coercive, invertible, elliptic differential operator with smooth coefficients, satisfying homogeneous boundary conditions (in general, any combination of Dirichlet, Neumann, or Robin), and
$$\Omega_b = \Omega_c \subset \Omega^o,$$
where $\Omega^o$ denotes the interior of $\Omega$. In particular, this implies that the boundary of $\Omega_b$ lies strictly within the interior of $\Omega$.
The following well-known result can be found in most functional analysis books; see, for example, Theorem 5.13, p. 234, in T. Kato [25].
Lemma 2.
Let T : X Y be a bounded linear operator with X and Y Hilbert spaces. Then, we have
$$(\mathrm{I})\ R(T)^\perp = N(T^*), \qquad (\mathrm{II})\ N(T)^\perp = \overline{R(T^*)}.$$
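A finite-dimensional illustration of part (I) (not a proof, and with a random matrix of our choosing): vectors in $N(T^*)$ are orthogonal to every column of $T$, i.e., to all of $R(T)$.

```python
import numpy as np

# Minimal illustration of Lemma 2(I) for matrices: R(T)^perp = N(T^*).
# T is a random rank-deficient 5x7 matrix (rank <= 3 by construction).
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))

# Null space of T^* via SVD: left singular vectors with zero singular value.
U, s, Vt = np.linalg.svd(T)
rank = int(np.sum(s > 1e-10))
null_Tstar = U[:, rank:]                 # orthonormal basis of N(T^*)

# Every column of T (spanning R(T)) is orthogonal to N(T^*).
print(np.linalg.norm(null_Tstar.T @ T))  # ~ 0
```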
Lemma 3.
Let Assumption 1 hold. Assume that $B_{\mathrm{in}}$ is the extension operator from $U = L^2(\Omega_b)$ to $L^2(\Omega)$ and $C$ is the restriction operator from $L^2(\Omega)$ to $Y = L^2(\Omega_c)$. Then, we have
$$\overline{R(G)} = L^2(\Omega_c), \quad \text{and} \quad N(G) = \{0\}.$$
This implies that $N(G^*) = \{0\}$ and that the projection operator $L = 0$.
Proof. 
In order to show $\overline{R(G)} = L^2(\Omega_c)$, let $f \in L^2(\Omega_c)$ be given and choose $\epsilon > 0$. Let $C_c^\infty(\Omega_c^o)$ denote the smooth, compactly supported functions in $\Omega_c^o$. Appealing to the density of $C_c^\infty(\Omega_c^o)$ in $L^2(\Omega_c)$, we can find an $\tilde{f} \in C_c^\infty(\Omega_c^o)$ so that
$$\|f - \tilde{f}\|_c \le \epsilon.$$
Then, under our assumption on $A$, we know that all functions in $C_c^\infty(\Omega)$ are contained in the domain of $A$. Therefore, we see that $\tilde{f} \in C_c^\infty(\Omega_c) \subset C_c^\infty(\Omega) \subset D(A)$ and
$$\tilde{f} = (-A)^{-1}(-A)\tilde{f}.$$
We note that in the case of infinite-dimensional $U$ and $Y$, and in the colocated case with our particular $B_{\mathrm{in}}$ and $C$, we have that $B_{\mathrm{in}}^* = C$ is a restriction operator and $C^* = B_{\mathrm{in}}$ is an extension operator.
Clearly $-A\tilde{f} \in C_c^\infty(\Omega_c)$ (since $A$ is a local operator) and $\mathrm{supp}(\tilde{f}) \subset \Omega_c$. Set
$$g := -A\tilde{f},$$
and then we have
$$-A\tilde{f} = B_{\mathrm{in}}g,$$
since $B_{\mathrm{in}}$ acts as the identity on $\Omega_b$ and vanishes outside.
Finally, we note from (64) that
$$\tilde{f} = C\tilde{f} = C(-A)^{-1}B_{\mathrm{in}}g = Gg \in R(G),$$
and also (63) holds. Hence, we have shown that $R(G)$ is dense in $L^2(\Omega_c)$. Thus, $G^*$ is injective.
Now, we turn to the other part of the lemma, showing that $N(G) = \{0\}$. From part (II) of Lemma 2, we have $N(G)^\perp = \overline{R(G^*)}$. Thus, our goal is to show that $R(G^*)$ is dense in $L^2(\Omega_b)$; then, $N(G)^\perp = L^2(\Omega_b)$, so that $N(G) = \{0\}$. But under our assumption that $\Omega_c = \Omega_b$, we can repeat the same arguments used above for $G^* = B_{\mathrm{in}}^*(-A^*)^{-1}C^* : L^2(\Omega_c) \to L^2(\Omega_b)$ to show that $R(G^*)$ is dense in $L^2(\Omega_b)$. Therefore, $G$ is injective. □
Remark 5.
Even if $G$ is injective, the range of $G$, $R(G)$, is not closed. In particular, the operators $G^{-1}$ and $G^\dagger$ are unbounded, so the problem of solving for the control $u$ is ill-posed and requires some form of regularization.
We also note that if $\Omega_c \subsetneq \Omega_b$, the first part of Lemma 3 still holds, namely $\overline{R(G)} = L^2(\Omega_c)$ and the projection operator $L = 0$, while the second part of the lemma no longer holds; namely, $N(G) \neq \{0\}$.

8. Numerical Simulations

We begin with two examples with finite-dimensional input and output spaces. For our simulations, we consider a convective diffusion operator in one space dimension,
$$A = a\frac{d^2}{dx^2} - v\frac{d}{dx}, \quad v > 0,$$
with homogeneous Dirichlet boundary conditions at the endpoints of the domain $\Omega = [0, \ell] \subset \mathbb{R}$. We study the control problem in the state space $Z = L^2(\Omega)$, in which we write the domain of the state operator for our plant as
$$D(A) = \{\varphi \in H^2(\Omega) : \varphi(0) = \varphi(\ell) = 0\} = H^2(\Omega) \cap H_0^1(\Omega).$$
The operator $A$ is not self-adjoint; rather, we have
$$A^* = a\frac{d^2}{dx^2} + v\frac{d}{dx},$$
with $D(A^*) = D(A)$.
Example 1
($n_b = 2$, $n_c = 2$). In our first simulation, the control $u$ enters through the regions $\Omega_{b_j}$ for $j = 1, 2$, and a disturbance $d$ enters through the region $\Omega_d$. We also assume that there are two scalar measured outputs
$$y_j = C_j z = \langle c_j, z\rangle := \int_\Omega z(x,t)\,c_j(x)\,dx, \quad \text{where } c_j = \chi_{\Omega_{c_j}},$$
and that scalar time-dependent reference signals $r_j(t)$ are provided for the regions $\Omega_{c_j}$. Recall that our objective is to minimize the limsup of the $\ell^2$-norm of the error, given by
$$e(t) = \begin{bmatrix} e_1(t) \\ e_2(t) \end{bmatrix} = \begin{bmatrix} r_1(t) - C_1 z(t) \\ r_2(t) - C_2 z(t) \end{bmatrix}.$$
Thus, we seek to minimize
$$\varlimsup |e_j(t)| = \varlimsup |r_j(t) - C_j z(t)| \quad \text{for } j = 1, 2.$$
In Figure 2, we describe the domain geometry with the two scalar inputs and the two scalar outputs.
Thus, we have $U = \mathbb{R}^{n_b} = \mathbb{R}^2$, $Y = \mathbb{R}^{n_c} = \mathbb{R}^2$, $D = \mathbb{R}^{n_d} = \mathbb{R}$, so that
$$B_{\mathrm{in}}u = b_1 u_1 + b_2 u_2, \quad b_j = \chi_{\Omega_{b_j}} \in Z, \qquad C\varphi = \begin{bmatrix} C_1\varphi \\ C_2\varphi \end{bmatrix} = \begin{bmatrix} \langle c_1, \varphi\rangle \\ \langle c_2, \varphi\rangle \end{bmatrix}, \quad c_j = \chi_{\Omega_{c_j}} \in Z,$$
and
$$G = C(-A)^{-1}B_{\mathrm{in}} = \big[\langle c_i, (-A)^{-1}b_j\rangle\big]_{i=1,j=1}^{2,2}.$$
We note that in this numerical simulation, with our choices of $b_j$ and $c_i$, the $2\times 2$ matrix $G$ in (68) is invertible, $N(G^*) = \{0\}$, and the projection $L = 0$.
In the system (65), we have set $a = 2$, $v = 2$ and set the length of the domain to $\ell = 2$, so $\Omega = [0, 2]$. The sub-regions $\Omega_{b_1}$, $\Omega_{c_1}$, $\Omega_{b_2}$, $\Omega_{c_2}$, and $\Omega_d$ are given by
$$\Omega_{b_1} = [0.25, 0.35], \quad \Omega_{c_1} = [0.55, 0.65], \quad \Omega_{b_2} = [0.85, 0.95], \quad \Omega_{c_2} = [1.25, 1.35], \quad \Omega_d = [1.55, 1.65].$$
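The static gain matrix $G = C(-A)^{-1}B_{\mathrm{in}}$ for this geometry is easy to assemble numerically. The following sketch uses our own centered finite-difference discretization (not the authors' solver) with the parameter values above:

```python
import numpy as np

# Sketch (our own discretization, not the authors' code): assemble
# A = a d^2/dx^2 - v d/dx with homogeneous Dirichlet BCs on [0, 2]
# and form the 2x2 static gain G = C (-A)^{-1} B_in.
a, v, ell, n = 2.0, 2.0, 2.0, 400
h = ell / (n + 1)
x = np.linspace(h, ell - h, n)           # interior nodes

main = -2.0 * a / h**2 * np.ones(n)
upper = (a / h**2 - v / (2 * h)) * np.ones(n - 1)   # coefficient of phi_{i+1}
lower = (a / h**2 + v / (2 * h)) * np.ones(n - 1)   # coefficient of phi_{i-1}
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

def indicator(lo, hi):
    return ((x >= lo) & (x <= hi)).astype(float)

b1, b2 = indicator(0.25, 0.35), indicator(0.85, 0.95)   # actuator shapes
c1, c2 = indicator(0.55, 0.65), indicator(1.25, 1.35)   # sensor shapes

B = np.column_stack([b1, b2])
C = np.vstack([c1, c2]) * h              # <c_j, .> approximated by h * sum

G = C @ np.linalg.solve(-A, B)           # [ <c_i, (-A)^{-1} b_j> ], 2x2
print(G.shape, np.linalg.det(G))
```

Since $(-A)^{-1}$ preserves positivity here (a discrete maximum principle), all entries of $G$ are positive, and for these subregions $G$ is invertible, consistent with the remark below Equation (68).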
We have chosen asymptotically sinusoidal reference and disturbance signals with a smooth transition at time zero. Namely, we have set
$$r_1(t) = \sin(t)F(t-2, 1), \quad r_2(t) = \sin(2t)F(t-2, 1), \quad d(t) = \cos(t)F(t-2, 1),$$
where $F(t - t^*, \varepsilon)$ is a smooth, piecewise-defined unit step function centered at $t = t^*$. In the transition region $[t^* - \varepsilon, t^* + \varepsilon]$, $F$ is a ninth-degree polynomial satisfying, at the endpoints $t = t^* \pm \varepsilon$, $F(-\varepsilon, \varepsilon) = 0$, $F(\varepsilon, \varepsilon) = 1$, and $F^{(k)}(-\varepsilon, \varepsilon) = F^{(k)}(\varepsilon, \varepsilon) = 0$ for $k = 1, \dots, 4$. For $t < t^* - \varepsilon$, we set $F = 0$, and for $t > t^* + \varepsilon$, we set $F = 1$. The function $F$ is introduced to suppress transient components that arise in the cascade iteration. These transient terms decay rapidly; $F$ is included only to smooth the results in a neighborhood of $t = 0$ without affecting the asymptotic behavior.
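One concrete ninth-degree polynomial satisfying these endpoint conditions is the classical order-4 "smoothstep"; the authors' exact polynomial is not reproduced in the text, so the sketch below should be read as one admissible choice, not as their implementation:

```python
import numpy as np

# Sketch of a smooth unit step F(t - tstar, eps): on [tstar - eps, tstar + eps]
# we map to s in [0, 1] and use the ninth-degree polynomial
# 70 s^9 - 315 s^8 + 540 s^7 - 420 s^6 + 126 s^5, which equals 0 at s = 0 and
# 1 at s = 1, with its first four derivatives vanishing at both endpoints.
def F(tau, eps):
    s = np.clip((tau + eps) / (2.0 * eps), 0.0, 1.0)   # tau = t - tstar
    return s**5 * (70.0*s**4 - 315.0*s**3 + 540.0*s**2 - 420.0*s + 126.0)

t = np.linspace(0.0, 10.0, 2001)
r1 = np.sin(t) * F(t - 2.0, 1.0)        # reference signal as in the example
```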
We solved three extra iterations of the cascade algorithm, producing errors $e_j$ via approximate controls $u_j$ for $j = 0, 1, 2, 3$. We thereby obtain the cumulative control $u^3 = \sum_{j=0}^{3} u_j$. The control $u = u^3$ was then applied in the plant
$$z_t = Az + B_d d + B_{\mathrm{in}}u,$$
$$z(x,0) = 2x(1-x),$$
$$y(t) = Cz = \begin{bmatrix} C_1 z \\ C_2 z \end{bmatrix},$$
where $C_1$ and $C_2$ are defined in (67). In our numerical simulation, we have set $\alpha = 10^{-9}$, $\beta = 0.25$ and computed three extra iterations.
In Figure 3 and Figure 4, we have plotted the resulting numerical values obtained for $r_j(t)$ and $y_j(t)$ and note that the curves are essentially indistinguishable after a very short time.
To draw attention to the rapid decay of the transient terms due to the nonzero initial conditions, in Figure 5 and Figure 6, we have plotted $r_j$ and $y_j$ for small $t$.
Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 appear to demonstrate the rapid (even geometric) decrease of the limsup of the errors to zero. In Table 1, we provide even stronger evidence by giving the ratios
$$\frac{\varlimsup |e_{j1}|}{\varlimsup |e_{(j-1)1}|}, \qquad \frac{\varlimsup |e_{j2}|}{\varlimsup |e_{(j-1)2}|}, \qquad j = 1, 2, 3.$$
In Figure 13 and Figure 14, we draw attention to the errors from the cascade controller and the plant for small values of $t$. Namely, due to the difference in initial conditions for the controller and plant, we have a transient deviation between the first and second components of the plant error, $e_1$ and $e_2$, and the first and second components of the third cascade error, $e_{31}$ and $e_{32}$. Notice that these corresponding errors rapidly converge to each other.
Indeed, for this example, the $2\times 2$ transfer function $G$ is invertible, so neither Tikhonov regularization nor even the Moore–Penrose pseudoinverse is required to obtain an error-zeroing cascade control.
In order to demonstrate the convergence of the control $u^0 = u_0$ in Equation (24) and the subsequent cascade controls $u^n = \sum_{j=0}^{n} u_j$ in Equation (29), with the $u_j$ from Equation (28), we consider two values of the regularization parameter, $\alpha = 10^{-7}$ and $10^{-9}$. Each $u^n$, for $n = 0, 1, 2, 3$, consists of two components, denoted by $u_{n1}$ and $u_{n2}$. In Figure 15, corresponding to $\alpha = 10^{-7}$, $u_{01}$ is shown in blue, $u_{11}$ in green, $u_{21}$ in red, and $u_{31}$ in magenta. Similarly, in Figure 16, $u_{02}$ is blue, $u_{12}$ is green, $u_{22}$ is red, and $u_{32}$ is magenta. The same color convention is used in Figure 17 and Figure 18, which correspond to $\alpha = 10^{-9}$. Note that for $\alpha = 10^{-7}$, the cascade controls converge toward the optimal ones as $n$ increases, while for the smaller value, $\alpha = 10^{-9}$, the cascade controls essentially lie on top of each other.
Example 2
($n_b = 1$, $n_c = 2$). Our second simulation considers an under-determined system with one scalar input and two scalar outputs. Here, we use the same rod considered in the previous example, but in this case, we take $\Omega_b = \Omega_{b_1} \cup \Omega_{b_2}$ (see Figure 2). In this case, we have $U = \mathbb{R}^{n_b} = \mathbb{R}$, $Y = \mathbb{R}^{n_c} = \mathbb{R}^2$, $D = \mathbb{R}^{n_d} = \mathbb{R}$, so that
$$B_{\mathrm{in}}u = bu, \quad b = \chi_{\Omega_b} \in Z, \qquad C\varphi = \begin{bmatrix} C_1\varphi \\ C_2\varphi \end{bmatrix} = \begin{bmatrix} \langle c_1, \varphi\rangle \\ \langle c_2, \varphi\rangle \end{bmatrix}, \quad c_j \in Z.$$
Remark 6. 
Unlike the previous example, in this case, $G$ is a $2\times 1$ matrix, hence not invertible; but, as we will see below, $G^*G$ is $1\times 1$ and invertible. Therefore, there is a well-defined Moore–Penrose pseudoinverse, given by
$$G^\dagger = (G^*G)^{-1}G^*,$$
so it would not be necessary to use Tikhonov regularization. Nevertheless, we provide the details of the development made in Section 3 and Section 4.
In this case, we have
$$G = C(-A)^{-1}B_{\mathrm{in}} = \begin{bmatrix} \langle c_1, (-A)^{-1}b\rangle \\ \langle c_2, (-A)^{-1}b\rangle \end{bmatrix} := \begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix}, \qquad G^* = \begin{bmatrix} \gamma_1 & \gamma_2 \end{bmatrix},$$
and
$$G^*G = \gamma_1^2 + \gamma_2^2 := \sigma^2, \qquad \sigma_\alpha^2 = \alpha + \sigma^2,$$
$$R_\alpha = (\alpha + G^*G)^{-1}G^* = \frac{1}{\sigma_\alpha^2}\begin{bmatrix} \gamma_1 & \gamma_2 \end{bmatrix}.$$
Recall that
$$L_\alpha = I - GR_\alpha = I - G(\alpha + G^*G)^{-1}G^* = I - (\alpha + GG^*)^{-1}GG^*,$$
where $GG^* = [\gamma_i\gamma_j]_{i,j=1}^{2}$. In this case, the spectral analysis for the matrix $GG^*$ is straightforward. The eigenvalues are $0$ and $\sigma^2$, and orthonormal eigenvectors are
$$\psi_0 = \frac{1}{\sigma}\begin{bmatrix} \gamma_2 \\ -\gamma_1 \end{bmatrix}, \qquad \psi_1 = \frac{1}{\sigma}\begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix},$$
satisfying
$$GG^*\psi_0 = 0, \qquad GG^*\psi_1 = \sigma^2\psi_1, \qquad \langle\psi_i, \psi_j\rangle_{\mathbb{R}^2} = \psi_i^T\psi_j = \delta_{i,j}.$$
Therefore,
$$(\alpha + GG^*)^{-1}GG^* = \frac{\sigma^2}{\sigma_\alpha^2}\,\psi_1\psi_1^T,$$
and, from (30),
$$L_\alpha = I - (\alpha + GG^*)^{-1}GG^* = \psi_0\psi_0^T + \psi_1\psi_1^T - \frac{\sigma^2}{\sigma_\alpha^2}\psi_1\psi_1^T = \psi_0\psi_0^T + \frac{\alpha}{\sigma_\alpha^2}\psi_1\psi_1^T = L + M_\alpha,$$
where $L$ and $M_\alpha$ are defined in (31) and, in this case, are given explicitly by
$$L = \psi_0\psi_0^T, \qquad M_\alpha = \frac{\alpha}{\sigma_\alpha^2}\psi_1\psi_1^T, \qquad LM_\alpha = M_\alpha L = 0.$$
Here, $L$ is the orthogonal projection onto $N(GG^*)$, and $M_\alpha$ goes to zero as $\alpha$ goes to zero. Finally, the operator $A_{\alpha,\beta}$ is
$$A_{\alpha,\beta}(\cdot) = A(\cdot) - \frac{(1-\beta)\,b}{\alpha + \beta\sigma^2}\big(\gamma_1\langle c_1, \cdot\rangle + \gamma_2\langle c_2, \cdot\rangle\big).$$
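The rank-one spectral formulas above are easy to verify numerically. The sketch below uses illustrative values of $\gamma_1$, $\gamma_2$ (not computed from the PDE) and checks $L_\alpha = L + M_\alpha$ and $LM_\alpha = 0$:

```python
import numpy as np

# Numerical check of the rank-one spectral decomposition of G G^* for a
# 2x1 gain G = (g1, g2)^T.  g1, g2, alpha are illustrative values only.
g1, g2, alpha = 0.7, -0.3, 1e-4
G = np.array([[g1], [g2]])               # 2x1 matrix
sigma2 = g1**2 + g2**2                   # sigma^2 = G^* G
sig_a2 = alpha + sigma2                  # sigma_alpha^2

psi0 = np.array([g2, -g1]) / np.sqrt(sigma2)   # null eigenvector of G G^*
psi1 = np.array([g1, g2]) / np.sqrt(sigma2)    # eigenvector for sigma^2

L = np.outer(psi0, psi0)                 # orthogonal projection onto N(G G^*)
M = (alpha / sig_a2) * np.outer(psi1, psi1)

# L_alpha = I - (alpha I + G G^*)^{-1} G G^*  should equal  L + M.
L_alpha = np.eye(2) - np.linalg.inv(alpha*np.eye(2) + G @ G.T) @ (G @ G.T)
print(np.allclose(L_alpha, L + M), np.allclose(L @ M, 0))
```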
In this example, we have chosen the same reference and disturbance signals considered in (69) in Example 1. We note that in this case, while our controls are in some sense optimal, the tracking errors do not converge to zero. The point is that the control input does its best to track two different reference signals. In this example, the critical point of our work is that we can explain exactly what the errors are converging to as time goes to infinity.
In Figure 19, we have set $\alpha = 10^{-6}$ and $\beta = 0.25$ and plotted $e_{j1}$ for $j = 0, 1, 2, 3$. Similarly, Figure 20 presents the corresponding results for $e_{j2}$ for $j = 0, 1, 2, 3$. We observe a slight variation in these curves due to the value of $\alpha$. To further investigate this effect, we decrease $\alpha$ to $10^{-9}$, obtaining the plots shown in Figure 21 and Figure 22. The curves are indistinguishable in this case, indicating that the earlier discrepancies resulted from the regularization parameter $\alpha$.
Figure 23 and Figure 24 contain plots of the components of the closed-loop system error
$$e = \begin{bmatrix} e_1 \\ e_2 \end{bmatrix} = r - C(z),$$
where $z$ is the state variable obtained from the plant, (70)–(72), using the cascade control $u^3 = \sum_{j=0}^{3} u_j$, together with the error formula $E_1$ obtained by sending $\alpha$ to zero as described in (42) and (43). Namely, we have
$$E_1(t) := \begin{bmatrix} E_{11} \\ E_{12} \end{bmatrix} := \lim_{\alpha\to 0}\bar{e}_1(t) = P(t) + L\big(H_0(t) + H_1(t)\big) + H_2(t).$$
We note that the term $P(t)$ decays to zero exponentially with time, and the term $H_2(t)$ is very small compared with the term $L(H_0(t) + H_1(t))$. In particular, we observe that the main contribution to the error comes from the projection onto $N(G^*) = R(G)^\perp$. Note that the two curves are on top of each other, confirming the accuracy of our explicit formula for the error.
The introduction of the function $F$ in the definitions of $r$ and $d$ allows for the very smooth plots near $t = 0$ in Figure 23 and Figure 24. Without it, there would be large oscillations from the exponentially decaying term $P(t)$.
Example 3
(Non-Colocated Infinite-Dimensional Input and Output Spaces). In this example, we consider a control problem in one spatial dimension. We impose Dirichlet boundary conditions at the endpoints of the domain Ω = [ 0 , 1 ] . As in the previous examples, we write the state operator A for our plant as in (65).
As reported in Figure 25, here we take
$$\Omega_b = \left[\tfrac{1}{5}, \tfrac{2}{5}\right], \qquad \Omega_c = \left[\tfrac{1}{2}, \tfrac{7}{10}\right], \qquad \Omega_d = \left[\tfrac{4}{5}, \tfrac{9}{10}\right],$$
so that we can write
$$B_{\mathrm{in}} = E_b, \qquad C = R_c, \qquad B_d = E_d.$$
Here, we note that the restriction operator can be explicitly expressed in terms of the characteristic function, so we can write
$$C\psi(x) = R_c\psi(x) = \chi_{\Omega_c}\psi(x).$$
Therefore, in this example, the input operator is an extension operator, and the output operator is a restriction operator. We consider a tracking problem with reference and disturbance signals
$$r(x,t) = x\sin(t)F(t-1.5,\,1.5), \qquad d(x,t) = x^2\sin(2t)F(t-1.5,\,1.5),$$
where F is a smooth function that, together with its first four derivatives, is zero at t = 0 . The function F is introduced to suppress transient terms that arise in the cascade iteration. These transient terms decay exponentially in time.
In this case, the resulting controls obtained at each cascade step are functions of both $x$ and $t$. In Figure 26, Figure 27, Figure 28 and Figure 29, we have plotted the norms $\|e_j\|_c$ of the errors $e_j$ for $j = 0, 1, 2$ and for $\alpha = 10^{-3}, 10^{-5}, 10^{-7}, 10^{-9}$. Here, we have used blue to depict $e_0$, green for $e_1$, and red for $e_2$.
The main point here is that for $\alpha < 10^{-5}$, the errors converge very rapidly to a limiting function, which is given by the sum of the formulas in (38) for $e_n^r$ and (39) for $e_n^d$. In particular, we note that as $\alpha$ becomes small, the contributions from $M_\alpha$ become negligible very quickly, and the main contribution is from the projection $L$ onto the null space of $G^*$.
Once again, after two iterations of the cascade algorithm, we obtain the cumulative control $u^2 = \sum_{j=0}^{2} u_j$. The control $u = u^2$ was then applied in the plant
$$z_t = Az + B_d d + B_{\mathrm{in}}u,$$
$$z(x,0) = x(1-x),$$
$$y(t) = Cz,$$
producing the error $e(x,t) = r(x,t) - Cz(x,t)$ with norm
$$\|e(t)\|_c = \left(\int_{\Omega_c} |r(x,t) - Cz(x,t)|^2\,dx\right)^{1/2}.$$
In Figure 30 and Figure 31, we have plotted the norm of the plant system error $e(t)$ (in red) and, just as in the previous example, the first approximate error $E_0$ (in blue), given by the formulas in (38) and (39). Notice that in the case of infinite-dimensional input and output spaces, we cannot use the formulas described in (42) and (43), since $M_\alpha$ becomes unbounded in the operator norm as $\alpha$ goes to zero; however, it does go to zero in the strong operator topology.
In Figure 32 and Figure 33, we have plotted $\|e_2(t)\|_c$ (in blue) for $\alpha = 10^{-9}$, obtained by solving our cascade control algorithm, along with the norm of the error, $E_{oc}$ (in red), obtained by computing an optimal control $u$ using the variational formulation in (82)–(84) with $\epsilon = 10^{-9}$. We notice that the two curves are indistinguishable.
Here, the “full control” $u$ is obtained by taking minimizing variations of the functional
$$J_\epsilon(r,z,d,u) = \int_0^T \left[\int_{\Omega_c} \frac{1}{2}(z - r)^2\,dx + \frac{\epsilon}{2}\int_{\Omega_b} u^2\,dx + \int_\Omega \lambda\big(z_t - Az - B_{\mathrm{in}}u - B_d d\big)\,dx\right]dt,$$
with appropriate boundary conditions on $z$ and $\lambda$. This analysis leads to the following system:
$$\bar{z}_t = A\bar{z} + B_d d + B_{\mathrm{in}}\frac{\lambda}{\epsilon},$$
$$\lambda_t = -A^*\lambda - \chi_{\Omega_c}(r - \bar{z}),$$
$$\bar{z}(0,t) = \bar{z}(1,t) = \lambda(0,t) = \lambda(1,t) = \lambda(x,T) = 0.$$
The desired control law is
$$u = \frac{\lambda}{\epsilon}.$$
Note that Equation (83) is an inverse parabolic equation with initial data prescribed at T and must therefore be integrated backward in time (from T to 0). Consequently, the optimality system (82)–(84) must be solved all at once on the full interval [ 0 , T ] , effectively coupling time as an additional dimension of the discrete problem and making the control solve substantially more demanding.
COMSOL, however, is engineered for classical time marching, not for globally coupled space–time solves. To approximate (82)–(84), we adopted a workaround, embedding the PDE in a cylindrical domain with an auxiliary coordinate that mimics time. While feasible, this reformulation is markedly more expensive (many time levels are coupled simultaneously), scales poorly with problem size, and is not practical in 3D. For these reasons, a head-to-head runtime comparison would be misleading: any timings would reflect COMSOL’s solver limitations for all-at-once optimal control rather than the intrinsic efficiency of the methods. Specialized software [33], by contrast, implements forward–backward (sweep) iterations until convergence [17,18], yielding faster solves of (82)–(84) and making such approaches competitive. However, this lies outside the scope of the present work, so we omit any runtime comparisons between the two methods.
Example 4
(Colocated Infinite-Dimensional Input and Output Spaces). In this example, we consider the same system operator $A$ introduced at the beginning of this section in (65), with domain (66), where $\Omega = [0, \ell]$. The main difference between Example 3 and this example is that we consider a colocated case in which $\Omega_b = \Omega_c$; see Figure 34. In this case, we expect the norms of the errors obtained from the cascade algorithm to decrease geometrically because of Lemma 3.
In our simulation, we have chosen $\ell = 1$, $\Omega_b = \Omega_c = [0.45, 0.55]$, and $\Omega_d = [0.7, 0.8]$. We have run five additional steps in our cascade using the same $r$ and $d$ introduced in (78). In Figure 35, Figure 36, Figure 37, Figure 38 and Figure 39, we have plotted $\|e_j\|_c$ for $j = 1, 2, 3, 4, 5$ and for $\alpha = 10^{-3}, 10^{-5}, 10^{-7}, 10^{-9}$, respectively. As expected, and unlike the previous example, the error norms decrease geometrically with each iteration.
In the above figures, we used the following color code to indicate the curves $e_j(t)$: $e_1(t)$ blue, $e_2(t)$ red, $e_3(t)$ magenta, $e_4(t)$ green, and $e_5(t)$ black. Then, in Figure 39 and Figure 40, we have set $\alpha = 10^{-9}$ and plotted both $\|e_5\|_c$ and the norm of the plant system error, $\|e(t)\|_c = \|r(t) - z(t)\|_c$, where we have used the control $u^5 = \sum_{j=0}^{5} u_j$. After a very short transient, the two curves are indistinguishable.
Figure 35, Figure 36 and Figure 37 demonstrate a geometric decrease in the limsup of the errors. In Table 2, we provide even stronger evidence by giving the ratios
$$\frac{\varlimsup \|e_j\|}{\varlimsup \|e_{j-1}\|}, \qquad j = 1, \dots, 5.$$
We note that for each value of $\alpha$, the ratio $\varlimsup\|e_1\| / \varlimsup\|e_0\|$ is significantly smaller than all the subsequent ratios, so the first step in the iterative scheme always produces an outlier in the geometric convergence. In the following table, we confirm this statement by providing the ratios of the limsups of the norms of the errors pairwise.
Figure 41 plots the cascade control $u^5(x,t) = \sum_{j=0}^{5} u_j(x,t)$ at the final time $t = 15$, for three values of $\alpha$ ($10^{-7}$, $10^{-8}$, and $10^{-9}$), over the domain $\Omega_b$. The curves $u^5(x,15)$ are colored blue for $\alpha = 10^{-7}$, red for $\alpha = 10^{-8}$, and black for $\alpha = 10^{-9}$. As $\alpha$ decreases, the control becomes progressively rougher, reflecting higher-frequency oscillations that enhance tracking accuracy. Conversely, for larger values of $\alpha$, the control smooths out and approaches an almost constant profile.
Example 5
(Non-Colocated Example in a Two-Dimensional Spatial Domain). We consider a control problem in two spatial dimensions analogous to the one-dimensional Example 3. In this example, we again impose Dirichlet boundary conditions on the boundary of a domain $\Omega \subset \mathbb{R}^2$ (see Figure 42). We can write the state operator for our plant as
$$A = a\Delta - V\cdot\mathrm{grad}, \quad V = [u, v]^T, \qquad D(A) = \{\varphi \in H^2(\Omega) : \varphi|_{\partial\Omega} = 0\}.$$
As in our previous examples, the operator $A$ is not self-adjoint. Rather, we have
$$A^* = a\Delta + V\cdot\mathrm{grad},$$
with
$$D(A^*) = \{\varphi \in H^2(\Omega) : \varphi|_{\partial\Omega} = 0\}.$$
We consider a tracking problem with reference and disturbance signals
$$r(x,y,t) = x\,y(1-y)(y-0.5)\sin(t)F(t-1.5,\,1.5),$$
$$d(x,y,t) = x(y-0.5)^2\sin(2t)F(t-1.5,\,1.5),$$
where, as in the previous example, F is a smooth function that, together with its first four derivatives, is zero in a neighborhood of t = 0 . The function F is introduced to suppress transient terms that arise in the cascade iteration. These transient terms decay exponentially in time.
In this case, the individual cascade controls $u_j$, the cumulative control $u^n = \sum_{j=0}^{n} u_j$, and the full optimal control are all functions of $x$, $y$, and $t$.
For comparison, in this example, we have also solved a system similar to (82)–(84) to obtain an optimal control using an inverse parabolic equation for the Lagrange multiplier $\lambda$. The error associated with this control is denoted by $E_{oc}(t)$. Also, the error obtained when using the cascade control $u^3$ in the plant with initial condition $\varphi(x) = xy$ is denoted by $e(t)$.
For our simulations, we have set $a = 2$, $u = v = 2$, $\beta = 0.25$ and taken various values for $\alpha$. In this case, just as in Example 3, for small $\alpha$, the dynamics are dominated by the projection onto $N(G^*)$. For that reason, in Figure 43, we have plotted the results for a large value of $\alpha = 10^{-4}$ to demonstrate the decrease in the norms of the errors $\|e_j\|_c$ for $j = 0, 1, 2, 3$. Then, in Figure 44, Figure 45 and Figure 46, we have fixed $\alpha = 10^{-7}$. In Figure 44, we have again plotted the norms of the errors $\|e_j\|_c$ for $j = 0, 1, 2, 3$. In this case, the different curves are indistinguishable for all $t$. In Figure 45, we have plotted the norms of the cascade error $e_3$ and the plant error using nonzero initial data. Finally, in Figure 46, we compare the cascade error $e_3$ and the full optimal control error $E_{oc}$ obtained with the penalty constant $\epsilon = 10^{-7}$. After a short transient, the two errors become indistinguishable.
Example 6
(Colocated Example in a Two-Dimensional Spatial Domain). This example considers a control problem in two spatial dimensions analogous to the one-dimensional colocated Example 4. Namely, we assume that the equations, reference and disturbance signals, and basic geometry are the same as in Example 5, except that now we assume that $\Omega_b$ and $\Omega_c$ are colocated (see Figure 47). In this case, we expect the cascade controls to produce errors that go to zero geometrically with increasing cascade iterations. Once again, we impose Dirichlet boundary conditions on the boundary of a domain $\Omega \subset \mathbb{R}^2$.
In our numerical simulations, we have computed the cascade errors $e_j$ for $j = 0, \dots, 3$ for various values of $\alpha$. In Figure 48, we have plotted $\|e_j(t)\|_c$ for $j = 0, \dots, 3$ for $\alpha = 10^{-4}$, and in Figure 49, we have plotted these same results for $\alpha = 10^{-7}$. Notice that, for fixed $\alpha$, the norms decrease geometrically with each cascade iteration.
In the above figures, we used the following color code to indicate the curves $e_j(t)$: $e_0(t)$ blue, $e_1(t)$ green, $e_2(t)$ red, $e_3(t)$ magenta, and $e_4(t)$ black. In Figure 50 and Figure 51, we have plotted $\|e_4(t)\|_c$ and the norm of the plant error $\|e(t)\|_c$, both for large and small times. Here, we have set the initial condition for the plant to $\varphi(x) = xy$. Notice that after a very brief time, the two curves become indistinguishable.

9. Conclusions

This work introduces a rigorous and general methodology for achieving asymptotic tracking and disturbance rejection in both lumped and distributed parameter systems, including those with infinite-dimensional input and output spaces. We overcome significant challenges posed by compact operators in infinite-dimensional input and output spaces by employing a cascade algorithm based on the geometric regulation theory, which is enhanced with Tikhonov regularization. Our approach extends beyond traditional optimal control methods, providing approximate solutions for various systems, including those that are over- or under-determined.
Through a detailed error analysis, we identify conditions under which the cascade process is error zeroing and those that limit its effectiveness, such as non-colocated input–output operators and contributions from regularization. Numerical simulations validate the method’s practical applicability, demonstrating its adaptability across diverse examples with varying dimensionality and configurations. This study lays a foundation for further research into advanced control strategies for complex linear and nonlinear dynamical systems, with potential extensions to unbounded input and output operators and broader classes of nonlinear partial differential equations.

Author Contributions

Conceptualization, E.A. and D.S.G.; methodology, D.S.G.; software, E.A.; formal analysis, D.S.G.; investigation, E.A. and A.C.; writing—original draft, E.A., A.C. and D.S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We sincerely thank John Burns, in Mathematics at Virginia Tech, for his invaluable suggestions and insightful feedback throughout this work. His expertise and thoughtful contributions have greatly enhanced the quality of our research.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Motivation and Derivation of the Cascade Controller

Beginning with the equations for the digital twin of the plant,
$$z_t(t) = Az(t) + B_d d(t) + B_{\mathrm{in}}u(t),$$
$$z(0) = z_0,$$
$$e(t) = r(t) - Cz(t),$$
our methodology seeks to solve (A1) and (A2) directly for $u(t)$ in an attempt to minimize $e(t)$ in (A3). The idea is to first solve (A1) for $z$ and then apply $C$ to both sides:
$$Cz = CA^{-1}z_t - CA^{-1}B_d d + C(-A^{-1})B_{\mathrm{in}}u.$$
Using the definition of the error yields
$$e = r - Cz = r - CA^{-1}z_t + CA^{-1}B_d d - C(-A^{-1})B_{\mathrm{in}}u.$$
Defining $G = C(-A^{-1})B_{\mathrm{in}}$, we see that $G$ can be identified with the transfer function of the system,
$$C(sI - A)^{-1}B_{\mathrm{in}},$$
evaluated at $s = 0$. Our standing assumption that $A$ generates an exponentially stable analytic semigroup ensures that $0$ lies in the resolvent set of $A$. We attempt to solve for $u$ by minimizing the error
$$e = r - CA^{-1}z_t + CA^{-1}B_d d - Gu.$$
In our earlier work [1,5,6], the input and output spaces were always finite-dimensional and of equal dimension. In that setting, G is a square matrix, which we assumed to be invertible (equivalently, 0 is not a transmission zero of the system).
In the present work, however, we address the more general case of arbitrary input and output spaces. This includes both over-determined and under-determined systems, as well as the significantly more challenging case of infinite-dimensional input and output spaces. If the input and output spaces are finite-dimensional, and G is not invertible, the minimal solution of (A4) can be obtained using the Moore–Penrose pseudoinverse. For infinite-dimensional input and output spaces, G is a compact operator and thus cannot possess a bounded inverse or pseudoinverse. Consequently, solving (A4) for u is an ill-posed problem. Our general approach is to apply Tikhonov regularization.
For a small parameter $\alpha > 0$, we define the regularized operator
$$R_\alpha := (\alpha I + G^*G)^{-1}G^*.$$
This operator yields the best least-squares solution of (A4). Here, $G^*$ denotes the Hilbert space adjoint of $G$. Since $G$ and $G^*$ are compact and hence bounded, $G^*G$ is bounded, non-negative, and self-adjoint. Therefore, for $\alpha > 0$, the operator $(\alpha I + G^*G)$ is invertible with bounded inverse. Consequently,
$$u = R_\alpha\big(r - CA^{-1}z_t + CA^{-1}B_d d\big)$$
represents the best least-squares solution of the minimization problem
$$\min_{u(t)} \left\{\|e\|_Y^2 + \alpha\|u\|_U^2\right\} = \min_{u(t)} \left\{\big\|r - CA^{-1}z_t + CA^{-1}B_d d - Gu\big\|_Y^2 + \alpha\|u\|_U^2\right\}, \quad \text{for fixed } t.$$
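The optimality of the Tikhonov formula is easy to check in finite dimensions. The following sketch (with a random matrix $G$ and data vector $y$ of our choosing) computes $u = (\alpha I + G^*G)^{-1}G^*y$ and confirms that random perturbations only increase the regularized cost:

```python
import numpy as np

# Sketch: for a matrix G and data y, the Tikhonov solution
# u = (alpha I + G^T G)^{-1} G^T y minimizes ||G u - y||^2 + alpha ||u||^2.
# G, y, alpha are illustrative, not from the paper.
rng = np.random.default_rng(1)
G = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
alpha = 1e-2

R_alpha = np.linalg.inv(alpha * np.eye(5) + G.T @ G) @ G.T
u = R_alpha @ y

def cost(w):
    return np.sum((G @ w - y)**2) + alpha * np.sum(w**2)

# Strict convexity: any perturbation of u increases the regularized cost.
worse = all(cost(u + 0.01 * rng.standard_normal(5)) >= cost(u)
            for _ in range(100))
print(worse)
```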
To clarify the role of Tikhonov regularization, we proceed as follows. Substituting the expression for u from (A6) into (A1), we can recast the system as a dynamical system:
$$z_t = Az + B_d d + B_{\mathrm{in}}R_\alpha\big(r - CA^{-1}z_t + CA^{-1}B_d d\big).$$
Rearranging, we obtain
$$(I + B_{\mathrm{in}}R_\alpha CA^{-1})z_t = Az + (I + B_{\mathrm{in}}R_\alpha CA^{-1})B_d d + B_{\mathrm{in}}R_\alpha r.$$
To express this as a standard dynamical system, we require the inverse of $(I + B_{\mathrm{in}}R_\alpha CA^{-1})$. A direct calculation shows
$$(I + B_{\mathrm{in}}R_\alpha CA^{-1})^{-1} = I - B_{\mathrm{in}}R_\alpha(I - GR_\alpha)^{-1}CA^{-1},$$
where $(I - GR_\alpha)^{-1}$ is given explicitly by
$$(I - GR_\alpha)^{-1} = \frac{\alpha I + GG^*}{\alpha}.$$
Since
$$R_\alpha(I - GR_\alpha)^{-1} = (\alpha I + G^*G)^{-1}G^*\,\frac{\alpha I + GG^*}{\alpha} = \frac{G^*}{\alpha},$$
we have
$$(I + B_{\mathrm{in}}R_\alpha CA^{-1})^{-1} = I - \frac{B_{\mathrm{in}}G^*CA^{-1}}{\alpha},$$
which becomes unbounded as $\alpha \to 0$ and leads to numerical instabilities when $\alpha$ is very small (as is typical in Tikhonov regularization). To address this issue, we introduce a second regularization by replacing $(I + B_{\mathrm{in}}R_\alpha CA^{-1})$ with
$$\big(I + (1-\beta)B_{\mathrm{in}}R_\alpha CA^{-1}\big), \quad 0 \le \beta \le 1.$$
When β = 0 , it reduces to the original expression; when β = 1 , it reduces to the identity.
Defining
$$R_{\alpha,\beta} := (\alpha I + \beta G^*G)^{-1}G^*,$$
we can now state the following relation.
Lemma A1. 
$$\big(I + (1-\beta)B_{\mathrm{in}}R_\alpha CA^{-1}\big)^{-1} = I - (1-\beta)B_{\mathrm{in}}R_{\alpha,\beta}CA^{-1}.$$
Proof. 
We establish (A12) by direct verification:
$$\big(I + (1-\beta)B_{\mathrm{in}}R_\alpha CA^{-1}\big)\big(I - (1-\beta)B_{\mathrm{in}}R_{\alpha,\beta}CA^{-1}\big) = I + (1-\beta)B_{\mathrm{in}}\big[R_\alpha - R_{\alpha,\beta} - (1-\beta)R_\alpha CA^{-1}B_{\mathrm{in}}R_{\alpha,\beta}\big]CA^{-1} = I + (1-\beta)B_{\mathrm{in}}\big[R_\alpha - R_{\alpha,\beta} + (1-\beta)R_\alpha G R_{\alpha,\beta}\big]CA^{-1},$$
where we have used $CA^{-1}B_{\mathrm{in}} = -G$. We now show that
$$R_{\alpha,\beta} - (1-\beta)R_\alpha G R_{\alpha,\beta} = \big(I - (1-\beta)R_\alpha G\big)R_{\alpha,\beta} = R_\alpha.$$
We note that
$$I - (1-\beta)R_\alpha G = (\alpha I + G^*G)^{-1}\big[\alpha I + G^*G - (1-\beta)G^*G\big] = (\alpha I + G^*G)^{-1}(\alpha I + \beta G^*G).$$
Therefore,
$$\big(I - (1-\beta)R_\alpha G\big)R_{\alpha,\beta} = (\alpha I + G^*G)^{-1}(\alpha I + \beta G^*G)R_{\alpha,\beta} = (\alpha I + G^*G)^{-1}G^* = R_\alpha. \qquad \square$$
Note that for $\beta = 0$, Equation (A12) recovers Equation (A9), while for $\beta = 1$, $R_{\alpha,\beta} = R_\alpha$.
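Lemma A1 can be spot-checked with matrices. The sketch below builds random (illustrative) operators $A$, $B_{\mathrm{in}}$, $C$, forms $G = C(-A)^{-1}B_{\mathrm{in}}$, and verifies the claimed inverse, together with the identity $R_\alpha(I - GR_\alpha)^{-1} = G^*/\alpha$ from (A9)–(A11):

```python
import numpy as np

# Matrix verification of Lemma A1 with random illustrative operators.
rng = np.random.default_rng(2)
n, m, p = 6, 3, 4                        # state, input, output dimensions
Q = rng.standard_normal((n, n))
A = -(Q @ Q.T + np.eye(n))               # symmetric negative definite => invertible
B = rng.standard_normal((n, m))          # stands in for B_in
C = rng.standard_normal((p, n))
Ainv = np.linalg.inv(A)
G = -C @ Ainv @ B                        # G = C (-A)^{-1} B_in

alpha, beta = 1e-3, 0.25
Ra  = np.linalg.inv(alpha*np.eye(m) + G.T @ G) @ G.T           # R_alpha
Rab = np.linalg.inv(alpha*np.eye(m) + beta * (G.T @ G)) @ G.T  # R_{alpha,beta}

Mfwd = np.eye(n) + (1 - beta) * B @ Ra  @ C @ Ainv
Minv = np.eye(n) - (1 - beta) * B @ Rab @ C @ Ainv
print(np.allclose(Mfwd @ Minv, np.eye(n)))   # Lemma A1 holds
```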
If $G$ has a well-defined pseudoinverse, $G^\dagger = (G^*G)^{-1}G^*$, or if $G^{-1}$ exists, we can pass to the limit in $R_{\alpha,\beta}$ to obtain
$$\lim_{\alpha\to 0}R_{\alpha,\beta} = \frac{1}{\beta}(G^*G)^{-1}G^*,$$
so that
$$\lim_{\alpha\to 0}\big(I - (1-\beta)B_{\mathrm{in}}R_{\alpha,\beta}CA^{-1}\big) = I - \frac{(1-\beta)}{\beta}B_{\mathrm{in}}G^\dagger CA^{-1},$$
which remains well behaved even when $\alpha = 0$. In particular, this allows choosing a small $\alpha$ in the Tikhonov regularization without causing numerical instabilities.
With the $\beta$-regularization, the system (A7) becomes
$$(I + (1-\beta)\, B_{\mathrm{in}} R_\alpha C A^{-1})\, z_t^0 = A z^0 + (I + B_{\mathrm{in}} R_\alpha C A^{-1})\, B_d\, d + B_{\mathrm{in}} R_\alpha\, r,$$
where we have introduced a new state variable $z^0$, which, in general, differs from $z$ for $\beta \neq 0$. Here, the superscript $0$ denotes the initial iteration of the cascade controller algorithm, which will be introduced later. Applying the inverse from (A12) and defining
$$B_{\alpha,\beta} := B_{\mathrm{in}} R_{\alpha,\beta}, \qquad A_{\alpha,\beta} := A - (1-\beta)\, B_{\alpha,\beta} C, \qquad I_{\alpha,\beta} := I + \beta\, B_{\alpha,\beta} C A^{-1},$$
we arrive at the dynamical system referred to as the regularized controller:
$$z_t^0 = A_{\alpha,\beta}\, z^0 + I_{\alpha,\beta} B_d\, d + B_{\alpha,\beta}\, r.$$
For fixed $\alpha > 0$, the operator $A_{\alpha,\beta}$ is sectorial and generates an exponentially stable, analytic semigroup for all $\beta$ sufficiently close to $1$. Indeed, the term
$$(1-\beta)\, B_{\alpha,\beta} C$$
is a bounded perturbation of $A$ and vanishes as $\beta \to 1$.
Furthermore, rewriting (A14) in the form
$$z_t^0 = A z^0 + B_d\, d + B_{\mathrm{in}} R_{\alpha,\beta}\big(r - (1-\beta)\, C z^0 + \beta\, C A^{-1} B_d\, d\big),$$
we can immediately identify the explicit formula for $u^0$:
$$u^0 = R_{\alpha,\beta}\big(r - (1-\beta)\, C z^0 + \beta\, C A^{-1} B_d\, d\big).$$
Systems of this form allow us to derive explicit formulas for the approximate errors that arise in the cascade iteration scheme presented in the paper.

The Cascade Controller

Because of the $\beta$-regularization, we cannot expect the resulting error $e^0 = r - C z^0$ to be the least-squares error $e$ we seek. While tuning both $\alpha$ and $\beta$ can reduce the asymptotic value of $e$, the effect is limited. For this reason, we introduce a methodology that produces progressively more accurate controls with smaller tracking errors.
We begin with the initial step of the iterative scheme presented above,
$$z_t^0(t) = A z^0(t) + B_d\, d(t) + B_{\mathrm{in}} u^0(t),$$
$$z^0(0) = z_0^0,$$
$$u^0(t) = R_{\alpha,\beta}\big(r(t) - (1-\beta)\, C z^0(t) + \beta\, C A^{-1} B_d\, d(t)\big),$$
which produces the error
$$e^0(t) = r(t) - C z^0(t).$$
In the digital twin of the plant, (A1)–(A3), we now set $z = z^0 + z^1$ and $u = u^0 + u^1$. Using Equations (A17)–(A19) leads to the new problem of finding $z^1$ and $u^1$ such that
$$z_t^1(t) = A z^1(t) + B_{\mathrm{in}} u^1(t),$$
$$z^1(0) = 0,$$
and minimizing the error
$$e = r - C z = r - C(z^0 + z^1) = e^0 - C z^1.$$
These form a new set of controller equations, designed to track the reference signal $e^0(t)$. System (A21)–(A23) is formally simpler than (A17)–(A19), since there is no disturbance and the initial condition is zero; however, its solution still requires the same Tikhonov and $\beta$-regularization as before. Then, proceeding exactly as for the 0th step in (A19), we find the controller
$$u^1 = R_{\alpha,\beta}\big(e^0 - (1-\beta)\, C z^1\big),$$
which produces the new error
$$e^1(t) = e^0(t) - C z^1(t).$$
Once again, we cannot expect $e^1(t) = e(t)$, because of the $\beta$-regularization.
Proceeding, we can repeat the above steps as many times as we like. At the $n$th step, we set
$$z = z^0 + z^1 + \cdots + z^n, \qquad u = u^0 + u^1 + \cdots + u^n,$$
which leads to the following equations for $z^n$ and $u^n$:
$$z_t^n(t) = A z^n(t) + B_{\mathrm{in}} u^n(t),$$
$$z^n(0) = 0,$$
$$u^n(t) = R_{\alpha,\beta}\big(e^{n-1}(t) - (1-\beta)\, C z^n(t)\big).$$
This produces the error
$$e^n(t) = e^{n-1}(t) - C z^n(t).$$
Practically, we stop iterating once $\|e^n(t) - e^{n-1}(t)\|_Y \le \varepsilon$ for sufficiently large $t$, with a small given $\varepsilon$. We note that, in the case of finite-dimensional input and output spaces, from (A68), we have
$$e^j(t) - e^{j-1}(t) = P(t) + (K*)^{j+1}\, r^{(j+1)}(t) + (K*)^{j}\, K_d * d^{(j+1)}(t) - G G^\dagger\Big[(K*)^{j}\, r^{(j)}(t) + (K*)^{j-1}\, K_d * d^{(j)}(t)\Big].$$
Therefore, under suitable conditions on the growth of $r$ and $d$, as in Corollary 1, we see that
$$\varlimsup_{t\to\infty}\, \big\|e^j(t) - e^{j-1}(t)\big\| \longrightarrow 0 \quad \text{as } j \to \infty.$$
We then use the control $u$ in the plant and obtain an error $e(t) = e^n(t)$ for all time if the same initial data are used, or $e(t) \to e^n(t)$ as $t \to \infty$ if different initial data are used.
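The mechanism behind the cascade can be isolated on a static toy problem: with the dynamics stripped away and a constant reference, each cascade step reduces to one Tikhonov-regularized correction applied to the previous residual (iterated Tikhonov regularization). This is a simplified sketch under those assumptions, not the authors' algorithm; the matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((6, 3)))
G = 2.0 * Q                              # overdetermined map: exact tracking is impossible
r = rng.standard_normal(6)               # constant "reference" signal

alpha = 1e-2
R_a = np.linalg.solve(alpha * np.eye(3) + G.T @ G, G.T)   # Tikhonov-regularized solve

u = np.zeros(3)
e = r.copy()
for _ in range(20):                      # cascade: correct u using the previous residual
    u += R_a @ e
    e = r - G @ u

e_ls = r - G @ (np.linalg.pinv(G) @ r)   # true least-squares residual
print(np.linalg.norm(e - e_ls))
```

Each pass shrinks the gap to the least-squares residual by the factor $\alpha/(\alpha+\sigma_j^2)$ on each singular direction, mirroring how the cascade controls asymptotically approximate the optimal least-squares error; in the full method the same regularized solve is applied along trajectories of the plant model rather than to a fixed vector.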

Appendix B. A Practical Solution Algorithm

In this appendix, we derive the solution algorithm presented in Section 5. Specifically, we derive Formulas (21)–(23) for the solution of $z^0$ and $u^0$ by writing Equation (14) in the form
$$z_t^0 = A z^0 + B_d\, d + B_{\mathrm{in}} R_{\alpha,\beta}\, R,$$
where
$$R := r - (1-\beta)\, C z^0 + \beta\, C A^{-1} B_d\, d.$$
Then, given $R$, we focus on solving $R_{\alpha,\beta} R$ using Formula (A11). To this end, recall that
$$R_{\alpha,\beta} R = G^* (\alpha I + \beta G G^*)^{-1} R, \qquad G = -C A^{-1} B_{\mathrm{in}}, \qquad G^* = -B_{\mathrm{in}}^* (A^*)^{-1} C^*,$$
and set
$$\bar X = (\alpha I + \beta G G^*)^{-1} R, \quad\text{so that}\quad G^* \bar X = R_{\alpha,\beta} R \quad\text{and}\quad \alpha \bar X + \beta\, G G^* \bar X = R.$$
To compute $G^* \bar X$ (or $R_{\alpha,\beta} R$), we set
$$\tilde X^0 = -(A^*)^{-1} C^* \bar X, \quad\text{so that}\quad G^* \bar X = R_{\alpha,\beta} R = B_{\mathrm{in}}^* \tilde X^0.$$
Next, let
$$\tilde Y^0 = -A^{-1} B_{\mathrm{in}} B_{\mathrm{in}}^* \tilde X^0, \quad\text{so that}\quad C \tilde Y^0 = G G^* \bar X.$$
Substituting (A30) into (A28) yields
$$\alpha \bar X + \beta\, C \tilde Y^0 = R,$$
which implies
$$\bar X = \frac{R - \beta\, C \tilde Y^0}{\alpha}.$$
Therefore, from (A29),
$$\tilde X^0 = -(A^*)^{-1} C^*\, \frac{R - \beta\, C \tilde Y^0}{\alpha}.$$
At last, we define
$$X_d = -A^{-1} B_d\, d, \quad\text{so that}\quad R = r - (1-\beta)\, C z^0 - \beta\, C X_d.$$
Therefore, to solve (A15), we solve the equivalent coupled system
$$z_t^0 = A z^0 + B_d\, d + B_{\mathrm{in}} B_{\mathrm{in}}^* \tilde X^0,$$
$$0 = A^* \tilde X^0 + C^*\, \frac{r - (1-\beta)\, C z^0 - \beta\, C (X_d + \tilde Y^0)}{\alpha},$$
$$0 = A X_d + B_d\, d,$$
$$0 = A \tilde Y^0 + B_{\mathrm{in}} B_{\mathrm{in}}^* \tilde X^0.$$
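In finite dimensions, the chain (A28)–(A31) can be verified directly: computing $\bar X$, $\tilde X^0$, and $\tilde Y^0$ through the stated solves reproduces $R_{\alpha,\beta}R$ evaluated from its definition. All matrices below are illustrative stand-ins, with the sign convention $G = -CA^{-1}B_{\mathrm{in}}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 6, 3, 3
A = -np.eye(n) - 0.3 * rng.standard_normal((n, n))
Bin = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
R = rng.standard_normal(p)               # stand-in for R = r - (1-beta) C z^0 - beta C X_d

G = -C @ np.linalg.solve(A, Bin)         # G = -C A^{-1} B_in (sign convention assumed)
alpha, beta = 1e-3, 0.5

# direct evaluation: R_{alpha,beta} R = G^* (alpha I + beta G G^*)^{-1} R
direct = G.T @ np.linalg.solve(alpha * np.eye(p) + beta * (G @ G.T), R)

# adjoint-variable evaluation following (A28)-(A31)
Xbar = np.linalg.solve(alpha * np.eye(p) + beta * (G @ G.T), R)
Xt0 = -np.linalg.solve(A.T, C.T @ Xbar)            # X~0 = -(A^*)^{-1} C^* Xbar
Yt0 = -np.linalg.solve(A, Bin @ (Bin.T @ Xt0))     # Y~0, so that C Y~0 = G G^* Xbar
via_adjoint = Bin.T @ Xt0                          # R_{alpha,beta} R = B_in^* X~0

print(np.allclose(direct, via_adjoint))
```

The adjoint route never forms $G$ or $G^*$ explicitly, which is the point of the algorithm: in the PDE setting only solves with $A$, $A^*$ and applications of $B_{\mathrm{in}}$, $B_{\mathrm{in}}^*$, $C$, $C^*$ are needed.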
A numerically more stable and simpler algorithm is obtained by defining
$$X^0 := \alpha \tilde X^0, \qquad Y^0 := X_d + \tilde Y^0.$$
We can repeat the above construction for the cascade controls to provide a simple set of systems from the systems (17). Namely, we can write (17) as
$$z_t^j = A z^j + B_{\mathrm{in}} R_{\alpha,\beta}\big(e^{j-1} - (1-\beta)\, C z^j\big).$$
This leads to the system (25)–(27):
$$z_t^j = A z^j + \frac{B_{\mathrm{in}} B_{\mathrm{in}}^* X^j}{\alpha},$$
$$0 = A^* X^j + C^*\big(e^{j-1} - (1-\beta)\, C z^j - \beta\, C Y^j\big),$$
$$0 = A Y^j + \frac{B_{\mathrm{in}} B_{\mathrm{in}}^* X^j}{\alpha}.$$
The desired cascade controls $u^j$ are given by
$$u^j = \frac{B_{\mathrm{in}}^* X^j}{\alpha},$$
so that the control at the $n$th step is
$$u^n = \frac{B_{\mathrm{in}}^*}{\alpha} \sum_{j=0}^{n} X^j.$$
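In the static (steady-state) limit, the system (25)–(27) becomes a single block-linear solve, and the resulting control can be compared against the regularized formula it encodes. The sketch below assembles the steady-state version for one cascade step with a constant previous error $e^{j-1}$; the matrices and sizes are illustrative, with $G = -CA^{-1}B_{\mathrm{in}}$ assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 6, 3, 3                         # illustrative state/input/output sizes
A = -np.eye(n) - 0.3 * rng.standard_normal((n, n))   # stable-ish stand-in for A
Bin = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
e_prev = rng.standard_normal(p)           # previous cascade error e^{j-1}, held constant
alpha, beta = 1e-3, 0.5

BB = Bin @ Bin.T / alpha
Z = np.zeros((n, n))
# steady-state version of (25)-(27), unknowns stacked as (z^j, X^j, Y^j)
M = np.block([
    [A,                      BB,  Z],
    [-(1 - beta) * C.T @ C,  A.T, -beta * C.T @ C],
    [Z,                      BB,  A],
])
rhs = np.concatenate([np.zeros(n), -C.T @ e_prev, np.zeros(n)])
z, X, Y = np.split(np.linalg.solve(M, rhs), 3)

u = Bin.T @ X / alpha                     # cascade control u^j = B_in^* X^j / alpha
G = -C @ np.linalg.solve(A, Bin)
u_tik = np.linalg.solve(alpha * np.eye(m) + G.T @ G, G.T @ e_prev)  # Tikhonov control
print(np.allclose(u, u_tik))
```

At steady state the first and third block rows coincide, so $Y^j = z^j$ and the $\beta$-terms recombine: the recovered control reduces exactly to the pure Tikhonov control $R_\alpha e^{j-1}$, illustrating how the $\beta$-regularization leaves the asymptotic behavior untouched.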

Appendix C. Proof of Lemma 1

Our proof of Lemma 1 relies on the spectral theory of compact, self-adjoint operators (see for example [24,25,34,35]) applied to the non-negative, compact, self-adjoint operator $G G^*: \tilde Y \to \tilde Y$. A proof could also be obtained using standard SVD arguments, as in [29,30]. According to the spectral theorem, in the Hilbert space $\tilde Y$, there exists an orthonormal family $\{u_j\} \subset N(G^*)^\perp$ and a sequence of positive numbers $\{\sigma_j^2\}$, decreasing to zero, satisfying
$$G G^* u_j = \sigma_j^2\, u_j,$$
$$G G^* \psi = \sum_{j=1}^{\infty} \sigma_j^2\, \langle \psi, u_j\rangle\, u_j.$$
Note that for any $\psi \in \tilde Y$ and any continuous real-valued function $F$, we have
$$F(G G^*)\psi = \sum_{j=1}^{\infty} F(\sigma_j^2)\, \langle \psi, u_j\rangle\, u_j.$$
Proof of Lemma 1. 
Taking $F(s) = \frac{\alpha}{\alpha+s}$, the operator in (A43) is given, for $\psi \in Y$, by
$$M_\alpha \psi = \sum_{j=1}^{\infty} \frac{\alpha}{\alpha+\sigma_j^2}\, \langle (I-L)\psi, u_j\rangle_c\, u_j,$$
where $\langle\cdot,\cdot\rangle_c$ denotes the inner product in $Y$.
Due to the orthonormality of the family $\{u_j\}$, we have
$$\|M_\alpha \psi\|^2 = \sum_{j=1}^{\infty} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 \big|\langle (I-L)\psi, u_j\rangle_c\big|^2.$$
Given $\psi \in Y$, let us define $\tilde\psi = (I-L)\psi \in \tilde Y$. Our proof proceeds by showing that for any arbitrary $\epsilon > 0$ and $\psi \in Y$, we can choose $\alpha$ sufficiently small so that $\|M_\alpha \psi\| \le \epsilon$. We note that for any $\psi \in Y$, we have
$$\sum_{j=1}^{\infty} |\langle \tilde\psi, u_j\rangle_c|^2 \le \|\tilde\psi\|^2 < \infty.$$
Let us fix an arbitrary $\epsilon > 0$. Notice that for all $j \in \mathbb{Z}^+$ and $\alpha > 0$,
$$\frac{\alpha}{\alpha+\sigma_j^2} < 1.$$
Therefore, for our fixed $\epsilon$, from (A45), we can choose $N$ so that
$$\sum_{j=N+1}^{\infty} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 |\langle \tilde\psi, u_j\rangle_c|^2 \le \sum_{j=N+1}^{\infty} |\langle \tilde\psi, u_j\rangle_c|^2 < \frac{\epsilon^2}{2}.$$
Then, since the $\sigma_j^2$ are decreasing and $\alpha > 0$,
$$\left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 \le \left(\frac{\alpha}{\alpha+\sigma_N^2}\right)^2 < \left(\frac{\alpha}{\sigma_N^2}\right)^2, \qquad j = 1,\dots,N.$$
We now choose $\alpha_0$ small enough so that for $0 < \alpha < \alpha_0$, we have
$$\left(\frac{\alpha}{\sigma_N^2}\right)^2 < \frac{\epsilon^2}{2\,\|\tilde\psi\|_c^2}.$$
Then, considering (A45), we obtain
$$\sum_{j=1}^{N} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 |\langle \tilde\psi, u_j\rangle_c|^2 \le \left(\frac{\alpha}{\sigma_N^2}\right)^2 \sum_{j=1}^{N} |\langle \tilde\psi, u_j\rangle_c|^2 \le \left(\frac{\alpha}{\sigma_N^2}\right)^2 \sum_{j=1}^{\infty} |\langle \tilde\psi, u_j\rangle_c|^2 \le \frac{\epsilon^2}{2}.$$
Finally,
$$\|M_\alpha \psi\|^2 = \sum_{j=1}^{N} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 |\langle \tilde\psi, u_j\rangle_c|^2 + \sum_{j=N+1}^{\infty} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 |\langle \tilde\psi, u_j\rangle_c|^2 \le \frac{\epsilon^2}{2} + \frac{\epsilon^2}{2} = \epsilon^2,$$
and
$$\|M_\alpha \psi\| \le \epsilon.$$
In the case of finitely many inputs $n_b$ and outputs $n_c$, the sum in (A44) becomes
$$\|M_\alpha \psi\|^2 = \sum_{j=1}^{n_c} \left(\frac{\alpha}{\alpha+\sigma_j^2}\right)^2 |\langle \tilde\psi, u_j\rangle_c|^2 \le \left(\frac{\alpha}{\alpha+\sigma_{n_c}^2}\right)^2 \|\tilde\psi\|^2,$$
which implies
$$\|M_\alpha\| \le \frac{\alpha}{\alpha+\sigma_{n_c}^2} \le \frac{\sqrt{\alpha}}{2\,\sigma_{n_c}}.$$
Therefore, the operator M α converges to zero in the operator norm. □
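In the finite-dimensional case, the norm of $M_\alpha$ is the largest spectral filter factor $\alpha/(\alpha+\sigma_j^2)$, so both inequalities in the last display can be checked numerically; the matrix $G$ below is an arbitrary full-rank stand-in, and the second inequality follows from the AM-GM bound $\alpha + \sigma^2 \ge 2\sigma\sqrt{\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((4, 4))                   # arbitrary full-rank stand-in for G
sigma = np.linalg.svd(G, compute_uv=False)
s_min = sigma[-1]                                 # sigma_{n_c}, the smallest singular value

for alpha in [1e-2, 1e-4, 1e-6]:
    factors = alpha / (alpha + sigma**2)          # spectral filter F(sigma_j^2) = alpha/(alpha+s)
    M_norm = factors.max()                        # = alpha / (alpha + s_min^2)
    bound = np.sqrt(alpha) / (2 * s_min)          # AM-GM bound sqrt(alpha)/(2 sigma_{n_c})
    print(f"alpha={alpha:.0e}  ||M_alpha||={M_norm:.3e}  bound={bound:.3e}")
```

The printed norms shrink with $\alpha$, visualizing the convergence of $M_\alpha$ to zero in the operator norm.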

Appendix D. Analysis of the Errors

As discussed in Section 5, this appendix contains a detailed analysis of the errors obtained in the cascade procedure. We begin by applying the variation of parameters formula to the dynamical system for $z^0$ in (14) to obtain
$$z^0(t) = e^{A_{\alpha,\beta} t} z_0^0 + \int_0^t e^{A_{\alpha,\beta}(t-\tau)} B_{\mathrm{in}} R_{\alpha,\beta}\, r(\tau)\, d\tau + \int_0^t e^{A_{\alpha,\beta}(t-\tau)} I_{\alpha,\beta} B_d\, d(\tau)\, d\tau.$$
Applying integration by parts to the two integral terms produces
$$\begin{aligned}
z^0(t) &= e^{A_{\alpha,\beta} t} z_0^0 - A_{\alpha,\beta}^{-1}\Big[e^{A_{\alpha,\beta}(t-\tau)} B_{\alpha,\beta}\, r(\tau)\Big]_{\tau=0}^{\tau=t} - A_{\alpha,\beta}^{-1}\Big[e^{A_{\alpha,\beta}(t-\tau)} I_{\alpha,\beta} B_d\, d(\tau)\Big]_{\tau=0}^{\tau=t} \\
&\quad + \int_0^t A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta}(t-\tau)} B_{\alpha,\beta}\, r^{(1)}(\tau)\, d\tau + \int_0^t A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta}(t-\tau)} I_{\alpha,\beta} B_d\, d^{(1)}(\tau)\, d\tau \\
&= P(t) - A_{\alpha,\beta}^{-1} B_{\alpha,\beta}\, r(t) - A_{\alpha,\beta}^{-1} I_{\alpha,\beta} B_d\, d(t) \\
&\quad + \int_0^t A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta}(t-\tau)} B_{\alpha,\beta}\, r^{(1)}(\tau)\, d\tau + \int_0^t A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta}(t-\tau)} I_{\alpha,\beta} B_d\, d^{(1)}(\tau)\, d\tau,
\end{aligned}$$
where the term $P(t)$ collects contributions that decay exponentially as $t \to \infty$:
$$P(t) = e^{A_{\alpha,\beta} t} z_0^0 + A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta} t} B_{\alpha,\beta}\, r(0) + A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta} t} I_{\alpha,\beta} B_d\, d(0).$$
In this work, many expressions decay exponentially to zero as $t$ goes to infinity. These terms do not affect the computation of the limsup of the norm as $t \to \infty$. To simplify the analysis, we use $P(t)$ to denote any such expression that decays exponentially to zero, and we do not distinguish between different terms of this type or sums of them.
Now, by applying $C$ to the expression for $z^0(t)$ obtained in (A50), subtracting both sides of the resulting formula from $r(t)$, and recalling the definition of $e^0(t)$ in (A20), we have
$$e^0(t) = r(t) - C z^0(t) = P(t) + \big(I + C A_{\alpha,\beta}^{-1} B_{\alpha,\beta}\big)\, r(t) + C A_{\alpha,\beta}^{-1} I_{\alpha,\beta} B_d\, d(t) + (K * r^{(1)})(t) + (K_d * d^{(1)})(t),$$
where $K$ and $K_d$ are defined in (33).
Considering
$$A_{\alpha,\beta}^{-1} = A^{-1}\big(I + (1-\beta)\, B_{\mathrm{in}} R_\alpha C A^{-1}\big)$$
and
$$A_{\alpha,\beta}^{-1} B_{\alpha,\beta} = A^{-1} B_{\mathrm{in}} R_\alpha,$$
which follow by direct verification from the definitions (A11) and (A13), we obtain
$$C A_{\alpha,\beta}^{-1} B_{\alpha,\beta} = C A^{-1} B_{\mathrm{in}} R_\alpha = -G R_\alpha.$$
To evaluate $C A_{\alpha,\beta}^{-1} I_{\alpha,\beta}$, we use the identities
$$I - (1-\beta)\, G R_\alpha = (\alpha I + G G^*)^{-1}(\alpha I + \beta G G^*),$$
$$I - \beta\, G R_{\alpha,\beta} = \alpha\,(\alpha I + \beta G G^*)^{-1}.$$
Considering the value of $A_{\alpha,\beta}^{-1}$ in (A52) and the definition of $I_{\alpha,\beta}$ in (13), we evaluate
$$\begin{aligned}
C A_{\alpha,\beta}^{-1} I_{\alpha,\beta} B_d &= C A^{-1}\big(I + (1-\beta)\, B_{\mathrm{in}} R_\alpha C A^{-1}\big)\big(I + \beta\, B_{\mathrm{in}} R_{\alpha,\beta} C A^{-1}\big) B_d \\
&= \big(I - (1-\beta)\, G R_\alpha\big)\, C A^{-1}\big(I + \beta\, B_{\mathrm{in}} R_{\alpha,\beta} C A^{-1}\big) B_d \\
&= \big(I - (1-\beta)\, G R_\alpha\big)\big(I - \beta\, G R_{\alpha,\beta}\big)\, C A^{-1} B_d \\
&= (\alpha I + G G^*)^{-1}(\alpha I + \beta G G^*)\, \alpha\,(\alpha I + \beta G G^*)^{-1}\, C A^{-1} B_d \\
&= \alpha\,(\alpha I + G G^*)^{-1}\, C A^{-1} B_d \\
&= (I - G R_\alpha)\, C A^{-1} B_d,
\end{aligned}$$
where in the second-to-last step we used (A54) and (A55), and in the last step we used (A8).
At this point, considering (A51), recalling the definitions of $L_\alpha$ in (30), $M_\alpha$ in (31), and $S_i$ in (34),
$$S_0 := I, \qquad S_i := \Big(M_\alpha + K * \frac{d}{dt}\Big) S_{i-1} = \Big(M_\alpha + K * \frac{d}{dt}\Big)^i, \quad i \in \mathbb{Z}^+,$$
and $I - G R_\alpha = L_\alpha = L + M_\alpha$, we have
$$\begin{aligned}
e^0(t) &= P(t) + L_\alpha\, r(t) + L_\alpha\, C A^{-1} B_d\, d(t) + (K * r^{(1)})(t) + (K_d * d^{(1)})(t) \\
&= P(t) + \Big(L + M_\alpha + K * \frac{d}{dt}\Big) r(t) + \Big((L + M_\alpha)\, C A^{-1} B_d + K_d * \frac{d}{dt}\Big) d(t) \\
&= P(t) + (L + S_1)\, r(t) + \Big(L_\alpha\, C A^{-1} B_d\, d(t) + (K_d * d^{(1)})(t)\Big) \\
&:= P(t) + e_0^r(t) + e_0^d(t).
\end{aligned}$$
Next, we apply the same variation of parameters to the system
$$z_t^j = A_{\alpha,\beta}\, z^j + B_{\mathrm{in}} R_{\alpha,\beta}\, e^{j-1}, \qquad z^j(0) = 0,$$
which results in a formula similar to that for $z^0$ in (A49), as well as a formula resembling the one for $e^0$ in (A56). Specifically, we have
$$z^j(t) = \int_0^t e^{A_{\alpha,\beta}(t-\tau)} B_{\mathrm{in}} R_{\alpha,\beta}\, e^{j-1}(\tau)\, d\tau.$$
For the case $j = 1$, we have
$$e^1(t) = P(t) + \Big(L_\alpha + K * \frac{d}{dt}\Big) e^0(t) = P(t) + (L + S_1)\, e_0^r(t) + (L + S_1)\, e_0^d(t) := P(t) + e_1^r(t) + e_1^d(t).$$
More generally, for all $j \ge 1$,
$$e^j(t) = P(t) + \Big(L + M_\alpha + K * \frac{d}{dt}\Big) e^{j-1}(t) = P(t) + (L + S_1)\, e_{j-1}^r(t) + (L + S_1)\, e_{j-1}^d(t) := P(t) + e_j^r(t) + e_j^d(t).$$
Our next goal is to derive a general formula for $e^j(t)$ by separately analyzing the errors $e_j^r(t)$ and $e_j^d(t)$ in terms of $L$, $M_\alpha$, $K(t)$, $K_d(t)$, $r(t)$, $d(t)$, and exponentially decaying terms $P(t)$.
Remark A1.
The following relations hold:
$$L M_\alpha = M_\alpha L = K(t)\, L = S_1 L = 0, \qquad L^2 = L.$$
Clearly, $L M_\alpha = 0$ and $M_\alpha L = 0$ because $M_\alpha$ maps into the closure of the range of $G$, $\overline{R(G)}$, and $L$ is the orthogonal projection onto $R(G)^\perp$. The third equality follows from the useful result given in (A53), i.e.,
$$A_{\alpha,\beta}^{-1} B_{\alpha,\beta} = A^{-1} B_{\mathrm{in}} R_\alpha.$$
Next, we observe that
$$K(t) = -C A_{\alpha,\beta}^{-1} e^{A_{\alpha,\beta} t} B_{\alpha,\beta} = -C e^{A_{\alpha,\beta} t} A_{\alpha,\beta}^{-1} B_{\alpha,\beta} = -C e^{A_{\alpha,\beta} t} A^{-1} B_{\mathrm{in}} R_\alpha.$$
Since $R_\alpha = (\alpha I + G^* G)^{-1} G^*$ vanishes on $N(G^*) = R(G)^\perp$, we have $R_\alpha L = 0$ and therefore $K(t)\, L = 0$, using (A61).
Finally, since both $K(t)\, L = 0$ and $M_\alpha L = 0$, we conclude $S_1 L = 0$.
Notice that from (A59) it is clear that the reference and disturbance error terms can be studied independently. So, let us start by considering $e_n^r$. We have
$$e_n^r = \Big(L + M_\alpha + K * \frac{d}{dt}\Big) e_{n-1}^r = (L + S_1)\, e_{n-1}^r.$$
Lemma A2.
For all $n \ge 0$ we have
$$e_n^r = P + \Big(L \sum_{p=0}^{n} S_p + S_{n+1}\Big) r.$$
Proof. 
First, we prove that (A63) is valid for $n = 0$:
$$e_0^r = P + (L S_0 + S_1)\, r = P + (L + S_1)\, r,$$
which corresponds to the definition (A56).
We now assume that formula (A63) holds for some $n = j \in \mathbb{Z}^+$, and prove that it holds for $n = j+1$. So let
$$e_j^r = P + \Big(L \sum_{p=0}^{j} S_p + S_{j+1}\Big) r,$$
and consider (A62) to obtain
$$\begin{aligned}
e_{j+1}^r &= P + (L + S_1)\Big[P + \Big(L \sum_{p=0}^{j} S_p + S_{j+1}\Big) r\Big] \\
&= P + \Big(L \sum_{p=0}^{j} S_p + L S_{j+1}\Big) r + S_1 S_{j+1}\, r \\
&= P + \Big(L \sum_{p=0}^{j+1} S_p + S_{j+2}\Big) r,
\end{aligned}$$
which proves (A63) by induction. Note that in the second line, we used the relation $S_1 L = 0$, proved in Remark A1. □
We now turn to the disturbance part of the error, $e_n^d$. Recall from (A58) that
$$e_0^d = P + \Big((L + M_\alpha)\, C A^{-1} B_d + K_d * \frac{d}{dt}\Big) d(t),$$
and from (A59) that
$$e_j^d = P + \Big(L + M_\alpha + K * \frac{d}{dt}\Big) e_{j-1}^d = P + (L + S_1)\, e_{j-1}^d.$$
Lemma A3.
For all $n \ge 1$, we have
$$e_n^d = P + \Big(L \sum_{p=0}^{n-1} S_p + S_n\Big) e_0^d,$$
where $e_0^d$ is given in (A56).
Proof. 
As in the proof of Lemma A2, we first verify (A65) for $n = 1$:
$$e_1^d = P + (L S_0 + S_1)\, e_0^d,$$
which matches the definition of $e_1^d(t)$ given in (A58).
Now, assume (A65) holds for some $n = j \in \mathbb{Z}^+$; we show it also holds for $n = j+1$. Let
$$e_j^d = P + \Big(L \sum_{p=0}^{j-1} S_p + S_j\Big) e_0^d,$$
and consider (A64) to obtain
$$\begin{aligned}
e_{j+1}^d &= P + (L + S_1)\Big[P + \Big(L \sum_{p=0}^{j-1} S_p + S_j\Big) e_0^d\Big] \\
&= P + \Big(L \sum_{p=0}^{j-1} S_p + L S_j\Big) e_0^d + S_1 S_j\, e_0^d \\
&= P + \Big(L \sum_{p=0}^{j} S_p + S_{j+1}\Big) e_0^d. \qquad \square
\end{aligned}$$
The results of Lemmas A2 and A3 are reported in the main text as Equations (38) and (39).

Appendix E. Error Estimates in the Case of Finite-Dimensional Input and Output Spaces

In this appendix, we estimate the limsup in time of the norm of the errors
$$e_j(t) = e_j^r(t) + e_j^d(t).$$
Here, $e_j^r(t)$ and $e_j^d(t)$ are defined in (38) and (39), respectively. We focus on the case of finite-dimensional input and output spaces. In particular, we exploit the fact that we can pass to the limit $\alpha \to 0$ (which in turn implies $M_\alpha \to 0$) to examine the errors defined in (40). As $M_\alpha \to 0$, we obtain
$$\lim_{\alpha\to 0} S_i = \Big(K * \frac{d}{dt}\Big) S_{i-1}, \qquad \lim_{\alpha\to 0} S_n\, r = (K*)^n\, r^{(n)}.$$
Passing to the limit as $\alpha$ goes to zero, collecting all the terms, and simplifying, we have
$$\bar e_n^r := \lim_{\alpha\to 0} e_n^r = \lim_{\alpha\to 0}\Big[P + \Big(L \sum_{j=0}^{n} S_j + S_{n+1}\Big) r\Big] = P + L \sum_{j=0}^{n} (K*)^j\, r^{(j)} + (K*)^{n+1}\, r^{(n+1)}.$$
Similarly, for the disturbance part of the error, we have
$$\lim_{\alpha\to 0} e_n^d = P + L \sum_{j=0}^{n-1} (K*)^j \Big(\lim_{\alpha\to 0} e_0^d\Big)^{(j)} + (K*)^n \Big(\lim_{\alpha\to 0} e_0^d\Big)^{(n)}, \qquad \lim_{\alpha\to 0} e_0^d = P + L\, C A^{-1} B_d\, d + K_d * d^{(1)}.$$
Substituting and using the identities $L^2 = L$ and $(K*)\, L = 0$, we simplify to obtain
$$\begin{aligned}
\bar e_n^d := \lim_{\alpha\to 0} e_n^d &= P + L \sum_{j=0}^{n-1} (K*)^j \big(L\, C A^{-1} B_d\, d + K_d * d^{(1)}\big)^{(j)} + (K*)^n \big(L\, C A^{-1} B_d\, d + K_d * d^{(1)}\big)^{(n)} \\
&= P + L \Big(C A^{-1} B_d\, d + \sum_{j=0}^{n-1} (K*)^j\, K_d * d^{(j+1)}\Big) + (K*)^n\, K_d * d^{(n+1)} \\
&= P + L \Big(C A^{-1} B_d\, d + \sum_{j=1}^{n} (K*)^{j-1}\, K_d * d^{(j)}\Big) + (K*)^n\, K_d * d^{(n+1)}.
\end{aligned}$$
Combining (A66) and (A67), we obtain the estimate given above in (41):
$$\bar e_n := \lim_{\alpha\to 0} e_n = \lim_{\alpha\to 0}\big(e_n^r + e_n^d\big) = P + L \Big(\sum_{j=0}^{n} (K*)^j\, r^{(j)} + C A^{-1} B_d\, d + \sum_{j=1}^{n} (K*)^{j-1}\, K_d * d^{(j)}\Big) + (K*)^{n+1}\, r^{(n+1)} + (K*)^n\, K_d * d^{(n+1)}.$$

References

  1. Aulisa, E.; Gilliam, D. A Practical Guide to Geometric Regulation for Distributed Parameter Systems; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  2. Burns, J.A.; He, X.; Hu, W. Feedback stabilization of a thermal fluid system with mixed boundary control. Comput. Math. Appl. 2016, 71, 2170–2191. [Google Scholar] [CrossRef]
  3. Deutscher, J.; Kerschbaum, S. Robust output regulation by state feedback control for coupled linear parabolic PIDEs. IEEE Trans. Autom. Control 2019, 65, 2207–2214. [Google Scholar] [CrossRef]
  4. Aulisa, E.; Gilliam, D.; Pathiranage, T. Analysis of an iterative scheme for approximate regulation for nonlinear systems. Int. J. Robust Nonlinear Control 2018, 28, 3140–3173. [Google Scholar] [CrossRef]
  5. Aulisa, E.; Gilliam, D.S.; Pathiranage, T.W. Analysis of the error in an iterative algorithm for asymptotic regulation of linear distributed parameter control systems. ESAIM Math. Model. Numer. Anal. 2019, 53, 1577–1606. [Google Scholar] [CrossRef]
  6. Aulisa, E.; Gilliam, D.S. Approximation methods for geometric regulation. arXiv 2021, arXiv:2102.06196. [Google Scholar]
  7. Francis, B.A.; Wonham, W.M. The internal model principle of control theory. Automatica 1976, 12, 457–465. [Google Scholar] [CrossRef]
  8. Francis, B.A. The linear multivariable regulator problem. SIAM J. Control Optim. 1977, 15, 486–505. [Google Scholar] [CrossRef]
  9. Hespanha, J.P. Linear Systems Theory, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
  10. Azimi, A.; Koch, S.; Reichhartinger, M. Robust internal model-based control for linear-time-invariant systems. Int. J. Robust Nonlinear Control 2024, 34, 12476–12496. [Google Scholar] [CrossRef]
  11. Byrnes, C.I.; Laukó, I.G.; Gilliam, D.S.; Shubov, V.I. Output regulation for linear distributed parameter systems. IEEE Trans. Autom. Control 2000, 45, 2236–2252. [Google Scholar] [CrossRef]
  12. Anderson, B.D.; Moore, J.B. Optimal Control: Linear Quadratic Methods, Reprint ed.; Dover Publications: Mineola, NY, USA, 2007. [Google Scholar]
  13. Dorato, P.; Abdallah, C.; Cerone, V. Linear-Quadratic Control: An Introduction; Krieger Publishing Company: Malabar, FL, USA, 2000. [Google Scholar]
  14. Najafi Birgani, S.; Moaveni, B.; Khaki-Sedigh, A. Infinite horizon linear quadratic tracking problem: A discounted cost function approach. Optim. Control Appl. Methods 2018, 39, 1549–1572. [Google Scholar] [CrossRef]
  15. Bornemann, F.A. An adaptive multilevel approach to parabolic equations III. 2D error estimation and multilevel preconditioning. IMPACT Comput. Sci. Eng. 1992, 4, 1–45. [Google Scholar] [CrossRef]
  16. Tröltzsch, F. On the Lagrange–Newton–SQP method for the optimal control of semilinear parabolic equations. SIAM J. Control Optim. 1999, 38, 294–312. [Google Scholar] [CrossRef]
  17. McAsey, M.; Mou, L.; Han, W. Convergence of the forward-backward sweep method in optimal control. Comput. Optim. Appl. 2012, 53, 207–226. [Google Scholar] [CrossRef]
  18. Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications; Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 2010; Volume 112. [Google Scholar]
  19. Lee, Y.; Kouvaritakis, B. Constrained receding horizon predictive control for systems with disturbances. Int. J. Control 1999, 72, 1027–1032. [Google Scholar] [CrossRef]
  20. Camacho, E.F.; Bordons, C. Constrained model predictive control. In Model Predictive Control; Springer: Berlin/Heidelberg, Germany, 2007; pp. 177–216. [Google Scholar]
  21. Güttel, S.; Pearson, J.W. A rational deferred correction approach to parabolic optimal control problems. IMA J. Numer. Anal. 2018, 38, 1861–1892. [Google Scholar] [CrossRef]
  22. Leveque, S.; Pearson, J.W. Fast iterative solver for the optimal control of time-dependent PDEs with Crank–Nicolson discretization in time. Numer. Linear Algebra Appl. 2022, 29, e2419. [Google Scholar] [CrossRef]
  23. Aulisa, E.; Burns, J.A.; Gilliam, D.S. Approximate Error Feedback Controller for Tracking and Disturbance Rejection for Linear Distributed Parameter Systems. In Proceedings of the 2022 American Control Conference (ACC), Atlanta, GA, USA, 8–10 June 2022; pp. 976–981. [Google Scholar]
  24. Conway, J.B. A Course in Functional Analysis; Springer: Berlin/Heidelberg, Germany, 2019; Volume 96. [Google Scholar]
  25. Kato, T. Perturbation Theory for Linear Operators; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 132. [Google Scholar]
  26. Baumeister, J. Stable Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  27. Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems, 3rd ed.; Applied Mathematical Sciences; Springer: Cham, Switzerland, 2021; Volume 120. [Google Scholar] [CrossRef]
  28. Morozov, V.A. Methods for Solving Incorrectly Posed Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  29. Kress, R.; Maz’ya, V.; Kozlov, V. Linear Integral Equations; Springer: Berlin/Heidelberg, Germany, 1989; Volume 82. [Google Scholar]
  30. Hsiao, G.C.; Wendland, W.L. Boundary Integral Equations, 2nd ed.; Applied Mathematical Sciences; Springer: Cham, Switzerland, 2021; Volume 164. [Google Scholar] [CrossRef]
  31. Fallahnejad, M.; Kazemy, A.; Shafiee, M. Event-triggered H∞ stabilization of networked cascade control systems under periodic DoS attack: A switching approach. Int. J. Electr. Power Energy Syst. 2023, 153, 109278. [Google Scholar] [CrossRef]
  32. Du, Z.; Chen, C.; Li, C.; Yang, X.; Li, J. Fault-Tolerant H-Infinity Stabilization for Networked Cascade Control Systems with Novel Adaptive Event-Triggered Mechanism. IEEE Trans. Autom. Sci. Eng. 2025. early access. [Google Scholar] [CrossRef]
  33. Hecht, F.; Lance, G.; Trélat, E. PDE-Constrained Optimization Within FreeFEM; Open-Access Monograph; LJLL/Sorbonne Université: Paris, France, 2024. [Google Scholar]
  34. Dunford, N.; Schwartz, J.T. Linear Operators. Part II: Spectral Theory. Self Adjoint Operators in Hilbert Space; Interscience: New York, NY, USA, 1963. [Google Scholar]
  35. Lewin, M. Spectral Theory and Quantum Mechanics; Universitext; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the solution algorithm for the cascade controller in the time interval $(0, T]$. The outer loop advances in time, while the inner loop iterates over the cascade index $j$.
Figure 2. $\Omega$, $n_b = 2$, $n_c = 2$.
Figure 3. $y_1(t)$ (blue), $r_1(t)$ (red).
Figure 4. $y_2(t)$ (blue), $r_2(t)$ (red).
Figure 5. $y_1(t)$ (blue), $r_1(t)$ (red), for small time.
Figure 6. $y_2(t)$ (blue) and $r_2(t)$ (red) for small time.
Figure 7. $e_{01}(t)$ (blue), $e_{11}(t)$ (red).
Figure 8. $e_{11}(t)$ (blue), $e_{21}(t)$ (red).
Figure 9. $e_{21}(t)$ (blue), $e_{31}(t)$ (red).
Figure 10. $e_{02}(t)$ (blue), $e_{12}(t)$ (red).
Figure 11. $e_{12}(t)$ (blue), $e_{22}(t)$ (red).
Figure 12. $e_{22}(t)$ (blue), $e_{32}(t)$ (red).
Figure 13. $e_1(t)$ (blue), $e_{31}(t)$ (red).
Figure 14. $e_2(t)$ (blue), $e_{32}(t)$ (red).
Figure 15. $u_{j1}(t)$, with $j = 0, 1, 2, 3$, and $\alpha = 10^{-7}$.
Figure 16. $u_{j2}(t)$, with $j = 0, 1, 2, 3$, and $\alpha = 10^{-7}$.
Figure 17. $u_{j1}(t)$, with $j = 0, 1, 2, 3$, and $\alpha = 10^{-9}$.
Figure 18. $u_{j2}(t)$, with $j = 0, 1, 2, 3$, and $\alpha = 10^{-9}$.
Figure 19. $e_{j1}(t)$, $\alpha = 10^{-6}$.
Figure 20. $e_{j2}(t)$, $\alpha = 10^{-6}$.
Figure 21. $e_{j1}(t)$, $\alpha = 10^{-9}$.
Figure 22. $e_{j2}(t)$, $\alpha = 10^{-9}$.
Figure 23. $e_1(t)$ and $E_{11}(t)$, $\alpha = 10^{-9}$.
Figure 24. $e_2(t)$ and $E_{12}(t)$, $\alpha = 10^{-9}$.
Figure 25. One-dimensional rod, non-colocated.
Figure 26. $\|e_j\|_c$, $j = 0, 1, 2$, $\alpha = 10^{-3}$.
Figure 27. $\|e_j\|_c$, $j = 0, 1, 2$, $\alpha = 10^{-5}$.
Figure 28. $\|e_j\|_c$, $j = 0, 1, 2$, $\alpha = 10^{-7}$.
Figure 29. $\|e_j\|_c$, $j = 0, 1, 2$, $\alpha = 10^{-9}$.
Figure 30. $\|E_0\|_c$, $\|e\|_c$, $\alpha = 10^{-9}$.
Figure 31. Same for small $t$.
Figure 32. $\|E_{oc}\|_c$ and $\|e_2(t)\|_c$ with $\epsilon = \alpha = 10^{-9}$.
Figure 33. Same for small $t$.
Figure 34. One-dimensional rod, colocated.
Figure 35. $\|e_j\|_c$, $j = 1, \dots, 5$, $\alpha = 10^{-3}$.
Figure 36. $\|e_j\|_c$, $j = 1, \dots, 5$, $\alpha = 10^{-5}$.
Figure 37. $\|e_j\|_c$, $j = 1, \dots, 5$, $\alpha = 10^{-7}$.
Figure 38. $\|e_j\|_c$, $j = 1, \dots, 5$, $\alpha = 10^{-9}$.
Figure 39. $\|e_5(t)\|_c$ (blue), $\|e(t)\|$ (red), $\alpha = 10^{-9}$.
Figure 40. $\|e_5\|_c$ (blue) and $\|e(t)\|_c$ (red) for small $t$.
Figure 41. $u_5(x, 15)$ for $\alpha = 10^{-7}$ (blue), $10^{-8}$ (red), and $10^{-9}$ (black).
Figure 42. Two-dimensional region, non-colocated.
Figure 43. $\|e_j\|_c$, $j = 0, \dots, 3$, $\alpha = 10^{-4}$.
Figure 44. $\|e_j\|_c$, $j = 0, \dots, 3$, $\alpha = 10^{-7}$.
Figure 45. $\|e_3\|_c$, $\|e\|_c$, $\alpha = 10^{-9}$.
Figure 46. $\|e_3\|_c$, $\|E_{oc}\|_c$.
Figure 47. Two-dimensional region, colocated.
Figure 48. $\|e_j\|_c$, $j = 0, \dots, 4$, $\alpha = 10^{-4}$.
Figure 49. $\|e_j\|_c$, $j = 1, \dots, 4$, $\alpha = 10^{-7}$.
Figure 50. $\|e_4\|_c$, $\|e\|_c$, $\alpha = 10^{-9}$.
Figure 51. $\|e_4\|_c$, $\|e\|_c$ on $0.5 < t < 1.5$.
Table 1. Comparison $\varlimsup \|e_{jk}\| / \varlimsup \|e_{(j-1)k}\|$ for $j = 1, 2, 3$ and $k = 1, 2$.

          j = 1        j = 2        j = 3
k = 1     8.8800e-2    1.0400e-1    1.0060e-1
k = 2     8.6800e-2    9.7800e-2    9.8200e-2
Table 2. $\varlimsup \|e_j\| / \varlimsup \|e_{j-1}\|$ for $j = 1, \dots, 5$ and $\alpha = 10^{-k}$, $k = 3, 5, 7, 9$.

          alpha = 1e-3   alpha = 1e-5   alpha = 1e-7   alpha = 1e-9
j = 1     0.8851         0.5238         0.6486         0.1369
j = 2     0.8852         0.9765         0.8622         0.8516
j = 3     0.8854         0.9817         0.9024         0.9014
j = 4     0.8856         0.9818         0.9137         0.9202
j = 5     0.8858         0.9820         0.9215         0.9315
Aulisa, E.; Chierici, A.; Gilliam, D.S. A Least-Squares Control Strategy for Asymptotic Tracking and Disturbance Rejection Using Tikhonov Regularization and Cascade Iteration. Mathematics 2025, 13, 3707. https://doi.org/10.3390/math13223707