Abstract
This paper presents a comprehensive strategy for tracking and disturbance rejection in both lumped and distributed parameter systems, with a focus on infinite-dimensional input and output spaces. Building on the geometric theory of regulation, the proposed methodology employs a cascade algorithm coupled with Tikhonov regularization to derive control laws that iteratively improve tracking accuracy. Unlike traditional optimal control approaches, the framework minimizes the limsup in time of the tracking error norm rather than a quadratic cost functional. The framework also applies to over- and under-determined systems. We provide theoretical insights, detailed algorithmic formulations, and numerical simulations to demonstrate the effectiveness and generality of the method. Results indicate that the cascade controls asymptotically approximate the classical optimal control solutions, with limitations addressed through rigorous error analysis. Applications cover diverse scenarios with both finite- and infinite-dimensional input and output spaces, showcasing the versatility of the approach.
Keywords:
asymptotic least-squares tracking; infinite-dimensional input and output spaces; Tikhonov regularization; cascade iteration
MSC:
93-10; 93B27; 35A25
1. Introduction
Tracking and disturbance rejection are fundamental objectives in control design, particularly for systems governed by partial differential equations (PDEs) [1,2,3]. This work presents a general strategy applicable to both lumped and distributed parameter systems, primarily focusing on systems modeled by PDEs. The proposed method generates a sequence of control laws by solving a cascade of easily manageable control problems. At each step, tracking accuracy is progressively enhanced through least-squares error minimization.
The foundation of the cascade algorithm, developed over a series of studies [4,5,6], lies in the theory of geometric regulation [1,7,8,9,10,11] and related works. Many of the authors’ previous studies have assumed finite-dimensional input and output spaces, with the additional constraint that the system transfer function at zero is invertible. A significant contribution of this work is the removal of such restrictions, enabling the algorithm to handle input and output spaces of arbitrary dimensionality. This generality allows the method to produce approximate controls even for infinite-dimensional input and output spaces and for over- or under-determined systems in the finite-dimensional case.
In this work, we do not claim that the resulting controls are optimal in the classical sense, as achieved through LQR or LQG methodologies [12,13], since we do not consider a quadratic time integral cost function. Instead, our goal is to generate suitable controls that enable the derivation of error estimates under straightforward smoothness assumptions and large-time supremum norm bounds for both the reference and disturbance signals. This approach accommodates extremely general signals that may have transient oscillations for a time but ultimately transition into smooth bounded motion. Most importantly, we do not require that these signals be generated by an exogenous system as in the classical theory.
In this work, we adopt a cost function based on the large-time supremum norm of the error rather than on an integral quadratic form. The resulting control algorithm is computationally efficient, simple to implement, and geared toward accurate long-term tracking and disturbance rejection. A notable advantage of this formulation is that the error at each step can be written in terms of explicit iterated convolution integrals (see [4,5,6]), which yield a priori estimates that may be computed offline. This perspective shifts the focus from traditional optimality to long-term performance, and it bears similarities to the discounted cost formulations used for infinite-horizon tracking problems [14].
Through several examples, we observe that the sequence of controls derived from the proposed algorithm converges in time to the classical optimal control solution. Here we note that standard optimal control problems for parabolic systems typically involve backward parabolic adjoint systems. Solving these problems often requires iterative forward–backward solutions or space–time discretization [15,16,17,18], which are computationally intensive. To alleviate this burden, approaches such as Receding-Horizon Control [19,20], ad hoc time-stepping schemes [21], and preconditioning techniques [22] have been proposed.
Finally, this study focuses on bounded input and output operators. However, the methodology can be extended to cases involving unbounded inputs and outputs, such as boundary control and sensing. Such generalizations require additional effort and will be addressed in future work.
The paper is organized as follows: Section 2 provides a statement of the problem, describing the plant as a controlled dynamical system with control inputs and outputs defined in an infinite-dimensional Hilbert space. The primary Control Objectives addressed in this work are formulated in Problems 1 and 2.
In Section 3, we present the approximate cascade controller. The initial step, referred to as the 0th-order controller, is given in (14) and (15), while the general jth-order controller is defined in (17) and (18). The motivation and derivation of the controller are discussed in Appendix A. Although not immediately apparent, this control strategy is rooted in obtaining approximate solutions to the regulator equations in the context of geometric regulation. Due to space limitations, details of the derivation process (e.g., regularization) are omitted here but thoroughly covered in prior works [1,4,5,6,23].
In Appendix A, Equation (A6), we present an initial form for the 0th-order controller and show why it is unsuitable. By substituting this control into equation (A1) and applying regularization and simplification, we obtain the final form (A17)–(A19), which leads to the results in (15).
At first glance, solving the dynamical systems in the controllers may seem numerically challenging. However, Section 4 presents a straightforward and practical algorithm for solving all cascade dynamical systems. This algorithm enables solutions using off-the-shelf finite element software, eliminating the need for complex numerical programming. In this section, we focus on the final results, leaving the detailed derivation of the Formulas (21)–(23) for the 0th-order equations and (25)–(27) for the jth-order equations to Appendix B.
A significant advantage of our control strategy is its ability to provide explicit formulas for the errors at each cascade step. These formulas express the errors in terms of the reference signals, disturbances, system parameters, the regularization parameter , and the Tikhonov regularization parameter . In Section 5, we provide these explicit error formulas: (35) for the 0th step and (38) and (39) for the general nth-order step, where . As with the dynamical systems, the derivations of these results involve lengthy discussions, which are provided in Appendix D.
Section 6 presents a detailed analysis of the errors in the special case of finite-dimensional input and output spaces. In this scenario, sending the Tikhonov regularization parameter to zero is possible, effectively replacing the Tikhonov regularization with the Moore–Penrose pseudoinverse. This approach enables a detailed examination of errors at each cascade step and an analysis of the overall cascade error structure. The ability to send to zero critically depends on Lemma 1, whose proof is provided in Appendix C.
In Section 7, we consider a common situation in applications to partial differential equations with infinite-dimensional input and output spaces when the input map is given by an extension operator and the output map is given by a restriction operator. Here, the state operator A is a coercive elliptic operator acting in the state space , where is a bounded domain in , and the control input acts on a subdomain . In contrast, the output corresponds to the solution of the differential equation evaluated on a subset . Lemma 3 in Section 7 shows that in the case of infinite-dimensional input and output spaces, the only way one can achieve error zeroing is in the colocated case, i.e., .
In Section 8, we present numerical simulations covering a broad range of scenarios with both finite- and infinite-dimensional input/output spaces. Example 1 considers a two-input/two-output system, while Example 2 addresses an under-determined case where error zeroing fails. The remainder of our numerical examples are concerned with the most challenging case of infinite-dimensional input and output spaces. We then study a one-dimensional parabolic system with non-colocated input/output spaces (Example 3) and its colocated counterpart (Example 4). Finally, parabolic problems are examined in a two-dimensional spatial domain: a non-colocated case (Example 5) and a colocated case (Example 6).
As discussed above, many lengthy technical calculations have been moved to Appendix A, Appendix B, Appendix C, Appendix D and Appendix E so as not to detract from the main points of the paper.
2. Statement of Problem
We consider a control system with state operator A assumed to be sectorial with compact resolvent that generates an exponentially stable semigroup on the Hilbert space with inner product and norm . is the bounded input operator, and is the bounded output operator, where and are the control input and output Hilbert spaces, respectively. Also, is the bounded disturbance input operator, and is the disturbance input space.
In this work, we emphasize that only bounded input, disturbance, and output operators are considered. However, after many numerical simulations, we have observed that the methodology presented here can be extended to cases involving unbounded inputs, disturbances, and outputs, such as boundary control and sensing. The analysis with such operators would require additional effort and will be addressed in future work.
Consider the control system in the Hilbert state space
Problem 1.
Given a reference signal and disturbance find a control u to minimize the limsup of the error, defined by . Namely, we want to minimize
Here, denotes the space of infinitely differentiable X-valued functions with all derivatives bounded on . More generally we will also consider , the Banach space of smooth N-times continuously differentiable X-valued functions with all derivatives bounded on , and the notation means
It is desirable to choose the control so that the error vanishes asymptotically, which is often unachievable in classical optimal control.
We aim for a general setting, allowing and to be either finite- or infinite-dimensional input and output spaces. In the case of infinite-dimensional input and output spaces, the problem becomes classically ill-posed, as the transfer function is not boundedly invertible. In particular, G is a compact operator due to our assumptions on A, , and C; hence, it is non-invertible. Even in the finite-dimensional case, systems may be over- or under-determined, leading to additional challenges.
The Moore–Penrose pseudoinverse provides an ideal approximate control when available. However, it may not exist if the transfer function has a non-trivial null space or lacks a closed range. Such problems are generally challenging to solve, and solving for the control often leads to ill-posed inverse problems.
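This ill-posedness is easy to see in a discretized setting. The sketch below (a hypothetical Gaussian smoothing kernel, not an operator taken from the paper) shows the hallmark of a compact operator: rapidly decaying singular values, so that any direct inversion amplifies noise without bound.

```python
import numpy as np

# A discretized Gaussian smoothing kernel on [0,1]: a hypothetical
# stand-in for a compact operator such as the transfer function G.
n = 100
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
K = h * np.exp(-(x[:, None] - x[None, :])**2 / (2 * 0.1**2))

# Singular values decay rapidly, so the (formal) inverse is effectively
# unbounded and direct inversion amplifies noise: an ill-posed problem.
s = np.linalg.svd(K, compute_uv=False)
print(s[0], s[-1])
```

The smallest singular values sit at the level of machine precision relative to the largest, which is the discrete signature of a non-closed range.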
Therefore, to regularize the ill-posed problems, we employ Tikhonov regularization with a small penalty . The main theoretical problem we aim to address is the following.
Problem 2
(Control Objective). For the system (1)–(3), with given reference signals and disturbance , our objective is to choose a control u to minimize the limsup of the norm of the tracking error, .
Consider , and define
For given, , r and d, our objective is to find u to minimize
where is the Constraint Set
A direct approach to Control Problem 2 results again in an ill-posed dynamical system. We introduce a second stabilization parameter to regularize the system, producing an approximate control . It is important to note that differs from u and may deviate significantly, potentially leading to an error larger than the desired e. This observation motivates the development of a cascade of control problems, where the Control Objective at each iteration is to reduce the error generated in the previous iteration, eventually converging to e.
The rationale and details behind the introduction of the regularization parameters and are thoroughly explained in Appendix A. In the next section, we provide key definitions and a concise summary of the cascade controller algorithm described in Appendix A.
In the definition of the cost functional in Problem 2, the parameter has been taken as a scalar for simplicity. In a more general setting, can be replaced by a symmetric positive definite weighting matrix , thereby assigning distinct relative weights to the different control components or introducing cross-coupling between them. Analogously, a weighting matrix can be introduced in the error norm to emphasize or de-emphasize specific components of the tracking error. In the infinite-dimensional framework, these ideas naturally extend to non-uniform weight functions or tensor-valued operators acting on the input and output spaces, allowing spatially varying or direction-dependent penalization. We emphasize that such matrix weights and non-uniform weighting functions can be seamlessly incorporated into the definition of the underlying norms, rendering the present formulation immediately extensible to these more general cases. In the remainder of the paper and in all numerical examples, we restrict attention to identity and uniform weights; however, this choice entails no loss of generality, and all theoretical results remain valid for the weighted case.
3. Approximate Cascade Controller
We begin by setting some notation. Denote by G the transfer function for A, , C evaluated at 0. Namely, is given by
Recall that if and are infinite-dimensional, then even if G is injective, its inverse is unbounded; thus, the Moore–Penrose pseudoinverse cannot be directly computed. Indeed, since is a compact operator, the inverse is unbounded unless the input and output spaces are finite-dimensional. In what follows, the * notation refers to the Hilbert space adjoint operator. Definitions for the bounded case are given in [24] (Chapter 2, page 31) and for the unbounded case in [24] (Chapter 10, page 305); see also T. Kato [25].
For , define
which forms the basis for applying Tikhonov regularization, as discussed in detail in [26,27,28,29,30]. Then, for , define
Next, we simplify the notation by defining
A detailed discussion on the appropriate choice of will be provided later, ensuring that the operator , as defined in (12), generates an exponentially stable analytic semigroup.
With the above notations, we can introduce the first step in the cascade controller given by the dynamical system
which produces the first approximate control
We then obtain the first approximate error
The derivation of the controller system in (14) and (15) is presented in Appendix A. We note that the controller (14) and (15) is a feedback (dynamic) controller that depends only on , , and .
In general, the control will not be accurate enough to solve our tracking problem due to the effect of the regularization parameter. Therefore, we propose a sequence, or cascade, of controllers, where each stage improves the approximate control and reduces the error. Each stage solves a new approximate tracking problem, using the previous step’s error as the reference signal.
Namely, for a fixed and , define new variables , , and the jth cascade controller as follows.
which produces a new control law given by
and this, in turn, will produce a new cascade error given by
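A static finite-dimensional analogue of this cascade can be sketched in a few lines: each stage applies the same Tikhonov-regularized solve to the error left by the previous stage, and the controls are accumulated. The matrix G, the reference r, and the regularization value below are hypothetical stand-ins for the paper's operators.

```python
import numpy as np

# Static analogue of the cascade (hypothetical G, r, beta): each stage
# applies the same Tikhonov-regularized solve to the error left over by
# the previous stage, and the controls are accumulated.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))         # invertible with probability 1
r = rng.standard_normal(4)
beta = 0.5                              # deliberately coarse regularization

R_beta = np.linalg.solve(G.T @ G + beta * np.eye(4), G.T)  # Tikhonov "inverse"

e = r.copy()
u_total = np.zeros(4)
norms = []
for j in range(6):
    u_j = R_beta @ e        # stage-j control computed from the current error
    u_total += u_j          # cumulative cascade control
    e = e - G @ u_j         # error handed to the next stage
    norms.append(np.linalg.norm(e))
print(norms)
```

Each stage multiplies the error by a symmetric matrix whose eigenvalues lie strictly inside (0, 1) when G is injective, which is the finite-dimensional shadow of the geometric error decay established later for the full cascade.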
Furthermore, all the coefficients in the controllers (14) and (15), as well as (17) and (18), can be “precomputed” in terms of , , , C, , and .
Since the initial conditions in (2) and (14) coincide and the semigroup generated by A is assumed to be exponentially stable, linearity implies that if the control is used in plant (1), then the state, z, and error satisfy
A key advantage of cascade controllers is that they enable a detailed analysis of the errors , leading to explicit formulas for the errors. For finite-dimensional input and output spaces, these formulas yield concise error estimates, as presented in Theorem 1.
Remark 1.
The term cascade here does not refer to the classical two-loop cascade control architecture used in process or networked control systems [31,32]. Instead, it denotes an iterative correction scheme in which each stage of the algorithm uses the tracking error from the previous stage as input, progressively refining the control and reducing the error within each cascade step.
4. A Practical Solution Algorithm for the Controllers
This section presents a simple algorithm for solving the dynamical systems (14) and (17). The primary difficulty lies in handling the expression . One common approach is to use the Singular Value Decomposition (SVD) of the bounded linear operator . The SVD is useful for theoretical results. However, explicitly determining the singular values and corresponding eigenfunctions in the infinite-dimensional case is rarely feasible. Therefore, developing a numerical algorithm that can be implemented using standard off-the-shelf finite element software is advantageous.
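The SVD viewpoint can be made concrete in finite dimensions: in the SVD basis, Tikhonov regularization replaces each reciprocal singular value by the filter factor sigma/(sigma^2 + beta), yet the identical solution is obtained from the normal equations without ever forming the SVD, which is why off-the-shelf solvers suffice. The matrix and data below are hypothetical.

```python
import numpy as np

# SVD (filter-factor) view of Tikhonov regularization versus a direct
# normal-equations solve.  The 5x3 matrix and data are hypothetical.
rng = np.random.default_rng(1)
G = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
beta = 1e-2

# In the SVD basis, Tikhonov replaces 1/sigma_i by sigma_i/(sigma_i^2 + beta).
U, s, Vt = np.linalg.svd(G, full_matrices=False)
u_svd = Vt.T @ ((s / (s**2 + beta)) * (U.T @ y))

# The same solution from the normal equations -- no SVD required.
u_direct = np.linalg.solve(G.T @ G + beta * np.eye(3), G.T @ y)
print(np.linalg.norm(u_svd - u_direct))
```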
Appendix B presents a procedure that leads to the following algorithm for solving (14) and obtaining .
where the approximate control is given by
Similarly, for the systems (17) with control (18), we have
where the approximate cascade controls are given by
Finally, the nth step control to be used in the plant is given by
At each cascade level, we solve one parabolic PDE for together with the constraint equations for the coupled fields . Although and lack explicit time derivatives, they are algebraically coupled to and are computed simultaneously via standard time marching from to . We implemented this on several platforms, including COMSOL Multiphysics 6.1 and in-house FEM software (MATLAB R2025b and FEMuS, available on GitHub, https://github.com/eaulisa/MyFEMuS, accessed on 11 November 2025), all showing similar behavior; all reported results in Section 8 use COMSOL, and the implementation is straightforward. A detailed scheme is given in Figure 1. Unlike classical optimal control, which requires a backward-in-time parabolic adjoint equation over [17,18], our scheme advances only forward in time, simplifying computation.
Figure 1.
Block diagram of the solution algorithm for the cascade controller in the time interval . The outer loop advances in time, while the inner loop iterates over the cascade index j.
5. Explicit Formulas for the Errors
The analysis of the error is carried out in Appendix D by applying the variation of parameters formula to the dynamical system (14), followed by integration by parts.
Define the operator
where is defined in (9). Denote by L the orthogonal projection of the output space onto the null space of , . Then, is the orthogonal projection onto the and . Using this orthogonal decomposition, it follows that can be written as
Notice that
The key properties of the operator are contained in the following lemma.
Lemma 1.
Recall our formula for from (31). For any (whether is finite or infinite-dimensional), we have , and
In the case where the number of inputs and outputs is finite, we can say more. Namely, there exists such that
so that converges to zero in the operator norm as .
The proof of Lemma 1 is provided in Appendix C.
Let us define
and
Note that the operators and do not commute. Nevertheless, employing the formulas presented in Appendix D, we can write
with
Here and in what follows, denotes a generic exponentially decaying function that depends on the initial data and other known quantities. At each iteration, may change, but for simplicity, we do not track these changes and simply redefine accordingly.
From (35), we note that the operator in (30) plays a central role in the error , as well as in the errors for . Appendix D, specifically Lemmas A2 and A3, contains the derivation of the following expressions for . For all , we have
with
Since (cf. Remark A1 in Appendix D), it is possible to rewrite the generic Formulas (38) and (39) using the following relation:
6. Error Estimates for Finite-Dimensional Input and Output Spaces
For Problem 1, Tikhonov regularization, as described in Appendix A, is not the only possible optimization method. An alternative approach could involve a least-squares minimal-norm solution using the Moore–Penrose pseudoinverse. However, Tikhonov regularization appears to be the better choice for control systems governed by partial differential equations with infinite-dimensional input and output spaces and . For finite-dimensional and , which are common in many applications, it is useful to assess the impact on the constructions of Section 3, since G then yields a simpler formula for , as defined in (9).
From (9), assuming is invertible, as , we have
where denotes the Moore–Penrose pseudoinverse. According to Lemma 1, as , so , defined in (31), converges to L.
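In finite dimensions this limit is easy to observe numerically: the Tikhonov solution converges to the Moore–Penrose least-squares solution as the regularization parameter tends to zero, even for a rank-deficient matrix. The 3x3 example below is hypothetical.

```python
import numpy as np

# As beta -> 0 the Tikhonov solution converges to the Moore-Penrose
# least-squares solution, even for a rank-deficient matrix.  This 3x3
# example is hypothetical (third column = first column + second column).
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
y = np.array([1.0, 2.0, 0.0])

u_pinv = np.linalg.pinv(G) @ y
errs = []
for beta in (1e-2, 1e-4, 1e-6):
    u_beta = np.linalg.solve(G.T @ G + beta * np.eye(3), G.T @ y)
    errs.append(np.linalg.norm(u_beta - u_pinv))
print(errs)   # the gaps shrink roughly in proportion to beta
```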
Our main goal in this section is to estimate . To this end, assuming that we have passed to zero, let us define
with defined in (38) and defined in (39). In Appendix E, Equation (A68), we show that
For , define
and then the error can be written explicitly as
Remark 2.
In the case of finite-dimensional input and output spaces, the projection operator L takes the form
and we have
Remark 3.
One important special case occurs when and G is invertible. In this case
This case has been treated in detail in the earlier works [1,4,5,6] and the references therein. In particular, the steps required to compute the limsup of the norms of the iterated errors described below are not repeated here; rather, we refer to these works for the specific details. A critical observation in the case that G is invertible is that
As a result, in this case, it can be shown that the controls obtained from the iterative process are always error zeroing.
Considering the more general analysis of the errors in the case of finite-dimensional input and output spaces, we now estimate
For both and , estimates of these expressions require estimating the limsup of iterated convolutions. This procedure has already been addressed, and the interested reader can consult [1,4,5,6,23]. Here, we present only the essential results.
Definition 1.
For , a fixed interval, the space , consisting of real-valued functions that are N-times continuously differentiable on I and whose derivatives up to order N are bounded, is a Banach space with the norm
In the vector-valued case, let so that . We define its norm by
For any interval , the reference and disturbance signals are smooth, bounded, vector-valued functions in and , respectively. We also note that for any
Remark 4.
In establishing our estimates for the limsup of the errors, we are particularly interested in the supremum norm taken over time intervals of the form for a large . This allows us to consider reference and disturbance signals that may have large excursions for some time but eventually settle into a more stable oscillatory behavior. To simplify the notation in this case, we write
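As a rough numerical stand-in (not part of the paper's formal development), the large-time supremum norm can be approximated by sampling the signal on a long window [T, T+W] covering several periods. The signal below is hypothetical: a decaying transient plus a persistent oscillation of amplitude 0.5.

```python
import numpy as np

# Numerical stand-in for the large-time supremum norm: sample the signal
# on [T, T+W] with W spanning several periods of the persistent part.
def tail_sup_norm(e, T, W=50.0, n=200_000):
    t = np.linspace(T, T + W, n)
    return np.max(np.abs(e(t)))

# Hypothetical signal: transient of size exp(-t) plus oscillation of 0.5.
e = lambda t: np.exp(-t) + 0.5 * np.sin(t)
print(tail_sup_norm(e, T=20.0))   # approaches 0.5 once the transient dies
```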
To describe the error estimates in the special case considered in this section, we also need the form of various operators as . From (12), we have
and from (13), we have
This leads to the definitions
Using these formulas, we obtain the form of and for given in (48), in the following theorem.
Theorem 1.
For any and any reference and disturbance signals, , define
Then, we have
Proof.
Using the definition of in (42), we have
The proof that
involves a lengthy series of calculations involving estimates of repeated convolution integrals. The details of this analysis can be found in [5]. □
A frequently encountered case in the analysis of tracking and disturbance rejection of sinusoidal signals leads directly to the following corollary.
Corollary 1.
For reference signal and disturbance , assume that there exist constants , such that
for all . Then
and for sufficiently large N, we have
Therefore, from (43), we have
where the infinite sum converges due to the geometric convergence of . Furthermore, in the special case that , so that , we see that the problem is “error zeroing”, i.e.,
In particular, we have from (52)
7. Infinite-Dimensional Colocated Input and Output Spaces
A reasonably general situation with infinite-dimensional input and output spaces arises in the control of partial differential equations defined on a bounded domain , with state space . In this case, the operator A is often a coercive elliptic operator with smooth coefficients. We assume that the control acts through a subdomain , while the measured output is taken on another subdomain . The control operator maps the input space to the state space and the measured output operator maps to the output space . In our examples involving infinite-dimensional input and output spaces, we will use the following operators to facilitate the definition of the input and output maps.
Definition 2.
Let . Then, we define
- 1.
- Restriction Operator is defined for by . Or, more simply, .
- 2.
- Extension Operator is defined for by
Using these definitions, for , we can write
Similarly, for , define
In addition, we define for by
Then, the operators , , and C are bounded operators from their respective domains to their range spaces. Similarly, for the adjoint operators, we have
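The extension/restriction pair and their adjoint relation are easy to verify in a discrete setting: extension by zero and restriction to the subdomain are adjoint with respect to the grid inner products. The grid size and the subinterval below are hypothetical choices.

```python
import numpy as np

# Discrete sketch of the extension/restriction pair on a uniform grid over
# Omega = [0,1], with omega a subinterval.  Extension by zero and
# restriction are adjoint: <E f, g>_Omega = <f, R g>_omega.
n = 100
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
mask = (x > 0.3) & (x < 0.6)            # hypothetical omega = (0.3, 0.6)

def extend(f_omega):
    """E: pad by zero outside omega."""
    g = np.zeros(n)
    g[mask] = f_omega
    return g

def restrict(g_full):
    """R: keep only the values on omega."""
    return g_full[mask]

rng = np.random.default_rng(2)
f = rng.standard_normal(mask.sum())
g = rng.standard_normal(n)
lhs = h * np.dot(extend(f), g)          # <E f, g>_Omega
rhs = h * np.dot(f, restrict(g))        # <f, R g>_omega
print(abs(lhs - rhs))                   # adjointness: agreement to rounding
```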
In this case, we can easily express a formula for the operators G and as
In general, the sub-regions and may overlap, but if we assume that the interior of is not empty, then is not injective, i.e., . In this case, the operator L, i.e., the orthogonal projection of onto the null space of , , in (31) is not zero. Therefore, as we have seen in the previous section, the minimization does not yield an error-zeroing solution but the least-squares solution restricted to .
In the following, we analyze the colocated case when .
Assumption 1
(Colocated Input–Output Case). The operator A is a coercive, invertible, and elliptic differential operator with smooth coefficients satisfying homogeneous boundary conditions (generally, any combination of Dirichlet, Neumann, or Robin) and
where denotes the interior of Ω. In particular, this implies that the boundary of lies strictly within the interior of Ω.
The following well-known results can be found in most functional analysis books. For example, see Theorem 5.13, page 234 in T. Kato, [25].
Lemma 2.
Let be a bounded linear operator with X and Y Hilbert spaces. Then, we have
Lemma 3.
Let Assumption 1 hold. Assume that is the extension operator from to and C is the restriction operator from to . Then, we have
This implies that and the projection operator .
Proof.
In order to show , let be given and choose . Let denote smooth compactly supported functions in . Appealing to the density of in , we can find a so that
Then, under our assumption on A, we know that all are contained in the domain of A. Therefore, we see that and
We note that in the case of infinite-dimensional and and in the colocated case with our particular and C, we have that is a restriction operator and is an extension operator.
Clearly (since A is a local operator) and . Set
and then we have
since acts as the identity on and vanishes outside.
Finally, we note from (64) that
and also (63) holds. Hence, we have shown that is dense in . Thus, is injective.
Now, we turn to the other part of the lemma, showing that . From part (II) of Lemma 2, we have . Thus, our goal is to show that is dense in ; then, so that . But under our assumption that , we can repeat the same arguments used above for to show that is dense in . And therefore, G is injective. □
Remark 5.
Even if G is injective, the range of G, , is not closed. In particular, the operators and are unbounded. So, the problem of solving for the control u is ill-posed, requiring some form of regularization.
We also note that if , the first part of Lemma 3 still holds, namely and the projection operator , while the second part of the lemma no longer holds, namely .
8. Numerical Simulations
We begin with two examples with finite-dimensional input and output spaces. For our simulations, we consider a convective diffusion operator in one space dimension with
with homogeneous Dirichlet boundary conditions at the domain’s endpoints . We study the control problem in the state space in which we write the domain of the state operator for our plant as
The operator A is not self-adjoint; rather, we have
with .
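A discrete sketch of such a convective diffusion operator may be helpful. The finite-difference matrix below uses hypothetical coefficients (not the paper's values); the discrete analogue of exponential stability of the semigroup is that all eigenvalues have strictly negative real part.

```python
import numpy as np

# Centered finite differences for A u = nu * u_xx - c * u_x on (0, length)
# with homogeneous Dirichlet conditions.  nu, c, length are hypothetical.
nu, c, length, n = 0.1, 1.0, 1.0, 99
h = length / (n + 1)

main = -2.0 * nu / h**2 * np.ones(n)
upper = (nu / h**2 - c / (2 * h)) * np.ones(n - 1)   # coefficient of u_{i+1}
lower = (nu / h**2 + c / (2 * h)) * np.ones(n - 1)   # coefficient of u_{i-1}
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

# Like the continuous operator, A is not symmetric (convection term), but
# its spectrum stays in the open left half-plane: discrete exponential
# stability of the generated semigroup.
eigs = np.linalg.eigvals(A)
print(eigs.real.max())
```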
Example 1
(, ). In our first simulation, the control u enters through the regions for , and a disturbance d enters through the region . We also assume that there are two scalar measured outputs
and that scalar time-dependent reference signals are provided for the regions . Recall that our objective is to minimize the limsup of the -norm of the error given by
Thus, we seek to minimize
In Figure 2, we describe the domain geometry with the two scalar inputs and the two scalar outputs.
Figure 2.
, , .
Thus, we have so that
and
We note that in this numerical simulation, with our choices of and , the matrix G in (68) is invertible, , and the projection .
In the system (65), we have set , and set the length of the domain to , so . The sub-regions , , , , and are given by
We have chosen asymptotic sinusoidal reference and disturbance signals with a smooth transition at time zero. Namely, we have set
where is a smooth, piecewise-defined unit step function centered at . In the transition region , F is a ninth-degree polynomial satisfying at the endpoints and the following: , , and , . For , we set and for , we set . The function F is introduced to suppress transient components that arise in the cascade iteration. These transient terms decay rapidly; F is included only to smooth the results in a neighborhood of without affecting the asymptotic behavior.
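A plausible reconstruction of F (an assumption on our part: value and first four derivatives matched at both endpoints, which uses exactly the ten coefficients of a ninth-degree polynomial) can be built by solving a small linear system:

```python
import numpy as np
from math import factorial

# Hypothetical reconstruction of the smoothing step F: a ninth-degree
# polynomial rising from 0 to 1 on [t0, t1], with F', ..., F'''' zero at
# both endpoints (ten conditions for ten coefficients).
t0, t1 = -1.0, 1.0            # hypothetical transition interval

rows, rhs = [], []
for t, val in ((t0, 0.0), (t1, 1.0)):
    for k in range(5):        # conditions on F^(k)(t), k = 0..4
        row = np.zeros(10)
        for j in range(k, 10):
            row[j] = factorial(j) // factorial(j - k) * t**(j - k)
        rows.append(row)
        rhs.append(val if k == 0 else 0.0)
coef = np.linalg.solve(np.array(rows), np.array(rhs))
F = np.polynomial.Polynomial(coef)

print(F(t0), F(t1), F(0.0))   # 0, 1, and 1/2 at the midpoint by symmetry
```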
We solved three extra iterations for the cascade algorithm, producing errors via approximate controls for . Therefore, we obtain the cumulative control, given by . The control was then applied in the plant
where and are defined in (67). In our numerical simulation, we have set , and computed three extra iterations.
In Figure 3 and Figure 4, we have plotted the resulting numerical values obtained for and and notice that the curves are essentially indistinguishable after a very short time.
Figure 3.
(blue), (red).
Figure 4.
(blue), (red).
To draw attention to the rapid decay of the transient terms due to the nonzero initial conditions, in Figure 5 and Figure 6, we have plotted and for small t.
Figure 5.
(blue), (red), for small time.
Figure 6.
(blue) and (red) for small time.
Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 appear to demonstrate the rapid (even geometric) decrease of the limsup of the errors to zero. In Table 1, we provide even stronger evidence by giving the ratios
Figure 7.
(blue), (red).
Figure 8.
(blue), (red).
Figure 9.
(blue), (red).
Figure 10.
(blue), (red).
Figure 11.
(blue), (red).
Figure 12.
(blue), (red).
Table 1.
Comparison for and .
In Figure 13 and Figure 14, we draw attention to the errors from the cascade controller and the plant for small values of t. Namely, due to the difference in initial conditions for the controller and plant, we have a transient deviation in the first and second components of the plant error, and , and the first and second components of the third cascade error, and . Notice that these corresponding errors rapidly converge to each other.
Figure 13.
(blue), (red).
Figure 14.
(blue), (red).
Indeed, for this example, the transfer function G is invertible, and neither Tikhonov regularization nor the Moore–Penrose pseudoinverse is required to obtain an error-zeroing cascade control.
In order to demonstrate the convergence of the control in Equation (24), and of the subsequent cascade controls in Equation (29), with the from Equation (28), we consider two values of the regularization parameter . Each , for , consists of two components, denoted by and . In Figure 15, corresponding to , is shown in blue, in green, in red, and in magenta. Similarly, in Figure 16, is blue, is green, is red, and is magenta. The same color convention is used in Figure 17 and Figure 18, which correspond to . Note that for , the cascade controls converge toward the optimal ones as n increases, whereas for the smaller value, , the cascade controls lie essentially on top of each other.
Figure 15.
, with , and .
Figure 16.
, with , and .
Figure 17.
, with , and .
Figure 18.
, with , and .
Example 2
(, ). Our second simulation considers an under-determined system with one scalar input and two scalar outputs. We use the same rod as in the previous example, but now consider (see Figure 2). In this case, we have so that
Remark 6.
Unlike the previous example, here G is a non-square matrix and hence not invertible; however, as we will see below, is invertible. Therefore, there is a well-defined Moore–Penrose pseudoinverse given by
so it would not be necessary to use Tikhonov regularization. Nevertheless, we provide the details of the development made in Section 3 and Section 4.
In this case, we have
and
Recall that
where . In this case, spectral analysis for the matrix is straightforward. The eigenvalues are 0 and , and the corresponding orthonormal eigenvectors are
satisfying
Here, L is the orthogonal projection onto the null space of , and goes to zero as goes to zero. Finally, the operator is
In this example, we have chosen the same reference and disturbance signals considered in (69) in Example 1. We note that in this case, while our controls are in some sense optimal, the tracking errors do not converge to zero. The point is that a single control input does its best to track two different reference signals simultaneously. A key contribution of this work is that we can characterize exactly what the errors converge to as time goes to infinity.
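The objects appearing in Remark 6 are easy to check numerically. The sketch below uses a hypothetical 2×1 matrix G (the example's actual entries are not reproduced here) to illustrate the pseudoinverse formula, the spectrum of the rank-one operator GG*, the projection L onto the orthogonal complement of the range of G, and the convergence of the Tikhonov-regularized inverse to the pseudoinverse.

```python
import numpy as np

# Illustrative 2x1 matrix G standing in for the transfer function at zero
# (hypothetical entries; the example's actual values are not reproduced here).
G = np.array([[1.0], [2.0]])

# Moore-Penrose pseudoinverse: since G*G is a nonzero scalar, G+ = (G*G)^{-1} G*.
G_pinv = np.linalg.inv(G.T @ G) @ G.T

# Spectrum of the 2x2 operator G G*: one zero eigenvalue and one equal to |G|^2.
eigvals = np.linalg.eigvalsh(G @ G.T)

# L: orthogonal projection onto the null space of G*
# (the orthogonal complement of the range of G).
L = np.eye(2) - G @ G_pinv

# Tikhonov-regularized inverse: (G*G + beta I)^{-1} G*.
def B(beta):
    return np.linalg.inv(G.T @ G + beta * np.eye(1)) @ G.T
```

For instance, `np.linalg.norm(B(beta) - G_pinv)` vanishes as `beta` goes to zero, consistent with the remark that Tikhonov regularization is optional in this example.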
In Figure 19, we have set and and plotted for . Similarly, Figure 20 presents the corresponding results for for . We observe a slight variation in these curves due to the value of . To further investigate this effect, we decrease to , obtaining the plots shown in Figure 21 and Figure 22. The curves are indistinguishable in this case, indicating that the earlier discrepancies resulted from the regularization parameter .
Figure 19.
, .
Figure 20.
, .
Figure 21.
, .
Figure 22.
, .
Figure 23 and Figure 24 contain plots of the components of the closed-loop system error
where z is the state variable obtained from the plant, (70)–(72), using the cascade control and the error formula obtained by sending to zero as described in (42) and (43). Namely, we have
We note that the term decays to zero exponentially with time, and the term is very small compared with the term . In particular, we observe that the main contribution to the error comes from the projection onto the . Note that the two curves are on top of each other, confirming the accuracy of our explicit formula for the error.
Figure 23.
and , .
Figure 24.
and , .
Example 3
(Non-Colocated Infinite-Dimensional Input and Output Spaces). In this example, we consider a control problem in one spatial dimension. We impose Dirichlet boundary conditions at the endpoints of the domain . As in the previous examples, we write the state operator A for our plant as in (65).
Figure 25.
One-dimensional rod, non-colocated.
Here, we have noted that the restriction operator can be explicitly expressed in terms of the characteristic function so we can write
Therefore, in this example, the input operator is an extension operator, and the output operator is a restriction operator. We consider a tracking problem with reference and disturbance signals
where F is a smooth function that, together with its first four derivatives, is zero at . The function F is introduced to suppress transient terms that arise in the cascade iteration. These transient terms decay exponentially in time.
In this case, the resulting controls obtained at each cascade step are functions of both x and t. In Figure 26, Figure 27, Figure 28 and Figure 29, we have plotted the norms of the errors for and for . Here, we have used blue to depict , green for , and red for .
Figure 26.
, .
Figure 27.
, .
Figure 28.
, .
Figure 29.
, .
In the above figures, we used the following color code to indicate the curves : blue, green, and red.
The main point here is that for , the errors converge very rapidly to a limiting function, which is that given by the sum of the formulas given in (38) for and (39) for . In particular, we note that as becomes small, the contributions from become negligible very quickly, and the main contribution is from the projection L onto the null space of .
Once again, after two iterations of the cascade algorithm, we obtain the cumulative control, given by . The control was then applied in the plant
producing the error with norm
In Figure 30 and Figure 31, we have plotted the norm of the plant system error (in red) and, just as in the previous example, the first approximate error (in blue) given by the formulas in (38) and (39). Notice that in the case of infinite-dimensional input and output spaces, we cannot use the formulas described in (42) and (43), since becomes unbounded in the operator norm as goes to zero; it does, however, go to zero in the strong operator topology.
Figure 30.
, , .
Figure 31.
Same for small t.
In Figure 32 and Figure 33, we have plotted (in blue) for obtained by solving our cascade control algorithm along with the norm of the error, (in red), obtained by computing an optimal control u using the variational approach formulation in (82)–(84) with . We notice that the two curves are indistinguishable.
Figure 32.
and with .
Figure 33.
Same for small t.
Here, the “full control” u is obtained by minimizing a cost functional of the form
with appropriate boundary conditions on z and . This analysis leads to the following system:
The desired control law is
Note that Equation (83) is an inverse parabolic equation with data prescribed at the final time T and must therefore be integrated backward in time (from T to 0). Consequently, the optimality system (82)–(84) must be solved all at once on the full interval , effectively coupling time as an additional dimension of the discrete problem and making the control solve substantially more demanding.
COMSOL, however, is engineered for classical time marching, not for globally coupled space–time solves. To approximate (82)–(84), we adopted a workaround, embedding the PDE in a cylindrical domain with an auxiliary coordinate that mimics time. While feasible, this reformulation is markedly more expensive (many time levels are coupled simultaneously), scales poorly with problem size, and is not practical in 3D. For these reasons, a head-to-head runtime comparison would be misleading: any timings would reflect COMSOL’s solver limitations for all-at-once optimal control rather than the intrinsic efficiency of the methods. Specialized software [33], by contrast, implements forward–backward (sweep) iterations until convergence [17,18], yielding faster solves of (82)–(84) and making such approaches competitive. However, this lies outside the scope of the present work, so we omit any runtime comparisons between the two methods.
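To make the forward–backward sweep mentioned above concrete, the following is a minimal sketch for a scalar LQ tracking problem; the dynamics, reference signal, and all numerical values are illustrative stand-ins, not taken from the paper's optimality system (82)–(84).

```python
import numpy as np

# Minimal forward-backward sweep (cf. [17]) for a scalar LQ tracking problem:
#   minimize J(u) = int_0^T (z - r)^2 + alpha*u^2 dt,  z' = a*z + u,  z(0) = z0.
# All parameter values here are illustrative stand-ins.
a, alpha, z0, T, N = -2.0, 1.0, 0.0, 5.0, 500
t = np.linspace(0.0, T, N + 1)
dt = t[1] - t[0]
r = np.sin(t)                          # hypothetical reference signal

def forward(u):
    """Integrate the state equation z' = a*z + u with explicit Euler."""
    z = np.zeros(N + 1)
    z[0] = z0
    for k in range(N):
        z[k + 1] = z[k] + dt * (a * z[k] + u[k])
    return z

def cost(u):
    z = forward(u)
    return dt * np.sum((z - r) ** 2 + alpha * u ** 2)

u = np.zeros(N + 1)
for sweep in range(200):
    z = forward(u)
    # Backward pass for the adjoint: p' = -2(z - r) - a*p, with p(T) = 0,
    # integrated from T down to 0.
    p = np.zeros(N + 1)
    for k in range(N, 0, -1):
        p[k - 1] = p[k] + dt * (2.0 * (z[k] - r[k]) + a * p[k])
    # Optimality condition u = -p/(2*alpha), applied with relaxation.
    u_new = -p / (2.0 * alpha)
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5 * (u + u_new)
```

The backward pass is exactly the feature that COMSOL-style time marching cannot accommodate directly: the adjoint equation carries its data at the final time, so each sweep couples a forward state solve with a backward adjoint solve.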
Example 4
(Colocated Infinite-Dimensional Input and Output Spaces). In this example, we consider the same system operator A introduced at the beginning of this section in (65) with domain (66), where . The main difference between Example 3 and this example is that we consider a colocated case in which (see Figure 34). In this case, we expect the norms of the errors obtained from the cascade algorithm to decrease geometrically because of Lemma 3.
Figure 34.
One-dimensional rod, colocated.
In our simulation, we have chosen , , and . We have run five additional steps in our cascade using the same r and d introduced in (78). In Figure 35, Figure 36, Figure 37, Figure 38 and Figure 39, we have plotted for for , respectively. As expected, and unlike the previous example, the error norms decrease geometrically with each iteration.
Figure 35.
, .
Figure 36.
, .
Figure 37.
, .
Figure 38.
, .
Figure 39.
(blue) , (red) .
In the above figures, we used the following color code to indicate the curves : blue, red, magenta, green, and black. Then, in Figure 39 and Figure 40, we have set and plotted both and the norm of the plant system error, , where we have used the control . After a very short transient, the two curves are indistinguishable.
Figure 40.
(blue) and (red) for small t.
Figure 35, Figure 36 and Figure 37 demonstrate a geometric decrease in the limsup of the errors. In Table 2, we provide even stronger evidence by giving the ratios
Table 2.
for and .
We note that for each value of , the ratios of are significantly smaller than all the subsequent ratios, so the first step in the iterative scheme always produces an outlier in the geometric convergence. In the following table, we confirm the above statement by providing the ratios of the limsups of the norms of the errors pairwise.
Figure 41 plots the cascade control at the final time , for three values of , , and , over the domain . The curves are colored blue for , red for , and black for . As decreases, the control becomes progressively rougher, reflecting higher-frequency oscillations that enhance tracking accuracy. Conversely, for larger values of , the control smooths out and approaches an almost constant profile.
Figure 41.
for (blue), (red), and (black).
Example 5
(Non-Colocated Example in a Two-Dimensional Spatial Domain). We consider a control problem in two spatial dimensions analogous to the one-dimensional Example 3. In this example, we again impose Dirichlet boundary conditions on the boundary of a domain (see Figure 42). We can write the state operator for our plant as
Figure 42.
Two-dimensional region, non-colocated.
As in our previous examples, the operator A is not self-adjoint. Rather, we have
with
We consider a tracking problem with reference and disturbance signals
where, as in the previous example, F is a smooth function that, together with its first four derivatives, is zero in a neighborhood of . The function F is introduced to suppress transient terms that arise in the cascade iteration. These transient terms decay exponentially in time.
In this case, the cascade controls , the cumulative cascade control , and the full optimal control are functions of x, y, and t.
For comparison, in this example, we have also solved a system similar to (82)–(84) to obtain an optimal control using an inverse parabolic equation for the Lagrange multiplier . The error associated with this control is denoted by . Also, the error obtained when using the cascade control in the plant with initial condition is denoted by .
For our simulations, we have set , , and taken various values for . In this case, just as in Example 3, for small , the dynamics are dominated by the projection onto the . For that reason, in Figure 43, we have plotted the results for a large value of to demonstrate the decrease in the norm for the errors for . Then, in Figure 44, Figure 45 and Figure 46, we have fixed . In Figure 44, we have again plotted the norm for the errors for . In this case, the different curves are indistinguishable for all t. In Figure 45, we plotted the norms of the cascade error and the plant error using nonzero initial data. Finally, in Figure 46, we compare the cascade error and the full optimal control error obtained with the penalty constant . After a short transient, the two errors become indistinguishable.
Figure 43.
, .
Figure 44.
, .
Figure 45.
, , .
Figure 46.
, .
Example 6
(Colocated Example in a Two-Dimensional Spatial Domain). This example considers a control problem in two spatial dimensions analogous to the one-dimensional colocated Example 4. Namely, we assume that the equations, reference and disturbance signals, and basic geometry are the same as in Example 5, except that now and are colocated (see Figure 47). In this case, we expect the cascade controls to produce errors that go to zero geometrically with increasing cascade iterations. Once again, we impose Dirichlet boundary conditions on the boundary of the domain .
Figure 47.
Two-dimensional region, colocated.
In our numerical simulations, we have computed the cascade errors for for various values of . In Figure 48, we have plotted for for , and in Figure 49, we have plotted these same results for . Notice that, for fixed , the norms decrease geometrically with each cascade iteration.
Figure 48.
, .
Figure 49.
, .
In the above figures, we used the following color code to indicate the curves : blue, green, red, magenta, and black. In Figure 50 and Figure 51, we have plotted and the norm of the plant error, for both large and small times. Here, we have set the initial condition for the plant to . Notice that after a very brief time, the two curves become indistinguishable.
Figure 50.
, , .
Figure 51.
, on .
9. Conclusions
This work introduces a rigorous and general methodology for achieving asymptotic tracking and disturbance rejection in both lumped and distributed parameter systems, including those with infinite-dimensional input and output spaces. We overcome significant challenges posed by compact operators in infinite-dimensional input and output spaces by employing a cascade algorithm based on the geometric regulation theory, which is enhanced with Tikhonov regularization. Our approach extends beyond traditional optimal control methods, providing approximate solutions for various systems, including those that are over- or under-determined.
Through a detailed error analysis, we identify conditions under which the cascade process is error zeroing, and those that limit its effectiveness, such as non-colocated input–output operators and contributions from regularization. Numerical simulations validate the method’s practical applicability, demonstrating its adaptability across diverse examples with varying dimensionality and configurations. This study lays a foundation for further research into advanced control strategies for complex linear and nonlinear dynamical systems, with potential extensions to unbounded input and output operators and broader classes of nonlinear partial differential equations.
Author Contributions
Conceptualization, E.A. and D.S.G.; methodology, D.S.G.; software, E.A.; formal analysis, D.S.G.; investigation, E.A. and A.C.; writing—original draft, E.A., A.C. and D.S.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
We sincerely thank John Burns, of the Department of Mathematics at Virginia Tech, for his invaluable suggestions and insightful feedback throughout this work. His expertise and thoughtful contributions have greatly enhanced the quality of our research.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A. Motivation and Derivation of the Cascade Controller
Beginning with the equations for the digital twin of the plant,
our methodology seeks to solve (A1) and (A2) directly for , in an attempt to minimize in (A3). The idea is to first solve (A1) for z and then apply C to both sides:
Using the definition of the error yields
Defining , G can be identified with the transfer function of the system,
evaluated at . Our standing assumption that A generates an exponentially stable analytic semigroup ensures that 0 lies in the resolvent set of A. We attempt to solve for u by minimizing the error
In our earlier work [1,5,6], the input and output spaces were always finite-dimensional and of equal dimension. In that setting, G is a square matrix, which we assumed to be invertible (equivalently, 0 is not a transmission zero of the system).
In the present work, however, we address the more general case of arbitrary input and output spaces. This includes both over-determined and under-determined systems, as well as the significantly more challenging case of infinite-dimensional input and output spaces. If the input and output spaces are finite-dimensional, and G is not invertible, the minimal solution of (A4) can be obtained using the Moore–Penrose pseudoinverse. For infinite-dimensional input and output spaces, G is a compact operator and thus cannot possess a bounded inverse or pseudoinverse. Consequently, solving (A4) for u is an ill-posed problem. Our general approach is to apply Tikhonov regularization.
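The Tikhonov construction that follows can be checked numerically against an equivalent formulation: minimizing the regularized residual is the ordinary least-squares problem for a stacked system. The matrix G below is hypothetical, built with rapidly decaying singular values to mimic the compactness discussed above.

```python
import numpy as np

# Sketch: for beta > 0, the Tikhonov-regularized solution
#     u_beta = (G*G + beta I)^{-1} G* e
# is the unique minimizer of |G u - e|^2 + beta |u|^2, i.e. the ordinary
# least-squares solution of the stacked system [G; sqrt(beta) I] u = [e; 0].
rng = np.random.default_rng(0)
m, n, beta = 8, 5, 1e-3

# Hypothetical G with geometrically decaying singular values 1, 1/2, ..., 1/16.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
G = U[:, :n] @ (s[:, None] * V.T)
e = rng.standard_normal(m)

# Tikhonov solution via the regularized normal equations.
u_beta = np.linalg.solve(G.T @ G + beta * np.eye(n), G.T @ e)

# Same solution from the augmented ordinary least-squares problem.
A = np.vstack([G, np.sqrt(beta) * np.eye(n)])
b = np.concatenate([e, np.zeros(n)])
u_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The stacked form makes the role of beta transparent: it adds n artificial equations `sqrt(beta) * u = 0` that penalize large controls and keep the normal equations well conditioned even when the singular values of G decay to zero.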
For a small parameter , we define the regularized operator
This operator yields the best least-squares solution of (A4). Here, denotes the Hilbert space adjoint of G. Since G and are compact and hence bounded, is bounded, non-negative, and self-adjoint. Therefore, for , the operator is invertible with bounded inverse. Consequently,
represents the best least-squares solution of the minimization problem
To clarify the role of Tikhonov regularization, we proceed as follows. Substituting the expression for u from (A6) into (A1), we can recast the system as a dynamical system:
Rearranging, we obtain
To express this as a standard dynamical system, we require the inverse of . A direct calculation shows
where is given explicitly by
Since
we have
which becomes unbounded as and leads to numerical instabilities when is very small (as is typical in Tikhonov regularization). To address this issue, we introduce a second regularization by replacing with
When , it reduces to the original expression; when , it reduces to the identity.
Defining
we can now state the following relation.
Lemma A1.
If G has a well-defined pseudoinverse, , or if exists, we can pass to the limit in to obtain
so that
which remains well behaved even when . In particular, this allows choosing a small in Tikhonov regularization without causing numerical instabilities.
With the -regularization, the system (A7) becomes
where we have introduced a new state variable , which, in general, differs from z for . Here, the superscript 0 denotes the initial iteration in the cascade controller algorithm, which will be introduced later. Applying the inverse from (A12) and defining
we arrive at the dynamical system referred to as the regularized controller.
For fixed , the operator is sectorial and generates an exponentially stable, analytic semigroup for all sufficiently close to 1. Indeed, the term
is a bounded perturbation of A and vanishes as .
Systems of this form allow us to derive explicit formulas for the approximate errors that arise in the cascade iteration scheme presented in the paper.
The Cascade Controller
Because of the -regularization, we cannot expect the resulting error to be the least-squares error e we seek. While tuning both and can reduce the asymptotic value of e, the effect is limited. For this reason, we introduce a methodology that produces progressively more accurate controls with smaller tracking errors.
We begin with the initial step of the iterative scheme presented above
which produces the error
In the digital twin of the plant, (A1)–(A3), we now set and . Using Equations (A17)–(A19) leads to the new problem of finding and such that
and minimizing the error
These form a new set of controller equations, designed to track the reference signal . System (A21)–(A23) is formally simpler than (A17)–(A19), since there is no disturbance and the initial condition is zero; however, its solution still requires the same Tikhonov and -regularization as before. Then, proceeding exactly as for the 0th step in (A19), we find the controller
which produces the new error
Once again, we cannot expect , because of the -regularization.
Proceeding, we can repeat the above steps as many times as we like. At the step, we set
which leads to the following equations for and :
We produce the error
In practice, we stop iterating once for sufficiently large t, for a given small tolerance . We note that, in the case of finite-dimensional input and output spaces, from (A68), we have
Therefore, under suitable conditions on the growth of r and d, as in Corollary 1, we see that
We then use the control u in the plant and obtain an error for all time if the same initial data are used, or as if different initial data are used.
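The structure of the cascade described above can be illustrated with a static, finite-dimensional caricature, in which the regularized inverse is applied to the current residual at each step and the controls are accumulated; the matrix G and the target vector below are purely hypothetical, chosen only to exhibit the geometric decay of the error norms.

```python
import numpy as np

# Static caricature of the cascade iteration (illustration only): at each
# step the Tikhonov-regularized inverse B_beta = (G*G + beta I)^{-1} G* is
# applied to the current residual, and the controls are accumulated.
beta = 0.5
G = np.diag([1.0, 2.0, 3.0, 4.0])         # toy invertible G (colocated-like case)
target = np.array([1.0, -1.0, 0.5, 2.0])  # stands in for the steady error signal

B_beta = np.linalg.solve(G.T @ G + beta * np.eye(4), G.T)

u_total = np.zeros(4)
err = target.copy()
norms = []
for step in range(6):
    u_total += B_beta @ err       # cumulative control u = u^0 + u^1 + ...
    err = target - G @ u_total    # residual after applying the new control
    norms.append(np.linalg.norm(err))

# Each step multiplies the residual by I - G B_beta, whose eigenvalues are
# beta / (sigma_i^2 + beta) < 1, so the error norms decay geometrically.
ratios = [norms[k + 1] / norms[k] for k in range(5)]
```

Because G is invertible here, the cumulative control converges to the exact solution of `G u = target` even though each individual step uses only the regularized inverse; this mirrors the role of the cascade in removing the residual error left by the Tikhonov and gamma regularizations.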
Appendix B. A Practical Solution Algorithm
In this appendix, we derive the solution algorithm presented in Section 5. Specifically, we derive Formulas (21)–(23) for the solution of and by writing Equation (14) in the form
where
To compute (or ), we set
Next, let
Therefore, from (A29)
Finally, we define
A numerically more stable and simpler algorithm is obtained by defining
We can repeat the above construction for the cascade controls to provide a simple set of systems from the systems (17). Namely, we can write (17) as
The desired cascade controls are given by
so that the control for the nth step is
Appendix C. Proof of Lemma 1
Our proof of Lemma 1 relies on the spectral theory of compact, self-adjoint operators (see for example [24,25,34,35]) applied to the non-negative, compact, and self-adjoint operator . A proof could also be obtained using standard SVD arguments, as in [29,30]. According to the spectral theorem, in the Hilbert space , there exists an orthonormal family and a sequence of positive numbers, , decreasing to zero, satisfying
Note that for any and any continuous real-valued function F, we have
Proof of Lemma 1.
Taking , the operator in (A43) is given, for , by
where denotes the inner product in .
Due to the orthonormality of the family , we have
Given , let us define . Our proof proceeds by showing that for any arbitrary and , we can choose sufficiently small so that . We note that for any , we have
Let us fix an arbitrary . Notice that for all and
Then, since the are decreasing and
We now choose small enough so that for , we have
Then, considering (A45), we obtain
Finally
and
Therefore, the operator converges to zero in the operator norm. □
Appendix D. Analysis of the Errors
As discussed in Section 5, this appendix contains a detailed analysis of the errors obtained in the cascade procedure. We begin by applying the variation of parameters formula to the dynamical system for in (14) to obtain
Applying integration by parts to the two integral terms produces
where the term collects terms that decay exponentially as t goes to infinity:
In this work, many expressions decay exponentially to zero as t goes to infinity. These terms do not affect the computation of the limsup of the norm as . To simplify the analysis, we use to denote any such expression that decays exponentially to zero. Therefore, we do not distinguish between and or sums of these terms.
Now, by applying C to the expression for obtained in (A50), and subtracting both sides of the resulting formula from and recalling the definition of in (A20), we have
where K and are defined in (33).
Considering
and
which follows by direct verification from the definitions (A11) and (A13), we obtain
To evaluate , we use the identities
Considering the value of in (A52), and the definition of in (13), we evaluate the following:
where on the second-to-last step, we have used (A54) and (A55), and on the last step we have used (A8).
At this point, considering (A51), recalling the definition of in (30), in (31), in (34),
and , we have
Next, we apply the same variation of parameters to the system
which results in a formula similar to that for in (A49), as well as a formula resembling the one for in (A56). Specifically, we have
For the case , we have
More generally, for all ,
Our next goal is to derive a general formula for by separately analyzing the errors and , in terms of L, , , , , , and exponentially decaying terms .
Remark A1.
The following relations hold:
Clearly, and because maps into the closure of the range of G, , and L is the orthogonal projection onto . The third equality follows from the useful result given in (A53), i.e.,
Next, we observe that
Since maps into , then and therefore , using (A61).
Finally, since both and , then .
Notice that from (A59), it is clear that the reference and disturbance signal term errors can be studied independently. So, let us start by considering . We have
Lemma A2.
For all we have
Lemma A3.
Proof.
As in the proof of Lemma A2, we first verify (A65) for :
This matches the definition of given in (A58).
Now, assume (A65) holds for some . We then show that it holds for . Let
and consider (A64) to obtain
□
The results of Lemmas A2 and A3 are reported in the main text as Equations (38) and (39).
Appendix E. Error Estimates in the Case of Finite-Dimensional Input and Output Spaces
In this appendix, we estimate the limsup in time of the norm of the errors,
Here, and are defined in (38) and (39), respectively. We focus on the case of finite-dimensional input and output spaces. In particular, we exploit the fact that we can pass to the limit (which in turn implies ) to examine the errors defined in (40). As , we obtain
Passing to the limit as goes to zero, collecting all the terms, and simplifying, we have
Similarly, for the disturbance part of the error, we have
Substituting and using the identities and , we simplify to obtain
References
- Aulisa, E.; Gilliam, D. A Practical Guide to Geometric Regulation for Distributed Parameter Systems; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Burns, J.A.; He, X.; Hu, W. Feedback stabilization of a thermal fluid system with mixed boundary control. Comput. Math. Appl. 2016, 71, 2170–2191. [Google Scholar] [CrossRef]
- Deutscher, J.; Kerschbaum, S. Robust output regulation by state feedback control for coupled linear parabolic PIDEs. IEEE Trans. Autom. Control 2019, 65, 2207–2214. [Google Scholar] [CrossRef]
- Aulisa, E.; Gilliam, D.; Pathiranage, T. Analysis of an iterative scheme for approximate regulation for nonlinear systems. Int. J. Robust Nonlinear Control 2018, 28, 3140–3173. [Google Scholar] [CrossRef]
- Aulisa, E.; Gilliam, D.S.; Pathiranage, T.W. Analysis of the error in an iterative algorithm for asymptotic regulation of linear distributed parameter control systems. ESAIM Math. Model. Numer. Anal. 2019, 53, 1577–1606. [Google Scholar] [CrossRef]
- Aulisa, E.; Gilliam, D.S. Approximation methods for geometric regulation. arXiv 2021, arXiv:2102.06196. [Google Scholar]
- Francis, B.A.; Wonham, W.M. The internal model principle of control theory. Automatica 1976, 12, 457–465. [Google Scholar] [CrossRef]
- Francis, B.A. The linear multivariable regulator problem. SIAM J. Control Optim. 1977, 15, 486–505. [Google Scholar] [CrossRef]
- Hespanha, J.P. Linear Systems Theory, 2nd ed.; Princeton University Press: Princeton, NJ, USA, 2018. [Google Scholar]
- Azimi, A.; Koch, S.; Reichhartinger, M. Robust internal model-based control for linear-time-invariant systems. Int. J. Robust Nonlinear Control 2024, 34, 12476–12496. [Google Scholar] [CrossRef]
- Byrnes, C.I.; Laukó, I.G.; Gilliam, D.S.; Shubov, V.I. Output regulation for linear distributed parameter systems. IEEE Trans. Autom. Control 2000, 45, 2236–2252. [Google Scholar] [CrossRef]
- Anderson, B.D.; Moore, J.B. Optimal Control: Linear Quadratic Methods, Reprint ed.; Dover Publications: Mineola, NY, USA, 2007. [Google Scholar]
- Dorato, P.; Abdallah, C.; Cerone, V. Linear-Quadratic Control: An Introduction; Krieger Publishing Company: Malabar, FL, USA, 2000. [Google Scholar]
- Najafi Birgani, S.; Moaveni, B.; Khaki-Sedigh, A. Infinite horizon linear quadratic tracking problem: A discounted cost function approach. Optim. Control Appl. Methods 2018, 39, 1549–1572. [Google Scholar] [CrossRef]
- Bornemann, F.A. An adaptive multilevel approach to parabolic equations III. 2D error estimation and multilevel preconditioning. IMPACT Comput. Sci. Eng. 1992, 4, 1–45. [Google Scholar] [CrossRef]
- Tröltzsch, F. On the Lagrange–Newton–SQP method for the optimal control of semilinear parabolic equations. SIAM J. Control Optim. 1999, 38, 294–312. [Google Scholar] [CrossRef]
- McAsey, M.; Mou, L.; Han, W. Convergence of the forward-backward sweep method in optimal control. Comput. Optim. Appl. 2012, 53, 207–226. [Google Scholar] [CrossRef]
- Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications; Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 2010; Volume 112. [Google Scholar]
- Lee, Y.; Kouvaritakis, B. Constrained receding horizon predictive control for systems with disturbances. Int. J. Control 1999, 72, 1027–1032. [Google Scholar] [CrossRef]
- Camacho, E.F.; Bordons, C. Constrained model predictive control. In Model Predictive Control; Springer: Berlin/Heidelberg, Germany, 2007; pp. 177–216. [Google Scholar]
- Güttel, S.; Pearson, J.W. A rational deferred correction approach to parabolic optimal control problems. IMA J. Numer. Anal. 2018, 38, 1861–1892. [Google Scholar] [CrossRef]
- Leveque, S.; Pearson, J.W. Fast iterative solver for the optimal control of time-dependent PDEs with Crank–Nicolson discretization in time. Numer. Linear Algebra Appl. 2022, 29, e2419. [Google Scholar] [CrossRef]
- Aulisa, E.; Burns, J.A.; Gilliam, D.S. Approximate Error Feedback Controller for Tracking and Disturbance Rejection for Linear Distributed Parameter Systems. In Proceedings of the 2022 American Control Conference (ACC), Atlanta, GA, USA, 8–10 June 2022; pp. 976–981. [Google Scholar]
- Conway, J.B. A Course in Functional Analysis; Springer: Berlin/Heidelberg, Germany, 2019; Volume 96. [Google Scholar]
- Kato, T. Perturbation Theory for Linear Operators; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 132. [Google Scholar]
- Baumeister, J. Stable Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
- Kirsch, A. An Introduction to the Mathematical Theory of Inverse Problems, 3rd ed.; Applied Mathematical Sciences; Springer: Cham, Switzerland, 2021; Volume 120. [Google Scholar] [CrossRef]
- Morozov, V.A. Methods for Solving Incorrectly Posed Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
- Kress, R.; Maz’ya, V.; Kozlov, V. Linear Integral Equations; Springer: Berlin/Heidelberg, Germany, 1989; Volume 82. [Google Scholar]
- Hsiao, G.C.; Wendland, W.L. Boundary Integral Equations, 2nd ed.; Applied Mathematical Sciences; Springer: Cham, Switzerland, 2021; Volume 164. [Google Scholar] [CrossRef]
- Fallahnejad, M.; Kazemy, A.; Shafiee, M. Event-triggered H∞ stabilization of networked cascade control systems under periodic DoS attack: A switching approach. Int. J. Electr. Power Energy Syst. 2023, 153, 109278. [Google Scholar] [CrossRef]
- Du, Z.; Chen, C.; Li, C.; Yang, X.; Li, J. Fault-Tolerant H-Infinity Stabilization for Networked Cascade Control Systems with Novel Adaptive Event-Triggered Mechanism. IEEE Trans. Autom. Sci. Eng. 2025. early access. [Google Scholar] [CrossRef]
- Hecht, F.; Lance, G.; Trélat, E. PDE-Constrained Optimization Within FreeFEM; Open-Access Monograph; LJLL/Sorbonne Université: Paris, France, 2024. [Google Scholar]
- Dunford, N.; Schwartz, J.T. Linear Operators, Part II: Spectral Theory, Self Adjoint Operators in Hilbert Space; Interscience: New York, NY, USA, 1963. [Google Scholar]
- Lewin, M. Spectral Theory and Quantum Mechanics; Universitext; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).