Abstract
Most research activities that utilize linear matrix inequality (LMI) techniques are based on the assumption that the separation principle of control and observer synthesis holds. This principle states that the combination of separately designed linear state feedback controllers and linear state observers, each independently proven to be stable, results in overall stable system dynamics. However, even for linear systems, this property does not necessarily hold if polytopic parameter uncertainty and stochastic noise influence the system’s state and output equations. In this case, the control and observer design needs to be performed simultaneously to guarantee stabilization. The loss of validity of the separation principle, however, leads to nonlinear matrix inequalities instead of LMIs. For these nonlinear inequalities, the current paper proposes an iterative LMI solution procedure. If this algorithm produces a feasible solution, the resulting controller and observer gains ensure robust stability of the closed-loop control system for all possible parameter values. In addition, the proposed optimization criterion minimizes the sensitivity to stochastic noise so that the actual state trajectories converge as closely as possible to the desired operating point. The efficiency of the proposed solution approach is demonstrated by stabilizing the Zeeman catastrophe machine along the unstable branch of its bifurcation diagram. Additionally, an observer-based tracking control task is embedded into an iterative learning-type control framework.
1. Introduction
LMIs provide powerful tools for the design of guaranteed stabilizing controllers for exactly known (nominal) system models as well as for scenarios with bounded uncertainty in selected parameters. In both cases, the basic idea of using LMIs for control design (as well as for the dual task of state estimation) is the proof of asymptotic stability of the closed-loop dynamics by means of a suitable Lyapunov function candidate. Then, this stability proof can be included directly in the stabilizing control synthesis. The most important options for a control synthesis with the help of LMIs are the solution of a so-called pure feasibility problem (i.e., a controller guaranteeing asymptotic stability and, in the case of bounded parameter uncertainty, achieving input-to-state stability) or extensions that deal with optimization problems (such as and ). Furthermore, regions for the closed-loop eigenvalues can be specified so that minimum and maximum state variation rates during transient phases can be influenced systematically. Numerous references, such as [,,,,,,], provide further information concerning techniques for an LMI-based control synthesis.
For linear system models with uncertain, bounded parameters, parameter-independent Lyapunov function candidates can be used in many application scenarios. The resulting LMIs need to be specified for suitably chosen extremal system realizations of a polytopic uncertainty representation. Thereby, not only can stability be ensured, but restrictions on admissible eigenvalue domains (referred to as -stability in []) can also be guaranteed if the design task is solved successfully [,]. If the assumption of a parameter-independent Lyapunov function leads to an unacceptably high level of conservativeness, augmented LMI representations can be introduced, which are based on parameter-dependent techniques (cf. [,]). Besides pure parameter uncertainty, which reflects inevitable tolerances such as imprecisely known masses in mechanical systems, quasi-linear system models can also be handled with these techniques. In such cases, polytopic uncertainty representations are required that are composed of state-dependent system and input matrices, which are bounded by their element-wise defined worst-case realizations. In this way, LMI techniques can also be employed for the design of robust controllers for nonlinear process models [].
The statements above hold equally for the design of state observers with a Luenberger-like structure. Corresponding design criteria have been derived by exploiting the duality to control synthesis in []. However, the interconnection of independently designed controllers and state observers, where both need to be robust against bounded parameter uncertainty, is not guaranteed to be asymptotically stable. This issue is visualized in Section 2 with an academic example. As a consequence, controllers and observers must be designed simultaneously. For example, this problem is treated in [,,] for discrete-time systems. The simultaneous controller and observer design leads to nonlinear matrix inequalities that can be solved in an iterative way.
Moreover, most practical systems are subject to stochastic disturbances in the form of actuator, process, and sensor noise. These systems can be controlled by an optimal observer-based LQG technique if the system model is linear and specified by precisely known, that is, point-valued system matrices. The LQG technique combines a linear quadratic regulator design with the optimal Gaussian (i.e., Kalman filter-based) state estimation. As summarized, for example, by Skelton in [], the problem of finding control parameterizations that keep the noise-induced output and input covariances (i.e., the uncertainty of the closed-loop system outputs and actuator signals) below specific threshold values can be cast into LMIs.
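The covariance-bound idea can be illustrated with a deliberately minimal scalar sketch (this is not the paper's LMI formulation; the process, threshold, and function names below are illustrative assumptions): for a stable scalar Ito process, the stationary state variance solves a scalar Lyapunov equation and can be checked against a threshold.

```python
# Toy illustration of a noise-induced covariance bound (hedged sketch, not the
# paper's LMI machinery): for the scalar Ito process dx = a*x dt + sigma dW
# with a < 0, the stationary variance p solves the scalar Lyapunov equation
#   2*a*p + sigma**2 = 0   =>   p = sigma**2 / (-2*a).

def stationary_variance(a: float, sigma: float) -> float:
    """Stationary variance of dx = a*x dt + sigma dW (requires a < 0)."""
    if a >= 0:
        raise ValueError("the process must be stable (a < 0)")
    return sigma**2 / (-2.0 * a)

def variance_below_threshold(a: float, sigma: float, p_max: float) -> bool:
    """Covariance-bound check in the spirit of an LMI constraint p <= p_max."""
    return stationary_variance(a, sigma) <= p_max
```

For a = -2 and sigma = 1, the stationary variance is 0.25; an LMI-based design encodes such bounds as matrix inequalities over the full covariance matrix instead of this scalar shortcut.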
If the techniques for an optimal LMI-based observer parameterization of continuous-time systems published in [] are used, the domains around the system’s equilibrium states can be identified for which no contractivity, that is, stability in a stochastic sense, can be verified. For this analysis, the Itô differential operator [] was applied to a Lyapunov function candidate to compute its temporal derivative in the corresponding stochastic setting. In addition, we introduced an LMI-based numerical optimization of the observer gains in [] to minimize the domains for which the contractivity of the observer’s error dynamics cannot be proven by the technique above.
This paper proposes a novel iterative solution that (i) solves the nonlinear matrix inequalities for a combined control and observer design of linear continuous-time processes with polytopic parameter uncertainty in an iterative manner and (ii) generalizes the procedures from [] by optimizing the controller and observer gains so that the sensitivity of the closed-loop trajectories and the observer’s error dynamics to actuator, process, and sensor noise is minimized by the simultaneous control and observer design. Besides the design of observer-based full state feedback controllers, our approach is also capable of designing output feedback control laws, where the measured outputs and fed-back quantities do not necessarily have to coincide. Due to the inclusion of the state observer in the design procedure, the proposed approach is more general than the robust output feedback design that was proposed by Gershon and Shaked in [] for discrete-time systems.
Finally, the work of Do et al. in [] should be mentioned. There, a combined control and observer design was performed. However, the authors do not aim at directly minimizing the domain for which stability in a stochastic sense cannot be proven. Instead, they aim to achieve the stability of the overall closed-loop control structure by suppressing the influence of disturbances by means of a frequency response shaping approach.
This paper is structured as follows. After a short introductory example in Section 2, Section 3 reviews our results published in [] to turn them into a novel iterative LMI approach for an optimized, guaranteed stabilizing combined control and observer synthesis in Section 4. The efficiency of this approach is demonstrated in Section 5 in numerical simulations for the stabilization of the Zeeman catastrophe machine []. The stabilization is performed along the curve of input-dependent unstable equilibria. In addition, a representative tracking control task is presented, in which the feedforward control signal is determined by a P-type iterative learning control approach [,]. Finally, conclusions and an outlook on future work are provided in Section 6.
2. Observer-Based Control of Systems with Polytopic Uncertainty: A Cautionary Tale
As an introductory warning example, consider the unstable linear plant
with the interval parameter . It is desired to parameterize a linear feedback controller , where is a filtered state estimate resulting from the observer
Here, represents the measured plant output and is a fixed point-valued parameter taken from the interval .
Remark 1.
Because this example is intended purely to visualize the loss of validity of the classical separation principle of control and observer synthesis due to the bounded uncertainty of the parameter , stochastic process and sensor noise are not considered in this section. Noise will be accounted for systematically in Section 3 and Section 4.
A classical control synthesis for the system model (1) would assume that holds. Then, the closed-loop dynamics result in
which become asymptotically stable if the controller gain k is chosen so that
holds.
Similarly, the observer error dynamics with would classically be stated as
under the assumption of an identical parameter a in both the plant and observer model, where
would ensure stability.
When considering that in the observer differential Equation (2) is some point value taken from the interval , the resulting combined system dynamics turn into
for which the control and observer gains k and h need to be chosen so that all eigenvalues of are guaranteed to lie strictly within the open left complex half-plane for all possible realizations of .
Due to the simple structure of the scalar system model with a single uncertain parameter, the optimal choice (with respect to maximizing the admissible domains for the gains k and h) is obvious; it would be . However, in more general settings, such simple statements are typically not possible. At least for systems with a small number of uncertain parameters, as well as controller and observer gains, the choice could be assisted analytically by the parameter space approach from [].
For more complex models, only numerical techniques are helpful for determining the admissible domains of stabilizing controller and observer gains k and h, respectively. For the academic example in this section, a numerical analysis of the stability domains is shown in Figure 1. Domains of robustly stabilizing gains are highlighted in a light gray color, while gains with unstable eigenvalues for at least one are highlighted in dark gray. Clearly, setting to a value that is significantly smaller than (e.g., to the interval midpoint in Figure 1a) requires much larger gains for k and h than the settings in Figure 1b–d. The loss of the validity of the separation principle of control and observer design can be seen clearly from the fact that the boundary between stable and unstable realizations is a curved line that is not parallel to the coordinate axes k and h in Figure 1. The separation principle would only hold in cases where the stability boundaries for k and h are fully decoupled.
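The stability test underlying Figure 1 can be reproduced in a few lines of code for this scalar example (a sketch under the assumptions that the plant input matrix equals one and that the observer has the structure described above; the interval bounds and gains below are illustrative).

```python
# Numerical reproduction of the robust stability check behind Figure 1.
# Plant:    x' = a*x + u,  a in [a_min, a_max],  controller u = -k*xhat,
# observer: xhat' = a_hat*xhat + u + h*(x - xhat).
# With the estimation error e = x - xhat, the closed loop reads
#   [x']   [a - k        k      ] [x]
#   [e'] = [a - a_hat  a_hat - h] [e],
# which is Hurwitz iff trace < 0 and det > 0 (2x2 case). Both conditions are
# affine in a, so checking the two interval endpoints is sufficient.

def hurwitz_2x2(a11, a12, a21, a22):
    """Hurwitz test for a 2x2 matrix via trace and determinant."""
    return (a11 + a22) < 0.0 and (a11 * a22 - a12 * a21) > 0.0

def robustly_stable(k, h, a_hat, a_min, a_max):
    """Robust stability of the coupled state/error dynamics on [a_min, a_max]."""
    return all(
        hurwitz_2x2(a - k, k, a - a_hat, a_hat - h)
        for a in (a_min, a_max)
    )
```

With a_hat equal to the true a, the determinant factors into (a - k)(a - h) and the conditions on k and h decouple, which is exactly the separation principle; the cross term k*(a - a_hat) is what bends the stability boundary in Figure 1. For instance, with a in [1, 2] and a_hat = 1.5, the pair (k, h) = (5, 5) is robustly stabilizing, while (5, 0.1) is not.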

Figure 1.
Stability analysis of the observer-based closed-loop control system (7) for different values of and : The choice of requires a significant increase of both k and h above the thresholds and .
The iterative LMI-based solution procedure given in the following sections provides a systematic approach for stabilizing the observer-based closed-loop control system when the point-valued parameterization of the parallel model in the observer (the parameter in Equation (2)) is fixed. In addition to finding stabilizing controller and observer gains, the proposed technique optimizes the gains so that the resulting dynamics become as insensitive as possible to stochastic actuator, process, and sensor noise. An optimization of the point-valued parameters of the parallel model included in the state observer is a subject for future work.
3. Methodological Background: LMI-Based Control and Observer Design
In this paper, dynamic system models are considered, which are given by the stochastic differential equations
with the state vector and the input vector ; and are the system and input matrices, where is a vector of either constant or time-varying bounded parameters. Alternatively, this vector may denote the dependence of all system matrices on the state variables , cf. []. For the sake of a compact notation, all entries of are treated as mutually independent, with an affine dependence of and on these quantities. Moreover, and are mutually independent standard Brownian motions (Wiener processes) representing actuator and process noise. In this sense, and represent the (element-wise non-negative) disturbance input matrices containing the corresponding standard deviations.
The measured system output is given by
where the output matrix is assumed to be exactly known; is the standard normally distributed sensor noise, while is the weighting matrix representing the actual standard deviation of the output disturbance.
In the following, we aim to design either an observer-based state feedback (Case 1) or an observer-based output feedback control strategy (Case 2). In the second case, the measured output does not necessarily coincide with the system output to be controlled. A linear filter-based output feedback (Case 3), in which the filter is designed in a model-free manner, is mentioned only for the sake of completeness; it has been treated by the authors in a separate publication []. The reason for this separate investigation is that the design criteria do not fully coincide with the requirements derived for Cases 1 and 2 in this paper:
- Case 1:
- The control signal is defined as
where (without loss of generality) is a constant feedforward signal and is the state estimate determined by the robust observer
which makes use of the nominal system and input matrices and (see Section 3.4). For the following stabilization and performance optimization, is assumed. Desired operating points outside the origin are easily achievable by adding suitable nonzero offset terms.
- Case 2:
- Case 3:
- The control signal is defined as
where is a vector consisting of filtered system outputs and estimated output derivatives, cf. [].
Remark 2.
Case 3 can be interpreted as a dynamic output feedback control approach, whereas Cases 1 and 2 are model-based approaches employing feedback controllers that rely on an estimation of the complete state vector by an appropriate full observer. It should be noted that the different control structure of Case 3 leads to a similar, though not identical, LMI approach. Therefore, Case 3 is treated separately in []. Moreover, as already discussed in [], Case 3 should typically only be used if at most two time derivatives of measured output signals are estimated with the model-free filter. In all other cases, the observer-based concepts of Cases 1 and 2 are advantageous.
To guarantee solvability of the control design task in all three cases above, it is assumed that the system (8) is stabilizable using any of the system inputs (10), (12), or (13), and that the pair is robustly observable (or at least detectable) in Case 1 as well as in Case 2. Here, robust observability is defined as observability for all possible values of the parameters according to the definition given in []. Note that the requirements of both stabilizability and detectability will not be proven explicitly. These properties are instead verified constructively by determining a solution that ensures input-to-state stability of the system dynamics as well as of the observer’s error dynamics.
3.1. Polytopic Uncertainty Modeling
As shown in [,,], it is possible to describe the influence of uncertainty in many practical applications by bounded uncertainty domains of polytope type. There, it is assumed that all system matrices in (8) belong to a convex combination of extremal vertex matrices in the form
where denotes the number of independent extremal realizations for the union of all four matrices included in (14).
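The convex-combination structure of the polytopic model (14) can be sketched directly (the vertex values below are illustrative assumptions, not taken from the paper; plain nested lists keep the example dependency-free).

```python
# Sketch of the polytopic model (14): any admissible system matrix is a convex
# combination of the vertex matrices A_1, ..., A_nv, with non-negative weights
# xi_v that sum to one.

def convex_combination(vertices, weights):
    """Return sum_v weights[v] * vertices[v] for lists-of-lists matrices."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0.0 for w in weights)
    rows, cols = len(vertices[0]), len(vertices[0][0])
    return [
        [sum(w * V[i][j] for w, V in zip(weights, vertices))
         for j in range(cols)]
        for i in range(rows)
    ]

# Two extremal realizations of an uncertain A-matrix (illustrative values):
A1 = [[0.0, 1.0], [-1.0, -0.5]]
A2 = [[0.0, 1.0], [-2.0, -1.5]]
A_mid = convex_combination([A1, A2], [0.5, 0.5])  # interval midpoint
```

An LMI design then only imposes its inequalities at the vertices A1 and A2; feasibility at the vertices implies feasibility for every convex combination such as A_mid.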
3.2. Robust State Feedback Control
In all following subsections, sufficient conditions for asymptotic stability of the closed-loop system dynamics (and the observers’ error dynamics, respectively) are derived on the basis of quadratic, parameter-independent, radially unbounded Lyapunov function candidates
with the positive definite matrix
as a free decision variable to be determined during the proposed iterative solution of matrix inequalities.
Theorem 1
([,] Sufficient stability condition for full state feedback). Robust asymptotic stability of the closed-loop control system according to Case 1 for a noise- and error-free state feedback (i.e., ) is ensured if the gain matrix satisfies the bilinear matrix inequalities
, for all polytope vertices in (14).
Proof.
Substituting the control law (10) with in the deterministic part of the system model (8), computing the time derivative of the Lyapunov function candidate (15), and representing the system matrices and by the polytopic uncertainty model (14) leads to
A sufficient stability condition is guaranteed to be satisfied in terms of for all if all matrix inequalities (17) hold true for a parameter-independent controller gain . □
Corollary 1.
To allow for an efficient solution of the bilinear matrix inequalities in Theorem 1 with the help of standard solvers for LMIs such as SeDuMi [] in combination with YALMIP [], the linearizing change of variables
is introduced. Multiplying (17) from the left and right with the matrix leads to equivalent LMIs
with . Their solution with then needs to be transformed back by means of (19).
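The mechanism of Corollary 1 can be sketched in scalar form (hedged: the symbols Q, Y, and K below follow the common convention Y = K*Q with Q = P^{-1}, an assumption of this sketch rather than a quotation of the paper's elided notation).

```python
# Hedged scalar sketch of the linearized vertex LMIs from Corollary 1:
# with Q = P^{-1} > 0 and Y = K*Q, the bilinear vertex condition
#   (A - B*K)^T P + P (A - B*K) < 0
# becomes the LMI (linear in Q and Y)
#   A*Q + Q*A - B*Y - Y*B < 0   at every polytope vertex.

def vertex_lmi_holds(a, b, q, y):
    """Scalar vertex LMI a*q + q*a - b*y - y*b < 0."""
    assert q > 0.0
    return 2.0 * (a * q - b * y) < 0.0

def gain_from_variables(y, q):
    """Back-transformation K = Y * Q^{-1} of the change of variables."""
    return y / q

# Two unstable vertices (a, b) and an illustrative candidate solution:
vertices = [(1.0, 1.0), (2.0, 1.0)]
q, y = 1.0, 3.0
assert all(vertex_lmi_holds(a, b, q, y) for a, b in vertices)
k = gain_from_variables(y, q)   # K = 3 makes a - k < 0 at both vertices
```

In the matrix case, the same back-transformation is carried out after an LMI solver has returned the decision variables, which is the role of SeDuMi/YALMIP mentioned above.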
3.3. Robust Output Feedback Control
Theorem 2
(Sufficient stability condition for output feedback control). Robust asymptotic stability of the closed-loop control system according to Case 2 for an error-free output feedback (i.e., ) is ensured if the gain matrix satisfies the bilinear matrix inequalities
, for all polytope vertices in (14).
Proof.
The proof of Theorem 2 is a direct consequence of the sufficient stability condition used for the proof of Theorem 1. □
Corollary 2
([]). For precisely known matrices , an LMI formulation of Theorem 2 is obtained by introducing a linearizing change of variables with and the equality constraints
If the matrix , and therefore also , have full row rank, the resulting controller gain is given by
Remark 3.
In the literature [,], an alternative problem formulation is often given in terms of
with and the additional equality constraint
leading to . Applying this formulation is, however, only useful for precisely known matrices , where instead may be subject to a polytopic uncertainty model.
Remark 4.
Theorems 1 and 2 become identical for the case of a full state feedback, namely, for . Then, the constraint becomes redundant as .
3.4. Robust State Observation
Following the duality principle between control and observer synthesis, as described, for example, in [,], leads to the counterparts of Theorem 1 and Corollary 1 according to the following Theorem 3, where the observer differential equation is given by Equation (11).
Theorem 3
([] Sufficient stability condition for robust state observer). The error dynamics of a robust state observer according to Equation (11) with the exactly known output matrix are robustly asymptotically stable if the observer gain satisfies the bilinear matrix inequalities
for some positive definite matrix at all polytope vertices in (14).
As stated before, the matrix in (27) does not necessarily have to be identical to the matrix to be fed back in the control law (12). Note that the prime symbol in (27) has been introduced to indicate that the matrix is typically not the same as in the following corollary.
Corollary 3
([]). To allow for an efficient solution of the bilinear matrix inequalities in Theorem 3, after considering the duality between control and observer synthesis by evaluating the transpose of (27) according to
the linearizing change of variables
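The dual substitution of Corollary 3 can likewise be sketched in scalar form (hedged: Z = P*H is the common convention for this linearization; the paper's exact symbols are elided in this excerpt).

```python
# Dual-side sketch (illustrative scalar case): the observer vertex inequality
#   (A - H*C)^T P + P*(A - H*C) < 0,  P > 0,
# is linearized by the standard substitution Z = P*H, giving
#   A*P + P*A - C*Z - Z*C < 0   at every polytope vertex,
# after which the observer gain is recovered as H = P^{-1} * Z.

def observer_vertex_lmi_holds(a, c, p, z):
    """Scalar observer vertex LMI a*p + p*a - c*z - z*c < 0."""
    assert p > 0.0
    return 2.0 * (a * p - c * z) < 0.0

def observer_gain(z, p):
    """Back-transformation H = P^{-1} * Z."""
    return z / p

vertices_a = (1.0, 2.0)        # uncertain system parameter at the vertices
c, p, z = 1.0, 1.0, 3.0        # exactly known output matrix, candidates P, Z
assert all(observer_vertex_lmi_holds(a, c, p, z) for a in vertices_a)
h = observer_gain(z, p)        # H = 3 renders a - h*c < 0 at both vertices
```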
4. Main Results: Iterative LMI Solution for a Combined Control and Observer Synthesis in the Presence of Stochastic Noise
This section provides the main result of this paper. It consists of the description of an augmented stochastic differential equation model comprising the controlled plant as well as the robust state observer. Based on this augmented model, optimality conditions for the combined computation of the control and observer gains are presented, which allow us to minimize the size of the domain around the desired operating point for which stability cannot be proven due to the presence of stochastic actuator, process, and sensor noise. For a visualization of such domains in the context of oscillation attenuation for boom cranes in marine applications, the reader is referred to [].
4.1. Observer-Based Feedback Control in the Presence of Noise
To ensure stability of the control laws (10) or (12) despite the general invalidity of the separation principle according to Section 2 and to simultaneously optimize the controller and observer gain matrices in Cases 1 and 2, the closed-loop control systems’ dynamics as well as the point-valued observers’ error dynamics are described by using a combined set of stochastic differential equation models. In the following, all system models are derived for each of the worst-case realizations , to simultaneously account for bounded parameter uncertainty and stochastic noise.
For that purpose, the deterministic system models from the previous section (i.e., the noise-free, however, parameter-dependent terms) are extended by the terms for stochastic process and sensor noise as well as by accounting for the control signals
Hence, each vertex realization for the closed-loop control system is represented by a stochastic differential equation
Here and in the following, and hold for Case 1.
Similarly, the stochastic differential equation for the observer dynamics at each vertex v is obtained as
The latter leads to the associated error dynamics
As discussed in [], it may be impossible to verify stability in the close vicinity of the desired stationary operating point in the presence of stochastic noise. This is true even if a robust stabilization by simultaneously designing the controller and observer for the noise-free polytopic uncertainty model is successful.
To design a state observer minimizing the sensitivity against noise, our previous work [] suggests estimating the volume of an ellipsoid centered at the desired operating point for which stability properties cannot be proven. For the case of exactly known system models, an LMI-based optimization approach was introduced in [] to reduce the corresponding ellipsoid volume. This approach is now extended by the following two aspects: (i) simultaneously computing the controller and observer gains in a jointly stabilizing sense in the presence of noise, as well as (ii) additionally considering the goal of achieving insensitivity against bounded parameter uncertainty in the system matrices by the minimization of the aforementioned ellipsoid volume.
For that purpose, define an augmented vector
consisting of system states and estimation errors for each extremal system representation from the polytope (14). The corresponding stochastic differential equations are given by
with
where the sub-matrices in the second row result in
and
Similarly, the augmented matrix of noise standard deviations turns into
Note that for the simplified case of [] without parameter uncertainty, the sub-matrices in (38) and (39) turn into and with and . For this case, the separation principle holds and the observer and controller parameterization (hence also their optimization) can be performed independently.
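The separation property in this nominal special case can be verified numerically; in the scalar setting of Section 2 (an assumption of this sketch, with unit input matrix), setting the observer parameter equal to the true one makes the augmented state/error matrix block-triangular, so the closed-loop eigenvalues are exactly the separately assigned controller and observer poles.

```python
import math

def eig_2x2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix, assuming they are real (disc >= 0)."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    d = math.sqrt(tr * tr - 4.0 * det)
    return sorted(((tr - d) / 2.0, (tr + d) / 2.0))

# Scalar plant x' = a*x + u with exactly known a: the augmented state/error
# matrix [[a - k, k], [0, a - h]] is block-triangular, so its eigenvalues are
# precisely a - k (controller pole) and a - h (observer pole), independently
# of each other -- the separation principle in miniature.
a, k, h = 1.0, 3.0, 4.0
assert eig_2x2(a - k, k, 0.0, a - h) == sorted([a - k, a - h])
```

Under parameter uncertainty, the lower-left block of the augmented matrix is no longer zero, which is precisely why the gains must be computed jointly.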
The following subsection introduces a cost function J that can be minimized in an iterative manner by applying state-of-the-art LMI solvers, such as SeDuMi [] in combination with YALMIP []. Thereby, the controller and observer gains, as well as the Lyapunov function candidate, are computed in parallel.
4.2. Optimality of the Noise Insensitive Observer-Based Control Design
Theorem 4
(Optimal control and observer parameterization). The parameterization of the observer-based controller for the augmented system (36)–(40) is optimal in the sense of a minimization of the influence of noise if the controller and observer gains are chosen to minimize the cost function
with and
where
and
with hold. Here, symbols indicate an iterative evaluation, where all such values are replaced by the outcome of the previous iteration stage, in particular by inserting the previous gain values into (37) to obtain the matrices in (42).
Proof.
Define a quadratic form
as the candidate for a Lyapunov function with the block diagonal matrix
consisting of those matrices that were obtained from the Corollaries 1 or 2, as well as from Theorem 3.
The corresponding time derivative of (46) is computed by applying the Itô differential operator [,]. It leads to the expression
Despite the strict negative definiteness of (corresponding to the asymptotic stability of the deterministic part of the augmented system model due to the existence of a positive definite matrix ), the value of may become positive in the close vicinity of due to persistent stochastic excitation. The non-provable stability domain is, hence, characterized by the interior of an ellipsoid that is described by the boundary according to
where
and
The volume of this ellipsoid is proportional to
Following the same steps as in ([], Equations (17)–(20)), it is desired to minimize the quality criterion
by tuning the control and observer gains, where the second factor helps to maximize the domains for which the linear feedback in the closed-loop system is bounded by some positive constant [].
According to [], multiplicative couplings between and the gains included in are removed by an iterative LMI formulation of the optimization task. For that purpose, the term is relaxed into the matrix inequality
with . The inequality (54) is finally rewritten by applying the Schur complement formula according to
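For intuition on the volume being minimized in the proof above, recall the generic fact about quadratic forms (not specific to this paper's matrices) that the ellipsoid {z : z^T P z <= c} with P > 0 has volume proportional to det(P)^(-1/2); in the plane this can be computed directly.

```python
import math

def ellipsoid_area_2d(p11, p12, p22, c=1.0):
    """Area of {z in R^2 : z^T P z <= c} for symmetric P > 0: pi*c/sqrt(det P)."""
    det_p = p11 * p22 - p12 * p12
    assert det_p > 0.0 and p11 > 0.0, "P must be positive definite"
    return math.pi * c / math.sqrt(det_p)

# Scaling P by 2 quadruples det(P) and therefore halves the enclosed area, so
# shrinking the non-provable stability domain amounts to increasing det(P)
# relative to the noise level.
area1 = ellipsoid_area_2d(1.0, 0.0, 1.0)   # unit disc: area pi
area2 = ellipsoid_area_2d(2.0, 0.0, 2.0)   # P doubled: area pi/2
assert abs(area1 - math.pi) < 1e-12
assert abs(area2 - math.pi / 2.0) < 1e-12
```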
Remark 5.
The purely block-diagonal structure of the matrix in (47), with independent parameters for the state and error dynamics, may introduce some degree of conservativeness. However, the positive definiteness of this matrix always guarantees closed-loop stability (see the discussion of Equation (48)). Moreover, the simulations in the following section show that the resulting control signals outperform classical design approaches in terms of a reduction of the control effort. In addition, notice that the obtained solution can be conservative due to the use of a common quadratic Lyapunov function for the whole polytope. The use of parameter-dependent Lyapunov functions, such as those discussed in [] for discrete-time processes with time-varying parameters, will be a subject of future research.
4.3. Summary of the Proposed Algorithm and Further Discussion
The proposed iterative LMI formulation of the combined optimization of control and observer gains for linear systems with polytopic parameter uncertainty and stochastic process, actuator, and sensor noise is summarized in the structure diagram in Figure 2.
Figure 2.
Structure diagram of the proposed offline control and observer parameterization.
For the application scenario considered in the following section, the proposed iterative solution technique converged to constant control and observer gains in fewer than 20 iterations. It should be pointed out that each control and observer parameterization for which all augmented system matrices (37) can be proven to be asymptotically stable with the help of a common Lyapunov function candidate stabilizes the deterministic part of the closed-loop control structure with certainty. Hence, comparing the optimized value of the cost function (41) after convergence of the iterative optimization procedure with the value obtained for a non-optimized, yet guaranteed stabilizing, parameterization may be interpreted as the achieved enhancement of robustness. In general, this cost function value is proportional to the average size of the domain of non-provable contractivity around a desired operating point due to noise, where is evaluated for each of the vertices of the polytopic uncertainty representation.
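As a deliberately simplified, dependency-free stand-in for the iterative scheme of Figure 2 (the paper alternates LMI solves; here, plant structure and numbers are taken from the scalar example of Section 2, with unit input matrix and the observer parameter fixed at the interval midpoint, both assumptions of this sketch), the loop below grows the controller and observer gains jointly until the worst-case vertex test on the coupled state/error dynamics passes, mirroring the observation from Figure 1 that both gains must exceed coupled thresholds.

```python
import math

def spectral_abscissa_2x2(a11, a12, a21, a22):
    """Largest real part of the eigenvalues of a 2x2 matrix."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = tr * tr - 4.0 * det
    return tr / 2.0 if disc < 0.0 else (tr + math.sqrt(disc)) / 2.0

def worst_case_abscissa(k, h, a_hat, a_vertices):
    """Worst vertex of the coupled state/error dynamics from Section 2."""
    return max(
        spectral_abscissa_2x2(a - k, k, a - a_hat, a_hat - h)
        for a in a_vertices
    )

def find_stabilizing_gains(a_hat, a_vertices, step=0.25, max_iter=200):
    """Grow both gains jointly until the robust vertex test passes."""
    k = h = 0.0
    for _ in range(max_iter):
        if worst_case_abscissa(k, h, a_hat, a_vertices) < 0.0:
            return k, h
        k += step
        h += step
    raise RuntimeError("no stabilizing gain pair found within the budget")

k, h = find_stabilizing_gains(a_hat=1.5, a_vertices=(1.0, 2.0))
```

The actual algorithm replaces this crude diagonal sweep by LMI feasibility steps with a cost on the noise-induced ellipsoid volume, but the coupling it must resolve is the same.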
5. Simulation Results
As a benchmark for the proposed control design, we consider the Zeeman catastrophe machine illustrated in Figure 3. It consists of a disc of radius R that can rotate freely around its center at the origin of the -plane. The angle of orientation (with corresponding velocity ) can be manipulated by using the position as a control input. This position is assumed to be controlled by an actuator with a non-negligible time constant T, where u denotes the control signal in terms of the desired value for .
Figure 3.
Application scenario: Zeeman catastrophe machine.
5.1. Modeling
Depending on the location , which is assumed to be fixed, this simple mechanical setup, whose connecting elements of variable lengths and are realized by linear springs with stiffness k, can show nonlinear behavior including hysteresis and chaotic motion []. Both springs have the nominal length .
The first goal is to design a partial state feedback controller (using only the measurable angle and its derivative , both smoothed and estimated by a state observer) that realizes a stable transition between two bifurcation points at which the open-loop dynamics change between asymptotic stability and instability (represented by a jump-like motion between two significantly different angles). Second, a tracking controller is designed that allows for following a reference trajectory for the angle .
According to the discussion above, the controller is parameterized with the output matrix
while
represents the sensor’s relation to the state vector.
The equations of motion of this system
with the velocity–proportional damping term and the state-dependent spring lengths
and
can be derived on the basis of Lagrange’s equations of the second kind. For this derivation, it is assumed that the total mass m of the system is located at a single point on the edge of the rotating disc.
For the LMI-based design procedure, the system model is rewritten in the quasi-linear form
of a polytopic uncertainty system model with and , where
and
These intervals have been determined for the parameters , , , , , , , and the worst-case operating domains
and
All of these parameters and state variables are assumed to be given in terms of corresponding SI units.
Remark 6.
In the following simulations, the nonlinear model is used to represent the controlled plant and observer. The observer gain is designed according to Section 4 with a point-valued system matrix given by the nominal values of the quasi-linear form. Note that this is only one possible choice for the parameter-dependent observer matrix . This simplification is possible under the assumption that both tracking and state estimation errors are small. In general, however, the stability of the nonlinear observer can be proven by checking the stability of all combinations of the vertex matrices of the controlled plant with each extremal system matrix of the observer.
5.2. Tracking the Unstable Branch of the Bifurcation Diagram
Figure 4a shows the numerically computed equilibria of the system (58) as a function of piecewise constant inputs . The hysteresis behavior of the open-loop system can be seen clearly: starting with angles of approximately , an increase of leads to a jump in the angle (transition from the open to the filled green triangle at some in Figure 4), while successively reducing leads to a jump between two different angles for some (red triangles in Figure 4). Figure 4a can be interpreted as a bifurcation diagram in the phase space, where the branch between the open green and red triangles is unstable and not visible in open-loop operation of the system.
Figure 4.
Guidance of the Zeeman catastrophe machine along the unstable branch of equilibria in the -plane; comparison of an observer-based full state feedback control parameterized by the LQG technique with the LMI-based parameterization of (12).
However, this branch can be visualized by using the closed-loop control strategy as shown in Figure 4b,c with
where is a piecewise constant reference signal and holds in the second scenario due to the choice of according to (56) for the LMI-based controller (12).
For the sake of comparison with a standard approach (which, however, does not strictly guarantee closed-loop stability due to the parameter dependency of the system model), an LQG approach has additionally been implemented. It makes use of a continuous-time, steady-state extended Kalman filter. Both the LQG and the LMI approaches are parameterized with , , and in this subsection, where is chosen according to (57).
The controller optimization in the LQG case was performed with and for the point-valued parameters and . These values were chosen to reflect the strongest instability of the open-loop plant during the control synthesis. Due to the state dependency of the system matrix in (61), this parameterization does not provide a strict proof of stability. In contrast, the LMI approach provides such a proof and, compared to the LQG design, considerably reduces the transient deviations from the unstable branch of the bifurcation diagram. To limit the observer’s eigenvalues so that a simulation with a fixed step size of is admissible, the heuristically chosen penalty term was added to J, defined in (41).
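For reference, the point-valued LQR part of such an LQG parameterization can be computed in a few lines; the sketch below solves the continuous-time algebraic Riccati equation via the stable invariant subspace of the Hamiltonian matrix, using a hypothetical unstable second-order model in place of the paper's parameter values:

```python
import numpy as np

# Hypothetical point-valued model and LQR weights (not the paper's values).
A = np.array([[0.0, 1.0], [2.0, -0.1]])   # open-loop unstable
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Solve the CARE  A'P + PA - P B R^-1 B' P + Q = 0  via the stable
# invariant subspace of the Hamiltonian matrix.
Rinv = np.linalg.inv(R)
H = np.block([[A, -B @ Rinv @ B.T],
              [-Q, -A.T]])
w, V = np.linalg.eig(H)
stable_vecs = V[:, w.real < 0]        # eigenvectors of the n stable eigenvalues
n = A.shape[0]
X1, X2 = stable_vecs[:n, :], stable_vecs[n:, :]
P = np.real(X2 @ np.linalg.inv(X1))   # stabilizing Riccati solution

K = Rinv @ B.T @ P                    # LQR state-feedback gain, u = -K x
print(np.linalg.eigvals(A - B @ K).real)
```

Unlike the LMI-based design, this gain is valid only for one fixed parameter realization and therefore carries no stability certificate over the polytope.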
All simulations were performed for piecewise constant signals for (hold time of ) with increments of approximately . If only the terminal states of each step are plotted (cf. Figure 4d), the LMI case reconstructs the exact unstable branch with deviations of less than , which is slightly better than in the LQG case. The LMI-based solution approach therefore not only provides a strict proof of stability of the observer-based control structure but also leads to a significant reduction of the control effort, resulting in much smoother values for in Figure 4c. This reduction of the control effort is investigated in more detail in the following example.
5.3. Trajectory Tracking Control
As a second application for the proposed LMI-based parameterization of the control laws (10), Case 1, and (12), Case 2, and its comparison with the LQG design, consider the following four scenarios:
- S1:
- , , ;
- S2:
- , , ;
- S3:
- , , ;
- S4:
- , , .
Here, the task is to track a continuous reference trajectory along the unstable branch of the previously investigated bifurcation diagram. Figure 5 presents a comparison of all three control approaches with the ideal tracking behavior in a noise-free setting. The LMI-based output feedback control achieves the lowest control effort in all scenarios. This is accomplished by minimizing the domain of non-provable stability properties and by feeding back only the first two components of the estimated state vector.
Figure 5.
Comparison of the control effort for the LMI-based full state feedback controller (Case 1), the LMI-based output feedback controller (Case 2), and the LQG technique with the ideal noise-free control signal.
To achieve accurate tracking of the reference trajectory in Figure 6, the feedforward control signal in (10) and (12) was chosen as a signal that is piecewise constant for intervals of . It is determined in the sense of a P-type iterative learning control approach by the update rule
with the initialization
where generally holds for . The learning gain , weighting the difference between the observed system output in the iteration i and the reference signal, is given by
where is either the LQG controller’s gain, the full state feedback gain in (10), or for the partial state feedback (12); and result from evaluating the system matrices in (61) for the current state estimate at the time instant during epoch i (also denoted as a trial or pass in the literature on iterative learning control).
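A minimal sketch of such a P-type learning update, applied to a hypothetical scalar discrete-time plant rather than the Zeeman machine, illustrates the iteration-wise reduction of the tracking error:

```python
import numpy as np

# P-type ILC sketch: u_{i+1}[k] = u_i[k] + L_learn * e_i[k+1], with the
# one-step-ahead error accounting for the plant's relative degree of one.
a_p, b_p = 0.3, 0.5        # hypothetical plant: x[k+1] = a_p x[k] + b_p u[k]
T = 50                     # samples per trial (epoch)
y_ref = np.sin(np.linspace(0.0, np.pi, T))   # reference over one trial
L_learn = 1.0              # P-type learning gain (assumed)

def run_trial(u):
    # Simulate one epoch; the output y[k] = x[k] is measured before u[k] acts.
    x, y = 0.0, np.zeros(T)
    for k in range(T):
        y[k] = x
        x = a_p * x + b_p * u[k]
    return y

u = np.zeros(T)            # initialization u_0 = 0
errors = []
for i in range(10):        # 10 learning iterations
    y = run_trial(u)
    e = y_ref - y
    errors.append(np.sqrt(np.mean(e**2)))    # RMS tracking error per epoch
    u[:-1] = u[:-1] + L_learn * e[1:]        # P-type update
print(errors[0], errors[-1])
```

Here |1 - L_learn * b_p| < 1 holds, so the trial-to-trial error map is a contraction and the RMS error shrinks over the iterations, mirroring the behavior reported in Figure 8.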
Figure 6.
Comparison of the output tracking behavior as well as the state reconstruction accuracy for the proposed LMI-based output feedback controller (Case 2) after the 10th iteration.
For the first iteration , the controller
is identical to (66) introduced in the previous subsection due to the initialization (68).
In addition to the accurate trajectory tracking for the final iteration , Figure 6 shows the strong suppression of measurement noise by the LMI-based observer; the actual state trajectories and the corresponding estimates are practically indistinguishable within the graphical resolution. Finally, Figure 7 and Figure 8 show the noise-insensitive convergence of the control signals and of the root mean square (RMS) output tracking errors for all four scenarios S1–S4.
Figure 7.
Comparison of the control signal for 10 iterations of the proposed LMI-based output feedback controller (Case 2).

Figure 8.
Reduction of the output tracking error over 10 iterations for the proposed LMI-based output feedback controller (Case 2).
In summary, all three control approaches are compared in Table 1. The LMI-based output feedback controller (Case 2) exhibits the smallest deviations between the actual control signals and the ideal noise-free system inputs . This is shown by a reduction of the RMS values
of the system inputs by more than in all scenarios , in comparison with the LQG implementation. These RMS values are evaluated on an equidistant discretization grid with leading to .
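The RMS deviation metric and the resulting relative improvement can be computed as follows; the signals are synthetic placeholders, not the paper's simulation data:

```python
import numpy as np

# RMS deviation between an actual control signal and the ideal noise-free
# input on an equidistant time grid (illustrative signals only).
t = np.linspace(0.0, 10.0, 1001)          # equidistant discretization grid
u_ideal = np.sin(0.5 * t)                 # hypothetical ideal input
rng = np.random.default_rng(0)
u_lqg = u_ideal + 0.20 * rng.standard_normal(t.size)   # noisier control
u_lmi = u_ideal + 0.05 * rng.standard_normal(t.size)   # smoother control

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

rms_lqg = rms(u_lqg - u_ideal)
rms_lmi = rms(u_lmi - u_ideal)
improvement = 100.0 * (1.0 - rms_lmi / rms_lqg)   # percent reduction
print(round(improvement, 1))
```

The same metric, evaluated per scenario, yields the control-improvement column of Table 1.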
Table 1.
Comparison of tracking errors and control effort of all controllers for the scenarios S1–S4 in terms of the root mean square deviations to an ideal noise-free trajectory tracking; the control improvement is quantified by a comparison of Case 2 with the LQG in terms of .
Moreover, the full state feedback according to Case 1 has a quality comparable to that of the classical LQG in terms of the control amplitudes as well as the output RMS values
where is the simulated first state variable in the respective scenario . However, in comparison with the LQG, it possesses a strict proof of stability despite the polytopic parameter uncertainty, which renders the separation principle invalid as discussed in Section 2. Regarding Case 2, which is less accurate in terms of output tracking than the full state feedback, it should be pointed out that the RMS output values are significantly smaller than the standard deviations of the measurement noise, except for scenario S1, where a large additive disturbance acts directly on the control input.
From a practical point of view, a trade-off between tracking accuracy and control effort is necessary. Using the presented approach, the partial state feedback (Case 2) reduces the control effort in comparison to the full state feedback (Case 1), albeit at the cost of a slight decrease in tracking accuracy.
6. Conclusions and Outlook on Future Work
In this paper, a novel iterative LMI solution for the joint control and observer parameterization of systems with polytopic parameter uncertainty and stochastic process, actuator, and sensor noise was derived. It is, by construction, equally applicable to the design of observer-based state and output feedback approaches. Most importantly, it allows for a guaranteed stabilization of the system dynamics despite bounded (polytopic) uncertainty in the system matrices, which renders the classical separation principle of control and observer design invalid. Benchmark simulations showed that the approach outperforms an LQG design that relies on a point-valued system model for the underlying controller and observer parameterization and, therefore, does not guarantee stability by design.
In future work, this approach will be further investigated for discrete-time processes. Moreover, it is desired to optimize the structure of output feedback controllers and reduced-order observers for high-dimensional systems such that the number of variables that are required for a robust parameterization is minimized. Finally, we aim at introducing and optimizing weighting factors for the diagonal entries in the matrix (47) so that, during the minimization of (41), the user can decide which of the individual states (or estimation error signals) has the highest importance. This extension will have similarities to tuning the ratio between the weighting matrix entries for state deviations in the classical LQR design.
Author Contributions
Conceptualization, A.R.; Investigation, A.R. and S.R.; Software, A.R.; Validation, R.D., S.L. and B.T.; Writing—original draft, A.R., R.D., S.R., S.L. and B.T.; Writing—review and editing, A.R., R.D., S.R., S.L. and B.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Acknowledgments
We acknowledge support from the Open Access Publication Fund of the University of Wuppertal.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Turner, M.C.; Bates, D.G. (Eds.) Mathematical Methods for Robust and Nonlinear Control: EPSRC Summer School; Lecture Notes in Control and Information Sciences; Springer: London, UK, 2007. [Google Scholar]
- Pipeleers, G.; Demeulenaere, B.; Swevers, J.; Vandenberghe, L. Extended LMI Characterizations for Stability and Performance of Linear Systems. Syst. Control Lett. 2009, 58, 510–518. [Google Scholar] [CrossRef]
- Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994. [Google Scholar]
- Barmish, B.R. New Tools for Robustness of Linear Systems; Macmillan: New York, NY, USA, 1994. [Google Scholar]
- Henrion, D. Robust Analysis with Linear Matrix Inequalities and Polynomial Matrices. 2001. Available online: http://homepages.laas.fr/henrion/courses/polyrobust/polyrobustIV1.pdf (accessed on 9 December 2020).
- Scherer, C.; Weiland, S. Linear Matrix Inequalities in Control. In Control System Advanced Methods, 2nd ed.; Levine, W.S., Ed.; The Electrical Engineering Handbook Series; CRC Press: Boca Raton, FL, USA, 2011; pp. 24–1–24–30. [Google Scholar]
- Skelton, R.E. Linear Matrix Inequality Techniques in Optimal Control. In Encyclopedia of Systems and Control; Baillieul, J., Samad, T., Eds.; Springer: London, UK, 2020; pp. 1–10. [Google Scholar]
- Ackermann, J.; Blue, P.; Bünte, T.; Güvenc, L.; Kaesbauer, D.; Kordt, M.; Muhler, M.; Odenthal, D. Robust Control: The Parameter Space Approach, 2nd ed.; Springer: London, UK, 2002. [Google Scholar]
- Kersten, J.; Rauh, A.; Aschemann, H. Interval Methods for Robust Gain Scheduling Controllers: An LMI-Based Approach. Granul. Comput. 2020, 5, 203–216. [Google Scholar] [CrossRef]
- Rauh, A. Sensitivity Methods for Analysis and Design of Dynamic Systems with Applications in Control Engineering; Shaker–Verlag: Aachen, Germany, 2017. [Google Scholar]
- Kheloufi, H.; Zemouche, A.; Bedouhene, F.; Souley-Ali, H. Robust H∞ Observer-Based Controller for Lipschitz Nonlinear Discrete-Time Systems with Parameter Uncertainties. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 4337–4341. [Google Scholar]
- Peaucelle, D.; Ebihara, Y. LMI results for robust control design of observer-based controllers, the discrete-time case with polytopic uncertainties. In Proceedings of the 19th World Congress The International Federation of Automatic Control, Cape Town, South Africa, 24–29 August 2014; pp. 6527–6532. [Google Scholar]
- Zemouche, A.; Zerrougui, M.; Boulkroune, B.; Rajamani, R.; Zasadzinski, M. A new LMI observer-based controller design method for discrete-time LPV systems with uncertain parameters. In Proceedings of the 2016 American Control Conference, Boston, MA, USA, 6–8 July 2016. [Google Scholar]
- Rauh, A.; Romig, S.; Aschemann, H. When is Naive Low-Pass Filtering of Noisy Measurements Counter-Productive for the Dynamics of Controlled Systems? In Proceedings of the 23rd IEEE International Conference on Methods and Models in Automation and Robotics MMAR 2018, Miedzyzdroje, Poland, 27–30 August 2018.
- Kushner, H. Stochastic Stability and Control; Academic Press: New York, NY, USA, 1967. [Google Scholar]
- Gershon, E.; Shaked, U. Static H2 and H∞ Output-Feedback of Discrete-Time LTI Systems with State Multiplicative Noise. Syst. Control Lett. 2006, 55, 232–239. [Google Scholar] [CrossRef]
- Do, M.H.; Koenig, D.; Theilliol, D. Robust Observer-Based Controller for Uncertain-Stochastic Linear Parameter-Varying (LPV) System under Actuator Degradation. Int. J. Robust Nonlinear Control 2021, 31, 662–693. [Google Scholar] [CrossRef]
- Litherland, T.; Siahmakoun, A. Chaotic Behavior of the Zeeman Catastrophe Machine. Am. J. Phys. 1995, 63, 426–431. [Google Scholar] [CrossRef]
- Xu, J.X.; Tan, Y. On the P-Type and Newton-Type ILC Schemes for Dynamic Systems with Non-Affine-in-Input Factors. Automatica 2002, 38, 1237–1242. [Google Scholar] [CrossRef]
- Rogers, E.; Gałkowski, K.; Owens, D. Control Systems Theory and Applications for Linear Repetitive Processes; Lecture Notes in Control and Information Sciences; Springer: Berlin/Heidelberg, Germany, 2007; Volume 349. [Google Scholar]
- Rauh, A.; Romig, S. Linear Matrix Inequalities for an Iterative Solution of Robust Output Feedback Control of Systems with Bounded and Stochastic Uncertainty. Sensors 2021, 21, 3285. [Google Scholar] [CrossRef] [PubMed]
- Cichy, B.; Gałkowski, K.; Dąbkowski, P.; Aschemann, H.; Rauh, A. A New Procedure for the Design of Iterative Learning Controllers Using a 2D Systems Formulation of Processes with Uncertain Spatio-Temporal Dynamics. Control Cybern. 2013, 42, 9–26. [Google Scholar]
- Sturm, J. Using SeDuMi 1.02, A MATLAB Toolbox for Optimization over Symmetric Cones. Optim. Methods Softw. 1999, 11–12, 625–653. [Google Scholar] [CrossRef]
- Löfberg, J. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. In Proceedings of the IEEE International Symposium on Computer Aided Control Systems Design, Taipei, Taiwan, 2–4 September 2004; pp. 284–289. [Google Scholar]
- Crusius, C.A.R.; Trofino, A. Sufficient LMI Conditions for Output Feedback Control Problems. IEEE Trans. Autom. Control 1999, 44, 1053–1057. [Google Scholar] [CrossRef]
- Rauh, A.; Kersten, J.; Aschemann, H. Robust Control for a Spatially Three-Dimensional Heat Transfer Process. In Proceedings of the 8th IFAC Symposium on Robust Control Design ROCOND’15, Bratislava, Slovakia, 8–11 July 2015. [Google Scholar]
- Rauh, A.; Senkel, L.; Gebhardt, J.; Aschemann, H. Stochastic Methods for the Control of Crane Systems in Marine Applications. In Proceedings of the 2014 European Control Conference (ECC), Strasbourg, France, 24–27 June 2014; pp. 2998–3003. [Google Scholar]
- Rauh, A.; Kersten, J.; Aschemann, H. Toward the Optimal Parameterization of Interval-Based Variable-Structure State Estimation Procedures. Reliab. Comput. 2017, 25, 118–132. [Google Scholar]
- Palma, J.M.; Morais, C.F.; Oliveira, R.C.L.F. A Less Conservative Approach to Handle Time-Varying Parameters in Discrete-Time Linear Parameter-Varying Systems with Applications in Networked Control Systems. Int. J. Robust Nonlinear Control 2020, 30, 3521–3546. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).