Article

Output Stabilization of Linear Systems in Given Set

by Ba Huy Nguyen 1,2,*,† and Igor B. Furtat 1,†
1 Adaptive and Intelligent Control of Network and Distributed Systems Laboratory, Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences (IPME RAS), 199178 Saint-Petersburg, Russia
2 Faculty of Control Systems and Robotics, ITMO University, 197101 Saint-Petersburg, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(16), 3542; https://doi.org/10.3390/math11163542
Submission received: 11 July 2023 / Revised: 11 August 2023 / Accepted: 14 August 2023 / Published: 16 August 2023
(This article belongs to the Section Dynamical Systems)

Abstract:
This paper presents a method for designing control laws to achieve output stabilization of linear systems within specified sets, even in the presence of unknown bounded disturbances. The approach consists of two stages. In the first stage, a coordinate transformation is utilized to convert the original system with output constraints into a new system without constraints. In the second stage, a controller is designed to ensure the boundedness of the controlled variable of the transformed system obtained in the first stage. Two distinct control strategies are presented in the second stage, depending on the measurability of the state vector. If the state vector is measurable, a controller is designed using state feedback based on the Lyapunov method and Linear Matrix Inequalities (LMIs). Alternatively, if only the output vector is measurable, an observer-based controller is designed using a Luenberger observer. In this case, the state estimation error does not need to converge to zero but must remain bounded. The efficacy of the proposed method and the validity of the theoretical results are demonstrated through simulations performed in MATLAB/Simulink.

1. Introduction

In practice, many control problems require guaranteeing that output signals remain in given sets. For example, the frequency and output voltage of electric generators must be maintained within specified bounds in the electric power network [1,2,3,4,5,6,7], or the pressure and flow rate at the wellhead must be kept within a given band when controlling the formation pressure stabilization process [8,9,10], etc. Violating these constraints during operation may degrade the performance and lead to system instability. Therefore, guaranteeing the desired control performance, including transient and steady-state performance, has been one of the key criteria in the synthesis of automatic control systems over the past decades.
For linear systems without uncertainty in the plant model parameters, classical control methods such as modal control and optimal control [11,12,13,14,15] can ensure that the control error between the controlled signal and the reference signal converges to an arbitrarily small region after a finite time. The characteristics of the transient process can easily be obtained by analyzing the dynamic model via a transfer function or a state-space model. The control problems become more complicated when uncertainties, unknown disturbances, and nonlinearities are present in the system. In this case, classical adaptive methods can be used to stabilize the closed-loop system and guarantee that the control error belongs to a given set asymptotically or in finite time [16,17,18,19,20,21]. However, the estimates for calculating the characteristics of the limit set and the duration of the transient process are either rather rough or are given only as existence results. Moreover, these methods do not guarantee a specified deviation of the controlled signal from the reference signal in the transient mode, which can be arbitrarily large at the initial time if the initial conditions of the plant are unknown.
To address the problem of a large control error in the transient mode, in 1991 Miller and Davison [22] proposed an adaptive method to “force” this error “to be less than an (arbitrarily small) prespecified constant after an (arbitrarily short) prespecified period of time, with an (arbitrarily small) prespecified upper bound on the amount of overshoot”. In that paper, the given sets are defined by a sequence of rectangles. The size of each rectangle corresponds to the desired maximum deviation of the error from the origin and to the desired time after which the error belongs to the respective rectangle. However, these rectangular regions are rather rough, and the approach applies only to systems with scalar input and output. In 2008, Bechlioulis and Rovithakis [23] proposed a new control methodology, namely “prescribed performance control (PPC)”. PPC is understood as a control method that ensures that the control error belongs to a given set, described by a performance function designed by the user, and converges into an arbitrarily small neighborhood of the origin. However, implementing the method of [23] requires knowledge of the sign and the set of initial conditions. Moreover, the upper and lower performance functions for the transients are rather rough because these bounds are determined by the same function with different signs. Additionally, these performance functions exponentially converge to some constants.
The papers by Furtat and Gushchin [24,25] propose a novel method that generalizes the results of [23] and guarantees that the controlled output variables belong to a given set. The method uses a special coordinate transformation that reduces the control problem with constraints on the output variable to a control problem without constraints. In contrast to [23], the performance functions of the given set can be chosen asymmetrically with respect to the origin, can have arbitrary forms, and need not converge to constants. This allows the method of Furtat and Gushchin to be used to stabilize the output signals in [1,2,3,4] in given sets of arbitrary form, whereas the method of Bechlioulis and Rovithakis mainly addresses the tracking problem.
The application of the method in [24,25] is limited to linear systems with scalar input and scalar output (SISO), and the parameters of the controller are selected manually. In this paper, we introduce a novel approach for designing nonlinear control laws based on the results of [24,25]. Our contribution differs from [24,25] in the following ways:
  • We propose a general procedure for stabilizing unstable linear systems, including MIMO cases, using state feedback and observer-based feedback;
  • The parameters for the control law are computed utilizing LMIs;
  • We provide recommendations for selecting control law parameters to reduce the influence of disturbances.
The paper is organized as follows. Section 2 introduces the notation and gives an important lemma. Section 3 states the problem of linear plant control with a guarantee that the output signals belong to given sets. Section 4 gives a general approach to solving the problem. Section 5 proposes a state feedback controller design for linear plants based on LMIs. Section 6 proposes an observer-based controller design for the case when the state vector is not measurable. Section 7 illustrates the obtained results by simulation in MATLAB/Simulink and demonstrates the theoretical conclusions.

2. Notation and a Key Lemma

The following notation will be used throughout the paper:
- \( \mathbb{R}^n \) is the Euclidean space of dimension n with norm \( |\cdot| \);
- \( \mathbb{R}^{n \times m} \) denotes the set of all \( n \times m \) real matrices with norm \( \|\cdot\| \);
- \( I \), \( 0 \), \( \mathrm{diag}\{\cdot\} \) denote the identity, zero, and diagonal matrix (of the corresponding dimension);
- \( 1_m \) denotes the all-one vector with m entries equal to 1;
- For square matrices \( A \in \mathbb{R}^{n \times n} \), \( A \succ 0 \) (\( A \prec 0 \)) means that A is a positive-definite (negative-definite) matrix; \( A \succeq 0 \) (\( A \preceq 0 \)) means that A is a non-negative-definite (non-positive-definite) matrix;
- The symbol \( * \) denotes a symmetric block in a symmetric matrix.
The following well-known lemma will be used to prove the main results of the paper. According to [26,27,28], the S-procedure can be formulated as follows:
Lemma 1 
([26,27,28]). Let the quadratic forms
\[ f_i(x) = x^{\top} A_i x, \quad i = 0, 1, \ldots, m, \]
where \( x \in \mathbb{R}^n \), \( A_i = A_i^{\top} \in \mathbb{R}^{n \times n} \), and the numbers \( \alpha_0, \alpha_1, \ldots, \alpha_m \) be given. If there exist numbers \( \tau_i \ge 0 \), \( i = 1, \ldots, m \), such that
\[ A_0 \preceq \sum_{i=1}^{m} \tau_i A_i, \qquad \alpha_0 \ge \sum_{i=1}^{m} \tau_i \alpha_i, \]
then the inequalities
\[ f_i(x) \le \alpha_i, \quad i = 1, \ldots, m, \]
imply the inequality
\[ f_0(x) \le \alpha_0 \]
for all \( x \neq 0 \).

3. Problem Statement

Consider the linear dynamical system with m inputs and m outputs
\[ \dot{x}(t) = A x(t) + B u(t) + D f(t), \qquad y(t) = C x(t), \tag{1} \]
where \( t \ge 0 \); \( x(t) \in \mathbb{R}^n \) is the state vector; \( y(t) = \mathrm{col}\{y_1(t), \ldots, y_m(t)\} \in \mathbb{R}^m \) is the output signal; \( u(t) \in \mathbb{R}^m \) is the control signal; \( f(t) \in \mathbb{R}^l \) is an unknown bounded disturbance with \( |f(t)| \le \bar{f} \), where \( \bar{f} \) is a known constant; and the matrices \( A \in \mathbb{R}^{n \times n} \), \( B \in \mathbb{R}^{n \times m} \), \( C \in \mathbb{R}^{m \times n} \), and \( D \in \mathbb{R}^{n \times l} \) are known. The pair \( (A, B) \) is controllable, and the pair \( (A, C) \) is observable. The system (1) has relative degree equal to \( 1_m \) (i.e., \( \det(CB) \neq 0 \) [29,30]).
The goal of the paper is to design a control law that guarantees that the output signal \( y(t) \) of the system (1) stays in the following set
\[ \mathcal{Y} = \{ y(t) \in \mathbb{R}^m : \underline{g}_i(t) < y_i(t) < \bar{g}_i(t), \; i = 1, \ldots, m \}. \tag{2} \]
Here, \( \underline{g}_i(t) \) and \( \bar{g}_i(t) \) are continuous functions that are bounded together with their first derivatives. These functions can be selected by designers based on the requirements for system operation. For example, in the problem of electric generator control (see [2,4]), it is required to maintain the frequency \( \omega(t) \) and the output voltage \( U(t) \) within the specified bounds \( \underline{\omega}(t) \le \omega(t) \le \bar{\omega}(t) \) and \( \underline{U}(t) \le U(t) \le \bar{U}(t) \).

4. Solution Method

This section presents the mathematical framework for solving the given problem. The method is based on the change of coordinates [25], which allows one to transform the control problem with constraints on the output variable to the problem of control without constraints on the controlled variable.
Let us introduce the change of coordinates
\[ \varepsilon(t) = \Phi(y(t), t), \tag{3} \]
where \( \varepsilon(t) \in \mathbb{R}^m \) is a new variable, and the vector function \( \Phi(y, t) \) satisfies the following conditions:
(a) for all \( \varepsilon \in \mathbb{R}^m \) and \( t \ge 0 \) there exists an inverse mapping
\[ y = \Phi^{-1}(\varepsilon, t); \tag{4} \]
(b) \( \Phi^{-1}(\varepsilon, t) \) is a differentiable function with respect to \( \varepsilon \) and t, and \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \succ 0 \) for all \( \varepsilon \in \mathbb{R}^m \) and \( t \ge 0 \);
(c) \( \underline{g}_i(t) < \Phi_i^{-1}(\varepsilon_i, t) < \bar{g}_i(t) \), \( i = 1, \ldots, m \), for all \( \varepsilon_i \in \mathbb{R} \) and \( t \ge 0 \);
(d) \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \) is a bounded function for all \( \varepsilon \) and \( t \ge 0 \): \( \left| \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \right| < \gamma \), where \( \gamma > 0 \) is determined by the transformation (3).
Here, we consider functions \( \Phi_i^{-1}(\varepsilon_i, t) \) depending only on \( \varepsilon_i \in \mathbb{R} \) and t, so that the matrix \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \) has a diagonal form. To design a control law, information about the dynamics of the variable \( \varepsilon(t) \) is required. Calculating the time derivative of the function \( y(t) \) by using (4), we get
\[ \dot{y} = \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \dot{\varepsilon} + \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t}. \tag{5} \]
Taking into account (1) and (4), we rewrite (5) in the form
\[ \dot{\varepsilon} = \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( C A x + C B u + C D f - \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \right). \tag{6} \]
Since both \( C D f(t) \) and \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \) are bounded functions in (6), we introduce the substitution \( \psi(t) = C D f(t) - \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \). Consequently, we have \( |\psi(t)| \le \kappa \), where \( \kappa = \|C D\| \bar{f} + \gamma \). Considering this substitution, we rewrite expression (6) as follows:
\[ \dot{\varepsilon} = \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( C A x + C B u + \psi \right). \tag{7} \]
We recall the main result from [25] to solve the stated problem.
Theorem 1 
([25]). Let the conditions (a)–(d) be satisfied for the transformation (3). If there exists a control law \( u(t) \) such that the solution of (7) is bounded, then \( y(t) \in \mathcal{Y} \).
This theorem allows one to transfer the control problem (1) with constraints (2) on the output y ( t ) of the system (1) to the control problem on the auxiliary variable ε ( t ) of the system (7) without constraint.
In order to find a control u ( t ) that ensures that ε ( t ) is bounded, we consider a Lyapunov function in the form
\[ V = \tfrac{1}{2} \varepsilon^{\top} \varepsilon. \tag{8} \]
According to (7), we get
\[ \dot{V} = \varepsilon^{\top} \dot{\varepsilon} = \varepsilon^{\top} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( C A x + C B u + \psi \right). \tag{9} \]
Let \( \Omega \) be a bounded set containing the origin, within which we intend to stabilize the trajectory of \( \varepsilon(t) \). To maintain \( \varepsilon(t) \) within \( \Omega \), it is sufficient to ensure that the derivative of the Lyapunov function is negative for all \( \varepsilon \) not belonging to \( \Omega \), i.e., \( \dot{V} < 0 \) for all \( \varepsilon \notin \Omega \) (as per the concept of input-to-state stability (ISS) [31]). The derivative of the Lyapunov function in (9) involves the matrix \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \), which is positive definite. In particular, if the system (1) is one-dimensional, then \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \) reduces to a positive scalar that does not affect the sign of \( \dot{V} \). Before formulating the controller, we prove the following auxiliary proposition.
Proposition 1. 
Let us consider the following block matrices:
\[ M = \begin{bmatrix} Q & 0 \\ * & Q \end{bmatrix} \succ 0, \qquad N = \begin{bmatrix} N_{11} & N_{12} \\ * & N_{22} \end{bmatrix} \prec 0, \]
where \( Q, N_{11}, N_{12}, N_{21}, N_{22} \in \mathbb{R}^{n \times n} \) are real diagonal matrices. Then, the matrix product
\[ M N = \begin{bmatrix} Q N_{11} & Q N_{12} \\ * & Q N_{22} \end{bmatrix} \]
is a negative-definite matrix.
Proof. 
It is easy to see that the matrix \( M N \) is symmetric. Let \( \lambda_i \), \( x_i \), \( i = 1, \ldots, 2n \), be the eigenvalues and eigenvectors of the matrix \( M N \), respectively. Then, multiplying \( M N x_i = \lambda_i x_i \) on the left by \( x_i^{\top} N \), we obtain the following relation
\[ x_i^{\top} N M N x_i = \lambda_i x_i^{\top} N x_i. \]
From the last equation we can express \( \lambda_i \) as follows
\[ \lambda_i = \frac{ x_i^{\top} N M N x_i }{ x_i^{\top} N x_i }. \]
Given that \( M \succ 0 \) and \( N = N^{\top} \prec 0 \), it follows that \( N M N \succ 0 \), implying that \( x^{\top} N M N x > 0 \) for all \( x \neq 0 \). Additionally, considering that \( x^{\top} N x < 0 \) for all \( x \neq 0 \), we deduce that \( \lambda_i < 0 \) for \( i = 1, \ldots, 2n \). Since the symmetric matrix \( M N \) has only negative eigenvalues, \( M N \) is negative definite. □
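As a quick numerical sanity check of Proposition 1, the following MATLAB snippet (our own illustration, not part of the original paper) builds random matrices with the assumed diagonal-block structure and verifies that the product \( M N \) has only negative eigenvalues.

```matlab
% Minimal numerical check of Proposition 1 (illustration only).
% M = blkdiag(Q,Q) with a random diagonal Q > 0; N is a symmetric matrix
% with diagonal blocks, shifted so that it is negative definite.
n   = 4;
Q   = diag(0.1 + rand(n,1));                 % diagonal, positive definite
N11 = diag(randn(n,1));
N12 = diag(randn(n,1));
N22 = diag(randn(n,1));
N   = [N11 N12; N12 N22];
N   = N - (max(eig(N)) + 0.1)*eye(2*n);      % shift keeps the block structure, makes N < 0
M   = blkdiag(Q, Q);
disp(max(eig(M*N)))                          % expected output: a negative number
```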
Below, we will proceed to propose a control design for the system (1) in different scenarios: when the state vector is available for measurement, and when only the output signal is measurable.

5. State Feedback Control

Suppose the state vector of the system (1) is measured. We introduce the control law in the form
\[ u = -(C B)^{-1} \left[ K \varepsilon + C A x \right], \tag{10} \]
where \( K \in \mathbb{R}^{m \times m} \) is the control gain matrix to be designed. Next, we demonstrate that the control law (10) enables us to ensure the negativity of the derivative of the Lyapunov function (8) when analyzing the stability of the closed-loop system.
By substituting the control law (10) into Equation (7), we obtain
\[ \dot{\varepsilon} = \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( -K \varepsilon + \psi \right). \tag{11} \]
The following theorem provides a method for determining the gain matrix K in (10).
Theorem 2. 
Suppose that the transformation (3) satisfies conditions (a)–(d) and that \( 0 \prec \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \preceq \bar{\sigma} I \) for any \( \varepsilon \in \mathbb{R}^m \) and \( t \ge 0 \). If for given numbers \( c > 0 \), \( \alpha > 0 \), \( \beta > 0 \), and \( \kappa > 0 \) there exist a matrix \( K \in \mathbb{R}^{m \times m} \) and positive coefficients \( \tau_1, \tau_2 \) such that for \( \sigma \in (0, \bar{\sigma}] \) the following linear matrix inequalities are feasible
\[ \begin{bmatrix} -K + (0.5 \tau_1 \sigma + \alpha + \beta) I & 0.5 I \\ * & (-\tau_2 \sigma + \beta) I \end{bmatrix} \preceq 0, \qquad -c \tau_1 + \kappa^2 \tau_2 \le 0, \tag{12} \]
then, for the system (1), the control law (10) ensures goal (2).
In particular, if the system (1) is one-dimensional (i.e., \( m = 1 \)), then the gain K of the control law (10) can be determined if the following inequalities are satisfied
\[ \begin{bmatrix} -K + \alpha + 0.5 \tau_1 & 0.5 \\ * & -\tau_2 \end{bmatrix} \preceq 0, \qquad -c \tau_1 + \kappa^2 \tau_2 \le 0. \tag{13} \]
Proof. 
Case 1. m = 1 :
The Lyapunov function (8) reduces to the form
\[ V = \tfrac{1}{2} \varepsilon^2. \tag{14} \]
Taking the time derivative of the Lyapunov function (14) along the solutions of (11), we get
\[ \dot{V} = \varepsilon \dot{\varepsilon} = \varepsilon \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( -K \varepsilon + \psi \right). \tag{15} \]
Let us define the set \( \Omega \) as follows
\[ \Omega := \{ \varepsilon \in \mathbb{R}^m : |\varepsilon| < \sqrt{2c} \}, \tag{16} \]
where c is a positive number.
For this case with \( m = 1 \), the set \( \Omega \) is the interval \( (-\sqrt{2c}, \sqrt{2c}) \). It is required that for any given positive number \( \alpha > 0 \): \( \dot{V} \le -2 \alpha V \) for all \( \varepsilon \notin \Omega \), i.e., \( \dot{V} < 0 \) for all \( \varepsilon \) with \( |\varepsilon| \ge \sqrt{2c} \). Since the positive scalar \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} > 0 \) does not affect the sign of the expression (15), these conditions can be rewritten as
\[ (-K + \alpha) \varepsilon^2 + \varepsilon \psi \le 0, \quad \forall \varepsilon, \psi : \; 0.5 \varepsilon^2 \ge c, \; \psi^2 \le \kappa^2. \tag{17} \]
Let us denote \( z = [\varepsilon, \psi]^{\top} \) and express (17) in matrix form:
\[ z^{\top} \begin{bmatrix} -K + \alpha & 0.5 \\ * & 0 \end{bmatrix} z \le 0, \qquad z^{\top} \begin{bmatrix} -0.5 & 0 \\ * & 0 \end{bmatrix} z \le -c, \qquad z^{\top} \begin{bmatrix} 0 & 0 \\ * & 1 \end{bmatrix} z \le \kappa^2. \tag{18} \]
By applying the S-procedure (Lemma 1), inequalities (18) are satisfied if condition (13) holds. Therefore, the system (11) is input-to-state stable. According to Theorem 1, the goal condition (2) is satisfied.
Case 2. m > 1 :
The time derivative of the Lyapunov function (8) along the solutions of (11) is calculated as follows
\[ \dot{V} = \varepsilon^{\top} \dot{\varepsilon} = \varepsilon^{\top} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( -K \varepsilon + \psi \right). \tag{19} \]
In this case, \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \) is a matrix and cannot be neglected in the sign analysis of \( \dot{V} \), as in the previous case. We require the condition \( \dot{V} \le -\alpha \, \varepsilon^{\top} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \varepsilon < 0 \) for all \( \varepsilon \notin \Omega \), with \( \Omega \) defined in (16). One can rewrite the above conditions as
\[ \varepsilon^{\top} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} (-K + \alpha I) \varepsilon + \varepsilon^{\top} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \psi \le 0, \quad \forall \varepsilon, \psi : \; 0.5 \, \varepsilon^{\top} \varepsilon \ge c, \; \psi^{\top} \psi \le \kappa^2. \tag{20} \]
Denoting \( z = [\varepsilon^{\top}, \psi^{\top}]^{\top} \), rewrite (20) in matrix form:
\[ z^{\top} \begin{bmatrix} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} (-K + \alpha I) & 0.5 \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \\ * & 0 \end{bmatrix} z \le 0, \qquad z^{\top} \begin{bmatrix} -0.5 I & 0 \\ * & 0 \end{bmatrix} z \le -c, \qquad z^{\top} \begin{bmatrix} 0 & 0 \\ * & I \end{bmatrix} z \le \kappa^2. \tag{21} \]
According to the S-procedure, the inequalities (21) are satisfied if the following inequalities hold
\[ \begin{bmatrix} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} (-K + \alpha I) + 0.5 \tau_1 I & 0.5 \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \\ * & -\tau_2 I \end{bmatrix} \preceq 0, \qquad -c \tau_1 + \kappa^2 \tau_2 \le 0. \tag{22} \]
The first inequality in (22) is equivalent to
\[ \begin{bmatrix} \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} & 0 \\ * & \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \end{bmatrix} \times \begin{bmatrix} (-K + \alpha I) + 0.5 \tau_1 \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} & 0.5 I \\ * & -\tau_2 \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \end{bmatrix} \preceq 0. \tag{23} \]
Since \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \succ 0 \), according to Proposition 1, inequality (23) holds if the following inequality holds for some positive number \( \beta > 0 \):
\[ \begin{bmatrix} (-K + \alpha I) + 0.5 \tau_1 \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} & 0.5 I \\ * & -\tau_2 \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \end{bmatrix} \preceq -\beta I \prec 0. \tag{24} \]
By condition (20), we care about the sign of the derivative of the Lyapunov function only for \( \varepsilon \) from the set \( \mathbb{R}^m \setminus \Omega = \{ \varepsilon \in \mathbb{R}^m : |\varepsilon| \ge \sqrt{2c} \} \). Moreover, for \( \varepsilon \in \mathbb{R}^m \setminus \Omega \), we obtain an interval-type uncertainty in (24) with \( 0 \prec \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \preceq \bar{\sigma} I \). Then, conditions (20) are satisfied if (12) is feasible for any \( \sigma \in (0, \bar{\sigma}] \) and any given numbers \( \alpha, \beta > 0 \). Furthermore, it is not hard to show that there always exist a diagonal matrix K and \( \tau_1, \tau_2 > 0 \) such that the inequalities (12) are feasible. Indeed, using the Schur complement lemma [28], we can rewrite (12) as follows:
\[ -\tau_2 \sigma + \beta < 0, \qquad -K + (0.5 \tau_1 \sigma + \alpha + \beta) I + \frac{1}{4(\tau_2 \sigma - \beta)} I \preceq 0, \qquad -c \tau_1 + \kappa^2 \tau_2 \le 0, \]
\[ \tau_1 > 0, \quad \tau_2 > 0, \quad \alpha > 0, \quad \beta > 0, \quad 0 < \sigma \le \bar{\sigma}. \tag{25} \]
It is evident that for a given \( c > 0 \) and any fixed \( \sigma, \alpha, \beta \), inequalities (25) always have solutions \( (K, \tau_1, \tau_2) \). As a result, the control law (10) with K satisfying (12) guarantees input-to-state stability of the closed-loop system (11), implying the boundedness of \( \varepsilon(t) \). According to Theorem 1, the goal (2) is achieved. Theorem 2 is proved. □
Remark 1. 
The LMI technique and the S-procedure allow us to design the controller for a MIMO system under the influence of unknown bounded disturbances based on an analysis of the input-to-state stability of the closed-loop system. Moreover, the problem of finding the control gain matrix of (10) reduces to the feasibility problems (12) and (13), which can be easily solved using popular solvers for semidefinite programming (such as SeDuMi [32], SDPT3 [33], CSDP [34], and others).
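For illustration, a minimal YALMIP script for the scalar feasibility problem (13) might look as follows; this is our own sketch rather than the script used by the authors, and the numerical values of \( \alpha \), c, and \( \kappa \) are placeholders.

```matlab
% Minimal YALMIP/SeDuMi sketch for the scalar feasibility problem (13).
% The numerical values below are placeholders for illustration only.
alpha = 0.01;  c = 100;  kappa = 7.8;

K    = sdpvar(1,1);                 % controller gain to be found
tau1 = sdpvar(1,1);
tau2 = sdpvar(1,1);

% First inequality of (13) as a symmetric 2x2 matrix constraint.
LMI = [-K + alpha + 0.5*tau1,  0.5;
        0.5,                  -tau2];

Constraints = [LMI <= 0, ...
               -c*tau1 + kappa^2*tau2 <= 0, ...
               tau1 >= 1e-6, tau2 >= 1e-6];   % small margins enforce positivity

optimize(Constraints, [], sdpsettings('solver', 'sedumi', 'verbose', 0));
disp([value(K), value(tau1), value(tau2)])
```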
Remark 2. 
It can be seen that the parameter c in (16) determines the radius of the open ball to which the trajectories of the system (11) are attracted (the radius of the ball is equal to \( \sqrt{2c} \)). If we decrease the value of c, then the radius of the ball decreases and, in turn, so does the limiting value of \( \varepsilon(t) \). Therefore, with a decrease in the limiting value of \( \varepsilon(t) \), the oscillation of the variable \( y(t) \) within the set \( \mathcal{Y} \), which is caused by the influence of the external disturbance \( f(t) \), also decreases.

6. Observer-Based Feedback Control

This section considers the system (1) when the state vector is not available and only the output vector is measurable. The feedback control law for this system is defined as follows:
\[ u(t) = -(C B)^{-1} \left[ K \varepsilon(t) + C A \hat{x}(t) \right], \tag{26} \]
where \( K \in \mathbb{R}^{m \times m} \) is the feedback control gain matrix to be designed, and \( \hat{x}(t) \in \mathbb{R}^n \) is the observer state, referred to below as the state estimate, which is generated by the classical Luenberger observer [35]
\[ \dot{\hat{x}}(t) = A \hat{x}(t) + B u(t) + K_o \left( y(t) - C \hat{x}(t) \right), \tag{27} \]
where \( K_o \in \mathbb{R}^{n \times m} \) represents the observer feedback gain matrix.
Let \( \tilde{x}(t) := x(t) - \hat{x}(t) \) be the observation error. Subtracting Equation (27) from the first equation of (1) yields
\[ \dot{\tilde{x}}(t) = (A - K_o C) \tilde{x}(t) + D f(t). \tag{28} \]
Due to the observability of the system, it is possible to choose a matrix \( K_o \) such that the matrix \( (A - K_o C) \) is Hurwitz. Additionally, due to the boundedness of the disturbance \( f(t) \), the observation error \( \tilde{x}(t) \) remains bounded. Assuming a zero initial observer state, it follows that \( \tilde{x}(0) = x(0) \). The solution of system (28) is given by
\[ \tilde{x}(t) = e^{(A - K_o C) t} x(0) + \int_0^t e^{(A - K_o C)(t - \tau)} D f(\tau) \, d\tau, \quad t \ge 0. \tag{29} \]
In this case, there exist constants \( M > 0 \) and \( \lambda > 0 \) such that \( \| e^{(A - K_o C) t} \| \le M e^{-\lambda t} \), \( t \ge 0 \), and the solution \( \tilde{x}(t) \) satisfies the upper bound
\[ |\tilde{x}(t)| \le \mu, \quad t \ge 0, \tag{30} \]
where \( \mu = M |x(0)| + \frac{M \|D\|}{\lambda} \bar{f} \).
Substituting control law (26) into (7) leads to
\[ \dot{\varepsilon} = \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( C A \tilde{x} - K \varepsilon + \psi \right). \tag{31} \]
Let \( \varsigma(t) := C A \tilde{x}(t) + \psi(t) \) with \( |\varsigma| \le \Delta := \|C A\| \mu + \kappa \). Equation (31) can be expressed as
\[ \dot{\varepsilon} = \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \left( -K \varepsilon + \varsigma \right). \tag{32} \]
Now we introduce the main result of this section.
Theorem 3. 
Suppose that the transformation (3) satisfies conditions (a)–(d) and that \( 0 \prec \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \preceq \bar{\sigma} I \) for any \( \varepsilon \in \mathbb{R}^m \) and \( t \ge 0 \). If for given numbers \( c > 0 \), \( \alpha > 0 \), and \( \beta > 0 \) there exist a matrix \( K \in \mathbb{R}^{m \times m} \) and positive coefficients \( \tau_1, \tau_2 \) such that for \( \sigma \in (0, \bar{\sigma}] \) the following linear matrix inequalities are feasible
\[ \begin{bmatrix} -K + (0.5 \tau_1 \sigma + \alpha + \beta) I & 0.5 I \\ * & (-\tau_2 \sigma + \beta) I \end{bmatrix} \preceq 0, \qquad -c \tau_1 + \Delta^2 \tau_2 \le 0, \tag{33} \]
then, for the system (1), the control law (26) based on the observer (27) guarantees the goal (2).
In particular, if the system (1) is one-dimensional (i.e., \( m = 1 \)), then the gain K of the control law (26) can be determined if the following inequalities are satisfied
\[ \begin{bmatrix} -K + \alpha + 0.5 \tau_1 & 0.5 \\ * & -\tau_2 \end{bmatrix} \preceq 0, \qquad -c \tau_1 + \Delta^2 \tau_2 \le 0. \tag{34} \]
Proof. 
The proof of Theorem 3 follows directly from the proof of Theorem 2. □
Remark 3. 
As seen from (28) and (30), the observation error \( \tilde{x}(t) \) does not converge to zero due to the influence of the disturbance \( f(t) \). Furthermore, this error can be large if the disturbance in the system is significant. However, in this paper, when designing the observer, it is not necessary to ensure that the state estimation error goes to zero; instead, the observer must be designed so that the estimation error is bounded for \( t \ge 0 \), and this bound must be estimated. With the control law (26), the observation error \( \tilde{x}(t) \) can be viewed as an external bounded disturbance affecting the system (31). Using the LMI and S-procedure techniques, the stability of the system (31) can be analyzed, and the control gain matrix of (26) can be determined, as shown in Section 5.
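As an illustration of this observer design step, the following MATLAB sketch (ours, not from the paper) computes \( K_o \) by pole placement and evaluates the bound \( \mu \) from (30) for the plant used later in Section 7.1. The function `place` is from the Control System Toolbox, and the constant M is an assumed placeholder.

```matlab
% Sketch of the observer design step (illustration only; M is an assumed constant).
A = [0 1; 2 -3];   C = [2 1];   D = [1; 1];     % plant data as in Section 7.1
f_bar = 1.6;       x0 = [1; 1];

Ko  = place(A', C', [-1 -4])';                  % makes (A - Ko*C) Hurwitz
Acl = A - Ko*C;

lambda = -max(real(eig(Acl)));                  % slowest decay rate of e^{Acl*t}
M      = 1.2;                                   % assumed bound: ||e^{Acl*t}|| <= M*e^{-lambda*t}

mu = M*norm(x0) + M*norm(D)*f_bar/lambda;       % upper bound (30) on |x_tilde(t)|
```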

7. Examples

7.1. Example 1: SISO Case

Consider the system described by Equation (1) with the following parameter values:
\[ A = \begin{bmatrix} 0 & 1 \\ 2 & -3 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 1 \end{bmatrix}, \quad f(t) = 0.1 + \sin(3t) + 0.5 \, \mathrm{sat}\{d(t)\}, \quad x(0) = [1 \;\; 1]^{\top}, \]
where \( \mathrm{sat}\{\cdot\} \) represents the saturation function and \( d(t) \) is a signal modeled in MATLAB/Simulink using the “Band-Limited White Noise” block with a noise power and a sampling time of 0.1. This results in \( \bar{f} = 1.6 \). The disturbance \( f(t) \) is depicted in Figure 1a.
In this example, the non-Hurwitz property of the matrix A in system (1) indicates system instability. The step response of the open-loop system (Figure 1b) shows that the output y ( t ) goes to infinity as t increases.
To design a closed-loop system, we select the function ε ( t ) as follows
\[ \varepsilon(t) = \ln \frac{ y(t) - \underline{g}(t) }{ \bar{g}(t) - y(t) }. \tag{35} \]
It is evident that this function satisfies conditions (a)–(d) of the transformation (3). The inverse function \( \Phi^{-1}(\varepsilon, t) \) is given by
\[ \Phi^{-1}(\varepsilon, t) = \frac{ \bar{g}(t) e^{\varepsilon} + \underline{g}(t) }{ e^{\varepsilon} + 1 }. \]
For all \( \varepsilon \in \mathbb{R} \) and \( t \ge 0 \), it holds that
\[ \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} = \frac{ e^{\varepsilon} \left( \bar{g}(t) - \underline{g}(t) \right) }{ \left( e^{\varepsilon} + 1 \right)^2 } > 0, \]
and
\[ \left| \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \right| = \left| \frac{ \dot{\bar{g}}(t) e^{\varepsilon} + \dot{\underline{g}}(t) }{ e^{\varepsilon} + 1 } \right| \le \max \left\{ \sup_{t \ge 0} |\dot{\bar{g}}(t)|, \; \sup_{t \ge 0} |\dot{\underline{g}}(t)| \right\} = \gamma. \]
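A quick numerical sanity check of this transformation pair can be carried out as follows (our own illustration; the bound functions below are the Example 1 bounds for \( t < 2\pi \)).

```matlab
% Check that Phi^{-1}(Phi(y,t),t) = y and that dPhi^{-1}/d(eps) is positive.
g_low = @(t) 3*cos(t) - 0.2;                                     % lower bound, t < 2*pi
g_up  = @(t) 3*cos(t) + 0.2;                                     % upper bound, t < 2*pi

Phi      = @(y,t)  log((y - g_low(t)) ./ (g_up(t) - y));         % transformation (35)
Phi_inv  = @(e,t) (g_up(t).*exp(e) + g_low(t)) ./ (exp(e) + 1);  % its inverse
dPhi_inv = @(e,t)  exp(e).*(g_up(t) - g_low(t)) ./ (exp(e) + 1).^2;

t = 0.3;  y = 3;                           % a point strictly inside (g_low(t), g_up(t))
disp(Phi_inv(Phi(y,t), t) - y)             % expected output: approximately 0
disp(dPhi_inv(Phi(y,t), t))                % expected output: a positive number
```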
The parameters of the restriction functions g ̲ ( t ) and g ¯ ( t ) of the set Y are given as follows
\[ \bar{g}(t) = \begin{cases} 3 \cos(t) + 0.2, & t < 2\pi, \\ \cos(t) + 2.2, & t \ge 2\pi, \end{cases} \qquad \underline{g}(t) = \begin{cases} 3 \cos(t) - 0.2, & t < 2\pi, \\ \cos(t) + 1.8, & t \ge 2\pi. \end{cases} \]
The control law based on the state feedback method (10) is defined as
\[ u(t) = -(C B)^{-1} \left[ K \ln \frac{ y(t) - \underline{g}(t) }{ \bar{g}(t) - y(t) } + C A x(t) \right]. \]
Similarly, for the observer-based feedback method (26), the control law is defined as
\[ u(t) = -(C B)^{-1} \left[ K \ln \frac{ y(t) - \underline{g}(t) }{ \bar{g}(t) - y(t) } + C A \hat{x}(t) \right]. \]
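For completeness, a minimal MATLAB sketch of the resulting closed loop is given below; it is our own illustration (the function name and the way the disturbance and bounds are passed in are our choices, and the Simulink model used by the authors may be organized differently). It implements the plant (1), the observer (27), and the observer-based control law above.

```matlab
% Right-hand side of the closed loop for Example 1 (illustration only).
% z = [x; x_hat]; A, B, C, D, K, Ko and the handles f(t), g_low(t), g_up(t) are assumed given.
function dz = example1_rhs(t, z, A, B, C, D, K, Ko, f, g_low, g_up)
    n     = size(A, 1);
    x     = z(1:n);                                   % plant state (unmeasured)
    xh    = z(n+1:2*n);                               % observer state
    y     = C*x;                                      % measured output, must stay inside (g_low, g_up)
    eps_y = log((y - g_low(t)) / (g_up(t) - y));      % transformation (35)
    u     = -(C*B) \ (K*eps_y + C*A*xh);              % observer-based control law (26)
    dx    = A*x  + B*u + D*f(t);                      % plant (1)
    dxh   = A*xh + B*u + Ko*(y - C*xh);               % Luenberger observer (27)
    dz    = [dx; dxh];
end
```

Replacing \( \hat{x} \) by x in the control line yields the state feedback variant (10); the trajectories can then be obtained, for instance, with `ode45`.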
By choosing the eigenvalues of the observer matrix \( (A - K_o C) \) in (28) as \( \lambda_1 = -1 \) and \( \lambda_2 = -4 \), the matrix \( K_o \) is determined as \( K_o = \mathrm{col}\{0.5, 1\} \). Consequently, the bound of \( |\tilde{x}(t)| \) can be computed as \( \mu = 4.34 \). The values of \( \gamma \) and \( \kappa \) are calculated as \( \gamma = 3 \) and \( \kappa = 7.8 \), respectively, leading to \( \Delta = 17.51 \). To solve the inequalities (13) for the state feedback method and (34) for the observer-based method, the YALMIP package [36] is employed with the SeDuMi solver. The resulting solutions \( (K, \tau_1, \tau_2) \) for \( c = 100 \) and \( c = 0.1 \) are presented in Table 1.
The transient responses of y ( t ) , ε ( t ) and u ( t ) with an initial condition of x ( 0 ) = c o l { 1 , 1 } are illustrated in Figure 2, Figure 3 and Figure 4, respectively. Figure 5a shows the plot illustrating the observation errors. In Figure 2, it becomes evident that the output signal y ( t ) remains within the given set at any time. Furthermore, y ( t ) never reaches the boundaries of the given set Y . This observation is grounded in the choice of the function ε ( t ) in (35). Specifically, if the output signal y ( t ) reaches the boundaries g ̲ ( t ) or g ¯ ( t ) of the set Y , then the corresponding signal ε ( t ) would go to infinity. This divergent behavior contradicts the boundedness of ε ( t ) as obtained in Figure 3.
Furthermore, as depicted in Figure 3, the signal ε ( t ) is stabilized within predefined ball regions. Specifically, at c = 100 (Figure 3a), ε ( t ) is stabilized within a ball with a radius r < 14.14 (approximately r 1.5 ), whereas at c = 0.1 (Figure 3b), it is stabilized within a ball with a radius r < 0.44 (approximately r 0.2 ). Upon comparing Figure 2a,b, as well as Figure 3a,b, it is important to highlight that decreasing the value of the parameter c leads to enhanced suppression of disturbance effects.
In Figure 4, the oscillations observed in the control signal can be attributed to the influence of the disturbance f ( t ) within the system (depicted in Figure 1a). Furthermore, as mentioned in Remark 3, owing to the presence of f ( t ) , the observation errors illustrated in Figure 5a do not converge to zero. Nevertheless, the observer-based control method effectively maintains the output signal within the given set. As a result, both proposed control methods demonstrate a high level of performance in stabilizing the output of the system (1).
Remark 4. 
In the preceding example, we assumed that the initial value of the output signal belongs to the given set. However, if the initial value of the output signal falls outside this set, the proposed method becomes inapplicable. This limitation arises from the requirement, established in transformation (3), that the output signal be confined to the predefined set. To address this limitation, we augment the performance functions with a rapidly decaying exponential term. With this modification, the newly defined bounds encompass the initial conditions as well.
Figure 5b illustrates the transient response of \( y(t) \) for the initial condition \( x(0) = \mathrm{col}\{5/3, 2/3\} \), where \( y(0) = 4 \), a value that lies outside the original set \( \mathcal{Y} \). To rectify this, we add a term \( g_0(t) = 2 e^{-100 t} \) to the function \( \bar{g}(t) \). This modification ensures that the initial condition \( y(0) \) is bounded from above by the updated performance function \( \bar{g}(t) + g_0(t) \).

7.2. Example 2: MIMO Case

Let us demonstrate the performance of the control for the system (1) with two inputs and two outputs and the following parameters:
\[ A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0.1 & 2 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 \\ 1 & 1 \\ 1 & 3 \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \end{bmatrix}, \]
and \( f(t) \) is defined as in Section 7.1. The given system is unstable since the matrix A is non-Hurwitz. Figure 6 illustrates the step response of the open-loop system, demonstrating that the output signals \( y_1(t) \) (Figure 6a) and \( y_2(t) \) (Figure 6b) go to infinity as time increases.
To design a closed-loop system, we formulate the function \( \varepsilon(t) = \mathrm{col}\{\varepsilon_1(t), \varepsilon_2(t)\} \) with \( \varepsilon_i(t) \) as defined in (35):
\[ \varepsilon_i(t) = \ln \frac{ y_i(t) - \underline{g}_i(t) }{ \bar{g}_i(t) - y_i(t) }. \]
Then, we derive the inverse functions \( \Phi_i^{-1}(\varepsilon_i, t) \) as follows:
\[ \Phi_i^{-1}(\varepsilon_i, t) = \frac{ \bar{g}_i(t) e^{\varepsilon_i} + \underline{g}_i(t) }{ e^{\varepsilon_i} + 1 }. \]
For all \( \varepsilon_i \in \mathbb{R} \) and \( t \ge 0 \), the following results are obtained
\[ \frac{\partial \Phi_i^{-1}(\varepsilon_i, t)}{\partial \varepsilon_i} = \frac{ e^{\varepsilon_i} \left( \bar{g}_i(t) - \underline{g}_i(t) \right) }{ \left( e^{\varepsilon_i} + 1 \right)^2 } > 0,
\]
and
\[ \left| \frac{\partial \Phi_i^{-1}(\varepsilon_i, t)}{\partial t} \right| = \left| \frac{ \dot{\bar{g}}_i(t) e^{\varepsilon_i} + \dot{\underline{g}}_i(t) }{ e^{\varepsilon_i} + 1 } \right| \le \max \left\{ \sup_{t \ge 0} |\dot{\bar{g}}_i(t)|, \; \sup_{t \ge 0} |\dot{\underline{g}}_i(t)| \right\} = \gamma_i. \]
Consequently, the matrix \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} = \mathrm{diag} \left\{ \frac{ e^{\varepsilon_i} \left( \bar{g}_i(t) - \underline{g}_i(t) \right) }{ \left( e^{\varepsilon_i} + 1 \right)^2 }, \; i = 1, 2 \right\} \) is positive definite. Moreover, based on the last inequality, the upper bound of the function \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial t} \) can be estimated as \( \gamma = \sqrt{2} \max_i \gamma_i \).
For \( \varepsilon \in \mathbb{R}^m \setminus \Omega \) we have \( 0 \prec \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \preceq \bar{\sigma} I \), where
\[ \bar{\sigma} = \frac{1}{4} \max_i \sup_{t \ge 0} \left( \bar{g}_i(t) - \underline{g}_i(t) \right). \]
The parameters governing the restriction functions \( \underline{g}_i(t) \) and \( \bar{g}_i(t) \) are specified as follows:
\[ \bar{g}_1(t) = 3.52 e^{-0.5 t} + 0.1, \quad \underline{g}_1(t) = 1.62 e^{-0.5 t} - 0.1, \quad \bar{g}_2(t) = 1.62 \cos(0.5 t) + 1.52, \quad \underline{g}_2(t) = \cos(0.5 t) + 0.8. \]
Then, the value of \( \bar{\sigma} \) is calculated as 0.54. By selecting the eigenvalues of the observer matrix \( (A - K_o C) \) to be \( \lambda_1 = -1 \), \( \lambda_2 = -4 \), \( \lambda_3 = -5 \), we determine that \( K_o = \begin{bmatrix} 1.05 & 0.62 \\ 2.33 & 3.46 \\ 0.31 & 1.23 \end{bmatrix} \) and \( \Delta = 34.50 \). Utilizing the YALMIP package along with the SeDuMi solver for \( \beta = 0.00001 \) and \( \alpha = 0.01 \), we find solutions \( (K, \tau_1, \tau_2) \) that satisfy the inequalities (12) and (33) at a specific value \( \sigma \in (0, 0.54] \) for \( c = 0.1 \). The detailed outcomes are presented in Table 2.
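As in the SISO case, the feasibility problem (12) at a fixed \( \sigma \) can be posed in a few lines of YALMIP code. The sketch below is our own illustration: the value of \( \kappa \) is a placeholder, and \( \Delta \) would replace it when solving (33) for the observer-based case.

```matlab
% YALMIP/SeDuMi sketch for the MIMO feasibility problem (12) at a fixed sigma
% (illustration only; kappa is a placeholder, use Delta instead when solving (33)).
m = 2;  sigma = 0.54;  alpha = 0.01;  beta = 1e-5;  c = 0.1;  kappa = 10;

K    = diag(sdpvar(m, 1));           % diagonal gain matrix, as in Table 2
tau1 = sdpvar(1, 1);
tau2 = sdpvar(1, 1);

LMI = [-K + (0.5*tau1*sigma + alpha + beta)*eye(m),  0.5*eye(m);
        0.5*eye(m),                                 (-tau2*sigma + beta)*eye(m)];

Constraints = [LMI <= 0, ...
               -c*tau1 + kappa^2*tau2 <= 0, ...
               tau1 >= 1e-6, tau2 >= 1e-6];

optimize(Constraints, [], sdpsettings('solver', 'sedumi', 'verbose', 0));
disp(value(K)), disp([value(tau1), value(tau2)])
```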
Similar to Section 7.1, it becomes evident from Figure 7 that the control laws (10) and (26) ensure that the output signals \( y_1(t) \) (Figure 7a) and \( y_2(t) \) (Figure 7b) of the MIMO system (1) adhere to the prescribed sets at all times for both choices \( \sigma = 0.00001 \) and \( \sigma = 0.54 \). At the parameter value \( c = 0.1 \), the external disturbance \( f(t) \) is effectively suppressed. The behavior of the control signals \( u_1(t) \) and \( u_2(t) \) for \( \sigma = 0.54 \) is illustrated in Figure 8a and Figure 8b, respectively. Figure 9 further illustrates the observation errors. As demonstrated in Section 7.1, achieving convergence of the observation errors to zero is not necessary when designing the state observer.

8. Conclusions

This paper proposed a general procedure for designing nonlinear control laws for linear systems that guarantee that the output signals remain in given sets, based on the coordinate transformation of [25]. Selecting the function \( \varepsilon(t) \) so that \( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \) is diagonal and using the S-procedure lemma enabled us to analyze the ISS stability of the closed-loop MIMO system containing the nonstationary matrix \( \left( \frac{\partial \Phi^{-1}(\varepsilon, t)}{\partial \varepsilon} \right)^{-1} \). Two control laws were proposed depending on the measurability of the state vector, one based on state feedback and the other on observer feedback. Furthermore, the parameters of the control laws were computed using LMIs, extending the applicability of the method of [25] in practice. These control laws guarantee the boundedness of the auxiliary variable \( \varepsilon(t) \), which keeps the output variable \( y(t) \) within the given set. By reducing the radius of the attractive set of \( \varepsilon(t) \) via the parameter c, we reduced the effect of disturbances on the output \( y(t) \). Moreover, when designing the observer for the system (1) with unknown disturbances, it is not required to ensure the convergence of the observation errors to zero; instead, the observer should be designed to guarantee the boundedness of the estimation error. As the simulation results showed, both proposed control laws demonstrated a high level of performance in stabilizing the output of the system (1) within the given set.

Author Contributions

Conceptualization, B.H.N. and I.B.F.; methodology, B.H.N. and I.B.F.; validation, B.H.N. and I.B.F.; writing—original draft, B.H.N.; writing—review and editing, B.H.N. and I.B.F.; supervision, I.B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out at the IPME RAS with partial support from Goszadanie no. 121112500298-6 (EGISU NIOKTR) in Section 3, Section 4 and Section 5, and the Ministry of Science and Higher Education of the Russian Federation (project No. 075-15-2021-573) in Section 6 and Section 7.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boldea, I. Control of Electric Generators: A Review. IECON Proc. Industrial Electron. Conf. 2003, 1, 972–980.
  2. Furtat, I.; Nekhoroshikh, A.; Gushchin, P. Synchronization of multi-machine power systems under disturbances and measurement errors. Int. J. Adapt. Control Signal Process. 2022, 36, 1272–1284.
  3. Pavlov, G.; Merkurev, G. Energy Systems Automation; Papirus Publisher: St. Petersburg, Russia, 2001. (In Russian)
  4. Kundur, P. Power System Stability and Control; McGraw-Hill Education: New York, NY, USA, 1994; p. 1176.
  5. Landera, Y.G.; Zevallos, O.C.; Neves, F.A.S.; Neto, R.C.; Prada, R.B. Control of PV Systems for Multimachine Power System Stability Improvement. IEEE Access 2022, 10, 45061–45072.
  6. Feydi, A.; Elloumi, S.; Braiek, N.B. Robustness of Decentralized Optimal Control Strategy Applied On Two Interconnected Generators System With Uncertainties. In Proceedings of the 2023 IEEE International Conference on Advanced Systems and Emergent Technologies, Hammamet, Tunisia, 29 April–1 May 2023; pp. 1–6.
  7. Tan, H.; Cong, L. Modeling and Control Design for Distillation Columns Based on the Equilibrium Theory. Processes 2023, 11, 607.
  8. Verevkin, A.; Kiriushin, O. Control of the formation pressure system using finite-state-machine models. Oil Gas Territ. 2008, 10, 14–19. (In Russian)
  9. Bouyahiaoui, C.; Grigoriev, L.I.; Laaouad, F.; Khelassi, A. Optimal fuzzy control to reduce energy consumption in distillation columns. Autom. Remote Control 2005, 66, 200–208.
  10. Sunori, S.K.; Joshi, K.A.; Mittal, A.; Nainwal, D.; Khan, F.; Juneja, P. Improvement in Controller Performance for Distillation Column Using IMC Technique. In Proceedings of the 2023 International Conference for Advancement in Technology (ICONAT), Goa, India, 24–26 January 2023; pp. 1–5.
  11. Ogata, K. Modern Control Engineering; Prentice Hall: Upper Saddle River, NJ, USA, 2010; Volume 5.
  12. Lewis, F.L.; Vrabie, D.; Syrmos, V.L. Optimal Control; John Wiley & Sons: Hoboken, NJ, USA, 2012.
  13. Filabadi, M.M.; Hashemi, E. Distributed Robust Control Framework for Adaptive Cruise Control Systems. In Proceedings of the 2023 American Control Conference (ACC), San Diego, CA, USA, 31 May–2 June 2023; pp. 3215–3220.
  14. Wang, T.; Sun, Z.; Ke, Y.; Li, C.; Hu, J. Two-Step Adaptive Control for Planar Type Docking of Autonomous Underwater Vehicle. Mathematics 2023, 11, 3467.
  15. Hassan, F.; Zolotas, A.; Halikias, G. New Insights on Robust Control of Tilting Trains with Combined Uncertainty and Performance Constraints. Mathematics 2023, 11, 3057.
  16. Ioannou, P.A.; Sun, J. Robust Adaptive Control; PTR Prentice-Hall: Upper Saddle River, NJ, USA, 1996; Volume 1.
  17. Krstic, M.; Kokotovic, P.V.; Kanellakopoulos, I. Nonlinear and Adaptive Control Design; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1995.
  18. Narendra, K.S.; Annaswamy, A.M. Stable Adaptive Systems; Courier Corporation: Chelmsford, MA, USA, 2012.
  19. Tao, G. Adaptive Control Design and Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2003; Volume 37.
  20. Xu, J.; Lin, N.; Chi, R. Improved High-Order Model Free Adaptive Control. In Proceedings of the 2021 IEEE 10th Data Driven Control and Learning Systems Conference (DDCLS), Suzhou, China, 14–16 May 2021; pp. 704–708.
  21. Khlebnikov, M.V.; Polyak, B.T.; Kuntsevich, V.M. Optimization of linear systems subject to bounded exogenous disturbances: The invariant ellipsoid technique. Autom. Remote Control 2011, 72, 2227–2275.
  22. Miller, D.E.; Davison, E.J. An Adaptive Controller Which Provides an Arbitrarily Good Transient and Steady-State Response. IEEE Trans. Autom. Control 1991, 36, 68–81.
  23. Bechlioulis, C.P.; Rovithakis, G.A. Robust Adaptive Control of Feedback Linearizable MIMO Nonlinear Systems With Prescribed Performance. IEEE Trans. Autom. Control 2008, 53, 2090–2099.
  24. Furtat, I.; Gushchin, P. Control of Dynamical Systems with Given Restrictions on Output Signal with Application to Linear Systems. IFAC Pap. 2020, 53, 6384–6389.
  25. Furtat, I.B.; Gushchin, P.A. Control of Dynamical Plants with a Guarantee for the Controlled Signal to Stay in a Given Set. Autom. Remote Control 2021, 82, 654–669.
  26. Polyak, B.T.; Khlebnikov, M.V.; Shcherbakov, P.S. Linear Matrix Inequalities in Control Systems with Uncertainty. Autom. Remote Control 2021, 82, 1–40.
  27. Poznyak, A.; Polyakov, A.; Azhmyakov, V. Attractive Ellipsoids in Robust Control; Springer International Publishing: Berlin/Heidelberg, Germany, 2014.
  28. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994.
  29. Fradkov, A.L.; Miroshnik, I.V.; Nikiforov, V.O. Nonlinear and Adaptive Control of Complex Systems; Springer: Berlin/Heidelberg, Germany, 1999.
  30. Isidori, A. Nonlinear Control Systems II; Springer: Berlin/Heidelberg, Germany, 2013.
  31. Khalil, H.K. Nonlinear Control; Pearson: New York, NY, USA, 2015; Volume 406.
  32. Sturm, J.F. Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones. Optim. Methods Softw. 1999, 11, 625–653.
  33. Toh, K.C.; Todd, M.J.; Tütüncü, R.H. SDPT3—A MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw. 1999, 11, 545–581.
  34. Borchers, B. CSDP, a C library for semidefinite programming. Optim. Methods Softw. 1999, 11, 613–623.
  35. Luenberger, D. An introduction to observers. IEEE Trans. Autom. Control 1971, 16, 596–602.
  36. Löfberg, J. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the IEEE International Symposium on Computer-Aided Control System Design, Taipei, Taiwan, 2–4 September 2004; pp. 284–289.
Figure 1. (a) The disturbance \( f(t) \). (b) The open-loop step response \( y(t) \).
Figure 2. The transient of the output \( y(t) \) in the closed-loop system under: (a) \( c = 100 \). (b) \( c = 0.1 \). The yellow and black dashed lines represent the performance functions \( \bar{g}(t) \) and \( \underline{g}(t) \). The solid blue line represents the output \( y_{st}(t) \) when the state feedback controller (10) is applied, and the red dashed line represents the output \( y_{ob}(t) \) when the observer-based controller (26) is applied.
Figure 3. The graphics of the auxiliary variable \( \varepsilon(t) \) under: (a) \( c = 100 \). (b) \( c = 0.1 \). The solid blue line represents \( \varepsilon(t) \) when the state feedback controller (10) is applied, and the red dashed line represents \( \varepsilon(t) \) when the observer-based controller (26) is applied.
Figure 4. The control signal \( u(t) \) under: (a) \( c = 100 \). (b) \( c = 0.1 \). The solid blue line represents the control signal \( u_{st}(t) \) when the state feedback controller (10) is applied, and the red dashed line represents the control signal \( u_{ob}(t) \) when the observer-based controller (26) is applied.
Figure 5. (a) The observation errors \( \tilde{x}(t) \). (b) The transient of the output \( y(t) \) in the closed-loop system under \( c = 0.1 \) for \( x(0) = \mathrm{col}\{5/3, 2/3\} \).
Figure 6. The open-loop step response: (a) \( y_1(t) \). (b) \( y_2(t) \).
Figure 7. The transients of the outputs \( y_1(t) \), \( y_2(t) \) in the closed-loop system under \( c = 0.1 \) with \( \sigma = 0.00001 \) and \( \sigma = 0.54 \): (a) \( y_1(t) \). (b) \( y_2(t) \). The yellow and black dashed lines represent the performance functions \( \bar{g}_i(t) \) and \( \underline{g}_i(t) \). The red and orange solid lines represent the outputs \( y_{1st}(t) \), \( y_{2st}(t) \) when the state feedback controller (10) is applied, and the purple and green solid lines represent the outputs \( y_{1ob}(t) \), \( y_{2ob}(t) \) when the observer-based controller (26) is applied.
Figure 8. The control signals \( u_1(t) \) and \( u_2(t) \) for \( \sigma = 0.54 \): (a) \( u_1(t) \). (b) \( u_2(t) \). The solid blue line represents the control signal \( u_{st}(t) \) when the state feedback controller (10) is applied, and the red dashed line represents the control signal \( u_{ob}(t) \) when the observer-based controller (26) is applied.
Figure 9. The observation errors \( \tilde{x}(t) \).
Table 1. The solutions \( (K, \tau_1, \tau_2) \) of the inequalities (13) and (34) for various values of the parameter c.

c | State Feedback Method | Observer-Based Method
100 | (6.31, 2.68, 4.34) | (8.35, 6.69, 2.17)
0.1 | (33.18, 42.66, 0.05) | (65.73, 85.65, 0.02)
Table 2. The solutions \( (K, \tau_1, \tau_2) \) of the inequalities (12) and (33) with \( c = 0.1 \).

σ | State Feedback Method | Observer-Based Method
0.00001 | (diag{3535.2, 3535.2}, 17,431, 9.21) | (diag{10,092, 10,092}, 49,458, 3.22)
0.54 | (diag{40.63, 40.63}, 104.5, 0.053) | (diag{121.14, 121.14}, 307.43, 0.02)