Abstract
In this paper, the problem of random disturbance attenuation for linear time-invariant continuous systems affected by random disturbances with bounded σ-entropy is studied. The σ-entropy norm defines a performance index of the system on the set of the aforementioned input signals. Two problems are considered. The first is a state-space σ-entropy analysis of linear systems, and the second is an optimal control design using the σ-entropy norm as the optimization objective. The state-space solution to the σ-entropy analysis problem is derived from the representation of the σ-entropy norm in the frequency domain. The formulae for computing the σ-entropy norm in the state space are presented in the form of coupled matrix equations: one algebraic Riccati equation, one nonlinear equation involving a log-determinant function, and two Lyapunov equations. The optimal control law is obtained using game theory and a saddle-point condition of optimality. The optimal state-feedback control, which minimizes the σ-entropy norm of the closed-loop system, is found from the solution of two algebraic Riccati equations, one Lyapunov equation, and the log-determinant equation.
Keywords:
linear systems; spectral entropy; optimal control; robust control; algebraic Riccati equation
MSC:
93C05
1. Introduction
The main goals of control design are to maintain the stability of the closed-loop system, to assure the desired robustness, and to fulfill quality criteria such as rejection of external input disturbances of a predefined class. In linear systems, the ℋ2 and ℋ∞ norms have become the most popular quality criteria for disturbance attenuation problems. Both norms have a clear physical interpretation: the ℋ2 norm indicates the dispersion of the output in the presence of white Gaussian noise, while the ℋ∞ norm of the system stands for the maximum error energy gain for input disturbances with bounded energy. These criteria have significant drawbacks [1]. Thus, systems closed by ℋ2 (LQG) controllers lack robustness [2]. Alternatively, ℋ∞ controllers may lead to excessive energy consumption if the external disturbances are slightly correlated noises. These facts gave researchers the idea of finding compromises between the ℋ2 and ℋ∞ optimization approaches [3,4,5,6].
In the late 1980s, so-called minimum entropy control appeared. The key idea of the minimum entropy control approach is to find a solution to the LQG control problem with additional constraints on the system's entropy. The entropy function suggested in [7] is an adaptation of the method of Arov and Krein [8]. Minimum entropy control theory has become a simple tradeoff between the (upper bounds on the) ℋ∞ and LQG objectives and has found a number of applications [7,9,10,11]. The ℋ∞ objectives reflect both robust stability and performance requirements, where the noise is taken to be of bounded energy. One can refer to [12] for more details.
The problem of minimax LQG control, solved in [6], involves the relative entropy function to describe possible uncertainties in the plant model. The idea of minimax LQG control is to find a controller that minimizes a linear quadratic functional with respect to the worst uncertainties in the entropy sense.
Anisotropy-based control theory [13,14,15], which is closely related to the current research, utilizes relative entropy, i.e., the Kullback–Leibler information divergence between the probability distribution functions of an ergodic signal and white Gaussian noise. This makes it possible to consider the ℋ2 and ℋ∞ control theories as limiting cases of anisotropy-based control theory (see [1] for details). Unfortunately, anisotropy-based control theory operates only in discrete time.
Unlike the anisotropy-based approach, the spectral entropy (σ-entropy) method presented in [16,17,18] allows operation with a wider set of signals, including non-stationary and fading stochastic signals, both in discrete and continuous time. Similar to the anisotropic norm, the σ-entropy norm lies between the ℋ2 and ℋ∞ norms of the system. The axiomatics of the anisotropy-based and discrete-time spectral entropy approaches are discussed and compared in [17], showing that spectral entropy analysis has the same solution as anisotropy-based analysis for the same classes of input signals.
This paper deals with continuous-time σ-entropy analysis and control for linear systems in the state space. The problem of spectral entropy analysis for continuous-time linear systems in the frequency domain was solved in [16], where it was noted that the convergence of the entropy integral in the expression for σ-entropy requires a weighting function with predefined properties. In the current research, this function is chosen in a form that depends on a single positive constant (a desired frequency; see Section 5), so that the σ-entropy functional is dimensionless.
The main contributions of the paper are the following:
- state-space formulae for σ-entropy norm computation;
- optimal spectral entropy state-feedback control design for a linear time-invariant continuous system affected by a random input signal with bounded σ-entropy.
The rest of the paper is organized as follows. Section 2 provides the basic definitions of spectral entropy control theory: the σ-entropy of a signal, the σ-entropy norm of a system, and its computation in the frequency domain. Spectral entropy analysis in the state space is presented in Section 3. The problem of optimal σ-entropy state-feedback control and its solution are given in Section 4. A numerical example is given in Section 5. Concluding remarks and future problems are discussed in Section 6 and Section 7.
2. Theoretical Background
In this section, the basic definitions of σ-entropy analysis and the frequency-domain results are recalled.
Following [16], consider the following linear continuous-time stationary system with zero initial conditions, and define the basic concepts of σ-entropy control theory for this system:
where the state-space matrices are constant, real, and of appropriate dimensions; the signals entering (1) are the system's state, an observable output, and a random input signal, respectively. Matrix A is assumed to be Hurwitz, while the system is controllable and observable.
The input signal satisfies the following conditions:
and either the L2 norm
or the power norm of the input signal
is finite. Here, E denotes the mathematical expectation, and |w| is the Euclidean norm of the vector w.
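For reference, the two signal norms mentioned above are conventionally defined as follows (a sketch in standard notation, where E denotes the mathematical expectation and |w(t)| the Euclidean norm; the exact notation of the original formulas may differ):

```latex
% Standard forms of the two signal norms (notation chosen for illustration):
\|w\|_{2}^{2} \;=\; \int_{-\infty}^{+\infty} \mathbf{E}\,|w(t)|^{2}\,dt ,
\qquad
\|w\|_{\mathcal{P}}^{2} \;=\; \lim_{T\to\infty}\,\frac{1}{2T}\int_{-T}^{T} \mathbf{E}\,|w(t)|^{2}\,dt .
```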
Following [16], unify the descriptions of the input signals (3) and (4) by introducing the norm of the signal in the following form:
where the linear operator transforms the Euclidean norm of the vector into either the L2 norm or the power norm of the stochastic signal, acting in the following manner:
This operator allows one to define a correlation convolution of the input signal in the following form:
The Fourier transform of the correlation convolution leads to a Hermitian positive definite spectral density matrix of the input signal:
where * stands for Hermitian conjugation. Using the inverse Fourier transform, the correlation convolution can be expressed through the spectral density matrix:
As the norm of the input signal equals
it can be related to the spectral density matrix as
Similarly, the norm of the output signal equals
where the spectral density of the output can be expressed via the transfer matrix of system (1) and the spectral density of the input signal as follows [19]:
Consequently, the norm of the output is
where .
Define the gain of system (1) with respect to an input signal that has a finite norm (5) as the ratio of the norm of the output to the norm of the input:
Following [16], introduce σ-entropy as an integral characteristic of the input signal. Let the spectral density be a rational function; then its integrability leads to the following asymptotics:
with a Hermitian matrix in the leading term. However, such a representation gives a non-integrable function, as
To remove this divergence, introduce σ-entropy as
where the weighting function is chosen to provide integrable asymptotics of the integrand. For the sake of definiteness, define it in the form
Finally, define the σ-entropy norm of system (1) as the maximal value of the gain (7) over all inputs whose σ-entropy (8) is not greater than s:
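In compact form, the chain of definitions above can be summarized as follows (a sketch; the symbols S_W for the input spectral density, 𝒮(W) for the σ-entropy of the input, and ‖F‖ₛ for the σ-entropy norm are chosen here for illustration and may differ from the paper's notation):

```latex
% Norms of input and output via spectral densities, and the sigma-entropy norm:
\|w\|^{2} = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\mathrm{tr}\,S_{W}(\omega)\,d\omega ,
\qquad
\|z\|^{2} = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\mathrm{tr}\bigl(F(i\omega)\,S_{W}(\omega)\,F^{*}(i\omega)\bigr)\,d\omega ,
\qquad
\|F\|_{s} = \sup_{W:\;\mathcal{S}(W)\le s}\;\frac{\|z\|}{\|w\|}.
```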
The following theorem shows how the σ-entropy norm is calculated in the frequency domain and identifies the worst-case spectral density of the input signal (for more information, see [16]).
Theorem 1.
The σ-entropy norm of system (1), affected by a stochastic continuous signal with a finite norm (5), is calculated according to the formula
where
is the worst-case spectral density of the input signal, for which the σ-entropy norm of the system is realized, and the parameter is the unique solution of the equation
Theorem 1 provides the formulae of σ-entropy analysis for linear time-invariant systems in the frequency domain. It shows that the σ-entropy norm of the system is defined as a ratio of weighted norms of the spectral densities of the output and of the worst-case input signal under constraint (12), which defines the set of all possible input signals with bounded spectral entropy.
In the next section, the conditions for σ-entropy norm computation with the chosen weighting function are derived in terms of matrix equations.
3. Spectral Entropy Analysis in the State Space
In this section, a state-space approach to σ-entropy analysis is developed. The result is based on Theorem 1 given in the previous section, namely on the calculation of the integrals (11)–(12) in explicit form. Before calculating the σ-entropy norm of a linear stochastic continuous-time system in the state space and proving the corresponding theorem, we formulate the all-pass system conditions and two original lemmas with their proofs.
Lemma 1.
Lemma 2.
Let be a transfer matrix of the system G. Then
where matrix Γ is the solution of the Lyapunov equation
Proof.
Introduce the transfer matrix
Multiply the left-hand side of Equation (13) by
and represent the integral (16) in the form
where
According to (19), the transfer matrix may be represented as
Consequently, the ℋ2 norm of the system H is given by [21]
where the observability gramian is the solution of the Lyapunov equation
Hence, the integral (16) equals
and the observability gramian satisfies the following Lyapunov equation:
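The computation in Lemma 2 is, in essence, an ℋ2-type frequency integral evaluated through an observability gramian. A minimal numerical sketch of this standard step is given below (hypothetical matrices; SciPy's Lyapunov solver assumed available):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable realization (A Hurwitz), for illustration only.
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Observability gramian P solves A' P + P A + C' C = 0.
P = solve_continuous_lyapunov(A.T, -C.T @ C)

# (1 / 2 pi) * integral of tr( G*(i w) G(i w) ) d w  =  tr( B' P B ),
# i.e., the squared H2 norm of G(s) = C (sI - A)^{-1} B.
h2_squared = float(np.trace(B.T @ P @ B))
print(h2_squared)
```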
Lemma 3.
Let be a square transfer matrix. Then
Proof.
Using expression (15), rewrite the integral on the left-hand side of the statement in the following form:
To obtain this representation, the definition (18) and the fact that
were used.
As the conditions of the Poisson integral theorem (see [22]) hold, applying it leads to
According to (20),
Hence,
which completes the proof. □
Theorem 2.
In the state space, the σ-entropy norm of system (1), which is affected by an input signal with a finite norm, is calculated according to the formula
where the matrices and the scalar q are the unique solution to the following system of equations:
Proof.
Recall that, according to Theorem 1, the σ-entropy norm is realized on the worst-case spectral density, which equals
The condition means that
The positive definiteness of the matrix guarantees the existence of the matrix factorization in the form
Hence, Equation (28) can be transformed to the form
where .
Denote ; then, according to the last equation, , i.e., it is an all-pass system.
Consider matrix G in the form
where L and are matrices of appropriate dimensions to be determined. The inverse matrix equals
Let the following two systems be connected in series:
i.e., the output of the first system is the input of the second one. Then,
Left-multiply the first equation by the matrix
and check that the transfer matrix of this system equals the identity matrix:
So, it is natural to consider this matrix as the transfer matrix of a system composed of two parallel subsystems, and this matrix equals [21]:
In the state space, this system has the following form:
As the dynamical parts of these subsystems are identical, the last system may be reduced to
Consequently, the state-space realization of matrix takes the following form:
Applying Lemma 1 to system , we get
From Equations (32) and (33), it follows that
These two equations and Formula (31) form subsystem (22)–(24), and matrix G takes the following form:
As the worst-case spectral density admits the factorization above, rewrite the integral in the form
Applying Lemma 2 and taking into account that the transfer matrix of the corresponding system equals
we obtain that the numerator of the σ-entropy norm takes the following form:
A similar equation is obtained for the denominator of the σ-entropy norm (11):
with matrix Q being a solution to Equation (26). Substituting (34) and (35) into (11) yields Equation (21).
Now, consider the log-determinant Equation (12) with
and transform the left-hand side of this equation:
Applying Lemma 3 to the first component, Lemma 2 to the second integral of the second component, and taking into account
we obtain
Thus, the following log-determinant equation in the state space is obtained:
Finally, select the exact values of the two constants. The first constant is set so that it simplifies the log-determinant expression (36). The second constant can be found from the requirement that, for s = 0, the parameter q from the log-determinant equation in the frequency domain (12) also equals 0. Substitution of these values into (36) leads to Equation (27).
This completes the proof. □
The most interesting properties of the σ-entropy norm are the following:
- it corresponds to the ℋ2 norm of the system when s = 0;
- it tends to the ℋ∞ norm of the system when s → ∞.
Theorem 2 claims that, for a given value of spectral entropy s, the σ-entropy norm of a linear system can be found from the solution of the coupled nonlinear matrix Equations (22)–(27). In contrast to the frequency-domain approach, these conditions can be applied to solve the σ-entropy control design problem considered in the next section.
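Numerically, the coupled system (22)–(27) combines equation types that all have standard solvers: an algebraic Riccati equation, Lyapunov equations, and one scalar log-determinant equation in q. The sketch below illustrates only these building blocks on hypothetical data and with illustrative equation forms; it does not reproduce the actual Equations (22)–(27), which should be taken from Theorem 2.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov
from scipy.optimize import brentq

# Hypothetical plant data (A Hurwitz), used only to demonstrate the solvers.
A = np.array([[-1.0,  1.0],
              [ 0.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 1.0]])
s = 0.5  # prescribed sigma-entropy level

def riccati_and_gramian(q):
    """For a fixed scalar q, solve one illustrative Riccati-type equation
    and one Lyapunov equation (stand-ins for the matrix equations)."""
    R = solve_continuous_are(A, B, q * (C.T @ C), np.eye(B.shape[1]))
    P = solve_continuous_lyapunov(A, -B @ B.T)   # controllability-type gramian
    return R, P

def logdet_residual(q):
    """Illustrative scalar log-determinant equation in q."""
    R, _ = riccati_and_gramian(q)
    M = np.eye(B.shape[1]) + q * (B.T @ R @ B)   # placeholder coupling term
    return float(np.log(np.linalg.det(M))) - 2.0 * s

# The scalar equation in q is solved by one-dimensional root finding;
# the matrix unknowns are then recovered from the solvers above.
q_star = brentq(logdet_residual, 1e-6, 10.0)
R_star, P_star = riccati_and_gramian(q_star)
```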
4. Spectral Entropy Optimal Control Design
This section is devoted to the σ-entropy optimal control design problem in the state space. As before, we deal with linear systems affected by random external disturbances whose spectral entropy is bounded by a scalar value. The problem is to find an optimal state-feedback gain that minimizes the σ-entropy norm of the closed-loop system.
4.1. Problem Statement
To formulate and solve the problem, consider the following linear continuous-time stationary system F:
where , , , , , and are constant real matrices.
In addition, the signals entering (37) are the system's state, the output signal, a random input signal, and the control input, respectively. The random input disturbance has bounded σ-entropy defined by (8).
Let system (37) satisfy the following assumptions:
- The pair is stabilizable.
- is invertible.
- The pair has no unobservable modes on the imaginary axis; this is required for the Riccati equations that characterize the optimal controller to have stabilizing solutions.
These assumptions are necessary for the existence of the optimal control law that solves the problem stated below.
The problem is to find a state-feedback control law that minimizes the σ-entropy norm of the closed-loop system:
i.e.,
4.2. Problem Solution
We introduce two sets: the set of input signals with bounded σ-entropy and the set of stabilizing control laws. The idea of the optimal control problem solution is based on a saddle-point condition that can be formulated as follows:
where S is the spectral density of the input signal.
The set of optimal control laws consists of the solutions of the weighted ℋ2-optimization problem, the set of stabilizing control laws contains all controllers that make the closed-loop system stable, and the set of inputs consists of the signals with the worst-case spectral density for the closed-loop system.
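Written with illustrative symbols (𝒲ₛ for the admissible input spectral densities, 𝒦 for the stabilizing feedback gains, and 𝒬 for the gain of the closed-loop system, none of which is fixed by the text above), the saddle-point condition has the familiar min–max structure:

```latex
% Sketch of the saddle-point (min-max) structure with illustrative symbols:
% \mathcal{W}_s -- input spectral densities with sigma-entropy at most s,
% \mathcal{K}   -- stabilizing state-feedback gains,
% \mathcal{Q}(F_{cl}(K),S) -- the gain (7) of the closed-loop system.
\min_{K\in\mathcal{K}}\;\max_{S\in\mathcal{W}_{s}}\;\mathcal{Q}\bigl(F_{cl}(K),S\bigr)
\;=\;
\max_{S\in\mathcal{W}_{s}}\;\min_{K\in\mathcal{K}}\;\mathcal{Q}\bigl(F_{cl}(K),S\bigr).
```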
Lemma 4.
If the control law K is a saddle point of the mapping , then it is the solution to problem (39).
Hence, the solution is composed of two steps. The first step is to find the worst-case spectral density of the input signal with bounded σ-entropy. The second step deals with solving the weighted ℋ2-control problem.
Step 1. Let the input signal W be generated from Gaussian white noise V by a shaping filter. Then, following the proof of Theorem 2, the shaping filter that generates a signal with bounded σ-entropy and the worst-case spectral density can be presented in the following form.
Lemma 5.
For system (38) and σ-entropy , the worst-case shaping filter G has the following state-space representation:
where matrix L satisfies
The matrices and the positive scalar q constitute the unique solution of the corresponding set of equations.
Step 2. Consider the weighted system which is affected by the Gaussian white noise. The transfer matrix of this system is equal to
The goal is to solve the ℋ2-optimization problem for this system. To achieve this, consider an extended system whose state vector is composed of the state of the shaping filter and the state x of the plant. Its dynamics can be described by
where
Here, matrix L satisfies the conditions of Lemma 5.
The optimal state-feedback control law, which solves the weighted ℋ2-optimization problem for system (45), can be found in the following form:
where
and is the unique symmetric positive (semi)definite solution of the algebraic Riccati equation
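The Riccati step in (46)–(47) is the standard LQR/ℋ2 construction for the extended system. A minimal sketch of how such a gain is computed numerically (hypothetical extended-system matrices; SciPy's CARE solver assumed; the particular weighting of the theorem below is not reproduced):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical extended system (shaping filter + plant), illustration only.
A_ext = np.array([[-2.0,  0.0],
                  [ 1.0, -1.0]])      # extended dynamics matrix
B_u   = np.array([[0.0],
                  [1.0]])             # control input matrix
C_z   = np.array([[1.0, 1.0],
                  [0.0, 0.0]])        # controlled-output matrix
D_zu  = np.array([[0.0],
                  [1.0]])             # control-to-output matrix (full column rank)

# Weighted-H2 / LQR-type ARE with a cross term:
#   A'X + XA - (X B_u + C_z' D_zu) R^{-1} (B_u' X + D_zu' C_z) + C_z' C_z = 0,
#   where R = D_zu' D_zu.
R = D_zu.T @ D_zu
X = solve_continuous_are(A_ext, B_u, C_z.T @ C_z, R, s=C_z.T @ D_zu)

# Stabilizing state feedback u = K x_ext.
K = -np.linalg.solve(R, B_u.T @ X + D_zu.T @ C_z)
print(K)
```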
Theorem 3.
Let system (37) satisfy Assumptions 1–5. Then, for the given σ-entropy level, the optimal σ-entropy state-feedback control law is given in the form (48), with the matrices described by (46), while the remaining matrices can be found from the solution of the set of coupled Equations (40)–(43) and (47) in the form
with matrices , , and a positive scalar q being the unique solutions.
Proof.
Since most of the proof is given above, we prove only (48). The solution of the weighted ℋ2 control problem (47) has a state dimension equal to that of the extended system: one substate corresponds to the plant, while the other corresponds to the shaping filter. It can be shown that the two systems have the same input–output operator. This means that the dimension of the control can be reduced as in (48). This completes the proof. □
It is shown that the σ-entropy optimal control problem is a classical minimax problem. Based on the saddle-point condition of optimality, the solution to the stated problem is divided into two stages: at the first stage, the worst-case input disturbance is defined, and at the second stage, a controller that minimizes the output dispersion is synthesized. Finally, the solution to the state-feedback optimal control problem is found from the set of coupled nonlinear Equations (40)–(43) and (46)–(48).
5. Numerical Example
Consider a first-order system given by the equation
The controllable output is selected as
with , .
The desired frequency in the expression (40) for σ-entropy computation is selected as . In this case, R is a scalar, , and
The resulting set of eight equations in eight variables is the following:
This set of equations was solved numerically using the fsolve function in MATLAB 2009b.
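An analogous computation can be reproduced with SciPy's root finder. The sketch below shows the pattern on a small hypothetical set of three coupled scalar equations (one quadratic "Riccati-type", one linear "Lyapunov-type", one logarithmic) that only mimics the structure of the coupled set; it is not the actual eight-equation system of this example.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical first-order plant data and sigma-entropy level, illustration only.
a, b, c, s = -1.0, 1.0, 1.0, 0.3

def residuals(v):
    """Residuals of three illustrative coupled scalar equations."""
    r, p, q = v
    return [
        2.0 * a * r + q * c**2 + (b * r)**2,      # Riccati-type equation
        2.0 * a * p + b**2,                        # Lyapunov-type equation
        np.log(1.0 + q * b**2 * r) - 2.0 * s,      # log-determinant-type equation
    ]

x0 = np.array([1.0, 1.0, 1.0])        # initial guess
r_sol, p_sol, q_sol = fsolve(residuals, x0)   # Newton-type root finding
print(r_sol, p_sol, q_sol)
```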
The solution to the optimal control problem provides a feedback gain , while the optimal controller is equal to with the norm of the closed-loop system set to
The results of the optimal σ-entropy control design are given in Table 1. It can be seen that, for zero spectral entropy, the solution to the optimal control problem coincides with the ℋ2 optimal controller. When the spectral entropy s tends to infinity, the optimal spectral entropy controller tends to the ℋ∞ one. However, the feedback gain is much smaller than in the ℋ∞ case. This means that spectral entropy control provides smoother and more finely tuned control with almost the same performance.
Table 1.
Optimal controller for different values of spectral entropy.
6. Discussion
In this paper, a novel approach to the robust control of linear time-invariant systems is introduced. Based on the induced norm concept, it is suggested to use the spectral properties of the input disturbance in the control design quality criterion. In this case, the frequency properties of the disturbance to be rejected are taken into account. To define a set of inputs, spectral entropy is introduced. It is a nonnegative scalar value that depends on the log determinant of the spectral density of the signal. On the one hand, this allows the construction of a finely tuned controller; on the other hand, it maintains the robustness of the closed-loop system.
State-space formulae of spectral entropy analysis for linear systems are derived. It is shown that the optimal state-feedback control problem leads to a set of coupled nonlinear equations. Assumptions 1–5 guarantee the existence of a unique admissible solution of these equations.
Similar to the well-known ℋ2 and ℋ∞ approaches, the proposed σ-entropy approach deals with the operator norm of the system from the disturbance to the controlled output. The design objective is to find an optimal solution which minimizes the influence of the disturbances that act on the system and belong to a prescribed set. The set of signals is defined by a nonnegative scalar value s: the larger the value of s, the wider the set of possible signals it defines. Note that the case of s = 0 corresponds to random signals with unitary spectral density, i.e., standard Gaussian noise. Therefore, the σ-entropy optimization problem in the case of s = 0 corresponds to the ℋ2 optimal control. In the case s → ∞, the set of possible signals is extended to the whole range of stochastic signals with bounded L2 or power norm. Therefore, the σ-entropy optimization problem for s → ∞ corresponds to the ℋ∞ optimal control. These facts are clearly demonstrated in the numerical example. Thus, it becomes possible to unify both well-known control strategies within a common framework and to improve the properties of the closed-loop system by better tuning of the controller.
7. Conclusions
In this research, the problem of analysis and optimal state-feedback spectral entropy control for linear continuous-time systems is considered. The analysis problem is to find the system's gain from external random disturbances with bounded spectral entropy to the controllable output, while the control problem is to find a state-feedback gain that minimizes the spectral entropy norm of the closed-loop system. Analytical solutions to both problems are derived in the paper. The suggested approach unifies the ℋ2 and ℋ∞ control theories within a common framework as limiting cases. The numerical example illustrates the benefits of the suggested optimization criterion over the well-known ℋ2 and ℋ∞ approaches. The derived method can be applied in linear control systems with ℋ2 and ℋ∞ controllers to refine the control strategy, taking into account the frequency properties of the disturbance.
Future research will be conducted in several directions. The first direction is the extension of the theory to a wider class of control plants; in particular, it is planned to derive an output-feedback optimal σ-entropy control strategy. The second direction is the development of numerical tools for solving σ-entropy analysis and control design problems. Since the conditions are given as a set of coupled nonlinear matrix equations, homotopy methods will be applied.
Author Contributions
Conceptualization, V.A.B. and A.A.B.; methodology, V.A.B.; formal analysis, V.A.B., A.A.B. and O.G.A.; investigation, V.A.B., A.A.B. and O.G.A.; writing—original draft preparation, V.A.B. and O.G.A.; writing—review and editing, A.A.B. and O.G.A.; supervision, A.A.B. and O.G.A.; project administration, A.A.B. and O.G.A.; funding acquisition, A.A.B. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially funded by Russian Science Foundation grant number 23-21-00306, https://rscf.ru/project/23-21-00306/ (accessed on 15 November 2024).
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Notations
The following list of notations is used throughout the paper:
| E{w} | the mathematical expectation of signal w; |
| | the Euclidean norm of the vector w; |
| ℝ^{n×m} | the set of matrices with real entries; |
| ℝ^n | the set of n-dimensional vectors with real entries; |
| | the L2 norm of signal w; |
| | the power norm of signal w; |
| | the σ-entropy norm of system F; |
| z | a complex variable in the Laplace transform; |
| i | the imaginary unit; |
| G* | the Hermitian conjugate of matrix G; |
| tr A | the trace of matrix A; |
| det A | the determinant of matrix A; |
| | the matrix of the worst-case spectral density of the signal; |
| | the closed-loop system. |
References
- Kurdyukov, A.; Andrianova, O.; Belov, A.; Gol'din, D. In Between the LQG/ℋ2- and ℋ∞-Control Theories. Autom. Remote Control 2021, 82, 565–618. [Google Scholar] [CrossRef]
- Doyle, J.C. Guaranteed margins for LQG regulators. IEEE Trans. Autom. Control 1978, 23, 756–757. [Google Scholar] [CrossRef]
- Mustafa, D.; Glover, K. (Eds.) The minimum entropy ℋ∞ control problem. In Minimum Entropy ℋ∞ Control; Springer: Berlin/Heidelberg, Germany, 1990; pp. 15–33. [Google Scholar]
- Scherer, C. Mixed ℋ2/ℋ∞ Control. In Trends in Control; Isidori, A., Ed.; Springer: London, UK, 1995; pp. 173–216. [Google Scholar]
- Vladimirov, I.G.; Kurdjukov, A.P.; Semyonov, A. State-Space Solution to Anisotropy-Based Stochastic ℋ∞-Optimization Problem. IFAC Proc. Vol. 1996, 29, 3816–3821. [Google Scholar] [CrossRef]
- Petersen, I.; James, M.; Dupuis, P. Minimax optimal control of stochastic uncertain systems with relative entropy constraints. IEEE Trans. Autom. Control 2000, 45, 398–412. [Google Scholar] [CrossRef]
- Mustafa, D.; Glover, K. Minimum Entropy ℋ∞ Control; LIDS-R, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology: Cambridge, MA, USA, 1990. [Google Scholar]
- Arov, D.; Krein, M. On computing the entropy integrals and their minima in generalized extension problems. Act. Sci. Mat. 1983, 45, 33–50. [Google Scholar]
- Jhang, J.; Yung, C.F. Study of minimum entropy controller for descriptor systems. Asian J. Control 2020, 23, 1161–1170. [Google Scholar] [CrossRef]
- Peters, M.; Iglesias, P. Minimum entropy control for discrete-time time-varying systems. Automatica 1997, 33, 591–605. [Google Scholar] [CrossRef]
- Fu, B.; Ferrari, S. Robust Flight Control via Minimum ℋ∞ Entropy Principle. In Proceedings of the 2018 AIAA Guidance, Navigation, and Control Conference, Kissimmee, FL, USA, 8–12 January 2018. [Google Scholar]
- Peters, M.A.; Iglesias, P.A. Minimum Entropy Control for Time-Varying Systems; Birkhauser Boston, Inc.: Boston, MA, USA, 1997. [Google Scholar]
- Semyonov, A.; Vladimirov, I.; Kurdyukov, A. Stochastic approach to ℋ∞-optimization. In Proceedings of the 1994 33rd IEEE Conference on Decision and Control, Lake Buena Vista, FL, USA, 14–16 December 1994; Volume 3, pp. 2249–2250. [Google Scholar]
- Vladimirov, I.; Kurdyukov, A.; Semyonov, A. Anisotropy of signals and entropy of linear stationary systems. Dokl. Math. 1995, 51, 388–390. [Google Scholar]
- Vladimirov, I.; Kurdyukov, A.; Semyonov, A. On computing the anisotropic norm of linear discrete-time-invariant systems. In Proceedings of the 13th IFAC World Congress, San Francisco, CA, USA, 30 June–5 July 1996; pp. 179–184. [Google Scholar]
- Kurdyukov, A.; Boichenko, V. A Spectral Method of the Analysis of Linear Control Systems. Int. J. Appl. Math. Comput. Sci. 2019, 29, 667–679. [Google Scholar] [CrossRef]
- Boichenko, V.A.; Belov, A.A.; Andrianova, O.G. Axiomatic Foundations of Anisotropy-Based and Spectral Entropy Analysis: A Comparative Study. Mathematics 2023, 11, 2751. [Google Scholar] [CrossRef]
- Boichenko, V.; Belov, A. On σ-entropy Analysis of Linear Stochastic Systems in State Space. Syst. Theory Control Comput. J. 2021, 1, 30–35. [Google Scholar] [CrossRef]
- Zhou, K.; Glover, K.; Bodenheimer, B.; Doyle, J. Mixed ℋ2 and ℋ∞ Performance Objectives I: Robust Performance Analysis. IEEE Trans. Autom. Control 1994, 39, 1564–1574. [Google Scholar] [CrossRef]
- Zhou, K.; Doyle, J.C.; Glover, K. Robust and Optimal Control; Prentice Hall Inc.: Upper Saddle River, NJ, USA, 1996. [Google Scholar]
- Bernstein, D. Matrix Mathematics; Princeton University Press: Princeton, NJ, USA, 2005. [Google Scholar]
- Rudin, W. Real and Complex Analysis; McGraw-Hill: New York, NY, USA, 1986. [Google Scholar]