Open Access
Algorithms 2018, 11(6), 81; https://doi.org/10.3390/a11060081
Article
A Randomized Algorithm for Optimal PID Controllers
Department of Computer Sciences, Lev Academic Center, Jerusalem College of Technology, P.O.B. 16031 Jerusalem, Israel
Received: 7 May 2018 / Accepted: 2 June 2018 / Published: 5 June 2018
Abstract
A randomized algorithm is suggested for the synthesis of optimal PID controllers for MIMO coupled systems, where the optimality is with respect to the ${H}_{\infty}$-norm, the ${H}_{2}$-norm and the LQR functional, with possible system-performance specifications defined by regional pole placement. Other notions of optimality (e.g., mixed ${H}_{2}/{H}_{\infty}$ design, controller norm or controller sparsity) can be handled similarly with the suggested algorithm. The suggested method is direct and thus can be applied to continuous-time systems as well as to discrete-time systems, with the obvious minor changes. The presented algorithm is a randomized algorithm, which has a proof of convergence (in probability) to a global optimum.
Keywords:
continuous-time linear systems; discrete-time linear systems; state-space representation; proportional-integral-derivative controllers; static output feedback; optimal controllers; randomized algorithms; convergence in probability

1. Introduction
Proportional-Integral-Derivative (PID) controllers are the most widely used controllers in industry, despite the many new results in control theory achieved in recent years. This is because in many cases PID controllers achieve close-to-optimal performance, because it is hard to compete with their investment-to-profit ratio, and because of their well-understood influence on the closed-loop system behavior, as well as the availability of many tuning procedures for various performance objectives, at least for SISO systems or MIMO systems that can be well decoupled. Often, the performance achieved by PID controllers can be significantly improved by using more effective design techniques; therefore, the field of PID controllers remains an active field of research. A review of recent techniques for tuning and designing PID-based control structures, together with methods for assessing their performance, is presented in [1]. In [2], optimal decentralized MIMO PID controller design for a benchmark MIMO distillation-column model is carried out using a Modified Firefly Algorithm (MFA), where the optimality is with respect to an objective function that is a weighted combination of the ${L}_{1}$-norm and the ${L}_{2}$-norm of the tracking error with respect to a reference signal.
Tuning PID controllers for MIMO coupled systems is much more challenging than tuning PID controllers for SISO systems or decoupled MIMO systems, because of the number of parameters and because they should all be tuned at once: tuning each channel separately might result in very poor performance of the whole system, or even destabilize it. In [3], the need for effective methods for the design of optimal PID controllers for MIMO coupled systems that cannot be well approximated by second-order systems was raised. In [4], the parametrization of all the solutions of the standard ${H}_{\infty}$ control problem is used to obtain Bilinear Matrix Inequality (BMI) constraints on the PID gains for each frequency. Next, for each frequency, restricted Linear Matrix Inequality (LMI) constraints for a stabilizing PID gain are derived from the BMI constraints. Finally, the set of LMIs for a chosen finite set of frequencies is solved by the Iterated Linear Matrix Inequalities (ILMI) method. The sequence is shown to converge to a local optimum. In [5], a method for PID controller design for stable MIMO coupled plants (given by their transfer function at an appropriate set of frequencies) is presented. The method relies on solving a sequence of SDPs, but it cannot guarantee finding a globally optimal design.
In this article, we suggest a randomized algorithm for optimal PID controllers. The algorithm is based on the known reduction of [3] to the stabilizing Static-Output-Feedback (SOF) problem, which is then solved by the Ray-Shooting Method, where the method is used both for the search for a feasible starting SOF and for the optimal SOF (see [6,7,8] for some applications of the method).
A closely related approach to ours is given in [9], where the problem is first reduced to the SOF problem, by the reduction of [3] and by using a descriptor-system representation of the augmented system (in order to bypass the feasibility condition that $I-{\widehat{K}}_{3}{C}_{2}{B}_{2}$ should be invertible; see the following). Next, the search problem of a starting stabilizing SOF controller is solved via the ILMI approach. Finally, a Bounded-Real Lemma version for descriptor systems is used in order to get a set of Quadratic Matrix Inequalities (QMIs) whose solution guarantees a $\gamma$-suboptimal ${H}_{\infty}$ gain. The last can be solved by the ILMI method (without a proof of convergence). The ILMI method was discussed in many articles (see [3,10,11,12]). Unfortunately, the ILMI method needs a good starting point and has no proof of convergence in general.
Another closely related approach is given in [13], where the 2-Degree-of-Freedom PID (2DOF-PID) problem is reduced to the SOF problem which, in order to incorporate the optimization of a $\gamma$-suboptimal ${H}_{\infty}$ gain, is reformulated as a set of BMIs, the last being solved via standard BMI solvers.
Many problems can be reduced to the SOF problem by imposing structural or other constraints on the controllers. These include the reduction of the minimal-degree dynamic-feedback problem and robust or decentralized stability via static feedback (see [14,15], resp.). The design problem of remote digital PID controllers, formulated as a robust SOF problem for discrete-time Networked Control Systems (NCS) subject to time delays and randomly missing data, is considered in [16]. A formulation of the reduced-order ${H}_{\infty}$ filter problem as a constrained SOF problem is considered in [17]. Unfortunately, the SOF problem with bounded entries or other structural constraints is known to be NP-hard (see [14,18]). Therefore, minimal-gain SOF with bounded entries is obviously an NP-hard problem. Exact pole placement and simultaneous and robust stabilization via SOF are also NP-hard (see [14,19], resp.). Thus, practically, one should expect that only approximation or randomized algorithms will be able to cope with these problems.
The article is organized as follows:
In Section 2 we define the feasibility problem of PID controllers and the specific reduction to the SOF problem that will be used. In Section 3 we define the PID controller optimization problems, where the optimality is with respect to the ${H}_{\infty}$-norm, the ${H}_{2}$-norm and the LQR functional, with possible regional pole placement. In Section 4 we describe the suggested randomized algorithm and finally, in Section 5 we give a real-life example for each one of the above-mentioned optimization problems.
The notions are standard. By $\Re \left(z\right)$ and $\Im \left(z\right)$ we denote the real and imaginary parts of $z\in \mathbb{C}$, resp., where by ${\mathbb{C}}_{-}$ we denote the open left half-plane. For a matrix Z, we denote by ${Z}^{T}$ its transpose and by $\overline{Z}$ its conjugate matrix. By ${Z}^{*}$ we denote the conjugate transpose of Z. By ${Z}^{+}$ we denote the Moore-Penrose pseudo-inverse of Z, where by ${L}_{Z}$ and ${R}_{Z}$ we denote the left and right orthogonal projections $I-{Z}^{+}Z$ and $I-Z{Z}^{+}$, resp. For a square matrix Z, we denote by $\sigma \left(Z\right)$ the spectrum of Z and by $trace\left(Z\right)$ the trace of Z. For matrices $Z,W$, we denote by $Z\otimes W$ the Kronecker product. By $\Vert Z\Vert$ we denote the spectral norm of Z, i.e., its largest singular value, where by ${\Vert Z\Vert}_{F}$ we denote its Frobenius norm, i.e., ${\left(trace\left({Z}^{*}Z\right)\right)}^{\frac{1}{2}}$. For matrices $Z,W$ of the same size, we denote by ${\langle Z,W\rangle}_{F}=trace\left({W}^{*}Z\right)$ the Frobenius inner product. For a vector v, we denote by $\Vert v\Vert$ its Euclidean norm ${\left({\sum}_{j}{\left|{v}_{j}\right|}^{2}\right)}^{\frac{1}{2}}$. By $vec\left(Z\right)$ we denote the vector obtained from the matrix Z by stacking all its columns into a single column. By $mat$ we denote the inverse function of $vec$. For a transfer function $T\left(s\right)$ analytic in the open right half-plane, we denote by ${\Vert T\Vert}_{{H}_{\infty}}=\mathrm{ess}\,{\mathrm{sup}}_{\omega \in \mathbb{R}}\Vert T\left(j\omega \right)\Vert$ its ${H}_{\infty}$ norm, and by ${\Vert T\Vert}_{{H}_{2}}={\left(\frac{1}{2\pi}{\int}_{-\infty}^{+\infty}trace\left(T{\left(j\omega \right)}^{*}T\left(j\omega \right)\right)d\omega \right)}^{\frac{1}{2}}$ its ${H}_{2}$ norm.
For a set X in a topological space, we denote by $\overline{X}$ its closure and by $int\left(X\right)$ its interior. For two sets $X,Y$, we denote by $CH\left(X,Y\right)$ the convex hull of X and Y, i.e., the minimal closed convex set that contains $X\cup Y$.
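The $vec$/$mat$ pair and the projections ${L}_{Z},{R}_{Z}$ are used repeatedly below. As a quick reference, here is a minimal NumPy sketch of these operations (the function names are ours, not from the article):

```python
import numpy as np

def vec(Z):
    # vec(Z): stack the columns of Z into one column (column-major order).
    return Z.flatten(order="F")

def mat(v, shape):
    # mat: the inverse of vec, for a known target shape.
    return np.asarray(v).reshape(shape, order="F")

def projections(Z):
    # L_Z = I - Z^+ Z and R_Z = I - Z Z^+, built from the
    # Moore-Penrose pseudo-inverse; both are orthogonal projections.
    Zp = np.linalg.pinv(Z)
    L_Z = np.eye(Z.shape[1]) - Zp @ Z
    R_Z = np.eye(Z.shape[0]) - Z @ Zp
    return L_Z, R_Z
```

Since $ZZ^{+}Z=Z$, one always has $Z{L}_{Z}=0$ and ${R}_{Z}Z=0$, which is what makes ${L}_{Z}$ useful for parametrizing solution sets of linear matrix equations below.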
2. The Problem of Stabilization via PID Controllers
Let a system be given by:
where $A\in {\mathbb{R}}^{{n}_{x}\times {n}_{x}},{B}_{1}\in {\mathbb{R}}^{{n}_{x}\times {n}_{w}},{B}_{2}\in {\mathbb{R}}^{{n}_{x}\times {n}_{u}},{C}_{1}\in {\mathbb{R}}^{{n}_{z}\times {n}_{x}},{D}_{1,1}\in {\mathbb{R}}^{{n}_{z}\times {n}_{w}},{D}_{1,2}\in {\mathbb{R}}^{{n}_{z}\times {n}_{u}},{C}_{2}\in {\mathbb{R}}^{{n}_{y}\times {n}_{x}},{D}_{2,1}\in {\mathbb{R}}^{{n}_{y}\times {n}_{w}}$, where x is the state, z is the regulated output, y is the measurement, u is the control input and w is the noise. Assume that ${D}_{2,1}=0$ and let
where $r\left(t\right)$ is the new reference input. Let $\widehat{x}\left(t\right)=\left[\begin{array}{c}x\left(t\right)\hfill \\ {\int}_{0}^{t}y\left(\tau \right)d\tau \hfill \end{array}\right]$ be the augmented state, then:
implying that:
where $M=I+{K}_{D}{C}_{2}{B}_{2}$ is assumed to be invertible. Now:
where:
$$\left\{\begin{array}{c}\begin{array}{cc}\hfill \dot{x}\left(t\right)& =Ax\left(t\right)+{B}_{1}w\left(t\right)+{B}_{2}u\left(t\right)\hfill \\ \hfill z\left(t\right)& ={C}_{1}x\left(t\right)+{D}_{1,1}w\left(t\right)+{D}_{1,2}u\left(t\right)\hfill \\ \hfill y\left(t\right)& ={C}_{2}x\left(t\right)+{D}_{2,1}w\left(t\right),\hfill \end{array}\hfill \end{array}\right.$$
$$u\left(t\right)=r\left(t\right)-{K}_{P}y\left(t\right)-{K}_{I}{\int}_{0}^{t}y\left(\tau \right)d\tau -{K}_{D}\dot{y}\left(t\right),$$
$$\begin{array}{cc}\hfill u\left(t\right)& =r\left(t\right)-{K}_{P}y\left(t\right)-{K}_{I}{\int}_{0}^{t}y\left(\tau \right)d\tau -{K}_{D}\dot{y}\left(t\right)\hfill \\ & =r-{K}_{P}\left[\begin{array}{cc}{C}_{2}& 0\end{array}\right]\widehat{x}\left(t\right)-{K}_{I}\left[\begin{array}{cc}0& I\end{array}\right]\widehat{x}\left(t\right)-{K}_{D}{C}_{2}\dot{x}\left(t\right)\hfill \\ & =r-{K}_{P}\left[\begin{array}{cc}{C}_{2}& 0\end{array}\right]\widehat{x}\left(t\right)-{K}_{I}\left[\begin{array}{cc}0& I\end{array}\right]\widehat{x}\left(t\right)-{K}_{D}{C}_{2}\left(Ax\left(t\right)+{B}_{1}w\left(t\right)+{B}_{2}u\left(t\right)\right),\hfill \end{array}$$
$$\begin{array}{cc}\hfill u\left(t\right)& ={M}^{-1}r\left(t\right)-{M}^{-1}{K}_{P}\left[\begin{array}{cc}{C}_{2}& 0\end{array}\right]\widehat{x}\left(t\right)-{M}^{-1}{K}_{I}\left[\begin{array}{cc}0& I\end{array}\right]\widehat{x}\left(t\right)\hfill \\ & \phantom{\rule{3.33333pt}{0ex}}-{M}^{-1}{K}_{D}\left[\begin{array}{cc}{C}_{2}A& 0\end{array}\right]\widehat{x}\left(t\right)-{M}^{-1}{K}_{D}{C}_{2}{B}_{1}w,\hfill \end{array}$$
$$\begin{array}{cc}\hfill \dot{\widehat{x}}\left(t\right)& =\left[\begin{array}{c}\dot{x}\left(t\right)\\ y\left(t\right)\end{array}\right]=\left[\begin{array}{c}Ax\left(t\right)+{B}_{1}w\left(t\right)+{B}_{2}u\left(t\right)\\ {C}_{2}x\left(t\right)\end{array}\right]\hfill \\ & =\left[\begin{array}{cc}A& 0\\ {C}_{2}& 0\end{array}\right]\widehat{x}\left(t\right)+\left[\begin{array}{c}{B}_{1}-{B}_{2}{M}^{-1}{K}_{D}{C}_{2}{B}_{1}\\ 0\end{array}\right]\widehat{w}\left(t\right)+\left[\begin{array}{c}{B}_{2}\\ 0\end{array}\right]\widehat{u}\left(t\right),\hfill \end{array}$$
$$\left\{\begin{array}{c}\widehat{w}\left(t\right)=w\left(t\right)\hfill \\ \widehat{u}\left(t\right)=\widehat{r}\left(t\right)-\widehat{K}\widehat{y}\left(t\right)\hfill \\ \widehat{r}\left(t\right)={M}^{-1}r\left(t\right)\hfill \\ \widehat{y}\left(t\right)=\left[\begin{array}{c}{C}_{2}x\left(t\right)\\ {\int}_{0}^{t}y\left(\tau \right)d\tau \\ {C}_{2}Ax\left(t\right)\end{array}\right]\hfill \\ \widehat{K}={M}^{-1}\left[\begin{array}{ccc}{K}_{P}& {K}_{I}& {K}_{D}\end{array}\right].\hfill \end{array}\right.$$
Let
$$\left\{\begin{array}{c}\widehat{A}=\left[\begin{array}{cc}A& 0\\ {C}_{2}& 0\end{array}\right],\phantom{\rule{3.33333pt}{0ex}}{\widehat{B}}_{2}=\left[\begin{array}{c}{B}_{2}\\ 0\end{array}\right]\hfill \\ {\widehat{B}}_{1}=\left[\begin{array}{c}{B}_{1}-{B}_{2}{M}^{-1}{K}_{D}{C}_{2}{B}_{1}\\ 0\end{array}\right]\hfill \\ {\widehat{C}}_{1}=\left[\begin{array}{cc}{C}_{1}& 0\end{array}\right],\phantom{\rule{3.33333pt}{0ex}}{\widehat{C}}_{2}=\left[\begin{array}{cc}{C}_{2}& 0\\ 0& I\\ {C}_{2}A& 0\end{array}\right]\hfill \\ {\widehat{D}}_{1,1}={D}_{1,1}-{D}_{1,2}{M}^{-1}{K}_{D}{C}_{2}{B}_{1},\phantom{\rule{3.33333pt}{0ex}}{\widehat{D}}_{1,2}={D}_{1,2}.\hfill \end{array}\right.$$
Let $\widehat{z}\left(t\right)=z\left(t\right)$ then, the augmented system is given by:
$$\left\{\begin{array}{c}\dot{\widehat{x}}\left(t\right)=\left(\widehat{A}-{\widehat{B}}_{2}\widehat{K}{\widehat{C}}_{2}\right)\widehat{x}\left(t\right)+{\widehat{B}}_{1}\widehat{w}\left(t\right)+{\widehat{B}}_{2}\widehat{r}\left(t\right)\hfill \\ \widehat{z}\left(t\right)=\left({\widehat{C}}_{1}-{\widehat{D}}_{1,2}\widehat{K}{\widehat{C}}_{2}\right)\widehat{x}\left(t\right)+{\widehat{D}}_{1,1}\widehat{w}\left(t\right)+{\widehat{D}}_{1,2}\widehat{r}\left(t\right)\hfill \\ \widehat{y}\left(t\right)={\widehat{C}}_{2}\widehat{x}\left(t\right).\hfill \end{array}\right.$$
Once a stabilizing SOF matrix $\widehat{K}=\left[\begin{array}{ccc}{\widehat{K}}_{1}& {\widehat{K}}_{2}& {\widehat{K}}_{3}\end{array}\right]$ for the augmented system (1) has been found, we have ${K}_{D}=M{\widehat{K}}_{3}$. Thus, $M=I+M{\widehat{K}}_{3}{C}_{2}{B}_{2}$, from which we conclude that:
$$M={\left(I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\right)}^{-1}.$$
Finally, we have:
$$\left\{\begin{array}{c}{K}_{P}=M{\widehat{K}}_{1}={\left(I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\right)}^{-1}{\widehat{K}}_{1}\hfill \\ {K}_{I}=M{\widehat{K}}_{2}={\left(I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\right)}^{-1}{\widehat{K}}_{2}\hfill \\ {K}_{D}=M{\widehat{K}}_{3}={\left(I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\right)}^{-1}{\widehat{K}}_{3}.\hfill \end{array}\right.$$
We conclude that the stabilization problem via PID controller is reducible to the stabilization via SOF problem, with the constraint that $I-{\widehat{K}}_{3}{C}_{2}{B}_{2}$ is invertible. In the following, we will assume that the pairs $\left(\widehat{A},{\widehat{B}}_{2}\right)$ and $\left({\widehat{A}}^{T},{\widehat{C}}_{2}^{T}\right)$ are controllable, in order to make the task of finding a feasible starting SOF $\widehat{K}$ easier.
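As a concrete illustration of this reduction, the following NumPy sketch builds the augmented matrices $\widehat{A},{\widehat{B}}_{2},{\widehat{C}}_{2}$ and recovers $\left({K}_{P},{K}_{I},{K}_{D}\right)$ from a SOF $\widehat{K}=\left[{\widehat{K}}_{1}\phantom{\rule{3.33333pt}{0ex}}{\widehat{K}}_{2}\phantom{\rule{3.33333pt}{0ex}}{\widehat{K}}_{3}\right]$ via (2); the function names are ours:

```python
import numpy as np

def augmented_matrices(A, B2, C2):
    # Ahat = [[A, 0], [C2, 0]],  B2hat = [[B2], [0]],
    # C2hat = [[C2, 0], [0, I], [C2 A, 0]], as in the reduction above.
    nx, ny = A.shape[0], C2.shape[0]
    Ahat = np.block([[A, np.zeros((nx, ny))],
                     [C2, np.zeros((ny, ny))]])
    B2hat = np.vstack([B2, np.zeros((ny, B2.shape[1]))])
    C2hat = np.block([[C2, np.zeros((ny, ny))],
                      [np.zeros((ny, nx)), np.eye(ny)],
                      [C2 @ A, np.zeros((ny, ny))]])
    return Ahat, B2hat, C2hat

def pid_gains_from_sof(K1, K2, K3, C2, B2):
    # Equation (2): M = (I - K3 C2 B2)^{-1}, then K_P = M K1, K_I = M K2,
    # K_D = M K3.  The inverse exists exactly when the feasibility
    # condition of the reduction holds.
    M = np.linalg.inv(np.eye(B2.shape[1]) - K3 @ C2 @ B2)
    return M @ K1, M @ K2, M @ K3
```

A useful sanity check is the round trip: starting from any $\left({K}_{P},{K}_{I},{K}_{D}\right)$ with $M=I+{K}_{D}{C}_{2}{B}_{2}$ invertible, forming $\widehat{K}={M}^{-1}\left[{K}_{P}\phantom{\rule{3.33333pt}{0ex}}{K}_{I}\phantom{\rule{3.33333pt}{0ex}}{K}_{D}\right]$ and applying the recovery above returns the original gains.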
Remark 1.
Please note that although the method relies on a specific known state-space model of the plant, it does not require an exact model, since the modeling errors can be included in the dynamic uncertainty and can be taken into account by selecting a nominal model with a suitable weighting function (see [20]). In any case, after designing a PID controller for the nominal model, its performance should be checked on the nonlinear plant model. Note also that the requirement that $I-{\widehat{K}}_{3}{C}_{2}{B}_{2}$ be invertible is not a problem: if a stabilizing SOF $\widehat{K}$ exists, there exists an open neighborhood of it in which all members are stabilizing SOFs, whereas the set of all matrices that do not satisfy the invertibility condition has measure 0.
Remark 2.
Since the problem of SOF is (polynomial-time) reducible to the PID controller problem (by the restriction ${K}_{I}=0,{K}_{D}=0$), the PID controller problem is as hard as the SOF problem. Since the SOF problem with structural constraints (on ${K}_{P}$), or the problem of exact pole placement via SOF, is NP-hard, it follows that these problems with PID controllers are NP-hard. We therefore conclude that unless $P=NP$, no deterministic polynomial-time algorithm can solve these problems, and we are therefore led to use randomization, with the hope that randomized algorithms will be able to cope more efficiently with these problems.
3. Statement of the Problem
The ${H}_{\infty}$ and ${H}_{2}$ norm minimization relates to the transfer function ${T}_{\widehat{w},\widehat{z}}\left(s\right)$ from the noise $\widehat{w}$ to the output $\widehat{z}$ (when the reference signal $\widehat{r}$ is zero and the system is driven only by the noise). The transfer function is given by:
$${T}_{\widehat{w},\widehat{z}}\left(s\right)={\widehat{D}}_{1,1}+\left({\widehat{C}}_{1}-{\widehat{D}}_{1,2}\widehat{K}{\widehat{C}}_{2}\right){\left(sI-\widehat{A}+{\widehat{B}}_{2}\widehat{K}{\widehat{C}}_{2}\right)}^{-1}{\widehat{B}}_{1}.$$
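Given the augmented data, ${T}_{\widehat{w},\widehat{z}}\left(j\omega \right)$ can be evaluated pointwise, and the ${H}_{\infty}$-norm can be bounded from below by gridding the frequency axis. A minimal sketch (ours, not from the article; a crude grid estimate — in practice a bisection or Hamiltonian-based method would be used):

```python
import numpy as np

def Twz(s, Ahat, B1hat, B2hat, C1hat, C2hat, D11hat, D12hat, Khat):
    # Transfer function (3) of the closed loop, evaluated at the point s.
    Acl = Ahat - B2hat @ Khat @ C2hat
    Ccl = C1hat - D12hat @ Khat @ C2hat
    n = Ahat.shape[0]
    return D11hat + Ccl @ np.linalg.solve(s * np.eye(n) - Acl, B1hat)

def hinf_norm_grid(sys, Khat, omegas):
    # sys = (Ahat, B1hat, B2hat, C1hat, C2hat, D11hat, D12hat).
    # Lower bound on ||T||_Hinf: peak spectral norm over the grid.
    return max(np.linalg.norm(Twz(1j * w, *sys, Khat), 2) for w in omegas)
```

For example, for the scalar stable system with $\widehat{A}=-1$, ${\widehat{B}}_{1}={\widehat{C}}_{1}=1$ and $\widehat{K}=0$, the closed loop is $T\left(s\right)=1/\left(s+1\right)$, whose ${H}_{\infty}$-norm is 1, attained at $\omega =0$.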
In order that the ${H}_{2}$-norm of ${T}_{\widehat{w},\widehat{z}}\left(s\right)$ be finite, we need ${\widehat{D}}_{1,1}=0$. We therefore have the following lemma (given without proof):
Lemma 1.
We have:
if and only if:
In this case, the set of all matrices ${\widehat{K}}_{3}$ satisfying (4) is given by:
where Z is an arbitrary matrix with the same size as ${\widehat{K}}_{3}$.
$${\widehat{D}}_{1,1}={D}_{1,1}-{D}_{1,2}{M}^{-1}{K}_{D}{C}_{2}{B}_{1}={D}_{1,1}-{D}_{1,2}{\widehat{K}}_{3}{C}_{2}{B}_{1}=0,$$
$$\left({\left({C}_{2}{B}_{1}\right)}^{T}\otimes {D}_{1,2}\right)\cdot {\left({\left({C}_{2}{B}_{1}\right)}^{T}\otimes {D}_{1,2}\right)}^{+}vec\left({D}_{1,1}\right)=vec\left({D}_{1,1}\right).$$
$${\widehat{K}}_{3}=mat\left({\left({\left({C}_{2}{B}_{1}\right)}^{T}\otimes {D}_{1,2}\right)}^{+}vec\left({D}_{1,1}\right)+{L}_{{\left({C}_{2}{B}_{1}\right)}^{T}\otimes {D}_{1,2}}vec\left(Z\right)\right),$$
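Under the solvability condition (5), the parametrization (6) is a direct application of the standard least-squares solution of a linear matrix equation via $vec$ and the Kronecker product, using $vec\left(AXB\right)=\left({B}^{T}\otimes A\right)vec\left(X\right)$. A sketch (function name ours):

```python
import numpy as np

def k3_parametrization(C2, B1, D11, D12, Z):
    # All K3 with D11 - D12 K3 (C2 B1) = 0, via
    # ((C2 B1)^T ⊗ D12) vec(K3) = vec(D11)  and Equation (6).
    G = np.kron((C2 @ B1).T, D12)
    Gp = np.linalg.pinv(G)
    L_G = np.eye(G.shape[1]) - Gp @ G          # left projection L_G = I - G^+ G
    v = Gp @ D11.flatten(order="F") + L_G @ Z.flatten(order="F")
    return v.reshape(Z.shape, order="F")       # mat(...), column-major
```

Since $G{L}_{G}=0$, every choice of Z yields a valid ${\widehat{K}}_{3}$ whenever the condition (5) holds, i.e., whenever $vec\left({D}_{1,1}\right)$ lies in the range of G.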
For the LQR minimization, we assume that $\widehat{w}\left(t\right)=0$ and $\widehat{r}\left(t\right)=0$, i.e., the system is driven by the initial state ${\widehat{x}}_{0}:=\widehat{x}\left(0\right)$ (initial step input). Assume that a SOF $\widehat{K}$ was found, such that ${\widehat{A}}_{c\ell}\left(\widehat{K}\right):=\widehat{A}-{\widehat{B}}_{2}\widehat{K}{\widehat{C}}_{2}$ is stable. Let the LQR functional be defined by:
where $Q>0$ and $R\ge 0$. Let $\widehat{C}\left(\widehat{K}\right):={\widehat{C}}_{1}-{\widehat{D}}_{1,2}\widehat{K}{\widehat{C}}_{2}$. Then, $\widehat{z}\left(t\right)=\widehat{C}\left(\widehat{K}\right)\widehat{x}\left(t\right)$ and substitution of the last into (7) yields
where $P\left(\widehat{K}\right)>0$ is the unique solution to the Lyapunov equation
given explicitly by
$$J\left({\widehat{x}}_{0}\right):={\int}_{0}^{\infty}\left(\widehat{x}{\left(t\right)}^{T}Q\widehat{x}\left(t\right)+\widehat{z}{\left(t\right)}^{T}R\widehat{z}\left(t\right)\right)dt,$$
$$J\left({\widehat{x}}_{0},\widehat{K}\right)={\int}_{0}^{\infty}\widehat{x}{\left(t\right)}^{T}\left(Q+\widehat{C}{\left(\widehat{K}\right)}^{T}R\widehat{C}\left(\widehat{K}\right)\right)\widehat{x}\left(t\right)dt={\widehat{x}}_{0}^{T}P\left(\widehat{K}\right){\widehat{x}}_{0},$$
$${\widehat{A}}_{c\ell}{\left(\widehat{K}\right)}^{T}P+P{\widehat{A}}_{c\ell}\left(\widehat{K}\right)=-\left(Q+\widehat{C}{\left(\widehat{K}\right)}^{T}R\widehat{C}\left(\widehat{K}\right)\right)$$
$$P\left(\widehat{K}\right)=-mat\left({\left(I\otimes {\widehat{A}}_{c\ell}{\left(\widehat{K}\right)}^{T}+{\widehat{A}}_{c\ell}{\left(\widehat{K}\right)}^{T}\otimes I\right)}^{-1}\cdot vec\left(Q+\widehat{C}{\left(\widehat{K}\right)}^{T}R\widehat{C}\left(\widehat{K}\right)\right)\right).$$
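Equations (9)-(10) can be implemented directly with the identity $vec\left({X}^{T}P+PX\right)=\left(I\otimes {X}^{T}+{X}^{T}\otimes I\right)vec\left(P\right)$; a sketch (ours; for larger systems a dedicated Lyapunov solver such as `scipy.linalg.solve_continuous_lyapunov` would normally be preferred over the dense Kronecker system):

```python
import numpy as np

def lqr_value_matrix(Acl, Chat, Q, R):
    # Solve Acl^T P + P Acl = -(Q + Chat^T R Chat) for P via vec/Kronecker,
    # assuming Acl is stable (all eigenvalues in the open left half-plane).
    n = Acl.shape[0]
    W = Q + Chat.T @ R @ Chat
    S = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    p = -np.linalg.solve(S, W.flatten(order="F"))
    return p.reshape((n, n), order="F")

def sigma_max(P):
    # Worst-case LQR cost per unit ||x0||^2: the largest eigenvalue of P.
    return max(np.linalg.eigvalsh(P))
```

For the scalar system $\dot{x}=-x$ with $\widehat{C}=Q=R=1$, the equation reads $-2P=-2$, so $P=1$, matching $J\left({x}_{0}\right)={\int}_{0}^{\infty}2{x}_{0}^{2}{e}^{-2t}dt={x}_{0}^{2}$.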
Thus, we look for a SOF $\widehat{K}$ that minimizes the functional $J\left({\widehat{x}}_{0},\widehat{K}\right)={\widehat{x}}_{0}^{T}P\left(\widehat{K}\right){\widehat{x}}_{0}$. When ${\widehat{x}}_{0}$ is unknown, we seek a SOF $\widehat{K}$ for which
is minimal. In this case, we get a robust LQR via SOF, in the sense that it minimizes $J\left({\widehat{x}}_{0},\widehat{K}\right)$ for the worst possible (unknown) ${\widehat{x}}_{0}$. Please note that
and that there exists ${\widehat{x}}_{0}\ne 0$ for which equality holds. Therefore $\frac{J\left({\widehat{x}}_{0},\widehat{K}\right)}{{\u2225{\widehat{x}}_{0}\u2225}^{2}}\le {\sigma}_{max}\left(\widehat{K}\right)$, with equality in the worst case.
$${\sigma}_{max}\left(\widehat{K}\right):=max\left(\sigma \left(P\left(\widehat{K}\right)\right)\right)$$
$$\begin{array}{cc}\hfill {\widehat{x}}_{0}^{T}P\left(\widehat{K}\right){\widehat{x}}_{0}& ={\Vert P{\left(\widehat{K}\right)}^{\frac{1}{2}}{\widehat{x}}_{0}\Vert}^{2}\le {\Vert P{\left(\widehat{K}\right)}^{\frac{1}{2}}\Vert}^{2}{\Vert {\widehat{x}}_{0}\Vert}^{2}\hfill \\ & =\Vert P\left(\widehat{K}\right)\Vert \,{\Vert {\widehat{x}}_{0}\Vert}^{2}={\sigma}_{max}\left(\widehat{K}\right){\Vert {\widehat{x}}_{0}\Vert}^{2},\hfill \end{array}$$
Let $\Omega $ denote a subset of ${\mathbb{C}}_{-}$, which is symmetric with respect to the real axis, containing a positive-length segment of the real axis with some neighborhood of it, and such that $\Omega =\overline{int\left(\Omega \right)}$ (i.e., $\Omega $ is the closure of its interior). Let ${\mathcal{S}}^{q\times r}$ denote the set of all matrices $\widehat{K}\in {\mathbb{R}}^{q\times r}$ such that ${\widehat{A}}_{c\ell}\left(\widehat{K}\right)$ is stable, i.e., $\sigma \left({\widehat{A}}_{c\ell}\left(\widehat{K}\right)\right)\subset {\mathbb{C}}_{-}$. By ${\mathcal{S}}_{\Omega}^{q\times r}$, we denote the set of all matrices $\widehat{K}\in {\mathbb{R}}^{q\times r}$ such that $\sigma \left({\widehat{A}}_{c\ell}\left(\widehat{K}\right)\right)\subset \Omega $. In this case, we say that ${\widehat{A}}_{c\ell}\left(\widehat{K}\right)$ is $\Omega $-stable (or, the augmented system (1) is $\Omega $-stable). In the sequel, we will occasionally write ${\mathcal{S}}_{\Omega}$ instead of ${\mathcal{S}}_{\Omega}^{q\times r}$, when it is clear what the size of the related matrices is. Please note that ${\mathcal{S}}_{\Omega}$ is a closed set, since $\Omega $ is closed, and that ${\mathcal{S}}_{\Omega}$ has nonempty interior (if it is nonempty) since $\Omega $ has nonempty interior.
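Membership in ${\mathcal{S}}_{\Omega}$ is straightforward to test numerically: compute $\sigma \left({\widehat{A}}_{c\ell}\left(\widehat{K}\right)\right)$ and check each eigenvalue against the region. A sketch (ours), with $\Omega $ given as a predicate; the rectangular region below is just an illustrative choice:

```python
import numpy as np

def is_omega_stable(Ahat, B2hat, C2hat, Khat, in_omega):
    # Khat is in S_Omega iff every eigenvalue of Acl = Ahat - B2hat Khat C2hat
    # lies in the region Omega described by the predicate in_omega.
    Acl = Ahat - B2hat @ Khat @ C2hat
    return all(in_omega(lam) for lam in np.linalg.eigvals(Acl))

# An illustrative Omega: a rectangle in the open left half-plane,
# symmetric about the real axis (the bounds here are arbitrary).
rect = lambda s: -10.0 <= s.real <= -0.5 and abs(s.imag) <= 20.0
```

Any region of the kind described above (symmetric about the real axis, closure of its interior) can be encoded the same way.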
Under the assumptions that ${D}_{2,1}=0$ and that $\left(\widehat{A},{\widehat{B}}_{2}\right),\left({\widehat{A}}^{T},{\widehat{C}}_{2}^{T}\right)$ are controllable (and the additional assumption (5) when dealing with the ${H}_{2}$norm minimization), for the PID controller problem with minimal ${H}_{\infty}$norm, we need to solve:
where ${T}_{\widehat{w},\widehat{z}}$ is given by (3). For the PID controller problem with minimal ${H}_{2}$norm, we need to solve:
where ${T}_{\widehat{w},\widehat{z}}$ is given by (3) and, for the PID controller problem with minimal LQR functional value, we need to solve:
where ${\sigma}_{max}\left(\widehat{K}\right)$ is given by (11).
$$\begin{array}{c}min\phantom{\rule{3.33333pt}{0ex}}{f}_{{H}_{\infty}}\left(\widehat{K}\right):={\Vert {T}_{\widehat{w},\widehat{z}}\Vert}_{{H}_{\infty}}\hfill \\ \text{such that:}\hfill \\ \left\{\begin{array}{c}\widehat{K}\in {\mathcal{S}}_{\Omega}\hfill \\ I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\phantom{\rule{3.33333pt}{0ex}}\text{is invertible}\hfill \end{array}\right.\hfill \end{array}$$
$$\begin{array}{c}min\phantom{\rule{3.33333pt}{0ex}}{f}_{{H}_{2}}\left(\widehat{K}\right):={\Vert {T}_{\widehat{w},\widehat{z}}\Vert}_{{H}_{2}}\hfill \\ \text{such that:}\hfill \\ \left\{\begin{array}{c}\widehat{K}\in {\mathcal{S}}_{\Omega}\hfill \\ {\widehat{K}}_{3}\phantom{\rule{3.33333pt}{0ex}}\text{has the structure (6)}\hfill \\ I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\phantom{\rule{3.33333pt}{0ex}}\text{is invertible}\hfill \end{array}\right.\hfill \end{array}$$
$$\begin{array}{c}min\phantom{\rule{3.33333pt}{0ex}}{f}_{LQR}\left(\widehat{K}\right):={\sigma}_{max}\left(\widehat{K}\right)\hfill \\ \text{such that:}\hfill \\ \left\{\begin{array}{c}\widehat{K}\in {\mathcal{S}}_{\Omega}\hfill \\ I-{\widehat{K}}_{3}{C}_{2}{B}_{2}\phantom{\rule{3.33333pt}{0ex}}\text{is invertible}\hfill \end{array}\right.\hfill \end{array}$$
4. The Suggested Algorithm
Assume that ${\widehat{K}}^{\left(0\right)}\in int\left({\mathcal{S}}_{\Omega}^{q\times r}\right)$ was found by the Ray-Shooting algorithm ([6]) or by any other method (see [21,22,23]). We assume that $I-{\widehat{K}}_{3}^{\left(0\right)}{C}_{2}{B}_{2}$ is invertible (and, in addition, that ${\widehat{K}}_{3}^{\left(0\right)}$ has the form (6) for some initial matrix ${Z}^{\left(0\right)}$, when dealing with the ${H}_{2}$-norm minimization). Let $h>0$ and let ${U}^{\left(0\right)}$ be a unit vector w.r.t. the Frobenius norm, i.e., ${\Vert {U}^{\left(0\right)}\Vert}_{F}=1$. Let ${L}^{\left(0\right)}={\widehat{K}}^{\left(0\right)}+h\cdot {U}^{\left(0\right)}$ and let $\mathcal{L}$ be the hyperplane defined by ${L}^{\left(0\right)}+V$, where ${\langle V,{U}^{\left(0\right)}\rangle}_{F}=0$. Let ${r}_{\infty}>0$ and let ${\mathcal{R}}_{\infty}$ denote the set of all $F\in \mathcal{L}$ such that ${\Vert F-{L}^{\left(0\right)}\Vert}_{F}\le {r}_{\infty}$. Let ${\mathcal{R}}_{\infty}\left(\epsilon \right)={\mathcal{R}}_{\infty}+\overline{\mathbb{B}\left(0,\epsilon \right)}$, where $\overline{\mathbb{B}\left(0,\epsilon \right)}$ denotes the closed ball centered at 0 with radius $\epsilon $ ($0<\epsilon \le \frac{1}{2}$), with respect to the Frobenius norm on ${\mathbb{R}}^{q\times r}$. Let ${\mathcal{D}}^{\left(0\right)}=CH\left({\widehat{K}}^{\left(0\right)},{\mathcal{R}}_{\infty}\left(\epsilon \right)\right)$ denote the convex hull of the vertex ${\widehat{K}}^{\left(0\right)}$ with the base ${\mathcal{R}}_{\infty}\left(\epsilon \right)$. Let ${\mathcal{S}}_{\Omega}^{\left(0\right)}={\mathcal{S}}_{\Omega}\cap {\mathcal{D}}^{\left(0\right)}$ and please note that ${\mathcal{S}}_{\Omega}^{\left(0\right)}$ is compact (but generally not convex). We wish to minimize a continuous function $f\left(\widehat{K}\right)$ over the compact set ${\mathcal{S}}_{\Omega}\cap \overline{\mathbb{B}\left({\widehat{K}}^{\left(0\right)},h\right)}$.
Let ${\widehat{K}}_{*}$ denote a point in ${\mathcal{S}}_{\Omega}\cap \overline{\mathbb{B}\left({\widehat{K}}^{\left(0\right)},h\right)}$ where a global minimum of $f\left(\widehat{K}\right)$ is attained. Obviously, ${\widehat{K}}_{*}\in {\mathcal{D}}^{\left(0\right)}$ for some direction ${U}^{\left(0\right)}$, as above. When dealing with the ${H}_{2}$-norm minimization, if ${U}^{\left(0\right)}=\left[\begin{array}{ccc}{U}_{1}^{\left(0\right)}& {U}_{2}^{\left(0\right)}& {U}_{3}^{\left(0\right)}\end{array}\right]$, then the directions will be constrained to have the form ${U}_{3}^{\left(0\right)}=mat\left({L}_{{\left({C}_{2}{B}_{1}\right)}^{T}\otimes {D}_{1,2}}vec\left(Z\right)\right)$, in order that ${L}^{\left(0\right)}$ have the form (6). We also restrict V as we have restricted ${U}^{\left(0\right)}$, in order that any element of $\mathcal{L}$ have the structure (6).
The suggested Algorithm 1 goes as follows:
We start with a point ${\widehat{K}}^{\left(0\right)}\in int\left({\mathcal{S}}_{\Omega}\right)$, found by the Ray-Shooting algorithm [6]. Assuming that ${\widehat{K}}_{*}\in {\mathcal{D}}^{\left(0\right)}$, the inner loop ($j=1,\dots ,n$) uses the Ray-Shooting Method in order to find an approximation of the global minimum of the function $f\left(\widehat{K}\right)$ over ${\mathcal{S}}_{\Omega}^{\left(0\right)}$, the portion of ${\mathcal{S}}_{\Omega}$ bounded in the cone ${\mathcal{D}}^{\left(0\right)}$. The proof of convergence in probability of the inner loop and its complexity can be found in [6] (see also [7]). In the inner loop, we choose a search direction by choosing a point F in ${\mathcal{R}}_{\infty}\left(\epsilon \right)$, the base of the cone ${\mathcal{D}}^{\left(0\right)}$. Next, in the innermost loop ($k=1,\dots ,s$), we scan the ray $\widehat{K}\left(t\right):=\left(1-t\right){\widehat{K}}^{\left(0\right)}+tF$ and record the best controller on it. Repeating this sufficiently many times, we reach ${\widehat{K}}_{*}$ (or an $\epsilon $-neighborhood of it) with high probability, under the assumption that ${\widehat{K}}_{*}\in {\mathcal{D}}^{\left(0\right)}$. In [8] it was shown that by taking $h={r}_{\infty}$ and $m=\lceil e\cdot \sqrt{2\pi \ell}\rceil$ iterations in the outer loop (where $\ell :=qr$), we have ${\widehat{K}}_{*}\in {\mathcal{D}}^{\left(0\right)}$ almost surely. Specifically, when $\ell \ge 12$, it was suggested to take $m=2\ell $ and an $\ell \times \ell $ orthogonal matrix $U=\left[\begin{array}{cccc}{u}_{1}& {u}_{2}& \cdots & {u}_{\ell}\end{array}\right]$, and to take the directions ${U}_{j}^{\left(0\right)}=\pm mat\left({u}_{j}\right),j=1,\dots ,\ell $ in the outer loop.
The outer loop ($i=1,\dots ,m$) is used instead of executing the Ray-Shooting algorithm again and again, by taking ${\widehat{K}}^{\left(\mathrm{best}\right)}$ as the new vertex of the search cone instead of ${\widehat{K}}^{\left(0\right)}$. The replacement of ${\widehat{K}}^{\left(0\right)}$ can be avoided if ${r}_{\infty}$ is chosen sufficiently large. The replacement of ${\widehat{K}}^{\left(0\right)}$ by ${\widehat{K}}^{\left(\mathrm{best}\right)}$ can be considered a heuristic step, made instead of running the Ray-Shooting algorithm many times in order to generate "the best starting point", which is relevant only if we actually evaluate $f\left(\widehat{K}\right)$ at each such point and take the point with the best value as the starting point. Since we evaluate $f\left(\widehat{K}\right)$ in the main algorithm in any case, we can avoid the repeated execution of the Ray-Shooting algorithm. The outer loop is similar to what is done in the Hide-and-Seek algorithm (see [24,25]). The convergence in probability of the Hide-and-Seek algorithm can be found in [26].
Algorithm 1 Optimal PID controller randomized algorithm 
Require: An algorithm for deciding Ω-stability and an algorithm for computing $f\left(\widehat{K}\right)$. Input: ${\widehat{K}}^{\left(0\right)}$ such that ${\widehat{A}}_{c\ell}\left({\widehat{K}}^{\left(0\right)}\right)$ is Ω-stable and $I-{\widehat{K}}_{3}^{\left(0\right)}{C}_{2}{B}_{2}$ is invertible (with the needed structure (6) if ${H}_{2}$-norm minimization is sought), $\epsilon >0,{r}_{\infty}>0,h>0$ and integers $m,n,s$. Output: ${\widehat{K}}_{*}$ that globally minimizes $f\left(\widehat{K}\right)$ (or is within $\epsilon $ distance from a global minimum of $f\left(\widehat{K}\right)$) in the h-radius closed ball centered at ${\widehat{K}}^{\left(0\right)}$.
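The loop structure described above can be sketched as follows. This is our schematic reading of Algorithm 1 (the sampling of the cone base is simplified here), not the authors' exact pseudocode:

```python
import numpy as np

def algorithm1(f, is_feasible, K0, h, r_inf, m, n, s, rng=None):
    # f: objective (e.g. the H-infinity norm of the closed loop);
    # is_feasible: Omega-stability test for a candidate SOF;
    # m, n, s: outer / inner / ray-scan iteration counts.
    rng = np.random.default_rng() if rng is None else rng
    K_best, f_best = K0.copy(), f(K0)
    for _ in range(m):                       # outer loop: move the cone vertex
        vertex = K_best.copy()
        U = rng.standard_normal(vertex.shape)
        U /= np.linalg.norm(U)               # random unit direction U^(0)
        L0 = vertex + h * U                  # center of the cone base
        for _ in range(n):                   # inner loop: pick F on the base
            V = rng.standard_normal(vertex.shape)
            V -= np.sum(V * U) * U           # project onto <V, U>_F = 0
            nv = np.linalg.norm(V)
            F = L0 if nv == 0 else L0 + (rng.uniform(0, r_inf) / nv) * V
            for t in np.linspace(0.0, 1.0, s)[1:]:   # scan the ray vertex -> F
                K = (1 - t) * vertex + t * F
                if is_feasible(K):
                    fK = f(K)
                    if fK < f_best:
                        K_best, f_best = K.copy(), fK
    return K_best, f_best
```

On a toy convex objective (e.g. $f\left(K\right)={\Vert K-{K}_{\mathrm{target}}\Vert}_{F}^{2}$ with the feasibility test always true), the returned value never exceeds $f\left({K}^{\left(0\right)}\right)$ and typically improves on it quickly; for the actual design problems, $f$ and the feasibility test are the ones defined in Section 3.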

Remark 3.
Please note that ${\widehat{K}}^{\left(0\right)}$ should satisfy the feasibility condition that $I-{\widehat{K}}_{3}^{\left(0\right)}{C}_{2}{B}_{2}$ is invertible, in order to raise the probability that $I-{\widehat{K}}_{3}^{\left(best\right)}{C}_{2}{B}_{2}$ will be invertible. Indeed, one can execute the algorithm without this condition, i.e., execute the algorithm from an infeasible point.
Note also that during the optimization in the main loop of Algorithm 1, we do not check this feasibility condition and we may pass to a better point which might not be feasible, because such points might lead to a better feasible points. This is made because the set of nonfeasible points has measure 0 and thus, we should not be disturbed by the possibility of temporarily stepping on infeasible points, during the search. The feasibility condition is checked at the end of Algorithm 1 and experience with the algorithm shows that almost always we end up with a feasible point, when we start from a feasible point (which is almost always the case, if ${\widehat{K}}^{\left(0\right)}$ was found).
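The feasibility condition can be checked numerically before and after the run; a minimal sketch (the tolerance `tol` is an assumption, and the shapes match the example below: ${\widehat{K}}_{3}$ is $3\times 3$, ${C}_{2}$ is $3\times 5$, ${B}_{2}$ is $5\times 3$):

```python
import numpy as np

def is_feasible(K3, C2, B2, tol=1e-9):
    """Feasibility: I - K3 C2 B2 must be invertible.
    Test its smallest singular value against a tolerance."""
    M = np.eye(K3.shape[0]) - K3 @ C2 @ B2
    return bool(np.linalg.svd(M, compute_uv=False)[-1] > tol)
```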
5. Experimental Section
To synthesize an LQR- or ${H}_{\infty}$-optimal PID controller, we execute the Ray-Shooting algorithm ([6]) in order to obtain a starting SOF ${\widehat{K}}^{\left(0\right)}$ such that the augmented system (1) is $\Omega $-stable and $I-{\widehat{K}}_{3}^{\left(0\right)}{C}_{2}{B}_{2}$ is invertible. Next, we apply Algorithm 1 in order to obtain ${\widehat{K}}^{\left(best\right)}$, from which the PID controller is derived using (2). For the ${H}_{2}$-optimal PID controller, we execute the Ray-Shooting algorithm in order to obtain a starting SOF ${\widehat{K}}^{\left(0\right)}$, as above. Next, we execute the algorithm from [7] in order to obtain a SOF ${\widehat{K}}^{\left(1\right)}$ with the structure (6). This is the most demanding step, as can be seen from the following example. Finally, we execute Algorithm 1 in order to obtain ${\widehat{K}}^{\left(best\right)}$, from which the PID controller is derived using (2).
Example 1.
For the AC1 system (taken from COMPl${}_{e}$ib—see [27,28,29]) with the following state-space model:
$$\begin{array}{c}A=\left[\begin{array}{ccccc}\hfill 0& \hfill 0& \hfill 1.132& \hfill 0& \hfill 1\\ \hfill 0& \hfill 0.0538& \hfill 0.1712& \hfill 0& \hfill 0.0705\\ \hfill 0& \hfill 0& \hfill 0& \hfill 1& \hfill 0\\ \hfill 0& \hfill 0.0485& \hfill 0& \hfill 0.8556& \hfill 1.013\\ \hfill 0& \hfill 0.2909& \hfill 0& \hfill 1.0532& \hfill 0.6859\end{array}\right],\hfill \\ {B}_{1}=\left[\begin{array}{ccc}\hfill 0.03593& \hfill 0& \hfill 0.01672\\ \hfill 0& \hfill 0.00989& \hfill 0\\ \hfill 0& \hfill 0.07548& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0.05635\\ \hfill 0.00145& \hfill 0& \hfill 0.06743\end{array}\right],{B}_{2}=\left[\begin{array}{ccc}\hfill 0& \hfill 0& \hfill 0\\ \hfill 0.12& \hfill 1& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0\\ \hfill 4.419& \hfill 0& \hfill 1.665\\ \hfill 1.575& \hfill 0& \hfill 0.0732\end{array}\right]\hfill \\ {C}_{1}=\left[\begin{array}{ccccc}\hfill 0& \hfill 0.707106781186547& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 0.707106781186547& \hfill 0& \hfill 0\end{array}\right],\hfill \end{array}$$
and
$$\begin{array}{c}{D}_{1,1}={0}_{2\times 3}\hfill \\ {D}_{1,2}=\left[\begin{array}{ccc}\hfill 0.707106781186547& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0.707106781186547& \hfill 0\end{array}\right]\hfill \\ {C}_{2}=\left[\begin{array}{ccccc}\hfill 1& \hfill 0& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 1& \hfill 0& \hfill 0& \hfill 0\\ \hfill 0& \hfill 0& \hfill 1& \hfill 0& \hfill 0\end{array}\right]\hfill \\ {D}_{2,1}={0}_{3\times 3}\hfill \end{array}$$
with
$$\sigma \left(A\right)=\left\{\begin{array}{c}\hfill 0\\ \hfill 0.780052457480417\pm 1.029636749384622i\\ \hfill 0.017597542519583\pm 0.182585375210158i\end{array}\right\},$$
let the objective be to find a PID LQR-optimal controller, where the closed-loop eigenvalues should be in the rectangular region defined by:
$$\Omega =\left\{z\in \mathbb{C}\mid -1\le \Re \left(z\right)\le -0.1,\ \left|\Im \left(z\right)\right|\le 1\right\}.$$
With the following parameters for the algorithm: $m=100,n=100,s=100,h={r}_{\infty}=100,\epsilon ={10}^{-16},Q={I}_{8},R={I}_{2}$, we had the following feasible SOF
$${\widehat{K}}^{\left(0\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(0\right)}& {\widehat{K}}_{2}^{\left(0\right)}& {\widehat{K}}_{3}^{\left(0\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 1.139087350327457& \hfill 0.076805805847702& \hfill 1.175733971595813\\ \hfill 0.175620269295905& \hfill 1.249662743416654& \hfill 0.394592760970558\\ \hfill 2.524143751375688& \hfill 0.262506326222373& \hfill 3.380417685135832\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 0.265188982290714& \hfill 0.082432589634872& \hfill 0.236137372902669\\ \hfill 0.106177753653144& \hfill 0.443637620004996& \hfill 0.074457362520092\\ \hfill 0.527328403725350& \hfill 0.200910957443737& \hfill 0.850599462623321\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 1.120493844818183& \hfill 0.190581999014316& \hfill 0.522738193473694\\ \hfill 0.036425300944087& \hfill 0.028600461083115& \hfill 0.183279009240151\\ \hfill 3.229623103192055& \hfill 0.550761163041108& \hfill 2.205172558841497\end{array}\right],\hfill \end{array}$$
found by the Ray-Shooting algorithm in CPU time 0.0001 [s] and with LQR functional value ${f}_{LQR}\left({\widehat{K}}^{\left(0\right)}\right)=16.746246825328360$. Executing Algorithm 1, we had
$${\widehat{K}}^{\left(best\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(best\right)}& {\widehat{K}}_{2}^{\left(best\right)}& {\widehat{K}}_{3}^{\left(best\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 1.085160433924800& \hfill 0.152380406023858& \hfill 1.111840435453025\\ \hfill 0.173224251962231& \hfill 1.253268146513151& \hfill 0.449756208653024\\ \hfill 2.518304745419560& \hfill 0.284285880826183& \hfill 3.437786815141788\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 0.221989603556075& \hfill 0.073112868340369& \hfill 0.241805447889791\\ \hfill 0.119158793726812& \hfill 0.403709017174812& \hfill 0.120711030008226\\ \hfill 0.541730211464951& \hfill 0.187380061037601& \hfill 0.903873015737950\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 1.068327578247724& \hfill 0.146908957308029& \hfill 0.525373002842116\\ \hfill 0.006608329355743& \hfill 0.015609869717001& \hfill 0.182159600265849\\ \hfill 3.166355434538844& \hfill 0.515059657329335& \hfill 2.156140670934107\end{array}\right],\hfill \end{array}$$
with ${f}_{LQR}\left({\widehat{K}}^{\left(best\right)}\right)=13.601550793243616$ (an $18.778512\%$ relative improvement), in CPU time 76.343750 [s]. The resulting PID controller matrices are:
$$\begin{array}{c}{K}_{P}=\left[\begin{array}{ccc}\hfill 1.040671697032480& \hfill 0.034045803431248& \hfill 1.158219284410088\\ \hfill 0.177951420403750& \hfill 1.273076937581805& \hfill 0.454684211704099\\ \hfill 2.362328186885230& \hfill 0.369320416016725& \hfill 3.600390066086302\end{array}\right]\hfill \\ {K}_{I}=\left[\begin{array}{ccc}\hfill 0.200613800205014& \hfill 0.131015509566988& \hfill 0.255249027751546\\ \hfill 0.121430088217771& \hfill 0.409861485359054& \hfill 0.122139482935926\\ \hfill 0.466787102815339& \hfill 0.390385480823640& \hfill 0.951005918753228\end{array}\right]\hfill \\ {K}_{D}=\left[\begin{array}{ccc}\hfill 1.050500770135795& \hfill 0.146612915752025& \hfill 0.542836776832826\\ \hfill 0.004714134766140& \hfill 0.015578413703667& \hfill 0.184015220520444\\ \hfill 3.103855027173255& \hfill 0.514021741975605& \hfill 2.217368288447396\end{array}\right]\hfill \end{array}$$
and the resulting closed-loop eigenvalues are
$$\sigma \left({\widehat{A}}_{c\ell}\left({\widehat{K}}^{\left(best\right)}\right)\right)=\left\{\begin{array}{c}\hfill -0.915131493828863\pm 0.914610403731497i\\ \hfill -0.881963764205315\pm 0.191516911259354i\\ \hfill -0.730009436848610\\ \hfill -0.616780364831122\\ \hfill -0.329413066341451\pm 0.071827905231358i\end{array}\right\}.$$
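The regional constraint and an LQR-type cost can be checked numerically; a sketch assuming the rectangle $\Omega$ of this example, with `lqr_cost` being one common form of the LQR functional for a static gain via a Lyapunov equation (an assumption, not necessarily the paper's exact ${f}_{LQR}$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def in_omega(eigs, re_lo=-1.0, re_hi=-0.1, im_max=1.0):
    """Verify that a spectrum lies in the rectangular region Omega of Example 1."""
    e = np.asarray(eigs)
    return bool(np.all((e.real >= re_lo) & (e.real <= re_hi)
                       & (np.abs(e.imag) <= im_max)))

def lqr_cost(A_cl, Q, R, K, C):
    """One common LQR functional for a stable closed loop with static gain K:
    trace(P), where A_cl' P + P A_cl + Q + (K C)' R (K C) = 0."""
    W = Q + (K @ C).T @ R @ (K @ C)
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q
    P = solve_continuous_lyapunov(A_cl.T, -W)
    return float(np.trace(P))
```

With `A_cl = -I`, `Q = R = I` and `K = 0`, the Lyapunov solution is `P = I/2`, so the cost is the state dimension divided by two.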
Example 2.
For the same system as in Example 1, with the objective to find a PID ${H}_{\infty}$-norm optimal controller where the closed-loop eigenvalues should be in the same rectangular region $\Omega$, starting from the same ${\widehat{K}}^{\left(0\right)}$ with ${f}_{{H}_{\infty}}\left({\widehat{K}}^{\left(0\right)}\right)=0.095033139822324$, we had
$${\widehat{K}}^{\left(best\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(best\right)}& {\widehat{K}}_{2}^{\left(best\right)}& {\widehat{K}}_{3}^{\left(best\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 1.087304300685670& \hfill 0.045931838436646& \hfill 1.085574892719301\\ \hfill 0.133605778924987& \hfill 1.184070254573485& \hfill 0.359343489961314\\ \hfill 2.678924736751655& \hfill 0.264961819769927& \hfill 3.490264530663762\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 0.275424882288615& \hfill 0.138303327101369& \hfill 0.075398358659212\\ \hfill 0.090631547438998& \hfill 0.331640807629037& \hfill 0.054696779167659\\ \hfill 0.516085927047396& \hfill 0.184332519865895& \hfill 0.733373655058220\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 1.165473798084745& \hfill 0.076978746950025& \hfill 0.595094516639020\\ \hfill 0.052626935909114& \hfill 0.061681173045286& \hfill 0.250098357975546\\ \hfill 3.316496450850004& \hfill 0.694282662038967& \hfill 2.314168285879468\end{array}\right],\hfill \end{array}$$
with ${f}_{{H}_{\infty}}\left({\widehat{K}}^{\left(best\right)}\right)=0.072175978563672$ (a $24.051779\%$ relative improvement), in CPU time 126 [s]. The resulting PID controller matrices are:
$$\begin{array}{c}{K}_{P}=\left[\begin{array}{ccc}\hfill 1.068321790034687& \hfill 0.039576561177247& \hfill 1.102041006775961\\ \hfill 0.118395560901336& \hfill 1.115554481033627& \hfill 0.346149598520451\\ \hfill 2.507718675999537& \hfill 0.506251036541779& \hfill 3.638774839724999\end{array}\right]\hfill \\ {K}_{I}=\left[\begin{array}{ccc}\hfill 0.266534452864325& \hfill 0.160949047585517& \hfill 0.078679654648945\\ \hfill 0.083507865416445& \hfill 0.313495349833722& \hfill 0.052067557547737\\ \hfill 0.435901834009705& \hfill 0.388577609900052& \hfill 0.762968147108973\end{array}\right]\hfill \\ {K}_{D}=\left[\begin{array}{ccc}\hfill 1.159203620514524& \hfill 0.071881042424526& \hfill 0.607938725615309\\ \hfill 0.057651074688413& \hfill 0.057596508025015& \hfill 0.239806609825754\\ \hfill 3.259944795394558& \hfill 0.648305713745055& \hfill 2.430012099310619\end{array}\right]\hfill \end{array}$$
and the resulting closed-loop eigenvalues are
$$\sigma \left({\widehat{A}}_{c\ell}\left({\widehat{K}}^{\left(best\right)}\right)\right)=\left\{\begin{array}{c}\hfill -0.965762603991179\pm 0.955842860416991i\\ \hfill -0.507539612520429\pm 0.539780823728534i\\ \hfill -0.929257645094484\pm 0.106820410747074i\\ \hfill -0.402381802562664\pm 0.042181153576293i\end{array}\right\}.$$
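The ${H}_{\infty}$ objective can be sanity-checked with a frequency-grid lower bound on the closed-loop norm, i.e., the maximum over frequencies of the largest singular value of $C{\left(j\omega I-A\right)}^{-1}B+D$; this sketch is only a lower-bound estimate, not the evaluation method used in the paper:

```python
import numpy as np

def hinf_lower_bound(A, B, C, D, wgrid=None):
    """Grid-based lower bound on the H-infinity norm of the transfer matrix
    T(jw) = C (jw I - A)^{-1} B + D; the grid itself is an assumption."""
    if wgrid is None:
        wgrid = np.logspace(-3, 3, 2000)
    n = A.shape[0]
    best = 0.0
    for w in wgrid:
        T = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(T, compute_uv=False)[0])
    return best
```

For the scalar system $T(s)=1/(s+1)$ the norm is attained at $\omega =0$ and equals 1, which the grid recovers to within the grid resolution.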
Example 3.
For the system from Example 1, with the same $\Omega$ and the same ${\widehat{K}}^{\left(0\right)}$, the algorithm from [7] failed to find a structured SOF ${\widehat{K}}^{\left(1\right)}$. However, from another starting SOF, the above-mentioned algorithm succeeded in finding a structured SOF. For
$${\widehat{K}}^{\left(0\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(0\right)}& {\widehat{K}}_{2}^{\left(0\right)}& {\widehat{K}}_{3}^{\left(0\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 0.001662379279286& \hfill 0.330320315815391& \hfill 0.204742383931633\\ \hfill 0.107146675564809& \hfill 0.994156785847185& \hfill 0.016601261567192\\ \hfill 0.352905474519842& \hfill 0.942806073864177& \hfill 0.096372856789800\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 0.021994917878383& \hfill 0.051106382240571& \hfill 0.041069916425995\\ \hfill 0.041186023579647& \hfill 0.283840911655084& \hfill 0.016793398034650\\ \hfill 0.000278730967043& \hfill 0.136566568491544& \hfill 0.140207270149126\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(0\right)}=\left[\begin{array}{ccc}\hfill 0.066971504986511& \hfill 0.041204949529465& \hfill 0.574933673498160\\ \hfill 0.152954829804652& \hfill 0.044219575277023& \hfill 0.285308855216965\\ \hfill 1.286404253117917& \hfill 0.132606565745259& \hfill 1.514386124380330\end{array}\right],\hfill \end{array}$$
found by the Ray-Shooting algorithm from [6] in CPU time 0.0001 [s], applying the algorithm from [7] we found the structured SOF
$${\widehat{K}}^{\left(1\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(1\right)}& {\widehat{K}}_{2}^{\left(1\right)}& {\widehat{K}}_{3}^{\left(1\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(1\right)}=\left[\begin{array}{ccc}\hfill 0.001662379279286& \hfill 0.330320315815391& \hfill 0.204742383931633\\ \hfill 0.107146675564809& \hfill 0.994156785847185& \hfill 0.016601261567192\\ \hfill 0.352905474519842& \hfill 0.942806073864177& \hfill 0.096372856789800\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(1\right)}=\left[\begin{array}{ccc}\hfill 0.021994917878383& \hfill 0.051106382240571& \hfill 0.041069916425995\\ \hfill 0.041186023579647& \hfill 0.283840911655084& \hfill 0.016793398034650\\ \hfill 0.000278730967043& \hfill 0.136566568491544& \hfill 0.140207270149126\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(1\right)}=\left[\begin{array}{ccc}\hfill 0.000000000000001& \hfill 1.630868948596140& \hfill 0.213689638336193\\ \hfill 0.000000000000003& \hfill 0.516222895350342& \hfill 0.067639698397123\\ \hfill 3.828454587894386& \hfill 1.446059480160095& \hfill 0.287354792999600\end{array}\right],\hfill \end{array}$$
in CPU time 2126.84375 [s] = 35.447395 [min], with objective-function value ${f}_{{H}_{2}}\left({\widehat{K}}^{\left(1\right)}\right)=0.364564164927770$ (note the small values of the $(1,1)$ and $(2,1)$ entries of ${\widehat{K}}_{3}^{\left(1\right)}$, obtained from the structural constraints). Finally, executing Algorithm 1 with ${\widehat{K}}^{\left(1\right)}$ as a starting SOF, we found
$${\widehat{K}}^{\left(best\right)}=\left[\begin{array}{ccc}{\widehat{K}}_{1}^{\left(best\right)}& {\widehat{K}}_{2}^{\left(best\right)}& {\widehat{K}}_{3}^{\left(best\right)}\end{array}\right],$$
where
$$\begin{array}{c}{\widehat{K}}_{1}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 0.076825955660614& \hfill 0.288411611380877& \hfill 0.080218436303677\\ \hfill 0.092501965527188& \hfill 1.046837016022694& \hfill 0.126682619794964\\ \hfill 0.341825773843233& \hfill 0.939476093514249& \hfill 0.014192822579841\end{array}\right]\hfill \\ {\widehat{K}}_{2}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 0.054840679045778& \hfill 0.080221174899522& \hfill 0.114230924697635\\ \hfill 0.052402918799798& \hfill 0.301314353311879& \hfill 0.093055199101202\\ \hfill 0.061395620772426& \hfill 0.248330164720565& \hfill 0.198088337294925\end{array}\right]\hfill \\ {\widehat{K}}_{3}^{\left(best\right)}=\left[\begin{array}{ccc}\hfill 0.000000000000001& \hfill 1.623084863796692& \hfill 0.212669704596573\\ \hfill 0.000000000000003& \hfill 0.692852565144541& \hfill 0.090783146121880\\ \hfill 3.950508497484917& \hfill 1.418275521031274& \hfill 0.019182452364697\end{array}\right],\hfill \end{array}$$
with ${f}_{{H}_{2}}\left({\widehat{K}}^{\left(best\right)}\right)=0.212807638848134$ (a $41.626835\%$ relative improvement), in CPU time 99.890625 [s]. The resulting PID controller matrices are:
$$\begin{array}{c}{K}_{P}=\left[\begin{array}{ccc}\hfill 0.346143328950049& \hfill 3.208734600167995& \hfill 0.360572630013263\\ \hfill 0.207466522885631& \hfill 2.539677456718627& \hfill 0.314844821170003\\ \hfill 0.106492276227244& \hfill 2.116381906236221& \hfill 0.370976920044842\end{array}\right]\hfill \\ {K}_{I}=\left[\begin{array}{ccc}\hfill 0.203018492396510& \hfill 0.925289371971877& \hfill 0.231015500114683\\ \hfill 0.115656159814819& \hfill 0.730540561515251& \hfill 0.240431886773992\\ \hfill 0.068084340579495& \hfill 0.630299798860383& \hfill 0.103593086737016\end{array}\right]\hfill \\ {K}_{D}=\left[\begin{array}{ccc}\hfill 0.000000000000010& \hfill 3.233767462898737& \hfill 0.423714364176847\\ \hfill 0.000000000000007& \hfill 1.380410927195360& \hfill 0.180872602940673\\ \hfill 3.950508497484925& \hfill 2.825713760036134& \hfill 0.165231487536706\end{array}\right]\hfill \end{array}$$
and the resulting closed-loop eigenvalues are
$$\sigma \left({\widehat{A}}_{c\ell}\left({\widehat{K}}^{\left(best\right)}\right)\right)=\left\{\begin{array}{c}\hfill -0.350618286842131\pm 0.990354187385021i\\ \hfill -0.963555748849909\pm 0.645732021985088i\\ \hfill -0.127768318435090\pm 0.216451742947409i\\ \hfill -0.159486067934620\\ \hfill -0.462031919768989\end{array}\right\}.$$
Please note that
$$\begin{array}{cc}\hfill {\widehat{D}}_{1,1}& ={D}_{1,1}-{D}_{1,2}{\widehat{K}}_{3}^{\left(1\right)}{C}_{2}{B}_{1}\hfill \\ & ={10}^{-16}\cdot \left[\begin{array}{ccc}\hfill 0.254063466480327& \hfill 0.173472347597681& \hfill 0.118228253814391\\ \hfill 0.762190399440980& \hfill 0.273218947466347& \hfill 0.354684761443172\end{array}\right]\hfill \end{array}$$
and
$$\begin{array}{cc}\hfill {\widehat{D}}_{1,1}& ={D}_{1,1}-{D}_{1,2}{\widehat{K}}_{3}^{\left(best\right)}{C}_{2}{B}_{1}\hfill \\ & ={10}^{-16}\cdot \left[\begin{array}{ccc}\hfill 0.254063466480326& \hfill 0.156125112837913& \hfill 0.118228253814391\\ \hfill 0.762190399440979& \hfill 0.277555756156289& \hfill 0.354684761443172\end{array}\right]\hfill \end{array}$$
which we treat numerically as 0.
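Treating ${\widehat{D}}_{1,1}$ as 0 matters because the ${H}_{2}$ norm is finite only for a strictly proper closed loop. A sketch of evaluating the norm through the controllability Gramian (the tolerance `dtol` is an assumption, chosen to pass the $\sim {10}^{-16}$ residues above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C, D, dtol=1e-12):
    """H2 norm of a stable system via the controllability Gramian P,
    which solves A P + P A' + B B' = 0; requires (numerically) zero feedthrough."""
    if np.max(np.abs(D)) > dtol:
        raise ValueError("nonzero feedthrough term: H2 norm is infinite")
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

For the scalar system $1/(s+1)$ the Gramian is $1/2$, giving the norm $\sqrt{1/2}$.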
6. Concluding Remarks
In this article, a randomized method for synthesizing optimal PID controllers (for the standard LQR, ${H}_{\infty}$ and ${H}_{2}$ optimization problems) for coupled MIMO systems was introduced. The method is based on a (direct) reduction of the problem to the SOF problem (occasionally with some structure). The randomization circumvents the need to solve any SDP or BMI programs. Therefore, the method can be applied to more involved PID controller designs that can be formulated as structured SOF problems, e.g., the design of 2DOF PID controllers and of robust and gain-scheduled PID controllers for Linear Parameter Varying (LPV) systems (see [13]), without the need to solve any BMI programs, and thus can be applied to larger systems. Moreover, the method has a proof of convergence (in probability) to a global optimum of the objective function.
Conflicts of Interest
The author declares no conflict of interest.
References
 Visioli, A. Research Trends for PID Controllers. Acta Polytech. 2012, 52, 144–150. [Google Scholar]
 Suresh, A.; Meena, S.; Chitra, K. Controller design for MIMO process using optimization algorithm. Int. J. Pure Appl. Math. 2017, 117, 163–170. [Google Scholar]
 Zheng, F.; Wang, Q.G.; Lee, T.H. On the design of multivariable PID controllers via LMI approach. Automatica 2002, 38, 517–526. [Google Scholar] [CrossRef]
 Saeki, M. Fixed structure PID controller design for standard H_{∞} control problem. Automatica 2006, 42, 93–100. [Google Scholar] [CrossRef]
 Boyd, S.; Hast, M.; Åström, K.J. MIMO PID tuning via iterated LMI restriction. Int. J. Robust Nonlinear Control 2016, 26, 1718–1731. [Google Scholar] [CrossRef]
 Peretz, Y. A randomized approximation algorithm for the minimalnorm staticoutputfeedback problem. Automatica 2016, 63, 221–234. [Google Scholar] [CrossRef]
 Peretz, Y. On applications of the RayShooting method for structured and structuredsparse staticoutputfeedbacks. Int. J. Syst. Sci. 2017, 48, 1902–1913. [Google Scholar] [CrossRef]
 Peretz, Y. On application of the RayShooting Method for LQR via staticoutputfeedback. Algorithms 2018, 11, 8. [Google Scholar] [CrossRef]
 Lin, C.; Wang, Q.G.; Lee, T.H. An improvement on multivariable PID controller design via iterative LMI approach. Automatica 2004, 40, 519–525. [Google Scholar] [CrossRef]
 Cao, Y.Y.; Lam, J.; Sun, Y.X. Static Output Feedback Stabilization: An ILMI Approach. Automatica 1998, 34, 1641–1645. [Google Scholar] [CrossRef]
 Cao, Y.Y.; Sun, Y.X.; Lam, J. Simultaneous Stabilization via Static Output Feedback and State Feedback. IEEE Trans. Autom. Control 1999, 44, 1277–1282. [Google Scholar]
 Rosinová, D.; Veselý, V. Robust Static output feedback for discretetime systems—LMI approach. Periodica Polytech. Ser. El. Eng. 2004, 48, 151–163. [Google Scholar]
 Bianchi, F.D.; Mantz, R.J.; Christiansen, C.F. Multivariable PID control with setpoint weighting via BMI optimisation. Automatica 2008, 44, 472–478. [Google Scholar] [CrossRef]
 Blondel, V.; Tsitsiklis, J.N. NPhardness of some linear control design problems. SIAM J. Control Optim. 1997, 35, 2118–2127. [Google Scholar] [CrossRef]
 Mesbahi, M. A semidefinite programming solution of the least order dynamic output feedback synthesis problem. In Proceedings of the 38th IEEE Conference on Decision and Control, Phoenix, AZ, USA, 7–10 December 1999; Volume 2, pp. 1851–1856. [Google Scholar]
 Zhang, H.; Shi, Y.; Mehr, A.S. Robust Static Output Feedback Control and Remote PID Design for Networked Motor Systems. IEEE Trans. Ind. Electron. 2011, 58, 5396–5405. [Google Scholar] [CrossRef]
 Borges, R.A.; Calliero, T.R.; Oliveira, C.L.F.; Peres, P.L.D. Improved conditions for reducedorder H_{∞} filter design as a static output feedback problem. In Proceedings of the American Control Conference 2011, San Francisco, CA, USA, 29 June–1 July 2011. [Google Scholar]
 Nemirovskii, A. Several NPhard problems arising in robust stability analysis. Math. Control Signal Syst. 1993, 6, 99–105. [Google Scholar] [CrossRef]
 Fu, M. Pole placement via static output feedback is NPhard. IEEE Trans. Autom. Control. 2004, 49, 855–857. [Google Scholar] [CrossRef]
 Sanchez Peña, R.; Sznaier, M. Robust Systems, Theory and Applications; Wiley: New York, NY, USA, 1998. [Google Scholar]
 Henrion, D.; Loefberg, J.; Kočvara, M.; Stingl, M. Solving Polynomial static output feedback problems with PENBMI. In Proceedings of the IEEE Conference on Decision Control and European Control Conference, Sevilla, Spain, 12–15 December 2005. [Google Scholar]
 Yang, K.; Orsi, R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica 2006, 42, 2143–2150. [Google Scholar] [CrossRef]
 Gumussoy, S.; Henrion, D.; Millstone, M.; Overton, M.L. Multiobjective Robust Control with HIFOO 2.0. In Proceedings of the IFAC Symposium on Robust Control Design, Haifa, Israel, 16–18 June 2009. [Google Scholar]
 Romeijn, H.E.; Smith, R.L. Simulated Annealing for Constrained Global Optimization. Glob. Optim. 1994, 5, 101–126. [Google Scholar] [CrossRef]
 Zabinsky, Z.B. Stochastic Adaptive Search for Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003. [Google Scholar]
 Bélisle, C.J.P. Convergence Theorems for a Class of Simulated Annealing Algorithms on ${\mathbb{R}}^{d}$. J. Appl. Probab. 1992, 29, 885–895. [Google Scholar] [CrossRef]
 Leibfritz, F. COMPl_{e}ib: Constrained MatrixOptimization Problem Library—A Collection of Test Examples for Nonlinear Semidefinite Programs, Control System Design and Related Problems. Available online: http://www.friedemannleibfritz.de/COMPlib_Data/COMPlib_Main_Paper.pdf (accessed on 1 June 2018).
 Leibfritz, F.; Lipinski, W. COMPl_{e}ib 1.0—User Manual and Quick Reference. Available online: http://www.friedemannleibfritz.de/COMPlib_Data/COMPlib_User_Guide.pdf (accessed on 1 June 2018).
 Leibfritz, F. Description of the Benchmark Examples in COMPl_{e}ib 1.0. Available online: http://www.friedemannleibfritz.de/COMPlib_Data/COMPlib_Test_Set.pdf (accessed on 1 June 2018).
© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).