In this section, we discuss an augmented monotonic tracking controller applied to a non-square system with prescribed output constraints. To enhance robustness, a linear time-invariant system augmented with input integration is considered.

#### 2.1. Augmented Monotonic Tracking Controllers

Consider the linear time-invariant system:

where, for all $t\ge 0$, $x(t)\in\mathbb{R}^{n}$ is the state, $u(t)\in\mathbb{R}^{m}$ is the control input, $y(t)\in\mathbb{R}^{p}$ is the output, and $A$, $B$, $C$, and $D$ are constant matrices of appropriate dimensions. Assume that $B$ has full column rank and $C$ has full row rank. In this paper, the aircraft engine state variable model (SVM) in Equation (2) is extracted using the Commercial Modular Aero-Propulsion System Simulation (CMAPSS). CMAPSS is a Simulink package and database with a user-friendly graphical user interface (GUI) that allows the user to perform model extraction, elementary control design, and simulation with little effort [13]. The linearization method used in CMAPSS to establish the engine SVM is a bias derivative method; for details, readers may refer to [14].

As aircraft engines are often regarded as single-input, multi-output plants, system $\Sigma$ may be non-square. We therefore decompose system $\Sigma$ into two components, a square system $\Sigma_{s}$ and a system $\Sigma_{n}$, which are governed by:

where $\Sigma_{s}$ is a square system whose main outputs $y_{s}(t)\in\mathbb{R}^{p_{s}}$ are to be tracked, and $\Sigma_{n}$ is a system with the constrained auxiliary outputs $y_{n}(t)\in\mathbb{R}^{p_{n}}$. The output vector $y(t)$ and the constant matrices $C$ and $D$ of system $\Sigma$ can be represented as:

where $y_{i}(t)$ denotes the $i$th output of system $\Sigma$, and $c_{i}$ and $d_{i}$ denote the $i$th row vectors of the matrices $C$ and $D$, respectively.
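As a concrete illustration, the row partition of $C$ and $D$ into a square main part and an auxiliary part can be sketched as follows (a minimal sketch; the function name and the index-based selection are ours, not from the paper):

```python
import numpy as np

def split_main_aux(C, D, main_rows):
    """Partition the output matrices of a non-square system: the rows in
    main_rows form the square subsystem (tracked main outputs y_s); the
    remaining rows form the constrained auxiliary subsystem (y_n)."""
    main = np.asarray(main_rows)
    aux = np.setdiff1d(np.arange(C.shape[0]), main)
    return (C[main], D[main]), (C[aux], D[aux])
```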

A monotonic tracking controller achieves zero steady-state error to a step reference for a linear system with no uncertainty [8]. In most real cases, however, there are uncertainties in the plant parameters that make the tracking error nonzero and reduce the tracking accuracy of a state feedback law for aircraft engines. One practical remedy is integral control, which yields robust tracking and enhances the control system's ability to track references and reject uncertainties. To achieve integral control, the augmented plant is constructed as:

where $y_{s}\in\mathbb{R}^{p_{s}}$, $y_{n}\in\mathbb{R}^{p_{n}}$, $u_{r}\in\mathbb{R}^{m}$ is the new control input, which equals $\dot{u}$, and the augmented state vector and system matrices are defined as:

The number of system outputs is:

The order of the system $\Sigma_{sa}$ is:
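Since the augmented state is $x_{a}=[x^{T}\ u^{T}]^{T}$ and the new input is $u_{r}=\dot{u}$, the augmentation can be sketched numerically as follows (the function name is ours; this is the standard input-integration form implied by those definitions):

```python
import numpy as np

def augment_with_input_integrator(A, B, C, D):
    """Build the input-integration augmented plant: with x_a = [x; u]
    and u_r = du/dt,
        d/dt [x; u] = [[A, B], [0, 0]] [x; u] + [0; I_m] u_r,
        y = [C, D] [x; u]."""
    n, m = B.shape
    Aa = np.block([[A, B],
                   [np.zeros((m, n)), np.zeros((m, m))]])
    Ba = np.vstack([np.zeros((n, m)), np.eye(m)])
    Ca = np.hstack([C, D])
    return Aa, Ba, Ca
```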

First, the following assumptions are adopted in order to design a monotonic tracking controller for system $\Sigma_{s}$.

**Assumption** **1.** System $\Sigma_{s}$ is right invertible and stabilizable, and $\Sigma_{s}$ has no invariant zeros at the origin.

**Assumption** **2.** System $\Sigma_{s}$ is square.

**Assumption** **3.** System $\Sigma_{s}$ has at least $n-p_{s}$ distinct invariant zeros in $\mathbb{C}^{-}$.

Next, the relationship between the invariant zeros of system $\Sigma_{sa}$ and those of system $\Sigma_{s}$ is discussed.

**Theorem** **1.** The invariant zeros of system $\Sigma_{s}$ are the same as those of the augmented system $\Sigma_{sa}$.

**Proof.** Let $\{\lambda_{1},\dots,\lambda_{n-p_{s}}\}$ denote the set of distinct invariant zeros of system $\Sigma_{s}$. Then the rank of the system matrix pencil drops below its normal value at $s=\lambda_{i}$. The system matrix pencil of system $\Sigma_{sa}$ is given by:

where $I_{t_{1}}$ and $I_{t_{2}}$ are the matrices formed from $I_{m}$, $I_{n}$, and $I_{p_{s}}$, which can be expressed as:

where $I_{m}$, $I_{n}$, $I_{m+n}$, and $I_{p_{s}}$ are identity matrices of appropriate dimensions. The properties of elementary matrix transformations imply $\operatorname{rank}[P_{\Sigma_{sa}}(\lambda_{i})]=\operatorname{rank}[P_{\Sigma_{s}}(\lambda_{i})]+m$. Hence, the invariant zeros of systems $\Sigma_{s}$ and $\Sigma_{sa}$ are the same. □
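Numerically, the invariant zeros of a square system can be computed as the finite generalized eigenvalues of the Rosenbrock pencil; a sketch (our own helper, using SciPy's generalized eigensolver, not a routine from the paper):

```python
import numpy as np
from scipy.linalg import eig

def invariant_zeros(A, B, C, D):
    """Invariant zeros of a square system (A, B, C, D): the finite
    generalized eigenvalues s of det([[A - sI, B], [C, D]]) = 0, i.e.
    the values at which the Rosenbrock pencil loses rank."""
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[A, B], [C, D]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((p, n)), np.zeros((p, m))]])
    w = eig(M, N, right=False)       # infinite eigenvalues flagged as inf/nan
    return w[np.isfinite(w)]
```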

For definiteness and without loss of generality, Assumption 3 is replaced by the following Assumption 4:

**Assumption** **4.** System $\Sigma_{sa}$ has at least $n_{a}-2p_{s}$ distinct invariant zeros in $\mathbb{C}^{-}$.

The following method designs a tracking controller, with state feedback gain matrix $F$, such that $A_{a}+B_{a}F$ is stable for a step reference signal. Let $u_{r,ss}\in\mathbb{R}^{m}$ and $x_{a,ss}\in\mathbb{R}^{m+n}$ denote the control input and the state at steady state, respectively. Then:

for any step reference $r\in\mathbb{R}^{p_{s}}$, where $u_{r,ss}=\dot{u}=0$ and $x_{a,ss}=\left[\begin{array}{cc}x_{ss}^{T}& u_{ss}^{T}\end{array}\right]^{T}$ is obtained by solving the following equation:
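Under Assumptions 1 and 2 ($\Sigma_{s}$ square, no invariant zero at the origin), the pair $(x_{ss}, u_{ss})$ can be computed from the standard regulator-equation form $Ax_{ss}+Bu_{ss}=0$, $C_{s}x_{ss}+D_{s}u_{ss}=r$; since the equation referenced above is not reproduced here, this specific form is our assumption:

```python
import numpy as np

def steady_state_target(A, B, Cs, Ds, r):
    """Solve the steady-state (regulator-type) equations
        A x_ss + B u_ss = 0,   Cs x_ss + Ds u_ss = r,
    for the target pair (x_ss, u_ss).  The coefficient matrix is square
    because Sigma_s is square (Assumption 2), and nonsingular because
    Sigma_s has no invariant zero at the origin (Assumption 1)."""
    n, m = B.shape
    M = np.block([[A, B], [Cs, Ds]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(r)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]
```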

Let the tracking error vector and the suppositional tracking error vector be defined as $\epsilon_{sa}(t)=r-y_{s}(t)$ and $\epsilon_{na}(t)=r_{n}-y_{n}(t)$, respectively, where the suppositional tracking reference is defined as $r_{n}=C_{an}x_{a,ss}$. Applying the state feedback control law:

to Equation (5) and employing the change of variable $\xi_{a}=x_{a}-x_{a,ss}$, we obtain the closed-loop autonomous system:

Since ${A}_{a}+{B}_{a}F$ is stable, ${x}_{a}$ converges to ${x}_{a,ss}$, ${y}_{s}$ converges to $r$ and ${y}_{n}$ converges to ${r}_{n}$ as $t$ goes to infinity.

**Definition** **1.** If the main output ${y}_{s}\left(t\right)$ and the auxiliary output ${y}_{n}\left(t\right)$ obtained from applying ${u}_{r}\left(t\right)$ in Equation (13) are all monotonic, then we define this property as generalized monotonicity.

The following is the specific design method to shape the responses of the main and auxiliary outputs. The key idea is the choice of a suitable closed-loop eigenstructure, composed of eigenvalues $L_{a}=\{\lambda_{1},\dots,\lambda_{n_{a}}\}\subset\mathbb{C}$ and eigenvectors $v=\{\upsilon_{1},\dots,\upsilon_{n_{a}}\}\subset\mathbb{C}^{n_{a}}$, such that generalized monotonicity is achieved. First, decompose the set $L_{a}=\{\lambda_{1},\dots,\lambda_{n_{a}}\}$ into two parts. One part is the set of $n_{a}-2p_{s}$ distinct invariant zeros, composed of $\lambda_{i}$ for $i\in\{1,\dots,n_{a}-2p_{s}\}$. The other part is the set composed of $\lambda_{i}$ for $i\in\{n_{a}-2p_{s}+1,\dots,n_{a}\}$, which may be freely chosen as any distinct real stable modes. To obtain $v$, let $S=\{s_{1},\dots,s_{n_{a}}\}\subset\mathbb{R}^{p_{s}}$ be such that:

where $\{e_{1},\dots,e_{p_{s}}\}$ is the canonical basis of $\mathbb{R}^{p_{s}}$. Provided $v$ is linearly independent, the sets $v=\{\upsilon_{1},\dots,\upsilon_{n_{a}}\}\subset\mathbb{C}^{n_{a}}$ and $w=\{\omega_{1},\dots,\omega_{n_{a}}\}\subset\mathbb{C}^{p_{s}}$ are obtained by solving the Rosenbrock matrix equation:

for $s_{i}\in S$. If the sets $L_{a}$, $v$, and $w$ all meet the requirements of Proposition 1 in [13], then a gain matrix $F$ can be obtained by the procedure given in that paper such that $A_{a}+B_{a}F$ has the desired eigenstructure. It is worth noting that when $L_{a}$ is real, $F=WV^{-1}$, where $W=[\omega_{1},\omega_{2},\dots,\omega_{n_{a}}]$ and $V=[\upsilon_{1},\upsilon_{2},\dots,\upsilon_{n_{a}}]$. Since $\omega_{i}=F\upsilon_{i}$, the vectors in $v$ satisfy:
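For real $L_{a}$, the gain $F=WV^{-1}$ can be assembled by solving a Rosenbrock-type equation for each pole/direction pair. Below is a minimal Moore-style sketch (our own routine, not the paper's procedure; the feedthrough is taken as zero and all names are assumptions):

```python
import numpy as np

def eigenstructure_gain(Aa, Ba, Ca, lambdas, S):
    """Moore-style eigenstructure assignment, F = W V^{-1}.

    For each desired real pole lam with output direction s, solve
        [[lam*I - Aa, -Ba], [Ca, 0]] [v; w] = [0; s],
    so (lam*I - Aa) v = Ba w and Ca v = s.  With F = W V^{-1} we get
    F v_i = w_i, hence (Aa + Ba F) v_i = lam_i v_i."""
    na, m = Ba.shape
    p = Ca.shape[0]
    V, W = [], []
    for lam, s in zip(lambdas, S):
        M = np.block([[lam * np.eye(na) - Aa, -Ba],
                      [Ca, np.zeros((p, m))]])
        rhs = np.concatenate([np.zeros(na), np.atleast_1d(s)])
        vw = np.linalg.lstsq(M, rhs, rcond=None)[0]
        V.append(vw[:na])
        W.append(vw[na:])
    V = np.column_stack(V)
    W = np.column_stack(W)
    return W @ np.linalg.inv(V)
```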

**Notation** **1.** For each $k\in\{1,\dots,p_{s}\}$, let:

- (1) $\upsilon_{k,1}$ and $\upsilon_{k,2}$ denote the eigenvectors in $v$ associated with the canonical basis vector $e_{k}$ in Equation (15), and let $\lambda_{k,1}$ and $\lambda_{k,2}$ be the eigenvalues corresponding to $\upsilon_{k,1}$ and $\upsilon_{k,2}$, ordered such that $\lambda_{k,1}<\lambda_{k,2}$ in each case;

- (2) let $\alpha:=V^{-1}\xi_{a,0}$ be the coordinate vector of $\xi_{a,0}$ in terms of $v$. Then define:

**Theorem** **2.** Assume that Assumptions 1, 2, and 4 are all satisfied. Let $L_{a}$ be a set of desired closed-loop poles, and assume that the set $v$ of associated eigenvectors obtained from solving Equation (16) with $s_{i}$ in Equation (15) is linearly independent. Let $r\in\mathbb{R}^{p_{s}}$ and $x_{a,0}\in\mathbb{R}^{n_{a}}$ be any step reference and any initial condition, respectively. Then the output $y_{s}(t)$ obtained from applying $u_{r}(t)$ in Equation (13) to $\Sigma_{sa}$ tracks $r$ monotonically if and only if $h_{k}=(\alpha_{k,1}\lambda_{k,1}+\alpha_{k,2}\lambda_{k,2})\alpha_{k,2}\lambda_{k,2}\ge 0$ for all $k\in\{1,\dots,p_{s}\}$.

**Proof.** The tracking error vector can be expressed as:

(Sufficiency). If $h_{k}\ge 0$, the following two possible situations should be taken into consideration:

If condition 1 holds, $f_{k}(t)$ increases monotonically with increasing $t$ and takes its minimum value at $t=0$. The sign of $\dot{\epsilon}_{sa,k}(t)$ is determined by the sign of $f_{k}(t)$, as $e^{\lambda_{k,1}t}$ is always positive. Then we have $f(t=0)=f_{1}\ge 0$, which yields $\dot{\epsilon}_{sa,k}(t)\le 0$ for all $t\in(0,\infty)$, so the monotonicity of $\epsilon_{sa,k}(t)$ is preserved. Thus, the $k$th component of the output $y_{s}(t)$ tracks $r$ monotonically. The proof of condition 2 is similar to that of condition 1.

(Necessity). If $\lambda_{k,1}<\lambda_{k,2}<0$, we consider the following four possible situations:

If condition 1 holds, we have $\alpha_{k,1}\lambda_{k,1}<0$ and $\alpha_{k,2}\lambda_{k,2}>0$. To keep the sign of $\dot{\epsilon}_{sa,k}(t)$ unchanged, it is only necessary that $\alpha_{k,1}\lambda_{k,1}+\alpha_{k,2}\lambda_{k,2}\ge 0$ hold, since $e^{(\lambda_{k,1}-\lambda_{k,2})t}$ is monotonic in $t$. The proof of condition 2 is similar to condition 1. If condition 3 holds, then $\alpha_{k,1}\lambda_{k,1}>0$ and $\alpha_{k,2}\lambda_{k,2}>0$. Thus, in either case $\dot{\epsilon}_{sa,k}(t)$ does not change sign. The proof of condition 4 is similar to condition 3.

Therefore, the output $y_{s}(t)$ converges to $r$ monotonically if and only if $(\alpha_{k,1}\lambda_{k,1}+\alpha_{k,2}\lambda_{k,2})\alpha_{k,2}\lambda_{k,2}\ge 0$ for all $k\in\{1,\dots,p_{s}\}$.
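The condition of Theorem 2 is a simple sign test on the modal coefficients; a sketch (the array layout, one row per main output, is our choice):

```python
import numpy as np

def monotone_tracking_ok(alpha, lam):
    """Theorem 2's test: (a_{k,1} l_{k,1} + a_{k,2} l_{k,2}) a_{k,2} l_{k,2} >= 0
    for every main output k.  alpha and lam have shape (p_s, 2), holding
    the modal coordinates and eigenvalues for each output channel."""
    a1, a2 = alpha[:, 0], alpha[:, 1]
    l1, l2 = lam[:, 0], lam[:, 1]
    h = (a1 * l1 + a2 * l2) * (a2 * l2)
    return bool(np.all(h >= 0))
```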

Having obtained the condition for monotonicity of $y_{s}(t)$, we now consider how to keep the output $y_{n}(t)$ monotonic in order to obtain generalized monotonicity. The suppositional tracking error vector $\epsilon_{na}(t)$ is defined as:

where $c_{an,k}$ is the $k$th row vector of $C_{an}$, $g_{i,k}=c_{an,k}\upsilon_{i}\alpha_{i}$ for $i\in\{1,\dots,n_{a}-2p_{s}\}$, and $g_{(i,1),k}=c_{an,k}\upsilon_{i,1}\alpha_{i,1}$ and $g_{(i,2),k}=c_{an,k}\upsilon_{i,2}\alpha_{i,2}$ for $i\in\{1,\dots,p_{s}\}$. Then $m_{i,k}$ for $k\in\{1,\dots,p_{n}\}$ and $\lambda_{i}$ for $i\in\{n_{a}-2p_{s}+1,\dots,n_{a}\}$ are given by Equations (26) and (27), respectively. Then:

In Equations (26) and (27), $j=(i+2p_{s}-n_{a}+1)/2$ if $i+2p_{s}-n_{a}$ is odd and $j=(i+2p_{s}-n_{a})/2$ if $i+2p_{s}-n_{a}$ is even. Let $\epsilon_{na,k}$ denote the $k$th component of the suppositional tracking error $\epsilon_{na}$ for $k\in\{1,\dots,p_{n}\}$. Then:

□

To ensure the monotonicity of $\epsilon_{na,k}$ for $t>0$, we should check whether $\dot{\epsilon}_{n,k}(t)$ changes sign once the poles have been placed at the desired closed-loop positions. One approach is offered in [15]. However, its results are conservative, because it provides only a sufficient condition. The reason no necessary and sufficient condition is offered may be that it is difficult to find an analytical solution for high-order systems. For low-order systems, however, it is easier to obtain a less conservative condition, even a necessary and sufficient one. It is therefore worth first considering the actual order of the aircraft engine system and then deciding which method to employ. In fact, the dynamics of a turbine engine can be approximated by a set of low-order linear models around operating points [16]. There are three basic types of dynamic effects in gas turbine engines: shaft dynamics caused by inertial effects, pressure dynamics caused by mass storage effects, and temperature dynamics caused by energy storage as well as heat transfer between the gas and the outer casing.

Among the three, shaft dynamics play the most important role in the dynamic performance of gas turbine engines, followed by temperature dynamics, with pressure dynamics last. This is mainly because shaft speeds are directly linked to the mass flow through the engine and to thrust, the main output manipulated by the propulsion control system. Moreover, the temperature dynamics of the turbines, especially the high-pressure turbine, are also considered in the analysis of dynamic performance. Pressure dynamics, which have minimal impact on dynamic performance, are usually ignored for simplicity.

Shaft dynamics are generally what is modeled in two-spool turbofan engines; the number of state variables is therefore 2, i.e., the model is second order. Taking the shaft dynamics of a two-spool aircraft engine into consideration, system $\Sigma_{aug}$ is third order (the augmented state combines the rotor speeds and the fuel flow), and the following theorem provides a necessary and sufficient condition for ensuring the monotonicity of the auxiliary outputs.

**Theorem** **3.** Assume that $\Sigma_{aug}$ is a third-order system. Let $m_{i,k}$ be a real constant for all $i\in\{1,2,3\}$ and $k\in\{1,\dots,p_{n}\}$, and let $\{\lambda_{1},\lambda_{2},\lambda_{3}\}$ be a set of real numbers with $\lambda_{1}<\lambda_{2}<\lambda_{3}<0$.

There exists a state feedback control law (12) such that the $k$th output $y_{n,k}(t)$ of system $\Sigma_{aug}$ converges monotonically to the suppositional tracking reference signal $r_{n,k}$ if and only if one of the following conditions holds:

- (1) $m_{2,k}\lambda_{2}>0$, $m_{3,k}\lambda_{3}>0$ and $m_{1,k}\lambda_{1}+m_{2,k}\lambda_{2}+m_{3,k}\lambda_{3}>0$;

- (2) $m_{2,k}\lambda_{2}<0$, $m_{3,k}\lambda_{3}<0$ and $m_{1,k}\lambda_{1}+m_{2,k}\lambda_{2}+m_{3,k}\lambda_{3}<0$;

- (3) $m_{2,k}\lambda_{2}>0$, $m_{3,k}\lambda_{3}<0$ and $m_{1,k}\lambda_{1}+g_{k}(t^{\ast})<0$;

- (4) $m_{2,k}\lambda_{2}<0$, $m_{3,k}\lambda_{3}>0$ and $m_{1,k}\lambda_{1}+g_{k}(t^{\ast})>0$.

where $t^{\ast}=\frac{1}{\lambda_{3}-\lambda_{2}}\ln\left(\frac{m_{2,k}\lambda_{2}(\lambda_{2}-\lambda_{1})}{m_{3,k}\lambda_{3}(\lambda_{1}-\lambda_{3})}\right)$ and $g_{k}(t)=m_{2,k}\lambda_{2}e^{(\lambda_{2}-\lambda_{1})t}+m_{3,k}\lambda_{3}e^{(\lambda_{3}-\lambda_{1})t}$.
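The four cases of Theorem 3 can be checked directly from the modal coefficients; a sketch for a single auxiliary output (the function and argument names are ours):

```python
import numpy as np

def aux_output_monotone(m, lam):
    """Check the four cases of Theorem 3 for one auxiliary output k.
    m = (m1, m2, m3) are the modal coefficients, lam = (l1, l2, l3)
    with l1 < l2 < l3 < 0; assumes m2*l2 and m3*l3 are nonzero."""
    m1, m2, m3 = m
    l1, l2, l3 = lam
    a, b, c = m1 * l1, m2 * l2, m3 * l3
    if b > 0 and c > 0:                       # case (1)
        return a + b + c > 0
    if b < 0 and c < 0:                       # case (2)
        return a + b + c < 0
    # cases (3)/(4): g_k has its extremum at t* from the theorem
    t_star = np.log((b * (l2 - l1)) / (c * (l1 - l3))) / (l3 - l2)
    g_star = b * np.exp((l2 - l1) * t_star) + c * np.exp((l3 - l1) * t_star)
    if b > 0 and c < 0:                       # case (3)
        return a + g_star < 0
    return a + g_star > 0                     # case (4)
```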

**Proof.** When $n_{a}=3$, the first-order derivative of Equation (28) can be expressed as:

(Sufficiency). If condition 1 holds, $m_{2,k}\lambda_{2}e^{(\lambda_{2}-\lambda_{1})t}$ and $m_{3,k}\lambda_{3}e^{(\lambda_{3}-\lambda_{1})t}$ both increase monotonically on $[0,\infty)$. Let:

In this case, $q_{k}(t)$ takes its minimum value $f_{3}>0$ at $t=0$. This yields $\dot{\epsilon}_{na,k}(t)>0$ for any $t\in[0,\infty)$. The proof of condition 2 is similar to condition 1. For condition 3, calculating the first-order derivative of $g_{k}(t)$ with respect to time gives $\dot{g}_{k}(t)$ as follows:

Setting $\dot{g}_{k}(t)=0$ yields $t^{*}$. Since $m_{2,k}\lambda_{2}>0$ and $m_{3,k}\lambda_{3}<0$, $g_{k}(t)$ takes its maximum value at $t=t^{*}$. If $g_{k}(t^{\ast})+m_{1,k}\lambda_{1}<0$, then $\dot{\epsilon}_{na,k}(t)<0$ for any $t\in[0,\infty)$. The proof of condition 4 is similar to condition 3; the only difference is that $g_{k}(t)$ takes its minimum value at $t=t^{*}$ and $\dot{\epsilon}_{na,k}(t)>0$ for any $t\in[0,\infty)$.

(Necessity). For $\epsilon_{na,k}(t)$ to converge to zero monotonically, it is necessary that $\dot{\epsilon}_{na,k}(t)$ not change sign. As shown in Equation (29), two parts determine the sign, i.e., $e^{\lambda_{1}t}$ and $g_{k}(t)$. Since $e^{\lambda_{1}t}$ is always positive, the remaining consideration is the sign of $g_{k}(t)$. Cases $I$, $II$, $III$, and $IV$ below enumerate the ways in which this may occur:

For $I$, it is clear that $g_{k}(t)$ increases monotonically in $t$ and $g_{k}(t)>0$ for all $t\in[0,\infty)$. Hence, $m_{1,k}\lambda_{1}+g_{k}(t)$ takes its minimum value at $t=0$, so $\dot{\epsilon}_{na,k}(t)$ does not change sign if $m_{1,k}\lambda_{1}+g_{k}(t)>0$ for any $t\ge 0$. The proof of $II$ is similar to that of $I$. For $III$, $g_{k}(t)$ takes its maximum value at $t=t^{*}$ when $m_{2,k}\lambda_{2}>0$ and $m_{3,k}\lambda_{3}<0$; if $m_{1,k}\lambda_{1}+g_{k}(t^{*})<0$, then $\dot{\epsilon}_{na,k}(t)<0$ for any $t\ge 0$. For $IV$, $g_{k}(t)$ takes its minimum value at $t=t^{*}$, and then $\dot{\epsilon}_{na,k}(t)>0$ if $m_{1,k}\lambda_{1}+g_{k}(t^{*})>0$ holds.

Let $\Lambda_{a}=\{\lambda_{1},\dots,\lambda_{n_{a}}\}\in\Gamma_{a}$ be the set of closed-loop eigenvalues to be chosen for achieving generalized monotonicity, where $\Gamma_{a}$ denotes the compact set comprising all possible sets $\Lambda_{a}$. Let $x_{a,0}$ and $x_{a,ss}$ denote the states at $t=0$ and at steady state, respectively. Applying $x_{a,0}$ and $x_{a,ss}$ to $\Sigma_{na}$ yields the following two outputs:

□

**Theorem** **4.** Assume that Assumptions 1, 2, and 4 are all satisfied and that generalized monotonicity is achieved. Let the compact set $\mathrm{H}$ denote the constraints to be satisfied for the output limits. The output $y_{n}(t)$ of system $\Sigma_{na}$ satisfies the constraint set $\mathrm{H}$ if and only if:

**Proof.** Assume first that $y_{n}(t)$ is a single output. Then the constraint set $\mathrm{H}$ becomes an interval, which can be represented as:

where $y_{n\text{min}}$ and $y_{n\text{max}}$ are constants specifying the limits. Suppose that $y_{n}(t=0)=a$ and $y_{n}(t\to\infty)=b$, and let $y_{n}(t=t_{1})=c$ for some $t_{1}>0$.

(Sufficiency). If condition (33) holds, it is known that:

Assume that $y_{n}(t)$ is a monotonically increasing output; then $y_{n\text{min}}\le y_{n}(t=0)=a\le y_{n}(t=t_{1})=c\le y_{n}(t\to\infty)=b\le y_{n\text{max}}$, and hence $y_{n}(t=t_{1})\in \mathrm{H}$. The same holds for a monotonically decreasing output $y_{n}(t)$.

(Necessity). If $y_{n}(t)\in \mathrm{H}$ for any $t\ge 0$, then $y_{n\text{min}}\le y_{n}(t)\le y_{n\text{max}}$. Assume that $y_{n}(t)$ is a monotonically increasing output; then $y_{n\text{min}}\le y_{n}(t=0)\le y_{n}(t=t_{1})\le y_{n}(t\to\infty)\le y_{n\text{max}}$, so $y_{n}(t=0)\in \mathrm{H}$ and $y_{n}(t\to\infty)\in \mathrm{H}$. If $y_{n}(t)$ is a monotonically decreasing output, the proof is similar. The above single-output argument generalizes readily to multiple outputs. □
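Under generalized monotonicity, Theorem 4 thus reduces the constraint check to the two endpoints of each auxiliary output trajectory; a sketch (vectorized over outputs; all names are ours):

```python
import numpy as np

def aux_constraints_satisfied(yn0, yn_ss, yn_min, yn_max):
    """Theorem 4 under generalized monotonicity: each auxiliary output
    moves monotonically from its initial value yn0 to its steady-state
    value yn_ss, so the whole trajectory stays inside [yn_min, yn_max]
    if and only if both endpoints do."""
    yn0 = np.atleast_1d(yn0)
    yn_ss = np.atleast_1d(yn_ss)
    ok0 = np.all((yn_min <= yn0) & (yn0 <= yn_max))
    ok_ss = np.all((yn_min <= yn_ss) & (yn_ss <= yn_max))
    return bool(ok0 and ok_ss)
```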