Open Access Article

*Mathematics* **2018**, *6*(9), 169; doi:10.3390/math6090169

A Nonlinear Systems Framework for Cyberattack Prevention for Chemical Process Control Systems †

Department of Chemical Engineering and Materials Science, Wayne State University, Detroit, MI 48202, USA; Tel.: +1-313-577-3475

^{†} This paper is an extended version of our paper published in the Proceedings of the 6th IFAC Conference on Nonlinear Model Predictive Control.

^{‡} Current address: 5050 Anthony Wayne Drive, Detroit, MI 48202, USA.

Received: 13 August 2018 / Accepted: 12 September 2018 / Published: 14 September 2018

## Abstract

Recent cyberattacks against industrial control systems highlight the criticality of preventing future attacks from disrupting plants economically or, more critically, from impacting plant safety. This work develops a nonlinear systems framework for understanding cyberattack-resilience of process and control designs and indicates through an analysis of three control designs how control laws can be inspected for this property. A chemical process example illustrates that control approaches intended for cyberattack prevention which seem intuitive are not cyberattack-resilient unless they meet the requirements of a nonlinear systems description of this property.

Keywords: cybersecurity; process control; model predictive control (MPC); nonlinear systems theory; Lyapunov stability

## 1. Introduction

Accident prevention for chemical processes has been receiving increased attention in the process control literature as calls for a systems approach to chemical process safety [1,2,3] are being mathematically formalized and incorporated within control design [4,5,6]. Controllers have been formulated which compute control actions in a fashion that coordinates their actions with those of the safety systems [7], and several works have explored methods for keeping the closed-loop state of a nonlinear system away from unsafe conditions in state-space using controllers designed to avoid such regions [8,9,10,11]. In addition, several works have explored fault diagnosis and detection [12,13,14] or fault-tolerant control designs (e.g., [15,16,17,18]). These advances address safety issues which arise from faults or disturbances and are therefore unintentional. In contrast, the safety issues associated with cybersecurity breaches [19] of process control systems, which are performed with the intent of bringing the plant to an unsafe, unprofitable, or under-producing condition in order to harm others, have remained largely unexplored (with initial explorations of the topic in works such as [20]). This gap in the literature is notable given the increasing threat that cybersecurity breaches pose to safe process operation. For example, cyberattacks have succeeded in creating power outages in Ukraine [21], causing sewage from a wastewater treatment plant to enter nearby land and water [22], and damaging equipment at a uranium enrichment plant [23]. They have also recently targeted systems at a petrochemical plant [24,25] with the apparent goal of creating an explosion (though this attack thankfully failed).
Unlike the most commonly discussed cyberattacks in the media and in the literature, which are primarily concerned with stealing information for the purpose of using that information to compromise companies or individuals economically or socially (e.g., [26]), cyberattacks against process control systems can seek to create physical damage, injury, or death, or to disrupt the supply of products that are necessary for daily life, and are therefore a critical problem to address.

A common technique for handling cybersecurity for control systems has been to rely on computer science/information technology, computer hardware, or networking solutions [27]. Example solutions in these categories include code randomization [28], limiting privileges in access or operation with respect to control systems [29], preventing certain types of information flow with unidirectional gateways [30], using redundant sensors [31], firewalls [32,33], and encryption [34]. Other approaches include changing library load locations [35] or network settings [36], or randomly selecting encrypted data from sensors to compare with unencrypted information [37]. However, the success of the recent attacks on control systems mentioned above, and the surprising methods by which some of them were carried out (e.g., transmission of the Stuxnet virus via USB sticks and local area networks, followed by its subsequent ability to evade detection with rootkits and zero-day vulnerabilities [20,23]), indicate that the traditional techniques for cyberattack prevention may not be enough. Furthermore, the use of wireless sensors in chemical process control networks can introduce cybersecurity vulnerabilities [38,39]. Developing technologies such as improved sensors, the Internet of Things [40], and Cloud computing [41] are expected to bring efficiency gains and lower costs, but the increased connectivity and computing usage they bring to the chemical process industries has the potential to pose new cybersecurity risks, so the need for alternatives to the traditional approaches is growing. The topic of resilience of control designs against cyberattacks [42,43] has been explored in several works [44,45,46,47]. For example, in [48,49,50], resiliency of controllers to cyberattacks, in the sense that they continue to function acceptably during and after attacks, has been explored in a game-theoretic context. Reliable state estimation also plays a part in resilience [51,52].
Approaches based on process models have been important in suggested attack detection policies [31,53,54] and in policies for preventing attacks that assume that the allowable (i.e., safe) state transitions can be enumerated, so that it can be checked whether a control action creates an allowable transition before applying it [55]. The ability of a controller to know the process condition/state has also been considered to be an important part of cyberattack resilience of control systems [56].

Motivated by the above considerations, this work mathematically defines cyberattacks in a nonlinear systems framework and demonstrates how this framework should guide the development of process designs and controllers to prevent the success of cyberattacks of different types. We highlight the criticality of the nonlinear systems perspective, as opposed to seemingly intuitive approaches that follow more along the lines of traditional computing/networking cybersecurity concepts related to hiding or randomizing information, in preventing the success of cyberattacks, with a focus on attacks which impact sensor measurements. To demonstrate that intuitive approaches are insufficient for achieving cyberattack-resilience unless they cause specific mathematical properties to hold for the closed-loop system, we explore the pitfalls of two intuitive approaches that do not come with such guarantees and investigate a third approach for which the guarantees can be made for certain classes of nonlinear systems under sufficient conditions, showing that it may be possible to develop methods of operating a plant that meet these properties. This exploration of the properties of control designs that are and are not cyberattack-resilient elucidates key principles that are intended to guide the development of cyberattack-resilient controllers in the future: (a) cyberattack policies for simulation case studies have the potential to be determined computationally; (b) randomization in controller implementation can be introduced within frameworks such as model predictive control (MPC) [57,58] that are common in the process industries without compromising closed-loop stability; and (c) creative implementation strategies which trade off between control policies of different types may help with the development of cyberattack-resilient control designs.
A chemical process example is used to demonstrate that controllers which do not meet the nonlinear systems definition of cyberattack resiliency may not be sufficient for preventing the closed-loop state from being brought to an unsafe operating condition. This paper extends the work in [59].

## 2. Preliminaries

#### 2.1. Notation

The notation $|\cdot |$ denotes the Euclidean norm of a vector. A function $\alpha :[0,a)\to [0,\infty )$ is of class $\mathcal{K}$ if $\alpha \left(0\right)=0$ and $\alpha $ is strictly increasing. The notation ${x}^{T}$ represents the transpose of a vector x. The symbol “/” denotes set subtraction (i.e., $A/B=\{x\in {R}^{n}:x\in A,x\notin B\}$). $\lceil \cdot \rceil $ signifies the ceiling function (i.e., the function that returns the smallest integer greater than or equal to its argument); $\lfloor \cdot \rfloor $ signifies the floor function (i.e., the function that returns the largest integer less than or equal to its argument).

#### 2.2. Class of Systems

The class of nonlinear systems under consideration in this work is:

$$\begin{array}{c}\hfill \dot{x}\left(t\right)=f(x\left(t\right),u\left(t\right),w\left(t\right))\end{array}$$

where f is a locally Lipschitz nonlinear vector function of the state vector $x\in {R}^{n}$, input vector $u\in U\subset {R}^{m}$, and disturbance vector $w\in W\subset {R}^{l}$, where $W:=\{w\in {R}^{l}:|w|\le \theta \}$. We consider X to be a set of states that are safe to operate at, in the sense that no safety incidents will occur if $x\in X$; therefore, we desire to maintain x within the set X. For the purposes of the developments below, we will assume that outside of X, the closed-loop state is in an unsafe region of state-space. We consider that the origin is an equilibrium of the system of Equation (1) (i.e., $f(0,0,0)=0$). Furthermore, we make the following stabilizability assumption:

**Assumption 1.**

There exist ${n}_{p}$ explicit stabilizing control laws ${h}_{i}\left(x\right)$, $i=1,\dots ,{n}_{p}$, for the system of Equation (1), where ${n}_{p}\ge 1$, with corresponding sufficiently smooth positive definite Lyapunov functions ${V}_{i}:{R}^{n}\to {R}_{+}$ and functions ${\alpha}_{j,i}(\cdot )$, $j=1,\dots ,4$, of class $\mathcal{K}$ such that the following inequalities hold for all $x\in {D}_{i}\subset {R}^{n}$:

$${\alpha}_{1,i}\left(|x|\right)\le {V}_{i}\left(x\right)\le {\alpha}_{2,i}\left(|x|\right)$$

$$\frac{\partial {V}_{i}\left(x\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}f(x,{h}_{i}\left(x\right),0)\le -{\alpha}_{3,i}\left(|x|\right)$$

$$\left|\frac{\partial {V}_{i}\left(x\right)}{\partial x}\right|\le {\alpha}_{4,i}\left(|x|\right)$$

$${h}_{i}\left(x\right)\in U$$

for $i=1,\dots ,{n}_{p}$, where ${D}_{i}$ is an open neighborhood of the origin.

We define a level set of ${V}_{i}$ contained within ${D}_{i}$ where $x\in X$ as a stability region ${\mathrm{\Omega}}_{{\rho}_{i}}$ of the nominal ($w\left(t\right)\equiv 0$) system of Equation (1) under the controller ${h}_{i}\left(x\right)$ (${\mathrm{\Omega}}_{{\rho}_{i}}:=\{x\in X\cap {D}_{i}:{V}_{i}\left(x\right)\le {\rho}_{i}\}$).
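As a concrete, hypothetical illustration of Assumption 1 and the stability region construction (this example is not from the article), consider the scalar system $\dot{x}=x+u+w$ with $h(x)=-2x$, $V(x)={x}^{2}$, and $U=[-5,5]$. The inequalities of Equations (2)–(5) can be checked numerically over a candidate level set:

```python
import numpy as np

# Hypothetical scalar example (not from the article): unstable plant
# x_dot = f(x, u, w) = x + u + w, stabilizing law h(x) = -2x, V(x) = x^2.
f = lambda x, u, w: x + u + w
h = lambda x: -2.0 * x
V = lambda x: x ** 2
dVdx = lambda x: 2.0 * x

U_MAX = 5.0           # input bound: U = [-5, 5]
rho = 2.5 ** 2        # candidate level set Omega_rho = {x : V(x) <= rho}

# Class-K comparison functions chosen for Equations (2)-(5)
alpha1 = lambda s: 0.5 * s ** 2
alpha2 = lambda s: 2.0 * s ** 2
alpha3 = lambda s: 2.0 * s ** 2   # V_dot = 2x(x - 2x) = -2x^2
alpha4 = lambda s: 2.0 * s

ok = True
for x in np.linspace(-2.5, 2.5, 501):   # sample Omega_rho
    s = abs(x)
    ok &= alpha1(s) <= V(x) <= alpha2(s)                    # Equation (2)
    ok &= dVdx(x) * f(x, h(x), 0.0) <= -alpha3(s) + 1e-9    # Equation (3)
    ok &= abs(dVdx(x)) <= alpha4(s)                         # Equation (4)
    ok &= abs(h(x)) <= U_MAX                                # Equation (5)
print(ok)
```

A sampling check of this kind is only a sanity test; for the simple quadratic V above, the inequalities can also be verified analytically.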

By the smoothness of each ${V}_{i}$, the Lipschitz property of f, and the boundedness of x, u, and w, we obtain the following inequalities:

$$|f({x}_{1},u,w)-f({x}_{2},u,0)|\le {L}_{x}|{x}_{1}-{x}_{2}|+{L}_{w}\left|w\right|$$

$$\left|\frac{\partial {V}_{i}\left({x}_{1}\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}f({x}_{1},u,w)-\frac{\partial {V}_{i}\left({x}_{2}\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}f({x}_{2},u,0)\right|\le {L}_{x,i}^{\prime}|{x}_{1}-{x}_{2}|+{L}_{w,i}^{\prime}\left|w\right|$$

$$|f(x,u,w)|\le M$$

for all $x,{x}_{1},{x}_{2}\in {\mathrm{\Omega}}_{{\rho}_{i}}$, $i=1,\dots ,{n}_{p}$, $u\in U$, and $w\in W$, where ${L}_{x}>0$, ${L}_{w}>0$, and $M>0$ are selected such that the bounds in Equations (6) and (8) hold regardless of the value of i, and ${L}_{x,i}^{\prime}$ and ${L}_{w,i}^{\prime}$ are positive constants for $i=1,\dots ,{n}_{p}$.
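The constants M and ${L}_{x}$ in Equations (6) and (8) can be estimated by sampling over the relevant compact sets; a minimal sketch for the same kind of hypothetical scalar example ($f(x,u,w)=x+u+w$ with ${\mathrm{\Omega}}_{\rho}=[-2.5,2.5]$, $U=[-5,5]$, $\theta =0.1$; none of these numbers come from the article):

```python
import itertools
import numpy as np

f = lambda x, u, w: x + u + w     # hypothetical example dynamics

xs = np.linspace(-2.5, 2.5, 51)   # samples of Omega_rho
us = np.linspace(-5.0, 5.0, 21)   # samples of U
ws = np.linspace(-0.1, 0.1, 5)    # samples of W (theta = 0.1)

# M bounds |f| over Omega_rho x U x W (Equation (8))
M = max(abs(f(x, u, w)) for x, u, w in itertools.product(xs, us, ws))

# Crude sampled estimate of L_x in Equation (6):
# |f(x1,u,0) - f(x2,u,0)| <= L_x |x1 - x2|; exact value is 1 here
L_x = max(abs(f(x1, u, 0.0) - f(x2, u, 0.0)) / abs(x1 - x2)
          for x1, x2, u in itertools.product(xs, xs, us) if x1 != x2)
print(round(M, 3), round(L_x, 3))
```

Sampled estimates of this kind lower-bound the true constants; for rigorous bounds, analytic Lipschitz arguments over the full sets are needed.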

The instantaneous cost of the process of Equation (1) is assumed to be represented by a continuous function ${L}_{e}(x,u)$ (we do not require that ${L}_{e}$ have its minimum at the origin/steady-state). We also assume that the instantaneous production rate of the desired product for the process is given by the continuous function ${P}_{d}(x,u)$ (which may be the same as ${L}_{e}$ but is not required to be).

#### 2.3. Model Predictive Control

MPC is an optimization-based control design formulated as:

$$\underset{u\left(t\right)\in S(\Delta )}{min}\phantom{\rule{8.53581pt}{0ex}}{\int}_{{t}_{k}}^{{t}_{k+N}}{L}_{e}(\tilde{x}\left(\tau \right),u\left(\tau \right))\phantom{\rule{0.166667em}{0ex}}d\tau $$

$$\mathrm{s}.\mathrm{t}.\phantom{\rule{14.22636pt}{0ex}}\dot{\tilde{x}}\left(t\right)=f(\tilde{x}\left(t\right),u\left(t\right),0)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left({t}_{k}\right)=x\left({t}_{k}\right)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left(t\right)\in X,\phantom{\rule{0.166667em}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$

$$\phantom{\rule{14.22636pt}{0ex}}u\left(t\right)\in U,\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$

where $u\left(t\right)\in S(\Delta )$ signifies that the input trajectories are members of the class of piecewise-constant vector functions with period $\Delta $. The nominal (i.e., $w\left(t\right)\equiv 0$) model of Equation (1) (Equation (10)) is used by the MPC of Equations (9)–(13) to develop predictions $\tilde{x}$ of the process state, starting at a measurement of the process state at ${t}_{k}$ (Equation (11); in this work, full state feedback is assumed to be available to an MPC), which are then used in computing the value of the stage cost ${L}_{e}$ over the prediction horizon of N sampling periods (Equation (9)) and evaluating the state constraints (Equation (12)). The inputs computed by the MPC are required to meet the input constraint (Equation (13)). The inputs are applied in a receding horizon fashion.
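A minimal receding-horizon sketch of Equations (9)–(13) is given below for a hypothetical scalar plant $\dot{x}=x+u$ with an Euler-discretized prediction model, stage cost ${L}_{e}=10{x}^{2}+{u}^{2}$, $X=[-2,2]$, and $U=[-5,5]$; all of these choices are illustrative assumptions, not values from the article:

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 5                        # sampling period Delta and horizon N
f = lambda x, u: x + u                # hypothetical nominal model (w = 0)

def predict(x0, u_seq):
    # Equations (10)-(11): integrate the nominal model from x(t_k) (Euler)
    xs = [x0]
    for u in u_seq:
        xs.append(xs[-1] + DT * f(xs[-1], u))
    return np.array(xs)

def mpc_step(x0):
    # Equation (9): stage cost L_e = 10 x^2 + u^2 summed over the horizon
    cost = lambda u: sum(DT * (10.0 * xk ** 2 + uk ** 2)
                         for xk, uk in zip(predict(x0, u), u))
    cons = {"type": "ineq",           # Equation (12): predicted states in X
            "fun": lambda u: 2.0 - np.abs(predict(x0, u)[1:])}
    res = minimize(cost, np.full(N, -2.0 * x0),  # feasible warm start
                   bounds=[(-5.0, 5.0)] * N,     # Equation (13): u in U
                   constraints=[cons], method="SLSQP")
    return res.x[0]                   # receding horizon: apply first input

x = 1.5
for _ in range(20):                   # closed loop in sample-and-hold
    x += DT * f(x, mpc_step(x))
print(f"final state: {x:.3f}")
```

In practice, a higher-fidelity integration scheme and the full nonlinear process model would replace the Euler predictions, but the receding-horizon structure is the same.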

#### 2.4. Lyapunov-Based Model Predictive Control

Lyapunov-based model predictive control (LMPC) [60,61] is a variation on the MPC design of the prior section and is formulated as follows:

$$\underset{u\left(t\right)\in S(\Delta )}{min}\phantom{\rule{8.53581pt}{0ex}}{\int}_{{t}_{k}}^{{t}_{k+N}}{L}_{e}(\tilde{x}\left(\tau \right),u\left(\tau \right))\phantom{\rule{0.166667em}{0ex}}d\tau $$

$$\mathrm{s}.\mathrm{t}.\phantom{\rule{14.22636pt}{0ex}}\dot{\tilde{x}}\left(t\right)=f(\tilde{x}\left(t\right),u\left(t\right),0)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left({t}_{k}\right)=x\left({t}_{k}\right)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left(t\right)\in X,\phantom{\rule{0.166667em}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$

$$\phantom{\rule{14.22636pt}{0ex}}u\left(t\right)\in U,\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$

$$\begin{array}{c}{V}_{1}\left(\tilde{x}\left(t\right)\right)\le {\rho}_{e,1},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N}),\\ \mathrm{if}\phantom{\rule{3.33333pt}{0ex}}x\left({t}_{k}\right)\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{\mathrm{\Omega}}_{{\rho}_{e,1}}\end{array}$$

$$\begin{array}{c}\frac{\partial {V}_{1}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\hfill \\ \phantom{\rule{1.em}{0ex}}\le \frac{\partial {V}_{1}\left(x\left({t}_{k}\right)\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}f(x\left({t}_{k}\right),{h}_{1}\left(x\left({t}_{k}\right)\right),0)\hfill \\ \phantom{\rule{1.em}{0ex}}\mathrm{if}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{e,1}}\phantom{\rule{0.277778em}{0ex}}\mathrm{or}\phantom{\rule{3.33333pt}{0ex}}{t}_{k}\ge {t}^{\prime}\hfill \end{array}$$

where the notation follows that of Equations (9)–(13). In LMPC, the predicted state is required to meet the Lyapunov-based stability constraint of Equation (19) when $x\left({t}_{k}\right)\phantom{\rule{3.33333pt}{0ex}}\in \phantom{\rule{3.33333pt}{0ex}}{\mathrm{\Omega}}_{{\rho}_{e,1}}\subset {\mathrm{\Omega}}_{{\rho}_{1}}$ by maintaining the predicted state within the set ${\mathrm{\Omega}}_{{\rho}_{e,1}}$ throughout the prediction horizon, and the input is required to meet the Lyapunov-based stability constraint of Equation (20) when $x\left({t}_{k}\right)\notin {\mathrm{\Omega}}_{{\rho}_{e,1}}$ to cause the closed-loop state to move toward a neighborhood of the origin throughout a sampling period. ${\mathrm{\Omega}}_{{\rho}_{e,1}}$ is chosen to make ${\mathrm{\Omega}}_{{\rho}_{1}}$ forward invariant under the LMPC of Equations (14)–(20) in the presence of sufficiently small disturbances and a sufficiently small $\Delta $. ${t}^{\prime}$ is a time after which it is desired to enforce the constraint of Equation (20) for all times regardless of the position of $x\left({t}_{k}\right)$ in state-space. Due to the closed-loop stability and robustness properties of ${h}_{1}\left(x\right)$ [62], ${h}_{1}\left(\tilde{x}\left({t}_{q}\right)\right)$, $q=k,\dots ,k+N-1$, $t\in [{t}_{q},{t}_{q+1})$, is a feasible solution to the optimization problem of Equations (14)–(20) at every sampling time if $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ because it is guaranteed to cause ${V}_{1}\left(x\right)$ to decrease along the closed-loop state trajectories of the nonlinear process throughout each sampling period in the prediction horizon when $\Delta $ and $\theta $ are sufficiently small until the closed-loop state enters a neighborhood ${\mathrm{\Omega}}_{{\rho}_{min,1}}$ of the origin.

Furthermore, the LMPC of Equations (14)–(20) is guaranteed to maintain the closed-loop state within ${\mathrm{\Omega}}_{{\rho}_{1}}$ throughout all sampling periods of the prediction horizon when parameters such as ${\rho}_{e,1}$, $\Delta $, and $\theta $ are sufficiently small through the design of the Lyapunov-based stability constraints of Equations (19) and (20) which take advantage of the stability properties of ${h}_{1}\left(x\right)$ [60]. It is furthermore guaranteed under sufficient conditions that ${V}_{1}$ decreases along the closed-loop state trajectory throughout a sampling period when the constraint of Equation (20) is activated at a sampling time.
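The contractive constraint of Equation (20) admits a simple candidate-screening interpretation: an input is acceptable at $x\left({t}_{k}\right)$ only if it decreases ${V}_{1}$ at least as fast as ${h}_{1}$ would. A minimal sketch for a hypothetical scalar example (not from the article):

```python
# Screening candidate inputs against the Lyapunov-based constraint of
# Equation (20) for a hypothetical scalar example: x_dot = x + u,
# h1(x) = -2x, V1(x) = x^2 (so dV1/dx = 2x).
f = lambda x, u: x + u
h1 = lambda x: -2.0 * x
dV1dx = lambda x: 2.0 * x

def satisfies_eq20(x_k, u_k):
    # dV1/dx * f(x, u, 0) <= dV1/dx * f(x, h1(x), 0)
    return dV1dx(x_k) * f(x_k, u_k) <= dV1dx(x_k) * f(x_k, h1(x_k))

x_k = 1.0
# At x_k = 1: the constraint requires 2(1 + u) <= 2(1 - 2), i.e., u <= -2
feasible = [u for u in (-3.0, -2.0, 0.0, 1.0) if satisfies_eq20(x_k, u)]
print(feasible)
```

In the actual LMPC, this inequality appears as a constraint in the optimization problem rather than as a post-hoc filter, but the feasibility logic is the same.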

## 3. A Nonlinear Dynamic Systems Perspective on Cyberattacks

Cybersecurity of chemical process control systems is fundamentally a chemical engineering problem: because of the nature of chemical process behavior, cyberattackers who gain control over the process inputs can affect the economics of large companies, the supply of important chemicals, and the health and lives of plant workers and civilians. The implication is that chemical engineers should be able to take steps during process and control design that make cyberattacks more difficult or, ideally, make it impossible for them to be successful at affecting economics, production, or safety.

Cyberattacks against process control systems seek to use information flows in control loops to impact physical systems; the ultimate goal of a cyberattacker of a process control system, therefore, can be assumed to be changing the inputs to the process [20] from what they would otherwise be if the attack was not occurring. In this work, we assume that the plant controllers are feedback controllers. There are various means by which a cyberattacker may attempt to affect such a control loop which include providing false state measurements to a feedback controller, providing incorrect signals to the actuators (i.e., bypassing the controller) [31], falsifying stored process data, preventing information from flowing to some part of a control loop [63], manipulating the controller code itself [20], or directly falsifying the signals received by an operator [37,64] (so that he or she does not notice that the process inputs are abnormal). In summary, the electromagnetic signals in the control loop can be falsified. These signals cause physical elements like actuators to move, impacting the condition of the actual process. Contrary to the typical assumptions in feedback control, the association between the input physically implemented on the process and the process state is removed during a cyberattack. A mathematical definition for cyberattacks on feedback control systems is therefore as follows:

**Definition 1.**

A cyberattack on a feedback control system is a disruption of information flow in the loop such that any $u\in U$ can potentially be applied at any state x that is accessed by the plant over time.

A process design that is resilient to cyberattacks attempting to influence process safety has many conceptual similarities to a process that is inherently safe [65,66,67,68,69]; the dynamic expression of this resilience property is as follows, where $\overline{X}\subseteq X$ represents a set of allowable initial conditions:

**Definition 2.**

A process design that is resilient to cyberattacks intended to affect process safety is one for which there exists no input policy $u\left(t\right)\in U$, $t\in [0,\infty )$, such that $x\left(t\right)\notin X$, for any $x\left({t}_{0}\right)\in \overline{X}$ and $w\left(t\right)\in W$, $t\in [0,\infty )$.

The resilience of the process design here depends on which variables are selected as manipulated inputs; a different input selection may lead to a different assessment of whether the process design is resilient to cyberattacks. Similarly, different designs will give a different dynamic model in Equation (1), which means that the inputs will impact the states differently over time (and whether $x\in X$); therefore, the design itself also plays a role in whether Definition 2 holds. Furthermore, the definition of resiliency is independent of the control laws used to control the process. This is because cyberattacks manipulate the process inputs such that they do not necessarily cause process constraints to be met (though the inputs are still physically constrained by the input bounds) and do not necessarily have any relationship to the actual state measurement (Definition 1). Therefore, resiliency of a process design to cyberattacks must be assessed assuming that any input policy within the input bounds can be applied to the process.

We can also define cyberattack resilience of a process design against attacks on the plant economics. However, because of the minimal assumptions placed on ${L}_{e}$, it is not possible to require that resilience of a plant to cyberattacks on profitability means that the profit is not at all affected by a cyberattack. For example, consider the case that ${L}_{e}$ has a global minimum (e.g., it may be a quadratic function of the states and inputs). In this case, if u is not equal to its value at the global minimum of ${L}_{e}$ due to a cyberattack (which affects x), then the long-term profit would not be expected to be the same as if the state always remained at its global minimum value. However, if profit is only minimally affected by a cyberattack, the consequences of the attack are relatively small, and a cyberattacker may therefore not find the attack worthwhile to attempt. Therefore, we define lower and upper bounds on the asymptotic average value of ${L}_{e}$ (${L}_{e,lb}$ and ${L}_{e,ub}$, respectively) such that if the cost is within these bounds, the process is still considered highly profitable and the company suffers minimal consequences from an attack. This leads to the definition of a process design that is resilient to cyberattacks against plant profitability as follows (where it is still required that $x\left(t\right)\in X$ since safety during operation is a prerequisite to production):

**Definition 3.**

A process design that is resilient to cyberattacks intended to affect process profit is one for which $x\left(t\right)\in X$ for $t\in [0,\infty )$ for any $x\left({t}_{0}\right)\in \overline{X}$ and the following inequality holds:

$${L}_{e,lb}\le \underset{T\to \infty}{lim\; sup}\frac{1}{T}{\int}_{0}^{T}{L}_{e}(x\left(t\right),u\left(t\right))dt\le {L}_{e,ub}$$

for all $u\left(t\right)\in U$ and $w\left(t\right)\in W$, for $t\in [0,\infty )$.

Cyberattack resilience of a process design against production losses would be defined as in Definition 3, except that Equation (21) would be replaced by

$${P}_{d,lb}\le \underset{T\to \infty}{lim\; inf}\frac{1}{T}{\int}_{0}^{T}{P}_{d}(x\left(t\right),u\left(t\right))dt\le {P}_{d,ub}$$

where ${P}_{d,lb}$ and ${P}_{d,ub}$ represent the minimum and maximum values in the allowable production range (or if there are ${n}_{q}$ products instead of one, each with instantaneous production rate ${P}_{d,i}$, $i=1,\dots ,{n}_{q}$, upper and lower bounds can be set on the time integral of each instantaneous production rate).

For the same reasons as noted for Definition 2, Definition 3 (and its extension to the production attack case) depends on the design and input selection, but not the control law. In general, it may be difficult to assess whether Definitions 2 and 3 or the production extension hold for a process, though closed-loop simulations for a variety of values of $x\left({t}_{0}\right)\in \overline{X}$, $u\in U$, and $w\in W$, with different sampling periods for each, may provide some sense of how the process behaves and could help demonstrate that a process is not cyberattack-resilient if an input within the bounds is found that causes the conditions to be violated. However, not finding such an input during simulations does not prove that the process is resilient to cyberattacks unless every situation posed in the definitions has been tested.
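The simulation-based (non-exhaustive) check described above can be sketched as a randomized search for a violating input policy. For a hypothetical open-loop-unstable scalar process ($\dot{x}=x+u$, $X=[-2,2]$, $U=[-5,5]$; illustrative assumptions, not from the article), such a policy is readily found, demonstrating by counterexample that the design is not resilient in the sense of Definition 2:

```python
import random

random.seed(0)
f = lambda x, u: x + u            # hypothetical process model, w = 0
DT, STEPS = 0.05, 200             # sample-and-hold period and horizon

def violates_X(u_policy, x0=0.0):
    # Simulate the sample-and-hold policy; True if x ever leaves X = [-2, 2]
    x = x0
    for u in u_policy:
        x += DT * f(x, u)
        if abs(x) > 2.0:
            return True
    return False

# Randomly sample input policies in U = [-5, 5]; one violation suffices to
# show the design is not cyberattack-resilient per Definition 2.
found = any(violates_X([random.uniform(-5.0, 5.0) for _ in range(STEPS)])
            for _ in range(50))
print(found)
```

A negative search result, by contrast, would not establish resilience, since the definitions quantify over all admissible input policies.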

Despite the difficulty of verifying whether Definitions 2 and 3 or their production extension hold for a process, the definitions serve an important role in clarifying what cyberattack resilience of a system would look like from a nonlinear systems perspective. The independence of these definitions from the control law implies that cybersecure process systems are only possible to achieve if the process design itself, with the selected inputs and their ranges, causes Definitions 2 and 3 or the production extension to be satisfied, which would not be expected to be typical; at first, this seems to imply that chemical processes will generally be susceptible to cyberattacks. However, it must also be understood that the definitions are meant to express resilience against any cyberattack of any kind geared toward affecting the inputs, as they express cyberattacks in the most general sense as being related to inputs and states; different types of cyberattacks would need to be analyzed individually to see whether it is possible to design a process or control system that prevents cyberattack success.

**Remark 1.**

Though Definitions 2 and 3 and the production extension are presented such that any input policy can be chosen (e.g., continuous or sample-and-hold with different sampling periods), a knowledge that the inputs are only applied in sample-and-hold could be used to require that the definitions only hold for sample-and-hold input policies in the bounds with the sampling periods noted (assuming that the cyberattack cannot also impact the sampling period).

**Remark 2.**

Other works have mathematically defined cyberattack-resilience concepts as well. For example, ref. [70] explores event triggering within the context of resilient control, defined for input-affine nonlinear systems with disturbances as the capacity of a controller to return the state in finite time to a set of safe states when it exits these states. Ref. [71] also defines resiliency, for linear systems, as being related to the capacity of a controller to drive the closed-loop state to certain sets and maintain it in safe states (similar to the definitions above).

## 4. Defining Cyberattack Resilience Against Specific Attack Types: Sensor Measurement Falsification in Feedback Control Loops

In the remainder of this work, we focus on attacks that provide false state measurements within X to feedback controllers with the goal of impacting process safety and will seek a better understanding of the properties of controllers that are cyberattack-resilient in such a case. The difference between what is required for cyberattack resilience in this case and what is required in Definition 2 is that the controller and its implementation strategy always play a role in state measurement falsification attacks (i.e., the controller is not bypassed completely to get to the actuators, so that the control law itself always plays a role in dictating what inputs can be computed for given falsified state measurements). Therefore, we would ideally like to develop controllers and their implementation strategies that ensure that the inputs which would be computed by these controllers, regardless of the state measurements they are provided, would over time guarantee that $x\in X$, $\forall t\ge 0$, if $x\left({t}_{0}\right)\in \overline{X}$. The definition of cyberattack resilience becomes:

**Definition 4.**

Consider the system of Equation (1) under feedback controllers and their implementation strategies for which the set of all possible input policies which may be computed for $t\in [0,\infty )$ for all $x\left({t}_{0}\right)\in \overline{X}$ given the control laws and their implementation strategies is denoted by ${U}_{allow,i}\left(t\right)$, $i=1,\dots ,{n}_{u}$, $t\ge 0$, where ${n}_{u}\ge 1$ represents the number of possible input trajectories, with each covering the time horizon $t\in [0,\infty )$. The system of Equation (1) is resilient to cyberattacks that falsify state measurements with the goal of affecting process safety under these feedback control policies if there exists no possible input policy $u\left(t\right)\in {U}_{allow,i}\left(t\right)$, $i=1,\dots ,{n}_{u}$, $t\in [0,\infty )$, such that $x\notin X$, for any $x\left({t}_{0}\right)\in \overline{X}$ and $w\left(t\right)\in W$, $t\in [0,\infty )$.

In Definition 4, ${n}_{u}$ may be ∞. Furthermore, sampling period lengths are taken into account in the definition of ${U}_{allow,i}\left(t\right)$. Though Definition 4 may appear difficult to use, we will later provide an operating policy which, for certain subclasses of the system of Equation (1), guarantees cyberattack resilience of the closed-loop system according to Definition 4, indicating that provably cyberattack-resilient control designs for false state measurements in X intended to affect process safety may be possible to develop, particularly if assumptions or restrictions are imposed.

## 5. Control Design Concepts for Deterring Sensor Measurement Falsification Cyberattacks on Safety: Benefits, Limitations, and Perspectives

In this section, we initially use a chemical process example to motivate the need for cyberattack-resilient control designs according to Definition 4, despite the non-constructive nature of the definition, by demonstrating that cyberattack-resilient control is preferable compared to strategies that detect attacks when they occur and subsequently compensate for them [20,72,73,74,75,76,77]. Subsequently, we will investigate in more detail what it takes for a control design to be cyberattack-resilient. To do this, we will present two “intuitive” concepts for operating a process in a manner intended to deter cyberattacks; however, through a chemical process example, we will illustrate that due to the definition of cyberattacks in a nonlinear systems context (Definition 1), these intuitive methods are not cyberattack-resilient according to Definition 4. Despite this, the study of the reasons that these designs fail to guarantee cyberattack resilience will develop important insights that may guide future work on cyberattack-resilient controllers. We close with an example of a control design that is cyberattack resilient according to Definition 4 for a subset of the class of systems of Equation (1), demonstrating that despite the non-constructive nature of Definition 4, it may be possible to find operating strategies that can be proven to meet this definition.

#### 5.1. Motivating Example: The Need for Cyberattack-Resilient Control Designs

Consider the simplified Tennessee Eastman process, developed in [78] and used in [20] to explore the results of several cyberattacks on sensors for this process performed one sensor at a time. The process consists of a single vessel that serves as both a reaction vessel and a separator, in which the reaction $A+C\to D$ occurs in the presence of an inert B. The reactor has two feed streams with molar flow rates ${F}_{1}$ and ${F}_{2}$, where the former contains A, C, and trace B, and the latter contains pure A (these will be denoted in the following by Stream 1 and 2 (S1 and S2), respectively). A, B, and C are assumed to be in the vapor phase at the conditions in the reactor, with D as a nonvolatile liquid in which none of the other species is appreciably soluble, such that the streams leaving the reaction vessel are a vapor at molar flow rate ${F}_{3}$ containing only A, C, and B, and a liquid product at molar flow rate ${F}_{4}$ containing only D (the vapor and liquid streams will be denoted by Stream 3 and 4 (S3 and S4), respectively, in the following). The dynamic model describing the changes in the number of mols of each species in the reactor (${N}_{A}$, ${N}_{B}$, ${N}_{C}$ and ${N}_{D}$ for species A, B, C, and D, respectively, each in kmol) is given as follows:

$$\frac{d{N}_{A}}{dt}={y}_{A1}{F}_{1}+{F}_{2}-{y}_{A3}{F}_{3}-{r}_{1}$$

$$\frac{d{N}_{B}}{dt}={y}_{B1}{F}_{1}-{y}_{B3}{F}_{3}$$

$$\frac{d{N}_{C}}{dt}={y}_{C1}{F}_{1}-{y}_{C3}{F}_{3}-{r}_{1}$$

$$\frac{d{N}_{D}}{dt}={r}_{1}-{F}_{4}$$

where ${y}_{A1}=0.485$, ${y}_{B1}=0.005$, and ${y}_{C1}=0.51$ are the mol fractions of A, B, and C, in S1, and ${y}_{A3}$, ${y}_{B3}$, and ${y}_{C3}$ are the mol fractions of A, B, and C in S3 (i.e., ${y}_{i3}=\frac{{N}_{i}}{({N}_{A}+{N}_{B}+{N}_{C})}$, $i=A,\phantom{\rule{3.33333pt}{0ex}}B,\phantom{\rule{3.33333pt}{0ex}}C$). The units of both sides of Equations (23)–(26) are kmol/h. ${r}_{1}$ is the rate at which the reaction in the vessel takes place, and it is given by the following:

$${r}_{1}=0.00117{y}_{A3}^{1.2}{y}_{C3}^{0.4}{P}^{1.6}$$

where ${r}_{1}$ is given in units of kmol/h and P (in kPa) represents the pressure in the vessel and is computed via the ideal gas law as follows:

$$P=\frac{({N}_{A}+{N}_{B}+{N}_{C}){R}_{g}T}{{V}_{v}}$$

where ${R}_{g}=8.314$ kJ/kmol·K and $T=373\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ (i.e., isothermal operation is assumed). ${V}_{v}$ represents the volume of vapor in the vessel, where the vessel has a fixed volume of V = 122 m${}^{3}$ but the liquid has a time-varying volume that depends on ${N}_{D}$ and the liquid molar density of $8.3$ kmol/m${}^{3}$ such that ${V}_{v}$ is given (in m${}^{3}$) as follows:

$${V}_{v}=122-\frac{{N}_{D}}{8.3}$$

with ${N}_{D}$ in kmol. It is desired that the liquid level in the tank not exceed 30 m${}^{3}$ (the steady-state value of the liquid level is 44.18% of its maximum value).
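As a minimal sketch, the mole balances and algebraic relations of Equations (23)–(29) can be coded with the flow rates supplied as parameters; the helper names and code organization below are assumptions, not part of the paper, while the constants follow the text (isothermal operation at T = 373 K):

```python
# Sketch of the simplified Tennessee Eastman mole balances, Equations (23)-(29).
# The function names and code organization are illustrative assumptions.

Y_A1, Y_B1, Y_C1 = 0.485, 0.005, 0.51   # mole fractions in Stream 1
RG, T = 8.314, 373.0                    # kJ/kmol-K and K
V_TOTAL = 122.0                         # m^3, vessel volume
RHO_L = 8.3                             # kmol/m^3, liquid molar density

def pressure(NA, NB, NC, ND):
    """Ideal-gas pressure (kPa) from Equations (28) and (29)."""
    Vv = V_TOTAL - ND / RHO_L           # vapor volume, Equation (29)
    return (NA + NB + NC) * RG * T / Vv

def rhs(N, F1, F2, F3, F4):
    """Right-hand sides of the mole balances, Equations (23)-(26), in kmol/h."""
    NA, NB, NC, ND = N
    N_vap = NA + NB + NC
    yA3, yB3, yC3 = NA / N_vap, NB / N_vap, NC / N_vap
    P = pressure(NA, NB, NC, ND)
    r1 = 0.00117 * yA3**1.2 * yC3**0.4 * P**1.6   # Equation (27)
    return [Y_A1 * F1 + F2 - yA3 * F3 - r1,       # dNA/dt
            Y_B1 * F1 - yB3 * F3,                 # dNB/dt
            Y_C1 * F1 - yC3 * F3 - r1,            # dNC/dt
            r1 - F4]                              # dND/dt
```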

Three process inputs are assumed (${u}_{1}$, ${u}_{2}$, and ${u}_{3}$), which represent set-points for the percent opening of three valves that determine the flow rates ${F}_{1}$, ${F}_{2}$, and ${F}_{3}$ as follows:

$$\frac{d{X}_{1}}{dt}=360({u}_{1}-{X}_{1})$$

$$\frac{d{X}_{2}}{dt}=360({u}_{2}-{X}_{2})$$

$$\frac{d{X}_{3}}{dt}=360({u}_{3}-{X}_{3})$$

$${F}_{1}=330.46\frac{{X}_{1}}{100}$$

$${F}_{2}=22.46\frac{{X}_{2}}{100}$$

$${F}_{3}=0.00352{X}_{3}\sqrt{P-100}$$

where the units of time in Equations (30)–(32) are h and the units of flow in Equations (33)–(35) are kmol/h, and ${X}_{1}$, ${X}_{2}$, and ${X}_{3}$ represent the percentage opening of each valve (with an allowable range between 0% and 100%, such that the valve output would saturate if it hits these bounds). A fourth valve is also available for S4, for which the set-point for the valve position is adjusted with a proportional controller based on the error between the percentage of the 30 m${}^{3}$ of available liquid volume that is used in the tank (${V}_{\%,used}$) and the desired (steady-state) value of the percentage of the available liquid volume (${V}_{\%,sp}$), as follows:

$$\frac{d{X}_{4}}{dt}=360([{X}_{4,s}+{K}_{c}({V}_{\%,sp}-{V}_{\%,used})]-{X}_{4})$$

where ${X}_{4,s}$ represents the steady-state value of the percentage opening of the valve for S4, ${X}_{4}$ represents the percentage opening of the valve for S4, ${K}_{c}=-1.4$ is the tuning parameter of the proportional controller used in setting the set-point value for ${X}_{4}$, and ${V}_{\%,used}=\frac{\left(100\right)\left({N}_{D}\right)}{(8.3)\left(30\right)}$. The molar flow rate of S4 is given in terms of ${X}_{4}$ as follows:

$${F}_{4}=0.0417{X}_{4}\sqrt{P-100}$$
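The valve dynamics and flow relations can likewise be sketched as follows; the explicit-Euler step and function names are illustrative assumptions rather than part of the paper's formulation:

```python
# Sketch of the valve dynamics of Equations (30)-(32) and (36) and the flow
# relations of Equations (33)-(35) and (37); the Euler step and function names
# are illustrative assumptions.

def valve_step(X, u, dt):
    """One explicit-Euler step of dX/dt = 360*(u - X), with X in [0, 100] %."""
    X_next = X + dt * 360.0 * (u - X)
    return min(max(X_next, 0.0), 100.0)   # valve saturates at 0% and 100%

def flows(X1, X2, X3, X4, P):
    """Molar flow rates (kmol/h) from valve openings and pressure P (kPa)."""
    F1 = 330.46 * X1 / 100.0                 # Equation (33)
    F2 = 22.46 * X2 / 100.0                  # Equation (34)
    F3 = 0.00352 * X3 * (P - 100.0) ** 0.5   # Equation (35)
    F4 = 0.0417 * X4 * (P - 100.0) ** 0.5    # Equation (37)
    return F1, F2, F3, F4
```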

The steady-state values for the variables and associated inputs are presented in Table 1, with the subscript s denoting the steady-state value of each variable.

For this process, it is desired to maintain the value of the pressure in the reaction vessel below ${P}_{max}=3000\phantom{\rule{3.33333pt}{0ex}}$kPa. To regulate the process at its steady-state value, where ${P}_{s}<{P}_{max}$ as required as shown in Table 1, different control laws can be considered. We first consider the proportional-integral (PI) control laws developed in [78], which were applied in cyberattack scenarios involving attacks on sensors in [20]. In this case, the input ${u}_{1}$ is adjusted in a manner that seeks to modify the flow rate of the product D, ${u}_{2}$ is adjusted in a manner that seeks to modify the composition of A in S3 to avoid losing more reactant than necessary, and ${u}_{3}$ is adjusted in a manner that seeks to modify the pressure in the vessel since it can directly affect how much vapor flow can exit the vessel. To account for physical limitations on the maximum value of S3, an additional mechanism is added to aid pressure control: when the pressure exceeds 2900 kPa, the set-point value for ${F}_{4}$ that ${u}_{1}$ uses in computing how large ${F}_{1}$ should be is lowered, so that fewer reactants are supplied to the reactor and the pressure is thereby decreased. This is achieved through a fourth PI controller that computes a signal ${u}_{4}$ used in adjusting the set-point of ${F}_{4}$. The control laws, implemented in sample-and-hold with a sampling period of $\Delta =0.1$ h, are as follows:

$${u}_{1}({t}_{k})={u}_{1}({t}_{k-1})+{K}_{c,1}({e}_{1}({t}_{k})-{e}_{1}({t}_{k-1})+\frac{\Delta}{{\tau}_{I,1}}{e}_{1}({t}_{k}))$$

$${e}_{1}({t}_{k})={F}_{4,sp,adj}({t}_{k})-{F}_{4}({t}_{k})$$

$${u}_{2}({t}_{k})={u}_{2}({t}_{k-1})+{K}_{c,2}({e}_{2}({t}_{k})-{e}_{2}({t}_{k-1})+\frac{\Delta}{{\tau}_{I,2}}{e}_{2}({t}_{k}))$$

$${e}_{2}({t}_{k})=100({y}_{A3,s}-{y}_{A3}({t}_{k}))$$

$${u}_{3}({t}_{k})={u}_{3}({t}_{k-1})+{K}_{c,3}({e}_{3}({t}_{k})-{e}_{3}({t}_{k-1})+\frac{\Delta}{{\tau}_{I,3}}{e}_{3}({t}_{k}))$$

$${e}_{3}({t}_{k})={P}_{s}-P({t}_{k})$$

$${u}_{4}({t}_{k})={u}_{4}({t}_{k-1})+{K}_{c,4}({e}_{4}({t}_{k})-{e}_{4}({t}_{k-1})+\frac{\Delta}{{\tau}_{I,4}}{e}_{4}({t}_{k}))$$

$${e}_{4}({t}_{k})={P}_{bound}-P({t}_{k})$$

where ${P}_{bound}$ = 2900 kPa and the controller parameters are given in Table 1. ${F}_{4,sp,adj}$ represents the adjusted set-point for ${F}_{4}$, set to ${F}_{4,s}$ if ${u}_{4}>0$ and to ${F}_{4,sp,adj}={F}_{4,s}+{u}_{4}$ otherwise. ${u}_{1}$, ${u}_{2}$, and ${u}_{3}$ saturate at 0% or 100% if these limits are reached.
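For concreteness, one of the velocity-form PI laws above (e.g., Equations (38) and (39)) can be sketched in sample-and-hold form as follows; the class structure and the saturation handling are implementation assumptions rather than part of the paper's formulation:

```python
# Minimal sketch of one velocity-form PI law (cf. Equations (38)-(39)),
# implemented in sample-and-hold; names and saturation handling are assumptions.

class VelocityPI:
    def __init__(self, Kc, tau_I, dt, u0, lo=0.0, hi=100.0):
        self.Kc, self.tau_I, self.dt = Kc, tau_I, dt
        self.u_prev, self.e_prev = u0, 0.0
        self.lo, self.hi = lo, hi

    def step(self, error):
        """u(tk) = u(tk-1) + Kc*(e(tk) - e(tk-1) + (dt/tau_I)*e(tk))."""
        u = self.u_prev + self.Kc * (error - self.e_prev
                                     + (self.dt / self.tau_I) * error)
        u = min(max(u, self.lo), self.hi)   # saturate at 0% / 100%
        self.u_prev, self.e_prev = u, error
        return u
```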

In [20], several cyberattacks are proposed on the sensors associated with the controllers described above (i.e., incorrect measurements are provided to the controllers, causing them to compute inputs for the process which they would not otherwise have computed), with one sensor being attacked at a time. The results in [20] indicate that some types of attacks are successful at driving the pressure above its maximum bound, whereas others are not. For example, the authors of [20] comment that it was difficult in the simulations to achieve problematic pressures in the vessel with the measured values of ${F}_{4}$ or ${y}_{A3}$ being falsified for the controllers computing ${u}_{1}$ and ${u}_{2}$, whereas it is possible with a falsification of the measurement of P for the controllers computing ${u}_{3}$ and ${u}_{4}$ to achieve a pressure in the reactor above its limit. For example, Figure 1 shows the results of setting the measurement of ${y}_{A3}$ received by the controller computing ${u}_{2}$ to its maximum value (i.e., a mol fraction of 1) between 10 h and 30 h of operation after initializing the process at the steady-state. In both this case and in simulations with the measurement of ${F}_{4}$ received by the controller computing ${u}_{1}$ set to its minimum value (i.e., 0 kmol/h) between 10 h and 30 h of operation after initializing the process at the steady-state, the pressure during the simulations did not exceed 3000 kPa. However, if we simulate the process with the P measurement set to its minimum value of 0 kPa to affect the controllers computing ${u}_{3}$ and ${u}_{4}$, the pressure does exceed 3000 kPa (i.e., the cyberattack succeeds in bringing the plant to an unsafe condition; in this case, the simulation was performed only for $30\phantom{\rule{3.33333pt}{0ex}}$h as the unsafe condition was already reached within this timeframe).
The simulations were performed with an integration step size of ${10}^{-4}\phantom{\rule{3.33333pt}{0ex}}$h for simulating the dynamic process model of Equations (23)–(45), and were carried out in MATLAB R2016a by MathWorks®.

The differences in the results based on the attack performed indicate the complexities of closed-loop nonlinear systems that can make it difficult to predict every possible attack at a plant in order to develop appropriate detection and compensation strategies. In each case, a nonlinear system evolves over time under different input policies, and its response is therefore difficult to predict a priori. In addition to the dynamics of the process itself, the dynamics of the controllers that are not receiving falsified measurements, and how they interact with the inputs computed by controllers that are receiving false measurements, impact the success of the attack. For example, in Figure 1, the pressure measurement has not been compromised, and mechanisms are in place (through ${u}_{3}$ and ${u}_{4}$) for adjusting the pressure if it increases. These come into play once the pressure increases significantly and are able to maintain the pressure below the problematic value of 3000 kPa. A similar mechanism prevents the pressure from exceeding its threshold when the ${F}_{4}$ measurement is falsified; when the measurement of P is falsified, however, the controllers which provided robustness against the attacks in the other two cases are themselves compromised, and the attacks succeed. The number of sensors and which sensors are compromised also play a role (i.e., as shown by the attack on P, if the right sensors are compromised, an unsafe situation can be set up in this process). Furthermore, Figure 1 demonstrates that attack scenarios can be non-obvious. In this figure, the highest value of the pressure occurs not while the value of ${y}_{A3}$ received by the controller which manipulates ${u}_{2}$ is being falsified, but in the transient after it ceases to be falsified.
If the maximum pressure bound had been lower, the pressure during this transient could have exceeded it due to the rapid change in direction of the inputs once the actual state measurement of ${y}_{A3}$ becomes available again. In such a case, an attack could consist of the falsification followed by its removal, rather than of the falsified measurement alone.

#### 5.2. Deterring Sensor Measurement Falsification Cyberattacks on Safety: Creating Non-Intuitive Controller Outputs

The simplified Tennessee Eastman Process demonstrates that control designs with theoretical guarantees regarding cyberattack-resilience would be a valuable alternative to approaches which assume cyberattacks can be detected. In the next several sections, we seek to better understand how such controllers might be developed by examining two “intuitive” approaches which fail to meet the definition of cyberattack-resilience despite the logic behind their design, followed by an approach which meets the cyberattack-resilience definition. The first “intuitive” approach to be discussed is based on the concept that if the control law can be kept hidden from an attacker and the control law is sufficiently complex such that it is difficult for an attacker to postulate what input will be computed for a given state measurement without knowing the control law, the attacker may have difficulty in performing an attack. The control design that we will explore in this regard is an MPC with a sufficient number of and/or types of constraints in the controller such that it may become difficult to predict, without solving the optimization problem, what input may be computed for a given state measurement. The LMPC of Equations (14)–(20) is an example of a controller which might be considered. In that controller, the constraints of Equations (19) and (20) may cause the inputs computed by the LMPC of Equations (14)–(20) to be different from those computed by the MPC of Equations (9)–(13); therefore, if the same falsified state measurement was provided to both, it is possible that one might compute a control action that could drive the closed-loop state to an unsafe condition, whereas the other may not. If the cyberattacker did not know the control law being used, the presence of additional constraints like the stability-based constraints may cause inputs to be computed which an attacker does not expect. 
Furthermore, due to the closed-loop stability guarantees which can be made for LMPC (i.e., the closed-loop state remains in ${\mathrm{\Omega}}_{{\rho}_{1}}$ at all times under sufficient conditions) [60], a check at each sampling time on whether the measured state is in ${\mathrm{\Omega}}_{{\rho}_{1}}$ may provide a type of detection mechanism for cyberattacks that may make it more difficult for them to succeed. Specifically, under normal operating conditions, the state measurement should never be outside ${\mathrm{\Omega}}_{{\rho}_{1}}$; if it is, it may be considered that there is a potential the state measurement has been falsified. If a cyberattacker is unaware of the value of ${\rho}_{1}$, he or she may provide a false state measurement to the controller which triggers detection; on the other hand, if he or she is only able to attack a limited number of sensors, unless the attacker knows or can predict the readings of the unattacked sensors at each sampling time, the attacker does not know how close the full state measurement being received by the controller (incorporating the attacked and unattacked measurements) is to being outside of ${\mathrm{\Omega}}_{{\rho}_{1}}$. Again, an attack may be detected or deterred in this case.
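A minimal sketch of such a level-set consistency check is given below, using, for illustration, the quadratic Lyapunov function and the value ${\rho}_{1}=180$ from the CSTR example of Section 5.2.1; the function name and defaulted arguments are assumptions:

```python
# Sketch of a detection check that flags a state measurement lying outside the
# level set Omega_rho1, which should not occur under normal (unattacked)
# operation. P and rho_1 below are the CSTR-example values; the function name
# and defaulted arguments are assumptions.

def measurement_suspicious(x, P=((1200.0, 5.0), (5.0, 0.1)), rho1=180.0):
    """Return True if V1(x) = x^T P x > rho1, i.e., x is outside Omega_rho1."""
    v = sum(P[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return v > rho1
```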

Difficulties with this approach include, however: (1) if the cyberattacker did not know the control law being used, it is questionable whether a high-impact attack would be attempted regardless of the control law being used (i.e., it may not matter whether it has Lyapunov-based stability constraints or not), because in any case the control law is not known and therefore attempting to randomly attack the controller may be considered overly risky and unlikely to avoid detection; (2) the attacker may gain access to all of the sensors and learn the value of ${\rho}_{1}$, and thereby be able to maintain the falsified state measurement always in ${\mathrm{\Omega}}_{{\rho}_{1}}$ to avoid detection.

**Remark**

**3.**

We note that closed-loop stability of an approach like LMPC under normal operation (no cyberattacks) is proven elsewhere (e.g., [60]). The proof in [60] relies on the state measurement being accurate; therefore, this proof does not allow us to prove closed-loop stability in the presence of a cyberattack.

#### 5.2.1. Problems with Creating Non-Intuitive Controller Outputs

The pitfall of this approach from a nonlinear dynamic systems perspective is that it makes no attempt to prevent the existence of policies that could create unsafe operating conditions if the control law becomes known (i.e., Definition 4 is violated); it essentially relies on the hope that the cyberattacker will never learn enough about the control design to be able to attack it. If the attacker does discover the control law, it may not be overly difficult to develop an attack policy that drives the closed-loop state to an unsafe condition while maintaining the falsified state measurement in ${\mathrm{\Omega}}_{{\rho}_{1}}$, despite the many constraints. For example, in some cases it may be possible to formulate an optimization problem that aids in developing attack policies, which can then be assessed in closed-loop simulations to see whether they are likely to produce a problematic state trajectory.

To see this, consider a continuous stirred tank reactor (CSTR) in which the reactant A is converted to the product B via an irreversible second-order reaction. The feed and outlet volumetric flow rates of the CSTR are F, with the feed concentration ${C}_{A0}$ and feed temperature ${T}_{0}$. The CSTR is operated non-isothermally with a jacket used to remove or add heat to the reactor at heat rate Q. Constant liquid density ${\rho}_{L}$, heat capacity ${C}_{p}$, and liquid volume V are assumed, with the constants (from [79]) in Table 2. The dynamic process model is:

$${\dot{C}}_{A}=\frac{F}{V}({C}_{A0}-{C}_{A})-{k}_{0}{e}^{-\frac{E}{{R}_{g}T}}{C}_{A}^{2}$$

$$\dot{T}=\frac{F}{V}({T}_{0}-T)-\frac{\Delta H{k}_{0}}{{\rho}_{L}{C}_{p}}{e}^{-\frac{E}{{R}_{g}T}}{C}_{A}^{2}+\frac{Q}{{\rho}_{L}{C}_{p}V}$$

where ${C}_{A}$ and T represent the concentration and temperature in the reactor, respectively, E is the activation energy of the reaction, ${k}_{0}$ is the pre-exponential constant, ${R}_{g}$ is the ideal gas constant, and $\Delta H$ is the enthalpy of reaction. We develop the following vectors for the states and inputs in deviation form: $x={\left[{x}_{1}\phantom{\rule{3.33333pt}{0ex}}{x}_{2}\right]}^{T}={[{C}_{A}-{C}_{As}\phantom{\rule{3.33333pt}{0ex}}T-{T}_{s}]}^{T}$ and $u={\left[{u}_{1}\phantom{\rule{3.33333pt}{0ex}}{u}_{2}\right]}^{T}={[{C}_{A0}-{C}_{A0s}\phantom{\rule{3.33333pt}{0ex}}Q-{Q}_{s}]}^{T}$, where ${C}_{As}=1.22$ kmol/m${}^{3}$, ${T}_{s}=438.2\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$, ${C}_{A0s}=4$ kmol/m${}^{3}$, and ${Q}_{s}=0$ kJ/h are the steady-state values of ${C}_{A}$, T, ${C}_{A0}$, and Q at the operating steady-state.

The control objective is to maximize the following profit-based stage cost for the process of Equations (46) and (47) representing the production rate of the product B while computing control actions which meet the input constraints $0.5\le {C}_{A0}\le 7.5$ kmol/m${}^{3}$ and $-5\times {10}^{5}\le Q\le 5\times {10}^{5}$ kJ/h and maintain closed-loop stability:

$${L}_{e}={k}_{0}{e}^{-\frac{E}{{R}_{g}T\left(\tau \right)}}{C}_{A}{\left(\tau \right)}^{2}$$

We will use an LMPC with the stage cost in Equation (48) to control this process. We choose a Lyapunov function ${V}_{1}={x}^{T}Px$, where $P=[1200\phantom{\rule{3.33333pt}{0ex}}5;\phantom{\rule{3.33333pt}{0ex}}5\phantom{\rule{3.33333pt}{0ex}}0.1]$, ${h}_{1,1}\left(x\right)=0$ kmol/m${}^{3}$ for simplicity, and ${h}_{1,2}\left(x\right)$ is determined by Sontag’s control law [80] as follows:

$$\begin{array}{c}\hfill {h}_{1,2}\left(x\right)=\left\{\begin{array}{cc}-\frac{{L}_{\tilde{f}}{V}_{1}+\sqrt{{L}_{\tilde{f}}{V}_{1}^{2}+{L}_{{\tilde{g}}_{2}}{V}_{1}^{4}}}{{L}_{{\tilde{g}}_{2}}{V}_{1}},\hfill & \phantom{\rule{3.33333pt}{0ex}}\mathrm{if}\phantom{\rule{3.33333pt}{0ex}}{L}_{{\tilde{g}}_{2}}{V}_{1}\ne 0\hfill \\ 0,\hfill & \phantom{\rule{3.33333pt}{0ex}}\mathrm{if}\phantom{\rule{3.33333pt}{0ex}}{L}_{{\tilde{g}}_{2}}{V}_{1}=0\hfill \end{array}\right.\end{array}$$

where if ${h}_{1,2}$ exceeded the upper or lower bound on ${u}_{2}$, it was saturated at the respective bound. In Equation (49), $\tilde{f}$ represents the vector containing the terms in Equations (46) and (47) (after the model has been rewritten in deviation variable form in terms of ${x}_{1}$ and ${x}_{2}$) that do not contain any inputs, and $\tilde{g}$ represents the matrix that multiplies the vector of inputs ${u}_{1}$ and ${u}_{2}$ in the equation. ${L}_{\tilde{f}}{V}_{1}$ and ${L}_{{\tilde{g}}_{k}}{V}_{1}$ represent the Lie derivatives of ${V}_{1}$ with respect to $\tilde{f}$ and ${\tilde{g}}_{k}$, $k=1,2$. The state-space was discretized and the locations where ${\dot{V}}_{1}<0$ under the controller ${h}_{1}\left(x\right)$ were examined and used to set ${\rho}_{1}=180$. ${\rho}_{e,1}$ was set to be less than ${\rho}_{1}$, and was (heuristically) chosen to be 144. The process is initialized at ${x}_{init}=$$[-0.4$ kmol/m${}^{3}$${\phantom{\rule{3.33333pt}{0ex}}20\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}]}^{T}$ and simulated with the integration step of ${10}^{-4}\phantom{\rule{3.33333pt}{0ex}}$h, with N set to 10, and with $\Delta $ set to $0.01\phantom{\rule{3.33333pt}{0ex}}$h. The Lyapunov-based stability constraint activated when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,1}}$ was enforced at the end of every sampling period in the prediction horizon, and whenever the Lyapunov-based stability constraint involving the time-derivative of the Lyapunov function was enforced, the other Lyapunov-based constraint was implemented at the end of the sampling periods after the first. The simulations were implemented in MATLAB using fmincon. The initial guess provided to fmincon was the steady-state input vector. The maximum and minimum values of ${u}_{2}$ were multiplied by ${10}^{-5}$ within the optimization problem due to the large magnitudes of the upper and lower bounds allowed for this optimization variable.
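Equation (49) is straightforward to implement; the sketch below passes the Lie derivatives in as precomputed scalars, which is an organizational assumption rather than part of the paper's formulation:

```python
import math

# Sketch of the Sontag control law of Equation (49), saturated at the input
# bounds on u2 from the text; passing the Lie derivatives as precomputed
# scalars is an organizational assumption.

def sontag(LfV, LgV, u_min=-5.0e5, u_max=5.0e5):
    """h_{1,2}(x) from Equation (49), saturated at the bounds on u2 (kJ/h)."""
    if LgV == 0.0:
        return 0.0
    u = -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV
    return min(max(u, u_min), u_max)
```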

To consider an attack on the safety of this process, we assume that we do not want the temperature in the reactor to go 55 K above ${T}_{s}$ (because no temperature at any point in the stability region is this high, the controller should, under normal operation, have no trouble achieving this). However, if we assume that the cyberattacker knows the control law and can access the state measurements, he or she could exploit this to design an attack policy specific to the closed-loop system under consideration. To demonstrate that this can be possible, we will computationally develop an attack policy for this process through two optimization problems, the first of which tries to compute control actions within the input bounds which maximize the temperature reached within $N\Delta $ time units from the (actual) current state measurement, and the second of which finds a state measurement (to use as the false value in an attack) which can generate control actions that, ideally, are as close as possible to those developed in the first optimization problem and also ensure that there is a feasible solution to the constraints which will be employed in the LMPC. The first optimization problem is as follows:

$$\underset{u\left(t\right)\in S(\Delta )}{min}\phantom{\rule{8.53581pt}{0ex}}-({x}_{2}\left({t}_{N}\right)+{T}_{s})$$

$$\mathrm{s}.\mathrm{t}.\phantom{\rule{14.22636pt}{0ex}}\dot{\tilde{x}}\left(t\right)=\tilde{f}\left(\tilde{x}\left(t\right)\right)+\tilde{g}u\left(t\right)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left({t}_{0}\right)={x}_{init}$$

$$\phantom{\rule{14.22636pt}{0ex}}-3.5\le {u}_{1}\left(t\right)\le 3.5,\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{0},{t}_{N})$$

$$\phantom{\rule{14.22636pt}{0ex}}-5\times {10}^{5}\le {u}_{2}\left(t\right)\le 5\times {10}^{5},\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{0},{t}_{N})$$

Equations (50)–(54) are designed such that the solution of this optimization problem is a piecewise-constant input trajectory that meets the process input constraints (Equations (53) and (54)) and drives the temperature in the reactor as high as possible in $N\Delta $ time units (Equation (50)) according to the dynamics of the process (Equation (51)) starting from the state measurement at the current time (Equation (52); the current time is denoted by ${t}_{0}$ in this optimization problem since this problem is solved only once instead of in a receding horizon fashion). The solution of this optimization problem for the process of Equations (46) and (47) is a piecewise-constant input trajectory with ${u}_{1}$ varying between $3.4975$ and $3.4983$ kmol/m${}^{3}$ and ${u}_{2}$ varying between $499856.52$ and $499908.01$ kJ/h over the $N\Delta $ time units.
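The flavor of this attack policy can be illustrated with a forward simulation that simply holds both inputs at their upper bounds, which is approximately what the optimization returned. All parameter values in the sketch below are assumptions chosen only for illustration (the paper takes its values from Table 2), so the numbers need not match the paper's; the qualitative outcome, a temperature well above the 55 K threshold discussed next, is the point:

```python
import math

# Forward simulation of Equations (46)-(47) under inputs held at their upper
# bounds, mimicking the attack input policy found via Equations (50)-(54).
# All parameter values below are assumptions used only for illustration.

F, V, T0 = 5.0, 1.0, 300.0                 # m^3/h, m^3, K (assumed)
k0, E, RG = 8.46e6, 5.0e4, 8.314           # rate parameters (assumed)
DH, RHO_L, CP = -1.15e4, 1000.0, 0.231     # kJ/kmol, kg/m^3, kJ/kg-K (assumed)

def simulate(CA, T, u1, u2, dt=1e-4, steps=1000):
    """Explicit-Euler integration over steps*dt hours; u1, u2 in deviation form."""
    CA0, Q = 4.0 + u1, 0.0 + u2            # undo the deviation variables
    for _ in range(steps):
        r = k0 * math.exp(-E / (RG * T)) * CA**2
        CA += dt * (F / V * (CA0 - CA) - r)
        T += dt * (F / V * (T0 - T) - DH * r / (RHO_L * CP)
                   + Q / (RHO_L * CP * V))
    return CA, T

# Starting near the steady-state, saturated inputs rapidly heat the reactor:
_, T_attack = simulate(1.22, 438.2, u1=3.5, u2=5.0e5)
```

With these (assumed) numbers, the simulated temperature exceeds the 55 K threshold above ${T}_{s}=438.2$ K within 0.1 h of operation.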

Because the inputs are approximately constant throughout the $N\Delta $ time units in the solution to Equations (50)–(54), this suggests that a single initial condition may be sufficient for causing the problematic input policy to be generated at each sampling time. Specifically, the only information that the LMPC of Equations (14)–(20) receives from an external source at each time that it is solved is the state measurement in Equation (16); because it uses a deterministic process model and deterministic constraints, the LMPC of Equations (14)–(20) has a single solution for a given state measurement. Therefore, if a cyberattacker determines that an attack policy which applies the same input at every sampling time is desirable, he or she can cause the controller to compute this input at every sampling time by determining a state measurement value for which the problematic input is the solution to Equations (14)–(20), and then providing that same state measurement to the LMPC at every sampling time to cause it to keep computing the same problematic input.
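This replay property can be made concrete with a toy stand-in for the LMPC; the proportional law below is an assumption used only to show that any deterministic controller is a pure function of its measurement:

```python
# Replay sketch: a deterministic controller maps a measurement to a unique
# input, so supplying the same falsified measurement at every sampling time
# yields the same attacker-chosen input at every sampling time. The toy
# proportional law below is an assumed stand-in for the LMPC solve.

def controller(x_meas):
    """Deterministic measurement-to-input map (stand-in for the LMPC)."""
    return (-0.5 * x_meas[0], -2.0 * x_meas[1])

x_false = (-0.05207, -8.3934)   # the false measurement found in Section 5.2.1
inputs = [controller(x_false) for _ in range(10)]   # ten sampling periods
# All ten applied inputs are identical, regardless of the true plant state.
```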

The following second optimization problem finds the initial condition to use at each of the next N sampling periods that may cause the values of ${u}_{1}$ and ${u}_{2}$ in the first sampling period of the prediction horizon to be close to the averages of the N values of ${u}_{1}$ (${u}_{1,desired}$) and the N values of ${u}_{2}$ (${u}_{2,desired}$), respectively, determined by Equations (50)–(54), while allowing the constraints of Equations (14)–(20) to be met:

$$\underset{u\left(t\right)\in S(\Delta ),{x}_{meas}}{min}\phantom{\rule{8.53581pt}{0ex}}{\int}_{{t}_{0}}^{{t}_{1}}\left[{({u}_{1}\left(\tau \right)-{u}_{1,desired})}^{2}+{10}^{-10}{({u}_{2}\left(\tau \right)-{u}_{2,desired})}^{2}\right]\phantom{\rule{0.166667em}{0ex}}d\tau $$

$$\mathrm{s}.\mathrm{t}.\phantom{\rule{14.22636pt}{0ex}}\dot{\tilde{x}}\left(t\right)=\tilde{f}\left(\tilde{x}\left(t\right)\right)+\tilde{g}u\left(t\right)$$

$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left({t}_{0}\right)={x}_{meas}$$

$$\phantom{\rule{14.22636pt}{0ex}}-3.5\le {u}_{1}\left(t\right)\le 3.5,\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{0},{t}_{N})$$

$$\phantom{\rule{14.22636pt}{0ex}}-5\times {10}^{5}\le {u}_{2}\left(t\right)\le 5\times {10}^{5},\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{0},{t}_{N})$$

$$\phantom{\rule{14.22636pt}{0ex}}{V}_{1}\left(\tilde{x}\left({t}_{j}\right)\right)\le {\rho}_{e,1},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}j=0,\dots ,N$$

This optimization problem reverse engineers the LMPC of Equations (14)–(20) (except that it neglects the objective function of the controller) in the sense that it seeks to find an initial condition ${x}_{meas}$ (Equation (57)) to provide to the LMPC of Equations (14)–(20) for which there exists a feasible input policy over the N sampling periods of the prediction horizon that meets the process input constraints (Equations (58) and (59)) as well as the Lyapunov-based stability constraint of Equation (19) (Equation (60)), while allowing this feasible trajectory to include values of ${u}_{1}$ and ${u}_{2}$ in the first sampling period of the prediction horizon as close to the problematic values ${u}_{1,desired}$ and ${u}_{2,desired}$ as possible. The reason for only requiring ${u}_{1}$ and ${u}_{2}$ in the first sampling period of the prediction horizon to be close to the attack values is that, though the optimization problem of Equations (55)–(60) is solved only once to obtain the sensor attack policy ${x}_{meas}$ to provide to the LMPC at each subsequent sampling time, the LMPC will be solved at every sampling time and will only apply the input for the first sampling period of the prediction horizon in each case. The formulation of Equation (60) assumes that the attacker knows the exact manner in which this constraint is enforced in the LMPC, where, as noted above, it is enforced at the end of every sampling period in the prediction horizon. The requirement in Equation (60) that ${V}_{1}\left(\tilde{x}\left({t}_{0}\right)\right)\le {\rho}_{e,1}$ pre-selects ${x}_{meas}$ to be within ${\mathrm{\Omega}}_{{\rho}_{e,1}}$.
This eliminates the need to solve a disjunctive or mixed-integer nonlinear program [81] that allows the initial condition to be either in ${\mathrm{\Omega}}_{{\rho}_{e,1}}$ or in ${\mathrm{\Omega}}_{{\rho}_{1}}\setminus {\mathrm{\Omega}}_{{\rho}_{e,1}}$, such that the constraint to be employed (i.e., Equation (19) or Equation (20)) depends on the optimization variables that are the components of ${x}_{meas}$. The components of ${x}_{meas}$ were otherwise essentially unconstrained in Equations (55)–(60).

In solving Equations (50)–(60), the bounds on ${u}_{2}$ were multiplied by ${10}^{-5}$. The false state measurement determined from Equations (55)–(60) was ${x}_{1}=-0.05207$ kmol/m${}^{3}$ and ${x}_{2}=-8.3934$ K. Figure 2 demonstrates that when this state measurement is used at every sampling period for 10 sampling periods, the inputs computed are able to drive the temperature significantly above its threshold value ${x}_{2}=55$ K within a short time. When disturbances are added (specifically, simulations were performed with disturbances added to the right-hand sides of Equations (46) for ${w}_{1}$ and (47) for ${w}_{2}$) generated using the MATLAB functions rng(10) to generate a seed with normrnd to generate a pseudorandom number from a normal distribution with mean of zero and a standard deviation of 30 kmol/h (for ${w}_{1}$) and 3200 K/h (for ${w}_{2}$), with both inputs clipped when necessary to bound them such that $|{w}_{1}|\le 90$ and $|{w}_{2}|\le 9600$, an unsafe situation is again set up in 10 sampling periods in which ${x}_{2}$ approaches 300 K as in Figure 2. The LMPC only receives state measurements, regardless of whether there are disturbances or not; therefore, if the same state measurement is given every time, it computes the same solution to the optimization problem every time and when this solution is able to drive the closed-loop state to an unsafe condition if continuously applied, the cyberattacker succeeds. The attack-defining concept posed here could be attempted for other attack goals as well, such as minimizing a profit-based objective function in Equations (50)–(54) to seek to compute an attack policy that financially attacks the plant or minimizing a production-based objective function to seek to attack the chemical supply from the plant.

**Remark**

**4.**

The CSTR example indicates an important difference between traditional safety thinking and thinking about cyberattacks. In traditional safety thinking, there will be unsafe operating conditions that might be considered very unlikely to be achieved; when considering cyberattacks, there can be deliberate attempts on the part of the attacker to set up unsafe operating conditions that might otherwise be very unlikely to be achieved. It is therefore important to seriously consider unlikely scenarios at the hazard analysis stage from the perspective of whether a cyberattack on the control system could lead them to occur.

**Remark 5.**

Though the cyberattack design methodology presented in this section suggests that cyberattacks on specific control designs might be developed computationally, the framework used in Equations (50)–(60) may not always achieve expected effects. Specifically, the initial condition determined by Equations (55)–(60) may not actually result in the control actions of Equations (50)–(54) being computed at each sampling time by the controller because the only feature of Equations (55)–(60) that seeks to associate ${x}_{meas}$ with ${u}_{1,desired}$ and ${u}_{2,desired}$ is a soft constraint rather than a hard constraint, and it is, therefore, not guaranteed to be met. Furthermore, Equations (55)–(60) do not account for the role of the objective function in affecting which inputs would actually be computed for a given state measurement. In this example, the false state measurement determined through Equations (50)–(60) was able to rapidly set up an unsafe scenario when used to cyberattack the LMPC; to develop attacks for other systems, it may be necessary to develop a more sophisticated method for determining the false state measurements or to use closed-loop simulations to determine if the false state measurements determined computationally provide an appropriate attack scenario with which to test research results. Finally, it should be noted that Equations (50)–(54) are not guaranteed to find an input that drives ${x}_{2}$ above its threshold in N sampling periods; whether or not this occurs may depend on the process dynamics, the input bounds, the initial condition, and also the number of sampling periods N over which the increase in ${x}_{2}$ is allowed to occur.

#### 5.3. Deterring Sensor Measurement Falsification Cyberattacks on Safety: Creating Unpredictable Controller Outputs

The second “intuitive” approach seeks to address a perceived deficiency in the first “intuitive” approach, namely that the success of the cyberattacks in Section 5.2.1 was related to the fact that the cyberattacker could figure out the mapping between $x\left({t}_{k}\right)$ and u by learning the control law. One idea for addressing this would be to develop sets of stabilizing control laws for a process and choose only one, randomly, at each sampling time. Then, if the inputs which the various potential control laws would compute for the same state measurement are significantly different, it may be more difficult for an attacker to determine an attack policy that, regardless of the control law chosen at a sampling time, will drive the closed-loop state to an unsafe condition (even if the attacker knew every potential control law).

Before we can consider such an approach, it must be established that randomization in the controller selection process can be achieved without impacting closed-loop stability under normal operation (i.e., in the absence of a cyberattack). Theory-based control designs with stability guarantees from an explicitly characterizable region of attraction even in the presence of disturbances (e.g., LMPC) are therefore attractive options for use in randomization strategies for control laws. In the remainder of this section, we present an example of a control design and implementation strategy that uses LMPC to incorporate randomness in process operation (with the goal of deterring cyberattacks by obscuring the mapping between a state measurement at a given sampling time and the input to be computed) with closed-loop stability guarantees under normal operation even in the presence of the randomness. However, like the design in Section 5.2, this design and its implementation strategy do not fundamentally prevent the existence of an input policy which could create an unsafe condition for some $x\left({t}_{0}\right)\in \overline{X}$ (when, for example, $\overline{X}={\mathrm{\Omega}}_{{\rho}_{1}}$), and therefore if this design succeeds in preventing or delaying the impacts of cyberattacks, it does so more on the basis of chance than rigor, which is demonstrated below using the CSTR example.

#### 5.3.1. Creating Unpredictable Controller Outputs: Incorporating Randomness in LMPC Design

The randomized LMPC design involves the development of ${n}_{p}$ controllers of the form of Equations (14)–(20), but where each can have a different Lyapunov function, Lyapunov function upper bound, and Lyapunov-based controller, as follows:
$$\underset{u\left(t\right)\in S(\Delta )}{min}\phantom{\rule{8.53581pt}{0ex}}{\int}_{{t}_{k}}^{{t}_{k+N}}{L}_{e}(\tilde{x}\left(\tau \right),u\left(\tau \right))\phantom{\rule{0.166667em}{0ex}}d\tau $$
$$\mathrm{s}.\mathrm{t}.\phantom{\rule{14.22636pt}{0ex}}\dot{\tilde{x}}\left(t\right)=f(\tilde{x}\left(t\right),u\left(t\right),0)$$
$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left({t}_{k}\right)=x\left({t}_{k}\right)$$
$$\phantom{\rule{14.22636pt}{0ex}}\tilde{x}\left(t\right)\in X,\phantom{\rule{0.166667em}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$
$$\phantom{\rule{14.22636pt}{0ex}}u\left(t\right)\in U,\phantom{\rule{3.33333pt}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N})$$
$$\begin{array}{c}{V}_{i}\left(\tilde{x}\left(t\right)\right)\le {\rho}_{e,i},\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}\forall \phantom{\rule{0.166667em}{0ex}}t\in [{t}_{k},{t}_{k+N}),\\ \mathrm{if}\phantom{\rule{3.33333pt}{0ex}}x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}\end{array}$$
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\hfill \\ \phantom{\rule{1.em}{0ex}}\le \frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}f(x\left({t}_{k}\right),{h}_{i}\left(x\left({t}_{k}\right)\right),0)\hfill \\ \phantom{\rule{1.em}{0ex}}\mathrm{if}\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.277778em}{0ex}}x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{e,i}}\phantom{\rule{0.277778em}{0ex}}\mathrm{or}\phantom{\rule{3.33333pt}{0ex}}{t}_{k}\ge {t}^{\prime}\phantom{\rule{0.277778em}{0ex}}\mathrm{or}\phantom{\rule{0.277778em}{0ex}}\delta =1\hfill \end{array}$$

where ${V}_{i}$, ${\rho}_{e,i}$, ${\rho}_{i}$, and ${h}_{i}$, $i=1,\dots ,{n}_{p}$, play the roles in Equations (61)–(67) of ${V}_{1}$, ${\rho}_{e,1}$, ${\rho}_{1}$, and ${h}_{1}$, respectively, from Equations (14)–(20). Each combination of ${V}_{i}$ and ${h}_{i}$ is assumed to satisfy Equations (2)–(5) $\forall x\in {\mathrm{\Omega}}_{{\rho}_{i}}$ and ${\mathrm{\Omega}}_{{\rho}_{e,i}}\subset {\mathrm{\Omega}}_{{\rho}_{i}}$. For $j=2,\dots ,{n}_{p}$, the ${\mathrm{\Omega}}_{{\rho}_{j}}$ should be subsets of ${\mathrm{\Omega}}_{{\rho}_{1}}$ for reasons that will be clarified in Section 5.3.1.1. To introduce an additional aspect of randomness at each sampling time, the parameter $\delta $ is introduced in Equation (67). It can take a value of either 0 or 1, and one of those two values is randomly selected for it at each sampling time. $\delta =1$ corresponds to activation of the constraint of Equation (67) even when ${t}_{k}<{t}^{\prime}$ and $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}$.

With the ${n}_{p}$ controllers of the form of Equations (61)–(67) and the two possible values of $\delta $ in each of these LMPC’s at every sampling time, Equations (61)–(67) represent $2{n}_{p}$ potential controllers which may be selected at every sampling time (though if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{e,i}}$ for ${n}_{q}$ of these controllers, Equations (61)–(67) with $\delta =0$ and $\delta =1$ are the same, such that the number of control laws is $2{n}_{p}-{n}_{q}$). One could consider other potential control options in addition, such as the Lyapunov-based controllers ${h}_{i}\left(x\right)$, $i=1,\dots ,{n}_{p}$. However, though all of these controllers are designed and are available in principle, they could cause closed-loop stability issues that require that not all of them be available to be randomly selected between at each sampling time. The conditions which determine which controllers are possibilities at a given sampling time should rely on the position of $x\left({t}_{k}\right)$ in state-space and specifically whether $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for the i-th controller to be considered as a candidate.

To exemplify this, consider the two level sets ${\mathrm{\Omega}}_{{\rho}_{1}}$ and ${\mathrm{\Omega}}_{{\rho}_{2}}$ and their subsets ${\mathrm{\Omega}}_{{\rho}_{e,1}}$ and ${\mathrm{\Omega}}_{{\rho}_{e,2}}$ shown in Figure 3. Two potential values of $x\left({t}_{k}\right)$ are presented (${x}_{a}$ and ${x}_{b}$) to exemplify the role that the state-space location of $x\left({t}_{k}\right)$ should play in determining which of the ${n}_{p}$ controllers of the form of Equations (61)–(67) or the Lyapunov-based controllers of the form ${h}_{i}\left(x\left({t}_{k}\right)\right)$ should be considered as candidates to randomly select between at a given sampling time. Consider first that $x\left({t}_{k}\right)={x}_{a}$. In this case, $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{e,1}}$, and therefore, as described in Section 2.4, the LMPC of Equations (61)–(67) with $i=1$ would be able to maintain the closed-loop state in ${\mathrm{\Omega}}_{{\rho}_{1}}$ throughout the subsequent sampling period. It is also true that $x\left({t}_{k}\right)\notin {\mathrm{\Omega}}_{{\rho}_{e,2}}$, so it may at first seem reasonable to consider that if the LMPC of Equations (61)–(67) is used with $i=2$, the constraint of Equation (67) could be activated to decrease the value of the Lyapunov function between two sampling periods and thereby drive the closed-loop state toward the origin using the properties of the Lyapunov-based controller and the constraint of the form of Equation (67) described in Section 2.4. However, the closed-loop stability properties delivered by the constraint of Equation (67) are developed with the requirement that Equations (2)–(5) must hold within the stability region and that $x\left({t}_{k}\right)$ must be in this stability region. When $x\left({t}_{k}\right)\notin {\mathrm{\Omega}}_{{\rho}_{2}}$, these properties are not guaranteed to hold. 
Therefore, when $x\left({t}_{k}\right)={x}_{a}$ in Figure 3, the LMPC of Equations (61)–(67) with $i=2$ would not be a wise choice to randomly select at a given sampling time. Similarly, ${h}_{2}\left(x\left({t}_{k}\right)\right)$ is guaranteed to maintain closed-loop stability when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{2}}$, but if ${h}_{2}\left(x\left({t}_{k}\right)\right)$ is applied when $x\left({t}_{k}\right)={x}_{a}$, $x\left({t}_{k}\right)\notin {\mathrm{\Omega}}_{{\rho}_{2}}$ and therefore the stability properties are not guaranteed to hold.

In contrast, consider the potential initial condition $x\left({t}_{k}\right)={x}_{b}$. In this case, $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ and ${\mathrm{\Omega}}_{{\rho}_{2}}$. Consequently, Equations (61)–(67) with $i=1$ or $i=2$ (for $\delta =1$ or $\delta =0$), ${h}_{1}\left(x\left({t}_{k}\right)\right)$, and ${h}_{2}\left(x\left({t}_{k}\right)\right)$ can all maintain closed-loop stability of the process of Equation (1), and therefore all could be considered as potential control designs between which to randomly select at ${t}_{k}$. This indicates that the location of $x\left({t}_{k}\right)$ in state-space should be checked with respect to ${\mathrm{\Omega}}_{{\rho}_{i}}$, $i=1,\dots ,{n}_{p}$, before developing a candidate set of controllers to randomly select between at ${t}_{k}$. It should be noted, however, that if ${\mathrm{\Omega}}_{{\rho}_{i}}$, $i=2,\dots ,{n}_{p}$, are subsets of ${\mathrm{\Omega}}_{{\rho}_{1}}$, then at each sampling time, Equations (61)–(67) with $i=1$ and $\delta =0$, Equations (61)–(67) with $i=1$ and $\delta =1$, and ${h}_{1}\left(x\left({t}_{k}\right)\right)$ are all candidate controllers that can maintain closed-loop stability. If $x\left({t}_{k}\right)$ is in the intersection of additional level sets, there are additional candidate controllers which could be randomly selected between. Therefore, the minimum number of candidate controllers is 3 (or 2 if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{e,1}}$ such that Equations (61)–(67) with $\delta =0$ and $\delta =1$ are equivalent), with more potentially being possible, especially as more stability regions with more intersections are developed.
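The level-set membership checks that determine the candidate set (as for ${x}_{a}$ and ${x}_{b}$ in Figure 3) can be sketched as follows. The quadratic Lyapunov functions and level-set values below are illustrative assumptions chosen so that ${\mathrm{\Omega}}_{{\rho}_{2}}\subset {\mathrm{\Omega}}_{{\rho}_{1}}$; they are not the functions from the process example, and the indices are 0-based here versus 1-based in the text.

```python
import numpy as np

# Sketch of the candidate-set construction: controller i (LMPC or
# Lyapunov-based) is a candidate at t_k only if x(t_k) lies in its
# stability region, i.e., V_i(x(t_k)) <= rho_i.
P = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # V_1(x) = x' P_1 x (assumed)
     np.array([[2.0, 0.5], [0.5, 2.0]])]   # V_2(x) = x' P_2 x (assumed)
rho = [10.0, 4.0]  # rho_2 chosen so Omega_{rho_2} lies inside Omega_{rho_1}

def candidate_controllers(x):
    """Return indices i with V_i(x) <= rho_i (0-based)."""
    return [i for i, (Pi, ri) in enumerate(zip(P, rho))
            if x @ Pi @ x <= ri]

x_b = np.array([0.5, 0.5])    # inside both level sets, like x_b in Figure 3
x_a = np.array([2.0, -2.0])   # inside Omega_{rho_1} only, like x_a
```

For this choice of constants, `candidate_controllers(x_b)` contains both indices while `candidate_controllers(x_a)` contains only the first, mirroring the discussion of which controllers may be randomly selected at each state.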

Taking the above considerations into account, the implementation strategy for the LMPC design of Equations (61)–(67) is proposed as follows:

Step 1. At ${t}_{k}$, a random integer j between 1 and $2{n}_{p}$ is selected, and $\delta $ is randomly selected to be zero or one.

Step 2. If $j\in \{2,\dots ,{n}_{p}\}$, set $i=j$. If $j\in \{{n}_{p}+2,\dots ,2{n}_{p}\}$, set $i=j-{n}_{p}$. (If $j=1$ or $j={n}_{p}+1$, set $i=1$; no verification is needed in this case because the closed-loop state remains in ${\mathrm{\Omega}}_{{\rho}_{1}}$ under the conditions of Theorem 1, so move directly to Step 3.) Verify that $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ (i.e., that ${V}_{i}\left(x\left({t}_{k}\right)\right)\le {\rho}_{i}$). If so, move to Step 3. If not, return to Step 1.

Step 3. If j is a number between 1 and ${n}_{p}$, use the LMPC of Equations (61)–(67) with $i=j$ and the selected value of $\delta $. If $j={n}_{p}+d$, $d=1,\dots ,{n}_{p}$, set $u={h}_{d}\left(x\left({t}_{k}\right)\right)$.

Step 4. ${t}_{k}\leftarrow {t}_{k+1}$. Return to Step 1.
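The selection procedure above, together with the refinement noted in Remark 6 below of discarding indexes that fail the Step 2 check, can be sketched as follows. Here `in_region`, `solve_lmpc`, and `h` are placeholders for the membership test $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$, the LMPC of Equations (61)–(67), and the Lyapunov-based controllers, respectively; they are assumptions of this sketch, not functions defined in this work.

```python
import random

# Minimal sketch of the randomized selection at a sampling time t_k.
def select_input(x, n_p, in_region, solve_lmpc, h, rng=random):
    candidates = list(range(1, 2 * n_p + 1))
    while candidates:
        j = rng.choice(candidates)          # Step 1: random controller index
        delta = rng.randint(0, 1)           # Step 1: random delta in {0, 1}
        i = j if j <= n_p else j - n_p      # Step 2: map j to design index i
        if not in_region(i, x):             # Step 2: require x in Omega_{rho_i}
            candidates.remove(j)            # Remark 6: do not retry this j
            continue
        if j <= n_p:                        # Step 3: LMPC of Eqs. (61)-(67)
            return solve_lmpc(i, delta, x)
        return h(i, x)                      # Step 3: Lyapunov-based controller h_i
    raise RuntimeError("x(t_k) lies outside all stability regions")
```

Under the conditions of Theorem 1 the loop cannot exhaust its candidates, since the $i=1$ options always pass the membership test when $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$.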

**Remark 6.**

To prevent the same index from being selected multiple times as Steps 1 and 2 are repeated until a value of j is found for which $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$, indexes for which $x\left({t}_{k}\right)\notin {\mathrm{\Omega}}_{{\rho}_{i}}$ can be removed from the random integer selection procedure in Step 1 at ${t}_{k}$ as they are identified.

#### 5.3.1.1. Stability Analysis of Randomized LMPC

In this section, we develop sufficient conditions under which, in the absence of a cyberattack, the randomized LMPC implementation strategy of Section 5.3.1 guarantees closed-loop stability of the nonlinear process of Equation (1) and feasibility of the LMPC of Equations (61)–(67) whenever it is selected via the implementation strategy to determine a control action at a given sampling time. We first introduce three propositions that will then be used in proving the main results.

**Proposition 1.**

Refs. [60,82] Consider the systems
$${\dot{x}}_{a}\left(t\right)=f({x}_{a}\left(t\right),u\left(t\right),w\left(t\right))$$
$${\dot{x}}_{b}\left(t\right)=f({x}_{b}\left(t\right),u\left(t\right),0)$$
with initial states ${x}_{a}\left({t}_{0}\right)={x}_{b}\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$. There exists a function ${f}_{W}$ of class $\mathcal{K}$ such that:
$$|{x}_{a}\left(t\right)-{x}_{b}\left(t\right)|\le {f}_{W}(t-{t}_{0})$$
for all ${x}_{a}\left(t\right),{x}_{b}\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ and all $w\left(t\right)\in W$, with:
$${f}_{W}\left(\tau \right)=\frac{{L}_{w}\theta}{{L}_{x}}({e}^{{L}_{x}\tau}-1)$$

**Proposition 2.**

Refs. [60,82] Consider the Lyapunov function ${V}_{i}\left(\cdot \right)$ of the system of Equation (1). There exists a quadratic function ${f}_{V,i}\left(\cdot \right)$ such that:
$${V}_{i}\left(x\right)\le {V}_{i}\left(\widehat{x}\right)+{f}_{V,i}\left(|x-\widehat{x}|\right)$$
for all $x,\widehat{x}\in {\mathrm{\Omega}}_{{\rho}_{i}}$, with:
$${f}_{V,i}\left(s\right)={\alpha}_{4,i}\left({\alpha}_{1,i}^{-1}\left({\rho}_{i}\right)\right)s+{M}_{v,i}{s}^{2}$$
where ${M}_{v,i}>0$ is a constant.

**Proposition 3.**

Ref. [62] Consider the Lyapunov-based controller ${h}_{i}\left(x\right)$ that meets Equations (2)–(5) with Lyapunov function ${V}_{i}\left(\cdot \right)$, applied in sample-and-hold to the system of Equation (1). If ${\rho}_{i}>{\rho}_{e,i}>{\rho}_{min,i}>{\rho}_{s,i}$, and $\theta >0$, $\Delta >0$, and ${\epsilon}_{w,i}>0$ satisfy:
$$-{\alpha}_{3,i}\left({\alpha}_{2,i}^{-1}\left({\rho}_{s,i}\right)\right)+{L}_{x,i}^{\prime}M\Delta +{L}_{w,i}^{\prime}\theta \le -{\epsilon}_{w,i}/\Delta $$
then $\forall x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{s,i}}$,
$${V}_{i}\left(x\left(t\right)\right)\le {V}_{i}\left(x\left({t}_{k}\right)\right)$$
and $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$. Furthermore, if ${\rho}_{min,i}$ is defined as follows:
$${\rho}_{min,i}=max\{{V}_{i}\left(x(t+\Delta )\right):{V}_{i}\left(x\left(t\right)\right)\le {\rho}_{s,i}\}$$
then the closed-loop state is ultimately bounded in ${\mathrm{\Omega}}_{{\rho}_{min,i}}$ in the sense that:
$$\underset{t\to \infty}{lim\phantom{\rule{0.166667em}{0ex}}sup}\left|x\left(t\right)\right|\in {\mathrm{\Omega}}_{{\rho}_{min,i}}$$

**Theorem 1.**

Consider the system of Equation (1) in closed-loop under the implementation strategy of Section 5.3.1 based on controllers ${h}_{i}\left(x\right)$ that satisfy Equations (2)–(5), and consider that the conditions in Proposition 3 hold. Let ${\epsilon}_{w,i}>0$, $\Delta >0$, and ${\rho}_{i}>{\rho}_{e,i}>{\rho}_{min,i}>{\rho}_{s,i}$ satisfy:
$${\rho}_{e,i}\le {\rho}_{i}-{f}_{V,i}\left({f}_{W}(\Delta )\right)$$
and Equations (74) and (76), for $i=1,\dots ,{n}_{p}$, with ${\mathrm{\Omega}}_{{\rho}_{e,j}}\subset {\mathrm{\Omega}}_{{\rho}_{e,1}}$, $j=2,\dots ,{n}_{p}$. If $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ and $N\ge 1$, then the state $x\left(t\right)$ of the closed-loop system is always bounded in ${\mathrm{\Omega}}_{{\rho}_{1}}$.

**Proof.**

The proof consists of two parts. In the first part, we demonstrate that despite the random selection of a control law in Step 1 of the implementation strategy in Section 5.3.1, a characterizable control action is applied at every sampling time, and the LMPC of Equations (61)–(67) is feasible at every sampling time at which it is used for determining the control action to apply to the process. In the second part, we prove the results of Theorem 1.

Part 1. To demonstrate that an input with characterizable properties is returned by the implementation strategy of Section 5.3.1 at every sampling time to be applied to the process, we note that one of two inputs is returned at every sampling time: (a) a control action computed by the LMPC of Equations (61)–(67) with $i=j$ where $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{j}}$ or (b) a control action from a Lyapunov-based controller ${h}_{j}\left(x\left({t}_{k}\right)\right)$ where $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{j}}$.

In case (a), a solution to the LMPC of Equations (61)–(67) must have the characterizable property that it met the constraints of the LMPC because the LMPC always has at least one feasible solution. Specifically, ${h}_{i}\left(\tilde{x}\left({t}_{q}\right)\right)$, $q=k,\dots ,k+N-1$, $t\in [{t}_{q},{t}_{q+1})$, with $i=j$, is a feasible solution to the optimization problem of Equations (61)–(67) when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{j}}$. It causes the constraint of Equation (64) to be met because ${h}_{i}\left(\tilde{x}\left({t}_{q}\right)\right)$, $q=k,\dots ,k+N-1$, $t\in [{t}_{q},{t}_{q+1})$, maintains the closed-loop state in ${\mathrm{\Omega}}_{{\rho}_{j}}\subseteq {\mathrm{\Omega}}_{{\rho}_{1}}$ by Proposition 3, and the state constraint of Equation (64) is met for all states in ${\mathrm{\Omega}}_{{\rho}_{1}}$. ${h}_{i}\left(x\right)$ in sample-and-hold also satisfies the input constraint of Equation (65) by Equation (5). From Proposition 3, it causes the constraint of Equation (66) to be met when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{j}}$, and it trivially satisfies the constraint of Equation (67). Notably, the feasibility of ${h}_{i}\left(x\right)$ in sample-and-hold is true regardless of whether $\delta =1$ or $\delta =0$ because this is a feasible solution to all constraints of the optimization problem.

In case (b), the control action applied to the process is also characterizable because it is a control action that meets Proposition 3. Therefore, regardless of the control action applied at ${t}_{k}$, the control action has characterizable properties which can be used in establishing closed-loop stability. Furthermore, whenever Equations (61)–(67) are used to determine an input at a given sampling time, a feasible solution to this optimization problem always exists because it is ensured that $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ before the solution is obtained, and the feasibility of ${h}_{i}\left(\tilde{x}\left({t}_{q}\right)\right)$, $q=k,\dots ,k+N-1$, $t\in [{t}_{q},{t}_{q+1})$ was demonstrated to hold above as long as $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$.

Part 2. In this part, we prove that even with a control law randomly selected at every sampling time according to the implementation strategy in Section 5.3.1, the closed-loop state is maintained within ${\mathrm{\Omega}}_{{\rho}_{1}}$ for all times if $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$.

To demonstrate this, we first consider the case that at a given sampling time, a control law of the form of Equations (61)–(67) with $i=j$ when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{j}}$ is selected. In this case, either the constraint of Equation (66) is activated (if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}$), the constraint of Equation (67) is activated (if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{e,i}}$, ${t}_{k}\ge {t}^{\prime}$, or $\delta =1$), or both are activated (as may occur, for example, if ${t}_{k}\ge {t}^{\prime}$ or $\delta =1$ but $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}$).

Consider first the case that Equation (66) is activated. In this case, application of Proposition 2 (assuming that $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$) gives:
$${V}_{i}\left(x\left(t\right)\right)\le {V}_{i}\left(\tilde{x}\left(t\right)\right)+{f}_{V,i}\left(|x\left(t\right)-\tilde{x}\left(t\right)|\right)$$
for $t\in [{t}_{k},{t}_{k+1})$. Applying the constraint of Equation (66) and Proposition 1, we obtain:
$${V}_{i}\left(x\left(t\right)\right)\le {\rho}_{e,i}+{f}_{V,i}({f}_{W}\left(|t-{t}_{k}|\right))\le {\rho}_{e,i}+{f}_{V,i}\left({f}_{W}(\Delta )\right)$$
for $t\in [{t}_{k},{t}_{k+1})$. When Equation (78) holds, ${V}_{i}\left(x\left(t\right)\right)\le {\rho}_{i}$ for $t\in [{t}_{k},{t}_{k+1})$, which validates the assumption used in deriving this result and guarantees that $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$ when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}$ and the LMPC of Equations (61)–(67) is used to determine the input to the process of Equation (1). Because ${\mathrm{\Omega}}_{{\rho}_{i}}\subseteq {\mathrm{\Omega}}_{{\rho}_{1}}$, $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ for $t\in [{t}_{k},{t}_{k+1})$.

Consider now the case that the constraint of Equation (67) is activated. In this case, we have from this constraint and Equation (3) that:
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\hfill \\ \le \frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),{h}_{i}\left(x\left({t}_{k}\right)\right),0)\le -{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)\hfill \end{array}$$
from which we can obtain:
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),u\left({t}_{k}\right),w\left(t\right))\hfill \\ =\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),u\left({t}_{k}\right),w\left(t\right))\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}-\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}+\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\hfill \\ \le \left|\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),u\left({t}_{k}\right),w\left(t\right))\right.\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\left.-\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),u\left({t}_{k}\right),0)\right|-{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)\hfill \\ \le {L}_{x,i}^{\prime}|x\left(t\right)-x\left({t}_{k}\right)|+{L}_{w,i}^{\prime}\left|w\right|-{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)\hfill \\ \le {L}_{x,i}^{\prime}M\Delta +{L}_{w,i}^{\prime}\theta -{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)\hfill \end{array}$$
for $t\in [{t}_{k},{t}_{k+1})$, where the last inequality follows from Equations (7) and (8). Furthermore, if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{s,i}}$, we can obtain from Equation (82) that:
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),u\left({t}_{k}\right),w\left(t\right))\hfill \\ \phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\phantom{\rule{1.em}{0ex}}\le {L}_{x,i}^{\prime}M\Delta +{L}_{w,i}^{\prime}\theta -{\alpha}_{3,i}\left({\alpha}_{2,i}^{-1}\left({\rho}_{s,i}\right)\right)\hfill \end{array}$$

If Equation (74) holds, then

$$\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),u\left({t}_{k}\right),w\left(t\right))\le -{\u03f5}_{w,i}/\Delta $$

Integrating Equation (84) gives that ${V}_{i}\left(x\left(t\right)\right)\le {V}_{i}\left(x\left({t}_{k}\right)\right)$, $\forall t\in [{t}_{k},{t}_{k+1})$, such that if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{s,i}}$, then $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$, $\forall t\in [{t}_{k},{t}_{k+1})$.

If instead $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{s,i}}\subset {\mathrm{\Omega}}_{{\rho}_{i}}$, then from Equation (76), $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{min,i}}\subset {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$. Therefore, if Equations (61)–(67) are used to compute the input trajectory at ${t}_{k}$ and $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ and Equation (67) is applied, $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$ (this holds regardless of whether Equation (66) is simultaneously applied since this proof relied only on whether Equation (67) is applied and not whether the other constraints were simultaneously applied). Because ${\mathrm{\Omega}}_{{\rho}_{i}}\subseteq {\mathrm{\Omega}}_{{\rho}_{1}}$, this indicates that when the LMPC of Equations (61)–(67) is used with the constraint of Equation (67) activated to determine the control action at ${t}_{k}$ when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$, then $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ for $t\in [{t}_{k},{t}_{k+1})$.

Finally, consider the case that $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ and ${h}_{i}\left(x\left({t}_{k}\right)\right)$ is used to control the process of Equation (1) from ${t}_{k}$ to ${t}_{k+1}$. In this case, the following holds, as follows from Equation (3):
$$\frac{\partial {V}_{i}\left(x\left({t}_{k}\right)\right)}{\partial x}f(x\left({t}_{k}\right),{h}_{i}\left(x\left({t}_{k}\right)\right),0)\le -{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)$$
Using a similar series of steps as in Equation (82), we obtain:
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),{h}_{i}\left(x\left({t}_{k}\right)\right),w\left(t\right))\hfill \\ \le {L}_{x,i}^{\prime}M\Delta +{L}_{w,i}^{\prime}\theta -{\alpha}_{3,i}\left(|x\left({t}_{k}\right)|\right)\hfill \end{array}$$
If $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{s,i}}$, then as for Equation (83), we obtain:
$$\begin{array}{c}\frac{\partial {V}_{i}\left(x\left(t\right)\right)}{\partial x}f(x\left(t\right),{h}_{i}\left(x\left({t}_{k}\right)\right),w\left(t\right))\hfill \\ \le {L}_{x,i}^{\prime}M\Delta +{L}_{w,i}^{\prime}\theta -{\alpha}_{3,i}\left({\alpha}_{2,i}^{-1}\left({\rho}_{s,i}\right)\right)\hfill \end{array}$$

If Equation (74) holds, then we can use a similar series of steps as for Equation (84) to derive that ${V}_{i}\left(x\left(t\right)\right)\le {V}_{i}\left(x\left({t}_{k}\right)\right)$, $\forall t\in [{t}_{k},{t}_{k+1})$, such that if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}/{\mathrm{\Omega}}_{{\rho}_{s,i}}$, then $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$, $\forall t\in [{t}_{k},{t}_{k+1})$. If $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{s,i}}$, then when Equation (76) holds, we obtain that $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{min,i}}$, $t\in [{t}_{k},{t}_{k+1})$, so that $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ for $t\in [{t}_{k},{t}_{k+1})$. Since ${\mathrm{\Omega}}_{{\rho}_{i}}\subseteq {\mathrm{\Omega}}_{{\rho}_{1}}$, we again obtain that if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ and ${h}_{i}\left(x\left({t}_{k}\right)\right)$ is applied for $t\in [{t}_{k},{t}_{k+1})$, then $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $\forall \phantom{\rule{3.33333pt}{0ex}}t\in [{t}_{k},{t}_{k+1})$.

The above results indicate that throughout every sampling period, if the conditions of Theorem 1 hold and the implementation strategy in Section 5.3.1 is used, then the closed-loop state does not leave ${\mathrm{\Omega}}_{{\rho}_{1}}$, implying that it also holds throughout all time if $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$. This completes the proof. ☐
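Once the Lipschitz constants and class-$\mathcal{K}$ bounds are fixed, the level-set sizing condition ${\rho}_{e,i}\le {\rho}_{i}-{f}_{V,i}\left({f}_{W}(\Delta )\right)$ of Theorem 1 can be checked numerically via Propositions 1 and 2. The sketch below assumes quadratic-type bounds ${\alpha}_{1,i}\left(s\right)={a}_{1}{s}^{2}$ and ${\alpha}_{4,i}\left(s\right)={a}_{4}s$; all constants are illustrative assumptions, not values from the process example in this work.

```python
import math

# Illustrative constants (assumed, not from the CSTR example).
L_x, L_w, theta = 2.0, 1.0, 0.5  # Proposition 1 constants and the bound on |w|
Delta = 0.01                      # sampling period

def f_W(tau):
    """Bound on |x_a(t) - x_b(t)| from Proposition 1."""
    return (L_w * theta / L_x) * (math.exp(L_x * tau) - 1.0)

def f_V(s, rho_i, a1=1.0, a4=4.0, M_v=1.0):
    """Proposition 2 bound for assumed alpha_1(s) = a1*s^2, alpha_4(s) = a4*s."""
    return a4 * math.sqrt(rho_i / a1) * s + M_v * s ** 2

def theorem1_condition(rho_e, rho_i):
    """True if rho_{e,i} <= rho_i - f_{V,i}(f_W(Delta))."""
    return rho_e <= rho_i - f_V(f_W(Delta), rho_i)
```

For these constants, `theorem1_condition(9.5, 10.0)` holds while `theorem1_condition(9.99, 10.0)` does not, illustrating how ${\rho}_{e,i}$ must be chosen sufficiently far inside ${\mathrm{\Omega}}_{{\rho}_{i}}$ for the disturbance and sampling effects to be absorbed.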

**Remark 7.**

Theorem 1 only speaks to the closed-loop state remaining in a bounded region of operation. If the randomness is removed and the $i=1$ controller is selected to be used with the constraint of Equation (67) activated for all subsequent times (i.e., Equations (14)–(20) with $t>{t}^{\prime}$), the closed-loop state is guaranteed to be ultimately bounded in a neighborhood of the origin [60]. If the randomness is not removed but $t>{t}^{\prime}$ in Equations (61)–(67), the i-th controller will cause ${V}_{i}\left(x\left(t\right)\right)<{V}_{i}\left(x\left({t}_{k}\right)\right)$, $t\in ({t}_{k},{t}_{k+1}]$ as noted in Section 2.4. However, consider the case that $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ and $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{z}}$, but the i-th controller is selected at ${t}_{k}$. The decrease in ${V}_{i}$ throughout the sampling period as a result of using the i-th controller does not necessarily imply that ${V}_{z}\left(x\left(t\right)\right)<{V}_{z}\left(x\left({t}_{k}\right)\right)$, $\forall t\in ({t}_{k},{t}_{k+1}]$. If the randomness is removed, however, and only the $i=1$ controller is used with $t>{t}^{\prime}$, ${V}_{1}\left(x\left(t\right)\right)<{V}_{1}\left(x\left({t}_{k}\right)\right)$, $t\in ({t}_{k},{t}_{k+1}]$ in every sampling period (i.e., a continuous decrease of the same Lyapunov function is ensured so that the closed-loop state is guaranteed to move to lower level sets of this Lyapunov function and not to again leave them) until the closed-loop state reaches ${\mathrm{\Omega}}_{{\rho}_{s,1}}$, after which point it remains ultimately bounded in ${\mathrm{\Omega}}_{{\rho}_{min,1}}$. 
Another idea for driving the closed-loop state to a neighborhood of the origin with a randomized LMPC implementation strategy would be to change the implementation strategy at ${t}^{\prime}$ to only allow controllers to be selected in Steps 1-2 for which ${V}_{1}$ and ${h}_{1}$ are used in their design (e.g., ${h}_{1}$ and the $i=1$ LMPC) so that each of the potential controllers would cause a decrease in the same Lyapunov function value over time.

**Remark 8.**

The stability analysis reveals that despite the intuitive nature of the approach for deterring cyberattackers, it suffers from the same problem as the controller in Section 5.2; namely, it does not meet Definition 4, and once a cyberattacker learns the implementation strategy, he or she could develop an attack policy under which closed-loop stability is no longer guaranteed by the proof methodology above. We can see a potential for the lack of resilience by referring again to Figure 3 and noting that if the actual state measurement is at ${x}_{a}$, the closed-loop stability proof relies on the $i=2$ controller not being an option; however, a false state measurement of ${x}_{b}$ may cause the $i=2$ controller to be chosen when $x\left({t}_{k}\right)={x}_{a}$, such that the conditions required for the closed-loop stability proof in Theorem 1 (i.e., that the implementation strategy in Section 5.3.1 is correctly followed) do not hold. However, the closed-loop stability issues with the proposed design in the presence of a cyberattack run deeper than this; the problem is not necessarily that the control action computed by a controller that would not otherwise have been selected is used, but rather that, regardless of whether the controller used at ${t}_{k}$ should have been allowed per the implementation strategy in Section 5.3.1, the input applied to the process has no relationship to the state in the sense that, for example, the state constraints in Equations (66) and (67) are not necessarily met (or even close to being met) by the actual process state even if the controller used at ${t}_{k}$ indicated feasibility of the control action with respect to these constraints.
This is because the controller uses a different initial condition than the actual process initial condition and therefore may compute a state trajectory under the input selected as optimal by the LMPC that is very different from the actual process state trajectory under that same input, even in the absence of disturbances/plant-model mismatch; the cyberattack introduces mismatch at the initial condition of the model of Equation (62).
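The input-state disconnect described in this remark can be illustrated numerically. The sketch below uses a hypothetical scalar system (an assumption for illustration, not the CSTR model or the model of Equation (62)): under a falsified initial condition, the controller's predicted end-of-period state and the actual process state diverge even though the model is perfect and the same input is applied in both simulations.

```python
# Sketch (hypothetical toy dynamics, not the paper's process model): a falsified
# state measurement makes the controller's prediction start from the wrong point,
# so prediction and reality diverge under the same input with zero model mismatch.

def f(x, u):
    """Toy nonlinear dynamics x_dot = -x^3 + u (illustrative only)."""
    return -x**3 + u

def simulate(x0, u, dt=1e-3, steps=100):
    """Explicit-Euler integration over one sampling period."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x, u)
    return x

u_applied = 0.5          # same input applied in both simulations
x_actual = 2.0           # true process state at t_k
x_false = -1.0           # falsified measurement provided to the controller

x_pred = simulate(x_false, u_applied)    # what the controller "thinks" happens
x_true = simulate(x_actual, u_applied)   # what the process actually does

mismatch = abs(x_pred - x_true)
print(f"predicted end state: {x_pred:.4f}, actual end state: {x_true:.4f}")
```

Any state constraint the controller verifies along its predicted trajectory is checked against `x_pred`-type values, not the actual trajectory, which is the failure mode discussed above.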

#### 5.3.2. Problems with Incorporating Randomness in LMPC Design

In this section, we demonstrate the use of the randomized LMPC for the CSTR example of Section 5.2.1, both during routine operation and in the case that false state measurements are provided, to show that the randomized LMPC implementation strategy can maintain closed-loop stability under normal operation but, in certain sensor cyberattack cases, may at best only delay an unsafe condition from being reached (i.e., randomness by itself, without the properties in Definition 4, does not create cyberattack resilience in control). We first develop the set of LMPCs to be used to control the process of Equations (46) and (47). We begin by developing seven (i.e., ${n}_{p}=7$) potential combinations of ${V}_{i}$, ${h}_{i}$, ${\mathrm{\Omega}}_{{\rho}_{i}}$, and ${\mathrm{\Omega}}_{{\rho}_{e,i}}$. The form of each ${V}_{i}$ is ${x}^{T}{P}_{i}x$, where ${P}_{i}$ is a symmetric positive definite matrix of the following form:

$$\left[\begin{array}{cc}{P}_{11}& {P}_{12}\\ {P}_{12}& {P}_{22}\end{array}\right]$$
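For the $2\times 2$ symmetric form above, positive definiteness can be checked directly with Sylvester's criterion. The sketch below (with illustrative placeholder entries, not the values from Table 3) verifies a candidate ${P}_{i}$ and evaluates ${V}_{i}\left(x\right)={x}^{T}{P}_{i}x$.

```python
# Sketch: a 2x2 symmetric matrix [[p11, p12], [p12, p22]] is positive definite
# exactly when p11 > 0 and det = p11*p22 - p12^2 > 0 (Sylvester's criterion).
# The numeric entries below are illustrative placeholders, not Table 3's values.

def is_positive_definite(p11, p12, p22):
    """Sylvester's criterion for [[p11, p12], [p12, p22]]."""
    return p11 > 0 and p11 * p22 - p12**2 > 0

def V(x, p11, p12, p22):
    """Quadratic Lyapunov function x^T P x for x = (x1, x2)."""
    x1, x2 = x
    return p11 * x1**2 + 2 * p12 * x1 * x2 + p22 * x2**2

assert is_positive_definite(1200.0, 5.0, 0.1)    # valid candidate P_i
assert not is_positive_definite(1.0, 2.0, 1.0)   # fails: determinant < 0
```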

Sontag’s control law [80] was used to set the value of the component of every ${h}_{i}={\left[{h}_{i,1}\phantom{\rule{3.33333pt}{0ex}}{h}_{i,2}\right]}^{T}$ corresponding to ${u}_{2}$ as follows:

$$\begin{array}{c}\hfill {h}_{i,2}\left(x\right)=\left\{\begin{array}{cc}-\frac{{L}_{\tilde{f}}{V}_{i}+\sqrt{{L}_{\tilde{f}}{V}_{i}^{2}+{L}_{{\tilde{g}}_{2}}{V}_{i}^{4}}}{{L}_{{\tilde{g}}_{2}}{V}_{i}},\hfill & \phantom{\rule{3.33333pt}{0ex}}\mathrm{if}\phantom{\rule{3.33333pt}{0ex}}{L}_{{\tilde{g}}_{2}}{V}_{i}\ne 0\hfill \\ 0,\hfill & \phantom{\rule{3.33333pt}{0ex}}\mathrm{if}\phantom{\rule{3.33333pt}{0ex}}{L}_{{\tilde{g}}_{2}}{V}_{i}=0\hfill \end{array}\right.\end{array}$$

where ${h}_{i,2}$ was saturated at the respective bound whenever it fell below the lower bound or exceeded the upper bound on ${u}_{2}$. ${L}_{\tilde{f}}{V}_{i}$ and ${L}_{{\tilde{g}}_{k}}{V}_{i}$ represent the Lie derivatives of ${V}_{i}$ with respect to $\tilde{f}$ and ${\tilde{g}}_{k}$, $k=1,2$. For simplicity, ${h}_{i,1}$ was taken to be 0 kmol/m${}^{3}$ for $i=1,\dots ,7$. Using the values of the entries of each ${P}_{i}$ associated with each ${V}_{i}$ in Table 3 and the associated ${h}_{i}$, $i=1,\dots ,7$, the stability regions in Table 3 were obtained by discretizing the state-space and choosing an upper bound on each Lyapunov function in a region of state-space where ${\dot{V}}_{i}$ was negative at the discretized points under the controller ${h}_{i}$, $i=1,\dots ,7$ (the discretization was performed in increments of 0.01 kmol/m${}^{3}$ in ${C}_{A}$ for ${C}_{A}$ between 0 and 4 kmol/m${}^{3}$, and in increments of 1 K in T for T between 340 and 560$\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$). Subsets of the stability regions were selected to be ${\mathrm{\Omega}}_{{\rho}_{e,i}}$ with the goal of allowing several different control laws to be developed. For $i=2,\dots ,7$, ${\mathrm{\Omega}}_{{\rho}_{i}}\subseteq {\mathrm{\Omega}}_{{\rho}_{1}}$. The value of ${\rho}_{e,i}$ was not more than 80% of ${\rho}_{i}$ in each case.
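Sontag's formula above can be sketched in a few lines (a Python sketch for illustration; the paper's simulations used MATLAB, and the input bounds below are illustrative assumptions rather than the example's values):

```python
import math

# Sketch of Sontag's universal control law used for h_{i,2}: given the Lie
# derivatives LfV (of V_i along f~) and LgV (of V_i along g~_2) at the current
# state, return the unsaturated value, then clip at the bounds on u_2.

def sontag(LfV, LgV):
    """Sontag's formula: -(LfV + sqrt(LfV^2 + LgV^4)) / LgV, or 0 if LgV == 0."""
    if LgV == 0.0:
        return 0.0
    return -(LfV + math.sqrt(LfV**2 + LgV**4)) / LgV

def h_i2(LfV, LgV, u2_min=-5e5, u2_max=5e5):
    """Saturate Sontag's law at the (illustrative) input bounds on u_2."""
    return min(max(sontag(LfV, LgV), u2_min), u2_max)
```

Before saturation, the closed-loop Lie derivative under this law is ${L}_{\tilde{f}}{V}_{i}+{L}_{{\tilde{g}}_{2}}{V}_{i}\,{h}_{i,2}=-\sqrt{{L}_{\tilde{f}}{V}_{i}^{2}+{L}_{{\tilde{g}}_{2}}{V}_{i}^{4}}\le 0$, which is why the law renders ${\dot{V}}_{i}$ negative wherever ${L}_{{\tilde{g}}_{2}}{V}_{i}\ne 0$.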

Initially, we evaluate the closed-loop stability properties of the process of Equations (46) and (47) for normal operation under the randomized LMPC implementation strategy and, for comparison, under the $i=1$ LMPC used at all times. The process was initialized from ${x}_{init}={[-0.4\phantom{\rule{3.33333pt}{0ex}}\mathrm{kmol}/{\mathrm{m}}^{3}\phantom{\rule{3.33333pt}{0ex}}20\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}]}^{T}$. For the randomized LMPC design, the implementation strategy in Section 5.3.1 was followed with the exception that, for simplicity, $\delta $ was set to 0 at every sampling time, and only ${h}_{1}\left(x\right)$ was considered as a candidate controller at a given sampling time as an alternative to the controllers in Table 3. Therefore, at every sampling time, both the LMPC of Equations (61)–(67) with $i=1$ and ${h}_{1}\left(x\right)$ were allowable control actions, and the i-th controller in Table 3 was also allowable if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$. The simulations were implemented in MATLAB using fmincon; when the randomized LMPC implementation strategy was used, the random number generator was seeded with rng(5) and controllers were selected with the random integer generation function randi. The integration step for the model of Equations (46) and (47) was set to ${10}^{-4}$ h, $N=10$, and $\Delta =0.01$ h, with 1 h of operation simulated. The Lyapunov-based stability constraint activated when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,i}}$ was enforced at the end of every sampling period in the prediction horizon, and whenever the Lyapunov-based stability constraint involving the time-derivative of the Lyapunov function was enforced, the other Lyapunov-based constraint was implemented at the end of the sampling periods after the first. The initial guess provided to fmincon in both cases was the steady-state input vector. The maximum and minimum values of ${u}_{2}$ were multiplied by ${10}^{-5}$ in numerically solving the optimization problem.
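The candidate-selection step just described can be sketched as follows (a simplified Python sketch; the Lyapunov functions, level sets, and controller names are illustrative placeholders, not the Table 3 designs, and the paper's implementation used MATLAB's rng/randi):

```python
import random

# Sketch of the randomized selection: the i = 1 LMPC and h_1(x) are always
# allowable, and the i-th controller is additionally allowable whenever the state
# measurement lies in its stability region, i.e., V_i(x(t_k)) <= rho_i.

def allowable_controllers(x, V_funcs, rho, always=("LMPC_1", "h_1")):
    """Return the controllers that may be selected for state measurement x."""
    candidates = list(always)
    candidates += [f"controller_{i}" for i, Vi in V_funcs.items() if Vi(x) <= rho[i]]
    return candidates

def select_controller(x, V_funcs, rho, rng):
    """Randomly pick one allowable controller (uniformly, like randi)."""
    return rng.choice(allowable_controllers(x, V_funcs, rho))

# Illustrative setup: two extra controllers with quadratic level sets.
V_funcs = {2: lambda x: x[0]**2 + x[1]**2, 3: lambda x: 4*x[0]**2 + x[1]**2}
rho = {2: 1.0, 3: 1.0}
rng = random.Random(5)  # fixed seed, analogous to rng(5) in the simulations
print(select_controller((0.5, 0.5), V_funcs, rng=rng, rho=rho))
```

Note that the availability test uses the *measured* state; a falsified measurement therefore changes which controllers are offered to the random draw, which is exactly the vulnerability analyzed below.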

Figure 4, Figure 5 and Figure 6 show the state, input, and state-space trajectories resulting from controlling the process with one LMPC throughout the time period of operation, and Figure 7, Figure 8 and Figure 9 show the results of controlling the process with one of the eight potential control laws, selected at every sampling time depending on the position of the state measurement in state-space. The figures indicate that both the single LMPC implemented over time and the randomized LMPC implementation strategy were able to maintain the closed-loop state within ${\mathrm{\Omega}}_{{\rho}_{1}}$. Figure 10 shows which controller (i in Table 3) was selected by the randomized LMPC implementation strategy at each sampling time. Notably, the control laws associated with $i=2$, 3, and 4 in Table 3 were never chosen, which is consistent with the requirement that a control law is only available to be selected if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{i}}$ (from Figure 9, we see that the closed-loop state did not enter, for example, ${\mathrm{\Omega}}_{{\rho}_{2}}$ and ${\mathrm{\Omega}}_{{\rho}_{3}}$, and though the closed-loop state sometimes entered ${\mathrm{\Omega}}_{{\rho}_{4}}$ as shown in Figure 9, it was never in this region at a sampling time, which explains why these controllers were never selected by the randomized implementation strategy). The time-integral of Equation (48) was monitored for the process of Equations (46) and (47) under the inputs applied to the process, and also for steady-state operation. For the single LMPC implemented over time, it evaluated to 32.2187, while for the randomized LMPC implementation strategy, it evaluated to 27.7536. There is thus some profit loss due to the randomized LMPC implementation strategy, as well as large variations in the states and inputs shown in Figure 7 and Figure 8.
If the randomized LMPC implementation strategy were able to deter cyberattacks, one could weigh whether that benefit made the variations and profit loss acceptable. Despite the decrease in profits due to the randomization, both the single LMPC over time and the LMPCs implemented within the randomized implementation strategy significantly outperformed steady-state operation, for which the time-integral of Equation (48) evaluated to 13.8847.

After analyzing normal operation under the LMPC and the randomized LMPC implementation strategy, we examine differences in their responses to the cyberattack policy determined in Section 5.2.1, where the attack on the sensors is simulated for 10 sampling periods and the process is initialized at ${x}_{init}$. The metric used for comparing the results in the two scenarios is the time until the closed-loop state exceeds the threshold of 55$\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ for ${x}_{2}$ (because ${x}_{2}>55\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ occurs outside the stability region, the closed-loop state exits the stability region before this unsafe condition is reached). For the single LMPC, ${x}_{2}$ first exceeds its threshold around 0.0142 h. In the case of the randomized LMPC, different input policies (i.e., different sequences of randomly selected control laws) give different behavior in the presence of the cyberattack. Therefore, in Table 4, we present the approximate time that ${x}_{2}$ exceeds its threshold for 10 different arguments provided to the MATLAB seeding function rng, creating 10 different seeds for the random number generator that selects the control law at each sampling time. The table indicates that the randomization may slightly delay the time at which ${x}_{2}$ first exceeds its threshold compared to the case that the single LMPC is used. However, in none of the cases simulated was it able to prevent the cyberattack from driving the value of ${x}_{2}$ above its threshold within 0.1 h of operation. If a cyberattacker believes that some delay in the attack may cause him or her to be caught, this strategy may help deter some types of attacks. However, the results indicate that it is not cyberattack-resilient according to Definition 4.
Figure 11 shows the results of the simulations for $0.1\phantom{\rule{3.33333pt}{0ex}}$h with the randomized LMPC implementation strategy for different arguments of rng in state-space.

Figure 12 displays data on the inputs and the value of ${V}_{1}\left(x\right)$ over time under both the randomized LMPC implementation strategy and the single LMPC, as well as the control law selected among the 8 possibilities at each sampling time when the argument of rng is set to 20. This figure suggests that part of the difficulty with maintaining the closed-loop state in a bounded region under the attack is that, for the falsified state measurement, the available controllers compute inputs similar to each other and to the inputs which the single LMPC would compute, in the sense that they are either close in value or create similar effects on the closed-loop state (the $i=3$ and $i=4$ controllers are not available because the false state measurement that the controller receives, and uses in determining which control laws should be made available according to the randomized LMPC implementation strategy, is outside of ${\mathrm{\Omega}}_{{\rho}_{3}}$ and ${\mathrm{\Omega}}_{{\rho}_{4}}$). In other words, the fact that different control laws may be chosen to compute an input is not very effective in this case at obscuring the mapping between $x\left({t}_{k}\right)$ and the inputs applied to the process. From Figure 12, we see that all of the available control laws were used at some point, but the inputs computed in every case except for the $i=8$ controller were close to those of the single LMPC, and the $i=8$ controller was also not effective at causing a direction change in the value of ${V}_{1}$, even though its input trajectory has more noticeable differences from that computed by the single LMPC.

The attack policy chosen plays a role in the amount of delay in the success of an attack which the randomized LMPC implementation strategy of Section 5.3.1 may cause. For example, consider instead the falsified initial condition ${x}_{1}=0.0632$ kmol/m${}^{3}$ and ${x}_{2}=21.2056\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$, which is also within the stability region (but not within the stability regions of the $i=2,\phantom{\rule{3.33333pt}{0ex}}3$, or 4 controllers). If used at each sampling time, it can cause ${x}_{2}>55\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ in 0.0319 h under the single LMPC. For this attack policy, the approximate time after which ${x}_{2}>55\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ for the randomized LMPC implementation strategy is reported in Table 5. Some of the delays in the success of the attack at driving ${x}_{2}>55\phantom{\rule{3.33333pt}{0ex}}\mathrm{K}$ in this case are much more significant than in Table 4. The simulation results demonstrate that the lack of resiliency of the randomized LMPC policy can come from the lack of correlation between the inputs applied and the actual process state at each sampling time, as discussed in Remark 8. For example, for the case where the seed used is 5, the same inputs are applied to the process in both the case that the single LMPC is used and the case that the randomized LMPC implementation strategy is used at the sampling period beginning at ${t}_{k}=0.02\phantom{\rule{3.33333pt}{0ex}}$h, but because the initial condition at ${t}_{k}$ in both cases is different (caused by the different input policies computed in the prior sampling period by the use of the different control laws), these same inputs in one case drive the closed-loop state out of the stability region in the sampling period, and in the other case they do not succeed in driving it out in the sampling period. 
Conversely, in the sampling periods between ${t}_{k}=0.03\phantom{\rule{3.33333pt}{0ex}}$h and $0.05\phantom{\rule{3.33333pt}{0ex}}$h, the inputs applied to the process under the randomized LMPC implementation strategy are not the values that would have been computed if the single LMPC had been used, but they drive the closed-loop state out of the stability region. Though the randomness may be beneficial at helping delay the success of attacks in some cases, it does not address the fundamental lack of correlation between the applied inputs and the actual process state that causes the cyberattack success.

Simulations were also performed in the case that it was attempted to operate the process at steady-state (instead of in a time-varying fashion) by removing the constraint of Equation (66) and using the following quadratic stage cost:

$${L}_{e}={\tilde{x}}^{T}Q\tilde{x}+{u}^{T}Ru$$

where $Q=\mathrm{diag}({10}^{4},100)$ and $R=\mathrm{diag}({10}^{4},{10}^{-6})$. In this case, the LMPC and randomized LMPC implementation strategy with rng(5) drive the closed-loop state to a neighborhood of the origin in the absence of a cyberattack. If the falsified state measurement determined in Section 5.2.1 is applied (without attempting to see whether there may be a more problematic input policy for the tracking control design), ${x}_{2}$ > 55 K in $0.0834\phantom{\rule{3.33333pt}{0ex}}$h under the single LMPC and $0.1395\phantom{\rule{3.33333pt}{0ex}}$h under the randomized LMPC strategy with rng(5). This demonstrates that processes operated at steady-state are not immune to cyberattacks when operated under LMPC or a randomized LMPC implementation strategy because again in this case, the value of $x\left({t}_{k}\right)$ becomes decoupled from the input being applied. In a coupled nonlinear system, this may result in state trajectories that do not drive the (actual) closed-loop state to the origin.

**Remark 9.**

The last result concerning steady-state operation indicates that the difficulties with the randomized LMPC design with respect to Definition 4 hold regardless of whether δ in Equations (61)–(67) is fixed at 0 or 1, as the issue does not stem from whether the controller is attempting to drive the state predictions it is making toward the steady-state through the inputs it computes or whether it is attempting to operate the process in a time-varying fashion, but rather stems from the disconnect between what the controller thinks it is achieving and what it is actually achieving due to the falsified state measurements. This also indicates that having the inputs computed by the different potential controllers be significantly different from one another to create significant randomness in what input would be applied to the process may help in some cases (particularly if it sometimes reverses the direction in which ${V}_{1}$ changes), but it cannot address the input-state disconnect unless the manner in which random control laws are selected or generated can be proven to cause Definition 4 to be met. The fact that an allowable input policy exists that can cause problems means that even random attack strategies may pose a problem. Therefore, while a cyberattacker who cannot afford any delay in an attack might be deterred by the randomized LMPC implementation strategy, it is unlikely that this policy would provide a sufficient barrier to attacks.

#### 5.3.3. Creating Unpredictable Controller Outputs: Other Types of Randomness in MPC Design

There are many other techniques besides the randomized LMPC design of the prior sections which could be used to create randomness in control selection/design. For example, the closed-loop stability proofs for LMPC in [60] are independent of the objective function; therefore, one method for introducing randomness in the operation of the process of Equation (1) under LMPC without losing closed-loop stability during normal operation would be to make random modifications to the objective function of Equations (14)–(20) at each sampling time by adding penalty terms which change/are randomly generated at every sampling time (e.g., in some sampling periods they are zero, and in others they may penalize the difference between the input values and randomly selected values within the input bounds). The LMPC could also seek to generate input policies that create significant input variation over time by using penalty terms in the objective function on the similarity between the input trajectory computed at ${t}_{k}$ and that applied at ${t}_{k-1}$ (through, for example, terms such as ${\sum}_{i=1}^{m}{({u}_{i}\left({t}_{k}\right)-{u}_{i}\left({t}_{k-1}\right))}^{2}$ subtracted from the stage cost so that the objective function is minimized more strongly when the difference between the inputs in two consecutive sampling periods is greater; this is not a randomly generated penalty, but it is one that can differ between sampling times because $u\left({t}_{k-1}\right)$ can be different at each sampling time). A potential disadvantage of this approach, however, is that it causes other terms in the objective function, which are chosen to be meaningful with respect to operating objectives such as profit or steady-state tracking, to compete with randomly generated terms.
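The two objective modifications just described can be sketched as follows (a Python sketch with illustrative weights, bounds, and activation probability, all assumptions rather than the paper's design): a randomly activated penalty on the distance of the inputs from randomly drawn targets within the input bounds, plus a (non-random) reward for input variation between consecutive sampling periods.

```python
import random

# Sketch of a modified stage cost: (i) in some sampling periods, penalize the
# distance of u from random targets inside the input bounds; (ii) always subtract
# sum_i (u_i(t_k) - u_i(t_{k-1}))^2 to reward input variation over time.

def modified_stage_cost(base_cost, u, u_prev, u_bounds, rng,
                        w_rand=1.0, w_var=1.0, p_active=0.5):
    """Stage cost with a random-target penalty and an input-variation reward."""
    cost = base_cost(u)
    if rng.random() < p_active:  # random penalty active only in some periods
        targets = [rng.uniform(lo, hi) for (lo, hi) in u_bounds]
        cost += w_rand * sum((ui - ti)**2 for ui, ti in zip(u, targets))
    # reward dissimilarity from the previously applied input
    cost -= w_var * sum((ui - pi)**2 for ui, pi in zip(u, u_prev))
    return cost

rng = random.Random(0)
c = modified_stage_cost(lambda u: sum(ui**2 for ui in u),
                        u=[1.0, 0.0], u_prev=[0.0, 0.0],
                        u_bounds=[(-2.0, 2.0), (-2.0, 2.0)], rng=rng)
```

The tuning difficulty mentioned above shows up here as the choice of `w_rand` and `w_var`, which trade meaningful objective terms against the randomly generated ones.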

Another idea for creating randomness within the control design that does not impact the objective function (and therefore does not require the difficult task of determining an appropriate tuning that can trade off meaningful terms against randomly generated terms, as in the policies of the prior paragraph) would be to randomly generate constraints for an MPC at every sampling time. For example, the state constraint of Equation (17) might be modified to become $\tilde{x}\left(t\right)\in \tilde{X}$, $t\in [{t}_{k},{t}_{k+N})$, where $\tilde{X}$ is a state-space region that is randomly generated at every sampling time (but $\tilde{X}\subset X$ to ensure that the modified state constraint maintains the closed-loop state predictions in X). As an example, consider that $\tilde{x}\left(t\right)\in X$ represents a state constraint of the form ${x}_{min}\le \tilde{x}\left(t\right)\le {x}_{max}$, $t\in [{t}_{k},{t}_{k+N})$. A constraint of the form $\tilde{x}\left(t\right)\in \tilde{X}$ might require that at every sampling time, ${x}_{rand,min}\le \tilde{x}\left(t\right)\le {x}_{rand,max}$, where ${x}_{rand,min}$ and ${x}_{rand,max}$ are two randomly selected real numbers (at every sampling time) with ${x}_{rand,min}\ge {x}_{min}$, ${x}_{rand,max}\le {x}_{max}$, and ${x}_{rand,min}\le {x}_{rand,max}$. However, these modified state constraints are hard constraints that are not guaranteed to be satisfied throughout ${\mathrm{\Omega}}_{{\rho}_{1}}$ ($\tilde{x}\in X$ can be guaranteed to be satisfied by defining ${\mathrm{\Omega}}_{{\rho}_{1}}$ to be in X, but it is not guaranteed that $\tilde{x}$ can be maintained in randomly generated subsets of X that may only constitute subsets of the stability region that are not necessarily related to ${V}_{1}$ and therefore are not necessarily forward invariant). Therefore, the randomly generated hard constraints may impact feasibility of an LMPC. 
Methods for handling this could include reformulating the hard constraints as soft constraints in the objective function when the problem is determined to be infeasible at ${t}_{k}$, or generating multiple (i.e., up to $\overline{p}$) random subsets of X at ${t}_{k}$, forming up to $\overline{p}$ LMPCs using these different subsets as the state constraints of Equation (17), and then attempting to solve these LMPCs in order from 1 to $\overline{p}$ to see whether one is feasible and can be used to compute a control action before applying a backup control law that guarantees closed-loop stability, such as ${h}_{1}\left(x\right)$. Closed-loop stability of the system of Equation (1) under the LMPC of Equations (14)–(20) with Equation (17) modified to allow for random state constraint generation would follow from the results in [60] if feasibility is maintained. One could also consider other methods for developing randomly generated state constraints, such as exploring the potential for randomly generating constraints on regions for the closed-loop state to avoid [9,10,11] at each sampling time. However, even if optimization-based control designs with randomly generated constraints are feasible at a sampling time, they may also have disadvantages with respect to profit. For example, if the objective function is related to process economics and subsets of the allowable operating region are disallowed by hard constraints, the inputs seek to optimize the economics over a more restricted constraint set than is actually available, which would be expected to negatively impact profits. This is because the goal of the randomization is to cause the controller to compute inputs which it would not normally compute if the constraint set were less restrictive, in order to prevent an attacker from mapping $x\left({t}_{k}\right)$ to an input.
If the global optimum of the objective function within the allowable constraint set is assumed to be achieved with the solution to the controller without the randomization, then any deviations of the solution from this optimal value for the purpose of making the input-state measurement mapping difficult to determine would result in a decrease in profit compared to the optimum condition. If the global optimum is achieved, however, this means that the randomization is not succeeding in computing inputs which are difficult to map to the state measurements. Therefore, the premise of the randomized constraint designs would cause a profit reduction in cases where the economics are being optimized in the objective function (though popular techniques for solving nonlinear optimization problems (e.g., [83]) may find local rather than global optima, making it less obvious whether the randomization strategy will result in a profit loss compared to the (local) solution which might be found without the randomization).
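The constraint-randomization cascade described above can be sketched as follows. This is a structural sketch only: the "solver" stand-in merely checks box membership rather than solving an LMPC, and the dynamics, bounds, and backup law are illustrative assumptions.

```python
import random

# Sketch of the cascade: draw up to p_bar random boxes inside X = [x_min, x_max],
# try an LMPC with each box as its state constraint in order, and fall back to the
# backup law h_1(x) (which guarantees closed-loop stability) if none is feasible.

def random_box(x_min, x_max, rng):
    """Random sub-interval [a, b] of [x_min, x_max] with a <= b."""
    a, b = sorted(rng.uniform(x_min, x_max) for _ in range(2))
    return a, b

def select_input(x, x_min, x_max, solve_lmpc, h1, rng, p_bar=3):
    """Try LMPCs with up to p_bar random state-constraint boxes, else use h_1."""
    for _ in range(p_bar):
        box = random_box(x_min, x_max, rng)
        u = solve_lmpc(x, box)   # returns None when infeasible
        if u is not None:
            return u
    return h1(x)                 # backup law guaranteeing closed-loop stability

# Illustrative stand-ins (assumptions, not the paper's controllers):
def solve_lmpc(x, box):
    lo, hi = box
    return -0.5 * x if lo <= x <= hi else None

h1 = lambda x: -x
u = select_input(0.2, -1.0, 1.0, solve_lmpc, h1, random.Random(1))
```

As the text notes, even when every draw is feasible, the tightened boxes restrict the feasible set of an economics-based objective, which is the source of the expected profit loss.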

The results of the prior sections of this work indicate that cyberattack-deterring control policies incorporating randomness cannot rely on randomness alone to prevent cyberattacks from being successful or from being attempted; the inputs computed by any cyberattack-resilient policy according to Definition 4 must have a structure that prevents the fact that they are decoupled from the state measurements from driving the closed-loop state out of a set of safe operating conditions.

#### 5.4. Deterring Sensor Measurement Falsification Cyberattacks on Safety: Using Open-Loop Controller Outputs

Whereas the “intuitive” approaches of the prior sections failed to be cyberattack-resilient, in this section, we show that it may be possible to develop operating policies for which sensor falsification cyberattacks intended to impact process safety cannot be successful. The policy to be examined is specific to a subset of the class of systems of Equation (1), specifically those which have an open-loop asymptotically stable equilibrium. For clarity of notation in the following, we will denote the set of nonlinear systems of the form of Equation (1) with an open-loop asymptotically stable equilibrium as follows:

$$\dot{x}={f}_{as}(x,u,w)$$

where ${f}_{as}$ is a locally Lipschitz vector function of its arguments and ${f}_{as}(0,0,0)=0$. The following conditions hold for all $x\in {D}^{\prime}\subset {R}^{n}$, where ${D}^{\prime}$ is a neighborhood of the origin:

$${\alpha}_{5}(|x|)\le {V}^{\prime}\left(x\right)\le {\alpha}_{6}(|x|)$$

$$\frac{\partial {V}^{\prime}\left(x\right)}{\partial x}\phantom{\rule{0.277778em}{0ex}}{f}_{as}(x,{u}_{s},0)\le -{\alpha}_{7}(|x|)$$

where ${u}_{s}=0$ denotes the steady-state input, ${V}^{\prime}:{R}^{n}\to {R}_{+}$ is a sufficiently smooth positive definite Lyapunov function, and the functions ${\alpha}_{5}$, ${\alpha}_{6}$ and ${\alpha}_{7}$ are of class $\mathcal{K}$. We define a level set of ${V}^{\prime}$ within ${D}^{\prime}$ where $x\in X$ as a stability region ${\mathrm{\Omega}}_{{\rho}^{\prime}}$ of the nominal system of Equation (91) under ${u}_{s}$ (${\mathrm{\Omega}}_{{\rho}^{\prime}}:=\{x\in X\cap {D}^{\prime}:{V}^{\prime}\left(x\right)\le {\rho}^{\prime}\}$). In the remaining developments, we assume that ${V}^{\prime}$ can be chosen to be the same as ${V}_{1}$.
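To make the conditions of Equations (92) and (93) concrete, the grid check below verifies them for a toy open-loop asymptotically stable scalar system (an assumption for illustration, not the process of Equation (91)): $\dot{x}=-x+u+w$ with ${V}^{\prime}\left(x\right)={x}^{2}$, ${\alpha}_{5}\left(s\right)={\alpha}_{6}\left(s\right)={s}^{2}$, and ${\alpha}_{7}\left(s\right)=2{s}^{2}$.

```python
# Sketch: numerically verify Eq. (92) (class-K sandwich on V') and Eq. (93)
# (negative definiteness of V'-dot under u_s = 0) for a toy system on D' = [-2, 2].

def f_as(x, u, w):
    return -x + u + w          # open-loop asymptotically stable at the origin

def V_prime(x):
    return x * x               # V'(x) = x^2

def dV_dx(x):
    return 2.0 * x

alpha_5 = alpha_6 = lambda s: s * s
alpha_7 = lambda s: 2.0 * s * s
u_s = 0.0

grid = [i * 0.01 for i in range(-200, 201)]   # discretization of D'
cond_92 = all(alpha_5(abs(x)) <= V_prime(x) <= alpha_6(abs(x)) for x in grid)
cond_93 = all(dV_dx(x) * f_as(x, u_s, 0.0) <= -alpha_7(abs(x)) for x in grid)
print(cond_92, cond_93)
```

This mirrors, in miniature, the grid-based stability-region verification used for the CSTR example earlier in the paper.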

#### 5.4.1. Using Open-Loop Controller Outputs: Integration with LMPC

For the system of Equation (91), ${u}_{s}$ itself is a cyberattack-deterring input policy according to Definition 4 when $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}\subset {\mathrm{\Omega}}_{{\rho}^{\prime}}\subset X$ because it drives the closed-loop state to the origin and is independent of the sensor measurements. However, it does not use feedback of the process state to impact the speed with which the steady-state is approached. Furthermore, it cannot drive the closed-loop state off of the steady-state in a fashion that seeks to optimize process economics. It therefore lacks the desirable properties of feedback controllers for non-attack scenarios, but in the case of cyberattacks on sensors, it has advantages over feedback control in that it is not dependent on sensor readings. This indicates that ${u}_{s}$ and feedback controllers complement one another; the former is beneficial for preventing cyberattack success, and the latter is beneficial for normal operation. Therefore, in this section, we explore integrating these two types of control in an implementation strategy that, as will be proven in the next section, is guaranteed under sufficient conditions to maintain closed-loop stability both in the presence and absence of cyberattacks (i.e., it meets Definition 4). For developing this implementation strategy, we again use LMPC because the a priori characterizable region ${\mathrm{\Omega}}_{{\rho}_{1}}$ within which LMPC maintains the process state during normal operation can be beneficial for developing a controller implementation strategy that guarantees that Definition 4 is met (in general, the results of this work suggest that theory-based control designs may be important for allowing cyberattack-resilient control designs to be developed, indicating that an important direction of future research may be making theory-based control designs easier to use in an industrial setting). The implementation strategy proposed is as follows:

Step 1. Given $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}\subset {\mathrm{\Omega}}_{{\rho}^{\prime}}\subset X$, apply ${u}_{s}$ for ${N}_{1}$ sampling periods. Go to Step 2.

Step 2. Utilize an LMPC with the form in Equations (14)–(20) to control the process of Equation (91) for ${N}_{2}$ sampling periods. Go to Step 3.

Step 3. Apply ${u}_{s}$ for ${N}_{1}$ sampling periods. Return to Step 2.

Characterizations of ${N}_{1}$ and ${N}_{2}$ that allow closed-loop stability of the system of Equation (91) to be guaranteed, even in the presence of cyberattacks and sufficiently small disturbances, under this implementation strategy are presented in the next section.
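The three-step strategy above can be sketched as a schedule generator (a Python sketch; the values of ${N}_{1}$ and ${N}_{2}$ below are placeholders, not the characterizations of the next section). The key structural point is that which controller is active depends only on the sampling-period counter, never on the (possibly falsified) state measurement, so the schedule itself cannot be manipulated through the sensors.

```python
# Sketch of Steps 1-3: u_s for N1 sampling periods, then the LMPC for N2 periods,
# repeating indefinitely (truncated here to a finite horizon of sampling periods).

def controller_schedule(N1, N2, total_periods):
    """Return the active controller label for each sampling period k = 0, 1, ..."""
    labels = []
    k = 0
    while k < total_periods:
        labels += ["u_s"] * min(N1, total_periods - k)   # Step 1 / Step 3
        k += N1
        labels += ["LMPC"] * min(N2, total_periods - k)  # Step 2
        k += N2
    return labels[:total_periods]

schedule = controller_schedule(N1=3, N2=2, total_periods=8)
print(schedule)  # u_s for 3 periods, LMPC for 2, then u_s again, ...
```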

#### 5.4.2. Stability Analysis of Open-Loop Control Integrated with LMPC

This section presents the conditions under which closed-loop stability of the system of Equation (91) under the implementation strategy in Section 5.4.1 is guaranteed in both the presence and absence of a cyberattack that provides false state measurements ${x}_{f}\in {\mathrm{\Omega}}_{{\rho}_{1}}$ at every sampling time (where the notation ${x}_{f}$ represents a falsified sensor signal that, in general, can be different at each sampling time). The results are presented in a theorem that relies on the following proposition.

**Proposition 4.**

Ref. [62] Consider ${u}_{s}$ for the system of Equation (91) such that the inequalities of Equations (92) and (93) are met with Lyapunov function ${V}^{\prime}(\cdot )={V}_{1}(\cdot )$. If ${\rho}^{\prime}>{\rho}_{min}^{\prime}>{\rho}_{s}^{\prime}$, and $\theta >0$, $\Delta >0$, and ${\epsilon}_{w}^{\prime}>0$ satisfy:

$$-{\alpha}_{7}\left({\alpha}_{6}^{-1}\left({\rho}_{s}^{\prime}\right)\right)+{L}_{w,1}^{\prime}\theta \le -{\epsilon}_{w}^{\prime}/\Delta $$

then $\forall x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}^{\prime}}\setminus {\mathrm{\Omega}}_{{\rho}_{s}^{\prime}}$,

$${V}^{\prime}\left(x\left(t\right)\right)\le {V}^{\prime}\left(x\left({t}_{k}\right)\right)$$

for $t\in [{t}_{k},{t}_{k+1})$ and $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}^{\prime}}$. Furthermore, if ${\rho}_{min}^{\prime}$ is defined as follows:

$${\rho}_{min}^{\prime}=max\{{V}^{\prime}\left(x(t+\Delta )\right):{V}^{\prime}\left(x\left(t\right)\right)\le {\rho}_{s}^{\prime}\}$$

then the closed-loop state is ultimately bounded in ${\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ in the sense that:

$$\underset{t\to \infty}{lim\;sup}\left|x\left(t\right)\right|\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$$

**Theorem**

**2.**

Consider the system of Equation (91) under the implementation strategy of Section 5.4.1 based on controllers ${u}_{s}$ and ${h}_{1}\left(x\right)$ that satisfy Equations (92) and (93) and Equations (2)–(5), respectively, and consider that the conditions in Proposition 4 hold, as well as those in Proposition 3 and Equation (78) with $i=1$. If $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, ${\mathrm{\Omega}}_{{\rho}_{s}^{\prime}}\subset {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}\subset {\mathrm{\Omega}}_{{\rho}_{e,1}}\subset {\mathrm{\Omega}}_{{\rho}_{1}}\subset {\mathrm{\Omega}}_{{\rho}^{\prime}}$, ${V}^{\prime}(\cdot)={V}_{1}(\cdot)$, $N\ge 1$, ${N}_{1}=\lceil \frac{({\rho}_{1}-{\rho}_{min}^{\prime})}{{\epsilon}_{w}^{\prime}}\rceil $, and ${N}_{2}=\lfloor \frac{({\rho}_{1}-{\rho}_{min}^{\prime})}{\left({\alpha}_{4,1}\left({\alpha}_{5}^{-1}\left({\rho}_{1}\right)\right)\right)M\Delta}\rfloor $, then the state $x\left(t\right)$ of the closed-loop system is always bounded in ${\mathrm{\Omega}}_{{\rho}_{1}}$, $\forall \, t\ge 0$, regardless of the value of $\tilde{x}\left({t}_{k}\right)$ in Equation (16), $\forall \, k\ge 0$, if $\tilde{x}\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ when Equations (14)–(20) are used at a sampling time for computing the control action applied to the process according to the implementation strategy in Section 5.4.1.

**Proof.**

The proof consists of four parts. In the first part, feasibility of the LMPC of Equations (14)–(20) at every sampling time in which it is used according to the implementation strategy in Section 5.4.1 will be demonstrated, regardless of whether the state measurements provided to the LMPC in Equation (16) are accurate or falsified, if they are within ${\mathrm{\Omega}}_{{\rho}_{1}}$. The second part will demonstrate that for any $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $x\left({t}_{k+{N}_{1}}\right)\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ when ${u}_{s}$ is used for ${N}_{1}$ sampling periods. The third part demonstrates that if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ and the LMPC of Equations (14)–(20) is used for the next ${N}_{2}$ sampling periods to control the system of Equation (91) with potentially falsified state measurements, then $x\left({t}_{k+{N}_{2}}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$. The fourth part combines the results of the prior three parts to demonstrate that the implementation strategy of Section 5.4.1 guarantees that the closed-loop state remains in ${\mathrm{\Omega}}_{{\rho}_{1}}$ at all times, whether or not cyberattacks which provide falsified state measurements occur.

Part 1. When the input ${u}_{s}$ is applied to the system of Equation (91) according to the implementation strategy in Section 5.4.1, no optimization problem is solved, and therefore there is no feasibility issue with using ${u}_{s}$ at ${t}_{k}$. However, if the LMPC of Equations (14)–(20) is used, then if the state measurement $\tilde{x}\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ (regardless of whether $\tilde{x}\left({t}_{k}\right)$ equals the true state measurement $x\left({t}_{k}\right)$ or a falsified state measurement ${x}_{f}\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$), ${h}_{1}\left(\tilde{x}\left({t}_{q}\right)\right)$, $q=k,\dots ,k+N-1$, $t\in [{t}_{q},{t}_{q+1})$, is a feasible solution to all constraints of the optimization problem when $\tilde{x}\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{e,1}}$ or when $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{e,1}}$ for the reasons noted in the proof of Part 1 of Theorem 1. While ${x}_{f}$ can always be chosen to be in ${\mathrm{\Omega}}_{{\rho}_{1}}$ to guarantee feasibility when the LMPC is used in computing control actions, the proof that $x\left({t}_{k}\right)$ is always in ${\mathrm{\Omega}}_{{\rho}_{1}}$ when the LMPC is used (so that the feasibility guarantees at each sampling time hold when no cyberattack occurs at ${t}_{k}$) will be developed in subsequent parts of this proof.

Part 2. To demonstrate that for any $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $x\left({t}_{k+{N}_{1}}\right)\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$, we look at the change in the value of ${V}^{\prime}$ along the closed-loop state trajectory of the system of Equation (91) as follows:

$$\begin{array}{cc}\hfill {\dot{V}}^{\prime}\left(x\left(t\right)\right)=& \frac{\partial {V}^{\prime}\left(x\left(t\right)\right)}{\partial x}{f}_{as}(x\left(t\right),{u}_{s},w\left(t\right))+\frac{\partial {V}^{\prime}\left(x\left(t\right)\right)}{\partial x}{f}_{as}(x\left(t\right),{u}_{s},0)-\frac{\partial {V}^{\prime}\left(x\left(t\right)\right)}{\partial x}{f}_{as}(x\left(t\right),{u}_{s},0)\hfill \\ & \le -{\alpha}_{7}\left(\left|x\left(t\right)\right|\right)+\left|\frac{\partial {V}^{\prime}\left(x\left(t\right)\right)}{\partial x}{f}_{as}(x\left(t\right),{u}_{s},w)-\frac{\partial {V}^{\prime}\left(x\left(t\right)\right)}{\partial x}{f}_{as}(x\left(t\right),{u}_{s},0)\right|\hfill \\ & \le -{\alpha}_{7}\left(\left|x\left(t\right)\right|\right)+{L}_{w,1}^{\prime}\theta \hfill \end{array}\tag{98}$$

which follows from Equations (93) and (7) (since ${V}^{\prime}={V}_{1}$ and systems of the form of Equation (91) are members of the class of Equation (1)), and the bound on w. If we consider that $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{s}^{\prime}}$, then from Equation (92), ${\alpha}_{6}^{-1}\left({\rho}_{s}^{\prime}\right)\le \left|x\left(t\right)\right|$ such that the upper bound on ${\dot{V}}^{\prime}\left(x\left(t\right)\right)$ is determined as follows:

$${\dot{V}}^{\prime}\left(x\left(t\right)\right)\le -{\alpha}_{7}\left({\alpha}_{6}^{-1}\left({\rho}_{s}^{\prime}\right)\right)+{L}_{w,1}^{\prime}\theta \tag{99}$$

If Equation (94) holds, then $\frac{d{V}^{\prime}}{dt}\le -{\epsilon}_{w}^{\prime}/\Delta $. Integrating this equation gives:

$${V}^{\prime}\left(x\left(t\right)\right)\le {V}^{\prime}\left(x\left({t}_{k}\right)\right)-\frac{{\epsilon}_{w}^{\prime}(t-{t}_{k})}{\Delta}\tag{100}$$

for $t\ge {t}_{k}$ while $x\left(t\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}/{\mathrm{\Omega}}_{{\rho}_{s}^{\prime}}$.

We are interested in the amount of time that it would take to drive the closed-loop state from any $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$ into ${\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ using ${u}_{s}$. In a worst case, ${V}^{\prime}\left(x\left({t}_{k}\right)\right)={V}_{1}\left(x\left({t}_{k}\right)\right)={\rho}_{1}$, and we would like ${V}^{\prime}$ at t to be ${\rho}_{min}^{\prime}$. From Equation (100), the worst-case time ${t}_{WC}$ that it would take to drive $x\left({t}_{k}\right)$ from the boundary of ${\mathrm{\Omega}}_{{\rho}_{1}}$ to the boundary of ${\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ using ${u}_{s}$ is ${t}_{WC}=\frac{({\rho}_{1}-{\rho}_{min}^{\prime})\Delta}{{\epsilon}_{w}^{\prime}}$. However, ${t}_{WC}$ may not be an integer multiple of a sampling period; to guarantee that at least the worst-case amount of time passes after ${t}_{k}$ during which ${u}_{s}$ is applied to the process, ${N}_{1}=\lceil \frac{{t}_{WC}}{\Delta}\rceil $ is the number of sampling periods throughout which ${u}_{s}$ must be applied to guarantee that for any $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $x\left({t}_{k+{N}_{1}}\right)\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$.
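The ${N}_{1}$ computation above is a one-line ceiling operation; the sketch below works it through with illustrative parameter values that are not from the paper (the function name and the numbers are hypothetical).

```python
import math

def n1_periods(rho_1, rho_min_prime, eps_w_prime, Delta):
    # Worst-case time to drive V' from rho_1 down to rho_min' under u_s,
    # given dV'/dt <= -eps_w'/Delta while outside Omega_{rho_s'}.
    t_wc = (rho_1 - rho_min_prime) * Delta / eps_w_prime
    # Round up to whole sampling periods: N1 = ceil(t_WC / Delta), which
    # equals ceil((rho_1 - rho_min') / eps_w') since Delta cancels.
    return math.ceil(t_wc / Delta)

print(n1_periods(10.0, 2.0, 1.5, 0.1))  # 6 sampling periods of u_s
```

Note that $\Delta$ cancels in the ratio, matching the theorem's formula ${N}_{1}=\lceil ({\rho}_{1}-{\rho}_{min}^{\prime})/{\epsilon}_{w}^{\prime}\rceil $.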

Part 3. We next demonstrate that if $x\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$, it will not exit ${\mathrm{\Omega}}_{{\rho}_{1}}$ within ${N}_{2}$ sampling periods under any input within the input bounds (i.e., under any input which the LMPC of Equations (14)–(20) may compute in the presence or absence of cyberattacks). Specifically, the following inequality holds for the time derivative of ${V}^{\prime}$ along the closed-loop state trajectory of the system of Equation (91) for any $x\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $u\in U$, and $w\in W$:

$$\begin{array}{cc}\hfill \frac{\partial {V}^{\prime}\left(x\right)}{\partial x}{f}_{as}(x,u,w)& \le \left|\frac{\partial {V}^{\prime}\left(x\right)}{\partial x}{f}_{as}(x,u,w)\right|\hfill \\ & \le \left|\frac{\partial {V}^{\prime}\left(x\right)}{\partial x}\right|\left|{f}_{as}(x,u,w)\right|\hfill \\ & \le {\alpha}_{4,1}\left(\left|x\right|\right)M\hfill \\ & \le {\alpha}_{4,1}\left({\alpha}_{5}^{-1}\left({\rho}_{1}\right)\right)M\hfill \end{array}\tag{101}$$

which follows from Equations (4) and (8) (${f}_{as}$ is a member of the class of systems of Equation (1)), Equation (92), and ${V}^{\prime}={V}_{1}$. The result of Equation (101) can be integrated to give:

$${V}^{\prime}\left(x\left(t\right)\right)\le {V}^{\prime}\left(x\left({t}_{k}\right)\right)+{\alpha}_{4,1}\left({\alpha}_{5}^{-1}\left({\rho}_{1}\right)\right)M(t-{t}_{k})\tag{102}$$

for $t\ge {t}_{k}$.

To find the shortest possible time that it would take for a sequence of inputs $u\left(t\right)\in U$ applied in sample-and-hold to drive the closed-loop state to the border of ${\mathrm{\Omega}}_{{\rho}_{1}}$, we compute t in Equation (102) if ${V}^{\prime}\left(x\left({t}_{k}\right)\right)={\rho}_{min}^{\prime}$ and ${V}^{\prime}\left(x\left({t}_{ST}\right)\right)={\rho}_{1}$, where ${t}_{ST}$ denotes the first possible time at which ${V}^{\prime}\left(x\left(t\right)\right)={\rho}_{1}$. This gives a shortest time of ${t}_{ST}=\frac{({\rho}_{1}-{\rho}_{min}^{\prime})}{\left({\alpha}_{4,1}\left({\alpha}_{5}^{-1}\left({\rho}_{1}\right)\right)\right)M}$. However, this may not be an integer multiple of a sampling period, so that the maximum number of sampling periods over which the LMPC of Equations (14)–(20) can be used in the implementation strategy of Section 5.4.1 while guaranteeing closed-loop stability even in the presence of cyberattacks on the sensor measurements is ${N}_{2}=\lfloor \frac{{t}_{ST}}{\Delta}\rfloor $.
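Analogously to the ${N}_{1}$ computation, ${N}_{2}$ is a floor of a ratio of escape time to sampling period. The sketch below uses illustrative values that are not from the paper; `alpha_bound` stands in for the constant ${\alpha}_{4,1}\left({\alpha}_{5}^{-1}\left({\rho}_{1}\right)\right)$.

```python
import math

def n2_periods(rho_1, rho_min_prime, alpha_bound, M, Delta):
    # Shortest time for any admissible input to drive V' from rho_min'
    # up to rho_1, given dV'/dt <= alpha_bound * M throughout Omega_{rho_1}.
    t_st = (rho_1 - rho_min_prime) / (alpha_bound * M)
    # Round down to whole sampling periods: N2 = floor(t_ST / Delta).
    return math.floor(t_st / Delta)

print(n2_periods(10.0, 2.0, 4.0, 5.0, 0.05))  # 8 sampling periods of LMPC
```

Rounding down (rather than up, as for ${N}_{1}$) is what makes the guarantee conservative: the state cannot reach the boundary of ${\mathrm{\Omega}}_{{\rho}_{1}}$ within ${N}_{2}$ sampling periods, and a fast-growing bound can force ${N}_{2}=0$, i.e., permanent open-loop operation.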

Part 4. Finally, we prove the results of Theorem 2 by combining the results of the prior parts of the proof. According to the implementation strategy of Section 5.4.1, for any $x\left({t}_{0}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, ${u}_{s}$ will be applied for ${N}_{1}$ sampling periods. From Part 2 of this proof, this will drive the closed-loop state into ${\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}$ by ${t}_{k+{N}_{1}}$ and also, from Proposition 4, will maintain the closed-loop state in ${\mathrm{\Omega}}_{{\rho}_{1}}$ at all times from Equations (95)–(97). Subsequently, the LMPC of Equations (14)–(20) may be used for ${N}_{2}$ sampling periods. In this case, the closed-loop state will also remain bounded within ${\mathrm{\Omega}}_{{\rho}_{1}}$ from Part 3 of this proof. Then, ${u}_{s}$ will be used again for ${N}_{1}$ sampling periods, and will again maintain the closed-loop state in ${\mathrm{\Omega}}_{{\rho}_{1}}$. This sequence of steps will then continue according to the implementation strategy of Section 5.4.1 such that the closed-loop state will be maintained within ${\mathrm{\Omega}}_{{\rho}_{1}}$ at all times. ☐
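To make the four-part argument concrete, the following toy simulation (a scalar open-loop stable system $\dot{x}=-x+u+w$, which is not the paper's process model; all names and parameter values are illustrative) applies the switching strategy with a worst-case admissible input $u\in [-1,1]$ during the LMPC periods, mimicking an attacker misusing the controller. The state never leaves its initial level set.

```python
# Minimal numerical illustration (hypothetical scalar system, not the paper's
# model): x' = -x + u + w under the Section 5.4.1 switching strategy.
# During "u_s" periods the open-loop input u_s = 0 shrinks |x|; during the
# LMPC periods an attacker may induce the worst admissible input u in [-1, 1].

def simulate(x0, N1, N2, cycles, dt=0.01, steps_per_period=10, w=0.0):
    x = x0
    peak = abs(x0)
    for _ in range(cycles):
        for phase, n in (("u_s", N1), ("attacked", N2)):
            for _ in range(n * steps_per_period):
                if phase == "u_s":
                    u = 0.0  # open-loop stabilizing input
                else:
                    u = 1.0 if x >= 0 else -1.0  # worst case: push |x| up
                x += dt * (-x + u + w)  # explicit Euler step
                peak = max(peak, abs(x))
    return x, peak
```

With, e.g., `simulate(2.0, 5, 2, 20)`, the peak of $|x(t)|$ never exceeds the initial value 2.0 and the final state settles well inside the initial level set, mirroring the boundedness in ${\mathrm{\Omega}}_{{\rho}_{1}}$ claimed by Theorem 2.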

**Remark**

**10.**

Minimal assumptions are made on the trajectory of ${x}_{f}$ over time in the above proof (only that ${x}_{f}\left({t}_{k}\right)\in {\mathrm{\Omega}}_{{\rho}_{1}}$, $\forall {t}_{k}\ge 0$). Therefore, the applied policy can handle attacks in which ${x}_{f}$ changes at each sampling time, regardless of the manner in which it changes, as long as the assumptions are met (e.g., there is no need for separate implementation strategies for different types of sensor attack policies such as surge, bias, geometric, or replay attacks [20,84]). ${u}_{s}$ is also an attack-resistant policy for denial-of-service attacks [46] of any length, and the implementation strategy can handle such attacks if a statement of what the LMPC should do when it is not provided a state measurement at ${t}_{k}$ is added (the proof of Theorem 2 indicates that the controller could choose any $u\in U$ if no sensor signal is provided to it at ${t}_{k}$ when the LMPC should be used; as long as the implementation strategy is followed, closed-loop stability is maintained). Furthermore, the implementation strategy can also be used with closed-loop stability guarantees if ${x}_{f}$ is received at some sampling times and $x\left({t}_{k}\right)$ at others (as both meet the requirement of Theorem 2 that the state measurement must be in ${\mathrm{\Omega}}_{{\rho}_{1}}$). The results also hold if only a partially falsified state measurement is received (i.e., only some components of the state vector are falsified because only some sensors are compromised), as long as the full state measurement vector received by the controller at every sampling time is in ${\mathrm{\Omega}}_{{\rho}_{1}}$. If it is not, this may indicate that a cyberattack is occurring and could trigger the use of ${u}_{s}$ alone, so that closed-loop stability is still guaranteed, though without the potential benefits of trading off ${u}_{s}$ with a feedback controller.

#### 5.4.2. Problems with Integrating Open-Loop Control and LMPC

Despite the guarantees developed in the prior section for open-loop control integrated with LMPC, the fact that open-loop inputs are required and that both ${N}_{1}$ and ${N}_{2}$ depend on the process dynamics (through, for example, ${\epsilon}_{w}^{\prime}$, ${\alpha}_{4,1}$, ${\alpha}_{5}$, and M) indicates that this method has fundamental limitations tied to the process time constants. The open-loop policy removes the benefits of feedback control in terms of speeding up the process response. The values of ${N}_{1}$ and ${N}_{2}$ may be such that the process would essentially always have to operate in open-loop (i.e., ${N}_{1}$ is large and ${N}_{2}$ is zero) to guarantee that no cyberattack can impact closed-loop stability. Open-loop control is not a viable alternative to feedback control as an operating strategy at all times.

Another problem that may occur with the proposed approach is that the region ${\mathrm{\Omega}}_{{\rho}^{\prime}}$ within which ${u}_{s}$ is guaranteed to drive the closed-loop state to the steady-state may be very small. ${V}^{\prime}$ might be adjusted to try to increase the size of ${\mathrm{\Omega}}_{{\rho}^{\prime}}$, but it is not guaranteed that the input ${u}_{s}$ can drive the closed-loop state to the steady-state from a large region around the steady-state, as only local asymptotic stability is implied by Equations (92) and (93). Therefore, the fact that ${\mathrm{\Omega}}_{{\rho}^{\prime}}$ is small may be a fundamental limitation of the system for any ${V}^{\prime}$. Because the results of Theorem 2 require ${\mathrm{\Omega}}_{{\rho}_{1}}\subset {\mathrm{\Omega}}_{{\rho}^{\prime}}$, a small ${\mathrm{\Omega}}_{{\rho}^{\prime}}$ means that ${\mathrm{\Omega}}_{{\rho}_{1}}$ must be small as well, which can significantly limit the potential of the LMPC to enforce a policy that is not steady-state operation or that is economically beneficial compared to steady-state operation. If steady-state operation is desired, a small ${\mathrm{\Omega}}_{{\rho}_{1}}$ means that closed-loop stability is only guaranteed in a small region around the steady-state, requiring small sampling times and small disturbances to maintain the closed-loop state in the resulting small ${\mathrm{\Omega}}_{{\rho}_{s}^{\prime}}\subset {\mathrm{\Omega}}_{{\rho}_{min}^{\prime}}\subset {\mathrm{\Omega}}_{{\rho}_{1}}\subset {\mathrm{\Omega}}_{{\rho}^{\prime}}$ per Equations (94) and (74), which may not be practical for certain processes with larger disturbances or larger computation time requirements that restrict the minimum size of $\Delta $. For this reason as well, the proposed technique, despite the guarantees of Theorem 2, is not likely to pose a viable solution to the cyberattack problem.
Furthermore, the approach only holds for an open-loop stable steady-state; this is overly restrictive as there are many cases where it may be desirable to operate around an open-loop unstable steady-state. It may be necessary to utilize additional assumptions (e.g., that there is an alternative way to obtain a state measurement that is known to be accurate at certain times) to develop cyberattack-resilient controllers in general that meet Definition 4.

#### 5.5. Deterring Sensor Measurement Falsification Cyberattacks on Safety: Perspectives

The prior sections demonstrated that, due to the fundamental nonlinear dynamics considerations which define cyberattacks, concepts for deterring cyberattacks on chemical process control systems that at first seem intuitive may not be proper solutions to the problem. However, the characteristics of proper solutions can be explicitly defined mathematically. Some policies which meet the mathematical definition, such as the policy developed in Section 5.4, may nevertheless be undesirable for some processes under normal operation. A policy like that in Section 5.4 might be considered reasonable once a cyberattack is detected (i.e., it then becomes reasonable to give up the benefits of feedback control). However, the difficulty of predicting the responses of nonlinear systems to changes in the process inputs a priori makes it difficult to assess all cyberattack possibilities during the design of detection policies and thereby ensure that they will not miss any attacks; therefore, there is value in continuing to search for control laws/implementation strategies which are resilient to any cyberattack of a certain type. The results of the prior sections suggest that cyberattack-resilient control designs may need to incorporate special features compared to techniques such as LMPC that do not account for cyberattack-resilience. This may make them more conservative than control designs which do not account for cyberattacks, in the sense that they may not achieve instantaneous profits as large as those of alternative controllers; however, a company could weigh the potential profit loss over time with a cyberattack-resilient controller against potential reductions in information technology-related security costs and the potential economic and human costs of accidents without cyberattack-resilient control when selecting a controller for a process.

The control designs presented in Section 5.2, Section 5.3 and Section 5.4 for investigating the nature of cyberattacks and of cyberattack-resilient control demonstrated several principles that can guide future research. The design in Section 5.2 led to the presentation of a potential cyberattack-development methodology that uses optimization to systematically determine attack policies in terms of both inputs and false sensor measurements. Though only one potential computational technique for cyberattack development was explored, it suggests that cyberattack development for non-intuitive situations, such as large-scale processes under control laws with many constraints, may be approachable computationally rather than requiring trial-and-error; this is critical for enabling research on cyberattack-resilient control designs for the process industries to include simulation case studies. The developments in Section 5.3 demonstrate that randomness that impacts process operation may be achievable with closed-loop stability guarantees as part of a cyberattack prevention policy, and therefore can be considered in developing future designs geared toward addressing Definition 4. Finally, in Section 5.4, we demonstrated that despite the strength of the conditions required to meet Definition 4, it may be possible to develop control laws and implementation strategies that satisfy the definition, particularly by relying on the implementation strategy or on additional assumptions on the process dynamics or instrumentation setup/accurate measurement availability.
For example, though it is not guaranteed in the strategy presented in Section 5.4 that the LMPC of Equations (14)–(20) cannot compute a destabilizing input when ${V}_{1}\left(x\left({t}_{0}\right)\right)={\rho}_{1}$ and a false state measurement in ${\mathrm{\Omega}}_{{\rho}_{1}}$ is provided, the implementation strategy that trades off the use of the LMPC with the open-loop input policy prevents the state from ever reaching a condition where closed-loop stability would be compromised in the face of a cyberattack. It may also be beneficial to consider control designs such as LMPC that are based on theory allowing rigorous guarantees to be made even in the presence of disturbances, particularly from a set of initial conditions that can be characterized a priori, since cyberattack-resilience according to Definition 4 depends on the allowable set of initial conditions for the system.

A final outcome of this work is that it indicates the utility, for studying cybersecurity, of recent theoretical developments from the study of the stability properties of economic model predictive control (EMPC) [85,86,87,88,89,90], which include notions of stability developed for processes operated in a time-varying fashion, even for processes that would be operated at steady-state in the absence of cyberattacks. Analyzing closed-loop stability under cyberattacks requires characterizing the boundedness of the closed-loop state in operating regions in state-space under the attack (in a sense, the state is being manipulated in a time-varying fashion by the attacker), not necessarily the convergence of the state to the steady-state under the attack, as the attacker's goal for a process typically operated at steady-state would involve moving it off of that steady-state. As we consider more complex process designs [91,92] and control designs (in the sense of greater coupling between process states due to process designs and controllers intended to improve efficiency and enhance economics), it may become more difficult to predict all the potential methods by which a cyberattacker may attack a plant, enhancing the need for cyberattack-resilience through process and control design.

## 6. Conclusions

This work developed a comprehensive nonlinear systems characterization of cyberattacks of different kinds on chemical process control systems, which indicated that cyberattacks on control systems in the chemical process industries are first and foremost a chemical engineering problem which should be considered during process and control design. We subsequently focused on a specific type of cyberattack in which sensor measurements to feedback controllers are compromised with the goal of impacting process safety and discussed the nonlinear systems definition of a process system resilient to these types of cyberattacks. We used three control designs to explore the concept of cyberattack-resilience against sensor measurement attacks geared toward impacting process safety and to explore the properties required of controllers for making cyberattack-resilience guarantees. The results indicate that a control design/implementation strategy which can be effective at deterring sensor measurement falsification-based cyberattacks geared toward impacting process safety should: (1) maintain closed-loop stability under normal operating conditions and also guarantee closed-loop stability when inputs that have no relationship to the state measurement are applied to the process; and (2) result in a desirable operating policy (i.e., not open-loop) during normal operation (i.e., in the absence of cyberattacks).

Future work will explore cyberattack-resilient control design for larger-scale, more realistic and complex chemical process models. It will also seek to use the insights on cyberattack-resilient control for nonlinear systems developed in this work to create cyberattack-resilient controllers, and to more thoroughly investigate, in the context of cyberattack-resilience, a range of MPC designs which handle disturbances or measurement noise (e.g., [93,94,95,96,97]). All future work will consider that a defining feature of cyberattacks is that they remove the association between the input physically implemented on the process and the process state, attempting to make the controller a vehicle for computing a problematic process input (i.e., misusing the controller) rather than using the controller formulation to maintain closed-loop stability in the case that state measurements are falsified.

## Funding

Financial support from Wayne State University is gratefully acknowledged.

## Conflicts of Interest

The author declares no conflict of interest.

## References

- Leveson, N.G.; Stephanopoulos, G. A system-theoretic, control-inspired view and approach to process safety. AIChE J. **2014**, 60, 2–14. [Google Scholar]
- Mannan, M.S.; Sachdeva, S.; Chen, H.; Reyes-Valdes, O.; Liu, Y.; Laboureur, D.M. Trends and challenges in process safety. AIChE J. **2015**, 61, 3558–3569. [Google Scholar]
- Venkatasubramanian, V. Systemic failures: Challenges and opportunities in risk management in complex systems. AIChE J. **2011**, 57, 2–9. [Google Scholar]
- Albalawi, F.; Durand, H.; Christofides, P.D. Process operational safety via model predictive control: Recent results and future research directions. Comput. Chem. Eng. **2018**, 114, 171–190. [Google Scholar]
- Albalawi, F.; Durand, H.; Alanqar, A.; Christofides, P.D. Achieving operational process safety via model predictive control. J. Loss Prev. Process Ind. **2018**, 53, 74–88. [Google Scholar]
- Albalawi, F.; Durand, H.; Christofides, P.D. Process operational safety using model predictive control based on a process Safeness Index. Comput. Chem. Eng. **2017**, 104, 76–88. [Google Scholar]
- Zhang, Z.; Wu, Z.; Durand, H.; Albalawi, F.; Christofides, P.D. On integration of feedback control and safety systems: Analyzing two chemical process applications. Chem. Eng. Res. Des. **2018**, 132, 616–626. [Google Scholar]
- Carson, J.M.; Açıkmeşe, B.; Murray, R.M.; MacMartin, D.G. A robust model predictive control algorithm augmented with a reactive safety mode. Automatica **2013**, 49, 1251–1260. [Google Scholar]
- Wu, Z.; Durand, H.; Christofides, P.D. Safe economic model predictive control of nonlinear systems. Syst. Control Lett. **2018**, 118, 69–76. [Google Scholar]
- Wieland, P.; Allgöwer, F. Constructive Safety Using Control Barrier Functions. IFAC Proc. Vol. **2007**, 40, 462–467. [Google Scholar]
- Braun, P.; Kellett, C.M. On (the existence of) Control Lyapunov Barrier Functions. 2017. Available online: https://epub.uni-bayreuth.de/3522/ (accessed on 10 August 2018).
- Shahnazari, H.; Mhaskar, P. Distributed fault diagnosis for networked nonlinear uncertain systems. Comput. Chem. Eng. **2018**, 115, 22–33. [Google Scholar]
- Shahnazari, H.; Mhaskar, P. Actuator and sensor fault detection and isolation for nonlinear systems subject to uncertainty. Int. J. Robust Nonlinear Control **2018**, 28, 1996–2013. [Google Scholar]
- Yin, X.; Liu, J. Distributed output-feedback fault detection and isolation of cascade process networks. AIChE J. **2017**, 63, 4329–4342. [Google Scholar]
- Alanqar, A.; Durand, H.; Christofides, P.D. Fault-Tolerant Economic Model Predictive Control Using Error-Triggered Online Model Identification. Ind. Eng. Chem. Res. **2017**, 56, 5652–5667. [Google Scholar]
- Demetriou, M.A.; Armaou, A. Dynamic online nonlinear robust detection and accommodation of incipient component faults for nonlinear dissipative distributed processes. Int. J. Robust Nonlinear Control **2012**, 22, 3–23. [Google Scholar]
- Xue, D.; El-Farra, N.H. Resource-aware fault accommodation in spatially-distributed processes with sampled-data networked control systems. In Proceedings of the American Control Conference, Seattle, WA, USA, 24–26 May 2017; pp. 1809–1814. [Google Scholar]
- Xue, D.; El-Farra, N.H. Actuator fault-tolerant control of networked distributed processes with event-triggered sensor-controller communication. In Proceedings of the American Control Conference, Boston, MA, USA, 6–8 July 2016; pp. 1661–1666. [Google Scholar]
- Smith, R.E. Elementary Information Security; Jones & Bartlett Learning, LLC: Burlington, MA, USA, 2016. [Google Scholar]
- Cárdenas, A.A.; Amin, S.; Lin, Z.S.; Huang, Y.L.; Huang, C.Y.; Sastry, S. Attacks against process control systems: Risk assessment, detection, and response. In Proceedings of the ACM Asia Conference on Computer & Communications Security, Hong Kong, China, 22–24 March 2011. [Google Scholar]
- Greenberg, A. How an Entire Nation Became Russia’s Test Lab for Cyberwar. 2017. Available online: https://www.wired.com/story/russian-hackers-attack-ukraine/ (accessed on 11 July 2018).
- Clark, R.M.; Panguluri, S.; Nelson, T.D.; Wyman, R.P. Protecting drinking water utilities from cyberthreats. J. Am. Water Works Assoc. **2017**, 109, 50–58. [Google Scholar]
- Langner, R. Stuxnet: Dissecting a Cyberwarfare Weapon. IEEE Secur. Priv. **2011**, 9, 49–51. [Google Scholar]
- Perlroth, N.; Krauss, C. A Cyberattack in Saudi Arabia Had a Deadly Goal. Experts Fear Another Try. 2018. Available online: https://www.nytimes.com/2018/03/15/technology/saudi-arabia-hacks-cyberattacks.html (accessed on 11 March 2018).
- Groll, E. Cyberattack Targets Safety System at Saudi Aramco. 2017. Available online: https://foreignpolicy.com/2017/12/21/cyber-attack-targets-safety-system-at-saudi-aramco/ (accessed on 11 July 2018).
- Liu, Y.; Sarabi, A.; Zhang, J.; Naghizadeh, P.; Karir, M.; Bailey, M.; Liu, M. Cloudy with a Chance of Breach: Forecasting Cyber Security Incidents. In Proceedings of the USENIX Security Symposium, Washington, DC, USA, 12–14 August 2015; pp. 1009–1024. [Google Scholar]
- Solomon, M.G.; Kim, D.; Carrell, J.L. Fundamentals of Communications and Networking; Jones & Bartlett Publishers: Burlington, MA, USA, 2014. [Google Scholar]
- McLaughlin, S.; Konstantinou, C.; Wang, X.; Davi, L.; Sadeghi, A.R.; Maniatakos, M.; Karri, R. The Cybersecurity Landscape in Industrial Control Systems. Proc. IEEE **2016**, 104, 1039–1057. [Google Scholar]
- Hull, J.; Khurana, H.; Markham, T.; Staggs, K. Staying in control: Cybersecurity and the modern electric grid. IEEE Power Energy Mag. **2012**, 10, 41–48. [Google Scholar]
- Ginter, A. Unidirectional Security Gateways: Stronger than Firewalls. In Proceedings of the ICALEPCS, San Francisco, CA, USA, 6–11 October 2013; pp. 1412–1414. [Google Scholar]
- Khorrami, F.; Krishnamurthy, P.; Karri, R. Cybersecurity for Control Systems: A Process-Aware Perspective. IEEE Des. Test **2016**, 33, 75–83. [Google Scholar]
- He, D.; Chan, S.; Zhang, Y.; Wu, C.; Wang, B. How Effective Are the Prevailing Attack-Defense Models for Cybersecurity Anyway? IEEE Intell. Syst. **2014**, 29, 14–21. [Google Scholar]
- Ten, C.W.; Liu, C.C.; Manimaran, G. Vulnerability Assessment of Cybersecurity for SCADA Systems. IEEE Trans. Power Syst. **2008**, 23, 1836–1846. [Google Scholar]
- Pang, Z.H.; Liu, G.P. Design and implementation of secure networked predictive control systems under deception attacks. IEEE Trans. Control Syst. Technol. **2012**, 20, 1334–1342. [Google Scholar]
- Rieger, C.; Zhu, Q.; Başar, T. Agent-based cyber control strategy design for resilient control systems: Concepts, architecture and methodologies. In Proceedings of the 5th International Symposium on Resilient Control Systems, Salt Lake City, UT, USA, 14–16 August 2012; pp. 40–47. [Google Scholar]
- Chavez, A.R.; Stout, W.M.S.; Peisert, S. Techniques for the dynamic randomization of network attributes. In Proceedings of the IEEE International Carnahan Conference on Security Technology, Taipei, Taiwan, 21–24 September 2015; pp. 1–6. [Google Scholar]
- Linda, O.; Manic, M.; McQueen, M. Improving control system cyber-state awareness using known secure sensor measurements. In Critical Information Infrastructures Security. CRITIS 2012; Hämmerli, B.M., Kalstad Svendsen, N., Lopez, J., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7722, pp. 46–58. [Google Scholar]
- Plosz, S.; Farshad, A.; Tauber, M.; Lesjak, C.; Ruprechter, T.; Pereira, N. Security vulnerabilities and risks in industrial usage of wireless communication. In Proceedings of the IEEE International Conference on Emerging Technology and Factory Automation, Barcelona, Spain, 16–19 September 2014; pp. 1–8. [Google Scholar]
- Lopez, J.; Zhou, J. (Eds.) Wireless Sensor Network Security; IOS Press: Amsterdam, The Netherlands, 2008. [Google Scholar]
- Xu, L.D.; He, W.; Li, S. Internet of Things in Industries: A Survey. IEEE Trans. Ind. Inform. **2014**, 10, 2233–2243. [Google Scholar]
- Almorsy, M.; Grundy, J.; Müller, I. An analysis of the cloud computing security problem. arXiv, 2016; arXiv:1609.01107. [Google Scholar]
- Rieger, C.G. Notional examples and benchmark aspects of a resilient control system. In Proceedings of the 2010 3rd International Symposium on Resilient Control Systems, Idaho Falls, ID, USA, 10–12 August 2010; pp. 64–71. [Google Scholar]
- Rieger, C.G.; Gertman, D.I.; McQueen, M.A. Resilient control systems: Next generation design research. In Proceedings of the 2009 2nd Conference on Human System Interactions, Catania, Italy, 21–23 May 2009; pp. 632–636. [Google Scholar]
- Wakaiki, M.; Tabuada, P.; Hespanha, J.P. Supervisory control of discrete-event systems under attacks. arXiv, 2017; arXiv:1701.00881. [Google Scholar]
- Bopardikar, S.D.; Speranzon, A.; Hespanha, J.P. An H-infinity approach to stealth-resilient control design. In Proceedings of the 2016 Resilience Week, Chicago, IL, USA, 16–18 August 2016; pp. 56–61. [Google Scholar]
- Amin, S.; Cárdenas, A.A.; Sastry, S.S. Safe and secure networked control systems under denial-of-service attacks. In Hybrid Systems: Computation and Control. HSCC 2009; Majumdar, R., Tabuada, P., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5469, pp. 31–45. [Google Scholar]
- Fawzi, H.; Tabuada, P.; Diggavi, S. Secure Estimation and Control for Cyber-Physical Systems Under Adversarial Attacks. IEEE Trans. Autom. Control
**2014**, 59, 1454–1467. [Google Scholar] - Zhu, Q.; Başar, T. Game-Theoretic Methods for Robustness, Security, and Resilience of Cyberphysical Control Systems: Games-in-Games Principle for Optimal Cross-Layer Resilient Control Systems. IEEE Control Syst.
**2015**, 35, 46–65. [Google Scholar] - Zhu, Q.; Başar, T. Robust and resilient control design for cyber-physical systems with an application to power systems. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 4066–4071. [Google Scholar]
- Zhu, Q.; Bushnell, L.; Başar, T. Resilient distributed control of multi-agent cyber-physical systems. In Control of Cyber-Physical Systems; Tarraf, D., Ed.; Lecture Notes in Control and Information Sciences; Springer: Berlin/Heidelberg, Germany, 2013; Volume 449, pp. 301–316. [Google Scholar]
- Zonouz, S.; Rogers, K.M.; Berthier, R.; Bobba, R.B.; Sanders, W.H.; Overbye, T.J. SCPSE: Security-Oriented Cyber-Physical State Estimation for Power Grid Critical Infrastructures. IEEE Trans. Smart Grid
**2012**, 3, 1790–1799. [Google Scholar] - Zheng, S.; Jiang, T.; Baras, J.S. Robust State Estimation under False Data Injection in Distributed Sensor Networks. In Proceedings of the 2010 IEEE Global Telecommunications Conference, Miami, FL, USA, 6–10 December 2010; pp. 1–5. [Google Scholar]
- Pasqualetti, F.; Dorfler, F.; Bullo, F. Control-Theoretic Methods for Cyberphysical Security: Geometric Principles for Optimal Cross-Layer Resilient Control Systems. IEEE Control Syst.
**2015**, 35, 110–127. [Google Scholar] - Pasqualetti, F.; Dörfler, F.; Bullo, F. Attack Detection and Identification in Cyber-Physical Systems. IEEE Trans. Autom. Control
**2013**, 58, 2715–2729. [Google Scholar] - McLaughlin, S. CPS: Stateful policy enforcement for control system device usage. In Proceedings of the 29th Annual Computer Security Applications Conference, New Orleans, LA, USA, 9–13 December 2013; pp. 109–118. [Google Scholar]
- Melin, A.; Kisner, R.; Fugate, D.; McIntyre, T. Minimum state awareness for resilient control systems under cyber-attack. In Proceedings of the 2012 Future of Instrumentation International Workshop, Gatlinburg, TN, USA, 8–9 October 2012; pp. 1–4. [Google Scholar]
- Qin, S.J.; Badgwell, T.A. A survey of industrial model predictive control technology. Control Eng. Pract.
**2003**, 11, 733–764. [Google Scholar] - Rawlings, J.B. Tutorial overview of model predictive control. IEEE Control Syst.
**2000**, 20, 38–52. [Google Scholar] - Durand, H. State Measurement Spoofing Prevention through Model Predictive Control Design. In Proceedings of the IFAC NMPC-2018, Madison, WI, USA, 19–22 August 2018; pp. 643–648. [Google Scholar]
- Heidarinejad, M.; Liu, J.; Christofides, P.D. Economic model predictive control of nonlinear process systems using Lyapunov techniques. AIChE J.
**2012**, 58, 855–870. [Google Scholar] - Mhaskar, P.; El-Farra, N.H.; Christofides, P.D. Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control. Syst. Control Lett.
**2006**, 55, 650–659. [Google Scholar] - Muñoz de la Peña, D.; Christofides, P.D. Lyapunov-Based Model Predictive Control of Nonlinear Systems Subject to Data Losses. IEEE Trans. Autom. Control
**2008**, 53, 2076–2089. [Google Scholar] - Zhu, B.; Joseph, A.; Sastry, S. A taxonomy of cyber attacks on SCADA systems. In Proceedings of the 2011 IEEE International Conferences on Internet of Things, and Cyber, Physical and Social Computing, Dalian, China, 19–22 October 2011; pp. 380–388. [Google Scholar]
- Krotofil, M.; Cárdenas, A.A. Resilience of process control systems to cyber-physical attacks. In Proceedings of the Nordic Conference on Secure IT Systems, Ilulissat, Greenland, 18–21 October 2013; pp. 166–182. [Google Scholar]
- Gentile, M.; Rogers, W.J.; Mannan, M.S. Development of an inherent safety index based on fuzzy logic. AIChE J.
**2003**, 49, 959–968. [Google Scholar] - Heikkilä, A.M.; Hurme, M.; Järveläinen, M. Safety considerations in process synthesis. Comput. Chem. Eng.
**1996**, 20, S115–S120. [Google Scholar] - Khan, F.I.; Amyotte, P.R. How to Make Inherent Safety Practice a Reality. Can. J. Chem. Eng.
**2003**, 81, 2–16. [Google Scholar] - Gupta, J.P.; Edwards, D.W. Inherently Safer Design—Present and Future. Process Saf. Environ. Prot.
**2002**, 80, 115–125. [Google Scholar] - Kletz, T.A. Inherently safer plants. Plant/Oper. Prog.
**1985**, 4, 164–167. [Google Scholar] - Li, L.; Hu, B.; Lemmon, M. Resilient event triggered systems with limited communication. In Proceedings of the 2012 51st IEEE Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012; pp. 6577–6582. [Google Scholar]
- Melin, A.M.; Ferragut, E.M.; Laska, J.A.; Fugate, D.L.; Kisner, R. A mathematical framework for the analysis of cyber-resilient control systems. In Proceedings of the 2013 6th International Symposium on Resilient Control Systems, San Francisco, CA, USA, 13–15 August 2013; pp. 13–18. [Google Scholar]
- Chandy, S.E.; Rasekh, A.; Barker, Z.A.; Shafiee, M.E. Cyberattack Detection using Deep Generative Models with Variational Inference. arXiv, 2018; arXiv:1805.12511. [Google Scholar]
- Rosich, A.; Voos, H.; Li, Y.; Darouach, M. A model predictive approach for cyber-attack detection and mitigation in control systems. In Proceedings of the IEEE Conference on Decision and Control, Florence, Italy, 10–13 December 2013; pp. 6621–6626. [Google Scholar]
- Tajer, A.; Kar, S.; Poor, H.V.; Cui, S. Distributed joint cyber attack detection and state recovery in smart grids. In Proceedings of the IEEE International Conference on Smart Grid Communications, Brussels, Belgium, 17–20 October 2011; pp. 202–207. [Google Scholar]
- Kiss, I.; Genge, B.; Haller, P. A clustering-based approach to detect cyber attacks in process control systems. In Proceedings of the IEEE 13th International Conference on Industrial Informatics, Cambridge, UK, 22–24 July 2015; pp. 142–148. [Google Scholar]
- Valdes, A.; Cheung, S. Intrusion Monitoring in Process Control Systems. In Proceedings of the 42nd Hawaii International Conference on System Sciences, Big Island, HI, USA, 5–8 January 2009; pp. 1–7. [Google Scholar]
- Wu, Z.; Albalawi, F.; Zhang, J.; Zhang, Z.; Durand, H.; Christofides, P.D. Detecting and Handling Cyber-attacks in Model Predictive Control of Chemical Processes. Mathematics
**2018**, accepted. [Google Scholar] - Ricker, N.L. Model predictive control of a continuous, nonlinear, two-phase reactor. J. Process Control
**1993**, 3, 109–123. [Google Scholar] - Alanqar, A.; Ellis, M.; Christofides, P.D. Economic model predictive control of nonlinear process systems using empirical models. AIChE J.
**2015**, 61, 816–830. [Google Scholar] - Lin, Y.; Sontag, E.D. A universal formula for stabilization with bounded controls. Syst. Control Lett.
**1991**, 16, 393–397. [Google Scholar] - Grossmann, I.E. Review of nonlinear mixed-integer and disjunctive programming techniques. Optim. Eng.
**2002**, 3, 227–252. [Google Scholar] - Mhaskar, P.; Liu, J.; Christofides, P.D. Fault-Tolerant Process Control: Methods and Applications; Springer: London, UK, 2013. [Google Scholar]
- Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program.
**2006**, 106, 25–57. [Google Scholar] - Mo, Y.; Sinopoli, B. Secure control against replay attacks. In Proceedings of the 2009 47th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 30 September–2 October 2009; pp. 911–918. [Google Scholar]
- Ellis, M.; Durand, H.; Christofides, P.D. A tutorial review of economic model predictive control methods. J. Process Control
**2014**, 24, 1156–1178. [Google Scholar] - Rawlings, J.B.; Angeli, D.; Bates, C.N. Fundamentals of economic model predictive control. In Proceedings of the Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012; pp. 3851–3861. [Google Scholar]
- Faulwasser, T.; Korda, M.; Jones, C.N.; Bonvin, D. Turnpike and dissipativity properties in dynamic real-time optimization and economic MPC. In Proceedings of the IEEE 53rd Annual Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 2734–2739. [Google Scholar]
- Müller, M.A.; Grüne, L.; Allgöwer, F. On the role of dissipativity in economic model predictive control. IFAC-PapersOnLine
**2015**, 48, 110–116. [Google Scholar] - Huang, R.; Harinath, E.; Biegler, L.T. Lyapunov stability of economically oriented NMPC for cyclic processes. J. Process Control
**2011**, 21, 501–509. [Google Scholar] - Omell, B.P.; Chmielewski, D.J. IGCC power plant dispatch using infinite-horizon economic model predictive control. Ind. Eng. Chem. Res.
**2013**, 52, 3151–3164. [Google Scholar] - Amini-Rankouhi, A.; Huang, Y. Prediction of maximum recoverable mechanical energy via work integration: A thermodynamic modeling and analysis approach. AIChE J.
**2017**, 63, 4814–4826. [Google Scholar] - Tula, A.K.; Babi, D.K.; Bottlaender, J.; Eden, M.R.; Gani, R. A computer-aided software-tool for sustainable process synthesis-intensification. Comput. Chem. Eng.
**2017**, 105, 74–95. [Google Scholar] - Limon, D.; Alamo, T.; Salas, F.; Camacho, E. Input to state stability of min–max MPC controllers for nonlinear systems with bounded uncertainties. Automatica
**2006**, 42, 797–803. [Google Scholar] - Campo, P.J.; Morari, M. Robust Model Predictive Control. In Proceedings of the American Control Conference, Minneapolis, MN, USA, 10–12 June 1987; pp. 1021–1026. [Google Scholar]
- Pannocchia, G.; Gabiccini, M.; Artoni, A. Offset-free MPC explained: Novelties, subtleties, and applications. IFAC-PapersOnLine
**2015**, 48, 342–351. [Google Scholar] [CrossRef] - Ellis, M.; Zhang, J.; Liu, J.; Christofides, P.D. Robust moving horizon estimation based output feedback economic model predictive control. Syst. Control Lett.
**2014**, 68, 101–109. [Google Scholar] - Das, B.; Mhaskar, P. Lyapunov-based offset-free model predictive control of nonlinear process systems. Can. J. Chem. Eng.
**2015**, 93, 471–478. [Google Scholar]

**Figure 2.** Closed-loop state trajectory in state-space over 10 sampling periods with the falsified state measurements determined through optimization applied at every sampling time, in the absence of disturbances.

**Figure 3.** Intersecting stability regions with two different potential initial conditions $x\left({t}_{k}\right)={x}_{a}$ and $x\left({t}_{k}\right)={x}_{b}$.

**Figure 6.** State-space trajectories under the single LMPC for the CSTR of Equations (46) and (47). The figure indicates that the closed-loop trajectory settled on the boundary of ${\mathrm{\Omega}}_{{\rho}_{e,1}}$ to optimize the objective function while meeting the constraints. For simplicity, only one level set for each of the ${n}_{p}$ potential LMPCs is shown (${\mathrm{\Omega}}_{{\rho}_{i}}$ is shown if ${V}_{i}\ne {V}_{1}$, and ${\mathrm{\Omega}}_{{\rho}_{e,i}}$ is shown if ${V}_{i}={V}_{1}$, $i>1$).

**Figure 9.** State-space trajectories under the randomized LMPC implementation strategy for the CSTR of Equations (46) and (47). For simplicity, only one level set for each of the ${n}_{p}$ potential LMPCs is shown (${\mathrm{\Omega}}_{{\rho}_{i}}$ is shown if ${V}_{i}\ne {V}_{1}$, and ${\mathrm{\Omega}}_{{\rho}_{e,i}}$ is shown if ${V}_{i}={V}_{1}$, $i>1$).

**Figure 10.** Scatter plot showing the control law chosen (i in Table 3) in each sampling period by the randomized LMPC implementation strategy.

**Figure 11.** State-space trajectories for all of the situations in Table 4. The numbers in the legend represent the seed values for rng; ‘S’ represents the single LMPC.

**Figure 12.** Trajectories of ${u}_{1}$, ${u}_{2}$, and ${V}_{1}$ under the randomized LMPC implementation strategy for rng(20) (denoted by ‘Randomized’ in the figure) and under the single LMPC (denoted by ‘Single’ in the figure). The value of ${\rho}_{1}$ is denoted by the horizontal line in the plot for the value of ${V}_{1}$. The bottom plot indicates the controller selected by the randomized LMPC implementation strategy at each of the 10 sampling times in the simulation.

**Table 1.** Steady-state values and controller parameters for the Tennessee Eastman process example [78].

Parameter | Value | Unit |
---|---|---|

${N}_{A,s}$ | 44.49999958429348 | kmol |

${N}_{B,s}$ | 13.53296996509594 | kmol |

${N}_{C,s}$ | 36.64788062995841 | kmol |

${N}_{D,s}$ | 110.0 | kmol |

${X}_{1,s}$ | 60.95327313484253 | % |

${X}_{2,s}$ | 25.02232231706676 | % |

${X}_{3,s}$ | 39.25777017606444 | % |

${X}_{4,s}$ | 47.03024823457651 | % |

${u}_{1,s}$ | 60.95327313484253 | % |

${u}_{2,s}$ | 25.02232231706676 | % |

${u}_{3,s}$ | 39.25777017606444 | % |

${V}_{\%,sp}$ | 44.17670682730923 | % |

${F}_{1,s}$ | 201.43 | kmol/h |

${F}_{2,s}$ | 5.62 | kmol/h |

${F}_{3,s}$ | 7.05 | kmol/h |

${F}_{4,s}$ | 100 | kmol/h |

${P}_{s}$ | 2700 | kPa |

${y}_{A3,s}$ | 0.47 | - |

${y}_{B3,s}$ | 0.1429 | - |

${y}_{C3,s}$ | 0.3871 | - |

${K}_{c,1}$ | 0.1 | % h/kmol |

${\tau}_{I,1}$ | 1 | h |

${K}_{c,2}$ | 2 | % |

${\tau}_{I,2}$ | 3 | h |

${K}_{c,3}$ | –0.25 | %/kPa |

${\tau}_{I,3}$ | 1.5 | h |

${K}_{c,4}$ | 0.7 | kmol/kPa·h |

${\tau}_{I,4}$ | 3 | h |

**Table 2.** Parameter values for the CSTR of Equations (46) and (47).

Parameter | Value | Unit |
---|---|---|

V | 1 | m${}^{3}$ |

${T}_{0}$ | 300 | K |

${C}_{p}$ | $0.231$ | kJ/kg·K |

${k}_{0}$ | $8.46\times {10}^{6}$ | m${}^{3}$/h·kmol |

F | 5 | m${}^{3}$/h |

${\rho}_{L}$ | 1000 | kg/m${}^{3}$ |

E | $5\times {10}^{4}$ | kJ/kmol |

${R}_{g}$ | $8.314$ | kJ/kmol·K |

$\Delta H$ | $-1.15\times {10}^{4}$ | kJ/kmol |

**Table 3.** Parameters of the ${n}_{p}$ potential LMPC designs.

i | ${\mathit{P}}_{11}$ | ${\mathit{P}}_{12}$ | ${\mathit{P}}_{22}$ | ${\mathit{\rho}}_{\mathit{i}}$ | ${\mathit{\rho}}_{\mathit{e},\mathit{i}}$ |
---|---|---|---|---|---|

1 | 1200 | 5 | 0.1 | 180 | 144 |

2 | 2000 | –20 | 1 | 180 | 144 |

3 | 1500 | –20 | 10 | 180 | 144 |

4 | 0.2 | 0 | 2000 | 180 | 144 |

5 | 1200 | 5 | 0.1 | 180 | 100 |

6 | 1200 | 5 | 0.1 | 180 | 130 |

7 | 1200 | 5 | 0.1 | 180 | 30 |

**Table 4.** Approximate time after which ${x}_{2}>55$ K for various seed values of rng for the randomized LMPC design subjected to the cyberattack on the sensors determined in Section 5.2.1.

Seed | Time ${\mathit{x}}_{2}>55$ K (h) |
---|---|

5 | 0.0143 |

10 | 0.0148 |

15 | 0.0146 |

20 | 0.0324 |

25 | 0.0146 |

30 | 0.0142 |

35 | 0.0143 |

40 | 0.0147 |

45 | 0.0248 |

50 | 0.0231 |

**Table 5.** Approximate time after which ${x}_{2}>55$ K for various seed values of rng for the randomized LMPC design subjected to a cyberattack on the sensors with ${x}_{1}=0.0632$ kmol/m${}^{3}$ and ${x}_{2}=21.2056$ K.

Seed | Time ${\mathit{x}}_{2}>55$ K (h) |
---|---|

5 | 0.0674 |

10 | 0.0458 |

15 | 0.0555 |

20 | 0.0767 |

25 | 0.0569 |

30 | 0.0418 |

35 | 0.0457 |

40 | 0.0874 |

45 | 0.0580 |

50 | 0.0950 |

© 2018 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).