Kleene Algebra to Compute Invariant Sets of Dynamical Systems

In this paper, we show that a basic fixed point method used to enclose the greatest fixed point in a Kleene algebra allows us to compute inner and outer approximations of invariant-based sets for continuous-time nonlinear dynamical systems. Our contribution is to provide the definitions and theorems that make the link between the theory of invariant sets and the Kleene algebra. This link has never been made before, and it allows us to rigorously compute sets that can be defined as a combination of positive invariant sets. Some illustrative examples show the nice properties of the approach.


Introduction
In this paper, we deal with a dynamical system S defined by the following state equation: S : ẋ(t) = γ(x(t)) (1), where x(t) ∈ R^n is the state vector and γ : R^n → R^n is the evolution function of S [1,2]. Denote by ϕ_γ the flow map of (1); i.e., with the initial vector x_0 = x(0), the system S reaches the state ϕ_γ(t, x_0) at time t. Our goal is to compute invariant sets [3] associated with the system using an algebraic approach, which is new in this context. Moreover, we propose to compute sets that can be expressed as a combination (union, intersection, image by a function, etc.) of invariant sets. In the application section of this paper, we will show why computing such combinations may be important in practice.
In particular, we will show the link between problems that can be expressed in terms of invariant sets and the Kleene algebra, the elements of which are automorphisms of a lattice [4]. We will take advantage of this algebraic structure to derive new efficient algorithms that are able to solve problems involving invariant sets that were not possible to compute with existing methods.
Our approach does not only provide a method to compute invariant sets: it also allows us to compute sets that can be defined as a combination of invariant sets, which is the main contribution. We decided to explore an algebraic approach in order to get an elegant formalization of the problem resolution, as shown in several examples at the end of this paper.
A set A is positive invariant for (1) if we have a ∈ A, t ≥ 0 =⇒ ϕ_γ(t, a) ∈ A.
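Positive invariance can be probed numerically by sampling: the sketch below, which is only illustrative and not the guaranteed method of the paper, integrates trajectories with an explicit Euler scheme (the function names `euler_step` and `stays_inside` are hypothetical) and checks that sampled trajectories never leave the candidate set.

```python
import math

def euler_step(gamma, x, dt):
    """One explicit Euler step of x' = gamma(x)."""
    dx = gamma(x)
    return [xi + dt * di for xi, di in zip(x, dx)]

def stays_inside(gamma, inside, x0, t_end=5.0, dt=1e-2):
    """Necessary (sampled) check of positive invariance along one
    trajectory: the approximate flow phi(t, x0) must remain in the
    set for every sampled time t."""
    x, t = list(x0), 0.0
    while t < t_end:
        if not inside(x):
            return False
        x = euler_step(gamma, x, dt)
        t += dt
    return True

# Example: for gamma(x) = -x, the unit disc is positively invariant;
# every trajectory sampled on the boundary stays inside.
gamma = lambda x: [-x[0], -x[1]]
disc = lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0 + 1e-9
ok = all(stays_inside(gamma, disc, (math.cos(k), math.sin(k)))
         for k in range(8))
print(ok)  # True
```

Such a sampled check can only falsify invariance, never prove it; the rest of the paper is precisely about replacing it with a guaranteed fixed point computation.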
The set of all positive invariant sets is a complete lattice; i.e., the union and the intersection of two positive invariant sets are positive invariant. A consequence is that, given a set A, the notions of the greatest positive invariant set contained in A and of the smallest positive invariant set enclosing A can be defined. For instance, the greatest positive invariant set for (1) included in A is given by Inv+(A) = {a ∈ A | ∀t ≥ 0, ϕ_γ(t, a) ∈ A}. Methods exist to characterize positive invariant sets for specific cases, such as when γ is linear [11][12][13] or discrete time [14]. Outer approximations can be computed by providing a model for approximation [15] or support functions [16]. Moreover, most approximations correspond to convex sets such as ellipsoids, zonotopes [17], or polytopes. These linear-based approaches can be extended to hybrid linear systems [18][19][20].
For continuous-time nonlinear systems, computing invariant sets is much more difficult, and different types of approaches can be extracted from the literature.

• The first approach is based on sampling. It has been used, for instance, by Saint-Pierre [21] to rigorously compute the viability kernel, which is a specific type of controlled invariant set. Bobiti and Lazar [22] also used a sampling-based method for stability verification of both continuous- and discrete-time nonlinear systems. To get guaranteed results, an interval integration is often needed (see, e.g., [23][24][25]).
• The second approach is based on Lyapunov theory [3,26] and is convenient for proving asymptotic stability of problems with an infinite time horizon. The principle is to find a parameter vector p of a Lyapunov-like function V(p, x) such that the set of states violating the decrease condition of V is empty. In such a case, the set S(p) = {x | V(p, x) ≤ 0} defines a positive invariant set. The function generally chosen for V(p, x) is polynomial in the x_i's and linear in the p_j's. When γ is polynomial, this choice for V(p, x) makes it possible to use LMI/SOS/interval resolution techniques, which can be efficient when a low-dimensional approximation exists. This approach relies on assumptions such as the knowledge of an equilibrium point [27] or the polynomial property of the dynamics of the system [28], which is not always realistic.
• The third approach is based on occupation measures, which are adapted to a finite time horizon [27]. Now, it can also consider problems with an infinite time horizon at the price of technical difficulties [29].
• The fourth approach is based on a polygonal decomposition of the state space [30,31] and corresponds to the approach we will consider in this paper. For instance, in [30], a triangulated region yields an index filtration for a Morse decomposition of the flow of the system [32], which approximates the flow arbitrarily closely.
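The Lyapunov condition of the second bullet can be sketched numerically. In this minimal, illustrative example (the names `lie_derivative`, `V_grad`, and the system ẋ = −x are assumptions chosen for clarity, not taken from the paper), the Lie derivative of a candidate V is checked on sampled points of a level set:

```python
import math

def lie_derivative(V_grad, gamma, x):
    """Derivative of V along the flow: Vdot(x) = grad V(x) . gamma(x)."""
    return sum(g * f for g, f in zip(V_grad(x), gamma(x)))

# Candidate V(x) = x1^2 + x2^2 for the illustrative system x' = -x.
gamma = lambda x: [-x[0], -x[1]]
V_grad = lambda x: [2 * x[0], 2 * x[1]]

# If Vdot < 0 everywhere on the level set {V = 1}, trajectories cannot
# cross it outward, so the sublevel set {V <= 1} is positive invariant.
boundary = [(math.cos(0.1 * k), math.sin(0.1 * k)) for k in range(63)]
print(all(lie_derivative(V_grad, gamma, x) < 0 for x in boundary))  # True
```

A real Lyapunov-based tool would verify the sign condition symbolically (LMI/SOS) or with interval arithmetic rather than on samples.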
Different solvers such as SpaceEx [33,34] are available to compute numerically such invariant sets.
The paper proposes to compute inner and outer approximations of invariant sets in the general case where the system is continuous-time and nonlinear. More than that, it introduces, for the first time, an approach based on the Kleene algebra to compute sets that can be defined as operations on invariant sets. A possible application is the path planning and avoid problem [35], where we search for the set of all initial conditions of trajectories starting from a set A and reaching the set B while avoiding the set C. It is not possible to solve this type of problem rigorously using existing approaches. This paper is organized as follows. Section 2 recalls some notions on lattices that will be needed to understand the Kleene algebra presented in Section 3. Then, the specific case where the elements of the Kleene algebra are automorphisms is considered in Section 4. The link with invariant sets of dynamical systems is introduced in Section 5. Some illustrative test cases are given in Section 6. Section 7 concludes the paper.

Lattices
Invariant sets have a lattice structure [36], and this will allow us to formalize our problem in an algebraic form. This section recalls some classical definitions on lattices and provides some illustrations of the basic principles needed to understand our methodology.

Definitions
A lattice (L, ≤, ∧, ∨) is a partially ordered set, closed under least upper and greatest lower bounds [37]. The least upper bound of x and y is called the join and is denoted by x ∨ y. The greatest lower bound is called the meet and is written as x ∧ y.
A lattice E is complete if for all (finite or infinite) subsets A of E, the least upper bound ∨A and the greatest lower bound ∧A belong to E. We define the top and the bottom of E as ⊤ = ∨E and ⊥ = ∧E. A sublattice of a lattice (L, ≤) is a nonempty subset of L that is a lattice with the same meet and join operations ∨ and ∧ as L.
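The definitions above can be made concrete on the simplest complete lattice used later in the paper: subsets ordered by inclusion. The sketch below is illustrative only (the ground set and names are chosen for the example):

```python
from itertools import combinations

# The power set of a finite ground set, ordered by inclusion, is a
# complete lattice: the meet is intersection, the join is union,
# bottom = {} and top = the ground set. Being finite, it is complete.
ground = frozenset({1, 2, 3})
L = [frozenset(c) for r in range(len(ground) + 1)
     for c in combinations(sorted(ground), r)]

meet = lambda a, b: a & b   # greatest lower bound
join = lambda a, b: a | b   # least upper bound
bottom, top = frozenset(), ground

a, b = frozenset({1}), frozenset({2, 3})
print(join(a, b) == top, meet(a, b) == bottom)  # True True
```

For the paper's purposes, L will instead be the (infinite) power set of R^n, and completeness is what makes the fixed points of the next sections exist.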

Machine Lattice
Consider a complete lattice (L, ≤, ∧, ∨). A machine lattice (L_M, ≤, ∧, ∨) of L is a complete sublattice of (L, ≤, ∧, ∨) which is finite (thus, we can store it in the memory of a computer [38]). Moreover, both L and L_M have the same top ⊤ and bottom ⊥. This is illustrated by Figure 1, which can be interpreted as follows.
• The gray square represents L.
• The element k is greater than f, since it is at its top right.
• The grid made with blue dots corresponds to L_M.
• The variables a, b, c, d all belong to L, and we have c = a ∨ b and d = a ∧ b.

• The red polygon P is a sub-lattice of L. Its bottom is ⊥_P = ⊥ and its top is ⊤_P = e.

• The element i ∈ L is inside [⊥_P, ⊤_P]. Thus, there exists an element n in P which corresponds to the smallest element of P that is larger than i. There also exists an element m in P that corresponds to the greatest element of P that is smaller than i.

Remark 1.
In our context, the lattice L will correspond to the set of all subsets of R^n and L_M to the set of all machine sets (or mazes [39,40]). In Figure 1, the red polygon could represent the set of all positive invariant sets included in the set represented by e. The figure could also illustrate that, given a set A, there exists a smallest invariant set that contains A, and there exists a largest element of P that is included in A.

Kleene Algebra
The invariant sets of a dynamical system can be defined as fixed points of monotonic operators, which can be formalized elegantly using a Kleene algebra. The basic notions related to Kleene algebras are recalled in this section.

Definition
A Kleene algebra (K, +, ·, *) is a set K together with two binary operations + : K × K → K and · : K × K → K and one function * : K → K, such that the axioms listed in Table 1 are satisfied.
The literature contains several inequivalent definitions of Kleene algebras and related algebraic structures [41]. Our definition differs in the way we introduce the Kleene operator: we assume star-continuity, which is needed as soon as sets of infinite cardinality are considered.
Probably the mainstream definition is that of Kozen [42], where the Kleene operator is defined formally by axioms together with induction rules, and where star-continuity corresponds to the property xa*y = ∑_{i≥0} x a^i y. Now, if we take x = y = 1, we get a* = ∑_{i≥0} a^i, which corresponds to our definition.
The following proposition illustrates the fact that applying the Kleene star operator to a amounts to computing a fixed point. This will be used later in the algorithms.
Proposition 1. Given a star-continuous Kleene algebra (K, +, ·, *), we have a* = lim_{n→∞} (1 + a)^n.
Proof. We first show by induction that (1 + a)^n = ∑_{i=0}^{n} a^i. For n = 0, both sides are equal to 1. Assume that the property is true for n. We have (1 + a)^{n+1} = (1 + a)^n · (1 + a) = (∑_{i=0}^{n} a^i) · (1 + a) = ∑_{i=0}^{n} a^i + ∑_{i=1}^{n+1} a^i = ∑_{i=0}^{n+1} a^i, using the idempotence of +. Since the two sequences involved are equal, they have the same limit. Therefore, lim_{n→∞} (1 + a)^n = ∑_{i≥0} a^i = a*.
Table 1. Axioms/definitions of a star-continuous Kleene algebra.
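Proposition 1 can be exercised on a classical star-continuous Kleene algebra: binary relations on a finite set, with + as union, · as relational composition, and 1 as the identity relation. The sketch below (illustrative names; not the paper's implementation) computes a* by iterating x ← x · (1 + a), i.e., the sequence (1 + a)^n:

```python
# Relations on {0,1,2} form a star-continuous Kleene algebra:
# + is union, . is relational composition, 1 is the identity relation.
# By Proposition 1, a* = lim (1 + a)^n, reached in finitely many
# steps since the algebra is finite.
ELEMS = range(3)
ONE = frozenset((i, i) for i in ELEMS)

def compose(a, b):
    """Relational composition a . b."""
    return frozenset((i, k) for (i, j) in a for (j2, k) in b if j == j2)

def star(a):
    x = ONE
    while True:
        nxt = compose(x, ONE | a)   # x . (1 + a): the next power (1+a)^n
        if nxt == x:
            return x
        x = nxt

a = frozenset({(0, 1), (1, 2)})
print(sorted(star(a)))
# [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```

Here a* is the reflexive-transitive closure of a, the standard meaning of the Kleene star on relations.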

Intervals
Consider a star-continuous Kleene algebra (K, +, ·, *). It is equipped with the order a ≤ b ⇔ a + b = b (see Table 1). We consider (K_M, ≤, ∧, ∨), a machine lattice of K with respect to ≤; the max and min can be used since K_M is finite. It can be shown that (K_M, +, ·, *) is also a Kleene algebra. Note that, since K_M is finite, it is obviously star-continuous. Definition 1. An interval of (K, +, ·, *) is a subset [a] of K that can be written as [a] = [a⁻, a⁺] = {x ∈ K | a⁻ ≤ x ≤ a⁺}, where a⁻, a⁺ belong to K_M. Note that both ∅ and K are intervals of K. Thus, an interval arithmetic similar to that proposed by Moore [43] for real numbers can be derived, due to the monotonicity of the operators +, ·, *. This will be used later to compute with quantities defined as expressions of fixed point operators.

Kleene Algebra of Automorphisms
The notions presented in Sections 2 and 3 can be seen as direct extensions of existing algebraic approaches related to lattices and Kleene algebra. In this section, we introduce automorphism-based Kleene algebras [4], which will allow us to make a first bridge between algebraic tools and invariant sets of nonlinear dynamical systems.

Automorphisms
Given a complete lattice (L, ≤, ∧, ∨, ⊥, ⊤), an automorphism of L is a function f : L → L that distributes over the meet, i.e., f(a ∧ b) = f(a) ∧ f(b). As we will see later (see, e.g., Table 2), no property is assumed concerning the operator ∨. We denote by A(L) the set of automorphisms of L.
Proposition 2. If f, g are in A(L), then f ◦ g, f ∧ g, and f* also belong to A(L), where the function f ∧ g is defined by (f ∧ g)(x) = f(x) ∧ g(x). The last item corresponds to the Kleene operator and is a direct consequence of the first two.
Proof. We need to check all properties of Table 2. Between the two tables, the symbols of the operators change: + corresponds to ∧, · corresponds to ◦, and ≤ corresponds to ≥. Each property of Table 1 is translated through this correspondence and then checked directly in A(L); all properties can be proved similarly.

Remark 2.
The order relation for automorphisms has been chosen backward (i.e., ≥ instead of ≤). This choice is motivated by the fact that the inclusion order will be used later, and we will have the following correspondences: ≤ ↔ ⊂, ∧ ↔ ∩.
Proposition 3. Given f ∈ A(L), the sets of fixed points satisfy Fix(f*) = Fix(Id ∧ f).
Proof. The proof is decomposed into two parts. In the first part, we prove that Fix(Id ∧ f) ⊂ Fix(f*), and in the second part, we prove the reverse inclusion.
We first prove by induction that (Id ∧ f)^i ≥ (Id ∧ f)^{i+1} (14). From the partial order property of Table 2 (a ≥ b ⇔ a ∧ b = b), since Id ∧ (Id ∧ f) = Id ∧ f, we get that Id ≥ Id ∧ f is true. This means that, for i = 0, (14) is true. Assume now that (14) is true for i; let us check that it is true for i + 1. From Proposition 2 (i), composition with Id ∧ f preserves the order; since (14) is true for i, we get (Id ∧ f)^{i+1} ≥ (Id ∧ f)^{i+2}. Thus, we have proved (14).
From (14), we get that ((Id ∧ f)^i) is a decreasing sequence, and thus the limit (Id ∧ f)^∞ exists. Take a fixed point x of (Id ∧ f)^∞; we get (Id ∧ f)((Id ∧ f)^∞(x)) = (Id ∧ f)^∞(x), i.e., (Id ∧ f)(x) = x, which implies that x is also a fixed point of Id ∧ f.
Since f is monotonic, so are Id ∧ f and f*. From the Knaster-Tarski theorem, we deduce that Fix(f*) is a complete sublattice of L.
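The correspondence Fix(f*) = Fix(Id ∧ f) is exactly what makes f*(a) computable: it suffices to iterate x ← x ∧ f(x). The minimal sketch below (illustrative example; the map f is a preimage map, which distributes over intersection, so it is an automorphism in the above sense) runs the iteration on the power-set lattice of a small finite set:

```python
# On the lattice of subsets of {0,...,5} (meet = intersection), take
# the meet-preserving map f(X) = {x | (x+1) mod 6 in X} (a preimage
# map, so f(X & Y) = f(X) & f(Y)). Then f*(a), the meet of all
# iterates f^i(a), is reached by iterating Id ^ f, i.e., x <- x & f(x).
U = set(range(6))
f = lambda X: {x for x in U if (x + 1) % 6 in X}

def kleene_star(f, a):
    x = set(a)
    while True:
        nxt = x & f(x)          # one application of Id ^ f
        if nxt == x:
            return x
        x = nxt

print(sorted(kleene_star(f, {0, 1, 2, 3})))  # []
print(sorted(kleene_star(f, set(U))))        # [0, 1, 2, 3, 4, 5]
```

The first call shrinks to bottom because no cycle of the shift stays inside {0, 1, 2, 3}; the second returns the whole space, which is trivially fixed.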

Factorization
In this paper, we want to compute expressions involving the Kleene star operator. For efficiency reasons, we want to avoid reaching a fixed point each time the star operator is used; ideally, we would like a unique fixed point to be reached. For this, we need to transform an expression containing several stars into an expression with only one star. Equivalently, we want to factorize the fixed point operator as much as possible. For this, we can use factorization rules such as (a + b)* = a* · (b · a*)* [42], but we can do more. Assume for instance that we have to compute f*(a) ∧ g*(b) and that this quantity turns out to be ⊥. We understand that it is not necessary to compute f*(a) and g*(b) independently to finally observe that we get ⊥. For instance, we may have spent a lot of time computing f*(a) accurately and a few milliseconds getting that g*(b) = ⊥, only to reach the conclusion that f*(a) ∧ g*(b) = ⊥. As a result, we need to develop specific algorithms taking into account that calling closures has a cost. The factorization allows us to reduce several fixed point iterations to a single one, which can be used to increase the speed of convergence of fixed point algorithms.
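The early-exit idea can be sketched as a joint iteration (illustrative code; `meet_of_stars` and the two maps are assumptions made for the example, not the paper's solver). Since the iterates of x ← x ∧ f(x) only decrease, the running meet always encloses f*(a) ∧ g*(b), so the computation can stop as soon as it hits ⊥ = ∅:

```python
# Compute f*(a) ^ g*(b) by running the two Kleene iterations jointly
# and stopping as soon as the running meet x_i & y_i is empty, long
# before the slow iteration f* has converged on its own.
def meet_of_stars(f, a, g, b):
    x, y = set(a), set(b)
    while True:
        if not (x & y):              # running meet already reached bottom
            return set()
        nx, ny = x & f(x), y & g(y)  # one step of each Id ^ f iteration
        if nx == x and ny == y:
            return x & y
        x, y = nx, ny

U = set(range(100))
f = lambda X: {v for v in U if (v + 1) % 100 in X}   # shrinks slowly
g = lambda X: {v for v in X if v >= 50}              # collapses at once
print(meet_of_stars(f, set(range(60)), g, set(range(40))))  # set()
```

In the example, g*(b) is empty after one step, so the joint loop exits after two iterations, whereas f* alone would need about sixty.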

Intervals
Given a lattice (L, ≤, ∧, ∨) and an automorphism f ∈ A(L), we want to compute f * (a) where a ∈ L. We consider also a machine lattice L M of L. An automorphism of L M is called a machine automorphism. As seen in Subsection 3.2, since A(L) is a Kleene algebra, we can define intervals in A(L).

Definition 2. An interval of (A(L), ∧, ◦, *) is a subset [f] of A(L) which can be written as [f] = [f⁻, f⁺] = {f ∈ A(L) | f⁻ ≤ f ≤ f⁺}, where f⁻, f⁺ belong to A(L_M).

Proposition 5.
Given f ∈ [f⁻, f⁺] with f⁻, f⁺ ∈ A(L_M), and a ∈ [a⁻, a⁺] with a⁻, a⁺ ∈ L_M, we have: (i) f*(a) ∈ [(f⁻)*(a⁻), (f⁺)*(a⁺)]; (ii) (f⁻)*(a⁻) ∈ Fix(f*); (iii) for all i, (Id ∧ f⁺)^i(a⁺) is an upper bound of f*(a). Proof. Let us first prove the first inclusion. First, since (f⁻)* ∈ A(L_M), all its fixed points are inside L_M. The inclusion then follows from (13).
Let us now prove the second inclusion. If a ∈ [a⁻, a⁺], where a⁻, a⁺ both belong to L_M, then the inclusion follows from the monotonicity of the operators. Remark 3. The membership relation (i) means that we are able to compute, in finite time, an enclosure of the fixed point f*(a). Relation (ii) states that the fixed point obtained by (f⁻)* is a fixed point of f*; however, this is not true for (f⁺)*. Relation (iii) tells us that at each iteration i, we have an upper bound for the solution f*(a), but we need to reach the fixed point to get a lower approximation (f⁻)*(a⁻) of f*(a).

Proof. (i) is a consequence of the fact that f⁻ ≤ f ≤ f⁺.
The inclusion (iii) is illustrated by Figure 2, where the grid corresponds to L_M, the magenta points correspond to Fix((f⁻)*), the blue points correspond to Fix((f⁺)*), and the light red polygon corresponds to Fix(f*).

Algorithm to Find the Greatest Fixed Point
We propose here an interval algorithm [44] to compute f*(a), where a ∈ [a] = [a⁻, a⁺]. We assume that we have an interval [f] = [f⁻, f⁺] containing f; from (20), we know that f*(a) ∈ [(f⁻)*(a⁻), (f⁺)*(a⁺)]. To compute f*(a), we apply the sequence of interval operations defined by [d_0] = [a⁻, a⁺], [d_{k+1}] = [d_k] ∧ [f]([d_k]), up to the fixed point. From Theorem 1, we get a guaranteed approximation of f*(a). Since L_M is finite, the algorithm always terminates. The principle of the algorithm is illustrated by Figure 3. The sequence provides a guaranteed enclosure for the solution, and the accuracy is related to the precision we used to define the machine lattice L_M. Once the algorithm terminates, if we are not satisfied with the quality of the approximation, we should restart from the beginning by redefining L_M with a finer level of granularity.
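The interval iteration can be sketched on a toy machine lattice (an illustrative example; the cell dynamics and the pair `f_lo`/`f_hi` are assumptions standing in for the machine automorphisms f⁻ and f⁺):

```python
# Interval version of the fixed-point iteration d <- d ^ f(d): a lower
# machine map f_lo and an upper machine map f_hi bracket the true f,
# so [d_lo, d_hi] brackets f*(a). Toy model: cells {0,...,9} flowing
# to the right; f_hi treats the last cell 9 as absorbing (optimistic
# rounding), f_lo does not (pessimistic rounding).
U = set(range(10))
f_lo = lambda X: {x for x in U if x + 1 in X}
f_hi = lambda X: {x for x in U if x + 1 in X or x == 9}

def star(f, a):
    """Iterate x <- x & f(x) until the fixed point (finite lattice)."""
    x = set(a)
    while True:
        nxt = x & f(x)
        if nxt == x:
            return x
        x = nxt

a = set(range(3, 10))
d_lo, d_hi = star(f_lo, a), star(f_hi, a)
assert d_lo <= d_hi   # the enclosure property of Proposition 5
print(sorted(d_lo), sorted(d_hi))  # [] [3, 4, 5, 6, 7, 8, 9]
```

The gap between the empty lower bound and the full upper bound is caused entirely by the rounding at the boundary cell; refining the lattice (Remark 4) is what shrinks it.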

Remark 4.
In our implementation, a multi-scale approach is used for more efficiency and accuracy: once the fixed point interval [d⁻, d⁺] is reached, we build a new, finer grid inside the interval [d⁻, d⁺]. We also combine this with an inflation process, which increases d⁻ without overtaking the solution d. More precisely, in Figure 3, we may push d⁻ toward the top right while still staying inside the red polygon, in order to get a more accurate approximation of the solution d. We call this process inflation since, in the context of this paper, the points of the figure correspond to subsets of R^n equipped with the order relation ⊂: when we increase d⁻, we inflate the set corresponding to d⁻ while remaining included in the solution set corresponding to d.

Application to Dynamical Systems
In this section, we show that the previous algorithm can directly be used to compute invariant sets of continuous-time dynamical systems. Furthermore, we will show that we are able to compute sets that can be defined as a combination of invariant sets.

Greatest Positive Invariant
Consider the system S defined by Equation (1). The power set P(R^n) of the state space R^n, equipped with ∩ and ∪, is a lattice. We denote by (A(P(R^n)), ∩, ◦, *) the associated set of automorphisms. We want to find an automorphism f⃗ in A(P(R^n)) that can be enclosed between two machine automorphisms and such that f⃗*(A) corresponds to the greatest positive invariant set included in A, i.e., f⃗*(A) = Inv+(A) (23). We may find some tools for that, such as CAPD [24] or a tube approach [45], devoted to this type of problem. Now, these types of approaches only consider a finite-time integration and are unable to compute the fixed point (23) in a reasonable time, as shown in [46,47]. Therefore, it is important to build an automorphism f⃗ which is fast to evaluate and which will converge quickly. This can be done by using an Eulerian positive predictor [10], which analyzes the geometry of the vector field associated with the dynamics ẋ(t) = γ(x(t)) of the system without performing any time integration. We propose to use a discretization of the state space using mazes [31]. Mazes correspond to a polygonal decomposition of the state space coupled with an interval enclosure for γ(x), which is valid inside the corresponding polygon. This decomposition by mazes can be interpreted as an interval of dynamics with a lower bound and an upper bound. The polygonal representation associated with the maze is a discrete object that can be represented in the memory of the computer by floating point numbers. It will be used to approximate from inside and from outside the sequence of sets that should converge, in a finite number of iterations, to the invariant set we want to compute. Equivalently, the polygonal representation corresponds to the machine lattice we use in our implementation to represent subsets of R^n.

Paths
Definition 3 (Path). Consider the system ẋ(t) = γ(x(t)) and a point a. The path associated to a is defined as the closure of the forward trajectory starting from a: Ψγ(a) = closure({ϕ_γ(t, a) | t ≥ 0}). The path Ψγ(a) is a closed set and contains the equilibrium points or cycles to which the system converges from a. Definition 4 (Path inside a region). Consider the system (1), a region Y, and a point a ∈ Y. We define the path inside Y as Ψγ|Y(a), where the function γ|Y coincides with γ inside Y and is zero outside, so that trajectories stop when they leave Y. This notion is illustrated by Figure 4, where two trajectories starting from a and b are represented. The path Ψγ|Y(a) contains a limit cycle, whereas Ψγ|Y(b) stops at the boundary of Y.

Cover and Automorphism
In this section, we define the automorphism f⃗ that we will use to solve our problems involving invariant sets. It will be based on the notion of cover, which we now define. A cover P is a collection of open boxes of R^n whose union is R^n. Denote by P(x) the union of all boxes of P containing x.
Consider the system S : ẋ(t) = γ(x(t)), a closed set A, and a cover P of the state space. The set-valued function defined by f⃗(A) = {a ∈ A | Ψγ|P(a)(a) ⊂ A} is an automorphism. We call f⃗(A) the forward Eulerian predictor, since it predicts where the state will go for one step. Now, the step is not temporal (as for Lagrangian predictors) but spatial, related to the cover P.
Proposition 6. We have Inv+(A) ⊂ f⃗(A), where Inv+(A) is the largest positive invariant set included in A.
This is illustrated by Figure 5 in the case where γ(x), represented by its blue arrow vector field, is constant and oriented to the right. The polygonal set in the first sub-figure represents A. In this figure, the cover is made of two boxes, which are open and overlap (just a little) on their boundaries. Now, this overlapping is not represented for the sake of clarity.
In the figure, a ∈ A and P(a) is made of a single box (the left one). The set Ψγ|P(a)(a) is represented by the blue dotted segment starting from a. Since Ψγ|P(a)(a) is not a subset of A, a ∉ f⃗(A). It means, from Proposition 6, that a ∉ Inv+(A), and this is why it can be removed. We have b ∈ A, and Ψγ|P(b)(b) is the blue dotted segment starting from b. For the point c ∈ A, the set P(c) is made of the two boxes, instead of one for P(a) and P(b). As a result, Ψγ|P(c)(c), the left dotted green segment, is not a subset of A. It is thus eliminated, since it cannot be an element of Inv+(A).

Figure 5. Forward Eulerian predictor f⃗(A) eliminating points that will escape from A.

Invariant Set
In this section, we show that the automorphism f⃗ is linked to invariant sets by the Kleene star operator. More precisely, we will show that Inv+(A) = f⃗*(A). As a consequence, an interval evaluation of f⃗*(A) will allow us to have an inner and an outer approximation of the invariant set Inv+(A).
However, there also exists a neighborhood V_b of b such that …, which is inconsistent with (34), (i).
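The whole pipeline, forward Eulerian predictor plus Kleene star, can be sketched in a one-dimensional discrete toy model (all names here are illustrative, not the paper's implementation). With a constant rightward field on a row of cells and a one-cell cover, the path of cell i inside its cover reaches the face shared with cell i + 1, so i survives in the predictor's output only if i + 1 is still in A:

```python
# Discrete sketch of the forward Eulerian predictor of Figure 5 for a
# constant rightward field on cells {0,...,9}: the rightmost cells of
# A are peeled off at each application, since their local path exits A.
def predictor(A):
    return {i for i in A if i + 1 in A}

A = {1, 2, 3, 6, 7}
print(sorted(predictor(A)))  # [1, 2, 6]

# Iterating the predictor to its fixed point computes the Kleene star
# f*(A), which this section identifies with Inv+(A); here every point
# eventually drifts out of A, so the invariant set is empty.
X = set(A)
while predictor(X) != X:
    X = predictor(X)
print(X)  # set()
```

Note that the predictor distributes over intersection, so it is indeed an automorphism in the sense of Section 4, and the iteration terminates because the lattice of cell sets is finite.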

Test Cases
We consider here several test cases in order to illustrate the principle and the efficiency of our approach. We can note that our method is limited to low-dimensional systems because of the exponential complexity of the algorithm with respect to the dimension. This is indeed the case for all safe methods dealing with non-convex solution sets.

Negative Invariant
We consider here a problem treated in [30] involving the Van der Pol system:
ẋ1(t) = x2(t), ẋ2(t) = (1 − x1²(t)) · x2(t) − x1(t). (38)
We compute an inner and an outer approximation of the largest negative invariant set X (see (21)). The resulting approximation is illustrated by Figure 8, which is obtained in less than 5 s on a standard laptop (all the computations were performed on an Intel i5-3320M@2.6 GHz with 8 GB of RAM). The magenta part corresponds to the inner approximation X⁻ of X. From (19), we know that X⁻ (magenta) is a negative invariant set. The outer approximation X⁺ corresponds to the union of the yellow and magenta zones. Note that X⁺ may not be negative invariant.
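As a plain numerical sanity check on the Van der Pol dynamics (38) (illustrative Euler integration, not the guaranteed maze computation of the paper; step size and horizon are arbitrary choices), a trajectory started near the origin spirals out toward the limit cycle and remains in the frame box [−3, 3] × [−3, 3] of the figures:

```python
# Explicit Euler simulation of the Van der Pol system (38).
def vdp(x1, x2):
    return x2, (1.0 - x1 * x1) * x2 - x1

x1, x2, dt = 0.1, 0.0, 1e-3
for _ in range(40000):          # integrate over 40 time units
    d1, d2 = vdp(x1, x2)
    x1, x2 = x1 + dt * d1, x2 + dt * d2

print(abs(x1) < 3 and abs(x2) < 3)  # True: still inside the frame box
```

Such a simulation only follows one trajectory; the fixed point computation of the paper encloses all of them at once.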

Forward Reach Set
Given the system (38), the forward reach set [48] of a set A is defined by Fwd(A) = {ϕ_γ(t, a) | t ≥ 0, a ∈ A}. We obtain in less than 4 s, with A = (x1, …, the approximation illustrated by Figure 9. The frame box is [−3, 3] × [−3, 3]. Note that we were able to get a non-empty inner approximation of Fwd(A), which was not possible with existing interval-based methods such as [49]. Similar results could have been obtained using Flow* [50]; however, to have the guarantee of enclosing the whole trajectory, we need to deal with an infinite horizon, whereas the Taylor-based method used in Flow* is devoted to predicting the trajectory over a limited time horizon.

Backward Reach Set
Given a set A, the backward reach set is defined by Bwd(A) = {x | ∃t ≥ 0, ϕ_γ(t, x) ∈ A}. For the system (38), we get in less than 4 s the approximation of Bwd(A) illustrated by Figure 10.

Control Forward Reach Set
Consider the system ẋ(t) = γ(x(t), u), where u is the control, which can be chosen asynchronously inside the set {0, 1}. Given an initial state set A, we want to compute the set X of all states that can be reached from A [51].

Minimal Robust Positive Invariant Set
The example is a continuous-time version of the example taken from [52]. We consider a system described by a state equation of the form ẋ(t) = γ(x(t), ω), where ω ∈ [ω] = [−1, 1] is the perturbation. We want to compute the smallest set X containing 0 such that the system cannot escape from it. This set corresponds to the minimal robust positively invariant set [53] and is known to be difficult to compute. Moreover, no method exists in the literature to get a guaranteed inner approximation for nonlinear continuous-time systems. Now, this problem is similar to the previous one, except that the control u is replaced by a perturbation ω, and we can use the same method. In 2 s, we obtain the approximation illustrated by Figure 12, where the frame box is [−3, 3] × [−3, 3].

Reach and Avoid

Let us find the set X of all points corresponding to a path that starts from A, avoids B, and reaches C. This corresponds to a path planning problem [54,55] for which interval analysis has been shown to be particularly efficient [56][57][58]. This set can be expressed as a combination of invariant-based sets, and thus our methodology applies. The result, depicted in Figure 13 (left), was obtained in less than 38 s for a search box [−3, 3] × [−3, 3]. The three images on the right show, through several zooms around A, B, and C, that a non-empty inner approximation was obtained, which was not possible with existing solvers.
Figure 13. Paths starting from A, avoiding B, and reaching C.

Conclusions
In this paper, we have proposed a new approach to compute invariant sets of continuous-time dynamical systems. Our contributions are the following:
• A link between the Kleene algebra and invariant sets. This allowed us to derive a simple fixed point method able to compute guaranteed inner and outer approximations of invariant sets.
• The treatment of toy examples that no other existing approach is able to deal with.
The approach uses the fact that a suited automorphism has been found for a dynamical system described by a deterministic state equation.
Moreover, our formalism allowed us to compute sets that can be defined as combinations (intersection, union, complement, image by an automorphism) of invariant sets. This combination can be interpreted as a first step toward what could be called an invariant algebra, i.e., an algebra whose atoms are positive invariant sets of dynamical systems. With this algebra, a complex problem such as the reach and avoid problem does not require developing a complex algorithm with properties that are difficult to analyze; instead, it is solved by a simple expression operating in our invariant algebra.
Our approach can directly be extended to discrete-time systems, and the algorithm, based on a formal expression, remains unchanged. Only the atoms (i.e., the automorphisms) have to be adapted. For a discrete-time system of the form x(k + 1) = γ(x(k)), the automorphism is even simpler, since it could be X → γ(X) = {y | ∃x ∈ X, y = γ(x)}. Thus, we get an approach similar to that proposed in [59], where set invariance is used to prove properties of discrete-time dynamical systems in a context of temporal logic.
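The discrete-time image automorphism mentioned above is straightforward to sketch on a finite state space (the dynamics `gamma` below is a hypothetical example chosen for illustration): the smallest positively invariant set containing A is obtained by accumulating the iterated images, i.e., a Kleene star for the join instead of the meet.

```python
# For a discrete-time system x(k+1) = gamma(x(k)) on a finite state
# space, the automorphism is the image map X -> gamma(X); the smallest
# positively invariant set containing A is the union of all iterated
# images of A, reached here by iterating X <- X | gamma(X).
gamma = {0: 1, 1: 2, 2: 3, 3: 2, 4: 0}   # hypothetical finite dynamics
image = lambda X: {gamma[x] for x in X}

def reach(A):
    X = set(A)
    while not image(X) <= X:   # stop when X is invariant: gamma(X) in X
        X |= image(X)
    return X

print(sorted(reach({0})))  # [0, 1, 2, 3]
```

Termination is guaranteed because the state space is finite, mirroring the machine-lattice argument of the continuous-time case.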
In our approach, we have chosen a structure that is a Kleene algebra. It captures many properties we have when we deal with invariant sets. Now, some properties are forgotten by the Kleene algebra. For instance, for our continuous-time systems, we have the property f⃗ ◦ f⃗ ⊃ f⃗. From this property, we may get some other simplifications that could be used to increase the efficiency of the resolution.