## 2. Differential Game and Viability Kernels

Consider a differential game with the autonomous dynamics:

Here, x is the state vector, u and v are the control parameters of the first and second players, respectively, and P and Q are compact sets of the corresponding dimensions. We suppose that f satisfies the standard requirements: continuity in $x,u$, and v; the Lipschitz condition in x; and a growth condition that guarantees the continuability of solutions to any time interval.

Based on the conflict control system (1), consider for any $v\in Q$ the differential inclusion:

**Definition 1** (u-stability property [3]). Let T be an arbitrary time instant. A set $W\subset (-\infty ,T]\times {R}^{n}$ is said to be $u$-stable on the interval $(-\infty ,T]$ if for any initial position $({t}_{*},{x}_{*})\in W$, any time instant ${t}^{*}$ $({t}_{*}\le {t}^{*}\le T)$, and any $v\in Q$, there exists a solution $x(\cdot)$ to the differential inclusion (2) with the initial state $x\left({t}_{*}\right)={x}_{*}$ such that $({t}^{*},x\left({t}^{*}\right))\in W$.

**Definition 2** (viability property [2]). A set $K\subset {R}^{n}$ is said to be viable if for any ${x}_{*}\in K$ and any $v\in Q$, there exists a solution $x(\cdot)$ to the differential inclusion (2) with the initial state $x\left(0\right)={x}_{*}$ such that $x\left(t\right)\in K$ for all $t\ge 0$.

**Definition 3** (viability kernel [2]). For a given compact set $G\subset {R}^{n}$, denote by $Viab\left(G\right)$ the largest subset of G with the viability property. This subset is called the viability kernel of G.

The following assertions follow from Definitions 1–3:

- (1)
The closure of a u-stable (viable) set is a u-stable (viable) set.

- (2)
The union of any family of u-stable (viable) sets is a u-stable (viable) set.

- (3)
If $W\subset (-\infty ,T]\times {R}^{n}$ is a u-stable closed set, then for any initial position $({t}_{*},{x}_{*})\in W$ and any $v\in Q$, there exists a solution $x(\cdot)$ to the differential inclusion (2) with the initial state $x\left({t}_{*}\right)={x}_{*}$ such that $(t,x\left(t\right))\in W$, ${t}_{*}\le t\le T$.

The first assertion follows from the compactness and semi-continuity of the solution sets of differential inclusions (see, e.g., [13]) and from the fact that the solution set of Equation (2) depends (even Lipschitz continuously) on the initial state. The second assertion follows from the definition of u-stability (viability). The third assertion can be proven by considering ever finer subdivisions of the interval $[{t}_{*},T]$, constructing step by step solutions of Equation (2) that satisfy the assertion at the nodes of the subdivisions, and using the compactness of the solution sets of Equation (2).

Assertion (3) shows that Definition 2 is a special case of Definition 1 with $T=\infty $ and $W=R\times K$. Assertions (1) and (2) provide the existence of the viability kernel, $Viab\left(G\right)$, for any compact set, G, if there exists at least one viable subset of G. Thus, the following problem can be considered.

**Problem.** Given a compact set $G\subset {R}^{n}$, it is required to construct the viability kernel $Viab\left(G\right)$.

To solve this problem, fix an arbitrary $T\in R$ and introduce the set $N=(-\infty ,T]\times G$. By Assertions (1) and (2), there exists a maximal closed set $W\subset N$ that is u-stable on the interval $(-\infty ,T]$. For any time instant $t\le T$, denote

**Remark 1.** The family $\{W\left(t\right):t\le T\}$ has the monotonicity property: $W\left({t}_{1}\right)\subset W\left({t}_{2}\right)$ for ${t}_{1}\le {t}_{2}\le T$. Indeed, if $\mathcal{W}$ is a u-stable set, then the set $\widehat{\mathcal{W}}$ defined by the cross-sections $\widehat{\mathcal{W}}\left(t\right)={\cup}_{\tau \le t}\mathcal{W}\left(\tau \right)$ is also u-stable. Therefore, the maximal u-stable set W must have the required monotonicity property.

These facts follow immediately from the above definitions; their proofs are left to the reader.

**Proposition 1.** Let ${\left\{{X}_{i}\right\}}_{i=0}^{\infty}$ be a sequence of nonempty compact sets such that ${X}_{j}\subset {X}_{i}$ for $j>i$, and let $X={\bigcap}_{i=0}^{\infty}{X}_{i}$. Then $X\ne \varnothing$ and the ${X}_{i}$ converge to X in the Hausdorff metric.

**Proof.** First, X is nonempty as the intersection of nested compact sets. Second, assume that the convergence fails. Then, there exists an $\epsilon >0$ such that ${A}_{k}:={X}_{k}\cap \text{cl}({X}_{0}^{\epsilon}\setminus {X}^{\epsilon})\ne \varnothing$ for all $k\ge 0$. Here, the upper index $\epsilon$ denotes the $\epsilon$-neighborhood of a set, and the symbol “cl” denotes the closure operation. The sets ${A}_{k}$, $k\ge 0$, are compact and nested. Therefore, there exists $x\in {\bigcap}_{k\ge 0}{A}_{k}=X\cap \text{cl}({X}_{0}^{\epsilon}\setminus {X}^{\epsilon})$, which is a contradiction. ☐
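
As a quick illustration of Proposition 1 (not part of the argument), the following sketch samples the nested compacts ${X}_{i}=[0,1+1/i]$ on a grid and computes their Hausdorff distance to the limit $X=[0,1]$; the distance is exactly $1/i$ here. The grid resolution and the sets are illustrative choices.

```python
# Hausdorff distance between two finite point sets of reals (an
# illustration of Proposition 1 on sampled nested intervals).
def hausdorff(A, B):
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(b - a) for a in A) for b in B)
    return max(d_ab, d_ba)

def interval(lo, hi, n=201):
    """A uniform grid sample of the interval [lo, hi]."""
    return [lo + (hi - lo) * k / (n - 1) for k in range(n)]

# Nested compacts X_i = [0, 1 + 1/i] with intersection X = [0, 1].
X = interval(0.0, 1.0)
dists = [hausdorff(interval(0.0, 1.0 + 1.0 / i), X) for i in (1, 2, 4, 8, 16)]
print(dists)  # [1.0, 0.5, 0.25, 0.125, 0.0625]: the distance is exactly 1/i
```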

**Theorem 1.** If $W\left(t\right)\ne \varnothing$ for any $t\le T$, then the Hausdorff limit $\underset{t\to -\infty}{lim}W\left(t\right)$ exists and:

Otherwise, there is no viable subset of G.

**Proof.** Denote $K:={\bigcap}_{t\le T}W\left(t\right)$ for brevity. If the set $W\left(t\right)$ is nonempty for every $t\le T$, then K is also nonempty as the intersection of nonempty nested compacts. We show that K is viable in the sense of Definition 2. Choose arbitrary ${x}_{*}\in K$, $v\in Q$, and $\delta >0$. By Proposition 1, for any $\epsilon >0$, there exists ${t}_{\epsilon}$ such that:

The condition ${x}_{*}\in K$ implies the inclusion ${x}_{*}\in W({t}_{\epsilon}-\delta )$. By the u-stability of the set W, there exists a solution $x(\cdot)$ of Equation (2) with the initial state $x({t}_{\epsilon}-\delta )={x}_{*}$ such that $x\left({t}_{\epsilon}\right)\in W\left({t}_{\epsilon}\right)$. Since the differential inclusion (2) is autonomous, we conclude that there exists a solution $x(\cdot)$ of Equation (2) such that $x\left(0\right)={x}_{*}$ and $x\left(\delta \right)\in {K}^{\epsilon}$. Thus, for arbitrarily small $\epsilon >0$, there exists a solution satisfying these relations. Since the solution set of Equation (2) is compact, there exists a solution $x(\cdot)$ such that $x\left(0\right)={x}_{*}$ and $x\left(\delta \right)\in K$. Thus, we have proven the viability property of K, and relation (3) proves the Hausdorff convergence of $W\left(t\right)$ to K as $t\to -\infty$.

Assume now that ${K}^{\prime}\subset G$ is a viable set, such that $K\subset {K}^{\prime}$. As was noticed in the comment to Definition 2, the viability property of ${K}^{\prime}$ means the u-stability property of the set $(-\infty ,T]\times {K}^{\prime}$. Since $(-\infty ,T]\times {K}^{\prime}\subset N$, and W is the maximal u-stable set belonging to N, then $(-\infty ,T]\times {K}^{\prime}\subset W$. Thus, ${K}^{\prime}\subset W\left(t\right)$ for all $t\le T$, and therefore, ${K}^{\prime}\subset {\bigcap}_{t\le T}W\left(t\right)=K$. This proves Theorem 1. ☐

## 3. Approximation

Immediate implementation of Theorem 1 for finding $Viab\left(G\right)$ requires the precise computation of the sets $W\left(t\right)$ for $t\in (-\infty ,T]$. If, for example, the step-by-step backward procedure from [9], related to the dynamic programming method, is used, then the computational error grows without bound as $t\to -\infty$. The algorithm proposed in the current paper uses the idea of decreasing the step of the backward procedure simultaneously with the passage to the limit as time goes to infinity. The algorithm is as follows:

Here, S is the unit closed ball in ${R}^{n}$, $\beta =LM$, L is the Lipschitz constant of ${F}_{v}$, and $M=sup\left\{|f|:f\in {F}_{v}\left(x\right),x\in G,v\in Q\right\}$.
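
As a simple illustration of this backward procedure, the sketch below applies a recursion of this type (shrink each edge by the worst-case drift over a step ${\delta}_{i}$, allow an inflation $\beta {\delta}_{i}^{2}$, and intersect with G; this specific form and the scalar game are illustrative assumptions, not the authors' exact Equation (4)) to the game $\dot{x}=x+u+v$, $u\in [-1,1]$, $v\in [-0.3,0.3]$, $G=[-2,2]$. Its kernel is $[-0.7,0.7]$: the equilibrium control $u=-(x+v)$ is admissible for every $v$ iff $|x|\le 0.7$. In this convex 1D case every ${K}_{i}$ is an interval, so one backward step reduces to a closed-form update of the two edges.

```python
import math

# Scalar game: x' = x + u + v, u in P = [-1, 1], v in Q = [-0.3, 0.3],
# constraint G = [-2, 2].  Exact kernel: [-0.7, 0.7].
# The upper edge x survives a step iff, under the worst disturbance
# v = 0.3 and the best control u = -1, the point x + d*(x - 0.7) lands
# in the previous interval inflated by beta*d^2 (illustrative recursion).
L_const = 1.0                   # Lipschitz constant of F_v in x
M = 3.3                         # bound of |x + u + v| on G
beta = L_const * M

lo, hi = -2.0, 2.0              # K_0 = G
for i in range(1, 20001):
    d = 1.0 / math.sqrt(i)      # delta_i -> 0 while sum(delta_i) diverges
    hi = min(2.0, (hi + 0.7 * d + beta * d * d) / (1.0 + d))
    lo = max(-2.0, (lo - 0.7 * d - beta * d * d) / (1.0 + d))

print(lo, hi)                   # approaches the exact kernel [-0.7, 0.7]
```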

**Theorem 2.** Let the family $\left\{{K}_{i}\right\}$ be defined by Equation (4), where the sequence $\{{\delta}_{i}\}$ is such that ${\delta}_{i}\to 0$ and ${\sum}_{i=1}^{\infty}{\delta}_{i}=\infty$. If ${K}_{i}\ne \varnothing$ for any $i>1$, then the Hausdorff limit ${lim}_{i\to \infty}{K}_{i}$ exists and:

Otherwise, there is no viable subset of G.

Before starting the proof of Theorem 2, we prove three auxiliary lemmas. The first lemma establishes a discrete u-stability of the family $\left\{{K}_{i}\right\}$ generated by Equation (4). Assume, purely for the simplicity of the subsequent calculations, that ${\delta}_{j}L\le 1$ for all $j>0$. This requirement allows us to avoid terms of the form ${\sum}_{i=l+1}^{s}{\delta}_{i}^{3}$ in the proof of Lemma 1, which shortens the calculations.

**Lemma 1.** Let l and s be integers such that $0<l<s$, and let $v\in Q$ be fixed. If ${x}_{*}\in {K}_{s}$, then there exists a solution $x(\cdot)$ of Equation (2) with $x\left(0\right)={x}_{*}$ such that $x\left(\sigma \right)\in {K}_{l}^{\omega}$, where $\sigma ={\sum}_{i=l+1}^{s}{\delta}_{i}$ and $\omega =4LM{e}^{L\sigma}{\sum}_{i=l+1}^{s}{\delta}_{i}^{2}$.

We first prove the following two auxiliary propositions.

**Proposition 2.** Let $v\in Q$ and ${x}_{1},{x}_{2}\in {R}^{n}$ with $|{x}_{1}-{x}_{2}|\le \alpha$. Then, for any solution ${x}_{1}(\cdot)$ of the differential inclusion (2) with the initial state ${x}_{1}\left(0\right)={x}_{1}$, there exists a solution ${x}_{2}(\cdot)$ with the initial state ${x}_{2}\left(0\right)={x}_{2}$ such that $|{x}_{1}\left(t\right)-{x}_{2}\left(t\right)|\le \alpha {e}^{Lt}$.

**Proof of Proposition 2.** The statement follows immediately from the Filippov–Gronwall inequality obtained in [14], Theorem 1. ☐
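
To see that the exponential factor in Proposition 2 is sharp, consider the linear field $f(x)=Lx$, whose Lipschitz constant is L: two solutions started $\alpha$ apart satisfy $|{x}_{1}(t)-{x}_{2}(t)|=\alpha {e}^{Lt}$ exactly. The sketch below (with illustrative constants) checks this by explicit Euler integration.

```python
import math

# For the linear field f(x) = L*x, two solutions started alpha apart
# satisfy |x1(t) - x2(t)| = alpha * exp(L*t), so the Filippov-Gronwall
# bound is attained with equality.  All constants are illustrative.
L_const, alpha, t_end = 0.8, 0.1, 2.0

def solve(x0, steps=100000):
    """Explicit Euler integration of x' = L_const * x on [0, t_end]."""
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * L_const * x
    return x

gap = abs(solve(1.0 + alpha) - solve(1.0))
bound = alpha * math.exp(L_const * t_end)
print(gap, bound)  # gap lies just below the bound, within the Euler error
```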

**Proposition 3.** Let $v\in Q$, $j>0$, and $\alpha >0$ be fixed. If ${x}_{0}\in {K}_{j}^{\alpha}$, then there is a solution $x(\cdot)$ of Equation (2) with the initial condition $x\left(0\right)={x}_{0}$ such that $x({\delta}_{j})\in {K}_{j-1}^{{\alpha}_{1}}$, where ${\alpha}_{1}=\alpha {e}^{L{\delta}_{j}}+4ML{\delta}_{j}^{2}$.

**Proof of Proposition 3.** Choose $v\in Q$, $j>0$, and $\alpha >0$. Let ${x}_{0}\in {K}_{j}^{\alpha}$. Then, there exists a point ${x}_{*}\in {K}_{j}$ satisfying $|{x}_{0}-{x}_{*}|\le \alpha$. By the definition of ${K}_{j}$, there are a point $\overline{x}\in {K}_{j-1}$ and vectors $\overline{g}\in {F}_{v}\left(\overline{x}\right)$ and $\overline{h}$, $|\overline{h}|\le 1$, such that:

Using the Lipschitz property of the right-hand side of Equation (2), choose a vector ${g}_{*}\in {F}_{v}\left({x}_{*}\right)$ such that $|{g}_{*}-\overline{g}|\le L|{x}_{*}-\overline{x}|$. Denote $\widehat{x}={x}_{*}+{\delta}_{j}{g}_{*}$ and calculate:

Accounting for the technical assumption ${\delta}_{j}L\le 1$, $j>0$, the last estimate simplifies as follows:

Let $g\left(x\right)\in {F}_{v}\left(x\right)$ be the point of ${F}_{v}\left(x\right)$ nearest to ${g}_{*}$ for every $x\in {R}^{n}$. Obviously, the function $x\to g\left(x\right)$ is continuous and, due to the Lipschitz property of ${F}_{v}(\cdot)$, satisfies the following inequality:

Suppose $x(\cdot)$ is a solution of the differential equation $\dot{x}=g\left(x\right)$ with the initial state $x\left(0\right)={x}_{*}$. Then:

The last equation and the estimate (6) yield the inequality:

and, with Equation (5), it holds that:

By Proposition 2, there exists a solution ${x}_{0}(\cdot)$ of Equation (2) with the initial state ${x}_{0}\left(0\right)={x}_{0}$ such that:

Then:

and therefore:

Proposition 3 is proven. ☐

**Proof of Lemma 1.** Let ${x}_{*}\in {K}_{s}$ be chosen. Setting $D=4ML$ and applying Proposition 3 repeatedly, we construct a solution $x(\cdot)$ of Equation (2) that satisfies the conditions:

A straightforward estimate of ${\alpha}_{p}$ gives the inequality:

where $\sigma ={\sum}_{i=l+1}^{s}{\delta}_{i}$. This proves Lemma 1. ☐
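
For the reader's convenience, the estimate of ${\alpha}_{p}$ can be unrolled as follows (a sketch; we write ${i}_{p}$ for the index used at the p-th application of Proposition 3, going down from $i=s$ to $i=l+1$):

```latex
\alpha_{0}=0,\qquad
\alpha_{p}=\alpha_{p-1}\,e^{L\delta_{i_{p}}}+D\,\delta_{i_{p}}^{2}
\;\Longrightarrow\;
\alpha_{s-l}\;\le\;D\,e^{L\sigma}\sum_{i=l+1}^{s}\delta_{i}^{2}
\;=\;4LM\,e^{L\sigma}\sum_{i=l+1}^{s}\delta_{i}^{2}\;=\;\omega ,
```

since each accumulated term $D{\delta}_{i}^{2}$ is amplified by the remaining exponential factors by at most ${e}^{L\sigma}$.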

**Lemma 2.** If ${K}_{i}\ne \varnothing$ for all $i>0$, then the set:

is nonempty and possesses the viability property.

**Proof.** If ${K}_{i}\ne \varnothing$ for all $i>0$, then K is nonempty as the intersection of nonempty nested compacts. Let ${x}_{*}\in K$, $v\in Q$, and $\delta >0$ be arbitrary. For any $\epsilon >0$, one can choose an integer k large enough to satisfy the following conditions:

(a) $\text{cl}{\bigcup}_{i\ge k}{K}_{i}\subset {K}^{\epsilon}$ (see Proposition 1);

(b) ${\delta}_{i}<\epsilon $ for all $i\ge k$.

Choose $m>k$ such that ${\sum}_{i=k+1}^{m}{\delta}_{i}>\delta$. Since ${x}_{*}\in K$, it holds that ${x}_{*}\in \text{cl}{\bigcup}_{i>m}{K}_{i}$. Hence, for any $\xi >0$, there exist an integer $s>m$ and a point ${x}_{\xi}\in {K}_{s}$ such that $|{x}_{\xi}-{x}_{*}|<\xi$. Using the Lipschitz continuity of the solution set of Equation (2) (see Proposition 2), one can choose the value of ξ so small that for any solution ${x}_{\xi}(\cdot)$ with ${x}_{\xi}\left(0\right)={x}_{\xi}$, there exists a solution $x(\cdot)$ with $x\left(0\right)={x}_{*}$ satisfying $|{x}_{\xi}\left(\delta \right)-x\left(\delta \right)|<\epsilon$. By virtue of Condition (b) and the choice of $k,m$, and s, there exists $l\ge k$ such that

Set $\sigma ={\sum}_{i=l+1}^{s}{\delta}_{i}$ and $D=4LM{e}^{L\sigma}$. By Lemma 1, there exists a solution ${x}_{\xi}(\cdot)$ of Equation (2) with ${x}_{\xi}\left(0\right)={x}_{\xi}$ and ${x}_{\xi}\left(\sigma \right)\in {K}_{l}^{\omega}$, where $\omega =D{\sum}_{i=l+1}^{s}{\delta}_{i}^{2}$. With Condition (b), it holds that

Taking into account the obvious inequality $|{x}_{\xi}\left(\delta \right)-{x}_{\xi}\left(\sigma \right)|\le M\epsilon$, we have:

where ${D}_{2}={D}_{1}+M$. Condition (a) implies the inclusion ${K}_{l}\subset {K}^{\epsilon}$, and therefore:

where ${D}_{3}={D}_{2}+1$. Using the choice of ξ, we obtain the existence of a solution $x(\cdot)$ of Equation (2) with $x\left(0\right)={x}_{*}$ and $x\left(\delta \right)\in {K}^{{D}_{4}\epsilon}$, where ${D}_{4}={D}_{3}+1$. Since ε is arbitrary and the solution set of Equation (2) is compact, there exists a solution $x(\cdot)$ such that $x\left(0\right)={x}_{*}$ and $x\left(\delta \right)\in K$. This proves Lemma 2. ☐

**Lemma 3.** If a set ${K}^{\prime}\subset G$ has the viability property, then ${K}^{\prime}\subset {K}_{i}$ for all $i>0$.

**Proof.** Define the following sequence:

where:

One can easily check the following properties of $\left\{{\tilde{K}}_{i}\right\}$:

- (a)
${\tilde{K}}_{0}=G$, ${\tilde{K}}_{i}\subset G$, $i\ge 1$.

- (b)
For any point ${x}_{*}\in {\tilde{K}}_{i}$ and any vector $v\in Q$, there is a solution $x(\cdot)$ of Equation (2) such that $x\left(0\right)={x}_{*}$ and $x\left({\delta}_{i}\right)\in {\tilde{K}}_{i-1}$.

- (c)
If ${x}_{*}\in G$, but ${x}_{*}\notin {\tilde{K}}_{i}$, then there exists $v\in Q$ such that for any solution $x(\cdot)$ of Equation (2) with $x\left(0\right)={x}_{*}$, it holds that $x\left({\delta}_{i}\right)\notin {\tilde{K}}_{i-1}$.

Arguing by induction, one can easily prove the maximality of $\left\{{\tilde{K}}_{i}\right\}$ in the following sense: for any family $\left\{{K}_{i}^{\prime}\right\}$ with Properties (a) and (b), the inclusion ${K}_{i}^{\prime}\subset {\tilde{K}}_{i}$ holds for any $i>0$. However, the viability property of ${K}^{\prime}$ implies that the constant family ${K}_{i}^{\prime}={K}^{\prime}$ has Property (b) and, except for the equality ${\tilde{K}}_{0}=G$, Property (a). Hence, ${K}^{\prime}\subset {\tilde{K}}_{i}$ for $i>0$. To complete the proof, it remains to check the inclusion ${\tilde{K}}_{i}\subset {K}_{i}$ for all $i>0$. To this end, the following proposition will be proven.

**Proposition 4.** For arbitrary $v\in Q$, $\tau >0$, and any solution $x(\cdot)$ of the differential inclusion $\dot{x}\in -{F}_{v}\left(x\right)$ with the initial state $x\left(0\right)={x}_{*}$, there exists a vector ${g}_{*}\in {F}_{v}\left({x}_{*}\right)$ such that:

**Proof of Proposition 4.** Let $g\left(\xi \right)\in -{F}_{v}\left({x}_{*}\right)$ be the point of $-{F}_{v}\left({x}_{*}\right)$ nearest to $\dot{x}\left(\xi \right)$ for any $\xi \in [0,\tau ]$. It is evident that the function $\xi \to g\left(\xi \right)$ is measurable, and the following inequality holds:

This implies the estimate:

where:

Taking into account the obvious inequality:

we obtain the desired estimate, which proves Proposition 4. ☐

Thus, Lemma 3 is also proven. ☐

**Proof of Theorem 2.** Define K by Formula (4). If ${K}_{i}\ne \varnothing$ for all $i>0$, then K is nonempty and viable by Lemma 2.

We now show the maximality of K. To this end, consider an arbitrary viable set ${K}^{\prime}$. Lemma 3 says that ${K}^{\prime}\subset {K}_{i}$, $i>0$, and therefore:

This proves the maximality of K.

We now prove the Hausdorff convergence. Take an arbitrary $\epsilon >0$ and use Proposition 1 to choose an integer k such that ${\bigcup}_{i>k}{K}_{i}\subset {K}^{\epsilon}$. Taking $j>k$ and accounting for Lemma 3 yield the relations:

which prove the convergence of ${K}_{j}$ to K in the Hausdorff metric.

Notice that if there exists a viable set ${K}^{\prime}\subset G$, then, for every $j>0$, the condition ${\tilde{K}}_{j}\ne \varnothing$ implies that ${\tilde{K}}_{j+1}\ne \varnothing$ (these sets are defined by Equation (7)). Hence, ${\tilde{K}}_{i}\ne \varnothing$ for all $i>0$. As was noticed in the proof of Lemma 3, the inclusion ${\tilde{K}}_{i}\subset {K}_{i}$ holds for any $i>0$. Therefore, ${K}_{i}\ne \varnothing$ for any $i>0$. Thus, the case ${K}_{s}=\varnothing$ for some $s>0$ contradicts the existence of viable subsets of G. Theorem 2 is proven. ☐

**Remark 2.** For a linear differential game with the dynamics:

relations Equation (4) turn into the following:

Here, E is the identity matrix, and the sign “$\underline{*}$” denotes the geometric difference. If the sets $G,P$, and Q are convex polyhedra, and D is the unit cube in ${R}^{n}$ (any bounded polyhedron containing the unit closed ball is appropriate), then all sets ${K}_{i}$ produced by Equation (8) are polyhedra, too. These formulas were used as the basis for a computer algorithm. A computer program developed by the authors implements Equation (8) for state-space dimensions up to three.
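
The geometric (Pontryagin) difference $A\,\underline{*}\,B=\{x:x+B\subset A\}$ that appears in Equation (8) is easiest to see in one dimension, where polyhedra are closed intervals. The function below is an illustration of the operation only, not the authors' implementation.

```python
# Geometric (Pontryagin) difference  A * B = {x : x + B inside A}  for
# closed intervals: the 1D case of the polyhedral operations behind
# Equation (8).  The interval encoding is an illustrative choice.
def geom_diff(a, b):
    lo, hi = a[0] - b[0], a[1] - b[1]
    return (lo, hi) if lo <= hi else None   # None encodes the empty set

A, B = (-2.0, 2.0), (-0.5, 1.0)
print(geom_diff(A, B))   # (-1.5, 1.0): any shift x in it keeps x + B inside A
print(geom_diff(B, A))   # None: B cannot contain a translate of A
```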

Consider the case where the right-hand side of Equation (2) does not depend on v. In this case, Definition 2 coincides with the usual definition of viability (see [2]), and the sets $W\left(t\right)$ appearing in Theorem 1 can be found as follows: $W(T-\tau )={X}_{-F}^{G}\left(\tau \right)$, where:

Thus, the following theorem holds.

**Theorem 3.** If ${X}_{-F}^{G}\left(\tau \right)\ne \varnothing$ for any $\tau >0$, then the Hausdorff limit $\underset{\tau \to \infty}{lim}{X}_{-F}^{G}\left(\tau \right)$ exists and:

Otherwise, there are no viable subsets of G.

The approximation theorem is now formulated as follows.

**Theorem 4.** Let

where S is the unit ball, $\beta =LM$, L is the Lipschitz constant of the right-hand side of the differential inclusion (2), and:

Suppose the sequence $\left\{{\delta}_{i}\right\}$ satisfies the conditions ${\delta}_{i}\to 0$ and ${\sum}_{i=1}^{\infty}{\delta}_{i}=\infty$. If ${K}_{i}\ne \varnothing$ for all $i>0$, then the Hausdorff limit ${lim}_{i\to \infty}{K}_{i}$ exists and:

Otherwise, there are no viable subsets of G.

## 4. Numerical Scheme

The idea of the numerical method consists in representing the viability kernel as a level set of an appropriate function. Assume that the constraint set G is described by the relation:

where g is a Lipschitz continuous function. It is required to construct a function V such that:

Define the Hamiltonian of the inclusion (2) as follows:

Let $\mathcal{V}$ be a Lipschitz function satisfying the conditions:

- (i)
$\mathcal{V}\left(x\right)\ge g\left(x\right)$ for all $x\in {R}^{n}$;

- (ii)
for any point ${y}_{0}\in {R}^{n}$ and any function $\phi \in {C}^{1}$, such that $\mathcal{V}-\phi $ attains a local minimum at ${y}_{0}$, the following inequality holds: $H\left({y}_{0},\nabla \phi \left({y}_{0}\right)\right)\le 0$.

**Proposition 5.** The function:

has the property:

Notice that Condition (i) provides the embedding of the level sets of V into the corresponding level sets of g. Condition (ii) provides the u-stability of the functions $\mathcal{V}$ (see [15,16]), and therefore, the u-stability of the function V. The operation “inf” provides the minimality of the resulting function, i.e., the maximality of its level sets. Thus, Proposition 5 is valid.

Unfortunately, the direct application of this proposition to the computation of V is impossible, because the verification of Condition (ii) is algorithmically very difficult. On the other hand, Theorem 1 shows that the function V can be computed as ${lim}_{t\to -\infty}V(t,\cdot)$, where $V(t,x)$ is the value function of the differential game with the Hamiltonian $H(x,p)$ and the objective functional $J\left(x(\cdot)\right)={max}_{\tau \in [t,0]}\,g\left(x\left(\tau \right)\right)$; see [16]. This observation allows us to use the numerical methods developed for constructing time-dependent value functions in differential games with state constraints (see [15,16]).

Let us outline the numerical methods of [15,16] and show how they should be adapted to our aims.

Consider the following finite-difference scheme. Let $\delta >0$ be the backward time step and $h:=({h}_{1},...,{h}_{n})$ the vector of space discretization steps. Set $\left|h\right|:=max\{{h}_{1},...,{h}_{n}\}$. For any continuous function $\mathcal{V}:{R}^{n}\to R$, define:

Denote by:

the restrictions of $\mathcal{V}$ and g to the grid.

Let ${\mathcal{L}}_{h}$ be an interpolation operator that maps grid functions to continuous functions and satisfies the estimate:

for any smooth function ϕ. Here, ${\varphi}^{h}$ is the restriction of ϕ to the grid, $\parallel \cdot \parallel$ is the point-wise maximum norm, ${D}^{2}\varphi$ is the Hessian matrix of ϕ, and C is a constant independent of ϕ and h.

Notice that Estimate (10) is typical for interpolation operators (see, e.g., [17]). Roughly speaking, interpolation operators reconstruct the values and gradients of interpolated functions, and therefore, the expected error is given by Equation (10).

As an example, consider a multilinear interpolation operator constructed in the following way (see [15]).

Let $m\in \overline{1,{2}^{n}}$ be an integer and $({j}_{1}^{m},...,{j}_{n}^{m})$ the binary representation of m, so that ${j}_{i}^{m}$ is either zero or one. Thus, each multi-index $({j}_{1}^{m},...,{j}_{n}^{m})$ represents a vertex of the unit cube in ${R}^{n}$, and m counts the vertices. Introduce the following functions:

Notice that the i-th factor in the product (11) is either $1-{x}_{i}$ or ${x}_{i}$, depending on the value of ${j}_{i}^{m}$. Consider a point $x=({x}_{1},...,{x}_{n})\in {R}^{n}$. Denote by ${\underline{x}}_{i}$ the lower and by ${\overline{x}}_{i}={\underline{x}}_{i}+{h}_{i}$ the upper grid points of the i-th axis, such that ${\underline{x}}_{i}\le {x}_{i}\le {\overline{x}}_{i}$. Let ${\varphi}_{m}^{h}$, $m=1,...,{2}^{n}$, be the values of a grid function ${\varphi}^{h}$ at the vertices of the n-brick ${\prod}_{i=1}^{n}[{\underline{x}}_{i},{\overline{x}}_{i}]$ (the vertices are ordered in the same way as the vertices of the unit n-cube above). The multilinear interpolation of ${\varphi}^{h}$ at $({x}_{1},...,{x}_{n})$ is:
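
In two dimensions, this construction reduces to ordinary bilinear interpolation. The sketch below (an illustrative implementation with hypothetical helper names) enumerates the ${2}^{2}$ cell vertices by the binary digits of m and reproduces a bilinear function exactly, as multilinear interpolation must.

```python
# Bilinear interpolation: the n = 2 case of the multilinear operator (11).
# The vertex weight is a product whose i-th factor is (1 - s_i) or s_i
# according to the i-th binary digit of the vertex index m, where s_i is
# the local coordinate inside the grid cell.
def bilinear(phi, h, x):
    """phi: dict (k1, k2) -> value on the grid; h: (h1, h2); x: query point."""
    k = [int(x[i] // h[i]) for i in range(2)]            # lower grid indices
    s = [(x[i] - k[i] * h[i]) / h[i] for i in range(2)]  # local coordinates
    val = 0.0
    for m in range(4):                                   # the 2^2 cell vertices
        j = (m & 1, (m >> 1) & 1)                        # binary digits of m
        w = 1.0
        for i in range(2):
            w *= s[i] if j[i] else 1.0 - s[i]
        val += w * phi[(k[0] + j[0], k[1] + j[1])]
    return val

# Multilinear interpolation reproduces multilinear data exactly:
f = lambda x, y: 2 + 3 * x + 5 * y + 7 * x * y
phi = {(i, j): f(i, j) for i in range(3) for j in range(3)}
print(bilinear(phi, (1.0, 1.0), (0.25, 0.75)))  # 7.8125 == f(0.25, 0.75)
```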

Let $\left\{{\delta}_{\ell}\right\}$ be a sequence of positive reals such that ${\delta}_{\ell}\to 0$ and ${\sum}_{\ell =0}^{\infty}{\delta}_{\ell}=\infty$. Consider the following grid scheme:

Notice that $F\left({\mathcal{L}}_{h}\left[{\mathcal{V}}_{\ell}^{h}\right];{\delta}_{\ell}\right)$ is a continuous function, which is then restricted to the grid and then compared with the grid function ${g}^{h}$.
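To make the level-set mechanics concrete, here is a minimal 1D sketch in the spirit of scheme (12). It assumes the operator acts as $\mathcal{V}\mapsto max\{g,\ {min}_{f\in F(x)}\mathcal{V}(x+\delta f)-\beta {\delta}^{2}\}$ (a simplified one-player form with an illustrative inflation term; the dynamics $\dot{x}\in 2x+[-1,1]$, the constraint function $g(x)=|x|$, and all constants are assumptions for the demonstration, not the authors' setup). The exact kernel of this example is $[-0.5,0.5]$: an equilibrium velocity $0\in 2x+[-1,1]$ exists iff $|x|\le 0.5$.

```python
import math

# 1D sketch of a level-set backward iteration in the spirit of scheme (12):
#   V_{l+1}(x) = max( g(x),  min_{f in F(x)} V_l(x + d*f) - beta*d^2 ),
# for the one-player inclusion x' in 2x + [-1, 1] with g(x) = |x|.
# The exact viability kernel of {g <= 1} is [-0.5, 0.5].  The operator
# form, the inflation term, and all constants are illustrative.
N, LO, HI = 121, -1.2, 1.2
H = (HI - LO) / (N - 1)
BIG = 10.0                        # value used outside the grid region
xs = [LO + H * i for i in range(N)]
V = [abs(x) for x in xs]          # V_0 = g

def interp(V, x):
    """Piecewise-linear interpolation on the grid, BIG outside it."""
    if x < LO or x > HI:
        return BIG
    t = (x - LO) / H
    i = min(int(t), N - 2)
    w = t - i
    return (1 - w) * V[i] + w * V[i + 1]

beta = 2.0 * 3.4                  # L*M for this example (L = 2, M = max|2x+u|)
for step in range(300):
    d = 0.1 / math.sqrt(step + 1)                    # d -> 0, sum(d) diverges
    infl = beta * d * d
    fs_of = lambda x: [2 * x - 1 + 0.1 * k for k in range(21)]  # sample F(x)
    V = [max(abs(x), min(interp(V, x + d * f) for f in fs_of(x)) - infl)
         for x in xs]

edge = max(x for x, v in zip(xs, V) if v <= 1.0)
print(edge)                       # close to the exact kernel boundary 0.5
```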

Relations (9) and (12) can be interpreted as follows. The shift, $\delta f$, of the argument of the function $\mathcal{V}$ in Equation (9) corresponds to the opposite shift of the level sets of $\mathcal{V}$; compare with Equation (4). The minimum over f corresponds to the union of level sets of $\mathcal{V}$, and the maximum over v results in the intersection of the level sets. The subtraction of the value ${\delta}^{2}{\beta}^{2}$ corresponds to adding the ball ${\delta}^{2}\beta S$ to the level sets. Moreover, the maximum in Equation (12) corresponds to the intersection of the level sets with the constraint set G. Therefore, the numerical scheme (12) implements the relation (4) in the language of level sets.

Consider another numerical grid scheme (see [16]) that approximately implements the limit ${lim}_{t\to -\infty}V(t,\cdot)$. Introduce the following upwind operator:

where ${f}_{i}$ are the components of f, and:

Notice that the new operator, F, can be applied directly to a grid function and returns a grid function.

The numerical scheme is now of the form:

Notice that the application of the algorithm (13) requires the relation ${\delta}_{\ell}/h\le 1/\left(M\sqrt{n}\right)$ for all ℓ (recall that M is the bound of ${F}_{v}$); see [15,16]. On the other hand, numerical experiments show a very useful property of this method: the noise that usually comes from the boundary of the grid area is absent, so that the grid region need not be much larger than the region where the solution is sought. The algorithm (12) does not possess such a property, so that larger grid regions are necessary in this case. On the other hand, that algorithm admits larger steps ${\delta}_{\ell}$, which can compensate for the necessary extension of the region.

## 5. Examples

**Example 1.** The first example illustrates Theorem 3 (the case of one player). Let the differential inclusion be of the form:

where $x\in {R}^{2}$ is the state vector, S is the unit ball of ${R}^{2}$, and $\alpha$ a positive real number. Let $G=rS$, where $r>\alpha$. According to Theorem 3, consider the differential inclusion in reverse time (utilizing the symmetry of S):

Using the function $\frac{1}{2}({x}_{1}^{2}+{x}_{2}^{2})$ as a Lyapunov function, one can easily see that no solution of Equation (14) leaves G. Therefore,

i.e., ${X}_{-F}^{G}\left(\tau \right)$ is the attainable set of System (14) at the time instant τ. A calculation shows that the support function of ${X}_{-F}^{G}\left(\tau \right)$ is given by the formula:

Thus, Theorem 3 yields that $Viab\left(G\right)=\alpha S$.

**Example 2.** The second example illustrates Theorem 2 and the application of the grid algorithm (13). Consider a pendulum with a moving suspension point. The dynamics of the object are described by the system:

Here, θ is the deflection angle, l the length, m the mass, ${g}_{gr}$ the gravity acceleration, u the torque applied to the pendulum at the suspension point (the control), and ${v}_{1}$ and ${v}_{2}$ the vertical and horizontal accelerations of the suspension point, respectively (the disturbances). The following values of the parameters and bounds on the control and disturbances are chosen:

The constraint set G is the unit disc given by the relation $g(\theta ,w):=\sqrt{{\theta}^{2}+{w}^{2}}\le 1$, and the sequence $\left\{{\delta}_{\ell}\right\}$ is chosen as ${\delta}_{\ell}=0.001/ln(3+\ell )$. The grid is formed by dividing the region ${[-1.2,1.2]}^{2}$ into $100\times 100$ square cells.
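
A quick check (illustrative only) confirms that this step sequence meets the hypotheses of Theorems 2 and 4: the steps decrease to zero while their partial sums grow without bound, since $0.001/ln(3+\ell )$ dominates the terms of a divergent series.

```python
import math

# Illustrative check of the step sequence delta_l = 0.001 / ln(3 + l):
# the steps decrease to zero while the partial sums diverge, as required
# by the hypotheses delta_l -> 0 and sum(delta_l) = infinity.
delta = [0.001 / math.log(3 + l) for l in range(200000)]

print(delta[0], delta[-1])   # steps decrease toward zero
print(sum(delta))            # partial sums keep growing without bound
```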

Test runs of the algorithm (13) show that:

which is the stopping criterion. The runtime on a laptop with six threads is approximately 1 min.

Figure 1 shows the viability kernel obtained as $Viab\left(G\right)=\{(\theta ,w):{\mathcal{V}}_{\ell}^{h}(\theta ,w)\le 1\}$.

**Figure 1.** The viability kernel for the problem (15) with the above-described data.
