Mathematics
  • Article
  • Open Access

9 December 2025

Existence Results for Nonconvex Nonautonomous Differential Inclusions in Hilbert Spaces

Department of Mathematics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
This article belongs to the Special Issue Recent Investigations of Differential and Fractional Equations and Inclusions, 3rd Edition

Abstract

We establish a solvability criterion for nonautonomous time-evolution inclusions governed by a right-hand side without any convexity assumption. In this study, we examine the problem $\dot{x}(t) \in F(t, x(t))$ a.e. on $I_0 := [0, T_0]$, for some $T_0 > 0$, where $F$ has a closed graph and is constrained within the subdifferential operator of a convex, time-dependent potential $g(t, \cdot)$. This work extends the existing literature, which has primarily focused on the autonomous case or on time-dependent mappings with a time-independent convex potential in finite-dimensional spaces. Under the novel assumptions that the potential $g$ is convex in the state variable and Lipschitz in time, we establish a solvability criterion. An example illustrating the applicability of our result for nonconvex nonautonomous differential inclusions is given. As a significant application of our main theorem, we show that a certain class of implicit nonconvex sweeping processes with an unbounded perturbation term admits solutions.

1. Introduction

The theory of differential inclusions provides a powerful framework for modeling a vast array of phenomena in nonsmooth mechanics, control systems, and economics, where the dynamics are governed not by a single vector field but by a set-valued one. A fundamental challenge in this domain, which has attracted considerable research interest, is the solvability of differential inclusions governed by nonconvex set-valued mappings.
A significant contribution to this problem was made in [1], which provides a core existence theory for autonomous set-valued dynamical systems in a finite-dimensional setting. The authors considered the following problem: for any $x_0 \in \mathbb{R}^n$ there is $T_0 > 0$ such that
$\dot{x}(t) \in F(x(t)) \ \text{ a.e. on } [0, T_0], \quad \text{with } x(0) = x_0,$
where $F$ is upper semicontinuous (u.s.c. for short) without any convexity assumption and is constrained within the subdifferential operator of a convex potential $g$, i.e., $F(x) \subset \partial g(x)$ for any $x$ around $x_0$.
This seminal work has inspired numerous extensions. For instance, the authors in [2] introduced a Carathéodory single-valued perturbation $f(t,x)$, studying the inclusion $\dot{x}(t) \in F(x(t)) + f(t, x(t))$. In [3], the author further generalized this framework to include a set-valued perturbation $G(t,x)$ and viability constraints, ensuring that the solution remains within a closed set $S$. Subsequent research, such as [4,5], extended these ideas to the case $F(x) \subset \partial g(x)$ for a nonconvex Lipschitz function $g$, in both finite- and infinite-dimensional settings. Various other extensions can be found in, e.g., [6,7,8,9,10,11,12,13,14,15].
Despite this extensive literature, the nonautonomous case, where $F$ depends explicitly on time $t$, remains largely unexplored. To our knowledge, the only attempt in this direction is [16], which considers a time-dependent mapping $F$ with $F(t,x) \subset \partial g(x)$ for a time-independent convex function $g$.
Our primary objective in this work is to bridge this gap by proving that the following nonautonomous nonconvex evolution inclusion admits solutions:
$\dot{x}(t) \in F(t, x(t)) \quad \text{a.e. on } I_0.$
Here, $F$ is defined from $I_0 \times H$ to $H$ with a closed graph in $I_0 \times H \times H$ (without convexity of the values of $F$), satisfying $F(t,x) \subset \partial \varphi_t(x)$, where $\varphi_t(x) = g(t,x)$. We suppose that $g(\cdot, x)$ is Lipschitz on $I_0$, uniformly with respect to $x$ on bounded sets, and that for every $t$, $g(t, \cdot)$ is convex and continuous on $H$.
As a second result, we apply our existence theorem to a class of multivalued dynamical systems called implicit nonconvex sweeping processes. Specifically, we establish the solvability of the inclusion
$\dot{x}(t) \in -N^{C}_{C(t)}(\dot{x}(t)) + x(t) \quad \text{a.e. on } I_0.$
Here, $N^{C}_{C(t)}$ stands for the Clarke normal cone to $C(t)$. This application underscores the utility of our theoretical findings for modeling complex, controlled dynamical systems.
This paper is structured as follows. Section 2 states the main assumptions and provides the necessary technical lemmas, whose proofs are also included. In Section 3, we state and prove our principal solvability criterion, recorded as Theorem 1. Subsequently, we apply Theorem 1 to deduce the solvability of a class of nonconvex sweeping processes, which is formulated as Theorem 2.

2. Preliminary Lemmas

Throughout this work, the space $H$ is assumed to be a Hilbert space, $T > 0$, $I := [0, T]$, and $F$ assigns to any $(t, x) \in I \times H$ a subset of $H$. We will work under the following conditions on $F$:
$(A)_1$
$F$ has a closed graph in $I \times H \times H$ (with possibly nonconvex values).
$(A)_2$
There is $g : I \times H \to \mathbb{R}$ satisfying:
(a)
For every bounded subset $B \subset H$, there is $L_B > 0$ so that
$|g(t, z) - g(s, z)| \le L_B |t - s|, \quad \forall t, s \in I, \ \forall z \in B;$
(b)
For every $s \in I$, $\varphi_s := g(s, \cdot)$ is convex and continuous on $H$;
(c)
$\partial_t g(\cdot, \cdot)$ is continuous on bounded sets with respect to $z \in H$, for a.e. $s \in I$;
(d)
$F(s, z) \subset \partial \varphi_s(z) \cap c(s) K, \quad \forall s \in I,$
uniformly with respect to $z$ on bounded sets, where $K$ is a given convex compact subset of $H$ and $c : I \to [0, +\infty)$ is a nondecreasing function.
Establishing that solutions exist for the nonconvex nonautonomous differential inclusion (2) requires first proving several supporting lemmas. We start with the first one.
Lemma 1.
Under assumptions (a) and (b) in $(A)_2$, for any $x_0 \in H$ there exist $\delta_2 > 0$ and $L_1 > 0$, $L_2 > 0$ so that for all $s_1, s_2 \in I$ and $z_1, z_2 \in x_0 + \delta_2 \mathbb{B}$,
$|g(s_1, z_1) - g(s_2, z_2)| \le L_1 |s_1 - s_2| + L_2 \|z_1 - z_2\|.$
Proof. 
Fix any $x_0 \in H$ and any $R > 0$. Set $B := x_0 + R\mathbb{B}$. By assumption (a) in $(A)_2$, there is $L_B > 0$ such that
$|g(s_1, z) - g(s_2, z)| \le L_B |s_1 - s_2|, \quad \forall s_1, s_2 \in I, \ \forall z \in B.$
Hence, for every $z \in B$,
$|g(t, z)| \le |g(0, z)| + L_B t \le |g(0, z)| + L_B T, \quad \forall t \in I.$
By assumption (b) in $(A)_2$, $g(0, \cdot)$ is convex and continuous on $H$, and so it is bounded on bounded sets; i.e., there is $M_0 > 0$ so that
$|g(0, z)| \le M_0, \quad \forall z \in B.$
Thus,
$|g(t, z)| \le |g(0, z)| + L_B T \le M_0 + L_B T =: M, \quad \forall t \in I, \ \forall z \in B;$
that is, $g$ is bounded by $M$ on $I \times B$. Now fix any $t \in I$. By (b) in $(A)_2$, $g(t, \cdot)$ is convex and continuous on $H$, and since it is bounded by $M$ on $x_0 + R\mathbb{B}$, a standard result yields that $g(t, \cdot)$ is Lipschitz on $x_0 + \frac{R}{2}\mathbb{B}$; that is,
$|g(t, z_1) - g(t, z_2)| \le \frac{2M}{R}\,\|z_1 - z_2\|, \quad \forall z_1, z_2 \in x_0 + \tfrac{R}{2}\mathbb{B}.$
Consequently, by combining (5) and (9) we obtain
$|g(s_1, z_1) - g(s_2, z_2)| \le L_B |s_1 - s_2| + \frac{2M}{R}\,\|z_1 - z_2\|,$
for all $s_1, s_2 \in I$ and all $z_1, z_2 \in x_0 + \frac{R}{2}\mathbb{B}$. This ensures (4) with $\delta_2 := \frac{R}{2}$, $L_1 := L_B$, $L_2 := \frac{2M}{R}$, and so we obtain the desired result. □
We use this lemma to demonstrate the following one.
Lemma 2.
Under assumptions (a) and (b) in $(A)_2$, for any fixed sufficiently small $\delta > 0$ and any fixed $v \in H$, the function $\psi_{\delta, v}(s, z) := \delta^{-1}\,[\,g(s, z + \delta v) - g(s, z)\,]$ is u.s.c. on $I \times H$.
Proof. 
Fix an arbitrary $(t_0, x_0) \in I \times H$. Using Lemma 1, we obtain $\delta_2 > 0$, $L_1 > 0$, $L_2 > 0$ such that
$|g(s_1, z_1) - g(s_2, z_2)| \le L_1 |s_1 - s_2| + L_2 \|z_1 - z_2\|, \quad \forall s_1, s_2 \in I, \ \forall z_1, z_2 \in x_0 + \delta_2 \mathbb{B}.$
Fix any $v \in H$. Let $\delta > 0$ be small enough so that $\delta \|v\| < \frac{\delta_2}{2}$. We shall show that
$\limsup_{(s, z) \to (t_0, x_0)} \psi_{\delta, v}(s, z) \le \psi_{\delta, v}(t_0, x_0).$
Take any sequence $(t_n, x_n) \to (t_0, x_0)$. For sufficiently large $n$, it holds that $t_n \in I$ and $x_n \in x_0 + \frac{\delta_2}{2}\mathbb{B}$, and hence $\|x_n + \delta v - x_0\| \le \|x_n - x_0\| + \delta\|v\| \le \frac{\delta_2}{2} + \frac{\delta_2}{2} = \delta_2$. So, we obtain by (10)
$\psi_{\delta, v}(t_n, x_n) = \frac{g(t_n, x_n + \delta v) - g(t_n, x_n)}{\delta} \le \frac{\bigl[ g(t_n, x_0 + \delta v) + L_2 \|x_n - x_0\| \bigr] - \bigl[ g(t_n, x_0) - L_2 \|x_n - x_0\| \bigr]}{\delta} = \frac{g(t_n, x_0 + \delta v) - g(t_n, x_0) + 2 L_2 \|x_n - x_0\|}{\delta}.$
Taking the limsup in this inequality as $n \to +\infty$, we obtain
$\limsup_{n \to +\infty} \psi_{\delta, v}(t_n, x_n) \le \limsup_{n \to +\infty} \frac{g(t_n, x_0 + \delta v) - g(t_n, x_0) + 2 L_2 \|x_n - x_0\|}{\delta} = \limsup_{n \to +\infty} \frac{g(t_n, x_0 + \delta v) - g(t_n, x_0)}{\delta} = \lim_{n \to +\infty} \frac{g(t_n, x_0 + \delta v) - g(t_n, x_0)}{\delta} = \frac{g(t_0, x_0 + \delta v) - g(t_0, x_0)}{\delta} = \psi_{\delta, v}(t_0, x_0).$
Therefore, the function $\psi_{\delta, v}(\cdot, \cdot)$ is u.s.c. on $I \times H$. We mention that in the two special cases $t_0 = 0$ and $t_0 = T$, the sequence $(t_n)$ converges to $0$ from the right in the first case and to $T$ from the left in the second case. □
Lemma 3.
Under the same assumptions as in Lemma 2, one has the following:
(i).
The function $(t, x) \mapsto \sigma(\partial \varphi_t(x); v)$ is u.s.c. on $I \times H$ for every $v \in H$. Here $\sigma(A; \cdot)$ denotes the support function of a set $A \subset H$.
(ii).
For any $(t_n, x_n) \to (t_0, x_0)$ and any $\zeta_n \rightharpoonup \zeta_0$ weakly with $\zeta_n \in \partial \varphi_{t_n}(x_n)$, we have $\zeta_0 \in \partial \varphi_{t_0}(x_0)$.
Proof. 
(i) First, we recall that the support function $\sigma(\partial \varphi_t(x); v)$ equals the directional derivative $\varphi_t'(x; v)$. So, we shall prove the u.s.c. of $(t, x) \mapsto \varphi_t'(x; v)$ on $I \times H$ for every $v \in H$. Fix $t \in I$. Since $x \mapsto \varphi_t(x)$ is convex and continuous, the directional derivative can be rewritten as
$\varphi_t'(z; v) = \inf_{\delta > 0} \frac{g(t, z + \delta v) - g(t, z)}{\delta} = \inf_{\delta > 0} \psi_{\delta, v}(t, z).$
From Lemma 2, the function $\psi_{\delta, v}(\cdot, \cdot)$ is u.s.c. on $I \times H$ for every $\delta > 0$ and $v \in H$. Therefore,
$\varphi_t'(z; v) = \inf_{\delta > 0} \psi_{\delta, v}(t, z) = \inf_{\delta \in \mathbb{Q}_+} \psi_{\delta, v}(t, z)$
is an infimum over a countable family of u.s.c. functions, and so it is u.s.c., which completes the proof of part (i).
(ii) Let $(t_n, x_n) \to (t_0, x_0)$ and let $\zeta_n \rightharpoonup \zeta_0$ weakly with $\zeta_n \in \partial \varphi_{t_n}(x_n)$. Then
$\langle \zeta_n, y - x_n \rangle \le \varphi_{t_n}(y) - \varphi_{t_n}(x_n) = g(t_n, y) - g(t_n, x_n), \quad \forall y \in H.$
Taking $n \to +\infty$, we derive by the continuity of $g$
$\langle \zeta_0, z - x_0 \rangle \le g(t_0, z) - g(t_0, x_0) = \varphi_{t_0}(z) - \varphi_{t_0}(x_0), \quad \forall z \in H,$
which means that $\zeta_0 \in \partial \varphi_{t_0}(x_0)$, and hence the proof is complete. □
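The identity used in part (i) can be checked numerically on a simple example. The following minimal Python sketch (purely illustrative, not part of the argument) compares the support function of the subdifferential with the infimum of difference quotients for the hypothetical choice $\varphi(x) = |x|$ on $H = \mathbb{R}$, whose subdifferential at $0$ is $[-1, 1]$.

```python
# Illustrative check of sigma(d_phi(x); v) = inf_{delta>0} [phi(x + delta v) - phi(x)] / delta
# for the hypothetical choice phi(x) = |x| on H = R, at x = 0 where the subdifferential is [-1, 1].

import numpy as np

def phi(x):
    return abs(x)

def inf_difference_quotient(x, v, deltas):
    # infimum over a grid of positive deltas of the (monotone) convex difference quotient
    return min((phi(x + d * v) - phi(x)) / d for d in deltas)

def support_of_subdifferential_at_zero(v):
    # sigma([-1, 1]; v) = |v|
    return abs(v)

deltas = np.logspace(-6, 0, 50)
for v in (1.0, -2.5, 0.3):
    lhs = support_of_subdifferential_at_zero(v)
    rhs = inf_difference_quotient(0.0, v, deltas)
    print(f"v = {v:5.2f}:  support value = {lhs:.6f},  inf of quotients = {rhs:.6f}")
```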
Lemma 4.
Consider a bounded open set $\Omega$ in $H$ and suppose that (a)–(c) in $(A)_2$ hold. For every sequence of measurable maps $y_n : I \to \Omega$ with pointwise limit $y$ on $I$, one has
$\lim_{n \to +\infty} \int_0^T \partial_t g(s, y_n(s))\, ds = \int_0^T \partial_t g(s, y(s))\, ds.$
Proof. 
We invoke the dominated convergence theorem for $h_n(s) := \partial_t g(s, y_n(s))$ and $h(s) := \partial_t g(s, y(s))$. First, we check the pointwise convergence of $h_n$ to $h$ almost everywhere on $I$. By the hypothesis of the lemma, $y_n(s) \to y(s)$ for all $s \in I$, and so by assumption (c) in $(A)_2$, for a.e. $s \in I$, $\partial_t g(s, \cdot)$ is continuous at $y(s)$; hence $h_n(s) \to h(s)$. Let us check the domination assumption. By assumption (a) in $(A)_2$, the function $t \mapsto g(t, x)$ is Lipschitz with constant $L_\Omega > 0$ on $I$, uniformly for $x$ in the bounded set $\Omega$. So it is differentiable almost everywhere on $I$ for any $x \in \Omega$, which ensures that $|\partial_t g(s, x)| \le L_\Omega$ for a.e. $s \in I$ and all $x \in \Omega$. Hence $|\partial_t g(s, y_n(s))| \le L_\Omega$ for a.e. $s \in I$ and all $n$. Consequently, we deduce
$\lim_{n} \int_0^T \partial_t g(s, y_n(s))\, ds = \int_0^T \lim_{n} \partial_t g(s, y_n(s))\, ds = \int_0^T \partial_t g(s, y(s))\, ds,$
and hence the proof is complete. □
The next result relies on the following one, whose proof appears in Theorem 2 in [17] (see also Lemma 2.2 in [4]).
Lemma 5.
Consider $h : H \to \mathbb{R}$ convex and continuous and $z : I \to H$ a Lipschitz mapping. Then, for a.e. $t \in I$, the set $\langle \partial h(z(t)); \dot{z}(t) \rangle := \{ \langle \zeta; \dot{z}(t) \rangle : \zeta \in \partial h(z(t)) \}$ is a singleton and $\langle \partial h(z(t)); \dot{z}(t) \rangle = \{ h'(z(t); \dot{z}(t)) \}$.
Lemma 6.
Assume that assumptions (a) and (b) in $(A)_2$ are fulfilled. Consider a Lipschitz mapping $y$ from $I$ to $H$ and let $\Phi(\cdot) := g(\cdot, y(\cdot))$. Then there is $T_0 \in (0, T)$ so that $\Phi$ is Lipschitz on $I_0$ and, for a.e. $\tau \in I_0$,
$\frac{d}{dt}\Phi(\tau) = \partial_t g(\tau, y(\tau)) + \langle \partial \varphi_\tau(y(\tau)); \dot{y}(\tau) \rangle.$
Proof. 
We commence the proof by establishing the Lipschitz continuity of $\Phi$ on $I_0$ for some $T_0 \in (0, T)$. From Lemma 1 (applied with $x_0 := y(0)$), there exist $\delta_2 > 0$, $L_1 > 0$, $L_2 > 0$ such that
$|g(s_1, z_1) - g(s_2, z_2)| \le L_1 |s_1 - s_2| + L_2 \|z_1 - z_2\|, \quad \forall s_1, s_2 \in I, \ \forall z_1, z_2 \in y(0) + \delta_2 \mathbb{B}.$
By the Lipschitz continuity of $y$ on $I$, there is $L_3 > 0$ so that
$\|y(s_1) - y(s_2)\| \le L_3 |s_1 - s_2|, \quad \forall s_1, s_2 \in I.$
Fix some $T_0 \in \bigl(0, \min\{T, \tfrac{\delta_2}{L_3}\}\bigr)$. Thus, we write
$\|y(\tau) - y(0)\| \le L_3 \tau \le L_3 T_0 < \delta_2, \quad \forall \tau \in I_0,$
which ensures that $y(\tau) \in y(0) + \delta_2 \mathbb{B}$. Consequently, we apply (15) with $s_1, s_2 \in I_0$ and $z_1 := y(s_1)$, $z_2 := y(s_2)$ to write
$|\Phi(s_1) - \Phi(s_2)| = |g(s_1, y(s_1)) - g(s_2, y(s_2))| \le L_1 |s_1 - s_2| + L_2 \|y(s_1) - y(s_2)\| \le L_1 |s_1 - s_2| + L_2 L_3 |s_1 - s_2| = (L_1 + L_2 L_3)\,|s_1 - s_2|, \quad \forall s_1, s_2 \in I_0.$
We now proceed to prove formula (14). Since $\Phi$ is Lipschitz on $I_0$, it is differentiable a.e. on $I_0$. We want to show that
$\frac{d}{dt}\Phi(\tau) = \partial_t g(\tau, y(\tau)) + \langle \partial \varphi_\tau(y(\tau)), \dot{y}(\tau) \rangle \quad \text{a.e. on } I_0.$
Here $\partial \varphi_\tau(y(\tau))$ denotes the subdifferential of $\varphi_\tau(\cdot) = g(\tau, \cdot)$ at $y(\tau)$, and $\dot{y}(\tau)$ is the derivative of $y$ (which exists a.e. since $y$ is Lipschitz on $I_0$). Fix an arbitrary $\tau \in (0, T_0)$ and let $h \in \mathbb{R}$ be small enough so that $\tau + h \in (0, T_0)$. We start with the difference quotient of the composite function $\Phi(\tau) = g(\tau, y(\tau))$ and decompose it into two terms by adding and subtracting $g(\tau, y(\tau + h))$:
$\frac{\Phi(\tau + h) - \Phi(\tau)}{h} = \frac{g(\tau + h, y(\tau + h)) - g(\tau, y(\tau))}{h} = \frac{g(\tau + h, y(\tau + h)) - g(\tau, y(\tau + h))}{h} + \frac{g(\tau, y(\tau + h)) - g(\tau, y(\tau))}{h}.$
Now, by the Lipschitz continuity of $g$ in time on $I$, there is $\tau_h \to \tau$ as $h \to 0$ so that
$\frac{g(\tau + h, y(\tau + h)) - g(\tau, y(\tau + h))}{h} = \partial_t g(\tau_h, y(\tau + h)).$
By assumption (c) in $(A)_2$, $\partial_t g(\cdot, \cdot)$ is continuous a.e. on $I$ and uniformly with respect to $x$ on bounded sets. As $h \to 0$ we get $\tau_h \to \tau$ and $y(\tau + h) \to y(\tau)$. Also, note that both $y(\tau)$ and $y(\tau + h)$ belong to $y(0) + \delta_2 \mathbb{B}$, and consequently, letting $h \to 0$ in (18) yields
$\lim_{h \to 0} \frac{g(\tau + h, y(\tau + h)) - g(\tau, y(\tau + h))}{h} = \lim_{h \to 0} \partial_t g(\tau_h, y(\tau + h)) = \partial_t g(\tau, y(\tau)).$
Next, by the convexity of $\varphi_\tau(\cdot)$, we derive
$g(\tau, y(\tau + h)) - g(\tau, y(\tau)) = \varphi_\tau(y(\tau + h)) - \varphi_\tau(y(\tau)) \ge \langle \xi(\tau), y(\tau + h) - y(\tau) \rangle, \quad \text{for all } \xi(\tau) \in \partial \varphi_\tau(y(\tau)),$
and
$g(\tau, y(\tau)) - g(\tau, y(\tau + h)) = \varphi_\tau(y(\tau)) - \varphi_\tau(y(\tau + h)) \ge \langle \xi(\tau + h), y(\tau) - y(\tau + h) \rangle, \quad \text{for all } \xi(\tau + h) \in \partial \varphi_\tau(y(\tau + h)).$
So, for $h > 0$,
$\Bigl\langle \xi(\tau), \frac{y(\tau + h) - y(\tau)}{h} \Bigr\rangle \le \frac{g(\tau, y(\tau + h)) - g(\tau, y(\tau))}{h} \le \Bigl\langle \xi(\tau + h), \frac{y(\tau + h) - y(\tau)}{h} \Bigr\rangle.$
Since $\varphi_\tau$ is Lipschitz over $y(0) + \delta_2 \mathbb{B}$ with Lipschitz constant $L_2$ independent of $\tau$, the subdifferential $x \mapsto \partial \varphi_\tau(x)$ is bounded by $L_2$ on $y(0) + \delta_2 \mathbb{B}$; so, for any sequence $h_n \to 0$, the set $\{\xi(\tau + h_n) : n \ge 1\}$ is bounded and admits a subsequence weakly convergent to some $p \in H$. Using the u.s.c. of the subdifferential (part (ii) of Lemma 3), the convergence $y(\tau + h_n) \to y(\tau)$ as $n \to +\infty$, and $\xi(\tau + h_n) \in \partial \varphi_\tau(y(\tau + h_n))$, we obtain $p \in \partial \varphi_\tau(y(\tau))$. Letting $n \to \infty$ in (19), we get
$\Bigl\langle \xi(\tau), \lim_{n \to +\infty} \frac{y(\tau + h_n) - y(\tau)}{h_n} \Bigr\rangle \le \lim_{n \to +\infty} \frac{g(\tau, y(\tau + h_n)) - g(\tau, y(\tau))}{h_n} \le \Bigl\langle p, \lim_{n \to +\infty} \frac{y(\tau + h_n) - y(\tau)}{h_n} \Bigr\rangle,$
and since the sequence $h_n \to 0$ was taken arbitrarily, we obtain
$\langle \xi(\tau), \dot{y}(\tau) \rangle \le \lim_{h \to 0} \frac{g(\tau, y(\tau + h)) - g(\tau, y(\tau))}{h} \le \langle p, \dot{y}(\tau) \rangle.$
Using Lemma 5, the set $\langle \partial \varphi_\tau(y(\tau)); \dot{y}(\tau) \rangle$ is a singleton for a.e. $\tau$, and so $\langle p, \dot{y}(\tau) \rangle = \langle \xi(\tau), \dot{y}(\tau) \rangle$, which ensures that
$\lim_{h \to 0} \frac{g(\tau, y(\tau + h)) - g(\tau, y(\tau))}{h} = \langle \xi(\tau), \dot{y}(\tau) \rangle.$
Consequently, letting $h \to 0$ in (17), we obtain
$\lim_{h \to 0} \frac{\Phi(\tau + h) - \Phi(\tau)}{h} = \lim_{h \to 0} \frac{g(\tau + h, y(\tau + h)) - g(\tau, y(\tau + h))}{h} + \lim_{h \to 0} \frac{\varphi_\tau(y(\tau + h)) - \varphi_\tau(y(\tau))}{h} = \partial_t g(\tau, y(\tau)) + \langle \xi(\tau), \dot{y}(\tau) \rangle.$
Thus,
$\frac{d}{dt}\Phi(\tau) = \partial_t g(\tau, y(\tau)) + \langle \partial \varphi_\tau(y(\tau)), \dot{y}(\tau) \rangle, \quad \text{for a.e. } \tau \in I_0. \qquad \square$
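The chain rule (14) can be sanity-checked numerically on smooth data. The short Python sketch below uses the hypothetical choices $g(t, x) = (1 + t)x^2$ (convex in $x$, Lipschitz in $t$ on bounded sets) and $y(t) = \sin t$, and compares a difference quotient of $\Phi(t) = g(t, y(t))$ with the right-hand side of (14); it only illustrates the formula, not the proof.

```python
# Numerical sanity check (illustrative smooth example, not from the paper) of formula (14):
#   d/dt g(t, y(t)) = d_t g(t, y(t)) + <subgradient of g(t, .) at y(t), y'(t)>,
# with the hypothetical data g(t, x) = (1 + t) * x**2 and y(t) = sin(t).

import numpy as np

def g(t, x):        return (1 + t) * x ** 2
def dt_g(t, x):     return x ** 2                 # partial derivative in t
def dx_g(t, x):     return 2 * (1 + t) * x        # the (single-valued) subgradient in x
def y(t):           return np.sin(t)
def ydot(t):        return np.cos(t)

tau, h = 0.4, 1e-6
lhs = (g(tau + h, y(tau + h)) - g(tau, y(tau))) / h          # difference quotient of Phi
rhs = dt_g(tau, y(tau)) + dx_g(tau, y(tau)) * ydot(tau)      # right-hand side of (14)
print(f"difference quotient of Phi: {lhs:.8f}   formula (14): {rhs:.8f}")
```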

3. Main Results

We now turn to the core objective of our work: establishing an existence theory for the nonconvex, time-varying set-valued system (2).
Theorem 1.
Under assumptions $(A)_1$ and (a)–(d) in $(A)_2$, for every $x_0 \in H$ there is $T_0 \in (0, T)$ so that (2) has a Lipschitz solution $x : I_0 \to H$ with $x(0) = x_0$.
Proof. 
By Lemma 1, there exists $\delta_2 > 0$ for which $g(t, \cdot)$ is Lipschitz with some constant $L_2 \ge 0$ (independent of $t$) on the neighborhood $x_0 + \delta_2 \mathbb{B}$ for any $t \in I$. Let $L_0 > 0$ be such that $K \subset L_0 \mathbb{B}$ (recall that $K$ is compact). Put $T_0 \in \bigl(0, \frac{\delta_2}{c(T) L_0 + 1}\bigr)$. For any $n \ge 1$, we introduce the following subdivision of the interval $I_0$:
$I_0^n = \{0\}, \quad \text{and} \quad I_{i+1}^n := (t_i^n, t_{i+1}^n] \quad \text{with} \quad t_i^n := \frac{i T_0}{n}, \ i = 0, 1, \ldots, n - 1.$
Additionally, we introduce the following:
$x_n(t_0^n) := x_0 \ \text{ and } \ f_n(t_0) := f_0^n \in F(t_0^n, x_n(t_0^n)); \quad f_n(t) := f_i^n \in F(t_i^n, x_n(t_i^n)), \ t \in I_{i+1}^n; \quad x_n(t) := x_0 + \int_0^t f_n(s)\, ds, \ t \in I_0.$
Using assumption (d) in $(A)_2$, we show the Lipschitz continuity of $x_n$ on $I_0$. Indeed, for any $t_1, t_2 \in I_0$, one has
$\|x_n(t_1) - x_n(t_2)\| = \Bigl\| \int_{t_1}^{t_2} f_n(s)\, ds \Bigr\| \le \Bigl| \int_{t_1}^{t_2} \|f_n(s)\|\, ds \Bigr| \le \Bigl| \int_{t_1}^{t_2} c(T_0) L_0\, ds \Bigr| = c(T_0) L_0 |t_2 - t_1|,$
so $x_n$ is Lipschitz on $I_0$ with Lipschitz constant $c(T_0) L_0$. Define the sequence of functions $\theta_n : I_0 \to I_0$ by $\theta_n(t) = t_i^n$ for $t \in I_{i+1}^n$. Then we have
$\dot{x}_n(t) = f_n(t) \in F(\theta_n(t), x_n(\theta_n(t))), \quad \text{a.e. on } I_0.$
By assumption (d) in $(A)_2$, we have
$\dot{x}_n(t) \in F(\theta_n(t), x_n(\theta_n(t))) \subset c(\theta_n(t)) K.$
Hence,
$x_n(t) \in x_0 + \int_0^t c(\theta_n(s)) K\, ds \subset x_0 + \Bigl( \int_0^t c(\theta_n(s))\, ds \Bigr) K \subset x_0 + c_0 K =: K_0,$
with $c_0 := T_0\, c(T_0)$. Hence $\{x_n(t) : n \ge 1\}$ is relatively compact in $H$ for every $t \in I_0$. Also, by assumption (d) in $(A)_2$, we have
$\dot{x}_n(t) \in F(\theta_n(t), x_n(\theta_n(t))) \subset c(\theta_n(t)) L_0 \mathbb{B} \subset c(T_0) L_0 \mathbb{B}.$
Therefore, all the hypotheses of the Arzelà–Ascoli theorem (Theorem 4 in [18]) are fulfilled, and so there is a subsequence of $(x_n)$, still denoted by $(x_n)$ with a slight abuse of notation, that converges uniformly to some Lipschitz map $x : I_0 \to H$ with $\dot{x}_n \rightharpoonup \dot{x}$ weakly in $L^2(I_0, H)$.
Choose $t \in I_0$ such that $\dot{x}_n(t)$ and $\dot{x}(t)$ exist, and fix an arbitrary $\zeta \in H$. Combining the weak convergence of $\dot{x}_n$ to $\dot{x}$ with Mazur's lemma (see page 6 in [19]), which guarantees that a point in the weak closure of a set also lies in the strong closure of its convex hull, we obtain the following pointwise characterization:
$\dot{x}(t) \in \overline{co}\,\{\dot{x}_k(t) : k \ge n\}, \quad \forall n \ge 1.$
Thus,
$\langle \dot{x}(t), \zeta \rangle \le \inf_{n \ge 1} \sup_{k \ge n} \langle \dot{x}_k(t), \zeta \rangle.$
Now, by the definition of the limit superior,
$\inf_{n \ge 1} \sup_{k \ge n} \langle \dot{x}_k(t), \zeta \rangle = \limsup_{n \to +\infty} \langle \dot{x}_n(t), \zeta \rangle.$
Therefore, we obtain
$\langle \dot{x}(t), \zeta \rangle \le \limsup_{n \to +\infty} \langle \dot{x}_n(t), \zeta \rangle.$
By assumption (d) in $(A)_2$, we get
$\dot{x}_n(t) \in \partial \varphi_{\theta_n(t)}(x_n(\theta_n(t))).$
So the above inequalities (24)–(26) ensure
$\langle \dot{x}(t), \zeta \rangle \le \limsup_{n \to +\infty} \sigma\bigl( \partial \varphi_{\theta_n(t)}(x_n(\theta_n(t))), \zeta \bigr) \le \sigma\bigl( \partial \varphi_t(x(t)), \zeta \bigr).$
The last inequality results from the u.s.c. of $(t, x) \mapsto \sigma(\partial \varphi_t(x), \zeta)$ for every $\zeta \in H$, proved in part (i) of Lemma 3.
Consequently, it follows that
$\dot{x}(t) \in \partial \varphi_t(x(t)) \quad \text{a.e. on } I_0.$
We now establish the strong convergence $\dot{x}_n \to \dot{x}$ in $L^2(I_0, H)$. Since it is already known that $\dot{x}_n \rightharpoonup \dot{x}$ weakly in $L^2(I_0, H)$ and the latter is a Hilbert space, it suffices to establish the convergence of the norms $\|\dot{x}_n\|_{L^2} \to \|\dot{x}\|_{L^2}$. To achieve this we set $\Phi_n(\cdot) := g(\cdot, x_n(\cdot))$ and $\Phi(\cdot) := g(\cdot, x(\cdot))$, and we use Lemma 6 to write
$\frac{d}{dt}\Phi(t) = \partial_t g(t, x(t)) + \langle \partial \varphi_t(x(t)); \dot{x}(t) \rangle.$
Thus, from (29) we derive
$\frac{d}{dt}\Phi(t) = \partial_t g(t, x(t)) + \langle \dot{x}(t); \dot{x}(t) \rangle = \partial_t g(t, x(t)) + \|\dot{x}(t)\|^2.$
From the construction of the sequence $(x_n)$, we have
$\dot{x}_n(t) = f_i^n \in F(t_i^n, x_n(t_i^n)) \subset \partial \varphi_{t_i^n}(x_n(t_i^n)) \quad \text{for a.e. } t \in (t_i^n, t_{i+1}^n).$
Therefore, by the definition of the subdifferential,
$\langle \dot{x}_n(t); x_n(t_{i+1}^n) - x_n(t_i^n) \rangle \le \varphi_{t_i^n}(x_n(t_{i+1}^n)) - \varphi_{t_i^n}(x_n(t_i^n)), \quad t \in (t_i^n, t_{i+1}^n).$
Since $\varphi_\tau(z) = g(\tau, z)$, inserting the zero-sum term $g(t_{i+1}^n, x_n(t_{i+1}^n)) - g(t_{i+1}^n, x_n(t_{i+1}^n))$ and using the definition $\Phi_n(s) := g(s, x_n(s))$, we can rewrite the right-hand side as
$\varphi_{t_i^n}(x_n(t_{i+1}^n)) - \varphi_{t_i^n}(x_n(t_i^n)) = \bigl[ g(t_i^n, x_n(t_{i+1}^n)) - g(t_{i+1}^n, x_n(t_{i+1}^n)) \bigr] + \bigl[ \Phi_n(t_{i+1}^n) - \Phi_n(t_i^n) \bigr].$
Putting everything together gives
$\langle \dot{x}_n(t); x_n(t_{i+1}^n) - x_n(t_i^n) \rangle \le \bigl[ g(t_i^n, x_n(t_{i+1}^n)) - g(t_{i+1}^n, x_n(t_{i+1}^n)) \bigr] + \bigl[ \Phi_n(t_{i+1}^n) - \Phi_n(t_i^n) \bigr].$
Hence, for a.e. $t \in (t_i^n, t_{i+1}^n)$,
$\int_{t_i^n}^{t_{i+1}^n} \|\dot{x}_n(s)\|^2\, ds = \int_{t_i^n}^{t_{i+1}^n} \langle \dot{x}_n(s); \dot{x}_n(s) \rangle\, ds = \Bigl\langle \dot{x}_n(t); \int_{t_i^n}^{t_{i+1}^n} \dot{x}_n(s)\, ds \Bigr\rangle = \langle \dot{x}_n(t); x_n(t_{i+1}^n) - x_n(t_i^n) \rangle \le \bigl[ g(t_i^n, x_n(t_{i+1}^n)) - g(t_{i+1}^n, x_n(t_{i+1}^n)) \bigr] + \Phi_n(t_{i+1}^n) - \Phi_n(t_i^n) \le -\int_{t_i^n}^{t_{i+1}^n} \partial_t g(s, x_n(t_{i+1}^n))\, ds + \Phi_n(t_{i+1}^n) - \Phi_n(t_i^n) = -\int_{t_i^n}^{t_{i+1}^n} \partial_t g(s, x_n(\rho_n(s)))\, ds + \Phi_n(t_{i+1}^n) - \Phi_n(t_i^n),$
where $\rho_n : I_0 \to I_0$ is defined by $\rho_n(t) = t_{i+1}^n$ for $t \in I_{i+1}^n$. Summing over $i$, we obtain
$\int_0^{T_0} \|\dot{x}_n(s)\|^2\, ds \le -\int_0^{T_0} \partial_t g(s, x_n(\rho_n(s)))\, ds + \Phi_n(T_0) - \Phi_n(0).$
From the Lipschitz continuity of $g$ with respect to $(t, x)$, together with the convergences $x_n(T_0) \to x(T_0)$, $x_n(0) \to x(0)$, and $x_n(\rho_n(t)) \to x(t)$ (uniformly on $I_0$), by taking the limsup as $n \to +\infty$ in the above inequality we obtain
$\limsup_{n \to +\infty} \int_0^{T_0} \|\dot{x}_n(s)\|^2\, ds \le \limsup_{n \to +\infty} \Bigl( -\int_0^{T_0} \partial_t g(s, x_n(\rho_n(s)))\, ds \Bigr) + \Phi(T_0) - \Phi(0) = \limsup_{n \to +\infty} \Bigl( -\int_0^{T_0} \partial_t g(s, x_n(\rho_n(s)))\, ds \Bigr) + \int_0^{T_0} \frac{d}{dt}\Phi(s)\, ds.$
Using (31), we deduce
$\limsup_{n \to +\infty} \int_0^{T_0} \|\dot{x}_n(s)\|^2\, ds \le \limsup_{n \to +\infty} \Bigl( -\int_0^{T_0} \partial_t g(s, x_n(\rho_n(s)))\, ds \Bigr) + \int_0^{T_0} \bigl[ \partial_t g(s, x(s)) + \|\dot{x}(s)\|^2 \bigr]\, ds \le \limsup_{n \to +\infty} \int_0^{T_0} \bigl[ \partial_t g(s, x(s)) - \partial_t g(s, x_n(\rho_n(s))) \bigr]\, ds + \int_0^{T_0} \|\dot{x}(s)\|^2\, ds.$
Given the measurability of $x_n(\rho_n(\cdot))$ and the pointwise convergence $x_n(\rho_n(s)) \to x(s)$ for all $s \in I_0$, we may invoke Lemma 4 to obtain
$\lim_{n \to +\infty} \int_0^{T_0} \partial_t g(s, x_n(\rho_n(s)))\, ds = \int_0^{T_0} \partial_t g(s, x(s))\, ds.$
Consequently,
$\limsup_{n \to +\infty} \|\dot{x}_n\|_{L^2}^2 = \limsup_{n \to +\infty} \int_0^{T_0} \|\dot{x}_n(s)\|^2\, ds \le \int_0^{T_0} \|\dot{x}(s)\|^2\, ds = \|\dot{x}\|_{L^2}^2.$
Since the norm is weakly l.s.c. and $\dot{x}_n \rightharpoonup \dot{x}$ weakly in $L^2(I_0, H)$, we obtain
$\liminf_{n \to +\infty} \|\dot{x}_n\|_{L^2}^2 \ge \|\dot{x}\|_{L^2}^2.$
Therefore, we deduce that $\lim_{n \to +\infty} \|\dot{x}_n\|_{L^2}^2 = \|\dot{x}\|_{L^2}^2$. This entails $\dot{x}_n \to \dot{x}$ strongly in $L^2(I_0, H)$. To conclude, we use our construction to rewrite
$\dot{x}_n(t) \in F(\theta_n(t), x_n(\theta_n(t))) \quad \text{for a.e. } t \in I_0.$
Employing assumption $(A)_1$, we can let $n \to +\infty$ in the above inclusion to obtain (2). This completes the proof of the theorem. □
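To visualize the discretization used in the proof, here is a minimal numerical sketch of the Euler-type construction $(x_n)$. The data are hypothetical and chosen only to mimic assumption (d) in $(A)_2$: $H = \mathbb{R}$, $g(t, x) = (1 + t)|x|$, and the nonconvex-valued selection $F(t, x) = \{(1 + t)\,\mathrm{sgn}(x)\}$ for $x \neq 0$, $F(t, 0) = \{-(1 + t), (1 + t)\} \subset \partial \varphi_t(0)$; none of this comes from the paper.

```python
# Sketch of the Euler-type construction of Theorem 1 under illustrative assumptions:
# H = R, g(t, x) = (1 + t) * |x|, and a nonconvex-valued F(t, x) contained in the
# subdifferential of g(t, .).  Purely illustrative; the proof itself extracts a
# convergent subsequence rather than running a single discretization.

import numpy as np

def F_select(t, x):
    # pick one element f_i^n in F(t_i^n, x_n(t_i^n))
    if x > 0:
        return (1 + t)
    if x < 0:
        return -(1 + t)
    return (1 + t)          # at x = 0, choose one of the two admissible extreme subgradients

def euler_scheme(x0, T0, n):
    # nodes t_i^n = i * T0 / n and piecewise-constant derivative f_n
    t = np.linspace(0.0, T0, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        f_i = F_select(t[i], x[i])
        x[i + 1] = x[i] + (t[i + 1] - t[i]) * f_i   # x_n(t) = x0 + integral of f_n
    return t, x

t, x = euler_scheme(x0=-1.0, T0=0.5, n=200)
print("x_n(T0) approx", x[-1])
```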
The main result in [1] can be derived straightforwardly from our Theorem 1.
Corollary 1.
Let $F : \mathbb{R}^d \rightrightarrows \mathbb{R}^d$ be u.s.c. with closed values. Suppose that there is a convex continuous function $g : \mathbb{R}^d \to \mathbb{R}$ so that $F(x) \subset \partial g(x)$ for all $x \in \mathbb{R}^d$. Then, for every $x_0 \in \mathbb{R}^d$, there is $T_0 \in (0, T)$ such that (1) has at least one Lipschitz solution.
Proof. 
It follows from standard convex analysis that a continuous convex function on $H$ is locally Lipschitz; therefore, $g$ enjoys this property: there are $\rho > 0$ and $L_{x_0} > 0$ so that $g$ is Lipschitz on $x_0 + \rho\mathbb{B}$ with Lipschitz constant $L_{x_0}$. This guarantees that $\partial g(x) \subset L_{x_0}\mathbb{B}$ for all $x \in x_0 + \rho\mathbb{B}$. Hence condition (d) in $(A)_2$ is satisfied with $c(t) \equiv L_{x_0}$ and $K = \mathbb{B}$ (which is compact since $\dim H < +\infty$). Therefore, under the assumptions of Corollary 1, all the conditions of Theorem 1 are easily verified, so Theorem 1 is applicable and hence (1) has a Lipschitz solution. □
We now consider a very important special class of set-valued mappings $F$ depending on both $t$ and $x$. We define $F(t, x) := \operatorname{Proj}_{C(t)}(x)$, where $C$ is a given moving set defined from $I$ to $H$ with nonempty closed values. Define $g : I \times H \to \mathbb{R}$ by
$g(t, x) := \sup_{\zeta \in C(t)} \Bigl[ \langle \zeta, x \rangle - \frac{1}{2}\|\zeta\|^2 \Bigr].$
Assume that $C$ satisfies the following assumptions:
$(H)_1$
There are $\bar{t} \in I$ and $R > 0$ so that $C(\bar{t}) \subset R\mathbb{B}$;
$(H)_2$
There is $L > 0$ so that $\mathcal{H}(C(s_1), C(s_2)) \le L|s_1 - s_2|$ for all $s_1, s_2 \in I$;
$(H)_3$
$\partial_t d^2_{C(\cdot)}(\cdot)$ is continuous a.e. on $I$ and uniformly on bounded subsets of $H$.
Notice that $g$ can be reformulated as follows:
$g(t, x) = \frac{1}{2}\|x\|^2 - \frac{1}{2} d^2_{C(t)}(x).$
Observe that under assumption $(H)_3$ and using (33), assumption (c) in $(A)_2$ is satisfied for the function $g$ defined in (32). We now investigate the basic properties of the function $g$.
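As a quick numerical illustration of the reformulation (33), the Python sketch below compares the supremum form (32) with the Moreau-type form $\frac{1}{2}\|x\|^2 - \frac{1}{2} d^2_{C(t)}(x)$ for a hypothetical finite moving set in $\mathbb{R}^2$; the set $C(t)$ chosen below is an arbitrary toy example, not taken from the paper.

```python
# Illustrative check of g(t, x) = sup_{zeta in C(t)} [<zeta, x> - 0.5 |zeta|^2]
#                              = 0.5 |x|^2 - 0.5 dist(x, C(t))^2
# for a hypothetical finite set C(t) = {(t, 0), (-t, 0), (0, 2t)} in H = R^2.

import numpy as np

def C(t):
    return np.array([[t, 0.0], [-t, 0.0], [0.0, 2 * t]])

def g_sup(t, x):
    pts = C(t)
    return float(np.max(pts @ x - 0.5 * np.sum(pts ** 2, axis=1)))

def g_moreau(t, x):
    pts = C(t)
    d2 = float(np.min(np.sum((pts - x) ** 2, axis=1)))   # squared distance to C(t)
    return 0.5 * float(x @ x) - 0.5 * d2

rng = np.random.default_rng(0)
for _ in range(3):
    t, x = rng.uniform(0.1, 1.0), rng.normal(size=2)
    print(f"t = {t:.3f}   sup form = {g_sup(t, x):+.6f}   Moreau form = {g_moreau(t, x):+.6f}")
```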
Proposition 1.
For every $t \in I$, $g(t, \cdot)$ is convex. In addition, under hypotheses $(H)_1$ and $(H)_2$, $g(t, \cdot)$ is Lipschitz on $H$ uniformly in $t \in I$, and $g(\cdot, x)$ is Lipschitz on $I$ uniformly with respect to $x$ on bounded subsets of $H$.
Proof. 
For any $t \in I$, $g(t, \cdot)$ is the supremum of affine functions of $x$, so it is convex in $x$. We now prove the Lipschitz property of $g$ in $t \in I$ and in $x$ over bounded sets. First, notice that by $(H)_1$ and $(H)_2$ all the values of $C$ are bounded: indeed, $C(t) \subset C(\bar{t}) + \mathcal{H}(C(t), C(\bar{t}))\,\mathbb{B} \subset C(\bar{t}) + LT\,\mathbb{B}$, which guarantees that $C(t) \subset (R + LT)\mathbb{B}$ for all $t \in I$. Take arbitrary points $t, s \in I$ and arbitrary elements $x, y \in H$. For every $\epsilon > 0$, we pick by the definition of the supremum in (32) some $z_{\epsilon, x} \in C(t)$ such that
$g(t, x) \le \langle z_{\epsilon, x}, x \rangle - \frac{1}{2}\|z_{\epsilon, x}\|^2 + \epsilon.$
Also, we have
$g(t, y) = \sup_{\zeta \in C(t)} \Bigl[ \langle \zeta, y \rangle - \frac{1}{2}\|\zeta\|^2 \Bigr] \ge \langle z_{\epsilon, x}, y \rangle - \frac{1}{2}\|z_{\epsilon, x}\|^2.$
Combining these two inequalities, we obtain
$g(t, x) - g(t, y) \le \langle z_{\epsilon, x}, x - y \rangle + \epsilon \le \|z_{\epsilon, x}\|\,\|x - y\| + \epsilon \le (R + LT)\|x - y\| + \epsilon.$
Interchanging $x$ with $y$ and then letting $\epsilon \to 0$ leads to
$|g(t, x) - g(t, y)| \le (R + LT)\|x - y\|, \quad \forall t \in I, \ \forall x, y \in H.$
Let $M > 0$. Fix any $t, s \in I$ and any $x, y \in M\mathbb{B}$. It follows from (33) that
$|g(t, x) - g(s, x)| = \Bigl| \Bigl( \frac{1}{2}\|x\|^2 - \frac{1}{2} d^2_{C(t)}(x) \Bigr) - \Bigl( \frac{1}{2}\|x\|^2 - \frac{1}{2} d^2_{C(s)}(x) \Bigr) \Bigr| = \frac{1}{2} \bigl| d^2_{C(t)}(x) - d^2_{C(s)}(x) \bigr| = \frac{1}{2} \bigl[ d_{C(t)}(x) + d_{C(s)}(x) \bigr]\, \bigl| d_{C(t)}(x) - d_{C(s)}(x) \bigr| \le \frac{1}{2} \bigl[ d_{C(t)}(x) + d_{C(s)}(x) \bigr]\, \mathcal{H}(C(t), C(s)) \le \frac{L}{2} \bigl[ d_{C(t)}(x) + d_{C(s)}(x) \bigr]\, |t - s|,$
where we have used that the difference of the distance functions is bounded by the Hausdorff distance between the sets and that $C(\cdot)$ is Lipschitz with constant $L$ in the Hausdorff metric by $(H)_2$. However, for any choice of $t, s \in I$ and $x \in M\mathbb{B}$, it holds that
$d_{C(t)}(x) \le d_{C(t)}(0) + \|x\| \le R + LT + M.$
Consequently,
$|g(t, x) - g(s, x)| \le \frac{L}{2} \bigl[ d_{C(t)}(x) + d_{C(s)}(x) \bigr]\, |t - s| \le L(R + LT + M)\,|t - s|.$
Therefore, for any $t, s \in I$ and $x, y \in M\mathbb{B}$,
$|g(t, x) - g(s, y)| \le |g(t, x) - g(t, y)| + |g(t, y) - g(s, y)| \le (R + LT)\|x - y\| + L(R + LT + M)|t - s|. \qquad \square$
From the above proposition, $g(\cdot, x)$ is Lipschitz on $I$ uniformly with respect to $x$ on bounded subsets of $H$. Thus, $g(\cdot, x)$ is differentiable a.e. on $I$, uniformly in $x$ on bounded sets, and hence for a.e. $t \in I$ and every $M > 0$ we have
$|\partial_t g(t, x)| \le L(R + LT + M), \quad \forall x \in M\mathbb{B}.$
We start by demonstrating that the graph of Proj C ( · ) ( · ) is closed in I × H × H .
Proposition 2.
Under assumptions $(H)_1$ and $(H)_2$, the graph of $F(\cdot, \cdot) = \operatorname{Proj}_{C(\cdot)}(\cdot)$ is closed in $I \times H \times H$.
Proof. 
Let $(t_n, x_n, y_n) \to (t_0, x_0, y_0)$ with $y_n \in F(t_n, x_n)$. We shall show that $y_0 \in F(t_0, x_0)$. By the definition of $F$ we have
$y_n \in \operatorname{Proj}_{C(t_n)}(x_n) \iff y_n \in C(t_n) \ \text{ and } \ \|x_n - y_n\| = d_{C(t_n)}(x_n).$
Since $C$ is Hausdorff continuous and $t_n \to t_0$, we have $y_0 \in C(t_0)$. We claim that $d_{C(t_n)}(x_n) \to d_{C(t_0)}(x_0)$ as $n \to \infty$. Indeed,
$|d_{C(t_n)}(x_n) - d_{C(t_0)}(x_0)| \le |d_{C(t_n)}(x_n) - d_{C(t_n)}(x_0)| + |d_{C(t_n)}(x_0) - d_{C(t_0)}(x_0)| \le \|x_n - x_0\| + L|t_n - t_0| \to 0 \ \text{ as } n \to \infty;$
that is, $d_{C(t_n)}(x_n) \to d_{C(t_0)}(x_0)$. Also, we have $\|x_n - y_n\| \to \|x_0 - y_0\|$. Thus, by taking the limit in (39), we obtain $\|x_0 - y_0\| = d_{C(t_0)}(x_0)$, and since $y_0 \in C(t_0)$, we get by the definition of the projection that $y_0 \in F(t_0, x_0)$. Thus we have derived the closedness of the graph of $F$ in $I \times H \times H$. □
The following proposition is the key tool of our last main result proved in Theorem 2.
Proposition 3.
For every $(t, x) \in I \times H$, one has
$\operatorname{Proj}_{C(t)}(x) \subset \partial \varphi_t(x), \quad \text{where } \varphi_t := g(t, \cdot).$
Proof. 
Take arbitrary $(t, x) \in I \times H$. If $\operatorname{Proj}_{C(t)}(x) = \emptyset$, then the inclusion holds trivially. Suppose that $\operatorname{Proj}_{C(t)}(x) \neq \emptyset$ and fix any $u \in \operatorname{Proj}_{C(t)}(x)$. Then $\|u - x\|^2 \le \|v - x\|^2$ for every $v \in C(t)$. Expanding the squared norms, this inequality is equivalent to $\|u\|^2 - 2\langle u, x \rangle \le \|v\|^2 - 2\langle v, x \rangle$ for all $v \in C(t)$. Canceling the common term $\|x\|^2$ and rearranging gives $\langle u, x \rangle - \frac{1}{2}\|u\|^2 \ge \langle v, x \rangle - \frac{1}{2}\|v\|^2$ for every $v \in C(t)$. Taking the supremum over $v \in C(t)$ yields $\langle u, x \rangle - \frac{1}{2}\|u\|^2 \ge \sup_{v \in C(t)} \bigl[ \langle v, x \rangle - \frac{1}{2}\|v\|^2 \bigr]$. Since $u \in C(t)$, we obtain the equality
$\langle u, x \rangle - \frac{1}{2}\|u\|^2 = \sup_{v \in C(t)} \Bigl[ \langle v, x \rangle - \frac{1}{2}\|v\|^2 \Bigr] = g(t, x).$
Now fix an arbitrary $y \in H$. One has
$g(t, y) - g(t, x) = \sup_{v \in C(t)} \Bigl[ \langle v, y \rangle - \frac{1}{2}\|v\|^2 \Bigr] - \Bigl[ \langle u, x \rangle - \frac{1}{2}\|u\|^2 \Bigr],$
which gives
$g(t, y) - g(t, x) \ge \Bigl[ \langle u, y \rangle - \frac{1}{2}\|u\|^2 \Bigr] - \Bigl[ \langle u, x \rangle - \frac{1}{2}\|u\|^2 \Bigr] = \langle u, y - x \rangle.$
This yields $u \in \partial \varphi_t(x)$, and the proof is finished. □
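The inclusion of Proposition 3 can also be checked numerically: for a nearest point $u \in \operatorname{Proj}_{C(t)}(x)$, the subgradient inequality $g(t, y) - g(t, x) \ge \langle u, y - x \rangle$ should hold (up to floating-point error) for every test point $y$. The sketch below reuses the hypothetical finite set from the earlier snippet.

```python
# Illustrative check of Proposition 3 for the hypothetical finite set
# C(t) = {(t, 0), (-t, 0), (0, 2t)} in H = R^2: a nearest point u to x in C(t)
# satisfies the subgradient inequality g(t, y) - g(t, x) >= <u, y - x> for all y.

import numpy as np

def C(t):
    return np.array([[t, 0.0], [-t, 0.0], [0.0, 2 * t]])

def g(t, x):
    pts = C(t)
    return float(np.max(pts @ x - 0.5 * np.sum(pts ** 2, axis=1)))

def proj(t, x):
    pts = C(t)
    return pts[np.argmin(np.sum((pts - x) ** 2, axis=1))]   # one nearest point

rng = np.random.default_rng(1)
t = 0.7
x = rng.normal(size=2)
u = proj(t, x)
worst = min(g(t, y) - g(t, x) - float(u @ (y - x)) for y in rng.normal(size=(1000, 2)))
print("min over sampled y of  g(t,y) - g(t,x) - <u, y - x>  =", worst, "(should be >= 0)")
```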
Theorem 2.
Suppose that $(H)_1$–$(H)_3$ are fulfilled. Assume also that
$\operatorname{Proj}_{C(t)}(x) \neq \emptyset, \quad \forall (t, x) \in I \times H.$
Then there is $T_0 \in (0, T)$ and a Lipschitz solution of
$\dot{x}(t) \in \operatorname{Proj}_{C(t)}(x(t)), \quad \text{a.e. on } I_0.$
Proof. 
By virtue of Propositions 1–3, all the hypotheses of Theorem 1 are verified for $F(\cdot, \cdot) := \operatorname{Proj}_{C(\cdot)}(\cdot)$, and so by Theorem 1 there is $T_0 \in (0, T)$ for which (2) has at least one Lipschitz solution. □
We present the following example illustrating our findings in Theorem 2.
Example 1.
Consider the following nonconvex nonautonomous evolution inclusion: find $T_0 > 0$ such that (2) holds, with
$F(t, x) = \begin{cases} \{t\}, & t > 0, \ x > 0; \\ \{-t\}, & t > 0, \ x < 0; \\ \{-t, t\}, & t > 0, \ x = 0; \\ \{0\}, & t = 0. \end{cases}$
This evolution inclusion can be written in the form of (2) with $F(t, x) = \operatorname{Proj}_{C(t)}(x)$ and $C(t) = \{-t, t\}$ for all $t \ge 0$. Direct computations give $\operatorname{Proj}_{C(0)}(x) = \{0\}$ for all $x \in \mathbb{R}$, and for $t > 0$,
$\operatorname{Proj}_{C(t)}(x) = \begin{cases} \{t\}, & x > 0; \\ \{-t\}, & x < 0; \\ \{-t, t\}, & x = 0, \end{cases}$
$d^2_{C(t)}(x) = \begin{cases} (x - t)^2, & x > 0; \\ (x + t)^2, & x \le 0, \end{cases}$
and $\partial_t d^2_{C(t)}(x) = 2(t - |x|)$ for all $x \in \mathbb{R}$, $t \ge 0$. Obviously $(H)_1$–$(H)_3$ and (41) are verified, and so by Theorem 2 there exist $T_0 > 0$ and a Lipschitz solution of (42) a.e. on $I_0$.
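For a concrete feel of this example, here is a short numerical sketch that reuses the Euler-type construction from the proof of Theorem 1 with $C(t) = \{-t, t\}$. Starting from $x_0 = 0$ and always selecting the positive projection at $x = 0$ (an arbitrary choice between the two admissible values), the discrete trajectories approach the solution $x(t) = t^2/2$; this is only an illustration, not part of the example itself.

```python
# Numerical sketch of Example 1 in one space dimension, using the Euler-type
# construction of Theorem 1 with C(t) = {-t, t} and F(t, x) = Proj_{C(t)}(x).
# The selection made at x = 0 is an arbitrary choice between the two admissible values.

import numpy as np

def proj_select(t, x):
    # one element of Proj_{C(t)}(x) for C(t) = {-t, t}; at x = 0 both points are nearest
    if t == 0:
        return 0.0
    if x > 0:
        return t
    if x < 0:
        return -t
    return t            # arbitrary choice at x = 0

def solve(x0, T0, n):
    tt = np.linspace(0.0, T0, n + 1)
    xx = np.empty(n + 1)
    xx[0] = x0
    for i in range(n):
        xx[i + 1] = xx[i] + (tt[i + 1] - tt[i]) * proj_select(tt[i], xx[i])
    return tt, xx

tt, xx = solve(x0=0.0, T0=1.0, n=1000)
print("x_n(1) approx", xx[-1], " (the exact solution x(t) = t^2/2 gives 0.5)")
```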
As a final outcome, we present the following existence result for implicit nonconvex sweeping processes.
Theorem 3.
Let $C$ be an arbitrary moving set (without any convexity assumption) defined from $I$ to $H$ that satisfies $(H)_1$–$(H)_3$. Suppose also that (41) is satisfied. Then there is $T_0 \in (0, T)$ so that the implicit nonconvex sweeping process admits a Lipschitz solution $x : I_0 \to H$:
$\dot{x}(t) \in -N^{C}_{C(t)}(\dot{x}(t)) + x(t), \quad \text{a.e. on } I_0.$
Proof. 
All the hypotheses of Theorem 2 are fulfilled, and therefore we can choose a time $T_0 \in (0, T)$ and a Lipschitz trajectory $x : I_0 \to H$ satisfying
$\dot{x}(t) \in \operatorname{Proj}_{C(t)}(x(t)) \quad \text{for a.e. } t \in I_0.$
Employing the definition of the proximal normal cone, we obtain the inclusion
$x(t) - \dot{x}(t) \in N^{P}_{C(t)}(\dot{x}(t)) \subset N^{C}_{C(t)}(\dot{x}(t)).$
Consequently, the inclusion (43) holds, which completes the proof. □
We conclude with a solvability criterion for the implicit convex sweeping process in Hilbert spaces, partially extending Theorem 3.2 in [20]. The complete continuity of the perturbation (a very restrictive assumption) is not needed in our result, while it is crucial in Theorem 3.2 in [20]. Moreover, the perturbation $G(t, x) = x$ in (43) is unbounded, contrary to the one used in Theorem 3.2 in [20].
Corollary 2.
Let $C : I \rightrightarrows H$ be a moving set with convex closed values. Suppose that $C$ satisfies $(H)_1$–$(H)_3$. Then the implicit convex sweeping process (43) has a Lipschitz solution on $I_0$ for some $T_0 \in (0, T)$.

4. Conclusions

In this paper, we have addressed a notable gap in the theory of time-evolution inclusions by extending existence results to the nonautonomous case, where $F$ depends explicitly on time and has a closed graph. Building upon the foundational work of Bressan and colleagues [1], as well as subsequent extensions involving perturbations and viability constraints, we have established a solvability criterion for the nonautonomous nonconvex time-evolution inclusion (2) under the hypotheses that $F$ has a closed graph with possibly nonconvex values and $F(t, x) \subset \partial \varphi_t(x)$, where $\varphi_t(x) = g(t, x)$, the function $g(\cdot, x)$ is Lipschitz in time uniformly with respect to the state variable on bounded sets, and, for every $t \in I$, $g(t, \cdot)$ is convex and continuous in the state variable.
Our second result demonstrates the applicability of this framework to implicit nonconvex sweeping processes. We have proven the solvability of the problem (43). This extends known results for implicit convex sweeping processes to nonconvex implicit sweeping processes and provides a basis for analyzing systems with nonconvex moving constraints and unbounded additional forcing terms.
Future research may focus on relaxing the regularity assumptions on g, considering more general classes of perturbations, or extending the results to stochastic or Banach infinite-dimensional settings. The application of these results to specific problems in nonsmooth mechanics, optimal control, and hysteresis modeling also presents a promising direction for further investigation.

Funding

The author extends his appreciation to the Ongoing Research Funding Program (ORF-2025-1001), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The author would like to thank the reviewers for their thorough reading of the manuscript and for their valuable suggestions and remarks. He also gratefully acknowledges the support of the Ongoing Research Funding Program, (ORF-2025-1001), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Bressan, A.; Cellina, A.; Colombo, G. Upper semicontinuous differential inclusions without convexity. Proc. Amer. Math. Soc. 1989, 106, 771–775.
  2. Cellina, A.; Staicu, V. On evolution equations having monotonicities of opposite sign. J. Differ. Equ. 1991, 90, 71–80.
  3. Truong, X.D.H. Existence of viable solutions of nonconvex differential inclusions. Atti. Semi. Mat. Fis. Modena 1999, 47, 457–471.
  4. Bounkhel, M. Existence results of nonconvex differential inclusions. Portugaliae Math. 2002, 59, 283–310.
  5. Bounkhel, M.; Haddad, T. Existence of viable solutions for nonconvex differential inclusions. Electron. J. Differ. Equ. 2005, 2005, 1–10.
  6. Aitalioubrahim, M. Viability for upper semicontinuous differential inclusions without convexity. Topol. Methods Nonlinear Anal. 2013, 42, 77–90.
  7. Aitalioubrahim, M. Existence of monotone solutions for functional differential inclusions without convexity. Discuss. Math. Differ. Incl. Control Optim. 2017, 37, 101–121.
  8. Benabdellah, H. Sur une classe d’équations différentielles multivoques semi continues supérieurement à valeurs non convexes. In Séminaire d’Analyse Convexe; Université des Sciences et Techniques du Languedoc: Montpellier, France, 1991; Exposé No. 6.
  9. Benabdellah, H.; Faik, A. Perturbations convexes et non convexes des équations d’évolution [Convex and nonconvex perturbations of evolution equations]. Portugal. Math. 1996, 53, 187–208.
  10. Krastanov, M.I.; Ribarska, N.K.; Tsachev, Y. On the existence of solutions to differential inclusions with nonconvex right-hand sides. SIAM J. Optim. 2007, 18, 733–751.
  11. Lupulescu, V. An existence result for a class of nonconvex functional differential inclusions. Acta Univ. Apulensis Math. Inform. 2005, 9, 49–56.
  12. Rossi, R.; Savaré, G. Gradient flows of non convex functionals in Hilbert spaces and applications. ESAIM Control Optim. Calc. Var. 2006, 12, 564–614.
  13. Rossi, R.; Savaré, G. Existence and approximation results for gradient flows. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 2004, 15, 183–196.
  14. Cardinali, T.; Colombo, G.; Papalini, F.; Tosques, M. On a class of evolution equations without convexity. Nonlinear Anal. 1997, 28, 217–234.
  15. Papalini, F. Existence of solutions for differential inclusions without convexity. Rend. Istit. Mat. Univ. Trieste 1992, 24, 193–206.
  16. Kánnai, Z.; Tallos, P. Viable solutions to nonautonomous inclusions without convexity. CEJOR Cent. Eur. J. Oper. Res. 2003, 11, 47–55.
  17. Valadier, M. Entrainement unilateral, lignes de descente, fonctions lipschitziennes non pathologiques. CRAS Paris 1989, 308, 241–244.
  18. Aubin, J.P.; Cellina, A. Differential Inclusions; Springer: Berlin/Heidelberg, Germany, 1984.
  19. Ekeland, I.; Temam, R. Convex Analysis and Variational Problems; Studies in Mathematics and its Applications; North-Holland: New York, NY, USA, 1976; Volume 1.
  20. Bounkhel, M. Implicit differential inclusions in reflexive smooth Banach spaces. Proc. Amer. Math. Soc. 2012, 140, 2767–2782.