Article

Convergence Results for History-Dependent Variational Inequalities

by Mircea Sofonea 1,*,† and Domingo A. Tarzia 2,3,†
1 Laboratoire de Mathématiques et Physique, University of Perpignan Via Domitia, 52 Avenue Paul Alduy, 66860 Perpignan, France
2 Departamento de Matemática, FCE, Universidad Austral, Paraguay 1950, Rosario S2000FZF, Argentina
3 CONICET, Rosario S2000EZP, Argentina
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(5), 316; https://doi.org/10.3390/axioms13050316
Submission received: 10 April 2024 / Revised: 29 April 2024 / Accepted: 6 May 2024 / Published: 10 May 2024
(This article belongs to the Section Hilbert’s Sixth Problem)

Abstract: We consider a history-dependent variational inequality in a real Hilbert space, for which we recall an existence and uniqueness result. We associate this inequality with a gap function, together with two additional problems: a nonlinear equation and a minimization problem. Then, we prove that solving these problems is equivalent to solving the original history-dependent variational inequality. Next, we state and prove a convergence criterion, i.e., we provide necessary and sufficient conditions which guarantee the convergence of a sequence of functions to the solution of the considered inequality. Based on the equivalence above, we deduce various consequences that present some interest on their own, and, moreover, we obtain convergence results for the two additional problems considered. Finally, we apply our abstract results to the study of an inequality problem in solid mechanics. It concerns the study of a viscoelastic constitutive law with long memory and unilateral constraints, for which we deduce a convergence result and provide the corresponding mechanical interpretations.

1. Introduction

The current paper is structured around three main keywords and phrases: variational inequalities, history-dependent operators, and convergence results. A short introduction of these notions, together with some basic references, follows.
The theory of variational inequalities started in the early 1960s, motivated by important applications in mechanics, physics, and the engineering sciences. It uses results from nonlinear and nonconvex analysis as its main ingredients, including the properties of monotone and pseudomonotone operators, lower semicontinuous functions, and the subdifferential of convex functions. It deals with the study of various classes of elliptic, time-dependent, and evolutionary inequalities, for which it provides existence, uniqueness, and optimal control results. Over time, particular attention has been paid to the numerical analysis of different types of variational inequality problems, including error estimates and algorithms to approximate the solution. Comprehensive references in the field are [1,2,3,4,5], for instance. Applications of the theory in mechanics and, in particular, in contact mechanics, can be found in [6,7,8,9,10].
History-dependent operators represent a special class of nonlinear operators defined on spaces of continuous functions. Such kinds of operators arise in nonlinear analysis, the theory of differential and integral equations, and solid and contact mechanics. Two elementary examples in nonlinear analysis are provided by the integral operator and the Volterra operator. In classical mechanics, the current position of a material point is determined by the initial position and the history of the velocity function and, therefore, it is expressed in terms of a history-dependent operator. In contact mechanics, it is common to consider that the coefficient of friction depends on the total slip or the total slip rate that, again, leads to history-dependent operators. History-dependent operators were introduced in [10], and since then, they have been intensively covered in the literature. References can be found in books [11,12], for instance.
Convergence results play an important role in functional analysis, numerical analysis, and mechanics. Some elementary examples are the convergence of the discrete solution to the solution of the continuous problem as the discretization parameter tends to zero, the convergence of the solution of a nonlinear problem with respect to perturbations of the data or of the set of constraints, and the convergence of the solution of a contact problem with normal compliance to the solution of the Signorini contact problem as the stiffness coefficient of the foundation grows to infinity. For all these reasons, a large number of convergence results have been obtained in the study of nonlinear equations, inequality problems, fixed-point problems, and optimization problems, among others. Convergence results to the solution of a given problem T are closely related to the well-posedness concepts associated with T. Comprehensive references in this field include [13,14,15,16,17] and, more recently, [11].
Motivated by a large number of applications, in the current paper, we deal with convergence results for a class of variational inequalities governed by a history-dependent operator, the so-called history-dependent variational inequalities. An inequality problem in the class we consider is stated as follows.
Problem P. Find a function u ∈ C(I; K) such that
$$(Au(t), v - u(t))_X + (Su(t), v - u(t))_X + j(u(t), v) - j(u(t), u(t)) \ge (f(t), v - u(t))_X \quad \forall\, v \in K, \ t \in I. \qquad (1)$$
A detailed description of Problem P, including the assumptions on the data and its unique solvability, will be provided in Section 2 below. Here, we restrict ourselves to saying that X is a Hilbert space endowed with the inner product (·,·)_X, I is a time interval, K ⊂ X, C(I; X) denotes the space of continuous functions defined on I with values in X, C(I; K) is the set of functions in C(I; X) with values in K, A : X → X, j : X × X → ℝ, f ∈ C(I; X), and S : C(I; X) → C(I; X) is a history-dependent operator.
Our main aim is to study the convergence of a sequence of continuous functions to the solution of Problem P. More precisely, we are looking for a convergence criterion for the solution of inequality (1). Such criteria have been obtained in [18,19] in the study of elliptic variational inequalities, fixed-point problems, and differential equations. Moreover, they have also been obtained in [20] in the study of stationary inclusions in Hilbert spaces. In the current paper, we continue our research from [18] by considering the case of history-dependent variational inequalities of the form (1). In addition to the mathematical interest in such inequalities, our study is motivated by possible applications in solid and contact mechanics. Indeed, a large number of mathematical models that describe the contact of a viscoelastic body with an obstacle, the so-called foundation, lead to variational formulations of the form (1), in which u represents the displacement field. References in the field mainly come in the form of books, for instance, [11,12].
The rest of the manuscript is structured as follows. In Section 2, we introduce some preliminary material. Next, in Section 3, we associate a time-dependent gap function with Problem P and construct two additional problems, Problems Q and R, respectively. Then, we prove the equivalence of these problems. In Section 4, we state and prove our main result, Theorem 3. It provides necessary and sufficient conditions which guarantee the convergence of a sequence of continuous functions to the solution of Problem P. Based on the equivalence mentioned above, in Section 5, we deduce some convergence results for Problems Q and R, respectively. Moreover, we recover Tykhonov and Levitin-Polyak-type well-posedness results for the history-dependent variational inequality (1). In Section 6, we provide an application of our abstract results in solid mechanics, and finally, in Section 7, we present some concluding remarks.

2. Preliminaries

In this section, we recall the notion of the Hausdorff-Pompeiu distance and of a history-dependent operator; then, we state an existence and uniqueness result in the study of inequality (1). Everywhere below, unless specified otherwise, we use the functional framework described in the Introduction. Moreover, we denote the norm on the Hilbert space X by ‖·‖_X, and we use m to denote a given positive integer. We note that all limits are considered as n → ∞, even if we do not mention it explicitly. We use the short notation 0 ≤ ε_n → 0 for a sequence {ε_n} ⊂ ℝ₊ that converges to zero (as n → ∞), and we write 0 ≤ ε_n^m → 0 for any sequence {ε_n^m} ⊂ ℝ₊ (with m given) which converges to zero (as n → ∞).
  • The Hausdorff-Pompeiu distance: We denote by d(u, M) the distance between an element u ∈ X and a nonempty set M ⊂ X, that is,
$$d(u, M) = \inf_{v \in M} \|u - v\|_X. \qquad (2)$$
    We recall that if M is a nonempty closed convex subset of X, then
$$d(u, M) = \|u - P_M u\|_X \quad \forall\, u \in X, \qquad (3)$$
    where P_M : X → M denotes the projection operator on M.
Next, if M and N are two nonempty subsets of X, then we use the notation H(M, N) for the Hausdorff-Pompeiu distance of the sets M, N ⊂ X, defined as follows:
$$H(M, N) = \max\{ e(M, N),\ e(N, M) \}, \qquad (4)$$
where
$$e(M, N) = \sup_{u \in M} d(u, N), \qquad e(N, M) = \sup_{v \in N} d(v, M). \qquad (5)$$
It is easy to see that if N ⊂ M, then d(v, M) = 0 for each v ∈ N and, therefore, e(N, M) = 0. We conclude from here that
$$N \subset M \ \Longrightarrow\ H(M, N) = e(M, N). \qquad (6)$$
This implication will be used in Section 6 of the manuscript.
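The following sketch is not part of the original paper; it is a minimal numerical illustration of the quantities d(u, M), e(M, N), and H(M, N), assuming finite point sets in ℝ² as stand-ins for subsets of X.

```python
# Minimal sketch (not from the paper): Hausdorff-Pompeiu distance between two
# finite point sets M, N, viewed as subsets of X = R^2.
import numpy as np

def dist_point_set(u, M):
    """d(u, M) = inf_{v in M} ||u - v||."""
    return min(np.linalg.norm(u - v) for v in M)

def excess(M, N):
    """e(M, N) = sup_{u in M} d(u, N)."""
    return max(dist_point_set(u, N) for u in M)

def hausdorff(M, N):
    """H(M, N) = max{e(M, N), e(N, M)}."""
    return max(excess(M, N), excess(N, M))

M = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
N = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]          # N is a subset of M

# Since N is contained in M, e(N, M) = 0 and H(M, N) reduces to e(M, N), as in (6).
print(excess(N, M))                      # 0.0
print(hausdorff(M, N) == excess(M, N))   # True
```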
  • History-dependent operators: Below, I will represent either an interval of time of the form [0, T] with T > 0 or the unbounded interval ℝ₊ = [0, +∞). Moreover, we denote by C(I; X) the space of continuous functions defined on I with values in X, that is,
$$C(I; X) = \{\, v : I \to X \mid v \ \text{is continuous} \,\}.$$
    On occasion, this space will be denoted by C([0, T]; X) if I = [0, T] and by C(ℝ₊; X) if I = ℝ₊. The space C([0, T]; X) will be endowed with the norm
$$\|v\|_{C([0,T];X)} = \max_{t \in [0,T]} \|v(t)\|_X$$
    and we recall that it is a Banach space. Moreover, the space C(ℝ₊; X) is a Fréchet space, as explained in [21], for instance. More precisely, the convergence of a sequence {v_n} ⊂ C(ℝ₊; X) to the element v ∈ C(ℝ₊; X) is characterized by the following equivalence:
$$v_n \to v \ \text{in} \ C(\mathbb{R}_+; X) \ \iff\ \max_{t \in [0,m]} \|v_n(t) - v(t)\|_X \to 0 \quad \text{for all} \ m \in \mathbb{N}. \qquad (8)$$
    In other words, the sequence {v_n} converges to the element v in the space C(ℝ₊; X) if and only if it converges to v in the space C([0, m]; X) for any m ∈ ℕ. The equivalence (8) will be used repeatedly in the next sections in order to prove various convergence results when working in the framework of an unbounded interval of time.
Below, we shall use the notation 0_X for the zero element of the spaces X, C(I; X), and C([0, m]; X) for any m ∈ ℕ. Finally, we shall use the shorthand notation C(I; K) for the set of functions u ∈ C(I; X) which satisfy u(t) ∈ K for all t ∈ I.
We now proceed with the following definition.
Definition 1.
An operator S : C(I; X) → C(I; X) is called history-dependent if one of the conditions (a) or (b) below is satisfied.
(a) I = [0, T] and there exists L > 0 such that
$$\|Su_1(t) - Su_2(t)\|_X \le L \int_0^t \|u_1(s) - u_2(s)\|_X \, ds \quad \text{for all} \ u_1, u_2 \in C([0,T];X), \ t \in [0,T]. \qquad (9)$$
(b) I = ℝ₊ and, for any m ∈ ℕ, there exists L_m > 0 such that
$$\|Su_1(t) - Su_2(t)\|_X \le L_m \int_0^t \|u_1(s) - u_2(s)\|_X \, ds \quad \text{for all} \ u_1, u_2 \in C(\mathbb{R}_+;X), \ t \in [0,m]. \qquad (10)$$
Note that here and below, we use the shorthand notation Su(t) to represent the value of the function Su at the point t, i.e., Su(t) = (Su)(t) for all t ∈ I. Examples of history-dependent operators will be provided in the next sections of this manuscript.
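The sketch below is not part of the paper; it is a discrete illustration, assuming X = ℝ and a hypothetical relaxation kernel b(r) = e^{-r}, of a Volterra-type operator of the kind mentioned above and of the bound (9) it satisfies with L = max |b|.

```python
# Sketch (not from the paper): a discrete Volterra operator on X = R,
#   (S u)(t) = int_0^t b(t - s) u(s) ds,
# which satisfies the history-dependence bound (9) with L = max |b|.
import numpy as np

def volterra(u, b, dt):
    """Left-rectangle discretization of (S u)(t_k) on a uniform grid."""
    out = np.zeros(len(u))
    for k in range(len(u)):
        s = np.arange(k)                      # indices 0, ..., k-1
        out[k] = np.sum(b((k - s) * dt) * u[s]) * dt
    return out

dt, T = 0.01, 1.0
t = np.arange(0.0, T + dt, dt)
b = lambda r: np.exp(-r)                      # hypothetical kernel, |b| <= 1
u1, u2 = np.sin(t), np.cos(t)

lhs = np.abs(volterra(u1, b, dt) - volterra(u2, b, dt))
rhs = np.array([np.sum(np.abs(u1[:k] - u2[:k])) * dt for k in range(len(t))])
print(np.all(lhs <= 1.0 * rhs + 1e-12))       # discrete analogue of (9), L = 1
```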
When working with history-dependent operators, we need a version of the Gronwall lemma, which will be used in many places in the rest of the manuscript. This elementary result is recalled in Lemma 1 below, where C(I) represents the space of real-valued continuous functions defined on the interval I, that is, C(I) = C(I; ℝ).
Lemma 1.
Let f, g ∈ C(I); assume that g is nondecreasing and, moreover, that there exists c > 0 such that
$$f(t) \le g(t) + c \int_0^t f(s)\, ds \quad \forall\, t \in I.$$
Then,
$$f(t) \le g(t)\, e^{ct} \quad \forall\, t \in I.$$
A proof of Lemma 1 can be found in [10], page 60; therefore, we skip it here.
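The following sketch is not part of the paper; it numerically illustrates Lemma 1 for the hypothetical choice f(t) = e^t, g ≡ 1, c = 2, for which both the hypothesis and the conclusion of the lemma hold.

```python
# Sketch (not from the paper): numerical illustration of the Gronwall lemma.
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
f, g, c = np.exp(t), np.ones_like(t), 2.0

# trapezoid approximation of int_0^t f(s) ds
integral = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dt)))
print(np.all(f <= g + c * integral + 1e-9))   # hypothesis of Lemma 1
print(np.all(f <= g * np.exp(c * t)))         # conclusion of Lemma 1
```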
  • An existence and uniqueness result: In the study of Problem P, we consider the following assumptions:
$$K \ \text{is a nonempty, closed, convex subset of} \ X. \qquad (11)$$
$$\left\{\begin{array}{l} A : X \to X \ \text{is strongly monotone and Lipschitz continuous, i.e.,}\\ \text{(a) there exists} \ m_A > 0 \ \text{such that} \ (Au - Av, u - v)_X \ge m_A \|u - v\|_X^2 \ \text{for all} \ u, v \in X,\\ \text{(b) there exists} \ M_A > 0 \ \text{such that} \ \|Au - Av\|_X \le M_A \|u - v\|_X \ \text{for all} \ u, v \in X. \end{array}\right. \qquad (12)$$
$$S \ \text{is a history-dependent operator, i.e., it satisfies either inequality (9) (if} \ I = [0,T]) \ \text{or inequality (10) (if} \ I = \mathbb{R}_+). \qquad (13)$$
$$\left\{\begin{array}{l} j : X \times X \to \mathbb{R} \ \text{is such that:}\\ \text{(a)} \ j(u,\cdot) : X \to \mathbb{R} \ \text{is convex and lower semicontinuous for all} \ u \in X,\\ \text{(b) there exists} \ \alpha_j \ge 0 \ \text{such that}\\ \qquad j(u_1, v_2) - j(u_1, v_1) + j(u_2, v_1) - j(u_2, v_2) \le \alpha_j \|u_1 - u_2\|_X \|v_1 - v_2\|_X \ \ \text{for all} \ u_1, u_2, v_1, v_2 \in X. \end{array}\right. \qquad (14)$$
$$\alpha_j < m_A. \qquad (15)$$
$$f \in C(I; X). \qquad (16)$$
The following existence and uniqueness result provides the unique solvability of the history-dependent variational inequality (1).
Theorem 1.
Assume (11)–(16). Then, inequality (1) has a unique solution u ∈ C(I; K).
Theorem 1 represents a particular case of a more general existence and uniqueness result proved in [12]. The proof is based on standard arguments for elliptic variational inequalities combined with a fixed-point property of history-dependent operators.
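The sketch below is not part of the paper; it illustrates one natural way to approximate the solution of an inequality of the form (1) in a scalar setting, assuming X = ℝ, A u = u, j ≡ 0, K = [0, 1], S the integral (Volterra) operator, and f(t) = t + 1 (the same data used in Example 1 of Section 5, whose exact solution is u ≡ 1). Because A is the identity here, the elliptic inequality at each time node reduces to a single projection; for a general strongly monotone A one would instead iterate u ← P_K(u − ρ(Au + h − f)) with a small ρ > 0.

```python
# Sketch (not from the paper): explicit time-stepping for a scalar instance of (1).
import numpy as np

def project_K(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)

T, N = 5.0, 1000
dt = T / N
t = np.linspace(0.0, T, N + 1)
u = np.zeros(N + 1)
history = 0.0                         # running approximation of int_0^{t_k} u ds
for k in range(N + 1):
    u[k] = project_K((t[k] + 1.0) - history)   # u_k = P_K(f(t_k) - history_k)
    history += u[k] * dt              # rectangle rule, explicit in the memory term

print(np.max(np.abs(u - 1.0)))        # ~0: the scheme reproduces u(t) = 1 here
```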

3. The Gap Function

The study of variational inequalities can be carried out by using a special auxiliary function, the so-called gap function. A comprehensive reference in the field is [22]. The form of the gap function depends on the variational inequality considered. In the study of Problem P, we keep assumptions (11)–(16) and consider the gap function g_K : C(I; X) × I → ℝ ∪ {+∞} defined by
$$g_K(v, t) = \sup_{w \in K} \big[ (Av(t) + Sv(t) - f(t), v(t) - w)_X + j(v(t), v(t)) - j(v(t), w) \big] \qquad (17)$$
for each v ∈ C(I; X) and t ∈ I, together with the following associated problems.
  • Problem Q. Find a function u ∈ C(I; K) such that
$$g_K(u, t) = 0 \quad \forall\, t \in I. \qquad (18)$$
  • Problem R. Find a function u ∈ C(I; K) such that
$$g_K(u, t) \le g_K(v, t) \quad \forall\, v \in C(I; K), \ t \in I. \qquad (19)$$
Before studying the solvability of Problems Q and R, we state and prove the following property of the gap function (17).
Lemma 2.
The function g_K is nonnegative, that is,
$$g_K(v, t) \ge 0 \quad \forall\, v \in C(I; K), \ t \in I. \qquad (20)$$
Proof. 
For any w ∈ X, define the function k_w : C(I; X) × I → ℝ by the equality
$$k_w(v, t) = (Av(t) + Sv(t) - f(t), v(t) - w)_X + j(v(t), v(t)) - j(v(t), w) \qquad (21)$$
for each v ∈ C(I; X) and t ∈ I. We use definitions (17) and (21) to see that
$$g_K(v, t) = \sup_{w \in K} k_w(v, t) \quad \forall\, v \in C(I; X), \ t \in I. \qquad (22)$$
Let v ∈ C(I; K) and t ∈ I. Then, v(t) ∈ K, and by using (22), we have g_K(v, t) ≥ k_{v(t)}(v, t). This inequality, combined with the equality k_{v(t)}(v, t) = 0 guaranteed by definition (21), shows that (20) holds, which concludes the proof. □
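The following sketch is not part of the paper; it evaluates the gap function (17) numerically on a grid, again for the scalar data of Example 1 in Section 5 (X = ℝ, A v = v, j ≡ 0, K = [0, 1], S the integral operator, f(t) = t + 1), and shows that g_K vanishes at the solution u ≡ 1 while it is positive at another admissible function.

```python
# Sketch (not from the paper): numerical gap function for a scalar example.
import numpy as np

K = np.linspace(0.0, 1.0, 2001)                  # grid on K = [0, 1]

def gap(v, t, num=4000):
    """g_K(v, t) = sup_{w in K} (A v(t) + (S v)(t) - f(t)) * (v(t) - w), j = 0."""
    s = np.linspace(0.0, t, num)
    vals = v(s)
    Sv = np.sum(0.5 * (vals[1:] + vals[:-1])) * (s[1] - s[0])   # trapezoid rule
    vt = float(v(t))
    residual = vt + Sv - (t + 1.0)               # A v(t) + (S v)(t) - f(t)
    return np.max(residual * (vt - K))

u = lambda s: np.ones_like(np.asarray(s, dtype=float))          # the solution u = 1
w = lambda s: 0.5 * np.ones_like(np.asarray(s, dtype=float))    # another function in K

for t in (0.5, 1.0, 2.0):
    print(round(gap(u, t), 6), round(gap(w, t), 6))   # ~0 for u, > 0 for w
```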
We now study the link between Problems P , Q , and R . We have the following result.
Theorem 2.
Let u C ( I ; K ) . Then, u is a solution to Problem P if and only if u is a solution of Problem Q . In this case, u is the solution to Problem R , too.
Proof. 
Assume that u is a solution to Problem P and fix t ∈ I. We have
$$(Au(t) + Su(t) - f(t), u(t) - w)_X + j(u(t), u(t)) - j(u(t), w) \le 0 \quad \forall\, w \in K$$
and, therefore, k_w(u, t) ≤ 0 for all w ∈ K. Then, (22) implies that g_K(u, t) ≤ 0, and since (20) guarantees that g_K(u, t) ≥ 0, we deduce that g_K(u, t) = 0. This shows that u is a solution to Problem Q.
Conversely, assume that u is a solution to Problem Q and let t ∈ I. We have g_K(u, t) = 0. Then, by using (17), we find that
$$\inf_{v \in K} \big[ (Au(t) + Su(t) - f(t), v - u(t))_X + j(u(t), v) - j(u(t), u(t)) \big] = -\sup_{v \in K} \big[ (Au(t) + Su(t) - f(t), u(t) - v)_X + j(u(t), u(t)) - j(u(t), v) \big] = -g_K(u, t) = 0.$$
This shows that for each v ∈ K and t ∈ I, inequality (1) holds, and, therefore, u is a solution to Problem P. On the other hand, if u is a solution to Problem Q, then, using (18) and (20), it follows that inequality (19) holds, which shows that u is a solution to Problem R and concludes the proof. □
The unique solvability of Problems Q and R follows from the following existence and uniqueness result.
Corollary 1.
Assume (11)–(16). Then, a unique solution to Problems Q and R exists.
Proof. 
Let u ∈ C(I; K) be the solution to Problem P obtained in Theorem 1. Then, the equivalence in Theorem 2 shows that u is the unique solution to Problem Q and, moreover, that u is a solution to Problem R.
Assume now that ũ ∈ C(I; K) is another solution to Problem R and let t ∈ I. Then, (19) implies that g_K(ũ, t) ≤ g_K(u, t), and by using (18), we deduce that g_K(ũ, t) ≤ 0. On the other hand, Lemma 2 shows that g_K(ũ, t) ≥ 0. It follows from here that g_K(ũ, t) = 0, i.e., ũ is a solution to Problem Q. The unique solvability of Problem Q now implies that ũ = u, and this shows the uniqueness of the solution to Problem R. □

4. A Convergence Criterion

In this section, we provide necessary and sufficient conditions that guarantee the convergence of a sequence of functions to the solution of Problem P in the space C(I; X). To this end, we assume that (11)–(16) hold, and we denote by u ∈ C(I; K) the solution of inequality (1) guaranteed by Theorem 1. Moreover, given a sequence {u_n} ⊂ C(I; X), we consider the following statements:
(S1) u_n → u in C(I; X).
(S2) There exists 0 ≤ ε_n → 0 such that
$$\begin{array}{l} \text{(a)} \ \ d(u_n(t), K) \le \varepsilon_n, \\[1mm] \text{(b)} \ \ (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|v - u_n(t)\|_X) \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K, \end{array}$$
for all n ∈ ℕ and t ∈ I.
(S3) There exists 0 ≤ ε_n^m → 0 such that
$$\begin{array}{l} \text{(a)} \ \ d(u_n(t), K) \le \varepsilon_n^m, \\[1mm] \text{(b)} \ \ (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n^m (1 + \|v - u_n(t)\|_X) \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K, \end{array}$$
for all n ∈ ℕ, m ∈ ℕ, and t ∈ [0, m].
Next, we consider the following additional assumption on the function j:
$$\left\{\begin{array}{l} \text{There exists a function} \ c_j : \mathbb{R}_+ \to \mathbb{R}_+ \ \text{which maps bounded sets into bounded sets, such that} \\ j(u, v) - j(u, w) \le c_j(\|u\|_X)\, \|v - w\|_X \quad \forall\, u, v, w \in X. \end{array}\right. \qquad (23)$$
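The sketch below is not part of the paper; it checks numerically, for random vectors in ℝ³, that the hypothetical choice j(u, v) = ‖u‖_X ‖v‖_X (which is convex and continuous in its second argument) satisfies condition (14)(b) with α_j = 1 and condition (23) with c_j(r) = r.

```python
# Sketch (not from the paper): numerical check of (14)(b) and (23) for
# the illustrative choice j(u, v) = ||u|| * ||v|| on X = R^3.
import numpy as np

rng = np.random.default_rng(0)
norm = np.linalg.norm
j = lambda u, v: norm(u) * norm(v)

ok_14b, ok_23 = True, True
for _ in range(10000):
    u1, u2, v1, v2 = (rng.normal(size=3) for _ in range(4))
    lhs = j(u1, v2) - j(u1, v1) + j(u2, v1) - j(u2, v2)
    ok_14b &= lhs <= norm(u1 - u2) * norm(v1 - v2) + 1e-12      # (14)(b), alpha_j = 1
    ok_23 &= j(u1, v1) - j(u1, v2) <= norm(u1) * norm(v1 - v2) + 1e-12   # (23), c_j(r) = r
print(ok_14b, ok_23)   # True True
```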
Our main result in this section is the following:
Theorem 3.
Assume (11)–(16) and (23) hold.
(a) If I = [0, T] with T > 0, then the statements (S1) and (S2) are equivalent.
(b) If I = ℝ₊, then the statements (S1) and (S3) are equivalent.
In order to prove Theorem 3, we need the following preliminary result:
Lemma 3.
Assume (11)–(16) and (23) hold.
(a) If I = [0, T] with T > 0 and (S2) holds, then there exists D > 0 such that
$$\|u_n(t)\|_X \le D \quad \forall\, n \in \mathbb{N}, \ t \in [0, T]. \qquad (24)$$
(b) If I = ℝ₊ and (S3) holds, then for each m ∈ ℕ, there exists D_m > 0 such that
$$\|u_n(t)\|_X \le D_m \quad \forall\, n \in \mathbb{N}, \ t \in [0, m]. \qquad (25)$$
Proof. (a) Assume that I = [0, T] with T > 0 and let n ∈ ℕ and t ∈ [0, T]. We use the inequality in (S2)(b) with v = u(t) ∈ K to obtain
$$(Au_n(t), u(t) - u_n(t))_X + (Su_n(t), u(t) - u_n(t))_X + j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|u(t) - u_n(t)\|_X) \ge (f(t), u(t) - u_n(t))_X,$$
which implies that
$$\begin{array}{l} (Au_n(t) - Au(t), u_n(t) - u(t))_X \le (Au(t), u(t) - u_n(t))_X + (Su_n(t) - Su(t), u(t) - u_n(t))_X + (Su(t), u(t) - u_n(t))_X \\ \qquad +\, j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|u(t) - u_n(t)\|_X) + (f(t), u_n(t) - u(t))_X. \end{array}$$
Moreover, by using (12)(a) and (13), we find that
$$\begin{array}{l} m_A \|u(t) - u_n(t)\|_X^2 \le \|Au(t)\|_X \|u(t) - u_n(t)\|_X + \Big( L \displaystyle\int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u(t) - u_n(t)\|_X + \|Su(t)\|_X \|u(t) - u_n(t)\|_X \\ \qquad +\, j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|u(t) - u_n(t)\|_X) + \|f(t)\|_X \|u_n(t) - u(t)\|_X. \qquad (26) \end{array}$$
On the other hand, when writing
$$j(u_n(t), u(t)) - j(u_n(t), u_n(t)) = \big[ j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + j(u(t), u_n(t)) - j(u(t), u(t)) \big] + \big[ j(u(t), u(t)) - j(u(t), u_n(t)) \big]$$
and using assumptions (14)(b) and (23), we deduce that
$$j(u_n(t), u(t)) - j(u_n(t), u_n(t)) \le \alpha_j \|u_n(t) - u(t)\|_X^2 + c_j(\|u(t)\|_X) \|u_n(t) - u(t)\|_X. \qquad (27)$$
Let C be defined by
$$C = \max \Big\{ \max_{t \in [0,T]} \|Au(t)\|_X, \ \max_{t \in [0,T]} \|Su(t)\|_X, \ \max_{t \in [0,T]} c_j(\|u(t)\|_X), \ \max_{t \in [0,T]} \|f(t)\|_X \Big\}. \qquad (28)$$
Then, by combining inequalities (26) and (27) and using (28), we obtain that
$$(m_A - \alpha_j) \|u(t) - u_n(t)\|_X^2 \le \Big( 4C + \varepsilon_n + L \int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u(t) - u_n(t)\|_X + \varepsilon_n.$$
This inequality and assumption (15) imply that
$$\|u(t) - u_n(t)\|_X^2 \le \Big( C_1 + C_2 \varepsilon_n + C_3 \int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u(t) - u_n(t)\|_X + C_4 \varepsilon_n,$$
where here and below, C_i represents a positive constant that does not depend on n and t. We now use the elementary inequality
$$x^2 \le a x + b \ \Longrightarrow\ x \le a + \sqrt{b} \quad \forall\, x, a, b \ge 0 \qquad (29)$$
(indeed, x² ≤ ax + b implies (x − a/2)² ≤ b + a²/4, hence x ≤ a/2 + √(b + a²/4) ≤ a + √b) to deduce that
$$\|u(t) - u_n(t)\|_X \le C_1 + C_2 \varepsilon_n + C_3 \int_0^t \|u_n(s) - u(s)\|_X \, ds + C_4 \sqrt{\varepsilon_n},$$
and after employing the Gronwall argument in Lemma 1, it follows that
$$\|u(t) - u_n(t)\|_X \le \big( C_1 + C_2 \varepsilon_n + C_4 \sqrt{\varepsilon_n} \big)\, e^{C_3 t}.$$
By using this inequality and the convergence ε_n → 0, we deduce that there exists C_5 > 0 such that ‖u(t) − u_n(t)‖_X ≤ C_5, and this implies the bound (24) with a constant D which depends on T but does not depend on n and t.
(b) Assume now that I = ℝ₊ and let n ∈ ℕ, m ∈ ℕ, and t ∈ [0, m]. We use assumption (S3) to see that condition (S2) holds with I = [0, m] and ε_n = ε_n^m. Therefore, the bound (24) holds with T = m, and since the corresponding constant D depends on m, we denote it in what follows by D_m to obtain (25). □
We now turn to the proof of Theorem 3.
Proof of Theorem 3.
(a) We start with the case I = [0, T]. Let v ∈ K, n ∈ ℕ, and t ∈ [0, T]. First, since u(t) ∈ K, it follows that
$$d(u_n(t), K) \le \|u_n(t) - u(t)\|_X. \qquad (30)$$
Next, we write
$$\begin{array}{l} (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) - (f(t), v - u_n(t))_X \\ \quad = (Au_n(t) - Au(t), v - u_n(t))_X + (Au(t), v - u(t))_X + (Au(t), u(t) - u_n(t))_X \\ \qquad +\, (Su_n(t) - Su(t), v - u_n(t))_X + (Su(t), v - u(t))_X + (Su(t), u(t) - u_n(t))_X \\ \qquad +\, j(u_n(t), v) - j(u_n(t), u_n(t)) + j(u(t), u(t)) - j(u(t), v) + j(u(t), v) - j(u(t), u(t)) \\ \qquad -\, (f(t), v - u(t))_X + (f(t), u_n(t) - u(t))_X, \end{array}$$
and by using (1), we deduce that
$$\begin{array}{l} (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) - (f(t), v - u_n(t))_X \\ \quad \ge (Au_n(t) - Au(t), v - u_n(t))_X + (Au(t), u(t) - u_n(t))_X + (Su_n(t) - Su(t), v - u_n(t))_X + (Su(t), u(t) - u_n(t))_X \\ \qquad +\, j(u_n(t), v) - j(u_n(t), u_n(t)) + j(u(t), u(t)) - j(u(t), v) + (f(t), u_n(t) - u(t))_X. \end{array}$$
Therefore,
$$\begin{array}{l} (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) \\ \quad +\, j(u_n(t), u_n(t)) - j(u_n(t), v) + j(u(t), v) - j(u(t), u(t)) \\ \quad \ge (Au_n(t) - Au(t), v - u_n(t))_X + (Au(t), u(t) - u_n(t))_X + (Su_n(t) - Su(t), v - u_n(t))_X \\ \qquad +\, (Su(t), u(t) - u_n(t))_X + (f(t), u_n(t) - u(t))_X + (f(t), v - u_n(t))_X. \qquad (31) \end{array}$$
We now use assumptions (12)(b) and (13) and standard arguments to see that
$$\begin{array}{l} (Au_n(t) - Au(t), v - u_n(t))_X \ge -M_A \|u_n(t) - u(t)\|_X \|v - u_n(t)\|_X, \\ (Au(t), u(t) - u_n(t))_X \ge -\|Au(t)\|_X \|u_n(t) - u(t)\|_X, \\ (Su_n(t) - Su(t), v - u_n(t))_X \ge -\Big( L \displaystyle\int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u_n(t) - v\|_X, \\ (Su(t), u(t) - u_n(t))_X \ge -\|Su(t)\|_X \|u_n(t) - u(t)\|_X, \\ (f(t), u_n(t) - u(t))_X \ge -\|f(t)\|_X \|u_n(t) - u(t)\|_X. \end{array}$$
Then, by substituting the previous inequalities in (31), we find that
$$\begin{array}{l} (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) \\ \quad +\, j(u_n(t), u_n(t)) - j(u_n(t), v) + j(u(t), v) - j(u(t), u(t)) \\ \quad +\, M_A \|u_n(t) - u(t)\|_X \|v - u_n(t)\|_X + \|Au(t)\|_X \|u_n(t) - u(t)\|_X + \Big( L \displaystyle\int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u_n(t) - v\|_X \\ \quad +\, \|Su(t)\|_X \|u_n(t) - u(t)\|_X + \|f(t)\|_X \|u_n(t) - u(t)\|_X \ \ge\ (f(t), v - u_n(t))_X. \qquad (32) \end{array}$$
On the other hand, when writing
$$j(u_n(t), u_n(t)) - j(u_n(t), v) + j(u(t), v) - j(u(t), u(t)) = \big[ j(u_n(t), u_n(t)) - j(u_n(t), v) + j(u(t), v) - j(u(t), u_n(t)) \big] + \big[ j(u(t), u_n(t)) - j(u(t), u(t)) \big]$$
and using assumptions (14)(b) and (23), we deduce that
$$j(u_n(t), u_n(t)) - j(u_n(t), v) + j(u(t), v) - j(u(t), u(t)) \le \alpha_j \|u_n(t) - u(t)\|_X \|v - u_n(t)\|_X + c_j(\|u(t)\|_X) \|u_n(t) - u(t)\|_X. \qquad (33)$$
When combining now inequalities (32) and (33), we find that
$$\begin{array}{l} (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) \\ \quad +\, \alpha_j \|u_n(t) - u(t)\|_X \|v - u_n(t)\|_X + c_j(\|u(t)\|_X) \|u_n(t) - u(t)\|_X \\ \quad +\, M_A \|u_n(t) - u(t)\|_X \|v - u_n(t)\|_X + \|Au(t)\|_X \|u_n(t) - u(t)\|_X + \Big( L \displaystyle\int_0^t \|u_n(s) - u(s)\|_X \, ds \Big) \|u_n(t) - v\|_X \\ \quad +\, \|Su(t)\|_X \|u_n(t) - u(t)\|_X + \|f(t)\|_X \|u_n(t) - u(t)\|_X \ \ge\ (f(t), v - u_n(t))_X. \end{array}$$
Therefore, by using the notation
$$\varepsilon_n = \max \Big\{ (\alpha_j + M_A + LT + 1) \max_{t \in [0,T]} \|u_n(t) - u(t)\|_X, \ \ \max_{t \in [0,T]} \big[ \big( c_j(\|u(t)\|_X) + \|Au(t)\|_X + \|Su(t)\|_X + \|f(t)\|_X \big) \|u_n(t) - u(t)\|_X \big] \Big\}, \qquad (34)$$
we see that
$$(Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|v - u_n(t)\|_X) \ge (f(t), v - u_n(t))_X. \qquad (35)$$
On the other hand, (30), (34), and assumption (S1) imply that
$$d(u_n(t), K) \le \varepsilon_n, \qquad (36)$$
$$\varepsilon_n \to 0. \qquad (37)$$
We now combine (35)–(37) to see that condition (S2) is satisfied.
Conversely, assume now that (S2) holds. We define the functions v_n : I → X and w_n : I → X by the equalities v_n(t) = P_K u_n(t) and w_n(t) = u_n(t) − P_K u_n(t) for all n ∈ ℕ and t ∈ I, where we recall that P_K represents the projection operator on K. Then, it is easy to see that v_n, w_n ∈ C(I; X),
$$u_n(t) = v_n(t) + w_n(t), \qquad v_n(t) \in K \quad \forall\, n \in \mathbb{N}, \ t \in [0, T], \qquad (38)$$
and, since ‖w_n(t)‖_X = d(u_n(t), K), condition (S2)(a) implies that
$$\|w_n(t)\|_X \le \varepsilon_n \quad \forall\, n \in \mathbb{N}, \ t \in [0, T]. \qquad (39)$$
We fix n ∈ ℕ and t ∈ [0, T] and use condition (S2)(b) with v = u(t) ∈ K to see that
$$(Au_n(t), u(t) - u_n(t))_X + (Su_n(t), u(t) - u_n(t))_X + j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + \varepsilon_n (1 + \|u(t) - u_n(t)\|_X) \ge (f(t), u(t) - u_n(t))_X. \qquad (40)$$
On the other hand, we use the regularity v_n(t) ∈ K in (38) and test with v = v_n(t) in (1) to find that
$$(Au(t), v_n(t) - u(t))_X + (Su(t), v_n(t) - u(t))_X + j(u(t), v_n(t)) - j(u(t), u(t)) \ge (f(t), v_n(t) - u(t))_X. \qquad (41)$$
We now add inequalities (40) and (41) to obtain that
$$\begin{array}{l} (Au_n(t), u(t) - u_n(t))_X + (Au(t), v_n(t) - u(t))_X + (Su_n(t), u(t) - u_n(t))_X + (Su(t), v_n(t) - u(t))_X \\ \quad +\, j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + j(u(t), v_n(t)) - j(u(t), u(t)) + \varepsilon_n (1 + \|u(t) - u_n(t)\|_X) \ \ge\ (f(t), v_n(t) - u_n(t))_X. \qquad (42) \end{array}$$
Next, we use the equality u_n(t) = v_n(t) + w_n(t) to see that
$$(Au_n(t), u(t) - u_n(t))_X + (Au(t), v_n(t) - u(t))_X = (Au(t) - Av_n(t), v_n(t) - u(t))_X + (Au_n(t) - Av_n(t), u(t) - v_n(t))_X - (Au_n(t), w_n(t))_X, \qquad (43)$$
$$(Su_n(t), u(t) - u_n(t))_X + (Su(t), v_n(t) - u(t))_X = (Su(t) - Sv_n(t), v_n(t) - u(t))_X + (Su_n(t) - Sv_n(t), u(t) - v_n(t))_X - (Su_n(t), w_n(t))_X. \qquad (44)$$
Now, when writing
$$j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + j(u(t), v_n(t)) - j(u(t), u(t)) = \big[ j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + j(u(t), u_n(t)) - j(u(t), u(t)) \big] + \big[ j(u(t), v_n(t)) - j(u(t), u_n(t)) \big]$$
and using assumptions (14)(b) and (23), we deduce that
$$j(u_n(t), u(t)) - j(u_n(t), u_n(t)) + j(u(t), v_n(t)) - j(u(t), u(t)) \le \alpha_j \|u_n(t) - u(t)\|_X^2 + c_j(\|u(t)\|_X) \|v_n(t) - u_n(t)\|_X. \qquad (45)$$
Therefore, when combining relations (42)–(45) and using the equality u_n(t) = v_n(t) + w_n(t) again, we find that
$$\begin{array}{l} (Au(t) - Av_n(t), v_n(t) - u(t))_X + (Au_n(t) - Av_n(t), u(t) - v_n(t))_X - (Au_n(t), w_n(t))_X \\ \quad +\, (Su(t) - Sv_n(t), v_n(t) - u(t))_X + (Su_n(t) - Sv_n(t), u(t) - v_n(t))_X - (Su_n(t), w_n(t))_X \\ \quad +\, \alpha_j \|u_n(t) - u(t)\|_X^2 + c_j(\|u(t)\|_X) \|w_n(t)\|_X + \varepsilon_n (1 + \|u(t) - v_n(t) - w_n(t)\|_X) + (f(t), w_n(t))_X \ \ge\ 0. \end{array}$$
Hence, when using the assumptions (12)(a) and (13) on the operators A and S, as well as the equality u_n(t) = v_n(t) + w_n(t), we deduce that
$$\begin{array}{l} m_A \|u(t) - v_n(t)\|_X^2 \le M_A \|w_n(t)\|_X \|u(t) - v_n(t)\|_X + \|Au_n(t)\|_X \|w_n(t)\|_X \\ \quad +\, \Big( L \displaystyle\int_0^t \|u(s) - v_n(s)\|_X \, ds \Big) \|u(t) - v_n(t)\|_X + \Big( L \displaystyle\int_0^t \|w_n(s)\|_X \, ds \Big) \|u(t) - v_n(t)\|_X + \|Su_n(t)\|_X \|w_n(t)\|_X \\ \quad +\, \alpha_j \|u_n(t) - u(t)\|_X^2 + c_j(\|u(t)\|_X) \|w_n(t)\|_X + \varepsilon_n + \varepsilon_n \|u(t) - v_n(t)\|_X + \varepsilon_n \|w_n(t)\|_X + \|f(t)\|_X \|w_n(t)\|_X. \end{array}$$
Therefore, when using inequality (39), we find that
$$\begin{array}{l} m_A \|u(t) - v_n(t)\|_X^2 \le M_A \varepsilon_n \|u(t) - v_n(t)\|_X + \varepsilon_n \|Au_n(t)\|_X + \Big( L \displaystyle\int_0^t \|u(s) - v_n(s)\|_X \, ds \Big) \|u(t) - v_n(t)\|_X \\ \quad +\, LT \varepsilon_n \|u(t) - v_n(t)\|_X + \varepsilon_n \|Su_n(t)\|_X + \alpha_j \|u_n(t) - u(t)\|_X^2 + \varepsilon_n c_j(\|u(t)\|_X) + \varepsilon_n + \varepsilon_n \|u(t) - v_n(t)\|_X + \varepsilon_n^2 + \varepsilon_n \|f(t)\|_X. \qquad (46) \end{array}$$
Next, the bound (24) in Lemma 3 and the properties of the operators A and S guarantee that there exist constants D_1 and D_2 such that
$$\|Au_n(t)\|_X \le D_1, \qquad \|Su_n(t)\|_X \le D_2. \qquad (47)$$
In addition, the regularities of the functions c_j, f, and u allow us to find constants D_3 and D_4 such that
$$c_j(\|u(t)\|_X) \le D_3, \qquad \|f(t)\|_X \le D_4. \qquad (48)$$
Note that here and below in this section, D_i denotes a positive constant which does not depend on n and t.
On the other hand, when writing u_n(t) = v_n(t) + w_n(t), we deduce that
$$\|u_n(t) - u(t)\|_X^2 = \|(v_n(t) - u(t)) + w_n(t)\|_X^2 \le \big( \|v_n(t) - u(t)\|_X + \|w_n(t)\|_X \big)^2 = \|v_n(t) - u(t)\|_X^2 + 2 \|v_n(t) - u(t)\|_X \|w_n(t)\|_X + \|w_n(t)\|_X^2,$$
and when using inequality (39), we find that
$$\alpha_j \|u_n(t) - u(t)\|_X^2 \le \alpha_j \|v_n(t) - u(t)\|_X^2 + 2 \alpha_j \varepsilon_n \|v_n(t) - u(t)\|_X + \alpha_j \varepsilon_n^2. \qquad (49)$$
We now combine inequalities (46)–(49) to find that
$$(m_A - \alpha_j) \|u(t) - v_n(t)\|_X^2 \le \Big( D_5 \varepsilon_n + D_6 \int_0^t \|u(s) - v_n(s)\|_X \, ds \Big) \|u(t) - v_n(t)\|_X + \big( D_7 \varepsilon_n + D_8 \varepsilon_n^2 \big).$$
Then, we use the smallness assumption (15) and inequality (29) to see that
$$\|u(t) - v_n(t)\|_X \le D_9 \varepsilon_n + D_{10} \int_0^t \|u(s) - v_n(s)\|_X \, ds + \sqrt{D_{11} \varepsilon_n + D_{12} \varepsilon_n^2},$$
and after the use of the Gronwall argument, we obtain
$$\|u(t) - v_n(t)\|_X \le \Big( D_9 \varepsilon_n + \sqrt{D_{11} \varepsilon_n + D_{12} \varepsilon_n^2} \Big)\, e^{D_{10} t}.$$
Next, we use the convergence ε_n → 0 to find that
$$\max_{t \in [0,T]} \|u(t) - v_n(t)\|_X \to 0.$$
This implies that v_n → u in C([0, T]; X), and when using (38) and (39), we deduce that u_n → u in C([0, T]; X), which concludes the proof of this point.
(b) Assume now that I = ℝ₊ and let n ∈ ℕ, m ∈ ℕ, and t ∈ [0, m]. Then, it is easy to see that condition (S3) holds if and only if condition (S2) holds with I = [0, m] and ε_n = ε_n^m. Therefore, when using the first part of the theorem, we deduce that u_n → u in C([0, m]; X) for any m ∈ ℕ if and only if condition (S3) holds. This implies, by (8), that u_n → u in C(ℝ₊; X) if and only if (S3) holds, which concludes the proof. □
We end this section with the remark that Theorem 3 provides necessary and sufficient conditions that describe the convergence of a sequence {u_n} ⊂ C(I; X) to the solution u of Problem P. It follows from here that this theorem represents a convergence criterion. Note that this criterion was obtained under the additional assumption (23), which was not needed for the existence and uniqueness result in Theorem 1. Removing or relaxing this assumption is an interesting problem that deserves to be investigated in the future.

5. Some Consequences

In this section, we state and prove some consequences of our main result, Theorem 3. Everywhere below, we assume that (11)–(16) and (23) hold, even if we do not mention it explicitly; recall that we use the shorthand notation C(I) = C(I; ℝ₊). The section is structured in several parts, as follows.
  • A convergence result: Given a sequence {u_n} ⊂ C(I; X), we consider the following statement.
    (S4) (a) d(u_n(t), K) → 0 in C(I);
    (b) there exists a sequence {ε_n} ⊂ C(I; ℝ₊) such that ε_n → 0 in C(I) and
$$(Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n(t) (1 + \|v - u_n(t)\|_X) \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K,$$
    for all n ∈ ℕ and t ∈ I.
    Then, a first consequence of Theorem 3 is the following:
Corollary 2.
Assume (11)–(16) and (23), and let {u_n} ⊂ C(I; X).
(a) If I = [0, T] with T > 0, then the statements (S1) and (S4) are equivalent.
(b) If I = ℝ₊, then the statement (S4) implies the statement (S1).
Proof. (a) Let I = [0, T] with T > 0. First, assume that the statement (S1) holds. Then, Theorem 3(a) guarantees that condition (S2) is satisfied, which implies that (S4) holds with {ε_n} ⊂ C(I) given by ε_n(t) = ε_n for all n ∈ ℕ and t ∈ I. Conversely, if (S4) holds, it is easy to see that the statement (S2) holds with the numerical sequence {ε_n} given by
$$\varepsilon_n = \max \Big\{ \max_{t \in [0,T]} d(u_n(t), K), \ \max_{t \in [0,T]} \varepsilon_n(t) \Big\} \quad \forall\, n \in \mathbb{N}.$$
Then, again, Theorem 3(a) implies that (S1) holds. We conclude from the above that the statements (S1) and (S4) are equivalent.
(b) Assume I = ℝ₊ and (S4), and let m ∈ ℕ. Then, it is easy to see that the statement (S4) holds for any t ∈ [0, m], and when using point (a) of the corollary, we deduce that u_n → u in C([0, m]; X). Then, since m is arbitrarily chosen, we deduce that u_n → u in C(I; X), which concludes the proof. □
Note that Corollary 2 provides necessary and sufficient conditions for the convergence to the solution of Problem P (if I = [0, T]) and, in addition, sufficient conditions for this convergence (if I = ℝ₊).
  • Continuous dependence results: The solution to inequality (1) depends on the set K and the function f; therefore, in what follows, we denote it by u = u(K, f). Below, we provide a continuous dependence result for this solution with respect to these data. To this end, we consider two sequences, {K_n} and {f_n}, such that, for each n ∈ ℕ, the conditions below are satisfied:
$$K_n \ \text{is a nonempty, closed, convex subset of} \ X, \qquad (51)$$
$$f_n \in C(I; X). \qquad (52)$$
    Moreover, we consider the following problem:
  • Problem P_n. Find a function u_n ∈ C(I; K_n) such that
$$(Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) \ge (f_n(t), v - u_n(t))_X \quad \forall\, v \in K_n, \ t \in I.$$
    Then, by using Theorem 1, we deduce that for each n ∈ ℕ, Problem P_n has a unique solution u_n = u(K_n, f_n). When using the notations in (4) and (5), we consider the following additional assumptions:
$$K \subset K_n \quad \forall\, n \in \mathbb{N}, \qquad (54)$$
$$H(K_n, K) \to 0 \quad \text{as} \ n \to \infty, \qquad (55)$$
$$f_n \to f \ \text{in} \ C(I; X) \quad \text{as} \ n \to \infty. \qquad (56)$$
We have the following convergence result, which provides a continuous dependence for the solution u with respect to the pair ( K , f ) .
Corollary 3.
Assume (11)–(16), (51), (52), and (54)–(56). Then,
$$u_n \to u \ \text{in} \ C(I; X). \qquad (57)$$
Proof. 
Let n ∈ ℕ and t ∈ I. We use the inclusion u_n(t) ∈ K_n and the notations in (4) and (5) to see that
$$d(u_n(t), K) \le \sup_{w \in K_n} d(w, K) = e(K_n, K) \le H(K_n, K)$$
and, therefore, assumption (55) guarantees that
$$d(u_n(t), K) \to 0 \ \text{in} \ C(I). \qquad (58)$$
On the other hand, the inclusion (54) implies that
$$(Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + (f(t) - f_n(t), v - u_n(t))_X \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K, \ t \in I, \qquad (59)$$
which shows that the inequality in (S4)(b) holds with the sequence {ε_n} ⊂ C(I) given by ε_n(t) = ‖f(t) − f_n(t)‖_X for all n ∈ ℕ and t ∈ I. By now recalling (56) and (58), it follows that (S4) holds. Corollary 3 is now a direct consequence of Corollary 2. □
  • Classical well-posedness results: The concept of Tykhonov well-posedness was introduced in [23] for minimization problems and was extended by Levitin and Polyak in [24]. The Tykhonov and Levitin-Polyak well-posedness concepts have been generalized to various optimization problems, as shown in [13,14,15,16,17]. Well-posedness concepts for elliptic variational inequalities were introduced for the first time in [25,26]. References in the field are [27,28]. The well-posedness of a so-called generalized vector variational inequality was discussed in the recent paper [29], within the framework of topological vector spaces. There, necessary and sufficient conditions for such an inequality to be well-posed in a generalized sense are provided, in terms of the upper semicontinuity of the approximate solution set map. Below, we introduce Tykhonov and Levitin-Polyak-type well-posedness concepts for the history-dependent variational inequality (1).
Definition 2.
A sequence {u_n} ⊂ C(I; X) is called an approximating sequence for the history-dependent variational inequality (1) if there exists 0 ≤ ε_n → 0 such that
$$(Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n \|v - u_n(t)\|_X \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K, \ n \in \mathbb{N}, \ t \in I. \qquad (60)$$
Problem P is well-posed in the sense of Tykhonov if it has a unique solution u and every approximating sequence in C(I; X) converges to u.
Definition 3.
A sequence {u_n} ⊂ C(I; X) is called an LP-approximating sequence for the history-dependent variational inequality (1) if there exist two sequences, {w_n} ⊂ C(I; X) and {ε_n} ⊂ ℝ₊, such that w_n → 0_X in C(I; X), 0 ≤ ε_n → 0, and
$$\begin{array}{l} u_n(t) + w_n(t) \in K, \\[1mm] (Au_n(t), v - u_n(t))_X + (Su_n(t), v - u_n(t))_X + j(u_n(t), v) - j(u_n(t), u_n(t)) + \varepsilon_n \|v - u_n(t)\|_X \ge (f(t), v - u_n(t))_X \quad \forall\, v \in K, \ n \in \mathbb{N}, \ t \in I. \end{array} \qquad (61)$$
Problem P is well-posed in the sense of Levitin-Polyak if it has a unique solution u and every LP-approximating sequence in C(I; X) converges to u.
It is easy to see that any approximating sequence is an LP-approximating sequence. Therefore, if Problem P is well-posed in the sense of Levitin-Polyak, then it is well-posed in the sense of Tykhonov, too. Elementary examples can be constructed to see that the converse of this statement is not true. We conclude from here that the Levitin-Polyak concept of well-posedness above represents an extension of the concept of Tykhonov well-posedness.
We now state and prove the following result:
Corollary 4.
Assume (11)–(16) and (23). Then, Problem P is Levitin-Polyak and Tykhonov well-posed.
Proof. 
Let {u_n} ⊂ C(I; X) be an LP-approximating sequence. Then, when using Definition 3, it follows that d(u_n(t), K) ≤ ‖w_n(t)‖_X for all n ∈ ℕ and t ∈ I, and since w_n → 0_X in C(I; X), we deduce the convergence (58), which means that condition (a) in statement (S4) is satisfied. We now use inequality (61) to see that condition (b) in statement (S4) is satisfied, too, with the sequence {ε_n} ⊂ C(I) given by ε_n(t) = ε_n for all n ∈ ℕ and t ∈ I. We are now in a position to use Corollary 2(b) in order to deduce that u_n → u in C(I; X). This shows that Problem P is Levitin-Polyak well-posed and, therefore, Tykhonov well-posed, too. □
The example below shows that there exist sequences, { u n } C ( I ; X ) , that satisfy the statement ( S 4 ) but that are not L P -approximating sequences. It follows from here that the convergence result in Corollary 2(b) is stronger than the well-posedness result provided by Corollary 4.
Example 1.
Consider the history-dependent variational inequality (1) in the particular case where I = ℝ₊, X = ℝ, K = [0, 1], Au = u for all u ∈ ℝ, j ≡ 0, f(t) = t + 1 for all t ∈ I, and
$$Su(t) = \int_0^t u(s)\, ds \quad \forall\, u \in C(I), \ t \in I.$$
Then, problem (1) consists of finding a function u ∈ C(I) such that
$$u(t) \in [0, 1], \qquad \Big( u(t) + \int_0^t u(s)\, ds - t - 1 \Big) (v - u(t)) \ge 0 \quad \forall\, v \in [0, 1], \ t \in I. \qquad (62)$$
The solution to this inequality is the constant function u(t) = 1 for each t ∈ I. Now, consider the sequence {u_n} ⊂ C(I; X) defined by u_n(t) = 1 + 1/n for each n ∈ ℕ and t ∈ I. Then, it is easy to check that the statement (S4) is satisfied with the sequence {ε_n} given by
$$\varepsilon_n(t) = \frac{t}{n} + \frac{1}{n} \quad \forall\, n \in \mathbb{N}, \ t \in I.$$
Nevertheless, we claim that {u_n} is not an LP-approximating sequence for inequality (62). Indeed, arguing by contradiction, assume that a sequence 0 ≤ ε_n → 0 exists such that
$$\Big( u_n(t) + \int_0^t u_n(s)\, ds - t - 1 \Big) (v - u_n(t)) + \varepsilon_n |v - u_n(t)| \ge 0 \quad \forall\, v \in [0, 1], \ n \in \mathbb{N}, \ t \in I.$$
Then, taking v = 0 in this inequality, it follows that
$$\varepsilon_n \ge \frac{1}{n} + \frac{t}{n} \quad \forall\, n \in \mathbb{N}, \ t \in I.$$
We now take t = n in this inequality to find that ε_n > 1 for each n ∈ ℕ, which contradicts the convergence ε_n → 0.
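The sketch below is not part of the paper; it checks the two claims of Example 1 numerically on grids of t and v: the sequence u_n(t) = 1 + 1/n satisfies the perturbed inequality of (S4)(b) with ε_n(t) = t/n + 1/n, while a constant perturbation of LP type fails for large t.

```python
# Sketch (not from the paper): numerical check of Example 1 on a grid.
# u_n(t) = 1 + 1/n, K = [0, 1], A u = u, j = 0, f(t) = t + 1, S u(t) = int_0^t u ds.
import numpy as np

V = np.linspace(0.0, 1.0, 201)                    # grid on K = [0, 1]

def min_residual(n, t, eps, lp=False):
    """min over v in K of (u_n + S u_n - f)(v - u_n) + perturbation."""
    un = 1.0 + 1.0 / n
    pert = eps * np.abs(V - un) if lp else eps * (1.0 + np.abs(V - un))
    return np.min((un + un * t - t - 1.0) * (V - un) + pert)

# (S4)(b) holds with eps_n(t) = t/n + 1/n: the minimal residual stays >= 0.
print(all(min_residual(n, t, t / n + 1.0 / n) >= -1e-12
          for n in (1, 10, 100) for t in (0.0, 1.0, 10.0, 100.0)))

# The LP perturbation eps_n * |v - u_n| with a constant eps_n = 1 fails at t = n:
n = 50
print(min_residual(n, float(n), 1.0, lp=True))    # negative value
```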
  • Convergence results for Problems Q and R : Theorem 2 and Corollary 2 allow us to provide conditions which guarantee the convergence to the solution to Problems Q and R . Our first result in this matter is the following:
Corollary 5.
Assume (11)–(16) and (23), and let {u_n} ⊂ C(I; X). Assume, also, that d(u_n(t), K) → 0 in C(I) and that a sequence {ε_n} ⊂ C(I; ℝ₊) exists such that ε_n → 0 in C(I) and, moreover,
$$g_K(u_n, t) \le \varepsilon_n(t) \big( d(u_n(t), K) + 1 \big) \quad \forall\, n \in \mathbb{N}, \ t \in I. \qquad (63)$$
Then, the sequence {u_n} converges in C(I; X) to the solution of Problems Q and R.
Proof. 
Let n ∈ ℕ, t ∈ I, and v ∈ K. We use the function (21), equality (22), and assumption (63) to write
$$k_v(u_n, t) \le \sup_{w \in K} k_w(u_n, t) = g_K(u_n, t) \le \varepsilon_n(t) \big( d(u_n(t), K) + 1 \big).$$
Then, since d(u_n(t), K) ≤ ‖u_n(t) − v‖_X, we find that
$$k_v(u_n, t) \le \varepsilon_n(t) \big( \|u_n(t) - v\|_X + 1 \big). \qquad (64)$$
We now combine the definition (21) and the inequality (64) to see that the inequality in statement (S4)(b) holds. Recall that, by assumption, condition (S4)(a) is satisfied, too. Thus, we are in a position to use Corollary 2 to deduce the convergence u_n → u in C(I; X), where we recall that u is the solution to Problem P. We now use Theorem 2 to recall that u is also the solution to Problems Q and R, which concludes the proof. □
We remark that, in contrast to Theorem 3, which provides necessary and sufficient conditions for the convergence to the solution of Problems P, Q, and R, Corollary 5 provides only sufficient conditions for the convergence to the solution of these problems. The question of whether these conditions are also necessary for this convergence is left open.
Next, we consider two sequences, {K_n} and {f_n}, such that, for each n ∈ ℕ, the conditions (51) and (52) are satisfied. Moreover, we consider the following problem:
  • Problem Q_n. Find a function u_n ∈ C(I; K_n) such that
$$g_{K_n}(u_n, t) = 0 \quad \forall\, t \in I. \qquad (65)$$
Then, by using Corollary 1, we deduce that for each n ∈ ℕ, Problem Q_n has a unique solution u_n = u(K_n, f_n). Moreover, Theorem 2 guarantees that u_n is the solution to Problem P_n, too. The following convergence result provides the continuous dependence of the solution to Problem Q with respect to the pair (K, f).
Corollary 6.
Assume (11)–(16), (51), (52), and (54)–(56). Then, the convergence in (57) holds.
Proof. 
We shall provide two different proofs for this corollary.
For the first proof, we recall that u n is the solution to Problem P n , and u is the solution to problem P , too. Then, assumptions (54)–(56) allow us to use Corollary 3, and, in this way, we deduce the convergence in (57).
For the second proof, we use assumption (55) to see that the convergence (58) holds. Moreover, assumption (54), definition (17), and equality (65) show that
$$g_K(u_n, t) \le g_{K_n}(u_n, t) = 0 \quad \forall\, n \in \mathbb{N}, \ t \in I. \qquad (66)$$
This shows that condition (63) holds with ε_n(t) = 0. The convergence (57) is now a consequence of Corollary 5. □

6. A Viscoelastic Constitutive Law

Examples of history-dependent variational inequalities of the form (1) arise in solid and contact mechanics. There, the operator A is related to the elasticity properties of the material, the operator S describes its memory properties, and the function j models the frictionless and/or frictional contact conditions. The time-dependent function f is determined by the applied forces, and the set K is related to the unilateral constraints, which could arise either in the constitutive law or in the contact conditions. References in the field include the books [11,12], for instance. In this section, we present an example of such history-dependent variational inequalities that arises in solid mechanics. In our example, the function j vanishes since, for simplicity, we do not deal with contact models. Nevertheless, we mention that various examples in which the function j does not vanish can be constructed; for details, we refer the reader to the references mentioned above in this paragraph.
In order to introduce the problem, we denote by 𝕊^d the space of second-order symmetric tensors on ℝ^d (d = 1, 2, 3). The space 𝕊^d will be equipped with the inner product and the Euclidean norm given by
$$(\sigma, \tau) = \sigma_{ij} \tau_{ij}, \qquad \|\tau\| = (\tau, \tau)^{1/2} \qquad \forall\, \sigma = (\sigma_{ij}),\ \tau = (\tau_{ij}) \in \mathbb{S}^d,$$
respectively. Here and below in this section, the indices i, j, k, and l run between 1 and d and, unless stated otherwise, the summation convention over repeated indices is used. We use the notation tr σ and σ^D for the trace and the deviatoric part of a tensor σ = (σ_ij) ∈ 𝕊^d, defined by
$$\operatorname{tr} \sigma = \sigma_{ii}, \qquad \sigma^D = \sigma - \frac{1}{d} (\operatorname{tr} \sigma)\, I_d,$$
with I_d being the unit tensor of 𝕊^d. The time interval of interest will be denoted by I and, as usual, I is either of the form [0, T] with T > 0 or ℝ₊. Then, the problem we consider in this section is the following:
  • Problem M. Find a function σ ∈ C(I; 𝕊^d) such that
$$\varepsilon(t) \in A \sigma(t) + \int_0^t B(t - s)\, \sigma(s)\, ds + \partial \psi_K\big(\sigma(t)\big) \quad \forall\, t \in I. \qquad (68)$$
In the study of this problem, we assume the following:
$$A = (A_{ijkl}) : \mathbb{S}^d \to \mathbb{S}^d \ \text{is a positively defined symmetric fourth-order tensor.} \qquad (69)$$
$$B = (B_{ijkl}) : I \times \mathbb{S}^d \to \mathbb{S}^d \ \text{is a time-dependent symmetric fourth-order tensor such that} \ B_{ijkl} \in C(I) \ \text{for all} \ i, j, k, l. \qquad (70)$$
$$K = \big\{ \tau \in \mathbb{S}^d \ : \ |\operatorname{tr} \tau| \le k, \ \|\tau^D\| \le g \big\} \quad \text{with} \ k, g > 0. \qquad (71)$$
$$\varepsilon \in C(I; \mathbb{S}^d). \qquad (72)$$
Note that the inclusion (68) represents a nonlinear viscoelastic constitutive law with constraints, in which σ denotes the stress tensor, ε represents the linearized strain tensor, A is the fourth-order tensor of elastic compliances, and B is a time-dependent fourth-order relaxation tensor. Moreover, K represents the set of constraints, in which k and g are given yield limits, and ∂ψ_K represents the convex subdifferential of the indicator function ψ_K of the set K. Constitutive models of the form (68) can be derived by using rheological arguments, as explained in [10,30,31]. They have been used in the literature to model the behaviour of real materials such as metals, rocks, soils, and various polymers.
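The sketch below is not part of the paper; it illustrates the structure of the constraint set K in (71), assuming d = 3. Since the spherical part (the trace) and the deviatoric part of a symmetric tensor are orthogonal, projecting a tensor onto K can be done componentwise: clip the trace to [−k, k] and rescale the deviator to norm at most g.

```python
# Sketch (not from the paper): projection of a symmetric d x d tensor onto
# K = { tau : |tr tau| <= k, ||tau^D|| <= g } from (71).
import numpy as np

def project_K(sigma, k, g):
    d = sigma.shape[0]
    tr = np.trace(sigma)
    dev = sigma - (tr / d) * np.eye(d)                  # deviatoric part sigma^D
    tr_proj = np.clip(tr, -k, k)                        # clip the trace
    nrm = np.linalg.norm(dev)                           # Frobenius norm
    dev_proj = dev if nrm <= g else (g / nrm) * dev     # rescale the deviator
    return (tr_proj / d) * np.eye(d) + dev_proj

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
sigma = 0.5 * (A + A.T)                                 # a symmetric tensor
p = project_K(sigma, k=1.0, g=0.5)
print(abs(np.trace(p)) <= 1.0 + 1e-12,
      np.linalg.norm(p - (np.trace(p) / 3) * np.eye(3)) <= 0.5 + 1e-12)
```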
Next, for each n ∈ ℕ, we consider the following assumptions:
$$K_n = \big\{ \tau \in \mathbb{S}^d \ : \ |\operatorname{tr} \tau| \le k_n, \ \|\tau^D\| \le g_n \big\} \quad \text{with} \ k_n, g_n > 0. \qquad (73)$$
$$\varepsilon_n \in C(I; \mathbb{S}^d). \qquad (74)$$
Moreover, by replacing ε and K in (68) with ε_n and K_n, respectively, we consider the inclusion problem below.
  • Problem M_n. Find a function σ_n ∈ C(I; 𝕊^d) such that
$$\varepsilon_n(t) \in A \sigma_n(t) + \int_0^t B(t - s)\, \sigma_n(s)\, ds + \partial \psi_{K_n}\big(\sigma_n(t)\big) \quad \forall\, t \in I. \qquad (75)$$
In addition, we assume that
$$k \le k_n, \qquad g \le g_n, \qquad (76)$$
$$k_n \to k, \qquad g_n \to g, \qquad (77)$$
$$\varepsilon_n \to \varepsilon \ \text{in} \ C(I; \mathbb{S}^d). \qquad (78)$$
Then, our result in this section is the following:
Theorem 4.
Assume (69)–(74). Then, Problem M has a unique solution σ ∈ C(I; K), and for each n ∈ ℕ, Problem M_n has a unique solution σ_n ∈ C(I; K_n). Moreover, if (76)–(78) hold, then
$$\sigma_n \to \sigma \ \text{in} \ C(I; \mathbb{S}^d). \qquad (79)$$
Proof. 
We recall that, for any ξ, η ∈ 𝕊^d, the following equivalence holds:
$$\xi \in \partial \psi_K(\eta) \ \iff\ \eta \in K \ \text{and} \ (\xi, \tau - \eta) \le 0 \quad \forall\, \tau \in K.$$
By using this equivalence, we see that Problem M is equivalent to the problem of finding a function σ ∈ C(I; K) such that
$$(A \sigma(t), \tau - \sigma(t)) + \Big( \int_0^t B(t - s)\, \sigma(s)\, ds,\ \tau - \sigma(t) \Big) \ge (\varepsilon(t), \tau - \sigma(t)) \quad \forall\, \tau \in K, \ t \in I. \qquad (80)$$
Moreover, Problem M_n is equivalent to the problem of finding a function σ_n ∈ C(I; K_n) such that
$$(A \sigma_n(t), \tau - \sigma_n(t)) + \Big( \int_0^t B(t - s)\, \sigma_n(s)\, ds,\ \tau - \sigma_n(t) \Big) \ge (\varepsilon_n(t), \tau - \sigma_n(t)) \quad \forall\, \tau \in K_n, \ t \in I. \qquad (81)$$
The unique solvability of Problem M follows from Theorem 1, applied to inequality (80) on the space X = 𝕊^d with j ≡ 0. Indeed, the set K defined by (71) satisfies condition (11), and assumption (69) shows that condition (12) holds, too. Moreover, when using (70), it is easy to see that the operator S : C(I; 𝕊^d) → C(I; 𝕊^d) defined by
$$S \sigma(t) = \int_0^t B(t - s)\, \sigma(s)\, ds \quad \forall\, \sigma \in C(I; \mathbb{S}^d), \ t \in I$$
is history-dependent, i.e., it satisfies condition (13). Finally, assumption (72) shows that (16) holds with f = ε ∈ C(I; 𝕊^d). Therefore, we are in a position to use Theorem 1 to obtain the existence of a unique function σ ∈ C(I; K) which satisfies inequality (80). Moreover, by using the equivalence between the inclusion (68) and inequality (80), we deduce that σ is the unique solution to Problem M. The unique solvability of Problem M_n, for each n ∈ ℕ, follows from similar arguments, which concludes the proof of the existence part of Theorem 4.
For the convergence part, we use Corollary 3. To this end, we note that assumption (76) implies that condition (54) is satisfied and, moreover, assumption (78) shows that condition (56) holds, too. Let n ∈ ℕ and let σ ∈ K_n, which we write in the form
$$\sigma = \frac{1}{d} (\operatorname{tr} \sigma)\, I_d + \sigma^D. \qquad (82)$$
The inclusion σ ∈ K_n implies that
$$|\operatorname{tr} \sigma| \le k_n, \qquad \|\sigma^D\| \le g_n. \qquad (83)$$
We now consider the tensor σ̃ ∈ 𝕊^d given by
$$\widetilde{\sigma} = \frac{k}{k_n d} (\operatorname{tr} \sigma)\, I_d + \frac{g}{g_n} \sigma^D. \qquad (84)$$
Then, it is easy to see that |tr σ̃| ≤ k and ‖σ̃^D‖ ≤ g, which implies that σ̃ ∈ K. On the other hand, when using (82) and (84), we find that
$$\|\sigma - \widetilde{\sigma}\| \le \frac{1}{d} \Big| 1 - \frac{k}{k_n} \Big|\, |\operatorname{tr} \sigma|\, \|I_d\| + \Big| 1 - \frac{g}{g_n} \Big|\, \|\sigma^D\|,$$
and when using (83) combined with the equality ‖I_d‖ = √d, we obtain that
$$\|\sigma - \widetilde{\sigma}\| \le \frac{1}{\sqrt{d}} \Big| 1 - \frac{k}{k_n} \Big|\, k_n + \Big| 1 - \frac{g}{g_n} \Big|\, g_n.$$
Therefore,
$$d(\sigma, K) \le \frac{1}{\sqrt{d}} |k_n - k| + |g_n - g|,$$
and, moreover, (5) implies that
$$e(K_n, K) \le \frac{1}{\sqrt{d}} |k_n - k| + |g_n - g|. \qquad (85)$$
Next, when using the implication (6), we deduce that H(K_n, K) = e(K_n, K), and when using the bound (85) combined with the convergences in (77), we find that H(K_n, K) → 0 as n → ∞, which shows that condition (55) is satisfied. The convergence (79) is now a direct consequence of Corollary 3. □
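The sketch below is not part of the paper; it checks numerically, for d = 3 and randomly sampled tensors in K_n, that the element σ̃ constructed in (84) belongs to K and realizes the distance bound used to obtain (85).

```python
# Sketch (not from the paper): numerical check of the construction (84).
import numpy as np

d, k, g, kn, gn = 3, 1.0, 0.5, 1.3, 0.8
rng = np.random.default_rng(2)
I = np.eye(d)

ok = True
for _ in range(1000):
    # sample sigma in K_n: |tr sigma| <= kn, ||sigma^D|| <= gn
    tr = rng.uniform(-kn, kn)
    B = rng.normal(size=(d, d)); dev = 0.5 * (B + B.T)
    dev -= (np.trace(dev) / d) * I
    dev *= rng.uniform(0.0, gn) / max(np.linalg.norm(dev), 1e-12)
    sigma = (tr / d) * I + dev
    # the element sigma_tilde of (84)
    tilde = (k / (kn * d)) * tr * I + (g / gn) * dev
    ok &= abs(np.trace(tilde)) <= k + 1e-12
    ok &= np.linalg.norm(tilde - (np.trace(tilde) / d) * I) <= g + 1e-12
    ok &= (np.linalg.norm(sigma - tilde)
           <= abs(kn - k) / np.sqrt(d) + abs(gn - g) + 1e-12)
print(ok)   # True: sigma_tilde lies in K and satisfies the distance bound
```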
In addition to its mathematical interest, Theorem 4 is important from the mechanical point of view, since it shows that, given a strain function ε ∈ C(I; 𝕊^d), there exists a unique stress field σ ∈ C(I; 𝕊^d) which satisfies the viscoelastic constitutive law (68). Moreover, the corresponding stress field depends continuously on the strain field ε and on the yield limits k and g.

7. Conclusions

In this paper, we considered a history-dependent variational inequality together with two associated problems, constructed by using the so-called gap function. Our main result is Theorem 3, which characterizes the convergence of a sequence to the unique solution of the corresponding inequality, both in the space of continuous functions defined on a compact interval and in the space of continuous functions defined on the positive real line. We exploited this theorem to deduce various convergence and well-posedness results for the history-dependent inequality and the associated problems involving the gap function. Then, we used these results in the study of a constitutive law which describes the behaviour of a viscoelastic material with long-term memory and unilateral constraints.
The results in this paper could be extended to hemivariational or variational-hemivariational inequalities. They can be applied in the sensitivity analysis of such inequalities, which, we recall, arise in the study of various mathematical models that describe the evolution of the mechanical state of a viscoelastic or viscoplastic body in contact with an obstacle, the so-called foundation. For such models, the history-dependent operator appears either in the constitutive law and/or in the boundary conditions. In this way, various convergence results can be obtained, and the link between various mathematical models of contact can be established. Finally, it would be interesting to provide computer simulations that validate the corresponding convergence results.

Author Contributions

Conceptualization, M.S.; methodology, M.S. and D.A.T.; original draft preparation, M.S.; review and editing, D.A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 823731 CONMECH.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Brézis, H. Equations et inéquations non linéaires dans les espaces vectoriels en dualité. Ann. Inst. Fourier 1968, 18, 115–175.
  2. Brézis, H. Problèmes unilatéraux. J. Math. Pures Appl. 1972, 51, 1–168.
  3. Glowinski, R. Numerical Methods for Nonlinear Variational Problems; Springer: New York, NY, USA, 1984.
  4. Gwinner, J.; Jadamba, B.; Khan, A.; Raciti, F. Uncertainty Quantification in Variational Inequalities. Theory, Numerics and Applications; CRC Press: Boca Raton, FL, USA, 2022.
  5. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Classics in Applied Mathematics 31; SIAM: Philadelphia, PA, USA, 2000.
  6. Capatina, A. Variational Inequalities and Frictional Contact Problems; Advances in Mechanics and Mathematics 31; Springer: Heidelberg, Germany, 2014.
  7. Chouly, F.; Hild, P. On convergence of the penalty method for unilateral contact problems. Appl. Numer. Math. 2013, 65, 27–48.
  8. Han, W.; Reddy, B.D. Plasticity: Mathematical Theory and Numerical Analysis, 2nd ed.; Springer: New York, NY, USA, 2013.
  9. Panagiotopoulos, P.D. Inequality Problems in Mechanics and Applications; Birkhäuser: Boston, MA, USA, 1985.
  10. Sofonea, M.; Matei, A. Mathematical Models in Contact Mechanics; London Mathematical Society Lecture Note Series 398; Cambridge University Press: Cambridge, UK, 2012.
  11. Sofonea, M. Well-Posed Nonlinear Problems. A Study of Mathematical Models of Contact; Advances in Mechanics and Mathematics 50; Birkhäuser: Cham, Switzerland, 2023.
  12. Sofonea, M.; Migórski, S. Variational-Hemivariational Inequalities with Applications; Monographs and Research Notes in Mathematics; CRC Press: Boca Raton, FL, USA, 2017.
  13. Dontchev, A.L.; Zolezzi, T. Well-Posed Optimization Problems; Lecture Notes in Mathematics 1543; Springer: Berlin, Germany, 1993.
  14. Huang, X.X. Extended and strongly extended well-posedness of set-valued optimization problems. Math. Methods Oper. Res. 2001, 53, 101–116.
  15. Huang, X.X.; Yang, X.Q. Generalized Levitin-Polyak well-posedness in constrained optimization. SIAM J. Optim. 2006, 17, 243–258.
  16. Lucchetti, R. Convexity and Well-Posed Problems; CMS Books in Mathematics; Springer: New York, NY, USA, 2006.
  17. Zolezzi, T. Extended well-posedness of optimization problems. J. Optim. Theory Appl. 1996, 91, 257–266.
  18. Gariboldi, C.; Ochal, A.; Sofonea, M.; Tarzia, D.A. A convergence criterion for elliptic variational inequalities. arXiv 2023, arXiv:2309.04805.
  19. Sofonea, M.; Tarzia, D.A. Convergence criteria for fixed point problems and differential equations. Mathematics 2024, 12, 395.
  20. Sofonea, M.; Tarzia, D.A. A convergence criterion for a class of stationary inclusions in Hilbert spaces. Axioms 2024, 13, 52.
  21. Massera, J.J.; Schäffer, J.J. Linear Differential Equations and Function Spaces; Academic Press: New York, NY, USA; London, UK, 1966.
  22. Auslander, A. Convergence of stationary sequences for variational inequalities with maximal monotone operators for nonexpansive mappings. Appl. Math. Optim. 1993, 28, 161–172.
  23. Tykhonov, A.N. On the stability of functional optimization problems. USSR Comput. Math. Math. Phys. 1966, 6, 631–634.
  24. Levitin, E.S.; Polyak, B.T. Convergence of minimizing sequences in conditional extremum problems. Sov. Math. Dokl. 1966, 7, 764–767.
  25. Lucchetti, R.; Patrone, F. A characterization of Tychonov well-posedness for minimum problems with applications to variational inequalities. Numer. Funct. Anal. Optim. 1981, 3, 461–476.
  26. Lucchetti, R.; Patrone, F. Some properties of "well-posed" variational inequalities governed by linear operators. Numer. Funct. Anal. Optim. 1983, 5, 349–361.
  27. Huang, X.X.; Yang, X.Q.; Zhu, D.L. Levitin-Polyak well-posedness of variational inequality problems with functional constraints. J. Glob. Optim. 2009, 44, 159–174.
  28. Wang, Y.M.; Xiao, Y.B.; Wang, X.; Cho, Y.J. Equivalence of well-posedness between systems of hemivariational inequalities and inclusion problems. J. Nonlinear Sci. Appl. 2016, 9, 1178–1192.
  29. Kumar, S.; Gupta, A. Well-posedness of generalized vector variational inequality problem via topological approach. Rend. Circ. Mat. Palermo Ser. 2 2024, 73, 161–169.
  30. Doghri, I. Mechanics of Deformable Solids; Springer: Berlin, Germany, 2000.
  31. Drozdov, A.D. Finite Elasticity and Viscoelasticity–A Course in the Nonlinear Mechanics of Solids; World Scientific: Singapore, 1996.
