Systems of Hemivariational Inclusions with Competing Operators

This paper focuses on a system of differential inclusions expressing hemivariational inequalities driven by competing operators built from p-Laplacians that involve two real parameters. The existence of a generalized solution is shown by means of an approximation process through approximate solutions in finite dimensional spaces. When the parameters are negative, the generalized solutions become weak solutions. The main novelty of this work is the solvability of systems of differential inclusions for which the ellipticity condition may fail.

The multivalued term in the inclusion (1) is expressed as the generalized gradient ∂F of a locally Lipschitz function F : R^2 → R, so pointwise ∂F(u_1(x), u_2(x)) is a subset of R^2. We refer to [1] for the subdifferentiation of locally Lipschitz functionals. Some basic elements are presented in Section 2. Any ζ ∈ ∂F(t, s) is a point of R^2; thus, it has two components, i.e., ζ = (ζ_1, ζ_2) ∈ R^2. Hence, (1) is a system of two differential inclusions that we call hemivariational inclusions because they involve generalized gradients. The inclusion problem (1) incorporates systems of equations with discontinuous nonlinearities. Differential equations with discontinuous nonlinearities treated via generalized gradients were first studied in [2].
According to the definition of the generalized gradient, it is apparent that each solution to system (1) also solves the corresponding hemivariational inequality problem (2).
For the locally Lipschitz function F : R^2 → R, we assume the following condition:

(H) There are positive constants c_0, …

In the statement of (1), there are two parameters µ_1 ∈ R and µ_2 ∈ R. The leading operators are −∆_{p_1} + µ_1 ∆_{q_1} and −∆_{p_2} + µ_2 ∆_{q_2}, for which the ellipticity condition fails when µ_1 > 0 and µ_2 > 0; this is the main point of our work (note that µ_1 and µ_2 are arbitrary real numbers). In this case, they become the so-called competing operators that were introduced in [8]. Precisely, a competing operator was defined in reference [8] as −∆_p + ∆_q, versus the (p, q)-Laplacian −∆_p − ∆_q, for 1 < q < p < +∞. The essential feature of such an operator is that the ellipticity property is lost. For any u ∈ W_0^{1,p}(Ω) and any scalar λ > 0, the following expression does not have a constant sign when λ varies:

⟨(−∆_p + ∆_q)(λu), λu⟩ = λ^p ∥∇u∥_p^p − λ^q ∥∇u∥_q^q.

Systems of differential equations with competing operators were investigated in [9].
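The sign change behind the loss of ellipticity can be made explicit. The following short computation is our sketch, assuming the standard weak formulation of −∆_r (it is not reproduced verbatim from the paper):

```latex
\langle(-\Delta_p + \Delta_q)(\lambda u), \lambda u\rangle
  = \lambda^p \|\nabla u\|_p^p - \lambda^q \|\nabla u\|_q^q .
% Since 1 < q < p, the subtracted term \lambda^q \|\nabla u\|_q^q dominates
% as \lambda \to 0^+, making the expression negative, while
% \lambda^p \|\nabla u\|_p^p dominates as \lambda \to +\infty, making it
% positive. Hence no constant-sign (ellipticity-type) estimate can hold.
```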
Due to the possible loss of ellipticity for system (1), we introduce a new type of solution, called a generalized solution. The notion of a generalized solution was proposed in [10] for differential equations driven by competing operators and in [9] for systems of differential equations with competing operators. The notion of a generalized solution for hemivariational inequalities with competing operators was recently introduced in [7]. Here, for the first time, we define the generalized solution for a system of hemivariational inclusions exhibiting competing operators.
Our main results read as follows.
In the proof of Theorem 1, we make use of approximation through finite dimensional subspaces via a Galerkin basis combined with minimization and nonsmooth analysis. We obtain a priori estimates, which are of independent interest in the context of competing operators. The proof of Theorem 2 relies on properties of the underlying spaces and of operators of the p-Laplacian type. We end the paper with an example illustrating the applicability of our results.
The rest of the paper is organized as follows. Section 2 is devoted to the related mathematical background. Section 3 contains the needed minimization results and estimates. Section 4 sets forth the finite dimensional approximation approach. Section 5 presents the proofs of Theorems 1 and 2, as well as an example.

2. Mathematical Background
Given a Banach space X with the norm ∥·∥, X^* denotes the dual space of X, and ⟨·, ·⟩ denotes the duality pairing between X and X^*. The norm convergence in X and X^* is denoted by →, and the weak convergence is denoted by ⇀.
We outline basic elements of nonsmooth analysis. For a detailed treatment, we refer to [1]. A function G : X → R on a Banach space X is called locally Lipschitz if, for every point u ∈ X, there are an open neighborhood U of u and a constant C > 0 such that

|G(v) − G(w)| ≤ C ∥v − w∥ for all v, w ∈ U.

The generalized directional derivative of a locally Lipschitz function G : X → R at u ∈ X in the direction v ∈ X is

G°(u; v) = limsup_{w→u, t↓0} (G(w + tv) − G(w))/t,

and the generalized gradient of G at u ∈ X is the following set:

∂G(u) = {η ∈ X^* : ⟨η, v⟩ ≤ G°(u; v) for all v ∈ X}.

The following relation links the two notions: G°(u; v) = max{⟨η, v⟩ : η ∈ ∂G(u)} for all u, v ∈ X.
We illustrate these definitions in two significant situations. For a continuous and convex function G : X → R, the generalized gradient ∂G coincides with the subdifferential of G in the sense of convex analysis. If the function G : X → R is continuously differentiable, the generalized gradient of G at u reduces to its classical differential, ∂G(u) = {G′(u)}.
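A one-dimensional example (ours, added for illustration) shows both situations at once: G(t) = |t| is convex and locally Lipschitz but not differentiable at the origin, where the generalized gradient is the full interval of supporting slopes.

```latex
G(t) = |t|, \qquad
\partial G(t) =
\begin{cases}
\{-1\}  & t < 0, \\
[-1, 1] & t = 0, \\
\{1\}   & t > 0.
\end{cases}
% At t = 0: G^{\circ}(0; v) = |v|, so
% \partial G(0) = \{\eta \in \mathbb{R} : \eta v \le |v| \ \forall v\} = [-1, 1],
% which coincides with the convex-analysis subdifferential; away from 0,
% G is continuously differentiable and \partial G(t) = \{G'(t)\}.
```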
We also mention a few things regarding the driving operators in system (1) (or hemivariational inequality (2)). Given any number 1 < r < +∞, the Sobolev space W_0^{1,r}(Ω) is endowed with the norm ∥∇u∥_r, where ∥·∥_r denotes the L^r norm. The dual space of W_0^{1,r}(Ω) is W^{−1,r′}(Ω). As usual, r^* denotes the Sobolev critical exponent, that is, r^* = Nr/(N − r) if N > r and r^* = +∞ otherwise. The Rellich–Kondrachov embedding theorem ensures that W_0^{1,r}(Ω) is compactly embedded into L^d(Ω) for every 1 ≤ d < r^*. In particular, there exists a positive constant S_{d,r} such that

∥u∥_d ≤ S_{d,r} ∥∇u∥_r for all u ∈ W_0^{1,r}(Ω).

For the background of Sobolev spaces, we refer to [11]. Here, we solely recall that the Banach space W_0^{1,r}(Ω) with 1 < r < +∞ is separable. This implies the existence of a Galerkin basis of the space W_0^{1,r}(Ω), meaning a sequence {X_n}_{n∈N} of vector subspaces of W_0^{1,r}(Ω) such that

(a) dim X_n < ∞ for all n;
(b) X_n ⊂ X_{n+1} for all n;
(c) the union of all X_n, n ∈ N, is dense in W_0^{1,r}(Ω).

We refer to [12] for background related to Galerkin bases. The negative r-Laplacian −∆_r : W_0^{1,r}(Ω) → W^{−1,r′}(Ω) is defined by

⟨−∆_r u, v⟩ = ∫_Ω |∇u|^{r−2} ∇u · ∇v dx for all u, v ∈ W_0^{1,r}(Ω).

The first eigenvalue of −∆_r is given by

λ_{1,r} = inf { ∥∇u∥_r^r / ∥u∥_r^r : u ∈ W_0^{1,r}(Ω), u ≠ 0 }.

More details can be found, e.g., in [3]. Since q_1 < p_1 and q_2 < p_2, there are the continuous embeddings W_0^{1,p_1}(Ω) ⊂ W_0^{1,q_1}(Ω) and W_0^{1,p_2}(Ω) ⊂ W_0^{1,q_2}(Ω), which can be readily verified through Hölder's inequality. Therefore, the sums −∆_{p_1} + µ_1 ∆_{q_1} : W_0^{1,p_1}(Ω) → W^{−1,p_1′}(Ω) and −∆_{p_2} + µ_2 ∆_{q_2} : W_0^{1,p_2}(Ω) → W^{−1,p_2′}(Ω) entering system (1) are well defined.
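The embedding claim can be verified in one line; the following is our sketch of the standard computation, valid because Ω is bounded:

```latex
\|\nabla u\|_q
  = \Big(\int_\Omega |\nabla u|^q \, dx\Big)^{1/q}
  \le |\Omega|^{\frac{1}{q}-\frac{1}{p}} \, \|\nabla u\|_p
  \qquad (1 < q < p).
% Apply H\"older's inequality with the conjugate exponents p/q and p/(p-q)
% to \int_\Omega |\nabla u|^q \cdot 1 \, dx; taking the 1/q-th power yields
% the continuous embedding W_0^{1,p}(\Omega) \hookrightarrow W_0^{1,q}(\Omega).
```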

3. Associated Euler Functional
We focus on the nonsmooth function F : R^2 → R, for which assumption (H) holds true.

Lemma 1. Assume that condition (H) is satisfied. Then, for each ε > 0, there exist constants c(ε) > 0 and d(ε) > 0 such that (6) holds.

Proof. Rademacher's theorem ensures that the gradient ∇F(x_1, x_2) exists for almost every (x_1, x_2) ∈ R^2. On the other hand, for every (t, s) ∈ R^2, the function τ → F(τt, τs) belongs to the space W (see [1], p. 32), so hypothesis (H) yields the required growth estimate. Now, using Young's inequality with ε, we arrive at (6), which completes the proof.
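The form of Young's inequality with ε used here is standard; we recall it (our addition, for the reader's convenience):

```latex
ab \le \varepsilon\, a^{r} + C(\varepsilon)\, b^{r'},
\qquad a, b \ge 0, \quad r > 1, \quad \frac{1}{r} + \frac{1}{r'} = 1,
% with the explicit constant C(\varepsilon) = (\varepsilon r)^{-r'/r}/r'.
% It follows from the classical Young inequality ab \le a^r/r + b^{r'}/r'
% applied to the factorization
% ab = \big((\varepsilon r)^{1/r} a\big)\big((\varepsilon r)^{-1/r} b\big).
```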

Lemma 2.
Under assumption (H), the functional Φ : L^{p_1}(Ω) × L^{p_2}(Ω) → R in (7) is Lipschitz continuous on the bounded subsets of L^{p_1}(Ω) × L^{p_2}(Ω), and its generalized gradient satisfies (8).

Proof. The verification of the Lipschitz condition for the functional Φ in (7) on the bounded subsets of the product space L^{p_1}(Ω) × L^{p_2}(Ω) is straightforward. The Aubin–Clarke theorem on subdifferentiation under the integral sign (see [1], p. 83) can be shown to be valid under hypothesis (H). This readily leads to Formula (8), thus completing the proof.
In view of Lemma 2, the compact embeddings W_0^{1,p_1}(Ω) ⊂ L^{p_1}(Ω) and W_0^{1,p_2}(Ω) ⊂ L^{p_2}(Ω) apply. On this basis, we introduce the functional J : W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω) → R in (9).

Proposition 1. Assume condition (H). Then, the functional J in (9) is locally Lipschitz, with the generalized gradient expressed as in (10).

Proof. The functional J in (9) is the difference of a continuously differentiable function and the functional Φ in (7), which is known from Lemma 2 to be locally Lipschitz. Therefore, J is locally Lipschitz continuous, and its generalized gradient on the product space L^{p_1}(Ω) × L^{p_2}(Ω) has the expression in (10).
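The explicit formulas (7) and (9) are not reproduced in this excerpt. The following reconstruction is an assumption on our part, consistent with Lemma 2 (an integral functional of F) and with the leading operators −∆_{p_i} + µ_i ∆_{q_i} in system (1):

```latex
% Reconstruction (assumption, not quoted from the paper):
\Phi(u_1, u_2) = \int_\Omega F\big(u_1(x), u_2(x)\big)\, dx,
\qquad
J(u_1, u_2) = \sum_{i=1}^{2}\Big(
  \frac{1}{p_i}\,\|\nabla u_i\|_{p_i}^{p_i}
  - \frac{\mu_i}{q_i}\,\|\nabla u_i\|_{q_i}^{q_i}\Big)
  - \Phi(u_1, u_2).
% Indeed, the Gateaux derivative of the smooth part with respect to u_i is
% -\Delta_{p_i} u_i + \mu_i \Delta_{q_i} u_i, matching the operators in (1),
% so critical points of J (in the nonsmooth sense) solve the inclusion system.
```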

4. Finite Dimensional Approximations to Resolve System (1)
Let us fix a Galerkin basis {X_n} of the space W_0^{1,p_1}(Ω) and a Galerkin basis {Y_n} of the space W_0^{1,p_2}(Ω). It follows that {X_n × Y_n} is a Galerkin basis of the product space W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω). Minimization in the finite dimensional space X_n × Y_n will enable us to construct a generalized solution to system (1).
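In brief, the scheme runs as follows (our summary of the strategy described above): on each finite dimensional subspace, a minimizer of J exists and serves as an approximate solution.

```latex
(u_{1n}, u_{2n}) \in X_n \times Y_n, \qquad
J(u_{1n}, u_{2n}) = \min_{(v_1, v_2) \in X_n \times Y_n} J(v_1, v_2).
% The minimum is attained because J is continuous (Proposition 1) and
% coercive (Proposition 2), and X_n \times Y_n is finite dimensional
% (Weierstrass theorem). Letting n \to \infty, the a priori bounds of
% Proposition 4 provide a weakly convergent subsequence whose limit is
% shown to be a generalized solution of system (1).
```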

Proposition 3. Assume condition (H). For each positive integer n, there exist (u_{1n}, u_{2n}) ∈ X_n × Y_n satisfying (11) and (12).
Proof. According to Proposition 1, the functional J : W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω) → R in (9) is locally Lipschitz and, thus, continuous, while according to Proposition 2, J is coercive. Taking into account that the subspace X_n × Y_n is finite dimensional, the minimization problem (13) admits a solution (u_{1n}, u_{2n}). A necessary condition of optimality for (13), in view of (10), yields (11) and (12) with (z_{1n}, z_{2n}) ∈ ∂F(u_{1n}, u_{2n}) a.e. in Ω. We are entitled to invoke hypothesis (H) and then Young's inequality with any ε > 0 to find estimates with positive constants c(ε) and d(ε). Take the sum of inequalities (15) and (16) and insert the preceding estimates, also using (4) and (5). Assumption (H) postulates that c_1 < λ_{1,p_1} and d_2 < λ_{1,p_2}, so we may choose a value of ε > 0 small enough for the resulting estimate to close, which completes the proof.

Proposition 4. The sequence {(u_{1n}, u_{2n})} given in Proposition 3 has the following property: there exists a constant M > 0 such that (17) and (18) hold, with z_{1n} and z_{2n} as stated in (11) and (12), respectively.
Proof. According to Proposition 4, there is a constant M_0 > 0 bounding the sequence {(u_{1n}, u_{2n})} in W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω). By the reflexivity of these spaces, we may admit that, along a subsequence, u_{1n} ⇀ u_1 in W_0^{1,p_1}(Ω) and u_{2n} ⇀ u_2 in W_0^{1,p_2}(Ω). We will show that the weak limit (u_1, u_2) is a generalized solution to system (1).
The reflexivity of the spaces W^{−1,p_1′}(Ω) and W^{−1,p_2′}(Ω) implies that we can pass to relabeled subsequences satisfying −∆_{p_1} u_{1n} + µ_1 ∆_{q_1} u_{1n} − z_{1n} ⇀ η_1 in W^{−1,p_1′}(Ω) and −∆_{p_2} u_{2n} + µ_2 ∆_{q_2} u_{2n} − z_{2n} ⇀ η_2 in W^{−1,p_2′}(Ω). We claim that η_1 = 0 and η_2 = 0, that is, ⟨η_1, v⟩ = 0 for all v ∈ W_0^{1,p_1}(Ω) and ⟨η_2, v⟩ = 0 for all v ∈ W_0^{1,p_2}(Ω). We only prove the first assertion because the second one can be checked analogously. Let v ∈ W_0^{1,p_1}(Ω) and suppose, first, that v belongs to the union of the X_n. Fix some m with v ∈ X_m. Then, for each n ≥ m, the element v can be used as a test function in (11). In the limit, as n → ∞, we obtain ⟨η_1, v⟩ = 0. If v ∈ W_0^{1,p_1}(Ω) is arbitrary, we obtain ⟨η_1, v⟩ = 0 owing to the density of the union of the X_n in W_0^{1,p_1}(Ω), as required by condition (c) of the Galerkin basis. Therefore, the claim is proven, which shows that condition (ii) in the definition of the generalized solution to system (1) is satisfied.

Now, we deal with condition (iii) in the definition of the generalized solution to (1). It is known from (11) and (12) that the corresponding estimates hold. On the other hand, according to assertion (ii), one has the complementary bound. Combining the preceding estimates renders the desired inequalities. Lemma 2 guarantees that the functional Φ : L^{p_1}(Ω) × L^{p_2}(Ω) → R given in (7) is Lipschitz continuous on the bounded subsets of L^{p_1}(Ω) × L^{p_2}(Ω); thus, its generalized gradient ∂Φ : L^{p_1}(Ω) × L^{p_2}(Ω) → 2^{L^{p_1′}(Ω) × L^{p_2′}(Ω)} is a bounded multifunction, which means that the image of every bounded set is a bounded set. Hence, on the basis of the inclusion (z_{1n}, z_{2n}) ∈ ∂Φ(u_{1n}, u_{2n}) and Proposition 4, we are led to the conclusion that the sequence {(z_{1n}, z_{2n})} is bounded in L^{p_1′}(Ω) × L^{p_2′}(Ω). Recalling that u_{1n} ⇀ u_1 in W_0^{1,p_1}(Ω) and u_{2n} ⇀ u_2 in W_0^{1,p_2}(Ω), the compactness of the embeddings allows passing to the limit. Inserting this into (20) and (21), we see that requirement (iii) in the definition of the generalized solution is fulfilled. Therefore, (u_1, u_2) ∈ W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω) is a generalized solution to system (1). The proof of Theorem 1 is complete.

5. Proofs of the Main Results and Example

Proof of Theorem 1.
From (19), we directly infer the existence of a constant M > 0 for which (17) and (18) are fulfilled; this proves Proposition 4. Consider now the sequence {(u_{1n}, u_{2n})} ⊂ W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω) provided by Proposition 3, corresponding to the Galerkin basis {X_n × Y_n} of the space W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω). It is known from Proposition 4 that the sequence {(u_{1n}, u_{2n})} is bounded in W_0^{1,p_1}(Ω) × W_0^{1,p_2}(Ω).