On a Boundary Value Problem for the Biharmonic Equation with Multiple Involutions

Abstract: A nonlocal analogue of the biharmonic operator with involution-type transformations was considered. For the corresponding biharmonic equation with involution, we investigated the solvability of boundary value problems with a fractional-order boundary operator having a derivative of the Hadamard type. First, involution-type transformations were considered, and the properties of the matrices of these transformations were investigated. As an application of the considered transformations, questions about the solvability of a boundary value problem for a nonlocal biharmonic equation were studied, with modified Hadamard derivatives taken as the boundary operator. The considered problems covered Dirichlet and Neumann-type boundary conditions. Theorems on the existence and uniqueness of solutions to the studied problems were proven.


Introduction
The concept of a nonlocal operator and the related concept of a nonlocal differential equation appeared in the theory of differential equations not long ago. For example, in [1], the authors considered equations containing fractional derivatives of the desired function and equations with deviating arguments, in other words, equations that include an unknown function and its derivatives taken, generally speaking, at different values of the arguments. Such equations are called nonlocal differential equations. Among nonlocal differential equations, a special place is occupied by equations in which the deviation of the arguments has an involutive character. A mapping S is usually called an involution if S^2(x) = S(S(x)) = x. The theory of equations with involutively transformed arguments and their applications were described in detail in the monographs [2][3][4]. By now, for differential equations with various types of involution, the well-posedness of boundary and initial-boundary value problems, the qualitative properties of solutions, and spectral questions have been studied quite well [5][6][7][8][9][10][11][12][13][14][15][16]. In addition, for classical equations, one can study nonlocal boundary value problems of the Bitsadze-Samarskii type, in which the values of the sought function u(x) at the boundary of the domain are related to the values of u(Sx) [17][18][19]. Note that questions about the solvability of the main problems for the nonlocal Poisson equation were studied in [20,21]. Note also that boundary value problems with fractional-order boundary operators for elliptic equations were studied in [22][23][24][25][26][27][28][29][30].
Applications of boundary value problems with boundary operators of fractional order for elliptic equations were given in [31][32][33][34].
Let us formulate the problem studied in this work. Let Ω be the unit ball in R^l, l ≥ 2, and ∂Ω be the unit sphere. Let also u(x) be a smooth function in the domain Ω, µ ≥ 0, α ∈ (m − 1, m], m = 1, 2, . . . , r = |x| = (x_1^2 + . . . + x_l^2)^{1/2}, θ = x/r, x ∈ Ω, and let δ = r d/dr be the Dirac operator, where r d/dr = Σ_{j=1}^{l} x_j ∂/∂x_j. Consider the modified Hadamard integrodifferential operators J_µ^α[u](x) and D_µ^α[u](x) ([35], p. 116), m − 1 < α ≤ m. Let S_1, . . . , S_n be a set of real symmetric commuting matrices, S_i S_j = S_j S_i, such that S_i^2 = I. Note that, since every transformation S_i is an isometry, if x ∈ Ω or x ∈ ∂Ω, then S_i x ∈ Ω or S_i x ∈ ∂Ω, respectively. For example, S_i can be the matrix of the linear mapping S_i x = (x_1, . . . , x_{i−1}, −x_i, x_{i+1}, . . . , x_l).
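The basic properties required of the transformations S_i can be checked numerically on the coordinate-reflection example named above. A minimal sketch in pure Python (the helper names are ours, not from the text):

```python
# Coordinate-reflection involutions S_i in R^l: S_i negates the i-th
# coordinate.  We verify S_i^2 = I, S_i S_j = S_j S_i, and isometry.

def reflection(l, i):
    """Matrix of S_i x = (x_1, ..., -x_i, ..., x_l), as a list of rows."""
    return [[(-1 if r == c == i else (1 if r == c else 0))
             for c in range(l)] for r in range(l)]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

def matvec(A, x):
    return [sum(A[r][k] * x[k] for k in range(len(x))) for r in range(len(A))]

l = 3
I = [[int(r == c) for c in range(l)] for r in range(l)]
S1, S2 = reflection(l, 0), reflection(l, 1)

assert matmul(S1, S1) == I                   # S_i^2 = I (involution)
assert matmul(S1, S2) == matmul(S2, S1)      # commutativity
x = [3, -4, 12]
assert sum(v * v for v in matvec(S1, x)) == sum(v * v for v in x)  # isometry
```

Isometry is what guarantees that S_i maps Ω into Ω and ∂Ω into ∂Ω, as stated in the text.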
Let us introduce the following nonlocal analogue of the biharmonic operator: Consider the following boundary value problem in the domain Ω.

Remark 2.
Note also that for x ∈ ∂Ω, the following equalities hold: where ν is the outer normal to the sphere ∂Ω. Then, for α = 1, we obtain: Thus, if µ = 0, then in the case α = 0 in the problem H, we obtain the Dirichlet boundary conditions: and in the case α = 1, boundary conditions of the Neumann-type [36,37]: Moreover, in the case α = 1 and µ > 0, the boundary conditions of the problem H are written as: i.e., we obtain a generalization of the third boundary value problem [38].
The article is organized as follows. In Section 1, the main problem is formulated. In Section 2, the properties of matrices of involutive transformations are considered. Section 3 contains well-known statements about the properties of Hadamard integrodifferential operators. In Section 4, the solvability of the Dirichlet problem (case α = 0) is studied, and in Section 5, the necessary and sufficient conditions for the solvability of Neumann-type boundary value problems (case µ = 0, α = 1) are found. In Section 6, for the general case, the main boundary value problem is studied.

Auxiliary Statements
To study the problem (1)-(3), we need some auxiliary statements. Let us introduce the function: where the summation is taken in ascending order with respect to the index i. From this equality, it is easy to conclude that functions of the form v(S_n^{j_n} . . . S_1^{j_1} x), where j = 0, . . . , 2^n − 1, can be linearly expressed in terms of the functions u(S_n^{i_n} . . . S_1^{i_1} x). If we consider the following vectors of order 2^n: then this dependence can be expressed in the matrix form: where A_n = (a_{i,j})_{i,j=0,...,2^n−1} is a matrix of order 2^n × 2^n.

Theorem 1. The matrix A_n from the equality (5) can be represented in the form: A_n = (a_{i,j})_{i,j=0,...,2^n−1} = (a_{i⊕j})_{i,j=0,...,2^n−1}, (6) where the operation in the subscript of the matrix coefficients is understood in the following sense: i ⊕ j ≡ (i)_2 ⊕ (j)_2 = ((i_n + j_n mod 2) . . . (i_1 + j_1 mod 2))_2, where (i)_2 = (i_n . . . i_1)_2 is the representation of the index in the binary number system. A linear combination of matrices of the form (6) is again a matrix of the form (6).
The theorem is proven.
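The structure asserted in Theorem 1 is easy to reproduce numerically: the whole matrix A_n is determined by its first row through the bitwise operation i ⊕ j. A small sketch in pure Python (the function name is ours):

```python
# Build the 2^n x 2^n matrix A_n(a_0, ..., a_{2^n - 1}) of Theorem 1:
# its (i, j) entry is a_{i XOR j}, where XOR adds binary digits mod 2.

def xor_matrix(a):
    N = len(a)                       # N = 2^n
    return [[a[i ^ j] for j in range(N)] for i in range(N)]

a = [5, 1, -2, 7]                    # first row, n = 2
A = xor_matrix(a)

assert A[0] == a                                 # first row is (a_0, ..., a_3)
assert all(A[i][j] == A[j][i] for i in range(4)
           for j in range(4))                    # symmetric, since i^j = j^i
assert all(sorted(row) == sorted(a) for row in A)  # each row permutes row 0
print(A)   # [[5, 1, -2, 7], [1, 5, 7, -2], [-2, 7, 5, 1], [7, -2, 1, 5]]
```

Each row is the first row re-indexed by ⊕, which is exactly the property stated in the corollary below Theorem 1.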
The matrix A_n is uniquely determined by its first row (a_0, a_1, . . . , a_{2^n−1}).
Indeed, the i-th row of the matrix A_n can be written through its first row in the form (a_{i⊕0}, a_{i⊕1}, . . . , a_{i⊕(2^n−1)}).
We denote this property of the matrix A_n by the equality A_n ≡ A_n(a_0, . . . , a_{2^n−1}).
Consider the element a_{i,j} of the block matrix: It is located in the block with coordinates ((i_n . . . i_{m+1})_2, (j_n . . . j_{m+1})_2), and this means it is in the block A and, therefore, has the form: This coincides with Formula (6). The corollary is proven.

Theorem 2.
Multiplication of matrices of the form (6) is commutative. The product of matrices of the form (6) is again a matrix of the form (6).
Proof. Let us prove this property by induction on n. For n = 1, it is obviously true that: Assuming that the multiplication of matrices A_{n−1} and B_{n−1} of order n − 1 is commutative, we prove that the multiplication of matrices A_n and B_n of order n is also commutative. Denote A_{n−1} = A_{n−1}(a_0, . . . , a_{2^{n−1}−1}), Â_{n−1} = A_{n−1}(a_{2^{n−1}}, . . . , a_{2^n−1}). By the property (9): The induction step is proven, and therefore, matrices of the form A_n(a_0, . . . , a_{2^n−1}) commute. It is not hard to see that: In the sum from the formula above, let us change the index k → l, as in Theorem 1, according to the equality i ⊕ k = l. Then the correspondence k ∼ l is one-to-one, and the replacement of the index k → l changes only the order of summation in the sum. By virtue of the associativity of the operation ⊕, we have: The first row of the matrix AB is: (. . .)_{j=0,...,2^n−1}, and hence, the matrix C of the form (6), constructed from the first row of AB, is written in a form coinciding with AB: The theorem is proven.

Theorem 3. The eigenvectors of the matrix A_n(a_0, . . . , a_{2^n−1}) can be chosen in the form: where a_{n−1}^k is the eigenvector of the matrix A_{n−1}(a_0, . . . , a_{2^{n−1}−1}). The eigenvectors of the matrix A_n are orthogonal. The eigenvalues of the matrix A_n are of the form: where µ_{n−1}^k and µ̃_{n−1}^k are the eigenvalues of the matrices: corresponding to the eigenvector a_{n−1}^k, respectively. Proof. Let us carry out the proof by induction on n, showing that the eigenvectors of the matrix A_n(a_0, . . . , a_{2^n−1}) do not depend on the numbers a_0, . . . , a_{2^n−1}. For n = 1, it is obvious that the eigenvectors of the matrix A_1(a_0, a_1) can be chosen in the form a_1^+ = (1, 1)^T, a_1^− = (1, −1)^T, and the eigenvalues corresponding to them have the form µ_1^+ = a_0 + a_1, µ_1^− = a_0 − a_1. For the matrix: the eigenvectors have the form: The signs ±_1 and ±_2 in these expressions are taken independently of each other.
Indeed, the equalities: are true, and hence, (a_1^{±_1}, ±_2 a_1^{±_1})^T are the eigenvectors for the four different combinations of the signs ±_1 and ±_2. It is seen that the eigenvectors a_2^{(±_1,±_2)} = (1, ±_1 1, ±_2 1, ±_2 ±_1 1)^T of the matrix A_2(a_0, a_1, a_2, a_3) do not depend on the numbers {a_k}.
Let µ_{n−1}^0, . . . , µ_{n−1}^{2^{n−1}−1} be the eigenvalues corresponding to the above eigenvectors of the matrix A_{n−1}(a_0, . . . , a_{2^{n−1}−1}), independent of its coefficients; then vectors of the form a_n^k = (a_{n−1}^k, ±a_{n−1}^k)^T, where k = 0, . . . , 2^{n−1} − 1, are the eigenvectors of the matrix A_n(a_0, . . . , a_{2^n−1}). Indeed, we have: where µ̃_{n−1}^k is the eigenvalue of the matrix A_{n−1}(a_{2^{n−1}}, . . . , a_{2^n−1}) corresponding to the eigenvector a_{n−1}^k. Obviously, there are 2^n vectors of the form a_n^k = (a_{n−1}^k, ±a_{n−1}^k)^T. Therefore, the eigenvalues of the matrix A_n(a_0, . . . , a_{2^n−1}) are µ_n^{k,±} = µ_{n−1}^k ± µ̃_{n−1}^k. Orthogonality: the vectors (a_{n−1}^k, a_{n−1}^k)^T and (a_{n−1}^k, −a_{n−1}^k)^T are orthogonal to each other, and for distinct k the orthogonality of the vectors a_n^k follows from the induction hypothesis. The theorem is proven.
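Theorem 2 can be checked numerically on small matrices built by the rule of Theorem 1: products commute and remain of the form (6). A sketch in pure Python (helper names are ours):

```python
# Theorem 2 numerically: products of XOR-indexed matrices commute, and
# the product is again XOR-indexed (determined by its own first row).

def xor_matrix(a):
    N = len(a)
    return [[a[i ^ j] for j in range(N)] for i in range(N)]

def matmul(A, B):
    N = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

A = xor_matrix([5, 1, -2, 7])
B = xor_matrix([2, 0, 3, -1])

AB, BA = matmul(A, B), matmul(B, A)
assert AB == BA                          # commutativity
assert AB == xor_matrix(AB[0])           # closure: AB is of the form (6)
```

The closure check reflects the index change k → l, i ⊕ k = l, used in the proof: the (i, j) entry of AB depends only on i ⊕ j.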
Corollary 3. Let k = (k_n, . . . , k_1)_2, k_i = 0, 1; then the eigenvector of the matrix A_n numbered by k can be written in the form: The eigenvalue corresponding to the eigenvector a_n^{(k_n...k_1)_2} can be written in a similar form: Proof. Let us prove (13). For n = 1, we have a_1^+ = a_1^{(0)_2}, a_1^− = a_1^{(1)_2}, and (13) is true. If Formula (13) is true for the vector a_{n−1}^{(k_{n−1}...k_1)_2}, then by Theorem 3, we have: and hence, the formula (13) is also true for the vector a_n^{(k_n k_{n−1}...k_1)_2} = a_n^k. Let us prove (14). For n = 1, we have: where k_1 = 0, 1. Assume that the formula (14) is valid for n − 1, and prove its validity for n. By Theorem 3, writing ± = (−1)^{k_n}, we obtain: which proves (14). The corollary is proven.
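The closed form of Corollary 3 can be verified directly: the k-th eigenvector is the sign pattern ((−1)^{k_n j_n + . . . + k_1 j_1})_{j}, independent of the coefficients, and the corresponding eigenvalue is the sum of the a_j taken with those signs. A sketch in pure Python (helper names are ours):

```python
def xor_matrix(a):
    N = len(a)
    return [[a[i ^ j] for j in range(N)] for i in range(N)]

def sign(k, j):
    """(-1)^(k.j), where k.j is the bitwise dot product of k and j."""
    return -1 if bin(k & j).count("1") % 2 else 1

a = [5, 1, -2, 7]                    # n = 2, matrix of order 4
A = xor_matrix(a)
N = len(a)

for k in range(N):
    v = [sign(k, j) for j in range(N)]                 # eigenvector a_n^k
    lam = sum(sign(k, j) * a[j] for j in range(N))     # eigenvalue
    Av = [sum(A[i][j] * v[j] for j in range(N)) for i in range(N)]
    assert Av == [lam * vi for vi in v]                # A v = lam v

# eigenvectors with different numbers k are orthogonal
for k in range(N):
    for m in range(k + 1, N):
        assert sum(sign(k, j) * sign(m, j) for j in range(N)) == 0
```

The identity behind the check is (−1)^{k·(i⊕j)} = (−1)^{k·i} (−1)^{k·j}, which turns each row sum of A into a multiple of the same sign pattern.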
Let us denote the operation of taking the adjoint matrix of the matrix A_n(a_0, . . . , a_{2^n−1}), for convenience of presentation, in the form adj(A_n(a_0, . . . , a_{2^n−1})) = Ā_n(a_0, . . . , a_{2^n−1}). Then, by the definition of the adjoint matrix: In the case n = 0, we assume that ā_k = 1. Obviously, adj(Ā_n) = A_n, and if the matrices A_n and B_n of the form (6) are nonsingular, then, by virtue of Theorem 2 on the commutativity of A_n and B_n, we have: Define the operation of multiplying a matrix C_{k−1} by a block matrix in the form: Note that if C_{k−1} = C_{k−1}(c_0, . . . , c_{2^{k−1}−1}), then C_k = C_k(c_0, . . . , c_{2^{k−1}−1}, 0, . . . , 0): and hence, by Theorem 2, the matrix C_{k−1} · A_k has the form (6).

Theorem 4.
The determinant of the matrix A_n(a_0, . . . , a_{2^n−1}) can be written in the form: det A_n(a_0, . . . , a_{2^n−1}) = For any m = 0, . . . , n, the equality holds: where det A_{n−m} is calculated for the block matrix A_{n−m}, first with numerical coefficients a_{(k_n...k_{m+1})_2}, after which the corresponding matrices A_m^{(k_n...k_{m+1})_2} are substituted for them, and the adjoint matrix is taken for the resulting matrix. In addition, the adjoint matrix Ā_{n−m} is also constructed for numerical coefficients a_{(k_n...k_{m+1})_2}, and then the corresponding matrices A_m^{(k_n...k_{m+1})_2} are substituted for them. Let us prove (17). Note that all operations for constructing matrices in this formula are correct, since the determinant of a matrix and the elements of an adjoint matrix are algebraic expressions and all matrices involved in these constructions commute.
Denote the matrix on the right side of (17) by M, and multiply it on the right by the matrix A_n represented in the form of (10); then substitute the numbers a_{(k_n...k_{m+1})_2} for the matrices A_m^{(k_n...k_{m+1})_2}: where the formula (16) and an equality of the form (15) are used and λ ∈ R. Here, the matrix I_{n−m}^b has a superscript b indicating that it is a block matrix, with each block of size 2^m × 2^m. The resulting equality proves (17). The theorem is proven.

Corollary 4. The matrix Ā_n(a_0, . . . , a_{2^n−1}) is a matrix of the form (6). If the inverse matrix A_n^{−1}(a_0, . . . , a_{2^n−1}) exists, then it also has the form (6).
Proof. Obviously, Ā_1(a_0, a_1) = A_1(a_0, −a_1), and therefore, the matrix Ā_1(a_0, a_1) is of the form (6). Suppose the matrix Ā_{n−1}(a_0, . . . , a_{2^{n−1}−1}) for n > 1 is of the form (6). Then, using Formula (17) for m = n − 1, we obtain: Each of the matrices involved here has the form (6), and hence, by the induction hypothesis, the matrix Ā_0 is also a matrix of the form (6). Since, as said before in Theorem 4, the matrix C_{n−1} · A_n has the form (6), and the product of matrices of the form (6) is again a matrix of the form (6), the induction step is proven, and therefore, the corollary is true.
This equality follows from (17) when m = n − 1, taking into account (16) and the fact that adj A_1 + adj B_1 = adj(A_1 + B_1).
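Theorem 4's determinant can be cross-checked against Corollary 3: since the eigenvalues of A_n are the sign-weighted sums Σ_j (−1)^{k·j} a_j, the determinant is their product. A sketch in pure Python comparing this product with a direct cofactor expansion (helper names are ours):

```python
def xor_matrix(a):
    N = len(a)
    return [[a[i ^ j] for j in range(N)] for i in range(N)]

def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the small matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def sign(k, j):
    return -1 if bin(k & j).count("1") % 2 else 1

a = [5, 1, -2, 7]
N = len(a)
eig_product = 1
for k in range(N):
    eig_product *= sum(sign(k, j) * a[j] for j in range(N))

assert det(xor_matrix(a)) == eig_product
```

For this example the four eigenvalues are 11, −5, 1, 13, so both sides equal −715; the determinant vanishes exactly when one of the sign-weighted sums vanishes, which is the nondegeneracy condition used in the Dirichlet section below.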

Properties of Integrodifferential Operators
In this section, we present some well-known statements about fractional integrodifferential operators introduced in Section 1. In [27], the following assertions were proven.

The Dirichlet Problem
In this section, we study the problem H for α = 0, i.e., the following Dirichlet problem: In [20], the following assertion was proven. The operators δ[u](x) = Σ_{i=1}^{l} x_i u_{x_i}(x) and I_S also commute, δI_S u(x) = I_S δu(x), on functions u ∈ C^1(Ω), and the equality ∇I_S = I_S S^T ∇ is valid. The converse assertion is also valid. Proof. Let the function u ∈ C^4(Ω) satisfy the homogeneous Equation (1). Consider the function v(x) from (4): It is evident that v(x) ∈ C^4(Ω) and ∆^2 v(x) = 0, x ∈ Ω, i.e., the function v(x) is biharmonic in Ω. By virtue of Corollary 6, the functions v(S_n^{i_n} . . . S_1^{i_1} x) are also biharmonic in Ω. Further, as stated in (5), for the vectors: the equality V(x) = A_n U(x) holds, where A_n = A_n(a_0, . . . , a_{2^n−1}) is a matrix of the form (6). By the condition of the lemma and by virtue of Theorem 4, the determinant of this system does not vanish. By Corollary 4, the matrix A_n^{−1} also has the form (6). Let us introduce the notation B_n(b_0, . . . , b_{2^n−1}) = A_n^{−1}(a_0, . . . , a_{2^n−1}). Writing the first row of the matrix B_n as b = (b_0, . . . , b_{2^n−1})^T, we obtain: This immediately implies that the function u(x) is biharmonic in Ω. The lemma is proven. Proof. Let us prove that the homogeneous problem (24) and (25) has only the zero solution, and hence, the solution to the inhomogeneous problem (24) and (25), if it exists, is unique. Let u(x) be a solution to the homogeneous problem (24) and (25). By Lemma 8, the function u(x) is biharmonic in Ω and satisfies the homogeneous conditions (24). Therefore, the function u(x) is a solution to the following Dirichlet problem: By virtue of the uniqueness of the solution to this Dirichlet problem, we have u(x) ≡ 0 in Ω. The theorem is proven.
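The step from V(x) = A_n U(x) back to u amounts to inverting a matrix of the form (6), and by Corollary 4 the inverse B_n = A_n^{−1} is again of that form. Numerically, the first row of B_n can be built by inverting each eigenvalue from Corollary 3. A sketch in pure Python with exact rational arithmetic (helper names are ours):

```python
from fractions import Fraction

def xor_matrix(a):
    N = len(a)
    return [[a[i ^ j] for j in range(N)] for i in range(N)]

def sign(k, j):
    return -1 if bin(k & j).count("1") % 2 else 1

def xor_inverse(a):
    """First row b of B_n = A_n^{-1}: invert each eigenvalue (the
    sign-weighted sums of Corollary 3) and transform back.  Requires all
    eigenvalues to be nonzero, i.e., det A_n != 0 as in Theorem 4."""
    N = len(a)
    lam = [sum(sign(k, j) * a[j] for j in range(N)) for k in range(N)]
    assert all(l != 0 for l in lam)
    # inverse transform of (1/lam_k), with normalization 1/N
    return [sum(sign(k, j) / lam[k] for k in range(N)) / N
            for j in range(N)]

a = [Fraction(5), Fraction(1), Fraction(-2), Fraction(7)]
b = xor_inverse(a)
A, B = xor_matrix(a), xor_matrix(b)
N = len(a)
AB = [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
      for i in range(N)]
I = [[int(i == j) for j in range(N)] for i in range(N)]
assert AB == I    # B_n = A_n^{-1}, and B_n is itself of the form (6)
```

Once b is known, u(x) is recovered from the functions v(S_n^{i_n} . . . S_1^{i_1} x) as a linear combination with the coefficients b_j, exactly as in the proof of the lemma.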
From here, it follows that u(x) ∈ C^4(Ω) ∩ C^{2+λ}(Ω). Therefore, according to Lemma 7, in Ω, we have the equalities: Thus, the function u(x) satisfies Equation (24). Let us check the boundary conditions (25) of the studied problem. Taking into account that: for x ∈ ∂Ω, according to Lemma 7, we obtain: where k = 0, 1, and hence, the boundary conditions (25) for the function u(x) are satisfied. The theorem is proven.

The Neumann Problem
In this section, we consider the following analogue of the Neumann problem: First, let us consider a property of the solution to the Dirichlet problem for the classical biharmonic equation. Consider the following problem: where the function F(x) has the form F(x) = (δ + 4) f(x), and the functions h̃_0(x) and h̃_1(x) are defined by the equalities: Let us assume that the functions f(x), h_0(x), h_1(x) are sufficiently smooth. First, we study the problem for the homogeneous equation, i.e., consider the problem: It is known (see, for example, [36]) that the solution to the problem (33) can be represented as w(x) = w_0(x) + (1 − |x|^2) w_1(x), where the functions w_0(x), w_1(x) are solutions to the following problems: If we represent the solutions to the problems (34) and (35) in the form of the Poisson integral, then we obtain: where P(x, y) = (1/ω_l) (1 − |x|^2)/|x − y|^l is the Poisson kernel of the Dirichlet problem. Further, it is obvious that P(0, y) = 1/ω_l, and for the harmonic function w_0(x), the equality: holds. Moreover, in [18] (Lemma 5.1), it was proven that: ∫_{∂Ω} g(S_i y) ds_y = ∫_{∂Ω} g(y) ds_y.
Then, for the function w(x) = w_0(x) + (1 − |x|^2) w_1(x), we obtain: Thus, if w(x) is a solution to the problem (33), then the equality: is valid.
Let us study the problem (32) when the boundary conditions are homogeneous, i.e., consider the problem: In [27], it was proven that the solution to the problem (37) satisfies the equality: Thus, applying the equalities (36) and (38) to the solution of the problem (32), we obtain: Hence, it follows that for the condition w(0) = 0 to be satisfied, it is necessary and sufficient that the equality: be satisfied. Thus, we have proven the following assertion.

Lemma 9.
If the function w(x) is a solution to the problem (32), then for the condition w(0) = 0 to be satisfied, it is necessary and sufficient to satisfy the equality (39).
Let us consider an analogue of the Neumann problem (30) and (31). The following assertion is valid.
Proof. Assume that a solution u(x) to the problem (30) and (31) exists. Let us apply the operator δ to the function u(x) and denote v(x) = δ[u](x). Note that for x ∈ ∂Ω, the following equalities hold: Then, taking into account the equality ∆^2 δ[u](x) = (δ + 4)∆^2 u(x), x ∈ Ω, and the commutativity of the operators δ and I_{S_i}, we obtain: From the boundary conditions of the problem (30) and (31), taking into account the equalities (41), we obtain: Thus, if u(x) is a solution to the problem (30) and (31), then for the function v(x) = δ[u](x), we obtain the following Dirichlet problem: Moreover, from the equality v(x) = δ[u](x), it follows that the solution to this problem must satisfy the condition v(0) = 0. Let us find out when this condition is fulfilled. We introduce the notation: If v(x) is a solution to the problem (42), then the function w(x) from (43) satisfies the conditions of the problem (32) with the functions: From (43), it follows that: Then, under the condition a_i = 0, the equality v(0) = 0 is satisfied if and only if the condition w(0) = 0 is satisfied. In turn, by the assertion of Lemma 9, for the equality w(0) = 0 to hold, it is necessary and sufficient to satisfy the condition (39). Since in our case: the condition for the solvability of the problem (30) and (31) can be rewritten as (40). Thus, the necessity of the condition (40) for the existence of a solution to the problem (30) and (31) is proven. Let us show that the fulfillment of the condition (40) is also sufficient for the existence of a solution to the problem (30) and (31). To do this, consider in Ω the following Neumann problem with respect to the function z(x): It is known (see, for example, [36]) that the solvability condition of this problem can be written as: By virtue of [18] (Lemma 5.1), ∫_{∂Ω} g(S_i x) ds_x = ∫_{∂Ω} g(x) ds_x, whence it follows that: and therefore, this condition can be rewritten in the form of (40).
If this condition is satisfied, then a solution to the problem (44) and (45) exists and is unique up to a constant term. Further, as in the case of the Dirichlet problem, the solution to the Neumann problem (30) and (31) can be found by the formula: The theorem is proven.
Further, we define the solvability condition for the problem H in the case α = 1, µ = 0. In this case, the boundary conditions are written as: The second condition from (48), taking into account the first condition, can be rewritten in the form: Then, the problem under consideration is equivalent to the problem (30) and (31) with the following boundary conditions: Therefore, Theorem 7 implies the following assertion.

The General Case of the Main Problem
In this section, we consider the general case of the problem H. The following assertion is true. Theorem 8. Let 0 < α ≤ 1, µ ≥ 0, and let the coefficients of the operator L_n satisfy the condition: (1) If µ > 0, then a solution to the problem H exists, is unique, belongs to the class C^{λ+4}(Ω), and can be represented in the form: where v(x) is a solution of the following Dirichlet problem: (2) If µ = 0, then for the solvability of the problem H, it is necessary and sufficient that the condition: be satisfied, where f_{1−α}(x) is defined by the equality f_{1−α}(x) = J_4^{1−α}[f](x). If a solution to the problem exists, it is unique up to a constant term, belongs to the class C^{λ+4}(Ω), and can be represented as: where v(x) is a solution to the problem (51) for µ = 0.
Proof. Let the function u(x) be a solution to the problem H. We apply the operator D_µ^α to this function and denote v(x) = D_µ^α[u](x). Let us apply the operator ∆^2 to this function. Then, by virtue of the equality (20): From here, it follows that for any k = 0, . . . , 2^n − 1, ∆^2 v(S_n^{i_n} . . . S_1^{i_1} x). In [27], a representation of the function D_µ^{α+1}[u](x) through the operator δ and D_µ^α[u](x) was proven. Then, from the boundary conditions (2) and (3), it follows that: Thus, if u(x) is a solution to the problem H, then for the function v(x) = D_µ^α[u](x), we obtain the Dirichlet problem (51). If f(x) ∈ C^{λ+1}(Ω), then by virtue of Lemma 2, D_{µ+4}^α[f](x) ∈ C^λ(Ω). Moreover, by the condition of the theorem, g_0(x) ∈ C^{λ+4}(∂Ω), g_1(x) ∈ C^{λ+3}(∂Ω). Then, by virtue of the assertion of Theorem 6, a solution to the problem (51) exists, is unique, and belongs to the class C^{λ+4}(Ω). If we apply the operator J_µ^α to both sides of the equality v(x) = D_µ^α[u](x), then by virtue of Lemma 3, in the case µ > 0, we obtain u(x) = J_µ^α[v](x); that is, if a solution to the problem H exists, then the representation (50) is valid for it.
Let, conversely, the function v(x) be a solution to the problem (51). Let us show that the function u(x) = J_µ^α[v](x) satisfies all the conditions of the problem H. Indeed, if we apply the operator L_n to this function, then we obtain: i.e., the function u(x) = J_µ^α[v](x) satisfies Equation (1). Let us check the fulfillment of the boundary conditions (2) and (3). By the assertion of Lemma 4, namely by virtue of the equality (19) in the case µ > 0, the equalities: are satisfied, i.e., the boundary conditions are also satisfied. It remains to investigate the case µ = 0. In this case, the function v(x), a solution to the problem (51), must satisfy the additional condition v(0) = 0.
As the function F(x) = D_{µ+4}^α[f](x), by virtue of Lemma 6, is represented as F(x) = (δ + 4) f_{1−α}(x), with f_{1−α}(x) = J_4^{1−α}[f](x), then, by Lemma 9, for the equality v(0) = 0 to hold, it is necessary and sufficient that the condition (39) be satisfied. In our case, 2h̃_0(y) − h̃_1(y) = 2g_0(y) − g_1(y), and therefore, the condition (39) can be rewritten in the form of (52). Thus, the necessity of the condition (52) for the existence of a solution to the problem H is proven. The remaining part of the theorem can be proven in the same way as in the case µ > 0. The theorem is proven.

Conclusions
Summarizing the investigation carried out, we note that, with the help of involution-type transformations, a nonlocal analogue of the biharmonic operator was introduced. Then, for the corresponding biharmonic equation with multiple involutions, the solvability of boundary value problems with a fractional-order boundary operator having a derivative of the Hadamard type was investigated. Theorems on the existence and uniqueness of solutions to the problems under consideration were proven, and exact solvability conditions were found depending on the order of the boundary operators. In this work, we extended the range of well-posed formulations of boundary value problems for a class of nonlocal partial differential equations. In the future, we plan to investigate spectral questions for operators with multiple involutions.