On Eigenfunctions and Eigenvalues of a Nonlocal Laplace Operator with Multiple Involution

Abstract: We study the eigenfunctions and eigenvalues of the boundary value problem for the nonlocal Laplace equation with multiple involution. An explicit form of the eigenfunctions and eigenvalues for the unit ball is obtained. A theorem on the completeness of the eigenfunctions of the problem under consideration is proved.


Introduction and the Problem Statement
The notion of a nonlocal operator and the related notion of a nonlocal differential equation appeared relatively recently in the theory of differential equations. In [1], loaded equations, equations containing fractional derivatives of the unknown function, and equations with deviating arguments are considered. Equations in which the unknown function and its derivatives enter for different values of the argument are called nonlocal differential equations.
A special place among nonlocal differential equations is occupied by equations in which the deviation of the argument has an involutive character. An involution is a function that is its own inverse: S^2(x) = S(S(x)) = x. Differential equations containing an involutive deviation in the unknown function or its derivatives are model equations with an alternating deviation of the argument. Such equations can be classified as functional differential equations.
Mathematicians have been studying differential equations with involution for a long time. For example, in 1816, Babbage [2] considered algebraic and differential equations with involution. The monographs of D. Przeworska-Rolewicz [3] and J. Wiener [4] are devoted to the theory of solvability of various differential equations with involution. In the papers [5-14], spectral problems for differential operators of the first and second orders with involution were studied. In [15-22], the results of studying spectral problems with involution are used to solve inverse problems. A series of works by Alberto Cabada and F. Adrian F. Tojo is devoted to the creation of the theory of the Green's function for one-dimensional differential equations with involution (see, for example, Refs. [23,24], as well as the bibliography in these papers). The papers [25-28] are devoted to questions of the theory of solvability of some partial differential equations with involution. Elliptic functional differential equations with mappings of compression and extension type are considered in [29-31]. In addition, in [32-34], some classes of functional differential equations with deviating arguments are investigated. In [35], for the following ODE:

y''(t) + ay''(−t) = λy(t), −π < t < π,

the boundary value problem with the Dirichlet conditions y(−π) = y(π) = 0 is studied. It is shown that the eigenfunctions and eigenvalues of this problem have the form:

y_k(t) = sin kt, λ_k = −(1 + a)k^2; y_m(t) = cos(m − 1/2)t, λ_m = −(1 − a)(m − 1/2)^2,

where k, m ∈ N. This system is complete in L_2[−π, π]. Note that the eigenfunctions of this problem for a = 0 coincide with the eigenfunctions of the classical equation and differ only in eigenvalues.
In the present paper, generalizing the problems considered in [36] to the case of multiple involution, we introduce the concept of a nonlocal analogue of the Laplace operator. In Section 2, matrices of a special form arising in this operator are investigated. Then, in Section 3, we study the structure of the eigenfunctions and eigenvalues of the Dirichlet problem. In Section 4, the eigenfunctions and eigenvalues of the Dirichlet problem for the nonlocal Laplace equation in the unit ball are constructed in explicit form, and the completeness of the system of eigenfunctions is proved.
Let S_1, ..., S_n be mutually commuting real orthogonal matrices satisfying S_i^2 = I (a multiple involution), and let Ω ⊂ R^l be a bounded domain invariant with respect to each of the mappings x → S_i x. For an index i with the binary representation (i)_2 = (i_n ... i_1)_2, 0 ≤ i ≤ 2^n − 1, we write S_n^{i_n} ... S_1^{i_1} x. Let us introduce the following nonlocal differential operator:

L_n u(x) = Σ_{i=0}^{2^n−1} a_i ∆u(S_n^{i_n} ... S_1^{i_1} x), a_i ∈ R,

and consider the following boundary value problem.

Problem S. Find a function u(x) ≢ 0 from the class u ∈ C(Ω̄) ∩ C^2(Ω), satisfying the conditions:

L_n u(x) + λu(x) = 0, x ∈ Ω, (1)

u(x) = 0, x ∈ ∂Ω, (2)

where λ ∈ R. If n > 0, a_0 = 1, a_j = 0, j = 1, ..., 2^n − 1, then this problem coincides with the spectral Dirichlet problem for the classical Laplace operator.

Preliminaries
To study the above problem (1) and (2), we need some auxiliary statements. Let us introduce the function:

v(x) = Σ_{i=0}^{2^n−1} a_i u(S_n^{i_n} ... S_1^{i_1} x), (3)

where the summation is taken in the ascending order with respect to the index i. From this equality it is easy to conclude that functions of the form v(S_n^{i_n} ... S_1^{i_1} x) are expressed through the functions u(S_n^{j_n} ... S_1^{j_1} x):

V(x) = A_n U(x), (4)

where V(x) = (v(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1}, U(x) = (u(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1}, and A_n = (a_{i,j})_{i,j=0,...,2^n−1} is a matrix of order 2^n × 2^n. Let us investigate the structure of matrices of the form A_n.

Theorem 1. The matrix A_n from the equality (4) can be represented in the form:

A_n = (a_{i⊕j})_{i,j=0,...,2^n−1}, (5)

where the operation in the subscript of the matrix coefficients is understood in the following sense: i ⊕ j ≡ (i)_2 ⊕ (j)_2 = ((i_n + j_n mod 2) ... (i_1 + j_1 mod 2))_2, where (i)_2 = (i_n ... i_1)_2 is the representation of the index in the binary number system. A linear combination of matrices of the form (5) is a matrix of the form (5).

Proof. Since the matrices S_1, ..., S_n commute and S_m^2 = I, we have S_m^{k_m} S_m^{i_m} = S_m^{k_m + i_m mod 2}, and therefore:

v(S_n^{i_n} ... S_1^{i_1} x) = Σ_{k=0}^{2^n−1} a_k u(S_n^{k_n + i_n} ... S_1^{k_1 + i_1} x) = Σ_{j=0}^{2^n−1} a_{i⊕j} u(S_n^{j_n} ... S_1^{j_1} x),

where the change of the summation index j = k ⊕ i is one-to-one. Hence a_{i,j} = a_{i⊕j}. The statement about linear combinations is obvious, since αa_{i⊕j} + βb_{i⊕j} = (αa + βb)_{i⊕j}.
The theorem is proved.
We now present corollaries of Theorem 1 that are important for the further analysis.

Corollary 1. The matrix A_n from the equality (4) is uniquely determined by its first row (a_0, a_1, ..., a_{2^n−1}).

Indeed, the i-th row of the matrix A_n can be written through its first row in the form (a_{i⊕0}, a_{i⊕1}, ..., a_{i⊕(2^n−1)}). This property of the matrix A_n we denote by the equality A_n ≡ A_n(a_0, ..., a_{2^n−1}).
Corollary 2. The matrix A_n is symmetric and admits the block representation:

A_n(a_0, ..., a_{2^n−1}) = | A_{n−1}(a_0, ..., a_{2^{n−1}−1})      A_{n−1}(a_{2^{n−1}}, ..., a_{2^n−1}) |
                           | A_{n−1}(a_{2^{n−1}}, ..., a_{2^n−1})  A_{n−1}(a_0, ..., a_{2^{n−1}−1})     |,   (8)

and, more generally, for any 0 ≤ m ≤ n, when A_n is divided into blocks of order 2^m, the block with the block indices p = (i_n ... i_{m+1})_2, q = (j_n ... j_{m+1})_2 has the form A_m(a_{(p⊕q)2^m}, ..., a_{(p⊕q)2^m + 2^m − 1}); that is, the block structure of A_n repeats the structure (5) at the level of blocks. (9)

Proof. The symmetry of A_n follows from the commutativity of the operation ⊕: a_{i,j} = a_{i⊕j} = a_{j⊕i} = a_{j,i}. Further, it is easy to see the validity of the equalities:

(a_{(0i_{n−1}...i_1)_2 ⊕ (0j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1} = (a_{(1i_{n−1}...i_1)_2 ⊕ (1j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1}   (10)

and:

(a_{(0i_{n−1}...i_1)_2 ⊕ (1j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1} = (a_{(1i_{n−1}...i_1)_2 ⊕ (0j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1},

from which the property (8) follows. Indeed, if we divide the matrix A_n into four equally sized square blocks and consider the lower right block, then its indices are located in the range (10...0)_2 ≤ i, j ≤ (11...1)_2, which means that this block, by virtue of (10), has the form:

(a_{(0i_{n−1}...i_1)_2 ⊕ (0j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1} = A_{n−1}(a_0, ..., a_{2^{n−1}−1});

i.e., the diagonal blocks of the matrix A_n are of the form A_{n−1}(a_0, ..., a_{2^{n−1}−1}). Similarly, the top right block of A_n has the indices in the range (00...0)_2 ≤ i ≤ (01...1)_2, (10...0)_2 ≤ j ≤ (11...1)_2, which means that this block has the form:

(a_{(0i_{n−1}...i_1)_2 ⊕ (1j_{n−1}...j_1)_2})_{i,j=0,...,2^{n−1}−1} = A_{n−1}(a_{2^{n−1}}, ..., a_{2^n−1}),

since for such i and j we have i ⊕ j = 2^{n−1} + (i_{n−1}...i_1)_2 ⊕ (j_{n−1}...j_1)_2. By the second of the equalities above, the lower left block of A_n has the same form A_{n−1}(a_{2^{n−1}}, ..., a_{2^n−1}). Equality (8) is proved.

Now consider a partition of A_n into blocks of order 2^m. The element a_{i,j} is located in the block with the indices (i_n ... i_{m+1})_2, (j_n ... j_{m+1})_2, and within this block it has the local indices (i_m ... i_1)_2, (j_m ... j_1)_2; therefore, it is equal to:

a_{i⊕j} = a_{b·2^m + (i_m...i_1)_2 ⊕ (j_m...j_1)_2},   b = (i_n ... i_{m+1})_2 ⊕ (j_n ... j_{m+1})_2,

so the whole block has the form A_m(a_{b·2^m}, ..., a_{b·2^m + 2^m − 1}). This coincides with Formula (5). Therefore, the corollary is proved.

Example 1. Property (8) of the matrix A_n can be seen on the example of the matrices A_1, A_2 and A_3:

A_1 = | a_0 a_1 |,   A_2 = | a_0 a_1 a_2 a_3 |,
      | a_1 a_0 |          | a_1 a_0 a_3 a_2 |
                           | a_2 a_3 a_0 a_1 |
                           | a_3 a_2 a_1 a_0 |

A_3 = | a_0 a_1 a_2 a_3 a_4 a_5 a_6 a_7 |
      | a_1 a_0 a_3 a_2 a_5 a_4 a_7 a_6 |
      | a_2 a_3 a_0 a_1 a_6 a_7 a_4 a_5 |
      | a_3 a_2 a_1 a_0 a_7 a_6 a_5 a_4 |
      | a_4 a_5 a_6 a_7 a_0 a_1 a_2 a_3 |
      | a_5 a_4 a_7 a_6 a_1 a_0 a_3 a_2 |
      | a_6 a_7 a_4 a_5 a_2 a_3 a_0 a_1 |
      | a_7 a_6 a_5 a_4 a_3 a_2 a_1 a_0 |,

and property (9), for n = 3 and m = 1, is written as:

A_3 = | A_1(a_0,a_1) A_1(a_2,a_3) A_1(a_4,a_5) A_1(a_6,a_7) |
      | A_1(a_2,a_3) A_1(a_0,a_1) A_1(a_6,a_7) A_1(a_4,a_5) |
      | A_1(a_4,a_5) A_1(a_6,a_7) A_1(a_0,a_1) A_1(a_2,a_3) |
      | A_1(a_6,a_7) A_1(a_4,a_5) A_1(a_2,a_3) A_1(a_0,a_1) |.

Let us investigate the product of matrices of the form (5).

Theorem 2. Multiplication of matrices of the form (5) is commutative. The product of matrices of the form (5) is again a matrix of the form (5).

Proof. For n = 1 we have:

A_1 B_1 = | a_0 a_1 | | b_0 b_1 | = | a_0b_0 + a_1b_1  a_0b_1 + a_1b_0 | = B_1 A_1.
          | a_1 a_0 | | b_1 b_0 |   | a_0b_1 + a_1b_0  a_0b_0 + a_1b_1 |

Assuming that the multiplication of matrices A_{n−1} and B_{n−1} of order 2^{n−1} is commutative, using the property (8) and equalities similar to the one above, it is easy to obtain A_n B_n = B_n A_n.

Thus, it is not hard to see that:

(A_n B_n)_{i,j} = Σ_{k=0}^{2^n−1} a_{i⊕k} b_{k⊕j}.

In the sum from the formula above, let us change the index k → l, as in Theorem 1, according to the equality l = i ⊕ k, whence k = i ⊕ l; it means that the correspondence k ∼ l is one-to-one, and the replacement of the index k → l changes only the order of summation in the sum. By virtue of the associativity of the operation ⊕, we have:

(A_n B_n)_{i,j} = Σ_{l=0}^{2^n−1} a_l b_{(i⊕l)⊕j} = Σ_{l=0}^{2^n−1} a_l b_{l⊕(i⊕j)}.

The first row of the matrix A_n B_n is:

c_j = Σ_{l=0}^{2^n−1} a_l b_{l⊕j},   j = 0, ..., 2^n − 1,

and hence, the matrix C of the form (5), constructed by the first row of A_n B_n, is written in the form coinciding with A_n B_n:

A_n B_n = (c_{i⊕j})_{i,j=0,...,2^n−1} = C.

The theorem is proved.
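Theorem 2 can be checked numerically. The following sketch (our illustration, not part of the paper) builds matrices of the form (5) from arbitrary first rows via the bitwise XOR of the indices and verifies both commutativity and closure under multiplication for n = 3:

```python
# Matrices of the form (5): A_n[i][j] = a_{i XOR j}, determined by the first row.
# We check that two such matrices commute and that their product is again of
# the same form, with first row c_j = sum_k a_k * b_{k XOR j}.

def xor_matrix(row):
    """Build the matrix A_n(a_0, ..., a_{2^n - 1}) with entries a_{i XOR j}."""
    m = len(row)
    return [[row[i ^ j] for j in range(m)] for i in range(m)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

a = [1, 2, 3, 4, 5, 6, 7, 8]          # first row of A_3 (arbitrary)
b = [2, -1, 0, 3, 1, 4, -2, 5]        # first row of B_3 (arbitrary)
A, B = xor_matrix(a), xor_matrix(b)

AB, BA = matmul(A, B), matmul(B, A)
print(AB == BA)                        # commutativity

# first row of AB: c_j = sum_k a_k b_{k XOR j}, and AB = xor_matrix(c)
c = [sum(a[k] * b[k ^ j] for k in range(8)) for j in range(8)]
print(AB == xor_matrix(c))             # closure under multiplication
```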
The following theorem gives an idea of the eigenvectors and eigenvalues of matrices of the form (5).

Theorem 3. The eigenvectors of the matrix A_n(a_0, ..., a_{2^n−1}) can be chosen in the form:

a_n^k = (a_{n−1}^k, ±a_{n−1}^k)^T, (11)

where a_{n−1}^k is an eigenvector of the matrix A_{n−1}(a_0, ..., a_{2^{n−1}−1}), and they do not depend on the numbers a_0, ..., a_{2^n−1}. The eigenvectors of the matrix A_n are orthogonal. The eigenvalues of the matrix A_n are of the form:

µ_n^k = µ_{n−1}^k ± µ̂_{n−1}^k,

where µ_{n−1}^k and µ̂_{n−1}^k are the eigenvalues of the matrices A_{n−1}(a_0, ..., a_{2^{n−1}−1}) and Â_{n−1} = A_{n−1}(a_{2^{n−1}}, ..., a_{2^n−1}), respectively, corresponding to the eigenvector a_{n−1}^k; in particular, µ_1^± = a_0 ± a_1.

Proof. Let us carry out the proof by induction on n, establishing at each step that the eigenvectors of the matrix A_n(a_0, ..., a_{2^n−1}) do not depend on the numbers a_0, ..., a_{2^n−1}. For n = 1, it is obvious that the eigenvectors of the matrix A_1(a_0, a_1) can be chosen in the form a_1^+ = (1, 1)^T, a_1^− = (1, −1)^T, and the eigenvalues corresponding to them have the form µ_1^+ = a_0 + a_1, µ_1^− = a_0 − a_1. For the matrix:

A_2(a_0, a_1, a_2, a_3) = | A_1(a_0, a_1)  A_1(a_2, a_3) |
                          | A_1(a_2, a_3)  A_1(a_0, a_1) |,

the eigenvectors are:

a_2^{(±_1, ±_2)} = (a_1^{±_1}, ±_2 a_1^{±_1})^T,

where the signs + and − in the expressions ±_1 and ±_2 take their values independently of each other. Indeed, with Â_1 = A_1(a_2, a_3), the equalities:

A_2 (a_1^{±_1}, ±_2 a_1^{±_1})^T = (A_1 a_1^{±_1} ±_2 Â_1 a_1^{±_1}, Â_1 a_1^{±_1} ±_2 A_1 a_1^{±_1})^T = (µ_1^{±_1} ±_2 µ̂_1^{±_1}) (a_1^{±_1}, ±_2 a_1^{±_1})^T

are true, since by the induction assumption a_1^{±_1} is an eigenvector of both A_1 and Â_1, and hence (a_1^{±_1}, ±_2 a_1^{±_1})^T are the eigenvectors for the four different combinations of the signs ±_1 and ±_2. It is seen that the eigenvectors a_2^{(±_1, ±_2)} = (1, ±_1 1, ±_2 1, ±_2 ±_1 1)^T of the matrix A_2(a_0, a_1, a_2, a_3) do not depend on the numbers {a_k}. The general induction step from n − 1 to n is carried out in the same way, using the block representation (8).
Orthogonality. It is obvious that the eigenvectors a_1^+ = (1, 1)^T and a_1^− = (1, −1)^T of the matrix A_1(a_0, a_1) are orthogonal. If the eigenvectors a_{n−1}^k, k = 0, ..., 2^{n−1} − 1, of the matrix A_{n−1}(a_0, ..., a_{2^{n−1}−1}) are chosen orthogonal, then the eigenvectors a_n^k = (a_{n−1}^k, ±a_{n−1}^k)^T of the matrix A_n(a_0, ..., a_{2^n−1}) are also orthogonal: for ε, δ ∈ {+1, −1},

((a_{n−1}^k, ε a_{n−1}^k)^T, (a_{n−1}^m, δ a_{n−1}^m)^T) = (1 + εδ)(a_{n−1}^k, a_{n−1}^m),

which vanishes for k ≠ m by the induction hypothesis, and for k = m, ε ≠ δ because 1 + εδ = 0. The theorem is proved.

Let us give important consequences from Theorem 3 that allow us to build the eigenvectors and eigenvalues of the matrix A_n explicitly.

Corollary 3. Let k = (k_n ... k_1)_2, k_i = 0, 1; then the eigenvector of the matrix A_n, numbered by k, can be written in the form:

a_n^{(k_n...k_1)_2} = ((−1)^{k⊗i})^T_{i=0,...,2^n−1}, (12)

where k ⊗ i ≡ (k_n ... k_1)_2 ⊗ (i_n ... i_1)_2 = k_n i_n + ... + k_1 i_1 is a "scalar" product of the indexes (k)_2 and (i)_2. The eigenvalue corresponding to the eigenvector a_n^{(k_n...k_1)_2} can be written in a similar form:

µ_n^{(k_n...k_1)_2} = Σ_{i=0}^{2^n−1} (−1)^{k⊗i} a_i. (13)

Proof. Let us prove (12). For n = 1 we have a_1^{k_1} = ((−1)^{k_1·0}, (−1)^{k_1·1})^T, where k_1 = 0, 1, which coincides with the eigenvectors a_1^± from Theorem 3. Assume that Formula (12) is valid for n − 1 and prove its validity for n. By Theorem 3, changing the notation ± = (−1)^{k_n}, we write:

a_n^{(k_n k_{n−1}...k_1)_2} = (a_{n−1}^{(k_{n−1}...k_1)_2}, (−1)^{k_n} a_{n−1}^{(k_{n−1}...k_1)_2})^T = ((−1)^{k⊗i})^T_{i=0,...,2^n−1},

since for i ≥ 2^{n−1} we have i_n = 1 and k ⊗ i = k_n + (k_{n−1}...k_1)_2 ⊗ (i_{n−1}...i_1)_2, which proves (12). Formula (13) is obtained in the same way from the recursion µ_n^k = µ_{n−1}^k ± µ̂_{n−1}^k of Theorem 3. The corollary is proved.

Example 2.
For n = 2 the matrix:

A_2(a_0, a_1, a_2, a_3) = | a_0 a_1 a_2 a_3 |
                          | a_1 a_0 a_3 a_2 |
                          | a_2 a_3 a_0 a_1 |
                          | a_3 a_2 a_1 a_0 |,

according to Corollary 3, has the following four eigenvectors:

a_2^k = ((−1)^{k⊗i})^T_{i=0,...,3},   k = (00)_2, (01)_2, (10)_2, (11)_2,

or:

a_2^0 = (1, 1, 1, 1)^T, a_2^1 = (1, −1, 1, −1)^T, a_2^2 = (1, 1, −1, −1)^T, a_2^3 = (1, −1, −1, 1)^T,

with the corresponding eigenvalues:

µ_0 = a_0 + a_1 + a_2 + a_3, µ_1 = a_0 − a_1 + a_2 − a_3, µ_2 = a_0 + a_1 − a_2 − a_3, µ_3 = a_0 − a_1 − a_2 + a_3,

where, for convenience, we transfer the superscript of the eigenvalue to the subscript, as n = 2 is fixed. For the matrix A_3(a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7), from Formula (12) we obtain the eigenvectors in the form:

a_3^k = ((−1)^{k⊗i})^T_{i=0,...,7},   k = 0, ..., 7.

For example, for k = (101)_2 = 5 we have an eigenvector of the form:

a_3^5 = (1, −1, 1, −1, −1, 1, −1, 1)^T.

The eigenvalue corresponding to the eigenvector a_3^5 = a_3^{(101)_2} is written in a similar form:

µ_3^5 = a_0 − a_1 + a_2 − a_3 − a_4 + a_5 − a_6 + a_7.
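The formulas of Corollary 3 are easy to test numerically. The sketch below (ours, not from the paper) checks, for an arbitrary first row with n = 3, that each vector ((−1)^{k⊗i})_i is an eigenvector of A_3 with the eigenvalue Σ_i (−1)^{k⊗i} a_i; the "scalar" product k ⊗ i is computed modulo 2 as the popcount of the bitwise AND of k and i:

```python
# For A_3(a_0, ..., a_7) with entries a_{i XOR j}, the vector
# v_k = ((-1)^{k (x) i})_i, where k (x) i = k_n i_n + ... + k_1 i_1,
# is an eigenvector with eigenvalue mu_k = sum_i (-1)^{k (x) i} a_i.

def sign(k, i):
    return (-1) ** bin(k & i).count("1")   # (-1)^{k "scalar" i}

a = [3.0, 1.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0]   # arbitrary first row
m = len(a)
A = [[a[i ^ j] for j in range(m)] for i in range(m)]

for k in range(m):
    v = [sign(k, i) for i in range(m)]
    mu = sum(sign(k, i) * a[i] for i in range(m))
    Av = [sum(A[i][j] * v[j] for j in range(m)) for i in range(m)]
    assert all(abs(Av[i] - mu * v[i]) < 1e-12 for i in range(m))

# e.g. k = 5 = (101)_2 gives the eigenvector (1, -1, 1, -1, -1, 1, -1, 1)
print([sign(5, i) for i in range(8)])
```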

The Main Problem S
To study Problem S, the following statement is required.

Lemma 1 ([36] (Lemma 3.1)). Let S be an orthogonal matrix; then the operator I_S u(x) = u(Sx) and the Laplace operator ∆ commute on functions u ∈ C^2(Ω): ∆I_S u(x) = I_S ∆u(x). The operator Λu(x) = Σ_{i=1}^{l} x_i u_{x_i}(x) and the operator I_S also commute on functions u ∈ C^1(Ω): ΛI_S u(x) = I_S Λu(x), and the equality ∇I_S = I_S S^T ∇ is valid.
Writing Equation (1) at the points S_n^{i_n} ... S_1^{i_1} x and using Lemma 1, we obtain the following statement.

Corollary 4. If u(x) is a solution to Problem S and U(x) = (u(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1}, then the equality:

A_n ∆U(x) + λU(x) = 0, x ∈ Ω, (15)

holds, where A_n = A_n(a_0, ..., a_{2^n−1}).

Theorem 4. Let λ be an eigenvalue of Problem S and u(x) ≠ 0 its eigenfunction. Then, for every k = 0, ..., 2^n − 1 with µ_n^k ≠ 0, the function w(x) = (U(x), a_n^k), if it is not identically zero, is a solution to the Dirichlet problem:

∆w(x) + µw(x) = 0, x ∈ Ω, (16)

w(x) = 0, x ∈ ∂Ω, (17)

where µ = λ/µ_n^k and µ_n^k ≠ 0 is the eigenvalue of the matrix A_n(a_0, ..., a_{2^n−1}) corresponding to the vector a_n^k.
Proof. Let λ be an eigenvalue of Problem S and u(x) ≠ 0 be its eigenfunction. By Corollary 4, the equality (15) is true. Let us multiply it scalarly by the vector a_n^k. Then, we have:

(A_n ∆U(x), a_n^k) + λ(U(x), a_n^k) = 0,

whence, using the symmetry of the matrix A_n(a_0, ..., a_{2^n−1}) (see Corollary 2) and the properties of the vector a_n^k, we find:

(∆U(x), A_n a_n^k) + λw(x) = µ_n^k (∆U(x), a_n^k) + λw(x) = 0,

whence follows: µ_n^k ∆w(x) + λw(x) = 0, and since λ = µ_n^k µ and µ_n^k ≠ 0, we get (16): ∆w(x) + µw(x) = 0. Finally, since u(x) = 0, x ∈ ∂Ω, and x ∈ ∂Ω ⇒ S_n^{i_n} ... S_1^{i_1} x ∈ ∂Ω, then U(x) = 0, x ∈ ∂Ω, and therefore w(x) = (U(x), a_n^k) = 0, x ∈ ∂Ω. The theorem is proved.
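In the simplest special case n = 1, l = 1, S_1 x = −x, Ω = (−1, 1), the relation between Problem S and the local problem (16) and (17) can be checked numerically. The sketch below is our illustration, not taken from the paper; it assumes that the nonlocal equation takes the one-dimensional form a_0 u''(x) + a_1 u''(−x) + λu(x) = 0. We take w(x) = sin(πx), which solves w'' + µw = 0 with µ = π^2 and w(±1) = 0, build u(x) = w(x) − w(−x), and verify the equation with λ = (a_0 − a_1)π^2, i.e. λ = µ_1^1 µ:

```python
import math

a0, a1 = 3.0, 0.5                      # coefficients of the nonlocal operator
mu = math.pi ** 2                      # w(x) = sin(pi x): w'' + mu w = 0, w(+-1) = 0
lam = (a0 - a1) * mu                   # lambda = mu_1^1 * mu, with mu_1^1 = a0 - a1

def w(x):  return math.sin(math.pi * x)
def u(x):  return w(x) - w(-x)         # u(x) = (W(x), a_1^1), a_1^1 = (1, -1)^T

def u2(x):                             # exact second derivative of u
    return -math.pi ** 2 * (w(x) - w(-x))

# check  a0 u''(x) + a1 u''(-x) + lambda u(x) = 0  on a grid
for x in [k / 10 for k in range(-9, 10)]:
    residual = a0 * u2(x) + a1 * u2(-x) + lam * u(x)
    assert abs(residual) < 1e-9
print("ok")
```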
The following statement, converse to Theorem 4, is important, since it allows us to construct solutions to Problem S.

Theorem 5. Let the function w(x) ≠ 0 be a solution to the problem (16) and (17):

∆w(x) + µw(x) = 0, x ∈ Ω; w(x) = 0, x ∈ ∂Ω,

for some µ. Then the function:

u_k(x) = (W(x), a_n^k), (18)

where W(x) = (w(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1} and a_n^k is an eigenvector of the matrix A_n = A_n(a_0, ..., a_{2^n−1}) with an eigenvalue µ_n^k ≠ 0, is a solution to the Dirichlet problem (1) and (2) for λ = µ_n^k µ.
Proof. Let w(x) ≠ 0 be a solution to the problem (16) and (17), and let U_k(x) = (u_k(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1}. By Corollary 3, a_n^k = ((−1)^{k⊗i})^T_{i=0,...,2^n−1}, and therefore:

u_k(S_n^{j_n} ... S_1^{j_1} x) = Σ_{i=0}^{2^n−1} (−1)^{k⊗i} w(S_n^{i_n+j_n} ... S_1^{i_1+j_1} x) = (−1)^{k⊗j} u_k(x),

where the index change i → i ⊕ j is made as in Theorem 2. Thus, U_k(x) = u_k(x) a_n^k and ∆U_k(x) = ∆u_k(x) a_n^k. Hence, since by Lemma 1 each function w(S_n^{i_n} ... S_1^{i_1} x) satisfies Equation (16), so does u_k(x), i.e., ∆u_k(x) = −µu_k(x), and we get:

A_n ∆U_k(x) + λU_k(x) = ∆u_k(x) A_n a_n^k + λu_k(x) a_n^k = µ_n^k (∆u_k(x) + µu_k(x)) a_n^k = 0.

Separating the first components of this vector equality, we obtain:

Σ_{j=0}^{2^n−1} a_j ∆u_k(S_n^{j_n} ... S_1^{j_1} x) + λu_k(x) = 0,

which means that u_k(x) is a solution to Equation (1). Let us check the boundary conditions (2) of Problem S. Since x ∈ ∂Ω ⇒ S_n^{i_n} ... S_1^{i_1} x ∈ ∂Ω, then for x ∈ ∂Ω we get:

u_k(x) = (W(x), a_n^k) = (0, a_n^k) = 0.
The theorem is proved.

Example 3. For n = 2, the eigenfunctions of the Dirichlet problem (1) and (2) constructed from a solution w(x) of the problem (16) and (17) can be taken in the form u_k(x) = (W(x), a_2^k), k = 0, 1, 2, 3, or:

u_0(x) = w(x) + w(S_1 x) + w(S_2 x) + w(S_2 S_1 x),
u_1(x) = w(x) − w(S_1 x) + w(S_2 x) − w(S_2 S_1 x),
u_2(x) = w(x) + w(S_1 x) − w(S_2 x) − w(S_2 S_1 x),
u_3(x) = w(x) − w(S_1 x) − w(S_2 x) + w(S_2 S_1 x).

In what follows, it will be necessary to expand polynomials into a sum of "generalized parity" polynomials.

Lemma 2. Let H(x) be a function defined on Ω, and set:

H_{(k_n...k_1)_2}(x) = (1/2^n) Σ_{i=0}^{2^n−1} (−1)^{k⊗i} H(S_n^{i_n} ... S_1^{i_1} x),   k = 0, ..., 2^n − 1.

Then the function H_{(k_n...k_1)_2}(x) has the "generalized parity" property:

H_{(k_n...k_1)_2}(S_n^{j_n} ... S_1^{j_1} x) = (−1)^{k⊗j} H_{(k_n...k_1)_2}(x), (19)

and besides, for m ∈ N_0, the following equality:

(|x|^{2m} H(x))_{(k_n...k_1)_2} = |x|^{2m} H_{(k_n...k_1)_2}(x) (20)

holds true. Moreover, the function H(x), x ∈ Ω, can be represented as:

H(x) = Σ_{k=0}^{2^n−1} H_{(k_n...k_1)_2}(x). (21)

Proof. It is not hard to see that:

H_{(k_n...k_1)_2}(S_n^{j_n} ... S_1^{j_1} x) = (1/2^n) Σ_{i=0}^{2^n−1} (−1)^{k⊗i} H(S_n^{i_n+j_n} ... S_1^{i_1+j_1} x) = (1/2^n) Σ_{l=0}^{2^n−1} (−1)^{k⊗(l⊕j)} H(S_n^{l_n} ... S_1^{l_1} x) = (−1)^{k⊗j} H_{(k_n...k_1)_2}(x),

where a change of variables l = i ⊕ j is made under the sum sign, as in Theorem 2, and the equality (−1)^{k⊗(l⊕j)} = (−1)^{k⊗l} (−1)^{k⊗j} is used. Equality (19) is proved.

Consider now equality (21). It is easy to see that for x ∈ Ω:

Σ_{k=0}^{2^n−1} H_{(k_n...k_1)_2}(x) = (1/2^n) Σ_{i=0}^{2^n−1} ( Σ_{k=0}^{2^n−1} (−1)^{k⊗i} ) H(S_n^{i_n} ... S_1^{i_1} x). (22)

Let us calculate the inner sum from the right-hand side of equality (22). It is clear that i ≠ 0 ⇒ ∃j: i_j ≠ 0, and then, pairing each index k having k_j = 0 with the index k' differing from it only in the j-th digit:

Σ_{k=0}^{2^n−1} (−1)^{k⊗i} = Σ_{k: k_j=0} ((−1)^{k⊗i} − (−1)^{k⊗i}) = 0,

while for i = 0 the inner sum equals 2^n. Therefore, (22) implies (21).

Now let us prove (20). It is not hard to see that, by the orthogonality of the matrices S_1, ..., S_n:

|S_n^{i_n} ... S_1^{i_1} x| = |x|.

Therefore, for m ∈ N_0:

(|x|^{2m} H(x))_{(k_n...k_1)_2} = (1/2^n) Σ_{i=0}^{2^n−1} (−1)^{k⊗i} |S_n^{i_n} ... S_1^{i_1} x|^{2m} H(S_n^{i_n} ... S_1^{i_1} x) = |x|^{2m} H_{(k_n...k_1)_2}(x).

The lemma is proved.
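The "generalized parity" expansion and its properties (19) and (21) can be illustrated numerically. In the sketch below (our illustration; the involutions S_1(x, y) = (−x, y), S_2(x, y) = (x, −y) and the test polynomial are assumed choices, not taken from the paper), we form the components H_k for n = 2 and check that they sum to H and have the stated parity:

```python
# Generalized parity decomposition for n = 2:
#   H_k(x) = 2^{-n} sum_i (-1)^{k (x) i} H(S_n^{i_n} ... S_1^{i_1} x)
# satisfies H = H_0 + H_1 + H_2 + H_3 and H_k(S_j x) = (-1)^{k (x) j} H_k(x).

def S(i, p):
    """Apply S_2^{i_2} S_1^{i_1} to the point p = (x, y), i = (i_2 i_1)_2."""
    x, y = p
    if i & 1: x = -x                   # S_1: x -> -x
    if i & 2: y = -y                   # S_2: y -> -y
    return (x, y)

def sgn(k, i):
    return (-1) ** bin(k & i).count("1")

def H(p):
    x, y = p
    return x ** 3 + x * y + 2 * y ** 2 + 5   # arbitrary test polynomial

def H_comp(k, p):
    return sum(sgn(k, i) * H(S(i, p)) for i in range(4)) / 4

p = (0.7, -1.3)
total = sum(H_comp(k, p) for k in range(4))
print(abs(total - H(p)) < 1e-12)                    # H = sum_k H_k

# generalized parity: H_k(S_j p) = (-1)^{k (x) j} H_k(p)
checks = all(abs(H_comp(k, S(j, p)) - sgn(k, j) * H_comp(k, p)) < 1e-12
             for k in range(4) for j in range(4))
print(checks)
```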

Eigenfunctions and Eigenvalues of Problem S
Let us transform the result of Theorem 5 to a simpler form.
Theorem 6. The eigenfunctions and eigenvalues of the Dirichlet problem (1) and (2) from Theorem 5 can be represented as:

u_n^k(x) = Σ_{i=0}^{2^n−1} (−1)^{k⊗i} w_µ(S_n^{i_n} ... S_1^{i_1} x),   λ_k = µ_n^k µ, (23)

where the function w_µ(x) is a solution to the problem (16) and (17):

∆w_µ(x) + µw_µ(x) = 0, x ∈ Ω; w_µ(x) = 0, x ∈ ∂Ω,

for some µ ∈ R_+. The functions u_n^k(x), k = 0, ..., 2^n − 1, are orthogonal in L_2(Ω).
Proof. We prove Formula (23) by induction on n. For n = 1, from (18), taking into account the equalities a_1^0 = (1, 1)^T, a_1^1 = (1, −1)^T from Theorem 3, we obtain:

u_1^0(x) = w(x) + w(S_1 x),   u_1^1(x) = w(x) − w(S_1 x).

We shifted the subscript of the functions u_k(x) from (18) to the top to make room for the subscript n. Suppose that Formula (23) is valid for n − 1 and let us prove its validity for n. In accordance with Theorems 3 and 5, we have a_n^k = (a_{n−1}^k, ±a_{n−1}^k)^T and u_k(x) = (W(x), a_n^k), and hence the function:

u^{(k_n k_{n−1}...k_1)_2}(x) = u^{(k_{n−1}...k_1)_2}(x) + (−1)^{k_n} u^{(k_{n−1}...k_1)_2}(S_n x)

is an eigenfunction of the Dirichlet problem (1) and (2). Using the induction hypothesis, we transform this function:

u_n^{(k_n k_{n−1}...k_1)_2}(x) = Σ_{i=0}^{2^{n−1}−1} (−1)^{k⊗i} w_µ(S_{n−1}^{i_{n−1}} ... S_1^{i_1} x) + (−1)^{k_n} Σ_{i=0}^{2^{n−1}−1} (−1)^{k⊗i} w_µ(S_{n−1}^{i_{n−1}} ... S_1^{i_1} S_n x) = Σ_{i=0}^{2^n−1} (−1)^{k⊗i} w_µ(S_n^{i_n} ... S_1^{i_1} x),

which proves Formula (23). The eigenvalues of the Dirichlet problem (1) and (2) corresponding to the eigenfunction u_n^k(x), by Corollary 3, have the form:

λ_k = µ_n^k µ = µ Σ_{i=0}^{2^n−1} (−1)^{k⊗i} a_i.

If we denote U_n(x) = (u_n^i(x))^T_{i=0,...,2^n−1}, W_n(x) = (w_µ(S_n^{i_n} ... S_1^{i_1} x))^T_{i=0,...,2^n−1}, and V_n = ((−1)^{i⊗j})_{i,j=0,...,2^n−1}, then equalities (23) can be written in the matrix form U_n = V_n W_n, where the matrix V_n is symmetric and orthogonal.
Indeed, the symmetry of V_n follows from the equality (−1)^{i⊗j} = (−1)^{j⊗i}, and the orthogonality is proved in Theorem 3.
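The matrix V_n with the entries (−1)^{i⊗j} is, in fact, the n-fold Kronecker power of the 2 × 2 matrix with rows (1, 1) and (1, −1), i.e., a Walsh-Hadamard matrix. The following sketch (our check, not from the paper) verifies for n = 3 that V_n is symmetric and that V_n V_n^T = 2^n I, so its rows are mutually orthogonal:

```python
# V_n[i][j] = (-1)^{i (x) j}; check symmetry and row orthogonality for n = 3.
n = 3
m = 2 ** n
V = [[(-1) ** bin(i & j).count("1") for j in range(m)] for i in range(m)]

sym = all(V[i][j] == V[j][i] for i in range(m) for j in range(m))
gram = [[sum(V[i][k] * V[j][k] for k in range(m)) for j in range(m)]
        for i in range(m)]                        # V V^T
orth = all(gram[i][j] == (m if i == j else 0) for i in range(m) for j in range(m))
print(sym, orth)
```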

Conclusions
Summarizing the investigation carried out, we note that, due to the properties of the special-form matrices A_n from the equality (4), studied in Theorems 1-3, we managed, in Theorems 5 and 6, and then in Theorem 7, to write out the complete system of eigenfunctions and eigenvalues of the nonlocal Problem S. As for possible further applications of the proposed method, we note that a similar method can be used to study the eigenfunctions and eigenvalues of the Neumann and Robin boundary value problems in a ball. Moreover, we hope that the proposed method also makes it possible, for the given nonlocal Laplace operator, to investigate the spectral problem in an l-dimensional parallelepiped and to find an explicit form of the eigenfunctions and eigenvalues of the Dirichlet and Neumann boundary value problems, as well as of problems with periodic conditions. The described problems are the subject of further work, and we are going to consider them in our next articles.