Article

On Eigenfunctions and Eigenvalues of a Nonlocal Laplace Operator with Multiple Involution

by Batirkhan Turmetov 1,*,† and Valery Karachik 2,†
1 Department of Mathematics, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkistan 161200, Kazakhstan
2 Department of Mathematical Analysis, South Ural State University (NRU), 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2021, 13(10), 1781; https://doi.org/10.3390/sym13101781
Submission received: 10 August 2021 / Revised: 15 September 2021 / Accepted: 18 September 2021 / Published: 25 September 2021

Abstract: We study the eigenfunctions and eigenvalues of the boundary value problem for the nonlocal Laplace equation with multiple involution. An explicit form of the eigenfunctions and eigenvalues for the unit ball is obtained. A theorem on the completeness of the eigenfunctions of the problem under consideration is proved.

1. Introduction and the Problem Statement

The notion of a nonlocal operator, and with it the notion of a nonlocal differential equation, appeared relatively recently in the theory of differential equations. In [1], loaded equations, equations containing fractional derivatives of the unknown function, and equations with deviating arguments are considered. Equations in which the unknown function and its derivatives enter for different values of the argument are called nonlocal differential equations.
A special place among nonlocal differential equations is occupied by equations in which the deviation of the argument has an involutive character. A function $S$ is called an involution if it is its own inverse: $S^2(x) = S(S(x)) = x$. Differential equations containing an involutive deviation of the unknown function or its derivative serve as model equations with an alternating deviation of the argument. Such equations can be classified as functional differential equations.
Mathematicians have been studying differential equations with involution for a long time. For example, in 1816, Babbage [2] considered algebraic and differential equations with involution. The monographs of D. Przeworska-Rolewicz [3] and J. Wiener [4] are devoted to the theory of solvability of various differential equations with involution. In papers [5,6,7,8,9,10,11,12,13,14], spectral problems for differential operators of the first and second orders with involution were studied. In [15,16,17,18,19,20,21,22], the results of studying spectral problems with involution are used to solve inverse problems. A series of works by Alberto Cabada and F. Adrián F. Tojo is devoted to the theory of the Green's function for one-dimensional differential equations with involution (see, for example, [23,24] and the bibliographies therein). The papers [25,26,27,28] are devoted to questions of the theory of solvability of some partial differential equations with involution. Elliptic functional differential equations with mappings of compression and extension type are considered in [29,30,31]. In addition, in [32,33,34], some classes of functional differential equations with deviating arguments are investigated. In [35], for the following ODE:
y " ( t ) + a y " ( t ) = λ y ( t ) , π < t < π
the boundary value problem with Dirichlet conditions y ( π ) = y ( π ) = 0 is studied. It is shown that the eigenfunctions and eigenvalues of this problem have the form:
y k t = sin k t , λ k = ( 1 + a ) k 2 ; y m t = cos m 1 2 t , λ m = ( 1 + a ) m 1 2 2 ,
where k , m N . This system is complete in L 2 [ π , π ] . Note that the eigenfunctions of this problem for a = 0 coincide with the eigenfunctions of the classical equation and differ only in eigenvalues.
In the present paper, generalizing the problems considered in [36] to the case of multiple involution, we introduce the concept of a nonlocal analogue of the Laplace operator. In Section 2, matrices of a special form arising in this operator are investigated. In Section 3, we study the structure of the eigenfunctions and eigenvalues of the Dirichlet problem. In Section 4, the eigenfunctions and eigenvalues of the Dirichlet problem for the nonlocal Laplace equation in the unit ball are constructed in explicit form and the completeness of the system of eigenfunctions is proved.
Let $\Omega = \{x \in \mathbb{R}^l : |x| < 1\}$ be the unit ball in $\mathbb{R}^l$, $l \ge 2$, and $\partial\Omega = \{x \in \mathbb{R}^l : |x| = 1\}$ be the unit sphere. Let also $S_1, \ldots, S_n$ be a set of real symmetric commuting matrices, $S_iS_j = S_jS_i$, such that $S_i^2 = I$. Note that since $|x|^2 = (S_i^2x, x) = (S_ix, S_ix) = |S_ix|^2$, we have $x \in \Omega \Rightarrow S_ix \in \Omega$ and $y \in \partial\Omega \Rightarrow S_iy \in \partial\Omega$. For example, $S_1$ can be the matrix of the linear mapping $S_1x = (-x_1, x_2, \ldots, x_l)$, because:
$$S_1 = \begin{pmatrix} -1 & 0_{1\times(l-1)} \\ 0_{(l-1)\times1} & I_{l-1} \end{pmatrix}.$$
Let $n \in \mathbb{N}_0$ and $a_0, a_1, a_2, a_3, \ldots, a_{2^n-1}$ be a set of real numbers. If we write the summation index $i$ in the binary number system, $i = (i_n \ldots i_1)_2$, where $i_k = 0, 1$ for $k = 1, \ldots, n$, then the coefficients $a_k$ can be written as $a_{(0\ldots00)_2}, a_{(0\ldots01)_2}, a_{(0\ldots10)_2}, a_{(0\ldots11)_2}, \ldots, a_{(1\ldots11)_2}$.
Let us introduce the following nonlocal differential operator:
$$L_nu \equiv \sum_{i=0}^{2^n-1} a_i\,\Delta u\big(S_n^{i_n}\cdots S_1^{i_1}x\big)$$
and consider the following boundary value problem.
Problem S. Find a function $u(x) \not\equiv 0$ of class $u \in C(\overline{\Omega}) \cap C^2(\Omega)$ satisfying the conditions:
$$L_nu(x) + \lambda u(x) = 0, \quad x \in \Omega, \tag{1}$$
$$u(x) = 0, \quad x \in \partial\Omega, \tag{2}$$
where $\lambda \in \mathbb{R}$.
If $n > 0$, $a_0 = 1$, and $a_j = 0$ for $j = 1, \ldots, 2^n-1$, then this problem coincides with the spectral Dirichlet problem for the classical Laplace operator.

2. Preliminaries

To study the above problems (1) and (2), we need some auxiliary statements. Let us introduce the function:
$$v(x) = \sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2} a_i\,u\big(S_n^{i_n}\cdots S_1^{i_1}x\big), \tag{3}$$
where the summation is taken in ascending order of the index $i$. From this equality it is easy to conclude that the functions $v(S_n^{j_n}\cdots S_1^{j_1}x)$, $j = 0, \ldots, 2^n-1$, can be linearly expressed in terms of the functions $u(S_n^{i_n}\cdots S_1^{i_1}x)$. If we consider the vectors $U(x) = \big(u(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$ and $V(x) = \big(v(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$ of order $2^n$, then this dependence can be expressed in the matrix form:
$$V(x) = A_nU(x), \tag{4}$$
where $A_n = (a_{i,j})_{i,j=0,\ldots,2^n-1}$ is a matrix of order $2^n \times 2^n$.
Let us investigate the structure of matrices of the form A n .
Theorem 1.
The matrix $A_n$ from equality (4) can be represented in the form:
$$A_n = (a_{i,j})_{i,j=0,\ldots,2^n-1} = (a_{i\oplus j})_{i,j=0,\ldots,2^n-1}, \tag{5}$$
where the operation $\oplus$ in the subscript of the matrix coefficients is understood in the following sense: $i \oplus j \equiv (i)_2 \oplus (j)_2 = \big((i_n + j_n \bmod 2)\ldots(i_1 + j_1 \bmod 2)\big)_2$, where $(i)_2 = (i_n\ldots i_1)_2$ is the representation of the index in the binary number system. A linear combination of matrices of the form (5) is again a matrix of the form (5).
Proof. 
Let $n = 1$; then we have:
$$A_1 = \begin{pmatrix} a_{0\oplus0} & a_{0\oplus1} \\ a_{1\oplus0} & a_{1\oplus1} \end{pmatrix} = \begin{pmatrix} a_0 & a_1 \\ a_1 & a_0 \end{pmatrix},$$
and if $n = 2$, then we get:
$$A_2 = \begin{pmatrix} a_{(00)_2\oplus(00)_2} & a_{(00)_2\oplus(01)_2} & a_{(00)_2\oplus(10)_2} & a_{(00)_2\oplus(11)_2} \\ a_{(01)_2\oplus(00)_2} & a_{(01)_2\oplus(01)_2} & a_{(01)_2\oplus(10)_2} & a_{(01)_2\oplus(11)_2} \\ a_{(10)_2\oplus(00)_2} & a_{(10)_2\oplus(01)_2} & a_{(10)_2\oplus(10)_2} & a_{(10)_2\oplus(11)_2} \\ a_{(11)_2\oplus(00)_2} & a_{(11)_2\oplus(01)_2} & a_{(11)_2\oplus(10)_2} & a_{(11)_2\oplus(11)_2} \end{pmatrix} = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix}.$$
Consider the function $v(S_n^{i_n}\cdots S_1^{i_1}x)$, whose coefficients at $u(S_n^{j_n}\cdots S_1^{j_1}x)$ make up the $i = (i_n\ldots i_1)_2$-th row of the matrix $A_n$:
$$v\big(S_n^{i_n}\cdots S_1^{i_1}x\big) = \sum_{j=(j_n\ldots j_1)_2=0}^{(1\ldots1)_2} a_{(j_n\ldots j_1)_2}\,u\big(S_n^{j_n}\cdots S_1^{j_1}S_n^{i_n}\cdots S_1^{i_1}x\big) = \sum_{j=(j_n\ldots j_1)_2=0}^{(1\ldots1)_2} a_{(j_n\ldots j_1)_2}\,u\big(S_n^{j_n+i_n \bmod 2}\cdots S_1^{j_1+i_1 \bmod 2}x\big). \tag{6}$$
Here the properties $S_j^2x = x$ and $S_jS_ix = S_iS_jx$ of the matrices $S_1, \ldots, S_n$ are taken into account. Let us replace the index: $i \oplus j = l$. Then $l \oplus i = i \oplus j \oplus i = j$, so the correspondence $j \to l$ is one-to-one, and the replacement $j \to l$ only changes the order of summation in the sum (6). For example, if $i = 1$, then the sequence $j: 0, 1, 2, 3, 4, 5, \ldots$ goes to $l = 1 \oplus j: 1, 0, 3, 2, 5, 4, \ldots$. After replacing the index, we get:
$$v\big(S_n^{i_n}\cdots S_1^{i_1}x\big) = \sum_{l=0}^{(1\ldots1)_2} a_{(i_n+l_n \bmod 2\,\ldots\,i_1+l_1 \bmod 2)_2}\,u\big(S_n^{l_n}\cdots S_1^{l_1}x\big),$$
whence $a_{i,l} = a_{(i_n+l_n \bmod 2\,\ldots\,i_1+l_1 \bmod 2)_2} = a_{i\oplus l}$, which proves (5).
It is clear that if $\alpha, \beta$ are constants, then:
$$\alpha\big(a_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1} + \beta\big(b_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1} = \big(\alpha a_{i\oplus j} + \beta b_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1}.$$
The theorem is proved. □
We now present corollaries of Theorem 1 that contain information important for the further analysis.
Corollary 1.
The matrix $A_n$ is uniquely determined by its first row $a_0, a_1, \ldots, a_{2^n-1}$.
Indeed, the $i$th row of the matrix $A_n$ can be written through its first row in the form $a_{i\oplus0}, a_{i\oplus1}, \ldots, a_{i\oplus(2^n-1)}$.
We denote this property of the matrix $A_n$ by the equality $A_n \equiv A_n(a_0, \ldots, a_{2^n-1})$.
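Corollary 1 says the whole matrix is determined by its first row through the bitwise operation $\oplus$ (XOR on binary indices). As a quick illustration (a Python sketch, not part of the paper; the helper name `build_An` and the sample row are ours), one can build $A_n$ directly this way:

```python
import numpy as np

def build_An(first_row):
    """Build the matrix A_n of Theorem 1 from its first row: the (i, j) entry
    is a_{i XOR j}, since exponents of the commuting reflections S_k add
    modulo 2."""
    N = len(first_row)  # N = 2**n
    a = list(first_row)
    return np.array([[a[i ^ j] for j in range(N)] for i in range(N)])

A2 = build_An([1, 2, 3, 4])  # n = 2
print(A2)
```

Running it on the row $(a_0, a_1, a_2, a_3) = (1, 2, 3, 4)$ reproduces the $4\times4$ pattern of $A_2$ displayed in the proof of Theorem 1, and the result is symmetric, in agreement with Corollary 2.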
Corollary 2.
The matrix $A_n$ has the symmetry property:
$$\big(a_{i,j}\big)_{i,j=0,\ldots,2^n-1} = \big(a_{j,i}\big)_{i,j=0,\ldots,2^n-1} \tag{7}$$
and it can be written as:
$$A_n = \begin{pmatrix} A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}) & A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}) \\ A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}) & A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}) \end{pmatrix}, \tag{8}$$
or, more generally, in the form of a block matrix $A_{n-m}$ consisting of matrices $A_m$:
$$A_n = A_{n-m}\big(A_m^{(0\ldots0)_2}, \ldots, A_m^{(k_n\ldots k_{m+1})_2}, \ldots, A_m^{(1\ldots1)_2}\big), \tag{9}$$
where $A_m^{(k_n\ldots k_{m+1})_2}\big(a_{(k_n\ldots k_{m+1}0\ldots0)_2}, \ldots, a_{(k_n\ldots k_{m+1}1\ldots1)_2}\big)$ is a matrix of the form (5) of order $2^m$.
Proof. 
Indeed, since the binary operation $i \oplus j$ is commutative,
$$i \oplus j = \big(i_n+j_n \bmod 2\,\ldots\,i_1+j_1 \bmod 2\big)_2 = \big(j_n+i_n \bmod 2\,\ldots\,j_1+i_1 \bmod 2\big)_2 = j \oplus i,$$
property (7) holds true, and:
$$\big(a_{i,j}\big)_{i,j=0,\ldots,2^n-1} = \big(a_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1} = \big(a_{j\oplus i}\big)_{i,j=0,\ldots,2^n-1} = \big(a_{j,i}\big)_{i,j=0,\ldots,2^n-1}.$$
Further, it is easy to see the validity of the equalities:
$$\big(a_{(0i_{n-1}\ldots i_1)_2\oplus(0j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = \big(a_{(1i_{n-1}\ldots i_1)_2\oplus(1j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1}$$
and:
$$\big(a_{(0i_{n-1}\ldots i_1)_2\oplus(1j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = \big(a_{(1i_{n-1}\ldots i_1)_2\oplus(0j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1}, \tag{10}$$
from which property (8) follows. Indeed, if we divide the matrix $A_n$ into four equally sized square blocks and consider the lower right block, then its indices lie in the range $(10\ldots0)_2 \le i, j \le (11\ldots1)_2$, which means that this block has the form:
$$\big(a_{(1i_{n-1}\ldots i_1)_2\oplus(1j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = \big(a_{(0i_{n-1}\ldots i_1)_2\oplus(0j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}),$$
i.e., the diagonal blocks of the matrix $A_n$ are of the form $A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})$. Similarly, the top right block of $A_n$ has indices in the range $(00\ldots0)_2 \le i \le (01\ldots1)_2$, $(10\ldots0)_2 \le j \le (11\ldots1)_2$, which means this block has the form:
$$\big(a_{(0i_{n-1}\ldots i_1)_2\oplus(1j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}).$$
By equality (10), the lower left block of $A_n$ has the form:
$$\big(a_{(1i_{n-1}\ldots i_1)_2\oplus(0j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = \big(a_{(0i_{n-1}\ldots i_1)_2\oplus(1j_{n-1}\ldots j_1)_2}\big)_{i,j=0,\ldots,2^{n-1}-1} = A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}).$$
Equality (8) is proved. Now consider a block matrix of the form:
$$A_{n-m}\big(A_m^{(0\ldots0)_2}, \ldots, A_m^{(k_n\ldots k_{m+1})_2}, \ldots, A_m^{(1\ldots1)_2}\big) = \big(A_m^{(i_n\ldots i_{m+1})_2\oplus(j_n\ldots j_{m+1})_2}\big)_{i,j=0,\ldots,2^{n-m}-1}.$$
The elements of its block with number $(k_n\ldots k_{m+1})_2$ can be written as:
$$A_m^{(k_n\ldots k_{m+1})_2}\big(a_{(k_n\ldots k_{m+1}0\ldots0)_2}, \ldots, a_{(k_n\ldots k_{m+1}1\ldots1)_2}\big) = \big(a_{(k_n\ldots k_{m+1}\,(i_m\ldots i_1)_2\oplus(j_m\ldots j_1)_2)}\big)_{i,j=0,\ldots,2^m-1}.$$
Consider the element $a_{i,j}$ of the block matrix $A_{n-m}\big(A_m^{(0\ldots0)_2}, \ldots, A_m^{(1\ldots1)_2}\big)$. It is located in the block with indices $(i_n\ldots i_{m+1})_2$, $(j_n\ldots j_{m+1})_2$, that is, in the block $A_m^{(i_n\ldots i_{m+1})_2\oplus(j_n\ldots j_{m+1})_2}$, and therefore has the form:
$$a_{i,j} = a_{((i_n\ldots i_{m+1})_2\oplus(j_n\ldots j_{m+1})_2\;(i_m\ldots i_1)_2\oplus(j_m\ldots j_1)_2)_2} = a_{i\oplus j}.$$
This coincides with Formula (5). Therefore, the corollary is proved. □
Example 1.
Property (8) of the matrix $A_n$ can be seen in the example of the matrices $A_1$, $A_2$, and $A_3$:
$$A_2(a_0,a_1,a_2,a_3) = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix} = \begin{pmatrix} A_1(a_0,a_1) & A_1(a_2,a_3) \\ A_1(a_2,a_3) & A_1(a_0,a_1) \end{pmatrix},$$
$$A_3 = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5 & a_6 & a_7 \\ a_1 & a_0 & a_3 & a_2 & a_5 & a_4 & a_7 & a_6 \\ a_2 & a_3 & a_0 & a_1 & a_6 & a_7 & a_4 & a_5 \\ a_3 & a_2 & a_1 & a_0 & a_7 & a_6 & a_5 & a_4 \\ a_4 & a_5 & a_6 & a_7 & a_0 & a_1 & a_2 & a_3 \\ a_5 & a_4 & a_7 & a_6 & a_1 & a_0 & a_3 & a_2 \\ a_6 & a_7 & a_4 & a_5 & a_2 & a_3 & a_0 & a_1 \\ a_7 & a_6 & a_5 & a_4 & a_3 & a_2 & a_1 & a_0 \end{pmatrix} = \begin{pmatrix} A_2(a_0,a_1,a_2,a_3) & A_2(a_4,a_5,a_6,a_7) \\ A_2(a_4,a_5,a_6,a_7) & A_2(a_0,a_1,a_2,a_3) \end{pmatrix},$$
and property (9) is written as:
$$A_3 = A_2\big(A_1^{(00)_2}(a_0,a_1),\, A_1^{(01)_2}(a_2,a_3),\, A_1^{(10)_2}(a_4,a_5),\, A_1^{(11)_2}(a_6,a_7)\big) \equiv A_2\big(A_1^0, A_1^1, A_1^2, A_1^3\big) = \begin{pmatrix} A_1^0 & A_1^1 & A_1^2 & A_1^3 \\ A_1^1 & A_1^0 & A_1^3 & A_1^2 \\ A_1^2 & A_1^3 & A_1^0 & A_1^1 \\ A_1^3 & A_1^2 & A_1^1 & A_1^0 \end{pmatrix}.$$
Let us investigate the product of matrices of the form (5).
Theorem 2.
Multiplication of matrices of the form (5) is commutative. The product of matrices of the form (5) is again a matrix of the form (5).
Proof. 
For $n = 1$ we have:
$$A_1B_1 = \begin{pmatrix} a_0 & a_1 \\ a_1 & a_0 \end{pmatrix}\begin{pmatrix} b_0 & b_1 \\ b_1 & b_0 \end{pmatrix} = \begin{pmatrix} a_0b_0+a_1b_1 & a_0b_1+a_1b_0 \\ a_1b_0+a_0b_1 & a_1b_1+a_0b_0 \end{pmatrix} = B_1A_1.$$
Assuming that multiplication of the matrices $A_{n-1}$ and $B_{n-1}$ is commutative, using property (8) and equalities similar to those above, it is easy to obtain $A_nB_n = B_nA_n$.
Thus, it is not hard to see that:
$$AB = \big(a_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1}\big(b_{i\oplus j}\big)_{i,j=0,\ldots,2^n-1} = \Big(\sum_{k=0}^{2^n-1} a_{i\oplus k}\,b_{k\oplus j}\Big)_{i,j=0,\ldots,2^n-1}.$$
In the sum above, let us change the index $k \to l$, as in Theorem 1, according to the equality $i \oplus k = l$. Then $l \oplus i = i \oplus k \oplus i = k$, which means the correspondence $k \to l$ is one-to-one and the replacement of the index only changes the order of summation. By virtue of the associativity of the operation $\oplus$, we have:
$$AB = \Big(\sum_{l=0}^{2^n-1} a_l\,b_{(l\oplus i)\oplus j}\Big)_{i,j=0,\ldots,2^n-1} = \Big(\sum_{l=0}^{2^n-1} a_l\,b_{l\oplus(i\oplus j)}\Big)_{i,j=0,\ldots,2^n-1}.$$
The first row of the matrix $AB$ is:
$$(AB)_{i=0} = \Big(\sum_{k=0}^{2^n-1} a_k\,b_{k\oplus j}\Big)_{j=0,\ldots,2^n-1},$$
and hence the matrix $C$ of the form (5), constructed from the first row of $AB$, is written in a form coinciding with $AB$:
$$C \equiv \Big(\sum_{k=0}^{2^n-1} a_k\,b_{k\oplus(i\oplus j)}\Big)_{i,j=0,\ldots,2^n-1} = AB.$$
The theorem is proved. □
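Both claims of Theorem 2 (commutativity, and closure of the class (5) under products) are easy to sanity-check numerically. In the sketch below (illustrative; the integer coefficients are arbitrary choices of ours), the entry test `C[i, j] == C[0, i ^ j]` is exactly the statement that the product again has the form (5):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 8  # N = 2**n
a = rng.integers(-5, 6, N)  # arbitrary first rows
b = rng.integers(-5, 6, N)

# XOR-indexed matrices of the form (5)
A = np.array([[a[i ^ j] for j in range(N)] for i in range(N)])
B = np.array([[b[i ^ j] for j in range(N)] for i in range(N)])
C = A @ B

# the product commutes ...
assert (C == B @ A).all()
# ... and is again of the form (5): C[i, j] depends only on i XOR j
assert all(C[i, j] == C[0, i ^ j] for i in range(N) for j in range(N))
print("Theorem 2 verified for n = 3")
```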
The following theorem describes the eigenvectors and eigenvalues of matrices of the form (5).
Theorem 3.
The eigenvectors of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ can be chosen in the form:
$$a_n^k = \big(a_{n-1}^k,\, \pm a_{n-1}^k\big)^T, \quad k = 0, \ldots, 2^{n-1}-1, \tag{11}$$
where $a_{n-1}^k$ is an eigenvector of the matrix $A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})$, $k = 0,\ldots,2^{n-1}-1$; moreover, for $n = 1$ we have $a_1^0 = (1,1)^T$, $a_1^1 = (1,-1)^T$. The eigenvectors of the matrix $A_n$ are orthogonal. The eigenvalues of the matrix $A_n$ are of the form:
$$\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat\mu_{n-1}^k, \quad k = 0,\ldots,2^{n-1}-1,$$
where $\mu_{n-1}^k$ and $\hat\mu_{n-1}^k$ are the eigenvalues of the matrices
$$A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}) \quad\text{and}\quad \hat A_{n-1} = A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}),$$
respectively, corresponding to the eigenvector $a_{n-1}^k$; moreover, $\mu_1^0 = a_0+a_1$, $\mu_1^1 = a_0-a_1$.
Proof. 
We carry out the proof by induction on $n$, showing along the way that the eigenvectors of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ do not depend on the numbers $a_0,\ldots,a_{2^n-1}$. For $n = 1$, it is obvious that the eigenvectors of the matrix $A_1(a_0,a_1)$ can be chosen in the form $a_1^+ = (1,1)^T$, $a_1^- = (1,-1)^T$, and the corresponding eigenvalues are $\mu_1^+ = a_0+a_1$, $\mu_1^- = a_0-a_1$. For the matrix:
$$A_2 = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix} = \begin{pmatrix} A_1(a_0,a_1) & A_1(a_2,a_3) \\ A_1(a_2,a_3) & A_1(a_0,a_1) \end{pmatrix}$$
the eigenvectors are:
$$a_2^{(+,+)} = \big(a_1^+, a_1^+\big)^T, \quad a_2^{(-,+)} = \big(a_1^-, a_1^-\big)^T, \quad a_2^{(+,-)} = \big(a_1^+, -a_1^+\big)^T, \quad a_2^{(-,-)} = \big(a_1^-, -a_1^-\big)^T,$$
or, briefly, $a_2^{(\pm_1,\pm_2)} = \big(a_1^{\pm_1}, \pm_2a_1^{\pm_1}\big)^T$, where the signs $+$ and $-$ in $\pm_1$ and $\pm_2$ are taken independently of each other. Indeed, the equalities:
$$A_2a_2^{(\pm_1,\pm_2)} = \begin{pmatrix} A_1(a_0,a_1) & A_1(a_2,a_3) \\ A_1(a_2,a_3) & A_1(a_0,a_1) \end{pmatrix}\begin{pmatrix} a_1^{\pm_1} \\ \pm_2a_1^{\pm_1} \end{pmatrix} = \begin{pmatrix} A_1(a_0,a_1)a_1^{\pm_1} \pm_2 A_1(a_2,a_3)a_1^{\pm_1} \\ A_1(a_2,a_3)a_1^{\pm_1} \pm_2 A_1(a_0,a_1)a_1^{\pm_1} \end{pmatrix} = \begin{pmatrix} (a_0\pm_1a_1)a_1^{\pm_1} \pm_2 (a_2\pm_1a_3)a_1^{\pm_1} \\ (a_2\pm_1a_3)a_1^{\pm_1} \pm_2 (a_0\pm_1a_1)a_1^{\pm_1} \end{pmatrix} = \big(a_0\pm_1a_1\pm_2(a_2\pm_1a_3)\big)\begin{pmatrix} a_1^{\pm_1} \\ \pm_2a_1^{\pm_1} \end{pmatrix} = \big(a_0\pm_1a_1\pm_2(a_2\pm_1a_3)\big)a_2^{(\pm_1,\pm_2)}$$
hold true, and hence $\big(a_1^{\pm_1}, \pm_2a_1^{\pm_1}\big)^T$ are the eigenvectors for the four different combinations of the signs $\pm_1$ and $\pm_2$. It is seen that the eigenvectors $a_2^{(\pm_1,\pm_2)} = \big(1, \pm_11, \pm_21, \pm_2\pm_11\big)^T$ of the matrix $A_2(a_0,a_1,a_2,a_3)$ do not depend on the numbers $a_k$.
Furthermore, assuming that the eigenvectors $a_{n-1}^0, \ldots, a_{n-1}^{2^{n-1}-1}$ of the matrix $A_{n-1} = A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})$ do not depend on its coefficients, we prove that this property also holds for the matrix $A_n = A_n(a_0,\ldots,a_{2^n-1})$. Let $\mu_{n-1}^0, \ldots, \mu_{n-1}^{2^{n-1}-1}$ be the eigenvalues corresponding to the above eigenvectors of the matrix $A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})$, independent of its coefficients; then the vectors of the form $a_n^k = \big(a_{n-1}^k, \pm a_{n-1}^k\big)^T$, $k = 0,\ldots,2^{n-1}-1$, are eigenvectors of the matrix $A_n(a_0,\ldots,a_{2^n-1})$. Indeed, we have:
$$A_na_n^k = A_n\big(a_{n-1}^k, \pm a_{n-1}^k\big)^T = \begin{pmatrix} A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}) & A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}) \\ A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1}) & A_{n-1}(a_0,\ldots,a_{2^{n-1}-1}) \end{pmatrix}\begin{pmatrix} a_{n-1}^k \\ \pm a_{n-1}^k \end{pmatrix} = \begin{pmatrix} A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})a_{n-1}^k \pm A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1})a_{n-1}^k \\ A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1})a_{n-1}^k \pm A_{n-1}(a_0,\ldots,a_{2^{n-1}-1})a_{n-1}^k \end{pmatrix} = \begin{pmatrix} \mu_{n-1}^ka_{n-1}^k \pm \hat\mu_{n-1}^ka_{n-1}^k \\ \hat\mu_{n-1}^ka_{n-1}^k \pm \mu_{n-1}^ka_{n-1}^k \end{pmatrix} = \big(\mu_{n-1}^k \pm \hat\mu_{n-1}^k\big)\begin{pmatrix} a_{n-1}^k \\ \pm a_{n-1}^k \end{pmatrix} = \big(\mu_{n-1}^k \pm \hat\mu_{n-1}^k\big)a_n^k,$$
where $\hat\mu_{n-1}^k$ is the eigenvalue of the matrix $A_{n-1}(a_{2^{n-1}},\ldots,a_{2^n-1})$ corresponding to the eigenvector $a_{n-1}^k$. Obviously, there are $2^n$ vectors of the form $a_n^k = \big(a_{n-1}^k, \pm a_{n-1}^k\big)^T$. Therefore, all eigenvalues of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ are $\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat\mu_{n-1}^k$.
Orthogonality. It is obvious that the eigenvectors $a_1^+ = (1,1)^T$ and $a_1^- = (1,-1)^T$ of the matrix $A_1(a_0,a_1)$ are orthogonal. If the eigenvectors $a_{n-1}^k$, $k = 0,\ldots,2^{n-1}-1$, of the matrix $A_{n-1}$ are chosen orthogonal, then the eigenvectors $a_n^k = \big(a_{n-1}^k, \pm a_{n-1}^k\big)^T$ of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ are also orthogonal:
$$a_n^{k_1}\cdot a_n^{k_2} = \big(a_{n-1}^{k_1}, \pm a_{n-1}^{k_1}\big)\cdot\big(a_{n-1}^{k_2}, \pm a_{n-1}^{k_2}\big) = (1 \pm 1)\,a_{n-1}^{k_1}\cdot a_{n-1}^{k_2} = 0, \quad k_1 \ne k_2,$$
and $\big(a_{n-1}^k, a_{n-1}^k\big)\cdot\big(a_{n-1}^k, -a_{n-1}^k\big) = 0$. The theorem is proved. □
Let us give important consequences of Theorem 3 that allow us to construct the eigenvectors and eigenvalues of the matrix $A_n$ explicitly.
Corollary 3.
Let $k = (k_n\ldots k_1)_2$, $k_i = 0, 1$; then the eigenvector of the matrix $A_n$ numbered by $k$ can be written in the form:
$$a_n^k = \big(1, (-1)^{k_1}, (-1)^{k_2}, (-1)^{k_2+k_1}, (-1)^{k_3}, (-1)^{k_3+k_1}, (-1)^{k_3+k_2}, (-1)^{k_3+k_2+k_1}, (-1)^{k_4}, \ldots, (-1)^{k_n+\cdots+k_1}\big)^T = \big((-1)^{k\cdot m}\big)_{m=0,\ldots,2^n-1}, \tag{12}$$
where $k\cdot i \equiv (k_n\ldots k_1)_2\cdot(i_n\ldots i_1)_2 = k_ni_n + \cdots + k_1i_1$ is a "scalar" product of the indices $(k)_2$ and $(i)_2$. The eigenvalue corresponding to the eigenvector $a_n^{(k_n\ldots k_1)_2}$ can be written in a similar form:
$$\mu_n^k \equiv \mu_n^{(k_n\ldots k_1)_2} = \sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i = \sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\cdots+k_1i_1}a_{(i_n\ldots i_1)_2}. \tag{13}$$
Proof. 
Let us prove (12). For $n = 1$ we have $a_1^+ = a_1^{(0)_2} = \big((-1)^{0\cdot0}, (-1)^{0\cdot1}\big)^T$, $a_1^- = a_1^{(1)_2} = \big((-1)^{1\cdot0}, (-1)^{1\cdot1}\big)^T$, and (12) is true. If Formula (12) is true for the vector $a_{n-1}^{(k_{n-1}\ldots k_1)_2}$, then by Theorem 3 we have:
$$\big(a_{n-1}^{(k_{n-1}\ldots k_1)_2}, \pm a_{n-1}^{(k_{n-1}\ldots k_1)_2}\big)^T = \big(a_{n-1}^{(k_{n-1}\ldots k_1)_2}, (-1)^{k_n}a_{n-1}^{(k_{n-1}\ldots k_1)_2}\big)^T = \Big(\big((-1)^{(k_nk_{n-1}\ldots k_1)_2\cdot(0m_{n-1}\ldots m_1)_2}\big)_{m=0,\ldots,2^{n-1}-1},\, \big((-1)^{(k_nk_{n-1}\ldots k_1)_2\cdot(1m_{n-1}\ldots m_1)_2}\big)_{m=0,\ldots,2^{n-1}-1}\Big)^T = \big((-1)^{k\cdot m}\big)^T_{m=0,\ldots,2^n-1} = a_n^{(k_nk_{n-1}\ldots k_1)_2},$$
and hence Formula (12) is also true for the vector $a_n^{(k_nk_{n-1}\ldots k_1)_2} = a_n^k$.
Let us prove (13). For $n = 1$ we have:
$$\mu_1^{k_1} = a_0 + (-1)^{k_1}a_1 = (-1)^{k_1\cdot0}a_{(0)_2} + (-1)^{k_1\cdot1}a_{(1)_2},$$
where $k_1 = 0, 1$. Assume that Formula (13) is valid for $n - 1$ and let us prove its validity for $n$. By Theorem 3, writing the sign $\pm$ as $(-1)^{k_n}$, we obtain:
$$\mu_n^{k,\pm} = \mu_{n-1}^k \pm \hat\mu_{n-1}^k = \sum_{i=0}^{2^{n-1}-1}(-1)^{k_n\cdot0+k_{n-1}i_{n-1}+\cdots+k_1i_1}a_{(i_{n-1}\ldots i_1)_2} + \sum_{i=0}^{2^{n-1}-1}(-1)^{k_n\cdot1+k_{n-1}i_{n-1}+\cdots+k_1i_1}a_{(1i_{n-1}\ldots i_1)_2} = \sum_{i=0}^{2^{n-1}-1}(-1)^{k_ni_n+\cdots+k_1i_1}a_{(i_ni_{n-1}\ldots i_1)_2} + \sum_{i=2^{n-1}}^{2^n-1}(-1)^{k_ni_n+\cdots+k_1i_1}a_{(i_ni_{n-1}\ldots i_1)_2} = \sum_{i=0}^{2^n-1}(-1)^{k_ni_n+\cdots+k_1i_1}a_{(i_ni_{n-1}\ldots i_1)_2},$$
which proves (13). The corollary is proved. □
Example 2.
For $n = 2$ the matrix:
$$A_2(a_0,a_1,a_2,a_3) = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 \\ a_1 & a_0 & a_3 & a_2 \\ a_2 & a_3 & a_0 & a_1 \\ a_3 & a_2 & a_1 & a_0 \end{pmatrix},$$
according to Corollary 3, has the following four eigenvectors:
$$a_2^k = \big(1, (-1)^{k_1}, (-1)^{k_2}, (-1)^{k_2+k_1}\big)^T,$$
where $k = (00)_2, (01)_2, (10)_2, (11)_2$, or:
$$a_2^0 = a_2^{(00)_2} = (1,1,1,1)^T, \quad a_2^1 = a_2^{(01)_2} = (1,-1,1,-1)^T, \quad a_2^2 = a_2^{(10)_2} = (1,1,-1,-1)^T, \quad a_2^3 = a_2^{(11)_2} = (1,-1,-1,1)^T,$$
and the following eigenvalues:
$$\mu_0 = \mu_{(00)_2} = a_0+a_1+a_2+a_3, \quad \mu_1 = \mu_{(01)_2} = a_0-a_1+a_2-a_3, \quad \mu_2 = \mu_{(10)_2} = a_0+a_1-a_2-a_3, \quad \mu_3 = \mu_{(11)_2} = a_0-a_1-a_2+a_3,$$
where, for convenience, we move the superscript of the eigenvalue to the subscript, since $n = 2$ is fixed. For the matrix $A_3(a_0,a_1,a_2,a_3,a_4,a_5,a_6,a_7)$, from Formula (11) we obtain eigenvectors in the form:
$$a_3^k = \big(1, (-1)^{k_1}, (-1)^{k_2}, (-1)^{k_2+k_1}, (-1)^{k_3}, (-1)^{k_3+k_1}, (-1)^{k_3+k_2}, (-1)^{k_3+k_2+k_1}\big)^T.$$
For example, for $k = (101)_2 = 5$ we have an eigenvector of the form:
$$a_3^5 = a_3^{(101)_2} = (1,-1,1,-1,-1,1,-1,1)^T.$$
The eigenvalue corresponding to the eigenvector $a_3^5 = a_3^{(101)_2}$ is written in a similar form:
$$\mu_3^{(101)_2} = \sum_{i=0}^7(-1)^{k_3i_3+k_2i_2+k_1i_1}a_{(i_3i_2i_1)_2} = a_{(0)_2} - a_{(1)_2} + a_{(10)_2} - a_{(11)_2} - a_{(100)_2} + a_{(101)_2} - a_{(110)_2} + a_{(111)_2} = a_0 - a_1 + a_2 - a_3 - a_4 + a_5 - a_6 + a_7.$$
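Formulas (12) and (13) of Corollary 3 state that the sign vectors $\big((-1)^{k\cdot m}\big)_m$ (rows of a Hadamard-type matrix) diagonalize every matrix of the form (5). A small numerical check (an illustrative Python sketch; the coefficient values are arbitrary choices of ours):

```python
import numpy as np

def bitdot(k, m):
    # "scalar" product of binary indices: k_n m_n + ... + k_1 m_1
    return bin(k & m).count("1")

N = 8  # 2**n with n = 3
a = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0, 4.0, -0.5])  # arbitrary coefficients
A = np.array([[a[i ^ j] for j in range(N)] for i in range(N)])

# rows of V are the eigenvectors (12); mus holds the eigenvalues (13)
V = np.array([[(-1.0) ** bitdot(k, m) for m in range(N)] for k in range(N)])
mus = V @ a

assert np.allclose(A @ V.T, V.T * mus)      # A a_n^k = mu_n^k a_n^k
assert np.allclose(V @ V.T, N * np.eye(N))  # the eigenvectors are orthogonal
```

In particular, `mus[5]` reproduces the sign pattern of $\mu_3^{(101)_2}$ computed above: $a_0 - a_1 + a_2 - a_3 - a_4 + a_5 - a_6 + a_7$.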

3. The Main Problem S

To study Problem S, the following statement is required.
Lemma 1
([36] (Lemma 3.1)). Let $S$ be an orthogonal matrix; then the operator $I_Su(x) = u(Sx)$ and the Laplace operator $\Delta$ commute, $\Delta I_Su(x) = I_S\Delta u(x)$, on functions $u \in C^2(\Omega)$. The operator $\Lambda u = \sum_{i=1}^{l} x_iu_{x_i}(x)$ and the operator $I_S$ also commute, $\Lambda I_Su(x) = I_S\Lambda u(x)$, on functions $u \in C^1(\overline{\Omega})$, and the equality $\nabla I_S = I_SS^T\nabla$ is valid.
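The commutation relations of Lemma 1 can be checked symbolically in a simple special case. The sketch below (an illustration for $l = 2$ and $S = \mathrm{diag}(-1, 1)$, with an arbitrary test function of our choosing) verifies $\Delta I_S = I_S\Delta$ and $\Lambda I_S = I_S\Lambda$:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
u = x1**3 * x2 + sp.exp(x1 * x2)  # arbitrary smooth test function

lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)        # Laplace operator
Lam = lambda f: x1 * sp.diff(f, x1) + x2 * sp.diff(f, x2)    # Euler operator
I_S = lambda f: f.subs(x1, -x1)   # I_S u(x) = u(Sx) for S = diag(-1, 1)

assert sp.simplify(lap(I_S(u)) - I_S(lap(u))) == 0   # Delta I_S = I_S Delta
assert sp.simplify(Lam(I_S(u)) - I_S(Lam(u))) == 0   # Lambda I_S = I_S Lambda
```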
Corollary 4.
Equation (1) generates a matrix equation equivalent to it:
$$A_n\Delta U(x) + \lambda U(x) = 0, \tag{15}$$
where $U(x) = \big(u(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$.
Proof. 
Let $u(x)$ satisfy Equation (1). We denote:
$$v(x) = \sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2}a_i\,u\big(S_n^{i_n}\cdots S_1^{i_1}x\big)$$
and $V(x) = \big(v(S_n^{j_n}\cdots S_1^{j_1}x)\big)^T_{j=0,\ldots,2^n-1}$. The function $v(x)$ generates equality (4). Let us apply the Laplace operator to equality (4). Since the matrices of the form $S_n^{i_n}\cdots S_1^{i_1}$ are symmetric and orthogonal, so that $\big(S_n^{i_n}\cdots S_1^{i_1}\big)^2 = I$, by virtue of Lemma 1 we can write:
$$\Delta V(x) = \big(\Delta I_{S_n^{j_n}\cdots S_1^{j_1}}v(x)\big)^T_{j=0,\ldots,2^n-1} = \big(I_{S_n^{j_n}\cdots S_1^{j_1}}\Delta v(x)\big)^T_{j=0,\ldots,2^n-1} = \Big(I_{S_n^{j_n}\cdots S_1^{j_1}}\sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2}a_iI_{S_n^{i_n}\cdots S_1^{i_1}}\Delta u(x)\Big)^T_{j=0,\ldots,2^n-1} = \Big(\sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2}a_iI_{S_n^{j_n+i_n}\cdots S_1^{j_1+i_1}}\Delta u(x)\Big)^T_{j=0,\ldots,2^n-1} = \Big(\sum_{l=(l_n\ldots l_1)_2=0}^{(1\ldots1)_2}a_{j\oplus l}I_{S_n^{l_n}\cdots S_1^{l_1}}\Delta u(x)\Big)^T_{j=0,\ldots,2^n-1} = \Big(\sum_{l=(l_n\ldots l_1)_2=0}^{(1\ldots1)_2}a_{j\oplus l}\,\Delta u\big(S_n^{l_n}\cdots S_1^{l_1}x\big)\Big)^T_{j=0,\ldots,2^n-1} = A_n\Delta U(x).$$
Hence, using the equality $\Delta v(S_n^{j_n}\cdots S_1^{j_1}x) + \lambda u(S_n^{j_n}\cdots S_1^{j_1}x) = 0$, we obtain Equation (15). The corollary is proved. □
Based on Lemma 1, we prove the following statement about necessary conditions for the existence of eigenvalues of Problem S.
Theorem 4.
Let the function $u(x) \not\equiv 0$ be an eigenfunction of Problem S and $\lambda$ its eigenvalue. Then the function $w(x) = (U(x), a_n^k)$, where $U(x) = \big(u(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$ and $a_n^k$ is an eigenvector of the matrix $A_n(a_0,\ldots,a_{2^n-1})$, is a solution to the Dirichlet problem:
$$\Delta w(x) + \mu w(x) = 0, \quad x \in \Omega, \tag{16}$$
$$w(x) = 0, \quad x \in \partial\Omega, \tag{17}$$
where $\mu = \lambda/\mu_n^k$ and $\mu_n^k \ne 0$ is the eigenvalue of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ corresponding to the vector $a_n^k$.
Proof. 
Let $\lambda$ be an eigenvalue of Problem S and $u(x) \not\equiv 0$ its eigenfunction. By Corollary 4, equality (15) holds. Let us multiply it scalarly by the vector $a_n^k$. Then we have:
$$\big(A_n\Delta U(x), a_n^k\big) + \lambda\big(U(x), a_n^k\big) = 0,$$
whence, using the symmetry of the matrix $A_n(a_0,\ldots,a_{2^n-1})$ (see Corollary 2) and the properties of the vector $a_n^k$, we find:
$$\big(\Delta U(x), A_na_n^k\big) + \lambda\big(U(x), a_n^k\big) = 0,$$
which implies:
$$\mu_n^k\Delta w(x) + \lambda w(x) = 0,$$
and since $\lambda = \mu_n^k\mu$ and $\mu_n^k \ne 0$, we get (16):
$$0 = \mu_n^k\big(\Delta w(x) + \mu w(x)\big) \;\Rightarrow\; \Delta w(x) + \mu w(x) = 0.$$
Finally, since $u(x) = 0$ for $x \in \partial\Omega$, and $x \in \partial\Omega \Rightarrow S_n^{i_n}\cdots S_1^{i_1}x \in \partial\Omega$, we have $U(x) = 0$, and therefore $w(x) = (U(x), a_n^k) = 0$, $x \in \partial\Omega$. The theorem is proved. □
The following statement, converse to Theorem 4, is important because it allows us to construct solutions to Problem S.
Theorem 5.
Let the function $w(x) \not\equiv 0$ be a solution to problem (16) and (17):
$$\Delta w(x) + \mu w(x) = 0, \quad x \in \Omega; \qquad w(x) = 0, \quad x \in \partial\Omega$$
for some $\mu$. Then the function:
$$u_k(x) = \big(W(x), a_n^k\big),$$
where $W(x) = \big(w(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$ and $a_n^k$ is an eigenvector of the matrix $A_n = A_n(a_0,\ldots,a_{2^n-1})$ with eigenvalue $\mu_n^k \ne 0$, is a solution to the Dirichlet problem (1) and (2) for $\lambda = \mu_n^k\mu$.
Proof. 
Let $w(x) \not\equiv 0$ be a solution to problem (16) and (17). Consider the vector $W(x) = \big(w(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}$ and compose the function $u_k(x) = (W(x), a_n^k)$, $x \in \Omega$. It is easy to see that, according to Corollary 3, in $\Omega$ we have:
$$u_k\big(S_n^{j_n}\cdots S_1^{j_1}x\big) = \big(W(S_n^{j_n}\cdots S_1^{j_1}x), a_n^k\big) = \Big(\big(w(S_n^{i_n+j_n}\cdots S_1^{i_1+j_1}x)\big)^T_{i=0,\ldots,2^n-1},\, \big((-1)^{k\cdot i}\big)_{i=0,\ldots,2^n-1}\Big) = \Big(\big(w(S_n^{l_n}\cdots S_1^{l_1}x)\big)^T_{l=0,\ldots,2^n-1},\, \big((-1)^{k\cdot(l\oplus j)}\big)_{l=0,\ldots,2^n-1}\Big) = (-1)^{k\cdot j}\big(W(x), a_n^k\big) = (-1)^{k\cdot j}u_k(x),$$
and therefore:
$$U_k(x) = \big(u_k(S_n^{j_n}\cdots S_1^{j_1}x)\big)^T_{j=0,\ldots,2^n-1} = \big((-1)^{k\cdot j}u_k(x)\big)^T_{j=0,\ldots,2^n-1} = u_k(x)\big((-1)^{k\cdot j}\big)^T_{j=0,\ldots,2^n-1} = u_k(x)\,a_n^k.$$
Thus,
$$\Delta U_k(x) = \Delta u_k(x)\,a_n^k,$$
and hence, since by Lemma 1:
$$\Delta W(x) = \big(\Delta w(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1} = -\mu\big(w(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1} = -\mu W(x),$$
we get:
$$A_n\Delta U_k(x) = \Delta u_k(x)\,A_na_n^k = \big(\Delta W(x), a_n^k\big)\mu_n^ka_n^k = -\mu\big(W(x), a_n^k\big)\mu_n^ka_n^k = -\mu\mu_n^ku_k(x)\,a_n^k = -\mu\mu_n^kU_k(x).$$
Separating the first components of this vector equality, we obtain:
$$\sum_{i=0}^{2^n-1}a_i\,\Delta u_k\big(S_n^{i_n}\cdots S_1^{i_1}x\big) = -\mu\mu_n^ku_k(x), \quad x \in \Omega,$$
which means that $u_k(x)$ is a solution to Equation (1) with $\lambda = \mu_n^k\mu$. Let us check the boundary condition (2) of Problem S. Since $x \in \partial\Omega \Rightarrow S_n^{i_n}\cdots S_1^{i_1}x \in \partial\Omega$, for $x \in \partial\Omega$ we get:
$$u_k(x) = \Big(\big(w(S_n^{i_n}\cdots S_1^{i_1}x)\big)^T_{i=0,\ldots,2^n-1}, a_n^k\Big) = \big(0, a_n^k\big) = 0.$$
The theorem is proved. □
The theorem is proved. □
Example 3.
Let $n = 2$. According to Example 2, the eigenvectors of the matrix $A_2(a_0,a_1,a_2,a_3)$ have the form:
$$a_2^0 = (1,1,1,1)^T, \quad a_2^1 = (1,-1,1,-1)^T, \quad a_2^2 = (1,1,-1,-1)^T, \quad a_2^3 = (1,-1,-1,1)^T,$$
and by Theorem 5 the eigenfunctions of the problem corresponding to the eigenvalue $\mu$ and the eigenfunction $w_\mu(x)$ of problem (16) and (17) can be taken in the form $u_k(x) = (W(x), a_2^k)$, $k = 0, 1, 2, 3$, or:
$$u_0(x) = w_\mu(x) + w_\mu(S_1x) + w_\mu(S_2x) + w_\mu(S_1S_2x),$$
$$u_1(x) = w_\mu(x) - w_\mu(S_1x) + w_\mu(S_2x) - w_\mu(S_1S_2x),$$
$$u_2(x) = w_\mu(x) + w_\mu(S_1x) - w_\mu(S_2x) - w_\mu(S_1S_2x),$$
$$u_3(x) = w_\mu(x) - w_\mu(S_1x) - w_\mu(S_2x) + w_\mu(S_1S_2x).$$
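The four sign combinations above can be played with numerically. In the sketch below (illustrative; here $l = 2$ with $S_1$ negating $x_1$ and $S_2$ negating $x_2$ as assumed choices, and $w$ is an arbitrary smooth function rather than an actual Dirichlet eigenfunction), we check the parity relation $u_k(S_n^{j_n}\cdots S_1^{j_1}x) = (-1)^{k\cdot j}u_k(x)$ from the proof of Theorem 5, which holds regardless of what $w$ is:

```python
import numpy as np

S1 = np.diag([-1.0, 1.0])  # assumed involutions for l = 2
S2 = np.diag([1.0, -1.0])

def w(x):  # arbitrary smooth test function
    return np.sin(x[0] + 0.3) * np.cos(2.0 * x[1]) + x[0] * x[1] ** 2

def u(k, x):
    # u_k(x) = (W(x), a_2^k); components ordered by i = (i2 i1)_2
    W = [w(x), w(S1 @ x), w(S2 @ x), w(S1 @ S2 @ x)]
    signs = [(-1.0) ** bin(k & i).count("1") for i in range(4)]
    return sum(s * c for s, c in zip(signs, W))

x = np.array([0.4, -0.8])
for k in range(4):
    # u_k(S1 x) = (-1)^{k_1} u_k(x) and u_k(S2 x) = (-1)^{k_2} u_k(x)
    assert np.isclose(u(k, S1 @ x), (-1.0) ** (k & 1) * u(k, x))
    assert np.isclose(u(k, S2 @ x), (-1.0) ** ((k >> 1) & 1) * u(k, x))
```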
In what follows, we will need to expand polynomials into sums of "generalized parity" polynomials.
Lemma 2.
Let $H(x)$ be some function on $\Omega$. We denote:
$$H_{(k_n\ldots k_1)_2}(x) = \frac{1}{2^n}\sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2}(-1)^{k\cdot i}H\big(S_n^{i_n}\cdots S_1^{i_1}x\big), \quad x \in \Omega. \tag{18}$$
Then the function $H_{(k_n\ldots k_1)_2}(x)$ has the "generalized parity" property:
$$H_{(k_n\ldots k_1)_2}(S_ix) = (-1)^{k_i}H_{(k_n\ldots k_1)_2}(x) \tag{19}$$
and, in addition, the following equality:
$$\frac{1}{2^n}\sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots1)_2}(-1)^{m\cdot i}H_{(k_n\ldots k_1)_2}\big(S_n^{i_n}\cdots S_1^{i_1}x\big) = \begin{cases} H_{(k_n\ldots k_1)_2}(x), & m = k, \\ 0, & m \ne k \end{cases} \tag{20}$$
holds true. Moreover, the function $H(x)$, $x \in \Omega$, can be represented as:
$$H(x) = \sum_{k=(k_n\ldots k_1)_2=0}^{(1\ldots1)_2}H_{(k_n\ldots k_1)_2}(x), \quad x \in \Omega. \tag{21}$$
Proof. 
It is not hard to see that:
H k n k 1 2 ( S i x ) = 1 2 n j ( j n j 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) k j H ( S n j n S i j i + 1 S 1 j 1 x )            = 1 2 n j ( j n j 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) k n j n + + k 1 j 1 + k i H ( S n j n S i j i S 1 j 1 x ) = ( 1 ) k i H k n k 1 2 ( x ) ,
where a change of variables is made under the sum sign, as in Theorem 2. Equality (19) is proved.
Consider now, equality (21). It is easy to see that for x Ω :
k = 0 ( 1 1 ) 2 H k n k 1 2 ( x ) = k = 0 ( 1 1 ) 2 1 2 n i = 0 ( 1 1 ) 2 ( 1 ) k i H ( S n i n S 1 i 1 x )                                                                             = i = 0 ( 1 1 ) 2 H ( S n i n S 1 i 1 x ) 1 2 n k = 0 ( 1 1 ) 2 ( 1 ) k i .
Let us calculate the inner sum from the right-hand side of equalities (22). It is clear that i 0 j i j 0 , and then:
k = 0 ( 1 1 ) 2 ( 1 ) k i = k j = 0 1 ( 1 ) k j i j k n = 0 1 k 1 = 0 1 ( 1 ) k n i n + + k 1 i 1 = ( 1 ) 0 k n = 0 , , k 1 = 0 , ¬ k j 1 , , 1 ( 1 ) k n i n + + k 1 i 1 + ( 1 ) i j k n = 0 , , k 1 = 0 , ¬ k j 1 , , 1 ( 1 ) k n i n + + k 1 i 1 = 0 ,
If i = 0 , then k = 0 ( 1 1 ) 2 ( 1 ) k 0 = 2 n , i.e., 1 2 n k = 0 ( 1 1 ) 2 ( 1 ) k i = δ k , 0 . Therefore, (22) implies (21). Now let us prove (20). It is not hard to see that:
i ( i n i 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) m i H k n k 1 2 ( S n i n S 1 i 1 x )       = i ( i n i 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) m i 1 2 n j ( j n j 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) k j H ( S n i n + j n S 1 i 1 + j 1 x )       = i ( i n i 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) m i 1 2 n l ( l n l 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) k ( l i ) H ( S n l n S 1 l 1 x )       = i ( i n i 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) m i + k i 1 2 n l ( l n l 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) k l H ( S n l n S 1 l 1 x )                                             = H k n k 1 2 ( x ) 1 2 n i ( i n i 1 ) 2 = 0 ( 1 1 ) 2 ( 1 ) ( m k ) i = H k n k 1 2 ( x ) δ k , m
Here it is taken into account that $m-k=0\Leftrightarrow m=k$. The lemma is proved. □
Example 4.
Let $l=2$ and $n=2$, $S_1x=(-x_1,x_2)$, $S_2x=(x_1,-x_2)$. Then, according to Lemma 2, the generalized parity components of the function $H(x)$ from expansion (21) have the form:
$$H_0(x)=H_{(00)_2}(x)=\tfrac14\bigl(H(x_1,x_2)+H(-x_1,x_2)+H(x_1,-x_2)+H(-x_1,-x_2)\bigr),$$
$$H_1(x)=H_{(01)_2}(x)=\tfrac14\bigl(H(x_1,x_2)+(-1)^{(01)_2\cdot(01)_2}H(-x_1,x_2)+(-1)^{(01)_2\cdot(10)_2}H(x_1,-x_2)+(-1)^{(01)_2\cdot(11)_2}H(-x_1,-x_2)\bigr)$$
$$\qquad=\tfrac14\bigl(H(x_1,x_2)-H(-x_1,x_2)+H(x_1,-x_2)-H(-x_1,-x_2)\bigr),$$
$$H_2(x)=H_{(10)_2}(x)=\tfrac14\bigl(H(x_1,x_2)+H(-x_1,x_2)-H(x_1,-x_2)-H(-x_1,-x_2)\bigr),$$
$$H_3(x)=H_{(11)_2}(x)=\tfrac14\bigl(H(x_1,x_2)-H(-x_1,x_2)-H(x_1,-x_2)+H(-x_1,-x_2)\bigr).$$
If, for example, the function $H(x)$ is even in $x_1$, then its generalized parity components $1$ and $3$ vanish: $H_1(x)=0$, $H_3(x)=0$.
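This decomposition is easy to check numerically. The sketch below (Python purely for illustration; the sample functions and the evaluation point are arbitrary choices, not taken from the paper) forms the four components by the sign pattern above and verifies that they sum back to $H(x)$, and that components $1$ and $3$ vanish for a function even in $x_1$.

```python
# Generalized parity components for n = 2 involutions S1 x = (-x1, x2),
# S2 x = (x1, -x2), following Lemma 2 (numerical sketch).

def parity_components(H, x1, x2):
    # Values of H on the orbit of (x1, x2), indexed by i = (i2 i1)_2.
    vals = [H(x1, x2), H(-x1, x2), H(x1, -x2), H(-x1, -x2)]
    comps = []
    for k in range(4):
        s = 0.0
        for i in range(4):
            # dot product of binary digits: k·i = k1*i1 + k2*i2
            dot = (k & 1) * (i & 1) + (k >> 1 & 1) * (i >> 1 & 1)
            s += (-1) ** dot * vals[i]
        comps.append(s / 4)  # factor 1/2^n with n = 2
    return comps

H = lambda x1, x2: x1 ** 2 * x2 + x2 ** 3 + x1  # illustrative H
c = parity_components(H, 0.7, -0.4)
print(sum(c), H(0.7, -0.4))  # components sum back to H(x)

# A function even in x1 has components 1 and 3 equal to zero:
He = lambda x1, x2: x1 ** 2 + x2
ce = parity_components(He, 0.7, -0.4)
print(ce[1], ce[3])
```

The inner sign $(-1)^{k\cdot i}$ mirrors the rows of the sign pattern written out in the displayed formulas above.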
Let $H(x)=H_m(x)$ be a homogeneous harmonic polynomial of degree $m$. If $(r,\varphi)$ are the polar coordinates of $x=(x_1,x_2)$, then:
$$H_m(x)=\alpha\,\mathrm{Re}\,(x_1+ix_2)^m+\beta\,\mathrm{Im}\,(x_1+ix_2)^m=r^m(\alpha\cos m\varphi+\beta\sin m\varphi)$$
and:
$$H_m(-x_1,x_2)=\alpha\,\mathrm{Re}\,(-x_1+ix_2)^m+\beta\,\mathrm{Im}\,(-x_1+ix_2)^m=(-r)^m(\alpha\cos m\varphi-\beta\sin m\varphi),$$
$$H_m(x_1,-x_2)=\alpha\,\mathrm{Re}\,(x_1-ix_2)^m+\beta\,\mathrm{Im}\,(x_1-ix_2)^m=r^m(\alpha\cos m\varphi-\beta\sin m\varphi),$$
$$H_m(-x_1,-x_2)=(-r)^m(\alpha\cos m\varphi+\beta\sin m\varphi).$$
From these equalities we get:
$$H_m^0(x)=\frac{r^m}{2}\,\alpha\bigl(1+(-1)^m\bigr)\cos m\varphi,\qquad H_m^1(x)=\frac{r^m}{2}\,\alpha\bigl(1-(-1)^m\bigr)\cos m\varphi,$$
$$H_m^2(x)=\frac{r^m}{2}\,\beta\bigl(1-(-1)^m\bigr)\sin m\varphi,\qquad H_m^3(x)=\frac{r^m}{2}\,\beta\bigl(1+(-1)^m\bigr)\sin m\varphi.$$
Therefore, for m N 0 :
$$H_{2m}^0(x)=\alpha r^{2m}\cos 2m\varphi,\quad H_{2m}^1(x)=0,\quad H_{2m}^2(x)=0,\quad H_{2m}^3(x)=\beta r^{2m}\sin 2m\varphi,$$
$$H_{2m+1}^0(x)=0,\quad H_{2m+1}^1(x)=\alpha r^{2m+1}\cos(2m+1)\varphi,\quad H_{2m+1}^2(x)=\beta r^{2m+1}\sin(2m+1)\varphi,\quad H_{2m+1}^3(x)=0.$$
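The parity assignments of the $\sin m\varphi$ part can be confirmed numerically: for $H(x)=\mathrm{Im}\,(x_1+ix_2)^m$, only the component of generalized parity $3$ survives when $m$ is even, and only parity $2$ when $m$ is odd. The following sketch (the sample point is an arbitrary choice) checks this for $m=2$ and $m=3$.

```python
# Parity classification of r^m sin(m*phi) under S1 x = (-x1, x2),
# S2 x = (x1, -x2): numerical sketch of the table above.

def parity_components(H, x1, x2):
    vals = [H(x1, x2), H(-x1, x2), H(x1, -x2), H(-x1, -x2)]
    return [sum((-1) ** ((k & 1) * (i & 1) + (k >> 1 & 1) * (i >> 1 & 1)) * vals[i]
                for i in range(4)) / 4
            for k in range(4)]

def sin_part(m):
    # H(x) = Im (x1 + i x2)^m = r^m sin(m*phi)
    return lambda x1, x2: (complex(x1, x2) ** m).imag

even = parity_components(sin_part(2), 0.3, 0.8)  # r^2 sin 2*phi
odd = parity_components(sin_part(3), 0.3, 0.8)   # r^3 sin 3*phi
print([round(v, 12) for v in even])  # only index 3 nonzero
print([round(v, 12) for v in odd])   # only index 2 nonzero
```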

4. Eigenfunctions and Eigenvalues of Problem S

Let us transform the result of Theorem 5 to a simpler form.
Theorem 6.
The eigenfunctions and eigenvalues of the Dirichlet problem (1) and (2) from Theorem 5 can be represented as:
$$u_n^k(x)=\sum_{i=(i_n\ldots i_1)_2=0}^{(1\ldots 1)_2}(-1)^{k\cdot i}\,w_\mu\bigl(S_n^{i_n}\cdots S_1^{i_1}x\bigr),\qquad \lambda_{\mu,k}=\mu\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i,$$
where the function w μ ( x ) is a solution to the problem (16) and (17):
$$\Delta w(x)+\mu w(x)=0,\ x\in\Omega;\qquad w(x)=0,\ x\in\partial\Omega$$
for some $\mu\in\mathbb{R}_+$. The functions $u_n^k(x)$, $k=0,\ldots,2^n-1$, are orthogonal in $L_2(\Omega)$.
Proof. 
We prove Formula (23) by induction on $n$. For $n=1$, from (18), taking into account the equalities $a_1^0=(1,1)^T$, $a_1^1=(1,-1)^T$ from Theorem 3, we obtain:
$$u^0(x)=(W(x),a_1^0)=w_\mu(x)+w_\mu(S_1x)\equiv u_1^0(x),\qquad u^1(x)=(W(x),a_1^1)=w_\mu(x)-w_\mu(S_1x)\equiv u_1^1(x).$$
Here we moved the index $k$ of the functions $u^k(x)$ from (18) into a superscript to make room for the subscript $n$. Suppose that Formula (23) is valid for $n-1$; let us prove its validity for $n$. In accordance with Theorems 3 and 5, we have $a_n^k=(a_{n-1}^k,\pm a_{n-1}^k)^T$ and $u^k(x)=(W(x),a_n^k)$, and hence the function:
$$u^{(k_nk_{n-1}\ldots k_1)_2}(x)=u^{(k_{n-1}\ldots k_1)_2}(x)+(-1)^{k_n}u^{(k_{n-1}\ldots k_1)_2}(S_nx)$$
is an eigenfunction of the Dirichlet problem (1) and (2). Using the induction hypothesis, we transform this function:
$$u^{(k_nk_{n-1}\ldots k_1)_2}(x)=\sum_{i=(0\,i_{n-1}\ldots i_1)_2=0}^{(01\ldots 1)_2}(-1)^{k_n\cdot 0+(k_{n-1}\ldots k_1)_2\cdot(i_{n-1}\ldots i_1)_2}\,w_\mu\bigl(S_n^0S_{n-1}^{i_{n-1}}\cdots S_1^{i_1}x\bigr)$$
$$\qquad+\sum_{i=(1\,i_{n-1}\ldots i_1)_2=(10\ldots 0)_2}^{(11\ldots 1)_2}(-1)^{k_n\cdot 1+(k_{n-1}\ldots k_1)_2\cdot(i_{n-1}\ldots i_1)_2}\,w_\mu\bigl(S_nS_{n-1}^{i_{n-1}}\cdots S_1^{i_1}x\bigr)$$
$$=\sum_{i=(i_ni_{n-1}\ldots i_1)_2=0}^{(11\ldots 1)_2}(-1)^{(k_nk_{n-1}\ldots k_1)_2\cdot(i_ni_{n-1}\ldots i_1)_2}\,w_\mu\bigl(S_n^{i_n}S_{n-1}^{i_{n-1}}\cdots S_1^{i_1}x\bigr)\equiv u_n^{(k_n\ldots k_1)_2}(x),$$
which proves Formula (23). The eigenvalues of the Dirichlet problem (1) and (2) corresponding to eigenfunction u n k ( x ) , by Corollary 3, have the form:
$$\lambda_{\mu,k}=\mu\,\mu_n^k=\mu\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i.$$
Now let us prove that the functions $u_n^k(x)=u_n^{(k_n\ldots k_1)_2}(x)$ are orthogonal in $L_2(\Omega)$ for different $k$. Indeed, if $k\neq m$, then there exists $i$ such that $k_i\neq m_i$ and hence $k_i+m_i\not\equiv 0\ (\mathrm{mod}\ 2)$. According to Lemma 4.1 from [37], the following equality holds true for $g\in C(\Omega)$:
$$\int_\Omega g(S_i\xi)\,d\xi=\int_\Omega g(\xi)\,d\xi.$$
Therefore using equality (19) from Lemma 2 we get:
$$\int_\Omega u_n^{(k_n\ldots k_1)_2}(x)\,u_n^{(m_n\ldots m_1)_2}(x)\,dx=\int_\Omega u_n^{(k_n\ldots k_1)_2}(S_ix)\,u_n^{(m_n\ldots m_1)_2}(S_ix)\,dx$$
$$=(-1)^{k_i+m_i}\int_\Omega u_n^{(k_n\ldots k_1)_2}(x)\,u_n^{(m_n\ldots m_1)_2}(x)\,dx=-\int_\Omega u_n^{(k_n\ldots k_1)_2}(x)\,u_n^{(m_n\ldots m_1)_2}(x)\,dx.$$
This immediately implies the orthogonality:
$$\int_\Omega u_n^{(k_n\ldots k_1)_2}(x)\,u_n^{(m_n\ldots m_1)_2}(x)\,dx=0.$$
The theorem is proved. □
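The key identity behind this orthogonality argument, $u_n^k(S_ix)=(-1)^{k_i}u_n^k(x)$, can be checked pointwise. Below is a sketch for $n=2$ with an arbitrary illustrative function $w$ standing in for $w_\mu$ (the identity uses only the symmetrization in (23), not the eigenfunction property).

```python
# Pointwise check of u_n^k(S_i x) = (-1)^{k_i} u_n^k(x) for n = 2
# (numerical sketch with an illustrative w).
from math import sin, cos

def S(i, x):
    # apply S2^{i2} S1^{i1} to x = (x1, x2); S1 flips x1, S2 flips x2
    x1, x2 = x
    if i & 1:
        x1 = -x1
    if i >> 1 & 1:
        x2 = -x2
    return (x1, x2)

def dot(a, b):
    # dot product of the binary digits of a and b
    return (a & 1) * (b & 1) + (a >> 1 & 1) * (b >> 1 & 1)

def u(k, x, w):
    # formula (23) specialized to n = 2
    return sum((-1) ** dot(k, i) * w(S(i, x)) for i in range(4))

w = lambda p: sin(p[0] + 0.5) * cos(2 * p[1] - 1)  # illustrative stand-in for w_mu
x = (0.6, -0.3)
for k in range(4):
    print(k,
          round(u(k, S(1, x), w) - (-1) ** (k & 1) * u(k, x, w), 12),
          round(u(k, S(2, x), w) - (-1) ** (k >> 1 & 1) * u(k, x, w), 12))
```

Both differences vanish for every $k$, since composing the reflections only shifts the binary index of the orbit.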
Corollary 5.
If $H(x)$ is a harmonic polynomial, then the polynomials $H_{(k_n\ldots k_1)_2}(x)$ for different $k$ are orthogonal on $\partial\Omega$, and therefore these polynomials are linearly independent.
Proof. 
Indeed, for $k\neq m$, similarly to (24), by Lemma 4.1 from [37], we obtain:
$$\int_{\partial\Omega}H_{(k_n\ldots k_1)_2}(x)\,H_{(m_n\ldots m_1)_2}(x)\,ds=\int_{\partial\Omega}H_{(k_n\ldots k_1)_2}(S_ix)\,H_{(m_n\ldots m_1)_2}(S_ix)\,ds$$
$$=(-1)^{k_i+m_i}\int_{\partial\Omega}H_{(k_n\ldots k_1)_2}(x)\,H_{(m_n\ldots m_1)_2}(x)\,ds=-\int_{\partial\Omega}H_{(k_n\ldots k_1)_2}(x)\,H_{(m_n\ldots m_1)_2}(x)\,ds,$$
whence the assertion of the corollary follows. □
Remark 1.
If we denote:
$$U_n(x)=\bigl(u_n^i(x)\bigr)_{i=0,\ldots,2^n-1}^T,\qquad W_n(x)=\bigl(w_\mu(S_n^{i_n}\cdots S_1^{i_1}x)\bigr)_{i=0,\ldots,2^n-1}^T,\qquad V_n=\bigl(a_n^i\bigr)_{i=0,\ldots,2^n-1}^T=\bigl((-1)^{i\cdot j}\bigr)_{i,j=0,\ldots,2^n-1},$$
then equalities (23) can be written in the matrix form $U_n=V_nW_n$, where the matrix $V_n$ is symmetric and orthogonal in the sense that its rows are pairwise orthogonal ($V_nV_n^T=2^nI$).
Indeed, the symmetry of $V_n$ follows from the equality $(-1)^{i\cdot j}=(-1)^{j\cdot i}$, and the orthogonality is proved in Theorem 3.
Example 5.
For n = 2 , according to Example 3, the matrix V 2 has the form:
$$V_2=\bigl(a_2^i\bigr)_{i=0,\ldots,3}^T=\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{pmatrix}.$$
It is seen that the matrix V 2 is symmetric and orthogonal.
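The matrix $V_n$ can be generated directly from its definition $\bigl((-1)^{i\cdot j}\bigr)$ with the dot product taken over binary digits. The sketch below rebuilds $V_2$ and its Gram matrix, illustrating the symmetry and row orthogonality stated in Remark 1.

```python
# Build V_n = ((-1)^{i·j}) and check V_n V_n^T = 2^n I (sketch for n = 2).

def binary_dot(i, j, n):
    # dot product of the binary digit vectors of i and j
    return sum((i >> b & 1) * (j >> b & 1) for b in range(n))

def V(n):
    size = 2 ** n
    return [[(-1) ** binary_dot(i, j, n) for j in range(size)] for i in range(size)]

V2 = V(2)
for row in V2:
    print(row)

# Gram matrix of the rows: diagonal 2^n = 4, zeros elsewhere
gram = [[sum(V2[i][k] * V2[j][k] for k in range(4)) for j in range(4)]
        for i in range(4)]
print(gram)
```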
Now, we transform the results of Theorem 6 and investigate the completeness of the eigenfunctions of Problem S.
Theorem 7.
Let $\mu_n^k\neq 0$, $k=0,\ldots,2^n-1$. Then the system of eigenfunctions of the Dirichlet problem (1) and (2) is complete in $L_2(\Omega)$ and has the form:
$$u_n^{\mu,m,k,j}(x)=\frac{1}{|x|^{l/2-1}}\,J_{m+l/2-1}\bigl(\sqrt{\mu}\,|x|\bigr)\,H_m^{(k_n\ldots k_1)_2,j}\bigl(x/|x|\bigr),$$
where $J_\nu(t)$ is the Bessel function of the first kind, $\sqrt{\mu}$ is a root of the Bessel function $J_{m+l/2-1}(t)$, and $\{H_m^{(k_n\ldots k_1)_2,j}(\xi):\ j=1,\ldots,j_k\}$ is a system of homogeneous harmonic polynomials of degree $m$ and generalized parity $k=(k_n\ldots k_1)_2$, orthogonal on $\partial\Omega$. The eigenvalues of Problem S are $\lambda_{\mu,k}=\mu\sum_{i=0}^{2^n-1}(-1)^{k\cdot i}a_i$.
Proof. 
Since the eigenfunctions of problem (16) and (17) have the form (see, for example, Refs. [38,39]):
$$w^{(\mu,m,j)}(x)=\frac{1}{|x|^{l/2-1}}\,J_{m+l/2-1}\bigl(\sqrt{\mu}\,|x|\bigr)\,H_m^j\Bigl(\frac{x}{|x|}\Bigr),$$
where $\{H_m^j(x):\ j=1,\ldots,h_m\}$, $h_m=\frac{2m+l-2}{l-2}\binom{m+l-3}{l-3}$ $(l>2)$, is the system of homogeneous harmonic polynomials of degree $m$ orthogonal on $\partial\Omega$ (see, for example, Ref. [40]) and $|x|=|S_ix|$, the expansion (23) in fact acts on the homogeneous harmonic polynomials $H_m^j(x)$. We decompose the entire space of homogeneous harmonic polynomials of degree $m$ into the sum of subspaces of the same "generalized parity" $(k_n\ldots k_1)_2$ (see equality (19)). This is possible due to the orthogonality on $\partial\Omega$ of harmonic polynomials of different generalized parity $k$, proved in Corollary 5. Then, in each subspace we choose a complete system $\{H_m^{(k_n\ldots k_1)_2,j}(x):\ j=1,\ldots,j_k\}$ of homogeneous harmonic polynomials orthogonal on $\partial\Omega$. Note that for some $k$ it is possible that $j_k=0$; that is, for such $k$ the components $H_m^{(k_n\ldots k_1)_2,j}(x)$ are missing (see Example 4). Taking into account the notation of Lemma 2 and adding the generalized parity index $k$, we obtain the functions (25):
$$u_n^{\mu,m,k,j}(x)=\frac{1}{2^n}\sum_{(i_n\ldots i_1)_2=0}^{(1\ldots 1)_2}\frac{(-1)^{k\cdot i}}{|S_n^{i_n}\cdots S_1^{i_1}x|^{m+l/2-1}}\,J_{m+l/2-1}\bigl(\sqrt{\mu}\,|S_n^{i_n}\cdots S_1^{i_1}x|\bigr)\,H_m^{k,j}\bigl(S_n^{i_n}\cdots S_1^{i_1}x\bigr)$$
$$=\frac{1}{|x|^{l/2-1}}\,J_{m+l/2-1}\bigl(\sqrt{\mu}\,|x|\bigr)\,H_m^{(k_n\ldots k_1)_2,j}\bigl(x/|x|\bigr).$$
In Theorem 6 it is shown that the functions $u_n^{\mu,m,k,j}(x)$ are orthogonal for fixed $\mu$ and $m$. Moreover, since the Bessel functions $J_{m+l/2-1}(\sqrt{\mu}\,t)$ are orthogonal in $L_2((0,1);t)$ for each fixed $m\in\mathbb{N}_0$ and different $\mu$, and the polynomials $H_m^{(k_n\ldots k_1)_2,j}(x)$ are orthogonal on $\partial\Omega$ for different $(m,k,j)$, the functions $u_n^{\mu,m,k,j}(x)$ from (25) are orthogonal in $L_2(\Omega)$. Indeed, for different $(\mu,m,k,j)$ we have the equality:
$$\int_\Omega u_n^{\mu_1,m_1,k_1,j_1}(x)\,u_n^{\mu_2,m_2,k_2,j_2}(x)\,dx=\int_0^1\rho\,J_{m_1+l/2-1}(\sqrt{\mu_1}\,\rho)\,J_{m_2+l/2-1}(\sqrt{\mu_2}\,\rho)\,d\rho\cdot\int_{\partial\Omega}H_{m_1}^{k_1,j_1}(\xi)\,H_{m_2}^{k_2,j_2}(\xi)\,ds_\xi=0.$$
For $\mu_1\neq\mu_2$ and $m_1=m_2$, the first factor is zero due to the properties of the Bessel functions. If $m_1\neq m_2$, the second factor is zero by the orthogonality of harmonic polynomials of different degrees. If $m_1=m_2$ and $\mu_1=\mu_2$, then for $(k_1,j_1)\neq(k_2,j_2)$ the second factor is zero by the construction of the polynomials $H_m^{k,j}(x)$ and in view of Corollary 5.
The constructed system of functions (25) is complete in $L_2(\Omega)=L_2((0,1)\times\partial\Omega)$ by Lemma 2 from [41] (p. 33): the system $\{J_{m+l/2-1}(\sqrt{\mu}\,\rho):\ J_{m+l/2-1}(\sqrt{\mu})=0\}$ is orthogonal and complete in $L_2((0,1);t)$ for each $m$, and the system $\{H_m^{k,j}(\xi)\}$ is orthogonal and complete in $L_2(\partial\Omega)$ for different $m,k,j$. The theorem is proved. □
Example 6.
Let $l=2$, $n=2$, $S_1x=(-x_1,x_2)$, $S_2x=(x_1,-x_2)$; then Problem S has the form:
$$a_0\Delta u(x_1,x_2)+a_1\Delta u(-x_1,x_2)+a_2\Delta u(x_1,-x_2)+a_3\Delta u(-x_1,-x_2)+\lambda u(x)=0,\ x\in\Omega;\qquad u(x)=0,\ x\in\partial\Omega.$$
Let us find the eigenfunctions of the problem (1) and (2) using Example 4. The eigenfunctions of the Dirichlet problem (16) and (17) in polar coordinates are determined according to equality (26) (see also [41] (p. 392)) in the form:
$$w^{(\mu,m,0)}(x)=J_m(\sqrt{\mu}\,r)\cos m\varphi,\qquad w^{(\mu,m,1)}(x)=J_m(\sqrt{\mu}\,r)\sin m\varphi,\qquad m\in\mathbb{N}_0,$$
where $\sqrt{\mu}$ is a positive root of the Bessel function $J_m(t)$:
$$J_m(t)=\sum_{j=0}^{\infty}\frac{(-1)^j}{j!\,(j+m)!}\Bigl(\frac{t}{2}\Bigr)^{2j+m}.$$
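Using partial sums of this series, one can check by finite differences that $w=J_m(\sqrt{\mu}\,r)\cos m\varphi$ indeed satisfies $\Delta w+\mu w=0$ for any $\mu>0$ (the boundary condition is what singles out the roots of $J_m$). In the sketch below the values $m=2$, $\mu=7$ and the sample point are arbitrary illustrative choices.

```python
# Finite-difference check that w = J_m(sqrt(mu) r) cos(m*phi) solves
# the Helmholtz equation (16) in the plane (numerical sketch).
from math import factorial, hypot, sqrt

MU, M = 7.0, 2  # illustrative mu and m

def J(m, t, terms=30):
    # partial sum of the Bessel series above
    return sum((-1) ** j / (factorial(j) * factorial(j + m)) * (t / 2) ** (2 * j + m)
               for j in range(terms))

def w(x1, x2):
    # J_2(sqrt(mu) r) cos(2*phi), with cos(2*phi) = (x1^2 - x2^2)/r^2
    r = hypot(x1, x2)
    return J(M, sqrt(MU) * r) * (x1 * x1 - x2 * x2) / (r * r)

h, (a, b) = 1e-3, (0.4, 0.25)  # step and interior sample point
lap = (w(a + h, b) + w(a - h, b) + w(a, b + h) + w(a, b - h) - 4 * w(a, b)) / h ** 2
print(abs(lap + MU * w(a, b)))  # residual of Delta w + mu w, small
```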
Using Formula (25), we write:
$$u_2^{\mu,m,k,j}(x)=J_m\bigl(\sqrt{\mu}\,|x|\bigr)\,H_m^{(k_2k_1)_2,j}\bigl(x/|x|\bigr),\qquad j=1,\ldots,j_k.$$
According to Example 4, for $m$ even $j_0=j_3=1$, $j_1=j_2=0$, and for $m$ odd $j_1=j_2=1$, $j_0=j_3=0$. Therefore, taking into account (13), we write:
$$u_2^{\mu,2m,0,1}(x)=J_{2m}(\sqrt{\mu}\,r)\cos 2m\varphi,\qquad \lambda_{\mu,0}=\mu(a_0+a_1+a_2+a_3),$$
$$u_2^{\mu,2m+1,2,1}(x)=J_{2m+1}(\sqrt{\mu}\,r)\sin(2m+1)\varphi,\qquad \lambda_{\mu,2}=\mu(a_0+a_1-a_2-a_3),$$
$$u_2^{\mu,2m+1,1,1}(x)=J_{2m+1}(\sqrt{\mu}\,r)\cos(2m+1)\varphi,\qquad \lambda_{\mu,1}=\mu(a_0-a_1+a_2-a_3),$$
$$u_2^{\mu,2m,3,1}(x)=J_{2m}(\sqrt{\mu}\,r)\sin 2m\varphi,\qquad \lambda_{\mu,3}=\mu(a_0-a_1-a_2+a_3),$$
where $\sqrt{\mu}$ is a root of the corresponding Bessel function and $m\in\mathbb{N}_0$. The obtained functions are complete in $L_2(\Omega)$.
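The vanishing of the radial factor in the orthogonality relation above can also be checked numerically: for two distinct positive roots $\sqrt{\mu}$ of $J_0$, the weighted integral $\int_0^1\rho\,J_0(\sqrt{\mu_1}\,\rho)J_0(\sqrt{\mu_2}\,\rho)\,d\rho$ is zero. The sketch below uses the series above for $J_0$, locates the roots by bisection, and integrates with Simpson's rule (the intervals bracketing the roots are assumptions based on the sign changes of $J_0$).

```python
# Radial orthogonality of Bessel modes sharing the order m = 0
# but built from different zeros of J_0 (numerical sketch).
from math import factorial

def J(m, t, terms=30):
    return sum((-1) ** j / (factorial(j) * factorial(j + m)) * (t / 2) ** (2 * j + m)
               for j in range(terms))

def bessel_zero(m, a, b, iters=100):
    # bisection; assumes J_m changes sign on [a, b]
    for _ in range(iters):
        c = (a + b) / 2
        if J(m, a) * J(m, c) <= 0:
            b = c
        else:
            a = c
    return (a + b) / 2

z1 = bessel_zero(0, 2, 3)  # first positive zero of J_0
z2 = bessel_zero(0, 5, 6)  # second positive zero of J_0

def simpson(f, a, b, N=2000):  # composite Simpson rule, N even
    h = (b - a) / N
    return (f(a) + f(b)
            + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, N))) * h / 3

integral = simpson(lambda r: r * J(0, z1 * r) * J(0, z2 * r), 0, 1)
print(z1, z2, integral)  # the integral is numerically zero
```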

5. Conclusions

Summarizing the investigation carried out, we note that, owing to the properties of the special-form matrices A n from equality (4) studied in Theorems 1–3, we managed in Theorems 5 and 6, and then in Theorem 7, to write out the complete system of eigenfunctions and eigenvalues of the nonlocal Problem S. As for further applications of the proposed method, a similar approach can be used to study the eigenfunctions and eigenvalues of the Neumann and Robin boundary value problems in a ball. Moreover, we expect that for the given nonlocal Laplace operator the method also allows one to investigate the spectral problem in an l-dimensional parallelepiped and to find an explicit form of the eigenfunctions and eigenvalues of the Dirichlet and Neumann boundary value problems, as well as of problems with periodic conditions. These questions are the subject of future work, which we plan to address in subsequent articles.

Author Contributions

B.T. and V.K.; investigation, B.T. and V.K.; writing original draft preparation, B.T. and V.K.; writing review and editing, B.T. and V.K. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first author was supported by a grant from the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan, project AP08855810. The second author was supported by Act 211 of the Government of the Russian Federation, contract no. 02.A03.21.0011.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data is present within the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nahushev, A.M. Equations of Mathematical Biology; Nauka: Moscow, Russia, 1995. (In Russian) [Google Scholar]
  2. Babbage, C. An essay towards the calculus of functions. Philos. Trans. R. Soc. Lond. 1816, 106, 179–256. [Google Scholar]
  3. Przeworska-Rolewicz, D. Equations with Transformed Argument, an Algebraic Approach; PWN: Warsaw, Poland, 1973. [Google Scholar]
  4. Wiener, J. Generalized Solutions of Functional Differential Equations; World Science: Singapore, 1993. [Google Scholar]
  5. Baskakov, A.G.; Krishtal, I.A.; Romanova, E.Y. Spectral analysis of a differential operator with an involution. J. Evol. Equ. 2017, 17, 669–684. [Google Scholar] [CrossRef]
  6. Baskakov, A.G.; Krishtal, I.A.; Uskova, N.B. On the spectral analysis of a differential operator with an involution and general boundary conditions. Eurasian Math. J. 2020, 11, 30–39. [Google Scholar] [CrossRef]
  7. Baskakov, A.G.; Krishtal, I.A.; Uskova, N.B. Similarity techniques in the spectral analysis of perturbed operator matrices. J. Math. Anal. Appl. 2019, 477, 669–684. [Google Scholar] [CrossRef] [Green Version]
  8. Burlutskaya, M.S.; Khromov, A.P. Fourier method in an initial-boundary value problem for a first-order partial differential equation with involution. Comput. Math. Math. Phys. 2011, 51, 2102–2114. [Google Scholar] [CrossRef]
  9. Garkavenko, G.V.; Uskova, N.B. Decomposition of linear operators and asymptotic behavior of eigenvalues of differential operators with growing potential. J. Math. Sci. 2020, 246, 812–827. [Google Scholar]
  10. Kritskov, L.V.; Sadybekov, M.A.; Sarsenbi, A.M. Properties in Lp of root functions for a nonlocal problem with involution. Turk. J. Math. 2019, 43, 393–401. [Google Scholar] [CrossRef]
  11. Kritskov, L.V.; Sarsenbi, A.M. Spectral properties of a nonlocal problem for a second-order differential equation with an involution. Differ. Equ. 2015, 51, 984–990. [Google Scholar] [CrossRef]
  12. Kritskov, L.V.; Sarsenbi, A.M. Basicity in Lp of root functions for differential equations with involution. Electron. J. Differ. Equ. 2015, 2015, 1–9. [Google Scholar]
  13. Kritskov, L.V.; Sarsenbi, A.M. Riesz basis property of system of root functions of second-order differential operator with involution. Differ. Equ. 2017, 53, 33–46. [Google Scholar] [CrossRef]
  14. Sadybekov, M.A.; Sarsenbi, A.M. Criterion for the basis property of the eigenfunction system of a multiple differentiation operator with an involution. Differ. Equ. 2012, 48, 1112–1118. [Google Scholar] [CrossRef]
  15. Ahmad, B.; Alsaedi, A.; Kirane, M.; Tapdigoglu, R.G. An inverse problem for space and time fractional evolution equation with an involution perturbation. Quaest. Math. 2017, 40, 151–160. [Google Scholar] [CrossRef]
  16. Al-Salti, N.; Kerbal, S.; Kirane, M. Initial-boundary value problems for a time-fractional differential equation with involution perturbation. Math. Model. Nat. Phenom. 2019, 14, 1–15. [Google Scholar] [CrossRef] [Green Version]
  17. Kirane, M.; Al-Salti, N. Inverse problems for a nonlocal wave equation with an involution perturbation. J. Nonlinear Sci. Appl. 2016, 9, 1243–1251. [Google Scholar] [CrossRef]
  18. Kirane, M.; Malik, S.A.; Al-Gwaiz, M.A. An inverse source problem for a two dimensional time fractional diffusion equation with nonlocal boundary conditions. Math. Methods Appl. Sci. 2013, 36, 1056–1069. [Google Scholar] [CrossRef]
  19. Kirane, M.; Malik, S.A. Determination of an unknown source term and the temperature distribution for the linear heat equation involving fractional derivative in time. Appl. Math. Comput. 2011, 218, 163–170. [Google Scholar] [CrossRef] [Green Version]
  20. Kirane, M.; Samet, B.; Torebek, B.T. Determination of an unknown source term temperature distribution for the sub-diffusion equation at the initial and final data. Electron. J. Differ. Equ. 2017, 2017, 1–13. [Google Scholar]
  21. Kirane, M.; Sadybekov, M.A.; Sarsenbi, A.A. On an inverse problem of reconstructing a subdiffusion process from nonlocal data. Math. Methods Appl. Sci. 2019, 42, 2043–2052. [Google Scholar] [CrossRef]
  22. Torebek, B.T.; Tapdigoglu, R. Some inverse problems for the nonlocal heat equation with Caputo fractional derivative. Math. Methods Appl. Sci. 2017, 40, 6468–6479. [Google Scholar] [CrossRef]
  23. Cabada, A.; Tojo, F.A.F. On linear differential equations and systems with reflection. Appl. Math. Comput. 2017, 305, 84–102. [Google Scholar] [CrossRef] [Green Version]
  24. Tojo, F.A.F. Computation of Green’s functions through algebraic decomposition of operators. Bound. Value Probl. 2016, 167, 1–15. [Google Scholar] [CrossRef] [Green Version]
  25. Andreev, A.A. Analogs of classical boundary value problems for a second-order differential equation with deviating argument. Differ. Equ. 2004, 40, 1192–1194. [Google Scholar] [CrossRef]
  26. Ashyralyev, A.; Sarsenbi, A.M. Well-posedness of a parabolic equation with involution. Numer. Funct. Anal. Optim. 2017, 38, 1295–1304. [Google Scholar] [CrossRef]
  27. Ashyralyev, A.; Sarsenbi, A.M. Well-posedness of an elliptic equation with involution. Electron. J. Differ. Equ. 2015, 2015, 1–8. [Google Scholar]
  28. Yarka, U.; Fedushko, S.; Vesely, P. The Dirichlet Problem for the Perturbed Elliptic Equation. Mathematics 2020, 8, 2108. [Google Scholar] [CrossRef]
  29. Rossovskii, L.E.; Tovsultanov, A.A. On the dirichlet problem for an elliptic functional differential equation with affine transformations of the argument. Dokl. Math. 2019, 100, 551–553. [Google Scholar] [CrossRef]
  30. Rossovskii, L.E.; Tovsultanov, A.A. Elliptic functional differential equation with affine transformations. J. Math. Anal. Appl. 2019, 480, 1–9. [Google Scholar] [CrossRef]
  31. Skubachevskii, A.L. Boundary-value problems for elliptic functional-differential equations and their applications. Russ. Math. Surv. 2016, 71, 801–906. [Google Scholar] [CrossRef] [Green Version]
  32. Wang, Y.; Meng, F. New Oscillation Results for Second-Order Neutral Differential Equations with Deviating Arguments. Symmetry 2020, 12, 1937. [Google Scholar] [CrossRef]
  33. Althubiti, S.; Bazighifan, O.; Alotaibi, H.; Awrejcewicz, J. New Oscillation Criteria for Neutral Delay Differential Equations of Fourth-Order. Symmetry 2021, 13, 1277. [Google Scholar] [CrossRef]
  34. Bazighifan, O.; Alotaibi, H.; Mousa, A.A.A. Neutral Delay Differential Equations: Oscillation Conditions for the Solutions. Symmetry 2021, 13, 101. [Google Scholar] [CrossRef]
  35. Linkov, A. Substantiation of the Fourier method for boundary value problems with an involutive deviation. Vestn. Samar. Univ.-Estestv.-Nauchnaya Seriya 1999, 2, 60–66. (In Russian) [Google Scholar]
  36. Karachik, V.V.; Sarsenbi, A.M.; Turmetov, B.K. On the solvability of the main boundary value problems for a nonlocal Poisson equation. Turk. J. Math. 2019, 43, 1604–1625. [Google Scholar] [CrossRef]
  37. Karachik, V.; Turmetov, B. On solvability of some nonlocal boundary value problems for biharmonic equation. Math. Slovaca 2020, 70, 329–342. [Google Scholar] [CrossRef]
  38. Karachik, V.V. Normalized system of functions with respect to the laplace operator and its applications. J. Math. Anal. Appl. 2003, 287, 577–592. [Google Scholar] [CrossRef] [Green Version]
  39. Karachik, V.V.; Antropova, N.A. On the solution of the inhomogeneous polyharmonic equation and the inhomogeneous Helmholtz equation. Differ. Equ. 2010, 46, 387–399. [Google Scholar] [CrossRef]
  40. Karachik, V.V. On some special polynomials. Proc. Am. Math. Soc. 2004, 132, 1049–1058. [Google Scholar] [CrossRef]
  41. Vladimirov, V.S. Equations of Mathematical Physics; Nauka: Moscow, Russia, 1981. (In Russian) [Google Scholar]