Article

Construction of Eigenfunctions to One Nonlocal Second-Order Differential Operator with Double Involution

by Batirkhan Turmetov 1,† and Valery Karachik 2,*,†
1 Department of Mathematics, Khoja Akhmet Yassawi International Kazakh-Turkish University, Turkistan 161200, Kazakhstan
2 Department of Mathematical Analysis, South Ural State University (NRU), 454080 Chelyabinsk, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2022, 11(10), 543; https://doi.org/10.3390/axioms11100543
Submission received: 16 September 2022 / Revised: 3 October 2022 / Accepted: 7 October 2022 / Published: 11 October 2022

Abstract: In this paper, we study the eigenfunctions of one nonlocal second-order differential operator with double involution. We give an explicit form of the eigenfunctions of the boundary value problem in the unit ball with Dirichlet conditions on the boundary. For the problem under consideration, the completeness of the system of eigenfunctions is established.

1. Introduction and the Problem Statement

Among the nonlocal differential equations, which are the subject of many works, a special place is occupied by equations with involutive deviations of the argument. An involution is a mapping $S:\mathbb{R}^n\to\mathbb{R}^n$ such that $S^2x=x$. Among the variety of research papers in this direction, we note the monographs [1,2,3]. Refs. [4,5,6,7,8,9,10,11,12,13,14,15] are devoted to the questions of solvability of boundary and initial-boundary value problems for differential equations with involution. Spectral questions for differential equations with involution are studied in [16,17,18,19,20,21,22,23,24,25]. For example, in [22], the following boundary value problem
$$y''(t) + a\,y''(-t) = \lambda y(t),\quad -\pi < t < \pi, \qquad y(-\pi) = y(\pi) = 0$$
is studied. The eigenfunctions and eigenvalues of this problem are given explicitly. The system of eigenfunctions is complete in $L_2[-\pi,\pi]$.
In [10], the following nonlocal analog of the Laplace operator is introduced:
$$L_l[u](x) \equiv a_0\Delta u(x) + a_1\Delta u(Sx) + \dots + a_{l-1}\Delta u(S^{l-1}x),$$
where $\Delta$ is the Laplace operator, $a_i$ for $i=0,1,\dots,l-1$ are real numbers, and $S$ is an $n\times n$ orthogonal matrix for which there exists a number $l\in\mathbb{N}$ such that $S^l=I$, where $I$ is the identity matrix. In the paper [10] cited above, for the corresponding nonlocal Poisson equation $L_l[u]=f(x)$ in the unit ball $\Omega$, the solvability of some boundary value problems with different boundary conditions is studied. The corresponding spectral problem with the Dirichlet boundary condition is studied in [14]. In that work, as in the case of the one-dimensional problem from [22], the eigenfunctions and eigenvalues of the considered problem are obtained explicitly, and a theorem on the completeness of the system of eigenfunctions in the space $L_2(\Omega)$ is proved.
Furthermore, in [24], a nonlocal Laplace operator with multiple involution of the following form is introduced:
$$\mathcal{L}_n[u](x) \equiv \sum_{(i_n\ldots i_1)_2=0}^{2^n-1} a_{(i_n\ldots i_1)_2}\,\Delta u\bigl(S_n^{i_n}\cdots S_1^{i_1}x\bigr),$$
where the $a_{(i_n\ldots i_1)_2}$ are real numbers, $(i_n\ldots i_1)_2$ is the representation of the index $i$ in the binary number system, and the $S_i$ are orthogonal $n\times n$ matrices satisfying the condition $S_i^2=I$, $i=1,\dots,n$. In this paper [24], the explicit form of the eigenfunctions and eigenvalues of the corresponding Dirichlet problem
$$\mathcal{L}_n[u](x) + \lambda u(x) = 0,\ x\in\Omega; \qquad u(x)=0,\ x\in\partial\Omega$$
is given, and the completeness of the system of eigenfunctions in the space $L_2(\Omega)$ is proved.
In [26], one boundary value problem for the biharmonic equation is studied. This problem contains modified Hadamard integrodifferential operators in the boundary conditions.
In the present paper, continuing the above studies of the solvability of boundary value problems for harmonic and biharmonic equations with both ordinary and multiple involution, we investigate similar issues for the Laplace operator with a double involution of arbitrary orders. Matrices of the special form arising in the considered problem are investigated in Theorems 1–3 of Section 2. Then, in Section 3 (see Theorems 4 and 5), with the help of Lemma 1, the existence of eigenfunctions and eigenvalues of the problem under consideration is investigated. In Section 4 (see Theorems 6 and 7), with the help of Lemma 2, the eigenfunctions and eigenvalues of the considered nonlocal differential equation are constructed. These eigenfunctions are presented explicitly. The completeness of the resulting system of eigenfunctions in $L_2(\Omega)$ is established. All new concepts and results obtained are illustrated by seven examples.
Let $\Omega = \{x\in\mathbb{R}^n : |x|<1\}$ be the unit ball in $\mathbb{R}^n$, $n\ge 2$, and let $\partial\Omega = \{x\in\mathbb{R}^n : |x|=1\}$ be the unit sphere. Let also $S_1,S_2$ be two real commuting orthogonal $n\times n$ matrices such that $S_i^{l_i}=I$, $l_i\in\mathbb{N}$, $i=1,2$. Note that, since $|x|^2=(S_i^TS_ix,x)=(S_ix,S_ix)=|S_ix|^2$, we have $x\in\Omega \Leftrightarrow S_ix\in\Omega$ and $y\in\partial\Omega \Leftrightarrow S_iy\in\partial\Omega$. For example, the matrix $S_i$ can be an orthogonal matrix of the following form:
$$S = \begin{pmatrix} I_k & 0 & 0 & 0\\ 0 & \cos\alpha & -\sin\alpha & 0\\ 0 & \sin\alpha & \cos\alpha & 0\\ 0 & 0 & 0 & I_{n-k-2} \end{pmatrix},$$
where $\alpha = \frac{2\pi}{l}$, $0\le k\le n-2$, and the $0$'s are zero matrices of appropriate size. It is clear that $S^l=I$.
Let $l_1,l_2\in\mathbb{N}$ and let $a_0,\dots,a_{l_1-1},a_{l_1},\dots,a_{2l_1-1},\dots,a_{(l_2-1)l_1},\dots,a_{l_2l_1-1}$ be a sequence of real numbers, which we denote by $a$. If we represent the index $i$ in the form $i=(i_2,i_1)\equiv i_2\cdot l_1+i_1$, where $i_k=0,1,\dots,l_k-1$ for $k=1,2$, then the elements of $a$ can be represented as $a_{(0,0)},\dots,a_{(0,l_1-1)},a_{(1,0)},\dots,a_{(1,l_1-1)},\dots,a_{(l_2-2,l_1-1)},\dots,a_{(l_2-1,l_1-1)}$. It is clear that, if $0\le i< l_1l_2$, then $i_1=l_1\{i/l_1\}$ and $i_2=[i/l_1]$, where $[\cdot]$ and $\{\cdot\}$ denote the integer and fractional parts of a number. Furthermore, we also consider the sequence $a$ as a vector.
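The index pairing above is ordinary mixed-radix packing. As a quick illustration (a Python sketch of ours, not from the paper; the names `pack`/`unpack` and the orders $l_1=2$, $l_2=3$ are our own choices):

```python
# Sketch of the index pairing i = (i2, i1) = i2*l1 + i1 and its inverse.
l1, l2 = 2, 3

def pack(i2, i1):
    """Map the pair (i2, i1) to the flat index i = i2*l1 + i1."""
    return i2 * l1 + i1

def unpack(i):
    """Recover (i2, i1): i2 is the integer part of i/l1, i1 the remainder."""
    return i // l1, i % l1

# The pairs, in the order (0,0), (0,1), (1,0), ..., enumerate 0..l1*l2-1.
assert [unpack(i) for i in range(l1 * l2)] == \
       [(i2, i1) for i2 in range(l2) for i1 in range(l1)]
assert all(pack(*unpack(i)) == i for i in range(l1 * l2))
```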
We introduce a new nonlocal differential operator formed by the sequence $a$ and the Laplace operator $\Delta$:
$$L_a u \equiv \sum_{(i_2,i_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(i_2,i_1)}\,\Delta u(S_2^{i_2}S_1^{i_1}x)$$
and formulate a natural boundary value problem for $L_a$.
Problem $S_2$. Find a non-zero function $u(x)$ such that $u\in C(\bar\Omega)\cap C^2(\Omega)$, which satisfies the equations
$$L_a u(x) + \lambda u(x) = 0,\quad x\in\Omega, \qquad (1)$$
$$u(x) = 0,\quad x\in\partial\Omega, \qquad (2)$$
where $\lambda\in\mathbb{R}$.
In the special case $l_1=l_2=2$, this problem coincides with the spectral boundary value problem studied in [24].

2. Auxiliary Results

In order to start studying the above problem (1) and (2), we need some auxiliary assertions. We introduce the function
$$v(x) = \sum_{(i_2,i_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(i_2,i_1)}\,u(S_2^{i_2}S_1^{i_1}x), \qquad (3)$$
where the summation is carried out over the index $i=(i_2,i_1)\equiv i_2\cdot l_1+i_1$ in the order $(0,0),\dots,(0,l_1-1),(1,0),\dots,(1,l_1-1),\dots,(l_2-2,l_1-1),\dots,(l_2-1,l_1-1)$. From equality (3), taking into account that $S_2^{l_2}=S_1^{l_1}=I$, it can be concluded that the functions $v(S_2^{j_2}S_1^{j_1}x)$, $j=0,\dots,l_2l_1-1$, are expressed as linear combinations of the functions $u(S_2^{i_2}S_1^{i_1}x)$. Let us introduce the following vectors of order $l_2l_1$:
$$U(x) = \bigl(u(S_2^{i_2}S_1^{i_1}x)\bigr)^T_{i=0,\dots,l_2l_1-1}, \qquad V(x) = \bigl(v(S_2^{i_2}S_1^{i_1}x)\bigr)^T_{i=0,\dots,l_2l_1-1}.$$
Then, the dependence of $V(x)$ on $U(x)$ can be presented in the matrix form
$$V(x) = A^{(2)}U(x), \qquad (4)$$
where $A^{(2)} = (a_{i,j})_{i,j=0,\dots,l_2l_1-1}$ is some matrix of order $l_2l_1\times l_2l_1$.
Let us investigate the structure of matrices of the form $A^{(2)}$. For this, we introduce a new operation on the indices of the matrix coefficients as follows: $i\oplus j = (i_2,i_1)\oplus(j_2,j_1) \equiv (i_2+j_2 \bmod l_2,\; i_1+j_1 \bmod l_1)$, where $(i_2,i_1)$ is the representation of the index $i$ mentioned above. It is clear that $\oplus$ is a commutative and associative operation on $i\in\{0,\dots,l_2l_1-1\}$ and $(i_2,i_1)\oplus(0,0)=(i_2,i_1)$. Since $(i_2,i_1)\oplus(j_2,j_1)=(0,0) \Leftrightarrow i_2+j_2=0 \bmod l_2,\ i_1+j_1=0\bmod l_1 \Leftrightarrow j_2=l_2-i_2,\ j_1=l_1-i_1$, we can write $\ominus i = (l_2-i_2,\,l_1-i_1)$. For example, if $l_1=2$, $l_2=3$, then $\ominus(2,1)=(1,1)$ or $\ominus 5=3$. If we assume that $\ominus(i_2,i_1)\equiv\ominus i=(l_2-i_2,\,l_1-i_1)$, then we have
$$\ominus(i_2,i_1)\oplus(j_2,j_1) = (l_2-i_2,\,l_1-i_1)\oplus(j_2,j_1) = (l_2-i_2+j_2\bmod l_2,\; l_1-i_1+j_1\bmod l_1) = (-i_2+j_2\bmod l_2,\; -i_1+j_1\bmod l_1),$$
i.e., the operation $\oplus$ is formally applicable to numbers of the form $\ominus(i_2,i_1)$. We set
$$i\ominus j \equiv i\oplus(\ominus j) = (i_2-j_2\bmod l_2,\; i_1-j_1\bmod l_1).$$
We extend the operations $\oplus$ and $\ominus$ to all pairs of integers $(i_2,i_1)$ by setting $(i_2,i_1)\equiv(i_2\bmod l_2,\; i_1\bmod l_1)$. For example, if $l_1=2$, $l_2=3$, then $(-1,-1)=(2,1)$ and $(5,3)=(2,1)$.
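The operations $\oplus$ and $\ominus$ are componentwise modular arithmetic on index pairs. A short Python sketch (our own illustration, with $l_1=2$, $l_2=3$ as in the example; the function names are hypothetical) checks the worked example $\ominus(2,1)=(1,1)$:

```python
# Componentwise index operations mod (l2, l1).
l1, l2 = 2, 3

def oplus(i, j):
    """(i2, i1) ⊕ (j2, j1): add componentwise mod l2 and mod l1."""
    (i2, i1), (j2, j1) = i, j
    return ((i2 + j2) % l2, (i1 + j1) % l1)

def ominus_unary(i):
    """⊖(i2, i1) = (l2 - i2, l1 - i1), the ⊕-inverse of i."""
    i2, i1 = i
    return ((l2 - i2) % l2, (l1 - i1) % l1)

# ⊖(2,1) = (1,1), i.e. ⊖5 = 3 in the flat numbering i = i2*l1 + i1.
assert ominus_unary((2, 1)) == (1, 1)
# ⊖i is indeed the inverse element with respect to ⊕.
assert oplus((2, 1), ominus_unary((2, 1))) == (0, 0)
```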
Theorem 1.
The matrix $A^{(2)}$ defined by the equality (4) is represented as
$$A^{(2)} \equiv (a_{i,j})_{i,j=0,\dots,l_2l_1-1} = (a_{j\ominus i})_{i,j=0,\dots,l_2l_1-1}. \qquad (5)$$
The sum of matrices of the form (5) is a matrix of the same form.
Proof. 
Consider the function $v(S_2^{i_2}S_1^{i_1}x)$ whose coefficients at $u(S_2^{j_2}S_1^{j_1}x)$ make up the $i=(i_2,i_1)$th row of the matrix $A^{(2)}$:
$$v(S_2^{i_2}S_1^{i_1}x) = \sum_{(j_2,j_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(j_2,j_1)}\,u(S_2^{j_2}S_1^{j_1}S_2^{i_2}S_1^{i_1}x) = \sum_{(j_2,j_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(j_2,j_1)}\,u\bigl(S_2^{j_2+i_2\bmod l_2}S_1^{j_1+i_1\bmod l_1}x\bigr). \qquad (6)$$
Here, the properties $S_2^{l_2}x=x$, $S_1^{l_1}x=x$ and $S_1S_2x=S_2S_1x$ of the matrices $S_1$ and $S_2$ have been used. If we replace the index $j$ by the index $k$ using the equality $k=i\oplus j$, then $k\ominus i=i\oplus j\ominus i=j$, and we have $j\to k$. The substitution $j\to k$ only changes the order of summation in (6). For instance, if $l_1=2$, $l_2=3$ and $i=(0,1)$, then $j\colon 0,1,2,3,4,5$ goes to $k=1\oplus j\colon 1,0,3,2,5,4$. After changing the index, we have
$$v(S_2^{i_2}S_1^{i_1}x) = \sum_{(k_2,k_1)=(0,0)}^{(l_2-1,l_1-1)} a_{k\ominus i}\,u(S_2^{k_2}S_1^{k_1}x).$$
Comparing the resulting equality with (4), we see that (5) is true: $a_{i,k}=a_{k\ominus i}$.
There is no doubt that, if $\alpha,\beta\in\mathbb{R}$, then
$$\alpha\bigl(a_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1} + \beta\bigl(b_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1} = \bigl(\alpha a_{j\ominus i}+\beta b_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1},$$
which completes the proof of the theorem. □
Example 1.
For example, let us write the matrix $A^{(2)}$ for $l_2=3$ and $l_1=2$:
$$A^{(2)} = \bigl(a_{(j_2,j_1)\ominus(i_2,i_1)}\bigr)_{(i_2,i_1),(j_2,j_1)=(0,0),\dots,(2,1)} = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5\\ a_1 & a_0 & a_3 & a_2 & a_5 & a_4\\ a_4 & a_5 & a_0 & a_1 & a_2 & a_3\\ a_5 & a_4 & a_1 & a_0 & a_3 & a_2\\ a_2 & a_3 & a_4 & a_5 & a_0 & a_1\\ a_3 & a_2 & a_5 & a_4 & a_1 & a_0 \end{pmatrix}.$$
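The matrix of Example 1 can be checked mechanically. The following Python sketch (an illustration of ours, not from the paper) builds $(a_{j\ominus i})$ for $l_2=3$, $l_1=2$, storing the flat subscript of each entry, and compares a few rows with the matrix above:

```python
# Build A2 = (a_{j⊖i}) for l2 = 3, l1 = 2; entries hold the flat index 0..5
# of the corresponding coefficient a_k, so rows can be read off directly.
l1, l2 = 2, 3
N = l1 * l2
unpack = lambda i: (i // l1, i % l1)     # i -> (i2, i1)
pack = lambda i2, i1: i2 * l1 + i1       # (i2, i1) -> i

def j_ominus_i(j, i):
    """Flat index of j ⊖ i, computed componentwise mod (l2, l1)."""
    (j2, j1), (i2, i1) = unpack(j), unpack(i)
    return pack((j2 - i2) % l2, (j1 - i1) % l1)

A2 = [[j_ominus_i(j, i) for j in range(N)] for i in range(N)]
assert A2[0] == [0, 1, 2, 3, 4, 5]       # row (0,0): a0 a1 a2 a3 a4 a5
assert A2[1] == [1, 0, 3, 2, 5, 4]       # row (0,1): a1 a0 a3 a2 a5 a4
assert A2[2] == [4, 5, 0, 1, 2, 3]       # row (1,0): a4 a5 a0 a1 a2 a3
```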
Let us present some corollaries of Theorem 1.
Corollary 1.
The matrix $A^{(2)}$ is uniquely determined by its first row $a=(a_0,a_1,\dots,a_{l_2l_1-1})$.
It is not difficult to see that the $i$th row of the matrix $A^{(2)}$ is represented via its first row as $(a_{0\ominus i},a_{1\ominus i},\dots,a_{(l_2l_1-1)\ominus i})$. We indicate this property of the matrix $A^{(2)}$ by the equality $A^{(2)}\equiv A^{(2)}(a)$. For example, the matrix $A^{(2)}$ from Example 1 can be written as $A^{(2)}(a)$, where $a=(a_0,a_1,a_2,a_3,a_4,a_5)$.
In Ref. [10], matrices of the following form are studied:
$$A^{(1)}(a_0,\dots,a_{l-1}) = \bigl(a_{j-i\bmod l}\bigr)_{i,j=0,\dots,l-1} = \begin{pmatrix} a_0 & a_1 & \cdots & a_{l-1}\\ a_{l-1} & a_0 & \cdots & a_{l-2}\\ \vdots & \vdots & \ddots & \vdots\\ a_1 & a_2 & \cdots & a_0 \end{pmatrix}, \qquad (7)$$
which coincide with the matrix $A^{(2)}$ in the case $l_2=1$, $l_1=l$ since, in this case, $(i_2,i_1)=(0,i_1)=i_1$ and $i_1=0,\dots,l-1$.
Corollary 2.
The matrix $A^{(2)}$ has the structure of a matrix consisting of $l_2\times l_2$ square blocks, each of which is an $l_1\times l_1$ matrix of type $A^{(1)}$. If we represent the sequence $a$ as $a=(a^0,\dots,a^{l_2-1})$, where $a^{j_2}=(a_{j_2l_1},\dots,a_{(j_2+1)l_1-1})$, and denote $A^{(1)}(j_2)=A^{(1)}(a^{j_2})$, then the following equality is true:
$$A^{(2)}(a) = A^{(1)}\bigl(A^{(1)}(0),A^{(1)}(1),\dots,A^{(1)}(l_2-1)\bigr) \equiv \begin{pmatrix} A^{(1)}(0) & A^{(1)}(1) & \cdots & A^{(1)}(l_2-1)\\ A^{(1)}(l_2-1) & A^{(1)}(0) & \cdots & A^{(1)}(l_2-2)\\ \vdots & \vdots & \ddots & \vdots\\ A^{(1)}(1) & A^{(1)}(2) & \cdots & A^{(1)}(0) \end{pmatrix}. \qquad (8)$$
Proof. 
Obviously, the block matrix on the right side of (8) has size $l_2l_1\times l_2l_1$. Denote its arbitrary element by $a_{i,j}$, where $i,j=0,\dots,l_2l_1-1$. If we write $i=(i_2,i_1)$, $j=(j_2,j_1)$, then we have $a_{i,j}=a_{(i_2,i_1),(j_2,j_1)}$. This means that the element $a_{i,j}$ is in the $j_2$th block column and in the $i_2$th block row of this block matrix. Therefore, in accordance with the structure of the matrix $A^{(1)}$, we have $a_{i,j}\in A^{(1)}(j_2-i_2\bmod l_2)$. If we now take into account the values of the indices $j_1$ and $i_1$, which mean that the element $a_{i,j}$ is in the $j_1$th column and in the $i_1$th row of the matrix $A^{(1)}(j_2-i_2\bmod l_2)$, then $a_{i,j}=b_{j_1-i_1\bmod l_1}$, where $b_k$ is an element of the first row of the matrix $A^{(1)}(j_2-i_2\bmod l_2)$. Since from the definition of $A^{(1)}(j_2)$ it follows that $b_k=a_{j_2l_1+k}$, $k=0,\dots,l_1-1$, we have
$$a_{i,j} = b_{j_1-i_1\bmod l_1} = a_{(j_2-i_2\bmod l_2)l_1+(j_1-i_1\bmod l_1)} = a_{((j_2-i_2\bmod l_2),\,(j_1-i_1\bmod l_1))} = a_{j\ominus i}.$$
Taking into account equality (5) of Theorem 1, this implies the equality of matrices (8). This proves the corollary. □
Example 2.
We can see the property (8) of matrices of the form $A^{(2)}$ on the matrix $A^{(2)}(a)$ from Example 1, where $l_2=3$, $l_1=2$, $a=(a_0,a_1,a_2,a_3,a_4,a_5)$. If we denote $a^0=(a_0,a_1)^T$, $a^1=(a_2,a_3)^T$, $a^2=(a_4,a_5)^T$ and
$$A^{(1)}(a^0)=\begin{pmatrix}a_0 & a_1\\ a_1 & a_0\end{pmatrix}=A^{(1)}(0),\quad A^{(1)}(a^1)=\begin{pmatrix}a_2 & a_3\\ a_3 & a_2\end{pmatrix}=A^{(1)}(1),\quad A^{(1)}(a^2)=\begin{pmatrix}a_4 & a_5\\ a_5 & a_4\end{pmatrix}=A^{(1)}(2),$$
then $a=(a^0,a^1,a^2)^T$ and the matrix $A^{(2)}(a)$ is written as
$$A^{(2)}(a)=\begin{pmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5\\ a_1 & a_0 & a_3 & a_2 & a_5 & a_4\\ a_4 & a_5 & a_0 & a_1 & a_2 & a_3\\ a_5 & a_4 & a_1 & a_0 & a_3 & a_2\\ a_2 & a_3 & a_4 & a_5 & a_0 & a_1\\ a_3 & a_2 & a_5 & a_4 & a_1 & a_0 \end{pmatrix} = \begin{pmatrix} A^{(1)}(0) & A^{(1)}(1) & A^{(1)}(2)\\ A^{(1)}(2) & A^{(1)}(0) & A^{(1)}(1)\\ A^{(1)}(1) & A^{(1)}(2) & A^{(1)}(0) \end{pmatrix} = A^{(1)}\bigl(A^{(1)}(0),A^{(1)}(1),A^{(1)}(2)\bigr).$$
Corollary 3.
The transposed matrix $A^{(2)T}(a)$ has the structure of the matrix $A^{(2)}$, and moreover $A^{(2)T}(a)=A^{(2)}(b)$, where $b=\bigl(a_{\ominus(j_2,j_1)}\bigr)_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$, and both index components $j_2$ and $j_1$ are taken $\bmod\ l_2$ and $\bmod\ l_1$, respectively.
Proof. 
By Theorem 1, we write
$$A^{(2)T}(a) = \bigl(a_{j\ominus i}\bigr)^T_{i,j=0,\dots,l_2l_1-1} = \bigl(a_{i\ominus j}\bigr)_{i,j=0,\dots,l_2l_1-1} = \bigl(a_{\ominus(j\ominus i)}\bigr)_{i,j=0,\dots,l_2l_1-1} = \bigl(b_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1}.$$
This is why $b=(a_{\ominus j})_{j=0,\dots,l_2l_1-1} = \bigl(a_{\ominus(j_2,j_1)}\bigr)_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$. The corollary is proved. □
Example 3.
For the matrix $A^{(2)T}(a)$ from Example 2, we have
$$b = (a_{\ominus j})_{j=0,\dots,5} = (a_{\ominus 0},a_{\ominus 1},a_{\ominus 2},a_{\ominus 3},a_{\ominus 4},a_{\ominus 5}) = \bigl(a_0,a_{\ominus(0,1)},a_{\ominus(1,0)},a_{\ominus(1,1)},a_{\ominus(2,0)},a_{\ominus(2,1)}\bigr) = \bigl(a_0,a_{(0,1)},a_{(2,0)},a_{(2,1)},a_{(1,0)},a_{(1,1)}\bigr) = (a_0,a_1,a_4,a_5,a_2,a_3).$$
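Corollary 3 and Example 3 can also be checked numerically. This Python sketch (our own, with arbitrary test values; the helper names are hypothetical) confirms that the transpose of $A^{(2)}(a)$ equals $A^{(2)}(b)$ with $b=(a_0,a_1,a_4,a_5,a_2,a_3)$ for $l_2=3$, $l_1=2$:

```python
# Check A2(a)^T = A2(b) with b_j = a_{⊖j}, for l2 = 3, l1 = 2.
l1, l2 = 2, 3
N = l1 * l2
unpack = lambda i: (i // l1, i % l1)
pack = lambda i2, i1: i2 * l1 + i1

def build(a):
    """Matrix (a_{j⊖i}) determined by its first row a."""
    def idx(j, i):
        (j2, j1), (i2, i1) = unpack(j), unpack(i)
        return pack((j2 - i2) % l2, (j1 - i1) % l1)
    return [[a[idx(j, i)] for j in range(N)] for i in range(N)]

a = [10, 20, 30, 40, 50, 60]
# b_j = a_{⊖j}, with ⊖(j2, j1) = (l2 - j2, l1 - j1) componentwise.
b = [a[pack((l2 - j2) % l2, (l1 - j1) % l1)]
     for j2 in range(l2) for j1 in range(l1)]
assert b == [10, 20, 50, 60, 30, 40]          # (a0, a1, a4, a5, a2, a3)
A = build(a)
At = [[A[j][i] for j in range(N)] for i in range(N)]   # transpose
assert At == build(b)
```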
Is the multiplication of matrices of the form (5) again such a matrix?
Theorem 2.
The product of two matrices of the form (5) is again a matrix of the same form, and the multiplication is commutative.
Proof. 
Let $A^{(2)}(a)$ and $B^{(2)}(b)$ be two matrices of the form (5). Then,
$$A^{(2)}(a)B^{(2)}(b) = \bigl(a_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1}\bigl(b_{j\ominus i}\bigr)_{i,j=0,\dots,l_2l_1-1} = \Bigl(\sum_{k=0}^{l_2l_1-1} a_{k\ominus i}\,b_{j\ominus k}\Bigr)_{i,j=0,\dots,l_2l_1-1}.$$
As in the proof of Theorem 1, let us change the index $k\to s$ in the sum above in accordance with the equation $k\ominus i=s$. Then, $k=k\ominus i\oplus i=s\oplus i$, and therefore the correspondence $k\to s$ is one-to-one. Thus, the index replacement $k\to s$ changes only the summation order. Due to the commutativity and associativity of $\oplus$, we obtain
$$A^{(2)}(a)B^{(2)}(b) = \Bigl(\sum_{s=0}^{l_2l_1-1} a_s\,b_{j\ominus(s\oplus i)}\Bigr)_{i,j=0,\dots,l_2l_1-1} = \Bigl(\sum_{s=0}^{l_2l_1-1} a_s\,b_{(j\ominus i)\ominus s}\Bigr)_{i,j=0,\dots,l_2l_1-1}. \qquad (9)$$
The elements of the first row of the resulting matrix have the form $c_j=\sum_{s=0}^{l_2l_1-1} a_s b_{j\ominus s}$, and hence $A^{(2)}(a)B^{(2)}(b)=(c_{j\ominus i})_{i,j=0,\dots,l_2l_1-1}$. Therefore, the matrix $A^{(2)}(a)B^{(2)}(b)$ has the form (5).
The commutativity of the product $A^{(2)}B^{(2)}$ can be easily obtained from the equality (9). Replacing $(j\ominus i)\ominus s\to k$ in the last sum from (9), and hence $s=(j\ominus i)\ominus k$, we obtain
$$\sum_{s=0}^{l_2l_1-1} a_s\,b_{(j\ominus i)\ominus s} = \sum_{k=0}^{l_2l_1-1} a_{(j\ominus i)\ominus k}\,b_k = \sum_{k=0}^{l_2l_1-1} b_k\,a_{(j\ominus i)\ominus k}.$$
The last sum in this equality is the general element $(B^{(2)}A^{(2)})_{i,j}$. Hence, (9) means that $A^{(2)}B^{(2)}=B^{(2)}A^{(2)}$. The theorem is proved. □
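Theorem 2 lends itself to a quick numerical sanity check. The following Python sketch (ours, with arbitrary test data and hypothetical helper names) verifies that the product of two matrices of the form (5) commutes and again has the form (5):

```python
# Numerical check of Theorem 2 for l2 = 3, l1 = 2.
l1, l2 = 2, 3
N = l1 * l2
unpack = lambda i: (i // l1, i % l1)
pack = lambda i2, i1: i2 * l1 + i1

def idx(j, i):
    """Flat index of j ⊖ i."""
    (j2, j1), (i2, i1) = unpack(j), unpack(i)
    return pack((j2 - i2) % l2, (j1 - i1) % l1)

def build(a):
    """Matrix (a_{j⊖i}) of the form (5) with first row a."""
    return [[a[idx(j, i)] for j in range(N)] for i in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

A = build([1, 2, 3, 4, 5, 6])
B = build([7, 0, -1, 2, 5, 3])
AB, BA = matmul(A, B), matmul(B, A)
assert AB == BA            # the multiplication is commutative
assert AB == build(AB[0])  # the product again has the form (5)
```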
Let $\lambda_k = \exp(i2\pi k/l)$, $k=0,\dots,l-1$, be the $l$th roots of unity. In [10], it is shown that, for a matrix of the form
$$A^{(1)}(a) = \begin{pmatrix} a_0 & a_1 & \cdots & a_{l-1}\\ a_{l-1} & a_0 & \cdots & a_{l-2}\\ \vdots & \vdots & \ddots & \vdots\\ a_1 & a_2 & \cdots & a_0 \end{pmatrix},$$
where $a=(a_0,\dots,a_{l-1})$, the eigenvectors and eigenvalues have the form
$$b_k = (1,\lambda_k,\dots,\lambda_k^{l-1})^T, \qquad \mu_k = \sum_{s=0}^{l-1} a_s\lambda_k^s = a\cdot b_k. \qquad (10)$$
Note that the eigenvectors of the matrices $A^{(1)}(a)$ do not depend on the vector $a$. Is this true for matrices of the type $A^{(2)}(a)$? We will see below.
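The eigenpairs (10) of a circulant $A^{(1)}(a)$ are easy to verify numerically; here is a Python sketch of ours (an arbitrary vector $a$ and $l=4$, using only the standard library):

```python
# Verify (10): b_k = (1, λ_k, ..., λ_k^{l-1}) is an eigenvector of the
# circulant A(1)(a) with eigenvalue μ_k = Σ a_s λ_k^s, λ_k an l-th root of 1.
import cmath

l = 4
a = [1.0, 2.0, -1.0, 3.0]
# Row i of A(1)(a) is (a_{j-i mod l})_j.
A1 = [[a[(j - i) % l] for j in range(l)] for i in range(l)]

for k in range(l):
    lam = cmath.exp(2j * cmath.pi * k / l)        # λ_k
    b = [lam ** s for s in range(l)]              # b_k
    mu = sum(a[s] * lam ** s for s in range(l))   # μ_k = a · b_k
    Ab = [sum(A1[i][j] * b[j] for j in range(l)) for i in range(l)]
    # A(1) b_k = μ_k b_k, up to floating-point error
    assert all(abs(Ab[i] - mu * b[i]) < 1e-9 for i in range(l))
```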
We present a theorem that clarifies questions about the eigenvectors and eigenvalues of the matrices $A^{(2)}$ from (5).
Theorem 3.
The eigenvectors of the matrix $A^{(2)}(a)$ can be written as
$$a^{(i_2,i_1)} = \bigl(I_{l_1},\,\lambda_{i_2}I_{l_1},\,\dots,\,\lambda_{i_2}^{l_2-1}I_{l_1}\bigr)^T b_{i_1}, \qquad (11)$$
where $I_{l_1}$ is the identity $l_1\times l_1$ matrix, the vector $b_{i_1}$ is taken from (10) with $l=l_1$, and $\lambda_{i_2}$ is an $l_2$th root of unity, $i_1=0,\dots,l_1-1$, $i_2=0,\dots,l_2-1$. The eigenvectors of the matrix $A^{(2)}(a)$ do not depend on the vector $a$.
Proof. 
In accordance with Theorem 1, represent the matrix $A^{(2)}$ as
$$A^{(2)}(a_0,\dots,a_{l_2l_1-1}) = \begin{pmatrix} A^{(1)}(0) & A^{(1)}(1) & \cdots & A^{(1)}(l_2-1)\\ A^{(1)}(l_2-1) & A^{(1)}(0) & \cdots & A^{(1)}(l_2-2)\\ \vdots & \vdots & \ddots & \vdots\\ A^{(1)}(1) & A^{(1)}(2) & \cdots & A^{(1)}(0) \end{pmatrix},$$
where $A^{(1)}(j_2)=A^{(1)}(a_{j_2l_1},\dots,a_{(j_2+1)l_1-1})$, $j_2=0,\dots,l_2-1$. Consider the block multiplication of this $l_2l_1\times l_2l_1$ matrix by an $l_2l_1\times l_1$ matrix of the form
$$\begin{pmatrix} A^{(1)}(0) & A^{(1)}(1) & \cdots & A^{(1)}(l_2-1)\\ A^{(1)}(l_2-1) & A^{(1)}(0) & \cdots & A^{(1)}(l_2-2)\\ \vdots & \vdots & \ddots & \vdots\\ A^{(1)}(1) & A^{(1)}(2) & \cdots & A^{(1)}(0) \end{pmatrix} \cdot \begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1} \end{pmatrix} = \begin{pmatrix} B_0\\ B_1\\ \vdots\\ B_{l_2-1} \end{pmatrix},$$
where the square blocks $B_m$ have size $l_1\times l_1$. Let us extend the values of the upper indices of the matrices $A^{(1)}(i_2)$ to $\mathbb{Z}$, calculating them $\bmod\ l_2$. Then, similarly to (7), we represent the $m$th block row of the matrix $A^{(2)}$ as $A^{(1)}(-m\bmod l_2),\,A^{(1)}(1-m\bmod l_2),\,\dots,\,A^{(1)}(l_2-1-m\bmod l_2)$. Since the exponent of $\lambda_{i_2}^k$ can also be calculated $\bmod\ l_2$, we write
$$B_m = \sum_{k=0}^{l_2-1} A^{(1)}(k-m\bmod l_2)\,I_{l_1}\lambda_{i_2}^k = \sum_{k=0}^{l_2-1} \lambda_{i_2}^k A^{(1)}(k-m\bmod l_2) = \sum_{s=0}^{l_2-1} \lambda_{i_2}^{s+m} A^{(1)}(s) = \lambda_{i_2}^m B_0.$$
Here, the substitution $s=k-m\bmod l_2$ of the index has been made. Thus, we have
$$A^{(2)}(a_0,\dots,a_{l_2l_1-1})\cdot\begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1}\end{pmatrix} = \begin{pmatrix} B_0\\ B_1\\ \vdots\\ B_{l_2-1}\end{pmatrix} = \begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1}\end{pmatrix} B_0.$$
It is easy to see that, in the obtained equality, the matrix $B_0 = B_0(\lambda_{i_2}) = \sum_{s=0}^{l_2-1}\lambda_{i_2}^s A^{(1)}(s)$ is of type $A^{(1)}$, and hence the vectors $b_{i_1}$, $i_1=0,\dots,l_1-1$, are eigenvectors of $B_0$. We multiply the above matrix equality on the right by the vector $b_{i_1}$. Then, we obtain
$$A^{(2)}\begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1}\end{pmatrix} b_{i_1} = \begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1}\end{pmatrix} B_0\,b_{i_1} = \lambda^{(i_2,i_1)}\begin{pmatrix} I_{l_1}\\ \lambda_{i_2}I_{l_1}\\ \vdots\\ \lambda_{i_2}^{l_2-1}I_{l_1}\end{pmatrix} b_{i_1},$$
where $\lambda^{(i_2,i_1)}$ is the eigenvalue of the matrix $B_0(\lambda_{i_2})$ corresponding to the eigenvector $b_{i_1}$. If we now recall the notation (11), then we obtain
$$A^{(2)}(a)\,a^{(i_2,i_1)} = \lambda^{(i_2,i_1)}\,a^{(i_2,i_1)},$$
i.e., $a^{(i_2,i_1)}$ is an eigenvector of $A^{(2)}(a)$. This completes the proof. □
Now, we present some corollaries of Theorem 3 that make it possible to construct the eigenvectors and eigenvalues of the matrix $A^{(2)}(a)$.
Corollary 4.
$1^0.$ The eigenvector of $A^{(2)}(a)$ numbered by $(i_2,i_1)$ can be written as
$$a^{(i_2,i_1)} = \bigl(\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}\bigr)^T_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}, \qquad (13)$$
where $\lambda_{i_2}$ is an $l_2$th root of unity and $\lambda_{i_1}$ is an $l_1$th root of unity. The eigenvalue corresponding to this eigenvector can be written in a similar form:
$$\lambda^{(i_2,i_1)} = \sum_{(j_2,j_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(j_2,j_1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1} = a\cdot a^{(i_2,i_1)}. \qquad (14)$$
$2^0.$ The eigenvectors of the matrix $A^{(2)T}(a)$ coincide with the eigenvectors $a^{(i_2,i_1)}$, and the corresponding eigenvalues have the form $\lambda_t^{(i_2,i_1)} = \bar\lambda^{(i_2,i_1)} = \lambda^{\ominus(i_2,i_1)}$.
Proof. 
It is easy to see that the equality (11) can be written as
$$a^{(i_2,i_1)} = \bigl(b_{i_1}^T,\,\lambda_{i_2}b_{i_1}^T,\,\dots,\,\lambda_{i_2}^{l_2-1}b_{i_1}^T\bigr)^T = \bigl(1,\lambda_{i_1},\dots,\lambda_{i_1}^{l_1-1},\ \lambda_{i_2},\lambda_{i_2}\lambda_{i_1},\dots,\lambda_{i_2}\lambda_{i_1}^{l_1-1},\ \dots,\ \lambda_{i_2}^{l_2-1},\lambda_{i_2}^{l_2-1}\lambda_{i_1},\dots,\lambda_{i_2}^{l_2-1}\lambda_{i_1}^{l_1-1}\bigr)^T = \bigl(\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}\bigr)^T_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)},$$
where the order of the vector elements corresponds to the order established for the numbers $(j_2,j_1)$. This proves equality (13).
Furthermore, from Theorem 3, it follows that the eigenvalue $\lambda^{(i_2,i_1)}$ of the matrix $A^{(2)}(a)$ corresponding to the eigenvector $a^{(i_2,i_1)}$ is the same as the eigenvalue of the matrix
$$B_0(\lambda_{i_2}) = \sum_{j_2=0}^{l_2-1}\lambda_{i_2}^{j_2}A^{(1)}(j_2)$$
corresponding to the eigenvector $b_{i_1}$. Here, $A^{(1)}(j_2)=A^{(1)}(a_{j_2l_1},\dots,a_{(j_2+1)l_1-1})$. Since the matrix $B_0(\lambda_{i_2})$ is of type $A^{(1)}$, then, in accordance with (10), we find the vector representing the first row of the matrix $B_0(\lambda_{i_2})$. Denote this vector by $b_0(\lambda_{i_2})$:
$$b_0(\lambda_{i_2}) = \Bigl(\sum_{j_2=0}^{l_2-1}\lambda_{i_2}^{j_2}a_{j_2l_1},\ \dots,\ \sum_{j_2=0}^{l_2-1}\lambda_{i_2}^{j_2}a_{j_2l_1+l_1-1}\Bigr).$$
Using the formula (10), we find
$$\lambda^{(i_2,i_1)} = \sum_{j_1=0}^{l_1-1}\lambda_{i_1}^{j_1}\sum_{j_2=0}^{l_2-1}a_{j_2l_1+j_1}\lambda_{i_2}^{j_2} = \sum_{(j_2,j_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(j_2,j_1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1},$$
which is the same as (14). Statement $1^0$ is proved.
By Corollary 3 and Theorem 3, the eigenvectors of $A^{(2)T}(a)$ coincide with the eigenvectors of $A^{(2)}(a)$. Let us find the eigenvalue corresponding to the vector $a^{(i_2,i_1)}$. According to Corollary 3, $A^{(2)T}(a)=A^{(2)}(b)$, where $b=\bigl(a_{\ominus(j_2,j_1)}\bigr)_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$. This is why
$$\lambda_t^{(i_2,i_1)} = \sum_{(j_2,j_1)=(0,0)}^{(l_2-1,l_1-1)} a_{\ominus(j_2,j_1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1} = \sum_{(k_2,k_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(k_2,k_1)}\lambda_{i_2}^{-k_2}\lambda_{i_1}^{-k_1} = \sum_{(k_2,k_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(k_2,k_1)}\bar\lambda_{i_2}^{k_2}\bar\lambda_{i_1}^{k_1} = \bar\lambda^{(i_2,i_1)}.$$
It can also be seen from the last equality that $\bar\lambda^{(i_2,i_1)} = \lambda^{\ominus(i_2,i_1)}$. Statement $2^0$ is proved, and hence the corollary is proved. □
Another property of the eigenvectors of the matrix $A^{(2)}(a)$ is given later in Corollary 7.
Remark 1.
The expression $\lambda_{i_2}\lambda_{i_1}$ is an ordered pair: the first factor $\lambda_{i_2}$ is an $l_2$th root of unity, and the second factor $\lambda_{i_1}$ is an $l_1$th root of unity. Therefore, in the general case, $\lambda_1\lambda_1 \ne \lambda_1^2$.
Remark 2.
In ([24], Corollary 3), the eigenvectors and eigenvalues of matrices, which for $n=2$ are a special case of the matrices $A^{(2)}$, are obtained. The eigenvectors were written as
$$a_2^{(i_2,i_1)} = \bigl((-1)^{j_2i_2+j_1i_1}\bigr)_{(j_2,j_1)=0,\dots,3},$$
which is the same as (13) for $l_1=l_2=2$. Indeed, in this case, $\lambda_{i_2}=(-1)^{i_2}$, $\lambda_{i_1}=(-1)^{i_1}$, $(l_2-1,l_1-1)=(1,1)=3$, and hence the general term of the eigenvector from (13) has the form
$$\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1} = (-1)^{i_2j_2+i_1j_1},$$
which coincides with the general term of the vector $a_2^{(i_2,i_1)}$. The eigenvalues obtained in (14) also coincide with those found in [24] for $n=2$.
Example 4.
For the matrix $A^{(2)}(a)$ from Example 2, we have $\lambda_{i_2}=\lambda^{i_2}$, $\lambda_{i_1}=(-1)^{i_1}$, where $\lambda=\exp\bigl(i\frac{2\pi}{3}\bigr)$. Therefore, according to the formula $a^{(i_2,i_1)}=\bigl(\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}\bigr)^T_{(j_2,j_1)=(0,0),\dots,(2,1)}$ from (13), we obtain
$$a^{(0,0)} = (1,1,1,1,1,1),\quad a^{(0,1)} = (1,-1,1,-1,1,-1),\quad a^{(1,0)} = (1,1,\lambda,\lambda,\bar\lambda,\bar\lambda),\quad a^{(1,1)} = (1,-1,\lambda,-\lambda,\bar\lambda,-\bar\lambda),\quad a^{(2,0)} = (1,1,\bar\lambda,\bar\lambda,\lambda,\lambda),\quad a^{(2,1)} = (1,-1,\bar\lambda,-\bar\lambda,\lambda,-\lambda),$$
and, using the formula $\lambda^{(i_2,i_1)}=a\cdot a^{(i_2,i_1)}$ from (14), we calculate
$$\lambda^{(0,0)} = a_0+a_1+a_2+a_3+a_4+a_5,\quad \lambda^{(0,1)} = a_0-a_1+a_2-a_3+a_4-a_5,\quad \lambda^{(1,0)} = a_0+a_1+\lambda(a_2+a_3)+\bar\lambda(a_4+a_5),\quad \lambda^{(1,1)} = a_0-a_1+\lambda(a_2-a_3)+\bar\lambda(a_4-a_5),\quad \lambda^{(2,0)} = a_0+a_1+\bar\lambda(a_2+a_3)+\lambda(a_4+a_5),\quad \lambda^{(2,1)} = a_0-a_1+\bar\lambda(a_2-a_3)+\lambda(a_4-a_5).$$
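The eigenpairs of Example 4 fit into the general formulas (13) and (14), which the following Python sketch (ours; the test vector $a$ is arbitrary) verifies for all six pairs $(i_2,i_1)$ with $l_2=3$, $l_1=2$:

```python
# Verify (13)-(14): a^{(i2,i1)} = (λ_{i2}^{j2} λ_{i1}^{j1}) is an eigenvector
# of A(2)(a) with eigenvalue λ^{(i2,i1)} = a · a^{(i2,i1)}.
import cmath

l1, l2 = 2, 3
N = l1 * l2
a = [1.0, -2.0, 0.5, 3.0, 4.0, -1.0]   # arbitrary real test coefficients
idx = lambda j2, j1: j2 * l1 + j1
# A2[(i2,i1)][(j2,j1)] = a_{j⊖i}
A2 = [[a[idx((j2 - i2) % l2, (j1 - i1) % l1)]
       for j2 in range(l2) for j1 in range(l1)]
      for i2 in range(l2) for i1 in range(l1)]

for i2 in range(l2):
    for i1 in range(l1):
        lam2 = cmath.exp(2j * cmath.pi * i2 / l2)   # l2-th root of unity
        lam1 = cmath.exp(2j * cmath.pi * i1 / l1)   # l1-th root of unity
        v = [lam2 ** j2 * lam1 ** j1 for j2 in range(l2) for j1 in range(l1)]
        ev = sum(a[j] * v[j] for j in range(N))     # eigenvalue a · v
        Av = [sum(A2[i][j] * v[j] for j in range(N)) for i in range(N)]
        assert all(abs(Av[i] - ev * v[i]) < 1e-9 for i in range(N))
```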

3. The Problem $S_2$

To consider Problem $S_2$, we need the following statement.
Lemma 1.
([10], Lemma 3.1) Let $S$ be an orthogonal matrix; then, the operator $I_S u(x) = u(Sx)$ and the Laplace operator $\Delta$ satisfy the equality $\Delta I_S u(x) = I_S\Delta u(x)$ for $u\in C^2(\Omega)$. The operator $\Lambda u = \sum_{i=1}^n x_i u_{x_i}(x)$ and the operator $I_S$ also satisfy the equality $\Lambda I_S u(x) = I_S\Lambda u(x)$ for $u\in C^1(\bar\Omega)$.
Corollary 5.
Equation (1) generates a matrix equation which is equivalent to it:
$$A^{(2)}(a)\,\Delta U(x) + \lambda U(x) = 0, \qquad (15)$$
where $U(x) = \bigl(u(S_2^{i_2}S_1^{i_1}x)\bigr)^T_{(i_2,i_1)=(0,0),\dots,(l_2-1,l_1-1)}$ and $\lambda\in\mathbb{R}$.
Proof. 
Let the function $u(x)$ be a solution to equation (1). Let us denote
$$v(x) = \sum_{(i_2,i_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(i_2,i_1)}\,u(S_2^{i_2}S_1^{i_1}x)$$
and $V(x) = \bigl(v(S_2^{j_2}S_1^{j_1}x)\bigr)^T_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$. The function $v(x)$ generates equality (4). Let us apply the Laplace operator $\Delta$ to (4). Since the matrices of the form $S_2^{i_2}S_1^{i_1}$ are orthogonal, by Lemma 1, we obtain
$$\Delta V(x) = \bigl(\Delta I_{S_2^{j_2}S_1^{j_1}} v(x)\bigr)^T_{(j_2,j_1)} = \bigl(I_{S_2^{j_2}S_1^{j_1}}\Delta v(x)\bigr)^T_{(j_2,j_1)} = \Bigl(I_{S_2^{j_2}S_1^{j_1}}\sum_{(i_2,i_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(i_2,i_1)}\,I_{S_2^{i_2}S_1^{i_1}}\Delta u(x)\Bigr)^T_{(j_2,j_1)} = \Bigl(\sum_{(i_2,i_1)=(0,0)}^{(l_2-1,l_1-1)} a_{(i_2,i_1)}\,I_{S_2^{j_2+i_2\bmod l_2}S_1^{j_1+i_1\bmod l_1}}\Delta u(x)\Bigr)^T_{(j_2,j_1)} = \Bigl(\sum_{(k_2,k_1)=(0,0)}^{(l_2-1,l_1-1)} a_{k\ominus j}\,I_{S_2^{k_2}S_1^{k_1}}\Delta u(x)\Bigr)^T_{(j_2,j_1)} = \Bigl(\sum_{(k_2,k_1)=(0,0)}^{(l_2-1,l_1-1)} a_{k\ominus j}\,\Delta u(S_2^{k_2}S_1^{k_1}x)\Bigr)^T_{(j_2,j_1)} = A^{(2)}(a)\,\Delta U(x),$$
where the subscript $(j_2,j_1)$ runs over $(0,0),\dots,(l_2-1,l_1-1)$. In these transformations, the replacement $j\oplus i=k$ of the summation index was used. Hence, using the equality $\Delta v(S_2^{k_2}S_1^{k_1}x) + \lambda u(S_2^{k_2}S_1^{k_1}x) = 0$ (see (1)), which implies that $\Delta V(x) + \lambda U(x) = 0$, we easily obtain (15). Finally, note that the first equation in (15) is the same as (1). This completes the proof. □
Using Lemma 1, we can now establish the existence of the eigenvalues of Problem $S_2$.
Theorem 4.
Assume that the non-zero function $u(x)$ is an eigenfunction of Problem $S_2$ and $\lambda$ is the eigenvalue corresponding to $u(x)$. Then, the function
$$w(x) = U(x)\cdot a^{(i_2,i_1)},$$
where $U(x) = \bigl(u(S_2^{j_2}S_1^{j_1}x)\bigr)^T_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$ and $a^{(i_2,i_1)}$ is an eigenvector of the matrix $A^{(2)}(a)$ such that $\lambda^{(i_2,i_1)}\ne 0$, is a solution to the boundary value problem
$$\Delta w(x) + \mu w(x) = 0,\quad x\in\Omega, \qquad (16)$$
$$w(x) = 0,\quad x\in\partial\Omega, \qquad (17)$$
where $\mu = \lambda/\bar\lambda^{(i_2,i_1)}$.
Proof. 
Let $u(x)$ be a non-zero eigenfunction of Problem $S_2$ with the corresponding eigenvalue $\lambda$. In accordance with Corollary 5, equality (15) holds. If we multiply this equality scalarly by the vector $a^{(i_2,i_1)}$, then we obtain
$$A^{(2)}(a)\,\Delta U(x)\cdot a^{(i_2,i_1)} + \lambda\,U(x)\cdot a^{(i_2,i_1)} = 0,$$
whence we find
$$\Delta U(x)\cdot A^{(2)T}(a)\,a^{(i_2,i_1)} + \lambda\,U(x)\cdot a^{(i_2,i_1)} = 0.$$
Since, due to Corollary 4, the vector $a^{(i_2,i_1)}$ is also an eigenvector of the matrix $A^{(2)T}$ and $\bar\lambda^{(i_2,i_1)}$ is its eigenvalue, we have
$$\bar\lambda^{(i_2,i_1)}\Delta w(x) + \lambda w(x) = 0,$$
and, since $\lambda = \bar\lambda^{(i_2,i_1)}\mu$, we obtain
$$0 = \bar\lambda^{(i_2,i_1)}\bigl(\Delta w(x) + \mu w(x)\bigr),$$
whence, because $\lambda^{(i_2,i_1)}\ne 0$, we obtain the equality (16):
$$\Delta w(x) + \mu w(x) = 0,\quad x\in\Omega.$$
Lastly, because $u(x)=0$ for $x\in\partial\Omega$ and $x\in\partial\Omega \Rightarrow S_2^{j_2}S_1^{j_1}x\in\partial\Omega$, we have $U(x)=0$ for $x\in\partial\Omega$. Therefore, we obtain $w(x) = U(x)\cdot a^{(i_2,i_1)} = 0$ for $x\in\partial\Omega$. This completes the proof. □
Let us prove the assertion converse to Theorem 4. It provides an opportunity to find solutions to the main Problem S 2 .
Theorem 5.
Assume that the non-zero function $w(x)$ is a solution of the boundary value problem (16) and (17) for some $\mu>0$:
$$\Delta w(x) + \mu w(x) = 0,\ x\in\Omega; \qquad w(x) = 0,\ x\in\partial\Omega.$$
Then, the function $u^{(i_2,i_1)}(x)$ determined from the equality
$$u^{(i_2,i_1)}(x) = W(x)\cdot a^{(i_2,i_1)}, \qquad (18)$$
where
$$W(x) = \bigl(w(S_2^{j_2}S_1^{j_1}x)\bigr)^T_{(j_2,j_1)=(0,0),\dots,(l_2-1,l_1-1)}$$
and the vector $a^{(i_2,i_1)}$ from (13) is an eigenvector of the matrix $A^{(2)}(a)$ with an eigenvalue $\lambda^{(i_2,i_1)}\ne 0$, is a solution to Problem $S_2$ for $\lambda = \mu\bar\lambda^{(i_2,i_1)}$.
Proof. 
Let w ( x ) 0 be a solution to the problem (16) and (17). Consider the vector W ( x ) = w ( S 2 j 2 S 1 j 1 x ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T and compose the function
u ( i 2 , i 1 ) ( x ) = W ( x ) · a ( i 2 , i 1 ) ,
where x Ω . It is not difficult to see that, according to Corollary 4, we have in Ω
u ( i 2 , i 1 ) ( S 2 j 2 S 1 j 1 x ) = W ( S 2 j 2 S 1 j 1 x ) · a ( i 2 , i 1 ) = w ( S 2 j 2 + k 2 S 1 j 1 + k 1 x ) ( k 2 , k 1 ) = 0 , , ( l 2 1 , l 1 1 ) T · λ i 2 k 2 λ i 1 k 1 ( k 2 , k 1 ) = 0 , , ( l 2 1 , l 1 1 ) T = ( k 2 , k 1 ) = 0 ( l 2 1 , l 1 1 ) w ( S 2 j 2 + k 2 S 1 j 1 + k 1 x ) λ i 2 k 2 λ i 1 k 1 = ( m 2 , m 1 ) = 0 ( l 2 1 , l 1 1 ) w ( S 2 m 2 S 1 m 1 x ) λ i 2 m 2 j 2 λ i 1 m 1 j 1 = ( m 2 , m 1 ) = 0 ( l 2 1 , l 1 1 ) w ( S 2 m 2 S 1 m 1 x ) λ i 2 m 2 λ i 1 m 1 λ i 2 j 2 λ i 1 j 1 = λ ¯ i 2 j 2 λ ¯ i 1 j 1 W ( x ) · a ( i 2 , i 1 ) .
Therefore, again by Corollary 4,
U ( i 2 , i 1 ) ( x ) = u ( i 2 , i 1 ) ( S 2 j 2 S 1 j 1 x ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T = W ( x ) · a ( i 2 , i 1 ) λ ¯ i 2 j 2 λ ¯ i 1 j 1 ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T = u ( i 2 , i 1 ) ( x ) a ¯ ( i 2 , i 1 ) .
Here, we used the substitution of indexes j k = m k = m j . Thus,
Δ U ( i 2 , i 1 ) ( x ) = Δ u ( i 2 , i 1 ) ( x ) a ¯ ( i 2 , i 1 )
and therefore because, by Lemma 1,
Δ W ( x ) = Δ w ( S 2 j 2 S 1 j 1 x ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T = μ w ( S 2 j 2 S 1 j 1 x ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) = μ W ( x ) ,
we obtain
A ( 2 ) ( a ) Δ U ( i 2 , i 1 ) ( x ) = Δ u ( i 2 , i 1 ) ( x ) A ( 2 ) ( a ) a ¯ ( i 2 , i 1 ) = Δ u ( i 2 , i 1 ) ( x ) A ( 2 ) ( a ) a ¯ ( i 2 , i 1 ) = ( Δ W ( x ) · a ( i 2 , i 1 ) ) λ ¯ ( i 2 , i 1 ) a ¯ ( i 2 , i 1 ) = μ ( W ( x ) · a ( i 2 , i 1 ) ) λ ¯ ( i 2 , i 1 ) a ¯ ( i 2 , i 1 ) = μ λ ¯ ( i 2 , i 1 ) u ( i 2 , i 1 ) ( x ) a ¯ ( i 2 , i 1 ) = μ λ ¯ ( i 2 , i 1 ) U ( i 2 , i 1 ) ( x ) .
Considering the first component of this equality, we obtain
$\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}a_{(j_2,j_1)}\,\Delta u^{(i_2,i_1)}\big(S_2^{j_2}S_1^{j_1}x\big)=-\mu\bar{\lambda}^{(i_2,i_1)}\,u^{(i_2,i_1)}(x),\quad x\in\Omega,$
which means that the function u ( i 2 , i 1 ) ( x ) satisfies the equation (1).
Let us make sure that the boundary conditions (2) are met. Since $x\in\partial\Omega$ implies $S_2^{j_2}S_1^{j_1}x\in\partial\Omega$, for $x\in\partial\Omega$ we have
$u^{(i_2,i_1)}(x)=W(x)\cdot a^{(i_2,i_1)}=\big(w(S_2^{j_2}S_1^{j_1}x)\big)^T_{(j_2,j_1)=0,\dots,(l_2-1,l_1-1)}\cdot a^{(i_2,i_1)}=0\cdot a^{(i_2,i_1)}=0.$
Thus, the function u ( i 2 , i 1 ) ( x ) is a solution to Problem S 2 . This completes the proof. □
Example 5.
Consider the problem (1) and (2) with $l_2=3$, $l_1=2$ and $a=(a_0,a_1,a_2,a_3,a_4,a_5)$. Let us use Theorem 5. To do this, take the eigenvectors of the matrix $A^{(2)}(a)$ in the form (13) from Example 4. Let μ be an eigenvalue of the boundary value problem (16) and (17) and $w_\mu(x)$ be a corresponding eigenfunction. Then, by (18), the eigenfunctions of the problem (1) and (2) corresponding to μ can be taken in the form $u^{(i_2,i_1)}(x)=W(x)\cdot a^{(i_2,i_1)}$:
$u^{(0,0)}(x)=w_\mu(x)+w_\mu(S_1x)+w_\mu(S_2x)+w_\mu(S_2S_1x)+w_\mu(S_2^2x)+w_\mu(S_2^2S_1x),$
$u^{(0,1)}(x)=w_\mu(x)-w_\mu(S_1x)+w_\mu(S_2x)-w_\mu(S_2S_1x)+w_\mu(S_2^2x)-w_\mu(S_2^2S_1x),$
$u^{(1,0)}(x)=w_\mu(x)+w_\mu(S_1x)+\lambda w_\mu(S_2x)+\lambda w_\mu(S_2S_1x)+\bar{\lambda}w_\mu(S_2^2x)+\bar{\lambda}w_\mu(S_2^2S_1x),$
$u^{(1,1)}(x)=w_\mu(x)-w_\mu(S_1x)+\lambda w_\mu(S_2x)-\lambda w_\mu(S_2S_1x)+\bar{\lambda}w_\mu(S_2^2x)-\bar{\lambda}w_\mu(S_2^2S_1x),$
$u^{(2,0)}(x)=w_\mu(x)+w_\mu(S_1x)+\bar{\lambda}w_\mu(S_2x)+\bar{\lambda}w_\mu(S_2S_1x)+\lambda w_\mu(S_2^2x)+\lambda w_\mu(S_2^2S_1x),$
$u^{(2,1)}(x)=w_\mu(x)-w_\mu(S_1x)+\bar{\lambda}w_\mu(S_2x)-\bar{\lambda}w_\mu(S_2S_1x)+\lambda w_\mu(S_2^2x)-\lambda w_\mu(S_2^2S_1x).$
If we use the eigenvalues of the matrix $A^{(2)}(a)$ from (13), then the eigenvalues of the problem (1) and (2) corresponding to the eigenfunctions written above are $\mu^{(i_2,i_1)}=\mu\bar{\lambda}^{(i_2,i_1)}=\mu\lambda^{(-i_2,-i_1)}$:
μ ( 0 , 0 ) = μ λ ( 0 , 0 ) , μ ( 0 , 1 ) = μ λ ( 0 , 1 ) , μ ( 1 , 0 ) = μ λ ( 2 , 0 ) , μ ( 1 , 1 ) = μ λ ( 2 , 1 ) , μ ( 2 , 0 ) = μ λ ( 1 , 0 ) , μ ( 2 , 1 ) = μ λ ( 1 , 1 ) .
Next, we need to expand a given polynomial into a sum of “generalized parity” polynomials. Let H ( x ) be some function defined on Ω . Let us denote
$F^{(i_2,i_1)}[H](x)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}H\big(S_2^{j_2}S_1^{j_1}x\big),\quad x\in\Omega.$
Lemma 2.
The function F ( i 2 , i 1 ) [ H ] ( x ) has the “generalized parity” property
F ( i 2 , i 1 ) [ H ] S 2 k 2 S 1 k 1 x = λ ¯ i 2 k 2 λ ¯ i 1 k 1 F ( i 2 , i 1 ) [ H ] ( x )
and the following equality
$F^{(k_2,k_1)}\big[F^{(i_2,i_1)}[H]\big](x)=\begin{cases}F^{(i_2,i_1)}[H](x),&(i_2,i_1)=(k_2,k_1),\\ 0,&(i_2,i_1)\neq(k_2,k_1)\end{cases}$
holds true. In addition, the function H ( x ) can be expanded in the form
H ( x ) = ( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) F ( i 2 , i 1 ) [ H ] ( x ) , x Ω .
Proof. 
It is not hard to see that
$F^{(i_2,i_1)}[H]\big(S_2^{k_2}S_1^{k_1}x\big)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}H\big(S_2^{k_2+j_2}S_1^{k_1+j_1}x\big)$
$=\frac{1}{l_2l_1}\sum_{(m_2,m_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{m_2-k_2}\lambda_{i_1}^{m_1-k_1}H\big(S_2^{m_2}S_1^{m_1}x\big)=\lambda_{i_2}^{-k_2}\lambda_{i_1}^{-k_1}F^{(i_2,i_1)}[H](x)=\bar{\lambda}_{i_2}^{k_2}\bar{\lambda}_{i_1}^{k_1}F^{(i_2,i_1)}[H](x),$
where, as in Theorem 2, the index substitution $k+j=m$ is used. Therefore, equality (20) holds true.
Consider now the equality (22). It is easy to see that
( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) F ( i 2 , i 1 ) [ H ] ( x ) = 1 l 2 l 1 ( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) ( j 2 , j 1 ) = 0 ( l 2 1 , l 1 1 ) λ i 2 j 2 λ i 1 j 1 H S 2 j 2 S 1 j 1 x = ( j 2 , j 1 ) = 0 ( l 2 1 , l 1 1 ) H S 2 j 2 S 1 j 1 x 1 l 2 l 1 ( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) λ i 2 j 2 λ i 1 j 1 .
Let us transform the inner sum on the right side of (23). Let $(j_2,j_1)\neq 0$; then, for example, $j_2\neq 0$, which means $\lambda_{j_2}\neq 1$. Taking into account that $\lambda_{j_2}^{l_2}=1$, by a simple combinatorial identity, we find
$\sum_{(i_2,i_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}=\sum_{(i_2,i_1)=0}^{(l_2-1,l_1-1)}\lambda_{j_2}^{i_2}\lambda_{j_1}^{i_1}=\sum_{i_2=0}^{l_2-1}\lambda_{j_2}^{i_2}\,\sum_{i_1=0}^{l_1-1}\lambda_{j_1}^{i_1}=\frac{\lambda_{j_2}^{l_2}-1}{\lambda_{j_2}-1}\sum_{i_1=0}^{l_1-1}\lambda_{j_1}^{i_1}=0.$
If ( j 2 , j 1 ) = 0 , then λ j 2 = 1 , λ j 1 = 1 and so
( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) λ i 2 j 2 λ i 1 j 1 = l 2 l 1 .
Therefore, the expression on the right side of (23) is equal to H ( S 2 0 S 1 0 x ) = H ( x ) . This proves the equality (22).
Now, let us prove (21). It is not hard to see that, using (20) and (22), we can write
$F^{(k_2,k_1)}\big[F^{(i_2,i_1)}[H]\big](x)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{k_2}^{j_2}\lambda_{k_1}^{j_1}F^{(i_2,i_1)}[H]\big(S_2^{j_2}S_1^{j_1}x\big)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{k_2}^{j_2}\lambda_{k_1}^{j_1}\bar{\lambda}_{i_2}^{j_2}\bar{\lambda}_{i_1}^{j_1}F^{(i_2,i_1)}[H](x)=F^{(i_2,i_1)}[H](x)\,\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\big(\lambda_{k_2}\bar{\lambda}_{i_2}\big)^{j_2}\big(\lambda_{k_1}\bar{\lambda}_{i_1}\big)^{j_1}=F^{(i_2,i_1)}[H](x)\,\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{k_2-i_2}^{j_2}\lambda_{k_1-i_1}^{j_1}.$
Since, by virtue of (24), the formula
$\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{k_2-i_2}^{j_2}\lambda_{k_1-i_1}^{j_1}=\begin{cases}1,&(k_2,k_1)=(i_2,i_1),\\ 0,&(k_2,k_1)\neq(i_2,i_1)\end{cases}$
holds true, (21) follows from the last equality. Here, the equalities $\lambda_{k_2}\bar{\lambda}_{i_2}=\lambda_{k_2}\lambda_{-i_2}=\lambda_{k_2-i_2}$ are taken into account. The lemma is proved. □
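Since Lemma 2 is purely algebraic, its three identities can be sanity-checked numerically. The sketch below is our illustration, not the paper's construction: a single orbit $\{S_2^{j_2}S_1^{j_1}x\}$ is modeled abstractly as an $l_2\times l_1$ array of values, so that (19) becomes a cyclic convolution over $\mathbb{Z}_{l_2}\times\mathbb{Z}_{l_1}$; the names `F` and `H` are ad hoc.

```python
import numpy as np

l2, l1 = 3, 2
lam2, lam1 = np.exp(2j*np.pi/l2), np.exp(2j*np.pi/l1)

rng = np.random.default_rng(0)
# a function H is represented by its values on the orbit points S2^{j2} S1^{j1} x,
# indexed by (j2, j1); acting by S2, S1 just shifts these indices cyclically
H = rng.standard_normal((l2, l1)) + 1j*rng.standard_normal((l2, l1))

def F(i2, i1, H):
    # F^{(i2,i1)}[H] from (19), evaluated on the whole orbit at once
    out = np.zeros((l2, l1), complex)
    for k2 in range(l2):
        for k1 in range(l1):
            s = 0
            for j2 in range(l2):
                for j1 in range(l1):
                    s += lam2**(i2*j2)*lam1**(i1*j1)*H[(k2+j2) % l2, (k1+j1) % l1]
            out[k2, k1] = s/(l2*l1)
    return out

FH = F(1, 1, H)
# (20): generalized parity under S2^{k2} S1^{k1}
assert np.allclose(FH[2, 1], lam2.conjugate()**2 * lam1.conjugate() * FH[0, 0])
# (21): the operators F^{(i2,i1)} are mutually annihilating projections
assert np.allclose(F(1, 1, FH), FH)
assert np.allclose(F(2, 0, FH), 0)
# (22): the components sum back to H
total = sum(F(i2, i1, H) for i2 in range(l2) for i1 in range(l1))
assert np.allclose(total, H)
print("Lemma 2 identities verified")
```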
Example 6.
Let $l_2=l_1=2$, $S_1x=(-x_1,x_2)$, $S_2x=(x_1,-x_2)$. Taking into account that $\lambda_{i_2}=(-1)^{i_2}$, $\lambda_{i_1}=(-1)^{i_1}$, we obtain
$F^{(0,0)}[H](x)=\tfrac14\big(H(x_1,x_2)+H(-x_1,x_2)+H(x_1,-x_2)+H(-x_1,-x_2)\big),$
$F^{(0,1)}[H](x)=\tfrac14\big(H(x_1,x_2)-H(-x_1,x_2)+H(x_1,-x_2)-H(-x_1,-x_2)\big),$
$F^{(1,0)}[H](x)=\tfrac14\big(H(x_1,x_2)+H(-x_1,x_2)-H(x_1,-x_2)-H(-x_1,-x_2)\big),$
$F^{(1,1)}[H](x)=\tfrac14\big(H(x_1,x_2)-H(-x_1,x_2)-H(x_1,-x_2)+H(-x_1,-x_2)\big).$
Let the function H ( x ) be even in x 1 . Then, its components F ( i 2 , i 1 ) [ H ] of generalized parity ( 0 , 1 ) and ( 1 , 1 ) are zero.
Consider a homogeneous harmonic polynomial $H_m(x_1,x_2)$ of degree m and let $(r,\varphi)$ be the polar coordinates of $x=(x_1,x_2)$. Then, there exist $\alpha,\beta\in\mathbb{R}$ such that
$H_m(x)=\alpha\,\mathrm{Re}\,(x_1+ix_2)^m+\beta\,\mathrm{Im}\,(x_1+ix_2)^m=r^m(\alpha\cos m\varphi+\beta\sin m\varphi)$
and hence
$H_m(-x_1,x_2)=\alpha\,\mathrm{Re}\,(-x_1+ix_2)^m+\beta\,\mathrm{Im}\,(-x_1+ix_2)^m=(-r)^m(\alpha\cos m\varphi-\beta\sin m\varphi),$
$H_m(x_1,-x_2)=\alpha\,\mathrm{Re}\,(x_1-ix_2)^m+\beta\,\mathrm{Im}\,(x_1-ix_2)^m=r^m(\alpha\cos m\varphi-\beta\sin m\varphi),$
$H_m(-x_1,-x_2)=(-r)^m(\alpha\cos m\varphi+\beta\sin m\varphi).$
The operator F ( i 2 , i 1 ) [ · ] extracts the following components of the harmonic polynomial H m ( x ) :
$F^{(0,0)}[H_m](x)=\alpha r^m\frac{1+(-1)^m}{2}\cos m\varphi,\qquad F^{(0,1)}[H_m](x)=\alpha r^m\frac{1-(-1)^m}{2}\cos m\varphi,$
$F^{(1,0)}[H_m](x)=\beta r^m\frac{1-(-1)^m}{2}\sin m\varphi,\qquad F^{(1,1)}[H_m](x)=\beta r^m\frac{1+(-1)^m}{2}\sin m\varphi.$
Thus, for m N 0 ,
F ( 0 , 0 ) [ H 2 m ] ( x ) = α r 2 m cos 2 m φ , F ( 1 , 1 ) [ H 2 m ] ( x ) = β r 2 m sin 2 m φ , F ( 0 , 1 ) [ H 2 m + 1 ] ( x ) = α r 2 m + 1 cos ( 2 m + 1 ) φ , F ( 1 , 0 ) [ H 2 m + 1 ] ( x ) = β r 2 m + 1 sin ( 2 m + 1 ) φ ,
and the rest of the components vanish
F ( 0 , 1 ) [ H 2 m ] ( x ) = F ( 1 , 0 ) [ H 2 m ] ( x ) = 0 , F ( 0 , 0 ) [ H 2 m + 1 ] ( x ) = F ( 1 , 1 ) [ H 2 m + 1 ] ( x ) = 0 .
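These component formulas can be checked at a sample point. A minimal numerical sketch, assuming the mappings $S_1x=(-x_1,x_2)$, $S_2x=(x_1,-x_2)$ of this example (function names are ours):

```python
import numpy as np

# H_m(x) = alpha*Re(x1+i x2)^m + beta*Im(x1+i x2)^m with m = 3 (odd)
alpha, beta, m = 1.5, -0.7, 3

def H(x1, x2):
    z = (x1 + 1j*x2)**m
    return alpha*z.real + beta*z.imag

def F(i2, i1, x1, x2):
    # F^{(i2,i1)}[H](x) from (19) with l2 = l1 = 2, so lambda = -1;
    # S1^{j1} flips x1, S2^{j2} flips x2
    s = 0.0
    for j2 in range(2):
        for j1 in range(2):
            s += (-1)**(i2*j2) * (-1)**(i1*j1) * H((-1)**j1 * x1, (-1)**j2 * x2)
    return s/4

x1, x2 = 0.8, -0.3
r, phi = np.hypot(x1, x2), np.arctan2(x2, x1)
# odd m: only the (0,1) and (1,0) components survive
assert np.isclose(F(0, 1, x1, x2), alpha * r**m * np.cos(m*phi))
assert np.isclose(F(1, 0, x1, x2), beta * r**m * np.sin(m*phi))
assert np.isclose(F(0, 0, x1, x2), 0) and np.isclose(F(1, 1, x1, x2), 0)
print("parity components of H_3 verified")
```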

4. Finding Solutions to Problem S 2

Let us rewrite the result of Theorem 5 in a more convenient form.
Theorem 6.
Solutions to the boundary value problem (1) and (2) can be represented as
$\hat{u}^{(i_2,i_1)}(x)=F^{(i_2,i_1)}[w_\mu](x),\qquad\lambda_{\mu,(i_2,i_1)}=\mu\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}a_{(j_2,j_1)}\bar{\lambda}_{i_2}^{j_2}\bar{\lambda}_{i_1}^{j_1},$
where the operator F ( i 2 , i 1 ) [ · ] is defined in (19), the function w μ ( x ) is a solution to the boundary value problem (16) and (17)
$\Delta w(x)+\mu w(x)=0,\ x\in\Omega;\qquad w(x)=0,\ x\in\partial\Omega$
for some μ R + . Eigenfunctions u ^ ( i 2 , i 1 ) ( x ) for ( i 2 , i 1 ) = 0 , , ( l 2 1 , l 1 1 ) and fixed μ are orthogonal in L 2 ( Ω ) .
The functions u ^ ( i 2 , i 1 ) ( x ) are a part of the function w μ ( x ) in the sense that
( i 2 , i 1 ) = 0 ( l 2 1 , l 1 1 ) u ^ ( i 2 , i 1 ) ( x ) = w μ ( x ) .
Proof. 
Denote u ^ ( i 2 , i 1 ) ( x ) = 1 l 2 l 1 u ( i 2 , i 1 ) ( x ) . It is clear that u ^ ( i 2 , i 1 ) ( x ) is also an eigenfunction of the problem (1) and (2). It is not hard to see that (18) implies
u ^ ( i 2 , i 1 ) ( x ) = 1 l 2 l 1 u ( i 2 , i 1 ) ( x ) = 1 l 2 l 1 W μ ( x ) · a ( i 2 , i 1 )
= 1 l 2 l 1 w μ ( S 2 j 2 S 1 j 1 x ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T · λ i 2 j 2 λ i 1 j 1 ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) T
= 1 l 2 l 1 ( j 2 , j 1 ) = 0 ( l 2 1 , l 1 1 ) w μ ( S 2 j 2 S 1 j 1 x ) λ i 2 j 2 λ i 1 j 1 = F ( i 2 , i 1 ) [ w μ ] ( x ) ,
which proves the first formula from (26).
The eigenvalues of the problem (1) and (2) corresponding to eigenfunction u ^ ( i 2 , i 1 ) ( x ) , by Theorem 5 and (14) from Corollary 4 can be taken in the form
λ μ , ( i 2 , i 1 ) = μ λ ¯ ( i 2 , i 1 ) = μ ( j 2 , j 1 ) = 0 ( l 2 1 , l 1 1 ) a ( j 2 , j 1 ) λ ¯ i 2 j 2 λ ¯ i 1 j 1 .
We now prove that the functions $\hat{u}^{(i_2,i_1)}(x)$ and $\hat{u}^{(j_2,j_1)}(x)$ for $(i_2,i_1)\neq(j_2,j_1)$ are orthogonal in $L_2(\Omega)$. Indeed, if $(i_2-j_2,i_1-j_1)\neq 0$, then either $i_2-j_2\not\equiv 0\pmod{l_2}$ or $i_1-j_1\not\equiv 0\pmod{l_1}$. Let, for example, $i_2-j_2\not\equiv 0$ and hence $\lambda_{i_2-j_2}\neq 1$. Then, using Lemma 4.1 from [10], we obtain the following equality for $g\in C(\Omega)$
Ω g ( S 2 ξ ) d ξ = Ω g ( ξ ) d ξ .
Therefore, by the Equality (20) from Lemma 2, we have
$\int_\Omega\hat{u}^{(i_2,i_1)}(x)\,\bar{\hat{u}}^{(j_2,j_1)}(x)\,dx=\int_\Omega F^{(i_2,i_1)}[w_\mu](x)\,\bar{F}^{(j_2,j_1)}[w_\mu](x)\,dx=\int_\Omega F^{(i_2,i_1)}[w_\mu](S_2x)\,\bar{F}^{(j_2,j_1)}[w_\mu](S_2x)\,dx=\bar{\lambda}_{i_2}\lambda_{j_2}\int_\Omega F^{(i_2,i_1)}[w_\mu](x)\,\bar{F}^{(j_2,j_1)}[w_\mu](x)\,dx=\bar{\lambda}_{i_2-j_2}\int_\Omega\hat{u}^{(i_2,i_1)}(x)\,\bar{\hat{u}}^{(j_2,j_1)}(x)\,dx.$
Since $\lambda_{i_2-j_2}\neq 1$, this immediately implies the orthogonality
$\int_\Omega\hat{u}^{(i_2,i_1)}(x)\,\bar{\hat{u}}^{(j_2,j_1)}(x)\,dx=0.$
Finally, the equality (27) is a consequence of the equality (22) from Lemma 2 for H = w μ . The theorem is proved. □
Corollary 6.
If H ( x ) is a harmonic polynomial with real coefficients, then the harmonic polynomials F ( i 2 , i 1 ) [ H ] ( x ) , ( i 2 , i 1 ) = 0 , , ( l 2 1 , l 1 1 ) are orthogonal and linearly independent.
Proof. 
Indeed, let $(i_2,i_1)\neq(j_2,j_1)$, which is possible, for example, for $i_1\neq j_1$, whence $\lambda_{i_1-j_1}\neq 1$. By analogy with (28) and according to Lemma 4.1 from [10], we obtain
$\int_{\partial\Omega}F^{(i_2,i_1)}[H](x)\,\bar{F}^{(j_2,j_1)}[H](x)\,ds=\int_{\partial\Omega}F^{(i_2,i_1)}[H](S_1x)\,\bar{F}^{(j_2,j_1)}[H](S_1x)\,ds=\bar{\lambda}_{i_1}\lambda_{j_1}\int_{\partial\Omega}F^{(i_2,i_1)}[H](x)\,\bar{F}^{(j_2,j_1)}[H](x)\,ds=\bar{\lambda}_{i_1-j_1}\int_{\partial\Omega}F^{(i_2,i_1)}[H](x)\,\bar{F}^{(j_2,j_1)}[H](x)\,ds,$
where, because $\lambda_{i_1-j_1}\neq 1$, we obtain the orthogonality of $F^{(i_2,i_1)}[H](x)$ and $F^{(j_2,j_1)}[H](x)$ on $\partial\Omega$, and hence their linear independence. The corollary is proved. □
Corollary 7.
The matrix E ( a ) = a ( j 2 , j 1 ) ( j 2 , j 1 ) = 0 , , ( l 2 1 , l 1 1 ) consisting of the eigenvectors of the matrix A ( 2 ) ( a ) is orthogonal and symmetric.
Proof. 
Let $a^{(j_2,j_1)}$ and $a^{(i_2,i_1)}$ be two different columns of the matrix $E(a)$. Then, using the equality $\lambda_{j_2}\bar{\lambda}_{i_2}=\lambda_{j_2-i_2}$, we write
$a^{(j_2,j_1)}\cdot\bar{a}^{(i_2,i_1)}=\big(\lambda_{j_2}^{k_2}\lambda_{j_1}^{k_1}\big)^T_{(k_2,k_1)=0,\dots,(l_2-1,l_1-1)}\cdot\big(\bar{\lambda}_{i_2}^{k_2}\bar{\lambda}_{i_1}^{k_1}\big)^T_{(k_2,k_1)=0,\dots,(l_2-1,l_1-1)}=\sum_{(k_2,k_1)=0}^{(l_2-1,l_1-1)}\lambda_{j_2}^{k_2}\lambda_{j_1}^{k_1}\bar{\lambda}_{i_2}^{k_2}\bar{\lambda}_{i_1}^{k_1}=\sum_{(k_2,k_1)=0}^{(l_2-1,l_1-1)}\big(\lambda_{j_2}\bar{\lambda}_{i_2}\big)^{k_2}\big(\lambda_{j_1}\bar{\lambda}_{i_1}\big)^{k_1}=\sum_{(k_2,k_1)=0}^{(l_2-1,l_1-1)}\lambda_{j_2-i_2}^{k_2}\lambda_{j_1-i_1}^{k_1}=\sum_{k_2=0}^{l_2-1}\lambda_{j_2-i_2}^{k_2}\,\sum_{k_1=0}^{l_1-1}\lambda_{j_1-i_1}^{k_1}.$
If $(j_2,j_1)\neq(i_2,i_1)$, then $(j_2-i_2,j_1-i_1)\neq 0$, which means that one of the inequalities $j_2-i_2\neq 0$ or $j_1-i_1\neq 0$ holds true. Therefore, either $\lambda_{j_2-i_2}\neq 1$ or $\lambda_{j_1-i_1}\neq 1$, which means that, similarly to (24), we obtain $a^{(j_2,j_1)}\cdot\bar{a}^{(i_2,i_1)}=0$.
The symmetry of the matrix E ( a ) follows from the equalities
$E^T(a)=\Big(\big(\lambda_{j_2}^{i_2}\lambda_{j_1}^{i_1}\big)_{(i_2,i_1)=0,\dots,(l_2-1,l_1-1)}\Big)^T_{(j_2,j_1)=0,\dots,(l_2-1,l_1-1)}=\big(\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}\big)_{(i_2,i_1),(j_2,j_1)=0,\dots,(l_2-1,l_1-1)}=\big(\lambda_{j_2}^{i_2}\lambda_{j_1}^{i_1}\big)_{(i_2,i_1),(j_2,j_1)=0,\dots,(l_2-1,l_1-1)}=E(a).$
The corollary is proved. □
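For a concrete size, Corollary 7 is easy to verify numerically. A sketch for $l_2=3$, $l_1=2$ (the lexicographic ordering of the multi-indices is our choice):

```python
import numpy as np

# build E(a) column by column from the eigenvectors
# a^{(j2,j1)} = (lam2^{j2 k2} lam1^{j1 k1})_{(k2,k1)}
l2, l1 = 3, 2
lam2, lam1 = np.exp(2j*np.pi/l2), np.exp(2j*np.pi/l1)

idx = [(k2, k1) for k2 in range(l2) for k1 in range(l1)]  # shared row/column order
E = np.array([[lam2**(j2*k2) * lam1**(j1*k1) for (j2, j1) in idx]
              for (k2, k1) in idx])

# columns are pairwise orthogonal with squared norm l2*l1: E^H E = l2*l1*I
assert np.allclose(E.conj().T @ E, l2*l1*np.eye(l2*l1))
# E is symmetric, since the entry lam2^{j2 k2} lam1^{j1 k1} is symmetric in (j, k)
assert np.allclose(E, E.T)
print("E(a) has orthogonal columns and is symmetric")
```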
Note that the matrix of eigenvectors in the case of multiple involution and for l 1 = = l n = 2 has a similar property [24].
Now, we can explore the completeness of the eigenfunctions of Problem S 2 .
Let $H_m$ be the space of homogeneous harmonic polynomials of degree m. By Lemma 2, it can be split into a sum of $l_2l_1$ subspaces $F^{(i_2,i_1)}[H_m]$, $(i_2,i_1)=0,\dots,(l_2-1,l_1-1)$, of homogeneous harmonic polynomials of parity $(i_2,i_1)$, orthogonal on $\partial\Omega$. Let $\{H_m^{(i_2,i_1),k}:k=1,2,\dots,m_{(i_2,i_1)}\}$ be a system of polynomials orthogonal on $\partial\Omega$ and complete in $F^{(i_2,i_1)}[H_m]$.
Theorem 7.
Let the numbers $\lambda^{(i_2,i_1)}$ defined in (14) all be nonzero. Then, the system of eigenfunctions of the Dirichlet problem (1) and (2) is complete in $L_2(\Omega)$ and has the form
$u_{\mu,m,(i_2,i_1),k}(x)=\frac{1}{|x|^{m+n/2-1}}\,J_{m+n/2-1}\big(\sqrt{\mu}\,|x|\big)\,H_m^{(i_2,i_1),k}(x),$
where $m\in\mathbb{N}_0$, $(i_2,i_1)=0,\dots,(l_2-1,l_1-1)$, $k=1,\dots,m_{(i_2,i_1)}$, $J_\nu(t)$ is the Bessel function of the first kind, and $\sqrt{\mu}$ is a root of the Bessel function $J_{m+n/2-1}(t)$. The eigenvalues of Problem $S_2$ are the numbers $\lambda_{\mu,(i_2,i_1)}$ defined in (26)
λ μ , ( i 2 , i 1 ) = μ ( j 2 , j 1 ) = 0 ( l 2 1 , l 1 1 ) a ( j 2 , j 1 ) λ ¯ i 2 j 2 λ ¯ i 1 j 1 .
Proof. 
By Theorem 6, to find eigenfunctions of the problem (1) and (2), it is necessary to find the function $w_\mu(x)$, a solution to the problem (16) and (17), and then write out the function $\hat{u}^{(i_2,i_1)}(x)=F^{(i_2,i_1)}[w_\mu](x)$. A maximal system of eigenfunctions of the problem (16) and (17) has the form (see, for example, [27,28])
$w_{\mu,m,j}(x)=\frac{1}{|x|^{m+n/2-1}}\,J_{m+n/2-1}\big(\sqrt{\mu}\,|x|\big)\,H_m^j(x),$
where $\{H_m^j(x):j=1,\dots,h_m\}$, $h_m=\frac{2m+n-2}{n-2}\binom{m+n-3}{n-3}$ ($n>2$), is the maximal system of linearly independent homogeneous harmonic polynomials of degree m, and $\sqrt{\mu}$ is a root of the Bessel function $J_{m+n/2-1}(t)$. Since $|S_2^{j_2}S_1^{j_1}x|=|x|$, we then have
$\hat{u}^{(i_2,i_1)}(x)=F^{(i_2,i_1)}[w_{\mu,m,j}](x)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}w_{\mu,m,j}\big(S_2^{j_2}S_1^{j_1}x\big)=\frac{1}{l_2l_1}\sum_{(j_2,j_1)=0}^{(l_2-1,l_1-1)}\lambda_{i_2}^{j_2}\lambda_{i_1}^{j_1}\frac{1}{|S_2^{j_2}S_1^{j_1}x|^{m+n/2-1}}\,J_{m+n/2-1}\big(\sqrt{\mu}\,|S_2^{j_2}S_1^{j_1}x|\big)\,H_m^j\big(S_2^{j_2}S_1^{j_1}x\big)=\frac{1}{|x|^{m+n/2-1}}\,J_{m+n/2-1}\big(\sqrt{\mu}\,|x|\big)\,F^{(i_2,i_1)}[H_m^j](x).$
Since $F^{(i_2,i_1)}[H_m^j]\in F^{(i_2,i_1)}[H_m]$, we choose in the space $F^{(i_2,i_1)}[H_m]$ a complete system of polynomials $\{H_m^{(i_2,i_1),k}:k=1,2,\dots,m_{(i_2,i_1)}\}$ orthogonal on $\partial\Omega$, to which correspond some polynomials from $H_m$. Note that, for some values of m, it is possible that $m_{(i_2,i_1)}=0$; that is, for such m, the component $H_m^{(i_2,i_1)}(x)$ is missing (see Example 6) and therefore $\hat{u}^{(i_2,i_1)}=0$. Choosing in the resulting expression for $\hat{u}^{(i_2,i_1)}(x)$ the harmonic polynomials $H_m^{(i_2,i_1),k}(x)$ instead of $F^{(i_2,i_1)}[H_m^j](x)$, and adding indices indicating the dependence of the eigenfunction $\hat{u}^{(i_2,i_1)}(x)$ on μ, m and k, we obtain (29).
Since, in formula (29), $H_m^{(i_2,i_1),k}(x)$ are homogeneous harmonic polynomials of degree m, the functions $u_{\mu,m,(i_2,i_1),k}(x)$ have the form (30) and hence are eigenfunctions of the problem (16) and (17). The converse is also true: each homogeneous harmonic polynomial $H_m^j(x)$ can, by formula (22), be represented as a linear combination of harmonic polynomials of the form $H_m^{(i_2,i_1)}(x)$, and those are linear combinations of the polynomials $H_m^{(i_2,i_1),k}(x)$; hence, any function from (30) is a linear combination of functions of the form (29). The eigenvalues of the problem (1) and (2), in accordance with Theorem 6, are found from (26).
Let us study the orthogonality of the functions u μ , m , ( i 2 , i 1 ) , k ( x ) . The equality holds true
$\int_\Omega u_{\mu_1,m_1,(i_2,i_1),k_1}(x)\,\bar{u}_{\mu_2,m_2,(j_2,j_1),k_2}(x)\,dx=\int_0^1\rho\,J_{m_1+n/2-1}\big(\sqrt{\mu_1}\,\rho\big)\,J_{m_2+n/2-1}\big(\sqrt{\mu_2}\,\rho\big)\,d\rho\cdot\int_{\partial\Omega}H_{m_1}^{(i_2,i_1),k_1}(\xi)\,\bar{H}_{m_2}^{(j_2,j_1),k_2}(\xi)\,ds_\xi=0.$
Consider the right side of the obtained equality. For μ 1 μ 2 and m 1 = m 2 , due to the properties of the Bessel functions (orthogonality in L 2 ( ( 0 , 1 ) ; t ) ), the first factor is zero. If m 1 m 2 , by the property of harmonic polynomials, the second factor from the right side is zero. If μ 1 = μ 2 , m 1 = m 2 , then, for ( i 2 , i 1 ) ( j 2 , j 1 ) , the second factor from the right side is zero by Corollary 6. Finally, if ( i 2 , i 1 ) = ( j 2 , j 1 ) and k 1 k 2 , then the second factor is zero in accordance with the scheme for constructing polynomials H m ( i 2 , i 1 ) , k ( x ) .
By Lemma 2 from ([29], p. 33), the obtained system (29) of functions is complete in $L_2(\Omega)=L_2((0,1)\times\partial\Omega)$ because the system $\{J_{m+n/2-1}(\sqrt{\mu}\,\rho):J_{m+n/2-1}(\sqrt{\mu})=0\}$ is orthogonal and complete in $L_2((0,1);t)$ for each m, and the system $H_m^{(i_2,i_1),k}(\xi)$ is orthogonal and complete in $L_2(\partial\Omega)$ for different $m$, $(i_2,i_1)$ and k. The theorem is proved. □
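The two orthogonality facts used for the radial factor can be spot-checked with standard routines. A sketch (an integer order is chosen so that `jn_zeros` applies) that also confirms the normalization $\int_0^1 t\,J_\nu(\alpha t)^2\,dt=J_{\nu+1}(\alpha)^2/2$ for a root $\alpha$ of $J_\nu$:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

nu = 3                       # e.g. m + n/2 - 1 for m = 3, n = 2
al, be = jn_zeros(nu, 2)     # first two positive roots of J_nu

# orthogonality in L2((0,1); t) over distinct roots
inner = quad(lambda t: t*jv(nu, al*t)*jv(nu, be*t), 0, 1)[0]
assert abs(inner) < 1e-8

# normalization at a single root
norm = quad(lambda t: t*jv(nu, al*t)**2, 0, 1)[0]
assert np.isclose(norm, jv(nu+1, al)**2/2)
print("Bessel orthogonality and normalization verified")
```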
Example 7.
Let $n=2$, $l_2=l_1=2$, $S_1x=(-x_1,x_2)$, $S_2x=(x_1,-x_2)$; then, Problem $S_2$ has the form
$a_0\Delta u(x_1,x_2)+a_1\Delta u(-x_1,x_2)+a_2\Delta u(x_1,-x_2)+a_3\Delta u(-x_1,-x_2)+\lambda u(x)=0,\ x\in\Omega;\qquad u(x)=0,\ x\in\partial\Omega.$
Using Example 6, we find the eigenfunctions of the problem (1) and (2). In the polar coordinate system, the eigenfunctions of the problem (16) and (17) are determined according to the equality (30) in the form
$w_{\mu,m,0}(x)=J_m\big(\sqrt{\mu}\,r\big)\cos m\varphi,\qquad w_{\mu,m,1}(x)=J_m\big(\sqrt{\mu}\,r\big)\sin m\varphi,\qquad m\in\mathbb{N}_0,$
where $\sqrt{\mu}$ is a root of the Bessel function $J_m(t)$,
$J_m(t)=\sum_{j=0}^{\infty}\frac{(-1)^j}{(j+m)!\,j!}\Big(\frac{t}{2}\Big)^{2j+m}.$
Using (31), we have
$u_{\mu,m,(i_2,i_1)}(x)=J_m\big(\sqrt{\mu}\,|x|\big)\,F^{(i_2,i_1)}[H_m](x/|x|).$
In the written formula, the dependence of the eigenfunction u μ , m , ( i 2 , i 1 ) ( x ) on the index k is not indicated because, in accordance with Example 6, the dimension of the space F ( i 2 , i 1 ) [ H m ] is equal to 1. According to (25) and taking into account (26), we write
$u_{\mu,2m,(0,0)}(x)=J_{2m}\big(\sqrt{\mu}\,r\big)\cos(2m\varphi),\qquad\lambda_{\mu,(0,0)}=\mu(a_0+a_1+a_2+a_3),$
$u_{\mu,2m+1,(0,1)}(x)=J_{2m+1}\big(\sqrt{\mu}\,r\big)\cos((2m+1)\varphi),\qquad\lambda_{\mu,(0,1)}=\mu(a_0-a_1+a_2-a_3),$
$u_{\mu,2m+1,(1,0)}(x)=J_{2m+1}\big(\sqrt{\mu}\,r\big)\sin((2m+1)\varphi),\qquad\lambda_{\mu,(1,0)}=\mu(a_0+a_1-a_2-a_3),$
$u_{\mu,2m,(1,1)}(x)=J_{2m}\big(\sqrt{\mu}\,r\big)\sin(2m\varphi),\qquad\lambda_{\mu,(1,1)}=\mu(a_0-a_1-a_2+a_3),$
where $m\in\mathbb{N}_0$, $\sqrt{\mu}$ is a root of the corresponding Bessel function and $\lambda_{\mu,(i_2,i_1)}\neq 0$. It is clear that the obtained system of eigenfunctions is complete in $L_2(\Omega)$.
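One component of this example admits a quick numerical check (our finite-difference verification, not from the paper): since $u(x)=J_{2m}(\sqrt{\mu}\,r)\cos(2m\varphi)$ is even in $x_1$ and in $x_2$, the nonlocal equation collapses to $(a_0+a_1+a_2+a_3)\Delta u+\lambda u=0$ with $\lambda=\mu(a_0+a_1+a_2+a_3)$, so it suffices to verify $\Delta u=-\mu u$ and the Dirichlet condition.

```python
import numpy as np
from scipy.special import jv, jn_zeros

m = 1
sqrt_mu = jn_zeros(2*m, 1)[0]          # first positive root of J_{2m}
mu = sqrt_mu**2

def u(x1, x2):
    r, phi = np.hypot(x1, x2), np.arctan2(x2, x1)
    return jv(2*m, sqrt_mu*r)*np.cos(2*m*phi)

# u vanishes on the unit circle (boundary condition (2))
assert abs(u(np.cos(0.3), np.sin(0.3))) < 1e-12
# parity: u(S1 x) = u(-x1, x2) = u(x) and u(S2 x) = u(x1, -x2) = u(x)
x1, x2 = 0.4, 0.25
assert np.isclose(u(-x1, x2), u(x1, x2)) and np.isclose(u(x1, -x2), u(x1, x2))
# five-point finite-difference Laplacian: Delta u = -mu u
h = 1e-4
lap = (u(x1+h, x2) + u(x1-h, x2) + u(x1, x2+h) + u(x1, x2-h) - 4*u(x1, x2))/h**2
assert np.isclose(lap, -mu*u(x1, x2), rtol=1e-4)
print("Example 7 component verified")
```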
Let n = 3 . Then, the maximal system of homogeneous harmonic polynomials of degree m in (30) { H m j , 0 ( x ) , H m j , 1 ( x ) : j = 0 , , m } has the form [27]
$H_m^{j,0}(x_1,x_2,x_3)=G_{m-j}^j(x)\,\mathrm{Re}\,(x_1+ix_2)^j,\qquad H_m^{j,1}(x_1,x_2,x_3)=G_{m-j}^j(x)\,\mathrm{Im}\,(x_1+ix_2)^j,$
where 0 j m and
$G_m^j(x)\equiv G_m^j(x_1,x_2,x_3)=\sum_{k=0}^{[m/2]}\frac{(-1)^k\,(x_1^2+x_2^2)^k\,x_3^{m-2k}}{(2k)!!\,(2j+2k)!!\,(m-2k)!}.$
The system (32) has $2m+1$ members for every m because $H_m^{0,1}(x)=0$. In the spherical coordinate system $(r,\varphi,\theta)$, we can write it in a more compact way
$H_m^{j,0}(x_1,x_2,x_3)=r^m G_{m-j}^j(\theta)\cos j\varphi,\qquad H_m^{j,1}(x_1,x_2,x_3)=r^m G_{m-j}^j(\theta)\sin j\varphi,$
where
$G_m^j(\theta)=\sum_{k=0}^{[m/2]}\frac{(-1)^k\,\sin^{2k+j}\theta\,\cos^{m-2k}\theta}{(2k)!!\,(2j+2k)!!\,(m-2k)!},$
since $\cos\theta=x_3/|x|$, $\sin\theta=\sqrt{x_1^2+x_2^2}/|x|$ and $\cos\varphi=x_1/\sqrt{x_1^2+x_2^2}$. In this case, the dimension of the space $F^{(i_2,i_1)}[H_m]$ is greater than 1. Note that the operator $F^{(i_2,i_1)}[\cdot]$ acts only on the second multiplier of the polynomials in (32). For example, in the space $F^{(0,0)}[H_m]$, one can choose the following basic polynomials $\{H_m^{2k,0}(r,\varphi,\theta):k=0,\dots,[m/2]\}$, which means
$u_{\mu,m,(0,0),k}(x)=r^{-1/2}J_{m+1/2}\big(\sqrt{\mu}\,r\big)\,G_{m-2k}^{2k}(\theta)\cos(2k\varphi),\qquad k=0,\dots,[m/2].$
The remaining eigenfunctions are obtained similarly.
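The Cartesian and spherical forms of $H_m^{j,0}$ can be cross-checked numerically; the following sketch (helper names are ours) also verifies harmonicity by a seven-point finite-difference Laplacian:

```python
import numpy as np
from math import factorial

def dfact(n):  # double factorial with 0!! = (-1)!! = 1
    return 1 if n <= 0 else n*dfact(n-2)

def G(m, j, x1, x2, x3):           # Cartesian G_m^j from (33)
    return sum((-1)**k * (x1**2+x2**2)**k * x3**(m-2*k)
               / (dfact(2*k)*dfact(2*j+2*k)*factorial(m-2*k))
               for k in range(m//2 + 1))

def H(m, j, x1, x2, x3):           # H_m^{j,0} = G_{m-j}^j(x) Re(x1+i x2)^j
    return G(m-j, j, x1, x2, x3) * ((x1+1j*x2)**j).real

m, j = 4, 2
x = np.array([0.3, -0.5, 0.7])
# harmonicity: discrete Laplacian vanishes
h = 1e-3
lap = sum(H(m, j, *(x+h*e)) + H(m, j, *(x-h*e)) for e in np.eye(3)) - 6*H(m, j, *x)
assert abs(lap/h**2) < 1e-6
# spherical form: r^m G_{m-j}^j(theta) cos(j phi) with G(theta) from (35)
r = np.linalg.norm(x)
theta, phi = np.arccos(x[2]/r), np.arctan2(x[1], x[0])
Gs = sum((-1)**k * np.sin(theta)**(2*k+j) * np.cos(theta)**((m-j)-2*k)
         / (dfact(2*k)*dfact(2*j+2*k)*factorial((m-j)-2*k))
         for k in range((m-j)//2 + 1))
assert np.isclose(H(m, j, *x), r**m * Gs * np.cos(j*phi))
print("Cartesian and spherical forms of H_4^{2,0} agree")
```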

5. Conclusions

The results obtained allow one to find explicitly, using the formula (29), the eigenfunctions and eigenvalues of the boundary value problem (1) and (2) for the nonlocal differential equation with double involution. The completeness of the system of eigenfunctions makes it possible to use the Fourier method to construct solutions of initial-boundary value problems for nonlocal parabolic and hyperbolic equations.
Possible applications of the obtained results can be found in the modeling of optical systems, since differential equations with involution are an important part of the general theory of functional differential equations, which has numerous applications in optics. Applications of equations with involution in modeling optical systems are given, for example, in [30,31]. In particular, in [30], mathematical models important for applications in nonlinear optics are considered in the form of nonlinear functional-differential equations of a parabolic type with feedback and transformation of spatial variables, which is specified by the involution operator. The following parabolic functional differential equation is considered
$\frac{\partial u}{\partial t}+u=\mu\Delta u+K\big(1+\gamma\cos Qu\big),\qquad r_1\le r\le r_2,\ t\ge 0,\ \mu>0,$
which describes the dynamics of the phase modulation of a light wave in an optical system with an involution operator Q such that $Q^l=I$.
With the above in mind, as further research steps on the topic of the presented article, we are going to investigate nonlocal initial-boundary value problems with involution for parabolic equations. In addition, we are going to study nonlocal boundary value problems in the case of multiple involution of arbitrary orders, generalizing the results obtained in [24], and also consider similar boundary value problems for a nonlocal biharmonic equation.

Author Contributions

B.T. and V.K. contributed equally to the conceptualization and the formal analysis of the problem discussed in the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first author is supported by the grant of the Committee of Sciences, Ministry of Education and Science of the Republic of Kazakhstan, project AP09259074.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All the data are presented within the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Przeworska-Rolewicz, D. Equations with Transformed Argument, an Algebraic Approach, 1st ed.; Elsevier Scientific: Amsterdam, The Netherlands, 1973; ISBN 0-444-41078-3.
  2. Wiener, J. Generalized Solutions of Functional Differential Equations, 1st ed.; World Scientific: Singapore; River Edge, NJ, USA; London, UK; Hong Kong, China, 1993; ISBN 981-02-1207-0.
  3. Cabada, A.; Tojo, F.A.F. Differential Equations with Involutions, 1st ed.; Atlantis Press: Paris, France, 2015; ISBN 978-94-6239-120-8.
  4. Ahmad, A.; Ali, M.; Malik, S.A. Inverse problems for diffusion equation with fractional Dzherbashian–Nersesian operator. Fract. Calc. Appl. Anal. 2021, 24, 1899–1918.
  5. Ahmad, B.; Alsaedi, A.; Kirane, M.; Tapdigoglu, R.G. An inverse problem for space and time fractional evolution equations with an involution perturbation. Quaest. Math. 2017, 40, 151–160.
  6. Al-Salti, N.; Kerbal, S.; Kirane, M. Initial-boundary value problems for a time-fractional differential equation with involution perturbation. Math. Model. Nat. Phenom. 2019, 14, 315.
  7. Ashyralyev, A.; Ibrahim, S.; Hincal, E. On stability of the third order partial delay differential equation with involution and Dirichlet condition. Bull. Karaganda Univ. Math. Ser. 2021, 2, 25–34.
  8. Ashyralyev, A.; Al-Hazaimeh, H. Stability of the time-dependent identification problem for the telegraph equation with involution. Int. J. Appl. Math. 2022, 35, 447–459.
  9. Burlutskaya, M.S. Some properties of functional-differential operators with involution ν(x) = 1 − x and their applications. Russ. Math. 2021, 65, 69–76.
  10. Karachik, V.V.; Sarsenbi, A.M.; Turmetov, B.K. On the solvability of the main boundary value problems for a nonlocal Poisson equation. Turk. J. Math. 2019, 43, 1604–1625.
  11. Kirane, M.; Sadybekov, M.A.; Sarsenbi, A.A. On an inverse problem of reconstructing a subdiffusion process from nonlocal data. Math. Methods Appl. Sci. 2019, 42, 2043–2052.
  12. Roumaissa, S.; Nadjib, B.; Faouzia, R. A variant of quasi-reversibility method for a class of heat equations with involution perturbation. Math. Methods Appl. Sci. 2021, 44, 11933–11943.
  13. Sadybekov, M.; Dildabek, G.; Ivanova, M. Direct and inverse problems for nonlocal heat equation with boundary conditions of periodic type. Bound. Value Probl. 2022, 53, 1–24.
  14. Turmetov, B.K.; Karachik, V.V. Solvability of nonlocal Dirichlet problem for generalized Helmholtz equation in a unit ball. Complex Var. Elliptic Equ. 2022, 1–16.
  15. Yarka, U.; Fedushko, S.; Veselý, P. The Dirichlet problem for the perturbed elliptic equation. Mathematics 2020, 8, 2108.
  16. Baskakov, A.G.; Krishtal, I.A.; Romanova, E.Y. Spectral analysis of a differential operator with an involution. J. Evol. Equ. 2017, 17, 669–684.
  17. Baskakov, A.G.; Krishtal, I.A.; Uskova, N.B. On the spectral analysis of a differential operator with an involution and general boundary conditions. Eurasian Math. J. 2020, 11, 30–39.
  18. Baskakov, A.G.; Krishtal, I.A.; Uskova, N.B. Similarity techniques in the spectral analysis of perturbed operator matrices. J. Math. Anal. Appl. 2019, 477, 669–684.
  19. Garkavenko, G.V.; Uskova, N.B. Decomposition of linear operators and asymptotic behavior of eigenvalues of differential operators with growing potential. J. Math. Sci. 2020, 246, 812–827.
  20. Granilshchikova, Y.A.; Shkalikov, A.A. Spectral properties of a differential operator with involution. Vestn. Mosk. Univ. 1 Matematika. Mekhanika 2022, 4, 67–71.
  21. Kritskov, L.V.; Sadybekov, M.A.; Sarsenbi, A.M. Properties in Lp of root functions for a nonlocal problem with involution. Turk. J. Math. 2019, 43, 393–401.
  22. Linkov, A. Substantiation of the Fourier method for boundary value problems with an involutive deviation. Vestn. Samar. Univ.-Estestv. Seriya 1999, 2, 60–66. (In Russian)
  23. Sarsenbi, A.A.; Sarsenbi, A.M. On eigenfunctions of the boundary value problems for second order differential equations with involution. Symmetry 2021, 13, 1972.
  24. Turmetov, B.; Karachik, V. On eigenfunctions and eigenvalues of a nonlocal Laplace operator with multiple involution. Symmetry 2021, 13, 1781.
  25. Vladykina, V.E.; Shkalikov, A.A. Spectral properties of ordinary differential operators with involution. Dokl. Math. 2019, 99, 5–10.
  26. Turmetov, B.; Karachik, V.; Muratbekova, M. On a boundary value problem for the biharmonic equation with multiple involutions. Mathematics 2021, 9, 2020.
  27. Karachik, V.V. Normalized system of functions with respect to the Laplace operator and its applications. J. Math. Anal. Appl. 2003, 287, 577–592.
  28. Karachik, V.V.; Antropova, N.A. On the solution of the inhomogeneous polyharmonic equation and the inhomogeneous Helmholtz equation. Differ. Equ. 2010, 46, 387–399.
  29. Vladimirov, V.S. Equations of Mathematical Physics; Nauka: Moscow, Russia, 1973. (In Russian)
  30. Kornuta, A.A.; Lukianenko, V.A. Dynamics of solutions of nonlinear functional differential equation of parabolic type. Izv. Vyss. Uchebnykh Zaved. Prikl. Nelineynaya Din. 2022, 30, 132–151.
  31. Razgulin, A.V. Nonlinear Models of Optical Synergetics; MAKS Press: Moscow, Russia, 2008. (In Russian)