Next Article in Journal
An Enhanced Multi-Constraint Optimization Algorithm for Efficient Network Topology Generation
Next Article in Special Issue
Miura-Type Transformations for Integrable Lattices in 3D
Previous Article in Journal
Existence of Best Proximity Point in O-CompleteMetric Spaces
Previous Article in Special Issue
On Recovering Sturm–Liouville-Type Operators with Global Delay on Graphs from Two Spectra
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Regularization and Inverse Spectral Problems for Differential Operators with Distribution Coefficients

by
Natalia P. Bondarenko
1,2,3
1
Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia
2
Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia
3
S.M. Nikolskii Mathematical Institute, Peoples’ Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya Street, Moscow 117198, Russia
Mathematics 2023, 11(16), 3455; https://doi.org/10.3390/math11163455
Submission received: 16 July 2023 / Revised: 5 August 2023 / Accepted: 7 August 2023 / Published: 9 August 2023

Abstract

:
In this paper, we consider a class of matrix functions that contains regularization matrices of Mirzoev and Shkalikov for differential operators with distribution coefficients of order n 2 . We show that every matrix function of this class is associated with some differential expression. Moreover, we construct the family of associated matrices for a fixed differential expression. Furthermore, our regularization results are applied to inverse spectral theory. We study a new type of inverse spectral problems, which consist of the recovery of distribution coefficients from the spectral data independently of the associated matrix. The uniqueness theorems are proved for the inverse problems by the Weyl–Yurko matrix and by the discrete spectral data. As examples, we consider the cases n = 2 and n = 4 in more detail.

1. Introduction

This paper is concerned with regularization and inverse spectral problems for differential operators generated by the expression
n ( y ) = y ( n ) + k = 0 m 1 + s ( 1 ) k τ 2 k ( x ) y ( k ) ( k ) + k = 0 m 1 ( 1 ) k + 1 τ 2 k + 1 ( x ) y ( k ) ( k + 1 ) + τ 2 k + 1 ( x ) y ( k + 1 ) ( k ) , x ( 0 , 1 ) ,
where n = 2 m + s , m N , s { 0 , 1 } , ( τ ν ) ν = 0 n 1 are distributional coefficients (generalized functions), τ ν W 2 s i ν [ 0 , 1 ] for ν = 0 , n 1 ¯ , and the singularity orders ( i ν ) ν = 0 n 1 are defined as follows:
i 2 k + j : = m k j , k 0 , j { 0 , 1 } .
In other words,
τ ν = ( 1 ) i ν σ ν ( i ν ) , ν = 0 , n 1 ¯ ,
where σ ν L 2 s [ 0 , 1 ] .
In recent years, spectral theory and related issues for linear ordinary differential operators with distribution coefficients have been rapidly developed. In 2016, Mirzoev and Shkalikov [1] proposed a regularization approach for even-order differential operators with distribution coefficients. In particular, their approach allows to reduce the equation n ( y ) = λ y , where λ is the spectral parameter, to the equivalent first-order system
Y = ( F ( x ) + J ) Y , x ( 0 , 1 ) ,
where Y ( x ) is a column vector-function of size n,
J = k = 1 n 1 E k , k + 1 + λ E n , 1 ,
E k , j denotes the constant matrix whose entry at position ( k , j ) equals 1 and all the other entries equal zero, F ( x ) = [ f k , j ( x ) ] k , j = 1 n is the so-called associated matrix for the differential expression n ( y ) , f k , j = 0 for j > k and f k , j L 1 [ 0 , 1 ] otherwise.
Analogous results were obtained for the odd-order case in [2]. It is worth mentioning that the reduction of differential equations with regular (integrable) coefficients to the first-order systems of form (4) by introducing quasi-derivatives is well-known (see [3,4]). Weidmann [5] applied such reduction to a specific class of higher-order operators generated by matrix differential expressions, which included (1) with i 2 k = 1 , i 2 k + 1 = 0 .
Another regularization approach, based on quadratic forms, was developed by Neiman-Zade and Shkalikov [6,7] for the both ordinary and partial differential equations. Relying on the ideas of [6], Vladimirov found an associated matrix for a fourth-order operator in [8] and obtained convenient formulas for construction of associated matrices in the general case by using the coefficients of bilinear forms in [9]. Here, we focus on the bibliography for higher orders n > 2 . For n = 2 , four different regularization approaches are described in [10].
Regularization of differential equations with distribution coefficients opened a perspective of investigating solution properties and spectral theory for such equations. Relying on the reduction to the first-order system (4), Savchuk and Shkalikov [11] constructed the Birkhoff-type solutions for differential equations with distribution coefficients. Konechnaya et al. [12,13] applied the regularization approach to study the asymptotics of solutions for differential equations on the half-line ( 0 , ) as x . Vladimirov et al. [14,15] investigated oscillation properties for higher-order boundary value problems with distribution coefficients. Using the regularization methods of [1,2,9], Bondarenko [16,17,18,19] obtained a series of results on inverse spectral problems. Such problems consist of recovering coefficients of differential operators from spectral data.
Inverse spectral theory has a long history. Classical results in this field were obtained for the Sturm–Liouville operators y + q ( x ) y with integrable potentials q by using the famous transformation operator method (see the monographs [20,21,22,23] and references therein). For distributional potentials of classes W 2 α , α 1 , inverse problems also have been studied fairly completely (see, e.g., [24,25,26,27,28,29,30,31,32]). However, inverse problems for higher-order ( n > 2 ) differential operators are essentially different, because the transformation operator method is ineffective for them. Therefore, Yurko [33,34,35] developed the method of spectral mappings, which allowed him to create the inverse spectral theory for higher-order differential operators with regular coefficients. In recent years, the ideas of the method of spectral mappings were extended to operators generated by the differential expression (1) with τ n 1 = 0 (see [16,17,18,19,26,31]). In particular, in [26,31], the method of spectral mappings has been transferred to the Sturm–Liouville operators with distribution potentials of class W 2 1 [ 0 , 1 ] . In [16], the uniqueness theorems of inverse spectral problems have been proved for the higher-order differential operators with distribution coefficients of the Mirzoev–Shkalikov class [1,2] on a finite interval. In [18], differential operators on the half-line with singular coefficients of various singularity orders have been considered. For those operators, associated matrices have been constructed and the uniqueness theorems have been obtained. In [17], a constructive approach to the recovery of higher-order differential operators with distribution coefficients from the spectral data has been developed. That approach allowed the author to obtain the necessary and sufficient conditions for solvability of the inverse problem for the third-order differential equation in [19].
Spectral theory of linear differential operators has a variety of applications. The second-order Sturm–Liouville (one-dimensional Schrödinger) operators are widely used in mechanics, geophysics, acoustics, material science, and engineering. In particular, the Sturm–Liouville operators with singular potentials of class W 2 1 [ 0 , 1 ] model particle interactions in quantum mechanics [36]. The third order differential operators arise in the study of flows of thin viscous films over solid surfaces [37] and in the integration of the nonlinear Boussinesq equation by the inverse scattering transform [38]. Inverse spectral problems for the fourth-order differential operators appear in geophysics [39] and in vibration theory [40]. Some six-order eigenvalue problems that occur in mathematical models of vibrations of curved arches were considered in [41]. In recent years, the interest of scholars to spectral properties of the third- and the fourth-order differential operators with non-smooth and distribution coefficients has increased (see, e.g., [42,43,44,45,46]). Thus, the investigation of higher-order differential operators with distribution coefficients, on the one hand, is useful for the development of mathematical methods for a wider range of applied problems. On the other hand, construction of the general spectral theory for such operators is a fundamental mathematical question.
The goal of this paper is to study various associated matrices for the differential expression (1). As a simple example, consider the Sturm–Liouville equation
y q ( x ) y = λ y , x ( 0 , 1 ) .
If q = σ W 2 1 [ 0 , 1 ] , then Equation (6) is equivalent to the system (4) with the associated matrix F 1 ( x ) = σ 0 σ 2 σ . On the other hand, if q L 1 [ 0 , 1 ] , then the associated matrix F 2 ( x ) = 0 0 q 0 can be used. Consequently, the following questions arise:
  • Are there any other associated matrices and how to describe all the possible associated matrices?
  • Does the choice of the associated matrix influence the spectral characteristics, which are used in the inverse spectral theory, and the results concerning inverse problems?
For the second order, the answers are given in Section 5. However, for higher orders, the situation is much more complicated. Note that the studies of Mirzoev and Shkalikov [1,2] provide only a specific construction of the associated matrices and do not answer these questions. In the papers [16,17,18,19], inverse spectral problems are investigated also by using specific forms of associated matrices.
In this paper, we describe the family of all the matrix functions F ( x ) associated with the differential expression n ( y ) of form (1) in a certain natural class F n , which is defined in Section 2 and Section 3 for even and odd n, respectively. We choose the class F n so that the regularization matrices of Mirzoev and Shkalikov [1,2] belong to this class. Furthermore, we prove that every matrix F ( x ) of F n is associated with some differential expression n ( y ) (see Theorem 1). Moreover, we show that any two matrices F ( x ) and F ˜ ( x ) associated with the same differential expression n ( y ) generate equal domains D F and D F ˜ for solutions of the equation n ( y ) = λ y (see Theorem 2).
Next, we apply the obtained regularization results to inverse spectral problems. We study the influence of the associated matrix choice on the spectral data. As the main spectral characteristics, we use the Weyl–Yurko matrix M ( λ ) , which was introduced by Yurko [33,34,35] and used by Bondarenko [16,17,18,19] for the case of distribution coefficients. Note that the Weyl–Yurko matrix is different from the Weyl–Titchmarsch matrices, which are often used for investigation of scattering problems on infinite domains (see, e.g., [29] and the bibliography overview therein). The Weyl–Yurko matrix is closely related to several spectra (see [16] for details). In addition, we consider the discrete spectral data which consist of the Weyl–Yurko matrix poles Λ and of the so-called weight matrices N ( λ 0 ) , λ 0 Λ , which are obtained from the Laurent series of M ( λ ) . It is convenient to use the discrete data { λ 0 , N ( λ 0 ) } λ 0 Λ for constructive solution of the inverse problem (see [17]). We show that the Weyl–Yurko matrix, in general, depends on the choice of the associated matrix F ( x ) . Namely, if M ( λ ) and M ˜ ( λ ) are the Weyl–Yurko matrices that are obtained from different regularizations of the same differential expression n ( y ) , then M ( λ ) = L M ˜ ( λ ) , where L is a constant lower-triangular matrix (see Theorem 3). The discrete spectral data, on the contrary, do not depend on the associated matrix. Moreover, we prove Theorems 4 and 5 on the uniqueness of recovering the coefficients ( τ ν ) ν = 0 n 2 from the Weyl–Yurko matrix given up to a lower-triangular matrix factor L and from the spectral data { λ 0 , N ( λ 0 ) } λ 0 Λ , respectively. We emphasize that the considered inverse spectral problems are fundamentally novel comparing with the ones of [16,17,18]. The results on inverse problems in this paper are independent of the choice of the associated matrix, while in the previous studies, specific associated matrices were considered and the antiderivatives ( σ ν ) ν = 0 n 2 were reconstructed. These two types of inverse problems are different and they both generalize the classical inverse problems for differential operators with regular coefficients.
The methodology of this paper is based on the ideas of the previous studies [1,2,9,16,17,18,35]. For constructing associated matrices, we first represent the differential expression n ( y ) as a bilinear form and then use the formulas of Vladimirov [9] to obtain the matrix F ( x ) . We find this two-step approach convenient for working with various regularization matrices. The study of inverse problems relies on the method of spectral mappings [35], which is the only effective method for higher-order differential operators on a finite interval.
It is worth noting that the regularization methods in [1,2,9] were developed for differential expressions of more general forms with locally integrable or locally square integrable antiderivatives σ ν and with non-trivial functional coefficients at y ( n ) . However, in the inverse problem theory, it is natural to consider the form (1) with the coefficient 1 at y ( n ) and the zero coefficient at y ( n 1 ) , following the previous studies [34,35]. The integrability of the antiderivatives σ ν on the whole interval [ 0 , 1 ] is crucial for the asymptotics of the Birkhoff-type solutions, which are important for investigation of the inverse problems. Therefore, in this paper, we confine ourselves to differential expressions of form (1) satisfying (3) with σ ν L 2 s [ 0 , 1 ] . However, our regularization results can be transferred to the case of L 2 s , l o c ( 0 , 1 ) with necessary technical modifications.
The paper is organized as follows. Section 2 contains the main regularization results together with their proofs for the even-order case. The odd-order case is considered in Section 3. Section 4 is concerned with inverse spectral problems. In Section 5, the main results are illustrated by the examples of n = 2 and n = 4 . Section 6 contains a brief summary of the results and concluding remarks.

2. Even Order

Let n = 2 m , m N . Consider the differential expression (1) for s = 0 with the complex-valued distributional coefficients T : = ( τ ν ) ν = 0 n 1 , which belong to the space
T n : = T = ( τ ν ) ν = 0 n 1 : τ ν W 2 i ν [ 0 , 1 ] , ν = 0 , n 1 ¯ ,
where the singularity orders ( i ν ) ν = 0 n 1 are defined by (2). We will write that Σ = ( σ ν ) ν = 0 n 2 = Σ ( T ) if the relations (3) hold. Note that the antiderivaties Σ are not uniquely determined by T . Anyway, the arguments below are valid for any possible choice of Σ .

2.1. Regularization of Mirzoev and Shkalikov

In this subsection, we provide the construction of the associated matrix of Mirzoev and Shkalikov [1] for the even-order differential expression n ( y ) by the method of [9,18].
Denote by D = C 0 ( 0 , 1 ) and D the spaces of test functions and generalized functions, respectively. In other words, D is the space of infinitely differentiable functions f with supp f ( 0 , 1 ) and D is the space of continuous linear functionals on D . For f D and z D , we use the notation ( f , z ) = f z . In particular, ( f , z ) = 0 1 f ( x ) z ( x ) d x if f L 1 , l o c ( 0 , 1 ) .
By direct calculations, one can easily prove the following proposition (see, e.g., [18]).
Proposition 1.
For y W 2 m [ 0 , 1 ] , we have n ( y ) D and the following relation holds:
( n ( y ) , z ) = ( 1 ) m ( y ( m ) , z ( m ) ) + r , j = 0 m ( q r , j y ( r ) , z ( j ) ) , z D ,
where Σ = Σ ( T ) ,
[ q r , j ] r , j = 0 m = Q n ( Σ ) : = ν = 0 n 1 σ ν ( x ) χ ν , i ν ,
χ ν , i = [ χ ν , i ; r , j ] r , j = 0 m , χ 2 k , i ; s + k , i s + k = C i s , s = 0 , i ¯ , χ 2 k + 1 , i ; s + k , i + 1 s + k = C i + 1 s 2 C i s 1 , s = 0 , i + 1 ¯ ,
all the other entries χ ν , i ; r , j equal zero, and C i s = i ! s ! ( i s ) ! are the binomial coefficients, C i 1 : = 0 .
Using the entries of the matrix function Q ( x ) = [ q r , j ] r , j = 0 m defined by (8), construct the matrix function F ( x ) = [ f k , j ] k , j = 1 n by the rule F = S n ( Q ) given by the formulas
f m , j : = ( 1 ) m + 1 q j 1 , m , j = 1 , m ¯ , f k , m + 1 : = ( 1 ) k + 1 q m , 2 m k , k = m + 1 , 2 m ¯ , f k , j : = ( 1 ) k + 1 q j 1 , 2 m k + ( 1 ) m + k q j 1 , m q m , 2 m k , k = m + 1 , 2 m ¯ , j = 1 , m ¯ , f k , j : = 0 , k < m or j > m + 1 or ( k , j ) = ( m , m + 1 ) .
Obviously, f k , j L 1 [ 0 , 1 ] , k , j = 1 , n ¯ . Define the quasi-derivatives
y [ 0 ] : = y , y [ k ] = ( y [ k 1 ] ) j = 1 k f k , j y [ j 1 ] , k = 1 , n ¯ ,
and the domain
D F = { y : y [ k ] W 1 1 [ 0 , 1 ] , k = 0 , n 1 ¯ } W 2 m [ 0 , 1 ] .
The results of [1] imply the following proposition on the regularization of the differential expression n ( y ) of even order.
Proposition 2.
For any y D F , the function n ( y ) is regular and n ( y ) = y [ n ] .
Note that the matrix function F ( x ) constructed by the Formulas (10) coincide with the associated matrix of [1]. In order to obtain F ( x ) , we first represent n ( y ) in terms of the bilinear form (7) with the matrix Q ( x ) and then use Q ( x ) to find F ( x ) . The Formulas (10) for constructing F ( x ) by using Q ( x ) have been obtained in [9]. For our purposes, it is convenient to use the bilinear form with the matrix Q ( x ) as an intermediate step.

2.2. Class F n

In this subsection, we define the class F n , which is the class of matrix functions F ( x ) associated with differential expressions of form (1), as it will be shown in Section 2.3. Here, we study the properties of the quasi-derivatives and of the domain D F generated by F F n .
Define the spaces of matrix functions
Q n : = { Q ( x ) = [ q r , j ] r , j = 0 m : q r , j L 1 [ 0 , 1 ] , r , j = 0 , m 1 ¯ , q r , m , q m , r L 2 [ 0 , 1 ] , r = 0 , m 1 ¯ , q m , m = 0 } , F n : = { F ( x ) = [ f k , j ] k , j = 1 n : f k , j L 1 [ 0 , 1 ] , f m , j , f k , m + 1 L 2 [ 0 , 1 ] , k = m + 1 , n ¯ , j = 1 , m ¯ , f k , j = 0 , k < m or j > m + 1 or ( k , j ) = ( m , m + 1 ) } .
The structure of the spaces Q n and F n can be symbolically presented as follows:
n = 2 : Q = L 1 L 2 L 2 0 , F = L 2 0 L 1 L 2 , n = 4 : Q = L 1 L 1 L 2 L 1 L 1 L 2 L 2 L 2 0 , F = 0 0 0 0 L 2 L 2 0 0 L 1 L 1 L 2 0 L 1 L 1 L 2 0 , n = 6 : Q = L 1 L 1 L 1 L 2 L 1 L 1 L 1 L 2 L 1 L 1 L 1 L 2 L 2 L 2 L 2 0 , F = 0 0 0 0 0 0 0 0 0 0 0 0 L 2 L 2 L 2 0 0 0 L 1 L 1 L 1 L 2 0 0 L 1 L 1 L 1 L 2 0 0 L 1 L 1 L 1 L 2 0 0 .
Clearly, in the Mirzoev–Shkalikov regularization described in Section 2.1, Q Q n and F F n . Furthermore, one can easily check that the mapping S n : Q n F n , which is defined by the Formulas (10), is a bijection. The inverse mapping Q = S n 1 ( F ) is given by the formulas
q j 1 , m : = ( 1 ) m + 1 f m , j , j = 1 , m ¯ , q m , 2 m k : = ( 1 ) k + 1 f k , m + 1 , k = m + 1 , 2 m ¯ , q j 1 , 2 m k : = ( 1 ) k + 1 ( f k , j f k , m + 1 f m , j ) , k = m + 1 , 2 m ¯ , j = 1 , m ¯ , q m , m : = 0 .
For each fixed F F n , we can define the quasi-derivatives y [ k ] by (11) and the domain D F by (12). Let us study some of their properties.
Proposition 3.
Let Q ( x ) be a matrix function of Q n and F : = S n ( Q ) . Then, for any y D F , the following relation holds:
( y [ n ] , z ) = ( 1 ) m ( y ( m ) , z ( m ) ) + r , j = 0 m ( q r , j y ( r ) , z ( j ) ) , z D .
Proposition 3 follows from the general construction of Vladimirov [9]. The proof can be found in [18]. In particular, if the matrix function Q ( x ) is constructed by (8), then Propositions 1 and 3 together imply Proposition 2.
Note that the domain D F is a Banach space with the norm
y D F : = k = 0 n 1 y [ k ] W 1 1 [ 0 , 1 ] .
Recall that
y W p s [ 0 , 1 ] = k = 0 s y ( k ) L p [ 0 , 1 ] , y L p [ 0 , 1 ] = 0 1 | y ( x ) | p d x 1 / p .
Lemma 1.
For every F F n , the Banach space D F is continuously and densely embedded in W 2 m [ 0 , 1 ] .
Proof. 
For s = m , n 1 ¯ , consider the Banach spaces
D F s : = { y : y [ k ] W 1 1 [ 0 , 1 ] , k = 0 , s ¯ }
with the corresponding norms
y D F s : = k = 0 s y [ k ] W 1 1 [ 0 , 1 ] .
Clearly, D F = D F n 1 . Let us prove that (i) D F m is continuously and densely embedded in W 2 m [ 0 , 1 ] , (ii) D F s is continuously and densely embedded in D F s 1 [ 0 , 1 ] for s = m + 1 , n 1 ¯ . Obviously, the assertions (i) and (ii) together yield the claim of the lemma.
Let y D F m . In view of (11) and the structure of F F n , we have
y ( k ) = y [ k ] W 1 1 [ 0 , 1 ] , k = 0 , m 1 ¯ , y ( m ) = y [ m ] + j = 1 m f m , j y ( j 1 ) L 2 [ 0 , 1 ] .
Therefore, y W 2 m [ 0 , 1 ] . Moreover, we have
y [ k ] L 2 [ 0 , 1 ] y [ k ] W 1 1 [ 0 , 1 ] , k = 0 , m ¯ , y ( m ) L 2 [ 0 , 1 ] y [ m ] L 2 [ 0 , 1 ] + j = 1 m f m , j L 2 [ 0 , 1 ] y ( j 1 ) W 1 1 [ 0 , 1 ] y [ m ] W 1 1 [ 0 , 1 ] + C j = 0 m 1 y [ j ] W 1 1 [ 0 , 1 ] .
Hence
y W 2 m [ 0 , 1 ] = k = 0 m y ( k ) L 2 [ 0 , 1 ] C k = 0 m y [ k ] W 1 1 [ 0 , 1 ] = y D F m .
Thus, the embedding D F m W 2 m [ 0 , 1 ] is continuous.
Next, let us construct a sequence { y r } r 1 D F m that approximates a function y W 2 m [ 0 , 1 ] . Note that, for y W 2 m [ 0 , 1 ] , the quasi-derivatives y [ k ] are correctly defined for k = 0 , m ¯ , and y [ m ] L 2 [ 0 , 1 ] . Since W 1 1 [ 0 , 1 ] is dense in L 2 [ 0 , 1 ] , there exists a sequence { h r } r 1 in W 1 1 [ 0 , 1 ] such that h r y [ m ] L 2 [ 0 , 1 ] 0 as r . The relations (7) for k = 0 , m ¯ can be rewritten as the first-order system
Y m = ( F m ( x ) + J m ) Y m + y [ m ] ( x ) e m , x ( 0 , 1 ) ,
where Y m is the column vector [ y [ j ] ] j = 0 m 1 , F m ( x ) and J m are the ( m × m ) upper left submatrices of F ( x ) and J, respectively, the matrix J was defined in (5), e m is the m-th column of the unit matrix. Let us consider the analogous system
Y m , r = ( F m ( x ) + J m ) Y m , r + h r ( x ) e m , x ( 0 , 1 ) ,
with respect to an unknown vector Y m , r ( x ) , r 1 . The initial value problem for (14) with the initial condition Y m , r ( 0 ) = Y m ( 0 ) has the unique solution Y m , r ( x ) = [ y m , r , k ( x ) ] k = 0 m 1 such that y m , r , k = y m , r , 0 [ k ] W 1 1 [ 0 , 1 ] for k = 0 , m 1 ¯ , h r = y m , r , 0 [ m ] W 1 1 [ 0 , 1 ] , and y m , r , k y [ k ] L 2 [ 0 , 1 ] 0 as r . In other words, y r : = y m , r , 0 D F m and y r y W 2 m [ 0 , 1 ] 0 as r . Hence, D F m is dense in W 2 m [ 0 , 1 ] .
Now, consider the embedding D F s D F s 1 , which is, obviously, continuous: y D F s 1 y D F s . Let y D F s 1 . Then, it follows from (7) that y [ s ] L 1 [ 0 , 1 ] . Due to the density of W 1 1 [ 0 , 1 ] in L 1 [ 0 , 1 ] , there exists a sequence { h r } r 1 W 1 1 [ 0 , 1 ] such that h r y [ s ] L 1 [ 0 , 1 ] 0 as r . Similarly to (14), we construct the system
Y s , r = ( F s ( x ) + J s ) Y s , r + h r ( x ) e s , x ( 0 , 1 ) ,
and prove that the first entry y r : = y s , r , 0 of its solution Y s , r belongs to D F s and y r y D F s 1 0 as r . Thus, the embedding D F s D F s 1 is dense, which completes the proof. □

2.3. Main Results and Proofs

In this subsection, we obtain the main results of this paper on the regularization of even order differential expressions. Namely, we construct the class of all the matrices F ( x ) F n associated with the differential expression n ( y ) with fixed coefficients T = ( τ ν ) ν = 0 n 1 and study the properties of this class. We begin with the rigorous definition.
Definition 1.
The matrix function F ( x ) F n is called associated with the differential expression n ( y ) if n ( y ) = y [ n ] for every y D F , where the quasi-derivatives y [ k ] and D F are defined by (11) and (12), respectively, by using the entries f k , j of F ( x ) .
For each T = ( τ ν ) ν = 0 n 1 T n , denote by F ( T ) the set of all the associated matrices for the differential expression n ( y ) with the coefficients T . It can be easily shown that F ( T ) F ( T ˜ ) = if T T ˜ . Here, we mean that T = T ˜ for T = ( τ ν ) ν = 0 n 1 and T ˜ = ( τ ˜ ν ) ν = 0 n 1 of class T n if τ ν = τ ˜ ν in W 2 i ν [ 0 , 1 ] for ν = 0 , n 1 ¯ . Indeed, if the matrix F F n is associated with two coefficient vectors T and T ˜ , then we have n ( y ) = ˜ n ( y ) = y [ n ] for all y D F . By virtue of Lemma 1, D F is dense in W 2 m [ 0 , 1 ] . Consequently, n ( y ) = ˜ n ( y ) for all y W 2 m [ 0 , 1 ] , which implies T = T ˜ .
By virtue of Proposition 2, for every T T n , at least one associated matrix F ( x ) exists, so F ( T ) . This matrix is constructed as F = S n ( Q ) , where Q = Q n ( Σ ( T ) ) (see (3) and (8)). However, there exist other associated matrices.
Theorem 1.
Every matrix F ( x ) of class F n is associated with some differential expression n ( y ) , whose coefficients T = ( τ ν ) ν = 0 n 1 belong to T n . In other words,
F n = T T n F ( T ) .
In order to prove Theorem 1, we need some auxiliary lemmas. Consider an arbitrary matrix function F ( x ) of F n . Find Q : = S n 1 ( F ) Q n . Consider the constant matrices χ ν , i of size ( m + 1 ) × ( m + 1 ) defined by (9) for ν = 0 , n 1 ¯ , i = 0 , i ν ¯ . For example, for n = 2 and n = 4 , we have
n = 2 : χ 0 , 0 = 1 0 0 0 , χ 0 , 1 = 0 1 1 0 , χ 1 , 0 = 0 1 1 0 , n = 4 : χ 0 , 0 = 1 0 0 0 0 0 0 0 0 , χ 0 , 1 = 0 1 0 1 0 0 0 0 0 , χ 0 , 2 = 0 0 1 0 2 0 1 0 0 , χ 1 , 0 = 0 1 0 1 0 0 0 0 0 , χ 1 , 1 = 0 0 1 0 0 0 1 0 0 , χ 2 , 0 = 0 0 0 0 1 0 0 0 0 , χ 2 , 1 = 0 0 0 0 0 1 0 1 0 , χ 3 , 0 = 0 0 0 0 0 1 0 1 0 .
Lemma 2.
The matrices χ ν , i , ν = 0 , n 1 ¯ , i = 0 , i ν ¯ , form a basis in the linear space
M n : = [ a r , j ] r , j = 0 m : a r , j R , a m , m = 0 .
Proof. 
Due to (9), every matrix χ ν , i has non-zero entries χ ν , i ; r , j only on the diagonal r + j = d , where d = ν + i . In general, for any diagonal with number d = 0 , 1 , 2 , , n 1 , we have the number of the corresponding matrices χ ν , i equal to the length of this diagonal. Moreover, these matrices are linearly independent. This concludes the proof. □
It follows from Lemma 2 that any matrix function Q Q n admits the unique representation
Q ( x ) = ν = 0 n 1 i = 0 i ν τ ν , i ( x ) χ ν , i ,
where
τ ν , i ν L 2 [ 0 , 1 ] , τ ν , i L 1 [ 0 , 1 ] , ν = 0 , n 1 ¯ , i = 0 , i ν 1 ¯ .
Note that the right-hand side of (8) is the special case of (16) with τ ν , i ν = σ ν and τ ν , i = 0 for i < i ν . Direct calculations prove the following lemma.
Lemma 3.
Let Q ( x ) be given by Formula (16), where τ ν , i are arbitrary functions satisfying (17). Then, for any y W 2 m [ 0 , 1 ] , the relation (7) holds, where the coefficients T = ( τ ν ) ν = 0 n 1 T n of n ( y ) are defined as follows:
τ ν : = i = 0 i ν ( 1 ) i τ ν , i ( i ) , ν = 0 , n 1 ¯ .
Proof of Theorem 1.
Let F F n and Q = S n 1 ( F ) . Then, Q Q n , and so Q ( x ) admits the unique representation (16). Using the coefficients τ ν , i of this representation, find τ ν by (18) and consider the differential expression n ( y ) with the coefficients T = ( τ ν ) ν = 0 n 1 T n . Using Lemma 3 and Proposition 3, we conclude that n ( y ) = y [ n ] in D for any y D F . Hence F F ( T ) . □
Corollary 1.
The set F ( T ) of associated matrices can be described constructively. Let T = ( τ ν ) ν = 0 n 1 T n be fixed. Choose arbitrary functions τ ν , i L 1 [ 0 , 1 ] and constants c ν , i C for ν = 0 , n 1 ¯ , i = 0 , i ν 1 ¯ . Taking Formula (18) into account, find
τ ν , i ν : = ( 1 ) i ν τ ν i = 0 i ν 1 ( 1 ) i τ ν , i ( i ) ( i ν ) + i = 0 i ν 1 c ν , i x i , ν = 0 , n 1 ¯ .
The notation y ( i ) in (19) is used for a fixed antiderivative of order i. For example, the choice of the antiderivative can be fixed by the conditions
0 1 x k w ( x ) d x = 0 , k = 0 , i 1 ¯ , w ( x ) = y ( i ) ( x ) .
Clearly, τ ν , i ν L 2 [ 0 , 1 ] . Then, using the functions τ ν , i , ν = 0 , n 1 ¯ , i = 0 , i ν ¯ , find Q ( x ) by (16) and F = S n ( Q ) . Denote the matrix F ( x ) constructed by this algorithm as F ( T , ( τ ν , i , c ν , i ) ) . The functions τ ν , i for i < i ν and the constants c ν , i can be chosen arbitrarily as parameters. Thus,
F ( T ) = F ( T , ( τ ν , i , c ν , i ) ) : τ ν , i L 1 [ 0 , 1 ] , c ν , i C , ν = 0 , n 2 ¯ , i = 0 , i ν 1 ¯ .
Theorem 2.
Let T T n be fixed. Then, D F = D F ˜ for any F , F ˜ F ( T ) .
Proof. 
Let T , F ( x ) = [ f k , j ] k , j = 1 n , and F ˜ ( x ) = [ f ˜ k , j ] k , j = 1 n satisfy the hypothesis of the theorem. In this proof, we use the notations y F [ k ] and y F ˜ [ k ] for the quasi-derivatives defined by Formulas (11) by the entries of the matrices F ( x ) and F ˜ ( x ) , respectively. Denote f ^ k , j : = f k , j f ˜ k , j and
y ^ [ j ] : = 0 , j = 0 , m 1 ¯ , y ^ [ k ] : = ( y ^ [ k 1 ] ) j = 1 m f ^ k , j y ( j 1 ) f ^ k , m + 1 y F [ m ] f ˜ k , m + 1 y ^ [ m ] , k = m , n ¯ .
In order to prove the theorem, it is sufficient to show that, for any y W 2 m [ 0 , 1 ] ,
y ^ [ j ] W 1 1 [ 0 , 1 ] , j = m , n 1 ¯ .
Indeed, if y D F , then y F ˜ [ k ] = y F [ k ] + y ^ [ k ] . Consequently, (22) implies that y F ˜ [ k ] W 1 1 [ 0 , 1 ] , k = 0 , n 1 ¯ , so y D F ˜ .
Put Q : = S n 1 ( F ) , Q ˜ : = S n 1 ( F ˜ ) , and q ^ r , j : = q r , j q ˜ r , j . Let y W 2 m [ 0 , 1 ] . Using (10) and (21), we derive
y ^ [ m ] = j = 1 m f ^ m , j y ( j 1 ) = j = 1 m 1 q ^ j , m y ( j ) , y ^ [ k ] = ( y ^ [ k 1 ] ) j = 1 m ( f ^ k , j f ^ k , m + 1 f m , j f ˜ k , m + 1 f ^ m , j ) y ( j 1 ) f ^ k , m + 1 y ( m ) = ( y ^ [ k 1 ] ) + ( 1 ) k j = 0 m q ^ j , 2 m k y ( j ) , k = m + 1 , n ¯ .
Hence, for z D and k = 0 , m 1 ¯ , we have
( y ^ [ m + k ] , z ) = ( y ^ [ m + k 1 ] , z ) + ( 1 ) m + k j = 0 m ( q ^ j , m k y ( j ) , z ) .
By induction, we obtain
( y ^ [ m + k ] , z ) = ( 1 ) m + k l = 0 k j = 0 m ( q ^ j , m k + l y ( j ) , z ( l ) ) .
The change of summation indices implies
( y ^ [ m + k ] , z ) = ( 1 ) m + k r = 0 m j = m k m ( q ^ r , j y ( r ) , z ( j ( m k ) ) ) .
By virtue of Proposition 3 and Corollary 1, for the both matrices Q ( x ) and Q ˜ ( x ) , the relation (7) holds with the same differential expression n ( y ) . Hence
r , j = 0 m ( q ^ r , j y ( r ) , g ( j ) ) = 0 , y W 2 m [ 0 , 1 ] , g D , r = 0 m j = m k m ( q ^ r , j y ( r ) , g ( j ) ) = r = 0 m j = 0 m k 1 ( q ^ r , j y ( r ) , g ( j ) ) .
It can be shown that the relation (24) is valid not only for g D but also for g = z ( s ) , z D , s N , where
z ( 0 ) : = z , z ( s ) : = 0 x z ( ( s 1 ) ) ( t ) d t .
Antiderivatives of D -functions are infinitely differentiable and have a support [ a , b ] ( 0 , 1 ] . In other words, the derivatives g ( k ) ( 1 ) , k = 0 , 1 , can be non-zero. In order to overcome this difficulty, one can extend the interval ( 0 , 1 ) to ( 0 , 1 + ε ) , ε > 0 , put τ ^ ν = 0 , q ^ r , j = 0 on ( 1 , 1 + ε ) , extend y and g so that y W 2 m [ 0 , 1 + ε ] and g C 0 ( 0 , 1 + ε ) , respectively. Consequently, we can apply the relation (24) to the function g : = z ( ( m k ) ) in (23):
( y ^ [ m + k ] , z ) = ( 1 ) m + k + 1 r = 0 m j = 0 m k 1 ( q ^ r , j y ( r ) , z ( j ( m k ) ) ) .
Integration by parts implies
( y ^ [ m + k ] , z ) = r = 0 m j = 0 m k 1 ( 1 ) j + 1 ( q ^ r , j y ( r ) ) j ( m k ) , z ,
where
y 0 : = y , y s ( x ) = x 1 y ( s 1 ) ( t ) d t , s 1 .
Hence
y ^ [ m + k ] = r = 0 m j = 0 m k 1 ( 1 ) j + 1 ( q ^ r , j y ( r ) ) j ( m k ) , k = 0 , m 1 ¯ .
Recall that Q , Q ˜ Q n and y W 2 m [ 0 , 1 ] . Therefore, q ^ r , j y ( r ) L 1 [ 0 , 1 ] for all r and j in (25). Furthermore, j ( m k ) 1 . This implies (22) and so concludes the proof. □

3. Odd Order

In this section, we provide the regularization results, analogous to the ones in Section 2, for odd orders.
Consider the differential expression (1) of order n = 2 m + 1 , m N , with T : = ( τ ν ) ν = 0 n 1 T n , where
T n : = T = ( τ ν ) ν = 0 n 1 : τ ν W 1 i ν [ 0 , 1 ] , ν = 0 , n 1 ¯ ,
and the singularity orders ( i ν ) ν = 0 n 1 are defined by (2). In other words, the relations (3) are valid for some σ ν L 1 [ 0 , 1 ] . If y W 1 m [ 0 , 1 ] , then n ( y ) D and the following relation holds:
( n ( y ) , z ) = ( 1 ) m ( y ( m + 1 ) , z ( m ) ) + r , j = 0 m ( q r , j y ( r ) , z ( j ) ) , z D ,
where Q ( x ) = [ q r , j ] r , j = 0 m and the matrices χ ν , i are defined by (8) and (9), respectively.
Construct the matrix function F ( x ) = [ f k , j ] k , j = 1 n by the rule F = S n ( Q ) given by the formulas
f k , j : = ( 1 ) k q j 1 , 2 m + 1 k , k = m + 1 , 2 m + 1 ¯ , j = 1 , m + 1 ¯ , f k , j : = 0 , otherwise .
For this matrix F ( x ) , Proposition 2 holds (see [2] and Theorem 2.2 in [18]).
Define the spaces of matrix functions
Q n : = { Q ( x ) = [ q r , j ] r , j = 0 m : q r , j L 1 [ 0 , 1 ] , r , j = 0 , m ¯ } , F n : = { F ( x ) = [ f k , j ] k , j = 1 n : f k , j L 1 [ 0 , 1 ] , k = m + 1 , 2 m + 1 ¯ , j = 1 , m + 1 ¯ , f k , j = 0 , k < m + 1 or j > m + 1 } .
The structure of the spaces F n can be symbolically presented as follows:
n = 3 : F = 0 0 0 L 1 L 1 0 L 1 L 1 0 , n = 5 : F = 0 0 0 0 0 0 0 0 0 0 L 1 L 1 L 1 0 0 L 1 L 1 L 1 0 0 L 1 L 1 L 1 0 0 .
As well as in the even-order case, the mapping S n : Q n F n defined by (27) is a bijection. The results of Mirzoev and Shkalikov [2] imply that the matrix function F = S n ( Q n ( Σ ) ) , where Σ = Σ ( T ) , is associated with the odd-order differential expression n ( y ) in the sense of Definition 1.
Similarly to the even-order case, denote by F ( T ) the set of associated matrices for n ( y ) with the coefficients T . Theorem 1 is also valid for odd n. Indeed, one can easily show that the matrices χ ν , i , ν = 0 , n 1 ¯ , i = 0 , i ν ¯ form a basis in the linear space
M n : = [ a r , j ] r , j = 0 m : a r , j R .
Note that the only difference in the matrices χ ν , i between the cases n = 2 m and n = 2 m + 1 is the additional matrix with the unit entry a m , m for the odd order. For example, for n = 3 , we have the following matrices (compare with (15)):
χ 0 , 0 = 1 0 0 0 , χ 0 , 1 = 0 1 1 0 , χ 1 , 0 = 0 1 1 0 , χ 2 , 0 = 0 0 0 1 .
Any matrix Q Q n admits the unique representation (16), where τ ν , i L 1 [ 0 , 1 ] , ν = 0 , n 1 ¯ , i = 0 , i ν ¯ . On the other hand, if Q ( x ) is given by (16), then, for any y W 1 m [ 0 , 1 ] , the relation (26) holds, where the coefficients T = ( τ ν ) ν = 0 n 1 are defined by (18) and T T n . This implies the assertion of Theorem 1 for odd orders.
The set F ( T ) is described by Formula (20). The functions τ ν , i ν constructed by (19) belong to L 1 [ 0 , 1 ] , ν = 0 , n 1 ¯ . Theorem 2 also holds for odd orders.
Remark 1.
In the inverse problem theory (see [16,17,18,19]), differential expressions of form (1) with τ n 1 = 0 are considered. Denote
T n 0 : = T = ( τ ν ) ν = 0 n 1 T n : τ n 1 = 0 .
Since the set F ( T ) of associated matrices for T T n 0 is constructed according to Corollary 1, we obtain
T T n 0 F ( T ) = F n 0 ,
where
F n 0 : = F F n : t r a c e ( F ) = 0
for both even and odd values of n.

4. Inverse Problems

In this section, inverse spectral problems are investigated for the differential equation generated by the expression n ( y ) . We define the spectral characteristics, study their dependence on the associated matrix, and prove the uniqueness theorems for the inverse problems (Theorems 4 and 5). In addition, we compare our novel results with the known uniqueness theorems from the previous studies [16,17,18].
Consider the differential expression n ( y ) with coefficients T T n 0 (i.e., τ n 1 = 0 ). Let F ( x ) be a fixed associated matrix for n ( y ) , that is, F F ( T ) F n 0 . Define the quasi-derivatives y [ k ] and the domain D F by (11) and (12), respectively. According to Definition 1, for any y D F , we have n ( y ) L 1 [ 0 , 1 ] . Below, we call a function y a solution of the equation
n ( y ) = λ y , x ( 0 , 1 ) ,
if y D F and the relation (28) holds a.e. on ( 0 , 1 ) .
Denote by { C k ( x , λ ) } k = 1 n and by { Φ k ( x , λ ) } k = 1 n the solutions of Equation (28) satisfying the initial conditions
C k [ j 1 ] ( 0 , λ ) = δ k , j , j = 1 , n ¯ ,
and the boundary conditions
Φ k [ j 1 ] ( 0 , λ ) = δ k , j , j = 1 , k ¯ , Φ k [ n s ] ( 1 , λ ) = 0 , s = k + 1 , n ¯ ,
respectively, where δ k , j is the Kronecker delta. It has been shown in [16] that the initial value problem solutions C k ( x , λ ) exist and are unique. The boundary value problem solutions Φ k ( x , λ ) are uniquely defined for all complex λ except for a countable set. Moreover, for each fixed x [ 0 , 1 ] and j = 1 , n ¯ , the quasi-derivatives C k [ j 1 ] ( x , λ ) are entire in λ and Φ k [ j 1 ] ( x , λ ) are meromorphic in λ . Furthermore, the matrix functions C ( x , λ ) = [ C k [ j 1 ] ( x , λ ) ] j , k = 1 n and Φ ( x , λ ) = [ Φ k [ j 1 ] ( x , λ ) ] j , k = 1 n are related as follows:
Φ ( x , λ ) = C ( x , λ ) M ( λ ) ,
where the matrix function M ( λ ) = [ M j , k ( λ ) ] j , k = 1 n is called the Weyl–Yurko matrix of Equation (28).
It can be shown similarly to [16,18,35] that M ( λ ) is a unit lower-triangular matrix. Furthermore, its non-trivial entries M j , k ( λ ) for j > k are meromorphic functions with countable sets of poles. More precisely, the poles of M j , k ( λ ) coincide with eigenvalues of the boundary value problem L k for Equation (28) with the boundary conditions
y [ j 1 ] ( 0 ) = 0 , j = 1 , k ¯ , y [ n s ] ( 1 ) = 0 , s = k + 1 , n ¯ ,
which correspond to (30).
Now, suppose that we have two associated matrices F ( x ) and F ˜ ( x ) of F ( T ) . Denote the quasi-derivatives constructed by F ( x ) and F ˜ ( x ) by y F [ k ] and y F ˜ [ k ] , respectively. We agree that, if a certain object α is related to F ( x ) , then the symbol α ˜ with tilde will denote the analogous object related to F ˜ ( x ) . The following theorem establishes the relation between the Weyl–Yurko matrices corresponding to different associated matrices.
Theorem 3.
Suppose that T T n 0 , F , F ˜ F ( T ) . Then M ( λ ) = L M ˜ ( λ ) , where L L n ,
L n = L = [ l j , k ] j , k = 1 n C n × n : l j , k = δ j , k for j n m or k > m .
Thus, the matrices of L n have the following structure:
n = 3 : 1 0 0 0 1 0 0 1 , n = 4 : 1 0 0 0 0 1 0 0 1 0 0 1 .
Proof of Theorem 3.
By virtue of Theorem 2, D F = D F ˜ . Therefore, it can be easily seen that
y D F : y F [ j 1 ] ( 0 ) = δ k , j , j = 1 , k ¯ = y D F ˜ : y F ˜ [ j 1 ] ( 0 ) = δ k , j , j = 1 , k ¯ ,
y D F : y F [ j 1 ] ( 1 ) = 0 , j = 1 , k ¯ = y D F ˜ : y F ˜ [ j 1 ] ( 1 ) = 0 , j = 1 , k ¯
for each k = 1 , n ¯ . Hence Φ ( x , λ ) Φ ˜ ( x , λ ) . For C ( x , λ ) and C ˜ ( x , λ ) , the relations (32) and (33) imply that
C ˜ k ( x , λ ) = C j ( x , λ ) + j = k + 1 n l j , k C j ( x , λ ) , l j , k C ,
that is, C ˜ ( x , λ ) C ( x , λ ) L , where L = [ l j , k ] j , k = 1 n is a unit lower-triangular matrix. Using the special structure of the matrices F ( x ) and F ˜ ( x ) of class F n , namely, the relations f k , j = 0 for k < n m 1 or j > m + 1 , we prove that L L n . Using the relations Φ ˜ ( x , λ ) Φ ( x , λ ) , C ˜ ( x , λ ) C ( x , λ ) L , and (31), we arrive at the assertion of the theorem. □
The inverse result is also valid:
Theorem 4.
Suppose that T = ( τ ν ) ν = 0 n 1 and T ˜ = ( τ ˜ ν ) ν = 0 n 1 belong to T n 0 , M ( λ ) and M ˜ ( λ ) are the Weyl–Yurko matrices defined by the associated matrices F F ( T ) and F ˜ F ( T ˜ ) , respectively, and M ( λ ) = L M ˜ ( λ ) , where L L n . Then T = T ˜ . Thus, the Weyl–Yurko matrix M ( λ ) known up to a multiplier L L n uniquely specifies the coefficients T T n 0 of the differential expression n ( y ) .
Proof. 
Introduce the matrix of spectral mappings
P ( x , λ ) = Φ ( x , λ ) ( Φ ˜ ( x , λ ) ) 1 .
Using (31) and the relation M ( λ ) = L M ˜ ( λ ) , we obtain
P ( x , λ ) = C ( x , λ ) M ( λ ) ( M ˜ ( λ ) ) 1 ( C ˜ ( x , λ ) ) 1 = C ( x , λ ) L ( C ˜ ( x , λ ) ) 1 .
Hence P ( x , λ ) is entire in λ for each fixed x [ 0 , 1 ] . Then, similarly to the proof of Theorem 2 in [16], we show that, for each fixed x [ 0 , 1 ) , P ( x , λ ) is a constant unit triangular matrix P ( x ) = [ p k , j ( x ) ] k , j = 1 n , which satisfies the relation
P ( x ) + P ( x ) F ˜ ( x ) = F ( x ) P ( x ) , x ( 0 , 1 ) .
Let us prove that (34) implies T = T ˜ . For definiteness, suppose that n = 2 m . The odd order case can be investigated analogously. By considering the first ( m 1 ) rows and the last ( m 1 ) columns of (34), we deduce that p k , j = 0 ( j < k ) for k = 1 , m ¯ and j = m + 1 , 2 m ¯ , respectively. From the relations for k = m , 2 m ¯ and j = 1 , m + 1 ¯ , we derive
p k , j + p k , j 1 + ( f ˜ k , j f ˜ k , m + 1 f ˜ m , j ) = p k + 1 , j + ( f k , j f k , m + 1 f m , j ) , k = m + 1 , 2 m ¯ , j = 1 , m ¯ .
Using (10), we transform (35) into the system
r l , s + r l 1 , s + r l , s 1 = q ^ l , s , l , s = 0 , m ¯ ,
r 1 , s = r s , 1 = r m , s = r s , m = 0 , s = 0 , m ¯ ,
where q ^ l , s = q l , s q ˜ l , s , [ q l , s ] l , s = 0 m : = S n 1 ( F ) , [ q ˜ l , s ] l , s = 0 m : = S n 1 ( F ˜ ) , and r j 1 , 2 m k : = ( 1 ) k + 1 p k , j , k = m , 2 m ¯ , j = 1 , m + 1 ¯ .
For y W 2 m [ 0 , 1 ] and z D , using (36), we derive
l , s = 0 m ( q ^ l , s y ( l ) , z ( s ) ) = l , s ( r l , s y ( l ) , z ( s ) ) + l , s ( r l 1 , s y ( l ) , z ( s ) ) + l , s ( r l , s 1 y ( l ) , z ( s ) ) = l , s ( ( r l , s y ( l ) ) , z ( s ) ) l , s ( r l , s y ( l + 1 ) , z ( s ) ) + l , s ( r l , s y ( l + 1 ) , z ( s ) ) + l , s ( r l , s y ( l ) , z ( s + 1 ) ) = 0 .
Here, we have applied the index shift, the integration by parts and have taken the boundary conditions (37) into account. Using Lemma 3 and Corollary 1, we obtain the relations
( n ( y ) , z ) = ( 1 ) m ( y ( m ) , z ( m ) ) + l , s = 0 m ( q l , s y ( l ) , z ( s ) ) , ( ˜ n ( y ) , z ) = ( 1 ) m ( y ( m ) , z ( m ) ) + l , s = 0 m ( q ˜ l , s y ( l ) , z ( s ) ) .
Combining them with (38), conclude that ( n ( y ) , z ) = ( ˜ n ( y ) , z ) for all y W 2 m [ 0 , 1 ] and z D . This implies T = T ˜ . □
Let us compare Theorem 4 with the following uniqueness result of [16]. Introduce the space
S n 0 = Σ = ( σ ν ) ν = 0 n 1 : σ ν L 2 [ 0 , 1 ] if n is even , σ ν L 1 [ 0 , 1 ] if n is odd , ν = 0 , n 2 ¯ , σ n 1 = 0 .
Proposition 4
([16]). Suppose that Σ = ( σ ν ) ν = 0 n 1 and Σ ˜ = ( σ ˜ ν ) ν = 0 n 1 belong to S n 0 , F = S n ( Q n ( Σ ) ) , F ˜ = S n ( Q n ( Σ ˜ ) ) , M ( λ ) and M ˜ ( λ ) are the Weyl–Yurko matrices of F and F ˜ , respectively, and M ( λ ) = M ˜ ( λ ) . Then Σ = Σ ˜ , that is, σ ν ( x ) = σ ˜ ν ( x ) a.e. on ( 0 , 1 ) .
Note that, in Proposition 4, the fixed Mirzoev–Shkalikov construction of the associated matrices is assumed. Then, the antiderivatives ( σ ν ) ν = 0 n 2 are uniquely determined by the Weyl–Yurko matrix. Theorem 4 corresponds to a different inverse problem. If the Weyl–Yurko matrix is known up to a factor L L n , then it uniquely specifies the distribution coefficients ( τ ν ) ν = 0 n 2 independently of the choice of the associated matrix. These are two different results. It is worth mentioning that, in [16], Proposition 4 has been proved for a more general type of separated boundary conditions than (30). The conditions (30) have the lowest possible orders, so they are the most simple ones. In other cases, boundary conditions contain constant coefficients, which either can be recovered or have to be given a priori (see [18]). For the general separated boundary conditions, Theorem 4 does not hold (see the example in Section 5.2).
Next, consider the inverse problem by the discrete spectral data, which was studied in [17]. Denote by Λ the poles of the Weyl–Yurko matrix M ( λ ) . We will write M W if all the poles of M ( λ ) are simple. Then, the Laurent series has the form
M ( λ ) = M 1 ( λ 0 ) λ λ 0 + M 0 ( λ 0 ) + M 1 ( λ 0 ) ( λ λ 0 ) + , λ 0 Λ .
Define the weight matrices as follows:
N ( λ 0 ) : = M 0 ( λ 0 ) 1 M 1 ( λ 0 ) , λ 0 Λ .
In view of Theorem 3, the weight matrices are uniquely specified by the coefficients T and do not depend on the associated matrix F F ( T ) . The inverse is also true:
Theorem 5.
Suppose that T = ( τ ν ) ν = 0 n 1 and T ˜ = ( τ ˜ ν ) ν = 0 n 1 belong to T n 0 , M ( λ ) and M ˜ ( λ ) are the Weyl–Yurko matrices defined by the associated matrices F F ( T ) and F ˜ F ( T ˜ ) , respectively, M , M ˜ W , and the corresponding spectral data sets { λ 0 , N ( λ 0 ) } λ 0 Λ and { λ 0 , N ˜ ( λ 0 ) } λ 0 Λ ˜ are equal to each other. Then T = T ˜ . Thus, the spectral data { λ 0 , N ( λ 0 ) } λ 0 Λ uniquely specify the coefficients T of the differential expression n ( y ) .
For n = 3 , Theorem 5 has been proved in [17]. For the general case, Theorem 5 is proved analogously to Theorem 4, since the matrix of spectral mappings P ( x , λ ) is entire in λ according to Lemma 9 in [17].

5. Examples

In this section, we consider examples of n = 2 and n = 4 to illustrate the main results of Section 2, Section 3 and Section 4.

5.1. Regularization for Order Two

Let n = 2 . Then, m = 1 and the differential expression (1) takes the form
2 ( y ) = y ( τ 1 ( x ) y ) τ 1 ( x ) y + τ 0 ( x ) y .
Thus, T = ( τ 0 , τ 1 ) , i 0 = 1 , i 1 = 0 , the space T 2 is defined by the conditions τ 0 W 2 1 [ 0 , 1 ] , τ 1 L 2 [ 0 , 1 ] , and Σ = ( σ 0 , σ 1 ) , τ 0 = σ 0 , σ 1 = τ 1 , so σ j L 2 [ 0 , 1 ] , j = 0 , 1 . The antiderivative σ 0 of τ 0 can be chosen uniquely up to an additive constant.
Suppose that y W 2 1 [ 0 , 1 ] and z D . Calculations show that
( y , z ) = ( y , z ) , ( ( τ 1 y ) τ 1 y , z ) = ( σ 1 y , z ) ( σ 1 y , z ) , ( τ 0 y , z ) = ( ( σ 0 y ) + σ 0 y , z ) = ( σ 0 y , z ) + ( σ 0 y , z ) .
By summation, we arrive at the relation (7):
( 2 ( y ) , z ) = ( y , z ) + ( q 0 , 0 y , z ) + ( q 1 , 0 y , z ) + ( q 0 , 1 y , z ) , y W 2 1 [ 0 , 1 ] , z D ,
where
Q ( x ) = q 0 , 0 q 0 , 1 q 1 , 0 0 = σ 0 χ 0 , 1 + σ 1 χ 1 , 0 = 0 σ 0 + σ 1 σ 0 σ 1 0 .
(The matrices χ 0 , 1 and χ 1 , 0 are given by (15)).
Using Formulas (10), we obtain
F ( x ) = S 2 ( Q ) = q 0 , 1 0 ( q 0 , 0 + q 0 , 1 q 1 , 0 ) q 1 , 0 = σ 1 + σ 0 0 σ 1 2 σ 0 2 σ 1 σ 0 .
The matrix-function F ( x ) is the special case of the regularization matrix by Mirzoev and Shkalikov [1]. It coincides with the associated matrices from [47,48]. For T T 2 0 (i.e., τ 1 = 0 ), we obtain the differential expression y + τ 0 y , τ 0 W 2 1 [ 0 , 1 ] and the associated matrix
F ( x ) = σ 0 0 σ 0 2 σ 0 ,
which was widely used for investigation of direct and inverse Sturm–Liouville problems (see, e.g., [10,24,25]).
Now, proceed to the construction of the set F ( T ) of all the associated matrices for the differential expression (40) with T T 2 . Any matrix function Q Q 2 admits the representation (16):
Q ( x ) = τ 0 , 0 1 0 0 0 + τ 0 , 1 0 1 1 0 + τ 1 , 0 0 1 1 0 ,
where τ 0 , 0 L 1 [ 0 , 1 ] and τ 0 , 1 , τ 1 , 0 L 2 [ 0 , 1 ] . The matrix Q ( x ) is related to the differential expression 2 ( y ) with the coefficients τ 0 = τ 0 , 0 τ 0 , 1 , τ 1 = τ 1 , 0 (see (18)).
Suppose that τ 0 W 2 1 [ 0 , 1 ] and τ 1 L 2 [ 0 , 1 ] are given. Choose an arbitrary function τ 0 , 0 L 1 [ 0 , 1 ] and a constant c 0 , 0 C . Find
τ 0 , 1 = τ 0 , 0 τ 0 ( 1 ) + c 0 , 0 , τ 1 , 0 = τ 1 ,
construct Q ( x ) by (42) and F ( x ) = S 2 ( Q ) , which is the associated matrix for 2 ( y ) . By choosing different τ 0 , 0 L 1 [ 0 , 1 ] and c 0 , 0 C , we will obtain different associated matrices for the same differential expression. In particular, for y + τ 0 y , τ 0 W 2 1 [ 0 , 1 ] , all the associated matrices can be represented as
F ( x ) = σ 0 σ 2 σ + 0 0 r 0 , σ L 2 [ 0 , 1 ] , r L 1 [ 0 , 1 ] , τ 0 = ( σ + r ) .
Clearly, r L 1 [ 0 , 1 ] can be chosen arbitrarily. After that, the function σ = ( τ 0 r ) ( 1 ) is determined uniquely up to an additive constant. Thus, every F T 2 0 can be represented as the sum of the two associated matrices that are usually used for the Sturm–Liouville expressions y τ ( x ) y , τ = σ W 2 1 [ 0 , 1 ] , and y r ( x ) y , r L 1 [ 0 , 1 ] .

5.2. Inverse Problems for Order Two

Proceed to inverse spectral problems for the Sturm–Liouville equation
y q ( x ) y = λ y , x ( 0 , 1 ) , q = σ W 2 1 [ 0 , 1 ] .
Let us formulate some known uniqueness results and compare them to the results of Section 4.
The associated matrix (41) ( σ 0 = σ ) produces the quasi-derivative y [ 1 ] = y σ y . Following the classical inverse problem theory (see, e.g., [22]), introduce the main spectral characteristics. Let { λ n } n 1 and { μ n } n 1 be the eigenvalues of the boundary value problems for Equation (44) with the boundary conditions y ( 0 ) = y ( 1 ) = 0 and y [ 1 ] ( 0 ) = y ( 1 ) = 0 , respectively. Denote by S ( x , λ ) and C ( x , λ ) the solutions of Equation (44) satisfying the initial conditions
S ( 0 , λ ) = C [ 1 ] ( 0 , λ ) = 0 , S [ 1 ] ( 0 , λ ) = C ( 0 , λ ) = 1 .
Obviously, the eigenvalues { λ n } n 1 and { μ n } n 1 coincide with the zeros of the characteristic functions S ( 1 , λ ) and C ( 1 , λ ) , respectively. In the case of simple eigenvalues { λ n } n 1 , introduce the weight numbers α n : = 0 1 y n 2 ( x ) d x , n 1 , where y n ( x ) = S ( x , λ n ) are the corresponding eigenfunctions. (In the case of multiple eigenvalues, one can use the generalized weight numbers as in [49,50]). Furthermore, define the Weyl function m ( λ ) : = C ( 1 , λ ) S ( 1 , λ ) , which is meromorphic in the λ -plane.
In the case of regular potential q L 1 [ 0 , 1 ] , each of the following three types of the spectral data uniquely specifies q:
(i)
The two spectra { λ n , μ n } n 1 ;
(ii)
The eigenvalues { λ n } n 1 and the weight numbers { α n } n 1 (if the eigenvalues are simple);
(iii)
The Weyl function m ( λ ) .
Moreover, the spectral data (i)–(iii) uniquely determine each other. In the case of distribution potential q W 2 1 [ 0 , 1 ] , the situation is slightly different:
  • The two spectra { λ n , μ n } n 1 and the Weyl function m ( λ ) uniquely specify each other. Indeed, on the one hand, { λ n } n 1 and { μ n } n 1 coincide with the poles and the zeros of m ( λ ) , respectively. On the other hand, the characteristic functions S ( 1 , λ ) and C ( 1 , λ ) can be constructed as infinite produces by their zeros, and so m ( λ ) can be found.
  • The weight numbers { α n } n 1 are uniquely specified by m ( λ ) : α n 1 = Res λ = λ n m ( λ ) , while m ( λ ) is uniquely determined by { λ n , α n } n 1 up to an additive constant (see [32]).
From the inverse problem viewpoint, we have the following uniqueness results (see [24,25]):
  • { λ n , μ n } n 1 or m ( λ ) uniquely specify σ ( x ) .
  • { λ n , α n } n 1 uniquely specify q ( x ) or σ ( x ) + c , where c is an arbitrary constant.
The first result corresponds to Proposition 4 and the second one, to Theorem 5. Indeed, due to Section 4, the Weyl–Yurko matrix for Equation (44) has the form
M ( λ ) = 1 0 m ( λ ) 1 ,
where m ( λ ) is the Weyl function. Note that the definition of m ( λ ) is strongly connected with the regularization matrix (41) ( σ 0 = σ ), while the antiderivative σ ( x ) of q ( x ) is defined up to a constant c. Thus, m ( λ ) depends on c. Consequently, one can uniquely recover σ ( x ) from m ( λ ) as in Proposition 4. Moreover, we can consider other associated matrices generated by (43). By virtue of Theorem 3, the Weyl–Yurko matrices M ( λ ) and M ˜ ( λ ) , which are obtained from different associated matrices F ( x ) and F ˜ ( x ) in the second-order case, are related as follows:
1 0 m ( λ ) 1 = 1 0 l 2 , 1 1 1 0 m ˜ ( λ ) 1 m ( λ ) = l 2 , 1 + m ˜ ( λ ) .
Thus, the assumption that M ( λ ) is given up to a factor L L 2 actually means that the Weyl function m ( λ ) is given up to an additive constant c. Then, by using the given Weyl function, we cannot uniquely determine σ ( x ) but, by virtue of Theorem 4, can uniquely determine q ( x ) . This corresponds to the inverse problem by the spectral data { λ 0 , N ( λ 0 ) } λ 0 Λ , which in the second-order case has the form
Λ = { λ n } n 1 , N ( λ n ) = 0 0 α n 1 0 , n 1 .
Hence, the spectral data { λ n , α n } n 1 do not depend on the choice of the associated matrix. Theorem 5 imply that { λ n , α n } n 1 uniquely specify q ( x ) but, obviously, they cannot uniquely specify σ ( x ) .
As a practical example, let us consider the Sturm–Liouville Equation (44) with the potential q ( x ) = ω x a + u ( x ) having the Coulomb-type singularity, which arises in quantum mechanics (see, e.g., [36]). Here, a ( 0 , 1 ) , ω is a constant, and u L 1 [ 0 , 1 ] . Then, we have
σ ( x ) = ω ln | x a | + 0 x u ( t ) d t + c ,
where c can be an arbitrary constant. From the above arguments, we immediately obtain that σ ( x ) is uniquely determined by the Weyl function m ( λ ) or by the two spectra { λ n , μ n } n 1 . By using σ ( x ) , one can easily find its singular point a, the constant ω , the regular part u ( x ) , and the constant c. Thus, not only the potential q ( x ) but also some extra information c is recovered. In order to avoid this discrepancy, we can initially fix c = ω ln | a | to achieve σ ( 0 ) = 0 and consider the spectral data for this fixed value of c. It is worth mentioning that inverse spectral problems for the Sturm–Liouville operators with the Coulomb-type singularities were studied by different methods in [51,52] and many other papers. However, our method can be applied without essential modifications to the case of the Coulomb-type singularities and/or δ -type interactions in several points.
The situation changes for different types of boundary conditions. In particular, the spectral data of the Sturm–Liouville Equation (44) with the Robin-type boundary conditions
y [ 1 ] ( 0 ) h y ( 0 ) = 0 , y [ 1 ] ( 1 ) + H y ( 1 ) = 0 , h , H C ,
are invariant with respect to the shift σ : = σ + c , h : = h c , H : = H + c , c C . Therefore, it is natural to fix h = 0 . Then, the corresponding three types of spectral data (two spectra, eigenvalues and weight numbers, and the Weyl function) uniquely determine each other as well as the coefficients σ ( x ) and H (see [24,25]). However, the other types of inverse problems, which consist of determining q ( x ) (but not σ ( x ) ) and correspond to Theorems 4 and 5, cannot be considered for the Robin-type boundary conditions, because the associated matrix (roughly speaking, σ ( x ) ) is related to the coefficients h and H.
Thus, the described examples for n = 2 show that the both types of inverse problems (i.e., the recovery of T and of Σ ) generalize the classical problem statements. However, for distribution coefficients, these two types are different and the both are worth being studied.

5.3. Regularization for Order Four

Consider the fourth-order differential equation
4 ( y ) = y ( 4 ) ( p ( x ) y ) + q ( x ) y = λ y , x ( 0 , 1 ) ,
where p W 2 1 [ 0 , 1 ] and q W 2 2 [ 0 , 1 ] . This equation plays an important role in mechanics, since the Euler–Bernoulli equation, which is used for modeling beam vibrations, can be reduced to the form (45) (see [40] Chapter 13).
Let us illustrate our construction for the family of associated matrices by the example of the binomial differential expression 4 ( y ) in (45). Clearly, (45) corresponds to (1) with n = 4 , τ 2 = p , τ 0 = q , τ 1 = τ 3 = 0 . Hence, the vector T = ( τ ν ) ν = 0 3 belongs to the set
T 4 e v e n : = T = ( τ ν ) ν = 0 3 : τ 0 W 2 2 [ 0 , 1 ] , τ 2 W 2 1 [ 0 , 1 ] , τ 0 = τ 3 = 0 ,
which is a subset of T 4 0 . Due to (2), we have the singularity orders i 0 = 2 , i 1 = i 2 = 1 , i 3 = 0 .
Suppose that p W 2 1 [ 0 , 1 ] and q W 2 2 [ 0 , 1 ] are given, and we have to construct the corresponding family F ( T ) of associated matrices. Following the algorithm of Corollary 1, we choose arbitrary functions τ 0 , 0 , τ 0 , 1 , τ 1 , 0 , τ 2 , 0 of L 1 [ 0 , 1 ] and arbitrary complex constants c 0 , 0 , c 0 , 1 , c 1 , 0 , c 2 , 0 . Then, we find the functions τ ν , i ν :
τ 0 , 2 = q τ 0 , 0 + τ 0 , 1 ( 2 ) + c 0 , 1 + c 0 , 2 x , τ 1 , 1 = τ 1 , 0 ( 1 ) + c 1 , 0 , τ 2 , 1 = ( p + τ 2 , 0 ) ( 1 ) + c 2 , 0 , τ 3 , 0 = 0 .
Next, one can construct the matrix function Q ( x ) by (16) and the associated matrix F = S 4 ( Q ) . The set F ( T ) consists of such associated matrix for all the possible choices of the parameters τ 0 , 0 , τ 0 , 1 , τ 1 , 0 , τ 2 , 0 , c 0 , 0 , c 0 , 1 , c 1 , 0 , c 2 , 0 . For example, if we put all these parameters equal zero, then we obtain the Mirzoev–Shkalikov associated matrix:
F ( x ) = 0 1 0 0 σ 0 σ 2 1 0 σ 2 σ 0 σ 2 2 + 2 σ 0 σ 2 1 σ 0 2 σ 2 σ 0 σ 0 0 ,
where σ 2 = τ 2 , 1 , σ 0 = τ 0 , 2 , σ 2 = p , σ 0 = q , σ 0 , σ 2 L 2 [ 0 , 1 ] . It is worth mentioning that the associated matrix (46) for the first time appeared in the paper of Vladimirov [8] and was used for the investigation of the Barcilon-type inverse spectral problem in [53]. However, for studying other spectral aspects, other types of associated matrices of the family F ( T ) may be useful, especially if p and q have lower singularity orders.
Here, we do not consider inverse spectral problems of order four in detail. The author plans to devote a separate research paper to them.

6. Conclusions

In this paper, for each n ≥ 2, we have considered the class F_n, which contains the matrices of Mirzoev and Shkalikov [1,2] associated with the differential expressions ℓ_n(y). We have shown that every matrix function F ∈ F_n is associated with some differential expression ℓ_n(y) with coefficients T = (τ_ν)_{ν=0}^{n−1} ∈ T_n. Furthermore, we have constructively described the family F(T) ⊂ F_n of all the associated matrices for fixed T. In addition, we have proved that D_F = D_F̃ for F, F̃ ∈ F(T). Both the even- and the odd-order cases have been studied. Moreover, we applied this construction to inverse problem theory. Uniqueness theorems have been proved for inverse spectral problems of a new type.
Our results have the following advantages over the previous studies:
  • We have investigated various matrices associated with the same differential expression, while the previous studies provide only specific constructions of associated matrices.
  • We have studied a novel class of inverse spectral problems, which consist of the recovery of the distributional coefficients T independently of the associated matrix. In the previous works on higher-order differential operators with distribution coefficients, the spectral data were connected with a fixed associated matrix. However, both types of inverse problems generalize the classical problem statements and are worth investigating.
In the future, our results can be applied to studying various issues of spectral theory for differential operators with distribution coefficients, because different associated matrices can be convenient for investigating different spectral properties. In particular, the results and the methods of this paper have implications for the following directions of research:
  • From the physical viewpoint, it is worth considering linear differential operators with various types of boundary conditions. For example, in the second-order case, the Dirichlet boundary conditions y(0) = y(1) = 0 correspond to the fixed ends of a string, while the Neumann boundary conditions y'(0) = y'(1) = 0 correspond to free ends. For the fourth order, the physical meaning of various boundary conditions is described, e.g., in Chapter 13 of [40]. A number of applications deal with periodic/antiperiodic boundary conditions. Note that the approach to inverse problems, which is developed in this paper and in the previous studies [16,17,18,19], works for various types of separated boundary conditions. Inverse spectral problems with non-separated boundary conditions are essentially different and so require a separate investigation. In any case, the regularization results concern only the differential expression, so they can be applied to any type of boundary conditions.
  • The regularization approach of Mirzoev and Shkalikov is limited to the singularity orders given by (2). For higher singularity orders, to the best of the author's knowledge, there are no general results. Nevertheless, some special cases can be studied (see [10] for n = 2). For n > 2, higher singularity orders are a topic for future research, in which the ideas of this paper can potentially be used.
  • In this paper, we consider associated matrices of the class F_n whose first (n − m − 1) rows and last (n − m − 1) columns are zero. If some of these entries are non-zero, then the domain D_F can lie outside the space W_{2−s}^{m}[0,1], on which the differential expression (1) is defined in the sense of generalized functions. Thus, some lower-triangular matrix functions outside F_n cannot be associated with differential expressions analogous to (1). However, some subclasses of associated matrices outside F_n can be found and analyzed by developing the methods of this paper.
  • The results of this paper regarding inverse spectral problems are limited to uniqueness theorems. In the future, one can obtain reconstruction procedures and study the solvability and stability of inverse problems. Some steps in this direction were made in [17,19]. This research can be continued by using the various associated matrices obtained in this paper.

Funding

This work was supported by Grant 21-71-10001 of the Russian Science Foundation, https://rscf.ru/en/project/21-71-10001/ (accessed on 17 July 2023).

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Mirzoev, K.A.; Shkalikov, A.A. Differential operators of even order with distribution coefficients. Math. Notes 2016, 99, 779–784.
  2. Mirzoev, K.A.; Shkalikov, A.A. Ordinary differential operators of odd order with distribution coefficients. arXiv 2019, arXiv:1912.03660.
  3. Naimark, M.A. Linear Differential Operators; Ungar: New York, NY, USA, 1968.
  4. Everitt, W.N.; Marcus, L. Boundary Value Problems and Symplectic Algebra for Ordinary Differential and Quasi-Differential Operators; Mathematical Surveys and Monographs; American Mathematical Society: Providence, RI, USA, 1999; Volume 61.
  5. Weidmann, J. Spectral Theory of Ordinary Differential Operators; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1987.
  6. Neiman-Zade, M.I.; Shkalikov, A.A. Schrödinger operators with singular potentials from spaces of multipliers. Math. Notes 1999, 66, 599–607.
  7. Neiman-Zade, M.I.; Shkalikov, A.A. Strongly elliptic operators with singular coefficients. Russ. J. Math. Phys. 2006, 13, 70–78.
  8. Vladimirov, A.A. On the convergence of sequences of ordinary differential equations. Math. Notes 2004, 75, 877–880.
  9. Vladimirov, A.A. On one approach to definition of singular differential operators. arXiv 2017, arXiv:1701.08017.
  10. Savchuk, A.M.; Shkalikov, A.A. Sturm-Liouville operators with distribution potentials. Trans. Moscow Math. Soc. 2003, 143–192.
  11. Savchuk, A.M.; Shkalikov, A.A. Asymptotic analysis of solutions of ordinary differential equations with distribution coefficients. Sb. Math. 2020, 211, 1623–1659.
  12. Konechnaja, N.N.; Mirzoev, K.A. The leading term of the asymptotics of solutions of linear differential equations with first-order distribution coefficients. Math. Notes 2019, 106, 81–88.
  13. Konechnaja, N.N.; Mirzoev, K.A.; Shkalikov, A.A. Asymptotics of solutions of two-term differential equations. Math. Notes 2023, 113, 228–242.
  14. Vladimirov, A.A. On the problem of oscillation properties of positive differential operators with singular coefficients. Math. Notes 2016, 100, 790–795.
  15. Vladimirov, A.A.; Shkalikov, A.A. On oscillation properties of self-adjoint boundary value problems of fourth order. Dokl. Math. 2021, 103, 5–9.
  16. Bondarenko, N.P. Inverse spectral problems for arbitrary-order differential operators with distribution coefficients. Mathematics 2021, 9, 2989.
  17. Bondarenko, N.P. Reconstruction of higher-order differential operators by their spectral data. Mathematics 2022, 10, 3882.
  18. Bondarenko, N.P. Linear differential operators with distribution coefficients of various singularity orders. Math. Meth. Appl. Sci. 2023, 46, 6639–6659.
  19. Bondarenko, N.P. Inverse spectral problem for the third-order differential equation. Res. Math. 2023, 78, 179.
  20. Marchenko, V.A. Sturm-Liouville Operators and Their Applications; Birkhäuser: Basel, Switzerland, 1986.
  21. Levitan, B.M. Inverse Sturm-Liouville Problems; VNU Sci. Press: Utrecht, The Netherlands, 1987.
  22. Freiling, G.; Yurko, V. Inverse Sturm-Liouville Problems and Their Applications; Nova Science Publishers: Huntington, NY, USA, 2001.
  23. Kravchenko, V.V. Direct and Inverse Sturm-Liouville Problems; Birkhäuser: Cham, Switzerland, 2020.
  24. Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. Inverse Probl. 2003, 19, 665–684.
  25. Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. II. Reconstruction by two spectra. In North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2004; Volume 197, pp. 97–114.
  26. Freiling, G.; Ignatiev, M.Y.; Yurko, V.A. An inverse spectral problem for Sturm-Liouville operators with singular potentials on star-type graph. Proc. Symp. Pure Math. 2008, 77, 397–408.
  27. Savchuk, A.M.; Shkalikov, A.A. Inverse problems for Sturm-Liouville operators with potentials in Sobolev spaces: Uniform stability. Funct. Anal. Appl. 2010, 44, 270–285.
  28. Hryniv, R.O. Analyticity and uniform stability in the inverse singular Sturm-Liouville spectral problem. Inverse Probl. 2011, 27, 065011.
  29. Eckhardt, J.; Gesztesy, F.; Nichols, R.; Teschl, G. Supersymmetry and Schrödinger-type operators with distributional matrix-valued potentials. J. Spectr. Theory 2014, 4, 715–768.
  30. Eckhardt, J.; Gesztesy, F.; Nichols, R.; Sakhnovich, A.; Teschl, G. Inverse spectral problems for Schrödinger-type operators with distributional matrix-valued potentials. Differ. Integral Equ. 2015, 28, 505–522.
  31. Bondarenko, N.P. Solving an inverse problem for the Sturm-Liouville operator with singular potential by Yurko's method. Tamkang J. Math. 2021, 52, 125–154.
  32. Bondarenko, N.P. Direct and inverse problems for the matrix Sturm-Liouville operator with general self-adjoint boundary conditions. Math. Notes 2021, 109, 358–378.
  33. Yurko, V.A. Recovery of nonselfadjoint differential operators on the half-line from the Weyl matrix. Math. USSR-Sb. 1992, 72, 413–438.
  34. Yurko, V.A. Inverse problems of spectral analysis for differential operators and their applications. J. Math. Sci. 2000, 98, 319–426.
  35. Yurko, V.A. Method of Spectral Mappings in the Inverse Problem Theory; Inverse and Ill-Posed Problems Series; VNU Science: Utrecht, The Netherlands, 2002.
  36. Albeverio, S.; Gesztesy, F.; Hoegh-Krohn, R.; Holden, H. Solvable Models in Quantum Mechanics, 2nd ed.; AMS Chelsea Publishing: Providence, RI, USA, 2005.
  37. Bernis, F.; Peletier, L.A. Two problems from draining flows involving third-order ordinary differential equations. SIAM J. Math. Anal. 1996, 27, 515–527.
  38. McKean, H. Boussinesq's equation on the circle. Comm. Pure Appl. Math. 1981, 34, 599–691.
  39. Barcilon, V. On the uniqueness of inverse eigenvalue problems. Geophys. J. Int. 1974, 38, 287–298.
  40. Gladwell, G.M.L. Inverse Problems in Vibration, 2nd ed.; Solid Mechanics and Its Applications; Springer: Dordrecht, The Netherlands, 2005; Volume 119.
  41. Möller, M.; Zinsou, B. Sixth order differential operators with eigenvalue dependent boundary conditions. Appl. Anal. Discrete Math. 2013, 7, 378–389.
  42. Uǧurlu, E.; Bairamov, E. Fourth order differential operators with distributional potentials. Turk. J. Math. 2020, 44, 825–856.
  43. Badanin, A.; Korotyaev, E.L. Third-order operators with three-point conditions associated with Boussinesq's equation. Appl. Anal. 2021, 100, 527–560.
  44. Zhang, H.-Y.; Ao, J.-J.; Bo, F.-Z. Eigenvalues of fourth-order boundary value problems with distributional potentials. AIMS Math. 2022, 7, 7294–7317.
  45. Zhang, M.; Li, K.; Wang, Y. Regular approximation of singular third-order differential operators. J. Math. Anal. Appl. 2023, 521, 126940.
  46. Polyakov, D.M. On the spectral properties of a fourth-order self-adjoint operator. Diff. Equ. 2023, 59, 168–173.
  47. Mirzoev, K.A. Sturm-Liouville operators. Trans. Moscow Math. Soc. 2014, 75, 281–299.
  48. Shkalikov, A.A.; Vladykina, V.E. Asymptotics of the solutions of the Sturm-Liouville equation with singular coefficients. Math. Notes 2015, 98, 891–899.
  49. Buterin, S.A. On inverse spectral problem for non-selfadjoint Sturm-Liouville operator on a finite interval. J. Math. Anal. Appl. 2007, 335, 739–749.
  50. Buterin, S.A.; Shieh, C.-T.; Yurko, V.A. Inverse spectral problems for non-selfadjoint second-order differential operators with Dirichlet boundary conditions. Bound. Value Probl. 2013, 2013, 180.
  51. Amirov, R.; Topsakal, N. Inverse problem for Sturm-Liouville operators with Coulomb potential which have discontinuity conditions inside an interval. Math. Phys. Anal. Geom. 2010, 13, 29–46.
  52. Panakhov, E.; Ulusoy, I. Inverse spectral theory for a singular Sturm-Liouville operator with Coulomb potential. Adv. Pure Math. 2016, 6, 41–49.
  53. Guan, A.-W.; Yang, C.-F.; Bondarenko, N.P. Solving Barcilon's inverse problems by the method of spectral mappings. arXiv 2023, arXiv:2304.05747.