1. Introduction
The singular value decomposition (SVD) of a real matrix $A \in \mathbb{R}^{m\times n}$ establishes that there exist orthogonal matrices $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$, and a diagonal matrix $D \in \mathbb{R}^{m\times n}$, such that $A = UDV^{T}$. The diagonal entries of D, $s_1 \ge s_2 \ge \cdots \ge s_{\min\{m,n\}} \ge 0$, are called the singular values of A. The columns $u_i$ of U are called the left singular vectors of A, and the columns $v_i$ of V are the right singular vectors of A, where $Av_i = s_i u_i$, $A^{T}u_i = s_i v_i$, and $\|u_i\| = \|v_i\| = 1$, for $i = 1, \ldots, \min\{m,n\}$. It is known that the singular values of A are the non-negative square roots of the eigenvalues of $A^{T}A$ or $AA^{T}$. In this sense, the columns of V are eigenvectors of $A^{T}A$ and the columns of U are eigenvectors of $AA^{T}$ (see [1,2]).
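The relation between the singular values of A and the eigenvalues of $A^{T}A$ is easy to verify numerically. The following minimal sketch (NumPy assumed, with a small random matrix as a stand-in) checks that the singular values are the non-negative square roots of the eigenvalues of $A^{T}A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # arbitrary real matrix

# Singular values of A, returned in nonincreasing order.
s = np.linalg.svd(A, compute_uv=False)

# Eigenvalues of the symmetric matrix A^T A, reordered to match.
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]

# Singular values are the non-negative square roots of these eigenvalues
# (clipping guards against tiny negative values from round-off).
assert np.allclose(s, np.sqrt(np.clip(eigvals, 0, None)))
```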
SVD is a powerful tool that reveals the geometric structure of linear transformations. Unlike spectral decomposition, SVD is applicable to any matrix, thus generalizing the diagonalization concept. It is essential for low-rank approximations, enabling efficient data representation. It is also used in image compression, recommendation systems, sparse representations like k-SVD, and optimizing bioprocess data (see [
3,
4,
5,
6,
7]).
In this paper, we construct a matrix that we call a Lefkovitch-type matrix, since it has the same structure as the well-known Lefkovitch matrix (see [8,9,10]). However, unlike the Lefkovitch matrix, its nonzero entries can take any real value. Specifically, we construct a matrix of the form (1) from a particular set of singular values associated with an inverse extremal eigenvalue problem for a symmetric bordered tridiagonal matrix of the form (2). Additionally, we provide a solution to the inverse extremal eigenvalue problem for a nonsymmetric bordered tridiagonal matrix of the form (3).
The inverse extremal eigenvalue problem (IEEP) concerns the reconstruction of a structured matrix from prescribed extremal spectral data of its leading principal submatrices. This problem contrasts with the traditional inverse eigenvalue problem (IEP), which typically uses spectral data from the matrix itself. Within this framework, various classes of structured matrices have been studied, including symmetric and nonsymmetric tridiagonal and bordered diagonal matrices, whose combined structures give rise to a bordered tridiagonal matrix (see [11,12] and the references cited therein). Matrices reconstructed from extremal spectral data have applications in science and engineering, including graph theory, vibration analysis, and particularly population dynamics (see [
8,
13,
14]). In this regard, the Lefkovitch matrix introduced in [
15] arises naturally in the discrete analysis of population growth by stages. As far as we know, a solution to the IEEP for this class of matrices is given only in [
8].
Analogously, the inverse singular value problem (ISVP) refers to the construction of a structured matrix with prescribed singular values and/or singular vectors. There have been a few advances in this area, with some ISVPs for specific structured matrices and some numerical methods (see [
16,
17,
18,
19,
20,
21,
22]). However, the construction of Lefkovitch matrices from singular values has not yet been explored in the literature. To address this problem, we observe that the singular values of the submatrices formed by the first j full rows of L correspond to the non-negative square roots of the eigenvalues of the leading principal submatrices of a bordered tridiagonal matrix. We introduce the following definition:
Definition 1. Let A be an $n\times m$ matrix. The submatrix formed by the first j full rows of A is called the leading row submatrix of order j.
In this context, we will construct a Lefkovitch-type matrix considering the minimal and maximal singular values of all its leading row submatrices. Based on this, a special ISVP is established, which we will call the
inverse extremal singular value problem (IESVP)
of leading row submatrices. For this problem, we provide a solution using the technique given in [
20], which requires a prior solution to the IEEP for bordered tridiagonal matrices.
Bordered tridiagonal matrices arise in the computation of electric power systems, spline interpolation and approximation, telecommunication analysis, parallel computing, and particularly in the numerical solution of certain partial differential equations, such as the Schrödinger equation. In these contexts, the coefficient matrix of the associated linear systems of equations has the structure of a bordered tridiagonal matrix. Because of this, efficient algorithmic procedures have been developed to compute their inverses and determinants, as well as methods for solving the associated linear systems (see [
23,
24,
25,
26,
27,
28]).
Throughout this paper, given an $n\times n$ matrix A, we will denote by $A_j$ its leading principal submatrix of order j, and by $A^{(j)}$ its leading row submatrix of order j. $\sigma(A_j)=\{\lambda_1^{(j)},\lambda_2^{(j)},\ldots,\lambda_j^{(j)}\}$ is the spectrum of $A_j$, with $\lambda_1^{(j)}\le\lambda_2^{(j)}\le\cdots\le\lambda_j^{(j)}$, and $S(A^{(j)})=\{s_1^{(j)},s_2^{(j)},\ldots,s_j^{(j)}\}$ is the set of singular values of $A^{(j)}$, with $s_1^{(j)}\le s_2^{(j)}\le\cdots\le s_j^{(j)}$. We will call the minimal eigenvalue $\lambda_1^{(j)}$ and the maximal eigenvalue $\lambda_j^{(j)}$ the extremal eigenvalues of $A_j$. Analogously, we will call the minimal singular value $s_1^{(j)}$ and the maximal singular value $s_j^{(j)}$ the extremal singular values of $A^{(j)}$.
In this work, we discuss the following inverse problems:
ISEEP: Given the set of real numbers $\{\lambda_1^{(j)},\lambda_j^{(j)}\}_{j=1}^{n}$ and a nonzero vector $x$, construct a symmetric bordered tridiagonal matrix B of the form (2), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$ of B, and $(\lambda_n^{(n)},x)$ is an eigenpair of B.
INSEEP: Given the set of real numbers $\{\lambda_1^{(j)},\lambda_j^{(j)}\}_{j=1}^{n}$ and the nonzero vectors $x$ and $y$, construct a nonsymmetric bordered tridiagonal matrix of the form (3), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of its leading principal submatrix of order j, $(\lambda_n^{(n)},x)$ is an eigenpair of the matrix, and $(\lambda_{n-1}^{(n-1)},y)$ is an eigenpair of its leading principal submatrix of order $n-1$.
IESVP: Given the set of positive real numbers $\{s_1^{(j)},s_j^{(j)}\}_{j=1}^{n}$, a nonzero vector $x$, and two additional real numbers, construct a Lefkovitch-type matrix L of the form (1), such that $s_1^{(j)}$ and $s_j^{(j)}$ are the extremal singular values of the leading row submatrix $L^{(j)}$ of L, and $x$ is a left singular vector of L.
The following preliminary results are necessary for the development of the subsequent sections.
Lemma 1 (Cauchy's interlacing theorem). Let $\lambda_1\le\lambda_2\le\cdots\le\lambda_n$ be the eigenvalues of an $n\times n$ real symmetric matrix A and $\mu_1\le\mu_2\le\cdots\le\mu_{n-1}$ be the eigenvalues of an $(n-1)\times(n-1)$ principal submatrix of A. Then $\lambda_i\le\mu_i\le\lambda_{i+1}$, for $i=1,2,\ldots,n-1$. For the extremal eigenvalues of all leading principal submatrices, this result establishes that $\lambda_1^{(n)}\le\cdots\le\lambda_1^{(2)}\le\lambda_1^{(1)}\le\lambda_2^{(2)}\le\cdots\le\lambda_n^{(n)}$.
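The chain of inequalities for the extremal eigenvalues can be checked numerically. A minimal sketch (NumPy assumed, random symmetric matrix as a stand-in) collects the extremal eigenvalues of every leading principal submatrix and verifies the ordering:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2  # random real symmetric matrix

mins, maxs = [], []
for j in range(1, n + 1):
    w = np.linalg.eigvalsh(A[:j, :j])  # eigenvalues of leading principal submatrix A_j
    mins.append(w[0])   # minimal eigenvalue lambda_1^(j)
    maxs.append(w[-1])  # maximal eigenvalue lambda_j^(j)

# lambda_1^(n) <= ... <= lambda_1^(1) <= lambda_2^(2) <= ... <= lambda_n^(n)
chain = mins[::-1] + maxs[1:]
assert all(chain[k] <= chain[k + 1] + 1e-12 for k in range(len(chain) - 1))
```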
Lemma 2 ([11]). Let $P_n(\lambda)$ be a monic polynomial of degree n with all real zeros. If α and β are, respectively, the smallest and largest zero of $P_n(\lambda)$, then
- (1) If $\mu<\alpha$, then $(-1)^{n}P_n(\mu)>0$;
- (2) If $\mu>\beta$, then $P_n(\mu)>0$.
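The sign conditions of Lemma 2 can be illustrated numerically. The sketch below (NumPy's polynomial utilities assumed; the zeros are an arbitrary stand-in example) builds a monic polynomial with all real zeros and checks the signs outside the extremal zeros:

```python
import numpy as np

zeros = np.array([-2.0, -0.5, 1.0])  # all real zeros
n = len(zeros)
p = np.poly(zeros)                   # coefficients of the monic polynomial with these zeros
alpha, beta = zeros.min(), zeros.max()

mu_left, mu_right = alpha - 1.0, beta + 1.0

# (1) mu < alpha  =>  (-1)^n P_n(mu) > 0  (here n = 3, so P_n(mu) itself is negative)
assert (-1) ** n * np.polyval(p, mu_left) > 0
# (2) mu > beta   =>  P_n(mu) > 0
assert np.polyval(p, mu_right) > 0
```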
The remainder of this paper is organized as follows:
Section 2 presents the main results in which sufficient conditions are given for the existence of a solution to the proposed problems.
Section 3 provides numerical examples to illustrate our results. The pseudocodes of the algorithms used in these examples can be found in
Appendix A. Finally,
Section 4 presents some concluding remarks.
2. Main Results
In this section, we present the main results of our work, focusing on the three proposed inverse extremal problems. For each case, we provide sufficient conditions on the prescribed data to ensure the existence of the corresponding matrix, along with a constructive procedure for obtaining it.
2.1. Solution to ISEEP
Next, we consider the inverse extremal eigenvalue problem for a symmetric bordered tridiagonal matrix. As a preliminary step, we establish the following lemma, which is fundamental for the derivation of the main result.
Lemma 3. Given an $n\times n$ symmetric bordered tridiagonal matrix B of the form (2), and the leading principal submatrix $B_j$ of B with characteristic polynomial $P_j(\lambda)$, the sequence $\{P_j(\lambda)\}$ satisfies a recurrence relation whose auxiliary terms are the characteristic polynomial of the submatrix resulting from deleting the first row and the first column of $B_j$, and the determinant of the submatrix resulting from deleting the $j$-th row and the first column of $B_j$. Proof. It is immediate by expanding $\det(\lambda I - B_j)$. □
In the remainder of this paper, the notation introduced in (6) will be adopted.
Theorem 1. Let $\{\lambda_1^{(j)},\lambda_j^{(j)}\}_{j=1}^{n}$ be real numbers and let the vector $x$ satisfy conditions (7)–(9), where the quantities involved are as in (6). Then there exists an $n\times n$ symmetric bordered tridiagonal matrix B of the form (2), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$ of B, for $j=1,2,\ldots,n$, and $(\lambda_n^{(n)},x)$ is an eigenpair of B. Proof of Theorem 1. We show that the system of equations (10) can be solved recursively for real entries, one of which is positive, where the characteristic polynomials $P_j(\lambda)$ satisfy Lemma 3.
It is clear that . From system (10), for , and Lemma 2, we can obtain the required entries as follows:
and
Now, from system (10), for , we have
which has real solutions , and . Indeed, there exists since
for
and . Furthermore, solving (13), we obtain the equation
where and , which represents a conic. Fixing X, the discriminant of Equation (14) is . Then:
- (i) if , for all ;
- (ii) if , for ;
therefore, Y exists in either case. Moreover, the point belongs to the conic, which, by Lemma 2 and condition (9), is non-degenerate, non-empty, and centered at the origin. Consequently, there exist real numbers and satisfying (14).
On the other hand, as $(\lambda_n^{(n)},x)$ is an eigenpair of B, it follows that
with .
Thus, from (15) and condition (8), we obtain that
Finally, from Equation (14), for , condition (9), and Lemma 2, we have
and
for . The proof is complete. □
2.2. Solution to INSEEP
This section addresses the inverse extremal eigenvalue problem for nonsymmetric bordered tridiagonal matrices. The analysis begins with the following lemma.
Lemma 4. Given an $n\times n$ nonsymmetric bordered tridiagonal matrix of the form (3), and its leading principal submatrix of order j with characteristic polynomial $Q_j(\lambda)$, the sequence $\{Q_j(\lambda)\}$ satisfies a recurrence relation whose auxiliary terms are the determinants of the submatrices resulting from eliminating the $j$-th row and the first column, the first row and the $j$-th column, and the first row and the first column of the submatrix, respectively. To simplify future calculations, we introduce the notation given in (20) and (21).
Theorem 2. Let $\{\lambda_1^{(j)},\lambda_j^{(j)}\}_{j=1}^{n}$ be real numbers, and let $x$ and $y$ be nonzero vectors satisfying (7), (8), and conditions (24) and (25), where w, β, and γ are as in (20) and (21). Then there exists an $n\times n$ nonsymmetric bordered tridiagonal matrix of the form (3), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of its leading principal submatrix of order j, $(\lambda_n^{(n)},x)$ is an eigenpair of the matrix, and $(\lambda_{n-1}^{(n-1)},y)$ is an eigenpair of its leading principal submatrix of order $n-1$. Proof of Theorem 2. We show that the system of equations (26),
where the sequence $\{Q_j(\lambda)\}$ satisfies Lemma 4, has real solutions , and .
It is clear that . Note that the second and third equalities in (26) have the following form:
From (26), for , condition (7), and Lemma 2, we obtain
and
Now, from (27) and condition (24), we get
and
On the other hand, for , from (27) and condition (24), we get
and
Then, from the system (26), we get
which has real solutions and . Indeed, considering (32) and condition (25), we rewrite the system (34) and have
Solving the system (35), we obtain
and
Analogously, for , we obtain
and
where
Thus, the proof is concluded. □
2.3. Solution to IESVP
We now focus our analysis on finding a solution to the IESVP for Lefkovitch-type matrices, considering the minimal and maximal singular values of all its leading row submatrices and a singular vector of the matrix. Although it is known that the IEP and the ISVP are equivalent for symmetric matrices, and that any ISVP can be addressed by reformulating it as an IEP (see [18,29]), these statements may not hold when the eigenvalues and singular values belong to specific submatrices. This observation led us to consider a different approach.
We begin by recalling the following lemma, which plays a fundamental role in the subsequent analysis. This result, analogous to Cauchy’s interlacing theorem, establishes a relation between the singular values of a matrix and those of its leading row submatrices.
Lemma 5 ([30] (Interlacing property of singular values)). Let $B_p$ denote a submatrix of an $m\times n$ matrix A obtained by deleting any p columns (or any p rows) from A, and let the singular values of A and $B_p$ be arranged in nonincreasing order. Then $s_i(A)\ge s_i(B_p)\ge s_{i+p}(A)$.
As a consequence of this lemma, considering the notation and order established in Section 1 for the extremal singular values of the leading row submatrices, it follows that $s_1^{(n)}\le\cdots\le s_1^{(2)}\le s_1^{(1)}\le s_2^{(2)}\le\cdots\le s_n^{(n)}$. Moreover, the inequalities are strict, since the leading principal submatrices of $LL^{T}$ are irreducible. This relationship is a necessary condition satisfied by the extremal singular values of all leading row submatrices of a Lefkovitch-type matrix.
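As with the eigenvalue chain of Lemma 1, this singular value ordering is easy to check numerically. The sketch below (NumPy assumed, a random square matrix as a stand-in for L) verifies the nonstrict chain over the extremal singular values of all leading row submatrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
L = rng.standard_normal((n, n))

mins, maxs = [], []
for j in range(1, n + 1):
    # leading row submatrix of order j: the first j full rows of L
    s = np.linalg.svd(L[:j, :], compute_uv=False)
    mins.append(s[-1])  # minimal singular value s_1^(j)
    maxs.append(s[0])   # maximal singular value s_j^(j)

# s_1^(n) <= ... <= s_1^(1) <= s_2^(2) <= ... <= s_n^(n)
chain = mins[::-1] + maxs[1:]
assert all(chain[k] <= chain[k + 1] + 1e-12 for k in range(len(chain) - 1))
```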
The following result establishes sufficient conditions for the solvability of Problem 3.
Theorem 3. Let $\{s_1^{(j)},s_j^{(j)}\}_{j=1}^{n}$ be given positive real numbers, let $x$ be a nonzero vector, and let two additional real numbers be given, satisfying (39) and (40). If the prescribed data satisfy condition (9) of Theorem 1, then there exists an $n\times n$ Lefkovitch-type matrix L of the form (1), such that $s_1^{(j)}$ and $s_j^{(j)}$ are the extremal singular values of the leading row submatrix $L^{(j)}$ of L, and $x$ is a left singular vector of the matrix L. Proof of Theorem 3. Suppose that (39) and (40) hold. Since the prescribed data also satisfy the conditions of Theorem 1, there exists a symmetric bordered tridiagonal matrix B of the form (2), such that $(s_1^{(j)})^{2}$ and $(s_j^{(j)})^{2}$ are the extremal eigenvalues of its leading principal submatrices and $x$ is an eigenvector of B. On the other hand, for a Lefkovitch-type matrix L of the form (1), we can compute $LL^{T}$ explicitly. If we set $LL^{T}=B$, it follows that the entries of L can be obtained from those of B.
Given that the product $L^{(j)}(L^{(j)})^{T}$, for $j=1,2,\ldots,n$, corresponds to the leading principal submatrix $B_j$ of B, we can conclude that $s_1^{(j)}$ and $s_j^{(j)}$ are the extremal singular values of $L^{(j)}$. Furthermore, since $x$ is an eigenvector of $LL^{T}$, it follows that $x$ is a left singular vector of L. Hence, the statement holds. □
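The key identity behind this argument — that the leading row submatrix satisfies $L^{(j)}(L^{(j)})^{T}=(LL^{T})_j$, so the extremal singular values of $L^{(j)}$ are the square roots of the extremal eigenvalues of $B_j$ — in fact holds for any matrix L. The following sketch verifies it numerically (NumPy assumed; a generic random matrix is used as a stand-in, not the specific Lefkovitch-type structure (1)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
L = rng.standard_normal((n, n))
B = L @ L.T  # symmetric positive semidefinite

for j in range(1, n + 1):
    Lj = L[:j, :]    # leading row submatrix of order j
    Bj = B[:j, :j]   # leading principal submatrix of order j
    # the Gram matrix of the first j rows IS the leading principal submatrix of B
    assert np.allclose(Lj @ Lj.T, Bj)

    s = np.linalg.svd(Lj, compute_uv=False)  # nonincreasing order
    lam = np.linalg.eigvalsh(Bj)             # nondecreasing order
    # extremal singular values of L_j = square roots of extremal eigenvalues of B_j
    assert np.isclose(s[0], np.sqrt(lam[-1]))
    assert np.isclose(s[-1], np.sqrt(max(lam[0], 0.0)))
```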
3. Examples
This section provides some examples that illustrate and validate the theoretical results obtained in the preceding sections. Theorems 1, 2, and 3 lead to algorithmic procedures that reconstruct the associated matrices. These algorithms were implemented using MATLAB R2024b.
Next, we introduce the following notation: the vectors whose components are the prescribed eigenvalues and singular values, respectively; the vectors whose components are the extremal eigenvalues of the leading principal submatrices and the extremal singular values of the leading row submatrices of the reconstructed matrix, respectively; and the eigenvector associated with $\lambda_n^{(n)}$, the eigenvector associated with $\lambda_{n-1}^{(n-1)}$, and the left singular vector associated with $s_n^{(n)}$, of the corresponding reconstructed matrices.
To evaluate the closeness between the computed results and the prescribed data, we consider the relative errors between each prescribed quantity and its computed counterpart.
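The relative errors used here are standard normwise quantities. A small helper of the kind used in our scripts can be sketched as follows (the function name `rel_err` is illustrative, not taken from the paper's MATLAB code):

```python
import numpy as np

def rel_err(prescribed, computed):
    """Normwise relative error ||prescribed - computed|| / ||prescribed||."""
    prescribed = np.asarray(prescribed, dtype=float)
    computed = np.asarray(computed, dtype=float)
    return np.linalg.norm(prescribed - computed) / np.linalg.norm(prescribed)

# identical data give zero error; a perturbation of 0.1 in a vector of norm 5
# gives a relative error of 0.02
assert rel_err([1.0, 2.0, 2.0], [1.0, 2.0, 2.0]) == 0.0
assert np.isclose(rel_err([3.0, 4.0], [3.0, 4.1]), 0.02)
```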
Example 1. To illustrate the results obtained in Section 2.1, we use the data prescribed in Table 1, which satisfy the conditions of Theorem 1. The symmetric bordered tridiagonal matrix providing the requisite spectral properties is as follows:
and the spectra of its leading principal submatrices are detailed in Table 2. The closeness of the obtained data is illustrated in Table 3.
Example 2. In this example, the initial data, which satisfy the criteria established by Theorem 2, are presented in Table 4. Table 5 shows the extremal eigenvalues of the submatrices of the reconstructed nonsymmetric bordered tridiagonal matrix. Finally, Table 6 provides the relative errors of the results obtained in our computations.
Example 3. Let L be an $n\times n$ Lefkovitch-type matrix and B the symmetric bordered tridiagonal matrix with $B = LL^{T}$. In this example, the matrix B is reconstructed from the squares of the extremal singular values of the leading row submatrices of L and the left singular vector associated with the maximal singular value of L. These data satisfy the conditions of Theorems 1 and 3. Then, using the procedure given in Theorem 3, the matrix L is reconstructed. We consider different sizes of these matrices and report the relative error to show the closeness between the given matrix and the reconstructed matrix. The results obtained are shown in Table 7 and Table 8.
Example 4. In the following example, we reconstruct a Lefkovitch-type matrix from the prescribed data given in Table 9. The Lefkovitch-type matrix that satisfies the properties required in Theorem 3 is as follows:
Table 10 shows the singular values of the leading row submatrices of L. Finally, Table 11 summarizes the computed results.
4. Conclusions
This paper provides advances in the inverse extremal eigenvalue problem (IEEP) and, in particular, in the inverse extremal singular value problem (IESVP) for structured matrices. Due to their definitions, both problems are of theoretical importance, since IEEPs are a special class of IEPs and ISVPs are a natural extension of IEPs (see [
18]). Furthermore, they are also relevant in engineering applications. In [
13], a spring–mass system is reconstructed from the minimum and maximum frequencies of its subsystems, resulting from successively fixing an internal mass. This issue is closely related to the IEEP for tridiagonal matrices. Similarly, the ISVP for structured matrices is related to telecommunications theory, particularly in the design of signature sequences for synchronous code division multiple access systems (see [
21]).
Our contribution in this work focuses first on constructing symmetric and nonsymmetric bordered tridiagonal matrices of order n from the extremal eigenvalues of their leading principal submatrices and an eigenvector of the matrix for the symmetric case and, additionally, an eigenvector of the submatrix of order $n-1$ for the nonsymmetric case. Although the procedure used is already known in the literature, to our knowledge, the inverse eigenvalue problem for this type of matrix has not been addressed. On the other hand, it is known that there is limited literature on ISVPs. In this sense, we better substantiate the procedure given in [20] and apply it to reconstruct a Lefkovitch-type matrix from the extremal singular values of its leading row submatrices and a left singular vector of the matrix. These data are associated with the spectral data used in the IEEP for a symmetric bordered tridiagonal matrix. In all cases, we provide sufficient conditions for the existence and construction of such matrices. As our results are constructive, algorithmic procedures are generated to determine a solution matrix explicitly.