Article

An Inverse Extremal Eigenproblem for Bordered Tridiagonal Matrices Applied to an Inverse Singular Value Problem for Lefkovitch-Type Matrices

by Hubert Pickmann-Soto 1,*, Susana Arela-Pérez 1, Cristina Manzaneda 2 and Hans Nina 3

1 Departamento de Matemática, Facultad de Ciencias, Universidad de Tarapacá, Arica 1000000, Chile
2 Departamento de Matemáticas, Facultad de Ciencias, Universidad Católica del Norte, Antofagasta 1240000, Chile
3 Departamento de Matemáticas, Facultad de Ciencias Básicas, Universidad de Antofagasta, Antofagasta 1240000, Chile
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3369; https://doi.org/10.3390/math13213369
Submission received: 20 September 2025 / Revised: 20 October 2025 / Accepted: 21 October 2025 / Published: 22 October 2025

Abstract

This paper focuses on the inverse extremal eigenvalue problem (IEEP) and a special inverse singular value problem (ISVP). First, a bordered tridiagonal matrix is constructed from the extremal eigenvalues of its leading principal submatrices and an eigenvector. Then, based on the previous construction, a Lefkovitch-type matrix is constructed from a particular set of singular values and a singular vector. Sufficient conditions are established for the existence of a symmetric bordered tridiagonal matrix, while the nonsymmetric case is also addressed. Finally, numerical examples illustrating these constructions derived from the main results are presented.

1. Introduction

The singular value decomposition (SVD) of a real $n \times n$ matrix $A$ establishes that there exist orthogonal matrices $U = [u_1, u_2, \ldots, u_n]$ and $V = [v_1, v_2, \ldots, v_n]$, and a diagonal matrix $D = \operatorname{diag}\{\sigma_1, \sigma_2, \ldots, \sigma_n\}$, such that $A = UDV^T$. The diagonal entries of $D$, $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0$, are called the singular values of $A$. The columns $u_i$ of $U$ are called the left singular vectors of $A$, and the columns $v_i$ of $V$ are the right singular vectors of $A$, where $A v_i = \sigma_i u_i$, $A^T u_i = \sigma_i v_i$, and $u_i^T A v_i = \sigma_i$, for $i = 1, 2, \ldots, n$. It is known that the singular values of $A$ are the non-negative square roots of the eigenvalues of $AA^T$ or $A^TA$. In this sense, the columns of $V$ are eigenvectors of $A^TA$ and the columns of $U$ are eigenvectors of $AA^T$ (see [1,2]).
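As a quick numerical illustration of these relations (an aside; the random test matrix, the NumPy routines, and the variable names below are ours, not part of the original presentation), one can check that the singular values are the non-negative square roots of the eigenvalues of $AA^T$ and that $Av_i = \sigma_i u_i$:

```python
import numpy as np

# Illustrative check of the SVD relations on an arbitrary test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))

U, s, Vt = np.linalg.svd(A)                       # A = U diag(s) V^T, s in decreasing order
eig_AAT = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]

print(np.allclose(s, np.sqrt(eig_AAT)))           # sigma_i = sqrt(eigenvalue_i of A A^T)
print(np.allclose(A @ Vt.T, U * s))               # A v_i = sigma_i u_i for every column i
```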
SVD is a powerful tool that reveals the geometric structure of linear transformations. Unlike spectral decomposition, SVD is applicable to any matrix, thus generalizing the diagonalization concept. It is essential for low-rank approximations, enabling efficient data representation. It is also used in image compression, recommendation systems, sparse representations like k-SVD, and optimizing bioprocess data (see [3,4,5,6,7]).
In this paper, we construct a matrix that we call a Lefkovitch-type matrix, since it has the same structure as the well-known Lefkovitch matrix (see [8,9,10]). However, unlike the Lefkovitch matrix, the nonzero entries can take any real value. Specifically, we construct a matrix of the form
$$L = \begin{pmatrix}
p_1 & f_2 & f_3 & \cdots & f_n\\
g_1 & p_2 & 0 & \cdots & 0\\
0 & g_2 & p_3 & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & g_{n-1} & p_n
\end{pmatrix}, \qquad f_j,\, g_j,\, p_j \in \mathbb{R}\setminus\{0\}, \tag{1}$$
from a particular set of singular values associated with an inverse extremal eigenvalue problem for a symmetric bordered tridiagonal matrix of the form
$$B = \begin{pmatrix}
a_1 & b_1 & c_1 & c_2 & \cdots & c_{n-2}\\
b_1 & a_2 & b_2 & 0 & \cdots & 0\\
c_1 & b_2 & a_3 & b_3 & \ddots & \vdots\\
c_2 & 0 & b_3 & \ddots & \ddots & 0\\
\vdots & \vdots & \ddots & \ddots & \ddots & b_{n-1}\\
c_{n-2} & 0 & \cdots & 0 & b_{n-1} & a_n
\end{pmatrix}, \qquad a_j,\, b_j,\, c_j \in \mathbb{R}. \tag{2}$$
Additionally, we provide a solution to the inverse extremal eigenvalue problem for a nonsymmetric bordered tridiagonal matrix of the form
$$B = \begin{pmatrix}
a_1 & b_1 & d_1 & d_2 & \cdots & d_{n-2}\\
c_1 & a_2 & b_2 & 0 & \cdots & 0\\
e_1 & c_2 & a_3 & b_3 & \ddots & \vdots\\
e_2 & 0 & c_3 & \ddots & \ddots & 0\\
\vdots & \vdots & \ddots & \ddots & \ddots & b_{n-1}\\
e_{n-2} & 0 & \cdots & 0 & c_{n-1} & a_n
\end{pmatrix}, \qquad a_j,\, b_j,\, c_j,\, d_j,\, e_j \in \mathbb{R}. \tag{3}$$
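For later reference, the following small helpers (a NumPy sketch; the function names are ours) assemble matrices with the zero patterns (1) and (2) from given entry vectors:

```python
import numpy as np

def lefkovitch_type(p, f, g):
    """Matrix of the form (1): full first row (p1, f2, ..., fn),
    diagonal (p1, ..., pn) and subdiagonal (g1, ..., g_{n-1})."""
    n = len(p)
    L = np.zeros((n, n))
    L[0, 0], L[0, 1:] = p[0], f        # f = (f2, ..., fn)
    for j in range(1, n):
        L[j, j], L[j, j - 1] = p[j], g[j - 1]
    return L

def bordered_tridiag_sym(a, b, c):
    """Symmetric matrix of the form (2): tridiagonal part from a and b,
    plus first row/column (a1, b1, c1, ..., c_{n-2})."""
    B = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    B[0, 2:], B[2:, 0] = c, c
    return B
```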
The inverse extremal eigenvalue problem (IEEP) concerns the reconstruction of a structured matrix from prescribed extremal spectral data of its leading principal submatrices. This problem contrasts with the traditional inverse eigenvalue problem (IEP), which typically uses spectral data from the matrix itself. Within this framework, various classes of structured matrices have been studied, including symmetric and nonsymmetric tridiagonal and bordered diagonal matrices, whose combined structures give rise to a bordered tridiagonal matrix (see [11,12]) and the references cited therein. Matrices reconstructed from extremal spectral data are associated with some applications in science and engineering, including graph theory, vibration analysis, and particularly population dynamics (see [8,13,14]). In this regard, the Lefkovitch matrix introduced in [15] arises naturally in the discrete analysis of population growth by stages. As far as we know, a solution to the IEEP for this class of matrices is given only in [8].
Analogously, the inverse singular value problem (ISVP) refers to the construction of a structured matrix with prescribed singular values and/or singular vectors. There have been a few advances in this area, with some ISVPs for specific structured matrices and some numerical methods (see [16,17,18,19,20,21,22]). However, the construction of Lefkovitch matrices from singular values has not yet been explored in the literature. To address this problem, we observe the singular values of the submatrices
$$\begin{pmatrix}
p_1 & f_2 & f_3 & \cdots & f_j & \cdots & f_n\\
g_1 & p_2 & 0 & \cdots & 0 & \cdots & 0\\
 & \ddots & \ddots & & \vdots & & \vdots\\
0 & \cdots & g_{j-1} & p_j & 0 & \cdots & 0
\end{pmatrix}, \qquad j = 1, 2, \ldots, n,$$
which are formed by the first j full rows of L; these singular values are the non-negative square roots of the eigenvalues of the leading principal submatrices of a bordered tridiagonal matrix. We introduce the following definition:
Definition 1. 
Let A be an n × n matrix. The j × n submatrix formed by the first j full rows of A is called the leading row submatrix of order j.
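In matrix-slicing terms (an illustrative remark; the code below is ours), the leading row submatrix of order j keeps the first j rows and all n columns, whereas the leading principal submatrix keeps the first j rows and columns:

```python
import numpy as np

A = np.arange(16.0).reshape(4, 4)
j = 2
A_row = A[:j, :]      # j x n leading row submatrix of order j (Definition 1)
A_prin = A[:j, :j]    # j x j leading principal submatrix
print(A_row.shape, A_prin.shape)   # (2, 4) (2, 2)
```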
In this context, we will construct a Lefkovitch-type matrix considering the minimal and maximal singular values of all its leading row submatrices. Based on this, a special ISVP is established, which we will call the inverse extremal singular value problem (IESVP) of leading row submatrices. For this problem, we provide a solution using the technique given in [20], which requires a prior solution to the IEEP for bordered tridiagonal matrices.
Bordered tridiagonal matrices arise in the computation of electric power systems, spline interpolation and approximation, telecommunication analysis, parallel computing, and particularly in the numerical solution of certain partial differential equations, such as the Schrödinger equation. In these contexts, the coefficient matrices of the associated systems of linear equations have the structure of a bordered tridiagonal matrix. Because of this, some efficient algorithmic procedures have been developed to calculate their inverses and determinants, as well as methods for solving the associated linear systems (see [23,24,25,26,27,28]).
Throughout this paper, given an $n \times n$ matrix $A$, we will denote by $A_j$ its $j \times j$ leading principal submatrix and by $A_j$ its $j \times n$ leading row submatrix; the intended meaning will be clear from the context. We write $\sigma(A_j) = \{\lambda_1^{(j)}, \lambda_2^{(j)}, \ldots, \lambda_j^{(j)}\}$ for the spectrum of $A_j$, with $\lambda_1^{(j)} \leq \lambda_2^{(j)} \leq \cdots \leq \lambda_j^{(j)}$, and $\Sigma(A_j) = \{\sigma_1^{(j)}, \sigma_2^{(j)}, \ldots, \sigma_j^{(j)}\}$ for the set of singular values of $A_j$, with $\sigma_1^{(j)} \leq \sigma_2^{(j)} \leq \cdots \leq \sigma_j^{(j)}$. We will call the minimal eigenvalue $\lambda_1^{(j)}$ and the maximal eigenvalue $\lambda_j^{(j)}$ of $A_j$ the extremal eigenvalues of $A_j$. Analogously, we will call the minimal singular value $\sigma_1^{(j)}$ and the maximal singular value $\sigma_j^{(j)}$ the extremal singular values of $A_j$.
In this work, we discuss the following inverse problems:
ISEEP: Given the set of $2n-1$ real numbers
$$\lambda_1^{(n)},\ \ldots,\ \lambda_1^{(j)},\ \ldots,\ \lambda_1^{(2)},\ \lambda_1^{(1)},\ \lambda_2^{(2)},\ \ldots,\ \lambda_j^{(j)},\ \ldots,\ \lambda_n^{(n)},$$
and a nonzero vector $x = (x_1, x_2, \ldots, x_n)^T$, construct a symmetric bordered tridiagonal matrix $B$ of the form (2), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$, $j = 1, 2, \ldots, n$, of $B$, and $(\lambda_n^{(n)}, x)$ is an eigenpair of $B$.
INSEEP: Given the set of $3n-3$ real numbers
$$\lambda_1^{(n)},\ \ldots,\ \lambda_1^{(j)},\ \ldots,\ \lambda_1^{(2)},\ \lambda_1^{(1)},\ \lambda_2^{(2)},\ \ldots,\ \lambda_j^{(j)},\ \ldots,\ \lambda_n^{(n)},\quad \ell_1, \ell_2, \ldots, \ell_{n-2},$$
and the nonzero vectors $x = (x_1, \ldots, x_n)^T$ and $y = (y_1, \ldots, y_{n-1})^T$, construct a nonsymmetric bordered tridiagonal matrix $B$ of the form (3) with $d_i = \ell_i$, $i = 1, \ldots, n-2$, such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$, $j = 1, 2, \ldots, n$, of $B$, $(\lambda_n^{(n)}, x)$ is an eigenpair of the matrix $B$, and $(\lambda_{n-1}^{(n-1)}, y)$ is an eigenpair of the leading principal submatrix $B_{n-1}$.
IESVP: Given the set of $2n-1$ positive real numbers
$$\sigma_1^{(n)},\ \ldots,\ \sigma_1^{(j)},\ \ldots,\ \sigma_1^{(2)},\ \sigma_1^{(1)},\ \sigma_2^{(2)},\ \ldots,\ \sigma_j^{(j)},\ \ldots,\ \sigma_n^{(n)},$$
a nonzero vector $u = (u_1, u_2, \ldots, u_n)^T$, and real numbers $\ell_1$ and $\ell_2$, construct a Lefkovitch-type matrix $L$ of the form (1) with $p_1 = \ell_1$ and $g_1 = \ell_2$, such that $\sigma_1^{(j)}$ and $\sigma_j^{(j)}$ are the extremal singular values of the leading row submatrix $L_j$, $j = 1, 2, \ldots, n$, of $L$, and $(\sigma_n^{(n)}, u)$ is a left singular pair of $L$.
The following preliminary results are necessary for the development of the subsequent sections.
Lemma 1. 
(Cauchy's interlacing theorem). Let $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ be the eigenvalues of an $n \times n$ real symmetric matrix $A$ and $\mu_1 \leq \mu_2 \leq \cdots \leq \mu_{n-1}$ be the eigenvalues of an $(n-1) \times (n-1)$ principal submatrix of $A$. Then
$$\lambda_1 \leq \mu_1 \leq \lambda_2 \leq \mu_2 \leq \cdots \leq \lambda_{n-1} \leq \mu_{n-1} \leq \lambda_n.$$
For the extremal eigenvalues of all leading principal submatrices, this result establishes that
$$\lambda_1^{(n)} \leq \cdots \leq \lambda_1^{(j)} \leq \cdots \leq \lambda_1^{(2)} \leq \lambda_1^{(1)} \leq \lambda_2^{(2)} \leq \cdots \leq \lambda_j^{(j)} \leq \cdots \leq \lambda_n^{(n)}.$$
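This monotone behaviour of the extremal eigenvalues is easy to observe numerically; the sketch below (our code, with random symmetric test data) checks that the minimal eigenvalues of the leading principal submatrices do not increase and the maximal ones do not decrease with the order:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                       # arbitrary real symmetric matrix

mins, maxs = [], []
for j in range(1, 7):
    eig = np.linalg.eigvalsh(A[:j, :j]) # spectrum of the leading principal submatrix A_j
    mins.append(eig[0])
    maxs.append(eig[-1])

print(all(mins[k + 1] <= mins[k] for k in range(5)))   # lambda_1^(j) non-increasing in j
print(all(maxs[k + 1] >= maxs[k] for k in range(5)))   # lambda_j^(j) non-decreasing in j
```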
Lemma 2 
([11]). Let $P(\lambda)$ be a monic polynomial of degree n with all real zeros. If α and β are, respectively, the smallest and largest zeros of $P(\lambda)$, then
(1) 
If $\mu < \alpha$, then $(-1)^n P(\mu) > 0$,
(2) 
If μ > β , then P ( μ ) > 0 .
The remainder of this paper is organized as follows: Section 2 presents the main results in which sufficient conditions are given for the existence of a solution to the proposed problems. Section 3 provides numerical examples to illustrate our results. The pseudocodes of the algorithms used in these examples can be found in Appendix A. Finally, Section 4 presents some concluding remarks.

2. Main Results

In this section, we present the main results of our work, focusing on the three proposed inverse extremal problems. For each case, we provide sufficient conditions on the prescribed data to ensure the existence of the corresponding matrix, along with a constructive procedure for obtaining it.

2.1. Solution to ISEEP

Next, we consider the inverse extremal eigenvalue problem for a symmetric bordered tridiagonal matrix. As a preliminary step, we establish the following lemma, which is fundamental for the derivation of the main result.
Lemma 3. 
Let B be an $n \times n$ symmetric bordered tridiagonal matrix of the form (2), and let $B_j$ be the $j \times j$ leading principal submatrix of B with characteristic polynomial $P_j(\lambda) = \det(\lambda I_j - B_j)$, $j = 1, 2, \ldots, n$. Then the sequence $\{P_j(\lambda)\}_{j=1}^{n}$ satisfies the recurrence relation:
$$\begin{aligned}
P_1(\lambda) &= \lambda - a_1,\\
P_2(\lambda) &= (\lambda - a_2)P_1(\lambda) - b_1^2,\\
P_j(\lambda) &= (\lambda - a_j)P_{j-1}(\lambda) - b_{j-1}^2 P_{j-2}(\lambda) - c_{j-2}^2 Q_{j-1}(\lambda) + (-1)^{j-1}\, 2\, b_{j-1} c_{j-2} R_{j-1}(\lambda), \qquad j = 3, 4, \ldots, n,
\end{aligned}$$
where Q j 1 ( λ ) is the characteristic polynomial of the submatrix resulting from deleting the first row and the first column of B j 1 , and R j 1 ( λ ) is the determinant of the submatrix resulting from deleting the ( j 1 ) -th row and the first column of λ I j 1 B j 1 .
Proof. 
It is immediate by expanding det ( λ I j B j ) .    □
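The expansion can also be checked numerically. The sketch below (our code and test data; $P_j$, $Q_{j-1}$ and $R_{j-1}$ are evaluated as determinants) compares both sides of the recurrence at a random point λ for a matrix of the form (2):

```python
import numpy as np

def bordered_tridiag_sym(a, b, c):
    B = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    B[0, 2:], B[2:, 0] = c, c
    return B

rng = np.random.default_rng(2)
n = 7
a, b, c = rng.standard_normal(n), rng.standard_normal(n - 1), rng.standard_normal(n - 2)
B = bordered_tridiag_sym(a, b, c)
lam = rng.standard_normal()

def P(j):                                  # P_j(lam) = det(lam*I_j - B_j)
    return np.linalg.det(lam * np.eye(j) - B[:j, :j]) if j > 0 else 1.0

for j in range(3, n + 1):
    M = lam * np.eye(j - 1) - B[:j - 1, :j - 1]
    Q = np.linalg.det(M[1:, 1:])           # delete first row and first column
    R = np.linalg.det(np.delete(np.delete(M, j - 2, axis=0), 0, axis=1))  # delete row j-1, column 1
    rhs = ((lam - a[j - 1]) * P(j - 1) - b[j - 2] ** 2 * P(j - 2)
           - c[j - 3] ** 2 * Q + (-1) ** (j - 1) * 2 * b[j - 2] * c[j - 3] * R)
    print(j, np.isclose(P(j), rhs))        # both sides of the recurrence agree
```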
In the remainder of this paper, the following notations will be adopted.
$$\begin{aligned}
\phi_j &= P_{j-2}(\lambda_1^{(j)})\,P_{j-1}(\lambda_j^{(j)}) - P_{j-2}(\lambda_j^{(j)})\,P_{j-1}(\lambda_1^{(j)}),\\
\psi_j &= Q_{j-1}(\lambda_1^{(j)})\,P_{j-1}(\lambda_j^{(j)}) - Q_{j-1}(\lambda_j^{(j)})\,P_{j-1}(\lambda_1^{(j)}),\\
\eta_j &= (-1)^{j-1}\left[R_{j-1}(\lambda_j^{(j)})\,P_{j-1}(\lambda_1^{(j)}) - R_{j-1}(\lambda_1^{(j)})\,P_{j-1}(\lambda_j^{(j)})\right],\\
\xi_j &= \left(\lambda_j^{(j)} - \lambda_1^{(j)}\right)P_{j-1}(\lambda_1^{(j)})\,P_{j-1}(\lambda_j^{(j)}),
\end{aligned}\tag{6}$$
for $j = 2, 3, \ldots, n$, with $P_0(\lambda) = 1$.
Theorem 1. 
Let the $2n-1$ real numbers $\{\lambda_1^{(j)}, \lambda_j^{(j)}\}_{j=1}^{n}$ and the vector $x = (x_1, x_2, \ldots, x_n)$ satisfy
$$\lambda_1^{(n)} < \cdots < \lambda_1^{(j)} < \cdots < \lambda_1^{(2)} < \lambda_1^{(1)} < \lambda_2^{(2)} < \cdots < \lambda_j^{(j)} < \cdots < \lambda_n^{(n)} \tag{7}$$
and
$$x_j x_{j+1} > 0, \qquad j = 1, 2, \ldots, n-1. \tag{8}$$
If
$$\eta_j^2 - \psi_j \phi_j \neq 0, \qquad j = 3, \ldots, n, \tag{9}$$
where $\phi_j$, $\psi_j$, and $\eta_j$ are as in (6), then there exists an $n \times n$ symmetric bordered tridiagonal matrix B of the form (2), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$ of B, for $j = 1, 2, \ldots, n$, and $(\lambda_n^{(n)}, x)$ is an eigenpair of B.
Proof of Theorem 1. 
We show that the system of equations
$$P_j(\lambda_1^{(j)}) = 0, \qquad P_j(\lambda_j^{(j)}) = 0, \qquad j = 1, 2, \ldots, n, \tag{10}$$
can be solved recursively for real values $a_j$, $j = 1, 2, \ldots, n$, $b_j$, $j = 1, 2, \ldots, n-1$, and $c_j$, $j = 1, 2, \ldots, n-2$, where the characteristic polynomials $P_j(\lambda)$ satisfy Lemma 3.
It is clear that a 1 = λ 1 ( 1 ) . From system (10), for j = 2 , and Lemma 2, we can get the required entries b 1 and a 2 as follows
$$b_1 = \sqrt{\left(\lambda_1^{(1)} - \lambda_1^{(2)}\right)\left(\lambda_2^{(2)} - \lambda_1^{(1)}\right)} \tag{11}$$
and
$$a_2 = \lambda_1^{(2)} + \lambda_2^{(2)} - \lambda_1^{(1)}. \tag{12}$$
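For the step j = 2 these two formulas can be verified directly; a quick numerical check with arbitrary data satisfying $\lambda_1^{(2)} < \lambda_1^{(1)} < \lambda_2^{(2)}$ (the specific numbers below are ours):

```python
import numpy as np

l11, l12, l22 = 1.0, -0.5, 2.5            # lambda_1^(1), lambda_1^(2), lambda_2^(2)

a1 = l11
b1 = np.sqrt((l11 - l12) * (l22 - l11))   # formula (11)
a2 = l12 + l22 - l11                      # formula (12)

B2 = np.array([[a1, b1], [b1, a2]])
print(np.linalg.eigvalsh(B2))             # [-0.5, 2.5]: the prescribed extremal eigenvalues
```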
Now, from system (10), for j = 3 , 4 , , n , we have
$$\begin{aligned}
b_{j-1}^{2}P_{j-2}(\lambda_1^{(j)}) + c_{j-2}^{2}Q_{j-1}(\lambda_1^{(j)}) - (-1)^{j-1}\,2\,b_{j-1}c_{j-2}R_{j-1}(\lambda_1^{(j)}) &= \left(\lambda_1^{(j)} - a_j\right)P_{j-1}(\lambda_1^{(j)})\\
b_{j-1}^{2}P_{j-2}(\lambda_j^{(j)}) + c_{j-2}^{2}Q_{j-1}(\lambda_j^{(j)}) - (-1)^{j-1}\,2\,b_{j-1}c_{j-2}R_{j-1}(\lambda_j^{(j)}) &= \left(\lambda_j^{(j)} - a_j\right)P_{j-1}(\lambda_j^{(j)}),
\end{aligned}\tag{13}$$
which has real solutions $a_j$, $b_{j-1}$, and $c_{j-2}$. Indeed, $a_j$ exists since $P_{j-1}(\lambda_i^{(j)}) \neq 0$ for $j = 3, 4, \ldots, n$ and $i = 1, j$. Furthermore, solving (13) we get the following equation
$$\phi_j X^2 + \psi_j Y^2 + 2\eta_j XY + \xi_j = 0, \tag{14}$$
where $X = b_{j-1}$ and $Y = c_{j-2}$, which represents a conic. Fixing X, the discriminant of Equation (14) is $\Delta_Y = 4\left[\left(\eta_j^2 - \psi_j\phi_j\right)X^2 - \psi_j\xi_j\right]$.
Then Δ Y > 0 if
(i)
$\eta_j^2 - \psi_j\phi_j > 0$, for all $X \in \mathbb{R}$.
(ii)
$\eta_j^2 - \psi_j\phi_j < 0$, for $X \in \left[-\sqrt{\dfrac{\psi_j\xi_j}{\eta_j^2 - \psi_j\phi_j}},\ \sqrt{\dfrac{\psi_j\xi_j}{\eta_j^2 - \psi_j\phi_j}}\right]$;
therefore, Y exists in either case.
Moreover, the point ( X , Y ) = ( b j 1 , c j 2 ) belongs to the conic
$$C = \left\{(X, Y) \in \mathbb{R}^2 : \phi_j X^2 + \psi_j Y^2 + 2\eta_j XY + \xi_j = 0\right\},$$
which, by Lemma 2 and condition (9), is non-degenerate, non-empty, and centered at the origin. Consequently, there exist real numbers b j 1 and c j 2 , j = 3 , 4 , , n satisfying (14).
On the other hand, as $(\lambda_n^{(n)}, x)$ is an eigenpair of B, it follows that
$$\begin{aligned}
a_1 x_1 + b_1 x_2 + \sum_{k=1}^{n-2} c_k x_{k+2} &= \lambda_n^{(n)} x_1,\\
c_{j-2} x_1 + b_{j-1} x_{j-1} + a_j x_j + b_j x_{j+1} &= \lambda_n^{(n)} x_j, \qquad j = 2, 3, \ldots, n,
\end{aligned}\tag{15}$$
with c 0 = b n = 0 .
Thus, from (15) and condition (8), we obtain that
$$b_j = \left(\lambda_n^{(n)} - a_j\right)\frac{x_j}{x_{j+1}} - b_{j-1}\frac{x_{j-1}}{x_{j+1}} - c_{j-2}\frac{x_1}{x_{j+1}}, \qquad j = 2, 3, \ldots, n-1. \tag{16}$$
Finally, from Equation (14), for j = 3 , 4 , , n , condition (9) and Lemma 2, we have
$$c_{j-2} = \frac{-b_{j-1}\eta_j \pm \sqrt{b_{j-1}^{2}\left(\eta_j^{2} - \psi_j\phi_j\right) - \psi_j\xi_j}}{\psi_j} \tag{17}$$
and
$$a_j = \lambda_i^{(j)} - \frac{b_{j-1}^{2}P_{j-2}(\lambda_i^{(j)}) + c_{j-2}^{2}Q_{j-1}(\lambda_i^{(j)}) - (-1)^{j-1}\,2\,b_{j-1}c_{j-2}R_{j-1}(\lambda_i^{(j)})}{P_{j-1}(\lambda_i^{(j)})} \tag{18}$$
for i = 1 , j . The proof is complete.    □
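Formula (17) is nothing more than the quadratic formula applied to (14), read as an equation in Y for a fixed X. A minimal sketch (with arbitrary coefficients of our choosing, picked so that the radicand is non-negative) confirms that the resulting pair lies on the conic:

```python
import numpy as np

phi, psi, eta, xi = 1.0, 2.0, 3.0, -4.0    # illustrative stand-ins for phi_j, psi_j, eta_j, xi_j
X = 1.5                                    # plays the role of b_{j-1}

disc = X**2 * (eta**2 - psi * phi) - psi * xi
Y = (-X * eta + np.sqrt(disc)) / psi       # one of the two roots in (17), i.e. c_{j-2}

print(np.isclose(phi * X**2 + psi * Y**2 + 2 * eta * X * Y + xi, 0.0))   # (X, Y) satisfies (14)
```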

2.2. Solution to INSEEP

This section addresses the inverse extremal eigenvalue problem for nonsymmetric bordered tridiagonal matrices. The analysis begins with the following lemma.
Lemma 4. 
Let B be an $n \times n$ nonsymmetric bordered tridiagonal matrix of the form (3), and let $B_j$ be the $j \times j$ leading principal submatrix of B with characteristic polynomial $P_j(\lambda) = \det(\lambda I_j - B_j)$, $j = 1, 2, \ldots, n$. Then the sequence $\{P_j(\lambda)\}_{j=1}^{n}$ satisfies the following recurrence relation:
$$\begin{aligned}
P_1(\lambda) &= \lambda - a_1,\\
P_2(\lambda) &= (\lambda - a_2)P_1(\lambda) - b_1 c_1,\\
P_j(\lambda) &= (\lambda - a_j)P_{j-1}(\lambda) - b_{j-1}c_{j-1}P_{j-2}(\lambda) - d_{j-2}e_{j-2}Q_{j-1}(\lambda)\\
&\quad + (-1)^{j-1}\left[b_{j-1}e_{j-2}R_{j-1}(\lambda) + c_{j-1}d_{j-2}T_{j-1}(\lambda)\right], \qquad j = 3, 4, \ldots, n, 
\end{aligned}\tag{19}$$
where R j 1 ( λ ) , T j 1 ( λ ) , and Q j 1 ( λ ) are the determinants of the submatrices resulting from eliminating the ( j 1 ) -th row and the first column, the first row and ( j 1 ) -th column, and the first row and column of submatrix λ I j 1 B j 1 , respectively.
To simplify future calculations, we introduce the following notation:
$$\begin{aligned}
\varphi(\lambda_i^{(j)}) &= b_{j-1}P_{j-2}(\lambda_i^{(j)}) - (-1)^{j-1}d_{j-2}T_{j-1}(\lambda_i^{(j)}),\\
\theta(\lambda_i^{(j)}) &= d_{j-2}Q_{j-1}(\lambda_i^{(j)}) - (-1)^{j-1}b_{j-1}R_{j-1}(\lambda_i^{(j)}),\\
\alpha(\lambda_i^{(j)}) &= \left(\lambda_i^{(j)} - \frac{m_j}{w_{j,j+1}}\right)P_{j-1}(\lambda_i^{(j)}),\\
\beta(\lambda_i^{(j)}) &= \varphi(\lambda_i^{(j)}) - \frac{w_{j-1,j+1}}{w_{j,j+1}}P_{j-1}(\lambda_i^{(j)}),\\
\gamma(\lambda_i^{(j)}) &= \theta(\lambda_i^{(j)}) - \frac{w_{1,j+1}}{w_{j,j+1}}P_{j-1}(\lambda_i^{(j)}), \qquad i = 1, j,
\end{aligned}\tag{20}$$
$$w_{h,j} = x_h y_j - x_j y_h, \qquad h = 1, 2, \ldots, n, \tag{21}$$
and
$$t_j = \left(\lambda_n^{(n)} - \lambda_{n-1}^{(n-1)}\right)x_j y_j, \qquad m_{j-1} = \lambda_n^{(n)}x_{j-1}y_j - \lambda_{n-1}^{(n-1)}y_{j-1}x_j, \tag{22}$$
for $j = 3, 4, \ldots, n$, with $y_n = 0$.
Theorem 2. 
Let the $2n-1$ real numbers $\{\lambda_1^{(j)}, \lambda_j^{(j)}\}_{j=1}^{n}$, the vectors $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_{n-1})$, and the real numbers $\ell_1, \ell_2, \ldots, \ell_{n-2}$ satisfy (7), (8),
$$y_j y_{j+1} > 0, \qquad j = 1, 2, \ldots, n-2, \tag{23}$$
and
$$w_{h,j} \neq 0, \qquad h = 1, 2, \ldots, n, \quad j = 3, 4, \ldots, n. \tag{24}$$
If
$$\gamma(\lambda_j^{(j)})\,\beta(\lambda_1^{(j)}) - \gamma(\lambda_1^{(j)})\,\beta(\lambda_j^{(j)}) \neq 0 \tag{25}$$
for $j = 3, 4, \ldots, n$, where w, β, and γ are as in (20) and (21), then there exists an $n \times n$ nonsymmetric bordered tridiagonal matrix B of the form (3), such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$ of B, $j = 1, 2, \ldots, n$, $(\lambda_n^{(n)}, x)$ is an eigenpair of B, $(\lambda_{n-1}^{(n-1)}, y)$ is an eigenpair of $B_{n-1}$, and $d_i = \ell_i$, $i = 1, \ldots, n-2$.
Proof of Theorem 2. 
We show that the system of equations
$$\begin{aligned}
P_j(\lambda_i^{(j)}) &= 0, \qquad j = 1, 2, \ldots, n,\ i = 1, j,\\
Bx &= \lambda_n^{(n)}x,\\
B_{n-1}y &= \lambda_{n-1}^{(n-1)}y,
\end{aligned}\tag{26}$$
where the characteristic polynomials $P_j(\lambda) = \det(\lambda I_j - B_j)$, $j = 1, 2, \ldots, n$, satisfy Lemma 4, has real solutions $a_j$, $b_j$, $c_j$, $d_j$, and $e_j$.
It is clear that a 1 = λ 1 ( 1 ) . Note that the second and third equalities in (26) have the following form:
$$\begin{aligned}
a_1 x_1 + b_1 x_2 + \sum_{k=1}^{n-2} d_k x_{k+2} &= \lambda_n^{(n)} x_1,\\
c_1 x_1 + a_2 x_2 + b_2 x_3 &= \lambda_n^{(n)} x_2,\\
e_{j-2} x_1 + c_{j-1} x_{j-1} + a_j x_j + b_j x_{j+1} &= \lambda_n^{(n)} x_j, \qquad x_{n+1} = 0, \quad j = 3, 4, \ldots, n,\\
a_1 y_1 + b_1 y_2 + \sum_{i=1}^{n-3} d_i y_{i+2} &= \lambda_{n-1}^{(n-1)} y_1,\\
c_1 y_1 + a_2 y_2 + b_2 y_3 &= \lambda_{n-1}^{(n-1)} y_2,\\
e_{j-2} y_1 + c_{j-1} y_{j-1} + a_j y_j + b_j y_{j+1} &= \lambda_{n-1}^{(n-1)} y_j, \qquad y_n = 0, \quad j = 3, 4, \ldots, n-1.
\end{aligned}\tag{27}$$
From (26), for j = 2 , condition (7), and Lemma 2, we obtain
$$a_2 = \lambda_1^{(2)} + \lambda_2^{(2)} - \lambda_1^{(1)}, \tag{28}$$
and
$$b_1 = \frac{1}{c_1}\left(\lambda_2^{(2)} - \lambda_1^{(1)}\right)\left(\lambda_1^{(1)} - \lambda_1^{(2)}\right). \tag{29}$$
Now, from (27) and condition (24) we get
$$c_1 = \frac{m_2 + a_2 w_{3,2}}{w_{1,3}}, \tag{30}$$
and
$$b_2 = \left(\lambda_n^{(n)} - a_2\right)\frac{x_2}{x_3} - c_1\frac{x_1}{x_3}. \tag{31}$$
On the other hand, for j = 3 , , n 1 , from (27) and condition (24) we get
$$a_j = \frac{m_j - e_{j-2}w_{1,j+1} - c_{j-1}w_{j-1,j+1}}{w_{j,j+1}} \tag{32}$$
and
$$b_j = \frac{t_j - e_{j-2}w_{1,j} - c_{j-1}w_{j-1,j}}{w_{j+1,j}}. \tag{33}$$
Then, from the system (26), we get
$$\begin{aligned}
\left(\lambda_1^{(j)} - a_j\right)P_{j-1}(\lambda_1^{(j)}) - c_{j-1}\varphi(\lambda_1^{(j)}) - e_{j-2}\theta(\lambda_1^{(j)}) &= 0,\\
\left(\lambda_j^{(j)} - a_j\right)P_{j-1}(\lambda_j^{(j)}) - c_{j-1}\varphi(\lambda_j^{(j)}) - e_{j-2}\theta(\lambda_j^{(j)}) &= 0,
\end{aligned}\tag{34}$$
which has real solutions $c_{j-1}$ and $e_{j-2}$. Indeed, considering (32) and condition (25), we can rewrite the system (34) as
$$\begin{aligned}
\alpha(\lambda_1^{(j)}) - c_{j-1}\beta(\lambda_1^{(j)}) - e_{j-2}\gamma(\lambda_1^{(j)}) &= 0,\\
\alpha(\lambda_j^{(j)}) - c_{j-1}\beta(\lambda_j^{(j)}) - e_{j-2}\gamma(\lambda_j^{(j)}) &= 0.
\end{aligned}\tag{35}$$
Solving the system (35), we obtain
$$c_{j-1} = \frac{\alpha(\lambda_j^{(j)})\,\gamma(\lambda_1^{(j)}) - \alpha(\lambda_1^{(j)})\,\gamma(\lambda_j^{(j)})}{\beta(\lambda_j^{(j)})\,\gamma(\lambda_1^{(j)}) - \beta(\lambda_1^{(j)})\,\gamma(\lambda_j^{(j)})} \tag{36}$$
and
$$e_{j-2} = \frac{\alpha(\lambda_j^{(j)})\,\beta(\lambda_1^{(j)}) - \alpha(\lambda_1^{(j)})\,\beta(\lambda_j^{(j)})}{\gamma(\lambda_j^{(j)})\,\beta(\lambda_1^{(j)}) - \gamma(\lambda_1^{(j)})\,\beta(\lambda_j^{(j)})}. \tag{37}$$
Analogously, for j = n , we obtain c n 1 and e n 2 where
$$a_n = \lambda_n^{(n)} - e_{n-2}\frac{x_1}{x_n} - c_{n-1}\frac{x_{n-1}}{x_n}. \tag{38}$$
Thus, the proof is concluded.    □
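Formulas (36) and (37) are Cramer's rule applied to the 2 × 2 linear system (35); the following minimal sketch uses placeholder values of α, β, γ at the two extremal eigenvalues (the numbers are arbitrary and only illustrate the elimination):

```python
import numpy as np

# alpha, beta, gamma evaluated at lambda_1^(j) and lambda_j^(j) (placeholder values)
a1_, b1_, g1_ = 2.0, 1.0, -3.0
aj_, bj_, gj_ = 5.0, 4.0, 2.0

c = (aj_ * g1_ - a1_ * gj_) / (bj_ * g1_ - b1_ * gj_)   # formula (36): c_{j-1}
e = (aj_ * b1_ - a1_ * bj_) / (gj_ * b1_ - g1_ * bj_)   # formula (37): e_{j-2}

# both equations of system (35) are satisfied
print(np.isclose(a1_ - c * b1_ - e * g1_, 0.0),
      np.isclose(aj_ - c * bj_ - e * gj_, 0.0))
```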

2.3. Solution to IESVP

We now focus our analysis on finding a solution for the IESVP for Lefkovitch-type matrices, considering the minimal and maximal singular values of all its leading row submatrices and a singular vector of the matrix. Although it is known that the IEP and the ISVP are equivalent for symmetric matrices, and that any ISVP can be addressed by reformulating it as an IEP (see [18,29]), these statements may not hold when the eigenvalues and singular values belong to specific submatrices. This observation led us to consider a different approach.
We begin by recalling the following lemma, which plays a fundamental role in the subsequent analysis. This result, analogous to Cauchy’s interlacing theorem, establishes a relation between the singular values of a matrix and those of its leading row submatrices.
Lemma 5 
([30] (Interlacing property of singular values)). Let $A_p$, an $n \times (n-p)$ (respectively, $(n-p) \times n$) matrix, denote a submatrix of A obtained by deleting any p columns (respectively, any p rows) from A. Then
$$\sigma_i(A) \geq \sigma_i(A_p) \geq \sigma_{i+p}(A), \qquad \text{for } i = 1, 2, \ldots, n-p.$$
As a consequence of this lemma, considering the notation and order established in Section 1 for the extremal singular values of the leading row submatrices L j , it follows that
$$\sigma_1^{(n)} \leq \cdots \leq \sigma_1^{(j)} \leq \cdots \leq \sigma_1^{(2)} \leq \sigma_1^{(1)} \leq \sigma_2^{(2)} \leq \cdots \leq \sigma_j^{(j)} \leq \cdots \leq \sigma_n^{(n)}.$$
Moreover, the inequalities are strict, since the leading principal submatrices of L L T are irreducible. This relationship is a necessary condition satisfied by the extremal singular values of all leading row submatrices of a Lefkovitch-type matrix.
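This chain can be observed numerically for any Lefkovitch-type matrix; the sketch below (our code, random nonzero test entries) checks that the minimal singular values of the leading row submatrices strictly decrease while the maximal ones strictly increase:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
p, f, g = rng.uniform(0.5, 2, n), rng.uniform(0.5, 2, n - 1), rng.uniform(0.5, 2, n - 1)

L = np.zeros((n, n))
L[0, 0], L[0, 1:] = p[0], f
for j in range(1, n):
    L[j, j], L[j, j - 1] = p[j], g[j - 1]

svals = [np.linalg.svd(L[:j, :], compute_uv=False) for j in range(1, n + 1)]
mins = np.array([s.min() for s in svals])
maxs = np.array([s.max() for s in svals])
print(all(np.diff(mins) < 0), all(np.diff(maxs) > 0))
```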
The following result establishes sufficient conditions for the solvability of the IESVP.
Theorem 3. 
Let the $2n-1$ given positive real numbers $\{\sigma_1^{(j)}, \sigma_j^{(j)}\}_{j=1}^{n}$, the vector $u = (u_1, u_2, \ldots, u_n)$, and the real numbers $\ell_1$ and $\ell_2$ satisfy
$$\sigma_1^{(n)} < \cdots < \sigma_1^{(j)} < \cdots < \sigma_1^{(2)} < \sigma_1^{(1)} < \sigma_2^{(2)} < \cdots < \sigma_j^{(j)} < \cdots < \sigma_n^{(n)}, \tag{39}$$
and
$$u_j u_{j+1} > 0, \qquad j = 1, 2, \ldots, n-1. \tag{40}$$
If $\{(\sigma_1^{(j)})^2, (\sigma_j^{(j)})^2\}_{j=1}^{n}$ satisfies condition (9) of Theorem 1, then there exists an $n \times n$ Lefkovitch-type matrix L of the form (1) with $p_1 = \ell_1$ and $g_1 = \ell_2$, such that $\sigma_1^{(j)}$ and $\sigma_j^{(j)}$ are the extremal singular values of the leading row submatrix $L_j$ of L, $j = 1, 2, \ldots, n$, and $(\sigma_n^{(n)}, u)$ is a left singular pair of the matrix L.
Proof of Theorem 3. 
Suppose that (39) and (40) hold. Since $\{(\sigma_1^{(j)})^2, (\sigma_j^{(j)})^2\}_{j=1}^{n}$ and $u = (u_1, u_2, \ldots, u_n)$ also satisfy the conditions of Theorem 1, there exists a symmetric bordered tridiagonal matrix B, with entries $a_j$, $b_j$, $c_j$, of the form (2), such that $(\sigma_1^{(j)})^2$ and $(\sigma_j^{(j)})^2$ are the extremal eigenvalues of its leading principal submatrices and $((\sigma_n^{(n)})^2, u)$ is an eigenpair of B.
On the other hand, for a Lefkovitch-type matrix L of the form (1), we have that
$$LL^{T} = \begin{pmatrix}
p_1^2 + \sum_{k=2}^{n} f_k^2 & f_2 p_2 + g_1 p_1 & f_2 g_2 + f_3 p_3 & f_3 g_3 + f_4 p_4 & \cdots & f_{n-1}g_{n-1} + f_n p_n\\
f_2 p_2 + g_1 p_1 & g_1^2 + p_2^2 & g_2 p_2 & 0 & \cdots & 0\\
f_2 g_2 + f_3 p_3 & g_2 p_2 & g_2^2 + p_3^2 & g_3 p_3 & \ddots & \vdots\\
f_3 g_3 + f_4 p_4 & 0 & g_3 p_3 & \ddots & \ddots & 0\\
\vdots & \vdots & \ddots & \ddots & \ddots & g_{n-1}p_{n-1}\\
f_{n-1}g_{n-1} + f_n p_n & 0 & \cdots & 0 & g_{n-1}p_{n-1} & g_{n-1}^2 + p_n^2
\end{pmatrix}. \tag{41}$$
If we set B = L L T , it follows that
$$p_j = \sqrt{a_j - g_{j-1}^{2}}; \qquad j = 2, 3, \ldots, n, \tag{42}$$
$$g_j = \frac{b_j}{p_j}; \qquad j = 2, 3, \ldots, n-1, \tag{43}$$
$$f_2 = \frac{b_1 - g_1 p_1}{p_2}, \tag{44}$$
$$f_j = \frac{c_{j-2} - f_{j-1}g_{j-1}}{p_j}; \qquad j = 3, 4, \ldots, n, \tag{45}$$
with $p_1 = \ell_1$ and $g_1 = \ell_2$.
Given that the product L j L j T , for j = 1 , 2 , , n , corresponds to the j × j leading principal submatrices of B, we can conclude that σ 1 ( j ) and σ j ( j ) are the extremal singular values of L j . Furthermore, since L L T u = ( σ n ( n ) ) 2 u , it follows that u is a left singular vector of L. Hence, the statement holds.    □
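The proof is constructive, and relations (42)–(45) can be turned directly into a recovery procedure. The sketch below is our code, not the paper's implementation; it assumes the radicands stay positive and all divisors are nonzero, as the theorem guarantees for admissible data, and it checks the recovery on a random Lefkovitch-type matrix with positive entries:

```python
import numpy as np

def recover_L(B, l1, l2):
    """Recover a Lefkovitch-type matrix L with p1 = l1, g1 = l2 from B = L L^T,
    where B is bordered tridiagonal of the form (2), using (42)-(45)."""
    n = B.shape[0]
    a, b, c = np.diag(B).copy(), np.diag(B, 1).copy(), B[0, 2:].copy()
    p, g, f = np.zeros(n), np.zeros(n - 1), np.zeros(n)   # f[0] unused, f[1] = f_2, ...
    p[0], g[0] = l1, l2
    for j in range(1, n):
        p[j] = np.sqrt(a[j] - g[j - 1] ** 2)               # (42)
        if j < n - 1:
            g[j] = b[j] / p[j]                             # (43)
        if j == 1:
            f[1] = (b[0] - g[0] * p[0]) / p[1]             # (44)
        else:
            f[j] = (c[j - 2] - f[j - 1] * g[j - 1]) / p[j] # (45)
    L = np.zeros((n, n))
    L[0, 0], L[0, 1:] = p[0], f[1:]
    for j in range(1, n):
        L[j, j], L[j, j - 1] = p[j], g[j - 1]
    return L

# Self-check on a random Lefkovitch-type matrix with positive entries (test data only).
rng = np.random.default_rng(4)
n = 6
p0, f0, g0 = rng.uniform(0.5, 2, n), rng.uniform(0.5, 2, n - 1), rng.uniform(0.5, 2, n - 1)
L0 = np.zeros((n, n))
L0[0, 0], L0[0, 1:] = p0[0], f0
for j in range(1, n):
    L0[j, j], L0[j, j - 1] = p0[j], g0[j - 1]
print(np.allclose(recover_L(L0 @ L0.T, p0[0], g0[0]), L0))
```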

3. Examples

This section provides some examples that illustrate and validate the theoretical results obtained in the preceding sections. Theorems 1, 2, and 3 lead to algorithmic procedures that reconstruct the associated matrices. These algorithms were implemented using MATLAB R2024b.
Next, we introduce the following notation: $\lambda = (\lambda_1^{(n)}, \ldots, \lambda_1^{(2)}, \lambda_1^{(1)}, \lambda_2^{(2)}, \ldots, \lambda_n^{(n)})$ and $\sigma = (\sigma_1^{(n)}, \ldots, \sigma_1^{(2)}, \sigma_1^{(1)}, \sigma_2^{(2)}, \ldots, \sigma_n^{(n)})$ are the vectors whose components are the prescribed eigenvalues and singular values, respectively. $\hat{\lambda} = (\hat{\lambda}_1^{(n)}, \ldots, \hat{\lambda}_1^{(2)}, \hat{\lambda}_1^{(1)}, \hat{\lambda}_2^{(2)}, \ldots, \hat{\lambda}_n^{(n)})$ and $\hat{\sigma} = (\hat{\sigma}_1^{(n)}, \ldots, \hat{\sigma}_1^{(2)}, \hat{\sigma}_1^{(1)}, \hat{\sigma}_2^{(2)}, \ldots, \hat{\sigma}_n^{(n)})$ are vectors with components equal to the extremal eigenvalues of the leading principal submatrices and the extremal singular values of the leading row submatrices of the reconstructed matrix, respectively.
$\hat{x} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n)^T$, $\hat{y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_{n-1})^T$, and $\hat{u} = (\hat{u}_1, \hat{u}_2, \ldots, \hat{u}_n)^T$ are, respectively, the eigenvector associated with $\hat{\lambda}_n^{(n)}$, the eigenvector associated with $\hat{\lambda}_{n-1}^{(n-1)}$, and the left singular vector associated with $\hat{\sigma}_n^{(n)}$, of the corresponding reconstructed matrices.
To evaluate the closeness between the computed results and the prescribed data, we consider the following relative errors:
$$e_\lambda = \frac{\|\lambda - \hat{\lambda}\|_2}{\|\lambda\|_2}, \quad e_\sigma = \frac{\|\sigma - \hat{\sigma}\|_2}{\|\sigma\|_2}, \quad e_x = \frac{\|x - \hat{x}\|_2}{\|x\|_2}, \quad e_y = \frac{\|y - \hat{y}\|_2}{\|y\|_2}, \quad \text{and} \quad e_u = \frac{\|u - \hat{u}\|_2}{\|u\|_2}.$$
Example 1. 
To illustrate the results obtained in Section 2.2, we use the data prescribed in Table 1, which satisfy the conditions of Theorem 1.
The symmetric bordered tridiagonal matrix providing the requisite spectral properties is as follows:
$$B = \begin{pmatrix}
0.3245 & 0.5108 & 0.3507 & 0.9390 & 0.8759 & 0.5502 & 0.6225 & 0.5870\\
0.5108 & 3.7175 & 0.8176 & 0 & 0 & 0 & 0 & 0\\
0.3507 & 0.8176 & 3.1029 & 0.7948 & 0 & 0 & 0 & 0\\
0.9390 & 0 & 0.7948 & 1.9472 & 0.6443 & 0 & 0 & 0\\
0.8759 & 0 & 0 & 0.6443 & 1.7434 & 0.3786 & 0 & 0\\
0.5502 & 0 & 0 & 0 & 0.3786 & 1.7871 & 0.8116 & 0\\
0.6225 & 0 & 0 & 0 & 0 & 0.8116 & 1.2254 & 0.5328\\
0.5870 & 0 & 0 & 0 & 0 & 0 & 0.5328 & 2.0340
\end{pmatrix},$$
and the spectra of its leading principal submatrices are detailed in Table 2.
The closeness of the obtained data is illustrated in Table 3.
Example 2. 
In this example, the initial data given, which satisfy the criteria established by Theorem 2, are presented in Table 4. Table 5 shows the extremal eigenvalues of the submatrices B j of the reconstructed nonsymmetric bordered tridiagonal matrix B . Finally, Table 6 provides the relative errors of the results obtained in our computations.
$$B = \begin{pmatrix}
1.9944 & 0.9326 & 0.1091 & 0.0902 & 0.0468 & 0.0252 & 0.0231\\
0.2576 & 2.0755 & 0.1635 & 0 & 0 & 0 & 0\\
0.1385 & 0.7519 & 0.9037 & 0.9211 & 0 & 0 & 0\\
0.0284 & 0 & 0.2287 & 1.2769 & 0.7947 & 0 & 0\\
0.0012 & 0 & 0 & 0.0642 & 0.1027 & 0.5774 & 0\\
0.1001 & 0 & 0 & 0 & 0.7673 & 4.6171 & 0.4395\\
0.8130 & 0 & 0 & 0 & 0 & 0.6597 & 3.2656
\end{pmatrix}.$$
Example 3. 
Let L be an n × n Lefkovitch-type matrix
$$L = \begin{pmatrix}
2 & 1 & 1 & \cdots & 1\\
1 & 2 & 0 & \cdots & 0\\
0 & 1 & 2 & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & 0\\
0 & \cdots & 0 & 1 & 2
\end{pmatrix},$$
and B the symmetric bordered tridiagonal matrix
$$B = \begin{pmatrix}
3+n & 4 & 3 & 3 & \cdots & 3\\
4 & 5 & 2 & 0 & \cdots & 0\\
3 & 2 & 5 & 2 & \ddots & \vdots\\
3 & 0 & 2 & \ddots & \ddots & 0\\
\vdots & \vdots & \ddots & \ddots & \ddots & 2\\
3 & 0 & \cdots & 0 & 2 & 5
\end{pmatrix},$$
where $B = LL^T$. In this example, the matrix B is reconstructed from the squares of the extremal singular values of the leading row submatrices $L_j$ of L, and the left singular vector associated with the maximal singular value of L. These data satisfy the conditions of Theorems 1 and 3. Then, using the procedure given in Theorem 3, with $p_1 = 2$ and $g_1 = 1$, the matrix L is reconstructed. We consider different sizes of these matrices and add the relative error $e_A = \frac{\|A - \hat{A}\|_2}{\|A\|_2}$ to show the closeness between the given matrix A and the reconstructed matrix $\hat{A}$. The results obtained are shown in Table 7 and Table 8.
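A direct check of this pair of matrices for a concrete size (here n = 6; the code is ours and purely illustrative):

```python
import numpy as np

n = 6
L = np.eye(n) * 2 + np.diag(np.ones(n - 1), -1)   # diagonal 2, subdiagonal 1
L[0, 1:] = 1.0                                    # full first row (2, 1, ..., 1)

B = np.diag([3.0 + n] + [5.0] * (n - 1))
B += 2 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
B[0, 1] = B[1, 0] = 4.0
B[0, 2:] = B[2:, 0] = 3.0

print(np.allclose(L @ L.T, B))                    # B = L L^T, as stated
```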
Example 4. 
In the following example, we reconstruct a Lefkovitch-type matrix from the prescribed data given in Table 9.
The Lefkovitch-type matrix that satisfies the properties required in Theorem 3 is
$$L = \begin{pmatrix}
0.3517 & 0.5472 & 0.1386 & 0.1493 & 0.2575 & 0.8407 & 0.2543 & 0.8143\\
0.2435 & 0.8308 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0.9293 & 0.5853 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0.3500 & 0.5497 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0.1966 & 0.9172 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0.2511 & 0.2868 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0.6160 & 0.7572 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0.4733 & 0.7537
\end{pmatrix}.$$
Table 10 shows the singular values of the leading row submatrices L j of L.
Finally, Table 11 summarizes the computed results.

4. Conclusions

This paper provides advances in the inverse extremal eigenvalue problem (IEEP) and, in particular, in the inverse extremal singular value problem (IESVP) for structured matrices. Due to their definitions, both problems are of theoretical importance, since IEEPs are a special class of IEPs and ISVPs are a natural extension of IEPs (see [18]). Furthermore, they are also relevant in engineering applications. In [13], a spring–mass system is reconstructed from the minimum and maximum frequencies of its subsystems, resulting from successively fixing an internal mass. This issue is closely related to the IEEP for tridiagonal matrices. Similarly, the ISVP for structured matrices is related to telecommunications theory, particularly in the design of signature sequences for synchronous code division multiple access systems (see [21]).
Our contribution in this work focuses first on constructing symmetric and nonsymmetric bordered tridiagonal matrices of order n from the extremal eigenvalues of their leading principal submatrices and an eigenvector of the matrix for the symmetric case and, additionally, an eigenvector of the submatrix of order n − 1 for the nonsymmetric case. Although the procedure used is already known in the literature, to our knowledge, the inverse eigenvalue problem for this type of matrix has not been addressed. On the other hand, it is known that there is limited literature on ISVPs. In this sense, we provide a fuller justification of the procedure given in [20] and apply it to reconstruct a Lefkovitch-type matrix from the extremal singular values of its leading row submatrices and a left singular vector of the matrix. These data are associated with the spectral data used in the IEEP for a symmetric bordered tridiagonal matrix. In all cases, we provide sufficient conditions for the existence and construction of such matrices. As our results are constructive, algorithmic procedures are generated to determine a solution matrix explicitly.

Author Contributions

Conceptualization, H.P.-S., S.A.-P., C.M. and H.N.; investigation, H.P.-S., S.A.-P., C.M. and H.N.; software, H.P.-S.; writing—original draft, S.A.-P. and C.M.; writing—review and editing, H.P.-S. and H.N. All authors have contributed equally to the work. All authors have read and agreed to the published version of the manuscript.

Funding

Hubert Pickmann-Soto was supported by the Universidad de Tarapacá, Arica, Chile, Proyecto Mayor de Investigación Científica y Tecnológica UTA 4782-24. Susana Arela-Pérez was supported by the Universidad de Tarapacá, Arica, Chile, Proyecto Mayor de Investigación Científica y Tecnológica UTA 4778-24. H. Nina thanks the support for the Programa Regional MATH-AMSUD 23-MATH-09 MORA DataS project.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors sincerely thank the referees for their valuable and constructive feedback, which has resulted in an improved final version of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

To complement the numerical examples, we include in this Appendix the algorithms in pseudocode derived from the three theorems in Section 2. The numbers and vectors are generated by the randn routine in MATLAB R2024b, subject to the conditions given in each theorem. In each case, the matrix is constructed by a procedure that obtains all of its leading submatrices recursively.
Algorithm A1 constructs a symmetric bordered tridiagonal matrix B of the form (2), such that λ 1 ( j ) and λ j ( j ) are the extremal eigenvalues of the leading principal submatrix B j , j = 1 , 2 , , n of B and ( λ n ( n ) , x ) is an eigenpair of B.
Algorithm A1 (ISEEP)
Input: $\{\lambda_1^{(j)}, \lambda_j^{(j)}\}_{j=1}^{n}$ and $x = (x_1, x_2, \ldots, x_n)$, satisfying (7) and (8)
Output: an $n \times n$ symmetric bordered tridiagonal matrix of the form (2)
for j = 1 to n do
    if j = 1 then
        set $a_1 = \lambda_1^{(1)}$
    end if
    if j = 2 then
        compute $b_1$ and $a_2$ defined by (11) and (12)
    end if
    if j > 2 then
        compute $b_{j-1}$ defined by (16)
        compute $\phi_j$, $\psi_j$, $\eta_j$, $\xi_j$ defined by (6)
        if $\eta_j^2 - \psi_j\phi_j \neq 0$ then
            compute $c_{j-2}$ and $a_j$ defined by (17) and (18)
        else
            return
        end if
    end if
end for
Algorithm A2 constructs a nonsymmetric bordered tridiagonal matrix B of the form (3) with $d_i = \ell_i$, $i = 1, \ldots, n-2$, such that $\lambda_1^{(j)}$ and $\lambda_j^{(j)}$ are the extremal eigenvalues of the leading principal submatrix $B_j$, $j = 1, 2, \ldots, n$, of B, $(\lambda_n^{(n)}, x)$ is an eigenpair of the matrix B, and $(\lambda_{n-1}^{(n-1)}, y)$ is an eigenpair of the leading principal submatrix $B_{n-1}$.
Algorithm A2 (INSEEP)
Input: $\{\lambda_1^{(j)}, \lambda_j^{(j)}\}_{j=1}^{n}$, $x = (x_1, x_2, \ldots, x_n)$, $y = (y_1, y_2, \ldots, y_{n-1})$, and $\ell_1, \ell_2, \ldots, \ell_{n-2}$, satisfying (7), (8), (23) and (24)
Output: an $n \times n$ nonsymmetric bordered tridiagonal matrix of the form (3)
for j = 1 to n do
    compute $w_{k,j}$, $t_j$ and $m_{j-1}$ defined by (21) and (22)
    if j = 1 then
        set $a_1 = \lambda_1^{(1)}$
    end if
    if j = 2 then
        compute $c_1$, $b_1$, $a_2$ and $b_2$ defined by (30), (29), (28) and (31)
    end if
    if j > 2 then
        set $d_{j-2} = \ell_{j-2}$
        compute $\varphi(\lambda_i^{(j)})$, $\theta(\lambda_i^{(j)})$, $\alpha(\lambda_i^{(j)})$, $\beta(\lambda_i^{(j)})$, $\gamma(\lambda_i^{(j)})$ defined by (20)
        if $\gamma(\lambda_j^{(j)})\beta(\lambda_1^{(j)}) - \gamma(\lambda_1^{(j)})\beta(\lambda_j^{(j)}) \neq 0$ then
            compute $c_{j-1}$, $e_{j-2}$ defined by (36) and (37)
            if j < n then
                compute $a_j$ and $b_j$ defined by (32) and (33)
            end if
            if j = n then
                compute $a_n$ by (38)
            end if
        else
            return
        end if
    end if
end for
Algorithm A3 reconstructs a Lefkovitch-type matrix L of the form (1) with $p_1 = \ell_1$ and $g_1 = \ell_2$, such that $\sigma_1^{(j)}$ and $\sigma_j^{(j)}$ are the extremal singular values of the leading row submatrix $L_j$, $j = 1, 2, \ldots, n$, of L, and $(\sigma_n^{(n)}, u)$ is a left singular pair of L.
Algorithm A3 (IESVP)
Input: $\{\sigma_1^{(j)}, \sigma_j^{(j)}\}_{j=1}^{n}$, $u = (u_1, u_2, \ldots, u_n)$, $\ell_1$, $\ell_2$, satisfying (39) and (40)
Output: an $n \times n$ Lefkovitch-type matrix of the form (1)
Run Algorithm A1 with $\{(\sigma_1^{(j)})^2, (\sigma_j^{(j)})^2\}_{j=1}^{n}$ and $u = (u_1, u_2, \ldots, u_n)$
for j = 2 to n do
    set $p_1 = \ell_1$ and $g_1 = \ell_2$
    compute $p_j$ and $f_2$ defined by (42) and (44)
    if j < n then
        compute $g_j$ defined by (43)
    end if
    if j > 2 then
        compute $f_j$ defined by (45)
    end if
end for

References

  1. Golub, G.H.; Van Loan, C.F. Matrix Computations, 1st ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1983; pp. 16–20. [Google Scholar]
  2. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis, 1st ed.; Cambridge University Press: Cambridge, UK, 1991; pp. 134–163. [Google Scholar]
  3. Eckart, C.; Young, G. The approximation of one matrix by another of lower rank. Psychometrika 1936, 1, 211–218. [Google Scholar] [CrossRef]
  4. Golpayegani, Z.; Bouguila, N. PatchSVD: A non-uniform SVD-based image compression Algorithm. In Proceedings of the 13th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2024), Rome, Italy, 24–26 February 2024; pp. 886–893. [Google Scholar]
  5. Koren, Y.; Bell, R.; Volinsky, C. Matrix Factorization techniques for recommender systems. Computer 2009, 48, 30–37. [Google Scholar] [CrossRef]
  6. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete dictionaries for sparse representation. IEEE Trans. Signal. Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  7. Wold, S.; Esbensen, K.; Geladi, P. Principal Component Analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52. [Google Scholar] [CrossRef]
  8. Arela-Pérez, S.; Nina, H.; Pantáz, J.; Pickmann-Soto, H.; Valero, E. Construction of Lefkovitch and doubly Lefkovitch matrices with maximal eigenvalues and some diagonal elements prescribed. Linear Algebra Appl. 2021, 626, 152–170. [Google Scholar] [CrossRef]
  9. Takada, T.; Nakajima, H. An analysis of life history evolution in terms of the density-dependent Lefkovitch matrix model. Math. Biosci. 1992, 112, 155–176. [Google Scholar] [CrossRef]
  10. Tokachil, M.N.; Yahya, A. The Lefkovitch Matrix of Aedes Aegypti with Rainfall Dependent Model for Eggs Hatching. J. Phys. Conf. Ser. 2019, 1366, 1–8. [Google Scholar] [CrossRef]
  11. Pickmann-Soto, H.; Arela-Pérez, S.; Egaña, J.C.; Soto, R.L. Extreme Spectra Realization by Real Nonsymmetric Tridiagonal and Real Nonsymmetric arrow matrices. Math. Probl. Eng. 2019, 2019, 3459017. [Google Scholar] [CrossRef]
  12. Pickmann-Soto, H.; Arela-Pérez, S.; Lozano, C.; Nina, H. Realization of Extremal spectral data by pentadiagonal matrices. Mathematics 2024, 12, 2198. [Google Scholar] [CrossRef]
  13. Huang, X.; Hu, X.; Zhang, L. Physical parameters reconstruction of a fixed-fixed mass-spring system from its characteristic data. J. Comput. Appl. Math. 2007, 206, 645–655. [Google Scholar] [CrossRef]
  14. Sharma, D.; Sen, M. Inverse eigenvalue problems for two special acyclic matrices. Mathematics 2016, 4, 12. [Google Scholar] [CrossRef]
  15. Lefkovitch, L.P. The study of population growth in organisms grouped by stages. Biometrics 1965, 21, 1–18. [Google Scholar] [CrossRef]
  16. Bai, Z.J.; Jin, X.Q.; Vong, S.W. On some inverse singular value problems with Toeplitz-related structure. Numer. Algebra Control Optim. 2012, 2, 187–192. [Google Scholar] [CrossRef]
  17. Chu, M.T. Numerical methods for inverse singular value problems. SIAM J. Numer. Anal. 1992, 29, 885–903. [Google Scholar] [CrossRef]
  18. Chu, M.T.; Golub, G.H. Inverse Eigenvalue Problems: Theory, Algorithms and Applications; Oxford University Press: Oxford, UK, 2005; pp. 128–134. [Google Scholar]
  19. Fathi, F.; Fariborzi Araghi, M.A.; Shahzadeh Fazeli, S.A. Inverse singular values for nonsymmetric ahead arrow matrix. Inverse Probl. Sci. Eng. 2021, 9, 2085–2097. [Google Scholar] [CrossRef]
  20. Montaño, E.; Salas, M.; Soto, R.L. Nonnegative matrices with prescribed extremal singular values. Comput. Math. Appl. 2008, 56, 30–42. [Google Scholar] [CrossRef]
  21. Tropp, J.A.; Dhillon, I.S.; Heath, R.W. Finite-Step algorithms for constructing optimal CDMA signature sequences. IEEE Trans. Inf. Theory 2004, 50, 2916–2921. [Google Scholar] [CrossRef]
  22. Vong, S.W.; Bai, Z.J.; Jin, X.Q. An Ulm-like method for inverse singular value problems. SIAM J. Matrix Anal. Appl. 2011, 32, 412–429. [Google Scholar] [CrossRef]
  23. El-Mikkawy, M.; Atlan, F. Algorithms for solving doubly bordered tridiagonal linear systems. Br. J. Math Comput. Sci. 2014, 4, 1246–1267. [Google Scholar] [CrossRef]
  24. El-Mikkawy, M.; El-Shehawy, M.; Shehab, N. Solving doubly bordered tridiagonal linear systems via partition. Appl. Math. 2015, 6, 967–978. [Google Scholar] [CrossRef]
  25. Jia, J.; Li, S. On the inverse and determinant of general bordered tridiagonal matrices. Comput. Math. Appl. 2015, 69, 503–509. [Google Scholar] [CrossRef]
  26. Karawia, A.A.; Rizvi, Q.M. On solving a general bordered tridiagonal linear system. Int. J. Math. Math. Sci. 2013, 33, 1160–1163. [Google Scholar]
  27. Marrero, J.A. A numerical solver for general bordered tridiagonal matrix equations. Comput. Math. Appl. 2016, 72, 2731–2740. [Google Scholar] [CrossRef]
  28. Udal, A.; Reeder, R.; Velmre, E.; Harrison, P. Comparison of methods for solving the Schrödinger equation for multiquantum. Proc. Est. Acad. Sci. Eng. 2006, 12, 246–260. [Google Scholar]
  29. Datta, B.N. Numerical Linear Algebra and Applications, 1st ed.; Brooks/Cole Publishing Company: Pacific Grove, CA, USA, 1995; pp. 551–558. [Google Scholar]
  30. Mathias, R. Two theorems on singular values and eigenvalues. Am. Math. Mon. 1990, 97, 47–50. [Google Scholar] [CrossRef]
Table 1. Initial spectral data.
j 1 2 3 4 5 6 7 8
λ 1 ( j ) 0.3245 0.2493 0.2298 0.1646 0.3252 0.3974 0.5184 0.5755
λ j ( j ) 0.3245 3.7928 4.3784 4.5358 4.5823 4.5940 4.6075 4.6214
x j 0.2611 0.6738 0.5818 0.3031 0.1599 0.0957 0.0827 0.0763
Table 2. Spectra of the leading principal submatrices of matrix B.
σ ( B 1 ) σ ( B 2 ) σ ( B 3 ) σ ( B 4 )
0.3245 0.2493 0.2298 0.1646
3.7928 2.5367 1.7552
4.3784 2.9656
4.5358
σ ( B 5 ) σ ( B 6 ) σ ( B 7 ) σ ( B 8 )
0.3252 0.3974 0.5184 0.5755
1.0257 0.9587 0.6179 0.4839
2.3830 1.6873 1.0702 1.0261
3.1696 2.5486 2.0289 1.9061
4.5823 3.2315 2.7319 2.1948
4.5940 3.3100 2.8362
4.6075 3.3890
4.6214
Note. The extremal eigenvalues of B j are highlighted in bold.
Table 3. Relative errors in the reconstruction of matrix B.
e λ e x
1.5287 × 10^{-16} 3.5520 × 10^{-15}
Table 4. Initial spectral data.
j 1 2 3 4 5 6 7
λ 1 j 1.9944 1.5431 0.8066 0.5382 0.0496 0.0397 0.0414
λ j j 1.9944 2.5268 2.5964 2.6090 2.6093 4.7148 4.8930
x j 0.0144 0.0017 0.0066 0.0249 0.1110 0.9181 0.3793
y j 0.0132 0.0018 0.0079 0.0294 0.1246 0.9917
Table 5. Spectra of the leading principal submatrices of matrix B .
σ B 1 σ B 2 σ B 3 σ B 4
1.9944 1.5431 0.8066 0.5382
2.5268 1.5706 1.5086
2.5964 1.5946
2.6090
σ B 5 σ B 6 σ B 7
0.0496 0.0397 0.0414
0.5655 0.5611 0.5606
1.5261 1.5198 1.5287
1.6026 1.6066 1.5902
2.6093 2.6077 2.6046
4.7148 3.1002
4.8930
Note. The extremal eigenvalues of B j are highlighted in bold.
Table 6. Relative errors in the reconstruction of matrix B .
e λ e x e y
5.8195 × 10^{-16} 1.5567 × 10^{-13} 2.1285 × 10^{-13}
Table 7. Relative errors in the reconstruction of matrix B.
n e λ e x e B
5 3.3480 × 10^{-16} 3.1889 × 10^{-16} 7.6903 × 10^{-15}
10 2.4100 × 10^{-16} 6.5017 × 10^{-15} 7.9296 × 10^{-14}
15 3.7435 × 10^{-16} 2.7106 × 10^{-14} 2.0878 × 10^{-12}
20 2.5209 × 10^{-16} 1.2641 × 10^{-14} 1.1125 × 10^{-10}
25 3.2118 × 10^{-16} 1.3047 × 10^{-11} 4.6447 × 10^{-09}
30 2.2608 × 10^{-16} 1.0659 × 10^{-10} 9.2127 × 10^{-08}
35 2.7822 × 10^{-16} 2.6403 × 10^{-10} 1.3089 × 10^{-06}
Table 8. Relative errors in the reconstruction of matrix L.
n e σ e u e L
5 2.5406 × 10^{-15} 2.0837 × 10^{-15} 3.1675 × 10^{-14}
10 5.5000 × 10^{-15} 6.1437 × 10^{-15} 2.5249 × 10^{-13}
15 4.8426 × 10^{-14} 4.3725 × 10^{-14} 7.5804 × 10^{-12}
20 1.9346 × 10^{-12} 1.6595 × 10^{-12} 8.7629 × 10^{-10}
25 3.4677 × 10^{-12} 1.6103 × 10^{-11} 1.5192 × 10^{-08}
30 5.1014 × 10^{-11} 1.3972 × 10^{-10} 3.1065 × 10^{-07}
35 5.9609 × 10^{-11} 9.4446 × 10^{-11} 4.5990 × 10^{-07}
Table 9. Initial singular values.
j 1 2 3 4 5 6 7 8
σ 1 ( j ) 1.4019 0.7377 0.4101 0.3384 0.3370 0.1900 0.1783 0.0333
σ j ( j ) 1.4019 1.4734 1.6360 1.6416 1.6488 1.6564 1.7055 1.7633
u j 0.7610 0.2926 0.3619 0.0688 0.1048 0.1047 0.3072 0.2886
Table 10. Singular values of the leading row submatrices of matrix L.
Σ L 1 Σ L 2 Σ L 3 Σ L 4
1.4019 0.7377 0.4101 0.3384
1.4734 1.0375 0.6762
1.6360 1.0388
1.6416
Σ L 5 Σ L 6 Σ L 7 Σ L 8
0.3370 0.1900 0.1783 0.0333
0.6571 0.3486 0.3392 0.3385
0.8880 0.6596 0.6344 0.5452
1.0832 0.8913 0.6797 0.6707
1.6488 1.1134 0.9671 0.7431
1.6564 1.2115 0.9859
1.7055 1.2724
1.7633
Note. The extremal singular values of L j are highlighted in bold.
Table 11. Relative errors in the reconstruction of matrix L.
e σ e u
1.4837 × 10^{-14} 2.2774 × 10^{-14}
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
