Article

Bidiagonal Factorizations of Filbert and Lilbert Matrices

1. Department of Applied Mathematics, University Research Institute of Mathematics and Its Applications (IUMA), University of Zaragoza, 50009 Zaragoza, Spain
2. Department of Mathematics, Centro de Astropartículas y Física de Altas Energías (CAPA), University of Zaragoza, 50009 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Axioms 2024, 13(4), 219; https://doi.org/10.3390/axioms13040219
Submission received: 5 February 2024 / Revised: 13 March 2024 / Accepted: 22 March 2024 / Published: 26 March 2024

Abstract

Extensions of Filbert and Lilbert matrices are addressed in this work. These are reciprocal Hankel matrices based on Fibonacci and Lucas numbers, respectively, and both are related to Hilbert matrices. Neville elimination is applied to provide explicit expressions for their bidiagonal factorization. As a byproduct, formulae for the determinants of these matrices are obtained. Finally, numerical experiments show that several algebraic problems involving these matrices can be solved with outstanding accuracy, in contrast with traditional approaches.

1. Introduction

Many efforts have been devoted in recent decades to the study of Hankel and Toeplitz matrices. Their applications extend through many areas, such as signal processing and system identification. In particular, the singular value decomposition of a Hankel matrix plays a crucial role in state-space realization and hidden Markov models (see [1,2,3,4,5]).
An interesting case is the so-called reciprocal Hankel matrix, defined by Richardson in [6]. Given an integer sequence $\{a_k\}_{k\in\mathbb{N}}$, these matrices $R = (R_{i,j})$ are defined as $R_{i,j} = 1/a_{i+j-1}$. An appealing case appears when considering the famous Fibonacci sequence, defined by:
$$F_0 := 0, \quad F_1 := 1, \quad F_i = F_{i-1} + F_{i-2}, \quad i \in \mathbb{N},\ i \ge 2,$$
for which the corresponding reciprocal Hankel matrix is called a Filbert matrix, because of its similarities with the well-known Hilbert matrix, its entries being given by
$$F(n) = \left(1/F_{i+j-1}\right)_{1 \le i,j \le n+1}.$$
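As a quick illustration (our own sketch, not code from the paper, with helper names `fibonacci` and `filbert` chosen by us), a Filbert matrix can be built exactly using rational arithmetic:

```python
from fractions import Fraction

def fibonacci(m):
    """Return the list [F_0, F_1, ..., F_m] with F_0 = 0 and F_1 = 1."""
    F = [0, 1]
    for _ in range(m - 1):
        F.append(F[-1] + F[-2])
    return F[:m + 1]

def filbert(n):
    """(n+1)x(n+1) Filbert matrix with entries 1/F_{i+j-1}, 1 <= i, j <= n+1."""
    F = fibonacci(2 * n + 2)
    return [[Fraction(1, F[i + j - 1]) for j in range(1, n + 2)]
            for i in range(1, n + 2)]
```

Exact `Fraction` entries make it possible to check the identities below without rounding errors.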
Fibonacci numbers, omnipresent in nature, come into play in diverse scientific areas, e.g., image encryption algorithms [8] or thermal engineering [7], and they have also proved to be relevant in signal processing [9].
Filbert matrices are also deeply related to q-Hilbert matrices for $q = (1-\sqrt{5})/(1+\sqrt{5})$ (cf. [10]), which in turn were recently studied in [11]. These are defined by
$$H_n(\alpha, q) := \left(\frac{[\alpha]_q}{[i+j+\alpha-2]_q}\right)_{1 \le i,j \le n+1},$$
where $[\alpha]_q$ is the q-integer defined as $[\alpha]_q := 1 + q + \cdots + q^{\alpha-1}$.
A generalization of Filbert matrices, $F_n(\alpha) = (1/F_{i+j+\alpha-2})_{1 \le i,j \le n+1}$ for an integer parameter $\alpha \ge 1$, was studied in [12].
We can also consider the Lucas sequence,
$$L_0 := 2, \quad L_1 := 1, \quad L_i = L_{i-1} + L_{i-2}, \quad i \in \mathbb{N},\ i \ge 2,$$
to obtain the Lilbert matrices $L_n = (1/L_{i+j-1})_{1 \le i,j \le n+1}$, defined in [13]. As with Filbert matrices, Lilbert matrices can be generalized analogously. In the papers mentioned above, explicit formulas for the $LU$-decomposition, the Cholesky factorization and the inverse have been obtained for Filbert and Lilbert matrices and some of their extensions [12,13].
The condition number of Vandermonde and Hilbert matrices grows dramatically with their dimension [14,15,16]. Specific information about the condition number of Filbert and Lilbert matrices is not widely documented, but these matrices can be expected to be ill-conditioned because of their structural similarity to Hilbert matrices. In Section 5, devoted to numerical experiments, it is shown that the two-norm condition number of Filbert and Lilbert matrices grows significantly with the size of the matrix; for instance, the condition number of a $5 \times 5$ Filbert matrix is approximately $10^5$. As a consequence, conventional routines applying the best standard algorithms for algebraic problems, such as computing the inverse of a matrix or its singular values, or solving a linear system, fail to provide accurate results.
At this point, it should be mentioned that any Hankel matrix can be transformed into a Toeplitz matrix at no cost by means of a permutation, namely the one given by the anti-identity matrix. In principle, when solving algebraic problems such as linear systems, this would allow us to apply several well-established numerical methods, including the so-called fast direct Toeplitz solvers [17,18], with a computational cost of $O(n \log^2 n)$, and iterative procedures based on the conjugate gradient algorithm with a suitable preconditioner, which can improve the cost to $O(n \log n)$ [19]. However, these direct algorithms guarantee only weak stability [20], i.e., that for well-conditioned problems the computed and the exact solution are close. The same can be said about preconditioned conjugate gradient methods, since their speed of convergence and stability heavily depend on the condition number of the given matrix.
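The Hankel-to-Toeplitz transformation mentioned above is just a reversal of the row order; a minimal sketch (our own, with an illustrative $3\times 3$ example):

```python
def hankel_to_toeplitz(H):
    """Reversing the row order (i.e., multiplying by the anti-identity matrix J)
    turns a Hankel matrix H, constant along anti-diagonals, into the Toeplitz
    matrix JH, constant along diagonals."""
    return H[::-1]

# Example: a Hankel matrix whose (i, j) entry depends only on i + j.
H = [[1, 2, 3],
     [2, 3, 4],
     [3, 4, 5]]
T = hankel_to_toeplitz(H)
# T is Toeplitz: every entry equals its neighbor one step down-right.
assert all(T[i][j] == T[i + 1][j + 1] for i in range(2) for j in range(2))
```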
In this work, the generalized versions of Filbert and Lilbert matrices are addressed by means of a Neville elimination process, giving explicit expressions for their multipliers and pivots. Following [21], this allows us to determine a bidiagonal factorization of the considered matrices. As a byproduct, formulae for the determinants of both classes of matrices are derived. Moreover, numerical experiments for the above-mentioned, heavily ill-conditioned algebraic problems have been performed, showing that the proposed approach attains errors of the order of the machine precision, in stark contrast with traditional numerical methods.
The paper is organized as follows. To keep the paper as self-contained as possible, Section 2 recalls basic concepts and results related to Neville elimination and bidiagonal factorizations of nonsingular matrices. Filbert and Lilbert matrices are considered in Section 3 and Section 4, respectively, where the pivots and multipliers of their Neville elimination are obtained and a remarkable analogy with those of quantum Hilbert matrices is illustrated. Finally, Section 5 presents a series of numerical experiments, in which the obtained bidiagonal factorizations attain errors of the order of the machine precision while classical numerical methods fail by orders of magnitude.

2. Notations and Auxiliary Results

As advanced in the Introduction, the main result of this paper, gathered in the following sections, consists in the computation of the bidiagonal factorization of Filbert and Lilbert matrices, which is possible by following a Neville elimination process. This being the case, let us begin by recalling some basic results concerning Neville elimination (NE). It is an algorithm that, given an $(n+1)\times(n+1)$ real-valued matrix $A$, obtains an upper-triangular matrix $U$ after $n$ iterations. More specifically, the intermediate matrices $A^{(k+1)}$, $k = 1, \ldots, n$, are obtained from the previous iterate $A^{(k)}$ by making zeros below the diagonal in the $k$th column. The initial step is by definition $A^{(1)} := A$, whereas the entries of $A^{(k+1)}$, for every $k = 1, \ldots, n$, are obtained through the recursion formula
$$a_{i,j}^{(k+1)} := \begin{cases} a_{i,j}^{(k)}, & \text{if } 1 \le i \le k, \\[1ex] a_{i,j}^{(k)} - \dfrac{a_{i,k}^{(k)}}{a_{i-1,k}^{(k)}}\, a_{i-1,j}^{(k)}, & \text{if } k+1 \le i,j \le n+1 \text{ and } a_{i-1,k}^{(k)} \ne 0, \\[1ex] a_{i,j}^{(k)}, & \text{if } k+1 \le i \le n+1 \text{ and } a_{i-1,k}^{(k)} = 0. \end{cases} \qquad (1)$$
In the last iteration of this process, the matrix $U := A^{(n+1)}$ is obtained, which, as mentioned before, is upper-triangular. In this process, the entries of the $j$th column at the $(j-1)$th step, i.e.,
$$p_{i,j} := a_{i,j}^{(j)}, \quad 1 \le j \le i \le n+1, \qquad (2)$$
are called the $(i,j)$ pivots (or $i$th diagonal pivots in the case $i = j$) of the NE process. The following quotient is also of relevance:
$$m_{i,j} := \begin{cases} a_{i,j}^{(j)} / a_{i-1,j}^{(j)} = p_{i,j}/p_{i-1,j}, & \text{if } a_{i-1,j}^{(j)} \ne 0, \\ 0, & \text{if } a_{i-1,j}^{(j)} = 0, \end{cases} \qquad (3)$$
and it is known as the $(i,j)$ multiplier.
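A direct implementation of this elimination (our own illustrative sketch, using exact rational arithmetic) that records the pivots and multipliers just defined could look as follows:

```python
from fractions import Fraction

def neville_elimination(A):
    """Neville elimination following recursion (1): at step k, each row is
    updated using the row immediately above it (taken from the previous
    iterate).  Returns (U, p, m), where p[i][j] and m[i][j] (0-based, i >= j)
    hold the pivots and multipliers."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    p = [[Fraction(0)] * n for _ in range(n)]
    m = [[Fraction(0)] * n for _ in range(n)]
    for k in range(n):
        for i in range(k, n):
            p[i][k] = A[i][k]             # pivots of column k at this step
        for i in range(n - 1, k, -1):     # bottom-up, so A[i-1] is still "old"
            if A[i - 1][k] != 0:
                m[i][k] = A[i][k] / A[i - 1][k]
                A[i] = [A[i][j] - m[i][k] * A[i - 1][j] for j in range(n)]
    return A, p, m                        # A is now the upper-triangular U
```

For the $3\times 3$ Filbert matrix, the product of the diagonal pivots reproduces the determinant $-1/360$, in agreement with Lemma 1 below.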
By applying a second Neville elimination to $U^T$, a diagonal matrix is obtained; this whole process is known as complete Neville elimination. When this process requires no row exchanges, the matrix $A$ is said to verify the WRC condition (see, e.g., [21]). In Theorem 2.2 of [21], it is proved that an $(n+1)\times(n+1)$ real-valued nonsingular matrix $A$ verifies the WRC condition if and only if it can be expressed in a unique way as the product
$$A = F_n F_{n-1} \cdots F_1 D G_1 G_2 \cdots G_n, \qquad (4)$$
where $F_i, G_i \in \mathbb{R}^{(n+1)\times(n+1)}$ are, respectively, the lower- and upper-triangular bidiagonal matrices given by
$$F_i = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & m_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & m_{n+1,n+1-i} & 1 \end{pmatrix}, \qquad G_i^T = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & \tilde m_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & \tilde m_{n+1,n+1-i} & 1 \end{pmatrix}, \qquad (5)$$
while the entries of the diagonal matrix $D$ are the diagonal pivots $p_{i,i}$ obtained in the NE of $A$. In fact, the NE processes of $A$ and $A^T$ also give the nondiagonal entries of $F_i$ and $G_i$, since the values $m_{i,j}$, $\tilde m_{i,j}$ appearing in (5) are precisely the multipliers of these algorithms as defined in (3).
Another interesting result is provided by Theorem 2.2 of [22]. Taking advantage of the diagonal pivots and multipliers obtained in the NE of $A$, it is possible to express the inverse $A^{-1}$ as
$$A^{-1} = \widehat G_1 \widehat G_2 \cdots \widehat G_n D^{-1} \widehat F_n \widehat F_{n-1} \cdots \widehat F_1, \qquad (6)$$
where the matrices $\widehat F_i$ and $\widehat G_i$ are very much like their counterparts $F_i$ and $G_i$, but with a different arrangement of the multipliers, being defined as
$$\widehat F_i = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & -m_{i+1,i} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & -m_{n+1,i} & 1 \end{pmatrix}, \qquad \widehat G_i^T = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & -\tilde m_{i+1,i} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & -\tilde m_{n+1,i} & 1 \end{pmatrix}.$$
It is worth noting that more general classes of matrices can be factorized as in (4), see [23].
Hereafter, the convention adopted by Koev in [24] to store the coefficients of the bidiagonal decomposition (4) of $A$ in an $(n+1)\times(n+1)$ matrix $BD(A)$ is followed. The entries of this matrix form are given by
$$BD(A)_{i,j} := \begin{cases} m_{i,j}, & \text{if } i > j, \\ p_{i,i}, & \text{if } i = j, \\ \tilde m_{j,i}, & \text{if } i < j. \end{cases} \qquad (7)$$
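Assuming this storage convention, the factorization (4) can be unpacked and multiplied back together. The following sketch (our own helper names, not code from the paper) rebuilds $A$ from $BD(A)$:

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][t] * Y[t][c] for t in range(n)) for c in range(n)]
            for r in range(n)]

def identity(n):
    return [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]

def from_bd(BD):
    """Rebuild A = F_n ... F_1 D G_1 ... G_n from the storage matrix BD(A).
    F_i carries the multipliers stored below the diagonal of BD, and G_i the
    multipliers of the transpose, stored above the diagonal."""
    size = len(BD)                 # size = n + 1
    n = size - 1
    def F(i):                      # lower bidiagonal factor F_i
        M = identity(size)
        for r in range(i, size):
            M[r][r - 1] = BD[r][r - i]
        return M
    def G(i):                      # upper bidiagonal factor G_i
        M = identity(size)
        for r in range(i, size):
            M[r - 1][r] = BD[r - i][r]
        return M
    D = identity(size)
    for r in range(size):
        D[r][r] = BD[r][r]
    A = identity(size)
    for i in range(n, 0, -1):      # F_n, F_{n-1}, ..., F_1
        A = matmul(A, F(i))
    A = matmul(A, D)
    for i in range(1, n + 1):      # G_1, ..., G_n
        A = matmul(A, G(i))
    return A
```

For the $3\times 3$ Filbert matrix, the pivots and multipliers of its Neville elimination are $p_{1,1}=1$, $p_{2,2}=-1/2$, $p_{3,3}=1/180$ and $m_{2,1}=\tilde m_{2,1}=1$, $m_{3,1}=\tilde m_{3,1}=1/2$, $m_{3,2}=\tilde m_{3,2}=-1/6$, and multiplying the corresponding factors back together recovers the matrix exactly.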
Remark 1.
Provided that the bidiagonal factorization of a nonsingular matrix $A \in \mathbb{R}^{(n+1)\times(n+1)}$ exists, then, using the factorization (4), it follows that
$$A^T = G_n^T G_{n-1}^T \cdots G_1^T D F_1^T F_2^T \cdots F_n^T.$$
Furthermore, in the case of $A$ being symmetric, we have that $G_i = F_i^T$ for $i = 1, \ldots, n$ and, as a consequence,
$$A = F_n F_{n-1} \cdots F_1 D F_1^T F_2^T \cdots F_n^T.$$
It is worth noting that, thanks to the structure of the factors in the bidiagonal decomposition (4) of a nonsingular matrix $A$, in order to compute its determinant it suffices to multiply the diagonal pivots obtained in the NE of $A$, since the determinant of each of the factors $F_i$ and $G_i$ is trivially one. This result will be used later in the manuscript to obtain the determinants of generalized Filbert and Lilbert matrices, and it is summarized in the following lemma.
Lemma 1.
Consider a nonsingular matrix $A \in \mathbb{R}^{(n+1)\times(n+1)}$. If the bidiagonal decomposition of $A$ exists, then
$$\det A = \prod_{i=1}^{n+1} p_{i,i},$$
where the $p_{i,i}$ are the diagonal pivots of the Neville elimination of $A$ given by (2).

3. Bidiagonal Factorization of Filbert Matrices

Let us recall that the sequence of Fibonacci numbers $(F_n)_{n\in\mathbb{N}}$ is given by
$$F_0 := 0, \qquad F_1 := 1,$$
with the recursion formula
$$F_{n+1} = F_n + F_{n-1}, \quad n \ge 1.$$
Filbert matrices are defined in terms of the Fibonacci sequence as
$$F_{i,j} = \frac{1}{F_{i+j-1}}, \quad 1 \le i,j \le n+1, \qquad (11)$$
and they have the property, shared with Hilbert matrices, of having an inverse with integer entries [6]. In fact, an explicit formula for the entries of the inverse matrices is proved using computer algebra. This formula shows a remarkable analogy with the corresponding formula for the elements of the inverse of Hilbert matrices, in the sense that it can be obtained by replacing certain binomial coefficients $\binom{n}{k}$ by the analogous Fibonomial coefficients introduced in [25] as
$$\binom{n}{k}_F := \prod_{i=1}^{k} \frac{F_{n-i+1}}{F_i}, \quad 0 \le k \le n, \qquad (12)$$
with the usual convention that empty products are defined as one. Let us observe that, by defining
$$[0]_F! := 1, \qquad [n]_F! := \prod_{k=1}^{n} F_k,$$
we can also write
$$\binom{n}{k}_F = \frac{[n]_F!}{[k]_F!\,[n-k]_F!}, \quad 0 \le k \le n.$$
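The Fibonomial coefficients can be computed straightforwardly from the product definition; a small sketch of our own (the function names are hypothetical):

```python
from fractions import Fraction

def fib(m):
    """Fibonacci number F_m (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def fibonomial(n, k):
    """Fibonomial coefficient: product over i = 1..k of F_{n-i+1} / F_i."""
    value = Fraction(1)
    for i in range(1, k + 1):
        value *= Fraction(fib(n - i + 1), fib(i))
    return value
```

Although each factor is a fraction, the result is always an integer, which can be confirmed numerically together with the recursion stated below.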
The following identity for Fibonomial coefficients holds,
$$\binom{n}{k}_F = 1, \quad 0 \le k \le n \le 2,$$
and, taking into account the recursion formula
$$\binom{n}{k}_F = F_{k-1}\binom{n-1}{k}_F + F_{n-k+1}\binom{n-1}{k-1}_F, \quad 1 \le k < n$$
(see [25]), it can be clearly seen that the Fibonomial coefficients are integers. It can also be checked that Fibonomial coefficients satisfy the following useful identities:
$$\mathrm{(a)}\quad \frac{F_\alpha}{F_n}\,\binom{\alpha-1}{n-1}_F = \binom{\alpha}{n}_F, \qquad (16)$$
$$\mathrm{(b)}\quad \frac{F_{\alpha-n}}{F_n}\,\binom{\alpha-1}{n-1}_F = \binom{\alpha-1}{n}_F, \qquad (17)$$
$$\mathrm{(c)}\quad \frac{F_\alpha}{F_{\alpha-n+1}}\,\binom{\alpha-1}{n-1}_F = \binom{\alpha}{n-1}_F. \qquad (18)$$
Now, we consider the following generalization of the Filbert matrix $F(n)$ described in (11). Given $\alpha \in \mathbb{N}$, let $F_n(\alpha) := (F_{i,j}(\alpha))_{1\le i,j\le n+1}$ with
$$F_{i,j}(\alpha) = \frac{1}{F_{i+j+\alpha-2}}, \quad 1 \le i,j \le n+1. \qquad (19)$$
Clearly, for $\alpha = 1$, $F_n(1)$ coincides with the Filbert matrix (11).
There are many nice equalities relating the Fibonacci numbers with each other. In this paper, we use the following identity,
$$F_{n+p}F_{n+q} - F_n F_{n+p+q} = (-1)^n F_p F_q, \quad p, q, n \in \mathbb{N}, \qquad (20)$$
which is known as Vajda's identity. On the other hand, it is well known that the Fibonacci numbers $F_n$, $n \in \mathbb{N}$, satisfy
$$\lim_{n\to\infty} \frac{F_{n+1}}{F_n} = \varphi,$$
where $\varphi := (1+\sqrt{5})/2$ is the golden ratio. Moreover, using the Binet form of the Fibonacci numbers, we can write
$$F_n = \frac{\varphi^n - (1-\varphi)^n}{\sqrt{5}} = \varphi^{n-1}\,\frac{1-q^n}{1-q} = \varphi^{n-1}[n]_q, \quad \text{for } q = \frac{1-\sqrt{5}}{1+\sqrt{5}}.$$
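Both Vajda's identity and the q-integer rewriting of the Binet form can be checked numerically; the following self-contained sketch (our own) does so for a range of indices:

```python
import math

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
q = (1 - math.sqrt(5)) / (1 + math.sqrt(5))

# Binet form and its rewriting via the q-integer [n]_q = 1 + q + ... + q^{n-1}.
for n in range(1, 20):
    binet = (phi ** n - (1 - phi) ** n) / math.sqrt(5)
    q_form = phi ** (n - 1) * sum(q ** k for k in range(n))
    assert abs(binet - fib(n)) < 1e-8
    assert abs(q_form - fib(n)) < 1e-8

# Vajda's identity: F_{n+p} F_{n+q} - F_n F_{n+p+q} = (-1)^n F_p F_q.
for n in range(8):
    for p in range(8):
        for r in range(8):
            lhs = fib(n + p) * fib(n + r) - fib(n) * fib(n + p + r)
            assert lhs == (-1) ** n * fib(p) * fib(r)
```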
The previous equalities illustrate a clear relation between q-Hilbert and Filbert matrices that is going to be reflected in the obtained expression for the pivots and multipliers of the Neville elimination and, consequently, their bidiagonal factorization (4) (cf. [11]).
Theorem 1.
Given $\alpha \in \mathbb{N}$, let $F_n(\alpha) \in \mathbb{R}^{(n+1)\times(n+1)}$ be the Filbert matrix given by (19). The multipliers $m_{i,j}$ of the Neville elimination of $F_n(\alpha)$ are given by
$$m_{i,j} = \tilde m_{i,j} := (-1)^{j-1}\,\frac{F_{i+\alpha-2}^2}{F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}}, \quad 1 \le j < i \le n+1. \qquad (21)$$
Moreover, the diagonal pivots $p_{i,i}$ of the Neville elimination of $F_n(\alpha)$ are given by
$$p_{i,i} = (-1)^{(i-1)(i+\alpha-2)}\,\frac{1}{F_{2i+\alpha-2}}\,\binom{2i+\alpha-3}{i-1}_F^{-2}, \quad 1 \le i \le n+1, \qquad (22)$$
and can be computed as follows:
$$p_{1,1} = 1/F_\alpha, \qquad p_{i+1,i+1} = (-1)^{\alpha-2}\,\frac{F_i^2\,F_{i+\alpha-1}^2}{F_{2i+\alpha}\,F_{2i+\alpha-1}^2\,F_{2i+\alpha-2}}\;p_{i,i}, \quad 1 \le i \le n. \qquad (23)$$
Proof. 
Let $F^{(k)} := (f_{i,j}^{(k)})_{1\le i,j\le n+1}$, $k = 1, \ldots, n+1$, be the matrices obtained after $k-1$ steps of the Neville elimination procedure for $F_n(\alpha)$. Now, by induction on $k = 2, \ldots, n+1$, we see that
$$f_{i,j}^{(k)} = (-1)^{(k-1)(i+\alpha-2)}\,\frac{\binom{j-1}{k-1}_F}{F_k\,\binom{i+k+\alpha-3}{k-1}_F\,\binom{i+j+\alpha-2}{k}_F}, \quad k \le j, i \le n+1. \qquad (24)$$
It can be easily checked that $f_{i,1}/f_{i-1,1} = F_{i+\alpha-2}/F_{i+\alpha-1}$; thus, using the Vajda identity (20) with $n := i+\alpha-2$, $p := 1$ and $q := j-1$, we can write
$$f_{i,j}^{(2)} = \frac{1}{F_{i+j+\alpha-2}} - \frac{F_{i+\alpha-2}}{F_{i+\alpha-1}}\,\frac{1}{F_{i+j+\alpha-3}} = \frac{F_{i+\alpha-1}F_{i+j+\alpha-3} - F_{i+\alpha-2}F_{i+j+\alpha-2}}{F_{i+\alpha-1}\,F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}} = (-1)^{i+\alpha-2}\,\frac{F_{j-1}}{F_{i+\alpha-1}\,F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}},$$
and (24) follows for $k = 2$. If (24) holds for some $k \in \{2, \ldots, n\}$, we have
$$\frac{f_{i,k}^{(k)}}{f_{i-1,k}^{(k)}} = (-1)^{k-1}\,\frac{\binom{i+k+\alpha-4}{k-1}_F\,\binom{i+k+\alpha-3}{k}_F}{\binom{i+k+\alpha-3}{k-1}_F\,\binom{i+k+\alpha-2}{k}_F} = (-1)^{k-1}\,\frac{F_{i+\alpha-2}^2}{F_{i+k+\alpha-2}\,F_{i+k+\alpha-3}},$$
for $i = k+1, \ldots, n+1$. Taking into account that, by (1), $f_{i,j}^{(k+1)} = f_{i,j}^{(k)} - f_{i,k}^{(k)}\,f_{i-1,j}^{(k)}/f_{i-1,k}^{(k)}$, and the following identity, obtained from (18),
$$\frac{F_{i+k+\alpha-3}}{F_{i+\alpha-2}}\,\binom{i+k+\alpha-4}{k-1}_F = \binom{i+k+\alpha-3}{k-1}_F,$$
we can write
$$f_{i,j}^{(k+1)} = (-1)^{(k-1)(i+\alpha-2)}\,\frac{\binom{j-1}{k-1}_F}{F_k\,\binom{i+k+\alpha-3}{k-1}_F}\;\widetilde C_{i,j}^{(k)} \qquad (25)$$
with
$$\widetilde C_{i,j}^{(k)} := \frac{1}{\binom{i+j+\alpha-2}{k}_F} - \frac{F_{i+\alpha-2}}{F_{i+k+\alpha-2}}\,\frac{1}{\binom{i+j+\alpha-3}{k}_F},$$
for $k+1 \le j, i \le n+1$. Taking into account (17) and (16), respectively, we have
$$\binom{i+j+\alpha-2}{k}_F = \frac{F_{k+1}}{F_{i+j+\alpha-k-2}}\,\binom{i+j+\alpha-2}{k+1}_F, \qquad \binom{i+j+\alpha-3}{k}_F = \frac{F_{k+1}}{F_{i+j+\alpha-2}}\,\binom{i+j+\alpha-2}{k+1}_F,$$
and from (25), we derive
$$f_{i,j}^{(k+1)} = (-1)^{(k-1)(i+\alpha-2)}\,\frac{\binom{j-1}{k-1}_F}{F_{k+1}\,F_k\,\binom{i+k+\alpha-3}{k-1}_F\,\binom{i+j+\alpha-2}{k+1}_F}\;\frac{F_{i+j-k+\alpha-2}\,F_{i+k+\alpha-2} - F_{i+\alpha-2}\,F_{i+j+\alpha-2}}{F_{i+k+\alpha-2}}. \qquad (26)$$
On the other hand, by considering the Vajda identity (20) with $p := j-k$, $n := i+\alpha-2$ and $q := k$, it can be checked that
$$F_{i+j-k+\alpha-2}\,F_{i+k+\alpha-2} - F_{i+\alpha-2}\,F_{i+j+\alpha-2} = (-1)^{i+\alpha-2}\,F_{j-k}\,F_k,$$
and then, from (26), we can write
$$f_{i,j}^{(k+1)} = (-1)^{k(i+\alpha-2)}\,\frac{\binom{j-1}{k-1}_F\,F_{j-k}\,F_k}{F_k\,F_{k+1}\,\binom{i+k+\alpha-3}{k-1}_F\,\binom{i+j+\alpha-2}{k+1}_F\,F_{i+k+\alpha-2}} = (-1)^{k(i+\alpha-2)}\,\frac{\binom{j-1}{k-1}_F\,F_{j-k}}{F_{k+1}\,\binom{i+k+\alpha-3}{k-1}_F\,\binom{i+j+\alpha-2}{k+1}_F\,F_{i+k+\alpha-2}},$$
for $k+1 \le j, i \le n+1$. Finally, taking into account (17) and (16), respectively, we can write
$$\binom{j-1}{k-1}_F = \frac{F_k}{F_{j-k}}\,\binom{j-1}{k}_F, \qquad \binom{i+k+\alpha-3}{k-1}_F = \frac{F_k}{F_{i+k+\alpha-2}}\,\binom{i+k+\alpha-2}{k}_F,$$
and conclude that
$$f_{i,j}^{(k+1)} = (-1)^{k(i+\alpha-2)}\,\frac{\binom{j-1}{k}_F}{F_{k+1}\,\binom{i+k+\alpha-2}{k}_F\,\binom{i+j+\alpha-2}{k+1}_F}, \quad k+1 \le j, i \le n+1,$$
so (24) holds for $k+1$.
Now, by (2) and (24), the pivots of the Neville elimination of $F_n(\alpha)$ satisfy
$$p_{i,j} = f_{i,j}^{(j)} = (-1)^{(j-1)(i+\alpha-2)}\,\frac{1}{F_j\,\binom{i+j+\alpha-3}{j-1}_F\,\binom{i+j+\alpha-2}{j}_F}, \quad 1 \le j \le i \le n+1. \qquad (27)$$
For the particular case $i = j$, we obtain
$$p_{i,i} = (-1)^{(i-1)(i+\alpha-2)}\,\frac{1}{F_i\,\binom{2i+\alpha-3}{i-1}_F\,\binom{2i+\alpha-2}{i}_F} = (-1)^{(i-1)(i+\alpha-2)}\,\frac{1}{F_{2i+\alpha-2}}\,\binom{2i+\alpha-3}{i-1}_F^{-2}, \qquad (28)$$
and (22) follows. It can be easily checked that $p_{1,1} = 1/F_\alpha$ and
$$\frac{p_{i+1,i+1}}{p_{i,i}} = (-1)^{2i+\alpha-2}\,\frac{F_i^2\,F_{i+\alpha-1}^2}{F_{2i+\alpha}\,F_{2i+\alpha-1}^2\,F_{2i+\alpha-2}},$$
confirming Formula (23).
Let us observe that since the pivots of the Neville elimination of F n ( α ) are nonzero, this elimination can be performed without row exchanges.
Finally, using (3) and (24), the multipliers $m_{i,j}$ can be described as
$$m_{i,j} = \frac{p_{i,j}}{p_{i-1,j}} = (-1)^{j-1}\,\frac{F_{i+\alpha-2}^2}{F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}}, \quad 1 \le j < i \le n+1.$$
Since $F_n(\alpha)$ is symmetric, using Remark 1, we deduce that $\tilde m_{i,j} = m_{i,j}$.    □
Taking into account Theorem 1, the decomposition (4) of $F_n(\alpha)$ and (6) of $(F_n(\alpha))^{-1}$ can be stored by means of $BD(F_n(\alpha)) = (BD(F_n(\alpha))_{i,j})_{1\le i,j\le n+1}$ with
$$BD(F_n(\alpha))_{i,j} := \begin{cases} (-1)^{j-1}\,\dfrac{F_{i+\alpha-2}^2}{F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}}, & \text{if } i > j, \\[1.5ex] (-1)^{(i-1)(i+\alpha-2)}\,\dfrac{1}{F_{2i+\alpha-2}}\,\dbinom{2i+\alpha-3}{i-1}_F^{-2}, & \text{if } i = j, \\[1.5ex] (-1)^{i-1}\,\dfrac{F_{j+\alpha-2}^2}{F_{i+j+\alpha-2}\,F_{i+j+\alpha-3}}, & \text{if } i < j. \end{cases} \qquad (29)$$
On the other hand, using Lemma 1 and Formula (22) for the diagonal pivots, the determinant of the Filbert matrices $F_n(\alpha)$ can be expressed as follows:
$$\det F_n(\alpha) = (-1)^{\frac{1}{6}n(n+1)(2n+3\alpha-2)}\,\frac{1}{F_\alpha}\,\prod_{k=1}^{n}\left(F_{2k+\alpha}\,\binom{2k+\alpha-1}{k}_F^{2}\right)^{-1}, \qquad (30)$$
which is equivalent to the formula obtained in Theorem 5 of [12].
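The pivot formulas of Theorem 1 and the determinant formula above can be cross-checked with exact arithmetic; a self-contained sketch of our own (the function names are hypothetical):

```python
from fractions import Fraction

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def fibonomial(n, k):
    v = Fraction(1)
    for i in range(1, k + 1):
        v *= Fraction(fib(n - i + 1), fib(i))
    return v

def pivot(i, alpha):
    """Diagonal pivot p_{i,i} of the Neville elimination of F_n(alpha), Theorem 1."""
    sign = (-1) ** ((i - 1) * (i + alpha - 2))
    return sign / (fib(2 * i + alpha - 2) * fibonomial(2 * i + alpha - 3, i - 1) ** 2)

def det_filbert(n, alpha):
    """Determinant of F_n(alpha) via the closed formula obtained from Lemma 1."""
    sign = (-1) ** (n * (n + 1) * (2 * n + 3 * alpha - 2) // 6)
    value = Fraction(1, fib(alpha))
    for k in range(1, n + 1):
        value /= fib(2 * k + alpha) * fibonomial(2 * k + alpha - 1, k) ** 2
    return sign * value
```

For instance, for $\alpha = 1$ and $n = 2$ the pivots are $1$, $-1/2$, $1/180$, whose product $-1/360$ is indeed the determinant of the $3\times 3$ Filbert matrix.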

4. Bidiagonal Factorization of Lilbert Matrices

Let us recall that the Lucas numbers are defined recursively in a similar way to the Fibonacci numbers, just changing the initial elements of the sequence,
$$L_0 := 2, \qquad L_1 := 1, \qquad L_{n+1} = L_n + L_{n-1}, \quad n \ge 1.$$
The analogous Lilbonomial coefficients are
$$\binom{n}{k}_L := \prod_{i=1}^{k} \frac{L_{n-i+1}}{L_i} = \frac{[n]_L!}{[k]_L!\,[n-k]_L!}, \quad 0 \le k \le n, \qquad (31)$$
with the usual convention that empty products are defined as one and
$$[0]_L! := 1, \qquad [n]_L! := \prod_{k=1}^{n} L_k.$$
Let us observe that, using the Binet form of the Lucas numbers, we can write
$$L_n = \varphi^n + (1-\varphi)^n = \varphi^n\,(1+q^n),$$
for $q = (1-\sqrt{5})/(1+\sqrt{5})$ and $\varphi = (1+\sqrt{5})/2$. Moreover, as for Fibonacci numbers, in the literature one can find many interesting equalities relating the Lucas numbers with each other, as well as Lucas and Fibonacci numbers. In this section, we use the following Vajda-type equality,
$$L_n L_{n+p+q} - L_{n+p} L_{n+q} = 5\,(-1)^n F_p F_q, \quad p, q, n \in \mathbb{N}, \qquad (32)$$
proved in Theorem 5 of [26], to derive the bidiagonal factorization of the Lilbert matrix $L_n(\alpha) := (L_{i,j}(\alpha))_{1\le i,j\le n+1}$ with
$$L_{i,j}(\alpha) = \frac{1}{L_{i+j+\alpha-2}}, \quad 1 \le i,j \le n+1. \qquad (33)$$
Theorem 2.
Given $\alpha \in \mathbb{N}$, let $L_n(\alpha) \in \mathbb{R}^{(n+1)\times(n+1)}$ be the Lilbert matrix given by (33). The multipliers $m_{i,j}$ of the Neville elimination of $L_n(\alpha)$ are given by
$$m_{i,j} = \tilde m_{i,j} := (-1)^{j-1}\,\frac{L_{i+\alpha-2}^2}{L_{i+j+\alpha-2}\,L_{i+j+\alpha-3}}, \quad 1 \le j < i \le n+1. \qquad (34)$$
Moreover, the diagonal pivots $p_{i,i}$ of the Neville elimination of $L_n(\alpha)$ are
$$p_{i,i} = (-1)^{(i-1)(i+\alpha-1)}\,5^{\,i-1}\,\frac{([i-1]_F!)^2}{([i-1]_L!)^2}\;\frac{1}{L_{2i+\alpha-2}}\,\binom{2i+\alpha-3}{i-1}_L^{-2}, \quad 1 \le i \le n+1, \qquad (35)$$
and can be computed as follows:
$$p_{1,1} = 1/L_\alpha, \qquad p_{i+1,i+1} = 5\,(-1)^{\alpha-1}\,\frac{F_i^2\,L_{i+\alpha-1}^2}{L_{2i+\alpha}\,L_{2i+\alpha-1}^2\,L_{2i+\alpha-2}}\;p_{i,i}, \quad 1 \le i \le n. \qquad (36)$$
Proof. 
The proof is analogous to that of Theorem 1 for the computation of the pivots and multipliers of the Neville elimination of Filbert matrices and, for this reason, we only provide a sketch. Let $L^{(k)} := (\ell_{i,j}^{(k)})_{1\le i,j\le n+1}$, $k = 1, \ldots, n+1$, be the matrices obtained after $k-1$ steps of the Neville elimination procedure for $L_n(\alpha)$. Using an inductive reasoning similar to that of Theorem 1, the Vajda-type equality (32) and the definition (31) of the Lilbonomial coefficients, the entries of the intermediate matrices of the Neville elimination can be written as follows:
$$\ell_{i,j}^{(k)} = (-1)^{(k-1)(i+\alpha-2)}\,(-5)^{k-1}\,\frac{([k-1]_F!)^2}{([k-1]_L!)^2}\;\frac{\binom{j-1}{k-1}_F}{L_k\,\binom{i+j+\alpha-2}{k}_L\,\binom{i+k+\alpha-3}{k-1}_L}, \qquad (37)$$
for $k \le j, i \le n+1$, and then the pivots of the Neville elimination are
$$p_{i,j} = \ell_{i,j}^{(j)} = (-1)^{(j-1)(i+\alpha-2)}\,(-5)^{j-1}\,\frac{([j-1]_F!)^2}{([j-1]_L!)^2}\;\frac{1}{L_j\,\binom{i+j+\alpha-2}{j}_L\,\binom{i+j+\alpha-3}{j-1}_L}. \qquad (38)$$
Identities (35) and (36) are deduced by considering $i = j$ in (38). Moreover, Formula (34) for the multipliers $m_{i,j} = \tilde m_{i,j}$ is derived by taking into account that $m_{i,j} = p_{i,j}/p_{i-1,j}$ (see (3)).    □
Taking into account Theorem 2, the decomposition (4) of $L_n(\alpha)$ and (6) of $(L_n(\alpha))^{-1}$ can be stored by means of $BD(L_n(\alpha)) = (BD(L_n(\alpha))_{i,j})_{1\le i,j\le n+1}$ with
$$BD(L_n(\alpha))_{i,j} := \begin{cases} (-1)^{j-1}\,\dfrac{L_{i+\alpha-2}^2}{L_{i+j+\alpha-2}\,L_{i+j+\alpha-3}}, & \text{if } i > j, \\[1.5ex] (-1)^{(i-1)(i+\alpha-1)}\,5^{\,i-1}\,\dfrac{([i-1]_F!)^2}{([i-1]_L!)^2}\,\dfrac{1}{L_{2i+\alpha-2}}\,\dbinom{2i+\alpha-3}{i-1}_L^{-2}, & \text{if } i = j, \\[1.5ex] (-1)^{i-1}\,\dfrac{L_{j+\alpha-2}^2}{L_{i+j+\alpha-2}\,L_{i+j+\alpha-3}}, & \text{if } i < j. \end{cases} \qquad (39)$$
Using Lemma 1 and Formula (35) for the diagonal pivots, the determinant of the Lilbert matrices $L_n(\alpha)$ can be expressed as follows:
$$\det L_n(\alpha) = (-1)^{\frac{1}{6}n(n+1)(2n+3\alpha+1)}\,5^{\frac{1}{2}n(n+1)}\,\frac{1}{L_\alpha}\,\prod_{k=1}^{n}\left(\frac{[k]_F!}{[k]_L!}\right)^{2}\left(L_{2k+\alpha}\,\binom{2k+\alpha-1}{k}_L^{2}\right)^{-1}, \qquad (40)$$
which is equivalent to the formula obtained in Theorem 1.17 of [13].
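As for the Filbert case, the pivot formulas of Theorem 2 and the determinant formula above can be cross-checked exactly; a self-contained sketch of our own (hypothetical function names):

```python
from fractions import Fraction

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def lucas(m):
    a, b = 2, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def lucanomial(n, k):
    """Lilbonomial coefficient: product over i = 1..k of L_{n-i+1} / L_i."""
    v = Fraction(1)
    for i in range(1, k + 1):
        v *= Fraction(lucas(n - i + 1), lucas(i))
    return v

def fib_fact(n):    # [n]_F!
    v = 1
    for k in range(1, n + 1):
        v *= fib(k)
    return v

def luc_fact(n):    # [n]_L!
    v = 1
    for k in range(1, n + 1):
        v *= lucas(k)
    return v

def lilbert_pivot(i, alpha):
    """Diagonal pivot p_{i,i} of the Neville elimination of L_n(alpha), Theorem 2."""
    sign = (-1) ** ((i - 1) * (i + alpha - 1))
    head = Fraction(5 ** (i - 1) * fib_fact(i - 1) ** 2, luc_fact(i - 1) ** 2)
    return sign * head / (lucas(2 * i + alpha - 2) * lucanomial(2 * i + alpha - 3, i - 1) ** 2)

def det_lilbert(n, alpha):
    """Determinant of L_n(alpha) via the closed formula obtained from Lemma 1."""
    sign = (-1) ** (n * (n + 1) * (2 * n + 3 * alpha + 1) // 6)
    v = Fraction(5 ** (n * (n + 1) // 2), lucas(alpha))
    for k in range(1, n + 1):
        v *= Fraction(fib_fact(k) ** 2, luc_fact(k) ** 2)
        v /= lucas(2 * k + alpha) * lucanomial(2 * k + alpha - 1, k) ** 2
    return sign * v
```

For $\alpha = 1$, the first pivots are $1$, $5/36$, $25/8624$, and their product $125/310464$ agrees with the determinant of the $3\times 3$ Lilbert matrix.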

5. Numerical Experiments

In this section, a collection of numerical experiments is presented, comparing the algorithms that take advantage of the bidiagonal decompositions obtained in this work with the best standard routines. It should be noted that the cost of computing the matrix form (7) of the bidiagonal decomposition (4) is $O(n^2)$ both for the Filbert matrices $F_n(\alpha)$ (see (29)) and for the Lilbert matrices $L_n(\alpha)$ (see (39)).
We considered several Filbert matrices $F_n(\alpha)$, for $\alpha = 1$ and $\alpha = 2$, as well as Lilbert matrices $L_n(\alpha)$, for $\alpha = 1$ and $\alpha = 3$, with dimensions $n+1 = 5, \ldots, 15$. To keep the notation as contained as possible, in what follows, Filbert and Lilbert matrices are denoted by $F$ and $L$, respectively, and their bidiagonal decompositions by $BD(F)$ and $BD(L)$.
The two-norm condition number of all considered matrices was computed in Mathematica. As can be easily seen in Figure 1, the condition number grows dramatically with the size of the matrix. As mentioned at the beginning of the paper, this bad conditioning prevents standard routines from giving accurate solutions to any algebraic problem, even for relatively small-sized problems.
To analyze the behavior of the bidiagonal approach and confront it with standard direct methods, several numerical experiments were performed, concerning both Filbert and Lilbert matrices. The factorizations obtained in Section 3 and Section 4 were used as an input argument of the Matlab functions of the TNTool package, made available in [27]. In particular, the following functions were used, each corresponding to an algebraic problem:
  • TNInverseExpand($BD(A)$) provides $A^{-1}$, with an $O(n^2)$ computational cost (see [22]).
  • TNSolve($BD(A)$, $d$) solves the system $Ax = d$, with an $O(n^2)$ cost.
  • TNSingularValues($BD(A)$) obtains the singular values of $A$, with an $O(n^3)$ cost.
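To give a flavor of how such routines work, here is a sketch assuming only the storage convention $BD(A)$ recalled in Section 2; the function name is ours, and this mirrors the role of TNSolve without being the TNTool implementation:

```python
from fractions import Fraction

def bd_solve(BD, b):
    """Solve A x = b given the bidiagonal decomposition A = F_n ... F_1 D G_1 ... G_n
    stored in the matrix BD (Koev's convention).  Each bidiagonal factor is
    inverted by a single substitution sweep, so the total cost is O(n^2)."""
    size = len(BD)
    x = list(b)
    for i in range(size - 1, 0, -1):        # apply F_n^{-1}, ..., F_1^{-1}
        for r in range(i, size):
            x[r] -= BD[r][r - i] * x[r - 1]
    x = [x[r] / BD[r][r] for r in range(size)]   # apply D^{-1}
    for i in range(1, size):                # apply G_1^{-1}, ..., G_n^{-1}
        for r in range(size - 1, i - 1, -1):
            x[r - 1] -= BD[r - i][r] * x[r]
    return x
```

With exact rational input the solution is recovered exactly; in floating point, the accuracy observed in the experiments comes from evaluating the entries of $BD(A)$ directly by the explicit formulas of Sections 3 and 4, avoiding cancellation in the elimination.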
For each problem, the approximate solution obtained by the TNTool subroutine was compared with that of the classical method provided by Matlab R2022b. Relative errors in both cases were computed by comparing with the exact solution given by Mathematica 13.1, which makes use of 100-digit arithmetic.
Computation of inverses. In this experiment, we compared the accuracy in determining the inverse of each considered matrix with two methods: the bidiagonal factorization as an input to the TNInverseExpand routine and the standard Matlab command inv. It is clear from Figure 2 that our procedure obtained great accuracy in every analyzed case, whereas the results obtained with Matlab failed dramatically for moderate sizes of the matrices.
Resolution of linear systems. For each of the matrices considered, in this experiment the solution of the linear systems $Fx = d$ and $Lx = d$ was computed, where $d = ((-1)^{i+1}d_i)_{1\le i\le n+1}$ and the $d_i$, $i = 1, \ldots, n+1$, are random nonnegative integer values. This was again performed in two ways: by using the proposed bidiagonal factorization as an input of the TNSolve routine, and by the standard Matlab backslash command. As before, the standard Matlab routine could not overcome the ill-conditioned nature of the analyzed matrices, in contrast with the machine precision-order errors achieved by the bidiagonal approach, as depicted in Figure 3.
Computation of singular values. The relative errors in determining the smallest singular value of both Filbert and Lilbert matrices are illustrated in this experiment. These were computed both with the standard Matlab command svd and by providing the corresponding bidiagonal decomposition as an input argument to TNSingularValues. It follows from Figure 4 that our method accurately determined the smallest singular value in every studied case, while the results of the standard Matlab command svd were very far from the exact solution even for small sizes of the considered matrices.

6. Conclusions

The paper analyzed the generalized versions of Filbert and Lilbert matrices F n ( α ) and L n ( α ) , based on Fibonacci and Lucas numbers, respectively. Leaning on the Neville elimination, their bidiagonal factorizations were obtained explicitly, which also led to formulae for the corresponding determinants. Numerical experiments were provided, exhibiting a great level of accuracy in the case of the routines that took as an input the bidiagonal decomposition of the matrices, even for notably ill-conditioned cases, while the results of standard procedures were wrong by orders of magnitude. Future prospects include the study of the condition number of these matrices, which could offer some insight about the excellent experimental results obtained.

Author Contributions

Conceptualization, Y.K., E.M., J.M.P., E.R.-A. and B.R.; methodology, Y.K., E.M., J.M.P., E.R.-A. and B.R.; investigation, Y.K., E.M., J.M.P., E.R.-A. and B.R.; writing—original draft, Y.K., E.M., J.M.P., E.R.-A. and B.R.; writing—review and editing, Y.K., E.M., J.M.P., E.R.-A. and B.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Spanish research grants PID2022-138569NB-I00 (MCI/AEI) and RED2022-134176-T (MCI/AEI) and by Gobierno de Aragón (E41_23R, S60_23R).

Data Availability Statement

The data and codes used in this work are available upon request.

Acknowledgments

We thank the editors and the anonymous referees for their valuable comments and suggestions, which have improved this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kramer, B.; Gorodetsky, A.A. System Identification via CUR-Factored Hankel Approximation. SIAM J. Sci. Comput. 2018, 40, A848–A866. [Google Scholar] [CrossRef]
  2. Pan, V. Structured Matrices and Polynomials: Unified Superfast Algorithms; Birkhäuser: Boston, MA, USA, 2001. [Google Scholar]
  3. Bini, D.; Pan, V. Improved parallel computations with Toeplitz-like and Hankel-like matrices. Linear Algebra Appl. 1993, 189, 3–29. [Google Scholar] [CrossRef]
  4. Datta, B.N. Application of Hankel matrices of Markov Parameters to the solutions of the Routh-Hurwitz and the Schur-Cohn problems. J. Math. Anal. App. 1979, 68, 276–290. [Google Scholar] [CrossRef]
  5. Alotaibi, A.; Mursaleen, M. Applications of Hankel and regular matrices in Fourier series. In Abstract and Applied Analysis; Hindawi: New York, NY, USA, 2013; Volume 2013. [Google Scholar]
  6. Richardson, T.M. The Filbert matrix. Fibonacci Quart. 2001, 39, 268–275. [Google Scholar]
  7. Huang, X.; Yao, S. Solidification performance of new trapezoidal longitudinal fins in latent heat thermal energy storage. Case Stud. Therm. Eng. 2021, 26, 101110. [Google Scholar] [CrossRef]
  8. Hosny, K.M.; Kamal, S.T.; Darwish, M.M.; Papakostas, G.A. New Image Encryption Algorithm Using Hyperchaotic System and Fibonacci Q-Matrix. Electronics 2021, 10, 1066. [Google Scholar] [CrossRef]
  9. Benavoli, A.; Chisci, L.; Farina, A. Fibonacci sequence, golden section, Kalman filter and optimal control. Signal Process. 2009, 89, 1483–1488. [Google Scholar] [CrossRef]
  10. Andersen, J.E.; Berg, C. Quantum Hilbert matrices and orthogonal polynomials. J. Comput. Appl. Math. 2009, 233, 723–729. [Google Scholar] [CrossRef]
  11. Mainar, E.; Peña, J.M.; Rubio, B. Accurate bidiagonal factorization of quantum Hilbert matrices. Linear Algebra Appl. 2024, 681, 131–149. [Google Scholar] [CrossRef]
  12. Kiliç, E.; Prodinger, H. A generalized Filbert Matrix. Fibonacci Quart. 2010, 48, 29–33. [Google Scholar]
  13. Kiliç, E.; Prodinger, H. The generalized Lilbert matrix. Period. Math. Hung. 2016, 73, 62–72. [Google Scholar] [CrossRef]
  14. Beckermann, B. The condition number of real Vandermonde, Krylov and positive definite Hankel matrices. Numer. Math. 2000, 85, 553–577. [Google Scholar] [CrossRef]
  15. Córdova Yévenes, A.; Gautschi, W.; Ruscheweyh, S. Vandermonde matrices on the circle: Spectral properties and conditioning. Numer. Math. 1990, 57, 577–591. [Google Scholar] [CrossRef]
  16. Tyrtyshnikov, E. How bad are Hankel matrices? Numer. Math. 1994, 67, 261–269. [Google Scholar] [CrossRef]
  17. Chan, R.H.; Ng, M.K. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482. [Google Scholar] [CrossRef]
  18. Heinig, G.; Rost, K. Algebraic Methods for Toeplitz-like Matrices and Operators; Birkhäuser Verlag: Basel, Switzerland, 1984; Volume 13, 212p. [Google Scholar]
  19. Serra-Capizzano, S. Preconditioning strategies for Hermitian Toeplitz systems with nondefinite generating functions. SIAM J. Matrix Anal. Appl. 1996, 17, 1007–1019. [Google Scholar] [CrossRef]
  20. Bunch, J.R. The weak and strong stability of algorithms in numerical linear algebra. Linear Algebra Appl. 1987, 88–89, 49–66. [Google Scholar] [CrossRef]
  21. Gasca, M.; Peña, J.M. On factorizations of totally positive matrices. In Total Positivity and Its Applications; Gasca, M., Micchelli, C.A., Eds.; Kluver Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 109–130. [Google Scholar]
  22. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308. [Google Scholar] [CrossRef]
  23. Barreras, A.; Peña, J.M. Accurate computations of matrices with bidiagonal decomposition using methods for totally positive matrices. Numer. Linear Algebra Appl. 2013, 20, 413–424. [Google Scholar] [CrossRef]
  24. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23. [Google Scholar] [CrossRef]
  25. Knuth, D.E. The Art of Computer Programming, Volume I: Fundamental Algorithms, 2nd ed.; Addison-Wesley: Reading, MA, USA, 1973; Volume 1. [Google Scholar]
  26. Keskin, R.; Demirtürk, B. Some new Fibonacci and Lucas identities by matrix methods. Int. J. Math. Educ. Sci. Technol. 2010, 41, 379–387. [Google Scholar] [CrossRef]
  27. Koev, P. TNTool. Available online: http://math.mit.edu/~plamen/software/TNTool.html (accessed on 4 February 2024).
Figure 1. The 2-norm conditioning of Filbert matrices F and Lilbert matrices L.
Figure 2. Relative error of the approximations to the inverse of Filbert and Lilbert matrices, $F^{-1}$ and $L^{-1}$, respectively.
Figure 3. Relative error of the approximations to the solution of the linear systems $Fc = d$ and $Lc = d$, where $d = ((-1)^{i+1}d_i)_{1\le i\le n+1}$ and the $d_i$, $i = 1, \ldots, n+1$, are random nonnegative integer values.
Figure 4. Relative error of the approximations to the lowest singular value of Filbert matrices $F$ and Lilbert matrices $L$.

Share and Cite

MDPI and ACS Style

Khiar, Y.; Mainar, E.; Peña, J.M.; Royo-Amondarain, E.; Rubio, B. Bidiagonal Factorizations of Filbert and Lilbert Matrices. Axioms 2024, 13, 219. https://doi.org/10.3390/axioms13040219


