1. Introduction
Many efforts have been devoted in recent decades to the study of Hankel and Toeplitz matrices. Their applications extend through many areas, such as signal processing and system identification. In particular, the singular value decomposition of a Hankel matrix plays a crucial role in state-space realization and hidden Markov models (see [1,2,3,4,5]).
An interesting case is the so-called reciprocal Hankel matrix, defined by Richardson in [6]. Given an integer sequence $(a_n)_{n\ge 1}$, these matrices are defined as $H:=(1/a_{i+j-1})_{1\le i,j\le n}$. An appealing case appears when considering the famous Fibonacci sequence, defined by $F_1=F_2=1$ and $F_{n+2}=F_{n+1}+F_n$ for $n\ge 1$, for which the corresponding reciprocal Hankel matrix is called a Filbert matrix because of its similarities with the well-known Hilbert matrix, its entries being given by $h_{ij}=1/F_{i+j-1}$, $1\le i,j\le n$.
Fibonacci numbers, omnipresent in nature, come into play in diverse scientific areas, e.g., image encryption algorithms [7] or thermal engineering [8], and they have also proved to be relevant in signal processing [9].
Filbert matrices are also deeply related to $q$-Hilbert matrices for $q=-1/\phi^2$ (cf. [10]), which in turn were recently studied by some authors [11]. These are defined by

$$H_q:=\Big(\dfrac{1}{[i+j-1]_q}\Big)_{1\le i,j\le n},$$

where $[n]_q$ is the $q$-integer defined as $[n]_q:=1+q+\cdots+q^{n-1}=(1-q^n)/(1-q)$.
A generalization of Filbert matrices, $H^{(r)}:=(1/F_{i+j+r})_{1\le i,j\le n}$, for $r\ge -1$ being an integer parameter, was studied in [12].
We can also consider the Lucas sequence, $L_1=1$, $L_2=3$, $L_{n+2}=L_{n+1}+L_n$, to obtain the Lilbert matrices $(1/L_{i+j-1})_{1\le i,j\le n}$, defined in [13]. As with Filbert matrices, Lilbert matrices can be generalized analogously. In the mentioned papers, explicit formulae for the $LU$-decomposition, the Cholesky factorization and the inverse have been obtained for Filbert matrices, Lilbert matrices and some extensions [12,13].
The condition number of Vandermonde and Hilbert matrices grows dramatically with their dimensions [14,15,16]. Unfortunately, specific information about the condition number of Filbert and Lilbert matrices is not widely documented. However, we can expect these matrices to be ill-conditioned due to their structural similarity to Hilbert matrices. In Section 5, devoted to numerical experiments, it is shown that the two-norm condition number of the Filbert and Lilbert matrices grows significantly as the size of the matrices increases, reaching enormous values already for moderate dimensions (see Figure 1). As a consequence, conventional routines applying the best algorithms for solving algebraic problems, such as computing the inverse of a matrix or its singular values, or the resolution of a linear system, fail to provide any accuracy in the obtained results.
At this point, it should be mentioned that any Hankel matrix can be transformed into a Toeplitz matrix at no cost by means of a permutation, namely the one given by the anti-identity matrix. In principle, when solving algebraic problems such as linear systems, this would allow us to apply several well-established numerical methods, including the so-called fast direct Toeplitz solvers [17,18], with a computational cost of $O(n^2)$ operations, and iterative procedures based on the conjugate gradient algorithm with a suitable preconditioner, which can improve the cost to $O(n\log n)$ [19]. However, these direct algorithms guarantee only weak stability [20], i.e., that for well-conditioned problems the computed and the exact solutions are close. The same can be said about preconditioned conjugate gradient methods, since their speed of convergence and stability heavily depend on the condition number of the given matrix.
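As a quick illustration of this permutation trick (a minimal NumPy sketch with matrices and sizes of our choosing, not taken from the paper), flipping the row order of a Hankel matrix via the anti-identity yields a Toeplitz matrix:

```python
import numpy as np
from scipy.linalg import hankel

n = 5
# A Hankel matrix: entries depend only on i + j.
H = hankel(np.arange(1, n + 1), np.arange(n, 2 * n))
# The anti-identity (exchange) matrix J reverses the row order.
J = np.flipud(np.eye(n))
T = J @ H
# T is Toeplitz: each diagonal is constant, i.e., entries depend only on i - j.
for d in range(-(n - 1), n):
    assert len(set(np.diag(T, d))) == 1
print(T)
```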
In this work, the generalized versions of Filbert and Lilbert matrices are addressed by means of a Neville elimination process, giving explicit expressions for their multipliers and pivots. Following [21], this allows us to determine a bidiagonal factorization of the considered matrices. As a byproduct, formulae for the determinants of both classes of matrices are derived. Moreover, numerical experiments for the above-mentioned, heavily ill-conditioned algebraic problems have been performed, showing that the obtained results exhibit accuracy of the order of the machine precision, in stark contrast with traditional numerical methods.
The paper is organized as follows: to keep this paper as self-contained as possible, Section 2 recalls basic concepts and results related to Neville elimination and bidiagonal factorizations of nonsingular matrices. Filbert and Lilbert matrices are considered in Section 3 and Section 4, respectively, where the pivots and multipliers of their Neville elimination are obtained and a remarkable analogy with those of quantum Hilbert matrices is illustrated. The obtained bidiagonal factorizations experimentally show an impressive level of performance, attaining errors of the order of the machine precision in cases where classical numerical methods miss the correct solution by orders of magnitude. Finally, Section 5 presents a series of numerical experiments.
2. Notations and Auxiliary Results
As advanced in the Introduction, the main result of this paper, gathered in the following sections, consists in the computation of the bidiagonal factorization of Filbert and Lilbert matrices, which is possible by following a Neville elimination process. This being the case, let us begin by recalling some basic results concerning Neville elimination (NE). First of all, it is an algorithm that, given an $n\times n$ real-valued matrix $A$, obtains an upper-triangular matrix $U$ after $n$ iterations. More specifically, intermediate steps, labeled by $A^{(k)}=(a^{(k)}_{ij})_{1\le i,j\le n}$ for $k=1,\dots,n$, are obtained from the previous iteration $A^{(k-1)}$, making zeros below the diagonal in the $k$th column. To do so, the initial step is by definition $A^{(1)}:=A$, whereas the entries of $A^{(k+1)}$ for every $k\in\{1,\dots,n-1\}$ are obtained through the subsequent recursion formula

$$a^{(k+1)}_{ij}:=\begin{cases} a^{(k)}_{ij}-\dfrac{a^{(k)}_{ik}}{a^{(k)}_{i-1,k}}\,a^{(k)}_{i-1,j}, & \text{if } k+1\le i\le n \text{ and } a^{(k)}_{i-1,k}\ne 0,\\[1ex] a^{(k)}_{ij}, & \text{otherwise.}\end{cases}\qquad(1)$$
In the last iteration of this process, the matrix $U:=A^{(n)}$ is obtained, which, as mentioned before, is upper-triangular. In this process, the entries corresponding to the $j$th column at the $j$th step, i.e.,

$$p_{ij}:=a^{(j)}_{ij}, \quad 1\le j\le i\le n, \qquad (2)$$

are called the pivots (or $i$th diagonal pivots in the $i=j$ case) of the NE process. The following quotient is also of relevance:

$$m_{ij}:=\dfrac{p_{ij}}{p_{i-1,j}}, \quad 1\le j<i\le n, \qquad (3)$$

and is known as the $(i,j)$ multiplier.
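As an illustration, the following minimal Python sketch (a helper of our own, assuming that no zero entries arise during the elimination, so that no row exchanges are needed) computes the pivots and multipliers defined in (2) and (3):

```python
import numpy as np

def neville_elimination(A):
    """Pivots p[i, j] and multipliers m[i, j] of the NE of A.

    Plain floating-point sketch assuming that no zero entries are met
    during the elimination, so no row exchanges are needed.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    p = np.zeros((n, n))   # p[i, j] = pivot at step j, row i (i >= j), cf. (2)
    m = np.zeros((n, n))   # m[i, j] = multiplier as in (3) (i > j)
    for k in range(n - 1):
        p[k:, k] = A[k:, k]               # pivots of the kth column
        for i in range(n - 1, k, -1):     # bottom-up: subtract the previous row
            m[i, k] = A[i, k] / A[i - 1, k]
            A[i, :] -= m[i, k] * A[i - 1, :]
    p[n - 1, n - 1] = A[n - 1, n - 1]
    return p, m, A                        # A now holds the upper-triangular U

# The diagonal pivots multiply to det(A); see Lemma 1 below.
A0 = [[4., 2., 1.], [2., 3., 1.], [1., 1., 2.]]
p, m, U = neville_elimination(A0)
print(np.prod(np.diag(p)), np.linalg.det(A0))   # both equal 13.0
```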
By applying a second Neville elimination to $U^{T}$, a diagonal matrix is obtained; this process is known as a complete Neville elimination. When, in this process, there is no need to perform any row exchanges, the matrix $A$ is said to verify the WRC condition (see, e.g., [21]). In Theorem 2.2 of [21], it is proved that an $n\times n$ real-valued nonsingular matrix $A$ verifies the WRC condition if and only if it can be expressed in a unique way as the following product,

$$A = F_{n-1}F_{n-2}\cdots F_1\, D\, G_1 G_2\cdots G_{n-1}, \qquad (4)$$

where $F_i$ and $G_i$, $1\le i\le n-1$, are the lower- and upper-triangular bidiagonal matrices, respectively, given by

$$F_i = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & m_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & m_{n,n-i} & 1 \end{pmatrix}, \qquad G_i^{T} = \begin{pmatrix} 1 & & & & & \\ & \ddots & & & & \\ & & 1 & & & \\ & & \widetilde m_{i+1,1} & 1 & & \\ & & & \ddots & \ddots & \\ & & & & \widetilde m_{n,n-i} & 1 \end{pmatrix}, \qquad (5)$$

while the entries of the diagonal matrix $D$ are the diagonal pivots $p_{ii}$, $1\le i\le n$, obtained in the NE of $A$. In fact, the NE processes of $A$ and $A^{T}$ also give the nondiagonal entries of $F_i$ and $G_i$, since the values $m_{ij}$ and $\widetilde m_{ij}$ appearing in (5) are precisely the multipliers of these algorithms as defined in (3).
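To make the structure of (4) and (5) concrete, the following sketch (a plain floating-point illustration with helper names of our own, reusing `neville_elimination` from the previous sketch) assembles the bidiagonal factors from the NE output and checks that their product recovers $A$:

```python
import numpy as np

def bidiagonal_factors(p, m, mt):
    """Assemble D and the factors F_i, G_i of (4)-(5) from NE output."""
    n = p.shape[0]
    D = np.diag(np.diag(p))
    F, G = [], []
    for i in range(1, n):                  # i = 1, ..., n - 1
        Fi, Gi = np.eye(n), np.eye(n)
        for j in range(i + 1, n + 1):      # rows i + 1, ..., n (1-based)
            Fi[j - 1, j - 2] = m[j - 1, j - i - 1]    # (F_i)_{j,j-1} = m_{j,j-i}
            Gi[j - 2, j - 1] = mt[j - 1, j - i - 1]   # (G_i)_{j-1,j} = mtilde_{j,j-i}
        F.append(Fi)
        G.append(Gi)
    return F, D, G

A = np.array([[4., 2., 1.], [2., 3., 1.], [1., 1., 2.]])
p, m, _ = neville_elimination(A)       # NE of A: multipliers m_ij
_, mt, _ = neville_elimination(A.T)    # NE of A^T: multipliers mtilde_ij
F, D, G = bidiagonal_factors(p, m, mt)
B = np.linalg.multi_dot(F[::-1] + [D] + G)   # F_{n-1} ... F_1 D G_1 ... G_{n-1}
print(np.allclose(B, A))               # True
```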
Another interesting result is provided by Theorem 2.2 of [22]. Taking advantage of the diagonal pivots and multipliers obtained in the NE of $A$, it is possible to formulate the inverse $A^{-1}$ as

$$A^{-1} = \widetilde G_1 \widetilde G_2\cdots \widetilde G_{n-1}\, D^{-1}\, \widetilde F_{n-1}\cdots \widetilde F_2 \widetilde F_1, \qquad (6)$$

where the matrices $\widetilde F_i$ and $\widetilde G_i$ are very much like their counterparts $F_i$ and $G_i$, but with a different arrangement of the multipliers, their nonzero off-diagonal entries being the negated multipliers $-m_{ij}$ and $-\widetilde m_{ij}$, respectively.
It is worth noting that more general classes of matrices can be factorized as in (4); see [23]. Hereafter, the convention adopted by Koev in [24] to store the coefficients of the bidiagonal decomposition (4) of $A$ in an $n\times n$ matrix $BD(A)$ is followed. The entries of this matrix form are given by

$$(BD(A))_{ij} := \begin{cases} m_{ij}, & \text{if } i>j,\\ \widetilde m_{ji}, & \text{if } i<j,\\ p_{ii}, & \text{if } i=j.\end{cases} \qquad (7)$$
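Packing the NE output into this compact form is then immediate; a small sketch following the convention (7), again reusing the NE routine above:

```python
import numpy as np

def bd_matrix(A):
    """Store the bidiagonal decomposition (4) of A as in (7)."""
    A = np.asarray(A, dtype=float)
    p, m, _ = neville_elimination(A)      # pivots and multipliers of A
    _, mt, _ = neville_elimination(A.T)   # multipliers of the NE of A^T
    BD = np.diag(np.diag(p))              # diagonal: pivots p_ii
    BD += np.tril(m, -1)                  # below the diagonal: m_ij
    BD += np.tril(mt, -1).T               # above the diagonal: mtilde_ji
    return BD
```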
Remark 1. Provided that the bidiagonal factorization of a nonsingular matrix exists, then, using the factorization (4), it follows that $BD(A^{T})=BD(A)^{T}$. Furthermore, in the case of $A$ being symmetric, we have that $\widetilde m_{ij}=m_{ij}$ for $i>j$ and, as a consequence, $BD(A)$ is a symmetric matrix.

It is worth noting that, thanks to the structure of the factors in the bidiagonal decomposition (4) of a nonsingular matrix $A$, in order to compute its determinant it suffices to perform the product of the diagonal pivots obtained in the NE of $A$, since the determinant of each of the factors $F_i$ and $G_i$ is trivially one. This result will be used later in the manuscript to obtain the determinants of generalized Filbert and Lilbert matrices and is summarized in the following lemma.
Lemma 1. Consider a nonsingular matrix $A\in\mathbb{R}^{n\times n}$. If the bidiagonal decomposition of $A$ exists, then

$$\det A = \prod_{i=1}^{n} p_{ii},$$

where $p_{ii}$, $1\le i\le n$, are the diagonal pivots of the Neville elimination of $A$ given by (2).

3. Bidiagonal Factorization of Filbert Matrices
Let us recall that the sequence of Fibonacci numbers $(F_n)_{n\ge 1}$ is given by $F_1=F_2=1$ with the recursion formula

$$F_{n+2}=F_{n+1}+F_n, \quad n\ge 1.$$

Filbert matrices are defined in terms of the Fibonacci sequence as

$$H := \Big(\dfrac{1}{F_{i+j-1}}\Big)_{1\le i,j\le n}, \qquad (11)$$

and they have the property, shared with Hilbert matrices, of having an inverse with integer entries [6]. In fact, an explicit formula for the entries of the inverse matrices is proved using computer algebra. This formula shows a remarkable analogy with the corresponding formula for the elements of the inverse of Hilbert matrices, in the sense that it can be obtained by replacing some binomial coefficients $\binom{n}{k}$ by the analogous Fibonomial coefficients introduced in [25] as follows:

$$\binom{n}{k}_{F} := \dfrac{F_n F_{n-1}\cdots F_{n-k+1}}{F_k F_{k-1}\cdots F_1},$$
with the usual convention that empty products are defined as one. Let us observe that, by defining $F_k! := F_1 F_2\cdots F_k$ and $F_0! := 1$, we can also write

$$\binom{n}{k}_{F} = \dfrac{F_n!}{F_k!\, F_{n-k}!}.$$
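For illustration, Fibonomial coefficients can be evaluated exactly with rational arithmetic (a small sketch with helper names of our own), and their integrality can be verified empirically:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    return 1 if n in (1, 2) else fib(n - 1) + fib(n - 2)

def fibonomial(n, k):
    """Fibonomial coefficient as an exact rational number."""
    num = den = Fraction(1)
    for i in range(k):                  # the empty product (k = 0) is 1
        num *= fib(n - i)
        den *= fib(i + 1)
    return num / den

# Fibonomial coefficients are integers (cf. the recursion discussed below).
assert all(fibonomial(n, k).denominator == 1
           for n in range(1, 15) for k in range(n + 1))
print(fibonomial(10, 5))   # 136136
```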
The following identities for Fibonomial coefficients hold:

$$\binom{n}{k}_{F} = \binom{n}{n-k}_{F}, \qquad \binom{n}{0}_{F} = \binom{n}{n}_{F} = 1,$$

and, taking into account the following recursion formula

$$\binom{n}{k}_{F} = F_{k-1}\binom{n-1}{k}_{F} + F_{n-k+1}\binom{n-1}{k-1}_{F}$$

(see [25]), it can be clearly seen that the Fibonomial coefficients are integers. It can also be checked that Fibonomial coefficients satisfy the useful identities (16)-(18), which are repeatedly used in the proof of Theorem 1 below.
Now, we consider the following generalization of the Filbert matrix described in (11). Given an integer $r\ge -1$, let

$$H^{(r)} := (h_{ij})_{1\le i,j\le n}, \quad \text{with} \quad h_{ij} := \dfrac{1}{F_{i+j+r}}. \qquad (19)$$

Clearly, for $r=-1$, $H^{(r)}$ coincides with the Filbert matrix (11).
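A direct construction with exact rational arithmetic (a sketch reusing the `fib` helper above; the name `filbert` is our own) makes the definition and the special case $r=-1$ explicit:

```python
from fractions import Fraction

def filbert(n, r=-1):
    """Generalized Filbert matrix H^(r) of (19), with exact entries."""
    return [[Fraction(1, fib(i + j + r)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# For r = -1, the entries are 1/F_{i+j-1}: the classical Filbert matrix (11).
for row in filbert(3):
    print([str(x) for x in row])   # ['1', '1', '1/2'], ['1', '1/2', '1/3'], ...
```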
There are many nice equalities relating the Fibonacci numbers with each other. In this paper, we use the following identity:

$$F_{m+i}F_{m+j} - F_m F_{m+i+j} = (-1)^m F_i F_j, \qquad (20)$$

which is known as Vajda's identity. On the other hand, it is well known that Fibonacci numbers $F_n$, $n\ge 1$, satisfy the following property:

$$\lim_{n\to\infty} \dfrac{F_{n+1}}{F_n} = \phi,$$

where $\phi := (1+\sqrt 5)/2$ is the "golden ratio". Moreover, using the Binet form of Fibonacci numbers, we can write

$$F_n = \phi^{\,n-1}\,[n]_q, \quad \text{with} \quad q = -\dfrac{1}{\phi^2} = \dfrac{1-\sqrt 5}{1+\sqrt 5}.$$

The previous equalities illustrate a clear relation between $q$-Hilbert and Filbert matrices that is going to be reflected in the obtained expressions for the pivots and multipliers of the Neville elimination and, consequently, in their bidiagonal factorization (4) (cf. [11]).
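Both relations are easy to sanity-check numerically; a quick sketch, reusing `fib` from above, with index ranges of our choosing:

```python
from math import isclose, sqrt

phi = (1 + sqrt(5)) / 2
q = -1 / phi**2

def q_int(n):
    """q-integer [n]_q = 1 + q + ... + q^(n-1)."""
    return (1 - q**n) / (1 - q)

# Binet-type relation F_n = phi^(n-1) [n]_q ...
assert all(isclose(fib(n), phi**(n - 1) * q_int(n)) for n in range(1, 20))
# ... and Vajda's identity (20), checked exactly on a small index range.
assert all(fib(m + i) * fib(m + j) - fib(m) * fib(m + i + j)
           == (-1)**m * fib(i) * fib(j)
           for m in range(1, 8) for i in range(1, 6) for j in range(1, 6))
```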
Theorem 1. Given $n\in\mathbb{N}$ and an integer $r\ge -1$, let $H^{(r)}=(h_{ij})_{1\le i,j\le n}$ be the Filbert matrix given by (19). The multipliers $m_{ij}$ of the Neville elimination of $H^{(r)}$ are given by (21). Moreover, the diagonal pivots of the Neville elimination of $H^{(r)}$ are given by (22) and can also be computed by means of (23).

Proof. Let $A^{(k)}=(a^{(k)}_{ij})_{1\le i,j\le n}$, $k=1,\dots,n$, be the matrices obtained after $k-1$ steps of the Neville elimination procedure for $H^{(r)}$. Now, by induction on $k$, we see that the entries $a^{(k)}_{ij}$ are given by (24). It can be easily checked that (24) holds for $k=1$: using the Vajda identity (20) with a suitable choice of $m$, $i$ and $j$, the entries $a^{(1)}_{ij}=1/F_{i+j+r}$ can be written in the required form. If (24) holds for some $k$, then, for $k+1\le i,j\le n$, by the recursion (1) of the Neville elimination and the identity obtained from (18), we can express $a^{(k+1)}_{ij}$ in terms of the entries of $A^{(k)}$, obtaining (25). Taking into account (17) and (16), respectively, the involved Fibonomial quotients simplify and, from (25), we derive (26). On the other hand, by considering the Vajda identity (20) again with a suitable choice of $m$, $i$ and $j$, the remaining factor can be simplified and then, from (26), the claimed expression for $a^{(k+1)}_{ij}$ follows for $k+1\le i,j\le n$. Finally, taking into account (17) and (16), respectively, we conclude that (24) holds for $k+1$.

Now, by (2) and (24), the pivots of the Neville elimination of $H^{(r)}$ are obtained by taking $j\le i$ in (24). For the particular case $i=j$, we obtain the diagonal pivots, and (22) follows. It can be easily checked that the diagonal pivots can also be computed as in (23), confirming that formula.

Let us observe that, since the pivots of the Neville elimination of $H^{(r)}$ are nonzero, this elimination can be performed without row exchanges.

Finally, using (3) and (24), the multipliers $m_{ij}$ can be described as in (21). Since $H^{(r)}$ is symmetric, using Remark 1, we deduce that $\widetilde m_{ij}=m_{ij}$. □
Taking into account Theorem 1, the decomposition (4) of $H^{(r)}$ and the decomposition (6) of $(H^{(r)})^{-1}$ can be stored by means of $BD(H^{(r)})$, whose entries are given in (29). On the other hand, using Lemma 1 and Formula (22) for the diagonal pivots, the determinant of Filbert matrices $H^{(r)}$ can be expressed as

$$\det H^{(r)} = \prod_{i=1}^{n} p_{ii},$$

with $p_{ii}$ given by (22), which is an equivalent formula to that obtained in Theorem 5 of [12].
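Lemma 1 also provides a convenient exact cross-check of this determinant formula; the following sketch (reusing `filbert` from above and assuming, as shown above, that no row exchanges are needed) performs the Neville elimination in rational arithmetic and multiplies the diagonal pivots:

```python
from fractions import Fraction

def det_via_pivots(A):
    """det(A) as the product of the diagonal pivots of its NE (Lemma 1)."""
    A = [row[:] for row in A]
    n = len(A)
    det = Fraction(1)
    for k in range(n):
        det *= A[k][k]                    # diagonal pivot p_kk
        for i in range(n - 1, k, -1):     # one NE step on column k
            mult = A[i][k] / A[i - 1][k]
            A[i] = [a - mult * b for a, b in zip(A[i], A[i - 1])]
    return det

H = filbert(4)                  # 4 x 4 Filbert matrix, exact entries
print(det_via_pivots(H))        # an exact (and tiny) rational number
```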
4. Bidiagonal Factorization of Lilbert Matrices
Let us recall that Lucas numbers are defined recursively in a similar way to Fibonacci numbers, just changing the initial values of the sequence:

$$L_1=1, \quad L_2=3, \quad L_{n+2}=L_{n+1}+L_n, \quad n\ge 1.$$

The analogous Lilbonomial coefficients are

$$\binom{n}{k}_{L} := \dfrac{L_n L_{n-1}\cdots L_{n-k+1}}{L_k L_{k-1}\cdots L_1}, \qquad (31)$$

with the usual convention that empty products are defined as one and, writing $L_k!:=L_1 L_2\cdots L_k$ with $L_0!:=1$, we also have $\binom{n}{k}_{L}=L_n!/(L_k!\,L_{n-k}!)$. Let us observe that, using the Binet form of Lucas numbers, we can write

$$L_n = \phi^{\,n}\,(1+q^{\,n}),$$

for $n\ge 0$ and $q=-1/\phi^2$. Moreover, as for Fibonacci numbers, in the literature one can find many interesting equalities relating the Lucas numbers with each other, as well as Lucas and Fibonacci numbers. In this section, we use the following Vajda-type equality

$$L_{m+i}L_{m+j} - L_m L_{m+i+j} = (-1)^{m+1}\,5\,F_i F_j, \qquad (32)$$

proved in Theorem 5 of [26], to derive the bidiagonal factorization of the Lilbert matrix

$$L^{(r)} := (\ell_{ij})_{1\le i,j\le n}, \quad \text{with} \quad \ell_{ij} := \dfrac{1}{L_{i+j+r}}. \qquad (33)$$
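The Lucas analogues of the earlier sketches are immediate (names again our own; `fib` is reused from Section 3, and the admissible range of $r$ is as in the text), including an exact check of the Vajda-type equality (32):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def lucas(n):
    """Lucas numbers with L(1) = 1, L(2) = 3."""
    return (1, 3)[n - 1] if n in (1, 2) else lucas(n - 1) + lucas(n - 2)

def lilbert(n, r=0):
    """Lilbert matrix L^(r) of (33), with exact rational entries."""
    return [[Fraction(1, lucas(i + j + r)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# Vajda-type equality (32): L_{m+i} L_{m+j} - L_m L_{m+i+j} = (-1)^(m+1) 5 F_i F_j.
assert all(lucas(m + i) * lucas(m + j) - lucas(m) * lucas(m + i + j)
           == (-1)**(m + 1) * 5 * fib(i) * fib(j)
           for m in range(1, 8) for i in range(1, 6) for j in range(1, 6))
```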
Theorem 2. Given $n\in\mathbb{N}$ and an integer $r\ge -1$, let $L^{(r)}=(\ell_{ij})_{1\le i,j\le n}$ be the Lilbert matrix given by (33). The multipliers $m_{ij}$ of the Neville elimination of $L^{(r)}$ are given by (34). Moreover, the diagonal pivots of the Neville elimination of $L^{(r)}$ are given by (35) and can also be computed by means of (36).

Proof. The proof is analogous to that of Theorem 1 for the computation of the pivots and multipliers of the Neville elimination of Filbert matrices and, for this reason, we only provide a sketch. Let $A^{(k)}=(a^{(k)}_{ij})_{1\le i,j\le n}$, $k=1,\dots,n$, be the matrices obtained after $k-1$ steps of the Neville elimination procedure for $L^{(r)}$. Using an inductive reasoning similar to that of Theorem 1, together with the Vajda-type equality (32) and the definition (31) of the Lilbonomial coefficients, the entries of the intermediate matrices of the Neville elimination can be written as in (38) for $k\le i,j\le n$, and then the pivots of the Neville elimination follow from (2). Identities (35) and (36) are deduced by considering $i=j$ in (38). Moreover, Formula (34) for the multipliers $m_{ij}$ is derived by taking into account that $m_{ij}=p_{ij}/p_{i-1,j}$ (see (3)). □
Taking into account Theorem 2, the decomposition (4) of $L^{(r)}$ and the decomposition (6) of $(L^{(r)})^{-1}$ can be stored by means of $BD(L^{(r)})$, whose entries are given in (39). Using Lemma 1 and Formula (35) for the diagonal pivots, the determinant of Lilbert matrices $L^{(r)}$ can be expressed as

$$\det L^{(r)} = \prod_{i=1}^{n} p_{ii},$$

with $p_{ii}$ given by (35), which is an equivalent formula to that obtained in Theorem 1.17 of [13].
5. Numerical Experiments
In this section, a collection of numerical experiments is presented, comparing the algorithms that take advantage of the bidiagonal decompositions presented in this work with the best standard routines. It should be noted that the cost of computing the matrix form (7) of the bidiagonal decomposition (4) is $O(n^2)$ elementary operations, both for Filbert matrices $H^{(r)}$ (see (29)) and for Lilbert matrices $L^{(r)}$ (see (39)), since it amounts to evaluating an explicit formula for each of the $n^2$ entries.
We considered several Filbert matrices $H^{(r)}$ and Lilbert matrices $L^{(r)}$, for different values of the parameter $r$ and of the dimension $n$. To keep the notation as contained as possible, in what follows, Filbert and Lilbert matrices are denoted as $F$ and $L$, respectively, and their bidiagonal decompositions by $BD(F)$ and $BD(L)$.
The two-norm condition number of all considered matrices was computed in Mathematica. As can be easily seen in
Figure 1, the condition number grows dramatically with the size of the matrix. As mentioned at the beginning of the paper, this bad conditioning prevents standard routines from giving accurate solutions to any algebraic problem, even for relatively small-sized problems.
To analyze the behavior of the bidiagonal approach and confront it with standard direct methods, several numerical experiments were performed, concerning both Filbert and Lilbert matrices. The factorizations obtained in Section 3 and Section 4 were used as an input argument of the Matlab functions of the TNTool package, made available in [27]. In particular, the following functions were used, each corresponding to an algebraic problem:
TNInverseExpand provides $A^{-1}$, with an $O(n^2)$ computational cost (see [22]).
TNSolve solves the system $Ax=b$, with an $O(n^2)$ cost.
TNSingularValues obtains the singular values of $A$, with an $O(n^3)$ cost.
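To convey the idea behind such routines (this is only a schematic illustration, not the actual TNTool implementation, which is additionally engineered to avoid subtractive cancellations and achieve high relative accuracy), the following sketch solves $Ax=b$ in $O(n^2)$ operations directly from the factorization (4), reusing the helpers from Section 2:

```python
import numpy as np

def solve_from_bd(F, D, G, b):
    """Solve (F_{n-1} ... F_1 D G_1 ... G_{n-1}) x = b factor by factor.

    Each bidiagonal sweep costs O(n), so the total cost is O(n^2).
    """
    x = np.array(b, dtype=float)
    n = len(x)
    for Fi in F[::-1]:                    # F_{n-1} first, ..., F_1 last
        for j in range(1, n):             # forward substitution
            x[j] -= Fi[j, j - 1] * x[j - 1]
    x /= np.diag(D)                       # diagonal solve
    for Gi in G:                          # G_1 first, ..., G_{n-1} last
        for j in range(n - 2, -1, -1):    # backward substitution
            x[j] -= Gi[j, j + 1] * x[j + 1]
    return x

A = np.array([[4., 2., 1.], [2., 3., 1.], [1., 1., 2.]])
p, m, _ = neville_elimination(A)
_, mt, _ = neville_elimination(A.T)
F, D, G = bidiagonal_factors(p, m, mt)
b = np.array([1., 2., 3.])
print(np.allclose(solve_from_bd(F, D, G, b), np.linalg.solve(A, b)))  # True
```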
For each problem, the approximate solution obtained by the TNTool subroutine was compared with that of the classical method provided by Matlab. Relative errors in both cases were computed by comparing with the exact solution given by Mathematica, which makes use of 100-digit arithmetic.
Computation of inverses. In this experiment, we compared the accuracy in determining the inverse of each considered matrix with two methods: the bidiagonal factorization as an input to the TNInverseExpand routine and the standard Matlab command inv. It is clear from Figure 2 that our procedure achieved great accuracy in every analyzed case, whereas the results obtained with Matlab failed dramatically for moderate sizes of the matrices.
Resolution of linear systems. For each of the matrices considered, in this experiment, the solution of the linear systems $Fx=b$ and $Lx=c$ was computed, where the entries $b_i$ and $c_i$, $1\le i\le n$, of the right-hand sides are random nonnegative integer values. This was again performed in two ways: by using the proposed bidiagonal factorization as an input of the TNSolve routine and by the standard Matlab backslash command. As before, the standard Matlab routine could not overcome the ill-conditioned nature of the analyzed matrices, in contrast with the machine precision-order errors achieved by the bidiagonal approach, as is depicted in Figure 3.
Computation of singular values. The relative errors in determining the smallest singular value of both Filbert and Lilbert matrices are illustrated in this experiment. These were computed both with the standard Matlab command svd and by providing the corresponding bidiagonal decomposition as an input argument to TNSingularValues. It follows from Figure 4 that our method accurately determined the smallest singular value in every studied case, while the results of the standard Matlab command svd were very far from the exact solution even for small sizes of the considered matrices.