Abstract
Extensions of Filbert and Lilbert matrices are addressed in this work. They are reciprocal Hankel matrices based on Fibonacci and Lucas numbers, respectively, and both are related to Hilbert matrices. The Neville elimination is applied to provide explicit expressions for their bidiagonal factorization. As a byproduct, formulae for the determinants of these matrices are obtained. Finally, numerical experiments show that several algebraic problems involving these matrices can be solved with outstanding accuracy, in contrast with traditional approaches.
Keywords:
bidiagonal decompositions; Hilbert matrices; Filbert matrices; Lilbert matrices; Fibonacci numbers; Lucas numbers
MSC:
65F05; 65F15; 65G50; 15A23
1. Introduction
Many efforts have been devoted in the last decades to the study of Hankel and Toeplitz matrices. Their applications extend through many areas, such as signal processing and system identification. In particular, the singular value decomposition of a Hankel matrix plays a crucial role in state-space realization and hidden Markov models (see [1,2,3,4,5]).
An interesting case is the so-called reciprocal Hankel matrix, defined by Richardson in [6]. Given an integer sequence $(a_n)_{n\ge 1}$ with nonzero terms, these matrices are defined as $M=(1/a_{i+j-1})_{1\le i,j\le n}$. An appealing case appears when considering the famous Fibonacci sequence, defined by
$$F_1=F_2=1,\qquad F_{n+2}=F_{n+1}+F_n,\quad n\ge 1,$$
for which the corresponding reciprocal Hankel matrix is called a Filbert matrix because of its similarities with the well-known Hilbert matrix, its entries being given by
$$h_{ij}=\frac{1}{F_{i+j-1}},\qquad 1\le i,j\le n.$$
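For illustration, a Filbert matrix can be generated in exact rational arithmetic; the snippet below is a small sketch (function names are ours) using the convention $F_1=F_2=1$:

```python
from fractions import Fraction

def fibonacci(n):
    """Return the list [F_1, ..., F_n] with F_1 = F_2 = 1."""
    fibs = [1, 1]
    while len(fibs) < n:
        fibs.append(fibs[-1] + fibs[-2])
    return fibs[:n]

def filbert(n):
    """n x n Filbert matrix with entries 1/F_{i+j-1} (1-based i, j)."""
    F = fibonacci(2 * n - 1)
    # with 0-based loop indices, entry (i, j) is 1/F_{i+j+1} = 1/F[i + j]
    return [[Fraction(1, F[i + j]) for j in range(n)] for i in range(n)]

H = filbert(4)
```

Since the entries depend only on $i+j$, the resulting matrix is Hankel (constant along anti-diagonals).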
Fibonacci numbers, omnipresent in nature, come into play in diverse scientific areas, e.g., image encryption algorithms [7] or thermal engineering [8], and they have proved to also be relevant in signal processing [9].
Filbert matrices are also deeply related to q-Hilbert matrices (cf. [10]), which in turn were recently studied in [11]. These are defined by
$$\widetilde h_{ij}=\frac{1}{[i+j-1]_q},\qquad 1\le i,j\le n,$$
where $[n]_q$ is the q-integer defined as $[n]_q=1+q+\cdots+q^{n-1}=\dfrac{1-q^n}{1-q}$.
A generalization of Filbert matrices, depending on an integer parameter, was studied in [12].
We can also consider the Lucas sequence,
$$L_0=2,\quad L_1=1,\qquad L_{n+2}=L_{n+1}+L_n,\quad n\ge 0,$$
to obtain the Lilbert matrices, defined in [13]. As with Filbert matrices, Lilbert matrices can be generalized analogously. In the mentioned papers, explicit formulae for the LU-decomposition, the Cholesky factorization and the inverse have been obtained for Filbert and Lilbert matrices and some of their extensions [12,13].
The condition number of Vandermonde and Hilbert matrices grows dramatically with their dimension [14,15,16]. Although specific results on the condition number of Filbert and Lilbert matrices are not widely documented, these matrices can be expected to be ill-conditioned, given their structural similarity to Hilbert matrices. In Section 5, devoted to numerical experiments, it is shown that the two-norm condition number of Filbert and Lilbert matrices grows dramatically as their size increases. As a consequence, conventional routines implementing the best available algorithms for algebraic problems, such as computing the inverse of a matrix or its singular values, or solving a linear system, fail to provide any accuracy in the computed results.
At this point, it should be mentioned that any Hankel matrix can be transformed into a Toeplitz matrix at no cost by means of a permutation, namely the one given by the anti-identity matrix. In principle, when solving algebraic problems such as linear systems, this would allow us to apply several well-established numerical methods, including the so-called fast direct Toeplitz solvers [17,18], with a computational cost of $O(n^2)$, and iterative procedures based on the conjugate gradient algorithm with a suitable preconditioner, which can improve the cost to $O(n\log n)$ [19]. However, these direct algorithms guarantee only weak stability [20], i.e., that for well-conditioned problems the computed and the exact solutions are close. The same can be said about preconditioned conjugate gradient methods, since their speed of convergence and stability heavily depend on the condition number of the given matrix.
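The permutation argument can be checked directly; a minimal sketch (the example matrix is arbitrary):

```python
import numpy as np

# A Hankel matrix is constant along anti-diagonals.
H = np.array([[1, 2, 3],
              [2, 3, 4],
              [3, 4, 5]])

J = np.fliplr(np.eye(3, dtype=int))  # anti-identity (exchange) matrix
T = J @ H                            # left-multiplying reverses the row order

# T is now Toeplitz: constant along ordinary diagonals.
print(T)
```

Since J is orthogonal, solving Hx = b is equivalent to solving the Toeplitz system Tx = Jb.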
In this work, the generalized versions of Filbert and Lilbert matrices are addressed by means of a Neville elimination process, giving explicit expressions for their multipliers and pivots. Following [21], this allows us to determine a bidiagonal factorization of the considered matrices. As a byproduct, formulae for the determinants of both classes of matrices are derived. Moreover, numerical experiments for the above-mentioned, heavily ill-conditioned algebraic problems have been performed, showing errors of the order of the machine precision, in stark contrast with traditional numerical methods.
The paper is organized as follows: to keep the paper as self-contained as possible, Section 2 recalls basic concepts and results related to Neville elimination and bidiagonal factorizations of nonsingular matrices. Filbert and Lilbert matrices are considered in Section 3 and Section 4, respectively, where the pivots and multipliers of their Neville elimination are obtained and a remarkable analogy with those of quantum Hilbert matrices is illustrated. Finally, Section 5 presents a series of numerical experiments in which the obtained bidiagonal factorizations attain errors of the order of the machine precision, while classical numerical methods miss the correct solution by orders of magnitude.
2. Notations and Auxiliary Results
As advanced in the Introduction, the main result of this paper, gathered in the following sections, consists in the computation of the bidiagonal factorization of Filbert and Lilbert matrices, which is possible by following a Neville elimination process. Let us then begin by recalling some basic results concerning Neville elimination (NE). First of all, it is an algorithm that, given a real-valued matrix $A=(a_{ij})_{1\le i,j\le n}$, obtains an upper-triangular matrix $U$ after $n$ iterations. More specifically, intermediate steps, labeled by $A^{(k)}=(a^{(k)}_{ij})$ for $k=2,\ldots,n$, are obtained from the previous iteration $A^{(k-1)}$, making zeros below the diagonal in the $(k-1)$th column. To do so, the initial step is by definition $A^{(1)}:=A$, whereas the entries of $A^{(k+1)}$ for every $k=1,\ldots,n-1$ are obtained through the subsequent recursion formula
$$a^{(k+1)}_{ij}=\begin{cases}a^{(k)}_{ij}-\dfrac{a^{(k)}_{ik}}{a^{(k)}_{i-1,k}}\,a^{(k)}_{i-1,j}, & k+1\le i,j\le n \text{ and } a^{(k)}_{i-1,k}\ne 0,\\[2mm] a^{(k)}_{ij}, & \text{otherwise.}\end{cases}\qquad(1)$$
In the last iteration of this process, the matrix $U:=A^{(n)}$ is obtained, which, as mentioned before, is upper-triangular. In this process, the entries corresponding to the $j$th column at the $j$th step, i.e.,
$$p_{ij}:=a^{(j)}_{ij},\qquad 1\le j\le i\le n,\qquad(2)$$
are called the pivots (or $i$th diagonal pivots in the case $i=j$) of the NE process. The following quotient is also of relevance:
$$m_{ij}:=\frac{a^{(j)}_{ij}}{a^{(j)}_{i-1,j}},\qquad 1\le j<i\le n,\qquad(3)$$
and is known as the $(i,j)$ multiplier.
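A direct transcription of this elimination, in exact rational arithmetic and under the assumption that no row exchanges are needed, might read as follows (the function names are ours):

```python
from fractions import Fraction

def neville_elimination(A):
    """Neville elimination without row exchanges.

    Each row is combined with the row immediately above it (not with
    the pivot row, as in Gaussian elimination). Returns the diagonal
    pivots and the multipliers m[(i, k)] = a_{ik} / a_{i-1,k}
    (0-based indices).
    """
    n = len(A)
    A = [row[:] for row in A]          # work on a copy
    mult = {}
    for k in range(n - 1):             # column being zeroed out
        for i in range(n - 1, k, -1):  # bottom-up: row i minus row i-1
            m = A[i][k] / A[i - 1][k]
            mult[(i, k)] = m
            for j in range(k, n):
                A[i][j] -= m * A[i - 1][j]
    pivots = [A[i][i] for i in range(n)]
    return pivots, mult

# Example: the 3x3 Hilbert matrix in exact arithmetic.
H3 = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]
pivots, mult = neville_elimination(H3)
```

Since every elementary operation has unit determinant, the product of the diagonal pivots equals det A (cf. Lemma 1 below): for the 3x3 Hilbert matrix the pivots are 1, 1/12, 1/180, whose product is 1/2160.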
By applying a second Neville elimination to $U^T$, the transpose of the resulting upper-triangular matrix, a diagonal matrix is obtained; this process is known as a complete Neville elimination. When in this process there is no need to perform any row exchanges, the matrix A is said to verify the WRC condition (see, e.g., [21]). In Theorem 2.2 of [21], it is proved that a real-valued nonsingular matrix A verifies the WRC condition if and only if it can be expressed in a unique way as the following product,
$$A=F_{n-1}F_{n-2}\cdots F_1\,D\,G_1G_2\cdots G_{n-1},\qquad(4)$$
where $F_i$ and $G_i$ are the lower- and upper-triangular, respectively, bidiagonal matrices given by
$$F_i=\begin{pmatrix}1\\&\ddots\\&&1\\&&m_{i+1,1}&1\\&&&\ddots&\ddots\\&&&&m_{n,n-i}&1\end{pmatrix},\qquad G_i^T=\begin{pmatrix}1\\&\ddots\\&&1\\&&\widetilde m_{i+1,1}&1\\&&&\ddots&\ddots\\&&&&\widetilde m_{n,n-i}&1\end{pmatrix},\qquad(5)$$
while the entries of the diagonal matrix D are the diagonal pivots obtained in the NE of A. In fact, the NE processes of A and $U^T$ also give the nondiagonal entries of $F_i$ and $G_i$, since the values appearing in (5) are precisely the multipliers of these algorithms as defined in (3).
Another interesting result is provided by Theorem 2.2 of [22]. Taking advantage of the diagonal pivots and multipliers obtained in the NE of A, it is possible to formulate the inverse as
where the matrices and are very much like their counterparts and , but with a different arrangement of the multipliers, being defined as
It is worth noting that more general classes of matrices can be factorized as in (4), see [23].
Hereafter, the convention adopted by Koev in [24] to store the coefficients of the bidiagonal decomposition (4) of A in a matrix, denoted $BD(A)$, is followed. The entries of this matrix form are given by
$$\left(BD(A)\right)_{ij}=\begin{cases}m_{ij}, & i>j,\\ \widetilde m_{ji}, & i<j,\\ p_{ii}, & i=j.\end{cases}\qquad(7)$$
Remark 1.
Provided that the bidiagonal factorization of a nonsingular matrix exists, then, using the factorization (4), it follows that
$$BD(A^T)=BD(A)^T.$$
Furthermore, in the case of A being symmetric, we have that $\widetilde m_{ij}=m_{ij}$ for $1\le j<i\le n$ and, as a consequence, $BD(A)$ is a symmetric matrix.
It is worth noting that, thanks to the structure of the factors in the bidiagonal decomposition (4) of a nonsingular matrix A, computing its determinant only requires the product of the diagonal pivots obtained in the NE of A, since the determinant of each of the factors $F_i$ and $G_i$ is trivially one. This result will be used later in the manuscript to obtain the determinants of generalized Filbert and Lilbert matrices, and it is summarized in the following lemma.
Lemma 1.
Consider a nonsingular matrix $A\in\mathbb{R}^{n\times n}$. If the bidiagonal decomposition (4) of A exists, then
$$\det A=p_{11}\,p_{22}\cdots p_{nn},$$
where $p_{ii}$, $1\le i\le n$, are the diagonal pivots of the Neville elimination of A given by (2).
3. Bidiagonal Factorization of Filbert Matrices
Let us recall that the sequence of Fibonacci numbers $0,1,1,2,3,5,8,13,\ldots$ is given by $F_0=0$, $F_1=1$, with the recursion formula
$$F_{n+2}=F_{n+1}+F_n,\qquad n\ge 0.$$
Filbert matrices are defined in terms of the Fibonacci sequence as
$$F=\left(\frac{1}{F_{i+j-1}}\right)_{1\le i,j\le n},\qquad(11)$$
and they have the property, shared with Hilbert matrices, of having an inverse with integer entries [6]. In fact, an explicit formula for the entries of the inverse matrices is proved using computer algebra. This formula shows a remarkable analogy with the corresponding formula for the elements of the inverse of Hilbert matrices, in the sense that it can be obtained by replacing some binomial coefficients by the analogous Fibonomial coefficients, introduced in [25] as follows:
$$\binom{n}{k}_F:=\frac{F_nF_{n-1}\cdots F_{n-k+1}}{F_kF_{k-1}\cdots F_1},\qquad 0\le k\le n,$$
with the usual convention that empty products are defined as one. Let us observe that by defining $F_k!:=F_1F_2\cdots F_k$ (so that $F_0!:=1$), we can also write
$$\binom{n}{k}_F=\frac{F_n!}{F_k!\,F_{n-k}!}.$$
The following identities for Fibonomial coefficients hold
and taking into account the following recursion formula
$$\binom{n}{k}_F=F_{n-k+1}\binom{n-1}{k-1}_F+F_{k-1}\binom{n-1}{k}_F$$
(see [25]), it can be clearly seen that the Fibonomial coefficients are integers. It can also be checked that Fibonomial coefficients satisfy the following useful identities:
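The integrality claim can be checked experimentally; the following sketch (function names are ours) computes Fibonomial coefficients directly from their defining quotient of Fibonacci products:

```python
from fractions import Fraction

def fib(n):
    """F_n with F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fibonomial(n, k):
    """Fibonomial coefficient [n over k]_F as an exact rational."""
    num = Fraction(1)
    for i in range(k):                 # F_n * F_{n-1} * ... * F_{n-k+1}
        num *= fib(n - i)
    den = Fraction(1)
    for i in range(1, k + 1):          # F_1 * F_2 * ... * F_k
        den *= fib(i)
    return num / den
```

For instance, [5 over 2]_F = F_5 F_4 / (F_1 F_2) = 15, and every quotient reduces to an integer.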
Now, we consider the following generalization of the Filbert matrix described in (11). Given , let with
Clearly, for , coincides with the Filbert matrix (11).
There are many nice equalities relating the Fibonacci numbers with each other. In this paper, we use the following identity:
$$F_{n+i}F_{n+j}-F_nF_{n+i+j}=(-1)^nF_iF_j,\qquad(20)$$
which is known as Vajda's identity. On the other hand, it is well known that Fibonacci numbers $F_n$, $n\ge 0$, satisfy the following property:
$$\lim_{n\to\infty}\frac{F_{n+1}}{F_n}=\phi,$$
where $\phi=(1+\sqrt5)/2$ is the "golden ratio". Moreover, using the Binet form of Fibonacci numbers,
$$F_n=\frac{\phi^n-\psi^n}{\sqrt5},\qquad \psi:=\frac{1-\sqrt5}{2}=-\frac{1}{\phi},$$
we can write $F_n=\phi^{\,n-1}[n]_q$ for $q=-1/\phi^2$.
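The Binet form can be verified numerically in floating point; a small sketch, taking $q=-1/\phi^2$ as the q-integer parameter linking Fibonacci numbers to q-integers:

```python
import math

phi = (1 + math.sqrt(5)) / 2        # golden ratio
psi = (1 - math.sqrt(5)) / 2        # conjugate root, equal to -1/phi
q = -1 / phi**2

def fib_binet(n):
    """Binet form F_n = (phi^n - psi^n) / sqrt(5)."""
    return (phi**n - psi**n) / math.sqrt(5)

def q_integer(n):
    """q-integer [n]_q = (1 - q^n) / (1 - q)."""
    return (1 - q**n) / (1 - q)

# The relation F_n = phi^(n-1) * [n]_q connects Filbert and q-Hilbert matrices.
```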
The previous equalities illustrate a clear relation between q-Hilbert and Filbert matrices that is going to be reflected in the obtained expression for the pivots and multipliers of the Neville elimination and, consequently, their bidiagonal factorization (4) (cf. [11]).
Theorem 1.
Given , let be the Filbert matrix given by (19). The multipliers of the Neville elimination of are given by
Moreover, the diagonal pivots of the Neville elimination of are given by
and can be computed as follows:
Proof.
Let , , be the matrices obtained after steps of the Neville elimination procedure for . Now, by induction on , we see that
It can be easily checked that ; thus, using the Vajda identity (20) with , and , we can write
and (24) follows for . If (24) holds for some , we have
for . Taking into account that by (1), and the following identity, obtained from (18),
we can write
with
for . Taking into account (17) and (16), respectively, we have
and from (25), we derive
On the other hand, by considering the Vajda identity in (20) with , and , it can be checked that
and then, from (26), we can write
for . Finally, taking into account (17) and (16), respectively, we can write
and conclude that
and (24) holds for .
Now, by (2) and (24), the pivots of the Neville elimination of H satisfy
For the particular case , we obtain
and (22) follows. It can be easily checked that and
confirming Formula (23).
Let us observe that since the pivots of the Neville elimination of are nonzero, this elimination can be performed without row exchanges.
On the other hand, using Lemma 1 and Formula (22) for the diagonal pivots, the determinant of Filbert matrices can be expressed as follows:
which is equivalent to the formula obtained in Theorem 5 of [12].
4. Bidiagonal Factorization of Lilbert Matrices
Let us recall that Lucas numbers are defined recursively in a similar way to Fibonacci numbers, just changing the value of the initial element of the sequence:
$$L_0=2,\quad L_1=1,\qquad L_{n+2}=L_{n+1}+L_n,\quad n\ge 0.$$
The analogous Lilbonomial coefficients are
$$\binom{n}{k}_L:=\frac{L_nL_{n-1}\cdots L_{n-k+1}}{L_kL_{k-1}\cdots L_1},\qquad 0\le k\le n,\qquad(31)$$
with the usual convention that empty products are defined as one.
Let us observe that using the Binet form of Lucas numbers, we can write
$$L_n=\phi^n+\psi^n=\phi^n\left(1+q^n\right),$$
for $q=-1/\phi^2$ and $n\ge 0$.
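As a quick check of the Lucas sequence and its Binet form (a sketch; the initial values follow the recursion above):

```python
import math

def lucas(n):
    """Lucas numbers with L_0 = 2, L_1 = 1."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

# Binet form: L_n = phi^n + psi^n
binet = [phi**n + psi**n for n in range(11)]
```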
for and . Moreover, as for Fibonacci numbers, in the literature, one can find many interesting equalities relating the Lucas numbers with each other, as well as Lucas and Fibonacci numbers. In this section, we use the following Vajda-type equality
proved in Theorem 5 of [26], to derive the bidiagonal factorization of the Lilbert matrix with
Theorem 2.
Given , let be the Lilbert matrix given by (33). The multipliers of the Neville elimination of are given by
Moreover, the diagonal pivots of the Neville elimination of are
and can be computed as follows
Proof.
The proof is analogous to that of Theorem 1 for the computation of the pivots and multipliers of the Neville elimination of Filbert matrices and, for this reason, we only provide a sketch. Let , , be the matrices obtained after steps of the Neville elimination procedure for . Using an inductive reasoning, similar to that of Theorem 1, the Vajda-type equality (32) and the definition (31) of Lilbonomial coefficients, the entries of the intermediate matrices of the Neville elimination can be written as follows:
for , and then the pivots of the Neville elimination are
Identities (35) and (36) are deduced by considering $i=j$ in (38). Moreover, Formula (34) for the multipliers is derived by taking into account that $m_{ij}=p_{ij}/p_{i-1,j}$ (see (3)). □
Using Lemma 1 and Formula (35) for the diagonal pivots, the determinant of Lilbert matrices can be expressed as follows
which is equivalent to the formula obtained in Theorem 1.17 of [13].
5. Numerical Experiments
In this section, a collection of numerical experiments is presented, comparing the algorithms that take advantage of the bidiagonal decompositions presented in this work with the best standard routines. It should be noted that the cost of computing the matrix form (7) of the bidiagonal decomposition (4) is of order $O(n^2)$ elementary operations, both for Filbert matrices (see (29)) and for Lilbert matrices (see (39)).
We considered several Filbert matrices , for and , as well as Lilbert matrices , for and , with dimension . To keep the notation as contained as possible, in what follows, Filbert and Lilbert matrices are denoted as F and L, respectively, and their bidiagonal decompositions by and .
The two-norm condition number of all considered matrices was computed in Mathematica. As can be easily seen in Figure 1, the condition number grows dramatically with the size of the matrix. As mentioned at the beginning of the paper, this bad conditioning prevents standard routines from giving accurate solutions to any algebraic problem, even for relatively small-sized problems.
Figure 1.
The 2-norm conditioning of Filbert matrices F and Lilbert matrices L.
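This growth is easy to reproduce; the following sketch (function name is ours) estimates the two-norm condition number of Filbert matrices in double precision. Note that such estimates themselves lose reliability once the condition number approaches the reciprocal of the unit roundoff:

```python
import numpy as np

def filbert_float(n):
    """Floating-point n x n Filbert matrix with entries 1/F_{i+j-1}."""
    F = [1, 1]                      # F_1, F_2
    while len(F) < 2 * n - 1:
        F.append(F[-1] + F[-2])
    return np.array([[1.0 / F[i + j] for j in range(n)] for i in range(n)])

# Condition numbers grow rapidly with the dimension.
for n in (4, 6, 8, 10):
    print(n, np.linalg.cond(filbert_float(n)))
```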
To analyze the behavior of the bidiagonal approach and confront it with standard direct methods, several numerical experiments were performed, concerning both Filbert and Lilbert matrices. The factorizations obtained in Section 3 and Section 4 were used as an input argument of the Matlab functions of the TNTool package, made available in [27]. In particular, the following functions were used, each corresponding to an algebraic problem:
- TNInverseExpand provides $A^{-1}$, with an $O(n^2)$ computational cost (see [22]).
- TNSolve solves the system $Ax=b$, with an $O(n^2)$ cost.
- TNSingularValues obtains the singular values of A, with an $O(n^3)$ cost.
For each problem, the approximate solution obtained by the TNTool subroutine was compared with that of the classical method provided by Matlab. Relative errors in both cases were computed by comparing with the exact solution given by Mathematica, which makes use of 100-digit arithmetic.
Computation of inverses. In this experiment, we compared the accuracy in determining the inverse of each considered matrix with two methods: the bidiagonal factorization as an input to the TNInverseExpand routine and the standard Matlab command inv. It is clear from Figure 2 that our procedure obtained great accuracy in every analyzed case, whereas the results obtained with Matlab failed dramatically for moderate sizes of the matrices.
Figure 2.
Relative error of the approximations to the inverses of Filbert and Lilbert matrices.
Resolution of linear systems. For each of the matrices considered, in this experiment, the solution of the corresponding linear system was computed, with right-hand sides whose entries are random nonnegative integer values. This was again performed in two ways: by using the proposed bidiagonal factorization as an input of the TNSolve routine and by the standard Matlab backslash (∖) command. As before, the standard Matlab routine could not overcome the ill-conditioned nature of the analyzed matrices, in contrast with the errors of the order of the machine precision achieved by the bidiagonal approach, as depicted in Figure 3.
Figure 3.
Relative error of the approximations to the solutions of the linear systems with Filbert and Lilbert coefficient matrices and random nonnegative integer right-hand sides.
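The contrast between exact and double-precision solving is easy to reproduce outside Matlab; the sketch below solves a Filbert system exactly with rational Gaussian elimination (our own helper, not the TNSolve algorithm) and compares it against a plain floating-point solve:

```python
from fractions import Fraction
import numpy as np

def filbert_exact(n):
    """n x n Filbert matrix in exact rational arithmetic."""
    F = [1, 1]
    while len(F) < 2 * n - 1:
        F.append(F[-1] + F[-2])
    return [[Fraction(1, F[i + j]) for j in range(n)] for i in range(n)]

def solve_exact(A, b):
    """Gaussian elimination with back substitution over the rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for k in range(n):
        p = next(i for i in range(k, n) if M[i][k] != 0)
        M[k], M[p] = M[p], M[k]                    # nonzero pivot
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum((M[i][j] * x[j] for j in range(i + 1, n)), Fraction(0))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

n = 10
A = filbert_exact(n)
b = [Fraction(1)] * n
x_exact = solve_exact(A, b)

# The same system in double precision.
A_f = np.array([[float(v) for v in row] for row in A])
x_float = np.linalg.solve(A_f, np.ones(n))
rel_err = max(abs(float(e) - f) for e, f in zip(x_exact, x_float)) \
        / max(abs(float(e)) for e in x_exact)
print(f"relative error of the float solve: {rel_err:.2e}")
```

The rational solution satisfies the system exactly, whereas the floating-point solve inherits the ill-conditioning of the matrix.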
Computation of singular values. The relative errors in determining the smallest singular value of both Filbert and Lilbert matrices are illustrated in this experiment. These were computed both with the standard Matlab command svd and by providing the corresponding bidiagonal decomposition as an input argument to TNSingularValues. It follows from Figure 4 that our method accurately determined the smallest singular value in every studied case, while the results of the standard Matlab command svd were very far from the exact solution even for small sizes of the considered matrices.
Figure 4.
Relative error of the approximations to the smallest singular value of Filbert matrices F and Lilbert matrices L.
6. Conclusions
The paper analyzed generalized versions of Filbert and Lilbert matrices, based on Fibonacci and Lucas numbers, respectively. Relying on Neville elimination, their bidiagonal factorizations were obtained explicitly, which also led to formulae for the corresponding determinants. Numerical experiments were provided, exhibiting a great level of accuracy for the routines that took the bidiagonal decomposition of the matrices as input, even in notably ill-conditioned cases, while the results of standard procedures were wrong by orders of magnitude. Future prospects include the study of the condition number of these matrices, which could offer some insight into the excellent experimental results obtained.
Author Contributions
Conceptualization, Y.K., E.M., J.M.P., E.R.-A. and B.R.; methodology, Y.K., E.M., J.M.P., E.R.-A. and B.R.; investigation, Y.K., E.M., J.M.P., E.R.-A. and B.R.; writing—original draft, Y.K., E.M., J.M.P., E.R.-A. and B.R.; writing—review and editing, Y.K., E.M., J.M.P., E.R.-A. and B.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially supported by Spanish research grants PID2022-138569NB-I00 (MCI/AEI) and RED2022-134176-T (MCI/AEI) and by Gobierno de Aragón (E41_23R, S60_23R).
Data Availability Statement
The data and codes used in this work are available upon request.
Acknowledgments
We thank the editors and the anonymous referees for their valuable comments and suggestions, which have improved this paper.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Kramer, B.; Gorodetsky, A.A. System Identification via CUR-Factored Hankel Approximation. SIAM J. Sci. Comput. 2018, 40, A848–A866.
- Pan, V. Structured Matrices and Polynomials: Unified Superfast Algorithms; Birkhäuser: Boston, MA, USA, 2001.
- Bini, D.; Pan, V. Improved parallel computations with Toeplitz-like and Hankel-like matrices. Linear Algebra Appl. 1993, 189, 3–29.
- Datta, B.N. Application of Hankel matrices of Markov Parameters to the solutions of the Routh–Hurwitz and the Schur–Cohn problems. J. Math. Anal. Appl. 1979, 68, 276–290.
- Alotaibi, A.; Mursaleen, M. Applications of Hankel and regular matrices in Fourier series. In Abstract and Applied Analysis; Hindawi: New York, NY, USA, 2013; Volume 2013.
- Richardson, T.M. The Filbert matrix. Fibonacci Quart. 2001, 39, 268–275.
- Hosny, K.M.; Kamal, S.T.; Darwish, M.M.; Papakostas, G.A. New Image Encryption Algorithm Using Hyperchaotic System and Fibonacci Q-Matrix. Electronics 2021, 10, 1066.
- Huang, X.; Yao, S. Solidification performance of new trapezoidal longitudinal fins in latent heat thermal energy storage. Case Stud. Therm. Eng. 2021, 26, 101110.
- Benavoli, A.; Chisci, L.; Farina, A. Fibonacci sequence, golden section, Kalman filter and optimal control. Signal Process. 2009, 89, 1483–1488.
- Andersen, J.E.; Berg, C. Quantum Hilbert matrices and orthogonal polynomials. J. Comput. Appl. Math. 2009, 233, 723–729.
- Mainar, E.; Peña, J.M.; Rubio, B. Accurate bidiagonal factorization of quantum Hilbert matrices. Linear Algebra Appl. 2024, 681, 131–149.
- Kiliç, E.; Prodinger, H. A generalized Filbert matrix. Fibonacci Quart. 2010, 48, 29–33.
- Kiliç, E.; Prodinger, H. The generalized Lilbert matrix. Period. Math. Hung. 2016, 73, 62–72.
- Beckermann, B. The condition number of real Vandermonde, Krylov and positive definite Hankel matrices. Numer. Math. 2000, 85, 553–577.
- Córdova Yévenes, A.; Gautschi, W.; Ruscheweyh, S. Vandermonde matrices on the circle: Spectral properties and conditioning. Numer. Math. 1990, 57, 577–591.
- Tyrtyshnikov, E. How bad are Hankel matrices? Numer. Math. 1994, 67, 261–269.
- Chan, R.H.; Ng, M.K. Conjugate gradient methods for Toeplitz systems. SIAM Rev. 1996, 38, 427–482.
- Heinig, G.; Rost, K. Algebraic Methods for Toeplitz-like Matrices and Operators; Birkhäuser Verlag: Basel, Switzerland, 1984; Volume 13, 212p.
- Serra-Capizzano, S. Preconditioning strategies for Hermitian Toeplitz systems with nondefinite generating functions. SIAM J. Matrix Anal. Appl. 1996, 17, 1007–1019.
- Bunch, J.R. The weak and strong stability of algorithms in numerical linear algebra. Linear Algebra Appl. 1987, 88–89, 49–66.
- Gasca, M.; Peña, J.M. On factorizations of totally positive matrices. In Total Positivity and Its Applications; Gasca, M., Micchelli, C.A., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 109–130.
- Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308.
- Barreras, A.; Peña, J.M. Accurate computations of matrices with bidiagonal decomposition using methods for totally positive matrices. Numer. Linear Algebra Appl. 2013, 20, 413–424.
- Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23.
- Knuth, D.E. The Art of Computer Programming, Volume I: Fundamental Algorithms, 2nd ed.; Addison-Wesley: Reading, MA, USA, 1973.
- Keskin, R.; Demirtürk, B. Some new Fibonacci and Lucas identities by matrix methods. Int. J. Math. Educ. Sci. Technol. 2010, 41, 379–387.
- Koev, P. TNTool. Available online: http://math.mit.edu/~plamen/software/TNTool.html (accessed on 4 February 2024).