1. Introduction
The Stirling numbers form combinatorial sequences with remarkable properties and many applications in Combinatorics, Number Theory, and Special Functions, among other fields. They were so named by Nielsen (cf. [1]) in honor of James Stirling, who computed them in 1730 (cf. [2]). Interested readers are referred to [3,4,5] for a nice introduction and many properties of Stirling numbers. In the literature, many kinds of generalizations of Stirling numbers have been introduced and studied ([3,6]). In this paper, we focus on the r-Stirling numbers, whose combinatorial and algebraic properties, in most cases, generalize those of the regular Stirling numbers.
The r-Bell polynomials (see [7]) are an efficient mathematical tool for combinatorial analysis (cf. [5]) and can be applied in many different contexts: in the evaluation of some integrals and alternating sums [8,9], in the analysis of the internal relations for the orthogonal invariants of a positive compact operator [10], in the Blissard problem [5], in the Newton sum rules for the zeros of polynomials [11], in the recurrence relations for a class of Freud-type polynomials [12], and in many other subjects.
In this paper, a class of r-Stirling matrices is introduced. It will be shown that these matrices determine the linear transformations between monomial and r-Bell polynomial bases and are totally positive, so that all of their minors are nonnegative. Several applications of the class of totally positive matrices can be found in [13,14,15]. Totally positive matrices can be expressed as a particular product of bidiagonal matrices (see [16,17]). This bidiagonal decomposition provides a representation of totally positive matrices that exploits their total positivity and allows us to derive algorithms to high relative accuracy for the resolution of relevant algebraic problems, such as the computation of their inverses, eigenvalues, and singular values, or the solution of some systems of linear equations.
A major issue in Numerical Linear Algebra is to obtain algorithms to high relative accuracy, because the relative errors of their computations are then of the order of the machine precision and are not drastically affected by the dimension or conditioning of the considered matrices. Consequently, the design of algorithms adapted to the structure of totally positive matrices and achieving computations to high relative accuracy has attracted the interest of many researchers (see [18,19,20,21,22,23,24]).
Let us recall that many interpolation, numerical quadrature, and approximation problems require algebraic computations with collocation matrices of the considered bases. When solving Taylor interpolation problems, Wronskian matrices have to be considered. On the other hand, Gramian matrices define linear transformations for obtaining orthogonal bases from nonorthogonal ones. Moreover, the inversion of Gramian matrices is also required when approximating curves, in the least-squares sense, by linear combinations of control points and basis functions. The large range of applications of r-Bell polynomials has motivated us to exploit the total positivity property of r-Stirling matrices to design fast and accurate algorithms for the resolution of algebraic problems with collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases.
The layout of this paper is as follows. Section 2 summarizes some notations and auxiliary results. Section 3 focuses on the class of r-Stirling matrices; their bidiagonal decomposition is provided and their total positivity property is deduced. In Section 4, r-Bell polynomials are considered. Collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases are shown to be totally positive. Then, using the proposed factorization for r-Stirling matrices, the resolution of the above-mentioned algebraic problems can be achieved to high relative accuracy. Finally, Section 5 presents numerical experiments confirming the accuracy of the proposed methods for the computation of eigenvalues, singular values, and inverses, and for the solution of some linear systems related to collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases.
2. Notations and Auxiliary Results
Let us recall that a matrix is totally positive (respectively, strictly totally positive) if all its minors are nonnegative (respectively, positive).
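For small matrices, this definition can be checked by brute force, enumerating every minor; the following Python sketch (our code, not from the paper) does exactly that:

```python
from itertools import combinations

def det(M):
    # Determinant by Laplace expansion along the first row
    # (fine for the tiny matrices used in this illustration).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_positive(A, strict=False, tol=1e-12):
    # Enumerate every minor of A: all nonnegative -> totally positive,
    # all positive -> strictly totally positive.
    n, m = len(A), len(A[0])
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                d = det([[A[i][j] for j in cols] for i in rows])
                if d < -tol or (strict and d <= tol):
                    return False
    return True

# A Vandermonde matrix at increasing positive nodes is strictly totally positive.
V = [[t ** j for j in range(3)] for t in (1.0, 2.0, 3.0)]
print(is_totally_positive(V, strict=True))  # True
```

Such an exhaustive check is only feasible for very small matrices; the characterization via Neville elimination recalled below avoids it.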
The Neville elimination is an alternative procedure to Gaussian elimination (see [16,17,25]). Given a nonsingular matrix $A=(a_{i,j})_{1\le i,j\le n}$, its Neville elimination calculates a sequence of matrices $A^{(1)}:=A$, $A^{(2)},\ldots,A^{(n)}$, so that the elements of $A^{(k)}$ below the main diagonal of the $k-1$ first columns are zeros and then $A^{(n)}=:U$ is upper triangular. From $A^{(k)}$, the Neville elimination computes $A^{(k+1)}$ as follows,
$$a_{i,j}^{(k+1)}:=\begin{cases}a_{i,j}^{(k)}, & 1\le i\le k,\\[1mm] a_{i,j}^{(k)}-\dfrac{a_{i,k}^{(k)}}{a_{i-1,k}^{(k)}}\,a_{i-1,j}^{(k)}, & k+1\le i\le n,\ \text{if }a_{i-1,k}^{(k)}\ne 0,\\[1mm] a_{i,j}^{(k)}, & k+1\le i\le n,\ \text{if }a_{i-1,k}^{(k)}=0,\end{cases}$$
with $1\le k\le n-1$. The element $p_{i,j}:=a_{i,j}^{(j)}$ is called the $(i,j)$ pivot and we say that $p_{i,i}$ is the $i$-th diagonal pivot of the Neville elimination of $A$. Whenever all pivots are nonzero, no row exchanges are needed in the Neville elimination procedure. The value
$$m_{i,j}:=\frac{a_{i,j}^{(j)}}{a_{i-1,j}^{(j)}},\qquad 1\le j<i\le n,$$
is the $(i,j)$ multiplier of the Neville elimination of $A$.
The Neville elimination procedure of a nonsingular matrix is illustrated with the following example:
The multipliers and the diagonal pivots are
,
,
,
,
, and
.
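The elimination procedure just described can be transcribed almost literally; in this sketch (our code, 0-based indices, assuming no row exchanges are needed) each row has a multiple of the row immediately above subtracted, column by column, and the multipliers and diagonal pivots are recorded:

```python
def neville_elimination(A):
    """Neville elimination of a square matrix, assuming no row exchanges
    are needed (as holds for nonsingular totally positive matrices).
    Returns the multipliers m[(i, k)] (row i, step k) and the diagonal pivots."""
    n = len(A)
    U = [row[:] for row in A]          # work on a copy
    multipliers = {}
    for k in range(n - 1):             # create zeros below the diagonal of column k
        for i in range(n - 1, k, -1):  # bottom up: subtract a multiple of the row above
            m = U[i][k] / U[i - 1][k]
            multipliers[(i, k)] = m
            for j in range(n):
                U[i][j] -= m * U[i - 1][j]
    pivots = [U[i][i] for i in range(n)]
    return multipliers, pivots

# Vandermonde matrix at the nodes 1, 2, 3.
A = [[1.0, 1.0, 1.0],
     [1.0, 2.0, 4.0],
     [1.0, 3.0, 9.0]]
mult, piv = neville_elimination(A)
print(piv)  # [1.0, 1.0, 2.0]
```

For this matrix all multipliers equal 1 and the diagonal pivots are 1, 1, 2, whose product recovers the determinant.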
By Theorem 4.2 and the arguments of p. 116 of [17], a nonsingular totally positive matrix $A$ can be written as follows,
$$A=F_{n-1}F_{n-2}\cdots F_1\,D\,G_1\cdots G_{n-2}G_{n-1},\qquad(5)$$
where $F_i$ and $G_i$, $i=1,\ldots,n-1$, are the totally positive, lower and upper triangular bidiagonal matrices with the unit diagonal described by (6), and $D$ is a diagonal matrix with positive diagonal entries. In fact, the elements in the diagonal of $D$ are the diagonal pivots of the Neville elimination of $A$. On the other hand, the off-diagonal elements $m_{i,j}$ of the factors $F_i$ and $\widetilde m_{i,j}$ of the factors $G_i$ are the multipliers of the Neville elimination of $A$ and $A^T$, respectively.
Using the results in [16,17,25] and Theorem 2.2 of [22], the Neville elimination of $A$ can be used to derive the bidiagonal factorization (7) of its inverse matrix, where the factors are the lower and upper triangular bidiagonal matrices with the form described in (6), obtained by replacing the off-diagonal entries $m_{i,j}$ and $\widetilde m_{i,j}$ by $-m_{i,j}$ and $-\widetilde m_{i,j}$, respectively. The structure of the bidiagonal matrix factors in (7) is described as follows,
Let us also observe that, if a matrix $A$ is nonsingular and totally positive, its transpose $A^T$ is also nonsingular and totally positive, and
$$A^T=G_{n-1}^T\cdots G_1^T\,D\,F_1^T\cdots F_{n-1}^T,$$
where $F_i$ and $G_i$, $i=1,\ldots,n-1$, are the lower and upper triangular bidiagonal matrices in (5).
The total positivity property of a given matrix can be characterized by analyzing the sign of the diagonal pivots and multipliers of its Neville elimination, as shown in Theorem 4.1 and Corollary 5.5 of [25] and the arguments of p. 116 of [17].
Theorem 1. A nonsingular matrix $A$ is totally positive (respectively, strictly totally positive) if and only if the Neville elimination of $A$ and $A^T$ can be performed without row exchanges, all the multipliers of the Neville elimination of $A$ and $A^T$ are nonnegative (respectively, positive), and the diagonal pivots of the Neville elimination of $A$ are all positive.
In [19], the bidiagonal factorization (5) of a nonsingular and totally positive matrix $A$ is represented by defining a matrix $BD(A)$ such that
$$(BD(A))_{i,j}:=\begin{cases}m_{i,j}, & i>j,\\ p_{i,i}, & i=j,\\ \widetilde m_{j,i}, & i<j.\end{cases}$$
This representation will allow us to define algorithms adapted to the totally positive structure and provide accurate computations with the matrix.
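The factorization (5) can be multiplied back out from this compact representation. The following sketch is our 0-based transcription, under the convention described above (multipliers of $A$ below the diagonal, diagonal pivots on the diagonal, multipliers of $A^T$ above it), with the multipliers of step $k$ placed on the off-diagonals of the factors $F_k$ and $G_k$:

```python
def expand_bd(B):
    """Multiply out A = F_{n-1}...F_1 * D * G_1...G_{n-1} from the compact
    table B, which stores the multipliers of A below the diagonal, the
    diagonal pivots of A on the diagonal, and the multipliers of A^T above."""
    n = len(B)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def identity():
        return [[float(i == j) for j in range(n)] for i in range(n)]

    A = identity()
    for k in range(n - 1, 0, -1):          # F_{n-1} ... F_1
        F = identity()
        for i in range(k, n):
            F[i][i - 1] = B[i][i - k]      # multiplier of A placed at step k
        A = matmul(A, F)
    D = identity()
    for i in range(n):
        D[i][i] = B[i][i]                  # diagonal pivots
    A = matmul(A, D)
    for k in range(1, n):                  # G_1 ... G_{n-1}
        G = identity()
        for i in range(k, n):
            G[i - 1][i] = B[i - k][i]      # multiplier of A^T placed at step k
        A = matmul(A, G)
    return A

# Compact table of the Vandermonde matrix at the nodes 1, 2, 3:
# all multipliers equal 1, except the (2,3) entry, and pivots are 1, 1, 2.
B = [[1.0, 1.0, 1.0],
     [1.0, 1.0, 2.0],
     [1.0, 1.0, 2.0]]
print(expand_bd(B))  # [[1.0, 1.0, 1.0], [1.0, 2.0, 4.0], [1.0, 3.0, 9.0]]
```

The roundtrip with the Neville elimination of the same Vandermonde matrix confirms the placement of the multipliers in this small example.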
Let us observe that, for the matrix
A in (
4), the bidiagonal decomposition (
5) can be written as follows:
Moreover, since the diagonal pivots and multipliers of the Neville elimination of
A and
are all positive, we conclude that
A is strictly totally positive. The above factorization can be represented in the following matrix form:
Let us also recall that a real value $x$ is computed to high relative accuracy whenever the computed value $\tilde x$ satisfies
$$\frac{|x-\tilde x|}{|x|}<Cu,$$
where $u$ is the unit round-off and $C>0$ is a constant independent of the arithmetic precision. High relative accuracy implies great accuracy in the computations, since the relative errors are of the order of the machine precision and the accuracy is not affected by the dimension or the conditioning of the problem being solved.
A sufficient condition to assure that an algorithm can be computed to high relative accuracy is the non-inaccurate cancellation condition, sometimes denoted as the NIC condition, which is satisfied if the algorithm does not require inaccurate subtractions and only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of the opposite sign, or subtraction of initial data (cf. [
18,
19]).
If the bidiagonal factorization (5) of a nonsingular and totally positive matrix $A$ can be computed to high relative accuracy, then the computation of its eigenvalues and singular values, the computation of $A^{-1}$, and even the resolution of systems of linear equations $Ax=b$, for vectors $b$ with alternating signs, can also be performed to high relative accuracy, using the algorithms provided in [26].
Let $(u_0,\ldots,u_n)$ be a basis of a space $U$ of functions defined on $I\subseteq\mathbb{R}$. Given a sequence of parameters $t_1<\cdots<t_{n+1}$ on $I$, the corresponding collocation matrix is defined by
$$M:=\left(u_{j-1}(t_i)\right)_{1\le i,j\le n+1}.$$
The system $(u_0,\ldots,u_n)$ of functions defined on $I$ is totally positive if all of its collocation matrices are totally positive.
If the space $U$ is formed by $n$-times continuously differentiable functions and $x\in I$, the Wronskian matrix at $x$ is defined by
$$W(u_0,\ldots,u_n)(x):=\left(u_{j-1}^{(i-1)}(x)\right)_{1\le i,j\le n+1},$$
where $u^{(i)}$, $i=0,\ldots,n$, denotes the $i$-th derivative of $u$ at $x$.
Now, let us suppose that $U$ is a Hilbert space of functions under a given inner product $\langle f,g\rangle$ defined for any $f,g\in U$. Then, given linearly independent functions $u_0,\ldots,u_n$ in $U$, the corresponding Gram matrix is the symmetric matrix defined by
$$G(u_0,\ldots,u_n):=\left(\langle u_{i-1},u_{j-1}\rangle\right)_{1\le i,j\le n+1}.$$
3. Bidiagonal Factorization of r-Stirling Matrices
Let us recall that, given $n,m\in\mathbb{N}$ with $m\le n$, the (signless) Stirling number of the first kind, usually denoted by $\left[{n\atop m}\right]$, can be defined combinatorially as the number of permutations of the set $\{1,2,\ldots,n\}$ having $m$ cycles.
On the other hand, the Stirling number of the second kind, usually denoted by $\left\{{n\atop m}\right\}$, coincides with the number of partitions of the set $\{1,2,\ldots,n\}$ into $m$ non-empty disjoint blocks (cf. [3]). Moreover, the Bell number $B_n$ gives the total number of partitions of the set $\{1,2,\ldots,n\}$; that is,
$$B_n=\sum_{m=0}^{n}\left\{{n\atop m}\right\}.$$
The first and second kinds of Stirling numbers can be recursively computed as follows,
$$\left[{n\atop m}\right]=\left[{n-1\atop m-1}\right]+(n-1)\left[{n-1\atop m}\right],\qquad \left\{{n\atop m}\right\}=\left\{{n-1\atop m-1}\right\}+m\left\{{n-1\atop m}\right\},\qquad(14)$$
with the initial conditions $\left[{0\atop 0}\right]=\left\{{0\atop 0}\right\}=1$ and $\left[{n\atop m}\right]=\left\{{n\atop m}\right\}=0$ whenever $m=0<n$ or $m>n$.
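These recurrences translate directly into code; a minimal sketch (the function names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling1(n, m):
    # Signless Stirling numbers of the first kind:
    # [n, m] = [n-1, m-1] + (n-1) * [n-1, m]
    if n == m == 0:
        return 1
    if n == 0 or m == 0 or m > n:
        return 0
    return stirling1(n - 1, m - 1) + (n - 1) * stirling1(n - 1, m)

@lru_cache(maxsize=None)
def stirling2(n, m):
    # Stirling numbers of the second kind:
    # {n, m} = {n-1, m-1} + m * {n-1, m}
    if n == m == 0:
        return 1
    if n == 0 or m == 0 or m > n:
        return 0
    return stirling2(n - 1, m - 1) + m * stirling2(n - 1, m)

def bell(n):
    # Bell number: total number of partitions of {1, ..., n}.
    return sum(stirling2(n, m) for m in range(n + 1))

print([stirling2(4, m) for m in range(5)])  # [0, 1, 7, 6, 1]
print(bell(4))                              # 15
```

As a sanity check, the first kind numbers for fixed n sum to n! (every permutation has some number of cycles).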
The r-Stirling numbers are polynomials in r that generalize the regular Stirling numbers; they count certain restricted permutations and partitions of sets with a given number of elements.
For $n\ge r$, the first kind $r$-Stirling number, denoted by $\left[{n\atop m}\right]_r$, counts the number of permutations of the elements $1,\ldots,n$ having $m$ cycles, such that $1,\ldots,r$ are in distinct cycles. In addition, the second kind $r$-Stirling number, denoted by $\left\{{n\atop m}\right\}_r$, is defined as the number of set partitions of $\{1,\ldots,n\}$ into $m$ blocks such that the first $r$ elements are in distinct blocks. Thus, the $r$-Bell number $B_{n,r}$ gives the total number of partitions of the set $\{1,\ldots,n+r\}$, such that the first $r$ elements are in distinct blocks; that is,
$$B_{n,r}=\sum_{m=0}^{n}\left\{{n+r\atop m+r}\right\}_r.$$
Clearly, for $r=0$, the $r$-Stirling numbers coincide with the corresponding regular Stirling numbers.
Theorems 1 and 2 of [3] prove that the $r$-Stirling numbers satisfy the recurrence relations verified by the regular Stirling numbers (see (14)); that is,
$$\left[{n\atop m}\right]_r=\left[{n-1\atop m-1}\right]_r+(n-1)\left[{n-1\atop m}\right]_r,\qquad \left\{{n\atop m}\right\}_r=\left\{{n-1\atop m-1}\right\}_r+m\left\{{n-1\atop m}\right\}_r,\qquad n>r,$$
but with different initial conditions,
$$\left[{n\atop m}\right]_r=\left\{{n\atop m}\right\}_r=0,\qquad n<r,$$
and
$$\left[{r\atop m}\right]_r=\left\{{r\atop m}\right\}_r=\delta_{m,r}.$$
Moreover, for some particular values, the corresponding
r-Stirling numbers can be easily obtained:
Finally, let us recall a “cross” recurrence, relating $r$-Stirling numbers of the second kind with different values of $r$, that will be used in this paper (see Theorem 4 of [3]). The $r$-Stirling numbers of the second kind satisfy
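The recurrence and initial conditions recalled above can likewise be transcribed directly; a sketch for the second kind, following Broder's definitions (our code):

```python
from functools import lru_cache

def stirling2_r(n, m, r):
    # Second kind r-Stirling number: partitions of {1, ..., n} into m blocks
    # with 1, ..., r in distinct blocks.  Same recurrence as the regular
    # Stirling numbers for n > r, with the initial conditions at n = r.
    @lru_cache(maxsize=None)
    def s(n, m):
        if n < r or m < r or m > n:
            return 0
        if n == r:
            return 1 if m == r else 0
        return s(n - 1, m - 1) + m * s(n - 1, m)
    return s(n, m)

def bell_r(n, r):
    # r-Bell number: partitions of a set of n + r elements with the
    # first r elements in distinct blocks.
    return sum(stirling2_r(n + r, m + r, r) for m in range(n + 1))

# For r = 0 the regular numbers are recovered: B_{4,0} = Bell(4).
print(bell_r(4, 0))  # 15
# Partitions of {1, 2, 3, 4} with 1 and 2 in distinct blocks.
print(bell_r(2, 2))  # 10
```

For instance, stirling2_r(4, 2, 2) = 4 counts the partitions of {1, 2, 3, 4} into two blocks separating 1 and 2, which can be verified by hand.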
Now, we are going to define triangular matrices whose entries are given in terms of
r-Stirling numbers. We shall analyze their Neville elimination and derive their bidiagonal factorization (
5).
Definition 1. For $n,r\in\mathbb{N}$ with $n\ge r$, the first kind of r-Stirling matrix is the lower triangular matrix with On the other hand, the second kind of r-Stirling matrix is the lower triangular matrix with

The following results provide the expressions of the pivots and multipliers of the Neville elimination of r-Stirling matrices and their inverses. Then, their bidiagonal factorization (5) will be deduced and their total positivity property analyzed.
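As an illustration, a lower triangular matrix of second kind r-Stirling numbers can be assembled from their recurrence; the entry convention used here, entry $(i,j)$ equal to $\left\{{i+r\atop j+r}\right\}_r$, is our assumption for this sketch and may differ from the exact indexing of Definition 1:

```python
from functools import lru_cache

def stirling2_r(n, m, r):
    # Second kind r-Stirling numbers via their recurrence (Broder).
    @lru_cache(maxsize=None)
    def s(n, m):
        if n < r or m < r or m > n:
            return 0
        if n == r:
            return 1 if m == r else 0
        return s(n - 1, m - 1) + m * s(n - 1, m)
    return s(n, m)

def second_kind_r_stirling_matrix(n, r):
    # (n+1) x (n+1) lower triangular matrix with entries {i+r, j+r}_r;
    # the indexing convention is our assumption, not taken from the paper.
    return [[stirling2_r(i + r, j + r, r) for j in range(n + 1)]
            for i in range(n + 1)]

# For r = 1 the r-Stirling numbers reduce to the regular ones, and the
# classical lower triangular Stirling matrix of the second kind appears.
for row in second_kind_r_stirling_matrix(3, 1):
    print(row)
# [1, 0, 0, 0]
# [1, 1, 0, 0]
# [1, 3, 1, 0]
# [1, 7, 6, 1]
```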
Theorem 2. For with , the first kind of r-Stirling matrix is totally positive and admits the following factorizationwhere , , are the lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries are given by Moreover,where , , are the lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries are given by Proof. Let us define
and
,
, the matrix obtained after
steps of the Neville elimination of
. We shall deduce by induction on
that
For
, identities (
28) follow from (
22). Now, suppose that (
28) is verified for some
. Then, taking into account (19), we can write
By (
1), we have
and, using (
16), (
28), and (
29), we derive
corresponding to the identity (
35) for
.
By (
2), (19), and (
28), we deduce that the pivots of the Neville elimination of
satisfy
and so the diagonal pivots are
for
(see (
17)). Moreover, the following expression for the multipliers can be deduced
Clearly,
, for
, and
, for
. Then, using Theorem 1, we deduce that
is totally positive.
Finally, taking into account the bidiagonal factorization (
7) for the inverse of a nonsingular totally positive matrix, the factorization (
26) for
can be deduced easily. □
The provided bidiagonal decomposition (
5) of the first kind of
r-Stirling matrix
can be stored in a compact form through
with
Now, the following result provides the pivots and multipliers of the Neville elimination of r-Stirling matrices of the second kind and their inverses.
Theorem 3. For with , the second kind r-Stirling matrix admits a bidiagonal factorizationwhere , , are lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries , , are given by Moreover,where , , are lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries , , are given by Proof. Let us define
and
,
, the matrix obtained after
steps of the Neville elimination of
. By induction on
, we shall deduce that
Taking into account (
23), identities (
35) clearly hold for
. If (
35) holds for some
, using (20), we have
Since
(see (
1)), and taking into account (
21), (
35), and (
36), we derive
corresponding to the identity (
35) for
.
Now, by considering (
2), (20), and (
35), we can easily deduce that
and
, for
. For the multipliers we have
Clearly,
, for
,
, for
and, using Theorem 1, we conclude that
is strictly totally positive.
Finally, taking into account the bidiagonal factorization (
7) for the inverse of a nonsingular totally positive matrix, the factorization (
33) for
can be deduced. □
The provided bidiagonal factorization (
5) of the matrix
can be stored in a compact form through
with
Remark 1. Given a lower triangular matrix, for any , the determinants of the submatrices using rows and columns with for all are called nontrivial minors, because all other minors are zero. A lower triangular matrix is called ΔSTP if all its nontrivial minors are positive. Since Theorem 2 (respectively, Theorem 3) proves that the lower triangular matrix with unit diagonal (respectively, ) has all the multipliers of its Neville elimination positive, by Theorem 4.3 of [16], we conclude that (respectively, ) is in fact ΔSTP.

Now, taking into account Theorems 2 and 3, we provide the pseudocode of Algorithms 1 and 2, respectively. Specifically, Algorithm 1 computes the matrix form
in (
30), for the bidiagonal decomposition of the first kind
r-Stirling matrix
in (
24). Moreover, Algorithm 2 computes the matrix form
in (
37) for the bidiagonal decomposition of the
r-Stirling matrix of the second kind
in (
31). The computational cost of both algorithms is
. Let us observe that none of the algorithms requires inaccurate subtractions, and so the provided matrices are computed to high relative accuracy. In fact, all of their entries are natural numbers.
Algorithm 1: Computation to high relative accuracy of (see (30)) |
Require: n, r Ensure: bidiagonal decomposition of to high relative accuracy for for end end |
Algorithm 2: Computation to high relative accuracy of (see (37)) |
Require: n, r Ensure: bidiagonal decomposition of to high relative accuracy for for end end |
4. Accurate Computations with r-Bell Bases
Let
be the (n+1)-dimensional linear space formed by all polynomials in the variable
x defined on a real interval
I and whose degree is not greater than
n; that is,
For
, the
n degree
r-Bell polynomial of the first kind is defined as
Then we can define a system
of
r-Bell polynomials of the first kind that satisfy
where
,
and
is the
first kind of
r-Stirling matrix defined by (
22). By (
24),
and we can guarantee that
is a basis of
.
Definition 2. We say that is the r-Bell basis of the first kind of the polynomial space .
On the other hand, the r-Bell polynomials of the second kind are simply called r-Bell polynomials in [3,27]; they are defined by their generating function
They can also be written in terms of the monomial basis as follows,
and then
where
is the
second kind of
r-Stirling matrix defined by (
23). By (
31),
and we can guarantee that
is a basis of
.
Definition 3. We say that is the r-Bell basis of the second kind of .
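For concreteness, the r-Bell polynomials of the second kind can be evaluated through their monomial coefficients. The sketch below assumes Mező's expression $B_{n,r}(x)=\sum_{m=0}^{n}\left\{{n+r\atop m+r}\right\}_r x^m$ (our assumption for this illustration), so that evaluation at $x=1$ recovers the r-Bell numbers:

```python
from functools import lru_cache

def stirling2_r(n, m, r):
    # Second kind r-Stirling numbers via their recurrence.
    @lru_cache(maxsize=None)
    def s(n, m):
        if n < r or m < r or m > n:
            return 0
        if n == r:
            return 1 if m == r else 0
        return s(n - 1, m - 1) + m * s(n - 1, m)
    return s(n, m)

def r_bell_poly(n, r, x):
    # B_{n,r}(x) = sum_m {n+r, m+r}_r x^m  (assumed monomial expansion).
    return sum(stirling2_r(n + r, m + r, r) * x ** m for m in range(n + 1))

print(r_bell_poly(4, 0, 1))  # 15, the 4th Bell number
print(r_bell_poly(2, 2, 1))  # 10, the r-Bell number B_{2,2}
```

All coefficients are nonnegative integers, which is consistent with the absence of inaccurate cancellations in the computations described below.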
Let us recall that the monomial basis
, with
for
, is a strictly totally positive basis of
, and so the collocation matrix
is strictly totally positive for any increasing sequence of positive values
(see Section 3 of [
19]). In fact,
V is the Vandermonde matrix at the considered nodes. Let us recall that Vandermonde matrices have relevant applications in linear interpolation and numerical quadrature (see for example [
28,
29]). As for
, we have
and it can be easily checked that the computation of
does not require inaccurate cancellations and can be performed to high relative accuracy.
Taking into account the total positivity of the Vandermonde matrices and the total positivity of r-Stirling matrices of the first and second kinds, we shall derive the total positivity of r-Bell bases, as well as factorizations providing computations to high relative accuracy when considering their collocation matrices.
Theorem 4. The r-Bell basis of first and second kind are totally positive bases of . Moreover, given , the collocation matricesand their bidiagonal factorization (5) can be computed to high relative accuracy. Proof. Given
, by formulae (
39) and (
41), the collocation matrices in (
44) satisfy
where
V is the Vandermonde matrix (
42), and
and
are
r-Stirling matrices of the first and second kind, respectively.
It is well known that
V is strictly totally positive at
and its decomposition (
5) can be computed to high relative accuracy (see [
19]). By Theorem 2 (respectively, Theorem 3),
(respectively,
) is a nonsingular totally positive matrix and its decomposition (
5) can be computed to high relative accuracy. Taking into account these facts,
(respectively,
) is a nonsingular totally positive matrix and its bidiagonal decomposition can be also computed to high relative accuracy (see (
10)). Therefore, we can deduce that
B (respectively,
) is a totally positive matrix since it is the product of totally positive matrices (see Theorem 3.1 of [
13]). Moreover, using Algorithm 5.1 of [
26], if the decomposition (
5) of two nonsingular totally positive matrices is provided to high relative accuracy, then the decomposition of the product can be obtained to high relative accuracy. Consequently,
B (respectively,
) and its decomposition (
5) can be obtained to high relative accuracy. □
Corollary 1 of [
20] provides the factorization (
5) of
, the Wronskian matrix of the monomial basis
at
. For the matrix representation
, we have
Taking into account the sign of the entries of
and Theorem 1, one can derive that the Wronskian matrix of the monomial basis is totally positive for any
. Moreover, the computation of (
46) satisfies the NIC condition and
W can be computed to high relative accuracy. Using (
46), computations to high relative accuracy when solving algebraic problems related to
W have been achieved in [
20] for
.
Using formula (
39), it can be checked that
So, with the reasoning of the proof of Theorem 4, the next result follows, and we can also guarantee computations to high relative accuracy when solving algebraic problems related to Wronskian matrices of r-Bell polynomial bases at positive values.
Theorem 5. Let and be the r-Bell bases of the first and second kind, respectively. For any , the Wronskian matrices and are nonsingular totally positive and their bidiagonal decomposition (5) can be computed to high relative accuracy.

It is well known that the polynomial space
is a Hilbert space under the inner product
and the Gramian matrix of the monomial basis
with respect to (
47) is:
The matrix $H$ is the Hilbert matrix. In Numerical Linear Algebra, Hilbert matrices are well-known Hankel matrices; their inverses and determinants have explicit formulas, but they are very ill-conditioned even for moderate dimensions. Hence, they can be used to test how numerical algorithms perform on ill-conditioned or nearly singular matrices. It is well known that Hilbert matrices are strictly totally positive. In [19], the pivots and the multipliers of the Neville elimination of $H$ are explicitly derived. It can be checked that $BD(H)$ is given by

Clearly, the computation of the factorization (5) of $H$ does not require inaccurate cancellations, and so it can be computed to high relative accuracy.
Using formula (
39), it can be checked that the Gramian matrices
and
of the
r-Bell bases of the first and second kind, respectively, with respect to the inner product (
47), can be written as follows,
where
(respectively,
) is the
r-Stirling matrix of the first (respectively, second) kind. According to the reasoning in the proof of Theorem 4, the following result can be deduced.
Theorem 6. The Gramian matrices and of the r-Bell bases of the first and second kind, respectively, with respect to the inner product (47), are nonsingular and totally positive. Furthermore, , and their bidiagonal decompositions (5) can be computed to high relative accuracy.

Now, taking into account Algorithms 1 and 2, as well as Theorems 4–6, we provide the pseudocode of Algorithms 3–5 for computing, to high relative accuracy, the matrix form (
11) of the bidiagonal decomposition of the collocation matrices
,
, Wronskian matrices
,
, and Gramian matrices
,
of
r-Bell bases of the first and second kind. Algorithm 3 requires
,
, and the bidiagonal decomposition of the Vandermonde matrix implemented in the MATLAB function
available in [
30]. In addition, Algorithm 4 requires
,
, and the bidiagonal decomposition
(
46) of the Wronskian matrix
W of the monomial basis at
x. Moreover, Algorithm 5 requires
,
, and the bidiagonal decomposition
of the Hilbert matrix
H implemented in the MATLAB function
available in [
30]. Finally, let us observe that these three algorithms call the MATLAB function
available in [
30]. Let us recall that, given
and
to high relative accuracy,
computes
to high relative accuracy. The computational cost of the mentioned function and mentioned algorithms is
arithmetic operations.
Algorithm 3: Computation to high relative accuracy of the bidiagonal decomposition of collocation matrices , of r-Bell bases of the first and second kind |
Require: n, r, such that Ensure: bidiagonal decomposition of to high relative accuracy bidiagonal decomposition of to high relative accuracy (see Algorithm 1) (see Algorithm 2) |
Algorithm 4:Computation to high relative accuracy of the bidiagonal decomposition of Wronskian matrices , of r-Bell bases of the first and second kind |
Require: n, r, Ensure: bidiagonal decomposition of to high relative accuracy bidiagonal decomposition of to high relative accuracy (see Algorithm 1) (see Algorithm 2) |
Algorithm 5: Computation to high relative accuracy of the bidiagonal decomposition of Gramian matrices , of r-Bell bases of the first and second kind |
Require: n, r Ensure: bidiagonal decomposition of to high relative accuracy bidiagonal decomposition of to high relative accuracy (see Algorithm 1) (see Algorithm 2) |
5. Numerical Experiments
Some numerical tests are presented in this section, supporting the obtained theoretical results. We have considered different nonsingular strictly totally positive collocation matrices , , Wronskian matrices , , and Gramian matrices , of r-Bell bases of the first and second kind. Specifically, with and , , and , with and , , and with and , . In addition, with and and, with and . Finally, with , .
We have computed, with Mathematica, the 2-norm condition number (the ratio of the largest singular value to the smallest) of all considered matrices. This conditioning is depicted in Figure 1. It can be easily observed that the conditioning drastically increases with the size of the matrices. Due to the ill-conditioning of these matrices, standard routines do not obtain accurate solutions, because they can suffer from inaccurate cancellations.
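This growth is easy to reproduce. For instance, for the Hilbert matrices appearing as Gramian matrices in Section 4 (a numpy illustration, not the exact matrices of the experiments):

```python
import numpy as np

# 2-norm condition number (ratio of largest to smallest singular value)
# of Hilbert matrices of growing dimension.  Around dimension 12 the
# conditioning reaches the limits of double precision arithmetic.
for n in (4, 8, 12):
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    print(n, np.linalg.cond(H, 2))
```

This is exactly the situation in which standard floating-point routines lose all accuracy while algorithms based on an exactly computed bidiagonal decomposition do not.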
Let us recall that if
can be obtained to high relative accuracy, then the MATLAB functions
TNEigenValues,
TNSingularValues,
TNInverseExpand and
TNSolve available in the software library
TNTools in [
30], take as input argument
, and compute to high relative accuracy the eigenvalues and singular values of
A, the inverse matrix
(using the algorithm presented in [
22]) and even the solution of linear systems
, for vectors
b with alternating signs.
In order to check the accuracy of our algorithms, we have performed several matrix computations using the mentioned routines available in [30], with the matrix form (11) of the bidiagonal factorization (5) as an input argument. The obtained approximations have been compared with the respective approximations obtained by the traditional methods provided in Matlab. In this context, the values provided by Wolfram Mathematica with 100-digit arithmetic have been taken as the exact solution of the considered algebraic problem. For the sake of brevity, only a few of these experiments will be shown.
The relative error of each approximation has also been computed in Mathematica with 100-digit arithmetic as , where y denotes the exact solution and the computed approximation.
Computation of eigenvalues and singular values. For all considered matrices, we have compared the smallest eigenvalue and singular value obtained using the proposed bidiagonal decompositions provided by Algorithms 3–5 with the functions TNEigenValues and TNSingularValues, and the smallest eigenvalue and singular value computed with the Matlab commands eig and svd, respectively. Note that the computed eigenvalues of the Wronskian matrices are exact. Furthermore, the singular values of the considered Gramian matrices coincide with their eigenvalues (since these Gramian matrices are symmetric positive definite).
The relative errors are shown in
Figure 2. Note that our approach accurately computes the smallest eigenvalue and singular value regardless of the 2-norm condition number of the considered matrices. In contrast, the Matlab commands
eig and
svd return results that are not accurate at all.
Computation of inverses. Moreover, for all considered matrices, we have compared the inverse obtained using the proposed bidiagonal decompositions provided by Algorithms 3–5 with the function
TNInverseExpand and the inverse computed with the Matlab command
inv. As shown in
Figure 3, our procedure provides very accurate results. On the contrary, the results obtained with Matlab reflect poor accuracy.
Resolution of linear systems. Finally, for all considered matrices, we have compared the solution of the linear systems
,
,
,
,
and
where
and
,
, are random nonnegative integer values, obtained using the proposals for bidiagonal decompositions provided by Algorithms 3–5 with the function
TNSolve, and the solutions obtained with the Matlab command \. As opposed to the results obtained with the command \, the proposed procedure preserves the accuracy for all of the considered dimensions.
Figure 4 illustrates the relative errors.