Article

On the Total Positivity and Accurate Computations of r-Bell Polynomial Bases

Esmeralda Mainar, Juan Manuel Peña and Beatriz Rubio

Department of Applied Mathematics, University Research Institute of Mathematics and Its Applications (IUMA), University of Zaragoza, 50009 Zaragoza, Spain
* Author to whom correspondence should be addressed.
Axioms 2023, 12(9), 839; https://doi.org/10.3390/axioms12090839
Submission received: 21 July 2023 / Revised: 23 August 2023 / Accepted: 26 August 2023 / Published: 29 August 2023
(This article belongs to the Special Issue Advances in Linear Algebra with Applications)

Abstract

A new class of matrices defined in terms of r-Stirling numbers is introduced. These r-Stirling matrices are totally positive and determine the linear transformation between monomial and r-Bell polynomial bases. An efficient algorithm for the computation to high relative accuracy of the bidiagonal factorization of r-Stirling matrices is provided and used to achieve computations to high relative accuracy for the resolution of relevant algebraic problems with collocation, Wronskian, and Gramian matrices of r-Bell bases. The numerical experimentation confirms the accuracy of the proposed procedure.

1. Introduction

The Stirling numbers form combinatorial sequences with remarkable properties and many applications in Combinatorics, Number Theory, and Special Functions, among other areas. The Stirling numbers were so named by Nielsen (cf. [1]) in honor of James Stirling, who computed them in 1730 (cf. [2]). Interested readers are referred to [3,4,5] for an introduction and many interesting properties of Stirling numbers. In the literature, many generalizations of Stirling numbers have been introduced and studied ([3,6]). In this paper, we shall focus on the r-Stirling numbers, whose combinatorial and algebraic properties, in most cases, generalize those of the regular Stirling numbers.
The r-Bell polynomials (see [7]) are considered as an efficient mathematical tool for combinatorial analysis (cf. [5]) and can be applied in many different contexts: in the evaluation of some integrals and alternating sums [8,9], for  the analysis of the internal relations for the orthogonal invariants of a positive compact operator [10], in the Blissard problem [5], for the Newton sum rules for the zeros of polynomials [11], in the recurrence relations for a class of Freud-type polynomials [12] and in many other subjects.
In this paper, a class of r-Stirling matrices will be introduced. It will be shown that these matrices determine the linear transformations between monomial and r-Bell polynomial bases and that they are totally positive, so that all of their minors are non-negative. Several applications of the class of totally positive matrices can be found in [13,14,15]. Totally positive matrices can be expressed as a particular product of bidiagonal matrices (see [16,17]). This bidiagonal decomposition provides a representation of totally positive matrices that exploits their total positivity and allows us to derive algorithms to high relative accuracy for the resolution of relevant algebraic problems, such as the computation of their inverses, eigenvalues, and singular values, or the solution of some systems of linear equations.
A major issue in Numerical Linear Algebra is to obtain algorithms to high relative accuracy, because the relative errors in their computations will have the order of the machine precision and will not be drastically affected by the dimension or conditioning of the considered matrices. Consequently, the design of algorithms adapted to the structure of totally positive matrices and achieving computations to high relative accuracy has attracted the interest of many researchers (see [18,19,20,21,22,23,24]).
Let us recall that many interpolation, numerical quadrature, and approximation problems require algebra computations with collocation matrices of the considered bases. When solving Taylor interpolation problems, Wronskian matrices have to be considered. On the other hand, Gramian matrices define linear transformations to obtain orthogonal from nonorthogonal bases. Moreover, the inversion of Gramian matrices is also required when approximating, in the least-squares sense, curves by linear combinations of control points and the basis functions. The large range of applications of r-Bell polynomials has motivated us to exploit the total positivity property of r-Stirling matrices to design fast and accurate algorithms for the resolution of algebraic problems with collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases.
The layout of this paper is as follows. Section 2 summarizes some notations and auxiliary results. Section 3 focuses on the class of r-Stirling matrices. Their bidiagonal decomposition is provided and their total positivity property is deduced. In Section 4, r-Bell polynomials are considered. Collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases are shown to be totally positive. Then, using the proposed factorization for r-Stirling matrices, the resolution of the above mentioned algebraic problems can be achieved to high relative accuracy. Finally, Section 5 presents numerical experiments confirming the accuracy of the proposed methods for the computation of eigenvalues, singular values, inverses, or the solution of some linear systems related to collocation, Wronskian, and Gramian matrices of r-Bell polynomial bases.

2. Notations and Auxiliary Results

Let us recall that a matrix is totally positive (respectively, strictly totally positive) if all its minors are nonnegative (respectively, positive).
The Neville elimination is an alternative procedure to Gaussian elimination (see [16,17,25]). Given a nonsingular matrix $A=(a_{i,j})_{1\le i,j\le n+1}$, its Neville elimination calculates a sequence of $(n+1)\times(n+1)$ matrices $A^{(k)}$, $k=1,\ldots,n+1$, so that the elements below the main diagonal of the first $k-1$ columns of $A^{(k)}$ are zeros and $A^{(n+1)}$ is upper triangular.
From $A^{(k)}=(a_{i,j}^{(k)})_{1\le i,j\le n+1}$, the Neville elimination computes $A^{(k+1)}=(a_{i,j}^{(k+1)})_{1\le i,j\le n+1}$ as follows:
$$a_{i,j}^{(k+1)}:=\begin{cases} a_{i,j}^{(k)}, & \text{if } 1\le i\le k,\\[4pt] a_{i,j}^{(k)}-\dfrac{a_{i,k}^{(k)}}{a_{i-1,k}^{(k)}}\,a_{i-1,j}^{(k)}, & \text{if } k+1\le i,j\le n+1 \text{ and } a_{i-1,k}^{(k)}\ne 0,\\[4pt] a_{i,j}^{(k)}, & \text{if } k+1\le i\le n+1 \text{ and } a_{i-1,k}^{(k)}=0,\end{cases}\tag{1}$$
with $A^{(1)}:=A$. The element
$$p_{i,j}:=a_{i,j}^{(j)},\qquad 1\le j\le i\le n+1,\tag{2}$$
is called the $(i,j)$ pivot, and we say that $p_{i,i}$ is the $i$-th diagonal pivot of the Neville elimination of $A$. Whenever all pivots are nonzero, no row exchanges are needed in the Neville elimination procedure. The value
$$m_{i,j}:=\begin{cases} a_{i,j}^{(j)}/a_{i-1,j}^{(j)}=p_{i,j}/p_{i-1,j}, & \text{if } a_{i-1,j}^{(j)}\ne 0,\\ 0, & \text{if } a_{i-1,j}^{(j)}=0,\end{cases}\tag{3}$$
for $1\le j<i\le n+1$, is the $(i,j)$ multiplier of the Neville elimination of $A$.
The Neville elimination procedure of a nonsingular matrix is illustrated with the following example:
$$A=A^{(1)}=\begin{pmatrix}2&4&12\\8&26&138\\56&262&1704\end{pmatrix}\;\rightarrow\;A^{(2)}=\begin{pmatrix}2&4&12\\0&10&90\\0&80&738\end{pmatrix}\;\rightarrow\;A^{(3)}=\begin{pmatrix}2&4&12\\0&10&90\\0&0&18\end{pmatrix}.\tag{4}$$
The multipliers and the diagonal pivots are $m_{2,1}=4$, $m_{3,1}=7$, $m_{3,2}=8$, $p_{1,1}=2$, $p_{2,2}=10$, and $p_{3,3}=18$.
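To make the elimination concrete, the following MATLAB/Octave sketch (our own illustration, not code from the paper or from [30]) performs the Neville elimination (1) and returns the multipliers (3) and the diagonal pivots, assuming that no row exchanges are needed (as is the case for nonsingular totally positive matrices):

% Neville elimination sketch: returns the multipliers m(i,j) and the
% diagonal pivots p(i) of A, following (1)-(3); rows are processed from
% the bottom up so that row i-1 still belongs to A^(k) when row i is updated.
function [m, p] = neville_sketch(A)
  N = size(A, 1);
  m = zeros(N);                        % m(i,j) = (i,j) multiplier
  for k = 1:N-1
    for i = N:-1:k+1
      if A(i-1, k) ~= 0
        m(i, k) = A(i, k) / A(i-1, k);
        A(i, :) = A(i, :) - m(i, k) * A(i-1, :);
      end
    end
  end
  p = diag(A);                         % diagonal pivots of the elimination
end

For the matrix of the example, [m, p] = neville_sketch([2 4 12; 8 26 138; 56 262 1704]) returns m(2,1) = 4, m(3,1) = 7, m(3,2) = 8 and p = (2, 10, 18), in agreement with the values listed above.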
By Theorem 4.2 and the arguments of p. 116 of [17], a nonsingular totally positive matrix $A\in\mathbb{R}^{(n+1)\times(n+1)}$ can be written as follows:
$$A=F_nF_{n-1}\cdots F_1\,D\,G_1G_2\cdots G_n,\tag{5}$$
where $F_i\in\mathbb{R}^{(n+1)\times(n+1)}$ and $G_i\in\mathbb{R}^{(n+1)\times(n+1)}$, $i=1,\ldots,n$, are the totally positive, lower and upper triangular bidiagonal matrices with unit diagonal described by
$$F_i=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&m_{i+1,1}&1&&\\&&&&\ddots&\ddots&\\&&&&&m_{n+1,n+1-i}&1\end{pmatrix},\qquad G_i^T=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&\widetilde m_{i+1,1}&1&&\\&&&&\ddots&\ddots&\\&&&&&\widetilde m_{n+1,n+1-i}&1\end{pmatrix},\tag{6}$$
and $D\in\mathbb{R}^{(n+1)\times(n+1)}$ is a diagonal matrix with positive diagonal entries. In fact, the elements on the diagonal of $D$ are the diagonal pivots of the Neville elimination of $A$. On the other hand, the off-diagonal elements $m_{i,j}$ and $\widetilde m_{i,j}$ are the multipliers of the Neville elimination of $A$ and $A^T$, respectively.
Using the results in [16,17,25] and Theorem 2.2 of [22], the Neville elimination of $A$ can be used to derive the following bidiagonal factorization of its inverse matrix:
$$A^{-1}=\widehat G_1\widehat G_2\cdots\widehat G_n\,D^{-1}\,\widehat F_n\widehat F_{n-1}\cdots\widehat F_1,\tag{7}$$
where $\widehat F_i\in\mathbb{R}^{(n+1)\times(n+1)}$ and $\widehat G_i\in\mathbb{R}^{(n+1)\times(n+1)}$ are the lower and upper triangular bidiagonal matrices with the form described in (6), obtained by replacing the off-diagonal entries $\{m_{i+1,1},\ldots,m_{n+1,n+1-i}\}$ and $\{\widetilde m_{i+1,1},\ldots,\widetilde m_{n+1,n+1-i}\}$ by $\{-m_{i+1,i},\ldots,-m_{n+1,i}\}$ and $\{-\widetilde m_{i+1,i},\ldots,-\widetilde m_{n+1,i}\}$, respectively. The structure of the bidiagonal matrix factors in (7) is described as follows:
$$\widehat F_i=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&-m_{i+1,i}&1&&\\&&&&\ddots&\ddots&\\&&&&&-m_{n+1,i}&1\end{pmatrix},\tag{8}$$
$$\widehat G_i^T=\begin{pmatrix}1&&&&&&\\0&1&&&&&\\&\ddots&\ddots&&&&\\&&0&1&&&\\&&&-\widetilde m_{i+1,i}&1&&\\&&&&\ddots&\ddots&\\&&&&&-\widetilde m_{n+1,i}&1\end{pmatrix}.\tag{9}$$
Let us also observe that, if a matrix $A\in\mathbb{R}^{(n+1)\times(n+1)}$ is nonsingular and totally positive, then its transpose $A^T$ is also nonsingular and totally positive, and
$$A^T=\widetilde F_n\widetilde F_{n-1}\cdots\widetilde F_1\,D\,\widetilde G_1\widetilde G_2\cdots\widetilde G_n,\tag{10}$$
where $\widetilde F_i=G_i^T$, $\widetilde G_i=F_i^T$, and $F_i$, $G_i$, $i=1,\ldots,n$, are the lower and upper triangular bidiagonal matrices in (5).
The total positivity property of a given matrix can be characterized by analyzing the sign of the diagonal pivots and multipliers of its Neville elimination, as shown in Theorem 4.1, Corollary 5.5 of [25] and the arguments of p. 116 of [17].
Theorem 1. 
A nonsingular matrix $A$ is totally positive (respectively, strictly totally positive) if and only if the Neville elimination of $A$ and $A^T$ can be performed without row exchanges, all the multipliers of the Neville elimination of $A$ and $A^T$ are nonnegative (respectively, positive), and the diagonal pivots of the Neville elimination of $A$ are all positive.
In [19], the bidiagonal factorization (5) of a nonsingular and totally positive $A\in\mathbb{R}^{(n+1)\times(n+1)}$ is represented by defining a matrix $BD(A)=(BD(A)_{i,j})_{1\le i,j\le n+1}$ such that
$$BD(A)_{i,j}:=\begin{cases} m_{i,j}, & \text{if } i>j,\\ p_{i,i}, & \text{if } i=j,\\ \widetilde m_{j,i}, & \text{if } i<j.\end{cases}\tag{11}$$
This representation will allow us to define algorithms adapted to the totally positive structure and provide accurate computations with the matrix.
Let us observe that, for the matrix $A$ in (4), the bidiagonal decomposition (5) can be written as follows:
$$\begin{pmatrix}2&4&12\\8&26&138\\56&262&1704\end{pmatrix}=\begin{pmatrix}1&0&0\\0&1&0\\0&7&1\end{pmatrix}\begin{pmatrix}1&0&0\\4&1&0\\0&8&1\end{pmatrix}\begin{pmatrix}2&0&0\\0&10&0\\0&0&18\end{pmatrix}\begin{pmatrix}1&2&0\\0&1&6\\0&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&3\\0&0&1\end{pmatrix}.$$
Moreover, since the diagonal pivots and the multipliers of the Neville elimination of $A$ and $A^T$ are all positive, we conclude that $A$ is strictly totally positive. The above factorization can be represented in the following compact matrix form:
$$BD(A)=\begin{pmatrix}2&2&3\\4&10&6\\7&8&18\end{pmatrix}.$$
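Building on the neville_sketch above, the compact representation (11) can be assembled from the Neville eliminations of A and A^T; the following lines are again only an illustrative sketch under the same assumptions, not the routine used in [19,30]:

% Assemble BD(A) from the Neville eliminations of A and A', cf. (11).
function B = bd_sketch(A)
  [m, p]  = neville_sketch(A);         % multipliers and pivots of A
  [mt, ~] = neville_sketch(A');        % multipliers of A'
  B = diag(p) + tril(m, -1) + triu(mt', 1);
end

For the matrix A in (4), bd_sketch(A) reproduces the matrix BD(A) displayed above.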
Let us also recall that a real value $x$ is computed to high relative accuracy whenever the computed value $\tilde x$ satisfies
$$\frac{|x-\tilde x|}{|x|}<Ku,$$
where $u$ is the unit round-off and $K>0$ is a constant independent of the arithmetic precision. High relative accuracy implies great accuracy in the computations, since the relative errors are of the order of the machine precision and the accuracy is not affected by the dimension or the conditioning of the problem being solved.
A sufficient condition to assure that an algorithm can be computed to high relative accuracy is the non-inaccurate cancellation condition, sometimes denoted as the NIC condition, which is satisfied if the algorithm does not require inaccurate subtractions and only evaluates products, quotients, sums of numbers of the same sign, subtractions of numbers of the opposite sign, or subtraction of initial data (cf. [18,19]).
If the bidiagonal factorization (5) of a nonsingular and totally positive matrix $A$ can be computed to high relative accuracy, then the computation of its eigenvalues and singular values, the computation of $A^{-1}$, and even the resolution of systems of linear equations $Ax=b$, for vectors $b$ with alternating signs, can also be performed to high relative accuracy, using the algorithms provided in [26].
Let $(u_0,\ldots,u_n)$ be a basis of a space $U$ of functions defined on $I\subseteq\mathbb{R}$. Given a sequence of parameters $x_1<\cdots<x_{n+1}$ on $I$, the corresponding collocation matrix is defined by
$$M(x_1,\ldots,x_{n+1}):=\big(u_{j-1}(x_i)\big)_{1\le i,j\le n+1}.$$
The system $(u_0,\ldots,u_n)$ of functions defined on $I\subseteq\mathbb{R}$ is totally positive if all of its collocation matrices $M(x_1,\ldots,x_{n+1})$ are totally positive.
If the space $U$ is formed by $n$-times continuously differentiable functions and $x\in I$, the Wronskian matrix at $x$ is defined by
$$W(u_0,\ldots,u_n)(x):=\big(u_{j-1}^{(i-1)}(x)\big)_{1\le i,j\le n+1},$$
where $u^{(i)}(x)$, $i\le n$, denotes the $i$-th derivative of $u$ at $x$.
Now, let us suppose that $U$ is a Hilbert space of functions under a given inner product $\langle u,v\rangle$ defined for any $u,v\in U$. Then, given linearly independent functions $u_0,\ldots,u_n$ in $U$, the corresponding Gram matrix is the symmetric matrix defined by
$$G(u_0,\ldots,u_n):=\big(\langle u_{i-1},u_{j-1}\rangle\big)_{1\le i,j\le n+1}.$$

3. Bidiagonal Factorization of r-Stirling Matrices

Let us recall that, given $n,m\in\mathbb{N}$ with $m<n$, the (signless) Stirling number of the first kind, usually denoted by $\left[{n\atop m}\right]$, can be defined combinatorially as the number of permutations of the set $\{1,\ldots,n\}$ having $m$ cycles.
On the other hand, the Stirling number of the second kind, usually denoted by $\left\{{n\atop m}\right\}$, coincides with the number of partitions of the set $\{1,\ldots,n\}$ into $m$ non-empty disjoint blocks (cf. [3]). Moreover, the Bell number $B_n$ gives the total number of partitions of the set $\{1,\ldots,n\}$; that is,
$$B_n:=\sum_{m=0}^{n}\left\{{n\atop m}\right\}.$$
The Stirling numbers of the first and second kind can be recursively computed as follows:
$$\left[{n\atop k}\right]=(n-1)\left[{n-1\atop k}\right]+\left[{n-1\atop k-1}\right],\qquad \left\{{n\atop k}\right\}=k\left\{{n-1\atop k}\right\}+\left\{{n-1\atop k-1}\right\},\tag{14}$$
with the initial conditions
$$\left[{0\atop 0}\right]:=1,\quad \left[{n\atop 0}\right]=\left[{0\atop n}\right]:=0,\qquad \left\{{n\atop n}\right\}:=1,\quad \left\{{n\atop 0}\right\}=\left\{{0\atop n}\right\}:=0,\qquad n\in\mathbb{N}.$$
The r-Stirling numbers are polynomials in r that generalize the regular Stirling numbers; they count certain restricted permutations and partitions of sets with a given number of elements.
For $r\in\mathbb{N}$, the r-Stirling number of the first kind, denoted by $\left[{n\atop m}\right]_r$, counts the number of permutations of the elements $\{1,\ldots,n\}$ having $m$ cycles, such that $1,2,\ldots,r$ are in distinct cycles. In addition, the r-Stirling number of the second kind $\left\{{n+r\atop m+r}\right\}_r$ is defined as the number of set partitions of $\{1,\ldots,n+r\}$ into $m+r$ blocks such that the first $r$ elements are in distinct blocks. Thus, the r-Bell number $B_{n,r}$ gives the total number of partitions of the set $\{1,\ldots,n+r\}$ such that the first $r$ elements are in distinct blocks; that is,
$$B_{n,r}:=\sum_{m=0}^{n}\left\{{n+r\atop m+r}\right\}_r.$$
Clearly, for  r = 0 , the r-Stirling numbers coincide with the corresponding regular Stirling numbers.
Theorems 1 and 2 of [3] prove that the r-Stirling numbers satisfy the same recurrence relations as the regular Stirling numbers (see (14)); that is,
$$\left[{n\atop k}\right]_r=(n-1)\left[{n-1\atop k}\right]_r+\left[{n-1\atop k-1}\right]_r,\qquad \left\{{n\atop k}\right\}_r=k\left\{{n-1\atop k}\right\}_r+\left\{{n-1\atop k-1}\right\}_r,\tag{16}$$
but with different initial conditions,
$$\left[{n\atop k}\right]_r=0,\ \ n<r,\qquad \left[{n\atop k}\right]_r=\delta_{k,r},\ \ n=r,$$
and
$$\left\{{n\atop k}\right\}_r=0,\ \ n<r,\qquad \left\{{n\atop k}\right\}_r=\delta_{k,r},\ \ n=r.$$
Moreover, for some particular values, the corresponding r-Stirling numbers can be easily obtained:
$$\left[{n\atop n}\right]_r=\left\{{n\atop n}\right\}_r=1,\qquad n\ge r,\tag{17}$$
$$\left[{n\atop k}\right]_r=\left\{{n\atop k}\right\}_r=0,\qquad k>n,\tag{18}$$
$$\left[{n\atop r}\right]_r=r^{\overline{n-r}}:=(n-1)(n-2)\cdots r,\qquad n\ge r,\tag{19}$$
$$\left\{{n\atop r}\right\}_r=r^{\,n-r}.\tag{20}$$
Finally, let us recall a “cross” recurrence, relating r-Stirling numbers of the second kind with different values of r, that will be used in this paper (see Theorem 4 of [3]). The r-Stirling numbers of the second kind satisfy
$$\left\{{n\atop k}\right\}_r=\left\{{n\atop k}\right\}_{r-1}-(r-1)\left\{{n-1\atop k}\right\}_{r-1},\qquad 1\le r\le n.\tag{21}$$
Now, we are going to define triangular matrices whose entries are given in terms of r-Stirling numbers. We shall analyze their Neville elimination and derive their bidiagonal factorization (5).
Definition 1.
For $r,n\in\mathbb{N}$ with $0\le r\le n$, the $(n+1)\times(n+1)$ r-Stirling matrix of the first kind is the lower triangular matrix $L_n^r=(l_{i,j}^{\,r})_{1\le i,j\le n+1}$ with
$$l_{i,j}^{\,r}:=\left[{i+r-1\atop j+r-1}\right]_r,\qquad 1\le i,j\le n+1.\tag{22}$$
On the other hand, the $(n+1)\times(n+1)$ r-Stirling matrix of the second kind is the lower triangular matrix $\overline L_n^r=(\bar l_{i,j}^{\,r})_{1\le i,j\le n+1}$ with
$$\bar l_{i,j}^{\,r}:=\left\{{i+r-1\atop j+r-1}\right\}_r,\qquad 1\le i,j\le n+1.\tag{23}$$
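As a concrete illustration of Definition 1 (a sketch of our own, not one of the algorithms proposed in this paper), the entries of the r-Stirling matrix of the second kind can be tabulated directly from the recurrence (16), since $\bar l_{i,j}^{\,r}=\left\{{i+r-1\atop j+r-1}\right\}_r$ satisfies $\bar l_{1,j}^{\,r}=\delta_{j,1}$ and $\bar l_{i,j}^{\,r}=(j+r-1)\,\bar l_{i-1,j}^{\,r}+\bar l_{i-1,j-1}^{\,r}$ for $i\ge 2$:

% Explicit r-Stirling matrix of the second kind: S(i,j) = {i+r-1, j+r-1}_r,
% filled row by row with the recurrence (16); only a cross-check device.
function S = rstirling2_matrix_sketch(n, r)
  S = zeros(n+1);
  S(1, 1) = 1;                         % {r r}_r = 1 and {r l}_r = 0 for l > r
  for i = 2:n+1
    for j = 1:i
      S(i, j) = (j + r - 1) * S(i-1, j);
      if j > 1
        S(i, j) = S(i, j) + S(i-1, j-1);
      end
    end
  end
end

For instance, rstirling2_matrix_sketch(2, 1) returns the matrix with rows (1, 0, 0), (1, 1, 0), (1, 3, 1), whose entries are the ordinary Stirling numbers of the second kind.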
The following results provide the expression of the pivots and multipliers of the Neville elimination of r-Stirling matrices and their inverses. Then, their bidiagonal factorization (5) will be deduced and their total positivity property analyzed.
Theorem 2. 
For $r,n\in\mathbb{N}$ with $r\le n$, the r-Stirling matrix of the first kind $L_n^r\in\mathbb{R}^{(n+1)\times(n+1)}$ is totally positive and admits the factorization
$$L_n^r=F_n\cdots F_1,\tag{24}$$
where $F_i$, $i=1,\ldots,n$, are the lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries are given by
$$m_{i,j}=i+r-j-1,\qquad 1\le j<i\le n+1.$$
Moreover,
$$(L_n^r)^{-1}=\widehat F_n\cdots\widehat F_1,\tag{26}$$
where $\widehat F_i$, $i=1,\ldots,n$, are the lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries are given by
$$m_{i,j}=-(r+j-1),\qquad 1\le j<i\le n+1.$$
Proof. 
Let us define $L^{(1)}:=L_n^r$ and let $L^{(k)}=(l_{i,j}^{(k)})_{1\le i,j\le n+1}$, $k=2,\ldots,n+1$, be the matrix obtained after $k-1$ steps of the Neville elimination of $L_n^r$. We shall deduce by induction on $k\in\{1,\ldots,n+1\}$ that
$$l_{i,j}^{(k)}=\left[{i+r-k\atop j+r-k}\right]_r.\tag{28}$$
For $k=1$, identities (28) follow from (22). Now, suppose that (28) holds for some $k\in\{1,\ldots,n\}$. Then, taking into account (19), we can write
$$l_{i,k}^{(k)}/l_{i-1,k}^{(k)}=\left[{i+r-k\atop r}\right]_r\Big/\left[{i+r-k-1\atop r}\right]_r=r^{\overline{i-k}}/r^{\overline{i-k-1}}=i+r-k-1.\tag{29}$$
By (1), we have $l_{i,j}^{(k+1)}=l_{i,j}^{(k)}-\big(l_{i,k}^{(k)}/l_{i-1,k}^{(k)}\big)\,l_{i-1,j}^{(k)}$ and, using (16), (28), and (29), we derive
$$l_{i,j}^{(k+1)}=\left[{i+r-k\atop j+r-k}\right]_r-(i+r-k-1)\left[{i+r-k-1\atop j+r-k}\right]_r=\left[{i+r-k-1\atop j+r-k-1}\right]_r,$$
which is identity (28) for $k+1$.
By (2), (19), and (28), we deduce that the pivots of the Neville elimination of $L_n^r$ satisfy
$$p_{i,j}=l_{i,j}^{(j)}=\left[{i+r-j\atop r}\right]_r,$$
and so the diagonal pivots are $p_{i,i}=\left[{r\atop r}\right]_r=1$ for $i=1,\ldots,n+1$ (see (17)). Moreover, the following expression for the multipliers can be deduced:
$$m_{i,j}=p_{i,j}/p_{i-1,j}=\left[{i+r-j\atop r}\right]_r\Big/\left[{i+r-j-1\atop r}\right]_r=i+r-j-1,\qquad 1\le j<i\le n+1.$$
Clearly, $p_{i,i}>0$ for $i=1,\ldots,n+1$, and $m_{i,j}>0$ for $1\le j<i\le n+1$. Then, using Theorem 1, we deduce that $L_n^r$ is totally positive.
Finally, taking into account the bidiagonal factorization (7) for the inverse of a nonsingular totally positive matrix, the factorization (26) for $(L_n^r)^{-1}$ can be easily deduced.    □
The provided bidiagonal decomposition (5) of the r-Stirling matrix of the first kind $L_n^r$ can be stored in compact form through $BD(L_n^r)=\big(BD(L_n^r)_{i,j}\big)_{1\le i,j\le n+1}$ with
$$BD(L_n^r)_{i,j}=\begin{cases} i+r-j-1, & 1\le j<i\le n+1,\\ 1, & 1\le i=j\le n+1,\\ 0, & \text{elsewhere}.\end{cases}\tag{30}$$
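For example (a small illustration of (30) that we add for concreteness), for $n=2$ and $r=1$,
$$L_2^1=\begin{pmatrix}1&0&0\\1&1&0\\2&3&1\end{pmatrix},\qquad BD(L_2^1)=\begin{pmatrix}1&0&0\\1&1&0\\2&1&1\end{pmatrix},$$
since the multipliers of the Neville elimination are $m_{2,1}=1$, $m_{3,1}=2$, $m_{3,2}=1$ and all diagonal pivots equal $1$.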
Now, the following result deduces the pivots and multipliers of the Neville elimination of r-Stirling matrices of the second kind and their inverses.
Theorem 3. 
For $r,n\in\mathbb{N}$ with $r\le n$, the $(n+1)\times(n+1)$ r-Stirling matrix of the second kind $\overline L_n^r$ admits the bidiagonal factorization
$$\overline L_n^r=F_n\cdots F_1,\tag{31}$$
where $F_i$, $i=1,\ldots,n$, are lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries $m_{i,j}$, $1\le j<i\le n+1$, are given by
$$m_{i,j}=r+j-1,\qquad 1\le j<i\le n+1.$$
Moreover,
$$(\overline L_n^r)^{-1}=\widehat F_n\cdots\widehat F_1,\tag{33}$$
where $\widehat F_i$, $i=1,\ldots,n$, are lower triangular bidiagonal matrices of the form (6), whose off-diagonal entries $m_{i,j}$, $1\le j<i\le n+1$, are given by
$$m_{i,j}=-(r+i-j-1),\qquad 1\le j<i\le n+1.$$
Proof. 
Let us define $L^{(1)}:=\overline L_n^r$ and let $L^{(k)}=(l_{i,j}^{(k)})_{1\le i,j\le n+1}$, $k=2,\ldots,n+1$, be the matrix obtained after $k-1$ steps of the Neville elimination of $\overline L_n^r$. By induction on $k\in\{1,\ldots,n+1\}$, we shall deduce that
$$l_{i,j}^{(k)}=\left\{{i+r-1\atop j+r-1}\right\}_{r+k-1}.\tag{35}$$
Taking into account (23), identities (35) clearly hold for $k=1$. If (35) holds for some $k\in\{1,\ldots,n\}$, using (20), we have
$$l_{i,k}^{(k)}/l_{i-1,k}^{(k)}=\left\{{i+r-1\atop k+r-1}\right\}_{r+k-1}\Big/\left\{{i+r-2\atop k+r-1}\right\}_{r+k-1}=\frac{(k+r-1)^{\,i-k}}{(k+r-1)^{\,i-k-1}}=k+r-1.\tag{36}$$
Since $l_{i,j}^{(k+1)}=l_{i,j}^{(k)}-\big(l_{i,k}^{(k)}/l_{i-1,k}^{(k)}\big)\,l_{i-1,j}^{(k)}$ (see (1)), and taking into account (21), (35), and (36), we derive
$$l_{i,j}^{(k+1)}=\left\{{i+r-1\atop j+r-1}\right\}_{r+k-1}-(r+k-1)\left\{{i+r-2\atop j+r-1}\right\}_{r+k-1}=\left\{{i+r-1\atop j+r-1}\right\}_{r+k},$$
which is identity (35) for $k+1$.
Now, by considering (2), (20), and (35), we can easily deduce that
$$p_{i,j}=l_{i,j}^{(j)}=\left\{{i+r-1\atop j+r-1}\right\}_{r+j-1}=(r+j-1)^{\,i-j},$$
and $p_{i,i}=1$ for $i=1,\ldots,n+1$. For the multipliers, we have
$$m_{i,j}=p_{i,j}/p_{i-1,j}=(r+j-1)^{\,i-j}/(r+j-1)^{\,i-j-1}=r+j-1,\qquad 1\le j<i\le n+1.$$
Clearly, $p_{i,i}>0$ for $i=1,\ldots,n+1$, and $m_{i,j}>0$ for $1\le j<i\le n+1$; using Theorem 1, we conclude that $\overline L_n^r$ is totally positive.
Finally, taking into account the bidiagonal factorization (7) for the inverse of a nonsingular totally positive matrix, the factorization (33) for $(\overline L_n^r)^{-1}$ can be deduced.    □
The provided bidiagonal factorization (5) of the matrix $\overline L_n^r$ can be stored in compact form through $BD(\overline L_n^r)=\big(BD(\overline L_n^r)_{i,j}\big)_{1\le i,j\le n+1}$ with
$$BD(\overline L_n^r)_{i,j}=\begin{cases} r+j-1, & 1\le j<i\le n+1,\\ 1, & 1\le i=j\le n+1,\\ 0, & \text{elsewhere}.\end{cases}\tag{37}$$
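Analogously (again a small check of our own), for $n=2$ and $r=1$,
$$\overline L_2^1=\begin{pmatrix}1&0&0\\1&1&0\\1&3&1\end{pmatrix},\qquad BD(\overline L_2^1)=\begin{pmatrix}1&0&0\\1&1&0\\1&2&1\end{pmatrix},$$
with multipliers $m_{2,1}=1$, $m_{3,1}=1$, $m_{3,2}=2$ and unit diagonal pivots.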
Remark 1.
Given a lower triangular $(n+1)\times(n+1)$ matrix and any $k\le n+1$, the determinants of the submatrices using rows $i_1,\ldots,i_k$ and columns $j_1,\ldots,j_k$ with $i_h\ge j_h$ for all $h\le k$ are called nontrivial minors, because all other minors are zero. A lower triangular matrix is called ΔSTP if all its nontrivial minors are positive. Since Theorem 2 (respectively, Theorem 3) proves that the lower triangular matrix with unit diagonal $L_n^r$ (respectively, $\overline L_n^r$) has all the multipliers of its Neville elimination positive, by Theorem 4.3 of [16] we conclude that $L_n^r$ (respectively, $\overline L_n^r$) is in fact ΔSTP.
Now, taking into account Theorems 2 and 3, we provide the pseudocode of Algorithms 1 and 2, respectively. Specifically, Algorithm 1 computes the matrix form $BD(L_n^r)$ in (30) for the bidiagonal decomposition of the r-Stirling matrix of the first kind $L_n^r$ in (24). Moreover, Algorithm 2 computes the matrix form $BD(\overline L_n^r)$ in (37) for the bidiagonal decomposition of the r-Stirling matrix of the second kind $\overline L_n^r$ in (31). The computational cost of both algorithms is $O(n^2)$. Let us observe that none of the algorithms requires inaccurate subtractions, and so the provided matrices are computed to high relative accuracy; in fact, all of their entries are natural numbers.
Algorithm 1: Computation to high relative accuracy of $BD(L_n^r)$ (see (30))
Require: n, r
Ensure: BDLnr, bidiagonal decomposition of $L_n^r$ to high relative accuracy
   BDLnr = eye(n+1)
   for i = 2:n+1
     for j = 1:i-1
       BDLnr(i,j) = i+r-j-1
     end
   end
Algorithm 2: Computation to high relative accuracy of $BD(\overline L_n^r)$ (see (37))
Require: n, r
Ensure: BDLbarnr, bidiagonal decomposition of $\overline L_n^r$ to high relative accuracy
   BDLbarnr = eye(n+1)
   for i = 2:n+1
     for j = 1:i-1
       BDLbarnr(i,j) = r+j-1
     end
   end
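The output of Algorithms 1 and 2 can be cross-checked by expanding the encoded bidiagonal factors and comparing with the explicit matrices. The following MATLAB/Octave sketch (an illustration under our own assumptions, not part of the paper's algorithms) multiplies out the factorization (5) stored in a compact form B = BD(A):

% Expand a compact bidiagonal representation B = BD(A) back into A,
% i.e., form A = F_n ... F_1 * D * G_1 ... G_n as in (5)-(6).
function A = bd_expand_sketch(B)
  N = size(B, 1);
  A = diag(diag(B));                   % diagonal factor D
  for i = 1:N-1                        % left factors: F_1, then F_2, ...
    Fi = eye(N);
    for k = i+1:N
      Fi(k, k-1) = B(k, k-i);          % subdiagonal entries of F_i
    end
    A = Fi * A;
  end
  for i = 1:N-1                        % right factors: G_1, then G_2, ...
    Gi = eye(N);
    for k = i+1:N
      Gi(k-1, k) = B(k-i, k);          % superdiagonal entries of G_i
    end
    A = A * Gi;
  end
end

For instance, expanding the output of Algorithm 2 for n = 2 and r = 1 with bd_expand_sketch recovers the matrix $\overline L_2^1$ shown above and, more generally, agrees entrywise with rstirling2_matrix_sketch(n, r) from Section 3.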

4. Accurate Computations with r-Bell Bases

Let $P_n(I)$ be the $(n+1)$-dimensional linear space formed by all polynomials in the variable $x$, defined on a real interval $I$, whose degree is not greater than $n$; that is,
$$P_n(I):=\operatorname{span}\{1,x,\ldots,x^n\},\qquad x\in I.$$
For $n\in\mathbb{N}$, the $n$-th degree r-Bell polynomial of the first kind is defined as
$$B_n^r(x):=\sum_{k=0}^{n}\left[{n+r\atop k+r}\right]_r x^k.$$
Then we can define a system $(B_0^r,\ldots,B_n^r)$ of r-Bell polynomials of the first kind that satisfies
$$(B_0^r,\ldots,B_n^r)^T=L_n^r\,(m_0,\ldots,m_n)^T,\tag{39}$$
where $m_i(x):=x^i$, $i=0,\ldots,n$, and $L_n^r$ is the $(n+1)\times(n+1)$ r-Stirling matrix of the first kind defined by (22). By (24), $\det L_n^r=1$ and we can guarantee that $(B_0^r,\ldots,B_n^r)$ is a basis of $P_n(\mathbb{R})$.
Definition 2.
We say that $(B_0^r,\ldots,B_n^r)$ is the r-Bell basis of the first kind of the polynomial space $P_n(\mathbb{R})$.
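As a quick illustration (a small computation of our own, not taken from the references), for $n=2$ and $r=1$ the 1-Stirling numbers of the first kind coincide with the ordinary ones, so
$$B_2^1(x)=\left[{3\atop 1}\right]_1+\left[{3\atop 2}\right]_1 x+\left[{3\atop 3}\right]_1 x^2=2+3x+x^2=(x+1)(x+2).$$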
On the other hand, the r-Bell polynomials of the second kind $\{\overline B_n^r\}_{n\ge 0}$, simply called r-Bell polynomials in [3,27], are defined by their generating function
$$\sum_{n\ge 0}\overline B_n^r(x)\,\frac{t^n}{n!}=\exp\!\big(x(e^t-1)+rt\big).$$
They can also be written in terms of the monomial basis as follows:
$$\overline B_n^r(x)=\sum_{k=0}^{n}\left\{{n+r\atop k+r}\right\}_r x^k,\qquad n\in\mathbb{N},$$
and then
$$(\overline B_0^r,\ldots,\overline B_n^r)^T=\overline L_n^r\,(m_0,\ldots,m_n)^T,\tag{41}$$
where $\overline L_n^r$ is the $(n+1)\times(n+1)$ r-Stirling matrix of the second kind defined by (23). By (31), $\det\overline L_n^r=1$ and we can guarantee that $(\overline B_0^r,\ldots,\overline B_n^r)$ is a basis of $P_n(\mathbb{R})$.
Definition 3.
We say that $(\overline B_0^r,\ldots,\overline B_n^r)$ is the r-Bell basis of the second kind of $P_n(\mathbb{R})$.
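Similarly (again a small check of our own), $\overline B_2^1(x)=\left\{{3\atop 1}\right\}_1+\left\{{3\atop 2}\right\}_1 x+\left\{{3\atop 3}\right\}_1 x^2=1+3x+x^2$; evaluating at $x=1$ gives $\overline B_2^1(1)=5=B_{2,1}$, the corresponding r-Bell number.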
Let us recall that the monomial basis $(m_0,\ldots,m_n)$, with $m_i(x)=x^i$ for $i=0,\ldots,n$, is a strictly totally positive basis of $P_n(0,+\infty)$, and so the collocation matrix
$$V:=\big(x_i^{\,j-1}\big)_{1\le i,j\le n+1}\tag{42}$$
is strictly totally positive for any increasing sequence of positive values $0<x_1<\cdots<x_{n+1}$ (see Section 3 of [19]). In fact, $V$ is the Vandermonde matrix at the considered nodes. Let us recall that Vandermonde matrices have relevant applications in linear interpolation and numerical quadrature (see, for example, [28,29]). As for $BD(V)$, we have
$$BD(V)_{i,j}:=\begin{cases}\displaystyle\prod_{k=1}^{j-1}\frac{x_i-x_{i-k}}{x_{i-1}-x_{i-k-1}}, & \text{if } i>j,\\[8pt]\displaystyle\prod_{k=1}^{i-1}(x_i-x_k), & \text{if } i=j,\\[6pt] x_i, & \text{if } i<j,\end{cases}$$
and it can be easily checked that the computation of $BD(V)$ does not require inaccurate cancellations and can be performed to high relative accuracy.
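The entries of $BD(V)$ above can be evaluated directly; the sketch below (an illustration of the formula, not the TNVandBD routine of [30]) does so for given nodes:

% Compact bidiagonal representation of the Vandermonde matrix at the
% nodes x(1) < ... < x(N), entry by entry from the formula above.
function B = bd_vandermonde_sketch(x)
  N = numel(x);
  B = zeros(N);
  for i = 1:N
    for j = 1:N
      if i > j
        B(i, j) = prod((x(i) - x(i-1:-1:i-j+1)) ./ (x(i-1) - x(i-2:-1:i-j)));
      elseif i == j
        B(i, j) = prod(x(i) - x(1:i-1));
      else
        B(i, j) = x(i);
      end
    end
  end
end

For example, bd_vandermonde_sketch([1 2 3]) returns the matrix with rows (1, 1, 1), (1, 1, 2), (1, 1, 2), which coincides with the representation obtained by Neville elimination of the 3 x 3 Vandermonde matrix at the nodes 1, 2, 3.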
Taking into account the total positivity of the Vandermonde matrices and the total positivity of r-Stirling matrices of the first and second kinds, we shall derive the total positivity of r-Bell bases, as well as factorizations providing computations to high relative accuracy when considering their collocation matrices.
Theorem 4. 
The r-Bell bases of the first and second kind are totally positive bases of $P_n(0,+\infty)$. Moreover, given $0<x_1<\cdots<x_{n+1}$, the collocation matrices
$$B:=\big(B_{j-1}^r(x_i)\big)_{1\le i,j\le n+1},\qquad \overline B:=\big(\overline B_{j-1}^r(x_i)\big)_{1\le i,j\le n+1},\tag{44}$$
and their bidiagonal factorization (5) can be computed to high relative accuracy.
Proof. 
Given $0<x_1<\cdots<x_{n+1}$, by formulae (39) and (41), the collocation matrices in (44) satisfy
$$B=V\,(L_n^r)^T,\qquad \overline B=V\,(\overline L_n^r)^T,$$
where $V$ is the Vandermonde matrix (42), and $L_n^r$ and $\overline L_n^r$ are the r-Stirling matrices of the first and second kind, respectively.
It is well known that $V$ is strictly totally positive at $0<x_1<\cdots<x_{n+1}$ and that its decomposition (5) can be computed to high relative accuracy (see [19]). By Theorem 2 (respectively, Theorem 3), $L_n^r$ (respectively, $\overline L_n^r$) is a nonsingular totally positive matrix and its decomposition (5) can be computed to high relative accuracy. Taking these facts into account, $(L_n^r)^T$ (respectively, $(\overline L_n^r)^T$) is a nonsingular totally positive matrix and its bidiagonal decomposition can also be computed to high relative accuracy (see (10)). Therefore, we deduce that $B$ (respectively, $\overline B$) is a totally positive matrix, since it is the product of totally positive matrices (see Theorem 3.1 of [13]). Moreover, using Algorithm 5.1 of [26], if the decomposition (5) of two nonsingular totally positive matrices is provided to high relative accuracy, then the decomposition of their product can be obtained to high relative accuracy. Consequently, $B$ (respectively, $\overline B$) and its decomposition (5) can be obtained to high relative accuracy.    □
Corollary 1 of [20] provides the factorization (5) of $W:=W(m_0,\ldots,m_n)(x)$, the Wronskian matrix of the monomial basis $(m_0,\ldots,m_n)$ at $x\in\mathbb{R}$. For the matrix representation $BD(W)$, we have
$$BD(W)_{i,j}:=\begin{cases} x, & \text{if } i<j,\\ (i-1)!, & \text{if } i=j,\\ 0, & \text{if } i>j.\end{cases}\tag{46}$$
Taking into account the sign of the entries of $BD(W)$ and Theorem 1, one can derive that the Wronskian matrix of the monomial basis is totally positive for any $x>0$. Moreover, the computation of (46) satisfies the NIC condition and $W$ can be computed to high relative accuracy. Using (46), computations to high relative accuracy when solving algebraic problems related to $W$ have been achieved in [20] for $x>0$.
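For instance, for $n=2$ (a small check of the formulas above),
$$W(m_0,m_1,m_2)(x)=\begin{pmatrix}1&x&x^2\\0&1&2x\\0&0&2\end{pmatrix},\qquad BD(W)=\begin{pmatrix}1&x&x\\0&1&x\\0&0&2\end{pmatrix},$$
whose diagonal collects the pivots $0!$, $1!$, $2!$.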
Using formula (39), it can be checked that
$$W(B_0^r,\ldots,B_n^r)(x)=W(m_0,\ldots,m_n)(x)\,(L_n^r)^T,\qquad W(\overline B_0^r,\ldots,\overline B_n^r)(x)=W(m_0,\ldots,m_n)(x)\,(\overline L_n^r)^T.$$
So, following the reasoning in the proof of Theorem 4, the next result holds and we can also guarantee computations to high relative accuracy when solving algebraic problems related to Wronskian matrices of r-Bell polynomial bases at positive values.
Theorem 5. 
Let $(B_0^r,\ldots,B_n^r)$ and $(\overline B_0^r,\ldots,\overline B_n^r)$ be the r-Bell bases of the first and second kind, respectively. For any $x>0$, the Wronskian matrices $W_n^r:=W(B_0^r,\ldots,B_n^r)(x)$ and $\overline W_n^r:=W(\overline B_0^r,\ldots,\overline B_n^r)(x)$ are nonsingular totally positive and their bidiagonal decompositions (5) can be computed to high relative accuracy.
It is well known that the polynomial space $P_n([0,1])$ is a Hilbert space under the inner product
$$\langle p,q\rangle:=\int_0^1 p(x)\,q(x)\,dx,\tag{47}$$
and the Gramian matrix of the monomial basis $(m_0,\ldots,m_n)$ with respect to (47) is
$$H_n:=\left(\int_0^1 x^{\,i+j-2}\,dx\right)_{1\le i,j\le n+1}=\left(\frac{1}{i+j-1}\right)_{1\le i,j\le n+1}.$$
The matrix $H_n$ is the $(n+1)\times(n+1)$ Hilbert matrix. In Numerical Linear Algebra, Hilbert matrices are well-known Hankel matrices. Their inverses and determinants have explicit formulas; however, they are very ill-conditioned even for moderate values of their dimension. Hence, they can be used to test numerical algorithms and to see how they perform on ill-conditioned or nearly singular matrices. It is well known that Hilbert matrices are strictly totally positive. In [19], the pivots and the multipliers of the Neville elimination of $H_n$ are explicitly derived. It can be checked that $BD(H_n)=(BD(H_n)_{i,j})_{1\le i,j\le n+1}$ is given by
$$BD(H_n)_{i,j}:=\begin{cases}\dfrac{(i-1)^2}{(i+j-1)(i+j-2)}, & \text{if } i>j,\\[8pt]\dfrac{\big((i-1)!\big)^4}{(2i-1)!\,(2i-2)!}, & \text{if } i=j,\\[8pt]\dfrac{(j-1)^2}{(i+j-1)(i+j-2)}, & \text{if } i<j.\end{cases}$$
Clearly, the computation of the factorization (5) of $H_n$ does not require inaccurate cancellations, and so it can be computed to high relative accuracy.
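For example, for $n=1$ (a quick check of the expression above),
$$H_1=\begin{pmatrix}1&\tfrac12\\[2pt]\tfrac12&\tfrac13\end{pmatrix},\qquad BD(H_1)=\begin{pmatrix}1&\tfrac12\\[2pt]\tfrac12&\tfrac1{12}\end{pmatrix},$$
since the Neville elimination of $H_1$ has multiplier $1/2$ and diagonal pivots $1$ and $1/3-1/4=1/12$.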
Using formulae (39) and (41), it can be checked that the Gramian matrices $G_n^r$ and $\overline G_n^r$ of the r-Bell bases of the first and second kind, respectively, with respect to the inner product (47), can be written as follows:
$$G_n^r=L_n^r\,H_n\,(L_n^r)^T,\qquad \overline G_n^r=\overline L_n^r\,H_n\,(\overline L_n^r)^T,$$
where $L_n^r$ (respectively, $\overline L_n^r$) is the $(n+1)\times(n+1)$ r-Stirling matrix of the first (respectively, second) kind. According to the reasoning in the proof of Theorem 4, the following result can be deduced.
Theorem 6. 
The Gramian matrices $G_n^r$ and $\overline G_n^r$ of the r-Bell bases of the first and second kind, respectively, with respect to the inner product (47), are nonsingular and totally positive. Furthermore, $G_n^r$, $\overline G_n^r$, and their bidiagonal decompositions (5) can be computed to high relative accuracy.
Now, taking into account Algorithms 1 and 2, as well as Theorems 4–6, we provide the pseudocode of Algorithms 3–5 for computing, to high relative accuracy, the matrix form (11) of the bidiagonal decomposition of the collocation matrices $B_n^r$, $\overline B_n^r$, the Wronskian matrices $W_n^r$, $\overline W_n^r$, and the Gramian matrices $G_n^r$, $\overline G_n^r$ of the r-Bell bases of the first and second kind. Algorithm 3 requires $BD(L_n^r)$, $BD(\overline L_n^r)$, and the bidiagonal decomposition of the Vandermonde matrix implemented in the MATLAB function TNVandBD available in [30]. In addition, Algorithm 4 requires $BD(L_n^r)$, $BD(\overline L_n^r)$, and the bidiagonal decomposition $BD(W)$ (see (46)) of the Wronskian matrix $W$ of the monomial basis at $x$. Moreover, Algorithm 5 requires $BD(L_n^r)$, $BD(\overline L_n^r)$, and the bidiagonal decomposition $BD(H_n)$ of the Hilbert matrix implemented in the MATLAB function TNCauchyBD available in [30]. Finally, let us observe that these three algorithms call the MATLAB function TNProduct available in [30]. Let us recall that, given $A=BD(F)$ and $B=BD(G)$ to high relative accuracy, TNProduct(A, B) computes $BD(F\cdot G)$ to high relative accuracy. The computational cost of the mentioned function and algorithms is $O(n^3)$ arithmetic operations.
Algorithm 3: Computation to high relative accuracy of the bidiagonal decompositions of the collocation matrices $B_n^r$, $\overline B_n^r$ of the r-Bell bases of the first and second kind
Require: n, r, x = (x_1, ..., x_{n+1}) such that 0 < x_1 < ... < x_{n+1}
Ensure: BDBnr, bidiagonal decomposition of $B_n^r$ to high relative accuracy
          BDBbarnr, bidiagonal decomposition of $\overline B_n^r$ to high relative accuracy
   BDLnr = BDLnr(n, r)          (see Algorithm 1)
   BDLbarnr = BDLbarnr(n, r)    (see Algorithm 2)
   BDV = TNVandBD(x)
   BDBnr = TNProduct(BDV, (BDLnr)^T)
   BDBbarnr = TNProduct(BDV, (BDLbarnr)^T)
Algorithm 4: Computation to high relative accuracy of the bidiagonal decompositions of the Wronskian matrices $W_n^r$, $\overline W_n^r$ of the r-Bell bases of the first and second kind
Require: n, r, x in (0, +inf)
Ensure: BDWnr, bidiagonal decomposition of $W_n^r$ to high relative accuracy
          BDWbarnr, bidiagonal decomposition of $\overline W_n^r$ to high relative accuracy
   BDLnr = BDLnr(n, r)          (see Algorithm 1)
   BDLbarnr = BDLbarnr(n, r)    (see Algorithm 2)
   BDW = BDW(x)                 (see (46))
   BDWnr = TNProduct(BDW, (BDLnr)^T)
   BDWbarnr = TNProduct(BDW, (BDLbarnr)^T)
Algorithm 5: Computation to high relative accuracy of the bidiagonal decompositions of the Gramian matrices $G_n^r$, $\overline G_n^r$ of the r-Bell bases of the first and second kind
Require: n, r
Ensure: BDGnr, bidiagonal decomposition of $G_n^r$ to high relative accuracy
          BDGbarnr, bidiagonal decomposition of $\overline G_n^r$ to high relative accuracy
   BDLnr = BDLnr(n, r)          (see Algorithm 1)
   BDLbarnr = BDLbarnr(n, r)    (see Algorithm 2)
   BDH = TNCauchyBD(1:n+1, 0:n) (bidiagonal decomposition of the (n+1)x(n+1) Hilbert matrix $H_n$)
   B1 = TNProduct(BDLnr, BDH)
   BDGnr = TNProduct(B1, (BDLnr)^T)
   B2 = TNProduct(BDLbarnr, BDH)
   BDGbarnr = TNProduct(B2, (BDLbarnr)^T)

5. Numerical Experiments

Some numerical tests are presented in this section, supporting the obtained theoretical results. We have considered different nonsingular strictly totally positive collocation matrices $B_n^r$, $\overline B_n^r$, Wronskian matrices $W_n^r$, $\overline W_n^r$, and Gramian matrices $G_n^r$, $\overline G_n^r$ of the r-Bell bases of the first and second kind. Specifically, $B_n^r$ with $r=0$ and $t_i=(i-1)/(n+1)$, $i=1,\ldots,n+1$; $\overline B_n^r$ with $r=1$ and $t_i=1+(i-1)/(n+1)$, $i=1,\ldots,n+1$, and with $r=3$ and $t_i=(i-1)/(n+1)$, $i=1,\ldots,n+1$; $W_n^r$ with $r=5$ and $t=60$; $\overline W_n^r$ with $r=4$ and $t=50$; and, finally, $\overline G_n^r$ with $r=n$, for $n=10,11,\ldots,20$.
We have computed, with Mathematica, the 2-norm condition number, which is the ratio of the largest singular value to the smallest, of all considered matrices. This conditioning is depicted in Figure 1. It can be easily observed that the conditioning drastically increases with the size of the matrices. Due to the ill-conditioning of these matrices, standard routines do not obtain accurate solutions because they can suffer from inaccurate cancellations.
Let us recall that, if $BD(A)$ can be obtained to high relative accuracy, then the MATLAB functions TNEigenValues, TNSingularValues, TNInverseExpand, and TNSolve, available in the software library TNTools in [30], take $BD(A)$ as input argument and compute to high relative accuracy the eigenvalues and singular values of $A$, the inverse matrix $A^{-1}$ (using the algorithm presented in [22]), and even the solution of linear systems $Ax=b$ for vectors $b$ with alternating signs.
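A schematic MATLAB driver illustrating this workflow is shown below. The wrapper name BDBnr (standing for Algorithm 3) and the choice of nodes are ours, and the exact calling conventions of the TN* routines should be checked against [30]; the sketch only reflects the usage described in the text:

% Schematic use of the TNTools routines (hypothetical driver, not from the paper).
n = 10; r = 3;
x = (1:n+1) / (n+1);                     % positive, increasing collocation nodes
B = BDBnr(n, r, x);                      % Algorithm 3: BD of the collocation matrix
lambda = TNEigenValues(B);               % eigenvalues to high relative accuracy
sigma  = TNSingularValues(B);            % singular values to high relative accuracy
Binv   = TNInverseExpand(B);             % inverse of the collocation matrix
d = ((-1).^(0:n))' .* randi(10, n+1, 1); % right-hand side with alternating signs
c = TNSolve(B, d);                       % solution of the linear system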
In order to check the accuracy of our algorithms, we have performed several matrix computations using the mentioned routines available in [30], with the matrix form (11) of the bidiagonal factorization (5) as input argument. The obtained approximations have been compared with the respective approximations obtained by the traditional methods provided in Matlab R2022b. In this context, the values provided by Wolfram Mathematica 13.1 with 100-digit arithmetic have been taken as the exact solution of the considered algebraic problem. For the sake of brevity, only a few of these experiments are shown.
The relative error of each approximation has also been computed in Mathematica with 100-digit arithmetic as $e:=|y-\tilde y|/|y|$, where $y$ denotes the exact solution and $\tilde y$ the computed approximation.
Computation of eigenvalues and singular values. For all considered matrices, we have compared the smallest eigenvalue and singular value obtained using the proposed bidiagonal decompositions provided by Algorithms 3–5 together with the functions TNEigenValues and TNSingularValues, with the smallest eigenvalue and singular value computed by the Matlab commands eig and svd, respectively. Note that the eigenvalues of the Wronskian matrices are computed exactly. Furthermore, the singular values of the considered Gramian matrices coincide with their eigenvalues (since Gramian matrices are symmetric).
The relative errors are shown in Figure 2. Note that our approach accurately computes the smallest eigenvalue and singular value regardless of the 2-norm condition number of the considered matrices. In contrast, the Matlab commands eig and svd return results that are not accurate at all.
Computation of inverses. Further to this, for all considered matrices, we have compared the inverse obtained using the proposed bidiagonal decompositions provided by Algorithms 3–5 with the function TNInverseExpand and the inverse computed with the Matlab command inv. As shown in Figure 3, our procedure provides very accurate results. On the contrary, the results obtained with Matlab reflect poor accuracy.
Resolution of linear systems. Finally, for all considered matrices, we have compared the solution of the linear systems $B_n^r c=d$, $\overline B_n^r c=d$, $W_n^r c=d$, $\overline W_n^r c=d$, $G_n^r c=d$, and $\overline G_n^r c=d$, where $d=((-1)^{i+1}d_i)_{1\le i\le n+1}$ and $d_i$, $i=1,\ldots,n+1$, are random nonnegative integer values, obtained using the bidiagonal decompositions provided by Algorithms 3–5 together with the function TNSolve, with the solutions obtained with the Matlab command \. As opposed to the results obtained with this command, the proposed procedure preserves the accuracy for all of the considered dimensions. Figure 4 illustrates the relative errors.

6. Conclusions

We have introduced a new class of matrices called r-Stirling matrices. These matrices are defined in terms of r-Stirling numbers and have interesting properties, such as being totally positive. They also play a significant role in determining the linear transformation between monomial and r-Bell polynomial bases.
The paper further discusses an efficient algorithm for computing the bidiagonal factorization of r-Stirling matrices to high relative accuracy. This factorization is used for solving relevant algebraic problems involving collocation, Wronskian, and Gramian matrices associated with r-Bell polynomial bases. To validate the proposed procedure and its usefulness, the paper presents numerical experiments that confirm its accuracy.

Author Contributions

Conceptualization, E.M., J.M.P. and B.R.; methodology, E.M., J.M.P. and B.R.; investigation, E.M., J.M.P. and B.R.; writing—original draft, E.M., J.M.P. and B.R.; writing—review and editing, E.M., J.M.P. and B.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Spanish research grants PGC2018-096321-B-I00 (MCIU/AEI) and RED2022-134176-T (MCI/AEI) and by Gobierno de Aragón (E41_23R).

Data Availability Statement

The data and codes used in this work are available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nielsen, N. Handbuch der Theorie der Gammafunktion; Chelsea: New York, NY, USA, 1906; reprinted under the title Die Gammafunktion, 1965.
2. Stirling, J. Methodus Differentialis, sive Tractatus de Summatione et Interpolatione Serierum Infinitarum; Typis Gul. Bowyer: London, UK, 1730.
3. Broder, A.Z. The r-Stirling numbers. Discret. Math. 1984, 49, 241–259.
4. Comtet, L. Analyse Combinatoire; Presses Universitaires de France: Paris, France, 1970; Volume 1.
5. Riordan, J. An Introduction to Combinatorial Analysis; Wiley: New York, NY, USA, 1958.
6. Mansour, T.; Schork, M. Commutation Relations, Normal Ordering, and Stirling Numbers; CRC Press: Boca Raton, FL, USA, 2015.
7. Bell, E.T. Exponential polynomials. Ann. Math. 1934, 35, 258–277.
8. Collins, C.B. The role of Bell polynomials in integration. J. Comput. Appl. Math. 2001, 131, 195–222.
9. Kirschenhofer, P. An alternating sum. Electron. J. Combin. 1996, 3, 1–10.
10. Cassisa, C.; Ricci, P.E. Orthogonal invariants and the Bell polynomials. Rend. Mat. 2000, 20, 293–303.
11. Isoni, T.; Natalini, P.; Ricci, P.E. Symbolic computation of Newton sum rules for the zeros of polynomial eigenfunctions of linear differential operators. Numer. Algor. 2001, 28, 215–227.
12. Bernardini, A.; Ricci, P.E. Bell polynomials and differential equations of Freud-type polynomials. Math. Comput. Model. 2002, 36, 1115–1119.
13. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219.
14. Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton Series in Applied Mathematics; Princeton University Press: Princeton, NJ, USA, 2011.
15. Pinkus, A. Totally Positive Matrices; Cambridge Tracts in Mathematics; Cambridge University Press: Cambridge, UK, 2010.
16. Gasca, M.; Peña, J.M. A matricial description of Neville elimination with applications to total positivity. Linear Algebra Appl. 1994, 202, 33–53.
17. Gasca, M.; Peña, J.M. On factorizations of totally positive matrices. In Total Positivity and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 109–130.
18. Demmel, J.; Koev, P. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 42–52.
19. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23.
20. Mainar, E.; Peña, J.M.; Rubio, B. Accurate computations with Wronskian matrices. Calcolo 2021, 58, 1.
21. Mainar, E.; Peña, J.M.; Rubio, B. High relative accuracy through Newton bases. Numer. Algor. 2023.
22. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308.
23. Marco, A.; Martínez, J.J.; Peña, J.M. Accurate bidiagonal decomposition of totally positive Cauchy–Vandermonde matrices and applications. Linear Algebra Appl. 2017, 517, 63–84.
24. Rubio, B. Algorithms for curve design and accurate computations with totally positive matrices. Ph.D. Thesis, Prensas de la Universidad de Zaragoza, Zaragoza, Spain, 2021. Available online: https://zaguan.unizar.es/record/125371/files/TESIS-2023-069.pdf (accessed on 12 May 2023).
25. Gasca, M.; Peña, J.M. Total positivity and Neville elimination. Linear Algebra Appl. 1992, 165, 25–44.
26. Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751.
27. Mezö, I. The r-Bell numbers. J. Integer Seq. 2011, 14, A11.
28. Finck, T.; Heinig, G.; Rost, K. An inversion formula and fast algorithms for Cauchy–Vandermonde matrices. Linear Algebra Appl. 1993, 183, 179–191.
29. Oruc, H.; Phillips, G.M. Explicit factorization of the Vandermonde matrix. Linear Algebra Appl. 2000, 315, 113–123.
30. Koev, P. Software for Performing Virtually All Matrix Computations with Nonsingular Totally Nonnegative Matrices to High Relative Accuracy. Available online: http://www.math.sjsu.edu/~koev/ (accessed on 24 May 2023).
Figure 1. The 2-norm condition number of collocation, Wronskian, and Gramian matrices of r-Bell bases of the first and second kind.
Figure 2. Relative error in the approximations to the smallest eigenvalue of $\overline B_n^r$ with $r=3$ and $t_i=(i-1)/(n+1)$, $i=1,\ldots,n+1$, and to the smallest singular value of $B_n^r$ with $r=0$ and $t_i=(i-1)/(n+1)$, $i=1,\ldots,n+1$.
Figure 3. Relative error of the approximations to the inverse of $W_n^r$ with $r=5$ and $t=60$ and of $\overline G_n^r$ with $r=n$.
Figure 4. Relative error of the approximations to the solution of the linear systems $\overline B_n^r c=d$, with $r=1$ and $t_i=1+i/(n+1)$, $i=0,\ldots,n$, and $\overline W_n^r c=d$, with $r=4$ and $t=50$, where $d=((-1)^{i+1}d_i)_{1\le i\le n+1}$ and $d_i$, $i=1,\ldots,n+1$, are random nonnegative integer values.