Article

High Relative Accuracy for Corner Cutting Algorithms

by Jorge Ballarín 1,†, Jorge Delgado 2,*,† and Juan Manuel Peña 1,†

1 Departamento de Matemática Aplicada, Facultad de Ciencias, Universidad de Zaragoza/IUMA, 50009 Zaragoza, Spain
2 Departamento de Matemática Aplicada, Escuela de Ingeniería y Arquitectura, Universidad de Zaragoza/IUMA, 50018 Zaragoza, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2025, 14(4), 248; https://doi.org/10.3390/axioms14040248
Submission received: 14 February 2025 / Revised: 14 March 2025 / Accepted: 24 March 2025 / Published: 26 March 2025
(This article belongs to the Special Issue Advances in Linear Algebra with Applications, 2nd Edition)

Abstract: Corner cutting algorithms are important in computer-aided geometric design, and they are associated with stochastic non-singular totally positive matrices. Non-singular totally positive matrices admit a bidiagonal decomposition, and, for many important examples, this factorization can be obtained with high relative accuracy. From this factorization, a corner cutting algorithm can in turn be obtained with high relative accuracy. Illustrative examples are included.

1. Introduction

Bidiagonal decompositions of a matrix arise in two apparently separate fields of mathematics: in total positivity theory and in computer-aided geometric design (CAGD). In the first case, the bidiagonal decomposition is a remarkable property of a non-singular totally positive matrix. Moreover, this decomposition provides a parametrization of the matrix that has been the starting point for the construction of algorithms with high relative accuracy (see [1,2,3,4,5,6]). In fact, if one knows the bidiagonal decomposition of a non-singular totally positive matrix, then one can construct such algorithms for the computation of all eigenvalues and singular values of the matrix and also to calculate the inverse (see also [7]) and the solution of some linear systems.
High relative accuracy (HRA) is a very desirable goal in numerical analysis and it can be assured when the subtractions in the algorithm only involve initial data (see [8]). The mentioned parameters of the bidiagonal decomposition come from an elimination procedure known as Neville elimination. But this procedure uses, in fact, subtractions, so that an alternative method is usually necessary to obtain the parameters of the bidiagonal decomposition with HRA.
In CAGD, decompositions of a matrix also play a crucial role. In fact, they are associated with the main family of algorithms in this field, which are called corner cutting algorithms (see [9,10,11,12]). For instance, evaluation algorithms and reduction and elevation degree algorithms are corner cutting algorithms. The matrix associated with these algorithms is totally positive as well as stochastic, and all bidiagonal factors of the decomposition are also stochastic matrices.
In [13] it was shown that, if a corner cutting algorithm is known with high relative accuracy, then the bidiagonal decomposition of the corresponding matrix can also be obtained with high relative accuracy. Here, the more practical converse question is considered. Let us assume that the bidiagonal decomposition of a given stochastic matrix is known. This has been achieved with many important classes of matrices. Then, it is proved that the corresponding corner cutting algorithm can also be obtained with high relative accuracy.
The layout of this paper is as follows. In Section 2, totally positive matrices and bidiagonal decompositions are introduced, relating them to Neville elimination. Section 3 is devoted to recalling some basic facts concerning high relative accuracy. Section 4 deals with corner cutting algorithms and relates them to stochastic non-singular totally positive matrices. Section 5 proves the result mentioned above, which provides the parameters of corner cutting algorithms with high relative accuracy. Section 6 includes some illustrative examples and shows applications to curve evaluation. Finally, Section 7 summarizes the main conclusions of this paper.

2. Total Positivity and Bidiagonal Decompositions

A matrix is called totally positive (TP) when all its minors are non-negative; such matrices are often called totally non-negative matrices (see [14,15]). This class of matrices is relevant in many fields such as approximation theory, statistics, mechanics, economics, combinatorics, biology, computer-aided geometric design, Lie group theory, and graph theory (see [14,16,17,18,19]).
One of the most computationally useful properties of non-singular TP matrices is the following bidiagonal decomposition, although this decomposition can be defined for more general matrices. We say that an $n \times n$ non-singular matrix $A$ has a bidiagonal decomposition $\mathrm{BD}(A)$ when it can be expressed in the form
$$A = L^{(1)} \cdots L^{(n-1)} D\, U^{(n-1)} \cdots U^{(1)}, \qquad (1)$$
where $D = \mathrm{diag}(d_1, \ldots, d_n)$ and, for $k = 1, \ldots, n-1$, $L^{(k)}$ and $U^{(k)}$ are unit diagonal lower and upper bidiagonal matrices, respectively, with off-diagonal entries $l_i^{(k)} := (L^{(k)})_{i+1,i}$ and $u_i^{(k)} := (U^{(k)})_{i,i+1}$ $(i = 1, \ldots, n-1)$ satisfying
  • $d_i \neq 0$ for all $i$;
  • $l_i^{(k)} = u_i^{(k)} = 0$ for $i < n-k$;
  • $l_i^{(k)} = 0 \Rightarrow l_{i+s}^{(k-s)} = 0$ and $u_i^{(k)} = 0 \Rightarrow u_{i+s}^{(k-s)} = 0$, for $s = 1, 2, \ldots, k-1$.
Therefore, the bidiagonal matrices of the bidiagonal decomposition $\mathrm{BD}(A)$ have the following form, for $k = 1, \ldots, n-1$:
$$L^{(k)} = \begin{pmatrix}
1 \\
0 & 1 \\
& \ddots & \ddots \\
& & 0 & 1 \\
& & & l_{n-k}^{(k)} & 1 \\
& & & & \ddots & \ddots \\
& & & & & l_{n-1}^{(k)} & 1
\end{pmatrix}, \qquad
U^{(k)} = \begin{pmatrix}
1 & 0 \\
& 1 & \ddots \\
& & \ddots & 0 \\
& & & 1 & u_{n-k}^{(k)} \\
& & & & \ddots & \ddots \\
& & & & & 1 & u_{n-1}^{(k)} \\
& & & & & & 1
\end{pmatrix}.$$
In general, a decomposition of a matrix into bidiagonal factors is not unique. However, Proposition 2.2 in [20] guarantees that a decomposition of the form $\mathrm{BD}(A)$, satisfying the above conditions, is unique.
Proposition 1.
Let A be a non-singular matrix. If a bidiagonal decomposition BD ( A ) exists, then it is unique.
When the matrix A is TP, its bidiagonal decomposition BD(A) satisfies more specific conditions, which characterize non-singular TP matrices, as shown by the following result, which can be derived from Theorem 4.2 in [21].
Theorem 1.
An $n \times n$ non-singular matrix $A$ is TP if and only if there exists a (unique) $\mathrm{BD}(A)$ such that
1. $d_i > 0$ for all $i$;
2. $l_i^{(k)} \geq 0$, $u_i^{(k)} \geq 0$ for $1 \leq k \leq n-1$ and $n-k \leq i \leq n-1$.
The next simple example illustrates the applications of Theorem 1.
Example 1.
Given the matrix
$$A = \begin{pmatrix} 1/2 & 1/2 \\ 1/3 & 2/3 \end{pmatrix},$$
its $\mathrm{BD}(A)$ is given by
$$A = \begin{pmatrix} 1 & 0 \\ 2/3 & 1 \end{pmatrix} \begin{pmatrix} 1/2 & 0 \\ 0 & 1/3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},$$
so by Theorem 1 it is TP. Examples of higher dimensions can be seen in Section 6.
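As a sanity check, the factorization of Example 1 and the total positivity claim of Theorem 1 can be verified numerically. The following sketch (an illustration of ours, not part of the paper's algorithms) checks the product and brute-forces the sign of all minors:

```python
import numpy as np
from itertools import combinations

A = np.array([[1/2, 1/2], [1/3, 2/3]])
L = np.array([[1.0, 0.0], [2/3, 1.0]])   # unit lower bidiagonal factor
D = np.diag([1/2, 1/3])                  # diagonal pivots
U = np.array([[1.0, 1.0], [0.0, 1.0]])   # unit upper bidiagonal factor

# the bidiagonal decomposition reproduces A
assert np.allclose(L @ D @ U, A)

def is_tp(M, tol=1e-12):
    """Brute-force check that every minor of M is non-negative."""
    n, m = M.shape
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                if np.linalg.det(M[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

assert is_tp(A)
```

Brute-forcing minors is exponential in the matrix size and is only viable for tiny examples; the point of the bidiagonal parametrization is precisely to avoid such checks.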
The representation $\mathrm{BD}(A)$ of a non-singular TP matrix $A$ arises in the process of complete Neville elimination (CNE), where the entries $l_i^{(k)}$, $u_i^{(k)}$ of the previous matrices coincide with the multipliers of the CNE and the entries $d_i$ with the diagonal pivots (see [21,22]).
The following section shows that BD ( A ) leads to many accurate computations with non-singular matrices TP.

3. High Relative Accuracy

Let us recall that an algorithm can be performed with high relative accuracy if it does not include subtractions (except subtractions of the initial data); that is, if it only includes products, divisions, sums of numbers of the same sign, and subtractions of the initial data (cf. [8,23]). An algorithm without any subtraction is called a subtraction-free (SF) algorithm, and it can be performed with HRA.
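To make the criterion concrete, here is a toy illustration of ours (not taken from the paper): computing $a^2 - b^2$ for nearly equal $a$ and $b$. The factored form $(a-b)(a+b)$ subtracts only initial data and keeps high relative accuracy, while the naive form subtracts two nearly equal computed products and loses most of the significant digits:

```python
from fractions import Fraction

a, b = 1.0 + 1e-8, 1.0
naive = a * a - b * b          # subtracts two nearly equal computed values
sf_like = (a - b) * (a + b)    # subtracts only the initial data a and b

# exact rational reference value (floats convert exactly to Fractions)
true = Fraction(a) ** 2 - Fraction(b) ** 2

def rel_err(x):
    return abs(Fraction(x) - true) / true
```

Here `rel_err(sf_like)` is of the order of the unit roundoff, while `rel_err(naive)` is many orders of magnitude larger, even though both expressions are algebraically identical.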
For a non-singular TP matrix $A$, the non-trivial entries of the matrices in $\mathrm{BD}(A)$ (see (1)) have been considered in [4,5,6] as natural parameters associated with $A$ in order to perform many linear algebra computations with $A$ to HRA. In fact, if we know $\mathrm{BD}(A)$ with HRA, then we can compute with HRA the singular values of $A$, its eigenvalues, its inverse (using also [7]), and the solution of linear systems $Ax = b$ where $b$ has alternating signs.
Moreover, for many subclasses of non-singular TP matrices it has been possible to obtain the bidiagonal decomposition $\mathrm{BD}(A)$ of their matrices $A$ with HRA, so that the aforementioned linear algebra computations can also be performed with HRA. Among these subclasses, we can mention (cf. [24,25,26]) the collocation matrices of the Bernstein basis of polynomials (also called Bernstein–Vandermonde matrices). Other subclasses of non-singular TP matrices for which this has also been possible are the collocation matrices of the Said–Ball basis of polynomials [27], the collocation matrices of rational bases using the Bernstein or the Said–Ball basis [28], the collocation matrices of the q-Bernstein basis [29], and the collocation matrices of the h-Bernstein basis [30]. All these bases are very useful in the field of computer-aided geometric design (see [9]). In the next section, we recall a crucial family of algorithms in this field.

4. Stochastic TP Matrices and Corner Cutting Algorithms

A matrix is called stochastic if it is non-negative and the entries of each row sum up to 1. We will pay special attention to non-singular TP matrices that are also stochastic, because they are very important in CAGD. In fact, their bidiagonal factorization can lead to corner cutting algorithms, which form the most relevant family of algorithms in this subject. These algorithms have an important geometric interpretation: they start from a polygon and refine it by iteratively cutting its corners. Depending on how the corners are cut, these algorithms can finish at a point, like the de Casteljau evaluation algorithm, or at another polygon. In this latter case, one can obtain the degree elevation algorithms or some subdivision-type algorithms like the Chaikin algorithm (cf. [31]). In addition, corner cutting algorithms have very good stability properties. So let us now recall the definition of these algorithms.
An elementary corner cutting is a transformation that maps any polygon $P_0 \cdots P_n$ into another polygon $B_0 \cdots B_n$ defined in one of the following two ways:
$$B_j = P_j, \ j \neq i, \qquad B_i = (1-\lambda) P_i + \lambda P_{i+1}, \qquad (2)$$
for some $i \in \{0, \ldots, n-1\}$, $0 \leq \lambda < 1$, or
$$B_j = P_j, \ j \neq i, \qquad B_i = (1-\lambda) P_i + \lambda P_{i-1}, \qquad (3)$$
for some $i \in \{1, \ldots, n\}$, $0 \leq \lambda < 1$ (see Figure 1). Then, a corner cutting algorithm is any composition of elementary corner cuttings (see [10]).
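In code, an elementary corner cutting is a one-line convex combination. This small sketch (illustrative, with naming of our own) applies either (2) or (3) to a polygon stored as an array of vertices:

```python
import numpy as np

def elementary_corner_cut(P, i, lam, forward=True):
    """Replace vertex P_i by (1-lam)*P_i + lam*P_{i+1} (forward, as in (2))
    or by (1-lam)*P_i + lam*P_{i-1} (backward, as in (3))."""
    assert 0 <= lam < 1
    B = P.astype(float)          # astype returns a copy
    j = i + 1 if forward else i - 1
    B[i] = (1 - lam) * P[i] + lam * P[j]
    return B
```

A corner cutting algorithm is then simply a composition of such maps, applied for various choices of `i` and `lam`.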
The matrix form of the elementary corner cutting given by (2) is
$$(B_0, \ldots, B_n)^T = U(\lambda_i)\, (P_0, \ldots, P_n)^T,$$
where $U(\lambda_i)$ is the non-singular, stochastic, bidiagonal, upper triangular matrix
$$U(\lambda_i) = \begin{pmatrix}
1 \\
& \ddots \\
& & 1 \\
& & & 1-\lambda_i & \lambda_i \\
& & & & 1 \\
& & & & & \ddots \\
& & & & & & 1
\end{pmatrix},$$
with the entries $1-\lambda_i$, $\lambda_i$ occupying the $i$-th row.
Analogously to the previous case, a lower triangular matrix can also be used for the elementary corner cutting (3).
Therefore, a corner cutting algorithm is given by a product of matrices that are all bidiagonal, non-singular, TP, and stochastic. In particular, any upper triangular, bidiagonal, non-singular, TP, stochastic matrix leads to a corner cutting algorithm via the factorization
$$\begin{pmatrix}
1-\lambda_0 & \lambda_0 \\
& 1-\lambda_1 & \lambda_1 \\
& & \ddots & \ddots \\
& & & 1-\lambda_{n-1} & \lambda_{n-1} \\
& & & & 1
\end{pmatrix} = U(\lambda_{n-1})\, U(\lambda_{n-2}) \cdots U(\lambda_0).$$
Likewise, there exists an analogous factorization for the lower triangular case (3).
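The factorization above is easy to confirm numerically. The sketch below (illustrative code of ours) builds the elementary matrices $U(\lambda_i)$ and checks that their product $U(\lambda_{n-1}) \cdots U(\lambda_0)$ equals the stochastic upper bidiagonal matrix with rows $(1-\lambda_i, \lambda_i)$:

```python
import numpy as np

def elem_upper(n, i, lam):
    """(n+1)x(n+1) elementary corner cutting matrix U(lam_i): the identity
    except for row i, whose nonzero entries are (1-lam, lam)."""
    U = np.eye(n + 1)
    U[i, i], U[i, i + 1] = 1 - lam, lam
    return U

def product_form(lams):
    """U(lam_{n-1}) @ U(lam_{n-2}) @ ... @ U(lam_0)."""
    n = len(lams)
    A = np.eye(n + 1)
    for i in range(n - 1, -1, -1):
        A = A @ elem_upper(n, i, lams[i])
    return A

def bidiagonal_form(lams):
    """Stochastic upper bidiagonal matrix with rows (1-lam_i, lam_i)."""
    n = len(lams)
    A = np.eye(n + 1)
    for i, lam in enumerate(lams):
        A[i, i], A[i, i + 1] = 1 - lam, lam
    return A
```

Note that the order of the factors matters: each $U(\lambda_i)$ only touches row $i$, but consecutive rows interact, and it is precisely the order $U(\lambda_{n-1}) \cdots U(\lambda_0)$ that collapses to the bidiagonal matrix.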
A corner cutting algorithm coming from a non-singular stochastic TP matrix can be expressed as a product of bidiagonal non-singular stochastic TP matrices, as shown by the following result, which corresponds to Theorem 4.5 in [21].
Theorem 2.
An $n \times n$ non-singular matrix $A$ is TP and stochastic if and only if it can be decomposed as
$$A = F_{n-1} F_{n-2} \cdots F_1\, G_1 \cdots G_{n-2} G_{n-1}, \qquad (4)$$
with
$$F_i = \begin{pmatrix}
1 \\
0 & 1 \\
& \ddots & \ddots \\
& & 0 & 1 \\
& & & \alpha_{i+1,1} & 1-\alpha_{i+1,1} \\
& & & & \ddots & \ddots \\
& & & & & \alpha_{n,n-i} & 1-\alpha_{n,n-i}
\end{pmatrix}$$
and
$$G_i = \begin{pmatrix}
1 & 0 \\
& 1 & \ddots \\
& & \ddots & 0 \\
& & & 1-\alpha_{1,i+1} & \alpha_{1,i+1} \\
& & & & \ddots & \ddots \\
& & & & & 1-\alpha_{n-i,n} & \alpha_{n-i,n} \\
& & & & & & 1
\end{pmatrix},$$
where the entries $0 \leq \alpha_{i,j} < 1$ satisfy
$$\alpha_{ij} = 0 \;\Rightarrow\; \alpha_{hj} = 0 \ \text{for all } h > i \quad (i > j), \qquad
\alpha_{ij} = 0 \;\Rightarrow\; \alpha_{ik} = 0 \ \text{for all } k > j \quad (i < j).$$
Under these conditions, the factorization is unique.
We can now go back to the matrix of Example 1 to illustrate Theorem 2.
Example 2.
Let A be the matrix given in Example 1. Then, its decomposition associated with a corner cutting algorithm is given by
$$A = \begin{pmatrix} 1/2 & 1/2 \\ 1/3 & 2/3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2/3 & 1/3 \end{pmatrix} \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1 \end{pmatrix}.$$
Examples of higher dimensions will be presented in Section 6.
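The stochastic factorization of Example 2 can be checked in a few lines (a numerical sanity check of ours, not part of the paper):

```python
import numpy as np

A = np.array([[1/2, 1/2], [1/3, 2/3]])
F = np.array([[1.0, 0.0], [2/3, 1/3]])   # lower bidiagonal, stochastic
G = np.array([[1/2, 1/2], [0.0, 1.0]])   # upper bidiagonal, stochastic

# the product of the stochastic factors reproduces A
assert np.allclose(F @ G, A)
# every factor (and hence A) has rows summing to 1
for M in (F, G, A):
    assert np.allclose(M.sum(axis=1), 1.0)
```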
Remark 1.
A corner cutting algorithm corresponding to a stochastic TP matrix A as in Theorem 2 can be expressed in compact form using the following matrix notation:
$$\mathrm{CCA}(A) = \begin{pmatrix}
1 & \alpha_{1,2} & \cdots & \alpha_{1,n} \\
\alpha_{2,1} & 1 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \alpha_{n-1,n} \\
\alpha_{n,1} & \cdots & \alpha_{n,n-1} & 1
\end{pmatrix}.$$
With an analogous distribution to that of the off-diagonal entries of the compact form of the corner cutting algorithm CCA ( A ) , we can define the compact form of the bidiagonal decomposition BD ( A ) , but including in the main diagonal the diagonal pivots of the CNE of A (see Section 2 of [5]).
Observe that the restrictions on the $\alpha_{ij}$'s in Theorem 2 are imposed to ensure the uniqueness of the decomposition, in the same way as the restrictions of zero entries in $\mathrm{BD}(A)$.
Many essential algorithms used in curve design, such as evaluation, subdivision, degree elevation, and knot insertion, are corner cutting algorithms (see [9,10]). In particular, it is well known that all bases mentioned in the previous section, which are useful in CAGD, have non-singular stochastic TP collocation matrices. Therefore, they satisfy the hypotheses of Theorem 2, which thus leads to an evaluation corner cutting algorithm for the simultaneous evaluation of n + 1 points, as illustrated in Section 6.

5. Construction of Corner Cutting Algorithms with HRA

It has already been shown in [13] how to obtain an accurate bidiagonal decomposition of a matrix A from a corner cutting algorithm. In this section, we consider the converse problem, that is, how to obtain with HRA the associated corner cutting algorithm if we start from an accurate bidiagonal decomposition BD(A) (which has been obtained for many important examples, as recalled in Section 3). The following result gives a constructive answer, proving that, from an accurate BD(A), we can construct with HRA the corner cutting algorithm of the associated n × n non-singular stochastic TP matrix A.
Theorem 3.
Let A be a non-singular stochastic TP matrix. If we know the entries of $D$, $L^{(k)}$, $U^{(k)}$, for $k = 1, 2, \ldots, n-1$, of the bidiagonal decomposition $\mathrm{BD}(A)$ in (1) with HRA, then we can compute a corner cutting algorithm associated with the matrix decomposition (4) by an SF algorithm and, hence, to HRA.
Proof. 
Since $U^{(1)}$ is non-singular and non-negative, its row sums are positive, and so we can rewrite $U^{(1)}$ as
$$U^{(1)} = D_1(U)\, \bar U^{(1)},$$
where $D_1(U) = \mathrm{diag}\left( \sum_{j=1}^n u_{1j}^{(1)}, \sum_{j=1}^n u_{2j}^{(1)}, \ldots, \sum_{j=1}^n u_{nj}^{(1)} \right)$ has positive diagonal entries, formed by the row sums of $U^{(1)}$, and $\bar U^{(1)} = D_1(U)^{-1} U^{(1)}$ is a stochastic matrix. Since $U^{(k)}$, for $k = 2, 3, \ldots, n-1$, is non-singular and non-negative, the previous process can be iterated, obtaining
$$U^{(k)} D_{k-1}(U) = D_k(U)\, \bar U^{(k)},$$
where $D_k(U)$ is the diagonal matrix whose $i$-th diagonal entry is the sum of the entries in the $i$-th row of the matrix $U^{(k)} D_{k-1}(U)$. Since this matrix is non-negative and non-singular, the diagonal entries of $D_k(U)$ are positive. Moreover, the matrices $\bar U^{(k)}$, for $k = 2, 3, \ldots, n-1$, are stochastic. After iterating this process, we obtain
$$A = L^{(1)} \cdots L^{(n-1)}\, \underbrace{D\, D_{n-1}(U)}_{=:\, \bar D}\, \bar U^{(n-1)} \cdots \bar U^{(1)} = L^{(1)} \cdots L^{(n-1)}\, \bar D\, \bar U^{(n-1)} \cdots \bar U^{(1)}.$$
Now, since $L^{(n-1)} \bar D$ is non-singular and non-negative, its row sums are positive, and it can be expressed as
$$L^{(n-1)} \bar D = D_{n-1}(L)\, \bar L^{(n-1)},$$
where $\bar L^{(n-1)}$ is a stochastic matrix. Iterating this procedure, the following factorization of $A$ is obtained:
$$A = D_1(L)\, \bar L^{(1)} \cdots \bar L^{(n-1)}\, \bar U^{(n-1)} \cdots \bar U^{(1)}, \qquad (6)$$
where $D_1(L)$ is a non-negative diagonal matrix. Taking into account that the matrices $A$, $\bar L^{(k)}$ and $\bar U^{(k)}$ are stochastic, we have
$$A e = e, \qquad \bar L^{(k)} e = e \qquad \text{and} \qquad \bar U^{(k)} e = e,$$
where $e = (1, \ldots, 1)^T$. From (6) and using the previous formulas, we can write
$$D_1(L)\, e = D_1(L)\, \bar L^{(1)} \cdots \bar L^{(n-1)}\, \bar U^{(n-1)} \cdots \bar U^{(1)} e = A e = e.$$
So, $D_1(L)$ is a diagonal stochastic matrix and, hence, the identity matrix, and factorization (6) reduces to
$$A = \bar L^{(1)} \cdots \bar L^{(n-1)}\, \bar U^{(n-1)} \cdots \bar U^{(1)}, \qquad (7)$$
where the $\bar L^{(k)}$ (resp., $\bar U^{(k)}$) are stochastic lower (resp., upper) bidiagonal matrices as in (4) of Theorem 2 (for $k = 1, 2, \ldots, n-1$, $F_k = \bar L^{(n-k)}$ and $G_k = \bar U^{(n-k)}$).
Taking into account that, for the construction of the corner cutting factorization (7), only products, quotients, and sums of non-negative numbers have been used, the corner cutting algorithm associated with (7) is an SF algorithm, and so it can be obtained with HRA.    □
Algorithm 1 shows a pseudocode of the iterative procedure introduced in the proof of the previous theorem.
Algorithm 1 Computation of a corner cutting algorithm from a bidiagonal factorization

Require: BD(A) of a stochastic non-singular TP matrix A
Ensure: entries $(\alpha_{i,j})_{1 \leq i,j \leq n,\, i \neq j}$ of the corner cutting algorithm

    Daux = I_n
    for k = 1 : n−1 do
        U^{(k)} = U^{(k)} · Daux
        aux = U^{(k)} e
        Daux = diag(aux)
        for i = 1 : k do
            α_{i, n−k+i} = U^{(k)}(n−k+i−1, n−k+i) / aux(n−k+i−1)
        end for
    end for
    Daux = D · Daux
    for k = 1 : n−1 do
        L^{(n−k)} = L^{(n−k)} · Daux
        aux = L^{(n−k)} e
        Daux = diag(aux)
        for i = k+1 : n do
            α_{i, i−k} = L^{(n−k)}(i, i−1) / aux(i)
        end for
    end for
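A direct Python transcription of Algorithm 1 might look as follows (an implementation sketch of ours; variable names are ours, and the bidiagonal factors are passed as dense NumPy arrays). Note that it only uses products, divisions, and sums of non-negative numbers, as required for an SF algorithm:

```python
import numpy as np

def cca_from_bd(D, Ls, Us):
    """Corner cutting parameters alpha (0-based dense matrix, off-diagonal
    entries only) from the bidiagonal decomposition
    A = Ls[0] @ ... @ Ls[n-2] @ diag(D) @ Us[n-2] @ ... @ Us[0],
    where Ls[k-1] and Us[k-1] are the unit bidiagonal factors L^(k), U^(k)."""
    n = len(D)
    alpha = np.zeros((n, n))
    daux = np.ones(n)                      # diagonal matrix stored as a vector
    for k in range(1, n):                  # process U^(1), ..., U^(n-1)
        Uk = Us[k - 1] * daux              # == U^(k) @ diag(daux)
        aux = Uk.sum(axis=1)               # positive row sums
        for i in range(1, k + 1):          # 1-based indices as in Algorithm 1
            r, c = n - k + i - 1, n - k + i
            alpha[i - 1, c - 1] = Uk[r - 1, c - 1] / aux[r - 1]
        daux = aux
    daux = np.asarray(D, dtype=float) * daux
    for k in range(1, n):                  # process L^(n-1), ..., L^(1)
        Lk = Ls[n - k - 1] * daux          # == L^(n-k) @ diag(daux)
        aux = Lk.sum(axis=1)
        for i in range(k + 1, n + 1):
            alpha[i - 1, i - k - 1] = Lk[i - 1, i - 2] / aux[i - 1]
        daux = aux
    return alpha
```

Running it on the bidiagonal factors of the matrix $M_3$ of Example 3 in Section 6 reproduces the parameters of its corner cutting algorithm.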

6. Examples

This section presents examples illustrating the result and the algorithm of the previous section. For example, let us consider the basis of the space of polynomials of degree at most n given by the Bernstein polynomials of degree n:
$$b_i^n(x) = \binom{n}{i} x^i (1-x)^{n-i}, \qquad x \in [0,1], \quad i = 0, 1, \ldots, n.$$
Example 3.
For the first example, let us consider the collocation matrix of the Bernstein basis for $n = 3$ at the points $0, 1/3, 2/3, 1$, which is given by
$$M_3 := M\begin{pmatrix} b_0^3, b_1^3, b_2^3, b_3^3 \\ 0, 1/3, 2/3, 1 \end{pmatrix} = \begin{pmatrix}
1 & 0 & 0 & 0 \\
8/27 & 4/9 & 2/9 & 1/27 \\
1/27 & 2/9 & 4/9 & 8/27 \\
0 & 0 & 0 & 1
\end{pmatrix}.$$
This matrix is stochastic, non-singular, and totally positive. It can be checked that its bidiagonal decomposition is given in compact form by
$$\mathrm{BD}(M_3) = \begin{pmatrix}
1 & 0 & 0 & 0 \\
8/27 & 4/9 & 1/2 & 1/6 \\
1/8 & 3/8 & 1/3 & 2/3 \\
0 & 0 & 0 & 1
\end{pmatrix},$$
that is,
$$M_3 = \begin{pmatrix} 1 \\ & 1 \\ & & 1 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 \\ & 1/8 & 1 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ 8/27 & 1 \\ & 3/8 & 1 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 4/9 \\ & & 1/3 \\ & & & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ & 1 & 1/2 \\ & & 1 & 2/3 \\ & & & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 & 0 \\ & & 1 & 1/6 \\ & & & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 \\ & & 1 & 0 \\ & & & 1 \end{pmatrix}.$$
Applying Algorithm 1 to the previous bidiagonal decomposition of $M_3$, the following corner cutting algorithm is obtained:
$$M\begin{pmatrix} b_0^3, b_1^3, b_2^3, b_3^3 \\ 0, 1/3, 2/3, 1 \end{pmatrix} =
\begin{pmatrix} 1 \\ & 1 \\ & & 1 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 \\ & 1/8 & 7/8 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ 8/27 & 19/27 \\ & 19/63 & 44/63 \\ & & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ & 12/19 & 7/19 \\ & & 7/11 & 4/11 \\ & & & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 & 0 \\ & & 6/7 & 1/7 \\ & & & 1 \end{pmatrix}
\begin{pmatrix} 1 \\ & 1 \\ & & 1 & 0 \\ & & & 1 \end{pmatrix}.$$
Taking into account Remark 1, this bidiagonal representation of the corner cutting algorithm can be written in compact form as
$$\mathrm{CCA}(M_3) = \begin{pmatrix}
1 & 0 & 0 & 0 \\
8/27 & 1 & 7/19 & 1/7 \\
1/8 & 19/63 & 1 & 4/11 \\
0 & 0 & 0 & 1
\end{pmatrix}.$$
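The nontrivial stochastic factors above can be multiplied back together as a check (a numerical verification sketch of ours; the two trivial factors, whose parameters are zero, are identity matrices and are omitted):

```python
import numpy as np

def lower_factor(alphas):
    """4x4 stochastic lower bidiagonal factor: row r carries (alpha, 1-alpha)."""
    M = np.eye(4)
    for r, a in alphas.items():
        M[r, r - 1], M[r, r] = a, 1 - a
    return M

def upper_factor(alphas):
    """4x4 stochastic upper bidiagonal factor: row r carries (1-alpha, alpha)."""
    M = np.eye(4)
    for r, a in alphas.items():
        M[r, r], M[r, r + 1] = 1 - a, a
    return M

M3 = np.array([[1, 0, 0, 0],
               [8/27, 4/9, 2/9, 1/27],
               [1/27, 2/9, 4/9, 8/27],
               [0, 0, 0, 1]])

product = (lower_factor({2: 1/8})
           @ lower_factor({1: 8/27, 2: 19/63})
           @ upper_factor({1: 7/19, 2: 4/11})
           @ upper_factor({2: 1/7}))
assert np.allclose(product, M3)
```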
Example 4.
In this example, we consider the collocation matrix of the Bernstein basis of the space of polynomials of degree less than or equal to 7 at the points $\{i/7\}_{i=0}^7$. Let us denote it by $M_7$. It can be checked that its bidiagonal decomposition can be expressed in compact form as
$$\mathrm{BD}(M_7) = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{279936}{823543} & \frac{46656}{117649} & \frac{1}{2} & \frac{5}{18} & \frac{1}{6} & \frac{1}{10} & \frac{1}{18} & \frac{1}{42} \\
\frac{78125}{279936} & \frac{109375}{279936} & \frac{3125}{16807} & \frac{2}{3} & \frac{2}{5} & \frac{6}{25} & \frac{2}{15} & \frac{2}{35} \\
\frac{16384}{78125} & \frac{24576}{78125} & \frac{7168}{15625} & \frac{256}{2401} & \frac{3}{4} & \frac{9}{20} & \frac{1}{4} & \frac{3}{28} \\
\frac{2187}{16384} & \frac{3645}{16384} & \frac{729}{2048} & \frac{567}{1024} & \frac{27}{343} & \frac{4}{5} & \frac{4}{9} & \frac{4}{21} \\
\frac{128}{2187} & \frac{256}{2187} & \frac{160}{729} & \frac{32}{81} & \frac{56}{81} & \frac{4}{49} & \frac{5}{6} & \frac{5}{14} \\
\frac{1}{128} & \frac{3}{128} & \frac{1}{16} & \frac{5}{32} & \frac{3}{8} & \frac{7}{8} & \frac{1}{7} & \frac{6}{7} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$
Applying Algorithm 1 to the previous bidiagonal decomposition, the following corner cutting algorithm is obtained:
$$\mathrm{CCA}(M_7) = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{279936}{823543} & 1 & \frac{217015}{543607} & \frac{53719}{217015} & \frac{8359}{53719} & \frac{799}{8359} & \frac{43}{799} & \frac{1}{43} \\
\frac{78125}{279936} & \frac{8493859375}{23742862339} & 1 & \frac{592578372}{1270750247} & \frac{15807156}{49381531} & \frac{272388}{1317263} & \frac{2724}{22699} & \frac{12}{227} \\
\frac{16384}{78125} & \frac{445376}{1552041} & \frac{40664007904}{107878368199} & 1 & \frac{703547955}{1493652451} & \frac{5096295}{15634399} & \frac{22455}{113251} & \frac{45}{499} \\
\frac{2187}{16384} & \frac{45009189}{221828125} & \frac{916951}{3157481} & \frac{1493652451}{3765658771} & 1 & \frac{12765680}{28400079} & \frac{46320}{159571} & \frac{80}{579} \\
\frac{128}{2187} & \frac{14197}{131776} & \frac{3157481}{17496875} & \frac{224053}{796633} & \frac{9466693}{22912743} & 1 & \frac{109350}{268921} & \frac{50}{243} \\
\frac{1}{128} & \frac{2059}{92583} & \frac{5599}{103456} & \frac{796633}{6795625} & \frac{28629}{124979} & \frac{268921}{660961} & 1 & \frac{40}{121} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.$$
In computer-aided geometric design, the usual representation for polynomial curves is the Bézier representation. A Bézier curve of degree n is given by
$$\gamma(t) = \sum_{i=0}^n P_i\, b_i^n(t), \qquad t \in [0,1],$$
where $P_i \in \mathbb{R}^k$, with $k = 2$ or 3, are called the control points of the curve. The usual algorithm in CAGD to evaluate a Bézier curve is the de Casteljau algorithm. This algorithm is a corner cutting algorithm; that is, all its steps are formed by convex linear combinations. Corner cutting algorithms are desirable since they are very stable from a numerical point of view. The de Casteljau algorithm evaluates a Bézier curve of degree n at a value $t \in [0,1]$ with a computational cost of $O(n^2)$ elementary operations, and so it evaluates the Bézier curve at a sequence of points $0 \leq t_0 < t_1 < \cdots < t_n \leq 1$ with a computational cost of $O(n^3)$ elementary operations. Taking into account Examples 3 and 4, we can obtain another corner cutting algorithm to evaluate Bézier curves. Let us consider a Bézier function of degree n
$$f(t) = \sum_{i=0}^n f_i\, b_i^n(t), \qquad t \in [0,1],$$
where $f_i \in \mathbb{R}$ for $i = 0, 1, \ldots, n$. Then, we can consider the collocation matrix of the Bernstein basis of degree n at $0 \leq t_0 < t_1 < \cdots < t_n \leq 1$:
$$M_n := M\begin{pmatrix} b_0^n, b_1^n, \ldots, b_n^n \\ t_0, t_1, \ldots, t_n \end{pmatrix}.$$
The previous matrix is stochastic, non-singular, and totally positive. By the results in [26], its bidiagonal decomposition can be computed to HRA. Then, using Theorem 3, from this bidiagonal decomposition a corner cutting algorithm representation like that of Theorem 2 can be obtained for the $(n+1) \times (n+1)$ matrix $M_n$:
$$M_n = F_n F_{n-1} \cdots F_1\, G_1 \cdots G_{n-1} G_n.$$
Then, we can deduce that
$$\begin{pmatrix} f(t_0) \\ f(t_1) \\ \vdots \\ f(t_n) \end{pmatrix} = M_n \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix} = F_n F_{n-1} \cdots F_1\, G_1 \cdots G_{n-1} G_n \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}.$$
Hence, we in fact have a corner cutting algorithm for evaluating Bézier curves that is an alternative to the de Casteljau algorithm. In addition, the new algorithm has a computational cost of $O(n^2)$ elementary operations to evaluate the function at the $n+1$ points, in contrast to the computational cost of $O(n^3)$ elementary operations of the de Casteljau algorithm.
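For comparison, both evaluation routes can be sketched compactly (illustrative code of ours; the corner cutting evaluation is represented here by the collocation matrix itself, whose factored form of Theorem 2 gives the $O(n^2)$ operation count, and the coefficients below are hypothetical):

```python
import numpy as np
from math import comb

def de_casteljau(coeffs, t):
    """Evaluate a Bezier function at t with O(n^2) convex combinations."""
    b = np.array(coeffs, dtype=float)
    for _ in range(len(b) - 1):
        b = (1 - t) * b[:-1] + t * b[1:]
    return b[0]

def bernstein_collocation(n, ts):
    """Collocation matrix M(b_0^n, ..., b_n^n; t_0, ..., t_n):
    stochastic, and non-singular and TP for increasing nodes in [0, 1]."""
    return np.array([[comb(n, i) * t**i * (1 - t)**(n - i)
                      for i in range(n + 1)] for t in ts])

f = np.array([1.0, 2.0, 0.0, 3.0])     # hypothetical Bezier coefficients
ts = [0.0, 1/3, 2/3, 1.0]
M = bernstein_collocation(3, ts)
# M @ f evaluates f at all n+1 nodes at once; applying M through its
# 2n bidiagonal factors costs O(n) per factor, i.e. O(n^2) in total.
assert np.allclose(M @ f, [de_casteljau(f, t) for t in ts])
```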
The two previous examples have shown how the algorithm provides a corner cutting algorithm from the bidiagonal decomposition of a non-singular stochastic matrix TP. So, in the next example, it will be illustrated that when Algorithm 1 is carried out in floating point arithmetic, the parameters of the corner cutting algorithm are obtained with high relative accuracy.
Example 5.
In Example 4, the corner cutting algorithm $\mathrm{CCA}(M_7)$ was obtained from $\mathrm{BD}(M_7)$ by applying the algorithm with exact computations. Then, $\mathrm{CCA}(M_7)$ was computed in double precision floating point arithmetic with a Python (3.10.9) implementation of Algorithm 1, and the componentwise relative errors were calculated, obtaining the following:
all componentwise relative errors are either exactly zero or of the order of the unit roundoff, the nonzero values ranging from $1.12 \times 10^{-16}$ to a maximum of $3.87 \times 10^{-16}$.
As can be observed, all the parameters of the corner cutting algorithm are obtained with an error of the order of the unit roundoff of the double precision floating point arithmetic system. So, all the parameters are obtained with HRA, as Theorem 3 states.

7. Conclusions

The bidiagonal decomposition with high relative accuracy is known for many non-singular TP matrices. If a non-singular TP matrix is also stochastic, then it provides a corner cutting algorithm. It is proved that, if we have the bidiagonal decomposition with high relative accuracy, then we can obtain the corner cutting algorithm through an SF algorithm, and so with high relative accuracy. Hence, the method presented in this paper provides a source for constructing corner cutting algorithms with high relative accuracy. Examples of its use for curve evaluation algorithms are included.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Spanish research grants PID2022-138569NB-I00 (MCI/AEI) and RED2022-134176-T (MCI/AEI), and by Gobierno de Aragón (E41_23R).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
TP    Totally positive
CNE   Complete Neville elimination
HRA   High relative accuracy
SF    Subtraction free
CAGD  Computer-aided geometric design

References

  1. Demmel, J. Accurate singular value decompositions of structured matrices. SIAM J. Matrix Anal. Appl. 1999, 21, 562–580. [Google Scholar] [CrossRef]
  2. Demmel, J.; Koev, P. The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 2005, 27, 42–52. [Google Scholar]
  3. Higham, N.J. Accuracy and Stability of Numerical Algorithms, 2nd ed.; SIAM: Philadelphia, PA, USA, 2002. [Google Scholar]
  4. Koev, P. Accurate eigenvalues and SVDs of totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 1–23. [Google Scholar]
  5. Koev, P. Accurate computations with totally nonnegative matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 731–751. [Google Scholar]
  6. Koev, P. TNTool. Available online: https://sites.google.com/sjsu.edu/plamenkoev/home/software/tntool (accessed on 6 February 2025).
  7. Marco, A.; Martínez, J.J. Accurate computation of the Moore–Penrose inverse of strictly totally positive matrices. J. Comput. Appl. Math. 2019, 350, 299–308. [Google Scholar] [CrossRef]
  8. Demmel, J.; Dumitriu, I.; Holtz, O.; Koev, P. Accurate and efficient expression evaluation and linear algebra. Acta Numer. 2008, 17, 87–145. [Google Scholar]
  9. Farin, G. Curves and Surfaces for Computer-Aided Geometric Design. A Practical Guide, 4th ed.; Academic Press: San Diego, CA, USA; Computer Science and Scientific Computing, Inc.: Rockaway, NJ, USA, 1997. [Google Scholar]
  10. Goodman, T.N.T.; Micchelli, C.A. Corner cutting algorithms for the Bézier representation of free form curves. Linear Algebra Appl. 1988, 99, 225–252. [Google Scholar]
  11. Hoschek, J.; Lasser, D. Fundamentals of Computer Aided Geometric Design; A K Peters: Wellesley, MA, USA, 1993. [Google Scholar]
  12. Micchelli, C.A.; Pinkus, A.M. Descartes systems from corner cutting. Constr. Approx. 1991, 7, 161–194. [Google Scholar] [CrossRef]
  13. Barreras, A.; Peña, J.M. Matrices with bidiagonal decomposition, accurate computations and corner cutting algorithms. In Concrete Operators, Spectral Theory, Operators in Harmonic Analysis and Approximation, Birkhauser; Operator Theory, Advances and Applications; Springer: Berlin/Heidelberg, Germany, 2013; Volume 236, pp. 43–51. [Google Scholar]
  14. Fallat, S.M.; Johnson, C.R. Totally Nonnegative Matrices; Princeton Series in Applied Mathematics; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  15. Gantmacher, F.P.; Krein, M.G. Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, Revised Edition; AMS Chelsea Publishing: Providence, RI, USA, 2002. [Google Scholar]
  16. Ando, T. Totally positive matrices. Linear Algebra Appl. 1987, 90, 165–219. [Google Scholar]
  17. Gasca, M.; Micchelli, C.A. Total Positivity and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996. [Google Scholar]
  18. Karlin, S. Total Positivity; Stanford University Press: Stanford, CA, USA, 1968; Volume 1. [Google Scholar]
  19. Pinkus, A. Totally Positive Matrices; Cambridge Tracts in Mathematics 181; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  20. Barreras, A.; Peña, J.M. Accurate computations of matrices with bidiagonal decomposition using methods for totally positive matrices. Numer. Linear Algebra Appl. 2013, 20, 413–424. [Google Scholar] [CrossRef]
  21. Gasca, M.; Peña, J.M. On factorizations of totally positive matrices. In Total Positivity and Its Applications; Gasca, M., Micchelli, C.A., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp. 109–130. [Google Scholar]
  22. Gasca, M.; Peña, J.M. Total positivity and Neville elimination. Linear Algebra Appl. 1992, 165, 25–44. [Google Scholar]
  23. Demmel, J.; Gu, M.; Eisenstat, S.; Slapnicar, I.; Veselic, K.; Drmac, Z. Computing the singular value decomposition with high relative accuracy. Linear Algebra Appl. 1999, 299, 21–80. [Google Scholar]
  24. Marco, A.; Martínez, J.J. A fast and accurate algorithm for solving Bernstein-Vandermonde linear systems. Linear Algebra Appl. 2007, 422, 616–628. [Google Scholar]
  25. Marco, A.; Martínez, J.J. Polynomial least squares fitting in the Bernstein basis. Linear Algebra Appl. 2010, 433, 1254–1264. [Google Scholar]
  26. Marco, A.; Martínez, J.J. Accurate computations with totally positive Bernstein-Vandermonde matrices. Electron. J. Linear Algebra 2013, 26, 357–380. [Google Scholar] [CrossRef]
  27. Marco, A.; Martínez, J.J. Accurate computations with Said-Ball-Vandermonde matrices. Linear Algebra Appl. 2010, 432, 2894–2908. [Google Scholar]
  28. Delgado, J.; Peña, J.M. Accurate computations with collocation matrices of rational bases. Appl. Math. Comput. 2013, 219, 4354–4364. [Google Scholar] [CrossRef]
  29. Delgado, J.; Peña, J.M. Accurate computations with collocation matrices of q-Bernstein polynomials. SIAM J. Matrix Anal. Appl. 2015, 36, 880–893. [Google Scholar] [CrossRef]
  30. Marco, A.; Martínez, J.J.; Viaña, R. Accurate bidiagonal decomposition of totally positive h-Bernstein-Vandermonde matrices and applications. Linear Algebra Appl. 2019, 579, 320–335. [Google Scholar]
  31. Joy, K.I. On-Line Geometric Modeling Notes: Chaikin’s Algorithm for Curves. Visualization and Graphics Research Group, Department of Computer Science, University of California, Davis. 1999. Available online: https://www.cs.unc.edu/~dm/UNC/COMP258/LECTURES/Chaikins-Algorithm.pdf (accessed on 10 March 2025).
Figure 1. Elementary corner cuttings (2) and (3).