Compensated Evaluation of Tensor Product Surfaces in CAGD

Abstract: In computer-aided geometric design, a polynomial surface is usually represented in Bézier form. Such a surface is usually evaluated by an extension of the de Casteljau algorithm. Using error-free transformations, a compensated version of this algorithm is presented, which improves the accuracy of the usual algorithm. A forward error analysis illustrating this fact is developed.


Introduction
The Horner algorithm is the most usual method for the evaluation of polynomials. Important algorithms in computer-aided geometric design (CAGD) need to compute roots of curves and surfaces. Some of these algorithms, in order to compute those roots, need to accurately evaluate the curves and surfaces at points close to the roots (see [1,2]). These evaluations are ill-conditioned, and accurate evaluation algorithms can play a key role in the performance of such root-finding algorithms. In recent years, it has been shown in the literature that the de Casteljau algorithm outperforms Horner's algorithm, among other evaluation algorithms, from the point of view of accuracy (see [3][4][5][6][7][8]). The de Casteljau algorithm evaluates polynomials represented in Bézier form, that is, using the Bernstein polynomials. In CAGD it is the usual evaluation algorithm for polynomial curves.
In CAGD, polynomials (curves and surfaces) are usually represented in Bernstein form, by using the Bernstein polynomials of degree n. A polynomial in Bézier form is evaluated by the de Casteljau algorithm in the univariate case and by an extended version in the multivariate case. The error analysis of these algorithms in [6,7] shows a relative error bound of the following form:

|p(t) − fl(p(t))| / |p(t)| ≤ γ_{3n} S_{B_n}(p(t)) / |p(t)|,   (1)

where u is the unit roundoff of the computing precision and γ_{3n} = 3nu/(1 − 3nu). For an ill-conditioned problem, such as the evaluation of a polynomial at parameters very close to a multiple root, the condition number can exceed 1/u. In that case we can obtain an approximation of the polynomial at the parameter value with almost all its digits being false. Error-free transformations (EFTs) have been studied by Rump and Ogita in [9][10][11]. In [12], applying EFTs, Graillat and Langlois presented a compensated version of the usual Horner algorithm to evaluate polynomials represented in the power basis. Later, in [13] a compensated de Casteljau algorithm for the evaluation of univariate polynomials was devised. The relative error bound for this algorithm has the following form:

|p(t) − fl(p(t))| / |p(t)| ≤ u + 2γ²_{3n} S_{B_n}(p(t)) / |p(t)|,

which improves the bound for the usual de Casteljau algorithm in (1).
In [7], an error analysis was performed for the extension of the de Casteljau algorithm for tensor product surfaces in Bernstein-Bézier form. In this paper, applying EFTs, we present a compensated version of this algorithm for the evaluation of those surfaces with improved accuracy.
The layout of the paper is as follows. Section 2 introduces some basic notation and results about error analysis with floating point arithmetic; the EFTs; the de Casteljau algorithm for polynomial curves and its compensated version. Section 3 recalls the extension of the de Casteljau algorithm for the evaluation of tensor product surfaces and the corresponding error analysis. Then, the compensated de Casteljau algorithm for Bézier tensor product surfaces is devised and the corresponding error analysis performed, providing a better bound for the error.

Floating Point Arithmetic and Forward Error Analysis
Given a real number x, the computed element in floating point arithmetic will be denoted by either fl(x) or x̂. Let us assume that u is the unit roundoff of the floating point arithmetic system we are using. In error analysis, the study of the effect of rounding errors is usually carried out by using one of the following two models.
fl(a op b) = (a op b)(1 + δ₁) = (a op b)/(1 + δ₂), with |δ₁|, |δ₂| ≤ u,

where op is any one of the operations +, −, ×, / (for more details see pages 40-41 of [14]). Now let us define

γ_k := ku / (1 − ku),

where k ∈ ℕ₀ verifies ku < 1. Given δ₁, . . . , δ_k with |δ_i| ≤ u for all i, in error analysis it is usual to deal with quantities θ_k satisfying ∏_{i=1}^{k}(1 + δ_i) = 1 + θ_k. In Lemma 3.1 of [14] it was proved that their absolute value is bounded above by γ_k, that is, |θ_k| ≤ γ_k. The following result summarizes some classic properties in error analysis (see Lemma 3.3 of [14]).
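The bound |θ_k| ≤ γ_k can be checked directly in exact rational arithmetic for the worst case δ_i = u. A small Python sketch for IEEE 754 double precision (the helper names are ours, not from the paper):

```python
from fractions import Fraction

# Unit roundoff of IEEE 754 double precision (binary64).
u = Fraction(1, 2 ** 53)

def gamma(k):
    """gamma_k = k*u / (1 - k*u), defined whenever k*u < 1."""
    assert k * u < 1
    return k * u / (1 - k * u)

# Worst case: every factor (1 + delta_i) takes delta_i = u, so
# theta_k = (1 + u)^k - 1; check 0 < theta_k <= gamma_k exactly.
for k in (1, 5, 50, 1000):
    theta_k = (1 + u) ** k - 1
    assert 0 < theta_k <= gamma(k)
```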

Lemma 1. Let j, k be nonnegative integers such that (j + k)u < 1. Then:
i. (1 + θ_k)(1 + θ_j) = 1 + θ_{k+j};
ii. γ_k + u + γ_k u ≤ γ_{k+1};
iii. γ_j + γ_k + γ_j γ_k ≤ γ_{j+k}.

Condition numbers of the functions to be evaluated are important for the accuracy of the result. Let us now recall some condition numbers related to the evaluation of functions. Given a space of functions U defined on Θ ⊂ ℝ^s, a basis B = (b_0, . . . , b_n) of U and a function f = ∑_{i=0}^n c_i b_i ∈ U, measures of the sensitivity of f(x) to perturbations in c = (c_j)_{j=0}^n are important in the error analysis of evaluation algorithms. Thus, given a relative perturbation δ = (δ_i)_{i=0}^n of the coefficients c, we obtain the function g = ∑_{i=0}^n (1 + δ_i) c_i b_i, which is related to f. Then, for any x ∈ Θ,

|f(x) − g(x)| ≤ ∥δ∥_∞ ∑_{i=0}^n |c_i| |b_i(x)|.   (4)

The number

S_B(f(x)) := ∑_{i=0}^n |c_i| |b_i(x)|

plays the role of a condition number for the evaluation of the function f at x using the basis B (see [4,5,[15][16][17]). In CAGD, it is usual that the basis B is formed by blending functions; that is, each basis function must be nonnegative on Θ, and the sum of all basis functions must be equal to 1 at every point of Θ. If B = (b_0, . . . , b_n) is a basis of blending functions and B̃ = (k_0 b_0, . . . , k_n b_n) with k_i > 0 for all i, then S_B̃(f(x)) = S_B(f(x)); that is, the condition number is invariant under positive scalings of the basis functions.

In floating point arithmetic, given an algorithm for the evaluation of the function f(x), one obtains the computed value fl(f(x)). From a practical point of view, it is desirable to obtain an error bound or estimate for the approximation of the exact evaluation f(x) given by fl(f(x)). The accuracy of the obtained approximation when using an evaluation algorithm depends on:
• The backward error, that is, the error of the calculations of the algorithm;
• The difficulty of the evaluated function, that is, the condition number of the function with respect to the basis used as a representation by the evaluation algorithm.
In error analysis, the computed fl(f(x)) can be expressed as fl(f(x)) = ∑_{i=0}^n (1 + δ_i) c_i b_i(x), where δ = (δ_i)_{i=0}^n is a perturbation in c. Thus, the upper bound of the forward error for evaluation in formula (4) is usually interpreted as a product of the backward error ∥δ∥_∞ and the condition number S_B(f(x)) (cf. [14]).

Error-Free Transformations
Error-free transformations (EFTs) will be used in our algorithms in order to improve accuracy. In particular, the TwoSum and TwoProduct EFTs will be used (see [9]) for computing sums and products, respectively. The algorithm TwoSum for the sum was presented by Knuth in [18], whereas the algorithm TwoProduct for the product, due to G. W. Veltkamp, was presented by Dekker in [19]. Algorithms 1 and 2 show these methods (TwoSum and TwoProduct); Algorithm 3 (Split) is used by Algorithm 2.
Algorithm 3
Splitting of a ∈ F into two parts (Split).
Require: a
Ensure: [x, y] such that a = x + y
1: c = factor ⊗ a %factor = 2^27 + 1 in IEEE 754
2: x = c ⊖ (c ⊖ a)
3: y = a ⊖ x

Error analyses of both algorithms were presented in Theorem 3.4 of [9] and Théorème 3.14 of [20]. The following result shows a summary of these results.

Theorem 1. Let F be the set of standard floating point numbers corresponding to a certain floating point arithmetic. If a, b ∈ F, then:
i. [x, y] = TwoSum(a, b) verifies x = fl(a + b), x + y = a + b and |y| ≤ u|x|;
ii. [x, y] = TwoProduct(a, b) verifies x = fl(a × b), x + y = a × b and |y| ≤ u|x|.
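In IEEE 754 double precision, the three EFTs can be sketched in Python as follows; the defining exactness property x + y = a op b is verified with rational arithmetic (the function names are ours):

```python
from fractions import Fraction

def two_sum(a, b):
    """EFT for addition (Knuth): (x, y) with x = fl(a + b) and x + y = a + b exactly."""
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y

def split(a):
    """Dekker/Veltkamp splitting: a = x + y, each part with at most 26 significant bits."""
    c = (2.0 ** 27 + 1.0) * a      # factor = 2^27 + 1 for binary64
    x = c - (c - a)
    return x, a - x

def two_product(a, b):
    """EFT for multiplication (Veltkamp/Dekker): (x, y) with x = fl(a*b), x + y = a*b exactly."""
    x = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    y = a_lo * b_lo - (((x - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)
    return x, y

# The property x + y = a op b holds exactly (absent overflow/underflow);
# verify it with exact rationals.
a, b = 0.1, 0.2
x, y = two_sum(a, b)
assert Fraction(x) + Fraction(y) == Fraction(a) + Fraction(b)
x, y = two_product(a, b)
assert Fraction(x) + Fraction(y) == Fraction(a) * Fraction(b)
```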

De Casteljau Algorithm for Polynomial Curves in Bézier Form
The Horner algorithm is the best-known method for polynomial evaluation. It uses the monomial basis of the space P_n, M_n = (1, t, . . . , t^n). Given p(t) = ∑_{i=0}^n a_i t^i, the error analysis of the Horner algorithm in Chapter 5 of [14] shows that

|p(t) − fl(p(t))| ≤ γ_{2n} ∑_{i=0}^n |a_i| |t|^i = γ_{2n} S_{M_n}(p(t)).

In CAGD the usual evaluation algorithm for polynomial curves is the de Casteljau algorithm. This algorithm evaluates polynomials represented using the Bernstein basis (see [21]). The Bernstein polynomials of degree n, B_n = (b_0^n, . . . , b_n^n), form a basis of P_n and are defined by

b_i^n(t) = C(n, i) t^i (1 − t)^{n−i}, i = 0, . . . , n, t ∈ [0, 1],

where C(n, i) denotes the binomial coefficient. A polynomial written as

p(t) = ∑_{i=0}^n c_i b_i^n(t)   (6)

is said to be in Bézier form or in Bernstein-Bézier form. Algorithm 4 shows the de Casteljau algorithm for the evaluation of polynomials in Bézier form (6).

Algorithm 4
De Casteljau algorithm for the evaluation of p ∈ P_n at t.
f_i^0 = c_i, i = 0, . . . , n
for r = 1 to n do
for i = 0 to n − r do
f_i^r = (1 − t) ⊗ f_i^{r−1} ⊕ t ⊗ f_{i+1}^{r−1}
end for
end for
return p(t) = f_0^n
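Algorithm 4 can be transcribed directly into code. A minimal Python sketch (the function name is ours):

```python
def de_casteljau(c, t):
    """Evaluate p(t) = sum_i c_i b_i^n(t), given in Bernstein-Bezier form,
    by the de Casteljau algorithm."""
    f = list(c)                      # f_i^0 = c_i
    n = len(f) - 1
    for r in range(1, n + 1):        # corner-cutting steps
        for i in range(n - r + 1):
            f[i] = (1.0 - t) * f[i] + t * f[i + 1]
    return f[0]                      # f_0^n = p(t)

# Example: c_i = i/n are the Bernstein coefficients of p(t) = t, so the
# evaluation reproduces t (exactly here, since all values are binary-exact).
assert de_casteljau([0.0, 0.5, 1.0], 0.25) == 0.25
```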
A corner cutting algorithm is an algorithm such that each step is formed by linear convex combinations (see [6]). The de Casteljau algorithm is a corner cutting algorithm. In [6] an error analysis of corner cutting algorithms was carried out, which for the particular case of the de Casteljau algorithm can be written as

|p(t) − fl(p(t))| ≤ γ_{3n} ∑_{i=0}^n |c_i| b_i^n(t) = γ_{3n} S_{B_n}(p(t)),

for all p ∈ P_n and t ∈ [0, 1].
In addition, the optimal conditioning of the Bernstein basis for polynomial evaluation among all the bases formed by nonnegative polynomials on [0, 1] was shown in [5]. Thus, there does not exist another basis of P_n, up to positive scaling, formed by nonnegative polynomials on [0, 1] that is better conditioned for every p ∈ P_n at every point t ∈ [0, 1]. In particular, we have S_{B_n}(p(t)) ≤ S_{M_n}(p(t)) for all p ∈ P_n and t ∈ [0, 1]. Hence, the part of the error bound corresponding to the condition number for the de Casteljau algorithm is lower than the corresponding part of the bound for the Horner algorithm. In fact, the numerical experiments in [3] show that the algorithms using the Bernstein representation, like the de Casteljau algorithm, present better stability properties than the Horner algorithm.
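This can be illustrated on p(t) = (1 − 2t)³, whose monomial coefficients are (1, −6, 12, −8) and whose degree-3 Bernstein coefficients are (1, −1, 1, −1). Near the triple root t = 1/2 the Bernstein condition number is much smaller, and the forward errors of both algorithms stay within their respective bounds. A Python sketch (all names are ours):

```python
from fractions import Fraction
from math import comb

def horner(a, t):
    """Evaluate p(t) = sum_i a_i t^i by the Horner algorithm."""
    result = a[-1]
    for coef in reversed(a[:-1]):
        result = result * t + coef
    return result

def de_casteljau(c, t):
    """Evaluate p(t) = sum_i c_i b_i^n(t) by the de Casteljau algorithm."""
    f = list(c)
    n = len(f) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            f[i] = (1.0 - t) * f[i] + t * f[i + 1]
    return f[0]

def bernstein(n, i, t):
    """Bernstein polynomial b_i^n(t)."""
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

t = 0.5 + 2.0 ** -12                       # close to the triple root 1/2
mono = [1.0, -6.0, 12.0, -8.0]             # p(t) = (1 - 2t)^3, power basis
bez = [1.0, -1.0, 1.0, -1.0]               # same polynomial, Bernstein basis

# Condition numbers S_M(p(t)) and S_B(p(t)) of Section 2.
S_M = sum(abs(a) * abs(t) ** i for i, a in enumerate(mono))
S_B = sum(abs(c) * bernstein(3, i, t) for i, c in enumerate(bez))
assert S_B <= S_M                          # Bernstein is better conditioned

# Forward errors against the exact value, computed with rationals.
exact = (1 - 2 * Fraction(t)) ** 3
err_h = abs(Fraction(horner(mono, t)) - exact)
err_dc = abs(Fraction(de_casteljau(bez, t)) - exact)
assert err_dc < Fraction(1, 10 ** 14)      # well within gamma_9 * S_B
assert err_h < Fraction(1, 10 ** 13)       # well within gamma_6 * S_M
```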

Compensated Evaluation Algorithms for Bézier Curves
It is usual to apply EFTs (see [9] and Section 2.2) in order to devise compensated evaluation algorithms providing more accurate results. Hence, in [22,23] Graillat, Langlois and Louvet devised a compensated Horner algorithm for the evaluation of a polynomial in monomial form. In Theorem 5 of [22] it was proved that the evaluation of a degree n polynomial with the compensated Horner algorithm provides an approximation fl(p(t)) verifying

|p(t) − fl(p(t))| ≤ u|p(t)| + γ²_{2n} S_{M_n}(p(t)).

In [13] a compensated de Casteljau algorithm for the evaluation of polynomial curves in Bernstein-Bézier form was presented. In Theorem 5 of [13] it was proved that the evaluation of a degree n polynomial with the compensated de Casteljau algorithm provides an approximation fl(p(t)) verifying

|p(t) − fl(p(t))| ≤ u|p(t)| + 2γ²_{3n} S_{B_n}(p(t)).
According to the previous bound, for problems where 2γ²_{3n} S_{B_n}(p(t)) / |p(t)| < u, the relative error of the approximations provided by the compensated de Casteljau algorithm is of the order of the unit roundoff u.
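A Python sketch of a compensated de Casteljau algorithm in the spirit of [13] follows; it tracks the rounding error of every corner-cutting step with EFTs and adds the accumulated correction at the end. This is our reading of the scheme, and the details may differ from [13]; all helper names are ours. Near a triple root, where the plain algorithm loses most digits, it recovers essentially full accuracy:

```python
from fractions import Fraction

def two_sum(a, b):
    x = a + b
    z = x - a
    return x, (a - (x - z)) + (b - z)

def split(a):
    c = (2.0 ** 27 + 1.0) * a
    x = c - (c - a)
    return x, a - x

def two_product(a, b):
    x = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    return x, a_lo * b_lo - (((x - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)

def comp_de_casteljau(c, t):
    """Compensated de Casteljau: every corner-cutting step is performed with
    EFTs, the local errors are propagated, and the correction is added."""
    r1, rho = two_sum(1.0, -t)        # r1 = fl(1 - t), rho its rounding error
    f = list(c)
    e = [0.0] * len(f)                # running corrections for the f_i
    n = len(f) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            p1, pi1 = two_product(r1, f[i])
            p2, pi2 = two_product(t, f[i + 1])
            s, sigma = two_sum(p1, p2)
            # local error of this step: (1 - t) f_i + t f_{i+1} - s
            local = pi1 + pi2 + sigma + rho * f[i]
            e[i] = local + (r1 * e[i] + t * e[i + 1])
            f[i] = s
    return f[0] + e[0]

# p(t) = (1 - 2t)^3 in Bezier form, evaluated close to its triple root 1/2.
c = [1.0, -1.0, 1.0, -1.0]
t = 0.5 + 2.0 ** -12
exact = (1 - 2 * Fraction(t)) ** 3
rel_err = abs(Fraction(comp_de_casteljau(c, t)) - exact) / abs(exact)
assert rel_err < Fraction(1, 10 ** 12)    # essentially full accuracy
```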

Evaluation Algorithms for Tensor Product Bézier Surfaces
In CAGD, tensor product polynomial surfaces are usually represented in the Bernstein-Bézier form (see [21]) by using tensor product Bernstein systems. A surface

F(x, y) = ∑_{i=0}^m ∑_{j=0}^n P_{ij} b_i^m(x) b_j^n(y), (x, y) ∈ [0, 1] × [0, 1],   (7)

is called a tensor product Bézier surface.
A tensor product Bézier surface can be evaluated by a de Casteljau-type evaluation algorithm inspired by the de Casteljau evaluation algorithm for Bézier curves (see [21]). By considering the components of the points P_{ij}, the evaluation of (7) reduces to the evaluation of scalar functions. Hence, based on the de Casteljau algorithm for Bézier curves, the corresponding evaluation algorithm for tensor product Bézier surfaces is shown in Algorithm 5.

Algorithm 5
De Casteljau algorithm for the evaluation of F in (7).
for i = 0 to m do
for j = 0 to n do
f_{ij}^{00}(x, y) = P_{ij}
end for
end for
for i = 0 to m do
for s = 1 to n do
for j = 0 to n − s do
f_{ij}^{0s}(x, y) = (1 − y) ⊗ f_{ij}^{0,s−1}(x, y) ⊕ y ⊗ f_{i,j+1}^{0,s−1}(x, y)
end for
end for
end for
for r = 1 to m do
for i = 0 to m − r do
f_{i0}^{rn}(x, y) = (1 − x) ⊗ f_{i0}^{r−1,n}(x, y) ⊕ x ⊗ f_{i+1,0}^{r−1,n}(x, y)
end for
end for
return F(x, y) = f_{00}^{mn}(x, y)
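Algorithm 5 amounts to collapsing each row of control points in the y-direction with the curve algorithm and then collapsing the resulting column in the x-direction. A minimal Python sketch (names are ours):

```python
def de_casteljau(c, t):
    """Univariate de Casteljau algorithm (Algorithm 4)."""
    f = list(c)
    n = len(f) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            f[i] = (1.0 - t) * f[i] + t * f[i + 1]
    return f[0]

def de_casteljau_tp(P, x, y):
    """Evaluate F(x, y) = sum_ij P_ij b_i^m(x) b_j^n(y): first collapse each
    row i in the y-direction (giving f_{i0}^{0n}), then the column in x."""
    col = [de_casteljau(row, y) for row in P]
    return de_casteljau(col, x)

# Bilinear patch with P_ij = i*j, i.e. F(x, y) = x*y; values are binary-exact.
P = [[0.0, 0.0], [0.0, 1.0]]
assert de_casteljau_tp(P, 0.5, 0.25) == 0.125
```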
In Theorem 5 of [7], error analyses of algorithms evaluating tensor product surfaces were performed. Taking into account the roundoff error when computing 1 − t, for the particular case of tensor product Bézier surfaces we have the following error analysis of Algorithm 5.

Theorem 2. Let F(x, y) be given by (7), and let us suppose that 3(m + n)u < 1, where u is the unit roundoff. Then, the value fl(F(x, y)) = f_{00}^{mn} computed in floating point arithmetic through Algorithm 5 satisfies

|F(x, y) − fl(F(x, y))| ≤ γ_{3(m+n)} S_{B_{mn}}(F(x, y)).

Compensated de Casteljau Evaluation Algorithm for Tensor Product Bézier Surfaces
In this section we devise a compensated de Casteljau algorithm for the evaluation of tensor product surfaces, that is, a compensated version of Algorithm 5. In order to track the local errors at each step, the following EFTs will be used:

[P_{1,y}, π_{ij}^{0s}] = TwoProduct(r_y, f_{ij}^{0,s−1}), [P_{2,y}, π'_{ij}^{0s}] = TwoProduct(y, f_{i,j+1}^{0,s−1}), [f_{ij}^{0s}, ξ_{ij}^{0s}] = TwoSum(P_{1,y}, P_{2,y}),

with [r_y, ρ_y] = TwoSum(1, −y), and analogously in the x-direction,

[P_{1,x}, π_{i0}^{rn}] = TwoProduct(r_x, f_{i0}^{r−1,n}), [P_{2,x}, π'_{i0}^{rn}] = TwoProduct(x, f_{i+1,0}^{r−1,n}), [f_{i0}^{rn}, ξ_{i0}^{rn}] = TwoSum(P_{1,x}, P_{2,x}),

with [r_x, ρ_x] = TwoSum(1, −x). Then the local error of each corner-cutting step can be described in terms of the quantities π, π' and ξ returned by these EFTs, together with the error ρ of computing 1 − t. Now let us define the global errors at each step, ∂f_{ij}^{0s} and ∂f_{i0}^{rn}, as the differences between the exact intermediate values and the computed ones; the local errors provide recurrences for these global errors. If the computations of the global errors are performed in exact arithmetic, then

F(x, y) = f_{00}^{mn} + ∂f_{00}^{mn}.   (9)

Taking into account the previous discussion, Algorithm 6 shows the corresponding compensated version of the de Casteljau algorithm for tensor product Bézier surfaces.

Algorithm 6
Compensated de Casteljau algorithm for the evaluation of F in (7).
f_{ij}^{00}(x, y) = P_{ij}, ∂f_{ij}^{00}(x, y) = 0, for i = 0, . . . , m, j = 0, . . . , n
[r_y, ρ_y] = TwoSum(1, −y), [r_x, ρ_x] = TwoSum(1, −x)
for i = 0 to m do
for s = 1 to n do
for j = 0 to n − s do
[P_{1,y}, π_{ij}^{0s}] = TwoProduct(r_y, f_{ij}^{0,s−1}(x, y))
[P_{2,y}, π'_{ij}^{0s}] = TwoProduct(y, f_{i,j+1}^{0,s−1}(x, y))
[f_{ij}^{0s}, ξ_{ij}^{0s}] = TwoSum(P_{1,y}, P_{2,y})
∂f_{ij}^{0s} = ((π_{ij}^{0s} ⊕ π'_{ij}^{0s} ⊕ ξ_{ij}^{0s}) ⊕ ρ_y ⊗ f_{ij}^{0,s−1}) ⊕ (r_y ⊗ ∂f_{ij}^{0,s−1} ⊕ y ⊗ ∂f_{i,j+1}^{0,s−1})
end for
end for
end for
for r = 1 to m do
for i = 0 to m − r do
[P_{1,x}, π_{i0}^{rn}] = TwoProduct(r_x, f_{i0}^{r−1,n}(x, y))
[P_{2,x}, π'_{i0}^{rn}] = TwoProduct(x, f_{i+1,0}^{r−1,n}(x, y))
[f_{i0}^{rn}, ξ_{i0}^{rn}] = TwoSum(P_{1,x}, P_{2,x})
∂f_{i0}^{rn} = ((π_{i0}^{rn} ⊕ π'_{i0}^{rn} ⊕ ξ_{i0}^{rn}) ⊕ ρ_x ⊗ f_{i0}^{r−1,n}) ⊕ (r_x ⊗ ∂f_{i0}^{r−1,n} ⊕ x ⊗ ∂f_{i+1,0}^{r−1,n})
end for
end for
return fl(F(x, y)) = f_{00}^{mn} ⊕ ∂f_{00}^{mn}

Now an error analysis of the compensated de Casteljau algorithm for the evaluation of tensor product surfaces (Algorithm 6) will be carried out. First, an auxiliary result, stating some inequalities satisfied by the Bernstein polynomials, will be proved.

Proof. i.
By using the recurrence relation of the Bernstein polynomials, b_k^n(t) = (1 − t) b_k^{n−1}(t) + t b_{k−1}^{n−1}(t), and iterating this procedure, we obtain all the inequalities in i.
ii. Analogous to i.

Remark 1.
Assuming that (3(m + n) + 4)u < 1, the error bound for the evaluation of tensor product surfaces by the compensated de Casteljau algorithm obtained in the previous theorem is much lower than the error bound corresponding to the usual de Casteljau algorithm in Theorem 2. The assumption (3(m + n) + 4)u < 1 is typical when working in a CAGD framework. In fact, if

γ²_{3(m+n)+4} S_{B_{mn}}(F(x, y)) / |F(x, y)| < u,

then the relative error of the approximation provided by the compensated de Casteljau algorithm is of the order of the unit roundoff u.
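One way to realize the compensated evaluation in code is to keep, for every y-direction row pass and for the final x-direction pass, the pair (value, correction) produced by the compensated curve algorithm, and to propagate the row corrections through one extra plain de Casteljau pass. This is a sketch of the idea rather than a literal transcription of Algorithm 6, and all helper names are ours:

```python
from fractions import Fraction

def two_sum(a, b):
    x = a + b
    z = x - a
    return x, (a - (x - z)) + (b - z)

def split(a):
    c = (2.0 ** 27 + 1.0) * a
    x = c - (c - a)
    return x, a - x

def two_product(a, b):
    x = a * b
    a_hi, a_lo = split(a)
    b_hi, b_lo = split(b)
    return x, a_lo * b_lo - (((x - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)

def de_casteljau(c, t):
    """Plain univariate de Casteljau pass."""
    f = list(c)
    n = len(f) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            f[i] = (1.0 - t) * f[i] + t * f[i + 1]
    return f[0]

def comp_de_casteljau_pair(c, t):
    """Compensated univariate pass returning (value, correction) separately."""
    r1, rho = two_sum(1.0, -t)
    f = list(c)
    e = [0.0] * len(f)
    n = len(f) - 1
    for r in range(1, n + 1):
        for i in range(n - r + 1):
            p1, pi1 = two_product(r1, f[i])
            p2, pi2 = two_product(t, f[i + 1])
            s, sigma = two_sum(p1, p2)
            e[i] = (pi1 + pi2 + sigma + rho * f[i]) + (r1 * e[i] + t * e[i + 1])
            f[i] = s
    return f[0], e[0]

def comp_de_casteljau_tp(P, x, y):
    """Compensated tensor-product evaluation: compensate every y-pass and the
    x-pass, and propagate the row corrections in the x-direction."""
    vals, errs = [], []
    for row in P:
        v, e = comp_de_casteljau_pair(row, y)
        vals.append(v)
        errs.append(e)
    V, E = comp_de_casteljau_pair(vals, x)
    return V + (E + de_casteljau(errs, x))

# F(x, y) = (1 - 2x)^3 (1 - 2y)^3: P_ij = c_i * c_j with c = (1, -1, 1, -1).
c = [1.0, -1.0, 1.0, -1.0]
P = [[ci * cj for cj in c] for ci in c]
x = y = 0.5 + 2.0 ** -10                  # close to the surface's root curves
exact = ((1 - 2 * Fraction(x)) ** 3) * ((1 - 2 * Fraction(y)) ** 3)
rel_err = abs(Fraction(comp_de_casteljau_tp(P, x, y)) - exact) / abs(exact)
assert rel_err < Fraction(1, 10 ** 10)    # near full accuracy despite ill-conditioning
```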

Conclusions
A compensated version of the de Casteljau algorithm for tensor product surfaces has been presented. This new method is carried out with the usual floating point arithmetic and operations, and it uses only the same working precision as the data. In this framework, the following bound for the relative error of the new compensated method has been provided:

u + 2γ²_{3(m+n)+4} S_{B_{mn}}(F(x, y)) / |F(x, y)|,

which is lower than the bound corresponding to the usual method,

γ_{3(m+n)} S_{B_{mn}}(F(x, y)) / |F(x, y)|.
Hence, the new compensated de Casteljau algorithm for tensor product surfaces can be quite useful for ill-conditioned problems.