A New First Order Expansion Formula with a Reduced Remainder

This paper is devoted to a new first-order Taylor-like formula, whose remainder is strongly reduced in comparison with the usual one that appears in the classical Taylor formula. To derive this new formula, we introduce a linear combination of the first derivative of the concerned function, computed at n + 1 equally spaced points between the two points where the function has to be evaluated. We show that an optimal choice of the weights in this linear combination minimizes the corresponding remainder. Then, we analyze the Lagrange P1-interpolation error estimate and the trapezoidal quadrature error, in order to assess the gain in accuracy obtained by using this new Taylor-like formula.


Introduction
Rolle's theorem, and therefore Lagrange's and Taylor's theorems, prevent one from precisely determining the error estimate of numerical methods applied to partial differential equations. Basically, this stems from the existence of a non-unique, unknown point that appears in the remainder of Taylor's expansion, as a heritage of Rolle's theorem. This is the reason why, in the context of finite elements, only asymptotic behaviors are generally considered for the error estimates, which strongly depend on the interpolation error (see, for example, [1] or [2]).
Owing to this lack of information, several heuristic approaches have been considered in order to investigate new possibilities, which rely on a probabilistic approach. Such possibilities enable one to classify numerical methods whose associated data are fixed and not asymptotic (for a review, see [2,3]).
However, an unavoidable fact is that Taylor's formula introduces an unknown point. This leads to the inability to exactly determine the interpolation error and, consequently, the approximation error of a given numerical method. It is thus legitimate to ask whether the corresponding errors are bounded by quantities that are as small as possible.
Here, we focus on the values of the numerical constants that appear in these estimations to minimize them as much as possible.
For example, let us consider the two-dimensional case and the P1-Lagrange interpolation error of a given C² function defined on a given triangle.
One can show that the numerical constant that naturally appears in the corresponding interpolation error estimate [4] is equal to 1/2, as a heritage of the remainder of the first-order Taylor expansion.
Hence, in this paper, we propose a new first-order Taylor-like formula in which we strongly modify the distribution of the numerical weights between the Taylor polynomial and the corresponding remainder.
To this end, we introduce a sequence of (n + 1) equally spaced points and consider a linear combination of the first derivative at these points. We show that an optimal choice of the coefficients in this linear combination minimizes the corresponding remainder. Indeed, the bound on the absolute value of the new remainder becomes 2n times smaller than the classical one obtained with the standard Taylor formula.
As a consequence, we show that the bound of the Lagrange P1-interpolation error estimate, as well as the bound of the absolute quadrature error of the trapezoidal rule, is two times smaller than the usual one obtained with the standard Taylor formula, provided we restrict ourselves to the new Taylor-like formula with n = 1, namely, with two points.
The paper is organized as follows. In Section 2, we present the main result of this paper, the new first-order Taylor-like formula. In Section 3.1, we show the consequences we derived for the interpolation error and, in Section 3.2, for numerical quadrature. Finally, in Section 4, we provide concluding remarks.

The New First-Order Taylor-like Theorem
Let us first recall the well-known first-order Taylor formula (see [5] or [6]).
Let f be a real mapping defined on [a, b], twice differentiable on [a, b], and assume that there exist two real constants m₂ and M₂ such that

∀x ∈ [a, b], m₂ ≤ f″(x) ≤ M₂. (1)

Then, we have

f(b) = f(a) + (b − a) f′(a) + (b − a) ε_{a,1}(b), (2)

where lim_{b→a} ε_{a,1}(b) = 0, and

(b − a)/2 · m₂ ≤ ε_{a,1}(b) ≤ (b − a)/2 · M₂. (3)

In order to derive the main result below, we introduce the function φ defined by φ(t) = f(a + t(b − a)), t ∈ [0, 1]. Then, we remark that φ(0) = f(a) and φ(1) = f(b). Moreover, the remainder ε_{a,1}(b) in (2) satisfies the following result.
Proposition 1. The remainder ε_{a,1}(b) in (2) can be written as follows:

ε_{a,1}(b) = (b − a) ∫₀¹ (1 − t) f″(a + (b − a)t) dt. (4)

Proof. Taylor's formula with the remainder in integral form gives, at first order,

f(b) = f(a) + (b − a) f′(a) + ∫_a^b (b − x) f″(x) dx, (5)

and using the substitution x = a + (b − a)t in the integral of (5), we obtain

∫_a^b (b − x) f″(x) dx = (b − a)² ∫₀¹ (1 − t) f″(a + (b − a)t) dt.

Finally, identifying this expression with the remainder (b − a) ε_{a,1}(b) in (2) gives (4).

Now, let n ∈ ℕ*. We define ε_{a,n+1}(b) by the formula below

f(b) = f(a) + (b − a) Σ_{k=0}^{n} ω_k(n) f′(a + k (b − a)/n) + (b − a) ε_{a,n+1}(b), (6)

where the sequence of real weights ω_k(n) will be determined such that the corresponding remainder ε_{a,n+1}(b) is as small as possible.
In other words, we will prove the following result.
Theorem 1. Let f be a twice differentiable mapping on [a, b] satisfying (1). If the weights ω_k(n) are chosen as

ω₀(n) = ω_n(n) = 1/(2n), and ω_k(n) = 1/n for k = 1, …, n − 1,

then we have the following first-order expansion

f(b) = f(a) + (b − a) [ (f′(a) + f′(b))/(2n) + (1/n) Σ_{k=1}^{n−1} f′(a + k (b − a)/n) ] + (b − a) ε_{a,n+1}(b), (7)

where

|ε_{a,n+1}(b)| ≤ (M₂ − m₂)/(8n) · (b − a). (8)

Moreover, this result is optimal, since the weights involved in (7) guarantee that the remainder ε_{a,n+1}(b) is minimal for the set of equally spaced points considered in the expansion (7).
Remark 1. Formula (7) can be derived by applying the composite trapezoidal quadrature rule (see, for example, [7]) to the integration of the derivative f′ on the interval [a, b]. However, in this way, the corresponding quadrature error does not trivially appear to be minimal. This is the purpose of Theorem 1.
Remark 2. To compare the control of ε_{a,n+1}(b) given by (8) with that of ε_{a,1}(b) given by (3), we remark that the double inequality (3) confines ε_{a,1}(b) to an interval of width (M₂ − m₂)(b − a)/2, while (8) confines ε_{a,n+1}(b) to an interval of width (M₂ − m₂)(b − a)/(4n). Consequently, the bound on the absolute value of the remainder ε_{a,n+1}(b) is 2n times smaller than the one derived for ε_{a,1}(b).
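Since the expansion (7) with the weights of Theorem 1 is fully explicit, it can be checked numerically. The following Python sketch (an illustration, not part of the paper; f = exp on [0, 1] is an arbitrary test case) evaluates the right-hand side of (7) and verifies that the actual remainder stays below the bound (8):

```python
import math

def taylor_like(f, fp, a, b, n):
    # Right-hand side of (7): weights 1/(2n) at the endpoints, 1/n at the
    # n - 1 interior equally spaced points.
    h = b - a
    s = (fp(a) + fp(b)) / (2 * n)
    s += sum(fp(a + k * h / n) for k in range(1, n)) / n
    return f(a) + h * s

a, b = 0.0, 1.0
f, fp = math.exp, math.exp
for n in (1, 2, 4, 8):
    err = abs(f(b) - taylor_like(f, fp, a, b, n))
    # On [0, 1], f'' = exp, so m2 = 1 and M2 = e; bound (8) is (e - 1)/(8n)*(b - a).
    bound = (math.e - 1) / (8 * n) * (b - a)
    assert err <= bound
```

Already for n = 1, the error (about 0.14 here) is far below the classical first-order Taylor error |e − 2| ≈ 0.72, and it keeps shrinking as n grows.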
Remark 3. We also notice that, in Theorem 1, the quantity between parentheses is a Riemann sum which, as n tends to infinity, converges to the classical formula of integral calculus. That is to say,

lim_{n→∞} [ (f′(a) + f′(b))/(2n) + (1/n) Σ_{k=1}^{n−1} f′(a + k (b − a)/n) ] = (1/(b − a)) ∫_a^b f′(t) dt = (f(b) − f(a))/(b − a).

In order to prove Theorem 1, we will need the following lemma.
Lemma 1. Let u be any continuous function on ℝ, and let (a_k)_{0≤k≤n} ∈ ℝ^{n+1}, n ∈ ℕ*, be a sequence of real numbers. Then, we have the formula (10), where the quantities S_k(n) are defined by (11).

Proof. We denote by A_n and B_n the left- and right-hand sides of (10), respectively, and we prove, by induction on n, that A_n = B_n for all n ∈ ℕ*.
If n = 1, the identity is checked directly. Let us now assume that A_n = B_n, and let us show that A_{n+1} = B_{n+1}. Splitting the last term of A_{n+1} and applying the induction hypothesis, we conclude that A_{n+1} = B_{n+1}, which completes the proof.

Let us now prove Theorem 1.
Proof. We start from the definition (6) of ε_{a,n+1}(b), which can be rewritten with the help of Proposition 1. Given that the weights satisfy the closure condition (13), Equation (12) becomes (14). Let us now use Lemma 1 in (14), by choosing the appropriate sequence (a_k) in (10). Thus, using (10), and after a simple substitution, (14) gives (16). Next, to derive a double inequality on ε_{a,n+1}(b), we split the last integral in (16) into the sum (18). Then, considering the constant sign of S_k(n) on each subinterval, (18) and (19) enable us to obtain the two inequalities (20) and (21). Since we also have the bounds (1) on f″, where we set λ as in (23), inequalities (20) and (21) lead to (24), where we define the two polynomials P₁(λ) and P₂(λ). Keeping in mind that we want to minimize ε_{a,n+1}(b), let us determine the value of λ for which the polynomial P(λ) ≡ P₂(λ) − P₁(λ) is minimal.
To this end, let us remark that P(λ) is minimal at the value of λ given by (23). Then, for this value of λ, (24) becomes (27), and finally, by summing over k between 0 and n − 1, we obtain the bound (8). Due to the definition (11) of S_k(n) and to (23) on the one hand, and because, on the other hand, the weights ω_k(n), (k = 0, …, n), satisfy (13), we obtain the relations (28). So, for k = 0, (28) determines ω₀(n), and for k = 1, it determines ω₁(n). Then, step by step, the corresponding weights ω_k(n) are found to be equal to those stated in Theorem 1, which completes the proof of Theorem 1.
Remark 4. Condition (13) on the weights ω_k(n), (k = 0, …, n), in Theorem 1 is a kind of closure condition, since it helps determine ω_n(n), but it is not a restrictive one. Indeed, without the closure condition (13), one would have to consider another expression of ε_{a,n+1}(b) instead of (16). Then, (27) would be replaced by (32), where we assume that (1) holds. Moreover, to obtain (32), we also used the fact that the weights ω_k(n), (k = 0, …, n), may be found with the help of (28) without using the closure condition (13).
More precisely, in this case, one can determine the corresponding values of the weights ω_k(n), (k = 0, …, n). Consequently, from (32), we obtain that the bound on the absolute value of the remainder ε_{a,n+1}(b) is n times smaller than that of the first-order Taylor formula given by (2) and (3). So, by considering the closure condition (13) and the corresponding weights ω_k(n), we slightly improved the result of (32), since the bound on the absolute value of the remainder given by (16) is 2n times smaller than that of the first-order Taylor formula.
Finally, we also observe that formula (7) can be directly obtained by applying the composite trapezoidal rule, with n subintervals, to ∫_a^b f′(x) dx. In addition, we showed that the corresponding remainder is minimized.
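The observation above can be verified directly: the weighted derivative combination in (7) is algebraically identical to the composite trapezoidal rule with n subintervals applied to f′. A short Python check (an illustration, not part of the paper):

```python
import math

def taylor_like_sum(fp, a, b, n):
    # Linear-combination term of the expansion (7): (b - a) times the
    # weighted sum of first-derivative values at n + 1 equally spaced points.
    h = b - a
    return h * ((fp(a) + fp(b)) / (2 * n)
                + sum(fp(a + k * h / n) for k in range(1, n)) / n)

def composite_trapezoid(g, a, b, n):
    # Classical composite trapezoidal rule with n subintervals.
    h = (b - a) / n
    return h * ((g(a) + g(b)) / 2 + sum(g(a + k * h) for k in range(1, n)))

# Both expressions coincide exactly, for any n and any integrand.
for n in (1, 3, 5):
    lhs = taylor_like_sum(math.cos, 0.0, 2.0, n)
    rhs = composite_trapezoid(math.cos, 0.0, 2.0, n)
    assert abs(lhs - rhs) < 1e-12
```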

Application to the Approximation Error
To give added value to Theorem 1, presented in the previous section, this section is devoted to appreciating the resulting differences one can observe in two main applications belonging to the field of numerical analysis. The first one concerns Lagrange polynomial interpolation and the second, numerical quadrature. In these two cases, we will evaluate the corresponding approximation error both with the help of the standard first-order Taylor formula and using the generalized formula (7) derived in Theorem 1.

The Interpolation Error
In this subsection, we consider the first application of the generalized Taylor-like expansion (7) when n = 1. In this case, for any function f which belongs to C²([a, b]), formula (7) can be written as

f(b) = f(a) + (b − a)/2 (f′(a) + f′(b)) + (b − a) ε_{a,2}(b), (34)

where ε_{a,2}(b) satisfies

|ε_{a,2}(b)| ≤ (M₂ − m₂)/8 · (b − a). (35)

As a first application of formulas (34) and (35), we will consider the particular case of the P1-Lagrange interpolation (see [8] or [9]), which consists in interpolating a given function f on [a, b] by a polynomial Π_{[a,b]}(f) of degree less than or equal to one. The corresponding interpolation polynomial is given by

Π_{[a,b]}(f)(x) = ((b − x) f(a) + (x − a) f(b)) / (b − a). (36)

One can remark that, using (36), we have Π_{[a,b]}(f)(a) = f(a) and Π_{[a,b]}(f)(b) = f(b). Our purpose now is to investigate the consequences of formula (34) when one uses it to evaluate the interpolation error e(x), defined by

e(x) = f(x) − Π_{[a,b]}(f)(x),

and to compare it with the classical first-order Taylor formula given by (2).
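The two-point case can be illustrated numerically. The sketch below (not part of the paper; f = sin on [0, 1] is an arbitrary choice) compares the classical first-order remainder with the two-point one and checks the bound corresponding to (35):

```python
import math

a, b = 0.0, 1.0
f, fp = math.sin, math.cos
# Remainder of the classical expansion (2), and of the two-point expansion (34),
# both written in the normalized form (f(b) - expansion) / (b - a).
eps1 = (f(b) - f(a) - (b - a) * fp(a)) / (b - a)
eps2 = (f(b) - f(a) - (b - a) * (fp(a) + fp(b)) / 2) / (b - a)
# On [0, 1], f'' = -sin ranges over [m2, M2] = [-sin(1), 0].
m2, M2 = -math.sin(1.0), 0.0
assert abs(eps2) <= (M2 - m2) / 8 * (b - a)  # two-point bound, as in (35)
assert abs(eps2) < abs(eps1)                 # the two-point remainder is smaller
```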
The standard results [7] regarding the P1-Lagrange interpolation error claim that estimate (37) holds for any function f which belongs to C²([a, b]). This result is usually derived by considering, for a fixed x ∈ ]a, b[, the suitable function g(t) defined on [a, b] by

g(t) = f(t) − Π_{[a,b]}(f)(t) − (f(x) − Π_{[a,b]}(f)(x)) ((t − a)(t − b)) / ((x − a)(x − b)).

Given that g(a) = g(b) = g(x) = 0, and by applying Rolle's theorem twice, one can deduce that there exists ξ_x ∈ ]a, b[ such that g″(ξ_x) = 0. Therefore, after some calculations, one obtains

e(x) = f(x) − Π_{[a,b]}(f)(x) = f″(ξ_x)/2 (x − a)(x − b), (39)

and (37) simply follows.
Still, as one can see from (39), estimation (37) can be improved, since |(x − a)(x − b)| ≤ (b − a)²/4 for all x ∈ [a, b]. Then, (39) leads to

|e(x)| ≤ (M/8)(b − a)², with M = max{|m₂|, |M₂|}, (41)

in place of (37). However, to appreciate the difference between the classical Taylor formula and the new one in (34), we will now reformulate the proof of (41) by using the classical Taylor formula (2). This is the purpose of the following lemma.

Lemma 2. Let f be a function which belongs to C²([a, b]) and satisfies (1); then, the first-order Taylor theorem leads to the following interpolation error estimate:

∀x ∈ [a, b], |e(x)| ≤ (M/8)(b − a)², (42)

where M = max{|m₂|, |M₂|}.
Proof. We begin by writing the Lagrange P1-polynomial Π_{[a,b]}(f) given by (36) with the help of the classical first-order Taylor formula (2). Indeed, in (36), we substitute f(a) and f(b) by their first-order expansions around the point x, where, by the help of (3) and (1), the remainders ε_{x,1}(a) and ε_{x,1}(b) satisfy the bounds (45). Then, (36) gives the expression (46) of Π_{[a,b]}(f)(x), and due to (45), we obtain a bound on |e(x)|, where we used the fact that (b − x)(x − a) ≤ (b − a)²/4. Finally, due to (40), (47) leads to (42).
Let us now derive the corresponding result when one uses the new first-order Taylor-like Formula (34) in the expression of the interpolation polynomial Π [a,b] ( f ) defined by (36).This is the purpose of the following lemma.
Proof. We begin by writing f(a) and f(b) with the help of (34), where ε_{x,2} satisfies (35), with obvious changes in the notations; namely, we have the bounds (51). Then, by substituting f(a) and f(b) in the interpolation polynomial given by (36), we obtain (52), which becomes the expression (53) of the refined polynomial Π*_{[a,b]}(f). Thus, due to (51), (52) together with (53) gives the announced estimate, which completes the proof of this lemma.
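Since the display defining (53) was not reproduced above, the following Python sketch relies on a plausible closed form of the refined polynomial, reconstructed from the proof: Π*(f)(x) = Π(f)(x) + (x − a)(x − b)(f′(b) − f′(a))/(2(b − a)), which is exact for quadratic polynomials. Under this assumption, one can compare the accuracy of Π and Π* numerically:

```python
import math

def p1_interp(f, a, b, x):
    # Standard P1-Lagrange interpolant, as in (36).
    return ((b - x) * f(a) + (x - a) * f(b)) / (b - a)

def refined_interp(f, fp, a, b, x):
    # Reconstructed closed form of the refined polynomial (53) (an assumption):
    # the P1 interpolant plus a quadratic correction built from f'(a) and f'(b).
    return p1_interp(f, a, b, x) + 0.5 * (x - a) * (x - b) * (fp(b) - fp(a)) / (b - a)

a, b = 0.0, 1.0
xs = [i / 200 for i in range(201)]
e1 = max(abs(math.sin(x) - p1_interp(math.sin, a, b, x)) for x in xs)
e2 = max(abs(math.sin(x) - refined_interp(math.sin, math.cos, a, b, x)) for x in xs)
assert e2 < e1 / 2  # the refined interpolant is markedly more accurate here
```

The correction vanishes whenever f′(a) = f′(b), in which case Π* coincides with the standard P1 interpolant.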
Let us now formulate a couple of consequences of Lemmas 2 and 3.

1.
If we consider the refined interpolation polynomial Π*_{[a,b]}(f) defined by (53), we obtain an error estimate (48) that is two times more precise than the one obtained in (42) using the classical Taylor formula; in order to compare (42) and (48), we notice that the constant involved in (48) is twice smaller than the one in (42). Now, the cost of this improvement is that Π*_{[a,b]}(f) is a polynomial of degree less than or equal to two, which requires the computation of f′(a) and f′(b). However, the consequent gain clearly appears in the following application devoted to finite elements. To this end, we consider a Hilbert space V endowed with a norm ‖·‖_V, and a bilinear, continuous, V-elliptic form a(·,·) defined on V × V (57). Moreover, we denote by l(·) a continuous linear form defined on V. So, let u ∈ V be the unique solution to the second-order elliptic variational formulation (VP) defined by:

find u ∈ V such that a(u, v) = l(v), ∀v ∈ V. (58)

Let us also introduce the approximation u_h of u, the solution to the approximate variational formulation (VP)_h defined by:

find u_h ∈ V_h such that a(u_h, v_h) = l(v_h), ∀v_h ∈ V_h, (59)

where V_h is a given linear subspace of V whose dimension is finite. Then, we are in a position to recall Céa's lemma, which can be found in [1], for example:

Lemma 4. Let u be the solution to (58) and u_h the solution to (59). Then, the following inequality holds:

‖u − u_h‖_V ≤ (C/α) inf_{v_h ∈ V_h} ‖u − v_h‖_V, (60)

where C and α are the continuity constant and the ellipticity constant, respectively, of the bilinear form a(·,·) defined in (57).
So, due to Céa's lemma, (60) leads to

‖u − u_h‖_V ≤ (C/α) ‖u − Π_h(u)‖_V, (61)

for any interpolation polynomial Π_h(u) in V_h of the function u. Thus, inequality (61) shows that the approximation error is bounded by the interpolation error. Therefore, suppose one wants to locally guarantee that the upper bound of the interpolation error is not greater than a given threshold ε. If h denotes the local mesh size of a given mesh, then, by setting h = b − a, inequalities (42) and (56) lead to the maximum admissible values of h for each interpolation. It follows that the ratio between the mesh size associated with the classical interpolation and the one associated with the refined interpolation is equal to 1/√2 ≈ 0.707; consequently, the gain is around 30 percent with the refined interpolation polynomial Π*_{[a,b]}(f). This economy in terms of the total number of mesh elements would be even more significant if one considers the extension of this case to a three-dimensional application.
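The 0.707 ratio follows from the factor-two improvement alone, since both error bounds scale as h². A minimal Python sketch (the coefficients below are normalized placeholders, not the paper's constants):

```python
import math

# If the error bound has the form coeff * h**2, the largest admissible local
# mesh size for a fixed accuracy threshold eps is sqrt(eps / coeff).
def max_h(coeff, eps):
    return math.sqrt(eps / coeff)

eps = 1e-4
h_classic = max_h(1.0, eps)  # bound coefficient normalized to 1
h_refined = max_h(0.5, eps)  # halved bound, as with the refined interpolant
# Halving the bound lets h grow by sqrt(2): the classical mesh size is ~0.707
# times the refined one, hence a gain of roughly 30 percent per direction.
assert abs(h_classic / h_refined - 1 / math.sqrt(2)) < 1e-12
```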

2.
We also notice that, if we now consider the particular class of C² functions f defined on ℝ that are (b − a)-periodic, then f′(a) = f′(b), and consequently, the interpolation error e*(x) is equal to e(x), so that (48) applies to the standard interpolation error. In other words, for this class of periodic functions, due to the new first-order Taylor-like formula (34), the interpolation error e(x) provided by (63) is bounded by a quantity that is two times smaller than the one we obtained in (42) using the classical Taylor formula. We highlight that, in this case, there is no extra cost anymore to obtain this more accurate result, since it concerns the standard interpolation error associated with the standard Lagrange P1-polynomial.

3.
Finally, since the refined polynomial Π*_{[a,b]}(f) has a degree less than or equal to two, one would want to compare it with the performance of the corresponding Lagrange polynomial of the same degree. To proceed, we must assume that f belongs to C³([a, b]); then, in [7], one finds that the interpolation error e_T(x) for a Lagrange polynomial of degree less than or equal to two satisfies estimate (64). Consequently, by comparing (64) and (56), provided the given function f is sufficiently smooth (namely, in C³([a, b])), one would prefer to use the Lagrange polynomial of degree less than or equal to two, which leads to a more accurate interpolation error. However, for a function f that only belongs to C²([a, b]), no such result is available for this Lagrange polynomial, and the comparison is not valid anymore.

The Quadrature Error
We now consider, for any integrable function f defined on [a, b], the famous trapezoidal quadrature rule ([7] or [10]), whose formula is given by

∫_a^b f(x) dx ≈ (b − a)/2 (f(a) + f(b)). (65)

We consider (65) due to the fact that this quadrature formula corresponds to approximating the function f by its Lagrange interpolation polynomial Π_{[a,b]}(f) of degree less than or equal to one, which is given by (36). In the literature on numerical integration (see, for example, [7] and [11] or [12]), the following estimation is well known as the trapezoid inequality:

|∫_a^b f(x) dx − (b − a)/2 (f(a) + f(b))| ≤ ((b − a)³/12) sup_{x ∈ [a,b]} |f″(x)|. (66)

Now, we prove a lemma that will propose estimation (66) in an alternative form. It will also extend estimation (67) to twice differentiable functions f that satisfy (1).
Proof. In order to derive estimation (68), we recall that the classical first-order Taylor formula (2) enables us to write the polynomial Π_{[a,b]}(f) as in (46). Then, by integrating (46) between a and b, we obtain (69). However, one can easily show that the polynomial Π_{[a,b]}(f) given by (36) also fulfills

∫_a^b Π_{[a,b]}(f)(x) dx = (b − a)/2 (f(a) + f(b)). (70)

Now, if we introduce the well-known quantity E(f), called the quadrature error, defined by

E(f) = ∫_a^b f(x) dx − (b − a)/2 (f(a) + f(b)),

Equations (69) and (70) lead to the two inequalities (72) and (73), where we used inequality (3) for ε_{x,1}(a) and ε_{x,1}(b), with obvious adaptations.
One can now observe that, in (72) and (73), the two integrals I and J defined by (74) can be computed as follows.
Let us consider in (74) the substitution x = a + (b − a)t; then, we obtain the values (75) and (76). Finally, to obtain an upper bound for |E(f)|, owing to (72), (73), (75), and (76), we obtain (77). Now, we consider the expression of the interpolation polynomial Π_{[a,b]}(f)(x) and transform it with the help of the new first-order Taylor-like formula (34). This will enable us to obtain the following lemma, devoted to the corrected trapezoid formula according to Atkinson's terminology [14].

Lemma 6. Let f be a twice differentiable mapping on [a, b] which satisfies (1).
Then, we have the corrected trapezoidal estimation (78).

Proof. We consider the expression we obtained in (52) for the interpolation polynomial Π_{[a,b]}(f)(x), and we integrate it between a and b to obtain (79), where we used the result obtained by the same substitution that we used to compute the integrals in (74). Then, due to (51), we also have the corresponding inequality, and (79) directly gives the result (78) to be proved.
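Lemma 6 concerns the corrected trapezoid rule. In its standard form, consistent with Atkinson's terminology (the paper's display (78) was not reproduced above, so the formula below is the textbook version, exact for quadratic polynomials), it adds an endpoint-derivative correction to (65). A Python illustration on an arbitrary test case:

```python
import math

def trapezoid(f, a, b):
    # Classical trapezoidal rule, as in (65).
    return (b - a) * (f(a) + f(b)) / 2

def corrected_trapezoid(f, fp, a, b):
    # Standard corrected trapezoidal rule: the trapezoid value plus an
    # endpoint-derivative correction; exact for quadratic polynomials.
    return trapezoid(f, a, b) + (b - a) ** 2 * (fp(a) - fp(b)) / 12

a, b = 0.0, 1.0
exact = math.e - 1.0  # integral of exp over [0, 1]
err_t = abs(trapezoid(math.exp, a, b) - exact)
err_c = abs(corrected_trapezoid(math.exp, math.exp, a, b) - exact)
assert err_c < err_t / 2  # the corrected rule is markedly more accurate here
```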
We conclude this section with several remarks.

1.
We observe that the quadrature error we derived in (78) with the new first-order Taylor-like formula is bounded by a quantity two times smaller than the one we derived in (68) with the help of the classical Taylor formula. Furthermore, Cheng and Sun proved in [15] that the best constant one can expect in (78) is equal to 1/(36√3) ≈ 1/62.35, which is slightly smaller than the 1/48 we found in (79).
2.

For the particular class of C² functions f defined on ℝ that are (b − a)-periodic, the endpoint derivatives coincide, so the corrected trapezoid formula reduces to the classical one. In other words, we find that for this class of periodic functions, the quadrature error of the classical trapezoid formula is two times more accurate than the one we found in (77), where the classical first-order Taylor formula was implemented.

Conclusions and Perspectives
In this paper, we derived a new first-order Taylor-like formula to minimize the unknown remainder, which appears in the classical one.This new formula was composed of a linear combination of the first derivative of a given function, computed at (n + 1) equally spaced points on [a, b].
We also showed that the corresponding new remainder could be minimized using a suitable choice of the set of the weights that appear in the linear combination of the first derivative values at the corresponding points.
As a consequence, the bound on the absolute value of the new remainder is 2n times smaller than the one that appears in the classical first-order Taylor formula.
Next, we considered two well-known application contexts of numerical analysis where the Taylor formula is used: the interpolation error and the quadrature error. We then showed that one can obtain a significant improvement in the corresponding errors. Namely, Lemmas 3 and 6 proved that the upper bounds of these errors are two times smaller than the usual ones estimated with the classical Taylor formula, if one restricts oneself to the class of periodic functions.
Several other applications might be considered for this new first-order Taylor-like formula, for example, the approximation error in ODEs, where the Taylor formula is strongly used to derive the appropriate numerical schemes, or in the context of finite elements.
For this last application, when one considers linear second-order elliptic PDEs, due to Céa's lemma [1], the approximation error is bounded by the interpolation error. Then, the improvement in the interpolation error that we showed in this work, using the interpolation polynomial defined by (53) in comparison with the standard P1-Lagrange polynomial, will consequently impact the accuracy of the approximation error.
Indeed, we highlighted the corresponding gain one may take into account for building meshes, as soon as a given local threshold of accuracy is fixed for the associated approximations.
Other developments may also be considered, e.g., a generalized high-order Taylor-like formula, on the one hand, or its corresponding extension for functions with several variables, on the other hand.

Lemma 3. Let f ∈ C²([a, b]) satisfying (1); then, we have the following interpolation error estimate, for all x ∈ [a, b]:

|e*(x)| ≤ (M₂ − m₂)/8 · (x − a)(b − x) ≤ ((M₂ − m₂)/32)(b − a)², (48)

where e*(x) = f(x) − Π*_{[a,b]}(f)(x).
This estimation holds [13] for any function f twice differentiable on [a, b], the second derivative of which is accordingly bounded on [a, b]. It is also well known [13] that, if f is only C¹ on [a, b], one has estimation (67).