Article

Viscovatov-Like Algorithm of Thiele–Newton’s Blending Expansion for a Bivariate Function

1 Institute of Applied Mathematics, Bengbu University, Bengbu 233030, China
2 School of Science, Bengbu University, Bengbu 233030, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 696; https://doi.org/10.3390/math7080696
Submission received: 18 June 2019 / Revised: 29 July 2019 / Accepted: 30 July 2019 / Published: 2 August 2019
(This article belongs to the Special Issue Multivariate Approximation for solving ODE and PDE)

Abstract: In this paper, Thiele–Newton's blending expansion of a bivariate function is first proposed by combining Thiele's continued fraction in one variable with Taylor's polynomial expansion in another variable. A Viscovatov-like algorithm is then given for computing the coefficients of this rational expansion. Finally, a numerical experiment is presented to illustrate the practicability of the suggested algorithm. The Viscovatov-like algorithm can thus be regarded as the natural generalization of Viscovatov's algorithm for finding the coefficients of Thiele–Newton's blending expansion of a bivariate function.

1. Introduction

The interpolation and expansion of a function are two of the oldest and most interesting branches of computational mathematics and approximation theory. They often have a natural link with corresponding algorithms, such as Newton's interpolatory formula and its divided-difference algorithm, Thiele's interpolating continued fraction and its inverse-difference algorithm, and Thiele's expansion of a univariate function and Viscovatov's algorithm. For a univariate function f, such problems have been extensively investigated, and abundant research results have been achieved. Surveys and a comprehensive literature on single-variable interpolation and expansion can be found in Cheney [1], Hildebrand [2], Davis [3], Alfio et al. [4], Gautschi [5], Burden et al. [6], and the references therein. In comparison with the broad research on and application of univariate interpolation and expansion, however, much less attention has been paid to multivariate interpolation and expansion, and studies of multivariate rational interpolation and expansion are rarer still. Fortunately, some literature does discuss multivariate rational interpolation and expansion problems; we mention the works of Baker et al. [7,8], Kuchminskaya [9], Skorobogatko [10], Siemaszko [11], Viscovatov [12], Graves-Morris [13], Cuyt and Verdonk [14,15,16,17], Möller [18], Zhu et al. [19], Gu et al. [20,21], Tan et al. [22,23,24,25,26,27,28], and the references therein.
From roughly the 1960s to the 1980s, Skorobogatko applied the idea of branching to continued fractions, which ushered in a new era of research on the theory and methods of continued fractions [10]. In 1983, Siemaszko [11] generalized Thiele-type interpolation by a continued fraction in one variable to the multivariate case; Thiele-type branched continued fractions were obtained, and an algorithm was suggested for computing the limiting case of branched continued fractions for bivariate functions. Furthermore, in the 1980s, based on the so-called symmetric branched continued fraction, Cuyt et al. [14,15,16] introduced a symmetric interpolation scheme and studied the expansion of a bivariate function by this method. Following these prior works, in 1995, Zhu et al. [19,22] discussed vector-valued rational interpolants by branched continued fractions, and in 1997, Gu et al. [20,21] investigated matrix-valued rational interpolants. In the meantime, Tan et al. studied bivariate rational interpolants and made substantial contributions in this field [22,23,24,25,26,27,28]. In 2007, Tan summarized the research results concerning the theory of continued fractions in the book The Theory of Continued Fractions and Their Applications, which has played an important role in promoting modern research on continued fractions. In addition, there are several works on the application of continued fractions in image processing, such as Hu and Tan [29,30] and Li et al. [31].
As we all know, Taylor's expansion of a function is probably the best-known and most widely-used formula for function approximation. If f is a function of a single variable x whose derivatives of all orders are uniformly bounded in a neighborhood of ξ, then for each x in that neighborhood, f(x) can be expanded into the following Taylor's formula about ξ:

f(x) = C_0 + C_1 (x - \xi) + C_2 (x - \xi)^2 + \cdots + C_k (x - \xi)^k + \cdots,

where C_k = \frac{1}{k!} f^{(k)}(\xi), k = 0, 1, 2, \ldots. On the other hand, the function f(x) can also be expanded about ξ in terms of a continued fraction, which is of the following form:
f(x) = d_0 + \cfrac{x - \xi}{d_1 + \cfrac{x - \xi}{d_2 + \cfrac{x - \xi}{d_3 + \ddots}}},
where d_k \in R, k = 0, 1, 2, \ldots. The above formula is called Thiele's expansion of f(x) about ξ. There is a famous algorithm, called Viscovatov's algorithm, for computing the coefficients d_0, d_1, d_2, \ldots of Thiele's expansion; see the references [16,28].
Motivated by the results concerning the univariate function, in this paper, we consider the rational expansion by Thiele's continued fraction of a bivariate function and give a Viscovatov-like algorithm for computing its coefficients. As a preliminary to our discussion, Thiele–Newton's interpolation needs to be introduced first. In the works [25,28], Tan et al. suggested the so-called Thiele–Newton's interpolation to construct bivariate interpolants. Its main idea is to combine Thiele's interpolating continued fraction in one variable with Newton's interpolating polynomial in another variable to form a new blended interpolant, which is defined as below:
TN_{m,n}(x, y) = t_0(y) + \cfrac{x - x_0}{t_1(y) + \cfrac{x - x_1}{t_2(y) + \ddots + \cfrac{x - x_{m-1}}{t_m(y)}}},  (1)
where:
t_i(y) = \varphi_{TN}[x_0, \ldots, x_i; y_0] + (y - y_0)\varphi_{TN}[x_0, \ldots, x_i; y_0, y_1] + \cdots + (y - y_0)(y - y_1)\cdots(y - y_{n-1})\varphi_{TN}[x_0, \ldots, x_i; y_0, \ldots, y_n]  (2)
for i = 0, 1, \ldots, m, where X = \{x_i \mid i \in N\} and Y = \{y_j \mid j \in N\} are two sets of points in R, and \varphi_{TN}[x_0, \ldots, x_i; y_0, \ldots, y_j] denotes the blending difference of the function f(x, y) at the points x_0, \ldots, x_i; y_0, \ldots, y_j. Suppose that every blending difference \varphi_{TN}[x_0, \ldots, x_i; y_0, \ldots, y_j] exists. Then, one can easily confirm that:
TN_{m,n}(x_i, y_j) = f(x_i, y_j), \quad i = 0, 1, \ldots, m; \; j = 0, 1, \ldots, n.
The limiting case of Thiele's interpolating continued fraction expansion of a univariate function has been discussed in the literature [26]. Inspired by this limiting case, Thiele–Newton's expansion of a bivariate function is obtained from Equations (1) and (2) when all the points in the sets X = \{x_i \mid i \in N\} and Y = \{y_j \mid j \in N\} coincide with certain points ξ and ζ, respectively; in other words, a bivariate function f(x, y) has Thiele–Newton's expansion of the following form:
f(x, y) = l_0(y) + \cfrac{x - \xi}{l_1(y) + \cfrac{x - \xi}{l_2(y) + \cfrac{x - \xi}{l_3(y) + \ddots}}},  (3)
where:
l_i(y) = a_{i,0} + a_{i,1}(y - \zeta) + a_{i,2}(y - \zeta)^2 + a_{i,3}(y - \zeta)^3 + \cdots  (4)
for all i \in N. This raises the question of how to calculate the unknowns a_{i,j}, i = 0, 1, 2, \ldots; j = 0, 1, 2, \ldots, in Equation (4).
The aim of this paper is to find an algorithm for the computations of the coefficients of Thiele–Newton’s expansion of a bivariate function. The paper is organized as follows. In Section 2, we briefly recall some preliminaries for Thiele’s continued fraction and Thiele–Newton’s blending interpolation. In Section 3, we suggest Thiele–Newton’s blending rational expansion and prove the Viscovatov-like algorithm. In Section 4, numerical examples are given to illustrate the application of the Viscovatov-like algorithm. Throughout the paper, we let N and R stand for the set of natural numbers and the set of real numbers, respectively.

2. Preliminaries

In this section, we briefly review some basic definitions and results for Thiele’s continued fraction, Thiele’s expansion of a univariate function, and blending interpolation. Some surveys and complete literature about the continued fraction could be found in Cuyt et al. [14,15,16], Zhu et al. [19], Gu et al. [20,21], and Tan et al. [25,26,28].
Definition 1.
Assume that G is a subset of the complex plane and X = { x i | i N } is a set of points belonging to G. Suppose, in addition, that f ( x ) is a function defined on G. Let:
f[x_i] = f(x_i), \quad i \in N,
f[x_i, x_j] = \frac{f[x_i] - f[x_j]}{x_i - x_j}, \quad f[x_i, x_j, x_k] = \frac{f[x_i, x_k] - f[x_i, x_j]}{x_k - x_j},
and:
f[x_i, \ldots, x_j, x_k, x_l] = \frac{f[x_i, \ldots, x_j, x_l] - f[x_i, \ldots, x_j, x_k]}{x_l - x_k}.
Then, f [ x i , , x j , x k ] is called the divided difference of f ( x ) with respect to points x i , , x j , x k .
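The recurrence in Definition 1 can be turned directly into code. The following Python sketch (the function name `divided_difference` is ours, not from the paper) computes f[x_0, ..., x_n] by holding the prefix fixed and varying only the last two points, exactly as in the definition; it assumes the points are distinct.

```python
def divided_difference(xs, fs):
    """Divided difference f[x_0, ..., x_n] via the recurrence of
    Definition 1: the prefix x_0, ..., x_{n-2} is held fixed and only
    the last two points vary. Assumes the points in xs are distinct."""
    if len(xs) == 1:
        return fs[0]
    px, pf = list(xs[:-2]), list(fs[:-2])          # fixed prefix
    left = divided_difference(px + [xs[-2]], pf + [fs[-2]])
    right = divided_difference(px + [xs[-1]], pf + [fs[-1]])
    return (right - left) / (xs[-1] - xs[-2])

# For f(x) = x^3, the third-order divided difference equals the leading
# coefficient, 1, at any four distinct points.
print(divided_difference([0, 1, 3, 4], [0, 1, 27, 64]))  # -> 1.0
```

Although the recurrence fixes the prefix, the resulting value agrees with the usual (symmetric) divided difference.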
Definition 2.
Assume that G is a subset of the complex plane and X = { x i | i N } is a set of points in G. Suppose, in addition, that f ( x ) is a function defined on G. We let:
\rho[x_i] = f(x_i), \quad i \in N,
\rho[x_i, x_j] = \frac{x_i - x_j}{\rho[x_i] - \rho[x_j]}, \quad \rho[x_i, x_j, x_k] = \frac{x_k - x_j}{\rho[x_i, x_k] - \rho[x_i, x_j]},
and:
\rho[x_i, \ldots, x_j, x_k, x_l] = \frac{x_l - x_k}{\rho[x_i, \ldots, x_j, x_l] - \rho[x_i, \ldots, x_j, x_k]}.
Then, ρ [ x i , , x j , x k ] is called the inverse difference of f ( x ) with respect to points x i , , x j , x k .
Definition 3.
Assume that G is a subset of the complex plane and X = { x i | i N } G is a set of points. In addition, let f ( x ) be a function defined on G, and let:
R_n(x) = \rho[x_0] + \cfrac{x - x_0}{\rho[x_0, x_1] + \cfrac{x - x_1}{\rho[x_0, x_1, x_2] + \ddots + \cfrac{x - x_{n-1}}{\rho[x_0, x_1, \ldots, x_n]}}},
where ρ [ x 0 , x 1 , , x i ] , i = 0 , 1 , 2 , , n , is the inverse difference of f ( x ) with respect to points x 0 , x 1 , , x i . Then, R n ( x ) is called Thiele’s interpolating continued fraction of order n. It is easy to verify that the rational function satisfies the following conditions:
R_n(x_i) = f(x_i), \quad i = 0, 1, 2, \ldots, n.
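Definitions 2 and 3 suggest a direct computational scheme: build the inverse-difference table column by column, then evaluate the continued fraction from the innermost term outward. A minimal Python sketch (the helper names and the test function are our own choices) is:

```python
def inverse_differences(xs, fs):
    """Coefficients rho[x_0], rho[x_0,x_1], ..., rho[x_0,...,x_n] of
    Thiele's interpolating continued fraction (Definitions 2 and 3)."""
    n = len(xs)
    row = list(fs)                       # row[i] = rho[x_i]
    coeffs = [row[0]]
    for k in range(1, n):
        # lift row[i] (i >= k) from level k-1 to level k
        row = row[:k] + [(xs[i] - xs[k - 1]) / (row[i] - row[k - 1])
                         for i in range(k, n)]
        coeffs.append(row[k])
    return coeffs

def thiele_eval(xs, coeffs, x):
    """Evaluate R_n(x) innermost-first."""
    v = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        v = coeffs[k] + (x - xs[k]) / v
    return v

# Example: interpolate f(x) = (x + 2)/(x^2 + 1) at four points.
f = lambda x: (x + 2) / (x**2 + 1)
xs = [0, 1, 2, 4]
c = inverse_differences(xs, [f(x) for x in xs])
```

The scheme assumes no intermediate inverse difference degenerates (for some functions and node sets the table can hit a division by zero, in which case the nodes must be reordered or the construction modified).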
When all the points in the set X = { x i | i N } are coincident with a certain point ξ G , Thiele’s expansion of a univariate function f ( x ) at x = ξ is obtained as follows:
f(x) = d_0 + \cfrac{x - \xi}{d_1 + \cfrac{x - \xi}{d_2 + \cfrac{x - \xi}{d_3 + \ddots}}},
where d k R , k = 0 , 1 , 2 , . Moreover, if f ( x ) is a function with derivatives of all orders in a neighborhood ( ξ ) , then Taylor’s expansion of the function f ( x ) at x = ξ is denoted as below:
f(x) = \sum_{n=0}^{\infty} C_n^{(0)} (x - \xi)^n,
where C_n^{(0)} = \frac{1}{n!} f^{(n)}(\xi), n = 0, 1, 2, \ldots. A famous method, called Viscovatov's algorithm (see [16,28]), is available for computing the coefficients d_0, d_1, d_2, \ldots of Thiele's expansion; it is formulated as follows.
Algorithm 1. Viscovatov's algorithm to calculate the coefficients d_0, d_1, d_2, \ldots:
C_i^{(0)} = f^{(i)}(\xi)/i!, \quad i = 0, 1, 2, \ldots,
d_0 = C_0^{(0)}, \quad d_1 = 1/C_1^{(0)},
C_i^{(1)} = -C_{i+1}^{(0)}/C_1^{(0)}, \quad i \geq 1,
d_l = C_1^{(l-2)}/C_1^{(l-1)}, \quad l \geq 2,
C_i^{(l)} = C_{i+1}^{(l-2)} - d_l C_{i+1}^{(l-1)}, \quad i \geq 1, \; l \geq 2.
Remark 1.
Clearly, by applying Viscovatov’s algorithm, we can carry out computations step by step for the coefficients d 0 , d 1 , d 2 , .
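As a check of Algorithm 1, here is a short Python transcription (our own naming; exact rational arithmetic via the fractions module avoids rounding). Note the minus sign in C_i^{(1)} = -C_{i+1}^{(0)}/C_1^{(0)}, which is needed for the truncated continued fraction to agree with the Taylor series term by term. For f(x) = e^x at ξ = 0, the first six coefficients come out as 1, 1, -2, -3, 2, 5.

```python
from fractions import Fraction
from math import factorial

def viscovatov(c):
    """Algorithm 1: from Taylor coefficients c[i] = f^(i)(xi)/i!,
    return the coefficients d_0, ..., d_{len(c)-1} of Thiele's
    expansion of f about xi."""
    n = len(c)
    d = [c[0], 1 / c[1]]
    prev2 = list(c)                                           # C^(0)_i
    prev1 = [0] + [-c[i + 1] / c[1] for i in range(1, n - 1)]  # C^(1)_i
    for l in range(2, n):
        d.append(prev2[1] / prev1[1])        # d_l = C_1^(l-2) / C_1^(l-1)
        cur = [0] + [prev2[i + 1] - d[l] * prev1[i + 1]
                     for i in range(1, len(prev1) - 1)]
        prev2, prev1 = prev1, cur
    return d

# Taylor coefficients of e^x about 0, kept exact.
c = [Fraction(1, factorial(k)) for k in range(6)]
print(viscovatov(c) == [1, 1, -2, -3, 2, 5])  # -> True
```

Truncating the resulting continued fraction after d_3 and re-expanding reproduces the Taylor series of e^x through the x^3 term, which is the correspondence the algorithm guarantees.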
In [25,28], Tan et al. suggested the method known as Thiele–Newton's blending interpolation to construct bivariate interpolants. Before introducing the method, we recall the definition of the blending difference.
Definition 4.
Assume that Π m , n = X m × Y n , where X m = { x i | i = 0 , 1 , 2 , , m } [ a , b ] R and Y n = { y j | j = 0 , 1 , 2 , , n } [ c , d ] R are two sets of points. Suppose that f ( x , y ) is a function of two variables defined on D = [ a , b ] × [ c , d ] . Let:
\varphi_{TN}[x_i; y_j] = f(x_i, y_j), \quad (x_i, y_j) \in D,
\varphi_{TN}[x_i; y_p, y_q] = \frac{\varphi_{TN}[x_i; y_q] - \varphi_{TN}[x_i; y_p]}{y_q - y_p},
\varphi_{TN}[x_i; y_p, \ldots, y_q, y_r, y_s] = \frac{\varphi_{TN}[x_i; y_p, \ldots, y_q, y_s] - \varphi_{TN}[x_i; y_p, \ldots, y_q, y_r]}{y_s - y_r},
\varphi_{TN}[x_i, x_j; y_p] = \frac{x_j - x_i}{\varphi_{TN}[x_j; y_p] - \varphi_{TN}[x_i; y_p]},
\varphi_{TN}[x_i, \ldots, x_j, x_k, x_l; y_p] = \frac{x_l - x_k}{\varphi_{TN}[x_i, \ldots, x_j, x_l; y_p] - \varphi_{TN}[x_i, \ldots, x_j, x_k; y_p]}
and:
\varphi_{TN}[x_i, \ldots, x_l; y_p, \ldots, y_q, y_r, y_s] = \frac{\varphi_{TN}[x_i, \ldots, x_l; y_p, \ldots, y_q, y_s] - \varphi_{TN}[x_i, \ldots, x_l; y_p, \ldots, y_q, y_r]}{y_s - y_r}.
Then, φ T N [ x 0 , , x i ; y 0 , , y j ] is called Thiele–Newton’s blending difference of f ( x , y ) with respect to the set of points Π i , j .
Remark 2.
From Definition 4, it is easy to see that the first group of recurrence relations for Thiele–Newton's blending difference \varphi_{TN}[x_0, \ldots, x_i; y_0, \ldots, y_j] are just inverse differences of f(x, y) with respect to the variable x, while the second group are just divided differences of f(x, y) with respect to the variable y.
Next, recall Thiele–Newton’s interpolation T N m , n ( x , y ) , as shown in Equations (1) and (2). In order to calculate this rational interpolation, we need to utilize the following algorithm whose main operation is matrix transformations (see [23,28]).
Algorithm 2. Four main steps for the algorithm to calculate Thiele–Newton’s interpolation are as follows:
  • Step 1: Initialization. For i = 0 , 1 , , m ; j = 0 , 1 , , n , let f i , j ( 0 , 0 ) = f ( x i , y j ) . Define the following initial information matrix:
    M_0 = \begin{pmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(0,0)} & \cdots & f_{m,0}^{(0,0)} \\ f_{0,1}^{(0,0)} & f_{1,1}^{(0,0)} & \cdots & f_{m,1}^{(0,0)} \\ \vdots & \vdots & & \vdots \\ f_{0,n}^{(0,0)} & f_{1,n}^{(0,0)} & \cdots & f_{m,n}^{(0,0)} \end{pmatrix}.
  • Step 2: Thiele’s recursion along the X -axis. For j = 0 , 1 , , n ; p = 1 , 2 , , m ; i = p , p + 1 , , m , compute:
    f_{i,j}^{(p,0)} = \frac{x_i - x_{p-1}}{f_{i,j}^{(p-1,0)} - f_{p-1,j}^{(p-1,0)}}
    and construct the following information matrix:
    M_1 = \begin{pmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(1,0)} & \cdots & f_{m,0}^{(m,0)} \\ f_{0,1}^{(0,0)} & f_{1,1}^{(1,0)} & \cdots & f_{m,1}^{(m,0)} \\ \vdots & \vdots & & \vdots \\ f_{0,n}^{(0,0)} & f_{1,n}^{(1,0)} & \cdots & f_{m,n}^{(m,0)} \end{pmatrix}.
  • Step 3: Newton’s recursion along the Y -axis. For i = 0 , 1 , , m ; q = 1 , 2 , , n ; j = q , q + 1 , , n , compute:
    f_{i,j}^{(i,q)} = \frac{f_{i,j}^{(i,q-1)} - f_{i,q-1}^{(i,q-1)}}{y_j - y_{q-1}}
    and construct the following information matrix:
    M_2 = \begin{pmatrix} f_{0,0}^{(0,0)} & f_{1,0}^{(1,0)} & \cdots & f_{m,0}^{(m,0)} \\ f_{0,1}^{(0,1)} & f_{1,1}^{(1,1)} & \cdots & f_{m,1}^{(m,1)} \\ \vdots & \vdots & & \vdots \\ f_{0,n}^{(0,n)} & f_{1,n}^{(1,n)} & \cdots & f_{m,n}^{(m,n)} \end{pmatrix}.
  • Step 4: Establish Thiele–Newton’s interpolation. For i = 0 , 1 , , m , let:
    t_{i,n}(y) = f_{i,0}^{(i,0)} + (y - y_0) f_{i,1}^{(i,1)} + \cdots + (y - y_0)(y - y_1)\cdots(y - y_{n-1}) f_{i,n}^{(i,n)}.
    Then, Thiele–Newton’s interpolation is established as follows:
    TN_{m,n}(x, y) = t_{0,n}(y) + \cfrac{x - x_0}{t_{1,n}(y) + \cfrac{x - x_1}{t_{2,n}(y) + \ddots + \cfrac{x - x_{m-1}}{t_{m,n}(y)}}},  (5)
    which satisfies:
    TN_{m,n}(x_i, y_j) = f(x_i, y_j)  (6)
    for i = 0 , 1 , , m ; j = 0 , 1 , , n .
Remark 3.
Obviously, for any i { 0 , 1 , , m } , by using the elements f i , j ( i , j ) , j = 0 , 1 , , n , in the ( i + 1 ) th column of the information matrix M 2 , Newton’s interpolating polynomial t i , n ( y ) with respect to the variable y can be constructed.
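Algorithm 2 is straightforward to implement. The sketch below (our function names; the small test function f(x, y) = 1/(1 + x + y^2) and grid are our own choices, picked so that all inverse differences stay well defined) builds M2 in place and then evaluates TN_{m,n} with Horner's scheme in y followed by innermost-first evaluation of the continued fraction in x.

```python
def tn_matrix(xs, ys, F):
    """Algorithm 2, Steps 1-3: from samples F[j][i] = f(x_i, y_j),
    build the information matrix M2, i.e. M[j][i] = f_{i,j}^{(i,j)}."""
    m, n = len(xs) - 1, len(ys) - 1
    M = [row[:] for row in F]                       # Step 1: M0
    for p in range(1, m + 1):                       # Step 2: Thiele along x
        for j in range(n + 1):
            for i in range(p, m + 1):
                M[j][i] = (xs[i] - xs[p - 1]) / (M[j][i] - M[j][p - 1])
    for q in range(1, n + 1):                       # Step 3: Newton along y
        for i in range(m + 1):
            for j in range(q, n + 1):
                M[j][i] = (M[j][i] - M[q - 1][i]) / (ys[j] - ys[q - 1])
    return M

def tn_eval(xs, ys, M, x, y):
    """Step 4: evaluate Thiele-Newton's interpolant TN_{m,n}(x, y)."""
    m, n = len(xs) - 1, len(ys) - 1
    t = []
    for i in range(m + 1):                          # t_{i,n}(y) by Horner
        v = M[n][i]
        for j in range(n - 1, -1, -1):
            v = M[j][i] + (y - ys[j]) * v
        t.append(v)
    v = t[m]                                        # continued fraction in x
    for i in range(m - 1, -1, -1):
        v = t[i] + (x - xs[i]) / v
    return v

f = lambda x, y: 1.0 / (1 + x + y * y)
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 2.0]
M = tn_matrix(xs, ys, [[f(x, y) for x in xs] for y in ys])
```

By construction, the interpolant reproduces f at every grid node (x_i, y_j), which is exactly the property (6).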

3. Thiele–Newton’s Blending Expansion and the Viscovatov-Like Algorithm

In this section, our main objective is to expound on Thiele–Newton’s blending rational expansion of a bivariate function and show the Viscovatov-like algorithm that finds the coefficients of Thiele–Newton’s expansion.

3.1. Thiele–Newton’s Blending Expansion

Definition 5.
Assume that Π = X × Y with Π D = [ a , b ] × [ c , d ] , where X = { x i | i = 0 , 1 , 2 , } [ a , b ] R and Y = { y j | j = 0 , 1 , 2 , } [ c , d ] R are two sets of points. Suppose, in addition, that the point ( ξ , ζ ) D and f ( x , y ) is a bivariate function defined on D. Let all the points in the set X = { x i | i = 0 , 1 , 2 , } and Y = { y j | j = 0 , 1 , 2 , } be coincident with the given points ξ and ζ, respectively. Then, Thiele–Newton’s interpolation T N m , n ( x , y ) of a bivariate function f ( x , y ) defined in Section 2 turns into Thiele–Newton’s blending expansion of the bivariate function f ( x , y ) as shown below:
f(x, y) = d_0(y) + \cfrac{x - \xi}{d_1(y) + \cfrac{x - \xi}{d_2(y) + \cfrac{x - \xi}{d_3(y) + \ddots}}},  (7)
where:
d_i(y) = a_{i,0} + a_{i,1}(y - \zeta) + a_{i,2}(y - \zeta)^2 + a_{i,3}(y - \zeta)^3 + \cdots  (8)
for any i N .
Obviously, the main topic for further discussion is how to calculate the coefficients d_i(y), i = 0, 1, 2, \ldots, in Equation (7), or in other words, how to compute the coefficients a_{i,j}, i = 0, 1, 2, \ldots; j = 0, 1, 2, \ldots, in Equation (8). To handle this problem in the bivariate case, we introduce the following algorithm.

3.2. Viscovatov-Like Algorithm

Suppose that f(x, y) is a function of two variables x and y. If y is held constant, say y = ζ, then f(x, ζ) is a function of the single variable x. Likewise, f(ξ, y) is a function of the single variable y when x is held constant at x = ξ. We write D_x^m f(x, y) for the m-th order partial derivative of f(x, y) with respect to x, and D_y^n f(x, y) for the n-th order partial derivative with respect to y; D_x^m f(ξ, ζ) and D_y^n f(ξ, ζ) denote the values of D_x^m f(x, y) and D_y^n f(x, y) at the point (x, y) = (ξ, ζ), respectively. Let:
C_k^{(0)}(y) = \frac{1}{k!} D_x^k f(\xi, y), \quad k = 0, 1, 2, \ldots.
Then, the bivariate function f ( x , y ) can be expanded formally about the point ξ as follows:
f(x, y) = C_0^{(0)}(y) + C_1^{(0)}(y)(x - \xi) + \cdots + C_k^{(0)}(y)(x - \xi)^k + \cdots.  (9)
From Equations (7)–(9), we obtain the Viscovatov-like algorithm, which computes the coefficients d_i(y), i = 0, 1, 2, \ldots, and a_{i,j}, i = 0, 1, 2, \ldots; j = 0, 1, 2, \ldots, of Thiele–Newton's expansion, as follows.
Algorithm 3. Viscovatov-like algorithm to calculate the coefficients d_i(y), i = 0, 1, 2, \ldots, and a_{i,j}, i = 0, 1, 2, \ldots; j = 0, 1, 2, \ldots:
C_i^{(0)}(y) = D_x^i f(\xi, y)/i!, \quad i = 0, 1, 2, \ldots,
d_0(y) = C_0^{(0)}(y) = f(\xi, y), \quad d_1(y) = 1/C_1^{(0)}(y),
C_i^{(1)}(y) = -C_{i+1}^{(0)}(y)/C_1^{(0)}(y), \quad i \geq 1,
d_l(y) = C_1^{(l-2)}(y)/C_1^{(l-1)}(y), \quad l \geq 2,
C_i^{(l)}(y) = C_{i+1}^{(l-2)}(y) - d_l(y) C_{i+1}^{(l-1)}(y), \quad i \geq 1, \; l \geq 2,
a_{i,j} = D_y^j d_i(\zeta)/j!, \quad i = 0, 1, 2, \ldots; \; j = 0, 1, 2, \ldots.
Proof of Algorithm 3.
First, we compute the coefficients d 0 ( y ) and d 1 ( y ) . Considering the two expansions (7) and (9), we have:
d_0(y) + \cfrac{x - \xi}{d_1(y) + \cfrac{x - \xi}{d_2(y) + \ddots}} = C_0^{(0)}(y) + C_1^{(0)}(y)(x - \xi) + C_2^{(0)}(y)(x - \xi)^2 + \cdots.  (10)
Letting x = ξ in Equation (10), one clearly gets:

d_0(y) = C_0^{(0)}(y).  (11)
Combining Equation (11) with Equation (10), we have:
d_1(y) + \cfrac{x - \xi}{d_2(y) + \cfrac{x - \xi}{d_3(y) + \ddots}} = \cfrac{1}{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + C_3^{(0)}(y)(x - \xi)^2 + \cdots}.  (12)
Let x = ξ in Equation (12). Then, we can easily obtain:
d_1(y) = \frac{1}{C_1^{(0)}(y)}.  (13)
Next, by mathematical induction, we shall prove that the following equation:
d_l(y) = \frac{C_1^{(l-2)}(y)}{C_1^{(l-1)}(y)}  (14)
is true for all l 2 .
When l = 2 , we shall verify that Equation (14) holds. Substituting Equation (13) into Equation (12), we have:
\cfrac{x - \xi}{d_2(y) + \cfrac{x - \xi}{d_3(y) + \ddots}} = \cfrac{1}{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + \cdots} - \cfrac{1}{C_1^{(0)}(y)} = \cfrac{-(x - \xi)\left[C_2^{(0)}(y) + C_3^{(0)}(y)(x - \xi) + \cdots\right]}{C_1^{(0)}(y)\left[C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + \cdots\right]},  (15)
which implies that:
d_2(y) + \cfrac{x - \xi}{d_3(y) + \ddots} = \cfrac{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + \cdots}{-\left[C_2^{(0)}(y) + C_3^{(0)}(y)(x - \xi) + \cdots\right]\big/ C_1^{(0)}(y)}.  (16)
Let:
C_i^{(1)}(y) = -\frac{C_{i+1}^{(0)}(y)}{C_1^{(0)}(y)}, \quad i = 1, 2, 3, \ldots.  (17)
Then, it follows from the identity (16) that:
d_2(y) + \cfrac{x - \xi}{d_3(y) + \ddots} = \cfrac{C_1^{(0)}(y) + C_2^{(0)}(y)(x - \xi) + \cdots}{C_1^{(1)}(y) + C_2^{(1)}(y)(x - \xi) + \cdots}.  (18)
Using x = ξ in Equation (18) yields:
d_2(y) = \frac{C_1^{(0)}(y)}{C_1^{(1)}(y)},  (19)
which implies that Equation (14) is true for l = 2 .
For l \geq 3, assume that Equation (14) holds for l = n. Then, let us prove that Equation (14) is also true for l = n + 1.
By the induction hypothesis, the equation:

d_n(y) = \frac{C_1^{(n-2)}(y)}{C_1^{(n-1)}(y)}  (20)

holds.
Referring to Equation (18), we assume that the following equation:
d_n(y) + \cfrac{x - \xi}{d_{n+1}(y) + \ddots} = \cfrac{C_1^{(n-2)}(y) + C_2^{(n-2)}(y)(x - \xi) + \cdots}{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + \cdots}  (21)
is true, where:
C_i^{(k)}(y) = C_{i+1}^{(k-2)}(y) - d_k(y) C_{i+1}^{(k-1)}(y), \quad k = n - 2, n - 1; \; n \geq 2; \; i = 1, 2, 3, \ldots.  (22)
Combining Equation (20) with Equation (21), one has:
\cfrac{x - \xi}{d_{n+1}(y) + \ddots} = \cfrac{C_1^{(n-2)}(y) + C_2^{(n-2)}(y)(x - \xi) + \cdots}{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + \cdots} - d_n(y) = \cfrac{(x - \xi)\left\{\left[C_2^{(n-2)}(y) - d_n(y) C_2^{(n-1)}(y)\right] + \left[C_3^{(n-2)}(y) - d_n(y) C_3^{(n-1)}(y)\right](x - \xi) + \cdots\right\}}{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + \cdots}.  (23)
Let:
C_i^{(n)}(y) = C_{i+1}^{(n-2)}(y) - d_n(y) C_{i+1}^{(n-1)}(y), \quad i = 1, 2, 3, \ldots.  (24)
Then, Equation (23) is rewritten as follows:
d_{n+1}(y) + \cfrac{x - \xi}{d_{n+2}(y) + \ddots} = \cfrac{C_1^{(n-1)}(y) + C_2^{(n-1)}(y)(x - \xi) + \cdots}{C_1^{(n)}(y) + C_2^{(n)}(y)(x - \xi) + \cdots}.  (25)
Using the above Equation (25) with x = ξ produces:
d_{n+1}(y) = \frac{C_1^{(n-1)}(y)}{C_1^{(n)}(y)},  (26)
which means that Equation (14) holds for l = n + 1 .
As shown above, Equation (14) is true for all l \geq 2 by mathematical induction. In the course of the proof, we have also shown that Equation (24) holds for any n \geq 2.
Moreover, by differentiating Equation (8) j times with respect to the variable y, one has:
D_y^j d_i(y) = a_{i,j}\, j! + a_{i,j+1} \frac{(j+1)!}{1!}(y - \zeta) + a_{i,j+2} \frac{(j+2)!}{2!}(y - \zeta)^2 + \cdots.  (27)
Setting y = ζ in Equation (27), we immediately obtain:
a_{i,j} = \frac{D_y^j d_i(\zeta)}{j!}  (28)
for i N and j N .
Therefore, associating Equation (28) with Equations (11), (13), (14), (17), and (24), we have shown the desired conclusion denoted by Algorithm 3. This completes the proof. ☐
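Algorithm 3 can be carried out numerically with truncated power series in y. The sketch below (our own helper names; exact arithmetic with the fractions module) represents each C_i^{(l)}(y) by its first N series coefficients; applied to the function f_1(x, y) = \sum x^i y^j/(i + j + 1) treated in Section 4, it produces the leading coefficients a_{i,j} directly.

```python
from fractions import Fraction as Fr

N = 6  # truncation order of the series in y

def smul(a, b):
    """Truncated product of two power series (coefficient lists)."""
    return [sum(a[k] * b[j - k] for k in range(j + 1)) for j in range(N)]

def sinv(a):
    """Truncated reciprocal of a power series with a[0] != 0."""
    r = [1 / a[0]]
    for j in range(1, N):
        r.append(-sum(a[k] * r[j - k] for k in range(1, j + 1)) / a[0])
    return r

def viscovatov_like(C0, m):
    """Algorithm 3 (a sketch): C0[i] holds the series of C_i^{(0)}(y);
    returns the series d_0(y), ..., d_m(y), whose coefficients are a_{i,j}."""
    d = [C0[0][:], sinv(C0[1])]
    prev2 = C0                                      # C^(0)
    prev1 = [None] + [[-c for c in smul(C0[i + 1], sinv(C0[1]))]
                      for i in range(1, len(C0) - 1)]   # C^(1)
    for l in range(2, m + 1):
        d.append(smul(prev2[1], sinv(prev1[1])))    # d_l = C_1^(l-2)/C_1^(l-1)
        cur = [None] + [[u - v for u, v in
                         zip(prev2[i + 1], smul(d[l], prev1[i + 1]))]
                        for i in range(1, len(prev1) - 1)]
        prev2, prev1 = prev1, cur
    return d

# f_1(x, y) = sum x^i y^j / (i + j + 1), so C_i^{(0)}(y) = sum_j y^j/(i+j+1)
C0 = [[Fr(1, i + j + 1) for j in range(N)] for i in range(8)]
d = viscovatov_like(C0, 3)
# d[1] begins 2, -4/3, -1/9, -8/135; d[2] begins -3/4, -7/16, ...
```

Because every operation is a truncated series product, reciprocal, or difference, the leading coefficients are exact; differentiating would only be needed if d_i(y) were known in closed form rather than as a series.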

4. Numerical Experiments

In this section, we give the results of numerical experiments comparing the efficiency of Thiele–Newton's blending expansion (7) with the series expansion of bivariate functions.
For |x| < 1, |y| < 1, and x \neq y, consider the following two test functions:
f_1(x, y) = \frac{1}{y - x}\left[\ln(1 - x) - \ln(1 - y)\right]  (29)
and:
f_2(x, y) = \frac{x^2}{(1 - x)(x - y)^2} + \frac{y^2}{(1 - y)(x - y)^2} + \frac{2xy\left[\ln(1 - x) - \ln(1 - y)\right]}{(x - y)^3},  (30)
where \ln(z) denotes the natural logarithm of z (logarithm to base e). We shall discuss Thiele–Newton's blending expansions of Equations (29) and (30), respectively.
First of all, let us consider Thiele–Newton's blending expansion of the bivariate function f_1(x, y) defined by Equation (29) at the point (ξ, ζ) = (0, 0). Using the Viscovatov-like algorithm, we can obtain the coefficients of Thiele–Newton's expansion of f_1(x, y), denoted a_{i,j}^{f_1}. Some values of a_{i,j}^{f_1} are shown in Table 1.
Thus, Thiele–Newton’s blending expansion of f 1 ( x , y ) at ( ξ , ζ ) = ( 0 , 0 ) is denoted in the form:
R^{f_1}(x, y) = 1 + \frac{y}{2} + \frac{y^2}{3} + \frac{y^3}{4} + \cdots + \cfrac{x}{2 - \frac{4}{3}y - \frac{1}{9}y^2 - \frac{8}{135}y^3 - \cdots + \cfrac{x}{-\frac{3}{4} - \frac{7}{16}y - \frac{293}{960}y^2 - \frac{299}{1280}y^3 - \cdots + \cfrac{x}{16 - \frac{88}{15}y - \frac{191}{225}y^2 - \cdots + \ddots}}}.  (31)
For m = 2, n = 3, truncating the expansion R^{f_1}(x, y) in Equation (31) gives:

R_{2,3}^{f_1}(x, y) = 1 + \frac{y}{2} + \frac{y^2}{3} + \frac{y^3}{4} + \cfrac{x}{2 - \frac{4}{3}y - \frac{1}{9}y^2 - \frac{8}{135}y^3 + \cfrac{x}{-\left(\frac{3}{4} + \frac{7}{16}y + \frac{293}{960}y^2 + \frac{299}{1280}y^3\right)}}.  (32)
On the other hand, the bivariate function f_1(x, y) defined by Equation (29) can be expanded at the point (ξ, ζ) = (0, 0) by means of the Appell series F_1^{f_1}(a, b, c; d; x, y), defined for |x| < 1, |y| < 1, and x \neq y by the following bivariate series (see [17]):
F_1^{f_1}(a, b, c; d; x, y) = \sum_{i,j=0}^{\infty} \frac{(a)_{i+j}(b)_i(c)_j}{(d)_{i+j}\, i!\, j!}\, x^i y^j,  (33)
where a = b = c = 1 , d = 2 , and the Pochhammer symbol ( τ ) k represents the rising factorial:
(\tau)_k = \tau(\tau + 1)(\tau + 2)\cdots(\tau + k - 1)  (34)
for any τ R + (see [17,32]). In particular, ( 1 ) 0 = 1 , ( 1 ) k = k ! , ( 2 ) k = ( k + 1 ) ! .
For Equation (33), the following polynomial:
F_{1\,m,n}^{f_1}(a, b, c; d; x, y) = \sum_{i=0}^{m}\sum_{j=0}^{n} \frac{(a)_{i+j}(b)_i(c)_j}{(d)_{i+j}\, i!\, j!}\, x^i y^j  (35)
is defined as the ( m , n ) -order truncated Appell series, where m N and n N .
By Equations (33)–(35), we have:
f_1(x, y) = F_1^{f_1}(1, 1, 1; 2; x, y) = \sum_{i,j=0}^{\infty} \frac{1}{i + j + 1}\, x^i y^j  (36)
and for m = 2 , n = 3 , the ( 2 , 3 ) -order truncated Appell series is given by:
F_{1\,2,3}^{f_1}(1, 1, 1; 2; x, y) = 1 + \frac{x}{2} + \frac{x^2}{3} + \frac{y}{2} + \frac{xy}{3} + \frac{x^2 y}{4} + \frac{y^2}{3} + \frac{x y^2}{4} + \frac{x^2 y^2}{5} + \frac{y^3}{4} + \frac{x y^3}{5} + \frac{x^2 y^3}{6}.  (37)
Second, performing similar operations for the bivariate function f_2(x, y) defined by Equation (30) gives the coefficients of Thiele–Newton's expansion, denoted a_{i,j}^{f_2}. Some values of a_{i,j}^{f_2} are listed in Table 2.
Therefore, according to the values of a_{i,j}^{f_2} in Table 2, Thiele–Newton's blending expansion of f_2(x, y) at (ξ, ζ) = (0, 0) can be written as:

R^{f_2}(x, y) = 1 + y + y^2 + y^3 + \cdots + \cfrac{x}{1 - \frac{4}{3}y + \frac{5}{18}y^2 + \frac{4}{135}y^3 + \cdots + \cfrac{x}{-1 - \frac{7}{6}y - \frac{221}{180}y^2 - \frac{151}{120}y^3 - \cdots + \ddots}}.  (38)
The corresponding truncated Thiele–Newton’s blending expansion R 2 , 3 f 2 ( x , y ) of R f 2 ( x , y ) is:
R_{2,3}^{f_2}(x, y) = 1 + y + y^2 + y^3 + \cfrac{x}{1 - \frac{4}{3}y + \frac{5}{18}y^2 + \frac{4}{135}y^3 + \cfrac{x}{-\left(1 + \frac{7}{6}y + \frac{221}{180}y^2 + \frac{151}{120}y^3\right)}}.  (39)
By a similar technique, consider the Appell series for the bivariate function f 2 ( x , y ) expanded about the point ( ξ , ζ ) = ( 0 , 0 ) ,
F_1^{f_2}(1, 2, 2; 2; x, y) = \sum_{i,j=0}^{\infty} \frac{(1)_{i+j}(2)_i(2)_j}{(2)_{i+j}\, i!\, j!}\, x^i y^j = \sum_{i,j=0}^{\infty} \frac{(i + 1)(j + 1)}{i + j + 1}\, x^i y^j.  (40)
The ( 2 , 3 ) -order truncated Appell series for f 2 ( x , y ) is:
F_{1\,2,3}^{f_2}(1, 2, 2; 2; x, y) = 1 + x + x^2 + y + \frac{4}{3}xy + \frac{3}{2}x^2 y + y^2 + \frac{3}{2}x y^2 + \frac{9}{5}x^2 y^2 + y^3 + \frac{8}{5}x y^3 + 2 x^2 y^3.  (41)
Considering the errors, we let:
e_{2,3}^{f_k} = \left| f_k(x, y) - R_{2,3}^{f_k}(x, y) \right|
and:
E_{2,3}^{f_k} = \left| f_k(x, y) - F_{1\,2,3}^{f_k}(a, b, c; d; x, y) \right|
for k = 1 , 2 .
Table 3 lists various values of (x, y), together with the values of the bivariate function f_1(x, y), the truncated Thiele–Newton's blending expansion R_{2,3}^{f_1}(x, y), and the truncated Appell series F_{1\,2,3}^{f_1}(1, 1, 1; 2; x, y). For comparison purposes, the errors e_{2,3}^{f_1} and E_{2,3}^{f_1} are also given. It can be seen from Table 3 that the error e_{2,3}^{f_1} of the truncated Thiele–Newton's blending expansion R_{2,3}^{f_1}(x, y) is smaller than the error E_{2,3}^{f_1} of the truncated Appell series F_{1\,2,3}^{f_1}(1, 1, 1; 2; x, y). Similarly, Table 4 displays the numerical results for the bivariate function f_2(x, y) defined by Equation (30). These results illustrate that the approximation by the truncated Thiele–Newton's blending expansion is clearly superior in the two test examples.
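The first row of Table 3 can be reproduced in a few lines. The Python sketch below (our own function names) evaluates f_1 from Equation (29), the truncated blending expansion R_{2,3}^{f_1} from Equation (32), and the truncated Appell series from Equation (37) at (x, y) = (0.6, 0.5); the error of the blending expansion comes out around 5.6 × 10^-2, versus roughly 2.2 × 10^-1 for the truncated series.

```python
import math

def f1(x, y):
    # Equation (29): f1(x, y) = [ln(1 - x) - ln(1 - y)] / (y - x)
    return (math.log(1 - x) - math.log(1 - y)) / (y - x)

def R23_f1(x, y):
    # truncated Thiele-Newton blending expansion, Equation (32)
    d0 = 1 + y / 2 + y**2 / 3 + y**3 / 4
    d1 = 2 - 4 * y / 3 - y**2 / 9 - 8 * y**3 / 135
    d2 = -(3 / 4 + 7 * y / 16 + 293 * y**2 / 960 + 299 * y**3 / 1280)
    return d0 + x / (d1 + x / d2)

def F1_23_f1(x, y):
    # (2,3)-order truncated Appell series, Equation (37)
    return sum(x**i * y**j / (i + j + 1) for i in range(3) for j in range(4))

x, y = 0.6, 0.5
print(f1(x, y), R23_f1(x, y), F1_23_f1(x, y))
```

Both approximants use the same number of coefficients; the rational structure of the continued fraction is what buys the extra accuracy.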

5. Conclusions

As Section 3 shows, we generalized Thiele's expansion of a univariate function to the bivariate case and thus obtained a rational approximation method, namely Thiele–Newton's blending expansion of a bivariate function. Furthermore, we proposed the Viscovatov-like algorithm, which calculates the coefficients of Thiele–Newton's expansion, and proved its correctness. Finally, an application of the Viscovatov-like algorithm was given. Numerical experiments and comparisons, presented in Table 3 and Table 4, show that Thiele–Newton's blending expansion provides a much better approximation than the polynomial expansion. A next step in this research is to consider the vector-valued case by a similar technique.

Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

Funding

This research was funded by the Project of Leading Talent Introduction and Cultivation in Colleges and Universities of the Education Department of Anhui Province (Grant No. gxfxZD2016270), the Natural Science Key Foundation of the Education Department of Anhui Province (Grant No. KJ2013A183), the Incubation Project of the National Scientific Research Foundation of Bengbu University (Grant No. 2018GJPY04), and the Project of Quality Curriculums of the Education Department of Anhui Province (Grant Nos. 2016gxx087, 2018mooc517).

Acknowledgments

The authors are thankful to the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheney, E.W. Introduction to Approximation Theory; McGraw-Hill: New York, NY, USA, 1966. [Google Scholar]
  2. Hildebrand, F.B. Introduction to Numerical Analysis, 2nd ed.; McGraw-Hill: New York, NY, USA, 1974. [Google Scholar]
  3. Davis, P.J. Interpolation and Approximation; Dover: New York, NY, USA, 1975. [Google Scholar]
  4. Alfio, Q.; Riccardo, S.; Fausto, S. Numerical Mathematics; Springer: New York, NY, USA, 2000. [Google Scholar]
  5. Gautschi, W. Numerical Analysis, 2nd ed.; Birkhäuser: Boston, MA, USA, 2011. [Google Scholar]
  6. Burden, A.M.; Faires, J.D.; Burden, R.L. Numerical Analysis, 10th ed.; Cengage Learning: Boston, MA, USA, 2014. [Google Scholar]
  7. Baker, G.A. Essentials of Padé Approximants; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  8. Baker, G.A.; Graves-Morris, P.R. Padé Approximants, 2nd ed.; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  9. Kuchminskaya, K. On approximation of functions by continued and branched continued fractions. Mat. Met. Fiz. Meh. Polya 1980, 12, 3–10. [Google Scholar]
  10. Skorobogatko, V. Branched Continued Fractions and Their Applications; Nauka: Moscow, Russia, 1983. [Google Scholar]
  11. Siemaszko, W. Thiele-type branched continued fractions for two-variable functions. J. Comput. Appl. Math. 1983, 9, 137–153. [Google Scholar] [CrossRef]
  12. Viscovatov, B. De la méthode générale pour reduire toutes sortes de quantités en fraction continues. Mém. Acad. Impériale Sci. St.-Petersbg. 1805, 1, 226–247. [Google Scholar]
  13. Graves-Morris, P.R. Symmetrical formulas for rational interpolants. J. Comput. Appl. Math. 1984, 10, 107–111. [Google Scholar] [CrossRef] [Green Version]
  14. Cuyt, A.; Verdonk, B. Multivariate rational interpolants. Computing 1985, 34, 141–161. [Google Scholar] [CrossRef]
  15. Cuyt, A.; Verdonk, B. Multivariate reciprocal differences for branched Thiele continued fraction expansions. J. Comput. Appl. Math. 1988, 21, 145–160. [Google Scholar] [CrossRef]
  16. Cuyt, A.; Verdonk, B. A review of branched continued fraction theory for the construction of multivariate rational approximants. Appl. Numer. Math. 1988, 4, 263–271. [Google Scholar] [CrossRef]
  17. Cuyt, A.; Driver, K.; Tan, J.Q.; Verdonk, B. Exploring multivariate Padé approximants for multiple hypergeometric series. Adv. Comput. Math. 1999, 10, 29–49. [Google Scholar] [CrossRef]
  18. Möller, H.M. Multivariate rational interpolation: Reconstruction of rational functions. Int. Ser. Numer. Math. 1989, 90, 249–256. [Google Scholar]
  19. Zhu, G.Q.; Tan, J.Q. The duality of vector valued rational interpolants over rectangular grids. Chin. J. Num. Math. Appl. 1995, 17, 75–84. [Google Scholar]
  20. Gu, C.Q. Bivariate Thiele-type matrix valued rational interpolants. J. Comput. Appl. Math. 1997, 80, 71–82. [Google Scholar]
  21. Gu, C.Q.; Zhu, G.Q. Bivariate Lagrange-type vector valued rational interpolants. J. Comput. Math. 2002, 2, 207–216. [Google Scholar]
  22. Tan, J.Q.; Zhu, G.Q. Bivariate vector valued rational interpolants by branched continued fractions. Numer. Math. A J. Chin. Univ. 1995, 4, 37–43. [Google Scholar]
  23. Tan, J.Q. Bivariate blending rational interpolants. Approx. Theory Appl. 1999, 15, 74–83. [Google Scholar]
  24. Tan, J.Q. Bivariate rational interpolants with rectangle-hole structure. J. Comput. Math. 1999, 17, 1–14. [Google Scholar]
  25. Tan, J.Q.; Fang, Y. Newton–Thiele’s rational interpolants. Numer. Algorithms 2000, 24, 141–157. [Google Scholar] [CrossRef]
  26. Tan, J.Q. The limiting case of Thiele’s interpolating continued fraction expansion. J. Comput. Math. 2001, 19, 433–444. [Google Scholar]
  27. Tan, J.Q.; Jiang, P. A Neville-like method via continued fractions. J. Comput. Appl. Math. 2004, 163, 219–232. [Google Scholar] [CrossRef] [Green Version]
  28. Tan, J.Q. The Theory of Continued Fractions and Their Applications; Science Press: Beijing, China, 2007. [Google Scholar]
  29. Hu, M.; Tan, J.Q.; Xue, F. A new approach to the image resizing using interpolating rational-linear splines by continued fractions. J. Inf. Comput. Sci. 2005, 2, 681–685. [Google Scholar]
  30. Hu, M.; Tan, J.Q. Adaptive osculatory rational interpolation for image processing. J. Comput. Appl. Math. 2006, 195, 46–53. [Google Scholar] [CrossRef] [Green Version]
  31. Li, S.F.; Song, L.T.; Xie, J.; Dong, Y. Image inpainting based on bivariate interpolation by continued fractions. In Proceedings of the IEEE International Conference on Computer Science and Automation Engineering, Zhangjiajie, China, 25–27 May 2012; Volume 2, pp. 756–759. [Google Scholar]
  32. Li, S.F.; Dong, Y. k-Hypergeometric series solutions to one type of non-homogeneous k-hypergeometric equations. Symmetry 2019, 11, 262. [Google Scholar] [CrossRef]
Table 1. The coefficients a_{i,j}^{f_1} of Thiele–Newton’s expansion of f_1(x, y) given by Equation (29).

| a_{i,j}^{f_1} | j = 0 | j = 1   | j = 2   | j = 3       | j = 4         |
|---------------|-------|---------|---------|-------------|---------------|
| i = 0         | 1     | 1/2     | 1/3     | 1/4         | 1/5           |
| i = 1         | 2     | 4/3     | 1/9     | 8/135       | 31/810        |
| i = 2         | 3/4   | 7/16    | 293/960 | 299/1280    | 33869/179200  |
| i = 3         | 16    | 88/15   | 191/225 | 10264/23625 | 194491/708750 |
Table 2. The coefficients a_{i,j}^{f_2} of Thiele–Newton’s expansion of f_2(x, y) given by Equation (30).

| a_{i,j}^{f_2} | j = 0 | j = 1 | j = 2   | j = 3   | j = 4      |
|---------------|-------|-------|---------|---------|------------|
| i = 0         | 1     | 1     | 1       | 1       | 1          |
| i = 1         | 1     | 4/3   | 5/18    | 4/135   | 17/1620    |
| i = 2         | 1     | 7/6   | 221/180 | 151/120 | 10721/8400 |
Table 3. Comparison of the numerical results by using R_{2,3}^{f_1}(x, y) and F1_{2,3}^{f_1}(1, 1, 1; 2; x, y).

| (x, y)       | f_1(x, y)      | R_{2,3}^{f_1}(x, y) | e_{2,3}^{f_1}      | F1_{2,3}^{f_1}(1, 1, 1; 2; x, y) | E_{2,3}^{f_1}      |
|--------------|----------------|---------------------|--------------------|----------------------------------|--------------------|
| (0.6, 0.5)   | 2.231435513142 | 2.175811138576      | 5.56244 × 10^{-2}  | 2.007583333333                   | 2.23852 × 10^{-1}  |
| (0.5, 0.4)   | 1.823215567940 | 1.801574172062      | 2.16414 × 10^{-2}  | 1.731400000000                   | 9.18156 × 10^{-2}  |
| (0.4, 0.3)   | 1.541506798273 | 1.534197264544      | 7.30953 × 10^{-3}  | 1.506843333333                   | 3.46635 × 10^{-2}  |
| (0.3, 0.2)   | 1.335313926245 | 1.333336425463      | 1.97750 × 10^{-3}  | 1.324153333333                   | 1.11606 × 10^{-2}  |
| (0.2, 0.1)   | 1.177830356564 | 1.177455592535      | 3.74764 × 10^{-4}  | 1.175210000000                   | 2.62036 × 10^{-3}  |
| (0.09, 0.1)  | 1.104983618659 | 1.104936257854      | 4.73608 × 10^{-5}  | 1.104746383333                   | 2.37235 × 10^{-4}  |
| (0.08, 0.09) | 1.092907053219 | 1.092875387558      | 3.16657 × 10^{-5}  | 1.092744392933                   | 1.62660 × 10^{-4}  |
| (0.07, 0.08) | 1.081091610422 | 1.081071421327      | 2.01891 × 10^{-5}  | 1.080985191467                   | 1.06419 × 10^{-4}  |
| (0.05, 0.06) | 1.058210933054 | 1.058204252599      | 6.68046 × 10^{-6}  | 1.058173883333                   | 3.70497 × 10^{-5}  |
| (0.06, 0.05) | 1.058210933054 | 1.058202709844      | 8.22321 × 10^{-6}  | 1.058150458333                   | 6.04747 × 10^{-5}  |
| (0.04, 0.05) | 1.047129986730 | 1.047126709307      | 3.27742 × 10^{-6}  | 1.047111416666                   | 1.85701 × 10^{-5}  |
| (0.05, 0.04) | 1.047129986730 | 1.047125552862      | 4.43387 × 10^{-6}  | 1.047095800000                   | 3.41867 × 10^{-5}  |
| (0.03, 0.02) | 1.025650016719 | 1.025649181797      | 8.34922 × 10^{-7}  | 1.025642954533                   | 7.06219 × 10^{-6}  |
| (0.02, 0.03) | 1.025650016719 | 1.025649615899      | 4.00820 × 10^{-7}  | 1.025647765133                   | 2.25159 × 10^{-6}  |
| (0.02, 0.01) | 1.015237146402 | 1.015236912398      | 2.34003 × 10^{-7}  | 1.015235095400                   | 2.05100 × 10^{-6}  |
| (0.01, 0.02) | 1.015237146402 | 1.015237085235      | 6.11671 × 10^{-8}  | 1.015236857467                   | 2.88935 × 10^{-7}  |
Table 4. Comparison of the numerical results by using R_{2,3}^{f_2}(x, y) and F1_{2,3}^{f_2}(1, 2, 2; 2; x, y).

| (x, y)       | f_2(x, y)      | R_{2,3}^{f_2}(x, y) | e_{2,3}^{f_2}       | F1_{2,3}^{f_2}(1, 2, 2; 2; x, y) | E_{2,3}^{f_2}      |
|--------------|----------------|---------------------|---------------------|----------------------------------|--------------------|
| (0.4, 0.3)   | 2.527646365268 | 2.541958340395      | −1.43120 × 10^{-2}  | 2.314840000000                   | 2.12806 × 10^{-1}  |
| (0.3, 0.2)   | 1.833375742200 | 1.834880020667      | −1.50428 × 10^{-3}  | 1.774760000000                   | 5.86157 × 10^{-2}  |
| (0.2, 0.1)   | 1.399789684856 | 1.399902542529      | −1.12858 × 10^{-4}  | 1.387786666667                   | 1.20030 × 10^{-2}  |
| (0.09, 0.1)  | 1.225048763570 | 1.225046790051      | 1.97352 × 10^{-6}   | 1.223971000000                   | 1.07776 × 10^{-3}  |
| (0.08, 0.09) | 1.197590738753 | 1.197589594868      | 1.14389 × 10^{-6}   | 1.196860955200                   | 7.29784 × 10^{-4}  |
| (0.07, 0.08) | 1.171129067101 | 1.171128457172      | 6.09930 × 10^{-7}   | 1.170657476267                   | 4.71591 × 10^{-4}  |
| (0.06, 0.07) | 1.145615802753 | 1.145615507616      | 2.95138 × 10^{-7}   | 1.145329149600                   | 2.86653 × 10^{-4}  |
| (0.05, 0.06) | 1.121005830888 | 1.121005702793      | 1.28095 × 10^{-7}   | 1.120845560000                   | 1.60271 × 10^{-4}  |
| (0.06, 0.05) | 1.121005830888 | 1.121006469601      | −6.38713 × 10^{-7}  | 1.120749100000                   | 2.56731 × 10^{-4}  |
| (0.04, 0.05) | 1.097256671169 | 1.097256621117      | 5.00525 × 10^{-8}   | 1.097177266667                   | 7.94045 × 10^{-5}  |
| (0.05, 0.04) | 1.097256671169 | 1.097256985627      | −3.14458 × 10^{-7}  | 1.097113306667                   | 1.43365 × 10^{-4}  |
| (0.03, 0.02) | 1.052182967898 | 1.052183005275      | −3.73770 × 10^{-8}  | 1.052154046400                   | 2.89215 × 10^{-5}  |
| (0.02, 0.03) | 1.052182967898 | 1.052182961331      | 6.56692 × 10^{-9}   | 1.052173533600                   | 9.43430 × 10^{-6}  |
| (0.02, 0.01) | 1.030785077555 | 1.030785083180      | −5.62544 × 10^{-9}  | 1.030776771467                   | 8.30609 × 10^{-6}  |
| (0.01, 0.02) | 1.030785077555 | 1.030785075750      | 1.80440 × 10^{-9}   | 1.030783868267                   | 1.20929 × 10^{-6}  |
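The error columns above are simply the differences between the tabulated function values and the two approximants. As a sanity check, the sketch below recomputes e and E for the (x, y) = (0.6, 0.5) row of Table 3 from the listed values alone (the variable names are illustrative; the functions themselves are defined by Equations (29) and (30) in the paper).

```python
# Recompute the error columns of Table 3 from the tabulated values:
# e = f1 - R_{2,3} and E = f1 - F1_{2,3}.
# The three constants below are copied from the (0.6, 0.5) row of Table 3.
f1_val = 2.231435513142    # f_1(0.6, 0.5)
r23_val = 2.175811138576   # R_{2,3}^{f_1}(0.6, 0.5)
f123_val = 2.007583333333  # F1_{2,3}^{f_1}(1, 1, 1; 2; 0.6, 0.5)

e = f1_val - r23_val       # error of the Thiele-Newton approximant
E = f1_val - f123_val      # error of the truncated hypergeometric series

print(f"e = {e:.5e}")      # matches the tabulated 5.56244 x 10^-2
print(f"E = {E:.5e}")      # matches the tabulated 2.23852 x 10^-1
```

The same subtraction reproduces every row of Tables 3 and 4, including the signs of e in Table 4 where R_{2,3}^{f_2} overshoots f_2.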

Share and Cite

MDPI and ACS Style

Li, S.; Dong, Y. Viscovatov-Like Algorithm of Thiele–Newton’s Blending Expansion for a Bivariate Function. Mathematics 2019, 7, 696. https://doi.org/10.3390/math7080696
