Rational Interpolation: Jacobi’s Approach Reminiscence

We treat the interpolation problem {f(x_j) = y_j}_{j=1}^N for polynomial and rational functions. Developing the approach originated by C. Jacobi, we represent the interpolants by virtue of the Hankel polynomials generated by sequences of special symmetric functions of the data set like {∑_{j=1}^N x_j^k y_j / W′(x_j)}_{k∈N} and {∑_{j=1}^N x_j^k / (y_j W′(x_j))}_{k∈N}; here, W(x) = ∏_{j=1}^N (x − x_j). We also review the results by Jacobi, Joachimsthal, Kronecker and Frobenius on the recursive procedure for computation of the sequence of Hankel polynomials. The problem of evaluation of the resultant of polynomials p(x) and q(x) given a set of values {p(x_j)/q(x_j)}_{j=1}^N is also tackled within the framework of this approach. An effective procedure is suggested for recomputation of rational interpolants in case of extension of the data set by an extra point.


Introduction
Given the data set for the variables x and y

x | x_1  x_2  . . .  x_N
y | y_1  y_2  . . .  y_N          (1)

with distinct nodes {x_1, . . . , x_N} ⊂ R, we treat the interpolation problem in the sense of finding a function f(x) such that f(x_j) = y_j ∈ R for j ∈ {1, . . . , N}. Systematic exploration of the problem began in the 17th century; for a historical overview and a review of numerous applications of the relevant theory, we refer to [1] and the references cited therein. Depending on the concrete objectives, the solution to the problem can be sought in various classes of interpolants: algebraic polynomials, trigonometric polynomials, sums of exponentials, rational functions, etc. The two interpolation problems dealt with in the present paper are the polynomial and the rational ones.
In comparison with the polynomial interpolation problem, the rational interpolation one emerged a century and a half later, with the first explicit formula due to Cauchy [2]. Its further development was made by Kronecker [3] and Netto [4] (we briefly discuss it in Section 5). Interest in the problem revived in the second half of the 20th century in connection with its applications in Approximation Theory, Control Theory (recovering the transfer function from frequency responses) and Error Correcting Codes; in the latter case, the problem is treated over finite fields. We refer to [5][6][7][8] for recent developments and further references on the rational interpolation problem, as well as on its generalization, known as the rational Hermite (or osculatory rational) interpolation problem, where the values of some derivatives of f(x) are assumed to be specified at some nodes.
The present paper is devoted to an approach to the rational interpolation problem originated in 1846 by Carl Jacobi [9]. Within the framework of this approach, the numerator and the denominator of the rational interpolant f(x) = p(x)/q(x), deg p(x) + deg q(x) = N − 1, can be expressed as Hankel polynomials of appropriate orders generated by sequences {c_0, c_1, . . . , c_{2k−1}} ⊂ C (called their generators) chosen in the form of suitable symmetric functions of the data set (1), namely (3); here, W(x) = ∏_{j=1}^N (x − x_j). Jacobi's approach has not received much attention or further development in modern computational mathematics; perhaps the only exception is the paper [10]. This disregard is probably caused by an evident computational bottleneck, namely the computation of a parameter-dependent determinant; in case of its large order, this is not a trivial task.
Fortunately, a specific structure of the involved determinants helps a lot: there exists a recursive procedure for the Hankel polynomial computation. It is based on the identity (4) linking the Hankel polynomials of three consecutive orders; here, A_k and B_k are constants that can be expressed via the coefficients of H_{k−2}(x) and H_{k−1}(x). Formula (4) looks similar to the recurrence formula connecting orthogonal polynomials. This is indeed the case for a positive definite Hankel matrix: under this condition, the Hankel polynomials H_1(x), H_2(x), . . . are orthogonal with respect to the inner product introduced via this matrix taken as the Gram matrix. In [11], this idea was extended to Hankel matrices with non-zero leading principal minors. Application of the recursive procedure for computation of sequences of Hankel polynomials to the rational interpolation problem provides one with an opportunity to efficiently compute not only a single interpolant, but the whole family of fractions with all possible combinations of degrees of the numerator and the denominator satisfying the restriction deg p(x) + deg q(x) = N − 1.
One more issue of this scheme should be discussed, namely the evaluation of the constants in (4). For the classical orthogonal polynomials, or for their counterparts from [11], this is done via inner product computations involving the polynomials H_{k−1}(x) and xH_{k−1}(x). However, there exists an alternative algorithm for the computation of the constants in (4). Surprisingly, this version of the identity (4) is also related to Jacobi, with the original idea outlined in his paper [12] of 1836, and the complete treatment later executed by his disciple Ferdinand Joachimsthal [13]. This approach is free of the orthogonality concept and is based on a purely determinantal formalism.
The present paper is organized as follows. We first discuss the properties of Hankel polynomials in Section 3, starting with the Jacobi-Joachimsthal result. The first two of the present authors unearthed this result and published it with the original Joachimsthal proof in the paper [14]. For the convenience of the reader of the present paper, we decided to include an English translation of the proof (the original Jacobi paper was written in Latin, Joachimsthal's in German, and the paper [14] is in Russian). Further investigation of the 19th-century heritage results in the discovery of a generalization of the Jacobi-Joachimsthal result (covering the cases of degeneracy of (4), for instance A_k = 0) in the works by Kronecker and Frobenius [15]; we include their results in Section 3. The necessity of a detailed presentation of this classical algebraic material (with the exception of Theorem 8, which is due to the present authors) is justified by its having been completely forgotten over the following two centuries: indeed, we failed to find any references to it either in the monographs [16][17][18] or even in research papers. This state of affairs might seem unfair, but not an extraordinary one, if not for one circumstance. As a matter of fact, the referred results should be treated as the gist of the algorithm nowadays known as the Berlekamp-Massey algorithm [19,20]; it was suggested for the decoding procedure in Bose-Chaudhuri-Hocquenghem (BCH) codes and for finding the minimal polynomial of a linear recurrent sequence.
The deployment of this Hankel polynomial formalism to the main problem of the paper starts in Section 4 with the treatment of polynomial interpolation. We discuss here some properties of sums like (3), i.e., symmetric functions of the data set. The rational interpolant construction is dealt with in Section 5, where the efficiency of the two approaches (i.e., the Jacobi-Joachimsthal and the orthogonality-based one) for computation of the Hankel polynomials via (4) is discussed.
The results of Sections 6 and 7 should be regarded as completely original. In Section 6, we deal with a question related to the uniqueness problem for rational interpolation. Given the data set (1) generated by some rational fraction p(x)/q(x) with deg p(x) + deg q(x) = N − 1, is it possible to establish whether the fraction is irreducible (or, equivalently, whether the polynomials p(x) and q(x) possess no common zero) avoiding, as an intermediate step, their explicit representation? The question is equivalent to the possibility of expressing the resultant of the polynomials p(x) and q(x) in terms of the entries of the data set (1). We prove that the resultant can be expressed in the form of an appropriate Hankel determinant generated by either of the sequences (3).
In Section 7, we present an effective procedure for the recomputation of Hankel polynomials when the data set (1) is complemented by an additional point. It turns out that, generically, the kth order Hankel polynomial composed for the extended data set can be expressed as a linear combination of two of its counterparts (of orders k and k − 1) constructed for the initial set (1).
The relationship of Jacobi's approach to the rational interpolation problem with the one based on the barycentric formula [6,21] is discussed in Section 8.
In Section 9, we briefly outline an application of the Hankel polynomial formalism to problems of Coding Theory. We address here the Berlekamp-Welch algorithm [22] for error detection and correction in a data sequence.

Algebraic Preliminaries
For the convenience of further references, we recall here some auxiliary results from matrix theory and the theory of polynomials.
Theorem 1 (Kronecker [23,24]). If in a given matrix a certain rth order minor is not zero, and all the (r + 1)th order minors containing that rth order minor are zero, then all the (r + 1)th order minors are zero, and, therefore, the rank of the matrix equals r.
Theorem 2 (Sylvester [16]). For a square matrix A of order n, denote by A(i_1, i_2, . . . , i_k; j_1, j_2, . . . , j_k) its minor standing in the rows i_1, i_2, . . . , i_k and in the columns j_1, j_2, . . . , j_k. The following Sylvester's identity is valid: We next recall the definition of the resultant of polynomials [8,23,25]. For the polynomials p(x) = p_0 x^n + p_1 x^{n−1} + · · · + p_n and q(x) = q_0 x^m + q_1 x^{m−1} + · · · + q_m with p_0 ≠ 0, q_0 ≠ 0, n ≥ 1, m ≥ 1, their resultant is formally defined as R(p(x), q(x)) = p_0^m ∏_{i=1}^n q(λ_i), where λ_1, . . . , λ_n denote the zeros of p(x) (counted in accordance with their multiplicities). Equivalently, the resultant can be defined as R(p(x), q(x)) = (−1)^{nm} q_0^n ∏_{i=1}^m p(µ_i), where µ_1, . . . , µ_m denote the zeros of q(x) (counted in accordance with their multiplicities).
As for the constructive methods of computing the resultant as a polynomial function of the coefficients of p(x) and q(x), this can be done with the aid of several determinantal representations (like Sylvester's, Bézout's or Kronecker's).

Example 1.
For the polynomials p(x) = p_0 x^3 + p_1 x^2 + p_2 x + p_3 and q(x) = q_0 x^5 + · · · + q_5 with p_0 ≠ 0, q_0 ≠ 0, their resultant in Sylvester's form is the (3 + 5)th order determinant: Entries not indicated in the determinant are set to zero. An important particular case, related to the resultant of a polynomial and its derivative, gives rise to a special notion: the corresponding expression is known as the discriminant of the polynomial p(x). It is a polynomial function of the coefficients p_0, . . . , p_n.
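As a cross-check of the two definitions just recalled, the Sylvester determinant and the product formula can be compared numerically; the following sketch (not part of the paper; the sample polynomials are our own, chosen with deg p = 3 and deg q = 5 as in the example above) computes the resultant both ways.

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of p (degree n) and q (degree m), coefficients in
    descending order; the matrix has order n + m."""
    n, m = len(p) - 1, len(q) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):                     # m rows of shifted p-coefficients
        S[i, i:i + n + 1] = p
    for i in range(n):                     # n rows of shifted q-coefficients
        S[m + i, i:i + m + 1] = q
    return S

p = [1.0, -2.0, -1.0, 2.0]                 # x^3 - 2x^2 - x + 2 = (x-2)(x-1)(x+1)
q = [1.0, 0.0, 0.0, 0.0, 0.0, 2.0]         # x^5 + 2
r_sylvester = np.linalg.det(sylvester(p, q))
# product definition: R(p, q) = p_0^m * prod_i q(lambda_i) over the zeros of p
r_product = 1.0 ** 5 * float(np.real(np.prod([np.polyval(q, lam)
                                              for lam in np.roots(p)])))
```

Both computations return 102 for this pair; a common zero of p and q would make both vanish.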

Hankel Determinants and Polynomials
For a (finite or infinite) sequence of complex numbers c_0, c_1, c_2, . . . , a square matrix in which each ascending skew-diagonal from left to right is constant is called a Hankel matrix of order k generated by the sequence (9); its determinant will be denoted by H_k({c}), or simply H_k if this does not cause misunderstanding.
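A minimal computational sketch (ours, with an arbitrary sample sequence) generating the Hankel matrices and their determinants; the chosen sequence also exhibits a vanishing H_3, the kind of degeneracy treated by the Kronecker and Frobenius results recalled later in this section.

```python
import numpy as np

def hankel_matrix(c, k):
    """Order-k Hankel matrix generated by c: entry (i, j) equals c[i + j]."""
    return np.array([[c[i + j] for j in range(k)] for i in range(k)], dtype=float)

c = [2.0, 3.0, 5.0, 8.0, 13.0]                   # sample generating sequence
H1 = np.linalg.det(hankel_matrix(c, 1))          # = 2
H2 = np.linalg.det(hankel_matrix(c, 2))          # 2*5 - 3*3 = 1
H3 = np.linalg.det(hankel_matrix(c, 3))          # vanishes for this sequence
```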
The following result is a particular case of Sylvester's identity (5).

Theorem 5.
Hankel determinants of three consecutive orders are linked by the equality (sometimes also referred to as the Jacobi identity [17]): If we replace the last row of the Hankel matrix of order k + 1 with the row of powers of x, then the corresponding determinant (12), denoted H_k(x; {c}) or simply H_k(x), is called the kth Hankel polynomial [17] generated by the sequence (9). Expansion of (12) along its last row yields its coefficients as signed minors of the matrix; thus, deg H_k(x) = k iff H_k ≠ 0. We will also utilize an alternative representation of the Hankel polynomial in the form of a Hankel determinant resulting from a linear transformation of the columns of the determinant (12): It turns out that there exists an explicit linear relation between any three consecutive Hankel polynomials generated by an arbitrary sequence (9). Theorem 6 (Jacobi, Joachimsthal, refs. [12,13]). Any three consecutive Hankel polynomials are linked by the identity (15), which hereinafter will be referred to as the JJ-identity.
The proof below is a slightly modernized version of the original proof from [13].

Lemma 1.
Let the Hankel polynomial H_k(x) be generated by the sequence (16) of power sums of arbitrary distinct numbers λ_1, . . . , λ_m with m > k; hence, c_0 = m. Then, the following equalities are valid: Proof. Using the linearity of the determinant with respect to its rows, convert this linear combination of determinants into a single one: If j < k, then the last determinant possesses two identical rows; therefore, in this case, it is just zero. For j = k, the obtained determinant coincides with H_{k+1}.
Proof of Theorem 6. First consider the case where the generating sequence for the polynomials H k−2 (x), H k−1 (x), H k (x) is given by (16).
Here, the undetermined coefficients of the quotient Q(x) can be found from those of H_k(x) and H_{k−1}(x) by equating the coefficients of the corresponding powers of x on both sides of (18): or, equivalently, To find the coefficients of the remainder in (18), substitute x = λ_j into (18): Summation of these equalities yields Due to (17), one gets Next, multiply every equality (22) by the corresponding power of λ_j and sum the obtained equalities over j. For the powers 1, . . . , k − 3, the resulting equalities look similar to the previously deduced one: Multiplication of the equalities (22) by λ_j^{k−2} yields something different: Unifying the obtained relations for R_0, . . . , R_{k−2} with (21), one gets the linear system: Consider it as a system of homogeneous equations with respect to the variables R_0, R_1, . . . , R_{k−2}, 1. Since it possesses a nontrivial solution, its determinant necessarily vanishes: Expansion of the determinant along its last column yields: Together with the already obtained expression (19) for Q_1, this confirms the validity of (15) for the particular case of the generating sequence given by (16).
Consider now the case of an arbitrary generating sequence (9). For any given sequence of complex numbers c_1, . . . , c_{2k−1}, it is possible to find complex numbers λ_1, . . . , λ_m with m > 2k − 1 such that the equalities (16) become consistent. These numbers can be chosen to be the zeros of a polynomial of degree m whose first 2k − 1 Newton sums [25] coincide with {c_j}_{j=1}^{2k−1}. To complete the proof of (15), one should fill one gap in the arguments of the previous paragraph. Whereas the numbers c_1, . . . , c_{2k−1} can be chosen arbitrarily, the number c_0 takes a positive integer value, namely m. Thus, the validity of (15) is proved only for positive integer c_0. However, this equality is an algebraic one in c_0; being valid for an infinite set of integers, it must be valid for any c_0 ∈ C.
Joachimsthal did not provide a proof of the result for the case H_{k−1} = 0. Theorem 4 handles this case: the left-hand side of (15) and H_{k−1} are polynomials in c_0, . . . , c_{2k−1}.
The relationship (15) gives rise to a more symmetric form in which the JJ-identity can be rewritten. The next result will be used in one of the further proofs.

Corollary 2.
For the coefficient h_{k−1,2}, the following representation is valid: Proof. Take the coefficient of x^{k−2} in the left-hand side of the JJ-identity: Under the assumption that H_{k−1} ≠ 0 and H_k ≠ 0, it follows that Apply Sylvester's identity (5) to both fractions in the right-hand side: for the first one, we take it in the version (11), while for the second one we apply it to the matrix whose determinant equals H_k: To cover the case where H_{k−1} = 0 or H_k = 0, one should apply the result of Theorem 4.
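The bordered-determinant definition of the Hankel polynomial recalled above lends itself to direct computation: in the sketch below (ours, with a sample generating sequence; we assume the ascending ordering 1, x, . . . , x^k for the row of powers), the coefficient of x^j is obtained as a cofactor of the last row, and the leading coefficient reproduces the Hankel determinant H_k, in accordance with the criterion deg H_k(x) = k iff H_k ≠ 0.

```python
import numpy as np

def hankel_poly(c, k):
    """Ascending coefficients [h_0, ..., h_k] of the Hankel polynomial H_k(x):
    the determinant of the (k+1)x(k+1) Hankel matrix generated by c whose last
    row is replaced by 1, x, ..., x^k; expansion along that row makes the
    coefficient of x^j the (k, j) cofactor of the matrix."""
    top = np.array([[c[i + j] for j in range(k + 1)] for i in range(k)],
                   dtype=float)
    return np.array([(-1) ** (k + j) * np.linalg.det(np.delete(top, j, axis=1))
                     for j in range(k + 1)])

c = [2.0, 3.0, 5.0, 8.0, 13.0]
H2x = hankel_poly(c, 2)          # x^2 - x - 1 for this sample sequence
lead = H2x[-1]                   # should equal the order-2 Hankel determinant
H2det = np.linalg.det(np.array([[2.0, 3.0], [3.0, 5.0]]))
```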
The JJ-identity permits one to organize a recursive procedure for the computation of Hankel polynomials. Indeed, assume that the expressions for H_{k−2}(x) and H_{k−1}(x) have already been computed and that H_{k−1} ≠ 0. Then in (15) all the constants are also evaluated, except for H_k and h_{k1}, for which one has only their determinantal representations: These determinants differ from the transposed determinantal representation of H_{k−1}(x) only in their last columns. Expansions along the entries of these columns have the same values of the corresponding cofactors, and, therefore, the following formulas: allow one to evaluate h_{k0} and h_{k1} via the already computed coefficients of H_{k−1}(x). However, the just-outlined algorithm for the recursive computation of H_k(x) fails in the case where H_{k−1} = 0. If, in addition, h_{k−1,1} = 0, then by Theorem 1 all the (k − 1)th order minors of the corresponding matrix vanish; therefore, all the coefficients of the polynomial H_{k−1}(x) are also zero, since they are such minors.
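The recursive scheme just described rests on the fact that three consecutive Hankel polynomials are linearly related with a degree-one multiplier. Without reproducing Formulas (27), this can be observed numerically: in the sketch below (ours), the constants of a relation H_3(x) = (A x + B) H_2(x) + C H_1(x) are recovered by a least-squares fit of the coefficients, and the fit turns out to be exact for a sample sequence with H_1 ≠ 0 and H_2 ≠ 0. This is only a consistency check, not the paper's algorithm.

```python
import numpy as np

def hankel_poly(c, k):
    """Ascending coefficients of H_k(x) (last row of powers 1, x, ..., x^k)."""
    top = np.array([[c[i + j] for j in range(k + 1)] for i in range(k)],
                   dtype=float)
    return np.array([(-1) ** (k + j) * np.linalg.det(np.delete(top, j, axis=1))
                     for j in range(k + 1)])

c = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]         # sample generators; H_1 = 3, H_2 = 11
H1x, H2x, H3x = (hankel_poly(c, k) for k in (1, 2, 3))

# Match coefficients of H3(x) = (A*x + B)*H2(x) + C*H1(x): 4 equations, 3 unknowns
M = np.column_stack([np.concatenate([[0.0], H2x]),     # coefficients of x*H2(x)
                     np.concatenate([H2x, [0.0]]),
                     np.concatenate([H1x, [0.0, 0.0]])])
(A, B, C), *_ = np.linalg.lstsq(M, H3x, rcond=None)
residual = np.linalg.norm(M @ np.array([A, B, C]) - H3x)
```

The fitted A necessarily equals the ratio of the leading coefficients, i.e., H_3/H_2 here.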
To establish the validity of (28), it is sufficient to prove that the remainder of the division of H_k(x) by H_{k−2}(x) is identically zero. This will be done later; for now, we intend to prove (29). Since H_{k−1} = 0, the identity (15) can be rewritten as If h_{k−1,1} ≠ 0, then the Sylvester identity (5) leads one to from which it follows that Consequently, the identity (31) leads one to which proves (29). We now intend to prove (28) and (30).
Here, the quotient Q(x) has its coefficients determined by the equalities: To find the coefficients of the remainder, use arguments similar to those in the proof of Theorem 6. First, consider the case where the generating sequence (9) is given by (16). Substitute x = λ_j into (32) and sum the obtained equalities. Due to (17), one gets Similar equalities result from multiplication of (35) by the powers λ_j^ℓ for ℓ ∈ {1, . . . , k − 5} and further summation over j: Multiplication of (35) by λ_j^{k−4} and summation yields and, since by assumption H_{k−1} = 0, the obtained equality looks similar to the previous ones. Multiplication of (35) by λ_j^{k−3} and summation leads to If h_{k−1,1} = 0, then the obtained linear system with respect to R_0, . . . , R_{k−3} possesses only the trivial solution (since the determinant of the system equals H_{k−2} ≠ 0). This proves (28). For the case h_{k−1,1} ≠ 0, we unify all the obtained relationships with (34) and compose the linear system with respect to R_0, . . . , R_{k−3}, 1. Since it is consistent, its determinant should vanish: Expansion of the determinant along its last column and the use of Formula (33) lead to which completes the proof of (30).

Remark 1.
Formulas of Theorem 7 allow one to organize the recursive computation of H_k(x) once the polynomials H_{k−2}(x) and H_{k−3}(x) have already been computed. The involved factors, such as h_{k−2,1}, h_{k−2,2}, h_{k−1,1} and h_{k1}, are either known as the coefficients of H_{k−2}(x) and H_{k−3}(x), or can be calculated by Formulas (27). The only exception is the value of h_{k2}. For its computation, we suggest the following original result.
Proof. It is based on the equality (25). Replacing there (k − 1) → k, we start with We represent the first determinant with the aid of Sylvester's identity (5): If H_{k−1} = h_{k−1,1} = 0, then the expansions of the remaining determinants along their last rows lead to the stated formula.
As for the extension of the results of the present section in the 19th century, we recall here one generalization due to Kronecker [3] and Frobenius [15]. It relates to the general degenerate case of the vanishing of several entries in the sequence of Hankel determinants H 1 , H 2 , . . . . The key point in their treatment is the number of consecutive zeros in this sequence.
If the coefficients of H_{k−1}(x) and H_k(x) are known, then the expressions for D_0, D_1, . . . , D_r and B_{0,r−1}, B_{r,0} can be computed via linear relationships similar to (27). As for the values B_{ij} with i + j ≥ r, i > 0, j > 0, we are certain that there exist formulas for their evaluation similar to (8), i.e., explicit representations as combinations of the coefficients of the polynomials H_{k−1}(x) and H_k(x). We failed to find them in the classical papers, and we have not yet succeeded in deducing them ourselves. Our confidence is based on the coincidence of the results of Theorems 7 and 9 with the algorithm known as the Berlekamp-Massey algorithm [19,20]. Massey suggested it for the problem of finding the minimal polynomial of a linear recurrent sequence. The relationship of this problem to that of finding some characteristics of an appropriate Hankel matrix is due to Kronecker and is partially discussed in [18]; in the papers [27], the backgrounds of this algorithm in terms of systems of linear equations with Hankel matrices have been discussed, with some of the deduced conclusions equivalent to those presented here in terms of Hankel polynomials. The paper [28] extends the orthogonality-oriented approach to the rational interpolation problem initiated in [11] to the case of the vanishing of several leading principal minors of the Hankel matrix. The result is very similar to Theorem 8 (though it is obtained by a different technique).
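For the reader's convenience, a minimal sketch of the Berlekamp-Massey iteration in its common textbook form (stated here over the reals rather than over a finite field, and independent of the Hankel-polynomial formulation discussed above); it recovers the shortest linear recurrence satisfied by a sequence.

```python
def berlekamp_massey(s, eps=1e-9):
    """Shortest linear recurrence for s: returns (c, L) with c = [1, c1, ..., cL]
    such that s[n] + c1*s[n-1] + ... + cL*s[n-L] = 0 for L <= n < len(s)."""
    c, b = [1.0], [1.0]      # current and previous connection polynomials
    L, m, bb = 0, 1, 1.0     # L = recurrence length, bb = last nonzero discrepancy
    for n in range(len(s)):
        d = s[n] + sum(c[i] * s[n - i] for i in range(1, L + 1))  # discrepancy
        if abs(d) < eps:
            m += 1
        else:
            t = c[:]
            c = c + [0.0] * max(0, len(b) + m - len(c))
            for i in range(len(b)):
                c[i + m] -= d / bb * b[i]
            if 2 * L <= n:                # a length increase is required
                L, b, bb, m = n + 1 - L, t, d, 1
            else:
                m += 1
    return c, L

fib = [1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0]
conn, L = berlekamp_massey(fib)           # Fibonacci: s[n] = s[n-1] + s[n-2]
```

For the Fibonacci sample, the algorithm returns L = 2 and the connection polynomial coefficients [1, −1, −1].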
As mentioned in the Introduction of the present paper, it appears that the results of Theorems 7 and 9 have been lost in the past.

Polynomial Interpolation via Symmetric Functions of the Data Set
The classical polynomial interpolation problem consists in finding the polynomial (40) satisfying the data set (1), i.e., the conditions (41). This problem always possesses a unique solution, which can be found by resolving the system of linear relationships (41) with the Vandermonde matrix or, in other terms, via the determinant evaluation (42). This computation is usually performed by virtue of some auxiliary constructions, like the representation of the interpolant in either the Newton or the Lagrange form. The latter stems from the expansion of the determinant from (42) along its last column, yielding the representation (43); here, the polynomial W(x) is defined by (44).

The following easily deduced equality will be used in some further proofs: It should be noted that the representation (43) does not immediately provide one with the canonical form of the interpolant, i.e., an explicit expression for its coefficients. In order to extract them from this formula, let us first prove a preliminary result.

Theorem 10. Let G(x) ≡ G_0 x^{N−1} + G_1 x^{N−2} + · · · + G_{N−1} be a polynomial of degree at most N − 1. Then the following equalities are valid:

Proof. Use the following Lagrange formula: From this, the expansion of the fraction G(x)/W(x) into a Laurent series in negative powers of x can be derived: On the other hand, multiplying the formal expansion by W(x) ≡ x^N + w_1 x^{N−1} + · · · + w_N, one obtains the following formulas for determining the coefficients c_0, c_1, . . . recursively:
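The equality just mentioned, in its classical Euler-Lagrange form ∑_{j=1}^N x_j^k / W′(x_j) = 0 for k ∈ {0, . . . , N − 2} and = 1 for k = N − 1, can be verified numerically; the sketch below uses sample nodes of our own choosing.

```python
import numpy as np

nodes = np.array([0.5, 1.0, 2.0, 4.0])       # N = 4 distinct sample nodes
N = len(nodes)
# W'(x_j) = product over the remaining nodes of (x_j - x_i)
Wp = np.array([np.prod([xj - xi for xi in nodes if xi != xj]) for xj in nodes])
power_sums = [float(np.sum(nodes**k / Wp)) for k in range(N)]
# power_sums[0..N-2] vanish; power_sums[N-1] equals 1
```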

Remark 2.
In 19th-century publications, the expression (50), treated as a symmetric function of x_1, . . . , x_N, was known as the aleph-function ((Germ.) Alephfunktion) and was denoted by ℵ_k(x_1, . . . , x_N) [29]. Despite its rational-looking definition, it turns out to be a complete homogeneous symmetric polynomial, i.e., it equals the sum of all possible monomials of degree k in the variables x_1, . . . , x_N. The expression τ_k defined by (51) can also be treated as a symmetric function of the pairs of variables (x_1, y_1), (x_2, y_2), . . . , (x_N, y_N), i.e., a function whose value is unchanged when any of the pairs are interchanged. Although the present authors suppose that the result of Theorem 11 is contained in some old textbook on Algebra, this source still remains hidden from them.
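The complete-homogeneous characterization of the aleph-function can be confirmed by brute force; in the sketch below (ours; the classical index shift ℵ_k = ∑_j x_j^{N−1+k}/W′(x_j) is assumed), the rational-looking sums are compared with the sums of all degree-k monomials in the nodes.

```python
import numpy as np
from itertools import combinations_with_replacement

nodes = [1.0, 2.0, 3.0]                      # sample nodes, N = 3
N = len(nodes)
Wp = [np.prod([xj - xi for xi in nodes if xi != xj]) for xj in nodes]

def aleph(k):
    """aleph_k = sum_j x_j^(N-1+k) / W'(x_j)."""
    return sum(x ** (N - 1 + k) / w for x, w in zip(nodes, Wp))

def complete_homogeneous(k):
    """Sum of all monomials of degree k in the nodes."""
    return sum(float(np.prod([nodes[i] for i in idx]))
               for idx in combinations_with_replacement(range(N), k))

values = [(aleph(k), complete_homogeneous(k)) for k in range(4)]
```

For the nodes 1, 2, 3, this reproduces, e.g., ℵ_1 = 6 and ℵ_2 = 25, in agreement with the remark.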
We now turn to an alternative representation for the interpolant.
Theorem 12. Assume that y_j ≠ 0 for j ∈ {1, . . . , N}. Calculate the sequence of values (52). The interpolation polynomial can then be represented in the Hankel polynomial form (53). Proof. Assume that the interpolation polynomial (40) exists. Rewrite the equalities (41) in the form (54). Multiply each of these equalities by the corresponding multiplier 1/W′(x_j) and sum the obtained results. Due to (48), one gets Similar equalities result from multiplication of (54) by x_j^k/W′(x_j) for k ∈ {1, . . . , N − 2}: A linear combination of (54) with the multipliers x_j^{N−1}/W′(x_j) yields something different: Unifying all the obtained equalities with (40), one obtains a system of linear equations with respect to p_{N−1}, p_{N−2}, . . . , p_0. If the interpolant exists, then it should satisfy the following identity: Expansion of the determinant along its last column gives provided that H_N({τ̂}) ≠ 0. To prove the latter, represent the determinant as the product: Therefore, the denominator in the right-hand side of (55) does not vanish. One can utilize the last expression for a direct deduction of the validity of Formula (53) as a solution to the polynomial interpolation problem. Indeed, let us substitute x = x_1 into the numerator of the fraction in the right-hand side of (53).
Consider an entry of the last determinant: it can be easily verified that Thus, introducing the sequence one gets The last determinant can be evaluated by Formula (56): Combining this equality with (56), one gets in (55): p(x_1) = y_1. The other nodes x_j are dealt with similarly.
At first glance, Formula (53) seems to have no advantage over the algorithm from Theorem 11. We dispel the doubts about its practical utility in Section 9. The present section concludes with a technical result.
Proof. Let us prove that the linear combination equals zero for k ∈ {0, . . . , N − 1}. Therefore, we have N linear homogeneous equalities, and all the equalities (60) are valid. Thus, we know all the zeros of the polynomial H_N(x; {τ̂}), while its leading coefficient equals H_N({τ̂}). The latter has already been evaluated by (56). This completes the proof.
Rational Interpolation

Hereinafter, we will not distinguish between solutions to the problem whose numerators and denominators are multiplied by a common numerical factor.
The first solution to the problem was suggested in the form of a generalization of the Lagrange representation (43) of the polynomial interpolant.
Theorem 14 (Cauchy, ref. [2]). Denote: The rational interpolant (61) can be found via the formulas: Being valid generically, Cauchy's solution fails for some particular choices of the data set (1). Whereas the polynomial interpolation problem always has a solution, the rational interpolation one is not always resolvable. This defect was first discovered by Kronecker [3] and later discussed by Netto [4]. To exemplify this, we first generate from the condition (62) the system of equations or, equivalently,

p_n + p_{n−1} x_j + · · · + p_0 x_j^n = (q_m + q_{m−1} x_j + · · · + q_0 x_j^m) y_j for j ∈ {1, . . . , N}, (64)

which is linear with respect to the N + 1 coefficients of p(x) and q(x). The principal solvability of this system can be established with the aid of linear algebra methods, for instance, via the Gaussian elimination procedure.
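The linear system (64) can be attacked directly; the sketch below (with a hypothetical data set generated by (x + 1)/(x² + 2), N = 4, n = 1, m = 2) finds the coefficient vector as a kernel vector of the homogeneous system via the SVD. As the discussion that follows warns, a solution of (64) must still be checked for unattainable points, i.e., nodes where p(x_j) = 0 and q(x_j) = 0 simultaneously.

```python
import numpy as np

nodes = np.array([0.0, 1.0, 2.0, 3.0])
y = (nodes + 1.0) / (nodes**2 + 2.0)          # hypothetical data source
n, m = 1, 2                                   # deg p = 1, deg q = 2, n + m = N - 1

# Row j encodes p(x_j) - y_j * q(x_j) = 0 in the N + 1 = 5 unknown coefficients
A = np.column_stack([nodes**i for i in range(n + 1)] +
                    [-y * nodes**i for i in range(m + 1)])
kernel = np.linalg.svd(A)[2][-1]              # N equations, N + 1 unknowns
p, q = kernel[:n + 1], kernel[n + 1:]         # ascending-power coefficients
residual = np.polyval(p[::-1], nodes) - y * np.polyval(q[::-1], nodes)
value_at_5 = np.polyval(p[::-1], 5.0) / np.polyval(q[::-1], 5.0)
```

For this data, the recovered fraction is proportional to (x + 1)/(x² + 2), so the evaluation at x = 5 gives 6/27.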

Example 2.
Given the data set Solution.
Resolving the system (64), one gets the expressions: However, p(2) = 0 and q(2) = 0, and therefore the condition r(2) = 3 is not satisfied. It is not satisfied even if we cancel the common linear factor in the numerator and the denominator.
The explanation of this phenomenon of unattainable points consists in the non-equivalence of the passage from (62) to (63): for some node x_j, one might obtain a solution to the linear system (64) satisfying both conditions p(x_j) = 0 and q(x_j) = 0.
On the other hand, the solution to the problem might fail to be unique.

Example 3. For the data set
x | −1   0    1    2    3
y |  1   1   1/3  1/7  1/13

generated by the rational function 1/(x² + x + 1), there exist infinitely many rational interpolants.

We now pass to an alternative approach to the problem, due to Jacobi.

Theorem 15. Assume that the Hankel determinants H_{m+1}({τ}) and H_n({τ̂}) generated by the corresponding sequences do not vanish. Then there exists a unique rational interpolant with deg p(x) = n, deg q(x) ≤ m = N − n − 1, which can be expressed as:

Proof. It can be found in [11]; here we present only the underlying idea leading to the appearance of the Hankel polynomials. If the desired interpolant exists, then the equalities (64) are valid. Multiply the jth equality by x_j^k/W′(x_j) for k ∈ {0, . . . , m − 1} and sum the obtained equalities over j. Due to the Euler-Lagrange equalities (48), one arrives at the system of equations

q_m τ_0 + q_{m−1} τ_1 + · · · + q_0 τ_m = 0, . . . , q_m τ_{m−1} + q_{m−1} τ_m + · · · + q_0 τ_{2m−1} = 0.

Therefore, the denominator of the fraction should satisfy the relation q(x) = A · H_m(x; {τ}) for some constant factor A.
In a similar way, multiplying the equalities

p_n (1/y_j) + p_{n−1} (x_j/y_j) + · · · + p_0 (x_j^n/y_j) = q_m + q_{m−1} x_j + · · · + q_0 x_j^m, j ∈ {1, . . . , N},

by x_j^k/W′(x_j) and summing over j, one treats the numerator p(x). The formulation of Theorem 15 is due to the authors of [11]. Jacobi did not concern himself in [9] with questions of the existence or uniqueness of a solution to the interpolation problem. He only suggested that the denominator of the (potential) rational interpolant can be represented in the form of the Hankel polynomial H_m(x; {τ}). Once this polynomial is computed, the rational interpolation problem reduces to the polynomial interpolation one for the numerator of the fraction. Nor did Jacobi concern himself with computational optimization of the algorithm, such as the one outlined in the solution to Example 4 exploiting Formula (24).
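Jacobi's recipe for the denominator can be tested numerically. In the sketch below (hypothetical data generated by (x + 1)/(x² + 2) at four nodes, so n = 1, m = 2), the sequence τ_k = ∑_j x_j^k y_j / W′(x_j) is formed and the coefficients of H_m(x; {τ}) are computed by cofactor expansion of the bordered determinant; up to a constant factor, they reproduce the denominator x² + 2.

```python
import numpy as np

nodes = np.array([1.0, 2.0, 3.0, 4.0])
y = (nodes + 1.0) / (nodes**2 + 2.0)          # data from p/q with n = 1, m = 2
m = 2

Wp = np.array([np.prod([xj - xi for xi in nodes if xi != xj]) for xj in nodes])
tau = [float(np.sum(nodes**k * y / Wp)) for k in range(2 * m)]  # tau_0..tau_{2m-1}

# H_m(x; {tau}): expand along the last row (1, x, ..., x^m) of the bordered matrix
top = np.array([[tau[i + j] for j in range(m + 1)] for i in range(m)])
den = np.array([(-1) ** (m + j) * np.linalg.det(np.delete(top, j, axis=1))
                for j in range(m + 1)])
den /= den[-1]                                # normalize the leading coefficient
# den is now [2, 0, 1] in ascending powers, i.e., the denominator x^2 + 2
```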
We next take a closer look at the JJ-identity. The linear relation of the type (4) connecting three consecutive Hankel polynomials generated by an arbitrary sequence is reminiscent of the similar-looking recurrence identity for the classical orthogonal polynomials [30,31]: for some real constants A_k, B_k, C_k. The proof of this identity is traditionally carried out using the orthogonality of the system {P_j(x)}_{j=0}^k. For instance, in the case of monic polynomials P_j(x), the following formula for the evaluation of B_k is suggested: where the inner product of a pair of real polynomials P(x), Q(x) is defined via (75) with a function w(x) positive and integrable on the interval a < x < b. The relationship of orthogonal polynomials to the Hankel ones is evident from the form of the Gram matrix of the sequence 1, x, x², . . . , x^n.
This matrix is of Hankel type: for P(x) = P_0 + P_1 x + · · · + P_n x^n and Q(x) = Q_0 + Q_1 x + · · · + Q_n x^n, one has ⟨P(x), Q(x)⟩ = (P_0, P_1, . . . , P_n) M (Q_0, Q_1, . . . , Q_n)^⊤, where M is the Hankel matrix of the moments. The Gram-Schmidt orthogonalization process applied to the sequence (74) can be interpreted as the successive computation of the Hankel polynomials {H_k(x; {µ})}_{k=1}^n. Though orthogonal polynomials are Hankel polynomials, the converse is not necessarily true. An immediate counterexample demonstrates this: for the generating sequence {1, −1, 1, −1, 161, 999}, the first three Hankel polynomials are such that H_1(x) and H_3(x) cannot be orthogonal. A necessary condition for the orthogonality of a sequence of Hankel polynomials is the positive definiteness of their Hankel matrix. In this context, the result of Theorem 6 cannot be reduced to the case of orthogonal polynomials; it indeed needs an orthogonality-free proof.
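The moment interpretation can be made concrete: for w(x) ≡ 1 on [0, 1], the Gram matrix entries are the moments µ_k = 1/(k + 1) (a Hilbert-type Hankel matrix), and the Hankel polynomials computed from this sequence are, up to constant factors, the shifted Legendre polynomials; the sketch below (ours) checks the orthogonality ⟨H_1, H_2⟩ = 0 by exact coefficientwise integration.

```python
import numpy as np

mu = [1.0 / (k + 1) for k in range(6)]       # moments of w(x) = 1 on [0, 1]

def hankel_poly(c, k):
    """Ascending coefficients of H_k(x) via cofactors of the row 1, x, ..., x^k."""
    top = np.array([[c[i + j] for j in range(k + 1)] for i in range(k)])
    return np.array([(-1) ** (k + j) * np.linalg.det(np.delete(top, j, axis=1))
                     for j in range(k + 1)])

def integral01(a):
    """Integral over [0, 1] of the polynomial with ascending coefficients a."""
    return sum(ai / (i + 1) for i, ai in enumerate(a))

H1 = hankel_poly(mu, 1)          # proportional to the shifted Legendre 2x - 1
H2 = hankel_poly(mu, 2)          # proportional to 6x^2 - 6x + 1
inner_12 = integral01(np.convolve(H1, H2))   # <H1, H2> vanishes
inner_02 = integral01(H2)                    # <1, H2> vanishes
```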
The authors of [11] take a step further. Considering the Hankel matrix M = [τ_{j+k−2}]_{j,k=1}^N generated by (65), they replace the requirement of its positive definiteness by that of the non-vanishing of all its leading principal minors H_1({τ}), . . . , H_N({τ}). Formula (75) is kept for the definition of the inner product. In this way, some of the traditional properties of the classical inner product are lost; however, a relationship of the type (4) connecting the (quasi-)orthogonal Hankel polynomials is preserved. Returning to Formula (27) for the computation of the constants in the JJ-identity (4), one can notice that its cost is equivalent to only two inner products in R^{k+1}. We have no explanation for the perplexing fact that the more general and computationally effective result has been completely forgotten; we have not found any reference to it later than 1910.
We next intend to determine which of the assumptions posed in Theorem 15 is responsible for the uniqueness of the solution to the rational interpolation problem, i.e., the one that prevents cases like that of Example 3.

Resultant Interpolation
Let us clarify the relationship of the Hankel determinants H_{m+1}({τ}) and H_n({τ̂}) appearing in Theorem 15 to the problem of the existence of a nontrivial common factor of the numerator and the denominator of the rational interpolant. This is closely related to the notion mentioned in Section 2, namely the resultant R(p(x), q(x)) of the polynomials (6).
Given the polynomials (6), compute their values at the nodes and assume that none of them is zero. Generate the Hankel matrices by the sequences, with W(x) defined by (44).
Theorem 16. The following equalities are valid:
Proof. The proof will be illustrated for the particular case n = 3, m = 5. Consider first the case where p(x) possesses only simple zeros; denote them by λ_1, λ_2, λ_3. Construct a new sequence: From (49), it follows that τ_k = −η_k for k ∈ {0, ..., 5}, and therefore, using the definition of the resultant in the form (7), the last expression can be represented via the alternative definition of the resultant (8). Thus, the equality (79) is true. We have proved this equality under the additional assumption that the polynomial p(x) has all its zeros distinct. To extend the result to the general case, the traditional trick consists in the application of Theorem 4. By Corollary 1, the condition that the zeros of a polynomial with symbolic (indeterminate) coefficients are distinct (simple) can be expressed as an algebraic inequality in these coefficients. In accordance with the referred theorem, an algebraic identity that is valid under an extra assumption in the form of an algebraic inequality must be valid for all values of the indeterminates.
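For reference, the resultant entering these equalities can also be computed directly from its classical Sylvester-determinant definition. The following sketch (hypothetical helper names; integer coefficients listed in descending order) checks the basic property that R(p, q) vanishes exactly when p and q share a zero.

```python
def sylvester(p, q):
    """Sylvester matrix of p, q given by descending coefficient lists."""
    m, n = len(p) - 1, len(q) - 1           # degrees of p and q
    size = m + n
    M = []
    for i in range(n):                      # n shifted copies of p
        M.append([0] * i + p + [0] * (size - m - 1 - i))
    for i in range(m):                      # m shifted copies of q
        M.append([0] * i + q + [0] * (size - n - 1 - i))
    return M

def det(M):
    """Laplace expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def resultant(p, q):
    return det(sylvester(p, q))

p = [1, -3, 2]   # (x - 1)(x - 2)
q = [1, 2, -3]   # (x - 1)(x + 3): common zero x = 1 with p
r = [1, 0, 1]    # x^2 + 1: no common zero with p
print(resultant(p, q))  # 0
print(resultant(p, r))  # 10 = r(1) * r(2)
```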

Hankel Polynomials Computation for Extended Data Set
Let the data set (1) be extended by adding an extra point (x_{N+1}, y_{N+1}). Is it possible to organize the computation of the rational interpolant for this set based on the already computed interpolant for the initial set (1)? In other words, we are looking for an analogue of the Newton interpolation polynomial construction procedure.
In the framework of the approach discussed in Section 5, the stated problem can be reformulated as that of effective computation of the Hankel polynomials generated by the sequence, and by the similar one obtained on replacing y_j with 1/y_j, under the assumption that the corresponding Hankel polynomials for the smaller data set (1) are already at our disposal.
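A minimal sketch of the bookkeeping involved (hypothetical names; exact rational arithmetic): when a node x_{N+1} is appended, the values W'(x_j) need not be recomputed from scratch, since the new W(x) differs from the old one only by the factor (x − x_{N+1}); the generating sums τ_k = Σ_j x_j^k y_j / W'(x_j) can then be refreshed cheaply.

```python
from fractions import Fraction
from math import prod

def wprime_values(xs):
    """W'(x_j) = prod_{i != j} (x_j - x_i) for W(x) = prod_j (x - x_j)."""
    return [prod(xs[j] - xs[i] for i in range(len(xs)) if i != j)
            for j in range(len(xs))]

def extend(xs, ws, x_new):
    """Refresh the list of W'(x_j) when the node x_new is appended to xs."""
    ws_new = [w * (xj - x_new) for xj, w in zip(xs, ws)]   # extra factor (x_j - x_{N+1})
    ws_new.append(prod(x_new - xj for xj in xs))           # W'(x_{N+1}) = W_old(x_{N+1})
    return ws_new

def tau(xs, ys, ws, kmax):
    """tau_k = sum_j x_j^k y_j / W'(x_j) for k = 0, ..., kmax - 1."""
    return [sum(Fraction(x ** k * y, w) for x, y, w in zip(xs, ys, ws))
            for k in range(kmax)]

# Hypothetical data generated by f(x) = x^2 + x + 3 at nodes 0, 1, 2; then x = 4 is appended.
xs, ys = [0, 1, 2], [3, 5, 9]
ws = extend(xs, wprime_values(xs), 4)
assert ws == wprime_values([0, 1, 2, 4])    # incremental update agrees with recomputation
print(tau([0, 1, 2, 4], ys + [23], ws, 2))  # [Fraction(0, 1), Fraction(1, 1)]
```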
Computation of the numerator of the wanted fraction is performed in a similar way, using the expressions for H_2(x) and H_3(x) already computed in Example 4. We do not discuss here the exceptional cases where some of the polynomials H_k(x) vanish at x_{N+1}. As for the relationship of the obtained result to the Newton interpolation polynomial construction procedure (the former should contain the latter as a particular case), we restrict ourselves to a reference to Theorem 13.

Comparison with the Barycentric Representation
We intend here to establish the relationship of Jacobi's approach to the one based on the so-called barycentric representation.
Theorem 19 (Ref. [6]). If a solution to the rational interpolation problem (61)-(62) with m ≤ n exists, then u = [u_1, ..., u_N] is a vector of weights in one of its barycentric representations if and only if u belongs to the kernel of the (N − 1) × N stacked matrix.
First, we claim that the m + 1 vectors belong to the kernel of the Vandermonde matrix V; indeed, this evidently follows from the equalities (48). Next, we look for a linear combination v_0 Ξ_0 + v_1 Ξ_1 + ··· + v_m Ξ_m that belongs to the kernel of the matrix H, or, equivalently, in the notation of the symmetric functions (65) of the data set. Provided that the rank of the matrix on the left-hand side of this equation equals m, its kernel is one-dimensional, with the basis vector consisting of the sequence of algebraic complements (cofactors) to the entries of the last row of the matrix. Therefore, this vector must be proportional to the vector of coefficients of the denominator of the rational interpolant represented by (70).
Consequently, the Hankel polynomial approach to the rational interpolation problem can be interpreted as a method for resolving the system of linear equations that determines the vector of weights in the barycentric representation of the interpolant from Theorem 19.
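To make the object of Theorem 19 concrete, recall that a weight vector u determines the interpolant through the barycentric formula r(x) = (Σ_j u_j y_j/(x − x_j)) / (Σ_j u_j/(x − x_j)). A small sketch with hypothetical data (exact arithmetic; the weight vector below was obtained by hand for the function 1/(1 + x)) illustrates the evaluation:

```python
from fractions import Fraction

def bary_eval(x, xs, ys, us):
    """Evaluate the barycentric form r(x); at a node it reduces to the data value."""
    for xj, yj in zip(xs, ys):
        if x == xj:
            return Fraction(yj)
    num = sum(Fraction(u, x - xj) * y for xj, y, u in zip(xs, ys, us))
    den = sum(Fraction(u, x - xj) for xj, u in zip(xs, us))
    return num / den

# Hypothetical data sampled from r(x) = 1/(1 + x) at the nodes 0, 1, 2;
# u = [1, -4, 3] is a corresponding weight vector (any nonzero scaling works).
xs, us = [0, 1, 2], [1, -4, 3]
ys = [Fraction(1), Fraction(1, 2), Fraction(1, 3)]
print(bary_eval(5, xs, ys, us))  # Fraction(1, 6), i.e., 1/(1 + 5)
```

Note that the weights are determined only up to a common nonzero factor, which matches the one-dimensional kernel in the argument above.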

Redundant Data Set with Systematic Errors
We now turn to a problem of data storage systems, namely that of establishing the locations of erroneous blocks in a sequence containing, along with the data (information) blocks, some specially designed checksums. This problem relates to the theory of error correcting codes. To explain the idea of one of the practical procedures of this theory, we first present an artificial example.
Example 6. It is known that the data set was initially generated by a polynomial of degree two with integer coefficients, but later on a suspicion arises that up to two of its y-values are corrupted. Find the erroneous values and restore the initial polynomial.

Solution.
Compute successively the Hankel polynomials from Theorem 15 and try to factorize them over Q. We succeed for H_2(x; {τ}) and for H_4(x; {τ̃}). It can easily be verified that the values of the polynomial f(x) = 4x^2 − 3x + 8 coincide with the y-values of the data set at all the given x-values except for x = −1 and x = 2. Thus, the integer zeros of H_2(x; {τ}) give the locations of the erroneous values, while the square factor of H_4(x; {τ̃}) coincides with the "true" polynomial generating the remaining correct y-values.
We now make the problem statement concrete. Let the data set (1) of rational numbers be redundant for the polynomial computation, i.e., the number of true values in it exceeds k + 1, where k = deg f(x) is known a priori. The number of corrupted y-values is unknown, but one may expect it to be much smaller than the number of true ones. We need to find the x-values corresponding to the erroneous y-values. The first step in dealing with this problem should be a test for the presence of errors. The next result is a trivial consequence of Theorem 11.
Theorem 20. For the polynomial interpolant generated by (1) to be of degree k < N − 1, it is necessary and sufficient that
Thus, the uncorrupted data set (1) passes the test {τ_j = 0}_{j=0}^{N−k−2}, and then Formula (53) permits one to compute the polynomial. One may expect that a corrupted data set would not pass the test: generically, the degree of the interpolant for a random data set equals N − 1 > k. If the number E of erroneous values is given to one by revelation, the following result can be utilized for their location.
Theorem 21 (Ref. [32]). Let E ∈ {1, 2, ..., N/2 − 1} and let e_1, ..., e_E be distinct numbers from {1, 2, ..., N}. Let the polynomial f(x) be of degree k < N − 2E. Let the data set satisfy the conditions (a) y_j = f(x_j) for j ∈ {1, ..., N} \ {e_1, ..., e_E}, (b) y_{e_s} ≠ f(x_{e_s}) for s ∈ {1, ..., E}. Then,
This result confirms one empirical conclusion from the solution of Example 6. The other one can also be justified: under the conditions of Theorem 21, the following identity is valid, up to a sign [33]: here, a_0 stands for the leading coefficient of f(x). Unfortunately, in the practice of error correcting codes, the exact number of erroneous values, as well as their locations, is seldom known a priori. Therefore, in order to find the polynomial from Theorem 21, one should verify the reducibility of the successive polynomials H_1(x; {τ}), H_2(x; {τ}), .... This immediately refers us to the results of Section 3 for the effective procedure of computing the next polynomial in the sequence when the previous ones are already known.
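The test of Theorem 20 and the error-location claim can be illustrated numerically. The sketch below uses a hypothetical data set (the one from Example 6 is not reproduced in full here): f(x) = 4x^2 − 3x + 8 is sampled at eight integer nodes, two y-values are corrupted, and the Hankel polynomial H_2(x; {τ}) built from τ_k = Σ_j x_j^k y_j / W'(x_j) then has the error nodes among its zeros.

```python
from fractions import Fraction
from math import prod

def taus(xs, ys, kmax):
    """tau_k = sum_j x_j^k y_j / W'(x_j), exact rational arithmetic."""
    ws = [prod(xs[j] - xs[i] for i in range(len(xs)) if i != j)
          for j in range(len(xs))]
    return [sum(Fraction(x ** k * y, w) for x, y, w in zip(xs, ys, ws))
            for k in range(kmax)]

f = lambda x: 4 * x * x - 3 * x + 8
xs = [-2, -1, 0, 1, 2, 3, 4, 5]
clean = [f(x) for x in xs]

# Clean data passes the test of Theorem 20: tau_0 = ... = tau_{N-k-2} = 0 (N = 8, k = 2).
assert taus(xs, clean, 5) == [0] * 5

# Corrupt the values at x = -1 and x = 2.
ys = list(clean)
ys[1] += 7
ys[4] -= 5
t = taus(xs, ys, 4)

# H_2(x; {tau}) = det [[t0, t1, t2], [t1, t2, t3], [1, x, x^2]],
# expanded by cofactors of the last row:
c0 = t[1] * t[3] - t[2] ** 2
c1 = -(t[0] * t[3] - t[1] * t[2])
c2 = t[0] * t[2] - t[1] ** 2
print([c0 / c2, c1 / c2])  # [Fraction(-2, 1), Fraction(-1, 1)]
# i.e., H_2 is proportional to x^2 - x - 2 = (x + 1)(x - 2): the error locations.
```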
We now recall the idea of the Berlekamp-Welch algorithm [22] for error correction codes. Assume the integers y_1, ..., y_k are information blocks to be protected against failures when put into computer data storage. To this end, let us organize the checksum computation in the following way. First, choose arbitrary distinct integers x_1, ..., x_k, say {x_j = j}_{j=1}^k. Next, compute the polynomial interpolant f(x) of degree ≤ k − 1 for the data set {(x_j, y_j)}_{j=1}^k. Finally, evaluate f(x) at the points {x_j = j}_{j=k+1}^N. The obtained data set {(x_j, f(x_j))}_{j=1}^N is redundant for the computation of f(x), but now, in view of the preceding considerations, it can sustain a certain amount of data corruption.
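A toy version of this encoding over the rationals (a hedged sketch with hypothetical names; real implementations work over GF(2^D)): interpolate the k information values by a polynomial of degree ≤ k − 1 and append its values at further nodes as checksums. Any k of the stored values then suffice to recover the rest.

```python
from fractions import Fraction

def interp_eval(xs, ys, x):
    """Lagrange evaluation at x of the interpolant through {(xs_j, ys_j)}."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = Fraction(yj)
        for i, xi in enumerate(xs):
            if i != j:
                term *= Fraction(x - xi, xj - xi)
        total += term
    return total

info = [12, 7, 30]                 # k = 3 information blocks
k, N = len(info), 7
nodes = list(range(1, N + 1))      # x_j = j
checksums = [interp_eval(nodes[:k], info, x) for x in nodes[k:]]
stored = info + checksums          # redundant data set of N values

# Any k of the N stored values recover the rest; e.g., blocks 4..6 restore block 1:
print(interp_eval(nodes[3:6], stored[3:6], 1))  # Fraction(12, 1)
```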
Real-life applications of the algorithm are performed in finite fields, say GF(2^D) for a sufficiently large positive integer D. Usually, {x_j = a^{j−1}}_{j=1}^N, where a stands for a primitive element of the field. The polynomial (92) is referred to as the error locator polynomial. In [22], its construction is treated as a consequence of solving the linear system (64). The presence of errors in the data set causes the appearance of a nontrivial common factor of the numerator and the denominator, i.e., the conditions of Theorem 15 are not fulfilled. Theorem 21 demonstrates that it is possible to devise a version of the Berlekamp-Welch algorithm that is free from the rational interpolation stage.
Compared with infinite fields, several computational difficulties appear when implementing the algorithms in GF(2^D). We just list some of them: finding the inverse and the discrete logarithm of an element, evaluation and zero finding for a polynomial over GF(2^D), etc. The vanishing of H_{k−1} in the JJ-identity (15) is to be treated as an exception in an infinite field, but it has a non-zero probability in GF(2^D). A complete treatment of these issues is left to further publications on the subject.

Conclusions
We have addressed an approach to the solution of the polynomial and rational interpolation problems that originated in a paper by Jacobi. It consists, first, in the representation of the interpolant by virtue of appropriate Hankel polynomials and, further, in the recursive computation of the sequence of these polynomials via the Jacobi-Joachimsthal identity and its developments. This provides an opportunity to compute effectively the whole family of rational interpolants for a given data set.
Problems for further investigation relating to the theoretical background of Jacobi's approach are mentioned in Section 3. Extension of the approach to the rational Hermite interpolation problem [34] (mentioned in the Introduction) looks promising.
We trace some other topics of potential interest related to the approach:
1. The problem of choosing an interpolant satisfying prescribed restrictions on its entries. For the case of polynomials over R, this might be the demand of the absence of either real zeros of the denominator or zeros lying in the right half of the complex plane (in other words, the denominator must be a stable polynomial). Is it possible to resolve these types of problems within the framework of the Hankel polynomial sequence construction? A positive answer can be guessed from the title of Joachimsthal's original paper [13]: indeed, it is devoted to the zero separation problem for a univariate polynomial over R.
2. Multivariate interpolation. The treatment of this problem meets more obstacles than its univariate counterpart. For instance, one can select a data set {(x_j, y_j, z_j)}_{j=1}^9 ⊂ R^3 with distinct pairs (x_j, y_j) for which the polynomial interpolant f(x, y) ∈ R[x, y], deg f = 3, {f(x_j, y_j) = z_j}_{j=1}^9, does not exist. On the other hand, counterparts of the results of Section 4 exist for multivariate symmetric functions. For instance, an analogue of Theorem 10 is known as (what a surprise!) the Jacobi equality [35]. Therefore, the extendability of Jacobi's approach to the multivariate rational interpolation problem looks intriguing.