Approximation of Finite Hilbert and Hadamard Transforms by Using Equally Spaced Nodes

Abstract: In the present paper, we propose a numerical method for the simultaneous approximation of the finite Hilbert and Hadamard transforms of a given function f, assuming that only the samples of f at equidistant points are known. As reference interval we consider [−1, 1], and as approximation tool we use iterated Boolean sums of Bernstein polynomials, also known as generalized Bernstein polynomials. Pointwise estimates of the errors are proved, and some numerical tests are given to show the performance of the procedures and the theoretical results.


Introduction
The Hilbert transform in its original form is an integral transform given in (1). Alongside this form there are different variants defining Hilbert transforms on a finite interval, on the torus, or on discrete groups. The objects of our study are the finite Hilbert transform and its derivative, namely the Hadamard transform, both defined on the finite (standard) interval [−1, 1]. They are given in (2), where the single and double bar-integral notation indicates that the involved integrals have to be understood as the Cauchy principal value and the Hadamard finite-part integrals, respectively. The interest in integrals of this kind is due to their wide use in formulating boundary-value problems in many areas of mathematical physics (potential theory, fracture mechanics, aerodynamics, elasticity, etc.) in terms of singular integral equations on [−1, 1] involving such integrals (see e.g., [1][2][3][4][5] and the references therein).
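For the reader's convenience, the two transforms can also be written out explicitly. The display below is our rendering of the standard definitions (the original formulas are not legible in this copy); p.v. and f.p. labels replace the single and double bar-integral notation:

```latex
H(f,t) \;=\; \mathrm{p.v.}\!\int_{-1}^{1}\frac{f(x)}{x-t}\,dx
\;=\;\lim_{\varepsilon\to 0^{+}}\int_{[-1,1]\setminus(t-\varepsilon,\,t+\varepsilon)}\frac{f(x)}{x-t}\,dx,
\qquad |t|<1,
```

```latex
H_{1}(f,t) \;=\; \mathrm{f.p.}\!\int_{-1}^{1}\frac{f(x)}{(x-t)^{2}}\,dx
\;=\;\lim_{\varepsilon\to 0^{+}}\Bigl[\,\int_{[-1,1]\setminus(t-\varepsilon,\,t+\varepsilon)}\frac{f(x)}{(x-t)^{2}}\,dx \;-\;\frac{2f(t)}{\varepsilon}\Bigr],
\qquad |t|<1.
```

As a consistency check, for f ≡ 1 the finite-part definition gives H 1 (1, t) = −2/(1 − t 2 ), which agrees with differentiating H(1, t) = log((1 − t)/(1 + t)) with respect to t.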
In fact, the Hilbert transform in its aforementioned form (1) and all its relatives appear in various fields of mathematical analysis, signal processing, physics and other areas of science. Among them are partial differential equations, optics (X-ray crystallography, electron-atom scattering), electrodynamics and quantum mechanics (Kramers–Kronig relation), and signal processing (phase retrieval, transfer functions of linear systems, spectral factorization). We will not go into details here but instead refer to the comprehensive two-volume treatise by F. King [6] on many aspects of the Hilbert transform and its various variants. Due to its outstanding relevance, it is of great importance to possess procedures which allow the Hilbert transform to be computed numerically with a high degree of accuracy. This problem has been studied by many authors for all the different variants of the Hilbert transform and under different assumptions. We limit the citations here to only a few interesting papers [7][8][9].
Our focus in the present paper lies on the numerical computation of the finite Hilbert and Hadamard transforms (2). There is an extensive literature on numerical methods for these transforms; we only mention here [5,10] and the references therein. Many of these methods produce a high degree of approximation, especially when the smoothness of f increases (see e.g., [11][12][13][14][15]). However, since they are based on Gaussian quadrature rules and their modified versions, or on product rules, they require the values of f to be given at the zeros of Jacobi polynomials, which often is not the case. For example, in many applications the measurements of f are produced by devices which sample the function at equidistant knots. Other procedures which take this fact into account and which are frequently used in applications involve composite quadrature rules on equally spaced points. However, quadrature rules of this type suffer from a low degree of approximation or show saturation phenomena. Hence there is a need for a new approach which combines the advantages of both the aforementioned methods.
To move towards this goal, we propose some quadrature rules obtained by means of the sequence {B m,s f } m of the so-called generalized Bernstein polynomials, defined as iterated Boolean sums of the classical Bernstein polynomials {B m f } m ([16][17][18]). These formulas are based on equally spaced knots in the interval [−1, 1], and their convergence order increases with increasing smoothness of the function, in contrast to various popular rules based on piecewise polynomial approximation. Moreover, there is numerical evidence that, for fixed m, the speed of convergence of the formula increases for higher values of the parameter s (see [19], Remark 4.1).
Concerning the numerical computation of the Hilbert transform H( f ), we revisit the method introduced in [20] from both the theoretical and the computational point of view. Indeed, according to a more recent result obtained in [21], we estimate the quadrature error in terms of the more refined kth ϕ-modulus of Ditzian and Totik, instead of the ordinary modulus of smoothness. As a consequence, we get error estimates in Sobolev and Hölder–Zygmund spaces, and we are able to state the maximum rate of convergence for functions in such spaces. The second improvement of the method in [20] concerns the computation of the quadrature weights, which is performed in a more stable way. It is based on a recurrence relation which does not require the transformation to the canonical basis {1, x, . . . , x m }, but preserves the fundamental Bernstein polynomials {p m,k (x)}, k = 0, . . . , m. As regards the Hadamard transform H 1 ( f , t), before introducing the numerical procedure for its computation, we prove that H 1 ( f , t) presents algebraic singularities at the endpoints of the interval when the derivative of the density function f satisfies a Dini-type condition. Subsequently, we introduce a quadrature rule for approximating H 1 ( f , t), again based on the polynomials B m,s f and useful also for the simultaneous approximation of H( f , t) and H 1 ( f , t), since the samples of the function f at the equidistant nodes employed in the computation of H( f , t) can be reused to approximate H 1 ( f , t) too. The convergence of this quadrature rule is proved by using the simultaneous approximation property of the generalized Bernstein polynomials and, similarly to the Hilbert transform case, we state weighted pointwise estimates for the quadrature error.
It turns out that the additional integer parameter s introduced by the B m,s f can be suitably chosen to accelerate the convergence of the quadrature rules for both transforms H and H 1 . Moreover, the coefficients of both quadrature rules are given in a simple, compact vector form and can be efficiently computed by recurrence.
The outline of the paper is as follows. Section 2 contains some notation and preliminary results concerning the generalized Bernstein polynomials and the Hilbert and Hadamard transforms. The quadrature rules with the corresponding pointwise error estimates can be found in Section 3 where details about the recurrence relations for their coefficients are also given. Section 4 contains additional numerical details for computing the quadrature weights and some numerical tests which show the performance of our procedures and confirm the theoretical estimates. Finally, in Section 5 the proofs are given.

Notation and Preliminary Results
In the sequel, C will denote a generic positive constant which may differ at different occurrences. We will write C = C(a, b, . . .) to indicate that C is independent of a, b, . . . Moreover, if A, B > 0 depend on some parameters, the notation A ∼ B means that there are fixed constants C 1 , C 2 > 0 (independent of the parameters in A and B) such that C 1 A ≤ B ≤ C 2 A. We recall ([22], Theorem 2.1.1) the equivalence between the ϕ-modulus ω r ϕ ( f , u) and the corresponding K-functional, where AC denotes the space of the absolutely continuous functions on [−1, 1]. By means of this modulus of continuity, we define the subspace DT ⊂ C 0 of all functions satisfying a Dini-type condition, namely those f for which ω ϕ ( f , u)/u is integrable over (0, 1), where we set ω ϕ ( f , u) = ω 1 ϕ ( f , u). Moreover, the Hölder–Zygmund space Z λ of order λ > 0 is defined via the ϕ-modulus and equipped with the corresponding norm. The space Z λ constitutes a particular case of the Besov-type spaces studied in [23], where a norm equivalence has been proved ([23], Theorem 2.1). This norm equivalence ensures that the previous definitions are indeed independent of the integer r > λ we choose. Moreover, by (6) we get an interesting characterization of the continuous functions f ∈ Z λ in terms of the rate of convergence to zero of the errors of best uniform polynomial approximation of f , which in turn is closely related to the smoothness of f (see e.g., Corollary 8.2.2 and Theorem 6.2.2 of [22]); such a characterization holds for any continuous function f and any λ > 0. Furthermore, by the previous definitions, an embedding holds for any r > λ > 0. In the case λ = k ∈ N, by virtue of (3), the space Z λ is equivalent to the Sobolev space W k equipped with the norm ∥ f ∥ W k = ∥ f ∥ + ∥ f (k) ϕ k ∥, and we recall the related estimate ([22], Theorem 4.2.1). Finally, since we are going to use a result of [24] based also on the ordinary moduli of smoothness (cf. Theorem 2), we conclude the subsection by recalling their definition and some properties: the ordinary r-th modulus of smoothness of f is defined via r-th finite differences, it is related to the ϕ-modulus in a standard way, and we have the following analogues of (9) and (10) (see e.g., [22], p. 40).
For any f ∈ C 0 , the m-th Bernstein polynomial B m f is defined as B m f (x) = ∑ m k=0 f (t k ) p m,k (x), with equally spaced nodes t k = −1 + 2k/m, where p m,k (x) = ( m k ) ((1 + x)/2) k ((1 − x)/2) m−k , k = 0, . . . , m, are the so-called fundamental Bernstein polynomials. They satisfy the recurrence relation p m,k (x) = ((1 + x)/2) p m−1,k−1 (x) + ((1 − x)/2) p m−1,k (x), with p 0,0 (x) = 1 and p m,k (x) = 0 whenever k < 0 or k > m. The computation of B m f (x) can be efficiently performed by the de Casteljau algorithm (see e.g., [25]).
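As an illustration of the construction just recalled, here is a minimal Python sketch of B m f on [−1, 1], using the equally spaced nodes t k = −1 + 2k/m and the triangular de Casteljau scheme; the normalization of p m,k is the standard one on [−1, 1] and is our assumption to the extent that the paper's display is not reproduced in this copy.

```python
from math import comb

def p_mk(m, k, x):
    # fundamental Bernstein polynomial on [-1, 1] (standard normalization)
    u = (1.0 + x) / 2.0
    return comb(m, k) * u**k * (1.0 - u)**(m - k)

def de_casteljau(coeffs, x):
    # triangular scheme: repeated convex combinations with weight u = (1+x)/2
    u = (1.0 + x) / 2.0
    b = list(coeffs)
    while len(b) > 1:
        b = [(1.0 - u) * b[i] + u * b[i + 1] for i in range(len(b) - 1)]
    return b[0]

def bernstein(f, m, x):
    # B_m f(x) = sum_{k=0}^m f(t_k) p_{m,k}(x), t_k = -1 + 2k/m
    t = [-1.0 + 2.0 * k / m for k in range(m + 1)]
    return de_casteljau([f(tk) for tk in t], x)
```

Since de Casteljau only forms convex combinations, linear functions are reproduced exactly, which is a convenient sanity check.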
Based on the polynomial B m f , the generalized Bernstein polynomials B m,s f were introduced independently in [16][17][18]. They are defined as the iterated Boolean sums B m,s := I − (I − B m ) s , where I denotes the identity operator. Note that B m,s f ∈ P m and that it can be expressed in the fundamental Bernstein basis, with coefficients collected in the matrix C m,s introduced in (21) below. An estimate of the error R m,s f := f − B m,s f in the uniform norm is given by the following theorem.

Theorem 1 ([21]). Let s ∈ N be fixed. Then for all m ∈ N and any f ∈ C 0 we have the stated pointwise estimate; moreover, for any 0 < λ ≤ 2s a corresponding rate in terms of the Hölder–Zygmund space Z λ is obtained, and the o-saturation class is characterized as well.
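A compact way to evaluate B m,s f follows from the identity B m,s = I − (I − B m ) s = B m ∑ j<s (I − B m ) j : on the vector of samples ( f (t 0 ), . . . , f (t m )), B m acts as the matrix A with entries A i,j = p m,j (t i ), so the coefficient vector is ∑ j<s (I − A) j applied to the samples. The sketch below assumes this realization (the paper's explicit displays are not legible here), with the standard basis on [−1, 1]:

```python
import numpy as np
from math import comb

def bern_basis(m, x):
    # row vector (p_{m,0}(x), ..., p_{m,m}(x)) on [-1, 1]
    u = (1.0 + x) / 2.0
    return np.array([comb(m, k) * u**k * (1.0 - u)**(m - k) for k in range(m + 1)])

def generalized_bernstein(f, m, s, x):
    # B_{m,s} f = (I - (I - B_m)^s) f = B_m (I + (I-A) + ... + (I-A)^{s-1}) f
    # where A[i, j] = p_{m,j}(t_i) at the equally spaced nodes t_i.
    t = np.linspace(-1.0, 1.0, m + 1)
    fv = np.array([f(ti) for ti in t])
    A = np.vstack([bern_basis(m, ti) for ti in t])
    C = sum(np.linalg.matrix_power(np.eye(m + 1) - A, j) for j in range(s))
    return float(bern_basis(m, x) @ (C @ fv))
```

For s = 1 this reduces to the classical Bernstein polynomial; increasing s at fixed m typically lowers the error on smooth functions, in line with the numerical evidence cited above.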

Remark 1.
Note that, unlike the basic Bernstein operator B m , the Boolean sums B m,s may accelerate the speed of convergence as the smoothness of f increases. In particular, taking into account (7)–(9), from Theorem 1 we deduce the corresponding maximum rates of convergence. Concerning the simultaneous approximation of the derivatives of f by means of the sequence {B m,s f } m , the following estimate holds.

Theorem 2 ([24], Corollary 1.6). Let s ∈ N be fixed. Then for all m, k ∈ N and any f ∈ C k we have, where ω := ω 1 and C = C(m, f ).

Remark 2. From Theorem 2 and (9), (11), we deduce the following maximum rate of convergence
Finally, we give some details on the computation of B m,s f and its first derivative.
Observe that a more convenient representation of the fundamental polynomials {p (s) m,i } m i=0 is given in [26] (see also [27]), where we set and C m,s ∈ R (m+1)×(m+1) is the change-of-basis matrix given by, where I denotes the identity matrix and A is the matrix with entries, and consequently. In matrix-vector notation this reads as follows. As regards the derivatives of the Bernstein polynomials B m,s f , we obtain from (25) the following useful representation. Finally, concerning the entries of the vector p ′ m (x), i.e., the derivatives of the fundamental Bernstein polynomials at x ∈ [−1, 1], starting from the definition (14), easy computations yield p ′ m,j (x) = (m/2) ( p m−1,j−1 (x) − p m−1,j (x) ), with the usual convention p m,j (x) = 0 if j ∉ N m 0 .
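The derivative relation for the fundamental polynomials, p ′ m,k (x) = (m/2)(p m−1,k−1 (x) − p m−1,k (x)) with out-of-range indices giving zero (the standard identity for this basis, stated here as our reading of the garbled display), can be checked numerically:

```python
from math import comb

def p(m, k, x):
    # p_{m,k} on [-1, 1]; zero outside the admissible index range
    if k < 0 or k > m:
        return 0.0
    u = (1.0 + x) / 2.0
    return comb(m, k) * u**k * (1.0 - u)**(m - k)

def dp(m, k, x):
    # p'_{m,k}(x) = (m/2) * (p_{m-1,k-1}(x) - p_{m-1,k}(x))
    return 0.5 * m * (p(m - 1, k - 1, x) - p(m - 1, k, x))
```

A quick consistency check: the derivatives must agree with finite differences, and, since the p m,k form a partition of unity, their derivatives must sum to zero.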

Hilbert and Hadamard Transforms
First, we recall that the finite Hilbert transform H( f , t) is defined by the principal value integral introduced above. The following theorem provides a sufficient condition for the existence of H( f , t) in (−1, 1) when the density function f satisfies a Dini-type condition. It also shows the behavior of H( f , t) as t approaches the endpoints of the interval [−1, 1].

Theorem 3 ([28], Theorem 2.1). For any f ∈ DT and |t| < 1, we have

Consider now H 1 ( f , t), which is the finite part of the divergent integral in the Hadamard sense (see for instance [5,10,14]), i.e., defined for |t| < 1 as in (30) (cf. [5], Equation (1.3)), (30) and (29) being equivalent when f is a Hölder continuous function (see [5]). By the following theorem, we state that for all f ∈ C 1 with f ′ ∈ DT, H 1 ( f , t) is finite for any |t| < 1, while it diverges algebraically at the endpoints of the interval [−1, 1].

Theorem 4. Let f ∈ C 1 be such that f ′ ∈ DT. Then, for any −1 < t < 1, we have

On the Computation of H( f , t)
The numerical method for computing H( f , t) is based on the following proposition.

Proposition 1. For any f ∈ DT and for any |t| < 1, we have

In view of (32), we must mainly approximate the function defined there. For any given s ∈ N, by means of the polynomial sequence {B m,s f } m , we define the following quadrature rule. Note that, taking into account the relation in (20) between the bases {p m,i (x)} and {p (s) m,i (x)}, the rule can be rewritten accordingly. About the computation of the coefficients {q m,i (t)} we can prove the following.

Proposition 2. For the sequence {q m,i (t)} the following recurrence relation holds

With this notation the quadrature rule (34) takes the form (39). This formula can be directly applied to approximate H( f , t) in the form given in (32); i.e., assuming f (t) to be known, we can approximate H( f , t) accordingly. In the case where only the samples of f at the equally spaced nodes are known, using matrix-vector notation as in (39) and (25), we arrive at (41), and the quadrature error can be expressed correspondingly. About the convergence of both the previous quadrature rules F m,s and H m,s , the following theorem estimates the associated errors, Φ m,s and E m,s respectively.

Theorem 5. Let −1 < t < 1. Then for any f ∈ DT, we have the estimate (42), with r < m and C = C(m, f , t).
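Proposition 1's display is not reproduced above; a standard regularization consistent with the surrounding text subtracts f (t), integrates the resulting smooth quotient, and adds back f (t) log((1 − t)/(1 + t)) (the exact principal value of the subtracted term). The sketch below uses Gauss–Legendre quadrature as our own choice for the demonstration; it is not the paper's rule (39):

```python
import numpy as np

def hilbert_finite(f, t, n=64):
    # H(f, t) = ∫_{-1}^{1} (f(x) - f(t))/(x - t) dx + f(t) * log((1-t)/(1+t)).
    # The regularized integrand extends continuously across x = t for
    # differentiable f, so ordinary quadrature applies (nodes never hit t here).
    x, w = np.polynomial.legendre.leggauss(n)
    g = (f(x) - f(t)) / (x - t)
    return float(g @ w + f(t) * np.log((1.0 - t) / (1.0 + t)))
```

For f ≡ 1 the integral term vanishes and the rule returns log((1 − t)/(1 + t)) exactly; for f (x) = x the regularized integrand is identically 1, so the result is 2 + t log((1 − t)/(1 + t)).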
The same estimate continues to hold for Φ m,s ( f , t), which satisfies also (43). In the case of smoother functions, from the previous estimates and (7), (8), (9) and (11), we easily get higher rates of convergence, and the same holds for |Φ m,s ( f , t)|. Moreover, for all f ∈ C k+1 , with 1 ≤ k ≤ 2s, we have a correspondingly faster rate. In conclusion, we remark that in proving Theorem 5 we also stated the following relations between the quadrature errors and the approximation errors of the generalized Bernstein polynomials

On the Computation of H 1 ( f , t)
We are going to use the following proposition.

Proposition 3. For any f ∈ C 1 such that f ′ ∈ DT and for all |t| < 1, we have

Supposing both f (t) and f ′ (t) to be known, we can compute exactly the non-integral part at the right-hand side of (44). In this case, the numerical computation of H 1 ( f , t) can be performed by the quadrature rule (46), where the polynomials d m,i (t), i = 0, . . . , m, can be computed recursively; using (36), (37) and (46), the corresponding recurrence relation can be easily deduced from Proposition 2. With this notation, the quadrature rule (46) takes the form (49). In the case where only the vector f of the samples of f is known, we must also approximate the non-integral part in (44), and we propose a further quadrature rule, where E 1 m,s ( f , t) denotes the error; by (49), (25) and (26), this rule in vector form is given by (52). We point out that both rules (49) and (52) are based on the same data vector f used in the rules (39) and (41). Hence our method allows the simultaneous approximation of the Hilbert transform H( f , t) and its first derivative H 1 ( f , t) for |t| < 1 by using the same samples of the function f . About the convergence of the quadrature rules F 1 m,s and H 1 m,s , the following theorem estimates the associated errors Φ 1 m,s and E 1 m,s by means of the approximation errors when f and f ′ are approximated by generalized Bernstein polynomials.

Theorem 6. Let −1 < t < 1. Then for any f ∈ C 1 such that f ′ ∈ DT, we have, with r < m and C = C(m, f , t).
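Analogously, the finite-part transform admits a Taylor-type regularization consistent with Proposition 3: subtracting the linear Taylor polynomial of f at t leaves a smooth integrand, while the subtracted moments are known in closed form, namely the finite part of ∫ dx/(x − t) 2 equals −2/(1 − t 2 ) and the principal value of ∫ dx/(x − t) equals log((1 − t)/(1 + t)). The quadrature below is again a demonstration of this decomposition, not the paper's rule (49)/(52):

```python
import numpy as np

def hadamard_finite(f, df, t, n=64):
    # H_1(f, t) = ∫ (f(x) - f(t) - f'(t)(x - t))/(x - t)^2 dx
    #             + f'(t) * log((1-t)/(1+t)) - 2 f(t)/(1 - t^2).
    # The regularized integrand is smooth for f in C^2, so Gauss-Legendre applies.
    x, w = np.polynomial.legendre.leggauss(n)
    g = (f(x) - f(t) - df(t) * (x - t)) / (x - t) ** 2
    return float(g @ w + df(t) * np.log((1.0 - t) / (1.0 + t))
                 - 2.0 * f(t) / (1.0 - t * t))
```

For f (x) = x 2 the regularized integrand equals 1 identically, so the value reduces to 2 + 2t log((1 − t)/(1 + t)) − 2t 2 /(1 − t 2 ), which makes a convenient test case.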
The same estimate can also be applied to Φ 1 m,s ( f , t), which, for functions f ∈ C 2 , satisfies also a sharper bound. Thanks to this theorem, by Theorems 1 and 2 we can easily get estimates of the quadrature errors E 1 m,s and Φ 1 m,s based on several moduli of smoothness of f and f ′ . For brevity we omit the details and only state the following result, which easily follows by using (9) and (11) in the estimates of Theorems 1 and 2, which in turn are used in Theorem 6.

Corollary 2. Let −1 < t < 1 and s ∈ N. For all functions f ∈ C k+1 , with 1 ≤ k ≤ 2s, and for sufficiently large m ∈ N, we have the following estimate. The same estimate holds for Φ 1 m,s ( f , t), which also satisfies

Numerical Details and Some Experiments
First, we recall some details given in [19] about the computation of the matrix C m,s in (21). We start from the matrix A defined in (22). It is constructed row by row by making use of the triangular scheme in (15), and thus each row requires m^2 long operations. On the other hand, since A is centrosymmetric, i.e., A = J AJ, where J is the counter-identity matrix of order m + 1 (J i,j = δ i,m−j , i, j ∈ N m 0 , δ h,k being the Kronecker delta), it is enough to compute only the first (m + 1)/2 or (m + 2)/2 rows, according as m is odd or even. Therefore, the construction of A requires about m^3/2 long operations. Furthermore, since the product of two centrosymmetric matrices can be performed in about m^3/4 long operations [29], the matrix C m,s in (21) can be constructed in about (s − 2)m^3/4 long operations, instead of (s − 2)m^3 ones, i.e., with a saving of about 75%. A more significant reduction is achieved when the parameter s = 2^p , p ∈ N, p ≥ 1. Indeed, by using ([30], (14)), the matrix C m,s can be determined by 2(log 2 s − 1) products of centrosymmetric matrices, therefore requiring about (m^3/2)(log 2 s − 1) long operations. For instance, for s = 256 this amounts to 14 such products, instead of the s − 2 = 254 needed when Equation (21) is used directly.
For any choice of m we consider different values of s. In the tables we report the approximate values of the integrals. All the computations have been performed in double machine precision (eps ≈ 2.22044e−16).
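The structural facts behind the operation counts above (A = J AJ, and the fact that centrosymmetry is preserved under matrix products, which is what makes the reduced-cost multiplication applicable when assembling C m,s ) are easy to verify numerically. The node-matrix construction below assumes the equally spaced nodes and the standard Bernstein basis on [−1, 1]:

```python
import numpy as np
from math import comb

def node_matrix(m):
    # A[i, j] = p_{m,j}(t_i) at equally spaced nodes t_i in [-1, 1]
    u = (1.0 + np.linspace(-1.0, 1.0, m + 1)) / 2.0
    return np.array([[comb(m, j) * ui**j * (1.0 - ui)**(m - j)
                      for j in range(m + 1)] for ui in u])

def is_centrosymmetric(M):
    # M is centrosymmetric iff M = J M J, with J the counter-identity
    J = np.fliplr(np.eye(M.shape[0]))
    return bool(np.allclose(M, J @ M @ J))
```

Centrosymmetry of A follows from the symmetries p m,m−j (−x) = p m,j (x) and t m−i = −t i , and since (J B J)(J C J) = J(BC)J, all the powers of I − A entering C m,s inherit it.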
Here f ∈ C ∞ and, as we can see, the performance of the quadrature rules improves when m is kept fixed and s increases. An empty cell means that there is no improvement in the computation. In particular, as Tables 1 and 2 show, machine precision is attained for m = 128 and s = 16, as well as for m = 64 and s = 32.
In this case, f ∈ Z 15/2 and, as the results in Tables 3 and 4 show, the numerical errors agree with the theoretical estimates.
Here f ∈ C ∞ . In this test (see Table 5), we want to show the performance of the quadrature rule when m is fixed and s increases, highlighting that we do get an improvement, but apparently only up to a certain threshold. This behavior will be the subject of future investigations.

Proofs
The following three lemmas will be useful in the sequel.
Lemma 1. Let f ∈ DT and P m ∈ P m , m ≥ 2. Then where r ∈ N with r < m and 0 < C = C(m, f ).
Proof. Taking into account that ω ϕ ( f , t) is a non-decreasing function of t, we have the first inequality. Then, by applying the following Stechkin-type inequality ([22], Theorem 7.2.4), and taking into account that ∑ j=n..∞ 1/j^2 ≤ C/n holds for all n ∈ N, with C = C(n), we obtain the next bound. Finally, by applying the Jackson-type inequality ([22], Theorem 7.2.1) (see also [31], Section 2.5.2), and recalling ([22], (4.1.3)), we deduce the statement, which completes the proof.

Lemma 2. For any −1 < t ≤ −1/2 and for any f such that f ′ ∈ DT, we have

Proof. Concerning A 1 , by reasoning as done in proving Proposition 3, we have that f ′ ∈ DT implies a first bound, and we obtain a convenient form of A 1 . Hence, changing the variables, for any −1 < t ≤ −1/2 we obtain an estimate and, using (56), we conclude. The statement follows by collecting this last inequality, (59) and (57).
Similarly, we can prove the following.

Lemma 3. For any 1/2 ≤ t < 1 and for any f such that f ′ ∈ DT, we have, where C = C( f , t).
Proof of Theorem 4. Assume first that −1 < t ≤ −1/2. In this case, ϕ 2 (t) ∼ (1 + t), and the statement follows from Lemma 2 for any −1 < t ≤ −1/2. Assume now 1/2 ≤ t < 1, so that ϕ 2 (t) ∼ (1 − t); by using the analogous decomposition, the statement follows from Lemma 3 for any 1/2 ≤ t < 1. Finally, suppose |t| < 1/2 and fix 1/4 < a < 1/2. In this case, ϕ(t) ∼ 1 and we consider the decomposition (62). For the first term at the right-hand side of (62) we get a uniform bound. Concerning the second addendum of (62), we proceed analogously to the estimate of A 1 (t) in Lemma 2; more precisely, by using f ′ ∈ DT and (58), and by changing the variables x = t ± (σ/2)ϕ(t) and τ = t ± (h/2)ϕ(t), we get the required estimate. Finally, as regards the third term at the right-hand side of (62), since the finite-part integral of (x − t)^{−2} over (t − a, t + a) equals −2/a, we have the last bound, and the theorem is completely proved.

Proof of Proposition 1. Start from the standard decomposition
and, taking this into account, we must prove that the principal value integral in (63) is indeed an improper integral. To this aim, let us first prove (64). Note that for any ε > 0 the corresponding truncated integral can be bounded, and, for any g ∈ AC, a standard bound holds. On the other hand, recalling that arcsin y = y + y^3/6 + (3/40) y^5 + (5/112) y^7 + (35/1152) y^9 + · · · , |y| < 1, we easily get |arcsin(u + t) − arcsin(t)|/u ≤ C = C(t, u), for |t| < 1 and 0 < u ≤ 2, and therefore the previous estimate and (3) yield the desired bound. Hence, for all |t| < 1 it follows that, under the assumption f ∈ DT, (64) holds.
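The Maclaurin coefficients of arcsin quoted in the proof can be checked directly against the library arcsine; the omitted tail of the series starts at the y^11 term:

```python
import math

def arcsin_series(y):
    # truncated Maclaurin expansion quoted in the proof:
    # arcsin y = y + y^3/6 + 3y^5/40 + 5y^7/112 + 35y^9/1152 + ...
    return y + y**3 / 6 + 3 * y**5 / 40 + 5 * y**7 / 112 + 35 * y**9 / 1152
```

For y = 0.1 the first omitted term is of order 2e−13, so the truncation agrees with math.asin to about twelve digits.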
Similarly proceeding, we can prove that and the statement follows.
Proof of Proposition 2. For 1 ≤ k ≤ m − 1, by using the recurrence relation (15) and taking into account that ∫ from −1 to 1 of p m,h (x) dx = 2/(m + 1) holds for all h ∈ N m 0 , we get the recurrence. For k = 0 we argue directly, and for k = m we proceed analogously.

Proof of Theorem 5. Applying Theorem 3, E m,s ( f , t) can be estimated, and by Theorem 1 we further obtain the first part of the statement. Moreover, by Lemma 1 we get the second bound. Regarding the quadrature error Φ m,s ( f , t), we observe a similar representation, which leads to the analogous estimate. Hence, in the case that f ∈ DT, the estimate (42) holds for Φ m,s ( f , t) as well.
Finally, if f ∈ C 1 then, by applying the mean value theorem, we get and (43) follows from Theorem 2.
Proof of Proposition 3. We start from the standard decomposition and, recalling the definitions, we note the corresponding identities. Hence, to complete the proof, it remains to show that this last principal value integral is indeed an improper integral when f ′ ∈ DT.
Hence, by means of (3), we get the required bound and, under the assumption f ′ ∈ DT, (68) follows.
Proof of Theorem 6. We start from the bound (1 − t^2) |2 R m,s f (t)/(1 − t^2)| ≤ 2 ∥R m,s f ∥. Finally, (54) follows from the Peano form of the Taylor remainder, namely