Shannon’s Sampling Theorem for Bandlimited Signals and Their Hilbert Transform, Boas-Type Formulae for Higher Order Derivatives—The Aliasing Error Involved by Their Extensions from Bandlimited to Non-Bandlimited Signals

The paper is concerned with Shannon sampling reconstruction formulae for derivatives of bandlimited signals as well as for derivatives of their Hilbert transform, and with their application to Boas-type formulae for higher order derivatives. The essential aim is to extend these results to non-bandlimited signals. A basic point is that these extensions require aliasing error terms to be added to the bandlimited reconstruction formulae. These errors will be estimated in terms of the distance functional recently introduced by the authors for the extension of basic relations, valid for bandlimited functions, to larger function spaces. This approach can be regarded as a mathematical foundation for the aliasing error analysis of many applications.


Introduction
According to Shannon's sampling theorem, a signal f ∈ B^2_σ, i.e., a bandlimited signal (see Section 2 for the exact definition), can be completely reconstructed from its samples, equidistantly spaced on the real axis R. Likewise one can reconstruct the derivatives f^{(s)}, or the Hilbert transform f~ and its derivatives f~^{(s)} = (f^{(s)})~, just in terms of samples of f. Almost immediate applications of these results are Boas-type formulae for derivatives of arbitrary order of such signals, as well as of their Hilbert transforms, used in numerical analysis for the computation of those derivatives.
The essential aim is to extend these results to non-bandlimited signals, in fact to the largest class, denoted by F^{s,2} below, for which the Fourier transform, the basic tool of this approach, can be employed effectively. A basic point is that under these extensions the exact reconstruction formulae have to be equipped with remainder (error) terms. The errors involved will be measured in terms of the distance of f from the space B^2_σ of bandlimited functions, a concept recently introduced by the authors for the extension of basic relations for Bernstein spaces B^2_σ to larger function spaces. To become familiar with the new approach, the classical Shannon sampling theorem for derivatives of B^2_σ-signals, namely Theorem 3.1 below, can be extended to the larger space F^{s,2} by adding the remainder term R^{WKS}_{s,σ} f of (6) to the expansion (5) for bandlimited signals. This remainder, the aliasing error, can be estimated as in (7); the integral on the right-hand side of (7) is the above-mentioned distance of f^{(s)} from the space B^2_σ. Its behaviour depends on the smoothness properties of f and was studied extensively in [1]. If f is bandlimited to [−σ, σ], then it vanishes, as is to be expected. Otherwise, if f ∈ F^{s,2}, the largest space to be considered in this context, then this distance tends to zero as σ → ∞. Furthermore, if one restricts the matter to certain subspaces of F^{s,2}, one can obtain refined estimates, including unusually sharp rates of approximation; see Corollaries 3.5–3.7. In particular, if f belongs to a Lipschitz or Sobolev space, then R^{WKS}_{s,σ} f decays like a negative power of σ, and if f belongs to a Hardy space, then it decays exponentially.
Similarly, to the sampling reconstruction (12) of the derivatives of the Hilbert transform of B^2_σ-signals, thus of f~^{(s)} = (Hf)^{(s)}, the remainder term (R~^{WKS}_{s,σ} f)(t) of (14) must be added in order to obtain its extension (13) to F^{s,2}. The remainder (R~^{WKS}_{s,σ} f)(t) can be estimated in the same way as R^{WKS}_{s,σ} f above. The paper's chief point is to generalize the Boas formula (16) for the first derivative to higher orders, odd orders being given by (25), even orders by (26). Thus, e.g., the derivative f^{(2s−1)}(t) is expressed in terms of the signal values f(t + π(k − 1/2)/σ), which depend, however, on t ∈ R. The proof is unusually simple: one just sets σ = π and t = 1/2 in Theorem 3.1 and applies the resulting formula to the function g(u) := f(hu + t − h/2), where h := π/σ.
For the first major result, Theorem 5.1, the extension of (25) to the larger class F^{s,2}, formula (25) has to be equipped with the aliasing error term R^{Boas}_{2s−1,σ} f of (38), which can be estimated in the same fashion as the error R^{WKS}_{s,σ} f above, and which again tends to zero as σ → ∞.
A basic inequality in the theory of functions of exponential type is Bernstein's inequality for the derivatives f^{(s)} of bandlimited f in the finite-energy norm or in L^p(R), 1 ≤ p ≤ ∞. This result is generalized in Section 6 to non-bandlimited signals, the aliasing error being (52) in the case of odd order derivatives and (53) for even ones.
The active field of Landau–Kolmogorov inequalities is handled, in our situation, in Section 7. Finally, Boas-type formulae for the Hilbert transform are the subject of Section 8.

Notation and Preliminary Results
As usual, L^p(R), 1 ≤ p < ∞, is the space of all real- or complex-valued functions f that are Lebesgue integrable to the pth power over the real axis R, endowed with the norm ||f||_{L^p(R)} := (∫_R |f(u)|^p du)^{1/p}. The Fourier transform f^ of f ∈ L^p(R), 1 ≤ p ≤ 2, is defined by

f^(v) := (1/√(2π)) ∫_R f(u) e^{−ivu} du, v ∈ R.

If f^ ∈ L^1(R), then there holds the Fourier inversion formula

f(t) = (1/√(2π)) ∫_R f^(v) e^{ivt} dv (1)

at each point t ∈ R where f is continuous; see [2, Propositions 5.1.10, 5.2.16].
For σ > 0 and 1 ≤ p ≤ ∞, let B^p_σ be the Bernstein space comprising all entire functions (hence arbitrarily often differentiable) of exponential type σ, i.e., |f(z)| ≤ ||f||_{C(R)} exp(σ|y|) for z = x + iy ∈ C, which belong to L^p(R) when restricted to the real axis R. There holds B^p_σ ⊂ B^q_σ for 1 ≤ p ≤ q ≤ ∞. According to the Paley–Wiener theorem (cf. [3, p. 103]), a signal f belongs to B^p_σ, 1 ≤ p ≤ 2, if and only if it is bandlimited to [−σ, σ], i.e., its Fourier transform vanishes outside [−σ, σ]. The same holds true for p > 2 if the Fourier transform is understood in the distributional sense. Note that a bandlimited signal cannot be simultaneously duration-limited.
The sinc function is defined by sinc(t) := sin(πt)/(πt) for t ≠ 0, sinc(0) := 1, and the rectangle function is given by rect(v) := 1 for |v| ≤ π, rect(v) := 0 otherwise. Moreover, there holds by the Fourier inversion formula (1) (cf. [2, Section 5.2.4]):

sinc^(v) = rect(v)/√(2π), i.e., sinc(t) = (1/(2π)) ∫_{−π}^{π} e^{ivt} dv. (3)
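The Fourier pair just described can be checked numerically. The following sketch (an illustration, not part of the paper; grid size and test points are ad hoc choices) verifies that the inversion integral over [−π, π] reproduces the sinc function:

```python
import numpy as np

# sinc(t) = (1/(2*pi)) * Integral_{-pi}^{pi} e^{i v t} dv; by symmetry the
# imaginary part vanishes, so it suffices to integrate cos(v*t).
v = np.linspace(-np.pi, np.pi, 200001)
dv = v[1] - v[0]

def inv_transform_of_rect(t):
    """Trapezoidal approximation of the inversion integral at the point t."""
    y = np.cos(v * t)
    return dv * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]) / (2.0 * np.pi)

for t in (0.0, 0.5, 1.3, 4.75):
    assert abs(inv_transform_of_rect(t) - np.sinc(t)) < 1e-7
```

Note that NumPy's `np.sinc` already uses the normalized convention sin(πx)/(πx) of the present paper.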

A Hierarchy of Spaces Extending Bernstein Spaces
In order to extend the Bernstein space B^p_σ to larger function spaces, we weaken the property of f being bandlimited, i.e., of f^ vanishing outside the compact interval [−σ, σ], to f^ belonging to L^1(R). This still guarantees the reconstructibility of f from its Fourier transform in terms of the inversion formula (1). To this end, we introduce the Fourier inversion classes

F^{s,2} := {f ∈ L^2(R) ∩ C(R) : v^s f^(v) ∈ L^1(R)}, s ∈ N_0.

In addition to (1), one has for f ∈ F^{s,2} that the derivative f^{(s)} exists, belongs to C(R), and has the representation

f^{(s)}(t) = (1/√(2π)) ∫_R (iv)^s f^(v) e^{ivt} dv, t ∈ R; (2)

see [2, Proposition 5.1.17].
The Fourier inversion classes are in some sense the most general spaces in which our studies can be performed. Spaces between B^2_σ and F^{s,2} are also of interest, since they will yield smaller errors in the extended formulae.
The modulus of smoothness of f ∈ L^2(R) of order r ∈ N is defined by

ω_r(f; δ; L^2(R)) := sup_{|h|≤δ} ||Δ_h^r f||_{L^2(R)}, δ > 0,

where Δ_h^r denotes the rth forward difference with step h, and the associated Lipschitz class for 0 < α ≤ r by

Lip_r(α; L^2(R)) := {f ∈ L^2(R) : ω_r(f; δ; L^2(R)) = O(δ^α), δ → 0+}.

The Sobolev space W^{r,2}(R) and the Hardy spaces H^2(S_d) for horizontal strips S_d := {z ∈ C : |Im z| < d}, d > 0, are defined as usual; these spaces form a chain of inclusions.

Here we recall some facts concerning the distance functional introduced in [1]. Let G be the vector space of all functions f : R → C having the representation

f(t) = (1/√(2π)) ∫_R φ(v) e^{ivt} dv, t ∈ R, (4)

for some φ ∈ L^1(R) ∩ L^q(R), 1 ≤ q < ∞. We define the distance of two functions f_1, f_2 ∈ G, having representation (4) with φ_1 and φ_2, respectively, by dist_q(f_1, f_2) := ||φ_1 − φ_2||_{L^q(R)}, and the distance of a function f ∈ G from the Bernstein space B^2_σ by

dist_q(f, B^2_σ) := inf { dist_q(f, g) : g ∈ G ∩ B^2_σ }.

Moreover, one has for f ∈ F^{s,2}, s ∈ N_0,

dist_q(f^{(s)}, B^2_σ) = ( ∫_{|v|≥σ} |v^s f^(v)|^q dv )^{1/q},

the infimum being attained by cutting off the Fourier transform at [−σ, σ]. Observe that for f ∈ F^{0,2} and q = 2 one has, in view of the isometry of the Fourier transform, dist_2(f, B^2_σ) = min_{g ∈ B^2_σ} ||f − g||_{L^2(R)}.

The following estimates for the distance dist_q(f^{(s)}, B^2_σ) can be found in [1]. In each of the subsequent statements, c and γ with attached indices denote positive constants that depend only on the indices but not on f and σ; they may be different at each occurrence.

Proposition 2.1. (a) Let f ∈ F^{0,2} with f^ ∈ L^q(R), 1 ≤ q < ∞, and r ∈ N. One has the derivative-free estimate in terms of the modulus of smoothness of f^. (b) Then, for 1 ≤ q < ∞, s ∈ N, and σ > 0, a corresponding estimate holds for dist_q(f^{(s)}, B^2_σ).

Extensions of Shannon's Theorem to Non-Bandlimited Signals and Their Hilbert Transforms; Aliasing Errors
Let us consider the well-known Whittaker–Kotel'nikov–Shannon sampling theorem for reconstructing a bandlimited signal and its derivatives in terms of samples of just f; see e.g., [4,5], [6, p. 13], [7, p. 59].

Theorem 3.1. Let f ∈ B^2_σ. Then, for each s ∈ N,

f^{(s)}(t) = (σ/π)^s Σ_{k∈Z} f(kπ/σ) sinc^{(s)}(σt/π − k), t ∈ R, (5)

the series converging absolutely and uniformly for t ∈ R as well as in L^2(R)-norm.
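As a numerical illustration of (5) (a sketch with ad hoc choices, not from the paper), take f(t) = sinc²(t), which belongs to B^2_{2π}, and reconstruct its first derivative from the samples f(k/2):

```python
import numpy as np

def sinc_prime(x):
    """Derivative of the normalized sinc: (cos(pi x) - sinc(x)) / x, value 0 at x = 0."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    nz = x != 0.0
    out[nz] = (np.cos(np.pi * x[nz]) - np.sinc(x[nz])) / x[nz]
    return out

sigma = 2.0 * np.pi          # f = sinc^2 is bandlimited to [-2*pi, 2*pi]
h = np.pi / sigma            # = 1/2, the Nyquist spacing
k = np.arange(-400, 401)     # truncation of the sampling series
t = 0.3

# (5) with s = 1:  f'(t) = (sigma/pi) * sum_k f(k*h) * sinc'(sigma*t/pi - k)
series = (sigma / np.pi) * np.sum(np.sinc(k * h) ** 2 * sinc_prime(sigma * t / np.pi - k))
exact = float(2.0 * np.sinc(t) * sinc_prime(t)[0])   # f' = 2 * sinc * sinc'
assert abs(series - exact) < 1e-4
```

Since f is bandlimited, the only error here is the truncation of the series; the terms decay like |k|^{−3} for this f.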
This theorem can be extended to the larger space F^{s,2} by adding a remainder or error term R^{WKS}_{s,σ} f to the expansion (5). This leads to the following extended version of Theorem 3.1.

Theorem 3.2. Let f ∈ F^{s,2} for some s ∈ N_0, and let θ_s : R → C be an associated 2π-periodic signal. Then one has an approximate sampling representation in which the expansion (5) is augmented by the remainder R^{WKS}_{s,σ} f of (6); in particular, the estimate (7) holds.

The remainder R^{WKS}_{s,σ} f is the so-called aliasing error occurring when a non-bandlimited signal is reconstructed by means of the sampling theorem; see e.g., [8].
The case s = 0 can be found already in [9] (see also [5,7,10], [6, p. 15 ff.]), where the remainder (6) was given in an equivalent form. Theorem 3.2 for arbitrary s ∈ N is contained in [11], where it was deduced as a particular case of a unified approach to various sampling representations. This general approach also covers the following two results on the reconstruction of the Hilbert transform f~ and its derivatives in terms of samples of f; see [5,11].
The Hilbert transform, or conjugate function, of f ∈ L^2(R) ∩ C(R), defined by the Cauchy principal value

f~(t) := (1/π) PV ∫_R f(u)/(t − u) du, t ∈ R,

plays an important role in electrical engineering (see e.g., [12, p. 267 ff.], [13]). For the Hilbert transform, also often called "one of the most important operators in analysis", one may consult [2, Chaps. 8, 9], [14,15]. It defines a bounded linear operator from L^2(R) into itself, and one has

(f~)^(v) = (−i sgn v) f^(v) for almost all v ∈ R. (8)

Furthermore, if f ∈ F^{s,2} for some s ∈ N_0, then by the Fourier inversion formula (1), for each t ∈ R,

f~^{(s)}(t) = (1/√(2π)) ∫_R (−i sgn v)(iv)^s f^(v) e^{ivt} dv, (9)

the latter equality holding provided f~^{(s)} ∈ L^2(R). This formula also shows that (f~)^{(s)} = (f^{(s)})~; thus differentiation and taking the Hilbert transform are commuting operations. Noting (2) and (8), we see that the Fourier transform of the Hilbert transform of the sinc function is given by (sinc~)^(v) = (−i sgn v) rect(v)/√(2π), and one easily obtains from the case s = 0 in (9) that the Hilbert transform of the sinc function is given by

sinc~(t) = (1 − cos πt)/(πt) for t ≠ 0, sinc~(0) = 0.

Moreover, one has a corresponding series representation. Since the Hilbert transform is a bounded linear operator from L^2(R) into itself which commutes with differentiation, the following sampling representation follows immediately from (5) by taking the Hilbert transform of each side.
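Since the Hilbert transform acts as the Fourier multiplier −i sgn v by (8), it can be realized discretely with the FFT. The sketch below (illustrative, not from the paper; grid size and test frequency are ad hoc) checks the textbook identity H[cos(3t)] = sin(3t) on a periodic grid:

```python
import numpy as np

N = 1024
t = 2.0 * np.pi * np.arange(N) / N          # one period of a uniform grid
f = np.cos(3.0 * t)

F = np.fft.fft(f)
freqs = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies -N/2 .. N/2-1
f_tilde = np.fft.ifft(-1j * np.sign(freqs) * F).real   # multiplier -i*sgn(v)

# Hilbert transform of cos(3t) is sin(3t)
assert np.max(np.abs(f_tilde - np.sin(3.0 * t))) < 1e-10
```

The multiplier vanishes at v = 0, which kills a possible DC component, consistent with the Hilbert transform of a constant being zero.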
Theorem 3.3. Let f ∈ B^2_σ, where σ > 0. Then, for each s ∈ N,

f~^{(s)}(t) = (σ/π)^s Σ_{k∈Z} f(kπ/σ) (sinc~)^{(s)}(σt/π − k), t ∈ R, (12)

the series converging absolutely and uniformly for t ∈ R as well as in L^2(R)-norm.
This formula enables one to compute f~^{(s)}(t) in terms of samples of f itself; for the case s = 0 see [5–7,16], and for arbitrary s see [11]. The extended version of this result reads as follows (see [11]).

Theorem 3.4. Let f ∈ F^{s,2} for some s ∈ N_0, and let η_s : R → C be an associated 2π-periodic function. Then one has the approximate sampling representation (13), with the remainder R~^{WKS}_{s,σ} f given by (14); in particular, the estimate (15) holds.

The integrals on the right-hand sides of (7) and (15) give the distance of f^{(s)} from the space B^2_σ. Its behaviour for σ → ∞ depends on the smoothness properties of f and was studied extensively in [1]; recall Section 2.1. This leads to the estimates for the remainders R^{WKS}_{s,σ} f and R~^{WKS}_{s,σ} f stated in Corollaries 3.5–3.7; the three corollaries remain true if R^{WKS}_{s,σ} f is replaced by R~^{WKS}_{s,σ} f.

Boas-type Formulae for Higher Derivatives
In [17, formula (6)] (see also [3, p. 211]), Boas established a differentiation formula that may be stated as follows.
Let f ∈ B^∞_σ, where σ > 0. Then, for h = π/σ, we have

f'(t) = (σ/π²) Σ_{k∈Z} ((−1)^{k+1}/(k − 1/2)²) f(t + (k − 1/2)h), t ∈ R. (16)

When f is a trigonometric polynomial of degree n, then f ∈ B^∞_n, and so (16) applies. In this case, by virtue of the periodicity of f, the series in (16) can be condensed to a finite sum. The resulting formula was obtained by M. Riesz in 1914 [18]. In fact, Riesz's interpolation formula (17) for trigonometric polynomials implies the classical Bernstein inequality (18). Analogously to the proof of (18), Boas' formula (16), also known as the generalized Riesz interpolation formula (as Isaac Pesenson informed us), can be used to prove the basic Bernstein inequality for functions f ∈ B^p_σ, namely

||f^{(s)}||_{L^p(R)} ≤ σ^s ||f||_{L^p(R)}, s ∈ N,

which will be treated extensively in Section 6. There exist families of differentiation formulae for higher derivatives holding in Bernstein spaces; see [6, § 3.2]. Which of them should we consider as a generalization of Boas' formula? In the applications of (16) the properties (a)–(e) are crucial; the last one reads: (e) When the sample points are arranged in increasing order, then the associated coefficients have alternating signs.
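A quick numerical check of (16) (an illustrative sketch; test signal, σ and truncation length are ad hoc choices) uses f(t) = sin(σt) ∈ B^∞_σ, for which the series can be compared with f'(t) = σ cos(σt):

```python
import numpy as np

sigma = 5.0
h = np.pi / sigma
t = 0.7

k = np.arange(-20000, 20001)
coeff = (-1.0) ** (k + 1) / (k - 0.5) ** 2        # alternating signs, O(k^-2) decay
samples = np.sin(sigma * (t + (k - 0.5) * h))     # f(t + (k - 1/2) h)
series = sigma / np.pi**2 * np.sum(coeff * samples)

# only the truncation of the series contributes to the error here
assert abs(series - sigma * np.cos(sigma * t)) < 1e-3
```

For this f the series telescopes analytically to σ cos(σt) via Σ_{k∈Z}(k − 1/2)^{−2} = π², so the assertion tests the truncation behaviour of the alternating coefficients.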
In the case of higher derivatives, a Boas-type formula should also have the properties (a) to (e). The Boas-type formulae to be established will be deduced as applications of the Whittaker–Kotel'nikov–Shannon sampling theorem for higher order derivatives (Theorem 3.1). Another approach, by contour integration methods of complex function theory, will be presented in Section 4.1.
In view of Leibniz's rule, the basic term sinc^{(s)}(t) in (5) can be written as a sum involving the shifted sines sin(πt + jπ/2). Expressing sin(πt + jπ/2) in terms of sin πt and cos πt, we can rewrite sinc^{(s)}(t) in the handier form (19), where the right-hand side is naturally extended continuously at t = 0; this can be easily obtained from (3) or the power series expansion of the sine function. Further, it follows easily from (19) that (20) holds. In view of (3) there holds the Fourier expansion (21) (cf. [2, Proposition 4.1.5]) and, in particular, for s ∈ N and x = 0, identity (22). As an application, we now come to two Boas-type formulae, one for derivatives of odd order and one for those of even order.

Theorem 4.1. Let f ∈ B^∞_σ for some σ > 0, h := π/σ, and define for s ∈ N the coefficients A_{s,k} by (23) and B_{s,k} by (24). Then there hold the representations (25) and (26).

Proof. The identities (23) and (24) follow immediately from (19), noting that sin π(k − 1/2) = −cos πk = (−1)^{k+1} and cos π(k − 1/2) = sin πk = 0. For (25) and (26) we will give two proofs. The first one applies only to B^2_σ, which seems to be the more interesting case in engineering applications, whereas the second one also covers the larger space B^∞_σ. First assume that f ∈ B^2_π, i.e., h = 1. Setting t = 1/2 in (5), we obtain, by the definition of A_{s,k}, formula (27). Now, if f ∈ B^2_σ for arbitrary σ > 0, then (25) follows by applying (27) to the function u ↦ f(hu + t − h/2), which belongs to B^2_π. For even order derivatives, one obtains from (5), for σ = π and t = 0, a counterpart of (27). The proof can now be completed along the same lines as in the case of odd order derivatives.
In order to extend (25) and (26) to f ∈ B^∞_σ, one may apply the B^2_σ-result just proved to the function t ↦ f(t(1 − ε)) sinc(εt/π), 0 < ε < 1, which belongs to B^2_σ, and then let ε → 0+. This density argument can be avoided by the following alternative proof. To this end, let f ∈ B^∞_π, and let f_1 ∈ B^2_π be the associated auxiliary function. By Leibniz's rule one obtains an expansion of f^{(s)} in terms of the derivatives of f_1; since f_1 ∈ B^2_π, we can apply (5) to the terms on the right-hand side to obtain (28). Now we have to distinguish between odd and even order derivatives. Replacing s by 2s − 1 and setting t = 1/2 in (28), we obtain, in view of (20), an intermediate identity; further, noting (22) and the definition of A_{s,k}, we end up with (29). To complete the proof for odd order derivatives, let now f ∈ B^∞_σ for arbitrary σ > 0 and apply (29) to the function u ↦ f(hu + t − h/2), where h = π/σ.
For even order derivatives one starts again with (28), replaces s by 2s and sets t = 0. In view of (21), (22) and (24), one then obtains the desired representation at t = 0. Finally, one applies this equation to the function u ↦ f(hu + t).
Representation (25) can also be found in [19, Corollary 5], where it was proved by contour integral methods.
For s = 1, (25) is the classical Boas formula (16), and (26) gives a corresponding series for f''; the case s = 2 in (25) yields an analogous formula for f'''. It is easily seen that formulae (25) and (26) both have the properties (a) to (d), but it is not immediately clear whether (e) holds. We need to know the signs of the numbers A_{s,k} and B_{s,k}. To this end, we represent these numbers by an integral whose integrand does not change sign.
Lemma 4.2. (a) For s ∈ N the numbers A_{s,k} of (23) have the representation (30); in particular, the sign property (31) holds. (b) For s ∈ N the numbers B_{s,k} of (24) have the representation (32); in particular, (33) holds.

Proof. First we note that the sum on the right-hand side of (23) is the Taylor polynomial of the cosine function of degree 2s − 2 with respect to the origin, evaluated at (2k − 1)π/2. Next we recall Taylor's formula for a function f with the remainder represented by an integral. It states that

f(x) = Σ_{j=0}^{n} (f^{(j)}(0)/j!) x^j + (1/n!) ∫_0^x (x − u)^n f^{(n+1)}(u) du; (34)

see, e.g., [20, p. 88, Theorem 6]. Applying this formula to f = cos with x = (2k − 1)π/2, we see that (23) may be rewritten as an integral. By a change of variables, followed by an integration by parts that takes (−1)^k + sin t as a primitive of cos t, (30) follows immediately.
Except for a set of measure zero, the integrand in (30) is positive on the interval of integration if the upper limit (2k − 1)π/2 of the integral is positive, and negative if that limit is negative. This shows that the integral in (30) is always positive. Hence (31) holds for s > 1, and, in view of A_{1,k} = π^{−1}(k − 1/2)^{−2}, for s = 1 as well.
Regarding (32), we note that the sum on the right-hand side of (24) is the Taylor polynomial of degree 2s − 1 of the sine function, evaluated at kπ. Using again (34) and proceeding as in the proof of (a), we arrive at (32) and (33). Now (31) and (33) show that formulae (25) and (26) also have property (e), and so they are Boas-type formulae in our sense.
One may ask why we started with f^{(2s−1)}(1/2) in the proof of (25), and with f^{(2s)}(0) in the proof of (26). If one begins with f^{(2s−1)}(0) in the case of odd order derivatives, one ends up with formulae whose coefficients behave like O(|k|^{−1}) as k → ±∞. Moreover, they are valid in B^p_σ for 1 ≤ p < ∞ only, but not in B^∞_σ. Hence they are not Boas-type formulae in our sense. For s = 1 such a formula can be found in [7, p. 60, (87)].

An Alternative Approach by Methods of Complex Analysis
Formulae (25) and (26) of Theorem 4.1 can also be derived by contour integration, without employing the sampling theorem and without requiring a limit process that leads from B^2_σ to B^∞_σ. Denote by Q(X) the positively oriented rectangle with vertices at ±X ± iX. For f ∈ B^∞_π, we first consider an associated meromorphic function. It has a simple pole or a removable singularity at z = k + 1/2 for k ∈ Z, a pole of order at most 2s at zero, and no other singularities. Noting that the subtracted sum is a truncated Taylor expansion of cos(πz) around z = 0, we can compute the residues: by a well-known formula for the residue at a pole of order at most 2s, one obtains the residue at zero, and it is easily verified that the residues at z = k + 1/2 involve the coefficients A_{s,k} given by (23). Now, for N ∈ N, the integral over Q(N) is readily seen to tend to zero, and hence, by the residue theorem, (25) follows by a transformation of the argument of f.
Analogously, one considers a companion meromorphic function, for which z = 0 is a pole of order at most 2s + 1, and whose residues at z = k involve the coefficients B_{s,k} given by (24). This time, for N ∈ N, a similar limit argument over Q(N) applies, which implies (26) by a transformation of the argument of f.

Variants
If we abandon property (e) of a Boas-type formula and admit a correction at t by the value of f or of its first derivative, we can improve upon property (d) by establishing formulae with coefficients that decay like O(k^{−3}) as k → ±∞. Such formulae are of interest in numerical applications, since the truncated series will need fewer terms to achieve a given accuracy.
The following formula (35) was already obtained in [19, Corollary 5] by methods of complex analysis; formula (36) is new.
Let f ∈ B^∞_σ for some σ > 0 and let s ∈ N. Then, in the notation of Theorem 4.1, representations (35) and (36) hold.

Proof. Obviously the function g : z ↦ (f(z) − f(0))/z belongs to B^∞_σ and, as is seen by Taylor expansion around 0, we have g(0) = f'(0). Thus, applying (25) to g at t = 0, we obtain an identity for f; evaluating this identity for f : z ↦ cos πz, for which it holds with h = 1, determines the constant term, which gives (35) by shifting the argument of f. Analogously, applying (26) to g at t = 0 and evaluating the resulting formula for the sinc function (again with h = 1), one obtains (36) by substituting the value of B_{s,0} and shifting the argument of f.
For arbitrary h > 0 and t ∈ R one applies this particular case to the function u ↦ f(t + hu).

Theorem 5.2. Let s ∈ N and f ∈ F^{2s,2}. Then f^{(2s)} exists, and for h > 0, σ := π/h, formula (26) extends to (45), where φ_{2s} is the 2π-periodic function defined by (46). In particular, (47) holds.

Proof. We proceed as in the proof of Theorem 5.1. With f_1 as defined by (41), and applying (26) to g_v : t ↦ e^{ivt}, where |v| ≤ π, we find the identities yielding (45) and (46) for h = 1 and t = 0. The general case follows again by applying this particular case to f_h : u ↦ f(t + hu). Since φ_{2s} is a 2π-periodic function, we have |φ_{2s}(v)| ≤ π^{2s} for all v ∈ R. This yields the first relation in (47); the second one is again a consequence of [1, Proposition 15].

Extended Bernstein Inequalities for Higher Order Derivatives
We now come to the matter sketched at the beginning of Section 4. The well-known Bernstein inequality states that

||f^{(s)}||_{L^p(R)} ≤ σ^s ||f||_{L^p(R)}, f ∈ B^p_σ, s ∈ N, 1 ≤ p ≤ ∞. (48)

The case s = 1 is usually proved with the help of Boas' formula (16), and the general case then by iteration; see e.g., [3, Section 11.3]. Boas' formulae for higher order derivatives enable us to prove (48) directly for arbitrary s ∈ N. Indeed, by (25) we obtain the estimate (49), involving the series of the absolute values of the coefficients. This series can be evaluated as follows. Since sin(σ·) ∈ B^∞_σ, formula (25) applies to this function; for t = 0 it yields (50), where (31) has been used in the last step. Combining this evaluation with (49), we obtain (48). For derivatives of even order one uses (26) and proceeds analogously.
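For s = 1, the evaluation of the coefficient series in the proof above reduces to the classical identity Σ_{k∈Z} (k − 1/2)^{−2} = π², which the following sketch (the truncation length is an ad hoc choice) verifies numerically:

```python
import numpy as np

K = 10**6
k = np.arange(-K, K + 1)
partial = np.sum(1.0 / (k - 0.5) ** 2)

# the two tails of the series each contribute roughly 1/K = 1e-6
assert abs(partial - np.pi**2) < 1e-5
```

This is exactly the constant that turns the alternating coefficient series of (16) into the sharp factor σ in Bernstein's inequality for s = 1.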
In this section we employ Theorems 5.1 and 5.2 to extend Bernstein's inequality for higher derivatives to non-bandlimited functions by adding an "error term". For this aim, properties (a)–(e) of a Boas-type formula, specified in Section 4, will be crucial.
Let s ∈ N, f ∈ F^{2s−1,2}, and suppose that v^{2s−1} f^(v) belongs to L^p(R) as a function of v. Then, for any σ > 0, inequality (51) holds, with R^{Boas}_{2s−1,σ} f defined by (38); furthermore, the corresponding distance estimate (52) holds.

Proof. Consider (37) as a function of t and apply ||·||_{L^p(R)} on both sides. Using the triangle inequality on the right-hand side, and noting that ||f||_{L^p(R)} does not change under a shift of the argument of f, we obtain a bound involving the series of the absolute coefficients. Inserting the evaluation (50) for this series, we obtain (51).
By an analogous proof, we deduce from Theorem 5.2 the following result for derivatives of even order.
Let s ∈ N, f ∈ F^{2s,2}, and suppose that v^{2s} f^(v) belongs to L^p(R) as a function of v. Then, for any σ > 0, the corresponding inequality holds, with R^{Boas}_{2s,σ} f defined by (45); furthermore, the corresponding distance estimate (53) holds.

It should be noted that for derivatives of odd order the bound in terms of the distance functional is bigger by a factor 4/3 than the corresponding bound for derivatives of even order. However, when p = 2, we can profit from the isometry of the Fourier transform and deduce the same bound in both cases. An obvious modification of the proof in [1, Theorem 11] leads to the following result.

Theorem 6.3. Let s ∈ N, f ∈ F^{s,2}, and suppose that v^s f^(v) ∈ L^2(R) as a function of v. Then, for any σ > 0, the corresponding common estimate holds.

Landau-Kolmogorov Inequalities
In this section we consider the case where f belongs to a Sobolev space and deduce Landau–Kolmogorov inequalities, a very popular and still active field. The proof of the following proposition is essentially contained in that of [1, Proposition 13].
Lemma 7.5. Let (x, y, z) ∈ R^3_+, C > 0 and 0 < s < t. Then (58) holds for all σ > 0 if and only if (59) holds, where α and K are given by (60).

Proof. Suppose that (58) holds. Then we may minimize the right-hand side over σ by standard calculus. This leads us to (59) with α and K given by (60). Conversely, suppose that (59) holds with K > 0 and α ∈ (0, 1). Consider now an auxiliary function F of two variables. Its Hessian shows that it is concave on R^2_+. Hence, at any point (x_0, y_0) ∈ R^2_+ the tangent plane of F lies above the graph of F, that is,

F(x, y) ≤ F(x_0, y_0) + ⟨grad F(x_0, y_0), (x − x_0, y − y_0)⟩.

Setting λ := y_0/x_0, we find by a straightforward calculation a bound of the form (58). Now, setting s := αt and defining the appropriate quantities, we see that (58) holds with C defined by (60). Since λ may take any value in (0, ∞), the same is true for σ. Hence (59) implies (58).
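The minimization over σ in the first half of the proof can be illustrated numerically. The sketch below uses the model bound F(σ) = xσ^s + yσ^{s−t} with illustrative constants (the paper's (58) may carry different constant factors) and checks that its minimum over σ equals K x^{1−α} y^{α} with α = s/t and K = t (t − s)^{α−1} s^{−α}, as obtained by standard calculus:

```python
import numpy as np

s, t_exp = 2, 5                     # orders with 0 < s < t
x, y = 1.7, 0.3                     # illustrative positive constants
alpha = s / t_exp

# brute-force minimum of F(sigma) = x*sigma**s + y*sigma**(s - t) over sigma > 0
sigma = np.logspace(-3.0, 3.0, 200001)
numeric_min = np.min(x * sigma**s + y * sigma ** (s - t_exp))

# closed form from solving F'(sigma) = 0
K = t_exp / ((t_exp - s) ** (1.0 - alpha) * s**alpha)
closed_form = K * x ** (1.0 - alpha) * y**alpha

assert abs(numeric_min - closed_form) / closed_form < 1e-4
```

This is the standard mechanism by which a two-term bound valid for all σ is converted into a multiplicative Landau–Kolmogorov-type bound.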
Lemma 7.5 can be used to deduce three Landau-Kolmogorov inequalities from (55)-(57).We may state them in a unified form as follows.
Unfortunately, the constant in (61) is not the best possible. However, the discussion in [24, pp. 442–447] does not extend to results for p ∈ (2, ∞). For p = 2, inequality (61) simplifies to (62); here the term in square brackets can be replaced by 1. For s = 1 and r = 2, this is shown in [25, § 7.9, No. 261]. For general r, s ∈ N with r > s, we may use the isometry of the L^2-Fourier transform together with Hölder's inequality and proceed accordingly.

Boas-Type Formulae for the Hilbert Transform

The derivatives of the Hilbert transform of the sinc function admit a closed form analogous to (19), where the right-hand side is naturally extended continuously at t = 0; this can be easily obtained from (11) or the power series expansion of the cosine function. As an application, we now come to two new Boas-type formulae, one for derivatives of odd order and another for those of even order.

Theorem 8.1. Let f ∈ B^2_σ for some σ > 0 and h := π/σ. Then representation (63) holds for s ∈ N, and (64) for s ∈ N_0, with the coefficients A~_{s,k} and B~_{s,k} given by (65) and (66).

Proof. The proof follows along the same lines as the first proof of Theorem 4.1, starting with the sampling representation (12) evaluated at t = 1/2 in the case of odd order derivatives, and at t = 0 for even orders.

Achieser-type Formulae
Achieser [26, p. 143, (II)] proved an instructive formula which combines the assertions for the first derivative of a signal and for that of its Hilbert transform. It may be stated for our definition of the Hilbert transform as follows: let f ∈ B^2_σ; then Achieser's identity holds with our normalization. We now establish analogous formulae for higher derivatives, distinguishing the cases of odd and even order.
Theorem 8.2. Let f ∈ B^2_σ for some σ > 0 and h := π/σ. Then, for α ∈ R and s ∈ N, the stated representations hold with explicitly given coefficients. In order to evaluate the infinite series in (76), we have to proceed in a way different from the proof of Theorem 5.1. Indeed, since the function g_v : t ↦ e^{ivt} does not belong to L^2(R), we cannot apply formula (63) to this function.
On the other hand, there holds by (65) and (11) a corresponding expansion, and the proof can now be completed as the proof of Theorem 5.1. The graphs of χ_1 and χ_3 are shown in Figures 5 and 6 below.

Theorem 8.6. Let s ∈ N and f ∈ F^{2s,2}. Then f^{(2s)} exists and, for h > 0, σ := π/h, formula (64) extends to the stated representation with remainder; here χ_{2s} is the associated 4π-periodic function.

Applications
In this section we apply the results of Sections 3 and 8 to the signal g(t) := 1/(1 + t²), t ∈ R, which has Fourier transform g^(v) = √(π/2) e^{−|v|} and Hilbert transform g~(t) = t/(1 + t²). The extended sampling theorem for the Hilbert transform (Theorem 3.4) takes on a concrete form, first for g~'. In practice, one has to deal with a finite sum rather than with the infinite series. This leads to an additional truncation error. Assuming N ≥ γσ|t| for some constant γ > 1, the terms of the truncated tail, denoted by a_k, can be estimated explicitly. Combining the aliasing error in (81) with this estimate for the truncation error, we finally obtain a fairly precise and practical estimate for the error occurring when the derivative of the Hilbert transform is reconstructed by means of the Hilbert version of the sampling theorem: the first term on the right-hand side covers the aliasing error, whereas the second one is due to truncation.
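The interplay of aliasing and truncation errors can be seen already in the simplest case s = 0 of Theorem 3.2, applied to g itself. The following sketch (parameter choices are ad hoc) reconstructs g(t) = 1/(1 + t²) from 2N + 1 samples; since g^ decays like e^{−|v|}, the aliasing error is of order e^{−σ}, and the truncation tail is negligible for this N:

```python
import numpy as np

g = lambda u: 1.0 / (1.0 + u**2)   # not bandlimited: g^(v) = sqrt(pi/2)*exp(-|v|)

sigma = 10.0
h = np.pi / sigma          # sampling step
N = 200                    # truncation index
k = np.arange(-N, N + 1)

t = 0.3
approx = np.sum(g(k * h) * np.sinc(t / h - k))   # truncated sampling series

# total error = aliasing (of order e^{-sigma}) + truncation tail
assert abs(approx - g(t)) < 1e-3
```

Increasing σ shrinks the aliasing part exponentially, while increasing N shrinks the truncation part; both must grow to drive the total error to zero, as the error splitting above indicates.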
Similarly, the Boas-type theorem for higher order derivatives (Theorem 8.5) takes a concrete form (recall (67)). The resulting remainders are the aliasing errors for the reconstruction of derivatives of the Hilbert transform in terms of the Boas-type formulae. In both cases, the truncation errors can be handled in a similar fashion as above.
Further, L^∞(R) is the space of all measurable, essentially bounded functions f with the norm ||f||_{L^∞(R)} := ess sup_{u∈R} |f(u)|. By C(R) we denote the class of all functions f : R → C that are uniformly continuous and bounded on R, endowed with the norm ||f||_{C(R)} := sup_{u∈R} |f(u)|.
(a) The formula applies to all entire functions of exponential type σ that are merely bounded on R.
(b) The sample points are uniformly spaced according to the Nyquist rate and are located relative to the argument t of the derivative.
(c) The coefficients do not depend on t.
(d) The coefficients decay like O(k^{−2}) as k → ±∞.