Article

Shannon’s Sampling Theorem for Bandlimited Signals and Their Hilbert Transform, Boas-Type Formulae for Higher Order Derivatives—The Aliasing Error Involved by Their Extensions from Bandlimited to Non-Bandlimited Signals †

1 Lehrstuhl A für Mathematik, RWTH Aachen University, 52056 Aachen, Germany
2 Department of Mathematics, University of Erlangen-Nuernberg, 91058 Erlangen, Germany
* Author to whom correspondence should be addressed.
Dedicated to Karl Willy Wagner (1883–1953). A man of principles, pioneer of the theory of electronic filters.
Entropy 2012, 14(11), 2192-2226; https://doi.org/10.3390/e14112192
Received: 29 August 2012 / Revised: 6 October 2012 / Accepted: 8 October 2012 / Published: 5 November 2012
(This article belongs to the Special Issue Information Theory Applied to Communications and Networking)

Abstract

The paper is concerned with Shannon sampling reconstruction formulae of derivatives of bandlimited signals as well as of derivatives of their Hilbert transform, and their application to Boas-type formulae for higher order derivatives. The essential aim is to extend these results to non-bandlimited signals. Basic is the fact that by these extensions aliasing error terms must now be added to the bandlimited reconstruction formulae. These errors will be estimated in terms of the distance functional just introduced by the authors for the extensions of basic relations valid for bandlimited functions to larger function spaces. This approach can be regarded as a mathematical foundation of aliasing error analysis of many applications.

1. Introduction

According to Shannon's sampling theorem, a signal $f \in B_\sigma^2$, i.e., a bandlimited signal (see Section 2 for the exact definition), can be completely reconstructed from its samples, equidistantly spaced along the real axis $\mathbb{R}$. Likewise one can reconstruct the derivatives $f^{(s)}$ or the Hilbert transform $\tilde f$ and its derivatives $\tilde f^{(s)} = \widetilde{f^{(s)}}$ just in terms of samples of f. Almost immediate applications of these results are Boas-type formulae for derivatives of arbitrary order of such signals as well as of their Hilbert transforms, used in numerical analysis for the computation of those derivatives.
The essential aim is to extend these results to non-bandlimited signals, in fact to the largest class, denoted by $\mathcal{F}^{s,2}$ below, for which the Fourier transform, the basic tool of this approach, can be employed effectively. Basic is the fact that under these extensions the exact reconstruction formulae have to be equipped with remainder (error) terms. The errors involved will be measured in terms of the distance of f from the space $B_\sigma^2$ of bandlimited functions, a concept just introduced by the authors for the extension of basic relations for Bernstein spaces $B_\sigma^2$ to larger function spaces.
To become familiar with the new approach, the classical Shannon sampling theorem for derivatives of $B_\sigma^2$-signals, namely Theorem 3.1 below, can be extended to the larger space $\mathcal{F}^{s,2}$ by adding the remainder term $R_{s,\sigma}^{\mathrm{WKS}}f$ of (6) to the expansion (5) for bandlimited signals. This remainder, the aliasing error, can be estimated (cf. (7)) by:
$$\bigl|(R_{s,\sigma}^{\mathrm{WKS}} f)(t)\bigr| \le \sqrt{\frac{2}{\pi}} \int_{|v|\ge\sigma} \bigl|v^s \hat f(v)\bigr|\, dv = \sqrt{\frac{2}{\pi}}\,\operatorname{dist}_1\bigl(f^{(s)}, B_\sigma^2\bigr)$$
The integral on the right-hand side is the above-mentioned distance of $f^{(s)}$ from the space $B_\sigma^2$. Its behaviour depends on the smoothness properties of f, and was extensively studied in . If f is bandlimited to $[-\sigma,\sigma]$, then it vanishes, as is to be expected. Otherwise, if $f \in \mathcal{F}^{s,2}$, the largest space to be considered in this context, then this distance tends to zero for $\sigma \to \infty$. Furthermore, if one restricts the matter to certain subspaces of $\mathcal{F}^{s,2}$, one can obtain refined estimates, including unusually sharp rates of approximation; see Corollaries 3.5–3.7. In particular, if f belongs to a Lipschitz or Sobolev space, then $R_{s,\sigma}^{\mathrm{WKS}}f$ decays like a negative power of σ, and if f belongs to a Hardy space, then it decays exponentially.
Similarly, to the sampling reconstruction of the derivatives of the Hilbert transform of $B_\sigma^2$-signals, thus of $(d/dt)^s \tilde f = H\,(d/dt)^s f$, namely (12), the remainder term $(\tilde R_{s,\sigma}^{\mathrm{WKS}} f)(t)$ of (14) must be added in order to obtain its extension (13) to $\mathcal{F}^{s,2}$. The remainder $(\tilde R_{s,\sigma}^{\mathrm{WKS}} f)(t)$ can be estimated in the same way as $R_{s,\sigma}^{\mathrm{WKS}}f$ above.
The paper's chief point is to generalize the Boas formula for the first derivative, namely (16), to higher order ones, odd orders being given by (25), even orders by (26). Thus, e.g., the derivative $f^{(2s-1)}(t)$ is expressed in terms of the signal values $f(t+\pi(k-1/2)/\sigma)$, which however depend on $t \in \mathbb{R}$. The proof is unusually simple: one just sets $\sigma=\pi$ and $t=1/2$ in Theorem 3.1 and applies the resulting formula to the function $g(u) := f(u\pi/\sigma + t - \pi/(2\sigma))$.
For the first major result, Theorem 5.1, the extension of (25) to the larger class $\mathcal{F}^{s,2}$, formula (25) has to be equipped with the aliasing error term $R_{2s-1,\sigma}^{\mathrm{Boas}}f$ of (38), which can be estimated in the same fashion as the error $R_{s,\sigma}^{\mathrm{WKS}}f$ above, and which again tends to zero for $\sigma\to\infty$.
A basic inequality in the theory of functions of exponential type is Bernstein's inequality for the derivatives $f^{(s)}$ of bandlimited f in the finite-energy norm or in $L^p(\mathbb{R})$, $1\le p\le\infty$. This result is generalized in Section 6 to non-bandlimited signals, the aliasing error being (52) in the case of odd order derivatives and (53) for even ones.
The active field of Landau–Kolmogorov inequalities in our situation is handled in Section 7. Finally, Boas-type formulae for the Hilbert transform are left to Section 8.

2. Notation and Preliminary Results

As usual, $L^p(\mathbb{R})$ is the space of all real- or complex-valued functions f that are Lebesgue integrable to the pth power over the real axis $\mathbb{R}$, endowed with the norm $\|f\|_{L^p(\mathbb{R})} := \bigl(\int_{\mathbb{R}} |f(u)|^p\, du\bigr)^{1/p}$, $1\le p<\infty$, and $L^\infty(\mathbb{R})$ is the space of all measurable, essentially bounded functions f with the norm $\|f\|_{L^\infty(\mathbb{R})} := \operatorname{ess\,sup}_{u\in\mathbb{R}}|f(u)|$. By $C(\mathbb{R})$ we denote the class of all functions $f:\mathbb{R}\to\mathbb{C}$ that are uniformly continuous and bounded on $\mathbb{R}$, endowed with the norm $\|f\|_{C(\mathbb{R})} := \sup_{u\in\mathbb{R}}|f(u)|$.
The Fourier transform $f ^$ of $f ∈ L p ( R )$, $1 ≤ p ≤ 2$, is defined by:
$$\hat f(v) := \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} f(u)\,e^{-ivu}\,du \quad (p=1); \qquad \lim_{R\to\infty}\Bigl\|\hat f(v) - \frac{1}{\sqrt{2\pi}}\int_{-R}^{R} f(u)\,e^{-ivu}\,du\Bigr\|_{L^{p'}(\mathbb{R})} = 0 \quad (1<p\le2)$$
where $1 / p + 1 / p ′ = 1$. If $f ∈ L p ( R )$, $1 ≤ p ≤ 2$, is such that $f ^ ∈ L 1 ( R )$, then there holds the Fourier inversion formula:
$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \hat f(v)\, e^{ivt}\, dv$$
at each point $t ∈ R$ where f is continuous; see [2, Proposition 5.1.10, 5.2.16].
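As a quick numerical sanity check of this convention (the factor $1/\sqrt{2\pi}$ makes the transform unitary), the sketch below, assuming NumPy, approximates $\hat f$ for the Gaussian $f(u)=e^{-u^2/2}$, which under this normalization is a fixed point of the transform:

```python
import numpy as np

# Riemann-sum approximation of the unitary Fourier transform
#   f^(v) = (2*pi)^(-1/2) * Integral f(u) e^{-i v u} du,
# checked on the Gaussian f(u) = exp(-u^2/2), whose transform is exp(-v^2/2).
u = np.linspace(-30.0, 30.0, 600001)   # wide grid; the Gaussian is negligible outside
du = u[1] - u[0]
f = np.exp(-u**2 / 2)

for v in (0.0, 0.5, 1.7):
    fhat = du * np.sum(f * np.exp(-1j * v * u)) / np.sqrt(2 * np.pi)
    assert abs(fhat - np.exp(-v**2 / 2)) < 1e-8
```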
For $\sigma>0$ and $1\le p\le\infty$, let $B_\sigma^p$ be the Bernstein space comprising all entire functions (thus arbitrarily often differentiable) of exponential type σ (i.e., $|f(z)| \le \|f\|_{C(\mathbb{R})} \exp(\sigma|y|)$ for $z = x+iy \in \mathbb{C}$), which belong to $L^p(\mathbb{R})$ when restricted to the real axis $\mathbb{R}$. There holds:
$$B_\sigma^1 \subset B_\sigma^{p_1} \subset B_\sigma^{p_2} \subset B_\sigma^\infty \qquad (1\le p_1\le p_2\le\infty)$$
According to the Paley–Wiener theorem (cf. [3, p. 103]), a signal f belongs to $B_\sigma^p$, $1\le p\le2$, if and only if it is bandlimited to $[-\sigma,\sigma]$, i.e., its Fourier transform vanishes outside $[-\sigma,\sigma]$. The same holds true for $p>2$ if the Fourier transform is understood in the distributional sense. Note that a bandlimited signal cannot simultaneously be duration limited.
The sinc function is defined by:
$$\operatorname{sinc} z := \begin{cases}\dfrac{\sin(\pi z)}{\pi z}, & z\in\mathbb{C}\setminus\{0\},\\ 1, & z=0,\end{cases} \qquad \widehat{\operatorname{sinc}}(v) = \frac{1}{\sqrt{2\pi}}\operatorname{rect}(v) \quad (v\in\mathbb{R})$$
where the rectangle function is given by:
$$\operatorname{rect}(v) := \begin{cases}1, & |v|<\pi,\\ \tfrac12, & |v|=\pi,\\ 0, & |v|>\pi.\end{cases}$$
Moreover, there holds by the Fourier inversion formula (1) (cf. [2, Section 5.2.4]):
$$\operatorname{sinc}^{(s)}(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \frac{\operatorname{rect}(v)}{\sqrt{2\pi}}\,(iv)^s\, e^{ivt}\, dv = \frac{1}{2\pi}\int_{-\pi}^{\pi} (iv)^s\, e^{ivt}\, dv \qquad (t\in\mathbb{R};\ s\in\mathbb{N}_0)$$
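The representation (3) can be verified numerically; the following sketch (assuming NumPy; the test point $t=0.7$ and order $s=1$ are our own choices) compares a midpoint quadrature of the integral with the closed form $\operatorname{sinc}'(t) = (\pi t\cos\pi t-\sin\pi t)/(\pi t^2)$ obtained by direct differentiation:

```python
import numpy as np

# sinc^{(s)}(t) = (1/(2*pi)) * Integral_{-pi}^{pi} (iv)^s e^{ivt} dv, case s = 1.
t = 0.7
n = 200000
v = (np.arange(n) + 0.5) * (2 * np.pi / n) - np.pi   # midpoint grid on (-pi, pi)
dv = 2 * np.pi / n
integral = dv * np.sum((1j * v) * np.exp(1j * v * t)) / (2 * np.pi)
closed = (np.pi * t * np.cos(np.pi * t) - np.sin(np.pi * t)) / (np.pi * t**2)
assert abs(integral.imag) < 1e-12   # imaginary part cancels on the symmetric grid
assert abs(integral.real - closed) < 1e-8
```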

2.1. A Hierarchy of Spaces Extending Bernstein Spaces

In order to extend the Bernstein space $B_\sigma^p$ to larger function spaces, we weaken the property of f being bandlimited ($\hat f$ vanishes outside the compact interval $[-\sigma,\sigma]$) to $\hat f$ belonging to $L^1(\mathbb{R})$. This still guarantees the reconstructibility of f from its Fourier transform by means of the inversion formula (1). To this end, we introduce the Fourier inversion classes:
$$\mathcal{F}^{s,2} := \bigl\{ f \in L^2(\mathbb{R}) \cap C(\mathbb{R}) : v^s \hat f(v) \in L^1(\mathbb{R}) \bigr\} \qquad (s\in\mathbb{N}_0)$$
For $0\le s_1\le s_2$, there holds $\mathcal{F}^{s_2,2}\subset\mathcal{F}^{s_1,2}\subset\mathcal{F}^{0,2}$. In addition to (1), one has for $f\in\mathcal{F}^{s,2}$ that the derivative $f^{(s)}$ exists, belongs to $C(\mathbb{R})$ and has the representation:
$$f^{(s)}(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} (iv)^s \hat f(v)\, e^{ivt}\, dv \qquad (t\in\mathbb{R})$$
see [2, Proposition 5.1.17 with $f$ replaced by $f ^$].
The Fourier inversion classes are in some sense the most general spaces in which our studies can be performed. Spaces between $B σ 2$ and $F s , 2$ are also of interest since they will yield smaller errors in the extended formulae.
The modulus of smoothness of $f ∈ L 2 ( R )$ of order $r ∈ N$ is defined by:
$$\omega_r\bigl(f,\delta,L^2(\mathbb{R})\bigr) := \sup_{|h|\le\delta}\biggl\{\int_{-\infty}^{\infty}\Bigl|\sum_{j=0}^{r}(-1)^{r-j}\binom{r}{j} f(u+jh)\Bigr|^2\, du\biggr\}^{1/2} \qquad (\delta>0)$$
and the associated Lipschitz class for $0 < α ≤ r$ by:
$$\operatorname{Lip}_r\bigl(\alpha, L^2(\mathbb{R})\bigr) := \bigl\{ f\in L^2(\mathbb{R}) : \omega_r\bigl(f,\delta,L^2(\mathbb{R})\bigr) = O(\delta^\alpha),\ \delta\to0+ \bigr\}$$
The Sobolev space is given by:
$$W^{s,2}(\mathbb{R}) := \bigl\{ f\in L^2(\mathbb{R}) : v^s\hat f(v) \in L^2(\mathbb{R}) \bigr\} \qquad (s\in\mathbb{N}_0)$$
and Hardy spaces for horizontal strips $S_d := \{z\in\mathbb{C} : |\Im z|<d\}$, $d>0$, by:
$$H^2(S_d) := \bigl\{ f : f \text{ analytic on } S_d,\ \|f\|_{H^2(S_d)} < \infty \bigr\}, \qquad \|f\|_{H^2(S_d)} := \sup_{0<y<d}\biggl\{\int_{\mathbb{R}}\frac{|f(t-iy)|^2+|f(t+iy)|^2}{2}\,dt\biggr\}^{1/2}$$
There hold the inclusions:
$$B_\sigma^2 \subset H^2(S_d) \subset W^{s,2}(\mathbb{R})\cap C(\mathbb{R}) \subset \mathcal{F}^{s-1,2} \subset \mathcal{F}^{0,2} \subset L^2(\mathbb{R}) \qquad (d>0;\ s\in\mathbb{N})$$
Here we recall some facts concerning the distance functional introduced in . Let G be the vector space of all functions $f : R ⟶ C$ having the representation:
$$f(t) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} \phi(v)\, e^{ivt}\, dv$$
for some $\phi \in L^1(\mathbb{R}) \cap L^q(\mathbb{R})$, $1 \le q < \infty$. We define the distance of two functions $f_1, f_2 \in G$ having representation (4) with $\phi_1$ and $\phi_2$, respectively, by:
$$\operatorname{dist}_q(f_1,f_2) := \|\phi_1-\phi_2\|_{L^q(\mathbb{R})} = \biggl\{\int_{\mathbb{R}} |\phi_1(v)-\phi_2(v)|^q\, dv\biggr\}^{1/q}$$
and the distance of a function $f\in G$ from the Bernstein space $B_\sigma^2$ by:
$$\operatorname{dist}_q(f, B_\sigma^2) := \inf_{g\in B_\sigma^2} \operatorname{dist}_q(f,g)$$
If $f\in\mathcal{F}^{s,2}$, $s\in\mathbb{N}_0$, then the derivative $f^{(s)}$ belongs to G with $\phi(v) = (iv)^s\hat f(v)$. Hence one has for $f_1, f_2\in\mathcal{F}^{s,2}$ with $(iv)^s\hat f_n(v)\in L^q(\mathbb{R})$, $n=1,2$,
$$\operatorname{dist}_q\bigl(f_1^{(s)}, f_2^{(s)}\bigr) = \bigl\| v^s\bigl(\hat f_1 - \hat f_2\bigr)\bigr\|_{L^q(\mathbb{R})}$$
Moreover, one has for $f ∈ F s , 2$, $s ∈ N 0$,
$$\operatorname{dist}_q\bigl(f^{(s)}, B_\sigma^2\bigr) = \biggl\{\int_{|v|\ge\sigma} \bigl|v^s\hat f(v)\bigr|^q\, dv\biggr\}^{1/q}$$
Observe that for $f_1, f_2 \in \mathcal{F}^{0,2}$ and $q=2$ one has, in view of the isometry of the Fourier transform, that $\operatorname{dist}_2(f_1,f_2) = \|f_1-f_2\|_{L^2(\mathbb{R})}$, i.e., $\operatorname{dist}_2$ is the Euclidean distance.
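For a concrete feel for the distance functional, here is a worked example of our own choosing (not from the paper): for $f(t)=e^{-|t|}$ one has $\hat f(v) = \sqrt{2/\pi}/(1+v^2)$, so $\operatorname{dist}_1(f,B_\sigma^2)$ has the closed form $2\sqrt{2/\pi}\,(\pi/2-\arctan\sigma) = O(\sigma^{-1})$. The sketch below, assuming NumPy, confirms this by quadrature:

```python
import numpy as np

# dist_1(f, B_sigma^2) = Integral_{|v| >= sigma} |f^(v)| dv for f(t) = exp(-|t|),
# whose (unitary) Fourier transform is f^(v) = sqrt(2/pi)/(1 + v^2).
# The substitution v = sigma/x maps (sigma, infinity) onto (0, 1).
n = 200000
x = (np.arange(n) + 0.5) / n                     # midpoint grid on (0, 1)
for sigma in (1.0, 10.0, 100.0):
    tail = np.sum(sigma / (x**2 + sigma**2)) / n  # = Integral_sigma^inf dv/(1+v^2)
    dist1 = 2 * np.sqrt(2/np.pi) * tail
    exact = 2 * np.sqrt(2/np.pi) * (np.pi/2 - np.arctan(sigma))
    assert abs(dist1 - exact) < 1e-8 * exact + 1e-12
```

The $O(\sigma^{-1})$ decay visible here matches the general rate results of Proposition 2.1 for functions of this (limited) smoothness.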
The following estimates for the distance $\operatorname{dist}_q(f^{(s)}, B_\sigma^2)$ can be found in . In each of the subsequent statements, c and γ with attached indices denote positive constants that depend only on the indices but not on f and σ. They may differ at each occurrence.
Proposition 2.1.
(a) Let $f\in\mathcal{F}^{0,2}$ with $\hat f\in L^q(\mathbb{R})$, $1\le q<\infty$, and $r\in\mathbb{N}$. One has the derivative-free estimate:
$$\operatorname{dist}_q(f, B_\sigma^2) \le c_{r,q}\biggl\{\int_\sigma^\infty v^{-q/2}\,\omega_r\bigl(f, v^{-1}, L^2(\mathbb{R})\bigr)^q\, dv\biggr\}^{1/q}$$
If also $f\in\operatorname{Lip}_r(\alpha,L^2(\mathbb{R}))$, $1/q-1/2<\alpha\le r$, then:
$$\operatorname{dist}_q(f,B_\sigma^2) = O\bigl(\sigma^{-\alpha-1/2+1/q}\bigr) \qquad (\sigma\to\infty)$$
If $f^{(s)}\in\operatorname{Lip}_1(\alpha,L^2(\mathbb{R}))$, $s\in\mathbb{N}$, $0<\alpha\le1$, then:
$$\operatorname{dist}_q\bigl(f^{(s)},B_\sigma^2\bigr) = O\bigl(\sigma^{-\alpha-s-1/2+1/q}\bigr) \qquad (\sigma\to\infty)$$
(b) If $f\in\mathcal{F}^{s,2}$ and $v^s\hat f(v)\in L^q(\mathbb{R})$, $1\le q<\infty$, $s\in\mathbb{N}_0$, then for each $r\in\mathbb{N}$,
$$\operatorname{dist}_q\bigl(f^{(s)},B_\sigma^2\bigr) \le c_{s,r,q}\biggl\{\int_\sigma^\infty v^{-q/2}\,\omega_r\bigl(f^{(s)},v^{-1},L^2(\mathbb{R})\bigr)^q\, dv\biggr\}^{1/q} = O\bigl(\sigma^{-\alpha-1/2+1/q}\bigr) \qquad (\sigma\to\infty)$$
the latter holding provided $f^{(s)}\in\operatorname{Lip}_r(\alpha,L^2(\mathbb{R}))$, $1/q-1/2<\alpha\le r$.
Proposition 2.2.
Let $f\in W^{r,2}(\mathbb{R})\cap C(\mathbb{R})$. Then, for $1\le q<\infty$, $s\in\mathbb{N}$, and $\sigma>0$,
$$\operatorname{dist}_q(f,B_\sigma^2) \le c_{r,q}\,\sigma^{-r-1/2+1/q}\,\bigl\|f^{(r)}\bigr\|_{L^2(\mathbb{R})}, \qquad \operatorname{dist}_q\bigl(f^{(s)},B_\sigma^2\bigr) \le c_{r-s,q}\,\sigma^{-r-1/2+s+1/q}\,\bigl\|f^{(r)}\bigr\|_{L^2(\mathbb{R})} \quad (r > s+1/q-1/2)$$
Proposition 2.3.
Let $f\in H^2(S_d)$. Then, for $1\le q<\infty$, $s\in\mathbb{N}$,
$$\operatorname{dist}_q(f,B_\sigma^2) \le \gamma_{d,q}\, e^{-d\sigma}\,\|f\|_{H^2(S_d)} \quad (\sigma>0), \qquad \operatorname{dist}_q\bigl(f^{(s)},B_\sigma^2\bigr) \le \gamma_{d,q,s}\,\sigma^s e^{-d\sigma}\,\|f\|_{H^2(S_d)} \quad (\sigma\ge s/d)$$

3. Extensions of Shannon’s Theorem to Non-Bandlimited Signals and Their Hilbert Transforms; Aliasing Errors

Let us consider the well-known Whittaker–Kotel'nikov–Shannon sampling theorem for reconstructing a bandlimited signal f and its derivatives in terms of samples of just f itself (see e.g., [4,5], [6, p. 13], [7, p. 59]).
Theorem 3.1.
Let $f\in B_\sigma^2$ for some $\sigma>0$. Then, for each $s\in\mathbb{N}$,
$$f^{(s)}(t) = \sum_{k=-\infty}^{\infty} f\Bigl(\frac{k\pi}{\sigma}\Bigr)\Bigl(\frac{d}{dt}\Bigr)^{s}\operatorname{sinc}\Bigl(\frac{\sigma t}{\pi}-k\Bigr) \qquad (t\in\mathbb{R})$$
the series converging absolutely and uniformly for $t\in\mathbb{R}$ as well as in the $L^2(\mathbb{R})$-norm.
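Theorem 3.1 can be checked numerically. In the sketch below (assuming NumPy; the choice $f(t)=\operatorname{sinc}^2 t$, which is bandlimited to $[-2\pi,2\pi]$, is our own), the derivative sampling series with $s=1$ and $\sigma=2\pi$ is compared with $f'(t) = 2\operatorname{sinc}(t)\operatorname{sinc}'(t)$:

```python
import numpy as np

def sinc_prime(x):
    # derivative of sinc(x) = sin(pi x)/(pi x); equals 0 at x = 0
    x = np.asarray(x, dtype=float)
    xs = np.where(x == 0, 1.0, x)
    val = (np.pi*xs*np.cos(np.pi*xs) - np.sin(np.pi*xs)) / (np.pi*xs**2)
    return np.where(x == 0, 0.0, val)

# f(t) = sinc(t)^2 is bandlimited to [-2*pi, 2*pi]; with sigma = 2*pi the sample
# spacing is pi/sigma = 1/2, and Theorem 3.1 for s = 1 reads
#   f'(t) = sum_k f(k*pi/sigma) * (sigma/pi) * sinc'(sigma*t/pi - k).
sigma, t = 2*np.pi, 0.3
k = np.arange(-3000, 3001)
series = np.sum(np.sinc(k*np.pi/sigma)**2 * (sigma/np.pi) * sinc_prime(sigma*t/np.pi - k))
exact = 2*np.sinc(t)*float(sinc_prime(t))
assert abs(series - exact) < 1e-6
```

Note that `np.sinc` already uses the normalized convention $\sin(\pi x)/(\pi x)$ of this paper.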
This theorem can be extended to the larger space $\mathcal{F}^{s,2}$ by adding a remainder or error term $R_{s,\sigma}^{\mathrm{WKS}}f$ to the expansion (5). This leads to the following extended version of Theorem 3.1.
Theorem 3.2.
Let $f\in\mathcal{F}^{s,2}$ for some $s\in\mathbb{N}_0$, and let $\theta_s$ be the function, $2\pi$-periodic in its second variable, defined for $t\in\mathbb{R}$ by:
$$\theta_s(t,v) := v^s e^{-itv} \qquad (-\pi < v \le \pi)$$
Then one has the approximate sampling representation:
$$f^{(s)}(t) = \sum_{k=-\infty}^{\infty} f\Bigl(\frac{k\pi}{\sigma}\Bigr)\Bigl(\frac{d}{dt}\Bigr)^{s}\operatorname{sinc}\Bigl(\frac{\sigma t}{\pi}-k\Bigr) + \bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) \qquad (t\in\mathbb{R})$$
with the remainder $R_{s,\sigma}^{\mathrm{WKS}}f$ given by:
$$\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) := \frac{i^s}{\sqrt{2\pi}} \int_{|v|\ge\sigma} \hat f(v)\Bigl[v^s e^{ivt} - \Bigl(\frac{\sigma}{\pi}\Bigr)^{s}\theta_s\Bigl(-\frac{\sigma t}{\pi}, \frac{\pi v}{\sigma}\Bigr)\Bigr]\, dv \qquad (t\in\mathbb{R})$$
In particular, there holds:
$$\bigl|\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t)\bigr| \le \sqrt{\frac{2}{\pi}} \int_{|v|\ge\sigma} \bigl|v^s\hat f(v)\bigr|\, dv = \sqrt{\frac{2}{\pi}}\,\operatorname{dist}_1\bigl(f^{(s)},B_\sigma^2\bigr) = o(1) \qquad (\sigma\to\infty)$$
The remainder $R_{s,\sigma}^{\mathrm{WKS}}f$ is the so-called aliasing error occurring when a non-bandlimited signal is reconstructed by means of the sampling theorem; see e.g., .
The case $s=0$ can be found already in  (see also [5,7,10], [6, p. 15 ff.]), where the remainder (6) was given in the equivalent form:
$$\bigl(R_{0,\sigma}^{\mathrm{WKS}}f\bigr)(t) := \frac{1}{\sqrt{2\pi}}\sum_{k=-\infty}^{\infty}\bigl(1-e^{-i2k\sigma t}\bigr)\int_{(2k-1)\sigma}^{(2k+1)\sigma} \hat f(v)\, e^{ivt}\, dv \qquad (t\in\mathbb{R})$$
Theorem 3.2 for arbitrary $s\in\mathbb{N}$ is contained in , where it was deduced as a particular case of a unified approach to various sampling representations. This general approach also covers the following two results on the reconstruction of the Hilbert transform $\tilde f$ and its derivatives in terms of samples of f; see [5,11].
The Hilbert transform or conjugate function of $f ∈ L 2 ( R ) ∩ C ( R )$, defined by the Cauchy principal value:
$$\tilde f(t) := \lim_{\delta\to0+} \frac{1}{\pi}\int_{|u|>\delta} \frac{f(t-u)}{u}\, du = \mathrm{PV}\,\frac{1}{\pi}\int_{-\infty}^{\infty} \frac{f(t-u)}{u}\, du$$
plays an important role in electrical engineering (see [12, p. 267 ff.], ). For the Hilbert transform, also often called “one of the most important operators in analysis”, one may consult [2, Chap. 8, 9], [14,15]. It defines a bounded linear operator from $L 2 ( R )$ into itself, and one has:
$$\widehat{\tilde f}(v) = (-i\operatorname{sgn} v)\,\hat f(v) \quad \text{a.e.}$$
Furthermore, if $f\in\mathcal{F}^{s,2}$ for some $s\in\mathbb{N}_0$, then by the Fourier inversion formula (1), for each $t\in\mathbb{R}$,
$$\tilde f^{(s)}(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat f(v)\,(-i\operatorname{sgn} v)(iv)^s\, e^{ivt}\, dv = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \widehat{f^{(s)}}(v)\,(-i\operatorname{sgn} v)\, e^{ivt}\, dv$$
the latter equality holding provided $f^{(s)}\in L^2(\mathbb{R})$. This formula also shows that $\tilde f^{(s)} = \widetilde{f^{(s)}}$; thus differentiation and the Hilbert transform are commuting operations.
Noting (2) and (8), we see that the Fourier transform of the Hilbert transform of the sinc-function is given by:
$$\widehat{\widetilde{\operatorname{sinc}}}(v) = \frac{1}{\sqrt{2\pi}}\,(-i\operatorname{sgn} v)\operatorname{rect}(v) \quad \text{a.e.}$$
and one easily obtains from the case $s = 0$ in (9) that the Hilbert transform of the sinc-function is given by:
$$\widetilde{\operatorname{sinc}}(t) = \begin{cases}\dfrac{1-\cos\pi t}{\pi t} = \dfrac{\sin^2(\pi t/2)}{\pi t/2}, & t\in\mathbb{R}\setminus\{0\},\\ 0, & t=0.\end{cases}$$
Moreover, one has the representation:
$$\widetilde{\operatorname{sinc}}^{(s)}(t) = \frac{1}{2\pi}\int_{-\pi}^{\pi}(-i\operatorname{sgn} v)(iv)^s\, e^{ivt}\, dv \qquad (t\in\mathbb{R};\ s\in\mathbb{N}_0)$$
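The case $s=0$ of this representation must recover $\widetilde{\operatorname{sinc}}$ itself; the following sketch (assuming NumPy; the test points are our own choices) checks this by midpoint quadrature:

```python
import numpy as np

# sinc~(t) = (1/(2*pi)) * Integral_{-pi}^{pi} (-i*sgn(v)) e^{ivt} dv
#          = (1 - cos(pi*t))/(pi*t), checked at a few points t != 0.
n = 200000
v = (np.arange(n) + 0.5) * (2*np.pi/n) - np.pi   # midpoints of (-pi, pi)
dv = 2*np.pi/n
for t in (0.35, 1.0, 2.6):
    q = dv * np.sum(-1j*np.sign(v)*np.exp(1j*v*t)) / (2*np.pi)
    closed = (1 - np.cos(np.pi*t)) / (np.pi*t)
    assert abs(q.imag) < 1e-12          # imaginary part cancels by symmetry
    assert abs(q.real - closed) < 1e-8
```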
Since the Hilbert transform is a bounded linear operator from $L 2 ( R )$ into itself, which commutes with differentiation, the following sampling representation follows immediately from (5) by taking the Hilbert transform of each side.
Theorem 3.3.
Let $f\in B_\sigma^2$, where $\sigma>0$. Then, for each $s\in\mathbb{N}$,
$$\tilde f^{(s)}(t) = \sum_{k=-\infty}^{\infty} f\Bigl(\frac{k\pi}{\sigma}\Bigr)\Bigl(\frac{d}{dt}\Bigr)^{s}\widetilde{\operatorname{sinc}}\Bigl(\frac{\sigma t}{\pi}-k\Bigr) \qquad (t\in\mathbb{R})$$
the series converging absolutely and uniformly for $t\in\mathbb{R}$ as well as in the $L^2(\mathbb{R})$-norm.
This formula enables one to compute $\tilde f^{(s)}(t)$ in terms of samples of f itself; for the case $s=0$ see [5,6,7,16], and for arbitrary s see . The extended version of this result reads (see ):
Theorem 3.4.
Let $f\in\mathcal{F}^{s,2}$ for some $s\in\mathbb{N}_0$, and let $\eta_s$ be the function, $2\pi$-periodic in its second variable, defined for $t\in\mathbb{R}$ by:
$$\eta_s(t,v) := \operatorname{sgn}(v)\, v^s e^{-itv} \qquad (-\pi < v \le \pi)$$
Then one has the approximate sampling representation:
$$\tilde f^{(s)}(t) = \sum_{k=-\infty}^{\infty} f\Bigl(\frac{k\pi}{\sigma}\Bigr)\Bigl(\frac{d}{dt}\Bigr)^{s}\widetilde{\operatorname{sinc}}\Bigl(\frac{\sigma t}{\pi}-k\Bigr) + \bigl(\tilde R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) \qquad (t\in\mathbb{R})$$
with the remainder $\tilde R_{s,\sigma}^{\mathrm{WKS}}f$ given by:
$$\bigl(\tilde R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) := \frac{i^{s-1}}{\sqrt{2\pi}} \int_{|v|\ge\sigma} \hat f(v)\Bigl[\operatorname{sgn}(v)\,v^s e^{ivt} - \Bigl(\frac{\sigma}{\pi}\Bigr)^{s}\eta_s\Bigl(-\frac{\sigma t}{\pi}, \frac{\pi v}{\sigma}\Bigr)\Bigr]\, dv \qquad (t\in\mathbb{R})$$
In particular, there holds:
$$\bigl|\bigl(\tilde R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t)\bigr| \le \sqrt{\frac{2}{\pi}}\int_{|v|\ge\sigma}\bigl|v^s\hat f(v)\bigr|\, dv = \sqrt{\frac{2}{\pi}}\,\operatorname{dist}_1\bigl(f^{(s)},B_\sigma^2\bigr) = o(1) \qquad (\sigma\to\infty)$$
The integral on the right-hand side of (7) and (15) is the distance of $f^{(s)}$ from the space $B_\sigma^2$. Its behaviour for $\sigma\to\infty$ depends on the smoothness properties of f, and was extensively studied in ; recall Section 2.1. This leads to the following estimates for the remainders $R_{s,\sigma}^{\mathrm{WKS}}f$ and $\tilde R_{s,\sigma}^{\mathrm{WKS}}f$.
Corollary 3.5.
If $f\in\mathcal{F}^{s,2}$, $s\in\mathbb{N}_0$, then for any $r\in\mathbb{N}$ and $t\in\mathbb{R}$,
$$\bigl|\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t)\bigr| \le \sqrt{\frac{2}{\pi}}\,\operatorname{dist}_1\bigl(f^{(s)},B_\sigma^2\bigr) \le c_{s,r}\int_\sigma^\infty v^{-1/2}\,\omega_r\bigl(f^{(s)},v^{-1},L^2(\mathbb{R})\bigr)\, dv \qquad (\sigma>0)$$
In particular, if in addition $f^{(s)}\in\operatorname{Lip}_r(\alpha,L^2(\mathbb{R}))$ for $1/2<\alpha\le r$, then:
$$\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) = O\bigl(\sigma^{-\alpha+1/2}\bigr) \qquad (\sigma\to\infty)$$
Corollary 3.6.
Let $s\in\mathbb{N}_0$ and $f\in W^{r,2}(\mathbb{R})\cap C(\mathbb{R})$ for some $r\ge s+1$. Then, for $t\in\mathbb{R}$,
$$\bigl|\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t)\bigr| \le c_{s,r}\,\sigma^{-r+s+1/2}\,\bigl\|f^{(r)}\bigr\|_{L^2(\mathbb{R})} \qquad (\sigma>0)$$
If moreover $f^{(r)}\in\operatorname{Lip}_1(\alpha,L^2(\mathbb{R}))$, $0<\alpha\le1$, then
$$\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t) = O\bigl(\sigma^{-r-\alpha+s+1/2}\bigr) \qquad (\sigma\to\infty)$$
Corollary 3.7.
If $f\in H^2(S_d)$, then for $s\in\mathbb{N}_0$, positive $\sigma\ge s/d$, and $t\in\mathbb{R}$,
$$\bigl|\bigl(R_{s,\sigma}^{\mathrm{WKS}}f\bigr)(t)\bigr| \le c_{d,s}\,\sigma^s e^{-d\sigma}\,\|f\|_{H^2(S_d)}$$
The three corollaries remain true if $R_{s,\sigma}^{\mathrm{WKS}}f$ is replaced by $\tilde R_{s,\sigma}^{\mathrm{WKS}}f$.

4. Boas-type Formulae for Higher Derivatives

In  (see also ) Boas established a differentiation formula that may be presented as follows.
Let $f\in B_\sigma^\infty$, where $\sigma>0$. Then, for $h=\pi/\sigma$, we have:
$$f'(t) = \frac{1}{h}\sum_{k\in\mathbb{Z}} \frac{(-1)^{k+1}}{\pi\bigl(k-\frac12\bigr)^2}\, f\Bigl(t+h\Bigl(k-\frac12\Bigr)\Bigr)$$
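Boas' formula lends itself to a direct numerical check; the sketch below (assuming NumPy; the test signal $f(t)=\sin(\sigma t)\in B_\sigma^\infty$ with $\sigma=2$ is our own choice) compares a truncated version of (16) with $f'(t)=\sigma\cos(\sigma t)$:

```python
import numpy as np

# Boas' formula: f'(t) = (1/h) * sum_k (-1)^(k+1)/(pi*(k-1/2)^2) * f(t + h*(k-1/2)),
# with h = pi/sigma, checked on f(t) = sin(sigma*t).
sigma, t = 2.0, 0.7
h = np.pi / sigma
k = np.arange(-20000, 20001)
coeff = (-1.0)**(k+1) / (np.pi * (k - 0.5)**2)
series = np.sum(coeff * np.sin(sigma * (t + h*(k - 0.5)))) / h
assert abs(series - sigma*np.cos(sigma*t)) < 1e-4   # truncation error is O(1/K)
```

The $O(k^{-2})$ coefficient decay (property (d) below) is what makes such truncations practical.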
When f is a trigonometric polynomial of degree n, i.e., $f(t) = T_n(t) = \sum_{k=-n}^{n} c_k e^{ikt}$, then $f\in B_n^\infty$ and so (16) applies. In this case, by virtue of the periodicity of f, the series in (16) can be condensed to a finite sum. The resulting formula was obtained by M. Riesz in 1914 . In fact, Riesz's interpolation formula for trigonometric polynomials reads:
$$T_n'(t) = \sum_{k=1}^{2n} (-1)^{k-1}\alpha_k\, T_n(t+\tau_k) \qquad (t\in\mathbb{R})$$
where $\alpha_k := n^{-1}\bigl(2\sin(\tau_k/2)\bigr)^{-2}$ and $\tau_k := (2k-1)\pi/(2n)$ for $k=1,2,\dots,2n$. Since $\sum_{k=1}^{2n}\alpha_k = n$, (17) implies the classical Bernstein inequality:
$$\bigl\|T_n^{(s)}\bigr\|_{C_{2\pi}} \le n^s\,\|T_n\|_{C_{2\pi}} \qquad (s\in\mathbb{N})$$
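Riesz's formula (17) is exact for trigonometric polynomials, so it can be verified to machine precision; the sketch below (assuming NumPy; the random polynomial of degree $n=4$ is our own choice) does just that:

```python
import numpy as np

# Riesz interpolation: T'(t) = sum_{k=1}^{2n} (-1)^(k-1) * alpha_k * T(t + tau_k),
# alpha_k = 1/(4n*sin(tau_k/2)^2), tau_k = (2k-1)*pi/(2n).
rng = np.random.default_rng(1)
n = 4
a, b = rng.standard_normal(n+1), rng.standard_normal(n+1)
ks = np.arange(n+1)

def T(t):       return np.sum(a*np.cos(ks*t) + b*np.sin(ks*t))
def Tprime(t):  return np.sum(-a*ks*np.sin(ks*t) + b*ks*np.cos(ks*t))

t = 0.37
j = np.arange(1, 2*n+1)
tau = (2*j - 1) * np.pi / (2*n)
alpha = 1.0 / (4*n*np.sin(tau/2)**2)
riesz = np.sum((-1.0)**(j-1) * alpha * np.array([T(t+tk) for tk in tau]))
assert abs(riesz - Tprime(t)) < 1e-10
```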
Analogously to the proof of (18), Boas' formula (16), also known as the generalized Riesz interpolation formula (as Isaac Pesenson informed us), can be used to prove the basic Bernstein inequality for functions $f\in B_\sigma^p$, namely $\|f^{(s)}\|_{L^p(\mathbb{R})} \le \sigma^s\,\|f\|_{L^p(\mathbb{R})}$, $s\in\mathbb{N}$, which will be treated extensively in Section 6.
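The derivation just indicated rests on the fact that the absolute values of the Boas coefficients sum to exactly σ, since $\sum_{k\in\mathbb{Z}}(k-\tfrac12)^{-2}=\pi^2$, so that $\tfrac1h\sum_k \tfrac{1}{\pi(k-1/2)^2} = \pi/h = \sigma$. A minimal numerical check of this identity (assuming NumPy):

```python
import numpy as np

# sum over k in Z of 1/(pi*(k-1/2)^2) equals pi, which with the factor 1/h = sigma/pi
# gives the constant sigma in Bernstein's inequality ||f'|| <= sigma*||f||.
k = np.arange(-10**6, 10**6 + 1)
total = np.sum(1.0 / (np.pi * (k - 0.5)**2))
assert abs(total - np.pi) < 1e-5    # truncation tail is O(1/K)
```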
There exist families of differentiation formulae for higher derivatives holding in Bernstein spaces; see [6, § 3.2]. Which of them should we consider as a generalization of Boas’ formula? In the applications of (16) the following properties are crucial:
(a)
The formula applies to all entire functions of exponential type σ that are only bounded on $R$.
(b)
The sample points are uniformly spaced according to the Nyquist rate and are located relative to the argument t of the derivative.
(c)
The coefficients do not depend on t.
(d)
The coefficients decay like $O ( k − 2 )$ as $k → ± ∞$.
(e)
When the sample points are arranged in increasing order, then the associated coefficients have alternating signs.
In the case of higher derivatives, a Boas-type formula should also have the properties (a) to (e).
The Boas-type formulae to be established will be deduced as applications of the Whittaker–Kotel’nikov–Shannon sampling theorem for higher order derivatives (Theorem 3.1). Another approach by contour integration methods of complex function theory will be presented in Section 4.1.
In view of Leibniz’s rule, the basic term $sinc ( s ) ( t )$ in (5) can be written as follows:
$$\operatorname{sinc}^{(s)}(t) = \Bigl(\frac{\sin\pi t}{\pi t}\Bigr)^{(s)} = \sum_{j=0}^{s}\binom{s}{j}(\sin\pi t)^{(j)}\Bigl(\frac{1}{\pi t}\Bigr)^{(s-j)} = \sum_{j=0}^{s}\binom{s}{j}\pi^j\sin\Bigl(\pi t+\frac{j\pi}{2}\Bigr)\frac{(-1)^{s-j}(s-j)!}{\pi t^{s-j+1}} = \frac{(-1)^s s!}{\pi t^{s+1}}\sum_{j=0}^{s}\sin\Bigl(\pi t+\frac{j\pi}{2}\Bigr)\frac{(-1)^j(\pi t)^j}{j!}$$
Expressing now $sin ( π t + j π / 2 )$ in terms of $sin π t$ and $cos π t$, we can rewrite $sinc ( s ) ( t )$ in the more handy form:
$$\operatorname{sinc}^{(s)}(t) = \frac{(-1)^s s!}{\pi t^{s+1}}\Biggl\{\sin\pi t\sum_{\nu=0}^{\lfloor s/2\rfloor}\frac{(-1)^\nu(\pi t)^{2\nu}}{(2\nu)!} - \cos\pi t\sum_{\nu=0}^{\lfloor (s-1)/2\rfloor}\frac{(-1)^\nu(\pi t)^{2\nu+1}}{(2\nu+1)!}\Biggr\}$$
where the right-hand side is naturally understood to be continuously extended at $t=0$ by:
$$\operatorname{sinc}^{(s)}(0) = \begin{cases}0, & s \text{ odd},\\ \dfrac{(-1)^{s/2}\pi^s}{s+1}, & s \text{ even}\end{cases} \qquad (s\in\mathbb{N}_0)$$
This can be easily obtained from (3) or the power series expansion:
$$\operatorname{sinc} t = \sum_{j=0}^{\infty}\frac{(-1)^j}{(2j+1)!}(\pi t)^{2j} \qquad (t\in\mathbb{R})$$
Further, it follows easily from (19) that:
$$(2s-1)\operatorname{sinc}^{(2s-2)}\Bigl(\frac12-k\Bigr) = \Bigl(k-\frac12\Bigr)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) \qquad (k\in\mathbb{Z})$$
$$2s\,\operatorname{sinc}^{(2s-1)}(-k) = k\,\operatorname{sinc}^{(2s)}(-k) \qquad (k\in\mathbb{Z})$$
In view of (3) there holds the Fourier expansion (cf. [2, Proposition 4.1.5]):
$$\sum_{k=-\infty}^{\infty}\operatorname{sinc}^{(s)}(t-k)\,e^{ikx} = \sum_{k=-\infty}^{\infty}\frac{1}{2\pi}\int_{-\pi}^{\pi}(iv)^s e^{-i(k-t)v}\,dv\; e^{ikx} = (ix)^s e^{itx} \qquad (|x|<\pi;\ t\in\mathbb{R};\ s\in\mathbb{N}_0)$$
and, in particular, for $s ∈ N$ and $x = 0$,
$$\sum_{k=-\infty}^{\infty}\operatorname{sinc}^{(s)}(t-k) = 0 \qquad (t\in\mathbb{R})$$
As an application, we now come to two Boas-type formulae, one for derivatives of odd order and one for those of even order.
Theorem 4.1.
Let $f\in B_\sigma^\infty$ for some $\sigma>0$, let $h=\pi/\sigma$, and define for $s\in\mathbb{N}$,
$$A_{s,k} := (-1)^{k+1}\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) = \frac{(2s-1)!}{\pi\bigl(k-\frac12\bigr)^{2s}}\sum_{j=0}^{s-1}\frac{(-1)^j}{(2j)!}\Bigl(\pi\Bigl(k-\frac12\Bigr)\Bigr)^{2j} \qquad (k\in\mathbb{Z})$$
$$B_{s,k} := (-1)^{k+1}\operatorname{sinc}^{(2s)}(-k) = \begin{cases}\dfrac{(-1)^{s+1}\pi^{2s}}{2s+1}, & k=0,\\[2ex] \dfrac{(2s)!}{\pi k^{2s+1}}\displaystyle\sum_{j=0}^{s-1}\frac{(-1)^j(\pi k)^{2j+1}}{(2j+1)!}, & k\in\mathbb{Z}\setminus\{0\}.\end{cases}$$
Then there hold the representations:
$$f^{(2s-1)}(t) = \frac{1}{h^{2s-1}}\sum_{k=-\infty}^{\infty}(-1)^{k+1}A_{s,k}\, f\Bigl(t+h\Bigl(k-\frac12\Bigr)\Bigr) \qquad (t\in\mathbb{R})$$
$$f^{(2s)}(t) = \frac{1}{h^{2s}}\sum_{k=-\infty}^{\infty}(-1)^{k+1}B_{s,k}\, f(t+hk) \qquad (t\in\mathbb{R})$$
Proof.
The identities (23) and (24) follow immediately from (19), noting that $sin π ( k − 1 / 2 ) = cos π k = ( − 1 ) k$ and $cos π ( k − 1 / 2 ) = sin π k = 0$.
For (25) and (26) we will give two proofs. The first one applies only to $B σ 2$, which seems to be the more interesting case in engineering applications, whereas the second one also covers the larger space $B σ ∞$.
First assume that $f\in B_\pi^2$, i.e., $h=1$. Setting $t=1/2$ in (5), we obtain by the definition of $A_{s,k}$,
$$f^{(2s-1)}\Bigl(\frac12\Bigr) = \sum_{k=-\infty}^{\infty} f(k)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) = \sum_{k=-\infty}^{\infty} f(k)(-1)^{k+1}A_{s,k}$$
Now, if $f\in B_\sigma^2$ for arbitrary $\sigma>0$, then (25) follows by applying (27) to the function $u\mapsto f(hu+t-h/2)$, which belongs to $B_\pi^2$.
For even order derivatives, one obtains from (5) for $σ = π$ and $t = 0$,
$$f^{(2s)}(0) = \sum_{k=-\infty}^{\infty} f(k)\operatorname{sinc}^{(2s)}(-k) = \sum_{k=-\infty}^{\infty} f(k)(-1)^{k+1}B_{s,k}$$
The proof can now be completed along the same lines as in the case of odd order derivatives.
In order to extend (25) and (26) to $f\in B_\sigma^\infty$, one may apply the $B_\sigma^2$-result just proved to the function $t\mapsto f(t(1-\varepsilon))\operatorname{sinc}(\varepsilon t/\pi)$, $0<\varepsilon<1$, which belongs to $B_\sigma^2$, and then let $\varepsilon\to0+$. This density argument can be avoided by the following alternative proof. To this end, let $f\in B_\pi^\infty$, and let $f_1\in B_\pi^2$ be defined by:
$$f_1(u) := \begin{cases}\dfrac{f(u)-f(0)}{u}, & u\in\mathbb{R}\setminus\{0\},\\ f'(0), & u=0.\end{cases}$$
By Leibniz’s rule one has:
$$f^{(s)}(t) = \bigl(t f_1(t)\bigr)^{(s)} = s f_1^{(s-1)}(t) + t f_1^{(s)}(t) \qquad (t\in\mathbb{R};\ s\in\mathbb{N})$$
Since $f 1 ∈ B π 2$, we can apply (5) to the terms on the right-hand side to obtain:
$$f^{(s)}(t) = s\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(s-1)}(t-k) + t\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(s)}(t-k) \qquad (t\in\mathbb{R};\ s\in\mathbb{N})$$
Now, we have to distinguish between odd and even order derivatives. Replacing s by $2 s − 1$ and setting $t = 1 / 2$ in (28), we obtain in view of (20),
$$f^{(2s-1)}\Bigl(\frac12\Bigr) = (2s-1)\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(2s-2)}\Bigl(\frac12-k\Bigr) + \frac12\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) = \sum_{k=-\infty}^{\infty} f_1(k)\Bigl(k-\frac12\Bigr)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) + \frac12\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) = \sum_{k=-\infty}^{\infty}\bigl(f(k)-f(0)\bigr)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) \qquad (s\in\mathbb{N})$$
Further, noting (22) and the definition of $A s , k$, we end up with:
$$f^{(2s-1)}\Bigl(\frac12\Bigr) = \sum_{k=-\infty}^{\infty} f(k)\operatorname{sinc}^{(2s-1)}\Bigl(\frac12-k\Bigr) = \sum_{k=-\infty}^{\infty} f(k)(-1)^{k+1}A_{s,k} \qquad (s\in\mathbb{N})$$
To complete the proof for odd order derivatives, let now $f ∈ B σ ∞$ for arbitrary $σ > 0$ and apply (29) to the function $u ↦ f ( h u + t − h / 2 )$, where $h = π / σ$.
For even order derivatives one starts again with (28), replaces s by $2 s$ and sets $t = 0$. In view of (21), (22) and (24), one then obtains:
$$f^{(2s)}(0) = 2s\sum_{k=-\infty}^{\infty} f_1(k)\operatorname{sinc}^{(2s-1)}(-k) = \sum_{k=-\infty}^{\infty} f_1(k)\,k\,\operatorname{sinc}^{(2s)}(-k) = \sum_{k=-\infty}^{\infty}\bigl(f(k)-f(0)\bigr)\operatorname{sinc}^{(2s)}(-k) = \sum_{k=-\infty}^{\infty} f(k)(-1)^{k+1}B_{s,k} \qquad (s\in\mathbb{N})$$
Finally, apply this equation to the function $u ↦ f ( h u + t )$.        ☐
Representation (25) can also be found in [19, Corollary 5], where it was proved by contour integral methods.
For $s = 1$, (25) is the classical Boas formula (16), and (26) reads:
$$f''(t) = -\frac{\pi^2}{3h^2}\,f(t) + \frac{2}{h^2}\sum_{\substack{k=-\infty\\ k\ne0}}^{\infty}\frac{(-1)^{k+1}}{k^2}\, f(t+hk)$$
The case $s = 2$ in (25) gives:
$$f^{(3)}(t) = \frac{1}{h^3}\sum_{k=-\infty}^{\infty}(-1)^{k+1}\frac{6}{\pi\bigl(\frac12-k\bigr)^4}\Bigl(1-\frac{\pi^2}{2}\Bigl(\frac12-k\Bigr)^2\Bigr)\, f\Bigl(t+h\Bigl(k-\frac12\Bigr)\Bigr) \qquad (t\in\mathbb{R})$$
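Both special cases can be verified numerically; the sketch below (assuming NumPy; the test signal $f(t)=\sin(\sigma t)$ with $\sigma=1.5$ is our own choice) checks the second- and third-derivative formulae against $-\sigma^2\sin(\sigma t)$ and $-\sigma^3\cos(\sigma t)$:

```python
import numpy as np

sigma, t = 1.5, 0.4
h = np.pi / sigma
k = np.arange(-20000, 20001)
kz = k[k != 0]

# second derivative: -pi^2/(3h^2)*f(t) + (2/h^2)*sum_{k!=0} (-1)^(k+1)*f(t+hk)/k^2
d2 = (-np.pi**2/(3*h**2))*np.sin(sigma*t) \
     + (2/h**2)*np.sum((-1.0)**(kz+1)*np.sin(sigma*(t+h*kz))/kz**2)
assert abs(d2 - (-sigma**2*np.sin(sigma*t))) < 1e-4

# third derivative via the s = 2 case, coefficients (-1)^(k+1)*A_{2,k}
km = k - 0.5
coeff = (-1.0)**(k+1) * 6/(np.pi*km**4) * (1 - (np.pi**2/2)*km**2)
d3 = np.sum(coeff*np.sin(sigma*(t + h*km))) / h**3
assert abs(d3 - (-sigma**3*np.cos(sigma*t))) < 1e-3
```

The slower tolerance on the third derivative reflects the $O(k^{-2})$ decay of the dominant coefficient term.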
It is easily seen that formulae (25) and (26) both have the properties (a) to (d), but it is not immediately clear whether (e) holds. We need to know the signs of the numbers $A s , k$ and $B s , k$. For this, we represent these numbers by an integral with an integrand that does not change sign.
Proposition 4.2.
(a) For $s ∈ N$, $s > 1$ the numbers $A s , k$ of (23) have the representation:
$$A_{s,k} = \frac{(-1)^{s-1}(2s-1)(2s-2)}{\pi\bigl(k-\frac12\bigr)^{2s}}\int_0^{(2k-1)\pi/2} t^{2s-3}\bigl(1+(-1)^k\sin t\bigr)\, dt$$
In particular,
$$(-1)^{s-1}A_{s,k} > 0 \qquad (s\in\mathbb{N};\ k\in\mathbb{Z})$$
(b) For $s ∈ N$ the numbers $B s , k$ of (24) have the representation:
$$B_{s,k} = \frac{(-1)^{s-1}\,2s(2s-1)}{\pi k^{2s+1}}\int_0^{k\pi} t^{2s-2}\bigl(1-(-1)^k\cos t\bigr)\, dt \qquad (k\in\mathbb{Z}\setminus\{0\})$$
In particular,
$$(-1)^{s-1}B_{s,k} > 0 \qquad (s\in\mathbb{N};\ k\in\mathbb{Z})$$
Proof.
First we note that the sum on the right-hand side of (23) is the Taylor polynomial of the cosine function of degree $2 s − 2$ with respect to the origin, evaluated at $( 2 k − 1 ) π / 2$. Next we recall Taylor’s formula for a function f with the remainder represented by an integral. It states that:
$$f(x) = \sum_{\nu=0}^{2s-2}\frac{f^{(\nu)}(0)}{\nu!}\,x^\nu + \int_0^x\frac{(x-t)^{2s-2}}{(2s-2)!}\, f^{(2s-1)}(t)\, dt$$
see, e.g., [20, p. 88, Theorem 6]. Applying this formula to $f = cos$ with $x = ( 2 k − 1 ) π / 2$, we see that (23) may be rewritten as:
$$A_{s,k} = \frac{(-1)^{s+1}(2s-1)}{\pi\bigl(k-\frac12\bigr)^{2s}}\int_0^{(2k-1)\pi/2}\Bigl(t-\Bigl(k-\frac12\Bigr)\pi\Bigr)^{2s-2}\sin t\, dt$$
By a change of variables, we obtain:
$$A_{s,k} = \frac{(-1)^{s+k}(2s-1)}{\pi\bigl(k-\frac12\bigr)^{2s}}\int_0^{(2k-1)\pi/2} t^{2s-2}\cos t\, dt$$
Now an integration by parts, taking $( − 1 ) k + sin t$ as a primitive of $cos t$, yields:
$$A_{s,k} = \frac{(-1)^{s+k-1}(2s-1)(2s-2)}{\pi\bigl(k-\frac12\bigr)^{2s}}\int_0^{(2k-1)\pi/2} t^{2s-3}\bigl((-1)^k+\sin t\bigr)\, dt$$
for $s > 1$. From this, (30) follows immediately.
Except for a set of measure zero, the integrand in (30) is positive on the interval of integration if the upper limit $(2k-1)\pi/2$ of the integral is positive, and negative if that limit is negative. This shows that the integral in (30) is always positive. Hence (31) holds for $s>1$, and in view of $A_{1,k} = \pi^{-1}(k-\frac12)^{-2}$, for $s=1$ as well.
Regarding (32), we note that the sum on the right-hand side of (24) is the Taylor polynomial of degree $2 s − 1$ of the sine function evaluated at $k π$. Using again (34) and proceeding as in the proof of (a), we arrive at (32) and (33).         ☐
Now (31) and (33) show that formulae (25) and (26) have also the property (e) and so they are Boas-type formulae in our sense.
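The sign pattern asserted by Proposition 4.2 can also be observed numerically from the explicit formulae (23) and (24); a small sketch in plain Python (the ranges of s and k are our own arbitrary choices):

```python
from math import factorial, pi

# A_{s,k} and B_{s,k} via their closed forms; Proposition 4.2 asserts
# (-1)^(s-1)*A_{s,k} > 0 and (-1)^(s-1)*B_{s,k} > 0, so the coefficients
# (-1)^(k+1)*A_{s,k} (resp. B_{s,k}) in (25)-(26) alternate in sign with k.
def A(s, k):
    km = k - 0.5
    return factorial(2*s-1)/(pi*km**(2*s)) * \
        sum((-1)**j/factorial(2*j)*(pi*km)**(2*j) for j in range(s))

def B(s, k):
    if k == 0:
        return (-1)**(s+1)*pi**(2*s)/(2*s+1)
    return factorial(2*s)/(pi*k**(2*s+1)) * \
        sum((-1)**j*(pi*k)**(2*j+1)/factorial(2*j+1) for j in range(s))

for s in range(1, 6):
    for k in range(-8, 9):
        assert (-1)**(s-1)*A(s, k) > 0
        assert (-1)**(s-1)*B(s, k) > 0
```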
One may ask why we started with $f^{(2s-1)}(1/2)$ in the proof of (25), and with $f^{(2s)}(0)$ in the proof of (26). If one begins with $f^{(2s-1)}(0)$ in the case of odd order derivatives, one ends up with formulae whose coefficients behave like $O(|k|^{-1})$ for $k\to\pm\infty$. Moreover, they are valid in $B_\sigma^p$ for $1\le p<\infty$ only, but not in $B_\sigma^\infty$. Hence they are not Boas-type formulae in our sense. For $s=1$ such a formula can be found in [7, p. 60 (87)].

4.1. An Alternative Approach by Methods of Complex Analysis

Formulae (25) and (26) of Theorem 4.1 can also be derived by contour integration without employing the sampling theorem and without requiring a process that leads from $B σ 2$ to $B σ ∞$.
Denote by $Q(X)$ the positively oriented rectangle with vertices at $\pm X\pm iX$. For $f\in B_\pi^\infty$, we first consider the meromorphic function:
$$K_{1,s}(z) := \frac{(2s-1)!\, f(z)}{z^{2s}\cos(\pi z)}\sum_{j=0}^{s-1}\frac{(-1)^j}{(2j)!}(\pi z)^{2j}$$
It has a simple pole or a removable singularity at $z = k + 1 / 2$ for $k ∈ Z$, a pole of order at most $2 s$ at zero and no other singularities. Noting that the sum is a truncated Taylor expansion of $cos ( π z )$ around $z = 0$, we conclude that:
$$\frac{z^{2s}}{(2s-1)!}\,K_{1,s}(z) = f(z)\bigl(1+O(z^{2s})\bigr) \qquad (z\to0)$$
By a well-known formula for the residue at a pole of order at most $2 s$, this implies that:
$$\operatorname{res}(K_{1,s},0) = f^{(2s-1)}(0)$$
Furthermore, it is easily verified that:
$$\operatorname{res}\Bigl(K_{1,s},\,k-\frac12\Bigr) = (-1)^k A_{s,k}\, f\Bigl(k-\frac12\Bigr)$$
with $A s , k$ given by (23).
Now, for $N ∈ N$, it is readily seen that:
$$\max_{z\in Q(N)} |K_{1,s}(z)| = O\bigl(N^{-2}\bigr) \qquad (N\to\infty)$$
Hence by the residue theorem,
$$0 = \lim_{N\to\infty}\frac{1}{2\pi i}\int_{Q(N)} K_{1,s}(z)\, dz = f^{(2s-1)}(0) + \sum_{k=-\infty}^{\infty}(-1)^k A_{s,k}\, f\Bigl(k-\frac12\Bigr)$$
which implies (25) by a transformation of the argument of f.
Analogously, one considers:
$$K_{2,s}(z) := \frac{(2s)!\, f(z)}{z^{2s+1}\sin(\pi z)}\sum_{j=0}^{s-1}\frac{(-1)^j(\pi z)^{2j+1}}{(2j+1)!}$$
Here $z = 0$ is a pole of order at most $2 s + 1$ and,
$z 2 s + 1 ( 2 s ) ! K 2 , s ( z ) = f ( z ) [ 1 − ( − 1 ) s ( π z ) 2 s ( 2 s + 1 ) ! + O ( z 2 s + 1 ) ] ( z → 0 )$
which gives:
$res ( K 2 , s , 0 ) = f ( 2 s ) ( 0 ) + ( − 1 ) s + 1 π 2 s 2 s + 1 f ( 0 )$
while,
$res ( K 2 , s , k ) = ( − 1 ) k B s , k f ( k ) ( k ∈ Z ∖ { 0 } )$
with $B s , k$ given by (24). This time, for $N ∈ N$, we find that:
$0 = lim N → ∞ 1 2 π i ∫ Q ( N + 1 2 ) K 2 , s ( z ) d z = f ( 2 s ) ( 0 ) + ( − 1 ) s + 1 π 2 s 2 s + 1 f ( 0 ) + ∑ k = − ∞ k ≠ 0 ∞ ( − 1 ) k B s , k f ( k )$
which implies (26) by a transformation of the argument of f.
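For $s = 1$, formula (25) reduces to Boas’ classical differentiation formula with $A 1 , k = 1 / ( π ( k − 1 2 ) 2 )$. The identity can be checked numerically on a concrete bandlimited signal; the following sketch (our own illustration, not part of the paper — the function name and the truncation index N are hypothetical) does this in Python:

```python
import math

def boas_derivative(f, t, h=1.0, N=20000):
    """Truncated Boas differentiation sum (formula (25) with s = 1):
    f'(t) ~ (1/h) * sum_k (-1)^(k+1) * A_{1,k} * f(t + h(k - 1/2)),
    with A_{1,k} = 1/(pi (k - 1/2)^2)."""
    total = 0.0
    for k in range(-N, N + 1):
        a = 1.0 / (math.pi * (k - 0.5) ** 2)   # A_{1,k}
        total += (-1) ** (k + 1) * a * f(t + h * (k - 0.5))
    return total / h

# f(t) = sin(2t) is bandlimited to [-2, 2], a subset of [-pi, pi], so h = 1 works.
f = lambda u: math.sin(2.0 * u)
t = 0.3
approx = boas_derivative(f, t)
exact = 2.0 * math.cos(2.0 * t)   # f'(t)
print(abs(approx - exact))        # dominated by the truncation tail, O(1/N)
```

The coefficients decay only like $O ( k − 2 )$, so the truncation error decreases slowly with N — the point improved upon in Section 4.2.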

4.2. Variants

If we abandon property (e) of a Boas-type formula and admit a correction at t by the value of f or of its first derivative, we can improve upon property (d) by establishing formulae whose coefficients decay like $O ( k − 3 )$ as $k → ± ∞$. Such formulae are of interest in numerical applications since the truncated series then needs fewer terms to achieve a given accuracy.
The following formula (35) was already obtained in [19, Corollary 5] by methods of complex analysis; formula (36) is new.
Corollary 4.3.
Let $f ∈ B σ ∞$ for some $σ > 0$ and let $s ∈ N$. Then, in the notation of Theorem 4.1,
$f ( 2 s ) ( t ) = ( − 1 ) s π h 2 s f ( t ) + 2 s h 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k k − 1 2 f t + h k − 1 2 ( t ∈ R )$
$f ( 2 s + 1 ) ( t ) = ( − 1 ) s π h 2 s f ′ ( t ) + 2 s + 1 h 2 s + 1 ∑ k = − ∞ k ≠ 0 ∞ ( − 1 ) k + 1 B s , k k f ( t + h k ) ( t ∈ R )$
Proof.
Obviously the function $g : z ↦ ( f ( z ) − f ( 0 ) ) / z$ belongs to $B σ ∞$ and, as is seen by Taylor expansion around 0, we have
$g ( r ) ( 0 ) r ! = f ( r + 1 ) ( 0 ) ( r + 1 ) ! ( r ∈ N )$
Thus applying (25) to g at $t = 0$, we obtain
$f ( 2 s ) ( 0 ) = 2 s h 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k f ( h ( k − 1 2 ) ) − f ( 0 ) k − 1 2$
For $f : z ↦ cos π z$ this formula holds with $h = 1$ and yields
$( − 1 ) s π 2 s = − 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k k − 1 2$
Thus,
$f ( 2 s ) ( 0 ) = ( − 1 ) s π h 2 s f ( 0 ) + 2 s h 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k k − 1 2 f ( h ( k − 1 2 ) )$
which gives (35) by shifting the argument of f.
Analogously, applying (26) to g at $t = 0$, we obtain:
$f ( 2 s + 1 ) ( 0 ) = − 2 s + 1 h 2 s B s , 0 f ′ ( 0 ) + 2 s + 1 h 2 s + 1 ∑ k = − ∞ k ≠ 0 ∞ ( − 1 ) k + 1 B s , k f ( h k ) − f ( 0 ) k$
For the sinc function, this formula holds with $h = 1$ and yields:
$0 = − ( 2 s + 1 ) ∑ k = − ∞ k ≠ 0 ∞ ( − 1 ) k + 1 B s , k k$
Thus,
$f ( 2 s + 1 ) ( 0 ) = − 2 s + 1 h 2 s B s , 0 f ′ ( 0 ) + 2 s + 1 h 2 s + 1 ∑ k = − ∞ k ≠ 0 ∞ ( − 1 ) k + 1 B s , k k f ( h k )$
which gives (36) by substituting the value of $B s , 0$ and shifting the argument of f.     ☐
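For $s = 1$, formula (35) has coefficients $( − 1 ) k + 1 A 1 , k / ( k − 1 2 ) = ( − 1 ) k + 1 / ( π ( k − 1 2 ) 3 )$, exhibiting the improved $O ( k − 3 )$ decay. A numerical sanity check (our own sketch; the function name and truncation index are hypothetical):

```python
import math

def boas_second_derivative(f, t, h=1.0, N=5000):
    """Formula (35) with s = 1:
    f''(t) = -(pi/h)^2 f(t)
             + (2/h^2) * sum_k (-1)^(k+1) * [A_{1,k}/(k-1/2)] * f(t + h(k-1/2)),
    with A_{1,k}/(k-1/2) = 1/(pi (k-1/2)^3), truncated at |k| <= N."""
    total = 0.0
    for k in range(-N, N + 1):
        coeff = 1.0 / (math.pi * (k - 0.5) ** 3)   # A_{1,k}/(k - 1/2)
        total += (-1) ** (k + 1) * coeff * f(t + h * (k - 0.5))
    return -(math.pi / h) ** 2 * f(t) + 2.0 / h ** 2 * total

f = lambda u: math.sin(2.0 * u)       # bandlimited to [-2, 2], h = 1 admissible
t = 0.3
approx = boas_second_derivative(f, t)
exact = -4.0 * math.sin(2.0 * t)      # f''(t)
print(abs(approx - exact))
```

Because of the cubic coefficient decay, a far smaller truncation index suffices here than for formula (25).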

5. Extensions to Non-Bandlimited Functions

Theorem 5.1.
Let $s ∈ N ,$ $f ∈ F 2 s − 1 , 2$. Then $f ( 2 s − 1 )$ exists, and for $h > 0$, $σ : = π / h$ formula (25) extends to:
$f ( 2 s − 1 ) ( t ) = 1 h 2 s − 1 ∑ k ∈ Z ( − 1 ) k + 1 A s , k f t + h k − 1 2 + ( R 2 s − 1 , σ Boas f ) ( t ) ( t ∈ R )$
where,
$( R 2 s − 1 , σ Boas f ) ( t ) = i ( − 1 ) s + 1 2 π h 2 s − 1 ∫ | v | ≥ σ ( h v ) 2 s − 1 − ϕ 2 s − 1 ( h v ) f ^ ( v ) e i v t d v$
$ϕ 2 s − 1$ being the $4 π$-periodic function defined by:
$ϕ 2 s − 1 ( v ) : = v 2 s − 1 , − π < v ≤ π ( 2 π − v ) 2 s − 1 , π < v ≤ 3 π$
In particular,
$( R 2 s − 1 , σ Boas f ) ( t ) ≤ 1 + 3 − 2 s + 1 2 π ∫ | v | ≥ σ | v | 2 s − 1 f ^ ( v ) d v = 1 + 3 − 2 s + 1 2 π dist 1 ( f ( 2 s − 1 ) , B σ 2 )$
Proof.
First assume $h = 1$, i.e., $σ = π$, and let:
$f 1 ( t ) : = 1 2 π ∫ | v | ≥ π f ^ ( v ) e i t v d v$
Then $f − f 1 ∈ B π ∞$ and so (25) applies to this difference, i.e.,
$( R 2 s − 1 , π Boas f ) ( t ) = R 2 s − 1 , π Boas ( f − f 1 ) ( t ) + ( R 2 s − 1 , π Boas f 1 ) ( t ) = ( R 2 s − 1 , π Boas f 1 ) ( t )$
and we find that for $t = 0$,
$( R 2 s − 1 , π Boas f ) ( 0 ) = f 1 ( 2 s − 1 ) ( 0 ) − ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k f 1 k − 1 2$
From (41) there follows:
$f 1 ( 2 s − 1 ) ( 0 ) = 1 2 π ∫ | v | ≥ π ( i v ) 2 s − 1 f ^ ( v ) d v$
and,
$f 1 k − 1 2 = 1 2 π ∫ | v | ≥ π f ^ ( v ) e i ( k − 1 / 2 ) v d v ( k ∈ Z )$
When we use these expressions for the calculation of (42), an interchange of summation and integration is permitted by Levi’s theorem. Hence,
$( R 2 s − 1 , π Boas f ) ( 0 ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) ( i v ) 2 s − 1 − ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k e i ( k − 1 / 2 ) v d v$
Consider now the function $g v : t ↦ e i v t$. Obviously $g v ∈ B π ∞$ when $| v | ≤ π$, and so (25) applies to $g v$ for these values of v, i.e., for $t = 0$,
$g v ( 2 s − 1 ) ( 0 ) = ( i v ) 2 s − 1 = ∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k e i ( k − 1 / 2 ) v ( | v | ≤ π )$
This means that:
$∑ k = − ∞ ∞ ( − 1 ) k + 1 A s , k e i ( k − 1 / 2 ) v = i ( − 1 ) s + 1 ϕ 2 s − 1 ( v ) ( | v | ≤ π )$
Noting that the left-hand side of (44) is a $4 π$-periodic function satisfying, in addition,
$ϕ 2 s − 1 ( v + 2 n π ) = ( − 1 ) n ϕ 2 s − 1 ( v ) ( n ∈ Z )$
we see from the definition (39) of $ϕ 2 s − 1$ that (44) even holds for all $v ∈ R$, and inserting this into (43), we now obtain (37) with remainder (38) for $h = 1$ and $t = 0$.
For arbitrary $h > 0$ and $t ∈ R$ one applies this particular case to the function $f h : u ↦ f ( t + h u )$. Since $f h ^ ( v ) = h − 1 f ^ ( v ) e i v t / h$, the general form (38) of the remainder now follows by a change of variable.
Since $| v 2 s − 1 − ϕ 2 s − 1 ( v ) | ≤ ( 1 + 3 − 2 s + 1 ) | v | 2 s − 1$ for all $v ∈ R$, the first relation in (40) follows immediately. The second is a consequence of [1, Proposition 15].       ☐
Theorem 5.2.
Let $s ∈ N$, $f ∈ F 2 s , 2$. Then $f ( 2 s )$ exists and for $h > 0$, $σ : = π / h$ formula (26) extends to:
$f ( 2 s ) ( t ) = 1 h 2 s ∑ k ∈ Z ( − 1 ) k + 1 B s , k f ( t + h k ) + ( R 2 s , σ Boas f ) ( t ) ( t ∈ R )$
where,
$( R 2 s , σ Boas f ) ( t ) = ( − 1 ) s 2 π h 2 s ∫ | v | ≥ σ ( h v ) 2 s − ϕ 2 s ( h v ) f ^ ( v ) e i v t d v$
$ϕ 2 s$ being the $2 π$-periodic function defined by:
$ϕ 2 s ( v ) : = v 2 s , for | v | ≤ π$
In particular,
$( R 2 s , σ Boas f ) ( t ) ≤ 1 2 π ∫ | v | ≥ σ v 2 s f ^ ( v ) d v = 1 2 π dist 1 ( f ( 2 s ) , B σ 2 )$
Proof.
We proceed as in the proof of Theorem 5.1. With $f 1$ as defined by (41) we have:
$( R 2 s , π Boas f ) ( 0 ) = f 1 ( 2 s ) ( 0 ) − ∑ k = − ∞ ∞ ( − 1 ) k + 1 B s , k f 1 ( k )$
Noting that,
$f 1 ( 2 s ) ( 0 ) = 1 2 π ∫ | v | ≥ π ( i v ) 2 s f ^ ( v ) d v$
$f 1 ( k ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) e i k v d v ( k ∈ Z )$
it follows that,
$( R 2 s , π Boas f ) ( 0 ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) ( i v ) 2 s − ∑ k = − ∞ ∞ ( − 1 ) k + 1 B s , k e i k v d v$
Applying now (26) to $g v : t ↦ e i v t$, where $| v | ≤ π$, we find:
$g v ( 2 s ) ( 0 ) = ( i v ) 2 s = ∑ k = − ∞ ∞ ( − 1 ) k + 1 B s , k e i k v ( | v | ≤ π )$
$ϕ 2 s ( v ) : = 1 i 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 B s , k e i k v = v 2 s ( | v | ≤ π )$
which yields (45) and (46) for $h = 1$ and $t = 0$. The general case follows again by applying this particular case to $f h : u ↦ f ( t + h u )$.
Since $ϕ 2 s$ is a $2 π$-periodic function, we have $| ϕ 2 s ( v ) | ≤ π 2 s$ for all $v ∈ R$. This yields the first relation in (47), and the second one is again a consequence of [1,Proposition 15].      ☐
Counterparts of Corollaries 3.5–3.7 are also valid in the instance of the error $R s , σ Boas f$.
For $s = 1$, Theorem 5.1 reduces to [1, Theorem 5]. The graphs of $ϕ 1$, $ϕ 2$, $ϕ 3$, $ϕ 4$ are shown in Figure 1, Figure 2, Figure 3 and Figure 4, respectively.
Figure 1. The graph of $π − 1 ϕ 1$.
Figure 2. The graph of $π − 2 ϕ 2$.
Figure 3. The graph of $π − 3 ϕ 3$.
Figure 4. The graph of $π − 4 ϕ 4$.

6. Extended Bernstein Inequalities for Higher Order Derivatives

We now come to the matter sketched at the beginning of Section 4. The well-known Bernstein inequality states:
For $f ∈ B σ p$, $1 ≤ p ≤ ∞$, $σ > 0$, there holds:
$∥ f ( s ) ∥ L p ( R ) ≤ σ s ∥ f ∥ L p ( R ) ( s ∈ N )$
The case $s = 1$ is usually proved with the help of Boas’ formula (16), and the general case then by iteration; see [3, Section 11.3]. Boas’ formulae for higher order derivatives enable us to prove (48) directly for arbitrary $s ∈ N$. Indeed, we have by (25),
$∥ f ( 2 s − 1 ) ∥ L p ( R ) ≤ 1 h 2 s − 1 ∑ k = − ∞ ∞ | A s , k | ∥ f ∥ L p ( R )$
The series on the right-hand side can be evaluated as follows. Since $sin ( σ · ) ∈ B σ ∞$, formula (25) applies to this function. For $t = 0$ it yields that:
$( − 1 ) s − 1 σ 2 s − 1 = 1 h 2 s − 1 ∑ k ∈ Z A s , k = ( − 1 ) s − 1 h 2 s − 1 ∑ k = − ∞ ∞ | A s , k |$
where (31) has been used in the last step. Combining this equation with (49), we obtain (48). For derivatives of even order one uses (26) and proceeds analogously.
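For $s = 1$, $h = 1$, $σ = π$, the evaluation (50) asserts that $∑ k | A 1 , k | = ∑ k 1 / ( π ( k − 1 2 ) 2 ) = π$, which is exactly the constant in Bernstein’s inequality $∥ f ′ ∥ ≤ π ∥ f ∥$ for $B π$. A quick numerical confirmation (our own sketch):

```python
import math

# Coefficient identity behind (48)-(50) for s = 1, h = 1 (so sigma = pi):
# sum_k |A_{1,k}| = sum_k 1/(pi (k - 1/2)^2) = pi,
# which turns the triangle-inequality bound (49) into Bernstein's inequality.
N = 200000
total = sum(1.0 / (math.pi * (k - 0.5) ** 2) for k in range(-N, N + 1))
print(total, math.pi)   # total tends to pi as N grows
```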
In this section we employ Theorems 5.1 and 5.2 to extend Bernstein’s inequality for higher derivatives to non-bandlimited functions by adding an “error term”. To this end, properties (a)–(e) of a Boas-type formula, specified in Section 4, will be crucial.
Theorem 6.1.
Let $s ∈ N$, $f ∈ F 2 s − 1 , 2$, $p ∈ [ 2 , ∞ ]$, and suppose that $v 2 s − 1 f ^ ( v )$ belongs to $L p ′ ( R )$ as a function of v. Then, for any $σ > 0$, we have:
$∥ f ( 2 s − 1 ) ∥ L p ( R ) ≤ σ 2 s − 1 ∥ f ∥ L p ( R ) + ∥ R 2 s − 1 , σ Boas f ∥ L p ( R )$
with $R 2 s − 1 , σ Boas f$ defined by (38). Furthermore,
$∥ R 2 s − 1 , σ Boas f ∥ L p ( R ) ≤ ( 2 π ) 1 / 2 − 1 / p ′ 1 h 2 s − 1 ∫ | v | ≥ σ ( h v ) 2 s − 1 − ϕ 2 s − 1 ( h v ) f ^ ( v ) p ′ d v 1 / p ′$
$≤ 4 3 ( 2 π ) 1 / 2 − 1 / p ′ ∫ | v | ≥ σ v 2 s − 1 f ^ ( v ) p ′ d v 1 / p ′ = 4 3 ( 2 π ) 1 / 2 − 1 / p ′ dist p ′ ( f ( 2 s − 1 ) , B σ 2 )$
Proof.
Consider (37) as a function of t and apply $∥ · ∥ L p ( R )$ on both sides. Using the triangle inequality on the right-hand side and noting that $∥ f ∥ L p ( R )$ does not change under a shift of the argument of f, we find that:
$∥ f ( 2 s − 1 ) ∥ L p ( R ) ≤ 1 h 2 s − 1 ∑ k ∈ Z | A s , k | · ∥ f ∥ L p ( R ) + ∥ R 2 s − 1 , σ Boas f ∥ L p ( R )$
Inserting (50) for the series on the right-hand side, we obtain (51).
Next we observe that $( R 2 s − 1 , σ Boas f ) ( − · )$ is the Fourier transform of the function:
$g : v ⟼ i ( − 1 ) s − 1 h 2 s − 1 ( h v ) 2 s − 1 − ϕ 2 s − 1 ( h v ) f ^ ( v )$
The hypotheses imply that $g ∈ L 1 ( R ) ∩ L p ′ ( R )$. Thus, using [2, Prop. 5.2.6] and noting that this book uses the notation $∥ · ∥ p = ( 2 π ) − 1 / ( 2 p ) ∥ · ∥ L p ( R )$, we conclude that:
$∥ R 2 s − 1 , σ Boas f ∥ L p ( R ) ≤ ( 2 π ) 1 / 2 − 1 / p ′ ∥ g ∥ L p ′ ( R ) = ( 2 π ) 1 / 2 − 1 / p ′ 1 h 2 s − 1 ∫ | v | ≥ σ ( h v ) 2 s − 1 − ϕ 2 s − 1 ( h v ) f ^ ( v ) p ′ d v 1 / p ′ ≤ 4 3 ( 2 π ) 1 / 2 − 1 / p ′ ∫ | v | ≥ σ v 2 s − 1 f ^ ( v ) p ′ d v 1 / p ′ = 4 3 ( 2 π ) 1 / 2 − 1 / p ′ dist p ′ ( f ( 2 s − 1 ) , B σ 2 )$
where we have used that $| v 2 s − 1 − ϕ 2 s − 1 ( v ) | ≤ ( 1 + 3 − 2 s + 1 ) | v | 2 s − 1 ≤ ( 4 / 3 ) | v | 2 s − 1$. This completes the proof.           ☐
By an analogous proof, we deduce from Theorem 5.2 the following result for derivatives of even order.
Theorem 6.2.
Let $s ∈ N$, $f ∈ F 2 s , 2$, $p ∈ [ 2 , ∞ ]$, and suppose that $v 2 s f ^ ( v )$ belongs to $L p ′ ( R )$ as a function of v. Then, for any $σ > 0$, we have:
$∥ f ( 2 s ) ∥ L p ( R ) ≤ σ 2 s ∥ f ∥ L p ( R ) + ∥ R 2 s , σ Boas f ∥ L p ( R )$
with $R 2 s , σ Boas f$ defined by (45). Furthermore,
$∥ R 2 s , σ Boas f ∥ L p ( R ) ≤ ( 2 π ) 1 / 2 − 1 / p ′ ∫ | v | ≥ σ | v 2 s f ^ ( v ) | p ′ d v 1 / p ′ = ( 2 π ) 1 / 2 − 1 / p ′ dist p ′ ( f ( 2 s ) , B σ 2 )$
It should be noted that for derivatives of odd order the bound in terms of the distance function is bigger by a factor of $4 / 3$ than the corresponding bound for derivatives of even order. However, when $p = 2$, we can profit from the isometry of the Fourier transform and deduce the same bound in both cases. An obvious modification of the proof of [1, Theorem 11] leads to the following result.
Theorem 6.3.
Let $s ∈ N$, $f ∈ F s , 2$ and suppose that $v s f ^ ( v ) ∈ L 2 ( R )$ as a function of v. Then, for any $σ > 0$, we have
$∥ f ( s ) ∥ L 2 ( R ) ≤ σ s ∥ f ∥ L 2 ( R ) + dist 2 ( f ( s ) , B σ 2 )$

7. Landau–Kolmogorov Inequalities

In this section we consider the case where f belongs to a Sobolev space and deduce Landau–Kolmogorov inequalities, a very popular and still active field. The proof of the following proposition is essentially contained in that of [1, Proposition 13].
Proposition 7.1.
Let $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r ∈ N$. Then for $s ∈ N$ with $s ≤ r$, we have
$dist 2 ( f ( s ) , B σ 2 ) ≤ 1 σ r − s ∥ f ( r ) ∥ L 2 ( R )$
and
$dist q ( f ( s ) , B σ 2 ) ≤ c r − s , q σ r − s + 1 / 2 − 1 / q ∥ f ( r ) ∥ L 2 ( R )$
for $q ∈ [ 1 , 2 )$ and $r ≥ s − 1 / 2 + 1 / q$, where
$c r , q : = 4 − 2 q ( 2 r + 1 ) q − 2 1 / q − 1 / 2$
Proposition 7.1 enables us to deduce from Theorems 6.1–6.3 the following corollaries.
Corollary 7.2.
Let $s ∈ N$ and $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r ≥ 2 s$. Then, for $p ∈ [ 2 , ∞ ]$ and any $σ > 0$, we have:
$∥ f ( 2 s − 1 ) ∥ L p ( R ) ≤ σ 2 s − 1 ∥ f ∥ L p ( R ) + 4 3 ( 2 π ) 1 / 2 − 1 / p ′ c r − 2 s + 1 , p ′ σ r − 2 s + 3 / 2 − 1 / p ′ ∥ f ( r ) ∥ L 2 ( R )$
Corollary 7.3.
Let $s ∈ N$ and let $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r > 2 s$. Then, for $p ∈ [ 2 , ∞ ]$ and any $σ > 0$, we have
$∥ f ( 2 s ) ∥ L p ( R ) ≤ σ 2 s ∥ f ∥ L p ( R ) + ( 2 π ) 1 / 2 − 1 / p ′ c r − 2 s , p ′ σ r − 2 s + 1 / 2 − 1 / p ′ ∥ f ( r ) ∥ L 2 ( R )$
Corollary 7.4.
Let $s ∈ N$ and let $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r > s$. Then, for any $σ > 0$, we have:
$∥ f ( s ) ∥ L 2 ( R ) ≤ σ s ∥ f ∥ L 2 ( R ) + σ s − r ∥ f ( r ) ∥ L 2 ( R )$
Note that for $s = 1$ Corollary 7.2 reduces to a result in [1, Corollary 16].
The statements of Corollaries 7.2–7.4 can be interpreted as a linearized equivalent form of a Landau–Kolmogorov inequality [21,22]. The equivalence is shown by the following lemma, in which $R + : = ( 0 , ∞ )$. A more specialized result was mentioned by Stečkin; also see [21, pp. 5–6].
Lemma 7.5.
Let $( x , y , z ) ∈ R + 3 ,$ $C > 0$ and $0 < s < t .$ Then,
$z ≤ σ s x + C σ s − t y$
for all $σ > 0$ if and only if,
$z ≤ K x 1 − α y α$
where,
$α = s t and K = C α α α ( 1 − α ) 1 − α$
Proof.
Suppose that (58) holds. Then we may minimize the right-hand side over σ by using standard calculus. This leads us to (59) with α and K given by Equation (60).
Conversely, suppose that (59) holds with $K > 0$ and $α ∈ ( 0 , 1 )$. Consider now the function:
$F ( x , y ) : = K x 1 − α y α$
Its Hessian shows that it is concave on $R + 2$. Hence, at any point $( x 0 , y 0 ) ∈ R + 2$ the tangent plane of F lies above the graph of F, that is,
$F ( x , y ) ≤ F ( x 0 , y 0 ) + grad F ( x 0 , y 0 ) , ( x − x 0 , y − y 0 )$
Setting $λ : = y 0 / x 0$, we find by a straightforward calculation that:
$F ( x , y ) ≤ K ( 1 − α ) λ α x + K α λ α − 1 y$
Now, setting $s : = α t$ and defining:
$σ : = K ( 1 − α ) λ α 1 / s$
we find that:
$K α λ α − 1 = K 1 / α α ( 1 − α ) 1 / α − 1 σ s − t$
This shows that:
$F ( x , y ) ≤ σ s x + C σ s − t y$
with C defined by (60). Since λ may take any value in $( 0 , ∞ )$, the same is true for σ. Hence (59) implies (58).            ☐
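The content of Lemma 7.5 is easy to illustrate numerically: minimizing $σ s x + C σ s − t y$ over a fine grid of σ reproduces the closed-form constant $K x 1 − α y α$. The following sketch (the parameter values are arbitrary choices of ours) does this:

```python
import math

# Lemma 7.5, checked numerically: the minimum over sigma > 0 of
#   sigma^s * x + C * sigma^(s-t) * y
# equals K * x^(1-alpha) * y^alpha with alpha = s/t and
#   K = C^alpha / (alpha^alpha * (1-alpha)^(1-alpha)).
s, t, C = 1.0, 3.0, 2.0
x, y = 0.7, 1.9

alpha = s / t
K = C ** alpha / (alpha ** alpha * (1.0 - alpha) ** (1.0 - alpha))
closed_form = K * x ** (1.0 - alpha) * y ** alpha

# crude grid minimization over sigma in (0, 100)
grid_min = min(sig ** s * x + C * sig ** (s - t) * y
               for sig in (0.001 * i for i in range(1, 100000)))
print(grid_min, closed_form)
```

By construction the grid minimum can never fall below the closed-form value; the two agree to within the grid resolution.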
Lemma 7.5 can be used to deduce three Landau–Kolmogorov inequalities from (55)–(57). We may state them in a unified form as follows.
Corollary 7.6.
Let $s ∈ N$ and $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r ∈ N$ and $r > s .$ For $p ∈ [ 2 , ∞ ]$ define:
$α : = s r + 1 2 − 1 p ′$
and,
$C ( s , r , p ) : = 1 , p = 2 ( 2 π ) 1 / 2 − 1 / p ′ c r − s , p ′ , p ∈ ( 2 , ∞ ] , s even 4 3 ( 2 π ) 1 / 2 − 1 / p ′ c r − s , p ′ , p ∈ ( 2 , ∞ ] , s odd$
with $c r − s , p ′$ given by (54). Then,
$∥ f ( s ) ∥ L p ( R ) ≤ C ( s , r , p ) α α α ( 1 − α ) 1 − α ∥ f ∥ L p ( R ) 1 − α ∥ f ( r ) ∥ L 2 ( R ) α$
Unfortunately, the constant in (61) is not the best possible. However, the discussion in [24, pp. 442–447] does not extend to results for $p ∈ ( 2 , ∞ )$. For $p = 2$ inequality (61) simplifies to:
$∥ f ( s ) ∥ L 2 ( R ) ≤ r − s s s / r + s r − s 1 − s / r ∥ f ∥ L 2 ( R ) 1 − s / r ∥ f ( r ) ∥ L 2 ( R ) s / r$
Here the term in square brackets can be replaced by 1. For $s = 1$ and $r = 2$, this is shown in [25, § 7.9, No. 261]. For general $r , s ∈ N$ with $r > s$, we may use the isometry of the $L 2$-Fourier transform together with Hölder’s inequality and proceed as follows:
$∥ f ( s ) ∥ L 2 ( R ) 2 = ∫ R | v s f ^ ( v ) | 2 d v = ∫ R v 2 s | f ^ ( v ) | 2 s / r · | f ^ ( v ) | 2 ( 1 − s / r ) d v ≤ ∫ R v 2 s | f ^ ( v ) | 2 s / r p d v 1 / p · ∫ R f ^ ( v ) 2 ( 1 − s / r ) p ′ d v 1 / p ′ = ∫ R v 2 r f ^ ( v ) 2 d v s / r · ∫ R | f ^ ( v ) | 2 d v 1 − s / r = ∥ f ∥ L 2 ( R ) 2 ( 1 − s / r ) · ∥ f ( r ) ∥ L 2 ( R ) 2 s / r$
Now, employing Lemma 7.5, we may in turn improve upon Corollary 7.4. This way we obtain:
Corollary 7.7.
Let $s ∈ N$ and let $f ∈ W r , 2 ( R ) ∩ C ( R )$, where $r > s$. Then, for any $σ > 0$, we have
$∥ f ( s ) ∥ L 2 ( R ) ≤ σ s ∥ f ∥ L 2 ( R ) + ( r − s ) r − s s s r r 1 / s σ s − r ∥ f ( r ) ∥ L 2 ( R )$

8. Boas-type Formulae for the Hilbert Transform

We now establish the counterparts of the theorems of Section 4 and Section 5 in the instance of Hilbert transforms. Although the definition of the Hilbert transform can be extended to signals $f ∈ L p ( R )$, $1 ≤ p ≤ ∞$, (see [26, p. 126 ff.]), we restrict ourselves to the most important case $p = 2$.

8.1. Formulae for Bandlimited Functions

The derivatives $sinc ˜ ( s ) ( t )$ are needed. They are given by (cf. (10)),
$sinc ˜ ( s ) ( t ) = 1 − cos π t π t ( s ) = ∑ j = 0 s s j ( 1 − cos π t ) ( j ) 1 π t ( s − j ) = ( − 1 ) s s ! π t s + 1 − ∑ j = 0 s s j π j cos π t + j π 2 ( − 1 ) s − j ( s − j ) ! π t s − j + 1 = ( − 1 ) s s ! π t s + 1 1 − ∑ j = 0 s cos π t + j π 2 ( − 1 ) j ( π t ) j j !$
By the cosine addition formula this can be rewritten as:
$sinc ˜ ( s ) ( t ) = ( − 1 ) s s ! π t s + 1 1 − cos π t ∑ ν = 0 ⌊ s 2 ⌋ ( − 1 ) ν ( π t ) 2 ν ( 2 ν ) ! − sin π t ∑ ν = 0 ⌊ s − 1 2 ⌋ ( − 1 ) ν ( π t ) 2 ν + 1 ( 2 ν + 1 ) !$
where the right-hand side is to be continuously extended at $t = 0$ by:
$sinc ˜ ( s ) ( 0 ) = ( − 1 ) ( s − 1 ) / 2 π s s + 1 , s odd 0 , s even ( s ∈ N 0 )$
This can be easily obtained from (11) or the power series expansion:
$sinc ˜ ( t ) = ∑ j = 1 ∞ ( − 1 ) j + 1 ( 2 j ) ! ( π t ) 2 j − 1$
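Reading off the $s$-th Taylor coefficient of this series reproduces the closed-form values of $sinc ˜ ( s ) ( 0 )$ displayed above. The following sketch (the function names are ours) compares the two for small s:

```python
import math

def sinct_deriv_at_zero_from_series(s):
    """s-th derivative of sinc~ at 0, read off the power series:
    s! times the coefficient of t^s. Nonzero only for odd s, where 2j - 1 = s."""
    if s % 2 == 0:
        return 0.0
    j = (s + 1) // 2
    return math.factorial(s) * (-1) ** (j + 1) * math.pi ** s / math.factorial(2 * j)

def sinct_deriv_at_zero_closed(s):
    """Closed form from the displayed formula: (-1)^((s-1)/2) pi^s/(s+1), s odd."""
    if s % 2 == 0:
        return 0.0
    return (-1) ** ((s - 1) // 2) * math.pi ** s / (s + 1)

for s in range(8):
    print(s, sinct_deriv_at_zero_from_series(s), sinct_deriv_at_zero_closed(s))
```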
As an application, we now come to two new Boas-type formulae, one for derivatives of odd order and another for those of even order.
Theorem 8.1.
Let $f ∈ B σ 2$ for some $σ > 0$ and $h : = π / σ$. Then for $s ∈ N$,
$f ˜ ( 2 s − 1 ) ( t ) = 1 h 2 s − 1 ∑ k = − ∞ ∞ ( − 1 ) k + 1 A ˜ s , k f ( t + h k ) ( t ∈ R )$
and for $s ∈ N 0$,
$f ˜ ( 2 s ) ( t ) = 1 h 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 B ˜ s , k f t + h ( k − 1 2 ) ( t ∈ R )$
Here the coefficients $A ˜ s , k$ and $B ˜ s , k$ are given by:
$A ˜ s , k : = ( − 1 ) k + 1 sinc ˜ ( 2 s − 1 ) ( − k ) = ( − 1 ) s π 2 s − 1 2 s k = 0 ( 2 s − 1 ) ! π k 2 s ( − 1 ) k − ∑ j = 0 s − 1 ( − 1 ) j ( 2 j ) ! π k 2 j k ∈ Z ∖ { 0 }$
$B ˜ s , k : = ( − 1 ) k + 1 sinc ˜ ( 2 s ) 1 2 − k = ( 2 s ) ! π ( k − 1 2 ) 2 s + 1 ( − 1 ) k + ∑ j = 0 s − 1 ( − 1 ) j π ( k − 1 2 ) 2 j + 1 ( 2 j + 1 ) ! ( k ∈ Z )$
Proof.
The proof follows along the same lines as the first proof of Theorem 4.1, starting with:
$f ˜ ( 2 s − 1 ) ( 0 ) = ∑ k = − ∞ ∞ f ( k ) sinc ˜ ( 2 s − 1 ) ( − k )$
in the case of odd order derivatives, and with:
$f ˜ ( 2 s ) 1 2 = ∑ k = − ∞ ∞ f ( k ) sinc ˜ ( 2 s ) 1 2 − k$
for even orders.            ☐
For $s = 0$ one obtains from (64):
$f ˜ ( t ) = ∑ k = − ∞ ∞ − 1 π ( k − 1 2 ) f t + h k − 1 2$
and the case $s = 1$ in (63) gives:
$f ˜ ′ ( t ) = π 2 h f ( t ) + 1 h ∑ k = − ∞ ∞ − 2 π ( 2 k + 1 ) 2 f t + h ( 2 k + 1 )$
already to be found in [27, p. 203]; see also . The case $s = 1$ in (64) gives:
$f ˜ ′ ′ ( t ) = 1 h 2 ∑ k = − ∞ ∞ ( − 1 ) k + 1 2 [ ( − 1 ) k + π ( k − 1 2 ) ] π ( k − 1 2 ) 3 f ( t + h ( k − 1 2 ) ) ( t ∈ R )$
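The first of these low-order cases, formula (67), is easy to test numerically. Although Theorem 8.1 is stated for $f ∈ B σ 2$, the underlying Fourier-series identity also covers pure sinusoids of frequency below π, which makes for a convenient check (our own sketch; the function name and truncation index are hypothetical):

```python
import math

def hilbert_boas_first_derivative(f, t, h=1.0, N=20000):
    """Formula (67): derivative of the Hilbert transform,
    f~'(t) = (pi/(2h)) f(t) - (1/h) * sum_k 2/(pi (2k+1)^2) * f(t + h(2k+1))."""
    s = sum(2.0 / (math.pi * (2 * k + 1) ** 2) * f(t + h * (2 * k + 1))
            for k in range(-N, N + 1))
    return math.pi / (2.0 * h) * f(t) - s / h

# For f(t) = cos(2t) (frequency 2 < pi) the Hilbert transform is sin(2t),
# hence f~'(t) = 2 cos(2t).
f = lambda u: math.cos(2.0 * u)
t = 0.4
err = abs(hilbert_boas_first_derivative(f, t) - 2.0 * math.cos(2.0 * t))
print(err)
```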

8.2. Achieser-type Formulae

Achieser [26, p. 143, (II)] proved an informative formula, which combines the reconstruction of the first derivative of a signal with that of its Hilbert transform. It may be stated for our definition of the Hilbert transform as follows:
Let $f ∈ B σ 2$. Then,
$sin α f ′ ( t ) − cos α f ˜ ′ ( t ) = σ ∑ k ∈ Z ( − 1 ) k + 1 2 sin 2 α − k π 2 ( α − k π ) 2 f t + k π − α σ$
We now establish analogous formulae for higher derivatives, distinguishing the cases of odd and even order.
Theorem 8.2.
Let $f ∈ B σ 2$ for some $σ > 0$ and $h : = π / σ$. Then for $α ∈ R$ and $s ∈ N$,
$sin α f ( 2 s − 1 ) ( t ) − cos α f ˜ ( 2 s − 1 ) ( t ) = 1 h 2 s − 1 ∑ k ∈ Z ( − 1 ) k + 1 A s , k ( α ) f t + h k − α π$
and,
$cos α f ( 2 s ) ( t ) + sin α f ˜ ( 2 s ) ( t ) = 1 h 2 s ∑ k ∈ Z ( − 1 ) k + 1 B s , k ( α ) f t + h k − α π$
where,
$A s , k ( k π ) : = ( − 1 ) s − 1 π 2 s − 1 2 s ( k ∈ Z ) , A s , k ( α ) : = − π 2 s − 1 ( 2 s − 1 ) ! ( α − k π ) 2 s ( − 1 ) k cos α − ∑ j = 0 s − 1 ( − 1 ) j ( 2 j ) ! ( α − k π ) 2 j ( α ≠ k π )$
and,
$B s , k ( k π ) : = ( − 1 ) s − 1 π 2 s 2 s + 1 ( k ∈ Z ) , B s , k ( α ) : = − π 2 s ( 2 s ) ! ( α − k π ) 2 s + 1 ( − 1 ) k sin α − ∑ j = 0 s − 1 ( − 1 ) j ( 2 j + 1 ) ! ( α − k π ) 2 j + 1 ( α ≠ k π )$
Proof.
First let $f ∈ B π 2 .$ Then,
$sin α f ( 2 s − 1 ) α π − cos α f ˜ ( 2 s − 1 ) α π = ∑ k ∈ Z sin α sinc ( 2 s − 1 ) α π − k − cos α sinc ˜ ( 2 s − 1 ) α π − k f ( k )$
By using the formulae (19) and (62) for calculating the term in square brackets, we find that:
$sin α sinc ( 2 s − 1 ) α π − k − cos α sinc ˜ ( 2 s − 1 ) α π − k = − π 2 s − 1 ( 2 s − 1 ) ! ( α − k π ) 2 s − cos α + ( − 1 ) k ∑ j = 0 s − 1 ( − 1 ) j ( 2 j ) ! ( α − k π ) 2 j = ( − 1 ) k + 1 A s , k ( α )$
for $α ≠ k π$ with $A s , k ( α )$ as defined in the theorem. The values of $sinc ( 2 s − 1 ) ( 0 )$ and $sinc ˜ ( 2 s − 1 ) ( 0 )$ show that the left-hand side is equal to the right-hand side for $α = k π$ as well. Hence we have proved that:
$sin α f ( 2 s − 1 ) α π − cos α f ˜ ( 2 s − 1 ) α π = ∑ k ∈ Z ( − 1 ) k + 1 A s , k ( α ) f ( k )$
If $f ∈ B σ 2$ for an arbitrary $σ > 0$, then applying this result to the function $u ↦ f ( h u + t − h α / π )$, we obtain (69). The proof of (70) is strictly analogous.       ☐
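Theorem 8.2 with $s = 1$ can likewise be checked numerically. For $f ( t ) = cos ( 2 t )$ one has $f ˜ ( t ) = sin ( 2 t )$, so that $sin α f ′ ( t ) − cos α f ˜ ′ ( t ) = − 2 cos ( 2 t − α )$. The sketch below (our own illustration; the theorem is stated for $B σ 2$, but the identity extends to pure sinusoids of frequency below π) evaluates the right-hand side of (69):

```python
import math

def A1k(alpha, k):
    """A_{1,k}(alpha) from the theorem (case s = 1)."""
    if abs(alpha - k * math.pi) < 1e-12:
        return math.pi / 2.0   # A_{s,k}(k pi) = (-1)^(s-1) pi^(2s-1)/(2s), s = 1
    return math.pi * (1.0 - (-1) ** k * math.cos(alpha)) / (alpha - k * math.pi) ** 2

def achieser_combination(f, t, alpha, h=1.0, N=20000):
    """Right-hand side of (69) for s = 1:
    sin(a) f'(t) - cos(a) f~'(t) ~ (1/h) sum_k (-1)^(k+1) A_{1,k}(a) f(t + h(k - a/pi))."""
    return sum((-1) ** (k + 1) * A1k(alpha, k) * f(t + h * (k - alpha / math.pi))
               for k in range(-N, N + 1)) / h

f = lambda u: math.cos(2.0 * u)
t, alpha = 0.3, 0.7
err = abs(achieser_combination(f, t, alpha) + 2.0 * math.cos(2.0 * t - alpha))
print(err)
```

Setting $α = π / 2$ or $α = 0$ in the same code recovers the checks for (25) and (63), respectively, in accordance with Remark 8.3.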
Remark 8.3.
Note that Theorem 8.2 contains the statements of Theorems 4.1 for $f ∈ B σ 2 ⊂ B σ ∞$ and 8.1 for $s ∈ N$ as special cases. This follows by observing that:
$A s , k ( 0 ) = − A ˜ s , k , A s , k π 2 = A s , k , B s , k ( 0 ) = B s , k , B s , k π 2 = B ˜ s , k$
Next we establish integral representations for the numbers $A s , k ( α )$ and $B s , k ( α )$, which allow us to determine the signs of these numbers.
Proposition 8.4.
(a) For $s ∈ N$, $s > 1$ and $α ≠ k π$ we have the integral representation:
$A s , k ( α ) = ( − 1 ) s − 1 π 2 s − 1 ( 2 s − 1 ) ( 2 s − 2 ) ( α − k π ) 2 s ∫ 0 α − k π t 2 s − 3 ( 1 − ( − 1 ) k cos ( t − α ) ) d t$
Furthermore,
$( − 1 ) s − 1 A s , k ( α ) > 0$
for all $s ∈ N ,$ $k ∈ Z$ and $α ∈ R .$
(b) For $s ∈ N$ and $α ≠ k π$ we have the integral representation:
$B s , k ( α ) = ( − 1 ) s − 1 π 2 s 2 s ( 2 s − 1 ) ( α − k π ) 2 s + 1 ∫ 0 α − k π t 2 s − 2 ( 1 − ( − 1 ) k cos ( t − α ) ) d t$
Furthermore,
$( − 1 ) s − 1 B s , k ( α ) > 0$
for all $s ∈ N$, $k ∈ Z$ and $α ∈ R .$
Proof.
Let $s > 1$ and $α ≠ k π$. Writing $( − 1 ) k cos α$ as $cos ( α − k π )$ and using Taylor’s formula as given by (34), we readily find that:
$A s , k ( α ) = ( − 1 ) s − 1 π 2 s − 1 ( 2 s − 1 ) ! ( α − k π ) 2 s ∫ 0 α − k π ( α − k π − t ) 2 s − 2 ( 2 s − 2 ) ! sin t d t$
Now integration by parts and a change of variables yields (71). The integral in (71) is always positive, regardless of whether $α − k π$ is positive or negative. This shows that (72) holds whenever (71) is valid. For $s = 1$ and for the exceptional values of α, the validity of (72) can be verified directly.
The proofs of (73) and (74) are analogous except for obvious variations.       ☐
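The sign statements (72) and (74) can be spot-checked numerically from the closed form of the coefficients. The following sketch (the grid values are arbitrary choices of ours, kept away from the points $α = k π$) does this for $A s , k ( α )$:

```python
import math

def A_sk(s, alpha, k):
    """Closed form of A_{s,k}(alpha) for alpha != k pi (see the theorem)."""
    d = alpha - k * math.pi
    trunc = sum((-1) ** j * d ** (2 * j) / math.factorial(2 * j) for j in range(s))
    return (-math.pi ** (2 * s - 1) * math.factorial(2 * s - 1) / d ** (2 * s)
            * ((-1) ** k * math.cos(alpha) - trunc))

# Spot-check the sign statement (-1)^(s-1) A_{s,k}(alpha) > 0 on a grid.
for s in (1, 2, 3):
    for k in (-2, -1, 0, 1, 2):
        for i in range(1, 40):
            alpha = -6.0 + 0.31 * i      # grid avoids the points alpha = k pi
            assert (-1) ** (s - 1) * A_sk(s, alpha, k) > 0
print("sign pattern confirmed on the grid")
```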

8.3. Extensions to Non-bandlimited Functions

Our next aim is to extend (63) and (64) to larger function spaces.
Theorem 8.5.
Let $s ∈ N ,$ $f ∈ F 2 s − 1 , 2$. Then $f ˜ ( 2 s − 1 )$ exists and for $h > 0$, $σ : = π / h$ formula (63) extends to:
$f ˜ ( 2 s − 1 ) ( t ) = 1 h 2 s − 1 ∑ k ∈ Z ( − 1 ) k + 1 A ˜ s , k f ( t + h k ) + ( R ˜ 2 s − 1 , σ Boas f ) ( t ) ( t ∈ R )$
where,
$( R ˜ 2 s − 1 , σ Boas f ) ( t ) = ( − 1 ) s + 1 2 π h 2 s − 1 ∫ | v | ≥ σ ( sgn v ) ( h v ) 2 s − 1 − χ 2 s − 1 ( h v ) f ^ ( v ) e i v t d v$
with $χ 2 s − 1$ being the $2 π$-periodic function defined by:
$χ 2 s − 1 ( v ) : = ( sgn v ) v 2 s − 1 = | v | 2 s − 1 ( − π < v ≤ π )$
In particular,
$( R ˜ 2 s − 1 , σ Boas f ) ( t ) ≤ 1 2 π ∫ | v | ≥ σ v 2 s − 1 f ^ ( v ) d v = 1 2 π dist 1 ( f ( 2 s − 1 ) , B σ 2 )$
Proof.
Following the proof of Theorem 5.1 we find that:
$( R ˜ 2 s − 1 , π Boas f ) ( 0 ) = f 1 ˜ ( 2 s − 1 ) ( 0 ) − ∑ k = − ∞ ∞ ( − 1 ) k + 1 A ˜ s , k f 1 ( k )$
$f 1 ˜ ( 2 s − 1 ) ( 0 ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) ( − i sgn v ) ( i v ) 2 s − 1 d v$
and,
$f 1 ( k ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) e i k v d v ( k ∈ Z )$
There follows:
$( R ˜ 2 s − 1 , π Boas f ) ( 0 ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) ( − i sgn v ) ( i v ) 2 s − 1 − ∑ k = − ∞ ∞ ( − 1 ) k + 1 A ˜ s , k e i k v d v$
In order to evaluate the infinite series in (76), we have to proceed in a different way from the proof of Theorem 5.1. Indeed, since the function $g v : t ↦ e i v t$ does not belong to $L 2 ( R )$ we cannot apply formula (63) to this function.
On the other hand, there holds by (65) and (11),
$( − 1 ) k + 1 A ˜ s , k = sinc ˜ ( 2 s − 1 ) ( − k ) = ( − 1 ) s + 1 1 2 π ∫ − π π ( sgn v ) v 2 s − 1 e − i k v d v$
i.e., the series in (76) is the (trigonometric) Fourier series of the $2 π$-periodic function $( − 1 ) s + 1 χ 2 s − 1$ with $χ 2 s − 1$ defined by (75). Moreover, since $χ 2 s − 1 ( v )$ is differentiable, save possibly for $v = j π$, $j ∈ Z$, we even have (cf. [2, Proposition 4.1.5]),
$∑ k = − ∞ ∞ ( − 1 ) k + 1 A ˜ s , k e i k v = ( − 1 ) s + 1 χ 2 s − 1 ( v ) ( a.e. on R )$
The proof can now be completed as the proof of Theorem 5.1.        ☐
The graphs of $χ 1$ and $χ 3$ are shown in Figure 5 and Figure 6 below.
Figure 5. The graph of $π − 1 χ 1$.
Figure 6. The graph of $π − 3 χ 3$.
Theorem 8.6.
Let $s ∈ N$, $f ∈ F 2 s , 2$. Then $f ˜ ( 2 s )$ exists and for $h > 0$, $σ : = π / h$ formula (64) extends to:
$f ˜ ( 2 s ) ( t ) = 1 h 2 s ∑ k = − ∞ ∞ ( − 1 ) k + 1 B ˜ s , k f t + h ( k − 1 2 + ( R ˜ 2 s , σ Boas f ) ( t ) ( t ∈ R )$
where,
$( R ˜ 2 s , σ Boas f ) ( t ) = i ( − 1 ) s + 1 2 π h 2 s ∫ | v | ≥ σ ( sgn v ) ( h v ) 2 s − χ 2 s ( h v ) f ^ ( v ) e i v t d v$
Here $χ 2 s$ is the $4 π$-periodic function defined by:
$χ 2 s ( v ) : = ( sgn v ) v 2 s , − π < v ≤ π sgn ( 2 π − v ) ( 2 π − v ) 2 s , π < v ≤ 3 π$
In particular,
$( R ˜ 2 s , σ Boas f ) ( t ) ≤ 1 + 3 − 2 s 2 π ∫ | v | ≥ σ v 2 s f ^ ( v ) d v = 1 + 3 − 2 s 2 π dist 1 ( f ( 2 s ) , B σ 2 )$
Proof.
Proceeding as in the proof of Theorem 8.5, we arrive at a formula corresponding to (76), namely,
$( R ˜ 2 s , π Boas f ) ( 0 ) = 1 2 π ∫ | v | ≥ π f ^ ( v ) ( − i sgn v ) ( i v ) 2 s − ∑ k = − ∞ ∞ ( − 1 ) k + 1 B ˜ s , k e i ( k − 1 / 2 ) v d v$
Noting (66) and (11), we see that the infinite series in (79) can be rewritten as a Fourier series, namely,
$∑ k = − ∞ ∞ ( − 1 ) k + 1 B ˜ s , k e i ( k − 1 / 2 ) v = e − i v / 2 ∑ k = − ∞ ∞ sinc ˜ ( 2 s ) 1 2 − k e i k v = i ( − 1 ) s + 1 e − i v / 2 ∑ k = − ∞ ∞ 1 2 π ∫ − π π ( sgn u ) u 2 s e i u / 2 e − i k u d u e i k v$
and, using the same arguments on the convergence of Fourier series as above, we obtain:
$∑ k = − ∞ ∞ ( − 1 ) k + 1 B ˜ s , k e i ( k − 1 / 2 ) v = i ( − 1 ) s + 1 χ 2 s ( v ) ( 0 < | v | < π )$
Since the series in (80) defines a $4 π$-periodic function satisfying $χ 2 s ( v + 2 n π ) = ( − 1 ) n χ 2 s ( v )$ for all $n ∈ Z$, it follows that (80) holds a.e. on $R$. This yields (77) with remainder (78) for $h = 1$ and $t = 0$. The rest of the proof now follows as in the proof of Theorem 5.1.       ☐
The graphs of $χ 2$ and $χ 4$ are shown in Figure 7 and Figure 8 below.
Figure 7. The graph of $π − 2 χ 2$.
Figure 8. The graph of $π − 4 χ 4$.

9. Applications

In this section we apply the results of Section 3 and Section 8 to the signal function $g ( t ) : = 1 / ( 1 + t 2 )$, $t ∈ R$, having Fourier transform $( π / 2 ) 1 / 2 exp ( − | v | )$ and Hilbert transform $g ˜ ( t ) = t / ( 1 + t 2 )$. The extended sampling theorem for the Hilbert transform (Theorem 3.4) takes on the concrete form, first for $g ˜ ′$,
$| 1 − t 2 ( 1 + t 2 ) 2 − ∑ k = − ∞ ∞ σ 2 σ 2 + ( k π ) 2 π ( σ t − k ) sin π ( σ t − k ) + cos ( π ( σ t − k ) ) − 1 π ( σ t − k ) 2 |$
$≤ 1 2 π ∫ | v | ≥ σ ( π / 2 ) 1 / 2 | v | e − | v | d v = ( 1 + σ ) e − σ ( σ > 0 )$
In practice, one has to deal with a finite sum rather than with the infinite series. This leads to an additional truncation error, namely,
$( T σ , N f ) ( t ) = ∑ | k | ≥ N + 1 σ 2 σ 2 + ( k π ) 2 π σ t − k sin π ( σ t − k ) + cos ( π ( σ t − k ) ) − 1 π ( σ t − k ) 2$
Assuming $N ≥ γ σ | t |$ for some constant $γ > 1$, the terms of the latter series, denoted by $a k$, can be estimated by:
$| a k | ≤ σ 2 ( π k ) 2 | σ t − k | + 1 | σ t − k | 2 ≤ σ 2 ( 2 γ + 1 ) π 2 ( γ − 1 ) 1 | k | 3 ( | k | > N )$
This yields for the truncation error:
$( T σ , N f ) ( t ) ≤ σ 2 ( 2 γ + 1 ) π 2 ( γ − 1 ) ∑ | k | ≥ N + 1 1 | k | 3 ≤ 2 σ 2 ( 2 γ + 1 ) π 2 ( γ − 1 ) ∫ N ∞ 1 u 3 d u = σ 2 ( 2 γ + 1 ) π 2 ( γ − 1 ) N − 2$
Combining the aliasing error in (81) with this estimate for the truncation error, we finally obtain:
$1 − t 2 ( 1 + t 2 ) 2 − ∑ k = − N N σ 2 σ 2 + ( k π ) 2 π σ t − k sin π ( σ t − k ) + cos ( π ( σ t − k ) ) − 1 π ( σ t − k ) 2 ≤ ( 1 + σ ) e − σ + σ 2 ( 2 γ + 1 ) π 2 ( γ − 1 ) N − 2 ( σ > 0 ; N ≥ γ σ | t | )$
Thus, we have a pretty precise and practical estimate for the error occurring when the derivative of the Hilbert transform is reconstructed in terms of the Hilbert version of the sampling theorem. Whereas the first term on the right-hand side covers the aliasing error, the second one is due to truncation.
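As a cross-check of the data entering these estimates, the Hilbert transform $g ˜ ( t ) = t / ( 1 + t 2 )$ itself can be recomputed numerically from the principal-value integral defining the Hilbert transform. We assume the standard convention $( H g ) ( t ) = π − 1 p . v . ∫ g ( u ) / ( t − u ) d u$, which is consistent with the multiplier $− i sgn v$ used in Section 8; the function names below are ours:

```python
import math

def hilbert_pv(g, t, eps=1e-6, U=100.0, n=100000):
    """Principal-value Hilbert transform:
    (Hg)(t) = (1/pi) p.v. integral of g(u)/(t-u) du
            = (1/pi) * integral_0^infty [g(t-u) - g(t+u)]/u du,
    evaluated with a midpoint rule; the paired form is regular at u = 0."""
    step = (U - eps) / n
    total = 0.0
    for i in range(n):
        u = eps + (i + 0.5) * step
        total += (g(t - u) - g(t + u)) / u
    return total * step / math.pi

g = lambda u: 1.0 / (1.0 + u * u)
t = 1.0
approx = hilbert_pv(g, t)
exact = t / (1.0 + t * t)
print(approx, exact)
```

The pairing of $g ( t − u )$ and $g ( t + u )$ removes the singularity analytically instead of relying on cancellation between large grid values, which keeps the crude midpoint rule stable.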
Similarly, the Boas-type theorem for higher order derivatives (Theorem 8.5) takes the form (recall (67)):
$1 − t 2 ( 1 + t 2 ) 2 − π 2 h 1 1 + t 2 − 1 h ∑ k = − ∞ ∞ − 2 π ( 2 k + 1 ) 2 1 1 + [ t + ( 2 k + 1 ) h ] 2 ≤ 1 2 π ∫ | v | ≥ σ ( π / 2 ) 1 / 2 | v | e − | v | d v = ( 1 + σ ) e − σ ( σ > 0 )$
For the second order derivative of $g ˜$ one obtains from Theorem 8.6 for $s = 1$,
$2 t 3 − 6 t ( 1 + t 2 ) 3 − 1 h 2 ∑ k = − ∞ ∞ ( − 1 ) k + 1 2 [ ( − 1 ) k + π ( k − 1 2 ) ] π ( k − 1 2 ) 3 1 1 + [ t + h ( k − 1 2 ) ] 2 ≤ 1 2 π ∫ | v | ≥ σ ( π / 2 ) 1 / 2 v 2 e − | v | d v = ( 2 + 2 σ + σ 2 ) e − σ ( σ > 0 )$
These are the aliasing errors for the reconstruction of derivatives of the Hilbert transform in terms of the Boas-type formulae. In both cases, the truncation errors can be handled in a similar fashion as above.

Acknowledgements

The authors would like to thank Mikael Skoglund, guest editor together with Eduard Jorswieck of the present special issue of Entropy to which the authors were invited to contribute, for his help on the details concerning Willy Wagner’s many honours received from Sweden.
They also would like to thank Jörg Vautz, Lehrstuhl A für Mathematik, RWTH Aachen, for carefully preparing the eight figures.

A Brief Biography of Karl Willy Wagner

Karl Willy Wagner, born in Friedrichsdorf, a town in the Taunus founded in 1687 by the Huguenots (his mother, Emilie Zeline, née Gauterin, was a traditional Huguenot), completed his studies as an electrical engineer at the well-known Technikum Bingen in 1902 and worked in 1904–1908 as a research engineer at Berlin’s Siemens & Schuckert. In 1908 he became the assistant to the physicist Hermann Theodor Simon at Göttingen, under whom he received his Dr. Phil. in 1910, and in 1912 he earned the Habilitation degree at the TH Berlin. Already in fall 1912 he was appointed Professor and member of the Physikalisch-Technische Reichsanstalt, serving as its President from 1923 to 1927. He then became the founding professor of the Institut für Schwingungsforschung (Oscillation Research) at the TH Berlin, named the Heinrich-Hertz-Institute in 1930.
At a special meeting of the Heinrich-Hertz Institute in January 1936, the Gaudozentenführer (the Nazi boss of Berlin's universities), Willi Willing (1907–1983), announced that he planned to dismiss Wagner from his offices for alleged financial irregularities: receiving a Leica camera from the Leitz firm, and taking a roundabout route home in his official car to stop for coffee. None of the professors present objected; only Dr. Alfred Thoma, Wagner's assistant, did, stating that these accusations did not represent the facts. As a consequence, Thoma was discharged a few days later. Willing declared Wagner a Volksfeind (enemy of the people) and fired him.
Willing himself, who had received his doctorate only in 1935, became provisional Director of the Heinrich-Hertz Institute in February 1936.
According to Wagner himself, he was removed from office because he refused to dismiss his Jewish employees, ignoring the Nuremberg laws of September 1935 according to which German universities were to be “cleansed” of their Jewish students and lecturers.
As Dr. Thoma reported, Wagner would have been sent to a concentration camp had he not been bedridden with thrombosis at the time. Concentration camps were an integral feature of the regime from the moment Hitler came to power; they were established all over Germany to hold the masses of people labelled as political or religious opponents and social deviants, and until 1938 most of their prisoners were German citizens. Only after the “Kristallnacht” pogroms of November 1938 did the Nazis conduct mass arrests of adult male Jews. To facilitate the genocide of the Jews, the Nazis established “killing centers”, the first being Chelmno in Poland in December 1941.
After his dismissal Wagner returned home to Friedrichsdorf and together with his brother-in-law Friedrich Schmitt founded the “Landgrafen-Zwiebackfabrik”, enabling him to make a living.
In 1949 Wagner became President of the predecessor of the University of Mainz, which he co-founded, and in 1951 became Honorary Professor at Mainz.
He received many honours, the first, in 1919, being the “Cedergrenska guldmedaljen” (Gold Cedergren Medal), awarded every five years by the Royal Institute of Technology in Stockholm; in 1935 he was elected a corresponding member of the Royal Swedish Academy of Engineering Sciences (korresponderande ledamot, Kungl. Ingenjörsvetenskapsakademin). He was a referee of Wilhelm Cauer’s milestone thesis of 1926. Already in the fall of 1946, Wagner could again visit his colleagues in Switzerland. A little later, he was invited to lecture in Stockholm, Uppsala and Gothenburg, as well as to give a guest course at the KTH, the Royal Institute of Technology in Stockholm. Wagner’s treatise “Operatorenrechnung und Laplacesche Transformation” [28] was one of the first to attempt a justification of the operational calculus of Oliver Heaviside (the recipient of the Cedergren Medal in 1924), and to make transform methods, Fourier transforms among them, popular in engineering circles; see [29,30,31,32,33].
Our paper is therefore dedicated to a true man of principles, a renowned electrical engineer, one who was fired from office for refusing to dismiss his Jewish employees, one of the few such cases known in university circles during the Nazi era.

References

1. Butzer, P.L.; Schmeisser, G.; Stens, R.L. Basic relations valid for the Bernstein space $B σ p$ and their extensions to functions from larger spaces in terms of their distances from $B σ p$. To be submitted for publication.
2. Butzer, P.L.; Nessel, R.J. Fourier Analysis and Approximation; Academic Press: New York, NY, USA; Birkhäuser: Basel, Switzerland, 1971.
3. Boas, R.P., Jr. Entire Functions; Academic Press Inc.: New York, NY, USA, 1954.
4. Lundin, L.; Stenger, F. Cardinal-type approximations of a function and its derivatives. SIAM J. Math. Anal. 1979, 10, 139–160.
5. Butzer, P.L.; Splettstößer, W. Approximation und Interpolation durch verallgemeinerte Abtastsummen; Westdeutscher Verlag: Opladen, Germany, 1977.
6. Butzer, P.L.; Splettstößer, W.; Stens, R.L. The sampling theorem and linear prediction in signal analysis. Jahresber. Deutsch. Math.-Verein. 1988, 90, 19–70.
7. Butzer, P.L.; Schmeisser, G.; Stens, R.L. An Introduction to Sampling Analysis. In Nonuniform Sampling—Theory and Practice; Marvasti, F., Ed.; Kluwer/Plenum: New York, NY, USA, 2001; pp. 17–121.
8. Splettstößer, W. 75 Years of Aliasing Error in the Sampling Theorem. In EUSIPCO–83, Signal Processing: Theories and Applications, Proceedings of the 2nd European Signal Processing Conference, Erlangen, Germany, 12–16 September 1983; Schüssler, H.W., Ed.; North-Holland Publishing Company: Amsterdam, The Netherlands, 1983; pp. 1–4.
9. Brown, J.L., Jr. On the error in reconstructing a non-bandlimited function by means of the bandpass sampling theorem. J. Math. Anal. Appl. 1967, 18, 75–84.
10. Butzer, P.L.; Splettstößer, W. A sampling theorem for duration-limited functions with error estimates. Inf. Control 1977, 34, 55–65.
11. Stens, R.L. A unified approach to sampling theorems for derivatives and Hilbert transforms. Signal Process 1983, 5, 139–151.
12. Bracewell, R.N. The Fourier Transform and its Applications, 2nd ed.; McGraw-Hill Book Co.: New York, NY, USA, 1978.
13. Boche, H. Charakterisierung der Systeme zur Ermittlung der Hilbert-Transformation (in German with English summary) [Characterization of systems to calculate the Hilbert transform]. Forschung im Ingenieurwesen 1997, 63, 1–6.
14. Hahn, S.L. Hilbert Transforms in Signal Processing; The Artech House Signal Processing Library, Artech House Inc.: Boston, MA, USA, 1996.
15. Feldman, M. Hilbert Transform Applications in Mechanical Vibration; John Wiley & Sons: New York, NY, USA, 2011.
16. Boche, H. Konvergenzverhalten der konjugierten Shannonschen Abtastreihe (in German with English summary) [Convergence behavior of the conjugate Shannon sampling series]. Acta Math. Inform. Univ. Ostraviensis 1997, 5, 13–26.
17. Boas, R.P., Jr. The derivative of a trigonometric integral. J. Lond. Math. Soc. 1937, s1-12, 164.
18. Riesz, M. Eine trigonometrische Interpolationsformel und einige Ungleichungen für Polynome. Jahresber. Deutsch. Math.-Verein. 1914, 23, 354–368.
19. Schmeisser, G. Numerical differentiation inspired by a formula of R. P. Boas. J. Approx. Theory 2009, 160, 202–222.
21. Bagdasarov, S. Chebyshev Splines and Kolmogorov Inequalities; Birkhäuser Verlag: Basel, Switzerland, 1998.
22. Kwong, M.K.; Zettl, A. Norm Inequalities for Derivatives and Differences; Volume 1536, Lecture Notes in Mathematics; Springer-Verlag: Berlin/Heidelberg, Germany, 1992.
23. Stečkin, S.B. Inequalities between norms of derivatives of arbitrary functions (Russian). Acta Sci. Math. (Szeged) 1965, 26, 225–230.
24. Kolmogorov, A.N. Selected Works of A. N. Kolmogorov, Volume I: Mathematics and Mechanics; Tikhomirov, V.M., Ed.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1991.
25. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge Mathematical Library, Cambridge University Press: Cambridge, UK, 1934.
26. Achieser, N.I. Theory of Approximation; Frederick Ungar Publishing Co.: New York, NY, USA, 1956.
27. Timan, A.F. Theory of Approximation of Functions of a Real Variable; A Pergamon Press Book, The Macmillan Co.: New York, NY, USA, 1963.
28. Wagner, K.W. Operatorenrechnung und Laplacesche Transformation nebst Anwendungen in Physik und Technik; Johann Ambrosius Barth Verlag: Leipzig, Germany, 1939; Revised edition, 1950.
29. Thoma, A. Karl Willi Wagner zum 65. Geburtstag. AEÜ 1948, 2, 117–119.
30. Luetzen, J. Heaviside’s operational calculus and the attempts to rigorize it. Arch. Hist. Exact Sci. 1979, 21, 161–200.
31. Wunsch, G. Geschichte der Systemtheorie: Dynamische Systeme und Prozesse; R. Oldenbourg Verlag: Munich, Germany, 1985.
32. Noll, P. Nachrichtentechnik an der TH/TU Berlin—Geschichte, Stand und Ausblick. Available online: http://www.nue.tu-berlin.de/fileadmin/fg97/Ueber_uns/Geschichte/Dokumente/th_nachr.pdf (accessed on 15 August 2012).
33. Dittrich, E. Prof. Dr. Karl Willy Wagner. Ein Leben zwischen Tradition und Innovation. In Friedrichsdorfer Schriften; OK:verlag und medien: Friedrichsdorf, Germany, 2003/2004; Volume 3, pp. 32–51.