Mathematics 2015, 3(2), 299-318; doi:10.3390/math3020299

On the Duality of Discrete and Periodic Functions
Studies conducted at the Institute of Mathematics, Ludwig Maximilians University Munich, 80333 Munich, Germany
Academic Editor: Hari M. Srivastava
Received: 8 October 2014 / Accepted: 22 April 2015 / Published: 30 April 2015


Abstract: Although versions of Poisson’s Summation Formula (PSF) have already been studied extensively, there seems to be no theorem that relates discretization to periodization and periodization to discretization in a simple manner. In this study, we show that two complementary formulas, both closely related to the classical Poisson Summation Formula, are needed to form a reciprocal Discretization-Periodization Theorem on generalized functions. We define discretization and periodization on generalized functions and show that the Fourier transform of a periodic function is a discrete function and, vice versa, the Fourier transform of a discrete function is a periodic function.
Keywords: Poisson Summation Formula; discreteness; periodicity; sampling; replicating; Schwartz space; tempered distributions; generalized functions; discrete functions

1. Introduction

Versions of Poisson’s summation formula have been the subject of many publications, and it is clear that the formula plays a central role in mathematics, physics and electrical engineering. Many publications describe the interaction of discretization (or sampling) and periodization. In some we find a clear statement that the Poisson Summation Formula (PSF) is linked to both discretization and periodization [1–3], and in rare cases we find that the PSF actually is a discretization-periodization theorem [4]. Although PSF versions are commonly understood as discretization-periodization relationships, it is difficult to find a simple statement of the precise interrelation between discretization and periodization. In this paper, we show that two complementary formulas, both very closely related to Poisson’s classical summation formula, are needed to form a complete, i.e., reciprocal, discretization-periodization theorem in the tempered distribution sense.

Most publications about discretization and periodization relationships are so far application driven. A comprehensive derivation of interactions between sampling (discretization) and periodization has already been performed for Gabor systems. Orr and Janssen initially covered the transition from continuous settings to discrete settings [5,6] and later Kaiblinger [7] covered the transition from discrete settings to continuous settings. Finally, Søndergaard summarized and complemented these results in [8] and, amongst other things, wrote a Poisson Summation Formula without sums but with sampling and periodization operators instead ([4], Theorem A.1), very similar to the derivation in this paper but in S0(ℝ), Feichtinger’s algebra [9].

Another comprehensive treatment of interactions between discretization and periodization is given in Ozaktas [10], whereby discreteness and periodicity are characterized as Fourier duals, just as translation and modulation are. Amongst others, the notion of discrete functions is clarified as being the Fourier dual of periodic functions. Other publications establish a connection between sampling and periodization, on the one hand, and Shannon’s Sampling Theorem on the other [11], without compelling necessity to mention Poisson’s summation formula in this context. There are even excellent signal processing textbooks that manage without Poisson’s summation formula but not without describing its impacts, e.g., [12]. Poisson’s summation formula also plays an important role in wavelet theory [13–15], although only in proofs, because it provides transitions from continuous to discrete settings.

Lastly, Benedetto and Zimmermann investigated the validity of Poisson’s summation formula in several classical spaces in analysis including the space of tempered distributions [3,16]. They define sampling and periodization operators and, amongst others, characterize uniform sampling as the Fourier transform of periodization.

The Poisson Summation Formula is also honored as the link between the classical Fourier transform and Fourier series in Girgensohn [17], Hunter and Nachtergaele [18] and Strichartz [19]. Indeed, we will see that this link is discretization, formally correct only in the sense of generalized functions, called distributions. Hence, in order to treat discrete functions formally correctly, we will need the theory of distributions, developed by Laurent Schwartz in [20,21], with contributions by Temple [22] and Lighthill [2], Gelfand and Schilow [23,24] and Zemanian [25]. For our purposes, it is especially convenient to work entirely within the space of tempered distributions S′, because the Fourier transform of a tempered distribution is again a tempered distribution [21,26]. Note that the Fourier transform can be generalized even further, to spaces larger than S′, see [27] for example, but extending these results accordingly is beyond the scope of this paper.

Naturally, the results presented in this paper also hold in less general settings. For engineering purposes, it often suffices to apply the definitions of sampling and periodization and use their symbols, while neglecting their tempered distribution nature, in order to simplify formulas already established in textbooks. A first impression of this technique is given in Section 5, which uses almost no generalized functions theory.

Section 2 will first familiarize the reader with the notations used throughout this article. Section 3 explains the proof method; readers familiar with distribution theory may skip this section. In Section 4 we define discretization and periodization as operations on suitable subspaces of tempered distributions and in Section 5 we hope to motivate the reader to apply them to already known results in textbooks. Section 6 provides calculation rules for both discretization and periodization, such that the proof of our theorem in Section 7 reduces to two lines. Finally, we summarize our results in Section 8 and provide an outlook in Section 9.

2. Notation

We mainly follow established notations as in Schwartz [21], including his Fourier transform definition Equation (1), Benedetto [3], Strichartz [26] and Walter [27], and we substantially follow Woodward [1] and Bracewell [28]. One of the primary goals of this paper is to extend Bracewell’s idea of establishing symbolic calculation rules in signal processing for educational purposes. His textbook The Fourier Transform and its Applications is a highly recognized standard work in the electrical engineering literature. By replacing Bracewell’s Shah symbol III by the two Shah-like symbols ⊥⊥⊥ and ΔΔΔ we achieve three things: (1) we are able to express the interrelation between discretization and periodization, (2) we are able to express the duality of both symbols and (3) we overcome difficulties that arise when conventional, dissimilar symbols are used.

The application of a distribution f ∈ S′ to some test function φ ∈ S is denoted as ⟨f, φ⟩ and, in particular, if f is a locally integrable function such that the following integral exists, we define it by

$$\langle f, \varphi \rangle := \int_{\mathbb{R}^n} f(t)\, \varphi(t)\, d^n t.$$

For singular distributions, e.g., δ ∈ S′, it is required to write down their definition explicitly, i.e., ⟨δ, φ⟩ := φ(0). The Fourier transform for integrable functions φ ∈ L¹(ℝⁿ) is defined by

$$(\mathcal{F}\varphi)(\sigma) := \int_{\mathbb{R}^n} \varphi(t)\, e^{-2\pi i\, t \cdot \sigma}\, d^n t = \langle e^{-2\pi i\, t \cdot \sigma}, \varphi \rangle$$

where i = √(−1) and t·σ is the usual scalar product in ℝⁿ. For generalized functions f ∈ S′ we define the Fourier transform by

$$\langle \mathcal{F} f, \varphi \rangle := \langle f, \mathcal{F}\varphi \rangle,$$

i.e., 𝓕 is “rolled off” to test functions φ, a standard technique for definitions in distribution theory. Defining the Fourier transform in this way implies that 𝓕δ = 1 and 𝓕1 = δ, see for example [21]. Also note that 𝓕⁻¹𝓕f = f and 𝓕𝓕⁻¹f = f for any f ∈ S′, see [21,26], and 𝓕𝓕φ = φ̌ where φ̌(t) := φ(−t) for test functions φ ∈ S. Furthermore, we define Fourier series 𝓕_s for suitable functions f by

$$(\mathcal{F}_s f)(\sigma) := \sum_{k \in \mathbb{Z}^n} f(k)\, e^{-2\pi i\, k \cdot \sigma}.$$
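As a quick sanity check of convention (1), the Gaussian e^{−πt²} is its own Fourier transform under this normalization. The following NumPy sketch (not part of the paper; the function name `fourier_transform` is ours) approximates the integral by a Riemann sum:

```python
import numpy as np

# Fourier transform convention (1): (F phi)(sigma) = integral phi(t) exp(-2 pi i t sigma) dt.
# Under this normalization, exp(-pi t^2) is its own transform.

def fourier_transform(phi, sigma, t):
    """Approximate (F phi)(sigma) by a Riemann sum over the grid t."""
    dt = t[1] - t[0]
    return np.sum(phi(t) * np.exp(-2j * np.pi * t * sigma)) * dt

t = np.linspace(-20.0, 20.0, 400001)      # fine grid; the Gaussian decays fast
phi = lambda t: np.exp(-np.pi * t**2)

for sigma in (0.0, 0.5, 1.3):
    approx = fourier_transform(phi, sigma, t)
    exact = np.exp(-np.pi * sigma**2)     # F(phi) = phi for this Gaussian
    assert abs(approx - exact) < 1e-8
```

The same sketch can be used to check any of the transform pairs appearing later, as long as the function decays fast enough for the truncated integral.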

The translation operator τ for some function f : ℝⁿ → ℂ and some constant a ∈ ℝⁿ is defined by (τ_a f)(t) := f(t − a). For generalized functions it is defined by

$$\langle \tau_a f, \varphi \rangle := \langle f, \tau_{-a}\, \varphi \rangle.$$

Additionally, we abbreviate τ_a δ =: δ_a for translated Dirac impulses δ. In the literature we often find a Dirac impulse train, also called a Dirac comb,

$$\mathrm{III} := \sum_{k \in \mathbb{Z}^n} \delta_k.$$

Instead of δ_k, we often use δ_{kT} where T ∈ ℝ₊ⁿ = {t ∈ ℝⁿ : 0 < t_ν, 1 ≤ ν ≤ n}. By kT we mean component-wise multiplication with k ∈ ℤⁿ. Note that III ∈ S′, see [2,26,29] for example.

As mentioned already, the Fourier transform of a tempered distribution is again a tempered distribution. So, for convenience, we stay in S′. Additionally, we sometimes need to restrict ourselves to several subspaces of S′ and would like to remind the reader that the following continuous embeddings of distribution spaces

$$\mathcal{D} \hookrightarrow \mathcal{S} \hookrightarrow \mathcal{O}_M \hookrightarrow \mathcal{O}'_C \hookrightarrow \mathcal{D}'_{L^1} \hookrightarrow \mathcal{S}' \hookrightarrow \mathcal{D}'$$
have been investigated in [21]. We will need the space of test functions 𝓔 := C^∞ and its dual space 𝓔′, the space D := C_c^∞ of infinitely differentiable functions with compact support, the space of bounded test functions and its dual space D′_{L¹}, and the Schwartz space S of rapidly decreasing, infinitely differentiable functions together with its dual space, the space of tempered distributions S′. We exploit that 𝓕 is an automorphism of S, i.e., 𝓕(S) = S, which implies that 𝓕 is also an automorphism of S′, i.e., 𝓕(S′) = S′.

Additionally, we require the space O_M of slowly increasing test functions and the space O′_C of rapidly decreasing distributions, the image of O_M in S′ with respect to 𝓕, i.e., 𝓕(O_M) = O′_C and 𝓕(O′_C) = O_M. Recall that O_M and O′_C are the spaces of multiplication operators in S and of convolution operators in S′, respectively [21]. Finally, we abbreviate all spaces X(ℝⁿ) to X, and for functions φ : ℝⁿ → ℂ depending on t ∈ ℝⁿ we write φ instead of φ(t). For test functions in O_M we use α, β, for test functions in S we use φ, ψ, and for generalized functions in O′_C or S′ we use f, g.

3. Idea

In signal processing it is well known that sampling a function means periodizing its spectrum and, vice versa, periodizing a function means sampling its spectrum. Our goal is, first, to formulate this in the simplest possible way and, secondly, to prove it in the widest possible setting. We are especially interested in the conditions under which these statements hold in this widest sense.

Denoting a generalized function as f and its spectrum as 𝓕f, we additionally require a symbol for discretization, say ⊥⊥⊥, and one for periodization, say ΔΔΔ. We then write a sampled function as f_s = ⊥⊥⊥f and a periodized function as f_p = ΔΔΔf. This notation is deliberately chosen such that it reminds us of linear algebra notations like y = Ax. Having a symbol for discretization, we are now able to express the relationship between Fourier series 𝓕_s and the Fourier transform 𝓕 as 𝓕_s(f) = 𝓕(⊥⊥⊥f), i.e., first applying discretization and then 𝓕 to f corresponds to what we know as the Fourier series of f; shortly, 𝓕_s = 𝓕 ∘ ⊥⊥⊥ where ∘ is concatenation. Now, coming back to the first two sentences in this section, our goal is to prove that 𝓕(⊥⊥⊥f) = ΔΔΔ(𝓕f) and 𝓕(ΔΔΔf) = ⊥⊥⊥(𝓕f) in the generalized functions sense if f satisfies suitable properties. But what do these properties look like, why do we need distribution theory, and does this really have applications?

Properties. We will see that f must be an element of O_M ⊂ S′ to fulfill the first equation, and an element of 𝓕(O_M) = O′_C ⊂ S′ to fulfill the second.

3.1. Distribution Theory

Indeed, the only way to prove these equations is to use the theory of generalized functions (distribution theory), because sampling some function φ(t), often explained via the definition

$$\int_{\mathbb{R}} \delta(t)\, \varphi(t)\, dt \overset{\mathrm{def}}{=} \varphi(0),$$

is mathematically wrong: either this integral is zero according to the definition of δ, or it contradicts the Lebesgue integral. Distribution theory has been developed to overcome this problem. Instead of the integral expression, we now write

$$\langle \delta, \varphi \rangle \overset{\mathrm{def}}{=} \varphi(0)$$
to apply δ to a function φ. The novelty is that arbitrarily defined “functions” f can now be applied to other functions φ without contradicting the Lebesgue integral. Hence, ⟨·, ·⟩ can be considered a “generalized integral” but, in turn, we must take care that ⟨f, φ⟩ < ∞ holds for all “functions” f operating on functions φ. This is secured by using spaces X of φ having certain properties and spaces X′ of f with complementary properties. For example, if f is slowly growing then φ must be rapidly decreasing; vice versa, if f is rapidly decreasing then φ is allowed to be slowly growing. Coming back to our equations, we must show that if f ∈ X′ then the above equations, e.g., 𝓕(⊥⊥⊥f) = ΔΔΔ(𝓕f), hold in the sense that

$$\langle \mathcal{F}(⊥⊥⊥\, f), \varphi \rangle = \langle ΔΔΔ(\mathcal{F} f), \varphi \rangle < \infty$$

for every test function φ ∈ X. Also, note that spaces of test functions are continuously embedded in spaces of generalized functions, X ⊂ X′, such that test functions themselves are generalized functions. Moreover, in distribution theory every test function φ is required to be infinitely differentiable (φ ∈ C^∞) such that derivatives of generalized functions can always be found, no matter whether they exist in the conventional sense or not. Distribution theory is therefore also regarded as the completion of Fourier analysis [2] or of differential calculus [26].
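The idea that ⟨·,·⟩ generalizes the integral can be made tangible numerically: replacing δ by a narrow Gaussian δ_ε of unit mass, the ordinary integral ⟨δ_ε, φ⟩ tends to φ(0) as ε shrinks. A small sketch (ours, not from the paper):

```python
import numpy as np

# <delta, phi> := phi(0) is not a Lebesgue integral against a function, but
# delta is the weak limit of ordinary functions delta_eps, for which
# <delta_eps, phi> = integral delta_eps(t) phi(t) dt makes classical sense.

def pair(f_vals, phi_vals, dt):
    """Classical pairing <f, phi> as an integral (Riemann sum)."""
    return np.sum(f_vals * phi_vals) * dt

t = np.linspace(-5.0, 5.0, 200001)
dt = t[1] - t[0]
phi = np.cos(t) * np.exp(-t**2)           # a test function with phi(0) = 1

for eps in (0.5, 0.1, 0.02):
    delta_eps = np.exp(-np.pi * (t / eps) ** 2) / eps   # unit-mass Gaussian
    val = pair(delta_eps, phi, dt)
    print(eps, val)                        # approaches phi(0) = 1 as eps -> 0
```

The error behaves like O(ε²), which is precisely why δ itself must be handled as a distribution rather than as the pointwise limit of δ_ε.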

3.2. Applications

As usually done in distribution theory, we will require in the following, e.g., in Equations (5), (7), (10), (17), (21) and (23), that all functions involved are test functions, i.e., that they are in C^∞. This is needed in order to roll off the multiplication of functions with generalized functions to the multiplication of functions with test functions via the definition ⟨αf, φ⟩ := ⟨f, αφ⟩, such that αφ ∈ S is secured for all φ ∈ S. The property of being infinitely differentiable is a purely technical requirement in distribution theory that does not restrict the theorem we prove, for two reasons: (1) in distribution theory, every locally integrable function (considered as a generalized function) is already infinitely differentiable and (2) distribution theory is connected to its applications via several approximation theorems (see e.g., [27,30]). For example, every continuous function, but also every generalized function, can be approximated by test functions, i.e., by functions in the classical sense that are infinitely differentiable. We therefore always operate (as usual) on either spaces of generalized functions or spaces of test functions.

3.3. Generality

The theorem we prove corresponds to the Fourier transform in S′, the widest possible comprehension of the Fourier transform as an automorphism (according to Laurent Schwartz’ distribution theory [20,21]), and is hence most general in this sense. It comprises, moreover, several classical definitions of the Fourier transform. We note, for example, that there is no need for a special Fourier transform 𝓕_P designed for periodic functions. Also constants, which are in particular periodic functions, can be Fourier transformed without difficulties. Finally, all results presented in this paper can be used in purely symbolic calculations once they have been proven (in S′). Our theorem shows that discretization and periodization are intertwined by the Fourier transform, symbolically 𝓕(⊥⊥⊥ ·) = ΔΔΔ(𝓕 ·) and 𝓕(ΔΔΔ ·) = ⊥⊥⊥(𝓕 ·), where · is an operator argument placeholder.

4. Definitions

We often find the Shah symbol III in the literature (Bracewell [28], Feichtinger [31] and lecture notes, e.g., [32]) as an abbreviation for the Dirac impulse train Equation (4). It is convenient to have only one symbol because the Dirac impulse train can be used to both discretize and periodize functions: multiplication with it yields discrete functions, convolution with it yields periodic functions [28]. Furthermore, having only one symbol avoids introducing a theory of operations on functions, which is advantageous for brevity in engineering books. However, the interaction between discretization and periodization cannot be expressed via one symbol.

4.1. Discretization

A major difficulty in S′ is that the product of two tempered distributions is generally not defined. We will therefore use Laurent Schwartz’ [21] space of multiplication operators O_M in the following definition.

Definition 1 (Discretization). For slowly increasing functions α ∈ O_M ⊂ S′ and increments T ∈ ℝ₊ⁿ we claim that the tempered distribution defined by

$$⊥⊥⊥_T\, \alpha := \sum_{k \in \mathbb{Z}^n} \alpha(kT)\, \delta_{kT}$$

converges in S′ (Figure 1). The operation ⊥⊥⊥_T is called uniform discretization (or sampling) with increments T. We call ⊥⊥⊥_T α the discrete (sampled) function of α, discretized (sampled) with increments T. Discretization (sampling) is a linear continuous operator ⊥⊥⊥_T : O_M → S′, α ↦ ⊥⊥⊥_T α. We abbreviate ⊥⊥⊥α if α is discretized (sampled) with increments T_ν = 1 in all dimensions 1 ≤ ν ≤ n and identify ⊥⊥⊥_T 1 ≡ ⊥⊥⊥_T.

Discretization is well-defined in this sense and it is clear that these series indeed converge in S′, see [26] for example. This definition corresponds in fact to the multiplication of the function α with a Dirac comb III_T ∈ S′. It is already common engineering practice today to model the sampling process by multiplication with the Dirac comb [33]. However, an operator approach is much more advantageous, as we will see later. It can already be found as comb_T(α) in [1], later re-published in [34].

4.2. Periodization

Analogous to discretization, where the space of multiplication operators O_M in S is required, we now need the space of convolution operators O′_C in S′, again introduced by Laurent Schwartz in [21].

Definition 2 (Periodization). For rapidly decreasing tempered distributions f ∈ O′_C ⊂ S′ and increments T ∈ ℝ₊ⁿ we claim that the tempered distribution defined by

$$ΔΔΔ_T\, f := \sum_{k \in \mathbb{Z}^n} \tau_{kT}\, f$$

converges in S′ (Figure 2). The operation ΔΔΔ_T is called periodization (replicating) or periodic continuation of f with periods T. Periodization (replicating) is a linear continuous operator ΔΔΔ_T : O′_C → S′, f ↦ ΔΔΔ_T f. We abbreviate ΔΔΔf if f is periodically continued (replicated) with periods T_ν = 1 in all dimensions 1 ≤ ν ≤ n and identify ΔΔΔ_T δ ≡ ΔΔΔ_T.

Again, it is clear that periodization is well-defined in this sense and that these series converge in S′. In some textbooks, this operator can be found as rep_T f [1,34,35], as f_per [15], as f_T [16], as Pf [26], as f̄ [36], as Tf [37] or as fT [38]. However, ΔΔΔ is similar to ⊥⊥⊥ and can hence be used for symbolic calculations, as we will see later. In particular, by using ⊥⊥⊥ and ΔΔΔ, Poisson Summation Formulas reduce to something that is much easier to understand.
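On ordinary, rapidly decaying functions, the two operators of Definitions 1 and 2 have simple numerical counterparts: ⊥⊥⊥_T keeps the samples α(kT) (as weights of Dirac impulses at kT) and ΔΔΔ_T sums shifted copies f(t − kT). A sketch in NumPy (function names are ours; the sums are truncated to finitely many k, which is harmless for a Gaussian):

```python
import numpy as np

def discretize(alpha, T, kmax=50):
    """Sampling: return impulse positions kT and weights alpha(kT), |k| <= kmax."""
    k = np.arange(-kmax, kmax + 1)
    return k * T, alpha(k * T)

def periodize(f, T, t, kmax=50):
    """Periodization: truncated sum of shifted copies f(t - kT)."""
    return sum(f(t - k * T) for k in range(-kmax, kmax + 1))

gauss = lambda t: np.exp(-np.pi * t**2)

pos, w = discretize(gauss, 0.5)           # samples of the Gaussian at k/2
t = np.linspace(0.0, 1.0, 5)
p = periodize(gauss, 1.0, t)              # 1-periodic replication of the Gaussian
assert np.allclose(p[0], p[-1])           # periodized function has period T = 1
```

The discrete function is represented here by its impulse weights only; the underlying object ⊥⊥⊥_T α is of course a distribution, not a sequence.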

5. Motivation

The interaction of discretization and periodization is ubiquitous in many calculations found in today’s textbooks. In order to reveal these relations, in this section we use Equations (5) and (6) and, for the sake of simplicity, neglect considerations in the tempered distribution sense. We give three examples: first, we show that Fourier series result from Fourier transforming discretized functions. Secondly, we develop periodized functions into Fourier series, seeing that this is related to discretization. And, thirdly, we apply the definitions of Section 4 to known versions of Poisson’s summation formula found in the literature.

5.1. Fourier Series and Fourier Transform

Using our definition of discretization we verify that Fourier series actually correspond to a concatenation of discretization followed by Fourier transformation. Hence, having ⊥⊥⊥ and 𝓕, the definition of 𝓕_s as given in Equation (3) is obsolete; we may now write 𝓕 ∘ ⊥⊥⊥ instead of introducing Equation (3).

Lemma 1 (Fourier Series vs. Fourier Transform). Let α ∈ O_M, then

$$\mathcal{F}_s\, \alpha = \mathcal{F}(⊥⊥⊥\, \alpha) \quad \text{in } \mathcal{S}'.$$

Proof. We may apply ⊥⊥⊥ to α ∈ O_M and, accordingly, ⊥⊥⊥α converges in S′ such that 𝓕(⊥⊥⊥α) ∈ S′. Due to the convergence of ⊥⊥⊥α in S′ we may apply 𝓕 term by term. Symbolically, we obtain

$$\mathcal{F}(⊥⊥⊥\,\alpha) = \mathcal{F}\Big(\sum_{k \in \mathbb{Z}^n} \alpha(k)\, \delta_k\Big) = \sum_{k \in \mathbb{Z}^n} \alpha(k)\, \mathcal{F}(\tau_k \delta) = \sum_{k \in \mathbb{Z}^n} \alpha(k)\, e^{-2\pi i\, k \cdot \sigma}\, \mathcal{F}\delta = \sum_{k \in \mathbb{Z}^n} \alpha(k)\, e^{-2\pi i\, k \cdot \sigma} = \mathcal{F}_s\, \alpha$$

with equalities in the tempered distributions sense. □

A consequence is the following equality in the operations sense O_M → S′:

$$\mathcal{F}_s = \mathcal{F} \circ ⊥⊥⊥$$

where ∘ is the concatenation of operations in S′. We will come back to this result in Equation (25).

5.2. Fourier Series of Periodized Functions

For further motivation we show that a Discretization-Periodization Theorem for Schwartz functions φ ∈ S can already be derived from the above definitions: we construct the Fourier series of a periodically continued function φ ∈ S and see that it is indeed related to discretization. Note that every periodic function f can be written as f = ΔΔΔφ for some function φ. Because φ ∈ S ⊂ O′_C, it follows that ΔΔΔφ ∈ S′ is a tempered distribution. As every periodic function can be developed into a sum of e_k(t) = e^{2πi t·k}, k ∈ ℤⁿ, we will now do this for ΔΔΔφ. Starting from

$$ΔΔΔ\,\varphi(t) = \sum_{k \in \mathbb{Z}^n} c_k\, e_k(t)$$

we finally arrive at (see [27,39,40] for example)

$$ΔΔΔ\,\varphi(t) = \sum_{k \in \mathbb{Z}^n} \langle \delta_k, \mathcal{F}\varphi \rangle\, e^{2\pi i\, t \cdot k} = \mathcal{F}^{-1}\Big(\sum_{k \in \mathbb{Z}^n} \langle \delta_k, \mathcal{F}\varphi \rangle\, \delta_k\Big).$$

Inserting the definition of discretization now yields the following important relationship between periodization and discretization, a formula closely related to Poisson’s summation formula:

$$\mathcal{F}(ΔΔΔ\,\varphi) = ⊥⊥⊥(\mathcal{F}\varphi) \quad \text{in } \mathcal{S}'$$

so far for Schwartz functions φ ∈ S. We note that 𝓕φ ∈ S ⊂ O_M, such that ⊥⊥⊥(𝓕φ) is meaningful as well, because ⊥⊥⊥ requires functions in O_M according to Definition 1.

Indeed, Equation (10) can be generalized further and an inversion formula can be found. It will, moreover, be surprising to learn at this point that both operators in Equations (5) and (6) had already been defined in Woodward’s 1953 textbook [1], leading to Equations (28) and (29), meant for functions in S. In Section 7, we will extend these results to generalized functions f ∈ O′_C ⊂ S′ and 𝓕f ∈ O_M ⊂ S′ (see also Appendix B).
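Equation (10) can be probed numerically: it says that the n-th Fourier coefficient of the 1-periodization of φ equals the sample (𝓕φ)(n). A sketch (ours, not from the paper) using a stretched Gaussian φ(t) = e^{−π(at)²}, whose transform under convention (1) is (𝓕φ)(σ) = (1/a) e^{−π(σ/a)²}:

```python
import numpy as np

# Eq. (10): F(periodize(phi)) = sample(F phi); concretely, the n-th Fourier
# coefficient of the 1-periodized phi equals (F phi)(n).
a = 0.8
phi = lambda t: np.exp(-np.pi * (a * t) ** 2)
phihat = lambda s: np.exp(-np.pi * (s / a) ** 2) / a   # F(phi), convention (1)

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
per = sum(phi(t - k) for k in range(-60, 61))          # 1-periodized phi on [0,1)
for n in (0, 1, 3):
    c_n = np.mean(per * np.exp(-2j * np.pi * n * t))   # Fourier coefficient of per
    assert abs(c_n - phihat(n)) < 1e-9
```

The coefficient integral is evaluated as a mean over a uniform grid, which is extremely accurate here because the periodized Gaussian is smooth.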

5.3. Versions of Poisson Summation Formulas

In general, each Poisson Summation Formula is only one of two possible statements: via the Fourier transform, it either turns discretization into periodization, resulting in Equations (23) and (28), or it turns periodization into discretization, resulting in Equations (24) and (29). Special cases lead to Equations (16) and (27). This can be seen by applying Equations (5) and (6) to PSF versions found in the literature. In the following we distinguish a general version, a classical version, a symmetrical version and, finally, a special version.

For simplicity, let φ ∈ S and T ∈ ℝ₊ (one-dimensional case) for now. Recall that PSF versions are valid for φ ∈ S, see e.g., [3,9]. In Section 7, we will extend these results to generalized functions f ∈ O′_C ⊂ S′ and 𝓕f ∈ O_M ⊂ S′ (see also Appendix B). We note that Equation (11) corresponds to Equations (24) and (29) in Section 7.

A general version of Poisson’s summation formula can be found, for example, as

$$T \sum_{k} \varphi(t + kT) = \sum_{k} (\mathcal{F}\varphi)(k/T)\, e^{2\pi i\, t\, k/T}$$

in Benedetto and Zimmermann [3] but also in [16,37,38,41–43]. Inserting Equations (5) and (6) and using Equation (9), this formula yields (29). A variation of (11) emerges if T = 1, e.g., in Gröchenig [39].

Now, let t ≡ 0; then Equation (11) reduces to the classical version of Poisson’s summation formula

$$T \sum_{k} \varphi(kT) = \sum_{k} (\mathcal{F}\varphi)(k/T)$$

found in [13,43–45] for example.

Now, let T ≡ 1; then Equation (12) reduces to the symmetrical version of Poisson’s summation formula

$$\sum_{k} \varphi(k) = \sum_{k} \hat{\varphi}(k),$$

where φ̂ := 𝓕φ, found e.g., in Feichtinger and Strohmer [9], and also in [37,46–48]. We note that the latter two Equations (12) and (13), especially the last one, look most elegant, and one might guess that there is some loss of information from top to bottom. But, surprisingly, the deduction can also be done from bottom to top, first via the argument substitution x → xT, i.e., using the Fourier transform rule 𝓕(φ(xT)) = (1/T)(𝓕φ)(x/T), and then via the argument substitution x → t + x, i.e., using the Fourier transform rule 𝓕(φ(t + x)) = (𝓕φ)(x) e^{2πi t·x}. Hence, Equations (11), (12) and (13) can be transferred into one another for φ ∈ S.
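The symmetrical version (13) is easy to check numerically for a concrete Schwartz function. A sketch (ours) with the stretched Gaussian φ(t) = e^{−π(at)²}, whose transform under convention (1) is φ̂(σ) = (1/a) e^{−π(σ/a)²}:

```python
import numpy as np

# Symmetric PSF (13): sum_k phi(k) = sum_k phihat(k), checked for a stretched
# Gaussian where both sides are known in closed form. Truncating the sums at
# |k| = 100 is harmless because both Gaussians decay extremely fast.
a = 0.8
phi = lambda t: np.exp(-np.pi * (a * t) ** 2)
phihat = lambda s: np.exp(-np.pi * (s / a) ** 2) / a   # F(phi), convention (1)

k = np.arange(-100, 101)
lhs = np.sum(phi(k))       # sum of samples of phi
rhs = np.sum(phihat(k))    # sum of samples of its spectrum
assert abs(lhs - rhs) < 1e-12
```

Both sides agree to machine precision, exactly as (13) predicts; note that neither side is 1, so the identity is not a mere normalization artifact.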

Now, let us assume that instead of φ ∈ S ⊂ S′ we have some suitable f ∈ S′, say f ≡ δ; then Equation (11) is valid in S′ (known result) and reduces to the special version of Poisson’s summation formula

$$T \sum_{k} \delta_{kT} = \sum_{k} (\mathcal{F}\delta)(k/T)\, e^{2\pi i\, t\, k/T} \;\Longleftrightarrow\; \mathcal{F}(ΔΔΔ_T\, \delta) = \frac{1}{T}\, ⊥⊥⊥_{\frac{1}{T}}\, 1 = \frac{1}{T}\, ΔΔΔ_{\frac{1}{T}}\, \delta,$$

also found very often in the literature, for example in [16,26,39,49], stating that the Fourier transform of a Dirac comb is again a Dirac comb. Thus, Equation (11) with f ≡ δ yields (27). Additionally setting T = 1 yields (16), found for example in [3,16,50,51]. Note that in Equation (14) we already use the identity ⊥⊥⊥_T 1 ≡ ΔΔΔ_T δ. Such rules will be treated in the next section.

Finally, looking at the bottom-to-top derivation above, we note that there is a certain degree of freedom as to which side the Fourier transform is moved. This allows us, in contrast to what is usually done, to periodize 𝓕f instead of f, resulting in the other two Equations (23) and (28), which we will prove in Section 7. These formulas are, in contrast to Equations (24) and (29), very difficult to find in the literature. However, [52] is one of the rare sources where, besides the classical Poisson Summation Formula, Equation (37.1) in [52], a dual formula, Equation (37.2) in [52], can be found instead of only one; Equation (37.2) in [52] corresponds to Equations (23) and (28), and Equation (37.1) in [52] corresponds to Equations (24) and (29).

6. Calculation Rules

In order to present a concise proof of Theorem 1, we need to compile a few important operator properties. We note that the Fourier transform of a Dirac comb within S′ is commonly known (e.g., [16,39]), whereas the (asymmetric) duality of multiplication and convolution within S′ is a more general case of the commonly known (symmetric) versions (e.g., in S); it can be found in [21].

Dirac Comb Identity. A first, maybe surprising observation from the two definitions above is that both operations generate a Dirac impulse train, either by sampling the function that is constantly 1 or by periodically repeating δ; symbolically,

$$⊥⊥⊥_T\, 1 = \sum_{k \in \mathbb{Z}^n} \delta_{kT} = ΔΔΔ_T\, \delta.$$

Moreover, 1 and δ are the unit elements with respect to multiplication and convolution in S′, respectively. They are also a Fourier pair, more precisely 𝓕1 = δ and 𝓕δ = 1. Also note that the Shah symbol III in Equation (4) is called the sampling or replicating symbol in [28] because it can be used for either sampling (discretizing) or replicating (periodizing) a function. However, instead of III we now have either ⊥⊥⊥1 or, alternatively, ΔΔΔδ, which allows us to switch from discretization to periodization or from periodization to discretization (needed in the proof of Theorem 1).

Dirac Comb Fourier Transform. We also need that 𝓕(⊥⊥⊥1) = ⊥⊥⊥1 and, accordingly,

$$\mathcal{F}(ΔΔΔ\,\delta) = ΔΔΔ\,\delta \quad \text{in } \mathcal{S}'.$$

Omitting Brackets. A very convenient property of both operators is that brackets can be omitted. For reasons of brevity, we move this proof to Appendix A.

Lemma 2. Let α, β ∈ O_M and g, f ∈ O′_C. Then

$$\alpha\,(⊥⊥⊥_T\, \beta) = ⊥⊥⊥_T(\alpha\, \beta) = (⊥⊥⊥_T\, \alpha)\, \beta \quad and$$
$$g * (ΔΔΔ_T\, f) = ΔΔΔ_T(g * f) = (ΔΔΔ_T\, g) * f \quad \text{in } \mathcal{S}'.$$

In particular, with ⊥⊥⊥1 = III = ΔΔΔδ and the identifications ⊥⊥⊥1 ≡ ⊥⊥⊥ and ΔΔΔδ ≡ ΔΔΔ, Equations (17) and (18) yield the relationships published in Bracewell [28]

$$⊥⊥⊥\,\alpha = (⊥⊥⊥\, 1)\, \alpha = \mathrm{III}\, \alpha$$
$$ΔΔΔ\, f = (ΔΔΔ\,\delta) * f = \mathrm{III} * f.$$

This means that discretization is realized via multiplication with III and periodization via convolution with III. Note that replacing III by the operations ⊥⊥⊥ and ΔΔΔ is the central proposal of this paper.

Multiplication and Convolution. Recall that multiplication and convolution products do not necessarily exist in S′. We must rather restrict the operator arguments to suitable subspaces of S′ (see the operator definitions), i.e., to O_M and O′_C, respectively.

Lemma 3 (Multiplication and Convolution). Let α ∈ O_M, g ∈ O′_C and f ∈ S′. Then

$$\mathcal{F}(\alpha\, f) = \mathcal{F}\alpha * \mathcal{F}f \quad and$$
$$\mathcal{F}(g * f) = \mathcal{F}g \cdot \mathcal{F}f \quad \text{in } \mathcal{S}'.$$

A detailed treatment of multiplication and convolution in S′, including a proof of this lemma, is given in [21], Théorème XV, p. 124. Notice that O_M can rarely be found in the literature ([30,53–56]) while O′_C is better known. A description of the interactions of O_M with O′_C is, aside from [21], even more seldom found in the literature; see e.g., [57,58], describing the Fourier transform as an algebra isomorphism 𝓕 : (O′_C, ∗) → (O_M, ·).

7. The Discretization-Periodization Theorem

For the sake of simplicity, we first consider only increments of Tν = 1 in all dimensions 1 ⩽ νn and then generalize these results in a corollary below.

7.1. Unitary Increments

Everything is now prepared to present a discretization-periodization theorem in S′. It consists of two formulas very closely related to Poisson’s Summation Formula, starting from discretization and from periodization, respectively. Compared to classical versions of Poisson’s summation formula, we simply hide one sum in ⊥⊥⊥ and the other in ΔΔΔ, such that the formulas become easier to understand.

Theorem 1 (Discretization vs. Periodization, unitary case). Let α ∈ O_M and f ∈ O′_C. Then

$$\mathcal{F}(⊥⊥⊥\,\alpha) = ΔΔΔ(\mathcal{F}\alpha) \quad and$$
$$\mathcal{F}(ΔΔΔ\, f) = ⊥⊥⊥(\mathcal{F}f) \quad \text{in } \mathcal{S}'.$$

Note that, in contrast to α ∈ O_M and f ∈ O′_C, both formulas hold for Schwartz functions φ ∈ S understood as rapidly decreasing tempered distributions, because φ ∈ S ⊂ O_M ⊂ S′ on the one hand, and φ ∈ S ⊂ 𝓕(O_M) = O′_C ⊂ S′ on the other. Hence, the theorem proves Equation (10) in particular (see Appendix B).

Furthermore, recall that 𝓕(⊥⊥⊥α) and ΔΔΔ(𝓕α) are the Fourier series 𝓕_s of α according to Equation (7); in other words, the first equation is the Fourier transform of discrete functions. The second equation corresponds to the Fourier transform of periodic functions 𝓕_P (Figure 3). See one-page summaries, Figure 3.31 in [12] or Prolog II in [16] for example, embracing all four different Fourier transforms and Fourier series including their (either integral or discrete) inversion formulas.

Proof. Formally, according to the calculation rules shown above, the following equalities hold

$$\mathcal{F}(ΔΔΔ\,f) = \mathcal{F}(ΔΔΔ(\delta * f)) = \mathcal{F}((ΔΔΔ\,\delta) * f) = \mathcal{F}(ΔΔΔ\,\delta) \cdot \mathcal{F}f = (ΔΔΔ\,\delta) \cdot \mathcal{F}f = (⊥⊥⊥\, 1) \cdot \mathcal{F}f = ⊥⊥⊥(1 \cdot \mathcal{F}f) = ⊥⊥⊥(\mathcal{F}f)$$

in S′. We start using f = δ ∗ f because δ ∈ O′_C is the unit element with respect to convolution in O′_C. Then we apply Equations (18), (22), (16), (15) and (17), in this order. Finally, letting α := 𝓕f ∈ O_M and using α = 1 · α, because 1 ∈ O_M is the unit element with respect to multiplication in O_M, the other formula follows in an analogous manner. □

To demonstrate the usefulness of the theorem, we use Equations (23), (24) and (15) to prove Equation (16) by

$$\mathcal{F}(ΔΔΔ\,\delta) = ⊥⊥⊥(\mathcal{F}\delta) = ⊥⊥⊥\, 1 = ΔΔΔ\,\delta \quad and \quad \mathcal{F}(⊥⊥⊥\, 1) = ΔΔΔ(\mathcal{F}1) = ΔΔΔ\,\delta = ⊥⊥⊥\, 1 \quad \text{in } \mathcal{S}'$$

and, identifying ⊥⊥⊥1 ≡ ⊥⊥⊥ and ΔΔΔδ ≡ ΔΔΔ, we obtain 𝓕(ΔΔΔ) = ⊥⊥⊥ and 𝓕(⊥⊥⊥) = ΔΔΔ.

Finally, we use the theorem to conclude that Equation (8) is complemented by

$$\mathcal{F} \circ ⊥⊥⊥ = \mathcal{F}_s = ΔΔΔ \circ \mathcal{F}$$

in the operations sense O_M → S′. Hence, Fourier series 𝓕_s either correspond to discretization followed by the Fourier transform or, equivalently, to the Fourier transform followed by periodization. Accordingly, for the second Poisson Summation Formula (24) we obtain

$$\mathcal{F} \circ ΔΔΔ = \mathcal{F}_P = ⊥⊥⊥ \circ \mathcal{F}$$

in the operations sense O′_C → S′.
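The operator identity in Equation (25) can be probed pointwise: for a Schwartz function α, the Fourier series Σ_k α(k) e^{−2πikσ} (left path, 𝓕 after ⊥⊥⊥) must equal the periodized spectrum Σ_k (𝓕α)(σ − k) (right path, ΔΔΔ after 𝓕). A numerical sketch (ours), again with a stretched Gaussian:

```python
import numpy as np

# Eq. (25): F(sampling(alpha)) = periodize(F alpha), evaluated at several
# frequencies sigma; both infinite sums are truncated at |k| = 100.
a = 0.8
alpha = lambda t: np.exp(-np.pi * (a * t) ** 2)
alphahat = lambda s: np.exp(-np.pi * (s / a) ** 2) / a  # F(alpha), convention (1)

k = np.arange(-100, 101)
for sigma in (0.0, 0.3, 0.77):
    left = np.sum(alpha(k) * np.exp(-2j * np.pi * k * sigma))  # Fourier series
    right = np.sum(alphahat(sigma - k))                        # periodized spectrum
    assert abs(left - right) < 1e-12
```

The left path never touches 𝓕α and the right path never touches the samples of α, yet both produce the same 1-periodic function.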

7.2. Arbitrary Increments

In our Equations (1), (3) and (4) as well as in Equations (16), (23) and (24) we used periodic functions with periods of Tν = 1 in all dimensions 1 ⩽ νn, which is convenient but not the general case. The following lemma helps to generalize our results to arbitrary periods and increments.

Lemma 4 (Reciprocity). The Fourier transform of a stretched/compressed Dirac impulse train is a compressed/stretched Dirac impulse train. Formally, let T = [T₁, T₂, …, Tₙ] ∈ ℝ₊ⁿ; then

$$\mathcal{F}(ΔΔΔ_T\, \delta) = \frac{1}{T_p}\, ΔΔΔ_{\frac{1}{T}}\, \delta \quad \text{in } \mathcal{S}'$$

with 1/T := [1/T₁, 1/T₂, …, 1/Tₙ] ∈ ℝ₊ⁿ and T_p := T₁T₂⋯Tₙ ∈ ℝ₊.

A proof of this lemma is straightforward from what is known in the literature (see for example [16,32,40,49]). We omit it here for the sake of conciseness. Let us now state the theorem for the arbitrary increments case.

Corollary 1 (Discretization vs. Periodization, general case). Let α ∈ O_M and f ∈ O′_C and let T = [T1, T2, …, Tn] ∈ ℝ+ⁿ and 1/T := [1/T1, 1/T2, …, 1/Tn] ∈ ℝ+ⁿ, then

ℱ(⊥⊥⊥_{1/T} α) = Tp △△△_T(ℱα) and
ℱ(△△△_{1/T} f) = Tp ⊥⊥⊥_T(ℱf) in S′,

where Tp := T1 T2 ⋯ Tn ∈ ℝ+.

This is instantly verified by replacing Equation (16) with Equation (27) in the above proof.
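A one-dimensional numerical check of the arbitrary-increment case (my own illustration, not the paper's): sampling with increment T corresponds, after Fourier transform, to periodization with period 1/T weighted by 1/T. For the self-dual Gaussian f(t) = exp(−πt²) this reads Σ_k f(kT) = (1/T) Σ_k f(k/T).

```python
import math

# Poisson Summation Formula with arbitrary increment T:
#   sum_k f(k*T) = (1/T) * sum_k Ff(k/T),  here Ff = f (self-dual Gaussian).

def f(t):
    return math.exp(-math.pi * t * t)

T = 0.7
K = 50  # truncation; both tails are doubly exponentially small
lhs = sum(f(k * T) for k in range(-K, K + 1))
rhs = sum(f(k / T) for k in range(-K, K + 1)) / T
assert abs(lhs - rhs) < 1e-12
```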

8. Conclusions

We have seen that, in the generalized functions sense, there is a (Fourier transform) duality between discrete and periodic functions. The theorem states that the generalized function being discretized must not grow faster than polynomially and the generalized function being periodized must decay rapidly to zero in order to obtain convergence on both sides of the equations in the generalized functions sense. Furthermore, it is shown that both formulas in this theorem are closely related to the classical Poisson Summation Formula. This means that these conditions have an impact on the validity of Poisson Summation Formula versions. We have, furthermore, seen that one formula in this theorem corresponds to Fourier series and the other to the Fourier transform of periodic functions. Finally, by choosing two similar symbols for discretization and periodization, we obtained an easy-to-use symbolic calculation scheme.

9. Outlook

The theorem above is indisputably connected to the classical sampling theorem, see, e.g., Woodward [1] or Benedetto and Zimmermann [3]. An all-embracing theory including the Sampling Theorem will, however, require two more operations, one reversing discretization and one reversing periodization, i.e., interpolation of discrete functions and restriction of a periodic function to one period. Surprisingly, S′ has been suitable to model discretization and periodization but will not be suitable to model their inverses. Further studies are required.

MSC classifications: 46F10, 46F12

Conflicts of Interest

The author declares no conflicts of interest.


A. Proof

Proof of Lemma 2. Discretization, ⊥⊥⊥. We know that ⊥⊥⊥_T α ∈ S′ and with β ∈ O_M it follows that β · ⊥⊥⊥_T α ∈ S′ such that it can be applied to test functions φ ∈ S. Due to the convergence of ⊥⊥⊥_T α in S′ and the continuity of multiplication · : O_M × S′ → S′, (α, f) ↦ α · f, we may calculate term by term

⟨β · Σ_{k∈ℤⁿ} α(kT) δ_{kT}, φ⟩ = Σ_{k∈ℤⁿ} ⟨β · α(kT) δ_{kT}, φ⟩ = Σ_{k∈ℤⁿ} ⟨β(kT) α(kT) δ_{kT}, φ⟩ = Σ_{k∈ℤⁿ} ⟨(β α)(kT) δ_{kT}, φ⟩

for all test functions φ ∈ S, i.e., β · ⊥⊥⊥_T α = ⊥⊥⊥_T(β α) in S′. Finally, the roles of α and β can be exchanged due to the commutativity β α = α β in O_M. □

Proof of Lemma 2. Periodization, △△△. We know that △△△_T g ∈ S′ and with f ∈ O′_C it follows that f ∗ △△△_T g ∈ S′ such that it can be applied to test functions φ ∈ S. Due to the convergence of △△△_T g in S′ and the continuity of convolution ∗ : O′_C × S′ → S′, (f, g) ↦ f ∗ g, we may calculate term by term; we use Equations (A3), (A4), (A5), (A3), (A4), in this order, obtaining

⟨f ∗ △△△_T g, φ⟩ = ⟨△△△_T g, f̌ ∗ φ⟩ = ⟨g, △△△_T(f̌ ∗ φ)⟩ = ⟨g, f̌ ∗ (△△△_T φ)⟩ = ⟨f ∗ g, △△△_T φ⟩ = ⟨△△△_T(f ∗ g), φ⟩

for all test functions φ ∈ S, i.e., f ∗ △△△_T g = △△△_T(f ∗ g) in S′. Finally, the roles of f and g can be exchanged due to the commutativity f ∗ g = g ∗ f in O′_C. □

The following definitions and calculation rules are needed for the above proof.

Definition A1. Let f ∈ S′ and φ ∈ S. Then

(f ∗ φ)(t) := ⟨f, τ_t φ̌⟩

defines a convolution in S′ where (τ_t φ̌)(ξ) := φ̌(ξ − t) = φ(t − ξ).

Lemma A1. Let f ∈ O′_C and φ ∈ S. Then f ∗ φ ∈ S where

f ∗ φ = ℱ⁻¹(ℱf · ℱφ) in S.

Proof. We may write

(f ∗ φ)(t) = ⟨f, τ_t φ̌⟩ = ⟨f, ℱℱ⁻¹(τ_t φ̌)⟩ = ⟨ℱf, ℱ⁻¹(τ_t φ̌)⟩ = ⟨ℱf, e^{2πi t·σ} ℱφ⟩

according to Equation (A1), ℱℱ⁻¹φ = φ, Equation (2) and ℱ⁻¹(τ_t φ̌) = e^{2πi t·σ} ℱφ, in this order. Because ℱ(O′_C) = O_M, we know that ℱf is a regular function, hence

⟨ℱf, e^{2πi t·σ} ℱφ⟩ = ∫_{ℝⁿ} (ℱf)(σ) (ℱφ)(σ) e^{2πi t·σ} dⁿσ = ℱ⁻¹(ℱf · ℱφ)

The last expression is a function in S because with ℱf ∈ O_M and ℱφ ∈ S it follows that ℱf · ℱφ ∈ S. Finally, ℱ⁻¹(ℱf · ℱφ) ∈ S because ℱ is an automorphism on S. □
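Lemma A1 is the convolution theorem restricted to O′_C ∗ S. Its exact discrete analogue can be tested with a naive DFT (a sketch under my own conventions, not code from the paper): circular convolution equals the inverse DFT of the product of DFTs.

```python
import cmath

# Discrete convolution theorem: circ_conv(x, y) == IDFT(DFT(x) * DFT(y)).

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def circ_conv(x, y):
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, 0.0, 0.0]
y = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]
direct = circ_conv(x, y)
via_dft = idft([X * Y for X, Y in zip(dft(x), dft(y))])
assert all(abs(a - b) < 1e-10 for a, b in zip(direct, via_dft))
```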

Definition A2. Let f ∈ O′_C, g ∈ S′ and φ ∈ S. According to Equation (A2) it follows that f̌ ∗ φ ∈ S such that

⟨f ∗ g, φ⟩ := ⟨g, f̌ ∗ φ⟩

defines a convolution in S′ where ⟨f̌, φ⟩ := ⟨f, φ̌⟩ and φ̌(t) := φ(−t).

Lemma A2. Let f ∈ O′_C and φ ∈ S. Then

⟨△△△_T f, φ⟩ = ⟨f, △△△_T φ⟩

Proof. Let us initially operate on finite sums over k ∈ ℤⁿ with |k| ⩽ N ∈ ℕ where |k| := max_{1⩽ν⩽n} |k_ν| is the maximum norm, and let

φ_N := Σ_{|k|⩽N} τ_{kT} φ and f_N := Σ_{|k|⩽N} τ_{kT} f.

It then follows that φ_N ∈ 𝓑, the space of bounded test functions [21], and f_N ∈ S′. With finite sums we start operating f ∈ O′_C on φ_N, obtaining

⟨f, φ_N⟩ = ⟨f, Σ_{|k|⩽N} τ_{kT} φ⟩ = Σ_{|k|⩽N} ⟨f, τ_{kT} φ⟩ = Σ_{|k|⩽N} ⟨τ_{−kT} f, φ⟩ = ⟨Σ_{|k|⩽N} τ_{−kT} f, φ⟩ = ⟨f_N, φ⟩

where f_N ∈ S′ operates on φ ∈ S; note that the summation range is symmetric in k. Finally, using the continuity in generalized function spaces,

lim_{N→∞} ⟨f_N, φ⟩ = lim_{N→∞} ⟨f, φ_N⟩ = ⟨f, lim_{N→∞} φ_N⟩ = ⟨f, △△△_T φ⟩.

In this last expression we apply a generalized function f ∈ O′_C to some test function △△△_T φ. We must justify that this makes sense as O′_C and 𝓑 are not dual spaces. But O′_C is continuously embedded in 𝒟′_{L¹}, which is the dual space of 𝓑, such that the convergence of ⟨f, △△△_T φ⟩ is secured for all f ∈ O′_C and φ ∈ S. And the following expressions converge due to the completeness of S′:

f_N = Σ_{|k|⩽N} τ_{kT} f → △△△_T f for N → ∞

in the sense of convergence in S′. □

Lemma A3. Let f ∈ O′_C and φ ∈ S, then

(△△△_T f) ∗ φ = △△△_T(f ∗ φ) = f ∗ (△△△_T φ) in 𝓑

Proof. With Equation (A2) we know that f ∗ φ ∈ S and because S ⊂ O′_C we may apply △△△_T to f ∗ φ and obtain

△△△_T(f ∗ φ) = Σ_{k∈ℤⁿ} τ_{kT}(f ∗ φ) = Σ_{k∈ℤⁿ} ⟨f, τ_{t−kT} φ̌⟩ = Σ_{k∈ℤⁿ} ⟨f, τ_t τ_{−kT} φ̌⟩ = ⟨f, τ_t Σ_{k∈ℤⁿ} τ_{−kT} φ̌⟩ = f ∗ (△△△_T φ) = Σ_{k∈ℤⁿ} ⟨f, τ_{−kT} τ_t φ̌⟩ = Σ_{k∈ℤⁿ} ⟨τ_{kT} f, τ_t φ̌⟩ = (△△△_T f) ∗ φ

using Equations (6) and (A1), exchanges in the sequence of translations and again Equation (A1), in this order. □
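The discrete analogue of Lemma A3 can be checked directly (my own illustration, not code from the paper): periodizing a linear convolution of finitely supported sequences equals the circular convolution of the periodized sequences.

```python
# Discrete analogue of Lemma A3: periodize(f * phi) == periodize(f) (*) periodize(phi),
# where * is linear convolution and (*) is circular convolution with period P.

def lin_conv(x, y):
    """Linear convolution of two finitely supported sequences."""
    out = [0.0] * (len(x) + len(y) - 1)
    for i, a in enumerate(x):
        for j, b in enumerate(y):
            out[i + j] += a * b
    return out

def periodize(x, P):
    """One period of the P-periodization of a finitely supported sequence."""
    out = [0.0] * P
    for n, v in enumerate(x):
        out[n % P] += v
    return out

def circ_conv(x, y):
    P = len(x)
    return [sum(x[m] * y[(n - m) % P] for m in range(P)) for n in range(P)]

P = 5
f = [1.0, -2.0, 3.0, 0.5, 0.0, 1.0, 2.0]   # support longer than one period
phi = [0.5, 1.0, -1.0, 0.25]

lhs = periodize(lin_conv(f, phi), P)                  # periodized convolution
rhs = circ_conv(periodize(f, P), periodize(phi, P))   # convolved periodizations
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```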

B. Summary

Let us now have a look at the beauty of “sortedness” of functions in S′ (Figure 4). It is this “sortedness” that led us from the constant function 1 via the Gaussian function g(t) = e^{−π t²} up to the discovery of δ.

It is interesting to observe that there is a triplet of embedded ℱ-automorphisms (S, O_M ∩ O′_C, S′) involved in this scheme, where S ⊂ (O_M ∩ O′_C) ⊂ S′.

Also note that discrete functions are being injected on the smooth side of S′ (left) whereas periodic functions are being injected on the rough side of S′ (right). But both are actually located on the same side of S′ (opposite of S) because the left-hand side and the right-hand side are identical; recall that ⊥⊥⊥ 1 ≡ △△△ δ. So, S′ reminds us of a torus.


Acknowledgments

These studies have essentially been conducted at the Ludwig Maximilians University Munich, Institute of Mathematics, in 1996. The author is particularly grateful to Professor Otto Forster who made this study possible, especially for directing the author's interests towards Laurent Schwartz' distribution theory. The author is also very grateful to Professor Hans G. Feichtinger, University of Vienna, for many valuable hints and fruitful discussions and, finally, to the anonymous reviewers who significantly helped to improve this manuscript.


References

1. Woodward, P.M. Probability and Information Theory, with Applications to Radar; Pergamon Press: Oxford, UK, 1953.
2. Lighthill, M.J. An Introduction to Fourier Analysis and Generalised Functions; Cambridge University Press: Cambridge, UK, 1958.
3. Benedetto, J.J.; Zimmermann, G. Sampling Multipliers and the Poisson Summation Formula. J. Fourier Anal. Appl. 1997, 3, 505–523.
4. Søndergaard, P.L. Gabor Frames by Sampling and Periodization. Adv. Comput. Math. 2007, 27, 355–373.
5. Orr, R.S. Derivation of the Finite Discrete Gabor Transform by Periodization and Sampling. Signal Process. 1993, 34, 85–97.
6. Janssen, A. From Continuous to Discrete Weyl-Heisenberg Frames through Sampling. J. Fourier Anal. Appl. 1997, 3, 583–596.
7. Kaiblinger, N. Approximation of the Fourier Transform and the Dual Gabor Window. J. Fourier Anal. Appl. 2005, 11, 25–42.
8. Søndergaard, P.L. Finite Discrete Gabor Analysis. Ph.D. Thesis, Institute of Mathematics, Technical University of Denmark, Copenhagen, Denmark, 2007.
9. Feichtinger, H.G.; Strohmer, T. Gabor Analysis and Algorithms: Theory and Applications; Birkhäuser: Basel, Switzerland, 1998.
10. Ozaktas, H.M.; Sumbul, U. Interpolating between Periodicity and Discreteness through the Fractional Fourier Transform. IEEE Trans. Signal Process. 2006, 54, 4233–4243.
11. Strohmer, T.; Tanner, J. Implementations of Shannon's Sampling Theorem, a Time-Frequency Approach. Sampl. Theory Signal Image Process. 2005, 4, 1–17.
12. Proakis, J.G.; Manolakis, D.G. Digital Signal Processing: Principles, Algorithms and Applications, 2nd ed.; Macmillan: Greenwich, CT, USA, 1992.
13. Heil, C.E.; Walnut, D.F. Continuous and Discrete Wavelet Transforms. SIAM Rev. 1989, 31, 628–666.
14. Daubechies, I. The Wavelet Transform, Time-Frequency Localization and Signal Analysis. IEEE Trans. Inf. Theory 1990, 36, 961–1005.
15. Daubechies, I. Ten Lectures on Wavelets; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992.
16. Benedetto, J.J. Harmonic Analysis and Applications; CRC Press: Boca Raton, FL, USA, 1996.
17. Girgensohn, R. Poisson Summation Formulas and Inversion Theorems for an Infinite Continuous Legendre Transform. J. Fourier Anal. Appl. 2005, 11, 151–173.
18. Hunter, J.K.; Nachtergaele, B. Applied Analysis; World Scientific: Singapore, 2001.
19. Strichartz, R.S. Mock Fourier Series and Transforms Associated with Certain Cantor Measures. J. Anal. Math. 2000, 81, 209–238.
20. Schwartz, L. Théorie des Distributions, Tome I; Hermann: Paris, France, 1950.
21. Schwartz, L. Théorie des Distributions, Tome II; Hermann: Paris, France, 1959.
22. Temple, G. The Theory of Generalized Functions. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1955, 228, 175–190.
23. Gelfand, I.; Schilow, G. Verallgemeinerte Funktionen (Distributionen), Teil I; VEB Deutscher Verlag der Wissenschaften: Berlin, Germany, 1969.
24. Gelfand, I.; Schilow, G. Verallgemeinerte Funktionen (Distributionen), Teil II, Zweite Auflage; VEB Deutscher Verlag der Wissenschaften: Berlin, Germany, 1969.
25. Zemanian, A. Distribution Theory and Transform Analysis—An Introduction to Generalized Functions, with Applications; McGraw-Hill: New York, NY, USA, 1965.
26. Strichartz, R.S. A Guide to Distribution Theory and Fourier Transforms, 1st ed.; CRC Press: Boca Raton, FL, USA, 1994.
27. Walter, W. Einführung in die Theorie der Distributionen; BI-Wissenschaftsverlag, Bibliographisches Institut & F.A. Brockhaus: Mannheim, Germany, 1994.
28. Bracewell, R.N. The Fourier Transform and Its Applications; McGraw-Hill: New York, NY, USA, 1965.
29. Nashed, M.Z.; Walter, G.G. General Sampling Theorems for Functions in Reproducing Kernel Hilbert Spaces. Math. Control Signals Syst. 1991, 4, 363–390.
30. Grubb, G. Distributions and Operators; Springer: Berlin, Germany, 2008.
31. Feichtinger, H.G. Atomic Characterizations of Modulation Spaces through Gabor-Type Representations. Rocky Mt. J. Math. 1989, 19, 113–125.
32. Osgood, B. The Fourier Transform and its Applications; Lecture Notes for EE 261; Electrical Engineering Department, Stanford University: Stanford, CA, USA, 2005.
33. Unser, M. Sampling-50 Years after Shannon. Proc. IEEE 2000, 88, 569–587.
34. Brandwood, D. Fourier Transforms in Radar and Signal Processing; Artech House: Norwood, MA, USA, 2003.
35. Cariolaro, G. Unified Signal Theory; Springer: Berlin, Germany, 2011.
36. Jerri, A. Integral and Discrete Transforms with Applications and Error Analysis; Marcel Dekker: New York, NY, USA, 1992.
37. Auslander, L.; Meyer, Y. A Generalized Poisson Summation Formula. Appl. Comput. Harmon. Anal. 1996, 3, 372–376.
38. Brigola, R. Fourieranalysis, Distributionen und Anwendungen; Vieweg: Wiesbaden, Germany, 1997.
39. Gröchenig, K. Foundations of Time-Frequency Analysis; Birkhäuser: Basel, Switzerland, 2001.
40. Quegan, S. Signal Processing; Lecture Notes; University of Sheffield: Sheffield, UK, 1993.
41. Gröchenig, K. An Uncertainty Principle Related to the Poisson Summation Formula. Stud. Math. 1996, 121, 87–104.
42. Papoulis, A.; Pillai, S.U. Probability, Random Variables, and Stochastic Processes; Tata McGraw-Hill: Noida, India, 1996.
43. Li, B.Z.; Tao, R.; Xu, T.Z.; Wang, Y. The Poisson Sum Formulae Associated with the Fractional Fourier Transform. Signal Process. 2009, 89, 851–856.
44. Rahman, M. Applications of Fourier Transforms to Generalized Functions; WIT Press: Ashurst, UK, 2011.
45. Jones, D. The Theory of Generalized Functions; Cambridge University Press: Cambridge, UK, 1982.
46. Kahane, J.P.; Lemarié-Rieusset, P.G. Remarques sur la formule sommatoire de Poisson. Stud. Math. 1994, 109, 303–316.
47. Balazard, M.; France, M.M.; Sebbar, A. Variations on a Theme of Hardy's. Ramanujan J. 2005, 9, 203–213.
48. Lev, N.; Olevskii, A. Quasicrystals and Poisson's Summation Formula. Invent. Math. 2013.
49. Mallat, S. A Wavelet Tour of Signal Processing; Academic Press: Waltham, MA, USA, 1999.
50. Feichtinger, H.G. Elements of Postmodern Harmonic Analysis. In Operator-Related Function Theory and Time-Frequency Analysis; Springer: Berlin, Germany, 2015; pp. 77–105.
51. Lev, N.; Olevskii, A. Measures with Uniformly Discrete Support and Spectrum. Compt. Rendus Math. 2013, 351, 599–603.
52. Gasquet, C.; Witomski, P. Fourier Analysis and Applications: Filtering, Numerical Computation, Wavelets; Springer: Berlin, Germany, 1999.
53. Edwards, R. On Factor Functions. Pac. J. Math. 1955, 5, 367–378.
54. Betancor, J.J.; Marrero, I. Multipliers of Hankel Transformable Generalized Functions. Comment. Math. Univ. Carolin. 1992, 33, 389–401.
55. Bonet, J.; Frerick, L.; Jordá, E. The Division Problem for Tempered Distributions of One Variable. J. Funct. Anal. 2012, 262, 2349–2358.
56. Morel, J.M.; Ladjal, S. Notes sur l'analyse de Fourier et la théorie de Shannon en traitement d'images. J. X-UPS 1998, 1998, 37–100.
57. Gracia-Bondía, J.M.; Varilly, J.C. Algebras of Distributions Suitable for Phase-Space Quantum Mechanics. I. J. Math. Phys. 1988, 29, 869–879.
58. Dubois-Violette, M.; Kriegl, A.; Maeda, Y.; Michor, P.W. Smooth *-Algebras. Prog. Theor. Phys. Suppl. 2001, 144, 54–78.
Figure 1. Discretization of regular function f yielding generalized function ⊥⊥⊥T f.
Figure 2. Periodization of generalized function f yielding generalized function △△△T f.
Figure 3. The Discretization-Periodization Theorem.
Figure 4. The space of tempered distributions and important subspaces.