On the Reversibility of Discretization

Abstract: “Discretization” usually denotes the operation of mapping continuous functions to infinite or finite sequences of discrete values. It may also mean mapping the operation itself from one that operates on functions to one that operates on infinite or finite sequences. Advantageously, these two meanings coincide within the theory of generalized functions. Discretization, moreover, reduces to a simple multiplication. It is known, however, that multiplications may fail. In our previous studies, we determined conditions such that multiplications hold in the tempered distributions sense and, hence, corresponding discretizations exist. In this study, we determine, vice versa, conditions such that discretizations can be reversed, i.e., functions can be fully restored from their samples. The classical Whittaker–Kotel’nikov–Shannon (WKS) sampling theorem is just one particular case within one of four interwoven symbolic calculation rules deduced below.


Scale of Observation
Once, Mandelbrot asked the question "How long is the coast of Britain?" and the answer is infinite, unless the "scale" T > 0, the smallest measuring unit, is specified [110]. Equivalently, the multiplication product between two Dirac delta functions remains undetermined (Figure 1), unless their relative "position uncertainty" T > 0 is known [85]. In any measurement, some uncertainty T > 0 is present (Figure 2, left) because measuring devices with infinite bandwidth do not exist. A key finding in renormalization theory [111] is the fact that measurement results depend on the "scale" of observation [112]. The "scale" is moreover well known in optical and radar imaging, in terms of the chosen "frequency band" [113], "resolution" [114] and "pixel size" [115,116], denoted T (Figure 2). The "scale" is also known in wavelet analysis [54,55,57,117]. Each "pixel", often denoted using a Dirac delta [50,57,77,79,118–123], does not only have a (1) position and a (2) value but also a (3) width, given by T = 1/B as a consequence of the finite bandwidth B of the measuring device used (Appendix C). One may recall that B may be large but cannot be infinite. Hence, for every sample (Dirac delta), there is a corresponding scale T > 0 greater than zero (Figure 2, left). Vice versa, choosing T = 0 in ⊥⊥⊥_T, "discretization" ceases to exist (see Definition 1). The pixel size T > 0 is, so to say, a hidden property of any Dirac delta. It is its smoothness component, the "scale" at which we are able to "see" a tiny surface interacting with matter. In radar remote sensing, for example, we are able to "see" clouds at short (<3 cm) wavelengths and we are able to "see" (the ground) through clouds at longer (>3 cm) wavelengths [113,124]. So, the "things" we "see" depend on the scale of observation.
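The relation T = 1/B between scale and bandwidth can be illustrated numerically. The following is a minimal sketch (our own illustration, not taken from the cited references): two sinusoids whose frequencies differ by exactly B = 1/T produce identical samples at scale T, so at that scale they cannot be distinguished.

```python
import numpy as np

# Two sinusoids whose frequencies differ by exactly B = 1/T produce
# identical samples at scale T: below the scale T they cannot be
# distinguished. All numbers are arbitrary illustrative choices.
T = 0.1                                  # sampling scale ("pixel size")
B = 1.0 / T                              # corresponding bandwidth
t = np.arange(0, 1, T)                   # sample instants k*T

f1 = np.cos(2 * np.pi * 3.0 * t)         # 3 Hz
f2 = np.cos(2 * np.pi * (3.0 + B) * t)   # 13 Hz = 3 Hz shifted by B

assert np.allclose(f1, f2)               # indistinguishable at scale T
```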

Discretization versus Regularization
In the history of quantum mechanics, two opposite positions can be found: Dirac, on one side, with his δ functions as the representatives of discreteness, i.e., infinitely precise (position or momentum) measurements, and Pauli [111,125,126], on the other, who regularized δ functions in order to obtain smoothness, i.e., to get rid of improper functions and the infinities they introduce. The intention of Pauli was to introduce mathematical rigor into "renormalization" theory [111], initiated by Heisenberg and developed by Schwinger, Feynman and others [36]. His idea was to smear out "sharp" results (Figure 1) and in this way to regain regular (smooth) functions, i.e., functions which possess a "norm", in contrast to improper functions. However, both strategies, Dirac's and Pauli's, are correct from a strict mathematical point of view (distribution theory). They are, in fact, "dual" to one another, i.e., one strategy cannot exist without the other. It is expressed in the reciprocal relationships (Theorem 3) between discretization ⊥⊥⊥ and two kinds of regularization, ˆ and ∩, at a respective scale T (see an example in Figure A6), where • is the concatenation of operations and id is the identity operation. The reason why ˆ_T can be replaced by ∩_T and vice versa is Lemma 5 in [85]. Regularization [11,13–15,17,30,35,49] is well established today, used for example in [47,69,91,111,125–130]. It is the inverse operation of discretization ([14], p. 132 and [15], p. 401). Localization is another most important tool, e.g., in [13,65,131–136]. It is the Fourier dual of regularization [83,136]. A particular case of localization is truncation [137–141], and a link between discretization, truncation and "Tikhonov regularization" is studied in [128,129]. In generalized functions theory, we replace the "Picard condition" by f ∈ O_C′ (see Tables 1–3, [84]). It ensures that f falls to zero with exponential decay, cf. [133].
Equivalent is the condition F f ∈ O_M, see Lemma 1. It ensures that F f is an ordinary, infinitely differentiable function [41].

Regularization Methods
Altogether, there are four regularization methods, and in [85] we showed that all four are exactly those which reverse "discretizations" in both the time and frequency domain. They correspond to four kinds of "truncation": sharp and smooth truncation in both the time and frequency domain. Their main application is a back-and-forth "sampling theorem" on generalized functions (Table A3, Rules 25–28). However, amongst other things, it also tells us how to regularize generalized functions in both the time and frequency domain (simultaneously). The latter is a core demand in renormalization theory [13]. It is known that, once the "distribution multiplication problem" is solved, the central problem in renormalization theory is solved as well.

Notation
All functions and generalized functions in generalized functions theory (distribution theory) are smooth (infinitely differentiable), either in the ordinary or in the generalized functions sense. In this study, we follow the standard notation in distribution theory and continue [85], where we already introduced the topic in much greater detail. Amongst others, we explained how to deal with generalized functions and how to extend the one-dimensional case t, T ∈ R to the n-dimensional case t, T ∈ R^n. See [142] for a treatment in n dimensions. An example for n = 2 is given in Appendix C. In this section, we merely summarize the most important facts and notations.

Fourier Transform
We denote all spaces of ordinary and generalized functions as in the standard literature, e.g., [1,14,15,27,30]. The "unitary ordinary frequency" or "normalized" Fourier transform of integrable functions is f̂(σ) := ∫ f(t) e^(−2πi t·σ) dt, where t · σ is the usual inner product in R^n, and for generalized functions f ∈ S′ it is ⟨F f, ϕ⟩ := ⟨f, F ϕ⟩ for ϕ ∈ S, where ⟨·, ·⟩ is the application of f to ϕ [85]. We denote the Fourier transform of f as F(f) or F f or simply f̂. The space of tempered distributions S′ includes, in particular, the Schwartz space S, the subspace of functions which are well localized in both the time and frequency domain.
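The normalized transform can be checked numerically; under this convention the Gaussian e^(−πt²) is well known to be its own Fourier transform. A minimal sketch (the helper name `ft` is our own illustrative choice):

```python
import numpy as np

def ft(f, sigma, tmax=10.0, n=20001):
    # Riemann-sum approximation of the normalized ("ordinary frequency")
    # Fourier transform  (F f)(sigma) = integral f(t) exp(-2 pi i t sigma) dt.
    t = np.linspace(-tmax, tmax, n)
    dt = t[1] - t[0]
    return np.sum(f(t) * np.exp(-2j * np.pi * t * sigma)) * dt

gauss = lambda t: np.exp(-np.pi * t**2)   # its own Fourier transform

for s in [0.0, 0.5, 1.0]:
    assert abs(ft(gauss, s) - gauss(s)) < 1e-6
```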

Sinc and Rect Functions
The sinc function is defined to be 1 at t = 0 and sinc(t) := sin(πt)/(πt) elsewhere, and rect := F(sinc) is its Fourier transform. It equals 1 within the interval ]−1/2, +1/2[, takes the value 1/2 at t = −1/2 and t = +1/2, and is zero elsewhere. We prefer using rect instead of the characteristic function χ of an interval, which cannot take on values other than 0 or 1. This "mid-point property at jumps", cf. [43], p. 52, is a property of the Fourier transform [9,45]. It implies, for example, Rules 19 and 20 in [84]. The sinc function belongs to O_M, the subspace of all ordinary smooth functions in S′, and rect belongs to O_C′, the subspace of rapidly decreasing generalized functions in S′. Conversely, rect does not belong to O_M because it is not smooth (infinitely differentiable), and sinc does not belong to O_C′ because its decrease is too slow (polynomial instead of exponential), cf. Remark 2 in [84]. Additionally, rect belongs to E′, the subspace of compactly supported "time-limited" tempered distributions in S′, and sinc belongs to PW, the subspace of Paley–Wiener "band-limited" ordinary functions in S′.
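The rect–sinc pair can be verified numerically from the integral definition; this is only an illustrative check of ours, not part of the cited construction. Note that numpy's `np.sinc` uses exactly the sin(πt)/(πt) convention of the text.

```python
import numpy as np

# (F rect)(sigma) = integral over [-1/2, +1/2] of exp(-2 pi i t sigma) dt,
# approximated by a fine Riemann sum, should reproduce sinc(sigma).
# numpy's np.sinc(t) is sin(pi t)/(pi t) with np.sinc(0) = 1.
t = np.linspace(-0.5, 0.5, 200001)
dt = t[1] - t[0]
for sigma in [0.0, 0.3, 1.0, 2.5]:
    val = np.sum(np.exp(-2j * np.pi * t * sigma)) * dt
    assert abs(val - np.sinc(sigma)) < 1e-4
```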

Finite, Entire, Local and Regular Functions
For reasons of brevity, we call functions "finite", "entire", "local" and "regular" if they belong to the spaces E′, PW, O_C′ and O_M, respectively [85]. The Paley–Wiener–Schwartz–Ehrenpreis theorem [143] states, briefly, that F(E′) = PW and F(PW) = E′. It is also known in terms of the Paley–Wiener–Schwartz [46,100] or the Paley–Wiener theorem [46,133,144]. We call this the Fourier duality between time-limited and band-limited functions. It extends to F(O_M) = O_C′ and F(O_C′) = O_M (Lemma 1), which is the Fourier duality between time-localized and band-localized functions. In particular, δ ∈ E′ and 1 ∈ PW satisfy F δ = 1 and F 1 = δ. Figure 1 in [85] visualizes these relations and tells us that "time or band-localization" is wider than "time or band-limitedness". A subspace of O_M is D, the space of "time-limited" Schwartz functions, and a subspace of O_C′ is Z, the space of "band-limited" Schwartz functions [145], cf. Figure 1 in [85].

Localized Sinc, Regularized Rect, Unitary Functions and Dirac Comb
The Fourier duality between sinc and rect is not directly applicable to arbitrary tempered distributions [37,49,85,127,146]. However, localized sinc and regularized rect functions [13] can be applied to any tempered distribution [83]. We denote them Ω̂ and Ω, respectively [85]. The function Ω is a so-called Lighthill unitary function [7,42,58,147]. It is furthermore a "building block" of the function that is constantly 1. Hence, it forms a "smooth partition of unity" [13] and as such plays a very central role in Fourier analysis. It serves, for example, as a cutout function whenever the Fourier coefficients of a periodic tempered distribution [7,30,34], such as the "Dirac comb" III_T, need to be determined. Here, T > 0 is real-valued and δ_kT := τ_kT δ, where the translation extends τ_kT f := f(t − kT) for ordinary functions f. We briefly write δ if k = 0 and III if T = 1. Clearly, III_T ∈ S′ [35,76,148], F(III_T) = T^(−1) III_(1/T) and F(III_(1/T)) = T III_T, see e.g., [43,52,76,82]. The functions Ω_T ∈ D and Ω̂_T ∈ Z are, in fact, the convolution and multiplication (cross) inverses of III_T, see [85]. The regularization of rect and the localization of sinc are both linked to "Riemann's localization theorem" [45,134,146,149,150].
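The reciprocal-comb rule F(III_T) = T^(−1) III_(1/T) has a finite, discrete analogue that is easy to check with the FFT; the sizes below are arbitrary illustrative choices of ours.

```python
import numpy as np

# Discrete analogue of F(III_T) = (1/T) III_{1/T}: a comb with spacing s
# inside N = s*m points transforms into a comb with reciprocal spacing m,
# scaled by m. The sizes s and m are arbitrary illustrative choices.
s, m = 4, 8
N = s * m
comb = np.zeros(N)
comb[::s] = 1.0                  # ones at 0, s, 2s, ...

C = np.fft.fft(comb)
expected = np.zeros(N)
expected[::m] = m                # spikes at 0, m, 2m, ..., of height m

assert np.allclose(C, expected)
```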

Preliminaries
We briefly list the most important ingredients (Lemma 1, Definition 1, Lemma 2) here. These are known facts. For details one may refer to [85].
Lemma 1 (Convolution-Multiplication Duality). Let g ∈ S′, f ∈ O_C′ and α ∈ O_M; then F(f ∗ g) = F f · F g and F(α · g) = F α ∗ F g in the tempered distributions sense.
The conditions described in this lemma [15,17,24,27] are both necessary and sufficient for the existence of convolutions and multiplications on tempered distributions [84]. They secure, for example, the existence of the "square of the Dirac delta" in a rigorous distributional sense [85]. Using Lemma 1, we may define "discretizations" and "periodizations" as follows.
Definition 1 (Discretization, Periodization). Let T > 0 be real. For any f ∈ S′ such that the products exist in the sense of Lemma 1, we define ⊥⊥⊥_T f := III_T · f (discretization) and △△△_T f := III_T ∗ f (periodization) in the tempered distributions sense. These operations, primarily intended for radar applications, can already be found in Woodward [151].
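The interplay of convolution and multiplication underlying these definitions has a finite-dimensional analogue: for the DFT, circular convolution in one domain is pointwise multiplication in the other. A minimal numerical check (our own illustration, not from the cited works):

```python
import numpy as np

# Finite-dimensional analogue of the convolution-multiplication duality:
# for the DFT, circular convolution in one domain is pointwise
# multiplication in the other. Vector length and seed are arbitrary.
rng = np.random.default_rng(0)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

# Circular convolution computed from its definition ...
conv = np.array([sum(a[n] * b[(k - n) % 64] for n in range(64))
                 for k in range(64)])
# ... equals the inverse FFT of the pointwise product of the FFTs.
assert np.allclose(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real, conv)
```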
We are now well prepared. In Sections 5-7 we present new results. Lemma 3 and Lemma 4 below are particular cases of Theorem 1 in [83].

Truncation
There are, basically, four kinds of truncation. All four arise naturally as the multiplication and convolution inverses of the Dirac comb [85]. Truncation can be done sharply or smoothly, e.g., [51], p. 37, and in the time or frequency domain, e.g., [100], p. 191. All four are introduced next.

Sharp Truncation
First, we distinguish between sharp truncation in the time domain (finitization) and sharp truncation in the frequency domain (entirization). The resulting functions are time-limited (E′) or band-limited (PW), respectively. An explicit construction of Ω_T is described in [85].
Definition 2 (Finitization). Let T > 0 be real. For any f ∈ S′, we define another tempered distribution ⊓_T f := Ω_T · f, where Ω_T ∈ D is double-sided unitary. The operation ⊓_T : S′ → E′, f ↦ ⊓_T f is called finitization.

Smooth Truncation
A wider notion than sharp truncation (time or band-limitedness) is smooth truncation (time or band-localization), e.g., [163], p. 49. Instead of landing in E′ and PW, we now land in O_C′ and O_M, respectively, where E′ ⊂ O_C′ and PW ⊂ O_M, i.e., "time or band-limitedness" is generalized in this way.

Definition 4 (Regularization). Let T > 0 be real. For any f ∈ S′, we define another tempered distribution ∩_T f := Ω_T ∗ f, where Ω_T ∈ D is double-sided unitary. The operation ∩_T : S′ → O_M, f ↦ ∩_T f is called regularization.
Definition 5 (Localization). Let B > 0 be real. For any f ∈ S′, we define another tempered distribution ∩̂_B f := Ω̂_B · f, where Ω̂_B ∈ Z is double-sided unitary. The operation ∩̂_B : S′ → O_C′, f ↦ ∩̂_B f is called localization.

It is known that the space of "time-limited" functions and the space of "band-limited" functions do not overlap (except for the zero function). For this reason, a concept of "time and band-localized" functions is required. Here, the well-known Schwartz space S lies in their overlap (Figure 1 in [85]). Schwartz functions are, correspondingly, well localized in both the time and frequency domain. This explains their extraordinary role as (1) test functions for tempered distributions, (2) test functions in quantum mechanics [63], p. 12 and [164], pp. 317-318, (3) window functions in the Short-Time Fourier Transform (STFT) [41,51,56,165,166] and (4) their validity-satisfying role in Poisson's Summation Formula [20,84,142,165].
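The role of Schwartz-like windows in the STFT can be illustrated numerically: a smooth (Gaussian) window suppresses the spectral leakage that sharp (rect) truncation produces. A small sketch of ours, with arbitrarily chosen sizes:

```python
import numpy as np

# Sharp truncation (rect window) versus a smooth, Schwartz-like window
# (here: a Gaussian). The smooth window suppresses spectral leakage far
# away from the signal frequency. All sizes are arbitrary choices.
N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 10.4 * n / N)          # frequency between DFT bins

rect_spec = np.abs(np.fft.rfft(x))            # sharp truncation
window = np.exp(-0.5 * ((n - N / 2) / (N / 8)) ** 2)
gauss_spec = np.abs(np.fft.rfft(x * window))  # smooth truncation

far = slice(60, 129)                          # bins far away from the peak
rect_leak = rect_spec[far].max() / rect_spec.max()
gauss_leak = gauss_spec[far].max() / gauss_spec.max()
assert gauss_leak < rect_leak / 10            # much less leakage
```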

Four Truncation Rules
The above definitions obey the following four symbolic calculation rules. They are particular cases of a generally valid regularization-localization duality [83].

Lemma 3 (Time vs. Band-Truncation). Let T, B > 0 be real-valued with TB = 1 and f ∈ S′; then F(⊓_B f) = ˆ_T (F f) and F(ˆ_T f) = ⊓_B (F f) in the tempered distributions sense.
Lemma 4 (Time vs. Band-Localization). Let T, B > 0 be real-valued with TB = 1 and f ∈ S′; then F(∩̂_B f) = ∩_T (F f) and F(∩_T f) = ∩̂_B (F f) in the tempered distributions sense.
Obviously, the first and the second identity in both lemmas are Fourier transforms of one another. We merely prove the first lemma; the second is shown analogously.
In engineering terms, Lemma 3 reduces to the statement that time-limited and band-limited functions are Fourier transforms of one another, and Lemma 4 expresses the same in a wider sense. The latter is known as the Fourier duality between windowing (localization) and smoothing (regularization). The use of mollifiers [86] and regularizers [127] has its origin here.

Representation Theorems
The next two theorems can be understood as representation theorems for periodic functions, discrete functions, and for time-limited and band-limited generalized functions in S′. The quantity TB = 1 is known as the "time-bandwidth product" [163], cf. Woodward [151], pp. 118-119.
Theorem 1 (Global Inversion). Let T, B > 0 be real, TB = 1, p ∈ S′ be B-periodic and p̂ = d; then
△△△_B (⊓_B p) = p (17)
⊥⊥⊥_T (ˆ_T d) = d (18)
in the tempered distributions sense.
Theorem 2 (Local Inversion). Let T, B > 0 be real, TB = 1, f ∈ S′ be B-finite and f̂ = α; then
⊓_B (△△△_B f) = f (19)
ˆ_T (⊥⊥⊥_T α) = α (20)
in the tempered distributions sense.

Proof. Global Inversion. Because
where the second equality holds because p is B-periodic. The symbol ⊓_B p denotes one period of p.
With Lemmas 2 and 3 we now deduce (17) and (18). Although the next proof is very similar, it is presented here for comparison purposes. More precisely, (21) and (22) are needed below in Corollary 1.

Proof. Local Inversion.
Using the fact that f is B-finite, symbolically f = ⊓_B f, we obtain the assertion, where the second equality holds because τ_kB (Ω_B · f) = f for k = 0 and zero else. With Lemma 2 and Lemma 3 we now deduce (19) and (20) in the tempered distributions sense.
Equation (17) can also be found in Vladimirov [34], p. 114 as (1.4), in particular (1.5), for the reader's convenience, although it is derived here differently. In the special case where p is an ordinary function, ⊓_B can be replaced by multiplication with the characteristic function of an interval, cf. Benedetto and Zimmermann [167], p. 508. In general, however, the cutout function is a unitary function, i.e., it is not unique [7], p. 61. Equation (18) is the Fourier transform of (17). We may think of ⊓_B p as the "period" of p, of ˆ_T d as the "wave" of d, of ⊓_B f as the "cycle" of f and of ⊥⊥⊥_T α as the "coefficients" of α. Clearly, ⊓_B p and ˆ_T d as well as ⊓_B f and ⊥⊥⊥_T α are all not unique in (17)-(20). It is, hence, convenient to think of them as equivalence classes in these equations.
Equation (20) is the classical Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem [95,98,101,104], and (19) is its Fourier-domain counterpart; ⊓_B is rect-multiplication and ˆ_T is sinc-convolution in the limiting case. For ordinary functions, (20) and (19) coincide with (28) and (29) in [151], pp. 33-34, respectively. One may recall that sinc-convolution succeeds on Lebesgue square-integrable functions, with extremely slow convergence, but fails on arbitrary tempered distributions [49]. We therefore constructed ⊓_B as multiplication with a "regularized" rect function [13] and ˆ_T as convolution with a "localized" sinc function in [85]. The duality between regularization and localization [83] is often used in connection with the classical sampling theorem, e.g., in [146]. Using "localized" sinc functions accelerates (cf. [7], p. 6, ε > 0) the convergence of the convolution and corresponds to the use of "regularizers" [127]. The use of regularizers, in turn, corresponds to the use of "oversampling" [54]. In fact, "any sampling rate higher than the Nyquist rate is sufficient" [168]. Recall that the sinc function cannot be integrated, neither in the Riemann nor in the Lebesgue sense [47,169]. However, its integral exists as an "improper integral" [151,170], which corresponds to using "convergence factors" [171] as, for example, in García and Moro [101].
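The classical WKS reconstruction can be sketched numerically by truncating the sinc series f(t) = Σ_k f(kT) sinc((t − kT)/T); as noted above, convergence is slow, so many terms are needed. All parameter values below are our own illustrative choices:

```python
import numpy as np

# Truncated WKS reconstruction: a band-limited signal is restored from
# its samples f(kT) via f(t) = sum_k f(kT) sinc((t - kT)/T). The signal,
# bandwidth and truncation length are arbitrary illustrative choices;
# convergence of the sinc series is notoriously slow.
B = 4.0                                    # bandwidth, spectrum in (-B/2, B/2)
T = 1.0 / B                                # sampling scale, TB = 1
f = lambda t: np.cos(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 0.7 * t)

k = np.arange(-2000, 2001)                 # truncated sample index range
samples = f(k * T)

def reconstruct(t):
    return np.sum(samples * np.sinc((t - k * T) / T))

for t in [0.123, 0.5, 1.03]:
    assert abs(reconstruct(t) - f(t)) < 1e-2
```

The loose tolerance reflects the slow decay of sinc; replacing the pure sinc by a localized (regularized) kernel, as the text describes, is exactly what tightens this in practice.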

The idea used in [101] is moreover equivalent to the use of Lighthill's unitary functions [7,85,147], in other words, to the use of the operators ⊓_B and ˆ_T.
The condition f = ⊓_B f is further studied in Appendix B. It is now just a small step to deduce the following statement. We relate finiteness to periodicity and smoothness to discreteness.
Theorem 3 (Cyclic Dualities). If g ∈ S′ is considered being cyclic and finite simultaneously, then
△△△_B (⊓_B g) = ⊓_B (△△△_B g) = g (23)
⊥⊥⊥_T (ˆ_T ĝ) = ˆ_T (⊥⊥⊥_T ĝ) = ĝ (24)
in the tempered distributions sense, where TB = 1. Hence, ĝ ∈ S′ is both discrete and entire, simultaneously.
Proof. If Theorems 1 and 2 are true and p is f and f is p, then, by Fourier duality, d is α and α is d.
The commutator [63,84] is defined as [A, B] := AB − BA. In other words, ˆ_T and ⊥⊥⊥_T as well as △△△_B and ⊓_B commute. Dualities as in (23) or (24) do indeed exist. They are, often unawarely, heavily used in practice. We give three examples.

Example 1 (Discrete Fourier Transform).
Cyclic dualities are routinely used in digital signal processing. Whenever we use the Discrete Fourier Transform (DFT), we actually identify finite functions with periodic functions and periodic functions with finite functions [50]. Hence, g and ĝ are both finite and periodic. Simultaneously, ĝ and g are both entire (smooth) and discrete (Theorem 3) by Fourier duality.

Example 2 (Number-Function Duality). Every real number is a constant function and every constant function is a real number. More generally, complex numbers are "waves" (Figure 3), cf. [172]. Thinking of 1 as a discrete periodic function, we may use the DFT; its Fourier transform is 1. Thinking of 1 as a smooth periodic function, using F in S′, its Fourier transform is δ. One may recall that this is no contradiction [84].
Figure 3. Complex numbers are discrete (as a number) and smooth (as a function), simultaneously: c = r e^(it) versus c(t) = r(t) e^(it).
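The finite-periodic identification used by the DFT reproduces the pair F δ = 1 and F 1 = δ up to the DFT's normalization factor N; a minimal check of ours:

```python
import numpy as np

# In the DFT picture, delta and 1 are a Fourier pair, matching F(delta) = 1
# and F(1) = delta up to the DFT's normalization factor N.
N = 16
delta = np.zeros(N)
delta[0] = 1.0
ones = np.ones(N)

assert np.allclose(np.fft.fft(delta), ones)       # FFT(delta) = 1
assert np.allclose(np.fft.fft(ones), N * delta)   # FFT(1) = N * delta
```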
Example 3 (Wave-Particle Duality). Cyclic dualities are moreover known in quantum physics, where we either observe (discrete) positions of a particle or all its positions as an entire (smooth) wave. Its momentum (Fourier transform) is, correspondingly, smooth if its position is discrete and discrete if its position is smooth. This is known as the "wave-particle duality", and it turned out to be equivalent to Heisenberg's uncertainty principle [173].
An intimate relationship between finiteness and periodicity is also known in other branches of physics, e.g., in crystallography [94] or, more generally, whenever periodic boundary conditions arise, e.g., in [174,175]. "Dirichlet boundary conditions" emerge, for example, in connection with the "time-periodic" case of Maxwell's equations in [88,89]. Here, the aim is to find real-valued, time-periodic and spatially localized solutions. Theorem 3 tells us that whenever we add periodic boundary conditions to finite or infinite intervals, i.e., as soon as we close a circle (Figure 4), we implicitly identify smoothness with discreteness in the Fourier domain. Obviously, many infinities (if not all) arise by just cycling around finitely extended circles. More generally, circles can be replaced by (nearly arbitrary) closed paths in the complex plane [176,177]. Our result is obviously related to the Fundamental Theorem of Calculus ([177], Theorem 24) and Cauchy's theorem ([177], Theorem 25). The idea that "infinity" might not exist in the real world [178] is presently hotly debated in physics [179,180]. It is moreover known in complex analysis that the "complex plane" is not a plane but a "Riemann sphere" and the "real axis" is not a line but a circle on the Riemann sphere going through its "north pole", where −∞ and +∞ coincide [3,8,9,176]. The "imaginary axis" is another circle on the Riemann sphere, also through the north pole, but it intersects the real axis orthogonally.

Wider Definitions
The "classical" sampling theorem (20) states thatˆ T • ⊥⊥⊥ T = id , i.e., regularization reverses discretization. Vice versa, the "forward" sampling theorem (18) states that ⊥⊥⊥ T •ˆ T = id , i.e., discretization reverses regularization. Both together yield (1). So, discrete functions can always be applied to smooth functions and smooth functions can always be applied to discrete functions. This discreteness-smoothness duality is in fact the foundation of generalized functions theory. It can be used as follows.
The underlying idea in Definition 6 is depicted in Figures A3 and A4 below.

Remark 1 (Constants). This definition, in contrast to Definition 1, allows us to periodize already periodic functions and to discretize already discrete functions, e.g., △△△_T 1 = 1 and ⊥⊥⊥_T δ = δ hold for any T > 0.

Convolution-Multiplication Associativity
Another beautiful detail shall be mentioned. Following Ellis [181] and Novelli and Thibon [182], we call it "cross-associativity" between multiplication and convolution in S′. We believe this property has not been described in the literature so far. It relies on Lemma 1.

Corollary 1 (Cross-Associativity). There are a, b, c ∈ S′ such that
a ∗ (b · c) = (a ∗ b) · c (27)
a · (b ∗ c) = (a · b) ∗ c (28)
in the tempered distributions sense.

Proof. Equation (27) is (21), with a = III_B, b = Ω_B and c periodic; (28) is then its Fourier transform, where c is discrete. Vice versa, (28) is (22), with a = Ω_B, b = III_B and c finite; (27) is now its Fourier transform, where c is entire.
Corollary 1 may also hold for other tempered distributions, for example Schwartz functions a, b, c ∈ S. In general, however, multiplication and convolution products are not associative in S′, neither in a mixture of ∗ and · nor even among multiplication products alone, see e.g., [8]. Nevertheless, the idea that periodic functions, discrete functions, as well as time-limited and band-limited functions might be exactly those which satisfy these equations in the widest sense in S′ is fascinating and worth further investigation.
They correspond to TB = 1 in this study, i.e., (1/B) × B = 1 and (1/T) × T = 1, respectively. This relation between the sampling rate T ≡ 1/B and the bandwidth B is known as the "classical sampling theorem", and it is linked herewith to Woodward's effective "area of ambiguity", which equals 1. The "Classical Sampling Theorem" in S′ is summarized in Table A3, Rules 25-28. One may note that it does not matter whether T plays the role of B or B that of T; both variables are fully equivalent. They are reciprocal to one another. Furthermore, (1/T) × T = 1 is nothing else than the definition of "frequency". It means "frequency" is "reverse time" and, trivially, "reverse time" × "time" equals unity. For a deeper understanding, one may study Max Born's theory of reciprocity [183,184].
The theorems derived in this study are in fact part of a larger symbolic calculation scheme, summarized in Appendix A. One may have noticed that four truncation methods {⊓, ˆ} and {∩, ∩̂} were introduced but only {⊓, ˆ} have been used intensively. According to Lemma 5 in [85], it is possible to replace ˆ by ∩ and ∩̂ by ⊓ in many cases, as done e.g., in Appendix C. We will have a closer look at this phenomenon in a later study.

Acknowledgments: The authors would like to thank the reviewers for their careful review and valuable suggestions and the editors of this journal for their courteous, rapid processing.

Conflicts of Interest:
The authors declare no conflicts of interest.

Appendix A. Symbolic Calculation Rules
The following tables summarize the results in this study; they complement previously found rules, cf. Tables 1-3 in [84]. All rules below are given pairwise, i.e., two consecutive lines are Fourier transforms of one another, where TB = 1. Hence, B = 1/T and T = 1/B, respectively.
The unitary function 1_T, see Rules 01, 03 and 07, is a generalization of the characteristic function (cf. [57], p. 47, [53], p. 3, [105], p. 570) on the interval [−T/2, +T/2], and δ_B is its Fourier transform. At the same scale, δ_T and 1_T are two different pixel representations (Figure A6, left and right). Tables A1 and A2 are both generalized in Table A3. The properties "finite" and "entire" are inherited from unitary functions, and "discrete" and "periodic" are inherited from Dirac combs. The classical Whittaker–Kotel'nikov–Shannon sampling theorem arises in Rule 28 as a particular case of the "sampling theorem" on tempered distributions. Rule 28 is regularization, its reversal is discretization: Rule 26. Equivalently, Rule 27 is localization, its reversal is periodization: Rule 25.

Appendix B. The Condition ⊓_T f = f
In this appendix, we have a closer look at the condition ⊓_T f = f, which ensures that the Whittaker–Kotel'nikov–Shannon sampling theorem holds [97], also in the tempered distributions sense. It can be found in [97,105] and [167], p. 508, for example, for the conventional functions case, and in [13], p. 22, [23], p. 77 and [101], Lemma 2, for the distributional case. However, the condition ⊓_T f = f is just one of four such function properties (Remark 2). All four are described below.
Appendix B.1. Time-Limited Functions
The property ⊓_T f = f means that Ω_T · f and f are the same function. The support of f is then fully contained in [−(T − ε)/2, +(T − ε)/2], where ε > 0 is arbitrarily small (cf. [167], p. 506, where d > 0, [54], p. 18, where λ > 0, [45], p. 24, where ε > 0). Functions f satisfying ⊓_T f = f are called "time-limited" in electrical engineering, and ⊓_T is truncation with respect to T, centered around the origin. Truncation fails if both ε = 0 and f ∉ O_M, i.e., if Lemma 1 is ignored, due to undetermined interval boundaries after truncation. Figure A1 shows the example of Ω_T (blue) multiplied by III_(1/2) (gray), where T = 9/2 and ε = 1/2. A special case is ⊓_T δ = δ for any T > 0; hence, δ is "universally" time-limited.
Figure A1. Finitization of a tempered distribution (support of f inside [−t_max/2, +t_max/2] at scale T).

Appendix B.2. Band-Limited Functions
The property ˆ_T f = f, where TB = 1, means that ⊓_B f̂ = f̂. Hence, Ω_B · f̂ and f̂ are the same function. This is true due to the reciprocity between the time and frequency domain (Lemma 3). The support of F f is fully contained in [−(B − ε)/2, +(B − ε)/2], where ε > 0 is arbitrarily small. Functions f satisfying ˆ_T f = f are called "Paley–Wiener functions" or "band-limited", and ⊓_B is band-truncation with respect to B, centered around the origin. The difference ε = B − σ_max (or the quotient B/σ_max > 1) is called "oversampling" [54,163].

Appendix B.3. Periodic Functions
The property △△△_T f = f, where △△△_T is understood in the sense of Definition 6, means that f is periodic with period T > 0. However, f may additionally be periodic with respect to L if T = Lk for some integer k > 0 (Figure A3).
Figure A3. Periodization of a tempered distribution (support of f inside [−t_max/2, +t_max/2] at scale T).

Appendix B.4. Discrete Functions
The property ⊥⊥⊥_T f = f, where ⊥⊥⊥_T is understood in the sense of Definition 6, means that f is discrete at integer multiples of T > 0. It is connected to its bandwidth B > 0 via TB = 1. Figures A3 and A4 are the same, except for the reinterpretation of f as F f and of T as B. A special case is ⊥⊥⊥_T δ = δ for any T > 0; hence, δ is "universally" discrete.
Figure A4. Discretization of a tempered distribution via periodization of its Fourier transform (support of F f inside [−σ_max/2, +σ_max/2] at scale B).

Appendix C. Image Scale
The "scale" in digital images is an inherent property known as the "pixel size" [115]. In Figure A5 it is T = [30,30] cm and in Figure A6 it is T = [10, 10] m. Images can be displayed in Dirac, Sinc or Rect representation ( Figure A6). The respective pixel representation does not change their "scale".  Figure A6. Radar image, scale T = [10, 10] m, "Sinc", "Dirac" and "Rect" shaped pixels.
In generalized functions theory, rect is replaced by a regularized rect function, denoted Ω, and sinc is replaced by a localized sinc function, denoted Ω̂. The regularizations sinc ∗ and rect ∗ become ˆ and ∩, respectively [85].