Abstract
In data analysis and signal processing, the recovery of structured functions from given sampling values is a fundamental problem. Many methods generalized from the Prony method have been developed to solve this problem; however, current research mainly deals with functions represented as sparse expansions using a single generating function. In this paper, we generalize the Prony method to solve the sparse expansion problem for two generating functions, so that more types of functions can be recovered by Prony-type methods. The two-generator sparse expansion problem has some special properties; for example, the two sets of frequencies need to be separated out of the zeros of the Prony polynomial. We propose a two-stage least-square detection method that solves this problem effectively.
1. Introduction
The Prony method is a popular tool for recovering functions represented as sparse expansions using one generating function. For example, a function of the following form
can be recovered from equispaced sampling values for an appropriate positive constant h; however, in many real-world applications, we need to deal with functions represented by more than one generating function. For example, the harmonic signals of the form
are generated by two generating functions (or simply generators): and , where and are generic parameters used as the placeholders for the real parameters and to generate the specific terms in the expansion. In this system, we have two sets of coefficients and and two sets of frequencies and . Analogous to the original Prony method, we expect to use equispaced sampling values to recover those four sets of parameters.
There are some existing methods to solve this problem. The first one is to convert it to a single-generator problem by the following formulas
which results in problem (1) (see [1]). Another way using the same idea is based on the even/odd properties for and (see [2]) as follows
However, this approach is very restrictive, because the chance that one can make this kind of conversion is very small. In this paper, we are interested in solving the general two-generator sparse expansion problem via a new generalization of the Prony method. More specifically, we study functions with the following two-generator sparse expansion
where and are two different functions used as the generators. In order to make the Prony method work, we need a critical condition for our special technique: There exists a linear operator, such that and are both eigenfunctions of this operator.
Another situation that could lead to the two-generator expansion problem is when we apply some special transforms on a sparse expansion. For example, when we apply the short time Fourier transform (STFT), i.e.,
using the Gaussian window function on the sparse cosine expansion
we would obtain a two-generator sparse expansion as follows,
In this example, the two generators are and with . Actually, the original single-generator problem (6) can be solved directly. For example, one can convert to (see [1]), or use a method based on the Chebyshev polynomials (see [3]). When we solve problem (6) directly, we use the sampling values in the time domain; when we solve the problem in the form of (7), we use the sampling values in the frequency domain. (See [4] for a discussion on sampling values in the frequency domain.) In this paper, we use this example to study the special properties of the two-generator sparse expansion problem.
Since signals can take various forms, not necessarily the exponential form studied in the classical Prony method, many researchers have generalized the Prony method to handle different types of signals; see, for example, the results developed over the last few years in [1,3,5,6,7,8,9,10,11,12]. In particular, Peter and Plonka in [1,8] generalized the Prony method to reconstruct M-sparse expansions in terms of eigenfunctions of some special linear operators. In [3], Plonka and others reconstructed different signals by exploiting the generalized shift operator. These results provide the building blocks for our method in this paper.
We organize the remaining sections as follows. In Section 2, we quickly review the classical Prony method and one of its generalizations for the Gaussian generating function to establish the foundation of our method. In Section 3, we describe the details of our method using the example with two generators: cosine and sine functions. In Section 4, we apply our method to two different types of Gaussian generating functions, which lets us study an interesting question: When the Hankel matrix for finding the coefficients of the Prony polynomial is singular, what does that really mean? In Section 5, we show two examples that correspond to the problems solved in Section 3 and Section 4, respectively. Finally, we draw conclusions in Section 6 and describe two related research problems to be solved in the future.
2. Review of the Prony Method and One of Its Generalizations
Our method is built on top of the Prony method and one of its generalizations. Before we present our technique, we review these basic methods.
2.1. Classical Prony Method
Let be a function in the form of
with . Then the coefficients and the frequencies can be recovered from the sampling values , where h is some positive constant. To solve this problem, a special polynomial called the Prony polynomial can help us convert the relatively hard non-linear problem (8) to two linear problems and a simple non-linear problem (finding zeros of a polynomial). The Prony polynomial for (8) is defined as
where are the coefficients of the monomial terms in (9) with the leading coefficient . The technique is based on the following critical property:
for any , which can be written as the following linear system
The coefficient vector can be calculated from the sampling values . The linear system (11) is guaranteed to have a unique solution under the condition that all ’s are distinct in for some (with h in the range ), and are nonzero in , which is a natural requirement for problem (8). This property is a direct result of the following matrix factorization
where is a Vandermonde matrix, which is non-singular for distinct ’s and for . The frequencies can be extracted from the zeros of (in the form of ) using the formula
Finally, the coefficients can be determined by solving the following overdetermined linear system (with M unknowns and equations)
The redundant equations in this overdetermined linear system will play a critical role in our two-generator method to help us separate the frequencies associated with the two generators (see Section 3).
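The pipeline just described (solve the Hankel system for the Prony polynomial, take roots for the frequencies, then solve a Vandermonde system for the coefficients) can be sketched numerically. The following is our own minimal illustration, not the paper's code; the names `M`, `h`, and the sample count `2M` mirror the quantities in this section:

```python
import numpy as np

def classical_prony(samples, M, h):
    """Recover the frequencies and coefficients of f(t) = sum_j c_j exp(i w_j t)
    from the 2M equispaced samples f(0), f(h), ..., f((2M-1)h)."""
    f = np.asarray(samples, dtype=complex)
    # Hankel system for the Prony polynomial coefficients p_0, ..., p_{M-1}
    H = np.array([[f[l + m] for m in range(M)] for l in range(M)])
    p = np.linalg.solve(H, -f[M:2 * M])
    # zeros of z^M + p_{M-1} z^{M-1} + ... + p_0 have the form exp(i w_j h)
    zeros = np.roots(np.concatenate(([1.0], p[::-1])))
    omegas = np.angle(zeros) / h
    # coefficients from the Vandermonde system over all 2M samples
    V = np.array([[z ** k for z in zeros] for k in range(2 * M)])
    c, *_ = np.linalg.lstsq(V, f, rcond=None)
    return omegas, c
```

For instance, four samples of f(t) = 2e^{it} + 3e^{2.5it} taken with h = 0.5 recover the frequencies 1.0 and 2.5 and the coefficients 2 and 3.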
2.2. Sparse Expansions on Shifted Gaussian
In order to solve the two-generator sparse expansion problem (7), we need to apply the technique presented in [3], which solves a single-generator sparse expansion problem with the following form
where . The technique relies on the following generalized shift operator
where , and has the property
The function in (16) is chosen to be , so that we have the following critical property
which means that ’s are eigenfunctions of for all .
The sparse expansion in (15) can be reconstructed using sampling values , and is an arbitrary real number. If , then ; while if , then with for for some given (See [3].) The Prony polynomial for the problem in (15) can be defined as:
with . Then, we have the following linear system
for , which can be represented as an inhomogeneous system
where , and . This matrix is a Hankel matrix, and it has the following structure
with the Vandermonde matrix
Thus, is invertible for distinct ’s in for , and the vector of the coefficients is obtained by solving the system (20), which can then be used to calculate the parameters ’s.
Finally, the coefficients ’s in the expansion (15) can be computed by solving the following overdetermined linear system:
3. The Sparse Expansion Problem with Two Generators: Cosine and Sine
In this section, we present our method for solving the two-generator sparse expansion problem in the following form
through a modified Prony method. We present our method in the following theorem.
Theorem 1.
Assume that a function has the two-generator sparse expansion form of (23), where the numbers of terms for the two generators and are known, but the two sets of coefficients in and and the two sets of frequencies in and are unknown. If equispaced sampling values of the form for are provided, then the original function can be uniquely reconstructed under the following conditions:
All the coefficients are nonzero in .
All the frequencies are distinct in a range for some . Furthermore, h is selected from the range .
The value of is selected to make the numbers ,
nonzero.
Proof.
First, we choose an appropriate linear operator such that our two generating functions and in (23) are both eigenfunctions of this operator. We consider the symmetric shift operator (see [3])
When we apply this operator on and , we obtain
where and are the eigenvalues. Now we define the Prony polynomial for problem (23) using all the eigenvalues and as follows:
which can be written in terms of the Chebyshev polynomials as
where . (See [3] for more information on this technique.) Since the leading coefficient of the Chebyshev polynomial is , we choose , so that in (27) has the leading coefficient 1. This Prony polynomial has the following critical property:
for and , respectively, which is essential to help us derive the following linear system.
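The eigenrelations underlying this property can be checked numerically. The sketch below is our own and assumes the unnormalized symmetric shift (S_h f)(t) = f(t+h) + f(t−h); if the operator in the text carries a factor 1/2, the eigenvalues are halved but the argument is unchanged:

```python
import numpy as np

def sym_shift(f, t, h):
    # symmetric shift in the form f(t+h) + f(t-h) (our normalization choice)
    return f(t + h) + f(t - h)

omega, h = 1.7, 0.4
t = np.linspace(0.0, 5.0, 101)
# by the sum-to-product identities, cos(omega t) and sin(omega t) are BOTH
# eigenfunctions with the same eigenvalue 2 cos(omega h)
eig = 2.0 * np.cos(omega * h)
```

Note that cosine and sine of the same frequency share one eigenvalue, so a zero of the Prony polynomial does not by itself reveal which generator its frequency belongs to; this is the source of the ambiguity discussed later in this section.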
To derive a linear system for , we need to calculate the following expression
which can be shown to be zero. That is,
for . Indeed, using the right-hand-side expression in (23) for in (28) and for a fixed , we obtain
We can reformulate the system (28) as
for . To solve this system, we need sampling values in the form of for .
In order to see that the linear system in (29) has a unique solution, we study the coefficient matrix in (29), which we denote as . As in the classical Prony method, we can factorize in the following structure
where the Vandermonde Block matrix can be written as
with
and
and the diagonal block matrix can be written as
where
and
Thus, is guaranteed to be invertible by the conditions and of the theorem. Then, we can find the unique solution for from the linear system (29).
With these values for as in (26), we can determine ’s and ’s from the zeros of ; however, this step is non-trivial, because we do not know which zeros correspond to ’s and which correspond to ’s. In order to resolve this ambiguity, we consider all the possible cases: Among the zeros of , of them correspond to ’s. Thus, there is a total of possible choices for ’s, among which exactly one is the solution; however, how do we select the right one? The overdetermined linear system in the next step provides the answer.
When we determine the coefficients ’s and ’s in (23), we have the following linear system
for corresponding to all the sampling values, which has equations and unknowns. This overdetermined linear system gives us the extra information we need to select the true-solution case from the remaining non-solution cases.
Our method is based on an observation: The sampling values are calculated using the original ’s and ’s (corresponding to the true-solution case), which means that all the equations in (36) are satisfied exactly for the true-solution case. In other words, the least-square solution of (36) for the true-solution case has this property: Its error term is theoretically zero (or very close to zero due to rounding errors in computation). In contrast, the least-square solution for any non-solution case has a significantly nonzero error term (relative to the rounding errors), which makes the true solution stand out clearly. □
Our experiments have verified this phenomenon. Based on this observation, we develop a two-stage least-square detection method to minimize the computing cost, and in Section 5, we demonstrate the effectiveness of this method using a simple example.
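This observation can be illustrated on a toy instance of our own making (one cosine term and one sine term, with a hypothetical frequency pool {1.0, 2.0} obtained from the Prony zeros): for each candidate split of the pool between the two generators, we fit the coefficients by least squares over the overdetermined system and compare residuals; only the true split fits to near machine precision.

```python
import numpy as np
from itertools import combinations

pool = (1.0, 2.0)                              # hypothetical recovered frequencies
t = 0.3 * np.arange(20)                        # equispaced sample points
samples = 2.0 * np.cos(1.0 * t) + 3.0 * np.sin(2.0 * t)

errors = {}
for cos_set in combinations(pool, 1):          # choose which frequency is cosine's
    sin_set = tuple(w for w in pool if w not in cos_set)
    # overdetermined system for the coefficients of this candidate split
    A = np.column_stack([np.cos(w * t) for w in cos_set] +
                        [np.sin(w * t) for w in sin_set])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    errors[(cos_set, sin_set)] = np.linalg.norm(A @ coef - samples)

best = min(errors, key=errors.get)             # the true split has ~zero residual
```

The residual gap between the true split and the wrong one is many orders of magnitude, which is exactly what makes the detection reliable.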
Remark 1.
The overdetermined linear system (36) plays an important role in determining the true solution among a certain number of possible cases. Typically, this situation arises in the multi-generator sparse expansion problem. For the single-generator case, we can select as many linearly independent equations from the overdetermined system as there are unknowns to find the solution; for the multi-generator case, however, the redundant equations are very useful in the least-square method.
4. The Sparse Expansion Problem with Two Gaussian Generators
In this section, we solve another two-generator sparse expansion problem as in (7) that uses the two Gaussian generating functions, and , in the form of
for some constant . In order to recover the coefficients and the parameters ’s, we need sampling values , where , and h satisfies the same condition as in Section 2.2.
This two-generator sparse expansion problem has a special property: When for some , the two functions and are the same. This property would cause some problems for the method presented in the previous section. To simplify the discussion, we separate these two cases and first consider the case that for all .
Theorem 2.
Assume that a function has the two-generator sparse expansion form of (37), where the number of terms M and the constant are known, but the coefficients in and the parameters in are unknown. If equispaced sampling values of the form for are provided, then the original function can be uniquely reconstructed under the following conditions:
The coefficients are nonzero in .
The parameters are nonzero in for some , and they are distinct.
If , then ; while if , then .
Proof.
Our method relies on the existence of a critical linear operator such that both generating functions are its eigenfunctions. Here we use the operator as defined in (16) with , which has the following properties:
Clearly and are eigenfunctions of for all with corresponding eigenvalues and , respectively, for . Hence we can define the Prony polynomial using all these eigenvalues:
with . Since the real number , we can assume, without loss of generality based on the structure in (37), that for all .
Then for , we calculate
which can be written as the following linear system
for . To solve this system, we need sampling values: for . To study the existence of a solution for this linear system, we simplify it with respect to the unknown vector as follows,
with and
The invertibility of can be seen from the following matrix factorization:
where the Vandermonde block matrix has the following form
with
and
and the diagonal block matrix is given by
with
and
From this structure, we can see that the Vandermonde matrix in (44) is invertible by conditions and of the theorem, and hence in (42) is also invertible by condition , which results in the unique solution for .
With all the values found from the above linear system, we can find all the values by calculating the zeros of the Prony polynomial of (39). In this case, we do not need to deal with the ambiguity that we encountered in the previous section due to the special structure of the pairs ’s. Finally, the coefficients ’s of the sparse expansion (37) can be computed by solving the following overdetermined linear system:
for . □
Remark 2.
Our method above only works for the case when for all j in ; however, in a real-world situation, when we solve a problem of the form (37) from sampling values, how do we know whether any such exists? We need a detection method that tells us whether all the ’s are nonzero before we apply the above method.
Let us investigate the existence of a solution for the linear system (41), which is determined by the invertibility of in (42). We notice that when , the first column of (45) and the first column of (46) are the same, which causes the matrix in (44) to be singular. We then conclude that in (43) is singular if any . In other words, by checking the invertibility of , we can tell whether there is any for problem (37). If in (42) is singular, our current method does not work. Fortunately, we can modify our method to solve the problem in this special situation.
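In practice, this check can be implemented as a numerical-rank test on the Hankel matrix assembled from the samples. The sketch below is our own; the matrix size and tolerance are illustrative choices rather than prescriptions from the text, and `T_j` is our placeholder name for the vanishing parameter:

```python
import numpy as np

def hankel_is_singular(samples, M, tol=1e-10):
    """Flag (near-)singularity of the 2M x 2M Hankel matrix built from
    equispaced samples; in the two-Gaussian model, singularity signals
    that some parameter T_j vanishes."""
    f = np.asarray(samples, dtype=float)
    n = 2 * M
    H = np.array([[f[l + m] for m in range(n)] for l in range(n)])
    # numerical rank below full rank means the system is singular
    return bool(np.linalg.matrix_rank(H, tol=tol * np.linalg.norm(H)) < n)
```

A rank-deficient sample sequence (fewer distinct modes than the matrix size) is flagged, while a full set of distinct modes is not.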
Let us assume that , and the remaining ’s are positive numbers. In this case, we modify (37) to
and its corresponding Prony polynomial is defined as
with . Since , it leads to
Then we can show that
because we can split the above left-hand-side summation into the following three summations with zero value each:
and
We use sampling values: for to solve the system. Similar to (43), we still have
but we need to modify to
which is invertible for positive distinct , and the diagonal block matrix becomes
with and maintaining the same forms of (48) and (49), respectively.
After we solve the linear system of (55), we obtain the Prony polynomial that contains one zero at and the remaining zeros appear in pairs of ’s, which correspond to the parameter values 0 and pairs. Finally, we will solve the following overdetermined linear system for values
for . From this example, we can see that the value of gives us some important information, namely, which of the two systems in (37) and (51) we should work on. This property could be useful when we consider a problem in which the M value in (51) is unknown but restricted to a certain range (see the discussion in Section 6).
5. Numerical Experiments
In this section, we use two simple examples to illustrate the implementation details of our method for the two-generator sparse expansion problem described in the previous sections. The first example is for version (23) in Section 3. The second example is for version (37) in Section 4.
Example 1.
We consider a function (see Figure 1) that is a two-generator expansion with each generator producing 5 terms in the following form
and the 20 parameters we used are listed in the table below to generate the sampling values.
Figure 1.
The signal in (58) with 39 equispaced sampling values.
How do we use the 39 equispaced sampling values (where 39 comes from ) in the form of to recover the original parameters in Table 1?
Table 1.
Original parameters of the function in (58).
There are 20 original parameters in two sets: and corresponding to the two generators, respectively. To recover them, first we solve the following linear system for the coefficients of the Prony polynomial based on the Equation (29)
where
and
We obtain
which corresponds to the following Prony polynomial
From the 10 zeros of this polynomial, we obtain 10 parameter values:
which correspond to , but the explicit order is unknown. We must resolve the ambiguity: Which five parameter values belong to (with the remaining five belonging to )?
To separate the ’s from ’s, we consider the following overdetermined linear system:
where we use the shorthand notations
for . Note: In this linear system, we use only 20 out of the 39 original sampling values, which is adequate for this particular example. This is a trade-off between the accuracy of computation and its cost (in time). In general, the more redundant equations we use, the more accuracy we can achieve in searching for the true solution. Accordingly, once we can obtain adequate accuracy, we focus on cutting the computation cost to the minimum. We do not solve this overdetermined linear system by the least-square method directly. Instead, we split these 20 equations into two parts: In the first part, we approximate the coefficients in (60) by the least-square method; then we apply these derived coefficients to the equations in the second part so as to filter out the true solution.
Among the 10 values in (59), every time we select 5 of them for , the remaining 5 numbers are automatically for . We therefore have a total of 252 possible choices (the combinatorial number ) as candidates for the solution. Notice that this combinatorial number is relatively large; in order to speed up the processing, we reduce the redundant computation to the minimum. Let us use the notations with representing those 252 candidates. Our method is based on the property that the information in the sampling values has a lot of redundancy for selecting the true solution, so we use just enough of it to save computation time.
First, when we calculate the coefficients by the least-square method, we use exactly 10 equations (the same number as the coefficients) out of the 20 equations in (60). Based on our experiments, we do not have to use an overdetermined system to obtain a good approximation by the least-square method: A determined system gives an excellent approximation for the least-square problem, while an underdetermined system usually does not approximate the data well through its least-square solution. For convenience, we select 10 consecutive equations in (60) somewhere in the middle, which we call the least-square block in our discussion, to approximate the coefficients . Specifically, our least-square block takes the subscripts from 6 through 15, and the corresponding sampling values are selected to form the reduced linear system given below,
Even though our new linear system (61) is a determined system, we still solve it for a least-square solution, because the determinant of the square matrix in (61) could be very close to zero. The remaining equations in (60), together with the coefficients derived from (61), will then be used to detect which candidate is the true solution based on the error information.
For each set of values among the 252 candidates, the least-square solution for the linear system (61) would produce the 10 coefficients , and we evaluate the following vector
which is in general different from the original sampling vector . Then we will calculate the difference of these two vectors, and see how close they are. We define the error vector as follows:
To search for the true solution among the 252 candidates, we discover an intrinsic property, shown in Figure 2 and Figure 3, that can clearly separate the true solution from other candidates.
Figure 2.
The error vector for one of the 252 candidates.
Figure 3.
The error vector for the true solution, with two reference points at the ends.
In Figure 2, we plot the error vector for one of the 252 candidates to view its typical behavior. The error values in the least-square block (with subscripts from 6 to 15) are very close to zero for a typical candidate; however, the error values that are out of the least-square block (with subscripts from 0 to 5 and from 16 to 19) are not close to zero in general for a candidate that is not the true solution.
This behavior can be explained in this way: The errors in the least-square block are usually very small due to the fact that the least-square solution of the determined system approximates the targeting sampling values quite well; however, when we consider an error for a sampling value out of the least-square block, since the corresponding equation is not involved in the least-square approximation, there is no reason for this equation to generate a value that is very close to the targeting sampling value.
For the true-solution case, by contrast, the behavior is different: The errors for all the equations in the linear system (60) are very close to zero (see Figure 3, and ignore the two reference points at the ends). Let us summarize the key property that helps us identify the true solution among all the candidates: For a candidate, if the coefficients generated from the determined linear system (61) by the least-square method cannot approximate even one sampling value outside the least-square block well, then it cannot be the true solution.
However, if the coefficients for one candidate approximate one particular sampling value outside the least-square block well, we can only say that this candidate is highly likely to be the true solution, because the probability for a non-solution candidate to approximate some sampling value outside the least-square block well is very small. Based on this observation from our experiments, we design the following strategy for the solution search.
Strategy: Eliminate as many candidates as possible in the first filtering round, in two steps.
Step 1. Select a determined linear system from the overdetermined linear system in (60) (as the least-square block), and approximate the coefficients by the least-square method for each of the 252 candidates.
Step 2. Apply the coefficients derived in Step 1 to one of the linear equations outside the least-square block, and calculate the error against the targeting sampling value. If the error is greater than a certain threshold (we use as our threshold), we drop the candidate from consideration; otherwise, the candidate passes the first round of filtering.
If only one candidate survives the first round, it must be the true solution. If more than one candidate passes, we perform a second round of filtering: We simply apply the derived coefficients to another linear equation outside the least-square block and calculate the error for the targeting sampling value; if the error is greater than the threshold, we eliminate the candidate. We repeat these cycles until we identify the true solution. Since we have plenty of redundant equations outside the least-square block, we can generally determine the true solution without going through many cycles. Furthermore, the linear equations corresponding to the original sampling values that are not included in the linear system (60) can still be used in the above steps when necessary, although the probability of needing them is extremely small. This simple strategy lets us detect the true solution without unnecessary computation, while preserving the option to use the redundant information when necessary.
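The strategy can be condensed into a small routine. This is a sketch under our own naming: `design_row`, `ls_block`, and `probe_rows` are hypothetical helpers standing in for the equations of (60) and (61), and the threshold is illustrative:

```python
import numpy as np

def two_stage_detect(candidates, ls_block, probe_rows, samples, design_row,
                     threshold=1e-10):
    # Stage 1: for each candidate, approximate the coefficients from the
    # determined least-square block only.
    fits = []
    for cand in candidates:
        A = np.array([design_row(cand, k) for k in ls_block])
        b = np.array([samples[k] for k in ls_block])
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        fits.append((cand, coef))
    # Stage 2: spend one probe equation per round, dropping candidates whose
    # predicted sampling value misses the target by more than the threshold.
    survivors = fits
    for probe in probe_rows:
        survivors = [(cand, coef) for cand, coef in survivors
                     if abs(np.dot(design_row(cand, probe), coef)
                            - samples[probe]) <= threshold]
        if len(survivors) <= 1:
            break
    return [cand for cand, _ in survivors]
```

Each round consumes exactly one equation outside the least-square block, so the redundant equations are spent only as needed.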
Here we would like to point out that once we select values for the -group or the -group, the order of the values within each group is not important, because their corresponding coefficients (’s or ’s) will be aligned with them accordingly when we solve the determined linear system (61) by the least-square method.
Example 2.
Our second function to be recovered has the following form
which is derived by applying the STFT to the following function
with the parameters of (64) listed in Table 2.
Table 2.
Parameters of the function in (64).
To solve this problem, we need to use 12 (i.e., ) sampling values. After applying the method described in Section 4, we solve a linear system with 6 unknowns and derive the Prony polynomial of degree 6 as follows
The symmetric structure of this polynomial tells us that its zeros appear in pairs for , which correspond to three pairs of parameters: , and for . Finally, we can solve another linear system for the coefficients ’s, with the errors listed in Table 3.
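The pairing of the zeros can be illustrated with a toy polynomial of our own construction (not the polynomial above). Here we read the stated symmetric structure as palindromic, which is our assumption: planting zeros in reciprocal pairs (z, 1/z), as eigenvalue pairs of the form (e^a, e^{-a}) would produce, yields a palindromic coefficient vector, and the computed roots come back paired.

```python
import numpy as np

pairs = [2.0, 3.0, 5.0]        # hypothetical values standing in for e^{a_j}
zeros = np.array([z for p in pairs for z in (p, 1.0 / p)])
coeffs = np.poly(zeros)        # monic degree-6 polynomial with these zeros
recovered = np.roots(coeffs)   # each zero returns together with its reciprocal
```

Reading off one member of each pair then determines the parameter pair up to sign, matching the three parameter pairs in the example.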
Table 3.
Parameters of the function in (64) and approximate errors using 12 sampling values with .
6. Conclusions
In this paper, we introduce a method that extends the Prony method to solve the two-generator sparse expansion problem. This method relies on the existence of a special linear operator for which the two generators are both eigenfunctions. The two-generator problem has a special property: The zeros of its Prony polynomial correspond to two sets of parameters, and there is no straightforward way to separate them. We propose a two-stage least-square detection method on an overdetermined linear system for each candidate to extract the true solution, which relies on an intrinsic property of the true solution: Only the true solution can use the coefficients derived from the least-square block to approximate the targeting sampling values outside the least-square block well. Our method is designed to minimize the computation cost while still maintaining the computation accuracy.
It seems that the idea can be extended to the k-generator sparse expansion problem for ; however, for the general k-generator case, the requirement that there exist a linear operator such that all the generators are its eigenfunctions becomes extremely hard to satisfy. For example, in the following sparse expansion problem,
it is not easy to find a linear operator, such that both and are its eigenfunctions. One may argue that the problem could be solved by converting to , and then it becomes a one-generator problem. Notice that converting a two-generator problem to a one-generator problem may not work most of the time. We are interested in developing a general method that can solve the two-generator sparse expansion problem including the one in (65). We can see that there are many difficult problems to be solved in this multi-generator sparse expansion problem, and we would like to see more researchers contribute in this direction.
Our method for the two-generator sparse expansion problem can handle a certain degree of uncertainty. For example, in problem (23), if we know the total number of terms (i.e., the value of ) but not the number of terms in each summation (i.e., the individual values of and ), we can still solve the problem using the two-stage least-square detection method described in Section 3 and Section 5. If we increase the uncertainty a little more, can we still solve the problem?
For example, in the problem we considered in Section 4, suppose we do not know the exact number of terms (referred to as the unknown order of sparsity M in [1]) in the following expansion,
and we are given K equispaced sampling values for some positive integer K. If we are told that these sampling values are sufficient to recover the signal, how do we recover it? In other words, we know that the number of terms M lies in the range , but we do not know the exact value of M; can we still solve the problem? The answer is yes, because we can try all the possible cases: , and for each case apply our two-stage least-square detection method to tell us whether the true solution can be extracted.
However, we are not satisfied with this kind of exhaustive-search solution due to its high cost. We plan to develop an efficient term-number detection method, so that when we make a term-number prediction, the method can immediately tell us whether it is correct. In [1], two methods are proposed: One is based on the rank of the matrix, and the other on its singular values. The main issue is: How do we obtain a reliable method to determine the M value in the sparse expansion? Only after we obtain the correct term number will we pay the computation cost of going through all the necessary details to find the solution.
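The rank-based idea from [1] can be sketched as follows for an exponential-sum model (our own illustration; the window length `L` and the tolerance are arbitrary choices, not prescriptions from [1]): the numerical rank of a Hankel matrix built from the samples of an M-term exponential sum equals M.

```python
import numpy as np

def estimate_sparsity(samples, max_M, tol=1e-8):
    """Estimate the number of terms M from equispaced samples of an
    exponential sum via the numerical rank of a Hankel matrix."""
    f = np.asarray(samples, dtype=float)
    L = max_M + 1                        # more rows than any admissible M
    cols = len(f) - L + 1
    H = np.array([[f[i + j] for j in range(cols)] for i in range(L)])
    return int(np.linalg.matrix_rank(H, tol=tol * np.linalg.norm(H)))
```

With the estimated M in hand, one would run the reconstruction only once instead of trying every admissible value.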
Author Contributions
Conceptualization, A.H. and W.H.; methodology, A.H. and W.H.; software, A.H. and W.H.; validation, A.H. and W.H.; formal analysis, A.H. and W.H.; investigation, A.H. and W.H.; resources, A.H. and W.H.; data curation, A.H. and W.H.; writing—original draft preparation, A.H. and W.H.; writing—review and editing, A.H. and W.H.; visualization, A.H. and W.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Peter, T. Generalized Prony Method. Ph.D. Thesis, University of Göttingen, Göttingen, Germany, 2014. [Google Scholar]
- Hussen, A.M. Recover Data in Sparse Expansion Forms Modeled by Special Basis Functions. Ph.D. Thesis, University of Missouri, St. Louis, MO, USA, 2019. Available online: https://irl.umsl.edu/dissertation/896 (accessed on 22 March 2022).
- Plonka, G.; Stampfer, K.; Keller, I. Reconstruction of Stationary and Non-stationary Signals by the Generalized Prony Method. Anal. Appl. 2019, 17, 179–210. [Google Scholar] [CrossRef]
- Plonka, G.; Wischerhoff, M. How many Fourier samples are needed for real function reconstruction? J. Appl. Math. Comput. 2013, 42, 117–137. [Google Scholar] [CrossRef][Green Version]
- Beinert, R.; Plonka, G. Sparse phase retrieval of one-dimensional signals by Prony’s method. Front. Appl. Math. Stat. 2017, 3, 5. [Google Scholar] [CrossRef]
- Coluccio, L.; Eisinberg, A.; Fedele, G. A Prony-like method for non-uniform sampling. Signal Process. 2007, 87, 2484–2490. [Google Scholar] [CrossRef]
- Peter, T.; Plonka, G.; Schaback, R. Prony’s Method for Multivariate Signals. Proc. Appl. Math. Mech. 2015, 15, 665–666. [Google Scholar] [CrossRef]
- Peter, T.; Plonka, G.; Roşca, D. Representation of sparse Legendre expansions. J. Symb. Comput. 2013, 50, 159–169. [Google Scholar] [CrossRef]
- Peter, T.; Plonka, G. A generalized Prony method for reconstruction of sparse sums of eigenfunctions of linear operators. Inverse Prob. 2013, 29, 025001. [Google Scholar] [CrossRef]
- Peter, T.; Potts, D.; Tasche, M. Nonlinear approximation by sums of exponentials and translates. SIAM J. Sci. Comput. 2011, 33, 1920–1947. [Google Scholar] [CrossRef]
- Plonka, G.; Tasche, M. Prony methods for recovery of structured functions. GAMM Mitt. 2014, 37, 239–258. [Google Scholar] [CrossRef]
- Wischerhoff, M. Reconstruction of Structured Functions From Sparse Fourier Data. Ph.D. Thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, Göttingen, Germany, 2015. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).