Abstract
Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to a $2^k$-length DFT (where k is a positive integer), and is substantially more efficient for any other length N. Our method uses the eigenstructure of the DFT and the fact that sparse, real-valued eigenvectors can be found and used to advantage. Computation using our method uses only vector dot products and vector-scalar products.
1. Introduction
The DFT is a function that decomposes a discrete signal into its constituent frequencies. Computing the DFT is often an integral part of DSP systems, and so its efficient computation is very important. As is well known, the founding of DSP was significantly impacted by the development of the Cooley–Tukey FFT (Fast DFT) algorithm [1]. If x is a discrete signal of length N, the DFT of x is given by
$$X[k] = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi nk/N}, \qquad k = 0, 1, \ldots, N-1, \tag{1}$$
where $j = \sqrt{-1}$. If F is the Fourier transform matrix whose elements are the complex exponentials given in (1) taken in the proper order, the DFT X of x can be written
$$X = Fx. \tag{2}$$
The matrix F is unitary, i.e., the scaling factor $\frac{1}{\sqrt{N}}$ from (1) is included in F. This is in contrast to the conventional DSP definition of F, where the scaling factor is pushed into the inverse transform as the scale $\frac{1}{N}$. Here, we assume that this scaling factor is included in F.
As is well known, multiplication of two complex numbers in $\mathbb{C}$ requires either four real multiplications and two real additions or three real multiplications and five real additions. Traditional approaches that calculate the DFT using the various FFT algorithms do not completely eliminate the need for complex multiplications. Although complex multiplications can be easily implemented in sophisticated software like MATLAB or in high-level programming languages like Python, they are a harder task in hardware or on low-cost/low-performance digital signal processors. For example, complex multiplication in VHDL or the C programming language needs additional libraries, and the code or hardware produced is irregular (not vectorizable). Thus, we were motivated to look into a DFT computation method that eliminates the need for complex multiplications and additions.
The aforementioned Cooley–Tukey FFT algorithm is the most common method used for computing the DFT [1], though in a modified form to exploit the advantages of the butterfly using the split-radix method [2]. This algorithm uses a divide-and-conquer approach that decomposes the DFT of a composite length N into smaller DFTs to reduce the resulting computational complexity. As mentioned before, using the Cooley–Tukey FFT algorithm requires multiplication by complex twiddle factors (constants of magnitude 1 in $\mathbb{C}$) which stitch the smaller-length DFTs together when recombining to produce the larger N-length DFT. Even considering the prime factor FFT algorithm [3], the smallest un-factorizable DFT will require complex arithmetic.
Our proposed method uses the real-valued eigenvectors of the DFT for computation and a combiner matrix that essentially replaces the twiddle factor multiplications. Our method uses only vector dot products and vector-scalar products. Therefore, our method has the potential to perform well on hardware with dedicated multiply-and-accumulate (MAC) units or on hardware designed to operate on one-dimensional arrays, like vector processors. The eigenvectors obtained from the orthonormalization of DFT projection matrices are slightly sparse, which further improves computational efficiency. However, there are a few disadvantages to our method. If the size of the DFT is a power of 2, the Cooley–Tukey FFT outperforms our method. Furthermore, the CORDIC algorithm [4] will perform better on hardware that can efficiently do bit shifts. Another limitation, though relatively unimportant in today’s environment of inexpensive memory, is the need for extra memory to store the pre-computed eigenvectors. Our work in [5] showed the relationship between the eigenstructure of the discrete Hirschman transform and the eigenstructure of the DFT. That work inspired us to develop the method presented here. By extension, our method can also be used to calculate the discrete Hirschman transform first introduced in [6]. This method can also be extended to other orthonormal transforms which have real eigenvectors.
Our analysis has found that the proposed method requires fewer multiplications than a single-stage Cooley–Tukey algorithm. In fact, if the proposed method is combined with recursive algorithms like the prime factor FFT, the number of multiplications is comparable to the Cooley–Tukey FFT of size $2^k$, where k is a positive integer. Moreover, this method is not restricted by the size of the DFT, unlike the Cooley–Tukey FFT, which requires the input length N to be an integer power of 2, or the prime factor FFT, which requires N to be a product of coprime numbers.
The methodology is derived in Section 2, which shows the relationship between an input (data) vector, the DFT eigenvectors, and the Fourier transform output. Section 3 deals with the computation of the inverse DFT. DFT eigenvectors with some favourable properties were derived by Matveev [7], and Section 4 provides a brief description of that technique for finding the eigenvectors. In Section 5, a 5-point DFT example is worked using the method developed here. The accuracy of the proposed method is discussed in Section 6. In Section 7, the number of real-valued operations required for calculating the DFT using our algorithm is derived.
2. Methodology
A Fourier transform matrix F is diagonalizable and can be decomposed into its Jordan form $F = V \Lambda V^{-1}$, where V is a matrix whose columns are the eigenvectors of F and $\Lambda$ is a purely diagonal matrix containing the eigenvalues of F. Since F is unitary, the eigenvalues in $\Lambda$ have magnitude one and the eigenvector matrix V is also unitary. What may be less well-known is that it is possible to find a real and orthonormal set of eigenvectors for F [7,8]. We use $V^H$ as the complex conjugate transpose of V and $V^T$ as the transpose of V. However, since V is real, $V^H$ is the transpose of V, i.e., $V^H = V^T$. Of course, since V is unitary, $V^{-1} = V^H = V^T$. So, we have the more useful Jordan form
$$F = V \Lambda V^T. \tag{3}$$
Definition 1.
The function $\operatorname{diag}(\cdot)$ converts a vector to a diagonal matrix.
Let $e_i$ be a Euclidean basis vector whose ith element is 1 and all other elements are zero. Thus, $e_1 = [1, 0, \ldots, 0]^T$, $e_2 = [0, 1, 0, \ldots, 0]^T$, and so on. If $a = [a_1, a_2, \ldots, a_N]^T$, $\operatorname{diag}(a)$ is defined as
$$\operatorname{diag}(a) = \sum_{i=1}^{N} a_i\, e_i e_i^T.$$
Theorem 1.
If a and b are vectors of the same length, and $A = \operatorname{diag}(a)$ and $B = \operatorname{diag}(b)$, then $Ab = Ba$.
Proof.
Let $a = [a_1, a_2, \ldots, a_n]^T$ and $b = [b_1, b_2, \ldots, b_n]^T$. Then
$$Ab = [a_1 b_1, a_2 b_2, \ldots, a_n b_n]^T = Ba.$$
This completes the proof. □
If X is the Fourier transform of the vector x and F is the Fourier transform matrix,
$$X = Fx = V \Lambda V^T x = V \Lambda d, \tag{4}$$
where $d = V^T x$, and $\Lambda = \operatorname{diag}(\lambda)$ such that $\lambda$ is a column vector containing the eigenvalues of F. Note that $\Lambda d = \operatorname{diag}(d)\,\lambda = D\lambda$ from Theorem 1, where $D = \operatorname{diag}(d)$.
In (4), $\lambda$ adds the columns of the matrix product $VD$ in a particular manner defined by the eigenvalues of F. It is well known that there are only four unique eigenvalues for the Fourier transform: $1$, $-1$, $j$ and $-j$, where $j$ is the imaginary unit [8,9]. Denote the algebraic multiplicity of the DFT eigenvalues as $m_1$, $m_{-1}$, $m_j$ and $m_{-j}$ for the eigenvalues 1, $-1$, j and $-j$, respectively. So, for an N-point DFT, we have $m_1 + m_{-1} + m_j + m_{-j} = N$. From [8], these multiplicities are
$$(m_1, m_{-1}, m_j, m_{-j}) = \begin{cases} (m+1,\; m,\; m-1,\; m) & N = 4m, \\ (m+1,\; m,\; m,\; m) & N = 4m+1, \\ (m+1,\; m+1,\; m,\; m) & N = 4m+2, \\ (m+1,\; m+1,\; m,\; m+1) & N = 4m+3. \end{cases} \tag{5}$$
Without loss of generality, let $\lambda$ in (4) be constructed such that the unique eigenvalues are grouped together as shown in (6). This grouping simplifies our derivation. Note that the eigenvectors in the columns of V will correspond with their respective eigenvalues in the vector $\lambda$:
$$\lambda = [\underbrace{1, \ldots, 1}_{m_1},\; \underbrace{-1, \ldots, -1}_{m_{-1}},\; \underbrace{j, \ldots, j}_{m_j},\; \underbrace{-j, \ldots, -j}_{m_{-j}}]^T. \tag{6}$$
Let u be a $2 \times N$ matrix as defined in (7), where the first row contains $m_1$ 1’s and $m_{-1}$ $-1$’s and the second row contains $m_j$ 1’s and $m_{-j}$ $-1$’s (all other entries are zero):
$$u = \begin{bmatrix} 1 \cdots 1 & -1 \cdots -1 & 0 \cdots 0 & 0 \cdots 0 \\ 0 \cdots 0 & 0 \cdots 0 & 1 \cdots 1 & -1 \cdots -1 \end{bmatrix}. \tag{7}$$
With $z = [1 \;\; j]^T$ we have $\lambda = u^T z$, and (4) becomes
$$X = V D u^T z. \tag{8}$$
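To make the pipeline concrete, the following is a minimal NumPy sketch (our own code, not the paper’s implementation; the function and variable names are ours). It builds real orthonormal eigenvectors from the DFT projection matrices of Section 4 using a symmetric eigensolver, forms u and z as above, and checks $X = VDu^Tz$ against a standard FFT:

```python
import numpy as np

def dft_eigenvectors(N):
    """Real orthonormal DFT eigenvectors, grouped by eigenvalue 1, -1, j, -j."""
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT matrix
    blocks, lam = [], []
    for e in (1, -1, 1j, -1j):
        # Orthogonal projector onto the e-eigenspace (F^4 = I).
        P = ((np.eye(N) + F / e + (F @ F) / e**2 + (F @ F @ F) / e**3) / 4).real
        w, U = np.linalg.eigh(P)          # P is real symmetric
        blocks.append(U[:, w > 0.5])      # eigenvalue-1 columns span the eigenspace
        lam += [e] * blocks[-1].shape[1]
    return np.hstack(blocks), np.array(lam)

N = 7
V, lam = dft_eigenvectors(N)
u = np.vstack([lam.real, lam.imag])       # the 2 x N combiner of (7)
z = np.array([1.0, 1.0j])
assert np.allclose(u.T @ z, lam)          # lambda = u^T z
x = np.random.randn(N)
d = V.T @ x                               # real inner products only
X = V @ (np.diag(d) @ (u.T @ z))          # X = V D u^T z, cf. (8)
assert np.allclose(X, np.fft.fft(x) / np.sqrt(N))
```

Everything before the final combination with z is real arithmetic, which is the point of the construction.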
2.1. Formulations of Equation (8)
If the input vector x is complex such that $x = x_R + jx_I$, we can split the real and imaginary components, where $x_R = \operatorname{Re}(x)$ and $x_I = \operatorname{Im}(x)$. Since $V^T x = V^T x_R + jV^T x_I$, we can rewrite (8) by splitting the real and imaginary parts such that $d_R = V^T x_R$ and $d_I = V^T x_I$. It is clear that $d = d_R + jd_I$. Now (8) becomes
$$X = V D_R u^T z + jV D_I u^T z, \tag{9}$$
where $D_R = \operatorname{diag}(d_R)$ and $D_I = \operatorname{diag}(d_I)$. Alternatively, (9) can be implemented by modifying u and keeping z unchanged for the imaginary part. In this case, (9) changes to the forms in (10) and (11), where $\hat{u}$ is given by (12). The first column of $\hat{u}^T$ contains $-1$’s and 1’s and the second column contains 1’s and $-1$’s. Thus, $\hat{u}^T$ can be formed by swapping the columns of $u^T$ followed by negating the first column.
In (10) and (11), both terms on the right-hand side contain real and imaginary parts. This can be shown by factoring out z:
$$X = \left(V D_R u^T + V D_I \hat{u}^T\right) z = Wz,$$
where the matrix $W = V D_R u^T + V D_I \hat{u}^T$. The first column of W contributes to the real part of the DFT and its second column contributes to the imaginary part of the DFT. Therefore, (8) and (9) can be manipulated such that the components of the input that lead to the real and imaginary parts of the DFT are grouped together before computing W. Create grouped vectors such that
where the notation $a[k]$ represents the kth element of the vector a. The grouped quantities form a diagonal matrix of conformable size. Define a new combiner matrix as given in (13). Its first column is comprised of 1’s and $-1$’s followed by 1’s, and its second column consists of 1’s, $-1$’s, 1’s and $-1$’s.
With this information, the third formulation of (9) is obtained:
Another method to implement (9) is by rearranging the eigenvector matrix. Define
where the column vector $v_k$ is the kth eigenvector of F. Stacking the corresponding quantities row-wise (the semicolon represents the start of a new row) yields vectors and a matrix of conformable size. Now (9) can be rephrased as
Here we have presented four different formulations of (9) via (10), (11), (14) and (15). Each of these formulations affects a different factor in (8). In these equations, the only complex component is contained in either z or its modified counterpart. That vector is not necessary for the computational algorithm, only for theoretical completeness. Thus, all the multiplications and additions are real-valued. Therefore, we say that this method of computing the DFT is natively real-valued, because it can be computed using operations whose operands are all real numbers (in $\mathbb{R}$).
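As a quick check of the real/imaginary split, the following lines (continuing the sketch above, with V, u, z and N still in scope) implement (9) directly:

```python
# Complex input: two real filter-bank passes, combined at the end.
x = np.random.randn(N) + 1j * np.random.randn(N)
dR, dI = V.T @ x.real, V.T @ x.imag
X = V @ (dR * (u.T @ z)) + 1j * (V @ (dI * (u.T @ z)))
assert np.allclose(X, np.fft.fft(x) / np.sqrt(N))
```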
2.2. Real Inputs
If x is real such that $x = x_R$, the four different forms given in (10), (11), (14) and (15) simplify to
$$X = V \operatorname{diag}(V^T x)\, u^T z, \tag{16}$$
which is exactly (8), since $d = d_R = V^T x$. This is very straightforward in (10) and (11): in both these cases the second term is 0 since $d_I = 0$. The matrix products in the other forms simplify in the same way. Since x is real, the product $V D u^T$ in (8) results in an $N \times 2$ matrix whose first column contains the real component of the DFT X, and whose second column is the imaginary component of X. As mentioned before, the vector z is not necessary for the computational algorithm; z just clarifies that the first column is real and the second column is imaginary.
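The claim that $VDu^T$ is a real $N \times 2$ matrix holding the real and imaginary parts of X can be checked directly (again continuing the earlier sketch):

```python
# Real input: V D u^T is N x 2 and purely real; z only re-attaches the j.
x = np.random.randn(N)
M = V @ np.diag(V.T @ x) @ u.T
X = np.fft.fft(x) / np.sqrt(N)
assert np.allclose(M[:, 0], X.real) and np.allclose(M[:, 1], X.imag)
```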
2.3. A Filter Bank Model/Interpretation
Since $d = V^T x$, the diagonal entries of D are formed by the inner products of the DFT eigenvectors and the input x. Let $V = [v_1\; v_2\; \cdots\; v_N]$, where each column vector $v_i$ is a DFT eigenvector. Thus,
$$d_i = \langle v_i, x \rangle,$$
where the notation $\langle a, b \rangle$ represents the inner (dot) product of two vectors a and b. We use this to develop a filter bank description for the linear algebra in the above discussion. Specifically, we define $\tilde{v}_i$ as the time-reversed vector of $v_i$. Now, filtering the sequence x through the FIR filter $\tilde{v}_i$ produces the output sequence $y_i$ of length $2N-1$. Down-sampling this output by N (i.e., selecting the middle term) yields the dot product, i.e., $d_i = y_i[N]$, where $y_i[N]$ is the Nth element of $y_i$. A similar procedure is applied to the other eigenvectors.
Now consider the matrix product $VD$. Each diagonal entry of D scales the corresponding column of V, i.e., each eigenvector is scaled by the corresponding filter output. These scaled eigenvectors are then added or subtracted in the specific manner defined by u to give the real and imaginary components of the DFT result. This is why we call u the “combiner”. The combiner adds and subtracts the scaled eigenvectors to form the real components (from the first row of u) and the imaginary components (from the second row of u). The vector z is only provided for theoretical completeness so that we can combine the real and imaginary components for the DFT X. These steps are summarized in Figure 1.

Figure 1.
A block diagram showing the signal flow for our proposed DFT algorithm. Here $d_i$ is the $i$th diagonal entry of D. The scaler block multiplies the eigenvectors by the filter outputs (producing W), while the combiner block multiplies the scaled result with u (producing the real and imaginary components of X).
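The filter-bank reading is easy to confirm numerically. In this sketch (reusing V and N from the Section 2 code), the middle sample of each convolution output equals the corresponding inner product:

```python
# Each eigenvector, time-reversed, acts as an FIR filter on x.
x = np.random.randn(N)
for i in range(N):
    v = V[:, i]
    y = np.convolve(x, v[::-1])              # length 2N - 1 output
    assert np.isclose(y[N - 1], v @ x)       # middle term = <v_i, x>
```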
3. Inverse DFT
The IDFT (inverse DFT) computation follows a similar development. The IDFT has the same unique eigenvalues as the DFT; only the multiplicities of the eigenvalues j and $-j$ differ. Due to the conjugation difference between the DFT and the IDFT, the multiplicities of these two eigenvalues are exchanged. Consequently, the eigenvalue multiplicities of the IDFT are $m_1$, $m_{-1}$, $m_{-j}$ and $m_j$ for the eigenvalues 1, $-1$, j and $-j$, respectively.
Similarly, the eigenvectors are unchanged from the DFT for the eigenvalues 1 and $-1$, but they are exchanged for eigenvalues j and $-j$. Thus, the eigenvector matrix for the IDFT can be obtained directly from the DFT eigenvector matrix V.
Using this IDFT eigenvector matrix, the steps described in Section 2 can be used to compute the IDFT. The block diagram of the IDFT follows directly from these definitions, which replace the DFT computations used to derive Figure 1. As in the case of the DFT, symmetries found in the input sequence can be used to improve the computational efficiency. Moreover, if the input or the output sequence is known to be real-valued, the computational efficiency can be improved further. If the input to the IDFT is real, the procedure is similar to the DFT with real inputs.
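A sketch of this construction (reusing V, lam and N from the Section 2 code): the 1 and $-1$ blocks are kept, the j and $-j$ blocks are exchanged, and the same scale-and-combine pipeline then computes $F^H X$:

```python
# Swap the j and -j eigenvector blocks to obtain the IDFT eigenvectors.
k = int(np.sum(lam == 1) + np.sum(lam == -1))
mj, m_negj = int(np.sum(lam == 1j)), int(np.sum(lam == -1j))
V_inv = np.hstack([V[:, :k], V[:, k + mj:], V[:, k:k + mj]])
lam_inv = np.concatenate([lam[:k], [1j] * m_negj, [-1j] * mj])
X = np.fft.fft(np.random.randn(N)) / np.sqrt(N)
x_rec = V_inv @ ((V_inv.T @ X) * lam_inv)    # computes F^H X
assert np.allclose(x_rec, np.sqrt(N) * np.fft.ifft(X))
```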
However, if the output of the IDFT is known to be real, the input will be complex and will have some symmetries which can be exploited to improve computational performance. Let $X = X_R + jX_I$, where $X_R = \operatorname{Re}(X)$ and $X_I = \operatorname{Im}(X)$. If x is real, $X_R$ will be even symmetric and $X_I$ will be odd symmetric (ch. 5 of [10]). It is also known that the DFT eigenvectors are either even symmetric (for eigenvalues $\pm 1$) or odd symmetric (for eigenvalues $\pm j$) [8]. These eigenvector symmetries are further explored in Section 7. Let $v_e$ represent an even symmetric eigenvector (eigenvalues $\pm 1$) and $v_o$ denote an odd symmetric eigenvector (eigenvalues $\pm j$). It is well understood that the element-wise (Hadamard) product of two even sequences or two odd sequences will be even, and the product of an even and an odd sequence will be odd. We also know that the sum of the elements of an even sequence will be twice the sum of the first half of the sequence, and the sum of the elements of an odd sequence will be zero. Combining this information for the calculation of the inner products in (10), we get $\langle v_o, X_R \rangle = 0$ and $\langle v_e, X_I \rangle = 0$, while $\langle v_e, X_R \rangle$ and $\langle v_o, X_I \rangle$ require approximately $N/2$ multiplications each. Therefore, the last $m_j + m_{-j}$ diagonal entries of $D_R$ and the first $m_1 + m_{-1}$ diagonal elements of $D_I$ will be 0. Thus, both terms of (10) will be purely real. Similarly, comparable zero patterns arise for (14), again leading to a reduction in the number of operations. Comparable patterns also exist for (11) and (15).
4. Computing Orthonormal Eigenvectors of the DFT
Several methods exist to find the real and orthonormal eigenvectors of the DFT. The method described by McClellan and Parks [8] only yields real eigenvectors. These eigenvectors can be orthonormalized using the Gram–Schmidt process. Matveev [7] uses DFT projection matrices and their Grammian determinants to find the orthonormal eigenvectors directly. Alternatively, the DFT projection matrices [7,9] can be used in conjunction with the Gram–Schmidt process to obtain orthonormal eigenvectors. Our method performs equally well regardless of the method used to find the orthonormal real-valued eigenvectors.
4.1. Matveev Method
The first step in the Matveev method is to compute the four projection matrices of the DFT, one for each unique eigenvalue. If $P_e$ is the projection matrix corresponding to an eigenvalue e, F is the Fourier transform matrix and I is the identity matrix,
$$P_e = \frac{1}{4}\left(I + \frac{F}{e} + \frac{F^2}{e^2} + \frac{F^3}{e^3}\right), \qquad e \in \{1, -1, j, -j\}. \tag{18}$$
The rank of each of these matrices is tied to its corresponding eigenvalue multiplicity; thus, the matrix $P_e$ will have rank $m_e$. Therefore, to compute the orthogonal eigenvectors, only the first $m_1$ and $m_{-1}$ columns of $P_1$ and $P_{-1}$ are considered; the first columns of $P_j$ and $P_{-j}$ are ignored and only the next $m_j$ and $m_{-j}$ columns are considered. The first columns of $P_j$ and $P_{-j}$ are ignored because they are zeros. The kept columns form independent eigenvectors of F corresponding to each respective eigenvalue.
Let the elements of the matrices in (18) be represented as $P_e(r, c)$. For example, if the indexing starts from 0, the element in the second row and first column of $P_1$ will be $P_1(1, 0)$; therefore, the last element will be $P_1(N-1, N-1)$. Let the columns of these matrices be represented as $p_k^{(e)}$, where $p_k^{(e)}$ is the kth column vector of $P_e$. For example, the third column vector of $P_1$ will be $p_2^{(1)}$, again assuming the indexing starts from 0. Note that these projection matrices are symmetric; therefore, the kth column is also the kth row.
The orthogonal eigenvectors are given by the determinants of the following matrices. For eigenvalues $e = \pm 1$,
where the resulting vectors form a set of orthogonal eigenvectors corresponding to the eigenvalues 1 and $-1$; the general expression gives the kth eigenvector. Because these matrices contain scalars (inner products of the projection matrix columns) and the vectors $p_k^{(e)}$, the determinant can be computed by expanding along the last column and multiplying these vectors by their corresponding cofactors. Similarly, for eigenvalues $e = \pm j$,
where the last expression is the general equation; the determinant is again expanded along the last column. Note that for the projection matrices corresponding to $\pm j$, the elements of the first row, the first column and the first column vector are not considered, since they are zeros. The determinant for these matrices can be computed using the method explained in the previous paragraph for the real eigenvalues.
In both cases, the resulting vectors are orthogonal but not unit length. After normalization, the columns of the eigenvector matrix V are
Although (18) contains powers of the Fourier transform matrix, it is not necessary to numerically calculate those powers. Using the properties of the DFT, $F^3 = F^H$, where $F^H$ represents the complex conjugate transpose of F. $F^2$ will be a real matrix such that $F^2 e_k = e_{(N-k) \bmod N}$, where $e_k$ is the kth column of the identity matrix I, assuming the indexing starts from 0.
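These identities make the projection matrices cheap to form. The following standalone sketch (our code) builds all four projectors from $\operatorname{Re}(F)$, $\operatorname{Im}(F)$ and the index-reversal permutation alone, and verifies that they project onto the right eigenspaces:

```python
import numpy as np

# Expanding (18) with F^2 = R (index reversal) and F^3 = F^H gives four
# real matrices built from Re(F), Im(F), I and R only.
N = 7
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
R = np.eye(N)[(-n) % N]                  # R e_k = e_{(N-k) mod N}
I = np.eye(N)
P = {1:   (I + R) / 4 + F.real / 2,      # (I + F + F^2 + F^3)/4
     -1:  (I + R) / 4 - F.real / 2,
     1j:  (I - R) / 4 + F.imag / 2,
     -1j: (I - R) / 4 - F.imag / 2}
for e, Pe in P.items():
    assert np.allclose(F @ Pe, e * Pe)   # columns are eigenvectors for e
    assert np.allclose(Pe @ Pe, Pe)      # idempotent: a true projector
```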
4.2. Gram–Schmidt Orthonormalization
Numerically similar orthonormal eigenvectors can be found from (18) via the Gram–Schmidt orthonormalization process. As discussed before, the matrices are not full rank. Therefore, we only consider the first $m_1$ and $m_{-1}$ columns of $P_1$ and $P_{-1}$; we ignore the first column of $P_j$ and $P_{-j}$ and consider the next $m_j$ and $m_{-j}$ columns. In our observation, using Gram–Schmidt orthonormalization on the DFT projection matrices gives the fastest and most numerically accurate results when run in MATLAB©. Accuracy tests and their results are discussed in Section 6. We observed that the eigenvectors obtained using the Matveev method are somewhat sparse. That same sparse structure is retained by the Gram–Schmidt orthonormalization of the DFT projection matrices as well as the Gram–Schmidt orthonormalization of the McClellan and Parks eigenvectors.
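A sketch of this route (continuing the code above, with P and N in scope); QR factorization plays the role of Gram–Schmidt here, and the column selection follows the paper’s rule, which assumes the kept columns are independent:

```python
# Orthonormalize the selected columns of each projector.
V_blocks = []
for e in (1, -1, 1j, -1j):
    Pe = P[e]
    m = round(np.trace(Pe))              # rank of a projector equals its trace
    start = 0 if e in (1, -1) else 1     # first column of P_j, P_-j is zero
    Q, _ = np.linalg.qr(Pe[:, start:start + m])
    V_blocks.append(Q)
V = np.hstack(V_blocks)
assert np.allclose(V.T @ V, np.eye(N))   # orthonormal real eigenvectors
```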
4.3. Structure of Eigenvectors
The eigenvectors obtained using the McClellan and Parks method are collinear with the eigenvectors from the DFT projection matrices; they differ only in magnitude. As discussed before, orthonormalizing these eigenvectors makes them slightly sparse. These orthonormal eigenvectors have the following structure:
where the matrix $V_e$ represents the set of orthonormal eigenvectors corresponding to eigenvalues $\pm 1$ and $V_o$ represents the orthonormal eigenvectors corresponding to eigenvalues $\pm j$. The columns of these matrices represent the eigenvectors. The vectors in $V_e$ are even symmetric and the vectors in $V_o$ are odd symmetric. Additionally, if the DFT size N is even, the $(N/2)$th row of $V_o$ will be 0 to retain the odd symmetry. The sparsity of the eigenvectors is shown in Figure 2. From this plot, it is clear that the orthonormal DFT eigenvectors are somewhat sparse. This sparse nature can be exploited to reduce the number of operations required while computing the DFT, as discussed in Section 7.
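The symmetry structure is easy to verify numerically (reusing V and lam from the Section 2 sketch): every eigenvector for $\pm 1$ is fixed by the index-reversal map, and every eigenvector for $\pm j$ is negated by it:

```python
# Even/odd symmetry check under n -> (-n) mod N.
flip = lambda a: a[(-np.arange(len(a))) % len(a)]
for i in range(V.shape[1]):
    sign = 1 if lam[i] in (1, -1) else -1    # even for +/-1, odd for +/-j
    assert np.allclose(flip(V[:, i]), sign * V[:, i])
```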

Figure 2.
Plot showing the percentage of elements in the orthonormal eigenvectors that are zero against the length of the DFT. Note the slight upward trend of the plot: eigenvectors of larger DFTs are sparser than eigenvectors of smaller ones.
The orthonormal eigenvectors of an 8-point DFT are given below to highlight the sparsity. The eigenvectors are the columns of the matrix. The eigenvalue multiplicities of an 8-point DFT are 3, 2, 1, 2 for eigenvalues 1, $-1$, j, $-j$, respectively. Therefore, the first three columns are the eigenvectors corresponding to the eigenvalue 1, the next two columns are for eigenvalue $-1$, the column after that is for eigenvalue j, and the last two columns correspond to eigenvalue $-j$.
5. An Example with 5-Point DFT
5.1. Forward DFT
The eigenvalue multiplicity for a 5-point DFT is $\{2, 1, 1, 1\}$ for the unique eigenvalues $\{1, -1, j, -j\}$; equivalently, the eigenvalues are $\{1, 1, -1, j, -j\}$. The orthonormal eigenvectors from the Matveev method as described in Section 4, computed in MATLAB© to double precision, are
Now, suppose that the input signal is . Then
If $d = V^T x$ and $D = \operatorname{diag}(d)$, then
From the eigenvalue multiplicity we know that the DFT X of x is
5.2. Inverse DFT
As mentioned in Section 3, the eigenvectors for the inverse operation can be constructed from (19) by interchanging the eigenvectors corresponding to the eigenvalues j and $-j$. Thus, the eigenvector matrix for the 5-point IDFT is
If d and D are computed as before, now using the IDFT eigenvector matrix and the input X,
The eigenvalue multiplicity for the 5-point IDFT is $\{2, 1, 1, 1\}$ for eigenvalues $\{1, -1, j, -j\}$. Summing the columns of W using the information from the eigenvalue multiplicity, we get
where $w_k$ is the kth column of W. Thus,
This is the signal that we started with while computing the forward DFT.
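Since the numeric matrices above are typeset separately, here is a runnable stand-in for the whole 5-point example, using the dft_eigenvectors helper from the Section 2 sketch and a placeholder input of our choosing (the paper’s example input differs):

```python
# 5-point forward and inverse round trip.
V, lam = dft_eigenvectors(5)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # placeholder test signal
X = V @ ((V.T @ x) * lam)                    # forward DFT
k = 3                                        # m_1 + m_{-1} = 2 + 1
V_inv = np.hstack([V[:, :k], V[:, 4:], V[:, 3:4]])   # swap j and -j columns
x_rec = V_inv @ ((V_inv.T @ X) * lam)        # m_j = m_{-j} = 1, so lam is reused
assert np.allclose(X, np.fft.fft(x) / np.sqrt(5))
assert np.allclose(x_rec, x)
```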
6. Accuracy and Performance of DFT Eigenvector Computation
The two different methods used to calculate the orthonormal eigenvectors yield different computational accuracies. These differences arise from the limitations imposed by the finite-precision data types used in real-world machines and from the differing operations used to produce the sparse eigenvectors. We compare the DFTs created using the Matveev method and the Gram–Schmidt orthonormalization of DFT projection matrices. All tests are run using the double-precision floating-point data type in MATLAB version 2020b on a machine with an Intel(R) Core(TM) i7-6700HQ @ 2.60 GHz 4-core CPU, 8 GB RAM and the Windows 10 operating system.
Figure 3 indicates the computational error due to imprecise DFT eigenvectors. Suppose x is a vector in $\mathbb{R}^n$ and $y = \mathrm{IDFT}(\mathrm{DFT}(x))$. The figure shows the mean squared error (MSE) of $x - y$ in decibels. The DFT and IDFT are calculated using the Matveev eigenvectors, using the Gram–Schmidt orthonormalization of the DFT projection matrices, and using the MATLAB fft() and ifft() functions. The MSE is calculated via (21), (22) and (23), where n is the length of the vectors x and y and $\|x\|$ represents the 2-norm of x. The error shown in Figure 3 is caused by the limitation imposed by the machine that was used to compute the eigenvectors. More powerful machines can be used to pre-compute the eigenvectors to a high enough precision such that this error can be eliminated.
$$\mathrm{MSE_{dB}} = 10 \log_{10}\left(\overline{\mathrm{MSE}}\right), \tag{21}$$
where
$$\overline{\mathrm{MSE}} = \frac{1}{1000} \sum_{s=1}^{1000} \mathrm{MSE}_s \tag{22}$$
and
$$\mathrm{MSE}_s = \frac{\|x_s - y_s\|^2}{n}. \tag{23}$$

Figure 3.
Plot showing the mean squared error when computing the DFT and IDFT using the proposed method. The eigenvectors were calculated using the Matveev method and the Gram–Schmidt method. Since the plot shows the error after the forward and inverse computation, the error in just one of the directions will be slightly less than what is shown here. Note that this error is caused by the limitations of the machine that was used to calculate the eigenvectors. In practice, the eigenvectors can be pre-computed with high enough precision to eliminate the error shown in this plot.
Here, $s$ indexes one sample out of the 1000 samples, as explained below.
For each n-point DFT, 1000 vectors are randomly generated such that each element of a vector x is an integer between −128 and 127. These limits are chosen because they can represent the pixels of an 8-bit image after centering the pixel values at 0. For each vector x, the corresponding y and $\mathrm{MSE}_s$ are computed. Equation (22) gives the average over the 1000 samples and (21) gives the average MSE in dB. The plot in Figure 3 shows the error after computing the forward and inverse DFT. Therefore, the error when finding the DFT in one direction will be slightly lower than what is shown and will depend on the size of the DFT. Clearly, the computational performance of the eigenvector methods suffers a bit, especially as the length of the DFT increases. The differences are not significant for lengths up to 20, and the method’s complexity analysis in Section 7 will show why these approaches are attractive.
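The experiment is straightforward to reproduce in outline. This sketch reflects our reading of (21)–(23); any forward/inverse pair (e.g., one built from precomputed eigenvectors) can be passed in, and NumPy’s own transforms serve as a baseline here:

```python
import numpy as np

def mse_db(fwd, inv, n, trials=1000, rng=np.random.default_rng(0)):
    total = 0.0
    for s in range(trials):
        x = rng.integers(-128, 128, size=n).astype(float)   # centered 8-bit range
        y = inv(fwd(x))
        total += np.linalg.norm(x - y.real) ** 2 / n        # MSE_s, cf. (23)
    return 10 * np.log10(total / trials)                    # cf. (21) and (22)

print(mse_db(np.fft.fft, np.fft.ifft, n=16))
```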
The time required to calculate the eigenvectors from the DFT projection matrices and to orthonormalize them with the Matveev method and with the Gram–Schmidt process is summarized in Table 1. In this table, the time is measured over 10,000 executions of each method and is shown in units of seconds. The results are consistent with our rudimentary analysis of the computational complexity of each method: $O(N^3)$ for the Gram–Schmidt process and $O(N^4)$ for the Matveev method (assuming each required matrix determinant is computed with $O(N^3)$ complexity). The discrepancy arises from the need to calculate a determinant for each eigenvector in the Matveev method, whereas each vector dot product in the Gram–Schmidt process can be calculated in $O(N)$ time.

Table 1.
Computation time in seconds for 10,000 runs.
7. Number of Operations
7.1. Using Generalized Eigenvectors
At first glance at (8), computing each element of $d = V^T x$ seems to need N multiplications and $N-1$ additions. However, as briefly discussed in Section 3, it is known that all the eigenvectors of the DFT are either even or odd symmetric sequences [8]. Eigenvectors corresponding to eigenvalues 1 or $-1$ will be even symmetric, and eigenvectors corresponding to eigenvalues j or $-j$ will be odd symmetric. Using this property, if an eigenvector $v_k$ is even symmetric, the inner product $d_k = v_k^T x$ in (16) can be computed using $(N+1)/2$ real multiplications and $N-1$ real additions (for odd N). If $v_k$ is an odd symmetric eigenvector, the inner product requires $(N-1)/2$ real multiplications and $N-2$ real additions, because the first element of $v_k$ will be 0 in this case. For example, consider an even eigenvector of a 5-point DFT, $v = [v_0, v_1, v_2, v_2, v_1]^T$. For an input vector $x = [x_0, x_1, x_2, x_3, x_4]^T$,
$$v^T x = v_0 x_0 + v_1 x_1 + v_2 x_2 + v_2 x_3 + v_1 x_4 = v_0 x_0 + v_1 (x_1 + x_4) + v_2 (x_2 + x_3).$$
The first form requires five multiplications, and the second form requires three multiplications; the number of additions in both cases is four. Now consider an odd eigenvector of a 5-point DFT, $v = [0, v_1, v_2, -v_2, -v_1]^T$:
$$v^T x = v_1 (x_1 - x_4) + v_2 (x_2 - x_3).$$
Here, using the symmetry, there are two multiplications and three additions. Similar relationships can be found for even-length sequences.
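The folded forms generalize as follows for odd N; this sketch (our code) implements both reduced inner products and checks them against the plain dot product:

```python
import numpy as np

# Folded inner products for odd N: (N+1)/2 multiplications for an
# even-symmetric eigenvector, (N-1)/2 for an odd-symmetric one.
def dot_even(v, x):                      # assumes v[k] == v[N-k] (indices mod N)
    h = (len(x) - 1) // 2
    return v[0] * x[0] + v[1:h + 1] @ (x[1:h + 1] + x[:-h - 1:-1])

def dot_odd(v, x):                       # assumes v[0] == 0 and v[k] == -v[N-k]
    h = (len(x) - 1) // 2
    return v[1:h + 1] @ (x[1:h + 1] - x[:-h - 1:-1])

x = np.random.randn(5)
v_even = np.array([3.0, 1.0, 2.0, 2.0, 1.0])
v_odd = np.array([0.0, 1.0, 2.0, -2.0, -1.0])
assert np.isclose(dot_even(v_even, x), v_even @ x)
assert np.isclose(dot_odd(v_odd, x), v_odd @ x)
```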
We know that the number of even or odd eigenvectors depends on the eigenvalue multiplicities (which in turn depend on N). The number of even eigenvectors is $m_1 + m_{-1}$ and the number of odd eigenvectors is $m_j + m_{-j}$. Thus, for computing $d = V^T x$ in (16), the total number of multiplications $M_d$ and additions $A_d$ required are (for odd N; even N is analogous)
$$M_d = (m_1 + m_{-1})\frac{N+1}{2} + (m_j + m_{-j})\frac{N-1}{2}, \qquad A_d = (m_1 + m_{-1})(N-1) + (m_j + m_{-j})(N-2).$$
For large values of N, the eigenvalue multiplicities can be approximated as equal to each other (each approximately $N/4$), so that $m_1 + m_{-1} \approx m_j + m_{-j} \approx N/2$. Therefore, for large N,
$$M_d \approx \frac{N^2}{2}, \qquad A_d \approx N^2.$$
When multiplying the eigenvector matrix with D in (4) and (8), the eigenvector symmetry can be utilized again. Let $W = VD$. Let the number of multiplications required for computing W be $M_W$ and the number of additions be $A_W$. Due to the symmetry of the eigenvectors in V, only about half of the entries in each scaled column are distinct, so $M_W \approx N(N+1)/2$, and this step requires no additions. As noted before, the diagonal scaling just changes the magnitude of each eigenvector; therefore, the columns of W retain the even or odd symmetries of the eigenvectors. From (8) we know that the columns of W are added in a specific pattern to determine the DFT. Thus, the result of this sum will also be even or odd symmetric. Therefore, while adding the columns of W, only the top half of the elements needs to be computed (along with the “middle” value). Thus, the number of additions required is approximately
$$A_W \approx (N-2)\,\frac{N+1}{2}.$$
When N is large,
$$A_W \approx \frac{N^2}{2}.$$
The total number of multiplications and additions required is $M = M_d + M_W$ and $A = A_d + A_W$. For large values of N, these are approximated as
$$M \approx N^2, \qquad A \approx \frac{3N^2}{2}.$$
7.2. Using the Sparsity of the Eigenvectors
Further gains can be obtained if the sparse eigenvectors obtained from the Matveev method (Section 4) are used to find the orthonormal DFT eigenvectors. The Matveev DFT eigenvectors contain a few patterns which can be exploited as explained below.
7.2.1. Zeros in Many Eigenvectors
Elements at the starting and ending indices of many of these eigenvectors are zeros, and more of these elements are zeros for larger values of N. For example, if the first even eigenvector calculated using the Matveev method has no zero entries, the second eigenvector will have zeros in its first and last entries, the third will have zeros in its first two and last two entries, and so on. A similar pattern also exists for the odd eigenvectors.
7.2.2. Repeating Elements
The first eigenvectors corresponding to the eigenvalues 1 and $-1$ are made of only two distinct scalar elements each, i.e., they are of the form $[a, b, b, \ldots, b]^T$.
7.2.3. Zeros in Eigenvectors of Even Length
The middle element of the eigenvectors corresponding to the eigenvalues $\pm j$ is zero if the eigenvectors are of even length. Thus, if N is the length of the eigenvector, the $(N/2)$th element (0-indexed) will be 0 if N is even. For example, $[0, v_1, v_2, 0, -v_2, -v_1]^T$ is a possible configuration of such an eigenvector. Note that in this example, the first element is 0 because the vector is odd symmetric.
Using this new information, the number of multiplications and additions required for calculating $d = V^T x$ is
For the matrix multiplication $W = VD$, the number of scalar multiplications is likewise reduced, as is the number of scalar additions required for adding the column vectors of W.
If N is very large, the number of multiplications and additions required using Matveev’s DFT eigenvectors is approximated as follows:
If the input is even or odd symmetric, the number of multiplications and additions can be further reduced. In contrast to (33) and (34), brute-force computation of the DFT for a real input requires $2N^2$ real multiplications and approximately an equal number of additions. Thus, the method proposed here performs better than brute-force DFT computation. Using the Matveev eigenvectors, our method also outshines the DFT calculated via the $N/2$-point DFT approach. Moreover, unlike most FFT algorithms, our method does not require that N be highly composite. One can combine our approach with deconstructive algorithms like the Cooley–Tukey algorithm or the prime factor algorithm to yield efficient algorithms for computing the DFT of large sequences. This approach is certainly effective when the length N is not an integer power of 2. Figure 4 shows the number of multiplications required for computing the DFT using various methods.

Figure 4.
Number of real multiplications required for computing the DFT using various methods over a range of lengths N. The “recursive Matveev’s DFT” plot is obtained by recursively dividing the DFT of length N into smaller DFTs and using the Matveev eigenvectors for the computation of the smaller DFTs. Note that the $N/2$-point DFT is valid only if N is even, and the Cooley–Tukey FFT shown in this plot is valid only if N is an integer power of 2.
7.3. Complex Inputs
If the input $x \in \mathbb{C}^N$, then (10), (11), (14) or (15) can be used to keep the computation natively real. In this case, the number of real multiplications doubles, and the number of real additions is slightly more than double. This is easiest to see in (10) and (11): the multiplications and additions are doubled because there are two terms, and an extra $2N$ real additions are required to add the real and imaginary parts from these two terms. Similar results can also be derived from (14) and (15). Assuming the use of Matveev’s eigenvectors, (33) and (34) change to
where the two totals represent the number of real multiplications and additions required if x is complex.
7.4. Example
Suppose we must determine the 2D DFT of a high-definition image with resolution $1920 \times 1080$. When using the row–column approach for finding the 2D DFT, the transform is first taken along the columns, followed by a transform along the rows. Version R2020b of MATLAB© was not sufficiently precise to determine the Matveev eigenvectors to high enough accuracy for any $N > 20$. As mentioned before, this cap of around 20 stems from the limitations of the machine that was used to compute the eigenvectors. In practice, this cap will be higher, or it can be eliminated by pre-computing the eigenvectors with high accuracy. It is also possible that a more numerically sound version of the Matveev algorithm can be found. This numerical accuracy issue occurs in every DFT computational algorithm, to varying degrees and impacts. Typically, however, the DFT is calculated by decomposing the larger computation into smaller DFTs in a divide-and-conquer strategy.
Since both 1080 and 1920 have many factors, it is possible to decompose the DFT into smaller DFTs of length less than or equal to 20. For instance, consider first taking the 1D DFT along the columns. We know that $1080 = 8 \times 27 \times 5$. Therefore, we can first use the prime-factor algorithm [3] to divide the 1080-point DFT into separate DFTs of length 8, 27 and 5. The 8- and 27-point DFTs can be recursively divided into smaller lengths via decimation-in-time. The final 9-point and 5-point DFT butterflies are computed using the proposed method, which results in 24,800 real multiplications. Computing the 2-point DFTs is trivial, and the twiddle factors require 1,620 real multiplications. Thus, using this method to find a 1080-point DFT requires a total of 26,420 multiplications. For the 2D DFT, this has to be repeated 1920 times, for a total of around 50.7 million multiplications. Now, take the DFT along the rows. All the rows together need around 49.1 million multiplications. Thus, in total, the 2D DFT uses approximately 99.8 million multiplications.
8. Comparison with the 5-Point DFT Butterfly Algorithm
Unlike the 4-point DFT, computing the 5-point DFT requires multiplication by complex twiddle factors, as shown in Figure 5. Our method performs better due to the lack of complex multiplications. Using our method, a 5-point DFT requires 20 multiplications and 24 additions from (33) and (34). On the other hand, the brute-force, direct method requires 41 multiplications and 36 additions. We chose the 5-point DFT butterfly because this is the smallest DFT that contains all the unique DFT eigenvalues, i.e., $\{1, -1, j, -j\}$. In contrast, the eigenvalues of the 4-point DFT are $\{1, 1, -1, -j\}$; the eigenvalue j does not occur.

Figure 5.
Signal flow graph of a 5-point DFT. Here $W = e^{-j2\pi/5}$ is the 5-point twiddle factor.
9. Conclusions
In this paper, we have shown how the discrete Fourier transform can be efficiently calculated using its eigenvectors in a natively real-valued algorithm. Our proposed method uses (sparse) real eigenvectors of the DFT. The resulting algorithms are efficient when compared to the Cooley–Tukey FFT algorithm, and all steps are natively real-valued, so no complex multiplications are ever required! Our proposed method is suitable for higher dimensional DFTs, can be extended to other unitary transforms, and requires no significant structural change for differing lengths N. An inverse DFT (IDFT) algorithm is presented that possesses the same qualities and is nearly identical to the efficient DFT algorithm.
Author Contributions
Conceptualization, R.T., V.D. and L.S.D.; methodology, R.T., V.D. and L.S.D.; software, R.T.; validation, R.T. and V.D.; formal analysis, R.T. and V.D.; investigation, R.T.; resources, R.T., V.D. and L.S.D.; data curation, R.T.; writing—original draft preparation, R.T.; writing—review and editing, R.T., V.D. and L.S.D.; visualization, R.T., V.D. and L.S.D.; supervision, V.D. and L.S.D.; project administration, V.D.; funding acquisition, V.D. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
1. Cooley, J.W.; Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 1965, 19, 297–301.
2. Duhamel, P.; Hollmann, H. ’Split radix’ FFT algorithm. Electron. Lett. 1984, 20, 14–16.
3. Good, I.J. The Interaction Algorithm and Practical Fourier Analysis. J. R. Stat. Soc. Ser. B Methodol. 1958, 20, 361–372.
4. Volder, J.E. The CORDIC Trigonometric Computing Technique. IRE Trans. Electron. Comput. 1959, EC-8, 330–334.
5. Thomas, R.; DeBrunner, V.; DeBrunner, L. How the Discrete Hirschman Transform Inherits its Eigenstructure from the DFT. In Proceedings of the 2020 54th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 1–4 November 2020; pp. 858–862.
6. Przebinda, T.; DeBrunner, V.; Ozaydin, M. The optimal transform for the discrete Hirschman uncertainty principle. IEEE Trans. Inf. Theory 2001, 47, 2086–2090.
7. Matveev, V.B. Intertwining relations between the Fourier transform and discrete Fourier transform, the related functional identities and beyond. Inverse Probl. 2001, 17, 633.
8. McClellan, J.; Parks, T. Eigenvalue and eigenvector decomposition of the discrete Fourier transform. IEEE Trans. Audio Electroacoust. 1972, 20, 66–74.
9. Candan, C. On the eigenstructure of DFT matrices [DSP education]. IEEE Signal Process. Mag. 2011, 28, 105–108.
10. Mitra, S.K. Digital Signal Processing, 4th ed.; McGraw-Hill: New York, NY, USA, 2010.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).