
A Sparse Algorithm for Computing the DFT Using Its Real Eigenvectors

Department of Electrical and Computer Engineering, Florida State University, Tallahassee, FL 32306, USA
* Authors to whom correspondence should be addressed.
Signals 2021, 2(4), 688-705; https://doi.org/10.3390/signals2040041
Submission received: 30 June 2021 / Revised: 20 September 2021 / Accepted: 30 September 2021 / Published: 11 October 2021

Abstract

Direct computation of the discrete Fourier transform (DFT) and its FFT computational algorithms requires multiplication (and addition) of complex numbers. Complex number multiplication requires four real-valued multiplications and two real-valued additions, or three real-valued multiplications and five real-valued additions, as well as the requisite added memory for temporary storage. In this paper, we present a method for computing a DFT via a natively real-valued algorithm that is computationally equivalent to an $N = 2^k$-length DFT (where $k$ is a positive integer), and is substantially more efficient for any other length $N$. Our method uses the eigenstructure of the DFT, and the fact that sparse, real-valued eigenvectors can be found and used to advantage. Computation using our method uses only vector dot products and vector–scalar products.

1. Introduction

The DFT is a function that decomposes a discrete signal into its constituent frequencies. Computing the DFT is often an integral part of DSP systems, and so its efficient computation is very important. As is well known, the founding of DSP was significantly impacted by the development of the Cooley–Tukey fast Fourier transform (FFT) algorithm [1]. If $x$ is a discrete signal of length $N$, the DFT of $x$ is given by
$$X(k) = \frac{1}{\sqrt{N}}\sum_{n=0}^{N-1} x(n)\,e^{-j\frac{2\pi}{N}nk},$$
where $k = 0, 1, \ldots, N-1$. If $F$ is the Fourier transform matrix whose elements are the complex exponentials given in (1), taken in the proper order, the DFT $X$ of $x$ can be written
$$X = Fx$$
The matrix $F$ is unitary, i.e., the scaling factor $1/\sqrt{N}$ from (1) is included in $F$. This is in contrast to the conventional DSP definition of $F$, where the scaling is instead pushed into the inverse transform as the factor $1/N$. Here, we assume that the $1/\sqrt{N}$ scaling factor is included in $F$.
As is well known, multiplication of two complex numbers in $\mathbb{C}$ requires either four real multiplications and two real additions, or three real multiplications and five real additions. Traditional approaches to calculating the DFT using the various FFT algorithms do not completely eliminate the need for complex multiplications. Although complex multiplications are easily implemented in sophisticated software like MATLAB or in high-level programming languages like Python, they are a harder task in hardware or on low-cost/low-performance digital signal processors. For example, complex multiplication in VHDL or the C programming language needs additional libraries, and the code or hardware produced is irregular (not vectorizable). Thus, we were motivated to look into a DFT computation method that eliminates the need for complex multiplications and additions.
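As a concrete illustration of the two operation counts just quoted, the following minimal Python sketch (ours, not from the paper) implements both recipes; `cmul_3mult` trades one multiplication for three extra additions plus the temporaries `k1`–`k3` (the extra storage mentioned above):

```python
def cmul_4mult(ar, ai, br, bi):
    """(ar + j*ai)(br + j*bi): 4 real multiplications, 2 real additions."""
    return ar * br - ai * bi, ar * bi + ai * br

def cmul_3mult(ar, ai, br, bi):
    """Same product using 3 real multiplications and 5 real additions."""
    k1 = br * (ar + ai)      # shared term
    k2 = ar * (bi - br)
    k3 = ai * (br + bi)
    return k1 - k3, k1 + k2  # (real part, imaginary part)

assert cmul_4mult(1.0, 2.0, 3.0, 4.0) == cmul_3mult(1.0, 2.0, 3.0, 4.0)
```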
The aforementioned Cooley–Tukey FFT algorithm is the most common method used for computing the DFT [1], though in a modified form that exploits the advantages of the $N = 2^2 = 4$ butterfly using the split-radix method [2]. This algorithm uses a divide-and-conquer approach that decomposes a DFT of composite length $N$ into smaller DFTs to reduce the resulting computational complexity. As mentioned before, the Cooley–Tukey FFT algorithm requires multiplication by complex twiddle factors (constants of magnitude 1 in $\mathbb{C}$), which stitch the smaller-length DFTs together when recombining to produce the larger $N$-length DFT. Even considering the prime factor FFT algorithm [3], the smallest non-factorizable DFT will require complex arithmetic.
Our proposed method uses the real-valued eigenvectors of the DFT for computation and a combiner matrix that essentially replaces the twiddle-factor multiplications. Our method uses only vector dot products and vector–scalar products; therefore, it has the potential to perform well on hardware with dedicated multiply-and-accumulate (MAC) units or on hardware designed to operate on one-dimensional arrays, like vector processors. The eigenvectors obtained from the orthonormalization of the DFT projection matrices are slightly sparse, which further improves computational efficiency. However, there are a few disadvantages to our method. If the size of the DFT is a power of 2, the Cooley–Tukey FFT outperforms our method. Furthermore, the CORDIC algorithm [4] will perform better on hardware that can efficiently do bit shifts. Another limitation, though relatively unimportant in today's environment of inexpensive memory, is the need for extra memory to store the pre-computed eigenvectors. Our work in [5] showed the relationship between the eigenstructure of the discrete Hirschman transform and the eigenstructure of the DFT; that work inspired us to develop the method presented here. By extension, our method can also be used to calculate the discrete Hirschman transform first introduced in [6], and it can be extended to other orthonormal transforms which have real eigenvectors.
Our analysis has found that the proposed method requires fewer multiplications than a single-stage Cooley–Tukey algorithm. In fact, if the proposed method is combined with recursive algorithms like the prime factor FFT, the number of multiplications is comparable to that of the Cooley–Tukey FFT of size $2^k$, $k \in \mathbb{Z}^+$. Moreover, this method is not restricted by the size of the DFT, unlike the Cooley–Tukey FFT, which requires the input length $N$ to be an integer power of 2, or the prime factor FFT, which requires $N$ to be a product of coprime numbers.
The methodology is derived in Section 2, which shows the relationship between an input (data) vector, the DFT eigenvectors, and the Fourier transform output. Section 3 deals with the computation of the inverse DFT. DFT eigenvectors with some favourable properties were derived by Matveev [7], and Section 4 provides a brief description of that technique for finding the eigenvectors. In Section 5, a 5-point DFT example is worked using the method developed here. The accuracy of the proposed method is discussed in Section 6. In Section 7, the number of real-valued operations required for calculating the DFT using our algorithm is derived.

2. Methodology

A Fourier transform matrix $F$ is diagonalizable and can be decomposed into its Jordan form $F = V\Lambda V^{-1}$, where $V$ is a matrix whose columns are the eigenvectors of $F$ and $\Lambda$ is a purely diagonal matrix containing the eigenvalues of $F$. Since $F$ is unitary, the eigenvalues in $\Lambda$ have magnitude one and the eigenvector matrix $V$ is also unitary. What may be less well known is that it is possible to find a real and orthonormal set of eigenvectors for $F$ [7]. We use $V^H$ for the complex conjugate transpose of $V$ and $V^T$ for the transpose of $V$; since $V$ is real, $V^H = V^T$. Of course, since $V$ is unitary, $V^{-1} = V^H = V^T$. So, we have the more useful Jordan form
$$F = V\Lambda V^T$$
Definition 1.
The function $\operatorname{diag}(\cdot)$ converts a vector to a diagonal matrix.
Let $e_i$ be the Euclidean basis vector whose $i$th element is 1 and whose other elements are zero; thus, $e_0 = [1\ 0\ \cdots\ 0]^T$, $e_1 = [0\ 1\ 0\ \cdots\ 0]^T$, and so on. If $a = [a_0\ a_1\ \cdots\ a_{N-1}]^T$, $\operatorname{diag}(a)$ is defined as
$$A = \operatorname{diag}(a) = \sum_{i=0}^{N-1} a_i e_i e_i^T = \begin{bmatrix} a_0 & 0 & \cdots & 0 \\ 0 & a_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{N-1} \end{bmatrix}$$
Theorem 1.
If $a$ and $b$ are vectors, and $A = \operatorname{diag}(a)$ and $B = \operatorname{diag}(b)$, then $Ab = Ba$.
Proof. 
Let $a = [a_0\ a_1\ \cdots\ a_{N-1}]^T$ and $b = [b_0\ b_1\ \cdots\ b_{N-1}]^T$. Then
$$Ab = \operatorname{diag}(a)\,b = \sum_{i=0}^{N-1} a_i e_i e_i^T b = \sum_{i=0}^{N-1} a_i e_i b_i = \sum_{i=0}^{N-1} b_i e_i a_i = \sum_{i=0}^{N-1} b_i e_i e_i^T a = \operatorname{diag}(b)\,a = Ba$$
This completes the proof. □
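A quick numerical check of Theorem 1 (our own illustration, in Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(6), rng.standard_normal(6)
# diag(a) @ b and diag(b) @ a are both the elementwise product a * b
assert np.allclose(np.diag(a) @ b, np.diag(b) @ a)
```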
If X is the Fourier transform of the vector x and F is the Fourier transform matrix,
$$X = Fx = V\Lambda V^T x = V\Lambda\tilde{x} = V\tilde{X}\lambda$$
where $\tilde{x} = V^T x$, $\tilde{X} = \operatorname{diag}(\tilde{x})$ and $\Lambda = \operatorname{diag}(\lambda)$, such that $\lambda$ is a column vector containing the eigenvalues of $F$. Note that $\Lambda\tilde{x} = \tilde{X}\lambda$ from Theorem 1.
In (4), $\lambda$ adds the columns of the matrix product $V\tilde{X}$ in a particular manner defined by the eigenvalues of $F$. It is well known that there are only four unique eigenvalues of the Fourier transform: $\pm 1$ and $\pm j$, where $j = \sqrt{-1}$ is the imaginary unit [8,9]. Denote the algebraic multiplicities of the DFT eigenvalues as $m_1$, $m_{-1}$, $m_j$ and $m_{-j}$ for the eigenvalues $1$, $-1$, $j$ and $-j$, respectively. So, for an $N$-point DFT, we have $m_1 + m_{-1} + m_j + m_{-j} = N$. From [7], these multiplicities are
$$m_1 = \left\lfloor\frac{N}{4}\right\rfloor + 1 \qquad m_j = \left\lfloor\frac{N-1}{4}\right\rfloor \qquad m_{-1} = \left\lfloor\frac{N+2}{4}\right\rfloor \qquad m_{-j} = \left\lfloor\frac{N+1}{4}\right\rfloor$$
Without loss of generality, let λ in (4) be constructed such that the unique eigenvalues are grouped together as shown in (6). This grouping simplifies our derivation. Note that the eigenvectors in the columns of V will correspond with their respective eigenvalues in the vector λ .
$$\lambda = \begin{bmatrix} 1 & \cdots & 1 & -1 & \cdots & -1 & j & \cdots & j & -j & \cdots & -j \end{bmatrix}^T$$
Let $u$ be an $N \times 2$ matrix as defined in (7), whose first column contains $m_1$ $1$'s followed by $m_{-1}$ $-1$'s (and zeros elsewhere), and whose second column contains $m_j$ $1$'s followed by $m_{-j}$ $-1$'s (preceded by zeros).
$$u = \begin{bmatrix} 1 & \cdots & 1 & -1 & \cdots & -1 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 1 & \cdots & 1 & -1 & \cdots & -1 \end{bmatrix}^T$$
From (6) and (7) it is clear that
$$\lambda = u\begin{bmatrix} 1 \\ j \end{bmatrix} = uz$$
where $z = [1\ \ j]^T$. Now, (4) can be rewritten as
$$X = V\tilde{X}uz$$

2.1. Formulations of Equation (8)

If the input vector $x$ is complex, $x \in \mathbb{C}^N$, we can split the real and imaginary components as $x = x_r + jx_i$, where $x_r = \operatorname{real}(x)$ and $x_i = \operatorname{imag}(x)$. Since $\tilde{X} = \operatorname{diag}(V^T x)$, we can rewrite (8) by splitting the real and imaginary parts such that $\tilde{X}_r = \operatorname{diag}(V^T x_r)$ and $\tilde{X}_i = \operatorname{diag}(V^T x_i)$. It is clear that $\tilde{X} = \tilde{X}_r + j\tilde{X}_i$. Now, (8) becomes
$$X = V\tilde{X}_r uz + jV\tilde{X}_i uz$$
$$\;\;\; = V\tilde{X}_r uz + V\tilde{X}_i uz_i$$
where $z_i = jz = [j\ \ -1]^T$. Alternatively, (9) can be implemented by modifying $u$ and keeping $z$ unchanged for the imaginary part. In this case, (9) changes to
$$X = V\tilde{X}_r uz + V\tilde{X}_i u_i z$$
where $u_i$ is given by (12). The first column of $u_i$ contains $m_j$ $-1$'s and $m_{-j}$ $1$'s, and the second column contains $m_1$ $1$'s and $m_{-1}$ $-1$'s. Thus, $u_i$ can be formed by swapping the columns of $u$ and then negating the first column.
$$u_i = \begin{bmatrix} 0 & \cdots & 0 & -1 & \cdots & -1 & 1 & \cdots & 1 \\ 1 & \cdots & 1 & -1 & \cdots & -1 & 0 & \cdots & 0 \end{bmatrix}^T$$
In (10) and (11), both terms on the right-hand side contain real and imaginary parts. This can be shown by factoring $z$ out of (11):
$$X = V\left(\tilde{X}_r u + \tilde{X}_i u_i\right)z = VWz$$
where the $N \times 2$ matrix $W = \tilde{X}_r u + \tilde{X}_i u_i$. The first column of $W$ contributes to the real part of the DFT and its second column contributes to the imaginary part of the DFT. Therefore, (8) and (9) can be manipulated such that the components of the input that lead to the real and imaginary parts of the DFT are grouped together before computing $W$. Let $\tilde{x}_r = V^T x_r$, $\tilde{x}_i = V^T x_i$ and $m = m_1 + m_{-1}$. Create vectors $y_1$ and $y_2$ such that
$$y_1 = \left[\tilde{x}_r(0), \ldots, \tilde{x}_r(m-1), \tilde{x}_i(m), \ldots, \tilde{x}_i(N-1)\right]^T$$
$$y_2 = \left[\tilde{x}_i(0), \ldots, \tilde{x}_i(m-1), \tilde{x}_r(m), \ldots, \tilde{x}_r(N-1)\right]^T$$
where the notation $\tilde{x}_s(k)$ represents the $k$th element of the vector $\tilde{x}_s$, $s = r, i$. Let $Y = [Y_1\ \ Y_2]$, where $Y_1 = \operatorname{diag}(y_1)$ and $Y_2 = \operatorname{diag}(y_2)$; clearly $Y$ is an $N \times 2N$ matrix. Define a $2N \times 2$ matrix $u'$ as given in (13). The nonzero part of the first column of $u'$ is comprised of $m_1$ $1$'s and $m_{-1} + m_j$ $-1$'s followed by $m_{-j}$ $1$'s, and the nonzero part of the second column consists of $m_1$ $1$'s, $m_{-1}$ $-1$'s, $m_j$ $1$'s and $m_{-j}$ $-1$'s.
$$u' = \begin{bmatrix} 1 & \cdots & 1 & -1 & \cdots & -1 & 1 & \cdots & 1 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 1 & \cdots & 1 & -1 & \cdots & -1 & 1 & \cdots & 1 & -1 & \cdots & -1 \end{bmatrix}^T$$
With this information, the third formulation of (9) is obtained:
$$X = VYu'z$$
Another method to implement (9) is by rearranging the $V^T$ matrix. Define $V'$ as
$$V' = \begin{bmatrix} v_0 & \cdots & v_{m-1} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & v_m & \cdots & v_{N-1} \end{bmatrix}^T$$
where the column vector $v_k$ is the $k$th eigenvector of $F$; $V'$ is an $N \times 2N$ matrix. Let $x_1 = [x_r; x_i]$ and $x_2 = [x_i; x_r]$, where the semicolon represents vertical concatenation, so $x_1$ and $x_2$ are $2N \times 1$ vectors. Let $\tilde{X}_1 = \operatorname{diag}(V'x_1)$, $\tilde{X}_2 = \operatorname{diag}(V'x_2)$ and $X' = [\tilde{X}_1\ \ \tilde{X}_2]$, where $X'$ has size $N \times 2N$. Now, (9) can be rephrased as
$$X = VX'u'z$$
Here we have presented four different formulations of (9), via (10), (11), (14) and (15). Each of these formulations affects a different factor in (8). In these equations, the only complex component is contained in either $z$ or $z_i$, and the vector $z$ (or $z_i$) is not necessary for the computational algorithm, only for theoretical completeness. Thus, all the multiplications and additions are real-valued. Therefore, we say that this method of computing the DFT is natively real-valued, because it can be computed using operations whose operands are all real numbers (in $\mathbb{R}$).

2.2. Real Inputs

If $x$ is real, $x \in \mathbb{R}^N$, the four different forms given in (10), (11), (14) and (15) simplify to
$$X = V\tilde{X}_r uz$$
which is exactly (8), since $\tilde{X} = \tilde{X}_r$. This is most straightforward in (10) and (11): in both cases, the second term is 0 since $x_i = 0$. For the other forms, the matrix product $Yu'$ or $X'u'$ simplifies to $\tilde{X}_r u$. Since $x$ is real, the product $V\tilde{X}u$ in (8) results in an $N \times 2$ matrix whose first column contains the real component of the DFT $X$ and whose second column is the imaginary component of $X$. As mentioned before, the vector $z$ is not necessary for the computational algorithm; $z$ just clarifies that the first column is real and the second column is imaginary.
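The real-input case can be sketched end to end in a few lines of NumPy. The following is our own illustration (not the authors' code): it builds real orthonormal eigenvectors by QR-orthonormalizing the projection-matrix columns described later in Section 4, forms the combiner $u$ of (7) from the multiplicities in (5), and checks $X = V\tilde{X}_r uz$ against a standard FFT. The function name `real_dft_eigvecs` is ours.

```python
import numpy as np

def real_dft_eigvecs(N):
    # Real, orthonormal DFT eigenvectors grouped by eigenvalue (1, -1, j, -j):
    # QR (Gram-Schmidt) on the projection-matrix columns of Section 4.
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    F2, F3, I = np.real(F @ F), F.conj().T, np.eye(N)
    m = {1: N // 4 + 1, -1: (N + 2) // 4, 1j: (N - 1) // 4, -1j: (N + 1) // 4}
    blocks = []
    for e in (1, -1, 1j, -1j):
        c = e.conjugate()
        P = (c ** 3 * F3 + c ** 2 * F2 + c * F + I) / 4   # projection matrix
        # the first column of P_j and P_-j is zero, so skip it (Section 4.1)
        cols = P[:, 1:1 + m[e]] if e in (1j, -1j) else P[:, :m[e]]
        Q, _ = np.linalg.qr(np.real(cols))
        blocks.append(Q)
    return np.hstack(blocks), m

N = 5
V, m = real_dft_eigvecs(N)

# Combiner u of (7): +/-1 entries arranged by the multiplicities in (5).
u = np.zeros((N, 2))
u[:m[1], 0] = 1
u[m[1]:m[1] + m[-1], 0] = -1
u[m[1] + m[-1]:N - m[-1j], 1] = 1
u[N - m[-1j]:, 1] = -1

x = np.array([-2.0, 0.0, 3.0, 1.0, 1.0])
W = V @ (np.diag(V.T @ x) @ u)      # every product and sum here is real
X = W[:, 0] + 1j * W[:, 1]          # applying z = [1, j]^T for presentation
assert np.allclose(X, np.fft.fft(x) / np.sqrt(N))
```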

2.3. A Filter Bank Model/Interpretation

Since $\tilde{X} = \operatorname{diag}(V^T x)$, the diagonal entries of $\tilde{X}$ are formed by the inner products of the DFT eigenvectors with the input $x$. Let $V = [v_0\ v_1\ \cdots\ v_{N-1}]$, where the column vector $v_k$ is a DFT eigenvector. Thus,
$$\tilde{X} = \begin{bmatrix} \langle v_0, x\rangle & 0 & \cdots & 0 \\ 0 & \langle v_1, x\rangle & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \langle v_{N-1}, x\rangle \end{bmatrix}$$
where the notation $\langle a, b\rangle$ represents the inner (dot) product of two vectors $a$ and $b$. We use this to develop a filter bank description of the linear algebra in the above discussion. Specifically, we define the time-reversed vector $v_0(-n)$ of $v_0(n)$. Now, filter the sequence $x$ through the FIR filter $v_0(-n)$ to produce the $N + N - 1 = 2N - 1$ length output sequence $x_0^{filt}$. Down-sampling this output $x_0^{filt}$ by $N$ (i.e., selecting the middle term) yields the dot product $\langle v_0, x\rangle$; that is, $\langle v_0, x\rangle = x_0^{filt}(N-1)$, where $x_0^{filt}(N-1)$ is the $N$th element of $x_0^{filt}$. A similar procedure is applied to the other eigenvectors.
Now consider the matrix product $V\tilde{X}$. Each diagonal entry of $\tilde{X}$ scales the corresponding column of $V$, i.e., each eigenvector is scaled by the corresponding filter output. These scaled eigenvectors are then added or subtracted in the specific manner defined by $u$ to give the real and imaginary components of the DFT result. This is why we call $u$ the "combiner": its first column adds and subtracts the scaled eigenvectors that form the real part (the eigenvalue $\pm 1$ terms), and its second column forms the imaginary part (the eigenvalue $\pm j$ terms). The vector $z$ is only provided for theoretical completeness, so that we can combine the real and imaginary components into the DFT $X$. These steps are summarized in Figure 1.
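A small NumPy fragment (our illustration; a random vector stands in for an eigenvector) confirms this filter-bank reading:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)
v = rng.standard_normal(N)          # stand-in for one DFT eigenvector

x_filt = np.convolve(x, v[::-1])    # FIR filter v(-n); length 2N - 1
assert np.isclose(x_filt[N - 1], v @ x)   # middle sample = inner product
```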

3. Inverse DFT

The IDFT (inverse DFT) computation follows a similar development. The IDFT has the same unique eigenvalues as the DFT; they differ only in the eigenvalue multiplicities for the eigenvalues $j$ and $-j$. Due to the conjugation difference between the DFT and the IDFT, the multiplicities of these eigenvalues are exchanged. Consequently, the eigenvalue multiplicities of the IDFT are
$$m_1 = \left\lfloor\frac{N}{4}\right\rfloor + 1 \qquad m_j = \left\lfloor\frac{N+1}{4}\right\rfloor \qquad m_{-1} = \left\lfloor\frac{N+2}{4}\right\rfloor \qquad m_{-j} = \left\lfloor\frac{N-1}{4}\right\rfloor$$
Similarly, the eigenvectors are unchanged from the DFT for the eigenvalues 1 and $-1$, but they are exchanged for the eigenvalues $j$ and $-j$. Thus, the eigenvector matrix for the IDFT, $V_{\mathrm{IDFT}}$, can be obtained directly from the DFT eigenvector matrix $V$.
Using $V_{\mathrm{IDFT}}$, the steps described in Section 2 can be used to compute the IDFT. The block diagram of the IDFT follows directly from these definitions, used in place of the DFT computations that produced Figure 1. As in the case of the DFT, symmetries found in the input sequence can be used to improve the computational efficiency. Moreover, if the input or the output sequence is known to be real-valued, the computational efficiency can be improved further. If the input to the IDFT is real, the procedure is similar to the DFT with real inputs.
However, if the output of the IDFT is known to be real, the input will be complex and will have symmetries which can be exploited to improve computational performance. Let $x = \mathrm{IDFT}(X)$, where $X = X_r + jX_i$ such that $X_r = \operatorname{real}(X)$ and $X_i = \operatorname{imag}(X)$. If $x$ is real, $X_r$ will be even symmetric and $X_i$ will be odd symmetric (ch. 5, [10]). It is also known that the DFT eigenvectors are either even symmetric (for eigenvalues $\pm 1$) or odd symmetric (for eigenvalues $\pm j$) [8]; these eigenvector symmetries are further explored in Section 7. Let $v_{ev}$ represent an even symmetric eigenvector (eigenvalues $\pm 1$) and $v_{od}$ denote an odd symmetric eigenvector (eigenvalues $\pm j$). It is well understood that the elementwise (Hadamard) product of two even sequences or two odd sequences will be even, and the product of an even and an odd sequence will be odd. We also know that the sum of the elements of an even sequence will be twice the sum of the first half of the sequence, and the sum of the elements of an odd sequence will be zero. Combining this information in the calculation of the inner products for $\tilde{X}_r$ and $\tilde{X}_i$ in (10), we get $\langle X_r, v_{od}\rangle = \langle X_i, v_{ev}\rangle = 0$, while $\langle X_r, v_{ev}\rangle$ and $\langle X_i, v_{od}\rangle$ require approximately $N/2$ multiplications each. Therefore, the last $N - m$ diagonal entries of $\tilde{X}_r$ and the first $m$ diagonal elements of $\tilde{X}_i$ will be 0, where $m = m_1 + m_{-1}$. Thus, both terms of (10) will be purely real. Similarly, for (14), only $Y_1$ contributes since $Y_2 = 0$, again leading to a reduction in the number of operations. Comparable patterns also exist for (11) and (15).
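As a sketch of this construction (ours, reusing the illustrative `real_dft_eigvecs` helper from the Section 2.2 sketch), the IDFT eigenvector matrix can be assembled by exchanging the $j$ and $-j$ blocks of $V$ and verified against the unitary IDFT matrix:

```python
import numpy as np

N = 5
V, m = real_dft_eigvecs(N)
k = m[1] + m[-1]                          # the +/-1 blocks stay in place
V_idft = np.hstack([V[:, :k],
                    V[:, k + m[1j]:],     # DFT's -j block -> IDFT's j block
                    V[:, k:k + m[1j]]])   # DFT's j block  -> IDFT's -j block

# Check: the unitary IDFT matrix maps each column of V_idft to
# eigenvalue * column, with eigenvalues ordered 1, -1, j, -j and the
# exchanged multiplicities of (17).
n = np.arange(N)
F_inv = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
lam = np.concatenate([np.ones(m[1]), -np.ones(m[-1]),
                      1j * np.ones(m[-1j]), -1j * np.ones(m[1j])])
assert np.allclose(F_inv @ V_idft, V_idft * lam)
```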

4. Computing Orthonormal Eigenvectors of the DFT

Several methods exist to find the real and orthonormal eigenvectors of the DFT. The method described by McClellan and Parks [8] only yields real eigenvectors. These eigenvectors can be orthonormalized using the Gram–Schmidt process. Matveev [7] uses DFT projection matrices and their Grammian determinants to find the orthonormal eigenvectors directly. Alternatively, the DFT projection matrices [7,9] can be used in conjunction with the Gram–Schmidt process to obtain orthonormal eigenvectors. Our method performs equally well regardless of the method used to find the orthonormal real-valued eigenvectors.

4.1. Matveev Method

The first step in the Matveev method is to compute the four projection matrices of the DFT corresponding to each unique eigenvalue. If $P_e$ is the projection matrix corresponding to an eigenvalue $e$, $F$ is the Fourier transform matrix and $I$ is the identity matrix,
$$P_1 = \tfrac{1}{4}\left(F^3 + F^2 + F + I\right) \qquad P_{-1} = \tfrac{1}{4}\left(-F^3 + F^2 - F + I\right)$$
$$P_j = \tfrac{1}{4}\left(jF^3 - F^2 - jF + I\right) \qquad P_{-j} = \tfrac{1}{4}\left(-jF^3 - F^2 + jF + I\right)$$
The rank of each of these matrices is tied to the corresponding eigenvalue multiplicity; the matrix $P_e$ has rank $m_e$. Therefore, to compute the orthogonal eigenvectors, only the first $m_1$ and $m_{-1}$ columns of $P_1$ and $P_{-1}$ are considered, while the first columns of $P_j$ and $P_{-j}$ are ignored and only the next $m_j$ and $m_{-j}$ columns are considered. The first columns of $P_j$ and $P_{-j}$ are ignored because they are zeros. The kept columns form independent eigenvectors of $F$ corresponding to each respective eigenvalue.
Let the elements of the matrices in (18) be represented as $P_e(m, n)$. For example, if the indexing starts from 0, the element in the second row and first column of $P_1$ is $P_1(1, 0)$, and the last element is $P_1(N-1, N-1)$. Let the columns of these matrices be represented as $p_{e,k}$, where $p_{e,k}$ is the $k$th column vector of $P_e$; for example, the third column vector of $P_j$ is $p_{j,2}$, again assuming the indexing starts from 0. Note that these projection matrices are symmetric, so the $k$th column is also the $k$th row.
The orthogonal eigenvectors are given by the determinants of the following matrices. For eigenvalues $e = 1, -1$,
$$\beta_{e,0} = p_{e,0}$$
$$\beta_{e,1} = \begin{vmatrix} P_e(0,0) & p_{e,0} \\ P_e(1,0) & p_{e,1} \end{vmatrix}$$
$$\beta_{e,k} = \begin{vmatrix} P_e(0,0) & P_e(0,1) & \cdots & P_e(0,k-1) & p_{e,0} \\ P_e(1,0) & P_e(1,1) & \cdots & P_e(1,k-1) & p_{e,1} \\ \vdots & \vdots & & \vdots & \vdots \\ P_e(k,0) & P_e(k,1) & \cdots & P_e(k,k-1) & p_{e,k} \end{vmatrix}$$
where the $\beta_{e,k}$, for $1 \le k \le m_e - 1$, form a set of orthogonal eigenvectors corresponding to eigenvalues $e = 1, -1$ ($\beta_{e,1}$ gives the eigenvector when $k = 1$). Because these matrices contain both scalars $P_e(m,n)$ and vectors $p_{e,k}$, the determinant can be computed by expanding along the last column and multiplying these vectors by their corresponding cofactors. Similarly, for eigenvalues $e = j, -j$,
$$\beta_{e,0} = p_{e,1}$$
$$\beta_{e,1} = \begin{vmatrix} P_e(1,1) & p_{e,1} \\ P_e(2,1) & p_{e,2} \end{vmatrix}$$
$$\beta_{e,k} = \begin{vmatrix} P_e(1,1) & P_e(1,2) & \cdots & P_e(1,k) & p_{e,1} \\ P_e(2,1) & P_e(2,2) & \cdots & P_e(2,k) & p_{e,2} \\ \vdots & \vdots & & \vdots & \vdots \\ P_e(k+1,1) & P_e(k+1,2) & \cdots & P_e(k+1,k) & p_{e,k+1} \end{vmatrix}$$
where $\beta_{e,k}$ is the general equation; here also $1 \le k \le m_e - 1$. Note that if $e = j, -j$, the elements of the first row $P_e(0,k)$ and first column $P_e(k,0)$ of the projection matrix $P_e$, and the first column vector $p_{e,0}$, are not considered since they are zeros. The determinants of these matrices can be computed using the method explained in the previous paragraph for the real eigenvalues.
In both cases, the resulting vectors $\beta_{e,k}$ are orthogonal but not of unit length. After normalization, the columns of the eigenvector matrix $V$ are
$$v_{e,k} = \frac{\beta_{e,k}}{\left\lVert \beta_{e,k}\right\rVert}$$
Although (18) contains powers of the Fourier transform matrix, it is not necessary to numerically calculate those powers. Using the properties of the DFT, $F^3 = F^H$, where $F^H$ represents the complex conjugate transpose of $F$, and $F^2$ is the real permutation matrix $F^2 = [I_0\ I_{N-1}\ I_{N-2}\ \cdots\ I_2\ I_1]$, where $I_k$ is the $k$th column of the identity matrix $I$, assuming the indexing starts from 0.
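A short NumPy sketch (ours) of this shortcut, building $F^2$ and $F^3$ without any matrix–matrix products and verifying them against explicit powers:

```python
import numpy as np

N = 7
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

F2 = np.eye(N)[:, (N - n) % N]      # columns I_0, I_{N-1}, ..., I_1
F3 = F.conj().T                     # F^3 = F^H

assert np.allclose(F2, np.linalg.matrix_power(F, 2))
assert np.allclose(F3, np.linalg.matrix_power(F, 3))
```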

4.2. Gram–Schmidt Orthonormalization

Numerically similar orthonormal eigenvectors can be found from (18) via the Gram–Schmidt orthonormalization process. As discussed before, the matrices $P_e$ are not full rank; therefore, we only consider the first $m_1$ and $m_{-1}$ columns of $P_1$ and $P_{-1}$, and we ignore the first columns of $P_j$ and $P_{-j}$ and consider the next $m_j$ and $m_{-j}$ columns. In our observation, using Gram–Schmidt orthonormalization on the DFT projection matrices gives the fastest and most numerically accurate results when run in MATLAB©. Accuracy tests and their results are discussed in Section 6. We observed that the eigenvectors obtained using the Matveev method are somewhat sparse; that same sparse structure is retained by the Gram–Schmidt orthonormalization of the DFT projection matrices, as well as by the Gram–Schmidt orthonormalization of the McClellan and Parks eigenvectors.

4.3. Structure of Eigenvectors

The eigenvectors obtained using the McClellan and Parks method are collinear with the eigenvectors from the DFT projection matrices; they differ only in magnitude. As discussed before, orthonormalizing these eigenvectors makes them slightly sparse. These orthonormal eigenvectors have the following structure:
$$V_{\pm 1} = \begin{bmatrix} a_0 & 0 & 0 & 0 \\ a_1 & b_1 & 0 & 0 \\ a_1 & b_2 & c_2 & 0 \\ a_1 & b_3 & c_3 & d_4 \\ a_1 & b_3 & c_3 & d_4 \\ a_1 & b_2 & c_2 & 0 \\ a_1 & b_1 & 0 & 0 \end{bmatrix} \qquad V_{\pm j} = \begin{bmatrix} 0 & 0 & 0 \\ b_1 & 0 & 0 \\ b_2 & c_2 & 0 \\ b_3 & c_3 & d_4 \\ -b_3 & -c_3 & -d_4 \\ -b_2 & -c_2 & 0 \\ -b_1 & 0 & 0 \end{bmatrix}$$
where the matrix $V_{\pm 1}$ represents the set of orthonormal eigenvectors corresponding to the eigenvalues $\pm 1$ and $V_{\pm j}$ represents the orthonormal eigenvectors corresponding to the eigenvalues $\pm j$ (shown here for a 7-point DFT). The columns of these matrices are the eigenvectors. The vectors in $V_{\pm 1}$ are even symmetric and the vectors in $V_{\pm j}$ are odd symmetric. Additionally, if the DFT size $N$ is even, the $(N/2+1)$th row of $V_{\pm j}$ will be 0 to retain the odd symmetry. The sparsity of the eigenvectors is shown in Figure 2. From this plot, it is clear that the orthonormal DFT eigenvectors are somewhat sparse. This sparse nature can be exploited to reduce the number of operations required while computing the DFT, as discussed in Section 7.
The orthonormal eigenvectors of an 8-point DFT are given below to highlight the sparsity. The eigenvectors are the columns of the matrix $V_8$. The eigenvalue multiplicities of an 8-point DFT are 3, 2, 1, 2 for the eigenvalues 1, $-1$, $j$, $-j$, respectively. Therefore, the first three columns of $V_8$ are the eigenvectors corresponding to the eigenvalue 1, the next two columns are for eigenvalue $-1$, the column after that is for eigenvalue $j$, and the last two columns correspond to eigenvalue $-j$.
$$V_8 = \begin{bmatrix}
0.823 & 0 & 0 & 0.569 & 0 & 0 & 0 & 0 \\
0.215 & 0.573 & 0 & -0.311 & 0.168 & 0.354 & 0.612 & 0 \\
0.215 & -0.081 & 0.143 & -0.311 & -0.575 & -0.500 & 0.289 & 0.408 \\
0.215 & -0.299 & -0.490 & -0.311 & 0.168 & -0.354 & 0.204 & -0.577 \\
0.215 & -0.389 & 0.692 & -0.311 & 0.476 & 0 & 0 & 0 \\
0.215 & -0.299 & -0.490 & -0.311 & 0.168 & 0.354 & -0.204 & 0.577 \\
0.215 & -0.081 & 0.143 & -0.311 & -0.575 & 0.500 & -0.289 & -0.408 \\
0.215 & 0.573 & 0 & -0.311 & 0.168 & -0.354 & -0.612 & 0
\end{bmatrix}$$

5. An Example with 5-Point DFT

5.1. Forward DFT

The eigenvalue multiplicities for a 5-point DFT are $m = \{2, 1, 1, 1\}$ for the unique eigenvalues $\{1, -1, j, -j\}$; equivalently, the eigenvalues are $\{1, 1, -1, j, -j\}$. The orthonormal eigenvectors from the Matveev method described in Section 4, computed in MATLAB© to double precision, are
$$V = \begin{bmatrix}
0.851 & 0 & 0.526 & 0 & 0 \\
0.263 & 0.5 & -0.425 & 0.193 & 0.68 \\
0.263 & -0.5 & -0.425 & -0.68 & 0.193 \\
0.263 & -0.5 & -0.425 & 0.68 & -0.193 \\
0.263 & 0.5 & -0.425 & -0.193 & -0.68
\end{bmatrix}$$
Now, suppose that the input signal is $x = [-2\ \ 0\ \ 3\ \ 1\ \ 1]^T$. Then
$$\tilde{x} = V^T x = \begin{bmatrix} -0.387 & -1.5 & -3.178 & -1.554 & -0.294 \end{bmatrix}^T$$
If $\tilde{X} = \operatorname{diag}(\tilde{x})$ and $\hat{X} = V\tilde{X}$, then
$$\hat{X} = \begin{bmatrix}
-0.329 & 0 & -1.671 & 0 & 0 \\
-0.102 & -0.75 & 1.352 & -0.3 & -0.2 \\
-0.102 & 0.75 & 1.352 & 1.057 & -0.057 \\
-0.102 & 0.75 & 1.352 & -1.057 & 0.057 \\
-0.102 & -0.75 & 1.352 & 0.3 & 0.2
\end{bmatrix} = \begin{bmatrix} \hat{x}_0 & \hat{x}_1 & \hat{x}_2 & \hat{x}_3 & \hat{x}_4 \end{bmatrix}$$
From the eigenvalue multiplicities we know that the DFT $X$ of $x$ is
$$X = \hat{x}_0 + \hat{x}_1 - \hat{x}_2 + j\left(\hat{x}_3 - \hat{x}_4\right)$$
$$X = \begin{bmatrix} 1.342 \\ -2.203 - 0.1j \\ -0.703 + 1.113j \\ -0.703 - 1.113j \\ -2.203 + 0.1j \end{bmatrix}$$
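This worked example can be reproduced numerically. The sketch below (ours, using the sign-restored $V$ above) checks the result against NumPy's FFT under the unitary scaling adopted in this paper; the tolerance allows for the 3-decimal rounding of $V$:

```python
import numpy as np

V = np.array([
    [0.851,  0.0,    0.526,  0.0,    0.0  ],
    [0.263,  0.5,   -0.425,  0.193,  0.68 ],
    [0.263, -0.5,   -0.425, -0.68,   0.193],
    [0.263, -0.5,   -0.425,  0.68,  -0.193],
    [0.263,  0.5,   -0.425, -0.193, -0.68 ],
])
x = np.array([-2.0, 0.0, 3.0, 1.0, 1.0])

Xhat = V * (V.T @ x)                  # column k is xtilde_k * v_k
X = Xhat[:, 0] + Xhat[:, 1] - Xhat[:, 2] + 1j * (Xhat[:, 3] - Xhat[:, 4])
print(np.round(X, 3))                 # [1.342, -2.203-0.1j, -0.703+1.113j, ...]
assert np.allclose(X, np.fft.fft(x) / np.sqrt(5), atol=2e-2)
```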

5.2. Inverse DFT

As mentioned in Section 3, the eigenvectors for the inverse operation can be constructed from (19) by interchanging the eigenvectors corresponding to the eigenvalues $j$ and $-j$. Thus, the eigenvector matrix for the 5-point IDFT is
$$V_{\mathrm{IDFT}} = \begin{bmatrix}
0.851 & 0 & 0.526 & 0 & 0 \\
0.263 & 0.5 & -0.425 & 0.68 & 0.193 \\
0.263 & -0.5 & -0.425 & 0.193 & -0.68 \\
0.263 & -0.5 & -0.425 & -0.193 & 0.68 \\
0.263 & 0.5 & -0.425 & -0.68 & -0.193
\end{bmatrix}$$
If $y$ is taken from (20), then $\tilde{y} = V_{\mathrm{IDFT}}^T y$:
$$\tilde{y} = \begin{bmatrix} -0.387 & -1.5 & 3.178 & 0.294j & -1.554j \end{bmatrix}^T$$
If $\hat{Y} = V_{\mathrm{IDFT}}\cdot\operatorname{diag}(\tilde{y})$,
$$\hat{Y} = \begin{bmatrix}
-0.329 & 0 & 1.671 & 0 & 0 \\
-0.102 & -0.75 & -1.352 & 0.2j & -0.3j \\
-0.102 & 0.75 & -1.352 & 0.057j & 1.057j \\
-0.102 & 0.75 & -1.352 & -0.057j & -1.057j \\
-0.102 & -0.75 & -1.352 & -0.2j & 0.3j
\end{bmatrix}$$
The eigenvalue multiplicities for the 5-point IDFT are $\{2, 1, 1, 1\}$ for the eigenvalues $\{1, -1, j, -j\}$. Summing the columns of $\hat{Y}$ using the information from the eigenvalue multiplicities, we get
$$x = \hat{y}_0 + \hat{y}_1 - \hat{y}_2 + j\left(\hat{y}_3 - \hat{y}_4\right)$$
where $\hat{y}_k$ is the $k$th column of $\hat{Y}$. Thus,
$$x = \begin{bmatrix} -2 & 0 & 3 & 1 & 1 \end{bmatrix}^T$$
This is the signal that we started with when computing the forward DFT.

6. Accuracy and Performance of DFT Eigenvector Computation

The two different methods used to calculate the orthonormal eigenvectors yield different computational accuracies. These differences arise from the limitations imposed by the finite-precision data types used in real-world machines and from the differing procedures used to produce the sparse eigenvectors. We compare the DFTs created using the Matveev method and the Gram–Schmidt orthonormalization of the DFT projection matrices. All tests are run using the double-precision floating-point data type in MATLAB version 2020b, on a machine with an Intel(R) Core(TM) i7-6700HQ @ 2.60 GHz 4-core CPU, 8 GB of RAM and the Windows 10 operating system.
Figure 3 indicates the computational error due to imprecise DFT eigenvectors. Suppose $x$ is a vector in $\mathbb{R}^n$ and $y = \mathrm{IDFT}(\mathrm{DFT}(x))$. The figure shows the mean squared error (MSE) of $x - y$ in decibels. The DFT and IDFT are calculated using the Matveev eigenvectors, the Gram–Schmidt orthonormalization of the DFT projection matrices, and the MATLAB fft() and ifft() functions. The MSE is calculated via (21)–(23), where $n$ is the length of the vectors $x$ and $y$ and the operator $\lVert x\rVert$ represents the 2-norm of $x$. The error shown in Figure 3 is caused by the limitations of the machine that was used to compute the eigenvectors. More powerful machines can be used to pre-compute the eigenvectors to a high enough precision that this error can be eliminated.
$$\mathrm{MSE} = 10\log_{10}(\mu)$$
where
$$\mu = \frac{1}{1000}\sum_{i=1}^{1000}\sigma_i$$
and
$$\sigma = \frac{\lVert x - y\rVert^2}{n}$$
Here, $\sigma_i$ represents the $i$th sample out of 1000 samples, as explained below.
For each $n$-point DFT, 1000 vectors are randomly generated such that each element of a vector $x$ is an integer between $-128$ and 127. These limits are chosen because they can represent the pixels of an 8-bit image after centering the pixel values at 0. For each vector $x$, the corresponding $y$ and $\sigma$ are computed. Equation (22) gives the average $\sigma$ value of the 1000 samples and (21) gives the average MSE in dB. The plot in Figure 3 shows the error after computing the forward and inverse DFT; therefore, the error when finding the DFT in one direction only will be slightly lower than what is shown and will depend on the size of the DFT. Clearly, the computational accuracy of the eigenvector methods suffers a bit, especially as the length of the DFT increases. The differences are not significant for lengths up to 20, and the complexity analysis in Section 7 will show why these approaches are attractive.
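A sketch of this experiment (ours, with NumPy in place of MATLAB; `real_dft_eigvecs` is the illustrative helper from the Section 2.2 sketch):

```python
import numpy as np

def mse_db(N, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    V, m = real_dft_eigvecs(N)
    lam = np.concatenate([np.ones(m[1]), -np.ones(m[-1]),
                          1j * np.ones(m[1j]), -1j * np.ones(m[-1j])])
    F = (V * lam) @ V.T                 # forward: F = V diag(lambda) V^T
    F_inv = (V * lam.conj()) @ V.T      # inverse: conjugated eigenvalues
    sigma = [np.linalg.norm(x - F_inv @ (F @ x)) ** 2 / N
             for x in rng.integers(-128, 128, (trials, N)).astype(float)]
    return 10 * np.log10(np.mean(sigma))    # (21)-(23)

print(mse_db(16))   # strongly negative dB: x is recovered almost exactly
```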
The time required to calculate the eigenvectors from the DFT projection matrices, orthonormalizing them with the Matveev method and with the Gram–Schmidt process, is summarized in Table 1. In this table, the time is measured over 10,000 executions of each method and is shown in seconds. This is consistent with our rudimentary analysis of the computational complexity of each method: $O(n^3)$ for the Gram–Schmidt process and $O(n^5)$ for the Matveev method (assuming the required matrix determinant is computed with $O(n^3)$ complexity). The discrepancy arises from the need to calculate the determinant of the $\beta$ matrix in the Matveev method, whereas the vector dot products in the Gram–Schmidt process can be computed in $O(n)$ time.

7. Number of Operations

7.1. Using Generalized Eigenvectors

At first glance at (8), computing $\tilde{X}$ seems to need $N$ multiplications and $N-1$ additions per diagonal entry, assuming that $x \in \mathbb{R}^N$. However, as briefly discussed in Section 3, it is known that all the eigenvectors of the DFT are either even or odd symmetric sequences [8]. Eigenvectors corresponding to the eigenvalues 1 or $-1$ are even symmetric, and eigenvectors corresponding to the eigenvalues $j$ or $-j$ are odd symmetric. Using this property, if an eigenvector $v_k$ is even symmetric, $\langle v_k, x\rangle$ in (16) can be computed using $\lfloor N/2\rfloor + 1$ real multiplications and $N - 1$ real additions. If $v_k$ is an odd symmetric eigenvector, the inner product requires $\lfloor N/2\rfloor$ real multiplications and $N - 2$ real additions, because the first element of $v_k$ is 0 in this case. For example, consider an even eigenvector of a 5-point DFT, $v_{ev} = [a_0\ a_1\ a_2\ a_2\ a_1]^T$. For an input vector $x = [x_0\ x_1\ x_2\ x_3\ x_4]^T$,
$$\langle v_{ev}, x\rangle = a_0x_0 + a_1x_1 + a_2x_2 + a_2x_3 + a_1x_4 = a_0x_0 + a_1(x_1 + x_4) + a_2(x_2 + x_3)$$
The first form requires five multiplications, while the folded form requires only three; the number of additions in both cases is four. Now consider an odd eigenvector of a 5-point DFT, $v_{od} = [0\ b_1\ b_2\ -b_2\ -b_1]^T$:
$$\langle v_{od}, x\rangle = 0 + b_1x_1 + b_2x_2 - b_2x_3 - b_1x_4 = b_1(x_1 - x_4) + b_2(x_2 - x_3)$$
Here, using the symmetry, there are two multiplications and three additions. Similar relationships can be found for even-length sequences.
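A sketch of these folded inner products (ours; cosine and sine vectors stand in for even- and odd-symmetric eigenvectors):

```python
import numpy as np

def dot_even(v, x):
    """<v, x> for even-symmetric v (v[n] == v[N-n]): ~N/2 + 1 multiplications."""
    N = len(x)
    h = (N - 1) // 2
    s = v[0] * x[0] + np.dot(v[1:h + 1], x[1:h + 1] + x[N - 1:N - h - 1:-1])
    if N % 2 == 0:
        s += v[N // 2] * x[N // 2]   # unpaired middle sample for even N
    return s

def dot_odd(v, x):
    """<v, x> for odd-symmetric v (v[0] == 0, v[n] == -v[N-n]): ~N/2 mults."""
    N = len(x)
    h = (N - 1) // 2
    return np.dot(v[1:h + 1], x[1:h + 1] - x[N - 1:N - h - 1:-1])

rng = np.random.default_rng(1)
for N in (5, 8):
    n, x = np.arange(N), rng.standard_normal(N)
    v_ev, v_od = np.cos(2 * np.pi * n / N), np.sin(2 * np.pi * n / N)
    assert np.isclose(dot_even(v_ev, x), v_ev @ x)
    assert np.isclose(dot_odd(v_od, x), v_od @ x)
```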
We know that the number of even or odd eigenvectors depends on the eigenvalue multiplicities (which in turn depend on $N$): the number of even eigenvectors is $m_1 + m_{-1}$ and the number of odd eigenvectors is $m_j + m_{-j}$. Thus, for computing $\tilde{X}$ in (16), the total number of multiplications $\Xi_{\tilde{X}}(N)$ and additions $\Phi_{\tilde{X}}(N)$ required are
$$\Xi_{\tilde{X}}(N) = \left(\left\lfloor\tfrac{N}{2}\right\rfloor + 1\right)(m_1 + m_{-1}) + \left\lfloor\tfrac{N}{2}\right\rfloor(m_j + m_{-j})$$
$$\Phi_{\tilde{X}}(N) = (N - 1)(m_1 + m_{-1}) + (N - 2)(m_j + m_{-j})$$
For large values of $N$, the eigenvalue multiplicities can be approximated as equal to each other (and approximately equal to $N/4$): $m_1 \approx m_{-1} \approx m_j \approx m_{-j} \approx N/4$. Therefore, for large $N$,
$$\Xi_{\tilde{X}}(N) \approx \frac{N^2}{2}$$
$$\Phi_{\tilde{X}}(N) \approx \frac{2N^2 - 3N}{2} \approx N^2$$
The eigenvector symmetry can be utilized again when multiplying the eigenvector matrix with $\tilde{X}$ in (4) and (8). Let $W = V\tilde{X}$, let the number of multiplications required for computing $V\tilde{X}$ be $\Xi_W(N)$, and let the number of additions be $\Phi_W(N)$. Then $\Xi_W(N) = \Xi_{\tilde{X}}(N)$ due to the symmetry of the eigenvectors in $V$. As noted before, the operation $V\tilde{X}$ just changes the magnitude of each eigenvector; therefore, the columns of $W$ retain the even or odd symmetries of the eigenvectors. From (8), we know that the columns of $W$ are added in a specific pattern to determine the DFT, so the result of this sum will also be even or odd symmetric. Therefore, while adding the columns of $W$, only the top half of the elements (plus the "middle" value) needs to be computed. Thus, the number of additions required is
$$\Phi_W(N) = \left(\left\lfloor\tfrac{N}{2}\right\rfloor + 1\right)(m_1 + m_{-1}) + \left\lfloor\tfrac{N}{2}\right\rfloor(m_j + m_{-j})$$
When $N$ is large,
$$\Phi_W(N) \approx \frac{N^2}{2}$$
The total number of multiplications and additions required is
$$\Xi(N) = \Xi_{\tilde{X}}(N) + \Xi_W(N) = 2\left[\left(\left\lfloor\tfrac{N}{2}\right\rfloor + 1\right)(m_1 + m_{-1}) + \left\lfloor\tfrac{N}{2}\right\rfloor(m_j + m_{-j})\right]$$
$$\Phi(N) = \Phi_{\tilde{X}}(N) + \Phi_W(N) = \left(N + \left\lfloor\tfrac{N}{2}\right\rfloor\right)(m_1 + m_{-1}) + \left(N + \left\lfloor\tfrac{N}{2}\right\rfloor - 2\right)(m_j + m_{-j})$$
For large values of $N$, these are approximated as
$$\Xi(N) \approx N^2$$
$$\Phi(N) \approx \frac{3}{2}N^2$$
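These counts can be evaluated directly from the multiplicities in (5); the following Python sketch (ours) does so and confirms the large-$N$ ratios:

```python
def multiplicities(N):
    # m_1, m_-1, m_j, m_-j from (5)
    return N // 4 + 1, (N + 2) // 4, (N - 1) // 4, (N + 1) // 4

def counts_generic(N):
    m1, m_1, mj, m_j = multiplicities(N)
    xi = 2 * ((N // 2 + 1) * (m1 + m_1) + (N // 2) * (mj + m_j))     # mults
    phi = (N + N // 2) * (m1 + m_1) + (N + N // 2 - 2) * (mj + m_j)  # adds
    return xi, phi

for N in (8, 32, 128):
    xi, phi = counts_generic(N)
    print(N, xi, phi, round(xi / N**2, 2), round(phi / N**2, 2))  # -> 1, 1.5
```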

7.2. Using the Sparsity of the Eigenvectors

Further gains can be obtained if the sparse eigenvectors obtained from the Matveev method (Section 4) are used as the orthonormal DFT eigenvectors. The Matveev DFT eigenvectors contain a few patterns which can be exploited, as explained below.

7.2.1. Zeros in Many Eigenvectors

Elements at the starting and ending indices of many of these eigenvectors are zero, and more of them are zero for larger values of $N$. For example, if the first even eigenvector calculated using the Matveev method is $v_{ev} = [a_0\ a_1\ a_2\ a_3\ a_3\ a_2\ a_1]^T$, the second eigenvector will have $a_0 = 0$, the third will have $a_0 = a_1 = 0$, and so on. A similar pattern also exists for the odd eigenvectors.

7.2.2. Repeating Elements

The first eigenvectors corresponding to the eigenvalues $\pm 1$ are made of only two distinct scalar values each, i.e., they are of the form $v = [a_0\ a_1\ a_1\ \cdots\ a_1]^T$.

7.2.3. Zeros in Eigenvectors of Even Length

The middle element of the eigenvectors corresponding to the eigenvalues $\pm j$ is zero if the eigenvectors are of even length. Thus, if $N$ is even, the $(N/2+1)$th element will be 0. For example, $v = [0\ a_1\ a_2\ 0\ -a_2\ -a_1]^T$ is a possible configuration of such an eigenvector. Note that in this example the first element is 0 because the vector is odd symmetric.
Using this new information, the number of multiplications $\Xi'_{\tilde{X}}(N)$ and additions $\Phi'_{\tilde{X}}(N)$ (primed to distinguish them from the counts in Section 7.1) required for calculating $\tilde{X}$ is
$$\Xi'_{\tilde{X}}(N) = (m_1 - 1)\left(\left\lfloor\tfrac{N}{2}\right\rfloor - \tfrac{m_1 - 2}{2}\right) + 2 + (m_{-1} - 1)\left(\left\lfloor\tfrac{N}{2}\right\rfloor - \tfrac{m_{-1} - 2}{2}\right) + 2 + m_j\left(\left\lfloor\tfrac{N-1}{2}\right\rfloor - \tfrac{m_j - 1}{2}\right) + m_{-j}\left(\left\lfloor\tfrac{N-1}{2}\right\rfloor - \tfrac{m_{-j} - 1}{2}\right)$$
$$\Phi'_{\tilde{X}}(N) = m_1(N - m_1) + (m_1 - 1) + m_{-1}(N - m_{-1}) + (m_{-1} - 1) + 2m_j\left\lfloor\tfrac{N-1}{2}\right\rfloor - m_j^2 + 2m_{-j}\left\lfloor\tfrac{N-1}{2}\right\rfloor - m_{-j}^2$$
For the matrix multiplication $W = V\tilde{X}$, the number of scalar multiplications required is again $\Xi'_W(N) = \Xi'_{\tilde{X}}(N)$. The number of scalar additions required for adding the column vectors of $W$ is
$$\Phi'_W(N) = \sum_{e\, =\, 1,\, -1}\left[\tfrac{m_e^2 + m_e}{2} + (m_e - 1)\left(\left\lfloor\tfrac{N}{2}\right\rfloor + 1 - m_e\right)\right] + \sum_{e\, =\, j,\, -j}\left[\tfrac{m_e^2 + m_e}{2} + (m_e - 1)\left(\left\lfloor\tfrac{N-1}{2}\right\rfloor - m_e\right)\right]$$
From (30)–(32), the total number of multiplications and additions required using the Matveev method is
$$\Xi'(N) = \Xi'_{\tilde{X}}(N) + \Xi'_W(N) = 2\,\Xi'_{\tilde{X}}(N)$$
$$\Phi'(N) = \Phi'_{\tilde{X}}(N) + \Phi'_W(N)$$
If $N$ is very large, the number of multiplications and additions required using Matveev's DFT eigenvectors is approximated by
$$\Xi'(N) \approx \tfrac{3}{4}N^2$$
$$\Phi'(N) \approx \tfrac{9}{8}N^2$$
If the input is even or odd symmetric, the number of multiplications and additions can be further reduced. In contrast to (33) and (34), brute-force computation of the DFT for a real input requires $2(N-1)^2$ real multiplications and an equal number of additions. Thus, the method proposed here performs better than brute-force DFT computation. Using the Matveev eigenvectors, our method also outshines the DFT calculated via an $N/2$-point DFT. Moreover, unlike most FFT algorithms, our method does not require that $N$ be highly composite. One can combine our approach with deconstructive algorithms like the Cooley–Tukey algorithm or the prime factor algorithm to yield efficient algorithms for computing the DFT of large sequences. This approach is certainly effective when the length $N$ is not an integer power of 2. Figure 4 shows the number of multiplications required for computing the DFT using various methods.
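The sparse counts (33) and (34) can also be checked numerically; in the sketch below (ours; `multiplicities()` is the helper from the previous sketch), $N = 5$ reproduces the 20 multiplications and 24 additions quoted in Section 8:

```python
def counts_sparse(N):
    m1, m_1, mj, m_j = multiplicities(N)
    xi_x = sum((m - 1) * (N // 2 - (m - 2) / 2) + 2 for m in (m1, m_1)) + \
           sum(m * ((N - 1) // 2 - (m - 1) / 2) for m in (mj, m_j))
    phi_x = sum(m * (N - m) + m - 1 for m in (m1, m_1)) + \
            sum(2 * m * ((N - 1) // 2) - m * m for m in (mj, m_j))
    phi_w = sum((m * m + m) / 2 + (m - 1) * (N // 2 + 1 - m) for m in (m1, m_1)) + \
            sum((m * m + m) / 2 + (m - 1) * ((N - 1) // 2 - m) for m in (mj, m_j))
    return int(2 * xi_x), int(phi_x + phi_w)   # (33), (34)

assert counts_sparse(5) == (20, 24)
```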

7.3. Complex Inputs

If the input $x \in \mathbb{C}^N$, (10), (11), (14) or (15) can be used to keep the computation natively real. In this case, the number of real multiplications doubles, and the number of real additions is slightly more than doubled. This is easiest to see in (10) and (11): the multiplications and additions double because there are two terms, and an extra $2N$ additions are required to add the real and imaginary parts from these two terms. Similar results can also be derived from (14) and (15). Assuming the use of Matveev's eigenvectors, (33) and (34) change to
$$\Xi'_C(N) = 2\,\Xi'(N) = 4\,\Xi'_{\tilde{X}}(N)$$
$$\Phi'_C(N) = 2\,\Phi'(N) + 2N = 2\left(\Phi'_{\tilde{X}}(N) + \Phi'_W(N)\right) + 2N$$
where $\Xi'_C(N)$ and $\Phi'_C(N)$ represent the total numbers of real multiplications and additions required if $x$ is complex.

7.4. Example

Suppose we must determine the 2D DFT of a high-definition image with resolution $1080 \times 1920$. When using the row–column approach for finding the 2D DFT, the transform is first taken along the columns, followed by a transform along the rows. Version R2020b of MATLAB© was not sufficiently precise to determine the Matveev eigenvectors to a high enough accuracy for any $N > 20$. As mentioned before, this cap of around 20 comes from the limitations of the machine used to compute the eigenvectors; in practice, the cap will be higher, or it can be eliminated by pre-computing the eigenvectors with high accuracy. It is also possible that a more numerically sound version of the Matveev algorithm can be found. This numerical accuracy issue actually occurs in every DFT computational algorithm, to varying degrees and with varying impact. Typically, however, the DFT is calculated by decomposing the larger computation into smaller DFTs in a divide-and-conquer strategy.
Since both 1080 and 1920 have many factors, it is possible to decompose the DFT into smaller DFTs of length less than or equal to 20. For instance, consider first taking the 1D DFT along the columns. We know that $1080 = 2^3 \times 3^3 \times 5$. Therefore, we can first use the prime-factor algorithm [3] to divide the 1080-point DFT into separate DFTs of lengths $2^3$, $3^3$ and 5. The $2^3$- and $3^3$-point DFTs can be recursively divided into smaller lengths via decimation-in-time. The final 9-point and 5-point DFT butterflies are computed using the proposed method, which results in 24,800 real multiplications. Computing the 2-point DFTs is trivial, and the associated twiddle factors require 1,620 real multiplications. Thus, finding a 1080-point DFT with this method requires a total of 26,420 multiplications. For the 2D DFT, this has to be repeated 1920 times, for a total of around 50.7 million multiplications. Now, take the DFT along the rows: all the rows together need around 49.1 million multiplications. Thus, in total, the 2D DFT uses approximately 99.8 million multiplications.
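A quick arithmetic check of these totals (ours, using only the figures quoted above):

```python
col_dft = 24_800 + 1_620            # one 1080-point DFT: 26,420 real mults
cols = col_dft * 1920               # 50,726,400  (~50.7 million)
rows = 49_100_000                   # stated total for the 1920-point rows
print(f"{cols:,} column mults, {cols + rows:,} total")   # ~99.8 million
```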

8. Comparison with the 5-Point DFT Butterfly Algorithm

Unlike the 4-point DFT, computing the 5-point DFT requires multiplication by complex twiddle factors, as shown in Figure 5. Our method performs better due to the lack of complex multiplications: using our method, a 5-point DFT requires 20 multiplications and 24 additions from (33) and (34), whereas the brute-force (direct) method requires 41 multiplications and 36 additions. We chose the 5-point DFT butterfly because it is the smallest DFT that contains all the unique DFT eigenvalues, i.e., $\{1, -1, j, -j\}$; in contrast, the eigenvalues of the 4-point DFT are $\{1, -1, -j\}$.

9. Conclusions

In this paper, we have shown how the discrete Fourier transform can be efficiently calculated using its eigenvectors in a natively real-valued algorithm. Our proposed method uses (sparse) real eigenvectors of the DFT. The resulting algorithms are efficient when compared to the Cooley–Tukey FFT algorithm, and all steps are natively real-valued, so no complex multiplications are ever required! Our proposed method is suitable for higher dimensional DFTs, can be extended to other unitary transforms, and requires no significant structural change for differing lengths N. An Inverse DFT (IDFT) is presented that possesses the same qualities, and is nearly identical to the efficient DFT algorithm.

Author Contributions

Conceptualization, R.T., V.D. and L.S.D.; methodology, R.T., V.D. and L.S.D.; software, R.T.; validation, R.T. and V.D.; formal analysis, R.T. and V.D.; investigation, R.T.; resources, R.T., V.D. and L.S.D.; data curation, R.T.; writing—original draft preparation, R.T.; writing—review and editing, R.T., V.D. and L.S.D.; visualization, R.T., V.D. and L.S.D.; supervision, V.D. and L.S.D.; project administration, V.D.; funding acquisition, V.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cooley, J.W.; Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 1965, 19, 297–301.
2. Duhamel, P.; Hollmann, H. 'Split radix' FFT algorithm. Electron. Lett. 1984, 20, 14–16.
3. Good, I.J. The Interaction Algorithm and Practical Fourier Analysis. J. R. Stat. Soc. Ser. B Methodol. 1958, 20, 361–372.
4. Volder, J.E. The CORDIC Trigonometric Computing Technique. IRE Trans. Electron. Comput. 1959, EC-8, 330–334.
5. Thomas, R.; DeBrunner, V.; DeBrunner, L. How the Discrete Hirschman Transform Inherits its Eigenstructure from the DFT. In Proceedings of the 2020 54th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 1–4 November 2020; pp. 858–862.
6. Przebinda, T.; DeBrunner, V.; Ozaydin, M. The optimal transform for the discrete Hirschman uncertainty principle. IEEE Trans. Inf. Theory 2001, 47, 2086–2090.
7. Matveev, V.B. Intertwining relations between the Fourier transform and discrete Fourier transform, the related functional identities and beyond. Inverse Probl. 2001, 17, 633.
8. McClellan, J.; Parks, T. Eigenvalue and eigenvector decomposition of the discrete Fourier transform. IEEE Trans. Audio Electroacoust. 1972, 20, 66–74.
9. Candan, C. On the eigenstructure of DFT matrices [DSP education]. IEEE Signal Process. Mag. 2011, 28, 105–108.
10. Mitra, S.K. Digital Signal Processing, 4th ed.; McGraw-Hill: New York, NY, USA, 2010.
Figure 1. A block diagram showing the signal flow for our proposed DFT algorithm. Here $\tilde{X}_i$ is the $(i+1)$th diagonal entry of $\tilde{X}$. The scaler block multiplies the eigenvectors by the filter outputs (producing $V\tilde{X}$), while the combiner block multiplies the scaled result by $u$ (producing the real and imaginary components of $X$).
Figure 2. Plot showing the percentage of elements in the orthonormal eigenvectors that are zero against the length of the DFT. Note the slight upward trend of the plot: eigenvectors of larger DFTs are sparser than eigenvectors of smaller ones.
Figure 3. Plot showing the mean squared error when computing $\mathrm{IDFT}(\mathrm{DFT}(x))$ using the proposed method. The eigenvectors were calculated using the Matveev method and the Gram–Schmidt method. Since the plot shows the error after the forward and inverse computation, the error in just one of the directions will be slightly less than what is shown here. Note that this error is caused by the limitations of the machine that was used to calculate the eigenvectors; in practice, the eigenvectors can be pre-computed with high enough precision to eliminate the error shown in this plot.
Figure 4. Number of real multiplications required while computing the DFT using various methods for $N \in [15, 65]$. The "recursive Matveev DFT" plot is obtained by recursively dividing the DFT of length $N$ into smaller DFTs and using the Matveev eigenvectors for the computation of the smaller DFTs. Note that the $N/2$-point DFT is valid only if $N$ is even and the Cooley–Tukey FFT shown in this plot is valid only if $N$ is an integer power of 2.
Figure 5. Signal flow graph of a 5-point DFT. Here $\omega = e^{-j2\pi/5}$.
Table 1. Computation time in seconds for 10,000 runs.

Length of DFT:    10       20       30       40
Matveev:          0.919    3.354    6.460    10.452
Gram–Schmidt:     0.684    1.303    2.269    3.694