SU(2) and SU(1,1) Approaches to Phase Operators and Temporally Stable Phase States: Applications to Mutually Unbiased Bases and Discrete Fourier Transforms

We propose a group-theoretical approach to the generalized oscillator algebra A_κ recently investigated in J. Phys. A: Math. Theor. 43 (2010) 115303. The case κ ≥ 0 corresponds to the noncompact group SU(1,1) (as for the harmonic oscillator and the Pöschl-Teller systems), while the case κ < 0 is described by the compact group SU(2) (as for the Morse system). We construct the phase operators and the corresponding temporally stable phase eigenstates for A_κ in this group-theoretical context. The SU(2) case is exploited to derive families of mutually unbiased bases used in quantum information. Along this vein, we examine some characteristics of a quadratic discrete Fourier transform in connection with generalized quadratic Gauss sums and generalized Hadamard matrices.


Introduction
The use of a generalized oscillator algebra for characterizing a dynamical system has given rise to a great many papers. Among them, we may quote the polynomial Heisenberg algebra worked out in the context of supersymmetry [1][2][3], the deformed Heisenberg algebra introduced in connection with parafermionic and parabosonic systems [4][5][6][7], the C_λ-extended oscillator algebra developed in the framework of parasupersymmetric quantum mechanics [8][9][10][11], and the generalized Weyl-Heisenberg algebra W_k related to Z_k-graded supersymmetric quantum mechanics [12][13][14][15][16]. In this direction, the construction of a truncated generalized oscillator algebra was developed by several authors. In particular, the pioneering work along this line by Pegg and Barnett led to the calculation of the phase properties of the electromagnetic field [17]. Let us also mention the works [18,19], in relation with orthogonal polynomials of a discrete variable, and [16], in connection with phase operators and dynamical systems.
Recently, a generalized oscillator algebra A κ , a one-parameter algebra that is a particular case of the algebra W 1 , was studied for the purpose of defining phase operators and the corresponding phase eigenstates [16]. In addition, it was shown that the phase states for A κ with κ > 0, which are particular coherent states [20,21], can serve to construct mutually unbiased bases which are of considerable interest in quantum information and quantum computing [16].
It is the aim of the present paper to analyze the algebra A_κ from the point of view of group theory. Since A_κ can describe the Morse system for κ < 0, as well as the harmonic oscillator and the Pöschl-Teller systems for κ ≥ 0, we expect the groups SU(2) and SU(1,1) to play a central role. The search for phase operators and temporally stable phase states thus amounts to studying generalized coherent states for SU(2) and SU(1,1).
The material presented here is organized as follows. Section 2 deals with the generalized oscillator algebra A_κ and its connection with the Lie algebras of SU(2) and SU(1,1). In Section 3, the phase operators and the phase states introduced in [16] are described in the framework of SU(2) and SU(1,1). Section 4 is devoted to a truncation of the algebra A_κ. In Section 5, the phase operator for the group SU(2) is shown to be of relevance for the determination of mutually unbiased bases (cf. [22][23][24][25][26][27][28][29][30][31][32]). Finally, the quadratic transformation that connects the phase states for SU(2) to angular momentum states is studied in Section 6. This transformation generalizes the discrete Fourier transform, whose main properties are given in the appendix.
The notations are standard. Let us simply mention that δ_{a,b} stands for the Kronecker symbol of a and b, I for the identity operator, A† for the adjoint of the operator A, and [A, B] for the commutator of the operators A and B. The bar indicates complex conjugation and matrices are generally written with boldface letters (I_d is the d-dimensional identity matrix). We use the notation |ψ⟩ for a vector in a Hilbert space and we denote by ⟨φ|ψ⟩ and |φ⟩⟨ψ| respectively the inner and outer products of the vectors |ψ⟩ and |φ⟩. As usual, N, N*, Z and R+ are the sets of nonnegative integers, strictly positive integers, integers and positive real numbers; R and C are the real and complex fields; and Z/dZ is the ring of integers 0, 1, . . . , d − 1 modulo d.

The Algebra A_κ
Following [16], we start from the algebra A_κ spanned by the three linear operators a−, a+ and N satisfying the commutation relations

[a−, a+] = I + 2κN, [N, a±] = ±a±, (a−)† = a+, N† = N, (1)

where κ is a real parameter. In the particular case κ = 0, the algebra A_0 is the usual harmonic oscillator algebra. In the case κ ≠ 0, the operators a−, a+ and N in (1) generalize the annihilation, creation and number operators used for the harmonic oscillator. Thus, the algebra A_κ can be referred to as a generalized oscillator algebra. In fact, the algebra A_κ represents a particular case of the generalized Weyl-Heisenberg algebra W_k introduced in [12][13][14][15] to describe a fractional supersymmetric oscillator. A similar algebra, namely the C_λ-extended oscillator algebra, was studied in connection with a generalized oscillator [8][9][10][11].

The Oscillator Algebra as a Lie Algebra
The case κ = 0 corresponds of course to the usual Weyl-Heisenberg algebra. It can be shown that the cases κ < 0 and κ > 0 considered in [16] are associated with the Lie algebras of the groups SU(2) and SU(1,1), respectively. We shall consider in turn the cases when κ < 0 and κ > 0.
For κ < 0, we introduce the operators J−, J+ and J3, defined in terms of a−, a+ and N. They satisfy the commutation relations

[J3, J±] = ±J±, [J+, J−] = 2J3,

and therefore span the Lie algebra of SU(2). Similarly, for κ > 0, the operators K−, K+ and K3, again given in terms of a−, a+ and N, lead to the Lie brackets of the group SU(1,1):

[K3, K±] = ±K±, [K+, K−] = −2K3.

Rotated Shift Operators for SU(2) and SU(1,1)

We are now in a position to reconsider some of the results of [16] in terms of the Lie algebras su(2) and su(1,1). This will shed new light on the usual treatments of the representation theory of SU(2) and SU(1,1), as far as the action of the shift operators of these groups on the representation space is concerned.

Let us first recall that in the generic case (κ ∈ R), the algebra A_κ admits a Hilbertian representation in which the operators a−, a+ and N act on a Hilbert space F_κ spanned by the orthonormal basis {|n⟩ : n = 0, 1, . . .}, with inner product ⟨n|n′⟩ = δ_{n,n′}. The dimension of F_κ is finite when κ < 0 and infinite when κ ≥ 0. The representation is defined through (6)-(8) of [16], where ϕ is an arbitrary real parameter and the function F : N → R+ satisfies a condition depending on κ. Obviously, for κ ≥ 0 the dimension of F_κ is infinite. In contrast, for κ < 0 the space F_κ is finite-dimensional, with a dimension d determined by κ. It is thus possible to transcribe (6)-(8) in terms of the Lie algebras su(2) and su(1,1).
Although nothing forbids the choice ϕ = 0, it is worthwhile to examine the significance of the parameter ϕ. Let us denote by Ĵ+ and Ĵ− the operators J+ and J− corresponding to ϕ = 0. It is then easy to show that Ĵ± and J± are connected by the similarity transformation (19), implemented by an operator X. Note that the nonlinear transformation J± ↔ Ĵ±, defined by (19), leaves invariant the Casimir operator J² of SU(2). We shall see in Section 5 that the parameter ϕ is essential for generating mutually unbiased bases.

The SU(1,1) Case
The representation theory of SU(1,1) is well known (see for example [20]). We shall be concerned here with the positive discrete series D+ of SU(1,1). The representation associated with the Bargmann index k can be defined via

K+|k, k + n⟩ = √((2k + n)(n + 1)) e^{−iψ(k,n)} |k, k + n + 1⟩ (21)

together with a similar relation (22) for K−, and with K²|k, k + n⟩ = k(1 − k)|k, k + n⟩, where n ∈ N and K² stands for the Casimir operator of SU(1,1). This infinite-dimensional representation is spanned by the orthonormal set {|k, k + n⟩ : n ∈ N}. Equations (21) and (22) differ from the standard relations [20] by the introduction of the real-valued phase function ψ. Such a function is introduced, in a way paralleling the introduction of the phase factors in (6) and (7), to make precise the connection between A_κ and su(1,1) for κ > 0. The relative phases in (21) and (22) are such that K+ is the adjoint of K−. For fixed κ and k, we make the identification of the number states |n⟩ with the vectors |k, k + n⟩. Then, from (4) we get the central relation (26), to be compared with (16). Furthermore, by combining (4), (6), (7), (21), (22), (25) and (26), we get (27). Finally, the action of the shift operators K+ and K− on a generic vector |k, k + n⟩ can be rewritten as (28) and (29). The particular case ϕ = 0 in (28) and (29) gives back the standard relations for SU(1,1). The operators K+ and K− are connected to the operators K̂+ and K̂− corresponding to ϕ = 0 by (30), so that the nonlinear transformation K± ↔ K̂±, defined by (30), leaves invariant the Casimir operator K² of SU(1,1).

Phase Operators
Phase operators were defined in [16] from a factorization of the annihilation operator a − of A κ . We shall transcribe this factorization in terms of the lowering generators J − and K − of SU(2) and SU(1,1), respectively.

The SU(2) Case
Let us define E_d via the factorization of J− mentioned above. The operator E_d can then be developed on the basis vectors of F_κ, where m − 1 should be understood as j when m = −j. Consequently, one obtains the relations (34) and (35), which make it clear that the operator E_d is unitary.
In order to show that E_d is a phase operator, we consider the eigenvalue equation E_d|z⟩ = z|z⟩, z ∈ C. It can be shown that the determination of normalized eigenstates |z⟩ requires that a quantization condition be fulfilled; hence, the complex variable z is a root of unity, up to a phase. As a result, the states |z⟩ depend on a continuous parameter ϕ and a discrete parameter α; they shall be written |ϕ, α⟩. A lengthy calculation leads to their explicit expansion on the basis of F_κ. The latter states satisfy (40). Thus, the states |ϕ, α⟩ are phase states and the unitary operator E_d is a phase operator, with a nondegenerate spectrum, associated with SU(2). Furthermore, the eigenvectors of E_d satisfy (41), where t is a real parameter. Equation (41) indicates that the phase states |ϕ, α⟩ are temporally stable, an important property for the determination of the so-called mutually unbiased bases [16]. Note that the states |ϕ, α⟩ are not all orthogonal (states with the same ϕ are of course orthogonal) and that they satisfy the closure property ∑_{α=0}^{2j} |ϕ, α⟩⟨ϕ, α| = I for fixed ϕ (see also [16]).
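Since the explicit displays for E_d and its eigenstates are not reproduced in this excerpt, the following numerical sketch illustrates the structural point with a model phase operator: a cyclic shift on the number basis, dressed with a ϕ-dependent phase on the wrap-around term. The matrix below is an illustrative stand-in, not the exact E_d of the text, but it shares its hallmarks: unitarity, a nondegenerate unimodular spectrum, and eigenvectors whose components all have modulus 1/√d (the discrete phase states).

```python
import numpy as np

d = 7        # dimension d = 2j + 1 (illustrative value)
phi = 0.3    # arbitrary real parameter (plays the role of varphi)

# Model phase operator: a cyclic shift on the number basis |0>, ..., |d-1>
# with a phi-dependent phase on the wrap-around term (a simplified stand-in
# for E_d, whose exact matrix elements are not reproduced in the text).
E = np.zeros((d, d), dtype=complex)
for n in range(1, d):
    E[n - 1, n] = 1.0                       # E|n> = |n - 1>
E[d - 1, 0] = np.exp(1j * d * phi)          # wrap-around term with phase

# Eigenvectors of E are discrete "phase states": every component has
# modulus 1/sqrt(d), and the spectrum is nondegenerate and unimodular.
eigvals, eigvecs = np.linalg.eig(E)
```

Any such phased cyclic shift has eigenvalues λ with λ^d equal to the wrap-around phase, so the spectrum is a shifted set of d-th roots of unity and the eigenvector components have constant modulus.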

The SU(1,1) Case
By writing E∞ in a form paralleling the su(2) case, it can be shown that E∞ acts as a lowering (shift) operator on the states |k, k + n⟩. The operator E∞ satisfies E∞(E∞)† = I while (E∞)†E∞ ≠ I; thus, it is not unitary, in contrast with the operator E_d for su(2). Let us look for normalized states |z⟩ such that E∞|z⟩ = z|z⟩. One readily finds such states, unique up to a phase factor. Following [33] and [16], we define the states |ϕ, θ⟩, where θ ∈ [−π, +π[. One thus obtains that the states (50), defined on the unit circle S¹, are eigenstates of E∞ with eigenvalue e^{iθ}. The operator E∞ is thus a nonunitary phase operator associated with SU(1,1). As a particular case of the phase states |ϕ, θ⟩, the states |0, θ⟩ corresponding to ϕ = 0 are identical to the phase states introduced in [33] for SU(1,1). The parameter ϕ ensures that the states |ϕ, θ⟩ are temporally stable, in the sense that they evolve into states of the same family for any real value of t. Note that, for fixed ϕ, the phase states |ϕ, θ⟩ satisfy a closure relation but are neither normalized nor orthogonal.
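The nonunitarity of E∞ is conveniently illustrated on a one-sided shift model (a Susskind-Glogower-type operator, used here as a stand-in for E∞ with its phase factors omitted), truncated to N levels for numerics: one product of E and its adjoint gives the identity, while the other leaves out the projector on the lowest state.

```python
import numpy as np

N = 10   # numerical truncation of the infinite ladder |0>, |1>, ...

# Model for E_infinity: the one-sided shift sum_n |n><n+1|, which lowers
# the number state by one and kills |0>.  (A sketch; the phase factors of
# the actual operator in the text are omitted.)
E = np.eye(N, k=1)          # E[n, n+1] = 1, i.e. E|n+1> = |n>

# E E^dagger = I (up to the top state, a truncation artifact), while
# E^dagger E = I - |0><0|: an isometry defect, hence E is not unitary.
left = E @ E.T              # ~ identity, except the truncation edge
right = E.T @ E             # = identity minus the projector on |0>
```

In the untruncated infinite-dimensional space the first product is exactly the identity; only the defect |0⟩⟨0| in the second product survives, which is what obstructs unitarity.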

Truncated Generalized Oscillator Algebra
The idea of a truncated algebra for the harmonic oscillator goes back to Pegg and Barnett [17]. Truncated algebras for generalized oscillators were introduced in [16,18,19]. In [16], a truncated oscillator algebra A κ,s associated with the algebra A κ was considered both in the infinite-dimensional case (κ ≥ 0) and the finite-dimensional case (κ < 0). The introduction of such a truncated algebra makes it possible to define a unitary phase operator for κ ≥ 0 and to avoid degeneracy problems for κ < 0. We shall briefly revisit in this section the truncation of the generalized oscillator algebra A κ in an approach that renders more precise the relationship between A κ,s and A κ .
Let us start with the two operators c− and c+ defined by (55) and (56), where d(κ) = d − 1 or ∞ according to whether κ < 0 or κ ≥ 0. The finite truncation index s is arbitrary for κ ≥ 0 and less than d for κ < 0. It is straightforward to prove the relations (57)-(60). Therefore, the operators c− and c+ = (c−)† yield the null vector when acting on the vectors of the space F_κ that do not belong to its subspace F_{κ,s} spanned by the set {|0⟩, |1⟩, . . . , |s − 1⟩}. In this sense, c+ and c− differ from the operators b+ and b− of [16].
In the light of Equations (57)-(60), the passage from the algebra A_κ to the truncated algebra A_{κ,s} should be understood as the restriction of the space F_κ to its subspace F_{κ,s}, together with the replacement of the commutation relations in (1) by (61), which easily follows from (55) and (56). It should be observed that the difference between the operators c± and b± manifests itself in (61) through the summation from n = s to n = d(κ).
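Since the explicit definitions (55)-(56) are not reproduced in this excerpt, the following sketch uses one plausible realization: projecting the oscillator ladder operators onto the subspace F_{κ,s}. It reproduces the stated behavior, namely that c− and c+ annihilate every basis vector outside F_{κ,s} while acting as the original ladder operators inside it.

```python
import numpy as np

d, s = 10, 6    # numerical cutoff of F_kappa and truncation index s < d

# Harmonic-oscillator annihilation operator on the first d number states:
# a|n> = sqrt(n) |n-1>.
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)

# Projector onto F_{kappa,s} = span{|0>, ..., |s-1>}.
P = np.diag([1.0] * s + [0.0] * (d - s))

# Hypothetical realization of the truncated operators (the explicit
# definitions (55)-(56) are not reproduced here): c_- = P a P.
c_minus = P @ a @ P
c_plus = c_minus.conj().T
```

Note that c_plus also annihilates the top state |s − 1⟩ of the subspace, the mechanism that makes a unitary phase operator possible in the Pegg-Barnett spirit.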

Quantization of the Phase Parameter
We now examine the consequences of a discretization of the parameter ϕ in the su(2) case (κ < 0). By taking the quantized values (62) of ϕ (cf. [16]), the state vector |ϕ, α⟩ becomes the vector (63). The phase operator E_d is of course ϕ-dependent. For the quantized values of ϕ given by (62), Equations (34) and (35) can be rewritten accordingly. The corresponding operator E_d is thus a-dependent. However, the eigenvalues of E_d do not depend on a, as shown by (40).
The operator v_ra is unitary and commutes with the Casimir operator J² of SU(2). The set {J², v_ra} is a complete set of commuting operators that provides an alternative to the scheme {J², J_z} used in angular momentum theory. In other words, for fixed j, r and a, the set B_ra constitutes a nonstandard orthonormal basis for the (2j + 1)-dimensional irreducible representation of SU(2). The basis B_ra is an alternative to the canonical basis B_{2j+1} defined in (11). The reader may consult [22,23] for a study of the {J², v_ra} scheme and of its associated Wigner-Racah algebra.
The a-dependent operator E_d and the operator v_ra are closely connected. Indeed, it can be checked that they are related by (74), as can be guessed from (40) and (71).

Introduction of Mutually Unbiased Bases
The case r = 0 deserves special attention. Let us examine the inner product ⟨aα|bβ⟩ of the vectors |aα⟩ and |bβ⟩ defined by (63), in view of its importance in the study of mutually unbiased bases (MUBs).
For a = b, we have ⟨aα|aβ⟩ = δ_{α,β}. Therefore, for fixed j and a (2j ∈ N and a in the ring Z/(2j + 1)Z), the basis B_0a (a particular case of the basis B_ra) and the basis B_{2j+1} are interrelated via (77), with α = 0, 1, . . . , 2j and m = j, j − 1, . . . , −j. In view of (77), we see that B_0a (and more generally B_ra) can be considered as a generalized Fourier transform of B_{2j+1}.
For a ≠ b, the inner product ⟨aα|bβ⟩ can be expressed in terms of the generalized quadratic Gauss sum defined by (see [34])

S(u, v, w) = ∑_{k=0}^{|w|−1} e^{iπ(uk² + vk)/w}. (78)

In fact, the inner product ⟨aα|bβ⟩ reduces to such a Gauss sum. The sum S(u, v, w) can be calculated in closed form in the situation where u, v and w are integers such that u and w are coprime, uw is nonzero, and uw + v is even.
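With the standard definition of the generalized quadratic Gauss sum (assumed here, since the paper's display (78) is not reproduced in this excerpt), the closed-form modulus |S(u, v, w)| = √w under the stated conditions can be checked numerically:

```python
import cmath
from math import gcd, sqrt, pi

def gauss_sum(u: int, v: int, w: int) -> complex:
    """Generalized quadratic Gauss sum, standard definition (assumed):
    S(u, v, w) = sum_{k=0}^{|w|-1} exp[i*pi*(u*k^2 + v*k)/w]."""
    return sum(cmath.exp(1j * pi * (u * k * k + v * k) / w)
               for k in range(abs(w)))

# Under the stated conditions (gcd(u, w) = 1, u*w != 0, u*w + v even),
# the closed form gives |S(u, v, w)| = sqrt(w) for w > 0.  The triple
# (2, 0, 5) is the classical Gauss sum sum_k exp(2*pi*i*k^2/5).
examples = [(2, 0, 5), (1, 1, 5), (3, 1, 7), (2, 0, 3)]
```

The parity condition uw + v even is essential; dropping it breaks the √w modulus in general.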
Let us now briefly discuss why (63) is of interest for the determination of MUBs. We recall that two orthonormal bases of the d-dimensional Hilbert space C^d are said to be unbiased if the modulus of the inner product of any vector of one basis with any vector of the other one is equal to 1/√d [35,36].
For fixed d, it is known that the number N_MUB of MUBs satisfies 3 ≤ N_MUB ≤ d + 1 and that the upper bound N_MUB = d + 1 is attained when d is a power of a prime number [35,36]. Then, Equation (77) shows that any basis B_0a (a ∈ Z/(2j + 1)Z) is unbiased with B_{2j+1} for an arbitrary value of 2j + 1. Furthermore, in the special case where 2j + 1 is a prime integer, the calculation of S(u, v, w) with (80) leads to

|⟨aα|bβ⟩| = 1/√(2j + 1) (81)

for a ≠ b, α = 0, 1, . . . , 2j and β = 0, 1, . . . , 2j. Equation (81) implies that B_0a and B_0b, for a and b in the Galois field F_{2j+1}, are mutually unbiased. Thus one arrives at the following conclusion: for 2j + 1 prime, the 2j + 1 bases B_0a (a = 0, 1, . . . , 2j) and the basis B_{2j+1} form a complete set of d + 1 = 2j + 2 MUBs. This result is in agreement with the one derived in [24][25][26][27][28][29][30][31][32]. It can be extended to the case r ≠ 0 as follows: for arbitrarily fixed r and 2j + 1 prime, the 2j + 1 bases B_ra (a = 0, 1, . . . , 2j) and the basis B_{2j+1} form a complete set of d + 1 = 2j + 2 MUBs. The parameter r serves to differentiate various families (or complete sets) of MUBs.
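For 2j + 1 prime, this unbiasedness can be verified numerically. The sketch below uses a Wootters-Fields-type quadratic-phase construction in place of the bases B_0a (whose exact phase convention in (63) is not reproduced in this excerpt) and checks that, for d = 5, the d quadratic bases together with the computational basis form d + 1 pairwise mutually unbiased bases.

```python
import numpy as np

d = 5                          # odd prime dimension (d = 2j + 1)
omega = np.exp(2j * np.pi / d)

def quadratic_basis(a):
    """Orthonormal basis of quadratic-phase vectors (a Wootters-Fields-type
    construction, used here in place of the paper's B_0a whose exact phases
    are not reproduced): vector beta has components
    omega^(a*n^2 + beta*n)/sqrt(d).  Columns are the basis vectors."""
    return np.array([[omega ** (a * n * n + b * n) for b in range(d)]
                     for n in range(d)]) / np.sqrt(d)

# The d quadratic bases (a = 0, ..., d-1) plus the computational basis:
# d + 1 bases in all.
bases = [np.eye(d, dtype=complex)] + [quadratic_basis(a) for a in range(d)]
```

The inner product between vectors of two different quadratic bases is itself a complete quadratic Gauss sum modulo the odd prime d, which is why its modulus is exactly 1/√d.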

Discrete Fourier Transforms
We discuss in this section two quadratic versions of the discrete Fourier transform (DFT), namely, the quantum DFT, which connects state vectors in a Hilbert space, and the classical DFT used in signal analysis.

Quantum Quadratic Discrete Fourier Transform
Equation (66) shows that the vector |jα; ra⟩ can be considered as a quantum DFT of the angular momentum basis, a transform that is quadratic (in m) for a ≠ 0. This transform is nothing but an ordinary quantum DFT for r = a = 0 [37]. For fixed j, r and a, the inverse transform is (82). Compact relations, better adapted to the Fourier-transform formalism, can be obtained by going back to the change of notation given by (14) and (15). Then, Equations (66) and (82) read (83) and (84). We shall put (85), a relation that defines (for fixed d, r and a) a d × d matrix F_ra. Let us recall that for a fixed value of d in N*, both r and a have fixed values (r ∈ R and a ∈ Z/dZ) and n, α = 0, 1, . . . , d − 1.
For d = 2j + 1 arbitrary, we can show that Therefore, in the particular case r = s and d = p, where p is prime, we have Equation (88) shall be discussed below in terms of Hadamard matrices.

Factorization of the Quadratic DFT
We are now prepared to discuss the transforms (83) and (84). The particular case r = a = 0 corresponds to the ordinary DFT. For a ≠ 0, the bijective transformation x ↔ y can be thought of as a quadratic DFT. The analog of the Parseval-Plancherel theorem for the ordinary DFT can be expressed in the following way: the quadratic transformations x ↔ y and x′ ↔ y′ associated with the same matrix F_ra, r ∈ R and a ∈ Z/dZ, satisfy a conservation rule in which both sums do not depend on r and a.
The matrix F_ra can be factorized as the product D_ra F, where D_ra is a d × d diagonal matrix. For fixed d, there is a d-fold infinity of Gaussian matrices D_ra (and thus of matrices F_ra), distinguished by a ∈ Z/dZ and r ∈ R. On the other hand, F is the well-known ordinary DFT matrix, which has been the object of a great number of studies. The main properties of the ordinary DFT matrix F are summed up in the appendix.
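The factorization and the generalized-Hadamard property are easy to check numerically. The sketch below verifies that F_ra = D_ra F is unitary, that all its entries have modulus 1/√d, and that the Parseval-Plancherel rule holds; the explicit phases chosen for D_ra are an illustrative assumption, since the corresponding display is not reproduced in this excerpt.

```python
import numpy as np

d, a, r = 6, 2, 0.1     # dimension, a in Z/dZ, r real (illustrative values)
omega = np.exp(2j * np.pi / d)

# Ordinary DFT matrix F, entries omega^(n*k)/sqrt(d).
F = np.array([[omega ** (n * k) for k in range(d)]
              for n in range(d)]) / np.sqrt(d)

# Diagonal "Gaussian" matrix with unimodular quadratic phases.  The exact
# exponent of the paper's D_ra is not reproduced here; the quadratic form
# below is a hypothetical stand-in.
D = np.diag([np.exp(1j * np.pi * (a * n * n / d + 2 * r * n))
             for n in range(d)])

F_ra = D @ F            # factorization F_ra = D_ra F

x = np.arange(1, d + 1, dtype=complex)   # test vector for the Parseval rule
```

Since D_ra is diagonal and unimodular, any such product with the unitary F is automatically unitary with flat-modulus entries, i.e. a generalized Hadamard matrix.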

Hadamard Matrices
The matrix F_ra defined by (85) is unitary and the modulus of each of its matrix elements is equal to 1/√d. Thus, F_ra can be considered as a generalized Hadamard matrix (we adopt here the normalization of Hadamard matrices generally used in quantum information and quantum computing) [26][27][28][29][30][31].
In the case where d is a prime number, Equation (88) shows that the matrix (F ra ) † F rb is another Hadamard matrix. However, it should be mentioned that, given two Hadamard matrices M and N, the product M † N is not in general a Hadamard matrix.

Trace Relations
The trace of F_ra reads as a generalized quadratic Gauss sum, where S(u, v, w) is given by (78) with appropriate arguments. Note that the case a = 2 deserves special attention: in this case, the quadratic character of tr F_ra disappears. In addition, if r = 0 the trace takes a simple closed form, as can be seen from a direct calculation.

Diagonalization
It is a simple matter of calculation to prove relation (97), where the matrix (98) represents the linear operator v_ra defined by (68). Therefore, the matrix F_ra diagonalizes the endomorphism associated with the operator v_ra.
Concerning (97) and (98), it is important to note the following conventions. Following the tradition in quantum mechanics and quantum information, the matrix V_ra of the operator v_ra is set up on the basis B_{2j+1} ordered from left to right and from top to bottom in the range |j, j⟩ ≡ |d − 1⟩, |j, j − 1⟩ ≡ |d − 2⟩, . . . , |j, −j⟩ ≡ |0⟩. For the sake of compatibility, we adopt a similar convention for the other matrices under consideration. Thus, the rows and columns of F_ra are arranged in the order d − 1, d − 2, . . . , 0.

Link with the Cyclic Group
There exists an interesting connection between the matrix X and the cyclic group C_d [24][25][26]. Let R be a rotation through 2π/d around an arbitrary axis, the generator of C_d. Then, the map R^n ↦ X^n defines a d-dimensional matrix representation of C_d. This representation is the regular representation of C_d. Thus, the reduction of the representation {X^n : n = 0, 1, . . . , d − 1} contains each (one-dimensional) irreducible representation of C_d once and only once.
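This reduction can be checked concretely: the DFT matrix diagonalizes the cyclic matrix X, and the resulting diagonal contains every d-th root of unity exactly once, i.e. each one-dimensional character of C_d appears once. A short numerical sketch (the down-shift convention for X is an assumption of this sketch; only the cyclic structure matters):

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)

# Cyclic matrix realizing the generator of C_d: X|n> = |n-1 mod d>.
X = np.zeros((d, d), dtype=complex)
for n in range(d):
    X[(n - 1) % d, n] = 1.0

# DFT matrix; its columns are the common eigenvectors of all the X^n.
F = np.array([[omega ** (n * k) for k in range(d)]
              for n in range(d)]) / np.sqrt(d)

M = F.conj().T @ X @ F   # diagonal: one d-th root of unity per column
```

The k-th Fourier column has components ω^{nk}/√d and satisfies X f_k = ω^k f_k, so every character n ↦ ω^{nk} of C_d occurs exactly once on the diagonal of M.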

Decomposition
The matrix V_ra can be decomposed as a product of the matrices P_r, X and Z. The matrices P_r, X and Z (and thus V_ra) are unitary. They satisfy the relation (103), which can be iterated to give the useful relation (104), where m, n ∈ Z/dZ. Furthermore, we have the quasi-nilpotency relations (106), which become true nilpotency relations when r = 0. More generally, we obtain (107), in agreement with the eigenvalues obtained for V_ra (see Equation (71)).

Weyl Pairs
For r = a = 0, Equations (103) and (106) show that the unitary matrices X and Z satisfy the q-commutation relation XZ = qZX (108) and the nilpotency relations X^d = Z^d = I_d. Therefore, X and Z constitute a Weyl pair (X, Z). Note that the Weyl pair (X, Z) can be defined from the matrix V_ra alone, which emphasizes the important role played by the matrix V_ra. Note also that, according to (97), X and Z are related by the DFT matrix.

Weyl pairs were introduced at the beginning of quantum mechanics [38] and used for building unitary operator bases [39]. The pair (X, Z) plays an important role in quantum information and quantum computing, where the linear operators corresponding to X and Z are known as the flip (or shift) and clock operators, respectively. For arbitrary d, they are at the root of the Pauli group, a finite subgroup of order d³ of the group U(d) for d even and of SU(d) for d odd [30,31]. The Pauli group is of considerable importance for describing quantum errors and quantum fault tolerance in quantum computation (see [40][41][42][43] and references therein for recent geometrical approaches to the Pauli group).

The Weyl pair (X, Z) turns out to be an integrity basis for generating the set {X^a Z^b : a, b ∈ Z/dZ}. The latter set constitutes a basis for the Lie algebra of the unitary group U(d) with respect to the commutator law; it consists of the d² generalized Pauli matrices in d dimensions [30,31]. In this respect, note that for d = 2 we have X = σ_x, Z = σ_z and XZ = −iσ_y, in terms of the ordinary Pauli matrices σ_0 = I_2, σ_x, σ_y and σ_z.
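The Weyl-pair relations are easy to realize concretely. The following sketch (the down-shift convention for X is one choice among several) builds the d = 4 clock and shift matrices, and also checks that the d² products X^a Z^b are pairwise orthogonal for the trace inner product, hence linearly independent and a basis of the Lie algebra u(d):

```python
import numpy as np

d = 4
q = np.exp(2j * np.pi / d)

# Shift ("flip") matrix X, X|n> = |n-1 mod d>, and clock matrix Z = diag(q^n).
# (With the opposite shift convention one gets ZX = qXZ instead.)
X = np.zeros((d, d), dtype=complex)
for n in range(d):
    X[(n - 1) % d, n] = 1.0
Z = np.diag([q ** n for n in range(d)])

# The d^2 generalized Pauli matrices X^a Z^b ...
pauli = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(d) for b in range(d)]
# ... and their Gram matrix for the trace inner product <A, B> = tr(A^dag B).
G = np.array([[np.trace(A.conj().T @ B) for B in pauli] for A in pauli])
```

For d = 2 one can check directly that X and Z reduce to the Pauli matrices σ_x and σ_z.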

Link with a Lie Algebra
Equation (105) can be particularized to give relations involving arbitrary powers of X and Z. Let us define the operators T_m, where we use the abbreviation m = (m1, m2). The product T_n T_m is easily obtained; it is proportional to T_{m+n}, where m × n := m1 n2 − m2 n1 and m + n := (m1 + n1, m2 + n2). The commutator [T_m, T_n] follows at once from (116). The operators T_m can thus be formally viewed as the generators of the infinite-dimensional Lie algebra W∞ (or sine algebra) investigated in [44,45].

Closing Remarks
We used the representation theory of the symmetry groups SU(2) and SU(1,1) to describe the generalized oscillator algebra A_κ and the two phase operators E_d and E∞ introduced in [16]. The phase eigenstates of E_d and E∞ were thus understood in terms of finite-dimensional and infinite-dimensional representations of SU(2) and SU(1,1), respectively. In the case of those representations of SU(2) whose dimension is a prime integer, our approach led us to derive MUBs as eigenbases of the phase operator E_d (with d prime), opening a way to further results on unitary phase operators associated with Lie groups.
The unitary phase operator E_d leads to a polar decomposition of the algebra su(2) in the scheme {J², E_d}, which is an alternative to the familiar quantization scheme {J², J3} of angular momentum theory. The {J², E_d} scheme and the {J², v_ra} scheme of [22][23][24][25][26][27][28][29][30][31][32] are related by (74). In the case of the noncompact Lie algebra su(1,1), the phase operator E∞ is nonunitary. Although this does not correspond to a true polar decomposition (because E∞ is not unitary), it yields a scheme {K², E∞}, which is an alternative to the canonical scheme {K², K3} developed for su(1,1) by Bargmann and most other authors. We hope to study this new scheme further from the point of view of the representation theory and the Wigner-Racah algebra of SU(1,1).

As far as applications of the new SU(2) and SU(1,1) phase states derived in Section 3 are concerned, besides the two applications discussed in this paper (to mutually unbiased bases in Section 5 and to the discrete Fourier transform in Section 6), several other potential applications can be mentioned. Our phase states can be useful for various dynamical systems (e.g., the Morse system for the SU(2) states, as well as the Pöschl-Teller system and the repulsive oscillator system for the SU(1,1) states). We can also mention a possible application of the quadratic discrete Fourier transform to discrete linear canonical transforms and to Hadamard matrices in connection with the production of geometric optics setups. Some of these further potential applications are presently under consideration.
Appendix: The Ordinary DFT Matrix

Because F is unitary, its eigenvalues lie on the unit circle S¹; moreover, since F⁴ = I_d, they are fourth roots of unity, to be denoted by ϕ_k for k = 0, 1, 2, 3. This divides the space C^d into four Fourier-invariant, mutually orthogonal subspaces whose dimensions N_{ϕ_k} are the multiplicities of the eigenvalues ϕ_k. Of course, we have N_{ϕ_0} + N_{ϕ_1} + N_{ϕ_2} + N_{ϕ_3} = d. For d = 4J + k with k = 0, 1, 2, 3 and J ∈ N, the multiplicities, traces and determinants of the submatrices of F associated with each eigenvalue are known explicitly (see for example [46], noting that the DFT matrix there is the complex conjugate of the DFT matrix here).
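These spectral facts are straightforward to verify numerically. The sketch below builds the ordinary DFT matrix (with kernel ω^{mn}/√d, one of the two conjugate conventions), checks that F⁴ = I_d, and counts the multiplicities N_{ϕ_k}, which indeed differ from d/4 by at most one.

```python
import numpy as np

d = 8
omega = np.exp(2j * np.pi / d)

# Ordinary DFT matrix, entries omega^(m*n)/sqrt(d).
F = np.array([[omega ** (m * n) for n in range(d)]
              for m in range(d)]) / np.sqrt(d)

F4 = np.linalg.matrix_power(F, 4)   # should be the identity

# Eigenvalues of F are fourth roots of unity i^k; count multiplicities N_k.
w = np.linalg.eigvals(F)
mult = [int(np.sum(np.abs(w - 1j ** k) < 1e-8)) for k in range(4)]
```

Since F² is the parity permutation m ↦ −m mod d, the relation F⁴ = I_d is exact, and the four eigenprojectors can be assembled from I, F, F², F³.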
Since N_{ϕ_k} ≈ d/4, there is wide freedom in choosing eigenvector bases within each eigenspace. Finding a "good" eigenbasis is of interest in order to define fractional powers of the DFT matrix, which constitute the abelian group of elements {F^ν} for real ν modulo 4 [47], and which would contract, for d → ∞, to the fractional Fourier integral transform. The fractionalization of the Fourier integral transform was defined in 1937 by Condon [48] at the suggestion of von Neumann, was rediscovered in other contexts [49,50], and is currently of importance for signal analysis and image processing through the fast Fourier transform algorithm [51][52][53][54]. The integral kernel of the fractional Fourier integral transform can be expressed as a bilinear generating function for the Hermite-Gauss functions Ψ_n(x) ∝ e^{−x²/2} H_n(x) [55], where H_n(x) are the Hermite polynomials of degree n ∈ N in x; these functions are the eigenfunctions of the Fourier integral transform F. The integral kernel of the fractional Fourier integral transform [47] is then obtained for 0 < ν < 2, with the appropriate limits as ν → 0 and ν → 2. These kernels are unitary and form a one-parameter group with ν modulo 4.
To fractionalize the DFT matrix F, one is naturally interested in finding d-point functions Φ^(d)_n(m) that are "good" discrete counterparts of the Hermite-Gauss functions Ψ_n(x) in (127); in particular, they should be analytic and periodic functions of m. Mehta [56] proposed functions, which we call Mehta functions, obtained by periodizing the Hermite-Gauss functions; these have the desired properties for n ∈ N. Of course, there cannot be more than d linearly independent vectors in C^d, so we may take the subset {Φ^(d)_n : n = 0, 1, . . . , d − 1}. Prima facie, it is not clear whether this subset is linearly independent and orthogonal or not; Mehta [56] left their orthogonality unresolved, and it was later described thoroughly by Ruzzi [57]. The departure from strict orthogonality of the vectors of the Mehta basis was investigated in [58]; the departure is small for low values of n and gradually worsens up to n = d − 1.
Indeed, there is wide freedom in choosing bases for C^d when the sole requirement is that they be eigenbases of F satisfying (135). Labelling these eigenvectors by their four Fourier eigenvalues ϕ_k and, within each eigenspace C^{N_{ϕ_k}}, by j = 0, 1, . . . , N_{ϕ_k} − 1, we denote them by {Υ^{(ϕ_k, j)}(m)}, periodic in m modulo d [46,58]; we assume that they are complete in C^d and thus have a dual basis {Υ_{(ϕ_k, j)}(m)}, periodic in m, such that the two families are biorthogonal and resolve the projector matrix Π_{ϕ_k} on each Fourier subspace ϕ_k. Associated with this basis {Υ}, one may define the corresponding 'Υ-fractionalized DFT matrices' F^ν_Υ, where we use the compound index n = 4j + k to order the vectors, as if it were the 'energy' label of the Mehta functions (134). In this way, the vectors of the {Υ} basis are eigenvectors of a number matrix N_Υ; in other words, N_Υ Υ^{(ϕ_k, j)} = n Υ^{(ϕ_k, j)}. The matrix N_Υ has the virtue of being the generator of the Υ-fractional Fourier matrices F^ν_Υ, for ν modulo 4.