Some Singular Vector-valued Jack and Macdonald Polynomials

For each partition $\tau$ of $N$ there are irreducible modules of the symmetric groups $\mathcal{S}_{N}$ or the corresponding Hecke algebra $\mathcal{H}_{N}\left( t\right) $ whose bases consist of reverse standard Young tableaux of shape $\tau$. There are associated spaces of nonsymmetric Jack and Macdonald polynomials taking values in these modules, respectively. The Jack polynomials are a special case of those constructed by Griffeth for the infinite family $G\left( n,p,N\right) $ of complex reflection groups. The Macdonald polynomials were constructed by Luque and the author. For both the group $\mathcal{S}_{N}$ and the Hecke algebra $\mathcal{H}_{N}\left( t\right) $ there is a commutative set of Dunkl operators. The Jack and the Macdonald polynomials are parametrized by $\kappa$ and $\left( q,t\right) $ respectively. For certain values of the parameters (called singular values) there are polynomials annihilated by each Dunkl operator; these are called singular polynomials. This paper analyzes the singular polynomials whose leading term is $x_{1}^{m}\otimes S$, where $S$ is an arbitrary reverse standard Young tableau of shape $\tau$. The singular values depend on properties of the edge of the Ferrers diagram of $\tau$.


Introduction
For each partition $\tau$ of $N$ there are irreducible modules of the symmetric group $\mathcal{S}_{N}$ and the corresponding Hecke algebra $\mathcal{H}_{N}(t)$ whose bases consist of reverse standard Young tableaux of shape $\tau$. There are associated spaces of nonsymmetric Jack and Macdonald polynomials taking values in these modules, respectively. (In what follows the polynomials are always of the nonsymmetric type.) The Jack polynomials are a special case of those constructed by Griffeth [7] for the infinite family $G(n,p,N)$ of complex reflection groups. The Macdonald polynomials were constructed by Luque and the author [6]. The polynomials are the simultaneous eigenfunctions of the Cherednik operators, which form a commutative set. For both the group $\mathcal{S}_{N}$ and the Hecke algebra $\mathcal{H}_{N}(t)$ there is a commutative set of Dunkl operators, which lower the degree of a homogeneous polynomial by $1$.
The Jack and the Macdonald polynomials are parametrized by $\kappa$ and $(q,t)$ respectively. For certain values of the parameters (called singular values) there are polynomials annihilated by each Dunkl operator; these are called singular polynomials. The structure of the singular polynomials for the trivial module corresponding to the partition $(N)$, that is, the ordinary scalar polynomials, is by now more or less well understood. For the modules of dimension $\geq2$ the singular polynomials are mostly a mystery. In [3] and [4] we constructed special singular polynomials which correspond to the minimum parameter values. To be specific, denote the longest hook-length in the Ferrers diagram of $\tau$ by $h_{\tau}$; then any other singular value $\kappa$ satisfies $\left\vert\kappa\right\vert\geq\frac{1}{h_{\tau}}$, and if a pair $(q,t)$ such that $q^{m}t^{n}=1$ provides a singular polynomial then $\frac{m}{n}\geq\frac{1}{h_{\tau}}$. The main topic of this paper is the determination of all the singular values for which the Jack or Macdonald polynomials with leading term $x_{1}^{m}\otimes S$ are singular, where $S$ is an arbitrary reverse standard Young tableau of shape $\tau$. The singular values depend on properties of the edge of the Ferrers diagram of $\tau$.
There is a brief outline of the needed aspects of the representation theory of $\mathcal{S}_{N}$ and $\mathcal{H}_{N}(t)$ in Section 2, focusing on the action of the generators on the basis elements. The important operators on scalar and vector-valued polynomials are defined in Section 3. Subsection 3.1 deals with the Cherednik-Dunkl and Dunkl operators on the vector-valued polynomials, introduces the Jack polynomials, and presents the key formulas for the action of Dunkl operators, in particular when specialized to the polynomials with leading term $x_{1}^{m}\otimes S$. Subsection 3.2 contains the analogous results on Macdonald polynomials. Section 4 combines the previous results with analyses of the spectral vectors and a combinatorial analysis of the possible singular values to prove our main results on Jack and Macdonald polynomials. Subsection 4.1 illustrates the representation-theoretic aspect of singular polynomials.

Representation Theory
The symmetric group $\mathcal{S}_{N}$ is the group of permutations of $\{1,2,\ldots,N\}$. The transpositions $w=(i,j)$, defined by $w(i)=j$, $w(j)=i$ and $w(k)=k$ for $k\neq i,j$, are fundamental tools in this study. The simple reflections $s_{i}:=(i,i+1)$, $1\leq i<N$, generate $\mathcal{S}_{N}$, and the group is abstractly presented by $s_{i}^{2}=1$ for $1\leq i<N$ and the braid relations
$$s_{i}s_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1},\quad1\leq i\leq N-2;\qquad s_{i}s_{j}=s_{j}s_{i},\quad\left\vert i-j\right\vert\geq2.$$
The group algebra $\mathbb{C}\mathcal{S}_{N}$, namely the linear space $\left\{\sum_{w\in\mathcal{S}_{N}}c_{w}w\right\}$, is of dimension $N!$. The associated Hecke algebra $\mathcal{H}_{N}(t)$, where $t$ is transcendental (a formal parameter) or a complex number not a root of unity, is the associative algebra generated by $\{T_{1},T_{2},\ldots,T_{N-1}\}$ subject to the relations
$$\left(T_{i}-t\right)\left(T_{i}+1\right)=0;\quad T_{i}T_{i+1}T_{i}=T_{i+1}T_{i}T_{i+1};\quad T_{i}T_{j}=T_{j}T_{i},\quad\left\vert i-j\right\vert\geq2.$$
It can be shown that there is a linear isomorphism between $\mathbb{C}\mathcal{S}_{N}$ and $\mathcal{H}_{N}(t)$ based on the map $s_{i}\rightarrow T_{i}$. When $t=1$ they are identical.
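The presentation above is easy to check mechanically. The following Python sketch (an illustration added here, not part of the formal development) realizes the simple reflections as permutation tuples and verifies $s_{i}^{2}=1$, the braid relations, and the commuting relations in $\mathcal{S}_{4}$:

```python
def compose(p, q):
    # right-to-left composition: (p o q)(i) = p[q[i]], on 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def simple_reflection(i, N):
    # s_i swaps i and i+1 (1-indexed), returned as a 0-indexed permutation tuple
    p = list(range(N))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

N = 4
e = tuple(range(N))                                  # identity permutation
s = {i: simple_reflection(i, N) for i in range(1, N)}

# s_i^2 = 1
assert all(compose(s[i], s[i]) == e for i in s)
# braid relations: s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}
for i in range(1, N - 1):
    assert compose(s[i], compose(s[i + 1], s[i])) == \
           compose(s[i + 1], compose(s[i], s[i + 1]))
# commuting relations: s_i s_j = s_j s_i for |i - j| >= 2
assert compose(s[1], s[3]) == compose(s[3], s[1])
```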
The irreducible modules of these algebras correspond to partitions of $N$ and are constructed in terms of Young tableaux. For a partition $\tau=(\tau_{1},\tau_{2},\ldots)$ of $N$ (so $\tau_{1}\geq\tau_{2}\geq\cdots$ and $\sum_{i}\tau_{i}=N$) the Ferrers diagram is the set of cells $\{(i,j):1\leq i\leq\ell(\tau),1\leq j\leq\tau_{i}\}$, where $\ell(\tau)$ denotes the number of nonzero parts. A reverse standard Young tableau (RSYT) of shape $\tau$ is a filling of the diagram with the numbers $1,2,\ldots,N$ such that the entries decrease in each row and in each column; $Y(\tau)$ denotes the set of RSYT of shape $\tau$, and $V_{\tau}:=\operatorname{span}_{\mathbb{C}}Y(\tau)$. For $S\in Y(\tau)$ and $1\leq i\leq N$ let $\operatorname{row}(i,S)$ and $\operatorname{col}(i,S)$ denote the row and column of the cell of $S$ containing $i$, and let the content be $c(i,S):=\operatorname{col}(i,S)-\operatorname{row}(i,S)$. The descriptions of the modules will be given in terms of the actions of $\{s_{i}\}$ or $\{T_{i}\}$ on the basis elements (see [2]).
There are representations of $\mathcal{S}_{N}$ and $\mathcal{H}_{N}(t)$ on $V_{\tau}$; each will be denoted by $\tau$. For each $i$ and $S$ (with $1\leq i<N$ and $S\in Y(\tau)$) there are four different possibilities:

1) $\operatorname{row}(i,S)=\operatorname{row}(i+1,S)$ (implying $\operatorname{col}(i,S)=\operatorname{col}(i+1,S)+1$ and $c(i,S)-c(i+1,S)=1$); then $S\,\tau(s_{i})=S$ and $S\,\tau(T_{i})=tS$.

2) $\operatorname{col}(i,S)=\operatorname{col}(i+1,S)$ (implying $\operatorname{row}(i,S)=\operatorname{row}(i+1,S)+1$ and $c(i,S)-c(i+1,S)=-1$); then $S\,\tau(s_{i})=-S$ and $S\,\tau(T_{i})=-S$.

3) $\operatorname{row}(i,S)<\operatorname{row}(i+1,S)$ and $\operatorname{col}(i,S)>\operatorname{col}(i+1,S)$. In this case $S^{(i)}$, denoting the tableau obtained from $S$ by exchanging $i$ and $i+1$, is an element of $Y(\tau)$ and, with $b:=1/\left(c(i,S)-c(i+1,S)\right)$,
$$S\,\tau(s_{i})=S^{(i)}+bS.$$

4) $\operatorname{row}(i,S)>\operatorname{row}(i+1,S)$ and $\operatorname{col}(i,S)<\operatorname{col}(i+1,S)$; then, with the same $b$ (now negative),
$$S\,\tau(s_{i})=\left(1-b^{2}\right)S^{(i)}+bS.$$

The formulas in case 4) are consequences of those in case 3) by interchanging $S$ and $S^{(i)}$ and applying the relation $\tau(s_{i})^{2}=I$; the analogous formulas for $\tau(T_{i})$, satisfying $\left(\tau(T_{i})-t\right)\left(\tau(T_{i})+1\right)=0$, are given in [2]. There is a commutative set of Jucys-Murphy elements in both $\mathbb{Z}\mathcal{S}_{N}$ and $\mathcal{H}_{N}(t)$ which are diagonalized with respect to the basis $Y(\tau)$: for $1\leq i\leq N$ and $S\in Y(\tau)$,
$$S\,\tau(\omega_{i})=c(i,S)\,S,\qquad S\,\tau(\phi_{i})=t^{c(i,S)}\,S.$$
The representation $\tau$ of $\mathcal{S}_{N}$ is unitary (orthogonal) when $V_{\tau}$ is furnished with the inner product $\left\langle\cdot,\cdot\right\rangle_{0}$ for which the basis $Y(\tau)$ is orthogonal. This form satisfies $\left\langle f\,\tau(T_{i}),g\right\rangle_{0}=\left\langle f,g\,\tau(T_{i})\right\rangle_{0}$ for $f,g\in V_{\tau}$ and $1\leq i<N$.
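The combinatorics underlying these representations can be made concrete. The following Python sketch (illustrative only; the function names are ours) enumerates the reverse standard Young tableaux of a shape and checks the content conditions stated in cases 1) and 2):

```python
def syts(shape):
    """Standard Young tableaux of `shape`, as dicts {entry: (row, col)},
    built by removing the largest entry from a removable corner."""
    N = sum(shape)
    if N == 0:
        yield {}
        return
    for r, length in enumerate(shape):
        if length == 0:
            continue
        if r + 1 < len(shape) and shape[r + 1] == length:
            continue  # cell (r, length-1) is not a removable corner
        smaller = list(shape)
        smaller[r] -= 1
        for T in syts(tuple(smaller)):
            T = dict(T)
            T[N] = (r, length - 1)
            yield T

def rsyts(shape):
    """Reverse standard Young tableaux: replacing entry e by N+1-e turns
    an increasing (standard) filling into a decreasing (reverse) one."""
    N = sum(shape)
    for T in syts(shape):
        yield {N + 1 - e: pos for e, pos in T.items()}

def content(S, i):
    """Content c(i, S) = col - row (0-indexed here; differences agree
    with the 1-indexed convention)."""
    r, c = S[i]
    return c - r

# shape (3,2) has 5 RSYTs (hook-length formula: 5!/(4*3*1*2*1) = 5)
tableaux = list(rsyts((3, 2)))
assert len(tableaux) == 5
for S in tableaux:
    for i in range(1, 5):
        if S[i][0] == S[i + 1][0]:          # same row: case 1)
            assert content(S, i) - content(S, i + 1) == 1
        if S[i][1] == S[i + 1][1]:          # same column: case 2)
            assert content(S, i) - content(S, i + 1) == -1
```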
The action of the symmetric group on polynomials is defined by $(pw)(x):=p(xw)$, where $(xw)_{i}:=x_{w(i)}$, for $p\in\mathcal{P}$ and $w\in\mathcal{S}_{N}$. There is a subtlety (an implicit inverse) involved due to acting on the right: for example $p(x)=x_{i}$ gives $ps_{i}=x_{i+1}$. The action of the Hecke algebra on polynomials is defined (in the normalization used here) by
$$pT_{i}:=(1-t)\,x_{i+1}\,\frac{p(x)-p(xs_{i})}{x_{i}-x_{i+1}}+t\,p(xs_{i}),\quad1\leq i<N.$$
The defining relations can be verified straightforwardly. There are special values: $x_{i}T_{i}=x_{i+1}$, $x_{i+1}T_{i}=tx_{i}+(t-1)x_{i+1}$, and $pT_{i}=tp$ when $p(xs_{i})=p(x)$. For a partition $\tau$ of $N$ let $\mathcal{P}_{\tau}:=\mathcal{P}\otimes V_{\tau}$. The set $\left\{x^{\alpha}\otimes S:\alpha\in\mathbb{N}_{0}^{N},S\in Y(\tau)\right\}$ is a basis of $\mathcal{P}_{\tau}$. The representations of $\mathcal{S}_{N}$ and $\mathcal{H}_{N}(t)$ on $\mathcal{P}_{\tau}$ are respectively defined by linear extension from the action on generators: in the group case $(p\otimes S)s_{i}:=(ps_{i})\otimes\left(S\,\tau(s_{i})\right)$, and in the Hecke case the action of $T_{i}$ is defined on the basis $x^{\alpha}\otimes S$ by formulas combining the two actions, for $p\in\mathcal{P}$, $S\in Y(\tau)$ and $1\leq i<N$. (For details and background for the vector-valued Macdonald polynomials see [6].)

Jack polynomials
The Dunkl $\{\mathcal{D}_{i}\}$ and Cherednik-Dunkl $\{\mathcal{U}_{i}\}$ operators on $\mathcal{P}_{\tau}$, for $p\in\mathcal{P}$, $S\in Y(\tau)$ and $1\leq i\leq N$, are defined by
$$(p\otimes S)\,\mathcal{D}_{i}:=\frac{\partial p}{\partial x_{i}}\otimes S+\kappa\sum_{j\neq i}\frac{p(x)-p\left(x(i,j)\right)}{x_{i}-x_{j}}\otimes S\,\tau\left((i,j)\right),$$
$$p\,\mathcal{U}_{i}:=\left(x_{i}p\right)\mathcal{D}_{i}-\kappa\sum_{j<i}p\,\tau\left((i,j)\right),\quad p\in\mathcal{P}_{\tau},$$
where $x(i,j)$ denotes $x$ with $x_{i}$ and $x_{j}$ interchanged, and $\tau\left((i,j)\right)$ acts on the $V_{\tau}$ component.

Each of the sets $\{\mathcal{D}_{i}\}$ and $\{\mathcal{U}_{i}\}$ consists of pairwise commuting elements.
There is a basis of $\mathcal{P}_{\tau}$ consisting of homogeneous polynomials, each of which is a simultaneous eigenfunction of $\{\mathcal{U}_{i}\}$; these are the nonsymmetric Jack polynomials. For each $(\alpha,S)\in\mathbb{N}_{0}^{N}\times Y(\tau)$ there is the polynomial
$$J_{\alpha,S}=x^{\alpha}\otimes S+\sum_{\beta\lhd\alpha}x^{\beta}\otimes v_{\alpha,\beta,S}(\kappa),$$
where $v_{\alpha,\beta,S}(\kappa)\in V_{\tau}$; these coefficients are rational functions of $\kappa$. These polynomials satisfy $J_{\alpha,S}\,\mathcal{U}_{i}=\zeta_{\alpha,S}(i)\,J_{\alpha,S}$ for $1\leq i\leq N$, where $\zeta_{\alpha,S}$ denotes the spectral vector. For detailed proofs see [5]. We are concerned with the special case $\alpha=(m,0,\ldots,0)\in\mathbb{N}_{0}^{N}$. We apply formulas from [3] to analyze $J_{\alpha,S}\,\mathcal{D}_{i}$.
The next result uses the inner product on Jack polynomials for partition labels $\beta$. The Pochhammer symbol is $(a)_{n}=\prod_{i=1}^{n}(a+i-1)$.
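For reference, the Pochhammer symbol can be sketched in a few lines of Python (an added illustration, not part of the paper's formal apparatus):

```python
import math

def pochhammer(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1); empty product for n = 0."""
    result = 1
    for i in range(n):
        result *= a + i
    return result

# sanity checks: (1)_n = n!, and (a)_0 = 1
assert all(pochhammer(1, n) == math.factorial(n) for n in range(8))
assert pochhammer(7, 0) == 1
```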
These norm formulas are results of Griffeth [7] specialized to the symmetric groups. The final ingredient for the formula is a special case of [3, Thm. 6.3].
Proof. The first line comes from [3, Thm. 6.3]. Then the norm ratios are computed, which involves much cancellation.
Denote the prefactor of $J_{\alpha,S}$ in equation (5) by $C_{S,m}(\kappa)$. Our interest is in the zeros of $C_{S,m}(\kappa)$ as a function of $\kappa$. We will see that $C_{S,m}(\kappa)$ depends only on $\tau$ and the location of $1$ in $S$. The idea is to group entries of $S$ by row and use telescoping properties. There is a simple formula (proven inductively)
$$\prod_{j=a}^{b}\frac{g(j)}{g(j+1)}=\frac{g(a)}{g(b+1)},$$
where $g$ is a function on $\mathbb{Z}$, nonvanishing on $\{a,a+1,\ldots,b+1\}$, and $a\leq b$. The part of the product in $C_{S,m}(\kappa)$ coming from row $\#i$ has $c(j,S)$ ranging from $1-i$ to $\tau_{i}-i$, so the corresponding subproduct telescopes. Multiplying these factors for $i=1,2,\ldots,\ell(\tau)$, the intermediate factors cancel, and, as stated before, the resulting formula depends only on $\tau$ and the location of $1$ in $S$. More simplification is possible, due to telescoping, if some of the $\tau_{i}$ are equal.
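The telescoping identity $\prod_{j=a}^{b}g(j)/g(j+1)=g(a)/g(b+1)$ is easy to confirm numerically; the following Python sketch (illustrative, with a hypothetical function $g$) checks it with exact rational arithmetic:

```python
from fractions import Fraction

def telescoped(g, a, b):
    """prod_{j=a}^{b} g(j)/g(j+1), computed term by term."""
    prod = Fraction(1)
    for j in range(a, b + 1):
        prod *= Fraction(g(j), g(j + 1))
    return prod

# any integer-valued g nonvanishing on {a, ..., b+1} will do; this g is a stand-in
g = lambda j: j * j + 3
a, b = -2, 5
assert telescoped(g, a, b) == Fraction(g(a), g(b + 1))
```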
Definition 6 For $\widetilde{\tau}$ as in Definition 5 define $I(\widetilde{\tau})=\{i_{1}<i_{2}<\cdots\}$, the increasing sequence of rows at which a box can be adjoined. Let $\widetilde{S}$ denote the tableau formed by deleting the box $\left(\operatorname{row}(1,S),\operatorname{col}(1,S)\right)$ from $S$. The key property of $I(\widetilde{\tau})$ is that it controls the possible locations where a box containing $1$ could be adjoined to $\widetilde{S}$ to form an RSYT. These locations are $\left\{(1,\widetilde{\tau}_{1}+1),\ldots,(i_{s},\widetilde{\tau}_{i_{s}}+1),\ldots\right\}$. If $\widetilde{\tau}_{\ell(\tau)}=0$ then the last location is $(\ell(\tau),1)$, otherwise it is $(\ell(\tau)+1,1)$. Thus $Z(\widetilde{\tau})$ is the set of contents of the locations in the list. Evaluate the part of the product in formula (6) for the range $i_{s}\leq j<i_{s+1}$ to obtain the telescoped factor. This completes the proof of the following:

Proposition 8 For $\widetilde{\tau}$ and $I(\widetilde{\tau})$ as in Definitions 5 and 6, where $i_{k}=\ell(\tau)+1$.
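The locations where a box can be adjoined, and their contents (the set $Z(\widetilde{\tau})$), can be computed directly. The following Python sketch (an added illustration, using 0-indexed cells so that the content of cell $(r,c)$ is $c-r$) lists them for a sample partition:

```python
def addable_cells(shape):
    """Cells (row, col), 0-indexed, where a box can be adjoined to the
    Ferrers diagram of `shape` so that the result is again a partition."""
    cells = []
    for r, length in enumerate(shape):
        if r == 0 or shape[r - 1] > length:
            cells.append((r, length))
    cells.append((len(shape), 0))      # a new row of length 1
    return cells

def contents(cells):
    # content of cell (r, c) is c - r
    return [c - r for r, c in cells]

tau = (4, 4, 2, 1)
cells = addable_cells(tau)
assert cells == [(0, 4), (2, 2), (3, 1), (4, 0)]
# the contents are strictly decreasing, one per addable row
assert contents(cells) == [4, 0, -2, -4]
```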
Proof. By construction the sequence $\left\{\widetilde{\tau}_{i_{a}}\right\}_{a\geq1}$ is strictly decreasing and the sequence $\{i_{a}\}_{a\geq1}$ is strictly increasing; thus no two of the contents in $Z(\widetilde{\tau})$ coincide. (The original figure sketches $\widetilde{\tau}$ with the possible cells for the entry $1$ marked.) In a later section we examine the relation to singular polynomials of the form $J_{\alpha,S}$.

Macdonald polynomials
Adjoin the parameter $q$. To say that $(q,t)$ is generic means that $q\neq1$ and $q^{a}t^{b}\neq1$ for $a,b\in\mathbb{Z}$ with $(a,b)\neq(0,0)$ and $-N\leq b\leq N$. Besides the operators $T_{i}$ defined in (3) we introduce (for $p\in\mathcal{P}$, $S\in Y(\tau)$) the auxiliary multiplication and shift operators of the Cherednik construction. The Cherednik $\{\xi_{i}\}$ and Dunkl $\{\mathcal{D}_{i}\}$ operators, for $1\leq i\leq N$, are defined in terms of these. The definitions were given for the scalar case by Baker and Forrester [1] and extended to vector-valued polynomials by Luque and the author [6]. The operators $\{\xi_{i}:1\leq i\leq N\}$ commute pairwise, while the operators $\{\mathcal{D}_{i}:1\leq i\leq N\}$ commute pairwise and map $\mathcal{P}_{n}\otimes V_{\tau}$ to $\mathcal{P}_{n-1}\otimes V_{\tau}$ for $n\geq0$. A polynomial $p\in\mathcal{P}_{\tau}$ is singular for some particular value of $(q,t)$ if $p\mathcal{D}_{i}=0$, evaluated at $(q,t)$, for all $i$. There is a basis of $\mathcal{P}_{\tau}$ consisting of homogeneous polynomials, each of which is a simultaneous eigenfunction of $\{\xi_{i}\}$; these are the nonsymmetric Macdonald polynomials. For each $(\alpha,S)\in\mathbb{N}_{0}^{N}\times Y(\tau)$ there is the polynomial
$$M_{\alpha,S}=x^{\alpha}\otimes S+\sum_{\beta\lhd\alpha}x^{\beta}\otimes v_{\alpha,\beta,S}(q,t),$$
where $v_{\alpha,\beta,S}(q,t)\in V_{\tau}$ and $R_{\alpha}:=\left(T_{i_{1}}T_{i_{2}}\cdots T_{i_{m}}\right)^{-1}$, where $\alpha s_{i_{1}}s_{i_{2}}\cdots s_{i_{m}}=\alpha^{+}$ and there is no shorter product $s_{j_{1}}s_{j_{2}}\cdots$ having this property (see [4, p. 19] for the values of $a,b$, which are not needed here). As before $\zeta_{\alpha,S}(i)$ denotes the spectral vector: $M_{\alpha,S}\,\xi_{i}=\zeta_{\alpha,S}(i)\,M_{\alpha,S}$. The other ingredient is the affine step (from the Yang-Baxter graph, see [6], [4, (3.14)]): for $\beta\in\mathbb{N}_{0}^{N}$ set $\beta\Phi:=(\beta_{2},\beta_{3},\ldots,\beta_{N},\beta_{1}+1)$; then $M_{\beta\Phi,S}=x_{N}\left(M_{\beta,S}w\right)$. The spectral vector of $\beta\Phi$ is $\left(\zeta_{\beta,S}(2),\ldots,\zeta_{\beta,S}(N),q\,\zeta_{\beta,S}(1)\right)$.
Observe $\alpha\Phi=(0,\ldots,0,m)=:\alpha'$ for $\alpha=(m-1,0,\ldots)$. The resulting expression is very similar to the Jack case (5) and the same telescoping argument will be used. Denote the factor of $M_{\alpha,S}$ in (7) by $C_{S,m}(q,t)$. With the same notation for $\widetilde{\tau}$ as in Definition 5, the same computational scheme as in Proposition 8 proves the following:

Proposition 15 For $\widetilde{\tau}$ and $I(\widetilde{\tau})$ as in Definitions 5 and 6, where $i_{k}=\ell(\tau)+1$.
Proposition 16 The set of zeros of $C_{S,m}(q,t)$ is determined by the list $Z(\widetilde{\tau})$. Proof. None of the numerator factors in the product are cancelled out, due to Lemma 9. The only possible cancellation occurs for $\widetilde{\tau}_{\ell(\tau)}=0$, when $c(1,S)$ is the last entry in the list $Z(\widetilde{\tau})$.

Singular Polynomials
where $\lambda\in\mathbb{N}_{0}^{N,+}$ (a partition), the coefficients $b_{\lambda,\gamma,S,S'}(\kappa)$, $b_{\lambda,\gamma,S,S'}(q,t)$ are rational functions of $\kappa$ and $(q,t)$ respectively, and $c=q^{a}t^{b}$ for some integers $a,b$. If one can show for each $(\gamma,S')$ with $\gamma\lhd\lambda$ that the spectral vector is distinct from that of $(\lambda,S)$ when evaluated at the specific values of $\kappa$ or $(q,t)$, then $J_{\lambda,S}$, respectively $M_{\lambda,S}$, does not have a pole there. The following is a device for analyzing possibly coincident spectral vectors.
From $\gcd(m,n)=1$ it follows that $\beta_{i}-\gamma_{i}=mv_{i}$ for some $v_{i}\in\mathbb{Z}$, and thus $c\left(r_{\beta}(i),S\right)-c\left(r_{\gamma}(i),S'\right)=nv_{i}$. Now we specialize to $\alpha=(m,0,\ldots)$ as in Subsection 3.1 and $n$ satisfying $C_{S,m}\left(-\frac{m}{n}\right)=0$. By Proposition 10 this is equivalent to $n=c(1,S)-z$ with $z\in Z(\widetilde{\tau})$.
Proof. Suppose that $\gamma\lhd\alpha$ and the spectral vectors coincided; the hypothesis on $\gamma$ implies a contradiction among the contents $c(i,\cdot)$, so the spectral vectors are distinct.

Proof. By the triangularity of formula (4) there is an expansion of $J_{\beta,S}$ in terms of the $J_{\gamma,S'}$ with $\gamma\unlhd\beta$. Define the operator
$$T:=\prod_{(\gamma,S'):\,\gamma\lhd\beta}\frac{\mathcal{U}_{i(\gamma,S')}-\zeta_{\gamma,S'}\left(i(\gamma,S')\right)}{\zeta_{\beta,S}\left(i(\gamma,S')\right)-\zeta_{\gamma,S'}\left(i(\gamma,S')\right)},$$
where for each pair $(\gamma,S')$ the index $i(\gamma,S')$ is chosen so that $\zeta_{\beta,S}\left(i(\gamma,S')\right)\neq\zeta_{\gamma,S'}\left(i(\gamma,S')\right)$.
Then $J_{\beta,S}\,T=J_{\beta,S}$ and each $J_{\gamma,S'}$ (with $\gamma\lhd\beta$) is annihilated by at least one factor of $T$. Thus $J_{\beta,S}=\left(x^{\beta}\otimes S\,\tau(r_{\beta})\right)T$, a polynomial whose coefficients have no poles at the parameter values in question. To relate this to $(m,n)$-critical pairs we specify a condition on $(q,t)$ which implies $a=mv$ and $b=nv$ for some $v\in\mathbb{Z}$ whenever $q^{a}t^{b}=1$.

Lemma 25
Suppose $a,b$ are integers such that $q^{a}t^{b}=1$ at $(q,t)=\varpi$; then $a=mv$, $b=nv$ for some $v\in\mathbb{Z}$.
Since $u$ is not a root of unity, the root-of-unity parts must cancel separately. Then $q=\omega\varphi^{-n/g}v^{-n/g}=\exp\left(2\pi i\frac{k-nl}{m}\right)v^{-n/g}$. Since $\gcd(m,n)=g$ there are integers $s,s'$ such that $s'm+sn=g$. Set $l=s''s$ (with $s''\in\mathbb{Z}$); then $k-nl=k-s''g+s''s'm$, and thus $\omega\varphi^{-n/g}=\exp\left(2\pi i\frac{k-s''g}{m}\right)$. If $g>1$ then there are several distinct possibilities for this factor.

Example 27 Suppose $m=8$ and $n=-12$; then $g=4$ and the possible values of $\varpi$ are $\left(\exp\left(\frac{\pi i}{4}\right)u^{3},u^{2}\right)$ and $\left(\exp\left(\frac{3\pi i}{4}\right)u^{3},u^{2}\right)$, where $u$ is not a root of unity.
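Example 27 can be confirmed numerically; the following Python sketch (an added illustration, with an arbitrary non-root-of-unity $u$ on the unit circle approximated in floating point) checks that $q^{m}t^{n}=1$ for the stated pairs:

```python
import cmath
from math import gcd

m, n = 8, -12
g = gcd(m, abs(n))                         # g = 4

# u stands in for a number that is not a root of unity
u = cmath.exp(2j * cmath.pi * 0.1234567)

# the stated pairs: q = exp(k*pi*i/4) * u^3 (note u^{-n/g} = u^3), t = u^2 = u^{m/g}
for k in (1, 3):
    q = cmath.exp(k * 1j * cmath.pi / 4) * u ** 3
    t = u ** 2
    # q^m t^n = exp(2*pi*i*k) * u^{24} * u^{-24} = 1
    assert abs(q ** m * t ** n - 1) < 1e-9
```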
We will use this result to produce singular polynomials $M_{\alpha,S}$ for $(q,t)=\varpi$. Proof. The proof is essentially identical to that of Proposition 22. There replace $x^{\beta}\otimes S\,\tau(r_{\beta})$ by $q^{a}t^{b}\,x^{\beta}\otimes S\,\tau(R_{\beta})$ (with the appropriate prefactor $q^{a}t^{b}$), $J$ by $M$, the Jack spectral vector by the Macdonald one, and $\mathcal{U}_{i}$ by $\xi_{i}$. The formula shows that $M_{\beta,S}$ is a polynomial the denominators of whose coefficients are products of factors of the form $q^{\beta_{i}}t^{b}-q^{\gamma_{i}}t^{b'}$, and none of these vanish at $(q,t)=\varpi$. This is our main result for the Macdonald polynomials.

Isotype of Singular Polynomials
The following discussion is in terms of Macdonald polynomials; it is straightforward to deduce the analogous results for Jack polynomials. Suppose $\sigma$ is a partition of $N$. A basis $\{p_{S}:S\in Y(\sigma)\}$ of an $\mathcal{H}_{N}(t)$-invariant subspace of $\mathcal{P}_{\tau}$ is called a basis of isotype $\sigma$ if each $p_{S}$ transforms under the action of $T_{i}$ defined in Section 2, with $\sigma(T_{i})$ in place of $\tau(T_{i})$. For example, if $\operatorname{row}(i,S)=\operatorname{row}(i+1,S)$ then $p_{S}(xs_{i})=p_{S}(x)$, equivalently $p_{S}T_{i}=tp_{S}$, or if $\operatorname{col}(i,S)=\operatorname{col}(i+1,S)$ then $p_{S}T_{i}=-p_{S}$. There is a strong relation to singular polynomials.
Proposition 31 A polynomial $p\in\mathcal{P}_{\tau}$ is singular for a specific value $(q,t)=\psi$ if and only if $p\,\xi_{i}=p\,\phi_{i}$ for $1\leq i\leq N$, evaluated at $\psi$.
Proof. Recall the Jucys-Murphy elements $\{\phi_{i}\}$ from (1). By definition $p\mathcal{D}_{N}=0$ if and only if $p\,\xi_{N}=p=p\,\phi_{N}$. Proceeding by induction, suppose that $p\mathcal{D}_{j}=0$ for $i<j\leq N$ if and only if $p\,\xi_{j}=p\,\phi_{j}$ for $i<j\leq N$. The inductive step for index $i$ is a direct computation with the defining relations. This completes the proof.
With $M_{\alpha,S}$ and $n$ as in Theorem 30 the spectral vector is $\zeta_{\alpha,S}=\left(q^{m}t^{c(1,S)},t^{c(2,S)},\ldots,t^{c(N,S)}\right)$. Specialized to $(q,t)=\varpi$ the polynomial $M_{\alpha,S}$ is singular and $q^{m}t^{c(1,S)}=t^{-n+c(1,S)}$. Recall $c(1,S)-n=z$ for some $z\in Z(\widetilde{\tau})$, and $z$ determines a cell $(i_{s},\widetilde{\tau}_{i_{s}}+1)$. In terms of Ferrers diagrams let $\sigma=\widetilde{\tau}\cup(i_{s},\widetilde{\tau}_{i_{s}}+1)$, that is, $\sigma_{i_{s}}=\widetilde{\tau}_{i_{s}}+1$. Let $S'$ denote the RSYT formed from the cells of $\widetilde{\tau}$ containing the numbers $2,\ldots,N$ and the cell $(i_{s},\widetilde{\tau}_{i_{s}}+1)$ containing $1$. Then $c(i,S')=c(i,S)$ for $2\leq i\leq N$ and $c(1,S')=c(1,S)-n$. Thus the spectral vector of $M_{\alpha,S}$ evaluated at $(q,t)=\varpi$ is $\left(t^{c(1,S')},t^{c(2,S')},\ldots,t^{c(N,S')}\right)$. This implies that $M_{\alpha,S}$ is (a basis element) of isotype $\sigma$. The other elements of the basis corresponding to $Y(\sigma)$ are obtained from $M_{\alpha,S}$ by appropriate transformations using $\{T_{i}\}$.
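The passage from $S$ to $S'$ is a purely combinatorial move of the entry $1$. The following Python sketch (an added illustration with a specific sample tableau; cells are 0-indexed, so content is $c-r$) checks the content statements above on one example:

```python
def contents_of(T):
    """Contents {entry: col - row} of a tableau given as {entry: (row, col)}."""
    return {e: c - r for e, (r, c) in T.items()}

# a specific RSYT S of shape (3,2): entries decrease along rows and columns
S = {5: (0, 0), 4: (0, 1), 2: (0, 2), 3: (1, 0), 1: (1, 1)}
# delete the box containing 1; the remaining shape is tilde-tau = (3,1),
# whose addable cells (0-indexed) are (0,3), (1,1), (2,0) with contents 3, 0, -2
z = 3                          # choose the addable cell (0, 3), content z = 3
S1 = dict(S)
S1[1] = (0, 3)                 # S': the entry 1 moved to the chosen cell
cS, cS1 = contents_of(S), contents_of(S1)
n = cS[1] - z                  # n = c(1,S) - z = 0 - 3 = -3
assert all(cS1[e] == cS[e] for e in (2, 3, 4, 5))   # c(i,S') = c(i,S), i >= 2
assert cS1[1] == cS[1] - n                          # c(1,S') = c(1,S) - n
```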

Concluding Remarks
We have shown the existence of singular vector-valued Jack and Macdonald polynomials for the easiest possible values of the label $\alpha$, that is, $(m,0,\ldots,0)$.
The proofs required some differentiation formulas and combinatorial arguments involving Young tableaux. The singular values were found to have an elegant interpretation in terms of where another cell can be attached to an RSYT. It may occur that a larger set of parameter values, say $\gcd(m,n)>1$, or even $\frac{m}{n}\notin\mathbb{Z}$, still leads to singular Jack polynomials, but our proof techniques do not seem to cover these. One hopes that eventually a larger class of examples (more general labels in $\mathbb{N}_{0}^{N}$) will be found, with a target of a complete listing, as is already known for the trivial representation $\tau=(N)$. It is suggestive that the isotype $\sigma$ of the singular polynomial $M_{\alpha,S}$ is obtained by a reasonably natural transformation of the partition $\tau$.