Article

Some Singular Vector-Valued Jack and Macdonald Polynomials

Department of Mathematics, University of Virginia, Charlottesville, VA 22904-4137, USA
Symmetry 2019, 11(4), 503; https://doi.org/10.3390/sym11040503
Submission received: 20 February 2019 / Revised: 30 March 2019 / Accepted: 1 April 2019 / Published: 7 April 2019
(This article belongs to the Special Issue Symmetry in Special Functions and Orthogonal Polynomials)

Abstract:
For each partition τ of N, there are irreducible modules of the symmetric group S_N and of the corresponding Hecke algebra H_N(t) whose bases consist of the reverse standard Young tableaux of shape τ. There are associated spaces of nonsymmetric Jack and Macdonald polynomials taking values in these modules. The Jack polynomials form a special case of the polynomials constructed by Griffeth for the infinite family G(n,p,N) of complex reflection groups. The Macdonald polynomials were constructed by Luque and the author. For each of the group S_N and the Hecke algebra H_N(t), there is a commutative set of Dunkl operators. The Jack and the Macdonald polynomials are parametrized by κ and (q,t), respectively. For certain values of these parameters (called singular values), there are polynomials annihilated by each Dunkl operator; these are called singular polynomials. This paper analyzes the singular polynomials whose leading term is x_1^m ⊗ S, where S is an arbitrary reverse standard Young tableau of shape τ. The singular values depend on the properties of the edge of the Ferrers diagram of τ.

1. Introduction

For each partition τ of N, there are irreducible modules of the symmetric group S_N and of the corresponding Hecke algebra H_N(t), whose bases consist of the reverse standard Young tableaux of shape τ. There are associated spaces of nonsymmetric Jack and Macdonald polynomials taking values in these modules (in what follows, the polynomials are always of the nonsymmetric type). The Jack polynomials are a special case of those constructed by Griffeth [1] for the infinite family G(n,p,N) of complex reflection groups. The Macdonald polynomials were constructed by Luque and the author [2]. The polynomials are the simultaneous eigenfunctions of the Cherednik operators. The latter form a commutative set. For both the group S_N and the Hecke algebra H_N(t), there is a commutative set of Dunkl operators, which lower the degree of a homogeneous polynomial by one. The definitions of the two types look quite different.
The Jack and the Macdonald polynomials are parametrized by κ and (q,t), respectively. For certain values of the parameters (called singular values), there are polynomials annihilated by each Dunkl operator; these are called singular polynomials. The structure of the singular polynomials for the trivial module corresponding to the partition (N), that is, the scalar polynomials, is more or less well understood by now. For the modules of dimension ≥ 2, the singular polynomials are mostly a mystery. In [3,4], we constructed special singular polynomials, which correspond to the minimum parameter values. To be specific, denote the longest hook-length in the Ferrers diagram of τ by h_τ; then, any other singular value κ satisfies |κ| ≥ 1/h_τ. If a pair (q,t) such that q^m t^n = 1 provides a singular polynomial, then |m/n| ≥ 1/h_τ. The main topic of this paper is the determination of all the singular values for which the Jack or Macdonald polynomials with leading term x_1^m ⊗ S are singular, where S is an arbitrary reverse standard Young tableau of shape τ. The tensor product is in the context of the linear space of polynomials times the module. The singular values depend on the properties of the edge of the Ferrers diagram of τ.
There is a brief outline of the needed aspects of the representation theory of S_N and H_N(t) in Section 2, focusing on the action of the generators on the basis elements. The important operators on scalar and vector-valued polynomials are defined in Section 3. Section 3.1 deals with the Cherednik–Dunkl and Dunkl operators on the vector-valued polynomials and introduces the Jack polynomials and key formulas for the action of the Dunkl operators, in particular, when specialized to the polynomials with leading term x_1^m ⊗ S. Section 3.2 contains the analogous results on Macdonald polynomials. Section 4 combines the previous results with analyses of the spectral vectors and a combinatorial analysis of the possible singular values to prove our main results on Jack and Macdonald polynomials. Section 4.1 illustrates the representation-theoretic aspect of singular polynomials.

2. Representation Theory

The symmetric group S_N is the group of permutations of {1, 2, …, N}. The transpositions w = (i,j), defined by w(i) = j, w(j) = i, and w(k) = k for k ≠ i, j, are fundamental tools in this study. The simple reflections s_i := (i, i+1), 1 ≤ i < N, generate S_N. The group is abstractly presented by s_i^2 = 1 for 1 ≤ i < N and the braid relations:
s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}, 1 ≤ i ≤ N−2; s_i s_j = s_j s_i, 1 ≤ i < j−1 ≤ N−2.
The group algebra C S_N has the underlying linear space {∑_{w∈S_N} c_w w : c_w ∈ C} and is of dimension N!. The associated Hecke algebra H_N(t), where t is transcendental (a formal parameter) or a complex number that is not a root of unity, is the associative algebra generated by T_1, T_2, …, T_{N−1} subject to the relations:
(T_i + 1)(T_i − t) = 0; T_i T_{i+1} T_i = T_{i+1} T_i T_{i+1}, 1 ≤ i ≤ N−2; T_i T_j = T_j T_i, 1 ≤ i < j−1 ≤ N−2.
It can be shown that there is a linear isomorphism between C S_N and H_N(t) based on the map s_i → T_i. When t = 1, they are identical. We require t ≠ 1 because several formulas have 1 − t in the denominator; however, it is generally possible to obtain meaningful limits as t → 1.
The irreducible modules of these algebras correspond to partitions of N. They are constructed in terms of Young tableaux. The descriptions will be given in terms of the actions of s_i or T_i on the basis elements (see [5]).
Let N_0 := {0, 1, 2, 3, …}, and denote the set of partitions with N parts by N_0^{N,+} := {λ ∈ N_0^N : λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N}. By a partition τ of N, we mean τ ∈ N_0^{N,+} with ∑_{i=1}^N τ_i = N. Thus, τ = (τ_1, τ_2, …) (often, the trailing zero entries are dropped when writing τ). The length of τ is ℓ(τ) := max{i : τ_i > 0}. The Ferrers diagram of shape τ (given the same label) is the union of the boxes at the points (i,j) with 1 ≤ i ≤ ℓ(τ) and 1 ≤ j ≤ τ_i. A tableau of shape τ is a filling of the boxes with numbers. A reverse standard Young tableau (RSYT) is a filling with the numbers 1, 2, …, N so that the entries decrease in each row and each column. Denote the set of RSYTs of shape τ by Y(τ). Let V_τ = span_F{S : S ∈ Y(τ)} with orthogonal basis Y(τ), where F is some extension field of Q containing the parameters κ or q, t. The dimension of V_τ, that is, #Y(τ), is given by a hook-length product formula (for more information about the tableaux, see Stanley [6]). For 1 ≤ i ≤ N and S ∈ Y(τ), the entry i is at the coordinates (row(i,S), col(i,S)), and the content of the entry is c(i,S) = col(i,S) − row(i,S). Each S ∈ Y(τ) is uniquely determined by its content vector (c(i,S))_{i=1}^N. For example, let τ = (4,3) and let S have first row 7 6 5 2 and second row 4 3 1; then, the content vector is (1, 3, 0, −1, 2, 1, 0). There are representations of S_N and H_N(t) on V_τ; each will be denoted by τ. For each i and S (with 1 ≤ i < N and S ∈ Y(τ)), there are four different possibilities:
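As a quick illustration (a minimal sketch, not from the paper; the dict encoding and helper names are our own), the content vector of the tableau above can be computed directly:

```python
# Illustrative sketch: an RSYT as a dict mapping entry -> (row, col), 1-indexed.

def is_rsyt(cells, shape):
    """Check that `cells` fills the Ferrers diagram of `shape` with
    1..N, decreasing along each row and each column."""
    N = sum(shape)
    boxes = {(i + 1, j + 1) for i, r in enumerate(shape) for j in range(r)}
    if set(cells) != set(range(1, N + 1)) or set(cells.values()) != boxes:
        return False
    entry = {v: k for k, v in cells.items()}
    # each box must hold a larger entry than its right and lower neighbors
    return all(entry[(r, c)] > entry.get((r, c + 1), 0)
               and entry[(r, c)] > entry.get((r + 1, c), 0) for (r, c) in boxes)

def content_vector(cells):
    """c(i,S) = col(i,S) - row(i,S), listed for i = 1..N."""
    return [cells[i][1] - cells[i][0] for i in sorted(cells)]

# The tableau S of shape tau = (4,3) shown above:
#   7 6 5 2
#   4 3 1
S = {7: (1, 1), 6: (1, 2), 5: (1, 3), 2: (1, 4), 4: (2, 1), 3: (2, 2), 1: (2, 3)}
```

Running `content_vector(S)` reproduces the content vector (1, 3, 0, −1, 2, 1, 0) stated above.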
(1) row(i,S) = row(i+1,S) (implying col(i,S) = col(i+1,S) + 1 and c(i,S) − c(i+1,S) = 1); then:
S τ(s_i) = S, S τ(T_i) = t S;
(2) col(i,S) = col(i+1,S) (implying row(i,S) = row(i+1,S) + 1 and c(i,S) − c(i+1,S) = −1); then:
S τ(s_i) = −S, S τ(T_i) = −S;
(3) row(i,S) < row(i+1,S) and col(i,S) > col(i+1,S). In this case:
c(i,S) − c(i+1,S) = col(i,S) − col(i+1,S) + row(i+1,S) − row(i,S) ≥ 2;
then, S^{(i)}, denoting the tableau obtained from S by exchanging i and i+1, is an element of Y(τ), and:
S τ(s_i) = S^{(i)} + (1/(c(i,S) − c(i+1,S))) S, S τ(T_i) = S^{(i)} + ((t − 1)/(1 − t^{c(i+1,S)−c(i,S)})) S;
(4) c(i,S) − c(i+1,S) ≤ −2; thus, row(i,S) > row(i+1,S) and col(i,S) < col(i+1,S); then, with b = c(i,S) − c(i+1,S),
S τ(s_i) = (1 − 1/b^2) S^{(i)} + (1/b) S, S τ(T_i) = (t(t^{b+1} − 1)(t^{b−1} − 1)/(t^b − 1)^2) S^{(i)} + (t^b(t − 1)/(t^b − 1)) S.
The formulas in (4) are consequences of those in (3), by interchanging S and S^{(i)} and applying the relations τ(s_i)^2 = I and (τ(T_i) + I)(τ(T_i) − t I) = 0 (where I denotes the identity operator on V_τ).
There is a commutative set of Jucys–Murphy elements in both C S_N and H_N(t). They are diagonalized with respect to the basis Y(τ) (with 1 ≤ i ≤ N and S ∈ Y(τ)):
ω_i := ∑_{j=i+1}^{N} (i,j), S τ(ω_i) = c(i,S) S;
ϕ_N = 1, ϕ_i = (1/t) T_i ϕ_{i+1} T_i, S τ(ϕ_i) = t^{c(i,S)} S.   (1)
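The case formulas and the Jucys–Murphy eigenvalues can be checked on a small example. The following sketch (our own worked example, assuming the sign conventions reconstructed above) builds the matrices of τ(s_1), τ(s_2) for τ = (2,1), N = 3, in the basis of its two RSYTs — S1 with rows (3 2 / 1), content vector (−1, 1, 0), and S2 with rows (3 1 / 2), content vector (1, −1, 0) — and verifies the braid relation and that ω_1 = (1,2) + (1,3) acts diagonally with eigenvalues c(1,S):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I2 = [[F(1), F(0)], [F(0), F(1)]]
# s_1 swaps entries 1,2: case (4) for S1 (b = -2), case (3) for S2.
S1_ = [[F(-1, 2), F(3, 4)], [F(1), F(1, 2)]]
# s_2 swaps entries 2,3: case (1) for S1 (same row), case (2) for S2 (same column).
S2_ = [[F(1), F(0)], [F(0), F(-1)]]

# Jucys-Murphy element omega_1 = (1,2) + (1,3); the transposition
# (1,3) equals s_2 s_1 s_2, so its matrix is S2_ S1_ S2_.
omega1 = [[a + b for a, b in zip(r1, r2)]
          for r1, r2 in zip(S1_, matmul(matmul(S2_, S1_), S2_))]
```

The assertions below confirm τ(s_i)^2 = I, the braid relation, and that ω_1 has eigenvalues c(1,S1) = −1 and c(1,S2) = 1.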
The representation τ of S_N is unitary (orthogonal) when V_τ is furnished with the inner product (S, S′ ∈ Y(τ)):
⟨S, S′⟩_0 := δ_{S,S′} × ∏_{1≤i<j≤N, c(j,S)−c(i,S)≥2} (1 − (c(i,S) − c(j,S))^{−2}).
The analogue for H_N(t) is (S, S′ ∈ Y(τ)):
⟨S, S′⟩_0 := δ_{S,S′} × ∏_{1≤i<j≤N, c(j,S)−c(i,S)≥2} u(t^{c(i,S)−c(j,S)}),
where:
u(z) := (1 − tz)(t − z)/(1 − z)^2.   (2)
This form satisfies ⟨f τ(T_i), g⟩_0 = ⟨f, g τ(T_i)⟩_0 for f, g ∈ V_τ and 1 ≤ i < N.

3. Representations and Operators on Polynomials

For N ≥ 2, set x = (x_1, …, x_N). The cardinality of a set E is denoted by #E. For α ∈ N_0^N (a composition or N-tuple), let |α| := ∑_{i=1}^N α_i and x^α := ∏_{i=1}^N x_i^{α_i}, a monomial of degree |α|. The spaces of polynomials, respectively homogeneous polynomials (in N variables over F), are:
P := span_F{x^α : α ∈ N_0^N}, P_n := span_F{x^α : α ∈ N_0^N, |α| = n}, n ∈ N_0.
For α ∈ N_0^N, let α⁺ denote the nonincreasing rearrangement of α. We use partial orders on N_0^N: for α, β ∈ N_0^N, α ≻ β (α dominates β) means that α ≠ β and ∑_{i=1}^j α_i ≥ ∑_{i=1}^j β_i for 1 ≤ j ≤ N; and α ▷ β means that |α| = |β| and either α⁺ ≻ β⁺, or α⁺ = β⁺ and α ≻ β. Furthermore, there is the rank function (1 ≤ i ≤ N):
r_α(i) := #{j : 1 ≤ j ≤ i, α_j ≥ α_i} + #{j : i < j ≤ N, α_j > α_i}.
Then, r_α ∈ S_N, and r_α(i) = i for all i if and only if α = α⁺.
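A short sketch of the rank function (0-indexed tuples; the helper name is ours):

```python
# r_alpha(i) = #{j <= i : alpha_j >= alpha_i} + #{j > i : alpha_j > alpha_i},
# returned as a tuple of 1-based values over i = 1..N.

def rank(alpha):
    N = len(alpha)
    return tuple(sum(1 for j in range(i + 1) if alpha[j] >= alpha[i])
                 + sum(1 for j in range(i + 1, N) if alpha[j] > alpha[i])
                 for i in range(N))
```

For instance, `rank((2, 0, 3, 2))` is the permutation (2, 4, 1, 3), while a nonincreasing tuple such as (5, 3, 3, 0) yields the identity (1, 2, 3, 4), illustrating the stated equivalence with α = α⁺.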
The (right) action of the symmetric group on polynomials is defined by:
x s_i = (x_1, …, x_{i−1}, x_{i+1}, x_i, x_{i+2}, …, x_N), p(x) s_i = p(x s_i), 1 ≤ i < N.
For arbitrary transpositions, x(i,j) = (…, x_j, …, x_i, …) (the entries at positions i and j interchanged), and p(x)(i,j) = p(x(i,j)). There is a subtlety (an implicit inverse) involved in acting on the right: for example, p(x)(s_1 s_2) = (p(x) s_1) s_2 = p(x s_2 s_1); that is, p(x_1, x_2, x_3)(s_1 s_2) = p(x_2, x_1, x_3) s_2 = p(x_3, x_1, x_2).
In general, p(x) w = p(x w^{−1}), where (x w)_i = x_{w^{−1}(i)} for all i.
The action of the Hecke algebra on polynomials is defined by:
p(x) T_i = (1 − t) x_{i+1} (p(x) − p(x s_i))/(x_i − x_{i+1}) + t p(x s_i).
The defining relations can be verified straightforwardly. There are special values: x_i T_i = x_{i+1}, (x_i + x_{i+1}) T_i = t(x_i + x_{i+1}), and (t x_i − x_{i+1}) T_i = −(t x_i − x_{i+1}). Furthermore, p T_i = t p if and only if p s_i = p, because t p − p T_i = ((t x_i − x_{i+1})/(x_i − x_{i+1}))(p − p s_i).
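Since p T_i is again a polynomial, the quadratic relation (T_i + 1)(T_i − t) = 0 can be tested pointwise at generic sample points. A minimal sketch (assuming the action as written above; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction as F

t = F(5, 7)          # an arbitrary generic parameter value

def swap(x, i):      # x s_i (0-indexed i swaps coordinates i, i+1)
    y = list(x); y[i], y[i + 1] = y[i + 1], y[i]; return tuple(y)

def T(p, i):
    """p T_i = (1-t) x_{i+1} (p - p s_i)/(x_i - x_{i+1}) + t (p s_i)."""
    def q(x):
        ps = p(swap(x, i))
        return (1 - t) * x[i + 1] * (p(x) - ps) / (x[i] - x[i + 1]) + t * ps
    return q

points = [(F(1), F(2), F(5)), (F(3), F(-1), F(7, 2)), (F(2, 3), F(4), F(-5))]
p = lambda x: x[0]                       # the polynomial p = x_1
# T_i^2 - (t-1) T_i - t = 0 is the quadratic relation, tested pointwise:
quad = lambda x: T(T(p, 0), 0)(x) - (t - 1) * T(p, 0)(x) - t * p(x)
```

The checks below confirm the special value x_1 T_1 = x_2 and the quadratic relation at all sample points.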
For a partition τ of N, let P τ : = P V τ (tensor product of two linear spaces over F ) . The set x α S : α N 0 N , S Y τ is a basis of P τ . The representations of S N and H N t on P τ are respectively defined by linear extension from the action on generators by:
s i : p x S p x s i S τ s i ,
T i : p x S 1 t x i + 1 p x p x s i x i x i + 1 S + p x s i S τ T i ,
for p P , S Y τ and 1 i < N (for details and background for the vector-valued Macdonald polynomials, see [2]).

3.1. Jack Polynomials

The Dunkl (D_i) and Cherednik–Dunkl (U_i) operators on P_τ, for p ∈ P, S ∈ Y(τ), and 1 ≤ i ≤ N, with parameter κ, are defined by:
p(x) ⊗ S D_i = (∂p(x)/∂x_i) ⊗ S + κ ∑_{j=1, j≠i}^{N} ((p(x) − p(x(i,j)))/(x_i − x_j)) ⊗ S τ((i,j)),
p(x) ⊗ S U_i = (x_i p(x) ⊗ S) D_i − κ ∑_{j<i} p(x(i,j)) ⊗ S τ((i,j)).
Each of the sets {D_i} and {U_i} consists of pairwise commuting elements. There is a basis of P_τ consisting of homogeneous polynomials, each of which is a simultaneous eigenfunction of the U_i; these are the nonsymmetric Jack polynomials. For each (α, S) ∈ N_0^N × Y(τ), there is the polynomial:
J_{α,S} = x^α ⊗ S τ(r_α) + ∑_{α▷β} x^β ⊗ v_{α,β,S}(κ),   (4)
where v_{α,β,S}(κ) ∈ V_τ; these coefficients are rational functions of κ. These polynomials satisfy:
J_{α,S} U_i = ζ_{α,S}(i) J_{α,S}, ζ_{α,S}(i) := α_i + 1 + κ c(r_α(i), S), 1 ≤ i ≤ N.
The spectral vector is (ζ_{α,S}(i))_{i=1}^N. For detailed proofs, see [7].
We are concerned with the special case α = (m, 0, …, 0) ∈ N_0^N. We apply formulas from [3] to analyze J_{α,S} D_i.
Proposition 1
([3], Cor. 6.2). Suppose (β, S) ∈ N_0^N × Y(τ) and β_j = 0 for j ≥ k with some fixed k > 1; then, J_{β,S} D_j = 0 for all j ≥ k.
The next result uses the norms of Jack polynomials for partition labels β. The Pochhammer symbol is (a)_n = ∏_{i=1}^n (a + i − 1).
Proposition 2.
Suppose β ∈ N_0^{N,+} and S ∈ Y(τ); then:
||J_{β,S}||^2 = ⟨S, S⟩_0 ∏_{i=1}^N (1 + κ c(i,S))_{β_i} × ∏_{1≤i<j≤N} ∏_{ℓ=1}^{β_i−β_j} (1 − (κ/(ℓ + κ(c(i,S) − c(j,S))))^2).
Corollary 1.
Suppose α = (m, 0, …, 0); then:
||J_{α,S}||^2 = ⟨S, S⟩_0 (1 + κ c(1,S))_m ∏_{j=2}^N ∏_{ℓ=1}^m (1 − (κ/(ℓ + κ(c(1,S) − c(j,S))))^2).
These norm formulas are the results of Griffeth [1] specialized to the symmetric groups. The final ingredient for the formula is a special case of [3], Theorem 6.3.
Proposition 3.
Suppose α = (m, 0, …, 0) and α̂ = (m−1, 0, …, 0); then:
J_{α,S} D_1 = (||J_{α,S}||^2 / ||J_{α̂,S}||^2) J_{α̂,S} = (m + κ c(1,S)) ∏_{j=2}^N (1 − (κ/(m + κ(c(1,S) − c(j,S))))^2) J_{α̂,S}.   (5)
Proof. 
The first line comes from [3], Theorem 6.3. Then, the norm ratios are computed, which involves much cancellation. □
Denote the prefactor of J_{α̂,S} in Equation (5) by C_{S,m}(κ). Our interest is in the zeros of C_{S,m}(κ) as a function of κ. We will see that C_{S,m}(κ) depends only on τ and the location of the entry 1 in S. The idea is to group the entries of S by row and use telescoping properties. There is a simple formula (proven inductively):
∏_{i=a}^{b} g(i+1) g(i−1) / g(i)^2 = g(a−1) g(b+1) / (g(a) g(b)),
where g is a function on Z and a ≤ b. For the present application, set g(i) = m + κ(c(1,S) − i).
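A numerical sketch of the telescoping identity (the choice g(i) = m + κ(c − i) below mirrors the application, with arbitrary generic sample values m = 3, κ = 2/5, c = 4):

```python
from fractions import Fraction as F

# prod_{i=a}^{b} g(i+1) g(i-1) / g(i)^2  should equal  g(a-1) g(b+1) / (g(a) g(b))
m, kappa, c = F(3), F(2, 5), F(4)
g = lambda i: m + kappa * (c - i)

def lhs(a, b):
    out = F(1)
    for i in range(a, b + 1):
        out *= g(i + 1) * g(i - 1) / g(i) ** 2
    return out

rhs = lambda a, b: g(a - 1) * g(b + 1) / (g(a) * g(b))
```

With these parameter values g never vanishes on the integers, so the identity can be checked over a range of intervals [a, b].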
Definition 1.
The partition τ̂ ∈ N_0^{N,+} is obtained from τ by removing the box at (row(1,S), col(1,S)): for 1 ≤ i ≤ ℓ(τ), set τ̂_i = τ_i − 1 if row(1,S) = i; otherwise, set τ̂_i = τ_i.
The part of the product in C_{S,m}(κ) coming from row #i has c(j,S) ranging from 1 − i to τ̂_i − i, so the corresponding subproduct is:
∏_{j=1−i}^{τ̂_i−i} g(j+1) g(j−1) / g(j)^2 = g(−i) g(τ̂_i − i + 1) / (g(1−i) g(τ̂_i − i)).
Multiply these factors for i = 1, 2, …, ℓ(τ); note that:
∏_{i=1}^{ℓ(τ)} g(−i)/g(1−i) = g(−ℓ(τ))/g(0) = (m + κ(c(1,S) + ℓ(τ)))/(m + κ c(1,S)),
and thus:
C_{S,m}(κ) = (m + κ(c(1,S) + ℓ(τ))) ∏_{i=1}^{ℓ(τ)} (m + κ(c(1,S) − τ̂_i + i − 1))/(m + κ(c(1,S) − τ̂_i + i)).   (6)
As stated before, the formula depends only on τ and the location of the entry 1 in S. More simplification is possible, due to telescoping, if some of the τ̂_i are equal. Next, we formalize the set of indices of the parts of a partition that can be increased by one while maintaining the partition (nonincreasing) property.
Definition 2.
For τ̂ as in Definition 1, define the increasing sequence I(τ̂) = {i_1, i_2, …, i_k} such that i_1 = 1, and 2 ≤ s ≤ k implies τ̂_{i_s} < τ̂_{i_s − 1} and τ̂_j = τ̂_{i_{s−1}} for i_{s−1} ≤ j < i_s. The last element is i_k = ℓ(τ) + 1. Let Z(τ̂) = {τ̂_{i_s} + 1 − i_s : 1 ≤ s ≤ k−1} ∪ {−ℓ(τ) : τ̂_{ℓ(τ)} ≥ 1} (the latter set is omitted when τ̂_{ℓ(τ)} = 0).
Example 1.
Suppose τ̂ = (5,5,4,4,4,3,3,2,1); then, I(τ̂) = {1,3,6,8,9,10} and Z(τ̂) = {5, 2, −2, −5, −7, −9}. If τ̂ = (5,5,4,4,4,3,3,3,0), then I(τ̂) = {1,3,6,9} and Z(τ̂) = {5, 2, −2, −8}.
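The sets Z(τ̂) in Example 1 can be recomputed under the interpretation given below Definition 2, namely that Z(τ̂) collects the contents col − row of the cells where a box can be adjoined to τ̂ (a sketch; the function name is ours):

```python
def addable_contents(shape):
    """Contents (col - row, 1-indexed) of all cells that can be adjoined
    to the partition `shape` (trailing zeros allowed) keeping it a partition."""
    parts = list(shape) + [0]
    out = []
    for r, p in enumerate(parts, start=1):
        prev = parts[r - 2] if r >= 2 else None
        if r == 1 or p < prev:          # row r can grow by one box
            out.append((p + 1) - r)
        if p == 0:                      # first empty row: new row of length 1
            break
    return out
```

Applied to the two partitions of Example 1, this returns [5, 2, −2, −5, −7, −9] and [5, 2, −2, −8], respectively.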
Let Ŝ denote the tableau formed by deleting the box at (row(1,S), col(1,S)) from S. The key property of I(τ̂) is that it controls the possible locations where a box containing 1 could be adjoined to Ŝ to form an RSYT. These locations are (1, τ̂_1 + 1), …, (i_s, τ̂_{i_s} + 1), …. If τ̂_{ℓ(τ)} = 0, then the last location is (ℓ(τ), 1); otherwise, it is (ℓ(τ)+1, 1). Thus, Z(τ̂) is the set of contents of the locations in the list. Evaluate the part of the product in Formula (6) for the range i_s ≤ j < i_{s+1} to obtain:
∏_{j=i_s}^{i_{s+1}−1} (m + κ(c(1,S) − τ̂_{i_s} + j − 1))/(m + κ(c(1,S) − τ̂_{i_s} + j)) = (m + κ(c(1,S) − τ̂_{i_s} − 1 + i_s))/(m + κ(c(1,S) − τ̂_{i_s} − 1 + i_{s+1})).
This completes the proof of the following:
Proposition 4.
For τ̂ and I(τ̂) as in Definitions 1 and 2:
C_{S,m}(κ) = (m + κ(c(1,S) + ℓ(τ))) ∏_{s=1}^{k−1} (m + κ(c(1,S) − τ̂_{i_s} − 1 + i_s))/(m + κ(c(1,S) − τ̂_{i_s} − 1 + i_{s+1})),
where i_k = ℓ(τ) + 1.
If τ̂_{ℓ(τ)} = 0, then the entry at (ℓ(τ), 1) is 1, c(1,S) = 1 − ℓ(τ), i_{k−1} = ℓ(τ), and the last factor in the product (for s = k−1) equals m/(m + κ), thus canceling out the leading factor m + κ(c(1,S) + ℓ(τ)) = m + κ.
Lemma 1.
Suppose 1 ≤ a, b ≤ k−1; then, τ̂_{i_a} − i_a ≠ τ̂_{i_b} − i_b + 1.
Proof. 
By construction, the sequence (τ̂_{i_a})_{a≥1} is strictly decreasing, and the sequence (i_a)_{a≥1} is strictly increasing. Suppose for some a, b, the equation τ̂_{i_a} − i_a = τ̂_{i_b} − i_b + 1 holds, that is, i_a − i_b + 1 = τ̂_{i_a} − τ̂_{i_b}. Clearly, a = b or a = b + 1 is impossible. Suppose i_a − i_b + 1 > 0; then, b < b + 1 < a, implying τ̂_{i_a} − τ̂_{i_b} < 0, a contradiction. Similarly, suppose i_a − i_b + 1 < 0; then, a < b + 1, and furthermore, a < b since a = b is impossible; thus, τ̂_{i_a} − τ̂_{i_b} > 0, again a contradiction. This completes the proof. □
Proposition 5.
The set of zeros of C_{S,m}(κ) is:
{−m/(c(1,S) − z) : z ∈ Z(τ̂), z ≠ c(1,S)}.
Proof. 
None of the numerator factors in the product are canceled out, due to Lemma 1. The only possible cancellation occurs for τ̂_{ℓ(τ)} = 0, when c(1,S) is the last entry in the list Z(τ̂). □
Example 2.
Let N = 27, τ = (5,5,5,4,4,2,2), and (row(1,S), col(1,S)) = (3,5); then, τ̂ = (5,5,4,4,4,2,2) and c(1,S) = 2. The possible locations where the box containing 1 could be adjoined to τ̂ are (1,6), (3,5), (6,3), (8,1), so that Z(τ̂) = {5, 2, −3, −7} and:
C_{S,m}(κ) = m(m − 3κ)(m + 5κ)(m + 9κ) / ((m − κ)(m + 3κ)(m + 7κ)).
Here is a sketch of τ̂, marked by □, with the possible cells for the entry 1:
□ □ □ □ □ 1
□ □ □ □ □
□ □ □ □ 1
□ □ □ □
□ □ □ □
□ □ 1
□ □
1
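A consistency sketch for this example (assuming c(1,S) = 2 and the cell contents of τ̂ = (5,5,4,4,4,2,2)): the prefactor in the form of Proposition 3 should agree with the factored expression above as a rational function of κ, so we compare them at sample points:

```python
from fractions import Fraction as F

tauhat = (5, 5, 4, 4, 4, 2, 2)
c1 = F(2)
# contents col - row of the cells of tauhat (occupied by entries 2..N)
contents = [c + 1 - r for r, p in enumerate(tauhat, start=1) for c in range(p)]

def C_product(m, k):
    """(m + k c(1,S)) prod_j (1 - (k/(m + k(c(1,S) - c(j,S))))^2)."""
    out = m + k * c1
    for c in contents:
        out *= 1 - (k / (m + k * (c1 - c))) ** 2
    return out

def C_closed(m, k):
    return (m * (m - 3*k) * (m + 5*k) * (m + 9*k)
            / ((m - k) * (m + 3*k) * (m + 7*k)))
```

Both forms agree at generic rational sample values of (m, κ), supporting the telescoped formula.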
In a later section, we examine the relation to singular polynomials of the form J_{α,S}.

3.2. Macdonald Polynomials

Adjoin the parameter q. To say that (q,t) is generic means that q ≠ 1 and q^a t^b ≠ 1 for all a, b ∈ Z with (a,b) ≠ (0,0) and −N ≤ b ≤ N. Besides the operators T_i defined in (3), we introduce (for p ∈ P, S ∈ Y(τ)):
ω := T_1 T_2 ⋯ T_{N−1}, p(x) ⊗ S w := p(q x_N, x_1, …, x_{N−1}) ⊗ S τ(ω).
Thus, ω is an element of H_N(t) analogous to a cyclic shift, and w is an operator on P_τ. The Cherednik (ξ_i) and Dunkl (D_i) operators, for 1 ≤ i ≤ N, are defined by:
ξ_i := t^{i−N} T_{i−1}^{−1} ⋯ T_1^{−1} w T_{N−1} ⋯ T_i, D_N := (1 − ξ_N)(1/x_N), D_i := (1/t) T_i D_{i+1} T_i.
These definitions were given for the scalar case by Baker and Forrester [8] and extended to vector-valued polynomials by Luque and the author [2]. The operators {ξ_i : 1 ≤ i ≤ N} commute pairwise, while the operators {D_i : 1 ≤ i ≤ N} commute pairwise and map P_n ⊗ V_τ into P_{n−1} ⊗ V_τ for n ≥ 0. A polynomial p ∈ P_τ is singular for some particular value of (q,t) if p D_i = 0, evaluated at (q,t), for all i. There is a basis of P_τ consisting of homogeneous polynomials, each of which is a simultaneous eigenfunction of the ξ_i; these are the nonsymmetric Macdonald polynomials. For each (α, S) ∈ N_0^N × Y(τ), there is the polynomial:
M_{α,S} = q^a t^b (x^α ⊗ S τ(R_α) + ∑_{α▷β} x^β ⊗ v_{α,β,S}(q,t));
here, v_{α,β,S}(q,t) ∈ V_τ, and R_α := T_{i_1} T_{i_2} ⋯ T_{i_m}, where α s_{i_1} s_{i_2} ⋯ s_{i_m} = α⁺ and there is no shorter product s_{j_1} s_{j_2} ⋯ having this property (that is, m = #{(i,j) : i < j, α_i < α_j}), and a, b ∈ Z (see [4], p. 19, for the values of a, b, which are not needed here). The eigenvector property is:
M_{α,S} ξ_i = ζ̃_{α,S}(i) M_{α,S}, ζ̃_{α,S}(i) = q^{α_i} t^{c(r_α(i),S)}, 1 ≤ i ≤ N.
As before, (ζ̃_{α,S}(i))_{i=1}^N is called the spectral vector (the tilde indicates the (q,t)-version). We consider the special case α = (m, 0, …, 0).
Proposition 6
([4], Prop. 12). Suppose (β, S) ∈ N_0^N × Y(τ) and β_j = 0 for j ≥ k with some fixed k > 1; then, M_{β,S} D_j = 0 for all j ≥ k.
Adapting the proof of [4], Lemma 5, we show (recall from (2) that u(z) = (1 − tz)(t − z)/(1 − z)^2):
Proposition 7.
Let α = (m, 0, …, 0), α′ = (0, 0, …, m), and S ∈ Y(τ); then:
M_{α,S} D_1 = t^{1−N} ∏_{j=1}^{N−1} u(q^m t^{c(1,S)−c(j+1,S)}) M_{α′,S} D_N T_{N−1} ⋯ T_1.
The other ingredient is the affine step (from the Yang–Baxter graph; see [4], 3.14, and [2]): for β ∈ N_0^N, set βΦ := (β_2, β_3, …, β_N, β_1 + 1); then, M_{βΦ,S} = x_N M_{β,S} w. The spectral vector of βΦ is (ζ̃_{β,S}(2), …, ζ̃_{β,S}(N), q ζ̃_{β,S}(1)). Observe that α̂Φ = α′ for α̂ = (m−1, 0, …) and α′ = (0, 0, …, m). By definition:
M_{α′,S} D_N = (1/x_N) M_{α′,S} (1 − ξ_N) = (1 − ζ̃_{α′,S}(N)) (1/x_N) M_{α′,S} = (1 − ζ̃_{α′,S}(N)) M_{α̂,S} w = (1 − q ζ̃_{α̂,S}(1)) M_{α̂,S} w.
Furthermore, 1 − q ζ̃_{α̂,S}(1) = 1 − q^m t^{c(1,S)}, and w T_{N−1} ⋯ T_1 = t^{N−1} ξ_1, so that M_{α̂,S} ξ_1 = q^{m−1} t^{c(1,S)} M_{α̂,S}.
Proposition 8.
Let α = (m, 0, …, 0), α̂ = (m−1, 0, …, 0), and S ∈ Y(τ); then:
M_{α,S} D_1 = q^{m−1} t^{c(1,S)} (1 − q^m t^{c(1,S)}) ∏_{j=2}^N u(q^m t^{c(1,S)−c(j,S)}) M_{α̂,S}.   (7)
This is very similar to the Jack case (5), and the same telescoping argument will be used. Denote the factor of M_{α̂,S} in (7) by C_{S,m}(q,t). Set g(i) = 1 − q^m t^{c(1,S)−i} for i ∈ Z; then:
u(q^m t^{c(1,S)−c(j,S)}) = t g(c(j,S)−1) g(c(j,S)+1) / g(c(j,S))^2.
With the same notation for τ̂ as in Definition 1:
C_{S,m}(q,t) = q^{m−1} t^{c(1,S)+N−1} (1 − q^m t^{c(1,S)}) ∏_{i=1}^{ℓ(τ)} g(−i) g(τ̂_i − i + 1) / (g(1−i) g(τ̂_i − i))
= q^{m−1} t^{c(1,S)+N−1} (1 − q^m t^{c(1,S)}) (g(−ℓ(τ))/g(0)) ∏_{i=1}^{ℓ(τ)} g(τ̂_i − i + 1)/g(τ̂_i − i)
= q^{m−1} t^{c(1,S)+N−1} (1 − q^m t^{c(1,S)+ℓ(τ)}) ∏_{i=1}^{ℓ(τ)} g(τ̂_i − i + 1)/g(τ̂_i − i).
The same computational scheme as in Proposition 4 proves the following:
Proposition 9.
For τ̂ and I(τ̂) as in Definitions 1 and 2:
C_{S,m}(q,t) = q^{m−1} t^{c(1,S)+N−1} (1 − q^m t^{c(1,S)+ℓ(τ)}) ∏_{s=1}^{k−1} (1 − q^m t^{c(1,S)−τ̂_{i_s}−1+i_s})/(1 − q^m t^{c(1,S)−τ̂_{i_s}−1+i_{s+1}}),
where i_k = ℓ(τ) + 1.
If τ̂_{ℓ(τ)} = 0, then the entry at (ℓ(τ), 1) is 1, c(1,S) = 1 − ℓ(τ), i_{k−1} = ℓ(τ), and the last factor in the product (for s = k−1) equals (1 − q^m)/(1 − q^m t), thus canceling out the leading factor 1 − q^m t^{c(1,S)+ℓ(τ)} = 1 − q^m t.
Proposition 10.
The set of zeros of C_{S,m}(q,t) is:
{(q,t) : q^m t^{c(1,S)−z} = 1, z ∈ Z(τ̂), z ≠ c(1,S)}.
Proof. 
None of the numerator factors in the product are canceled out, due to Lemma 1. The only possible cancellation occurs for τ̂_{ℓ(τ)} = 0, when c(1,S) is the last entry in the list Z(τ̂). □
Example 3.
Let N = 27, τ = (5,5,4,4,4,3,2), and (row(1,S), col(1,S)) = (6,3); then, τ̂ = (5,5,4,4,4,2,2) and c(1,S) = −3. This is the same τ̂ as in Example 2, and Z(τ̂) = {5, 2, −3, −7}. The same diagram applies here. Then:
C_{S,m}(q,t) = q^{m−1} t^{23} (1 − q^m)(1 − q^m t^{−8})(1 − q^m t^{−5})(1 − q^m t^4) / ((1 − q^m t^{−6})(1 − q^m t^{−2})(1 − q^m t^2)).
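A consistency sketch for this example (assumptions: u(z) = (1 − tz)(t − z)/(1 − z)^2 and c(1,S) = −3): the product form of Proposition 8 is compared with the factored expression at sample rational (q,t):

```python
from fractions import Fraction as F

tauhat = (5, 5, 4, 4, 4, 2, 2)
contents = [c + 1 - r for r, p in enumerate(tauhat, start=1) for c in range(p)]
u = lambda z, t: (1 - t * z) * (t - z) / (1 - z) ** 2

def Cqt_product(m, q, t):
    """q^{m-1} t^{c1} (1 - q^m t^{c1}) prod_j u(q^m t^{c1 - c_j}), c1 = -3."""
    zc1 = q ** m * t ** -3
    out = q ** (m - 1) * t ** -3 * (1 - zc1)
    for c in contents:
        out *= u(q ** m * t ** (-3 - c), t)
    return out

def Cqt_closed(m, q, t):
    qm = q ** m
    return (q ** (m - 1) * t ** 23 * (1 - qm) * (1 - qm * t ** -8)
            * (1 - qm * t ** -5) * (1 - qm * t ** 4)
            / ((1 - qm * t ** -6) * (1 - qm * t ** -2) * (1 - qm * t ** 2)))
```

At generic rational points (where no factor q^m t^j equals 1) the two forms agree exactly.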
In the next section, we will see under what conditions M_{α,S} is singular.

4. Singular Polynomials

For α = (m, 0, …) ∈ N_0^{N,+}, α̂ = (m−1, 0, …), and S ∈ Y(τ), we have shown:
J_{α,S} D_i = 0, M_{α,S} D_i = 0, 2 ≤ i ≤ N; J_{α,S} D_1 = C_{S,m}(κ) J_{α̂,S}, M_{α,S} D_1 = C_{S,m}(q,t) M_{α̂,S},
and we determined the zeros of C_{S,m}(κ) and C_{S,m}(q,t). However, not all zeros lead to singular polynomials because, in general, the coefficients of J_{β,S} (with respect to the monomial basis {x^γ ⊗ S′}) have denominators of the form a + bκ, and the coefficients of M_{β,S} have denominators of the form 1 − q^a t^b, where a, b ∈ Z and |b| ≤ N. Thus, to be able to substitute κ = κ_0, a zero of C_{S,m}(κ), or (q,t) = (q_0,t_0), a zero of C_{S,m}(q,t), in Equations (5) and (7) and conclude that J_{α,S} or M_{α,S} is singular, it is necessary to show that neither J_{α,S} nor J_{α̂,S} has a pole at κ = κ_0; the analogous requirement applies to M_{α,S} and M_{α̂,S}. From the triangularity of J_{β,S} and M_{β,S} with respect to the monomial basis, we can deduce that:
x^λ ⊗ S = J_{λ,S} + ∑_{λ▷γ, S′∈Y(τ)} b(λ,γ,S,S′; κ) J_{γ,S′}, x^λ ⊗ S = c M_{λ,S} + ∑_{λ▷γ, S′∈Y(τ)} b(λ,γ,S,S′; q,t) M_{γ,S′},
where λ ∈ N_0^{N,+}, the coefficients b(λ,γ,S,S′; κ) and b(λ,γ,S,S′; q,t) are rational functions of κ and of (q,t), respectively, and c = q^a t^b for some integers a, b. If one can show for each (γ, S′) with λ ▷ γ that the spectral vector is distinct from that of (λ, S), that is, (ζ_{γ,S′}(i))_{i=1}^N ≠ (ζ_{λ,S}(i))_{i=1}^N when evaluated at the specific values of κ or of (q,t) (with ζ̃), then J_{λ,S}, respectively M_{λ,S}, does not have a pole there. The following is a device for analyzing possibly coincident spectral vectors.
Definition 3.
Let (β, S), (γ, S′) ∈ N_0^N × Y(τ) be such that β ≠ γ, and let m, n ∈ Z with m ≥ 1 and n ≠ 0. Then, {(β,S), (γ,S′)} is an (m,n)-critical pair if there is v ∈ Z^N such that β_i − γ_i = m v_i and c(r_β(i),S) − c(r_γ(i),S′) = n v_i for 1 ≤ i ≤ N.
Lemma 2.
Let (β,S), (γ,S′) ∈ N_0^N × Y(τ) be such that β ≠ γ and ζ_{β,S}(i) = ζ_{γ,S′}(i) for all i when κ = −m/n, with gcd(m,n) = 1; then, {(β,S), (γ,S′)} is an (m,n)-critical pair.
Proof. 
By hypothesis, 1 + β_i − (m/n) c(r_β(i),S) = 1 + γ_i − (m/n) c(r_γ(i),S′) for 1 ≤ i ≤ N; thus:
β_i − γ_i = (m/n)(c(r_β(i),S) − c(r_γ(i),S′)), n(β_i − γ_i) = m(c(r_β(i),S) − c(r_γ(i),S′)).
From gcd(m,n) = 1, it follows that β_i − γ_i = m v_i for some v_i ∈ Z, and thus, c(r_β(i),S) − c(r_γ(i),S′) = n v_i. □
Now, we specialize to α = (m, 0, …) as in Section 3.1 and to n satisfying C_{S,m}(−m/n) = 0. By Proposition 5, this is equivalent to n = c(1,S) − z with z ∈ Z(τ̂).
Proposition 11.
There are no (m,n)-critical pairs {(α,S), (γ,S′)}.
Proof. 
Suppose that γ ≠ α, α_i − γ_i = m v_i, and c(i,S) − c(r_γ(i),S′) = n v_i, with v_i ∈ Z and 1 ≤ i ≤ N. From |γ| = |α| = m and the divisibility of each α_i − γ_i by m, it follows that γ_k = m for some k and γ_i = 0 for i ≠ k. If k = 1, then v_i = 0 for all i and c(i,S) = c(r_γ(i),S′) = c(i,S′), because γ ∈ N_0^{N,+}. The content vector determines S uniquely, and thus, S′ = S and γ = α. Now, suppose k > 1; then, v_1 = 1, v_k = −1, and v_i = 0 otherwise. The respective content vectors are:
(c(i,S))_{i=1}^N = (c(1,S), c(2,S), …, c(k,S), c(k+1,S), …, c(N,S)),
(c(r_γ(i),S′))_{i=1}^N = (c(2,S′), c(3,S′), …, c(k,S′), c(1,S′), c(k+1,S′), …, c(N,S′)).
The hypothesis on γ implies c(i,S′) = c(i−1,S) for 3 ≤ i ≤ k, c(i,S′) = c(i,S) for k+1 ≤ i ≤ N, and c(2,S′) = c(1,S) − n, c(1,S′) = c(k,S) + n. Since S and S′ are both of shape τ, the two content vectors are permutations of each other. The list of values c(3,S′), …, c(N,S′) agrees with c(2,S), …, c(k−1,S), c(k+1,S), …, c(N,S); thus, {c(1,S), c(k,S)} and {c(1,S′), c(2,S′)} contain the same two numbers. Since c(2,S′) = c(1,S) − n ≠ c(1,S), the equation c(1,S′) = c(1,S) must hold. The possible locations of the entry 1 in an RSYT must have different contents (else they would be on the same diagonal {(i,j) : j − i = c(1,S)}). Thus, (row(1,S′), col(1,S′)) = (row(1,S), col(1,S)), and S and S′ lead to the same τ̂ (the partition formed by removing the cell of 1 from τ). By construction, n = c(1,S) − z for some z ∈ Z(τ̂), and z determines a cell (i_s, τ̂_{i_s}+1) where 1 can be attached to the part of S containing 2, 3, …, N to form a new RSYT S″. By construction, c(1,S″) = z = c(1,S) − n = c(2,S′) = c(2,S″). It is impossible that c(1,S″) = c(2,S″) for any RSYT; thus, γ ≠ α cannot occur. □
The same problem for α̂ = (m−1, 0, …) is almost trivial.
Lemma 3.
Suppose S′ ∈ Y(τ), |γ| = m − 1, α̂_i − γ_i = m v_i, and c(i,S) − c(r_γ(i),S′) = n v_i, with v_i ∈ Z and 1 ≤ i ≤ N. Then, (γ, S′) = (α̂, S).
Proof. 
The hypothesis |γ| = m − 1 implies γ_i ≤ m − 1, and thus, |α̂_i − γ_i| ≤ m − 1 for all i. This implies v_i = 0 for all i, implying γ = α̂ and c(j,S′) = c(j,S) for all j; thus, S′ = S. □
Proposition 12.
Suppose (β,S) ∈ N_0^N × Y(τ), gcd(m,n) = 1, and there are no (m,n)-critical pairs {(β,S), (γ,S′)}; then, J_{β,S} has no poles at κ = −m/n.
Proof. 
By the triangularity of Formula (4), there is an expansion:
x^β ⊗ S τ(r_β) = J_{β,S} + ∑_{β▷γ, S′∈Y(τ)} b(β,γ,S,S′; κ) J_{γ,S′}.
By Lemma 2, for each γ with β ▷ γ and S′ ∈ Y(τ), there is at least one i = i(γ,S′) such that ζ_{β,S}(i) − ζ_{γ,S′}(i) ≠ 0 when κ = −m/n. Define an operator:
T := ∏_{β▷γ, S′∈Y(τ)} (U_{i(γ,S′)} − ζ_{γ,S′}(i(γ,S′)))/(ζ_{β,S}(i(γ,S′)) − ζ_{γ,S′}(i(γ,S′))).
Then, J_{β,S} T = J_{β,S}, and each J_{γ,S′} (with β ▷ γ) is annihilated by at least one factor of T. Thus, J_{β,S} = x^β ⊗ S τ(r_β) T, a polynomial whose coefficients have denominators that are factors of ∏_{β▷γ, S′∈Y(τ)} (ζ_{β,S}(i(γ,S′)) − ζ_{γ,S′}(i(γ,S′))). By the construction of i(γ,S′), this product does not vanish at κ = −m/n. □
We are ready for the main result on Jack polynomials.
Theorem 1.
Suppose α = (m, 0, …), S ∈ Y(τ), and Z(τ̂) is as in Definition 2. Further, suppose z ∈ Z(τ̂), n := c(1,S) − z ≠ 0, and gcd(m,n) = 1; then, J_{α,S} is a singular polynomial for κ = −m/n.
Proof. 
From Proposition 1, J_{α,S} D_j = 0 for 2 ≤ j ≤ N, and J_{α,S} D_1 = C_{S,m}(κ) J_{α̂,S}, where α̂ = (m−1, 0, …). By Propositions 11 and 12 and Lemma 3, J_{α,S} and J_{α̂,S} do not have poles at κ = −m/n. Furthermore, C_{S,m}(−m/n) = 0, and thus, J_{α,S} D_1 = 0 at κ = −m/n. □
To set up the analogous results for Macdonald polynomials, consider the difference between two spectral vectors: ζ̃_{β,S}(i) − ζ̃_{γ,S′}(i) = q^{β_i} t^{c(r_β(i),S)} − q^{γ_i} t^{c(r_γ(i),S′)} = q^{γ_i} t^{c(r_γ(i),S′)} (q^{β_i−γ_i} t^{c(r_β(i),S)−c(r_γ(i),S′)} − 1). To relate this to (m,n)-critical pairs, we specify a condition on (q,t) which implies that a = m v and b = n v for some v ∈ Z whenever q^a t^b = 1.
Definition 4.
Suppose m, n are integers such that m ≥ 1, n ≠ 0, and gcd(m,n) = g ≥ 1. Let u ∈ C∖{0} be such that u is not a root of unity, and let ω = exp(2πik/m) with gcd(k,g) = 1. Define ϖ = (q,t) = (ω u^{−n/g}, u^{m/g}).
Lemma 4.
Suppose a, b are integers such that q^a t^b = 1 at (q,t) = ϖ; then, a = m v and b = n v for some v ∈ Z.
Proof. 
By the hypothesis:
1 = (ω u^{−n/g})^a (u^{m/g})^b = ω^a u^{(−na+mb)/g}.
Since u is not a root of unity, it follows that −a(n/g) + b(m/g) = 0; but gcd(n/g, m/g) = 1; thus, m/g divides a. Write a = (m/g)c for some integer c; then, 1 = ω^a = exp(2πi k m c/(m g)) = exp(2πi k c/g). This implies c = v g with v ∈ Z, because exp(2πi k/g) is a primitive g-th root of unity. Thus, a = (m/g) v g = m v and b = (n/m) a = n v. □
Remark 1.
All the possible values of ϖ are included when (1) g > 1 and ω = exp(2πik/m) with gcd(k,g) = 1 and 1 ≤ k < g, or (2) g = 1 and ω = 1. To prove this, let u = ϕ v with ϕ = exp(2πi g l/m) and l ∈ Z, so that u^{m/g} = v^{m/g}. Then, q = ω ϕ^{−n/g} v^{−n/g} = exp(2πi(k − n l)/m) v^{−n/g}. Since gcd(m,n) = g, there are integers s, s′ such that s m + s′ n = g. Set l = s′ s″ (with s″ ∈ Z); then, k − n l = k − s″ g + s s″ m, and thus, ω ϕ^{−n/g} = exp(2πi(k − s″ g)/m). If g > 1, then choose s″ so that 1 ≤ k − s″ g < g, while if g = 1, set s″ = k.
Example 4.
Suppose m = 8 and n = 12; then, g = 4, and the possible values of ϖ are (exp(πi/4) u^{−3}, u^2) and (exp(3πi/4) u^{−3}, u^2), where u is not a root of unity.
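A sketch for this example (assuming the reconstruction ϖ = (ω u^{−n/g}, u^{m/g})): tracking only the exponent data, the solutions of q^a t^b = 1 in a finite box should be exactly the multiples (mv, nv), as Lemma 4 asserts:

```python
from math import gcd

# q = omega * u^{-n/g}, t = u^{m/g}, omega = exp(2*pi*i*k/m), u not a root
# of unity; then q^a t^b = omega^a * u^{(-n*a + m*b)/g}.
m, n = 8, 12
g = gcd(m, n)                    # g = 4
k = 1                            # gcd(k, g) = 1, so omega = exp(pi*i/4)

def is_one(a, b):
    """q^a t^b = 1 at varpi: omega^a = 1 and the u-exponent vanishes."""
    return (k * a) % m == 0 and (-n * a + m * b) == 0

solutions = {(a, b) for a in range(-20, 21) for b in range(-30, 31) if is_one(a, b)}
expected = {(m * v, n * v) for v in range(-2, 3)}    # all (a,b) = v*(m,n) with |a| <= 20
```

The enumeration confirms that within the box the only solutions are (a,b) = v(m,n), v ∈ Z.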
We will use this result to produce singular polynomials M_{α,S} for (q,t) = ϖ.
Lemma 5.
Let (β,S), (γ,S′) ∈ N_0^N × Y(τ) be such that β ≠ γ and ζ̃_{β,S}(i) = ζ̃_{γ,S′}(i) for all i when (q,t) = ϖ; then, {(β,S), (γ,S′)} is an (m,n)-critical pair.
Proof. 
The equation ζ̃_{β,S}(i) = ζ̃_{γ,S′}(i) is q^{β_i} t^{c(r_β(i),S)} = q^{γ_i} t^{c(r_γ(i),S′)}, that is, q^{β_i−γ_i} t^{c(r_β(i),S)−c(r_γ(i),S′)} = 1 at (q,t) = ϖ. By Lemma 4, there is an integer v_i such that β_i − γ_i = m v_i and c(r_β(i),S) − c(r_γ(i),S′) = n v_i. This argument applies to all i. □
Proposition 13.
Suppose (β,S) ∈ N_0^N × Y(τ) and there are no (m,n)-critical pairs {(β,S), (γ,S′)}; then, M_{β,S} has no poles at (q,t) = ϖ.
Proof. 
The proof is essentially identical to that of Proposition 12. There, replace x^β ⊗ S τ(r_β) by q^a t^b x^β ⊗ S τ(R_β) (with the appropriate prefactor q^a t^b), J by M, ζ by ζ̃, and U_i by ξ_i. The formula shows that M_{β,S} is a polynomial, the denominators of whose coefficients are products of factors of the form q^{β_i} t^{b} − q^{γ_i} t^{b′}, and none of these vanish at (q,t) = ϖ. □
This is our main result for the Macdonald polynomials.
Theorem 2.
Suppose α = m , 0 , , S Y τ and Z τ ^ is as in Definition 2. Further, suppose z Z τ ^ , n : = c 1 , S z 0 , then M α , S is a singular polynomial for q , t = ϖ .
Proof. 
From Proposition 6, $M_{\alpha,S}D_{j}=0$ for $2\le j\le N$ and $M_{\alpha,S}D_{1}=C_{S,m}\left(q,t\right)M_{\widehat{\alpha},S}$, where $\widehat{\alpha}=\left(m-1,0,\ldots\right)$. By Propositions 11 and 13 and Lemma 3, $M_{\alpha,S}$ and $M_{\widehat{\alpha},S}$ do not have poles at $\left(q,t\right)=\varpi$. Furthermore, $C_{S,m}\left(\omega u^{-n/g},u^{m/g}\right)=0$ (due to the factor $1-q^{m}t^{c\left(1,S\right)-z}$, Proposition 10), and thus, $M_{\alpha,S}D_{1}=0$ at $\left(q,t\right)=\varpi$. □

4.1. Isotype of Singular Polynomials

The following discussion is in terms of Macdonald polynomials; it is straightforward to deduce the analogous results for Jack polynomials. Suppose $\sigma$ is a partition of $N$. A basis $\left\{p_{S}:S\in\mathcal{Y}_{\sigma}\right\}$ of an $H_{N}\left(t\right)$-invariant subspace of $\mathcal{P}_{\tau}$ is called a basis of isotype $\sigma$ if each $p_{S}$ transforms under the action of $T_{i}$ as defined in Section 2, with $\sigma\left(T_{i}\right)$ replaced by $T_{i}$. For example, if $\operatorname{row}\left(i,S\right)=\operatorname{row}\left(i+1,S\right)$, then $p_{S}\left(xs_{i}\right)=p_{S}\left(x\right)$, equivalently $p_{S}T_{i}=tp_{S}$, while if $\operatorname{col}\left(i,S\right)=\operatorname{col}\left(i+1,S\right)$, then $p_{S}T_{i}=-p_{S}$. There is a strong relation to singular polynomials.
Proposition 14.
A polynomial $p\in\mathcal{P}_{\tau}$ is singular for a specific value $\left(q,t\right)=\psi$ if and only if $p\xi_{i}=p\phi_{i}$ for $1\le i\le N$, evaluated at $\psi$.
Proof. 
Recall the Jucys–Murphy elements $\phi_{i}$ from (1). By definition, $pD_{N}=0$ if and only if $p\xi_{N}=p=p\phi_{N}$. Proceeding by induction, suppose that $pD_{j}=0$ for $i<j\le N$ if and only if $p\xi_{j}=p\phi_{j}$ for $i<j\le N$. Suppose:
$$0=pD_{i}=\frac{1}{t}\,pT_{i}D_{i+1}T_{i}\Longrightarrow pT_{i}D_{i+1}=0\Longrightarrow pT_{i}\xi_{i+1}=pT_{i}\phi_{i+1}\Longrightarrow pT_{i}\xi_{i+1}T_{i}=pT_{i}\phi_{i+1}T_{i}\Longrightarrow t\,p\xi_{i}=t\,p\phi_{i}.$$
This completes the proof. □
With $M_{\alpha,S}$ and $n$ as in Theorem 2, the spectral vector is $\left(\widetilde{\zeta}_{\alpha,S}\left(i\right)\right)_{i=1}^{N}=\left(q^{m}t^{c\left(1,S\right)},t^{c\left(2,S\right)},\ldots,t^{c\left(N,S\right)}\right)$. Specialized to $\left(q,t\right)=\varpi$, the polynomial $M_{\alpha,S}$ is singular and $q^{m}t^{c\left(1,S\right)}=t^{-n+c\left(1,S\right)}$. Recall $n=c\left(1,S\right)-z$ for some $z\in Z\left(\widehat{\tau}\right)$, and $z$ determines a cell $\left(i_{s},\widehat{\tau}_{i_{s}}+1\right)$. In terms of Ferrers diagrams, let $\sigma=\widehat{\tau}\cup\left\{\left(i_{s},\widehat{\tau}_{i_{s}}+1\right)\right\}$, that is, $\sigma_{i_{s}}=\tau_{i_{s}}+1$. Let $S'$ denote the RSYT formed from the cells of $\tau$ containing the numbers $2,\ldots,N$ and the cell $\left(i_{s},\widehat{\tau}_{i_{s}}+1\right)$ containing 1. Then, $c\left(i,S'\right)=c\left(i,S\right)$ for $2\le i\le N$ and $c\left(1,S'\right)=c\left(1,S\right)-n$. Thus, the spectral vector of $M_{\alpha,S}$ evaluated at $\left(q,t\right)=\varpi$ is $\left(t^{c\left(i,S'\right)}\right)_{i=1}^{N}$. This implies that $M_{\alpha,S}$ is (a basis element) of isotype $\sigma$. The other elements of the basis corresponding to $\mathcal{Y}_{\sigma}$ are obtained from $M_{\alpha,S}$ by appropriate transformations using $T_{i}$.
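The bookkeeping behind $\sigma$ and $S'$ can be made concrete for a small shape. The sketch below is illustrative, not an example from the paper: it assumes that $Z\left(\widehat{\tau}\right)$ consists of the contents of the cells that can be attached to the diagram $\widehat{\tau}$ (Definition 2 may restrict this set further) and uses the content convention $c\left(\left(\mathrm{row},\mathrm{col}\right)\right)=\mathrm{col}-\mathrm{row}$; the shape $\tau=\left(3,1\right)$, with 1 in the corner cell $\left(1,3\right)$ so that $\widehat{\tau}=\left(2,1\right)$, is an arbitrary choice.

```python
def addable_contents(tau_hat):
    """Cells that can be attached to the Ferrers diagram tau_hat, with their
    contents c((row, col)) = col - row (rows and columns numbered from 1)."""
    rows = list(tau_hat) + [0]                 # allow starting a new bottom row
    out = []
    for i, r in enumerate(rows):
        if i == 0 or r < rows[i - 1]:          # cell (i+1, r+1) keeps rows decreasing
            out.append(((i + 1, r + 1), (r + 1) - (i + 1)))
    return out

# tau = (3, 1) with 1 in the corner (1, 3): c(1, S) = 3 - 1 = 2, tau_hat = (2, 1)
c1S = 2
cells = addable_contents((2, 1))               # [((1, 3), 2), ((2, 2), 0), ((3, 1), -2)]
# candidate values n = c(1, S) - z, keeping only n > 0 as in Theorem 2
ns = [c1S - z for _, z in cells if c1S - z > 0]
```

Here `ns` comes out as `[2, 4]`; whether these are genuine singular parameters for this shape depends on the restrictions in Definition 2, which this sketch does not encode.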

5. Concluding Remarks

We have shown the existence of singular vector-valued Jack and Macdonald polynomials for the easiest possible values of the label $\alpha$, that is, $\left(m,0,\ldots,0\right)$. The proofs required some differentiation formulas and combinatorial arguments involving Young tableaux. The singular values were found to have an elegant interpretation in terms of where another cell can be attached to an RSYT. It may occur that a larger set of parameter values, say $\gcd\left(m,n\right)>1$ or even $\frac{m}{n}\in\mathbb{Z}$, still leads to singular Jack polynomials, but our proof techniques do not seem to cover these cases. One hopes that a larger class of examples (more general labels in $\mathbb{N}_{0}^{N}$) will eventually be found, with a target of a complete listing such as is already known for the trivial representation $\tau=\left(N\right)$. It is suggestive that the isotype $\sigma$ of the singular polynomial $M_{\alpha,S}$ is obtained by a reasonably natural transformation of the partition $\tau$.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Griffeth, S. Orthogonal functions generalizing Jack polynomials. Trans. Am. Math. Soc. 2010, 362, 6131–6157.
  2. Dunkl, C.; Luque, J.-G. Vector valued Macdonald polynomials. Sém. Lothar. Comb. 2011, 66, B66b.
  3. Dunkl, C. The smallest singular values and vector-valued Jack polynomials. SIGMA 2018, 14, 115.
  4. Dunkl, C. A positive-definite inner product for vector-valued Macdonald polynomials. Sém. Lothar. Comb. 2019, 80, B80a.
  5. Dipper, R.; James, G. Representations of Hecke algebras of general linear groups. Proc. Lond. Math. Soc. 1986, 52, 2–52.
  6. Stanley, R.P. Enumerative Combinatorics; Cambridge Studies in Advanced Mathematics 62; Cambridge University Press: Cambridge, UK, 1999; Volume 2.
  7. Dunkl, C.; Luque, J.-G. Vector-valued Jack polynomials from scratch. SIGMA 2011, 7, 26.
  8. Baker, T.H.; Forrester, P.J. A q-analogue of the type A Dunkl operator and integral kernel. Int. Math. Res. Not. 1997, 14, 667–686.
