
# Seeking for the Maximum Symmetric Rank

Dipartimento di Matematica e Applicazioni “Renato Caccioppoli”, Università di Napoli Federico II, I-80126 Napoli, Italy
Mathematics 2018, 6(11), 247; https://doi.org/10.3390/math6110247
Received: 9 October 2018 / Revised: 2 November 2018 / Accepted: 3 November 2018 / Published: 12 November 2018

## Abstract

We present the state-of-the-art on maximum symmetric tensor rank, for each given dimension and order. After a general discussion on the interplay between symmetric tensors, polynomials and divided powers, we introduce the technical environment and the methods that have been set up in recent times to find new lower and upper bounds.

## 1. Introduction

Until recently, if a scientist or an engineer were asked what tensors are good for, in many cases they would probably have organized their answer around examples such as the Cauchy stress tensor or the tensors that arise in general relativity. However, these are examples of a more complex concept: that of a tensor field. To give a first explanation of the difference, let us point out that a scalar field is a function defined on a portion of space (on which one can perform the various operations of Calculus, e.g., take partial derivatives), whereas a scalar is just a real (or complex) number. To refine the explanation, one might discuss the difference between a vector field and a single vector. In the context of the present survey article, tensors are considered as mathematical objects per se. Thus, we disregard how tensors can vary in a portion of space, and look at them as single elements of a space of tensors, i.e., the set of all tensors of the same type. In recent decades, it has become increasingly clear that even this considerably simpler notion is of utmost applicative importance: a convincing account is provided by Landsberg’s book , especially in Section 1.3 and the whole Part 3.
In most applications, vectors are either physical quantities that can be represented by arrows, or arrays of numbers. Since spaces of these two kinds of objects share the same fundamental structure, twentieth-century mathematics established the unifying concept of a vector space. This abstract object turns out to be suitable for a much wider class of situations, including those in which vectors vary in real space. Vector spaces are a kind of algebraic structure, that is, they are formally defined by means of the operations that can be performed on their elements. The multiplication of numbers by vectors can be extended by allowing more general kinds of “numbers”, which can be defined by means of another algebraic structure, named a field (not to be confused with a field of varying objects in a portion of space). These are the building blocks of modern Linear Algebra, with which multilinear algebra, hence tensor theory, can be set up. The survey is written using this language. Although not unknown, such an abstract mathematical approach might be a bit demanding for some tensor practitioners (cf. , Subsection 0.3); therefore, it is likely they will prefer to accompany the reading with some of the many good books on abstract algebra.
Based on the work in , Part 3, the importance of tensor decompositions can be appreciated. Perhaps the most basic kind of decomposition leads to the notion of tensor rank. It can be said that, to date, tensor rank is not well understood, as no general algorithm for determining it is known. The same is true for the symmetric rank of a symmetric tensor. Moreover, many of the techniques that have led to some success in tensor theory apply both to rank and to symmetric rank, and the assertion that they coincide (for a symmetric tensor) is the content of Comon’s conjecture (see , Exercise 2.6.6.5 and references therein). This encourages us to believe that a better understanding of the symmetric rank may shed some light on tensor rank.
When the base field is of characteristic zero, e.g., when dealing with real or complex numbers (as in most applicative uses of tensors), symmetric tensors are naturally identified with homogeneous polynomials. Even in positive characteristic, symmetric tensors and polynomials are “quite close” since, as a matter of fact, symmetric tensors can always be naturally identified with the so-called divided powers. From the polynomial viewpoint, the symmetric rank becomes the Waring rank, that is, the minimum number of summands required to express a given homogeneous polynomial as a sum of powers of linear forms. It follows that, in the characteristic zero case, determining the maximum symmetric rank for symmetric tensors of given dimension n and order d is the same as determining the maximum Waring rank for degree d homogeneous polynomials in n variables. This is one of the most natural variants of the classical Waring problem on natural numbers. Now, if the symmetric rank were well understood, one could probably easily determine its maximum for symmetric tensors of given order and dimension. This leads us to hope that the techniques invented to find the maximum Waring rank for degree d forms in n variables might indicate some ways to understand tensor rank.
Following Geramita , let us recall that, while the classical Waring problem was solved by Hilbert, its main variant remains open, i.e., determining the minimum number $G(d)$ of summands that are needed to decompose every sufficiently large natural number into a sum of dth powers. From the polynomial viewpoint, a naturally analogous problem is to determine the maximum Waring rank of a generic degree d homogeneous polynomial in n variables, where “generic” is meant in the sense it commonly has in Algebraic Geometry. In this case, under the hypothesis that the base field is algebraically closed of characteristic zero, the answer is provided by the celebrated Alexander–Hirschowitz theorem, which deals with interpolation of sets of fat points, but which through Terracini’s lemma has a direct translation in terms of the Waring rank (see  for the original proof of the theorem and , Section 7 for a good historical account of it). Henceforth, we shall refer to the Waring rank of a form simply as its rank.
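To fix ideas with a toy example of ours (not taken from the survey): in characteristic zero, the binary quadric $xy$ has rank two, since $xy = \frac{1}{4}(x+y)^2 - \frac{1}{4}(x-y)^2$ and $xy$ is not the square of a single linear form. A quick sympy check of this decomposition:

```python
import sympy as sp

# Waring decomposition of the binary quadric x*y into two squares of
# linear forms (illustrative example, not taken from the survey).
x, y = sp.symbols('x y')
decomposition = sp.Rational(1, 4)*(x + y)**2 - sp.Rational(1, 4)*(x - y)**2
assert sp.expand(decomposition - x*y) == 0
```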
Interestingly, at the end of his review of the outstanding paper  in the MathSciNet database, Fedor Zak wrote:
> It would also be nice to know how many linear forms are required to express an arbitrary form of degree d as a sum of dth powers of linear forms.
In the mentioned Exposé , Tony Geramita called this problem the little Waring problem for polynomials (and the “generic version” solved by Alexander and Hirschowitz the big one). However, little as it may be, like a mouse the problem is also elusive, and indeed it has remained open to date.
After Geramita’s Exposé, Johannes Kleppe was able to prove that the maximum rank of ternary quartics is seven, in his master’s thesis  under the supervision of Kristian Ranestad. More precisely, that work contains a study of normal forms and ranks of ternary cubics and quartics, from which the maximum ranks can be obtained.
Some years later, at the opposite end of Kleppe’s result, which gives a sharp upper bound in two very specific cases, Corollary 1  provided us with a very mild upper bound in the general case. After about the same amount of time, that result was dramatically improved by Blekherman and Teitler (see , Corollary 9). From the references in , some information on earlier works on the subject can be obtained. For instance, the work in , published in 1969, deals with the problem for arbitrary fields, with particular care to detect in which cases there actually is a maximum for the Waring rank (as is always the case when the base field is algebraically closed of characteristic zero). We believe that any list of references one might try to work out for the problem of our interest will likely be far from complete, since the problem is of a natural and elementary nature and dates back to at least the first half of the past century. For instance, B. Segre, at the beginning of , Section 96, mentioned a question raised in .
The introduction of Buczyński and Teitler’s paper  is a good source of information on what is known on the maximum Waring rank, up to recent times. Since tensor theory, as well as its related algebro-geometric aspects, is currently a very active research field, a good deal of new results have been discovered since then, and they can be of use to determine that maximum. However, if we strictly focus on the problem, as far as we know, there are only two new facts that improve (, Table 1). The first one is , Proposition 3.3, which significantly improves Blekherman and Teitler’s upper bound when $n = 3$. The other is , Theorem 2.5, which extends to even degrees the Buczyński and Teitler lower bound established in , Theorem 1 (in the sense that it raises by one the lower bound given by the maximum rank of monomials, which for an even degree d is $(d^2 + 2d)/4$).
According to the aforementioned results, for ternary forms over algebraically closed fields of characteristic zero both the lower and the upper bounds are of order $d^2/4$, and the gap between them is $d - 1$. The main purpose of the present survey article is to convey the basic ideas that led to those bounds.

## 2. Symmetric Algebra and Apolarity

In this section, we give an overview of some fundamental facts that are worthy of being recalled in the present context. The technical treatments of basic tensor theory and apolarity that can be encountered in the literature may vary considerably. Instead of fixing one of them, we only review the main statements: whatever reference the reader is willing to adopt, the basic definitions will likely be compatible with (or at least adaptable to) our assumptions. The exposition is organized so that, by adding sufficient details, one can get a coherent theoretical development.

#### 2.1. The Tensor Algebra of a Vector Space

We take the view that the tensor algebra $T(V)$ can be any fixed $K$-algebra that contains V as a $K$-vector subspace and such that for each $K$-vector space homomorphism of V into a (commutative or not) $K$-algebra A there exists a unique $K$-algebra homomorphism $T(V) \to A$ that extends it. The elements of $T(V)$ are called tensors, the multiplication in $T(V)$ is denoted by ⊗, and $t_0 \otimes t_1$ is the tensor product of $t_0$ and $t_1$. However, $V \otimes V$ does not denote the set of all $v_0 \otimes v_1$ with $v_0, v_1 \in V$, but the vector subspace of $T(V)$ generated by that set. The tensor power $V \otimes \cdots \otimes V =: V^{\otimes d}$ can be similarly defined; its elements are said to have order d. The multiplication $\mu_d : V \times \cdots \times V \to V^{\otimes d}$, $(v_1, \ldots, v_d) \mapsto v_1 \otimes \cdots \otimes v_d$, is a universal d-linear map, that is, every d-linear map $m : V \times \cdots \times V \to W$ factors as $\varphi \circ \mu_d$ for a uniquely determined vector space homomorphism $\varphi : V^{\otimes d} \to W$.
This approach may be considered rather abstract, but it might be said that $T(V)$ is no more abstract than $C$: similar to how one extends $R$ by adding a square root of $-1$, together with all objects that are consequently needed to keep the operations and their properties, here one extends V and $K$ by adding tensor products of vectors, together with all objects that are consequently needed to keep the operations of V and to build a multiplication with the usual properties (apart from commutativity). The above characterization of $T(V)$ (up to algebra isomorphisms) makes this intuitive idea precise, and indeed a similar characterization holds for $C$: for every $R$-algebra A that contains a square root x of $-1$ (for instance, A could be the algebra of endomorphisms of vectors in the plane and x a rotation by $90^\circ$), there exists a unique $R$-algebra homomorphism $C \to A$ that sends the imaginary unit to x.
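In coordinates, the universal property above can be sketched numerically: a bilinear form $m(u,v) = u^{\mathsf T} A v$ factors through the tensor product as a linear functional on $V \otimes V$. The following numpy fragment (our own finite-dimensional illustration, with the Kronecker product supplying coordinates of $u \otimes v$) checks this factorization:

```python
import numpy as np

# A bilinear map m(u, v) = u^T A v factors as phi(u ⊗ v), where phi is the
# linear functional with coordinate vector vec(A) (row-major flattening).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
u, v = rng.standard_normal(3), rng.standard_normal(3)

m_uv = u @ A @ v            # the bilinear map applied to (u, v)
phi = A.flatten()           # the induced linear map on V ⊗ V
tensor = np.kron(u, v)      # coordinates of u ⊗ v: entries u_i v_j
assert np.isclose(m_uv, phi @ tensor)
```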

#### 2.2. The Symmetric Algebra of a Vector Space

The definition of the symmetric algebra $S(V)$ can be given in the same way, except that now the target $K$-algebra A in the characteristic property is required to be commutative. A very natural $K$-algebra homomorphism $T(V) \to S(V)$ arises: the one which extends the identity map of V. Elements of $S(V)$ are not exactly symmetric tensors (though in most cases the two things can be identified), but rather polynomials. Indeed, if $(x_i)_{i \in I}$ denotes a (possibly infinite) basis of the vector space V, any element of $S(V)$ can be written as a polynomial in the $x_i$s, and two elements are equal if and only if they carry the same coefficient for each monomial.

#### 2.3. Symmetric Tensors

To define symmetric tensors, first notice that for each permutation $\sigma$ of $\{1, \ldots, n\}$, by the mentioned universal property of tensor powers we have an automorphism $\sigma_V$ of $V^{\otimes n}$ such that $v_1 \otimes \cdots \otimes v_n \mapsto v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(n)}$. A tensor in $V^{\otimes n}$ is said to be symmetric if it is invariant under $\sigma_V$ for every permutation $\sigma$ of $\{1, \ldots, n\}$. For instance, $v \otimes w + w \otimes v$ and $v \otimes v \otimes w + v \otimes w \otimes v + w \otimes v \otimes v$ are symmetric tensors for whatever choice of $v, w \in V$. A sum of symmetric tensors of different orders can be called symmetric as well (although one very seldom encounters such sums).
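In coordinates, symmetry amounts to invariance of the coefficient array under all permutations of its axes. A small numpy check (our own illustration) of the two examples above:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
v, w = rng.standard_normal(4), rng.standard_normal(4)

# v ⊗ w + w ⊗ v: an order-2 symmetric tensor is a symmetric matrix.
t = np.tensordot(v, w, axes=0) + np.tensordot(w, v, axes=0)
assert np.allclose(t, t.T)

# v⊗v⊗w + v⊗w⊗v + w⊗v⊗v: summing over the 6 positional arrangements of
# [v, v, w] counts each distinct arrangement twice, so we divide by 2.
s = sum(np.einsum('i,j,k->ijk', *p) for p in permutations([v, v, w])) / 2
for sigma in permutations(range(3)):
    assert np.allclose(s, np.transpose(s, sigma))
```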

#### 2.4. The Symmetric Product

The set $\bar{S}(V)$ of symmetric tensors is a vector subspace, but not a subalgebra, of $T(V)$ (apart from the trivial cases when $\dim V \leq 1$). However, a meaningful multiplication in $\bar{S}(V)$ can be introduced. The symmetric product $s_0 s_1$ of two symmetric tensors of given orders $d_0, d_1$ can be defined as the sum of $\sigma_V(s_0 \otimes s_1)$ with $\sigma$ varying over the $(d_0, d_1)$-shuffles, that is, the permutations of $\{1, \ldots, d_0 + d_1\}$ such that $\sigma(1) < \cdots < \sigma(d_0)$ and $\sigma(d_0 + 1) < \cdots < \sigma(d_0 + d_1)$. For instance,
$u(v \otimes w + w \otimes v) = u \otimes v \otimes w + u \otimes w \otimes v + v \otimes u \otimes w + w \otimes u \otimes v + v \otimes w \otimes u + w \otimes v \otimes u = uvw$ (1)
for whatever choice of $u , v , w ∈ V$.
There is a unique way to extend this operation to the whole of $\bar{S}(V)$ that preserves the distributive law. Note that with this definition we have $v^d = d!\, v^{\otimes d} = d!\, v \otimes \cdots \otimes v$ (d factors).
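A direct implementation of the shuffle definition (our own numpy sketch; `shuffle_product` is a hypothetical helper name) lets one check both the identity $v^2 = 2!\, v^{\otimes 2}$ and Equation (1):

```python
import numpy as np
from itertools import combinations

def shuffle_product(s0, s1):
    """Symmetric product of symmetric tensors s0, s1 (numpy arrays):
    the sum of sigma_V(s0 ⊗ s1) over all (d0, d1)-shuffles sigma."""
    d0, d1 = s0.ndim, s1.ndim
    t = np.tensordot(s0, s1, axes=0)
    total = np.zeros_like(t)
    for pos in combinations(range(d0 + d1), d0):
        rest = [i for i in range(d0 + d1) if i not in pos]
        # send the first d0 slots to the positions in pos (increasingly),
        # and the remaining slots to the rest, also in increasing order
        total += np.moveaxis(t, list(range(d0 + d1)), list(pos) + rest)
    return total

rng = np.random.default_rng(2)
u, v, w = (rng.standard_normal(3) for _ in range(3))

# v^2 = 2! v⊗v:
assert np.allclose(shuffle_product(v, v), 2 * np.tensordot(v, v, axes=0))

# Equation (1): u(v⊗w + w⊗v) equals the full symmetrization uvw.
lhs = shuffle_product(u, shuffle_product(v, w))
uvw = shuffle_product(shuffle_product(u, v), w)
assert np.allclose(lhs, uvw)
```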

#### 2.5. A Variant of the Symmetric Product in Characteristic Zero

We mention that there is a much more popular variant of the symmetric product, which is in use when $K$ is $C$, or more generally of characteristic zero (see, e.g., , 2.6.3). It can be presented as follows. Every $K$-vector space automorphism $\alpha$ of any $K$-algebra A induces another multiplication on A: the one that makes $\alpha$ a $K$-algebra automorphism, when it replaces the original multiplication on the target (but not on the domain). In particular, if a sequence $(a_d)_{d \geq 0}$ of nonzero scalars is given, multiplication by $a_d$ on order d symmetric tensors gives a $K$-vector space automorphism of $\bar{S}(V)$, hence a multiplication in it. The mentioned variant of the symmetric product in $\bar{S}(V)$ is given by the sequence $a_d := 1/d!$. For instance, this definition gives
$u(v \otimes w + w \otimes v) = \frac{1}{3}\left( u \otimes v \otimes w + u \otimes w \otimes v + v \otimes u \otimes w + w \otimes u \otimes v + v \otimes w \otimes u + w \otimes v \otimes u \right) = u(2vw)$
(compare with Equation (1)); note also that now $v^d = v^{\otimes d}$.
The reason for the popularity of the “modified” symmetric product is explained by the relationship between symmetric tensors and polynomials: it makes the restriction $\bar{S}(V) \to S(V)$ of the natural map $T(V) \to S(V)$ a $K$-algebra isomorphism. In positive characteristic, a multiplication with such a desirable property cannot be hoped for, simply because in this case the restriction $\bar{S}(V) \to S(V)$ is not bijective (unless $\dim V \leq 1$). It is also worth remarking that in characteristic zero, $\bar{S}(V)$ with the previous symmetric product is naturally isomorphic to $S(V)$ as well, but in this case through the $K$-algebra homomorphism $S(V) \to \bar{S}(V)$ that arises simply because $\bar{S}(V)$ is a commutative $K$-algebra containing V.
Although the subject of the present work is the maximum symmetric rank in the characteristic zero case, we prefer to assume that $S ¯ ( V )$ is equipped with the general symmetric product (which is defined regardless of the characteristic of $K$), not with the modified one.

#### 2.6. Tensors on the Dual and Multilinear Forms

Let us consider the dual space $V^*$ and denote by $V^{*d}$ the space of d-linear forms. A d-linear form $\mu : V^d \to K$ and a $d'$-linear form $\mu' : V^{d'} \to K$ give rise to the $(d + d')$-linear form
$(v_1, \ldots, v_{d+d'}) \mapsto \mu(v_1, \ldots, v_d)\, \mu'(v_{d+1}, \ldots, v_{d+d'}).$
This allows us to define a $K$-algebra structure on
$M(V) := \bigoplus_d V^{*d}.$
Since $V^*$ embeds (as a summand) in $M(V)$, one obtains a natural $K$-algebra homomorphism $T(V^*) \to M(V)$, which is always injective and turns out to be an isomorphism if and only if $\dim V < \infty$. On the other hand, to give a d-linear form on V is the same as to give a linear form on $V^{\otimes d}$, and $T(V)$, as a matter of fact, is a graded algebra whose degree d component is $V^{\otimes d}$ (regardless of the finiteness assumption). Therefore, $M(V)$ is isomorphic as a vector space to the graded dual of $T(V)$. In conclusion, when $\dim V < \infty$, the graded dual of $T(V)$ is isomorphic, as a graded vector space, to $T(V^*)$.
Let us also point out that, given $l_1, \ldots, l_d \in V^*$, the image of the tensor product $l_1 \otimes \cdots \otimes l_d$ in $M(V)$ takes the value
$l_1(v_1) \cdots l_d(v_d)$ (2)
on $(v_1, \ldots, v_d)$, and this is also the value taken on $v_1 \otimes \cdots \otimes v_d$ by the image in the graded dual of $T(V)$.
For the symmetric algebra, we have the following facts. To begin with, $S(V)$ is a graded algebra as well, the natural homomorphism $T(V) \to S(V)$ is a graded one, and as a matter of fact it is surjective on each graded component. Hence, the graded dual of $S(V)$ embeds into the graded dual of $T(V)$, which is isomorphic to $M(V)$. However, similarly to what happens for $V^{\otimes d}$, to give a linear form on the symmetric power $S^d(V)$ (the subspace of degree d elements of $S(V)$) is the same as to give a symmetric d-linear form on V. This easily implies that the image of the dual of $S^d(V)$ through the embedding into $M(V)$ is precisely the subspace of symmetric d-linear forms. Next, it is not difficult to show that if an order d tensor in $T(V^*)$ is symmetric, then its image through the embedding $T(V^*) \hookrightarrow M(V)$ is a symmetric d-linear form. In the finite-dimensional case, the converse is also true: a d-linear form is symmetric if and only if it corresponds to a symmetric tensor. In conclusion, when $\dim V < \infty$, the graded dual of $S(V)$ is isomorphic, as a graded $K$-vector space, to $\bar{S}(V^*)$.
According to the definition of the symmetric product, given $l_1, \ldots, l_d \in V^*$, the image of $l_1 \cdots l_d$ in $M(V)$ takes the value
$\sum_\sigma l_1(v_{\sigma(1)}) \cdots l_d(v_{\sigma(d)})$ (3)
on $(v_1, \ldots, v_d)$, that is, the permanent of the matrix $(l_i(v_j))_{i,j}$. This is also the value taken on $v_1 \cdots v_d$ by the image of $l_1 \cdots l_d$ in the graded dual of $S(V)$.
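The permanent formula can be verified in coordinates for $d = 2$ (our own sketch; `permanent` is a naive helper of ours):

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Naive permanent: like the determinant but with all signs +1."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)])
               for s in permutations(range(n)))

rng = np.random.default_rng(3)
l1, l2 = rng.standard_normal(3), rng.standard_normal(3)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

# l1 l2 = l1⊗l2 + l2⊗l1 evaluated on (v1, v2), with l(v) the dual pairing:
value = (l1 @ v1)*(l2 @ v2) + (l2 @ v1)*(l1 @ v2)
M = np.array([[l1 @ v1, l1 @ v2],
              [l2 @ v1, l2 @ v2]])
assert np.isclose(value, permanent(M))
```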

#### 2.7. Evaluation

The algebra $K^V$ of all functions $V \to K$ is commutative and contains $V^*$. Hence, there is a canonical $K$-algebra homomorphism $S(V^*) \to K^V$. Note also that the value of the image of $f \in S(V^*)$ at v is the image of f through the evaluation homomorphism $S(V^*) \to K$ that extends the evaluation-at-v map $V^* \to K$ ($l \mapsto l(v)$). If $f \in S(V^*)$ is considered as a polynomial, its image in $K^V$ is the corresponding polynomial function.
For a form $f \in S^d(V^*)$, the corresponding symmetric d-linear form can be considered as a polarization of the corresponding polynomial function (though it is customary to scale that d-linear form by a factor $1/d!$).
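For $d = 2$ the polarization is the familiar passage from a quadratic form to its symmetric bilinear form; a numerical sketch of ours (over the reals, where dividing by $2$ is harmless):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
A = (A + A.T) / 2                     # symmetric matrix of a bilinear form b

def q(v):
    """The quadratic form (polynomial function) associated with b."""
    return v @ A @ v

u, v = rng.standard_normal(3), rng.standard_normal(3)
# polarization recovers b from its restriction to the diagonal:
b_uv = (q(u + v) - q(u) - q(v)) / 2
assert np.isclose(b_uv, u @ A @ v)
```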

#### 2.8. Symmetric Tensors on the Dual and Divided Powers

The symmetric product may be preferred over its characteristic zero variant not only because it is defined regardless of the characteristic of $K$, but also because it makes $\bar{S}(V)$ an algebra of divided powers, in a quite simple way: it suffices to set $v^{[d]} := v^{\otimes d}$. A perhaps more familiar way to introduce divided powers is in the context of duality, as done, e.g., in , Appendix A. To link the earlier discussion to Iarrobino and Kanev’s approach:
Since an isomorphic image of a tensor algebra is a tensor algebra as well, we can assume that $T(V^*)$ is chosen so that $\bar{S}(V^*)$ actually is the graded dual of $S(V)$. Similarly, if $R = K[x_1, \ldots, x_r] = \bigoplus_{d \geq 0} R_d$ is a (graded) ring of polynomials, we can assume $S(R_1) = R$. Let us also assume $V := R_1$ (under a suitable definition of polynomial rings, this assumption imposes no restriction on V). Setting $D := \bar{S}(V^*)$ and denoting by $D_d$ the subspace of order d symmetric tensors, we end up exactly in the situation at the beginning of , Appendix A. Let us spend a few lines checking that, setting $l^{[d]} := l^{\otimes d}$, the subsequent definitions of , Appendix A, are automatically fulfilled.
Let $X_1, \ldots, X_r$ be the basis of $D_1$ dual to $x_1, \ldots, x_r$ (that is, $X_i(x_j)$ is 0 when $i \neq j$ and 1 when $i = j$). Note that $X_i X_j$ takes the value 1 on $x_i x_j$ when $i \neq j$ (it is indeed given by Equation (3), and the terms involving $X_i(x_j) = X_j(x_i) = 0$ vanish), but the value is 2 when $i = j$, that is, $X_i^2(x_i^2) = 2$. Instead, we have $X_i^{[2]}(x_i^2) = X_i^{\otimes 2}(x_i^2) = 1$, as follows from the evaluation rule Equation (2) (or also from $X_i^2 = 2!\, X_i^{\otimes 2}$). More generally, $X_1^{[d_1]} \cdots X_r^{[d_r]}$ takes the value 0 on $x_1^{d_1'} \cdots x_r^{d_r'}$ when $(d_1, \ldots, d_r) \neq (d_1', \ldots, d_r')$ and the value 1 when $(d_1, \ldots, d_r) = (d_1', \ldots, d_r')$. This agrees with , Definition A.1. Since the number of $(d, d')$-shuffles is $(d + d')!/(d!\, d'!)$, we have
$X_i^{[d]}\, X_i^{[d']} = \frac{(d + d')!}{d!\, d'!}\, X_i^{[d + d']},$
hence
$X_1^{[d_1]} \cdots X_r^{[d_r]}\; X_1^{[d_1']} \cdots X_r^{[d_r']} = \frac{(d_1 + d_1')!}{d_1!\, d_1'!} \cdots \frac{(d_r + d_r')!}{d_r!\, d_r'!}\; X_1^{[d_1 + d_1']} \cdots X_r^{[d_r + d_r']},$
in agreement with , (A.0.5). Similar calculations lead to
$(a_1 X_1 + \cdots + a_r X_r)^{[d]} = \sum_{d_1 + \cdots + d_r = d} a_1^{d_1} \cdots a_r^{d_r}\, X_1^{[d_1]} \cdots X_r^{[d_r]},$
in agreement with , Definition A.8.
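The shuffle count used above can be checked by brute force (a small script of ours enumerating permutations):

```python
from itertools import permutations
from math import factorial

def n_shuffles(d, dp):
    """Count permutations of {0, ..., d+dp-1} that are increasing on the
    first d positions and on the last dp positions."""
    count = 0
    for sigma in permutations(range(d + dp)):
        if all(sigma[i] < sigma[i + 1] for i in range(d - 1)) and \
           all(sigma[i] < sigma[i + 1] for i in range(d, d + dp - 1)):
            count += 1
    return count

# the number of (d, d')-shuffles is (d + d')!/(d! d'!):
for d in range(5):
    for dp in range(5):
        assert n_shuffles(d, dp) == factorial(d + dp) // (factorial(d) * factorial(dp))
```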

#### 2.9. The Contraction Map

Given $p \in R_d$, $f \in D_{d+d'} = \mathrm{Hom}(R_{d+d'}, K)$, and denoting by $\mu_p : R_{d'} \to R_{d+d'}$ the vector space homomorphism given by multiplication by p, the composition $f \circ \mu_p$ belongs to $D_{d'}$ and is called the contraction of f by p. The bilinear operations $R_d \times D_{d+d'} \to D_{d'}$ (assuming $D_{d'} = \{0\}$ when $d' < 0$) extend to a unique bilinear operation $R \times D \to D$, which can be called the contraction map (in agreement with , Definition A.2). If $l_1, \ldots, l_d \in R_1$ and $\varphi \in M(V)$ is the $(d + d')$-linear form corresponding to f, the contraction of f by $l_1 \cdots l_d$ corresponds to the $d'$-linear form obtained by fixing d arguments of $\varphi$ equal to $l_1, \ldots, l_d$: $(v_1, \ldots, v_{d'}) \mapsto \varphi(l_1, \ldots, l_d, v_1, \ldots, v_{d'})$. This property may also be used to give an alternative definition of the contraction, because of the characteristic property of symmetric powers (the d-linear assignment on all $(l_1, \ldots, l_d) \in R_1^d$ determines a homomorphism $R_d = S^d(R_1) \to V^{*d'}$, whose image lies in a subspace canonically isomorphic to $D_{d'}$ and which depends on $f \in D_{d+d'}$ in a linear way). For this reason, the contraction can also be called insertion and denoted by ┙ (this attitude is perhaps more common in the context of alternating forms).

#### 2.10. Contraction and Derivatives

The ordinary directional derivative of differentiable (real valued) functions fulfills the Leibniz rule $\partial_v(fg) = (\partial_v f)g + f(\partial_v g)$ and on linear forms is nothing but the evaluation at v. These two properties can be used to characterize the derivative of polynomials in $S(V^*)$ along $v \in V$. Indeed, for each d, we can define $\partial_v$ on $S^d(V^*)$ as the unique operator into $S^{d-1}(V^*)$ such that
$\partial_v(l_1 \cdots l_d) = \sum_{i=1}^d l_i(v)\, l_1 \cdots \widehat{l_i} \cdots l_d \in S^{d-1}(V^*)$
for all $l_1, \ldots, l_d \in V^*$ (with the hat denoting omission). Then, $\partial_v$ extends to the whole of $S(V^*)$ by additivity. Using again the fact that linear operators on $S^d(V^*)$ are characterized by their values on the products $l_1 \cdots l_d$, one can easily check that the Leibniz rule holds in $S(V^*)$ and that $\partial_v$ is the unique extension of the evaluation at v in $V^*$ with this property. A partial derivative is simply a directional derivative along a basis vector $x_i$.
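Both characterizing properties can be checked with sympy in two variables (our own sketch; the direction $v$ and the linear forms are arbitrary choices):

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')   # a basis of V*, viewed as coordinates
v = (2, 3)                     # a direction v ∈ V, in the dual basis

def d_v(f):
    """Directional derivative along v in coordinates."""
    return v[0]*sp.diff(f, X1) + v[1]*sp.diff(f, X2)

def ev(l):
    """Evaluation of a linear form l at v."""
    return l.subs({X1: v[0], X2: v[1]})

l1, l2, l3 = X1 + X2, 2*X1 - X2, X1

# the displayed formula for a product of linear forms:
lhs = sp.expand(d_v(l1*l2*l3))
rhs = sp.expand(ev(l1)*l2*l3 + ev(l2)*l1*l3 + ev(l3)*l1*l2)
assert lhs == rhs

# the Leibniz rule:
f, g = X1**2 + X2, X1*X2
assert sp.expand(d_v(f*g) - (d_v(f)*g + f*d_v(g))) == 0
```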
Let $l_1, \ldots, l_d \in V^*$, $v_1, \ldots, v_d \in V$, and let f be the image in $D_d$ of $l_1 \cdots l_d \in S^d(V^*)$ (that is, the product $l_1 \cdots l_d$ in the ring $D$). From the evaluation rule Equation (3) it follows that the contraction of f by $v_1$ takes the same value on $v_2 \cdots v_d$ as
$\sum_{i=1}^d l_i(v_1)\, l_1 \cdots \widehat{l_i} \cdots l_d \in D_{d-1}$
(this is basically a Laplace-like expansion of the permanent along the first row), which is the image of $\partial_{v_1}(l_1 \cdots l_d) \in S^{d-1}(V^*)$. Using additivity and again the fact that operators on symmetric powers are determined by their values on products of vectors, we conclude that, for every $v \in V$, the directional derivative $\partial_v$ and the contraction by v are compatible via the canonical homomorphism $S(V^*) \to \bar{S}(V^*) = D$.
A constant coefficient linear partial differential operator on $S(V^*)$ is a linear combination of compositions of directional (or partial) derivatives. The set $\mathcal{D}$ of such operators (not to be confused with the ring D of divided powers) is a commutative $K$-algebra with multiplication given by composition. Hence, $v \mapsto \partial_v$ extends in a unique way to a $K$-algebra homomorphism $S(V) = R \to \mathcal{D}$. The image of each $p \in R$ in $\mathcal{D}$ can be denoted by $\partial_p$.

#### 2.11. Apolarity

When $K$ is of characteristic zero, both canonical homomorphisms $S(V^*) \to D$ and $R \to \mathcal{D}$ are isomorphisms. In this case, we can assume that $S(V^*) = D$ (and, under a suitable definition of polynomials, $R = \mathcal{D}$ could be assumed as well). This way the contraction map becomes a bilinear map $S(V) \times S(V^*) \to S(V^*)$ such that each p acts as the constant coefficient linear differential operator $\partial_p$ (e.g., the contraction of $X_1 X_2^2$ by $3x_1x_2 + x_2^2$ is $3\,\partial_{x_1}\partial_{x_2}(X_1X_2^2) + \partial_{x_2}\partial_{x_2}(X_1X_2^2) = 2X_1 + 6X_2$).
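The parenthetical example can be replayed verbatim in sympy, letting each lower variable act as differentiation with respect to the corresponding upper one:

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
f = X1 * X2**2

# contraction of X1*X2^2 by 3*x1*x2 + x2^2, with x_i acting as d/dX_i:
result = 3*sp.diff(f, X1, X2) + sp.diff(f, X2, X2)
assert sp.expand(result - (2*X1 + 6*X2)) == 0
```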
From the coordinate-free definitions it is quite easy to recognize that the contraction map is invariant with respect to the canonical actions of $GL(V)$ on V and on $V^*$ (cf. also , Proposition A3(i)). As Ehrenborg and Rota report in , Introduction, for each fixed degree there is a unique invariant bilinear form $S^d(V) \times S^d(V^*) \to K$, which has been much used since the nineteenth century by classical invariant theorists. They also say that this form can be called the apolar bilinear form, that the subject of apolarity is related to the symbolic method in classical invariant theory, and that an efficient treatment can be given in the framework of Hopf algebras.
Apolarity for univariate polynomials of the same degree can also be disguised as an explicit formula with alternating signs. To understand why, it may be useful to have a quick look at what happens for the homogeneous version of such polynomials, that is, for binary forms. From a geometric, projective viewpoint, (nonzero) vectors can be viewed as points and linear forms as hyperplanes. For binary forms, that is, when $\dim V = 2$ and the projective picture is a line, hyperplanes are (singletons of) points. From the algebraic viewpoint, this amounts to the existence of an isomorphism $V \to V^*$, unique up to scalar factors (or, equivalently, to the existence of a unique, up to scalar factors, nondegenerate alternating form on V). In coordinates, $a_0 x_0 + a_1 x_1$ corresponds to $a_1 X_0 - a_0 X_1$ (or to a scalar multiple of it). The extension $S(V) \xrightarrow{\sim} S(V^*)$ of this isomorphism allows us to equivalently describe apolarity as a bilinear form on $S(V)$ alone. Restricting attention to (homogeneous) polynomials of the same degree, we get a bilinear form on $S^d(V)$, which is alternating for d odd and symmetric for d even. In coordinates:
$\left( \sum_i a_i x_0^{d-i} x_1^i,\; \sum_i a_i' x_0^{d-i} x_1^i \right) \mapsto \sum_i (-1)^i (d-i)!\, i!\, a_{d-i} a_i'.$
In terms of univariate polynomials of an assigned degree d:
$\left( \sum_i a_i x^i,\; \sum_i a_i' x^i \right) \mapsto \sum_i (-1)^i (d-i)!\, i!\, a_{d-i} a_i'.$
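A quick check of the displayed pairing in coefficients (our own script; `apolar_pair` is a hypothetical helper name): it vanishes on equal arguments for odd $d$ (alternating) and is symmetric for even $d$.

```python
import random
from math import factorial

def apolar_pair(a, b):
    """The displayed pairing on coefficient lists of length d + 1."""
    d = len(a) - 1
    return sum((-1)**i * factorial(d - i) * factorial(i) * a[d - i] * b[i]
               for i in range(d + 1))

random.seed(0)
for d in (1, 3, 5):                     # alternating for odd d
    a = [random.randint(-5, 5) for _ in range(d + 1)]
    assert apolar_pair(a, a) == 0
for d in (2, 4):                        # symmetric for even d
    a = [random.randint(-5, 5) for _ in range(d + 1)]
    b = [random.randint(-5, 5) for _ in range(d + 1)]
    assert apolar_pair(a, b) == apolar_pair(b, a)
```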
Some classical results deal with this kind of apolarity, with $K = C$. For instance, Grace’s theorem is sometimes named Grace’s apolarity theorem for this reason. Nowadays, there is a basic result, which is largely referred to as the apolarity lemma. It plays a fundamental role in proving the bounds on Waring ranks we aim to present in this article. We end this section by setting up the technical environment of our presentation. In the next section, we state a version of the apolarity lemma.

#### 2.12. Standing Assumptions

From the usual geometric viewpoint, forms are regarded as hypersurfaces. To begin with, for every element of $S(V^*)$, the vanishing locus of the corresponding polynomial function is an affine hypersurface in V. However, a projective viewpoint is perhaps more fruitful. We define the projective space $P(V)$ as the set of one-dimensional subspaces of V, i.e., of spans $\langle v \rangle$ with $v \neq 0$, and the hypersurface corresponding to a form $f \in S^d(V^*)$ as the zero locus in $P(V)$ of the corresponding degree d homogeneous function on V. Sometimes geometric features of the hypersurface defined by a form f, such as the singular locus, can give information on the rank of f (cf. , Theorem 9.2.1.4).
However, rank determination often uses the dual viewpoint, from which forms are considered as points in a space where powers of linear forms constitute a Veronese variety (see , 4.3.7). Sometimes, a simultaneous look at both viewpoints has been fruitful, and perhaps a systematic investigation with this double viewpoint would be worth pursuing. However, as that is not our goal here, we prefer to facilitate the interchange between V and $V^*$ by working with an arbitrary pair of (finite dimensional) vector spaces, with a given perfect pairing between them. To denote such spaces, we follow a kind of abstract index notation, using upper indices for one of the two spaces and lower indices for the other. In a purely algebraic context, especially one in which powers play an important role, this might be considered bad practice. However, if one takes care not to assign an independent meaning to x, the use of $x^i$ causes no ambiguities. An advantage of this choice is to have a notation that can be more promptly translated (and provide insight) in physics contexts where tensors are widely used.
As we have anticipated, we are interested in the case when $K$ is algebraically closed of characteristic zero (e.g., $K = C$). Hence, apolarity will be assumed on a (dual) pair of symmetric algebras, which we denote by $S^\bullet$ and $S_\bullet$. From our preferred geometric viewpoint, elements of $S^\bullet$ are considered as polynomial functions on the degree 1 component $S_1$ of $S_\bullet$ (in accordance with the abstract index notation, where forms take upper indices and vectors lower indices), but elements of $S^\bullet$ also act as constant coefficient linear differential operators on $S_\bullet$. Note that, to fit the first description into the previously outlined technical treatment, one needs to identify $S_\bullet$ with $S(V)$ and $S^\bullet$ with $S(V^*)$, whereas, to fit the other, one needs the converse. Polynomials whose rank is to be studied will live in $S_d$.
Following the above discussion, we now set up more formally our ground technical framework for the subsequent sections.
- We assume that $K$ is algebraically closed and of characteristic zero.
- The span of a subset $X \subseteq V$ is denoted by $\langle X \rangle$ (or also $\langle v_1, \ldots, v_n \rangle$ when $X = \{v_1, \ldots, v_n\}$).
- We define the projective space $P(V)$ as the set $\{\langle v \rangle : v \in V \setminus \{0\}\}$ of one-dimensional subspaces of V.
- We fix two symmetric algebras $S^\bullet$ and $S_\bullet$, whose degree d components are denoted by $S^d$ and $S_d$.
- We assume $\dim S^1 < \infty$, $\dim S_1 < \infty$, $S^\bullet = S(S^1)$, $S_\bullet = S(S_1)$.
- We assume that a perfect pairing $S^1 \times S_1 \to K$ is given.
- By the value $f(v)$ of $f \in S^\bullet$ at $v \in S_1$, we mean the value at v of the image of f through $S^\bullet \xrightarrow{\sim} S(S_1^*) \to K^{S_1}$, where the first isomorphism is induced by the given perfect pairing.
- The ideal $I(X) \subseteq S^\bullet$ of a subset $X \subseteq P(S_1)$ is the homogeneous ideal with degree d components given, for each d, by all $f \in S^d$ such that $f(v) = 0$ for all $\langle v \rangle \in X$.
- The apolar bilinear map $S^\bullet \times S_\bullet \to S_\bullet$ is the map induced by the earlier defined bilinear map $S(V) \times S(V^*) \to S(V^*)$, when $V = S^1$, through the isomorphism $S((S^1)^*) \xrightarrow{\sim} S_\bullet$ induced by the given perfect pairing.
- $\partial_f x$ denotes the value of the apolar map at $(f, x) \in S^\bullet \times S_\bullet$.
- f and x are said to be apolar to each other when $\partial_f x = 0$, and the ideal $\operatorname{Ann} x \subseteq S^\bullet$ of all f apolar to x is called the apolar ideal of x.
- If $W \leq S_d$ is a subspace, $W^\perp$ denotes the set of all $f \in S^d$ that are apolar to all elements of W. Similarly, for a subspace $W \leq S^d$, $W^\perp$ denotes the set of all $x \in S_d$ that are apolar to all elements of W.
From the discussion above, it follows that apolarity induces an isomorphism $S^d \xrightarrow{\ \sim\ } S_d^*$ for each $d$, and therefore gives a perfect pairing in each degree. Hence, $W^\perp$ denotes nothing but the orthogonal complement with respect to such a perfect pairing. The notation $\operatorname{Ann} x$ for the apolar ideal complies with the notion of the annihilator of an element of a module, because $S_\bullet$ is structured as an $S^\bullet$-module by apolarity. We prefer not to use the quite common notation $x^\perp$ for the apolar ideal (we speak about orthogonality only in a fixed degree).
When one needs schemes (for which we assume the definitions in ), it turns out that a point of $\operatorname{Proj} S(V^*)$ rational over $K$ is a maximal non-irrelevant homogeneous ideal in $S(V^*)$ such that its intersection with $S^1(V^*) = V^*$ is a hyperplane. However, every hyperplane in $V^*$ is the hyperplane of forms that vanish on some point in $\mathbb{P}(V)$. This gives the canonical identification of $\mathbb{P}(V)$ with the set of $K$-points of $\operatorname{Proj} S(V^*)$.

## 3. Basic Results

#### 3.1. Apolarity Lemma

Let us preliminarily point out that
$$f(v) = \frac{1}{d!}\,\partial_f v^d, \qquad \forall f \in S^d,\ v \in S_1,\ d > 0. \tag{4}$$
Remark 1.
From Equation (4), it follows that, given $f \in S^d$ and $v \in S_1$ such that $f(v) = 0$, for every $g \in S^{d'}$, we have
$$\partial_g \partial_f v^{d+d'} = \partial_{gf}\, v^{d+d'} = (d+d')!\,(gf)(v) = (d+d')!\,g(v)\,f(v) = 0.$$
However, $\partial_g \partial_f v^{d+d'} = 0$ for all $g \in S^{d'}$ implies that $\partial_f v^{d+d'} = 0$ (because apolarity gives a perfect pairing in degree $d'$). Of course, $\partial_f$ also vanishes on $v^{d'}$ when $d' < d$. Therefore, if $f(v) = 0$, then $\partial_f$ vanishes on all powers of $v$.
By additivity, we conclude that every $f \in I(\{\langle v_1 \rangle, \ldots, \langle v_r \rangle\})$ is apolar to every power sum $v_1^d + \cdots + v_r^d$, $d > 0$, and, more generally, to every linear combination of $v_1^d, \ldots, v_r^d$, $d > 0$.
Now, we prove the following version of the apolarity lemma.
Lemma 1.
Let $x \in S_d$ with $d > 0$, and $X := \{\langle v_1 \rangle, \ldots, \langle v_r \rangle\} \subset \mathbb{P}(S_1)$. Then,
$$x \in \langle v_1^d, \ldots, v_r^d \rangle \iff I(X) \subseteq \operatorname{Ann} x.$$
Proof.
Suppose that $x \in \langle v_1^d, \ldots, v_r^d \rangle$, that is, $x$ is a linear combination of $v_1^d, \ldots, v_r^d$. By Remark 1, every $f \in I(X)$ is apolar to such a linear combination, hence is apolar to $x$. Therefore, $I(X) \subseteq \operatorname{Ann} x$.
Conversely, let us suppose that $I(X) \subseteq \operatorname{Ann} x$. By the evaluation of Equation (4), it follows that $f \in S^d$ vanishes on $v \in S_1$ if and only if it is apolar to $v^d$. In other terms, the set of all $f \in S^d$ that vanish on $v \in S_1$ is the orthogonal complement of $v^d$ with respect to the perfect pairing given by apolarity in degree $d$. Hence,
$$I(X) \cap S^d = \langle v_1^d, \ldots, v_r^d \rangle^\perp.$$
Since $I(X) \subseteq \operatorname{Ann} x$, we have in particular that $x$ is orthogonal to $I(X) \cap S^d$. Hence, $x \in \langle v_1^d, \ldots, v_r^d \rangle$. □
A general and detailed version of the apolarity lemma can be found in , Lemma 1.15. It holds in every characteristic and uses divided powers. Lemma 1 is basically equivalent to , Lemma 1.15(i) and (ii), restricted to the characteristic zero case.
To illustrate the lemma with a simple (nearly trivial) example, let $S_\bullet := \mathbb{C}[x_0, x_1]$ and $S^\bullet := \mathbb{C}[x^0, x^1]$, with $x^0, x^1$ being the dual basis of $x_0, x_1$. The evaluation of a polynomial $p = p(x^0, x^1) \in S^\bullet$ on $a_0 x_0 + a_1 x_1 \in S_1$ is just $p(a_0, a_1)$. When $p$ is homogeneous of degree $d$ and $p(v) = 0$ for a $v \in S_1$, then $p$ vanishes on all scalar multiples of $v$, that is, on all elements of $\langle v \rangle$. If $v \neq 0$, we can say that $\langle v \rangle \in \mathbb{P}(S_1)$ is a root of $p$.
Let us find the sum of squares decompositions of $f := x_0 x_1 \in S_2$ using the apolarity lemma. We have $\partial_{x^0} f = x_1$ and $\partial_{x^1} f = x_0$, hence $\operatorname{Ann} f$ has no nonzero homogeneous elements of degree 1. In degree 2, we have $\operatorname{Ann} f \cap S^2 = \langle (x^0)^2, (x^1)^2 \rangle$. Obviously, $S^d \subset \operatorname{Ann} f$ for all $d \geq 3$. Now, for a finite set $X = \{\langle v_1 \rangle, \ldots, \langle v_r \rangle\} \subset \mathbb{P}(S_1)$ of $r$ (distinct) points, $I(X)$ is the set of all (polynomial) multiples in $S^\bullet$ of the polynomial $p \in S^r$ with roots precisely $\langle v_1 \rangle, \ldots, \langle v_r \rangle$. Hence, by the apolarity lemma, for every homogeneous $p \in \operatorname{Ann} f \cap S^r$ that has $r$ (distinct) roots $\langle v_1 \rangle, \ldots, \langle v_r \rangle$, we have $f \in \langle v_1^2, \ldots, v_r^2 \rangle$. It easily follows that $f$ can be decomposed as a sum of squares of appropriate scalar multiples of $v_1, \ldots, v_r$. For instance, $p := (x^0)^2 - (x^1)^2 \in \operatorname{Ann} f \cap S^2$ gives rise to the decomposition
$$x_0 x_1 = \left( \frac{1}{2}(x_0 + x_1) \right)^2 + \left( \frac{i}{2}(x_0 - x_1) \right)^2.$$
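Under the usual identification of $\partial_{x^i}$ with the partial derivative $\partial/\partial x_i$ (available in characteristic zero, up to the scalar conventions above), this example can be checked mechanically. A minimal sympy sketch (the variable names are ours):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0 * x1

# p = (x^0)^2 - (x^1)^2 acts as the operator d^2/dx0^2 - d^2/dx1^2;
# it annihilates f, i.e., p lies in Ann f:
assert sp.diff(f, x0, 2) - sp.diff(f, x1, 2) == 0

# the two squares coming from the roots <x0 + x1>, <x0 - x1> of p:
v1 = (x0 + x1) / 2
v2 = sp.I * (x0 - x1) / 2
assert sp.expand(v1**2 + v2**2 - f) == 0
```

Both assertions pass, confirming that the displayed decomposition is exact.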

#### 3.2. The Classically Known Results on Maximum Rank

From the elementary theory of quadratic forms, known for a long time, it follows that the rank of a quadratic form $f \in S_2$ equals the rank of its representing matrix with respect to any given (ordered) basis. Hence, the maximum rank equals $\dim S_1$, that is, the number of indeterminates (if $S_\bullet$ is considered as a ring of polynomials).
To find the maximum rank of binary forms of given degree, apolarity is very effective. Indeed, let us consider the following simple description of the apolar ideal of a binary form.
Proposition 1.
Let $f \in S_d \setminus \{0\}$, with $\dim S_1 = 2$. Then, $\operatorname{Ann} f$ is generated by a form $a \in S^s$ and a form $b \in S^{d+2-s}$ for some integer $s \leq (d+2)/2$.
Proof.
See , Theorem 1.44(iv). □
As reported in , this classical result is due to Macaulay. When $\dim S_1 = 2$, the ideal of a set of $r$ distinct points in $\mathbb{P}(S_1)$ is generated by a homogeneous form. If the form $a$ in the statement of Proposition 1 is squarefree, from Lemma 1 it follows that the rank of $f$ is $s$, and some appropriate pairwise non-proportional roots of $a$ give the linear forms of a $d$th power sum decomposition. Taking also into account that $a$ and $b$ must be coprime, since they generate an ideal that contains $S^{d+1}$, we also deduce that, if $a$ is not squarefree, then the rank is $d + 2 - s$ and, for every finite subset $X \subset \mathbb{P}(S_1)$, there exists a $d$th power sum decomposition such that $\langle v \rangle \notin X$ for every linear form $v$ in it.
From the above, it is clear that the rank of $f$ is at most $d$, and can be $d$ only if $\operatorname{Ann} f$ contains the square of a linear form. Given a basis $x_0, x_1$ of $S_1$, the apolar ideal of $x_0^{d-1} x_1$ contains $(x^1)^2$, with $x^0, x^1$ being the dual basis. We conclude that the maximum rank of binary forms of degree $d > 0$ is $d$.
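The last claim can be checked symbolically, again identifying $\partial_{x^1}$ with $\partial/\partial x_1$ (a quick sketch; the degree $d = 5$ is an arbitrary choice of ours):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
d = 5
f = x0**(d - 1) * x1

# (x^1)^2 acts as d^2/dx1^2 and annihilates f, so the square of a
# linear form lies in Ann f, as required for f to have rank d:
assert sp.diff(f, x1, 2) == 0
```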
In , Sections 96 and 97, one finds that the maximum ranks of ternary and quaternary cubics are 5 and 7, respectively. As mentioned in the Introduction, beyond these classical results, only two new cases have been worked out recently: the maximum rank is 7 for ternary quartics (determined in [5,17]) and 10 for ternary quintics (see [11,18]).
To our knowledge, no other values for the maximum rank have been determined to date; the known values are summarized in Table 1.

#### 3.3. Elementary Bounds on Maximum Waring Rank

Let us recall a common geometric viewpoint on Waring rank. Given $f \in S_d \setminus \{0\}$, we have a point $\langle f \rangle \in \mathbb{P}(S_d)$, and we have to express $f$ as a sum of $d$th powers. The set of (spans of) $d$th powers of linear forms is an algebraic variety in $\mathbb{P}(S_d)$: the image of the embedding $\mathbb{P}(S_1) \to \mathbb{P}(S_d)$, $\langle v \rangle \mapsto \langle v^d \rangle$. This embedding turns out to be equivalent to a much studied one: the Veronese embedding (also called the $d$-uple embedding: see, e.g., , Chapter I, Exercise 2.12). Its image is sometimes called the Veronese variety (see , 4.3.7). The problem of finding a power sum decomposition of $f$ is equivalent to the problem of finding a set of points $X$ in the Veronese variety such that $\langle f \rangle \in \langle X \rangle$, that is, such that the point $\langle f \rangle$ lies in the projective span of $X$ (one has to take into account that a scalar multiple of a $d$th power is a $d$th power as well, since $K$ is algebraically closed). Since the Veronese variety spans $\mathbb{P}(S_d)$, the Waring rank is well defined for all forms, and it is at most
$$\dim S_d = \binom{d+n-1}{n-1},$$
where $n := \dim S_1$. This gives an elementary upper bound on rank (which could easily be slightly lowered, but we are not interested in doing so here).
Let us now consider the union $U$ of all projective spans of $r$ distinct points in the Veronese variety. Clearly, for every $\langle f \rangle \notin U$, the rank of $f$ is greater than $r$. Elementary tools of algebraic geometry allow one to estimate the dimension of the Zariski closure $\overline{U}$ of $U$ (see , 4.9.5 or , Chapter I, Section 2, p. 10), which is called the $(r-1)$th secant variety (of the Veronese variety). Roughly speaking, the set of all groups of $r$ points in the Veronese variety (which has dimension $n - 1 = \dim \mathbb{P}(S_1)$) is of dimension $r(n-1)$. Since most of these groups span a subspace of dimension $r - 1$, the expected dimension of $\overline{U}$ is $rn - 1$. When this number does not reach the dimension $\binom{d+n-1}{n-1} - 1$ of the entire space $\mathbb{P}(S_d)$, there exist forms with rank greater than $r$. Hence, the maximum rank in $S_d$ is at least
$$\left\lceil \frac{1}{n} \binom{d+n-1}{n-1} \right\rceil$$
(where $\lceil \ldots \rceil$ denotes the upper integer part; similarly, $\lfloor \ldots \rfloor$ will denote the lower integer part).
Note that, when $d = 2$, the lower bound is $\lceil (n+1)/2 \rceil$, while the maximum rank is $n$. It is also worth mentioning that the estimate of $\dim \overline{U}$ fails for $d = 2$ (and $n, r \geq 2$). More generally, that estimate fails when most points in $\overline{U}$ lie on infinitely many spans. In this case, the dimension of the secant variety drops, and the now classical theorem by Alexander and Hirschowitz gives the complete list of $n, d, r$ for which this happens (we refer the reader to the exposition in ). It turns out that, for $d \geq 3$, the above lower bound can be raised by one in exactly four cases: $(n,d) \in \{(3,4), (4,4), (5,3), (5,4)\}$.
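These counts are easy to automate. The following sketch (the function names are ours) computes the expected rank from the dimension count above and adjusts it in the four Alexander–Hirschowitz exceptional cases with $d \geq 3$:

```python
from math import ceil, comb

def expected_generic_rank(n, d):
    """Dimension count: ceil( binom(d+n-1, n-1) / n )."""
    return ceil(comb(d + n - 1, n - 1) / n)

# For d >= 3, the generic rank exceeds the expected value by one
# exactly in these four cases (Alexander-Hirschowitz).
AH_EXCEPTIONS = {(3, 4), (4, 4), (5, 3), (5, 4)}

def generic_rank(n, d):
    assert d >= 3
    return expected_generic_rank(n, d) + ((n, d) in AH_EXCEPTIONS)

# Ternary quartics: dim S_4 = 15, expected rank 5, actual generic rank 6.
assert expected_generic_rank(3, 4) == 5
assert generic_rank(3, 4) == 6
# Ternary quintics: binom(7, 2) = 21, generic rank 7 (not exceptional).
assert generic_rank(3, 5) == 7
```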
Given $n, d$, if $r$ is the least value for which $\overline{U} = \mathbb{P}(S_d)$, then, by some basic algebro-geometric considerations which we skip here, every $f$ with $\langle f \rangle$ in a suitable nonempty Zariski open subset of $\mathbb{P}(S_d)$ is actually of rank $r$. In this situation, it is customary to say that $r$ is the rank of a generic form in $S_d$. In the context of tensor rank, the striking outcome of the Alexander–Hirschowitz theorem is indeed the exact value of the rank of a generic form (the lower bounds in the exceptional cases are only some of the many consequences). In the following sections, we review the enhanced lower and upper bounds that have been found recently.

## 4. Lower Bounds

To find a good lower bound on the set of the symmetric ranks of all symmetric tensors over $K$ of given order $d$ and dimension $n$, it suffices to find a form in $S_d$, when $\dim S_1 = n$, with high Waring rank.
The (few) lower bounds which we are aware of have been obtained by finding some special forms of high rank. Since the rank of a generic form gives a lower bound, the challenge is to exceed it. In this section, we present the special forms of high rank that have given the best known lower bounds.

#### 4.1. What Monomials Tell Us

To begin with, let us consider binary forms, for which ranks are quite well understood. The maximum rank of degree $d$ binary forms is $d$, while the rank of a generic degree $d$ binary form is $\lfloor (d+2)/2 \rfloor$ (as can be deduced from Proposition 1). Moreover, a degree $d$ binary form of maximum rank can be turned into a monomial by a change of coordinates. Hence, for binary forms, the maximum rank is reached by monomials. For quadrics, whose rank is obviously well understood, the maximum rank of monomials is two (unless $\dim S_1 \leq 1$), while the maximum rank is reached by generic forms (and equals $\dim S_1$, which, from a polynomial viewpoint, is the number of indeterminates).
The rank of all monomials has been determined by Carlini, Catalisano and Geramita in . In dimension three, it turns out that the monomial $x y^s z^s$ is of rank $(s+1)^2$ and $x y^{s-1} z^s$ of rank $s(s+1)$. This gives a lower bound that asymptotically approaches $d^2/4$ for the rank of ternary forms of degree $d$, while a generic form has rank asymptotically approaching $d^2/6$. According to , Proposition 3.4, the asymptotic estimate of the maximum rank for ternary forms is actually $d^2/4$. When the number of variables is four or greater, the maximum rank of monomials does not exceed the rank of a generic form of the same degree.
In view of the above, a first guess could be that the maximum rank is reached either by monomials or by generic forms. However, for ternary quartics, for which the maximum rank is known from Kleppe's master thesis [5,17], it exceeds by one the maximum rank of both monomials and generic forms. The maximum rank exceeds both the maximum rank of monomials and that of generic forms for ternary quintics too (see [11,18]). Buczyński and Teitler in  also found forms in more than three variables with rank exceeding by one the rank of generic forms. An improvement by one might seem not too exciting but, at least for ternary forms, one cannot hope to go much farther. Indeed, the upper bound given in , Proposition 3.3, shows that the maximum rank of a degree $d$ ternary form can exceed the maximum rank of monomials by at most $d$. Thus, the initial guess may be modified by expecting that the maximum rank can only slightly exceed the maximum rank of either monomials or generic forms.
For a detailed discussion on maximum rank of monomials, we refer the reader to . Let us now outline how the rank of monomials has been bounded from below, and how that technique has been enhanced by Buczyński and Teitler, to exceed the previously known lower bounds on maximum ranks.

#### 4.2. What Hilbert Functions Tell Us

The Hilbert function of a graded module $\bigoplus_{d \in \mathbb{Z}} M_d$ over the graded $K$-algebra $S^\bullet$ can simply be defined as the function that on each $d \in \mathbb{Z}$ takes the value $\dim_K M_d$. This is a fundamental tool in algebraic geometry, and is still much studied. To let readers who are not acquainted with algebraic geometry get a taste of the fundamental nature of the Hilbert function, let us mention that the degree and the dimension of an algebraic set can easily be obtained from a naturally associated Hilbert function. More precisely, let $X \subseteq \mathbb{P}(S_1)$ be the set of all points on which some system of homogeneous forms in $S^\bullet$ vanishes. Then, the Hilbert function $H_X$ of $S^\bullet / I(X)$ coincides with a polynomial $p_X$ (the Hilbert polynomial of $X$) for all sufficiently large degrees. The degree of $p_X$ gives the dimension $n$ of $X$, and $n!$ times the leading coefficient of $p_X$ gives the degree of $X$. In the case when $X$ is a finite set of $r$ points, which by Lemma 1 is of our interest here, we get that $H_X(d) = r$ for all sufficiently large $d$. Below, we take a few lines to directly show this fact in an elementary way; readers who are interested in the general properties of Hilbert functions can find them in many basic textbooks of algebraic geometry (e.g., in , Chapter I, Section 7).
Let $X = \{\langle v_1 \rangle, \ldots, \langle v_r \rangle\} \subseteq \mathbb{P}(S_1)$ be a set of $r$ points and let $H_X$ be the Hilbert function of $S^\bullet / I(X)$. The degree $d$ component of that quotient is $S^d / I(X)^d$, where $I(X)^d = S^d \cap I(X)$ is the space of degree $d$ forms that vanish on $X$. From the evaluation in Equation (4), it follows that $I(X)^d = \langle v_1^d, \ldots, v_r^d \rangle^\perp$ (this fact has already been noticed in the proof of Lemma 1). It follows that
$$H_X(d) = \dim \langle v_1^d, \ldots, v_r^d \rangle.$$
Note that it is easy to find $l_1, \ldots, l_{r-1} \in S^1$ such that $l_i(v_i) = 0$ and $l_i(v_r) \neq 0$ for each $i \in \{1, \ldots, r-1\}$. It follows that, when $d \geq r - 1$, the hyperplane $\left(l_1^{d-r+2} l_2 \cdots l_{r-1}\right)^\perp < S_d$ contains $v_1^d, \ldots, v_{r-1}^d$, but not $v_r^d$. Similarly, one can find hyperplanes that do not contain a given $v_i^d$ but contain all $v_j^d$ with $j \neq i$. This shows that $v_1^d, \ldots, v_r^d$ are linearly independent, hence
$$\dim \langle v_1^d, \ldots, v_r^d \rangle = r, \qquad \forall d \geq r - 1.$$
Therefore, $H_X(d) = r$ for all sufficiently large $d$ ($\geq r - 1$, in this case).
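For binary forms, this computation can be carried out numerically: $H_X(d)$ is the rank of the matrix whose rows are the coefficient vectors of the powers $v_i^d$. A sketch (the function name and the sample points are ours):

```python
from math import comb
import numpy as np

def hilbert_function(points, d):
    """H_X(d) = dim < v_1^d, ..., v_r^d > for binary linear forms
    v_i = a_i*x0 + b_i*x1, given as pairs (a_i, b_i)."""
    # coefficient vector of (a*x0 + b*x1)^d in the monomial basis of S_d
    rows = [[comb(d, k) * a**k * b**(d - k) for k in range(d + 1)]
            for (a, b) in points]
    return int(np.linalg.matrix_rank(np.array(rows, dtype=float)))

pts = [(1, 0), (0, 1), (1, 1), (1, -1)]  # r = 4 distinct points
# H_X climbs and then stabilizes at r = 4 from degree r - 1 = 3 on:
assert [hilbert_function(pts, d) for d in (1, 2, 3, 4, 5)] == [2, 3, 4, 4, 4]
```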

#### 4.3. What Hyperplane Sections Tell Us

Let $\bar{S}^\bullet = S^\bullet / I$ be the quotient of $S^\bullet$ by a homogeneous ideal $I$ and suppose that $\bar{l} \in \bar{S}^1$ is not a zero divisor in $\bar{S}^\bullet$. Then, the multiplication by $\bar{l}$ in $\bar{S}^\bullet$ injects each homogeneous component $\bar{S}^d$ into $\bar{S}^{d+1}$. Let $H$ and $H'$ be, respectively, the Hilbert functions of $\bar{S}^\bullet$ and of its quotient $\bar{S}^\bullet / (\bar{l})$ over the ideal generated by $\bar{l}$. Thus, we have
$$H'(d) = H(d) - H(d-1).$$
Consequently, we also have
$$H(d) = \sum_{i=0}^{d} H'(i).$$
In this way, relevant properties of H can be deduced from properties of $H ′$.
The quotient $\bar{S}^\bullet / (\bar{l})$ is naturally isomorphic to the quotient of $S^\bullet$ over the ideal $I + (l)$, with $l \in S^1$ being a representative of $\bar{l}$. When $I$ is the ideal of an algebraic set $X \subseteq \mathbb{P}(S_1)$ (that is, the set of all points where some set of homogeneous elements of $S^\bullet$ vanishes), the algebraic set defined by $I + (l)$ (that is, the set of all points where all the homogeneous elements of $I + (l)$ vanish) is the intersection of $X$ with the hyperplane given by $l$. From a geometric viewpoint, the idea is that features of hyperplane sections of $X$ give relevant information on $X$. This idea is ubiquitous in algebraic geometry.
Let us see what we get in the case of our interest. When $X$ is a finite set of $r$ points, if a product of homogeneous elements $xy$ vanishes on $X$ but $y$ does not, then $x$ must vanish on some point of $X$. Conversely, if $x$ vanishes on some point of $X$, it is not difficult to find a nonzero $y$ in some $S^d$ such that $xy$ vanishes on $X$. Therefore, to find $\bar{l} \in S^\bullet / I(X)$ that is not a zero divisor is to find a hyperplane that does not meet $X$. In this case, since we know that $H$ takes the value $r$ for $d \geq r - 1$, we conclude that
$$r = \sum_{i=0}^{r-1} H'(i),$$
with $H'$ being the Hilbert function of $S^\bullet / (I(X) + (l))$.

#### 4.4. Rank of Monomials

The result we have just discussed, in conjunction with the apolarity lemma, gives a way to bound the rank of $f \in S_d$ from below. If we fix $l \in S^1 \setminus \{0\}$, for every finite set of $r$ points $X$ such that $I(X) \subseteq \operatorname{Ann} f$, we obviously have $I(X) + (l) \subseteq \operatorname{Ann} f + (l)$. Denoting by $H_f$ the Hilbert function of $S^\bullet / (\operatorname{Ann} f + (l))$, we have $H'(i) \geq H_f(i)$ for all $i$, hence
$$r \geq \sum_{i=0}^{d} H_f(i) =: b(l).$$
Taking into account Lemma 1, we have that every power sum decomposition $f = v_1^d + \cdots + v_r^d$ for which no $\langle v_i \rangle$ lies on the hyperplane defined by $l$ has at least $b(l)$ summands. On this basis, a first rough idea to find a lower bound on the rank of $f$ is to find the minimum of $b(l)$, with $\langle l \rangle$ varying in an infinite subset of $\mathbb{P}(S^1)$ such that each point of $\mathbb{P}(S_1)$ lies on at most a finite number of the corresponding hyperplanes (e.g., one may take an irreducible curve in $\mathbb{P}(S^1)$ contained in no hyperplane). However, Carlini, Catalisano and Geramita followed another interesting path.
For whatever $X$, the ideal $I(X) : l = \{ g \in S^\bullet : gl \in I(X) \}$ is clearly the ideal of the set $X' := X \setminus \mathbb{P}(l^\perp)$. Hence, for the Hilbert function $H'_f$ of $S^\bullet / \left( (\operatorname{Ann} f : l) + (l) \right)$,
$$\sum_{i=0}^{d} H'_f(i)$$
cannot exceed the number of points in $X'$, and consequently the number of points in $X$, for whatever $X$. This holds for whatever choice of $l \in S^1 \setminus \{0\}$, which therefore can be chosen to maximize the above sum.
Now, let us consider a positive degree monomial $f = x_1^{a_1} \cdots x_n^{a_n}$ for a given basis $x_1, \ldots, x_n$ of $S_1$. Let $x^1, \ldots, x^n$ be the dual basis in $S^1$, and note that a monomial $(x^1)^{b_1} \cdots (x^n)^{b_n}$ is apolar to $f$ if and only if $b_i > a_i$ for some $i$. Moreover, for two different (monic) monomials $m$ and $m'$ that are not apolar to $f$, we have that $\partial_m f$ and $\partial_{m'} f$ cannot be proportional. This easily implies that $\operatorname{Ann} f$ is the ideal generated by $(x^1)^{a_1+1}, \ldots, (x^n)^{a_n+1}$.
With no loss of generality, we can assume $a_1 \leq \cdots \leq a_n$, and let $a_i$ be the first nonzero exponent. It is quite easy to recognize that $\operatorname{Ann} f : x^i$ is generated by
$$x^1, \ldots, x^{i-1},\ (x^i)^{a_i},\ (x^{i+1})^{a_{i+1}+1}, \ldots, (x^n)^{a_n+1}$$
and $(\operatorname{Ann} f : x^i) + (x^i)$ by
$$x^1, \ldots, x^i,\ (x^{i+1})^{a_{i+1}+1}, \ldots, (x^n)^{a_n+1}.$$
To calculate that, for the Hilbert function $H'_f$ of $S^\bullet / \left( (\operatorname{Ann} f : x^i) + (x^i) \right)$, one has
$$\sum_{j=0}^{d} H'_f(j) = (a_{i+1}+1) \cdots (a_n+1)$$
is not difficult (and quite easy if one is familiar with Hilbert functions).
Since we are concerned with lower bounds on rank, we could end the subsection here. However, to agree that $r := (a_{i+1}+1) \cdots (a_n+1)$ is actually the rank of $f$ is quite easy because, similar to what is smartly remarked in , the ideal generated by
$$x^1, \ldots, x^{i-1},\ (x^{i+1})^{a_{i+1}+1} - (x^i)^{a_{i+1}+1}, \ldots, (x^n)^{a_n+1} - (x^i)^{a_n+1}$$
is the ideal of a set of $r$ distinct points and is contained in $\operatorname{Ann} f$.
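The resulting formula is easy to implement. A sketch (the function name is ours), agreeing with the ranks of $x y^s z^s$ and $x y^{s-1} z^s$ quoted in Section 4.1:

```python
from math import prod

def monomial_rank(exponents):
    """Waring rank of the monomial x_1^{a_1} ... x_n^{a_n} of positive
    degree, following Carlini-Catalisano-Geramita: drop zero exponents,
    sort increasingly, and take prod(a_j + 1) over all but the first."""
    a = sorted(e for e in exponents if e > 0)
    return prod(e + 1 for e in a[1:])

s = 3
assert monomial_rank((1, s, s)) == (s + 1) ** 2      # x y^s z^s
assert monomial_rank((1, s - 1, s)) == s * (s + 1)   # x y^{s-1} z^s
assert monomial_rank((0, 0, 7)) == 1                 # a pure power has rank 1
```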

#### 4.5. Beyond Monomials and Generic Forms

As anticipated before, once the rank of monomials has been determined, one can find ternary monomials with rank much higher than the rank of generic forms of the same degree (which is known by the Alexander–Hirschowitz theorem). When the number of indeterminates is four or greater, the rank of generic forms cannot be exceeded by monomials. We now give a brief account of how Buczyński and Teitler were able to beat both ternary monomials and generic quaternary forms, with one and the same argument. A more informative description can be found directly in .
Let $f \in S_d$ and $l \in S^1 \setminus \{0\}$. In the calculations before, the sum of all values of the Hilbert function of quotient algebras of the type $S^\bullet / (I + (l))$ turned out to be useful. When $I = \operatorname{Ann} f$, that sum bounds from below the number of summands of a decomposition whose linear forms are outside the hyperplane defined by $l$. When $I = (\operatorname{Ann} f : l)$, that sum directly bounds from below the rank of $f$. Note that the sum under consideration is nothing but the dimension of $S^\bullet / (I + (l))$ as a $K$-vector space. We can give the following useful description of this dimension, using a relation between $I + (l)$ and $(I : l)$ that one often encounters when dealing with hyperplane sections.
When $S^\bullet / I$ is finite-dimensional, we have
$$\dim_K \frac{S^\bullet}{I + (l)} = \dim_K \frac{S^\bullet}{I} - \dim_K \frac{I + (l)}{I} = \dim_K \frac{S^\bullet}{I} - \dim_K \frac{(l)}{(l) \cap I}.$$
Since the kernel of the homomorphism
$$S^\bullet \twoheadrightarrow \frac{(l)}{(l) \cap I}, \qquad x \mapsto [xl]_{(l) \cap I},$$
is $(I : l)$, we deduce that
$$\dim_K \frac{S^\bullet}{I + (l)} = \dim_K \frac{S^\bullet}{I} - \dim_K \frac{S^\bullet}{(I : l)}.$$
When $I = \operatorname{Ann} f$, $S^\bullet / I$ is called the apolar algebra of $f$, and its dimension the apolar length of $f$; they are denoted by $A_f$ and $\operatorname{al} f$. Note that, even when $I = (\operatorname{Ann} f : l)$, $S^\bullet / I$ is an apolar algebra. Indeed, for whatever $x, y \in S^\bullet$, we have
$$x \in (\operatorname{Ann} f : y) \iff xy \in \operatorname{Ann} f \iff \partial_{xy} f = 0 \iff \partial_x \partial_y f = 0 \iff x \in \operatorname{Ann} \partial_y f,$$
that is, $(\operatorname{Ann} f : y) = \operatorname{Ann} \partial_y f$. Hence, when $I = (\operatorname{Ann} f : l)$, the quotient $S^\bullet / I$ is the apolar algebra of $\partial_l f$.
From the formula $(\operatorname{Ann} f : y) = \operatorname{Ann} \partial_y f$ and the fact that apolarity is a perfect pairing in every degree, we get another interesting fact: the apolar length of $f$ equals the dimension of the vector space of all $\partial_y f$ with $y \in S^\bullet$.
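This last characterization makes the apolar length directly computable: it is the dimension of the span of $f$ together with all its iterated partial derivatives. A sympy sketch (names are ours); for a monomial $x^a y^b z^c$, the result should be $(a+1)(b+1)(c+1)$, matching the description of $\operatorname{Ann} f$ in Section 4.4:

```python
import sympy as sp
from itertools import product

def apolar_length(f, gens):
    """al f = dimension of the span of all iterated partial derivatives
    of f (including f itself, as the order-0 derivative)."""
    d = sp.Poly(f, *gens).total_degree()
    derivs = {sp.expand(f)}
    for order in range(1, d + 1):
        for combo in product(gens, repeat=order):
            g = sp.expand(sp.diff(f, *combo))
            if g != 0:
                derivs.add(g)
    # dimension of the span = rank of the coefficient matrix
    polys = [sp.Poly(g, *gens) for g in derivs]
    monoms = sorted({m for p in polys for m in p.monoms()})
    coeffs = [p.as_dict() for p in polys]
    M = sp.Matrix([[c.get(m, 0) for m in monoms] for c in coeffs])
    return M.rank()

x, y, z = sp.symbols('x y z')
assert apolar_length(x * y**2 * z**2, (x, y, z)) == 2 * 3 * 3
```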
We end up with
$$\dim_K \frac{S^\bullet}{\operatorname{Ann} f + (l)} = \operatorname{al} f - \operatorname{al} \partial_l f,$$
$$\dim_K \frac{S^\bullet}{(\operatorname{Ann} f : l) + (l)} = \operatorname{al} \partial_l f - \operatorname{al} \partial_{l^2} f,$$
the former being a lower bound on the number of summands of a decomposition of $f$ whose linear forms are outside the hyperplane defined by $l$, and the latter a lower bound on the rank of $f$. Based on these remarks, a good knowledge of apolar algebras, which can be obtained for instance from , clearly gives precious information on lower bounds on rank.
A first obvious step is to try to maximize $\operatorname{al} \partial_l f - \operatorname{al} \partial_{l^2} f$. To use a coordinate description, let us fix dual bases $x_1, \ldots, x_n$ and $x^1, \ldots, x^n$ of $S_1$ and $S^1$. To get $\operatorname{al} \partial_{l^2} f = 0$, we choose $l = x^1$ and consider a form
$$f := x_1 g + k, \qquad g, k \in K[x_2, \ldots, x_n].$$
Then, $\partial_l f = g$, and we have to choose $g$ with maximum apolar length. From , we can find the value of that maximum and learn that it is reached by a generic $g$ (that is, by all $g$ in a suitable nonempty open set in the Zariski topology). In conclusion, there exist degree $d$ forms in $n$ indeterminates with rank not less than the maximum apolar length of degree $d-1$ forms in $n-1$ indeterminates. Surprisingly, that maximum equals the maximum rank of degree $d$ monomials when $n = 3$, and the rank of generic forms of degree $d$ when $n = 4$ and $d$ is odd.
When $(\operatorname{Ann} f : l) = \operatorname{Ann} \partial_l f$ is considered instead of $\operatorname{Ann} f$ to get the lower bound, one might hope that, for some special $f$, some of the linear forms might be forced to lie on the hyperplane defined by $l$. However, for geometric reasons, we expect that forms of high rank have many decompositions, which can therefore easily escape out of the hyperplane. Note also that, when $f$ is a monomial $x_1^{a_1} \cdots x_n^{a_n}$, we have $\operatorname{al} f - \operatorname{al} \partial_{x^i} f = \operatorname{al} \partial_{x^i} f - \operatorname{al} \partial_{(x^i)^2} f$ whenever $a_i \neq 0$, so that we have no loss in cutting out the part on the hyperplane. This might give some indication on why, in the high rank examples found in , the first thing considered is to raise the value of $\operatorname{al} f - \operatorname{al} \partial_{x^i} f$.
For a form $f$ of the type $x_1 g + k$, one can raise $\operatorname{al} f - \operatorname{al} \partial_{x^1} f$ by one by lowering $\operatorname{al} g$ by one, but at the cost of lowering $\operatorname{al} \partial_{x^1} f - \operatorname{al} \partial_{(x^1)^2} f$ too. This causes a problem for decompositions that have some linear forms on the hyperplane defined by $x^1$. The idea was to show that, for suitable choices of the form $k$, such decompositions must have at least two linear forms on the hyperplane. To this end, note that, if a decomposition of $f$ involves exactly one linear form $v$ that lies in $(x^1)^\perp$, then $f - v^d$ has a decomposition with all linear forms outside that hyperplane. Using the fact (pointed out before) that the apolar length equals the dimension of the vector space of all derivatives, it turns out that $\operatorname{al}(f - v^d) - \operatorname{al} \partial_{x^1}(f - v^d)$ can be kept high for whatever choice of $v$. This is incompatible with the fact that, for some $v$, $f - v^d$ has a decomposition with all linear forms outside the hyperplane defined by $x^1$.
In view of the determination of the maximum rank, the Buczyński–Teitler lower bound is particularly interesting because, in conjunction with the upper bound in , it shows that the maximum rank of ternary quintics is ten. Now, at the end of the Introduction of , a possible guess for the maximum rank of ternary forms of an arbitrarily given degree $d$ is outlined, and if it is correct, then the Buczyński–Teitler lower bound is the best possible for $d$ odd (and $n = 3$). In , the lower bound for ternary forms given by monomials is raised by one for even degrees too, and if the guess in  is correct, it cannot be improved further. Basically, the lower bound in  follows the second advice in , Remark 19, but uses a more specific example, similar to that in , Theorem 18 (which gives the lower bound of ten for ternary quintics), and the arguments are of a purely algebraic nature (they do not involve geometric dimension counts).

## 5. Upper Bounds

#### 5.1. Rise and (Relative) Fall of an Upper Bound of Genuinely Geometric Nature

We know that the rank of generic forms gives a lower bound for the maximum rank. A simple geometric argument shows that twice that rank gives an upper bound. This can be proven in a more general context. Indeed, the rank of a point $P$ of a projective space $\mathbb{P}$, with respect to a variety in $\mathbb{P}$, is defined as the minimum number of points that can be fixed on the variety such that $P$ lies in their projective span. For the Veronese variety, we recover the Waring rank.
Let us suppose that all forms in a nonempty open subset of a projective space $\mathbb{P}$ are of a certain rank $r_{\mathrm{gen}}$ with respect to some given variety (as is the case when $\mathbb{P} = \mathbb{P}(S_d)$ and $r_{\mathrm{gen}}$ is the rank of generic forms). When the base field is $\mathbb{C}$ (or even $\mathbb{R}$), one can consider a small ball whose points all have rank $r_{\mathrm{gen}}$. Every point on a projective line joining two points of the ball has rank at most the sum of the ranks of the two points, hence at most $2 r_{\mathrm{gen}}$. Since all such lines cover $\mathbb{P}$, we deduce that the maximum rank is at most $2 r_{\mathrm{gen}}$. When $K \neq \mathbb{C}$ (but is algebraically closed according to our standing assumption, or at least infinite), considering the Zariski topology causes no problems, since a nonempty intersection of a line with a Zariski open set is always an infinite set.
The upper bound $2 r_{\mathrm{gen}}$, due to Blekherman and Teitler (see ), in the case of the Waring rank dramatically improved the previously known upper bounds from [6,21,22]. In addition, if we recall that, for generic binary forms of degree $d$, $r_{\mathrm{gen}} = \lfloor (d+2)/2 \rfloor$, we have that the maximum rank for degree $d$ binary forms, which is $d$, equals $2 r_{\mathrm{gen}} - 1$ for odd degrees and $2 r_{\mathrm{gen}} - 2$ for even degrees. This might lead one to hope that the Blekherman–Teitler upper bound is nearly sharp.
Somewhat symmetrically to what they had done for lower bounds, Buczyński and Teitler, jointly with Han and Mella, showed that the maximum rank is at most $2 r_{\mathrm{gen}} - 1$. In some special cases, which include binary forms of even degrees, they also lowered the upper bound down to $2 r_{\mathrm{gen}} - 2$ (see , Theorem 3.9 and Example 3.10). This result, which would have given strong evidence in favor of the sharpness of such bounds, had no such effect because, when  was announced, it was already known that for ternary forms the upper bound $2 r_{\mathrm{gen}}$ is quite mild (because of the asymptotic estimate in ).
The Blekherman–Teitler bound keeps its relevance in the general context of rank with respect to varieties, which, apart from its intrinsic interest, by means of Segre varieties can be readily applied to arbitrary (not necessarily symmetric) tensors too (see , 4.3.4). At the same time, because of the special nature of Veronese and Segre varieties, it is reasonable to expect that the Blekherman–Teitler bound on tensor rank can be sensibly lowered, by means of appropriate algebraic techniques.

#### 5.2. Linear Algebraic Tools for Upper Bounds on Waring Rank

A widely shared attitude is that hyperplane sections should provide a good insight on how to build short power sum decompositions. Apart from the ubiquitous nature of this tool, in this specific case we have the following easy fact. Let $f \in S_d$ and let us take a hyperplane, defined by a nonzero $l \in S^1$. A power sum decomposition of the derivative
$$\partial_l f = v_1^{d-1} + \cdots + v_r^{d-1},$$
such that $l(v_i) \neq 0$ for all $i$, leads to a "lifted" form
$$F = \frac{1}{d\, l(v_1)} v_1^d + \cdots + \frac{1}{d\, l(v_r)} v_r^d, \tag{5}$$
because, from the Leibniz rule for $\partial_l$ and Equation (4), it readily follows that
$$\partial_l F = v_1^{d-1} + \cdots + v_r^{d-1},$$
that is, $\partial_l F = \partial_l f$. The fact that $\partial_l (f - F) = 0$ implies that $f - F$ can be regarded as a form in one indeterminate fewer. Indeed, note that, if $x_1, \ldots, x_n$ and $x^1, \ldots, x^n$ are dual bases with $l = x^1$, then $f - F$ is a form in $x_2, \ldots, x_n$. More invariantly, $T_\bullet := \ker \partial_l$ is a graded subring of $S_\bullet$, and since the elements of the ideal $(l)$ annihilate every element of $T_\bullet$, the apolar bilinear map $S^\bullet \times S_\bullet \to S_\bullet$ induces a bilinear map $T^\bullet \times T_\bullet \to T_\bullet$, where $T^\bullet := S^\bullet / (l)$, which is an apolar bilinear map as well, and $\dim T^1 = \dim S^1 - 1$. This allows us to design an inductive procedure, which is basically the one that has given the upper bound in .
The main difficulty in the mentioned procedure is the condition that $l(v_i) \neq 0$ for all $i$. The improvement given in  to the upper bound in  basically consists in a slightly better handling of that condition. The fact that the bounds in [6,22] are much milder than the Blekherman–Teitler bound should not be taken as an indication to abandon their path. Indeed, the scope of  is wider than upper bounds on rank, and Jelisiejew opens a line of investigation in that direction (in the Introduction of  one can find some additional motivation on why that line is worth pursuing).
To explain the further idea that led to the best upper bounds presently known on the maximum rank of ternary forms of given degree, let us look at binary forms, for which the actual maximum rank is known. It does not come as a surprise that, in a context where duality plays an important role, the case when points and hyperplanes are the same thing, at least from a geometric viewpoint, turns out to be relatively simple. Then, a reasonable basic principle is to give an important role to both points and hyperplanes.
The idea that underlies [21], Proposition 3.3 (which supersedes the Blekherman–Teitler bound for ternary forms), and the sharp upper bounds for ternary quartics and quintics (see [17,18]), is to look for decompositions that “split along a few lines”. More precisely, in [18], it is shown that every ternary quintic f can be decomposed as a sum $v_1^5 + \cdots + v_{10}^5$ such that the points $v_1, \ldots, v_{10}$ belong to a union of four distinct lines $\mathbb{P}l_1^\perp, \ldots, \mathbb{P}l_4^\perp$ in $\mathbb{P}S_1$. In [21], it is shown that every degree d ternary form has a decomposition $v_1^d + \cdots + v_r^d$, with r giving the upper bound, such that the points $v_1, \ldots, v_r$ belong to a union of d distinct lines $\mathbb{P}l_1^\perp, \ldots, \mathbb{P}l_d^\perp$ in $\mathbb{P}S_1$. Of course, even a bare induction based on hyperplane sections, when followed step by step, leads to a decomposition of $f \in S_d$ that splits along d hyperplanes. What is new here is that the configuration of lines is fixed in advance and is used to drive the step-by-step construction of the decomposition. Let us now describe in more detail how the construction works.
To begin with, given a nonzero $x \in S^1$, a decomposition of $f \in S_d$ gives rise to a decomposition of $\partial_x f$ in which all the summands with linear forms $v \in x^\perp$ disappear. Hence, for a decomposition that splits along the lines given by $l_1, \ldots, l_d \in S^1 \smallsetminus \{0\}$, a necessary condition is that $\partial_{l_1 \cdots l_d} f = 0$. At each step, we have to lift, with respect to $l_i$, a decomposition whose linear forms all lie outside the line $l_i^\perp$. Hence, we need $l_1, \ldots, l_d$ to be distinct. A further mild technical requirement is that $\partial_{l_1 \cdots \widehat{l_i} \cdots l_d} f \neq 0$, with the hat denoting omission. It is not difficult to find such $l_1, \ldots, l_d$ (see [21], Proposition 3.1).
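The necessary condition $\partial_{l_1 \cdots l_d} f = 0$ can be observed concretely. In the sympy sketch below (a degree three example of my own choosing), $l_1, l_2, l_3$ are taken dual to $x, y, z$, and each summand of f lies on one of the three lines, so the threefold mixed derivative vanishes while the twofold ones do not:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# l_1, l_2, l_3 dual to x, y, z; each summand omits one variable,
# i.e., it lies on one of the lines P l_i^perp.
v1, v2, v3 = 2*y + z, x - 3*z, x + y
f = v1**3 + v2**3 + v3**3

# Necessary condition for splitting along the three lines:
assert sp.diff(f, x, y, z) == 0
# The mild technical requirement (hat = omission) also holds here:
assert sp.diff(f, y, z) != 0   # l_1 omitted
assert sp.diff(f, x, z) != 0   # l_2 omitted
assert sp.diff(f, x, y) != 0   # l_3 omitted
```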
To start the procedure, let us consider $\partial_{l_2 \cdots l_d} f$, which of course is a linear form and has a trivial first-power decomposition with just one summand. However, we choose a redundant decomposition with two summands lying in $l_1^\perp$, such that the summands lie on none of the lines $\mathbb{P}l_2^\perp, \ldots, \mathbb{P}l_d^\perp$, and fulfill a further condition that we shall explain later. Then, we can lift that decomposition with respect to $\partial_{l_2}$ as in Equation (5), and take the difference, denoted $g_1'$, with $\partial_{l_3 \cdots l_d} f$. Since the lift is annihilated by $\partial_{l_1}$, we have $\partial_{l_1} g_1' = \partial_{l_1 l_3 \cdots l_d} f \neq 0$, and since the lift has the same $\partial_{l_2}$-derivative as $\partial_{l_3 \cdots l_d} f$, we also have $\partial_{l_2} g_1' = 0$. Hence, $g_1'$ can be regarded as a binary form and can be decomposed along the line $\mathbb{P}l_2^\perp$.
Now we come to a delicate step. If $g_1'$ is a square $v^2$, with $v \in S_1$, then on the one hand we have a cheap decomposition with one summand, while on the other hand, if $l_i(v) = 0$ for some $i \geq 3$, the procedure cannot continue. Unfortunately, it follows from Proposition 1 that every decomposition with two summands must involve v (or, more accurately, that every such decomposition must have a zero summand; a fact that also follows from elementary considerations). To overcome this problem, a crucial condition has to be imposed on $g_1'$: the least degree of a generator of $\operatorname{Ann} g_1'$, a number which is called the binary length of $g_1'$ in the works cited above, must be 2 (the highest possible binary length for degree two forms). In this way, we can find a sufficiently cheap decomposition that does not halt the procedure. A similar condition is needed in the subsequent steps.
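For degree two binary forms, the binary length can be read off from the $2 \times 2$ catalecticant matrix: it is 1 exactly when $g$ is proportional to a square, and 2 otherwise. A minimal sketch (the function name and the coefficient convention are mine):

```python
import sympy as sp

def binary_length_deg2(a, b, c):
    """Least degree of a generator of Ann(g) for a nonzero binary
    quadratic g = a*x^2 + b*x*y + c*y^2, computed as the rank of the
    catalecticant matrix: rank 1 iff g is (a multiple of) a square."""
    cat = sp.Matrix([[a, sp.Rational(b, 2)],
                     [sp.Rational(b, 2), c]])
    return cat.rank()

print(binary_length_deg2(1, 2, 1))   # g = (x + y)^2   -> 1
print(binary_length_deg2(0, 1, 0))   # g = x*y         -> 2
print(binary_length_deg2(1, 0, -1))  # g = x^2 - y^2   -> 2
```

In the construction above, requiring binary length 2 for $g_1'$ is precisely what rules out the square case that would halt the procedure.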
The binary length coincides with the border rank, which is an important invariant of a form in the context of tensor rank, but we do not need this fact. What matters in the proof is that the condition on $g_1'$ (and on other forms related to $l_3, \ldots, l_d$) can be ensured at the previous step, when the redundant decomposition of the linear form $\partial_{l_2 \cdots l_d} f$ is chosen. Similar care has to be taken when choosing the decomposition of $g_1'$ with two summands, and so on for all the decompositions that subsequently arise by lifting with respect to $l_3$, $l_4$, and so forth. To keep control of these conditions, a technical lemma is needed (see Lemma 2.7, based on Lemma 2.6, in the works cited above).
That is how the upper bound
$\left\lfloor \frac{d^2 + 6d + 1}{4} \right\rfloor$
in [21], Proposition 3.3 has been obtained. The splitting that gave the sharp upper bound for quintics in [18] was obtained in a direct, not inductive, way. However, a new and probably simpler proof can be organized along the lines outlined above.

#### 5.3. A Nontrivial Feature of Split Decompositions

A rough dimension count shows that the statement that every degree d ternary form has a power sum decomposition with r summands that splits along d (or $d - 1$) lines imposes a nontrivial constraint on r. Indeed, the variety of sets of r points belonging to the union of d given lines has dimension r. Each of those sets gives a space of decompositions of dimension at most $r - 1$. Finally, the set of all unions of d lines has dimension $2d$. To reach the dimension of the space of all degree d ternary forms, we need
$2r - 1 + 2d \geq \binom{d+2}{2} - 1,$
hence
$r \geq \frac{d^2 - d + 2}{4}.$
This is quite near to (and in fact a bit less than) the lower bound for ternary forms discussed in the previous section, that is,
$\left\lfloor \frac{d^2 + 2d + 5}{4} \right\rfloor$
(for $d \geq 2$). Note that, in most cases, a generic degree d ternary form does not admit a minimal decomposition that splits along d lines. Hence, the above dimension count indicates that this procedure could be particularly suitable for finding the maximum rank.
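The comparison between the parameter-count threshold and the lower bound of the previous section can be checked numerically; the short script below (mine, transcribing the two displayed formulas) confirms that the count never obstructs the split-decomposition approach:

```python
# Compare the parameter-count threshold (d^2 - d + 2)/4 with the known
# lower bound floor((d^2 + 2*d + 5)/4) on the maximum rank of ternary forms.
for d in range(2, 31):
    threshold = (d * d - d + 2) / 4
    lower_bound = (d * d + 2 * d + 5) // 4
    # The threshold stays below the lower bound for every degree checked:
    assert threshold <= lower_bound, d
```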

## 6. Summary

Let us summarize the state of knowledge on the maximum rank $r_{\max}(n, d)$ of forms of degree $d > 0$ in n variables, as presented in this article. For ternary forms:
$\left\lfloor \frac{d^2 + 2d + 5}{4} \right\rfloor \leq r_{\max}(3, d) \leq \left\lfloor \frac{d^2 + 6d + 1}{4} \right\rfloor, \qquad \forall d \geq 2.$
For $n \geq 4$, we have the lower and upper bounds given by the rank $r_{\mathrm{gen}}$ of generic forms and by its double, which hold in general for the rank with respect to a variety. The enhancements obtained in [11,23] allow us to raise the lower bound by one when d is odd, and to lower the upper bound by one. Let us mention that, according to the list in [1], 5.4.1, the exceptional cases of the Alexander–Hirschowitz theorem, $(3,4), (4,4), (5,3), (5,4)$, are included among the special cases for which the upper bound $2 r_{\mathrm{gen}}$ can be lowered by two, according to Theorem 3.9 in [23].
The other special cases are a bit more cumbersome to detect, so we do not take them into account in the following summary:
$\left\lceil \frac{1}{n} \binom{n+d-1}{n-1} \right\rceil + \varepsilon \;\leq\; r_{\max}(n, d) \;\leq\; 2 \left\lceil \frac{1}{n} \binom{n+d-1}{n-1} \right\rceil + \varepsilon' - 1, \qquad \forall n \geq 4,\ \forall d > 0,$
with
$\varepsilon = \begin{cases} 1 & \text{when } n = 4 \text{ and } d \text{ is odd and } \geq 3 \text{, or } (n,d) \in \{ (4,4), (5,3), (5,4) \}, \\ 0 & \text{otherwise,} \end{cases} \qquad \varepsilon' = \begin{cases} 1 & \text{when } (n,d) \in \{ (4,4), (5,3), (5,4) \}, \\ 0 & \text{otherwise.} \end{cases}$
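The summary bounds can be turned into a short script that reproduces the entries of Table 2 (the function `bounds` is my transcription of the displayed formulas; the further special cases mentioned above are deliberately not handled):

```python
from math import comb

def bounds(n, d):
    """Lower/upper bounds on r_max(n, d): the ternary formulas for n = 3,
    the generic-rank based formulas (with corrections eps, eps') for n >= 4."""
    if n == 3:
        return (d * d + 2 * d + 5) // 4, (d * d + 6 * d + 1) // 4
    r_gen = (comb(n + d - 1, n - 1) + n - 1) // n   # ceiling of binom/n
    exceptional = {(4, 4), (5, 3), (5, 4)}
    eps = 1 if (n == 4 and d % 2 == 1 and d >= 3) or (n, d) in exceptional else 0
    eps1 = 1 if (n, d) in exceptional else 0
    return r_gen + eps, 2 * r_gen + eps1 - 1

# A few entries of Table 2:
assert bounds(3, 6) == (13, 18)
assert bounds(4, 4) == (10, 18)
assert bounds(5, 3) == (8, 14)
assert bounds(6, 6) == (77, 153)
```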
To give a more concrete idea of the above values, we explicitly report some ranges in Table 2, which enrich Table 1.
Let us conclude by recalling that, if the symmetric rank were well understood, it would be easy to determine $r_{\max}(n, d)$. That is why the techniques invented to find $r_{\max}(n, d)$ may hopefully indicate some ways to understand tensor rank, which would be a considerable achievement, given the recently recognized applicative importance of this topic.

## Funding

Financial support provided by Università degli Studi di Napoli Federico II.

## Acknowledgments

We are grateful to Zach Teitler for providing us with some updated information on the recent literature.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Landsberg, J. Tensors: Geometry and Applications; American Mathematical Society (AMS): Providence, RI, USA, 2012; pp. 1–439.
2. Geramita, A. Exposé I A: Inverse systems of fat points: Waring’s problem, secant varieties of Veronese varieties and parameter spaces for Gorenstein ideals. In The Curves Seminar at Queen’s, Vol. X; Queen’s University: Kingston, ON, Canada, 1996; pp. 2–114.
3. Alexander, J.; Hirschowitz, A. Polynomial interpolation in several variables. J. Algebr. Geom. 1995, 4, 201–222.
4. Brambilla, M.C.; Ottaviani, G. On the Alexander–Hirschowitz theorem. J. Pure Appl. Algebra 2008, 212, 1229–1251.
5. Kleppe, J. Representing a Homogeneous Polynomial as a Sum of Powers of Linear Forms. Master’s Thesis, Candidatum Scientiarum, Department of Mathematics, University of Oslo, Oslo, Norway, 1999.
6. Białynicki-Birula, A.; Schinzel, A. Representations of multivariate polynomials by sums of univariate polynomials in linear forms. Colloq. Math. 2008, 112, 201–233.
7. Blekherman, G.; Teitler, Z. On maximum, typical and generic ranks. Math. Ann. 2015, 362, 1021–1031.
8. Ellison, W.J. A “Waring’s problem” for homogeneous forms. Math. Proc. Camb. Philos. Soc. 1969, 65, 663–672.
9. Segre, B. The non-singular cubic surfaces. Bull. Amer. Math. Soc. 1943, 49, 350–352.
10. Baker, H.F. Principles of Geometry. Volume 3. Solid Geometry. Quadrics, Cubic Curves in Space, Cubic Surfaces; Cambridge Library Collection; Cambridge University Press: Cambridge, UK, 1923.
11. Buczyński, J.; Teitler, Z. Some examples of forms of high rank. Collect. Math. 2016, 67, 431–441.
12. De Paris, A. The asymptotic leading term for maximum rank of ternary forms of a given degree. Linear Algebra Appl. 2016, 500, 15–29.
13. De Paris, A. High-rank ternary forms of even degree. Arch. Math. 2017, 109, 505–510.
14. Iarrobino, A.; Kanev, V. Power Sums, Gorenstein Algebras, and Determinantal Loci. With an Appendix “The Gotzmann Theorems and the Hilbert Scheme” by Anthony Iarrobino and Steven L. Kleiman; Lecture Notes in Mathematics; Springer: Berlin, Germany, 1999; Volume 1721.
15. Ehrenborg, R.; Rota, G. Apolarity and canonical forms for homogeneous polynomials. Eur. J. Comb. 1993, 14, 157–181.
16. Hartshorne, R. Algebraic Geometry; Graduate Texts in Mathematics, No. 52; Springer: New York, NY, USA; Heidelberg, Germany, 1977.
17. De Paris, A. A proof that the maximum rank for ternary quartics is seven. Matematiche 2015, 70, 3–18.
18. De Paris, A. Every ternary quintic is a sum of ten fifth powers. Int. J. Algebra Comput. 2015, 25, 607–631.
19. Carlini, E.; Catalisano, M.; Geramita, A. The solution to the Waring problem for monomials and the sum of coprime monomials. J. Algebra 2012, 370, 5–14.
20. Holmes, E.; Plummer, P.; Siegert, J.; Teitler, Z. Maximum Waring ranks of monomials and sums of coprime monomials. Commun. Algebra 2016, 44, 4212–4219.
21. Ballico, E.; De Paris, A. Generic Power Sum Decompositions and Bounds for the Waring Rank. Discret. Comput. Geom. 2017, 57, 896–914.
22. Jelisiejew, J. An upper bound for the Waring rank of a form. Arch. Math. 2014, 102, 329–336.
23. Buczyński, J.; Han, K.; Mella, M.; Teitler, Z. On the locus of points of high rank. Eur. J. Math. 2018, 4, 113–136.
Table 1. Known maximum Waring ranks in $S_d$, with $\dim S_1 = n$.

| $n \backslash d$ | 1 | 2 | 3 | 4 | 5 | $d$ |
|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 3 | 4 | 5 | $d$ |
| 3 | 1 | 3 | 5 | 7 | 10 | – |
| 4 | 1 | 4 | 7 | – | – | – |
| $n$ | 1 | $n$ | – | – | – | – |
Table 2. Ranges for $r_{\max}(n, d)$ in low degree.

| $n \backslash d$ | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 1 | 2 | 3 | 4 | 5 | 6 |
| 3 | 1 | 3 | 5 | 7 | 10 | 13–18 |
| 4 | 1 | 4 | 7 | 10–18 | 15–27 | 21–41 |
| 5 | 1 | 5 | 8–14 | 15–28 | 26–51 | 42–83 |
| 6 | 1 | 6 | 10–19 | 21–41 | 42–83 | 77–153 |