Article

Counting Tensor Rank Decompositions

Yukawa Institute for Theoretical Physics, Kyoto University, Kitashirakawa, Sakyo-ku, Kyoto 606-8502, Japan
*
Author to whom correspondence should be addressed.
Universe 2021, 7(8), 302; https://doi.org/10.3390/universe7080302
Submission received: 27 July 2021 / Revised: 9 August 2021 / Accepted: 12 August 2021 / Published: 15 August 2021
(This article belongs to the Special Issue Cosmological Models, Quantum Theories and Astrophysical Observations)

Abstract

Tensor rank decomposition is a useful tool for geometric interpretation of the tensors in the canonical tensor model (CTM) of quantum gravity. In order to understand the stability of this interpretation, it is important to be able to estimate how many tensor rank decompositions can approximate a given tensor. More precisely, finding an approximate symmetric tensor rank decomposition of a symmetric tensor $Q$ with an error allowance $\Delta$ is to find vectors $\phi^i$ satisfying $\left\|Q - \sum_{i=1}^{R} \phi^i \otimes \phi^i \otimes \phi^i\right\|^2 \leq \Delta$. The volume of all such possible $\phi^i$ is an interesting quantity which measures the amount of possible decompositions for a tensor $Q$ within an allowance. While it would be difficult to evaluate this quantity for each $Q$, we find an explicit formula for a similar quantity by integrating over all $Q$ of unit norm. The expression as a function of $\Delta$ is given by the product of a hypergeometric function and a power function. By combining new numerical analysis and previous results, we conjecture a formula for the critical rank, yielding an estimate for the spacetime degrees of freedom of the CTM. We also extend the formula to generic decompositions of non-symmetric tensors in order to make our results more broadly applicable. Interestingly, the derivation depends on the existence (convergence) of the partition function of a matrix model which previously appeared in the context of the CTM.

1. Introduction

The canonical tensor model (CTM) is a tensor model for quantum gravity which is constructed in the canonical formalism in order to introduce time into a tensor model [1] with, as its fundamental variables, the canonically conjugate pair of real symmetric tensors of degree three, Q a b c and P a b c . Interestingly, under certain algebraic assumptions, this model has been found to be unique [2]. Furthermore, several remarkable connections have been found between the CTM and general relativity [3,4,5] which, combined with the fact that defining the quantised model is mathematically very simple and straightforward [6], makes this a very attractive model to study in the context of quantum gravity.
Recent developments in the study of the canonical tensor model sparked interest in the tensor rank decomposition from the perspective of quantum gravity. The tensor rank decomposition is a decomposition of tensors into a sum of rank-1 tensors [7], also called simple tensors, and it might be seen as a generalisation of the singular value decomposition of matrices to tensors. 1 This is a tool frequently used in a broad range of sciences as it is often a very effective way to extract information from a tensor [8].
In Ref. [9], tensor rank decomposition was used to extract topological and geometric information from tensors used in the CTM. Here, every term in the decomposition corresponds to a (fuzzy) point, collectively forming a space that models a universe. However, finding the exact tensor rank decomposition of a tensor is, in general, next to impossible [10]. This means that a given tensor $Q_{abc}$, which in the CTM is the fundamental variable that is supposed to represent a spatial slice of spacetime, may potentially be approximated by several different decompositions, possibly corresponding to different universes. This leads to two questions related to the stability of this approach:
  • How many tensor rank decompositions are close to a given tensor Q a b c ?
  • Do different decompositions describe the same space (and if not, how much do they differ)?
In this work, we focus on the former of these questions. To understand this question, we introduce the configuration space of tensor rank decompositions for rank R, denoted by F R , and introduce the quantity to describe the volume of the configuration space close to a tensor Q:2
$$ V_R(Q, \Delta) = \int_{\mathcal{F}_R} d\Phi\, \Theta\!\left(\Delta - \|Q - \Phi\|^2\right), $$
where $\Phi \in \mathcal{F}_R$ denotes a tensor rank decomposition in the space of tensor rank decompositions that is integrated over, $\Theta(x)$ ($x \in \mathbb{R}$) is the Heaviside step function, and $\Delta$ is a parameter defining the maximum squared distance between $Q$ and $\Phi$. Better understanding this quantity will lead to a better understanding of the tensor rank decomposition configuration space, and of what to expect when aiming to approximate a tensor using tensor rank decomposition. In this work, we study a related quantity $Z_R(\Delta)$, which we arrive at by integrating (1) over normalised tensors $\tilde{Q}$. Analysing this quantity will give us information about the average amount of different decompositions, potentially representing different spaces, close to tensors, and analysing its divergent properties will lead to insights into the expected size, in terms of the number of fuzzy points, of spaces in the CTM.
Another motivation coming from the CTM to study the configuration space of tensor rank decompositions comes from the quantum CTM. A noteworthy fact about the CTM is that it has several known exact solutions for the quantum constraint equations [11]. One of these has recently been extensively analysed due to the emergence of Lie group symmetries in this wave function, which potentially hints towards the emergence of macroscopic spacetimes [12,13,14,15,16,17]. This wave function, in the Q-representation, is closely related to a statistical model [17] that is mathematically equivalent to
$$ \Psi(Q) = \int_{\mathcal{F}_R} d\Phi\; \mathcal{O}(\Phi)\, e^{-\kappa (Q - \Phi)^2}, $$
where O ( Φ ) only depends on the weights of the components of the decomposition, which will be more precisely defined below. This shows that for a full understanding of this statistical model, understanding the underlying configuration space and the behaviour of volumes therein is important.
Besides research in the CTM, this work might be more generally applicable. Similar questions might arise in other areas of science and, mathematically, there are many open questions about the nature of tensor rank decomposition. Understanding the configuration space constructed here might lead to significant insights elsewhere. For these reasons, the content of the paper is kept rather general. Our main research interests are real symmetric tensors of degree three, but we will consider both symmetric and generic (non-symmetric) tensors of general degree.
This work is structured as follows. We define the configuration space of tensor rank decompositions in Section 2. Here, we also give a proper definition of V R ( Q , Δ ) and introduce the main quantity we will analyse, Z R ( Δ ) , which is the average of V R ( Q , Δ ) over normalised tensors. Section 3 contains the main result of our work. There, we derive a closed formula for Z R ( Δ ) , which is guaranteed to exist under the condition that a certain quantity G R , which is independent of Δ , exists and is finite. Another interesting connection to the CTM is found at this point, since this quantity G R is a generalisation of the partition function of the matrix model studied in [14,15,16]. In Section 4, the existence of G R is proven for R = 1 , and numerical analysis is conducted for R > 1 for a specific choice of volume form d Φ to arrive at a conjecture for the maximal allowed value of R, called R c . In Section 5, we present direct numerical computations of Z R ( Δ ) to further verify the analytical derivation and conclude that the closed form indeed seems to be correct. Surprisingly, up to a divergent factor, the Δ -behaviour still appears to hold for R > R c . We finalise this work with some conclusions and discussions in Section 6.

2. Volume in the Space of Tensor Rank Decompositions

In this section, we introduce the configuration space of tensor rank decompositions and define the volume quantities we will analyse. We consider two types of tensor spaces, namely the real symmetric tensors of degree $K$, $\mathrm{Sym}^K(\mathbb{R}^N)$, and the space of generic (non-symmetric) real tensors, $\mathbb{R}^{N^K}$. This could be generalised even further in a relatively straightforward way, but for readability, only these two cases will be discussed. First, the symmetric case will be discussed, and afterwards, the differences to the generic case will be pointed out. For more information about the tensor rank decomposition, see Appendix A and references therein.
Consider an arbitrary symmetric tensor of (symmetric) rank $R$ (see Note 3), given by its tensor rank decomposition:
$$ \Phi_{a_1 \cdots a_K} = \sum_{i=1}^{R} \lambda_i\, \phi^i_{a_1} \cdots \phi^i_{a_K}, $$
where we choose the $\phi^i$ to lie on the upper hemisphere of the $(N-1)$-dimensional unit sphere, which we denote by $S_+^{N-1}$, and $\lambda_i \in \mathbb{R}$. This is mainly to remove redundancies, for later convenience, and to make the generalisation easier.
The configuration space can now be defined as all of these possible configurations for a given rank R:
$$ \mathcal{F}_R := \mathbb{R}^R \times \underbrace{S_+^{N-1} \times \cdots \times S_+^{N-1}}_{R \text{ times}} = \mathbb{R}^R \times \left(S_+^{N-1}\right)^{\times R}. $$
Note that while (3) links a given tensor rank decomposition in the space F R to a tensor in the tensor space Sym K ( R N ) , our objects of interest are the tensor rank decompositions themselves.
We define an inner product on the tensor space by, for $Q, P \in \mathrm{Sym}^K(\mathbb{R}^N)$,
$$ Q \cdot P = \sum_{a_1 \cdots a_K = 1}^{N} Q_{a_1 \cdots a_K}\, P_{a_1 \cdots a_K}, $$
which induces a norm $\|Q\|^2 := Q \cdot Q = \sum_{a_1 \cdots a_K = 1}^{N} Q_{a_1 \cdots a_K}^2$. We also write $Q^2 \equiv \|Q\|^2$ for brevity. On the configuration space $\mathcal{F}_R$, we introduce a measure by the infinitesimal volume element
$$ d\Phi_w = \prod_{i=1}^{R} |\lambda_i|^{w-1}\, d\lambda_i\, d\phi^i, $$
where $d\lambda_i$ is the usual line element of the real numbers, and $d\phi^i$ is the usual volume element on the $(N-1)$-dimensional unit sphere. The parameter $w$ (with $w \geq 1$) is introduced for generality. The choice $w = 1$ will turn out to be less singular, while $w = N$ corresponds to treating $(\lambda_i, \phi^i)$ as hyperspherical coordinates of $\mathbb{R}^N$.
In summary, for a given rank $R$, we constructed a configuration space $\mathcal{F}_R$ in (4) with the infinitesimal volume element (6), taking the inner product (5) on the tensor space. If $R < R'$, then $\mathcal{F}_R \subset \mathcal{F}_{R'}$, and thus we have an increasing sequence of spaces, which limits to the whole symmetric tensor space of tensors of degree $K$:
$$ \mathcal{F}_R \;\xrightarrow{\;R \to \infty\;}\; \mathrm{Sym}^K(\mathbb{R}^N) \cong \mathbb{R}^{N_Q} \ni Q, $$
where $N_Q := \binom{N+K-1}{K}$ counts the degrees of freedom of the tensor space.
A question one might ask is “Given a tensor Q, how many tensor rank decompositions of rank R approximate that tensor?”. For this, we define the following quantity
$$ V_R^\epsilon(Q, \Delta) := \int_{\mathcal{F}_R} d\Phi_w\, \Theta\!\left(\Delta - \|Q - \Phi\|^2\right) e^{-\epsilon \sum_{i=1}^{R} \lambda_i^2}, $$
where Δ is the maximum square distance of a tensor rank decomposition Φ a 1 a K to tensor Q a 1 a K , and ϵ is a (small) positive parameter. The exponential function is needed to regularise the integral, since even though Φ a 1 a K is bounded, the individual terms λ i ϕ a 1 i ϕ a K i might not be. This quantity gives an indication of how hard it will be to approximate a tensor Q by a rank-R tensor rank decomposition; a large value means there are many decompositions that approximate the tensor, while a small value might indicate that a larger rank is necessary.
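To make the objects above concrete, the following is a minimal sketch (not taken from the paper) of how a single point of $\mathcal{F}_R$ is mapped to a tensor via (3) and how its squared distance to a target tensor $Q$ is evaluated with the norm induced by (5). The values of $N$, $R$, the random seed, the hemisphere convention, and the (trivial) target $Q$ are illustrative assumptions; C++ is used since that is the language of the numerical work later in this paper.

```cpp
// Hedged sketch: map one configuration (lambda_i, phi^i) in F_R to a symmetric
// degree-3 tensor Phi via Equation (3) and evaluate ||Q - Phi||^2 as in (7).
// N, R, the seed and the (here trivial) target Q are illustrative assumptions.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const int N = 3, R = 2;
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    // R unit vectors phi^i on the upper hemisphere S_+^{N-1}
    // (here: Gaussian sampling, normalisation, and a sign flip so that the
    //  last component is non-negative) and R weights lambda_i.
    std::vector<std::vector<double>> phi(R, std::vector<double>(N));
    std::vector<double> lambda(R);
    for (int i = 0; i < R; ++i) {
        double norm2 = 0.0;
        for (int a = 0; a < N; ++a) { phi[i][a] = gauss(rng); norm2 += phi[i][a] * phi[i][a]; }
        for (int a = 0; a < N; ++a) phi[i][a] /= std::sqrt(norm2);
        if (phi[i][N - 1] < 0.0)
            for (int a = 0; a < N; ++a) phi[i][a] = -phi[i][a];
        lambda[i] = gauss(rng);
    }

    // Phi_{abc} = sum_i lambda_i phi^i_a phi^i_b phi^i_c  (Equation (3) with K = 3).
    std::vector<double> Phi(N * N * N, 0.0), Q(N * N * N, 0.0);  // Q = 0 for illustration
    for (int a = 0; a < N; ++a)
        for (int b = 0; b < N; ++b)
            for (int c = 0; c < N; ++c)
                for (int i = 0; i < R; ++i)
                    Phi[(a * N + b) * N + c] += lambda[i] * phi[i][a] * phi[i][b] * phi[i][c];

    // ||Q - Phi||^2 with the inner product (5): sum over all index combinations.
    double dist2 = 0.0;
    for (int idx = 0; idx < N * N * N; ++idx) {
        const double d = Q[idx] - Phi[idx];
        dist2 += d * d;
    }
    std::cout << "||Q - Phi||^2 = " << dist2 << "\n";
    return 0;
}
```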
While (7) might contain all the information one would want, it is hard to compute. Instead, we will introduce a quantity to make general statements about the configuration space by averaging this quantity over all normalised tensors Q ˜ a 1 a K (such that Q ˜ 2 = 1 ):
$$ Z_R(\Delta; \epsilon) := \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q}\; V_R^\epsilon(\tilde{Q}, \Delta). $$
Since the configuration space of $Q$ is isometric to $\mathbb{R}^{N_Q}$, it is possible to move to hyperspherical variables. $\tilde{Q}$ is then given by the angular part of $Q$. Furthermore, we have defined $V_{Q=1} := \int_{\|Q\|=1} d\tilde{Q} = \frac{2\pi^{N_Q/2}}{\Gamma(N_Q/2)}$. For now, we assume the existence of the $\epsilon \to 0^+$ limit of this quantity such that
$$ Z_R(\Delta) := \lim_{\epsilon \to 0^+} Z_R(\Delta; \epsilon). $$
This limit does not necessarily exist, and it diverges if R is taken too large, as we will show in Section 4. In Proposition 2 in the next section, we will obtain an explicit formula for Z R ( Δ ) found in (22) under the condition that the following quantity exists:
$$ G_R := \lim_{\epsilon \to 0^+} G_R(\epsilon) := \lim_{\epsilon \to 0^+} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\Phi^2 - \epsilon \sum_{i=1}^{R} \lambda_i^2}. $$
Note that since $G_R(\epsilon)$ is a positive, monotonically decreasing function of $\epsilon$, the $\epsilon \to 0^+$ limit either diverges or, if $G_R(\epsilon)$ is bounded from above, converges to a finite value.
This condition presents a peculiar connection to the canonical tensor model. Let us first rewrite
$$ G_R(\epsilon) = \int_{\mathcal{F}_R} \prod_{i=1}^{R} d\lambda_i\, |\lambda_i|^{w-1}\, d\phi^i\; e^{-\sum_{i,j=1}^{R} \lambda_i (\phi^i \cdot \phi^j)^K \lambda_j - \epsilon \sum_{i=1}^{R} \lambda_i^2}, $$
where we introduced the usual inner product on $S_+^{N-1} \subset \mathbb{R}^N$,
$$ \phi^i \cdot \phi^j = \sum_{a=1}^{N} \phi^i_a\, \phi^j_a, $$
inherited from the tensor space inner product. In Refs. [14,15,16], a matrix model was analysed that corresponds to a simplified wave function of the canonical tensor model. The matrix model under consideration had a partition function given by
$$ Z(k) = \int_{\mathbb{R}^{NR}} \prod_{i=1}^{R} \prod_{a=1}^{N} d\rho^i_a\; e^{-\sum_{i,j=1}^{R} (\rho^i \cdot \rho^j)^3 - k \sum_{i=1}^{R} (\rho^i \cdot \rho^i)^3}, $$
where $\rho^i \in \mathbb{R}^N$ with the usual Euclidean inner product on $\mathbb{R}^N$. Let us now go to hyperspherical coordinates $(r_i, \phi^i)$ for the $N$-dimensional subspace of every $i$, but instead of taking the usual convention where $r_i \geq 0$ and $\phi^i \in S^{N-1}$, we let $r_i \in \mathbb{R}$ and $\phi^i \in S_+^{N-1}$. Then
$$ Z(k) = \int \prod_{i=1}^{R} |r_i|^{N-1}\, dr_i\, d\phi^i\; e^{-\sum_{i,j=1}^{R} \left(r_i (\phi^i \cdot \phi^j) r_j\right)^3 - k \sum_{i=1}^{R} r_i^6} = \mathrm{const.} \int_{\mathcal{F}_R} \prod_{i=1}^{R} |\lambda_i|^{\frac{N-3}{3}}\, d\lambda_i\, d\phi^i\; e^{-\sum_{i,j=1}^{R} \lambda_i (\phi^i \cdot \phi^j)^3 \lambda_j - k \sum_{i=1}^{R} \lambda_i^2}, $$
where we have substituted $\lambda_i = r_i^3$ and $\mathrm{const.}$ is an irrelevant numerical factor. Comparing (12) with (11), we see that the matrix model studied in the context of the canonical tensor model is a special case of $G_R(\epsilon)$, where $\epsilon = k$, $K = 3$ and $w = N/K$.
Let us now turn to the case of generic (non-symmetric) tensors. We will point out the differences in the treatment and the result, though the derivation in Section 3 will be identical. We will still focus on tensors of degree $K$ that act on multiple copies of the Euclidean vector space $V = \mathbb{R}^N$, though generalisations of this could also be considered in a very similar way. A generic tensor of rank $R$ is given by
$$ \Phi^{(G)}_{a_1 \cdots a_K} = \sum_{i=1}^{R} \lambda_i\, \phi^{(1)i}_{a_1} \cdots \phi^{(K)i}_{a_K}, $$
where we again choose λ i R and ϕ ( k ) i S + N 1 . Note that the main difference here is that the vectors ϕ ( k ) i are independent and, thus, the generic configuration space will be bigger:
$$ \mathcal{F}^{(G)}_{R,K} := \mathbb{R}^R \times \underbrace{\left(S_+^{N-1}\right)^{\times K} \times \cdots \times \left(S_+^{N-1}\right)^{\times K}}_{R \text{ times}} = \mathbb{R}^R \times \left(\left(S_+^{N-1}\right)^{\times K}\right)^{\times R}, $$
where we now define the measure by the volume element
$$ d\Phi^{(G)}_w = \prod_{i=1}^{R} |\lambda_i|^{w-1}\, d\lambda_i \prod_{k=1}^{K} d\phi^{(k)i}. $$
Note that the degrees of freedom of the tensor space are now $N_Q = N^K$. Under these changes, we can again define analogues of (7), (9), and (10). With these re-definitions, the general result (22) will actually be the same, but now with $N_Q = N^K$ and $R$ being the generic tensor rank (instead of the symmetric rank).

3. Derivation of the Average Volume Formula

In this section, we will derive the result as presented in (22). The main steps of the derivation are performed in this section, but for some mathematical subtleties, we will refer to Appendix B, and for some general formulae to Appendix C. The general strategy for arriving at (22) is to take the Laplace transform, extract the dependence on the variables, and take the inverse Laplace transform.
Let us take the Laplace transform of (9) with (7) and (8) (see Appendix C.2):
$$ \bar{Z}_R(\gamma) = \int_0^\infty d\Delta\; Z_R(\Delta)\, e^{-\gamma\Delta} = \frac{1}{V_{Q=1}} \lim_{\epsilon\to0^+} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w \int_{\|\tilde{Q}-\Phi\|^2}^{\infty} d\Delta\; e^{-\gamma\Delta - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \frac{1}{\gamma\, V_{Q=1}} \lim_{\epsilon\to0^+} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma(\tilde{Q}-\Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}, $$
where we have taken the limit out of the Δ integration. It will be shown below when this is allowed. Let us multiply this quantity by γ
$$ \bar{Z}'_R(\gamma) := \gamma\, \bar{Z}_R(\gamma) = \frac{1}{V_{Q=1}} \lim_{\epsilon\to0^+} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma(\tilde{Q}-\Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}. $$
This will be undone again at a later stage. For later use, we will also define the quantity depending on ϵ without taking the limit:
$$ \bar{Z}'_R(\gamma; \epsilon) := \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma(\tilde{Q}-\Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}. $$
As an aside, recall that for the Laplace transform, multiplication by γ corresponds to taking the derivative in Δ -space. This means that we now effectively have a definition of the Laplace transform of the distributive quantity
$$ Z'_R(\Delta; \epsilon) := \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q}\; \partial_\Delta V_R^\epsilon(\tilde{Q}, \Delta) := \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; \delta\!\left(\Delta - \|\tilde{Q} - \Phi\|^2\right) e^{-\epsilon\sum_{i=1}^{R} \lambda_i^2}, $$
where δ ( x ) ( x R ) is the delta distribution, assuming that (15) is well defined (which will be shown below for the aforementioned assumption).
We will now present the first main result that will be necessary.
Proposition 1.
Given that (10) is finite, (15) is finite and given by
$$ \bar{Z}'_R(\gamma) = G_R\, \gamma^{-\frac{wR}{2}}\; {}_1F_1\!\left(\frac{N_Q - wR}{2}, \frac{N_Q}{2}, -\gamma\right). $$
Proof. 
Let us prove this proposition in the following two steps.
Step one: $\bar{Z}'_R(\gamma)$ is finite if $G_R$ is finite.
First, let us remark that the integrand in (16) is positive and, thus, to show that $\bar{Z}'_R(\gamma)$ is finite, it is enough to find a finite upper bound. Furthermore, because of the reverse triangle inequality, we have the inequality
$$ \|Q - \Phi\|^2 \geq \left(\|Q\| - \|\Phi\|\right)^2, $$
and from $(x-y)^2 = A y^2 - \frac{A}{1-A} x^2 + (1-A)\left(y - \frac{x}{1-A}\right)^2$ for $x, y \in \mathbb{R}$ and $0 < A < 1$, we have the inequality
$$ \|Q - \Phi\|^2 \geq A\, \Phi^2 - \frac{A}{1-A}\, Q^2. $$
Putting this together, we find that
$$ \bar{Z}'_R(\gamma; \epsilon) = \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma(\tilde{Q}-\Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} \leq \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma\left(A\Phi^2 - \frac{A}{1-A}\tilde{Q}^2\right) - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \frac{1}{V_{Q=1}} \int_{\|Q\|=1} d\tilde{Q}\; e^{\frac{\gamma A}{1-A}\tilde{Q}^2} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma A \Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = (\gamma A)^{-\frac{wR}{2}}\, e^{\frac{\gamma A}{1-A}}\, G_R\!\left(\frac{\epsilon}{\gamma A}\right). $$
This means that as long as $G_R = \lim_{\epsilon\to0^+} G_R(\epsilon)$ is finite, $\bar{Z}'_R(\gamma) = \lim_{\epsilon\to0^+} \bar{Z}'_R(\gamma; \epsilon)$ is finite, since we have a finite upper bound. Moreover, the limit converges, since $\bar{Z}'_R(\gamma; \epsilon)$ monotonically increases as $\epsilon \to 0^+$ and is bounded.
Step two: Find the closed form.
Let us introduce the quantity
$$ Y(\alpha, \gamma) := \lim_{\epsilon\to0^+} \int_{\mathbb{R}^{N_Q}} dQ \int_{\mathcal{F}_R} d\Phi_w\; e^{-\alpha Q^2 - \gamma(Q-\Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}. $$
Note that in this quantity, Q is defined over the whole tensor space R N Q , so not only the normalised tensors. In the appendix, Lemma A1 shows that this quantity is finite under the same assumption that G R is finite.
We can rewrite (18) in terms of G R as follows
$$ Y(\alpha, \gamma) = \lim_{\epsilon\to0^+} \int_{\mathbb{R}^{N_Q}} dQ \int_{\mathcal{F}_R} d\Phi_w\; e^{-(\alpha+\gamma)\left(Q - \frac{\gamma}{\alpha+\gamma}\Phi\right)^2 - \frac{\alpha\gamma}{\alpha+\gamma}\Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \left(\frac{\pi}{\alpha+\gamma}\right)^{\frac{N_Q}{2}} \lim_{\epsilon\to0^+} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\frac{\alpha\gamma}{\alpha+\gamma}\Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \left(\frac{\pi}{\alpha+\gamma}\right)^{\frac{N_Q}{2}} \left(\frac{\alpha+\gamma}{\alpha\gamma}\right)^{\frac{wR}{2}} G_R = \pi^{\frac{N_Q}{2}}\, \gamma^{-\frac{N_Q+wR}{2}}\, (1+t)^{-\frac{N_Q-wR}{2}}\, t^{-\frac{wR}{2}}\, G_R, $$
where $t \equiv \alpha/\gamma$. We can also relate (18) to $\bar{Z}'_R(\gamma)$ by using polar coordinates $Q \to (|Q|, \tilde{Q})$:
$$ Y(\alpha, \gamma) = \lim_{\epsilon\to0^+} \int_0^\infty d|Q|\, |Q|^{N_Q-1} \int_{\|Q\|=1} d\tilde{Q} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\alpha|Q|^2 - \gamma(|Q|\tilde{Q} - \Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = V_{Q=1} \lim_{\epsilon\to0^+} \int_0^\infty d|Q|\, |Q|^{N_Q - 1 + wR}\, e^{-\alpha|Q|^2}\, \bar{Z}'_R\!\left(\gamma|Q|^2; \epsilon|Q|^2\right) = \frac{1}{2}\, V_{Q=1}\, \gamma^{-\frac{N_Q+wR}{2}} \lim_{\epsilon\to0^+} \int_0^\infty dx\; x^{\frac{N_Q+wR}{2}-1}\, \bar{Z}'_R(x; \epsilon x/\gamma)\, e^{-tx} = \frac{1}{2}\, V_{Q=1}\, \gamma^{-\frac{N_Q+wR}{2}} \int_0^\infty dx\; x^{\frac{N_Q+wR}{2}-1}\, \bar{Z}'_R(x)\, e^{-tx}. $$
Here, in the first step, we rescaled $\lambda_i \to |Q|\lambda_i$; in the second step, we introduced a new integration variable $x \equiv \gamma|Q|^2$; and in the final step, we took the limit inside the integral, as is proven to be allowed in Lemma A2 in the appendix. Note the appearance of $\bar{Z}'_R(\gamma; \epsilon)$ as defined in (16).
By equating (19) and (20), we now arrive at the relation
$$ \int_0^\infty dx\; x^{\frac{N_Q+wR}{2}-1}\, \bar{Z}'_R(x)\, e^{-tx} = \Gamma\!\left[\frac{N_Q}{2}\right] G_R\, (1+t)^{-\frac{N_Q-wR}{2}}\, t^{-\frac{wR}{2}}. $$
The crucial observation now is that the left-hand side is the Laplace transform of the function $x^{\frac{N_Q+wR}{2}-1}\, \bar{Z}'_R(x)$. Hence, by taking the inverse Laplace transform of the right-hand side and using (A19) in the appendix, we find
$$ \bar{Z}'_R(x) = G_R\, x^{-\frac{wR}{2}}\; {}_1F_1\!\left(\frac{N_Q - wR}{2}, \frac{N_Q}{2}, -x\right). $$
Having obtained the result above, we undo the operation performed in (15):
$$ \bar{Z}_R(\gamma) = G_R\, \gamma^{-\frac{wR}{2}-1}\; {}_1F_1\!\left(\frac{N_Q - wR}{2}, \frac{N_Q}{2}, -\gamma\right). $$
The main remaining task to find the central result of this paper, an expression for Z R ( Δ ) , is to take the inverse Laplace transform of this function. This is performed in the proposition below.
Proposition 2.
Given that G R in (10) is finite, Z R ( Δ ) , as defined in (9), is given by
$$ Z_R(\Delta) = \frac{2\, G_R}{\Gamma\!\left(\frac{wR}{2}\right)} \cdot \begin{cases} \dfrac{1}{N_Q}\, \Delta^{\frac{N_Q}{2}}\; {}_2F_1\!\left(1 - \frac{wR}{2},\, \frac{N_Q - wR}{2},\, 1 + \frac{N_Q}{2},\, \Delta\right), & \Delta \leq 1, \\[6pt] \dfrac{1}{wR}\, \Delta^{\frac{wR}{2}}\; {}_2F_1\!\left(-\frac{wR}{2},\, \frac{N_Q - wR}{2},\, \frac{N_Q}{2},\, 1/\Delta\right), & \Delta \geq 1. \end{cases} $$
Proof. 
If (10) is finite and, thus, (21) exists and is finite, we need to perform the inverse Laplace transform of (21) in order to prove (22). This may be achieved as follows. First, we write (21) in terms of one of the Whittaker functions
$$ \bar{Z}_R(\gamma) = G_R\, \gamma^{-\frac{wR}{2} - \frac{N_Q}{4} - 1}\, e^{-\frac{\gamma}{2}}\, M_{\frac{N_Q}{4} - \frac{wR}{2},\, \frac{N_Q}{4} - \frac{1}{2}}(\gamma), $$
where we used Kummer's transformation (A13), and $M_{\nu,\mu}(\gamma)$ is one of the Whittaker functions, which may be found in (A14) in the appendix. Let us rewrite
$$ \bar{Z}_R(\gamma) = G_R\; \underbrace{\gamma^{-\frac{N_Q}{4}}\, e^{-\frac{\gamma}{2}}\, M_{\frac{N_Q}{4} - \frac{wR}{2},\, \frac{N_Q}{4} - \frac{1}{2}}(\gamma)}_{\mathcal{L}[f]}\; \underbrace{\gamma^{-\frac{wR}{2}-1}}_{\mathcal{L}[g]}, $$
such that we can now use the formula from the convolution theorem, which can be found in (A17) in the appendix. Let us first find the inverse Laplace transform of $\mathcal{L}[g]$, which may be found using Formula (A18) from the appendix
$$ g(t) = \frac{t^{\frac{wR}{2}}}{\Gamma\!\left(\frac{wR}{2} + 1\right)}. $$
The inverse Laplace transform of $\mathcal{L}[f]$ may be found using Formula (A20) from the appendix
$$ f(t) = \begin{cases} \beta\!\left(\frac{wR}{2}, \frac{N_Q - wR}{2}\right)^{-1}\, t^{\frac{N_Q - wR}{2} - 1}\, (1-t)^{\frac{wR}{2} - 1}, & 0 < t < 1, \\[4pt] 0, & \text{otherwise}, \end{cases} $$
where β is the beta-function defined in (A9). Combining these results with the convolution product formula (A17) in the appendix yields
$$ Z_R(\Delta) = \begin{cases} c_R \displaystyle\int_0^\Delta q^{\frac{N_Q - wR}{2} - 1}\, (1-q)^{\frac{wR}{2} - 1}\, (\Delta - q)^{\frac{wR}{2}}\, dq, & \Delta \leq 1, \\[8pt] c_R \displaystyle\int_0^1 q^{\frac{N_Q - wR}{2} - 1}\, (1-q)^{\frac{wR}{2} - 1}\, (\Delta - q)^{\frac{wR}{2}}\, dq, & \Delta \geq 1, \end{cases} $$
where $c_R \equiv \frac{G_R}{\Gamma\left(\frac{wR}{2}+1\right)\, \beta\left(\frac{wR}{2},\, \frac{N_Q - wR}{2}\right)}$. Let us focus on the $\Delta \geq 1$ case first. Using (A8), we find
$$ Z_R(\Delta) = c_R\, \Delta^{\frac{wR}{2}} \int_0^1 q^{\frac{N_Q - wR}{2} - 1}\, (1-q)^{\frac{wR}{2} - 1}\, (1 - q/\Delta)^{\frac{wR}{2}}\, dq = \frac{G_R}{\Gamma\!\left(\frac{wR}{2}+1\right)}\, \Delta^{\frac{wR}{2}}\; {}_2F_1\!\left(-\frac{wR}{2},\, \frac{N_Q - wR}{2},\, \frac{N_Q}{2},\, \frac{1}{\Delta}\right). $$
For $\Delta \leq 1$, we find
$$ Z_R(\Delta) = c_R \int_0^\Delta q^{\frac{N_Q - wR}{2} - 1}\, (1-q)^{\frac{wR}{2} - 1}\, (\Delta - q)^{\frac{wR}{2}}\, dq = c_R\, \Delta^{\frac{N_Q}{2}} \int_0^1 q'^{\frac{N_Q - wR}{2} - 1}\, (1 - \Delta q')^{\frac{wR}{2} - 1}\, (1 - q')^{\frac{wR}{2}}\, dq' = \frac{2\, G_R}{\Gamma\!\left(\frac{wR}{2}\right) N_Q}\, \Delta^{\frac{N_Q}{2}}\; {}_2F_1\!\left(1 - \frac{wR}{2},\, \frac{N_Q - wR}{2},\, \frac{N_Q}{2} + 1,\, \Delta\right), $$
where we changed the integration variable to $q' = q/\Delta$ in the second step. This result is in accord with (22). □
This concludes the proof of (22). As mentioned before, the derivation is exactly identical for generic tensors. The main difference now is that the number of degrees of freedom N Q is different for this tensor space. What is left to determine are the range of R for which G R is finite and the value of G R . This will be carried out in Section 4.
Before we finish this section, let us demonstrate some properties of this function. First, let us note that the parameters R and w always come together, even though they seemingly are unrelated when inspecting (7). This can be understood by the fact that every term in the tensor rank decomposition comes with a weight given by λ i . However, in the measure, we count every unit of λ with a power of w, so we have R terms that each scale with a factor of w, explaining why R and w always come together.
Now, we take a look at some special values of the function. Starting with the case where $wR/2 = 1$, we have the situation that for $\Delta \leq 1$, the hypergeometric part of the function will be constant, because the first argument is zero. For $\Delta \geq 1$, we see that the function will be of the form $1 + \frac{N_Q}{2}(\Delta - 1)$. Hence, the full function will simplify to
$$ Z_R(\Delta) \propto \begin{cases} \Delta^{N_Q/2}, & \Delta \leq 1, \\[2pt] 1 + \frac{N_Q}{2}(\Delta - 1), & \Delta \geq 1, \end{cases} $$
making the function linear for larger Δ . Let us try another simple case, namely for w R = N Q . In this case, the hypergeometric part becomes a constant everywhere, and we get
$$ Z_R(\Delta) \propto \Delta^{N_Q/2}. $$
Examples of the special values above, and others, are plotted in Figure 1.
Furthermore, let us focus on some of the limiting behaviour of the function. For $\Delta \to 0^+$, the hypergeometric part is approximately a constant, and we see
$$ \lim_{\Delta \to 0^+} Z_R(\Delta) \propto \Delta^{\frac{N_Q}{2}}. $$
Similarly, for $\Delta \to \infty$, the hypergeometric part is constant and the function tends to
$$ \lim_{\Delta \to \infty} Z_R(\Delta) \propto \Delta^{\frac{wR}{2}}. $$
In some sense, the hypergeometric part of the function interpolates between these two extremes. This is also shown in Figure 1.
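As a concrete illustration of how the closed form can be evaluated in practice, the following is a small sketch (not the authors' code) that computes the $\Delta \leq 1$ branch of (22) up to the overall factor $2 G_R / \Gamma(wR/2)$, using the defining series of the hypergeometric function (see Appendix C.1), which converges for $|\Delta| < 1$; the values of $N_Q$, $w$, and $R$ are illustrative assumptions.

```cpp
// Hedged sketch: evaluate the Delta <= 1 branch of (22) up to the prefactor
// 2 G_R / Gamma(wR/2), via the truncated Gauss series for 2F1 (valid for |z| < 1).
// The parameter values below are illustrative assumptions.
#include <cmath>
#include <iostream>

// Truncated series 2F1(a, b, c; z) = sum_n (a)_n (b)_n / (n! (c)_n) z^n, |z| < 1.
double hyp2f1(double a, double b, double c, double z, int nmax = 400) {
    double term = 1.0, sum = 1.0;
    for (int n = 0; n < nmax; ++n) {
        term *= (a + n) * (b + n) / ((c + n) * (n + 1.0)) * z;
        sum += term;
    }
    return sum;
}

int main() {
    const double NQ = 10.0;          // e.g. symmetric tensors with N = 3, K = 3
    const double w = 1.0, R = 4.0;   // w = 1 and an arbitrary rank R <= N_Q
    for (double Delta = 0.1; Delta < 1.0; Delta += 0.2) {
        const double zr = std::pow(Delta, NQ / 2.0) / NQ
            * hyp2f1(1.0 - w * R / 2.0, (NQ - w * R) / 2.0, 1.0 + NQ / 2.0, Delta);
        std::cout << "Delta = " << Delta
                  << "   Z_R(Delta) * Gamma(wR/2) / (2 G_R) = " << zr << "\n";
    }
    return 0;
}
```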
It is instructive to compare Z R ( Δ ) to another quantity,
$$ C_R(\Delta) := \int_{\mathcal{F}_R} d\Phi_w\, \Theta\!\left(\Delta - \Phi^2\right) = \frac{G_R}{\Gamma\!\left(\frac{wR}{2}+1\right)}\, \Delta^{\frac{wR}{2}}. $$
For the derivation of this quantity, we would like to refer to Appendix D. This quantity measures the amount of tensor rank decompositions of size smaller than $\Delta$, giving us a measure for the scaling of volume in the space of tensor rank decompositions. Figure 2 sketches the difference between $Z_R(\Delta)$ and $C_R(\Delta)$. It can be seen that in the $\Delta \to \infty$ limit, $Z_R(\Delta) \to C_R(\Delta)$.
Dividing Z R ( Δ ) by this quantity yields a quantity comparing the amount of tensor rank decompositions with a distance less than Δ from a tensor of size 1 to the amount of decompositions of size less than Δ :
$$ Z_R(\Delta)/C_R(\Delta) = \begin{cases} \frac{wR}{N_Q}\, \Delta^{\frac{N_Q - wR}{2}}\; {}_2F_1\!\left(1 - \frac{wR}{2},\, \frac{N_Q - wR}{2},\, \frac{N_Q}{2} + 1,\, \Delta\right), & \Delta \leq 1, \\[6pt] {}_2F_1\!\left(-\frac{wR}{2},\, \frac{N_Q - wR}{2},\, \frac{N_Q}{2},\, \frac{1}{\Delta}\right), & \Delta \geq 1. \end{cases} $$
This quantity is useful to predict the difficulty of finding a tensor rank decomposition close to a certain tensor in the tensor space. Notice here that the G R dependence drops out. This implies that this quantity might be well defined, even in the case that G R itself is not.
Upon inspecting Figure 3, it can be seen that (26) has some interesting $R$-dependence. Firstly, while the limiting behaviour for $\Delta \to \infty$ is already clear from (24) and the overlap in the regions as sketched in Figure 2, the quantity will limit to 1 from below for $wR < N_Q$, while for $wR > N_Q$, it will limit towards 1 from above. The reason for this is that for large $R$, even with small $\Delta$, there will be many tensor rank decompositions that approximate an arbitrary tensor with error allowance less than $\Delta$, while for small $\Delta$, the volume counted by $C_R(\Delta)$ will be small. This shows that for small $\Delta$, the regions in Figure 2 scale in different ways. Secondly, what is interesting is that the $R = 1$ curve overtakes the $R = 2$ curve around $\Delta = 1$, and for larger $R$ the behaviour for small $\Delta$ changes from accelerating to decelerating.
This motivates us to look at a specific case of the quantity (26), namely for Δ = 1 . As is clear from the structure of the function, Δ = 1 appears to be a special value which we can analyse further. Fixing Δ = 1 gives us the opportunity to look at the R and w-dependence a bit closer. Up until now, we have kept the value of w arbitrary; it is however interesting to see what happens for specific values of w. It turns out that, peculiarly, when taking
w K 3 N 11 12 ,
for generic tensors, the function $Z_R/C_R(\Delta = 1)$, as a function of $R$, appears to be minimised at (or very close to) the expected generic rank of the tensor space. 4 Some examples of this may be found in Figure 4. This means that up to the expected rank, the relative amount of decompositions that approximate tensors is decreasing, while from the expected rank onwards, the amount of decompositions that approximate a tensor of unit norm increases. The reason for the form of (27) is currently unknown, and it would be interesting to find a theoretical explanation for it.

4. Convergence and Existence of the Volume Formula

The derivation of the closed form of $Z_R(\Delta)$ depends on the existence of $G_R$, defined in (10). We will analyse its existence in the current section. Except for the case where $R = 1$, which is shown below, we will focus on numerical results, since a rigorous analytic understanding is not available at this point.
First, let us briefly focus on the case of general N , K and w, but specifically for R = 1 . This case is the only known case for general N , K and w that can be solved exactly. In this case the quantity simplifies to
$$ G_1(\epsilon) = \int_{-\infty}^{\infty} |\lambda|^{w-1}\, d\lambda \int_{S_+^{N-1}} d\phi\; e^{-(1+\epsilon)\lambda^2} = \frac{\Gamma\!\left(\frac{w}{2}\right)}{(1+\epsilon)^{w/2}} \cdot \frac{\pi^{\frac{N}{2}}}{\Gamma\!\left(\frac{N}{2}\right)}. $$
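(For instance, for $w = 1$ and $N = 3$, the $\epsilon \to 0^+$ limit of this expression evaluates to $G_1 = \Gamma(1/2)\,\pi^{3/2}/\Gamma(3/2) = 2\pi^{3/2} \approx 11.1$.)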
Clearly, in this case, the limit $\lim_{\epsilon\to0^+} G_1(\epsilon)$ exists, so there exists at least one $R$ for which the quantity exists. The main question now is up to what value of $R$, denoted $R_c$, the quantity exists.
Contrary to the $R = 1$ case above, one might expect that (10) does not always converge. The matrix model analysed in [14,15,16], corresponding to a choice of parameters of $K = 3$ and $w = N/K$, did not converge in general. It had a critical value around $R_c \simeq \frac{1}{2}(N+1)(N+2)$, above which the $\epsilon \to 0^+$ limit did not appear to converge anymore. In the current section, we will add numerical analysis for general $K$ and $w = 1$ and discuss the apparent leading order behaviour. The main result of this section is that for $w = 1$, the critical value seems to be $R_c = N_Q$. Hereafter, in this section, we will always assume $w = 1$.
The numerical analysis was conducted by first integrating out the λ i variables and subsequently using Monte Carlo sampling on the compact manifold that remains. The derivation below is for the symmetric case, but can be conducted for the generic case in a similar manner. The λ i can be integrated out in a relatively straightforward way since the measure in the w = 1 case is very simple. Let us rewrite (10) in a somewhat more suggestive form
$$ G_R(\epsilon) := \int_{\mathcal{F}_R} d\Phi_w\; e^{-\Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \int_{\mathcal{F}_R} \prod_{i=1}^{R} d\lambda_i\, d\phi^i\; e^{-\sum_{i,j=1}^{R} \lambda_i \left((\phi^i \cdot \phi^j)^K + \epsilon\,\delta_{ij}\right) \lambda_j}. $$
It can now be seen that, for the $\lambda_i$, this is a simple Gaussian matrix integral over the real numbers $\lambda_i$, with the matrix $M^\epsilon_{ij} := (\phi^i \cdot \phi^j)^K + \epsilon\,\delta_{ij}$. The result of this integral is
$$ G_R(\epsilon) = \pi^{R/2} \int_{\left(S_+^{N-1}\right)^{\times R}} \prod_{i=1}^{R} d\phi^i\; \frac{1}{\sqrt{\det\!\left[(\phi^i \cdot \phi^j)^K + \epsilon\,\delta_{ij}\right]}}, $$
which is a compact, finite (for ϵ > 0 ) integral. The corresponding expression for generic tensors is
$$ G_R(\epsilon) = \pi^{R/2} \int_{\left(\left(S_+^{N-1}\right)^{\times K}\right)^{\times R}} \prod_{i=1}^{R} \prod_{k=1}^{K} d\phi^{(k)i}\; \frac{1}{\sqrt{\det\!\left[\prod_{k=1}^{K} \left(\phi^{(k)i} \cdot \phi^{(k)j}\right) + \epsilon\,\delta_{ij}\right]}}. $$
We wrote a C++ program evaluating the integrals above using Monte Carlo sampling. The general method applied is the following:
  • Construct R, N-dimensional random normalised vectors using Gaussian sampling.
  • Generate the matrix M i j by taking inner products (and adding ϵ to the diagonal elements).
  • Calculate the determinant of M i j and evaluate the integrand.
  • Repeat this process M times.
The main difference between the above method and the method for generic tensors is that we generate $R \cdot K$ random vectors, and the matrix is now given by $M^\epsilon_{ij} := \prod_{k=1}^{K} \left(\phi^{(k)i} \cdot \phi^{(k)j}\right) + \epsilon\,\delta_{ij}$. To generate random numbers, we used C++'s Mersenne Twister implementation mt19937, and for the calculation of the determinant of $M^\epsilon_{ij}$, we used the C++ Eigen package [18].
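For concreteness, the following is a hedged sketch of this procedure for the symmetric-tensor integral above: it is our reading of the method described in the list, not the authors' actual program, and the values of $N$, $K$, $R$, $\epsilon$, the seed, and the number of samples are illustrative assumptions. It estimates the average of the integrand $1/\sqrt{\det M^\epsilon}$ over randomly sampled unit vectors; the prefactor $\pi^{R/2}$ and the volume of the $\phi$ integration domain have to be restored to obtain $G_R(\epsilon)$ itself.

```cpp
// Hedged sketch of the Monte Carlo procedure described above (symmetric case,
// w = 1): estimate the average of 1/sqrt(det M^eps) over random unit vectors.
// N, K, R, eps, the seed and the sample count are illustrative assumptions.
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <random>

int main() {
    const int N = 3, K = 3, R = 10;   // e.g. symmetric N = 3, K = 3, so N_Q = 10
    const double eps = 1e-4;          // regularisation parameter epsilon
    const long samples = 100000;      // number of Monte Carlo samples

    std::mt19937 rng(12345);
    std::normal_distribution<double> gauss(0.0, 1.0);

    double sum = 0.0;
    for (long s = 0; s < samples; ++s) {
        // Step 1: R random normalised N-dimensional vectors (Gaussian sampling).
        Eigen::MatrixXd phi(R, N);
        for (int i = 0; i < R; ++i) {
            for (int a = 0; a < N; ++a) phi(i, a) = gauss(rng);
            phi.row(i).normalize();
        }
        // Step 2: M^eps_ij = (phi^i . phi^j)^K + eps delta_ij.
        Eigen::MatrixXd M = (phi * phi.transpose()).array().pow(K).matrix();
        M += eps * Eigen::MatrixXd::Identity(R, R);
        // Steps 3-4: evaluate the integrand and accumulate.
        sum += 1.0 / std::sqrt(M.determinant());
    }
    // Average of the integrand; the factor pi^{R/2} and the volume of the
    // phi integration domain must be restored to obtain G_R(eps) itself.
    std::cout << "average integrand = " << sum / samples << "\n";
    return 0;
}
```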
We have conducted simulations using this method for both symmetric and generic tensors. After the initial results, it became clear that the critical value for R seems to lie on R c = N Q , so to verify this, we calculated the integral for R c 1 , R c and R c + 1 , and checked if G R indeed starts to diverge at R c + 1 .
What divergent behaviour to expect can be explained as follows. Let us take the limit $\lim_{\epsilon\to0^+} M^\epsilon_{ij} =: M_{ij}$. It is clear that this integral diverges whenever the matrix is degenerate. Assume now that $M_{ij}$ has rank $r$, meaning that the matrix $M_{ij}$ in diagonalised form has $R - r$ zero entries. Thus, adding a small but positive $\epsilon$ to the diagonal entries results in the following expansion
$$ \det M^\epsilon = A\, \epsilon^{R-r} + O\!\left(\epsilon^{R-r+1}\right), $$
leading, to leading order, to the integrand behaving as
$$ \frac{1}{\sqrt{\det M^\epsilon}} \propto \epsilon^{-\frac{R-r}{2}}. $$
Thus, if there is a set with measure nonzero in the integration region with r < R , the final ϵ -dependence for small epsilon is expected to be
$$ G_R(\epsilon) \approx C\, \epsilon^{-\frac{R - R_c}{2}} + O\!\left(\epsilon^{-\frac{R - R_c - 1}{2}}\right), $$
where the constant factor $C$ is the measure of the divergent set, and the other factor is due to non-leading-order integration regions of nonzero measure. Note that we should now take $r = R_c$, as by the definition of $R_c$, this will yield the leading order contribution to the integral. An example of this approach for finding $R_c$ for symmetric tensors with $N = 3$ and $K = 3$ is given in Figure 5. By the definition of $R_c$, for $R \leq R_c$, $G_R(\epsilon)$ should converge to a constant value.
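For instance, for symmetric tensors with $N = 3$ and $K = 3$ (so that $N_Q = 10$), this scaling predicts $G_{11}(\epsilon) \sim \epsilon^{-1/2}$ and $G_{12}(\epsilon) \sim \epsilon^{-1}$ for small $\epsilon$, if indeed $R_c = N_Q$.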
This procedure has been carried out for both symmetric and generic tensors and for various choices of the parameters K and N. The results of this can be found in Table 1. This procedure lets us also determine the value of G R numerically, as is also shown in the examples of Figure 5.
Generally, the result was quite clear: There is a transition point at R c = N Q . This is true for all examples we tried, except for the N = 2 cases for symmetric tensors, for which the critical value is R c = 1 .
Let us explain why an upper bound for the value of R c is given by N Q . The matrix may be written as
$$ M_{ij} = \sum_{a_1, \ldots, a_K = 1}^{N} \left(\phi^i_{a_1} \cdots \phi^i_{a_K}\right) \left(\phi^j_{a_1} \cdots \phi^j_{a_K}\right). $$
Thus, if we consider only the right part of the expression above (i.e., one of the rows of the matrix), it can be seen as the linear map
$$ \Lambda : \mathbb{R}^R \to \mathbb{R}^{N_Q}, \qquad (\lambda_1, \ldots, \lambda_R) \mapsto \sum_{i=1}^{R} \lambda_i\, \phi^i_{a_1} \cdots \phi^i_{a_K}. $$
A basic result from linear algebra is that a linear map from a vector space $V$ to $W$, with $\dim(V) \geq \dim(W)$, has a kernel of at least dimension
$$ \dim(\ker \Lambda) \geq \dim(V) - \dim(W). $$
Thus, for $R > N_Q$, this kernel always has a nonzero dimension, and since $M_{ij}$ is simply the square of this linear map ($M = \Lambda^{T}\Lambda$), $\det M = 0$. Thus, we may conclude
$$ R_c \leq N_Q. $$
The reason why the critical rank actually attains this maximal value for all cases $N > 2$ is, at present, not clear. However, it is good to note that for random matrices, the set of singular matrices has measure zero and, hence, for $R \leq R_c$, the matrix $M_{ij}$ constructed here appears to behave like a generic (random) matrix.
The current result of $R_c = N_Q$, together with the previous result of $R_c \simeq \frac{3}{N} N_Q$ for $w = N/K$ and $K = 3$ mentioned before, suggests a general formula that holds for most cases:
$$ R_c = \frac{N_Q}{w}. $$
This formula seems very simple, but there is no analytic understanding for this formula yet. At present, it should be treated merely as a conjecture.

5. Numerical Evaluation and Comparison

The main goal of this section is to numerically confirm the derived formula for $Z_R(\Delta)$ in (22). Therefore, we will mainly focus on values of $R \leq R_c$ found in Section 4 that allow for the existence of $G_R$ defined in (10), since in those cases, the derivation is expected to hold. We will briefly comment on cases where $R > R_c$ at the end of the section. In short, we will find that the relation found in (22) indeed holds for all cases that could be reliably calculated. In this section, we will always take $w = 1$, such that the integration measure on $\mathcal{F}_R$ is given by
$$ d\Phi := \prod_{i=1}^{R} d\lambda_i\, d\phi^i. $$
Since the integration region has a rapidly increasing dimension, we used Monte Carlo sampling to evaluate the integral. To do this, we alter the configuration space to a compact manifold by introducing a cutoff Λ
$$ \mathbb{R}^R \times \left(S_+^{N-1}\right)^{\times R} \;\to\; [-\Lambda, \Lambda]^R \times \left(S_+^{N-1}\right)^{\times R}, $$
and similarly for the generic tensor case:
$$ [-\Lambda, \Lambda]^R \times \left(\left(S_+^{N-1}\right)^{\times K}\right)^{\times R}. $$
With the integration region now being compact, there is no need for the extra regularisation parameter ϵ anymore, and we can let Λ play that role instead.
In order to look at a more complicated example than matrices, but still keep the discussion and calculations manageable, we will only consider tensors of degree 3 (i.e., $K = 3$). Since the difficulty of the direct evaluation of $Z_R(\Delta)$ rapidly increases due to the high dimension of the integration region, we will only focus on low values of $N$. To illustrate: noting that we also have to integrate over the normalised tensor space, the integration region for generic tensors with $N = 3$ and $R = 2$ is already 40-dimensional. Considering the derivation in Section 3 and the evidence for the existence of $G_R$ presented in Section 4, we will only show results for low values of $N$, as sufficient evidence for (22) is already at hand.
In the symmetric case, the N = 2 case is only well defined for R = 1 , since R c = 1 , as can be found in Table 1. This means that only evaluating N = 2 would yield only limited insight and, hence, we also evaluated cases for N = 3 . We evaluated all cases up to R c = 10 and found that results always agree with (22) up to numerical errors. Two examples may be found in Figure 6. For the generic case, the situation is slightly different. For N = 2 , the critical value R c = 8 , so we can already actually expect interesting behaviour in this case. Hence, we solely focus on the N = 2 case and evaluate the integral up to R c = 8 . Two examples of this may be found in Figure 6.
We may conclude that for both the symmetric and generic cases, the numerical results agree perfectly well with the derived Equation (22) and, moreover, match the values of G R determined independently in the numerical manner explained in Section 4.
We finalise this section with a remark on the case of $R > R_c$. In this case, $G_R$ diverges, and the correctness of formula (22) is not guaranteed anymore. This leads to a question: Does $Z_R(\Delta)$ also diverge for $R > R_c$, or is the divergence of $G_R$ only problematic for the derivation of its closed form? We investigated the simplest case for this: symmetric tensors with dimension $N = 2$ and rank $R = 2$. We found that $Z_R(\Delta)$ still diverges, by setting $\Delta = 1$ and investigating the dependence on $\Lambda$, which can be seen in Figure 7. One peculiar fact we discovered is that the functional form of $Z_R^\Lambda(\Delta)$ for fixed and finite $\Lambda$ still follows the functional dependence on $\Delta$ of (22), as also shown in Figure 7.
This last fact suggests the possibility that the quantity defined in (26) might actually be finite even for $R > R_c$, since the diverging parts will cancel out when taking the $\epsilon \to 0^+$ limit (or $\Lambda \to \infty$ as in this section). To support this a bit further, let us consider the differential equation solved by the hypergeometric function (A7), which is a homogeneous ordinary differential equation. If we rewrite our result from (22) 5
$$ {}_2F_1(a, b, c; z) =: u(z) \propto z^{-A}\, Z_R(z), $$
and plug this into the hypergeometric differential equation, we notice that the resulting equation, which is the equation solved by $Z_R(z)$, is necessarily still a homogeneous ordinary differential equation. If we assume that the actual physically relevant properties are described by this differential equation, an overall factor should not matter. Hence, if we extract this overall factor (which might become infinite in the limit $\epsilon \to 0^+$), we should be left with the physically relevant behaviour.

6. Conclusions and Discussions

Motivated by recent progress in the study of the canonical tensor model, in this work we turned our attention to the space of tensor rank decompositions. Because of the analogy between the terms of a tensor rank decomposition and points in a discrete space discussed in [9], we call this the configuration space of tensor rank decompositions. This space has the topology of a product of $R$ copies of the real line and $R$ copies of an $(N-1)$-dimensional unit hemisphere. We equip this space with a measure generated by an infinitesimal volume element, depending on the parameter $w$. In the definition, we are rather general, taking into account both symmetric and non-symmetric tensors.
The central result of this work is the derivation of a closed formula for the average volume around a tensor of unit norm, $Z_R(\Delta)$, in terms of a hypergeometric function in (22). This formula depends on the degrees of freedom of the tensor space, the parameter $w$ of the measure, and the rank of the tensor rank decompositions we are considering. The existence of such a closed form formula is far from obvious, and the derivation crucially depends on the existence of a quantity $G_R$. We have investigated the existence of this quantity numerically for the case where $w = 1$. In this case, the maximum value of $R$ for the existence appears to agree with the degrees of freedom of the tensor space, $R_c = N_Q$, with the exception of the case of symmetric tensors with $N = 2$. Together with earlier results in [14,15,16], we conjecture a more general Formula (30). Finally, we conducted some direct numerical checks of $Z_R(\Delta)$ and found general agreement with the derived formula.
From a general point of view, we have several interesting future research directions. For one, the conjectured Formula (30) for the maximum value $R_c$ is based on the analysis of two values of $w$. It might be worth extending this analysis to more values, which might lead to a proper analytical explanation for this formula that is currently missing. Secondly, we introduced a quantity $C_R(\Delta)$, describing the amount of decompositions of size less than $\Delta$. Dividing $Z_R(\Delta)$ by $C_R(\Delta)$, we expect that this leads to a meaningful quantity that is finite, even for $R > R_c$. Understanding this quantity and its convergence (or divergence) better would be worth investigating. Finally, a peculiar connection between $w$ and the expected rank was found for some examples, where tuning $w$ as in (27) led to $Z_R/C_R(\Delta = 1)$ being minimised at the expected rank of the tensor space. Whether this is just a coincidence, or has some deeper meaning, would be interesting to take a closer look at.
Let us discuss what the results mean for the canonical tensor model of quantum gravity. The present work provides the first insight into the question of how many tensor rank decompositions are close to a given tensor $Q_{abc}$. In the CTM, the rank $R$ considered corresponds to the amount of fuzzy points in a spatial slice of spacetime. The most natural choice for the parameter $w$ is $w = N/3$, because this treats the points in the fuzzy space as elements of $\mathbb{R}^N$ [9]. The conjectured Formula (30) then implies that the expected spacetime degrees of freedom in the CTM are bounded by
$$ R_c^{\mathrm{grav.}} = \frac{1}{2}(N+1)(N+2), $$
which is one of the reasons why the further study of the quantity G R and an analytic understanding of the critical value R c is highly interesting from the point of view of the CTM.
Going back to the original question of how many discrete spaces of a given size (i.e., amount of points $R$) are close to a tensor, we note that, here, we are mainly interested in the case of small $\Delta$. As is shown in (23), the function $Z_R(\Delta)$ becomes a power function proportional to $\Delta^{N_Q/2}$ in this limit. Therefore, the $R$-dependence in this regime is only present in the constant pre-factor, in particular in $G_R$. This means that in future studies, to fully answer this question, only the quantity $G_R$ has to be considered, dramatically simplifying our problem. Another interesting future research area would be to find a way to practically compute $V_R(Q, \Delta)$ for a given tensor $Q$, as our main results here are estimates, since we take the average over tensors of size one.
To conclude, we would like to point out that the Formula (22) could prove to be important in the understanding of the wave function of the canonical tensor model studied in [12,13,14,15,16,17]. In [17], the phase of the wave function was analysed in the Q-representation; the amplitude of the wave function is, however, not known. From [12,13], we expect that there is a peak structure, where the peaks are located at Q a b c that are symmetric under Lie group symmetries. In the present paper, we have determined a formula for the mean amplitude 6, which we can use to compare to the local wave function values in future works.

Author Contributions

Both authors contributed equally to this article. Both authors have read and agreed to the published version of the manuscript.

Funding

The work of N.S. is supported in part by JSPS KAKENHI Grant No.19K03825.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Tensor Rank Decompositions

The tensor rank decomposition, also called the canonical polyadic decomposition, may be thought of as a generalisation of the singular value decomposition (SVD) for matrices, which are tensors of degree two, to tensors of general degree. For a more extensive introduction to tensors and tensor rank decomposition, we would like to refer to [19,20].
The SVD decomposes a given real N × N matrix M into M = A T Λ B , where A and B are orthogonal matrices and Λ is a diagonal matrix, the diagonal components of which are called the singular values. 7 The amount of nonzero singular values of a given matrix is called the rank of the matrix, denoted by R. To extend the SVD to tensors of general degree, let us rewrite this in a more suggestive form, which is called the dyadic notation of the matrix
$$ M_{ab} = \sum_{i=1}^{R} \sum_{j=1}^{R} (A_i)_a\, \Lambda_{ii}\, \delta_{ij}\, (B_j)_b =: \sum_{i=1}^{R} \lambda_i\, v^i_a\, w^i_b, $$
where $v^i, w^i \in \mathbb{R}^N$ and $\lambda_i \equiv \Lambda_{ii} \in \mathbb{R}$ are the nonzero singular values. The generalisation to general tensors of degree $K$ is now straightforward:
$$ Q_{a_1 \cdots a_K} = \sum_{i=1}^{R} \lambda_i\, v^{(1)i}_{a_1} \cdots v^{(K)i}_{a_K}, $$
where the rank $R$ is now defined as the lowest number for which such a decomposition exists, and $v^{(k)i} \in \mathbb{R}^N$. For symmetric tensors (similar to symmetric matrices), we can find a decomposition in terms of symmetric rank-1 tensors, meaning that every term in the decomposition is generated by a single vector
$$ Q_{a_1 \cdots a_K} = \sum_{i=1}^{R} \lambda_i\, v^i_{a_1} \cdots v^i_{a_K}. $$
The minimum R for which this is possible is called the symmetric rank.
The space of tensor rank decompositions with R components, F R , is a subset of the full tensor space
$$ \mathcal{F}_R \subset \mathcal{T} = V \otimes \cdots \otimes V. $$
This space increases as R becomes bigger, and in its limit it spans the whole tensor space. A typical rank R t of the tensor space T is a rank for which F R has positive measure in the full tensor space. This typical rank is not necessarily unique, but if this is the case, it is called the generic rank.
The expected generic rank, R E , is a conjectured formula for the generic rank that a tensor space is expected to have, which has been proven to provide a lower estimate of the generic rank. The formula for the non-symmetric case is given by:
$$ R_E = \left\lceil \frac{N^K}{N K - K + 1} \right\rceil. $$
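For example, for $N = K = 3$ this gives $R_E = \lceil 27/7 \rceil = 4$.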
Note that while the tensor rank decomposition generalises the singular value decomposition, there are many differences between the two [21]. For example, often the tensor rank decomposition is unique [8], but actually computing the tensor rank decomposition is very hard [10].
Note that the vectors v ( k ) i may be re-scaled as
$$ \phi^{(k)i} := \pm \frac{v^{(k)i}}{\|v^{(k)i}\|}, \qquad \lambda_i \to \lambda_i \prod_{k=1}^{K} \left(\pm \|v^{(k)i}\|\right), $$
where the sign is taken such that ϕ ( k ) i lies on the upper hemisphere S + N 1 R N . This is the form we will use in order to remove redundancies in the definition.

Appendix B. Lemmas

This appendix section contains two lemmas used in the propositions of Section 3.
Lemma A1.
Given that G R in (10) is finite, for α , γ > 0 the following limit of the integral
$$ Y(\alpha, \gamma) := \lim_{\epsilon\to0^+} \int_{\mathbb{R}^{N_Q}} dQ \int_{\mathcal{F}_R} d\Phi_w\; e^{-\alpha Q^2 - \gamma(Q - \Phi)^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}, $$
is finite.
Proof. 
Using the same inequality with 0 < A < 1 ,
$$ \|Q - \Phi\|^2 \geq \left(\|Q\| - \|\Phi\|\right)^2 \geq A\, \Phi^2 - \frac{A}{1-A}\, Q^2, $$
as in step one of the proof of Proposition 1, we obtain
$$ Y(\alpha, \gamma) \leq \lim_{\epsilon\to0^+} \int_{\mathbb{R}^{N_Q}} dQ \int_{\mathcal{F}_R} d\Phi_w\; e^{-\alpha Q^2 + \frac{\gamma A}{1-A} Q^2 - \gamma A \Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2} = \int_{\mathbb{R}^{N_Q}} dQ\; e^{-\left(\alpha - \frac{\gamma A}{1-A}\right) Q^2} \lim_{\epsilon\to0^+} \int_{\mathcal{F}_R} d\Phi_w\; e^{-\gamma A \Phi^2 - \epsilon\sum_{i=1}^{R} \lambda_i^2}. $$
In the second line, it can be seen that the $Q$ and $\Phi$ integrations decouple, where the $Q$ integration is simply a finite Gaussian integral if one takes $A$ such that $\alpha > \frac{\gamma A}{1-A}$. The $\Phi$ integration is nothing more than a finite constant multiplied by $G_R$.
Hence, we conclude that this integration is finite if lim ϵ 0 + G R ( ϵ ) exists. □
Lemma A2.
The limits in Equation (20) may be safely interchanged, i.e.,
$$ \lim_{\epsilon\to0^+} \int_0^\infty dx\; \bar{Z}'_R(x; \epsilon x)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} = \int_0^\infty dx\; \lim_{\epsilon\to0^+} \bar{Z}'_R(x; \epsilon x)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx}, $$
under the assumption that $\lim_{\epsilon\to0^+} G_R(\epsilon)$ converges and is finite.
Proof. 
In order to prove (A4), let us take an X > 0 and split the integral into two parts
$$ \lim_{\epsilon\to0^+} \int_0^X dx\; \bar{Z}'_R(x; \epsilon x)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} + \lim_{\epsilon\to0^+} \int_X^\infty dx\; \bar{Z}'_R(x; \epsilon x)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx}, $$
and consider both parts separately.
For the first term, we know that the integral and limit can be interchanged if the integrand is uniformly convergent, i.e.,
$$ \lim_{\epsilon\to0^+} \sup_{x \in [0, X)} \left| x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} \left( \bar{Z}'_R(x; \epsilon x) - \bar{Z}'_R(x) \right) \right| = 0. $$
Now, note that the function $\bar{Z}'_R(x; \epsilon x)$ is bounded by a contribution proportional to $x^{-\frac{wR}{2}}$, as shown in (17), but the expression above has a factor of $x^{\frac{N_Q + wR}{2} - 1}$; thus, the point $x = 0$ does not pose a problem, and the value above is finite for all $x \in [0, X)$. Moreover, since from the first step of the proof of Proposition 1 we know that $\bar{Z}'_R(x; \epsilon x)$ converges to $\bar{Z}'_R(x)$,
$$ \forall x \in [0, X): \quad \left| x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} \right| \left| \bar{Z}'_R(x; \epsilon x) - \bar{Z}'_R(x) \right| \to 0, $$
and, hence, we have uniform convergence, meaning that the integral and limiting operations may be interchanged.
For the second term, since $\bar{Z}'_R(x; \epsilon)$ is decreasing in $x$ and $\epsilon$, we obtain an upper bound (also using the convergence of $\bar{Z}'_R(x; \epsilon x)$, which has been proven already)
$$ \int_X^\infty dx\; \bar{Z}'_R(x; \epsilon x)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} \leq \int_X^\infty dx\; \bar{Z}'_R(X)\, x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx} = \bar{Z}'_R(X) \int_X^\infty dx\; x^{\frac{N_Q + wR}{2} - 1}\, e^{-tx}. $$
Now, $\bar{Z}'_R(X)$ does not increase for larger $X$, and the final integral converges to zero for large $X$. This means that the left-hand side vanishes in the limit $X \to \infty$.
Thus, we conclude that the integral and limiting operations may be interchanged. □

Appendix C. Necessary Formulae

In this work, we use some nontrivial formulae that are listed in this subsection. Most of them are used in Section 3 for the proof of Propositions 1 and 2. This section is divided into formulae related to the hypergeometric functions, Appendix C.1, and formulae directly related to the inverse Laplace transforms, Appendix C.2.

Appendix C.1. Properties of Hypergeometric Functions

The hypergeometric function and its generalisations play a central role in many fields of mathematics, physics, and other sciences. The reason for this is that many of the special functions used throughout these areas can be expressed in terms of the hypergeometric function. An overview of the hypergeometric function and its application may be found in [22], and a resource for the confluent hypergeometric function (including the Whittaker’s function mentioned below) may be found in [23]. In this work, the final result is expressed in terms of the hypergeometric function, whereas in the derivation, we use the confluent hypergeometric function. This appendix section summarises some important notions, definitions, and formulae.
The generalised hypergeometric function, in some sense a generalisation of the geometric series, is defined as the analytic continuation of the series
$$ {}_pF_q(a_1, \ldots, a_p; b_1, \ldots, b_q; z) = \sum_{n=0}^{\infty} \frac{(a_1)_n \cdots (a_p)_n}{n!\, (b_1)_n \cdots (b_q)_n}\, z^n, $$
where we used the Pochhammer symbols
$$ (a)_n = \frac{\Gamma[a+n]}{\Gamma[a]}. $$
The hypergeometric function is the case where p = 2 and q = 1 , i.e., inside the range of convergence
$$ {}_2F_1(a, b, c; z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{n!\, (c)_n}\, z^n. $$
The hypergeometric function may also be defined as the solution to the hypergeometric differential equation
$$ z(1-z)\, \frac{d^2 u(z)}{dz^2} + \left[c - (a+b+1) z\right] \frac{d u(z)}{dz} - a b\, u(z) = 0. $$
For $\mathrm{Re}(c) > \mathrm{Re}(b) > 0$ and $z$ not a real number with $z \geq 1$, the hypergeometric function has an integral representation, 8
$$ {}_2F_1(a, b, c; z) = \frac{1}{\beta(b, c-b)} \int_0^1 dt\; t^{b-1} (1-t)^{c-b-1} (1 - zt)^{-a}, $$
where β ( a , b ) is the beta-function defined by
$$ \beta(a, b) := \frac{\Gamma[a]\, \Gamma[b]}{\Gamma[a+b]}. $$
The confluent hypergeometric function is defined by the limit
$$ M(a, c; z) := \lim_{b \to \infty} {}_2F_1(a, b, c; z/b) = {}_1F_1(a, c; z), $$
which exactly corresponds to the series representation defined in (A5) for $p = q = 1$. The differential equation associated to this function may be found in a similar way, and is called Kummer's equation 9
$$ z\, \frac{d^2 w(z)}{dz^2} + [c - z]\, \frac{d w(z)}{dz} - a\, w(z) = 0. $$
The confluent hypergeometric function also has an integral representation given by
$$ {}_1F_1(a, c; z) = \frac{1}{\beta(a, c-a)} \int_0^1 dt\; e^{zt}\, t^{a-1} (1-t)^{c-a-1}, $$
for Re ( c ) > Re ( a ) > 0 . One property of the confluent hypergeometric function we will need is Kummer’s transformation:
$$ e^{-z}\, {}_1F_1(a, c; z) = {}_1F_1(c - a, c; -z). $$
The Whittaker functions are variants of the confluent hypergeometric functions. The first Whittaker function is the only one we will use, and it is defined by [23]
$$ M_{\nu, \mu}(z) := e^{-\frac{z}{2}}\, z^{\mu + \frac{1}{2}}\; {}_1F_1\!\left(\mu - \nu + \tfrac{1}{2},\, 1 + 2\mu;\, z\right). $$

Appendix C.2. The (Inverse) Laplace Transform

The Laplace transform and its inverse are heavily used tools in mathematics, physics, engineering, and other sciences. A good introduction and overview of this area of mathematics is [24]. In Ref. [25], many explicit Laplace transforms may be found. 10
The Laplace transform (or Laplace integral) of a function f ( t ) is given by
$$ F(s) \equiv \mathcal{L}(f)(s) := \int_0^\infty e^{-st}\, f(t)\, dt. $$
The Laplace transform is a very useful tool in many respects. For our purposes, on the one hand, it makes it possible to convert a complicated integral to a closed formula in Laplace space and, on the other hand, we find a formula that exactly corresponds to a Laplace transform, which lets us extract a function by taking the inverse Laplace transform. More generally, it is often used for solving differential equations. The main reason for this is that under the Laplace transformation, taking a derivative corresponds to multiplication by the variable $s$ in Laplace space.
Of course, neither taking the Laplace transform nor taking the inverse Laplace transform is always an easy task. In our case, taking the Laplace transform is not that difficult, but the inverse Laplace transform is more involved.
The Laplace transform of a function $f(t)$ exists if the function satisfies two properties: (1) it is of exponential order, and (2) it is integrable over any finite domain in $[0, \infty)$. Note that from (A15), it can easily be seen that the inverse Laplace transform cannot be unique, since every null function (a function of measure zero) may be added to a function and result in the same Laplace transform. Hence, the inverse Laplace transformation can only be expected to map towards an equivalence class generated by the null functions. In the present work, however, this ambiguity does not affect our final result: the function (8) is clearly a monotonically increasing function in $\Delta$, and the end result (22) is continuous and, hence, there is no possibility for a null function to be added.
For two functions f ( t ) and g ( t ) , we can define the convolution as
$$ (f * g)(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau. $$
It can straightforwardly be verified that convolution is both commutative and associative. If we assume the convergence of the Laplace integral of f ( t ) and g ( t ) , then the convolution theorem holds
$$ \mathcal{L}(f * g) = \mathcal{L}(f)\, \mathcal{L}(g), $$
in other words, the convolution of two functions in the usual domain corresponds to a product in the Laplace domain.
The Laplace transform used in Section 3 is just a straightforward computation of (A15), but we also use two inverse Laplace transforms. Hence, below are three inverse Laplace transformations we use. We will give short proofs for the formulae.
The first inverse Laplace transform we need is a relatively easy one, namely the inverse Laplace transform of $x^{-A-1}$:
$$ \mathcal{L}^{-1}\!\left[x^{-A-1}\right] = \frac{t^A}{\Gamma[A+1]}. $$
This can be found by using (A15) on the right-hand side. This formula is valid for $A > -1$.
In this work, we need the inverse Laplace transform of $(1+x)^{-A}\, x^{-B}$. This is given by
$$ \mathcal{L}^{-1}\!\left[(1+x)^{-A}\, x^{-B}\right] = \frac{t^{A+B-1}\; {}_1F_1(A,\, A+B,\, -t)}{\Gamma(A+B)}. $$
Showing this is a little less trivial. For this, let us take the Laplace transform of the right hand side, using the integral representation of (A12),
$$ \mathcal{L}\!\left[t^{A+B-1}\; {}_1F_1(A, A+B, -t)\right] = \frac{1}{\beta(A, B)} \int_0^\infty dt\; e^{-tx}\, t^{A+B-1} \int_0^1 d\tau\; e^{-t\tau}\, \tau^{A-1} (1-\tau)^{B-1} = \frac{1}{\beta(A, B)} \int_0^1 d\tau\; \tau^{A-1} (1-\tau)^{B-1} \int_0^\infty dt\; e^{-t(x+\tau)}\, t^{A+B-1} = \frac{\Gamma[A+B]}{\beta(A, B)} \int_0^1 d\tau\; \tau^{A-1} (1-\tau)^{B-1} (x+\tau)^{-A-B} = \Gamma[A+B]\, (1+x)^{-A}\, x^{-B}, $$
where in the second step, we used (A18).
The last explicit equation we will need is related to the Whittaker function (A14),
$$ \mathcal{L}^{-1}\!\left[\beta\!\left(\mu - \nu + \tfrac{1}{2},\, \mu + \nu + \tfrac{1}{2}\right) x^{-\frac{1}{2}-\mu}\, e^{-\frac{x}{2}}\, M_{\nu,\mu}(x)\right] = \begin{cases} 0, & t < 0, \\ t^{\mu+\nu-\frac{1}{2}}\, (1-t)^{\mu-\nu-\frac{1}{2}}, & 0 \leq t \leq 1, \\ 0, & t > 1. \end{cases} $$
One can find this inverse Laplace transform by using the definition of the Laplace transform (A15), the integral representation of the confluent hypergeometric function (A12), the definition of the Whittaker function (A14), and Kummer's transformation (A13):
$$ \mathcal{L}\!\left[t^{\mu+\nu-\frac{1}{2}}\, (1-t)^{\mu-\nu-\frac{1}{2}}\, \Theta(t < 1)\right] = \int_0^1 dt\; e^{-xt}\, t^{\mu+\nu-\frac{1}{2}}\, (1-t)^{\mu-\nu-\frac{1}{2}} = \beta\!\left(\mu - \nu + \tfrac{1}{2},\, \mu + \nu + \tfrac{1}{2}\right) {}_1F_1\!\left(\mu + \nu + \tfrac{1}{2},\, 2\mu + 1;\, -x\right) = \beta\!\left(\mu - \nu + \tfrac{1}{2},\, \mu + \nu + \tfrac{1}{2}\right) e^{-x}\; {}_1F_1\!\left(\mu - \nu + \tfrac{1}{2},\, 2\mu + 1;\, x\right) = \beta\!\left(\mu - \nu + \tfrac{1}{2},\, \mu + \nu + \tfrac{1}{2}\right) x^{-\frac{1}{2}-\mu}\, e^{-\frac{x}{2}}\, M_{\nu,\mu}(x). $$

Appendix D. The Expression of C_R(Δ)

In (25), we introduce the following quantity:
$$C_R(\Delta) := \int_{F_R} d\Phi_w\,\Theta\!\left(\Delta - |\Phi|^2\right).$$
A proper definition of this quantity would assume a regularisation function such as in (8). In this appendix, we keep the discussion short and heuristic; a proper derivation including this regularisation function would go exactly along the lines of the derivation of Z_R(Δ) in Section 3. In a similar way to the derivation of Z_R(Δ), assuming the existence of G_R, we can take the Laplace transform
$$\begin{aligned}
\bar{C}_R(\gamma) &= \int_0^{\infty} d\Delta \int_{F_R} d\Phi_w\, e^{-\gamma\Delta}\,\Theta\!\left(\Delta - |\Phi|^2\right) \\
&= \int_{F_R} d\Phi_w \int_{|\Phi|^2}^{\infty} d\Delta\, e^{-\gamma\Delta} \\
&= \gamma^{-1}\int_{F_R} d\Phi_w\, e^{-\gamma|\Phi|^2} \\
&= \gamma^{-\frac{wR}{2}-1}\, G_R.
\end{aligned}$$
Now that we have related the Laplace transform to G_R, we can take the inverse Laplace transform using (A18):
$$C_R(\Delta) = \frac{G_R}{\Gamma\!\left[\frac{wR}{2}+1\right]}\,\Delta^{\frac{wR}{2}}.$$
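To illustrate the logic of this short derivation, the following toy sketch replaces $F_R$ by flat $\mathbb{R}^d$ and $d\Phi_w$ by the Lebesgue measure, so that $d$ plays the role of $wR$ and the Gaussian integral $G = \pi^{d/2}$ plays the role of $G_R$. The resulting formula $C(\Delta) = G\,\Delta^{d/2}/\Gamma[d/2+1]$ is then simply the volume of a $d$-ball of radius $\sqrt{\Delta}$, which the sketch checks against a crude Monte Carlo estimate. All names and parameter values are illustrative; this is not the measure used in the main text.

```python
# Toy analogue of the derivation of C_R(Delta): F_R -> R^d, dPhi_w -> flat measure.
# Then G = int d^d x e^(-|x|^2) = pi^(d/2), and the inverse Laplace transform gives
# C(Delta) = G Delta^(d/2) / Gamma(d/2 + 1), i.e. the volume of a d-ball of radius sqrt(Delta).
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(0)
d, Delta, n_samples = 4, 1.3, 2_000_000

# Monte Carlo estimate of int d^d x Theta(Delta - |x|^2): sample the enclosing cube.
L = sqrt(Delta)
x = rng.uniform(-L, L, size=(n_samples, d))
inside = np.sum(x**2, axis=1) <= Delta
mc_volume = inside.mean() * (2 * L)**d

# Closed formula obtained from the inverse Laplace transform.
formula = pi**(d / 2) * Delta**(d / 2) / gamma(d / 2 + 1)
print(mc_volume, formula)   # should agree within Monte Carlo error
```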

Notes

1
For more information we would like to refer to Appendix A.
2
This is a formal definition which will be properly regulated later on.
3
Note that the usual definition of the rank of a tensor is the minimal value R such that there is a solution to Equation (3).
4
The expected rank of a tensor space is the expected rank for which F R becomes dense in (an open subset of) the full tensor space. See Appendix A.
5
Here, we took the case where z < 1 , and the exact same argument holds for the z > 1 case.
6
In the actual wave function of CTM in (2), the O(Φ) part contains a product of Airy functions. Since the size of this part is generally bounded from above for a reasonable choice of the wave function, the mean value here refers to an upper bound of the local wave function.
7
To keep the discussion simple, only real N × N matrices are considered here, but this may be generalised in a straightforward manner.
8
This is actually the proper analytic continuation of the series above.
9
There is another function besides 1 F 1 ( a , c ; z ) that satisfies the differential equation in (A11). This is called the confluent hypergeometric function of the second kind.
10
A note of caution here: the formula corresponding to (A20), for instance, is given incorrectly there.

References

1. Sasakura, N. Canonical tensor models with local time. Int. J. Mod. Phys. 2012, 27, 1250020.
2. Sasakura, N. Uniqueness of canonical tensor model with local time. Int. J. Mod. Phys. 2012, 27, 1250096.
3. Sasakura, N.; Sato, Y. Interpreting canonical tensor model in minisuperspace. Phys. Lett. 2014, 732, 32–35.
4. Sasakura, N.; Sato, Y. Constraint algebra of general relativity from a formal continuum limit of canonical tensor model. JHEP 2015, 10, 109.
5. Chen, H.; Sasakura, N.; Sato, Y. Equation of motion of canonical tensor model and Hamilton-Jacobi equation of general relativity. Phys. Rev. 2017, 95, 066008.
6. Sasakura, N. Quantum canonical tensor model and an exact wave function. Int. J. Mod. Phys. 2013, 28, 1350111.
7. Hitchcock, F.L. The Expression of a Tensor or a Polyadic as a Sum of Products. J. Math. Phys. 1927, 6, 164–189.
8. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51.
9. Kawano, T.; Obster, D.; Sasakura, N. Canonical tensor model through data analysis: Dimensions, topologies, and geometries. Phys. Rev. D 2018, 97, 124061.
10. Hillar, C.J.; Lim, L.H. Most Tensor Problems Are NP-Hard. J. ACM 2013, 60.
11. Narain, G.; Sasakura, N.; Sato, Y. Physical states in the canonical tensor model from the perspective of random tensor networks. JHEP 2015, 1, 10.
12. Obster, D.; Sasakura, N. Symmetric configurations highlighted by collective quantum coherence. Eur. Phys. J. C 2017, 77, 783.
13. Obster, D.; Sasakura, N. Emergent symmetries in the canonical tensor model. PTEP 2018, 2018, 043A01.
14. Lionni, L.; Sasakura, N. A random matrix model with non-pairwise contracted indices. PTEP 2019, 2019, 073A01.
15. Sasakura, N.; Takeuchi, S. Numerical and analytical analyses of a matrix model with non-pairwise contracted indices. Eur. Phys. J. C 2020, 80, 118.
16. Obster, D.; Sasakura, N. Phases of a matrix model with non-pairwise index contractions. PTEP 2020, 2020, 073B06.
17. Sasakura, N. Phase profile of the wave function of canonical tensor model and emergence of large spacetimes. arXiv 2021, arXiv:2104.11845v1.
18. Guennebaud, G.; Jacob, B. Eigen v3. 2010. Available online: http://eigen.tuxfamily.org (accessed on 20 July 2021).
19. Hackbusch, W. Tensor Spaces and Numerical Tensor Calculus; Springer Series in Computational Mathematics; Springer Nature Switzerland: Cham, Switzerland, 2019.
20. Landsberg, J. Tensors: Geometry and Applications; Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 2011.
21. Comon, P. Tensors: A brief introduction. IEEE Signal Process. Mag. 2014, 31, 44–53.
22. Seaborn, J. Hypergeometric Functions and Their Applications; Texts in Applied Mathematics; Springer: New York, NY, USA, 1991.
23. Slater, L. Confluent Hypergeometric Functions; Cambridge University Press: London, UK, 1960.
24. Doetsch, G.; Nader, W. Introduction to the Theory and Application of the Laplace Transformation; Springer: Berlin/Heidelberg, Germany, 1974.
25. Oberhettinger, F.; Badii, L. Tables of Laplace Transforms; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1973.
Figure 1. On the left: Z_R(Δ) for symmetric tensors with Δ running from Δ = 0 to 2, where K = 3, N = 2, and w = 1. On the right: the limiting behaviour of Z_R(Δ) for K = 3, N = 4, w = 2, R = 3, again for symmetric tensors. The blue curve represents (22), the red line the small-Δ behaviour of (23), and the green line the large-Δ behaviour of (24). α_R = Γ(wR/2 − N_Q/2) G_R is a normalisation factor.
Figure 2. A sketch showing the difference in the quantities Z_R(Δ) and C_R(Δ). The red dotted line represents the normalised tensors. The blue shaded area represents the area counted by Z_R(Δ), and the red shaded area represents the area counted by C_R(Δ). On the left we take Δ ≪ 1, and on the right we take Δ ≫ 1.
Figure 3. The quantity Z_R(Δ)/C_R(Δ) for K = 3, N = 2, w = 1, and R ranging from 1 to 5. We can identify some of the behaviour expected from (26) and (24). For any value of R, the function approaches 1 as Δ → ∞. For wR = N_Q, the function is just one everywhere.
Figure 4. Examples of the minima obtained when choosing w as in (27). The horizontal axis labels R, while the vertical axis labels Z_R/C_R(Δ = 1). The red line represents the expected rank, see (A2), of the tensor space (which is taken to be generic).
Figure 5. An example of the verification of R_c and the determination of the numerical value of G_R. This is the case of symmetric tensors with K = 3 and N = 3. The dots (with error bars) represent the measurements, and the fitted curves are C ε^{−(R−R_c)/2} + const. for R > R_c as in (29), and the constant value G_R for R ≤ R_c as in (10). This clearly shows that, in this case, R_c = 10.
Figure 6. Several examples of the direct numerical evaluation of Z_R(Δ) for K = 3 and w = 1 as a function of Δ. The dots illustrate the numerically evaluated values, while the line is the curve in (22) with the value of G_R determined numerically as explained in Section 4.
Figure 7. Numerical evaluation of Z_{R=2}(Δ) for N = 2. On the left, we set Δ = 1 and vary Λ on the horizontal axis. It can be seen that the value indeed diverges linearly, as expected from the discussion in Section 4, since this corresponds to a divergence G_R(ε) ∝ ε^{−1/2} because of ε ∝ Λ^{−2}. On the right, we set Λ = 10 and vary Δ on the horizontal axis to show that the functional form (except for the divergent part) is still given by Formula (22).
Table 1. The results of the verification of R_c for both symmetric tensors and generic tensors. It can be seen that in most cases, except N = 2 for symmetric tensors, the hypothesis R_c = N_Q holds.

Symmetric Tensors
K   N   R_c   N_Q
2   2   1     3
2   3   6     6
2   4   10    10
2   5   15    15
3   2   1     4
3   3   10    10
3   4   20    20
3   5   35    35
4   2   1     5
4   3   15    15
4   4   35    35

Generic Tensors
K   N   R_c   N_Q
2   2   4     4
2   3   9     9
2   4   16    16
2   5   25    25
3   2   8     8
3   3   27    27
3   4   64    64
4   2   16    16
4   3   81    81