Article

Calculating the Projective Norm of Higher-Order Tensors Using a Gradient Descent Algorithm

by Aaditya Rudra 1 and Maria Anastasia Jivulescu 2,*
1 Department of Electrical Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur 440010, Maharashtra, India
2 Department of Mathematics, Politehnica University of Timişoara, Victoriei Square 2, 300006 Timişoara, Romania
* Author to whom correspondence should be addressed.
Mathematics 2026, 14(1), 105; https://doi.org/10.3390/math14010105
Submission received: 24 November 2025 / Revised: 20 December 2025 / Accepted: 24 December 2025 / Published: 27 December 2025
(This article belongs to the Special Issue Recent Advances in Scientific Computing & Applications)

Abstract

Projective norms are a class of tensor norms defined on tensor products of Banach spaces, and they provide a useful measure of entanglement. Computing the projective norm is an NP-hard problem, made challenging by the exponentially growing parameter space of higher-order tensors. We develop a novel gradient descent algorithm to estimate the projective norm of higher-order tensors. The algorithm exhibits stable convergence to a minimum nuclear-rank decomposition of the given tensor in all our numerical experiments. We further extend the algorithm to symmetric tensors and to density matrices. We demonstrate the performance of our algorithm by computing the nuclear rank and the projective norm for both pure and mixed states and provide numerical evidence supporting these results.

1. Introduction

Quantum information science brings new ideas to computing and information processing by exploiting properties of quantum mechanics such as entanglement and superposition [1]. Entanglement is both a feature and a resource of the quantum world; hence the interest in efficient methods to characterise and detect it. One way of detecting whether a quantum state is entangled is to compute its projective norm and check whether its value is larger than 1 [2]. Nevertheless, the computation of this tensor norm is NP-hard [3], and therefore there is a need for alternative algorithms for computing it [4].
In this paper, a novel gradient descent-based algorithm for computing the projective tensor norm is introduced. The algorithm shows consistent convergence to the global minimum norm value, i.e., the projective tensor norm, and it works for both pure states (rank-one density matrices, or m-tensors of Hilbert–Schmidt norm 1) and mixed states (statistical mixtures of pure states, or 2m-tensors, usually called density tensors [4]). Separable quantum states correspond to probabilistic mixtures of product states, and therefore separable density tensors are generalisations of product states. An entangled quantum state is one that cannot be written as a product state, i.e., a rank-one tensor; hence, entangled quantum states are described by density tensors that are not separable. Various works have related the detection of entangled multipartite quantum states to the computation of the projective tensor norm of the corresponding density tensors [5]. For example, the injective tensor norm is related, for pure states, to the geometric measure of entanglement [6], whereas the projective tensor norm gives a necessary and sufficient condition for testing the separability of a quantum state, namely checking whether the projective norm of the tensor equals 1 [2]. Relaxations of this result have been introduced in [5], where, based on linear contractions called entanglement testers, a unified approach to the theory of entanglement criteria based on the projective tensor norm is presented.
This paper focuses on developing a new gradient descent algorithm to compute the projective tensor norm of higher-order tensors. We mention that the convergence analysis of the algorithm is beyond the scope of the present paper and will be the subject of a separate study.
This paper is organised as follows: Section 2 recalls the main mathematical tools for tensor norms, such as the definitions of the projective and injective tensor norms and their mathematical properties. Section 3 explains the existing algorithm for computing the projective norm, as well as the new method. Section 4 presents the performance of the algorithm for both pure and mixed quantum states by comparing the obtained results with known analytical and numerical values for selected classes of higher-order tensors. The section also includes representative non-quantum tensor examples to illustrate the generality of the proposed method. Section 5 analyses the performance of the algorithm with respect to parameter sensitivity and provides a comparison with other algorithms.

2. The Projective Norm and Its Application in Quantum Information Theory

In this section, we recall basic definitions and facts about the different natural norms that can be introduced on the algebraic tensor product of finite-dimensional Banach spaces. We focus mostly on the projective norm and its application to detecting entangled states. We start by defining the projective and the injective tensor norms for (finite-dimensional) Banach spaces $V_i$, $i = 1, 2, \dots, m$.
Definition 1.
Given m finite-dimensional Banach spaces $V_1, \dots, V_m$ with respective norms $\|\cdot\|_{V_i}$, and $T \in V_1 \otimes \cdots \otimes V_m$, the projective (nuclear) tensor norm of the tensor $T$ is
$$\|T\|_\pi := \inf\left\{ \sum_{k=1}^{r_N} \|x_k^1\|_{V_1} \cdots \|x_k^m\|_{V_m} \;:\; r_N \in \mathbb{N},\; x_k^i \in V_i,\; T = \sum_{k=1}^{r_N} x_k^1 \otimes \cdots \otimes x_k^m \right\}.$$
Here, the infimum is taken over all decompositions of the m-tensor $T = \sum_{k=1}^{r_N} x_k^1 \otimes \cdots \otimes x_k^m$, where $r_N$ is a finite but arbitrary integer. The Banach space induced by the projective norm on $V_1 \otimes \cdots \otimes V_m$ is denoted $V_1 \otimes_\pi \cdots \otimes_\pi V_m$.
The projective norm of a tensor can also be defined as
$$\|T\|_\pi := \inf\left\{ \sum_{k=1}^{r_N} |\lambda_k| \;:\; r_N \in \mathbb{N},\; T = \sum_{k=1}^{r_N} \lambda_k\, x_k^1 \otimes \cdots \otimes x_k^m,\; x_k^i \in V_i,\; \|x_k^i\|_{V_i} \le 1 \right\}.$$
The integer $r_N$ attaining the infimum is called the nuclear rank, i.e., the number of terms in a minimal nuclear decomposition of the tensor $T$. Given that the quantity $\sum_{k=1}^{r} \prod_{j=1}^{m} \|x_k^j\|_{V_j}$ can be seen as the energy of the tensor $T = \sum_{k=1}^{r} x_k^1 \otimes \cdots \otimes x_k^m$, the projective norm can be interpreted as the minimum energy needed to decompose $T$ into a sum of rank-one tensors [7].
Definition 2.
The injective (spectral) tensor norm of a tensor $T \in V_1 \otimes \cdots \otimes V_m$ is
$$\|T\|_\epsilon := \sup\left\{ |\langle \alpha_1 \otimes \cdots \otimes \alpha_m, T \rangle| \;:\; \alpha_i \in V_i^*,\; \|\alpha_i\|_{V_i^*} \le 1 \right\}.$$
Here, $V_i^*$ is the dual space of $V_i$, that is, the space of all bounded linear functionals on $V_i$, endowed with the norm $\|\alpha_i\|_{V_i^*} := \sup_{\|x\|_{V_i} \le 1} |\alpha_i(x)|$. The Banach space induced by the injective norm on $V_1 \otimes \cdots \otimes V_m$ is denoted by $V_1 \otimes_\epsilon \cdots \otimes_\epsilon V_m$.
The nuclear and spectral norms factorise on simple tensors, i.e., for all $x^1 \in V_1, \dots, x^m \in V_m$,
$$\|x^1 \otimes \cdots \otimes x^m\|_\pi = \|x^1 \otimes \cdots \otimes x^m\|_\epsilon = \|x^1\|_{V_1} \cdots \|x^m\|_{V_m},$$
and they are dual to one another; that is, for all $T \in V_1 \otimes \cdots \otimes V_m$,
$$\|T\|_\pi = \sup_{\|\alpha\|_{V_1^* \otimes_\epsilon \cdots \otimes_\epsilon V_m^*} \le 1} \langle \alpha, T \rangle, \qquad \|T\|_\epsilon = \sup_{\|\alpha\|_{V_1^* \otimes_\pi \cdots \otimes_\pi V_m^*} \le 1} \langle \alpha, T \rangle.$$
Among the set of tensor norms that can be defined, the projective and the injective tensor norms are extremal [8] (p. 127), i.e., for any other tensor norm $\|\cdot\|$ on $V_1 \otimes \cdots \otimes V_m$, it holds that
$$\forall\, T \in V_1 \otimes \cdots \otimes V_m, \quad \|T\|_\epsilon \le \|T\| \le \|T\|_\pi.$$
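For order-2 tensors on Euclidean spaces, these extremal norms reduce to familiar matrix norms: the injective norm is the largest singular value and the projective norm is the sum of singular values, with the Frobenius norm lying between them. The extremality inequality can be checked numerically; a minimal NumPy sketch (not part of the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)   # singular values of A

injective = s.max()                  # spectral norm: ||A||_eps for order 2
frobenius = np.sqrt((s**2).sum())    # an intermediate tensor norm
projective = s.sum()                 # nuclear norm: ||A||_pi for order 2

# Extremality: ||A||_eps <= ||A|| <= ||A||_pi
assert injective <= frobenius <= projective
```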
In recent years, tensor norms have been used intensively in quantum information theory to study different notions and properties of quantum systems, such as entanglement and quantum measurements [9]. We recall here basic notions, such as quantum entanglement, that are relevant for the goal of this paper.
A pure multipartite quantum state is a unit vector $|\psi\rangle$ in a tensor product of complex Hilbert spaces $\mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_k$, where $k \ge 2$ and $\mathcal{H}_i \simeq \mathbb{C}^{d_i}$ for $1 \le i \le k$. The vector $|\psi\rangle$ is said to be separable if it is a product tensor:
$$|\psi\rangle = |\psi_1\rangle \otimes \cdots \otimes |\psi_k\rangle.$$
Here, each Banach space appearing in the factorisation is $(\mathcal{H}_i, \|\cdot\|_2)$, where $\|\cdot\|_2$ is the Euclidean norm. Usually, one writes $\ell_2^d := (\mathbb{C}^d, \|\cdot\|_2)$. The separability condition for pure states is described in a simple manner, as follows:
Proposition 1.
A multipartite pure state $|\psi\rangle \in \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_k$, $\|\psi\|_2 = 1$, is separable if and only if
$$\|\psi\|_\epsilon = \|\psi\|_\pi = 1.$$
For mixed multipartite quantum states, which are positive semidefinite, unit-trace operators $\rho$ on $\mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_k$, each of the Banach spaces $\mathcal{B}(\mathcal{H}_i) \simeq M_{d_i}(\mathbb{C})$, $1 \le i \le k$, is considered with respect to the Schatten 1-norm (or nuclear norm)
$$\|X\|_1 = \operatorname{Tr}\sqrt{X^* X}.$$
We write $S_1^d := (M_d(\mathbb{C}), \|\cdot\|_1)$ for this complex Banach space. Since mixed quantum states are self-adjoint operators, we shall also consider the real Banach space $S_{1,\mathrm{sa}}^d := (M_d^{\mathrm{sa}}(\mathbb{C}), \|\cdot\|_1)$.
The separability of mixed quantum states can be checked using the following result.
Theorem 1
([2]). For a multipartite mixed quantum state $\rho \in M_{d_1}(\mathbb{C}) \otimes \cdots \otimes M_{d_m}(\mathbb{C})$, $\rho \ge 0$, $\operatorname{Tr}(\rho) = 1$, the following assertions are equivalent:
(i)
$\rho$ is separable;
(ii)
$\|\rho\|_{S_{1,\mathrm{sa}}^{d_1} \otimes_\pi \cdots \otimes_\pi S_{1,\mathrm{sa}}^{d_m}} = 1$;
(iii)
$\|\rho\|_{S_1^{d_1} \otimes_\pi \cdots \otimes_\pi S_1^{d_m}} = 1$.
Theorem 1 gives a necessary and sufficient condition to check separability, but is difficult to use in practice, as the computation of tensor norms is an NP-hard problem.
For pure states $|\psi\rangle \in (\mathbb{C}^d)^{\otimes m}$, it holds that [5] (Remark 12.3)
$$\big\| |\psi\rangle \big\|_{(\ell_2^d)^{\otimes_\pi m}}^2 = \big\| |\psi\rangle\langle\psi| \big\|_{(S_1^d)^{\otimes_\pi m}}.$$
Also, the injective tensor norm is related to the geometric measure of entanglement [6], which is a faithful measure of entanglement for multipartite pure states (it is equal to 0 if and only if the state is a product state).

3. Algorithm

In this section, we briefly explain the existing algorithm used to compute the projective norm. We then elaborate upon our developed method, which we will use to estimate the projective norm. Our method is further extended to symmetric tensors and density matrices.

3.1. Alternating Method

The alternating method algorithm [4] computes the projective norm of a tensor by optimising over a single rank-one product tensor at a time while keeping the remaining rank-one product tensors fixed. This results in a minimisation problem equivalent to minimising a sum of Euclidean norms under a linear constraint, specifically a second-order cone program (SOCP). For symmetric tensors, the algorithm is adapted to maintain symmetry throughout the decomposition, further exploiting the symmetric tensor structure to reduce computational overhead.
This method provides an upper bound on the projective norm. However, due to the alternating approach, it does not guarantee a global optimum and might even converge to a local minimum or critical point.

3.2. Canonical Polyadic Decomposition

Before presenting our algorithm, we briefly recall the Canonical Polyadic Decomposition (CPD) [10], as the next sections build on it to develop our algorithm. The CPD approximates a tensor $T$ by a sum $T'$ of $R$ rank-one tensors, with $R$ a finite positive integer. The vectors $a_j^i$, $i \in \{1, 2, \dots, m\}$, forming the rank-one terms are called cores:
$$T' = \sum_{j=1}^{R} a_j^1 \otimes a_j^2 \otimes \cdots \otimes a_j^m$$
The goal of the CPD is to minimise the mean square error $\|T - T'\|$ (we shall call this the reconstruction cost henceforth) by optimising the cores. One algorithm to achieve this is alternating least squares (ALS) [11], where the optimisation is performed between specific views of the approximation and the original tensor. However, its rate of convergence is slow, and it may converge to suboptimal stationary points, since ALS considers only one view of the original tensor at a time. Hence, we use an adaptive gradient descent algorithm, which adjusts parameters by computing individual adaptive step sizes from estimates of the first- and second-order moments of the gradients [12]. This allows simultaneous optimisation of all the cores and helps overcome saddle points on the parameter landscape by adaptively adjusting the step size, so that the algorithm does not get stuck in a shallow local minimum. It also improves the speed of convergence, especially in higher-dimensional parameter spaces.
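As a toy illustration of gradient-based core optimisation, the sketch below fits the cores of a rank-one order-3 tensor by descending the squared reconstruction cost. This is a minimal sketch with plain, non-adaptive gradient descent and hand-coded gradients, rather than the adaptive optimiser and autograd used in the paper; the target tensor and step size are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-one target tensor T = u ⊗ v ⊗ w.
u = np.array([1.0, 2.0]); v = np.array([0.5, -1.0]); w = np.array([2.0, 1.0])
T = np.einsum('i,j,k->ijk', u, v, w)

# Randomly initialised cores; minimise L = ||T - a ⊗ b ⊗ c||^2.
a, b, c = (rng.standard_normal(2) for _ in range(3))
alpha = 0.02   # fixed step size (the paper uses adaptive step sizes instead)

def loss(a, b, c):
    return ((T - np.einsum('i,j,k->ijk', a, b, c))**2).sum()

initial = loss(a, b, c)
for _ in range(5000):
    R = T - np.einsum('i,j,k->ijk', a, b, c)      # residual tensor
    # Partial derivatives of the squared reconstruction cost w.r.t. each core.
    ga = -2 * np.einsum('ijk,j,k->i', R, b, c)
    gb = -2 * np.einsum('ijk,i,k->j', R, a, c)
    gc = -2 * np.einsum('ijk,i,j->k', R, a, b)
    a -= alpha * ga; b -= alpha * gb; c -= alpha * gc

assert loss(a, b, c) < 1e-3 < initial
```

The same update structure carries over to higher ranks and orders; the adaptive optimiser only changes how the step size multiplying each gradient is chosen.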

3.3. Adaptive Rank Canonical Polyadic Decomposition

The CPD approximates a tensor $T$ as a sum of $R$ rank-one tensors, but this does not give a minimum-rank decomposition. To ensure that $R$ converges to the rank $r(T)$ of the tensor, we modify the objective function of the CPD to optimise not only the cores but also $R$; we call the result the adaptive rank canonical polyadic decomposition (ARCPD). We do this by introducing coefficients $C_j$ into our reconstructed tensor $T'$:
$$T' = \sum_{j=1}^{R} C_j \cdot \phi_j, \quad \text{where } \phi_j = \frac{a_j^1 \otimes a_j^2 \otimes \cdots \otimes a_j^m}{\left\| a_j^1 \otimes a_j^2 \otimes \cdots \otimes a_j^m \right\|}$$
We optimise so as to minimise not just the reconstruction cost but also the number of nonzero $C_j$ coefficients (we shall call this the rank cost henceforth). Hence, the effective rank is the number of coefficients $C_j$ whose magnitude exceeds a small threshold. We call this algorithm adaptive because we start with a decomposition of higher rank $R$ and adaptively reduce it to the minimal rank $r(T)$. We choose $R$ as the strong upper bound for the rank of a tensor [7], given by
$$r(T) \le \frac{\prod_{i=1}^{m} d_i}{\max\{d_1, d_2, \dots, d_m\}} = R$$
where $d_i$ is the dimension of the Banach space $V_i$.
The objective function $L_R(T, a, C)$ to minimise for this decomposition is given as
$$L_R(T, a, C) = k_1 \|T - T'\| + k_2 \sum_{j=1}^{R} \mathbb{1}\{|C_j| > \epsilon\}$$
where $\epsilon > 0$ is a small threshold that forces as many coefficients as possible towards zero, and $k_1, k_2$ are regularisation constants. As we want both the reconstruction cost and the rank cost to be strongly minimised, we choose large values of $k_1$ and $k_2$.
The reformulation in Equation (6) transforms the computation of minimum-rank decomposition into a continuous optimisation problem over the tensor factors and their corresponding coefficients. Gradient descent is particularly well-suited to this formulation, as it allows all parameters to be updated simultaneously based on the full objective function. In contrast to alternating schemes such as ALS, which optimise one factor at a time while keeping others fixed, gradient-based updates can exploit interactions between different tensor modes during each iteration. This joint optimisation is especially important for suppressing redundant rank-one components and improving convergence behaviour in higher-dimensional parameter spaces.

3.4. Nuclear Rank Canonical Polyadic Decomposition

The nuclear rank is the minimum rank of the tensor decomposition that yields the projective norm, as seen from Equations (1) and (2). If we simultaneously minimise the rank of the decomposition of $T$ and the sum of the magnitudes of the corresponding product-state coefficients (we shall call this the norm cost), the rank will converge to the nuclear rank and the norm cost will converge to the projective norm. This follows from Proposition 2 and its Corollary 1 below.
Proposition 2
([13]). Let $V$ be a real vector space of dimension $n$ and $\nu : V \to [0, \infty)$ a norm. Suppose that $E$, the set of extreme points of the unit ball $B_\nu$, is compact. Then the nuclear rank $\operatorname{rank}_\nu : V \to \mathbb{R}$ is an upper semi-continuous function, i.e., if $(x_m)_{m=1}^{\infty}$ is a convergent sequence in $V$ with $\operatorname{rank}_\nu(x_m) \le r$ for all $m \in \mathbb{N}$, then $x = \lim_{m \to \infty} x_m$ must have $\operatorname{rank}_\nu(x) \le r$.
Corollary 1.
For any $A \in \mathbb{R}^{n_1 \times \cdots \times n_m}$, the best nuclear rank-$r$ approximation problem
$$\operatorname{argmin}\{\|A - X\| : \operatorname{rank}_*(X) \le r\}$$
always has a solution.
Incorporating the norm cost into the objective function of the ARCPD, we obtain the nuclear rank canonical polyadic decomposition (NRCPD). The objective function $L_N(T, a, C)$ for this decomposition is given as
$$L_N(T, a, C) = k_1 \|T - T'\| + k_2 \sum_{j=1}^{R} \mathbb{1}\{|C_j| > \epsilon\} + k_3 \sum_{j=1}^{R} |C_j|$$
The steps of the NRCPD are summarised in Algorithm 1.
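To make the objective concrete, the sketch below evaluates the NRCPD cost for a candidate decomposition in NumPy. It is illustrative only: the constants $k_1, k_2, k_3$ and the threshold are hypothetical values, and the indicator term is evaluated as a simple count, whereas the optimiser in the paper differentiates the full objective.

```python
import numpy as np

def nrcpd_objective(T, cores, C, k1=1e6, k2=1e2, k3=1.0, eps=1e-3):
    """Evaluate L_N = k1*||T - T'|| + k2*#{|C_j| > eps} + k3*sum_j |C_j|."""
    T_hat = np.zeros_like(T)
    for factors, Cj in zip(cores, C):
        phi = factors[0]
        for f in factors[1:]:
            phi = np.multiply.outer(phi, f)   # rank-one tensor a^1 ⊗ ... ⊗ a^m
        phi = phi / np.linalg.norm(phi)       # normalised, as in the definition of phi_j
        T_hat = T_hat + Cj * phi
    recon = np.linalg.norm(T - T_hat)             # reconstruction cost
    rank_cost = int(np.sum(np.abs(C) > eps))      # rank cost (indicator count)
    norm_cost = float(np.sum(np.abs(C)))          # norm cost
    return k1 * recon + k2 * rank_cost + k3 * norm_cost

# A product tensor decomposed exactly by one rank-one term with C_1 = 2:
# reconstruction cost 0, rank cost 1, norm cost 2.
u = np.array([0.6, 0.8])                      # unit vector
T = 2.0 * np.multiply.outer(np.multiply.outer(u, u), u)
val = nrcpd_objective(T, cores=[(u, u, u)], C=np.array([2.0]))
assert np.isclose(val, 1e2 * 1 + 1.0 * 2)     # = 102
```

For this exact decomposition, the surviving norm cost (here 2) is precisely the quantity the algorithm reports as the projective norm estimate.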

3.5. Symmetric Nuclear Rank Canonical Polyadic Decomposition

A tensor $T^{(s)} \in V_1 \otimes \cdots \otimes V_m$ is said to be symmetric if $d_1 = \cdots = d_m = d_s$ and $T^{(s)}_{i_1 \cdots i_m} = T^{(s)}_{j_1 \cdots j_m}$ whenever $(i_1, \dots, i_m)$ is a permutation of $(j_1, \dots, j_m)$ [14]. For every m-partite symmetric tensor $T^{(s)}$, there exists a decomposition [15] of the form
$$T^{(s)} = \sum_{j=1}^{r_S} \bigotimes_{i=1}^{m} x_j$$
where $r_S$ is the symmetric rank. Symmetric tensors find applications in various fields, such as machine learning, communications, and signal, speech, and image processing [16]. In physics, symmetric quantum states are also known as bosonic states [17].
From Equation (9), it is clear that, due to its symmetric nature, the tensor has a reduced parameter space, which simplifies the calculations and provides tighter bounds for finding the symmetric rank, nuclear rank, and projective norm. Moreover, since symmetric states are less entangled, their geometric measure of entanglement is polynomially computable. This simplifies the NRCPD into the symmetric nuclear rank canonical polyadic decomposition (SNRCPD). The objective function $L_N^{(s)}$ for computing the projective norm of a symmetric tensor therefore simplifies to
$$L_N^{(s)}\big(T^{(s)}, a, C\big) = k_1 \|T^{(s)} - T'\| + k_2 \sum_{j=1}^{R} \mathbb{1}\{|C_j| > \epsilon\} + k_3 \sum_{j=1}^{R} |C_j|$$
The steps of the SNRCPD are summarized in Algorithm 2.
Algorithm 1 Nuclear Rank Canonical Polyadic Decomposition to estimate the projective norm of $T$
Require: $T \neq 0$
Ensure: $a_j^i \neq 0$ for $T \neq 0$
1: $R \leftarrow \prod_{i=1}^{m} d_i \,/\, \max\{d_1, d_2, \dots, d_m\}$
2: Initialise: $a_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i \in \{1, \dots, m\}$, $\forall j \in \{1, \dots, R\}$
3: Initialise: $C_j \sim \mathcal{N}(0, 1)$, $\forall j \in \{1, \dots, R\}$
4: if $T$ is complex then
5:  $\tilde{a}_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i \in \{1, \dots, m\}$, $\forall j \in \{1, \dots, R\}$
6:  $\tilde{C}_j \sim \mathcal{N}(0, 1)$, $\forall j \in \{1, \dots, R\}$
7:  $a_j^i \leftarrow a_j^i + \mathrm{i}\,\tilde{a}_j^i$, $\forall i, j$
8:  $C_j \leftarrow C_j + \mathrm{i}\,\tilde{C}_j$, $\forall j$
9: end if
10: $t \leftarrow 0$
11: for $t <$ max epochs do
12:  $\phi_j \leftarrow \dfrac{a_j^1 \otimes a_j^2 \otimes \cdots \otimes a_j^m}{\left\| a_j^1 \otimes a_j^2 \otimes \cdots \otimes a_j^m \right\|}$, $\forall j \in \{1, \dots, R\}$
13:  $T' \leftarrow \sum_{j=1}^{R} C_j \cdot \phi_j$
14:  $J \leftarrow L_N(T, a, C)$
15:  $a_j^i \leftarrow a_j^i - \alpha \frac{\partial J}{\partial a_j^i}$, $\forall i, j$  ▹ $\alpha$ is the step size
16:  $C_j \leftarrow C_j - \alpha \frac{\partial J}{\partial C_j}$, $\forall j$
17:  $t \leftarrow t + 1$
18: end for
19: $r_N \leftarrow 0$
20: if $|C_j| >$ tolerance then
21:  $r_N \leftarrow r_N + 1$, $\forall j \in \{1, \dots, R\}$
22: end if
23: $\|T\|_\pi = \sum_{j=1}^{R} |C_j|$

3.6. Nuclear Rank Canonical Polyadic Density Matrix Decomposition

Alongside the tensor representation, quantum states can also be represented using the density matrix formalism. We first fix the representation and notation before defining the projective norm of a density matrix.
Definition 3
([1]). Given an ensemble of pure states $\{p_k, |\psi_k\rangle\}$, where the m-partite states $|\psi_k\rangle \in \mathcal{H}_1 \otimes \cdots \otimes \mathcal{H}_m$ occur with respective probabilities $p_k$, the density matrix of the system is defined as
$$\rho \equiv \sum_k p_k |\psi_k\rangle\langle\psi_k|, \quad \text{such that } \sum_k p_k = 1.$$
In this representation, a state $\rho$ is said to be separable if it can be written as a convex combination of product density matrices, as follows:
$$\rho = \sum_{k=1}^{r} \lambda_k \cdot \rho_1^{(k)} \otimes \cdots \otimes \rho_m^{(k)}$$
for a probability distribution $(\lambda_k)_{k=1}^{r}$ and normalised $\rho_j^{(k)} \in S_1^{d_j}$; otherwise, $\rho$ is said to be entangled. With this, we define the projective norm of a density matrix as
$$\|\rho\|_\pi = \inf\left\{ \sum_{k=1}^{r_N^{(D)}} |\lambda_k| \;:\; r_N^{(D)} \in \mathbb{N},\; \rho = \sum_{k=1}^{r_N^{(D)}} \lambda_k \cdot \rho_1^{(k)} \otimes \cdots \otimes \rho_m^{(k)},\; \rho_j^{(k)} \in S_1^{d_j},\; \|\rho_j^{(k)}\|_1 = 1 \right\}$$
where $r_N^{(D)}$ is the nuclear rank of the density matrix.
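The ensemble construction and the separable decomposition above can be checked numerically; a small NumPy sketch (the states are our illustrative choices):

```python
import numpy as np

# Ensemble {p_k, |psi_k>} on C^2 ⊗ C^2: rho = sum_k p_k |psi_k><psi_k|.
psi0 = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
psi1 = np.array([1, 0, 0, 0], dtype=float)   # |00>
p = [0.25, 0.75]
rho = sum(pk * np.outer(psi, psi.conj()) for pk, psi in zip(p, [psi0, psi1]))

assert np.isclose(np.trace(rho), 1.0)            # unit trace
assert np.allclose(rho, rho.conj().T)            # self-adjoint
assert np.linalg.eigvalsh(rho).min() >= -1e-12   # positive semidefinite

# A product density matrix rho1 ⊗ rho2 is separable with a single term
# lambda_1 = 1, so its projective norm equals 1 by the definition above.
rho1 = np.diag([0.5, 0.5]); rho2 = np.diag([0.2, 0.8])
sep = np.kron(rho1, rho2)
assert np.isclose(np.trace(sep), 1.0)
```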
Algorithm 2 Symmetric Nuclear Rank Canonical Polyadic Decomposition to estimate the projective norm of $T^{(s)}$
Require: $T^{(s)} \neq 0$
Ensure: $a_j \neq 0$
1: $R \leftarrow d_s^{m-1}$
2: Initialise: $a_j \sim \mathcal{N}\big(0, \tfrac{1}{d_s}\big)$, $\forall j \in \{1, \dots, R\}$
3: Initialise: $C_j \sim \mathcal{N}(0, 1)$, $\forall j \in \{1, \dots, R\}$
4: if $T^{(s)}$ is complex then
5:  $\tilde{a}_j \sim \mathcal{N}\big(0, \tfrac{1}{d_s}\big)$, $\forall j \in \{1, \dots, R\}$
6:  $\tilde{C}_j \sim \mathcal{N}(0, 1)$, $\forall j \in \{1, \dots, R\}$
7:  $a_j \leftarrow a_j + \mathrm{i}\,\tilde{a}_j$, $\forall j$
8:  $C_j \leftarrow C_j + \mathrm{i}\,\tilde{C}_j$, $\forall j$
9: end if
10: $t \leftarrow 0$
11: for $t <$ max epochs do
12:  $\phi_j^{(s)} \leftarrow \dfrac{\bigotimes_{i=1}^{m} a_j}{\left\| \bigotimes_{i=1}^{m} a_j \right\|}$, $\forall j \in \{1, \dots, R\}$
13:  $T' \leftarrow \sum_{j=1}^{R} C_j \cdot \phi_j^{(s)}$
14:  $J \leftarrow L_N^{(s)}(T^{(s)}, a, C)$
15:  $a_j \leftarrow a_j - \alpha \frac{\partial J}{\partial a_j}$, $\forall j$  ▹ $\alpha$ is the step size
16:  $C_j \leftarrow C_j - \alpha \frac{\partial J}{\partial C_j}$, $\forall j$
17:  $t \leftarrow t + 1$
18: end for
19: $r_N \leftarrow 0$
20: if $|C_j| >$ tolerance then
21:  $r_N \leftarrow r_N + 1$, $\forall j \in \{1, \dots, R\}$
22: end if
23: $\|T^{(s)}\|_\pi = \sum_{j=1}^{R} |C_j|$
We extend the NRCPD to handle density matrices by enlarging the parameter space: we optimise over the coefficients $C_j$, the kets $|a_j^i\rangle$, and the bras $\langle b_j^i|$ (which are not normalised), which allows the decomposition to range over all possible combinations of product-state density matrices. This gives us the nuclear rank canonical polyadic density matrix decomposition (NRCPDMD).
$$\rho' = \sum_{j=1}^{R^{(D)}} C_j \cdot \Phi_j, \quad \text{where } \Phi_j = \frac{\bigotimes_{i=1}^{m} |a_j^i\rangle\langle b_j^i|}{\left\| \bigotimes_{i=1}^{m} |a_j^i\rangle\langle b_j^i| \right\|_1}$$
Here, $R^{(D)}$ is the upper bound for the rank of a density matrix, which increases owing to the enlarged parameter space including both the kets and the bras:
$$R^{(D)} = \left( \frac{\prod_{i=1}^{m} d_i}{\max\{d_1, d_2, \dots, d_m\}} \right)^2$$
The objective function $L_N^{(D)}$ to compute the projective norm of a density matrix is given as
$$L_N^{(D)}(\rho, a, b, C) = k_1 \|\rho - \rho'\| + k_2 \sum_{j=1}^{R^{(D)}} \mathbb{1}\{|C_j| > \epsilon\} + k_3 \sum_{j=1}^{R^{(D)}} |C_j|$$
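A single normalised component $\Phi_j$ of this decomposition can be built directly; a NumPy sketch (the helper name and the bipartite example are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def phi_term(kets, bras):
    """One NRCPDMD component: tensor product of |a^i><b^i| factors,
    normalised by its Schatten 1-norm (sum of singular values)."""
    M = np.outer(kets[0], bras[0].conj())
    for a, b in zip(kets[1:], bras[1:]):
        M = np.kron(M, np.outer(a, b.conj()))
    s = np.linalg.svd(M, compute_uv=False)   # ||M||_1 = sum of singular values
    return M / s.sum()

# Two unnormalised complex kets and bras on C^2 ⊗ C^2.
kets = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]
bras = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(2)]
Phi = phi_term(kets, bras)

# Each component has unit trace norm by construction.
assert np.isclose(np.linalg.svd(Phi, compute_uv=False).sum(), 1.0)
```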
The steps of the NRCPDMD are summarized in Algorithm 3.
Algorithm 3 Nuclear Rank Canonical Polyadic Density Matrix Decomposition to estimate the projective norm of $\rho$
Require: $\rho \neq 0$
Ensure: $a_j^i, b_j^i \neq 0$
1: $R^{(D)} \leftarrow \left( \prod_{i=1}^{m} d_i \,/\, \max\{d_1, d_2, \dots, d_m\} \right)^2$
2: Initialise: $a_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i \in \{1, \dots, m\}$, $\forall j \in \{1, \dots, R^{(D)}\}$
3: Initialise: $b_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i \in \{1, \dots, m\}$, $\forall j \in \{1, \dots, R^{(D)}\}$
4: Initialise: $C_j \sim \mathcal{N}(0, 1)$, $\forall j \in \{1, \dots, R^{(D)}\}$
5: if $\rho$ is complex then
6:  $\tilde{a}_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i, j$
7:  $\tilde{b}_j^i \sim \mathcal{N}\big(0, \tfrac{1}{d_i}\big)$, $\forall i, j$
8:  $\tilde{C}_j \sim \mathcal{N}(0, 1)$, $\forall j$
9:  $a_j^i \leftarrow a_j^i + \mathrm{i}\,\tilde{a}_j^i$, $\forall i, j$
10:  $b_j^i \leftarrow b_j^i + \mathrm{i}\,\tilde{b}_j^i$, $\forall i, j$
11:  $C_j \leftarrow C_j + \mathrm{i}\,\tilde{C}_j$, $\forall j$
12: end if
13: $t \leftarrow 0$
14: for $t <$ max epochs do
15:  $\Phi_j \leftarrow \dfrac{\bigotimes_{i=1}^{m} |a_j^i\rangle\langle b_j^i|}{\left\| \bigotimes_{i=1}^{m} |a_j^i\rangle\langle b_j^i| \right\|_1}$, $\forall j \in \{1, \dots, R^{(D)}\}$
16:  $\rho' \leftarrow \sum_{j=1}^{R^{(D)}} C_j \cdot \Phi_j$
17:  $J \leftarrow L_N^{(D)}(\rho, a, b, C)$
18:  $a_j^i \leftarrow a_j^i - \alpha \frac{\partial J}{\partial a_j^i}$, $\forall i, j$  ▹ $\alpha$ is the step size
19:  $b_j^i \leftarrow b_j^i - \alpha \frac{\partial J}{\partial b_j^i}$, $\forall i, j$
20:  $C_j \leftarrow C_j - \alpha \frac{\partial J}{\partial C_j}$, $\forall j$
21:  $t \leftarrow t + 1$
22: end for
23: $r_N^{(D)} \leftarrow 0$
24: if $|C_j| >$ tolerance then
25:  $r_N^{(D)} \leftarrow r_N^{(D)} + 1$, $\forall j \in \{1, \dots, R^{(D)}\}$
26: end if
27: $\|\rho\|_\pi = \sum_{j=1}^{R^{(D)}} |C_j|$

4. Computation and Results

In this section, we showcase the performance of our developed method for calculating the projective norm. We focus on tensors of orders 2, 3, 4, 5, and 6. The projective norm for order-2 tensors is precisely the Schatten 1-norm of the corresponding matrix, i.e., the sum of the non-zero singular values of the operator. Computing the projective norm for order 3 and higher is an NP-hard problem, for which we verify our results against the analytical results obtained in [7,13] for order-3 tensors and the numerical results obtained in [4] for higher-order tensors. We do the same for density matrices for both pure and mixed states and verify the separability criterion for some mixed states given in [18]. The code was run on Python 3.12.4 using the PyTorch 2.3.1 library, an open-source machine-learning framework that provides efficient tensor operations with GPU acceleration and automatic differentiation, offering dynamic computation graphs and easy manipulation of higher-order multi-dimensional tensors. PyTorch's Autograd engine enables automatic gradient computation, with which various gradient-descent-based optimisers, such as Stochastic Gradient Descent (SGD), Adam, and RMSProp, can be implemented [19].

4.1. Projective Norm Computations for Tensors

The projective norm of 2nd-order tensor states (or bipartite states) can be computed analytically as the sum of the singular values of their respective matrix form. We compute the projective norm and nuclear rank of an entangled state and an arbitrary separable state and compare the numerical results obtained using our algorithm with the analytical values in Figure 1 and Figure 2, respectively.
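For the order-2 case, the analytical value is a direct SVD computation; a sketch (the Bell state and the product state below are illustrative choices, not necessarily the states used in the figures):

```python
import numpy as np

def projective_norm_order2(psi, d1, d2):
    """Projective norm of a bipartite vector state: the nuclear norm
    (sum of singular values) of its d1 x d2 matricisation."""
    return np.linalg.svd(psi.reshape(d1, d2), compute_uv=False).sum()

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2), entangled
product = np.kron([1, 0], [0, 1]).astype(float)  # |0> ⊗ |1>, separable

# Entangled state: norm sqrt(2) > 1; separable state: norm exactly 1.
assert np.isclose(projective_norm_order2(bell, 2, 2), np.sqrt(2))
assert np.isclose(projective_norm_order2(product, 2, 2), 1.0)
```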
Calculating the projective norm for 3rd-order tensor states becomes complicated; however, we have some analytical values of the projective norm calculated for some symmetric states in [7,13]. Figure 3 shows the convergence of our numerical results to the analytically calculated projective norm and nuclear rank for the 3-qubit GHZ state.
In general, a tensor has different projective norms and nuclear ranks over the real and complex fields. We demonstrate this using the 3-qubit W state and the 3-qubit state $|\psi\rangle = \frac{1}{2}\big(|001\rangle + |010\rangle + |100\rangle - |111\rangle\big)$ in Figure 4 and Figure 5 for the W state, and Figure 6 and Figure 7 for the above-mentioned state $|\psi\rangle$, over the real and complex number fields, respectively.
For tensors of order 4 and higher, we do not have analytically calculated results, owing to the NP-hard complexity scaling of calculating the projective norm. Hence, we use the numerical results obtained with the alternating method algorithm in [4] as reference values and compare them with the results obtained using our algorithm. We calculate the projective norm for order 4, 5, and 6 non-symmetric and symmetric tensors to demonstrate the effectiveness of our algorithm in both the general and the symmetric case, in Figure 8 and Figure 9 for order-4 tensors, Figure 10 and Figure 11 for order-5 tensors, and Figure 12 and Figure 13 for order-6 tensors. The time taken for computation increases drastically as the dimensionality of the tensors scales up.

4.2. Projective Norm Computations for Density Matrices

We benchmark our algorithm over density matrices by first computing the projective norm over pure state density matrices and then over mixed state density matrices.
For the pure state case, we choose the W state for which we already have the analytical projective norm calculated over its tensor form, and an arbitrary 3-qubit separable state for which we know, due to its separability, that its projective norm is 1. We show the numerical results obtained for these states in Figure 14 and Figure 15, respectively.
For mixed states, we take the following two bipartite state examples given in [18]. These states are parameterised by α , over which the state has been found to be separable or entangled. We verify the same using our projective norm algorithm.
For the $3 \otimes 3$ state given by
$$\rho_\alpha = \frac{2}{7} |\psi^+\rangle\langle\psi^+| + \frac{\alpha}{7} \sigma_+ + \frac{5 - \alpha}{7} V \sigma_+ V,$$
with $0 \le \alpha \le 5$, $|\psi^+\rangle = \frac{1}{\sqrt{3}} \sum_{i=0}^{2} |ii\rangle$, $\sigma_+ = \frac{1}{3}\big(|01\rangle\langle01| + |12\rangle\langle12| + |20\rangle\langle20|\big)$, and $V$ the operator that swaps the two subsystems, it is known that the state is separable for $2 \le \alpha \le 3$ and entangled for the rest of the values of $\alpha$. We show the same using our algorithm in Figure 16, where the projective norm of the mixed state is 1 for $2 \le \alpha \le 3$ and greater than 1 for the other values of $\alpha$.
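This family of states can be constructed directly, and its known positive-partial-transpose (PPT) structure provides an independent sanity check: the partial transpose remains positive up to $\alpha = 4$ (so the PPT test cannot detect the bound-entangled window $3 < \alpha \le 4$, which is precisely where a norm-based criterion is valuable), while for $\alpha > 4$ it acquires a negative eigenvalue. A NumPy sketch (not the authors' code):

```python
import numpy as np

d = 3
e = np.eye(d)   # computational basis vectors e[i] = |i>

# |psi+> = (1/sqrt(3)) sum_i |ii>
psi = sum(np.kron(e[i], e[i]) for i in range(d)) / np.sqrt(d)
proj = np.outer(psi, psi)

def ketbra2(i, j):
    v = np.kron(e[i], e[j])
    return np.outer(v, v)              # |ij><ij|

sigma_plus = (ketbra2(0, 1) + ketbra2(1, 2) + ketbra2(2, 0)) / 3
# Swap operator: V |i>|j> = |j>|i>
V = sum(np.outer(np.kron(e[i], e[j]), np.kron(e[j], e[i]))
        for i in range(d) for j in range(d))

def rho_alpha(alpha):
    return 2/7 * proj + alpha/7 * sigma_plus + (5 - alpha)/7 * (V @ sigma_plus @ V)

def partial_transpose(rho):
    # Transpose the second subsystem's indices.
    return rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)

for alpha in [0, 3, 5]:
    r = rho_alpha(alpha)
    assert np.isclose(np.trace(r), 1.0)              # valid state
    assert np.linalg.eigvalsh(r).min() >= -1e-12     # positive semidefinite

assert np.linalg.eigvalsh(partial_transpose(rho_alpha(3))).min() >= -1e-12  # PPT
assert np.linalg.eigvalsh(partial_transpose(rho_alpha(5))).min() < 0        # NPT
```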
For the $4 \otimes 4$ state given by
$$\rho_\alpha = \frac{1}{2 + \alpha}\big( |\psi_1\rangle\langle\psi_1| + |\psi_2\rangle\langle\psi_2| + \alpha \cdot \sigma \big), \quad \alpha \ge 0,$$
where
$$|\psi_1\rangle = \frac{1}{2}\big( |00\rangle + |11\rangle + \sqrt{2}\,|22\rangle \big), \qquad |\psi_2\rangle = \frac{1}{2}\big( |01\rangle + |10\rangle + \sqrt{2}\,|33\rangle \big),$$
$$\sigma = \frac{1}{8}\big( |02\rangle\langle02| + |03\rangle\langle03| + |12\rangle\langle12| + |13\rangle\langle13| + |20\rangle\langle20| + |21\rangle\langle21| + |30\rangle\langle30| + |31\rangle\langle31| \big),$$
it is found that the state is entangled for all values of $\alpha$. We verify this using our algorithm in Figure 17, where the projective norm of this state is observed to be greater than 1 for all values of $\alpha$.
The next example, taken from [20], is a $3 \otimes 3$ state given by
$$\rho(a) = \frac{1}{8a + 1} \begin{pmatrix} a & 0 & 0 & 0 & a & 0 & 0 & 0 & a \\ 0 & a & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & a & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & a & 0 & 0 & 0 & 0 & 0 \\ a & 0 & 0 & 0 & a & 0 & 0 & 0 & a \\ 0 & 0 & 0 & 0 & 0 & a & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1+a}{2} & 0 & \frac{\sqrt{1-a^2}}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & a & 0 \\ a & 0 & 0 & 0 & a & 0 & \frac{\sqrt{1-a^2}}{2} & 0 & \frac{1+a}{2} \end{pmatrix}$$
where $0 < a < 1$. We consider a mixture of this state with white noise,
$$\rho(a, p) = p \cdot \rho(a) + (1 - p) \cdot \frac{I}{9}, \quad 0 \le p \le 1.$$
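The matrix above and its noisy mixture can be sanity-checked numerically for unit trace, symmetry, and positive semidefiniteness; a sketch (helper names are ours):

```python
import numpy as np

def horodecki_rho(a):
    """The 3x3 state of the matrix above, with prefactor 1/(8a+1)."""
    b = (1 + a) / 2
    c = np.sqrt(1 - a**2) / 2
    M = np.zeros((9, 9))
    for i in [0, 4, 8]:
        for j in [0, 4, 8]:
            M[i, j] = a                 # the a-block coupling |00>, |11>, |22>
    for i in [1, 2, 3, 5, 7]:
        M[i, i] = a                     # remaining diagonal entries equal to a
    M[6, 6] = M[8, 8] = b               # (1+a)/2 diagonal entries
    M[6, 8] = M[8, 6] = c               # sqrt(1-a^2)/2 off-diagonal entries
    return M / (8 * a + 1)

def mixed_with_noise(a, p):
    return p * horodecki_rho(a) + (1 - p) * np.eye(9) / 9

for a, p in [(0.5, 1.0), (0.5, 0.4), (0.9, 0.7)]:
    r = mixed_with_noise(a, p)
    assert np.isclose(np.trace(r), 1.0)              # unit trace
    assert np.allclose(r, r.T)                       # symmetric
    assert np.linalg.eigvalsh(r).min() >= -1e-12     # positive semidefinite
```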
Figure 18 and Figure 19 show the variation of the projective norm with respect to the parameters a and p, showcasing the degree of entanglement for different values of the parameters.

4.3. Non-Quantum Tensors

To demonstrate the generality of the proposed method in cross-domain scenarios, we apply the NRCPD algorithm to non-quantum tensors. In particular, we consider image tensors drawn from the MNIST handwritten digits dataset [21]. Each image is represented as a 28 × 28 × 1 real-valued tensor. For illustrative clarity, we restrict our attention to single-channel images; however, the proposed algorithm is directly applicable to general multi-channel image tensors.
In Figure 20, we apply NRCPD to the selected image tensor and show the reconstructed image obtained from the low-rank projective-norm decomposition, together with the corresponding nuclear rank and projective norm. Since the image is single-channel, it can be treated as a two-dimensional tensor, allowing the analytical projective norm to be computed via singular value decomposition. The close agreement between the analytically computed and numerically obtained projective norms demonstrates the accuracy of the proposed method for image tensors. In addition, the recovered nuclear rank matches the analytical value, further validating the correctness of the decomposition in a non-quantum setting.
In Figure 21, we plot the individual decomposition terms obtained after performing NRCPD. The individual terms, when combined, sum to form the reconstructed image. Therefore, each term acts as a basis element for the image tensor in the two-dimensional case considered here, where NRCPD is equivalent to singular value decomposition. For higher-order tensors or multichannel colour images, the decomposition yields rank-1 tensor components that serve as generating elements rather than a basis in the linear-algebraic sense. This provides a simplified representation of the image. Such representations are commonly used to obtain compact and structured descriptions of image data in signal-processing contexts.
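In the two-dimensional case, this term-by-term decomposition coincides with the SVD; a brief sketch on a synthetic low-rank "image" (the MNIST loading itself is omitted):

```python
import numpy as np

# A synthetic rank-3 28x28 "image" standing in for an MNIST digit.
rng = np.random.default_rng(4)
img = rng.random((28, 3)) @ rng.random((3, 28))

U, s, Vt = np.linalg.svd(img, full_matrices=False)
rank = int(np.sum(s > 1e-10))
# The rank-one terms sigma_k * u_k v_k^T that sum to the image.
terms = [s[k] * np.outer(U[:, k], Vt[k]) for k in range(rank)]

assert rank == 3
assert np.allclose(sum(terms), img)                       # terms reconstruct the image
assert np.isclose(np.linalg.norm(img, 'nuc'), s.sum())    # projective norm = sum of singular values
```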

5. Performance of the Algorithm

Having shown the effectiveness of our algorithm in calculating the projective norm and nuclear rank, in this section, we study the performance of our algorithm. We analyse the effect of the regularisation constants k 1 , k 2 , and k 3 on the optimisation process, and elaborate on an empirical strategy for choosing these constants. We also compare the performance of our algorithm with other alternating algorithms, such as the ALS and ADMM, over different indicators.

5.1. Parameter Sensitivity

From the objective function $L_N(T, a, C)$ given in Equation (8), the regularisation constants $k_1$, $k_2$, and $k_3$ govern the following aspects of the algorithm.
  • $k_1$ corresponds to the term $\|T - T'\|$, which gives the reconstruction error.
  • $k_2$ corresponds to the term $\sum_{j=1}^{R} \mathbb{1}\{|C_j| > \epsilon\}$, which gives the nuclear rank.
  • $k_3$ corresponds to the term $\sum_{j=1}^{R} |C_j|$, which gives the projective norm.
Therefore, based on the minimisation requirements of the algorithm, we want the reconstruction error to become extremely small, while the nuclear rank and projective norm should also be minimised in a manner that avoids degenerate decompositions or coefficients collapsing to zero. Hence, the choice of values for the regularisation constants is paramount to the success of the algorithm. The algorithm's results exhibit a different sensitivity to each parameter, which we discuss below. All parameter sensitivity experiments were performed with the number of iterations fixed at 50,000. The initial state chosen for this is the 3-qubit state $\frac{1}{2}\big(|001\rangle + |010\rangle + |100\rangle - |111\rangle\big)$, for which the exact analytical values of the nuclear rank and projective norm are known [13]. However, the sensitivity analysis for the parameters remains valid for tensors of any order.

5.1.1. Effect of k 1 (Keeping k 2 and k 3 Fixed)

As observed in Figure 22, increasing $k_1$ while keeping $k_2$ and $k_3$ fixed reduces the reconstruction error. As we want $\widetilde{\mathcal{T}}$ to be a strict reconstruction of $\mathcal{T}$, we choose a very large value of $k_1$, ideally larger than $10^6$, to ensure a reconstruction error of order $10^{-8}$ to $10^{-7}$ or even smaller.

5.1.2. Effect of k 2 (Keeping k 1 and k 3 Fixed)

For studying the effect of $k_2$, we fix $k_1 = 10^6$ to obtain a faithful reconstruction; the rank obtained therefore corresponds to the actual decomposition, owing to the presence of the norm cost. From Figure 23, we see that as the value of $k_2$ increases, the calculated nuclear rank decreases and eventually reaches a minimum value, which corresponds to the actual nuclear rank. The purpose of the term
$$\sum_{j=1}^{R} \mathbf{1}\{|C_j| > \epsilon\}$$
is to prevent degenerate decompositions, in which one component is split into multiple rank-1 terms, by penalising redundant terms that could be merged. So $k_2$ should be chosen in the range $10^2$ to $10^3$ or higher to obtain the minimal decomposition and nuclear rank faster.
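To see why the rank penalty is needed in addition to the norm cost, consider the same rank-one component either kept whole or split into two identical halves. A minimal sketch using only the coefficient vector (the helper names are ours):

```python
import numpy as np

eps = 1e-6
nuclear_rank = lambda C: int(np.sum(np.abs(C) > eps))
proj_norm = lambda C: float(np.sum(np.abs(C)))

C_minimal = np.array([1.0])       # one rank-1 term
C_split = np.array([0.5, 0.5])    # the same term split into two redundant halves

print(nuclear_rank(C_minimal), proj_norm(C_minimal))  # 1 1.0
print(nuclear_rank(C_split), proj_norm(C_split))      # 2 1.0
```

Both decompositions have the same norm cost, so the $\ell_1$ term alone cannot distinguish them; the indicator term charges the split decomposition twice, and a sufficiently large $k_2$ drives the optimiser to merge redundant terms.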

5.1.3. Effect of k 3 (Keeping k 1 and k 2 Fixed)

For observing the effects of $k_3$ on the projective norm, we fix $k_1 = 10^6$ to obtain an accurate tensor decomposition and run our algorithm for 50,000 iterations. We chose $k_2 = 1$ for demonstrating the behaviour of $k_3$. For our chosen example state, the projective norm is 2 in $\mathbb{C}$. From Figure 24, as $k_3$ increases, the estimate first converges to the actual projective norm, then deviates and shrinks as $k_3$ is increased further. This happens because once $k_3$ becomes comparable to $k_1$, the norm cost starts dominating the reconstruction cost, shrinking the coefficients towards zero while the reconstruction error degrades. Therefore, $k_3$ should be chosen a few orders of magnitude smaller than $k_1$ to ensure proper reconstruction and convergence to the projective norm simultaneously.

5.1.4. Strategies for Choosing Regularisation Parameters

Taking the parameter sensitivity into account, the regularisation constants are chosen such that
$$k_1 \gg k_3 > k_2.$$
The values of the constants used in our implementation are listed in Table 1. These values gave consistent results for all $N$-dimensional tensors considered in this manuscript.
To ensure these values provide consistent results for tensors of any dimension, we multiply the regularisation constants by the total dimensionality of the tensor, $N = \prod_{i=1}^{m} d_i$, so that all the regularisation constants scale with the size of the tensor, just as the reconstruction error, rank cost, and norm cost do.
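The dimensional scaling just described can be sketched as follows (the helper name `scaled_constants` is ours; the base values follow Table 1):

```python
import math

def scaled_constants(dims, k1=1e12, k2=1e3, k3=1e6):
    """Scale the base regularisation constants by the tensor's total
    dimensionality N = prod(d_i), so all three costs grow together."""
    N = math.prod(dims)
    return k1 * N, k2 * N, k3 * N

# A 3-qubit state has dims (2, 2, 2), so N = 8 multiplies each constant.
print(scaled_constants((2, 2, 2)))
```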

5.2. Comparative Study with Other Algorithms

We now compare the performance of our algorithm with other algorithms previously used in the literature to calculate the projective norm. We benchmark our algorithm against alternating algorithms, namely alternating least squares (ALS) [22] and the alternating direction method of multipliers (ADMM) [23], across several indicators: convergence, computation time, and memory complexity.
Figure 25, Figure 26, Figure 27 and Figure 28 show the convergence of NRCPD, ALS, and ADMM for tensors of order 3, 4, 5, and 6, along with the total time taken to complete their respective numbers of iterations. We clearly observe that NRCPD converges faster than ALS and ADMM, with a smaller deviation from the analytical projective norm values. The total run times satisfy $t_{\mathrm{NRCPD}} < t_{\mathrm{ALS}} < t_{\mathrm{ADMM}}$. A key point is that the results reported for ALS and ADMM are the best obtained over several runs: their alternating updates make them sensitive to initialisation and prone to converging to suboptimal stationary points. NRCPD, by contrast, required just a single run to obtain its results.
For calculating memory complexity and scaling, we choose tensors with $d_i = 2$, as in our performance experiments, to simplify calculations; the analysis, however, is valid for tensors of any dimension. As the order of the tensor increases, the memory complexity of ALS, ADMM, and NRCPD scales as shown in Table 2, where $N$ is the order of the tensor. The extra factor of $N$ in the memory complexity of NRCPD arises because NRCPD simultaneously optimises over the entire set of rank-one tensors and coefficients. Although our algorithm has a higher memory requirement, it compensates for this shortcoming with faster convergence, a shorter completion time within a single run, and better accuracy.
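The scalings of Table 2 can be tabulated for small orders (constant prefactors omitted; the function name is our shorthand):

```python
def memory_scaling(N):
    """Leading-order memory scalings of Table 2 for d_i = 2 tensors of order N."""
    return {"ALS": 2 ** N, "ADMM": 2 ** N, "NRCPD": N * 2 ** N}

# NRCPD's extra factor of N grows slowly next to the 2^N tensor size itself.
for N in (3, 4, 5, 6):
    print(N, memory_scaling(N))
```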

6. Conclusions

The projective tensor norm is an important tool for detecting and quantifying entanglement in a given multipartite quantum state. It can be used to measure entanglement even for quantum states of order greater than two, thereby providing a more general method for detecting entanglement than criteria such as positive trace maps. However, calculating projective norms is computationally NP-hard. Existing algorithms do not guarantee convergence and are restricted to pure states. In this paper, we presented a novel gradient-descent-based algorithm for computing the projective tensor norm. The proposed algorithm shows very reliable empirical convergence to the projective norm value across a wide range of numerical experiments. We note that a rigorous theoretical convergence analysis of our method is beyond the scope of the present work and will be addressed in future studies.
We exemplified this convergence for higher-order tensors using previously calculated analytical and numerical values, showing the effectiveness of our algorithm in tackling quantum states from large Hilbert spaces. We also demonstrated that our algorithm extends to the projective norm of density matrices of both pure and mixed states. We explicitly showcased its performance on mixed states with a parameterised introduction of noise, demonstrating its sensitivity in detecting entanglement in noisy states, and illustrated its versatility on non-quantum tensors as well. Finally, we compared our algorithm with existing alternatives and showed why it is the better choice. The code is available open source on GitHub [24].

Author Contributions

Conceptualisation, M.A.J. and A.R.; methodology, A.R. and M.A.J.; software, A.R.; validation, M.A.J. and A.R.; writing—review and editing, A.R. and M.A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

A.R. and M.A.J. thank Ion Nechita and Khurshed Fitter for their invaluable discussions in understanding projective norms questions. A.R. also thanks Stuti Pradhan, Aditya Jivoji and Tabitha Sneha for their technical support in running the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NP: Non-deterministic Polynomial time
SOCP: Second-Order Cone Programming
CPD: Canonical Polyadic Decomposition
ALS: Alternating Least Squares
ARCPD: Adaptive Rank Canonical Polyadic Decomposition
NRCPD: Nuclear Rank Canonical Polyadic Decomposition
SNRCPD: Symmetric Nuclear Rank Canonical Polyadic Decomposition
NRCPDMD: Nuclear Rank Canonical Polyadic Density Matrix Decomposition
GPU: Graphics Processing Unit
SGD: Stochastic Gradient Descent
Adam: Adaptive Moment Estimation
RMSProp: Root Mean Squared Propagation
GHZ state: Greenberger–Horne–Zeilinger state
MNIST: Modified National Institute of Standards and Technology
ADMM: Alternating Direction Method of Multipliers

References

  1. Nielsen, M.; Chuang, I. Quantum Computation and Quantum Information: 10th Anniversary Edition; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  2. Rudolph, O. A separability criterion for density operators. J. Phys. A Math. Gen. 2000, 33, 3951. [Google Scholar] [CrossRef]
  3. Hillar, C.; Lim, L.-H. Most tensor problems are NP-hard. J. ACM 2013, 60, 45. [Google Scholar] [CrossRef]
  4. Derksen, H.; Friedland, S.; Lim, L.-H.; Wang, L. Theoretical and computational aspects of entanglement. arXiv 2017, arXiv:1705.07160. [Google Scholar] [CrossRef]
  5. Jivulescu, M.; Lancien, C.; Nechita, I. Multipartite entanglement detection via projective tensor norms. Ann. Henri Poincare 2022, 23, 3791–3838. [Google Scholar] [CrossRef]
  6. Fitter, K.; Lancien, C.; Nechita, I. Estimating the entanglement of random multipartite quantum states. Quantum 2025, 9, 1870. [Google Scholar] [CrossRef]
  7. Bruzda, W.; Friedland, S.; Życzkowski, K. Rank of a tensor and quantum entanglement. Linear Multilinear Algebra 2023, 72, 1796–1859. [Google Scholar] [CrossRef]
  8. Ryan, R. Introduction to Tensor Products of Banach Spaces; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  9. Loulidi, F.; Nechita, I. Measurement incompatibility vs. Bell Non-Locality: An approach via tensor norm. PRX Quantum 2022, 3, 040325. [Google Scholar] [CrossRef]
  10. Kolda, T.; Bader, B. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  11. Harshman, R.A. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. UCLA Work. Pap. Phon. 1970, 16, 1–84. [Google Scholar]
  12. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. [Google Scholar] [CrossRef]
  13. Friedland, S.; Lim, L. Nuclear Norm of Higher-Order Tensors. Math. Comput. 2017, 87, 1255–1281. [Google Scholar] [CrossRef]
  14. Nie, J. Symmetric Tensor Nuclear Norms. SIAM J. Appl. Algebra Geom. 2017, 1, 599–625. [Google Scholar] [CrossRef]
  15. Comon, P.; Golub, G.; Lim, L.; Mourrain, B. Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 2008, 30, 1254–1279. [Google Scholar] [CrossRef]
  16. Favier, G. Matrix and Tensor Decompositions in Signal Processing; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2021. [Google Scholar]
  17. Friedland, S.; Kemp, T. Most Boson quantum states are almost maximally entangled. arXiv 2016, arXiv:1612.00578. [Google Scholar] [CrossRef]
  18. Doherty, A.; Parrilo, P.; Spedalieri, F. Complete family of separability criteria. Phys. Rev. A 2004, 69, 022308. [Google Scholar] [CrossRef]
  19. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Adv. Neural Inf. Process. Syst. 2019, 32, 8024–8035. [Google Scholar]
  20. Zhang, C.; Zhang, Y.; Zhang, S.; Guo, G. Entanglement detection beyond the computable cross-norm or realignment criterion. Phys. Rev. A 2008, 77, 060301. [Google Scholar] [CrossRef]
  21. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  22. Minster, R.; Viviano, I.; Liu, X.; Ballard, G. CP Decomposition for Tensors via Alternating Least Squares with QR Decomposition. Numer. Linear Algebra Appl. 2023, 30, e2511. [Google Scholar] [CrossRef]
  23. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends® Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  24. Rudra, A.; Jivulescu, M.A. Adaptive Projective Norm. 2025. Available online: https://github.com/AadityaR04/Adaptive-Projective-Norm (accessed on 23 November 2025).
Figure 1. Projective norm and nuclear rank of the Bell state $|\phi^{+}\rangle$ in $\mathbb{C}$.
Figure 2. Projective norm and nuclear rank of a 2-qubit separable state in $\mathbb{C}$.
Figure 3. Projective norm and nuclear rank of the 3-qubit GHZ state in $\mathbb{C}$.
Figure 4. Projective norm and nuclear rank of the 3-qubit W state in $\mathbb{R}$.
Figure 5. Projective norm and nuclear rank of the 3-qubit W state in $\mathbb{C}$.
Figure 6. Projective norm and nuclear rank of the 3-qubit state in $\mathbb{R}$.
Figure 7. Projective norm and nuclear rank of the 3-qubit state in $\mathbb{C}$.
Figure 8. Projective norm of a 4-qubit non-symmetric state in $\mathbb{C}$.
Figure 9. Projective norm of a 4-qubit symmetric state in $\mathbb{C}$.
Figure 10. Projective norm of a 5-qubit non-symmetric state in $\mathbb{C}$.
Figure 11. Projective norm of a 5-qubit symmetric state in $\mathbb{C}$.
Figure 12. Projective norm of a 6-qubit non-symmetric state in $\mathbb{C}$.
Figure 13. Projective norm of a 6-qubit symmetric state in $\mathbb{C}$.
Figure 14. Projective norm of the 3-qubit W state as a density matrix.
Figure 15. Projective norm of an arbitrary 3-qubit separable pure state as a density matrix.
Figure 16. Projective norm vs. $\alpha$ variation for the given $3 \otimes 3$ bipartite mixed state. The dashed blue line indicates the separable bound of the projective norm, $\|\cdot\|_{\pi} = 1$.
Figure 17. Projective norm vs. $\alpha$ variation for the given $4 \otimes 4$ bipartite mixed state. The dashed blue line indicates the separable bound of the projective norm, $\|\cdot\|_{\pi} = 1$.
Figure 18. Projective norm variation for different values of $a$ and $p$ for the given $3 \otimes 3$ bipartite mixed state.
Figure 19. Projective norm variation for different values of $a$ for the given $3 \otimes 3$ bipartite mixed state.
Figure 20. Original and reconstructed MNIST dataset image with its projective norm and nuclear rank after NRCPD.
Figure 21. Decomposition terms of the image obtained after NRCPD.
Figure 22. Effect of $k_1$ on reconstruction error. Both axes in log scale.
Figure 23. Effect of $k_2$ on nuclear rank. x-axis in log scale.
Figure 24. Effect of $k_3$ on projective norm. x-axis in log scale.
Figure 25. Projective norm convergence and computation time comparison for a 3-order tensor.
Figure 26. Projective norm convergence and computation time comparison for a 4-order tensor.
Figure 27. Projective norm convergence and computation time comparison for a 5-order tensor.
Figure 28. Projective norm convergence and computation time comparison for a 6-order tensor.
Table 1. Values of the regularisation constants used in the numerical experiments: $k_1 = 10^{12}$, $k_2 = 10^{3}$, $k_3 = 10^{6}$.
Table 2. Memory complexity of ALS, ADMM, and NRCPD: ALS $O(2^N)$; ADMM $O(2^N)$; NRCPD $O(N \cdot 2^N)$.
Rudra, A.; Jivulescu, M.A. Calculating the Projective Norm of Higher-Order Tensors Using a Gradient Descent Algorithm. Mathematics 2026, 14, 105. https://doi.org/10.3390/math14010105