Tensor Eigenvalue and SVD from the Viewpoint of Linear Transformation

A linear transformation from one vector space to another can be represented as a matrix. This close relationship between matrices and linear transformations is helpful for the study of matrices. In this paper, the tensor is regarded as a generalization of the matrix from the viewpoint of the linear transformation, rather than the quadratic form of matrix theory; we discuss some operations and present definitions and theorems related to tensors. For example, we provide the definitions of the triangular form and the eigenvalue of a tensor, and theorems on the tensor QR decomposition and the tensor singular value decomposition. Furthermore, we explain the significance of our definitions and their differences from existing definitions.


Introduction
Unfolding is an important approach in tensor research, and one of the common methods of unfolding is mode-n matricization. Based on this, the mode-n product and the n-rank are defined. This helps to extend the concepts of the eigenvalue and the singular value decomposition of matrices to tensors. For example, the eigenvalues of a real supersymmetric tensor are presented in [1], a multilinear singular value decomposition (HOSVD) is presented in [2], and the restricted singular value decomposition (RSVD) for three quaternion tensors is presented in [3]. The singular values of a general tensor are introduced in [4]. In [5,6], the singular values of a tensor are connected with its symmetric embedding. In [7], the authors presented the definition of the singular value of a real rectangular tensor and discussed its properties. For a tensor T ∈ R I×J×I×J , a singular value decomposition exploiting the matricization of the tensor is presented in [8]. On the other hand, in matrix theory, many studies of matrices are inseparable from linear transformations. Due to the one-to-one correspondence between matrices and linear transformations, we believe it is more natural to provide definitions and theorems related to tensors from the viewpoint of the linear transformation. Indeed, when viewed from this perspective, some previously unresolved tensor problems have already been solved. Similar studies have been considered in [9]. We provide some supplements to [9]; for example, we give a more general definition of the multiplication between two tensors and present the form of a triangular tensor by proposing a tensor QR decomposition. In addition, in matrix theory, the one-dimensional linear subspace spanned by the eigenvectors of one eigenvalue of a linear transformation is stable under the action of this linear transformation. However, this property does not carry over to the existing tensor eigenvalues and their corresponding eigenvectors. From this
point of view, we present a new concept of the eigenvalue and the corresponding eigenvector of an even-order tensor.
Then, the concepts of the eigenvalue decomposition, singular value, and singular value decomposition are naturally obtained.
This paper is organized as follows. In Section 2, regarding the tensor as a linear transformation, we present some definitions. For example, we present the identity tensor and a new multiplication for two tensors, and further explain the partitioning of the indices of a tensor. In Section 3, starting from the orthogonalization process of a set of tensors, we present the QR decomposition of a tensor and the form of a triangular tensor. In Section 4, motivated by the invariant eigenspace, we derive the definitions of the tensor eigenvalue and the corresponding tensor eigenvector, and the spectral decomposition theorem of a Hermitian tensor. The singular value decomposition of a tensor is defined in Section 5, and numerical examples are used to illustrate the advantages of our singular value decomposition in tensor compression. The application of the tensor singular value decomposition is described in Section 6.

Basic Definitions
For the convenience of writing, we illustrate the symbols to be used, which are similar to those in [9]. The set of integers {1, · · · , p} will be abbreviated to the symbol. Additionally, let ♯α denote the number of elements in the set α. Under a given partitioning (α, β), an element in the tensor T will be marked as τ. Given a partitioning (α, β), an order-p tensor T ∈ C I corresponds to a linear transformation; in particular, an identity map is a linear transformation. Because the relationship between a tensor and the corresponding linear transformation depends on the partitioning (α, β), in order to avoid confusion, we denote the tensor discussed under the partitioning (α, β) by T α β . When T and (α, β) are known, we can use a command in MATLAB to determine T α β directly [10], where [alpha,beta] indicates the order of the subscripts to be accessed when performing the identification. In fact, for a fixed partitioning (α, β), the tensor can be stored as a matrix; precisely speaking, the element is stored according to the rule in (1). This storage method can also be regarded as a special case of blocking in [11].
Definition 1 ([11]). The set M = {m (1) , . . ., m (d) } is a blocking for T ∈ C I 1 ×···×I p if each m (k) is a vector of positive integers that sums to I k . If m (α k ) is a vector with ones in all elements and of length I α k for all α k ∈ α, and m (β k ) = [I β k ] for all β k ∈ β, the blocking M is the row blocking of T α β under the partitioning (α, β). Similarly, we can obtain the column blockings. Specifically, the entry τ i 1 ,··· ,i p of an order-p tensor T ∈ C I 1 ×I 2 ×···×I p can also be saved at the corresponding location of a linear array. In the subsequent discussion, we shall comply with the rules in (1) or (2) whenever we want to unfold a tensor to a matrix or a vector. There are many ways to define multiplication between tensors [12]. Based on the relationship between a tensor T and the corresponding map T β , the authors of [9] defined the tensor multiplication ⊛ β between the tensor T and any lower-order tensor A ∈ C I β 1 ×···×I β n . The adjoint of T β , denoted by T * β , is also a linear transformation, and the tensor representation of T * β is (T α β ) * , the conjugate transpose of T α β . See [9] for more details.
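To make the unfolding concrete, the following is a minimal sketch, not taken from the paper, of storing a tensor as a matrix under a partitioning (α, β); it uses Python/NumPy rather than the paper's MATLAB, and the function name `unfold` and the 0-based mode lists are our own conventions:

```python
import numpy as np

def unfold(T, alpha, beta):
    """Unfold tensor T to a matrix: rows are indexed by the modes in alpha,
    columns by the modes in beta (both given as 0-based mode lists)."""
    perm = list(alpha) + list(beta)
    rows = int(np.prod([T.shape[m] for m in alpha]))
    cols = int(np.prod([T.shape[m] for m in beta]))
    # Group the alpha modes into the row index and the beta modes into
    # the column index, in lexicographic (C) order.
    return np.transpose(T, perm).reshape(rows, cols)

# An order-4 tensor under the partitioning ({1, 2}, {3, 4}) becomes 6 x 6.
T = np.arange(36).reshape(2, 3, 2, 3)
M = unfold(T, [0, 1], [2, 3])
print(M.shape)  # (6, 6)
```

The row index fuses (i 1 , i 2 ) and the column index fuses (i 3 , i 4 ), so for example the entry τ 1,1,1,2 (0-based) sits at row 1·3+1 = 4, column 1·3+2 = 5 of the unfolding.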
Similar to the derivation of the multiplication ⊛ β in (3), by means of linear transformations, we define another multiplication between two tensors G and T when there exist partitionings (γ, η) and (α, β) satisfying ♯η = ♯α and I η i = I α i , i = 1, . . ., ♯α. This definition is a generalization of the existing tensor contraction.

Definition 2. Suppose the linear transformations corresponding to G and T are
then the composition of T and G is a linear transformation, whose rule of action is to apply T β first and then G η . From this, we can present the following theorem. From the arbitrariness of A and B, we can obtain the stated identity; because the tensors G γ η and T α β are the tensor representations of the linear transformations G η and T β , respectively, the equation can be established immediately.
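The composition above can be sketched numerically: contracting the η-modes of G against the α-modes of T (matching ♯η = ♯α and I η i = I α i ) agrees with composing the matrix unfoldings. This is an illustrative NumPy sketch with assumed shapes, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
# G^gamma_eta with gamma-modes (4, 5) and eta-modes (2, 3);
# T^alpha_beta with alpha-modes (2, 3) and beta-modes (6, 7).
G = rng.random((4, 5, 2, 3))
T = rng.random((2, 3, 6, 7))

# Composition of the maps = contraction of the eta-modes of G
# with the alpha-modes of T.
C = np.tensordot(G, T, axes=([2, 3], [0, 1]))
print(C.shape)  # (4, 5, 6, 7)

# Consistency check against the composition of the matrix unfoldings.
Gm = G.reshape(20, 6)   # rows = gamma-modes, cols = eta-modes
Tm = T.reshape(6, 42)   # rows = alpha-modes, cols = beta-modes
assert np.allclose(C.reshape(20, 42), Gm @ Tm)
```

The check at the end is exactly the content of Theorem 1 below: the tensor of the composed map is the product of the two unfoldings.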

QR Decomposition of the Tensor
QR decomposition is a fundamental tool in matrix theory and plays an important role in the design of algorithms. In this section, we present the tensor QR decomposition of a given tensor from the perspective of linear transformation and, based on it, describe the form of the triangular tensor.
For a given tensor T, a partitioning (α, β), and a multi-index J, the process of orthogonalizing the tensors τ [:|J ] is similar to the Gram–Schmidt orthogonalization of the column vectors of a matrix. In this orthogonalization process, the tensor T and the tensors τ [:|J ] play the roles of the matrix and its column vectors, respectively. Suppose the only way to obtain ∑ J k J τ [:|J ] = 0 is for all the scalars k J to be zero. The orthogonalization of the tensors τ [:|J ] then proceeds as follows. In general, we have r s 1 ,s 2 ,... . Equation (6) can be rewritten with r l 1 ,l 2 ,...,l n = p [:|l 1 ,l 2 ,...,l n ] . This expression leads to a tensor QR decomposition of T with factors Q and R. Example 1. Consider the order-4 tensor T ∈ C 2×3×2×3 with the partitioning ({1, 2}, {3, 4}).
Applying the orthogonalization process, we can obtain the QR decomposition of T. The tensors Q, R ∈ C 2×3×2×3 can each be laid out as a "matrix" of 2 × 3 blocks of 2 × 3 matrices; for Q, the (i 1 , i 2 ) block is the 2 × 3 matrix with entries q i 1 i 2 i 3 i 4 , i 3 = 1, 2, i 4 = 1, 2, 3, and similarly for R. When the tensor is an order-2 tensor, the QR decomposition of the tensor degenerates into the QR decomposition of the matrix. This is in line with the fact that tensors are higher-order generalizations of matrices.
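Since the orthogonalization acts on the tensors τ [:|J ] exactly as Gram–Schmidt acts on matrix columns, the decomposition of Example 1 can be sketched by taking a matrix QR of the unfolding under ({1, 2}, {3, 4}) and folding the factors back. This NumPy sketch mirrors, but is not, the paper's own construction:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((2, 3, 2, 3))

# Unfold under ({1, 2}, {3, 4}): rows = modes {1, 2}, cols = modes {3, 4}.
Tm = T.reshape(6, 6)
Qm, Rm = np.linalg.qr(Tm)

# Fold the factors back to tensors of the same shape as T.
Q = Qm.reshape(2, 3, 2, 3)
R = Rm.reshape(2, 3, 2, 3)

# Contracting the beta-modes of Q with the alpha-modes of R recovers T,
# and the unfolding of Q has orthonormal columns.
assert np.allclose(np.tensordot(Q, R, axes=([2, 3], [0, 1])), T)
assert np.allclose(Qm.conj().T @ Qm, np.eye(6))
```

For an order-2 tensor the reshapes are the identity, so this reduces to the ordinary matrix QR, consistent with the degeneration noted above.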
Based on the above QR decomposition, we can obtain the following definitions of triangular tensors, which are generalizations of those in matrix theory and differ from those in [14]. Definition 3. For a given tensor T ∈ C I 1 ×···×I p and a partitioning (α, β), we call T upper triangular (or lower triangular, or diagonal) under the partitioning (α, β) if the elements τ [i α 1 ,...,i α m |j β 1 ,...,j β n ] satisfy the corresponding index condition. Example 2. For a tensor T ∈ C 4×3×10 with the partitioning ({1, 2}, {3}), we mark its diagonal elements and upper triangular elements in the first row of Figure 1 for convenience of observation, where the blue dots represent diagonal elements and the red dots represent the strictly upper triangular elements. Additionally, we mark the upper triangular elements and the strictly upper triangular elements defined in [14] in the second row of Figure 1, where the red dots represent the strictly upper triangular elements, and the green and red dots together form the upper triangular elements. The slices in the figure are lateral slices [12], and the indices of the elements of T are denoted as (i 1 , i 2 , i 3 ).

Eigenvalue of the Tensor
The eigeninformation of a matrix is a fundamental concept in the field of matrix analysis and plays an important role in many practical applications. Therefore, it is necessary to consider the definition and existence of the eigeninformation of a tensor.
In linear algebra, if there exist a constant λ and a nonzero vector x such that Ax = λx, we call λ an eigenvalue of A and x an eigenvector associated with λ. The equation Ax = λx implies that the one-dimensional linear subspace spanned by the eigenvectors associated with the same eigenvalue remains stable under the corresponding linear transformation. However, H-eigenvalues and Z-eigenvalues do not retain this property. With the benefit of the linear transformation, we present a new definition of the eigeninformation of an even-order tensor.
Firstly, we propose a new concept of square tensors from the viewpoint of an identity map, which is different from the definitions in [9,12]. Definition 4. For the tensor T ∈ C I 1 ×···×I 2m , if there exists a partitioning (α, β) satisfying ♯α = ♯β = m and I α k = I β k , k = 1, · · · , m, then we call the tensor T a square tensor under the partitioning (α, β). Definition 5. Suppose T ∈ C I 1 ×···×I 2m is a square tensor under the partitioning (α, β); if there exist λ ∈ C and a nonzero tensor X such that T ⊛ β X = λX, then λ is called an (α, β)-eigenvalue of T, and the tensor X is called an (α, β)-eigentensor of T associated with the eigenvalue λ.
This definition is similar to that in [15], and it conforms to our original intention in proposing a new definition of the tensor eigenvalue: the linear invariance of the matrix eigensubspace is generalized. Theorem 2. Suppose X 1 , X 2 , · · · , X r are eigentensors associated with the eigenvalue λ of the tensor T and k 1 , k 2 , . . ., k r are complex numbers. Then, k 1 X 1 + k 2 X 2 + · · · + k r X r , if nonzero, is also an eigentensor associated with the eigenvalue λ of the tensor T.
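For a square tensor, the (α, β)-eigenpairs can be computed from the eigenpairs of the square unfolding, and each eigenvector folds back into an eigentensor X satisfying T ⊛ β X = λX. This is an illustrative NumPy sketch under our own unfolding conventions, not an algorithm given in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# A square tensor under ({1, 2}, {3, 4}): the alpha- and beta-mode
# dimensions match, so the unfolding is a 6 x 6 square matrix.
T = rng.random((2, 3, 2, 3))
Tm = T.reshape(6, 6)

vals, vecs = np.linalg.eig(Tm)
lam = vals[0]
X = vecs[:, 0].reshape(2, 3)   # eigenvector folded back to an eigentensor

# Verify the eigen-relation T contracted with X equals lam * X.
TX = np.tensordot(T, X, axes=([2, 3], [0, 1]))
assert np.allclose(TX, lam * X)
```

Linearity of the contraction also makes Theorem 2 immediate in this picture: any nonzero linear combination of folded eigenvectors of Tm for the same λ again satisfies the relation.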
It is worth noting that, for an order-4 tensor, if the elements of T satisfy T ijkl = T ikjl , then the eigenvalues of T under the partitioning ({1, 2}, {3, 4}) and the partitioning ({1, 3}, {2, 4}) coincide. Extending this conclusion to the general situation, we obtain that if T ∈ C I 1 ×···×I 2m is square under the partitionings (α, β) and (ξ, ζ) and satisfies T α β = T ξ ζ , then the eigenvalues of T under the two partitionings are the same. In particular, if T is a supersymmetric tensor, the eigenvalues under any partitioning (α, β) satisfying ♯α = ♯β are the same.
For a square tensor T ∈ C I 1 ×···×I 2m under the partitioning (α, β), if its corresponding transformation satisfies T β = T * β , we call the transformation a self-adjoint operator and call the tensor a Hermitian tensor under the partitioning (α, β) [9]. Based on this concept, and in analogy with the spectral theorem for symmetric matrices, we present and prove the following lemma and theorem. Lemma 1. Suppose the tensor T ∈ C I 1 ×···×I 2m is Hermitian under the partitioning (α, β); then the (α, β)-eigentensors corresponding to different (α, β)-eigenvalues of T are orthogonal.

Singular Value Decomposition of the Tensor
Based on the eigenvalue of a tensor introduced in Section 4, we offer a new definition of the tensor singular value decomposition, which generalizes the decomposition in [8]. To demonstrate the feasibility of this decomposition, we first propose the following theorem and definition.
Definition 6. For a tensor T ∈ C I 1 ×···×I p with the partitioning (α, β), we call the non-negative square root of an eigenvalue of (T α β ) * ⊛ T α β a singular value of the tensor T under the partitioning (α, β). Theorem 5. Each tensor T ∈ C I 1 ×···×I p with the partitioning (α, β) has a singular value decomposition under the partitioning (SVDUP), in which the core tensor is a diagonal tensor whose diagonal elements are the singular values of T under the partitioning (α, β).
Proof. Without loss of generality, we only prove one case. From Theorem 3, it can be seen that there is a unitary tensor diagonalizing (T α β ) * ⊛ T α β with diagonal tensor Λ. Take a diagonal tensor whose diagonal elements are the square roots of the diagonal elements of Λ, and let Σ ∈ C I α 1 ×···×I α m ×I β 1 ×···×I β n be the tensor obtained by adding or deleting zeros in this tensor. Finally, we use the appropriate tensors to constitute the unitary tensor. When p = 2, T is a second-order tensor, and in decomposition (9), U and V are matrices. It is clear that SVDUP is a generalization of the matrix singular value decomposition in order; it therefore unifies the concepts of a tensor singular value and unique left and right singular tensors. For the singular value decomposition of a matrix, the core matrix is a diagonal matrix, but the core tensor of the HOSVD [2] is not. Additionally, it is impossible to transform a higher-order tensor into a pseudodiagonal tensor by orthogonal transformations [2]. From this perspective, our definition is a more natural way to generalize the singular value decomposition of matrices. In addition, to demonstrate the advantages of our SVDUP in data compression, we provide the following example.
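The construction in the proof can be sketched numerically: under a partitioning (α, β), the singular values of T are the singular values of the unfolding, and the factors of the matrix SVD fold back into the tensors U, Σ, V. This is a NumPy sketch under our own conventions, not the paper's MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# T with the partitioning ({1, 2}, {3, 4}); the unfolding is 6 x 20.
T = rng.random((2, 3, 4, 5))
Tm = T.reshape(6, 20)

Um, s, Vhm = np.linalg.svd(Tm, full_matrices=False)

# The matrix SVD reconstructs the unfolding, hence the tensor.
T_rec = ((Um * s) @ Vhm).reshape(2, 3, 4, 5)
assert np.allclose(T_rec, T)

# Fold the singular vectors back: left singular tensors of shape (2, 3),
# right singular tensors of shape (4, 5), stacked along the last mode.
U = Um.reshape(2, 3, 6)
V = Vhm.conj().T.reshape(4, 5, 6)
```

When p = 2 every reshape is the identity and this is exactly the matrix SVD, matching the degeneration discussed above.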
Example 3. We randomly generate tensors using the command "create_problem" in the Tensor Toolbox and compress them using truncated SVDUP and truncated HOSVD, respectively. Then, we compare the accuracy while ensuring in advance that the compression ratios are almost the same. In order to guarantee the generality of the experiment, we test 1000 data points of each size. The metric ∆φ [16], computed from the approximations of T obtained by truncated SVDUP and truncated HOSVD, respectively, is introduced to describe the accuracy of the approximations; ∆φ > 0 means that truncated SVDUP provides the more accurate approximation. The compression ratio is equal to the number of elements contained in the approximate tensor divided by the number of elements in the original tensor; p represents the order, and I represents the dimension of each mode. Tables 1 and 2 illustrate that, as the tensor scale increases, our T-SVDUP obtains more accurate approximations than T-HOSVD under a similar compression ratio.
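The truncated SVDUP itself can be sketched as keeping the k largest singular values of the unfolding. The sketch below is illustrative only; it does not reproduce the paper's Tensor Toolbox experiment or the HOSVD comparison, and the function name `truncated_svdup` is our own:

```python
import numpy as np

def truncated_svdup(T, n_alpha, k):
    """Rank-k truncation of T under a partitioning whose alpha part
    consists of the first n_alpha modes."""
    m = int(np.prod(T.shape[:n_alpha]))
    Tm = T.reshape(m, -1)
    U, s, Vh = np.linalg.svd(Tm, full_matrices=False)
    return ((U[:, :k] * s[:k]) @ Vh[:k, :]).reshape(T.shape)

rng = np.random.default_rng(0)
T = rng.random((4, 4, 4, 4))            # unfolding is 16 x 16

nrm = np.linalg.norm(T)
err5 = np.linalg.norm(T - truncated_svdup(T, 2, 5)) / nrm
err_full = np.linalg.norm(T - truncated_svdup(T, 2, 16)) / nrm

print(f"relative error, k=5: {err5:.3f}")   # full rank recovers T exactly
```

By the Eckart–Young theorem applied to the unfolding, each truncation is the best approximation of that matrix rank in the Frobenius norm, and the error decreases as k grows.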

Application
In both academic research and practical applications, there are many examples of mapping elements in one high-dimensional space to another. For example, a color image is mapped to a blurred image in 3D space during image processing, and one discrete matrix is mapped to another in the discretization of Poisson's equation. The existing processing method is to stretch the data into vectors, but this destroys the structure of the data. In this section, we take two-dimensional image deblurring as an example to describe the application of SVDUP without destroying the data structure.
In order to calculate the (i, j) pixel of the blurred pixel matrix Y, we rotate the PSF (point spread function) array 180 degrees and match it with the pixels in the source pixel matrix X by placing the center of the rotated PSF array on the (i, j) pixel of X [17]. Then, the products of the corresponding components are summed to give y ij . For example, let F = ( f ij ) 3×3 , X = (x ij ) 3×3 and Y = (y ij ) 3×3 be the PSF array, the source pixel matrix, and the blurred pixel matrix, respectively. We take T {3,4} : R 3×3 → R 3×3 to be the map from the source pixel matrix to the blurred pixel matrix, with tensor representation T. The transformation can then be summarized as Y = T ⊛ {3,4} X. The image deblurring problem is to recover the source pixel matrix X from the blurred pixel matrix Y = T ⊛ {3,4} X + Θ, where Θ is noise.
We use the SVDUP T {1,2} {3,4} = U ⊛ Σ ⊛ V * to analyze this problem. The solution can be written in terms of X ∈ R I 1 ×I 2 and B = Y − Θ, with U •i = U :,:,l,j , V •i = V :,:,l,j satisfying i = l + I 1 (j − 1), and σ i the singular values ordered as σ 1 ≥ σ 2 ≥ . . . ≥ σ I 1 I 2 ≥ 0. In general, for the blurring tensor, σ I 1 I 2 ≈ 0, and the ratio σ 1 /σ I 1 I 2 is large. A frequently used approach to dampen the effects of small singular values is to discard all SVDUP components with small singular values, that is, to construct the truncated approximation in which every σ j with j ≥ k + 1 is smaller than a given tolerance. This method is called the truncated SVDUP; it is a particular case of the spectral filtering methods in which the filter factors φ i are chosen such that φ i ≈ 1 for large singular values and φ i ≈ 0 for small singular values [17]. To some extent, this method compensates for the errors caused by the small singular values. Suppose the original image is blurred with a circular averaging blur kernel and perturbed by Gaussian white noise. Figure 2 shows the images recovered using truncated SVDUP with different truncations. All of our computations were performed in MATLAB 2016a running on a PC with an Intel(R) Core(TM) i7-7500 2.7 GHz CPU and 8 GB of RAM.
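The whole pipeline can be sketched end to end: build the blurring tensor from a small averaging PSF, blur X, and recover it through the SVD of the unfolding with small singular values discarded. This is an illustrative NumPy sketch with zero boundary conditions assumed and the noise Θ omitted; it is not the paper's MATLAB experiment:

```python
import numpy as np

def blur_tensor(F, n):
    """T[i, j, k, l] is the weight of x_{kl} in y_{ij}: the PSF F rotated
    180 degrees and centred on pixel (i, j), zero boundary conditions."""
    fr, fc = F.shape
    cr, cc = fr // 2, fc // 2
    T = np.zeros((n, n, n, n))
    for i in range(n):
        for j in range(n):
            for u in range(fr):
                for v in range(fc):
                    k, l = i + u - cr, j + v - cc
                    if 0 <= k < n and 0 <= l < n:
                        T[i, j, k, l] = F[fr - 1 - u, fc - 1 - v]
    return T

n = 7
F = np.ones((3, 3)) / 9.0                         # averaging blur kernel
T = blur_tensor(F, n)
rng = np.random.default_rng(0)
X = rng.random((n, n))                            # source pixel matrix
Y = np.tensordot(T, X, axes=([2, 3], [0, 1]))     # blurred matrix, noise-free

# Truncated SVDUP inversion: keep singular values above a tolerance.
U, s, Vh = np.linalg.svd(T.reshape(n * n, n * n))
k = int(np.sum(s > 1e-8))
Xrec = (Vh[:k].T @ ((U[:, :k].T @ Y.reshape(-1)) / s[:k])).reshape(n, n)
```

In the noise-free, well-conditioned setting of this sketch the truncation keeps every component and the recovery is exact; with noise and a larger ill-conditioned blur, lowering k trades resolution for stability, which is what Figure 2 illustrates.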

Conclusions
In this paper, from the perspective of linear transformation, we present some basic concepts and provide three tensor decompositions: QR decomposition, spectral decomposition and singular value decomposition.These concepts and decompositions are different from the existing results and are more natural generalizations of the corresponding matrix theory.The numerical results also show the effectiveness of our new decomposition method.

Theorem 1. Let T β and G η be the linear transformations defined in (4), and let T α β and G γ η be their tensor representations. Denote the adjoints of T β and G η by T * β and G * η , and the conjugate transposes of T α β and G γ η by (T α β ) * and (G γ η ) * , respectively. Then,

Figure 1. Upper triangular elements under different definitions.