Article

Tensor-Based Algorithms for Image Classification

Department of Mathematics and Computer Science, Freie Universität Berlin, 14195 Berlin, Germany
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Algorithms 2019, 12(11), 240; https://doi.org/10.3390/a12110240
Submission received: 20 October 2019 / Revised: 5 November 2019 / Accepted: 7 November 2019 / Published: 9 November 2019

Abstract

Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can, in the same way, be used for supervised learning problems and propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy), the other an alternating ridge regression in the tensor train format. We apply both methods to the MNIST and fashion MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers.

1. Introduction

Tensor-based methods have become a powerful tool for scientific computing over the last years. In addition to many application areas, such as quantum mechanics and computational dynamics, where low-rank tensor approximations have been successfully applied, using tensor networks for supervised learning has gained a lot of attention recently. In particular, the canonical format and the tensor train format have been considered for quantum machine learning (there are different research directions in the field of quantum machine learning; here, we understand it as using quantum computing capabilities for machine learning problems) problems, see, e.g., [1,2,3]. A tensor-based algorithm for image classification using sweeping techniques inspired by the density matrix renormalization group (DMRG) [4] was proposed in [5,6] and further discussed in [7,8]. Interestingly, researchers at Google are also currently developing a tensor-based machine learning framework called “TensorNetwork” (http://github.com/google/TensorNetwork) [9,10]. The goal is to expedite the adoption of such methods by the machine learning community.
Our goal is to show that recently developed methods for recovering the governing equations of dynamical systems can be generalized in such a way that they can also be used for supervised learning tasks, e.g., classification problems. To learn the governing equations from simulation or measurement data, regression methods such as sparse identification of nonlinear dynamics (SINDy) [11,12] and its tensor-based reformulation multidimensional approximation of nonlinear dynamics (MANDy) [13] can be applied. The main challenge is often to choose the right function space from which the system representation is learned. Although SINDy and MANDy essentially select functions from a potentially large set of basis functions by applying regularized regression methods, other approaches allow nested functions and typically result in nonlinear optimization problems, which are then frequently solved using (stochastic) gradient descent. By constructing a basis comprising tensor products of simple functions (e.g., functions depending only on one variable), extremely high-dimensional feature spaces can be generated.
In this work, we explain how to compute the pseudoinverse required for solving the minimization problem directly in the tensor train (TT) format, i.e., we replace the iterative approach from [5,6] by a direct computation of the least-squares solution and point out similarities with the aforementioned system identification methods. The reformulated algorithm can be regarded as a kernelized variant of MANDy, where the kernel is based on tensor products. This is also related to quantum machine learning ideas: As pointed out in [14], the basic idea of quantum computing is similar to kernel methods in that computations are performed implicitly in otherwise intractably large Hilbert spaces. Although kernel methods were popular in the 1990s, the focus of the machine learning community has shifted to deep neural networks in recent years [14]. We will show that, for simple image classification tasks, kernels based on tensor products are competitive with neural networks.
In addition to the kernel-based approach, we propose another DMRG-inspired method for the construction of TT decompositions of weight matrices containing the coefficients for the selected basis functions. Instead of computing pseudoinverses, a core-wise ridge regression [15] is applied to solve the minimization problem. Although the approach introduced in [5,6] only involves tensor contractions corresponding to single images of the training data set, we use TT representations of transformed data tensors, see [13,16], to include the entire training data set at once for constructing low-dimensional systems of linear equations. Combining an efficient computational scheme for the corresponding subproblems and truncated singular value decompositions [17], we call the resulting algorithm alternating ridge regression (ARR) and discuss connections to MANDy and other regularized regression techniques.
Although we describe the classification problems using the example of the iconic MNIST data set [18] and the fashion MNIST data set [19], the derived algorithms can be easily applied to other classification problems. There is a plethora of kernel and deep learning methods for image classification; a list of the most successful methods for the MNIST and fashion MNIST data sets including nearest-neighbor heuristics, support vector machines, and convolutional neural networks can be found on the respective website (http://yann.lecun.com/exdb/mnist/, http://github.com/zalandoresearch/fashion-mnist). We will not review these methods in detail, but instead focus on relationships with data-driven methods for analyzing dynamical systems. The main contributions of this paper are as follows.
  • Extension of MANDy: We show that the efficiency of the pseudoinverse computation in the tensor train format can be improved by eliminating the need to left- and right-orthonormalize the tensor. Although this is a straightforward modification of the original algorithm, it enables us to consider large data sets. The resulting method is closely related to kernel ridge regression.
  • Alternating ridge regression: We introduce a modified TT representation of transformed data tensors for the development of a tensor-based regression technique which computes low-rank representations of coefficient tensors. We show that it is possible to obtain results which are competitive with those computed by MANDy and, at the same time, reduce the computational costs and the memory consumption significantly.
  • Classification of image data: Although originally designed for system identification, we apply these methods to classification problems and visualize the learned classifier, which allows us to interpret features detected in the images.
The remainder is structured as follows. In Section 2, we describe methods to learn governing equations of dynamical systems from data as well as a tensor-based iterative scheme for image classification and highlight their relationships. In Section 3, we describe how to apply MANDy to classification problems and introduce the ARR approach based on the alternating optimization of TT cores. Numerical results are presented in Section 4, followed by a brief summary and conclusion in Section 5.

2. Prerequisites

We will introduce the original MNIST and the fashion MNIST data set, which will serve as guiding examples. Afterwards, SINDy and MANDy, as well as tensor-based methods for image classification problems, will be briefly discussed. In what follows, we will use the notation summarized in Table 1.

2.1. MNIST and Fashion MNIST

The MNIST data set [18], see Figure 1a, contains grayscale (the methods described below can be easily extended to color images by defining basis functions for each primary color) images of handwritten digits and the associated labels. The data set is split into 60,000 images for training and 10,000 images for testing. Each image is of size 28 × 28. Let d = 784 be the number of pixels of one image, and let the images, reshaped as vectors, be denoted by x^{(j)} ∈ R^d and the corresponding labels by y^{(j)} ∈ R^{d′}, where d′ = 10 is the number of different classes. Each label encodes a number in {0, …, 9}, and the entries y_i^{(j)} of the vector y^{(j)} are given by
y_i^{(j)} = 1 if x^{(j)} contains the number i − 1, and y_i^{(j)} = 0 otherwise,   (1)
i.e., y^{(j)} = [1, 0, 0, …, 0] represents 0, y^{(j)} = [0, 1, 0, …, 0] represents 1, etc. This is also called one-hot encoding in machine learning.
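As a concrete illustration of this encoding, the following NumPy sketch builds the label matrix Y from integer class labels; the function name and shapes are illustrative and not part of the original formulation.

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Encode integer labels 0, ..., num_classes-1 as unit vectors (one column per sample)."""
    Y = np.zeros((num_classes, len(labels)))
    Y[labels, np.arange(len(labels))] = 1.0
    return Y

# the digits 0, 1, and 7 become the columns [1,0,...,0], [0,1,0,...,0], and [0,...,0,1,0,0]
Y = one_hot(np.array([0, 1, 7]))
```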
The fashion MNIST data set [19] can be regarded as a drop-in replacement for the original data set. There are again 60,000 training and 10,000 test images of size 28 × 28. Some samples are shown in Figure 1b and the corresponding labels in Figure 1c. Given a picture of a clothing item, the goal now is to identify the correct category, which is encoded as described above.

2.2. SINDy

SINDy [11] was originally developed to learn the governing equations of dynamical systems from data. We will show how it can, in the same way, be used for classification problems. Consider an autonomous ordinary differential equation of the form ẋ = f(x), with f: R^d → R^d. Given m measurements of the state of the system, denoted by x^{(j)}, j = 1, …, m, and the corresponding time derivatives y^{(j)} := ẋ^{(j)}, the goal is to reconstruct the function f from the measurement data. Let X = [x^{(1)}, …, x^{(m)}] ∈ R^{d×m} and Y = [y^{(1)}, …, y^{(m)}] ∈ R^{d′×m}. That is, d′ = d in this case. To represent f, we select a vector-valued basis function Ψ: R^d → R^n and define the transformed data matrix Ψ_X ∈ R^{n×m} by

Ψ_X = [Ψ(x^{(1)})  ⋯  Ψ(x^{(m)})].   (2)

Omitting sparsity constraints, SINDy then boils down to solving

min_Ξ ‖Y − Ξ^⊤ Ψ_X‖_F,   (3)

where

Ξ = [ξ_1  ⋯  ξ_{d′}] ∈ R^{n×d′}   (4)

is the coefficient matrix. Each column vector ξ_i then represents a function f_i, i.e.,

y_i^{(j)} ≈ f_i(x^{(j)}) = ξ_i^⊤ Ψ(x^{(j)}).   (5)

We thus obtain a model of the form ẋ = Ξ^⊤ Ψ(x), which approximates the possibly unknown dynamics. The solution of the minimization problem (3) with minimal Frobenius norm is given by

Ξ^⊤ = Y Ψ_X^+,   (6)

where ^+ denotes the pseudoinverse, see [20].
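As a minimal illustration (not taken from the original implementation), the least-squares solution (6) can be computed directly with NumPy; the monomial basis chosen below is only one possible choice of Ψ.

```python
import numpy as np

def sindy_least_squares(X, Y, psi):
    """Minimum-norm solution of (3): the rows of the returned matrix are the xi_i^T.

    X : (d, m) data matrix, Y : (d', m) matrix of derivatives (or labels),
    psi : callable mapping a vector in R^d to a feature vector in R^n.
    """
    Psi_X = np.column_stack([psi(x) for x in X.T])  # transformed data matrix (2), shape (n, m)
    return Y @ np.linalg.pinv(Psi_X)                # coefficient matrix of shape (d', n)

# illustrative basis: constant, linear, and quadratic monomials
psi = lambda x: np.concatenate(([1.0], x, x**2))
```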

2.3. Tensor-Based Learning

We will now briefly introduce the basic concepts of tensor decompositions and tensor formats as well as the tensor-based reformulation of SINDy, called MANDy, proposed in [13]. Additionally, recently introduced methods for supervised learning with tensor networks will be discussed.

2.3.1. Tensor Decompositions

To mitigate the curse of dimensionality when working with tensors T ∈ R^{n_1×⋯×n_p}, where n_μ ∈ N, we will exploit low-rank tensor approximations. The simplest approximation of a tensor of order p is a rank-one tensor, i.e., a tensor product of p vectors given by

T = T^{(1)} ⊗ T^{(2)} ⊗ ⋯ ⊗ T^{(p)},   (7)

where T^{(μ)}, μ = 1, …, p, are vectors in R^{n_μ}. If a tensor is written as the sum of r rank-one tensors, i.e.,

T = ∑_{k=1}^{r} T^{(1)}_{:,k} ⊗ T^{(2)}_{:,k} ⊗ ⋯ ⊗ T^{(p)}_{:,k},   (8)

with T^{(μ)} ∈ R^{n_μ×r}, this results in the so-called canonical format. In fact, any tensor can be expressed in this format, but we are particularly interested in low-rank representations of tensors in order to reduce the storage consumption as well as the computational costs. The same requirement applies to tensors expressed in the tensor train format (TT format), where a high-dimensional tensor is represented by a network of multiple low-dimensional tensors [21,22]. A tensor T ∈ R^{n_1×⋯×n_p} is said to be in the TT format if

T = ∑_{k_0=1}^{r_0} ⋯ ∑_{k_p=1}^{r_p} T^{(1)}_{k_0,:,k_1} ⊗ ⋯ ⊗ T^{(p)}_{k_{p−1},:,k_p}.   (9)

The tensors T^{(μ)} ∈ R^{r_{μ−1}×n_μ×r_μ} of order 3 are called TT cores. The numbers r_μ are called TT ranks and have a strong influence on the expressivity of a tensor train. It holds that r_0 = r_p = 1 and r_μ ≥ 1 for μ = 1, …, p − 1. Figure 2a shows the graphical representation of a tensor train, which is also called Penrose notation, see [23].
The left- and right-unfoldings of a TT core T^{(μ)} are given by the matrices

L_μ = T^{(μ)} |_{r_{μ−1}, n_μ}^{r_μ} ∈ R^{(r_{μ−1} · n_μ) × r_μ}   and   R_μ = T^{(μ)} |_{r_{μ−1}}^{n_μ, r_μ} ∈ R^{r_{μ−1} × (n_μ · r_μ)},   (10)

respectively. Here, the indices of two modes of T^{(μ)} are lumped into a single row or column index, whereas the remaining mode forms the other dimension of the unfolding matrix. We call the TT core T^{(μ)} left-orthonormal if its left-unfolding satisfies L_μ^⊤ · L_μ = Id ∈ R^{r_μ×r_μ}. Correspondingly, a core is called right-orthonormal if its right-unfolding satisfies R_μ · R_μ^⊤ = Id ∈ R^{r_{μ−1}×r_{μ−1}}. In Penrose notation, orthonormal components are depicted by half-filled circles, cf. Figure 2b, where a tensor train with left-orthonormal cores is shown.
A given TT core can be left- or right-orthonormalized, respectively, by computing a singular value decomposition (SVD) of its unfolding. For instance, the components of an SVD of the form L_μ = U · Σ · V^⊤ can be interpreted as a left-orthonormalized version of T^{(μ)} coupled with the matrices Σ and V^⊤. When we talk about, e.g., left-orthonormalization of the cores of a tensor train, we mean the application of sequential SVDs from left to right (also called HOSVD, cf. [24]), where U builds the updated core, while the non-orthonormal part Σ · V^⊤ is contracted with the subsequent TT core. As described in [13,16,25], left- and right-orthonormalization can be used to construct pseudoinverses of tensors. The general idea is to construct a global SVD of a given tensor train by left- and right-orthonormalizing its cores. However, in Section 3.2, we will exploit the structure of transformed data tensors, as introduced in [13], to propose a different method for the construction of pseudoinverses, which significantly reduces the computational effort.
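The following sketch shows a single left-orthonormalization step for TT cores stored as plain NumPy arrays; it is only meant to illustrate the HOSVD-style sweep described above, and the array shapes are our assumptions.

```python
import numpy as np

def left_orthonormalize_step(core, next_core):
    """Left-orthonormalize `core` via an SVD of its left-unfolding and absorb the
    non-orthonormal factor Sigma * V^T into `next_core` (one step of a left-to-right sweep).

    core      : array of shape (r_prev, n, r), next_core : array of shape (r, n_next, r_next).
    """
    r_prev, n, r = core.shape
    U, s, Vt = np.linalg.svd(core.reshape(r_prev * n, r), full_matrices=False)
    new_core = U.reshape(r_prev, n, U.shape[1])           # left-orthonormal core
    remainder = np.diag(s) @ Vt                           # Sigma * V^T
    next_core = np.einsum('ij,jkl->ikl', remainder, next_core)
    return new_core, next_core
```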
We also represent TT cores as two-dimensional arrays containing vectors as elements. In this notation, a single core of a tensor train T ∈ R^{n_1×⋯×n_p} is written as

T^{(μ)} = [ T^{(μ)}_{1,:,1} ⋯ T^{(μ)}_{1,:,r_μ} ; … ; T^{(μ)}_{r_{μ−1},:,1} ⋯ T^{(μ)}_{r_{μ−1},:,r_μ} ].   (11)

Then, the expression T = T^{(1)} ⊗ ⋯ ⊗ T^{(p)} is used for representing tensor trains T, cf. [13,26,27].

2.3.2. MANDy

MANDy [13] is a tensorized version of SINDy and constructs counterparts of the transformed data matrices (2) directly in the TT format. Two different types of decompositions, namely, the coordinate- and the function-major decomposition, were introduced in [13]. In [16], the technique for the construction of the transformed data tensors was generalized to arbitrary lists of basis functions. This will be explained in more detail in Section 3.1. Given data matrices X, Y ∈ R^{d×m} and basis functions ψ_μ: R^d → R^{n_μ}, μ = 1, …, p, the tensor-based representation of the corresponding transformed data tensors Ψ_X ∈ R^{n_1×⋯×n_p×m} enables us to solve the reformulated minimization problem

min_Ξ ‖Y − Ξ^⊤ Ψ_X‖_F   (12)

so that the coefficients are given in the form of a tensor train Ξ ∈ R^{n_1×⋯×n_p×d}, cf. Section 2.2. Instead of identifying the governing equations of dynamical systems from data, see [13], we seek to classify images using MANDy. The only difference is that Ψ_X now contains the transformed images and Y the corresponding labels. As the matrix Y may have different dimensions than X, i.e., Y ∈ R^{d′×m}, the aim is to find the optimal solution of (12) in the form of a tensor train Ξ ∈ R^{n_1×⋯×n_p×d′}. We will discuss the explicit representation of transformed data tensors and their pseudoinversion in Section 3.

2.3.3. Supervised Learning with Tensor Networks

It has been shown in [5,6] that tensor-based optimization schemes can be adapted to supervised learning problems. A given input vector x is mapped into a higher-dimensional space using a feature map Ψ before being classified by a decision function f: R^d → R^{d′} of the form

f(x) = Ξ^⊤ Ψ(x),   (13)

where Ξ is a coefficient tensor in TT format. The ith entry of the vector f(x) then represents the likelihood that the image x belongs to the class with label i − 1. The transformation defined in [5,6] reads as follows,

Ψ(x) = [cos(α x_1), sin(α x_1)]^⊤ ⊗ [cos(α x_2), sin(α x_2)]^⊤ ⊗ ⋯ ⊗ [cos(α x_d), sin(α x_d)]^⊤,   (14)
where α is a parameter. However, the originally proposed choice of α = π/2 is often not optimal. This will be discussed in more detail below. The function Ψ assigns each pixel of the image a two-dimensional vector, inspired by the spin vectors encountered in quantum mechanics [6]. It was illustrated in [14] how such a transformation can be implemented as a quantum feature map, where the information is encoded in the amplitudes of qubits. Embedding data into quantum Hilbert spaces might be interesting in cases where the quantum device evaluates kernels faster or where kernels cannot be simulated by classical computers anymore [14].
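A minimal sketch of the local feature map (14) is given below: every pixel value is mapped to a cosine/sine pair. We assume pixel values rescaled to [0, 1]; the default α is the empirically chosen value used later in Section 4.

```python
import numpy as np

def local_features(x, alpha=0.59):
    """Map each pixel x_i of a flattened image to (cos(alpha * x_i), sin(alpha * x_i)).

    x : array of shape (d,) with pixel values in [0, 1]; returns an array of shape (d, 2),
    whose rows are the factors of the rank-one tensor Psi(x).
    """
    return np.stack([np.cos(alpha * x), np.sin(alpha * x)], axis=1)
```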
Due to the tensor structure, Ψ(x) is a tensor with 2^d entries, which, for the original MNIST image size, amounts to n ≈ 10^{236} basis functions. In [5,6], the image size is first reduced to 14 × 14 pixels by averaging groups of four pixels, which then results in “only” n ≈ 10^{59} basis functions. Thus, storing the full coefficient matrix is clearly infeasible since Ξ ∈ R^{2×⋯×2×d′} ≅ R^{n×d′}. Here, d′ appears as an additional tensor index since the decision function is computed for all d′ labels simultaneously.
To learn the tensor Ξ from training data, a DMRG/ALS-related algorithm (cf. [4,28]) that sweeps back and forth along the cores and iteratively minimizes the cost function
min_Ξ ∑_{j=1}^{m} ‖y^{(j)} − Ξ^⊤ Ψ(x^{(j)})‖_2^2   (15)
is devised. The suggested algorithm varies two neighboring cores at the same time, which allows for adapting the tensor ranks, and computes an update using a gradient descent step. The tensor ranks are reduced by truncated SVDs to control the computational costs. The truncation of the TT ranks can also be interpreted as a form of regularization. For more details, we refer to [5,6].
Different techniques to improve the original algorithm presented in [5] were proposed. In [29], the image data is preprocessed using a discrete cosine transformation and the ordering of the pixels is optimized in order to reduce the ranks. In [10], the DMRG-based sweeping method was replaced by a stochastic gradient descent approach, where the gradient is computed with the aid of automatic differentiation. Furthermore, it was shown that GPUs allow for an efficient solution of such problems.

3. Tensor-Based Classification Algorithms

We will now describe two different tensor-based classification approaches. First, we show how to combine MANDy with kernel-based regression techniques, so as to derive an efficient method for the computation of the pseudoinverse of the transformed data tensor. Then, a classification algorithm based on the alternating optimization of the TT cores of the coefficient tensor is proposed.

3.1. Basis Decomposition

As above, let x ∈ R^d be a vector and ψ_μ: R^d → R^{n_μ}, μ = 1, …, p, basis functions. We consider the rank-one tensors
Ψ(x) = ψ_1(x) ⊗ ⋯ ⊗ ψ_p(x) = [ψ_{1,1}(x), …, ψ_{1,n_1}(x)]^⊤ ⊗ ⋯ ⊗ [ψ_{p,1}(x), …, ψ_{p,n_p}(x)]^⊤ ∈ R^{n_1×n_2×⋯×n_p}.   (16)
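For small p, the rank-one tensor (16) can be formed explicitly by successive outer products, as in the sketch below; in practice the factors ψ_μ(x) are of course kept separate, since the full tensor has as many entries as the product of all n_μ.

```python
import numpy as np
from functools import reduce

def rank_one_tensor(factors):
    """Form psi_1(x) ⊗ ... ⊗ psi_p(x) from a list of factor vectors (only feasible for small p)."""
    return reduce(np.multiply.outer, factors)

# example: three "pixels" with two basis functions each yields a 2 x 2 x 2 tensor
Psi_x = rank_one_tensor([np.array([1.0, 0.5]), np.array([0.2, 0.8]), np.array([0.9, 0.1])])
```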
For m different vectors stored in a data matrix X = [x^{(1)}, …, x^{(m)}] ∈ R^{d×m}, we must construct transformed data tensors Ψ_X ∈ R^{n_1×⋯×n_p×m} with (Ψ_X)_{:,…,:,j} = Ψ(x^{(j)}). In [13,16], this was achieved by multiplying (with the aid of the tensor product) the rank-one decompositions given in (16) for all vectors x^{(1)}, …, x^{(m)} by additional unit vectors and subsequently summing them up. The transformed data tensor can then be represented using the following canonical/TT decompositions,

Ψ_X = ∑_{j=1}^{m} Ψ(x^{(j)}) ⊗ e_j = ∑_{j=1}^{m} ψ_1(x^{(j)}) ⊗ ⋯ ⊗ ψ_p(x^{(j)}) ⊗ e_j = [ψ_1(x^{(1)}) ⋯ ψ_1(x^{(m)})] ⊗ diag(ψ_2(x^{(1)}), …, ψ_2(x^{(m)})) ⊗ ⋯ ⊗ diag(ψ_p(x^{(1)}), …, ψ_p(x^{(m)})) ⊗ [e_1 ; ⋯ ; e_m] = Ψ_X^{(1)} ⊗ ⋯ ⊗ Ψ_X^{(p+1)},   (17)

where e_j, j = 1, …, m, denote the unit vectors of the standard basis in the m-dimensional Euclidean space and the diagonal cores are to be read as “matrices of vectors” in the core notation introduced above. An entry of Ψ_X is given by

(Ψ_X)_{i_1,…,i_p,j} = ψ_{1,i_1}(x^{(j)}) · ⋯ · ψ_{p,i_p}(x^{(j)}),   (18)

for 1 ≤ i_k ≤ n_k and 1 ≤ j ≤ m. Thus, the matrix-based counterpart of Ψ_X, see (2), would be given by the mode-p unfolding

Ψ_X = Ψ_X |_{n_1,…,n_p}^{m}.   (19)

That is, the modes n_1, …, n_p represent row indices of the unfolding, and mode m is the column index. However, for the purpose of this paper, we modify the representation of our transformed data tensors. First, realize that the last core of the TT representation in (17) can be neglected, as it is only a reshaped identity matrix. The result is then a tensor network with an “open arm”, which can be regarded as a tensor train with an additional column mode located at the last core, see Figure 3a. Second, this additional mode can be shifted to any TT core of the decomposition. This is shown in Figure 3b. We will benefit from these modifications in Section 3.3 when constructing the subproblems for the ALS-inspired approach. Consider the TT decomposition Ψ̂_X given by

Ψ̂_X = Ψ_X^{(1)} ⊗ ⋯ ⊗ Ψ_X^{(p−1)} ⊗ [ψ_p(x^{(1)}) ; ⋯ ; ψ_p(x^{(m)})].   (20)

Note that this tensor is an element of the tensor space R^{n_1×⋯×n_p}, i.e., Ψ̂_X has no additional column dimension, and it holds that

Ψ̂_X |_{n_1,…,n_p} = Ψ_X · [1, …, 1]^⊤.   (21)

Now, we define Ψ̂_{X,μ} ∈ R^{n_1×⋯×n_p×m} to be the tensor derived from Ψ̂_X by replacing the μth core by

Ψ̂_{X,μ}^{(μ)} = diag( ψ_μ(x^{(1)}) ⊗ e_1 , … , ψ_μ(x^{(m)}) ⊗ e_m ) ∈ R^{m×n_μ×m×m},   (22)

where the outer modes correspond to the rank dimensions, whereas the inner modes represent the dimensions of the matrices, i.e., the jth diagonal block is the n_μ × m matrix whose jth column is ψ_μ(x^{(j)}). Analogously, for the first and the last core of Ψ̂_{X,μ}, the nondiagonal core structure has to be used. The 4-dimensional TT core (22) naturally represents a component of a TT operator. In what follows, we will not need to store the whole TT core given in (22). Otherwise, this would mean that we would have to store m^3 · n_μ scalar entries (not using a sparse format). However, from a theoretical point of view, Ψ_X in Figure 3a and Ψ̂_{X,μ} in Figure 3b represent the same tensor in R^{n_1×⋯×n_p×m}, see Appendix A.

3.2. Kernel-Based MANDy

Given a training set X ∈ R^{d×m}, the corresponding label matrix Y ∈ R^{d′×m}, and a set of basis functions ψ_μ: R^d → R^{n_μ}, μ = 1, …, p, we exploit the canonical representation of Ψ_X given in (17) for kernel-based MANDy. The aim is to solve the optimization problem (12), i.e., we try to find a coefficient tensor Ξ ∈ R^{n_1×⋯×n_p×d′} such that Ξ^⊤ Ψ_X is as close as possible to the corresponding label matrix Y ∈ R^{d′×m}. The solution of (12) with minimal Frobenius norm is given by Ξ^⊤ = Y Ψ_X^+, cf. (6). Note that, compared to standard SINDy/MANDy, the matrix Y here does not necessarily have the same dimensions as X. Due to potentially large ranks of the transformed data tensor Ψ_X, the direct computation of the pseudoinverse using left- and right-orthonormalization, as proposed in [13], would be computationally expensive. However, using the identity Ψ_X^+ = (Ψ_X^⊤ Ψ_X)^+ Ψ_X^⊤, we can rewrite the coefficient tensor as
Ξ^⊤ = Y (Ψ_X^⊤ Ψ_X)^+ Ψ_X^⊤.   (23)
The contraction of Ψ_X^⊤ and Ψ_X yields a Gram matrix G ∈ R^{m×m} whose entries are given by the resulting kernel function k(x, x′) = ⟨Ψ(x), Ψ(x′)⟩, i.e.,
G_{i,j} = k(x^{(i)}, x^{(j)}) = ⟨Ψ(x^{(i)}), Ψ(x^{(j)})⟩.   (24)
Note that due to the tensor structure of Ψ X , we obtain
k(x^{(i)}, x^{(j)}) = ∏_{μ=1}^{p} ⟨ψ_μ(x^{(i)}), ψ_μ(x^{(j)})⟩,   (25)
i.e., a product of p local kernels.
Remark 1.
For the basis functions defined in (14), this can be simplified to
k(x, x′) = ∏_{i=1}^{d} cos(α (x_i − x′_i)),   (26)
which is a product of cosine kernels, cf. [5].
The product structure of the kernel allows us to compute the Gram matrix G as a Hadamard product (denoted by ⊙) of p matrices, that is,
G = Θ_1 ⊙ Θ_2 ⊙ ⋯ ⊙ Θ_p,   (27)
where Θ_μ ∈ R^{m×m} is given by
Θ_μ = [ψ_μ(x^{(1)}), …, ψ_μ(x^{(m)})]^⊤ · [ψ_μ(x^{(1)}), …, ψ_μ(x^{(m)})].   (28)
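The following sketch assembles G according to (27) and (28); for the specific basis (14), the loop can be replaced by the closed-form cosine kernel of Remark 1. The function names are illustrative, and for large m the matrix should be assembled block-wise.

```python
import numpy as np

def gram_matrix(X, local_feature_maps):
    """G = Theta_1 ⊙ ... ⊙ Theta_p for the columns of X and local feature maps psi_mu."""
    m = X.shape[1]
    G = np.ones((m, m))
    for psi_mu in local_feature_maps:
        F = np.column_stack([psi_mu(X[:, j]) for j in range(m)])  # (n_mu, m)
        G *= F.T @ F                                              # Hadamard product with Theta_mu
    return G

def gram_matrix_cosine(X, alpha=0.59):
    """Closed form for the basis (14): G_ij = prod_k cos(alpha * (x_k^(i) - x_k^(j)))."""
    diffs = X[:, :, None] - X[:, None, :]          # pairwise pixel differences, shape (d, m, m)
    return np.prod(np.cos(alpha * diffs), axis=0)
```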
We now define Z := Y G^+ ∈ R^{d′×m}, which can be obtained by solving the system Z G = Y (in the least-squares sense if G is singular). The decision function f, cf. (13), is then given by
f(x) = Z Ψ_X^⊤ Ψ(x) =: Ξ^⊤ Ψ(x) = Z [k(x^{(1)}, x), …, k(x^{(m)}, x)]^⊤   (29)
and again only requires kernel evaluations. As above, we can use a sequence of Hadamard products to compute Ψ_X^⊤ Ψ(x). The classification problem can thus be solved as summarized in Algorithm 1.
Algorithm 1 Kernel-based MANDy for classification.
Input: Training set X and label matrix Y, test set X̃, basis functions.
Output: Label matrix Ỹ.
1: Compute G using (27) and (28).
2: Solve Z G = Y.
3: Define the decision function f using (29).
4: Apply f to every vector x̃ in the test set, and store the resulting vectors ỹ in the matrix Ỹ.
5: The index i of the largest entry of ỹ determines the detected label, i.e., set ỹ = e_i.
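A compact sketch of Algorithm 1 with the cosine kernel of Remark 1 is given below; it is a simplified reimplementation for illustration only (the reference implementation is part of Scikit-TT, see Section 4), and assembling the Gram matrix in one shot is feasible only for moderate m.

```python
import numpy as np

def kernel_mandy_classify(X_train, Y_train, X_test, alpha=0.59):
    """Steps 1-5 of Algorithm 1 with the cosine kernel; returns predicted class indices."""
    def kernel(A, B):
        # k(x, x') = prod_i cos(alpha * (x_i - x'_i)) for all pairs of columns of A and B
        return np.prod(np.cos(alpha * (A[:, :, None] - B[:, None, :])), axis=0)

    G = kernel(X_train, X_train)                            # Gram matrix, cf. (27)
    Z = np.linalg.lstsq(G, Y_train.T, rcond=None)[0].T      # solve Z G = Y (G is symmetric)
    scores = Z @ kernel(X_train, X_test)                    # decision function (29) on the test set
    return np.argmax(scores, axis=0)                        # index of the largest entry = label
```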
We could also replace the pseudoinverse G^+ by the regularized inverse (G + ε Id)^{−1}, where ε is the regularization parameter, which would lead to a slightly different system of linear equations. However, for the numerical experiments in Section 4, we do not use regularization. Algorithm 1 is equivalent to kernel ridge regression (see, e.g., [15]) with a tensor product kernel. This is not surprising, as we are solving simple least-squares problems.
Remark 2.
Note that the kernel does not necessarily have to be based on tensor products of basis functions for this method to work; we could also simply use, e.g., a Gaussian kernel, which for the MNIST data set leads to slightly lower but similar classification rates. Tensor-based kernels, however, have an exponentially large yet explicit feature space representation and additional structure that could be exploited to speed up computations. Moreover, the kernel-based algorithm outlined above can in the same way be applied to time-series data to learn governing equations in potentially infinite-dimensional feature spaces.
Compared to the method proposed in [5,6], the advantage of our approach, which can be regarded as a kernel-based formulation of MANDy (or SINDy), is that we can compute a closed-form solution without necessitating any iterations or sweeps. However, even though this approach for classification problems computes an optimal solution of the minimization problem (12), the runtime as well as the memory consumption of the algorithm depend crucially on the size of the training data set (and also the number of labels), and the resulting coefficient tensor Ξ has no guaranteed low-rank structure. We will now propose an alternating optimization method which circumvents this problem.

3.3. Alternating Ridge Regression

In what follows, we will use the TT representation illustrated in Figure 3b for the transformed data tensor Ψ X R n 1 × × n p × m . Even though we do not consider a TT operator, the proposed approach is closely related to the DMRG method [4], also called alternating linear scheme (ALS) [28]. As in [5,6], the idea here is to compute a low-rank TT approximation of the coefficient tensor Ξ by an alternating scheme. That is, a low-dimensional system of linear equations has to be solved for each TT core. Our approach is outlined in Algorithm 2.
First, note that instead of solving the minimization problem (12), we can also find separate solutions of
min_{Ξ_i} ‖Y_{i,:} − Ξ_i^⊤ Ψ_X‖_2   (30)
for each row of Y. As these systems can be solved independently, Algorithm 2 can be easily parallelized. We then use a DMRG/ALS-inspired scheme to split the optimization problem (30) into p subproblems. The micromatrix M_μ of such a subproblem can be built from three different parts, namely, Ψ̂_{X,μ}^{(μ)}, P_μ, and Q_μ. The latter two are collected in a left and a right stack, respectively, to avoid repetitive computations. Note that P_μ is determined by contracting P_{μ−1} with the (μ−1)th cores of Ξ_i and Ψ̂_X. Analogously, Q_μ is built from Q_{μ+1} and the (μ+1)th cores of Ξ_i and Ψ̂_X. During the first half sweep of Algorithm 2, we only have to compute the matrices P_μ, as the matrices Q_μ used there are not based on any updated cores. Afterwards, the matrices Q_μ are (re-)computed during the second half sweep. See [28] for further details and Figure 4 for a graphical illustration of the construction of the subproblems and the extraction of the optimized core. Note that it is not necessary to store the (sparse) core Ψ̂_{X,μ}^{(μ)} in its full representation as a 4-dimensional array to construct the matrix M_μ. By using, e.g., NumPy's einsum, the TT core can be replaced by a (dense) matrix containing the corresponding function evaluations.
Algorithm 2 Alternating ridge regression (ARR) for classification.
Input: Training set X and label matrix Y, test set X̃, basis functions, initial guesses.
Output: Label matrix Ỹ.
1: for i = 1, …, d′ do    (parallelizable)
2:   Define w = Y_{i,:} = [y_i^{(1)}, …, y_i^{(m)}].
3:   Define initial guess Ξ_i and right-orthonormalize.
4:   Compute right stack Q_p, …, Q_1.
5:   for μ = 1, …, p − 1 do    (first half sweep)
6:     Compute P_μ.
7:     Construct micromatrix M_μ from P_μ, Ψ̂_{X,μ}^{(μ)}, and Q_μ.
8:     Determine truncated SVD solution of min_v ‖w^⊤ − M_μ v‖_2.
9:     Apply QR decomposition to extract updated core.
10:  for μ = p, …, 1 do    (second half sweep)
11:    Compute Q_μ.
12:    Construct micromatrix M_μ from P_μ, Ψ̂_{X,μ}^{(μ)}, and Q_μ.
13:    Determine truncated SVD solution of min_v ‖w^⊤ − M_μ v‖_2.
14:    if μ > 1 then
15:      Apply QR decomposition to extract updated core.
16:    else
17:      Set the updated core to a reshape of v.
18:  Repeat Lines 5–17 to increase accuracy (if needed).
19: Define Ξ using (31) and set y = f(x) using (13).
20: The index of the largest entry of y determines the detected label, see Algorithm 1.
By orthonormalizing the fixed cores of Ξ , and using truncated SVDs [17] for solving the subsystems, we can interpret our approach as a core-wise ridge regression approximating the solution obtained by kernel-based MANDy, see Appendix B. After approximating the coefficient tensor
Ξ = ∑_{i=1}^{d′} Ξ_i ⊗ e_i,   (31)
the decision function f is given by (13). The main difference between our approach and the method introduced in [5,6] is that we do not update the TT cores of Ξ using gradient descent steps. Instead, we solve a low-dimensional system of linear equations corresponding to the entire training data set, whose solution yields the updated core. Moreover, we solve a minimization problem for each row of the label matrix Y. Using the modified basis decomposition introduced in Section 3.1, it is possible to significantly reduce the storage consumption of the stacks, see Lines 4 and 11 of Algorithm 2. If we only used the fixed representation of Ψ_X given in (17), the additional mode would lead to a much higher storage consumption of the right stack. Thus, our method provides an efficient construction of the subproblems.
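To make the core update in Lines 8, 9, and 13 more concrete, the following sketch solves the low-dimensional subproblem by a truncated SVD and extracts the orthonormal part of the reshaped solution with a QR decomposition. The interface (shapes of M_μ and w) is our assumption and not taken from the reference implementation.

```python
import numpy as np

def truncated_svd_solve(M, w, threshold=1e-2):
    """Approximate solution of min_v ||w^T - M v||_2 using only singular values above
    a relative threshold (cf. Lines 8 and 13 of Algorithm 2)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s > threshold * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ w) / s[keep])   # truncated pseudoinverse applied to w

def extract_core(v, r_prev, n, r):
    """QR decomposition of the reshaped solution; Q becomes the (left-orthonormal) updated core,
    and the triangular factor is absorbed into the neighboring core."""
    Q, R = np.linalg.qr(v.reshape(r_prev * n, r))
    return Q.reshape(r_prev, n, -1), R
```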

4. Numerical Results

We apply the tensor-based classification algorithms described in Section 3.2 and Section 3.3 to both the MNIST and fashion MNIST data sets, choosing the basis defined in (14) and setting α = 0.59. This value was determined empirically for the MNIST data set, but also leads to better classification rates for the fashion MNIST set. Kernel-based MANDy as well as ARR are available in Scikit-TT (https://github.com/PGelss/scikit_tt). The numerical experiments were performed on a Linux machine with 128 GB RAM and an Intel Xeon processor with a clock speed of 3 GHz and eight cores.
For the first approach, using kernel-based MANDy, we do not apply any regularization techniques. For the ARR approach, we set the TT ranks of the solutions Ξ_i (see Algorithm 2) to 10 and repeat the scheme five times. Here, we use regularization, i.e., truncated SVDs with a relative threshold of 10^{−2} are applied to the minimization problems given in Algorithm 2 (Lines 8 and 13). The obtained classification rates for the reduced and full MNIST and fashion MNIST data are shown in Figure 5.
Similarly to [5,6], we first apply the classifiers to the reduced data sets, see Figure 5a. Using MANDy, we obtain classification rates of up to 98.75% for the MNIST and 88.82% for the fashion MNIST data set. Using the ARR approach, the classification rates are not monotonically increasing, which may simply be an effect of the alternating optimization scheme. The highest classification rates we obtain are 98.16% for the MNIST data and 87.55% for the fashion MNIST data. We typically obtain a 100% classification rate for the training data (as a consequence of the richness of the feature space). This is not necessarily a desired property as the learned model might not generalize well to new data, but seems to have no detrimental effects for the simple MNIST classification problem. As shown in Figure 5b, kernel-based MANDy can still be applied when considering the full data sets without reducing the image size. Here, we obtain classification rates of up to 97.24% for the MNIST and 88.37% for the fashion MNIST data set. That we obtain lower classification rates for the full images as compared to the reduced ones might be due to the fact that pixel-by-pixel comparisons of images are not expedient. The averaging effect caused by downscaling the images helps to detect coarser features. This is similar to the effect of convolutional kernels and pooling layers. In principle, ARR can also be used for the classification of the full data sets. So far, however, our numerical experiments produced only classification rates significantly lower than those obtained by applying MANDy (95.94% for the MNIST and 82.18% for fashion MNIST data set). This might be due to convergence issues caused by the kernel. The application to higher-order transformed data tensors and potential improvements of ARR will be part of our future research.
Figure 5 also shows a comparison with tensorflow. We run the code provided as a classification tutorial (www.tensorflow.org/tutorials/keras/basic_classification) ten times and compute the average classification rate. The input layer of the network comprises 784 nodes (one for each pixel; for the reduced data sets, we thus have only 196 input nodes), followed by two dense layers with 128 and 10 nodes. The layer with 10 nodes is the output layer containing the probabilities that a given image belongs to the class represented by the respective neuron. Note that although more sophisticated methods and architectures exist for these problems (see the (fashion) MNIST website for a ranking), the results show that our tensor-based approaches are competitive with state-of-the-art deep-learning techniques.
To understand the numerical results for the MNIST data set (obtained by applying kernel-based MANDy to all 60,000 training images), we analyze the misclassified images, examples of which are displayed in Figure 6a. For misclassified images x, the entries of f ( x ) , see (29), are often numerically zero, which implies that there is no other image in the training set that is similar enough so that the kernel can pick up the resemblance. Some of the remaining misclassified digits are hard to recognize even for humans. Histograms demonstrating which categories are misclassified most often are shown in Figure 6b. Here, we simply count the instances where an image with label i was assigned the wrong label j. The digits 2 and 7, as well as 4 and 9, are confused most frequently. Additionally, we wish to visualize what the algorithm detects in the images. To this end, we perform a sensitivity analysis as follows. Starting with an image whose pixel values are constant everywhere (zero or any other value smaller than one, we choose 0.5), we set pixel ( i , j ) to one and compute y = f ( x ) for this image. The process is repeated for all pixels. For each label, we then plot a heat map of the values of y. This tells us which pixels contribute most to the classification of the images. The resulting maps are shown in Figure 6c. Except for the digit 1, the results are highly similar to the images obtained by averaging over all images containing a certain digit.
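The pixel-wise sensitivity analysis described above can be sketched as follows; here f stands for the learned decision function (29) applied to a single flattened image, and the default image size corresponds to the reduced 14 × 14 data (an assumption on our part).

```python
import numpy as np

def sensitivity_maps(f, side=14, num_classes=10, background=0.5):
    """For each pixel, set it to one in an otherwise constant image, evaluate the decision
    function f, and collect the responses as one heat map per class label."""
    maps = np.zeros((num_classes, side, side))
    for i in range(side):
        for j in range(side):
            x = np.full(side * side, background)
            x[i * side + j] = 1.0
            maps[:, i, j] = f(x)
    return maps
```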
Figure 7 shows examples of misclassified images and the corresponding histogram as well as the results of the sensitivity analysis for the fashion MNIST data set. We see that the images of shirts (6) are most difficult to classify (due to the ambiguity in the category definitions), whereas trousers (1) and bags (8) have the lowest misclassification rates (probably due to their distinctive shapes). In contrast to the MNIST data set, the results of the sensitivity analysis differ widely from the average images. The classifier for coats (4), for instance, “looks for” a zipper and coat pockets, which are not visible in the “average coat”, and the classifier for dresses (3) seems to base the decision on the presence of creases, which are also not distinguishable in the “average dress”. The interpretation of other classifiers is less clear, e.g., the ones for sandals (5) and sneakers (7) seem to be contaminated by other classes.
Comparing the runtimes of both approaches applied to the reduced data sets with 60,000 training images, kernel-based MANDy needs approximately one hour for the construction of the decision function (29). On the other hand, ARR needs less than 10 minutes to compute the coefficient tensor assuming we parallelize Algorithm 2.

5. Conclusions

In this work, we presented two different tensor-based approaches for supervised learning. We showed that a kernel-based extension of MANDy can be utilized for image classification. That is, extending the method to arbitrary least-squares problems (originally, MANDy was developed to learn governing equations of dynamical systems) and using sequences of Hadamard products for the computation of the pseudoinverse, we were able to demonstrate the potential of kernel-based MANDy by applying it to the MNIST and fashion MNIST data sets. Additionally, we proposed the alternating optimization scheme ARR, which approximates the coefficient tensors by low-rank TT decompositions. Here, we used a mutable tensor representation of the transformed data tensors in order to construct low-dimensional regression problems for optimizing the TT cores of the coefficient tensor.
Both approaches use an exponentially large set of basis functions in combination with least-squares regression techniques on a given set of training images. The results are encouraging and show that methods exploiting tensor products of simple basis functions are able to detect characteristic features in image data. The work presented in this paper constitutes a further step towards tensor-based techniques for machine learning.
The reason why we can handle the extremely high-dimensional feature space spanned by the basis functions is its tensor product format. Besides the general questions of the choice of basis functions and the expressivity of these functions, the rank-one tensor products that were used in this work can, in principle, be replaced by other structures, which might result in higher classification rates. For instance, the transformation of an image could be given by a TT representation with higher ranks or hierarchical tensor decompositions (with the aim to detect features on different levels of abstraction). Furthermore, we could define different basis functions for each pixel, vary the number of basis functions per pixel, or define basis functions for groups of pixels.
Even though kernel-based MANDy computes the minimum norm solution of the considered regression problems as an exact TT decomposition, the method is likely to suffer from high ranks of the transformed data tensors and might thus not be competitive for large data sets. At the moment, we are computing the Gram matrix for the entire training data set. However, a possibility to speed up computations and to lower the memory consumption is to exploit the properties of the kernel. That is, if the kernel almost vanishes when two images differ significantly in at least one pixel (as is the case for the specific kernel used in this work, provided that the originally proposed value α = π/2 is used), the Gram matrix is essentially sparse when setting entries smaller than a given threshold to zero. Using sparse solvers would allow us to handle much larger data sets. Moreover, the construction of the Gram matrix is highly parallelizable, and it would be possible to use GPUs to assemble it in a more efficient fashion.
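A possible realization of this idea, sketched under the assumption that near-zero kernel values can simply be dropped, is to store the thresholded Gram matrix sparsely and solve the regression row by row with an iterative sparse solver; the threshold below is purely illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def solve_sparse(G, Y, tol=1e-8):
    """Drop Gram-matrix entries below tol, store the result sparsely, and solve Z G = Y
    (i.e., G z = y for every row y of Y, exploiting the symmetry of G) with LSQR."""
    G_sparse = csr_matrix(np.where(np.abs(G) > tol, G, 0.0))
    return np.vstack([lsqr(G_sparse, y)[0] for y in Y])
```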
Further modifications of ARR such as different regression methods for the subproblems, an optimized ordering of the TT cores, and specific initial coefficient tensors can help to improve the results. We provided an explanation for the stability of ARR, but the properties of alternating regression schemes have to be analyzed in more detail in the future.

Author Contributions

Conceptualization, S.K. and P.G.; methodology, S.K. and P.G.; software, S.K. and P.G.; writing, S.K. and P.G.

Funding

This research has been funded by Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 “Scaling Cascades in Complex Systems”. Part of this research was performed while S.K. was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (Grant No. DMS-1440415).

Acknowledgments

We would like to thank Michael Götte and Alex Goeßmann from the TU Berlin for interesting discussions related to tensor decompositions and system identification. The publication of this article was funded by Freie Universität Berlin.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Representation of Transformed Data Tensors

Proposition A1.
For all i ∈ {1, …, p}, it holds that

Ψ̂_{X,i} |_{n_1,…,n_p}^{m} = Ψ_X |_{n_1,…,n_p}^{m}.

That is, the TT decompositions Ψ̂_{X,i} and Ψ_X represent the same tensor in R^{n_1×⋯×n_p×m}.
Proof. 
An entry of Ψ̂_{X,μ}, 1 < μ < p, is given by

(Ψ̂_{X,μ})_{i_1,…,i_p,j} = ∑_{k_1=1}^{m} ⋯ ∑_{k_{p−1}=1}^{m} (Ψ̂_{X,μ}^{(1)})_{1,i_1,k_1} · ⋯ · (Ψ̂_{X,μ}^{(μ)})_{k_{μ−1},i_μ,j,k_μ} · ⋯ · (Ψ̂_{X,μ}^{(p)})_{k_{p−1},i_p,1}.

By definition,

(Ψ̂_{X,μ}^{(μ)})_{k_{μ−1},i_μ,j,k_μ} ≠ 0  ⟺  k_{μ−1} = j = k_μ.

On the other hand, an entry of Ψ̂_{X,μ}^{(ν)} with ν ≠ μ and 1 < ν < p is nonzero if and only if k_{ν−1} = k_ν. It follows that

(Ψ̂_{X,μ})_{i_1,…,i_p,j} = (Ψ̂_{X,μ}^{(1)})_{1,i_1,j} · ⋯ · (Ψ̂_{X,μ}^{(μ)})_{j,i_μ,j,j} · ⋯ · (Ψ̂_{X,μ}^{(p)})_{j,i_p,1} = ψ_{1,i_1}(x^{(j)}) · ⋯ · ψ_{μ,i_μ}(x^{(j)}) · ⋯ · ψ_{p,i_p}(x^{(j)}) = (Ψ_X)_{i_1,…,i_p,j}.
This can be shown in an analogous fashion for μ = 1 and μ = p . □

Appendix B. Interpretation of ARR as ALS Ridge Regression

The following reasoning will elucidate the relation between ARR, ridge regression, and kernel-based MANDy. We only outline the rough idea without concrete proofs. Let R_μ denote the retraction operator, see [28], consisting of the fixed TT cores Ξ^{(1)}, …, Ξ^{(μ−1)} and Ξ^{(μ+1)}, …, Ξ^{(p)} of the solution Ξ at any iteration step of Algorithm 2. Furthermore, assume that Ξ^{(1)}, …, Ξ^{(μ−1)} are left- and Ξ^{(μ+1)}, …, Ξ^{(p)} right-orthonormal. In Lines 8 and 13 of Algorithm 2, we consider the system (with a slight abuse of notation)

y = M_μ x = (Ψ_X^⊤ · R_μ) x.

The application of a truncated SVD to the matricization of Ψ_X^⊤ · R_μ (as done in Algorithm 2) is then similar to a regularization in the form of

min_x { ‖y − M_μ x‖_2^2 + ε ‖x‖_2^2 }   (A1)

with an appropriate regularization parameter ε, i.e., x ≈ M_μ^+ y for both approaches, see [17,30]. The formulation (A1) is known as Tikhonov's smoothing functional, ridge regression, or ℓ_2 regularization (which, of course, could also be applied directly in Algorithm 2). The solution of (A1) is also the solution of the regularized normal equation

M_μ^⊤ y = (M_μ^⊤ M_μ + ε Id) x,

see, e.g., [31]. As R_μ^⊤ R_μ = Id, it follows that

(R_μ^⊤ Ψ_X) y = (R_μ^⊤ (Ψ_X Ψ_X^⊤ + ε Id) R_μ) x.

In fact, this is a subproblem corresponding to the application of ALS [28] to the tensor-based system

Ψ_X y = (Ψ_X Ψ_X^⊤ + ε Id) Ξ.   (A2)

Note that all requirements for the application of ALS are satisfied since Ψ_X Ψ_X^⊤ + ε Id is a symmetric positive definite tensor operator and R_μ is orthonormal. The system of linear equations given in (A2) is then equivalent to the minimization problem

min_Ξ { ‖y − Ψ_X^⊤ Ξ‖_2^2 + ε ‖Ξ‖_2^2 }.

For sufficiently small ε, it holds that Ξ ≈ (Ψ_X^⊤)^+ y, see [32], meaning that Algorithm 2 computes an approximation of the coefficient tensor resulting from the application of kernel-based MANDy, see Section 3.2.

References

  1. Beylkin, G.; Garcke, J.; Mohlenkamp, M.J. Multivariate Regression and Machine Learning with Sums of Separable Functions. SIAM J. Sci. Comput. 2009, 31, 1840–1857. [Google Scholar] [CrossRef] [Green Version]
  2. Novikov, A.; Podoprikhin, D.; Osokin, A.; Vetrov, D. Tensorizing Neural Networks. In Advances in Neural Information Processing Systems 28 (NIPS); Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2015; pp. 442–450. [Google Scholar]
  3. Cohen, N.; Sharir, O.; Shashua, A. On the expressive power of deep learning: A tensor analysis. In Proceedings of the 29th Annual Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; Feldman, V., Rakhlin, A., Shamir, O., Eds.; Proceedings of Machine Learning Research. Columbia University: New York, NY, USA, 2016; Volume 49, pp. 698–728. [Google Scholar]
  4. White, S.R. Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett. 1992, 69, 2863–2866. [Google Scholar] [CrossRef] [PubMed]
  5. Stoudenmire, E.M.; Schwab, D.J. Supervised learning with tensor networks. In Advances in Neural Information Processing Systems 29 (NIPS); Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; pp. 4799–4807. [Google Scholar]
  6. Stoudenmire, E.M.; Schwab, D.J. Supervised learning with quantum-inspired tensor networks. arXiv 2016, arXiv:1605.05775. [Google Scholar]
  7. Stoudenmire, E.M. Learning relevant features of data with multi-scale tensor networks. Quantum Sci. Technol. 2018, 3, 034003. [Google Scholar] [CrossRef] [Green Version]
  8. Huggins, W.; Patil, P.; Mitchell, B.; Whaley, K.B.; Stoudenmire, E.M. Towards quantum machine learning with tensor networks. Quantum Sci. Technol. 2019, 4, 024001. [Google Scholar] [CrossRef]
  9. Roberts, C.; Milsted, A.; Ganahl, M.; Zalcman, A.; Fontaine, B.; Zou, Y.; Hidary, J.; Vidal, G.; Leichenauer, S. TensorNetwork: A library for physics and machine learning. arXiv 2019, arXiv:1905.01330. [Google Scholar]
  10. Efthymiou, S.; Hidary, J.; Leichenauer, S. TensorNetwork for Machine Learning. arXiv 2019, arXiv:1906.06329. [Google Scholar]
  11. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Rudy, S.H.; Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Data-driven discovery of partial differential equations. Sci. Adv. 2017, 3. [Google Scholar] [CrossRef] [PubMed]
  13. Gelß, P.; Klus, S.; Eisert, J.; Schütte, C. Multidimensional Approximation of Nonlinear Dynamical Systems. J. Comput. Nonlinear Dyn. 2019, 14, 061006. [Google Scholar] [CrossRef]
  14. Schuld, M.; Killoran, N. Quantum machine learning in feature Hilbert spaces. Phys. Rev. Lett. 2019, 122, 040504. [Google Scholar] [CrossRef] [PubMed]
  15. Shawe-Taylor, J.; Cristianini, N. Kernel Methods for Pattern Analysis; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar] [CrossRef]
  16. Nüske, F.; Gelß, P.; Klus, S.; Clementi, C. Tensor-based EDMD for the Koopman analysis of high-dimensional systems. arXiv 2019, arXiv:1908.04741. [Google Scholar]
  17. Hansen, P.C. The truncated SVD as a method for regularization. BIT Numer. Math. 1987, 27, 534–553. [Google Scholar] [CrossRef]
  18. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  19. Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
  20. Golub, G.H.; van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  21. Oseledets, I.V. A New Tensor Decomposition. Dokl. Math. 2009, 80, 495–496. [Google Scholar] [CrossRef]
  22. Oseledets, I.V. Tensor-Train Decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
  23. Penrose, R. Applications of negative dimensional tensors. In Combinatorial Mathematics and Its Applications; Welsh, D.J.A., Ed.; Academic Press Inc.: Cambridge, MA, USA, 1971; pp. 221–244. [Google Scholar]
  24. Oseledets, I.V.; Tyrtyshnikov, E.E. Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions. SIAM J. Sci. Comput. 2009, 31, 3744–3759. [Google Scholar] [CrossRef]
  25. Klus, S.; Gelß, P.; Peitz, S.; Schütte, C. Tensor-based dynamic mode decomposition. Nonlinearity 2018, 31. [Google Scholar] [CrossRef]
  26. Gelß, P.; Matera, S.; Schütte, C. Solving the Master Equation Without Kinetic Monte Carlo. J. Comput. Phys. 2016, 314, 489–502. [Google Scholar] [CrossRef]
  27. Gelß, P.; Klus, S.; Matera, S.; Schütte, C. Nearest-neighbor interaction systems in the tensor-train format. J. Comput. Phys. 2017, 341, 140–162. [Google Scholar] [CrossRef] [Green Version]
  28. Holtz, S.; Rohwedder, T.; Schneider, R. The Alternating Linear Scheme for Tensor Optimization in the Tensor Train Format. SIAM J. Sci. Comput. 2012, 34, A683–A713. [Google Scholar] [CrossRef]
  29. Liu, Y.; Zhang, X.; Lewenstein, M.; Ran, S. Entanglement-guided architectures of machine learning by quantum tensor network. arXiv 2018, arXiv:1803.09111. [Google Scholar]
  30. Groetsch, C.W. Inverse Problems in the Mathematical Sciences; Vieweg+Teubner Verlag: Wiesbaden, Germany, 1993. [Google Scholar] [CrossRef]
  31. Zhdanov, A.I. The method of augmented regularized normal equations. Comput. Math. Math. Phys. 2012, 52, 194–197. [Google Scholar] [CrossRef]
  32. Barata, J.C.A.; Hussein, M.S. The Moore–Penrose pseudoinverse: A tutorial review of the theory. Braz. J. Phys. 2012, 42, 146–165. [Google Scholar] [CrossRef]
Figure 1. (a) Samples of the MNIST data set. (b) Samples of the fashion MNIST data set. Each row represents a different item type. (c) Corresponding labels for the fashion MNIST data set.
Figure 2. Graphical representation of tensor trains: (a) A core is depicted by a circle with different arms indicating the modes of the tensor and the rank indices. The first and the last tensor train (TT) core are regarded as matrices due to the fact that r 0 = r p = 1 . (b) Left-orthonormalized tensor train obtained by, e.g., sequential singular value decompositions (SVDs). Note that the TT ranks may change due to orthonormalization, e.g., when using (reduced/truncated) SVDs.
Figure 3. TT representation of transformed data tensors: (a) As in [13], the first p cores (blue circles) are given by (17). The direct contraction of the two last TT cores in (17) can be regarded as an operator-like TT core with a row and column mode (green circle). (b) The additional column mode can be shifted to any of the p TT cores.
Figure 4. Construction and solution of the subproblem for the μth core: (a) The 4-dimensional core of Ψ̂_{X,μ} (green circle) is contracted with the matrices P_μ and Q_μ constructed by joining the fixed cores of the coefficient tensor (orange circles) with the corresponding cores of the transformed data tensor. The matricization then defines the matrix M_μ. (b) The TT core (red circle) obtained by solving the low-dimensional minimization problem is decomposed (e.g., using a QR factorization) into an orthonormal tensor and a triangular matrix. The orthonormal tensor then yields the updated core.
Figure 5. Results for MNIST and fashion MNIST: (a) Classification rates for the reduced 14 × 14 images. (b) Classification rates for the full 28 × 28 images. Reducing the image size by averaging over groups of pixels improves the performance of the algorithm.
Figure 6. MNIST classification: (a) Images misclassified by kernel-based MANDy described in Section 3.2. The original image is shown in black, the identified label in red, and the correct label in green. (b) Histograms illustrating which categories are misclassified most often. The rows represent the correct labels of the misclassified image and the columns the detected labels. (c) Visualizations of the learned classifiers showing a heat map of the classification function obtained by applying it to images that differ in one pixel.
Figure 7. Fashion MNIST classification: (a) Misclassified images. (b) Histogram of misclassified images. (c) Visualizations of the learned classifiers.
Table 1. Notation used in this work.
Symbol: Description
X = [x^{(1)}, …, x^{(m)}]: data matrix in R^{d×m}
Y = [y^{(1)}, …, y^{(m)}]: label matrix in R^{d′×m}
n_1, …, n_p: mode dimensions of tensors
r_0, …, r_p: ranks of tensor trains
ψ_1, …, ψ_p: basis functions ψ_μ: R^d → R^{n_μ}
Ψ_X: transformed data matrices/tensors
Ξ: coefficient matrices/tensors
