Review

Tensor-Based Approaches for Nonlinear and Multilinear Systems Modeling and Identification

by Gérard Favier 1,*,† and Alain Kibangou 2,3,†
1 I3S Laboratory, Côte d’Azur University, CNRS, 06900 Sophia Antipolis, France
2 Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, GIPSA-Lab, 38000 Grenoble, France
3 Faculty of Science (Auckland Park Campus), University of Johannesburg, Johannesburg 2006, South Africa
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2023, 16(9), 443; https://doi.org/10.3390/a16090443
Submission received: 30 July 2023 / Revised: 29 August 2023 / Accepted: 9 September 2023 / Published: 14 September 2023
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)

Abstract: Nonlinear (NL) and multilinear (ML) systems play a fundamental role in engineering and science. Over the last two decades, active research has been carried out on exploiting the intrinsically multilinear structure of input–output signals and/or models in order to develop more efficient identification algorithms. This has been achieved using the notion of tensors, which are the central objects in multilinear algebra, giving rise to tensor-based approaches. The aim of this paper is to review such approaches for modeling and identifying NL and ML systems using input–output data, with a reminder of the tensor operations and decompositions needed to render the presentation as self-contained as possible. In the case of NL systems, two families of models are considered: the Volterra models and block-oriented ones. Volterra models, frequently used in numerous fields of application, have the drawback of being characterized by a huge number of coefficients contained in the so-called Volterra kernels, making their identification difficult. In order to reduce this parametric complexity, we show how Volterra systems can be represented by expanding high-order kernels using the parallel factor (PARAFAC) decomposition or generalized orthogonal basis (GOB) functions, which leads to the so-called Volterra–PARAFAC and Volterra–GOB models, respectively. The extended Kalman filter (EKF) is presented to estimate the parameters of a Volterra–PARAFAC model. Another approach to reduce the parametric complexity consists in using block-oriented models such as those of Wiener, Hammerstein and Wiener–Hammerstein. With the purpose of estimating the parameters of such models, we show how the Volterra kernels associated with these models can be written in the form of structured tensor decompositions. In the last part of the paper, the notion of tensor systems is introduced using the Einstein product of tensors. Discrete-time memoryless tensor-input tensor-output (TITO) systems are defined by means of a relation between an Nth-order tensor of input signals and a Pth-order tensor of output signals via a (P+N)th-order transfer tensor. Such systems generalize the standard memoryless multi-input multi-output (MIMO) system to the case where input and output data define tensors of order higher than two. The case of a TISO system is then considered assuming the system transfer is a rank-one Nth-order tensor viewed as a global multilinear impulse response (IR) whose parameters are estimated using the weighted least-squares (WLS) method. A closed-form solution is proposed for estimating each individual IR associated with each mode-n subsystem.

1. Introduction

The continuous development of mathematical knowledge, together with a constantly renewed and growing need to study, represent and analyze ever more complex physical phenomena and systems, is at the origin of new mathematical objects and models. In particular, the notion of matrices introduced by Gauss in 1810 to solve systems of linear algebraic equations, with the foundations of matrix computation developed during the 19th century by Sylvester (1814–1897) and Cayley (1821–1895), among several other mathematicians, has later given rise to the notion of tensors. Tensors of order higher than two, i.e., mathematical objects indexed by more than two indices, are multidimensional generalizations of vectors and matrices, which are tensors of orders one and two, respectively. Such objects are well suited to represent and process multidimensional and multimodal signals and data, like in computer vision [1], pattern recognition [2], array processing [3], machine learning [4], recommender systems [5], ECG applications [6], bioinformatics [7], and wireless communications [8], among many other fields of application. Today, with the constantly growing volume of big data (texts, images, audio, and videos) to manage in multimedia applications and social networks, tensor tools are well adapted to fuse, classify, analyze and process digital information [9].
The purpose of this paper is to present an overview of tensor-based methods for modeling and identifying nonlinear and multilinear systems using input–output data, as encountered in signal processing applications, with a focus on truncated Volterra models and block-oriented nonlinear ones and an introduction to memoryless input–output tensor systems. With a detailed reminder of the tensor tools useful to make the presentation as self-contained as possible and a review of main nonlinear models and their applications, this paper should be of interest to researchers and engineers concerned with signal processing applications.
First of all, developed as computational and representation tools in physics and geometry, tensors were the subject of mathematical developments related to polyadic decomposition [10], aiming to generalize dyadic decompositions, i.e., matrix decompositions such as the singular value decomposition (SVD), discovered independently by Beltrami (1835–1900) and Jordan (1838–1922) in 1873 and 1874, respectively. Then, tensors were used for the analysis of three-dimensional data generalizing matrix analysis to sets of matrices, seen as arrays of data characterized by three indices, in the fields of psychometrics and chemometrics [11,12,13,14]. This explains the other name given to tensors as multiway arrays in the context of data analysis and data mining [15].
Matrix decompositions, such as the SVD, have thus been generalized into tensor decompositions, such as the PARAFAC decomposition [13], also called canonical polyadic decomposition (CPD), and the Tucker decomposition (TD) [12]. Tensor decompositions consist in representing a high-order tensor by means of factor matrices and lower-order tensors, called core tensors. In the context of data analysis, such decompositions make it possible to highlight hidden structures of the data while preserving their multilinear structure, which is not the case when stacking the data in the form of vectors or matrices. Tensor decompositions can be used to reduce data dimensionality [16], merge coupled data tensors [17], handle missing data through the application of tensor completion methods [18,19], and design semi-blind receivers for tensor-based wireless communication systems [8].
In Table 1, we present basic and very useful matrix and third-order tensor decompositions, namely the reduced SVD, also known as the compact SVD, PARAFAC/CPD, and TD, in a comparative way. A detailed presentation of PARAFAC and Tucker decompositions is given in Section 4.2. Note that the matrix factors U and V, which are column-orthonormal, contain the left and right singular vectors, respectively, whereas the diagonal matrix Σ contains the nonzero singular values, and R denotes the rank of the matrix.
A historical review of the theory of matrices and tensors, with basic decompositions and applications, can be found in [20].
Similarly, from the system modeling point of view, linear models of dynamic systems in the form of input–output relationships or state space equations have given rise to nonlinear and multilinear models to take into account nonlinearities inherent in physical systems. This explains why nonlinear models are appropriate in many engineering applications. Consequently, standard parameter estimation and filtering methods for linear systems, such as the least-squares (LS) algorithm and the Kalman filter (KF), first proposed by Legendre in 1805 [21] and Kalman in 1960 [22], respectively, were extended for parameter and state estimation of nonlinear systems. Thus, the alternating least-squares (ALS) algorithm [13] and the extended Kalman filter (EKF) [23] were developed, respectively, for estimating the parameters of a PARAFAC decomposition and applying the KF to nonlinear systems.
In Table 2, we present two examples of standard linear models, namely the single-input single-output (SISO) finite impulse response (FIR) model and the memoryless multi-input multi-output (MIMO) model, often used for modeling a communication channel between n_T transmit antennas and n_R receive antennas, where h_{i,j} is the fading coefficient between the jth transmit antenna and the ith receive antenna. The FIR model is one of the most used for modeling linear time-invariant (LTI) systems, i.e., systems which satisfy the constraints of linearity and time-invariance, which means that the system output y(t) can be obtained from the input via a convolution y(t) = (h ★ u)(t), where h(.) is the system’s impulse response (IR), and ★ denotes the convolution operator.
The notion of linear dynamical system has been generalized to multilinear dynamical systems in [24] to model tensor time series data, i.e., time series in which input and output data are tensors. In this paper, the multilinear operator is chosen in the form of a Kronecker product of matrices, and the parameters are estimated by means of an expectation-maximization algorithm, with application to various real datasets. Then, the notion of LTI system has been extended to multilinear LTI (MLTI) systems by [25] using the Einstein product of even-order paired tensors, with an extension of the classical stability, reachability, and observability criteria to the case of MLTI systems. In Table 2, four examples of nonlinear (NL) and multilinear (ML) models are introduced, namely the polynomial, truncated Volterra, tensor-input tensor-output (TITO), and multilinear tensor-input single-output (TISO) models, which will be studied in more detail in Section 5 and Section 6, as mentioned in Table 2.
System modeling and identification is a fundamental problem in engineering applications. Since real-life systems are often nonlinear in nature, NL models are very useful in various application areas. Parameter estimation using measurements of input and output (I/O) signals is at the heart of identification methods. In this paper, two main families of NL models are considered: (i) discrete-time Volterra models, also called truncated Volterra series expansions; (ii) block-oriented (Wiener, Hammerstein, Wiener–Hammerstein) models. In the sequel, we assume that the systems to be modeled are time-invariant, i.e., their properties, and consequently the parameters of their model, do not depend on time.
Volterra models are frequently used due to the fact that they allow approximating any fading-memory nonlinear system with arbitrary precision, as shown in [26]. They represent a direct nonlinear extension of the very popular FIR linear model, with guaranteed stability in the bounded-input bounded-output (BIBO) sense, and they have the advantage of being linear in their parameters, the kernel coefficients [27]. The nonlinearity of a Pth-order truncated Volterra model is due to products of up to P samples of delayed inputs. Moreover, they are interpretable in terms of multidimensional convolutions, which makes the derivation of their z-transform and Fourier transform representations easy [28].
Among the numerous application areas of Volterra models, we can mention chemical and biochemical processes [29], radio-over-fiber (RoF) wireless communication systems (due to optical/electrical (O/E) conversion) [30,31], high-power amplifiers (HPA) in satellite communications [32,33], physiological systems [34], vibrating structures and more generally mechatronic systems like robots [35], and acoustic echo cancellation [36].
The main drawback of Volterra models is their parametric complexity, implying the need to estimate a huge number of parameters which grows exponentially with the order and memory of the kernels. So, several complexity reduction approaches for Volterra models have been developed using symmetrization or triangularization of Volterra kernels, or their expansion on orthogonal bases like Laguerre and Kautz ones, or generalized orthogonal bases (GOB). Considering Volterra kernels as tensors, they can also be decomposed using a PARAFAC decomposition or a tensor train (TT). These approaches lead to the so-called Volterra–Laguerre, Volterra–GOB–Tucker, Volterra–PARAFAC and Volterra–TT models [37,38,39,40,41,42]. In Section 5.3 and Section 5.4, we review the Volterra–PARAFAC and Volterra–GOB–Tucker models. Note that a model-pruning approach can also be employed to adjust the complexity reduction by considering only nearly diagonal coefficients of the kernels and removing the others, which correspond to more delayed input values whose influence decreases as the delay increases [43].
Another approach for ensuring a reduced parametric complexity consists in considering block-oriented NL models, composed of two types of blocks: linear time-invariant (LTI) dynamic blocks and static NL blocks. The linear blocks may be parametric (transfer functions, FIR models, state-space representations) or nonparametric (impulse responses), whereas the NL blocks may be with memory or memoryless. The different blocks are concatenated in series, leading to the so-called Hammerstein (NL-LTI) and Wiener (LTI-NL) models, extended to the Wiener–Hammerstein (LTI-NL-LTI) and Hammerstein–Wiener (NL-LTI-NL) models, abbreviated W-H and H-W, respectively. To extend the modeling potential of block-oriented models, several W-H and H-W models can also be interconnected in parallel. Although such models are simpler and therefore less general than Volterra models, they allow us to represent numerous nonlinear systems. One of the first applications of block-oriented NL models was for modeling biological systems [44]. A lot of papers have been devoted to the identification of block-oriented models and their applications. For more details, the reader is referred to the book [45] and the survey papers [46,47].
In Section 5.5, we show that the Wiener, Hammerstein and W-H models are equivalent to structured Volterra models. This equivalence is at the origin of the structure identification method for block-oriented systems, which will be presented in Section 5.5.4. Tensor-based methods using this equivalence have been developed to estimate the parameters of block-oriented nonlinear systems [48,49,50,51]. These methods are generally composed of two steps. In the first one, the Volterra kernel associated with a particular block-oriented system is used to estimate the LTI component(s). Note that there exist closed-form solutions for estimating only the Volterra kernel of interest. Such a solution is proposed in [52,53] for a third-order and fifth-order kernel, respectively. Then, in a second step, the nonlinear block is estimated using the LS method. An example of a tensor-based method for identifying a nonlinear communication channel represented by means of a W-H model was proposed in [54] using the associated third-order Volterra kernel.
On the other hand, multilinear models are useful for modeling coupled dynamical systems in engineering, biology, and physics. Tensor-based approaches have been proposed for solving and identifying multilinear systems [24,55,56]. Using the Einstein product of tensors, we first introduce a new class of systems, the so-called memoryless tensor-input tensor-output (TITO) systems, in which the multidimensional input and output signals define two tensors. The LS method is applied to estimate the tensor transfer of such a system. Then the case of a tensor-input single-output (TISO) system is considered assuming the system transfer is a rank-one Nth-order tensor, which leads to a multilinear system with respect to the impulse responses (IR) of the N subsystems associated with the N modes of the input tensor.
The non-recursive weighted least-squares (WLS) method is used to estimate the multilinear impulse response (MIR) under a vectorized form. A closed-form method is also proposed to estimate the IR of each subsystem from the estimated MIR.
The rest of the paper is structured as follows. In Section 2, we present the notations with the index convention used throughout the paper. In Section 3, we introduce some tensor sets in connection with multilinear forms. In Section 4, we briefly recall basic tensor operations and decompositions. Section 5 and Section 6 are devoted to tensor-based approaches for nonlinear and multilinear systems modeling and identification, respectively. Finally, Section 7 concludes the paper, with some perspectives for future work.
Many books and survey papers discuss estimation theory and system identification. In the field of engineering sciences, we can cite the fundamental contributions of [57,58,59,60,61,62,63] for linear systems and [27,28,29,47,64,65,66,67,68,69] for nonlinear systems. In the case of multilinear systems, the reader is referred to [55,56] for more details.

2. Notation and Index Convention

Scalars, column vectors, matrices, and tensors are denoted by lower-case, boldface lower-case, boldface upper-case, and calligraphic letters, e.g., x, x , X, X , respectively. We denote by a i , r the ( i , r ) element and by A . r (resp. A i . ) the rth column (resp. ith row) of A C I × R . I R denotes the identity matrix of size R × R .
The transpose, complex conjugate, transconjugate, and Moore–Penrose pseudo-inverse operators are represented by (·)^T, (·)^*, (·)^H and (·)^†, respectively.
The operator diag ( · ) forms a diagonal matrix from its vector argument, while D i ( A ) stands for a diagonal matrix holding the ith row of A C I × R on the diagonal.
The operator T_{M+N−1,N}(·) forms an (M+N−1) × N Toeplitz matrix from its vector argument x ∈ C^M, whose first column and first row are, respectively, [x_1 ⋯ x_M 0_{N−1}^T]^T and [x_1 0_{N−1}^T].
Given Y ∈ C^{I×J}, the vec and unvec operators are defined such that: y = vec(Y) ∈ C^{JI} ⟺ Y = unvec(y) ∈ C^{I×J}, where the order of dimensions in the product JI is linked to the order of variation of the indices, with the column index j varying more slowly than the row index i.
The outer, Kronecker and Khatri–Rao products are denoted by ∘, ⊗ and ⋄, respectively.
Table 3 summarizes the notation used for sets of indices and dimensions [70].
We now introduce the index convention which allows eliminating the summation symbols in formulae involving multi-index variables. For example, ∑_{i=1}^{I} a_i b_i is simply written as a_i b_i. Note there are two differences relative to Einstein’s summation convention:
  • Each index can be repeated more than twice in an expression;
  • Ordered index sets are allowed.
The index convention can be interpreted in terms of two types of summation, the first associated with the row indices (superscripts) and the second associated with the column indices (subscripts), with the following rules [70]:
  • The order of the column indices is independent of the order of the row indices;
  • Consecutive row and column indices (or index sets) can be permuted.
In Table 4, we give some examples of vector and matrix products using index convention, where e i j e i ( I ) e j ( J ) , e i j e i ( I ) ( e j ( J ) ) T , e i k j e i ( I ) e k ( K ) ( e j ( J ) ) T .
Using the index convention, the multiple sum over the indices of x i 1 , , i P y i 1 , , i P will be abbreviated to
∑_{i_1=1}^{I_1} ⋯ ∑_{i_P=1}^{I_P} x_{i_1,…,i_P} y_{i_1,…,i_P} = ∑_{i̲_P = 1̲}^{I̲_P} x_{i̲_P} y_{i̲_P} = x_{i̲_P} y_{i̲_P},
where 1 ̲ denotes a set of ones whose number is fixed by the index P of the set I ̲ P . The notation i ̲ P and I ̲ P allows us to simplify the expression of the multiple sum into a single sum over an index set, which is further simplified by using the index convention.

3. Tensors and Multilinear Forms

In signal processing applications, a tensor X ∈ K^{I_1×⋯×I_N} of order N and size I_1 × ⋯ × I_N is typically viewed as an array of numbers [x_{i_1,…,i_N}]. The order corresponds to the number of indices (i_1, …, i_N) that characterize its elements x_{i_1,…,i_N} ∈ K, also denoted x_{i_1⋯i_N} or (X)_{i_1,…,i_N}. Each index i_n ∈ {1, …, I_n}, for n = 1, …, N, is associated with a mode, also called a way, and I_n denotes the dimension of the nth mode. The number of elements in X is equal to ∏_{n=1}^{N} I_n. For instance, in a wireless communication system [8], each index of a signal x_{i_1,…,i_N} corresponds to a different form of diversity (in time, space, frequency, code, etc., domains), and the dimensions I_n are the numbers of time samples, receive antennas, subcarriers, the code length, etc.
The tensor X is said to be real (resp. complex) if its elements are real numbers (resp. complex numbers), which corresponds to K = R (resp. K = C ). It is said to have even order (resp. odd order) if N is even (resp. odd). The special cases N = 2 and N = 1 correspond to the sets of matrices X K I × J and column vectors x K I , respectively.
If I_1 = ⋯ = I_N = I, the Nth-order tensor X = [x_{i_1,…,i_N}] ∈ K^{I×I×⋯×I} is said to be hypercubic, of dimensions I, with i_n ∈ {1, …, I}, for n = 1, …, N. The number of elements in X is then equal to I^N. The set of (real or complex) hypercubic tensors of order N and dimensions I will be denoted K^{[N;I]}.
A hypercubic tensor of order N and dimensions I is said to be symmetric if it is invariant under any permutation π of its modes, i.e.,
a_{π(i_1,i_2,…,i_N)} ≜ a_{i_{π(1)},i_{π(2)},…,i_{π(N)}} = a_{i_1,i_2,…,i_N}.
The identity tensor of order N and dimensions I is denoted I N , I = [ δ i 1 , , i N ] , with i n I , for n N , or simply I . It is a hypercubic tensor whose elements are defined using the generalized Kronecker delta
δ_{i_1,…,i_N} = 1 if i_1 = ⋯ = i_N, and 0 otherwise.
It is a diagonal tensor whose diagonal elements are equal to 1 and other elements to zero, which can be written as the sum of I outer products of N canonical basis vectors e i ( I ) of the space R I
I_{N,I} = ∑_{i=1}^{I} e_i^{(I)} ∘ e_i^{(I)} ∘ ⋯ ∘ e_i^{(I)}  (N terms in the outer product),
where the outer product operation is defined later in Table 9.
A diagonal tensor X K I × × I of order N, whose diagonal elements are the entries of vector a = a 1 , , a I T , will be written as
x_{i,i_2,…,i_N} = a_i δ_{i,i_2,…,i_N}  ⟺  X = ∑_{i=1}^{I} a_i e_i^{(I)} ∘ e_i^{(I)} ∘ ⋯ ∘ e_i^{(I)}  (N terms in the outer product).
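As a concrete illustration of these definitions, the following Python/NumPy sketch (written for this review; the helper names are illustrative) builds the order-N identity tensor I_{N,I} and a diagonal tensor as sums of outer products of canonical basis vectors:

```python
import numpy as np
from functools import reduce

def outer_n(vectors):
    """Outer product of a list of vectors, returning a tensor of order len(vectors)."""
    return reduce(np.multiply.outer, vectors)

def identity_tensor(N, I):
    """Order-N identity tensor I_{N,I}: sum of N-fold outer products of canonical basis vectors."""
    E = np.eye(I)  # E[:, i] is the canonical basis vector e_i
    return sum(outer_n([E[:, i]] * N) for i in range(I))

def diagonal_tensor(a, N):
    """Order-N diagonal tensor whose diagonal entries are the components of the vector a."""
    I = len(a)
    E = np.eye(I)
    return sum(a[i] * outer_n([E[:, i]] * N) for i in range(I))

# Example: third-order identity tensor of dimensions 4
T = identity_tensor(3, 4)
assert T[1, 1, 1] == 1 and T[0, 1, 2] == 0
```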
Different matricizations, also called matrix unfoldings, can be defined for a tensor X K I 1 × × I N . Consider a partitioning of the set of modes N into two disjoint ordered subsets S 1 and S 2 , composed of p and N p modes, respectively, with p N 1 . A general matrix unfolding formula was given by [71] as follows
X_{S_1;S_2} = ∑_{i_1=1}^{I_1} ⋯ ∑_{i_N=1}^{I_N} x_{i_1,…,i_N} ( ⊗_{n∈S_1} e_{i_n}^{(I_n)} ) ( ⊗_{n∈S_2} e_{i_n}^{(I_n)} )^T ∈ K^{J_1×J_2},
where e_{i_n}^{(I_n)} is the i_n-th vector of the canonical basis of R^{I_n}, and J_{n_1} = ∏_{n∈S_{n_1}} I_n, for n_1 = 1 and 2. We say that X_{S_1;S_2} is a matrix unfolding of X along the modes of S_1 for the rows and along the modes of S_2 for the columns, with S_1 ∩ S_2 = ∅ and S_1 ∪ S_2 = {1, …, N}.
For instance, in the case of a third-order tensor X ∈ K^{I×J×K}, we have six flat unfoldings and six tall unfoldings. For S_1 = {1} and S_2 = {2, 3}, we have the mode-1 flat unfolding X_{I×JK} ≜ X_{1;{2,3}}, while for S_1 = {2, 3} and S_2 = {1} we obtain the mode-1 tall unfolding X_{JK×I} ≜ X_{{2,3};1} = (X_{I×JK})^T.
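In NumPy terms, such unfoldings amount to permuting the modes and reshaping. The sketch below (illustrative code with a hypothetical unfold helper; the row/column ordering follows NumPy’s C ordering and should be adapted to the convention of (3) if needed) computes the mode-1 flat and tall unfoldings of a third-order tensor:

```python
import numpy as np

def unfold(X, row_modes, col_modes):
    """Matrix unfolding of tensor X: row_modes index the rows, col_modes the columns.
    Within each group, the first listed mode varies most slowly (C ordering)."""
    perm = list(row_modes) + list(col_modes)
    rows = int(np.prod([X.shape[m] for m in row_modes]))
    cols = int(np.prod([X.shape[m] for m in col_modes]))
    return X.transpose(perm).reshape(rows, cols)

# Third-order tensor X of size I x J x K
I, J, K = 2, 3, 4
X = np.arange(I * J * K).reshape(I, J, K)

X_flat = unfold(X, [0], [1, 2])   # mode-1 flat unfolding, size I x JK
X_tall = unfold(X, [1, 2], [0])   # mode-1 tall unfolding, size JK x I
assert np.allclose(X_tall, X_flat.T)
```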
Vectorized forms of X ∈ K^{I_1×⋯×I_N} are obtained by combining the modes in a given order. Thus, a lexicographical vectorization gives the vector y ≜ x_{I_1 I_2 ⋯ I_N} whose element at position m is x_{i_1,…,i_N}, i.e., y_m = x_{i_1,…,i_N} ≜ x_{i̲_N}, with [72]
m = i_N + ∑_{n=1}^{N−1} (i_n − 1) ∏_{k=n+1}^{N} I_k.
By convention, the order of the dimensions in the product ∏_{n=1}^{N} I_n ≜ I_1 I_2 ⋯ I_N associated with this index combination follows the order of variation of the indices (i_1, …, i_N), with i_1 varying more slowly than i_2, which in turn varies more slowly than i_3, etc.
The Frobenius norm of X K I 1 × × I N is the square root of the inner product of the tensor with itself, i.e.,
‖X‖_F = ⟨X, X⟩^{1/2} = ( ∑_{i_1=1}^{I_1} ⋯ ∑_{i_N=1}^{I_N} |x_{i_1,…,i_N}|^2 )^{1/2}.
Table 5 presents various sets of tensors that will be considered in this paper, with the notation introduced in [70].
We can make the following remarks about the sets of tensors defined in Table 5:
  • For P = N = 1, the set K^{[2;I,J]} is the set K^{I×J} of (real or complex) matrices of size I × J.
  • The set K^{[P;I]} is also denoted K^{I^P} or T_P(K^I) by some authors.
  • The set K^{I̲_P×I̲_P} is called the set of even-order (or square) tensors of order 2P and size I̲_P × I̲_P. The name of square tensor comes from the fact that the index set is divided into two identical subsets of dimension I̲_P.
  • Analogously to matrices, tensors in the sets K^{I̲_P×J̲_P} with J_p ≠ I_p and K^{I̲_P×J̲_N} are said to be rectangular. The set K^{I̲_P×J̲_N} is called the set of rectangular tensors with index blocks of dimensions I̲_P and J̲_N.
The various tensor sets introduced above can be associated with scalar real-valued multilinear forms in vector variables and with homogeneous polynomials. Like in the matrix case, we will distinguish between homogeneous polynomials of degree P that depend on the components of P vector variables and those that depend on just one vector variable.
A real-valued multilinear form, also called a P-linear form, is a map f such as
×_{p=1}^{P} R^{I_p} ∋ (x^{(1)}, …, x^{(P)}) ↦ f(x^{(1)}, …, x^{(P)}) ∈ R
that is separately linear with respect to each vector variable x ( p ) when the other variables x ( q ) , for q p , are fixed. Using the index convention, the multilinear form can be written for x ( p ) R I p , p P , as
f(x^{(1)}, …, x^{(P)}) = ∑_{i_1=1}^{I_1} ⋯ ∑_{i_P=1}^{I_P} a_{i_1,…,i_P} x_{i_1}^{(1)} ⋯ x_{i_P}^{(P)} = a_{i̲_P} ∏_{p=1}^{P} x_{i_p}^{(p)}.
The tensor A R I ̲ P is called the tensor associated with the multilinear form f.
Two multilinear forms are presented in Table 6, which also states the transformation corresponding to each of them, as well as the associated tensor.
Table 7 recalls the definitions of bilinear/quadratic forms using the index convention, then presents the multilinear forms defined in Table 6, as well as the associated tensors from Table 5 and the corresponding homogeneous polynomials.
We can make the following remarks:
  • In the same way that bilinear forms depend on two variables that do not necessarily belong to the same vector space, general real multilinear forms depend on P variables that may belong to different vector spaces: x ( p ) R I p .
  • Analogously to quadratic forms obtained from bilinear forms by replacing the pair ( x , y ) with the vector x , real multilinear forms can be expressed using just one vector x K I . In the same way symmetric quadratic forms lead to the notion of symmetric matrices, the symmetry of multilinear forms is directly linked to the symmetry of their associated tensors.

4. Tensor Operations and Decompositions

In Section 4.1, we introduce different multiplications with tensors. Then, in Section 4.2, we present the two most used tensor decompositions, namely the PARAFAC (parallel factors) and Tucker decompositions [12,13].
For a more in-depth presentation of tensor tools, the reader is referred to the recent book [70] and review papers [73,74].

4.1. Multiplications with Tensors

In Table 8, we present three types of multiplication with tensors, using the notation of Table 3 and the index convention: mode-p, mode- ( p , n ) , and Einstein products.
The multiplication ×_p, called mode-p or Tucker product, corresponds to a summation over the index i_p associated with the mode p of the Pth-order tensor X and the second index of A, giving a tensor of order P − 1 and size I_1 × ⋯ × I_{p−1} × I_{p+1} × ⋯ × I_P.
The mode-(p, n) product, denoted ×_p^n, corresponds to a contraction operation performed over two arbitrary modes (p, n) such that I_p = J_n = K. This multiplication gives a tensor of order P + N − 2 and size I_1 × ⋯ × I_{p−1} × I_{p+1} × ⋯ × I_P × J_1 × ⋯ × J_{n−1} × J_{n+1} × ⋯ × J_N.
The Einstein product, denoted A ∗_N X, of the tensors A ∈ K^{I̲_P×J̲_N} of order P + N and X ∈ K^{J̲_N×K̲_Q} of order N + Q corresponds to a contraction along the N shared indices j̲_N, associated with the N last modes of A and the N first modes of X. The tensor A can be interpreted as a multilinear operator associated with a multilinear transformation applied to the tensor X. The Einstein product will be used in Section 6 for defining multilinear systems.
Table 9 presents a few examples of outer products of vectors, matrices, and tensors, indicating the order and the space to which the tensors resulting from the products belong.

4.2. PARAFAC and Tucker Decompositions

The PARAFAC decomposition [13] is also called CANDECOMP (canonical decomposition) by [75] and CP for CANDECOMP/PARAFAC by [76] when applied to decompose a data tensor. In the context of system modeling, it is called a PARAFAC model. It amounts to decomposing a tensor into a sum of R polyads, i.e., R rank-one tensors [10]. For an Nth-order tensor X , each polyad corresponds to the outer product of the rth columns of N factor matrices A ( n ) K I n × R , i.e., n = 1 N A . r ( n ) . When R is minimal, it is called the tensor rank or canonical rank of X . PARAFAC is also called a canonical polyadic decomposition (CPD), and concisely written as { A ( 1 ) , , A ( N ) ; R } . When R = 1 , X K I ̲ N is a rank-one tensor, also called a separable tensor. Then, it can be written as the outer product of N non-zero vectors a ( n ) K I n
X = ∘_{n=1}^{N} a^{(n)}  ⟺  x_{i̲_N} = ∏_{n=1}^{N} a_{i_n}^{(n)}.
In the case of a symmetric rank-one tensor X K [ N , I ] , all the vectors a ( n ) K I are identical [77].
In Table 10, we present different ways of writing a PARAFAC decomposition for a third-order and Nth-order tensor: scalar writing, with mode-p and outer products, and matrix unfoldings as defined in (3).
PARAFAC models have the following two main features:
  • Essential uniqueness, i.e., uniqueness up to trivial indeterminacies corresponding to permutation and scalar ambiguities of the columns of the factor matrices (see [78,79]);
  • Existence of a simple algorithm, the so-called alternating least-squares (ALS) algorithm, for estimating the PARAFAC parameters for a tensor of an arbitrary order N (a minimal sketch for the third-order case is given after this list).
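To make the second feature concrete, here is a minimal NumPy sketch of the ALS algorithm for a third-order PARAFAC/CPD model, written for this review rather than taken from [13]; it alternately solves a linear LS problem for each factor matrix using the Khatri–Rao product of the other two factors and the corresponding matrix unfolding (the helper names and the unfolding ordering are illustrative assumptions):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R) -> (IJ x R)."""
    I, R = A.shape
    J, _ = B.shape
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def cpd_als(X, R, n_iter=100):
    """Rank-R PARAFAC/CPD of a third-order tensor X by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode-n unfoldings (rows indexed by the mode, columns by the two remaining modes)
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding, columns ordered (j, k)
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding, columns ordered (i, k)
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding, columns ordered (i, j)
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X2 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X3 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Sanity check on a synthetic rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((4, 2)), rng.standard_normal((5, 2)), rng.standard_normal((6, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(X, R=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # should be close to 0
```

In practice, a normalization of the factor columns and a stopping test on the reconstruction error would be added; the sketch keeps only the core alternating updates.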
The Tucker decomposition [12] of a tensor X ∈ K^{I̲_N} can be viewed as a generalization of the PARAFAC decomposition in the sense that such a decomposition allows taking into account all interactions between distinct columns of the factor matrices A^{(n)} ∈ K^{I_n×R_n}, whereas a PARAFAC model only involves interactions between the same columns r ∈ {1, …, R} of the factor matrices A^{(n)} ∈ K^{I_n×R}. In Table 11, we present different ways of writing a Tucker decomposition for a third-order and an Nth-order tensor. From the writing with outer products, we can conclude that the Tucker model of an Nth-order tensor consists in a weighted sum of ∏_{n=1}^{N} R_n rank-one tensors, where the coefficients g_{r_1,…,r_N} of the core tensor G ∈ K^{R̲_N} define the weights of the interactions between the columns A^{(n)}_{.r_n} of the factor matrices.
Note that a Tucker decomposition is generally not essentially unique, unless additional constraints are imposed, such as a perfect knowledge of the core tensor, certain sparseness or structural constraints on the core tensor or the matrix factors [80,81]. Consult [82] for a review of uniqueness results for Tucker models.
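For illustration, the following sketch (written for this review) reconstructs a third-order Tucker model from its core tensor and factor matrices via successive mode-n products:

```python
import numpy as np

def mode_n_product(X, A, n):
    """Mode-n product of tensor X with matrix A (size J x I_n): contracts the second
    index of A with mode n of X and places the new dimension J in position n."""
    Xn = np.moveaxis(X, n, 0)                       # bring mode n to the front
    Y = A @ Xn.reshape(Xn.shape[0], -1)             # contract over mode n
    Y = Y.reshape((A.shape[0],) + Xn.shape[1:])
    return np.moveaxis(Y, 0, n)

# Tucker model X = G x_1 A1 x_2 A2 x_3 A3, with core G of size R1 x R2 x R3
rng = np.random.default_rng(0)
R1, R2, R3 = 2, 3, 2
I1, I2, I3 = 4, 5, 6
G = rng.standard_normal((R1, R2, R3))
A1 = rng.standard_normal((I1, R1))
A2 = rng.standard_normal((I2, R2))
A3 = rng.standard_normal((I3, R3))

X = mode_n_product(mode_n_product(mode_n_product(G, A1, 0), A2, 1), A3, 2)
# Scalar check: x_{ijk} = sum_{r1,r2,r3} g_{r1 r2 r3} (A1)_{i r1} (A2)_{j r2} (A3)_{k r3}
X_check = np.einsum('pqr,ip,jq,kr->ijk', G, A1, A2, A3)
assert np.allclose(X, X_check)
```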

5. Tensor-Based Approaches for Nonlinear Systems

The use of tensor-based approaches for nonlinear systems has proved advantageous in three areas: (i) parametric complexity reduction, in order to get efficient and computationally fast parameter estimation algorithms; (ii) generation of new representations of nonlinear systems thanks to tensor decompositions; and (iii) structural identification of systems which can be represented as combinations of dynamical linear systems with static nonlinear blocks. In Section 5.1, we first describe polynomial models. Then, in Section 5.2, we introduce standard discrete-time Volterra models. To reduce the parametric complexity of such models, in Section 5.3, we present a tensor approach which consists in using a PARAFAC decomposition of Volterra kernels. The expansion of Volterra kernels on orthogonal bases is considered in Section 5.4. Finally, some links between block-oriented models (Hammerstein, Wiener, and Wiener–Hammerstein) and tensor representations via their associated Volterra kernels are established in Section 5.5.

5.1. Polynomial Models

Polynomial models are a direct extension of linear models. For a single-input single-output (SISO) system, the output of a recursive polynomial model, at discrete-time instant t, is given by
ŷ(t) = ∑_{p=1}^{P} f_p[u(t), …, u(t−n_u), y(t−1), …, y(t−n_y)],
where f_p(.) is a pth-degree polynomial in the system input (u) and output (y) signals, P is the nonlinearity order, and M = max(n_u, n_y) is the memory of the model. In the sequel, all the signals will be assumed to be real-valued.
The input/output (I/O) relationship (9) is also called a nonlinear autoregressive with exogenous input (NARX) model [83], or a one-step prediction model, i.e., a model whose output y ^ ( t ) at time t depends on past values y ( t n ) (for n n y ) of the system output, and current and past values u ( t n ) (for n = 0 , 1 , , n u ) of the system input. This model is an extension of the standard autoregressive with exogenous input (ARX) model, frequently used to study discrete time series, due to the presence of nonlinear terms in the input–output signals, which explains its success in many industrial applications.
Equation (9) can also be written as a regression model which is linear in its parameters, namely the polynomial coefficients, and nonlinear in the I/O signals:
ŷ(t) = φ^T(u(t), y(t−1)) θ,
where u(t) = [u(t) ⋯ u(t−n_u)]^T, y(t−1) = [y(t−1) ⋯ y(t−n_y)]^T, φ is the nonlinear regressor vector whose components are monomials in (i.e., products of) previous system outputs and previous and current system inputs contained in the vectors y(t−1) and u(t), and θ is the parameter vector containing the polynomial coefficients.
If the previous system outputs y ( t 1 ) , , y ( t n y ) are replaced by previous model outputs y ^ ( t 1 ) , , y ^ ( t n y ) , the polynomial model (9) is then called a simulation or nonlinear output error (NOE) model, defined as
ŷ(t) = ∑_{p=1}^{P} f_p[u(t), …, u(t−n_u), ŷ(t−1), …, ŷ(t−n_y)] = φ^T(u(t), ŷ(t−1)) θ,
with ŷ(t−1) = [ŷ(t−1) ⋯ ŷ(t−n_y)]^T.
This model is recursive with respect to previous model outputs y ^ ( t n ) (for n n y ), while the one-step prediction model (10) is purely feedforward.
As for linear systems, the advantage of NARX and NOE models with output feedback is to be more parsimonious than without output feedback, which means a reduced parametric complexity in terms of dimension of the parameter vector θ . One drawback of output feedback is that stability is generally not guaranteed. Another drawback of NOE models is that they need to use a nonlinear optimization method for parameter estimation due to the dependence of y ^ ( t 1 ) on θ in the regression Equation (11) implying a nonlinear dependence of the model output with respect to model parameters. That is not the case for the NARX model that is linear in its parameters, whose estimation can therefore be carried out by means of the standard least-squares (LS) algorithm.
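Since the NARX model (10) is linear in its parameters, θ can be estimated by the standard LS algorithm once the monomial regressors have been formed. The sketch below is purely illustrative (the simulated system, model orders, and regressor set are arbitrary choices made for this review, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000
u = rng.uniform(-1.0, 1.0, T)

# Simulated "true" system: y(t) = 0.5 y(t-1) + u(t-1) + 0.3 u(t-1)^2 - 0.2 u(t-1) y(t-1) + noise
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + u[t - 1] + 0.3 * u[t - 1] ** 2 - 0.2 * u[t - 1] * y[t - 1] \
           + 0.01 * rng.standard_normal()

# NARX regressor phi(t): monomials up to degree 2 in {u(t-1), y(t-1)}
def regressor(u1, y1):
    return np.array([u1, y1, u1 ** 2, y1 ** 2, u1 * y1])

Phi = np.array([regressor(u[t - 1], y[t - 1]) for t in range(1, T)])
target = y[1:]

theta_hat, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(theta_hat)  # approximately [1.0, 0.5, 0.3, 0.0, -0.2]
```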

5.2. Truncated Volterra Models

When the polynomial functions f p ( . ) in (9) are independent from the output signal, i.e., without output feedback, the polynomial model is called a nonrecursive polynomial model or a discrete-time Volterra model. A Pth-order Volterra model for a causal, stable, finite-memory, time-invariant SISO system is described by the following I/O relationship:
ŷ(t) = h_0 + ∑_{p=1}^{P} ∑_{m_1=1}^{M_p} ⋯ ∑_{m_p=1}^{M_p} h^{(p)}_{m_1,…,m_p} ∏_{q=1}^{p} u(t−m_q+1) = h_0 + ∑_{p=1}^{P} ŷ^{(p)}(t),
where h_0 is the offset, M_p is the memory of the pth-order homogeneous term ŷ^{(p)}(t), and h^{(p)}_{m_1,…,m_p} is a coefficient of the pth-order Volterra kernel, assumed to be real-valued.
Note that a truncated Volterra model can be seen as a truncated Taylor series expansion for approximating a given smooth nonlinear function (around 0 by convention).
Equation (12) can also be written as a polynomial regression model linear in its parameters and composed of monomials in previous samples of the input signal
y ^ ( t ) = h 0 + φ T ( u ( t ) ) θ ,
where u(t) = [u(t), …, u(t−M)]^T, with M = max_p(M_p), θ is the parameter vector containing all the kernel coefficients, and the vector φ contains all possible monomials in u up to degree P. In the sequel, we assume that all memories M_p are equal to M. The coefficient h^{(p)}_{m_1,…,m_p}, being characterized by p indices, can be viewed as an element of a tensor H^{(p)} ∈ R^{[p,M]} of order p, characterized by M^p entries, a number which grows very fast with the kernel order p. The pth-order homogeneous term ŷ^{(p)}(t) can then be written using the Tucker product as
ŷ^{(p)}(t) = H^{(p)} ×_{q=1}^{p} u^T(t),
which is a homogeneous polynomial of degree p in the components of the input vector.
Several adaptive and nonadaptive methods have been proposed to identify truncated Volterra models from I/O measurements, both in the time and frequency domains. Frequency methods are based on the use of input signal cumulants, which requires estimating high-order statistics of the input signal, up to order 2P for a Pth-order Volterra model. Such an approach is mainly interesting with a Gaussian input signal, since the input cumulants of order higher than two are then zero, which implies a significant simplification of frequency methods. In the time domain, we can distinguish the optimal minimum mean-square error (MMSE) estimator, based on the use of input signal statistics, the nonrecursive least-squares (LS) algorithm, which can be viewed as an approximation of the MMSE solution, and adaptive methods. Note that estimating the parameters of a homogeneous Pth-order Volterra kernel, with memory M, using the MMSE and nonrecursive LS solutions requires inverting an autocorrelation matrix of size M^P × M^P, which is a time-consuming and numerically difficult task.
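To make the parametric complexity explicit, the following illustrative sketch (written for this review) forms the LS regression for a homogeneous pth-order kernel by taking the p-fold Kronecker power of the input vector, whose length M^p grows exponentially with p; since only the symmetric part of the kernel is identifiable from I/O data, the true kernel is symmetrized before comparison:

```python
import numpy as np
from functools import reduce
from itertools import permutations
from math import factorial

rng = np.random.default_rng(0)
M, p, T = 5, 3, 2000

# Random symmetric p-th order kernel (only the symmetric part is identifiable from I/O data)
H = rng.standard_normal((M,) * p)
H = sum(H.transpose(perm) for perm in permutations(range(p))) / factorial(p)
h_true = H.reshape(-1)                           # M^p = 125 coefficients for M = 5, p = 3

u = rng.uniform(-1, 1, T)
rows, y = [], []
for t in range(M - 1, T):
    u_vec = u[t - M + 1:t + 1][::-1]             # [u(t), u(t-1), ..., u(t-M+1)]
    rows.append(reduce(np.kron, [u_vec] * p))    # p-fold Kronecker power: all degree-p monomials
    y.append(rows[-1] @ h_true + 0.01 * rng.standard_normal())

Phi = np.array(rows)                             # regression matrix of size (T-M+1) x M^p
h_hat, *_ = np.linalg.lstsq(Phi, np.array(y), rcond=None)
print(M ** p, np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))  # small relative error
```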
Adaptive methods are often associated with adaptive Volterra filters used for representing NL time-varying signals and systems as encountered in echo cancellation, for instance. Parameter estimation of adaptive Volterra filters is carried out using the well-known least-mean-square (LMS) or recursive LS (RLS) algorithms. See the book [28] and the references therein for an overview of the methods briefly introduced above.
In the next section, we present an approach for identifying reduced complexity Volterra models which is based on a PARAFAC decomposition of symmetrized kernels, leading to the so-called Volterra–PARAFAC models.

5.3. Volterra–PARAFAC Models

As each permutation of the indices m_1, …, m_p corresponds to the same product ∏_{q=1}^{p} u(t−m_q+1) of delayed inputs, we can sum all the coefficients associated with these permutations to get a symmetric kernel given by
h^{(p,sym)}_{m_1,…,m_p} = (1/p!) ∑_{π(·)} h^{(p)}_{m_{π(1)},…,m_{π(p)}},
where (π(1), …, π(p)) denotes a permutation of (1, …, p). So, in the sequel, without loss of generality, the Volterra kernels of order p ≥ 2 will be considered in symmetric form. Assuming all the kernels have the same memory M, the number of independent coefficients contained in the symmetric pth-order kernel is equal to C^{M+p−1}_{p} = (M+p−1)! / (p!(M−1)!), showing that this number, and consequently the parametric complexity of the Volterra model, grows quickly with M even for moderate p.
In order to reduce the complexity of Volterra models, a PARAFAC decomposition of symmetrized kernels was exploited in [40,41]. The symmetrized pth-order Volterra kernel can then be decomposed using a symmetric PARAFAC decomposition, with symmetric rank r p and matrix factor A ( p ) R M × r p , for p P , as [77]
h^{(p,sym)}_{m_1,…,m_p} = ∑_{r=1}^{r_p} ∏_{q=1}^{p} a^{(p)}_{m_q,r},   m_q = 1, …, M.
Remark 1.
Note that a pth-order Volterra kernel is said to be separable if it can be written as the product of p first-order kernels [28], i.e.,
h^{(p)}_{m_1,…,m_p} = ∏_{q=1}^{p} a^{(p)}_{m_q},   m_q = 1, …, M,
which corresponds to a rank-one PARAFAC decomposition (15).
The kernel decomposition (15) allows rewriting the pth-order homogeneous term as follows:
ŷ^{(p)}(t) = ∑_{m_1=1}^{M} ⋯ ∑_{m_p=1}^{M} h^{(p,sym)}_{m_1,…,m_p} ∏_{q=1}^{p} u(t−m_q+1)
= ∑_{m_1=1}^{M} ⋯ ∑_{m_p=1}^{M} ∑_{r=1}^{r_p} ∏_{q=1}^{p} a^{(p)}_{m_q,r} ∏_{q=1}^{p} u(t−m_q+1),
or equivalently
ŷ^{(p)}(t) = ∑_{r=1}^{r_p} ∏_{q=1}^{p} ( ∑_{m_q=1}^{M} a^{(p)}_{m_q,r} u(t−m_q+1) ) = ∑_{r=1}^{r_p} [ u^T(t) A^{(p)}_{.r} ]^p.
We then obtain a homogeneous polynomial of degree p expressed as a sum of powers of linear forms, which is directly connected to the Waring problem. Note that a Waring decomposition consists in expressing a homogeneous polynomial of degree p in n variables (i.e., a quantic), associated with a symmetric tensor, as a sum of pth powers of linear forms [84]. The computation of this pth-order homogeneous term can therefore be carried out by putting r_p Wiener models in parallel. As introduced later (see Section 5.5.2), each Wiener model is composed of a FIR linear filter, whose coefficients are the components of a column A^{(p)}_{.r} ∈ R^M of the matrix factor A^{(p)}, in cascade with a static nonlinearity equal to the power (.)^p. Consequently, the Volterra model output (12) is obtained as the sum of the offset term h_0 and the outputs of ∑_{p=1}^{P} r_p Wiener models in parallel, as illustrated in Figure 1 for a cubic Volterra–PARAFAC model, where A^{(1)}_{.1} = [h^{(1)}_1, …, h^{(1)}_M]^T and r_1 = 1.
It is worth noting that such a Volterra–PARAFAC model provides a very attractive modular and parallel architecture for approximating nonlinear systems with a low computational complexity.
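As an illustration of Equation (19), the sketch below (written for this review, with arbitrary ranks and factor values, not those of [40,41]) computes the output of a Volterra–PARAFAC model as a sum of parallel Wiener branches:

```python
import numpy as np

def volterra_parafac_output(u_hist, h0, factors):
    """Output sample of a Volterra-PARAFAC model.
    u_hist: input regression vector [u(t), u(t-1), ..., u(t-M+1)] (length M).
    factors: dict {p: A_p} where A_p (M x r_p) holds the PARAFAC factors of the p-th kernel."""
    y = h0
    for p, A in factors.items():
        z = u_hist @ A          # z_r = u(t)^T A_{.r}, one value per branch r
        y += np.sum(z ** p)     # each branch applies the static nonlinearity (.)^p
    return y

# Example: M = 5, cubic model with ranks r_1 = 1, r_2 = 2, r_3 = 2
rng = np.random.default_rng(0)
M, h0 = 5, 0.1
factors = {1: rng.standard_normal((M, 1)),
           2: rng.standard_normal((M, 2)),
           3: rng.standard_normal((M, 2))}

u = rng.uniform(-1, 1, 200)
y = [volterra_parafac_output(u[t - M + 1:t + 1][::-1], h0, factors) for t in range(M - 1, len(u))]
print(len(y), y[:3])
```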
This Volterra–PARAFAC architecture is to be compared with the parallel cascade Wiener (PCW) model composed of P Wiener models in parallel and described by the following equation:
ŷ(t) = ∑_{p=1}^{P} N^{(p)}( ∑_{m=1}^{M} h^{(p)}_m u(t−m+1) ),
where h^{(p)}_m is the mth coefficient of the pth FIR model h^{(p)}, and N^{(p)} represents a static nonlinearity for the pth path, p = 1, …, P. Comparing Equation (20) with Equation (19) allows us to conclude that the Volterra–PARAFAC model is a PCW model whose FIR filters are the columns of the factor matrices of the PARAFAC representations of the Volterra kernels, and whose pth static nonlinearity N^{(p)}(.) is the power (.)^p. In [85], it is shown that any discrete-time, finite-memory nonlinear system can be approximated with an arbitrary accuracy by a PCW model with a finite number P of paths. A method based on a joint diagonalization of third-order Volterra kernel slices is proposed in [50] for identifying PCW systems.
The extended Kalman filter (EKF) was proposed in [40,41] to estimate the parameters of a Volterra–PARAFAC model, associated with the following state-space representation
θ(t) = θ(t−1) + w(t)
y(t) = ∑_{p=1}^{P} ∑_{r=1}^{r_p} [ u^T(t) A^{(p)}_{.r} ]^p = G(θ(t), u(t)),
where the state vector is the Volterra–PARAFAC parameters vector θ defined as
θ ≜ [ A^{(1)T}_{.1}, A^{(2)T}_{.1}, …, A^{(2)T}_{.r_2}, …, A^{(P)T}_{.1}, …, A^{(P)T}_{.r_P} ]^T ∈ R^{M̄}
= [ A^{(1)T}_{.1}, vec^T(A^{(2)}), …, vec^T(A^{(P)}) ]^T
= [ [θ^{(1)}]^T, [θ^{(2)}]^T, …, [θ^{(P)}]^T ]^T,
with θ^{(p)} ≜ vec(A^{(p)}) = [ A^{(p)T}_{.1}, …, A^{(p)T}_{.r_p} ]^T, for p = 1, …, P, and M̄ = M(1 + ∑_{p=2}^{P} r_p).
Equation (21) corresponds to a random walk model for modeling slowly time-varying parameters θ , and w ( t ) R M ¯ is a white Gaussian noise sequence with covariance σ w 2 I M ¯ .
The EKF algorithm can be used online for updating the estimated parameters as input samples become available, and even for tracking time-varying kernels. It is obtained by applying the Kalman filter after linearization of the nonlinear function G ( θ , u ( t ) ) around the last estimate θ ^ ( t 1 )
y(t) ≈ G(θ̂(t−1), u(t)) + h^T(t) (θ − θ̂(t−1)),
where h ( t ) is the gradient of G ( θ , u ( t ) ) with respect to the parameter vector θ , calculated at the point θ = θ ^ ( t 1 )
h(t) ≜ ∂G(θ, u(t))/∂θ |_{θ=θ̂(t−1)} ∈ R^{M̄}
= [ (h^{(1)}(t))^T, (h^{(2)}(t))^T, …, (h^{(P)}(t))^T ]^T
h^{(1)}(t) = u(t)
h^{(p)}(t) ≜ ∂G(θ, u(t))/∂θ^{(p)} |_{θ=θ̂(t−1)}, for p ∈ [2, P].
Let us define the scalar quantity z p , r ( t ) as
z_{p,r}(t) ≜ u^T(t) A^{(p)}_{.r}.
The nonlinear function G(θ, u(t)) defined in (22) can then be written as
G(θ, u(t)) = ∑_{p=1}^{P} ∑_{r=1}^{r_p} z^p_{p,r}(t).
By the chain rule, we have
∂G(θ, u(t))/∂A^{(p)}_{.r} = p z^{p−1}_{p,r}(t) u(t) ∈ R^M, so that h^{(p)}(t) = [ (∂G(θ, u(t))/∂A^{(p)}_{.1})^T, …, (∂G(θ, u(t))/∂A^{(p)}_{.r_p})^T ]^T |_{θ=θ̂(t−1)} = p [ ẑ^{p−1}_{p,1}(t), …, ẑ^{p−1}_{p,r_p}(t) ]^T ⊗ u(t) ∈ R^{M r_p},
where z ^ p , r ( t ) = u T ( t ) A ^ . r ( p ) .
The EKF equations are then derived from the Kalman filter associated with the linearized state space equations
θ ( t ) = θ ( t 1 ) + w ( t )
y ( t ) = h T ( t ) θ + n ( t ) ,
where n ( t ) is assumed to be a white Gaussian noise, with variance σ n 2 , including both the measurement noise and the modeling error.
The innovation process associated with the linearized Equation (26) is equal to
e(t) = y(t) − G(θ̂(t−1), u(t))
= h^T(t) (θ − θ̂(t−1)) + n(t),
with variance s(t) = E[e²(t)] = h^T(t) P(t|t−1) h(t) + σ²_n, where P(t|t−1) is the covariance matrix of the prediction error θ − θ̂(t−1), and
G(θ̂(t−1), u(t)) = ∑_{p=1}^{P} ∑_{r=1}^{r_p} [ u^T(t) Â^{(p)}_{.r}(t−1) ]^p.
The Kalman gain is given by
k(t) = (1/s(t)) P(t|t−1) h(t),
and the recursive equation for calculating the parameter vector estimate is
θ ^ ( t ) = θ ^ ( t 1 ) + k ( t ) e ( t ) .
Finally, the equation for updating the covariance matrix of the one-step prediction error is
P(t+1|t) = [ I_{M̄} − k(t) h^T(t) ] P(t|t−1) + σ²_w I_{M̄}.
The EKF algorithm is summarized in Algorithm 1.
Algorithm 1: Extended Kalman filter for parameter estimation of a Volterra–PARAFAC model.
Given σ w 2 and σ n 2 :
  • Initialize P ( 0 / 1 ) and θ ^ ( 0 ) .
  • For t = 1 to t = T , compute:
  • The innovation process e ( t ) using Equations (36) and (38);
  • The gradient h ( t ) using Equations (28), (29), (31) and (33);
  • The Kalman gain k ( t ) using Equation (39);
  • The recursive parameter estimate with Equation (40);
  • The updated error covariance matrix P ( t + 1 / t ) using Equation (41).
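The following Python sketch is an illustrative implementation of Algorithm 1 written for this review (the noise variances, ranks, and initialization are arbitrary choices, and the offset h_0 is omitted for brevity):

```python
import numpy as np

def ekf_volterra_parafac(u, y, M, ranks, sigma_w2=1e-6, sigma_n2=1e-2, seed=0):
    """EKF estimation of Volterra-PARAFAC factors; ranks[p-1] = r_p for p = 1, ..., P."""
    dim = M * sum(ranks)                           # size of the parameter vector theta
    rng = np.random.default_rng(seed)
    theta = 0.1 * rng.standard_normal(dim)         # initial parameter estimate
    Pcov = np.eye(dim)                             # initial prediction-error covariance

    def split(theta):
        """Recover the factor matrices A^(p) (M x r_p) from the stacked vector theta."""
        mats, start = [], 0
        for r in ranks:
            mats.append(theta[start:start + M * r].reshape(M, r, order='F'))
            start += M * r
        return mats

    def output_and_gradient(theta, u_vec):
        mats = split(theta)
        g, grad = 0.0, []
        for p, A in enumerate(mats, start=1):
            z = u_vec @ A                          # z_r = u(t)^T A_{.r}
            g += np.sum(z ** p)
            # d g / d A_{.r} = p z_r^(p-1) u(t); stack the r_p column blocks
            grad.append(np.concatenate([p * (zr ** (p - 1)) * u_vec for zr in z]))
        return g, np.concatenate(grad)

    for t in range(M - 1, len(u)):
        u_vec = u[t - M + 1:t + 1][::-1]           # [u(t), u(t-1), ..., u(t-M+1)]
        g, h = output_and_gradient(theta, u_vec)
        e = y[t] - g                               # innovation
        s = h @ Pcov @ h + sigma_n2                # innovation variance
        k = Pcov @ h / s                           # Kalman gain
        theta = theta + k * e                      # parameter update
        Pcov = (np.eye(dim) - np.outer(k, h)) @ Pcov + sigma_w2 * np.eye(dim)
    return split(theta)
```

A call such as ekf_volterra_parafac(u, y, M=5, ranks=[1, 2, 2]) returns the estimated factor matrices after one pass over the data.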
Example 1.
In this example, we consider a third-order Volterra model with memory M = 5 and rank-one second-order and third-order kernels. Each kernel acts on a specific bandwidth, making the nonlinear distortion frequency selective. Precisely:
  • First-order kernel h m 1 ( 1 ) = a m 1 ( 1 ) , with a m 1 ( 1 ) the m 1 th entry of
    A . 1 ( 1 ) = 0.0284 0.2370 0.4692 0.2370 0.0284 T , which represents a low pass FIR filter with normalized cut-off frequency 0.2.
  • Second-order kernel h m 1 , m 2 ( 2 ) = a m 1 ( 2 ) a m 2 ( 2 ) with a m i , i = 1 , 2 the entries of
    A . 1 ( 2 ) = 0.0568 0.0708 0.8698 0.0708 0.0568 T , which stands for a bandpass filter with normalized frequencies 0.3 and 0.5.
  • Third-order kernel h^{(3)}_{m_1,m_2,m_3} = a^{(3)}_{m_1} a^{(3)}_{m_2} a^{(3)}_{m_3} with a^{(3)}_{m_i}, i = 1, 2, 3, the entries of A^{(3)}_{.1} = [ 0.0302 0.3463 0.7471 0.3463 0.0302 ]^T, a high-pass filter with normalized frequency 0.9.
We first analyze the output reconstruction capability of the Volterra–PARAFAC model with parameters estimated by an EKF. Then, we evaluate the transient behavior of the algorithm in the noiseless case in comparison with a parallel cascade Wiener system (PCWS). Finally, the steady-state results in the noisy case are evaluated. The considered PCWS has three branches, each branch being a Wiener system of order 3 and memory 5. The parameters of the PCWS were estimated using an EKF. The simulation results given hereafter were obtained by implementing the algorithms with MATLAB R2018b. The code is provided as Supplementary Material.
Output reconstruction: 
We consider the composite signal u(t) = 0.5 sin(0.01πt) + 0.5 sin(0.9πt) as input. Figure 2 depicts the output reconstruction obtained with the proposed EKF from a noisy signal. One can note a very good reconstruction after convergence of the filter.
Transient behavior evaluation in the noiseless case: 
R = 100 Monte Carlo runs are considered for this analysis. For each run ρ, the square error e_ρ(t) = (y(t) − ŷ_ρ(t))², with ŷ_ρ(t) the reconstructed output at the ρ-th run, is computed. Then the median value over the R runs is computed as ϵ(t) = median{ e_ρ(t), ρ = 1, 2, …, R }. This allows discarding outliers due to ill convergence of the EKF. Indeed, depending on the initialization, the EKF sometimes failed to converge with the selected number of samples. This is particularly the case for the PCWS. Finally, ϵ(t) is smoothed with a moving average filter: ϵ_L(t) = (1/L) ∑_{τ=0}^{L−1} ϵ(t−τ). The obtained results are given in Figure 3, where a comparison between the PCWS and Volterra–PARAFAC models in terms of the square error ϵ_L(t), with L = 100, is depicted. In general, the EKF converges faster with Volterra–PARAFAC than with the PCWS in the noiseless case.
Evaluation in steady state: 
To evaluate the steady-state performance, the NMSE (normalized mean square error) is calculated as NMSE = ϵ̄² / ȳ², with ϵ̄² = (1/(t_f − t_0)) ∑_{t=t_0}^{t_f} ϵ²(t) and ȳ² = (1/(t_f − t_0)) ∑_{t=t_0}^{t_f} y²(t), where the interval [t_0, t_f] characterizes the steady state. The evaluation was carried out with two types of input signals: the composite sum of sines previously used and a random input. The random input was drawn from a uniform distribution between −1 and 1. The number of iterations needed for convergence of the EKF with the random input was much less than with the sum of sines; 10,000 samples were generated for the random input and the steady-state performance was evaluated from the 1000 last samples of the reconstructed output. In the case of the sum of sines, 100,000 samples were generated and the steady state was evaluated from the 10,000 last samples. A white Gaussian noise was added to the output; its variance depends on a specified signal-to-noise ratio (SNR). For different values of SNR, Figure 4 and Figure 5 depict the NMSE in steady state for the sum of sines and for the considered random input, respectively. It can be noticed that in steady state, both Volterra–PARAFAC and PCWS give the same performance whatever the input signal; at lower SNR values, Volterra–PARAFAC is slightly better than PCWS.

5.4. Volterra–GOB Models

Under stability and causality conditions, a Volterra kernel H^{(p)} can be expanded on a basis of orthogonal functions [27]. Various functions have been introduced in the literature (Laguerre, Kautz, generalized orthogonal basis functions (GOBF), etc.). The selection of such a basis has been widely studied (see [38] for instance). Denoting by b^{(j,p)}_{k_j}(.), k_j = 1, 2, …, a set of orthogonal basis functions for expanding the pth-order Volterra kernel along its jth mode, j = 1, …, p, the GOB expansion of this Volterra kernel in such a basis is given by
h^{(p)}_{m_1,m_2,…,m_p} = ∑_{k_1=1}^{∞} ∑_{k_2=1}^{∞} ⋯ ∑_{k_p=1}^{∞} g^{(p)}_{k_1,k_2,…,k_p} ∏_{j=1}^{p} b^{(j,p)}_{k_j}(m_j),   m_j = 1, …, M_p, j = 1, …, p,
where g k 1 , k 2 , , k p ( p ) are the coefficients of the expansion, also called Fourier coefficients, and the GOB functions b k j ( j , p ) ( . ) in the time domain can be derived from the inverse z-transform of some transfer function [86]. This expansion is often truncated to a given order K p for practical reasons, leading to the following truncated development
h^{(p)}_{m_1,m_2,…,m_p} = ∑_{k_1=1}^{K_p} ∑_{k_2=1}^{K_p} ⋯ ∑_{k_p=1}^{K_p} g^{(p)}_{k_1,k_2,…,k_p} ∏_{j=1}^{p} b^{(j,p)}_{k_j}(m_j),
where b k j ( j , p ) ( m j ) is the m j th entry of the k j th column B . k j ( j , p ) R M p of the matrix factor B ( j , p ) R M p × K p , associated with mode j p .
The development (43) of the pth-order Volterra kernel, viewed as a pth-order tensor H ( p ) = h m 1 , m 2 , , m p ( p ) R M p × × M p , can be interpreted as the following Tucker model:
H ( p ) = G ( p ) × j = 1 p B ( j , p ) ,
where the core tensor G ( p ) R K p × × K p contains the Fourier coefficients.
Consider the FIR linear filter B^{(j,p)}_{k_j}(q^{−1}) = ∑_{m_j=1}^{M_p} b^{(j,p)}_{k_j}(m_j) q^{−m_j}, where q^{−1} is the unit delay operator. This filter, with memory M_p, is associated with mode j of the tensor H^{(p)}.
Now, let us define the filtered input s^{(j,p)}_{k_j}(t), for k_j = 1, …, K_p and j = 1, …, p, as
s^{(j,p)}_{k_j}(t) = B^{(j,p)}_{k_j}(q^{−1}) u(t) = ∑_{m_j=1}^{M_p} b^{(j,p)}_{k_j}(m_j) u(t−m_j).
Using the truncated expansion (43) of the pth Volterra kernel and the filtered inputs (45), the input–output relationship for the pth-order homogeneous Volterra–GOB term can then be written as
ŷ^{(p)}(t) = ∑_{m_1=1}^{M_p} ∑_{m_2=1}^{M_p} ⋯ ∑_{m_p=1}^{M_p} h^{(p)}_{m_1,m_2,…,m_p} ∏_{j=1}^{p} u(t−m_j)
= ∑_{k_1=1}^{K_p} ∑_{k_2=1}^{K_p} ⋯ ∑_{k_p=1}^{K_p} g^{(p)}_{k_1,k_2,…,k_p} ∏_{j=1}^{p} s^{(j,p)}_{k_j}(t).
Taking the Tucker model (44) of the pth-order kernel tensor H^{(p)} into account, and defining the vector s^{(j,p)}(t) = [ s^{(j,p)}_1(t), …, s^{(j,p)}_{K_p}(t) ]^T ∈ R^{K_p}, the input–output equation for the Volterra–GOB model then becomes
ŷ(t) = h_0 + ∑_{p=1}^{P} G^{(p)} ×_{j=1}^{p} [s^{(j,p)}(t)]^T.
Figure 6 illustrates a third-order Volterra–GOB model.
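A minimal sketch of the filtered inputs (45) and of the resulting Volterra–GOB output term is given below for the second-order case (written for this review; the basis filters are placeholder orthonormalized FIR filters, whereas in practice they would be Laguerre, Kautz, or GOB functions derived from chosen poles):

```python
import numpy as np
from scipy.signal import lfilter

# Placeholder basis: K_p FIR filters of memory M_p (illustrative assumption)
rng = np.random.default_rng(0)
M_p, K_p = 8, 3
B = rng.standard_normal((M_p, K_p))          # B[:, k] = impulse response b_k of the k-th basis filter
B, _ = np.linalg.qr(B)                       # orthonormalize the columns for the example

u = rng.uniform(-1, 1, 500)

# Filtered inputs s_k(t) = sum_m b_k(m) u(t-m); here the same basis is used for every mode j
S = np.column_stack([lfilter(np.r_[0.0, B[:, k]], [1.0], u) for k in range(K_p)])  # T x K_p

# Core tensor (matrix) of Fourier coefficients for the second-order term
G2 = rng.standard_normal((K_p, K_p))

# y^(2)(t) = sum_{k1,k2} g_{k1,k2} s_{k1}(t) s_{k2}(t) = s(t)^T G2 s(t)
y2 = np.einsum('tk,kl,tl->t', S, G2, S)
print(y2[:5])
```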
Remark 2.
Note that the truncation order K p , and as a consequence the parametric complexity of the Volterra–GOB model, is strongly dependent on the choice of the GOB functions, which is a difficult task. Once these functions are fixed, the Volterra–GOB model is linear in its parameters, the Fourier coefficients, which can be estimated using the standard least-squares (LS) method. In comparison, the Volterra–PARAFAC model is nonlinear in its parameters, the PARAFAC coefficients, which requires the use of a nonlinear optimization method like the extended Kalman filter, for their estimation.

5.5. Block-Oriented Models

Nonlinear input–output models constituted by a cascade of linear dynamic subsystems with memoryless (static) nonlinearities, also called block-oriented (or block-structured) nonlinear models, have been extensively studied by many authors during the last three decades. They play an important role in many fields of application owing to their low parametric complexity, implying a low computational cost for system identification. Moreover, they often reflect the structure of physical systems. We review hereafter the three most common block-oriented models and their tensor representation. According to the Weierstrass theorem, it is assumed that the nonlinear blocks are continuous and can therefore be represented by a polynomial of a given degree P: c(x) = ∑_{p=0}^{P} c_p x^p.

5.5.1. Hammerstein Model

It is constituted of a nonlinear functional block followed by a linear FIR one g(.), with memory M_g. In control applications, as illustrated in Figure 7, the Hammerstein model is used for representing control systems with nonlinearities in the actuator.
The output y ( t ) of the Hammerstein model is given by
y(t) = ∑_{i=1}^{M_g} g_i v(t−i)
= ∑_{i=1}^{M_g} g_i ∑_{p=0}^{P} c_p u^p(t−i)
= ∑_{p=0}^{P} c_p ∑_{i=1}^{M_g} g_i u^p(t−i).
This model is therefore equivalent to a Volterra model of order P, with the following pth-order kernel
h i , i 2 , , i p ( p ) = c p g i δ i , i 2 , , i p , i M g ,
where δ i , i 2 , , i p is the generalized Kronecker delta. The corresponding tensor is diagonal and given by [49]
H ( p ) = c p G ( p ) ,
where G ( p ) R M g × × M g is a diagonal tensor whose diagonal elements are the components of the FIR coefficients vector g = [ g 1 , , g M g ] T .
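As a simple illustration, the NumPy sketch below builds the diagonal $p$th-order kernel $\mathcal{H}^{(p)} = c_p \mathcal{G}^{(p)}$ for assumed (purely illustrative) values of $\mathbf{g}$ and $c_p$, and checks the homogeneous $p$th-order output contribution against a direct evaluation of $c_p \sum_i g_i u^p(t-i)$.

```python
# Sketch with assumed example coefficients: diagonal Hammerstein kernel H^(p) = c_p G^(p)
# and a numerical check of the p-th order homogeneous output term (here p = 3).
import numpy as np

rng = np.random.default_rng(1)
p = 3
g = np.array([0.8, -0.4, 0.2])                 # FIR coefficients g_1..g_Mg (assumed)
c = np.array([0.0, 1.0, 0.5, -0.1])            # polynomial coefficients c_0..c_P (assumed)
M_g = len(g)

H = np.zeros((M_g,) * p)
for i in range(M_g):                           # only diagonal entries are nonzero: c_p * g_i
    H[(i,) * p] = c[p] * g[i]

u = rng.standard_normal(50)
t = 30
past = u[t - 1: t - 1 - M_g: -1]               # u(t-1), ..., u(t-Mg)
y_kernel = sum(H[i, j, k] * past[i] * past[j] * past[k]
               for i in range(M_g) for j in range(M_g) for k in range(M_g))
y_direct = c[p] * np.sum(g * past ** p)
print(np.isclose(y_kernel, y_direct))          # True
```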

5.5.2. Wiener Model

It is the dual of the Hammerstein model: the FIR linear block $l(\cdot)$, with memory $M_h$, precedes the nonlinear one, as illustrated in Figure 8. It allows, for instance, sensor nonlinearities to be taken into account.
For this model, the output y ( t ) is given by
$$y(t) = c(w(t)) = \sum_{p=0}^{P} c_p\, w^p(t) = \sum_{p=0}^{P} c_p \Big( \sum_{i=1}^{M_h} l_i\, u(t-i) \Big)^p = \sum_{p=0}^{P} c_p \sum_{i_1=1}^{M_h} \cdots \sum_{i_p=1}^{M_h} \prod_{j=1}^{p} l_{i_j}\, u(t - i_j).$$
This model is equivalent to a Volterra model of order $P$ whose $p$th-order kernel is a rank-one symmetric tensor defined as
$$h^{(p)}_{i_1, i_2, \ldots, i_p} = c_p \prod_{j=1}^{p} l_{i_j}, \quad i_j = 1, \ldots, M_h, \quad j = 1, \ldots, p,$$
or, equivalently,
$$\mathcal{H}^{(p)} = c_p \mathop{\circ}_{j=1}^{p} \mathbf{l},$$
where $\mathbf{l} = [l_1, \ldots, l_{M_h}]^T$ is the vector of FIR coefficients.
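For illustration, this rank-one symmetric kernel can be formed with repeated outer products; the values of $\mathbf{l}$ and $c_p$ below are assumptions chosen only for the example.

```python
# Sketch (assumed values): rank-one symmetric Wiener kernel H^(p) = c_p (l o l o ... o l).
import numpy as np
from functools import reduce

p = 3
l = np.array([1.0, 0.6, -0.3, 0.1])            # FIR coefficients l_1..l_Mh (assumed)
c_p = 0.5                                      # p-th polynomial coefficient (assumed)

H = c_p * reduce(np.multiply.outer, [l] * p)   # h_{i1..ip} = c_p * prod_j l_{ij}
print(H.shape, np.isclose(H[1, 2, 3], c_p * l[1] * l[2] * l[3]))
```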

5.5.3. Wiener–Hammerstein Model

The Wiener–Hammerstein model, whose structure is illustrated in Figure 9, is a combination of the Wiener and Hammerstein models described previously. Its output y ( t ) is given by
$$y(t) = \sum_{i=1}^{M_g} g_i\, v(t-i) = \sum_{i=1}^{M_g} g_i\, c(w(t-i)) = \sum_{i=1}^{M_g} g_i \sum_{p=0}^{P} c_p\, w^p(t-i) = \sum_{p=0}^{P} c_p \sum_{i=1}^{M_g} g_i \prod_{j=1}^{p} \sum_{m_j=1}^{M_h} l_{m_j}\, u(t - i - m_j).$$
Defining the change of variables $i_j = i + m_j$, for $j = 1, \ldots, p$, and reordering the sums leads to the following I/O relationship:
$$y(t) = \sum_{p=0}^{P} c_p \sum_{i_1=2}^{M_v} \cdots \sum_{i_p=2}^{M_v} \sum_{i=1}^{M_g} g_i \prod_{j=1}^{p} l_{i_j - i}\, u(t - i_j),$$
where $M_v = M_g + M_h$ stands for the memory of the overall system. The Wiener–Hammerstein model is associated with a Volterra model whose $p$th-order kernel is given by [49]
$$h^{(p)}_{i_1, \ldots, i_p} = c_p \sum_{i=1}^{M_g} g_i \prod_{j=1}^{p} l_{i_j - i}, \quad i_j = 2, \ldots, M_v, \quad j = 1, \ldots, p.$$
The corresponding tensor $\mathcal{H}^{(p)} \in \mathbb{R}^{(M_v-1) \times \cdots \times (M_v-1)}$ is a rank-$M_g$ tensor admitting a PARAFAC decomposition written as
$$\mathcal{H}^{(p)} = c_p \sum_{i=1}^{M_g} g_i \mathop{\circ}_{j=1}^{p} \mathbf{a}_i = c_p\, \mathcal{I}_{p, M_g} \times_{j=1}^{p} \mathbf{A}^{(j)},$$
where
$$\mathbf{a}_i = \begin{bmatrix} \mathbf{0}_{i-1} \\ \mathbf{l} \\ \mathbf{0}_{M_g - i} \end{bmatrix} \in \mathbb{R}^{M_v - 1}, \quad i = 1, \ldots, M_g,$$
$$\mathbf{A}^{(j)} = \big[\mathbf{a}_1 \cdots \mathbf{a}_{M_g}\big] = \mathbf{T}_{M_v-1, M_g}(\mathbf{l}) = \begin{bmatrix} l_1 & & 0 \\ \vdots & \ddots & \\ l_{M_h} & & l_1 \\ & \ddots & \vdots \\ 0 & & l_{M_h} \end{bmatrix}, \quad j = 1, \ldots, p-1,$$
$$\mathbf{A}^{(p)} = \mathbf{T}_{M_v-1, M_g}(\mathbf{l})\, \mathrm{diag}(\mathbf{g}) \in \mathbb{R}^{(M_v-1) \times M_g},$$
i.e., $\mathbf{T}_{M_v-1, M_g}(\mathbf{l})$ is the $(M_v-1) \times M_g$ banded Toeplitz matrix whose $i$th column is the vector $\mathbf{l}$ shifted down by $i-1$ positions.
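The construction of the Toeplitz factor $\mathbf{T}_{M_v-1, M_g}(\mathbf{l})$ and of the rank-$M_g$ kernel can be sketched in a few lines of NumPy; the coefficient values below are assumptions used only for illustration, with $p = 3$.

```python
# Sketch (assumed coefficients): Toeplitz factor T_{Mv-1,Mg}(l) and rank-Mg
# PARAFAC construction of the third-order Wiener-Hammerstein kernel.
import numpy as np

l = np.array([1.0, 0.5, -0.2])                 # first linear block (memory M_h), assumed
g = np.array([0.7, 0.3])                       # second linear block (memory M_g), assumed
c_p = 0.5                                      # p-th polynomial coefficient, assumed
M_h, M_g = len(l), len(g)

def toeplitz_factor(l, M_g):
    """Columns a_i = [0_{i-1}; l; 0_{Mg-i}], i = 1..Mg."""
    T = np.zeros((len(l) + M_g - 1, M_g))
    for i in range(M_g):
        T[i:i + len(l), i] = l
    return T

A = toeplitz_factor(l, M_g)                    # A^(j) for j = 1,...,p-1
A_p = A @ np.diag(g)                           # A^(p) = T(l) diag(g)
# H^(p) = c_p * sum_i g_i (a_i o a_i o a_i), here for p = 3
H = c_p * sum(np.multiply.outer(np.multiply.outer(A[:, i], A[:, i]), A_p[:, i])
              for i in range(M_g))
print(H.shape)                                 # (M_v-1, M_v-1, M_v-1)
```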

5.5.4. Tensor Rank and Structure Identification

Given the Volterra model associated with a block-oriented system of one of the three types considered previously, it was shown in [51] that the structure of the system can be inferred by analyzing the tensor rank of the $p$th-order Volterra kernel $\mathcal{H}^{(p)}$. Indeed, from the results established in Section 5.5.1, Section 5.5.2 and Section 5.5.3, we can conclude that the tensor $\mathcal{H}^{(p)}$ has a rank less than or equal to $M_g$; it is precisely of rank one for a Wiener model and diagonal for a Hammerstein one. However, the PARAFAC decomposition of $\mathcal{H}^{(p)}$ is not guaranteed to be of minimal rank. The method of [51] is therefore based on filtering the system output with an FIR filter having nonzero random coefficients; the Volterra kernel of this augmented system is ensured to have minimal rank. For an FIR filter of order $M_f$ with impulse response coefficient vector $\mathbf{f}$, the tensor corresponding to the $p$th-order Volterra kernel of the augmented system is given by
$$\bar{\mathcal{H}}^{(p)} = c_p\, \mathcal{I} \times_{i=1}^{p} \bar{\mathbf{A}}^{(i)},$$
where $\bar{\mathbf{A}}^{(i)} = \mathbf{T}_{\bar{M}_v, \bar{M}_g}(\mathbf{l})$ for $i = 1, \ldots, p-1$, $\bar{\mathbf{A}}^{(p)} = \mathbf{T}_{\bar{M}_v, \bar{M}_g}(\mathbf{l})\, \mathrm{diag}(\bar{\mathbf{g}})$, $\bar{\mathbf{g}} = \mathbf{T}_{\bar{M}_g, M_g}(\mathbf{f})\, \mathbf{g}$, $\bar{M}_g = M_f + M_g - 1$, and $\bar{M}_v = M_v + M_f - 1$. The factor matrices are generically full column rank. Therefore, the matrix unfolding of the tensor along the $p$th dimension is full rank and reflects the tensor rank. This leads to the following decision rule:
  • $\mathrm{rank}(\bar{\mathcal{H}}^{(p)}) = \bar{M}_v$ ⟹ Hammerstein structure (Nonlinear–Linear);
  • $\mathrm{rank}(\bar{\mathcal{H}}^{(p)}) = M_f$ ⟹ Wiener structure (Linear–Nonlinear);
  • $\mathrm{rank}(\bar{\mathcal{H}}^{(p)}) \notin \{M_f, \bar{M}_v\}$ ⟹ Wiener–Hammerstein structure (Linear–Nonlinear–Linear).
Since the factor matrices are full column rank, the tensor rank is precisely given by the rank of the $p$th matrix unfolding of the tensor, hereafter denoted $\bar{\mathbf{H}}_p$. However, in the presence of Volterra kernel estimation errors, $\bar{\mathbf{H}}_p$ is often full column rank, which can lead to an erroneous selection of the Hammerstein structure. It is therefore necessary to check the diagonal structure of the tensor in order to confirm or reject that decision, which is performed by testing whether the sum of the diagonal entries of the tensor is much larger than the sum of the off-diagonal ones. If the matrix is rank deficient, a useful rule for computing the rank $r$ from the singular values of the matrix is given in [87] as
$$r = \arg\min_i \rho(i), \qquad \rho(i) = \frac{\sigma_{i+1}^2}{\sigma_i^2 - 2\sigma_{i+1}^2} \ \ \text{if}\ \ \sigma_{i+1}^2 \le \frac{\sigma_i^2}{3}, \qquad \rho(i) = 1 \ \ \text{otherwise}. \qquad (69)$$
The algorithm for detecting the structure of a block-oriented nonlinear system is then described in Algorithm 2:
Algorithm 2: Structure identification of a block-oriented nonlinear system.
Given the coefficients $h^{(p)}_{i_1, i_2, \ldots, i_p}$ of the $p$th-order kernel of the Volterra model associated with the block-oriented nonlinear system with memory $M_v$:
  • Generate the impulse response $\mathbf{f}$ of an FIR filter of order $M_f$, with nonzero random coefficients.
  • Form the tensor $\bar{\mathcal{H}}^{(p)}$ of the augmented system by filtering the Volterra kernel as
    $$\bar{h}^{(p)}_{i_1, i_2, \ldots, i_p} = \sum_{i=0}^{M_f - 1} f_i\, h^{(p)}_{i_1 - i,\, i_2 - i,\, \ldots,\, i_p - i}.$$
  • Compute the singular values $\sigma_i$ of the matrix unfolding $\bar{\mathbf{H}}_p$.
  • Compute the rank $r$ of $\bar{\mathbf{H}}_p$ as the smallest integer $k$ such that
    $$\sum_{i=1}^{k-1} \sigma_i < \epsilon \sum_{i=1}^{M_v + M_f - 1} \sigma_i \le \sum_{i=1}^{k} \sigma_i,$$
    where $\epsilon$ is a constant close to 1.
  • If $r = \bar{M}_v = M_v + M_f - 1$, test whether $\bar{\mathcal{H}}^{(p)}$ is diagonal. If it is, conclude that the system has a Hammerstein structure.
  • If $r < M_v + M_f - 1$, compute the rank $r$ using (69). Then:
    (a) If $r = M_f$, the system has a Wiener structure.
    (b) If $M_f < r < M_v + M_f - 1$, the system has a Wiener–Hammerstein structure whose first linear block is of order $M_l = M_v + M_f - r$, while the second linear block is of order $M_g = r - M_f + 1$.
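The following Python sketch mirrors the main steps of Algorithm 2 for $p = 3$, under simplifying assumptions: the exact kernel is available, the energy threshold and the diagonality test are illustrative choices rather than the tuning recommended in [51], and the refined rank rule (69) is not implemented.

```python
# Rough sketch of Algorithm 2 (p = 3): filter the kernel, unfold it along the p-th
# mode, and decide the structure from its rank. Thresholds are illustrative only.
import numpy as np

def detect_structure(H, f, eps=0.999, diag_ratio=0.99):
    p = H.ndim
    M_f = len(f)
    Mbar = H.shape[0] + M_f - 1
    # Filter the kernel: hbar_{i1..ip} = sum_i f_i h_{i1-i,...,ip-i}
    Hbar = np.zeros((Mbar,) * p)
    for i, fi in enumerate(f):
        sl = tuple(slice(i, i + H.shape[0]) for _ in range(p))
        Hbar[sl] += fi * H
    # Singular values of the p-th mode unfolding
    Hp = np.moveaxis(Hbar, -1, 0).reshape(Mbar, -1)
    s = np.linalg.svd(Hp, compute_uv=False)
    csum = np.cumsum(s)
    r = int(np.searchsorted(csum, eps * csum[-1])) + 1   # smallest k capturing eps of the energy
    if r == Mbar:
        diag_mass = sum(abs(Hbar[(i,) * p]) for i in range(Mbar))
        if diag_mass > diag_ratio * np.abs(Hbar).sum():
            return "Hammerstein"
    if r == M_f:
        return "Wiener"
    return "Wiener-Hammerstein"
```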

6. Tensor-Based Approaches for Multilinear Systems

In Section 6.1 and Section 6.2, we introduce the notions of tensor system and memoryless discrete-time tensor-input tensor-output (TITO) system, respectively. Then, in Section 6.3, we consider a tensor-input single-output (TISO) system whose transfer is a rank-one $N$th-order tensor, which leads to a multilinear system whose $N$ vector factors represent the IRs of the subsystems associated with the $N$ modes of the input tensor. In Section 6.4, we present the weighted least-squares (WLS) algorithm for estimating the system transfer tensor of a multilinear system from input–output (I/O) data. A closed-form solution is also proposed for estimating the individual IR of each subsystem.

6.1. Tensor Systems

In this section, we introduce the notion of tensor system using the Einstein product [55]. In Table 12, we present two examples of tensor systems: one is linear in the unknown tensor variable X , while the other one is bilinear in the unknown tensor variables ( X , Z ) .
Example 2.
To illustrate the notion of tensor system, let us consider the following equation:
$$\mathbf{Y} = \mathcal{A} \ast_2 \mathbf{X}. \qquad (70)$$
This equation can be associated with the map $f : \mathbb{R}^{K \times L} \ni \mathbf{X} \mapsto f(\mathbf{X}) = \mathbf{Y} \in \mathbb{R}^{I \times J}$ such that
$$y_{i,j} = \sum_{k=1}^{K} \sum_{l=1}^{L} a_{i,j,k,l}\, x_{k,l},$$
with the associated fourth-order tensor $\mathcal{A} \in \mathbb{R}^{I \times J \times K \times L}$.
Equation (70) can be solved with respect to the unknown matrix $\mathbf{X}$ by minimizing the LS criterion $\min_{\mathbf{X}} \|\mathbf{Y} - \mathcal{A} \ast_2 \mathbf{X}\|_F^2$. This minimization is carried out after vectorizing Equation (70) using the unfolding $\mathbf{A}_{IJ \times KL}$ of the tensor $\mathcal{A}$ and the vectorized forms $\mathbf{x}_{KL}$ and $\mathbf{y}_{IJ}$ of the matrices $\mathbf{X}$ and $\mathbf{Y}$, which leads to a standard system of linear equations with coefficient matrix $\mathbf{A}_{IJ \times KL}$. The LS criterion then becomes
$$\min_{\mathbf{x}_{KL}} \big\|\mathbf{y}_{IJ} - \mathbf{A}_{IJ \times KL}\, \mathbf{x}_{KL}\big\|_2^2.$$
Minimizing this criterion with respect to the unknown vector $\mathbf{x}_{KL}$ gives the following normal equations:
$$\big(\mathbf{A}_{IJ \times KL}^T \mathbf{A}_{IJ \times KL}\big)\, \hat{\mathbf{x}}_{KL} = \mathbf{A}_{IJ \times KL}^T\, \mathbf{y}_{IJ} \;\;\Longrightarrow\;\; \hat{\mathbf{x}}_{KL} = \big(\mathbf{A}_{IJ \times KL}^T \mathbf{A}_{IJ \times KL}\big)^{-1} \mathbf{A}_{IJ \times KL}^T\, \mathbf{y}_{IJ},$$
if the matrix $\mathbf{A}_{IJ \times KL}^T \mathbf{A}_{IJ \times KL}$ is invertible, i.e., if $\mathbf{A}_{IJ \times KL}$ has full column rank, which implies the necessary but not sufficient condition $IJ \geq KL$.
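A compact numerical illustration of this solution is sketched below with synthetic data (random $\mathcal{A}$ and $\mathbf{X}$, which are assumptions for the example): the tensor is unfolded into $\mathbf{A}_{IJ \times KL}$ and the vectorized system is solved in the LS sense.

```python
# Sketch: LS solution of Y = A *_2 X via the unfolding A_{IJ x KL} (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
I, J, K, L = 4, 5, 3, 2
A = rng.standard_normal((I, J, K, L))
X_true = rng.standard_normal((K, L))
Y = np.einsum('ijkl,kl->ij', A, X_true)        # Y = A *_2 X (Einstein product)

A_unf = A.reshape(I * J, K * L)                # unfolding A_{IJ x KL} (row-major vec)
y_vec = Y.reshape(I * J)
x_hat, *_ = np.linalg.lstsq(A_unf, y_vec, rcond=None)
X_hat = x_hat.reshape(K, L)
print(np.allclose(X_hat, X_true))              # True when A_unf has full column rank
```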
Remark 3.
Let us consider the inner product of the $N$th-order real tensors $\mathcal{A} \in \mathbb{R}^{\underline{I}_N}$ and $\mathcal{X} \in \mathbb{R}^{\underline{I}_N}$:
$$\langle \mathcal{A}, \mathcal{X} \rangle = \mathcal{A} \ast_N \mathcal{X} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} a_{i_1, \ldots, i_N}\, x_{i_1, \ldots, i_N} = a_{\underline{i}_N}\, x_{\underline{i}_N}. \qquad (76)$$
Assuming $\mathcal{X}$ is rank-one, i.e.,
$$\mathcal{X} = \mathop{\circ}_{n=1}^{N} \mathbf{x}^{(n)} \;\Longleftrightarrow\; x_{\underline{i}_N} = \prod_{n=1}^{N} x^{(n)}_{i_n},$$
Equation (76) becomes
$$\mathcal{A} \ast_N \mathcal{X} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} a_{i_1, \ldots, i_N} \prod_{n=1}^{N} x^{(n)}_{i_n} = \mathcal{A} \times_1 \big(\mathbf{x}^{(1)}\big)^T \cdots \times_N \big(\mathbf{x}^{(N)}\big)^T = \mathcal{A} \times_{n=1}^{N} \big(\mathbf{x}^{(n)}\big)^T, \qquad (78)$$
and we obtain a homogeneous multivariate polynomial of degree $N$ in the components of the $N$ vector factors $\mathbf{x}^{(n)}$, $n = 1, \ldots, N$.
If we assume that $\mathcal{A}$ also has rank one, i.e., $\mathcal{A} = \mathop{\circ}_{n=1}^{N} \mathbf{a}^{(n)}$, Equation (76) can be written as
$$\mathcal{A} \ast_N \mathcal{X} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} \prod_{n=1}^{N} a^{(n)}_{i_n} x^{(n)}_{i_n} = \prod_{n=1}^{N} \sum_{i_n=1}^{I_n} a^{(n)}_{i_n} x^{(n)}_{i_n} = \prod_{n=1}^{N} \big(\mathbf{a}^{(n)}\big)^T \mathbf{x}^{(n)}.$$
In conclusion, when $\mathcal{A}$ has rank one, the multivariate polynomial (78) is equal to the product of $N$ linear forms, each linear form being a univariate polynomial in the components $x^{(n)}_{i_n}$ of the vector $\mathbf{x}^{(n)}$.
If $\mathcal{A}$ satisfies a rank-$R$ PARAFAC decomposition (see Table 10), i.e., $\mathcal{A} = \sum_{r=1}^{R} \mathop{\circ}_{n=1}^{N} \mathbf{A}^{(n)}_{.r}$, with $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times R}$, Equation (78) becomes
$$\mathcal{A} \ast_N \mathcal{X} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} \sum_{r=1}^{R} \prod_{n=1}^{N} a^{(n)}_{i_n, r} \prod_{n=1}^{N} x^{(n)}_{i_n} = \sum_{r=1}^{R} \Big( \sum_{i_1=1}^{I_1} a^{(1)}_{i_1, r}\, x^{(1)}_{i_1} \Big) \cdots \Big( \sum_{i_N=1}^{I_N} a^{(N)}_{i_N, r}\, x^{(N)}_{i_N} \Big) = \sum_{r=1}^{R} \big(\mathbf{A}^{(1)}_{.r}\big)^T \mathbf{x}^{(1)} \cdots \big(\mathbf{A}^{(N)}_{.r}\big)^T \mathbf{x}^{(N)}.$$
In this case, we obtain a sum of products of $N$ linear forms, to be compared with the Volterra–PARAFAC model (19), which corresponds to the particular case where $\mathbf{x}^{(n)} = \mathbf{u}(t)$ and $\mathbf{A}^{(n)}_{.r} = \mathbf{A}^{(p)}_{.r}$, for $n = 1, \ldots, N$. This last constraint results from the symmetry assumption on the Volterra kernel.
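A quick numerical check of this identity with synthetic data (random factor matrices and vector factors, assumed only for the example) is sketched below.

```python
# Sketch: for a PARAFAC tensor A and a rank-one X, the inner product A *_N X equals
# the sum over r of products of the linear forms (A_{.r}^(n))^T x^(n).
import numpy as np
from functools import reduce

rng = np.random.default_rng(3)
N, R, dims = 3, 4, (3, 4, 5)
A_fac = [rng.standard_normal((I, R)) for I in dims]       # factor matrices A^(n)
x = [rng.standard_normal(I) for I in dims]                # vector factors x^(n)

A = np.einsum('ir,jr,kr->ijk', *A_fac)                    # rank-R PARAFAC tensor
X = reduce(np.multiply.outer, x)                          # rank-one tensor
lhs = np.sum(A * X)                                       # inner product A *_N X
rhs = np.sum(np.prod([Af.T @ xv for Af, xv in zip(A_fac, x)], axis=0))
print(np.isclose(lhs, rhs))                               # True
```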

6.2. Discrete-Time Memoryless Tensor-Input Tensor-Output Systems

Assuming system input and model output data are contained in two tensors $\mathcal{X}(t) \in \mathbb{R}^{\underline{J}_N}$ and $\hat{\mathcal{Y}}(t) \in \mathbb{R}^{\underline{I}_P}$, which depend on time $t \in [1, T]$, we define a discrete-time memoryless tensor-input tensor-output (TITO) model by means of the following I/O relationship:
$$\hat{\mathcal{Y}}(t) = \mathcal{A} \ast_N \mathcal{X}(t), \qquad (82)$$
where $\mathcal{X}(t)$ and $\hat{\mathcal{Y}}(t)$ are the tensors of system input and model output signals of the TITO system at the time instant $t$, and $\mathcal{A} \in \mathbb{R}^{\underline{I}_P \times \underline{J}_N}$ is the system transfer tensor. Using the index convention, Equation (82) can be written in scalar form as
$$\hat{y}_{\underline{i}_P}(t) \equiv \hat{y}_{i_1, \ldots, i_P}(t) = a_{\underline{i}_P,\, \underline{j}_N}\, x_{\underline{j}_N}(t). \qquad (83)$$
This equation is associated with the following map:
$$\mathbb{R}^{\underline{J}_N} \ni \mathcal{X}(t) \;\longmapsto\; f(\mathcal{X}(t)) = \hat{\mathcal{Y}}(t) \in \mathbb{R}^{\underline{I}_P},$$
with the associated $(P+N)$th-order tensor $\mathcal{A} \in \mathbb{R}^{\underline{I}_P \times \underline{J}_N}$.
Considering measurements of the I/O signals during the time interval $[1, T]$, the sets of input and output signals are concatenated along the time mode to form the matrix unfoldings $\mathbf{X}_{\Pi J_N \times T} \in \mathbb{R}^{\Pi J_N \times T}$ and $\hat{\mathbf{Y}}_{\Pi I_P \times T} \in \mathbb{R}^{\Pi I_P \times T}$ of the tensors $\mathcal{X}(T) \in \mathbb{R}^{\underline{J}_N \times T}$ and $\hat{\mathcal{Y}}(T) \in \mathbb{R}^{\underline{I}_P \times T}$, respectively, with $\Pi I_P$ and $\Pi J_N$ defined as in Table 3. The I/O relationship of the TITO model can then be written in the following matrix form:
$$\hat{\mathbf{Y}}_{\Pi I_P \times T} = \mathbf{A}_{\Pi I_P \times \Pi J_N}\, \mathbf{X}_{\Pi J_N \times T}. \qquad (85)$$
Let us assume that the model output (83) is corrupted by a zero-mean additive white Gaussian noise (AWGN) $e_{\underline{i}_P}(t)$, such that the measured noisy output signal is given by
$$y_{\underline{i}_P}(t) = \hat{y}_{\underline{i}_P}(t) + e_{\underline{i}_P}(t) = a_{\underline{i}_P,\, \underline{j}_N}\, x_{\underline{j}_N}(t) + e_{\underline{i}_P}(t).$$
Transposing both members of Equation (85) gives
$$\mathbf{Y}_{T \times \Pi I_P} = \mathbf{X}_{T \times \Pi J_N}\, \mathbf{A}_{\Pi J_N \times \Pi I_P} + \mathbf{E}_{T \times \Pi I_P}.$$
From this equation, it is easy to derive the LS estimate of the matrix unfolding $\mathbf{A}_{\Pi J_N \times \Pi I_P}$ of the system transfer tensor, which minimizes the sum of the squared errors between the model outputs and the noisy system output measurements:
$$\min_{\mathbf{A}_{\Pi J_N \times \Pi I_P}} \|\mathbf{E}_{T \times \Pi I_P}\|_F^2 = \sum_{t=1}^{T} \sum_{i_1=1}^{I_1} \cdots \sum_{i_P=1}^{I_P} e^2_{\underline{i}_P}(t) = \min_{\mathbf{A}_{\Pi J_N \times \Pi I_P}} \big\|\mathbf{Y}_{T \times \Pi I_P} - \mathbf{X}_{T \times \Pi J_N}\, \mathbf{A}_{\Pi J_N \times \Pi I_P}\big\|_F^2 \;\Longrightarrow\; \hat{\mathbf{A}}_{\Pi J_N \times \Pi I_P} = \big[\mathbf{X}_{T \times \Pi J_N}\big]^{\dagger}\, \mathbf{Y}_{T \times \Pi I_P}.$$
To ensure the uniqueness of this LS solution, the matrix $\mathbf{X}_{T \times \Pi J_N}$ must have full column rank, which implies the necessary condition $T \geq \Pi J_N$; i.e., the number $T$ of measurement instants must be greater than or equal to the number $\Pi J_N$ of input signal samples at each time instant $t$.
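The sketch below illustrates this LS estimation on synthetic data (all dimensions, signals, and the noise level are assumptions for the example): the input and output tensors are unfolded along the time mode and the unfolding of $\mathcal{A}$ is recovered with a pseudo-inverse.

```python
# Sketch: LS estimation of the TITO transfer tensor from synthetic I/O data.
import numpy as np

rng = np.random.default_rng(4)
J, I = (3, 4), (2, 2)                          # input dims J_1..J_N, output dims I_1..I_P
T = 50                                         # T >= prod(J) is required for uniqueness
A = rng.standard_normal(I + J)                 # transfer tensor, shape I_1 x I_2 x J_1 x J_2

X = rng.standard_normal((T,) + J)              # inputs X(t), t = 1..T
Y = np.einsum('ijkl,tkl->tij', A, X)           # noiseless outputs Y(t) = A *_N X(t)
Y += 0.01 * rng.standard_normal(Y.shape)       # AWGN

X_unf = X.reshape(T, -1)                       # X_{T x Pi J_N}
Y_unf = Y.reshape(T, -1)                       # Y_{T x Pi I_P}
A_hat_unf = np.linalg.pinv(X_unf) @ Y_unf      # LS estimate of A_{Pi J_N x Pi I_P}
A_hat = A_hat_unf.T.reshape(I + J)             # back to tensor form
print(np.linalg.norm(A_hat - A) / np.linalg.norm(A))
```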

6.3. Multilinear TISO Systems

In the case of a memoryless tensor-input single-output (TISO) system, let us assume that the system transfer is a rank-one tensor $\mathcal{A} \in \mathbb{R}^{\underline{J}_N}$ written as
$$\mathcal{A} = \mathop{\circ}_{n=1}^{N} \mathbf{h}^{(n)} \;\Longleftrightarrow\; a_{\underline{j}_N} = \prod_{n=1}^{N} h^{(n)}_{j_n},$$
with $\mathbf{h}^{(n)} \in \mathbb{R}^{J_n}$, for $n = 1, \ldots, N$. The model output (82) is then given by
$$\hat{y}(t) = \Big( \mathop{\circ}_{n=1}^{N} \mathbf{h}^{(n)} \Big) \ast_N \mathcal{X}(t) = \sum_{j_1=1}^{J_1} \cdots \sum_{j_N=1}^{J_N} \prod_{n=1}^{N} h^{(n)}_{j_n}\, x_{\underline{j}_N}(t) = \sum_{j_1=1}^{J_1} h^{(1)}_{j_1} \cdots \sum_{j_N=1}^{J_N} h^{(N)}_{j_N}\, x_{j_1, \ldots, j_N}(t),$$
or, using the index convention,
$$\hat{y}(t) = \prod_{n=1}^{N} h^{(n)}_{j_n}\, x_{\underline{j}_N}(t),$$
or, equivalently,
$$\hat{y}(t) = \mathcal{X}(t) \times_1 \big(\mathbf{h}^{(1)}\big)^T \cdots \times_N \big(\mathbf{h}^{(N)}\big)^T = \mathcal{X}(t) \times_{n=1}^{N} \big(\mathbf{h}^{(n)}\big)^T.$$
The resulting system output is multilinear ($N$-linear) with respect to the vector factors $\mathbf{h}^{(n)}$. Each vector can be interpreted as the impulse response (IR), of length $J_n$, of the subsystem associated with the $n$th mode of the input tensor $\mathcal{X}(t)$. The output signal $\hat{y}(t)$ is therefore a multilinear form in the $N$ individual IR vectors (see Table 7).
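The successive mode-$n$ contractions above, and their equivalence with the vectorized form used in the next subsection, can be checked with a few lines of NumPy; the impulse responses and the input tensor are random assumptions made only for the example.

```python
# Sketch: TISO output via successive mode-n contractions, checked against
# the global form y = vec(X(t))^T (h^(1) kron ... kron h^(N)).
import numpy as np
from functools import reduce

rng = np.random.default_rng(5)
J = (3, 4, 2)                                  # J_1, J_2, J_3
h = [rng.standard_normal(Jn) for Jn in J]      # individual IRs h^(n)
Xt = rng.standard_normal(J)                    # input tensor X(t) at one time instant

y = Xt
for hn in h:                                   # y = X(t) x_1 h^(1)T ... x_N h^(N)T
    y = np.tensordot(y, hn, axes=([0], [0]))   # contract the current first mode
y_hat = float(y)

g = reduce(np.kron, h)                         # global impulse response (lexicographic order)
print(np.isclose(y_hat, Xt.reshape(-1) @ g))   # True
```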

6.4. Estimation of the System Transfer Tensor from I/O Data

Let us define the lexicographical vectorization $\mathbf{u}(t) \equiv \mathrm{vec}(\mathcal{X}(t)) \in \mathbb{R}^{\Pi J_N}$, such that
$$u_{\overline{j_1 \cdots j_N}}(t) = x_{j_1, \ldots, j_N}(t),$$
and the (multilinear) global impulse response (GIR) $\mathbf{h}$ as the vectorized form of the system transfer tensor $\mathcal{A}$:
$$\mathbf{h} = \mathrm{vec}\Big( \mathop{\circ}_{n=1}^{N} \mathbf{h}^{(n)} \Big) = \mathop{\otimes}_{n=1}^{N} \mathbf{h}^{(n)} = \mathbf{h}^{(1)} \otimes \mathbf{h}^{(2)} \otimes \cdots \otimes \mathbf{h}^{(N)} \in \mathbb{R}^{\Pi J_N}.$$
The output of the multilinear model can then be rewritten as
$$\hat{y}(t) = \mathbf{u}^T(t)\, \mathbf{h}.$$
Considering noisy output measurements over the time interval $[1, T]$, the noisy output vector $\mathbf{y}(T) \in \mathbb{R}^T$ is given by
$$\mathbf{y}(T) \equiv \begin{bmatrix} y(1) \\ \vdots \\ y(T) \end{bmatrix} = \begin{bmatrix} \mathbf{u}^T(1) \\ \vdots \\ \mathbf{u}^T(T) \end{bmatrix} \mathbf{h} + \begin{bmatrix} e(1) \\ \vdots \\ e(T) \end{bmatrix} \equiv \mathbf{U}(T)\, \mathbf{h} + \mathbf{e}(T),$$
where $e(t)$ is a zero-mean AWGN at the time instant $t$, and $\mathbf{U}(T) \in \mathbb{R}^{T \times \Pi J_N}$.
We now determine the weighted least-squares (WLS) estimate of the vectorized form $\mathbf{h}$ of the GIR tensor, which minimizes the cost function $\min_{\mathbf{h}} \|\mathbf{e}(T)\|_{\mathbf{W}}^2$, with
$$\|\mathbf{e}(T)\|_{\mathbf{W}}^2 \equiv \mathbf{e}^T(T)\, \mathbf{W}\, \mathbf{e}(T) = \sum_{t=1}^{T} w_t\, e^2(t) = \|\mathbf{y}(T) - \mathbf{U}(T)\, \mathbf{h}\|_{\mathbf{W}}^2, \qquad (97)$$
where $\mathbf{W} \equiv \mathrm{diag}(w_1, \ldots, w_T)$ is a diagonal weighting matrix, with $w_t > 0$ for all $t \in [1, T]$. The WLS criterion (97) can be developed as
$$\|\mathbf{e}(T)\|_{\mathbf{W}}^2 = \|\mathbf{y}(T)\|_{\mathbf{W}}^2 - 2\, \mathbf{h}^T \mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{y}(T) + \mathbf{h}^T \mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{U}(T)\, \mathbf{h}.$$
It is a quadratic cost function with respect to the unknown parameter vector $\mathbf{h}$. The Hessian $2\, \mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{U}(T)$ being a positive semidefinite matrix, this criterion has a global minimum, obtained by canceling its gradient with respect to $\mathbf{h}$, which gives
$$\mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{U}(T)\, \hat{\mathbf{h}}(T) = \mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{y}(T).$$
Assuming the matrix $\mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{U}(T)$ is nonsingular, the WLS estimate of $\mathbf{h}$ is given by
$$\hat{\mathbf{h}}(T) = \big[\mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{U}(T)\big]^{-1} \mathbf{U}^T(T)\, \mathbf{W}\, \mathbf{y}(T).$$
As the diagonal weighting matrix $\mathbf{W}$ is positive definite, a condition ensuring the uniqueness of the WLS estimate is that $\mathbf{U}(T)$ be full column rank, which implies the necessary condition $T \geq \Pi J_N$. When the weighting matrix is chosen as the identity matrix, we obtain the standard LS estimate of the GIR, given by
$$\hat{\mathbf{h}}(T) = \mathbf{U}^{\dagger}(T)\, \mathbf{y}(T).$$
In [56], an iterative Wiener filter and LMS-based algorithms are proposed to identify multilinear systems as described in (91).
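A small synthetic sketch of the WLS estimate is given below; the dimensions, signals, and the exponential-forgetting choice of the weights $w_t$ are assumptions made only for illustration.

```python
# Sketch: WLS estimation of the global impulse response (GIR) from synthetic data.
import numpy as np
from functools import reduce

rng = np.random.default_rng(6)
J = (3, 2, 2)
T = 60                                         # must satisfy T >= prod(J)
h_true = [rng.standard_normal(Jn) for Jn in J]
h_glob = reduce(np.kron, h_true)               # GIR h = h^(1) kron ... kron h^(N)

U = rng.standard_normal((T, int(np.prod(J))))  # rows u^T(t) = vec(X(t))^T
y = U @ h_glob + 0.05 * rng.standard_normal(T) # noisy outputs

w = 0.98 ** np.arange(T - 1, -1, -1)           # illustrative weights w_t > 0
W = np.diag(w)
h_hat = np.linalg.solve(U.T @ W @ U, U.T @ W @ y)   # WLS normal equations
print(np.linalg.norm(h_hat - h_glob) / np.linalg.norm(h_glob))
```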
Tensorizing the GIR vector estimate $\hat{\mathbf{h}}$ as an $N$th-order rank-one tensor $\hat{\mathcal{H}} \in \mathbb{R}^{\underline{J}_N}$, an estimate $\hat{\mathbf{h}}^{(n)}$ of each individual IR $\mathbf{h}^{(n)}$ can be obtained by using the higher-order singular value decomposition (HOSVD) of $\hat{\mathcal{H}}$, i.e., by computing the left singular vector associated with the largest singular value of the matrix unfolding $\hat{\mathbf{H}}_{J_n \times J_1 \cdots J_{n-1} J_{n+1} \cdots J_N}$. For more details concerning the HOSVD-based estimation of the matrix or vector factors of a multiple Kronecker product, the reader is referred to [70,81]. Uniqueness of the individual estimates $\hat{\mathbf{h}}^{(n)}$ is ensured by assuming that the first coefficient satisfies $h^{(n)}_1 = 1$ for $n = 1, \ldots, N$.
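A minimal sketch of this extraction step is given below; the reshaping order and the normalization by the first coefficient follow the conventions stated above, and the function name is an assumption introduced for the example.

```python
# Sketch: rank-one extraction of the individual IRs h^(n) from the estimated GIR,
# via the dominant left singular vector of each mode-n unfolding.
import numpy as np

def extract_factors(h_hat, J):
    H = h_hat.reshape(J)                       # GIR tensor, shape J_1 x ... x J_N
    factors = []
    for n in range(len(J)):
        Hn = np.moveaxis(H, n, 0).reshape(J[n], -1)     # mode-n unfolding
        u, s, vt = np.linalg.svd(Hn, full_matrices=False)
        hn = u[:, 0]
        factors.append(hn / hn[0])             # normalization h_1^(n) = 1
    return factors

# Usage with the WLS estimate h_hat from the previous sketch:
# h_n_hat = extract_factors(h_hat, J)
```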

7. Conclusions and Perspectives

The aim of this paper has been to outline the links between tensors and nonlinear and multilinear systems. In the case of NL systems, the focus has been placed on Volterra models, with the objective of reducing their parametric complexity using a PARAFAC decomposition of the symmetrized kernels or their expansion on generalized orthogonal basis functions. The EKF algorithm has been proposed to estimate the parameters of a Volterra–PARAFAC model. Then, three block-oriented nonlinear systems have been represented by means of associated Volterra models in the form of structured tensor decompositions. It has been shown how this equivalent tensor representation can be exploited to identify the structure of a block-oriented system; it can also be used for parameter estimation of such a system. As a perspective of these results, it would be interesting to compare the different NL models considered, both in terms of parametric complexity and quality of modeling via parameter estimation, on a given benchmark.
For multilinear systems, a new class of systems called tensor-input tensor-output (TITO) systems has been introduced using the Einstein product of tensors. The case of a TISO system has been studied in more detail, assuming that the transfer tensor has rank one. The WLS algorithm has been derived for estimating the multilinear global impulse response (GIR) associated with the vectorized form of the system transfer tensor. A closed-form HOSVD-based solution has been proposed to estimate the individual impulse response of each subsystem from the estimated GIR. Another line of research will be to consider a sparse input data tensor modeled using different tensor models and to apply tensor completion methods to reconstruct the missing data.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/a16090443/s1: Matlab code for the simulation results in Figures 2–5.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear analysis of image ensembles: TensorFaces. In Proceedings of the European Conference on Computer Vision (ECCV 2002), Copenhagen, Denmark, 28–31 May 2002; pp. 447–460. [Google Scholar]
  2. Lu, H.; Plataniotis, K.N.; Venetsanopoulos, A.N. MPCA: Multilinear principal component analysis of tensor objects. IEEE Trans. Neural Netw. 2008, 19, 18–39. [Google Scholar]
  3. Raimondi, F.; Cabral Farias, R.; Michel, O.; Comon, P. Wideband multiple diversity tensor array processing. IEEE Trans. Signal Process. 2017, 65, 5334–5346. [Google Scholar] [CrossRef]
  4. Ji, Y.; Wang, Q.; Li, X.; Liu, J. A Survey on tensor techniques and applications in machine learning. IEEE Access 2019, 7, 162950. [Google Scholar] [CrossRef]
  5. Frolov, E.; Oseledets, I. Tensor methods and recommender systems. WIREs Data Mining Knowl. Discov. 2017, 7, e1201. [Google Scholar] [CrossRef]
  6. Padhy, S.; Goovaerts, G.; Boussé, M.; De Lathauwer, L.; Van Huffel, S. The Power of Tensor-Based Approaches in Cardiac Applications. In Biomedical Signal Processing. Advances in Theory, Algorithms and Applications; Naik, G., Ed.; Springer: Singapore, 2019. [Google Scholar]
  7. Wang, R.; Li, S.; Cheng, L.; Wong, M.H.; Leung, K.S. Predicting associations among drugs, targets and diseases by tensor decomposition for drug repositioning. BMC Bioinform. 2019, 26, 628. [Google Scholar] [CrossRef] [PubMed]
  8. Favier, G.; Sousa Rocha, D. Overview of tensor-based cooperative MIMO communication systems—Part 1: Tensor modeling. Entropy 2023, 25, 1181. [Google Scholar] [CrossRef]
  9. Cichocki, A. Era of big data processing: A new approach via tensor networks and tensor decompositions. arXiv 2014, arXiv:1403.2048v4. [Google Scholar]
  10. Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 1927, 6, 164–189. [Google Scholar] [CrossRef]
  11. Cattell, R. Parallel proportional profiles and other principles for determining the choice of factors by rotation. Psychometrika 1944, 9, 267–283. [Google Scholar] [CrossRef]
  12. Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311. [Google Scholar] [CrossRef] [PubMed]
  13. Harshman, R.A. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. UCLA Work. Pap. Phon. 1970, 16, 1–84. [Google Scholar]
  14. Bro, R. PARAFAC. Tutorial and applications. Chemom. Intell. Lab. Syst. 1997, 38, 149–171. [Google Scholar] [CrossRef]
  15. Morup, M. Applications of tensor (multiway array) factorizations and decompositions in data mining. WIREs Data Min. Knowl. Discov. 2011, 1, 20–40. [Google Scholar] [CrossRef]
  16. Cichocki, A.; Lee, N.; Oseledets, I.; Phan, A.H.; Zhao, Q.; Mandic, D.P. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 Low-rank tensor decompositions. Found. Trends Mach. Learn. 2016, 9, 249–429. [Google Scholar] [CrossRef]
  17. Acar, E.; Bro, R.; Smilde, A. Data fusion in metabolomics using coupled matrix and tensor factorizations. Proc. IEEE 2015, 103, 1602–1620. [Google Scholar] [CrossRef]
  18. Gandy, S.; Recht, B.; Yamada, I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 2011, 27, 025010. [Google Scholar] [CrossRef]
  19. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220. [Google Scholar] [CrossRef]
  20. Favier, G. From Algebraic Structures to Tensors; Wiley: Hoboken, NJ, USA, 2019; Volume 1. [Google Scholar]
  21. Legendre, A.M. Appendice: Sur la Méthode des Moindres Quarrés, in “Nouvelles Méthodes Pour la Détermination des Orbites des Comètes”; Firmin-Didot: Paris, France, 1805; pp. 72–80. [Google Scholar]
  22. Kalman, R.E. A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 1960, 82D, 34–45. [Google Scholar] [CrossRef]
  23. Jazwinski, A.H. Stochastic Processes and Filtering Theory; Academic Press: Cambridge, MA, USA, 1970. [Google Scholar]
  24. Rogers, M.; Li, L.; Russell, S.J. Multilinear dynamical systems for tensor time series. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2634–2642. [Google Scholar]
  25. Chen, C.; Surana, A.; Bloch, A.; Rajapakse, I. Multilinear control systems theory. SIAM J. Control Optim. 2021, 5, 749–776. [Google Scholar] [CrossRef]
  26. Boyd, S.; Chua, L. Fading memory and the problem of approximating nonlinear operators with Volterra series. IEEE Trans. Circuits Syst. 1985, 32, 1150–1161. [Google Scholar] [CrossRef]
  27. Schetzen, M. The Volterra and Wiener Theories of Nonlinear Systems; John Wiley & Sons: New York, NY, USA, 1980. [Google Scholar]
  28. Mathews, V.; Sicuranza, G. Polynomial Signal Processing; John Wiley & Sons: New York, NY, USA, 2000. [Google Scholar]
  29. Doyle III, F.; Pearson, R.; Ogunnaike, B. Identification and Control Using Volterra Models; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
  30. Fernando, X.N.; Sesay, A.B. Adaptive asymmetric linearization of radio over fiber links for wireless access. IEEE Trans. Veh. Technol. 2002, 51, 1576–1586. [Google Scholar] [CrossRef]
  31. He, J.; Lee, J.; Kandeepan, S.; Wang, K. Machine learning techniques in radio-over-fiber systems and networks. Photonics 2020, 7, 105. [Google Scholar] [CrossRef]
  32. Benedetto, S.; Biglieri, E.; Daffara, S. Modeling and performance evaluation of nonlinear satellite links—A Volterra series approach. IEEE Trans. Aerosp. Electron. Syst. 1979, AES-15, 494–507. [Google Scholar] [CrossRef]
  33. Cheng, C.H.; Powers, E.J. Optimal Volterra kernel estimation algorithms for a nonlinear communication system for PSK and QAM inputs. IEEE Trans. Signal Process. 2001, 49, 147–163. [Google Scholar] [CrossRef]
  34. Marmarelis, V. Nonlinear Dynamic Modeling of Physiological Systems; Wiley-IEEE Press: Hoboken, NJ, USA, 2004. [Google Scholar]
  35. Kerschen, G.; Worden, K.; Vakakis, A.; Golinval, J. Past, present and future of nonlinear system identification in structural dynamics. Mech. Syst. Signal Process. 2006, 20, 505–592. [Google Scholar] [CrossRef]
  36. Azpicueta, L.; Zeller, M.; Figueiras-Vidal, A.; Kellerman, W.; Arenas-Garcia, J. Enhanced adaptive Volterra filtering by automatic attenuation of memory regions and its application to acoustic echo cancellation. IEEE Trans. Signal Process. 2013, 61, 2745–2750. [Google Scholar] [CrossRef]
  37. Campello, R.J.; Favier, G.; Amaral, W.C. Optimal expansions of discrete-time Volterra models using Laguerre functions. Automatica 2004, 42, 815–822. [Google Scholar] [CrossRef]
  38. Kibangou, A.; Favier, G.; Hassani, M.M. Selection of generalized orthonormal bases for second order Volterra filters. Signal Process. 2005, 85, 2371–2385. [Google Scholar] [CrossRef]
  39. da Rosa, A.; Campello, R.; Amaral, W. Choice of free parameters in expansions of discrete-time Volterra models using Kautz functions. Automatica 2007, 43, 1084–1091. [Google Scholar] [CrossRef]
  40. Favier, G.; Bouilloc, T. Identification de modèles de Volterra basée sur la décomposition PARAFAC de leurs noyaux et le filtre de Kalman etendu. Traitement du Signal 2010, 27, 27–51. [Google Scholar]
  41. Favier, G.; Kibangou, A.; Bouilloc, T. Nonlinear system modeling and identification using Volterra–PARAFAC models. Int. J. Adapt. Control Signal Process. 2012, 26, 30–53. [Google Scholar] [CrossRef]
  42. Batselier, K.; Chen, Z.; Wong, N. Tensor network alternating linear scheme for MIMO Volterra system identification. Automatica 2017, 84, 26–35. [Google Scholar] [CrossRef]
  43. Crespo-Cadenas, C.; Reina-Tosina, J.; Madero-Ayora, M.J.; Muñoz-Cruzato, J. A new approach to pruning Volterra models for power amplifiers. IEEE Trans. Signal Process. 2010, 58, 2113–2120. [Google Scholar] [CrossRef]
  44. Hunter, I.W.; Korenberg, M.J. The identification of nonlinear biological systems: Wiener and Hammerstein cascade models. Biol. Cybern. 1986, 55, 135–144. [Google Scholar] [CrossRef]
  45. Giri, F.; Bai, E.W. Block-Oriented Nonlinear System Identification; LNCIS; Springer: London, UK, 2010; Volume 404. [Google Scholar]
  46. Pearson, R.K.; Pottmann, M. Gray-box identification of block-oriented nonlinear models. J. Process. Control 2000, 10, 301–315. [Google Scholar] [CrossRef]
  47. Schoukens, J.; Tiels, K. Identification of block-oriented nonlinear systems starting from linear approximations: A survey. Automatica 2017, 85, 272–292. [Google Scholar] [CrossRef]
  48. Favier, G. Nonlinear system modeling and identification using tensor approaches. In Proceedings of the 10th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA’2009), Hammamet, Tunisia, 20–22 December 2009. [Google Scholar]
  49. Kibangou, A.; Favier, G. Wiener-Hammerstein systems modeling using diagonal Volterra kernels coefficients. IEEE Signal Process. Lett. 2006, 13, 381–384. [Google Scholar] [CrossRef]
  50. Kibangou, A.; Favier, G. Identification of parallel-cascade Wiener systems using joint diagonalization of third-order Volterra kernel slices. IEEE Signal Process. Lett. 2009, 16, 188–191. [Google Scholar] [CrossRef]
  51. Kibangou, A.; Favier, G. Tensor analysis-based model structure determination and parameter estimation for block-oriented nonlinear systems. IEEE J. Sel. Top. Signal Process. Spec. Issue Model Order Sel. Signal Process. Syst. 2010, 4, 514–525. [Google Scholar] [CrossRef]
  52. Tseng, C.; Powers, E. Identification of cubic systems using higher order moments of i.i.d. signals. IEEE Trans. Signal Process. 1995, 43, 1733–1735. [Google Scholar] [CrossRef]
  53. Kibangou, A.; Favier, G. Identification of fifth-order Volterra systems using i.i.d. inputs. IET Signal Process. 2010, 4, 30–44. [Google Scholar] [CrossRef]
  54. Kibangou, A.; Favier, G. Matrix and tensor decompositions for identification of block-structured nonlinear channels in digital transmission systems. In Proceedings of the IEEE 9th Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Recife, Brazil, 6–9 July 2008. [Google Scholar]
  55. Brazell, M.; Li, N.; Navasca, C.; Tamon, C. Solving multilinear systems via tensor inversion. SIAM J. Matrix Anal. Appl. 2013, 34, 542–570. [Google Scholar] [CrossRef]
  56. Dogariu, L.M.; Paleologu, C.; Benesty, J.; Ciochina, S. Identification of Multilinear Systems: A Brief Overview. In Principal Component Analysis; IntechOpen: London, UK, 2022. [Google Scholar]
  57. Sage, A.P.; Melsa, J.L. System Identification; Academic Press: Cambridge, MA, USA, 1971. [Google Scholar]
  58. Söderström, T.; Stoica, P. System Identification. Prentice-Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar]
  59. Eykhoff, P. System Identification. Parameter and State Estimation; John Wiley & Sons: Hoboken, NJ, USA, 1974. [Google Scholar]
  60. Goodwin, G.; Payne, R. Dynamic system Identification: Experiment Design and Data Analysis; Academic Press: Cambridge, MA, USA, 1977. [Google Scholar]
  61. Norton, J. An introduction to Identification; Academic Press: Cambridge, MA, USA, 1986. [Google Scholar]
  62. Ljung, L. System Identification: Theory for the User; Prentice-Hall: Hoboken, NJ, USA, 1987. [Google Scholar]
  63. Heuberger, P.; Van den Hof, P.; Wahlberg, B. Modelling and Identification with Rational Orthogonal Basis Functions; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  64. Billings, S.A. Identification of nonlinear systems—A survey. IEE Proc. 1980, 127, 272–285. [Google Scholar] [CrossRef]
  65. Rugh, W.J. Nonlinear System Theory. The Volterra-Wiener Approach; Johns Hopkins University Press: Baltimore, MD, USA, 1981. [Google Scholar]
  66. Haber, R.; Keviczky, L. Nonlinear System Identification. Input-Output Modeling Approach. Vol. 1: Nonlinear System Parameter Identification; Kluwer Academic Publishers: New York, NY, USA, 1999. [Google Scholar]
  67. Giannakis, G.; Serpedin, E. A bibliography on nonlinear system identification. Signal Process. 2001, 81, 533–580. [Google Scholar] [CrossRef]
  68. Nelles, O. Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  69. Schoukens, J.; Ljung, L. Nonlinear system identification. A user-oriented road map. IEEE Control Syst. Mag. 2019, 39, 28–99. [Google Scholar] [CrossRef]
  70. Favier, G. Matrix and Tensor Decompositions in Signal Processing. Vol. 2; Wiley: Hoboken, NJ, USA, 2022. [Google Scholar]
  71. Favier, G.; de Almeida, A.L.F. Overview of constrained PARAFAC models. EURASIP J. Adv. Signal Process. 2014, 5, 1–25. [Google Scholar] [CrossRef]
  72. Ragnarsson, S.; Van Loan, C. Block tensors and symmetric embeddings. Linear Algebra Its Appl. 2013, 438, 853–874. [Google Scholar] [CrossRef]
  73. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Trans. Signal Process. 2015, 32, 145–163. [Google Scholar] [CrossRef]
  74. Sidiropoulos, N.D.; de Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  75. Carroll, J.D.; Chang, J. Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition. Psychometrika 1970, 35, 283–319. [Google Scholar] [CrossRef]
  76. Kiers, H.A.L. Towards a standardized notation and terminology in multiway analysis. J. Chemom. 2000, 14, 105–122. [Google Scholar] [CrossRef]
  77. Comon, P.; Golub, G.; Lim, L.H.; Mourrain, B. Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 2008, 30, 1254–1279. [Google Scholar] [CrossRef]
  78. Kruskal, J.B. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Its Appl. 1977, 18, 95–138. [Google Scholar] [CrossRef]
  79. Sidiropoulos, N.D.; Bro, R. On the uniqueness of multilinear decomposition of N-way arrays. J. Chemom. 2000, 14, 229–239. [Google Scholar] [CrossRef]
  80. Ten Berge, J.M.F.; Smilde, A.K. Non-triviality and identification of a constrained Tucker3 analysis. J. Chemom. 2002, 16, 609–612. [Google Scholar] [CrossRef]
  81. De Lathauwer, L.; de Moor, B.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278. [Google Scholar] [CrossRef]
  82. Smilde, A.K.; Bro, R.; Geladi, P. Multi-Way Analysis. Applications in the Chemical Sciences; Wiley: Chichester, England, 2004. [Google Scholar]
  83. Leontaritis, I.J.; Billings, S.A. Input-output parametric models for non-linear systems. Int. J. Control 1985, 41, 303–344. [Google Scholar] [CrossRef]
  84. Comon, P.; Mourrain, B. Decomposition of quantics in sums of power of linear forms. Signal Process. 1996, 53, 93–107. [Google Scholar] [CrossRef]
  85. Korenberg, M. Parallel cascade identification and kernel estimation for nonlinear systems. Ann. Biomed. Eng. 1991, 19, 429–455. [Google Scholar] [CrossRef]
  86. Ninness, B.; Gustafsson, F. A unifying construction of orthonormal bases for system identification. IEEE Trans. Autom. Control 1997, 42, 515–521. [Google Scholar] [CrossRef]
  87. Liavas, A.; Regalia, P.; Delmas, J. Blind channel approximation: Effective channel order determination. IEEE Trans. Signal Process. 1999, 47, 3336–3344. [Google Scholar] [CrossRef]
Figure 1. Realization of a third-order Volterra–PARAFAC model as Wiener models in parallel.
Figure 2. Top: Original output signal. Bottom: reconstructed output from noisy measurements (SNR = 30 dB).
Figure 3. Evolution of the square error ϵ L ( t ) , L = 100 .
Figure 4. Normalized mean square error in steady state for a sum of sines.
Figure 5. Normalized mean square error in steady state for a random input.
Figure 6. Realization of a third-order Volterra–GOB model.
Figure 7. Block diagram of a Hammerstein model.
Figure 8. Block diagram of a Wiener model.
Figure 9. Block diagram of a Wiener–Hammerstein model.
Table 1. Reduced SVD, PARAFAC/CPD, and TD.
MatricesThird-Order Tensors
X R I × J X R I × J × K
Reduced SVDPARAFAC/CPD
x i , j = r = 1 R σ r u i , r v j , r X = U Σ V T x i , j , k = r = 1 R a i , r b j , r c k , r
U R I × R , V R J × R , Σ R R × R A R I × R , B R J × R , C R K × R
TD
x i , j , k = p = 1 P q = 1 Q s = 1 S g p , q , s a i , p b j , q c k , s
A R I × P , B R J × Q , C R K × S , G R P × Q × S
Table 2. Some examples of linear, nonlinear and multilinear models.
Linear models
SISO FIR model
y ( t ) = i n u h i u ( t i )
Memoryless MIMO model
y i ( t ) = j = 1 n T h i , j u j ( t ) ,    i [ 1 , n R ]
y ( t ) = H u ( t ) ,    y ( t ) R n R , u ( t ) R n T , H R n R × n T
Nonlinear models
Polynomial model    (Section 5.1)
y ( t ) = p = 1 P f p [ u ( t ) , , u ( t n u ) , y ( t 1 ) , , y ( t n y ) ]
f p ( . ) = pth-degree polynomial in the system input ( u ) and output ( y ) signals
Truncated Volterra model   (Section 5.2)
y ( t ) = h 0 + p = 1 P m 1 = 1 M p m P = 1 M P h m 1 , , m P ( p ) q = 1 p u ( t m q + 1 )
h m 1 , , m P ( p ) = p th-order Volterra kernel with memory M p
Multilinear models
TITO model   (Section 6.2)
y i 1 , , i P ( t ) = j 1 = 1 J 1 j N = 1 J N h i 1 , , i P , j 1 , , j N u j 1 , , j N ( t )
U ( t ) R J 1 × × J N , Y ( t ) R I 1 × × I P
Multilinear TISO model   (Section 6.3 and Section 6.4)
y ( t ) = j 1 = 1 J 1 j N = 1 J N n = 1 N h j n ( n ) u j 1 , , j N ( t )
Table 3. Notation for sets of indices and dimensions.
i ̲ P { i 1 , , i P }   ;   j ̲ N { j 1 , , j N }
I ̲ P { I 1 , , I P }   ;   J ̲ N { J 1 , , J N }
I ̲ P I 1 × × I P   ;   J ̲ N J 1 × × J N
I ̲ P × J ̲ N = I 1 × × I P × J 1 × × J N
I ̲ P × I ̲ P = I 1 × × I P × I 1 × × I P
Π I P I 1 I P = p = 1 P I p
Table 4. Vector and matrix products using the index convention.
u K I , v K J , w K K
u v = u i v j e i j K I J
u v T = u i v j e i j K I × J
u v T w = u i v j w k e i k j K I K × J
A K I × J , B K J × K , C K K × J
A B = i = 1 I k = 1 K ( j = 1 J a i j b j k ) e i k = a i j b j k e i k K I × K
A C T = a i j c k j e i k K I × K
Table 5. Various sets of tensors.
OrderSizeSets of Tensors
P I ̲ P = I 1 × × I P K I 1 × × I P K I ̲ P
P I ̲ P = I 1 × × I P with I p = I , p P K [ P ; I ]
P + N I ̲ P × J ̲ N = I 1 × × I P × J 1 × × J N K I ̲ P × J ̲ N
I ̲ P × J ̲ N = I × × I × J × × J
P + N with K [ P + N ; I , J ]
I p = I , p P   and   J n = J , n N
2 P I ̲ P × I ̲ P   with   I p = I , p P K [ 2 P ; I ]
Table 6. Multilinear forms and associated tensors.
Multilin. FormsTransformationsTensors
real-valued in P vectors × p = 1 P R I p ( x ( 1 ) , x ( P ) ) f x ( 1 ) , x ( P ) R A R I ̲ P
real-valued in one vector R I x f ( x , , x ) P terms R A R [ P ; I ]
Table 7. Multilinear forms and associated homogeneous polynomials.
FormsMatrices/TensorsHomogeneous Polynomials
Bilinear A R I × J ; y R I , x R J f ( x , y ) = y T A x = a i j y i x j , i I , j J
Quadratic A R I × I ; x R I f ( x ) = x T A x = a i j x i x j , i , j I
Real multilinear in P vector A R I ̲ P ; x ( p ) R I p f x ( 1 ) , x ( P ) = a i ̲ P p = 1 P x i p ( p ) , i p I p , p P
Real multilinear in one vector A R [ P ; I ] ; x R I f ( x , , x ) P terms = a i ̲ P p = 1 P x i p , i p I , p P
Table 8. Different types of multiplication with tensors.
TensorsOperationsDefinitions
X K I ̲ P , A K J × I p Y = X × p A y i 1 , , i p 1 , j , i p + 1 , , i P = i p a j , i p x i ̲ P = a j , i p x i ̲ P
X K I ̲ P , u K I p Y = X × p u T y i 1 , , i p 1 , i p , , i P = i p u i p x i ̲ P = u i p x i ̲ P
X K I ̲ P , Y K J ̲ N Z = X × p n Y z i 1 , , i p 1 , i p + 1 , , i P , j 1 , , j n 1 , j n + 1 , , j N =
with   I p = J n = K k = 1 K a i 1 , , i p 1 , k , i p + 1 , , i P b j 1 , , j n 1 , k , j n + 1 , , j N
A K I ̲ P × J ̲ N , X K J ̲ N × K ̲ Q Y = A N X y i ̲ P , k ̲ Q = j ̲ N = 1 ̲ J ̲ N a i ̲ P , j ̲ N x j ̲ N , k ̲ Q = a i ̲ P , j ̲ N x j ̲ N , k ̲ Q
Table 9. Outer products of vectors, matrices, and tensors.
Vectors/Matrices/TensorsOuter ProductsSpacesOrders
u ( p ) K I p , p P p = 1 P u ( p ) K I ̲ P P
A ( p ) K I p × J p , p P p = 1 P A ( p ) K I 1 × J 1 × × I P × J P 2 P
A K I ̲ P , B K J ̲ N A B K I ̲ P × J ̲ N P + N
A ( p ) K J ̲ N p , p P p = 1 P A ( p ) K J ̲ N 1 × × J ̲ N P p = 1 P N p
Table 10. PARAFAC decomposition of a tensor of order three and order N.
Third-Order Tensor Nth-Order Tensor
X K I × J × K X K I ̲ N
A K I × R , B K J × R , C K K × R , A ( n ) K I n × R
x i , j , k = r = 1 R a i r b j r c k r Scalar writing x i ̲ N = r = 1 R n = 1 N a i n , r ( n )
X = I R × 1 A × 2 B × 3 C with mode-n products X = I R × n = 1 N A ( n )
X = r = 1 R A . r B . r C . r with outer products X = r = 1 R n = 1 N A . r ( n )
X I J × K = ( A B ) C T Matrix unfoldings X S 1 ; S 2 = n S 1 A ( n ) n S 2 A ( n ) T
X J K × I = ( B C ) A T
X K I × J = ( C A ) B T
x I J K = ( A B C ) 1 R Vectorized form x I 1 · I N = ( A ( 1 ) A ( 2 ) A ( N ) ) 1 R
Table 11. Tucker decomposition of a tensor of order three and order N.
Third-Order Tensor Nth-Order Tensor
X K I × J × K X K I ̲ N
G K P × Q × S , A K I × P , G K R ̲ N , A ( n ) K I n × R n , n N
B K J × Q , C K K × S
x i j k = p = 1 P q = 1 Q s = 1 S g p q s a i p b j q c k s Scalar writing x i ̲ N = r 1 = 1 R 1 r N = 1 R N g r 1 , , r N n = 1 N a i n , r n ( n )
X = G × 1 A × 2 B × 3 C with mode-n products X = G × 1 A ( 1 ) × 2 A ( 2 ) × 3 × N A ( N )
X = p = 1 P q = 1 Q s = 1 S g p q s A . p B . q C . s with outer products X = r 1 = 1 R 1 r N = 1 R N g r 1 , , r N n = 1 N A . r n ( n )
Table 12. Examples of tensor systems.
FormsDimensionsTensor Systems
Linear Y R I ̲ P , A R I ̲ P × J ̲ N , X R J ̲ N Y = A N X
Bilinear Y R I ̲ P , A R I ̲ P × K ̲ M × J ̲ N , X R J ̲ N , Z R K ̲ M Y = A N X M Z
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
