Article

Representation and Characterization of Nonstationary Processes by Dilation Operators and Induced Shape Space Manifolds

Univ Lyon, UJM-Saint Etienne, INSA Lyon, DISP, F-69621 Villeurbanne, France
*
Author to whom correspondence should be addressed.
Entropy 2018, 20(9), 717; https://doi.org/10.3390/e20090717
Received: 21 July 2018 / Revised: 29 August 2018 / Accepted: 7 September 2018 / Published: 19 September 2018
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Abstract
In this work we propose a new vision of stochastic processes through the geometry induced by dilation. The dilation matrices of a given process are obtained by a composition of rotation matrices built with respect to partial correlation coefficients. Particularly interesting is the fact that dilation matrices can be obtained regardless of the stationarity of the underlying process. When the process is stationary, a single dilation matrix is obtained, and it corresponds to the Naimark dilation. When the process is nonstationary, a set of dilation matrices is obtained; they correspond to the Kolmogorov decomposition. In this work, the nonstationary class of periodically correlated processes is of interest. The underlying periodicity of the correlation coefficients is then transmitted to the set of dilation matrices. Because this set lives on the Lie group of rotation matrices, we can see it as a set of points on a closed curve on the Lie group. Geometrical aspects can then be investigated through the shape of the obtained curves, and to give a complete insight into the space of curves, a metric and the derived geodesic equations are provided. The general results are adapted to the more specific case where the base manifold is the Lie group of rotation matrices. Because the metric in the space of curves naturally extends to the space of shapes, this enables a comparison between curves’ shapes and thus allows the classification of random processes’ measures.

1. Introduction

The analysis and/or the representation of nonstationary processes has been tackled for four or five decades now by time-scale/time-frequency analysis [1,2], by Fourier-like representations when the processes belong to the periodically correlated (PC) subclass [3,4], or by partial correlation coefficient (parcor) series [5,6], to cite a few. One of the advantages of dealing with parcors resides in their strong relation to the measure of the process by the one-to-one relation with correlation coefficients [7,8]. They consequently appear explicitly in the Orthogonal Polynomials on the Real Line/Unit Circle decomposition of the measure [9,10] and in the Matrix Orthogonal Polynomials on the Unit Circle [11] and its applications [12]; they are the elements for the construction of dilation matrices that appear in the Cantero, Moral, and Velazquez (CMV)/Geronimus, Gragg, and Teplyaev (GGT) matrices [13] and in the Schur flows problem with upper Hessenberg matrices [14], which are also seen in the literature as evolution operators [10] or shift operators [15]; and they finally appear in the state-space representation [7,16]. The dilation theory takes its roots from operator theory [17], which bridges the process’s measure and unitary operators. In its simplest version, the dilation theory corresponds to the Naimark dilation [17,18], and states that given a sequence of correlation coefficients $(R_n)$, there exists a unitary matrix $W$ such that $R_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} W^n \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T$, where $\cdot^T$ denotes transposition. When the process is not stationary, its associated correlation matrix is no longer Toeplitz-structured, a set of matrices is required [16], and the previous expression becomes $R_{i,j} = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} W_{i+1} W_{i+2} \cdots W_j \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T$. The matrices $W_i$ are theoretically understood as infinite rotation matrices, which become finite when the sequence of correlation coefficients is itself finite.
In that particular case, the matrices $W_i$ belong to $SO(n)$ or $SU(n)$, the special orthogonal or special unitary group, respectively, and the process’s measure is totally described by the set of $W_i$. As a consequence, the measure of the process is beautifully characterized in the nonstationary case by a sampled trajectory induced by the dilation matrices on the appropriate Lie group. When the process is periodically correlated, the sequence of parcors is periodic, as is the sequence of dilation matrices, which yields a closed path as illustrated in Figure 1. This raises the question of comparing processes by means of their dilation matrices. Many efforts have been made in the last decade to exploit the hyperbolic geometric structure not of the correlation matrices directly but of the related parcors obtained under stationary conditions [8,19,20,21,22], as well as the closely related information geometry structure [23,24,25]. As the Kullback–Leibler divergence allows, the comparison of stationary processes is then made by comparing curves, whose sampled points are parcor sequences, defined on several copies of the Poincaré disk through geodesic deformation. To our knowledge, the nonstationary case has not been tackled with the previously mentioned approaches. In this paper, we hope to initiate interest in filling this gap by extending the representation and the characterization of the processes’ measure to a nonstationary context, inspired on the one hand by the information-geometric application and interpretation of parcors [19,20] or correlation matrices [26,27,28], and on the other hand by theories and applications dealing with curves on manifolds [29,30], closely related to some aspects of Euler’s equations [31]. Therefore, we combine Constantinescu’s approach to dilation with shape analysis, with the purpose of seeing stochastic processes as elements of a Lie group.
Characterizing the time-varying measure of the process is now tackled by studying curves (or sampled curves) on special groups.
To support the reader, some insights on dilation theory are given in Section 2. Practical implementations of dilation matrices according to the operator theory approach [16,18] or the lattice filter structure approach [32,33] are also discussed and the strong connection between parcors and the dilation matrices is emphasized. Section 3 focuses on the geometry of the curves induced by the dilation on particular manifolds. The general framework is first introduced by recalling concepts of distances and shape of curves when the ambient space is not flat. Next, the square root velocity (SRV) functions are developed and adapted to the Lie group, and a procedure to compare nonstationary processes through their time evolution trajectory is presented. Finally, a conclusion is drawn in Section 4 and the reader will find some technical tools in Appendix A and Appendix B.

2. The Structure of Semi-Positive-Definite Matrices and the Dilation Theory

2.1. Outline of the Dilation Theory

Let us give some insights into the dilation theory. In its fundamental definition, the dilation theory consists, given a Hilbert space $\mathcal{H}$ and an operator-valued function $f$, i.e., an $\mathcal{L}(\mathcal{H})$-valued function, in finding a larger Hilbert space $\mathcal{K} \supset \mathcal{H}$ and another application $F$ such that $f$ is the orthogonal projection of $F$:
$$f(t) = P_{\mathcal{H}} F(t), \quad t \in \mathbb{Z}$$
where $P_{\mathcal{H}}$ denotes the orthogonal projection onto the Hilbert space $\mathcal{H}$. The ideas of the dilation theory are:
  • there exists a larger space from which the original function (or matrix) is deduced;
  • we can choose the “dilated” function to be simpler. For instance, when dealing with matrices, each of its coefficients can be expressed as the projection of a larger unitary matrix. In this case, we obtain a unitary dilation. This approach has been for example developed in [34,35,36] for the stationary dilation of periodically-correlated processes.

2.1.1. Dilation and Rotation of Contractions

For an operator $T$ on a Hilbert space $\mathcal{H}$, we denote by $T^*$ the adjoint operator, i.e., the operator on $\mathcal{H}$ such that $\langle Tx, y \rangle = \langle x, T^* y \rangle$ for all $x, y \in \mathcal{H}$. An operator $T \in \mathcal{L}(\mathcal{H})$ is said to be a contraction if $\|T\| \le 1$, where $\|\cdot\|$ is the operator norm. We deduce the expression of the defect operator $D_T = (I - T^* T)^{1/2}$ and its adjoint counterpart $D_{T^*} = (I - T T^*)^{1/2}$.
One of the easiest results is that, given a contraction $\Gamma$, the following unitary operator
$$J(\Gamma) = \begin{pmatrix} \Gamma & D_{\Gamma^*} \\ D_\Gamma & -\Gamma^* \end{pmatrix}$$
satisfies, for all $n \in \mathbb{N}$,
$$\Gamma^n = \begin{pmatrix} 1 & 0 \end{pmatrix} J(\Gamma)^n \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
In other words, the elementary rotation of a contraction also corresponds to the unitary dilation operator of the contraction. This operator is called the Julia operator, and corresponds to the Halmos extension [15] of a contraction.
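The elementary rotation can be checked numerically in the scalar case. The sketch below is our own illustration (not part of the original development): it builds $J(\Gamma)$ for a real contraction and verifies its orthogonality and the $n=1$ dilation identity.

```python
import numpy as np

def defect(g):
    """Defect operator D_g = (1 - g^2)^(1/2) for a real scalar contraction g."""
    return np.sqrt(1.0 - g * g)

def julia(g):
    """Julia operator (elementary rotation) of a scalar contraction g."""
    d = defect(g)
    return np.array([[g,  d],
                     [d, -g]])

gamma = 0.6
J = julia(gamma)

# J is unitary (orthogonal in the real case) ...
assert np.allclose(J @ J.T, np.eye(2))
# ... and dilates the contraction: Gamma = (1 0) J (1 0)^T
assert np.isclose(J[0, 0], gamma)
```
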

2.1.2. Dilation and Isometries

Following the idea and the formulation of Naimark, the dilation theory can be restated in terms of dilation of a sequence of operators, or of a sequence of numbers when the dimension of the underlying Hilbert space is 1. A sequence of operators $(R_n)_{n=1}^{\infty}$ acting on $\mathcal{H}$ is said to be positive if
$$\sum_{i,j=0}^{+\infty} \langle R_{i-j} \, h_i, h_j \rangle \ge 0 \quad \text{for all } h_i \in \mathcal{H}.$$
Assuming now that $R_n^* = R_{-n}$ and $R_0 = I$ leads to the following Toeplitz matrix:
$$R^{(m)} = \begin{pmatrix} I & R_1 & \cdots & R_{m-1} \\ R_1^* & I & \cdots & R_{m-2} \\ \vdots & \vdots & \ddots & \vdots \\ R_{m-1}^* & R_{m-2}^* & \cdots & I \end{pmatrix}$$
which is positive-definite. Remark that this matrix can be seen as the correlation matrix of a stationary process, as it is positive and Toeplitz [37]. Owing to this property, we obtain the existence of an operator $U$ such that ([18,38] and Theorem 1.1 in [39]):
$$R_n = P_{\mathcal{H}} U^n |_{\mathcal{H}}, \quad \text{for all } n \ge 0, \text{ with } U \text{ an isometry on } \mathcal{K},$$
as a result of the Naimark dilation theorem [17]. Furthermore, if $\mathcal{K} = \bigvee_{n \ge 0} U^n \mathcal{H}$, where $\bigvee$ denotes the linear span, then $U$ is unique up to an isomorphism.

2.1.3. Dilation and Measure

From Bochner’s theorem [37], we know that a matrix of type (5) can be seen as the matrix of Fourier coefficients of a given positive Borelian measure. This is also known as the moment or trigonometric problem [16]. Therefore, we can restate the dilation problem in terms of measure. If we denote by $E_\lambda$ an operator-valued distribution function on $[0, 2\pi]$, then the sequence
$$R_n = \int_0^{2\pi} e^{in\lambda} \, dE_\lambda$$
is positive-definite. This shows the strong correspondence between the spectral measure and the dilation theory. There hence exists a unitary operator $U$ on a Hilbert space $\mathcal{K}$ such that $R_n = P_{\mathcal{H}} U^n$, where $P_{\mathcal{H}}$ stands for the orthogonal projection. With the spectral representation of unitary operators, $U = \int_0^{2\pi} e^{i\lambda} \, dF_\lambda$, and we have
$$\left\langle \int_0^{2\pi} e^{in\lambda} \, dE_\lambda \, u, \, v \right\rangle = \left\langle \int_0^{2\pi} e^{in\lambda} \, dF_\lambda \, u, \, v \right\rangle$$
or, in an equivalent form:
$$E_\lambda = P_{\mathcal{H}} F_\lambda.$$
Note that the operator-valued measure F λ is in fact an orthogonal projection-valued measure because all its increments are orthogonal. With dilation matrices having been introduced, we give in the next section a methodology to understand how they are obtained.

2.2. Construction of Dilation Matrices

As mentioned previously, given an SPD matrix $R = (R_{i,j})_{i,j \in \mathbb{N}}$, it is possible to build a sequence of matrices $(W_i)_{i \in \mathbb{N}}$ such that $R_{i,j} = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} W_i W_{i+1} \cdots W_{j-1} \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T$. Consider the general framework where $R_{i,j}$ is a complex operator satisfying $R_{i,j} \in \mathcal{L}(\mathcal{H}_j, \mathcal{H}_i)$, with $(\mathcal{H}_n)_n$ a sequence of Hilbert spaces and $\mathcal{L}$ the set of linear applications. For example, consider the stochastic process $(X_n)_n$, where $X_n \in L^2(P)$ is a square-integrable random variable with respect to the probability space $(\Omega, \mathcal{F}, P)$. Then the stochastic process can be viewed as an operator $X_n : \mathbb{C} \to L^2(P)$, $X_n \lambda = \lambda X_n$, and the correlation kernel becomes $R_{i,j} = X_i^* X_j$.
To start the construction, let us first state the following theorem.
Theorem 1 (Structure of a positive-definite block matrix).
Let X and Z be positive operators in $\mathcal{L}(\mathcal{H}_X)$ and $\mathcal{L}(\mathcal{H}_Z)$, respectively. Then the following are equivalent:
  • The operator $A = \begin{pmatrix} X & Y \\ Y^* & Z \end{pmatrix}$ is positive.
  • There exists a unique contraction $\Gamma$ in $\mathcal{L}(\mathcal{R}(Z), \mathcal{R}(X))$ such that
    $$Y = X^{1/2} \, \Gamma \, Z^{1/2}.$$
Proof. 
Let us now apply this relation repeatedly on an SPD matrix. To fix ideas, let $R$ be the $3 \times 3$ (block-)matrix:
$$R = \begin{pmatrix} R_{1,1} & R_{1,2} & R_{1,3} \\ R_{1,2}^* & R_{2,2} & R_{2,3} \\ R_{1,3}^* & R_{2,3}^* & R_{3,3} \end{pmatrix}$$
and apply Theorem 1 to $\begin{pmatrix} R_{1,1} & R_{1,2} \\ R_{1,2}^* & R_{2,2} \end{pmatrix}$, to $\begin{pmatrix} R_{2,2} & R_{2,3} \\ R_{2,3}^* & R_{3,3} \end{pmatrix}$, and finally to $R_{1,3}$. Note that when a square root of a (block-)matrix has to be chosen, it is done according to the Schur decomposition given in Appendix A. For example, we have $R_{1,2} = R_{1,1}^{1/2} \, \Gamma_{1,2} \, R_{2,2}^{1/2}$. At each step, a contraction $\Gamma_{i,j}$ is generated, indexed with respect to the upper and lower (block-)matrices of the main diagonal, e.g., $\Gamma_{1,2}$ for the first $\begin{pmatrix} R_{1,1} & R_{1,2} \\ R_{1,2}^* & R_{2,2} \end{pmatrix}$ (block-)matrix. We thus obtain a one-to-one correspondence between the SPD matrix $R$ and the set of contractions $\{\Gamma_{i,j}\}$, $1 \le i < j \le 3$. In reference to the extensive work of Constantinescu [16], we will call these contractions the Schur-Constantinescu parameters. We consider now unit variance and arbitrary size $n \times n$ for the SPD matrix, which allows us to write the correspondence as:
$$\begin{pmatrix} R_{1,1} & R_{1,2} & \cdots & R_{1,n} \\ R_{1,2}^* & R_{2,2} & \cdots & R_{2,n} \\ \vdots & & \ddots & \vdots \\ R_{1,n}^* & \cdots & R_{n-1,n}^* & R_{n,n} \end{pmatrix} \longleftrightarrow \begin{pmatrix} 0 & \Gamma_{1,2} & \Gamma_{1,3} & \cdots & \Gamma_{1,n} \\ 0 & 0 & \Gamma_{2,3} & \cdots & \Gamma_{2,n} \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & \Gamma_{n-1,n} \\ 0 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$
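In the scalar, unit-variance case, the first off-diagonal contractions are the correlations themselves, and $\Gamma_{1,3}$ reduces to the familiar partial correlation coefficient. The following sketch is our own numerical illustration of this correspondence, on a hypothetical $3 \times 3$ correlation matrix.

```python
import numpy as np

def schur_constantinescu_3x3(R):
    """Scalar unit-variance case: map a 3x3 correlation matrix to its
    Schur-Constantinescu parameters (Gamma_12, Gamma_23, Gamma_13).
    Gamma_13 is the partial correlation of variables 1 and 3 given 2."""
    g12, g23 = R[0, 1], R[1, 2]
    d12 = np.sqrt(1 - g12**2)   # defect of Gamma_12
    d23 = np.sqrt(1 - g23**2)   # defect of Gamma_23
    g13 = (R[0, 2] - g12 * g23) / (d12 * d23)
    return g12, g23, g13

R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])   # a positive-definite example

g12, g23, g13 = schur_constantinescu_3x3(R)
# all parameters are strict contractions when R is positive-definite
assert all(abs(g) < 1 for g in (g12, g23, g13))
# the map is one-to-one: R_{1,3} is recovered from the parameters
assert np.isclose(g12 * g23 + np.sqrt(1 - g12**2) * np.sqrt(1 - g23**2) * g13, R[0, 2])
```
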
Once (12) is established, each dilation matrix $W_i$ is built up as a product of Givens rotations of a sequence of Schur-Constantinescu parameters in the following way:
$$W_i = G(\Gamma_{i,i+1}) \, G(\Gamma_{i,i+2}) \cdots G(\Gamma_{i,j}) \cdots,$$
where $G(\Gamma_{i,i+l})$ denotes the Givens rotation of $\Gamma_{i,i+l}$, as follows:
$$G(\Gamma_{i,i+l}) = \begin{pmatrix} I & 0 & 0 \\ 0 & \begin{matrix} \Gamma_{i,i+l} & D_{\Gamma_{i,i+l}^*} \\ D_{\Gamma_{i,i+l}} & -\Gamma_{i,i+l}^* \end{matrix} & 0 \\ 0 & 0 & I \end{pmatrix}$$
where the “non-identity” part, a Julia operator with defect $D_{\Gamma_{i,i+l}} = (I - \Gamma_{i,i+l}^* \Gamma_{i,i+l})^{1/2}$, is located at the $l$-th (block-)entry of the diagonal. When the SPD matrix is Toeplitz, which corresponds to a stationary underlying process, all dilation matrices $W_i$ are identical and take the form
$$W_i = U = \begin{pmatrix} \Gamma_1 & D_{\Gamma_1^*} \Gamma_2 & D_{\Gamma_1^*} D_{\Gamma_2^*} \Gamma_3 & D_{\Gamma_1^*} D_{\Gamma_2^*} D_{\Gamma_3^*} \Gamma_4 & \cdots \\ D_{\Gamma_1} & -\Gamma_1^* \Gamma_2 & -\Gamma_1^* D_{\Gamma_2^*} \Gamma_3 & -\Gamma_1^* D_{\Gamma_2^*} D_{\Gamma_3^*} \Gamma_4 & \cdots \\ 0 & D_{\Gamma_2} & -\Gamma_2^* \Gamma_3 & -\Gamma_2^* D_{\Gamma_3^*} \Gamma_4 & \cdots \\ 0 & 0 & D_{\Gamma_3} & -\Gamma_3^* \Gamma_4 & \cdots \\ 0 & 0 & 0 & D_{\Gamma_4} & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
which is nothing less than the Naimark dilation introduced in the first part, i.e., $R_{i,j} = R_{j-i} = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix} U^{j-i} \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T$. For the sake of completeness, we give the correspondence between the coefficients of the SPD matrix (the left-hand side of (12)) and the Schur-Constantinescu parameters:
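For real scalar parcors, the truncated Naimark dilation can be assembled explicitly as a product of Givens rotations and checked against the classical Levinson relations; the sketch below is our own illustration, with hypothetical parcor values.

```python
import numpy as np

def givens(g, k, n):
    """Embed the 2x2 Julia block of a scalar contraction g at rows/cols (k, k+1)
    of an n x n identity matrix."""
    G = np.eye(n)
    d = np.sqrt(1 - g * g)
    G[k:k+2, k:k+2] = [[g, d], [d, -g]]
    return G

parcors = [0.5, -0.3, 0.2]      # hypothetical reflection coefficients
n = len(parcors) + 1
# stationary case: a single dilation matrix U = G(Gamma1) G(Gamma2) G(Gamma3)
U = np.eye(n)
for k, g in enumerate(parcors):
    U = U @ givens(g, k, n)

assert np.allclose(U @ U.T, np.eye(n))          # U is a rotation matrix
# R_k = [1 0 ... 0] U^k [1 0 ... 0]^T reproduces the correlation coefficients:
e1 = np.zeros(n); e1[0] = 1.0
R1 = e1 @ U @ e1
R2 = e1 @ np.linalg.matrix_power(U, 2) @ e1
assert np.isclose(R1, 0.5)                                # R_1 = Gamma_1
assert np.isclose(R2, 0.5**2 + (-0.3) * (1 - 0.5**2))     # Levinson: R_2 = G1^2 + G2(1 - G1^2)
```

The truncation is harmless here because the infinite matrix is upper Hessenberg: the entry $(1,1)$ of $U^k$ only involves the first $k+1$ rows and columns.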
Theorem 2.
The matrix $R^{(n)} = [R_{k,j}]_{k,j=1}^{n}$, satisfying $R_{j,k}^* = R_{k,j}$, is positive if and only if
  • $R_{k,k} \ge 0$ for all k;
  • there exists a family $\{\Gamma_{k,j}, \; k,j = 1, \ldots, n, \; k \le j\}$ of contractions such that
    $$R_{k,j} = B_{k,k}^* \left( L_{k,j-1} \, U_{k+1,j-1} \, C_{k+1,j} + D_{\Gamma_{k,k+1}^*} \cdots D_{\Gamma_{k,j-1}^*} \, \Gamma_{k,j} \, D_{\Gamma_{k+1,j}} \cdots D_{\Gamma_{j-1,j}} \right) B_{j,j}$$
    where $B_{k,k}$ is the Cholesky square root of $R_{k,k}$,
and
$$L_{k,j} = \begin{pmatrix} \Gamma_{k,k+1} & D_{\Gamma_{k,k+1}^*} \Gamma_{k,k+2} & \cdots & D_{\Gamma_{k,k+1}^*} \cdots D_{\Gamma_{k,j-1}^*} \Gamma_{k,j} \end{pmatrix}$$
is a row contraction associated with the set of parameters $\{\Gamma_{k,m}, \; k < m \le j\}$,
$$C_{k,j} = \begin{pmatrix} \Gamma_{j-1,j} & \Gamma_{j-2,j} D_{\Gamma_{j-1,j}} & \cdots & \Gamma_{k,j} D_{\Gamma_{k+1,j}} \cdots D_{\Gamma_{j-1,j}} \end{pmatrix}^T$$
is a column contraction associated with the set of parameters $\{\Gamma_{m,j}, \; m = j-1, \ldots, k\}$, and finally
$$U_{k,j} = G(\Gamma_{k,k+1}) \, G(\Gamma_{k,k+2}) \cdots G(\Gamma_{k,j}) \left( U_{k+1,j} \oplus I \right).$$
Proof. 
This theorem is proved in ([16], Theorem 5.3). ☐
A different approach leading to the same results can be found in [40], using directly the Kolmogorov decomposition. In [32] the Naimark dilation is constructed using the lattice filter and finally applications of this decomposition in quantum mechanics are to be found in [41,42] for example.
We now give some remarks to conclude this part:
  • If $R^{(n)}$ is a semi-positive-definite complex-valued Toeplitz kernel, then all the $\Gamma_i$ are complex-valued and satisfy $|\Gamma_i| < 1$. The structure and the construction procedure for obtaining such complex-valued parameters are identical whether the kernel is real or complex.
  • The framework proposed by Constantinescu and recalled previously is quite general and can be extended to the Matrix Orthogonal Polynomials on the Unit Circle (MOPUC) development. By referring to ([43], Section 3.1), matrix polynomials stem from a Szegő recursion akin to the scalar case and thus provide a sequence of Verblunsky coefficients, or parcors, that are matrices. Again in ([43], Section 3.11), a correspondence between MOPUC and CMV matrices [13] is provided, which are equivalent to dilation matrices. The construction procedure remains the same for the dilation matrices, but the parcors become in that case matrices (they are matrix-valued Verblunsky coefficients), which can be obtained by a matrix version of the Schur/Geronimus algorithm [44].
Dilation matrices being now fully introduced, we focus the reader’s attention on the hidden information contained in the geometry of their evolution over time.

3. Analysis of Curves on a Manifold Induced by the Dilation

Parcors, which compose dilation matrices, have already been given a geometrical point of view: for example, in [8] the sequence of parcors associated with a stationary process is seen as a point on the Poincaré polydisk $P^n$, that is, the product of Poincaré disks. To give geometrical settings, a distance to characterize individual parcors is then proposed and discussed. In [45], a stochastic process is studied under the local stationarity assumption. To each stationary slice of the process corresponds a sequence of parcors, represented as a point in the Poincaré polydisk $P^n$ as well. A trajectory is then generated on that space, which materializes a curve on the manifold $P^n$. The underlying computations are quite intricate because of the product manifold, and the question of nonstationarity arises. Based on the works of Le Brigant [45,46], Celledoni et al. [47], and Zhang et al. [48], we propose to give particular attention to this question. We first make use of the dilation theory introduced in Section 2. When the process under study is nonstationary, a set of matrices $W_i$ is obtained. The basic idea for extracting geometric information from the nonstationary process is therefore to characterize the trajectory formed by the set of dilation matrices. These matrices are theoretically operators of infinite dimension, but as we dispose of only a finite set of parcors, the theoretical matrices of (15) are truncated. Matrices respecting (15) are general rotation matrices that become perfect rotation operators, belonging to $SO(n)$ for real processes and $SU(n)$ for complex processes, when their dimensions are reduced to $n \times n$. Our aim is finally to analyse those curves living on the Lie group of rotation matrices and to emphasize the geometry or, more precisely, the intrinsic geometry formulation of these objects.
For example, we aim at comparing different curves coming from different processes, or at summarizing many realizations of a stochastic process (multiple measurements) through the computation of the mean of the associated curves. The question of computational complexity remains, but many results have been proposed recently to overcome this difficulty and to provide closed-form formulations [30,49]. In particular, it is advocated to extract the shape of the trajectory, for it contains the essential information in a topological sense.
To allow the comparison of curves, we base our development on the works of Le Brigant [45] and Celledoni et al. [47]. First, we define the manifold $M$ given by the set of all curves in the base manifold. This leads to another space, the shape space, for which the manifold $M$ will be a principal bundle. We then have a metric on $M$ from which a metric on the shape space is deduced. These steps are explained in the following.

3.1. Preliminaries on Lie Groups

As we are going to deal with curves on a Lie group, we start with some preliminaries.
A metric $\langle \cdot, \cdot \rangle$ on a Lie group is said to be left-invariant if:
$$\langle u, v \rangle_b = \langle (dL_a)_b \, u, (dL_a)_b \, v \rangle_{ab}$$
where $(dL_a)_b$ is the derivative in the manifold sense (the tangent map) of the left translation $L_a$ at $b$. A left-invariant metric gives the same number whenever the vectors are translated on the left. It is straightforward to adapt this definition to a right-invariant metric. A metric that is both left- and right-invariant is called a bi-invariant metric. A Lie group endowed with a bi-invariant metric has plenty of important properties that can be exploited for our study of curves on shape spaces. We list some of them in the following [50].
  • The geodesics through $e$ (the identity element) are the integral curves $t \mapsto \exp(tu)$, $u \in \mathfrak{g}$, that is, the one-parameter subgroups. In addition, because left and right translations are isometries and isometries map geodesics to geodesics, the geodesics through any point $a \in G$ are the left (right) translates of the geodesics through $e$:
    $$\gamma(t) = L_a \exp(tu), \quad u \in \mathfrak{g}.$$
    Of course, we have
    $$\gamma'(0) = (dL_a)_e(u).$$
  • The Levi-Civita connection is given by $\nabla_X Y = \frac{1}{2}[X, Y]$, $X, Y \in \mathfrak{g}$,
where $[\cdot, \cdot]$ denotes the Lie bracket. We can now link these formulas to our base manifold $SO(n)$. The Killing form $B$ of a Lie algebra is the symmetric bilinear form $B : \mathfrak{g} \times \mathfrak{g} \to \mathbb{C}$ given by $B(u, v) = \mathrm{tr}(\mathrm{ad}(u) \, \mathrm{ad}(v))$, where $\mathrm{tr}$ denotes the trace operator and $\mathrm{ad}$ denotes the adjoint representation of the Lie algebra, $\mathrm{ad}(u) = [u, \cdot]$, obtained as the differential at $e$ of the adjoint representation of the group, namely the map $\mathrm{Ad} : G \to GL(\mathfrak{g})$ such that, for all $a \in G$, $\mathrm{Ad}_a : \mathfrak{g} \to \mathfrak{g}$ is the linear isomorphism defined by $\mathrm{Ad}_a = d(R_{a^{-1}} \circ L_a)_e$. If we now assume $B$ to be negative-definite, then $-B$ is an inner product and is $\mathrm{Ad}$-invariant. Thus, it is a classical result of compact semi-simple Lie theory that $-B$ induces a bi-invariant metric on $G$.
The Lie algebra of $SO(n)$ is the set $\mathfrak{so}(n)$ of skew-symmetric matrices, which verify $M^T = -M$. The Killing form on $\mathfrak{so}(n)$ is given by $B_{\mathfrak{so}(n)}(X, Y) = (n-2) \, \mathrm{tr}(XY)$, and as a result of the skew-symmetry, we have $-B_{\mathfrak{so}(n)}(X, Y) = (n-2) \, \mathrm{tr}(XY^T)$. Therefore, it induces a bi-invariant metric, and the previous formula can be plugged into the expression of the metric on the space of curves. In the sequel, the manifold that supports the curves is $SO(n)$ endowed with its bi-invariant metric.
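The Ad-invariance of $\langle X, Y \rangle = \mathrm{tr}(XY^T)$ on $\mathfrak{so}(3)$, from which the bi-invariance of the induced metric follows, can be checked numerically; the sketch below is our own illustration, with the exponential map implemented through the Rodrigues formula.

```python
import numpy as np

def hat(w):
    """Map a vector of R^3 to a skew-symmetric matrix of so(3)."""
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def expm_so3(X):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm([X[2, 1], X[0, 2], X[1, 0]])
    if theta < 1e-12:
        return np.eye(3)
    return np.eye(3) + np.sin(theta)/theta * X + (1 - np.cos(theta))/theta**2 * X @ X

rng = np.random.default_rng(0)
X, Y = hat(rng.normal(size=3)), hat(rng.normal(size=3))
Q = expm_so3(hat(rng.normal(size=3)))

assert np.allclose(Q.T @ Q, np.eye(3)) and np.isclose(np.linalg.det(Q), 1.0)  # Q in SO(3)
# <X, Y> = tr(X Y^T) is invariant under conjugation (Ad_Q):
lhs = np.trace((Q @ X @ Q.T) @ (Q @ Y @ Q.T).T)
assert np.isclose(lhs, np.trace(X @ Y.T))
```
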

3.2. Basic Outline of Geometry

Curves of interest are those living in the Lie group of real rotation matrices; this yields $c : [0,1] \to SO(n)$. For the sake of clarity, assume that $c \in C^\infty([0,1], SO(n))$; we will come back to the case of discrete curves later. To study the geometrical features of such curves, we consider the set of all curves lying in $SO(n)$ (where $SO(n)$ is seen as a manifold) with nonvanishing velocity, i.e., $M = \{c \in C^\infty([0,1], SO(n)) : c'(t) \ne 0 \; \forall t\}$; this is in fact a submanifold of $C^\infty([0,1], SO(n))$. A curve $c$ is thus a particular point in $M$. The tangent space at a curve $c$ is given by
$$T_c M = \{v \in C^\infty([0,1], TSO(n)) : v(t) \in T_{c(t)} SO(n)\}$$
where $TSO(n)$ denotes the tangent bundle of the base manifold $SO(n)$. Note that a tangent vector is a curve in the tangent space of $SO(n)$ (see Theorem 5.6 in [51]). When comparing two curves, it is natural that the distance between them should remain the same if the curves are only reparametrized, that is, if we define other curves that pass through the same points as the original ones but at different speeds. When the curve is discretized, as we will see in the sequel, a reparametrization is equivalent to changing the chosen points (see Figure 2). A reparametrization is represented by an increasing diffeomorphism $\phi \in D$, $\phi : [0,1] \to [0,1]$, acting on the right of the curve by composition. In other words, we require that the Riemannian metric $g$ on $M$ satisfies the following property:
$$g_{c \circ \phi}(u \circ \phi, v \circ \phi) = g_c(u, v)$$
for all $c \in M$, $u, v \in T_c M$, and $\phi \in D$.
This property is called reparametrization invariance. We insist on the fact that $g$ is the metric on $M$, the space of all curves on $SO(n)$, and not on $SO(n)$ itself. In terms of distances, this gives
$$d_M(c_0 \circ \phi, c_1 \circ \phi) = d_M(c_0, c_1)$$
where $d_M$ denotes the distance on $M$ corresponding to the metric $g$. The reparametrization introduced above induces an equivalence relation between points in $M$ such that
$$c_0 \sim c_1 \iff \exists \phi \in D : c_0 = c_1 \circ \phi.$$
With this equivalence relation, a quotient space can be constructed as the collection of equivalence classes; it is named the shape space and is written:
$$S = M/\!\sim, \quad \text{or} \quad S = M/D.$$
A distance function on the shape space is obtained from the distance on $M$ as follows:
$$d_S([c_0], [c_1]) = \inf_{\phi \in D} d_M(c_0, c_1 \circ \phi)$$
where $[c_0]$ and $[c_1]$ are the equivalence classes of $c_0$ and $c_1$, respectively. It can be shown that this distance is independent of the choice of the representatives. Some precautions have to be taken here: whereas $M$ is a submanifold of the Fréchet manifold $C^\infty([0,1], SO(n))$, as proven in ([52], Theorem 10.4), the shape space $S$ is not a manifold, and the principal bundle structure $\pi : M \to S$ is not formally defined. However, a manifold structure can be obtained if we only consider free immersions [53]. As the metric defined on the shape space is reparametrization-invariant, it is constant along the “fibers” (the origin point is fixed). Further explanations on the Riemannian submersion can be found in [54]. Closed curves being of main interest in this work, we can also define the set
$$M^c = \{c \in C^\infty([0,1], SO(n)) : c'(t) \ne 0, \; c(0) = c(1)\}.$$
Basically, the closure of a curve just imposes the equality of its first and last points, and not of their first derivatives. When the latter is also imposed, $M^c$ turns into
$$M^{c+} = \{c \in C^\infty([0,1], SO(n)) : c'(t) \ne 0, \; c(0) = c(1), \; c'(0) = c'(1)\}.$$
We now need to introduce the Square Root Velocity function (SRV function) [55], in which a curve is represented by its starting point and its normalized velocity at each time $t$. There are several possibilities to define the SRV of a curve. The most general definition is the following:
$$F : M \to SO(n) \times TM, \quad c \mapsto \left( c(0), \; q = \frac{c'}{\sqrt{\|c'\|}} \right).$$
However, we can go further and benefit from the specific case of the Lie group. In this section, we will denote the base manifold by $G = SO(n)$ to emphasize its group structure, and by $g$ an element of the group. As in [47], we consider only curves that start at the identity; this is because other curves can be reduced to this case by right or left translation. In these settings, it is interesting to turn the SRV function into the Transported SRV function (TSRV). This is basically the SRV parallel-transported to a reference point. Different versions have been given in [47,48,56], which differ in the choice of the reference point. For our case of study, the identity is our natural curve starting point and is thus a particularly good choice for the reference point. In a Lie group, a parallel transport operation can be defined, here again, by the right (or left) translation. This justifies that we can take, as suggested in [47], a TSRV function of the following form:
$$F_{Lie} : \{c \in C^\infty([0,1], G)\} \to SO(n) \times \{q \in C^\infty([0,1], \mathfrak{g}) : q(t) \ne 0, \; t \in [0,1]\}$$
$$F_{Lie}(c)(t) = (c(0), q(t)) = \left( c(0), \; \frac{R_{c(t)^{-1} *}(c'(t))}{\sqrt{\|c'(t)\|}} \right) = \left( c(0), \; \frac{T_{c(t) \to I}(c'(t))}{\sqrt{\|c'(t)\|}} \right),$$
where $\mathfrak{g}$ is the Lie algebra, $R$ is the right translation on the group, $R_{g_1}(g_2) = g_2 g_1$, $R_{g *} = T_e R_g$ is its tangent map at the identity, $\|\cdot\|$ is a norm induced by a right-invariant metric on $G$, and $T_{c(t) \to I}$ denotes the parallel transport from $c(t)$ to the identity along the curve $c$. A curve is now represented pointwise as an element of the tangent bundle, $(c(0), q(t)) \in M \times \mathfrak{g}$ (recall that $q$ draws a curve in the tangent bundle), and $c(0)$ is the identity element of the Lie group. The inverse of the SRV function is then straightforward: for every $q \in C^\infty([0,1], TM)$, there exists a unique curve $c$ such that $F(c) = q$, with $c(t) = \int_0^t q(r) \|q(r)\| \, dr$, where $\|\cdot\|$ is the norm in $SO(n)$.
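In flat space, where the transport is trivial, the SRV map and its inverse take exactly the forms above; the discretized round trip below is our own Euclidean illustration (not the Lie group version used in this paper).

```python
import numpy as np

def srv(c, dt):
    """Discrete SRV: q = c' / sqrt(||c'||), with c of shape (T, d)."""
    v = np.gradient(c, dt, axis=0)                     # velocity c'(t)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(speed)

def srv_inverse(q, c0, dt):
    """Inverse SRV: c(t) = c0 + integral of q ||q||, via the trapezoidal rule."""
    v = q * np.linalg.norm(q, axis=1, keepdims=True)   # q ||q|| = c'
    c = np.zeros_like(v)
    c[1:] = np.cumsum((v[1:] + v[:-1]) / 2, axis=0) * dt
    return c0 + c

t = np.linspace(0.0, 1.0, 201)[:, None]
c = np.hstack([t, t**2])            # a planar curve with nonvanishing velocity
dt = t[1, 0] - t[0, 0]

q = srv(c, dt)
c_rec = srv_inverse(q, c[0], dt)
assert np.allclose(c_rec, c, atol=1e-3)   # the round trip recovers the curve
```
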

3.3. Metric and Distance over M and S

We now give insights on a relevant metric to be used on $M$ to compare different closed trajectories. The following development and expressions of metrics and distances can be found in [45]. The distance on the shape space is used to compare how the curves are intrinsically different. It has been shown in [57] that the simple $L^2$ metric on $M$, given by
$$g_c^{L^2}(u, v) = \int \langle u, v \rangle_{c(t)} \, dt$$
where $\langle \cdot, \cdot \rangle$ is the Riemannian metric on $SO(n)$, induces a vanishing metric on the shape space; that is, we cannot differentiate shapes with this metric. To overcome this difficulty, the family of elastic metrics, derived from the Sobolev metric [58,59], has been investigated, for it is non-vanishing on the shape space. In the case of a Euclidean space $\mathbb{R}^n$, it admits the expression:
$$g_c^{a,b}(u, v) = \int a^2 \, \langle D_l u^N, D_l v^N \rangle + b^2 \, \langle D_l u^T, D_l v^T \rangle_{c(t)} \, dt,$$
where $D_l u = u' / \|c'\|$, $D_l u^T = \langle D_l u, w \rangle w$ with $w = c' / \|c'\|$, and $D_l u^N = D_l u - D_l u^T$. Here, we are only interested in the special metric that has been proposed in [45], which is an adaptation of the elastic metric to the Riemannian manifold. In our case this gives:
$$g_c(u, v) = \int \langle \nabla_l u^N, \nabla_l v^N \rangle + \frac{1}{4} \langle \nabla_l u^T, \nabla_l v^T \rangle_{c(t)} \, dt,$$
where $\nabla$ is the Levi-Civita connection that corresponds to $\langle \cdot, \cdot \rangle$; $\nabla_l u = \frac{1}{\|c'\|} \nabla_{c'} u$, $\nabla_l u^T = \langle \nabla_l u, w \rangle w$, and $w = c' / \|c'\|$. The computations now being done in a manifold, the Levi-Civita connection replaces the ordinary derivative of $\mathbb{R}^n$.
Once geometry has been settled in $M$, the geometry of the shape space can be derived from its quotient structure. Let the tangent bundle be decomposed into a vertical and a horizontal subspace, $TM = HM \oplus VM$, with $VM = \ker T_c \pi$, where $T_c$ is the tangent map, $\pi : M \to S$ the principal bundle, and $HM = VM^\perp$ (see Figure 3). The metric is reparametrization-invariant, that is, constant along the fibers; hence we have
$$g_c(u^H, v^H) = [g]_{\pi(c)}\big( T_c \pi(u), T_c \pi(v) \big)$$
where $[g]$ denotes the metric on the shape space.
A similar result in a different (but still close) context is used in Lemma 1 of [60]. In terms of distances, this can be understood in the following sense. The geodesic $s \mapsto [c](s)$ between $[c_0]$ and $[c_1]$ in the shape space is the projection of the horizontal geodesic linking $c_0$ to the fiber containing $c_1$. In fact, the horizontal geodesic between $c_0$ and $c_1$ intersects the fiber of $c_1$ at a reparametrized version of $c_1$, namely $c_1 \circ \phi$, which gives the distance in the shape space:
$$[d]([c_0], [c_1]) = d_g(c_0, c_1 \circ \phi)$$
where $[d]$ denotes the distance in $S$, and $d_g$ denotes the distance on the space of curves induced by the aforementioned Riemannian metric. In the TSRV formulation, the distance problem of Equation (37) yields an optimisation problem:
$$[d]([c_0], [c_1]) = \inf_{\phi \in D} \left( \int_0^1 \left\| q_0(t) - q_1(\phi(t)) \sqrt{\phi'(t)} \right\|^2 dt \right)^{1/2},$$
which is solved by a traditional gradient descent algorithm or a dynamic linear programming [47]. Finally, we have to mention that in a practical situation, the above formula has to be discretized. This is the object of [46]. Formulae are essentially similar, but in this setting, a curve is now represented by a set of points c d i s c ( x 0 , x 1 , , x n ) and the tangent space turns into
T d i s c M = v = ( v 0 , v 1 , , v n ) , v i T x i S O ( n ) , i .
Concerning the metric on the space of curves, it becomes
$$ g_{c_{disc}}(u, v) = \langle u_0, v_0 \rangle + \frac{1}{n} \sum_{k=0}^{n-1} \left\langle \frac{\nabla q_u}{\partial s}\left(0, \tfrac{k}{n}\right), \frac{\nabla q_v}{\partial s}\left(0, \tfrac{k}{n}\right) \right\rangle, \quad u, v \in T_{disc}\mathcal{M} $$
where, as before, for $u \in T_{c_{disc}}\mathcal{M}$, we define a path of piecewise geodesic curves $(s, t) \mapsto c_u(s, t)$ such that the following traditional initial conditions are fulfilled
$$ c_u\left(0, \tfrac{k}{n}\right) = x_k, \quad \text{and} \quad \frac{\partial c_u}{\partial t}\left(0, \tfrac{k}{n}\right) = n \log_{x_k}(x_{k+1}). $$
This is the discrete analogue of the tangent vector of a continuous curve at time $t$. The $\log$ function is the inverse of the exponential map on the base manifold, $SO(n)$ in our case, and $c_u(s, \cdot)$ must be a geodesic on $SO(n)$ between $x_k$ and $x_{k+1}$ on each interval $\left[\tfrac{k}{n}, \tfrac{k+1}{n}\right]$. The SRV function that appears in the formula refers to the SRV function of the piecewise geodesics $c_u(s, \cdot)$. Then, the discretized version of the SRV function, $q_k = \sqrt{n}\, \log_{x_k}(x_{k+1}) / \sqrt{\left|\log_{x_k}(x_{k+1})\right|}$, is such that
$$ \frac{\nabla q}{\partial s}\left(s, \tfrac{k}{n}\right) = \frac{\nabla q_k}{\partial s}(s) $$
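To make the discrete SRV concrete, a direct sketch on $SO(n)$ is given below, using the matrix logarithm for the Riemannian log map and the Frobenius norm for the metric; the helper names `riemannian_log` and `discrete_srv` are ours, and consecutive points are assumed distinct.

```python
import numpy as np
from scipy.linalg import logm

def riemannian_log(x, y):
    """log_x(y) on SO(n): the tangent vector at x pointing towards y."""
    return x @ np.real(logm(x.T @ y))

def discrete_srv(points):
    """Discrete SRV of a curve (x_0, ..., x_n) on SO(n):
    q_k = sqrt(n) * log_{x_k}(x_{k+1}) / sqrt(||log_{x_k}(x_{k+1})||).
    Assumes consecutive points are distinct."""
    n = len(points) - 1
    qs = []
    for k in range(n):
        v = riemannian_log(points[k], points[k + 1])
        qs.append(np.sqrt(n) * v / np.sqrt(np.linalg.norm(v)))
    return qs
```

For a constant-speed discrete curve, all the $q_k$ have the same norm, which is a quick sanity check of the implementation.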

3.4. The Geodesic Equation

Let us now give the geodesic equation, relative to our chosen metric. As a result of the TSRV, the geodesic equation takes a much simpler form than what can be found in [45,46]. The formula can be found in [47]. For the sake of completeness, we give a reformulated proof in Appendix B. Recall that a geodesic is a particular path of curves. A path of curves is a continuous set of curves $s \mapsto c(s, \cdot)$ such that, for each $s$, $c(s, \cdot)$ is a point in $\mathcal{M}$ or, equivalently, a curve in $M$ (see Figure A1). Thus, for each curve of the path of curves, we can define its TSRV function. Then, for all $s \in [0, 1]$, we have (we omit the letter $s$ for clarity): $q = T_{c(t)}^{\,I}\left( \frac{\partial c / \partial t}{\sqrt{\left| \partial c / \partial t \right|}} \right)$, where $T_{c(t)}^{\,I}$ denotes parallel transport from $c(t)$ to the identity.
Theorem 3.
A path of curves $[0, 1] \ni s \mapsto \left(c(s, 0), q(s, t)\right)$ ($t$ is the parameter of the curve $c(s, \cdot)$) is a geodesic on $\mathcal{M}$ if and only if
$$ \frac{\nabla}{\partial s} \frac{\nabla q}{\partial s}(s, t) = 0 \quad \forall s, t. $$
Proof. See Appendix B. ☐
Thus, we have a quite familiar expression for the geodesic interpolation between two curves $c_0$ and $c_1$, expressed in their TSRV domain:
$$ F_{Lie}^{-1}\left( (1 - s)\, F_{Lie}(c_0) + s\, F_{Lie}(c_1) \right) $$
for $s \in [0, 1]$. This expression is nothing but a linear interpolation in the transported tangent space.
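A minimal sketch of this interpolation, under the simplifying assumption that $F_{Lie}$ acts pointwise as the matrix logarithm (reasonable for curves already transported to the identity, but an assumption of ours), could read:

```python
import numpy as np
from scipy.linalg import expm, logm

def geodesic_interp(c0, c1, s):
    """Pointwise sketch of F_Lie^{-1}((1-s) F_Lie(c0) + s F_Lie(c1)):
    linear interpolation in the Lie algebra, mapped back to the group.
    c0, c1 are lists of rotation matrices sampled on the same grid."""
    return [expm((1.0 - s) * np.real(logm(a)) + s * np.real(logm(b)))
            for a, b in zip(c0, c1)]
```

Because a convex combination of skew-symmetric matrices is skew-symmetric, each interpolated point is again a rotation matrix.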
We now have all the ingredients to give the procedure for nonstationary processes characterization and comparison:
  • Input: a set of rotation matrices $\{W_i\}_i$, seen as a partial observation of a closed trajectory on $SO(n)$.
  • Map the set of matrices $W_i$ into a set of matrices $V_i$ in the Lie algebra using the inverse exponential map.
  • Interpolate with splines between the matrices $V_i$ [61,62].
  • Go back to the base manifold $SO(n)$ with the exponential map.
  • Shift the interpolated curve in order to fulfill the condition $c(0) = e$ and compute the SRV transformation given by (41).
  • Compute the distance defined by (38). The optimization is carried out by dynamic programming.
  • Output: the distance between two curves in the manifold defined by the set of curves in $SO(n)$, and the geodesic path between the curves.
The interpolation computations are carried out in the Lie algebra, which is a vector space, and thus they do not demand great computational resources. The discretization step, which amounts to choosing certain values along the continuous curve, is also done in the Lie algebra. In this way, we avoid the calculation of matrix exponentials that would have been discarded in the end. We finally note that geodesic shooting [45,63] or other path-straightening methods could be applied to obtain a geodesic path between two curves, and between the shapes of the two curves.
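The Lie-algebra interpolation steps of the procedure above (inverse exponential map, spline, exponential map) can be sketched as follows; `interpolate_trajectory` is a hypothetical helper name, and the spline is computed componentwise with SciPy's `CubicSpline` under a periodic boundary condition so that the trajectory stays closed.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.linalg import expm, logm

def interpolate_trajectory(Ws, n_interp):
    """Sketch of steps 2-4 of the procedure: map the dilation matrices to
    the Lie algebra, spline-interpolate there, and map back to SO(n)."""
    Vs = np.array([np.real(logm(W)) for W in Ws])     # inverse exponential map
    Vs_closed = np.concatenate([Vs, Vs[:1]], axis=0)  # close the trajectory
    ts = np.linspace(0.0, 1.0, len(Vs_closed))
    spline = CubicSpline(ts, Vs_closed, axis=0, bc_type="periodic")
    # exponential map back to the group
    return [expm(V) for V in spline(np.linspace(0.0, 1.0, n_interp))]
```

Since the spline lives in the (vector-space) Lie algebra of skew-symmetric matrices, every interpolated value maps back to a genuine rotation matrix.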

3.5. Results

In order to show how the approach of this work yields interesting results for the understanding of PC processes, we propose to compare four PC processes, displayed in Figure 4. We also give their corresponding $SO(3)$ representations in Figure 5 and Figure 6, with 200 and 50 interpolated points respectively. For this scenario, we have generated four PC processes with 1000 samples each: a classical amplitude-modulated model $a(t)\cos(2\pi \frac{f}{f_e} t)$, where $a(t)$ is a zero-mean and unit-variance stationary random process, with a period of 20 points; a periodic AR(2) (AutoRegressive) model with a period of 20 points; a periodic AR(2) model with a period of 54 points; and a periodic ARMA(2,1) (AutoRegressive Moving Average) model with a period of 20 points. We have used the R package PerARMA to generate the periodic ARMA and AR signals, and we finally used the PerPACF function of this package to estimate the 20 (or 54) sequences of three parcors each. The joint analysis of Figure 4 and Figure 5 shows that the spectral measure of the amplitude-modulated signal of Figure 4a has dilation matrices which do not spread much; given the weak distance between the matrices, one could think that this process is almost stationary. A contrario, whereas the temporal form of the PARMA(2,1) signal of Figure 4d is quite similar to that of the amplitude-modulated signal of Figure 4a, their representations on SO(3) are very different: the spectral measure of the PARMA(2,1) signal spreads much more. Lastly, when we observe Figure 4b,c, which are generated with the same model but with different periods, we can see that the larger the number of points per period, the more the curve wraps. We also note one of the advantages of using spline interpolation: in Figure 5, the curvature is well approximated owing to more, and closer, interpolated points.
For this figure, we had approximately four times as many interpolated points as original ones, whereas for Figure 6 we computed roughly twice as many interpolated points as original ones. For such a number, a change in the grid could be considered, but it seems again that the spline interpolation gives a good repartition of the points. This is mainly illustrated by Figure 7, for which the difference between the geodesic interpolations of the curve associated with the periodic AR(2) signal of Figure 4c, computed with 200 interpolated points and with 50 interpolated points, is very weak.
To end this analysis with an example, we have computed the distance defined by (38) between the PC process of Figure 8 and all the PC processes studied and displayed in Figure 4 and Figure 5. The distances are reported in Table 1. Clearly, the distances between the shapes of the curves characterizing the spectral measure of each PC process reveal some spectral proximity between the benchmarked PC processes. We have gray-colored the row of the PAR(2) signal model indexed by letter (c). Whatever the number of interpolated points and the dimension of the base manifold, the spectral representation through dilation operators of this signal is the nearest to the PAR(2) signal of reference on $SO(3)$, and the second nearest on $SO(4)$ and $SO(5)$. The second interesting signal is the one indexed by letter (b), which stands for a PAR(2) signal with exactly the same model parameters as the signal of reference but with a period of 54 points instead of 20. We have lightly gray-colored its associated row in Table 1 where it has the shortest distance. The curve associated with this signal has many wraps in its representation and consequently has the greatest distance on $SO(3)$, but increasing the dimension improves the comparison: indeed, it finally has the shortest distance on $SO(4)$ and $SO(5)$. It is particularly interesting to see that there is a competition between the curve of a PAR(2) with different model parameters but the same periodicity and that of a PAR(2) with the same model parameters but a different periodicity.
We end by noticing that the PARMA(2,1) has the second shortest distance to the PAR(2) signal model of reference on $SO(3)$. As Figure 5 shows, their spectral measures evolve in a similar way, with one major loop and a second, less important loop. However, once the dimension of the base manifold increases, the assumption that the two processes may be close is strongly rejected by the fact that the PARMA(2,1) signal model has the longest distance. Finally, these observations leave open the question of the topology of these curves and how it could be used for classification.

4. Conclusions

We have introduced a new vision of stochastic processes through the geometry induced by dilation. The dilation matrices of given processes were obtained by a composition of rotations whose angles correspond to the well-known parcors, also called reflection coefficients or Verblunsky coefficients. The advantage of working with these particular matrices is that they are strongly related to the stochastic measure of the process and, thus, to its spectrum. Furthermore, the dilation theory is independent of the stationarity of the underlying process: when the signal is stationary, its dilation operator is related to the Naimark dilation, whereas when the signal is nonstationary, a set of dilation matrices is obtained, and it is related to the Kolmogorov decomposition. Rigorously, dilation matrices are infinite dimensional, although we turn them into rotation matrices by truncation. Each of them belongs to the Special Orthogonal Group $SO(n)$ or the Special Unitary Group $SU(n)$, depending on whether the process under study is real- or complex-valued. We focused our attention on the Periodically Correlated (PC) class of nonstationary processes, for which a timely ordered set of dilation matrices describes the process measure. This set draws a closed curve on the Lie group of rotation matrices, and describing or classifying the different PC processes is done by comparing curves. We use for that the Square Root Velocity (SRV) function, which represents a curve by its starting point and by its normed velocity vector on the space of curves. The metric on the space of curves naturally extends to the space of shapes. It is then possible to compare the shapes of curves when the metric is translated into the Lie algebra, achieving therefore a closed-form expression and easy computation. Nonstationary processes are then characterized via their embedded curves.

Author Contributions

Conceptualization, M.D.; Formal analysis, M.D. and G.B.; Funding acquisition, G.B. and E.M.; Investigation, M.D.; Methodology, G.B.; Supervision, G.B. and E.M.; Writing—original draft, M.D.; Writing—review and editing, G.B.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Defect Operator, Elementary Rotation

Introducing the defect operator of a contraction $T$ as $D_T = (I - T^*T)^{1/2}$, we have the following factorisation:
$$ \begin{pmatrix} X & Y \\ Y^* & Z \end{pmatrix} = \begin{pmatrix} X^{1/2} & 0 \\ Z^{1/2}\Gamma^* & Z^{1/2} D_{\Gamma} \end{pmatrix} \begin{pmatrix} X^{1/2} & \Gamma Z^{1/2} \\ 0 & D_{\Gamma} Z^{1/2} \end{pmatrix} $$
where $X$ and $Z$ are positive matrices and $\Gamma$ is the contraction such that $Y = X^{1/2} \Gamma Z^{1/2}$. Note that this is a Cholesky-type factorisation. This type of decomposition is used as the square root of matrices in the construction of the dilation. A corollary is that the operator $\begin{pmatrix} I & T \\ T^* & I \end{pmatrix}$ is positive if and only if $T$ is a contraction.
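The factorisation can be checked numerically; the sketch below builds a random contraction $\Gamma$ and positive matrices $X, Z$, forms $Y = X^{1/2} \Gamma Z^{1/2}$, and verifies the block identity (`psqrt` is our helper for the positive square root).

```python
import numpy as np

def psqrt(M):
    """Positive square root of a symmetric positive semidefinite matrix."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2))
G /= 2.0 * np.linalg.norm(G, 2)                        # a strict contraction Gamma
DG = psqrt(np.eye(2) - G.T @ G)                        # defect operator D_Gamma
A = rng.normal(size=(2, 2)); X = A @ A.T + np.eye(2)   # X > 0
B = rng.normal(size=(2, 2)); Z = B @ B.T + np.eye(2)   # Z > 0
Xh, Zh = psqrt(X), psqrt(Z)
Y = Xh @ G @ Zh                                        # Y determined by Gamma

M = np.block([[X, Y], [Y.T, Z]])
L = np.block([[Xh, np.zeros((2, 2))], [Zh @ G.T, Zh @ DG]])
U = np.block([[Xh, G @ Zh], [np.zeros((2, 2)), DG @ Zh]])
assert np.allclose(M, L @ U)                           # the Cholesky-type identity
```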
Theorem A1.
Let $X$ and $Y$ be operators in $\mathcal{L}(\mathcal{H})$. The following statements are equivalent:
  • There exists a contraction $\Gamma$ in $\mathcal{L}(\mathcal{H})$ such that $X = \Gamma Y$;
  • $X^*X \le Y^*Y$.
Proof. 
This result can be proved by taking the contraction $\Gamma$ defined by $\Gamma Y h = X h$ on the range of $Y$ [41]. ☐
As a corollary, if $X^*X = Y^*Y$, then there exists a partial isometry $V$ such that $VX = Y$. It is easy to see that we can choose $V$ to be the contraction $\Gamma$ defined above. The isometry $V$ can also be assumed unitary. For a positive operator $A \in \mathcal{L}(H)$, if we denote by $A^{1/2}$ its unique positive square root, then every $L$ such that $L^*L = A$ is related to $A^{1/2}$ by $A^{1/2} = VL$ (or, equivalently, $A^{1/2} = L^*V^*$).
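The relation $A^{1/2} = VL$ can be illustrated with the polar decomposition obtained from an SVD; this is a numerical sketch for square real matrices, not the operator-theoretic construction.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(3, 3))
A = L.T @ L                          # so that L*L = A

# Polar decomposition L = W P, with P = (L*L)^{1/2} = A^{1/2}
Uc, s, Vt = np.linalg.svd(L)
W = Uc @ Vt                          # unitary part of the decomposition
P = Vt.T @ np.diag(s) @ Vt           # positive part, equals A^{1/2}

assert np.allclose(W.T @ L, P)       # A^{1/2} = V L with V = W^T
assert np.allclose(P @ P, A)         # P is indeed the square root of A
```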
Let us state another theorem that intervenes much in Constantinescu's factorisation of positive-definite kernels. Note that in the following, $\mathcal{R}(\Gamma)$ will denote the closed range of the operator $\Gamma$. We start with a basic case:
Theorem A2 (row contraction).
Let $T = [T_1 \;\; T_2] \in \mathcal{L}(\mathcal{H}_1 \oplus \mathcal{H}_2, \mathcal{H})$. Then $T$ is a contraction if and only if there exist contractions $\Gamma_1 \in \mathcal{L}(\mathcal{H}_1, \mathcal{H})$ and $\Gamma_2 \in \mathcal{L}(\mathcal{H}_2, \mathcal{H})$ such that
$$ T = \left[ \Gamma_1 \quad D_{\Gamma_1^*} \Gamma_2 \right]. $$
Proof. 
The proof is a simple application of Theorem A1. For the direct part, it is obvious that we can take $\Gamma_1$ to be $T_1$. Then $\|T\| \le 1$ implies
$$ I - TT^* = I - \Gamma_1\Gamma_1^* - T_2T_2^* \ge 0, $$
that is, $D_{\Gamma_1^*}^2 \ge T_2T_2^*$. Hence, by Theorem A1, there exists a contraction $\Delta$ such that $\Delta D_{\Gamma_1^*} = T_2^*$. Choosing $\Gamma_2 = \Delta^*$ finishes the argument. ☐
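The structure given by Theorem A2 is easy to verify numerically for matrices; the sketch below checks that $T = [\Gamma_1 \;\; D_{\Gamma_1^*}\Gamma_2]$ is indeed a contraction for arbitrary contractions $\Gamma_1, \Gamma_2$, and that $I - TT^* = D_{\Gamma_1^*}(I - \Gamma_2\Gamma_2^*)D_{\Gamma_1^*}$, with `defect` as our helper.

```python
import numpy as np

def defect(T):
    """D_T = (I - T^T T)^{1/2} for a real matrix contraction T."""
    w, U = np.linalg.eigh(np.eye(T.shape[1]) - T.T @ T)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

rng = np.random.default_rng(2)
G1 = rng.normal(size=(2, 2)); G1 /= 2.0 * np.linalg.norm(G1, 2)
G2 = rng.normal(size=(2, 2)); G2 /= 2.0 * np.linalg.norm(G2, 2)

DG1s = defect(G1.T)                     # D_{Gamma_1^*} = (I - G1 G1^T)^{1/2}
T = np.hstack([G1, DG1s @ G2])          # the row contraction of Theorem A2

assert np.linalg.norm(T, 2) <= 1.0 + 1e-12           # T is a contraction
DG2s = defect(G2.T)
assert np.allclose(np.eye(2) - T @ T.T, DG1s @ DG2s @ DG2s @ DG1s)
```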
In the same way as for the Cholesky factorisation, we can write down the defect operator of the whole contraction $T = [T_1 \;\; T_2]$ [41] as
$$ D_T^2 = \begin{pmatrix} D_{\Gamma_1} & 0 \\ -\Gamma_2^*\Gamma_1 & D_{\Gamma_2} \end{pmatrix} \begin{pmatrix} D_{\Gamma_1} & -\Gamma_1^*\Gamma_2 \\ 0 & D_{\Gamma_2} \end{pmatrix}. $$
Therefore, with Theorem A2, there exists an operator $\alpha$ such that
$$ D_T = \begin{pmatrix} D_{\Gamma_1} & 0 \\ -\Gamma_2^*\Gamma_1 & D_{\Gamma_2} \end{pmatrix} \alpha. $$
Similarly,
$$ D_{T^*}^2 = D_{\Gamma_1^*} D_{\Gamma_2^*} D_{\Gamma_2^*} D_{\Gamma_1^*} $$
and the general case is:
Theorem A3 (Structure of row contraction).
The following are equivalent:
  • The operator $T_n = [T_1 \;\; T_2 \;\; \cdots \;\; T_n]$ in $\mathcal{L}\left(\bigoplus_{k=1}^n \mathcal{H}_k, \mathcal{H}\right)$ is a contraction;
  • $T_1 = \Gamma_1$ is a contraction and, for $k \ge 2$, there exist uniquely determined contractions $\Gamma_k \in \mathcal{L}\left(\mathcal{H}_k, \mathcal{R}(D_{\Gamma_{k-1}^*})\right)$ such that $T_k = D_{\Gamma_1^*} D_{\Gamma_2^*} \cdots D_{\Gamma_{k-1}^*} \Gamma_k$.
Furthermore, the defect operators of the whole contraction T are of the form
$$ D_T^2 = \begin{pmatrix} D_{\Gamma_1} & 0 & \cdots & 0 \\ -\Gamma_2^*\Gamma_1 & D_{\Gamma_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -\Gamma_n^* D_{\Gamma_{n-1}^*} \cdots D_{\Gamma_2^*} \Gamma_1 & -\Gamma_n^* D_{\Gamma_{n-1}^*} \cdots D_{\Gamma_3^*} \Gamma_2 & \cdots & D_{\Gamma_n} \end{pmatrix} \begin{pmatrix} D_{\Gamma_1} & -\Gamma_1^*\Gamma_2 & \cdots & -\Gamma_1^* D_{\Gamma_2^*} \cdots D_{\Gamma_{n-1}^*} \Gamma_n \\ 0 & D_{\Gamma_2} & \cdots & -\Gamma_2^* D_{\Gamma_3^*} \cdots D_{\Gamma_{n-1}^*} \Gamma_n \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & D_{\Gamma_n} \end{pmatrix} $$
and
$$ D_{T^*}^2 = D_{\Gamma_1^*} \cdots D_{\Gamma_n^*} D_{\Gamma_n^*} \cdots D_{\Gamma_1^*}. $$
Proof. 
It can be proved straightforwardly by induction. ☐
This construction makes it possible to understand the appearance of the operators $\alpha$ and $\beta$ in the publications of Constantinescu, which are used to identify the defect spaces of the components (the underlying contractions) of a row contraction with the defect space of the row contraction itself. The same results are readily obtained for a column contraction of the form $T = \begin{pmatrix} T_1 \\ T_2 \end{pmatrix}$.
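For matrices, the structure of Theorem A3 can be verified by building $T$ from an arbitrary choice sequence of contractions; the sketch below checks the contraction property and the formula for $D_{T^*}^2$ in the case $n = 3$.

```python
import numpy as np

def defect_star(G):
    """D_{Gamma^*} = (I - Gamma Gamma^T)^{1/2} for a real contraction."""
    w, U = np.linalg.eigh(np.eye(G.shape[0]) - G @ G.T)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

rng = np.random.default_rng(3)
Gs = [rng.normal(size=(2, 2)) for _ in range(3)]
Gs = [G / (2.0 * np.linalg.norm(G, 2)) for G in Gs]   # a choice sequence

# T_k = D_{Gamma_1^*} ... D_{Gamma_{k-1}^*} Gamma_k, stacked as a row
blocks, prefix = [], np.eye(2)
for G in Gs:
    blocks.append(prefix @ G)
    prefix = prefix @ defect_star(G)
T = np.hstack(blocks)

assert np.linalg.norm(T, 2) <= 1.0 + 1e-12                   # T is a contraction
assert np.allclose(np.eye(2) - T @ T.T, prefix @ prefix.T)   # D_{T^*}^2 formula
```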

Appendix B. Geodesic Equation in the Space of Curve M

To get a complete insight into the geodesic equation, we give the proof for a more general case that arises when considering the SRV, and not only the TSRV, function of a curve; that is, the curves are parametrised by their starting point and their velocity, but their starting points are not transported to the identity.
Theorem A4.
A path of curves $[0, 1] \ni s \mapsto \left(c(s, 0), q(s, t)\right)$ ($t$ is the parameter of the curve $c(s, \cdot)$) is a geodesic on $\mathcal{M}$ if and only if:
$$ \frac{\nabla}{\partial s} \frac{\partial c}{\partial s}(s, 0) + \int_0^1 \mathcal{R}\left( q(s, t), \frac{\nabla q}{\partial s}(s, t) \right)\left( \frac{\partial c}{\partial s}(s, 0) \right) dt = 0 \quad \forall s $$
$$ \frac{\nabla}{\partial s} \frac{\nabla q}{\partial s}(s, t) = 0 \quad \forall s, t $$
Similarly to [45,48], we consider a variation of the path $s \mapsto \left(c(s, 0), q(s, t)\right)$ starting and ending at the same points, which we denote $\left(c(s, 0, \tau), q(s, t, \tau)\right)$. In Figure A1, to give a clear picture, we have represented a variation of a path of curves with fixed starting and ending points. Although similar, the situation here is a bit different because of the representation of the curve through its SRV function, which we can hardly represent. However, the process remains similar. We emphasise the subtle difference with [45]: here, we work directly in the tangent space representation, via the SRV representation, and not with "the whole family" of curves $c(s, t, \tau)$.
We denote $\partial_\tau c(s, 0, \tau) = \frac{\partial c(s, 0, \tau)}{\partial \tau}$, and similarly for $\partial_s c(s, 0, \tau)$. The energy of the path indexed by $\tau$ is
$$ E(\tau) = \frac{1}{2} \int_0^1 \left( \left\langle \partial_s c(s, 0, \tau), \partial_s c(s, 0, \tau) \right\rangle + \int_0^1 \left\langle \frac{\nabla q}{\partial s}(s, t, \tau), \frac{\nabla q}{\partial s}(s, t, \tau) \right\rangle dt \right) ds. $$
Recall that the derivative of the inner product is given by $\frac{d}{dx}\langle f(x), f(x) \rangle = 2 \left\langle \nabla_{x} f(x), f(x) \right\rangle$. Then
$$ E'(0) = \int_0^1 \left( \left\langle \frac{\nabla}{\partial \tau} \partial_s c(s, 0, 0), \partial_s c(s, 0, 0) \right\rangle + \int_0^1 \left\langle \frac{\nabla}{\partial \tau} \frac{\nabla q}{\partial s}(s, t, 0), \frac{\nabla q}{\partial s}(s, t, 0) \right\rangle dt \right) ds $$
with $\frac{\nabla}{\partial s} \partial_\tau c(s, 0, \tau) = \frac{\nabla}{\partial \tau} \partial_s c(s, 0, \tau)$ and, owing to the curvature tensor, $\mathcal{R}\left(\partial_\tau c(s, 0, \tau), \partial_s c(s, 0, \tau)\right) q(s, t, \tau) = \frac{\nabla}{\partial \tau} \frac{\nabla}{\partial s} q(s, t, \tau) - \frac{\nabla}{\partial s} \frac{\nabla}{\partial \tau} q(s, t, \tau)$, we have
$$ E'(0) = \int_0^1 \left( \left\langle \frac{\nabla}{\partial s} \partial_\tau c(s, 0, \tau), \partial_s c(s, 0, \tau) \right\rangle + \int_0^1 \left\langle \mathcal{R}\left(\partial_\tau c, \partial_s c\right) q, \frac{\nabla q}{\partial s} \right\rangle + \left\langle \frac{\nabla}{\partial s} \frac{\nabla}{\partial \tau} q, \frac{\nabla q}{\partial s} \right\rangle dt \right) ds. $$
Integrating by parts now allows us to obtain
$$ \int_0^1 \left\langle \frac{\nabla}{\partial s} \partial_\tau c(s, 0, \tau), \partial_s c(s, 0, \tau) \right\rangle ds = - \int_0^1 \left\langle \frac{\nabla}{\partial s} \partial_s c(s, 0, \tau), \partial_\tau c(s, 0, \tau) \right\rangle ds $$
$$ \int_0^1 \left\langle \frac{\nabla}{\partial s} \frac{\nabla}{\partial \tau} q(s, t, \tau), \frac{\nabla q}{\partial s}(s, t, \tau) \right\rangle ds = - \int_0^1 \left\langle \frac{\nabla}{\partial s} \frac{\nabla q}{\partial s}(s, t, \tau), \frac{\nabla}{\partial \tau} q(s, t, \tau) \right\rangle ds $$
which yields
$$ E'(0) = \int_0^1 \left( - \left\langle \frac{\nabla}{\partial s} \partial_s c(s, 0, \tau), \partial_\tau c(s, 0, \tau) \right\rangle + \int_0^1 \left\langle \mathcal{R}\left(\partial_\tau c, \partial_s c\right) q, \frac{\nabla q}{\partial s} \right\rangle - \left\langle \frac{\nabla}{\partial s} \frac{\nabla q}{\partial s}, \frac{\nabla}{\partial \tau} q \right\rangle dt \right) ds. $$
For any vector fields $X, Y, Z, W$, we have $\left\langle \mathcal{R}(X, Y) Z, W \right\rangle = \left\langle \mathcal{R}(Z, W) X, Y \right\rangle$; we consequently obtain
$$ E'(0) = - \int_0^1 \left( \left\langle \frac{\nabla}{\partial s} \partial_s c(s, 0, \tau) + \int_0^1 \mathcal{R}\left( q, \frac{\nabla q}{\partial s} \right)\left( \partial_s c(s, 0, \tau) \right) dt, \; \partial_\tau c(s, 0, \tau) \right\rangle + \int_0^1 \left\langle \frac{\nabla}{\partial s} \frac{\nabla q}{\partial s}, \frac{\nabla}{\partial \tau} q \right\rangle dt \right) ds. $$
A geodesic corresponds to a path of minimal energy: every other path that starts and ends at the same points requires more energy than the geodesic. We then have to solve $E'(0) = 0$ for every variation $\partial_\tau c(s, 0, \tau)$ and every $\frac{\nabla}{\partial \tau} q(s, t, \tau)$. This gives the result. ☐
When the framework is given by the TSRV and not by the SRV, only the second part of the geodesic equation remains, as a result of the fixed starting point, which corresponds to the identity element. This simplifies the equation considerably, even though the derivation is the same.

References

  1. Auger, F.; Flandrin, P.; Lin, Y.-T.; McLaughlin, S.; Meignen, S.; Oberlin, T.; Wu, H.-T. Time-Frequency Reassignment and Synchrosqueezing: An Overview. IEEE Signal Process. Mag. 2013, 30, 32–41. [Google Scholar] [CrossRef][Green Version]
  2. Flandrin, P. Time-Frequency/Time-Scale Analysis; Academic Press: Cambridge, MA, USA, 1999; Volume 10. [Google Scholar]
  3. Hurd, H.L.; Miamee, A. Periodically Correlated Random Sequences: Spectral Theory and Practice; Wiley Series in Probability and Statistics; John Wiley: Hoboken, NJ, USA, 2007. [Google Scholar]
  4. Napolitano, A. Cyclostationarity: New trends and applications. Signal Process. 2016, 120, 385–408. [Google Scholar] [CrossRef]
  5. Dégerine, S.; Lambert-Lacroix, S. Characterization of the partial autocorrelation function of nonstationary time series. J. Multivar. Anal. 2003, 87, 46–59. [Google Scholar] [CrossRef]
  6. Lambert-Lacroix, S. Extension of Autocovariance Coefficients Sequence for Periodically Correlated Processes. J. Time Ser. Anal. 2005, 26, 423–435. [Google Scholar] [CrossRef][Green Version]
  7. Desbouvries, F. Unitary Hessenberg and state-space model based methods for the harmonic retrieval problem. IEE Proc. Radar Sonar Navig. 1996, 143, 346–348. [Google Scholar] [CrossRef]
  8. Yang, L.; Arnaudon, M.; Barbaresco, F. Riemannian median, geometry of covariance matrices and radar target detection. In Proceedings of the 7th European Radar Conference, Paris, France, 30 September–1 October 2010; pp. 415–418. [Google Scholar]
  9. Bingham, N.H. Szego’s Theorem and Its Probabilistic Descendants. Prob. Surv. 2012, 9, 287–324. [Google Scholar] [CrossRef]
  10. Simon, B. Orthogonal Polynomials on the Unit Circle Part 1 and Part 2; American Mathematical Society: Providence, RI, USA, 2009; Volume 54. [Google Scholar]
  11. Delsarte, P.; Genin, Y.V.; Kamp, Y.G. Orthogonal polynomial matrices on the unit circle. IEEE Trans. Circuits Syst. 1978, 25, 149–160. [Google Scholar] [CrossRef]
  12. Barbaresco, F. Radar micro-Doppler Signal Encoding in Siegle Unit Poly-Disk for Machine Learning in Fisher Metric Space. In Proceedings of the 2018 19th International Radar Symposium (IRS), Bonn, Germany, 20–22 June 2018. [Google Scholar]
  13. Simon, B. CMV matrices: Five years after. J. Comput. Appl. Math. 2007, 208, 120–154. [Google Scholar] [CrossRef]
  14. Ammar, G.; Gragg, W.; Reichel, L. Constructing a Unitary Hessenberg Matrix from Spectral Data; Springer: Berlin, Germany, 1991; pp. 385–395. [Google Scholar]
  15. Masani, P. Dilations as Propagators of Hilbertian Varieties. SIAM J. Math. Anal. 1978, 9, 414–456. [Google Scholar] [CrossRef]
  16. Constantinescu, T. Schur Parameters, Factorization and Dilation Problems; Birkhäuser: Basel, Switzerland, 1995. [Google Scholar]
  17. Nagy, B.S.; Foias, C.; Bercovici, H.; Kérchy, L. Harmonic Analysis of Operators on Hilbert Space; Springer: New York, NY, USA, 2010. [Google Scholar]
  18. Arsene, G.; Constantinescu, T. The Structure of the Naimark Dilation and Gaussian Stationary Processes. Integral Equ. Oper. Theory 1985, 8, 181–204. [Google Scholar] [CrossRef]
  19. Arnaudon, M.; Barbaresco, F.; Yang, L. Riemannian Medians and Means With Applications to Radar Signal Processing. IEEE J. Sel. Top. Signal Process. 2013, 7, 595–604. [Google Scholar] [CrossRef]
  20. Barbaresco, F. Interactions between symmetric cone and information geometries: Bruhat-tits and siegel spaces models for high resolution autoregressive doppler imagery. In Emerging Trends in Visual Computing; Springer: Berlin, Germany, 2009; pp. 124–163. [Google Scholar]
  21. Desbouvries, F. Geometrical Aspects of Linear Prediction Algorithms. Available online: http://www.math.ucsd.edu/helton/MTNSHISTORY/CONTENTS/2000PERPIGNAN/CDROM/articles/B85.pdf (accessed on 9 September 2018).
  22. Desbouvries, F. Non-euclidean geometrical aspects of the schur and levinson-szego algorithms. IEEE Trans. Inf. Theory 2003, 49, 1992–2003. [Google Scholar] [CrossRef][Green Version]
  23. Balaji, B.; Barbaresco, F.; Decurninge, A. Information geometry and estimation of Toeplitz covariance matrices. In Proceedings of the 2014 International Radar Conference, Lille, France, 13–17 October 2014; pp. 1–4. [Google Scholar]
  24. Barbaresco, F. Geometric Radar Processing based on Fréchet distance: Information geometry versus Optimal Transport Theory. In Proceedings of the 2011 12th International Radar Symposium (IRS), Leipzig, Germany, 7–9 September 2011; pp. 663–668. [Google Scholar]
  25. Barbaresco, F. Information Geometry of Covariance Matrix: Cartan-Siegel Homogeneous Bounded Domains, Mostow/Berger Fibration and Fréchet Median. In Matrix Information Geometry; Springer: Berlin/Heidelberg, Germany, 2013; pp. 199–255. [Google Scholar]
  26. Lipeng, N.; Xianhua, J.; Georgiou, T. Geometric methods for structured covariance estimation. In Proceedings of the 2012 American Control Conference (ACC), Montreal, QC, Canada, 27–29 June 2012; pp. 1877–1882. [Google Scholar]
  27. Pennec, X. Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements. J. Math. Imaging Vis. 2006, 25, 127–154. [Google Scholar] [CrossRef]
  28. Pennec, X.; Fillard, P.; Ayache, N. A Riemannian framework for tensor computing. Int. J. Comput. Vis. 2006, 66, 41–66. [Google Scholar] [CrossRef][Green Version]
  29. Barbaresco, F.; Ruiz, M. Radar detection for nonstationary Doppler signal in one burst based on information geometry: Distance between paths on covariance matrices manifold. In Proceedings of the 2015 European Radar Conference (EuRAD), Paris, France, 9–11 September 2015; pp. 41–44. [Google Scholar]
  30. Le Brigant, A.; Arnaudon, M.; Barbaresco, F. Optimal Matching Between Curves in a Manifold. In Proceedings of the 2017 International Conference on Geometric Science of Information, Paris, France, 7–9 November 2017; pp. 57–64. [Google Scholar]
  31. Arnold, V. Sur la Géométrie Différentielle des Groupes de Lie de Dimension Infinie et Ses Applications à L’hydrodynamique des Fluides Parfaits. Annales de l’institut Fourier 1966, 319–361. (In French) [Google Scholar] [CrossRef]
  32. Kailath, T.; Bruckstein, A.M. Naimark Dilations, State-Space Generators and Transmission Lines. In Advances in Invariant Subspaces and Other Results of Operator Theory; Douglas, R.G., Pearcy, C.M., Nagy, B.S., Vasilescu, F.-H., Voiculescu, D., Arsene, G., Eds.; Birkhäuser: Basel, Switzerland, 1986; pp. 173–186. [Google Scholar]
  33. Sayed, A.H.; Constantinescu, T.; Kailath, T. Recursive Construction of Multichannel Transmission Lines with a Maximum Entropy Property. In Codes, Graphs, and Systems; Springer: Berlin, Germany, 2002; pp. 259–290. [Google Scholar]
  34. Makagon, A.; Salehi, H. Notes on infinite dimensional stationary sequences. In Probability Theory on Vector Spaces IV; Cambanis, S., Weron, A., Eds.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1989; Volume 1391, pp. 200–238. [Google Scholar]
  35. Miamee, A. Spectral dilation of L(B,H)-valued measures and its application to stationary dilation for Banach space valued processes. Indiana Univ. Math. J. 1989, 38, 841–860. [Google Scholar] [CrossRef]
  36. Miamee, A. Periodically Correlated Processes and Their Stationary Dilations. SIAM J. Appl. Math. 1990, 50, 1194–1199. [Google Scholar] [CrossRef][Green Version]
  37. Bochner, S. Vorlesungen über Fouriersche Integrale; Mathematik und ihre Anwendungen, Akademie-Verlag: Leipzig, Germany, 1932. [Google Scholar]
  38. Foias, C.; Frazho, A.E. A Geometric Approach to Positive Definite Sequences. In The Commutant Lifting Approach to Interpolation Problems; Birkhäuser: Basel, Switzerland, 1990; pp. 497–546. [Google Scholar]
  39. Frazho, A.E.; Arthur, E. On Stochastic Bilinear Systems. In Modelling and Application of Stochastic Processes; Springer: Boston, MA, USA, 1986; pp. 214–241. [Google Scholar]
  40. Timotin, D. Prediction theory and choice sequences: An alternate approach. In Advances in Invariant Subspaces and Other Results of Operator Theory; Springer: Berlin, Germany, 1986; pp. 341–352. [Google Scholar]
  41. Tseng, M.C. Contractions, Matrix Paramatrizations, and Quantum Information. arXiv, 2006; arXiv:quant-ph/0610259. [Google Scholar]
  42. Tseng, M.C.; Ramakrishna, V. Dilation Theoretic Parametrizations of Positive Matrices with Applications to Quantum Information. arXiv, 2006; arXiv:quant-ph/0610021. [Google Scholar]
  43. Damanik, D.; Pushnitski, A.; Simon, B. The Analytic Theory of Matrix Orthogonal Polynomials. Surv. Approx. Theory 2008, 4, 1–85. [Google Scholar]
  44. Bakonyi, M.; Constantinescu, T. Pitman Research Notes in Math. In Schur’s Algorithm and Several Applications; Longman Sc and Tech: Harlow, UK, 1992; Volume 261. [Google Scholar]
  45. Le Brigant, A. Computing distances and geodesics between manifold-valued curves in the SRV framework. arXiv, 2016; arXiv:1601.02358. [Google Scholar] [CrossRef]
  46. Le Brigant, A. A discrete framework to find the optimal matching between manifold-valued curves. arXiv, 2017; arXiv:1703.05107. [Google Scholar] [CrossRef]
  47. Celledoni, E.; Eslitzbichler, M.; Schmeding, A. Shape Analysis on Lie Groups with Applications in Computer Animation. arXiv, 2015; arXiv:1506.00783. [Google Scholar] [CrossRef]
  48. Zhang, Z.; Su, J.; Klassen, E.; Le, H.; Srivastava, A. Video-based action recognition using rate-invariant analysis of covariance trajectories. arXiv, 2015; arXiv:1503.06699. [Google Scholar]
  49. Le Brigant, A.; Arnaudon, M.; Barbaresco, F. A Probability on the Spaces of Curvesand the Associated Metric Spaces via Information Geometry; Radar Application. Ph.D. Thesis, Université de Bordeaux, Bordeaux, France, 4 July 2017. [Google Scholar]
  50. Kobayashi, S.; Nomizu, K. Foundations of Differential Geometry; Interscience Publishers: New York, NY, USA, 1963; Volume 1. [Google Scholar]
  51. Kriegl, A.; Michor, P.W. Aspects of the theory of infinite dimensional manifolds. Differ. Geom. Appl. 1991, 1, 159–176. [Google Scholar] [CrossRef]
  52. Michor, P.W.; Mumford, D. Shiva Mathematics Series. In Manifolds on Differential Mappings; Birkhauser: Basel, Switzerland, 1980; Volume 3. [Google Scholar]
  53. Michor, P.W.; Mumford, D. An overview of the Riemannian metrics on shape spaces of curves using the Hamiltonian approach. Appl. Comput. Harmon. Anal. 2007, 23, 74–113. [Google Scholar] [CrossRef][Green Version]
  54. Michor, P.W. Topics in Differential Geometry; Graduate Studies in Mathematics American Mathematical Society: Providence, RI, USA, 2008; Volume 93. [Google Scholar]
  55. Srivastava, A.; Klassen, E.; Joshi, S.H.; Jermyn, I.H. Shape Analysis of Elastic Curves in Euclidean Spaces. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1415–1428. [Google Scholar] [CrossRef] [PubMed][Green Version]
  56. Bauer, M.; Bruveris, M.; Marsland, S.; Michor, P.W. Constructing reparameterization invariant metrics on spaces of plane curves. Differ. Geom. Appl. 2014, 34, 139–165. [Google Scholar] [CrossRef][Green Version]
  57. Michor, P.W.; Mumford, D. Vanishing geodesic distance on spaces of submanifolds and diffeomorphisms. Doc. Math 2005, 10, 217–245. [Google Scholar]
  58. Bauer, M.; Bruveris, M.; Michor, P.W. Why Use Sobolev Metrics on the Space of Curves. In Riemannian Computing in Computer Vision; Springer: Cham, Switzerland, 2016; pp. 233–255. [Google Scholar][Green Version]
  59. Gelman, A.; Carlin, J.B.; Stern, H.S.; Dunson, D.B.; Vehtari, A.; Rubin, D.B. Bayesian Data Analysis; CRC Press: Boca Raton, FL, USA, 2014; Volume 2. [Google Scholar]
  60. Yili, L.; Wong, K.M. Riemannian Distances for Signal Classification by Power Spectral Density. IEEE J. Sel. Top. Signal Process. 2013, 7, 655–669. [Google Scholar] [CrossRef]
  61. Hofer, M.; Pottmann, H. Energy-minimizing Splines in Manifolds. ACM Trans. Graph. 2004, 23, 284–293. [Google Scholar] [CrossRef]
  62. Shingel, T. Interpolation in special orthogonal groups. IMA J. Num. Anal. 2009, 29, 731–745. [Google Scholar] [CrossRef]
  63. Pilté, M.; Barbaresco, F. Tracking quality monitoring based on information geometry and geodesic shooting. In Proceedings of the 2016 17th International Radar Symposium (IRS), Krakow, Poland, 10–12 May 2016; pp. 1–6. [Google Scholar]
Figure 1. Illustration of a sampled closed trajectory drawn in S O ( n ) or S U ( n ) that materializes the time varying of the Periodically Correlated (PC) measure for a stochastic process. Each W i is a dilation matrix built through the parcors. Recall that a PC process is a process such that R s , t = R s + T , t + T for a certain T, where R · , · stands for the correlation function of the process.
Figure 2. Example of a reparametrization of a curve. Here, it consists in changing the discretization with nonlinear time sample.
Figure 3. The tangent space T [ c ] M at a point [ c ] in the shape space S is isomorphic to the horizontal part H M of the tangent space at a point on the associated fiber.
Figure 4. 1000 samples of PC processes generated by (a) a modulated zero mean and unit variance stationary random process a ( t ) ; (b) a periodic AR(2) model with a period of 54 points; (c) a periodic AR(2) model with a period of 20 points; and (d) a periodic ARMA(2,1) model with a period of 20 points.
Figure 5. Representation inside the ball of radius π of the four PC processes drawn in Figure 4, arranged in the same order with 200 interpolated points represented with green stars, the dashed black line is the theoretical curve and the red dots are the representation of dilation matrices. (a) is the representation of Figure 4a, (b) is the representation of Figure 4b, (c) is the representation of Figure 4c and (d) is the representation of Figure 4d.
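The ball-of-radius-π representation used in Figures 5 and 6 corresponds to the standard axis-angle (matrix-logarithm) coordinates on S O ( 3 ). A minimal sketch, assuming rotation angles strictly below π:

```python
import numpy as np

def rotation_to_ball(R):
    """Map R in SO(3) to axis-angle coordinates theta * u, a point in the
    closed ball of radius pi (theta = rotation angle, u = unit axis).
    Assumes theta < pi so the axis can be read off the antisymmetric part."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return theta * u

# A rotation of 0.7 rad about the z-axis maps to the point (0, 0, 0.7):
c, s = np.cos(0.7), np.sin(0.7)
p = rotation_to_ball(np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]))
```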
Figure 6. Representation inside the ball of radius π of the four PC processes drawn in Figure 4, arranged in the same order, with 50 interpolated points, except for plot (b), which was computed with 100 interpolated points, shown as green stars; the dashed black line is the theoretical curve and the red dots represent the dilation matrices. Panels (a–d) correspond to Figure 4a–d, respectively.
Figure 7. Geodesic interpolation with respect to (43) between the green dashed curve (gold-standard signal of Figure 8) and the red dashed curve (signal of Figure 4c); first row for 200 interpolated points, second row for 50 interpolated points. For this scenario, s ∈ { 1 / 4 , 1 / 2 , 3 / 4 }, which corresponds to [(a,d), (b,e), (c,f)], respectively.
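Equation (43) defines geodesics in the space of curves under the paper's metric. The sketch below shows only the simpler pointwise geodesic between two discretized S O ( 3 ) curves; it is not the paper's Equation (43), but it illustrates the role of the interpolation parameter s.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: exponential of the skew-symmetric matrix of w."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    W = np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])
    return np.eye(3) + (np.sin(th) / th) * W + ((1.0 - np.cos(th)) / th**2) * (W @ W)

def so3_log(R):
    """Axis-angle vector of R (assumes rotation angle < pi)."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(th, 0.0):
        return np.zeros(3)
    return (th / (2.0 * np.sin(th))) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def pointwise_geodesic(C0, C1, s):
    """Interpolate two discretized SO(3) curves point by point:
    c_s(t) = c_0(t) exp(s * log(c_0(t)^T c_1(t)))."""
    return [R0 @ so3_exp(s * so3_log(R0.T @ R1)) for R0, R1 in zip(C0, C1)]

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# The midpoint (s = 1/2) between rot_z(0.4) and rot_z(1.2) is rot_z(0.8):
mid = pointwise_geodesic([rot_z(0.4)], [rot_z(1.2)], 0.5)[0]
```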
Figure 8. A PAR(2) signal with a period of 20 points (1000 samples were generated) and its corresponding SO(3) representation inside the ball of radius π.
Figure A1. We consider a beam of curves, obtained as slight modifications of the geodesic and indexed by τ. The idea is to find which of these curves gives the minimal energy to go from c 0 to c 1.
Table 1. Distances between each PC process of Figure 4 and the gold-standard PC process of Figure 8, computed as the distance between their curves’ shapes on S O ( 3 ), S O ( 4 ) and S O ( 5 ). We interpolated with roughly twice and four times the number of original points, and applied dynamic programming (DP) to solve the optimal assignment problem.
| Model of Signal Displayed in Figure 4 | SO(3), 200 pts | SO(3), 50 pts | SO(4), 200 pts | SO(4), 50 pts | SO(5), 200 pts | SO(5), 50 pts |
|---|---|---|---|---|---|---|
| (a) | 5.72 | 4.47 | 97.19 | 26.86 | 526.36 | 95.47 |
| (b) — 100 pts instead of 50 pts | 31.63 | 28.98 | 41.78 | 12.98 | 298.64 | 220.32 |
| (c) | 3.44 | 3.29 | 90.89 | 20.23 | 476.55 | 116.06 |
| (d) | 4.19 | 4.50 | 187.42 | 50.36 | 621.51 | 171.73 |

Dugast, M.; Bouleux, G.; Marcon, E. Representation and Characterization of Nonstationary Processes by Dilation Operators and Induced Shape Space Manifolds. Entropy 2018, 20, 717. https://doi.org/10.3390/e20090717
