From Time–Frequency to Vertex–Frequency and Back

The paper presents an analysis and overview of vertex-frequency analysis, an emerging area in graph signal processing. A strong formal link of this area to classical time-frequency analysis is provided. Vertex-frequency localization-based approaches to analyzing signals on graphs emerged as a response to the challenges of analyzing big data on irregular domains. Graph signals are either localized in the vertex domain before the spectral analysis is performed, or localized in the spectral domain before the inverse graph Fourier transform is applied. The latter approach is the spectral form of vertex-frequency analysis, and it will be considered in this paper, since the spectral domain for signal localization is well ordered and thus simpler for application to graph signals. The localized graph Fourier transform is defined based on its counterpart in classical signal analysis, the short-time Fourier transform. We consider various spectral window forms based on which these transforms can tackle the localized signal behavior. Conditions for the signal reconstruction, known as the overlap-and-add (OLA) and weighted overlap-and-add (WOLA) methods, are also considered. Since the graphs can be very large, realizations of vertex-frequency representations using polynomial form localization have a particular significance. These forms use only very localized vertex domains and, since they require neither the graph Fourier transform nor the inverse graph Fourier transform, are computationally efficient. These kinds of implementations are then applied to classical time-frequency analysis, since their simplicity can be very attractive for implementation in the case of large time-domain signals. Spectral varying forms of the localization functions are presented as well. These spectral varying forms are related to the wavelet transform. For completeness, the inversion and signal reconstruction are discussed as well.
The presented theory is illustrated and demonstrated on numerical examples.


Introduction
Processing of big data, whose domain is irregular and can be represented by a graph, has attracted significant research interest [1][2][3][4][5][6][7][8][9][10]. For big data, the possibility of using smaller and localized subsets of the available information is crucial for efficient analysis and processing [11]. In addition, in practical applications when large graphs are used as the signal domain, we are commonly more interested in localized analysis than in global behavior. In order to characterize the vertex-localized behavior of signals and their narrowband spectral properties, the joint vertex-frequency domain analysis is introduced. This analysis represents a natural analogy to time-frequency analysis, a well-established area in classical signal processing [12][13][14].
In classical signal analysis, the basic short-time Fourier transform approach uses window functions to localize the signal in time, while the projection of such a windowed signal onto the Fourier transform basis functions provides its spectral localization. Time localization, combined with the modulation by the basis functions, produces kernel functions for classical time-frequency analysis. The classical time-frequency analysis approach has been extended to vertex-frequency analysis for signals defined on graphs [15][16][17][18][19][20][21][22]. This generalization is not straightforward, since a graph is a complex and irregular signal domain. Namely, even the time-shift operation, which is trivial in classical time-domain analysis, cannot be straightforwardly generalized to the graph signal domain. This has resulted in several approaches to defining vertex-frequency kernels. One approach is based on vertex domain windows defined using the graph spectral domain [23]. The vertex domain windows can also be fully defined in the vertex domain, using the vertex neighborhood [19].
The vertex domain approaches are based on local analysis within a vertex neighborhood and can be very efficient in large graph analysis. This paper will focus on the vertex-frequency kernels defined in the spectral domain, with spectral shifts performed as in classical signal analysis, while the vertex shifts are implemented in an indirect way, using the basis functions. This approach produces practically very efficient forms, especially when combined with polynomial approximations of the analysis kernels. This paper's primary goal is to provide a strong link between time-frequency analysis and vertex-frequency analysis, and to indicate some new possibilities for simple methods in the time-frequency analysis of large-duration signals based on the vertex-frequency forms. Conditions for the signal reconstruction, known as the overlap-and-add (OLA) and weighted overlap-and-add (WOLA) methods, are considered, and the window forms from classical signal analysis are adapted to satisfy these conditions, with appropriate comments related to their application to vertex-frequency analysis, where the eigenvalues are used instead of the frequency.
The paper is structured as follows. Basic definitions in graph theory and signals on graphs, including the graph Fourier transform, are reviewed in Section 2. A solid formal relation between the classical signal processing paradigm and graph signal processing is provided in Section 3, where benchmark graphs and signals are introduced. In Section 4, the spectral-domain localized graph Fourier transform is presented, along with a few simple basic implementation forms. The general OLA and WOLA conditions for analysis in the graph spectral domain are introduced, with illustrations on several windows for each of these conditions, including the spectral domain wavelet-like transform. The polynomial approximations of the presented kernels are the topic of Section 5, where the Chebyshev polynomial series, least-squares approximation, and Legendre polynomial approximation are presented. Inversion of the local graph Fourier transform is elaborated on in Section 6, where both of the defined kernel forms are analyzed. The support uncertainty principle in the general form (such that it can be used for graph signals) is presented in Section 7, along with a discussion on the relation between the local graph Fourier transform support and the kernel function width in the spectral domain. The possibility of splitting large signals into smaller parts and simplifying the analysis of such signals is considered in Section 8. The presented theory is illustrated in numerous examples. The manuscript closes with summarized conclusions and the reference list.

Basic Graph Definitions
A graph consists of N vertices, n ∈ V = {1, 2, . . ., N}, which are connected by edges. The weight of the edge between vertices m and n is W_mn [24][25][26]. For vertices m and n which are not connected, by definition W_mn = 0. The weights of the edges are the elements of an N × N matrix, W. Graphs can be directed or undirected. For undirected graphs it is assumed that the vertices m and n are connected by the same edge weight in both directions, resulting in a symmetric weight matrix, W = W^T. A graph is unweighted if all nonzero elements of its weight matrix, W, are equal to 1. In this case the weight matrix assumes a specific form and the edges are represented by a connectivity or adjacency matrix, A. In addition to the adjacency and weight matrices, A and W, several other matrices are used in graph theory. All of them can be derived from the adjacency and weight matrices. A matrix that indicates the vertex degrees in a graph is called the degree matrix. It is of diagonal form and its common notation is D. The elements D_nn of the degree matrix are obtained as a sum of all weights corresponding to the edges connected to the considered vertex, n, that is, D_nn = ∑_m W_mn. A combination of the weight matrix, W, and the degree matrix, D, produces one of the most commonly used matrices in graph theory, the graph Laplacian. It is defined by L = D − W. In the case of an undirected graph, the symmetric form of the weight matrix results in a symmetric graph Laplacian, L = L^T.
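The definitions above can be sketched numerically. In this illustrative NumPy snippet (the 4-vertex weighted graph is a hypothetical example, not one from the paper), the degree matrix D and the Laplacian L = D − W are built from a symmetric weight matrix W:

```python
import numpy as np

# Hypothetical 4-vertex undirected weighted graph: build the degree
# matrix D and the graph Laplacian L = D - W from the weight matrix W.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.0, 2.0],
              [0.5, 0.0, 0.0, 1.0],
              [0.0, 2.0, 1.0, 0.0]])

D = np.diag(W.sum(axis=1))   # D_nn = sum_m W_mn (W is symmetric)
L = D - W                    # graph Laplacian

# For an undirected graph, L is symmetric and each of its rows sums to zero.
assert np.allclose(L, L.T)
assert np.allclose(L.sum(axis=1), 0.0)
```

The zero row sums reflect the fact that the constant signal is always in the null space of the graph Laplacian.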
The eigendecomposition of the graph matrices (for example, of the graph Laplacian L or the adjacency matrix A) is used for spectral analysis of graphs and graph signals.
The eigendecomposition of a graph Laplacian (or any other matrix) relates its eigenvalues, λ_k, and the corresponding eigenvectors, u_k, by

L u_k = λ_k u_k, k = 1, 2, . . ., N,

where λ_1, λ_2, . . ., λ_N are not necessarily distinct. Since the graph Laplacian is a real-valued symmetric matrix, it is always diagonalizable, that is, the geometric multiplicity equals the algebraic multiplicity for every eigenvalue. The previous N equations can then be written in the compact matrix form (the eigendecomposition relation for diagonalizable matrices) LU = UΛ.
The same eigendecomposition relation can be used for the adjacency matrix, AU = UΛ. For a diagonalizable matrix there exists a set of orthonormal eigenvectors. They are used as the transformation basis functions for the definition of the graph Fourier transform (GFT), X = [X(1), X(2), . . ., X(N)]^T, of a graph signal, x = [x(1), x(2), . . ., x(N)]^T. The graph signal value at a vertex n is denoted by x(n), n = 1, 2, . . ., N, while the notation x is used for the vector of signal values at all vertices. The vector of the GFT of a graph signal x will be denoted by X, and the elements (components) of the GFT vector by X(k), k = 1, 2, . . ., N. The element of a graph signal at a vertex n, x(n), can then be written as a linear combination of the eigenvectors,

x(n) = ∑_{k=1}^{N} X(k) u_k(n),

where the basis function values u_k(n) are the elements of the k-th eigenvector, u_k, at the vertex n, n = 1, 2, . . ., N. This is the definition of the inverse graph Fourier transform (IGFT). The matrix form of the IGFT is x = UX. For real and symmetric matrices (corresponding to undirected graphs) the transformation matrix U is orthogonal, UU^T = I, that is, U^{−1} = U^T. Then the graph Fourier transform (GFT) is defined by X = U^{−1}x = U^T x, or in elementwise form

X(k) = ∑_{n=1}^{N} x(n) u_k(n). (2)

For undirected graphs, both the Laplacian and the adjacency matrix are symmetric, resulting in real-valued eigenvectors and real-valued transformation matrices. However, for directed circular graphs, the eigenvalues (and eigenvectors) of the adjacency matrix are complex-valued. Then, the elements of the inverse transformation matrix, U^{−1}, should be used in the GFT definition. When U^{−1} = U^H holds (normal matrices), the complex-conjugate basis functions, u*_k(n), are used in (2).
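The GFT/IGFT pair can be sketched as follows; the path graph used here is an illustrative choice, not one of the paper's benchmark graphs:

```python
import numpy as np

# GFT via eigendecomposition of the Laplacian of an undirected path
# graph with N = 5 vertices (illustrative example graph).
N = 5
W = np.zeros((N, N))
for n in range(N - 1):
    W[n, n + 1] = W[n + 1, n] = 1.0
L = np.diag(W.sum(1)) - W

lam, U = np.linalg.eigh(L)       # eigenvalues ascending, U orthogonal

x = np.random.default_rng(0).standard_normal(N)  # a graph signal
X = U.T @ x                      # GFT:  X = U^T x
x_rec = U @ X                    # IGFT: x = U X

assert np.allclose(U @ U.T, np.eye(N))   # orthogonal transformation
assert np.allclose(x_rec, x)             # perfect reconstruction
```

For undirected graphs `np.linalg.eigh` is the natural choice, since it guarantees real eigenvalues and an orthonormal eigenvector basis.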

Classical Signal Processing within the Graph Signal Processing Framework
The graph signal processing will be related to classical time-frequency analysis in two ways: (1) using the directed circular graph and its adjacency matrix, or (2) using the undirected circular graphs and the graph Laplacian.These two relations are discussed next.

Directed circular graph.
The signal values, x(n), in classical signal processing systems, are defined in a well-ordered time domain, defined by the time instants denoted by n = 1, 2, . . ., N. In the DFT-based classical analysis it has also been assumed that the signal is periodic.The domain of such signals is illustrated in Figure 1 for N = 8.
Consider next a classical form of a discrete-time finite impulse response (FIR) system. The input-output relation for this system is given by y(n) = h_0 x(n) + h_1 x(n − 1) + · · · + h_{M−1} x(n − M + 1). In order to make a connection with graphs and the graph notation of the signal domain, notice that this input-output relation of the FIR system can be written in the matrix form as

y = h_0 x + h_1 Ax + · · · + h_{M−1} A^{M−1} x, (3)

where A is the unit-shift (adjacency) matrix whose structure is described next.
For this system, the time instants are well-ordered and their connectivity is given by the adjacency matrix. The relation among the instants (in graph notation, vertices) is defined by

• A_mn = 1 if the vertex (instant) m is a predecessor of (connected to) the instant (vertex) n, and
• A_mn = 0 otherwise,

as shown in Figure 1(left), for N = 8. Rows of the adjacency matrix indicate the corresponding vertex connectivity. In the first row, there is a value 1 at position N. It means that vertex 1 is related to vertex N by a directed edge between these two vertices. In the second row there is a value 1 at the first position, meaning that vertex 2 is connected to vertex 1 by an edge, as shown in Figure 1(left). The elements of the shift relation y = Ax are y(n) = x(n − 1), as expected for a simple delay operation. The delay by two instants, y(n) = x(n − 2), is calculated as A(Ax) = A²x, and so on. Now, we will perform the eigendecomposition of this adjacency matrix, A, according to the eigendecomposition relation AU = UΛ. Recall that U is the matrix whose columns are the eigenvectors, u_k, and Λ is the eigenvalue diagonal matrix, with the eigenvalues λ_k on its diagonal. The adjacency matrix of the directed circular graph is diagonalizable because all its eigenvalues are distinct. The adjacency matrix of a directed circular graph is a circulant matrix, and it is well known that this kind of matrix is diagonalizable by the discrete Fourier transform [27] (as will be shown next). In general, the adjacency matrix of a directed graph may not be diagonalizable, in which case the Jordan form should be used ([1] and Appendix A in [3]). Such graphs are not considered in this paper. The input-output relation of a classical FIR system (3) can now be written as

y = (h_0 I + h_1 UΛU^{−1} + · · · + h_{M−1} UΛ^{M−1}U^{−1}) x = U H(Λ) U^{−1} x,

where the eigendecomposition property A^m = UΛ^m U^{−1} was used. Now, by left-multiplication by U^{−1}, we can write

Y = H(Λ) X,

where Y = U^H y and X = U^H x are the discrete Fourier transforms (DFT) of the output signal, y, and the input signal, x.
The diagonal transfer function is denoted by H(Λ), and its elements are given by H(λ_k) = Y(k)/X(k), for X(k) ≠ 0. Indeed, the presented forms represent the well-known classical DFT-based relations. In order to confirm this conclusion, we will analyze the eigenvalue relation for the presented adjacency matrix, A,

Au_k = λ_k u_k.

The corresponding characteristic polynomial is given by det(A − λI) = λ^N − 1. From λ^N = 1, the solutions for the eigenvalues and eigenvectors are

λ_k = e^{−j2π(k−1)/N} and u_k(n) = (1/√N) e^{j2π(k−1)(n−1)/N}, for k = 1, 2, . . ., N.

The eigenvectors are equal to the DFT basis functions, normalized in such a way that their energy is unity. We can easily arrive at the element-wise form of the DFT using the GFT definition given in (2).
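This eigendecomposition can be verified numerically. The sketch below builds the cyclic-shift adjacency matrix and checks that the DFT columns are its eigenvectors; the eigenvalues are N-th roots of unity, and the sign of the exponent in their pairing follows the shift direction y(n) = x(n − 1) used here:

```python
import numpy as np

# The adjacency matrix of a directed circular graph is a cyclic shift;
# it is diagonalized by the DFT basis.
N = 8
A = np.roll(np.eye(N), 1, axis=0)   # y = A x gives y(n) = x(n - 1)

n = np.arange(N)
U = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # DFT basis
lam = np.exp(-2j * np.pi * n / N)   # N-th roots of unity (delay convention)

# Each DFT column is an eigenvector of A: A u_k = lam_k u_k.
assert np.allclose(A @ U, U * lam)
```

Since all N eigenvalues are distinct, this confirms that the circulant shift matrix is diagonalizable by the DFT, as stated in the text.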
For implementation issues that will be addressed later, it is crucial to notice that for the calculation of A^M x we need only the signal within the M-neighborhood of each considered instant, n. In the time domain, this means the samples down to the instant (n − M). The fact that A^M x requires only the signal samples within the M-neighborhood of the considered vertex (instant) will hold for general graphs. The local neighborhood based calculation is of key importance when large graphs are analyzed, or when signals representing big data on large graphs are processed.

Undirected circular graph.
When the circular graph is not directed, as shown in Figure 1(right), we should assume that every instant (vertex), n, is connected both to the preceding vertex (instant), n − 1, and to the succeeding vertex, n + 1. The adjacency (weight) matrix for this kind of connection is A = W, with W_mn = 1 if |m − n| = 1 (modulo N) and W_mn = 0 otherwise, and the corresponding graph Laplacian, defined by L = D − W = 2I − W, applied to a signal x produces the elements

L x(n) = −x(n − 1) + 2x(n) − x(n + 1), (4)

where L x(n) are the elements of the vector Lx.
The solution to the second-order difference equation that follows from (4) and the eigenvalue relation, −u(n − 1) + 2u(n) − u(n + 1) = λu(n), can be obtained in the form

u_k(n) = cos(2π(k − 1)(n − 1)/N + φ_k), (5)

with the eigenvalue

λ_k = 2 − 2 cos(2π(k − 1)/N) = 4 sin²(π(k − 1)/N). (6)

For each of the eigenvalues, we can define two distinct orthogonal eigenvectors in quadrature, for example, using φ_k = 0 and φ_k = π/2 in (5). These two eigenvectors correspond to the classical sinusoidal basis functions, cos(2π(k − 1)(n − 1)/N) and sin(2π(k − 1)(n − 1)/N), in the Fourier series analysis of real-valued signals. The exceptions are the eigenvalue λ_1 = 0 and the last eigenvalue for an even N, for which there is only one basis function. The sine and cosine functions should be normalized to unit energy to represent eigenvectors. Therefore, a definition of the graph Laplacian eigenvalues and eigenvectors for an undirected circular graph (for an even N, for example, N = 8), taking into account all previous properties, is given by

u_1(n) = 1/√8, for λ_1 = 0,
u_{2k}(n) = (1/2) cos(2πk(n − 1)/8), u_{2k+1}(n) = (1/2) sin(2πk(n − 1)/8), for λ_{2k} = λ_{2k+1} = 4 sin²(πk/8), k = 1, 2, 3,
u_8(n) = (−1)^{n−1}/√8, for λ_8 = 4. (7)

The smallest eigenvalue, λ_1 = 0, corresponds to a constant vector, u_1(n) = 1/√8, while the largest eigenvalue, λ_8 = 4, corresponds to the fastest-varying eigenvector, u_8(n) = (−1)^{n−1}/√8.

Smoothness and local smoothness. Notice that for an undirected circular graph and a small frequency, ω_k = 2π(k − 1)/N, the relation in (6) can be approximated by

λ_k = 4 sin²(ω_k/2) ≈ ω_k². (8)

This relation means that the graph Laplacian eigenvalue, λ_k, corresponding to the eigenvector, u_k, can be related to the classical frequency (squared), ω_k², of a sinusoidal basis function in classical Fourier series analysis.
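As a quick numerical check of the eigenvalue relation above, the following sketch builds the undirected circular-graph Laplacian and compares its spectrum with 4 sin²(πk/N):

```python
import numpy as np

# The circular-graph Laplacian (L = 2I - W) has eigenvalues
# 4 sin^2(pi k / N), k = 0, ..., N-1, with lambda_max = 4 for even N.
N = 8
W = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
L = np.diag(W.sum(1)) - W

lam = np.sort(np.linalg.eigvalsh(L))
expected = np.sort(4 * np.sin(np.pi * np.arange(N) / N) ** 2)

assert np.allclose(lam, expected)   # spectra match as sorted sets
assert np.isclose(lam.max(), 4.0)   # largest eigenvalue for even N
```

Note that the intermediate eigenvalues appear in pairs, matching the cosine-sine eigenvectors in quadrature discussed in the text.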
In general, it is easy to show that the eigenvalue of the graph Laplacian can be used to indicate the speed of change (called the smoothness) of an eigenvector or of a graph signal in general. Namely, if we left-multiply both sides of the eigenvalue definition relation, Lu_k = λ_k u_k, by u_k^T, we obtain, for unit-energy eigenvectors,

λ_k = u_k^T L u_k = (1/2) ∑_m ∑_n W_mn (u_k(n) − u_k(m))².

The local smoothness can be defined for a vertex n. It will be denoted by λ(n). This parameter corresponds to the classical time-varying (instantaneous) frequency, ω(t), defined at a time instant t, in the form [28]

λ(n) = L x(n) / x(n).

In this relation we used L x(n) to denote the n-th element of the vector Lx. It has been assumed that x(n) ≠ 0. If we use x(n) = cos(ω_k n) and the graph Laplacian of an undirected circular graph, as in (4), we obtain the value from (8). In general, if the signal x(n) is equal to an eigenvector u_k(n) at the vertex n and at its neighboring vertices, then λ(n) = λ_k.
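The local smoothness λ(n) = L x(n)/x(n) can be illustrated as follows: if the signal equals one Laplacian eigenvector, the local smoothness reduces to the corresponding eigenvalue at every vertex where the signal is nonzero:

```python
import numpy as np

# Local smoothness: lambda(n) = (L x)(n) / x(n). For x equal to an
# eigenvector u_k, lambda(n) = lambda_k wherever x(n) != 0.
N = 16
W = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
L = np.diag(W.sum(1)) - W

lam, U = np.linalg.eigh(L)
k = 3
x = U[:, k]                       # signal equal to one eigenvector
mask = np.abs(x) > 1e-8           # avoid division by (near) zero
local = (L @ x)[mask] / x[mask]

assert np.allclose(local, lam[k])
```

For a signal composed of different eigenvectors on different vertex subsets, as in Example 1, the same ratio recovers the piecewise-constant local smoothness shown in Figure 4.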
System on a general graph. The relations presented in this section are special cases of the general graph Fourier transform (Section 2) and of systems for graph signals.
The most important difference between classical systems and systems for graph signals is in the fact that the standard shift operator, y(n) = x(n − 1), just moves a signal sample from one instant, n, to another instant, n − 1, while the graph shift operator, y = Ax or y = Lx, moves the signal sample to all neighboring vertices (in the case of the graph Laplacian, in addition to the signal being moved to the neighboring vertices (with a change of sign), its sample is kept at the original vertex as well). Notice that the graph shift operator does not satisfy the isometry property, since the shifted signal's energy is not the same as the energy of the original signal. In analogy with the role of the time shift in standard system theory, a system for graph signals is implemented as a linear combination of a graph signal and its graph shifted versions,

y = h_0 L^0 x + h_1 L^1 x + · · · + h_{M−1} L^{M−1} x, (10)

where, by definition, L^0 = I, while h_0, h_1, . . ., h_{M−1} are the system coefficients. The spectral form of this relation is given by

Y = H(Λ) X, with H(Λ) = h_0 Λ^0 + h_1 Λ^1 + · · · + h_{M−1} Λ^{M−1}, (11)

where H(Λ) is a diagonal matrix representing the transfer function of the system for a graph signal. Notice that if the transfer function can be written in the form of a polynomial, as in (11), then the system can be implemented using the graph-shifted forms of the signal, Lx, L²x, . . ., up to L^{M−1}x, as in (10), which require only the (M − 1)-neighborhood of each signal sample to obtain the system output, independently of the size of the considered graph.
Graph signal filtering-Graph convolution.Three approaches to filtering of a graph signal using a system whose transfer function is G(Λ), with elements on the diagonal G(λ k ), k = 1, 2, . . ., N, will be presented next.
(i) The simplest approach is based on the direct employment of the GFT. It is performed by: (a) calculating the GFT of the input signal, X = U^{−1}x; (b) finding the output signal GFT by multiplying X by G(Λ), Y = G(Λ)X; (c) calculating the output (filtered) signal as the inverse GFT of Y, y = UY. The result of this operation is called a convolution of signals on a graph [25,29]. However, this procedure could be computationally unacceptable for very large graphs.

(ii) A possibility to avoid the full-size transformation matrices for large graphs is to approximate the filter transfer function, G(λ), at the positions of the eigenvalues,

G(λ_k) ≈ h_0 + h_1 λ_k + h_2 λ_k² + · · · + h_M λ_k^M, k = 1, 2, . . ., N. (12)

Then the system of N equations, Vh = g, is solved, in the least squares sense, for the M + 1 < N unknown parameters of the system, h = [h_0, h_1, . . ., h_M]^T, with a given M, where g is the column vector of the diagonal elements of G(Λ). The elements of the matrix V are V(k, m) = λ_k^m, m = 0, 1, . . ., M, k = 1, 2, . . ., N (a Vandermonde matrix). This system can be solved efficiently for a relatively small M. Then, the implementation of the graph filter is performed in the vertex domain, using the so-obtained h_0, h_1, . . ., h_M in (10) and the M-neighborhood of every considered vertex. Notice that the relation between the IGFT of diag{G} and the system coefficients h_0, h_1, . . ., h_M is direct in the classical DFT case only, while it is more complex in the general graph case [25]. For large M, the solution to the system of equations in (12) for the unknown parameters h_0, h_1, . . ., h_M can be numerically unstable, due to the large values of the powers λ_k^M.

(iii) Another approach that allows us to avoid the direct GFT calculation in the implementation of graph filters is to approximate the given transfer function, G(λ), by a polynomial H(λ), using the continuous variable λ. This approximation does not guarantee that the transfer function G(λ) and its polynomial approximation H(λ) will be close at the discrete set of points λ = λ_p, p = 1, 2, . . ., N.
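Approach (ii) can be sketched as follows; the circular graph and the target transfer function G(λ) = e^{−λ} are illustrative assumptions, chosen only so that the fit over [0, λ_max] is well behaved:

```python
import numpy as np

# Approach (ii): least-squares fit of h_0..h_M so that
# sum_m h_m lam_k^m approximates G(lam_k); then filter in the
# vertex domain using graph shifts only.
N, M = 20, 8
W = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
L = np.diag(W.sum(1)) - W
lam, U = np.linalg.eigh(L)

G = np.exp(-lam)                             # example target transfer function
V = np.vander(lam, M + 1, increasing=True)   # Vandermonde: V[k, m] = lam_k^m
h, *_ = np.linalg.lstsq(V, G, rcond=None)

x = np.random.default_rng(1).standard_normal(N)
y_exact = U @ (G * (U.T @ x))                # spectral-domain filtering
y_poly = np.zeros(N)
z = x.copy()
for hm in h:                                 # h_0 x + h_1 Lx + h_2 L^2 x + ...
    y_poly += hm * z
    z = L @ z                                # next graph shift, vertex-local

err = np.linalg.norm(y_poly - y_exact) / np.linalg.norm(y_exact)
assert err < 1e-3
```

The vertex-domain loop never forms U or Λ; each step only propagates values through one-hop neighborhoods, which is the point of the polynomial realization for large graphs.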
The maximal absolute deviation of the polynomial approximation can be kept as small as possible using the so-called min-max polynomials. After the polynomial approximation is obtained, the output of the graph system is calculated using (10), that is,

y = h_0 x + h_1 Lx + · · · + h_M L^M x.

This approach will be presented in Section 5.
Case study examples.In the next example we shall introduce two graphs and signals on these graphs, which will be used as benchmark models for the analysis that follows.
Example 1. Two graphs are shown in Figure 2. A circular undirected unweighted graph represents the domain for classical signal analysis, with each of N = 100 vertices (instants) being connected to the predecessor and successor vertices (top panel). A general form of a graph, with the same number of N = 100 vertices, is shown in Figure 2(bottom). These two graphs will be further used to demonstrate classical and graph signal processing principles and relations.
A signal on the circular graph is shown in Figure 3(top). We have formed this synthetic signal using parts of three graph Laplacian eigenvectors (corresponding to three harmonics in classical analysis). For the vertices in the subset V_1 = {1, 2, 3, . . ., 40} ⊂ V, the eigenvector (harmonic) with the spectral index k = 16 was used.
For the subset V 2 = {41, 42, 43, . . ., 70}, the eigenvector u k (n), with k = 84, is used to define the signal.The eigenvector with spectral index k = 29 was used to define the signal on the remaining set of vertices, V 3 ⊂ V.
A signal on the general graph is shown in Figure 3(bottom). It is also composed of parts of three Laplacian eigenvectors. For the vertices in V_1, the eigenvector with spectral index k = 12 has been used.
For the subset of vertices V_2, containing the vertex indices from n = 41 to n = 70, the eigenvector u_k(n) with k = 84 was used to define the signal. Within the subset V_3, the spectral index was k = 29. Supports of these three components are designated by different vertex colors.
The local smoothness index λ(n), which corresponds to the speed of change of the corresponding components, λ(n) = λ_k, is shown in Figure 4 for the presented graph signals. The local smoothness in classical signal analysis is related to the instantaneous frequency of each signal component as λ(n) = 4 sin²(ω(n)/2). Vertices from V_1 are designated by blue dots, vertices from V_2 are marked by black dots, while vertices from V_3 are given by green dots.
Other graph shift operators. Finally, notice that in relation (10) we used the graph Laplacian, L, as the shift operator. In addition, the adjacency matrix, A, as another common choice for the shift operation, its normalized version, A/λ_max, the normalized graph Laplacian, D^{−1/2}LD^{−1/2}, or the random walk (also called diffusion) matrix, D^{−1}W, may be used as graph shift operators, producing corresponding spectral forms of the systems for graph signals [30].
Remark 1. The normalized graph Laplacian, L_N = D^{−1/2}LD^{−1/2} = I − D^{−1/2}WD^{−1/2}, is used as a shift operator in the first-order system, to define the convolution operation and the convolution layer in graph convolutional neural networks (GCNN). Its form is

y = h_0 x + h_1 L_N x = (h_0 + h_1) x − h_1 D^{−1/2}WD^{−1/2} x. (14)

Using this relation, the input, x_c^{(l−1)}, and the output, x_c^{(l)}, of the c-th channel of the l-th convolution layer in the GCNN are implemented as

x_c^{(l)} = w_1^{(l)} x_c^{(l−1)} + w_2^{(l)} D^{−1/2}WD^{−1/2} x_c^{(l−1)}, (15)

where the weight w_1^{(l)}, in the c-th channel of the l-th convolution layer, corresponds to the weight (h_0 + h_1) in (14), and w_2^{(l)} corresponds to (−h_1) in (14).
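A minimal sketch of this first-order propagation follows; the random graph, the weights `w1`, `w2`, and the ReLU nonlinearity are hypothetical illustration choices, not details from the paper:

```python
import numpy as np

# First-order graph convolution sketch: the output combines the signal
# with its D^{-1/2} W D^{-1/2}-shifted version. Weights w1, w2 play the
# roles of (h0 + h1) and (-h1); graph and nonlinearity are illustrative.
rng = np.random.default_rng(2)
N = 10
W = (rng.random((N, N)) < 0.4).astype(float)
W = np.triu(W, 1); W = W + W.T                 # random undirected graph
deg = W.sum(1); deg[deg == 0] = 1.0            # guard isolated vertices
Dm12 = np.diag(deg ** -0.5)
W_norm = Dm12 @ W @ Dm12                       # D^{-1/2} W D^{-1/2}

def gcn_layer(x, w1, w2):
    """One channel of a first-order graph convolution, with ReLU."""
    return np.maximum(0.0, w1 * x + w2 * (W_norm @ x))

x = rng.standard_normal(N)
y = gcn_layer(x, 0.7, 0.3)
assert y.shape == (N,) and np.all(y >= 0.0)
```

The key point is that one layer touches only one-hop neighborhoods; stacking l layers grows the receptive field to the l-neighborhood, mirroring the polynomial filters discussed above.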

Spectral Domain Localized Graph Fourier Transform (LGFT)
The classical short-time Fourier transform (STFT) admits time-frequency localization of the analyzed signal using the Fourier transform of windowed and shifted versions of the signal. This principle is possible in graph signal processing [29,31]. However, since this approach requires sophisticated definitions of the vertex shift operation on the signals, spectral domain localization is more commonly used in vertex-frequency analysis. Although spectral domain localization is possible and well-defined in classical analysis, it has rarely been used for the time-frequency analysis of signals. The time-frequency localization of a signal in the spectral domain is obtained using a spectral domain localization window, which is shifted in frequency, while the time shift is achieved by the modulation of the windowed Fourier transform of the signal.
We shall use the spectral approach to perform vertex-frequency localization. The graph Fourier transform localized in the spectral domain (LGFT) is defined as an inverse graph Fourier transform of the graph Fourier transform, X(p), multiplied by a spectral domain window, H(k − p). The spectral domain window is nonzero at and around the spectral index k. Therefore, the element-wise LGFT is calculated using

S(n, k) = ∑_{p=1}^{N} H(k − p) X(p) u_p(n). (16)

The shift is here performed in the well-ordered spectral domain, along the spectral index k, instead of the more complex signal shift in the vertex domain. As will be shown, this form of vertex-frequency analysis also allows vertex-localized implementations, even without the calculation of the graph Fourier transform of the signal, which is of crucial importance in the case of very large graphs.
Remark 2. The counterpart of (16) in classical time-frequency analysis is the well-known short-time Fourier transform (STFT) [12], obtained when the basis functions u_p(n) in (16) are the harmonic (DFT) basis functions, where H(k) is a frequency domain localization window.
The LGFT defined in the spectral domain by (16) can be realized using bandpass transfer functions, denoted by H_k(λ_p) = H(k − p), obtained by shifting a basic spectral window along the spectral index. Then the LGFT definition is given by

S(n, k) = ∑_{p=1}^{N} H_k(λ_p) X(p) u_p(n). (17)

The matrix form of the vertex-frequency spectrum (17) is

s_k = U H_k(Λ) U^T x = H_k(L) x,

where the column vector whose elements are S(m, k), m = 1, 2, . . ., N, is denoted by s_k.
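A sketch of this LGFT follows. The circular graph, the triangular spectral window, and the two-component test signal are illustrative assumptions (the paper's specific windows are discussed below); the point is that each column s_k concentrates the energy of spectral content near index k:

```python
import numpy as np

# LGFT sketch: s_k = U H_k(Lambda) U^T x, with shifted spectral windows
# H_k(lam_p) = H(k - p). A triangular window over the eigenvalue index
# is used purely for illustration.
N, K = 32, 32
W = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
L = np.diag(W.sum(1)) - W
lam, U = np.linalg.eigh(L)

x = U[:, 5] + U[:, 20]                 # two spectral components
X = U.T @ x                            # GFT of the signal

H = lambda d: max(0.0, 1.0 - abs(d) / 3.0)   # triangular spectral window
S = np.zeros((N, K))
for k in range(K):
    Hk = np.array([H(k - p) for p in range(N)])
    S[:, k] = U @ (Hk * X)             # vertex-frequency spectrum column s_k

# Column energies peak exactly at spectral indices 5 and 20.
band_energy = (S ** 2).sum(axis=0)
peaks = set(np.argsort(band_energy)[-2:])
assert peaks == {5, 20}
```

By orthogonality of U, each column energy equals the windowed spectral energy Σ_p H(k − p)² X(p)², so the representation is read column by column as a spectrogram over the spectral index.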

Binomial Decomposition
Consider the simplest decomposition, when the total spectral domain of a graph signal is divided into K = 2 bands. These two bands, indexed by k = 0 and k = 1, cover the low-pass and the high-pass part of the spectral content of the signal, respectively. First, we will use linear functions of the eigenvalue λ to achieve these properties,

H_0(λ) = 1 − λ/λ_max and H_1(λ) = λ/λ_max. (23)

Using the relation between (10) and (11), we can conclude that the vertex-domain implementation of this kind of LGFT analysis is very simple: for each vertex, m, the calculation of S(m, 0) = s_0(m) and S(m, 1) = s_1(m) requires only the combination of the signal at this vertex and at its neighboring vertices, needed to calculate the elements of Lx.

Remark 4. The classical time-frequency analysis counterpart of (23) is obtained using the eigenvalue-to-frequency relation for the circular undirected graph, λ = 4 sin²(ω/2), to produce the low-pass and high-pass type transfer functions H_0(ω) = cos²(ω/2) and H_1(ω) = sin²(ω/2), as shown in Figure 5(top). These spectral transfer functions are dual to the classical Hann (raised cosine) window forms, used for signal localization in the time domain.
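The two-band split of (23) can be sketched directly in the vertex domain, assuming the linear pair H_0(λ) = 1 − λ/λ_max and H_1(λ) = λ/λ_max; the bands sum back to the signal, and each value of s_1 needs only a one-hop neighborhood:

```python
import numpy as np

# K = 2 band split: s0 = (I - L/lam_max) x, s1 = (L/lam_max) x.
# Adding the two bands reconstructs the signal exactly.
N = 12
W = np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)
L = np.diag(W.sum(1)) - W
lam_max = np.linalg.eigvalsh(L).max()          # equals 4 for the circle

x = np.random.default_rng(3).standard_normal(N)
s1 = (L @ x) / lam_max                         # high-pass band (one-hop local)
s0 = x - s1                                    # low-pass band

assert np.allclose(s0 + s1, x)                 # perfect reconstruction
assert np.isclose(lam_max, 4.0)
```

No eigendecomposition is needed for the analysis itself; λ_max is computed here only for the sketch and can be bounded analytically in practice.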
To improve the spectral resolution and to divide the spectral range into more than two bands, we can use the same transfer function forms by applying them to the low-pass part of the signal and dividing the spectral content of this part of the signal into its low-pass and high-pass parts. In classical signal processing, two common approaches are applied: (a) the high-pass part is kept unchanged, while the low-pass part is split; this approach corresponds to the wavelet transform or the frequency-varying classical analysis; (b) the high-pass part is also split into its low-pass and high-pass parts, to keep the frequency resolution constant for all frequency bands.
Next, we consider these two approaches for the division of frequency bands.
(a) In a two-scale wavelet-like analysis, we keep the high-pass part, s_1, while the low-pass part, s_0, is split into its low-pass part, s_00, and high-pass part, s_01, using the same transfer functions, as

s_00 = (I − L/λ_max) s_0, s_01 = (L/λ_max) s_0.

For the third scale step, we would keep s_1 and the high-pass part of the scale-two step, s_01, and then split the low-pass part in scale two, s_00, into its low-pass part, s_000, and high-pass part, s_001, using

s_000 = (I − L/λ_max) s_00, s_001 = (L/λ_max) s_00.

This process could be continued until the desired scale (frequency resolution) is reached.

(b) For uniform frequency bands, both the low-pass and the high-pass bands are split in the same way, to obtain

s_00 = (I − L/λ_max)² x, s_01 = 2 (L/λ_max)(I − L/λ_max) x, s_11 = (L/λ_max)² x. (24)

Notice that this kind of spectral band division will produce the same result twice.
Once, when the original low-pass part is multiplied by the high-pass function, and then again, when the original high-pass part is multiplied by the low-pass function. This is the reason why the constant value of 2 has appeared in the new middle pass-band, s_01. The bands in relation (24) can be obtained as the terms of the binomial expression ((I − L/λ_max) + L/λ_max)² x. If we continue to the next level, by multiplying all the elements in (24) by the low-pass part, (I − L/λ_max), and then by the high-pass part, L/λ_max, after grouping the same terms, we would obtain the signal bands of the same form as the terms of the binomial ((I − L/λ_max) + L/λ_max)³ x. We can conclude that the division can be performed into K bands, corresponding to the terms of the binomial form

((I − L/λ_max) + L/λ_max)^{K−1} x. (25)

The transfer function of the k-th term, k = 0, 1, 2, . . ., K − 1, has the vertex-domain form

H_k(L) = C(K − 1, k) (I − L/λ_max)^{K−1−k} (L/λ_max)^k, (26)

where C(K − 1, k) is the binomial coefficient. Of course, the sum of all parts of the signal, filtered by H_k(L), produces the reconstruction of x, which is obvious from the identity in (25), that is, from (I − L/λ_max) + L/λ_max = I.

Example 2. The spectral domain transfer functions H_k(λ_p), p = 1, 2, . . ., N, k = 0, 1, . . ., K − 1, which correspond to classical time-frequency processing and the binomial form terms for K = 2, K = 3, and K = 26, are shown in Figure 5. The last two panels (the third and fourth panel) show the case with K = 26. In the third panel, the amplitudes of all transfer functions are normalized. In the fourth panel, all transfer functions for K = 26 are shown without the amplitude normalization.

Example 3. For a general graph, the spectral domain transfer functions H_k(λ_p), p = 1, 2, . . ., N, k = 0, 1, . . ., K − 1, that can be obtained as the terms of the binomial form for K = 2, K = 3, and K = 26 are shown in Figure 6. The last two panels (the third and fourth panel) again show the case with K = 26. In the third panel, the amplitudes of all transfer functions are normalized. In the fourth panel, all transfer functions for K = 26 are shown without the amplitude normalization.

Example 4. The vertex-domain implementation is based on the multiplication of the signal, x, by the graph Laplacian, L. For each vertex, n, this operation is localized to its one-neighborhood. After the signal Lx is calculated, the new signal L²x is easily obtained as the graph Laplacian multiplication of the already calculated signal, Lx, that is, L²x = L(Lx). This procedure is continued up to any desired order. In classical time-frequency analysis, the multiplication by the graph Laplacian of an undirected circular graph, (1/λ_max)Lx with λ_max = 4, is equivalent to the convolution of the signal x with the impulse response of the finite impulse response filter that corresponds to the transfer function H_1(ω) = sin²(ω/2) = (1 − cos(ω))/2. It means that the high-pass and low-pass parts of the signal are obtained as

s_1(n) = x(n) * h_1(n) = (−x(n − 1) + 2x(n) − x(n + 1))/4 and s_0(n) = x(n) − s_1(n)

(the element-wise form of the Laplacian operator applied to the signal is given by (4)), where * denotes the convolution operation. These convolutions can be repeated to produce a wavelet-like band distribution or a uniform distribution of frequency bands.
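A minimal check of the binomial band construction follows: the K band transfer functions are the binomial terms in (λ/λ_max) and, by the binomial theorem, they sum to one at every λ, which is what guarantees exact reconstruction by simply adding the bands:

```python
import numpy as np
from math import comb

# Binomial bands: H_k(lam) = C(K-1, k) (1 - t)^{K-1-k} t^k, t = lam/lam_max.
# Their sum is ((1 - t) + t)^{K-1} = 1 for every lam.
K = 26
lam_max = 4.0
lam = np.linspace(0.0, lam_max, 101)
t = lam / lam_max

H = np.array([comb(K - 1, k) * (1 - t) ** (K - 1 - k) * t ** k
              for k in range(K)])

assert np.allclose(H.sum(axis=0), 1.0)     # exact reconstruction condition
assert np.all(H >= 0.0)                    # nonnegative band gains
```

Each H_k peaks at λ = k λ_max/(K − 1), so the bands tile the spectral interval [0, λ_max] uniformly, as shown for K = 26 in Figures 5 and 6.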
If no downsampling is used, then a redundant representation of the signal is obtained, with each of these components containing the same number of samples as the original signal. However, it is possible to form a nonredundant form of this representation. Using downsampling by a factor of 2, the even-indexed values of the two components, s_0(2n) and s_1(2n), are kept. The signal samples at the even-indexed instants, 2n, are then easily obtained as x(2n) = s_0(2n) + s_1(2n), while the samples at the odd-indexed instants, 2n + 1, follow recursively. Using the initial condition x(−1), we can reconstruct all odd-indexed samples. This reconstruction can be noise sensitive for large N, due to the repeated recursions in the last relation. The reassigned version of the time-frequency representation is given in Figure 6a (right panel). The same analysis for the general graph signal from Figure 3 (bottom) is shown in Figure 6b. Finally, in order to present the common complex-valued harmonic form, the signal is composed by adding two corresponding sine and cosine components (as in (7)) and forming the complex-valued components u_16(n) + ju_17(n) within V_1, u_84(n) + ju_85(n) within V_2, and u_28(n) + ju_29(n) within V_3. The time-frequency representation of this signal is given in Figure 6c. In all cases, the original representation is given on the left panel, while the reassigned value to the position of the distribution maximum is given on the right panel.
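The binomial splitting and its vertex-domain implementation can be checked numerically. The sketch below is an illustration under the assumption of an even-length cycle graph (so that λ_max = 4 exactly); it forms the binomial band filters from the low-pass term I − L/λ_max and the high-pass term L/λ_max, verifies that the bands sum back to the signal, and confirms that (1/λ_max)Lx coincides with the classical FIR filtering by H_1(ω) = sin²(ω/2).

```python
import numpy as np
from math import comb

N = 16                                               # even, so lambda_max = 4 exactly
I = np.eye(N)
A = np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0)   # cycle-graph adjacency
L = 2 * I - A                                        # graph Laplacian of the cycle
lam_max = 4.0

lo = I - L / lam_max                                 # low-pass term
hi = L / lam_max                                     # high-pass term

# Binomial band filters H_k(L) = C(K, k) * lo^(K-k) * hi^k
K = 3
H = [comb(K, k)
     * np.linalg.matrix_power(lo, K - k) @ np.linalg.matrix_power(hi, k)
     for k in range(K + 1)]

x = np.cos(2 * np.pi * 3 * np.arange(N) / N)         # a test graph signal
bands = [Hk @ x for Hk in H]

# Reconstruction: the filters sum to the identity, since lo + hi = I
assert np.allclose(sum(H), np.eye(N))
assert np.allclose(sum(bands), x)

# Vertex-domain check: (1/lam_max) L x equals circular FIR filtering with
# impulse response {-1/4, 1/2, -1/4}, i.e. the filter H1(w) = sin^2(w/2)
y = 0.5 * x - 0.25 * np.roll(x, 1) - 0.25 * np.roll(x, -1)
assert np.allclose(L @ x / lam_max, y)
```

Since the low-pass and high-pass terms commute (both are polynomials in L), the binomial identity guarantees exact reconstruction for any K.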
The selectivity of the transfer functions can be improved using higher-order polynomials instead of the linear functions in (23). Assuming that the high-pass part should satisfy H_1(0) = 0 and H_1(λ_max) = 1, that its derivative is zero at the initial interval point (λ_p = 0) and the ending interval point (λ_p = λ_max), and that H_0(λ) + H_1(λ) = 1, we can use polynomial forms satisfying these conditions. The vertex-domain implementation is performed in the same manner, and the same analysis can now be repeated as for (23). These polynomial forms will be revisited later in this paper.
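One cubic pair satisfying all of the listed constraints is the "smoothstep" polynomial sketched below; this particular form is our illustration, since the exact polynomial used in the text is not reproduced here.

```python
import numpy as np

lam_max = 4.0

def H1(lam):
    """Cubic high-pass: H1(0) = 0, H1(lam_max) = 1, zero slope at both ends."""
    t = lam / lam_max
    return 3 * t**2 - 2 * t**3

def H0(lam):
    """Complementary low-pass, so that H0 + H1 = 1."""
    return 1.0 - H1(lam)

lam = np.linspace(0, lam_max, 1001)
eps = 1e-6

assert abs(H1(0.0)) < 1e-12 and abs(H1(lam_max) - 1.0) < 1e-12
assert np.allclose(H0(lam) + H1(lam), 1.0)
# numerical derivatives vanish at both interval ends
assert abs((H1(eps) - H1(0.0)) / eps) < 1e-5
assert abs((H1(lam_max) - H1(lam_max - eps)) / eps) < 1e-5
```

Because H_1 is a third-order polynomial in λ, its vertex-domain implementation requires only a 3-neighborhood of each vertex.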

Hann (Raised Cosine) Window Decomposition
We have presented the simplest decomposition into the low-pass and high-pass parts of a signal. However, the LGFT of the form (17) can be calculated using any other set of bandpass functions, H_k(Λ), k = 0, 1, . . . , K − 1. The spline or raised cosine (Hann window) functions are commonly used as bandpass functions. To further illustrate the concepts, we next consider transfer functions in the general form of shifted raised cosine functions. They are given by (28), where the spectral bands for H_k(Λ) are defined on (a_k, b_k] and (b_k, c_k], k = 0, 1, . . . , K − 1. If the spectral bands are uniform within 0 ≤ λ ≤ λ_max, the corresponding intervals are defined by (29), with a_1 = 0. The most common case, with a uniform division of the spectral domain as defined by (29), is given in Figure 8a. Two forms of spectrally dependent widths are shown in Figure 8b,c. While the widths defined by the constants a_k in (29) increase as in the wavelet transform case, the widths of the transfer functions in Figure 8c are kept narrow around the spectral indices of the signal components, in order to achieve a finer spectral resolution in these regions (a signal adaptive approach). Finally, Figure 8d shows polynomial approximations of the transfer functions from Figure 8a, which will be discussed later.
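The uniform shifted raised cosine (Hann) bank can be written down directly. In the sketch below, the half-width d = λ_max/(K − 1) and the centers c_k = kd are one concrete choice consistent with a uniform division of the spectral domain; they are illustrative assumptions, not necessarily the exact constants a_k, b_k, c_k of (29).

```python
import numpy as np

lam_max = 4.0
K = 15
d = lam_max / (K - 1)          # half-width of each raised cosine
centers = d * np.arange(K)     # band centers c_k = k * d

def H(k, lam):
    """Shifted raised cosine (Hann) bandpass, supported on [c_k - d, c_k + d]."""
    t = np.abs(lam - centers[k])
    return np.where(t <= d, 0.5 * (1 + np.cos(np.pi * t / d)), 0.0)

lam = np.linspace(0, lam_max, 2001)
total = sum(H(k, lam) for k in range(K))

# Raised cosine windows with 50% overlap form a partition of unity on [0, lam_max]
assert np.allclose(total, 1.0)
```

The partition-of-unity property of this bank is exactly the reconstruction condition discussed in the next subsection.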

General Window Form Decomposition-OLA condition
The spectral transfer functions in the form of the raised cosine transfer function (28) are characterized by the overlap-and-add (OLA) condition

∑_{k=0}^{K−1} H_k(λ_p) = 1.     (30)

We may use any common window for the decomposition which satisfies this relation. Next, we will list some of these windows:

•
A combination of raised cosine windows. After one set of raised cosine windows is defined, we may use another set with different constants a_k, b_k, c_k and overlap it with the existing set. If the window values are then divided by 2, the resulting windows satisfy (30). In this way, we can increase the number of different overlapping windows.

•
The Hamming window can be used in the same way as in (28). The only difference is that the Hamming windows sum up to 1.08 in the overlapping interval, meaning that the result should be divided by this constant.

•

The Bartlett (triangular) window with the same constants a_k, b_k, c_k as in (28) satisfies the condition (30), along with combinations of different sets of a_k, b_k, c_k to increase the overlapping.

•
The Tukey window has a flat part in the middle and a cosine form in the transition intervals. It can also be used with appropriately defined a_k, b_k, c_k that take into account the flat (constant) window range.

Frame Decomposition-WOLA Condition
For the signal reconstruction using the kernel orthogonality and the frames concept, the windows should satisfy the condition

∑_{k=0}^{K−1} H_k²(λ_p) = 1.     (31)

The graph signal reconstruction can be performed based on (30) and (31), as discussed in more detail in Section 6.
Several windows that satisfy the condition in (31) will be presented next: • The sine window is obtained as the square root of the raised cosine window in (28). Obviously, this window will satisfy (31).

•
A window that satisfies (31) can be formed from any window in the previous section by taking its square root.
Example 10. For the case of the Hann window and the triangular (Bartlett) window, their corresponding square-root forms, which produce ∑_{k=0}^{K−1} H_k²(λ_p) = 1, are shown in Figure 11, for a uniform splitting of the spectral domain and a signal-dependent (wavelet-like) form. Notice that the square root of the Hann window is the sine window. It is obvious that these windows are not differentiable at the interval ending points, meaning that their transforms will be very spread (slow-converging). The windows defined as square roots of the presented windows (which originally satisfy the OLA condition) do not satisfy the first-derivative continuity property at the interval ending points. For example, the raised cosine window satisfies that property, but its square-root (sine) window loses this desirable property (Figure 11). To restore this property, we may either define new windows or use the same windows, such as the raised cosine window, with a changed argument, so that the window derivative is continuous at the ending points. This technique is used to define the following window form.
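The distinction between the OLA condition (30) and the WOLA condition (31) is easy to verify numerically: the Hann bank sums to one, its square does not, while the square-root (sine-window) bank squares to one. A sketch with an assumed uniform band layout:

```python
import numpy as np

lam_max, K = 4.0, 15
d = lam_max / (K - 1)
lam = np.linspace(0, lam_max, 2001)

# Hann bank (satisfies the OLA condition: sum of H_k equals 1)
banks = [np.where(np.abs(lam - k * d) <= d,
                  0.5 * (1 + np.cos(np.pi * (lam - k * d) / d)), 0.0)
         for k in range(K)]

ola = sum(banks)
wola_of_hann = sum(b**2 for b in banks)
wola_of_sine = sum(np.sqrt(b)**2 for b in banks)   # sine window = sqrt(Hann)

assert np.allclose(ola, 1.0)                 # OLA condition (30) holds
assert not np.allclose(wola_of_hann, 1.0)    # Hann does NOT satisfy WOLA (31)
assert np.allclose(wola_of_sine, 1.0)        # sine windows satisfy WOLA (31)
```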

•
Meyer's window form modifies the square root of the raised cosine window (sine window) by adding the function ν(x) in the argument, x, which makes the first derivative continuous at the ending points. In this case, the window functions become those given in [32], with a_{k+1} = b_k, b_{k+1} = c_k, while the initial and the last intervals are defined as in (29).
In order to overcome the non-differentiability of the sine and cosine functions at the interval ending points, the previous argument from (28) is modified in this way, producing Meyer's wavelet-like transfer functions.
If we now check the derivative of a transfer function, dH_k(λ)/dλ, at the interval ending points, we will find that it is zero-valued. This was the reason for introducing the nonlinear (polynomial) argument form instead of x or λ, having in mind the relation between the arguments x and λ.
Example 11. The transfer functions from the previous example, for the case of the Hann window and the triangular (Bartlett) window, of the forms that produce ∑_{k=0}^{K−1} H_k²(λ_p) = 1, and whose argument is modified in order to achieve differentiability at the ending points, are shown in Figure 12. Due to the differentiability, these transfer functions have a faster convergence than the forms in the previous example, and are appropriate for vertex-frequency and time-frequency analysis. The results of this analysis would be similar to those presented in Figures 9 and 10. A difference exists in the reconstruction procedure as well.
The same argument modification is applied to the triangular window. The simplest polynomial would satisfy the conditions ν(0) = 0 and ν(1) = 1; in general, the symmetry condition ν(x) + ν(1 − x) = 1 is also required. These transfer functions are the extension of the linear forms presented in (23) and could be very convenient for the vertex (time) domain implementation. A polynomial of the third order in λ requires only a 3-neighborhood in the vertex (time) domain implementation.
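As an illustration of such an auxiliary polynomial, the widely used Meyer function ν(x) = x⁴(35 − 84x + 70x² − 20x³) can be checked against the stated conditions; this particular ν is a common choice from the wavelet literature, assumed here for illustration rather than taken from the text.

```python
import numpy as np

def nu(x):
    """Meyer auxiliary polynomial: nu(0) = 0, nu(1) = 1, nu(x) + nu(1 - x) = 1."""
    return x**4 * (35 - 84 * x + 70 * x**2 - 20 * x**3)

x = np.linspace(0, 1, 1001)
eps = 1e-6

assert abs(nu(0.0)) < 1e-12 and abs(nu(1.0) - 1.0) < 1e-12
assert np.allclose(nu(x) + nu(1 - x), 1.0)          # symmetry condition
assert abs((nu(eps) - nu(0.0)) / eps) < 1e-6        # flat at x = 0
assert abs((nu(1.0) - nu(1.0 - eps)) / eps) < 1e-6  # flat at x = 1
```

The flatness at the interval ends is what restores the first-derivative continuity of the sine-window argument.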
• Spectral graph wavelet transform. In the same way as the LGFT is defined as a projection of a graph signal onto the corresponding kernel functions, the spectral graph wavelet transform can be calculated as the projection of the signal onto the wavelet transform kernels. The basic form of the wavelet transfer function in the spectral domain is denoted by H(λ_p). The other transfer functions of the wavelet transform are then obtained as the scaled versions of the basic function H(λ_p), using the scales s_i, i = 1, 2, . . . , K − 1. The scaled transfer functions are H_{s_i}(λ_p) = H(s_i λ_p) [21,22,33–36].
The father wavelet is a low-pass scale function, denoted by G(λ_p), in the same way as the function H_0(λ_p) was in the LGFT. The set of scales for the calculation of the wavelet transform is s ∈ {s_1, s_2, . . . , s_{K−1}}. The transfer functions obtained at these scales are H_{s_i}(λ_p) and G(λ_p). Next, the spectral wavelet transform is calculated as a projection of the signal onto the bandpass (and scaled) wavelet kernel, ψ_{m,s_i}(n), in the same way as the kernel H_{m,k}(n) was used in the LGFT in (18). The Meyer approach to the transfer functions is defined in (33), with the argument ν(q(s_i λ − 1)). The same form can be applied to the wavelet transform using H(s_i λ_p) and the corresponding intervals of support for this function, where the scales are defined by s_i = s_{i−1}M = M^i/λ_max.

-
The interval for the low-pass function, G(λ), starts at 0, with the value G(λ) = 1 as λ → 0. Notice that the wavelet transform is just a special case of the varying transfer functions, where narrow transfer functions are used for low spectral indices and wide transfer functions for high spectral indices, as shown in Figure 12b or Figure 9b,d. In the implementations, we can use the vertex-domain localized polynomial approximations of the spectral wavelet functions, in the same way as described in Section 5.
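A minimal sketch of the spectral graph wavelet calculation follows, with an assumed mother kernel g(λ) = λe^{1−λ}, a common illustrative choice from the literature, not the kernel defined in the text:

```python
import numpy as np

# Small cycle graph and its spectrum
N = 32
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
lam, U = np.linalg.eigh(L)           # eigenvalues lam_p and eigenvectors u_p

def g(l):
    """Illustrative bandpass mother kernel, peaking at l = 1 with g(1) = 1."""
    return l * np.exp(1 - l)

x = np.random.default_rng(0).standard_normal(N)
X = U.T @ x                          # GFT of the signal

scales = [0.5, 1.0, 2.0, 4.0]
# Wavelet coefficients: inverse GFT of the spectrally windowed signal g(s*lam)X
W = np.array([U @ (g(s * lam) * X) for s in scales])

assert W.shape == (len(scales), N)
assert abs(g(1.0) - 1.0) < 1e-12     # kernel maximum at lam = 1
```

Each row of W plays the role that one spectral band of the LGFT plays in (18), with the band location controlled by the scale s_i instead of a shift.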

•
Optimization of the vertex-frequency representations. As in classical time-frequency analysis, various measures can be used to compare and optimize joint vertex-frequency representations. An overview of these measures may be found in [37]. Here, we shall suggest the one-norm (in the vector norm sense), introduced to the time-frequency optimization problems in [37], normalized by the Frobenius norm of the matrix S, which is used for the energy normalization. The normalization factor can be omitted if S(m, k) is a tight frame. Here we will just underline that the functions S(m, k) are referred to as a frame. In the case of a graph signal, x, the set of functions S(m, k) is a frame if [22]

a||x||² ≤ ∑_{m,k} |S(m, k)|² ≤ b||x||²

holds, with a and b being positive constants. These constants determine the stability of reconstructing the signal from the values S(m, k). A frame is called a tight frame if a = b, and Parseval's tight frame if a = b = 1. The LGFT, as given by (17), represents Parseval's tight frame when the condition in (31) holds. Notice that Parseval's theorem is used for the LGFT, S(m, k), as it is the GFT of the spectrally windowed signal, X(p)H_k(λ_p). The LGFT defined by (17) is a tight frame if the condition in (31) or (49) holds. This is the condition used to define the transfer functions shown in Figure 5b,c.
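The one-norm concentration measure normalized by the Frobenius norm can be sketched directly; lower values indicate a more concentrated representation. The two test matrices below are illustrative:

```python
import numpy as np

def concentration(S):
    """One-norm of the representation divided by its Frobenius norm."""
    return np.sum(np.abs(S)) / np.linalg.norm(S)

N, K = 64, 16
peaky = np.zeros((N, K)); peaky[3, 5] = 1.0      # fully concentrated
spread = np.ones((N, K)) / np.sqrt(N * K)        # fully spread, same energy

assert abs(concentration(peaky) - 1.0) < 1e-12
assert abs(concentration(spread) - np.sqrt(N * K)) < 1e-9
assert concentration(peaky) < concentration(spread)
```

A single nonzero element gives the measure its minimum value of 1, while a uniform representation of the same energy gives √(NK).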

Polynomial LGFT Approximation
Let us assume that the spectral domain localization window in the LGFT corresponds to a transfer function of a bandpass graph system, H_k(λ_p). In the case of very large graphs, for the vertex-domain realization of the LGFT, it is of crucial importance to define or approximate this transfer function by a polynomial,

H_k(λ_p) ≈ ∑_{i=0}^{M−1} h_{i,k} λ_p^i,

where k = 0, 1, . . . , K − 1, K is the number of spectral bands, and M − 1 is the polynomial order. It is assumed that the transfer function H_k(λ_p) is centered at an eigenvalue, λ_k, and is of a bandpass type around it (as in (17)). The vector form of the LGFT, S(m, k), defined for the vertex index m and spectral index k by (18), is then given by

s_k = H_k(L)x = ∑_{i=0}^{M−1} h_{i,k} L^i x.

In this notation, s_k is a column vector whose elements are equal to S(m, k), m = 1, 2, . . . , N. The property of the eigendecomposition of a matrix power is used to obtain this result. The number of shifted transfer functions, K, does not depend on the number of indices, N. The realization of the LGFT is based on a linear combination of the graph signal shifts, L^i x, and does not require the graph Fourier transform or any other operation on the entire graph.
For this reason, the bandpass LGFT functions, H_k(λ), k = 0, 1, . . . , K − 1, in the form given by (28) or (33), should be realized using their approximations by polynomials of order (M − 1). Although the approximation based on the Chebyshev polynomials is most commonly used for this purpose [25,31], we will revisit alternative approaches [38] as well.
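The equivalence between the vertex-domain polynomial implementation, s_k = Σ_i h_{i,k} L^i x, and the spectral-domain filtering with H_k(λ_p) = Σ_i h_{i,k} λ_p^i can be checked on a small graph; the coefficients below are arbitrary illustrative values, not values from Table 1.

```python
import numpy as np

N = 20
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)   # cycle Laplacian
lam, U = np.linalg.eigh(L)

h = np.array([0.1, -0.3, 0.25, -0.05])    # illustrative polynomial coefficients
x = np.random.default_rng(1).standard_normal(N)

# Vertex-domain: accumulate h_i * L^i x using only repeated local products
s_vertex = np.zeros(N)
Lp_x = x.copy()                            # L^0 x
for h_i in h:
    s_vertex += h_i * Lp_x
    Lp_x = L @ Lp_x                        # next power, a local operation

# Spectral-domain: filter with H(lam_p) = sum_i h_i lam_p^i
H_of_lam = sum(h_i * lam**i for i, h_i in enumerate(h))
s_spectral = U @ (H_of_lam * (U.T @ x))

assert np.allclose(s_vertex, s_spectral)
```

The vertex-domain loop never forms the eigendecomposition; it only multiplies the previous result by L, which is the property that makes the polynomial LGFT feasible for very large graphs.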

Chebyshev Polynomial
The transfer functions in the graph relations, denoted by H_k(λ), are defined at a discrete set of eigenvalues, λ = λ_p. The polynomial approximation is obtained by using a continuous function whose argument is within the range 0 ≤ λ ≤ λ_max. The optimal choice for this type of polynomial approximation are the so-called "min-max" Chebyshev polynomials, which have the property that the maximal possible deviation from the desired function is minimal. This property is of crucial importance, since we approximate the transfer functions in continuous λ, and the approximations are then used as transfer functions at the discrete set of eigenvalues λ_p in the LGFT.
The Chebyshev polynomials are defined by T_0(λ) = 1, T_1(λ) = λ, and the recursion T_m(λ) = 2λT_{m−1}(λ) − T_{m−2}(λ). The mapping T̄_m(λ) = T_m(2λ/λ_max − 1) is introduced to transform the argument from 0 ≤ λ ≤ λ_max to −1 ≤ λ ≤ 1. Then, the Chebyshev polynomial approximation of the finite (M − 1)-th order can be written as a sum of the polynomials T̄_m(λ), where the polynomial coefficients are calculated using the Chebyshev polynomial inversion property. Based on the previous definitions, the vertex-domain implementation (39) of the spectral LGFT form can now be written in the Chebyshev polynomial form. In the calculation of the polynomial form of the transfer functions in (41), only the (M − 1)-neighborhood is used to obtain the LGFT for every vertex, n. This form does not employ the eigendecomposition analysis over the whole graph in any way. Therefore, the computational complexity remains feasible for large graphs.
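As a sketch of the min-max approximation step, NumPy's Chebyshev class can interpolate a bandpass prototype over [0, λ_max]; a smooth Gaussian-shaped bump is used here as a stand-in for one of the H_k(λ), since the exact constants of (28) are not reproduced.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

lam_max = 4.0

def H(lam):
    """Smooth stand-in bandpass function centered at lam = 2."""
    return np.exp(-(lam - 2.0)**2)

# Chebyshev interpolant of degree M-1 on the domain [0, lam_max]
M = 21
P = Chebyshev.interpolate(H, M - 1, domain=[0, lam_max])

lam = np.linspace(0, lam_max, 2001)
err = np.max(np.abs(P(lam) - H(lam)))
assert err < 1e-5      # degree 20 is already very accurate for a smooth bump
```

For the piecewise raised cosine bands of (28), which are only C¹, the convergence is slower, which is why orders such as M = 20 or M = 50 are considered in the examples.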
Example 12. The Chebyshev polynomial approximation approach will be illustrated on a set of the transfer functions, H_k(λ), defined in (28) and (29). For K = 15, these transfer functions are shown in Figure 8a. The transfer functions H_k(λ) satisfy the OLA condition, ∑_{k=0}^{K−1} H_k(λ) = 1. We used the Chebyshev polynomials, P̄_{k,M−1}, k = 0, 1, . . . , K − 1, given by (40), to approximate each individual transfer function, H_k(λ). Two polynomial orders are considered for the approximation, M = 20 and M = 50. The resulting Chebyshev polynomial approximations of the transfer functions from Figure 8a are given in Figure 8d for the polynomial order M = 20. In order to show the compliance of the obtained approximation with the imposed OLA condition, the value of ∑_{k=0}^{K−1} P̄_{k,M−1}(λ) is depicted by the dotted line in Figure 8d. As can be seen, these values are close to unity. This means that the signal reconstruction from the LGFT calculated using the presented polynomial approximation will be stable and accurate.
The Chebyshev polynomial approximations of H_k(λ), calculated in the presented way, are applied to obtain the vertex-frequency analysis of the signal from Example 1, using the LGFT. Time-frequency representations of the harmonic signal from Example 1 are shown for both polynomial orders, for M = 20 in Figure 9e and for M = 50 in Figure 9f. As can be seen, the representation with the lower-order polynomial approximation, M = 20, in Figure 9e is less concentrated than the representation in Figure 9f obtained for M = 50. However, using higher orders, (M − 1), of the polynomial approximation increases the calculation complexity, since wider neighborhoods are required in the LGFT calculation. The experiment is repeated for the graph signal from Example 1. The two considered sets of Chebyshev polynomial-based approximations of the bandpass transfer functions H_k(λ), k = 0, 1, . . . , K − 1, from Figure 8a are now used in the calculation of the vertex-frequency representations in Figure 10e,f, for M = 20 and M = 40, respectively.
Example 13. In order to present the Chebyshev polynomial approximation in more detail, and to give the exact values of the approximation coefficients, we further reduce the approximation order to (M − 1) = 5. This order is used to calculate the approximations of the bandpass functions, H_k(λ), for every k, in the case of the raised cosine form given in (28), with K = 10 bands. The resulting approximation coefficients, h_{i,k}, which are used in the vertex-domain implementation defined by (39), are shown in Table 1.

Least Squares Approximation
The bandpass transfer functions H_k(λ), used in the calculation of the vertex-frequency (time-frequency) representations, can be approximated using a polynomial such that the squared error is minimized. This approximation will be referred to as the least squares (LS) approximation. As in the case of the Chebyshev approximation, the interval 0 ≤ λ ≤ λ_max is normalized to [−1, 1] to enable the standard calculation procedure. This is achieved using the substitution z = (2λ − λ_max)/λ_max. The resulting linear system of equations can be written in the matrix form Sa = b. When this system is solved, the approximation coefficients α_{0,k}, α_{1,k}, . . . , α_{M−1,k} are obtained. With λ = 0.5(z + 1)λ_max, the approximation in λ follows. The vertex-domain implementation (39) of the spectral LGFT form, based on this approximation, is then performed for every k = 0, 1, . . . , K − 1.
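The least squares step can be sketched with NumPy's polynomial fitting on the mapped variable z ∈ [−1, 1]; the target below is an illustrative smooth bandpass bump rather than the exact raised cosine band.

```python
import numpy as np

lam_max = 4.0
lam = np.linspace(0, lam_max, 400)
z = (2 * lam - lam_max) / lam_max          # map [0, lam_max] -> [-1, 1]

H = np.exp(-(lam - 2.0)**2)                # illustrative bandpass target

# Least squares polynomial fit of degree M-1 in z (solves S a = b in LS sense)
M = 12
a = np.polynomial.polynomial.polyfit(z, H, M - 1)
H_approx = np.polynomial.polynomial.polyval(z, a)

err = np.max(np.abs(H_approx - H))
assert err < 1e-2
```

The coefficients a are the α_{i,k} of the text for one band; mapping back with λ = 0.5(z + 1)λ_max turns them into coefficients of powers of λ for the vertex-domain implementation (39).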

Legendre Polynomial
The least squares approximation using Legendre polynomials assumes the minimization of the squared error with respect to the Legendre polynomial basis functions, φ_m(z). These polynomials satisfy the so-called Bonnet's recursive relation, (m + 1)φ_{m+1}(z) = (2m + 1)zφ_m(z) − mφ_{m−1}(z), with φ_0(z) = 1 and φ_1(z) = z. This case also assumes the normalization and shift of the interval 0 ≤ λ ≤ λ_max to achieve the mapping to [−1, 1]. This is performed with z = (2λ − λ_max)/λ_max, to obtain φ̄_m(λ) = φ_m(2λ/λ_max − 1).
For each m = 0, 1, . . . , M − 1, the projection coefficients onto φ_m(z) are calculated, and are further used to obtain the polynomial coefficients of the approximation. As λ = 0.5(z + 1)λ_max holds, in analogy with the previous cases, we obtain the approximation serving as a basis for the implementation of the vertex-frequency analysis using the spectral LGFT form in (39). Example 14. The Legendre polynomial approximation will be illustrated on the transfer functions, H_k(λ), defined by (28) and (29), for every k = 0, 1, . . . , K − 1, with K = 15. Recall that the functions H_k(λ) satisfy ∑_{k=0}^{K−1} H_k(λ) = 1. In this example, we consider approximations of these functions using the LS approximation (42), as well as the approximations based on the Legendre polynomials (44). For comparison, we also consider the approximation based on the Chebyshev polynomials.
To illustrate how the polynomial order influences the convergence of the approximations, we consider three orders of polynomials: M = 12, M = 20, and M = 40. The shifted spectral transfer functions H_k(λ), k = 0, 1, . . . , K − 1, which are being approximated, are shown in Figures 13a and 14a. The approximations based on the Chebyshev polynomials are shown in Figures 13b and 14b, for the considered polynomial orders. The approximations using the Legendre polynomials are shown in Figures 13c and 14c, while the LS approximations are shown in Figures 13d and 14d, also for the considered polynomial orders. It can be seen that even with M = 12, the Chebyshev- and LS-based approximations are sufficiently narrow to enable a clear distinction between the various spectral bands. The approximations using the Legendre polynomials are shown to be less convenient for this purpose.
The fact that a polynomial order as low as M = 12 can be used in the calculation of the time-frequency and vertex-frequency representations is indicated in Figure 15. The polynomial approximations from Figure 13b–d are used in the calculation of the time-frequency representations for the harmonic signal from Example 1. As shown in Figure 15a,c,e, the signal components are clearly distinguishable in the representations obtained with the approximated band functions. The experiment was repeated for the graph signal from Example 1, and the obtained representations are presented in Figure 15b,d,f, for the spectral bandpass functions approximated using the considered polynomials: Chebyshev, Legendre, and LS. We can conclude that a higher polynomial order M increases the LGFT calculation complexity, since it uses a wider neighborhood of the considered vertex.
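Bonnet's recursion from the Legendre discussion above can be verified against NumPy's reference Legendre evaluation:

```python
import numpy as np
from numpy.polynomial.legendre import legval

z = np.linspace(-1, 1, 201)
M = 8

# Build phi_0 ... phi_{M-1} with Bonnet's recursion:
# (m + 1) phi_{m+1}(z) = (2m + 1) z phi_m(z) - m phi_{m-1}(z)
phi = [np.ones_like(z), z.copy()]
for m in range(1, M - 1):
    phi.append(((2 * m + 1) * z * phi[m] - m * phi[m - 1]) / (m + 1))

# Compare with NumPy's evaluation (a unit coefficient vector selects P_m)
for m in range(M):
    c = np.zeros(m + 1); c[m] = 1.0
    assert np.allclose(phi[m], legval(z, c))
```

The same recursion, applied with the mapped argument z = 2λ/λ_max − 1, generates the shifted basis φ̄_m(λ) used in the approximation.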

Inversion of the LGFT
Two approaches to the inversion of the classical STFT are used.One is based on the summation of the STFT values (overlap-and-add approach) and the other uses the weighted STFT values for the reconstruction (weighted overlap-and-add approach).These two approaches will be used in the vertex-frequency analysis as well.

Inversion by Summation (OLA Method)
For the LGFT defined by (27), or in the polynomial form by (39), the signal can be reconstructed by a summation over all spectral shifts,

x(n) = ∑_{k=0}^{K−1} S(n, k).

This relation holds when the OLA condition ∑_{k=0}^{K−1} H_k(L) = I holds; the spectral-domain form of this condition is ∑_{k=0}^{K−1} H_k(λ_p) = 1. Having in mind that H_p(λ_k) is band-limited, with Q_p nonzero samples, the support of the LGFT satisfies ||s_p||_0 ≥ N/Q_p. The smallest possible number of nonzero samples in the LGFT is therefore N/Q, where Q = max_p{Q_p}. If we select just one spectral frequency by a bandpass filter, with Q = 1, then the duration of S(n, p) must be N. If half of the spectral band is selected by the bandpass function, Q = N/2, then ||s_p||_0 ≥ 2. Finally, if all spectral components are used, then a delta pulse is possible in the time domain, that is, for Q = N, we can have ||s_p||_0 ≥ 1.
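The OLA reconstruction x(n) = Σ_k S(n, k) can be demonstrated end-to-end on a small graph; the Hann bank below is one assumed set of H_k(λ) satisfying the OLA condition (30), built on an even-length cycle graph so that λ_max = 4.

```python
import numpy as np

# Cycle graph and its GFT
N = 32
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
lam, U = np.linalg.eigh(L)
lam_max = lam[-1]

# Hann bank forming a partition of unity on [0, lam_max] (OLA condition (30))
K = 9
d = lam_max / (K - 1)
def Hk(k, l):
    t = np.abs(l - k * d)
    return np.where(t <= d, 0.5 * (1 + np.cos(np.pi * t / d)), 0.0)

x = np.random.default_rng(2).standard_normal(N)
X = U.T @ x

# LGFT bands: inverse GFT of the spectrally windowed signal X(p) Hk(lam_p)
S = np.array([U @ (Hk(k, lam) * X) for k in range(K)])   # S[k, n] = S(n, k)

# Reconstruction by summation over all spectral shifts
assert np.allclose(S.sum(axis=0), x)
```

Because the bank sums to one at every eigenvalue, the summation over k recovers the signal exactly, with no weighting needed.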

Analysis Based on Splitting Large Signals
The graph analysis suggested a polynomial approximation of the transfer functions and the implementation of the vertex-frequency analysis using powers of the Laplacian applied to the signal. This means that the neighborhood of the considered sample, defined by the power of the Laplacian, is used. In classical signal analysis, this problem was approached by windowing a large signal, that is, by splitting the analysis into smaller nonoverlapping or overlapping time segments. This idea will now be generalized to graph signals, and may then be used to define more general forms of signal splitting in the classical analysis.
For undirected graphs, U^{−1} = U^T holds. From the inverse GFT, x_0 = UX_0 and x_1 = UX_1, having in mind the positions of the zero values in x_0 and x_1, we obtain the relations with U_Lo and U_Up, the lower and upper parts of the matrix U. They consist of the rows of U corresponding to the zero-value positions of the signals x_0 and x_1, respectively. Splitting now the transform vector X_0 into its even-indexed part, (X_0)_Even, and odd-indexed part, (X_0)_Odd, and doing the same for the transform vector X_1, we obtain

0 = U_{Lo,Even}(X_0)_Even + U_{Lo,Odd}(X_0)_Odd,
0 = U_{Up,Even}(X_1)_Even + U_{Up,Odd}(X_1)_Odd.     (60)

This means that there is no need to calculate the full GFT. It is sufficient to calculate the GFT of order N/2, that is, the values (X_0)_Even and (X_1)_Even, or (X_0)_Odd and (X_1)_Odd. The remaining parts of the transform vectors can be obtained from (60). This reduces the problem dimensionality.
Notice that all matrices used in this relation are of size N/2 × N/2, while all vectors are of size N/2 × 1. This approach can be applied to different splitting schemes (for example, we can split the signal into even- and odd-indexed samples, and then split the transform elements into the upper and lower parts). The same procedure can be used for splitting the signal into unequal sets of samples. The case with overlapping windowed signals can easily be split into nonoverlapping problems. For example, if the window in the classical domain overlaps over half of the window width, then the problem can be separated into two sets of nonoverlapping windows, since every other window does not overlap [46–51].

Conclusions
Time-frequency analysis is a basis for extending the classical concepts to the vertex-varying spectral analysis of signals on graphs. Attention has been paid to linear signal transformations, as the most important forms in classical signal analysis and graph signal processing. The spectral domain of these representations has been considered in detail, since it provides an opportunity for a direct generalization of the well-developed time-frequency approaches to vertex-frequency analysis. Various polynomial forms are used in the implementation, since they can be computationally very efficient in the case of very large graphs. The polynomial forms are developed in detail in graph signal processing, and can then be used in classical time-frequency analysis, their simplicity being attractive for the implementation in the case of large time-domain signals. Reconstruction of the graph signals from the vertex-frequency representation has been reviewed, with some practical notes on the filtering, optimal parameter selection, uncertainty principle, and schemes for dividing large signals into smaller parts. All results are illustrated by numerous numerical examples.

Figure 1 .
Figure 1.Time domain of periodic signals presented as: Circular unweighted directed graph (left) and an undirected graph (right), with N = 8 vertices (instants).

Figure 2 .
Figure 2. A circular undirected unweighted graph as the domain for classical signal analysis. Each of N = 100 vertices (instants) is connected to its predecessor and successor vertices (top). A general form of a graph, with N = 100 vertices (bottom).

Figure 3 .
Figure 3. Graph signal on a circular undirected unweighted graph (top), and on a general graph (bottom). Vertices from V_1 are designated by blue dots, vertices from V_2 are marked by black dots, while vertices from V_3 are given by green dots.

Figure 4 .
Figure 4. Local smoothness of the signals from Figure 3. The values are shown for nonzero signal samples. The local smoothness in classical signal analysis is related to the instantaneous frequency.

Example 5 .
Time-frequency and vertex-frequency analysis based on the binomial decomposition of the signals from Example 1 is performed in this example. The corresponding transfer functions for the time-frequency analysis (circular undirected graph, Figure 2 (top)) and the vertex-frequency analysis (general graph, Figure 2 (bottom)) are shown in Figures 5 and 6, respectively. The time-frequency representation of the three-harmonic signal from Figure 3 (top) is shown in Figure 6a (left panel). Its reassigned version, to the position of the maximum distribution value, is given in Figure 6a (right panel).

Figure 6 .
Figure 6. Time-frequency and vertex-frequency representations of the signals from Example 1: (a) Time-frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the transfer functions from Figure 5 (bottom). (b) Vertex-frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 6 (bottom). (c) Time-frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 5 (bottom). The complex signal is formed by adding two corresponding sine and cosine components. In all cases, the original representation is given on the left panel, while the reassigned value to the position of the distribution maximum is given on the right panel.

Figure 8 .Example 8 .
Figure 8. Transfer functions in the spectral domain. (a) The transfer functions corresponding to the Hann form terms for K = 15. (b) The spectral index-varying (wavelet-like) transfer functions whose terms are of half-cosine form, with K = 11. (c) The spectral domain signal adaptive transfer functions with K = 17. (d) Approximations of the transfer functions from panel (a) using Chebyshev polynomials, with H_9(λ) designated by thick black lines, whereas gray markers indicate the corresponding discrete values. Example 8. The transfer functions with various widths of the Hann window form from Figure 8 are used for the time-frequency representation of the signal on the circular graph from Example 1. The results are shown in Figure 9.

Figure 9 .Example 9 .
Figure 9. Time-frequency representation of the three-component time-domain signal from Example 1, shown in Figure 3 (top), based on various transfer functions. The LGFT is calculated based on: (a) the transfer functions in Figure 5, (b) the transfer functions in Figure 8a, (c) the wavelet-like spectral transfer functions in Figure 8b, (d) the signal adaptive transfer functions from Figure 8c, (e) the Chebyshev polynomial-based approximations from Figure 8d, with M = 20, and (f) the Chebyshev polynomial-based approximations from Figure 8d, with M = 50. Example 9. In this example, the same transfer functions from Figure 5 are used for the vertex-frequency representation of the signal on the general graph from Example 1. The results are shown in Figure 10.

Figure 10 .
Figure 10. Vertex-frequency representation of the three-component general graph signal from Example 1, shown in Figure 3 (bottom). The LGFT is calculated based on: (a) the transfer functions in Figure 5, (b) the transfer functions in Figure 8a, (c) the wavelet-like spectral transfer functions in Figure 8b, (d) the signal adaptive transfer functions from Figure 8c, (e) the Chebyshev polynomial-based approximations from Figure 8d, with M = 20, and (f) the Chebyshev polynomial-based approximations from Figure 8d, with M = 50.

Figure 11 .
Figure 11. Transfer functions formed using the square root of the Hann window (a,b) and the Bartlett window (c,d), so that the reconstruction condition ∑_{k=0}^{K−1} H_k²(λ_p) = 1 is satisfied, for a uniform splitting (a,c) of the spectral domain and a wavelet-like splitting (b,d).

Figure 13 .
Figure 13. Spectral bandpass transfer functions used for the calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximations of the spectral transfer functions based on the Chebyshev polynomials, with M = 12. (c) Legendre-polynomial-based approximations of the spectral transfer functions, with M = 12. (d) Least squares approximations of the spectral transfer functions, with M = 12. For convenience, the function H_8(λ) is designated with a thick black line on each panel.

Figure 14 .
Figure 14. Spectral bandpass transfer functions used for the calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximations of the spectral transfer functions based on the Chebyshev polynomials, with M = 20. (c) Legendre-polynomial-based approximations of the spectral transfer functions, with M = 20. (d) Least squares approximations of the spectral transfer functions, with M = 20. For convenience, the function H_8(λ) is designated with a thick black line on each panel.

Figure 14 .
Figure 14. Spectral bandpass transfer functions used for the calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximations of the spectral transfer functions based on the Chebyshev polynomials, with M = 40. (c) Legendre-polynomial-based approximations of the spectral transfer functions, with M = 40. (d) Least squares approximations of the spectral transfer functions, with M = 40. For convenience, the function H_8(λ) is designated with a thick black line on each panel.

Figure 15 .
Figure 15. (a) Time-frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the polynomial approximations of the transfer functions from Figure 13b. (b) Vertex-frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 13b. (c) Time-frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 13c. (d) Vertex-frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 13c. (e) Time-frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 13d. (f) Vertex-frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 13d. The complex signal is formed by adding two corresponding sine and cosine components.
∑_{k=0}^{K−1} H_k(Λ) = I, since we may write ∑_{k=0}^{K−1} H_k(L) = U(∑_{k=0}^{K−1} H_k(Λ))U^T = I, using the fact that U^T U = I holds for a symmetric matrix L. This condition is used when the transfer functions in Figure 5a are defined. The element-wise form of the inversion relation (46) is x(n) = ∑_{k=0}^{K−1} S(n, k).

Remark 3 .
In classical time-frequency analysis, the elements of the inverse DFT matrix U are equal to u_k(n) = exp(j2π(n − 1)(k − 1)/N)/√N, and H_k(λ_p) = H_k(e^{jω_p}) are the bandpass transfer functions, with the kernel defined accordingly.