# From Time–Frequency to Vertex–Frequency and Back


## Abstract


## 1. Introduction

## 2. Basic Graph Definitions

## 3. Classical Signal Processing within the Graph Signal Processing Framework

**Directed circular graph.** The signal values, $x(n)$, in classical signal processing are defined over a well-ordered time domain, given by the time instants $n=1,2,\dots ,N$. In DFT-based classical analysis, it is also assumed that the signal is periodic. The domain of such signals is illustrated in Figure 1 for $N=8$.

- ${A}_{mn}=1$ if the vertex or instant m is a predecessor (connected) to the instant (vertex) n, and
- ${A}_{mn}=0$ otherwise,

**Undirected circular graph.** When the circular graph is not directed, as shown in Figure 1 (right), we assume that every instant (vertex), $n$, is connected both to the preceding vertex (instant), $n-1$, and to the succeeding vertex, $n+1$. The adjacency (weight) matrix for this kind of connection, $\mathbf{A}=\mathbf{W}$, and the corresponding graph Laplacian, defined by $\mathbf{L}=\mathbf{D}-\mathbf{W}$, are given by

**Smoothness and local smoothness.** Notice that for an undirected circular graph and a small frequency, ${\omega}_{k}^{2}$, the relation in (6) can be approximated by

**System on a general graph.** The relations presented in this section are special cases of the general graph Fourier transform (Section 2) and of systems for graph signals.

**Graph signal filtering—Graph convolution.** Three approaches to the filtering of a graph signal, using a system whose transfer function is $G(\mathsf{\Lambda})$ with elements $G({\lambda}_{k})$, $k=1,2,\dots ,N$, on the diagonal, will be presented next.

- (i) The simplest approach is based on the direct employment of the GFT. It is performed by:
  - (a) Calculating the GFT of the input signal, $\mathbf{X}={\mathbf{U}}^{-1}\mathbf{x}$,
  - (b) Finding the GFT of the output signal by multiplying $\mathbf{X}$ by $G(\mathsf{\Lambda})$, $\mathbf{Y}=G(\mathsf{\Lambda})\mathbf{X}$,
  - (c) Calculating the output (filtered) signal as the inverse GFT of $\mathbf{Y}$, $\mathbf{y}=\mathbf{U}\mathbf{Y}$.

The result of this operation is the graph convolution

$$y(n)=x(n)\ast g(n)=\mathrm{IGFT}\{\mathrm{GFT}\{x(n)\}\,\mathrm{GFT}\{g(n)\}\}=\mathrm{IGFT}\{X(k)G({\lambda}_{k})\}.$$

However, this procedure can be computationally unacceptable for very large graphs.

- (ii) A possibility to avoid the full-size transformation matrices for large graphs is to approximate the filter transfer function, $G(\lambda )$, at the positions of the eigenvalues, $\lambda ={\lambda}_{k}$, $k=1,2,\dots ,N$, by a polynomial, ${h}_{0}+{h}_{1}\lambda +{h}_{2}{\lambda}^{2}+\cdots +{h}_{M}{\lambda}^{M}$, that is,

  $${h}_{0}+{h}_{1}{\lambda}_{k}+{h}_{2}{\lambda}_{k}^{2}+\cdots +{h}_{M}{\lambda}_{k}^{M}=G({\lambda}_{k}),\qquad k=1,2,\dots ,N.$$

  In matrix form, this system of equations reads

  $$\mathbf{V}\mathbf{h}=\mathrm{diag}\{\mathbf{G}\},$$

  where $\mathbf{V}$ is the Vandermonde matrix of the eigenvalues and

  $$\mathrm{diag}\{\mathbf{G}\}={[G({\lambda}_{1}),G({\lambda}_{2}),\dots ,G({\lambda}_{N})]}^{T}.$$

  This system can be solved efficiently for a relatively small $M$. The implementation of the graph filter is then performed in the vertex domain, using the so-obtained coefficients ${h}_{0},{h}_{1},\dots ,{h}_{M}$ in (10) and the $M$-neighborhood of every considered vertex. Notice that the relation between the IGFT of $\mathrm{diag}\{\mathbf{G}\}$ and the system coefficients ${h}_{0},{h}_{1},\dots ,{h}_{M}$ is direct in the classical DFT case only, while it is more complex in the general graph case [25]. For large $M$, the solution of the system of equations in (12) for the unknown parameters ${h}_{0},{h}_{1},\dots ,{h}_{M}$ can be numerically unstable, due to the large values of the powers ${\lambda}_{k}^{M}$.
- (iii) Another approach that avoids the direct GFT calculation in the implementation of graph filters is to approximate the given transfer function, $G(\lambda )$, by a polynomial $H(\lambda )$ in the continuous variable $\lambda $. This approximation does not guarantee that the transfer function $G(\lambda )$ and its polynomial approximation $H(\lambda )$ will be close at the discrete set of points $\lambda ={\lambda}_{p}$, $p=1,2,\dots ,N$. The maximal absolute deviation of the polynomial approximation can be kept as small as possible using the so-called min–max polynomials. After the polynomial approximation is obtained, the output of the graph system is calculated using (10), that is,

  $$\mathbf{y}=\left(\sum _{m=0}^{M-1}{h}_{m}{\mathbf{L}}^{m}\right)\mathbf{x}=H(\mathbf{L})\,\mathbf{x}.$$
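The approaches above can be compared numerically. The following minimal sketch is not from the paper: the small path graph, the low-pass choice $G(\lambda)=e^{-\lambda}$, and all variable names are our illustrative assumptions. It fits the polynomial of approach (ii) through the Vandermonde system $\mathbf{V}\mathbf{h}=\mathrm{diag}\{\mathbf{G}\}$ and checks that the vertex-domain filter matches the direct GFT implementation of approach (i).

```python
import numpy as np

# Hypothetical small undirected graph (path graph on 6 vertices), used only
# as an illustration; L is its graph Laplacian.
N = 6
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L = np.diag(W.sum(axis=1)) - W

# Eigendecomposition: L = U diag(lam) U^{-1}; U is orthonormal since L is symmetric.
lam, U = np.linalg.eigh(L)

# A desired low-pass transfer function G(lambda), chosen arbitrarily here.
G = np.exp(-lam)

x = np.random.default_rng(0).standard_normal(N)

# (i) Direct GFT implementation: y = U G(Lambda) U^{-1} x.
y_gft = U @ (G * (U.T @ x))

# (ii) Exact polynomial interpolation of G at the eigenvalues (degree N-1),
# solved from the Vandermonde system V h = diag{G}.
h = np.linalg.solve(np.vander(lam, increasing=True), G)

# Vertex-domain implementation y = (h0 I + h1 L + ...) x; each power of L
# only involves a growing local neighborhood of every vertex.
y_poly = sum(hm * np.linalg.matrix_power(L, m) @ x for m, hm in enumerate(h))

assert np.allclose(y_gft, y_poly)
```

For a truly large graph one would truncate the polynomial to a small degree $M\ll N$, trading accuracy for locality, which is exactly the numerical-stability trade-off discussed above.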

**Case study examples.** In the next example, we shall introduce two graphs and signals on these graphs, which will be used as benchmark models for the analysis that follows.

**Example 1.**

**Other graph shift operators.** Finally, notice that in relation (10) we used the graph Laplacian, $\mathbf{L}$, as the shift operator. In addition to the adjacency matrix, $\mathbf{A}$, which is another common choice for the shift operator, the normalized adjacency matrix ($\mathbf{A}/{\lambda}_{\mathrm{max}}$), the normalized graph Laplacian (${\mathbf{D}}^{-1/2}\mathbf{L}{\mathbf{D}}^{-1/2}$), or the random walk (also called diffusion) matrix (${\mathbf{D}}^{-1}\mathbf{W}$) may be used as graph shift operators, producing the corresponding spectral forms of the systems for graph signals [30].
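A minimal sketch of these alternative shift operators, for a hypothetical three-vertex weighted graph (the weight matrix is our assumption, chosen only for illustration):

```python
import numpy as np

# Illustrative weighted undirected graph; W is its weight matrix.
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
D = np.diag(W.sum(axis=1))
L = D - W                                          # graph Laplacian

A_norm = W / np.abs(np.linalg.eigvals(W)).max()    # A / lambda_max
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt               # normalized graph Laplacian
P = np.linalg.inv(D) @ W                           # random walk (diffusion) matrix

# Sanity checks: rows of the random walk matrix sum to one, and the
# normalized Laplacian spectrum lies in [0, 2].
assert np.allclose(P.sum(axis=1), 1.0)
assert np.all(np.linalg.eigvalsh(L_norm) <= 2 + 1e-12)
```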

**Remark 1.**

## 4. Spectral Domain Localized Graph Fourier Transform (LGFT)

**Remark 2.**

**Remark 3.**

#### 4.1. Binomial Decomposition

**Remark 4.**

- (a) The high-pass part is kept unchanged, while the low-pass part is split. This approach corresponds to the wavelet transform, or to frequency-varying classical analysis.
- (b) The high-pass part is also split into its low-pass and high-pass parts, to keep the frequency resolution constant for all frequency bands.

- (a) In a two-scale wavelet-like analysis we keep the high-pass part, ${\mathbf{s}}_{1}$, while the low-pass part, ${\mathbf{s}}_{0}$, is split into its low-pass part, ${\mathbf{s}}_{00}$, and high-pass part, ${\mathbf{s}}_{01}$, using the same transfer function, as

  $${\mathbf{s}}_{1}=\frac{1}{{\lambda}_{\mathrm{max}}}\mathbf{L}\,\mathbf{x},\qquad {\mathbf{s}}_{00}={\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)}^{2}\mathbf{x},\qquad {\mathbf{s}}_{01}=\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\mathbf{x}.$$

  At the next scale, the low-pass part, ${\mathbf{s}}_{00}$, is split again in the same way,

  $${\mathbf{s}}_{000}={\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)}^{3}\mathbf{x},\qquad {\mathbf{s}}_{001}={\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)}^{2}\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\mathbf{x}.$$
- (b) For uniform frequency bands, both the low-pass and the high-pass bands are split in the same way, to obtain

  $${\mathbf{s}}_{00}={\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)}^{2}\mathbf{x},\qquad {\mathbf{s}}_{01}=2\left(\mathbf{I}-\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\right)\frac{\mathbf{L}}{{\lambda}_{\mathrm{max}}}\mathbf{x},\qquad {\mathbf{s}}_{11}=\frac{{\mathbf{L}}^{2}}{{\lambda}_{\mathrm{max}}^{2}}\mathbf{x}.$$

  The bands in relation (24) can be obtained as the terms of the binomial expression ${\left((\mathbf{I}-\mathbf{L}/{\lambda}_{\mathrm{max}})+\mathbf{L}/{\lambda}_{\mathrm{max}}\right)}^{2}\,\mathbf{x}$. If we continue to the next level, by multiplying all the elements in (24) first by the low-pass part, $(\mathbf{I}-\mathbf{L}/{\lambda}_{\mathrm{max}})$, and then by the high-pass part, $\mathbf{L}/{\lambda}_{\mathrm{max}}$, after grouping the same terms we obtain signal bands of the same form as the terms of the binomial ${\left((\mathbf{I}-\mathbf{L}/{\lambda}_{\mathrm{max}})+\mathbf{L}/{\lambda}_{\mathrm{max}}\right)}^{3}\,\mathbf{x}$. We can conclude that the division can be performed into $K$ bands corresponding to the terms of the binomial form

  $${\left((\mathbf{I}-\mathbf{L}/{\lambda}_{\mathrm{max}})+\mathbf{L}/{\lambda}_{\mathrm{max}}\right)}^{K}\phantom{\rule{0.166667em}{0ex}}\mathbf{x},$$

  with the transfer functions

  $${H}_{k}(\mathbf{L})=\left(\begin{array}{c}K\\ k\end{array}\right){\left(\mathbf{I}-\frac{1}{{\lambda}_{\mathrm{max}}}\mathbf{L}\right)}^{K-k}{\left(\frac{1}{{\lambda}_{\mathrm{max}}}\mathbf{L}\right)}^{k}.$$
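Because $\mathbf{I}-\mathbf{L}/\lambda_{\max}$ and $\mathbf{L}/\lambda_{\max}$ commute and sum to the identity, the binomial bands reconstruct the signal exactly by simple summation. A sketch (the path graph and all names are our illustrative assumptions, not from the text):

```python
import numpy as np
from math import comb

# Binomial band transfer functions H_k(L) applied to a signal x; summing all
# K+1 band signals reproduces x, since sum_k H_k(L) = (A + B)^K = I.
N, K = 8, 5
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # illustrative path graph
L = np.diag(W.sum(axis=1)) - W
lmax = np.linalg.eigvalsh(L).max()

B = L / lmax              # high-pass factor
A = np.eye(N) - B         # low-pass factor (A and B commute, A + B = I)

x = np.random.default_rng(1).standard_normal(N)
bands = [comb(K, k)
         * np.linalg.matrix_power(A, K - k) @ np.linalg.matrix_power(B, k) @ x
         for k in range(K + 1)]

# Perfect reconstruction by summation.
assert np.allclose(sum(bands), x)
```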

**Example 2.**

**Example 3.**

**Example 4.**

**Example 5.**

#### 4.2. Hann (Raised Cosine) Window Decomposition

**Example 6.**

**Example 7.**

**Example 8.**

#### 4.3. General Window Form Decomposition—OLA Condition

- A **combination of the raised cosine windows**. After one set of raised cosine windows is defined, we may use another set, with different constants ${a}_{k}$, ${b}_{k}$, ${c}_{k}$, and overlap it with the existing set. If the window values are then halved, the resulting window set satisfies (30). In this way, we can increase the number of different overlapping windows.
- The **Hamming window** can be used in the same way as in (28). The only difference is that the Hamming windows sum up to $1.08$ in the overlapping interval, meaning that the result should be divided by this constant.
- The **Tukey window** has a flat part in the middle and a cosine form in the transition interval. It can also be used, with appropriately defined ${a}_{k}$, ${b}_{k}$, ${c}_{k}$, to take the flat (constant) window range into account.
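The OLA condition can be checked numerically. The sketch below is an assumption-laden illustration: it uses shifted raised cosine (Hann) transfer functions with 50% overlap on a continuous spectral axis, with our own choice of band edges rather than the specific $a_k$, $b_k$, $c_k$ of (28), and verifies that the windows sum to one over the covered part of the axis.

```python
import numpy as np

# Shifted raised cosine (Hann) spectral windows with 50% overlap.
lmax, K = 4.0, 9
delta = lmax / (K - 1)                 # hop between window centers
lam = np.linspace(0, lmax, 1001)

def hann_band(lam, center, delta):
    """Raised cosine window of total width 2*delta centered at `center`."""
    t = lam - (center - delta)
    w = np.sin(np.pi * t / (2 * delta)) ** 2
    return np.where((t >= 0) & (t <= 2 * delta), w, 0.0)

H = np.array([hann_band(lam, k * delta, delta) for k in range(K)])

# OLA condition: away from the first/last half-windows, sum_k H_k(lambda) = 1.
interior = (lam >= delta) & (lam <= lmax - delta)
assert np.allclose(H.sum(axis=0)[interior], 1.0)
```

Near $\lambda=0$ and $\lambda=\lambda_{\max}$ only half-windows contribute, which is why the boundary bands need the separate treatment described in the text.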

#### 4.4. Frame Decomposition—WOLA Condition

The **sine window** is obtained as the square root of the raised cosine window in (28). Obviously, this window satisfies (31). Its form is

$${H}_{k}(\lambda )=\left\{\begin{array}{ll}\mathrm{sin}\left(\frac{\pi}{2}\frac{{a}_{k}}{{b}_{k}-{a}_{k}}\left(\frac{\lambda}{{a}_{k}}-1\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{a}_{k}<\lambda \le {b}_{k},\\ \mathrm{cos}\left(\frac{\pi}{2}\frac{{b}_{k}}{{c}_{k}-{b}_{k}}\left(\frac{\lambda}{{b}_{k}}-1\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{b}_{k}<\lambda \le {c}_{k},\\ 0,&\mathrm{elsewhere}.\end{array}\right.$$

A window that satisfies (31) can be formed from **any window** in the previous section by taking its square root.

**Example 10.** For the case of the Hann window and the triangular (Bartlett) window, the corresponding square-root forms, which produce ${\sum}_{k=0}^{K-1}{H}_{k}^{2}({\lambda}_{p})=1$, are shown in Figure 12, for a uniform splitting of the spectral domain and for a signal-dependent (wavelet-like) form. Notice that the square root of the Hann window is the sine window form. It is obvious that these windows are not differentiable at the ending interval points, meaning that their transforms will be very spread (slow-converging).

The windows defined as the square roots of the presented windows (which originally satisfy the OLA condition) do not satisfy the **first derivative continuity property** at the ending interval points. For example, the raised cosine window satisfies that property, but its square root (the sine window) loses this desirable property (Figure 12). To restore this property, we may either define new windows, or use the same windows, such as the raised cosine window, with a changed argument, so that the window derivative is continuous at the ending points. This technique is used to define the following window form.

**Meyer's window form** modifies the square root of the raised cosine window (the sine window) by inserting the function ${v}_{x}(x)$ into the argument, $x$, which makes the first derivative continuous at the ending points.
In this case, the window functions become [32]

$${H}_{k}(\lambda )=\left\{\begin{array}{ll}\mathrm{sin}\left(\frac{\pi}{2}{v}_{x}\left(\frac{{a}_{k}}{{b}_{k}-{a}_{k}}\left(\frac{\lambda}{{a}_{k}}-1\right)\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{a}_{k}<\lambda \le {b}_{k},\\ \mathrm{cos}\left(\frac{\pi}{2}{v}_{x}\left(\frac{{b}_{k}}{{c}_{k}-{b}_{k}}\left(\frac{\lambda}{{b}_{k}}-1\right)\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{b}_{k}<\lambda \le {c}_{k},\\ 0,&\mathrm{elsewhere}.\end{array}\right.$$

The sine window is reobtained with the identity argument function,

$${v}_{x}(x)=x=\frac{{a}_{k}}{{b}_{k}-{a}_{k}}\left(\frac{\lambda}{{a}_{k}}-1\right),$$

while the first-derivative continuity is achieved with

$${v}_{x}(x)={x}^{4}(35-84x+70{x}^{2}-20{x}^{3}).$$

If we now check the derivative of the transfer function, $d{H}_{k}(\lambda )/d\lambda $, at the ending interval points, we will find that it is zero-valued. This was the reason for introducing the nonlinear (polynomial) argument form instead of $x$ or $\lambda $, having in mind the relation between the arguments $x$ and $\lambda $.

**Example 11.** The transfer functions from the previous example, for the case of the Hann window and the triangular (Bartlett) window, of forms that produce ${\sum}_{k=0}^{K-1}{H}_{k}^{2}({\lambda}_{p})=1$, and whose argument is modified in order to achieve differentiability at the ending points, are shown in Figure 13. Due to the differentiability, these transfer functions converge faster than the forms in the previous example, and are appropriate for vertex–frequency and time–frequency analysis. The results of this analysis would be similar to those presented in Figure 10 and Figure 11. A difference also exists in the reconstruction procedure.

**Polynomial windows** are obtained if the function ${v}_{x}(x)={x}^{4}(35-84x+70{x}^{2}-20{x}^{3})$ is applied to the triangular window.
Their form is

$${H}_{k}(\lambda )=\left\{\begin{array}{ll}{v}_{x}\left(\frac{{a}_{k}}{{b}_{k}-{a}_{k}}\left(\frac{\lambda}{{a}_{k}}-1\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{a}_{k}<\lambda \le {b}_{k},\\ 1-{v}_{x}\left(\frac{{b}_{k}}{{c}_{k}-{b}_{k}}\left(\frac{\lambda}{{b}_{k}}-1\right)\right),&\mathrm{for}\phantom{\rule{4.pt}{0ex}}{b}_{k}<\lambda \le {c}_{k},\\ 0,&\mathrm{elsewhere}.\end{array}\right.$$

The simplest polynomial that satisfies the conditions ${v}_{x}(0)=0$, ${v}_{x}(1)=1$, ${v}_{x}^{\prime}(0)={v}_{x}^{\prime}(1)=0$ is ${v}_{x}(x)=a{x}^{3}+b{x}^{2}$, with $a+b=1$ and $3a+2b=0$, that is,

$${v}_{x}(x)=-2{x}^{3}+3{x}^{2}.$$

In general, the conditions

$${v}_{x}(0)=0,\qquad {v}_{x}(1)=1,\qquad {\left.\frac{d{v}_{x}(x)}{dx}\right|}_{x=0}=0,\qquad {\left.\frac{d{v}_{x}(x)}{dx}\right|}_{x=1}=0$$

are satisfied by polynomials of the form

$${v}_{x}(x)=a{x}^{n}+b{x}^{n-1};$$

for example, $n=5$ produces

$${v}_{x}(x)=-4{x}^{5}+5{x}^{4}.$$

These transfer functions are an extension of the linear forms presented in (23) and can be very convenient for the vertex (time) domain implementation. A polynomial of the third order in $\lambda $ requires only a 3-neighborhood in the vertex (time) domain implementation.

**Spectral graph wavelet transform.** In the same way as the LGFT is defined as a projection of a graph signal onto the corresponding kernel functions, the spectral graph wavelet transform can be calculated as the projection of the signal onto the wavelet transform kernels. The basic form of the wavelet transfer function in the spectral domain is denoted by $H({\lambda}_{p})$. The other transfer functions of the wavelet transform are then obtained as the scaled versions of the basic function $H({\lambda}_{p})$, using the scales ${s}_{i}$, $i=1,2,\dots ,K-1$.
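The boundary conditions on the argument functions $v_x(x)$ can be verified numerically. The short sketch below (our own check, not from the paper) confirms that each of the polynomials above maps $0\mapsto 0$, $1\mapsto 1$, and has vanishing derivative at both interval ends.

```python
import numpy as np

# The three argument polynomials v_x(x) discussed in the text.
def v3(x):  # -2x^3 + 3x^2
    return -2 * x**3 + 3 * x**2

def v5(x):  # -4x^5 + 5x^4
    return -4 * x**5 + 5 * x**4

def v7(x):  # x^4 (35 - 84x + 70x^2 - 20x^3), the Meyer auxiliary polynomial
    return x**4 * (35 - 84 * x + 70 * x**2 - 20 * x**3)

eps = 1e-6
for v in (v3, v5, v7):
    # Endpoint values: v(0) = 0 and v(1) = 1.
    assert abs(v(0.0)) < 1e-12 and abs(v(1.0) - 1.0) < 1e-9
    # One-sided finite-difference derivatives at the end points are ~0.
    assert abs((v(eps) - v(0.0)) / eps) < 1e-4
    assert abs((v(1.0) - v(1.0 - eps)) / eps) < 1e-4
```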
The scaled transform functions are ${H}_{{s}_{i}}({\lambda}_{p})=H({s}_{i}{\lambda}_{p})$ [21,22,33,34,35,36]. The father wavelet is a low-pass scale function, denoted by $G({\lambda}_{p})$, which plays the same role as the function ${H}_{0}({\lambda}_{p})$ in the LGFT. The set of scales for the calculation of the wavelet transform is $s\in \{{s}_{1},{s}_{2},\dots ,{s}_{K-1}\}$. The transfer functions obtained with these scales are ${H}_{{s}_{i}}({\lambda}_{p})$ and $G({\lambda}_{p})$. Next, the spectral wavelet transform is calculated as a projection of the signal onto the bandpass (and scaled) wavelet kernel, ${\psi}_{m,{s}_{i}}(n)$, in the same way as the kernel ${\mathcal{H}}_{m,k}(n)$ was used in the LGFT in (18). The wavelet kernel is

$${\psi}_{m,{s}_{i}}(n)=\sum _{p=1}^{N}H({s}_{i}{\lambda}_{p}){u}_{p}(m){u}_{p}(n),$$

and the wavelet transform elements are

$$W(m,{s}_{i})=\sum _{n=1}^{N}{\psi}_{m,{s}_{i}}(n)x(n)=\sum _{n=1}^{N}\sum _{p=1}^{N}H({s}_{i}{\lambda}_{p})x(n){u}_{p}(m){u}_{p}(n)=\sum _{p=1}^{N}H({s}_{i}{\lambda}_{p})X(p){u}_{p}(m).$$

The Meyer approach to the transfer functions is defined in (33), with the argument ${v}_{x}(q({s}_{i}\lambda -1))$. The same form can be applied to the wavelet transform, using $H({s}_{i}{\lambda}_{p})$ and the intervals of support for this function, where:

- the scales are defined by ${s}_{i}={s}_{i-1}M={M}^{i}/{\lambda}_{\mathrm{max}}$;
- the interval for the low-pass function, $G(\lambda )$, is $0\le \lambda \le {M}^{2}/{s}_{K-1}$ (the cosine function within ${M}/{s}_{K-1}<\lambda \le {M}^{2}/{s}_{K-1}$, and the value $G(\lambda )=1$ as $\lambda \to 0$).

Notice that the wavelet transform is just a special case of the varying transfer functions, where narrow transfer functions are used for low spectral indices and wide transfer functions are used for high spectral indices, as shown in Figure 13b or Figure 10b,d. In the implementations, we can use the vertex domain localized polynomial approximations of the spectral wavelet functions, in the same way as described in Section 5.

**Optimization of the vertex–frequency representations.** As in classical time–frequency analysis, various measures can be used to compare and optimize joint vertex–frequency representations. An overview of these measures may be found in [37]. Here, we shall suggest the one-norm (in the vector norm sense), introduced to the time–frequency optimization problems in [37], in the form

$$\mathcal{M}=\frac{1}{F}\sum _{m=1}^{N}\sum _{k=0}^{K-1}|S(m,k)|=\frac{1}{F}{\parallel \mathbf{S}\parallel}_{1}.$$

The LGFT coefficients satisfy the frame relation

$${a\parallel \mathbf{x}\parallel}_{2}^{2}\le \sum _{k=0}^{K-1}\sum _{m=1}^{N}{|S(m,k)|}^{2}\le {b\parallel \mathbf{x}\parallel}_{2}^{2},$$

and, when the condition (31) holds, the total energy is

$$\sum _{k=0}^{K-1}\sum _{m=1}^{N}{|S(m,k)|}^{2}=\sum _{k=0}^{K-1}\sum _{p=1}^{N}{|X(p){H}_{k}({\lambda}_{p})|}^{2}={E}_{x}=\mathrm{constant}.$$

Notice that Parseval's theorem is used for the LGFT, $S(m,k)$, as it is the GFT of the spectrally windowed signal, $X(p){H}_{k}({\lambda}_{p})$. With this fact in mind, we obtain

$$\sum _{m=1}^{N}{|S(m,k)|}^{2}=\sum _{p=1}^{N}{|X(p){H}_{k}({\lambda}_{p})|}^{2}.$$
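The per-band Parseval relation above holds for any band functions $H_k$, since each LGFT column is an orthonormal transform of the spectrally windowed signal. A sketch (the path graph and the Gaussian band shapes are our illustrative assumptions):

```python
import numpy as np

# For each band k, the LGFT column S(., k) is obtained from the spectrally
# windowed signal X(p) H_k(lambda_p) through the orthonormal basis U, so its
# vertex-domain energy equals the spectral-domain energy.
N, K = 10, 4
rng = np.random.default_rng(2)
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # illustrative path graph
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

x = rng.standard_normal(N)
X = U.T @ x                                  # GFT of the signal

# Illustrative band transfer functions H_k(lambda_p) (Gaussian bumps; the
# particular form is an assumption — any H_k satisfies this identity).
centers = np.linspace(lam.min(), lam.max(), K)
H = np.exp(-4.0 * (lam[None, :] - centers[:, None]) ** 2)

S = np.stack([U @ (H[k] * X) for k in range(K)], axis=1)   # S(m, k)

for k in range(K):
    assert np.isclose(np.sum(np.abs(S[:, k]) ** 2),
                      np.sum(np.abs(X * H[k]) ** 2))
```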

## 5. Polynomial LGFT Approximation

#### 5.1. Chebyshev Polynomial

**Example 12.**

**Example 13.**

#### 5.2. Least Squares Approximation

#### 5.3. Legendre Polynomial

**Example 14.**

## 6. Inversion of the LGFT

#### 6.1. Inversion by Summation (OLA Method)

#### 6.2. Kernel-Based LGFT Inversion (WOLA Method—Frames)

**Vertex-Varying Filtering.** For the vertex-varying filtering of graph signals using the vertex–frequency representation, we can use a support function, $B(m,k)$, in the vertex–frequency domain. The signal filtered in the vertex–frequency domain using the LGFT is then obtained as ${S}_{f}(m,k)=S(m,k)B(m,k)$.
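This procedure can be sketched end-to-end: compute an LGFT whose bands satisfy the OLA condition (the binomial bands of Section 4.1, whose terms sum to one), apply a vertex–frequency mask $B(m,k)$, and invert by summation. The graph and the particular mask below are our illustrative assumptions.

```python
import numpy as np
from math import comb

# LGFT with binomial bands: sum_k H_k(lambda) = 1, so summation over k inverts it.
N, K = 10, 6
W = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # illustrative path graph
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)
t = lam / lam.max()

H = np.array([comb(K, k) * (1 - t) ** (K - k) * t**k for k in range(K + 1)])

x = np.random.default_rng(3).standard_normal(N)
X = U.T @ x
S = np.stack([U @ (H[k] * X) for k in range(K + 1)], axis=1)   # S(m, k)

# All-pass support function B(m, k) = 1: summation over k returns x exactly.
assert np.allclose(S.sum(axis=1), x)

# A vertex-varying filter: keep only low bands on the first half of the
# vertices and only high bands on the second half (the mask B is ours).
B = np.zeros_like(S)
B[: N // 2, : (K + 1) // 2] = 1.0
B[N // 2 :, (K + 1) // 2 :] = 1.0
y = (S * B).sum(axis=1)    # filtered signal, from S_f(m, k) = S(m, k) B(m, k)
```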

## 7. Support Uncertainty Principle in the LGFT

## 8. Analysis Based on Splitting Large Signals

- Calculate the even-indexed elements of ${\mathbf{X}}_{0}$ and ${\mathbf{X}}_{1}$, corresponding to the two halves of the signal samples, as

  $${({\mathbf{X}}_{0})}_{Even}={({\mathbf{U}}^{-1})}_{Up,Even}{[x(1),x(2),\dots ,x(N/2)]}^{T}={({\mathbf{U}}^{-1})}_{Up,Even}{\mathbf{x}}_{Up},$$

  $${({\mathbf{X}}_{1})}_{Even}={({\mathbf{U}}^{-1})}_{Lo,Even}{[x(N/2+1),x(N/2+2),\dots ,x(N)]}^{T}={({\mathbf{U}}^{-1})}_{Lo,Even}{\mathbf{x}}_{Lo}.$$

- Find the odd-indexed elements of the GFT using (60) as

  $${({\mathbf{X}}_{0})}_{Odd}={({\mathbf{U}}_{Lo,Odd})}^{-1}{\mathbf{U}}_{Lo,Even}{({\mathbf{X}}_{0})}_{Even},$$

  $${({\mathbf{X}}_{1})}_{Odd}={({\mathbf{U}}_{Up,Odd})}^{-1}{\mathbf{U}}_{Up,Even}{({\mathbf{X}}_{1})}_{Even}.$$

- Reconstruct the GFT elements of the whole signal,

  $${(\mathbf{X})}_{Even}={({\mathbf{X}}_{0})}_{Even}+{({\mathbf{X}}_{1})}_{Even},\qquad {(\mathbf{X})}_{Odd}={({\mathbf{X}}_{0})}_{Odd}+{({\mathbf{X}}_{1})}_{Odd}.$$

  Notice that all matrices used in these relations are of size $N/2\times N/2$, while all vectors are of size $N/2\times 1$.
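For the classical circular graph (DFT) case, these steps can be sketched directly. Note that Equation (60) is not reproduced in this excerpt; in the zero-padding convention used below, the odd-from-even recovery picks up a minus sign, which may be absorbed differently into (60) in the full text.

```python
import numpy as np

# GFT basis of the circular graph = DFT basis (unitary normalization).
N = 8
n = np.arange(N)
U = np.exp(1j * 2 * np.pi * np.outer(n, n) / N) / np.sqrt(N)
Uinv = U.conj().T

x = np.random.default_rng(4).standard_normal(N)
X = Uinv @ x

# GFT contributions of the upper and lower halves of the samples.
X0 = Uinv[:, : N // 2] @ x[: N // 2]
X1 = Uinv[:, N // 2 :] @ x[N // 2 :]

# Step 3: the GFT of the whole signal is the sum of the two contributions.
assert np.allclose(X, X0 + X1)

# Step 2: U @ X0 is zero on the lower half (zero padding), so the odd-indexed
# elements of X0 follow from the even-indexed ones via an (N/2 x N/2) system:
# U_{Lo,Even} (X0)_Even + U_{Lo,Odd} (X0)_Odd = 0.
X0_odd = -np.linalg.solve(U[N // 2 :, 1::2], U[N // 2 :, 0::2] @ X0[0::2])
assert np.allclose(X0_odd, X0[1::2])
```

Only half-size matrices appear in the recovery step, which is the computational point of the splitting.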

## 9. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Conflicts of Interest

## References

- Sandryhaila, A.; Moura, J.M. Discrete signal processing on graphs. IEEE Trans. Signal Process. **2013**, 61, 1644–1656.
- Chen, S.; Varma, R.; Sandryhaila, A.; Kovačević, J. Discrete Signal Processing on Graphs: Sampling Theory. IEEE Trans. Signal Process. **2015**, 63, 6510–6523.
- Sandryhaila, A.; Moura, J.M. Discrete Signal Processing on Graphs: Frequency Analysis. IEEE Trans. Signal Process. **2014**, 62, 3042–3054.
- Ortega, A.; Frossard, P.; Kovačević, J.; Moura, J.M.; Vandergheynst, P. Graph signal processing: Overview, challenges, and applications. Proc. IEEE **2018**, 106, 808–828.
- Djuric, P.; Richard, C. (Eds.) Cooperative and Graph Signal Processing: Principles and Applications; Academic Press: Cambridge, MA, USA, 2018.
- Hamon, R.; Borgnat, P.; Flandrin, P.; Robardet, C. Extraction of temporal network structures from graph-based signals. IEEE Trans. Signal Inf. Process. Netw. **2016**, 2, 215–226.
- Marques, A.; Ribeiro, A.; Segarra, S. Graph Signal Processing: Fundamentals and Applications to Diffusion Processes. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017.
- Quinn, C.J.; Kiyavash, N.; Coleman, T.P. Directed information graphs. IEEE Trans. Inf. Theory **2015**, 61, 6887–6909.
- Raginsky, M.; Jafarpour, S.; Harmany, Z.T.; Marcia, R.F.; Willett, R.M.; Calderbank, R. Performance bounds for expander-based compressed sensing in Poisson noise. IEEE Trans. Signal Process. **2011**, 59, 4139–4153.
- Hamon, R.; Borgnat, P.; Flandrin, P.; Robardet, C. Transformation from Graphs to Signals and Back. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 111–139.
- Sandryhaila, A.; Moura, J.M. Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure. IEEE Signal Process. Mag. **2014**, 31, 80–90.
- Stanković, L.; Daković, M.; Thayaparan, T. Time-Frequency Signal Analysis with Applications; Artech House: London, UK, 2014.
- Cohen, L. Time-Frequency Analysis; Prentice Hall PTR: Englewood Cliffs, NJ, USA, 1995.
- Boashash, B. Time-Frequency Signal Analysis and Processing: A Comprehensive Reference; Academic Press: Cambridge, MA, USA, 2015.
- Shuman, D.I.; Ricaud, B.; Vandergheynst, P. Vertex-frequency analysis on graphs. Appl. Comput. Harmon. Anal. **2016**, 40, 260–291.
- Shuman, D.I.; Ricaud, B.; Vandergheynst, P. A windowed graph Fourier transform. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, MI, USA, 5–8 August 2012; pp. 133–136.
- Zheng, X.W.; Tang, Y.Y.; Zhou, J.T.; Yuan, H.L.; Wang, Y.L.; Yang, L.N.; Pan, J.J. Multi-windowed graph Fourier frames. In Proceedings of the IEEE International Conference on Machine Learning and Cybernetics (ICMLC), Ningbo, China, 9–12 July 2016; Volume 2, pp. 1042–1048.
- Tepper, M.; Sapiro, G. A short-graph Fourier transform via personalized pagerank vectors. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 4806–4810.
- Stanković, L.; Daković, M.; Sejdić, E. Vertex-Frequency Analysis: A Way to Localize Graph Spectral Components [Lecture Notes]. IEEE Signal Process. Mag. **2017**, 34, 176–182.
- Cioacă, T.; Dumitrescu, B.; Stupariu, M.S. Graph-Based Wavelet Multiresolution Modeling of Multivariate Terrain Data. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 479–507.
- Hammond, D.K.; Vandergheynst, P.; Gribonval, R. The Spectral Graph Wavelet Transform: Fundamental Theory and Fast Computation. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 141–175.
- Behjat, H.; Van De Ville, D. Spectral Design of Signal-Adapted Tight Frames on Graphs. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 177–206.
- Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. **2013**, 30, 83–98.
- Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part I: Graphs and Spectra on Graphs. Found. Trends Mach. Learn. **2020**, 13, 1–157.
- Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part II: Signals on Graphs. Found. Trends Mach. Learn. **2020**, 13, 158–331.
- Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part III: Machine Learning on Graphs, from Graph Topology to Applications. Found. Trends Mach. Learn. **2020**, 13, 332–530.
- Gray, R.M. Toeplitz and Circulant Matrices: A Review; NOW Publishers: Delft, The Netherlands, 2006.
- Dakovic, M.; Stankovic, L.J.; Sejdic, E. Local Smoothness of Graph Signals. Math. Probl. Eng. **2019**, 2019, 3208569.
- Hammond, D.K.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. **2011**, 30, 129–150.
- Stanković, L.; Mandic, D.; Daković, M.; Kisil, I.; Sejdić, E.; Constantinides, A.G. Understanding the Basis of Graph Signal Processing via an Intuitive Example-Driven Approach. IEEE Signal Process. Mag. **2019**, 36, 133–145.
- Stanković, L.; Mandic, D.; Daković, M.; Scalzo, B.; Brajović, M.; Sejdić, E.; Constantinides, A.G. Vertex-frequency graph signal processing: A comprehensive review. Digit. Signal Process. **2020**, 107, 102802.
- Leonardi, N.; Van De Ville, D. Tight wavelet frames on multislice graphs. IEEE Trans. Signal Process. **2013**, 61, 3357–3367.
- Behjat, H.; Leonardi, N.; Sörnmo, L.; Van De Ville, D. Anatomically-adapted Graph Wavelets for Improved Group-level fMRI Activation Mapping. NeuroImage **2015**, 123, 185–199.
- Rustamov, R.; Guibas, L.J. Wavelets on graphs via deep learning. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013; pp. 998–1006.
- Jestrović, I.; Coyle, J.L.; Sejdić, E. A fast algorithm for vertex-frequency representations of signals on graphs. Signal Process. **2017**, 131, 483–491.
- Masoumi, M.; Rezaei, M.; Hamza, A.B. Shape Analysis of Carpal Bones Using Spectral Graph Wavelets. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 419–436.
- Stanković, L. A measure of some time-frequency distributions concentration. Signal Process. **2001**, 81, 621–631.
- Brajović, M.; Stanković, L.; Daković, M. On Polynomial Approximations of Spectral Windows in Vertex-Frequency Representations. In Proceedings of the 24th International Conference on Information Technology, Žabljak, Montenegro, 18–22 February 2020.
- Pasdeloup, B.; Gripon, V.; Alami, R.; Rabbat, M.G. Uncertainty principle on graphs. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 317–340.
- Perraudin, N.; Ricaud, B.; Shuman, D.I.; Vandergheynst, P. Global and local uncertainty principles for signals on graphs. APSIPA Trans. Signal Inf. Process. **2018**, 7, 1–26.
- Tsitsvero, M.; Barbarossa, S.; Di Lorenzo, P. Signals on graphs: Uncertainty principle and sampling. IEEE Trans. Signal Process. **2016**, 64, 539–554.
- Ricaud, B.; Torrésani, B. A survey of uncertainty principles and some signal processing applications. Adv. Comput. Math. **2014**, 40, 629–650.
- Stankovic, L. Highly concentrated time-frequency distributions: Pseudo quantum signal representation. IEEE Trans. Signal Process. **1997**, 45, 543–551.
- Stanković, L. The Support Uncertainty Principle and the Graph Rihaczek Distribution: Revisited and Improved. IEEE Signal Process. Lett. **2020**, 27, 1030–1034.
- Elad, M.; Bruckstein, A.M. Generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory **2002**, 48, 2558–2567.
- Stanković, L. Digital Signal Processing with Selected Topics; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015; ISBN 978-1514179987.
- Stanković, L.; Sejdić, E.; Daković, M. Vertex-frequency energy distributions. IEEE Signal Process. Lett. **2018**, 25, 358–362.
- Stanković, L.; Sejdić, E.; Daković, M. Reduced interference vertex-frequency distributions. IEEE Signal Process. Lett. **2018**, 25, 1393–1397.
- Agaskar, A.; Lu, Y.M. A spectral graph uncertainty principle. IEEE Trans. Inf. Theory **2013**, 59, 4338–4356.
- Sakiyama, A.; Tanaka, Y. Oversampled graph Laplacian matrix for graph filter banks. IEEE Trans. Signal Process. **2014**, 62, 6425–6437.
- Girault, B. Stationary graph signals using an isometric graph translation. In Proceedings of the 23rd European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1516–1520.

**Figure 1.** Time domain of periodic signals presented as: a circular unweighted directed graph (**left**) and an undirected graph (**right**), with $N=8$ vertices (instants).

**Figure 2.** A circular undirected unweighted graph as the domain for classical signal analysis. Each of the $N=100$ vertices (instants) is connected to its predecessor and successor vertices (top). A general form of a graph, with $N=100$ vertices (bottom).

**Figure 3.** Graph signal on a circular undirected unweighted graph (top), and on a general graph (bottom). Vertices from ${\mathcal{V}}_{1}$ are designated by blue dots, vertices from ${\mathcal{V}}_{2}$ are marked by black dots, while vertices from ${\mathcal{V}}_{3}$ are given by green dots.

**Figure 4.** Local smoothness of the signals from Figure 3. The values are shown for nonzero signal samples. The local smoothness in classical signal analysis is related to the instantaneous frequency as $\lambda (n)=4{\mathrm{sin}}^{2}(\omega (n)/2)$.
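The relation quoted in this caption can be checked numerically. A minimal sketch (the frequency grid is an assumed example; for $\omega \in [0,\pi ]$ the mapping is monotone, so it inverts cleanly):

```python
import numpy as np

# Local smoothness <-> instantaneous frequency on a circular graph:
# lambda(n) = 4 sin^2(omega(n)/2), as stated in the caption.
omega = np.linspace(0.0, np.pi, 9)        # instantaneous frequencies in [0, pi]
lam = 4.0 * np.sin(omega / 2.0) ** 2      # corresponding local smoothness values

# The mapping is monotone on [0, pi], hence invertible:
omega_back = 2.0 * np.arcsin(np.sqrt(lam) / 2.0)
```

At $\omega =0$ the local smoothness is 0, and at $\omega =\pi $ it reaches the maximum value 4, consistent with ${\lambda}_{\mathrm{max}}=4$ used later for the circular graph.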

**Figure 5.** The spectral domain transfer functions ${H}_{k}({\lambda}_{p})$, for a circular undirected and unweighted graph (classical analysis), $p=1,2,\dots ,N$, $k=0,1,\dots ,K-1$, that correspond to the terms of the binomial form for $K=26$.

**Figure 6.** The spectral domain transfer functions ${H}_{k}({\lambda}_{p})$, for a general graph, $p=1,2,\dots ,N$, $k=0,1,\dots ,K-1$, that correspond to the terms of the binomial form for $K=26$.

**Figure 7.** Time–frequency and vertex–frequency representations of the signals from Example 1: (**a**) Time–frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the transfer functions from Figure 5 (bottom). (**b**) Vertex–frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 6 (bottom). (**c**) Time–frequency analysis of the complex harmonic signal from Example 1, using the transfer functions from Figure 5 (bottom). The complex signal is formed by adding the two corresponding sine and cosine components. In all cases, the original representation is given in the left panel, while the value reassigned to the position of the distribution maximum is given in the right panel.

**Figure 8.** Transfer functions in the spectral eigenvalue and frequency domains for classical analysis: (top) The eigenvalue spectral domain transfer functions ${H}_{k}({\lambda}_{p})$, $p=1,2,\dots ,N$, $k=0,1,\dots ,K-1$, for $K=15$ and $0\le \lambda \le {\lambda}_{\mathrm{max}}=4$. (bottom) The frequency spectral domain transfer functions ${H}_{k}({\omega}_{p})$, $p=1,2,\dots ,N$, $k=0,1,\dots ,K-1$, for $K=15$ and $0\le \omega \le \pi $. The horizontal axis represents the continuous variable λ and the discrete values ${\lambda}_{p}$, $p=1,2,\dots ,N$, corresponding to the eigenvalues and denoted by gray dots along the axis. The same notation is used for the frequency ω and its discrete values ${\omega}_{p}$ that correspond to ${\lambda}_{p}$.
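The correspondence between ${\lambda}_{p}$ and ${\omega}_{p}$ used in this caption follows from the eigenvalues of the circular-graph Laplacian, ${\lambda}_{k}=2-2\mathrm{cos}(2\pi k/N)=4{\mathrm{sin}}^{2}(\pi k/N)$. A small numerical check (the graph size $N$ is an assumed example):

```python
import numpy as np

N = 100
# Laplacian of the undirected, unweighted circular graph: L = D - W = 2I - W,
# where W connects every vertex to its predecessor and successor.
W = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = 2.0 * np.eye(N) - W

eigvals = np.sort(np.linalg.eigvalsh(L))
# Analytic eigenvalues: lambda_k = 2 - 2 cos(2 pi k / N) = 4 sin^2(pi k / N)
analytic = np.sort(4.0 * np.sin(np.pi * np.arange(N) / N) ** 2)
```

The computed spectrum matches the analytic one and stays within $0\le \lambda \le {\lambda}_{\mathrm{max}}=4$, as in the caption.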

**Figure 9.** Transfer functions in the spectral domain. (**a**) The transfer functions corresponding to the Hann form terms for $K=15$. (**b**) The spectral index-varying (wavelet-like) transfer functions whose terms are of half-cosine form, with $K=11$. (**c**) The spectral domain signal adaptive transfer functions with $K=17$. (**d**) Approximations of the transfer functions from panel (**a**) using Chebyshev polynomials, with ${H}_{9}(\lambda )$ designated by the thick black line, whereas gray markers indicate the corresponding discrete values.

**Figure 10.** Time–frequency representation of the three-component time-domain signal from Example 1, shown in Figure 3 (top), based on various transfer functions from Figure 5. The LGFT is calculated based on: (**a**) the transfer functions in Figure 5, (**b**) the transfer functions in Figure 9a, (**c**) the wavelet-like spectral transfer functions in Figure 9b, (**d**) the signal adaptive transfer functions from Figure 9c, (**e**) the Chebyshev polynomial-based approximations from Figure 9d, with $M=20$, and (**f**) the Chebyshev polynomial-based approximations from Figure 9d, with $M=50$.

**Figure 11.** Vertex–frequency representation of the three-component general graph signal from Example 1, shown in Figure 3 (bottom). The LGFT is calculated based on: (**a**) the transfer functions in Figure 5, (**b**) the transfer functions in Figure 9a, (**c**) the wavelet-like spectral transfer functions in Figure 9b, (**d**) the signal adaptive transfer functions from Figure 9c, (**e**) the Chebyshev polynomial-based approximations from Figure 9d, with $M=20$, and (**f**) the Chebyshev polynomial-based approximations from Figure 9d, with $M=50$.

**Figure 12.** Transfer functions formed using the square root of the Hann window (**a**,**b**) and of the Bartlett window (**c**,**d**), so that the reconstruction condition ${\sum}_{k=0}^{K-1}{H}_{k}^{2}({\lambda}_{p})=1$ is satisfied, for a uniform splitting (**a**,**c**) of the spectral domain and a wavelet-like splitting (**b**,**d**).
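The reconstruction condition in this caption can be verified directly for the square-root Hann case: Hann windows placed at half-width spacing sum to one, so their square roots satisfy ${\sum}_{k}{H}_{k}^{2}(\lambda )=1$. A sketch for a uniform splitting of $[0,{\lambda}_{\mathrm{max}}]$ (the number of bands and the evaluation grid are assumed):

```python
import numpy as np

K, lam_max = 15, 4.0
lam = np.linspace(0.0, lam_max, 1001)

def hann_band(lam, center, width):
    """Hann window of support `width` centered at `center`; zero elsewhere."""
    arg = (lam - center) / width
    out = np.zeros_like(lam)
    inside = np.abs(arg) <= 0.5
    out[inside] = 0.5 * (1.0 + np.cos(2.0 * np.pi * arg[inside]))
    return out

centers = np.linspace(0.0, lam_max, K)    # uniform splitting of the spectrum
width = 2.0 * lam_max / (K - 1)           # neighboring bands overlap by half
H = np.sqrt(np.array([hann_band(lam, c, width) for c in centers]))

wola = (H ** 2).sum(axis=0)               # should equal 1 for every lambda
```

At any $\lambda $, exactly two neighboring Hann windows overlap and their values sum to one, so the squared square-root bands satisfy the WOLA condition over the whole interval.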

**Figure 13.** Transfer functions formed using the Hann window (**a**,**b**) and the square root of the Bartlett window with modified argument (**c**,**d**), using the argument mapping ${v}_{x}(x)={x}^{4}(35-84x+70{x}^{2}-20{x}^{3})$. The WOLA reconstruction condition ${\sum}_{k=0}^{K-1}{H}_{k}^{2}({\lambda}_{p})=1$ is satisfied.

**Figure 14.** Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (**a**) Spectral functions of the Hann form with $K=15$. (**b**) Chebyshev polynomial-based approximation of the spectral transfer functions with $M=12$. (**c**) Legendre polynomial-based approximations of the spectral transfer functions with $M=12$. (**d**) Least squares approximation of the spectral transfer functions with $M=12$. For convenience, the function ${H}_{8}(\lambda )$ is designated with a thick black line on each panel.

**Figure 15.** Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (**a**) Spectral functions of the Hann form with $K=15$. (**b**) Chebyshev polynomial-based approximation of the spectral transfer functions with $M=20$. (**c**) Legendre polynomial-based approximations of the spectral transfer functions with $M=20$. (**d**) Least squares approximation of the spectral transfer functions with $M=20$. For convenience, the function ${H}_{8}(\lambda )$ is designated with a thick black line on each panel.

**Figure 16.** Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (**a**) Spectral functions of the Hann form with $K=15$. (**b**) Chebyshev polynomial-based approximation of the spectral transfer functions with $M=40$. (**c**) Legendre polynomial-based approximations of the spectral transfer functions with $M=40$. (**d**) Least squares approximation of the spectral transfer functions with $M=40$. For convenience, the function ${H}_{8}(\lambda )$ is designated with a thick black line on each panel.
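The trend across Figures 14–16 (better band approximation as the degree $M$ grows) can be reproduced in spirit with a least-squares fit in the Chebyshev basis via NumPy's `Chebyshev.fit`. The Hann-form band below and its placement are assumed for illustration, not taken from the paper's exact setup:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

lam_max = 4.0
lam = np.linspace(0.0, lam_max, 2000)

# Assumed Hann-form bandpass function, one H_k(lambda) of the kind plotted above
center, width = 2.0, 8.0 / 14.0           # mid-band for K = 15 uniform bands
arg = (lam - center) / width
Hk = np.where(np.abs(arg) <= 0.5, 0.5 * (1.0 + np.cos(2.0 * np.pi * arg)), 0.0)

# Least-squares polynomial fits of increasing degree M, as in the three figures
errs = {}
for M in (12, 20, 40):
    fit = Chebyshev.fit(lam, Hk, deg=M, domain=[0.0, lam_max])
    errs[M] = np.max(np.abs(fit(lam) - Hk))
```

A low degree smears the narrow band, while higher degrees track it closely, which is exactly what the progression $M=12,20,40$ in the figures illustrates.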

**Figure 17.** (**a**) Time–frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the polynomial approximations of the transfer functions from Figure 14b. (**b**) Vertex–frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 14b. (**c**) Time–frequency analysis of the complex harmonic signal from Example 1, using the transfer functions from Figure 14c. (**d**) Vertex–frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 14c. (**e**) Time–frequency analysis of the complex harmonic signal from Example 1, using the transfer functions from Figure 14d. (**f**) Vertex–frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 14d. The complex signal is formed by adding the two corresponding sine and cosine components.

**Table 1.** Coefficients, ${h}_{p,k}$, $p=0,1,\dots ,M-1$, $k=0,1,\dots ,K-1$, for the polynomial calculation of the LGFT, ${\mathbf{s}}_{k}$, of a signal $\mathbf{x}$, in various spectral bands, k, for $(M-1)=5$ and $K=10$.

${\mathbf{s}}_{k}=({h}_{0,k}\mathbf{I}+{h}_{1,k}\mathbf{L}+{h}_{2,k}{\mathbf{L}}^{2}+{h}_{3,k}{\mathbf{L}}^{3}+{h}_{4,k}{\mathbf{L}}^{4}+{h}_{5,k}{\mathbf{L}}^{5})\mathbf{x}$

| k | ${h}_{0,k}$ | ${h}_{1,k}$ | ${h}_{2,k}$ | ${h}_{3,k}$ | ${h}_{4,k}$ | ${h}_{5,k}$ |
|---|---|---|---|---|---|---|
| 0 | $1.079$ | $-1.867$ | $1.101$ | $-0.2885$ | $0.03458$ | $-0.001548$ |
| 1 | $-0.053$ | $1.983$ | $-1.798$ | $0.5744$ | $-0.07722$ | $0.003723$ |
| 2 | $-0.134$ | $0.763$ | $-0.310$ | $0.0222$ | $0.00422$ | $-0.000460$ |
| 3 | $0.050$ | $-0.608$ | $0.900$ | $-0.3551$ | $0.05348$ | $-0.002762$ |
| 4 | $0.096$ | $-0.726$ | $0.768$ | $-0.2475$ | $0.03172$ | $-0.001424$ |
| 5 | $0.016$ | $-0.013$ | $-0.128$ | $0.1047$ | $-0.02231$ | $0.001424$ |
| 6 | $-0.073$ | $0.616$ | $-0.779$ | $0.3228$ | $-0.05135$ | $0.002762$ |
| 7 | $-0.051$ | $0.351$ | $-0.356$ | $0.1146$ | $-0.01323$ | $0.000460$ |
| 8 | $0.084$ | $-0.687$ | $0.871$ | $-0.3751$ | $0.06409$ | $-0.003723$ |
| 9 | $-0.021$ | $0.183$ | $-0.251$ | $0.1172$ | $-0.02196$ | $0.001419$ |
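The per-band computation in Table 1 requires only iterated matrix–vector products with $\mathbf{L}$, never explicit matrix powers. A sketch (the helper name `lgft_band` and the small circular-graph Laplacian are illustrative assumptions; the $k=0$ coefficients are taken from the table):

```python
import numpy as np

def lgft_band(L, x, h):
    """Evaluate s_k = (h[0] I + h[1] L + ... + h[M-1] L^{M-1}) x,
    accumulated with repeated matrix-vector products."""
    s = np.zeros_like(x, dtype=float)
    Lp_x = x.astype(float)                # holds L^p x, starting at p = 0
    for hp in h:
        s += hp * Lp_x
        Lp_x = L @ Lp_x
    return s

# Example: band k = 0 coefficients from Table 1, on a small circular graph
N = 8
W = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = 2.0 * np.eye(N) - W
x = np.cos(2.0 * np.pi * np.arange(N) / N)    # a smooth (low-pass) graph signal
h0 = [1.079, -1.867, 1.101, -0.2885, 0.03458, -0.001548]
s0 = lgft_band(L, x, h0)
```

Each pass costs one sparse multiplication by $\mathbf{L}$, which is what makes the polynomial LGFT attractive on large graphs where the eigendecomposition is infeasible.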

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Stanković, L.; Lerga, J.; Mandic, D.; Brajović, M.; Richard, C.; Daković, M.
From Time–Frequency to Vertex–Frequency and Back. *Mathematics* **2021**, *9*, 1407.
https://doi.org/10.3390/math9121407
