# Topological Comparison of Some Dimension Reduction Methods Using Persistent Homology on EEG Data

## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Preliminaries

**Definition 1.**

**Definition 2.**

**Definition 3.**

**Definition 4.**

#### 2.2. ISOMAP

1. For a fixed integer K and real number $\epsilon >0$, perform an $\epsilon$-$K$-nearest neighbor search using the fact that the geodesic distance ${D}^{\mathcal{M}}({v}_{i},{v}_{j})$ between two points on $\mathcal{M}$ is the same (by isometry) as their Euclidean distance $\|{v}_{i}-{v}_{j}\|$ in ${\mathbb{R}}^{d}$. K is the number of data points selected within a ball of radius $\epsilon$.
2. Having calculated the distances between points as above, the entire data set can be considered as a weighted graph with vertices $\mathbf{v}=\left\{{v}_{i}\right\}$ and edges $\mathbf{e}=\left\{{e}_{ij}\right\}$, where ${e}_{ij}$ connects ${v}_{i}$ with ${v}_{j}$ and carries the distance ${w}_{ij}={D}^{\mathcal{M}}({v}_{i},{v}_{j})$ as an associated weight. The geodesic distance between two data points ${v}_{i}$ and ${v}_{j}$ is then estimated as the graph distance between the two vertices, that is, the length of the shortest path connecting them, where the length of a path is the sum of the weights of its constituent edges.
3. Having calculated the geodesic distances as above, we observe that the matrix ${D}^{G}=\left({D}^{G}({v}_{i},{v}_{j})\right)$ is symmetric, so we can apply the classical multidimensional scaling (MDS) algorithm (see [33]) to ${D}^{G}$, mapping (embedding) the data into a feature space $\mathcal{Y}$ of dimension d while preserving the geodesic distances on $\mathcal{M}$. $\mathcal{Y}$ is generated by a $d\times m$ matrix whose i-th column represents the coordinates of ${v}_{i}$ in $\mathcal{Y}$.
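The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes a plain K-nearest-neighbor rule in place of the $\epsilon$-$K$ variant, and the function name and defaults are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import pdist, squareform

def isomap(v, n_neighbors=6, d=2):
    """Minimal ISOMAP sketch: kNN graph -> graph geodesics -> classical MDS."""
    m = v.shape[0]
    dist = squareform(pdist(v))                  # pairwise Euclidean distances
    # Step 1: keep only the K nearest neighbors of each point (inf = no edge).
    W = np.full((m, m), np.inf)
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    for i in range(m):
        W[i, idx[i]] = dist[i, idx[i]]
        W[idx[i], i] = dist[i, idx[i]]           # symmetrize the graph
    # Step 2: geodesic distances as shortest paths in the weighted graph.
    DG = shortest_path(W, method='D', directed=False)
    # Step 3: classical MDS on the symmetric geodesic distance matrix.
    J = np.eye(m) - np.ones((m, m)) / m          # centering matrix
    B = -0.5 * J @ (DG ** 2) @ J                 # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:d]         # top-d eigenpairs
    return eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0))
```

For instance, points sampled on a circle embed into the plane while their along-the-curve ordering is preserved by the graph geodesics.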

#### 2.3. Laplacian Eigenmaps

1. For a fixed integer K and real number $\epsilon >0$, perform a K-nearest neighbor search on symmetric neighborhoods. Note that given two points ${v}_{i},{v}_{j}$, their respective K-neighborhoods ${N}_{i}^{K}$ and ${N}_{j}^{K}$ are symmetric if and only if ${v}_{i}\in {N}_{j}^{K}\iff {v}_{j}\in {N}_{i}^{K}$.
2. For a given real number $\sigma >0$ and each pair of points $({v}_{i},{v}_{j})$, calculate the weight ${w}_{ij}={e}^{-\frac{{\|{v}_{i}-{v}_{j}\|}^{2}}{2{\sigma}^{2}}}$ if ${v}_{i}\in {N}_{j}^{K}$ and ${w}_{ij}=0$ if ${v}_{i}\notin {N}_{j}^{K}$. Obtain the adjacency matrix $\mathbf{W}=\left({w}_{ij}\right)$. The data now form a weighted graph with vertices $\mathbf{v}$, edges $\mathbf{e}=\left\{{e}_{ij}\right\}$, and weights $\mathbf{W}=\left\{{w}_{ij}\right\}$, where ${e}_{ij}$ connects ${v}_{i}$ with ${v}_{j}$ with distance ${w}_{ij}$.
3. Consider $\mathbf{\Lambda}=\left({\lambda}_{ij}\right)$ to be the diagonal matrix with ${\lambda}_{ii}=\sum _{j}{w}_{ij}$ and define the graph Laplacian as $\mathbf{L}=\mathbf{\Lambda}-\mathbf{W}$. Then, $\mathbf{L}$ is positive semi-definite, so let $\widehat{\mathbf{Y}}$ be the $d\times n$ matrix that minimizes $\sum _{i,j}{w}_{ij}{\|{\mathbf{y}}_{i}-{\mathbf{y}}_{j}\|}^{2}=\mathrm{tr}\left(\mathbf{Y}\mathbf{L}{\mathbf{Y}}^{T}\right)$. Then, $\widehat{\mathbf{Y}}$ can be used to embed $\mathcal{M}$ into a d-dimensional space $\mathcal{Y}$, and its i-th column represents the coordinates of ${v}_{i}$ in $\mathcal{Y}$.
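A minimal sketch of steps 1–3 follows, assuming the unnormalized Laplacian; practical implementations usually solve the generalized eigenproblem $\mathbf{L}\mathbf{y}=\lambda \mathbf{\Lambda}\mathbf{y}$ instead, and the function name and defaults here are illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def laplacian_eigenmaps(v, n_neighbors=5, d=2, sigma=1.0):
    """Minimal Laplacian eigenmaps sketch (unnormalized variant)."""
    m = v.shape[0]
    dist = squareform(pdist(v))
    # Step 1: K-nearest-neighbor adjacency, forced to be symmetric.
    idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    A = np.zeros((m, m), dtype=bool)
    for i in range(m):
        A[i, idx[i]] = True
    A = A | A.T
    # Step 2: heat-kernel weights on neighboring pairs, zero elsewhere.
    W = np.where(A, np.exp(-dist ** 2 / (2 * sigma ** 2)), 0.0)
    # Step 3: graph Laplacian L = Lambda - W, Lambda = diag of row sums.
    L = np.diag(W.sum(axis=1)) - W
    eigval, eigvec = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector (eigenvalue 0); take the next d.
    return eigvec[:, 1:d + 1]
```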

#### 2.4. Fast ICA

1. Data preparation: center the data $\mathbf{v}$ with respect to the columns to obtain ${\mathbf{v}}^{c}$; that is, ${v}_{ij}^{c}={v}_{ij}-\frac{1}{m}\sum _{j=1}^{m}{v}_{ij}$ for $i=1,2,\cdots ,n$. The centered data are then whitened; that is, ${\mathbf{v}}^{c}$ is linearly transformed into ${\mathbf{v}}_{w}^{c}$, a matrix of uncorrelated components. This is accomplished through an eigenvalue decomposition of the covariance matrix $\mathbf{C}={\mathbf{v}}^{c}{\left({\mathbf{v}}^{c}\right)}^{T}$ to obtain two matrices $\mathbf{V},\mathbf{E}$, respectively, of eigenvectors and eigenvalues, so that $\mathbb{E}\left[\mathbf{C}\right]=\mathbf{V}\mathbf{E}{\mathbf{V}}^{T}$. The whitened data are found as ${\mathbf{v}}_{w}^{c}={\mathbf{E}}^{-1/2}{\mathbf{V}}^{T}{\mathbf{v}}^{c}$ and, for simplicity, are again denoted by $\mathbf{v}$.
2. Component extraction: let $F\left(\mathbf{W}\right)=\mathbb{E}\left[\mathbf{v}g\left({\mathbf{W}}^{T}\mathbf{v}\right)\right]-\beta \mathbf{W}$ for a given constant $\beta =\mathbb{E}\left[{\mathbf{W}}_{a}^{T}\mathbf{v}g\left({\mathbf{W}}_{a}^{T}\mathbf{v}\right)\right]$, where ${\mathbf{W}}_{a}$ is the optimal weight matrix. Applying the Newton scheme (${x}_{n+1}={x}_{n}-F\left({x}_{n}\right){\left[{F}^{\prime}\left({x}_{n}\right)\right]}^{-1}$) to the differentiable function ${J}_{G}$, we:
   - Select a random starting vector ${\mathbf{W}}_{0}$.
   - For $n\ge 0$, set ${\mathbf{W}}_{n+1}=\mathbb{E}\left[\mathbf{v}g\left({\mathbf{W}}_{n}^{T}\mathbf{v}\right)\right]-\mathbb{E}\left[{g}^{\prime}\left({\mathbf{W}}_{n}^{T}\mathbf{v}\right)\right]{\mathbf{W}}_{n}$.
   - Normalize ${\mathbf{W}}_{n+1}$ as ${\mathbf{W}}_{n+1}/\|{\mathbf{W}}_{n+1}\|$.
   - Repeat until a suitable convergence level is reached.
   - From the last matrix $\mathbf{W}$ obtained, let $\mathbf{s}={\mathbf{W}}^{T}\mathbf{v}$.
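The iteration above can be sketched for a single extracted component. The contrast function g is left generic in the text, so the common choice $g(x)=\tanh (x)$ below is an assumption, as are the helper name and defaults.

```python
import numpy as np

def fast_ica(v, n_iter=200, tol=1e-6, seed=0):
    """One-unit FastICA sketch: center, whiten, then the fixed-point
    iteration with g(x) = tanh(x), so g'(x) = 1 - tanh(x)**2."""
    rng = np.random.default_rng(seed)
    # Data preparation: center each row, then whiten via eigendecomposition
    # of the (sample) covariance matrix.
    vc = v - v.mean(axis=1, keepdims=True)
    C = vc @ vc.T / vc.shape[1]
    E, V = np.linalg.eigh(C)
    vw = np.diag(E ** -0.5) @ V.T @ vc           # whitened data
    # Component extraction: Newton-type fixed-point iteration.
    w = rng.standard_normal(vw.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ vw
        w_new = (vw * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)           # normalization step
        converged = abs(abs(w_new @ w) - 1) < tol  # convergence up to sign
        w = w_new
        if converged:
            break
    return w @ vw                                # one extracted source s = w^T v
```

On a toy mixture of a sine wave and a square wave, the recovered component lines up (up to sign and scale) with one of the two sources.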

#### 2.5. Kernel Ridge Regression

#### 2.6. t-SNE

- Calculate the asymmetric probabilities ${p}_{kl}$ as ${p}_{kl}=\frac{{e}^{-{\delta}_{kl}}}{{\sum}_{k\ne l}{e}^{-{\delta}_{kl}}}$, where ${\delta}_{kl}=\frac{{\|{v}_{k}-{v}_{l}\|}^{2}}{2{\sigma}_{k}^{2}}$ represents the dissimilarity between ${v}_{k}$ and ${v}_{l}$, and ${\sigma}_{k}$ is a parameter selected by the experimenter or by a binary search. ${p}_{kl}$ represents the conditional probability that datapoint ${v}_{l}$ is in the neighborhood of datapoint ${v}_{k}$ if neighbors were selected proportionally to their probability density under a normal distribution centered at ${v}_{k}$ with variance ${\sigma}_{k}^{2}$.
- Assuming that the low-dimensional data are $\mathbf{u}=\left({u}_{k}\right),\;k=1,2,\cdots ,N$, the corresponding dissimilarity probabilities ${q}_{kl}$ are calculated under constant variance as ${q}_{kl}=\frac{{e}^{-{d}_{kl}}}{{\sum}_{k\ne l}{e}^{-{d}_{kl}}}$, where ${d}_{kl}={\|{u}_{k}-{u}_{l}\|}^{2}$, in the case of SNE, and as ${q}_{kl}=\frac{{(1+{d}_{kl})}^{-1}}{{\sum}_{k\ne l}{(1+{d}_{kl})}^{-1}}$ for t-SNE.
- Then, we minimize the Kullback–Leibler divergence between ${p}_{kl}$ and ${q}_{kl}$, given as $L=\sum _{k=1}^{N}\sum _{l=1}^{N}{p}_{kl}\log \left(\frac{{p}_{kl}}{{q}_{kl}}\right)$, using the gradient descent method with a momentum term, with the scheme ${\mathbf{w}}^{t}={\mathbf{w}}^{t-1}+\eta \frac{\partial L}{\partial \mathbf{u}}+\alpha \left(t\right)({\mathbf{w}}^{t-1}-{\mathbf{w}}^{t-2})$ for $t=2,3,\cdots ,T$, for some given T. Note that ${\mathbf{w}}^{0}=({u}_{1},{u}_{2},\cdots ,{u}_{N})\sim N(0,{10}^{-4}\mathbf{I})$, where $\mathbf{I}$ is the $N\times N$ identity matrix, $\eta$ is a constant representing a learning rate, and $\alpha \left(t\right)$ is the momentum at iteration t. We note that $\frac{\partial L}{\partial \mathbf{u}}=\left(\frac{\partial L}{\partial {u}_{k}}\right)$ for $k=1,2,\cdots ,N$, where $\frac{\partial L}{\partial {u}_{k}}=4\sum _{l=1}^{N}({p}_{kl}-{q}_{kl})({u}_{k}-{u}_{l}){(1+{d}_{kl})}^{-1}.$
- Then, we use $\mathbf{u}={\mathbf{w}}^{T}$ as the low-dimensional representation of $\mathbf{v}$.
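The scheme above can be sketched compactly, with two simplifying assumptions: a single global $\sigma$ in place of the per-point binary search, and a fixed momentum schedule. The update applies the negative gradient so that the loss decreases.

```python
import numpy as np

def tsne(v, d=2, T=300, eta=1.0, seed=0):
    """Minimal t-SNE sketch: joint affinities P, Student-t affinities Q,
    gradient descent on the KL divergence with a momentum term."""
    rng = np.random.default_rng(seed)
    N = v.shape[0]
    # High-dimensional affinities with one global sigma (an assumption).
    D2 = np.sum((v[:, None, :] - v[None, :, :]) ** 2, axis=-1)
    P = np.exp(-D2 / (2 * np.median(D2)))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()
    u = rng.normal(0.0, 1e-2, size=(N, d))       # w^0 ~ N(0, 1e-4 I)
    u_prev = u.copy()
    for t in range(T):
        d2 = np.sum((u[:, None, :] - u[None, :, :]) ** 2, axis=-1)
        num = 1.0 / (1.0 + d2)                   # Student-t kernel (1 + d_kl)^-1
        np.fill_diagonal(num, 0.0)
        Q = num / num.sum()
        # dL/du_k = 4 * sum_l (p_kl - q_kl)(u_k - u_l)(1 + d_kl)^(-1)
        grad = (4.0 * ((P - Q) * num)[:, :, None]
                * (u[:, None, :] - u[None, :, :])).sum(axis=1)
        alpha = 0.5 if t < 50 else 0.8           # common momentum schedule
        u, u_prev = u - eta * grad + alpha * (u - u_prev), u
    return u
```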

## 3. Persistent Homology

#### 3.1. Simplicial Complexes

**Definition 5.**

**Example 1.**

**Remark 1.**

**Definition 6.**

**Example 2.**

**Definition 7.**

1. Given ${S}_{i}\in \Sigma$, every face ${R}_{i}$ of ${S}_{i}$ also belongs to $\Sigma$.
2. Given ${S}_{i},{S}_{j}\in \Sigma$, either ${S}_{i}\cap {S}_{j}=\varnothing$ or ${S}_{i}\cap {S}_{j}={R}_{i}={R}_{j}$, a common face of ${S}_{i}$ and ${S}_{j}$, respectively.

**Definition 8.**

1. For all $\omega \in \Omega$, $\left\{\omega \right\}\in \Sigma$.
2. For any nonempty set U such that $U\subset S$ for some $S\in \Sigma$, $U\in \Sigma$.

**Example 3.**

1. ${\Sigma}_{1}=\left\{\left\{1\right\},\left\{2\right\},\left\{3\right\},\left\{4\right\},\left\{1,2\right\},\left\{1,3\right\},\left\{1,2,3\right\}\right\}$
2. ${\Sigma}_{2}=\mathcal{P}(\Omega )\setminus \left\{\varnothing \right\}$, where $\mathcal{P}(\Omega )$ is the set of all subsets of $\Omega$.
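The conditions of Definition 8 can be checked mechanically on a finite family of simplices. The sketch below assumes the downward-closure reading of condition 2, and the helper name is illustrative; ${\Sigma}_{2}=\mathcal{P}(\Omega )\setminus \{\varnothing \}$ from Example 2 of the list always passes.

```python
from itertools import chain, combinations

def is_simplicial_complex(family):
    """Check Definition 8: every vertex appears as a 0-simplex, and every
    nonempty proper subset (face) of a simplex is itself in the family."""
    sigma = {frozenset(s) for s in family}
    vertices = set().union(*sigma)
    if any(frozenset({w}) not in sigma for w in vertices):
        return False
    for s in sigma:
        faces = chain.from_iterable(combinations(s, r) for r in range(1, len(s)))
        if any(frozenset(f) not in sigma for f in faces):
            return False
    return True

# Sigma_2 = P(Omega) \ {emptyset} over Omega = {1, 2, 3, 4}.
omega = {1, 2, 3, 4}
sigma2 = [set(s) for r in range(1, 5) for s in combinations(omega, r)]
```

A family that omits a required face, such as one containing $\{1,3\}$ without the vertex $\{3\}$, fails the check.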

#### 3.2. Homology and Persistent Homology

**Definition 9.**

**Definition 10.**

**Remark 2.**

**Definition 11.**

**Example 4.**

1. ${b}_{0}$ is the number of connected components of the complex.
2. ${b}_{1}$ is the number of tunnels and holes.
3. ${b}_{2}$ is the number of shells around cavities or voids.
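For the 1-skeleton of a complex (no simplices of dimension 2 or higher), the first two Betti numbers reduce to elementary graph quantities: ${b}_{0}$ is the number of connected components, and ${b}_{1}=|E|-|V|+{b}_{0}$ is the cycle rank. A sketch using union-find follows; higher Betti numbers require ranks of boundary matrices and are not attempted here.

```python
def betti_0(vertices, edges):
    """b0 = number of connected components, via union-find on the 1-skeleton."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in parent})

def betti_1_graph(vertices, edges):
    """For a complex with no 2-simplices, b1 = |E| - |V| + b0 (cycle rank)."""
    vertices = list(vertices)
    return len(edges) - len(vertices) + betti_0(vertices, edges)
```

For example, a 4-cycle has one component and one loop, so ${b}_{0}=1$ and ${b}_{1}=1$.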

**Definition 12.**

**Definition 13.**

**Definition 14.**

**Definition 15.**

**Example 5.**

**Definition 16.**

## 4. Results

**Definition 17.**

**Definition 18.**

**Remark 3.**

#### 4.1. Randomly Generated Data

#### 4.2. EEG Epilepsy Data

#### 4.2.1. Data Description

#### 4.2.2. Data Analysis

**Single-channel Analysis:**

**Multiple-channel Analysis:**

- (a) Within-set analysis
- (b) Between-set analysis

## 5. Concluding Remarks

## Funding

## Data Availability Statement

## Conflicts of Interest

## References

- Whitney, H. Differentiable manifolds. Ann. Math. **1936**, 37, 645–680.
- Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence; Lect. Notes Math. **1981**, 898, 366–381.
- Ma, Y.; Fu, Y. Manifold Learning: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2012.
- Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. **1933**, 24, 498–520.
- Ramsay, J.O.; Silverman, B.W. Functional Data Analysis, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2005.
- Cohen, J.; West, S.G.; Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 3rd ed.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2003.
- Friedman, J.H. Regularized discriminant analysis. J. Am. Stat. Assoc. **1989**, 84, 165–175.
- Yu, H.; Yang, J. A direct LDA algorithm for high-dimensional data—with application to face recognition. Pattern Recognit. **2001**, 34, 2067–2069.
- Tenenbaum, J.B.; de Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science **2000**, 290, 2319–2323.
- Belkin, M.; Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14; Dietterich, T.G., Becker, S., Ghahramani, Z., Eds.; MIT Press: Cambridge, MA, USA, 2002; pp. 585–591.
- Hyvärinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. **1999**, 13, 411–430.
- Theodoridis, S. (Ed.) Chapter 11—Learning in reproducing kernel Hilbert spaces. In Machine Learning, 2nd ed.; Academic Press: Cambridge, MA, USA, 2020; pp. 531–594. Available online: https://www.sciencedirect.com/science/article/pii/B9780128188033000222 (accessed on 28 June 2023).
- Van der Maaten, L.J.P.; Hinton, G.E. Visualizing data using t-SNE. J. Mach. Learn. Res. **2008**, 9, 2579–2605.
- Naitzat, G.; Zhitnikov, A.; Lim, L.-H. Topology of deep neural networks. J. Mach. Learn. Res. **2020**, 21, 7503–7542.
- Chan, J.M.; Carlsson, G.; Rabadan, R. Topology of viral evolution. Proc. Natl. Acad. Sci. USA **2013**, 110, 18566–18571.
- Otter, N.; Porter, M.A.; Tillmann, U.; Grindrod, P.; Harrington, H.A. A roadmap for the computation of persistent homology. EPJ Data Sci. **2017**, 6, 17.
- De Silva, V.; Ghrist, R. Coverage in sensor networks via persistent homology. Algebr. Geom. Topol. **2007**, 7, 339–358.
- Gameiro, M.; Hiraoka, Y.; Izumi, S.; Mischaikow, K.; Nanda, V. A topological measurement of protein compressibility. Jpn. J. Ind. Appl. Math. **2015**, 32, 1–17.
- Xia, K.; Wei, G.-W. Persistent homology analysis of protein structure, flexibility, and folding. Int. J. Numer. Methods Biomed. Eng. **2014**, 30, 814–844.
- Emmett, K.; Schweinhart, B.; Rabadan, R. Multiscale topology of chromatin folding. In Proceedings of the 9th EAI International Conference on Bio-Inspired Information and Communications Technologies (BICT'15), New York City, NY, USA, 3–5 December 2015; pp. 177–180.
- Rizvi, A.; Camara, P.; Kandror, E.; Roberts, T.; Schieren, I.; Maniatis, T.; Rabadan, R. Single-cell topological RNA-seq analysis reveals insights into cellular differentiation and development. Nat. Biotechnol. **2017**, 35, 551–560.
- Bhattacharya, S.; Ghrist, R.; Kumar, V. Persistent homology for path planning in uncertain environments. IEEE Trans. Robot. **2015**, 31, 578–590.
- Pokorny, F.T.; Hawasly, M.; Ramamoorthy, S. Topological trajectory classification with filtrations of simplicial complexes and persistent homology. Int. J. Robot. Res. **2016**, 35, 204–223.
- Vasudevan, R.; Ames, A.; Bajcsy, R. Persistent homology for automatic determination of human-data based cost of bipedal walking. Nonlinear Anal. Hybrid Syst. **2013**, 7, 101–115.
- Chung, M.K.; Bubenik, P.; Kim, P.T. Persistence diagrams of cortical surface data. In Information Processing in Medical Imaging; Lecture Notes in Computer Science; Prince, J.L., Pham, D.L., Myers, K.J., Eds.; Springer: Berlin, Germany, 2009; Volume 5636, pp. 386–397.
- Guillemard, M.; Boche, H.; Kutyniok, G.; Philipp, F. Persistence diagrams of cortical surface data. In Proceedings of the 10th International Conference on Sampling Theory and Applications, Bremen, Germany, 1–5 July 2013; pp. 309–312.
- Taylor, D.; Klimm, F.; Harrington, H.A.; Kramár, M.; Mischaikow, K.; Porter, M.A.; Mucha, P.J. Topological data analysis of contagion maps for examining spreading processes on networks. Nat. Commun. **2015**, 6, 7723.
- Leibon, G.; Pauls, S.; Rockmore, D.; Savell, R. Topological structures in the equities market network. Proc. Natl. Acad. Sci. USA **2008**, 105, 20589–20594.
- Giusti, C.; Ghrist, R.; Bassett, D. Two's company and three (or more) is a simplex. J. Comput. Neurosci. **2016**, 41, 1–14.
- Sizemore, A.E.; Phillips-Cremins, J.E.; Ghrist, R.; Bassett, D.S. The importance of the whole: Topological data analysis for the network neuroscientist. Netw. Neurosci. **2019**, 3, 656–673.
- Maletić, S.; Zhao, Y.; Rajković, M. Persistent topological features of dynamical systems. Chaos **2016**, 26, 053105.
- Chung, M.K.; Ramos, C.G.; Paiva, J.; Mathis, F.B.; Prabharakaren, V.; Nair, V.A.; Meyerand, E.; Hermann, B.P.; Binder, J.R.; Struck, A.F. Unified topological inference for brain networks in temporal lobe epilepsy using the Wasserstein distance. arXiv **2023**, arXiv:2302.06673.
- Torgerson, W.S. Multidimensional scaling: I. Theory and method. Psychometrika **1952**, 17, 410–419.
- Jäntschi, L. Multiple linear regressions by maximizing the likelihood under assumption of generalized Gauss–Laplace distribution of the error. Comput. Math. Methods Med. **2016**, 2016, 8578156.
- Jäntschi, L. Symmetry in regression analysis: Perpendicular offsets—the case of a photovoltaic cell. Symmetry **2023**, 15, 948.
- Silver, N. The Signal and the Noise: Why So Many Predictions Fail—but Some Don't; Penguin Press: London, UK, 2012.
- Hinton, G.E.; Roweis, S. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems; Becker, S., Thrun, S., Obermayer, K., Eds.; MIT Press: Cambridge, MA, USA, 2002; Volume 15.
- Ghrist, R. Barcodes: The persistent topology of data. Bull. Am. Math. Soc. **2008**, 45, 61–75.
- Kwessi, E.; Edwards, L. Analysis of EEG time series data using complex structurization. Neural Comput. **2021**, 33, 1942–1969.
- Mileyko, Y.; Mukherjee, S.; Harer, J. Probability measures on the space of persistence diagrams. Inverse Probl. **2011**, 27, 124007.
- Berry, E.; Chen, Y.-C.; Cisewski-Kehe, J.; Fasy, B.T. Functional summaries of persistence diagrams. J. Appl. Comput. Topol. **2020**, 4, 211–262.

**Figure 2.** Example of a simplicial complex. J is a 0-simplex; A and D are 1-simplices; B, C, G, and H are 2-simplices; E and F are 3-simplices; and I is a 4-simplex. We note that A ∩ B is a 0-simplex. B ∩ C is a 1-simplex and a face of both B and C. E ∩ F is a 2-simplex and a face of both E and F. G ∩ H is a 1-simplex, and I ∩ H is a 1-simplex.

**Figure 3.** Example of the evolution of Rips complexes ${R}_{\delta}$ through a filtration with parameter $\delta$. Moving from left to right, it shows how sample points (blue dots) first form 0-simplices, then 1-simplices, and so on. In particular, it shows how connected components progressively evolve to form different types of holes.

**Figure 4.** Example of the evolution of barcodes through a filtration with parameter $\delta$ for the same data as above. Moving from left to right and from top to bottom, it shows the appearance and disappearance of bars for connected components (${\mathbb{H}}_{0}$) and holes (${\mathbb{H}}_{1}$) as the parameter $\delta$ changes. It shows that certain components and holes persist through the filtration process.

**Figure 5.** Scatterplots for the Takens projection method (**a**), KRR method (**b**), ISOMAP (**c**), LEIM (**d**), ICA (**e**), and t-SNE (**f**).

**Figure 6.** Barcodes for the Takens projection method (**a**), KRR method (**b**), ISOMAP (**c**), LEIM (**d**), ICA (**e**), and t-SNE (**f**).

**Figure 7.** Bottleneck distances between the persistence diagrams for 15 channels within each set (**A**–**E**) on ${\mathbb{H}}_{1}$ and ${\mathbb{H}}_{2}$ for each of the methods introduced above. The red lines represent the bottleneck distances between persistence diagrams on ${\mathbb{H}}_{1}$, and the blue lines are their counterparts on ${\mathbb{H}}_{2}$.

**Table 1.** Bottleneck distances between the persistence diagrams above at ${\mathbb{H}}_{0}$ (a), and at ${\mathbb{H}}_{1}$ and ${\mathbb{H}}_{2}$ (b).

(a)

| ${\mathbb{H}}_{0}$ | Tak | Iso | KRR | ICA | LEIM | TSNE |
|---|---|---|---|---|---|---|
| Tak | | | | | | |
| Iso | 0.0945019 | | | | | |
| KRR | 0.0957546 | 0.0200035 | | | | |
| ICA | 0.0982795 | 0.0157002 | 0.0071899 | | | |
| LEIM | 0.1678820 | 0.1182656 | 0.1247918 | 0.1205499 | | |
| TSNE | 0.2238167 | 0.1730406 | 0.1817924 | 0.1759454 | 0.1162392 | |

(b)

| ${\mathbb{H}}_{1}$/${\mathbb{H}}_{2}$ | Tak | Iso | KRR | ICA | LEIM | TSNE |
|---|---|---|---|---|---|---|
| Tak | | 0.0363205 | 0.0301992 | 0.0292631 | 0.0291247 | 0.0551774 |
| Iso | 0.0340282 | | 0.0330687 | 0.0290406 | 0.0236890 | 0.0598517 |
| KRR | 0.0317261 | 0.0279460 | | 0.0207599 | 0.0212138 | 0.0647935 |
| ICA | 0.0310771 | 0.0270919 | 0.0208086 | | 0.0242277 | 0.0611090 |
| LEIM | 0.0607389 | 0.0725585 | 0.0702695 | 0.0682761 | | 0.0542615 |
| TSNE | 0.0757815 | 0.0959521 | 0.0864587 | 0.0861522 | 0.0785030 | |

| ${\mathbb{H}}_{1}$/${\mathbb{H}}_{2}$ | A | B | C | D | E |
|---|---|---|---|---|---|
| A | | 0.1975936 | 0.3049497 | 0.2467548 | 0.7432987 |
| B | 0.3202554 | | 0.3835209 | 0.5066311 | 0.1707835 |
| C | 0.0832231 | 0.1322987 | | 0.8356690 | 0.7088614 |
| D | 0.2012797 | 0.6292608 | 0.6292608 | | 0.5067258 |
| E | 0.0049325 | 0.0157855 | 0.0157855 | 0.0114901 | |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Kwessi, E.
Topological Comparison of Some Dimension Reduction Methods Using Persistent Homology on EEG Data. *Axioms* **2023**, *12*, 699.
https://doi.org/10.3390/axioms12070699

**AMA Style**

Kwessi E.
Topological Comparison of Some Dimension Reduction Methods Using Persistent Homology on EEG Data. *Axioms*. 2023; 12(7):699.
https://doi.org/10.3390/axioms12070699

**Chicago/Turabian Style**

Kwessi, Eddy.
2023. "Topological Comparison of Some Dimension Reduction Methods Using Persistent Homology on EEG Data" *Axioms* 12, no. 7: 699.
https://doi.org/10.3390/axioms12070699