# Simplicial-Map Neural Networks Robust to Adversarial Examples

## Abstract

## 1. Introduction

## 2. Background

**Definition 1.**

**Definition 2.**

1. Any subset of V with exactly one point of V is a simplex of K, called a 0-simplex or vertex.
2. Any nonempty subset of a simplex $\sigma$ is a simplex, called a face of $\sigma$.

**Definition 3.**

**Definition 4.**

**Definition 5.**

**Definition 6.**

**Definition 7.**

**Definition 8.**

**Definition 9.**

**Theorem 1.**

**Proposition 1 (Simplicial Approximation Theorem Extension [12]).** Given $\epsilon >0$ and a continuous function $g:\left|K\right|\to \left|L\right|$ between the underlying spaces of two simplicial complexes K and L, there exist $s,t>0$ and a simplicial map ${\phi}^{c}:|{Sd}^{s}K|\to |{Sd}^{t}L|$ that is a simplicial approximation of g with $\|g-{\phi}^{c}\|\le \epsilon$.

**Definition 10 (adapted from [19]).** Given $d,k>0$, a multi-layer feed-forward network defined between spaces $X\subseteq {\mathbb{R}}^{d}$ and $Y\subseteq {\mathbb{R}}^{k}$ is a function $\mathcal{N}:X\to Y$ composed of $m+1$ functions:
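
Definition 10 presents $\mathcal{N}$ as a composition of $m+1$ layer functions. The following pure-Python sketch illustrates only that compositional structure; the ReLU activation and the concrete layer data are illustrative assumptions, not part of the definition.

```python
def affine(W, b):
    """Affine map x -> W x + b, with W given as a list of rows."""
    return lambda x: [sum(wij * xj for wij, xj in zip(row, x)) + bi
                      for row, bi in zip(W, b)]

def relu(x):
    """Illustrative activation; the definition does not fix one."""
    return [max(0.0, v) for v in x]

def network(layers):
    """Compose m+1 affine layers, with an activation between
    consecutive layers, into a single function N : R^d -> R^k."""
    def N(x):
        for i, (W, b) in enumerate(layers):
            x = affine(W, b)(x)
            if i < len(layers) - 1:
                x = relu(x)
        return x
    return N
```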

**Theorem 2 (Theorem 4 of [12]).** Let us consider a simplicial map ${\phi}^{c}:\left|K\right|\to \left|L\right|$ between the embeddings of two finite pure simplicial complexes K and L of dimension d and k, respectively. Then, a two-hidden-layer feed-forward network ${\mathcal{N}}_{\phi}:\left|K\right|\to \left|L\right|$ such that ${\mathcal{N}}_{\phi}(x)={\phi}^{c}(x)$ for all $x\in \left|K\right|$ can be explicitly defined.

## 3. Simplicial-Map Neural Networks

**Definition 11.**

- an input layer composed of ${d}_{0}=d$ neurons;
- a first hidden layer composed of ${d}_{1}=n(d+1)$ neurons;
- a second hidden layer composed of ${d}_{2}=m(k+1)$ neurons; and
- an output layer with ${d}_{3}=k$ neurons.
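
The four widths above are fixed by $d$, $k$, the number $n$ of vertices of K, and the number $m$ of vertices of L. A one-line helper (hypothetical name) makes the bookkeeping explicit:

```python
def smnn_layer_sizes(d, k, n, m):
    """Layer widths of a simplicial-map neural network (Definition 11):
    input d, first hidden n*(d+1), second hidden m*(k+1), output k."""
    return [d, n * (d + 1), k * 0 + m * (k + 1), k][:2] + [m * (k + 1), k]
```

For instance, a complex K with 7 vertices in the plane ($d=2$, $n=7$) mapped to a complex L with 3 vertices of dimension $k=3$ ($m=3$) gives widths $[2, 21, 12, 3]$.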

**Proposition 2.** Let K and L be two finite pure simplicial complexes of dimension $d>0$ and $k>0$, respectively. Let us consider the simplicial map ${\phi}^{c}:\left|K\right|\to \left|L\right|$ induced by a vertex map ${\phi}^{(0)}:{K}^{(0)}\to {L}^{(0)}$. Then, the simplicial-map neural network ${\mathcal{N}}_{\phi}:\left|K\right|\to \left|L\right|$ induced by ${\phi}^{c}$ satisfies ${\mathcal{N}}_{\phi}(x)={\phi}^{c}(x)$ for all $x\in \left|K\right|$.
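
On a single 2-simplex, ${\phi}^{c}(x)$ is the convex combination of the vertex images weighted by the barycentric coordinates of x, which is exactly the value ${\mathcal{N}}_{\phi}$ reproduces by Proposition 2. A minimal planar sketch (Cramer's rule; all function names are illustrative):

```python
def barycentric_coords_2d(p, tri):
    """Barycentric coordinates of a planar point p in the triangle
    tri = (v0, v1, v2), computed by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    b1 = ((p[0] - x0) * (y2 - y0) - (x2 - x0) * (p[1] - y0)) / det
    b2 = ((x1 - x0) * (p[1] - y0) - (p[0] - x0) * (y1 - y0)) / det
    return (1.0 - b1 - b2, b1, b2)

def simplicial_map_value(p, tri, images):
    """phi^c(p) = sum_i b_i(p) * phi(v_i): the convex combination
    of the vertex images weighted by barycentric coordinates."""
    b = barycentric_coords_2d(p, tri)
    dim = len(images[0])
    return tuple(sum(b[i] * images[i][j] for i in range(3))
                 for j in range(dim))
```

For example, evaluating at the barycenter of a triangle whose vertices map to $v_0,v_1,v_2$ yields $(\frac{1}{3},\frac{1}{3},\frac{1}{3})$, matching the value ${\mathcal{N}}_{\phi}(D)$ reported in Figure 4.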

## 4. Classification with Simplicial-Map Neural Networks

**Definition 12.**

**Definition 13.**

- $\mathcal{N}(p)=\ell $ for all $(p,\ell )\in D$.
- $\mathcal{N}$ maps $x\in X$ to a vector of scores $\mathcal{N}(x)=({y}_{1},\cdots ,{y}_{k})\in Y$ such that ${y}_{i}\in [0,1]$ for $i\in ⟦1,k⟧$ and ${\sum}_{i\in ⟦1,k⟧}{y}_{i}=1$.
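
The second condition says that the score vector lies in the standard probability simplex. A small sketch (hypothetical helper names) checking it and extracting the label with the largest score:

```python
def is_score_vector(y, tol=1e-9):
    """Check the two score conditions of Definition 13: each score
    lies in [0, 1] and the scores sum to 1 (probability simplex)."""
    in_range = all(-tol <= yi <= 1 + tol for yi in y)
    return in_range and abs(sum(y) - 1.0) <= tol

def predicted_label(y):
    """Illustrative convention: the index of the largest score."""
    return max(range(len(y)), key=lambda i: y[i])
```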

**Remark 1.**

**Definition 14.**

**Proposition 3.**

**Proof.**

**Proposition 4.**

**Proof.**

**Remark 2.**

**Lemma 1.**

**Proof.**

#### Computing Simplicial-Map Neural Networks Robust to Adversarial Attacks

**Definition 15.**

**Proposition 5.**

**Proof.**

**Example 1.**

**Theorem 3.**

**Proof.**

- All the vertices of $\sigma $ are in ${T}_{\phi}$. Then, $\left|\sigma \right|\subseteq {T}_{\phi}$.
- Otherwise, $\sigma ={\sigma}^{1}\cup {\sigma}^{2}$, where $\varnothing \ne |{\sigma}^{1}|\subseteq {\Gamma}_{\phi}$ and $\varnothing \ne |{\sigma}^{2}|\subseteq {T}_{\phi}$.

- (1) All the vertices of $\gamma$ belong to ${T}_{\phi}$. Then $\gamma \subseteq {\sigma}^{2}\cap {\mu}^{2}$ and $g(x)=x$.
- (2) $\gamma ={\gamma}^{1}\cup {\gamma}^{2}$ with $|{\gamma}^{1}|\subseteq {\Gamma}_{\phi}$ and $|{\gamma}^{2}|\subseteq {T}_{\phi}$. Then ${\gamma}^{1}\subseteq {\sigma}^{1}\cap {\mu}^{1}$ and ${\gamma}^{2}\subseteq {\sigma}^{2}\cap {\mu}^{2}$, so the definitions of $g(x)$ with respect to $\sigma$ and $\mu$ coincide.

- (0) If $x\in {C}_{{\phi}_{t}\circ \omega}^{j}$ then ${\omega}^{c}(x)\in {C}_{{\phi}_{t}}^{j}$, where $j\in ⟦0,k⟧$.
- (1) Let $v\in {({Sd}^{t}K)}^{(0)}$ and $z\in |stv|$. If $v\in {C}_{{\phi}_{t}}^{j}$ then $z\in {C}_{{\phi}_{t}}^{j}$, where $j\in ⟦0,k⟧$. Since $z\in |stv|$, we have $z\in \left|\sigma \right|$ for some d-simplex $\sigma \in {Sd}^{t}K$ with $v\in \sigma$. Then, by Remark 2, $\sigma ={\sigma}^{1}\cup {\sigma}^{2}$, with ${\sigma}^{1}\in {\Gamma}_{{\phi}_{t}}$ and ${\sigma}^{2}\in {C}_{{\phi}_{t}}^{j}$. Moreover, since $v\in {\sigma}^{2}$, we have $z\notin |{\sigma}^{1}|$; therefore $z\in \left|\sigma \right|\setminus |{\sigma}^{1}|\subseteq {C}_{{\phi}_{t}}^{j}$.
- (2) If $x\in {C}_{{\phi}_{t}\circ \omega}^{j}$ then $g(x)\in {C}_{{\phi}_{t}}^{j}$, where $j\in ⟦0,k⟧$. If $x\in {C}_{{\phi}_{t}\circ \omega}^{j}$ then there exists $v\in {C}_{{\phi}_{t}\circ \omega}^{j}$ such that $x\in |stv|$. Then, $\omega (v)\in {C}_{{\phi}_{t}}^{j}$ by (0). Now, since $g(|stv|)\subseteq |st\omega (v)|$, we have $g(x)\in {C}_{{\phi}_{t}}^{j}$ by (1).
- (3) Let $x\in |{Sd}^{s}K|$. If $g(x)\in {\Gamma}_{{\phi}_{t}}$ then $x\in {\Gamma}_{{\phi}_{t}\circ \omega}$. By contradiction, assume that $g(x)\in {\Gamma}_{{\phi}_{t}}$ and $x\in {C}_{{\phi}_{t}\circ \omega}^{j}$ for some $j\in ⟦0,k⟧$. Then $g(x)\in {C}_{{\phi}_{t}}^{j}$ by (2), a contradiction.
- (4) Let $x\in |{Sd}^{s}K|$. If $g(x)\in {C}_{{\phi}_{t}}^{j}$ with probability ${y}_{j}$ and $x\in {C}_{{\phi}_{t}\circ \omega}^{{j}^{\prime}}$ with probability ${y}_{{j}^{\prime}}$, then $j={j}^{\prime}$ and $|{y}_{j}-{y}_{{j}^{\prime}}|<{r}_{1}$. This last statement is a consequence of (2) and the fact that $\|g-{\omega}^{c}\|<{r}_{1}$.

**Example 2.**

## 5. Conclusions and Future Work

## Author Contributions

## Funding

## Institutional Review Board Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv **2013**, arXiv:1312.6199.
2. Fezza, S.A.; Bakhti, Y.; Hamidouche, W.; Déforges, O. Perceptual Evaluation of Adversarial Attacks for CNN-based Image Classification. In Proceedings of the Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 5–7 June 2019; pp. 1–6.
3. Garg, S.; Ramakrishnan, G. BAE: BERT-based Adversarial Examples for Text Classification. arXiv **2020**, arXiv:2004.01970.
4. Karim, F.; Majumdar, S.; Darabi, H. Adversarial Attacks on Time Series. IEEE Trans. Pattern Anal. Mach. Intell. **2020**, 1.
5. Christakopoulou, K.; Banerjee, A. Adversarial Attacks on an Oblivious Recommender. In Proceedings of the 13th ACM Conference on Recommender Systems, Copenhagen, Denmark, 20 September 2019; pp. 322–330.
6. Xu, H.; Ma, Y.; Liu, H.; Deb, D.; Liu, H.; Tang, J.; Jain, A.K. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. Int. J. Autom. Comput. **2020**, 17, 151–178.
7. Yan, Z.; Guo, Y.; Zhang, C. Adversarial Margin Maximization Networks. IEEE Trans. Pattern Anal. Mach. Intell. **2019**, 1.
8. Cortes, C.; Vapnik, V. Support Vector Networks. Mach. Learn. **1995**, 20, 273–297.
9. Tang, Y. Deep Learning using Linear Support Vector Machines. arXiv **2013**, arXiv:1306.0239.
10. Sun, S.; Chen, W.; Wang, L.; Liu, X.; Liu, T. On the Depth of Deep Neural Networks: A Theoretical View. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; AAAI Press: Palo Alto, CA, USA, 2016; pp. 2066–2072.
11. Wang, X.; Zhang, S.; Lei, Z.; Liu, S.; Guo, X.; Li, S.Z. Ensemble Soft-Margin Softmax Loss for Image Classification. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI'18), Stockholm, Sweden, 13–19 July 2018; AAAI Press: Palo Alto, CA, USA, 2018; pp. 992–998.
12. Paluzo-Hidalgo, E.; Gonzalez-Diaz, R.; Gutiérrez-Naranjo, M.A. Two-hidden-layer feed-forward networks are universal approximators: A constructive approach. Neural Netw. **2020**, 131, 29–36.
13. Ismailov, V.E. On the approximation by neural networks with bounded number of neurons in hidden layers. J. Math. Anal. Appl. **2014**, 417, 963–969.
14. Guliyev, N.J.; Ismailov, V.E. Approximation capability of two hidden layer feedforward neural networks with fixed weights. Neurocomputing **2018**, 316, 262–269.
15. Ebli, S.; Defferrard, M.; Spreemann, G. Simplicial Neural Networks. arXiv **2020**, arXiv:2010.03633.
16. Spanier, E.H. Algebraic Topology; Springer: New York, NY, USA, 1995.
17. Boissonnat, J.D.; Chazal, F.; Yvinec, M. Geometric and Topological Inference; Cambridge Texts in Applied Mathematics; Cambridge University Press: Cambridge, UK, 2018.
18. Okabe, A.; Boots, B.; Sugihara, K.; Chiu, S.N.; Kendall, D.G. Definitions and Basic Properties of Voronoi Diagrams. In Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, 2nd ed.; John Wiley & Sons: Chichester, UK, 2000; pp. 43–112.
19. Hornik, K. Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. **1991**, 4, 251–257.
20. Edelsbrunner, H.; Harer, J. Computational Topology—An Introduction; American Mathematical Society: Providence, RI, USA, 2010; pp. 1–241.
21. Lecuyer, M.; Atlidakis, V.; Geambasu, R.; Hsu, D.; Jana, S. Certified Robustness to Adversarial Examples with Differential Privacy. In Proceedings of the IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 18–19 May 2019; pp. 656–672.
22. Yuan, X.; He, P.; Zhu, Q.; Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Trans. Neural Netw. Learn. Syst. **2019**, 30, 2805–2824.

**Figure 1.** Example of a barycentric subdivision. Let $V=\{a,b,c\}$ be the set of the three vertices (in blue) of the triangle depicted on the left. Let $K=\{\{a\},\{b\},\{c\},\{a,b\},\{a,c\},\{b,c\},\{a,b,c\}\}$. From left to right: $\left|K\right|$, $|SdK|$, and $|{Sd}^{2}K|$ are shown.
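
The barycentric subdivision has a purely combinatorial description: the vertices of $SdK$ are the simplices of K, and the simplices of $SdK$ are the chains of simplices of K strictly ordered by inclusion. A sketch under that description (pure Python; names are illustrative):

```python
def barycentric_subdivision(K):
    """Combinatorial barycentric subdivision.

    K is a set of simplices, each a frozenset of vertex labels.
    A vertex of Sd K is a simplex of K; a q-simplex of Sd K is a
    chain s0 < s1 < ... < sq of simplices of K strictly ordered
    by inclusion.  Returns Sd K as a set of frozensets of chains."""
    simplices = list(K)
    sd = set()

    def extend(chain):
        sd.add(frozenset(chain))          # record the chain as a simplex
        top = chain[-1]
        for s in simplices:
            if s > top:                   # strict superset: chain grows
                extend(chain + (s,))

    for s in simplices:
        extend((s,))
    return sd
```

For the triangle of Figure 1 this yields 7 vertices and 6 triangles in $SdK$, as the middle picture shows.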

**Figure 2.** Given a labelled dataset D, a convex polytope $\mathcal{P}$ containing D can be computed. Then, the simplicial complex K can be obtained using the Delaunay triangulation of all the points of D and the vertices of $\mathcal{P}$.

**Figure 3.** On the top, we can see a 1-simplex with two iterated applications of the barycentric subdivision. On the bottom, a continuous function was applied to the straight line and a simplicial approximation (in red) is provided. The star condition is satisfied and no more barycentric subdivisions are needed.

**Figure 4.** Let $K=\mathcal{D}({D}_{P}\cup {V}_{\mathcal{P}})$ be the Delaunay complex of ${D}_{P}\cup {V}_{\mathcal{P}}$, where ${D}_{P}$ is the set $\{A,B,C\}$ of red and blue points, and ${V}_{\mathcal{P}}$ is the set of green vertices (depicted in the center). Let L be the simplicial complex with one maximal simplex $\sigma =\{{v}_{0}=(0,0,1),{v}_{1}=(0,1,0),{v}_{2}=(1,0,0)\}$ (pictured on the right). Let us consider the vertex map ${\phi}^{(0)}$ that sends the blue points $A,B$ to ${v}_{1}$, the red point C to ${v}_{2}$, and the green points (labelled as unknown) to ${v}_{0}$. Then, ${\phi}^{(0)}$ gives rise to the simplicial map ${\phi}^{c}$ and the simplicial-map neural network ${\mathcal{N}}_{\phi}$. The decision boundary of ${\mathcal{N}}_{\phi}$ is pictured in the center as the set of points in the boundary of the red, blue, or green regions. For example, ${\mathcal{N}}_{\phi}(D)=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$, ${\mathcal{N}}_{\phi}(F)=(\frac{1}{2},\frac{1}{2},0)$, and ${\mathcal{N}}_{\phi}(G)=(0,\frac{1}{2},\frac{1}{2})$. Let us now consider the barycentric subdivision of K shown on the left and the simplicial map ${\omega}^{c}$ relating both simplicial complexes. The decision boundary of ${\mathcal{N}}_{\phi \circ \omega}$ is the gray zone in the left picture.

**Figure 5.** An adversarial example x for the simplicial-map neural network ${\mathcal{N}}_{\phi}:\left|K\right|\to \left|L\right|$.

**Figure 6.** Three simplicial complexes with simplicial maps ${\omega}^{c}$ and ${\phi}_{2}^{c}$ between them are shown, illustrating a neural network ${\mathcal{N}}_{{\phi}_{2}\circ \omega}$ robust to adversarial attacks of size r.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Paluzo-Hidalgo, E.; Gonzalez-Diaz, R.; Gutiérrez-Naranjo, M.A.; Heras, J. Simplicial-Map Neural Networks Robust to Adversarial Examples. *Mathematics* **2021**, *9*, 169.
https://doi.org/10.3390/math9020169
