# Semi-Supervised Manifold Alignment Using Parallel Deep Autoencoders


## Abstract


## 1. Introduction

## 2. Semi-Supervised Manifold Alignment

- Method 1 (MAPA), Manifold Alignment using Procrustes Analysis [23]: This method is a version of Approach I and in its second step applies Procrustes analysis to rescale and rotate manifold ${S}_{Y}$ to align it with manifold ${S}_{X}$. If Locality Preserving Projections (LPP) [40] are used in the dimensionality reduction step, it results in feature-level manifold alignment, and we refer to the method as MAPA-feat in the following sections. If Laplacian eigenmaps [41] are used in the dimensionality reduction step to obtain instance-level alignment, we refer to the method as MAPA-inst.
- Method 2 (MALG), Manifold Alignment preserving Local Geometry [39]: This method is a version of Approach II. First, a joint manifold Z is calculated using the graph Laplacians of the given manifolds. If, in the next step, eigenvalue decomposition of Z provides instance-level alignment, we refer to it as MALG-inst. If generalised eigenvalue decomposition of Z is used for feature-level alignment, we refer to the method as MALG-feat.
- Method 3 (MAGG), Manifold Alignment preserving Global Geometry [39]: This method is a version of Approach II. A joint manifold Z is generated using the global distances of corresponding pairs in $X\cup Y$. Eigenvalue decomposition of Z provides dimensionality reduction to obtain the aligned low-dimensional manifolds in the case of instance-level alignment (MAGG-inst). Generalised eigenvalue decomposition is used instead in the case of feature-level alignment (MAGG-feat).
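The Procrustes step at the core of MAPA can be sketched in a few lines: given low-dimensional embeddings of the corresponding points, the optimal rotation and isotropic scaling of ${S}_{Y}$ onto ${S}_{X}$ follow from an SVD of the cross-covariance matrix. The following is a minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def procrustes_align(DX, DY):
    """Optimal rotation Q and isotropic scale k such that k * DY @ Q best
    matches DX in the least-squares sense (the second step of MAPA); rows
    of DX and DY are corresponding low-dimensional points."""
    DX0 = DX - DX.mean(axis=0)             # centring removes translation
    DY0 = DY - DY.mean(axis=0)
    U, S, Vt = np.linalg.svd(DY0.T @ DX0)  # SVD of the cross-covariance
    Q = U @ Vt                             # optimal rotation (orthogonal)
    k = S.sum() / (DY0 ** 2).sum()         # optimal isotropic scaling
    return Q, k

# Toy check: DY is a rotated, doubled copy of DX, so alignment is exact.
rng = np.random.default_rng(0)
DX = rng.normal(size=(50, 3))
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
DY = 2.0 * DX @ R
Q, k = procrustes_align(DX, DY)
aligned = k * (DY - DY.mean(axis=0)) @ Q + DX.mean(axis=0)
```

Because both point sets are centred before the SVD, translation is removed automatically, and the returned transformation handles only rotation and scale, as in the Procrustes analysis step described above.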

## 3. Parallel Deep Autoencoder

#### Asymmetric PDAE

## 4. Performance Evaluation

## 5. Experiments and Results

#### 5.1. 1-Manifold Alignment

#### 5.2. Experiments on 2-, 3- and 4-Manifold Alignment

#### 5.2.1. Double Pendulum Datasets

- (i) 2D-2D motion: The pendulum has two Degrees-Of-Freedom (DOF), that is, both limbs ${u}_{1}$ and ${u}_{2}$ rotate in the two-dimensional (x-y)-plane, each of them describing a circle. In Figure 4a, ${\theta}_{1}$ and ${\theta}_{2}$ are the rotation angles of limbs ${u}_{1}$ and ${u}_{2}$ at joints ${J}_{1}$ and ${J}_{2}$, respectively. Accordingly, the manifold representing the dynamics of the 2D-2D case is the cross-product of two circles, ${S}^{1}\times {S}^{1}$, which is homeomorphic to the two-dimensional torus, that is, a 2-manifold.
- (ii) 2D-3D motion: The pendulum has three DOFs, where limb ${u}_{2}$ can rotate on a two-dimensional sphere ${S}^{2}$ in three-dimensional space, while ${u}_{1}$ is restricted to rotate on a circle ${S}^{1}$ in a two-dimensional plane. That is, the manifold representing the dynamics of the 2D-3D case is homeomorphic to ${S}^{1}\times {S}^{2}$, which is a 3-manifold. As the pendulum moves in 3D space, the end-effector has the 3D coordinates $\left({e}_{x},{e}_{y},{e}_{z}\right)$. In Figure 4b, ${\theta}_{{y}^{\prime}}$ and ${\theta}_{{z}^{\prime}}$ are the angles of ${u}_{2}$ with axes ${y}^{\prime}$ and ${z}^{\prime}$, respectively, and describe the motion on the sphere ${S}^{2}$. ${\theta}_{1}$ is the angle between the x-axis and ${u}_{1}$ and describes the two-dimensional rotation of the sphere’s centre in the (x-y)-plane.
- (iii) 3D-3D motion: In this case, the pendulum has four DOFs, where both limbs can rotate on two-dimensional spheres in 3D space. In Figure 4c, ${\theta}_{y}$ and ${\theta}_{z}$ are the angles of ${u}_{1}$ with the y and z axes, respectively, and ${\theta}_{{y}^{\prime}}$ and ${\theta}_{{z}^{\prime}}$ are the angles of ${u}_{2}$ with the ${y}^{\prime}$ and ${z}^{\prime}$ axes, respectively. Accordingly, the manifold representing the dynamics of the 3D-3D case is expected to be homeomorphic to ${S}^{2}\times {S}^{2}$, which is a 4-manifold.

- Pendulum X: $({u}_{2}/{u}_{1})=0.75/1.25=0.60$
- Pendulum Y: $({u}_{2}/{u}_{1})=1.25/1.56=0.80$

- 2D-2D: $({e}_{x},{e}_{y},\cos{\theta}_{1},\cos{\theta}_{2},\sin{\theta}_{1},\sin{\theta}_{2})$
- 2D-3D: $({e}_{x},{e}_{y},{e}_{z},\cos{\theta}_{1},\cos{\theta}_{{y}^{\prime}},\cos{\theta}_{{z}^{\prime}},\sin{\theta}_{1},\sin{\theta}_{{y}^{\prime}},\sin{\theta}_{{z}^{\prime}})$
- 3D-3D: $({e}_{x},{e}_{y},{e}_{z},\cos{\theta}_{y},\cos{\theta}_{z},\cos{\theta}_{{y}^{\prime}},\cos{\theta}_{{z}^{\prime}},\sin{\theta}_{z},\sin{\theta}_{y},\sin{\theta}_{{y}^{\prime}},\sin{\theta}_{{z}^{\prime}})$
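As an illustration of how such feature vectors can be generated, the sketch below samples the 2D-2D configuration space on a regular grid of joint angles. It assumes, for simplicity, that ${\theta}_{1}$ and ${\theta}_{2}$ are absolute angles and uses the Pendulum X limb lengths; the helper `features_2d2d` and the 36-step grid are illustrative choices (a 36 × 36 grid happens to reproduce the $1296\times 6$ dataset size reported in Table 6):

```python
import numpy as np

def features_2d2d(theta1, theta2, u1=1.25, u2=0.75):
    """Six-dimensional sample (e_x, e_y, cos t1, cos t2, sin t1, sin t2) for
    the 2D-2D double pendulum; limb lengths default to Pendulum X and the
    angles are treated as absolute (an assumption of this sketch)."""
    ex = u1 * np.cos(theta1) + u2 * np.cos(theta2)  # end-effector x
    ey = u1 * np.sin(theta1) + u2 * np.sin(theta2)  # end-effector y
    return np.array([ex, ey,
                     np.cos(theta1), np.cos(theta2),
                     np.sin(theta1), np.sin(theta2)])

# Sampling both joint angles in 10-degree steps covers the torus S^1 x S^1
# and yields a 1296 x 6 dataset.
angles = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
X = np.array([features_2d2d(t1, t2) for t1 in angles for t2 in angles])
```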

#### 5.2.2. PDAE Architecture

- 2D-2D: 6-5-4-3-4-5-6
- 2D-3D: 9-8-7-6-5-4-3-4-5-6-7-8-9
- 3D-3D: 11-10-9-8-7-6-5-4-3-4-5-6-7-8-9-10-11
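The 2D-2D case can serve to sketch how the two parallel networks and the combined objective fit together. The sketch below is a plain NumPy forward pass through the 6-5-4-3-4-5-6 layout: two weight-independent autoencoders share a loss consisting of both reconstruction errors plus the correspondence term ${E}_{corr}$ computed on the 3-neuron codes of the paired samples. The weighting `mu`, the tanh activations and the initialisation are illustrative assumptions, and no training loop is shown:

```python
import numpy as np

LAYERS = [6, 5, 4, 3, 4, 5, 6]  # the 2D-2D architecture; 3 is the code layer
CODE_LAYER = 2                  # index of the layer producing the 3D code

def init_net(rng):
    """One autoencoder as a list of (W, b) pairs for the 6-5-4-3-4-5-6 MLP."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(LAYERS[:-1], LAYERS[1:])]

def forward(net, x):
    """Return (code, reconstruction) for a batch x of shape (batch, 6)."""
    h, code = x, None
    for i, (W, b) in enumerate(net):
        h = np.tanh(h @ W + b)
        if i == CODE_LAYER:     # output of the 3-neuron bottleneck
            code = h
    return code, h

def pdae_loss(net_x, net_y, X, Y, XC, YC, mu=1.0):
    """Sum of both reconstruction errors plus mu * E_corr, where E_corr
    pulls the codes D_X and D_Y of the correspondence subsets together
    (mu is an illustrative weighting hyperparameter)."""
    _, Xr = forward(net_x, X)
    _, Yr = forward(net_y, Y)
    DX, _ = forward(net_x, XC)
    DY, _ = forward(net_y, YC)
    return (((X - Xr) ** 2).mean() + ((Y - Yr) ** 2).mean()
            + mu * ((DX - DY) ** 2).mean())

rng = np.random.default_rng(0)
net_x, net_y = init_net(rng), init_net(rng)
X = rng.normal(size=(100, 6))   # toy stand-ins for the pendulum datasets
Y = rng.normal(size=(100, 6))
loss = pdae_loss(net_x, net_y, X, Y, X[:10], Y[:10])  # first 10 rows paired
```

Minimising this combined loss over both weight sets simultaneously is what applies the regularisation pressure on the codes described for the PDAE.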

#### 5.2.3. Results of 2-Manifold Alignment

#### 5.2.4. Results of 3-Manifold Alignment

#### 5.2.5. Results of 4-Manifold Alignment

#### 5.3. Cross-Modality Manifold Alignment

## 6. Discussion and Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Abbreviations

| Abbreviation | Meaning |
|---|---|
| CAE | Convolutional Autoencoder |
| DOF | Degrees-Of-Freedom |
| FAE | Fully-connected Autoencoder |
| MAGG | Manifold Alignment preserving Global Geometry |
| MALG | Manifold Alignment preserving Local Geometry |
| MAPA | Manifold Alignment using Procrustes Analysis |
| PAE | Parallel Autoencoders |
| PCA | Principal Component Analysis |
| PDAE | Parallel Deep Autoencoder |

## References

- Lee, J.M. Introduction to Topological Manifolds; Springer: New York, NY, USA, 2000.
- Hirsch, M. Differential Topology; Springer: New York, NY, USA, 2000.
- Spivak, M. A Comprehensive Introduction to Differential Geometry, 2nd ed.; Publish or Perish, Inc.: Houston, TX, USA, 1979.
- Lee, J.A.; Verleysen, M. Nonlinear Dimensionality Reduction; Springer Science & Business Media: New York, NY, USA, 2007.
- Tenenbaum, J.B.; de Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science **2000**, 290, 2319–2323.
- Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality Reduction: A Comparative Review; Technical Report TiCC TR 2009-005; Tilburg Center for Cognition and Communication (TiCC): Tilburg, The Netherlands, 2009.
- Ma, Y.; Fu, Y. (Eds.) Manifold Learning. Theory and Applications; CRC Press, Inc.: Boca Raton, FL, USA, 2011.
- Chalup, S.K.; Clement, R.; Tucker, C.; Ostwald, M.J. Modelling Architectural Visual Experience Using Non-linear Dimensionality Reduction. In Proceedings of the Australian Conference on Artificial Life (ACAL 2007), Gold Coast, Australia, 4–6 December 2007; Lecture Notes in Computer Science; Randall, M., Abbass, H., Wiles, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4828, pp. 84–95.
- Chalup, S.K.; Clement, R.; Marshall, J.; Tucker, C.; Ostwald, M.J. Representations of Streetscape Perceptions Through Manifold Learning in the Space of Hough Arrays. In Proceedings of the 2007 IEEE Symposium on Artificial Life (CI-ALife 2007), Honolulu, HI, USA, 1–5 April 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 362–369.
- Wong, A.S.W.; Chalup, S.K.; Bhatia, S.; Jalalian, A.; Kulk, J.; Nicklin, S.; Ostwald, M.J. Visual Gaze Analysis of Robotic Pedestrians Moving in Urban Space. Archit. Sci. Rev. **2012**, 55, 213–223.
- Paul, R.; Chalup, S.K. A Study on Validating Non-Linear Dimensionality Reduction Using Persistent Homology. Pattern Recognit. Lett. **2017**, 100, 160–166.
- Aziz, F.; Chalup, S. Testing the Robustness of Manifold Learning on Examples of Thinned-Out Data. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019), Budapest, Hungary, 14–19 July 2019; IEEE: Piscataway, NJ, USA, 2019.
- Ham, J.H.; Lee, D.D.; Saul, L.K. Learning high dimensional correspondences from low dimensional manifolds. In Proceedings of the 20th International Conference on Machine Learning (ICML 2003) Workshop: The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, Washington, DC, USA, 21–24 August 2003.
- Ham, J.; Lee, D.; Saul, L. Semisupervised alignment of manifolds. In Proceedings of the Annual Conference on Uncertainty in Artificial Intelligence, The Savannah Hotel, Barbados, 6–8 January 2005.
- Chang, Y.; Hu, C.; Feris, R.; Turk, M. Manifold Based Analysis of Facial Expression. Image Vis. Comput. **2006**, 24, 605–614.
- Cui, Z.; Shan, S.; Zhang, H.; Lao, S.; Chen, X. Image sets alignment for video-based face recognition. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 2626–2633.
- Pei, Y.; Huang, F.; Shi, F.; Zha, H. Unsupervised image matching based on manifold alignment. IEEE Trans. Pattern Anal. Mach. Intell. **2012**, 34, 1658–1664.
- Wang, X.; Yang, R. Learning 3D shape from a single facial image via non-linear manifold embedding and alignment. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 414–421.
- Xiong, L.; Wang, F.; Zhang, C. Semi-definite manifold alignment. In Machine Learning: ECML 2007; Lecture Notes in Computer Science; Kok, J., Koronacki, J., Mantaras, R., Matwin, S., Mladenič, D., Skowron, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4701, pp. 773–781.
- Zhai, D.; Li, B.; Chang, H.; Shan, S.; Chen, X.; Gao, W. Manifold alignment via corresponding projections. In Proceedings of the British Machine Vision Conference, Aberystwyth, UK, 31 August–3 September 2010; BMVA Press: Surrey, UK, 2010; pp. 1–11.
- Escolano, F.; Hancock, E.; Lozano, M. Graph matching through entropic manifold alignment. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2417–2424.
- Abeo, T.A.; Shen, X.J.; Ganaa, E.D.; Zhu, Q.; Bao, B.K.; Zha, Z.J. Manifold alignment via global and local structures preserving PCA framework. IEEE Access **2019**, 7, 38123–38134.
- Wang, C.; Mahadevan, S. Manifold alignment using procrustes analysis. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, Helsinki, Finland, 5–9 July 2008; ACM: New York, NY, USA, 2008; pp. 1120–1127.
- Guerrero, R.; Ledig, C.; Rueckert, D. Manifold alignment and transfer learning for classification of Alzheimer’s disease. In Machine Learning in Medical Imaging; Wu, G., Zhang, D., Zhou, L., Eds.; Springer: Cham, Switzerland, 2014; pp. 77–84.
- Yang, H.L.; Crawford, M.M. Manifold alignment for multitemporal hyperspectral image classification. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, 24–29 July 2011; pp. 4332–4335.
- Li, X.; Lv, J.; Zhang, Y. Manifold Alignment Based on Sparse Local Structures of More Corresponding Pairs. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI’13), Beijing, China, 3–9 August 2013; pp. 2862–2868.
- Bishop, C.M. Training with Noise is Equivalent to Tikhonov Regularization. Neural Comput. **1995**, 7, 108–116.
- Bourlard, H.; Kamp, Y. Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. **1988**, 59, 291–294.
- Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science **2006**, 313, 504–507.
- Sakurada, M.; Yairi, T. Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction. In Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, MLSDA’14, Gold Coast, Australia, 2 December 2014; ACM: New York, NY, USA, 2014; pp. 4–11.
- Wang, Y.; Yao, H.; Zhao, S. Auto-encoder Based Dimensionality Reduction. Neurocomputing **2016**, 184, 232–242.
- Finn, C.; Tan, X.Y.; Duan, Y.; Darrell, T.; Levine, S.; Abbeel, P. Deep spatial autoencoders for visuomotor learning. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 512–519.
- Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. **2010**, 11, 3371–3408.
- Majumdar, A. Blind denoising autoencoder. IEEE Trans. Neural Netw. Learn. Syst. **2018**, 30, 1–6.
- Amodio, M.; Krishnaswamy, S. MAGAN: Aligning Biological Manifolds. In Proceedings of the 35th International Conference on Machine Learning, ICML, Stockholm, Sweden, 10–15 July 2018; pp. 215–223.
- Mukherjee, T.; Yamada, M.; Hospedales, T.M. Deep matching autoencoders. arXiv **2017**, arXiv:1711.06047.
- Wang, R.; Li, L.; Li, J. A novel parallel auto-encoder framework for multi-scale data in civil structural health monitoring. Algorithms **2018**, 11, 112.
- Wang, C.; Mahadevan, S. Manifold Alignment Without Correspondence. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, IJCAI’09, Pasadena, CA, USA, 11–17 July 2009; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2009; Volume 2, pp. 1273–1278.
- Wang, C.; Mahadevan, S. A general framework for manifold alignment. In Proceedings of the AAAI Fall Symposium: Manifold Learning and Its Applications, Arlington, VA, USA, 5–7 November 2009; AAAI Press: Menlo Park, CA, USA, 2009; pp. 79–86.
- He, X.; Niyogi, P. Locality preserving projections. In Advances in Neural Information Processing Systems (NIPS 2003); Thrun, S., Saul, L.K., Schölkopf, B., Eds.; MIT Press: Cambridge, MA, USA, 2004; Volume 16, pp. 153–160.
- Belkin, M.; Niyogi, P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. **2003**, 15, 1373–1396.
- Baldi, P. Autoencoders, unsupervised learning, and deep architectures. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning; Guyon, I., Dror, G., Lemaire, V., Taylor, G., Silver, D., Eds.; PMLR: Bellevue, WA, USA, 2012; Volume 27, pp. 37–49.
- Wang, M.; Deng, W. Deep Visual Domain Adaptation: A Survey. arXiv **2018**, arXiv:1802.03601.
- Berman, H.M.; Battistuz, T.; Bhat, T.N.; Bluhm, W.F.; Bourne, P.E.; Burkhardt, K.; Feng, Z.; Gilliland, G.L.; Iype, L.; Jain, S.; et al. The protein data bank. Acta Crystallogr. Sect. D **2002**, 58, 899–907.
- Wang, C. A Geometric Framework for Transfer Learning Using Manifold Alignment. Ph.D. Thesis, Department of Computer Science, University of Massachusetts Amherst, Amherst, MA, USA, 2010.
- Wang, J.; Zhang, X.; Li, X.; Du, J. Semi-Supervised Manifold Alignment With Few Correspondences. Neurocomputing **2017**, 230, 322–331.
- Aziz, F.; Wong, A.S.W.; Welsh, J.S.; Chalup, S.K. Aligning manifolds of double pendulum dynamics under the influence of noise. In Proceedings of the 25th International Conference on Neural Information Processing (ICONIP 2018), Siem Reap, Cambodia, 13–16 December 2018; Lecture Notes in Computer Science (LNCS); Springer: Cham, Switzerland, 2018.

**Figure 1.** PDAE: The two high-dimensional datasets X and Y include correspondence subsets ${X}_{C}$ and ${Y}_{C}$, respectively, and are compressed to low-dimensional manifolds ${S}_{X}$ and ${S}_{Y}$, where ${D}_{X}$ and ${D}_{Y}$ are the low-dimensional representations of ${X}_{C}$ and ${Y}_{C}$, respectively. The minimisation of ${E}_{corr}$ applies regularisation pressure to ${D}_{X}$ and ${D}_{Y}$ to align the low-dimensional manifolds ${S}_{X}$ and ${S}_{Y}$.

**Figure 2.** Architecture of the asymmetric PDAE: The left autoencoder is a fully-connected autoencoder, which takes feature vectors from X as input. The right autoencoder is a CAE, which takes images from Y as input. Both autoencoders have a fully-connected layer with three neurons as their code layer.

**Figure 3.** Protein structure manifold alignment. 3D structure of the glutaredoxin protein PDB-1G7O: The blue graph shows Model 1, and the red graph shows Model 21 scaled by a factor of four. (**a**) shows the manifolds before alignment. The other subfigures show the resulting alignments using (**b**) MAPA-feature (feat), (**c**) MAPA-instance (inst), (**d**) MALG-feat, (**e**) MALG-inst, (**f**) MAGG-feat, (**g**) MAGG-inst and (**h**) PDAE.

**Figure 4.** (**a**) shows the 2D-2D version of the double pendulum where both limbs are rotating in a two-dimensional plane. x and y are the local coordinate axes of limb ${u}_{1}$; ${x}^{\prime}$ and ${y}^{\prime}$ are the local axes of limb ${u}_{2}$; $({e}_{x},{e}_{y})$ are the end-effector coordinates. (**b**) shows the 2D-3D version of the double pendulum where limb ${u}_{1}$ is in a two-dimensional plane and limb ${u}_{2}$ is rotating on a sphere in three-dimensional space. x and y are the local coordinate axes of limb ${u}_{1}$; ${x}^{\prime}$, ${y}^{\prime}$ and ${z}^{\prime}$ are the local coordinate axes of limb ${u}_{2}$, and $({e}_{x},{e}_{y},{e}_{z})$ are the end-effector coordinates. (**c**) shows the 3D-3D version of the double pendulum where both limbs are rotating on spheres in three-dimensional space. x, y and z are the local axes of limb ${u}_{1}$; ${x}^{\prime}$, ${y}^{\prime}$ and ${z}^{\prime}$ are the local axes of limb ${u}_{2}$, and $({e}_{x},{e}_{y},{e}_{z})$ are the end-effector coordinates.

**Figure 5.** Manifold alignments of 2D motion data under the influence of noise: The graphs visualise the outcomes of aligning datasets X (red) and Y (blue) using the seven methods named at the top. Each row shows the results obtained under a different level of actuator noise (Rows 2–4) or coordinate noise (Rows 5–7). The expected result is a torus ${S}^{1}\times {S}^{1}$. However, the outcomes of MAPA, MALG and MAGG tend to collapse into a cylinder or, for noise ranges ≥ $\pm {2}^{\circ}$, misalign or otherwise disintegrate, particularly at the instance level; the only exception appears to be MALG-feat at the highest level of actuator noise. The graphs in the rightmost column demonstrate that, of all the methods tested, PDAE is best able to produce the expected torus-like manifold at all levels of noise.

**Figure 6.** Manifold alignments of 2D-3D motion data under the influence of noise: The graphs visualise the outcomes of aligning datasets X (red) and Y (blue) using the seven methods named at the top. Each row shows the results obtained under a different level of noise. The expected outcome is ${S}^{1}\times {S}^{2}$, that is, a ring of spheres; for clarity, the visualisations show a section comprising six spheres ${S}^{2}$ at equally distributed positions on the circle ${S}^{1}$. Of all the methods tested, the PDAE shows the best performance, with the almost perfectly aligned manifolds shown at the right end of the top row. With the addition of noise, the results deteriorate.

**Figure 7.** Alignments of 3D-3D motion manifolds: Each graph visualises a different way of aligning the manifolds underlying datasets X and Y, which were collected from snapshots at ${90}^{\circ}$ steps of ${u}_{1}$ and ${30}^{\circ}$ steps of ${u}_{2}$. All manifolds of the instance-level methods collapsed or misaligned. The outcomes of MAGG-feat and the deep autoencoder show the expected result, that is, six spheres representing snapshots of the pendulum movements on a 4-manifold homeomorphic to ${S}^{2}\times {S}^{2}$.

**Figure 8.** Cross-modality manifold alignment using the asymmetric PDAE on four versions of the data with different percentages of correspondence pairs ($10\%$, $30\%$, $50\%$, $100\%$). The corresponding quantitative alignment errors $\mathsf{\Delta}$, with the associated standard deviations $\sigma$ in parentheses, are displayed in the subcaptions next to the correspondence percentages.

**Table 1.** The table shows $\mathsf{\Delta}$ of the alignment of Models 1 and 21 for all methods that were tested and visualised in Figure 3. The standard deviation $\sigma$ is provided in parentheses. PDAE performed best.

| Methods | | $\mathsf{\Delta}(\mathit{\sigma})$ |
|---|---|---|
| MAPA | feat | 0.094 (0.039) |
| | inst | 0.030 (0.021) |
| MALG | feat | 0.022 (0.016) |
| | inst | 0.023 (0.021) |
| MAGG | feat | 0.033 (0.024) |
| | inst | 0.034 (0.030) |
| PDAE | | **0.017 (0.010)** |
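For reference, the error measure can be sketched as follows, assuming $\mathsf{\Delta}$ is the mean Euclidean distance between corresponding points of the aligned low-dimensional manifolds and $\sigma$ the standard deviation of those distances; the exact normalisation of the paper's definition in (5) may differ:

```python
import numpy as np

def alignment_error(SX, SY):
    """Mean distance Delta between corresponding aligned points and its
    standard deviation sigma; assumes the rows of SX and SY are in
    correspondence. (A sketch of the measure from Section 4; the exact
    normalisation in the paper's Eq. (5) may differ.)"""
    d = np.linalg.norm(SX - SY, axis=1)  # per-pair Euclidean distances
    return d.mean(), d.std()

# Two near-identical 3D embeddings should yield a small Delta.
rng = np.random.default_rng(0)
SX = rng.normal(size=(200, 3))
SY = SX + rng.normal(scale=0.01, size=SX.shape)
delta, sigma = alignment_error(SX, SY)
```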

**Table 2.** 2D-2D manifold alignment: Shown are the alignment errors $\mathsf{\Delta}$ for the different alignment methods under different levels of noise. The standard deviation $\sigma$, defined in (5) as a measure of the smoothness of the alignment, is provided in parentheses. The best results are highlighted in bold.

| Noise | MAPA feat | MAPA inst | MALG feat | MALG inst | MAGG feat | MAGG inst | PDAE |
|---|---|---|---|---|---|---|---|
| 0 | 0.186 (0.070) | 0.112 (0.041) | 0.003 (0.001) | 0.013 (0.012) | 0.054 (0.015) | 0.089 (0.038) | 0.007 (0.003) |
| *Actuator noise* | | | | | | | |
| $\pm {2}^{\circ}$ | 0.090 (0.030) | 0.090 (0.037) | 0.016 (0.009) | 0.034 (0.026) | 0.053 (0.017) | 0.097 (0.043) | **0.015 (0.009)** |
| $\pm {6}^{\circ}$ | 0.377 (0.157) | 0.485 (0.205) | 0.047 (0.027) | 0.101 (0.092) | 0.072 (0.031) | 0.098 (0.051) | 0.011 (0.028) |
| $\pm {10}^{\circ}$ | 0.365 (0.154) | 0.713 (0.304) | 0.078 (0.043) | 0.180 (0.135) | 0.119 (0.055) | 0.101 (0.056) | **0.007 (0.039)** |
| *Coordinate noise* | | | | | | | |
| $\pm 0.2$ | 0.105 (0.035) | 0.150 (0.067) | **0.039 (0.018)** | 0.061 (0.042) | 0.078 (0.029) | 0.117 (0.052) | 0.048 (0.022) |
| $\pm 0.6$ | 0.125 (0.056) | 0.184 (0.093) | 0.124 (0.056) | 0.145 (0.096) | 0.139 (0.060) | 0.161 (0.076) | **0.085 (0.043)** |
| $\pm 1.0$ | 0.203 (0.078) | 0.271 (0.145) | 0.135 (0.063) | 0.218 (0.144) | 0.204 (0.086) | 0.190 (0.083) | **0.128 (0.048)** |
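One way to reproduce the actuator-noise conditions of the table is to perturb each joint angle with uniform noise in the stated range before computing the feature vectors; the helper below is an illustrative sketch, not the authors' code:

```python
import numpy as np

def add_actuator_noise(angles, level_deg, rng):
    """Perturb joint angles (radians) with uniform noise drawn from
    +/- level_deg degrees, mimicking the actuator-noise conditions."""
    bound = np.deg2rad(level_deg)
    noise = rng.uniform(-bound, bound, size=np.shape(angles))
    return np.asarray(angles) + noise

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 2.0 * np.pi, 1296)
noisy = add_actuator_noise(clean, 2.0, rng)  # the +/- 2 degree condition
```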

**Table 3.** 2D-3D manifold alignment: Alignment errors with standard deviations as explained in Table 2.

| Noise | MAPA feat | MAPA inst | MALG feat | MALG inst | MAGG feat | MAGG inst | PDAE |
|---|---|---|---|---|---|---|---|
| 0 | 0.079 (0.032) | 0.157 (0.073) | 0.025 (0.012) | 0.124 (0.055) | 0.023 (0.012) | 0.062 (0.025) | 0.007 (0.004) |
| *Actuator noise* | | | | | | | |
| $\pm {2}^{\circ}$ | 0.065 (0.027) | 0.427 (0.17) | 0.025 (0.012) | 0.092 (0.063) | 0.026 (0.013) | 0.037 (0.018) | **0.012 (0.006)** |
| $\pm {4}^{\circ}$ | 0.073 (0.031) | 0.379 (0.153) | 0.035 (0.017) | 0.109 (0.06) | 0.029 (0.016) | 0.048 (0.025) | **0.026 (0.012)** |
| $\pm {6}^{\circ}$ | 0.065 (0.3) | 0.115 (0.049) | 0.039 (0.021) | 0.273 (0.228) | 0.048 (0.023) | 0.074 (0.038) | 0.051 (0.026) |
| $\pm {8}^{\circ}$ | 0.150 (0.071) | 0.082 (0.04) | 0.052 (0.031) | 0.207 (0.244) | 0.047 (0.027) | 0.086 (0.047) | **0.062 (0.026)** |
| $\pm {10}^{\circ}$ | 0.120 (0.056) | 0.169 (0.08) | 0.089 (0.045) | 0.361 (0.164) | 0.078 (0.042) | 0.117 (0.062) | **0.049 (0.023)** |
| *Coordinate noise* | | | | | | | |
| $\pm 0.2$ | 0.073 (0.031) | 0.143 (0.075) | 0.066 (0.025) | 0.131 (0.066) | 0.08 (0.031) | 0.092 (0.044) | 0.075 (0.033) |
| $\pm 0.4$ | 0.113 (0.047) | 0.428 (0.169) | 0.101 (0.041) | 0.163 (0.095) | 0.122 (0.05) | 0.141 (0.071) | 0.082 (0.035) |
| $\pm 0.6$ | 0.149 (0.061) | 0.182 (0.095) | 0.131 (0.053) | 0.197 (0.121) | 0.171 (0.069) | 0.185 (0.09) | 0.139 (0.059) |
| $\pm 0.8$ | 0.215 (0.088) | 0.384 (0.139) | 0.159 (0.065) | 0.237 (0.139) | 0.238 (0.098) | 0.230 (0.113) | 0.088 (0.038) |
| $\pm 1$ | 0.264 (0.104) | 0.438 (0.148) | 0.196 (0.079) | 0.297 (0.167) | 0.297 (0.122) | 0.315 (0.154) | 0.119 (0.05) |

**Table 4.** 3D-3D manifold alignment: Shown is the alignment error $\mathsf{\Delta}$ with the standard deviation for each level of noise. The lowest $\mathsf{\Delta}$, that is, the closest alignment in each row, is highlighted in bold.

| Noise | MAPA feat | MALG feat | MAGG feat | PDAE |
|---|---|---|---|---|
| 0 | 0.106 (0.060) | **0.019 (0.011)** | 0.045 (0.012) | 0.037 (0.020) |
| *Actuator noise* | | | | |
| $\pm {2}^{\circ}$ | 0.115 (0.067) | **0.025 (0.011)** | 0.048 (0.015) | 0.029 (0.20) |
| $\pm {4}^{\circ}$ | 0.124 (0.072) | 0.042 (0.021) | 0.058 (0.022) | **0.039 (0.019)** |
| $\pm {6}^{\circ}$ | 0.141 (0.077) | 0.051 (0.025) | 0.075 (0.032) | **0.042 (0.013)** |
| $\pm {8}^{\circ}$ | 0.233 (0.118) | 0.058 (0.033) | 0.082 (0.036) | **0.050 (0.037)** |
| $\pm {10}^{\circ}$ | 0.359 (0.225) | 0.066 (0.030) | 0.090 (0.041) | **0.054 (0.016)** |
| *Coordinate noise* | | | | |
| $\pm 0.2$ | 0.135 (0.078) | 0.032 (0.015) | 0.071 (0.012) | **0.023 (0.012)** |
| $\pm 0.4$ | 0.151 (0.083) | 0.048 (0.028) | 0.113 (0.042) | **0.024 (0.013)** |
| $\pm 0.6$ | 0.170 (0.094) | 0.057 (0.033) | 0.158 (0.058) | **0.022 (0.011)** |
| $\pm 0.8$ | 0.213 (0.121) | 0.072 (0.041) | 0.194 (0.069) | **0.070 (0.033)** |
| $\pm 1$ | 0.227 (0.122) | 0.083 (0.047) | 0.225 (0.079) | **0.069 (0.033)** |

**Table 5.** Shown is the structure of the convolutional autoencoder part of the asymmetric PDAE, which was used to align our cross-modality pendulum data. The input layer is at the top.

| Layer Type | Kernel | Channels | Size | Activation |
|---|---|---|---|---|
| Convolutional | 5 × 5 | 3 | 128 × 128 | lrelu |
| Max pool | 2 × 2 | | 56 × 56 | |
| Convolutional | 5 × 5 | 10 | 56 × 56 | lrelu |
| Max pool | 2 × 2 | | 28 × 28 | |
| Convolutional | 5 × 5 | 20 | 28 × 28 | lrelu |
| Max pool | 2 × 2 | | 14 × 14 | |
| Convolutional | 5 × 5 | 30 | 14 × 14 | lrelu |
| Max pool | 2 × 2 | | 7 × 7 | |
| Convolutional | 3 × 3 | 40 | 7 × 7 | lrelu |
| Max pool | 2 × 2 | | 4 × 4 | |
| Flatten | | | 640 | |
| Fully-connected | | | 200 | tanh |
| Fully-connected | | | 100 | tanh |
| Fully-connected | | | 3 | tanh |
| Fully-connected | | | 100 | tanh |
| Fully-connected | | | 200 | tanh |
| Fully-connected | | | 640 | tanh |
| Reshape | | | 4 × 4 × 40 | |
| Deconvolutional | 3 × 3 | 40 | 4 × 4 | lrelu |
| Upsampling | 2 × 2 | | 7 × 7 | |
| Convolutional | 3 × 3 | 30 | 7 × 7 | lrelu |
| Upsampling | 2 × 2 | | 14 × 14 | |
| Convolutional | 5 × 5 | 20 | 14 × 14 | lrelu |
| Upsampling | 2 × 2 | | 28 × 28 | |
| Convolutional | 5 × 5 | 30 | 28 × 28 | lrelu |
| Upsampling | 2 × 2 | | 56 × 56 | |
| Convolutional | 5 × 5 | 40 | 56 × 56 | lrelu |
| Upsampling | 2 × 2 | | 128 × 128 | |
| Convolutional | 5 × 5 | 3 | 128 × 128 | lrelu |

**Table 6.** Shown are the execution times of the different manifold alignment methods when processing our data. The dataset sizes were $1296\times 6$, $1728\times 9$ and $20,736\times 11$ for the 2-, 3- and 4-manifold data, respectively. The training times of the PDAE were recorded for 10,000 epochs and averaged over five runs starting from different initial weights; standard deviations are in parentheses. The other methods did not involve randomness, and their execution times remained the same in repeat experiments.

| Methods | Detail | 2-Manifold Alignment | 3-Manifold Alignment | 4-Manifold Alignment |
|---|---|---|---|---|
| MAPA | feat | 0.63 s | 2.2 min | 6.6 min |
| | inst | 0.71 s | 4.2 min | 13.9 min |
| MALG | feat | 0.32 s | 1.6 min | 5.3 min |
| | inst | 0.44 s | 3.7 min | 11.7 min |
| MAGG | feat | 3 s | 9.5 min | 11 h |
| | inst | 8 s | 12.3 min | 13 h |
| PDAE | training | 18 (3.88) min | 22 (5.56) min | 9.3 (5.22) h |
| | inference | 0.33 (0.04) s | 0.61 (0.08) s | 1.6 (0.12) s |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Aziz, F.; Wong, A.S.W.; Chalup, S. Semi-Supervised Manifold Alignment Using Parallel Deep Autoencoders. *Algorithms* **2019**, *12*, 186.
https://doi.org/10.3390/a12090186
