Special Issue "Manifold Learning and Dimensionality Reduction"

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (30 November 2015).

Special Issue Editor

Guest Editor
Dr. Stephan Chalup
Associate Professor, School of Electrical Engineering and Computing, The University of Newcastle, NSW 2308, Australia
Phone: +61 2 492 16080
Interests: manifold learning and kernel methods; sequences and time series; reinforcement learning and neural information processing

Special Issue Information

Dear Colleagues,

While computers have become faster and memory more affordable, new algorithmic challenges have emerged from the desire to analyze large, high-dimensional datasets. A central question is how to trade off computational efficiency against precision in data analytics. This Special Issue addresses machine learning, pattern recognition, and data analysis techniques, and the applications related to dimensionality reduction. In this context, one question is whether techniques of non-linear dimensionality reduction can be made fast enough to process big datasets in reasonable time, possibly on low-powered devices. There is also the hope that the ability to process sufficiently large datasets will allow suitable manifold learning approaches to extract manifolds that would otherwise collapse. This Special Issue will consider applied, experimental, and theoretical work that can shed light on this topic, including related work on optimization or machine learning on manifolds.

Stephan Chalup
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • Big Data Analytics
  • Clustering
  • Deep Learning
  • Dimensionality Reduction
  • Kernel Machines
  • Large or Sequence Data Processing
  • Manifold Learning
  • Non-linear Pattern Analysis
  • Optimization on Manifolds

Published Papers (2 papers)

Research

Open Access Article
Robust Hessian Locally Linear Embedding Techniques for High-Dimensional Data
Algorithms 2016, 9(2), 36; https://doi.org/10.3390/a9020036 - 26 May 2016
Cited by 2
Abstract
Recently, manifold learning has received extensive interest in the pattern recognition community. Despite their appealing properties, most manifold learning algorithms are not robust in practical applications. In this paper, we address this problem in the context of the Hessian locally linear embedding (HLLE) algorithm and propose a more robust method, called RHLLE, which aims to be robust against both outliers and noise in the data. Specifically, we first propose a fast outlier detection method for high-dimensional datasets. Then, we employ a local smoothing method to reduce noise. Furthermore, we reformulate the original HLLE algorithm by using the truncation function from differentiable manifolds. In the reformulated framework, we explicitly introduce a weighted global functional to further reduce the undesirable effect of outliers and noise on the embedding result. Experiments on synthetic as well as real datasets demonstrate the effectiveness of our proposed algorithm.
(This article belongs to the Special Issue Manifold Learning and Dimensionality Reduction)
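The abstract above outlines a pipeline of fast outlier detection followed by local smoothing before embedding. The paper's own detector is not reproduced here; as a rough sketch of the kind of distance-based outlier filtering involved, a k-nearest-neighbour distance score can flag points far from their local neighbourhood (the function name, the choice of k, and the synthetic data are illustrative assumptions, not the authors' method):

```python
import numpy as np

def knn_outlier_scores(X, k=10):
    """Score each point by its mean distance to its k nearest neighbours.

    Points far from their local neighbourhood receive high scores, so
    thresholding these scores gives a simple distance-based outlier filter.
    """
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                       # exclude self-distances
    d2 = np.maximum(d2, 0.0)                           # guard against round-off
    knn_d2 = np.sort(d2, axis=1)[:, :k]                # k smallest per row
    return np.sqrt(knn_d2).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                # one clean Gaussian cluster
X = np.vstack([X, 20.0 * np.ones(5)])        # inject an obvious outlier
scores = knn_outlier_scores(X, k=10)
print(int(np.argmax(scores)))                # the injected point scores highest
```

Points whose score exceeds a chosen threshold would be discarded before the smoothing and embedding stages.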

Open Access Article
Alternating Direction Method of Multipliers for Generalized Low-Rank Tensor Recovery
Algorithms 2016, 9(2), 28; https://doi.org/10.3390/a9020028 - 19 Apr 2016
Cited by 4
Abstract
Low-Rank Tensor Recovery (LRTR), the higher-order generalization of Low-Rank Matrix Recovery (LRMR), is especially suitable for analyzing multi-linear data with gross corruptions, outliers and missing values, and it has attracted broad attention in the fields of computer vision, machine learning and data mining. This paper considers a generalized model of LRTR and attempts to recover simultaneously the low-rank, the sparse, and the small disturbance components from partial entries of a given data tensor. Specifically, we first describe generalized LRTR as a tensor nuclear norm optimization problem that minimizes a weighted combination of the tensor nuclear norm, the l1-norm and the Frobenius norm under linear constraints. Then, the technique of the Alternating Direction Method of Multipliers (ADMM) is employed to solve the proposed minimization problem. Next, we discuss the weak convergence of the proposed iterative algorithm. Finally, experimental results on synthetic and real-world datasets validate the efficiency and effectiveness of the proposed method.
(This article belongs to the Special Issue Manifold Learning and Dimensionality Reduction)
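The abstract describes minimizing a weighted combination of nuclear, l1, and Frobenius norms via ADMM. As a sketch of the same proximal-splitting pattern, the following shows the simpler matrix special case (low-rank plus sparse decomposition, i.e. robust PCA) rather than the paper's generalized tensor model; the function names, parameter choices, and synthetic data are illustrative assumptions:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(A, tau):
    """Soft thresholding: the proximal operator of the l1-norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def rpca_admm(M, lam=None, mu=1.0, n_iter=500):
    """Split M into low-rank L plus sparse S by ADMM.

    Solves  min ||L||_* + lam * ||S||_1  s.t.  L + S = M,
    alternating the two proximal steps with a dual ascent on the constraint.
    """
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)       # nuclear-norm proximal step
        S = shrink(M - L + Y / mu, lam / mu)    # l1 proximal step
        Y += mu * (M - L - S)                   # dual update on L + S = M
    return L, S

# Rank-1 matrix corrupted by a few large sparse entries
rng = np.random.default_rng(1)
low_rank = rng.normal(size=(50, 1)) @ rng.normal(size=(1, 50))
sparse = np.zeros((50, 50))
sparse.flat[rng.choice(2500, size=25, replace=False)] = 10.0
L, S = rpca_admm(low_rank + sparse)
print(np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))
```

The tensor formulation in the paper replaces the matrix SVD step with tensor nuclear-norm machinery and adds a Frobenius-norm term for the small disturbance component, but the alternating proximal structure is the same.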
