Clustering Algorithms and Their Applications

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (15 July 2019) | Viewed by 11485

Special Issue Editor


Prof. Dr. Doug Steinley
Guest Editor
Department of Psychological Sciences, University of Missouri, Columbia, 210 McAlester Hall, Columbia, MO 65211, USA
Interests: cluster analysis; mixture modelling; exploratory data analysis

Special Issue Information

Dear Colleagues,

Clustering algorithms (i.e., unsupervised learning) have seen enormous growth in applied fields in recent years. Although they were introduced more than 50 years ago, it is only in roughly the last 10 years that these methods have seen widespread use among practitioners. This holds true for the ubiquitous K-means clustering algorithm, as well as for more recent approaches such as genetic algorithms, nature-inspired clustering algorithms, simulated annealing, and the like. Clustering algorithms have permeated standard software packages; this exposure, however, creates a reliance on “default” versions of these algorithms. Unfortunately, the defaults rarely perform well across all problem sets and can often lead to poor results.

As such, the aim of this Special Issue is to highlight recent applications where clustering algorithms have been either developed or modified to address specific research questions. The focus is on developments in both the theoretical and methodological domains, as well as on novel modifications for settings where so-called “standard” approaches are insufficient.

Prof. Dr. Doug Steinley
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

14 pages, 426 KiB  
Article
Laplacian Eigenmaps Dimensionality Reduction Based on Clustering-Adjusted Similarity
by Honghu Zhou and Jun Wang
Algorithms 2019, 12(10), 210; https://doi.org/10.3390/a12100210 - 4 Oct 2019
Cited by 3 | Viewed by 4316
Abstract
Euclidean distance between instances is widely used to capture the manifold structure of data and for graph-based dimensionality reduction. However, in some circumstances, the basic Euclidean distance cannot accurately capture the similarity between instances; some instances from different classes but close to the decision boundary may be close to each other, which may mislead graph-based dimensionality reduction and compromise its performance. To mitigate this issue, in this paper, we propose an approach called Laplacian Eigenmaps based on Clustering-Adjusted Similarity (LE-CAS). LE-CAS first performs clustering on all instances to explore the global structure and discrimination of instances, and quantifies the similarity between cluster centers. Then, it adjusts the similarity between pairwise instances by multiplying it by the similarity between the centers of the clusters to which these two instances respectively belong. In this way, if two instances are from different clusters, the similarity between them is reduced; otherwise, it is unchanged. Finally, LE-CAS performs graph-based dimensionality reduction (via Laplacian Eigenmaps) based on the adjusted similarity. We conducted comprehensive empirical studies on UCI datasets; the results show that LE-CAS not only performs better than other relevant competing methods, but is also more robust to input parameters.
(This article belongs to the Special Issue Clustering Algorithms and Their Applications)
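The adjustment step described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function name `le_cas_embedding`, the use of k-means for the clustering step, the RBF kernel as the similarity measure, and the unnormalized graph Laplacian are all choices made here for concreteness.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def le_cas_embedding(X, n_clusters=3, n_components=2, gamma=1.0):
    """Sketch of Laplacian Eigenmaps with a clustering-adjusted similarity."""
    # Step 1: cluster all instances to expose the global structure
    # (k-means is an assumption; the paper does not fix the clusterer here).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    labels, centers = km.labels_, km.cluster_centers_

    # Step 2: similarity between instances and between cluster centers.
    S = rbf_kernel(X, gamma=gamma)        # pairwise instance similarities
    C = rbf_kernel(centers, gamma=gamma)  # center similarities, diag = 1

    # Step 3: multiply each pairwise similarity by the similarity of the
    # centers of the clusters the two instances belong to.  For instances
    # in the same cluster C[c, c] = 1, so their similarity is unchanged;
    # cross-cluster similarities are shrunk.
    A = S * C[np.ix_(labels, labels)]

    # Step 4: standard Laplacian Eigenmaps on the adjusted graph, using
    # the unnormalized Laplacian L = D - A as a simplification.
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector (smallest eigenvalue ~ 0).
    return eigvecs[:, 1:n_components + 1]
```

The key property is in Step 3: the adjustment is purely multiplicative, so it can only reduce similarities across clusters, never alter those within a cluster.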

15 pages, 397 KiB  
Article
Simple K-Medoids Partitioning Algorithm for Mixed Variable Data
by Weksi Budiaji and Friedrich Leisch
Algorithms 2019, 12(9), 177; https://doi.org/10.3390/a12090177 - 24 Aug 2019
Cited by 36 | Viewed by 6386
Abstract
A simple and fast k-medoids algorithm that updates medoids by minimizing the total distance within clusters has been developed. Although it is simple and fast, as its name suggests, it nonetheless neglects the local optima and empty clusters that may arise. With the distance as an input to the algorithm, a generalized distance function is developed to increase the variation of the distances, especially for mixed variable datasets. The variation of the distances is a crucial part of a partitioning algorithm, as different distances produce different outcomes. In experiments, the simple k-medoids algorithm performs consistently well in various settings of mixed variable data. It also achieves higher cluster accuracy than other distance-based partitioning algorithms for mixed variable data.
(This article belongs to the Special Issue Clustering Algorithms and Their Applications)
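The medoid-update idea in the abstract can be sketched as below, paired with a Gower-style distance as an illustrative stand-in for the paper's generalized distance function. The function names, the maximin initialisation, and the specific mixed-variable distance are assumptions made here, not the authors' code.

```python
import numpy as np

def gower_distance(X_num, X_cat):
    """Gower-style mixed-variable distance: range-scaled absolute
    differences on numeric columns, simple matching on categoricals."""
    rng = X_num.max(axis=0) - X_num.min(axis=0)
    rng[rng == 0] = 1.0  # guard against constant columns
    D_num = np.abs(X_num[:, None, :] - X_num[None, :, :]) / rng
    D_cat = (X_cat[:, None, :] != X_cat[None, :, :]).astype(float)
    # Average the per-variable distances across all variables.
    return np.concatenate([D_num, D_cat], axis=2).mean(axis=2)

def simple_kmedoids(D, k, n_iter=100):
    """Simple k-medoids on a precomputed distance matrix D: assign each
    point to its nearest medoid, then move each medoid to the cluster
    member minimizing the total within-cluster distance."""
    # Deterministic maximin initialisation (an assumption): start from
    # the most central point, then add the point farthest from the
    # medoids chosen so far.
    medoids = [int(np.argmin(D.sum(axis=1)))]
    while len(medoids) < k:
        medoids.append(int(np.argmax(D[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:  # skip (rather than repair) empty clusters
                sub = D[np.ix_(members, members)]
                new_medoids[c] = members[np.argmin(sub.sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break  # medoids stable: converged
        medoids = new_medoids
    return labels, medoids
```

Because the algorithm takes the distance matrix itself as input, swapping in a different mixed-variable distance only requires changing `gower_distance`, which is the role the paper's generalized distance function plays.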
