Open Access Article

Heterogeneous Distributed Big Data Clustering on Sparse Grids

Department of Simulation Software Engineering, University of Stuttgart, 70569 Stuttgart, Germany
* Authors to whom correspondence should be addressed.
Algorithms 2019, 12(3), 60; https://doi.org/10.3390/a12030060
Received: 31 January 2019 / Revised: 27 February 2019 / Accepted: 2 March 2019 / Published: 7 March 2019
Clustering is an important task in data mining that has become more challenging due to the ever-increasing size of available datasets. To cope with these big data scenarios, a high-performance clustering approach is required. Sparse grid clustering is a density-based clustering method that uses a sparse grid density estimation as its central building block. The underlying density estimation approach enables the detection of clusters with non-convex shapes and without a predetermined number of clusters. In this work, we introduce a new distributed and performance-portable variant of the sparse grid clustering algorithm that is suited for big data settings. Our compute kernels were implemented in OpenCL to enable portability across a wide range of architectures. For distributed environments, we added a manager–worker scheme that was implemented using MPI. In experiments on two supercomputers, Piz Daint and Hazel Hen, with up to 100 million data points in a ten-dimensional dataset, we show the performance and scalability of our approach. The dataset with 100 million data points was clustered in 1198 s using 128 nodes of Piz Daint. This translates to an overall performance of 352 TFLOPS. At the node level, we provide results for two GPUs, Nvidia's Tesla P100 and the AMD FirePro W8100, and one processor-based platform that uses Intel Xeon E5-2680v3 processors. In these experiments, we achieved between 43% and 66% of the peak performance across all compute kernels and devices, demonstrating the performance portability of our approach.
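The abstract keeps its two building blocks at a high level; both can be made concrete under stated assumptions. First, in the formulation common in the sparse grid literature (not spelled out in this abstract, so treat it as background rather than this paper's exact discretization), the density estimate is a linear combination of hierarchical basis functions,

$$\hat{f}(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i \, \varphi_i(\mathbf{x}),$$

where the coefficients $\alpha$ are obtained by solving a regularized linear system $(R + \lambda I)\,\alpha = b$ with $b_i = \frac{1}{M}\sum_{j=1}^{M} \varphi_i(\mathbf{x}_j)$ for $M$ data points, $R$ a matrix of basis-function inner products, and $\lambda$ a regularization parameter.

Second, the manager–worker distribution the abstract mentions follows a standard MPI pattern. The sketch below is a minimal, self-contained illustration of that pattern, not the paper's code: the message tags, the CHUNK size, and the placeholder per-package computation (a simple sum standing in for an OpenCL kernel launch) are all hypothetical.

    #include <mpi.h>
    #include <cstdio>

    // Hypothetical tags and work-package size, for illustration only.
    constexpr int TAG_WORK = 1;
    constexpr int TAG_RESULT = 2;
    constexpr int TAG_STOP = 3;
    constexpr int CHUNK = 4;          // items (e.g., grid points) per package

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const int total_items = 32;     // placeholder problem size

      if (rank == 0) {
        // Manager: hand out packages on demand, collect partial results.
        int next = 0, active = 0;
        // Prime every worker with an initial package.
        for (int w = 1; w < size && next < total_items; ++w, next += CHUNK, ++active)
          MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        while (active > 0) {
          double partial;
          MPI_Status st;
          MPI_Recv(&partial, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                   MPI_COMM_WORLD, &st);
          --active;
          if (next < total_items) {   // more work: reuse the idle worker
            MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
            next += CHUNK;
            ++active;
          }
        }
        for (int w = 1; w < size; ++w)  // tell all workers to shut down
          MPI_Send(nullptr, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
      } else {
        // Worker: receive a package, process it, return the result.
        while (true) {
          int offset;
          MPI_Status st;
          MPI_Recv(&offset, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
          if (st.MPI_TAG == TAG_STOP) break;
          double partial = 0.0;       // stand-in for a device (OpenCL) computation
          for (int i = offset; i < offset + CHUNK && i < total_items; ++i)
            partial += static_cast<double>(i);
          MPI_Send(&partial, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
      }

      MPI_Finalize();
      return 0;
    }

In this pattern the manager never computes; it answers each returned result with the next work package, which is what gives the scheme dynamic load balancing across nodes of differing speeds, a natural fit for the heterogeneous setting the paper targets.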
Keywords: clustering; machine learning; distributed computing; performance portability; GPGPU; OpenCL; peak performance
MDPI and ACS Style

Pfander, D.; Daiß, G.; Pflüger, D. Heterogeneous Distributed Big Data Clustering on Sparse Grids. Algorithms 2019, 12, 60.

