Feature Selection Meets Deep Learning

A special issue of Informatics (ISSN 2227-9709).

Deadline for manuscript submissions: closed (31 March 2019)

Special Issue Editor

Dr. Giorgio Roffo
Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK
Interests: machine learning; computer vision; machine perception of social behavior

Special Issue Information

Dear Colleagues,

Over the last few years, feature ranking and selection (FRS) has attracted considerable attention in computer vision and pattern recognition, across problems ranging from vision to language. FRS techniques play a central role in identifying the most relevant cues in huge amounts of otherwise uninformative data. However, with the advent of representation learning and deep learning, there has been a major shift in the way features, or representations, are designed: they are now learned directly from the data. As a result, conventional FRS strategies may not be the most suitable for deep neural networks (DNNs), and novel strategies should be explored for a more natural integration.

The primary focus of this Special Issue is on feature selection and deep learning, that is, the question of how deep learning models can be imbued with FRS strategies. FRS can help to regulate the elaborate learning process of DNNs by, for example, (i) learning which features are informative during training itself, (ii) reducing the significant redundancy in deep convolutional neural networks (CNNs) by pruning neurons, or (iii) dynamically adjusting the dropout rate to improve prediction performance.
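
To make point (ii) concrete, the following is a minimal sketch, in PyTorch, of pruning convolutional filters by a simple feature-ranking score (the L1 norm of each filter). The layer sizes, the keep ratio, and the choice of criterion are illustrative assumptions, not methods prescribed by this call:

    import torch
    import torch.nn as nn

    def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> None:
        """Zero out the least important filters of a Conv2d layer in place.

        Importance here is the L1 norm of each output filter -- one simple
        feature-ranking criterion among many possible choices.
        """
        with torch.no_grad():
            w = conv.weight                          # (out_ch, in_ch, kH, kW)
            scores = w.abs().flatten(1).sum(dim=1)   # L1 norm per filter
            n_keep = max(1, int(keep_ratio * scores.numel()))
            keep = torch.topk(scores, n_keep).indices
            mask = torch.zeros_like(scores, dtype=torch.bool)
            mask[keep] = True
            w[~mask] = 0.0                           # zero the weak filters
            if conv.bias is not None:
                conv.bias[~mask] = 0.0

    # Usage: prune half of the filters in a toy layer.
    layer = nn.Conv2d(3, 16, kernel_size=3)
    prune_conv_filters(layer, keep_ratio=0.5)

In practice such masks are usually applied iteratively, with fine-tuning in between, so the network can recover accuracy after each pruning step.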

This Special Issue calls for contributions that target the study and analysis of FRS strategies for deep learning models from both theoretical and application perspectives. Topics of interest include, but are not limited to, the following:

  • Pruning networks using feature selection strategies
  • Feature-selection-based dropout (see the sketch after this list)
  • Feature selection layers in CNNs
  • Relevancy and residual DNNs
  • Deep feature selection
  • Feature selection using DNNs

Please refer to the submission page for the submission guidelines.
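
The feature-selection-based dropout topic above can likewise be sketched in a few lines. Below, units are dropped with a probability that decreases with an externally supplied relevance score, rather than uniformly; the scoring scheme and the max_drop parameter are illustrative assumptions, not part of this call:

    import torch

    def relevance_dropout(x: torch.Tensor,
                          relevance: torch.Tensor,
                          max_drop: float = 0.5,
                          training: bool = True) -> torch.Tensor:
        """Drop each feature with probability max_drop * (1 - relevance).

        x:         (batch, features) activations
        relevance: (features,) scores in [0, 1], e.g. a normalised
                   mutual-information estimate per feature
        """
        if not training:
            return x
        p_drop = max_drop * (1.0 - relevance)             # per-feature rate
        keep = torch.bernoulli((1.0 - p_drop).expand(x.shape[0], -1))
        return x * keep / (1.0 - p_drop)                  # inverted dropout

    # Usage with random relevance scores for a toy batch.
    x = torch.randn(4, 8)
    rel = torch.rand(8)
    y = relevance_dropout(x, rel)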

Dr. Giorgio Roffo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Informatics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • feature selection
  • deep learning
  • representation learning
  • learning (artificial intelligence)
  • neural nets
  • feature extraction
  • space dimensionality reduction
  • sparsity
  • network pruning
  • dropout
  • filtering

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

14 pages, 2289 KiB  
Article
The Effect of Evidence Transfer on Latent Feature Relevance for Clustering
by Athanasios Davvetas, Iraklis A. Klampanos, Spiros Skiadopoulos and Vangelis Karkaletsis
Informatics 2019, 6(2), 17; https://doi.org/10.3390/informatics6020017 - 25 Apr 2019
Cited by 1
Abstract
Evidence transfer for clustering is a deep learning method that manipulates the latent representations of an autoencoder according to external categorical evidence, with the effect of improving the clustering outcome. Evidence transfer is designed to be robust when introduced to low-quality evidence, while increasing clustering accuracy when the corresponding evidence is relevant. We interpret the effects of evidence transfer on the latent representation of an autoencoder by comparing our method to the information bottleneck method. The information bottleneck is an optimisation problem of finding the best trade-off between maximising the mutual information between data representations and a task outcome while at the same time effectively compressing the original data source. We posit that the evidence transfer method has essentially the same objective regarding the latent representations produced by an autoencoder. We verify our hypothesis using information-theoretic metrics from feature selection in order to perform an empirical analysis of the information that is carried through the bottleneck of the latent space. We use the relevance metric to compare the overall mutual information between the latent representations and the ground-truth labels before and after their incremental manipulation, as well as to study the effects of evidence transfer on the significance of each latent feature.
(This article belongs to the Special Issue Feature Selection Meets Deep Learning)
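
For readers who want to reproduce this style of analysis, here is a minimal sketch of the per-feature relevance measurement, assuming scikit-learn and toy data in place of real autoencoder codes (this is not the authors' code):

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 10))    # stand-in for autoencoder codes
    labels = rng.integers(0, 3, size=500)  # stand-in ground-truth labels

    # Mutual information between each latent feature and the labels:
    # a per-feature relevance score that can be compared before and
    # after the latent representations are manipulated.
    relevance = mutual_info_classif(latent, labels, random_state=0)
    print(relevance)  # one score per latent dimension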
