Special Issue "Information Transfer in Multilayer/Deep Architectures"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: 15 July 2020.

Special Issue Editors

Prof. Vincent Vigneron
Guest Editor
Informatique Biologie Intégrative et Systèmes Complexes, Université d'Évry-Val-d'Essonne, Évry, France
Interests: machine learning; blind source separation; image processing
Prof. Hichem Maaref
Guest Editor
Informatique Biologie Intégrative et Systèmes Complexes, Université d'Évry-Val-d'Essonne, Évry, France
Interests: machine learning; image processing; robotics

Special Issue Information

Dear Colleagues,

The renewal of research interest in machine learning came with the emergence of the concept of big data during the late 2000s.
Schematically, families of deep learning networks (DLNs) emerged with industrial ambitions, taking advantage of the development of graphics processing units (GPUs) to build prediction models from massive amounts of collected and stored data with substantial computing resources. It is illusory to hope to train a deep network involving millions of parameters without very large databases. We tend to think that more data lead to more information.
Moreover, the core of learning is above all a problem of data representation, though not in the ‘data compression’ sense. For instance, in a DLN, a single representation (the input layer) is replaced by a cascade of many representations (the hidden layers), which implies an increase in information (entropy). However, some questions remain:
How does information spread in these inflationary networks? Is the information transformation conservative through the DLN? Can information theory quantify the learning capacity of these networks? How do generative models convert information from the observed space to the hidden space?
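As a toy illustration of the first of these questions, one can track a plug-in (histogram) entropy estimate of the activations across the layers of a small, randomly initialized perceptron. The network, the binning scheme, and all names below are illustrative assumptions, not part of this call:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(x, bins=32):
    """Plug-in entropy estimate (in bits) of a 1-D sample via histogram binning."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def layer(x, w):
    # tanh layer of a toy multi-layer perceptron (weights are random, untrained)
    return np.tanh(x @ w)

x = rng.normal(size=(10_000, 64))            # synthetic "data"
w1 = rng.normal(scale=0.5, size=(64, 128))   # expanding hidden layer
w2 = rng.normal(scale=0.5, size=(128, 32))   # contracting hidden layer

h1 = layer(x, w1)
h2 = layer(h1, w2)

# Average per-unit entropy of each representation.
for name, h in [("input", x), ("hidden 1", h1), ("hidden 2", h2)]:
    bits = np.mean([entropy_bits(h[:, j]) for j in range(h.shape[1])])
    print(f"{name:>8}: ~{bits:.2f} bits/unit over {h.shape[1]} units")
```

Histogram estimators are of course crude in high dimension; contributions to this issue would presumably rely on better-founded estimators of entropy and mutual information between layers.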

Foreseen contributions include the following:

- high-dimensional feature selection and pattern correlations
- information entropy in large data representations
- information gain in decision trees (see the sketch after this list)
- between-layer dependencies
- auto-encoding
- network capacity and information loss
- etc.
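As a concrete instance of the information-gain item above, the gain used to select splits in a decision tree is simply the reduction in Shannon entropy of the class labels produced by the split. A minimal sketch, in which the function names and toy data are illustrative:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, split_mask):
    """Entropy reduction from splitting `labels` with a boolean mask."""
    n = len(labels)
    left, right = labels[split_mask], labels[~split_mask]
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - weighted

# A perfectly informative binary split removes all label uncertainty:
y = np.array([0, 0, 0, 1, 1, 1])
mask = np.array([True, True, True, False, False, False])
print(information_gain(y, mask))  # 1.0 bit
```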

This Special Issue aims to collect responses to these questions from both theoretical and applied points of view.

Prof. Vincent Vigneron
Prof. Hichem Maaref
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)


Research

Open Access Article
Emergence of Network Motifs in Deep Neural Networks
Entropy 2020, 22(2), 204; https://doi.org/10.3390/e22020204 - 11 Feb 2020
Abstract
Network science can offer fundamental insights into the structural and functional properties of complex systems. For example, it is widely known that neuronal circuits tend to organize into basic functional topological modules, called network motifs. In this article, we show that network science tools can be successfully applied also to the study of artificial neural networks operating according to self-organizing (learning) principles. In particular, we study the emergence of network motifs in multi-layer perceptrons, whose initial connectivity is defined as a stack of fully-connected, bipartite graphs. Simulations show that the final network topology is shaped by learning dynamics, but can be strongly biased by choosing appropriate weight initialization schemes. Overall, our results suggest that non-trivial initialization strategies can make learning more effective by promoting the development of useful network motifs, which are often surprisingly consistent with those observed in general transduction networks.
(This article belongs to the Special Issue Information Transfer in Multilayer/Deep Architectures)