Reusing and Distilling Knowledge in Deep Neural Networks

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: closed (25 September 2023)

Special Issue Editor


Dr. Dimitris Kastaniotis
Guest Editor
Department of Physics, University of Patras, 26504 Rion, Greece
Interests: pattern recognition; computer vision; machine learning; feature and representation learning; learning from sequences; activity recognition

Special Issue Information

Dear Colleagues,

Deep neural network (DNN) models demonstrate excellent performance when learning efficient representations from large datasets. They achieve this through end-to-end learning, in which the mapping between the input space (data) and the output is learned with respect to a defined objective. In this manner, DNNs learn feature mappings that encode the information contained in the data so as to optimize that objective. However, for real-world and open-world datasets, this supervised learning approach cannot always be formulated, because the required supervision signals are absent (or are expensive to produce). Moreover, even when supervisory signals are available, DNNs learn at the cost of large data collections and many training iterations. Finally, these models are prone to catastrophic forgetting and must be retrained from scratch, which is also necessary whenever the supervision signals are modified.

Over the last few years, there has been an emerging trend of developing methods that overcome these limitations by means of self-supervised knowledge transfer and distillation, incremental learning, and unsupervised representation learning.

This Special Issue will feature recent methods that extract information, embodied in collections of raw data from one or more modalities and encoded in one or more deep neural network models, in order to learn efficient feature representations.

Topics include, but are not limited to:

- Knowledge distillation with novel student–teacher schemes (see the sketch after this list);
- Self-supervised sequence learning (video, audio, and natural language);
- Unsupervised or self-supervised learning methods;
- Knowledge transfer between modalities (vision, audio and NLP);
- Self-supervision with multiple modalities.
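As a concrete reference point for the first topic, the following is a minimal sketch of the classic soft-label knowledge distillation objective in a student–teacher scheme, written against PyTorch. The function name, temperature T, and weighting alpha are illustrative assumptions rather than a prescribed implementation.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Hard-label term: standard supervised cross-entropy on ground-truth labels.
        ce = F.cross_entropy(student_logits, labels)
        # Soft-label term: KL divergence between temperature-softened teacher and
        # student distributions, scaled by T^2 to keep gradient magnitudes comparable.
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        return alpha * ce + (1.0 - alpha) * kd

In a typical setup, the teacher's logits would be computed under torch.no_grad() and only the student's parameters would be updated with this loss.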

Dr. Dimitris Kastaniotis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge distillation
  • teacher–student architectures
  • self-supervised learning

Published Papers

There are no accepted submissions to this Special Issue at this moment.