Special Issue "Theory and Applications of Information Theoretic Machine Learning"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 29 February 2020.

Special Issue Editors

Assist. Prof. Sotiris Kotsiantis
Guest Editor
Department of Mathematics, University of Patras, Greece
Interests: machine learning; data mining; computational intelligence; learning analytics
Assoc. Prof. Dimitris Kalles
Guest Editor
School of Science and Technology, Hellenic Open University, Greece
Interests: machine learning; artificial intelligence; educational intelligence; educational technology
Assoc. Prof. Christos Makris
Guest Editor
Department of Computer Engineering and Informatics, University of Patras, Greece
Interests: data structures; information retrieval; data mining; bioinformatics; string algorithms; computational geometry; multimedia data bases; internet technologies

Special Issue Information

Dear Colleagues,

At present, industry and academia alike are looking for ways to apply the principles of data science and data analytics to a wide range of difficult problems. Machine learning and data analytics principles, methods, and techniques can help address new problems and uncover improved solutions. This Special Issue aims to bring together applications of machine learning across interdisciplinary domains such as data mining, data analytics, and data science, covering a wide landscape of methods, methodologies, and techniques. The aims of this Special Issue are: (1) to present state-of-the-art research on data mining and machine learning; and (2) to provide a forum for researchers to discuss the latest progress, new research methodologies, and potential research topics. All submissions should explain the role of entropy or information-theoretic methods in the work presented.

Topics of interest include, but are not limited to:

  • classification, regression, and prediction
  • clustering
  • kernel methods
  • data mining and web mining
  • information retrieval
  • natural language processing
  • deep learning
  • probabilistic models and methods
  • vision and speech perception
  • bioinformatics
  • streaming data
  • industrial, financial, and educational applications

Papers will be evaluated on their originality, presentation, relevance, and contribution, as well as on the quality of both their technical content and their writing. Submitted papers must be written in English and describe original research that has not been published and is not currently under review by another journal or conference.

Assist. Prof. Sotiris Kotsiantis
Assoc. Prof. Dimitris Kalles
Assoc. Prof. Christos Makris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Keywords

  • machine learning
  • data mining
  • computational intelligence
  • learning analytics
  • artificial intelligence
  • educational intelligence
  • educational technology
  • information retrieval
  • bioinformatics

Published Papers (1 paper)

Open Access Article
Combination of Active Learning and Semi-Supervised Learning under a Self-Training Scheme
Entropy 2019, 21(10), 988; https://doi.org/10.3390/e21100988 - 10 Oct 2019
One of the major factors affecting the performance of classification algorithms is the amount of labeled data available during the training phase. It is widely accepted that labeling vast amounts of data is both expensive and time-consuming, since it requires human expertise. In many scientific fields, unlabeled examples are easy to collect but hard to exploit in a way that improves the information available for a given dataset. In this context, a variety of learning methods have been studied in the literature that aim to efficiently utilize large amounts of unlabeled data during the learning process. The most common approaches tackle such problems by applying either active learning or semi-supervised learning methods individually. In this work, a combination of active learning and semi-supervised learning methods is proposed, under a common self-training scheme, in order to efficiently utilize the available unlabeled data. The entropy and the distribution of class probabilities over the unlabeled set are used as effective and robust metrics for selecting the most suitable unlabeled examples with which to augment the initial labeled set. The superiority of the proposed scheme is validated by comparing it against the base approaches of supervised, semi-supervised, and active learning on a wide range of fifty-five benchmark datasets.
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
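The entropy-based selection described in the abstract above can be illustrated with a minimal sketch. This is not the authors' code; the function names, quantile parameters, and split rule are illustrative assumptions. The idea: score each unlabeled example by the Shannon entropy of the classifier's predicted class probabilities, pseudo-label the lowest-entropy (most confident) examples, and route the highest-entropy (most uncertain) examples to a human annotator, as in active learning.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in nats) of one class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def split_unlabeled(all_probs, confident_q=0.2, uncertain_q=0.2):
    """Hypothetical selection rule: return indices of the lowest-entropy
    examples (candidates for pseudo-labeling in self-training) and the
    highest-entropy examples (candidates to query an annotator).
    confident_q / uncertain_q are assumed fractions of the unlabeled pool."""
    h = [prediction_entropy(p) for p in all_probs]
    order = sorted(range(len(h)), key=h.__getitem__)  # ascending entropy
    k_conf = max(1, int(confident_q * len(h)))
    k_unc = max(1, int(uncertain_q * len(h)))
    return order[:k_conf], order[-k_unc:]

# Toy class-probability estimates for five unlabeled examples:
probs = [[0.98, 0.02], [0.55, 0.45], [0.90, 0.10], [0.50, 0.50], [0.99, 0.01]]
confident, uncertain = split_unlabeled(probs)
# The [0.99, 0.01] example has the lowest entropy (pseudo-label it);
# the [0.50, 0.50] example has the highest entropy (query the annotator).
```

In a full self-training loop, the pseudo-labeled and newly annotated examples would be moved into the labeled set and the classifier retrained, repeating until the unlabeled pool is exhausted or a stopping criterion is met.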