Ensemble Algorithms and Their Applications

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (15 February 2020) | Viewed by 76077

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Prof. Dr. Panagiotis E. Pintelas
Department of Mathematics, University of Patras, GR 265-00 Patras, Greece
Interests: software engineering; AI in education; intelligent systems; decision support systems; machine learning; data mining; knowledge discovery

Co-Guest Editor
Dr. Ioannis E. Livieris
Department of Mathematics, University of Patras, GR 265-00 Patras, Greece
Interests: artificial intelligence; machine learning; neural networks; deep learning; optimization algorithms

Special Issue Information

Dear Colleagues,

We invite you to submit your latest research on the development of ensemble algorithms to this Special Issue, “Ensemble Algorithms and Their Applications”. Over the last few decades, the development of ensemble methods has attracted great attention from the machine learning and data mining communities. Machine learning ensemble methods combine multiple learning algorithms (classifiers) to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been shown, both theoretically and experimentally, to provide significantly better performance than single base learners. As a result, many ensemble learning algorithms have been proposed in the literature and have found application in various real-world problems, ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
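As a minimal illustration of this idea, a soft-voting ensemble that combines several heterogeneous base classifiers might look as follows. This is only a sketch using scikit-learn; the choice of dataset and estimators is illustrative and not tied to any particular submission.

```python
# Minimal soft-voting ensemble sketch with scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three heterogeneous base learners; soft voting averages their class probabilities,
# so individual errors partially cancel out.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=5000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))
```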

The aim of this Special Issue is to present recent advances in all kinds of ensemble learning algorithms and methodologies and to investigate the impact of their application on a diversity of real-world problems.

Prof. Dr. Panagiotis E. Pintelas
Dr. Ioannis E. Livieris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Implementations of ensemble learning algorithms
  • Theoretical framework for ensemble methods
  • Ensemble learning methodologies dealing with imbalanced data
  • Ensemble methods in clustering
  • Subsampling and feature selection in multiple model machine learning
  • Homogeneous and heterogeneous ensembles
  • Black, white and gray box models
  • Distributed ensemble learning algorithms
  • Hybrid methods in prediction and classification
  • Evolving, incremental and online ensemble learning
  • Multi-objective ensemble learning
  • Ensemble methods in agent and multi-agent systems
  • Applications of ensemble methods in business, engineering, medicine, etc.
  • Comparisons between ensemble deep learners and single deep learners
  • Deep ensemble for feature selection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Editorial


4 pages, 167 KiB  
Editorial
Special Issue on Ensemble Learning and Applications
by Panagiotis Pintelas and Ioannis E. Livieris
Algorithms 2020, 13(6), 140; https://doi.org/10.3390/a13060140 - 11 Jun 2020
Cited by 57 | Viewed by 6070
Abstract
During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant and state-of-the-art approach for obtaining maximum performance; thus, they have been applied in a variety of real-world problems ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)

Research


21 pages, 433 KiB  
Article
Ensemble Deep Learning Models for Forecasting Cryptocurrency Time-Series
by Ioannis E. Livieris, Emmanuel Pintelas, Stavros Stavroyiannis and Panagiotis Pintelas
Algorithms 2020, 13(5), 121; https://doi.org/10.3390/a13050121 - 10 May 2020
Cited by 89 | Viewed by 12631
Abstract
Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies (ensemble-averaging, bagging and stacking) with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, composed of combinations of long short-term memory (LSTM), bi-directional LSTM and convolutional layers. The ensemble models were evaluated on predicting the cryptocurrency price for the following hour (regression) and on predicting whether the price in the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions were evaluated by examining the autocorrelation of the errors. Our detailed experimental analysis indicates that ensemble learning and deep learning can efficiently complement each other in developing strong, stable, and reliable forecasting models. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
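As a concrete starting point, the ensemble-averaging strategy mentioned in the abstract can be sketched with small Keras LSTM regressors on a windowed series. The synthetic series, window length, model sizes, and training settings below are illustrative assumptions, not the authors' configuration.

```python
# Ensemble-averaging of LSTM regressors on a univariate series (illustrative sketch).
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=24):
    # Slide a fixed-length window over the series; predict the next value.
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y  # shape: (samples, lookback, 1)

def build_lstm(lookback=24, seed=0):
    tf.random.set_seed(seed)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(lookback, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

series = np.cumsum(np.random.randn(2000)).astype("float32")  # stand-in for hourly prices
X, y = make_windows(series)
split = int(0.8 * len(X))

# Train several differently initialized learners and average their forecasts.
members = [build_lstm(seed=s) for s in range(3)]
for m in members:
    m.fit(X[:split], y[:split], epochs=2, batch_size=64, verbose=0)

preds = np.mean([m.predict(X[split:], verbose=0).ravel() for m in members], axis=0)
print("Ensemble MSE:", float(np.mean((preds - y[split:]) ** 2)))
```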

14 pages, 427 KiB  
Article
Ensemble Deep Learning for Multilabel Binary Classification of User-Generated Content
by Giannis Haralabopoulos, Ioannis Anagnostopoulos and Derek McAuley
Algorithms 2020, 13(4), 83; https://doi.org/10.3390/a13040083 - 1 Apr 2020
Cited by 51 | Viewed by 6871
Abstract
Sentiment analysis usually refers to the analysis of human-generated content via a polarity filter. Affective computing deals with the exact emotions conveyed through information. Emotional information most frequently cannot be accurately described by a single emotion class. Multilabel classifiers can categorize human-generated content in multiple emotional classes. Ensemble learning can improve the statistical, computational and representation aspects of such classifiers. We present a baseline stacked ensemble and propose a weighted ensemble. Our proposed weighted ensemble can use multiple classifiers to improve classification results without hyperparameter tuning or data overfitting. We evaluate our ensemble models with two datasets. The first dataset is from SemEval-2018 Task 1 and contains almost 7000 Tweets, labeled with 11 sentiment classes. The second dataset is the Toxic Comment Dataset with more than 150,000 comments, labeled with six different levels of abuse or harassment. Our results suggest that ensemble learning improves classification results by 1.5% to 5.4%. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
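A weighted combination of multilabel classifiers can be sketched as below: each model contributes its per-label probabilities, scaled by a weight derived on a validation set. The validation micro-F1 weighting is an illustrative assumption, not necessarily the weighting proposed in the paper.

```python
# Weighted averaging of multilabel probability outputs (illustrative sketch).
import numpy as np
from sklearn.metrics import f1_score

def weighted_multilabel_ensemble(prob_list, val_prob_list, y_val, threshold=0.5):
    """prob_list: per-model (n_samples, n_labels) sigmoid outputs on the test set;
    val_prob_list / y_val: the same on a validation set, used to derive weights."""
    # Weight each model by its micro-F1 on the validation set (one simple choice).
    weights = np.array([
        f1_score(y_val, (p >= threshold).astype(int), average="micro")
        for p in val_prob_list
    ])
    weights = weights / weights.sum()
    combined = np.tensordot(weights, np.stack(prob_list), axes=1)  # (n_samples, n_labels)
    return (combined >= threshold).astype(int)
```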

24 pages, 963 KiB  
Article
Ensemble Learning of Hybrid Acoustic Features for Speech Emotion Recognition
by Kudakwashe Zvarevashe and Oludayo Olugbara
Algorithms 2020, 13(3), 70; https://doi.org/10.3390/a13030070 - 22 Mar 2020
Cited by 57 | Viewed by 7492
Abstract
Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot towards the full realization of a smart society. The methods of signal processing and machine learning are widely applied to recognize human emotions based on features extracted from facial images, video files or speech signals. However, these features have not been able to recognize the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features for improving the task of emotion recognition. Experiments were performed to test the effectiveness of the proposed features extracted from speech files of two public databases and used to train five popular ensemble learning algorithms. Results show that random decision forest ensemble learning of the proposed hybrid acoustic features is highly effective for speech emotion recognition. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
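The general recipe of agglutinating spectral and prosodic descriptors into one feature vector and feeding it to a random forest can be sketched as follows. The specific librosa features, summary statistics, and forest settings are illustrative assumptions, not the paper's exact feature set.

```python
# Hybrid acoustic features (spectral + prosodic) for a random-forest emotion classifier.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def hybrid_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # spectral envelope
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)    # spectral brightness
    rms = librosa.feature.rms(y=y)                               # energy (prosodic)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)                # pitch contour (prosodic)
    # Summarize time-varying descriptors into one fixed-length vector per utterance.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        np.array([centroid.mean(), rms.mean(), np.nanmean(f0), np.nanstd(f0)]),
    ])

# wav_paths and labels would come from a labeled speech-emotion corpus (hypothetical here).
# X = np.stack([hybrid_features(p) for p in wav_paths])
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)
```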

25 pages, 1744 KiB  
Article
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspectral Image Analysis and Classification
by Konstantinos Demertzis and Lazaros Iliadis
Algorithms 2020, 13(3), 61; https://doi.org/10.3390/a13030061 - 7 Mar 2020
Cited by 23 | Viewed by 6241
Abstract
Deep learning architectures are the most effective methods for analyzing and classifying Ultra-Spectral Images (USI). However, effective training of a Deep Learning (DL) gradient classifier aiming to achieve high classification accuracy is extremely costly and time-consuming. It requires huge datasets with hundreds or thousands of labeled specimens from expert scientists. This research exploits the MAML++ algorithm in order to introduce the Model-Agnostic Meta-Ensemble Zero-shot Learning (MAME-ZsL) approach. MAME-ZsL overcomes the above difficulties, and it can be used as a powerful model to perform Hyperspectral Image Analysis (HIA). It is a novel optimization-based Meta-Ensemble Learning architecture, following a Zero-shot Learning (ZsL) prototype. To the best of our knowledge, it is introduced to the literature for the first time. It facilitates learning of specialized techniques for the extraction of user-mediated representations in complex Deep Learning architectures. Moreover, it leverages the use of first- and second-order derivatives as pre-training methods. It enhances learning of features which do not cause issues of exploding or vanishing gradients; thus, it avoids potential overfitting. Furthermore, it significantly reduces computational cost and training time, and it offers improved training stability, high generalization performance and remarkable classification accuracy. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
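MAML++ and MAME-ZsL are considerably more involved, but the optimization-based meta-learning they build on can be sketched as a first-order MAML-style inner/outer loop. Everything below (the model, task batches, learning rates, and first-order approximation) is an illustrative assumption in PyTorch, not the authors' implementation.

```python
# Minimal first-order MAML-style meta-update (illustrative sketch, not MAML++/MAME-ZsL).
import copy
import torch
import torch.nn as nn

def meta_step(model, meta_opt, tasks, inner_lr=0.01, inner_steps=1):
    """tasks: iterable of (x_support, y_support, x_query, y_query) tensors."""
    loss_fn = nn.CrossEntropyLoss()
    meta_opt.zero_grad()
    for xs, ys, xq, yq in tasks:
        learner = copy.deepcopy(model)                     # task-specific fast weights
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                       # adapt on the support set
            inner_opt.zero_grad()
            loss_fn(learner(xs), ys).backward()
            inner_opt.step()
        query_loss = loss_fn(learner(xq), yq)              # evaluate the adapted weights
        grads = torch.autograd.grad(query_loss, learner.parameters())
        # First-order approximation: apply the adapted model's gradients to the meta-model.
        for p, g in zip(model.parameters(), grads):
            p.grad = g.clone() if p.grad is None else p.grad + g
    meta_opt.step()
```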

19 pages, 841 KiB  
Article
A Soft-Voting Ensemble Based Co-Training Scheme Using Static Selection for Binary Classification Problems
by Stamatis Karlos, Georgios Kostopoulos and Sotiris Kotsiantis
Algorithms 2020, 13(1), 26; https://doi.org/10.3390/a13010026 - 16 Jan 2020
Cited by 49 | Viewed by 7351
Abstract
In recent years, a forward-looking subfield of machine learning has emerged with important applications in a variety of scientific fields. Semi-supervised learning is increasingly being recognized as a burgeoning area embracing a plethora of efficient methods and algorithms that seek to exploit a small pool of labeled examples together with a large pool of unlabeled ones in the most efficient way. Co-training is a representative semi-supervised classification algorithm originally based on the assumption that each example can be described by two distinct feature sets, usually referred to as views. Since such an assumption can hardly be met in real-world problems, several variants of the co-training algorithm have been proposed to deal with the absence or existence of a natural two-view feature split. In this context, a Static Selection Ensemble-based co-training scheme operating under a random feature split strategy is outlined for binary classification problems, where the base learner is a soft-voting ensemble composed of two participants. Ensemble methods are commonly used to boost the predictive performance of learning models by using a set of different classifiers, while the Static Ensemble Selection approach seeks to find the most suitable structure of the ensemble classifier, based on a specific criterion, from a pool of candidate classifiers. The efficacy of the proposed scheme is verified through several experiments on a plethora of benchmark datasets, as statistically confirmed by the Friedman Aligned Ranks non-parametric test over classification accuracy, F1-score, and Area Under Curve metrics. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
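Co-training under a random feature split, with a two-member soft-voting ensemble as the base learner for each view, can be sketched as below. The confidence-based selection of pseudo-labels, the number of rounds, and the particular estimators are illustrative assumptions, not the scheme's exact configuration.

```python
# Co-training under a random feature split, with soft-voting base learners (sketch).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def make_view_learner():
    # A soft-voting ensemble of two participants, as in the scheme described above.
    return VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=2000)), ("nb", GaussianNB())],
        voting="soft",
    )

def co_train(X_lab, y_lab, X_unlab, rounds=5, per_round=10, seed=0):
    rng = np.random.default_rng(seed)
    views = np.array_split(rng.permutation(X_lab.shape[1]), 2)   # random feature split
    X_l, y_l, X_u = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    learners = [make_view_learner(), make_view_learner()]
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        for i, view in enumerate(views):                          # refit each view's learner
            learners[i].fit(X_l[:, view], y_l)
        for i, view in enumerate(views):                          # teach with confident labels
            proba = learners[i].predict_proba(X_u[:, view])
            confident = np.argsort(proba.max(axis=1))[-per_round:]
            X_l = np.vstack([X_l, X_u[confident]])
            y_l = np.concatenate(
                [y_l, learners[i].classes_[proba[confident].argmax(axis=1)]])
            X_u = np.delete(X_u, confident, axis=0)
    return learners, views
```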

17 pages, 362 KiB  
Article
A Grey-Box Ensemble Model Exploiting Black-Box Accuracy and White-Box Intrinsic Interpretability
by Emmanuel Pintelas, Ioannis E. Livieris and Panagiotis Pintelas
Algorithms 2020, 13(1), 17; https://doi.org/10.3390/a13010017 - 5 Jan 2020
Cited by 93 | Viewed by 15215
Abstract
Machine learning has emerged as a key factor in many technological and scientific advances and applications. Much research has been devoted to developing high-performance machine learning models, which are able to make very accurate predictions and decisions on a wide range of applications. Nevertheless, we still seek to understand and explain how these models work and make decisions. Explainability and interpretability in machine learning is a significant issue, since in most real-world problems it is considered essential to understand and explain the model’s prediction mechanism in order to trust it and make decisions on critical issues. In this study, we developed a Grey-Box model based on a semi-supervised methodology utilizing a self-training framework. The main objective of this work is the development of a machine learning model that is both interpretable and accurate, although this is a complex and challenging task. The proposed model was evaluated on a variety of real-world datasets from the crucial application domains of education, finance and medicine. Our results demonstrate the efficiency of the proposed model, which performs comparably to a Black-Box model and considerably outperforms single White-Box models, while remaining as interpretable as a White-Box model. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
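The self-training idea behind such a grey-box model can be sketched as follows: a black-box teacher pseudo-labels the unlabeled data, and an interpretable white-box student is then fit on the enlarged training set. The specific estimators and confidence threshold below are illustrative assumptions, not the paper's exact configuration.

```python
# Grey-box self-training sketch: a black-box teacher pseudo-labels unlabeled data,
# and an interpretable white-box student is trained on the enlarged labeled set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

def grey_box(X_lab, y_lab, X_unlab, confidence=0.9):
    teacher = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_lab, y_lab)
    proba = teacher.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= confidence            # keep only confident pseudo-labels
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, teacher.classes_[proba[keep].argmax(axis=1)]])
    # The deployed model is the white-box student, which stays fully interpretable.
    student = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_aug, y_aug)
    return student
```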

27 pages, 3064 KiB  
Article
Exploring an Ensemble of Methods that Combines Fuzzy Cognitive Maps and Neural Networks in Solving the Time Series Prediction Problem of Gas Consumption in Greece
by Konstantinos I. Papageorgiou, Katarzyna Poczeta, Elpiniki Papageorgiou, Vassilis C. Gerogiannis and George Stamoulis
Algorithms 2019, 12(11), 235; https://doi.org/10.3390/a12110235 - 6 Nov 2019
Cited by 21 | Viewed by 5697
Abstract
This paper introduced a new ensemble learning approach, based on evolutionary fuzzy cognitive maps (FCMs), artificial neural networks (ANNs), and their hybrid structure (FCM-ANN), for time series prediction. The main aim of time series forecasting is to obtain reasonably accurate forecasts of future data from analyzing records of past data. In the paper, we proposed an ensemble-based forecast combination methodology as an alternative approach to forecasting methods for time series prediction. The ensemble learning technique combines various learning algorithms, including SOGA (structure optimization genetic algorithm)-based FCMs, RCGA (real coded genetic algorithm)-based FCMs, efficient and adaptive ANN architectures, and a hybrid FCM-ANN structure recently proposed for time series forecasting. All ensemble algorithms execute according to the one-step prediction regime. This particular forecast combination approach was selected due to the advanced features of each ensemble component, and the findings of this work evince the effectiveness of the approach, in terms of prediction accuracy, when compared against other well-known, independent forecasting approaches, such as ANNs or FCMs, as well as the long short-term memory (LSTM) algorithm. The suggested ensemble learning approach was applied to three distribution points that compose the natural gas grid of a Greek region. For the evaluation of the proposed approach, a real time-series dataset for natural gas prediction was used. We also provide a detailed discussion of the performance of the individual predictors, the ensemble predictors, and their combination through two well-known ensemble methods (the average and the error-based) that are characterized in the literature as particularly accurate and effective. The prediction results show the efficacy of the proposed ensemble learning approach, and the comparative analysis provides enough evidence that the approach could be used effectively to conduct forecasting based on multivariate time series. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
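The two combination schemes mentioned at the end of the abstract (simple average and error-based weighting) can be sketched in a few lines. The inverse-MSE weighting below is one common error-based choice and is an assumption, not necessarily the exact formula used in the paper.

```python
# Forecast-combination sketch: simple average vs. error-based (inverse-MSE) weighting.
import numpy as np

def combine_forecasts(pred_matrix, y_val=None, val_pred_matrix=None):
    """pred_matrix: (n_models, horizon) forecasts to combine.
    If validation forecasts and targets are given, weight by inverse validation MSE."""
    if y_val is None:
        weights = np.full(pred_matrix.shape[0], 1.0 / pred_matrix.shape[0])  # plain average
    else:
        mse = np.mean((val_pred_matrix - y_val) ** 2, axis=1)
        weights = (1.0 / mse) / np.sum(1.0 / mse)       # error-based: lower MSE, higher weight
    return weights @ pred_matrix                         # combined forecast, shape (horizon,)
```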

15 pages, 297 KiB  
Article
A Weighted Voting Ensemble Self-Labeled Algorithm for the Detection of Lung Abnormalities from X-Rays
by Ioannis E. Livieris, Andreas Kanavos, Vassilis Tampakas and Panagiotis Pintelas
Algorithms 2019, 12(3), 64; https://doi.org/10.3390/a12030064 - 16 Mar 2019
Cited by 41 | Viewed by 6652
Abstract
During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and, mostly, of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the shortage of available labeled data, by exploiting the explicit classification information of labeled data together with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights to each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
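The per-class weighting idea can be sketched as follows: each component classifier receives one weight per class, here derived from its per-class recall on a validation set (an illustrative assumption, not the paper's exact weighting), and votes are accumulated with those weights.

```python
# Per-class weighted voting sketch: each classifier gets one weight per class.
import numpy as np
from sklearn.metrics import recall_score

def class_weight_vectors(classifiers, X_val, y_val, classes):
    # weights[i, c]: how much classifier i is trusted when it votes for class c.
    return np.array([
        recall_score(y_val, clf.predict(X_val), labels=classes, average=None)
        for clf in classifiers
    ])

def weighted_vote(classifiers, weights, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for i, clf in enumerate(classifiers):
        pred = clf.predict(X)
        for c_idx, c in enumerate(classes):
            votes[pred == c, c_idx] += weights[i, c_idx]   # weighted vote for the predicted class
    return np.asarray(classes)[votes.argmax(axis=1)]
```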
