Special Issue "Regularization Techniques for Machine Learning and Their Applications"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 December 2020.

Special Issue Editors

Prof. Dr. Theodore Kotsilieris
Guest Editor
Department of Business Administration, University of the Peloponnese, GR 241-00, Greece
Interests: mobile agents; WSN routing algorithms; medical informatics; artificial intelligence; E-learning
Dr. Ioannis E. Livieris
Guest Editor
Department of Business Administration, University of the Peloponnese, GR 241-00, Greece; Department of Mathematics, University of Patras, GR 265-00, Greece
Interests: artificial neural networks; numerical analysis; computational mathematics; machine learning; algorithms; semi-supervised learning; ICT in education; data mining; deep learning
Prof. Dr. Ioannis Anagnostopoulos
Guest Editor
Department of Computer Science and Biomedical Informatics, University of Thessaly, GR 351-00, Greece
Interests: web information management; internet technologies and web applications; communication networks; multimedia retrieval; personalisation/adaptation; social networking and integrated services; E-Commerce; E-Learning; intelligent systems; applications in bioinformatics

Special Issue Information

Dear Colleagues,

We invite you to submit your latest research on regularization techniques and their applications to this Special Issue, “Regularization Techniques for Machine Learning and Their Applications”.

Over the last decade, learning theory has made significant progress in the development of sophisticated algorithms and their theoretical foundations. The theory builds on concepts and methodologies from mathematical areas such as optimization theory. Regularization is arguably the key to addressing the challenging problem of overfitting, which usually occurs in high-dimensional learning. Its primary goal is to make the machine learning algorithm “learn” rather than “memorize” by penalizing model complexity, thereby reducing generalization error and avoiding the risk of overfitting. As a result, the variance of the model is significantly reduced, without a substantial increase in its bias and without losing any important properties of the data.
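
To make the variance-reduction effect concrete, below is a minimal NumPy sketch, written for this announcement rather than taken from any paper in this issue, of L2 (ridge) regularization in a high-dimensional regression. The penalty weight lambda trades a small amount of bias for a large drop in variance; the data sizes, lambda values, and function names are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional regression: more features than samples,
# the regime where overfitting is most severe.
n_samples, n_features = 40, 100
X = rng.normal(size=(n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:5] = 1.0                                  # only 5 informative features
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares:
    w = (X^T X + lam * I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Held-out data to measure generalization error.
X_test = rng.normal(size=(1000, n_features))
y_test = X_test @ true_w

for lam in (1e-6, 1.0, 10.0):                     # 1e-6 ~ "no regularization"
    w = ridge_fit(X, y, lam)
    mse = np.mean((X_test @ w - y_test) ** 2)
    print(f"lambda={lam:g}  test MSE={mse:.3f}")
```

Increasing lambda shrinks the weights on the many uninformative features, and the test error drops accordingly, which is exactly the variance reduction described above.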

The main aim of this Special Issue is to present the recent advances related to all kinds of regularization methodologies and investigations of the impact of their application to a diversity of real-world problems.

Prof. Dr. Theodore Kotsilieris
Dr. Ioannis E. Livieris
Prof. Dr. Ioannis Anagnostopoulos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Regularized neural networks
  • Dropout and DropConnect techniques
  • Regularization for deep learning models
  • Weight-constrained neural networks
  • L-norm regularization
  • Adversarial learning
  • Penalty functions
  • Multitask learning
  • Pooling techniques
  • Model selection techniques
  • Matrix regularizers
  • Data augmentation
  • Early stopping strategies

Published Papers (2 papers)


Research

Open Access Article
An Advanced Pruning Method in the Architecture of Extreme Learning Machines Using L1-Regularization and Bootstrapping
Electronics 2020, 9(5), 811; https://doi.org/10.3390/electronics9050811 - 15 May 2020
Abstract
Extreme learning machines (ELMs) are efficient for classification, regression, and time series prediction, and offer a clear alternative to backpropagation-based structures for determining the values in the intermediate layers of the learning model. One problem an ELM may face stems from a large number of neurons in the hidden layer, which makes the model an expert on one specific data set. With a large number of neurons in the hidden layer, overfitting is more likely, and unnecessary information can deteriorate the performance of the neural network. To solve this problem, a pruning method called Pruning ELM Using Bootstrapped Lasso (BR-ELM) is proposed, which is based on regularization and resampling techniques and selects the most representative neurons for the model response. The method relies on an ensembled variant of Lasso (obtained through bootstrap replications) and aims to shrink as many of the neurons' output weight parameters to zero as possible. From the subset of candidate regressors with significant (nonzero) coefficient values, the best neurons in the hidden layer of the ELM can be selected. Finally, pattern classification tests and benchmark regression tests on complex real-world problems are performed, comparing the proposed approach to other pruning models for ELMs. Statistically, BR-ELM outperforms several related state-of-the-art methods in terms of classification accuracy and model error (while performing on par with Pruning-ELM, P-ELM), and does so with a significantly reduced number of finally selected neurons.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
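
For readers who want to experiment with the idea, here is a minimal Python sketch of the general bootstrapped-Lasso pruning scheme described in the abstract. It is our simplified illustration, not the authors' implementation: the intersection-based selection rule, the alpha value, and all sizes and names are assumptions; consult the paper for the exact procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer of an ELM (sigmoid activation)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Toy regression data and a deliberately oversized hidden layer.
n, d, L = 200, 5, 100                     # samples, inputs, hidden neurons
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
W = rng.normal(size=(d, L))               # random input weights (never trained)
b = rng.normal(size=L)
H = elm_hidden(X, W, b)                   # n x L hidden-layer output matrix

# Bootstrapped Lasso: keep only neurons whose output weight survives
# (is nonzero) in every bootstrap replicate -- a simple selection rule.
B, alpha = 30, 0.01
keep = np.ones(L, dtype=bool)
for _ in range(B):
    idx = rng.integers(0, n, size=n)      # bootstrap resample with replacement
    coef = Lasso(alpha=alpha, max_iter=5000).fit(H[idx], y[idx]).coef_
    keep &= coef != 0

# Refit the pruned output layer by ordinary least squares, as in a plain ELM.
beta, *_ = np.linalg.lstsq(H[:, keep], y, rcond=None)
print(f"kept {keep.sum()} of {L} hidden neurons")
```

The final least-squares refit on the surviving neurons plays the role of the standard ELM output solve, now over a much smaller hidden layer.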

Open Access Article
sDeepFM: Multi-Scale Stacking Feature Interactions for Click-Through Rate Prediction
Electronics 2020, 9(2), 350; https://doi.org/10.3390/electronics9020350 - 19 Feb 2020
Abstract
In estimating the click-through rate of advertisements, several problems arise: features cannot be constructed automatically, the features that are built are relatively simple, or high-order combination features are difficult to learn from sparse data. To solve these problems, we propose a novel structure, multi-scale stacking pooling (MSSP), which constructs multi-scale features based on different receptive fields. The structure stacks multi-scale features bi-directionally, in both depth and width, by constructing multiple observers with different angles and fields of view, ensuring the diversity of the extracted features. Furthermore, by learning the parameters through factorization, the structure ensures that high-order features are learned effectively from sparse data. We further combine the MSSP with a classical deep neural network (DNN) to form a unified model named sDeepFM. Experimental results on two real-world datasets show that sDeepFM outperforms state-of-the-art models with respect to area under the curve (AUC) and log loss.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
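
The "learning the parameters through factorization" mentioned in the abstract builds on the classical second-order factorization machine (FM) interaction, which can be computed in O(nk) rather than O(n^2) time. Below is a minimal NumPy sketch of that standard trick; it is a point of reference only, not the authors' MSSP structure, and all shapes and names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_pairwise(X, V):
    """Second-order factorization-machine interaction scores via the
    O(n*k) identity:
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]
    X: (batch, n_features), V: (n_features, k) latent factor matrix."""
    s = X @ V                               # (batch, k)
    s2 = (X ** 2) @ (V ** 2)                # (batch, k)
    return 0.5 * np.sum(s ** 2 - s2, axis=1)

# Sparse one-hot-style input: each row activates only a few features.
X = (rng.random((4, 1000)) < 0.01).astype(float)
V = 0.01 * rng.normal(size=(1000, 8))       # k = 8 latent dimensions
print(fm_pairwise(X, V))                    # one interaction score per row
```

Because every pair of features shares the latent factors in V, an interaction can be estimated even when that exact pair never co-occurs in the training data, which is what makes the factorized form effective under sparsity.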
