Special Issue "Regularization Techniques for Machine Learning and Their Applications"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 September 2021.

Special Issue Editors

Prof. Dr. Theodore Kotsilieris
Guest Editor
Department of Business Administration, University of the Peloponnese, GR 241-00, Greece
Interests: mobile agents; WSN routing algorithms; medical informatics; artificial intelligence; E-learning
Dr. Ioannis E. Livieris
Guest Editor
Department of Business Administration, University of the Peloponnese, GR 241-00, Greece; Department of Mathematics, University of Patras, GR 265-00, Greece
Interests: artificial neural networks; numerical analysis; computational mathematics; machine learning; algorithms; semi-supervised learning; ICT in education; data mining; deep learning
Prof. Dr. Ioannis Anagnostopoulos
Guest Editor
Department of Computer Science and Biomedical Informatics, University of Thessaly, GR 351-00, Greece
Interests: web information management; internet technologies and web applications; communication networks; multimedia retrieval; personalisation/adaptation; social networking and integrated services; E-Commerce; E-Learning; intelligent systems; applications in bioinformatics

Special Issue Information

Dear Colleagues,

We invite you to submit your latest research on the development of regularization techniques and their applications to this Special Issue, “Regularization Techniques for Machine Learning and Their Applications”.

Over the last decade, learning theory has made significant progress in the development of sophisticated algorithms and their theoretical foundations. The theory builds on concepts that exploit ideas and methodologies from mathematical areas such as optimization theory. Regularization is arguably the key to addressing the challenging problem of overfitting, which usually occurs in high-dimensional learning. Its primary goal is to make a machine learning algorithm “learn” rather than “memorize”, by penalizing the algorithm's complexity so as to reduce its generalization error and avoid the risk of overfitting. As a result, the variance of the model is significantly reduced, without a substantial increase in its bias and without losing any important properties of the data.

The main aim of this Special Issue is to present recent advances in regularization methodologies of all kinds and to investigate the impact of their application on a diverse range of real-world problems.

Prof. Dr. Theodore Kotsilieris
Dr. Ioannis E. Livieris
Prof. Dr. Ioannis Anagnostopoulos
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Regularized neural networks
  • Dropout and DropConnect techniques
  • Regularization for deep learning models
  • Weight-constrained neural networks
  • L-norm regularization
  • Adversarial learning
  • Penalty functions
  • Multitask learning
  • Pooling techniques
  • Model selection techniques
  • Matrix regularizers
  • Data augmentation
  • Early stopping strategies
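Several of the keyword techniques above admit compact illustrations. As one example, inverted dropout zeroes each activation with probability p during training and rescales the survivors by 1/(1-p), so that the expected activation is unchanged and no rescaling is needed at inference time. A minimal NumPy sketch (the drop probability and array shape are arbitrary demonstration choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: zero each unit with probability p_drop and
    rescale the survivors by 1/(1 - p_drop) during training only."""
    if not training:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

a = np.ones((1000, 100))
out = dropout(a, p_drop=0.5)
# Roughly half the units are zeroed; the rest are scaled to 2.0,
# so the mean activation stays close to 1.0.
print(out.mean())
```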

Published Papers (4 papers)


Research

Article
Application of Deep Neural Network to the Reconstruction of Two-Phase Material Imaging by Capacitively Coupled Electrical Resistance Tomography
Electronics 2021, 10(9), 1058; https://doi.org/10.3390/electronics10091058 - 29 Apr 2021
Cited by 1 | Viewed by 447
Abstract
A convolutional neural network (CNN)-based image reconstruction algorithm for two-phase material imaging is presented and verified with experimental data from a capacitively coupled electrical resistance tomography (CCERT) sensor. As a contactless version of electrical resistance tomography (ERT), CCERT is non-invasive, low-cost, radiation-free, and offers a rapid response for two-phase material imaging. Moreover, by imaging from outside the pipe, CCERT avoids the contact error of ERT. Forward modeling was implemented based on the practical circular array sensor, and the inverse image reconstruction was realized by a CNN-based supervised learning algorithm, with the well-known total variation (TV) regularization algorithm used for comparison. The 2D, monochrome, 2500-pixel image was divided into 625 clusters, and each cluster was used individually to train its own CNN to solve a 16-class classification problem. The inherent regularization provided by the binary-material assumption made it possible to cast reconstruction as a classification task for the CNN. The iterative TV regularization algorithm achieved a comparable two-phase material reconstruction through its sparsity-based assumption. The supervised learning algorithm established the mathematical model that maps the simulated resistance measurements to the pixel patterns of the clusters. Training was carried out using simulated measurement data only, but both simulated and experimental tests were conducted to investigate the feasibility of applying a multi-layer CNN to CCERT imaging. The performance of the CNN algorithm on simulated data is demonstrated, and a comparison between the TV-based algorithm and the proposed CNN algorithm on real-world data is also provided.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
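The cluster-wise classification in the abstract above has a simple combinatorial reading: 2500 pixels split into 625 clusters gives 4 binary pixels per cluster, hence 2^4 = 16 possible patterns, i.e., a 16-class target per cluster. A sketch of that encoding (the row-major bit layout and the test image are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def patch_to_class(patch):
    """Encode a 2x2 binary patch as one of 16 class labels."""
    bits = patch.astype(int).ravel()          # row-major: p00, p01, p10, p11
    return int(bits @ np.array([8, 4, 2, 1]))

def class_to_patch(label):
    """Decode a class label back into its 2x2 binary patch."""
    bits = [(label >> s) & 1 for s in (3, 2, 1, 0)]
    return np.array(bits).reshape(2, 2)

# A 50x50 binary image -> 625 clusters of 2x2 pixels, each a 16-class target.
img = (np.arange(2500).reshape(50, 50) % 2).astype(np.uint8)
labels = [patch_to_class(img[r:r + 2, c:c + 2])
          for r in range(0, 50, 2) for c in range(0, 50, 2)]
assert len(labels) == 625
assert np.array_equal(class_to_patch(labels[0]), img[0:2, 0:2])
```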

Article
An Advanced CNN-LSTM Model for Cryptocurrency Forecasting
Electronics 2021, 10(3), 287; https://doi.org/10.3390/electronics10030287 - 26 Jan 2021
Cited by 3 | Viewed by 1308
Abstract
Nowadays, cryptocurrencies are established and widely recognized as an alternative medium of exchange. They have infiltrated most financial transactions, and as a result, cryptocurrency trading is generally considered one of the most popular and promising types of profitable investment. Nevertheless, this constantly growing financial market is characterized by significant volatility and strong price fluctuations over short time periods; therefore, the development of an accurate and reliable forecasting model is considered essential for portfolio management and optimization. In this research, we propose a multiple-input deep neural network model for the prediction of cryptocurrency prices and price movement. The proposed forecasting model takes different cryptocurrency data as inputs and handles them independently in order to exploit useful information from each cryptocurrency separately. An extensive empirical study was performed using three consecutive years of data from the three cryptocurrencies with the highest market capitalization, i.e., Bitcoin (BTC), Ethereum (ETH), and Ripple (XRP). The detailed experimental analysis revealed that the proposed model efficiently exploits mixed cryptocurrency data, reduces overfitting, and decreases the computational cost in comparison with traditional fully connected deep neural networks.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
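The multiple-input design described in the abstract (each cryptocurrency handled by its own branch before a shared head) can be sketched as a single forward pass in NumPy; the window length, layer sizes, and random weights are illustrative assumptions and carry no trained information.

```python
import numpy as np

rng = np.random.default_rng(3)

def dense_relu(x, W, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(W @ x + b, 0.0)

# One input branch per cryptocurrency, each seeing its own window of returns.
window, hidden = 30, 16
series = {name: rng.normal(size=window) for name in ("BTC", "ETH", "XRP")}
params = {name: (0.1 * rng.normal(size=(hidden, window)), np.zeros(hidden))
          for name in series}

# Each input is processed independently, then the branch representations
# are concatenated and fed to a shared output layer.
reps = [dense_relu(x, *params[name]) for name, x in series.items()]
merged = np.concatenate(reps)                 # shape (3 * hidden,)
W_out, b_out = 0.1 * rng.normal(size=(1, 3 * hidden)), np.zeros(1)
prediction = (W_out @ merged + b_out).item()
print(merged.shape, prediction)
```

Keeping the branches separate until the merge is what lets each branch specialize to one series, which is the property the abstract credits for reduced overfitting and lower computational cost.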

Article
An Advanced Pruning Method in the Architecture of Extreme Learning Machines Using L1-Regularization and Bootstrapping
Electronics 2020, 9(5), 811; https://doi.org/10.3390/electronics9050811 - 15 May 2020
Cited by 1 | Viewed by 738
Abstract
Extreme learning machines (ELMs) are efficient for classification, regression, and time series prediction, and offer a clear alternative to backpropagation-based structures for determining the values in the intermediate layers of the learning model. One problem an ELM may face stems from a large number of neurons in the hidden layer, which makes the model overly specialized to a specific data set. With many neurons in the hidden layer, overfitting is more likely, and the resulting unnecessary information can deteriorate the performance of the neural network. To solve this problem, a pruning method called Pruning ELM Using Bootstrapped Lasso (BR-ELM) is proposed, based on regularization and resampling techniques, to select the most representative neurons for the model response. The method relies on an ensembled variant of Lasso (achieved through bootstrap replications) and aims to shrink as many of the neurons' output weight parameters to zero as possible. From the subset of candidate regressors with significant (nonzero) coefficient values, the best neurons in the hidden layer of the ELM can be selected. Finally, pattern classification tests and benchmark regression tests on complex real-world problems were performed, comparing the proposed approach to other pruning models for ELMs. Statistically, BR-ELM outperforms several related state-of-the-art methods in terms of classification accuracy and model error (while performing on par with Pruning-ELM, P-ELM), with a significantly reduced number of finally selected neurons.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
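The bootstrapped-Lasso selection step can be sketched end to end: fit an L1-penalized regression of the target on the hidden-layer outputs over several bootstrap replications, then keep only the neurons whose output weights are nonzero in most replications. The sketch below uses a plain ISTA solver on simulated data; the penalty, replication count, and 90% stability threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def lasso_ista(H, y, lam, steps=500):
    """Lasso via iterative soft-thresholding (ISTA):
    minimize 0.5 * ||H w - y||^2 + lam * ||w||_1."""
    t = 1.0 / np.linalg.norm(H, 2) ** 2       # step size below 1/L
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        z = w - t * (H.T @ (H @ w - y))
        w = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    return w

# Simulated hidden-layer outputs of an ELM with 20 neurons, of which
# only neurons 0-4 actually drive the target.
n, m = 200, 20
H = rng.normal(size=(n, m))
w_true = np.zeros(m)
w_true[:5] = [3.0, -2.0, 2.5, 1.5, -3.0]
y = H @ w_true + 0.1 * rng.normal(size=n)

# Bootstrapped Lasso: refit on resampled replications and keep neurons
# whose output weight is nonzero in at least 90% of them.
B = 20
counts = np.zeros(m)
for _ in range(B):
    idx = rng.integers(0, n, size=n)
    counts += np.abs(lasso_ista(H[idx], y[idx], lam=5.0)) > 1e-6
selected = np.where(counts / B >= 0.9)[0]
print(selected)   # the informative neurons should dominate the selection
```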

Article
sDeepFM: Multi-Scale Stacking Feature Interactions for Click-Through Rate Prediction
Electronics 2020, 9(2), 350; https://doi.org/10.3390/electronics9020350 - 19 Feb 2020
Cited by 2 | Viewed by 839
Abstract
In estimating the click-through rate of advertisements, existing approaches suffer from several problems: features cannot be constructed automatically, the constructed features are relatively simple, or high-order combination features are difficult to learn from sparse data. To solve these problems, we propose a novel multi-scale stacking pooling (MSSP) structure to construct multi-scale features based on different receptive fields. The structure stacks multi-scale features bi-directionally, in both depth and width, by constructing multiple observers with different angles and fields of view, ensuring the diversity of the extracted features. Furthermore, by learning the parameters through factorization, the structure ensures that high-order features are learned effectively from sparse data. We further combine the MSSP with a classical deep neural network (DNN) to form a unified model named sDeepFM. Experimental results on two real-world datasets show that sDeepFM outperforms state-of-the-art models with respect to the area under the curve (AUC) and log loss.
(This article belongs to the Special Issue Regularization Techniques for Machine Learning and Their Applications)
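The factorized parameter learning that the abstract credits for handling sparse high-order interactions is, at its core, the factorization-machine trick: each feature i gets a latent vector v_i, and the pairwise interaction weight is the inner product of v_i and v_j, computable in O(dk) rather than O(d^2). A sketch of the second-order term (random illustrative data; this is the generic FM identity, not the sDeepFM architecture itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def fm_pairwise(x, V):
    """Second-order factorization-machine term
    sum_{i<j} <V[i], V[j]> * x_i * x_j, computed in O(d*k) via
    0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    s = V.T @ x
    return 0.5 * float(s @ s - ((V ** 2).T @ (x ** 2)).sum())

d, k = 8, 3
x = rng.normal(size=d)
V = rng.normal(size=(d, k))    # one k-dimensional latent vector per feature

# Brute-force check of the O(d*k) identity against the O(d^2) pair sum
brute = sum(V[i] @ V[j] * x[i] * x[j]
            for i in range(d) for j in range(i + 1, d))
assert abs(fm_pairwise(x, V) - brute) < 1e-9
```

Because the latent vectors are shared across all pairs, an interaction weight can be estimated even for feature pairs that never co-occur in the training data, which is why factorization helps under sparsity.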
