Special Issue "Neural Networks and Learning Systems II"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: 30 September 2022 | Viewed by 1582

Special Issue Editors

Prof. Dr. Luca Pancioni
Guest Editor
Department of Information Engineering and Mathematics, University of Siena, Siena, Italy
Interests: bifurcation; memristor circuits; memristors; chaos; nonlinear dynamical systems; oscillators; Chua's circuit; Hopfield neural nets; Lyapunov methods; asymptotic stability; cellular neural nets; convergence; coupled circuits; hysteresis; piecewise linear techniques; stability; synchronization; time-varying networks; neural nets; circuit stability
Special Issues, Collections and Topics in MDPI journals
Dr. Giacomo Innocenti
Guest Editor
Department of Information Engineering, University of Florence, Via Santa Marta 3, 50139 Firenze, Italy
Interests: complex networks; control systems, robotics and automation; nonlinear dynamics
Prof. Dr. Fernando Corinto
Guest Editor
Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
Interests: nonlinear circuits and systems; locally coupled nonlinear/nanoscale networks; memristor nanotechnology

Special Issue Information

Dear Colleagues,

In recent decades, systems based on artificial neural networks and machine learning devices have become increasingly present in everyday life. Artificial intelligence is considered one of the most useful tools in data analysis and decision making. Its fields of application span all sectors of our life, including medicine, engineering, economics, and manufacturing. In this context, the importance of research based on artificial neural network systems is evident, and for these reasons, the disciplines related to this topic are growing rapidly in terms of project financing and research scope. As a result, part of the scientific community is devoted to investigating learning machines and artificial neural network systems from both applied and theoretical points of view, the latter being fundamental to validating the underlying mathematical models.

The main goal of this Special Issue is to collect papers presenting the state of the art and the latest studies on neural networks and learning systems. Moreover, it offers a venue where researchers can share and exchange their views on the theory, design, and applications of these systems. The area of interest is wide and includes several categories, such as stability and convergence analysis, learning algorithms, artificial vision, and speech recognition.

Prof. Dr. Luca Pancioni
Dr. Giacomo Innocenti
Prof. Dr. Fernando Corinto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neural networks
  • neurons
  • stability
  • circuit theory
  • nonlinear systems
  • synchronization
  • network topology
  • couplings
  • convergence

Published Papers (4 papers)


Research

Article
End-to-End Training of Deep Neural Networks in the Fourier Domain
Mathematics 2022, 10(12), 2132; https://doi.org/10.3390/math10122132 - 19 Jun 2022
Viewed by 233
Abstract
Convolutional networks are commonly used in various machine learning tasks, and they are increasingly deployed in the embedded domain on devices such as smart cameras and mobile phones. The operation of convolution can be substituted by point-wise multiplication in the Fourier domain, which saves operations; usually, however, a Fourier transform is applied before and an inverse Fourier transform after the multiplication, since the other operations in neural networks cannot be implemented efficiently in the Fourier domain. In this paper, we present a method for implementing a neural network entirely in the Fourier domain, thereby saving multiplications and the inverse Fourier transformations. Our method can decrease the number of operations by four times the number of pixels in the convolutional kernel, with only a minor decrease in accuracy, for example, 4% on the MNIST and 2% on the HADB datasets.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
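The convolution theorem underlying this approach can be checked in a few lines: circular convolution of two signals equals the inverse FFT of the point-wise product of their FFTs. A minimal NumPy sketch, illustrative only and not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
a = rng.standard_normal(n)   # input signal
k = rng.standard_normal(n)   # convolution kernel

# Direct circular convolution: O(n^2) multiplications.
direct = np.array([sum(a[j] * k[(i - j) % n] for j in range(n)) for i in range(n)])

# Point-wise multiplication in the Fourier domain: O(n log n) overall.
fourier = np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)).real

print(np.allclose(direct, fourier))  # True
```

The saving the abstract refers to comes from skipping the transforms entirely once every layer operates in the Fourier domain.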

Article
Deep Learning Approaches for the Segmentation of Glomeruli in Kidney Histopathological Images
Mathematics 2022, 10(11), 1934; https://doi.org/10.3390/math10111934 - 05 Jun 2022
Viewed by 307
Abstract
Deep learning is widely applied in bioinformatics and biomedical imaging due to its ability to perform various clinical tasks automatically and accurately. In particular, deep learning techniques for the automatic identification of glomeruli in histopathological kidney images can play a fundamental role, offering a valid decision-support tool for the automatic evaluation of the Karpinski metric. This helps clinicians detect the presence of sclerotic glomeruli in order to decide whether a kidney is transplantable. In this work, we implemented a deep learning framework to identify and segment sclerotic and non-sclerotic glomeruli from scanned Whole Slide Images (WSIs) of human kidney biopsies. The experiments were conducted on a new dataset collected by the Siena and Trieste hospitals. The images were segmented using the DeepLab V2 model, with a pre-trained ResNet101 encoder, applied to 512 × 512 patches extracted from the original WSIs. The results are promising and show good performance in the segmentation task and good generalization capacity, despite the different coloring and typology of the histopathological images. Moreover, we present a novel use of the CD10 staining procedure, which gives promising results when applied to the segmentation of sclerotic glomeruli in kidney tissues.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
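The patch-based pipeline described above can be sketched as follows; `extract_patches` is a hypothetical helper (the paper's actual preprocessing is not specified here), tiling a decoded WSI region into non-overlapping 512 × 512 patches:

```python
import numpy as np

def extract_patches(wsi, size=512, stride=512):
    """Tile an H x W x 3 image into size x size patches.

    Illustrative only: `wsi` is a NumPy array standing in for a
    decoded Whole Slide Image region; edge remainders are dropped.
    """
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(wsi[y:y + size, x:x + size])
    return np.stack(patches)

demo = np.zeros((1024, 2048, 3), dtype=np.uint8)
print(extract_patches(demo).shape)  # (8, 512, 512, 3)
```

Each patch would then be fed to the segmentation model, and the per-patch masks stitched back to the WSI coordinate frame.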

Article
Wave Loss: A Topographic Metric for Image Segmentation
Mathematics 2022, 10(11), 1932; https://doi.org/10.3390/math10111932 - 04 Jun 2022
Viewed by 332
Abstract
The solution of segmentation problems with deep neural networks requires a well-defined loss function for comparison and network training. Most training approaches consider only area-based differences, i.e., how many pixels differ, not how those differences are distributed. Our brain, by contrast, compares complex objects with ease, considering pixel-level and topological differences simultaneously; such a comparison requires a properly defined metric that captures similarity under changes in both shape and values. In past years, topographic aspects have been incorporated into loss functions that employ either boundary pixels or area ratios in the difference calculation. In this paper, we show how a topographic metric, called wave loss, can be applied in neural network training to increase the accuracy of traditional segmentation algorithms. Our method increased segmentation accuracy by 3% on both the Cityscapes and Ms-Coco datasets, using various network architectures.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
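The intuition behind a topographic loss is that where an error occurs matters as much as how many pixels it covers. The toy depth-weighted loss below is not the paper's wave loss, only a minimal sketch of that idea: pixel errors are weighted by their depth inside the target object, so a mistake at the core costs more than jitter at the rim.

```python
import numpy as np

def depth_map(mask):
    """Distance (in erosion layers) of each foreground pixel to the
    object boundary, computed by morphological peeling (4-neighbourhood)."""
    m = mask.astype(bool).copy()
    d = np.zeros(mask.shape)
    layer = 0
    while m.any():
        p = np.pad(m, 1, constant_values=False)
        eroded = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                  & p[1:-1, :-2] & p[1:-1, 2:])
        d[m & ~eroded] = layer  # pixels peeled off at this layer
        m = eroded
        layer += 1
    return d

def topographic_loss(pred, target):
    """Squared pixel errors weighted by depth inside the target object."""
    w = 1.0 + depth_map(target)
    return np.mean(w * (pred - target.astype(float)) ** 2)

target = np.zeros((7, 7)); target[1:6, 1:6] = 1     # 5x5 square
miss_border = target.copy(); miss_border[1, 1] = 0  # error at the rim
miss_center = target.copy(); miss_center[3, 3] = 0  # error at the core
print(topographic_loss(miss_border, target) < topographic_loss(miss_center, target))  # True
```

An area-based loss would score both mistakes identically (one wrong pixel each); the depth weighting separates them.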

Article
A Comprehensive Comparison of the Performance of Metaheuristic Algorithms in Neural Network Training for Nonlinear System Identification
Mathematics 2022, 10(9), 1611; https://doi.org/10.3390/math10091611 - 09 May 2022
Viewed by 351
Abstract
Many problems in daily life exhibit nonlinear behavior, so solving nonlinear problems is important; they are complex and difficult precisely because of their nonlinear nature. The literature shows that different artificial intelligence techniques are used to solve them, one of the most important being artificial neural networks. Obtaining successful results with an artificial neural network depends on its training process; in other words, it must be trained with a good training algorithm. Metaheuristic algorithms in particular are frequently used in artificial neural network training because of their advantages. In this study, for the first time, the performance of sixteen metaheuristic algorithms in artificial neural network training for the identification of nonlinear systems is analyzed, with the aim of determining the most effective metaheuristic training algorithms. The metaheuristic algorithms are examined in terms of solution quality and convergence speed. In the applications, six nonlinear systems are used, with the mean-squared error (MSE) as the error metric. The best mean training error values obtained for the six nonlinear systems were 3.5 × 10⁻⁴, 4.7 × 10⁻⁴, 5.6 × 10⁻⁵, 4.8 × 10⁻⁴, 5.2 × 10⁻⁴, and 2.4 × 10⁻³, respectively. In addition, the best mean test error values found for all systems were successful. The results show that biogeography-based optimization, moth–flame optimization, the artificial bee colony algorithm, teaching–learning-based optimization, and the multi-verse optimizer were generally more effective than the other metaheuristic algorithms in the identification of nonlinear systems.
(This article belongs to the Special Issue Neural Networks and Learning Systems II)
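The training setup compared in the paper can be sketched with the simplest possible metaheuristic, a (1+1) evolution strategy, standing in for the sixteen algorithms actually benchmarked; the 1-5-1 network, the target system y = sin(x), and all hyperparameters here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 1-5-1 feedforward net with a flat parameter vector of length 16.
def net(params, x):
    w1, b1 = params[:5].reshape(5, 1), params[5:10]
    w2, b2 = params[10:15], params[15]
    return np.tanh(x @ w1.T + b1) @ w2 + b2

# Identify a nonlinear system y = sin(x) from samples; MSE is the objective.
x = np.linspace(-2, 2, 100).reshape(-1, 1)
y = np.sin(x).ravel()
mse = lambda p: np.mean((net(p, x) - y) ** 2)

# (1+1) evolution strategy: mutate, keep the candidate only if it improves.
best = rng.standard_normal(16) * 0.5
init_err = best_err = mse(best)
for _ in range(3000):
    cand = best + rng.standard_normal(16) * 0.3
    err = mse(cand)
    if err < best_err:
        best, best_err = cand, err

print(best_err <= init_err)  # True: selection never accepts a worse model
```

Population-based metaheuristics such as ABC, BBO, MFO, TLBO, or MVO follow the same pattern, treating the flattened weight vector as the candidate solution and the network's MSE as the fitness function.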
