Special Issue "Numerical Analysis of Artificial Neural Networks"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: 31 December 2022 | Viewed by 4102

Special Issue Editor

Prof. Dr. Miguel Atencia
Guest Editor
Escuela de Ingenierías Industriales, Universidad de Málaga, 29071 Málaga, Spain
Interests: recurrent neural networks; dynamical systems; numerical optimization; time series; natural language processing

Special Issue Information

Numerical analysis is, together with computer algebra, one of the two pillars of all computational algorithms. Accurate results of machine learning algorithms for classification, regression, and prediction rest on the theoretical properties of numerical methods. The list of examples is overwhelming: principal component analysis is based on numerical linear algebra; optimization with Hopfield networks stems from concepts rooted in dynamical systems; backpropagation requires numerical optimizers; and so on. Conversely, research on computational intelligence techniques has led to advances in many numerical methods, with stochastic gradient descent being primus inter pares.
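The interplay described above can be made concrete with a minimal sketch of stochastic gradient descent on a least-squares problem; the synthetic data, learning rate, and epoch count below are illustrative assumptions, not taken from any paper in this issue.

```python
import numpy as np

# Synthetic regression data: y = X w_true + small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

def sgd(X, y, lr=0.05, epochs=50):
    """One sample per update: w <- w - lr * grad of (x.w - y)^2 / 2."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

w_hat = sgd(X, y)
```

With a small constant step size the iterates hover in a neighborhood of the least-squares solution, which is the numerical-analysis fact underlying the statistical behaviour of the method.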

In this Special Issue, we aim to foster the synergy between these two fields by encouraging the analysis and design of numerical methods for, in, and from machine learning algorithms. We welcome contributions that show satisfactory learning results to be soundly based on numerical foundations, as well as ground-breaking numerical methods that provide the basis for efficient practical algorithms, at least at the proof-of-concept stage.

The scope of the issue is deliberately broad, including but not limited to numerical techniques from linear algebra, dynamical systems, kernel methods, optimization, spectral methods, and stochastic formulations, as well as algorithms within neural networks, support vector machines, recurrent networks, and clustering methods.

Prof. Dr. Miguel Atencia
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Numerical linear algebra
  • Numerical methods for dynamical systems
  • Numerical optimization
  • Geometric numerical integration
  • Iterative methods
  • Convergence
  • Machine learning
  • Neural networks 
  • Classification 
  • Time series forecasting 
  • Dimensionality reduction

Published Papers (4 papers)


Research

Article
Imaginary Finger Movements Decoding Using Empirical Mode Decomposition and a Stacked BiLSTM Architecture
Mathematics 2021, 9(24), 3297; https://doi.org/10.3390/math9243297 - 18 Dec 2021
Cited by 1 | Viewed by 674
Abstract
Motor Imagery Electroencephalogram (MI-EEG) signals are widely used in Brain-Computer Interfaces (BCI). MI-EEG signals of large-limb movements have been explored in recent studies because they deliver relevant classification rates for BCI systems. However, the smaller and noisier signals corresponding to imagined finger movements are less frequently used because they are difficult to classify. This study proposes a method for decoding imagined finger movements of the right hand. For this purpose, MI-EEG signals from the C3, Cz, P3, and Pz sensors were carefully selected to be processed in the proposed framework. A method based on Empirical Mode Decomposition (EMD) is used to tackle the problem of noisy signals, while the sequence classification is performed by a stacked Bidirectional Long Short-Term Memory (BiLSTM) network. The proposed method was evaluated using k-fold cross-validation on a public dataset, obtaining an accuracy of 82.26%.
(This article belongs to the Special Issue Numerical Analysis of Artificial Neural Networks)

Article
Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Reduction Mapping
Mathematics 2021, 9(9), 1017; https://doi.org/10.3390/math9091017 - 30 Apr 2021
Viewed by 551
Abstract
This work explores neural approximation for nonlinear dimensionality reduction mapping based on internal representations of graph-organized regular data supports. The given training observations are assumed to be a sample from a high-dimensional space containing an embedded low-dimensional manifold. An approximating function with adaptable built-in parameters is optimized subject to the training observations by the proposed learning process, and is verified by transforming novel testing observations to images in the low-dimensional output space. The optimized internal representations sketch graph-organized supports of distributed data clusters and their representative images in the output space. On this basis, the approximating function can operate during testing without retaining the original massive set of training observations. The neural approximating model contains multiple modules, each of which activates a non-zero output for mapping in response to an input inside its corresponding local support. Graph-organized data supports have lateral interconnections that represent neighboring relations, infer the minimal path between the centroids of any two data supports, and yield distance constraints for mapping all centroids to images in the output space. Following the distance-preserving principle, this work proposes Levenberg-Marquardt learning for optimizing the images of centroids in the output space subject to the given distance constraints, and further develops local embedding constraints for mapping during the execution phase. Numerical simulations show that the proposed neural approximation is effective and reliable for nonlinear dimensionality reduction mapping.
(This article belongs to the Special Issue Numerical Analysis of Artificial Neural Networks)
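The core numerical step named in the abstract, Levenberg-Marquardt optimization of output-space images under distance constraints, can be sketched as follows. This is not the paper's algorithm: the five-point toy configuration, the perturbed initialization, and the use of SciPy's `least_squares` with `method="lm"` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "centroids": corners and center of a unit square; the distance
# matrix D plays the role of the paper's graph-derived distance constraints.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
n, dim = P.shape
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

def residuals(flat_y):
    """Distance-preserving residuals: ||Y_i - Y_j|| - D_ij for each pair."""
    Y = flat_y.reshape(n, dim)
    return np.array([np.linalg.norm(Y[i] - Y[j]) - D[i, j] for i, j in pairs])

# Levenberg-Marquardt from a perturbed initial configuration.
rng = np.random.default_rng(1)
y0 = (P + 0.3 * rng.normal(size=P.shape)).ravel()
sol = least_squares(residuals, y0, method="lm")
Y = sol.x.reshape(n, dim)
```

Note that SciPy's `lm` mode requires at least as many residuals as parameters, which is why the sketch uses five points (ten pairwise distances against ten coordinates).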

Article
Control Method of Flexible Manipulator Servo System Based on a Combination of RBF Neural Network and Pole Placement Strategy
Mathematics 2021, 9(8), 896; https://doi.org/10.3390/math9080896 - 17 Apr 2021
Cited by 10 | Viewed by 710
Abstract
Gravity and flexibility cause fluctuations of the rotation angle in the servo system of flexible manipulators, and these fluctuations seriously affect the motion accuracy of end-effectors. Therefore, this paper adopts a control method combining an RBF (Radial Basis Function) neural network and a pole placement strategy to suppress the rotation angle fluctuations. The RBF neural network is used to identify uncertain terms caused by the manipulator's flexibility and the time-varying characteristics of the dynamic parameters, while the pole placement strategy is used to optimize the parameters of the PD (Proportional-Derivative) controller to improve response speed and stability. Firstly, a dynamic model of flexible manipulators considering gravity is established based on the assumed mode method and Lagrange's principle. Then, the system's control characteristics are analyzed, and the pole placement strategy optimizes the parameters of the PD controller. Next, the control method based on the RBF neural network is proposed, and its stability is demonstrated using Lyapunov stability theory. Finally, numerical analysis and control experiments prove the effectiveness of the proposed control method: the means and standard deviations of the rotation angle error are reduced, showing that the method can effectively reduce the rotation angle error and improve motion accuracy.
(This article belongs to the Special Issue Numerical Analysis of Artificial Neural Networks)
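The role of the RBF network in the abstract, approximating an uncertain nonlinear term, can be illustrated with a minimal function-approximation sketch. The target function, the Gaussian centres, the width, and the least-squares weight fit are illustrative assumptions, not the paper's identification scheme.

```python
import numpy as np

# Gaussian RBF layer: phi_k(x) = exp(-(x - c_k)^2 / (2 s^2)).
centers = np.linspace(-3.0, 3.0, 15)
width = 0.5

def rbf_features(x):
    x = np.asarray(x, float).reshape(-1, 1)
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

# Fit the output weights by linear least squares on samples of a toy
# "uncertain term" f(x) = sin(x) + 0.5 x.
x_train = np.linspace(-3.0, 3.0, 200)
y_train = np.sin(x_train) + 0.5 * x_train
W, *_ = np.linalg.lstsq(rbf_features(x_train), y_train, rcond=None)

def rbf_net(x):
    """Network output: weighted sum of Gaussian activations."""
    return rbf_features(x) @ W
```

In an adaptive-control setting the weights would instead be updated online by an adaptation law derived from the Lyapunov analysis; the batch fit above only shows the representational idea.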

Article
Unpredictable Oscillations for Hopfield-Type Neural Networks with Delayed and Advanced Arguments
Mathematics 2021, 9(5), 571; https://doi.org/10.3390/math9050571 - 7 Mar 2021
Cited by 5 | Viewed by 759
Abstract
This is the first time that the method for the investigation of unpredictable solutions of differential equations has been extended to unpredictable oscillations of neural networks with a generalized piecewise constant argument, which is both delayed and advanced. The existence and exponential stability of the unique unpredictable oscillation are proven. According to the theory, the presence of unpredictable oscillations is strong evidence of Poincaré chaos; consequently, the paper is a contribution to applications of chaos in neuroscience. The model is inspired by chaotic time-varying stimuli, which allow studying the distribution of chaotic signals in neural networks: unpredictable inputs create an excitation wave of neurons that transmit chaotic signals. The technique of analysis includes the ideas used for differential equations with a piecewise constant argument. The results are illustrated by examples and simulations, carried out in MATLAB Simulink to demonstrate the simplicity of the diagrammatic approach.
(This article belongs to the Special Issue Numerical Analysis of Artificial Neural Networks)
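The class of models in the abstract can be illustrated numerically with an Euler simulation of a Hopfield-type network whose activation depends on a piecewise constant argument. For simplicity this sketch takes the purely delayed case gamma(t) = floor(t); the paper's generalized argument is alternately delayed and advanced, and all parameter values below are illustrative assumptions.

```python
import numpy as np

# x'(t) = -a x(t) + W tanh(x(gamma(t))) + u, with gamma(t) = floor(t):
# the deviating argument is frozen at the last integer knot.
a = 2.0
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])
u = np.array([0.1, -0.2])
dt, T = 0.001, 20.0
steps_per_unit = int(round(1.0 / dt))

x = np.array([1.0, -1.0])
knot = x.copy()                      # x(floor(t)), refreshed at integers
traj = [x.copy()]
for n in range(int(T / dt)):
    x = x + dt * (-a * x + W @ np.tanh(knot) + u)
    if (n + 1) % steps_per_unit == 0:  # crossed an integer time
        knot = x.copy()
    traj.append(x.copy())
traj = np.array(traj)
```

With the strong self-decay a = 2 dominating the interconnection weights, the sketch produces a bounded trajectory, consistent with the kind of stability analysis the paper carries out for the full delayed-and-advanced model.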