Dynamics in Neural Networks

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Dynamical Systems".

Deadline for manuscript submissions: 31 October 2024 | Viewed by 8162

Special Issue Editors


Prof. Dr. Zheng Tang
Guest Editor
Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
Interests: artificial intelligence; neural networks; engineering; informatics

Dr. Yuki Todo
Guest Editor
Faculty of Electrical and Computer Engineering, Kanazawa University, Kanazawa-shi 920-1192, Japan
Interests: multiple-valued logic; neural networks; optimization

Special Issue Information

Dear Colleagues,

Artificial neural networks have intertwined with other methods of computational intelligence to produce a variety of new architectures, such as convolutional neural networks, back-propagation (BP) neural networks, and evolutionary neural networks, which offer both practical value and promise for further development. Despite this remarkable progress, the uncertainty inherent in a network makes finding a suitable architecture a time-consuming process: designing an artificial neural network for a specific problem is an extremely complex dynamic optimization task, and there is still no systematic rule to follow. Although neural networks rest on a relatively sound theoretical foundation, computational intelligence techniques such as evolutionary computation and other important dynamic optimization methods still lack a rigorous mathematical foundation. The analysis and proof of the stability and convergence of computational intelligence algorithms remain open research problems, and testing their effectiveness and efficiency through numerical experiments and specific applications is still the main way these algorithms are studied.

This Special Issue aims to serve as an international exchange platform for researchers in various fields to present the latest progress and ideas in neural computing and other areas related to neural networks.

Prof. Dr. Zheng Tang
Dr. Yuki Todo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • neural networks
  • computational intelligence
  • big data
  • computational neural models
  • brain-like systems
  • optimization problems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

23 pages, 3813 KiB  
Article
OFPI: Optical Flow Pose Image for Action Recognition
by Dong Chen, Tao Zhang, Peng Zhou, Chenyang Yan and Chuanqi Li
Mathematics 2023, 11(6), 1451; https://doi.org/10.3390/math11061451 - 17 Mar 2023
Cited by 3 | Viewed by 1767
Abstract
Most approaches to action recognition based on pseudo-images involve encoding skeletal data into RGB-like image representations. This approach cannot fully exploit the kinematic features and structural information of human poses, and convolutional neural network (CNN) models that process pseudo-images lack a global field of view and cannot completely extract action features from them. In this paper, we propose a novel pose-based action representation method called Optical Flow Pose Image (OFPI) in order to fully capitalize on the spatial and temporal information of skeletal data. Specifically, in the proposed method, an advanced pose estimator collects skeletal data, a human tracking algorithm locates the target person, and the skeletal data of that person are then extracted. The OFPI representation is obtained by aggregating these skeletal data over time. To test the superiority of OFPI and investigate the significance of a global field of view, we trained a simple CNN model and a transformer-based model. Both models achieved strong results; in particular, with the transformer-based model and its global field of view, the OFPI representation achieved 98.3% and 94.2% accuracy on the KTH and JHMDB datasets, respectively. Compared with other advanced pose representation methods and multi-stream methods, OFPI achieved state-of-the-art performance on the JHMDB dataset, indicating the utility and potential of this algorithm for skeleton-based action recognition research.
(This article belongs to the Special Issue Dynamics in Neural Networks)
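For readers who want a concrete picture of the pseudo-image idea this abstract builds on, the sketch below encodes a skeleton sequence into an RGB-like array by stacking joint positions with a frame-to-frame displacement channel. The array layout, channel choice, and normalization are illustrative assumptions, not the authors' exact OFPI encoding.

```python
import numpy as np

def skeleton_to_pseudo_image(joints):
    """Encode a skeleton sequence as an RGB-like pseudo-image.

    joints: array of shape (T, J, 2) holding 2-D joint coordinates
            for T frames and J joints (a hypothetical input layout).
    Returns an array of shape (J, T, 3) in [0, 1] whose channels are
    the x position, the y position, and the frame-to-frame joint
    displacement magnitude (a crude optical-flow-like motion cue).
    """
    joints = np.asarray(joints, dtype=np.float32)
    T, J, _ = joints.shape

    # Frame-to-frame displacement of each joint (zero for the first frame).
    motion = np.zeros((T, J), dtype=np.float32)
    motion[1:] = np.linalg.norm(joints[1:] - joints[:-1], axis=-1)

    # Stack positions and motion, then normalize each channel to [0, 1].
    channels = np.stack([joints[..., 0], joints[..., 1], motion], axis=-1)  # (T, J, 3)
    mins = channels.reshape(-1, 3).min(axis=0)
    maxs = channels.reshape(-1, 3).max(axis=0)
    channels = (channels - mins) / (maxs - mins + 1e-8)

    # Transpose so joints index rows and frames index columns, like an image.
    return channels.transpose(1, 0, 2)

# Example: 40 frames of 17 joints with random coordinates.
pseudo_image = skeleton_to_pseudo_image(np.random.rand(40, 17, 2))
print(pseudo_image.shape)  # (17, 40, 3)
```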

20 pages, 1343 KiB  
Article
A Dendritic Neuron Model Optimized by Meta-Heuristics with a Power-Law-Distributed Population Interaction Network for Financial Time-Series Forecasting
by Yuxin Zhang, Yifei Yang, Xiaosi Li, Zijing Yuan, Yuki Todo and Haichuan Yang
Mathematics 2023, 11(5), 1251; https://doi.org/10.3390/math11051251 - 4 Mar 2023
Cited by 7 | Viewed by 2090
Abstract
The famous McCulloch–Pitts neuron model has long been criticized for being overly simplistic. At the same time, the dendritic neuron model (DNM) has been shown to be effective in prediction problems, since it accounts for the nonlinear information-processing capacity of synapses and dendrites. Furthermore, because the classical error back-propagation (BP) algorithm typically suffers from an overabundance of saddle points and local minima traps, an efficient learning approach for DNMs remains desirable but difficult to implement. Besides BP, the mainstream DNM-optimization methods are meta-heuristic algorithms (MHAs). However, a large number of different MHAs have been developed over the decades, and how to screen suitable MHAs for optimizing DNMs has become a hot and challenging research topic. In this study, we classify MHAs into clusters with different population interaction networks (PINs). The performance of DNMs optimized by different clusters of MHAs is tested on a financial time-series-forecasting task. According to the experimental results, the DNM optimized by MHAs with power-law-distributed PINs outperforms the DNM trained with the BP algorithm.
(This article belongs to the Special Issue Dynamics in Neural Networks)
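The dendritic neuron model referred to in this abstract is commonly formulated with four layers: a sigmoidal synaptic layer, a multiplicative dendritic layer, a summing membrane layer, and a sigmoidal soma. The sketch below shows that standard forward pass on toy data; the parameter names (w, q, k, theta) and their values are assumptions for illustration, and the meta-heuristic training discussed in the paper is not reproduced here.

```python
import numpy as np

def dnm_forward(x, w, q, k=5.0, theta=0.5):
    """Forward pass of a standard dendritic neuron model (DNM).

    x    : input vector of shape (I,)
    w, q : synaptic weights and thresholds of shape (I, M) for M dendrites
    Layers: synaptic (sigmoid) -> dendritic (product) ->
            membrane (sum) -> soma (sigmoid).
    """
    x = np.asarray(x, dtype=np.float64)
    # Synaptic layer: one sigmoid per (input, dendrite) connection.
    Y = 1.0 / (1.0 + np.exp(-k * (w * x[:, None] - q)))   # (I, M)
    # Dendritic layer: multiplicative interaction along each branch.
    Z = np.prod(Y, axis=0)                                # (M,)
    # Membrane layer: sum the dendritic branch outputs.
    V = np.sum(Z)
    # Soma layer: final sigmoid producing the neuron output.
    return 1.0 / (1.0 + np.exp(-k * (V - theta)))

rng = np.random.default_rng(0)
out = dnm_forward(rng.random(4), rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
print(out)
```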

24 pages, 7564 KiB  
Article
Novel Synchronization Conditions for the Unified System of Multi-Dimension-Valued Neural Networks
by Jianying Xiao and Yongtao Li
Mathematics 2022, 10(17), 3031; https://doi.org/10.3390/math10173031 - 23 Aug 2022
Cited by 4 | Viewed by 1274
Abstract
This paper establishes novel synchronization conditions for the unified system of multi-dimension-valued neural networks (USOMDVNN). First, the general USOMDVNN model is set up on the basis of multidimensional algebra, Kirchhoff's current law, and neuronal properties. Then, a concise Lyapunov–Krasovskii functional (LKF) and switching controllers are constructed for the USOMDVNN. Moreover, new inequalities are employed whose variables, together with some adjustable parameters, take a concise and unified form that can be specialized to the real, complex, and quaternion cases. These parameters contribute to the construction of the concise LKF, the design of the general controllers, and the derivation of flexible criteria. The new criteria are obtained mainly by employing Lyapunov analysis, constructing the new LKF, applying the two unified inequalities, and designing nonlinear controllers. In particular, owing to the adjustable parameters, the resulting fixed time is smaller than in some existing results. Finally, three multidimensional simulations are presented to demonstrate the validity and advantages of the obtained results.
(This article belongs to the Special Issue Dynamics in Neural Networks)
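The abstract does not reproduce the model equations; as a rough illustration of the kind of drive-response setup and Lyapunov–Krasovskii functional typically used in such synchronization analyses (an assumed generic delayed network, not the paper's USOMDVNN), one may consider:

```latex
% Illustrative drive-response delayed neural networks (assumed generic form,
% not the USOMDVNN model of the paper)
\begin{aligned}
\dot{x}(t) &= -D\,x(t) + A\,f(x(t)) + B\,f(x(t-\tau)) + I, \\
\dot{y}(t) &= -D\,y(t) + A\,f(y(t)) + B\,f(y(t-\tau)) + I + u(t), \\
e(t)       &= y(t) - x(t),
\end{aligned}
% with a typical Lyapunov--Krasovskii functional for the error system
V(e_t) = e(t)^{\top} P\, e(t) + \int_{t-\tau}^{t} e(s)^{\top} Q\, e(s)\,\mathrm{d}s,
\qquad P \succ 0,\; Q \succ 0.
```

Synchronization criteria then follow from designing the controller u(t) so that the derivative of V along the error trajectories is negative definite.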

32 pages, 2404 KiB  
Article
A Novel Artificial Visual System for Motion Direction Detection in Grayscale Images
by Sichen Tao, Yuki Todo, Zheng Tang, Bin Li, Zhiming Zhang and Riku Inoue
Mathematics 2022, 10(16), 2975; https://doi.org/10.3390/math10162975 - 17 Aug 2022
Cited by 7 | Viewed by 1761
Abstract
How specific features of the environment are represented in the mammalian brain is an important unexplained mystery in neuroscience. Visual information is considered to be captured most preferentially by the brain. As one element of visual information, motion direction in the receptive field is thought to be extracted as early as the retinal direction-selective ganglion cell (DSGC) layer. However, knowledge of direction-selective (DS) mechanisms in the retina has remained at the cellular level, and a complete understanding of direction sensitivity in the visual system is still lacking. Previous DS models have been limited to one-dimensional black-and-white (binary) images or lack biological plausibility. In this paper, we propose a two-dimensional, eight-directional motion direction detection mechanism for grayscale images called the artificial visual system (AVS). The structure and neuronal functions of this mechanism closely follow neuroscientific accounts of the mammalian retinal DS pathway and are therefore biologically plausible. In particular, by introducing the horizontal contact pathway provided by horizontal cells (HCs) in the retinal inner nuclear layer and forming a functional collaboration with bipolar cells (BCs), the limitation that previous DS models can only recognize object motion directions in binary images is overcome, and the proposed model can recognize object motion directions in grayscale images. Through computer simulation experiments, we verify that AVS is effective and highly accurate, and that it is not affected by the shape, size, or location of objects in the receptive field. Its excellent noise immunity is also verified by adding multiple types of noise to the experimental data set. Compared with a classical convolutional neural network (CNN), AVS is significantly better in terms of effectiveness and noise immunity, and it offers advantages such as high interpretability, no need for learning, and easy hardware implementation. In addition, the activation characteristics of neurons in AVS are highly consistent with those of real neurons in the retinal DS pathway, showing strong neurofunctional similarity and brain-like characteristics. Moreover, AVS provides a novel perspective and approach for understanding and analyzing the mechanisms and principles of mammalian retinal direction sensitivity, in the face of a cognitive bottleneck on the DS pathway that has persisted for nearly 60 years.
(This article belongs to the Special Issue Dynamics in Neural Networks)
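To make the eight-direction idea concrete, the sketch below estimates a dominant motion direction between two grayscale frames by shifting the first frame one pixel along each of eight candidate directions and matching it against the second. This shift-and-compare scheme is a simplified stand-in assumed for illustration; it does not implement the HC/BC retinal circuitry described in the abstract.

```python
import numpy as np

# Eight candidate motion directions as (row, col) pixel shifts.
DIRECTIONS = {
    "right": (0, 1), "left": (0, -1), "up": (-1, 0), "down": (1, 0),
    "up-right": (-1, 1), "up-left": (-1, -1),
    "down-right": (1, 1), "down-left": (1, -1),
}

def detect_motion_direction(frame_t, frame_t1):
    """Estimate the dominant motion direction between two grayscale frames.

    For each of the eight candidate directions, frame_t is shifted by one
    pixel and compared with frame_t1; the direction with the smallest mean
    absolute difference wins.
    """
    frame_t = np.asarray(frame_t, dtype=np.float64)
    frame_t1 = np.asarray(frame_t1, dtype=np.float64)
    scores = {}
    for name, (dr, dc) in DIRECTIONS.items():
        shifted = np.roll(frame_t, shift=(dr, dc), axis=(0, 1))
        scores[name] = np.mean(np.abs(shifted - frame_t1))
    return min(scores, key=scores.get)

# Example: a bright square moving one pixel to the right.
a = np.zeros((32, 32)); a[10:15, 10:15] = 1.0
b = np.roll(a, shift=1, axis=1)
print(detect_motion_direction(a, b))  # "right"
```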
