Special Issue "Artificial Neural Networks: Design and Applications"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: 31 October 2022 | Viewed by 1499

Special Issue Editors

Prof. Dr. Hong Qu
Guest Editor
School of Computer Science and Engineering, University of Electronic Science and Technology of China, 610054 Chengdu, China
Interests: artificial intelligence; machine learning; neuromorphic computing
Dr. Malu Zhang
Guest Editor
Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077, Singapore
Interests: neuromorphic computing; deep learning; speech recognition
Dr. Mingsheng Fu
Guest Editor
The Singapore-ETH Centre, National University of Singapore, Singapore 119077, Singapore
Interests: artificial intelligence; machine learning; reinforcement learning

Special Issue Information

Dear Colleagues,

Deep learning, a major driving force behind artificial neural networks, has achieved remarkable progress in fields such as image recognition, speech processing, machine translation, and board games. In particular, deep learning has drastically outperformed traditional methods and has become the method of choice in many applications. However, many traditional network structures may not be able to cope with the increasing challenges of modern complex tasks, especially non-structural data and complicated decision-making problems. To address this problem, various neural network structures and algorithms have been proposed in recent years, such as graph neural networks, spiking neural networks, and deep reinforcement learning. This Special Issue will focus on state-of-the-art neural network models, innovative learning methods, and their applications. We seek contributions on topics that include, but are not limited to:

  • Graph neural networks
  • Deep neural networks
  • Spiking neural networks
  • Computer vision
  • Natural language processing
  • Bayesian methods
  • Multi-agent reinforcement learning
  • Brain-inspired artificial neural networks

Prof. Dr. Hong Qu
Dr. Malu Zhang
Dr. Mingsheng Fu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • graph neural networks
  • deep neural networks
  • spiking neural networks
  • computer vision
  • natural language processing
  • Bayesian methods
  • multi-agent reinforcement learning
  • brain-inspired artificial neural networks

Published Papers (2 papers)


Research

Article
Robust Sparse Bayesian Learning Scheme for DOA Estimation with Non-Circular Sources
Mathematics 2022, 10(6), 923; https://doi.org/10.3390/math10060923 - 14 Mar 2022
Viewed by 377
Abstract
In this paper, a robust DOA estimation scheme based on sparse Bayesian learning (SBL) is proposed for non-circular signals under impulse noise and mutual coupling (MC). Firstly, the Toeplitz property of the MC matrix is used to eliminate the effect of array MC, and the array aperture is extended by exploiting the properties of the non-circular signal. To eliminate the effect of impulse noise, the outlier part of the impulse noise is reconstructed together with the original signal in the signal matrix, and a coarse DOA estimate is obtained by balancing the accuracy and efficiency of parameter estimation using an alternating SBL update algorithm. Finally, a one-dimensional search in the vicinity of the detected spectral peaks yields a high-precision DOA estimate. The effectiveness and robustness of the algorithm in dealing with the above errors are demonstrated by extensive simulations.
(This article belongs to the Special Issue Artificial Neural Networks: Design and Applications)
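
The core building block of this scheme is sparse Bayesian learning over an angular grid. As a rough illustration of that ingredient only (not the paper's algorithm, which additionally handles non-circular sources, mutual coupling, and impulse noise), the Python/NumPy sketch below runs standard EM-style SBL updates on a simulated half-wavelength uniform linear array; the grid spacing, noise power, snapshot count, and iteration budget are all illustrative assumptions.

```python
# Minimal sparse Bayesian learning (SBL) sketch for on-grid DOA estimation.
# Illustrative only: plain ULA, circular Gaussian sources, known noise power,
# and none of the paper's non-circular / mutual-coupling / impulse-noise handling.
import numpy as np

rng = np.random.default_rng(0)

M, T = 10, 200                                   # sensors, snapshots
true_doas = np.deg2rad([-20.0, 15.0])            # ground-truth angles
grid = np.deg2rad(np.arange(-90, 90.5, 1.0))     # 1-degree search grid (assumption)
N = grid.size

def steering(theta, m=M):
    # Half-wavelength ULA steering vectors, shape (m, len(theta))
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(np.atleast_1d(theta))[None, :])

# Simulated snapshots: Y = A s + noise
A = steering(true_doas)
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
sigma2 = 0.1
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A @ S + noise

Phi = steering(grid)                             # M x N dictionary over the grid
gamma = np.ones(N)                               # per-grid-point signal powers (hyperparameters)

for _ in range(200):                             # EM-style SBL updates, fixed iteration budget
    Sigma = np.linalg.inv(Phi.conj().T @ Phi / sigma2 + np.diag(1.0 / gamma))
    Mu = Sigma @ Phi.conj().T @ Y / sigma2       # posterior mean of grid amplitudes
    gamma = np.mean(np.abs(Mu) ** 2, axis=1) + np.real(np.diag(Sigma))

# Crude peak picking for this toy example: take the two largest grid powers
peaks = np.argsort(gamma)[-2:]
print("Estimated DOAs (deg):", np.sort(np.rad2deg(grid[peaks])))
```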

Article
An Investigation on Spiking Neural Networks Based on the Izhikevich Neuronal Model: Spiking Processing and Hardware Approach
Mathematics 2022, 10(4), 612; https://doi.org/10.3390/math10040612 - 16 Feb 2022
Viewed by 769
Abstract
The central organ of the biological system is the Central Nervous System (CNS), which influences the other basic organs in the human body. The basic elements of this organ are neurons, synapses, and glial cells (such as astrocytes, which make up the largest share of glial cells in the human brain). Investigating, modeling, simulating, and implementing (realizing) different parts of the CNS in hardware are important steps toward a comprehensive neuronal system capable of emulating all aspects of the real nervous system. This paper uses a basic neuron model, the Izhikevich neuronal model, to obtain a faithful replica of the primary neural building block that can reproduce the behaviors of the human brain. The proposed approach reproduces all aspects of the Izhikevich neuron with a high degree of similarity and performance. The new model is based on Look-Up Table (LUT) modeling of the mathematical neuromorphic system, which can be realized with a high degree of correlation to the original model. The proposed procedure is considered in three cases: 100-point, 1000-point, and 10,000-point LUT modeling. Indeed, by removing the high-cost functions of the original model, the presented model can be implemented with low error, high speed, and low area (resource) usage in comparison with the original system. To test and validate the proposed hardware, a digital FPGA board (Xilinx Virtex-II) is used. Digital hardware synthesis shows that the presented approach can follow the Izhikevich neuron at higher speed than the original model, increase efficiency, and reduce overhead costs. Implementation results show an overall saving of 84.30% in FPGA resources and a maximum frequency of about 264 MHz for the proposed model, significantly higher than the 28 MHz of the original model.
(This article belongs to the Special Issue Artificial Neural Networks: Design and Applications)
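
For reference, the Izhikevich model integrates dv/dt = 0.04 v^2 + 5 v + 140 − u + I and du/dt = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches 30 mV. The Python sketch below is a minimal software analogue of the LUT idea in the abstract: the costly quadratic term is replaced by a pre-computed 1000-point table (one of the three cases studied). The voltage range, Euler step, and constant input current are illustrative assumptions; this is not the paper's FPGA design.

```python
# Minimal Izhikevich neuron with the quadratic term replaced by a look-up table.
# Software analogue of the LUT idea only; table size, voltage range, and Euler
# step are illustrative assumptions, not the paper's hardware implementation.
import numpy as np

# Regular-spiking parameters (Izhikevich, 2003)
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T, I = 0.5, 1000.0, 10.0               # time step (ms), total time (ms), input current

# Pre-compute f(v) = 0.04*v^2 + 5*v + 140 on a 1000-point grid (the LUT)
V_MIN, V_MAX, N_LUT = -80.0, 30.0, 1000
v_grid = np.linspace(V_MIN, V_MAX, N_LUT)
f_lut = 0.04 * v_grid**2 + 5.0 * v_grid + 140.0

def f_approx(v):
    """Nearest-entry LUT approximation of the quadratic membrane term."""
    idx = int(round((v - V_MIN) / (V_MAX - V_MIN) * (N_LUT - 1)))
    return f_lut[np.clip(idx, 0, N_LUT - 1)]

v, u = c, b * c                            # initial membrane potential and recovery variable
spike_times = []
for step in range(int(T / dt)):
    v += dt * (f_approx(v) - u + I)        # LUT in place of 0.04*v**2 + 5*v + 140
    u += dt * a * (b * v - u)
    if v >= 30.0:                          # spike: reset membrane and recovery variables
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T:.0f} ms (regular-spiking regime)")
```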
