Special Issue "Neural Networks and Their Applications"

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editor

Prof. Dr. Mario Muñoz Organero
Telecommunications Engineering, Carlos III University of Madrid, 28911 Leganes, Spain
Interests: neural networks; artificial intelligence; computer-supported learning; sensor-based applications

Special Issue Information

Dear Colleagues,

Neural-network-based models have grown continuously in complexity over the last few decades. The combination of high-performance computing resources (including FPGAs and GPUs), distributed architectures and cloud computing, the availability of big data sources and datasets, and increasing interest from the research community has created an unprecedented ecosystem for training complex models and applying them to many different real-life problems. From personal health recommenders, autonomous vehicles, and market sentiment analysis to natural language and image recognition, neural-network-based models are able to solve complex classification and regression problems. Deep neural networks have been developed to improve the ability to learn patterns from all kinds of data sources, and attention mechanisms, in combination with deep-learning models, have pushed accuracy even further.

This Special Issue aims to collect publications that showcase the power and diversity of novel neural networks and how they can be applied to solve real cases. The application of neural networks in different domains will open the door to new scenarios and encourage their adoption by both the research community and industry. The Special Issue welcomes high-quality papers, whether from a theoretical perspective or from a practical and experimental approach. Submissions are expected to tackle the major challenges linked to the use of artificial neural networks on real problems and to propose solutions based on neural-network models.

Prof. Dr. Mario Muñoz Organero
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Neural networks
  • High-performance computing
  • Complexity
  • Complex models
  • Machine learning
  • Big data

Published Papers (4 papers)


Research


Article
Using Traffic Sensors in Smart Cities to Enhance a Spatio-Temporal Deep Learning Model for COVID-19 Forecasting
Mathematics 2023, 11(18), 3904; https://doi.org/10.3390/math11183904 - 14 Sep 2023
Abstract
Respiratory viruses, such as COVID-19, spread over time and space based on human-to-human interactions. Human mobility plays a key role in the propagation of the virus. Different types of sensors in smart cities are able to continuously monitor traffic-related human mobility, showing the impact of COVID-19 on traffic volumes and patterns. In a similar way, traffic volumes measured by smart traffic sensors provide a proxy variable for human mobility, which is expected to have an impact on new COVID-19 infections. Adding traffic data from smart city sensors to machine learning models designed to estimate upcoming COVID-19 incidence values should therefore improve results compared to models based on COVID-19 data alone. This paper proposes a novel model that extracts spatio-temporal patterns in the spread of the COVID-19 virus for short-term predictions by organizing COVID-19 incidence and traffic data as interrelated temporal sequences of spatial images. The model is trained and validated with 84 weeks of real data from the city of Madrid, Spain, combining information from 4372 traffic measuring points and 143 COVID-19 PCR test centers. The results are compared with a baseline model designed to extract spatio-temporal patterns from COVID-19-only sequences of images, showing that using traffic information enhances the results when forecasting a new wave of infections (MSE values are reduced by 70%). The information that traffic data carries about the spread of the COVID-19 virus is also analyzed, showing that traffic data alone is not sufficient for accurate COVID-19 forecasting.
(This article belongs to the Special Issue Neural Networks and Their Applications)
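
The abstract does not spell out the network architecture, so the sketch below is only a plausible reading of the approach: a ConvLSTM model (built with Keras) that consumes sequences of two-channel spatial grids, one channel for COVID-19 incidence and one for traffic volume, and predicts the next incidence map. The grid size, sequence length, and layer widths are assumptions, not values taken from the paper.

```python
# Hypothetical sketch, not the paper's exact architecture.
# Assumes incidence and traffic data are rasterized into weekly 2-channel spatial grids.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W = 8, 32, 32   # 8 past weeks over a 32x32 spatial grid (assumed values)
N_CHANNELS = 2              # channel 0: COVID-19 incidence, channel 1: traffic volume

def build_model():
    """ConvLSTM encoder mapping a sequence of incidence+traffic images
    to a single-channel map of next-week incidence."""
    inputs = layers.Input(shape=(SEQ_LEN, H, W, N_CHANNELS))
    x = layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False)(x)
    outputs = layers.Conv2D(1, kernel_size=1, activation="relu")(x)
    return models.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for the 84 weeks of Madrid grids described in the abstract.
X = np.random.rand(100, SEQ_LEN, H, W, N_CHANNELS).astype("float32")
y = np.random.rand(100, H, W, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
print("MSE on toy data:", model.evaluate(X, y, verbose=0))
```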

Article
Threat Hunting System for Protecting Critical Infrastructures Using a Machine Learning Approach
Mathematics 2023, 11(16), 3448; https://doi.org/10.3390/math11163448 - 09 Aug 2023
Abstract
Cyberattacks are increasing daily in number and diversity, and this tendency is expected to escalate dramatically in the foreseeable future, with critical infrastructure (CI) assets and networks being no exception to this trend. As time goes by, cyberattacks become more complex and remain unknown until they spawn, making them very difficult to detect and remediate. To react to such cyberattacks, usually defined as zero-day attacks, organizations’ security departments rely on cyber-security specialists known as threat hunters. These threat hunters must process, in short periods of time, all the data generated by the organization’s users (data which are mainly benign, repetitive, and follow predictable patterns) in order to detect unusual behaviors. The application of artificial intelligence, specifically machine learning (ML) techniques (for instance, NLP, C-RNN-GAN, or GNN), can remarkably impact the real-time analysis of these data and help to discriminate between harmless and malicious data, but not every technique is helpful in every circumstance; as a consequence, those specialists must know which techniques fit best at each specific moment. The main goal of the present work is to design a distributed and scalable system for threat hunting based on ML, with a special focus on the needs and characteristics of critical infrastructures.
(This article belongs to the Special Issue Neural Networks and Their Applications)
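
The system described in the paper is distributed and may combine several ML techniques; as a stand-in, the sketch below shows the core idea in its simplest form: an unsupervised anomaly detector (scikit-learn's IsolationForest) that flags unusual events in mostly benign, repetitive telemetry for a threat hunter to review. The feature set and thresholds are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: flag unusual behaviour in mostly benign event data
# with an unsupervised anomaly detector, then hand the suspects to a threat hunter.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-event features: [bytes sent, session duration (s), failed logins].
benign = rng.normal(loc=[500, 30, 0.1], scale=[50, 5, 0.3], size=(5000, 3))
attacks = rng.normal(loc=[5000, 300, 8.0], scale=[500, 50, 2.0], size=(25, 3))
events = np.vstack([benign, attacks])

X = StandardScaler().fit_transform(events)
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

scores = detector.decision_function(X)   # lower score = more anomalous
flagged = np.argsort(scores)[:25]        # top suspects for manual review
print(f"{len(flagged)} events flagged for the threat hunter")
```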

Article
Neural Network Approaches for Computation of Soil Thermal Conductivity
Mathematics 2022, 10(21), 3957; https://doi.org/10.3390/math10213957 - 25 Oct 2022
Cited by 1
Abstract
The effective thermal conductivity (ETC) of soil is an essential parameter for the design and unhindered operation of underground energy transportation and storage systems. Various experimental, empirical, semi-empirical, mathematical, and numerical methods have been tried in the past, but they either lack accuracy or are computationally cumbersome. Recent developments in computer science have provided a new computational approach, neural networks, which are easy to implement, fast, versatile, and reasonably accurate. In this study, we present three classes of neural networks based on different network constructions, learning, and computational strategies to predict the ETC of soil. A total of 384 data points are collected from the literature, and three networks, an artificial neural network (ANN), the group method of data handling (GMDH), and gene expression programming (GEP), are constructed and trained. The best accuracy of each network is measured with the coefficient of determination (R²) and found to be 91.6, 83.2, and 80.5 for the ANN, GMDH, and GEP, respectively. Furthermore, two sands with 80% and 99% quartz content are measured, and the best-performing network from each class (ANN, GMDH, and GEP) is independently validated. The GEP model provided the best estimate for the 99% quartz sand, and the GMDH model for the 80% quartz sand.
(This article belongs to the Special Issue Neural Networks and Their Applications)
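
As a rough illustration of the ANN variant (not the paper's exact networks or dataset), the sketch below trains a small MLP regressor on synthetic soil features, assumed here to be dry density, degree of saturation, and quartz fraction, and reports the coefficient of determination on held-out data.

```python
# Minimal sketch with assumed features; the paper's 384-point dataset is not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Synthetic stand-in data: [dry density (g/cm3), saturation (-), quartz fraction (-)]
X = rng.uniform([1.2, 0.0, 0.2], [2.0, 1.0, 1.0], size=(384, 3))
# Toy target loosely mimicking how ETC rises with density, saturation, and quartz content.
y = 0.5 + 1.5 * X[:, 0] * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 0.1, 384)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("R2 on held-out data:", round(r2_score(y_test, model.predict(X_test)), 3))
```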

Review


Review
Perceptron: Learning, Generalization, Model Selection, Fault Tolerance, and Role in the Deep Learning Era
Mathematics 2022, 10(24), 4730; https://doi.org/10.3390/math10244730 - 13 Dec 2022
Cited by 6
Abstract
The single-layer perceptron, introduced by Rosenblatt in 1958, is one of the earliest and simplest neural network models. However, it is incapable of classifying linearly inseparable patterns. A new era of neural network research started in 1986, when the backpropagation (BP) algorithm was rediscovered for training the multilayer perceptron (MLP) model. An MLP with a large number of hidden nodes can function as a universal approximator. To date, the MLP remains the most fundamental, important, and widely investigated neural network model, and even in the current AI and deep learning era it is still among the most studied and used models. Numerous new results have been obtained in the past three decades. This survey gives a comprehensive, state-of-the-art introduction to the perceptron model, with emphasis on learning, generalization, model selection, and fault tolerance. The role of the perceptron model in the deep learning era is also described. The paper provides a concluding survey of perceptron learning, covering all the major achievements of the past seven decades, and also serves as a tutorial on perceptron learning.
(This article belongs to the Special Issue Neural Networks and Their Applications)
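
For readers new to the topic, the sketch below implements Rosenblatt's single-layer perceptron learning rule on a toy, linearly separable problem (logical AND); it is a generic textbook example, not code from the survey.

```python
# Rosenblatt perceptron learning rule on a linearly separable toy problem (logical AND).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                 # AND is linearly separable

w = np.zeros(X.shape[1])                   # weights
b = 0.0                                    # bias
lr = 0.1                                   # learning rate

for epoch in range(20):
    errors = 0
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)  # step activation
        update = lr * (target - pred)      # perceptron update rule
        w += update * xi
        b += update
        errors += int(update != 0)
    if errors == 0:                        # converged: all points classified correctly
        break

print("weights:", w, "bias:", b)
print("predictions:", [int(np.dot(w, xi) + b > 0) for xi in X])
```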
