New Insights and Techniques for Neural Networks

A special issue of Electronics (ISSN 2079-9292).

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 5141

Special Issue Editors


Guest Editor: Dr. Mohammed Salah Al-Radhi
Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, 1117 Budapest, Hungary
Interests: artificial intelligence; machine and deep learning for speech and language processing; conversational AI; human-computer dialogue systems; expressive, brain-inspired artificial neural networks

Guest Editor: Dr. Gabor Szucs
Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, 1117 Budapest, Hungary
Interests: artificial intelligence; data science; machine learning; media informatics; image processing; algorithms and computing in mathematics and engineering

Guest Editor: Dr. Raquel Justo Blanco
Department of Electrical and Electronics, University of the Basque Country, 48940 Leioa, Spain
Interests: natural language processing; dialogue systems; interactive pattern recognition frameworks; speech-based human-machine interaction; ASR systems

Guest Editor: Dr. Lamiaa Elrefaei
Electrical Engineering Department, Faculty of Engineering at Shoubra, Benha University, Cairo 11629, Egypt
Interests: image compression; watermarking and authentication; nano devices; digital watermarking; communication; biometrics; computer vision; signal, image, and video processing; pattern recognition; data hiding; machine learning; neural networks; artificial intelligence

Special Issue Information

Dear Colleagues,

In recent years, there has been a significant increase in the volume, veracity, and variety of machine learning applications. This has created growing challenges, as well as new opportunities, for researchers seeking to extract hidden features in a broad range of scientific and engineering fields. The success of deep learning stems primarily from the availability of algorithms, large datasets, and computing power. Neural networks in particular have become an essential part of artificial intelligence and machine learning, with applications including medical imaging, computer vision, speech signal processing, robotics, and more. Through the interconnection of nodes and a learning process, neural networks have a unique ability to extract meaning from complex or imprecise data. These networks have the potential to enhance measurements in a diverse range of research fields, and in-depth investigations of neural network-related topics are needed to implement effective, cost-competitive, and practical solutions.

The purpose of this Special Issue is to provide a platform for researchers to share their latest findings, discuss new methodologies, and showcase current applications and technologies using neural networks. By gathering contributions from experts in the field, this collection aims to supplement and advance the existing literature on the topic, providing new insights and perspectives. Specifically, the issue will focus on novel techniques, recent advancements, and emerging trends related to neural networks in various research fields, including speech processing, computer vision, signal processing, dialogue systems, and more.

Topics to be covered in this Special Issue include, but are not limited to:

  • Multimodal deep neural networks.
  • Speech processing, synthesis, and recognition.
  • Speech acoustic modeling.
  • Advances in dialogue systems and virtual assistants.
  • Linear and non-linear prediction in signals.
  • Novel machine learning methods and applications.
  • Image, video, and multidimensional signal processing.
  • Generative models for transformer networks.
  • Real-time decision systems.
  • Brain-inspired artificial neural networks.
  • Noise reduction.

We welcome original research articles, review articles, and case studies that offer new insights, techniques, or perspectives on these topics.

Dr. Mohammed Salah Al-Radhi
Dr. Gabor Szucs
Dr. Raquel Justo Blanco
Dr. Lamiaa Elrefaei
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multimodal deep neural networks
  • speech processing
  • speech acoustic modelling

Published Papers (4 papers)


Research

31 pages, 3433 KiB  
Article
Predicting Software Defects in Hybrid MPI and OpenMP Parallel Programs Using Machine Learning
by Amani S. Althiban, Hajar M. Alharbi, Lama A. Al Khuzayem and Fathy Elbouraey Eassa
Electronics 2024, 13(1), 182; https://doi.org/10.3390/electronics13010182 - 30 Dec 2023
Viewed by 1030
Abstract
High-performance computing (HPC) and its supercomputers are essential for solving the most difficult issues in many scientific computing domains. The proliferation of computational resources utilized by HPC systems has resulted in an increase in the associated error rates. As such, modern HPC systems promote a hybrid programming style that integrates the message-passing interface (MPI) and open multi-processing (OpenMP). However, this integration often leads to complex defects, such as deadlocks and race conditions, that are challenging to detect and resolve. This paper presents a novel approach: using machine learning algorithms to predict defects in C++-based systems that employ hybrid MPI and OpenMP models. We focus on employing a balanced dataset to enhance prediction accuracy and reliability. Our study highlights the effectiveness of the support vector machine (SVM) classifier, enhanced with term frequency (TF) and recursive feature elimination (RFE) techniques, which demonstrates superior accuracy and performance in defect prediction compared to other classifiers. This research contributes significantly to the field by providing a robust method for early defect detection in hybrid programming environments, thereby reducing development time and costs and improving the overall reliability of HPC systems.
(This article belongs to the Special Issue New Insights and Techniques for Neural Networks)
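As a rough illustration of the pipeline this abstract describes, the sketch below chains term-frequency features, recursive feature elimination, and a linear SVM using scikit-learn. The toy snippets, labels, and hyperparameters are invented for illustration; they are not the paper's dataset or settings.

```python
# Sketch (assumed setup): defect prediction from tokenized MPI/OpenMP code,
# using term-frequency (TF) features, recursive feature elimination (RFE),
# and a linear support vector machine.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import RFE
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy corpus: code snippets labeled 1 (defective) or 0 (clean).
snippets = [
    "MPI_Send MPI_Recv deadlock omp critical",
    "omp parallel for reduction MPI_Bcast",
    "MPI_Recv MPI_Recv blocking wait deadlock",
    "omp parallel for schedule static MPI_Gather",
]
labels = [1, 0, 1, 0]

model = Pipeline([
    ("tf", CountVectorizer()),                          # term-frequency features
    ("rfe", RFE(LinearSVC(), n_features_to_select=5)),  # keep 5 strongest terms
    ("svm", LinearSVC()),                               # final classifier
])
model.fit(snippets, labels)
print(model.predict(["MPI_Send MPI_Recv deadlock"]))
```

In a real setting the vectorizer would run over lexed source tokens and the balanced dataset the authors emphasize would be built before fitting.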

25 pages, 2315 KiB  
Article
Analysis of Function Approximation and Stability of General DNNs in Directed Acyclic Graphs Using Un-Rectifying Analysis
by Wen-Liang Hwang and Shih-Shuo Tung
Electronics 2023, 12(18), 3858; https://doi.org/10.3390/electronics12183858 - 12 Sep 2023
Cited by 1 | Viewed by 544
Abstract
A general lack of understanding pertaining to deep feedforward neural networks (DNNs) can be attributed partly to a lack of tools with which to analyze the composition of non-linear functions, and partly to a lack of mathematical models applicable to the diversity of DNN architectures. In this study, we analyze DNNs using directed acyclic graphs (DAGs) under a number of basic assumptions pertaining to activation functions, non-linear transformations, and DNN architectures. DNNs that satisfy these assumptions are referred to as general DNNs. Our construction of an analytic graph was based on an axiomatic method in which DAGs are built from the bottom up through the application of atomic operations to basic elements in accordance with regulatory rules. This approach allowed us to derive the properties of general DNNs via mathematical induction. We demonstrate that the proposed analysis method enables the derivation of some properties that hold true for all general DNNs, namely that DNNs "divide up" the input space, "conquer" each partition using a simple approximating function, and "sparsify" the weight coefficients to enhance robustness against input perturbations. This analysis provides a systematic approach with which to gain theoretical insights into a wide range of complex DNN architectures.
(This article belongs to the Special Issue New Insights and Techniques for Neural Networks)
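The "divide and conquer" property this abstract mentions can be made concrete with a tiny ReLU network: on every input region where the activation pattern (the set of firing units) is fixed, the network is exactly an affine map. The weights below are arbitrary toy values, not taken from the paper.

```python
# Sketch (assumed toy weights): a two-layer ReLU network partitions its
# input space into regions, each governed by a fixed activation pattern,
# and acts as a simple affine function on each region.
import numpy as np

W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [-1.0, 1.0]])
b1 = np.array([0.0, 0.0, -1.0, 0.0])
W2 = np.array([[1.0, -1.0, 2.0, 0.5]])
b2 = np.array([0.1])

def pattern(x):
    """Which hidden ReLUs fire at input x (identifies the local region)."""
    return tuple(W1 @ x + b1 > 0)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = np.array([0.3, -0.2])
x2 = x + np.array([1e-4, 0.0])
print(pattern(x) == pattern(x2))  # True: both inputs share one affine region

# Within that region the network IS affine: its Jacobian is
# W2 @ diag(pattern) @ W1, so the output difference is exactly linear.
J = W2 @ (np.diag(np.array(pattern(x), dtype=float)) @ W1)
print(np.allclose(forward(x2) - forward(x), J @ (x2 - x)))  # True
```

The paper's contribution is to establish such properties for general DAG-structured DNNs by induction, not just for this feedforward special case.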

23 pages, 832 KiB  
Article
A Smart Control System for the Oil Industry Using Text-to-Speech Synthesis Based on IIoT
by Ali Raheem Mandeel, Ammar Abdullah Aggar, Mohammed Salah Al-Radhi and Tamás Gábor Csapó
Electronics 2023, 12(16), 3380; https://doi.org/10.3390/electronics12163380 - 8 Aug 2023
Cited by 2 | Viewed by 1483
Abstract
Oil refineries have high operating expenses and are often exposed to increased asset integrity risks and functional failure. Real-time monitoring of their operations has always been critical to ensuring safety and efficiency. We propose a novel Industrial Internet of Things (IIoT) design that employs a neural-network-based text-to-speech (TTS) synthesizer to build an intelligent extension control system. We enhanced a TTS model to achieve high inference speed by employing the HiFi-GAN V3 vocoder with the FastSpeech 2 acoustic model. We tested our system on a low-resource embedded system in a real-time environment. Moreover, we customized the TTS model to generate two target speakers (female and male) using a small dataset. We performed an ablation analysis by conducting experiments to evaluate the performance of our design (IoT connectivity, memory usage, inference speed, and output speech quality). The results demonstrate that our system's Real-Time Factor (RTF) is 6.4 without the cache mechanism, a technique that recalls previously synthesized speech sentences from system memory. With the cache mechanism, our proposed model runs on a low-resource computational device at real-time speed (RTF equals 0.16, 0.19, and 0.29 when the memory holds 250, 500, and 1000 WAV files, respectively). Additionally, applying the cache mechanism reduced memory usage from 16.3% (for synthesizing a ten-second sentence) to 6.3%. Furthermore, according to the objective speech quality evaluation, our TTS model is superior to the baseline TTS model.
(This article belongs to the Special Issue New Insights and Techniques for Neural Networks)
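The cache mechanism that drives the RTF improvement above can be sketched in a few lines: synthesized waveforms are stored keyed by sentence text, so repeated announcements skip the slow neural inference. The `synthesize` function here is a cheap stand-in for the FastSpeech 2 + HiFi-GAN pipeline, not the authors' implementation.

```python
# Sketch (assumed placeholder for the TTS pipeline): cache synthesized
# audio by sentence so repeated alerts are served from memory at
# negligible cost instead of re-running inference.
import hashlib
import time

def synthesize(sentence: str) -> bytes:
    """Stand-in for expensive neural TTS inference."""
    time.sleep(0.01)  # simulate inference latency
    return hashlib.sha256(sentence.encode()).digest()  # fake waveform bytes

class CachedTTS:
    def __init__(self):
        self._cache = {}  # sentence -> waveform bytes

    def speak(self, sentence: str) -> bytes:
        if sentence not in self._cache:       # cache miss: run inference once
            self._cache[sentence] = synthesize(sentence)
        return self._cache[sentence]          # cache hit: instant replay

tts = CachedTTS()
first = tts.speak("Pressure in tank 3 is above threshold")
second = tts.speak("Pressure in tank 3 is above threshold")  # from cache
print(first == second)  # True: identical waveform, no second inference
```

Industrial alert vocabularies are small and repetitive, which is why a cache of a few hundred WAV files suffices to bring the reported RTF below 1.0.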

13 pages, 462 KiB  
Article
Data Aging Matters: Federated Learning-Based Consumption Prediction in Smart Homes via Age-Based Model Weighting
by Konstantinos Skianis, Anastasios Giannopoulos, Panagiotis Gkonis and Panagiotis Trakadas
Electronics 2023, 12(14), 3054; https://doi.org/10.3390/electronics12143054 - 12 Jul 2023
Cited by 4 | Viewed by 967
Abstract
Smart homes, powered mostly by Internet of Things (IoT) devices, have become very popular due to their ability to provide a holistic approach towards effective energy management. This is made feasible via the deployment of multiple sensors, which enables predicting energy consumption via machine learning approaches. In this work, we propose FedTime, a novel federated learning approach for predicting smart home consumption that takes into consideration the age of each client's time series dataset. The proposed method is based on federated averaging but aggregates local models trained on each smart home device to produce a global prediction model via a novel weighting scheme: each local model contributes more to the global model when its local data are more recent, and is penalized when the data are older, upon testing for a specific residence (client). The approach was evaluated on a real-world dataset of smart home energy consumption and compared with other machine learning models. The results demonstrate that the proposed method performs similarly to or better than other models in terms of prediction error; FedTime achieved a lower mean absolute error of 0.25 compared to FedAvg. This work contributes a novel federated learning approach that accounts for the age of clients' datasets, experiments with a publicly available dataset on grid import consumption prediction, and compares against centralized and decentralized baselines, all without the need for data centralization, which is a privacy concern for many households.
(This article belongs to the Special Issue New Insights and Techniques for Neural Networks)
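The age-based weighting idea can be sketched as a small variation on federated averaging: clients with fresher data receive larger aggregation weights. The exponential decay used below is an assumption for illustration; the paper's exact weighting scheme may differ.

```python
# Sketch (assumed decay function): age-weighted federated averaging.
# Each client's model parameters are down-weighted according to how
# stale that client's time series data are.
import numpy as np

def age_weighted_average(local_models, ages_days, decay=0.01):
    """Aggregate client parameter vectors, favoring fresher datasets."""
    w = np.exp(-decay * np.asarray(ages_days, dtype=float))  # fresher -> larger
    w /= w.sum()                                             # normalize weights
    return sum(wi * m for wi, m in zip(w, local_models))

# Three clients with identical model shapes but data of different ages.
models = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
ages = [0, 30, 365]  # days since each client's data were last updated
global_model = age_weighted_average(models, ages)
print(global_model)  # pulled toward the fresh clients' parameters
```

With equal ages this reduces to plain FedAvg (uniform weights); the year-old third client above contributes only a few percent of the aggregate.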
