Deep Learning for the Internet of Things

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 15528

Special Issue Editor


Guest Editor
Department of Computer Science, Texas Tech University, Lubbock, TX 79409, USA
Interests: machine and deep learning; cyber and software security; human factor modeling

Special Issue Information

Dear Colleagues,

Recent advances in artificial intelligence, machine learning, and, more importantly, deep learning, with its many algorithmic variations and platforms, have radically transformed conventional software applications into sets of smart, connected components capable of data-driven decision-making; intelligent data sensing and filtering; proactive and collaborative integration of software and hardware devices; and, importantly, the capture of large volumes of useful interaction data. Machine and deep learning-based components are becoming the backbone of modern systems such as cyber-physical systems (CPS) and the Internet of Things (IoT), where the intelligent integration and collaboration of smart devices, together with their outstanding data collection capabilities, make them a perfect application domain for such sophisticated learning algorithms. The value of smart, connected devices governed by software utilities has already been demonstrated in application domains such as healthcare, agriculture and farming, manufacturing, smart buildings, transportation, energy, and environmental monitoring systems. However, while these IoT-based systems can capture large amounts of data, these valuable and hard-to-produce data are not yet fully utilized. Deep learning makes it possible to explore such data in greater detail and to capture the hidden relationships that may exist among key factors and parameters, thus providing further insight into the underlying application domains. Furthermore, deep learning algorithms can address challenging problems that arise naturally in IoT systems but remain unsolved, such as real-time optimization, modeling the non-linear characteristics of IoT devices and components, and several classes of prediction and classification problems.

This Special Issue aims to foster deep learning-based modeling, solutions, and approaches to problems in Internet of Things systems. It seeks to explore deep learning algorithms, including generative adversarial models, attention-based networks, deep reinforcement learning, and recurrent deep neural networks, in capturing features and modeling the behavior of the involved software and hardware components.

Academic researchers and industrial practitioners are invited to contribute their research and experience to the state of the art in deep learning-based approaches to modeling the Internet of Things and its associated problems. Topics of interest include, but are not limited to, the following applications of deep learning techniques:

  • Modeling IoT systems using deep learning;
  • Generative adversarial networks (GANs) in IoT and CPS;
  • Long short-term memory (LSTM) modeling of IoT time series data;
  • Attention-based approaches to capture significant features in IoT;
  • Deep learning-based modeling and experience in IoT-based applications such as smart buildings, healthcare, agriculture, manufacturing, self-driving cars, and cyber security;
  • Deep reinforcement learning for modeling decision making and uncertainty in IoT.

Prof. Dr. Akbar Siami-Namin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • Internet of Things
  • generative adversarial networks
  • attention network

Published Papers (6 papers)


Research

25 pages, 1302 KiB  
Article
A Survey of DDOS Attack Detection Techniques for IoT Systems Using BlockChain Technology
by Zulfiqar Ali Khan and Akbar Siami Namin
Electronics 2022, 11(23), 3892; https://doi.org/10.3390/electronics11233892 - 24 Nov 2022
Cited by 5 | Viewed by 3228
Abstract
The Internet of Things (IoT) is a network of sensors that helps collect data 24/7 without human intervention. However, the network may suffer from problems such as low battery capacity, heterogeneity, and connectivity issues due to the lack of standards. Even though these problems can cause several performance hiccups, security issues need immediate attention, because hackers can access vital personal and financial information and then misuse it. These security issues can allow hackers to hijack IoT devices and use them to establish a botnet that launches a Distributed Denial of Service (DDoS) attack. Blockchain technology can provide security to IoT devices through secure authentication using public keys. Similarly, Smart Contracts (SCs) can improve the performance of the IoT–blockchain network through automation. However, the surveyed work shows that blockchain and SCs do not provide foolproof security; sometimes, attackers defeat these security mechanisms and initiate DDoS attacks. Thus, developers and security software engineers must be aware of different techniques to detect DDoS attacks. In this survey paper, we highlight different techniques to detect DDoS attacks. The novelty of our work is to classify DDoS detection techniques according to blockchain technology, so that researchers can enhance their systems with blockchain-based support for detecting threats. In addition, we provide general information about the studied systems and their workings. We also take recent surveys into account, comparing state-of-the-art DDoS surveys based on their data collection techniques and the DDoS attacks on IoT subsystems that they discuss. The study of different IoT subsystems shows that DDoS attacks also impact other computing systems, such as SCs, networking devices, and power grids. Hence, our work briefly describes DDoS attacks and their impacts on the above subsystems and IoT.
For instance, due to DDoS attacks, the targeted computing systems suffer delays that cause tremendous financial and utility losses to subscribers. Hence, we discuss the impacts of DDoS attacks in the context of the associated systems. Finally, we discuss machine-learning algorithms, performance metrics, and the underlying technology of IoT systems so that readers can grasp the detection techniques and the attack vectors. Moreover, associated systems such as Software-Defined Networking (SDN) and Field-Programmable Gate Arrays (FPGA) are a good source of security enhancement for IoT networks. Thus, we include a detailed discussion of future developments encompassing all major IoT subsystems. Full article
(This article belongs to the Special Issue Deep Learning for the Internet of Things)

16 pages, 1556 KiB  
Article
Deep Learning-Based Time-Series Analysis for Detecting Anomalies in Internet of Things
by Saroj Gopali and Akbar Siami Namin
Electronics 2022, 11(19), 3205; https://doi.org/10.3390/electronics11193205 - 6 Oct 2022
Cited by 10 | Viewed by 2232
Abstract
Anomaly detection in time-series data is an integral part of the Internet of Things (IoT). In particular, with the advent of sophisticated deep and machine learning-based techniques, this line of research has attracted many researchers seeking to develop more accurate anomaly detection algorithms. Anomaly detection has long been a challenging problem in security, especially in malware detection and data tampering. The advancement of the IoT paradigm, as well as the increasing number of cyber attacks on IoT networks worldwide, raises the question of whether flexible and simple yet accurate anomaly detection techniques exist. In this paper, we investigate the performance of deep learning-based models including the recurrent neural network-based Bidirectional LSTM (BI-LSTM), Long Short-Term Memory (LSTM), the CNN-based Temporal Convolutional Network (TCN), and CuDNN-LSTM, a fast LSTM implementation backed by cuDNN. In particular, we assess these models with respect to accuracy and the training time needed to build them. In our experiments, using different timestamps (i.e., 15, 20, and 30 min), we observe that the CuDNN-LSTM model outperforms the other models in accuracy, whereas the TCN-based model trains the fastest. We report the results of experiments comparing these four models with various look-back values. Full article
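
The paper evaluates these models over several look-back values. As background, here is a minimal sketch (not the authors' code) of how a univariate time series is windowed into supervised look-back samples before being fed to an LSTM-, BI-LSTM-, or TCN-style model:

```python
import numpy as np

def make_lookback_windows(series, look_back):
    """Slice a 1-D series into (window, next-value) pairs.

    Each window of `look_back` past observations is paired with the
    value that follows it -- the supervised form typically fed to
    recurrent or temporal-convolutional models for next-step prediction.
    """
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    return np.array(X), np.array(y)

# Illustrative series of 20 readings; 8 past readings per window.
series = np.arange(20, dtype=float)
X, y = make_lookback_windows(series, look_back=8)
print(X.shape, y.shape)  # (12, 8) (12,)
```

Varying `look_back` changes how much history each prediction can condition on, which is the trade-off the paper's comparison across look-back values explores.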
(This article belongs to the Special Issue Deep Learning for the Internet of Things)

13 pages, 2660 KiB  
Article
Stress Detection System for Working Pregnant Women Using an Improved Deep Recurrent Neural Network
by Sameer Dev Sharma, Sonal Sharma, Rajesh Singh, Anita Gehlot, Neeraj Priyadarshi and Bhekisipho Twala
Electronics 2022, 11(18), 2862; https://doi.org/10.3390/electronics11182862 - 9 Sep 2022
Cited by 12 | Viewed by 2244
Abstract
Stress is a concerning issue in today’s world. Stress in pregnancy harms both the development of children and the health of pregnant women. As a result, assessing the stress levels of working pregnant women is crucial to helping them develop and grow professionally and personally. In the past, many machine-learning (ML) and deep-learning (DL) algorithms have been developed to predict stress in women. These approaches, however, suffer from problems such as overly complicated designs, a high chance of misclassification, and low efficiency. With these considerations in mind, our article uses a deep-learning model known as the deep recurrent neural network (DRNN) to predict the stress levels of working pregnant women. Dataset preparation, feature extraction, optimal feature selection, and classification with DRNNs are all included in this framework. Duplicate attributes are removed, and missing values are filled in during the preprocessing of the dataset. Full article
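
The two preprocessing steps the abstract mentions, duplicate removal and missing-value imputation, can be sketched generically. The record fields and the zero-fill strategy below are illustrative placeholders, not taken from the paper:

```python
def preprocess(records, fill_value=0.0):
    """Drop exact duplicate records and fill missing (None) values.

    A generic sketch of the dataset-preparation stage: duplicates
    are detected by comparing all attributes, and missing entries
    are replaced with a constant (real pipelines might instead use
    a column mean or median).
    """
    seen, cleaned = set(), []
    for rec in records:
        key = tuple(rec.items())
        if key in seen:          # duplicate record: skip it
            continue
        seen.add(key)
        cleaned.append({k: (fill_value if v is None else v)
                        for k, v in rec.items()})
    return cleaned

# Hypothetical records with one duplicate and one missing value.
rows = [
    {"hr": 82, "stress": 3},
    {"hr": 82, "stress": 3},   # exact duplicate
    {"hr": None, "stress": 2}, # missing heart-rate value
]
cleaned = preprocess(rows)
print(cleaned)
```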
(This article belongs to the Special Issue Deep Learning for the Internet of Things)

22 pages, 466 KiB  
Article
A Comparative Analysis of Modeling and Predicting Perceived and Induced Emotions in Sonification
by Faranak Abri, Luis Felipe Gutiérrez, Prerit Datta, David R. W. Sears, Akbar Siami Namin and Keith S. Jones
Electronics 2021, 10(20), 2519; https://doi.org/10.3390/electronics10202519 - 15 Oct 2021
Cited by 3 | Viewed by 1847
Abstract
Sonification is the utilization of sounds to convey information about data or events. There are two types of emotions associated with sounds: (1) “perceived” emotions, in which listeners recognize the emotions expressed by the sound, and (2) “induced” emotions, in which listeners feel emotions induced by the sound. Although listeners may widely agree on the perceived emotion for a given sound, they often do not agree about the induced emotion of a given sound, so it is difficult to model induced emotions. This paper describes the development of several machine and deep learning models that predict the perceived and induced emotions associated with certain sounds, and it analyzes and compares the accuracy of those predictions. The results revealed that models built for predicting perceived emotions are more accurate than ones built for predicting induced emotions. However, the gap in predictive power between such models can be narrowed substantially through the optimization of the machine and deep learning models. This research has several applications in automated configurations of hardware devices and their integration with software components in the context of the Internet of Things, for which security is of utmost importance. Full article
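
One way to see why induced emotions are harder to model than perceived ones is listener disagreement. This small sketch (with made-up ratings, not the paper's data) measures agreement as the Shannon entropy of the labels given to a single sound; higher entropy means less agreement and noisier training targets:

```python
from collections import Counter
import math

def label_entropy(labels):
    """Shannon entropy (in bits) of listener labels for one sound.

    0 bits = perfect agreement; higher values = more disagreement.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical ratings: listeners largely agree on the perceived
# emotion but split widely on the induced one.
perceived = ["happy", "happy", "happy", "happy", "sad"]
induced = ["happy", "calm", "sad", "happy", "tense"]
print(label_entropy(perceived) < label_entropy(induced))  # True
```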
(This article belongs to the Special Issue Deep Learning for the Internet of Things)

19 pages, 513 KiB  
Article
Reinforcement Learning for Generating Secure Configurations
by Shuvalaxmi Dass and Akbar Siami Namin
Electronics 2021, 10(19), 2392; https://doi.org/10.3390/electronics10192392 - 30 Sep 2021
Cited by 2 | Viewed by 1524
Abstract
Many security problems in software systems stem from vulnerabilities caused by improper configurations. A poorly configured software system leads to a multitude of vulnerabilities that can be exploited by adversaries. The problem becomes even more serious when the architecture of the underlying system is static and the misconfiguration persists for a long period of time, enabling adversaries to thoroughly inspect the software system under attack during the reconnaissance stage. Employing diversification techniques such as Moving Target Defense (MTD) can minimize the risk of exposing vulnerabilities. MTD is an evolving defense technique through which the attack surface of the underlying system is continuously changing. However, the effectiveness of such a dynamically changing platform depends not only on the quality of the next configuration setting with respect to minimizing the attack surface but also on the diversity of the set of configurations generated. To address the problem of generating a diverse and large set of secure software and system configurations, this paper introduces an approach based on Reinforcement Learning (RL) through which an agent is trained to generate the desirable set of configurations. The paper reports the performance of the RL-based secure and diverse configurations through several case studies. Full article
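
As a hedged illustration of the general idea (not the paper's algorithm or reward function), a tabular Q-learning agent can be rewarded for producing configurations it has not generated before. Here the configuration space is a hypothetical set of three boolean flags, and the diversity bonus stands in for the paper's attack-surface-based reward:

```python
import random

random.seed(0)

N_FLAGS = 3                   # hypothetical boolean config settings
ACTIONS = range(N_FLAGS)      # one action = flip one flag
Q = {}                        # tabular Q-values: (state, action) -> value

def step(state, action):
    nxt = list(state)
    nxt[action] ^= 1          # flip the chosen configuration flag
    return tuple(nxt)

generated = set()             # distinct configurations produced so far
state = (0, 0, 0)
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(200):
    # epsilon-greedy action selection
    if random.random() < eps:
        a = random.choice(list(ACTIONS))
    else:
        a = max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
    nxt = step(state, a)
    reward = 1.0 if nxt not in generated else 0.0  # diversity bonus
    generated.add(nxt)
    # standard Q-learning update
    best_next = max(Q.get((nxt, x), 0.0) for x in ACTIONS)
    Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
        reward + gamma * best_next - Q.get((state, a), 0.0))
    state = nxt

print(len(generated))  # number of distinct configurations visited
```

The real problem is far larger (configuration settings are not three booleans) and the reward must also score how secure each configuration is, but the agent-explores-configuration-space structure is the same.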
(This article belongs to the Special Issue Deep Learning for the Internet of Things)

16 pages, 6534 KiB  
Article
Steganalysis of Adaptive Multi-Rate Speech Based on Extreme Gradient Boosting
by Congcong Sun, Hui Tian, Chin-Chen Chang, Yewang Chen, Yiqiao Cai, Yongqian Du, Yong-Hong Chen and Chih Cheng Chen
Electronics 2020, 9(3), 522; https://doi.org/10.3390/electronics9030522 - 21 Mar 2020
Cited by 4 | Viewed by 3230
Abstract
Steganalysis of adaptive multi-rate (AMR) speech is a hot topic for controlling cybercrimes grounded in steganography in related speech streams. In this paper, we first present a novel AMR steganalysis model, which utilizes extreme gradient boosting (XGBoost) as the classifier, instead of support vector machines (SVM) adopted in the previous schemes. Compared with the SVM-based model, this new model can facilitate the excavation of potential information from the high-dimensional features and can avoid overfitting. Moreover, to further strengthen the preceding features based on the statistical characteristics of pulse pairs, we present the convergence feature based on the Markov chain to reflect the global characterization of pulse pairs, which is essentially the final state of the Markov transition matrix. Combining the convergence feature with the preceding features, we propose an XGBoost-based steganalysis scheme for AMR speech streams. Finally, we conducted a series of experiments to assess our presented scheme and compared it with previous schemes. The experimental results demonstrate that the proposed scheme is feasible, and can provide better performance in terms of detecting the existing steganography methods based on AMR speech streams. Full article
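
The convergence feature described above, the final state of the Markov transition matrix, can be illustrated with a toy example. The transition probabilities below are invented; the paper derives its matrix from AMR pulse-pair statistics:

```python
import numpy as np

# Toy 3-state transition matrix (made-up numbers); each row sums to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Raising P to a high power makes every row converge to the same
# stationary distribution -- the chain's "final state".
M = np.linalg.matrix_power(P, 50)
stationary = M[0]
print(np.round(stationary, 3))
```

In the steganalysis setting, this limiting distribution summarizes the global behavior of pulse pairs in a single fixed-length vector, which is then concatenated with the preceding statistical features before classification.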
(This article belongs to the Special Issue Deep Learning for the Internet of Things)
