Recent Machine Learning Applications to Internet of Things (IoT)

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 June 2020) | Viewed by 33819

Special Issue Editors


Guest Editor
Dipartimento di Scienze e Tecnologie, Università degli studi di Napoli “Parthenope”, Centro Direzionale, Isola C4, 80143 Napoli, Italy
Interests: computational intelligence; machine learning; neural networks; clustering; data mining in bioinformatics and ecological informatics; wireless sensor networks; IoT; soft computing

Guest Editor
Dipartimento di Scienze e Tecnologie, Università degli studi di Napoli “Parthenope”, Centro Direzionale, Isola C4, 80143 Napoli, Italy
Interests: soft computing; machine learning; computational intelligence; data mining; data mining for astrophysics; geology and biology data; signal processing; audio streaming; brain–computer interface

Special Issue Information

Dear Colleagues,

Long ago, Mark Weiser envisioned a world of small, cheap, and robust networked processing devices, distributed across human and natural environments at all scales to aid our everyday life. Since then, technological developments, from nanotechnologies to computation and communication systems, have converged into what is nowadays known as the Internet of Things (IoT) and have made what Weiser envisioned a reality. IoT has paved the way for a plethora of new application domains while, at the same time, posing several challenges, as a multitude of devices, protocols, communication channels, architectures, and middleware exist. Nevertheless, IoT is growing exponentially in the number and heterogeneity of its actors; indeed, between 50 and 100 billion objects are expected by 2020, and this growth makes “intelligence” a critical turning point for the success of IoT.

In particular, we are witnessing an incremental development of interconnections between devices (smartphones, tablets, smartwatches, fitness trackers and wearable devices in general, smart TVs, home appliances, and more), people, processes, and data. Data generated by these devices are becoming big data and call for advanced learning and data mining techniques to efficiently and effectively understand, learn from, and reason about this enormous volume of information.

Moreover, thanks to the latest results of research on Artificial Intelligence, applications can count on an “intelligent” network of billions of sensors “aware” of their operating environment, able to listen, learn, and respond in order to offer new services and functionalities in the most disparate application domains, guaranteeing greater security, simplicity, and reliability.

This Special Issue aims to collect contributions concerning the use of intelligent techniques in any aspect of the IoT domain, from protocols to applications, to give the reader an up-to-date picture of the state of the art on the connection between machine learning, computational intelligence, and IoT.
General topics covered in this Special Issue include but are not limited to the following methodologies and IoT applications:

  • Methodologies:
    • Soft computing (e.g., fuzzy logic, rough sets);
    • Neural networks;
    • Neuro-fuzzy systems;
    • Deep learning;
    • Evolutionary and bio-inspired algorithms;
  • Applications:
    • Machine learning and computational intelligence-aided IoT;
    • Intelligent middleware solutions for IoT;
    • Brain–computer interface and IoT;
    • IoT and cloud computing;
    • Semantic web of things;
    • Social networks and IoT;
    • Internet of vehicles;
    • Context awareness;
    • Security and IoT.

Dr. Antonino Staiano
Dr. Angelo Ciaramella
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (8 papers)


Research

12 pages, 1763 KiB  
Article
Intellino: Processor for Embedded Artificial Intelligence
by Young Hyun Yoon, Dong Hyun Hwang, Jun Hyeok Yang and Seung Eun Lee
Electronics 2020, 9(7), 1169; https://doi.org/10.3390/electronics9071169 - 18 Jul 2020
Cited by 23 | Viewed by 5084
Abstract
The development of computation technology and of the artificial intelligence (AI) field is bringing AI into a wide variety of systems, and research on hardware-based AI processors is driving the miniaturization of AI devices. By placing the AI device at the edge of the Internet of Things (IoT), the system can perform AI operations promptly on the edge and reduce the workload of the system core. Because the edge inherits the constraints of the embedded system, the hardware must operate at low power within the restricted resources of a processor. In this paper, we propose intellino, a processor for embedded artificial intelligence. Intellino ensures low-power operation based on optimized AI algorithms and reduces the workload of the system core through the hardware implementation of a neural network. In addition, intellino’s dedicated protocol helps the embedded system enhance its performance. We measure intellino’s performance, achieving over 95% accuracy, and verify our proposal with a field-programmable gate array (FPGA) prototype.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
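
For context on the edge-offloading pattern the abstract describes, the sketch below shows a generic host-side routine that ships a feature vector to an external AI accelerator over a serial link and reads back the predicted class, so inference runs on the accelerator rather than on the system core. The port name, command byte, and framing are invented for illustration and are not intellino's actual dedicated protocol.

```python
# Hypothetical host-side driver for an edge AI accelerator; the framing below is
# invented for illustration and is NOT intellino's actual dedicated protocol.
import serial  # pyserial

CMD_CLASSIFY = 0x01          # hypothetical command byte

def classify_on_accelerator(port: str, features: bytes) -> int:
    """Send one feature vector to the accelerator and read back the predicted class."""
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link:
        # Frame: command byte, payload length, then the raw feature bytes.
        link.write(bytes([CMD_CLASSIFY, len(features)]) + features)
        reply = link.read(1)             # accelerator answers with a single class byte
        if len(reply) != 1:
            raise TimeoutError("no response from accelerator")
        return reply[0]

# Example: an 8-byte feature vector produced by on-device preprocessing.
# label = classify_on_accelerator("/dev/ttyUSB0", bytes(range(8)))
```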

19 pages, 7328 KiB  
Article
Exploiting Recurring Patterns to Improve Scalability of Parking Availability Prediction Systems
by Sergio Di Martino and Antonio Origlia
Electronics 2020, 9(5), 838; https://doi.org/10.3390/electronics9050838 - 19 May 2020
Cited by 9 | Viewed by 2012
Abstract
Parking Guidance and Information (PGI) systems aim at supporting drivers in finding suitable parking spaces, also by predicting availability at the driver’s Estimated Time of Arrival (ETA), leveraging information about the general parking availability situation. To make these predictions, most proposals in the literature dealing with on-street parking need to train a model for each road segment, which raises significant scalability issues when deploying a city-wide PGI. By investigating a real dataset, we found that on-street parking dynamics show a high temporal auto-correlation. In this paper, we present a new processing pipeline that exploits these recurring trends to improve scalability. The proposal includes two steps that reduce both the number of required models and the number of training examples. The effectiveness of the proposed pipeline has been empirically assessed on a real dataset of on-street parking availability from San Francisco (USA). Results show that the proposal provides parking predictions whose accuracy is comparable to state-of-the-art solutions based on one model per road segment, while requiring only a fraction of the training costs, thus being more likely to scale to city-wide scenarios.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
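
As a rough illustration of the scalability idea, the sketch below groups road segments by their recurring weekly availability profile and trains one shared predictor per group instead of one per segment. The data shapes, feature choices, and models are placeholders and do not reproduce the authors' pipeline.

```python
# Sketch: share one availability predictor across groups of road segments with
# similar recurring weekly occupancy profiles, instead of one model per segment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_segments, weeks, slots = 60, 4, 7 * 24            # hourly slots over a week
# availability[s, w, t]: free-space ratio of segment s in week w at weekly slot t (synthetic)
availability = rng.uniform(0.0, 1.0, size=(n_segments, weeks, slots))

# 1) Recurring pattern per segment: the mean weekly profile.
profiles = availability.mean(axis=1)                 # (n_segments, slots)

# 2) Group segments with similar profiles; one model per cluster, not per segment.
k = 5
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)

models = {}
for c in range(k):
    X, y = [], []
    for s in np.where(clusters == c)[0]:
        for w in range(weeks):
            for t in range(slots - 1):
                # Features: weekly slot index and current availability; target: 1 h ahead.
                X.append([t, availability[s, w, t]])
                y.append(availability[s, w, t + 1])
    models[c] = GradientBoostingRegressor().fit(np.array(X), np.array(y))

# Prediction for any segment reuses its cluster's shared model.
seg = 42
pred = models[clusters[seg]].predict([[10, availability[seg, 0, 10]]])
print(f"predicted availability for segment {seg}: {pred[0]:.2f}")
```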

13 pages, 671 KiB  
Article
Towards Near-Real-Time Intrusion Detection for IoT Devices using Supervised Learning and Apache Spark
by Valerio Morfino and Salvatore Rampone
Electronics 2020, 9(3), 444; https://doi.org/10.3390/electronics9030444 - 06 Mar 2020
Cited by 44 | Viewed by 4902
Abstract
In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks on these infrastructures are also growing proportionally. In this paper, the performance of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) against IoT systems is compared, both in terms of detection performance and of training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big-data processing. We show the implementation details and the performance of those algorithms on public datasets, using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%). Overall, one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 s for Decision Tree with 2 million rows). The experiments also show a very low application time (0.13 s for more than 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is very easy to implement using high- or low-level programming languages. In light of the results obtained, both in terms of computation times and identification performance, a hybrid approach for the detection of SYN-DOS cyber-attacks on IoT devices is proposed: the application of an explicit Random Forest model, implemented directly on the IoT device, along with a second-level analysis (training) performed in the Cloud.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
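
The two-level idea in the abstract (cloud-side training with Spark MLlib, an explicit tree model for the device) can be sketched with Spark's Python API as follows. The CSV layout, column names, and file path are assumptions for illustration, not the paper's actual dataset schema.

```python
# Hedged sketch: train a Random Forest on labeled traffic with Spark MLlib (cloud side);
# the explicit tree ensemble can then be exported as plain if/else rules for the device.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("syn-dos-detection").getOrCreate()

# Labeled flow records: numeric features plus a 0/1 label (1 = SYN-DOS attack). Placeholder file.
df = spark.read.csv("flows.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=feature_cols, outputCol="features").transform(df)

train, test = df.randomSplit([0.8, 0.2], seed=42)
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)
model = rf.fit(train)

acc = MulticlassClassificationEvaluator(
    labelCol="label", metricName="accuracy"
).evaluate(model.transform(test))
print(f"test accuracy: {acc:.4f}")

# The learned trees are explicit and readable; they could be translated by hand
# into nested if/else statements for on-device (edge) inference.
print(model.toDebugString[:500])
spark.stop()
```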

17 pages, 605 KiB  
Article
Classification of Transition Human Activities in IoT Environments via Memory-Based Neural Networks
by Giovanni Acampora, Gianluca Minopoli, Francesco Musella and Mariacarla Staffa
Electronics 2020, 9(3), 409; https://doi.org/10.3390/electronics9030409 - 28 Feb 2020
Cited by 3 | Viewed by 2871
Abstract
Human activity recognition is a crucial task in several modern applications based on the Internet of Things (IoT) paradigm, from the design of intelligent video surveillance systems to the development of elderly robot assistants. Recently, machine learning algorithms have been strongly investigated to improve the recognition of human activities. However, despite these research activities, there are few studies focusing on the efficient recognition of complex human activities, namely transitional activities, and no research aimed at evaluating the effects of noise in the data used to train the algorithms. In this paper, we bridge this gap by introducing an innovative activity recognition system based on a neural classifier endowed with memory, able to optimize the classification performance for both transitional and non-transitional human activities. The system recognizes human activities from unobtrusive IoT devices (such as the accelerometer and gyroscope) integrated in commonly used smartphones. The main peculiarity of the proposed system is the exploitation of a neural network extended with short-term memory information about the features of previous activities. The experimental study proves the reliability of the proposed system in terms of accuracy with respect to state-of-the-art classifiers and the robustness of the proposed framework with respect to noise in the data.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
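
A minimal sketch of one possible reading of a "neural classifier endowed with memory" is given below: each sensor window is augmented with the features of the previous window before classification. The synthetic data and the scikit-learn MLP stand in for, and do not reproduce, the authors' architecture.

```python
# Sketch under assumptions: augment each window of accelerometer/gyroscope features
# with the previous window's features (short-term memory) before feeding an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_features = 2000, 12        # e.g., per-window statistics of 3-axis accel + gyro
X = rng.normal(size=(n_windows, n_features))     # synthetic feature windows
y = rng.integers(0, 6, size=n_windows)           # 6 activity classes, incl. transitional ones

# Memory: stack each window with the previous one (zeros for the very first window).
prev = np.vstack([np.zeros((1, n_features)), X[:-1]])
X_mem = np.hstack([X, prev])

X_tr, X_te, y_tr, y_te = train_test_split(X_mem, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print(f"accuracy on synthetic data: {clf.score(X_te, y_te):.2f}")
```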

21 pages, 1479 KiB  
Article
First Order and Second Order Learning Algorithms on the Special Orthogonal Group to Compute the SVD of Data Matrices
by Simone Fiori, Lorenzo Del Rossi, Michele Gigli and Alessio Saccuti
Electronics 2020, 9(2), 334; https://doi.org/10.3390/electronics9020334 - 15 Feb 2020
Cited by 2 | Viewed by 2526
Abstract
The present paper deals with neural algorithms to learn the singular value decomposition (SVD) of data matrices. The neural algorithms utilized in this research were developed by Helmke and Moore (HM) and take the form of two continuous-time differential equations over the special orthogonal group of matrices. The purpose of the present paper is to develop and compare different numerical schemes, in the form of two alternating learning rules, to learn the singular value decomposition of large matrices on the basis of the HM learning paradigm. The numerical schemes developed here are both first-order (Euler-like) and second-order (Runge-like). Moreover, a reduced Euler scheme is presented that consists of a single learning rule for one of the factors involved in the SVD. Numerical experiments, performed to estimate the optical flow (a component of modern IoT technologies) in real-world video sequences, illustrate the features of the novel learning schemes.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
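
The toy sketch below illustrates, on a small synthetic matrix, a first-order (Euler-like) learning rule on the special orthogonal group in the spirit of the HM flow: alternating rotations of U and V drive U^T A V towards a diagonal matrix, which yields an SVD of A. This is a generic gradient-ascent sketch, not the paper's exact numerical schemes.

```python
# Illustrative Euler-like learning rule on SO(n) x SO(n): gradient ascent of
# f(U, V) = trace(N U^T A V); the matrix exponential of a skew-symmetric update
# keeps U and V exactly orthogonal at every step.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n))
N = np.diag(np.arange(n, 0, -1).astype(float))    # distinct diagonal weights

U, V = np.eye(n), np.eye(n)
eta = 0.05                                        # learning rate (Euler step size)

def skew(M):
    return 0.5 * (M - M.T)

for _ in range(2000):
    U = U @ expm(eta * skew(U.T @ A @ V @ N))
    V = V @ expm(eta * skew(V.T @ A.T @ U @ N))

S = U.T @ A @ V                                   # should be (close to) diagonal
off = np.linalg.norm(S - np.diag(np.diag(S)))
print("off-diagonal norm:", round(off, 6))
print("recovered singular values (up to sign/order):", np.sort(np.abs(np.diag(S)))[::-1])
print("reference (numpy SVD):", np.linalg.svd(A, compute_uv=False))
```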

12 pages, 1038 KiB  
Article
System Log Detection Model Based on Conformal Prediction
by Yitong Ren, Zhaojun Gu, Zhi Wang, Zhihong Tian, Chunbo Liu, Hui Lu, Xiaojiang Du and Mohsen Guizani
Electronics 2020, 9(2), 232; https://doi.org/10.3390/electronics9020232 - 31 Jan 2020
Cited by 8 | Viewed by 2622
Abstract
With the rapid development of the Internet of Things, its combination with machine learning, Hadoop, and other fields is a current development trend. The Hadoop Distributed File System (HDFS) is one of the core components of Hadoop and is used to process files that are divided into data blocks distributed across the cluster. Once the distributed log data become abnormal, serious losses can follow. When machine learning algorithms are used for system log anomaly detection, threshold-based classification models output only simple normal-or-abnormal predictions. This paper uses the statistical learning method of the conformity measure to calculate the similarity between test data and past experience. Compared with detection methods based on a static threshold, the conformity-measure approach can dynamically adapt to changing log data. By adjusting the maximum fault tolerance, a system administrator can better manage and monitor the system logs. In addition, the computational efficiency of the statistical learning method for conformity measurement was improved. This paper implements an intranet anomaly detection model based on log analysis and conducts detection trials on HDFS data sets quickly and efficiently.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
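
A compact sketch of conformity-measure (conformal) anomaly detection on log-derived features is shown below: a nonconformity score compares a new example with past normal data, and a p-value against a calibration set replaces a static threshold with a tunable significance level (the "maximum fault tolerance"). The feature representation and data are synthetic, not the paper's HDFS pipeline.

```python
# Sketch of conformal anomaly detection: nonconformity = average distance to the
# nearest "normal" neighbours; a calibration p-value decides normal vs. abnormal.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 10))   # e.g., event-count vectors per block
train, calib = normal[:400], normal[400:]

knn = NearestNeighbors(n_neighbors=5).fit(train)

def nonconformity(x):
    dist, _ = knn.kneighbors(x.reshape(1, -1))
    return dist.mean()                                     # larger = less similar to past data

calib_scores = np.array([nonconformity(x) for x in calib])

def p_value(x):
    # Fraction of calibration examples at least as "strange" as x.
    return (np.sum(calib_scores >= nonconformity(x)) + 1) / (len(calib_scores) + 1)

epsilon = 0.05                                             # maximum fault tolerance
test_normal = rng.normal(size=10)
test_anomaly = rng.normal(loc=6.0, size=10)
for name, x in [("normal-looking", test_normal), ("anomalous", test_anomaly)]:
    p = p_value(x)
    print(f"{name}: p = {p:.3f} -> {'ANOMALY' if p < epsilon else 'ok'}")
```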

17 pages, 11244 KiB  
Article
Individual Behavior Modeling with Sensors Using Process Mining
by Onur Dogan, Antonio Martinez-Millana, Eric Rojas, Marcos Sepúlveda, Jorge Munoz-Gama, Vicente Traver and Carlos Fernandez-Llatas
Electronics 2019, 8(7), 766; https://doi.org/10.3390/electronics8070766 - 09 Jul 2019
Cited by 16 | Viewed by 4733
Abstract
Understanding human behavior can assist in the adoption of satisfactory health interventions and improved care. One of the main problems lies in the definition of human behaviors, as human activities depend on multiple variables and are dynamic in nature. Although smart homes have advanced in recent years and contributed to unobtrusive human behavior tracking, artificial intelligence has not yet coped with the problem of the variability and dynamism of these behaviors. Process mining is an emerging discipline capable of adapting to the nature of high-variate data and of extracting knowledge to define behavior patterns. In this study, we analyze data from 25 in-house residents, acquired with indoor location sensors, by means of process mining clustering techniques, which allow obtaining workflows of human behavior inside the house. Data are clustered by adjusting two variables: the similarity index and the Euclidean distance between workflows. Thereafter, two main models are created: (1) a workflow view, to analyze the characteristics of the discovered clusters and the information they reveal about human behavior, and (2) a calendar view, in which common behaviors are rendered as a calendar, allowing relevant patterns to be detected depending on the day of the week and the season of the year. Three representative patients, who according to the proposed approach performed three different behaviors (stable, unstable, and complex), are investigated. This approach provides details of human behavior in the form of a workflow model, discovering user paths, frequent transitions between rooms, and the time the user spent in each room; in addition, showing the results in the calendar view increases the readability and visual appeal of the human behaviors, allowing us to detect patterns happening on special days.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
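
To make the clustering step more concrete, the toy sketch below summarizes each day of indoor-location data as minutes spent per room, groups similar days by Euclidean distance, and lays the labels out as a simple calendar view. The real study clusters full workflow models discovered by process mining, so this is only a simplified analogue with invented data.

```python
# Simplified analogue: cluster daily room-occupancy vectors and render a calendar of labels.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(4)
rooms = ["bedroom", "kitchen", "living room", "bathroom"]
days = 28
# minutes[d, r]: minutes the resident spent in room r on day d (synthetic)
minutes = rng.integers(0, 300, size=(days, len(rooms))).astype(float)

clustering = AgglomerativeClustering(n_clusters=3, linkage="ward")
labels = clustering.fit_predict(minutes)

# "Workflow view" stand-in: an average room profile per cluster of days.
for c in range(3):
    profile = minutes[labels == c].mean(axis=0)
    summary = ", ".join(f"{r}: {m:.0f} min" for r, m in zip(rooms, profile))
    print(f"cluster {c}: {summary}")

# "Calendar view": cluster label per day of week across the four weeks.
print("calendar (rows = weeks, cols = Mon..Sun):")
print(labels.reshape(4, 7))
```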

23 pages, 3234 KiB  
Article
Machine Learning Prediction Approach to Enhance Congestion Control in 5G IoT Environment
by Ihab Ahmed Najm, Alaa Khalaf Hamoud, Jaime Lloret and Ignacio Bosch
Electronics 2019, 8(6), 607; https://doi.org/10.3390/electronics8060607 - 30 May 2019
Cited by 45 | Viewed by 6426
Abstract
The 5G network is a next-generation wireless form of communication and the latest mobile technology. In practice, 5G utilizes the Internet of Things (IoT) to work in high-traffic networks with multiple nodes/sensors that attempt to transmit their packets to a destination simultaneously, which is a characteristic of IoT applications. Due to this, 5G offers vast bandwidth, low delay, and extremely high data transfer speed. Thus, 5G presents opportunities and motivations for utilizing next-generation protocols, especially the Stream Control Transmission Protocol (SCTP). However, the congestion control mechanisms of conventional SCTP negatively influence overall performance. Moreover, existing mechanisms contribute to reducing 5G and IoT performance. Thus, a new machine learning model based on a decision tree (DT) algorithm is proposed in this study to predict the optimal enhancement of congestion control in the wireless sensors of 5G IoT networks. The model was implemented on a training dataset to determine the optimal parametric setting in a 5G environment. The dataset was used to train the machine learning model and enable the prediction of optimal alternatives that can enhance the performance of the congestion control approach. The DT approach can also be used for other functions, especially prediction and classification. DT algorithms provide graphs that can be used by any user to understand the prediction approach. The DT C4.5 provided promising results, with more than 92% precision and recall.
(This article belongs to the Special Issue Recent Machine Learning Applications to Internet of Things (IoT))
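
As an interpretability illustration, the sketch below trains a scikit-learn decision tree (CART, used here as a stand-in for the paper's C4.5) on an invented table of 5G/IoT simulation settings and prints the learned rules; the features, labels, and thresholds are hypothetical, not the paper's dataset.

```python
# Hypothetical stand-in: a CART decision tree predicting which congestion-control
# configuration performs best for a given 5G/IoT scenario; all data are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1000
# Hypothetical per-run features: number of nodes, packet rate, link delay (ms), buffer size (KB).
X = np.column_stack([
    rng.integers(10, 200, n),        # nodes
    rng.uniform(10, 500, n),         # packets/s
    rng.uniform(1, 50, n),           # delay_ms
    rng.integers(16, 256, n),        # buffer_kb
])
# Hypothetical label: index of the best-performing congestion-control parameter set.
y = (X[:, 1] / (X[:, 2] + 1) > 20).astype(int) + (X[:, 0] > 100).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {tree.score(X_te, y_te):.2f}")

# The tree is itself a readable graph of rules, which is the interpretability
# argument made for decision trees in the abstract.
print(export_text(tree, feature_names=["nodes", "pkt_rate", "delay_ms", "buffer_kb"]))
```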
