Special Issue "AI for IoT"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editors

Dr. Marco Zennaro
Guest Editor
Telecommunications/ICT4D Laboratory, The Abdus Salam International Centre for Theoretical Physics, Strada Costiera, 11-I-34151 Trieste, Italy
Interests: IoT; wireless networks; network data analysis
Prof. Dr. Pietro Manzoni
Guest Editor

Special Issue Information

Dear Colleagues,

Connecting billions of devices that exchange data without any central coordination is what makes the Internet of Things (IoT) one of today’s most promising technologies. Handling such a vast amount of data is extremely complex, and AI has therefore recently been put to use to make the IoT more efficient.

AI in the IoT can increase the value of the data gathered by IoT devices. These data will provide the basis for more efficient products and services that fulfill end users’ expectations.

The realization of IoT depends on being able to obtain insights that are hidden in the vast and growing seas of available data. Since current approaches do not scale to IoT volumes, the future realization of IoT’s promise is dependent on Artificial Intelligence methods to find the patterns, correlations and anomalies that have the potential to enable improvements in almost every facet of our daily lives.

For example, possible future uses of AI-powered IoT devices include security and access devices, emotional analysis, facial recognition, self-driving vehicles, and personal health tracking. However, many issues still need to be solved, such as the following: Which decisions are suited to AI? Where can they be made? How can they be made? How can they be propagated? What is their impact?

This Special Issue on AI for IoT seeks original, previously unpublished papers empirically addressing key issues and challenges related to the design, implementation, deployment, operation, and evaluation of novel approaches based on AI solutions for the Internet of Things.

Dr. Marco Zennaro
Prof. Pietro Manzoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and proceeding to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • IoT
  • AI
  • edge computing
  • pub/sub

Published Papers (7 papers)


Research

Article
The Deep Learning Solutions on Lossless Compression Methods for Alleviating Data Load on IoT Nodes in Smart Cities
Sensors 2021, 21(12), 4223; https://doi.org/10.3390/s21124223 - 20 Jun 2021
Abstract
Networking is crucial for smart city projects, as it offers an environment in which people and things are connected. This paper presents a chronology of factors in the development of smart cities, including IoT technologies as network infrastructure. An increasing number of IoT nodes leads to increasing data flow, which is a potential source of failure for IoT networks; the biggest challenge is that IoT nodes may have insufficient memory to handle all transaction data within the network. We aim in this paper to propose a potential compression method for reducing IoT network data traffic. We therefore investigate various lossless compression algorithms, such as entropy- and dictionary-based algorithms, as well as general compression methods, to determine which algorithm or method adheres to IoT specifications. Furthermore, this study conducts compression experiments using entropy coders (Huffman, adaptive Huffman) and dictionary coders (LZ77, LZ78) on five different IoT traffic datasets. Although all of the above algorithms can alleviate IoT data traffic, adaptive Huffman gave the best compression. We therefore propose a conceptual compression method for IoT data traffic that improves adaptive Huffman with deep learning concepts, using weights, pruning, and pooling in the neural network; the proposed algorithm is expected to obtain a better compression ratio. We also discuss the challenges of applying the proposed algorithm to IoT data compression given the memory and processor limitations of IoT devices, so that it can later be implemented in IoT networks.
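As a rough illustration of the entropy-coding baseline this paper builds on, the sketch below implements a plain static Huffman coder in Python. It is not the adaptive variant, and it omits the deep-learning pruning the authors propose; the byte stream and symbol frequencies below are invented to mimic the skewed distributions typical of repetitive sensor traffic.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a static Huffman code table {symbol: bitstring} for `data`."""
    freqs = Counter(data)
    if len(freqs) == 1:  # degenerate one-symbol input still needs one bit
        return {next(iter(freqs)): "0"}
    # Heap entries: (frequency, tiebreak, {symbol: partial code}); the tiebreak
    # integer keeps the heap from ever comparing the dicts.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merging two subtrees prefixes their codes with 0 and 1 respectively.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

def compress(data: bytes):
    codes = huffman_codes(data)
    bits = "".join(codes[b] for b in data)
    return bits, codes

# Skewed payload, typical of near-constant sensor readings, compresses well.
payload = bytes([20] * 900 + [21] * 80 + [22] * 15 + [23] * 5)
bits, codes = compress(payload)
ratio = (len(payload) * 8) / len(bits)
print(f"compression ratio: {ratio:.2f}x")
```

The highly skewed distribution gives the dominant symbol a one-bit code, which is where the compression gain comes from; uniform data would see almost no benefit.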
(This article belongs to the Special Issue AI for IoT)

Article
ARTYCUL: A Privacy-Preserving ML-Driven Framework to Determine the Popularity of a Cultural Exhibit on Display
Sensors 2021, 21(4), 1527; https://doi.org/10.3390/s21041527 - 22 Feb 2021
Abstract
We present ARTYCUL (ARTifact popularitY for CULtural heritage), a machine learning (ML)-based framework that graphically represents the footfall around an artifact on display at a museum or heritage site. The driving factor behind this framework is that security cameras have become universal, including at sites of cultural heritage. ARTYCUL uses the video streams of closed-circuit television (CCTV) cameras installed on such premises to detect human figures, and their coordinates with respect to the camera frames are used to visualize the density of visitors around specific display items. A framework that can display the popularity of artifacts would help curators organize exhibits more effectively; it could also help gauge whether a display item is being neglected due to incorrect placement. While items of similar interest can be placed in the vicinity of each other, an online recommendation system may also use the reputation of an artifact to catch the eye of visitors. Artificial intelligence-based solutions are well suited to analyzing Internet of Things (IoT) traffic due to the inherent veracity and volatile nature of the transmissions. The development of ARTYCUL provided deeper insight into avenues for applying IoT technology to the cultural heritage domain and into the suitability of ML for processing real-time data at a fast pace. We also observed common issues that hinder the use of IoT in the cultural domain; the proposed framework was designed with these same obstacles in mind and with a preference for backward compatibility.
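The density-visualization idea behind this paper can be sketched by binning person-detection coordinates into grid cells; person detection itself (from CCTV frames) is assumed to have happened upstream, and the coordinates, cell size, and exhibit position below are invented for the example.

```python
from collections import Counter

def footfall_density(detections, cell=1.0):
    """Bin (x, y) person detections into square cells of side `cell` (metres).

    Returns a Counter mapping cell indices to hit counts; the busiest cells
    approximate where visitors linger.
    """
    return Counter((int(x // cell), int(y // cell)) for x, y in detections)

# Synthetic detections clustered near an exhibit assumed to sit around (2, 3).
detections = [(2.1, 3.2), (2.4, 3.1), (1.9, 2.8), (7.5, 7.5), (2.2, 3.4)]
density = footfall_density(detections)
popular_cell, hits = density.most_common(1)[0]
print(popular_cell, hits)
```

A heatmap of these counts over the floor plan is essentially the graphical representation the abstract describes, with the real system accumulating detections over long video streams rather than a handful of points.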

Article
IoT-Based Bee Swarm Activity Acoustic Classification Using Deep Neural Networks
Sensors 2021, 21(3), 676; https://doi.org/10.3390/s21030676 - 20 Jan 2021
Abstract
Acoustic monitoring of animal activity is becoming one of the necessary tools in agriculture, including beekeeping, as it can assist in the control of beehives in remote locations. Bee swarm activity can be classified from audio signals using such approaches. An IoT-based acoustic swarm classification system using deep neural networks (DNNs) is proposed in this paper. Audio recordings were obtained from the Open Source Beehive project, and Mel-frequency cepstral coefficient (MFCC) features were extracted from the audio signal. The lossless WAV and lossy MP3 audio formats were compared for IoT-based solutions, and the impact of the DNN parameters on the classification results was analyzed. The best overall classification accuracy with uncompressed audio was 94.09%, while MP3 compression degraded the DNN accuracy by over 10%. The evaluation of the proposed system showed improved results compared to a previous hidden-Markov-model system.
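The MFCC front end mentioned in the abstract can be sketched in plain NumPy. This is the generic textbook pipeline (pre-emphasis, framing, Hamming window, power spectrum, mel filterbank, log, DCT), not the authors' exact configuration; every parameter value below is an assumption.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Compute a basic MFCC matrix (frames x n_ceps) from a 1-D signal."""
    # Pre-emphasis boosts high frequencies.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Overlapping frames, Hamming-windowed.
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank.
    def hz2mel(f): return 2595 * np.log10(1 + f / 700)
    def mel2hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel2hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log mel energies, then DCT-II to decorrelate into cepstral coefficients.
    feat = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return feat @ dct.T
```

In a system like the one described, each recording would be reduced to such an MFCC matrix before being fed to the DNN classifier; lossy MP3 encoding alters the spectrum these features are computed from, which is one plausible reason for the accuracy drop the authors report.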

Article
An Experimental Analysis of Attack Classification Using Machine Learning in IoT Networks
Sensors 2021, 21(2), 446; https://doi.org/10.3390/s21020446 - 10 Jan 2021
Abstract
In recent years, there has been a massive increase in the number of Internet of Things (IoT) devices and in the data generated by such devices. The participating devices in IoT networks can be problematic due to their resource-constrained nature, and integrating security on these devices is often overlooked, giving attackers an increased incentive to target them. As the number of possible attacks on a network increases, it becomes more difficult for traditional intrusion detection systems (IDS) to cope with them efficiently. In this paper, we highlight several machine learning (ML) methods that can be used in IDS: k-nearest neighbours (KNN), support vector machine (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), artificial neural network (ANN), and logistic regression (LR). These algorithms are compared experimentally for both binary and multi-class classification on the Bot-IoT dataset using accuracy, precision, recall, F1 score, and log loss. In the case of the HTTP distributed denial-of-service (DDoS) attack, the accuracy of RF is 99%, and the precision, recall, F1 score, and log loss results reveal that RF outperforms the other algorithms on all attack types in binary classification. In multi-class classification, however, KNN outperforms the other ML algorithms with an accuracy of 99%, which is 4% higher than RF.
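A minimal version of such a classifier comparison can be sketched with scikit-learn. The synthetic features below merely stand in for Bot-IoT network-flow features, and the scores they produce say nothing about the paper's results; only the overall shape of the experiment (fit several models, compare accuracy and F1) follows the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled attack/benign flow records.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))
    print(name, scores[name])
```

The full study additionally covers SVM, DT, NB, and ANN, multi-class labels, and the precision/recall/log-loss metrics; those slot into the same loop with extra model entries and metric calls.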
Article
Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization
Sensors 2021, 21(2), 444; https://doi.org/10.3390/s21020444 - 10 Jan 2021
Abstract
With the development of deep learning and edge computing, their combination can make artificial intelligence ubiquitous. Because of the constrained computation resources of edge devices, research on on-device deep learning focuses not only on model accuracy but also on model efficiency, for example, inference latency. Many attempts have been made to optimize existing deep learning models so that they can be deployed on edge devices, meeting specific application requirements while maintaining high accuracy. Such work requires not only professional knowledge but also a great deal of experimentation, which limits the customization of neural networks for varied devices and application scenarios. To reduce human intervention in designing and optimizing network structures, multi-objective neural architecture search methods have been proposed that automatically search for networks with high accuracy that also satisfy given hardware performance requirements. However, current methods commonly set accuracy and inference latency as performance indicators during the search and sample numerous network structures to obtain the required network. Without using the search objectives to regulate the search direction, a large number of useless networks are generated during the search, which greatly reduces search efficiency. In this paper, an efficient resource-aware search method is therefore proposed. First, a network inference consumption profiling model is established for a specific device; it directly yields the resource consumption of each operation in a network structure and the inference latency of the entire sampled network. Next, building on Bayesian search, a resource-aware Pareto Bayesian search is proposed in which accuracy and inference latency act as constraints that regulate the search direction. With a clearer search direction, overall search efficiency improves. Furthermore, a cell-based structure and lightweight operations are applied to optimize the search space, further enhancing search efficiency. The experimental results demonstrate that, with our method, the inference latency of the searched network structure is reduced by 94.71% without sacrificing accuracy, while search efficiency increases by 18.18%.
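The multi-objective selection step at the core of any Pareto-based search can be illustrated with a simple dominance filter over (validation error, latency) pairs. The Bayesian surrogate model and the actual architecture search space are not reproduced here, and the candidate numbers below are invented.

```python
def dominates(a, b):
    """a dominates b if it is no worse on both objectives (lower is better)
    and strictly better on at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(candidates):
    """Keep the candidates no other candidate dominates."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

# (validation error, inference latency in ms) for hypothetical sampled networks.
sampled = [(0.08, 120.0), (0.10, 40.0), (0.09, 80.0), (0.12, 35.0), (0.10, 60.0)]
front = pareto_front(sampled)
print(front)
```

In a resource-aware search, only architectures on (or near) this front are worth proposing next; (0.10, 60.0) above is discarded because (0.10, 40.0) matches its error at lower latency, which is exactly the kind of "useless" sample the paper's regulated search direction tries to avoid generating.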

Article
A Decision Support System for Water Optimization in Anti-Frost Techniques by Sprinklers
Sensors 2020, 20(24), 7129; https://doi.org/10.3390/s20247129 - 12 Dec 2020
Abstract
Precision agriculture is a growing sector that improves traditional agricultural processes through the use of new technologies. In southeast Spain, farmers continuously fight harsh conditions caused by climate change, among which the great variability of temperatures (up to 20 °C within the same day) stands out. This variability causes stone fruit trees to flower prematurely, and low winter temperatures then freeze the flowers, destroying the crop. Farmers use anti-frost techniques to prevent crop loss, and the most widely used are those based on water irrigation, as they are cheaper than the alternatives. However, these techniques waste a great deal of water, which is a scarce resource, especially in this area. In this article, we propose a novel intelligent Internet of Things (IoT) monitoring system to optimize the use of water in these anti-frost techniques while minimizing crop loss. The intelligent component of the IoT system is based on a multivariate Long Short-Term Memory (LSTM) model designed to predict low temperatures. We compare the multivariate model with its univariate counterpart to determine which predicts low temperatures more accurately. An accurate prediction of low temperatures translates into significant water savings, as anti-frost techniques are not activated unnecessarily. Our experimental results show that the proposed multivariate LSTM approach improves on the univariate version, obtaining a mean quadratic error no greater than 0.65 °C and a coefficient of determination R2 greater than 0.97. The proposed system has been deployed and is currently operating in a real environment with satisfactory performance.
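Before any LSTM (multivariate or univariate) can be trained on such sensor data, the time series must be cut into supervised (window, target) pairs. The sketch below shows that standard preprocessing step on synthetic readings; the paper's real features, lookback length, and horizon are unknown to us and assumed here.

```python
import numpy as np

def make_windows(series, lookback, horizon=1):
    """Turn a (T, F) multivariate series into (X, y) pairs for sequence models.

    X holds `lookback` past steps of all F features; y is the value of
    feature column 0 (assumed here to be temperature) `horizon` steps ahead.
    """
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1, 0])
    return np.array(X), np.array(y)

# Synthetic hourly readings: columns for temperature, humidity, wind speed.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 3))
X, y = make_windows(data, lookback=24)
print(X.shape, y.shape)
```

The multivariate model sees all three feature columns in each window, while the univariate counterpart would keep only the temperature column; everything else in the training pipeline stays the same, which is what makes the comparison in the paper clean.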

Article
RaveGuard: A Noise Monitoring Platform Using Low-End Microphones and Machine Learning
Sensors 2020, 20(19), 5583; https://doi.org/10.3390/s20195583 - 29 Sep 2020
Abstract
Urban noise is one of the most serious and underestimated environmental problems. According to the World Health Organization, noise pollution from traffic and other human activities negatively impacts population health and quality of life. Monitoring noise usually requires professional and expensive instruments called phonometers, which accurately measure sound pressure levels; in many cases phonometers are human-operated, so periodic fine-granularity city-wide measurements are expensive. Recent advances in the Internet of Things (IoT) open a window of opportunity for low-cost autonomous sound pressure meters, and such devices and platforms could enable fine time–space noise measurements throughout a city. Unfortunately, low-cost sound pressure sensors are inaccurate compared with phonometers, showing high variability in their measurements. In this paper, we present RaveGuard, an unmanned noise monitoring platform that exploits artificial intelligence strategies to improve the accuracy of low-cost devices. RaveGuard was initially deployed together with a professional phonometer for over two months in downtown Bologna, Italy, with the aim of collecting a large number of precise noise pollution samples. The resulting datasets were instrumental in designing InspectNoise, a library that IoT platforms can exploit to obtain similar precision without expensive phonometers. In particular, we applied supervised learning algorithms, trained with our datasets, to reduce the accuracy gap between the professional phonometer and an IoT platform equipped with low-end devices and sensors. Results show that RaveGuard, combined with the InspectNoise library, achieves a 2.24% relative error compared to professional instruments, enabling low-cost unmanned city-wide noise monitoring.
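The core idea of mapping low-cost readings onto phonometer readings can be sketched as supervised calibration. The snippet below uses a simple least-squares linear fit on synthetic co-located measurements; the paper's InspectNoise library presumably uses richer models trained on real Bologna data, so the bias, noise level, and resulting error here are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic co-located measurements: the cheap sensor is biased and noisy.
reference = rng.uniform(45, 85, size=500)                # phonometer dB(A)
cheap = 0.8 * reference + 7.0 + rng.normal(0, 1.0, 500)  # low-cost readings

# Fit a linear correction cheap -> reference by least squares.
slope, intercept = np.polyfit(cheap, reference, 1)
corrected = slope * cheap + intercept

rel_err = np.mean(np.abs(corrected - reference) / reference) * 100
print(f"mean relative error after calibration: {rel_err:.2f}%")
```

Even this one-parameter-pair correction removes the systematic bias and leaves only the sensor's random noise as residual error, which illustrates why co-deploying the cheap sensor next to a reference instrument for a training period is worth the effort.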
