Special Issue "Machine Learning in IoT Networking and Communications"

A special issue of Journal of Sensor and Actuator Networks (ISSN 2224-2708). This special issue belongs to the section "Big Data, Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 March 2022.

Special Issue Editor

Dr. Mona Jaber
Guest Editor
School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK
Interests: IoT networks; IoT-enabled digital twins; machine learning for smart urban mobility; IoT security and communication

Special Issue Information

Dear Colleagues,

The fast and wide spread of Internet of Things (IoT) applications offers new opportunities in multiple domains but also presents new challenges. A skyrocketing number of IoT devices (sensors, actuators, etc.) are being deployed to collect critical data and to control environments such as manufacturing, healthcare, urban/built areas, and public safety. At the same time, machine learning (ML) has shown significant success in transforming heterogeneous and complex datasets into coherent output and actionable insights. Thus, the marriage of ML and IoT has a pivotal role in enabling smart environments with precision in decision-making and adaptive automation. However, leveraging ML and IoT still faces significant challenges that obstruct the full realisation of the foreseen opportunities. Direct challenges relate to scalability, security, accessibility, resilience, and latency, all of which have resulted in a growing corpus of research addressing one or more of these issues. Nevertheless, the overarching challenge concerns the exportability of advancements in this area across multiple applications. For instance, an acoustic scene classification method that successfully detects violence in a small town could fail completely in a busy city, and an autonomous pod trained to deliver groceries in a controlled environment would not succeed elsewhere. Hence, the biggest challenge in pushing forward the seamless integration of ML and IoT systems is the exportability of technologies, which creates opportunities for novel research and interdisciplinary efforts.

The papers in this Special Issue will focus on state-of-the-art research and challenges in leveraging ML and IoT. We solicit papers covering topics of interest that include, but are not limited to:

  • ML and IoT for system deployment and operation;
  • ML and IoT for assisted automation;
  • ML-enabled real-time IoT data analytics;
  • ML- and IoT-enabled digital twin;
  • Cloud/edge computing systems for IoT employing ML;
  • ML-enabled spatial-temporal IoT data fusion for intelligent decision making;
  • Data-centric simulations for IoT systems;
  • ML for IoT application orchestration;
  • ML for managing security in IoT data processing;
  • ML for IoT attack detection and prevention;
  • Testbed and empirical studies.

Dr. Mona Jaber
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Sensor and Actuator Networks is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • artificial intelligence
  • Internet of Things
  • digital twin
  • exportable AI
  • IoT security
  • IoT data fusion

Published Papers (5 papers)


Research


Article
Network Attack Classification in IoT Using Support Vector Machines
J. Sens. Actuator Netw. 2021, 10(3), 58; https://doi.org/10.3390/jsan10030058 - 31 Aug 2021
Abstract
Machine learning (ML) techniques learn a system by observing it. Events and occurrences in the network define what is expected of the network’s operation. It is for this reason that ML techniques are used in the computer network security field to detect unauthorized intervention. In the event of suspicious activity, the result of the ML analysis deviates from the definition of expected normal network activity and the suspicious activity becomes apparent. Support vector machines (SVM) are ML techniques that have been used to profile normal network activity and classify it as normal or abnormal. They are trained to configure an optimal hyperplane that classifies unknown input vectors’ values based on their positioning on the plane. We propose to use SVM models to detect malicious behavior within low-power, low-rate and short range networks, such as those used in the Internet of Things (IoT). We evaluated two SVM approaches, the C-SVM and the OC-SVM, where the former requires two classes of vector values (one for the normal and one for the abnormal activity) and the latter observes only normal behavior activity. Both approaches were used as part of an intrusion detection system (IDS) that monitors and detects abnormal activity within the smart node device. Actual network traffic with specific network-layer attacks implemented by us was used to create and evaluate the SVM detection models. It is shown that the C-SVM achieves up to 100% classification accuracy when evaluated with unknown data taken from the same network topology it was trained with and 81% accuracy when operating in an unknown topology. The OC-SVM that is created using benign activity achieves at most 58% accuracy.
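The two SVM variants contrasted above can be sketched as follows. This is an illustrative sketch with synthetic feature vectors (hypothetical packet-rate and payload-size features), not the authors' code or dataset:

```python
# Illustrative sketch: a two-class C-SVM (trained on labelled normal and
# attack samples) versus a one-class SVM (trained on benign traffic only),
# using synthetic Gaussian feature clusters in place of real IoT traces.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(loc=[10.0, 64.0], scale=[1.0, 8.0], size=(200, 2))
attack = rng.normal(loc=[40.0, 512.0], scale=[5.0, 32.0], size=(200, 2))

# C-SVM: needs both classes, labelled normal (0) and attack (1).
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)
c_svm = SVC(kernel="rbf", C=1.0).fit(X, y)

# OC-SVM: sees only benign activity; predicts +1 (normal) / -1 (outlier).
oc_svm = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)

probe = np.array([[42.0, 500.0]])   # an attack-like sample
print(c_svm.predict(probe))          # -> [1]  (classified as attack)
print(oc_svm.predict(probe))         # -> [-1] (flagged as outlier)
```

The C-SVM draws a boundary between the two labelled clusters, while the OC-SVM only envelopes the benign cluster, which mirrors why the paper finds the former more accurate when attack examples are available for training.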
(This article belongs to the Special Issue Machine Learning in IoT Networking and Communications)

Article
OPriv: Optimizing Privacy Protection for Network Traffic
J. Sens. Actuator Netw. 2021, 10(3), 38; https://doi.org/10.3390/jsan10030038 - 24 Jun 2021
Abstract
Statistical traffic analysis has absolutely exposed the privacy of supposedly secure network traffic, proving that encryption is not effective anymore. In this work, we present an optimal countermeasure to prevent an adversary from inferring users’ online activities, using traffic analysis. First, we formulate analytically a constrained optimization problem to maximize network traffic obfuscation while minimizing overhead costs. Then, we provide OPriv, a practical and efficient algorithm to solve dynamically the non-linear programming (NLP) problem, using Cplex optimization. Our heuristic algorithm selects target applications to mutate to and the corresponding packet length, and subsequently decreases the security risks of statistical traffic analysis attacks. Furthermore, we develop an analytical model to measure the obfuscation system’s resilience to traffic analysis attacks. We suggest information theoretic metrics for quantitative privacy measurement, using entropy. The full privacy protection of OPriv is assessed through our new metrics, and then through extensive simulations on real-world data traces. We show that our algorithm achieves strong privacy protection in terms of traffic flow information without impacting the network performance. We are able to reduce the accuracy of a classifier from 91.1% to 1.42% with only 0.17% padding overhead.
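The entropy-based view of privacy described above can be illustrated as follows. This is a minimal sketch with hypothetical packet traces, using naive padding in place of OPriv's optimised mutation; it is not the paper's actual metric or algorithm:

```python
# Illustrative sketch: privacy quantified as the adversary's remaining
# uncertainty (Shannon entropy, in bits) about which application produced
# a sniffed packet, given only its observed length.
from collections import Counter, defaultdict
from math import log2

def shannon_entropy(counts):
    """Entropy (bits) of the empirical distribution given by a count map."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def posterior_entropy(samples):
    """Average entropy of P(app | observed length) over all sniffed packets."""
    by_len = defaultdict(Counter)
    for app, length in samples:
        by_len[length][app] += 1
    total = len(samples)
    return sum((sum(apps.values()) / total) * shannon_entropy(apps)
               for apps in by_len.values())

# Hypothetical traces: two applications with distinctive packet lengths.
traffic = ([("app_a", n) for n in (60, 60, 60, 1500, 1500)]
           + [("app_b", n) for n in (400, 400, 400, 400, 400)])

# Obfuscation stand-in: pad every packet to a common 1500-byte length.
padded = [(app, 1500) for app, _ in traffic]

h_before = posterior_entropy(traffic)   # 0.0 bits: each length identifies the app
h_after = posterior_entropy(padded)     # 1.0 bit: maximal uncertainty for 2 apps
print(f"adversary uncertainty: {h_before:.1f} -> {h_after:.1f} bits")
```

Before obfuscation every observed length pins down the application (zero posterior entropy); after padding, the observation carries no information, which is the effect OPriv optimises for while also minimising the padding overhead.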
(This article belongs to the Special Issue Machine Learning in IoT Networking and Communications)

Article
Digital Twin-Driven Decision Making and Planning for Energy Consumption
J. Sens. Actuator Netw. 2021, 10(2), 37; https://doi.org/10.3390/jsan10020037 - 20 Jun 2021
Cited by 1
Abstract
The Internet of Things (IoT) is revolutionising how energy is delivered from energy producers and used throughout residential households. Optimising the residential energy consumption is a crucial step toward having greener and sustainable energy production. Such optimisation requires a household-centric energy management system as opposed to a one-rule-fits-all approach. In this paper, we propose a data-driven multi-layer digital twin of the energy system that aims to mirror households’ actual energy consumption in the form of a household digital twin (HDT). When linked to the energy production digital twin (EDT), HDT empowers the household-centric energy optimisation model to achieve the desired efficiency in energy use. The model intends to improve the efficiency of energy production by flattening the daily energy demand levels. This is done by collaboratively reorganising the energy consumption patterns of residential homes to avoid peak demands whilst accommodating the resident needs and reducing their energy costs. Indeed, our system incorporates the first HDT model to gauge the impact of various modifications on the household energy bill and, subsequently, on energy production. The proposed energy system is applied to a real-world IoT dataset that spans over two years and covers seventeen households. Our conducted experiments show that the model effectively flattened the collective energy demand by 20.9% on synthetic data and 20.4% on a real dataset. At the same time, the average energy cost per household was reduced by 10.7% for the synthetic data and 17.7% for the real dataset.
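The peak-flattening idea above can be sketched with a simple greedy heuristic. This is an illustrative toy example with hypothetical demand figures, not the paper's HDT/EDT model:

```python
# Illustrative sketch: shiftable household loads are greedily moved to the
# time slot with the lowest aggregate demand, flattening the collective
# demand curve and trimming the peak (all numbers are hypothetical).
base = [3.0, 2.5, 2.0, 4.0, 8.0, 9.5]   # fixed aggregate demand per slot (kW)
flexible = [2.0, 1.5, 1.0]               # shiftable loads, e.g. washer, EV charger

demand = list(base)
for load in sorted(flexible, reverse=True):        # place largest loads first
    slot = min(range(len(demand)), key=demand.__getitem__)
    demand[slot] += load

worst_peak = max(base) + sum(flexible)   # if every flexible load hit the peak slot
print("flattened profile:", demand)       # [4.0, 4.0, 4.0, 4.0, 8.0, 9.5]
print("peak:", worst_peak, "->", max(demand))
```

Here the off-peak slots absorb the flexible loads, so the aggregate peak never rises above the fixed-demand peak; the paper's model additionally accounts for resident needs and per-household energy costs when deciding which loads to move.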
(This article belongs to the Special Issue Machine Learning in IoT Networking and Communications)

Article
Optimising Performance for NB-IoT UE Devices through Data Driven Models
J. Sens. Actuator Netw. 2021, 10(1), 21; https://doi.org/10.3390/jsan10010021 - 5 Mar 2021
Abstract
This paper presents a data-driven framework for performance optimisation of Narrow-Band IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered from a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of those metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted synchronously with machine and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states; additionally, they show the superior performance of the genetic reinforcement learning algorithm in performance optimisation.
(This article belongs to the Special Issue Machine Learning in IoT Networking and Communications)

Review


Review
Trends in Intelligent Communication Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps
J. Sens. Actuator Netw. 2021, 10(4), 60; https://doi.org/10.3390/jsan10040060 - 12 Oct 2021
Abstract
The increasing complexity of communication systems, following the advent of heterogeneous technologies, services and use cases with diverse technical requirements, provides a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing and operating emerging communication networks. At the same time, the access and ability to process large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, the ongoing standardisation activities indicate strong interest to reap the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics, about past events to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near real-time, and non-real-time RAN intelligent control (RIC). Different from the existing related surveys, in this review article, we group the major research studies in the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and the edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review the existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. We conclude with a summary of recent and currently funded projects on intelligent communications and networking.
This review reveals that the telecommunication industry and standardisation bodies have been mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor inter-operability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using reflecting intelligent surfaces have captured the attention of ongoing research projects.
(This article belongs to the Special Issue Machine Learning in IoT Networking and Communications)

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Trends in Intelligent Communication Systems; AI/ML, Standards & Research: A Roadmap from the Radio Access Network to the Edge and Core Networks
Authors: Mehrdad Dianati, et al.
Author Affiliations: WMG, University of Warwick, Coventry, CV4 7AL, UK
Abstract: While machine learning techniques have been applied successfully in many disciplines like computer vision and natural language processing, their application in communication networks is still in its infancy. On the one hand, this is because of the lack of established and widely accepted training datasets. On the other hand, it is due to the availability of mathematically accurate models for the optimization of communication networks, the outcome of extensive research activities over the past 50 years. Unfortunately, the increasing complexity of communication systems, following the advent of heterogeneous networks and services, challenges the use of the models developed so far. At the same time, the paradigm of network virtualization and software-defined networking enables the collection and analysis of data, giving rise to the emerging field of knowledge-defined networking. Ongoing standardization activities are key to the adoption of machine learning in communication networks. For instance, 3GPP has introduced the Network Data Analytics Function for the control and management of network slices using machine learning and data analytics in current and future networks. In this paper, we review model-aided machine learning at the PHY and MAC layers for transceiver design, radio resource management, and aspects of intelligent control of the propagation channel. We proceed with applications of machine learning at the edge and core networks, with emphasis on algorithms that can learn to optimize routing decisions in unseen network topologies. Finally, we conclude with a summary of recent and currently funded projects on intelligent communications and networking.

Title: Pothole detection and maintenance system based on 3D reconstruction
Authors: Edwin Salcedo, Ammar Yasir Naich, Jesús Requena-Carrión and Mona Jaber
Author Affiliations: Queen Mary University of London, UK
Abstract: Maintenance of critical infrastructure is a costly necessity that developing countries often struggle to deliver in a timely manner. The transport system acts as the arteries of any developing economy, and the formation of potholes on roads can lead to injuries and loss of lives. Although many countries have enabled pothole reporting platforms, duplicate records remain a challenge, as does differentiating between a deep pothole and a shallow one. To this end, automatic pothole detection has been proposed, in which a variety of data sources is used to identify and localize potholes. However, the majority of research fails to classify potholes in terms of size and depth, although this information would be key in prioritizing repair work and improving road safety. In this work, we conduct a brief survey of the recent research efforts in this area. Then, we introduce a depth and size detection algorithm based on stereo vision, and we contrast its efficiency against two recent techniques. We also present an example of a georeferenced pothole reporting, evaluation, and maintenance system that implements the developed algorithm, and we draw conclusions about the advantages of the proposed approach. Keywords: pothole detection; machine learning; stereo vision; computer vision; road surface modelling; smart maintenance

Title: Machine Learning enabled food contamination detection using RFID and Internet of things system
Authors: Abubakar Sharif, Shuja Ansari, Kia Dashtipour, Hasan Tahir Abbas, Qammer Hussain Abbasi, Muhammad Ali Imran
Author Affiliations: James Watt School of Engineering
Abstract: This paper presents an approach for contamination sensing of food items and drinks such as soft drinks and alcohol. We employ an RFID wireless sticker and a machine learning approach for contamination sensing. The RFID tag antenna was mounted on the pure product, and the RSSI and phase of the backscattered signal were measured using the Tagformance Pro device. Moreover, the performance was further characterised using an RFID reader unit connected to an Android phone. We used the XGBoost machine learning algorithm to train the model, achieving a sensing accuracy of about 90%. This research study thus paves the way for ubiquitous contamination sensing using RFID and machine learning technologies that can inform users about the health and safety of their food.

Title: Challenges of Malware Detection in the IoT and a Review of Artificial Immune System Approaches
Authors: Hadeel Alrubayyi, Gokop Goteng, Mona Jaber, James Kelly
Author Affiliations: School of Electronic Engineering and Computer Science, Queen Mary University of London
Abstract: The fast growth of the Internet of Things (IoT) and diverse applications increase the risk of cyberattacks, one of which is malware attacks. Due to the IoT devices' different capabilities and the dynamic and ever-evolving environment, applying complex security measures is challenging, and applying only basic security standards is risky. Artificial Immune Systems (AIS) are intrusion detecting algorithms inspired by the human body's adaptive immune system techniques. Most of these algorithms imitate the B-cell and T-cell defensive mechanisms. They are lightweight, adaptive, and able to detect malware attacks without prior knowledge. In this work, we review the recent advances in employing AIS for improved detection of malware in IoT networks. We present a critical analysis that highlights the limitations of the state-of-the-art in AIS research and offer insights into promising new research directions.
