
Special Issue "Cloud and Edge Computing for the Next Generation of Internet of Things Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (30 September 2020).

Special Issue Editors

Prof. Dr. Pietro Manzoni
Guest Editor
Department of Computer Engineering (DISCA), Universitat Politècnica de València, 46022 Valencia, Spain
Interests: IoT; mobile networking; pub/sub systems; edge computing
Prof. Johann M. Marquez-Barja
Guest Editor
University of Antwerpen – IMEC, 2020 Antwerpen, Belgium
Interests: 5G advanced architectures including edge computing; flexible and programmable future end-to-end networks; IoT communications and applications; vehicular communications, mobility, and smart cities deployments
Dr. Marco Picone
Guest Editor
Department of Sciences and Methods for Engineering (DISMI), University of Modena and Reggio Emilia, Via Amendola 2, Pad. Morselli, 42121 Reggio Emilia, Italy
Interests: distributed systems; Internet of Things; Edge/Fog computing; vehicular networks (Internet of Vehicles); pervasive and mobile computing

Special Issue Information

Dear Colleagues,

Cloud computing technologies have revolutionized the ICT ecosystem in recent decades, introducing new architectural and design paradigms and providing a powerful, centralized reference point for applications and services. On-demand Cloud resources and services have allowed client/server, distributed, and mobile applications to dramatically speed up development and deployment, and to scale easily to a massive number of devices and users around the world. Within this dynamic ecosystem, Cloud computing also represents a natural way to design and deploy IoT applications in which a centralized brain controls and processes all the generated information. Thanks to the Internet of Things (IoT) revolution and its rapid, pervasive evolution, we now have millions of interconnected "Smart Objects" able to generate and consume massive amounts of heterogeneous data.
Nevertheless, due to several key factors (e.g., performance, latency, security, interaction patterns), a Cloud-centric vision is not always applicable and is not the best fit for all applications and deployments. Fog/Edge computing has recently been envisioned in response to the needs of such applications. Fog/Edge solutions aim to support and drive the IoT evolution by defining new, scalable distributed architectures that improve performance and enable the creation of new services.
In this challenging context, the efficient, intelligent, secure, and coordinated integration of Cloud and Edge services and applications will represent a crucial pillar for enabling the next generation of IoT applications.
This Special Issue focuses on novel developments, technologies, and challenges related to the efficient and innovative coexistence of Cloud and Fog/Edge computing, and in particular on their adoption within the Internet of Things research field. We are particularly interested in the latest research findings, ongoing projects, and review articles that can provide readers with current research trends and solutions. Potential topics include, but are not limited to:

  • Hybrid Cloud and Edge computing architectures
  • Adoption of Cloud computing patterns and technologies to the Edge
  • Cloud and Edge Lambda functions
  • Distributed knowledge and data synchronization algorithms
  • Services and microservice migration and orchestration between Cloud and Edge
  • Distributed machine learning architectures
  • Convergence of Edge and Cloud computing for machine learning
  • Smart object virtualization and digital twins solutions
  • Optimized networking between Edge and Cloud
  • Security for hybrid architectures
  • Cyber security for IoT and edge computing
  • LPWANs and edge computing
  • Platforms and applications
  • 5G and Mobile Edge computing

Prof. Pietro Manzoni
Prof. Johann M. Marquez-Barja
Dr. Marco Picone
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • Edge computing
  • Fog computing
  • data processing
  • architecture
  • machine learning
  • microservices
  • orchestration
  • algorithms

Published Papers (6 papers)


Research

Jump to: Other

Article
Optimal Consensus with Dual Abnormality Mode of Cellular IoT Based on Edge Computing
Sensors 2021, 21(2), 671; https://doi.org/10.3390/s21020671 - 19 Jan 2021
Abstract
The continuous development of fifth-generation (5G) networks is the main driving force for the growth of Internet of Things (IoT) applications. It is expected that 5G networks will greatly expand IoT applications, thereby promoting the operation of cellular networks, addressing the security and network challenges of the IoT, and pushing the future of the Internet to the edge. Because the IoT can connect anything, anywhere, at any time, it can provide ubiquitous services. With the establishment and use of 5G wireless networks, the cellular IoT (CIoT) will be developed and applied. To provide more reliable CIoT applications, a reliable network topology is very important, and reaching a consensus is one of the most important issues in highly reliable CIoT design. Consensus is necessary so that, even if some components in the system are abnormal, applications in the CIoT can still execute correctly. In this study, a consensus protocol is discussed for a CIoT with a dual abnormality mode that combines dormant abnormality and malicious abnormality. The proposed protocol not only allows all normal components in the CIoT to reach a consensus with the minimum number of rounds of data exchange, but also tolerates the maximum number of dormant and malicious abnormal components. At the same time, the protocol ensures that all normal components in the CIoT satisfy the constraints of reaching consensus: Termination, Agreement, and Integrity.
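
The Termination, Agreement, and Integrity constraints mentioned in the abstract can be illustrated with a single-round majority vote, a deliberately simplified stand-in for the paper's multi-round protocol (the function name and the fault encoding below are illustrative assumptions, not taken from the paper):

```python
from collections import Counter

def majority_consensus(values):
    """Return the value reported by a strict majority of components,
    or None if no strict majority exists. Dormant components are modeled
    as reporting None; malicious ones may report arbitrary values.
    (Hypothetical sketch; the actual protocol uses multiple exchange rounds.)"""
    counts = Counter(values)
    value, count = counts.most_common(1)[0]
    return value if count > len(values) / 2 else None

# Three normal components report 1; one dormant (None) and one malicious (0).
reports = [1, 1, 1, None, 0]
```

With a strict majority of normal components, every correct node deciding on `majority_consensus(reports)` satisfies Agreement (same output everywhere) and Integrity (the decided value was actually proposed).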

Article
Evaluation of Clustering Algorithms on GPU-Based Edge Computing Platforms
Sensors 2020, 20(21), 6335; https://doi.org/10.3390/s20216335 - 06 Nov 2020
Abstract
The Internet of Things (IoT) is becoming a new socioeconomic revolution in which data and immediacy are the main ingredients. The IoT generates large datasets on a daily basis, but they are currently considered "dark data", i.e., data generated but never analyzed. Efficient analysis of these data is mandatory to create the intelligent applications of the next generation of the IoT that benefit society. Artificial Intelligence (AI) techniques are very well suited to identifying hidden patterns and correlations in this data deluge. In particular, clustering algorithms are of the utmost importance for performing exploratory data analysis to identify sets (clusters) of similar objects. Clustering algorithms are computationally heavy workloads and must be executed on high-performance computing (HPC) clusters, especially when dealing with large datasets. Execution on HPC infrastructures is an energy-hungry procedure with additional issues, such as high-latency communications and privacy concerns. Edge computing, a paradigm that enables lightweight computation at the edge of the network, has recently been proposed to solve these issues. In this paper, we provide an in-depth analysis of emerging edge computing architectures that include low-power Graphics Processing Units (GPUs) to speed up these workloads. Our analysis includes performance and power consumption figures for Nvidia's latest AGX Xavier, comparing the energy-performance ratio of these low-cost platforms with a high-performance cloud-based counterpart. Three clustering algorithms (k-means, Fuzzy Minimals (FM), and Fuzzy C-Means (FCM)) are designed to be optimally executed on edge and cloud platforms, showing a speed-up factor of up to 11× for the GPU code compared to sequential counterparts on the edge platforms, and energy savings of up to 150% between the edge computing and HPC platforms.
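
As a point of reference for the clustering workloads evaluated above, here is a minimal pure-Python k-means, one of the three algorithms benchmarked. This is a sequential sketch only, not the paper's GPU implementation, and all names are illustrative:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points. The assignment step is exactly what
    GPU versions parallelize, since each point's nearest-centroid search
    is independent of the others. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to its cluster's mean.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids
```

On two well-separated blobs, the returned centroids converge to the blob means regardless of which points seed the initialization.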

Article
Optimal Placement of Social Digital Twins in Edge IoT Networks
Sensors 2020, 20(21), 6181; https://doi.org/10.3390/s20216181 - 30 Oct 2020
Abstract
In next-generation Internet of Things (IoT) deployments, every object, such as a wearable device, a smartphone, a vehicle, or even a sensor or an actuator, will be provided with a digital counterpart (twin) with the aim of augmenting the physical object's capabilities and acting on its behalf when interacting with third parties. Moreover, such objects are able to interact and autonomously establish social relationships according to the Social Internet of Things (SIoT) paradigm. In this context, the goal of this work is to provide an optimal solution for the social-aware placement of IoT digital twins (DTs) at the network edge, with the twofold aim of reducing the latency (i) between physical devices and their corresponding DTs, for efficient data exchange, and (ii) among the DTs of friend devices, to speed up service discovery and chaining procedures across the SIoT network. To this end, we formulate the problem as a mixed-integer linear programming model that takes into account the limited computing resources in the edge cloud and the social relationships among IoT devices.
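
For intuition only, the placement objective can be brute-forced at toy scale. This sketch, whose names and data structures are assumptions rather than the paper's MILP model, minimizes device-to-twin latency plus twin-to-twin latency for friend pairs under per-node capacity limits:

```python
from itertools import product

def place_twins(devices, nodes, dev_lat, node_lat, capacity, friends):
    """Exhaustively search twin-to-edge-node assignments, minimizing
    device->twin latency plus twin->twin latency for socially related
    (friend) device pairs. Toy stand-in for the MILP formulation."""
    best, best_cost = None, float('inf')
    for assign in product(range(len(nodes)), repeat=len(devices)):
        # Capacity check: each edge node hosts at most capacity[n] twins.
        if any(assign.count(n) > capacity[n] for n in range(len(nodes))):
            continue
        cost = sum(dev_lat[d][assign[d]] for d in range(len(devices)))
        cost += sum(node_lat[assign[i]][assign[j]] for i, j in friends)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

An MILP solver replaces this exponential search with branch-and-bound over the same objective, which is what makes the formulation scale beyond a handful of devices.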

Article
Enhancing Extensive and Remote LoRa Deployments through MEC-Powered Drone Gateways
Sensors 2020, 20(15), 4109; https://doi.org/10.3390/s20154109 - 23 Jul 2020
Abstract
The distribution of Internet of Things (IoT) devices in remote areas and the need for network resilience in such deployments are increasingly important in smart spaces covering scenarios such as agriculture, forestry, coast preservation, and connectivity survival against disasters. Although Low-Power Wide Area Network (LPWAN) technologies like LoRa support long connectivity ranges, communication paths can suffer from obstruction due to orography or buildings, and large areas are still difficult to cover with wired gateways due to the lack of network or power infrastructure. The proposal presented herein mounts LPWAN gateways on drones in order to create airborne network segments that provide enhanced connectivity to sensor nodes wherever needed. Our LoRa-drone gateways can be used either to collect data and then report them directly to the back office, or to store-carry-and-forward data until a proper communication link with the infrastructure network is available. The proposed architecture relies on Multi-Access Edge Computing (MEC) capabilities to host a virtualization platform on board the drone, providing an intermediate processing layer that runs Virtualized Network Functions (VNFs). This way, both preprocessing and intelligent analytics can be performed locally, saving communication and memory resources. The contribution includes a system architecture that has been successfully validated through experimentation with a real test-bed and comprehensively evaluated through computer simulation. The results show significant communication improvements with LoRa-drone gateways compared to traditional fixed LoRa deployments in terms of link availability and covered area, especially over vast monitored extensions or at points with difficult access, such as rugged zones.
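
The store-carry-and-forward behaviour described above can be sketched in a few lines: the drone-mounted gateway buffers sensor readings while no backhaul link is available and flushes the backlog once connectivity returns. This is a minimal illustration with assumed names, not the VNF-based implementation from the paper:

```python
class StoreCarryForwardGateway:
    """Toy model of a drone LPWAN gateway: readings received while the
    backhaul link is down are buffered (store-and-carry) and delivered
    in arrival order when the link comes back up (forward)."""

    def __init__(self):
        self.buffer = []       # readings held while disconnected
        self.delivered = []    # readings that reached the back office
        self.link_up = False

    def receive(self, reading):
        if self.link_up:
            self.delivered.append(reading)   # forward immediately
        else:
            self.buffer.append(reading)      # store and carry

    def set_link(self, up):
        self.link_up = up
        if up and self.buffer:
            self.delivered.extend(self.buffer)  # flush the backlog
            self.buffer.clear()
```
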

Article
Advanced Computation Capacity Modeling for Delay-Constrained Placement of IoT Services
Sensors 2020, 20(14), 3830; https://doi.org/10.3390/s20143830 - 09 Jul 2020
Abstract
A vast range of sensors gather data about our environment, industries, and homes. The great profit hidden in these data can only be exploited if they are integrated with relevant services for analysis and usage. A core concept of the Internet of Things (IoT) targets this business opportunity through various applications. Virtualized, software-controlled 5G networks are expected to achieve the scale and dynamicity of communication networks required by the IoT. As the computation and communication infrastructure rapidly evolves, the corresponding substrate models of service placement algorithms lag behind, failing to appropriately describe resource abstraction and dynamic features. Our paper provides an extension to existing IoT service placement algorithms that enables them to keep up with the latest infrastructure evolution while maintaining their existing attributes, such as end-to-end delay constraints and a cost minimization objective. We complement our recent work on 5G service placement algorithms with a theoretical foundation for resource abstraction, elasticity, and delay constraints. We propose efficient solutions to the problems of aggregating computation resource capacities and predicting the behavior of dynamic Kubernetes infrastructure in a delay-constrained service embedding framework. Our results are supported by mathematical theorems whose proofs are presented in detail.
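
A much-simplified view of the capacity-aggregation problem mentioned above: compute the shortest delay from an ingress node to every substrate node (Dijkstra over link delays), then sum the compute capacity of all nodes within the delay budget. The graph and capacity structures here are illustrative assumptions, not the paper's substrate model:

```python
import heapq

def aggregate_capacity(graph, caps, src, budget):
    """Sum the compute capacity of every node reachable from `src`
    within the end-to-end delay `budget`. `graph` maps each node to
    a list of (neighbor, link_delay) pairs; `caps` maps nodes to
    capacity. (Simplified sketch of capacity aggregation.)"""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return sum(caps[n] for n, d in dist.items() if d <= budget)
```

Tightening the budget shrinks the delay-feasible region and hence the aggregate capacity available to a placement algorithm.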

Other

Jump to: Research

Letter
Design and Implementation of Fast Fault Detection in Cloud Infrastructure for Containerized IoT Services
Sensors 2020, 20(16), 4592; https://doi.org/10.3390/s20164592 - 16 Aug 2020
Abstract
The container-based cloud is used in various service infrastructures, as it is lighter and more portable than a virtual machine (VM)-based infrastructure and is configurable in both bare-metal and VM environments. The Internet of Things (IoT) cloud-computing infrastructure is also evolving from a VM-based to a container-based infrastructure. In IoT clouds, the service availability of the cloud infrastructure is more important for mission-critical IoT services, such as real-time health monitoring, vehicle-to-vehicle (V2V) communication, and industrial IoT, than for general computing services. However, in a container environment that runs on a VM, the current fault detection method considers only the container's infrastructure, limiting the level of availability achievable for mission-critical IoT cloud services. Therefore, in a container environment running on a VM, fault detection and recovery methods that consider both the VM and container levels are necessary. In this study, we analyzed the fault-detection architecture of a container environment and designed and implemented a Fast Fault Detection Manager (FFDM) architecture using OpenStack and Kubernetes to realize fast fault detection. Through performance measurements, we verified that the FFDM can improve the fault detection time by more than three times over the existing method.
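
The core mechanism behind this kind of fault detection is a heartbeat timeout check, sketched below. This is an illustrative assumption about the general technique, not the FFDM itself, which additionally correlates VM-level and container-level signals across OpenStack and Kubernetes:

```python
def detect_faults(last_heartbeat, now, timeout):
    """Flag every instance (VM or container) whose most recent
    heartbeat is older than `timeout` seconds. Shortening the timeout
    is the basic lever for faster detection, at the cost of more
    false positives under load. (Names here are illustrative.)"""
    return sorted(name for name, t in last_heartbeat.items()
                  if now - t > timeout)
```
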
