Special Issue "Novel Cloud-Based Service/Application Platforms and Ecosystems"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 8862

Special Issue Editors

Prof. Dr. Balázs Sonkoly
Guest Editor
Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, 1117 Budapest, Hungary
Interests: NFV; SDN; multi-domain orchestration; SFC control plane
Prof. Dr. László Toka
Guest Editor
Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, 1117 Budapest, Hungary
Interests: computer networking; network communication
Dr. Luis Miguel Contreras-Murillo
Guest Editor
Telefónica I+D / CTIO unit, Madrid, Spain
Interests: SDN; NFV; 5G; transport technologies; slicing; interconnection
Dr. Róbert Szabó
Guest Editor
Ericsson Research, Torshamnsgatan 21, Stockholm, Sweden
Interests: virtualisation; software defined networking; 5G mobile communication; cloud computing; computer centres; cost reduction; graph theory; monitoring; resource allocation; task analysis; telecommunication services; wide area networks; telecommunication traffic

Special Issue Information

Dear Colleagues,

In the fast-approaching 5G era, telcos, cloud operators, and online application providers will join forces to deliver ICT services to customers globally. Several technologies act as enablers for novel application design: software-defined networking (SDN) along with network function virtualization (NFV) ensures that online services are elastic and resilient, and that their deployment is fast and flexible. Besides revolutionizing telco services, virtualization provides us with the possibility of creating online applications that require low latency, e.g., extended reality (XR) and tactile Internet applications, over edge cloud infrastructure. In this ecosystem, not only do infrastructure and platform-level technologies play important roles, but business aspects will also greatly influence the capability and performance of the global system: the collaboration of provider actors will determine the availability and the end-user prices of certain services.

This Special Issue focuses on recent research results and challenges, as well as novel developments and technologies, related to NFV, SDN, and virtualization in general. We are particularly interested in application-level design, system and network management methods, and the control and data plane performance of cloudified platforms. We welcome the latest findings from research, ongoing projects, and review articles that can provide readers with current research trends and solutions. The potential topics include, but are not limited to:

  • Novel telco services, applications enabled by SDN/NFV
  • Ultra-low latency applications, e.g., AR/VR/MR (augmented/virtual/mixed reality), tactile Internet enabled by edge computing
  • SDN, NFV, beyond 5G technologies
  • Cloud native platforms: CaaS (container as a service), FaaS (function as a service), serverless
  • Real-time cloud solutions
  • Network-cloud integration
  • Novel/disruptive network technologies, e.g., NDN (named data networking)
  • AI-based technologies and system components
  • Business aspects of cloudification and cloud/network provider federation

Prof. Dr. Balázs Sonkoly
Prof. Dr. László Toka
Dr. Byung-Seo Kim
Dr. Luis Miguel Contreras-Murillo
Dr. Róbert Szabó
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • SDN
  • NFV
  • Beyond 5G
  • Cloud native
  • Edge computing
  • Real-time cloud
  • AR/VR/MR
  • Tactile Internet
  • NDN
  • AI/ML
  • Business models

Published Papers (8 papers)


Research

Article
Applications of Federated Learning; Taxonomy, Challenges, and Research Trends
Electronics 2022, 11(4), 670; https://doi.org/10.3390/electronics11040670 - 21 Feb 2022
Cited by 4 | Viewed by 1775
Abstract
The federated learning (FL) technique supports the collaborative training of machine learning and deep learning models for edge network optimization. However, a complex edge network with heterogeneous devices operating under different constraints can degrade FL performance, which remains an open problem in this area. Consequently, several studies have designed new frameworks and approaches to improve federated learning processes. The purpose of this study is to provide an overview of the FL technique and its applicability in different domains. The key focus of the paper is a systematic literature review of recent research studies that clearly describes the adoption of FL in edge networks. The search procedure was performed from April 2020 to May 2021, with an initial pool of 7546 papers published between 2016 and 2020. The systematic review synthesizes and compares the algorithms, models, and frameworks of federated learning. Additionally, we present the scope of FL applications in different industries and domains. A careful investigation of the studies revealed that 25% used FL in IoT and edge-based applications, 30% implemented the FL concept in the health industry, 10% in NLP, 10% in autonomous vehicles, 10% in mobile services, 10% in recommender systems, and 5% in FinTech. A taxonomy is also proposed for implementing FL for edge networks in different domains. Moreover, another novelty of this paper is that the datasets used for the implementation of FL are discussed in detail, to give researchers an overview of the distributed datasets that can be used for employing FL techniques. Lastly, this study discusses the current challenges of implementing the FL technique. We found that medical AI, IoT, edge systems, and the autonomous industry can adopt FL in many of their sub-domains; however, the challenges these domains can encounter are statistical heterogeneity, system heterogeneity, data imbalance, resource allocation, and privacy. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
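The collaborative training this survey covers can be made concrete with a small sketch. Below is a minimal, hypothetical illustration of federated averaging (FedAvg), the canonical FL aggregation step: clients train locally on private data (stubbed out here) and a server combines their weights by sample-weighted averaging. This is not code from the paper; all names and numbers are illustrative.

```python
# Minimal federated averaging (FedAvg) sketch. Model weights are plain
# lists; each client trains locally (stubbed) and the server aggregates
# by sample-weighted averaging, so clients with more data count more.

def local_update(weights, data_size):
    # Stub for one round of local training on a client's private data.
    # A real client would run SGD here; we just return the weights.
    return weights, data_size

def fed_avg(client_results):
    """Sample-weighted average of client weight vectors."""
    total = sum(n for _, n in client_results)
    dim = len(client_results[0][0])
    agg = [0.0] * dim
    for weights, n in client_results:
        for i, w in enumerate(weights):
            agg[i] += w * (n / total)
    return agg

# Three clients with different data volumes (system heterogeneity).
results = [([1.0, 0.0], 10), ([0.0, 1.0], 30), ([2.0, 2.0], 60)]
global_weights = fed_avg(results)
```

Only the aggregated weights (never the raw data) leave the clients, which is the privacy property that motivates FL in the edge settings surveyed above.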

Article
Cost and Latency Optimized Edge Computing Platform
Electronics 2022, 11(4), 561; https://doi.org/10.3390/electronics11040561 - 13 Feb 2022
Viewed by 600
Abstract
Latency-critical applications, e.g., automated and assisted driving services, can now be deployed in fog or edge computing environments, offloading energy-consuming tasks from end devices. Besides the proximity, though, the edge computing platform must provide the necessary operation techniques in order to avoid added delays by all means. In this paper, we propose an integrated edge platform that comprises orchestration methods with such objectives, in terms of handling the deployment of both functions and data. We show how the integration of the function orchestration solution with the adaptive data placement of a distributed key–value store can lead to decreased end-to-end latency even when the mobility of end devices creates a dynamic set of requirements. Along with the necessary monitoring features, the proposed edge platform is capable of serving the nomad users of novel applications with low latency requirements. We showcase this capability in several scenarios, in which we articulate the end-to-end latency performance of our platform by comparing delay measurements with the benchmark of a Redis-based setup lacking the adaptive nature of data orchestration. Our results prove that the stringent delay requisites necessitate the close integration that we present in this paper: functions and data must be orchestrated in sync in order to fully exploit the potential that the proximity of edge resources enables. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
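The adaptive data placement the platform performs can be sketched in a few lines. The following is a hypothetical illustration, not the authors' algorithm: each key is assigned to the host that minimizes the access-frequency-weighted latency of its clients, so a key "follows" a moving user toward the closer edge site. Host and client names are made up.

```python
# Hypothetical latency-aware key placement sketch: pick, for each key,
# the host minimizing the frequency-weighted access latency of the
# clients that read or write it.

def place_key(access_freq, latency):
    """access_freq: {client: requests/s}; latency: {(client, host): ms}.
    Returns the host with minimal weighted access latency."""
    hosts = {h for (_, h) in latency}
    def cost(host):
        return sum(freq * latency[(c, host)]
                   for c, freq in access_freq.items())
    return min(hosts, key=cost)

# A mobile client has moved closer to the edge site "edge-2", so the
# placement decision migrates its state away from "edge-1".
freq = {"car-17": 50.0}
lat = {("car-17", "edge-1"): 12.0, ("car-17", "edge-2"): 3.0}
best = place_key(freq, lat)
```

Re-running such a decision as mobility changes the latency map is what keeps end-to-end delay low for the nomadic users described in the abstract.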

Article
Fog-Computing-Based Cyber–Physical System for Secure Food Traceability through the Twofish Algorithm
Electronics 2022, 11(2), 283; https://doi.org/10.3390/electronics11020283 - 17 Jan 2022
Cited by 1 | Viewed by 464
Abstract
The Internet is an essential part of daily life with the expansion of businesses for maximizing profits. This technology has intensely altered the traditional shopping style in online shopping. Convenience, quick price comparison, saving traveling time, and avoiding crowds are the reasons behind the abrupt adoption of online shopping. However, in many situations, the product provided does not meet the quality, which is the primary concern of every customer. To ensure quality product provision, the whole food supply chain should be examined and managed properly. In food traceability systems, sensors are used to gather product information, which is forwarded to fog computing. However, the product information forwarded by the sensors may not be similar, as it can be modified by intruders/hackers. Moreover, consumers are interested in the product location, as well as its status, i.e., manufacturing date, expiry date, etc. Therefore, in this paper, data and account security techniques were introduced to efficiently secure product information through the Twofish algorithm and dual attestation for account verification. To improve the overall working, the proposed mechanism integrates fog computing with novel modules to efficiently monitor the product, along with increasing the efficiency of the whole working process. To validate the performance of the proposed system, a comparative simulation was performed with existing approaches in which Twofish showed notably better results in terms of encryption time, computational cost, and the identification of modification attacks. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
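The paper pairs Twofish encryption with checks that detect when product records are modified in transit. Twofish is not available in the Python standard library, so the sketch below substitutes a stdlib HMAC-SHA256 tag to illustrate only the modification-attack detection step; the key and record fields are made-up assumptions, not the authors' scheme.

```python
# Integrity-check sketch for supply-chain records: seal a record with
# an HMAC tag, then verify it on arrival. Any tampering with the
# payload makes verification fail. (HMAC stands in for the paper's
# Twofish-based pipeline purely for illustration.)

import hmac, hashlib, json

SECRET_KEY = b"shared-fog-node-key"   # hypothetical pre-shared key

def seal(record):
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify(payload, tag):
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal({"lot": "A-42", "expiry": "2022-12-01"})
ok = verify(payload, tag)                                   # authentic
tampered = verify(payload.replace(b"A-42", b"A-43"), tag)   # modified
```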

Article
Feature Matching Synchronized Reasoning from Energy-Based Memory Network for Intelligent Data Management in Cloud Computing Data Center
Electronics 2021, 10(16), 1900; https://doi.org/10.3390/electronics10161900 - 08 Aug 2021
Viewed by 628
Abstract
A cloud data center for software-as-a-service (SaaS) is built to manage server computers stably in one place in order to provide an uninterrupted service, covering not only a stable power supply and security but also efficient data management. To manage such a data center efficiently, it is important above all to build a cloud database with structured storage. In recent decades, many studies have addressed the design of cloud data centers, with most of the research focusing on communication traffic, routing, topological issues, and communication technology. However, in order to build an efficient cloud database that can support user demand, a sophisticated intelligent system, based on AI technology and considering user convenience, should be designed. From this viewpoint, and drawing on human brain functions, an Energy-Based Memory Network was designed as a knowledge-based frame for an intelligent system, and its event-related synchronized data extraction mechanism was proposed. In particular, a Thinking Thread extraction phase was implemented for the reasoning process, using qualia matching and a deep extraction method in a cloud database. The purpose of this approach is to design and implement an intelligent cloud database with an efficient structure and mechanism for supporting user demand and providing accurate, prompt services. In experiments, the working phases of the functions were simulated with data and analyzed, confirming that the proposed system works well and intelligently for its design purpose. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)

Article
A Secure Link-Layer Connectivity Platform for Multi-Site NFV Services
Electronics 2021, 10(15), 1868; https://doi.org/10.3390/electronics10151868 - 03 Aug 2021
Cited by 2 | Viewed by 870
Abstract
Network Functions Virtualization (NFV) is a key technology for network automation and has been instrumental to materialize the disruptive view of 5G and beyond mobile networks. In particular, 5G embraces NFV to support the automated and agile provision of telecommunication and vertical services as a composition of versatile virtualized components, referred to as Virtual Network Functions (VNFs). It provides a high degree of flexibility in placing these components on distributed NFV infrastructures (e.g., at the network edge, close to end users). Still, this flexibility creates new challenges in terms of VNF connectivity. To address these challenges, we introduce a novel secure link-layer connectivity platform, L2S. Our solution can automatically be deployed and configured as a regular multi-site NFV service, providing the abstraction of a layer-2 switch that offers link-layer connectivity to VNFs deployed on remote NFV sites. Inter-site communications are effectively protected using existing security solutions and protocols, such as IP security (IPsec). We have developed a functional prototype of L2S using open-source software technologies. Our evaluation results indicate that this prototype can perform IP tunneling and cryptographic operations at Gb/s data rates. Finally, we have validated L2S using a multi-site NFV ecosystem at the Telefonica Open Network Innovation Centre (5TONIC), using our solution to support a multicast-based IP television service. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)

Article
Extending TOSCA for Edge and Fog Deployment Support
Electronics 2021, 10(6), 737; https://doi.org/10.3390/electronics10060737 - 20 Mar 2021
Cited by 6 | Viewed by 1162
Abstract
The emergence of fog and edge computing has complemented cloud computing in the design of pervasive, computing-intensive applications. The proximity of fog resources to data sources has contributed to minimizing network operating expenditure and has permitted latency-aware processing. Furthermore, novel approaches such as serverless computing change the structure of applications and challenge the monopoly of traditional Virtual Machine (VM)-based applications. However, the efforts directed to the modeling of cloud applications have not yet evolved to exploit these breakthroughs and handle the whole application lifecycle efficiently. In this work, we present a set of Topology and Orchestration Specification for Cloud Applications (TOSCA) extensions to model applications relying on any combination of the aforementioned technologies. Our approach features a design-time “type-level” flavor and a run time “instance-level” flavor. The introduction of semantic enhancements and the use of two TOSCA flavors enables the optimization of a candidate topology before its deployment. The optimization modeling is achieved using a set of constraints, requirements, and criteria independent from the underlying hosting infrastructure (i.e., clouds, multi-clouds, edge devices). Furthermore, we discuss the advantages of such an approach in comparison to other notable cloud application deployment approaches and provide directions for future research. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
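To make the modeling approach tangible, here is a hedged sketch of what a TOSCA service template with an edge-placement requirement could look like. The serverless node type and edge capability shown are hypothetical illustrations of the kind of extension the paper proposes, not the authors' actual type definitions.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_3

# Hypothetical sketch: a serverless function that must be hosted on an
# edge node within a latency bound. Type and property names below are
# illustrative, not the paper's grammar.
topology_template:
  node_templates:
    detector_fn:
      type: example.nodes.ServerlessFunction   # hypothetical type
      properties:
        runtime: python3.9
        memory: 128 MB
      requirements:
        - host:
            node_filter:
              capabilities:
                - example.capabilities.EdgeHost:   # hypothetical
                    properties:
                      - latency_to_source: { less_or_equal: 20 ms }
```

Expressing such constraints at the "type level" lets an orchestrator optimize a candidate topology before deployment, as the abstract describes, independently of whether the target is a cloud, a multi-cloud, or an edge device.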

Article
State Management for Cloud-Native Applications
Electronics 2021, 10(4), 423; https://doi.org/10.3390/electronics10040423 - 09 Feb 2021
Cited by 5 | Viewed by 1379
Abstract
The stateless cloud-native design improves the elasticity and reliability of applications running in the cloud. The design decouples the life-cycle of application states from that of application instances; states are written to and read from cloud databases, and deployed close to the application code to ensure low latency bounds on state access. However, the scalability of applications brings the well-known limitations of distributed databases, in which the states are stored. In this paper, we propose a full-fledged state layer that supports the stateless cloud application design. In order to minimize the inter-host communication due to state externalization, we propose, on the one hand, a system design jointly with a data placement algorithm that places functions’ states across the hosts of a data center. On the other hand, we design a dynamic replication module that decides the proper number of copies for each state to ensure a sweet spot in short state-access time and low network traffic. We evaluate the proposed methods across realistic scenarios. We show that our solution yields state-access delays close to the optimal, and ensures fast replica placement decisions in large-scale settings. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
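The replication trade-off described above, more replicas cut read latency but inflate update traffic, can be sketched with a toy cost model. This is an illustrative assumption, not the paper's replication module: we simply pick the replica count minimizing a combined cost.

```python
# Toy replication sweet-spot sketch: reads get closer to a local copy
# as the replica count k grows (modeled as 1/k), while every write
# must be propagated to the other k - 1 replicas. Constants are made up.

def replica_cost(k, read_rate, write_rate, base_delay, sync_cost):
    read_latency = read_rate * base_delay / k
    update_traffic = write_rate * sync_cost * (k - 1)
    return read_latency + update_traffic

def best_replica_count(read_rate, write_rate, max_k=8,
                       base_delay=10.0, sync_cost=1.0):
    return min(range(1, max_k + 1),
               key=lambda k: replica_cost(k, read_rate, write_rate,
                                          base_delay, sync_cost))

# A read-hot state is worth replicating widely; a write-heavy state
# is cheapest with a single copy.
hot_read_state = best_replica_count(read_rate=100.0, write_rate=1.0)
write_heavy_state = best_replica_count(read_rate=1.0, write_rate=50.0)
```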

Article
On the Mediation Price War of 5G Providers
Electronics 2020, 9(11), 1901; https://doi.org/10.3390/electronics9111901 - 12 Nov 2020
Cited by 2 | Viewed by 1046
Abstract
Virtualization promises that, moving toward 5G, online services will be versatile and quick to create, satisfying the demands of end users to a higher degree than is feasible today. Telcos, cloud operators, and online application providers will unite to deliver those services to customers around the world. Thus, in order to support their portability, or simply the geographic reach of the offered application, the business agreements among the actors must scale over numerous domains, and a guaranteed quality of collaboration among the different stakeholders is important. The vision of the 5G ecosystem is therefore largely founded on the federation of these partners, in which they can consistently strive toward the objective of creating reliable resource slices and deploying applications within them for a maximal geographic reach of clients. In this ecosystem, business perspectives will significantly impact the technical capacity of the system: the business agreements of the providers will inherently determine the availability and the end-user costs of certain services. In this work, we model the business relations of infrastructure providers as a variation of network formation games. We derive conditions under which the current transit-peering structure of network providers remains intact, and we also detail the specifics of an envisioned setup in which providers create business links with each other starting from a clean slate. Full article
(This article belongs to the Special Issue Novel Cloud-Based Service/Application Platforms and Ecosystems)
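The game-theoretic angle above can be illustrated with a toy network formation game. The sketch below checks pairwise stability of a peering graph (no provider wants to cut a link, and no absent link would profit both endpoints); the linear payoff and the provider names are made-up assumptions, far simpler than the paper's model.

```python
# Toy pairwise-stability check for a provider peering graph.
# Payoffs are deliberately simplistic: a fixed benefit per direct
# peering partner minus a fixed cost per maintained link.

from itertools import combinations

def payoff(provider, links, benefit=3.0, link_cost=2.0):
    degree = sum(1 for link in links if provider in link)
    return benefit * degree - link_cost * degree

def pairwise_stable(providers, links):
    # No provider should gain by cutting one of its links...
    for link in links:
        for p in link:
            if payoff(p, links) < payoff(p, links - {link}):
                return False
    # ...and no missing link should profit both of its endpoints.
    for pair in combinations(providers, 2):
        link = frozenset(pair)
        if link in links:
            continue
        if all(payoff(p, links | {link}) > payoff(p, links)
               for p in pair):
            return False
    return True

providers = {"telco", "cloud", "app"}
one_link = {frozenset({"telco", "cloud"})}          # sparse graph
full = {frozenset(p) for p in combinations(providers, 2)}
stable = pairwise_stable(providers, one_link)
fully_peered = pairwise_stable(providers, full)
```

With benefit above cost, the sparse graph is unstable (both endpoints of any missing link would profit from adding it), while the fully peered graph is stable, a toy analogue of the conditions the paper derives for transit-peering structures.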
