Network Cost Reduction in Cloud and Fog Computing Environments

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Network Virtualization and Edge/Fog Computing".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 26372

Special Issue Editors


Guest Editor
Department of Informatics, Faculty of Information Science and Informatics, Ionian University, 49100 Corfu, Greece
Interests: medium access control in ad hoc networks; performance issues in wireless networks; information dissemination; service discovery; facility location; energy consumption and recharging in wireless sensor networks; network cost reduction in cloud computing environments; routing in wireless sensor networks; MAC for vehicular networks; synchronization issues in distributed systems; cloud gaming; smart agriculture

Guest Editor
Department of Informatics & Telecommunications, Faculty of Informatics & Telecommunications, University of Ioannina, 47100 Arta, Greece
Interests: facility location; computer networking; network simulation; programming; sensor networks; Internet of Things

Guest Editor
Department of Informatics, Ionian University, Tsirigoti Square 7, 49100 Corfu, Greece
Interests: multimedia cloud computing; cloud gaming; (mobile-) edge computing; fog computing; performance issues in IoT and wireless networks; facility location; resource allocation; server/service placement; distributed interactive, immersive and future-generation applications; social networks; network resource/cost optimization

Special Issue Information

Dear Colleagues,

Cloud computing has attracted significant attention in recent years by delivering affordable, on-demand access to online services. However, current technological trends, such as the Internet of Things (IoT), smart cities, and vehicular networks, together with emerging applications, like those envisaged by 5G (and beyond) telecommunications or immersive interactive media (e.g., virtual reality communications), may impose stringent networking, energy, capacity, budget, or other constraints. Service providers therefore benefit from intelligent Fog and its complementary (Mobile) Edge Computing to offload services near the data sources. In parallel, new prediction and optimization models have allowed the integration of nature-inspired algorithms as well as Artificial Intelligence (AI)- and Machine Learning (ML)-based mechanisms as critical components for network performance enhancement and real-time big data analysis.

Still, regardless of the selected approach, new challenges manifest. These networks need to balance an increasing service workload, efficiently manage a large number of users, guarantee their QoS/QoE, and optimize network utilization. Improving access to the provided (and often limited) resources has thus become an exceedingly urgent problem. Although environment-specific economic policies can be enforced to modulate service provisioning, the pervasiveness of resources, the heterogeneity of application requirements, and the mobility of users generate crucial interconnection performance issues. Addressing them requires, on the one hand, intensive tuning of complex architectures and location-allocation algorithms and, on the other, novel methods that allow the network to dynamically adapt its behavior in a scalable fashion. In any case, the common goal recognized by all service providers is network cost reduction.
This Special Issue solicits original research papers and comprehensive reviews on all areas related to network cost reduction and resource optimization for the cloud/fog (or edge) ecosystem. We welcome papers that propose and examine new ideas on both the theoretical (e.g., algorithms, protocols, and/or mathematical and analytical models) and practical (e.g., experimental results, simulation tools, and/or implementation designs) fronts. Relevant topics include, but are not limited to, the following:

  • Joint optimization of computing and communication resources in the cloud, fog (or edge);
  • AI- and ML-based approaches for configuring or managing cloud, fog (or edge) systems;
  • Tradeoffs between application performance, resource utilization, energy consumption, and other performance metrics;
  • Business pricing and cost models for cloud, fog (or edge) service provisioning;
  • Policies and recommendations for intelligent cloud, fog (or edge) architectures;
  • Network load balancing and data traffic management methods;
  • Service placement and facility location-allocation networking issues and solutions;
  • Network softwarization and virtualization approaches, including slicing, SDN, and NFV;
  • Green, cognitive, and serverless communications and networks;
  • Dynamic and scalable resource allocation and monitoring;
  • Cross-layer design and optimization;
  • Content-aware and semantic network cost reduction models;
  • Performance modeling and prediction;
  • Architectures, analytical models, simulation tools, testbeds and prototypes;
  • QoS/QoE-oriented network adaptation;
  • Competition cost strategies and game-theoretic models.

Dr. Konstantinos Oikonomou
Dr. Georgios Tsoumanis
Dr. Athanasios Tsipis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, navigate to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud/fog/edge computing
  • network cost
  • performance optimization
  • QoS/QoE
  • service placement
  • resource location-allocation
  • analytics
  • load balancing
  • energy efficiency
  • operations research

Published Papers (4 papers)


Research


23 pages, 884 KiB  
Article
Optimal Pricing in a Rented 5G Infrastructure Scenario with Sticky Customers
by Marta Flamini and Maurizio Naldi
Future Internet 2023, 15(2), 82; https://doi.org/10.3390/fi15020082 - 19 Feb 2023
Viewed by 1041
Abstract
The ongoing deployment of 5G is accompanied by architecture and pricing decisions. Network sharing is a critical feature, allowing operators to reduce their costs but introducing a mixed partnering/competition situation: the infrastructure owner, while renting out their infrastructure to virtual operators (who act as customers), also provides services to end customers, competing with those same virtual operators. Pricing is the lever through which an optimal balance between the two roles is accomplished. However, pricing may not be the only variable affecting customers' choices; customers may prefer (stick to) one operator for several reasons. In this paper, we formulate a game model to analyse the optimal pricing decisions for operators in the presence of such sticky customer behaviour. After concluding that the game does not allow for a Nash equilibrium, we consider a case where one of the parties (the infrastructure owner, the virtual operators, or the regulator) is responsible for setting prices and analyse how operators' profits are impacted when price-setting powers are shifted among the parties. The scenario where the regulator sets prices leads to the lowest profits for the operators, even lower than when competitors set prices.
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
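The stickiness idea the abstract describes can be illustrated with a toy duopoly model. This is a hypothetical sketch, not the paper's actual game: a stickiness parameter `s` in [0, 1] limits how much demand migrates to the cheaper operator, and all parameter values (`sensitivity`, `cost`, the prices) are illustrative.

```python
def market_shares(p1, p2, s, sensitivity=0.5):
    """Split a unit mass of customers between two operators.

    With full stickiness (s = 1) each operator keeps its 50% base share;
    with s = 0 demand shifts linearly with the price gap.
    """
    shift = (1 - s) * sensitivity * (p2 - p1)  # cheaper operator gains share
    share1 = min(1.0, max(0.0, 0.5 + shift))
    return share1, 1.0 - share1

def profits(p1, p2, s, cost=1.0):
    """Per-operator profit = (price - unit cost) * market share."""
    q1, q2 = market_shares(p1, p2, s)
    return (p1 - cost) * q1, (p2 - cost) * q2

# With sticky customers, undercutting the rival wins less extra share,
# so the more expensive operator retains more of its profit.
low_stickiness = profits(p1=2.0, p2=3.0, s=0.0)
high_stickiness = profits(p1=2.0, p2=3.0, s=0.9)
```

In this toy setting, raising `s` dampens the reward for undercutting, which is the intuition behind pricing with sticky customers.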

22 pages, 3265 KiB  
Article
Engineering Resource-Efficient Data Management for Smart Cities with Apache Kafka
by Theofanis P. Raptis, Claudio Cicconetti, Manolis Falelakis, Grigorios Kalogiannis, Tassos Kanellos and Tomás Pariente Lobo
Future Internet 2023, 15(2), 43; https://doi.org/10.3390/fi15020043 - 22 Jan 2023
Cited by 7 | Viewed by 2619
Abstract
In terms of the calibre and variety of services offered to end users, smart city management is undergoing a dramatic transformation. The parties involved in delivering pervasive applications can now solve key issues in the big data value chain, including data gathering, analysis, processing, storage, curation, and real-world data visualisation. This trend is being driven by Industry 4.0, which calls for the servitisation of data and products across all industries, including the field of smart cities, where people, sensors, and technology work closely together. To implement reactive services such as situational awareness, video surveillance, and geo-localisation, while constantly preserving the safety and privacy of affected persons, the data generated by omnipresent devices need to be processed fast. This paper proposes a modular architecture to (i) leverage cutting-edge technologies for data acquisition, management, and distribution (such as Apache Kafka and Apache NiFi); (ii) develop a multi-layer engineering solution for revealing valuable and hidden societal knowledge in the context of smart cities by processing multi-modal, real-time, and heterogeneous data flows; and (iii) address the key challenges in tasks involving complex data flows and offer general guidelines to solve them. To create an effective system for the monitoring and servitisation of smart city assets, with a scalable platform that proves its usefulness in numerous smart city use cases with various needs, we deduced some guidelines from an experimental setting performed in collaboration with leading industrial technical departments. Ultimately, when deployed in production, the proposed data platform will contribute toward the goal of revealing valuable and hidden societal knowledge in the context of smart cities.
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
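A core Kafka mechanism such an architecture relies on is keyed partitioning: records with the same key (e.g., the same sensor ID) always land in the same partition, preserving per-sensor ordering while allowing partitions to be consumed in parallel. The broker-free sketch below illustrates only that idea; the sensor names and partition count are illustrative, and Kafka's real default partitioner uses murmur2 hashing, whereas CRC32 is used here just to keep the sketch stdlib-only.

```python
import zlib

NUM_PARTITIONS = 4

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a record key to a partition."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

def publish(log, key, value):
    """Append a record to an in-memory 'topic' (one list per partition)."""
    log[partition_for(key)].append((key, value))

topic = {p: [] for p in range(NUM_PARTITIONS)}
for sensor, reading in [("cam-01", 17), ("air-02", 41),
                        ("cam-01", 18), ("air-02", 42)]:
    publish(topic, sensor, reading)

# All readings of a given sensor sit in one partition, in arrival order.
cam_partition = partition_for("cam-01")
```

Because the mapping is deterministic, a consumer assigned `cam_partition` sees every `cam-01` reading in order, which is what makes per-device stream processing feasible at scale.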

21 pages, 919 KiB  
Article
A Cost-Aware Framework for QoS-Based and Energy-Efficient Scheduling in Cloud–Fog Computing
by Husam Suleiman
Future Internet 2022, 14(11), 333; https://doi.org/10.3390/fi14110333 - 14 Nov 2022
Cited by 3 | Viewed by 1579
Abstract
Cloud–fog computing is a large-scale service environment developed to deliver fast, scalable services to clients. The fog nodes of such environments are distributed in diverse places and operate independently, deciding which data to process locally and which data to send remotely to the cloud for further analysis; a Service-Level Agreement (SLA) is employed to govern the Quality of Service (QoS) requirements the cloud provider owes such nodes. The provider experiences varying incoming workloads from heterogeneous fog and Internet of Things (IoT) devices, each of which submits jobs with various service characteristics and QoS requirements. To execute fog workloads and meet its SLA obligations, the provider allocates appropriate resources and utilizes load-scheduling strategies that effectively manage the execution of fog jobs on cloud resources. Failing to fulfill such demands causes extra network bottlenecks, service delays, and energy constraints that are difficult to maintain at run-time. This paper proposes a joint energy- and QoS-optimized performance framework that tolerates the delay and energy risks to the cost performance of the cloud provider. The framework employs scheduling mechanisms that consider the SLA-penalty and energy impacts of data-communication, service, and waiting performance metrics on cost reduction. The findings prove the framework's effectiveness in mitigating energy consumption due to QoS penalties and therefore reducing the gross scheduling cost.
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
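The cost trade-off the abstract describes — weighing energy consumption against SLA penalties when placing jobs — can be sketched with a simple greedy scheduler. This is an illustrative toy, not the paper's algorithm: the node parameters, penalty model, and job values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    speed: float             # work units per second
    energy_per_unit: float   # energy cost per work unit
    busy_until: float = 0.0  # time at which the node becomes free

def schedule(jobs, nodes, penalty_rate=2.0):
    """Greedy assignment: for each job (size, deadline), pick the node
    minimizing energy cost plus a linear SLA penalty for finishing late."""
    plan = []
    for size, deadline in jobs:
        def cost(n):
            finish = n.busy_until + size / n.speed
            lateness = max(0.0, finish - deadline)
            return size * n.energy_per_unit + penalty_rate * lateness
        best = min(nodes, key=cost)
        best.busy_until += size / best.speed
        plan.append((size, best.name))
    return plan

# A fast-but-power-hungry cloud node vs. a slow-but-cheap fog node:
# the loose-deadline job goes to the cheap fog node, the tight-deadline
# job is worth the cloud's energy premium.
cloud = Node("cloud", speed=10.0, energy_per_unit=1.0)
fog = Node("fog", speed=2.0, energy_per_unit=0.2)
plan = schedule([(4.0, 10.0), (20.0, 2.5)], [cloud, fog])
```

Tuning `penalty_rate` shifts the balance between energy saving and QoS compliance, which is the knob such cost-aware frameworks expose.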

Review


0 pages, 2797 KiB  
Review
TinyML for Ultra-Low Power AI and Large Scale IoT Deployments: A Systematic Review
by Nikolaos Schizas, Aristeidis Karras, Christos Karras and Spyros Sioutas
Future Internet 2022, 14(12), 363; https://doi.org/10.3390/fi14120363 - 06 Dec 2022
Cited by 39 | Viewed by 20163
Abstract
The rapid emergence of low-power embedded devices and modern machine learning (ML) algorithms has created a new Internet of Things (IoT) era in which lightweight ML frameworks such as TinyML open new opportunities for ML algorithms running within edge devices. In particular, the TinyML framework in such devices aims to deliver reduced latency, efficient bandwidth consumption, improved data security, increased privacy, lower costs, and overall network cost reduction in cloud environments. Its ability to enable IoT devices to work effectively without constant connectivity to cloud services, while nevertheless providing accurate ML services, offers a viable alternative for IoT applications seeking cost-effective solutions. TinyML intends to deliver on-premises analytics that bring significant value to IoT services, particularly in environments with limited connectivity. This review article defines TinyML, presents an overview of its benefits and uses, and provides background information based on up-to-date literature. Then, we demonstrate the TensorFlow Lite framework, which supports TinyML, along with analytical steps for the creation of an ML model. In addition, we explore the integration of TinyML with network technologies such as 5G and LPWAN. Ultimately, we anticipate that this analysis will serve as an informational pillar for the IoT/Cloud research community and pave the way for future studies.
(This article belongs to the Special Issue Network Cost Reduction in Cloud and Fog Computing Environments)
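A key technique behind deploying models on ultra-low-power devices with toolchains like TensorFlow Lite is post-training quantization: storing float32 weights as int8 plus a scale and zero point, shrinking the model roughly 4x. The sketch below shows the generic affine-quantization idea in plain Python; it is not code from the review, and the weight values are illustrative.

```python
def quantize(weights):
    """Affine-quantize a list of floats to int8 with a scale and zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # guard against an all-equal tensor
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.81, -0.10, 0.02, 0.46, 0.97]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)

# The reconstruction error is bounded by the quantization step (scale).
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The per-weight error stays below one quantization step, which is why int8 inference typically costs little accuracy while cutting memory and bandwidth, the resources TinyML deployments optimize for.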
