Edge and Cloud Computing Systems and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 November 2023) | Viewed by 11941

Special Issue Editors


Dr. Siti Hafizah Ab Hamid
Guest Editor
Department of Software Engineering, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Interests: software application reliability; software efficiency; machine and deep learning; reinforcement learning; multithreading communication

Dr. Raja Wasim Ahmad
Guest Editor
Department of Information Technology, Ajman University, Ajman 20550, United Arab Emirates
Interests: blockchain; edge computing; IoT

Special Issue Information

Dear Colleagues,

With the recent push for digital integration across sectors and industries, from automotive and healthcare to banking, finance, retail, agriculture, and manufacturing, edge and cloud computing ecosystems must be reliable and effective. Edge and cloud computing converge computation, communication, and storage services through distributed systems and applications. Hence, they open nearly unlimited research and application opportunities while raising unforeseen technological, theoretical, and societal issues and challenges that have yet to be properly characterized.

We are pleased to invite you to contribute your experimental and application-oriented results to this Special Issue, which aims to disseminate recent developments in edge and cloud reliability, management, resources, and services, as well as efficiency and security in edge and cloud systems. This Special Issue will highlight case studies and industrial applications in edge/cloud storage, edge/cloud connections, edge/cloud analytics, edge/cloud artificial intelligence, and edge/cloud processing engines.

This Special Issue aims to welcome both original research articles and reviews. Research areas may include (but are not limited to) the following:

  • Edge and cloud technologies in software systems;
  • Edge and cloud technologies for the Metaverse;
  • Edge and cloud application innovations;
  • Edge and cloud in blockchain;
  • Industry-specific edges;
  • Risk analysis of edge and cloud applications;
  • Trust computing in edge and cloud applications;
  • Quality of Service in edge and cloud applications;
  • Artificial intelligence for edge and cloud applications;
  • Requirements and use cases for edge and cloud applications;
  • Fault tolerance and recovery in edge and cloud technologies;
  • Application regulations and standards with edge and cloud technologies.

Dr. Siti Hafizah Ab Hamid
Dr. Raja Wasim Ahmad
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • edge computing
  • cloud computing
  • machine learning
  • deep learning
  • reinforcement learning
  • collaborative computing
  • performance evaluation
  • computation analytics
  • edge-based application
  • cloud-based application

Published Papers (8 papers)


Research

19 pages, 474 KiB  
Article
TETES: Trust Based Efficient Task Execution Scheme for Fog Enabled Smart Cities
by Ahmad Naseem Alvi, Bakhtiar Ali, Mohamed Saad Saleh, Mohammed Alkhathami, Deafallah Alsadie and Bushra Alghamdi
Appl. Sci. 2023, 13(23), 12799; https://doi.org/10.3390/app132312799 - 29 Nov 2023
Cited by 1 | Viewed by 629
Abstract
The pursuit of a higher quality of life is driving the growth of smart cities, which offer modern communication and information technologies. Smart cities provide multiple applications with smart resource management, such as smart agriculture, intelligent transportation systems, waste management, and energy management. These applications are based on IoT sensor networks with limited processing and computing capabilities that are connected through different types of networks. Due to their limited computational capability, IoT sensor nodes require more time to compute different tasks and must offload some tasks to remotely placed cloud servers for execution. Fog nodes are preferred over the cloud as they are placed in close proximity to the IoT nodes distributed across different networks. This heterogeneity of networks makes the system more vulnerable to malicious attacks: malicious nodes offload complex, computation-heavy tasks to fog nodes to compromise their performance and delay the computing tasks of legitimate nodes. In addition, even after removing malicious nodes, fog nodes are unable to process all legitimate tasks within a specific time frame. In this work, a Trust-based Efficient Task Execution Scheme (TETES) is proposed for fog nodes that scrutinizes the tasks offloaded by malicious nodes and efficiently executes most of the trusted tasks within a stipulated time cycle. Simulation results show that TETES executes more offloaded tasks than the well-known First Come First Serve (FCFS), Longest Task First (LTF), and Shortest Task First (STF) algorithms. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
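To make the scheduling comparison concrete, the following is a minimal, hypothetical Python sketch (not the authors' TETES implementation) that counts how many offloaded tasks fit into one execution cycle under FCFS, STF, LTF, and a simple trust-first ordering; the task list, trust scores, and time budget are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    node_id: str
    exec_time: float   # estimated execution time in ms (hypothetical)
    trust: float       # trust score of the offloading node, 0.0 to 1.0 (hypothetical)

def executed_within_budget(tasks, key, budget):
    """Count how many tasks complete within one scheduling cycle for a given ordering."""
    spent, done = 0.0, 0
    for t in sorted(tasks, key=key):
        if spent + t.exec_time > budget:
            continue  # task does not fit in the remaining cycle time
        spent += t.exec_time
        done += 1
    return done

tasks = [Task("n1", 30, 0.90), Task("n2", 80, 0.20), Task("n3", 10, 0.70),
         Task("n4", 50, 0.95), Task("n5", 40, 0.30)]
BUDGET = 120.0  # ms available to the fog node in one execution cycle (toy value)

print("FCFS       :", executed_within_budget(tasks, lambda t: 0, BUDGET))            # arrival order
print("STF        :", executed_within_budget(tasks, lambda t: t.exec_time, BUDGET))  # shortest first
print("LTF        :", executed_within_budget(tasks, lambda t: -t.exec_time, BUDGET)) # longest first
# Trust-first: favour tasks from highly trusted nodes, breaking ties by shortest time.
print("Trust-first:", executed_within_budget(tasks, lambda t: (-t.trust, t.exec_time), BUDGET))
```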

23 pages, 560 KiB  
Article
Comparison of Cloud-Computing Providers for Deployment of Object-Detection Deep Learning Models
by Prem Rajendran, Sarthak Maloo, Rohan Mitra, Akchunya Chanchal and Raafat Aburukba
Appl. Sci. 2023, 13(23), 12577; https://doi.org/10.3390/app132312577 - 22 Nov 2023
Viewed by 1170
Abstract
As cloud computing rises in popularity across diverse industries, the necessity to compare and select the most appropriate cloud provider for specific use cases becomes imperative. This research conducts an in-depth comparative analysis of two prominent cloud platforms, Microsoft Azure and Amazon Web Services (AWS), with a specific focus on their suitability for deploying object-detection algorithms. The analysis covers both quantitative metrics (upload and download times, throughput, and inference time) and qualitative assessments such as cost effectiveness, machine learning resource availability, deployment ease, and service-level agreement (SLA). Through the deployment of the YOLOv8 object-detection model, this study measures these metrics on both platforms, providing empirical evidence for platform evaluation. Furthermore, this research examines general platform availability and information accessibility to highlight differences in qualitative aspects. This paper concludes that Azure excels in download time (average 0.49 s/MB), inference time (average 0.60 s/MB), and throughput (1145.78 MB/s), while AWS excels in upload time (average 1.84 s/MB), cost effectiveness, ease of deployment, a wider ML service catalog, and superior SLA. The decision between the two platforms ultimately depends on how their performance aligns with business-specific requirements. Hence, this paper ends by presenting a comprehensive comparison organized around such requirements, aiding stakeholders in making informed decisions when selecting a cloud platform for their machine learning projects. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
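For readers interested in how such metrics can be collected, the snippet below is a provider-agnostic sketch of timing upload, inference, and download against a deployed HTTP endpoint; the URLs, image file, and result route are hypothetical placeholders rather than the authors' actual Azure or AWS deployments.

```python
import time
import requests

UPLOAD_URL = "https://example-endpoint/upload"     # hypothetical endpoint
DETECT_URL = "https://example-endpoint/detect"     # hypothetical object-detection endpoint
IMAGE_PATH = "sample.jpg"                          # hypothetical test image

def timed(call):
    """Run a callable and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = call()
    return result, time.perf_counter() - start

with open(IMAGE_PATH, "rb") as f:
    payload = f.read()
size_mb = len(payload) / 1e6

# Upload time (s/MB) and throughput (MB/s), mirroring the units reported in the paper.
_, upload_s = timed(lambda: requests.post(UPLOAD_URL, data=payload, timeout=60))
print(f"upload: {upload_s / size_mb:.2f} s/MB, throughput: {size_mb / upload_s:.2f} MB/s")

# Inference time: round-trip latency of a detection request, normalised by payload size.
_, infer_s = timed(lambda: requests.post(DETECT_URL, files={"image": payload}, timeout=60))
print(f"inference: {infer_s / size_mb:.2f} s/MB")

# Download time: fetch the annotated result back and normalise by the response size.
resp, download_s = timed(lambda: requests.get(DETECT_URL + "/result", timeout=60))  # hypothetical route
print(f"download: {download_s / (len(resp.content) / 1e6):.2f} s/MB")
```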

17 pages, 3884 KiB  
Article
Exploring Performance Degradation in Virtual Machines Sharing a Cloud Server
by Hamza Ahmed, Hassan Jamil Syed, Amin Sadiq, Ashraf Osman Ibrahim, Manar Alohaly and Muna Elsadig
Appl. Sci. 2023, 13(16), 9224; https://doi.org/10.3390/app13169224 - 14 Aug 2023
Viewed by 1039
Abstract
Cloud computing has become a leading technology for IT infrastructure, with many companies migrating their services to cloud servers in recent years. As cloud services continue to expand, the issue of cloud monitoring has become increasingly important. One important metric to monitor is CPU steal time, which measures the amount of time a virtual CPU waits for the actual CPU. In this study, we focus on the impact of CPU steal time on virtual machine performance and the potential problems that can arise. We implement our work using an OpenStack-based cloud environment and investigate intrusive and non-intrusive monitoring methods. Our analysis provides insights into the importance of CPU steal time monitoring and its impact on cloud performance. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
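As a rough illustration of non-intrusive steal-time monitoring inside a guest VM, the sketch below samples the aggregate line of /proc/stat and reports the share of CPU time stolen by the hypervisor; it is a simplified assumption-laden example, not the monitoring tooling used in the paper.

```python
import time

def read_cpu_jiffies():
    """Return the aggregate CPU time counters (jiffies) from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    # Columns: user nice system idle iowait irq softirq steal guest guest_nice
    return [int(v) for v in fields[1:]]

def steal_percent(interval_s=1.0):
    """Sample twice and return the percentage of CPU time stolen during the interval."""
    before = read_cpu_jiffies()
    time.sleep(interval_s)
    after = read_cpu_jiffies()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    steal = deltas[7] if len(deltas) > 7 else 0  # eighth column is "steal"
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    while True:
        print(f"CPU steal time: {steal_percent():.2f}%")
```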

33 pages, 5332 KiB  
Article
Fault-Prone Software Requirements Specification Detection Using Ensemble Learning for Edge/Cloud Applications
by Fatin Nur Jannah Muhamad, Siti Hafizah Ab Hamid, Hema Subramaniam, Razailin Abdul Rashid and Faisal Fahmi
Appl. Sci. 2023, 13(14), 8368; https://doi.org/10.3390/app13148368 - 19 Jul 2023
Cited by 1 | Viewed by 1212
Abstract
Ambiguous software requirements are a significant contributor to software project failure. Ambiguity in software requirements is characterized by the presence of multiple possible interpretations. As requirements documents often rely on natural language, ambiguity is a frequent challenge in industrial software construction, with the potential to result in software that fails to meet customer needs and creates issues for developers. Ambiguities arise from grammatical errors, inappropriate language use, multiple meanings, or a lack of detail. Previous studies have suggested the use of supervised machine learning for ambiguity detection, but limitations in addressing all ambiguity types and a lack of accuracy remain. In this paper, we introduce the fault-prone software requirements specification detection model (FPDM), which incorporates the ambiguity classification model (ACM). The ACM identifies and selects the optimal algorithm to classify ambiguity in software requirements by employing deep learning techniques, while the FPDM utilizes boosting ensemble learning algorithms to detect fault-prone software requirements specifications. The ACM achieved an accuracy of 0.9907, while the FPDM achieved an accuracy of 0.9750. To validate the results, a case study was conducted to detect fault-prone software requirements specifications for 30 edge/cloud applications, as edge/cloud-based applications are becoming crucial and significant in the current digital world. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
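The following toy sketch illustrates the general idea of classifying requirement statements with a boosting ensemble over text features; the tiny dataset and labels are invented, and the pipeline is not the authors' FPDM/ACM implementation.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy requirement statements; label 1 marks a vague/ambiguous (fault-prone) requirement.
requirements = [
    "The system shall respond to user queries within 2 seconds.",
    "The application should be fast and user friendly.",
    "The edge node shall retry a failed upload at most 3 times.",
    "Data must be handled appropriately where possible.",
]
labels = [0, 1, 0, 1]

# TF-IDF features feeding a boosting ensemble classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      GradientBoostingClassifier(random_state=0))
model.fit(requirements, labels)

new_requirement = "The service should usually be available."
prediction = model.predict([new_requirement])[0]
print("fault-prone" if prediction == 1 else "clear")
```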

18 pages, 1227 KiB  
Article
SLMAS: A Secure and Light Weight Mutual Authentication Scheme for the Smart Wheelchair
by Abdulwahab Ali Almazroi, Misbah Liaqat, Rana Liaqat Ali and Abdullah Gani
Appl. Sci. 2023, 13(11), 6564; https://doi.org/10.3390/app13116564 - 28 May 2023
Viewed by 1587
Abstract
The Internet of Things (IoT) empowers individuals to connect to anybody and anything, at any time and anywhere. The application of the IoT in smart cities, particularly in smart healthcare management, can improve patient welfare, user acceptance, the standard of living, and accurate illness monitoring. Powered wheelchairs (PWs) equipped with sensors, computers, and other connected assistive technologies are called smart wheelchairs. Smart wheelchairs with sensing abilities are intended to offer universal connectivity using cloud and edge computing technology. Paralysis has affected numerous notable people, including Stephen Hawking and Max Brito. Authenticity is one of the most important challenges in e-health applications because of the sensitivity of the technology, and it needs to be handled appropriately. To safeguard the transport, usage, and interchange of data between sensor nodes/smart wheelchairs and servers, e-health applications require an authentication method. Because all communication takes place over wireless channels, e-health applications are exposed to various vulnerabilities. Additionally, IoT devices have limited computational and power capacity. To combat these security risks, the present research offers a user authentication technique that is efficient and ensures anonymity. The suggested method creates a secure connection for authorized entities and prevents unauthorized entities from accessing the IoT sensor nodes. It has a lower communication overhead than traditional techniques, making it more effective, and the security of the presented protocol is verified through AVISPA. The proposed scheme, on average, requires only 12.4% more computation cost to execute; compared to existing approaches, this extra computational cost is compensated for by its enhanced security, while the suggested method's communication cost is 46.3% smaller. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
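As background, the sketch below shows a generic nonce-based, HMAC-style mutual authentication exchange between a resource-constrained node and a server sharing a pre-provisioned key; it conveys the flavour of lightweight mutual authentication but is not the SLMAS protocol specified and verified in the paper.

```python
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # pre-provisioned secret shared by the wheelchair node and the server

def tag(key, *parts):
    """HMAC-SHA256 over the concatenated message parts."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# 1. Node -> Server: a fresh nonce.
node_nonce = os.urandom(16)

# 2. Server -> Node: its own nonce plus proof of knowledge of the shared key.
server_nonce = os.urandom(16)
server_proof = tag(SHARED_KEY, b"server", node_nonce, server_nonce)

# 3. Node verifies the server, then answers with its own proof.
assert hmac.compare_digest(server_proof, tag(SHARED_KEY, b"server", node_nonce, server_nonce))
node_proof = tag(SHARED_KEY, b"node", server_nonce, node_nonce)

# 4. Server verifies the node; both sides derive a fresh session key from the nonces.
assert hmac.compare_digest(node_proof, tag(SHARED_KEY, b"node", server_nonce, node_nonce))
session_key = tag(SHARED_KEY, b"session", node_nonce, server_nonce)
print("mutual authentication complete; session key established")
```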

19 pages, 3464 KiB  
Article
LFDC: Low-Energy Federated Deep Reinforcement Learning for Caching Mechanism in Cloud–Edge Collaborative
by Xinyu Zhang, Zhigang Hu, Meiguang Zheng, Yang Liang, Hui Xiao, Hao Zheng and Aikun Xu
Appl. Sci. 2023, 13(10), 6115; https://doi.org/10.3390/app13106115 - 16 May 2023
Cited by 1 | Viewed by 1060
Abstract
The optimization of caching mechanisms has long been a crucial research focus in cloud–edge collaborative environments. Effective caching strategies can substantially enhance the quality of user experience in these settings. Deep reinforcement learning (DRL), with its ability to perceive the environment and develop intelligent policies online, has been widely employed for designing caching strategies. Recently, federated learning combined with DRL has been gaining popularity for optimizing caching strategies while protecting the privacy of training data from eavesdropping attacks. However, online federated deep reinforcement learning algorithms face highly dynamic environments, and real-time training can increase training energy consumption even as it improves caching efficiency. To address this issue, we propose a low-energy federated deep reinforcement learning strategy for caching mechanisms (LFDC) that balances caching efficiency and training energy consumption. The LFDC strategy encompasses a novel energy efficiency model, a deep reinforcement learning mechanism, and a dynamic energy-saving federated policy. Our experimental results demonstrate that the proposed LFDC strategy significantly outperforms existing benchmarks in terms of energy efficiency. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
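To illustrate the federated aspect, the following minimal NumPy sketch averages local policy weights only from edge nodes whose remaining energy budget permits participation in the round; the weights, budgets, and threshold are toy values, and this is not the authors' LFDC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flattened weights of small local caching policies on four edge nodes.
local_weights = [rng.normal(size=8) for _ in range(4)]
energy_budget = [1.00, 0.20, 0.80, 0.05]  # remaining training-energy budget per node (toy values)
ENERGY_THRESHOLD = 0.5                    # nodes below this skip the round to save energy

# Energy-aware selection: only sufficiently provisioned nodes upload their weights this round.
participants = [w for w, e in zip(local_weights, energy_budget) if e >= ENERGY_THRESHOLD]

# Federated averaging (FedAvg) over the participating nodes only.
global_weights = np.mean(participants, axis=0)

print(f"{len(participants)} of {len(local_weights)} nodes participated in this round")
print("aggregated policy weights:", np.round(global_weights, 3))
```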

21 pages, 4296 KiB  
Article
WFO: Cloud-Edge Cooperative Data Offloading Strategy Akin to Water Flow
by Shaonan Li, Yongqiang Xie, Zhongbo Li, Jin Qi, Junjie Xie and Zexin Yan
Appl. Sci. 2023, 13(10), 5867; https://doi.org/10.3390/app13105867 - 10 May 2023
Viewed by 1251
Abstract
The exponential growth of video data in networks has led to video flows occupying a significant proportion of network traffic, causing congestion and poor service quality. To address this issue, it is crucial to offload data quickly and ensure high-quality service for users, especially in the context of cloud-edge collaboration. We propose a strategy for collaborative data offloading between the cloud and the edge that is analogous to water flow (WFO). When users simultaneously access the same data from the same data source, WFO can serve more users within the limited bandwidth of the cloud while maintaining the quality of service. WFO creates a water-flow-like data link between nodes to enable data offloading, using multiple nodes in collaboration to offload data for a single node. Experimental results show that, compared with typical methods such as fair queuing and first-come-first-served, WFO can significantly reduce the data offloading delay, guarantee service quality, and effectively reduce network congestion. Moreover, the number of serving nodes can scale as needed. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
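A simple way to picture collaborative offloading of this kind is to split one client's request into byte ranges served by several cooperating nodes in proportion to their spare bandwidth, as in the hypothetical sketch below (node names and bandwidth figures are invented, and this is not the WFO algorithm itself).

```python
def plan_offload(total_bytes, node_bandwidth):
    """Assign each cooperating node a byte range proportional to its spare bandwidth."""
    total_bw = sum(node_bandwidth.values())
    plan, offset = {}, 0
    nodes = list(node_bandwidth.items())
    for i, (node, bw) in enumerate(nodes):
        # The last node absorbs any rounding remainder so the ranges cover the whole object.
        end = total_bytes if i == len(nodes) - 1 else offset + round(total_bytes * bw / total_bw)
        plan[node] = (offset, end - 1)
        offset = end
    return plan

# A 100 MB video segment split across three cooperating nodes (spare bandwidth in MB/s, toy values).
print(plan_offload(100_000_000, {"edge-a": 40, "edge-b": 25, "cloud": 35}))
```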

15 pages, 4020 KiB  
Article
Detection of Denial of Service Attack in Cloud Based Kubernetes Using eBPF
by Amin Sadiq, Hassan Jamil Syed, Asad Ahmed Ansari, Ashraf Osman Ibrahim, Manar Alohaly and Muna Elsadig
Appl. Sci. 2023, 13(8), 4700; https://doi.org/10.3390/app13084700 - 07 Apr 2023
Viewed by 2914
Abstract
Kubernetes is an orchestration tool that runs and manages container-based workloads. It operates as a collection of different virtual or physical servers that support multiple storage capacities, provide network functionalities, and keep all containerized applications active in a desired state. It also supports a growing range of services deployed as microservices. However, Kubernetes’ scalability has led to a complex network structure with an expanded attack surface. Attackers can, for instance, launch a denial of service (DoS) attack against servers/machines in a Kubernetes cluster by producing a fake traffic load. DoS or distributed denial of service (DDoS) attacks are malicious attempts to disrupt a targeted service by flooding it with network packets. Constant observation of network traffic is extremely important for the early detection of such attacks. Extended Berkeley Packet Filter (eBPF) and eXpress Data Path (XDP) are advanced technologies in the Linux kernel that perform high-speed packet processing. In the case of Kubernetes, eBPF and XDP can be used to protect against DDoS attacks by enabling fast and efficient network security policies. For example, XDP can filter out traffic that is not authorized to access the Kubernetes cluster, while eBPF can monitor network traffic for signs of DDoS attacks, such as excessive traffic from a single source. In this research, we utilize eBPF and XDP to build a detection and observation mechanism that filters out malicious content and mitigates a denial of service attack on Kubernetes. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
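For readers unfamiliar with eBPF/XDP, the sketch below uses the BCC toolkit to attach an XDP program that counts packets per source IPv4 address and flags sources exceeding a threshold from user space; the interface name and threshold are hypothetical, root privileges and the bcc package are assumed, and this is only an observation-side illustration, not the authors' detection and mitigation system.

```python
import socket
import struct
import time

from bcc import BPF

XDP_PROGRAM = r"""
#define KBUILD_MODNAME "xdp_dos_sketch"
#include <uapi/linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>

BPF_HASH(pkt_count, u32, u64);

int xdp_count(struct xdp_md *ctx) {
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    u32 src = ip->saddr;
    pkt_count.increment(src);   /* per-source packet counter */
    return XDP_PASS;            /* observe only; a mitigating variant could return XDP_DROP */
}
"""

DEVICE = "eth0"       # hypothetical interface of the Kubernetes node
THRESHOLD = 10_000    # packets per second per source considered suspicious (toy value)

b = BPF(text=XDP_PROGRAM)
b.attach_xdp(DEVICE, b.load_func("xdp_count", BPF.XDP), 0)
try:
    while True:
        time.sleep(1)
        for key, value in b["pkt_count"].items():
            if value.value > THRESHOLD:
                src = socket.inet_ntoa(struct.pack("I", key.value))
                print(f"possible DoS source: {src} ({value.value} pkts/s)")
        b["pkt_count"].clear()
finally:
    b.remove_xdp(DEVICE, 0)
```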
