Application of Cloud Computing and Distributed Systems

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 8148

Special Issue Editors

Department of Marketing & Business Analytics, Texas A&M University-Commerce, Commerce, TX 75428, USA
Interests: service computing; business analytics; reliable distributed systems; web databases; semantic integration systems

Department of Computer Science, Rochester Institute of Technology, Rochester, NY 14623, USA
Interests: distributed and parallel systems; high-performance computing (HPC); system architecture and resource disaggregation; cloud computing; Internet of Things (IoT); edge computing

Special Issue Information

Dear Colleagues,

Operating systems now govern a wide range of electronic devices, from consumer electronics to cyber-physical systems such as cars and power grids. With increased computational power, the majority of modern computing devices can perform parallel and distributed computing. Over the past decade, the field of intelligent distributed systems research has matured, and numerous related applications are already in use. Service-oriented architecture and cloud computing are the main techniques used to create large-scale distributed systems and provide software services to end users. Enterprise application delivery is increasingly dominated by cloud-native software, whose applications are composed of (micro)services that can be developed and delivered independently using a variety of heterogeneous technologies.

Beyond cloud computing, new types of distributed systems now exist that enable novel classes of applications but present new challenges for data management. Recent efforts toward serverless computing aim to streamline the delivery of cloud programs into production by hiding scaling, capacity planning, and maintenance operations from the developer or operator. Other initiatives avoid communication with the cloud by setting up and executing data-processing environments close to data sources in Internet of Things scenarios (for example, fog and edge computing) for massive smart homes, businesses, and cities (e.g., cloudlets for mobile applications and offline-first technologies for web applications). The use of IoT and distributed sensor systems has increased significantly over the past several years in a variety of application fields, including smart transportation, smart energy systems, smart homes and buildings, smart healthcare, and environmental monitoring. If smart operating and distributed systems are to become truly useful and pervasive, several research challenges must be resolved: the tight integration of sensing and artificial intelligence; dependable and efficient networking; interoperability and scalability; the requirement for reliable autonomy; human interaction; and crucial issues of security, privacy, and trust.

Topics of interest include, but are not limited to, the following:

  • Applications of numerical algorithms in science and engineering;
  • Application case studies for benchmarking and comparative studies of parallel programming models;
  • Numerical methods for large-scale data analysis;
  • Optimization and non-linear problems in parallel and distributed computing;
  • Parallel and distributed programming productivity, usability, and component-based parallel programming;
  • Data-centric parallel and distributed algorithms for exascale computing;
  • Data management in edge devices and the computing continuum;
  • Large-scale data processing applications in science, engineering, business, and healthcare;
  • Emerging trends for computing, machine learning, approximate computing, and quantum computing;
  • Parallel, replicated, and highly available distributed databases;
  • Coupling HPC simulations with in situ data analysis;
  • Parallel and distributed machine learning, knowledge discovery, and data mining;
  • Privacy and trust in parallel and distributed data management and analytics systems;
  • Data analysis in cloud and serverless models;
  • Scheduling algorithms for homogeneous and heterogeneous platforms;
  • Energy and temperature awareness in scheduling and load balancing;
  • Resource management for HPC and clouds;
  • Parallel programming in the edge and in the computing continuum;
  • Machine intelligence in distributed sensor systems and real-time analytics;
  • Cyber-physical systems;
  • Security and privacy in IoT and smart systems;
  • Interdisciplinary applications of IoT and smart systems, including:
    • smart healthcare and digital epidemiology;
    • smart agriculture;
    • smart infrastructures;
    • smart cities;
    • smart factories;
    • smart workspaces.

Dr. Zaki Malik
Dr. M. Mustafa Rafique
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-performance computing
  • cyber-physical systems
  • IoT and cloud computing
  • smart technologies
  • data analytics in distributed systems

Published Papers (7 papers)


Research

18 pages, 576 KiB  
Article
CVL: A Cloud Vendor Lock-In Prediction Framework
by Amal Alhosban, Saichand Pesingu and Krishnaveni Kalyanam
Mathematics 2024, 12(3), 387; https://doi.org/10.3390/math12030387 - 25 Jan 2024
Cited by 1 | Viewed by 1202
Abstract
This paper presents the cloud vendor lock-in prediction framework (CVL), which aims to address the challenges that arise from vendor lock-in in cloud computing. The framework provides a systematic approach to evaluate the extent of dependency between service providers and consumers and offers predictive risk analysis and detailed cost assessments. At the heart of the CVL framework is the Dependency Module, which enables service consumers to input weighted factors that are critical to their reliance on cloud service providers. These factors include service costs, data transfer expenses, security features, compliance adherence, scalability, and technical integrations. The research delves into the critical factors that are necessary for dependency calculation and cost analysis, providing insights into determining dependency levels and associated financial implications. Experimental results showcase dependency levels among service providers and consumers, highlighting the framework’s utility in guiding strategic decision-making processes. The CVL is a powerful tool that empowers service consumers to proactively navigate the complexities of cloud vendor lock-in. By offering valuable insights into dependency levels and financial implications, the CVL aids in risk mitigation and facilitates informed decision-making.
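The weighted-factor scoring described in this abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the factor names, weights, and risk thresholds are invented for the example.

```python
def dependency_score(weights, ratings):
    """Weighted average of factor ratings (each rating in [0, 1])."""
    total_weight = sum(weights.values())
    return sum(weights[f] * ratings[f] for f in weights) / total_weight

def risk_level(score):
    """Map a dependency score to a coarse lock-in risk level."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

# Hypothetical consumer-supplied weights (importance) and ratings
# (how strongly each factor ties the consumer to this provider).
weights = {"service_cost": 3, "data_transfer": 2, "security": 2,
           "compliance": 1, "scalability": 1, "integration": 3}
ratings = {"service_cost": 0.8, "data_transfer": 0.9, "security": 0.5,
           "compliance": 0.4, "scalability": 0.6, "integration": 0.9}

score = dependency_score(weights, ratings)
print(round(score, 3), risk_level(score))  # → 0.742 high
```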
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

13 pages, 1801 KiB  
Article
Dynamic Power Provisioning System for Fog Computing in IoT Environments
by Mohammed Al Masarweh and Tariq Alwada’n
Mathematics 2024, 12(1), 116; https://doi.org/10.3390/math12010116 - 29 Dec 2023
Viewed by 537
Abstract
Large amounts of data are created by sensors in Internet of Things (IoT) services and applications. Directing these data to the cloud is a challenge, as it requires extreme network bandwidth. Fog computing appears as a modern solution to overcome these challenges: it can expand the cloud computing model to the boundary of the network, consequently adding a new class of services and applications with high-speed responses compared to the cloud. Cloud and fog computing offer huge amounts of resources to their clients and devices, especially in IoT environments. However, inactive resources and the large number of applications and servers in cloud and fog computing data centers waste a huge amount of electricity. This paper proposes a Dynamic Power Provisioning (DPP) system for fog data centers, consisting of a multi-agent system that manages the power consumption of the fog resources in local data centers. The suggested DPP system is tested using the CloudSim and iFogSim tools. The outputs show that employing the DPP system in local fog data centers reduced the power consumption for fog resource providers.
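The core idea, a local agent that powers down under-utilized fog resources, might look like the minimal sketch below. The idle threshold, the minimum-active rule, and all names are assumptions, not the paper's DPP design.

```python
# Assumed policy: nodes below IDLE_THRESHOLD utilization are candidates
# for power-down, but at least MIN_ACTIVE nodes stay online to serve
# latency-sensitive IoT requests.
IDLE_THRESHOLD = 0.10
MIN_ACTIVE = 2

def plan_power_states(utilization, min_active=MIN_ACTIVE):
    """Return {node: 'on'|'off'} given per-node utilization in [0, 1]."""
    # Busy nodes always stay on.
    states = {n: "on" for n, u in utilization.items() if u >= IDLE_THRESHOLD}
    # Among idle nodes, keep the busiest until the minimum is met.
    idle = sorted((n for n in utilization if n not in states),
                  key=lambda n: utilization[n], reverse=True)
    for n in idle:
        active = sum(1 for s in states.values() if s == "on")
        states[n] = "on" if active < min_active else "off"
    return states

loads = {"fog-1": 0.55, "fog-2": 0.02, "fog-3": 0.00, "fog-4": 0.30}
print(plan_power_states(loads))
```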
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

26 pages, 870 KiB  
Article
Decentralized News-Retrieval Architecture Using Blockchain Technology
by Adrian Alexandrescu and Cristian Nicolae Butincu
Mathematics 2023, 11(21), 4542; https://doi.org/10.3390/math11214542 - 3 Nov 2023
Cited by 2 | Viewed by 816
Abstract
Trust is a critical element when it comes to news articles, and an important problem is how to ensure trust in the published information on news websites. First, this paper describes the inner workings of a proposed news-retrieval and aggregation architecture employed by a blockchain-based solution for fighting disinformation; this includes a comparison between existing information retrieval solutions. The decentralized nature of the solution is achieved by separating the crawling (i.e., extracting the web page links) from the scraping (i.e., extracting the article information) and having third-party actors extract the data. A majority-rule mechanism is used to determine the correctness of the information, and the blockchain network is used for traceability. Second, the steps needed to deploy the distributed components in a cloud environment seamlessly are discussed in detail, with a special focus on the open-source OpenStack cloud solution. Lastly, novel methods for achieving a truly decentralized architecture based on community input and blockchain technology are presented, thus ensuring maximum trust and transparency in the system. The results obtained by testing the proposed news-retrieval system are presented, and the optimizations that can be made are discussed based on the crawling and scraping test results.
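The majority-rule mechanism mentioned in the abstract can be illustrated with a short sketch: several independent scrapers submit the article text they extracted, and a version is accepted only if a strict majority agrees. This is a toy illustration without the blockchain traceability layer.

```python
from collections import Counter
from hashlib import sha256

def accept_version(submissions):
    """Return the payload backed by a strict majority of scrapers, else None."""
    # Compare digests rather than full texts, as an on-chain system would.
    digests = Counter(sha256(s.encode()).hexdigest() for s in submissions)
    digest, votes = digests.most_common(1)[0]
    if votes > len(submissions) // 2:
        for s in submissions:
            if sha256(s.encode()).hexdigest() == digest:
                return s
    return None  # no consensus among scrapers

reports = ["Mayor opens new bridge.", "Mayor opens new bridge.",
           "Mayor opens new brldge."]  # one scraper garbled the text
print(accept_version(reports))  # → Mayor opens new bridge.
```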
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

29 pages, 6296 KiB  
Article
A High-Performance Federated Learning Aggregation Algorithm Based on Learning Rate Adjustment and Client Sampling
by Yulian Gao, Gehao Lu, Jimei Gao and Jinggang Li
Mathematics 2023, 11(20), 4344; https://doi.org/10.3390/math11204344 - 19 Oct 2023
Viewed by 1042
Abstract
Federated learning is a distributed learning framework designed to protect user privacy, widely applied across various domains. However, existing federated learning algorithms face challenges, including slow convergence, significant loss fluctuations during aggregation, and imbalanced client sampling. To address these issues, this paper introduces a high-performance federated learning aggregation algorithm. This algorithm combines a cyclic adaptive learning rate adjustment strategy with client-weighted random sampling, addressing the aforementioned problems. Weighted random sampling assigns client weights based on their sampling frequency, balancing client sampling rates and contributions to enhance model aggregation. Additionally, it adapts the learning rate based on client loss variations and communication rounds, accelerating model convergence and reducing communication costs. To evaluate this high-performance algorithm, experiments are conducted using well-known datasets MNIST and CIFAR-10. The results demonstrate significant improvements in convergence speed and loss stability. Compared to traditional federated learning algorithms, our approach achieves faster and more stable convergence while effectively reducing training costs.
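The two mechanisms in the abstract, client-weighted random sampling and a cyclic learning-rate schedule, can be sketched as follows. The weight formula and the sinusoidal schedule are plausible stand-ins, not the paper's exact definitions.

```python
import math
import random

def sample_clients(counts, k, rng):
    """Pick k distinct clients, favoring those sampled less often so far.

    Uses Efraimidis–Spirakis weighted sampling without replacement with
    weight w_c = 1 / (1 + counts[c]), i.e. key = u ** (1 / w_c).
    """
    keyed = sorted(counts, key=lambda c: rng.random() ** (1 + counts[c]),
                   reverse=True)
    return keyed[:k]

def cyclic_lr(round_idx, base=0.01, amplitude=0.005, period=10):
    """Learning rate oscillating around `base` across communication rounds."""
    return base + amplitude * math.sin(2 * math.pi * round_idx / period)

counts = {"c1": 9, "c2": 0, "c3": 4}   # how often each client was picked
rng = random.Random(0)
print(sample_clients(counts, 2, rng), round(cyclic_lr(5), 4))
```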
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

24 pages, 458 KiB  
Article
Workflow Scheduling Scheme for Optimized Reliability and End-to-End Delay Control in Cloud Computing Using AI-Based Modeling
by Mustafa Ibrahim Khaleel, Mejdl Safran, Sultan Alfarhood and Michelle Zhu
Mathematics 2023, 11(20), 4334; https://doi.org/10.3390/math11204334 - 18 Oct 2023
Cited by 1 | Viewed by 1171
Abstract
In the context of cloud systems, the effectiveness of placing modules for optimal reliability and end-to-end delay (EED) is directly linked to the success of scheduling distributed scientific workflows. However, the measures used to evaluate these aspects (reliability and EED) are in conflict with each other, making it impossible to optimize both simultaneously. Thus, we introduce a scheduling algorithm for distributed scientific workflows that focuses on enhancing reliability while maintaining specific EED limits. This is particularly important given the inevitable failures of processing servers and communication links. To achieve our objective, we first develop an artificial intelligence-based model that merges an improved version of the wild horse optimization technique with a levy flight approach. This hybrid approach enhances the ability to explore new possibilities effectively. Additionally, we establish a viable strategy for sharing mapping decisions and stored information among processing servers, promoting scalability and robustness—essential qualities for large-scale distributed systems. This strategy not only boosts local search capabilities but also prevents premature convergence of the algorithm. The primary goal of this study is to pinpoint resource placements that strike a balance between global exploration and local exploitation. This entails effectively harnessing the search space and minimizing the inclination toward resources with a high likelihood of failures. Through experimentation in various system configurations, our proposed method consistently outperformed competing workflow scheduling algorithms. It achieved notably higher levels of reliability while adhering to the same EED constraints.
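The levy-flight component mentioned in the abstract can be illustrated with Mantegna's algorithm, a standard way to draw heavy-tailed steps; the wild horse optimization hybrid itself is not reproduced here, and the scale parameter is an assumption.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def perturb(position, scale=0.1, rng=random):
    """Move a candidate placement vector by a Levy flight in each dimension.

    Occasional long jumps help the search escape local optima, while most
    steps stay small and refine the current placement.
    """
    return [x + scale * levy_step(rng=rng) for x in position]

rng = random.Random(42)
print(perturb([0.5, 0.5, 0.5], rng=rng))
```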
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

28 pages, 539 KiB  
Article
A Hybrid Many-Objective Optimization Algorithm for Job Scheduling in Cloud Computing Based on Merge-and-Split Theory
by Mustafa Ibrahim Khaleel, Mejdl Safran, Sultan Alfarhood and Michelle Zhu
Mathematics 2023, 11(16), 3563; https://doi.org/10.3390/math11163563 - 17 Aug 2023
Viewed by 997
Abstract
Scheduling jobs within a cloud environment is a critical area of research that necessitates meticulous analysis. It entails the challenge of optimally assigning jobs to various cloud servers, each with different capabilities, and is classified as a non-deterministic polynomial (NP) problem. Many conventional methods have been suggested to tackle this difficulty, but they often struggle to find nearly perfect solutions within a reasonable timeframe. As a result, researchers have turned to evolutionary algorithms to tackle this problem. However, relying on a single metaheuristic approach can be problematic as it may become trapped in local optima, resulting in slow convergence. Therefore, combining different metaheuristic strategies to improve the overall system enactment is essential. This paper presents a novel approach that integrates three methods to enhance exploration and exploitation, increasing search process efficiency and optimizing many-objective functions. In the initial phase, we adopt cooperative game theory with merge-and-split techniques to train computing hosts at different utilization load levels, determining the ideal utilization for each server. This approach ensures that servers operate at their highest utilization range, maximizing their profitability. In the second stage, we incorporate the mean variation of the grey wolf optimization algorithm, making significant adjustments to the encircling and hunting phases to enhance the exploitation of the search space. In the final phase, we introduce an innovative pollination operator inspired by the sunflower optimization algorithm to enrich the exploration of the search domain. By skillfully balancing exploration and exploitation, we effectively address many-objective optimization problems. To validate the performance of our proposed method, we conducted experiments using both real-world and synthesized datasets, employing CloudSim software version 5.0. The evaluation involved two sets of experiments to measure different evaluation metrics. In the first experiment, we focused on minimizing factors such as energy costs, completion time, latency, and SLA violations. The second experiment, in contrast, aimed at maximizing metrics such as service quality, bandwidth utilization, asset utilization ratio, and service provider outcomes. The results from these experiments unequivocally demonstrate the outstanding performance of our algorithm, surpassing existing state-of-the-art approaches.
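The merge step of merge-and-split coalition formation can be sketched as a simple rule: two coalitions of hosts merge only if the merged coalition's utility is at least the sum of their separate utilities. The utility function below (load revenue minus a fixed per-coalition overhead and an overload penalty) is a made-up stand-in for the paper's model.

```python
def utility(loads, overhead=0.3, capacity=1.0):
    """Toy coalition utility: revenue from served load, minus a fixed
    per-coalition operating overhead, minus an SLA penalty if the
    coalition's total load exceeds its capacity."""
    load = sum(loads)
    penalty = 2.0 * max(0.0, load - capacity)
    return load - overhead - penalty

def try_merge(a, b):
    """Merge-and-split merge rule: return the merged coalition if it is
    at least as good as keeping a and b separate, else None."""
    merged = a + b
    if utility(merged) >= utility(a) + utility(b):
        return merged
    return None

# Two lightly loaded coalitions merge (one overhead is saved)...
print(try_merge([0.2], [0.3]))
# ...but heavily loaded ones stay apart (merging would overload capacity).
print(try_merge([0.6], [0.7]))
```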
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)

30 pages, 5099 KiB  
Article
Toward Optimal Load Prediction and Customizable Autoscaling Scheme for Kubernetes
by Subrota Kumar Mondal, Xiaohai Wu, Hussain Mohammed Dipu Kabir, Hong-Ning Dai, Kan Ni, Honggang Yuan and Ting Wang
Mathematics 2023, 11(12), 2675; https://doi.org/10.3390/math11122675 - 12 Jun 2023
Cited by 2 | Viewed by 1599
Abstract
Most enterprise customers now choose to divide a large monolithic service into large numbers of loosely-coupled, specialized microservices, which can be developed and deployed separately. Docker, as a light-weight virtualization technology, has been widely adopted to support diverse microservices. At the moment, Kubernetes is a portable, extensible, and open-source orchestration platform for managing these containerized microservice applications. To adapt to frequently changing user requests, it offers an automated scaling method, Horizontal Pod Autoscaler (HPA), that can scale itself based on the system’s current workload. The native reactive auto-scaling method, however, is unable to foresee the system workload scenario in the future to complete proactive scaling, leading to QoS (quality of service) violations, long tail latency, and insufficient server resource usage. In this paper, we suggest a new proactive scaling scheme based on deep learning approaches to make up for HPA’s inadequacies as the default autoscaler in Kubernetes. After meticulous experimental evaluation and comparative analysis, we use the Gated Recurrent Unit (GRU) model with higher prediction accuracy and efficiency as the prediction model, supplemented by a stability window mechanism to improve the accuracy and stability of the prediction model. Finally, with the third-party custom autoscaling framework, Custom Pod Autoscaler (CPA), we packaged our custom autoscaling algorithm into a framework and deployed the framework into the real Kubernetes cluster. Comprehensive experiment results prove the feasibility of our autoscaling scheme, which significantly outperforms the existing Horizontal Pod Autoscaler (HPA) approach.
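The proactive scaling loop can be sketched as follows, with a simple moving average standing in for the paper's GRU predictor and an HPA-style formula (desired = ceil(replicas × metric / target)) for the scaling decision. All numbers are illustrative.

```python
import math

def predict_next(history, window=3):
    """Moving-average stand-in for the GRU load predictor."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def desired_replicas(current_replicas, predicted_load, target_per_replica,
                     lo=1, hi=20):
    """HPA-style decision on the *predicted* load, clamped to [lo, hi]."""
    per_replica = predicted_load / current_replicas
    desired = math.ceil(current_replicas * per_replica / target_per_replica)
    return max(lo, min(hi, desired))

history = [120, 150, 180]        # requests/s in recent intervals
pred = predict_next(history)     # 150.0
print(pred, desired_replicas(current_replicas=3, predicted_load=pred,
                             target_per_replica=40))  # → 150.0 4
```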
(This article belongs to the Special Issue Application of Cloud Computing and Distributed Systems)
