Article

Exploring the Impact of Resource Management Strategies on Simulated Edge Cloud Performance: An Experimental Study

by Nikolaos Kaftantzis *, Dimitrios G. Kogias and Charalampos Z. Patrikakis
Department of Electrical and Electronics Engineering, University of West Attica, 12241 Athens, Greece
* Author to whom correspondence should be addressed.
Network 2024, 4(4), 498-522; https://doi.org/10.3390/network4040025
Submission received: 29 July 2024 / Revised: 9 October 2024 / Accepted: 4 November 2024 / Published: 6 November 2024

Abstract

Edge computing has emerged as a critical technology for meeting the needs of latency-sensitive applications and reducing network congestion. This goal is achieved mainly by distributing computational resources closer to end users and away from traditional data centers. Optimizing the utilization of limited edge cloud resources and improving the performance of edge computing systems requires efficient resource-management techniques. In this paper, we primarily discuss the use of simulation tools—EdgeSimPy in particular—to assess edge cloud resource management methods. We give a summary of the main difficulties in managing a limited pool of resources in edge cloud computing, and we describe how simulation programs like EdgeSimPy work and evaluate resource management algorithms. The scenarios we consider for this evaluation involve edge computing while taking into account variables like user location, resource availability, and network structure. We evaluate four resource management algorithms in a fixed, simulated edge computing environment, measuring CPU usage, memory usage, disk usage, power consumption, and latency to determine which method performs best in that scenario. This allows us to identify the optimal algorithm for tasks that prioritize minimal resource use, low latency, or a combination of the two. Furthermore, we outline open research gaps and potential paths forward for improving the reliability and realism of edge cloud simulation tools.

1. Introduction

The increasing use of compute-intensive applications with tight latency and bandwidth requirements has highlighted the limits of traditional, centralized cloud data centers. As a result, the IT sector has been shifting toward edge computing, a paradigm that decentralizes processing power closer to data sources to facilitate data handling at the network’s edge, significantly reducing the latency associated with data transmission over the Internet [1]. This approach allows for real-time data processing, making it highly suitable for latency-sensitive applications such as autonomous vehicles, IoT devices, and augmented reality.
Despite its advantages, edge computing poses significant technical obstacles. Large-scale edge data centers are frequently impractical in metropolitan locations, resulting in infrastructures made up of smaller clusters of low-powered devices scattered throughout constricted physical areas with limited resources. As a result, developing and testing effective resource management strategies is crucial to ensuring that they achieve specified goals within these restrictions. Furthermore, the financial, time, and logistical challenges of testing such policies in real-world edge testbeds make it difficult to experiment with new resource-management strategies. Power outages, network disruptions, and hardware constraints all impede the implementation and evaluation of these strategies in practical contexts [2].
Furthermore, the dynamic nature of edge networks, as well as the mobility of edge devices, adds complexity to workload distribution and resource allocation. Addressing these difficulties while minimizing energy consumption and lowering delay complicates the development of resource management algorithms by mandating the study of several interacting variables. To solve these challenges, this work uses the EdgeSimPy simulation tool [3], which allows for a systematic and controlled examination of resource management strategies in a simulated environment.
In this context, our research focuses on analyzing and comparing alternative resource management algorithms using specific metrics (e.g., CPU, memory, disk utilization, power consumption, and latency) to establish their effectiveness in managing resources in a limited edge environment. The simulation system we use includes realistic scenarios like dynamic network connectivity and changeable user locations, bridging the gap between theoretical models and real-world applications. This approach allows researchers to test the behavior of these algorithms in dynamic contexts, supporting the iterative refining of resource management strategies before their real-world deployment.
A variety of edge cloud simulation tools have been developed over the years to solve these difficulties; however, many research articles do not provide a detailed experimental evaluation of alternative algorithms in a simulated environment using standardized performance criteria. Our study intends to fill this gap by taking a more experimental approach to testing algorithms in various settings, evaluating their results, and determining whether their performance is consistent with their design characteristics. By doing so, we can provide a nuanced comparison of the algorithms to determine which, if any, have an advantage over others in specific scenarios.
The objectives of this study are fourfold:
  • To conduct edge computing experiments using the dedicated EdgeSimPy simulation tool, which offers a thorough and controlled environment for examining resource management strategies.
  • To systematically investigate the challenges of resource management in edge cloud infrastructures.
  • To present and evaluate well-known resource management algorithms in a simulated setting, each exploiting distinct characteristics to maximize resource utilization and enhance system efficiency.
  • To assess the effectiveness of these algorithms using key metrics such as CPU, memory, disk usage, power consumption, and latency. The results are analyzed to extract actionable insights that inform future research and practical applications.
The remainder of this paper is organized as follows. Section 2 presents an overview of the edge simulation landscape, resource management strategies, and their contributions. Section 3 reviews the concepts of cloud computing, edge computing, and the resource management algorithms used in this study. It also introduces the EdgeSimPy tool and highlights the core contributions of this paper. Section 4 outlines the experimental setup, simulation configurations, and performance evaluation results. Section 5 discusses the limitations and challenges encountered, and Section 6 provides the conclusions and potential future research directions.

2. Related Work

Resource allocation, optimization, and efficiency in edge computing deployments have been the subject of numerous studies. Here, we present a summary of relevant research that has advanced our knowledge of resource-management techniques in edge cloud settings.
In [4], the authors focus on resource allocation. The resource management metrics they examine fall into two groups: resource scheduling and resource provisioning. Their scheduling metrics are allocation, monitoring, and mapping; their provisioning metrics are detection, selection, and mapping of the load distribution. A number of other factors, or Key Performance Indicators (KPIs), were also examined, including Service Level Agreement (SLA) compliance, QoS, scalability, latency, VM placement, failure rates, accuracy, resource utilization, cost, and energy consumption. This survey makes an extensive contribution, although it covers only a small number of research publications that deal with edge, cloud, and fog computing.
In [5], the authors examine the state of edge cloud offloading algorithms and focus on the difficulties, approaches, and potential paths forward in this quickly developing field. The authors address latency, energy usage, network bandwidth, and resource limitations in their thorough examination of the problems associated with shifting computation-intensive workloads from edge devices to the cloud. They examine current offloading algorithms and classify them according to their goals, methods for making decisions, and standards for optimization. Based on the collected literature, the authors cover a range of offloading approaches for single and multiple servers, offline and online algorithms, machine learning strategies, and heuristic approaches. Moreover, the authors offer viewpoints regarding the prospects of edge cloud offloading, emphasizing new developments, avenues for investigation, and issues that require attention.
Similarly, the authors of [6] offer a taxonomy of resource-management techniques in fog computing. The taxonomy takes the following categories into account: resource allocation, load balancing, task offloading, resource scheduling, resource provisioning, and application placement. Their main goal was to organize the material using resource-management theories. They included information on the case study, strategy, performance metric, evaluation instrument, benefits, and drawbacks for each resource-management approach. Overall, this study gave information about previous research on each resource-management strategy, but it only looked at fog computing, excluding the potentially fascinating inclusion of edge computing. Furthermore, the study took a purely exploratory approach to discussing the solution possibilities. Put another way, the work is an analytical review and discussion of previous resource management studies.
The authors of [7] propose two optimal edge server deployment schemes and three collaborative resource allocation optimization algorithms. The research aims to minimize the access delay of edge servers and optimize resource allocation in MEC. It discusses the impact of network nodes, requested data volume, and mobile devices on access delay, as well as the comparison of different algorithms. The study also introduces the concept of SDN technology and its role in optimizing edge server deployment. Additionally, it highlights the significance of MEC in providing real-time services with low transmission delay and its potential applications in various fields.
The authors of [8] investigate where servers should be placed in a cloud computing system. They suggest a two-step method: an online stage for the dynamic aspects of user movement and an offline stage for determining the best server locations. According to experimental data, the suggested method increases base station response time fairness by 71.60% and decreases system reaction time by 47.37%.
Finally, a list of optimization metrics was provided in [9] to address issues with resource management and service placement. The indicators that are taken into consideration include latency, blockage probability, congestion ratio, cost, energy usage, and resource use. The authors’ findings suggest that more studies have to be performed on issues pertaining to service placement problems, optimization techniques, and evaluation environments.
Numerous simulation tools [10] have been developed, and a range of solutions have been proposed to address edge computing difficulties, according to the evaluation of relevant work. However, there is still an apparent gap in the literature when it comes to experimental research that thoroughly tests algorithms in simulated environments using standardized performance criteria. With an emphasis on an experimental methodology that involves methodically testing algorithms in controlled environments, this work seeks to close this gap. The study compares each algorithm to see if any shows a discernible advantage over the others and assesses whether the observed results match the theoretical predictions of each method.

3. Background

3.1. Edge Computing

The world is becoming more digitalized, producing enormous amounts of data from across various industries that need to be processed quickly for real-time applications. Although small- and medium-sized enterprises and research organizations can now perform computations without owning infrastructure thanks to cloud technologies, end users still need to send and receive data to and from data centers [11].
On the other hand, edge computing (Figure 1) is well suited to real-time applications since it computes at the location of the data. Computational power is brought closer to the locations where data are generated and consumed by positioning edge servers closer to end users. This lowers latency and allows for faster data processing. By distributing computing resources between edge nodes and cloud data centers, edge cloud architecture uses devices such as smartphones and Internet of Things sensors as endpoints and data sources to improve data processing, storage, and transmission efficiency.
The primary benefit of edge computing is its capacity to process data locally, at, or close to the location where it is generated. Applications where quick decision making is critical, such as industrial automation, smart cities, autonomous cars, and healthcare, require local processing capabilities. Edge computing increases the speed and efficiency of data processing, resulting in faster insights and actions by reducing the need to transfer massive volumes of data to remote cloud data centers [12].
Infrastructure for edge computing usually comprises edge servers and other devices that are positioned carefully to perform particular tasks. These edge servers, which are often located at the edge of the network and closer to end users, offer processing capabilities to support cloud data centers. Better resource usage, increased scalability, and enhanced resilience to network outages and congestion are all made possible by this distributed architecture. Furthermore, edge devices—such as smartphones and Internet of Things sensors—serve as data sources and processing nodes, adding to the ecosystem’s adaptability and durability [2].
Despite its numerous advantages, edge computing also presents challenges, particularly in terms of resource management, security, and interoperability [13]. The distributed nature of edge infrastructures requires sophisticated algorithms for efficient resource allocation, task scheduling, and load balancing. Additionally, ensuring the security and privacy of data processed at the edge is critical, given the potential exposure to various threats. Interoperability between diverse devices and systems is another key consideration, necessitating standardization and robust communication protocols.

3.2. Resource Management Algorithms

Optimizing the performance and scalability of edge cloud environments requires effective resource management. Because edge environments are dynamic and heterogeneous, resource management activities, including scheduling workloads, allocating resources, and optimizing energy use, are difficult.
The development and implementation of resource management algorithms are greatly impacted by the various complications that arise from optimizing energy usage and minimizing latency in edge computing scenarios. Because edge devices range in power from robust servers to low-power Internet of Things devices, scalable and adaptable algorithms that can adjust to different processing capacities and operating environments are required [14]. Additionally, resource management solutions must be flexible enough to quickly adapt in order to maintain low latency and energy efficiency due to dynamic and unpredictable workloads, which are typified by changing data rates and occasional surges. This optimization is made more difficult by the constrained computational capacity and strict energy constraints of many edge devices, which necessitate a careful balance between local processing and task offloading to more powerful nodes. A further degree of complexity is introduced by real-time processing needs, which call for cooperative processing among edge devices and latency-aware scheduling. These difficulties force the creation of complex, context-aware algorithms that take security and privacy issues into account while being able to perform dynamic adaptation, predictive analytics, and intelligent offloading. Thus, in order to efficiently balance energy consumption and latency, resource management in edge computing entails a multimodal strategy that incorporates real-time decision-making, sophisticated optimization techniques, and collaborative processing strategies [15].
In conventional edge cloud environments, resource management plays a vital role in assigning computing resources such as CPU, memory, storage, and network. The energy resource is crucial, particularly for edge computing. For instance, a smartphone user prefers not to frequently recharge their device; therefore, some sensors should not require regular charging [16]. Developers should be aware of how much power is required for a given application and how data can be delivered or withdrawn from the device, especially for edge analytics. This type of issue eventually becomes an optimization issue with scheduling and task placement. There are a few key prerequisites for edge platform management in general [2].
As shown in Table 1, the first and most important requirement is scalability, which calls for the capacity to manage a sizable number of edge devices with various functionalities and communication protocols [11]. Security is critical, necessitating support for integrity checks to guarantee data integrity and confidentiality inside the infrastructure, as well as privacy-preserving mechanisms for security tokens. It is imperative to consider heterogeneity by providing support for diverse hardware and software configurations to facilitate smooth integration and interoperability. To preserve system performance and stability, the availability and mobility of hardware and software components must be effectively handled. Data protection is essential in order to preserve privacy and secrecy; it is necessary to comply with laws like the GDPR and to make sure that all data are locally stored and encrypted instantly. For effective system upgrades and maintenance, infrastructure performance is critical. This calls for low-latency, lightweight communication protocols like MQTT and high-performance containerized resources with quick provisioning capabilities. Flexibility and agility require application mobility [17]. Finally, in order to effectively handle and analyze the data produced by edge devices in order to glean insightful information and facilitate decision-making processes, support for data management and analytics pipelines is essential [18].
In order to guarantee the resource management algorithms’ applicability and efficacy in edge cloud systems, a set of criteria was used in their selection. Initially, the algorithms were selected based on their proven [19,20,21] capacity to maximize important performance indicators like CPU usage, memory usage, disk usage, and power consumption. Secondly, all algorithms had to demonstrate flexibility in handling the varied and ever-changing workload situations that are characteristic of edge computing. Third, to ensure that the algorithms are applicable to edge deployments in the real world, we gave priority to those that have demonstrated scalability across distributed edge infrastructures. Finally, to improve latency reduction and the user experience overall, the algorithms chosen have to take into account network conditions and geographic closeness.
We specifically look into four resource management algorithms: user-location, power-saving, worst-fit, and best-fit algorithms. These scheduling algorithms, as shown in Table 2, were chosen to offer a thorough grasp of resource allocation and optimization in edge cloud settings.
Each algorithm is described in more detail below (a minimal selection sketch in Python follows the list):
  • By choosing the server with the closest resource capacity match to incoming tasks or the smallest memory block that is available, the best-fit [19] scheduling method optimizes resource allocation. By reducing resource fragmentation and waste, this tactic improves system performance and efficiency. In an edge cloud environment, the best-fit method enhances overall system throughput and responsiveness by giving resources a higher priority depending on suitability.
  • By choosing the server with the greatest resource capacity to match incoming tasks or the largest memory block that is available, the worst-fit [19] scheduling algorithm adopts the opposite strategy from the best-fit method. Despite its counterintuitive name, this technique seeks to maximize resource consumption while minimizing resource fragmentation. By assigning tasks to the largest available resources, the worst-fit method mitigates the possibility of leaving small, unusable gaps between allocations. This approach ensures smoother resource management in computer systems and helps prevent excessive fragmentation, even though it may yield a less-than-ideal resource optimization result. It is used alongside the best-fit algorithm to serve as a baseline for our setup [19].
  • The power-saving algorithm [20] prioritizes reducing energy consumption by dynamically allocating resources according to workload trends and device conditions to the least power-consuming node. Because of their outdoor or remote locations, edge servers sometimes have trouble obtaining dependable power supplies, which makes energy management more difficult and calls for effective power-saving techniques. By dynamically assigning tasks based on energy-efficient servers’ availability, it optimizes energy usage without compromising performance [20].
  • The location-based scheduling algorithm [21] uses geographic data to improve user experience and optimize resource consumption in edge cloud environments. This algorithm handles workloads that are dynamic in nature by dynamically allocating tasks to edge servers according to users’ geographical proximity to the servers. The method makes sure that jobs are directed to the closest edge server by using location-based data, which reduces latency and improves end-user response times. Moreover, the algorithm places the needs of the user first by attempting to provide services as quickly as possible, which enhances system responsiveness and user experience in general. Practically speaking, the location-based algorithm works by continually tracking the geographic coordinates of edge servers and users, altering workload assignments in real-time to account for shifts in server availability and user distribution. Through the use of this proactive strategy, the algorithm efficiently maximizes resource utilization and reduces reaction latency, especially in applications where latency is critical, including real-time communication. The location-based algorithm is an example of a strategic approach to resource management that is in line with the changing needs of edge computing applications because of its user-centric design and usage of location-based data.
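To make the contrast between the first two heuristics concrete, the following minimal Python sketch shows how a best-fit and a worst-fit placement decision could be made over a set of candidate servers. The Server class and its attributes are hypothetical stand-ins rather than EdgeSimPy types; the sketch only illustrates the selection logic described above.

```python
# Illustrative comparison of the best-fit and worst-fit selection heuristics.
# The Server class and its fields are hypothetical stand-ins for the simulator's
# edge server entities; EdgeSimPy's actual attribute names may differ.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    free_cpu: int      # free CPU cores
    free_memory: int   # free memory (MB)

def fits(server: Server, task_cpu: int, task_memory: int) -> bool:
    """A server is a candidate only if it can hold the task at all."""
    return server.free_cpu >= task_cpu and server.free_memory >= task_memory

def best_fit(servers: List[Server], task_cpu: int, task_memory: int) -> Optional[Server]:
    """Pick the candidate whose leftover capacity after placement is smallest."""
    candidates = [s for s in servers if fits(s, task_cpu, task_memory)]
    return min(candidates, key=lambda s: (s.free_cpu - task_cpu, s.free_memory - task_memory), default=None)

def worst_fit(servers: List[Server], task_cpu: int, task_memory: int) -> Optional[Server]:
    """Pick the candidate with the largest free capacity."""
    candidates = [s for s in servers if fits(s, task_cpu, task_memory)]
    return max(candidates, key=lambda s: (s.free_cpu, s.free_memory), default=None)

servers = [Server("rpi4-1", 2, 4096), Server("rpi4-2", 1, 1024), Server("jetson-1", 4, 8192)]
print(best_fit(servers, task_cpu=1, task_memory=512).name)   # tightest match -> rpi4-2
print(worst_fit(servers, task_cpu=1, task_memory=512).name)  # most free capacity -> jetson-1
```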

3.3. EdgeSimPy

The advent of edge computing has sparked the introduction of numerous edge simulators. The simulation tools for edge computing are introduced in this section, followed by a comparative analysis that highlights EdgeSimPy's unique characteristics and contributions as compared to these existing solutions. Our objective is to demonstrate EdgeSimPy's potential benefits and innovative contributions to the edge simulation sector by assessing its capabilities. Table 3 shows a comparison of EdgeSimPy's built-in features with existing frameworks and simulators.
The selection of EdgeSimPy for this study is based on several key considerations that make it a superior choice compared to other available simulation tools, as shown in Table 3. EdgeSimPy is a Python-based framework for modeling and simulating resource management policies. It has a modular architecture consisting of multiple functional abstractions, such as edge servers, network devices, and applications. It offers detailed models for edge-specific scenarios, including resource-constrained devices, intermittent connectivity, and the need for real-time processing. This specificity allows for more accurate and realistic simulations of edge cloud applications.
SimEdgeIntel [22] introduces a versatile edge-caching simulator offering key contributions. It enables several scenarios, heterogeneous device modeling, and rapid mobile network configuration. Integration is made easier by its own architecture for algorithm access. Performance assessments show that the simulator is very adaptable, allows custom models, and has a learning-based caching strategy for cloud collaborative intelligence. SimEdgeIntel’s efficacy for optimizing edge computing settings is currently limited by its lack of robust support for workload jobs and its inability to dynamically shut down servers during simulations.
CloudSim [23] is a flexible simulator for cloud computing infrastructure. Researchers can carry out in-depth analyses of resource allocation, task scheduling, and energy use with the help of CloudSim. It makes detailed simulations possible, which improves the creation of cloud-based apps and the assessment of cloud-management techniques. While cloud simulators like CloudSim lack user mobility and wireless support, network simulators may not cover servers and users.
IOTSim [24] is a dedicated simulator for analyzing IoT applications. This tool enables researchers to study the behavior and performance of IoT applications comprehensively. The focus of this toolkit is on microservice management, mobility, and clustering in edge and fog computing environments. IOTSim facilitates resource management, communication protocols, and data-processing evaluations in IoT scenarios, empowering researchers to optimize IoT solutions and enhance IoT applications’ efficiency.
Because neither CloudSim nor IOTSim can migrate services to other servers or dynamically shut down servers for maintenance during simulations, EdgeSimPy is the most feature-rich simulator and the one that best meets our needs. It lets us schedule network flow based on our infrastructure needs, manage tasks based on our preferred resource management strategies, migrate services to other servers within the infrastructure, and schedule server maintenance operations as part of our simulation.
Complete support for all important elements is where EdgeSimPy shines, making it incredibly flexible, scalable, and suitable for a wide range of edge computing scenarios. It stands out due to its extensive capabilities despite being more tailored for edge scenarios. Task scheduling and maintenance activities are absent from SimEdgeIntel despite its scalability and support for several features. Although CloudSim and IoTSim perform well in cloud and IoT contexts, their flexibility is limited by the lack of support for service migration and maintenance. In general, the most reliable and flexible tool for intricate edge simulations is EdgeSimPy.

3.4. EdgeSimPy Architecture

EdgeSimPy was created especially for modeling and simulating edge cloud scenarios. It offers an adaptable and expandable platform for experimenting with different edge computing scenarios, such as workloads for applications, network setups, and resource-management techniques. Researchers can assess edge computing system performance in a controlled and repeatable setting without incurring the expense and complexity of real-world installations by utilizing simulation-based methodologies. EdgeSimPy facilitates the design and optimization of edge computing installations by allowing researchers to evaluate the effects of resource-management strategies on system performance, scalability, and energy efficiency [3].
EdgeSimPy’s architecture (Figure 2), which consists of four essential layers (Core, Physical, Logical, and Management), has been designed to imitate edge computing scenarios.
The Core layer of EdgeSimPy integrates and synchronizes the Physical, Logical, and Management layers to provide the foundation of the simulation framework. It offers the fundamental parts and functions—such as the simulation engine, configuration management, and event scheduling—that are necessary for the smooth running of the complete system. The Core layer makes sure that all layers are in sync and communicate with each other. This makes simulations accurate and efficient, freeing researchers from the burden of underlying system complexities to concentrate on creating and assessing resource-management strategies.
In order to provide a realistic simulation of resource restrictions and connectivity, the Physical layer models the geographical and operational properties of edge infrastructure, including base stations, network switches, edge servers, and users. The physical layer’s functionalities essentially create a digital duplicate of the infrastructure’s physical components. This makes it easier to study resource-management strategies in an authentic edge computing environment, enabling researchers to assess their efficacy in virtual environments that closely mimic real-world implementations.
The Logical layer serves as the software counterpart of the physical infrastructure. It abstracts the functional aspects of applications, focusing on containerization and supporting various application models, from monolithic to microservices, to facilitate effective resource allocation and task scheduling. The applications that run on the simulated edge servers can be defined here, effectively constituting the simulated software ecosystem. Researchers are able to precisely define the configuration of these applications and how they will be deployed on the edge resources that are available from the logical layer.
Leveraging the capabilities of the lower layers, the Management layer serves as the hub for resource allocation and optimization decisions. Establishing and carrying out resource management policies is the main objective of this stratum. The Management layer can be used by researchers to plan network traffic flows, specify application migration protocols, set up service placement methods, and replicate edge environment maintenance procedures. Essentially, the management layer gives researchers the ability to assess different resource-control techniques and how they affect the simulated edge computing system’s overall effectiveness and performance.
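As an illustration, the sketch below outlines how such a Management-layer policy might be written against EdgeSimPy's documented algorithm interface: a Python function that is invoked once per time step and provisions any pending services. The first-fit ordering shown here is a simple placeholder, and the entity and method names follow the EdgeSimPy examples, so they may differ slightly between versions.

```python
# Sketch of a Management-layer policy following EdgeSimPy's documented algorithm
# interface: a plain Python function invoked once per time step with the
# simulation parameters. Method and attribute names follow the EdgeSimPy
# examples and may differ slightly between versions.
from edge_sim_py import EdgeServer, Service

def first_fit_placement(parameters):
    for service in Service.all():
        # Skip services that already have a host or are currently being provisioned.
        if service.server is None and not service.being_provisioned:
            # The four algorithms in Table 2 differ mainly in how this candidate
            # list is ordered (capacity fit, power profile, or user proximity).
            for edge_server in EdgeServer.all():
                if edge_server.has_capacity_to_host(service=service):
                    service.provision(target_server=edge_server)
                    break
```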

3.5. EdgeSimPy Simulation Process

EdgeSimPy reads JSON [25] files or Python dictionaries and generates the simulated entities; this is shown as step 1 in Figure 3. After loading the scenario, EdgeSimPy iterates until a user-specified stopping criterion is met. During each time step, it executes user-defined resource management policies, invokes the activation regime to update agents’ states, advances the simulation clock, and logs the system state (Figure 3).
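A minimal sketch of this loop, based on EdgeSimPy's documented usage, is shown below; the dataset file name and the stopping criterion are placeholders, and constructor arguments may vary between releases.

```python
# Wiring of the simulation loop described above, based on EdgeSimPy's documented
# usage; constructor arguments may vary between releases, and the dataset file
# name is a placeholder.
from edge_sim_py import Simulator, Service

def resource_policy(parameters):
    # Placeholder: plug in a placement policy such as the sketch in Section 3.4.
    pass

def all_services_provisioned(model) -> bool:
    # Example stopping criterion: halt once every service has been placed.
    return all(service.server is not None for service in Service.all())

simulator = Simulator(
    tick_duration=1,                                   # one second per time step
    tick_unit="seconds",
    stopping_criterion=all_services_provisioned,
    resource_management_algorithm=resource_policy,
)
simulator.initialize(input_file="edge_scenario.json")  # JSON dataset with the entities
simulator.run_model()
```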

3.6. EdgeSimPy Limitations

Although simulators are very useful for evaluating algorithm performance in edge computing settings, there are a number of drawbacks and difficulties that researchers need to take into account. The inherent abstraction and simplification of real-world situations are among the main drawbacks, which might cause differences between simulated outcomes and actual performance in real deployments. The heterogeneity and dynamic nature of edge settings, which include variations in device capabilities, network circumstances, and workload patterns, are typically difficult for simulators to reproduce accurately. Performance evaluations may become unduly optimistic or pessimistic as a result. Furthermore, the subtle differences in energy usage between various edge devices may not be adequately captured by simulators, resulting in less accurate ratings of energy efficiency. The potential lack of scalability in simulation tools is another difficulty; they may not be able to manage intricate, large-scale edge networks or faithfully replicate interactions between many devices. Additionally, simulators frequently lack fidelity with respect to security and privacy issues, which could lead to the neglect of important facets of edge computing data protection. To obtain more thorough and reliable assessments of their algorithms, researchers need to be conscious of these limitations and supplement simulation studies with real-world trials or hybrid methodologies.

3.7. Problem Statement

Decentralized edge cloud systems are dynamic, making resource management algorithm optimization difficult. Edge applications demand efficient resource allocation to meet their diverse needs while maximizing performance, scalability, and energy economy. The lack of thorough testing frameworks and the high cost of real-world deployments make edge resource management algorithm selection and assessment difficult. Simulation methods can be used to test edge cloud resource management strategies in a controlled environment. In our case, the EdgeSimPy simulation program is used to model edge computing scenarios, undertake tests with different resource management approaches, and assess the results based on specific factors [3].
We meticulously test resource management algorithms in the simulated environment and evaluate the performance indicators based on specific metrics to ascertain their acceptability and efficiency for edge computing compatibility. We outline the process of simulating edge cloud environment scenarios and assessing resource-management techniques.
This research helps bridge the gap between conceptual research, design, and practical use cases by offering a thorough framework for the seamless modeling of edge infrastructure-management procedures. It achieves this by using algorithms that are both theoretically sound and have been effectively applied in a variety of real-world circumstances.
The integration of comprehensive simulations and empirical validations highlights the pragmatic viability of the suggested algorithms, promoting a more seamless progression from theoretical exploration to real-world implementation in various edge computing uses.

4. Performance Evaluation

4.1. Experimental Setup

The experimental setting for assessing the efficacy of resource management algorithms in edge cloud environments is specifically built to guarantee the ability to replicate results, maintain uniformity, and accurately represent real-world conditions in the simulated environment. We employ the EdgeSimPy simulation tool to construct a virtualized edge cloud environment that faithfully replicates the features and dynamics of actual edge computing scenarios.
Our experimental simulation scenario, where entities interact with one another and the environment is affected by resource management decisions, is shown in Figure 3. EdgeSimPy requires an input file (dataset) that defines the simulated scenario infrastructure and entities before the simulation begins and receives statistics and metrics as defined in the next section throughout the duration of the simulation.
The experimental scenario in Figure 3 consists of a 4 × 4 hexagonal grid layout using the map model proposed in [26], where we assume the servers in the designated area are deployed in an urban zone divided into hexagonal cells, as is typical in mobile cellular networks. Each cell has its own unique coordinates and has direct communication with the cells immediately adjacent to it. Because the simulated area is uniformly covered by the hexagonal grid map, we are able to assess how well resource management algorithms function in various geographic locations. Every grid cell has a base station that is connected to the internet, which makes it easier for users and edge servers to communicate and relocate workload to an adjacent server. The base stations are arranged in a way that guarantees communication with directly connected grid cells.
In addition, every base station is equipped with a 10 Gbps network capacity and a 1 s latency. This allows users and edge servers to communicate in a realistic manner while taking latency and bandwidth limitations into account. The infrastructure consists of six edge servers in total, as shown in Table 4, dispersed over the grid layout with the constraint that two edge servers cannot be assigned to the same grid cell.
Two NVIDIA Jetson TX2 models (servers 5 and 6) and four Raspberry Pi 4 (servers 1–4) models are used as our edge servers. These edge servers are dispersed over the network, as seen in Figure 4. Every grid cell denotes a specific geographic region that contains an edge server. Six users are also included in our simulation scenario; each of them is assigned randomly to a certain grid cell.
The power model and computational capacity of each edge server are also defined in Table 4. The NVIDIA Jetson TX2 models have a quad-core ARM Cortex-A57 processor with a CUDA-enabled GPU, 8 GB of RAM, and a 32 GB disk capacity with a minimum power consumption of 7.5 W and 15 W at a maximum workload. Meanwhile, the Raspberry Pi 4 models have 8 GB of RAM, a quad-core ARM Cortex-A72 processor, a 32 GB disk capacity, and a power consumption of 2.56 W at minimum workload and 7.3 W at maximum (based on [27]). We are able to precisely model the energy consumption of each edge server under different workload situations thanks to the linear server power model, which links power consumption to the computing load.
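For clarity, the sketch below spells out the linear relation assumed by this power model, using the idle and full-load values from Table 4; it reproduces the formula only, not EdgeSimPy's internal power model class.

```python
# Stand-alone illustration of the linear server power model: power scales
# linearly with CPU load between the idle and full-load values from Table 4.
def linear_power(cpu_utilization: float, p_min: float, p_max: float) -> float:
    """cpu_utilization in [0, 1]; returns instantaneous power draw in watts."""
    return p_min + (p_max - p_min) * cpu_utilization

raspberry_pi_4 = (2.56, 7.3)   # (idle W, full-load W) from Table 4
jetson_tx2 = (7.5, 15.0)

print(linear_power(0.5, *raspberry_pi_4))  # ~4.93 W at 50% load
print(linear_power(0.5, *jetson_tx2))      # 11.25 W at 50% load
```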
To simulate random patterns of user interaction and data access, each user is also assigned an access pattern that is set to random. Because of this unpredictability, edge servers are exposed to a variety of workload scenarios, which mirror actual usage patterns in edge computing environments.
In order to start the simulation, it is necessary to set the time frame in which each algorithm will run. Specifically, we define the time frame in time steps and set each step to correspond to one second. We set the simulation scenario to run for 2000 time steps, i.e., 2000 s, which corresponds to 33.33 min. At each time step of the simulation, values will be obtained for all the metrics we have defined, and based on these, the algorithm will make the corresponding decisions. This duration enables us to gather enough information and evaluate each algorithm’s performance over an extended length of time under our workload scenario.
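A stopping criterion limiting the run to 2000 one-second time steps could be expressed as in the sketch below; reading the step counter through model.schedule.steps follows the Mesa-based examples and is an assumption about the installed EdgeSimPy version.

```python
# Limiting the run to 2000 one-second time steps via EdgeSimPy's stopping
# criterion hook. Reading the step counter as model.schedule.steps follows the
# Mesa-based examples and is an assumption about the installed version.
SIMULATION_STEPS = 2000

def stop_after_2000_steps(model) -> bool:
    return model.schedule.steps >= SIMULATION_STEPS

# simulator = Simulator(tick_duration=1, tick_unit="seconds",
#                       stopping_criterion=stop_after_2000_steps, ...)
```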

4.2. Metrics for Evaluation

In order to guarantee a thorough analysis, we developed a set of comparative metrics that enable us to evaluate the resource management algorithms’ performance under defined parameters. These parameters include the following, based on [28]:
  • Power consumption.
  • Memory utilization.
  • CPU utilization.
  • Disk utilization.
  • Latency.
Metrics such as CPU load, RAM load, and disk utilization offer valuable information on how efficiently algorithms manage and employ the available resources, guaranteeing peak performance in these resource-limited scenarios. In edge environments, where power supplies may be scarce or costly, energy conservation is essential. Evaluating power consumption allows us to identify algorithms that minimize energy usage while maintaining performance, promoting sustainability and cost-effectiveness.
In real-time or low-latency applications in particular, the responsiveness of edge services is essential. Latency metrics provide insight into how quickly services can react to requests, which is key to a good user experience, especially since network conditions and workloads vary frequently in edge settings.
We can determine which algorithms perform well in edge cloud environments by assessing resource usage, energy efficiency, service quality, and scalability. This allows us to determine which algorithms best address the various needs and challenges associated with edge computing and whether they systematically and consistently perform the task for which they were created.
We seek to offer insights into the efficacy, scalability, and efficiency of each strategy for managing computational resources in edge cloud environments by comparing these metrics in the various methods.
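As a simple illustration of how these metrics are derived at every time step, the sketch below computes utilization percentages from demanded versus available capacity; the dictionary keys are illustrative placeholders rather than EdgeSimPy attribute names.

```python
# Per-time-step utilization metrics as reported in Section 4.3: each metric is
# the demanded amount divided by the server's capacity. The dictionary keys are
# illustrative; in EdgeSimPy these values would be read from the edge server
# entities at every step of the simulation.
def utilization_pct(demand: float, capacity: float) -> float:
    return 100.0 * demand / capacity if capacity else 0.0

server_state = {"cpu": 4, "cpu_demand": 1,               # cores
                "memory": 8192, "memory_demand": 2048,   # MB
                "disk": 32768, "disk_demand": 16384}     # MB

metrics = {
    "cpu_%": utilization_pct(server_state["cpu_demand"], server_state["cpu"]),
    "memory_%": utilization_pct(server_state["memory_demand"], server_state["memory"]),
    "disk_%": utilization_pct(server_state["disk_demand"], server_state["disk"]),
}
print(metrics)  # {'cpu_%': 25.0, 'memory_%': 25.0, 'disk_%': 50.0}
```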

4.3. Experimental Results

4.3.1. CPU Utilization Results

The y-axis in Figure 5 shows the number of CPU cores used, ranging from 0 to the maximum of 4 cores, and the x-axis shows time steps from 1 to 2000 s. In Figure 5, we present the simulation up to the 500 s mark, as there are no changes beyond this point, and for reasons of clarity and space. The results of CPU utilization show that different resource-management strategies have different allocation patterns.
According to the data, there is a noticeable initial surge in CPU core use for all four algorithms, which is expected given the necessity to distribute enormous workloads effectively at the beginning of the simulation. After that, for each algorithm, the average core use settles down to about one core per server. But with time, clear distinctions in their behavior become apparent: the power-saving method shows an extra spike in CPU core consumption, which could indicate periodic activity spurts or a change in the workload allocation approach.
On the other hand, the location-based approach exhibits multiple spikes, which makes sense given that it relocates workloads around dynamically in response to user movement. This pattern points to difficulties in efficiently allocating tasks in situations where user movement affects resource requirements in real-time. In spite of these oscillations, the location-based approach shows lower CPU core utilization maxima than other algorithms, suggesting a more uniform distribution of workloads among servers. This feature represents its approach to resource optimization, which minimizes latency and increases overall efficiency in edge computing environments by allocating work depending on users’ proximity to the system.
The worst-fit technique, as explained in Section 4.3.6, exhibits high average CPU utilization with minimal variance, which signifies efficient load balancing and good use of computational resources. The location-based algorithm’s dynamic workload adjustment demonstrates its flexibility and reactivity to shifting needs, resulting in effective core usage and improved performance.
Overall, the location-based algorithm works better than the others, proving to be more efficient at modifying the resource distribution in response to varying workload needs and server capacities. As can be seen from its higher average usage in Section 4.3.6, the location-based method achieves the best CPU utilization overall, demonstrating its ability to optimize resource management in edge computing environments.

4.3.2. Memory Utilization Results

The y-axis in Figure 6 shows the memory used, ranging from 0 to the maximum of 8192 MB, and the x-axis shows the time in seconds. In Figure 6, we present the simulation up to the 500 s mark, as there are no changes beyond this point, and for reasons of clarity and space. The memory use trends of the four algorithms are similar inside the simulated edge cloud environment, which is consistent with the findings of Section 4.3.1 regarding CPU utilization.
As they assign demanding workloads, all algorithms first display notable spikes in memory utilization that emphasize the initial surge in resource demand at the beginning of the simulation. Memory consumption levels out across servers as the simulation goes on, suggesting that memory resources are effectively managed to keep up with workload demands.
In particular, the power-saving mechanism shows sporadic memory consumption spikes, indicating adaptive task distribution adjustments to maximize resource utilization over time. The location-based algorithm, on the other hand, exhibits repeated memory use spikes because of its dynamic task relocation method, which is dependent on user movement and demand patterns. The location-based method maintains lower peaks in memory consumption despite these oscillations than other algorithms, suggesting a more equitable and effective distribution of memory resources. Despite these fluctuations, the location-based algorithm appears to manage available memory and task scheduling effectively, resulting in lower average memory consumption, as shown in more detail in Section 4.3.6.
The memory consumption data, taken as a whole, highlight how crucial adaptive resource-management techniques are to edge computing. Algorithms like the location-based method shown here, which efficiently balance memory allocation with workload needs, are critical to improving resource efficiency and satisfying dynamic demands in real-world edge computing installations.
The worst-fit and best-fit algorithms, after an initial spike in memory usage, stabilize and do not exhibit further spikes in utilization. This stability can be attributed to their simpler nature of workload allocation, which appears to offer an advantage over more complex algorithms. Consequently, simpler algorithms tend to maintain a more stable performance profile.
In summary, the memory utilization results show how well the resource management methods distribute workloads and make use of edge server computational resources. While the location-based algorithm’s dynamic allocation demonstrates its flexibility and reactivity to shifting needs, the worst-fit algorithm’s high and stable memory utilization rates demonstrate effective load balancing.
These results offer a thorough comprehension of memory-management techniques, directing the creation of resource management solutions for edge computing environments that are more effective and flexible.

4.3.3. Disk Utilization Results

In Figure 7, the x-axis represents time in seconds ranging from 1 to 2000, while the y-axis indicates the disk utilization (%). In Figure 7, we present the simulation up to the 500 s mark, as there are no changes beyond this point, and for reasons of clarity and space. Different strengths and potential weaknesses in managing edge cloud settings are highlighted by the observed patterns in disk consumption across the algorithms, which have a substantial impact on overall system performance and resource utilization efficiency.
The best-fit algorithm exhibits quick and effective resource allocation, as seen by its early rise and subsequent stabilization at 28% disk use. This low, consistent use shows that the disk is being managed effectively, optimizing resource efficiency and lowering the possibility of overloading any one server. As a result, this technique works well for sustaining steady performance even under increasing workloads.
On the other hand, by dividing workloads equally among all servers, the worst-fit method, which stabilizes at 50% utilization, demonstrates a balanced approach. Although this guarantees a consistent distribution of resources, it might also imply that no single server is completely optimized, which could result in unrealized potential that could be more effectively employed.
The power-saving algorithm, after initial stabilization at 41%, indicates its approach of giving low-power-consumption servers priority. Nevertheless, compared to the best-fit algorithm, this results in increased disk consumption, suggesting a trade-off between power efficiency and disk management. This tactic could result in less-than-ideal disk usage, which could affect system performance as disk demand rises.
The location-based approach shows a substantial degree of variability in disk resource management with its many phases and stabilizing at a high 68% disk utilization. By dynamically allocating jobs based on user location, it substantially minimizes latency, yet the high disk utilization suggests possible inefficiencies. Should the need for disk space increase, the disk could eventually fill up to capacity, which could cause workload problems, slowdowns in performance, or even server breakdowns.
In conclusion, these patterns imply that whereas certain algorithms enhance particular facets of resource management, they can also entail trade-offs that affect the effectiveness and performance of the system as a whole. Reaching disk capacity, in particular, may result in catastrophic failures, highlighting the necessity of balanced approaches that take into account all resource dimensions to provide stable and dependable edge cloud systems.

4.3.4. Power Consumption Results

The y-axis in Figure 8 shows the percentage of power consumed by the servers, while the x-axis shows time steps from 1 to 2000.
The findings on power consumption have important ramifications for edge cloud environments’ sustainability and energy efficiency. When there are limited energy resources available, the effectiveness of algorithms in controlling power usage becomes critical. One notable feature of the power-saving method is that it stabilizes at the lowest average consumption of 5.4 W per server. Because of this, it is especially well-suited for settings with strict energy limits. It ensures sustainability by cutting down on operating expenses and prolongs the life of edge devices by managing energy efficiently.
On the other hand, the worst-fit method shows the highest average power usage of 5.64 W per server, indicating that it is not very effective at balancing energy consumption with workload demands. This increased energy consumption makes it less appropriate for edge cloud scenarios where energy efficiency is a top concern and could result in higher operational expenses and decreased sustainability. Resources may be strained by the worst-fit algorithm’s high power consumption, especially in isolated or off-grid areas where energy supplies are scarce and costly.
The best-fit algorithm exhibits a more balanced approach, blending an efficient workload distribution with comparatively low energy usage, with an average of 5.45 W per server. It is therefore a wise option for settings that require both consistent performance and energy efficiency. Its capacity to sustain steady power consumption without appreciable spikes guarantees that it can deliver dependable service without placing undue strain on energy resources. The location-based algorithm maintains lower average spikes even with its unpredictability and continuous power usage variations. This indicates that it successfully strikes a balance between workload needs and energy usage, even though frequent reallocations may put some strain on the servers. In dynamic contexts, where low latency and energy efficiency are crucial, this balance is crucial. By reallocating workloads according to user proximity, the location-based algorithm can minimize latency and optimize resource use, improving user experience without appreciably consuming more energy.
To sum up, the patterns of power usage illustrate the compromises that each algorithm makes between energy efficiency and workload management. The best-fit and location-based algorithms provide balanced solutions for environments requiring both efficient power usage and an effective workload distribution, while the power-saving algorithm performs well in environments with limited energy due to its low-energy profile, making it ideal for stable environments. The worst-fit algorithm is less desirable because of its high energy demand. These understandings are essential for creating energy-efficient edge cloud environments that maximize performance.

4.3.5. Latency Results

The x-axis in Figure 9 shows time steps from 1 to 2000, and the y-axis shows the user access latency for each application.
The latency results show different trends for each algorithm, which correspond to their individual workload-management tactics and user-experience effects. The best-fit algorithm consistently maintains a latency range of 15–20 ms, with sporadic spikes that usually do not impair the user experience too much. This steady performance shows that the workload is distributed effectively, guaranteeing that latency stays within reasonable ranges for the majority of applications.
On the other hand, the average latency range of the worst-fit method is comparable, but it exhibits more frequent and noticeable spikes at both lower and higher levels. These fluctuations point to a less consistent performance, which can result in a worse user experience, particularly when latency is higher. Even if these spikes are short-lived, they could nonetheless have an effect on applications that need steady response times.
The power-saving algorithm has a latency that is between 15 and 20 ms, but it deviates from the average the most out of all the algorithms, with peaks as high as 23 ms and troughs as low as 8 ms. The algorithm’s emphasis on assigning workloads to servers with the lowest power consumption is probably the cause of this unpredictability, as it might result in inefficiencies in circumstances where latency is a concern. These variations could have a detrimental effect on the user experience, especially for apps that require dependable and timely responses. With a substantially shorter average latency of 10–15 ms and brief spikes on both ends, the location-based approach stands out.
Compared to the other algorithms, this algorithm’s architecture, which puts server proximity to users first, reduces latency by 30.82%. The location-based method efficiently lowers network congestion and improves response times by optimizing resource allocation based on user location and minimizing data transmission distances. As such, it is the best choice for settings where user satisfaction depends on minimal latency and fast service delivery.
By dynamically allocating workloads to the closest available servers, the location-based algorithm excels in decreasing physical distance and data transmission times, consequently increasing user experience and reducing latency times. This proximity optimization guarantees fewer network hops and less congestion when combined with effective real-time resource allocation based on geography data. The algorithm’s efficiency is further increased by its capacity to adjust to shifting user locations and task demands. The location-based algorithm greatly reduces latency by emphasizing user proximity and real-time requirements; this makes it especially useful for time-sensitive applications and guarantees a flawless user experience.
To sum up, the latency outcomes highlight the benefits of the location-based algorithm in handling tasks that require quick turnaround times. Applications needing quick and reliable performance might benefit greatly from its low latency and little volatility. However, although the average latencies of the best-fit and power-saving algorithms are consistent, their greater unpredictability may cause problems in situations when strict latency requirements are required. The frequent spikes seen in the worst-fit method underscore the necessity for more sophisticated approaches to guarantee consistent and effective latency control.

4.3.6. Summary of Results

The simulation results provide important information on how different resource-management methods perform, as shown in Table 5 and Figure 10a–e. When it comes to overall system efficiency, the location-based approach excels, especially with respect to CPU and RAM consumption. As seen in Figure 10a, the location-based approach achieves the lowest average CPU utilization, outperforming the power-saving algorithm by 0.15%. It also has the lowest memory utilization, as demonstrated by Figure 10b, beating the power-saving strategy by 0.19%. This efficiency can be ascribed to the algorithm’s capacity to distribute computing resources geographically, hence reducing latency and optimizing resource allocation, ultimately improving system performance. An intriguing pattern is that the performance of the power-saving, worst-fit, and best-fit algorithms stabilized within the first 500 s of the 2000 s simulation, whereas the location-based approach did not. This suggests that, in order to effectively distribute workloads and maximize resource efficiency, these algorithms may need an initial adjustment period. After the first 500 s, no further changes were observed in most cases, with the sole exception of the location-based algorithm, which continued to react to the continuous movement of the users.
As Figure 10c illustrates, the best-fit algorithm proves the most efficient in terms of disk utilization, showing the lowest average disk usage and a 38.1% improvement over the power-saving algorithm. It is especially helpful in storage-intensive scenarios, since its tighter management of storage resources lowers waste and extends server lifespan.
The location-based algorithm also performs exceptionally well in terms of power usage: it is 1.1% more efficient than the power-saving algorithm and 5.08% more efficient than the worst-fit method, as shown in Figure 10d. This reduction in power consumption is largely due to its ability to shorten data transmission distances through geographic proximity and to make better use of CPU and memory.
Additionally, by reducing latency, the location-based algorithm greatly improves the user experience. Its response time is 30.82% faster than that of the other algorithms, as seen in Figure 10e, underlining the effectiveness of dynamically allocating workloads to servers that are physically closer to users. Its low latency and efficient use of computing resources position the location-based algorithm as a leading method for managing edge cloud resources.
To sum up, each algorithm exhibits distinct benefits across the different metrics. The location-based approach performs best for CPU, memory, power consumption, and latency, while the best-fit algorithm is superior for disk usage. These results confirm the usefulness of the algorithms in particular scenarios and offer practical guidance for improving resource management in edge cloud setups. The outcomes agree well with the original goals of optimizing resource use, limiting energy consumption, and lowering latency, and they provide a solid basis for further research and real-world applications.
Building on these findings, future studies might investigate more sophisticated computational methods, refine optimization strategies, and develop dynamic adaptation mechanisms to further improve resource management in edge cloud systems.

5. Validity of Experiment

Several crucial factors determine the validity of our experiment. Although the EdgeSimPy framework used for the simulations cannot fully replicate real-world deployments, it offers a realistic and adaptable environment for modeling a variety of scenarios. By bridging some of the gaps between simulation and reality, this approach enables controlled testing and provides insight into how resource management algorithms behave under different conditions. Our work also highlights how important it is to validate and apply these algorithms in practical contexts. Future research should therefore concentrate on deploying the presented algorithms in real edge computing systems to evaluate their behavior and practical effectiveness under load. This step is essential for verifying the simulation findings and ensuring the algorithms work as intended in real-world situations, where timing constraints and unpredictable variables are major factors.

5.1. Limitations

Several limitations must be acknowledged in this study. First, the realism of the simulation environment is a concern, as our research relies on simulations run within the EdgeSimPy framework. These simulations offer a safe and controlled testing environment, but they may not fully capture the complexity and unpredictability of real-world edge cloud deployments; real-world performance may differ from simulation results due to factors such as hardware failures, network congestion, and environmental conditions. Second, the use of simplified workload scenarios to assess the resource management algorithms is a further drawback. Although these scenarios are meant to represent typical edge computing use cases, they may not fully capture the variety, dynamics, and unpredictability of real-world workloads, so a more thorough understanding of algorithm performance may require experiments with more complex and variable workload conditions. Third, the scope of the algorithm comparison limits our study: we evaluated four widely used resource management algorithms for edge cloud settings, while other potentially relevant algorithms were not considered. Future studies could broaden this comparison to provide a more comprehensive analysis. To address these limitations, we recommend that future research examine dynamic adaptive mechanisms and advanced computational tools; incorporating such methodologies may improve the robustness and flexibility of resource allocation schemes, making them better suited to the heterogeneous and dynamic landscape of edge computing.

5.2. Risk Assessment

Several risks to the validity of our experiment warrant careful consideration. Internally, significant hazards arise from factors such as measurement errors, biases in algorithm implementation, and inconsistent simulation settings. To mitigate these weaknesses and protect the validity of our conclusions, all algorithms must follow rigorous measurement techniques and standardized implementation practices. Externally, the controlled nature of our simulation environment may not capture the intricacy and unpredictability of actual edge computing environments, which may limit how broadly our results apply. Addressing these issues will require a field study or applying our findings in practical settings; such steps would confirm that our results transfer to real-world edge computing scenarios and would offer important new perspectives on their usefulness.

6. Conclusions

This study provides an in-depth evaluation of various resource-management methods in edge cloud systems, showing their distinct performance across key criteria. Our findings show that the location-based method is the most effective at optimizing CPU and RAM utilization thanks to its ability to allocate resources based on users' geographic proximity; this geography-aware strategy yields a significant decrease in processing overhead and increased overall system efficiency. In contrast, the best-fit method outperforms the others in terms of disk use, demonstrating its ability to manage storage resources and reduce excessive disk consumption. In terms of energy efficiency, the location-based algorithm also recorded the lowest average power consumption, narrowly ahead of the power-saving algorithm, which itself remains well suited to energy-efficient computing in resource-constrained edge infrastructures. Additionally, the location-based algorithm proved to be the best in terms of latency, offering the lowest response times and enhancing the user experience.
Our research advances the field by improving the understanding of how alternative resource-management strategies perform in simulated edge environments, helping to bridge the gap between theoretical design and practical implementation. Using the EdgeSimPy simulation framework, we conducted a detailed investigation of the performance of four algorithms (location-based, best-fit, worst-fit, and power-saving) across several measures: CPU, memory, and disk utilization, power consumption, and latency. This study sheds light on how edge computing conditions, such as dynamic network circumstances, variable workloads, and user mobility, affect the effectiveness of these algorithms.
The findings emphasize the importance of tailoring resource management solutions to specific goals, such as lowering latency or maximizing energy efficiency, based on the context and application requirements. For example, the location-based algorithm succeeds at latency reduction but falls behind the best-fit method in disk management, indicating the need for fine-tuning and hybrid approaches to achieve balanced performance across all parameters. Similarly, while the best-fit method is efficient in storage management, it requires additional refinement to allocate CPU and memory resources more evenly.
This study serves as a reference point for future research aimed at optimizing resource management in edge cloud systems by illustrating the merits and shortcomings of each method under various circumstances. Our experimental results point to potential areas for development, such as dynamic adaptation mechanisms, parameter-optimization techniques, and context-aware resource scheduling. Furthermore, real-world validation and scalability analysis will be required to translate these simulated insights into practical, deployable solutions.
In conclusion, this study contributes to a more nuanced understanding of resource management in edge cloud systems by providing a thorough experimental comparison of well-established methods in a simulated context. These findings advance edge computing knowledge by informing the design of robust and effective resource management strategies applicable to a wide range of applications. The study also lays the groundwork for future research focused on developing high-performance, sustainable, and adaptable edge infrastructures that fully realize the promise of next-generation services and applications.

Author Contributions

Conceptualization, N.K.; methodology, N.K.; software, N.K.; validation, N.K.; writing—original draft preparation, N.K. and D.G.K.; writing—review and editing, D.G.K. and C.Z.P.; visualization, N.K.; supervision, C.Z.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cao, K.; Liu, Y.; Meng, G.; Sun, Q. An overview on Edge computing research. IEEE Access 2020, 8, 85714–85728. [Google Scholar] [CrossRef]
  2. Krishnasamy, E.; Varrette, S.; Mucciardi, M. Edge Computing: An Overview of Framework and Applications; University of Luxembourg: Luxembourg, December 2020. Available online: https://orbilu.uni.lu/handle/10993/46573 (accessed on 9 April 2024).
  3. Souza, P.S.; Ferreto, T.; Calheiros, R.N. EdgeSimPy: Python-based modeling and simulation of Edge Computing Resource Management Policies. Future Gener. Comput. Syst. 2023, 148, 446–459. [Google Scholar] [CrossRef]
  4. Bendechache, M.; Svorobej, S.; Endo, P.T.; Lynn, T. Simulating resource management across the cloud-to-thing continuum: A survey and Future Directions. Future Internet 2020, 12, 95. [Google Scholar] [CrossRef]
  5. Wang, J.; Pan, J.; Esposito, F.; Calyam, P.; Yang, Z.; Mohapatra, P. Edge cloud offloading algorithms. ACM Comput. Surv. 2019, 52, 1–23. [Google Scholar] [CrossRef]
  6. Ghobaei-Arani, M.; Souri, A.; Rahmanian, A.A. Resource Management Approaches in fog computing: A comprehensive review. J. Grid Comput. 2019, 18, 1–42. [Google Scholar] [CrossRef]
  7. Lv, Z.; Qiao, L. Optimization of Collaborative Resource Allocation for Mobile Edge Computing. Comput. Commun. 2020, 161, 19–27. [Google Scholar] [CrossRef]
  8. Cao, K.; Li, L.; Cui, Y.; Wei, T.; Hu, S. Exploring placement of heterogeneous edge servers for response time minimization in mobile edge-cloud computing. IEEE Trans. Ind. Inform. 2021, 17, 494–503. [Google Scholar] [CrossRef]
  9. Salaht, F.A.; Desprez, F.; Lebre, A. An overview of service placement problem in fog and Edge Computing. ACM Comput. Surv. 2020, 53, 1–35. [Google Scholar] [CrossRef]
  10. Margariti, S.V.; Dimakopoulos, V.V.; Tsoumanis, G. Modeling and simulation tools for Fog Computing—A comprehensive survey from a cost perspective. Future Internet 2020, 12, 89. [Google Scholar] [CrossRef]
  11. Cruz, P.; Achir, N.; Viana, A.C. On the edge of the deployment: A survey on multi-access Edge Computing. ACM Comput. Surv. 2022, 55, 1–34. [Google Scholar] [CrossRef]
  12. Darrous, J.; Lambert, T.; Ibrahim, S. On the importance of container image placement for service provisioning in the edge. In Proceedings of the 2019 28th International Conference on Computer Communication and Networks (ICCCN), Valencia, Spain, 29 July–1 August 2019. [Google Scholar] [CrossRef]
  13. Roges, L.; Ferreto, T. Dynamic provisioning of container registries in edge computing infrastructures. In Proceedings of the Anais do XXIV Simpósio em Sistemas Computacionais de Alto Desempenho (WSCAD 2023), Porto Alegre, Brasil, 17 October 2023. [Google Scholar] [CrossRef]
  14. Naha, R.K.; Garg, S.; Georgakopoulos, D.; Jayaraman, P.P.; Gao, L.; Xiang, Y.; Ranjan, R. Fog computing: Survey of trends, Architectures, requirements, and Research Directions. IEEE Access 2018, 6, 47980–48009. [Google Scholar] [CrossRef]
  15. Hong, C.-H.; Varghese, B. Resource management in fog/edge computing: A Survey on Architectures, Infrastructure, and Algorithms. ACM Comput. Surv. 2019, 52, 1–37. [Google Scholar] [CrossRef]
  16. Hamdan, S.; Ayyash, M.; Almajali, S. Edge-computing architectures for internet of things applications: A survey. Sensors 2020, 20, 6441. [Google Scholar] [CrossRef]
  17. Mijuskovic, A.; Chiumento, A.; Bemthuis, R.; Aldea, A.; Havinga, P. Resource Management Techniques for Cloud/Fog and Edge Computing: An Evaluation Framework and classification. Sensors 2021, 21, 1832. [Google Scholar] [CrossRef]
  18. Luo, Q.; Hu, S.; Li, C.; Li, G.; Shi, W. Resource scheduling in edge computing: A survey. IEEE Commun. Surv. Tutor. 2021, 23, 2131–2165. [Google Scholar] [CrossRef]
  19. Gebali, F. Computer Communication Networks; Springer: New York, NY, USA, 2008. [Google Scholar]
  20. Rubin, F.; Souza, P.; Ferreto, T. Reducing power consumption during server maintenance on edge computing infrastructures. In Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing, Tallinn, Estonia, 27–31 March 2023. [Google Scholar] [CrossRef]
  21. Souza, P.S.; Ferreto, T.C.; Rossi, F.D.; Calheiros, R.N. Location-aware maintenance strategies for edge computing infrastructures. IEEE Commun. Lett. 2022, 26, 848–852. [Google Scholar] [CrossRef]
  22. Wang, C.; Li, R.; Li, W.; Qiu, C.; Wang, X. SimEdgeIntel: An open-source simulation platform for Resource Management in edge intelligence. J. Syst. Archit. 2021, 115, 102016. [Google Scholar] [CrossRef]
  23. Goyal, T.; Singh, A.; Agrawal, A. Cloudsim: Simulator for cloud computing infrastructure and modeling. Procedia Eng. 2012, 38, 3566–3572. [Google Scholar] [CrossRef]
  24. Zeng, X.; Garg, S.K.; Strazdins, P.; Jayaraman, P.P.; Georgakopoulos, D.; Ranjan, R. IOTSim: A simulator for analysing IOT Applications. J. Syst. Archit. 2017, 72, 93–107. [Google Scholar] [CrossRef]
  25. Nurseitov, N.; Paulson, M.; Reynolds, R.; Izurieta, C. Comparison of json and xml data interchange formats: A case study. In Proceedings of the International Conference on Computer Applications in Industry and Engineering, San Francisco, CA, USA, 4–6 November 2009; pp. 157–162. [Google Scholar]
  26. Aral, A.; De Maio, V.; Brandic, I. Ares: Reliable and sustainable edge provisioning for Wireless Sensor Networks. IEEE Trans. Sustain. Comput. 2022, 7, 761–773. [Google Scholar] [CrossRef]
  27. Suzen, A.A.; Duman, B.; Sen, B. Benchmark Analysis of jetson TX2, Jetson Nano and Raspberry Pi using Deep-cnn. In Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26–27 June 2020. [Google Scholar] [CrossRef]
  28. Aslanpour, M.S.; Gill, S.S.; Toosi, A.N. Performance evaluation metrics for cloud, fog and edge computing: A review, taxonomy, benchmarks and standards for future research. Internet Things 2020, 12, 100273. [Google Scholar] [CrossRef]
Figure 1. Edge computing architecture.
Figure 2. EdgeSimPy architecture [3].
Figure 3. EdgeSimPy simulation workflow [3].
Figure 4. Experimental setup use case.
Figure 5. CPU utilization per algorithm.
Figure 6. Memory utilization per algorithm.
Figure 7. Disk utilization per algorithm.
Figure 8. Power consumption per algorithm.
Figure 9. Latency per algorithm.
Figure 10. Performance results per algorithm for (a) CPU cores utilization (log scale); (b) memory utilization (log scale); (c) disk utilization; (d) power consumption (log scale); (e) latency.
Table 1. General requirements for edge device management and orchestration [2].

Requirement | Description
Scalability | Ability to address a large number of edge devices of different types and capabilities with appropriate deployment and communication protocols
Security | Privacy preservation for security tokens and support for integrity checks within the infrastructure
Heterogeneity | Support for a high degree of heterogeneity within hardware/software
Volatility | Support for volatile availability and mobile hardware/software components
Data Protection | GDPR compliance, ensuring all data are kept locally and encrypted on-the-fly
Infrastructure Performance | Very-low-latency, lightweight publish–subscribe network protocol such as MQTT; high-performance containerized resources with fast (zero-touch) provisioning allowing easy system upgrades
Application Portability | Unified architecture view via MEC compliance enabling Function-as-a-Service (FaaS) capabilities
Data Analytics | Support for data management and a data analytics pipeline engine
Table 2. Brief description of the algorithms we selected.

Algorithm | Description
Best-fit algorithm [19] | Allocates tasks to the edge server with the closest-matching available resources, minimizing resource wastage and maximizing efficiency
Worst-fit algorithm [19] | Assigns incoming tasks to the edge server with the greatest available excess capacity, aiming to minimize resource fragmentation and maximize system stability, even if it results in suboptimal resource utilization
Power-saving algorithm [20] | Designed to reduce energy consumption and promote sustainability in edge cloud environments by assigning workloads to servers with the lowest power consumption
Location-based algorithm [21] | Considers the geographical proximity of tasks to edge servers when making allocation decisions; aims to minimize latency and network traffic by assigning tasks to the nearest available server, optimizing response times for edge applications
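As a complement to the descriptions in Table 2, the sketch below contrasts the best-fit and worst-fit selection rules using a single leftover-capacity score. The scoring function and data structures are simplifying assumptions; the implementations evaluated in this study may weigh CPU, memory, and disk separately.

```python
# Illustrative sketch of the best-fit and worst-fit selection rules from Table 2.
# The single "leftover capacity" score is a simplifying assumption that mixes
# CPU cores and memory (MB); real implementations may treat each resource separately.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    free_cpu: int
    free_memory: int


@dataclass
class Task:
    cpu: int
    memory: int


def fits(server: Server, task: Task) -> bool:
    return server.free_cpu >= task.cpu and server.free_memory >= task.memory


def leftover(server: Server, task: Task) -> int:
    """Capacity remaining on the server if the task were placed there."""
    return (server.free_cpu - task.cpu) + (server.free_memory - task.memory)


def best_fit(task: Task, servers: list[Server]) -> Server | None:
    """Pick the feasible server with the least leftover capacity (tightest fit)."""
    feasible = [s for s in servers if fits(s, task)]
    return min(feasible, key=lambda s: leftover(s, task), default=None)


def worst_fit(task: Task, servers: list[Server]) -> Server | None:
    """Pick the feasible server with the most leftover capacity (loosest fit)."""
    feasible = [s for s in servers if fits(s, task)]
    return max(feasible, key=lambda s: leftover(s, task), default=None)


if __name__ == "__main__":
    servers = [Server("pi-1", 1, 1024), Server("pi-2", 3, 6144)]
    task = Task(cpu=1, memory=512)
    print(best_fit(task, servers).name)   # pi-1: tightest fit
    print(worst_fit(task, servers).name)  # pi-2: most spare capacity
```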
Table 3. Overview of built-in features supported by simulation tools.

Simulator | Task Scheduling | Service Migration | Maintenance Operation | Mobility Support | Network Flow Scheduling
SimEdgeIntel [22] | No | Yes | No | Yes | Yes
CloudSim [23] | Yes | No | No | Yes | Yes
IOTSim [24] | Yes | No | No | Yes | Yes
EdgeSimPy [3] | Yes | Yes | Yes | Yes | Yes
Table 4. Edge server’s specifications.
Table 4. Edge server’s specifications.
ServersCPU CoresMemory (MB)Disk (MB)Min Power Consumption (Watts)Max Power Consumption (Watts)
Edge server 1 (Raspberry Pi)4819232,7682.567.3
Edge server 2 (Raspberry Pi)4819232,7682.567.3
Edge server 3 (Raspberry Pi)4819232,7682.567.3
Edge server 4 (Raspberry Pi)4819232,7682.567.3
Edge server 5 (Jetson TX2)4819232,7687.515
Edge server 6 (Jetson TX2)4819232,7687.515
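A common way to turn the minimum and maximum power values in Table 4 into a per-step power estimate is linear interpolation over CPU utilization. The sketch below assumes this simple linear model for illustration; it is not necessarily the exact power model configured in our simulations.

```python
# Hypothetical linear power model: interpolate between the idle (min) and
# full-load (max) power values from Table 4 using CPU utilization in [0, 1].
# Assumed here for illustration only.

def estimated_power(min_watts: float, max_watts: float, cpu_utilization: float) -> float:
    """Return the estimated power draw (W) for a given CPU utilization."""
    if not 0.0 <= cpu_utilization <= 1.0:
        raise ValueError("cpu_utilization must be between 0 and 1")
    return min_watts + (max_watts - min_watts) * cpu_utilization


# Example: a Raspberry Pi edge server (2.56-7.3 W) at 50% CPU utilization.
print(round(estimated_power(2.56, 7.3, 0.5), 2))  # 4.93 W
```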
Table 5. Summary of simulation results.

Metric | Best Algorithm for Each Metric
Average CPU utilization (cores) | Location-based algorithm had the lowest core use; it was 0.15% more efficient than the power-saving algorithm
Average memory utilization (MB) | Location-based algorithm had the lowest memory use; it was 0.19% more efficient than the power-saving algorithm
Average disk utilization (%) | Best-fit algorithm had the lowest average disk utilization and was 38.1% more efficient than the power-saving algorithm
Average power consumption (W) | Location-based algorithm had the lowest power consumption; it was 1.1% more efficient than the power-saving algorithm and 5.08% more efficient than the worst-fit algorithm
Average latency (ms) | Location-based algorithm had the lowest response time and was 30.82% more efficient than the other algorithms