1. Introduction
End-node devices introduce several challenges in big data, including ineffective resource utilization and latency in data processing. To overcome the analytics and storage problems caused by the heavy demand placed on cloud resources, the established technology of cloud computing has been merged with these new networks [1]. This combination provides lower delay and greater computing agility compared to the powerful computing platforms in cloud data centers (CDCs). About 40% of IoT-created data are stored and processed at the edge of the network [2]. In such a heterogeneous environment, the research community faces several problems, such as efficient data collection, network architecture, reliable traffic management, storage, and security, arising from the interconnection of these devices. In addition, owing to the insufficiency of resources such as memory, onboard power, processing, and communication, wireless sensors are prone to multiple threats. Hence, an effective communication structure for sensor devices, with reduced resource usage, can increase their performance in producing highly accurate results [3].
The main challenge faced by mobile edge computing (MEC) is mobility. Security threats that risk data integrity and interrupt service delivery can affect the reliability and availability of the MEC ecosystem. Fault tolerance, which comprises availability, dependability, and reliability, is another challenge of MEC. To deal more effectively with time constraints, the offloading granularity and partitioning determine the code size that must be loaded for remote execution [4]. An MEC user remains in the coverage area of an MEC service provider only for a limited duration, which results in diverse user demand. Different types of users require a variety of services, whose requirements change rapidly. Hence, it is necessary to establish the services in a cost-effective manner. Meanwhile, developing a cost-effective technique is considered a challenging task because of the diverse nature of emerging services.
Quality of service (QoS) refers to the performance characteristics and level of service provided by a network or system, which directly impacts the end-user experience. It encompasses various metrics such as latency, throughput, reliability, availability, and energy efficiency. Achieving a high QoS in MEC systems is crucial for delivering low-latency and high-throughput services to end-users and ensuring a satisfactory user experience. The International Telecommunication Union (ITU) plays a significant role in establishing standards and guidelines for QoS in telecommunication networks. ITU-T Supplement 9 of the E.800 Series provides recommendations and guidelines related to QoS in telecommunication services. This document offers a comprehensive framework for assessing, measuring, and monitoring QoS parameters, facilitating the effective management and optimization of network performance.
A deployed network's overall lifetime is reduced by the increased energy consumption of sensor devices operating under high traffic rates and a heterogeneous communication infrastructure [5]. Load balancing and resource allocation play a crucial role in optimizing network performance and extending system lifespan by managing heavy-duty hours. However, load balancing is often applied as an unconstrained process, leading to side effects. The solution lies in implementing proper admission control to ensure a genuine network load and thereby improve load balancing, which can significantly improve overall network functioning. To achieve optimal results, the load balancing process needs to work in tandem with the admission control process [6]. The key contributions of this paper are described as follows.
A novel approach for resource allocation and QoS optimization in MEC using IoT: the proposed approach combines hybrid kernel random forest [7] and ensemble support vector machine algorithms [8] with crossover-based hunter-prey optimization to optimize the QoS metrics and allocate resources to the different applications running on the MEC network.
Improved QoS metrics [9]: the proposed approach aims to improve QoS metrics such as latency, throughput, and energy efficiency by allocating resources to the different applications running on the MEC network. In the context of ubiquitous mobile edge computing (UMEC) using IoT, it is crucial to optimize resource allocation and QoS metrics while considering the cost implications. UMEC systems involve a large number of devices and applications, which leads to resource allocation challenges; inefficient resource utilization results in increased costs, such as higher energy consumption, and in poor network performance, so optimizing resource allocation is essential for cost-effectiveness. The proposed approach combines machine learning algorithms (hybrid kernel random forest and ensemble support vector machine) with an optimization algorithm (crossover-based hunter-prey optimization) to achieve cost-effective service provision. By accurately predicting and optimizing QoS metrics, the system can allocate resources more effectively, avoid unnecessary costs, and strike a balance between service quality and cost efficiency. This helps service providers deliver high-quality services to end-users while minimizing operational costs, which is crucial for the sustainability and profitability of UMEC systems.
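To make the optimization step concrete, the following Python code is a minimal, simplified sketch of a crossover-based hunter-prey optimizer written under our own assumptions; the update rules, constants, and the stand-in cost function are illustrative placeholders, not the paper's actual CHPO implementation. A population of candidate allocation vectors alternates between hunter moves toward the best solution and prey-like escape moves around the population centroid, with uniform crossover against the best individual.

```python
import numpy as np

def chpo(fitness, dim, pop_size=30, iters=200, lb=0.0, ub=1.0,
         crossover_rate=0.5, seed=0):
    """Minimal crossover-based hunter-prey optimization (CHPO) sketch.

    Minimizes `fitness` over the box [lb, ub]^dim. Update rules and
    constants are simplified placeholders, not the paper's exact CHPO.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (pop_size, dim))     # candidate allocation vectors
    scores = np.array([fitness(x) for x in pop])

    for t in range(iters):
        best = pop[scores.argmin()]                # best-so-far solution ("prey")
        centroid = pop.mean(axis=0)                # mean position of the population
        balance = 1.0 - t / iters                  # shifts exploration -> exploitation
        for i in range(pop_size):
            r = rng.random(dim)
            if rng.random() < balance:             # "hunter" step toward the prey
                trial = pop[i] + r * (best - pop[i])
            else:                                  # "prey" escape around the centroid
                trial = pop[i] + (2.0 * r - 1.0) * (centroid - pop[i])
            mask = rng.random(dim) < crossover_rate
            trial = np.clip(np.where(mask, best, trial), lb, ub)  # uniform crossover + bounds
            s = fitness(trial)
            if s < scores[i]:                      # greedy replacement
                pop[i], scores[i] = trial, s
    return pop[scores.argmin()], scores.min()

# Hypothetical usage: minimize a stand-in QoS cost over five allocation shares.
if __name__ == "__main__":
    cost = lambda x: float((x - 0.3) @ (x - 0.3))  # toy latency/energy surrogate
    alloc, val = chpo(cost, dim=5)
    print(alloc.round(3), round(val, 6))
```

In this sketch, the greedy replacement step guarantees that the best score never worsens, while the decaying `balance` term gradually shifts the population from exploring the search space to refining the best allocation found.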
The rest of the paper is organized as follows. In Section 2, past literature is reviewed in the context of mobile edge computing. The proposed ubiquitous mobile edge computing approach using the hybrid ensemble SVM and kernel random forest is described in Section 3. In Section 4, the evaluation results are discussed. In the final section, the conclusions of the paper are presented.
2. Literature Review
As summarized in Table 1, Yu et al. [10] presented an edge computing method that implemented space-air-ground integrated networks (SAGINs) to reduce satellite resource usage and task completion time. Additionally, the action space size was reduced through a pre-classification method. The proposed method used a deep imitation learning-driven caching and offloading algorithm and thereby achieved real-time decision-making. The authors evaluated the method in a simulated environment and compared it with other existing edge computing methods. Ai et al. [11] developed a smart collaborative framework (SCF) for creating multi-task offloading solutions and predicting dynamic services. They developed a theoretical approach and used hybrid deep learning algorithms in a hierarchical spatio-temporal monitoring (HSTM) approach across spatio-temporal dimensions. Additionally, they used advanced queuing and mixed game theories to enhance the offloading efficiency of their fine-grained resource scheduling (FRS) approach. However, the high computational expenditure and large memory needed by the DNN significantly limit the use of deep learning in resource-restricted edge computing.
Sood et al. [12] presented a smart traffic management approach, based on edge-cloud-centric IoT, for predicting traffic inflow and time-enhanced smart navigation of vehicles. Congestion at junctions was avoided by predicting traffic arrivals and avoiding long queues. They used a baseline classifier to analyze traffic arrivals, and the results showed that the proposed smart management approach was more effective than other existing methods in terms of road safety at junctions, smart navigation, and load balancing. However, this model is not energetically suitable for resource-restricted mobile mechanisms.
Mazumdar et al. [13] presented a three-layered fog node IoT approach to optimize the service with regard to time. They used a load-offloading method in the load sharing approach to enhance the security features of the proposed method, and its efficiency was evaluated against similar existing methods. A limitation of this method is that mobile terminals can only depend on cloud graphics processing units (GPUs) to accelerate computation; moreover, cloud computing security, the bandwidth of the wireless network, and the communication delay increase the network complexity.
Chien et al. [14] developed a spatio-temporal-dependency approach for feature extraction based on a convolutional neural network (CNN). The method mainly involved two types of base station traffic prediction data to tune the hyperparameters, and mobile edge computing (MEC) was integrated with the base station to reduce the time cost of data transmission to the cloud server. Performance metrics such as mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used for evaluation, and the results showed reduced training time and increased prediction accuracy. On the other hand, the use of a large amount of data significantly impacted the performance of this model [17]. Shah et al. [15] presented an empirical multi-agent cognitive method for the consecutive transition of IoT APIs. They devised a classification method for CAs which enables CA creation, control, and migration. The proposed method achieved IoT API distribution and transparency in the heterogeneity, which provided cloud computing optimization. However, the sensory data accumulated over a large network may be contradictory. Bolettieri et al. [16] proposed a heuristic algorithm with linear relaxation and rounding techniques to cope with the complexity of the optimization problem; however, the proposed approach was not effective in handling inconsistent traffic demands.
Abbasi et al. [18] explained the fog-computing-based IoT architecture for mobile edge computing. This method used a genetic algorithm (GA) to handle many requests along with their security and quality, and fog computing was used to enhance the management and processing of the IoT and smart grid. The results showed that this method reduced the delay and power consumption of devices. However, deep neural networks were required to solve the multi-objective optimization problems.
For edge-computing-enabled IoT systems, Liao and Cheng [19] developed a reputation- and voting-based consensus technique (RVC). Edge computing carries computing resources nearer to the internet of things (IoT) and farther from the center of the cloud, which enhanced the growth of the IoT by minimizing delay, while blockchain improved the IoT's security problems, as the devices and edge servers were scattered. The RVC technique was employed to rectify issues such as the low consensus efficiency and safety of existing consensus techniques. A high consensus success rate, good transaction throughput, and reduced time consumption were the advantages of RVC, while the large number of nodes required was its drawback.
Karjee et al. [20] established split computing technology to provide a superior user experience and alleviate the issues of partially offloading the DNN model inference task from an IoT device to a trusted device called an edge. Compared with the in-device inference time, the results reveal that the DNN model minimizes the inference time and balances the tasks across edge devices to significantly reduce battery drainage. The energy utilization/battery dissipation of edge devices was examined, and implementing this mechanism minimized the overall execution time of each task and improved the user experience [21]. However, due to the low computational capabilities of edge devices, it is arduous to accomplish these tasks in a short period of time while providing accurate results.
Chen et al. [22] explored the concept of load balancing in mobile edge computing (MEC) systems within ultra-dense networks. The study focused on improving the efficiency of MEC by accurately estimating the load on different edge servers and dynamically allocating tasks based on this estimation. The load estimation model proposed in the study may have relied on simplified assumptions and factors, such as CPU utilization, memory usage, and network traffic. While these factors are important, there may be other parameters and complexities that can affect the load on edge servers. The accuracy of the load estimation model could be further improved by considering a broader range of factors.
Poryazov et al. [23] discussed the normalization of quality of experience (QoE) models in telecommunication systems. The study addressed the limitation that existing QoE prediction models often provide inadequate results due to variations in data collection and presentation, and the authors proposed an overall model normalization technique to improve the accuracy and reliability of QoE predictions. Mutichiro et al. [24] proposed a dynamic pod-scheduling model to solve the task scheduling problem at the edge.
Table 1. Summary of various related works.
| Author and Year | Technique | Objective | Pros | Cons |
|---|---|---|---|---|
| Yu et al., 2021 [10] | Deep imitation learning-driven caching and offloading algorithm | To reduce the usage of satellite resources and the task completion time | Achieved real-time decision-making | Time required for implementation was high |
| Ai et al., 2023 [11] | Hierarchical spatio-temporal monitoring (HSTM) | To achieve prediction of dynamic services | Enhanced offloading efficiency | High computational expenditure and large memory requirement |
| Sood et al., 2021 [12] | Smart traffic management approach | To predict traffic inflow and enhance vehicles' smart navigation | More effective and better load balancing | Not energetically suitable for resource-restricted mobile mechanisms |
| Mazumdar et al., 2021 [13] | Load-offloading method | To optimize the service in suitable time and to support only static IoT devices | Reduces the amount of data to be sent to the cloud | Communication delay as well as high network complexity |
| Shah et al., 2018 [15] | Empirical multi-agent cognitive method | To attain consecutive transition of IoT APIs | Achieves transparency in the heterogeneity and distribution of IoT APIs | Challenge of designing the future of connected ecosystems |
| Bolettieri et al., 2021 [16] | Heuristic algorithm with linear relaxation and rounding techniques | To minimize complexity | High efficiency | Not effective in handling inconsistent traffic demands |
| Chien et al., 2021 [14] | Convolutional neural network (CNN) | To reduce the time cost of data transmission to the cloud server | Reduced training time and increased prediction accuracy | Use of a large amount of data significantly impacted performance |
| Abbasi et al., 2021 [18] | Genetic algorithm (GA) | To enhance management and processing of the IoT and smart grid | Reduced delay and power consumption | Deep neural networks required to solve the multi-objective optimization problems |
| Liao and Cheng, 2023 [19] | Reputation- and voting-based blockchain consensus (RVC) | To rectify issues such as low consensus efficiency | High consensus success rate, good transaction throughput, and reduced time consumption | Required a large number of nodes |
| Karjee et al., 2022 [20] | Deep neural network (DNN) | To alleviate the issues of partial offloading | Minimizes the overall execution time of each task | Low computational capabilities of edge devices |
| Mutichiro et al., 2021 [24] | Dynamic pod-scheduling model | To solve the task scheduling problem at the edge | Maximizes node utilization, minimizes cost, and optimizes service time | Constraints on resource capacity (CPU and memory) and total service time |
The proposed approach to resource allocation and QoS optimization in mobile edge computing (MEC) using IoT has several practical implications, including its integration into existing MEC systems.
The proposed approach is designed to be seamlessly integrated into existing MEC systems without requiring major modifications or disruptions. It leverages the existing infrastructure, protocols, and interfaces, ensuring compatibility with established MEC frameworks. This integration capability enables service providers to adopt the approach without significant implementation challenges, reducing time-to-market and minimizing operational complexities.
The approach follows a modular architecture, allowing for flexible integration with different components of an MEC system. It can be integrated at various layers, including edge nodes, gateway devices, and cloud servers. This modularity enables service providers to selectively deploy and scale the proposed approach based on their specific requirements and existing infrastructure, ensuring a customized integration process.
The approach also offers flexibility in configuration and adaptation to suit different MEC environments. Service providers can customize it based on their specific needs, such as defining resource allocation policies, setting QoS thresholds, and adapting the algorithms to match their network characteristics. This configurability enables seamless integration into diverse MEC ecosystems with varying requirements and constraints.
4. Results and Discussion
The experimental analysis was conducted on a personal computer with an Intel® Xeon® processor (2.4 GHz) and 32 GB RAM, using Python 3.5. The data collection module collects real-time data from various sources such as IoT sensors, user devices, and network components. The performance of the proposed model was evaluated using various performance metrics such as throughput, latency, delay, and energy consumption.
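As an illustration of what such a data collection module might look like, the sketch below polls three source types mirroring those named above and returns simulated readings. The class name, field names, and values are hypothetical stand-ins, since the paper does not publish its implementation.

```python
import random
import time

class DataCollector:
    """Illustrative collector polling heterogeneous sources for QoS inputs.

    The three source types mirror those named in the text; the readings
    are randomly simulated stand-ins for real driver/API calls.
    """

    SOURCES = ("iot_sensor", "user_device", "network_component")

    def poll(self, source):
        # Stand-in for a real driver/API call to the given source.
        return {
            "source": source,
            "timestamp": time.time(),
            "cpu_load": random.random(),         # fraction of CPU in use
            "queue_len": random.randint(0, 50),  # pending packets/tasks
        }

    def collect(self):
        return [self.poll(s) for s in self.SOURCES]

samples = DataCollector().collect()
print(samples[0]["source"], round(samples[0]["cpu_load"], 2))
```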
4.1. Parameter Settings
The hyperparameter configuration of the proposed method is depicted in Table 2. For the hybrid ensemble SVM, the kernel function was set to the radial basis function (RBF) kernel. For the kernel random forest, the number of trees in the forest was set to 50.
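For readers who want to reproduce a comparable baseline, the following scikit-learn sketch mirrors the settings just described: an RBF-kernel SVM and a 50-tree random forest, combined here by simple soft voting. The synthetic dataset and the voting combination are our own assumptions; the paper's HKRF/ESVM hybrid is more elaborate than this baseline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the real-time QoS dataset (an assumption).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

svm = SVC(kernel="rbf", probability=True, random_state=42)     # RBF kernel, per Table 2
rf = RandomForestClassifier(n_estimators=50, random_state=42)  # 50 trees, per Table 2

# Soft-voting ensemble as a simple stand-in for the HKRF/ESVM hybrid.
hybrid = VotingClassifier([("esvm", svm), ("hkrf", rf)], voting="soft")
hybrid.fit(X_tr, y_tr)
print(f"held-out accuracy: {hybrid.score(X_te, y_te):.3f}")
```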
In Table 3, we have included four different input configurations, each with varying settings for the number of trees in the hybrid kernel random forest (HKRF) algorithm, the types of kernels used, and the number of iterations in the crossover-based hunter-prey optimization (CHPO) algorithm. The table includes performance metrics such as accuracy, computational efficiency, convergence speed, and resource utilization. The values in the table illustrate the impact of the input configurations on these metrics.
4.2. Performance Measures
The performance of the proposed model is evaluated using various performance metrics such as throughput, latency, delay, and energy consumption [26,27].
Throughput is the amount of data that can be transmitted through the network in a given amount of time.
Energy consumption refers to the amount of energy used by the devices or network to perform a specific task.
Delay refers to the time taken for a packet or data to travel from the source to the destination.
Latency is defined as the time between initiating a network request and receiving a response.
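These definitions map directly onto simple measurements over logged traffic. The sketch below shows one plausible way to compute all four metrics from per-packet records; the Packet fields, units, and sample values are illustrative assumptions, not the paper's logging format.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    size_bits: int      # payload size in bits
    t_sent: float       # s, time the request left the source
    t_received: float   # s, time the packet arrived at the destination
    t_response: float   # s, time the response reached the requester
    energy_mj: float    # mJ consumed by the device for this packet

def throughput_mbit_s(packets):
    """Total bits delivered per second of wall-clock transfer time."""
    span = max(p.t_response for p in packets) - min(p.t_sent for p in packets)
    return sum(p.size_bits for p in packets) / span / 1e6

def mean_delay_s(packets):
    """Average one-way source-to-destination travel time."""
    return sum(p.t_received - p.t_sent for p in packets) / len(packets)

def mean_latency_s(packets):
    """Average request-to-response round-trip time."""
    return sum(p.t_response - p.t_sent for p in packets) / len(packets)

def total_energy_mj(packets):
    """Total energy spent by the device on the task."""
    return sum(p.energy_mj for p in packets)

# Hypothetical two-packet trace for demonstration.
pkts = [Packet(12_000, 0.000, 0.004, 0.009, 0.8),
        Packet(12_000, 0.500, 0.503, 0.507, 0.7)]
print(f"{throughput_mbit_s(pkts):.3f} Mbit/s, {mean_latency_s(pkts) * 1e3:.1f} ms")
```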
4.3. Performance Evaluation
To validate the mobile edge computing approach, we compare the ensemble SVM and the kernel random forest with the proposed hybrid ensemble SVM and kernel random forest on performance metrics such as delay, throughput, and energy consumption.
The performance analysis of the proposed model is presented in Figure 3, Figure 4, Figure 5, and Figure 6. Figure 3 demonstrates the performance of the proposed approach in terms of throughput, where it is compared with the ensemble SVM [28] and the kernel random forest [29]. The results show that the proposed approach achieves a higher throughput rate of 595 Mbit/s than the other two algorithms. This suggests that the proposed approach can allocate resources more efficiently, resulting in higher data transmission rates across the network.
Figure 4 shows the performance analysis of average energy consumption. The results indicate that the proposed approach can allocate resources more efficiently, reducing the energy consumption of the devices and network to 9.4 mJ. The performance analysis of the proposed model in terms of latency and delay is demonstrated in Figure 5 and Figure 6, respectively. This implies that the proposed model can effectively allocate resources and optimize the QoS [30], resulting in improved network performance.
5. Conclusions
This paper has presented a novel approach for resource allocation and quality of service (QoS) optimization in ubiquitous mobile edge computing (UMEC) using the internet of things (IoT). The proposed approach combines the hybrid kernel random forest (HKRF) and ensemble support vector machine (ESVM) algorithms with crossover-based hunter-prey optimization (CHPO) to optimize the QoS metrics, such as latency, throughput, and energy efficiency, while allocating resources to different applications running on the UMEC network. The experimental results have demonstrated that the proposed approach outperforms other state-of-the-art algorithms in terms of QoS metrics and resource allocation efficiency. It achieves a higher throughput rate of 595 Mbit/s compared to the other evaluated algorithms, indicating improved data transmission rates. Additionally, it reduces the energy consumption of the devices and the network to 9.4 mJ, showcasing enhanced energy efficiency.
This paper has introduced a unique combination of HKRF, ESVM, and CHPO algorithms to tackle the resource allocation and QoS optimization challenges in UMEC systems. The proposed approach effectively optimizes latency, throughput, and energy efficiency, enhancing the overall network performance and user experience. Extensive experiments have been conducted to evaluate the performance of the proposed approach, demonstrating its superiority over other algorithms. The paper highlights the practical implications of the proposed approach, such as its integration into existing MEC systems and the potential additional benefits beyond the evaluated metrics. However, it is important to acknowledge some limitations of this work. Firstly, the experimental analysis was conducted on a specific hardware configuration, and the results may vary in different settings. Secondly, the proposed approach relies on the accurate prediction of QoS metrics, which, in turn, depends on the quality and availability of input data. Further improvements in data collection and prediction accuracy could enhance the system’s performance. Lastly, while the proposed approach addresses resource allocation and QoS optimization, other aspects such as security and fault tolerance could be explored in future research.