Article

Task Similarity-Aware Cooperative Computation Offloading and Resource Allocation for Reusable Tasks in Dense MEC Systems

1 School of Physics and Electronic Information, Yantai University, Yantai 264005, China
2 Shandong Data Open Innovation Application Laboratory of Smart Grid Advanced Technology, Yantai 264005, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(10), 3172; https://doi.org/10.3390/s25103172
Submission received: 2 April 2025 / Revised: 9 May 2025 / Accepted: 13 May 2025 / Published: 17 May 2025
(This article belongs to the Section Sensor Networks)

Abstract

As an emerging paradigm for supporting computation-intensive and latency-sensitive services, mobile edge computing (MEC) faces significant challenges in terms of efficient resource utilization and intelligent task coordination among heterogeneous user equipment (UE), especially in dense MEC scenarios with severe interference. However, task similarity and cooperation opportunities among UE are usually ignored in existing studies when dealing with reusable tasks. In this paper, we investigate the problem of cooperative computation offloading and resource allocation for reusable tasks, with a focus on minimizing the energy consumption of UE while ensuring delay limits. The problem is formulated as an intractable mixed-integer nonlinear programming (MINLP) problem, and we design a similarity-based cooperative offloading and resource allocation (SCORA) algorithm to obtain a solution. Specifically, the proposed SCORA algorithm decomposes the original problem into three subproblems, i.e., task offloading, resource allocation, and power allocation, which are solved using a similarity-based matching offloading algorithm, a cooperation-based resource allocation algorithm, and a concave–convex procedure (CCCP)-based power allocation algorithm, respectively. Simulation results show that compared to the benchmark schemes, the SCORA scheme can reduce energy consumption by up to 51.52% while maintaining low latency. Moreover, the energy of UE with low remaining energy levels is largely saved.

1. Introduction

In recent years, the rapid advancement of wireless communications and internet of things (IoT) technologies has driven a surge in mobile user equipment (UE) and IoT devices. Concurrently, 5G mobile communication technology has reached full-scale commercialization, spurring the emergence of computation-intensive and latency-sensitive applications such as biometric recognition (face/fingerprint/iris), natural language processing, and interactive gaming [1]. These applications require significant amounts of both energy and computing power from UE. However, the constrained computing and battery capacities of UE often hinder efficient operation, adversely affecting the quality of experience (QoE) of UE. Moreover, reliable communication quality and low network latency are essential for these applications. Proposed by the European Telecommunications Standards Institute (ETSI), mobile edge computing (MEC) has emerged as an effective extension of traditional mobile cloud computing (MCC) [2]. In technical specification [3], support for edge computing by the 3rd Generation Partnership Project (3GPP) is described. MEC enables operator and third-party services to be hosted close to the UE’s access point so as to reduce end-to-end latency and transport network load. The fundamental principle of MEC involves processing data at the network edge through MEC servers deployed in proximity to UE. This architecture allows MEC servers to handle latency-sensitive and computation-intensive tasks locally, enabling elastic utilization of computational and storage resources. Consequently, MEC effectively addresses traditional cloud computing challenges including high latency, excessive energy consumption, and data security risks associated with long-distance data transmission.
In dense edge computing systems (DECS), which integrate ultra-dense network (UDN) and MEC, the growing UE population necessitates the adoption of loosely coupled reusable task design as a critical architectural solution. This design paradigm enables applications to be composed of modular tasks where input parameters and output results are decoupled from task code implementation, allowing different input combinations with the same task code to generate corresponding outputs [4]. For example, virtual reality (VR) and augmented reality (AR) applications require a large amount of real-time rendering and data processing, and multiple pieces of UE may need to handle the same rendering and data processing tasks at the same time within the same region, with only the UE’s own data being different. These tasks can be regarded as reusable tasks, and multiple pieces of UE can share the same rendering resources and processing results through cooperation. Moreover, there are three characteristic scenarios that demonstrate the potential of reusable tasks: (1) For connected vehicle ecosystems, traffic incidents during peak hours often trigger simultaneous requests from numerous UE items for identical real-time navigation data and updated traffic conditions [4]. (2) In a component-based multiplayer gaming environment, multiple players may frequently reuse the same game components [5]. (3) In industrial IoT deployments, massive IoT devices often generate reusable task requests in the same scenarios [6].
In this paper, taking the task similarity into consideration, we want to connect multiple pieces of UE with numerous identical reusable tasks to the same MEC server. In this way, during the processing phase of each task, only one UE device is required to offload the task code to the MEC server for computation. The remaining UE devices merely need to send the task input parameters and can share the corresponding task output results, aiming to reduce energy consumption. In addition, in order to improve the QoE of UE, our goal is to minimize the energy consumption of UE by jointly optimizing the task offloading decisions, subchannel and computing resource allocation, and transmission power allocation while considering the remaining energy of UE. Therefore, we propose a similarity-based cooperative offloading and resource allocation (SCORA) algorithm to achieve our objectives. The main contributions of this paper are as follows:
  • To address the cooperative offloading problem posed by reusable tasks in DECS, we formulate a joint optimization problem involving offloading decisions, subchannel assignments, computing resource allocation, and transmission power optimization to minimize the energy consumption of UE;
  • Since the formulated problem is a mixed-integer nonlinear programming problem (MINLP), to facilitate the solution of this problem, we decompose the problem into three subproblems and solve them separately. For the offloading subproblem, considering the task similarity between UE devices, a similarity-based matching offloading strategy is developed to solve it; then, a cooperative-based subchannel allocation strategy is proposed to solve the subchannel allocation subproblem, and finally, the UE devices’ transmission power is optimized using the concave–convex procedure (CCCP) method;
  • Simulation results show that our scheme performs better than existing ones. It can effectively reduce the average energy consumption and latency of user equipment. Moreover, it can save power for UE with low remaining energy levels, and it adapts well to different numbers of MEC servers and UE devices, providing a reliable solution for reusable task offloading and resource allocation in DECS.
The organization of this article is as follows. Section 2 describes the related work. Section 3 introduces the system model, including the network model, the task model, the communication model, the latency model, and the energy consumption model, and then formulates the problem. In Section 4, we decompose the formulated problem into three subproblems and solve each of them. Section 5 provides the related simulations. Finally, we conclude this article in Section 6.

2. Related Work

Contemporary academic studies have systematically addressed crucial technical aspects including offloading policy formulation and heterogeneous resource orchestration in the MEC system. In [7], the authors proposed a collaborative offloading scheme between MEC servers in order to alleviate network congestion and then used the deep Q-network approach to minimize the total execution time concerning deadline constraints. In [8], the authors added energy consumption constraints while constructing the goal of minimizing system latency. There are many studies aiming to minimize energy consumption [9,10,11]. In [9], the authors proposed an offloading strategy for the joint optimization of computing and communication resources to minimize energy consumption within the maximum tolerance time. In [10], the authors considered minimizing energy consumption while maximizing the number of tasks completed in a dynamical system, and they proposed a deep reinforcement learning scheme to jointly optimize the offloading decisions and the computational frequency allocation. In [11], the authors added service migration cost and task discarding penalties to the proposed objective of minimizing energy consumption in mobile IoT networks using energy harvesting, and they optimized the harvested energy, task allocation factor, central processing unit (CPU) frequency, transmission power, and association vector. In order to obtain a balance of latency and energy consumption, many studies focus on multi-objective optimization frameworks with approaches such as game theory and machine learning algorithms [12,13,14,15]. In [12], the authors proposed an improved quantum particle swarm algorithm to optimize task offloading decisions. 
In [13], considering the download delay of tasks, the authors proposed a three-stage multi-round combined offload scheduling mechanism and a joint resource allocation policy to solve the joint optimization problem of task offloading and heterogeneous resource allocation. In [14], the authors considered idle offsite servers and UE mobility, and they proposed a multilateral collaborative computation offloading model and an improved genetic algorithm. In [15], the authors formulated the system utility as an integrated function of computing service costs, task execution time, and energy consumption and then optimized the joint optimization of offloading decisions, transmission power, computing resource allocation, and computational service costs through a two-tier bargaining-based task offloading and a collaborative computing incentive mechanism.
In the aforementioned studies, although the increased number of MEC servers has proven to be effective in assisting task processing, the quantity of MEC servers typically remains limited. In DECS, the proliferation of MEC servers enhances opportunities for task offloading and collaborative processing among UE devices with rapidly increasing density. However, it also brings some challenges, such as the complexity of task offloading decisions, multidimensional resource allocation, and intricate service migration and caching placement issues. Many studies have endeavored to address the task offloading problems [16,17,18]. In [16], in order to minimize the long-term average task latency of all UE, the authors developed a novel calibrated contextual bandit learning algorithm to enable UE devices to predict the task offloading decisions of the rest of the UE in order to independently decide their own offloading decisions. In [17], the authors designed an online task offloading deep reinforcement learning algorithm: the asynchronous advantage actor–critic. This framework operates independently of real-time channel state information and BS computational power, achieving the dual objectives of strict adherence to energy budget constraints and systematic minimization of task completion latency. In [18], the authors proposed a contextual sleeping bandit learning algorithm (CSBL) with Lyapunov optimization to minimize long-term task delay under price constraints, and they extended it to multi-server scenarios as CSBL-M to address exponential action space growth in task offloading.
In DECS, the proliferation of UE has given rise to pressing resource allocation challenges. Therefore, many recent studies have focused on addressing computational offloading and resource allocation problems [19,20,21,22]. In [19], the authors proposed a deep reinforcement learning-based scheduling algorithm that utilizes deep deterministic policy gradient and behavioral critique networks to solve the task scheduling and resource scheduling problems, aiming to minimize the task latency for all UE devices. In [20], in order to minimize the system energy consumption, the authors used the improved artificial fish swarm algorithm and the improved particle swarm optimization (PSO) algorithm to jointly optimize the computation offloading decision, the task offloading ratio, and the allocation of communication and computation resources. In [21], the authors jointly optimized the problems of task offloading, BS selection, and resource scheduling, which were addressed by a Newton-interior point method-based resource allocation algorithm and a genetic algorithm-based scheduling method, achieving the minimization of the weighted sum of system delay and energy consumption. In [22], taking the uncertainties in UE mobility and resource constraints into consideration, the authors proposed a distributed delay-constrained computation offloading framework that incorporates Lyapunov-based game-theoretic optimization and multi-stage stochastic programming to achieve adaptive task offloading and computational capacity management. In addition, extensive studies exist on dynamic service migration and caching placement mechanisms [23,24]. An energy-efficient online algorithm based on Lyapunov and PSO was developed to solve the task migration problem in order to reduce energy consumption while considering the interference and mobility of UE in [23].
In [24], the authors proposed a two-timescale hierarchical multi-agent deep reinforcement learning (HMDRL)-based scheme for the joint optimization of cooperative service caching, computation offloading, and resource allocation to minimize weighted energy consumption across energy-harvesting (EH)-powered mobile UE devices and small base stations (SBSs). Although these studies addressed many problems in DECS, they failed to consider the homogeneity of tasks between UE devices and ignored cooperation opportunities brought about by the dense deployment of UE devices and MEC servers.
In the processing of reusable tasks, cooperative offloading realizes resource sharing via task decomposition, resource allocation, and UE cooperation to boost efficiency and reduce energy consumption. To fully exploit the computing capacity of a multi-server system, collaborative offloading among multiple edge servers is necessary [25]. Scenarios involving reusable tasks are considered in [4,5,6,26]. In [4], by applying coalitional game theory, the authors formulated a cooperative offloading process for reusable tasks as a coalitional game to maximize cost savings. In [5], the authors developed a 0–1 integer nonlinear programming problem to minimize the total energy cost on the player’s side under a delay constraint in a component-based multiplayer game scenario. In [6], the authors incorporated drones to assist with the offloading of reusable tasks in an MEC environment. By jointly optimizing the UE offloading policy, UE transmission power, server allocation on the unmanned aerial vehicle (UAV), the computation frequency of the UE and the UAV server, and the UAV flight trajectory, a system model was constructed to minimize the system average total energy consumption under time delay constraints. In [26], the authors formulated the joint offload optimization problem with the aim of minimizing the long-term average task execution cost, taking into account transmission collaboration, shared wireless bandwidth, and varying task queues in UE devices and MEC servers. The above research on cooperative task offloading does not take into account the types of tasks in the scenario, which is important for the completion of cooperative offloading.

3. System Model and Problem Formulation

In this section, we provide a detailed introduction to the system model, task model, communication model, latency model, and energy consumption model, and then present our optimization problem.

3.1. System Model

As shown in Figure 1, our system contains $M$ SBSs, denoted by the set $\mathcal{M} = \{1, 2, \ldots, M\}$, each equipped with an MEC server and using orthogonal frequency division multiple access (OFDMA) technology. There are $N$ UE devices in the coverage area, denoted by the set $\mathcal{N} = \{1, 2, \ldots, N\}$, each with its own remaining energy level and task list. The MEC server connected to the SBS is powered by cable, so the energy consumption of its processing tasks is not considered [27]. The energy consumption generated by the hardware of the UE's own circuitry is ignored [28]. The download latency of task results is usually ignored due to the small size of the output results and the generally high rate of the downlink [29,30]. When UE devices offload their tasks by occupying the same subchannels, the interference cannot be neglected [31]. For instance, as shown in Figure 1, when UE 3 offloads its computational tasks to SBS 2 via a subchannel, it generates interference to other SBSs serving UE devices operating on the same subchannel. The main symbols and their definitions are summarized in Table 1.

3.2. Task Model

In each time slot, each UE device generates one type of task. Let j denote the computational task type, and let each UE device maintain a task list that stores its tasks. Figure 2 shows an example of reusable offloading. In Figure 2a, with the existing offloading method, even though the current reusable tasks of UE 1 and UE 2 are the same, all the task codes are offloaded separately to the MEC server for processing. However, as shown in Figure 2b, when UE 1 and UE 2 choose to perform cooperative offloading, if UE 1 offloads the main task code and its own input parameters, then UE 2 only needs to offload its own input parameters, which greatly reduces the energy consumption of UE 2. Therefore, we focus on cooperative offloading. In order to save energy, UE devices offload their computational tasks to MEC servers for processing due to limited computational power [32]. The reusable tasks in this paper are similar to [5], which does not consider divisibility. Let $S_n^j = \{L_n^j, L_n^{j,\mathrm{main}}, L_n^{j,\mathrm{in}}, C_n^j, C_n^{j,\mathrm{main}}, C_n^{j,\mathrm{in}}, T_n^{j,\max}\}$ represent the attributes of task j offloaded by UE n [33], where $L_n^j$, $L_n^{j,\mathrm{main}}$, and $L_n^{j,\mathrm{in}}$ are the total task size, the main task code size, and the task input parameter size for UE n, respectively. $C_n^j$, $C_n^{j,\mathrm{main}}$, and $C_n^{j,\mathrm{in}}$ are the numbers of CPU cycles required to complete the total task, the main task code, and the task input parameters, respectively. $T_n^{j,\max}$ is the latency requirement for UE n to complete task j.
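To make the task attribute tuple concrete, the following is a minimal sketch of $S_n^j$ as a Python data structure. The field names are our own illustration (the paper only defines the symbols), and the consistency checks encode the additivity of the main-code and input-parameter parts used later in the latency model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReusableTask:
    """Illustrative container for the task attribute tuple S_n^j."""
    task_type: int   # j, the task type
    L_total: float   # L_n^j, total task size (bits)
    L_main: float    # L_n^{j,main}, main task code size (bits)
    L_in: float      # L_n^{j,in}, input parameter size (bits)
    C_total: float   # C_n^j, CPU cycles for the whole task
    C_main: float    # C_n^{j,main}, CPU cycles for the main code
    C_in: float      # C_n^{j,in}, CPU cycles for the input parameters
    T_max: float     # T_n^{j,max}, latency requirement (s)

    def __post_init__(self):
        # The total size/cycles equal the main-code part plus the
        # input-parameter part, as assumed throughout the latency model.
        assert abs(self.L_total - (self.L_main + self.L_in)) < 1e-9
        assert abs(self.C_total - (self.C_main + self.C_in)) < 1e-9
```

Two UE with the same `task_type` can then share one upload of `L_main`, which is the cooperative saving the paper exploits.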

3.3. Communication Model

Let $a_{mn}$ be the task offloading indicator. When UE n offloads its tasks to MEC server m for processing, $a_{mn} = 1$; otherwise, $a_{mn} = 0$. Each MEC server divides its bandwidth into K subchannels, denoted by the set $\mathcal{K} = \{1, 2, \ldots, K\}$, each with bandwidth $B_0$. The accessed UE devices offload tasks in an orthogonal manner. Let $b_{mn}^k$ denote the subchannel allocation variable. When subchannel k is occupied by UE n to offload tasks to MEC server m, $b_{mn}^k = 1$; otherwise, $b_{mn}^k = 0$. Since OFDMA technology is used, there is no interference between UE devices connected to the same MEC server, but there is co-tier interference between different MEC servers, i.e., UE devices occupying the same subchannel influence each other [34]. According to the Shannon formula, the uplink data rate of UE n transmitting to MEC server m via subchannel k can be calculated as

$$R_n = \sum_{m=1}^{M} \sum_{k=1}^{K} a_{mn} B_0 \log_2\!\left( 1 + \frac{b_{mn}^k P_n^k h_{mn}^k}{\sigma^2 + I_m^k} \right), \tag{1}$$

where $I_m^k = \sum_{i=1, i \neq n}^{N} a_{mi} b_{mi}^k P_i^k h_{mi}^k$ is the co-tier interference at MEC server m on subchannel k, $P_n^k$ is the transmission power of UE n on subchannel k, $h_{mn}^k$ is the channel gain from UE n to MEC server m on subchannel k, and $\sigma^2$ is the Gaussian white noise power.
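The Shannon-rate expression above can be sketched directly in code. This is a minimal illustration with list-based indexing of our own choosing (`a[m][n]`, `b[m][n][k]`, `P[n][k]`, `h[m][n][k]`); it is not the authors' implementation.

```python
import math

def uplink_rate(n, a, b, P, h, sigma2, B0):
    """Uplink rate R_n of UE n: sum over servers m and subchannels k of
    a[m][n] * B0 * log2(1 + SINR), where the co-tier interference on
    subchannel k sums the received power of every other UE on that
    subchannel, exactly as in the rate formula above."""
    M = len(a)
    N = len(a[0])
    K = len(P[0])
    rate = 0.0
    for m in range(M):
        for k in range(K):
            if a[m][n] == 0 or b[m][n][k] == 0:
                continue
            # Co-tier interference seen by server m on subchannel k
            I_mk = sum(a[m][i] * b[m][i][k] * P[i][k] * h[m][i][k]
                       for i in range(N) if i != n)
            sinr = P[n][k] * h[m][n][k] / (sigma2 + I_mk)
            rate += B0 * math.log2(1.0 + sinr)
    return rate
```

With a single UE, a single server, and unit power, gain, noise, and bandwidth, the SINR is 1 and the rate is $\log_2 2 = 1$.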

3.4. Latency Model

In the cooperative scheme, one UE device is selected from the UE devices with reusable tasks on the same MEC server to offload the main task code together with its own task input parameters to the MEC server. The remaining UE devices only need to send their own task input parameters to the MEC server. The MEC server then computes each task and sends the result of each task back to the corresponding UE. If UE n accesses MEC server m and acts as the UE that offloads the main part and input parameters of the task, then the size of its offloaded task can be given by

$$L_n^j = L_n^{j,\mathrm{main}} + L_n^{j,\mathrm{in}}. \tag{2}$$
The number of CPU cycles required to complete the entire task can be given by

$$C_n^j = C_n^{j,\mathrm{main}} + C_n^{j,\mathrm{in}}. \tag{3}$$
The transmission latency of UE n can be calculated by

$$T_n^{j,t} = \sum_{m=1}^{M} a_{mn} \frac{L_n^j}{R_n} = \sum_{m=1}^{M} a_{mn} \frac{L_n^{j,\mathrm{main}} + L_n^{j,\mathrm{in}}}{R_n}. \tag{4}$$
The computational latency of MEC server m for the task computation of UE n can be expressed as

$$T_n^{j,s} = \sum_{m=1}^{M} a_{mn} \frac{C_n^j}{f_n} = \sum_{m=1}^{M} a_{mn} \frac{C_n^{j,\mathrm{main}} + C_n^{j,\mathrm{in}}}{f_n}, \tag{5}$$

where $f_n$ is the computing resource allocated by MEC server m to the accessed UE n.
If UE n acts as a UE device that offloads only the task input parameters, then its offloaded task size is $L_n^j = L_n^{j,\mathrm{in}}$, and the number of CPU cycles required to complete the entire task is $C_n^j = C_n^{j,\mathrm{in}}$.
The transmission latency of UE n is then calculated as follows:

$$T_n^{j,t} = \sum_{m=1}^{M} a_{mn} \frac{L_n^j}{R_n} = \sum_{m=1}^{M} a_{mn} \frac{L_n^{j,\mathrm{in}}}{R_n}. \tag{6}$$
Since a task can only start executing after all of its data have been uploaded to the MEC server, a UE device that sends only the task input parameters must wait until the UE that sends the main part of the task has completed its offloading before task execution can begin. If the transmission delay of UE n is less than that of the cooperating UE i which offloads the main code of task j, then UE n must wait until the offloading of UE i is completed before the task computation can be carried out; if it is greater, no waiting is needed. That is, $T_n^{j,t} = \max\{T_n^{j,t}, T_i^{j,t}\}$, where $i \neq n$, $i \in \mathcal{N}$, and $T_i^{j,t} = \sum_{m=1}^{M} a_{mi} \frac{L_i^j}{R_i}$. Moreover, since the power usage of the UE during the waiting period is very small, we ignore the energy consumption of the waiting process.
The computational latency $T_n^{j,s}$ of MEC server m for the task computation is given by (5). Therefore, the total latency of UE n can be expressed as

$$T_n^j = T_n^{j,t} + T_n^{j,s} = \sum_{m=1}^{M} a_{mn} \left( \frac{L_n^j}{R_n} + \frac{C_n^j}{f_n} \right). \tag{7}$$
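The total-latency rule above, including the waiting rule for cooperating UE, can be sketched as a small helper. This is an illustrative simplification for a UE already assigned to one server and one subchannel, so the sums over $m$ collapse to a single term; the argument names are our own.

```python
def total_latency(L_n, C_n, R_n, f_n, T_coop_tx=None):
    """Total latency of UE n: transmission delay L_n / R_n plus
    computation delay C_n / f_n. For a UE that offloads only input
    parameters, T_coop_tx is the transmission delay of the cooperating
    UE that carries the main task code; the effective transmission
    delay is then max(own delay, T_coop_tx), as described above."""
    t_tx = L_n / R_n
    if T_coop_tx is not None:
        # Wait until the main-code upload has finished, if it is slower.
        t_tx = max(t_tx, T_coop_tx)
    return t_tx + C_n / f_n
```

For example, with `L_n = 10`, `R_n = 5`, `C_n = 20`, `f_n = 10`, the latency is 2 + 2 = 4; if the cooperating UE needs 3 time units to upload the main code, the latency grows to 3 + 2 = 5.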

3.5. Energy Consumption Model

When UE n acts as the UE device that sends both the main task code and the task input parameters, the transmission energy consumption of UE n can be expressed as

$$E_n^{j,t} = \sum_{k=1}^{K} P_n^k T_n^{j,t} = \sum_{k=1}^{K} \sum_{m=1}^{M} a_{mn} P_n^k \frac{L_n^{j,\mathrm{main}} + L_n^{j,\mathrm{in}}}{R_n}, \tag{8}$$

where $P_n^k$ is the transmission power of UE n on subchannel k.
When UE n acts as the UE device that sends only its task input parameters, the transmission energy consumption of UE n can be calculated by

$$E_n^{j,t} = \sum_{k=1}^{K} P_n^k T_n^{j,t} = \sum_{k=1}^{K} \sum_{m=1}^{M} a_{mn} P_n^k \frac{L_n^{j,\mathrm{in}}}{R_n}. \tag{9}$$
Finally, the total energy consumption of UE n is calculated as follows:

$$E_n^j = E_n^{j,t} = \sum_{k=1}^{K} \sum_{m=1}^{M} a_{mn} P_n^k \frac{L_n^j}{R_n} = \sum_{k=1}^{K} \sum_{m=1}^{M} \frac{a_{mn} P_n^k L_n^j}{\sum_{m'=1}^{M} \sum_{k'=1}^{K} a_{m'n} B_0 \log_2\!\left( 1 + \frac{b_{m'n}^{k'} P_n^{k'} h_{m'n}^{k'}}{\sigma^2 + I_{m'}^{k'}} \right)}. \tag{10}$$
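The energy model above is transmission power multiplied by transmission time, which makes the cooperative saving easy to quantify. The following sketch compares the two roles with made-up numbers (the sizes, power, and rate are purely illustrative, not from the paper).

```python
def transmission_energy(P_nk, L_n, R_n):
    """Transmission energy of UE n on its assigned subchannel:
    power P_nk times transmission time L_n / R_n. A UE carrying the
    main task code passes L_main + L_in; a cooperating UE passes only
    its input-parameter size L_in."""
    return P_nk * L_n / R_n

# Illustrative comparison: two UE share one reusable task.
# Main code: 8 Mbit, input parameters: 1 Mbit, rate: 10 Mbit/s, power: 0.5 W.
E_main = transmission_energy(0.5, 8e6 + 1e6, 10e6)  # uploads code + params
E_coop = transmission_energy(0.5, 1e6, 10e6)        # uploads params only
```

Here the cooperating UE spends 0.05 J instead of 0.45 J, which is the per-UE saving that motivates pairing UE with identical reusable tasks on the same server.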

3.6. Problem Formulation

In this paper, considering task similarity, we jointly optimize the task offloading policy, the allocation of communication and computing resources, and the transmission power of UE devices to minimize the average total energy consumption of the UE devices. The problem is formulated as follows:

$$\begin{aligned} \min_{a_{mn},\, b_{mn}^k,\, f_n,\, P_n^k} \quad & \frac{1}{N} \sum_{n=1}^{N} E_n^j \\ \mathrm{s.t.} \quad & \mathrm{C1}: \sum_{m=1}^{M} a_{mn} = 1, \; \forall n \in \mathcal{N} \\ & \mathrm{C2}: \sum_{m=1}^{M} \sum_{k=1}^{K} b_{mn}^k = 1, \; \forall n \in \mathcal{N} \\ & \mathrm{C3}: a_{mn}, b_{mn}^k \in \{0, 1\} \\ & \mathrm{C4}: P_n^k \le P_n^{k,\max}, \; \forall n \in \mathcal{N}, \; \forall k \in \mathcal{K} \\ & \mathrm{C5}: \sum_{k=1}^{K} P_n^k \le P_n^{\max}, \; \forall n \in \mathcal{N} \\ & \mathrm{C6}: T_n^j \le T_n^{j,\max}, \; \forall n \in \mathcal{N}, \end{aligned} \tag{11}$$

where $P_n^{k,\max}$ and $P_n^{\max}$ are the maximum transmission power of UE n on each subchannel and on each MEC server, respectively.
Constraint C1 indicates that each UE device can select only one MEC server for computation offloading. Constraint C2 indicates that each piece of UE is assigned only one subchannel. Constraint C3 indicates that both task offloading and subchannel assignment are binary. Constraints C4 and C5 give the transmission power limit on each subchannel and on each MEC server, respectively. Constraint C6 is the maximum delay requirement for UE n to complete the task. Since problem (11) is an MINLP problem, obtaining the optimal solution is challenging.
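Although solving problem (11) is hard, verifying a candidate solution against constraints C1–C6 is straightforward. The following is a minimal feasibility checker with list-based indexing of our own choosing; it is a sketch, not part of the paper's algorithm.

```python
def is_feasible(a, b, P, T, P_k_max, P_max, T_max):
    """Check constraints C1-C6 of problem (11) for a candidate solution.
    a[m][n], b[m][n][k]: binary decisions; P[n][k]: transmission powers;
    T[n]: resulting total latencies; P_k_max[n][k], P_max[n], T_max[n]:
    the corresponding limits."""
    M = len(a)
    N = len(a[0])
    K = len(P[0])
    for n in range(N):
        # C1: exactly one MEC server per UE
        if sum(a[m][n] for m in range(M)) != 1:
            return False
        # C2: exactly one subchannel per UE
        if sum(b[m][n][k] for m in range(M) for k in range(K)) != 1:
            return False
        # C3: binary offloading and subchannel variables
        if any(a[m][n] not in (0, 1) for m in range(M)):
            return False
        if any(b[m][n][k] not in (0, 1) for m in range(M) for k in range(K)):
            return False
        # C4 and C5: per-subchannel and total power limits
        if any(P[n][k] > P_k_max[n][k] for k in range(K)):
            return False
        if sum(P[n]) > P_max[n]:
            return False
        # C6: latency deadline
        if T[n] > T_max[n]:
            return False
    return True
```

Such a check is useful inside any heuristic for (11), since every candidate produced by the subproblem solvers must remain feasible.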

4. Similarity-Based Cooperative Offloading and Resource Allocation Algorithm

In this section, we propose the SCORA algorithm to solve the formulated problem (11). We divide it into three interdependent subproblems: (1) a task offloading decision subproblem, (2) a resource allocation subproblem, and (3) a power allocation subproblem. In the SCORA algorithm, the similarity-based matching offloading algorithm is proposed to solve the task offloading decision subproblem, and the cooperation-based resource allocation algorithm is designed to tackle the resource allocation problem, leveraging task similarity among UE devices. Finally, the non-convex power allocation optimization is systematically addressed through the CCCP-based power allocation algorithm, enabling efficient energy consumption minimization. Figure 3 illustrates the original problem and the decomposed subproblems and corresponding algorithms.

4.1. Similarity-Based Matching Offloading Algorithm

Given that each UE device offloads its task to exactly one MEC server, while each MEC server is capable of serving multiple UE devices, this problem is well suited to many-to-one matching theory. Initially, the matching triple $(\mathcal{P_M}, \psi, \Phi)$ is defined as follows:
  • $\mathcal{P_M} = (\mathcal{M}, \mathcal{N})$ are two disjoint sets. In this paper, $\mathcal{M}$ is the set of MEC servers that execute tasks, and $\mathcal{N}$ is the set of UE devices that offload tasks to MEC servers.
  • $\psi = \{PL_m, PL_n\}$ represents the preference lists of MEC servers and UE. Each MEC server $m \in \mathcal{M}$ maintains a preference list $PL_m$ in which UE devices are sorted in descending order of preference; $n \succ_m n'$ means that MEC server m prefers UE n to UE $n'$. Each UE device likewise maintains a descending-order preference list $PL_n$ over the MEC servers.
  • $\Phi \subseteq \mathcal{M} \times \mathcal{N}$ represents the matching between MEC servers and UE. Each UE device can be matched with at most one MEC server, that is, $\Phi(n) \in \mathcal{M}$ and $|\Phi(n)| \le 1$, where $|\Phi(\cdot)|$ is the cardinality of the matching result $\Phi(\cdot)$. In addition, each MEC server can be matched with multiple UE devices, that is, $\Phi(m) \subseteq \mathcal{N}$ and $|\Phi(m)| \le A_m$, where $A_m$ is the maximum number of UE devices that can access MEC server m.
Subsequently, the definition of the considered many-to-one matching can be described as follows:
Definition 1.
Given the UE set $\mathcal{N}$ and the MEC server set $\mathcal{M}$, the matching game $\Phi: \mathcal{N} \to \mathcal{M}$ for computation offloading is defined as a many-to-one function such that
  • $\Phi(n) \in \mathcal{M}$ and $|\Phi(n)| \le 1$, $\forall n \in \mathcal{N}$;
  • $\Phi(m) \subseteq \mathcal{N}$ and $|\Phi(m)| \le K$, $\forall m \in \mathcal{M}$;
  • $n \in \Phi(m) \Leftrightarrow m = \Phi(n)$, $\forall n \in \mathcal{N}$, $\forall m \in \mathcal{M}$.
During the computation offloading process, each UE device is willing to associate with the MEC server that has the best channel conditions to obtain a high offloading rate. Therefore, a preference function of UE n for each MEC server is established:
$$\psi_n(m) = h_{mn}^k. \tag{12}$$
Each UE device can calculate preferences for the MEC servers and generate a preference list based on the above equation.
In an effort to pair more UE devices that have identical reusable tasks with the same MEC server, a similarity score is introduced as a metric. For example, the similarity score $G_{i,n}$ between UE i and UE n can be calculated as follows: first, the number of similar tasks in the task lists of UE i and UE n is counted; then, $G_{i,n}$ is the ratio of the number of similar tasks to the total number of tasks of UE i. Before starting the matching algorithm, we calculate the similarity scores among UE devices. These scores are incorporated into the preference function of MEC servers for UE; in this way, they influence the preferences of UE when choosing the same MEC server, thereby enabling more UE devices with identical reusable tasks to be paired with the same MEC server. The preference function of MEC server m for each UE device reflects that the server prefers UE with good channel conditions and low remaining energy, which prioritizes the tasks of UE devices with low remaining energy levels to save their energy; the similarity score term matches more UE devices with the same tasks to the same MEC server to achieve further energy savings. The preference function of MEC server m for UE n can be calculated as

$$\psi_m(n) = (1 - \beta) \frac{h_{mn}^k}{\bar{\varepsilon}_n} + \beta G_{i,n}, \tag{13}$$

where $\bar{\varepsilon}_n$ is the remaining energy level of UE n and $\beta$ is a weighting factor.
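The similarity score and the server preference function (13) can be sketched in a few lines. Task lists are represented here as lists of task-type identifiers, and the default `beta` is a placeholder, not a value from the paper.

```python
def similarity_score(tasks_i, tasks_n):
    """G_{i,n}: the ratio of the number of task types shared by UE i and
    UE n to the total number of tasks in UE i's task list, as described
    above. Returns 0.0 for an empty task list."""
    if not tasks_i:
        return 0.0
    shared = sum(1 for t in tasks_i if t in set(tasks_n))
    return shared / len(tasks_i)

def server_preference(h_mnk, energy_remaining, G, beta=0.5):
    """Preference of MEC server m for UE n per Eq. (13): channel gain
    weighted by the inverse of the remaining energy level (favoring
    low-energy UE) plus the similarity term, mixed by the weighting
    factor beta."""
    return (1 - beta) * h_mnk / energy_remaining + beta * G
```

Note the asymmetry of the score: $G_{i,n}$ is normalized by UE i's list length, so $G_{i,n}$ and $G_{n,i}$ can differ when the two task lists have different sizes.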
Algorithm 1 summarizes the steps of the similarity-based matching offloading strategy. Initially, no UE devices or MEC servers are matched, so the unmatched UE set $\mathcal{N}_u$ and the unmatched MEC server set $\mathcal{M}_u$ are initialized to all UE and all MEC servers, respectively, and the similarity score $G_{i,n}$ is calculated for each pair of UE devices i and n. In the matching process, every unmatched UE device builds its preference list based on (12) and sends an offloading request to the first MEC server on its list. Subsequently, each unmatched MEC server $m \in \mathcal{M}_u$ counts the number of offloading requests it has received, $C_m$, and records the set of requesting UE devices as $\mathcal{MC}_m$. If $C_m$ does not exceed $A_m$, the maximum number of UE devices that can access MEC server m, then all UE in $\mathcal{MC}_m$ are allowed to communicate with MEC server m for task offloading. Otherwise, MEC server m builds its preference list based on (13); the first $A_m$ UE devices in $\mathcal{MC}_m$ are allowed to offload their tasks, and the offloading requests of the other UE in $\mathcal{MC}_m$ are rejected. Then, the unmatched UE set $\mathcal{N}_u$ and the unmatched MEC server set $\mathcal{M}_u$ are updated. The matching process finishes only when all UE devices and MEC servers are matched. Finally, the offloading indicators can be obtained from the matching result $A^*$. Following the same method as in [31,35], the proposed matching algorithm can be proven to be stable.
Algorithm 1 Similarity-based matching offloading algorithm.
Input: The maximum latency requirement $T_n^{j,\max}$ for all UE to complete their tasks.
Output: Stable matching result $A^*$.
 1: Initialize the unmatched MEC server set $\mathcal{M}_u = \{1, 2, \ldots, M\}$ and the unmatched UE set $\mathcal{N}_u = \{1, 2, \ldots, N\}$. Calculate the similarity score $G_{i,n}$ between each pair of UE $i \in \mathcal{N}_u$ and UE $n \in \mathcal{N}_u$.
 2: while $\mathcal{N}_u \neq \emptyset$ do
 3:     for all unmatched UE $n \in \mathcal{N}_u$ do
 4:         UE n builds its preference list $PL_n$ based on (12) in descending order;
 5:         UE n sends an offloading request to the first MEC server in $PL_n$;
 6:     end for
 7:     while $\mathcal{M}_u \neq \emptyset$ do
 8:         for all unmatched MEC servers $m \in \mathcal{M}_u$ do
 9:             Count the number of requests received by MEC server m as $C_m$ and denote the set of these UE devices as $\mathcal{MC}_m$;
10:             if $C_m \le A_m$ then
11:                 MEC server m allows these $C_m$ UE devices to offload their tasks;
12:                 Remove MEC server m from $\mathcal{M}_u$ and from every $PL_n$, and remove these $C_m$ UE devices from $\mathcal{N}_u$;
13:             end if
14:             if $C_m > A_m$ then
15:                 MEC server m updates its preference list $PL_m$ in descending order according to (13) and allows the first $A_m$ UE devices in $\mathcal{MC}_m$ to offload their tasks;
16:                 Remove MEC server m from $\mathcal{M}_u$ and from every $PL_n$, and remove the accepted $A_m$ UE devices from $\mathcal{N}_u$;
17:             end if
18:         end for
19:     end while
20:     For all UE devices $n \in \mathcal{N}$ matched with MEC server m, set $a_{mn} = 1$;
21: end while

4.2. Cooperation-Based Resource Allocation Algorithm

To better achieve the purpose of cooperative offloading, we adopt the agglomerative variant of hierarchical clustering to form clusters of UE devices with the same type of reusable task in the current time slot. The agglomerative hierarchical clustering algorithm builds clusters by iteratively merging the pair of clusters with the smallest inter-cluster distance. In this paper, each UE device initially forms its own cluster, and the set of clusters is denoted by $\mathcal{D} = \{1, 2, \ldots, D\}$. $\mathcal{N}_m$ denotes the set of $A_m$ UE devices connected to MEC server $m$, and $d_i$ represents the cluster formed by UE $i \in \mathcal{N}_m$. The distance is defined in terms of the task similarity between UE devices. We adopt the merging rule based on the shortest distance and use the task types $g_{i,j}$ and $g_{n,j}$ of UE $i$ and UE $n$ in the current time slot to calculate the distance $l_{i,n}$. The shortest distance among all clusters is denoted by $l_{\min}$, and the stopping threshold is $l_{\max}$. If the task types $g_{i,j}$ and $g_{n,j}$ of UE $i$ and UE $n$ are the same, then $l_{i,n} = l_{\min}$. When $l_{\min} < l_{\max}$, the clusters $d_i$ and $d_n$ containing UE $i$ and UE $n$ are merged into $d_{in}$. Here, we introduce the cooperation-based resource allocation algorithm, as shown in Algorithm 2.
Algorithm 2 Cooperation-based resource allocation algorithm.
Input: The stable matching result $A^*$.
Output: The subchannel allocation result $Q^*$ and the computing resource allocation result $F^*$.
1: for all MEC servers $m \in \mathcal{M}$ do
2:  for all UE devices $i \in \mathcal{N}_m$ do
3:   If the task types $g_{i,j}$ and $g_{n,j}$ of UE $i$ and UE $n$ are the same, then $l_{i,n} = l_{\min}$;
4:   while $l_{\min} < l_{\max}$ do
5:    Merge the clusters $d_i$ and $d_n$ containing UE $i$ and UE $n$ into $d_{in}$;
6:    Compare the task type $g_{in,j}$ of $d_{in}$ with the task types $g_{c,j}$ of all remaining clusters $d_c$, $c \in \mathcal{N}_m$, $c \neq i, n$;
7:    Remove UE $n$ from $\mathcal{N}_m$;
8:   end while
9:  end for
10:  for all clusters $d_i \in \mathcal{D}_m$ do
11:   Allocate resources to cluster $d_i$ according to (14) and (18);
12:   Select the cluster head $d_{i,h}$ according to (15);
13:   Allocate resources to the cluster head $d_{i,h}$ according to (16) and (19);
14:   Allocate resources to the cluster members $d_{i,o}$ according to (17) and (20);
15:  end for
16: end for
Each MEC server allocates the subchannels to each cluster according to the ratio of the number of UE devices. The number of subchannels allocated to cluster d i can be calculated by
$$Q_{d_i} = K \times \frac{S_{d_i}}{\sum_{i=1}^{D} S_{d_i}},$$
where $K$ is the number of subchannels in the system, and $S_{d_i}$ is the number of UE devices in cluster $d_i$.
All of the UE devices $n \in d_i$ are sorted based on the product of channel gain and residual energy, and the first UE device is selected as the cluster head $d_{i,h}$. The sorting metric is as follows:
$$\psi_{d_i}^{n} = h_{mn}^{k} \, \bar{\varepsilon}_n.$$
In this way, the selected cluster head is the UE device with both high channel gain and high remaining energy, which helps to reduce the total energy consumption and delay while also reducing the energy consumption of UE with low remaining energy levels.
This strategy allocates subchannels to the cluster head and the remaining UE devices according to the ratio between the total task size sent by the cluster head and the task sizes sent by the other UE devices in the cluster. The number of subchannels allocated to the cluster head $d_{i,h}$ of cluster $d_i$ can be calculated by
$$Q_{d_i,h} = Q_{d_i} \times \frac{L_{d_i,main} + L_{d_i,h,in}}{L_{d_i,main} + \sum_{s=1}^{S_{d_i}} L_{d_i,s,in}},$$
where $L_{d_i,main}$ is the size of the main code of the task shared by all UE in cluster $d_i$, $L_{d_i,h,in}$ is the task input parameter of the cluster head $d_{i,h}$, and $\sum_{s=1}^{S_{d_i}} L_{d_i,s,in}$ is the sum of the task input parameters of all UE in cluster $d_i$.
The number of subchannels Q d i , o allocated to the cluster member UE d i , o in cluster d i can be expressed as
$$Q_{d_i,o} = Q_{d_i} \times \frac{L_{d_i,o,in}}{L_{d_i,main} + \sum_{s=1}^{S_{d_i}} L_{d_i,s,in}},$$
where $L_{d_i,o,in}$ is the task input parameter of the cluster member UE $d_{i,o}$ in cluster $d_i$.
The number of computing resources allocated to cluster d i can be calculated by
$$F_{d_i} = F_m \times \frac{S_{d_i}}{\sum_{i=1}^{D} S_{d_i}},$$
where $F_m$ denotes the total computing resources of MEC server $m$.
The number of computing resources allocated to the cluster head d i , h of cluster d i can be calculated by
$$F_{d_i,h} = F_{d_i} \times \frac{L_{d_i,main} + L_{d_i,h,in}}{L_{d_i,main} + \sum_{s=1}^{S_{d_i}} L_{d_i,s,in}}.$$
The number of computing resources allocated to the cluster member UE d i , o in cluster d i can be expressed as
$$F_{d_i,o} = F_{d_i} \times \frac{L_{d_i,o,in}}{L_{d_i,main} + \sum_{s=1}^{S_{d_i}} L_{d_i,s,in}}.$$

4.3. CCCP-Based Power Allocation Algorithm

After completing the computation offloading and resource allocation, the problem becomes
$$\min_{P_n^k} \ \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \frac{P_n^k L_n^j}{\sum_{k=1}^{K} B_0 \log_2 \left( 1 + \frac{P_n^k h_{mn}^k}{\sigma^2 + I_m^k} \right)} \quad \text{s.t. } C4, C5, C6.$$
To better accomplish the power optimization, the problem is rewritten as follows:
$$\min_{P_n^k} \ \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \varphi_n^k \quad \text{s.t. } C4, C5, C6, \quad C7: \ \frac{P_n^k L_n^j}{B_0 \log_2 \left( 1 + \frac{P_n^k h_{mn}^k}{\sigma^2 + I_m^k} \right)} \leq \varphi_n^k.$$
Theorem 1 is introduced to make the above problem tractable.
Theorem 1.
If $(P^*, \varphi^*)$ is the optimal solution of the problem, then there exist $\eta_n^{k*}$, $n = 1, \ldots, N$, $k = 1, \ldots, K$, such that when $\eta = \eta^*$ and $\varphi = \varphi^*$, $P^*$ is a solution to the following problem:
$$\min_{P_n^k} \ \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k \left[ \varphi_n^k B_0 \log_2 \left( 1 + \frac{P_n^k h_{mn}^k}{\sigma^2 + I_m^k} \right) - P_n^k L_n^j \right] \quad \text{s.t. } C4, C5, C6, C7,$$
and when η = η * and φ = φ * , P * also satisfies the following equations:
$$\varphi_n^k = \frac{P_n^k L_n^j}{R_n}, \quad \eta_n^k = \frac{1}{R_n}, \quad \forall n \in \mathcal{N}, \ k \in \mathcal{K}.$$
According to Theorem 1, the optimal solution of problem (21) can be obtained by solving (23). If the solution of (23) is unique, then it is also the optimal solution of problem (22). Next, we solve (23).
After rearranging the objective function in (23), we obtain
$$U(P) = U_{cave1}(P) - U_{cave2}(P),$$
where the functions on the right side of the equation are
$$U_{cave1}(P) = \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k \varphi_n^k B_0 \log_2 \left( \sum_{t=1}^{N} P_t^k h_{mt}^k + \sigma^2 \right)$$
and
$$U_{cave2}(P) = \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k \varphi_n^k B_0 \log_2 \left( \sigma^2 + I_m^k \right) + \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k P_n^k L_n^j.$$
In addition, the non-convex constraint condition $C6$ can be transformed into its equivalent convex linear form $C6'$ through mathematical operations:
$$C6': \ P_n^k h_{mn}^k + \left( 1 - 2^{\frac{L_n^j}{B_0 \left( T_{nj}^{\max} - C_n^j / f_n \right)}} \right) \left( \sigma^2 + I_m^k \right) \geq 0.$$
Therefore, problem (23) is equivalent to
$$\min_{P_n^k} \ U(P) \quad \text{s.t. } C4, C5, C6', C7.$$
The solution of problem (21) can be obtained by solving (27). Since $U_{cave1}(P)$ and $U_{cave2}(P)$ are both concave functions with respect to $P$, the objective in (25) has a difference-of-convex structure. Moreover, since $U_{cave2}(P)$ is differentiable, the CCCP algorithm can be used to solve problem (27). The main idea of the CCCP algorithm is to replace $U_{cave2}(P)$ with its first-order (linear) approximation at each iteration. By introducing Theorem 2, the above problem can be solved.
Theorem 2.
Problem (27) can be solved by solving the following sequential convex programming problem:
$$P^{(l+1)} = \arg\min_{P_n^k} \ U_{cave1}(P) - P^T \nabla U_{cave2}\big(P^{(l)}\big),$$
where $P^T$ is the transpose of $P$, and $\nabla U_{cave2}(P^{(l)}) = \big[ \nabla_1^{(l)}, \nabla_2^{(l)}, \ldots, \nabla_K^{(l)} \big]$ represents the gradient of $U_{cave2}(P)$ at $P^{(l)}$, with
$$\nabla_k^{(l)} = \sum_{i=1, i \neq n}^{N} \frac{\eta_i^k \varphi_i^k B_0 h_{mi}^k / \ln 2}{\sum_{t=1, t \neq i}^{N} P_t^k h_{mt}^k + \sigma^2} + \eta_n^k L_n^j \triangleq D_n^k \big( \eta_n^k, \varphi_n^k \big).$$
Since (28) is a convex optimization problem, classical convex optimization algorithms can be used to solve it. The Lagrangian function constructed for (28) is
$$\mathcal{L}(P, \gamma, \lambda) = \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k \varphi_n^k B_0 \log_2 \left( I_m^k + \sigma^2 + P_n^k h_{mn}^k \right) - \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} P_n^k D_n^k \big( \eta_n^k, \varphi_n^k \big) + \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_n^k \left[ P_n^k h_{mn}^k + \left( 1 - 2^{\frac{L_n^j}{B_0 \left( T_{nj}^{\max} - C_n^j / f_n \right)}} \right) \left( \sum_{i=1, i \neq n}^{N} P_i^k h_{mi}^k + \sigma^2 \right) \right] - \frac{1}{N} \sum_{n=1}^{N} \lambda_n \left( \sum_{k=1}^{K} P_n^k - P_n^{\max} \right),$$
where $\gamma$ and $\lambda$ are the Lagrange multipliers. Setting the partial derivative $\partial \mathcal{L}(P, \gamma, \lambda) / \partial P_n^k = 0$ gives
$$\frac{B_0 \eta_n^k \varphi_n^k}{\ln 2} \times \frac{h_{mn}^k}{I_m^k + \sigma^2 + P_n^k h_{mn}^k} - D_n^k \big( \eta_n^k, \varphi_n^k \big) - \lambda_n + \gamma_n^k h_{mn}^k + \sum_{i=1, i \neq n}^{N} \gamma_i^k h_{mi}^k \left( 1 - 2^{\frac{L_n^j}{B_0 \left( T_{nj}^{\max} - C_n^j / f_n \right)}} \right) = 0.$$
The optimum power is obtained as
$$P_n^{k*} = \left[ \frac{B_0 \eta_n^k \varphi_n^k / \ln 2}{D_n^k \big( \eta_n^k, \varphi_n^k \big) + \lambda_n - \mu_n^k} - \frac{I_m^k + \sigma^2}{h_{mn}^k} \right]_0^{P_n^{\max}},$$
where $\mu_n^k = \gamma_n^k h_{mn}^k + \sum_{i=1, i \neq n}^{N} \gamma_i^k h_{mi}^k \left( 1 - 2^{\frac{L_n^j}{B_0 \left( T_{nj}^{\max} - C_n^j / f_n \right)}} \right)$.
The multipliers $\gamma$ and $\lambda$ are updated by the subgradient method:
$$\gamma_n^k(l+1) = \left[ \gamma_n^k(l) - \xi_\gamma(l) \times \left( P_n^k h_{mn}^k + \left( 1 - 2^{\frac{L_n^j}{B_0 \left( T_{nj}^{\max} - C_n^j / f_n \right)}} \right) \left( I_m^k + \sigma^2 \right) \right) \right]^+,$$
$$\lambda_n(l+1) = \left[ \lambda_n(l) - \xi_\lambda(l) \times \left( P_n^{\max} - \sum_{k=1}^{K} P_n^k(l) \right) \right]^+,$$
where $\xi_\gamma(l)$ and $\xi_\lambda(l)$ represent the step sizes of $\gamma$ and $\lambda$ at the $l$-th iteration, $l \in \{1, 2, \ldots, L_{\max}\}$, and $L_{\max}$ is the maximum number of iterations.
The step sizes $\xi_\gamma(l)$ and $\xi_\lambda(l)$ should satisfy the following conditions:
$$\sum_{l=1}^{\infty} \xi_i(l) = \infty, \quad \lim_{l \to \infty} \xi_i(l) = 0, \quad i \in \{\gamma, \lambda\}.$$
It can be proven that (31) is standard. Therefore, for any initial power value, the algorithm converges to a unique value. A unique solution can be obtained from (30), so solving problem (27) is equivalent to solving (30). Algorithm 3 shows the steps of the proposed power allocation algorithm, and the overall procedure of the SCORA algorithm is summarized in Algorithm 4.
Algorithm 3 CCCP-based power allocation algorithm.
Input: The maximum time delay requirement $T_{nj}^{\max}$ for UE $n$ to complete the task.
Output: The optimal power $P^*$.
1: Initialize $l = 0$ and $\varepsilon > 0$. Set $P^{(0)}$ and calculate $\eta_n^k(0)$ and $\varphi_n^k(0)$ for all UE devices and MEC servers;
2: while $\left| \frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \eta_n^k(l) \left[ \varphi_n^k(l) B_0 \log_2 \left( 1 + \frac{P_n^k(l) h_{mn}^k}{\sigma^2 + I_m^k} \right) - P_n^k(l) L_n^j \right] \right| \geq \varepsilon$ and $l < L_{\max}$ do
3:  Use CCCP to solve problem (27), and obtain the optimal power $P_n^{k*}$ according to (31);
4:  Update $\eta_n^k(l+1)$ and $\varphi_n^k(l+1)$ according to (24);
5:  Update $l = l + 1$;
6: end while
Algorithm 4 SCORA algorithm.
Input: The maximum time delay requirement $T_{nj}^{\max}$ for UE $n$ to complete the task.
Output: Stable matching result $A^*$, the subchannel allocation result $Q^*$, the computing resource allocation result $F^*$, and the optimal power $P^*$.
1: Initialize the UE device set $\mathcal{N}$, the MEC server set $\mathcal{M}$, the subchannel set $\mathcal{K}$, and the maximum time delay requirement $T_{nj}^{\max}$ for each UE device $n \in \mathcal{N}$;
2: Obtain the stable UE device and MEC server matching $A^*$ according to Algorithm 1;
3: Use Algorithm 2 to obtain the subchannel allocation result $Q^*$ and the computing resource allocation result $F^*$;
4: Use Algorithm 3 to calculate the optimal power $P^*$.

4.4. Complexity Analysis

Here, we analyze the complexity of Algorithm 4, which mainly consists of the complexities of Algorithms 1–3. In Algorithm 1, the complexity of computing the similarity scores of the UE devices is $O(N^2)$. The complexity of each UE device sorting the $M$ MEC servers in its preference list is $O(M \log M)$, so the sorting complexity for $N$ UE devices is $O(NM \log M)$. In the worst case, each MEC server receives $N$ requests from UE devices, and the sorting complexity for $M$ MEC servers is $O(MN \log N)$. Thus, the complexity of Algorithm 1 in the worst case is $O(N^2 + MN \log MN)$. In Algorithm 2, for each MEC server, the complexity of comparing task types among $A_m$ UE devices and performing the merge operations in the worst case is $O(A_m^2)$. Since these steps are performed for all MEC servers, the complexity of Algorithm 2 is $O(M A_m^2)$. In Algorithm 3, updating $(\eta, \varphi)$ and $(\gamma, \lambda)$ requires $L_{\eta\varphi}^{it}$ and $L_{\gamma\lambda}^{it}$ iterations in the worst case, respectively. Hence, the complexity of Algorithm 3 is $O(KMN L_{\eta\varphi}^{it} L_{\gamma\lambda}^{it} \log(1/\varepsilon))$. Therefore, the overall complexity of the proposed SCORA algorithm is $O(N^2 + MN \log MN + M A_m^2 + KMN L_{\eta\varphi}^{it} L_{\gamma\lambda}^{it} \log(1/\varepsilon))$.

5. Simulation Results

5.1. Simulation Setting

In this section, we perform numerical simulations to evaluate the performance of the proposed scheme. The simulation code is written in the C language and compiled with Visual Studio 2010. The MEC servers and UE are randomly located in a 100 × 100 m2 square area [31], and the simulation parameter settings can be found in Table 2. In the subsequent simulations, the ratio of MEC servers to UE devices is 1:5 unless otherwise specified. To verify the performance of the proposed SCORA algorithm, the following six benchmark schemes are introduced:
  • Base scheme: UE devices offload tasks to MEC servers with optimal channel conditions and use maximum transmission power. Subchannels are randomly allocated to UE, and computing resources are equally allocated to UE. Compared with this scheme, the advantages of cooperative task offloading and resource allocation in the proposed SCORA scheme can be demonstrated;
  • Non-similarity offloading and power allocation (Non-SOPA) scheme: Compared to the SCORA scheme, this scheme does not consider the task similarity and cooperation-based resource allocation strategy. UE devices offload the tasks to MEC servers with optimal channel conditions, and the transmission power of UE is optimized through the CCCP method. The subchannels and computing resources are respectively allocated to UE devices randomly and equally. In contrast to this scheme, the necessity of considering similarity when offloading tasks and allocating resources can be proven in the SCORA scheme;
  • Similarity-based cooperative offloading and CCCP (SCOC) scheme: Compared with the SCORA scheme, this scheme does not consider the cooperation-based resource allocation strategy; its resource allocation is the same as in the Non-SOPA scheme. Contrasted against this scheme, the necessity of the cooperation-based resource allocation strategy in the SCORA scheme can be demonstrated;
  • Similarity-based cooperative offloading and subchannel allocation (SCOSA) scheme: Compared to the SCORA scheme, this scheme does not consider the CCCP-based power allocation algorithm. UE devices offload tasks to MEC servers with maximum transmission power. In this way, the influence of UE power control on system performance can be reflected;
  • Non-energy similarity-based cooperative offloading and resource allocation (Non-ESCORA) scheme: Compared to the SCORA scheme, this scheme does not take into account the UE’s remaining energy levels in either offloading strategy or cluster head selection. Compared to this scheme, the benefits of reducing the energy consumption of UE with low remaining energy levels can be demonstrated;
  • Coalitional game-based cooperative offloading (CGCO) scheme [4]: UE devices cooperate to form coalitions to offload their tasks to the MEC servers, and cluster head UE devices are randomly selected. The resource allocation algorithms are the same as in the SCORA scheme. Compared to this scheme, the performance of the cooperative scheme proposed in the SCORA scheme can be evaluated.
Table 2. Simulation parameters.
  • The number of MEC servers $M$: 20–50
  • The number of UE devices $N$: 100–250
  • The maximum number of UE devices allowed to associate with each MEC server $A_m$: 5
  • The similarity weight $\beta$: 0.5
  • The size of the main code of task $j$, $L_{nj,main}$: 1–10 MB [5]
  • The size of the task $j$ input parameter for UE $n$, $L_{nj,in}$: 0.001–0.01 MB [5]
  • The number of subchannels $K$: 10
  • The bandwidth of each subchannel $B_0$: 1 MHz
  • The maximum transmission power of each UE device $P_n^{\max}$: 23 dBm [31]
  • The maximum deadline for task completion $T_n^{\max}$: 5 s [20]
  • Noise power spectral density: −174 dBm/Hz [20]
  • The CPU cycles required to process each bit of the task: 30 cycles/bit
  • The computing resources of each MEC server: $2.5 \times 10^{10}$ cycles/s [5]
  • Path loss between MEC server $m$ and UE $n$: $128.1 + 37.6 \log_{10} d_{mn}$ [36]

5.2. Numerical Results

Figure 4 shows the convergence of the proposed SCORA scheme on different subchannels when there are 20 MEC servers and 100 UE devices. It can be seen that this SCORA algorithm can converge within 20 iterations.
Figure 5 illustrates the similarity-based matching and clustering results when there are 25 UE devices and 5 MEC servers in the system. In Figure 5, UE devices within one circle are assigned to the same cluster, UE devices in circles of different colors have different reusable tasks, and the associations between UE and MEC servers are denoted by dashed arrows. It can be seen that a large number of UE devices are connected to their nearest MEC servers. However, there are a few UE devices that are not matched to the nearest MEC servers. For example, instead of connecting to the nearest option, MEC server 1, UE 12 connects to MEC server 2. This is because UE 12 has the same task type as UE 17 and UE 18. Therefore, they form one cluster in order to conduct cooperative offloading.
Figure 6 presents the impacts of different similarity score weights $\beta$ when the number of MEC servers ranges from 20 to 50. It can be seen that the average energy consumption and latency both increase with the number of MEC servers. This is because the number of UE devices scales linearly with the number of MEC servers, resulting in more energy consumption and more severe interference, which reduces the task uploading rate. In addition, it can be observed that when the number of MEC servers is less than 40, $\beta = 0.2$ yields better performance than $\beta = 0.8$. This is due to the fact that UE devices offload their tasks to the MEC servers with better channel conditions and obtain a high task offloading rate when $\beta = 0.2$. However, when the number of MEC servers is more than 40, fewer UE devices tend to cooperate and more UE devices need to offload their main codes to the MEC servers when $\beta = 0.2$, leading to greater latency and more energy consumption than when $\beta = 0.8$. Moreover, from Figure 6, we can see that the average energy consumption and latency when $\beta = 0.5$ are lower than those when $\beta = 0.2$ and $\beta = 0.8$. This is because the similarity score and channel gain are balanced when $\beta = 0.5$; thus, UE devices with good channel conditions can cooperate to offload their tasks.
Figure 7a and Figure 7b respectively show the average energy consumption and delay when the maximum number of UE devices allowed to associate with each MEC server changes from 5 to 10. In this simulation, the total number of UE devices in the MEC system is 100, and the number of MEC servers is 20. It can be seen that as the maximum number of UE devices grows, both energy consumption and latency gradually increase. The reason for this is that more cooperative UE devices offload their tasks to the same MEC server, leading to an increase in the number of cooperating clusters and increased competition for spectrum resource, resulting in an increase in the average energy consumption and delay. Therefore, in the subsequent simulations, we set the maximum number of UE devices that can access each MEC server A m to 5.
Figure 8 shows the energy consumption of cluster head UE devices in the SCORA and Non-ESCORA schemes, and the remaining energy levels of the cluster head UE devices are presented in Table 3. For the sake of clarity, Table 3 lists only the cluster head UE devices that differ between the two schemes, omitting those that are the same in both. From Figure 8 and Table 3, it can be seen that the cluster head UE devices consume more energy than the non-cluster-head UE in both schemes. The cluster head UE devices chosen by the SCORA scheme have more remaining energy than those selected by the Non-ESCORA scheme. The reason for this is that the remaining energy factor is considered in the SCORA scheme, which helps to reduce the energy consumption of UE with low remaining energy levels.
Figure 9 depicts the average energy consumption of UE under a varying number of MEC servers. It can be seen that the average energy consumption increases with the number of MEC servers due to the increase in the number of UE devices. Compared to the Base and SCOSA schemes, the proposed SCORA algorithm can reduce the average energy consumption by 51.52% and 1.24%, respectively, when the number of MEC servers is 20. This proves that the proposed offloading strategy, resource allocation strategy, and CCCP-based power allocation are efficient in saving UE energy. However, since the remaining energy levels of the UE are taken into account in both the matching strategy and the cluster head selection, many UE devices in the proposed scheme are matched with MEC servers that do not offer the best channel conditions, as the trade-off between the UE's remaining energy level and channel conditions is considered when choosing the cluster head UE. Therefore, the SCORA scheme consumes more energy than the Non-ESCORA scheme. Although the CGCO scheme adopts cooperative task offloading, it exhibits higher average energy consumption than the SCORA scheme due to its random cluster head selection strategy.
Figure 10 shows the average latency of UE when the number of MEC servers ranges from 20 to 50. It can be seen that the average latency of UE in all schemes increases with the growth of the MEC server density due to the severe interference caused by the increase in the number of UE devices. Moreover, we can see that our proposed scheme is able to obtain lower delay than others. Specifically, compared to the Base and SCOSA schemes, the latency of UE in the SCORA scheme is reduced by 0.79% and 55.83%, respectively, when the number of MEC servers is 20. This is due to the fact that more subchannels and computing resources are assigned to the cluster head UE devices, resulting in a higher task transmission rate. From Figure 9 and Figure 10, we can see that the SCORA scheme obtains good performance in terms of UE energy savings and task latency reduction. Therefore, our proposed SCORA scheme can improve the efficiency of the entire system.
Figure 11 depicts the average energy consumption of UE given a varying number of UE devices when the number of MEC servers is 20. It can be seen that average energy consumption increases with the growth of the number of UE devices due to the fact that more tasks need to be offloaded. In addition, we can observe that the proposed SCORA scheme performs better in reducing the UE’s energy consumption compared to other schemes such as Base, Non-SOPA, SCOC, SCOSA, and CGCO. This indicates that the SCORA scheme effectively improves the resource utilization efficiency and reduces the overall energy consumption of the UE by jointly optimizing the task offloading, resource allocation, and power allocation. However, the SCORA scheme results in higher energy consumption than Non-ESCORA. The reason for this is that UE devices with more remaining energy are selected as the cluster head UE to transmit the main codes to the MEC servers in the SCORA scheme, while the UE devices with the best channel conditions are chosen as the cluster head UE in the Non-ESCORA scheme. Therefore, the UE in the SCORA scheme should upload the main codes with higher power in order to meet the UE devices’ latency requirements. In Figure 12, although the average latency in the SCORA scheme is reduced compared to the Non-ESCORA scheme, the energy consumption is increased due to the use of higher transmission power.
The effect of the UE density on the average latency is evaluated in Figure 12, where the number of MEC servers is 20. It can be observed that the average latency of UE in all schemes increases with the growth of the UE density due to the severe interference caused by the increase in the number of UE devices. Compared to all the other schemes, the SCORA scheme attains the lowest average delay, which is mainly due to the fact that the SCORA scheme allocates more subchannels and computing resources to UE devices with the same tasks through a cooperative offloading strategy, thus improving the speed of task transmission and processing. From Figure 11 and Figure 12, we can see that as the number of UE devices increases, compared with other schemes, the SCORA scheme can significantly reduce the average energy consumption and latency of UE. Therefore, our proposed SCORA scheme can work well in a relatively dense edge computing system (DECS).
Table 4 shows the energy consumption and latency data for each scheme and the energy consumption and latency gains obtained by the proposed SCORA compared to the six benchmark schemes when the number of MEC servers is 20, and the number of UE devices is 100. It can be seen that the proposed SCORA algorithm has significant advantages for energy consumption and latency reduction.

6. Conclusions

In this study, we focus on addressing the energy consumption challenge in DECS with reusable tasks while considering the task similarity between UE devices. By jointly optimizing task offloading, resource allocation, and power allocation, the problem is formulated as an MINLP problem, and the SCORA scheme is proposed to solve it. This strategy aims to minimize the total energy consumption of all UE while considering the remaining energy levels of UE. In the SCORA scheme, the similarity-based matching offloading strategy is developed to solve the task offloading subproblem, the cooperation-based resource allocation strategy is proposed to solve the subchannel and computing resource allocation subproblem, and the CCCP-based power allocation strategy is proposed to solve the UE transmission power optimization subproblem. Through extensive simulation experiments, the SCORA scheme was compared to six benchmark approaches. The results demonstrate that the SCORA scheme significantly outperforms existing schemes in terms of reducing the energy consumption of UE. Notably, compared to the scheme neglecting UE's remaining energy levels, the SCORA scheme exhibits remarkable advantages in terms of energy conservation for UE with low remaining energy levels, highlighting its adaptability to UE heterogeneity. These findings indicate that the proposed SCORA scheme can provide an efficient and practical solution for joint task offloading and resource allocation in DECS. Since task offloading in a dynamic environment is more complicated, this work considers computation offloading in a static environment and assumes that each user's task is indivisible. In the future, with the help of deep reinforcement learning methods, our work will be expanded to intelligent task offloading and resource allocation in a dynamic MEC network environment where UE mobility is taken into account.

7. Patents

An invention patent entitled “A Task Similarity-Based Task Offloading Method, System, and Storage Medium” based on the content of this manuscript has been submitted to the China National Intellectual Property Administration. The invention patent application number is 202510264933.X.

Author Contributions

Conceptualization, H.M. and S.W.; methodology, H.M. and S.W.; software, H.M.; validation, P.H.; formal analysis, H.M.; investigation, J.C.; resources, W.W.; data curation, J.C.; writing—original draft preparation, H.M.; writing—review and editing, S.W.; visualization, W.W.; supervision, P.H.; project administration, P.H.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chinese National Natural Science Foundation (Grant 62201491) and the Natural Science Foundation of Shandong Province (Grant ZR2021QF097).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the editor and all reviewers for their valuable comments and efforts on this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MEC: Mobile edge computing
CCCP: Concave–convex procedure
IoT: Internet of Things
UE: User equipment
QoE: Quality of experience
ETSI: European Telecommunications Standards Institute
MCC: Mobile cloud computing
CPU: Central processing unit
DECS: Dense edge computing systems
UDN: Ultra-dense network
CSBL: Contextual sleeping bandit learning
CSBL-M: CSBL-multiple
PSO: Particle swarm optimization
EH: Energy harvesting
SBSs: Small base stations
NOMA: Non-orthogonal multiple access
WPT: Wireless power transfer
UAV: Unmanned aerial vehicle
OFDMA: Orthogonal frequency division multiple access
MINLP: Mixed-integer nonlinear programming
SCORA: Similarity-based cooperative offloading and resource allocation
Non-SOPA: Non-similarity offloading and power allocation
SCOC: Similarity-based cooperative offloading and CCCP
SCOSA: Similarity-based cooperative offloading and subchannel allocation
Non-ESCORA: Non-energy similarity-based cooperative offloading and resource allocation
CGCO: Coalitional game-based cooperative offloading

References

1. Abbas, N.; Zhang, Y.; Taherkordi, A.; Skeie, T. Mobile edge computing: A survey. IEEE Internet Things J. 2017, 5, 450–465.
2. Hu, Y.C.; Patel, M.; Sabella, D.; Sprecher, N.; Young, V. Mobile edge computing—A key technology towards 5G. ETSI White Pap. 2015, 11, 1–16.
3. 3GPP. Technical Specification Group Services and System Aspects; System Architecture for the 5G System (5GS); Stage 2 (Release 15); Technical Specification TR 23.501; 3GPP: Sophia Antipolis, France, 2022.
4. Yang, X.; Luo, H.; Sun, Y.; Zou, J.; Guizani, M. Coalitional game-based cooperative computation offloading in MEC for reusable tasks. IEEE Internet Things J. 2021, 8, 12968–12982.
5. Yang, X.; Luo, H.; Sun, Y.; Obaidat, M.S. Energy-efficient collaborative offloading for multiplayer games with cache-aided MEC. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–7.
6. Li, B.; Cai, H.; Zhao, C.; Wang, J. Energy Optimization for Computing Reuse in Unmanned Aerial Vehicle-assisted Edge Computing Systems. J. Electron. Inf. Technol. 2024, 46, 2740–2747.
7. Al-Hammadi, I.; Li, M.; Islam, S.M. Independent tasks scheduling of collaborative computation offloading for SDN-powered MEC on 6G networks. Soft Comput. 2023, 27, 9593–9617.
8. Huang, X.; Lei, B.; Ji, G.; Zhang, B. Energy criticality avoidance-based delay minimization ant colony algorithm for task assignment in mobile-server-assisted mobile edge computing. Sensors 2023, 23, 6041.
9. Li, Z.; Zhu, Q. An offloading strategy for multi-user energy consumption optimization in multi-MEC scene. KSII Trans. Internet Inf. Syst. (TIIS) 2020, 14, 4025–4041.
10. Ale, L.; Zhang, N.; Fang, X.; Chen, X.; Wu, S.; Li, L. Delay-aware and energy-efficient computation offloading in mobile-edge computing using deep reinforcement learning. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 881–892.
11. Hu, H.; Wang, Q.; Hu, R.Q.; Zhu, H. Mobility-aware offloading and resource allocation in a MEC-enabled IoT network with energy harvesting. IEEE Internet Things J. 2021, 8, 17541–17556.
12. Shu, W.; Li, Y. Joint offloading strategy based on quantum particle swarm optimization for MEC-enabled vehicular networks. Digit. Commun. Netw. 2023, 9, 56–66.
13. Zhang, H.; Liu, Z.; Hasan, S.; Xu, Y. Joint optimization strategy of heterogeneous resources in multi-MEC-server vehicular network. Wirel. Netw. 2022, 28, 765–778.
14. Li, C.; Cai, Q.; Luo, Y. Multi-edge collaborative offloading and energy threshold-based task migration in mobile edge computing environment. Wirel. Netw. 2021, 27, 4903–4928.
15. Chen, G.; Chen, Y.; Mai, Z.; Hao, C.; Yang, M.; Du, L. Incentive-based distributed resource allocation for task offloading and collaborative computing in MEC-enabled networks. IEEE Internet Things J. 2022, 10, 9077–9091.
16. Zhang, R.; Cheng, P.; Chen, Z.; Liu, S.; Vucetic, B.; Li, Y. Calibrated bandit learning for decentralized task offloading in ultra-dense networks. IEEE Trans. Commun. 2022, 70, 2547–2560.
17. Lin, Z.; Gu, B.; Zhang, X.; Yi, D.; Han, Y. Online task offloading in UDN: A deep reinforcement learning approach with incomplete information. In Proceedings of the 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, USA, 10–13 April 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1236–1241.
18. Liu, S.; Cheng, P.; Chen, Z.; Xiang, W.; Vucetic, B.; Li, Y. Contextual User-Centric Task Offloading for Mobile Edge Computing in Ultra-Dense Network. IEEE Trans. Mob. Comput. 2023, 22, 5092–5108.
19. Udupa, V.N.; Tumuluru, V.K. Clustering and Reinforcement Learning based Multi-Access Edge Computing in Ultra Dense Networks. In Proceedings of the 2023 International Conference on Artificial Intelligence and Smart Communication (AISC), Greater Noida, India, 27–29 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 903–908.
20. Zhou, T.; Zeng, X.; Qin, D.; Jiang, N.; Nie, X.; Li, C. Cost-aware computation offloading and resource allocation in ultra-dense multi-cell, multi-user and multi-task MEC networks. IEEE Trans. Veh. Technol. 2023, 73, 6642–6657.
21. Lu, Y.; Chen, X.; Zhang, Y.; Chen, Y. Cost-efficient resources scheduling for mobile edge computing in ultra-dense networks. IEEE Trans. Netw. Serv. Manag. 2022, 19, 3163–3173.
22. Xia, S.; Yao, Z.; Li, Y.; Xing, Z.; Mao, S. Distributed computing and networking coordination for task offloading under uncertainties. IEEE Trans. Mob. Comput. 2023, 23, 5280–5294.
23. Zhou, X.; Ge, S.; Qiu, T.; Li, K.; Atiquzzaman, M. Energy-efficient service migration for multi-user heterogeneous dense cellular networks. IEEE Trans. Mob. Comput. 2021, 22, 890–905.
24. Chen, Z.; Wang, F.; Zhang, X. Joint optimization for cooperative service-caching, computation-offloading, and resource-allocations over EH/MEC-based ultra-dense mobile networks. In Proceedings of the ICC 2023—IEEE International Conference on Communications, Rome, Italy, 28 May–1 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 716–722.
25. Zhang, H.; Yang, Y.; Shang, B.; Zhang, P. Joint resource allocation and multi-part collaborative task offloading in MEC systems. IEEE Trans. Veh. Technol. 2022, 71, 8877–8890.
26. Yang, X.; Luo, H.; Sun, Y.; Guizani, M. A novel hybrid-ARPPO algorithm for dynamic computation offloading in edge computing. IEEE Internet Things J. 2022, 9, 24065–24078.
27. Pham, Q.V.; Leanh, T.; Tran, N.H.; Park, B.J.; Hong, C.S. Decentralized Computation Offloading and Resource Allocation for Mobile-Edge Computing: A Matching Game Approach. IEEE Access 2018, 6, 75868–75885.
28. Wang, T.; You, C. Distributed user association and computation offloading in UAV-assisted mobile edge computing systems. IEEE Access 2024, 12, 63548–63567.
29. Yuan, X.; Tian, H.; Zhang, Z.; Zhao, Z.; Liu, L.; Sangaiah, A.K.; Yu, K. A MEC offloading strategy based on improved DQN and simulated annealing for Internet of Behavior. ACM Trans. Sens. Netw. 2022, 19, 1–20.
30. Gao, Z.; Yang, L.; Dai, Y. Large-scale cooperative task offloading and resource allocation in heterogeneous MEC systems via multiagent reinforcement learning. IEEE Internet Things J. 2023, 11, 2303–2321.
31. Wu, S.; Li, X.; Dong, N.; Liu, X. Residual Energy-Based Computation Efficiency Maximization in Dense Edge Computing Systems. Electronics 2023, 12, 4429.
32. Tariq, M.N.; Wang, J.; Raza, S.; Siraj, M.; Altamimi, M.; Memon, S. Toward Optimal Resource Allocation: A Multi-Agent DRL Based Task Offloading Approach in Multi-UAV-Assisted MEC Networks. IEEE Access 2024, 12, 81428–81440.
33. Yu, H.; Liu, J.; Hu, C.; Zhu, Z. Privacy-preserving task offloading strategies in MEC. Sensors 2022, 23, 95.
34. Wu, S.; Yin, R.; Wu, C. Heterogeneity-aware energy saving and energy efficiency optimization in dense small cell networks. IEEE Access 2020, 8, 178670–178684.
35. Du, Y.; Li, J.; Shi, L.; Liu, T.; Shu, F.; Han, Z. Two-tier matching game in small cell networks for mobile edge computing. IEEE Trans. Serv. Comput. 2019, 15, 254–265.
36. 3GPP. Further Advancements for E-UTRA Physical Layer Aspects; Technical Report TR 36.814; 3GPP: Sophia Antipolis, France, 2010.
Figure 1. System model.
Figure 2. An example of reusable task offloading. (a) Non-cooperative offloading strategy. (b) Cooperative offloading strategy.
Figure 3. Relationship between subproblems and the corresponding algorithms.
Figure 4. Convergence of the SCORA algorithm.
Figure 5. The matching and clustering results between 25 UE devices and 5 MEC servers.
Figure 6. Different similarity score weights in the proposed algorithm. (a) Average energy consumption. (b) Average latency.
Figure 7. Different maximum numbers of UE devices allowed to associate with each MEC server in the proposed algorithm. (a) Average energy consumption. (b) Average latency.
Figure 8. Comparison of the energy consumption of cluster head UE devices in different schemes.
Figure 9. Average energy consumption versus the number of MEC servers.
Figure 10. Average latency versus the number of MEC servers.
Figure 11. Average energy consumption versus the number of UE devices.
Figure 12. Average latency versus the number of UE devices.
Table 1. List of main symbols.

Symbol: Definition
M: The set of MEC servers
N: The set of UE devices
K: The set of subchannels
L_n^j: The total size of task j that is offloaded by UE n
L_n^{j,main}: The main code size of task j that is offloaded by UE n
L_n^{j,in}: The input parameter size of task j that is offloaded by UE n
C_n^j: The CPU cycles required to complete task j that is offloaded by UE n
C_n^{j,main}: The CPU cycles required to complete the main code of task j that is offloaded by UE n
C_n^{j,in}: The CPU cycles required to process the input parameters of task j that is offloaded by UE n
T_n^{j,max}: The latency requirement for UE n to complete task j
B_0: The bandwidth of each subchannel
R_n: The uplink transmission rate of UE n
T_n^{j,t}: The transmission latency of UE n for offloading task j
T_n^{j,s}: The computational latency of task j that is offloaded by UE n
E_n^{j,t}: The transmission energy consumption of UE n for offloading task j
f_n: The computing resource allocated to UE n
P_n^{max}: The maximum transmission power of UE n
Table 3. The remaining energy of cluster head UE devices in the same cluster for different schemes.

SCORA cluster head UE ID: 3, 14, 22, 31, 35, 42, 46, 48, 63, 64, 74, 89
Remaining energy levels: 0.90, 0.98, 1.00, 1.00, 0.67, 0.82, 1.00, 0.89, 1.00, 0.82, 0.46, 0.78
Non-ESCORA cluster head UE ID: 72, 45, 29, 21, 52, 27, 38, 15, 65, 36, 18, 37
Remaining energy levels: 0.78, 0.14, 0.94, 0.90, 0.18, 0.42, 0.52, 0.67, 0.32, 0.67, 0.21, 0.33
Table 4. Performance of the different schemes and the gains obtained by the proposed SCORA scheme.

Scheme: SCORA | Base | Non-SOPA | SCOC | SCOSA | Non-ESCORA | CGCO
Energy consumption (J): 1.59 | 3.28 | 2.32 | 2.36 | 1.61 | 1.25 | 1.83
Latency (s): 1.25 | 2.83 | 2.08 | 2.11 | 1.26 | 1.40 | 1.56
Energy consumption gains: × | 51.52% | 31.47% | 32.63% | 1.24% | −27.20% | 13.11%
Latency gains: × | 55.83% | 39.90% | 40.76% | 0.79% | 10.71% | 19.87%
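As a sanity check, the gain rows in Table 4 are the relative savings of SCORA over each benchmark, computed as (benchmark − SCORA) / benchmark. A minimal sketch of that calculation follows; the benchmark labels mirror the table header, which is garbled in this extraction, so the middle two dictionary keys should be treated as placeholders rather than the paper's exact scheme names.

```python
# Average energy consumption (J) per scheme, taken from Table 4.
energy = {
    "SCORA": 1.59, "Base": 3.28, "Non-SOPA": 2.32, "SCOC": 2.36,
    "SCOSA": 1.61, "Non-ESCORA": 1.25, "CGCO": 1.83,
}

def gain_percent(benchmark: float, proposed: float) -> float:
    """Relative saving of the proposed scheme over a benchmark, in percent."""
    return round(100.0 * (benchmark - proposed) / benchmark, 2)

# Gain of SCORA over every benchmark; a negative value means SCORA
# consumes more than that benchmark (as with Non-ESCORA in the table).
gains = {name: gain_percent(v, energy["SCORA"])
         for name, v in energy.items() if name != "SCORA"}
print(gains)  # e.g. Base: 51.52, Non-ESCORA: -27.2
```

Running this reproduces every percentage in the energy-gain row, including the 51.52% maximum saving over the Base scheme cited in the abstract.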
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Mu, H.; Wu, S.; He, P.; Chen, J.; Wu, W. Task Similarity-Aware Cooperative Computation Offloading and Resource Allocation for Reusable Tasks in Dense MEC Systems. Sensors 2025, 25, 3172. https://doi.org/10.3390/s25103172
