Computation Offloading Game for Multi-Channel Wireless Sensor Networks

Computation offloading for wireless sensor devices is critical to improving energy efficiency and meeting service delay requirements. However, simultaneous offloading may cause high interference, which decreases the upload rate and causes additional transmission delay. It is thus intuitive to distribute wireless sensor devices across different channels, but the multi-channel computation offloading problem is NP-hard. To solve this problem efficiently, we formulate computation offloading as a decision-making game and apply game theory to allow wireless sensor devices to make offloading decisions based on their own interests. The game considers not only the data size and computation capability of each wireless sensor device but also its channel gain, which improves the transmission rate and evenly distributes wireless sensor devices across channels. We prove that the proposed offloading game is a potential game, so a Nash equilibrium exists and the device decisions converge to it. Finally, we extensively evaluate the performance of the proposed algorithm through simulations. The results demonstrate that our algorithm reduces the number of iterations needed to reach the Nash equilibrium by 16%. Moreover, it improves the utilization of each channel, effectively increasing the number of successful offloadings and lowering the energy consumption of wireless sensor devices.


Introduction
Wireless sensor devices (WSDs) have emerged in various applications such as smart cities, remote healthcare, unmanned aerial vehicles (UAVs), and smart homes [1][2][3][4][5], where WSDs generate and transmit remote sensing data to sink nodes. Modern sensor devices of various types may generate a great volume of data [6,7]. For example, mobile devices capable of sensing and computing may act as sensors that participate in crowdsensing to collect video, images, or voice [8]. These WSDs share data and extract information for operations of common interest. While both the energy and computation capabilities of WSDs are usually limited, some environmental monitoring tasks may require low response latency [9,10]. Although the computation capabilities of WSDs keep increasing, their energy consumption remains problematic for computation-intensive tasks. Therefore, effectively performing computation-intensive tasks is an important challenge for WSDs.
To overcome this challenge, edge computing is a promising and effective approach [11,12]. With edge computing, WSDs can offload computation tasks that would otherwise be computed locally to remote edge servers with higher computation capabilities through wireless networks. The offloading can thus reduce the energy consumption of WSDs without increasing the processing delay.
Although edge computing can increase the efficiency of processing computation tasks for WSDs, the rapidly increasing number of WSDs may cause radio interference, resulting in longer communication delays when these WSDs offload their tasks to the edge server simultaneously. Multi-channel communication has been applied in several WSD applications to enhance both transmission throughput and energy efficiency [2]. Communication interference could be avoided entirely by allocating one channel to each WSD. Unfortunately, this approach is not feasible because spectrum resources are scarce and expensive. An efficient computation offloading policy based on limited spectrum resources is thus desired.
In this paper, we develop an efficient solution for the computation offloading problem based on a multi-channel computation offloading game. Game theory has been widely used to design decentralized mechanisms for decision-making problems. By modeling each WSD as a selfish agent, each agent can decide to perform its computation either locally or on the remote edge server according to its own interests. Previous proposals based on game theory [13][14][15][16] consider only computation capabilities and data size in the decision-making. Because of the selfish behavior in game theory, dynamic channel assignment may result in poor overall performance. For example, when the channel gain of a WSD is particularly large compared to the other WSDs, the WSD may monopolize a channel. To avoid this situation, our scheme incorporates the channel gain of each WSD into the channel resource allocation for the computation offloading decision problem.
The rest of the paper is organized as follows. Section 2 describes related work. In Section 3, we introduce the system model of multi-WSD offloading. Then, we formulate the centralized and decentralized mechanisms for the multi-WSD computation offloading problem in Section 4. In Section 5, we introduce the proposed algorithm based on game theory and prove the existence of a Nash equilibrium. In Section 6, the simulation results of the proposed mechanism are presented. Finally, the conclusion of this paper is provided in Section 7.

Related Works
Nowadays, research on the computation offloading decision problem of multiple wireless devices can be divided into two types, namely centralized and decentralized computation offloading. Centralized mechanisms require all devices to send their statistics to the edge servers before offloading, including data size, computing capacity, and received signal strength indicator (RSSI). After receiving the data, the edge server yields the optimal resource allocation. A previous work presented a system model of multiple basestations with built-in edge servers to serve wireless devices [17]. This work also considers communication interference and presents an approach based on the genetic algorithm to solve the decision-making problem of energy-efficient offloading. Computation tasks can also be divided into subtasks for parallel execution [18], and neighboring devices can be employed for cooperative partial offloading [19].
In the case that a single device transmits multiple independent computation requests to an edge server, another work determines the order of offloading tasks according to the delay and energy requirements [20]. Zhang et al. proposed a mechanism of energy-efficient offloading for multiple devices [21]. With the decision-making results, the maximum energy consumption of the overall system can be minimized under the time limit of devices. The tradeoff between delay and energy consumption can also be addressed by an iterative search algorithm [22]. This algorithm yields the optimal solution in multi-device environments. Kan et al. formulated the multi-device resource-allocation offloading decision problem and proposed a heuristic algorithm to solve the cost minimization problem [23]. With the technique of energy harvesting, the computation offloading problem for wireless powered WSD networks is also investigated [24][25][26][27][28][29].
The decentralized offloading mechanisms allow each device to make decisions in a distributed manner, so each device can choose the most appropriate decision based on its own interests. The offloading decision of each device affects the decisions of other devices and vice versa. Game theory is one of the most commonly used methods to solve decentralized decision problems. Chen et al. considered the scenario in which a single basestation serves multiple devices and proposed a game-theoretic mechanism for efficient computation offloading to the edge cloud [13]. They also proved the existence of a Nash equilibrium. Guo et al. considered a scenario in which multiple devices offload to multiple edge servers [14] and studied the collaborative computation offloading problem among edge servers. To combine centralized cloud and multi-access edge computing, Guo and Liu proposed a general architecture in which devices can choose not only local computation or edge servers but also cloud data centers for computation offloading [15]. Meskar et al. designed a competition game to reduce the energy consumption of devices for computation tasks [16]. In this game, devices choose the decision with the minimum energy consumption under time constraints, given the decisions of other devices. Yuan et al. extended edge computing to UAVs and proposed a Stackelberg game approach for the heterogeneous computation offloading decision problem [5]. Recently, machine learning techniques have also been employed for the offloading decision problem [4,30,31].
In the aforementioned decentralized offloading algorithms based on game theory, devices make decisions based on their own benefits without considering overall system performance. This selfishness may leave some devices with poor transmission bandwidth and degrade the overall performance of computation offloading. To avoid such situations, we consider the transmission interference among devices that simultaneously offload computation tasks and develop a system that achieves balanced offloading performance for all devices.

System Model
We present the system model of this work in Figure 1. We consider a set of n WSDs, N = {wsd_1, wsd_2, wsd_3, . . . , wsd_n}, distributed in an area, where each WSD needs to complete a computation-intensive task within a limited time period. These WSDs are connected to one basestation through multiple wireless channels. In addition, each basestation is connected to an edge server, which is connected to a power outlet. The edge server has a much higher computation capability than the WSDs and allows them to offload their tasks. Similar to previous studies on offloading in cloud and edge computing [13][14][15][16][17][20][21][22][23][32][33][34][35][36][37][38], we consider a static scenario, where the states of WSDs do not change during the computation offloading.

Communication Model
Next, the communication model of edge computing is introduced. Each basestation has a set of m wireless channels denoted as M = {1, 2, 3, . . . , m}. Because each channel may suffer from wireless interference, we use OFDMA to share the spectrum resources among multiple WSDs [10]. The channel resources of the system are orthogonal to each other, but there are not enough channels to allocate one to each WSD. We express 0 ≤ a_i ≤ |M| as the computation offloading decision of wsd_i. Specifically, if a_i > 0, wsd_i decides to offload its task through wireless channel a_i; if a_i = 0, wsd_i chooses to compute its task locally. Based on the decision profile A = (a_1, a_2, . . . , a_n) of all WSDs, we can calculate the uplink data rate r_i of a wsd_i that decides to offload its task to the edge server as follows [39]:

r_i(A) = W log_2 ( 1 + p_i h_{i,s} / ( w_0 + Σ_{wsd_j ∈ N\{wsd_i}: a_j = a_i} p_j h_{j,s} ) ),   (1)

where W is the channel bandwidth and p_i is the transmission power of wsd_i. h_{i,s} denotes the channel gain between wsd_i and the basestation based on path loss and shadowing, and w_0 denotes the background noise power.
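The uplink rate above can be sketched in a few lines of Python. The function below is a minimal illustration, not part of the paper's implementation; the variable names `W`, `p`, `h`, and `w0` mirror the symbols in Equation (1) and are our own.

```python
import math

def uplink_rate(i, A, W, p, h, w0):
    """Uplink data rate r_i of Equation (1) for wsd_i.

    A[j] is the offloading decision of wsd_j (0 = compute locally);
    only the other WSDs on the same channel contribute interference.
    """
    interference = sum(p[j] * h[j] for j in range(len(A))
                       if j != i and A[j] == A[i])
    return W * math.log2(1 + p[i] * h[i] / (w0 + interference))
```

As expected from Equation (1), a WSD that is alone on its channel sees no interference term and therefore achieves a strictly higher rate than when it shares the channel.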

Computation Model
Then, we introduce the computation model for both local and edge computing. We assume that each wsd i has a task J i = (B i , D i ) to be calculated. Each task can be calculated locally or offloaded to an edge server according to the WSD's decision. B i indicates the data size of the task J i and D i denotes the number of CPU cycles required to complete task J i . Then, we show the processing time and energy consumption required for local computing and edge computing.

Local Computing
If wsd_i decides to compute task J_i locally, the delay and energy consumption are described below. Let f_i^l be the local computation capability (i.e., CPU cycles per second) of wsd_i. The delay required for local computing is

t_i^l = D_i / f_i^l.   (2)

The energy consumption of wsd_i for the computation is given as

E_i^l = γ_i D_i,   (3)

where γ_i is the consumed energy per CPU cycle. To ensure the completion of computation tasks, we assume that local computing can always meet the delay requirements of computation tasks, at the price of higher energy consumption.
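The local-computing model reduces to two one-line formulas, sketched below with illustrative names; the example values are taken from the simulation parameters in Section 6 (D_i = 500 × 10^6 cycles, f_i^l = 2 × 10^9 cycles/s, γ = 0.5 J/gigacycle).

```python
def local_delay(D_i, f_l):
    """Equation (2): local computing delay t_i^l = D_i / f_i^l."""
    return D_i / f_l

def local_energy(D_i, gamma):
    """Equation (3): local computing energy E_i^l = gamma_i * D_i."""
    return gamma * D_i
```

With the values above, a task takes 0.25 s and consumes 0.25 J when computed locally.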

Edge Computing
We further consider the delay and energy consumption when wsd_i chooses to offload its task to the edge server through the basestation. Different from local computation, computation offloading requires extra time and energy for transmitting the input data of the task to the edge server. The transmission time of the offloading input data is

t_i^t(A) = B_i / r_i(A).   (4)

The transmission energy of wsd_i is expressed as

E_i^t(A) = p_i t_i^t(A) = p_i B_i / r_i(A).   (5)

After the transmission, the edge server performs the computation task J_i. Let f_i^m be the computation capability (i.e., CPU cycles per second) of the edge server allocated to wsd_i.
Here, we assume that edge servers always have sufficient resources to fulfill the computing requirements of all WSDs. The delay required by wsd_i to compute on the edge server is expressed as

t_i^{m,exe} = D_i / f_i^m.   (6)

We do not consider the energy consumption of the edge servers because they are wire-powered. Thus, the delay and energy consumption of edge computing can be expressed as

T_i^m(A) = t_i^t(A) + t_i^{m,exe}   (7)

and

E_i^m(A) = E_i^t(A) = p_i B_i / r_i(A).   (8)

According to Equations (3) and (8), we can obtain the energy consumption of each wsd_i as

E_i(A) = I{a_i = 0} E_i^l + I{a_i > 0} E_i^m(A),   (9)

where I{X} equals 1 if condition X holds and 0 otherwise. Similar to other works [13][14][15][16][40][41][42][43], we ignore the time of transmitting the results from the edge server back to the WSDs because the results are usually much smaller than the input data.
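Equations (4)-(8) combine into a single delay/energy pair for an offloaded task. The sketch below uses our own function and parameter names; the uplink rate r_i is taken as an input, since it depends on the channel decisions of the other WSDs.

```python
def edge_offloading_cost(B_i, D_i, r_i, p_i, f_m):
    """Delay and device-side energy of offloading (Equations (4)-(8)).

    Returns (T_i^m, E_i^m): transmission time plus edge execution time,
    and the transmission energy. The edge server's own energy is not
    counted, as it is wire-powered.
    """
    t_tx = B_i / r_i            # (4) transmission time
    e_tx = p_i * t_tx           # (5) transmission energy
    t_exe = D_i / f_m           # (6) execution time on the edge server
    return t_tx + t_exe, e_tx   # (7) and (8)
```

For instance, sending B_i = 500 × 10^3 bits at 5 × 10^6 bit/s takes 0.1 s and 0.01 J at p_i = 100 mW, and executing D_i = 500 × 10^6 cycles at 10 × 10^9 cycles/s adds 0.05 s, for a total delay of 0.15 s.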
Edge computing can reduce the energy consumption of WSDs by offloading computation tasks, but not all WSDs can offload their tasks because of poor channel quality. According to Equation (1), it is necessary to lower the channel interference to achieve a higher transmission rate, and the interference is mainly determined by the WSDs sharing the same channel. To improve the overall transmission performance, we consider the channel gain of each WSD to reduce channel interference. Moreover, because a centralized approach may suffer from high system overhead and considerable latency owing to acquiring information from all WSDs, we employ a decentralized approach for the computation offloading decision problem.

Channel Gain
Our main idea is to make WSDs with similar channel gains offload their tasks through the same wireless channel. It is based on the observation that mixing low-channel-gain and high-channel-gain WSDs in the same channel may result in severe transmission interference that keeps these WSDs from offloading successfully. Accordingly, we categorize WSDs into different sets according to their channel gains.
Since we assume that there are m wireless channels and n WSDs, we can use k-means to divide the WSDs into m clusters according to their channel gains. The cluster centers are denoted as {h_channel^1, h_channel^2, h_channel^3, . . . , h_channel^m}, and they are equivalent to the channels' qualities. The clusters are obtained by minimizing the distance of each WSD's channel gain from its cluster center:

arg min_S Σ_{j=1}^{m} Σ_{h_{i,s} ∈ S_j} ‖ h_{i,s} − h_channel^j ‖^2,   (10)

where S = {S_1, . . . , S_m} is the partition of the WSDs. Accordingly, we can define the difference between wsd_i's channel gain and channel j's quality as

G_{i,j} = | h_{i,s} − h_channel^j |.   (11)

According to the above formulas, WSDs with similar channel gains will choose the same channel to offload their tasks.
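Because the channel gains are scalars, the clustering step is a one-dimensional k-means. The following is a minimal self-contained sketch of Lloyd's algorithm under our own naming (the paper does not prescribe an implementation); it assumes m ≥ 2.

```python
def kmeans_gains(gains, m, iters=50):
    """1-D k-means (Lloyd's algorithm) over channel gains, a minimal sketch.

    Returns the m cluster centres (the channel 'qualities' h_channel^j)
    and, for each WSD, the index of the closest centre, i.e. the channel
    whose quality is nearest to its own gain.
    """
    srt = sorted(gains)
    # spread the initial centres over the sorted gains
    centres = [srt[int(k * (len(srt) - 1) / (m - 1))] for k in range(m)]
    for _ in range(iters):
        clusters = [[] for _ in range(m)]
        for g in gains:
            j = min(range(m), key=lambda k: abs(g - centres[k]))
            clusters[j].append(g)
        # move each centre to the mean of its cluster (keep empty clusters)
        centres = [sum(c) / len(c) if c else centres[j]
                   for j, c in enumerate(clusters)]
    assign = [min(range(m), key=lambda k: abs(g - centres[k])) for g in gains]
    return centres, assign
```

Running this on two well-separated groups of gains recovers one centre per group, so WSDs with similar gains end up assigned to the same channel, as intended.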

Problem Formulation
In this section, we separately consider the problem of centralized computing offloading and decentralized computing offloading.

Centralized Edge Computing
In a centralized edge-computing system, we can formulate the energy optimization problem as follows:

min_A Σ_{wsd_i ∈ N} E_i(A)   s.t. T(a_i) ≤ T_i^max, ∀ wsd_i ∈ N,   (12)

where T(a_i) is the time spent by wsd_i when it chooses channel a_i for task offloading (or computes locally when a_i = 0), and T_i^max denotes the maximum allowable delay of wsd_i. The WSDs transmit their statistics to the edge server, which makes system-wide decisions in a centralized manner. Although the centralized optimization can minimize the overall energy consumption, it is problematic to force WSDs to comply with the remote decisions.

Decentralized Edge Computing
In a decentralized edge-computing system, we can formulate the energy optimization problem of each WSD as

min_{a_i} E_i(a_i, a_{−i})   s.t. T(a_i) ≤ T_i^max,   (13)

where a_{−i} = (a_1, . . . , a_{i−1}, a_{i+1}, . . . , a_n) is the set of decisions of the WSDs other than wsd_i.
In this system, all WSDs make decisions according to their own interests, so each decision may be selfish from the viewpoint of the whole system. Each WSD considers the current decisions of the other WSDs before making the best decision for itself. In this paper, we consider this decision-making problem of distributed computation offloading among the WSDs and present a feasible algorithm.

Multi-Channel Computation Offloading Game
In this section, we formulate the multi-channel computation offloading game for mobile edge computing.

Game Formulation
Given the decisions a_{−i} of the other WSDs, wsd_i chooses its decision variable a_i as the solution of the following problem:

(DOPT)   min_{a_i ∈ A_i} E_i(a_i, a_{−i}),   (14)

where A_i is the set of decisions of wsd_i that can meet its time constraint, and I{X} is an indicator function over a conditional expression X: if X is true, I{X} equals 1; otherwise, I{X} equals 0.
To ease the explanation of the problem, we define the energy cost function of the problem as

E_i(a_i, a_{−i}) = E_i^l, if a_i = 0;  E_i^m(a_i, a_{−i}), if a_i > 0 and T_i^m(a_i, a_{−i}) ≤ T_i^max;  ∞, if a_i > 0 and T_i^m(a_i, a_{−i}) > T_i^max.   (15)

Then, we formulate the optimization problem in Equation (14) as a strategic game Γ = (N, {A_i}_{wsd_i ∈ N}, {E_i}_{wsd_i ∈ N}), where N is the set of rational players, A_i is the strategy set of player i, and the energy cost function E_i(a_i, a_{−i}) of wsd_i is the cost to be minimized by player i. The game Γ is the multi-channel computation offloading game. In the following, we introduce the concept of Nash equilibrium.
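The piecewise cost of Equation (15) translates directly into code. The sketch below uses hypothetical argument names (`E_local`, `E_edge`, `T_edge` stand for E_i^l, E_i^m, and T_i^m) and an infinite penalty for a missed deadline, matching the third branch of the definition.

```python
import math

def offload_cost(a_i, E_local, E_edge, T_edge, T_max):
    """Piecewise cost of Equation (15): local energy when a_i = 0,
    offloading energy when the edge delay meets the deadline, and an
    infinite penalty when it does not."""
    if a_i == 0:
        return E_local
    return E_edge if T_edge <= T_max else math.inf
```

The infinite branch is what later forces a WSD whose offloading violates the time constraint back to local computation in the proof of Theorem 1.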

Definition 1.
A strategy profile a* = (a*_1, . . . , a*_n) is a Nash equilibrium of the multi-channel computation offloading game if no WSD can reduce its cost by unilaterally changing its decision, i.e.,

E_i(a*_i, a*_{−i}) ≤ E_i(a_i, a*_{−i}),   ∀ a_i ∈ A_i, ∀ wsd_i ∈ N.

Therefore, according to the definition of Nash equilibrium, when the game reaches an equilibrium, every WSD has chosen its own best decision and the game has reached its end.

Game Theory with WSD's Channel Gain
The pseudo-code of the proposed algorithm is listed in Algorithm 1. Initially, the algorithm assumes that all WSDs decide on local computation, i.e., a_i = 0 for all WSDs. Moreover, each WSD first transmits a pilot signal to the basestation. After the basestation receives the pilot signals from all WSDs, it returns the received interference information to all WSDs. Each WSD then computes the best response for its DOPT problem over multiple iterations. If the offloading decision of a WSD remains the same, the WSD does not send an update to the edge server. If the new decision differs from the previous one, the WSD sends an update request message to the edge server to compete for validating the updated decision. For wsd_i, this means that the cost of the updated decision, i.e., its energy consumption, is lower than that of the decision kept from the previous iteration.
Then, after the edge server receives the update requests from WSDs, it will randomly select one of the WSDs that transmit update requests. The update request of the selected WSD is validated and a reply of update permission is returned to the WSD. The WSDs whose decisions are not selected by the edge server will not receive the reply message from the edge server and will retain the same decision as the previous iteration. In other words, these WSDs will not update their decisions.
After the selected device updates its decision, the next iteration generates new decisions by repeating the above actions until no WSD requests to update its decision; namely, the decisions of all WSDs reach the Nash equilibrium. Reaching the Nash equilibrium means that no WSD can obtain a lower cost by changing its decision. Therefore, the WSD decisions in this final iteration form the solution to our multi-channel computation offloading problem.

Algorithm 1 Game Theory with WSD's Channel Gain
Initialization: the initial computation decision of every WSD is a_i(0) = 0
1: repeat for each WSD wsd_i and each decision time slot t in parallel:
2:   Send a pilot signal to the basestation on the selected channel a_i(t)
3:   Receive the interference information on all channels from the basestation
4:   ∆_i(t) ← compute the best response solution for (DOPT)
5:   if ∆_i(t) ≠ a_i(t) then
6:     Send an update request message to the edge server to compete for a decision update
7:     if an update permission message is received from the edge server then
8:       choose the decision a_i(t + 1) = ∆_i(t) for the next time slot
9:     else choose the original decision a_i(t + 1) = a_i(t) for the next time slot
10:    end if
11:  else choose the original decision a_i(t + 1) = a_i(t) for the next time slot
12:  end if
13: until no WSD transmits any update request to the edge server
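Algorithm 1 can be simulated compactly as a best-response loop. The sketch below is our own illustration, not the paper's implementation: it models the edge server's arbitration by validating one randomly chosen update request per iteration, and it reuses Equations (1), (3), (7), and (15) for the per-WSD cost. All parameter names are illustrative.

```python
import math
import random

def offloading_game(B, D, p, h, gamma, T_max, f_m, W, w0, m, seed=1):
    """Best-response simulation of Algorithm 1 (a sketch, names are ours).

    Each iteration, every WSD computes its best response to the others'
    current decisions; WSDs whose decision would strictly improve send an
    update request, the edge server permits exactly one of them, and the
    loop ends when nobody requests an update (a Nash equilibrium).
    """
    rng = random.Random(seed)
    n = len(B)
    a = [0] * n                                  # start from local computing

    def rate(i, ch):                             # Equation (1)
        inter = sum(p[j] * h[j] for j in range(n) if j != i and a[j] == ch)
        return W * math.log2(1 + p[i] * h[i] / (w0 + inter))

    def cost(i, ch):                             # Equation (15)
        if ch == 0:
            return gamma * D[i]                  # local energy, Equation (3)
        r = rate(i, ch)
        delay = B[i] / r + D[i] / f_m            # Equation (7)
        return p[i] * B[i] / r if delay <= T_max[i] else math.inf

    iterations = 0
    while True:
        iterations += 1
        requests = []
        for i in range(n):                       # best response of wsd_i
            best = min(range(m + 1), key=lambda ch: cost(i, ch))
            if cost(i, best) < cost(i, a[i]):    # strict improvement only
                requests.append((i, best))
        if not requests:                         # Nash equilibrium reached
            return a, iterations
        i, best = rng.choice(requests)           # server permits one update
        a[i] = best
```

Because only strict improvements are requested and one update is applied per iteration, the loop follows an improvement path of the game; Theorem 1's finite improvement property then guarantees termination. On a small symmetric instance (four identical WSDs, two channels), the dynamics spread the WSDs evenly, two per channel.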

Convergence Analysis
Next, we discuss the convergence of the algorithm, which means that we have to prove the existence of Nash equilibrium for the multi-channel computation offloading game. Before that, we first introduce a convenient property that is helpful for proving the existence of Nash equilibrium.

Definition 2. A game is said to be a potential game if there is a potential function Φ(a) such that, for every wsd_i ∈ N and every a_i, a'_i ∈ A_i,

E_i(a'_i, a_{−i}) < E_i(a_i, a_{−i}) ⟹ Φ(a'_i, a_{−i}) < Φ(a_i, a_{−i}).   (16)
The potential function is a useful tool for analyzing the equilibrium properties of a game, because a potential game always admits a Nash equilibrium and has the finite improvement property [43]. Next, we show that the multi-channel computation offloading game is a potential game.

Lemma 1. Given a computation offloading strategy profile a, offloading to the edge server is beneficial for wsd_i if the interference µ_i(a) = Σ_{wsd_j ∈ N\{wsd_i}: a_j = a_i} p_j h_{j,s} received on the selected wireless channel a_i > 0 satisfies µ_i(a) ≤ Q_i, with the threshold

Q_i = p_i h_{i,s} / ( 2^{p_i B_i / (W E_i^l)} − 1 ) − w_0.   (17)

Proof. According to Equations (3) and (5), for the edge server to be beneficial compared to local computing for wsd_i, we need E_i^m(a) ≤ E_i^l, i.e., (B_i / r_i) p_i ≤ E_i^l, and therefore r_i ≥ p_i B_i / E_i^l. According to Equation (1), this is equivalent to

W log_2 ( 1 + p_i h_{i,s} / (w_0 + µ_i(a)) ) ≥ p_i B_i / E_i^l,

which can be rearranged into µ_i(a) ≤ Q_i with Q_i given in Equation (17).

From Lemma 1, we observe that if the interference a WSD receives on its channel is low enough, it is beneficial for the WSD to offload its task to the edge server. Conversely, when the interference on the channel is too high, the WSD should perform local computation.

Theorem 1. The multi-channel computation offloading game is a potential game; hence, it admits a Nash equilibrium and has the finite improvement property.
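Lemma 1's threshold is easy to check numerically. The function below (our own sketch, with names mirroring the symbols of Equation (17)) computes Q_i; at µ_i = Q_i the offloading energy p_i B_i / r_i equals the local energy E_i^l exactly, which serves as a sanity check on the derivation.

```python
import math

def interference_threshold(B_i, p_i, h_i, E_local, W, w0):
    """Q_i of Equation (17): offloading consumes less energy than local
    computing exactly when the co-channel interference mu_i <= Q_i."""
    return p_i * h_i / (2 ** (p_i * B_i / (W * E_local)) - 1) - w0
```

Note that Q_i shrinks as the data size B_i grows: a larger upload tolerates less interference before local computing becomes the cheaper option.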
Proof. We first construct the potential function of the multi-channel computation offloading game as

Φ(a) = (1/2) Σ_{wsd_i ∈ N} Σ_{wsd_j ∈ N\{wsd_i}} p_i h_{i,s} p_j h_{j,s} I{a_j = a_i} I{a_i > 0} + Σ_{wsd_i ∈ N} p_i h_{i,s} Q_i I{a_i = 0}.   (18)

For a given wsd_k, Equation (18) can be equivalently written as

Φ(a) = p_k h_{k,s} µ_k(a) I{a_k > 0} + p_k h_{k,s} Q_k I{a_k = 0} + Ξ(a_{N\{k}}),   (19)

where Ξ(a_{N\{k}}) collects the terms that are independent of wsd_k's strategy a_k. Since an update of a_k cannot change Ξ(a_{N\{k}}), we can omit it in the rest of the proof. Next, suppose wsd_k updates its decision from a_k to a'_k to reduce its cost, i.e., E_k(a'_k, a_{−k}) < E_k(a_k, a_{−k}). According to the definition of a potential game, the update should also yield Φ(a'_k, a_{−k}) < Φ(a_k, a_{−k}). We consider three cases, namely case (1): a_k > 0 and a'_k > 0; case (2): a_k = 0 and a'_k > 0; and case (3): a_k > 0 and a'_k = 0.
The first case occurs when wsd_k's decision is updated from wireless channel a_k > 0 to wireless channel a'_k > 0. According to Equation (1), because the function W log_2 x is monotonically increasing in x and E_k(a'_k, a_{−k}) < E_k(a_k, a_{−k}) holds, we obtain µ_k(a'_k, a_{−k}) < µ_k(a_k, a_{−k}). Then, according to Equation (19),

Φ(a'_k, a_{−k}) − Φ(a_k, a_{−k}) = p_k h_{k,s} ( µ_k(a'_k, a_{−k}) − µ_k(a_k, a_{−k}) ) < 0.

In the second case, the decision of wsd_k is updated from local computation, a_k = 0, to edge computing over a wireless channel a'_k > 0. Since wsd_k benefits from selecting channel a'_k for offloading, the interference on that channel must be lower than the threshold by Lemma 1, i.e., µ_k(a'_k, a_{−k}) < Q_k. Accordingly,

Φ(a'_k, a_{−k}) − Φ(a_k, a_{−k}) = p_k h_{k,s} ( µ_k(a'_k, a_{−k}) − Q_k ) < 0.

In the third case, the decision of wsd_k is updated from offloading over wireless channel a_k > 0 to local computation, a'_k = 0. This case can happen under two situations: the interference on channel a_k exceeds the threshold, or the offloading violates the time constraint.
For the former situation, because µ_k(a_k, a_{−k}) > Q_k, we obtain

Φ(a'_k, a_{−k}) − Φ(a_k, a_{−k}) = p_k h_{k,s} ( Q_k − µ_k(a_k, a_{−k}) ) < 0.

For the latter situation, E_k(a_k, a_{−k}) = ∞ according to Equation (15), so E_k(a'_k, a_{−k}) < E_k(a_k, a_{−k}) trivially holds and the interference can be regarded as exceeding any threshold, i.e., µ_k(a_k, a_{−k}) > Q_k. This implies the same result as the former situation.
Based on the above three possible decision updates, any cost-reducing update of a single WSD strictly decreases the potential function. Therefore, the multi-channel computation offloading game is a potential game and can reach a Nash equilibrium.

Simulation Results
In this section, we show the simulation results of the proposed decentralized multi-channel computation offloading algorithm. Section 6.1 describes the simulation parameters and Section 6.2 presents the simulation results.

Simulation Parameters
In our scenario, we consider a basestation whose coverage is 100 m [14]. The number of WSDs, n, ranges from 30 to 50, and the WSDs are randomly distributed within the coverage [13]. There are five available channels, i.e., m = 5 [13], where the bandwidth of each channel, W, is 10 MHz [44]. The transmission power p is 100 mW [13] and the background noise power is w_0 = −100 dBm [13]. The channel gain is defined as h_{i,s} = l_{i,s}^{−α}, where l_{i,s} is the distance from wsd_i to the basestation. We set the path loss factor to α = 4 [13].
The data generated by WSDs can be of various types and sizes. We randomly generate the data size B within [100, 1000] × 10^3 bits [23]. The number of CPU cycles required by a computation task is also randomly selected, with D within [100, 1000] × 10^6 cycles [23]. The computation capacity of a WSD, f^l, is randomly set within [1.5, 2.5] × 10^9 cycles/s [23], and the consumed energy per CPU cycle is γ = 0.5 J/gigacycle [17]. The edge computation capacity f^m assigned to each WSD that computes on the edge server is 10 × 10^9 cycles/s [13]. The maximum tolerable time T_i^max of wsd_i is randomly chosen within [0.7, 1] s.

Simulation Results
To compare the performance of our algorithm, we also implement several mechanisms as listed below.
Local: All WSDs compute their tasks locally.
Random: The offloading decisions of all WSDs are random.
Greedy: A WSD with a larger difference between edge and local computation costs has a higher priority for computation offloading.
GT: The game-theoretic offloading approach without considering channel gain [13].
CPA+GT: A variant that pre-allocates a channel to each WSD according to its channel gain and then applies the game within each channel.
GTCG: The proposed game theory with WSD's channel gain (Algorithm 1).
We first show the effectiveness of the proposed approach of channel selection with 40 WSDs and five channels. Figure 2 shows the data rate of each WSD in descending order of signal strength, where the data size of each WSD is 500 × 10^3 bits. Compared to random channel selection, the selection based on channel quality improves the data rate of most WSDs. Moreover, the approach avoids very low data rates because only WSDs with similar channel gains share the same channel. By considering channel quality, the transmission rate of most WSDs can be improved, increasing the number of successful offloadings.

The number of offloadings for different numbers of WSDs is shown in Figure 3. The figure compares the game-theoretic approaches with and without considering channel gain. When there are more channel resources per WSD, GTCG has more opportunities to offload tasks to the edge server because its WSDs can change their decisions among the channels. However, when the channel resources are scarce, the channels that GTCG can select for the WSDs with lower channel gains are reduced, which degrades the number of offloadings. CPA+GT does not have the problem of WSDs with higher channel gains robbing channels from WSDs with lower channel gains by changing the channel selection, so it can maintain its transmission performance even when the channel resources are reduced. The random decision can outperform GT because GT may allow a single WSD to occupy a whole channel; when the random decisions happen to allocate WSDs evenly across channels, the random approach achieves a better number of offloadings. Next, we show the overall energy consumption of WSDs in Figure 4. By offloading computation tasks to edge servers, the energy consumption of WSDs can be reduced. Since GTCG and CPA+GT offload more tasks to the edge server, as shown in Figure 3, they consume less energy than the other approaches. Figure 5 shows the average latency for completing tasks. We note that all tasks can be accomplished within their time constraints since each task can always be computed locally. Both GTCG and CPA+GT have longer latency because each offloading requires additional transmission latency; hence, the average latency decreases with more local-computation tasks. The latency of CPA+GT may decrease with more WSDs because the percentage of WSDs with successful offloadings decreases.
The difference between GTCG/CPA+GT and the other approaches is about 111 milliseconds or less. Figure 6 shows the number of offloadings with different channel bandwidths. As the per-channel bandwidth increases, GTCG always offloads more tasks than CPA+GT because the additional resources cannot significantly affect the decisions of CPA+GT, whereas GTCG allows WSDs to change their channel selections. Since the average resources available to each WSD increase, when a WSD moves to another channel, the impact on the WSDs of the occupied channel becomes less significant. As for the other methods that do not consider channel gain, although their numbers of offloadings increase, they remain far below GTCG. In Figure 7, we increase the number of channels to show the average number of offloadings. Likewise, when the available resources increase, GTCG always offloads more tasks; in fact, GTCG can offload all tasks with eight channels. On the contrary, even when the number of channels for CPA+GT is increased to more than 10, it still cannot achieve offloading for all WSDs. GTCG provides the best offloading performance among all compared algorithms. We further demonstrate the problem of a single WSD occupying a wireless channel, as mentioned earlier. Figure 8 shows the number of WSDs selecting each channel under each approach. We can see that the random decision, greedy, and GT each have one channel occupied by only one WSD, causing unfair channel allocation. Such imbalanced channel allocation can be avoided by considering the channel quality. In addition to the number of WSDs in each channel, Table 1 presents the standard deviation of the number of WSDs per channel for the different approaches. Both GTCG and CPA+GT have smaller standard deviations than the other approaches. We can thus conclude that the channel selection decisions of the proposed approaches are better balanced.
CPA+GT has a smaller standard deviation than GTCG because it does not allow WSDs to change their channel selections. We illustrate the difference in WSD fairness between GTCG/CPA+GT and the other methods by performing 100 computation tasks and showing the percentage of successful offloadings for each WSD, in descending order of channel gain, in Figure 9. The methods that do not consider the channel gains of WSDs exhibit imbalanced offloading ratios among WSDs; moreover, even WSDs with better channel gains may fail to offload their tasks. This is not the case for GTCG: when a WSD cannot acquire sufficient resources on its original channel, it may still offload its task by changing its channel selection. Although GTCG/CPA+GT cannot make WSDs with different channel gains achieve identical offloading ratios, they do increase the offloading ratios of WSDs with low channel gains.

Next, we consider the case of unevenly distributed WSD locations, where 25% of the WSDs are close to the basestation and 75% are located at the border of the basestation coverage. Figure 10 shows that the performance of GTCG and CPA+GT is not affected by the uneven WSD distribution. In addition, GTCG always performs better than CPA+GT because the channel pre-allocation of CPA+GT may limit the offloading decisions of some WSDs. Figure 11 presents the iterative process of the three game-theory-based methods. We observe that GTCG and CPA+GT reach the Nash equilibrium within 37 and 32 iterations, respectively, while GT takes 44 iterations to converge, with severe oscillation of the energy consumption. The oscillation occurs when a WSD with better channel quality occupies a channel and makes the original WSDs on that channel suffer from high transmission interference; these WSDs then change their channels, resulting in slow convergence. CPA+GT converges faster than GTCG because CPA+GT assigns a channel to each WSD initially, so each update of an offloading decision only affects the WSDs in the same channel. Although GTCG converges slightly more slowly than CPA+GT, it achieves better offloading performance as a reasonable tradeoff.

Conclusions
In this paper, we consider the efficient computation offloading problem of multiple WSDs. As the number of WSDs increases, we consider the computation offloading problem in a multi-channel wireless sensor network for better transmission performance. However, the transmission performance of the WSDs in a channel is mainly affected by the WSDs with high channel gains. Accordingly, we propose a channel-assignment approach that arranges WSDs with similar channel gains in the same channel to effectively improve the transmission performance. Then, we formulate a decentralized computation offloading decision problem and propose an algorithm based on game theory, for which we prove the existence of a Nash equilibrium. The simulation results show that our algorithm effectively increases the number of offloadings by jointly considering the channel gain of each WSD in channel selection, minimizing the energy consumption of WSDs, and that it converges quickly. In future work, we intend to further increase the number of successful offloadings by employing heterogeneous transmission techniques.