Optimal Computational Power Allocation in Multi-Access Mobile Edge Computing for Blockchain

Blockchain has emerged as a decentralized and trustable ledger for recording and storing digital transactions. The mining process of Blockchain, however, imposes a heavy computational workload on miners for solving the proof-of-work puzzle (i.e., a series of hashing computations), which is prohibitive from the perspective of the mobile terminals (MTs). The advanced multi-access mobile edge computing (MEC), which enables the MTs to offload part of the computational workloads (for solving the proof-of-work) to nearby edge-servers (ESs), provides a promising approach to address this issue. By offloading the computational workloads via multi-access MEC, the MTs can effectively increase their probabilities of success when participating in the mining game and gaining the consequent reward (i.e., winning the bitcoin). However, as compensation to the ESs which provide the computational resources, the MTs need to pay the ESs the corresponding resource-acquisition costs. Thus, to investigate the trade-off between obtaining computational resources from the ESs (for solving the proof-of-work) and paying the consequent cost, we formulate an optimization problem in which the MTs determine the computational resources they acquire from different ESs, with the objective of maximizing the MTs' total net-reward in the mining process while keeping the fairness among the MTs. In spite of the non-convexity of the formulated problem, we exploit its layered structure and propose efficient distributed algorithms for the MTs to individually determine the optimal computational resources acquired from different ESs. Numerical results are provided to validate the effectiveness of our proposed algorithms and the performance of our proposed multi-access MEC for Blockchain.


Introduction
Blockchain, a distributed and trustable architecture for recording and storing digital transactions, has been considered one of the promising mechanisms for enabling secure cyber-physical systems [1]. In the framework of Blockchain, the miners participate in a mining game [2], and all miners compete with each other to be the first to solve the proof-of-work puzzle (which corresponds to executing a series of hashing computations). After solving the proof-of-work puzzle and broadcasting the mined block to the other miners to reach consensus, the winner can claim the consequent reward.
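The hashing competition described above can be made concrete with a small sketch. The header bytes, nonce encoding, and difficulty value below are illustrative assumptions rather than part of any particular Blockchain implementation; the double-SHA-256 target test mirrors the Bitcoin-style puzzle:

```python
import hashlib

def meets_target(header: bytes, nonce: int, difficulty_bits: int) -> bool:
    # Double SHA-256, Bitcoin-style: the puzzle is solved when the digest,
    # read as a 256-bit integer, falls below the difficulty target.
    digest = hashlib.sha256(
        hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    ).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

def mine(header: bytes, difficulty_bits: int) -> int:
    # Brute-force nonce search; the expected number of hash trials grows as
    # 2^difficulty_bits, which is why mining is compute-bound for the MTs.
    nonce = 0
    while not meets_target(header, nonce, difficulty_bits):
        nonce += 1
    return nonce
```

The expected number of trials doubles with each extra difficulty bit, which is exactly the workload the MTs would like to offload to the ESs.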
A key challenge, however, is to achieve an efficient allocation of the edge servers' computational resources to different mobile users/miners.
In this work, we investigate the optimal allocation of the computational resource/power in multi-access MEC enabled Blockchain. As described before, multi-access MEC enables each MT to obtain computational power from multiple ESs simultaneously (in the following, we use the terms computational power and computational resource interchangeably). As a result, the MT can increase its probability of success when participating in the mining game and gaining the consequent reward (i.e., winning the bitcoin). However, as compensation to the ESs, the MT needs to pay the consequent cost for acquiring the computational resources. Our contributions in this work can be summarized as follows:

• To investigate the trade-off between obtaining the computational resources from the ESs (for solving the proof-of-work) and paying the consequent cost, we focus on a scenario in which a group of MTs acquire computational power from a set of nearby ESs and pay the consequent acquisition costs. Mathematically, we formulate an optimization problem in which the MTs determine their acquired computational powers from different ESs, with the objective of maximizing the MTs' total net-reward while keeping the fairness among the MTs.

• Despite the non-convexity of the formulated optimization problem, we exploit its layered structure and design two algorithms (one for the one-ES scenario and the other for the multi-ES scenario) to find the optimal solution efficiently. We also provide extensive numerical results to validate the effectiveness of our proposed algorithms and show the performance of our proposed multi-access MEC for Blockchain.
The remainder of this paper is organized as follows. We present the system model and problem formulation in Section 2. We first focus on the one-ES scenario and propose a distributed algorithm to compute the optimal solution in Section 3. We then consider the multi-ES scenario and propose a corresponding distributed algorithm to achieve the optimal solution in Section 4. Finally, we conclude this work and discuss future directions in Section 5.

System Model
We consider the system model shown in Figure 1. Specifically, a group of ESs denoted by K = {1, 2, ..., K} provide computing services to a group of MTs denoted by I = {1, 2, ..., I}. Enabled by multi-access MEC, each MT i ∈ I can acquire computational power from several ESs simultaneously. Specifically, we use x_i^k to denote the computational power MT i acquires from ES k, and x_i^loc to denote MT i's local computational power. Thus, each MT i's total computational power θ_i can be expressed as:

θ_i = x_i^loc + Σ_{k∈K} x_i^k. (1)

In this work, θ_i, x_i^loc, and x_i^k are all measured in the unit of GHash/sec. Furthermore, we introduce α_i to denote the ratio of MT i's total computational power to the overall computational power of all MTs, i.e.,

α_i = θ_i / Σ_{j∈I} θ_j. (2)

In the mining game, the MTs compete against each other to be the first one to solve the proof-of-work puzzle and receive the reward accordingly. Similar to [22], we model the probability that MT i wins the mining game (including that MT i successfully mines the block and its solution reaches the consensus) as:

P_i^win = α_i (1 − P_orphan(t_i)), (3)

where t_i denotes MT i's block-size, and the function P_orphan(t_i) denotes the orphaning probability [22]:

P_orphan(t_i) = 1 − e^{−λ t_i}. (4)

The use of the above P_orphan(t_i) can be explained as follows. After solving the proof-of-work puzzle, MT i needs to broadcast its result to the other MTs for reaching the consensus. Since the broadcasting of the computation-result among the MTs suffers a certain delay, it is possible that MT i fails to be the first one whose computation-result reaches the consensus among all MTs (even though MT i is the first one to solve the proof-of-work puzzle). The orphaning probability P_orphan(t_i) in (4) quantifies such a probability. In particular, the same as [22,44], (4) follows from modeling the block-generation process as a Poisson process with rate λ (i.e., λ denotes the block inter-arrival rate), with the propagation delay growing with the block-size t_i.
Based on (3), we can express MT i's net-reward in winning the mining game as:

R_i = (R + r t_i) α_i e^{−λ t_i} − Σ_{k∈K} p_k x_i^k, (5)

where R denotes the fixed reward, and r t_i denotes the variable reward, which grows linearly with MT i's block-size t_i (parameter r is a fixed constant). Parameter p_k denotes the marginal price of ES k for providing the computational power to MT i.
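The reward model can be illustrated with a numerical sketch. Here the share α_i and the orphaning factor e^{−λ t_i} follow the model above, while all parameter values (and the helper name `net_reward`) are illustrative assumptions:

```python
import math

def net_reward(i, x_loc, x_es, prices, t, R=7000.0, r=1000.0, lam=1/600):
    """Expected net reward of MT i: hash-power share alpha_i, discounted by
    the non-orphaning probability e^{-lam * t_i}, minus payments to the ESs.
    x_loc[j]: local power of MT j; x_es[j][k]: power MT j buys from ES k."""
    theta = [x_loc[j] + sum(x_es[j]) for j in range(len(x_loc))]
    alpha_i = theta[i] / sum(theta)                      # share of total power
    reward = (R + r * t[i]) * alpha_i * math.exp(-lam * t[i])
    cost = sum(p * x for p, x in zip(prices, x_es[i]))   # payment to the ESs
    return reward - cost
```

For two symmetric MTs, buying one extra GHash/sec from a single ES raises MT 0's share from 1/2 to 2/3, and here the extra expected reward exceeds the payment, which is exactly the trade-off the (TRO) problem formalizes.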
In this work, we consider that the MTs acquire the computational power from the group of ESs with the objective of maximizing the total net-reward, while keeping the fairness among them. To this end, adopting the logarithmic (proportional-fairness) utility, we formulate the following total net-reward optimization (TRO) problem:

(TRO): maximize Σ_{i∈I} ln( (R + r t_i) α_i e^{−λ t_i} − Σ_{k∈K} p_k x_i^k )
subject to: Σ_{i∈I} x_i^k ≤ C_{k,tot}, ∀k ∈ K, (6)
variables: x_i^k ≥ 0, ∀i ∈ I, k ∈ K.
Constraint (6) ensures that all MTs' total computational power acquired from ES k cannot exceed ES k's maximum computational power C_{k,tot}. Notice that since both θ_i and α_i depend on {x_i^k}_{k∈K}, we treat only {x_i^k}_{i∈I,k∈K} as the decision variables in Problem (TRO). However, Problem (TRO) is a complicated non-convex optimization problem, and there exists no general algorithm that can solve it efficiently [45]. We will propose distributed algorithms to compute the optimal solution in the next two sections. Specifically, in our proposed algorithms, each MT individually determines the computational powers it acquires from different ESs. Then, viewing the aggregate demands from all MTs, the ESs update their respective computational powers allocated to the MTs for maximizing the total net-reward. Thus, our algorithms do not require a central entity to collect global information in the considered network. The downside, however, is that they require message exchange between the MTs and the ESs, meaning that the MTs and ESs bear the additional burden of sending and receiving the messages required for reaching the optimal solution.

Problem Formulation and Its Decomposition
We first consider the one-ES scenario and aim at finding the optimal allocation of the computational resource for the MTs. For ease of presentation, we use ES k = 1 as an example in the following. In particular, with one ES, Problem (TRO) becomes:

(TRO-ES): maximize Σ_{i∈I} ln( (R + r t_i) e^{−λ t_i} (x_i^loc + x_i^1) / (X^loc + Σ_{j∈I} x_j^1) − p_1 x_i^1 )
subject to: Σ_{i∈I} x_i^1 ≤ C_{1,tot},
variables: x_i^1 ≥ 0, ∀i ∈ I,

where X^loc = Σ_{i∈I} x_i^loc denotes the MTs' total local computational power. However, Problem (TRO-ES) is still a non-convex optimization problem, which is difficult to solve in general. To solve Problem (TRO-ES) efficiently, we adopt a vertical decomposition as follows. We first introduce an auxiliary variable µ to denote all MTs' total computational power obtained from ES 1, i.e.,

µ = Σ_{i∈I} x_i^1, with 0 ≤ µ ≤ C_{1,tot}. (7)
Suppose that the value of µ is given in advance. We then aim at solving the following subproblem:

(TRO-ES-Sub): V_µ^sub = maximize Σ_{i∈I} ln( A_i x_i^1 + B_i )
subject to: Σ_{i∈I} x_i^1 = µ,
variables: x_i^1 ≥ 0, ∀i ∈ I,

with parameters A_i and B_i given by:

A_i = (R + r t_i) e^{−λ t_i} / (X^loc + µ) − p_1, (10)
B_i = (R + r t_i) e^{−λ t_i} x_i^loc / (X^loc + µ). (11)

Notice that, in Subproblem (TRO-ES-Sub), we use V_µ^sub to denote the optimal value of Subproblem (TRO-ES-Sub), which depends on the given value of µ (i.e., the total computational power obtained from ES 1).
After solving Subproblem (TRO-ES-Sub) and obtaining V_µ^sub (for each given µ), we continue to find the optimal value of µ (denoted by µ*) that maximizes V_µ^sub, i.e., we solve the following top-problem:

(TRO-ES-Top): maximize_{0 ≤ µ ≤ C_{1,tot}} V_µ^sub.

The reason for adopting the above vertical decomposition is as follows. Given the value of µ, Subproblem (TRO-ES-Sub) is a strictly convex optimization problem (i.e., Proposition 1 provided below), which enables us to solve it efficiently. In the next subsections, we propose distributed algorithms to solve Subproblem (TRO-ES-Sub) and Top-problem (TRO-ES-Top).

Proposed Algorithm to Solve Subproblem (TRO-ES-Sub)
To efficiently solve Subproblem (TRO-ES-Sub), we firstly identify the following property.

Proposition 1. Given µ, Subproblem (TRO-ES-Sub) is a strictly convex optimization.
Proof. Given the value of µ, the values of {A_i, B_i}_{i∈I} are all fixed. The objective of (TRO-ES-Sub) is then a sum of logarithms of affine functions of {x_i^1}_{i∈I}, which is strictly concave, and all constraints are linear. Thus, according to convex optimization theory [45], Problem (TRO-ES-Sub) is a strictly convex optimization problem.
The convexity of Subproblem (TRO-ES-Sub) enables us to use the Karush-Kuhn-Tucker (KKT) conditions [45] to compute the optimal solution. In particular, to solve Subproblem (TRO-ES-Sub), we identify the following three possible cases.
In Case I, in which A_i > 0 for all i ∈ I, we define V_µ^sub through the following problem:

(TRO-ES-Sub-I): maximize Σ_{i∈I} ln( A_i x_i^1 + B_i )
subject to: Σ_{i∈I} x_i^1 ≤ µ, (12)
variables: x_i^1 ≥ 0, ∀i ∈ I.

Problem (TRO-ES-Sub-I) is again a strictly convex optimization problem. Moreover, it can be observed that constraint (12) is strictly binding at the optimum. Thus, we introduce the dual variable λ to relax (12) and obtain the Lagrangian function (where the subscript "I" stands for Case I):

L_I = Σ_{i∈I} ln( A_i x_i^1 + B_i ) + λ ( µ − Σ_{i∈I} x_i^1 ). (13)

With the KKT conditions and (13), we can derive the optimal solution of Problem (TRO-ES-Sub-I) as:

x_i^{1*} = max( 1/λ* − B_i/A_i, 0 ), ∀i ∈ I, (14)

where λ* is determined according to the following condition:

Σ_{i∈I} x_i^{1*} = µ. (15)

Based on (14) and (15), we propose the following distributed algorithm (i.e., Algorithm 1) to solve Problem (TRO-ES-Sub-I). Notice that, in Algorithm 1, exploiting the monotonicity of (14) in λ, we use a bisection search (i.e., from Step 3 to Step 11) to find λ* until condition (15) is satisfied: while the bracket width exceeds a tolerance ε, ES 1 sets λ_cur as the midpoint of the current bracket and broadcasts λ_cur to all MTs; each MT responds with its x_i^1(λ_cur) computed via (14), and ES 1 narrows the bracket according to whether Σ_{i∈I} x_i^1(λ_cur) exceeds µ.
Upon convergence, ES 1 sets λ* = λ_cur and broadcasts λ* to all MTs in I, and each MT i computes its x_i^{1*} via (14).
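Under the closed form (14)-(15), the bisection reduces to a scalar root search on the dual variable. A minimal centralized sketch (the bracket endpoints, tolerance, and function name are illustrative assumptions; the actual algorithm distributes the per-MT computations of (14) to the MTs):

```python
def solve_sub_case1(A, B, mu, tol=1e-9):
    # Water-filling-style solution of (TRO-ES-Sub-I), assuming all A_i > 0:
    # x_i* = max(1/lambda* - B_i/A_i, 0) as in (14), with lambda* found by
    # bisection so that sum_i x_i* = mu, i.e., condition (15).
    def total(lam):
        return sum(max(1.0 / lam - b / a, 0.0) for a, b in zip(A, B))

    lo, hi = 1e-12, 1e12              # illustrative bracket for lambda*
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > mu:           # allocation too large -> raise lambda
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [max(1.0 / lam - b / a, 0.0) for a, b in zip(A, B)], lam
```

Since Σ_i x_i^1(λ) is non-increasing in λ, the bracket always contains λ*, and the loop converges linearly in the bracket width.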
In Case II, only a subset of the MTs have A_i > 0. In particular, we denote this subset as I_sub = { i ∈ I : A_i > 0 }. In Case II, we can derive the optimal solution of Problem (TRO-ES-Sub) as follows. For each MT i with A_i ≤ 0, we directly set x_i^{1*} = 0. For the MTs in I_sub, Problem (TRO-ES-Sub) reduces to:

(TRO-ES-Sub-II): maximize Σ_{i∈I_sub} ln( A_i x_i^1 + B_i )
subject to: Σ_{i∈I_sub} x_i^1 ≤ µ, (16)
variables: x_i^1 ≥ 0, ∀i ∈ I_sub.

It can be observed that Subproblem (TRO-ES-Sub-II) is a strictly convex optimization problem, and constraint (16) is binding at the optimum. We thus introduce the dual variable λ to relax (16) and obtain the following Lagrangian function:

L_II = Σ_{i∈I_sub} ln( A_i x_i^1 + B_i ) + λ ( µ − Σ_{i∈I_sub} x_i^1 ). (17)

With the KKT conditions and (17), we can derive the optimal solution of Subproblem (TRO-ES-Sub-II) as:

x_i^{1*} = max( 1/λ* − B_i/A_i, 0 ), ∀i ∈ I_sub, (18)

with λ* determined according to the following condition:

Σ_{i∈I_sub} x_i^{1*} = µ. (19)

Based on (18) and (19), we propose the following distributed algorithm (i.e., Algorithm 2) to solve Problem (TRO-ES-Sub-II). Algorithm 2 mirrors Algorithm 1: each MT i with A_i ≤ 0 sets x_i^{1*} = 0 and reports to ES 1, and the bisection search (i.e., from Step 4 to Step 12) on λ is performed only among the MTs in I_sub, with ES 1 broadcasting the midpoint λ_cur in each round until condition (19) is satisfied.
In Case III, we have A_i ≤ 0 for all i ∈ I. In this case, acquiring computational power from ES 1 is unprofitable for every MT, and the optimal solution is simply x_i^{1*} = 0, ∀i ∈ I.

As a summary of the above Case I, Case II, and Case III, we propose the following Algorithm 3 to solve Problem (TRO-ES-Sub) and determine {x_i^{1*}}_{i∈I} and V_µ^sub. In Algorithm 3, we use Algorithm 1 (in Step 7) and Algorithm 2 (in Step 10) as subroutines. The preprocessing steps of Algorithm 3 are as follows: each MT i ∈ I reports its x_i^loc to ES 1; ES 1 computes X^loc = Σ_{i∈I} x_i^loc and sends X^loc to all MTs; each MT i then uses (10) to compute A_i = (R + r t_i) e^{−λ t_i} / (X^loc + µ) − p_1, and uses (11) to compute B_i. Based on the signs of {A_i}_{i∈I}, ES 1 identifies which of the three cases holds and invokes the corresponding subroutine. Until now, we have completed solving Problem (TRO-ES-Sub) for the given value of µ.

Proposed Algorithm to Solve Top-Problem (TRO-ES-Top)
We continue to solve Top-problem (TRO-ES-Top) in this subsection. Notice that, for each given µ, we can use Algorithm 3 to obtain the value of V_µ^sub. However, the difficulty in solving Top-problem (TRO-ES-Top) lies in that we cannot derive V_µ^sub analytically. As a result, Top-problem (TRO-ES-Top) is an optimization problem whose objective function cannot be given analytically, which prevents us from using a conventional gradient-based approach to solve it. Fortunately, the feasible interval of µ is fixed, namely, µ ∈ [0, C_{1,tot}]. Exploiting this property, we can use a linear search (LS) with a very small step-size to numerically find the best value of µ (denoted by µ*) that maximizes V_µ^sub. The details are shown in the following Algorithm 4. Notice that, in Step 3, we use Algorithm 3 as the subroutine to determine the value of V_µ^sub (and the corresponding optimal {x_i^{1*}}_{i∈I}) under the currently enumerated µ. With step-size ∆, Algorithm 4 invokes Algorithm 3 for about C_{1,tot}/∆ rounds of iterations (notice that only one of the two subroutines, i.e., Algorithm 1 or Algorithm 2, is invoked in each round). As a result, the total complexity of our proposed Algorithm 4 scales with C_{1,tot}/∆ times the complexity of one invocation of Algorithm 3.
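The outer loop of Algorithm 4 can be sketched as follows, where `v_sub` stands in for the subroutine (Algorithm 3) that returns V_µ^sub for a given µ, and the step-size value is an illustrative choice:

```python
def linear_search_mu(v_sub, c_tot, step=0.01):
    """Algorithm-4-style linear search: enumerate mu over [0, c_tot] with a
    small step and keep the mu that maximizes the subproblem optimum
    v_sub(mu).  v_sub is assumed to be the inner solver (e.g., Algorithm 3)."""
    best_mu, best_val = 0.0, v_sub(0.0)
    mu = step
    while mu <= c_tot + 1e-12:        # small slack absorbs float round-off
        val = v_sub(mu)
        if val > best_val:
            best_mu, best_val = mu, val
        mu += step
    return best_mu, best_val
```

Since V_µ^sub is evaluated only on the grid, the returned µ* is accurate up to the step-size ∆, which matches the complexity discussion above.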

Numerical Results for One-ES Scenario
We present the numerical results to validate the effectiveness of our proposed algorithms for solving Problem (TRO-ES). We choose the parameter-settings according to the data provided in [46] (Table 1 lists the detailed settings). Specifically, we set λ = 1/600 (i.e., the average generating-time for each block is 10 min) and each block-size t_i = 1 Mbit. Meanwhile, according to [46], we set R = 7000 $ for each block and set r = 1000 $/Mbit. In addition, for each MT i, the local computational power x_i^loc is randomly generated according to a uniform distribution within [1, 2] GHash/sec. Finally, we set p_1 = 10 $/GHash according to the ES's unit cost for providing the computational power.
To illustrate the rationale of our proposed decomposition, we provide Figure 2 to show how V_µ^sub varies with µ. We test a 5-MT case (in the left subplot) and a 10-MT case (in the right subplot). Figure 2 shows that V_µ^sub first increases as µ increases, and then gradually decreases once µ is beyond a certain threshold. Such behavior of V_µ^sub matches the intuition very well, namely, neither a too small nor a too large µ is beneficial to the computation offloading. On the one hand, when µ is too small, the MTs can only obtain a small amount of computational power from the ES, which results in a small total net-reward. On the other hand, when µ is too large, a large cost is incurred for obtaining the computational power, which again degrades the total net-reward. This phenomenon is the motivation of our work, i.e., to find the optimal trade-off between exploiting the computational power provided by the ES and paying the consequent cost.

Table 2 validates the effectiveness of our Algorithm 4 for solving Top-problem (TRO-ES-Top). For the purpose of comparison, we use a benchmark scheme that exploits the convexity of Problem (TRO-ES-Sub). Specifically, we use CVX [45] (a widely used modeling toolbox for convex optimization) to solve Problem (TRO-ES-Sub) directly for each given µ and then execute a linear search over µ ∈ [0, C_{1,tot}] to solve Top-problem (TRO-ES-Top). Table 2 shows the optimal value achieved by the different schemes and the corresponding computation-time. Notice that all the results are obtained on a PC with an Intel Core i5-4590 CPU@3.3GHz. As shown in Table 2, our Algorithm 4 achieves the same global optimum as the benchmark scheme and, moreover, consumes significantly less computation-time, which validates the effectiveness of our proposed algorithm. Figure 3 evaluates the impact of the ES's price for providing the computational power to the MTs.
When the price increases, the MTs become conservative in using the computational power from the ES, and thus the total computational power acquired from ES 1 decreases (as shown in the right subplot). Accordingly, all MTs' total net-reward gradually decreases when the price increases (as shown in the left subplot).

Problem Decomposition
We next consider the multi-ES scenario and focus on solving Subproblem (TRO-Sub) and Top-problem (TRO-Top). As we have described before, Problem (TRO) is a non-convex optimization problem which is difficult to solve in general. To this end, we again adopt a vertical decomposition, by introducing an auxiliary variable ν_i to denote MT i's total computational power acquired from all ESs, namely,

ν_i = Σ_{k∈K} x_i^k, ∀i ∈ I. (20)

Firstly, we assume that the values of {ν_i}_{i∈I} are given in advance, and we aim at solving the subproblem as follows:

(TRO-Sub): H_ν^sub = maximize Σ_{i∈I} ln( (R + r t_i) e^{−λ t_i} (x_i^loc + ν_i) / M − Σ_{k∈K} p_k x_i^k )
subject to: Σ_{k∈K} x_i^k = ν_i, ∀i ∈ I, (21)
Σ_{i∈I} x_i^k ≤ C_{k,tot}, ∀k ∈ K, (22)
variables: x_i^k ≥ 0, ∀i ∈ I, k ∈ K,

where M = Σ_{j∈I} (x_j^loc + ν_j) denotes the total computational power of all MTs, which is fixed under the given {ν_i}_{i∈I}.
After solving Subproblem (TRO-Sub) and obtaining H_ν^sub (for the given {ν_i}_{i∈I}), we continue to solve the top-problem as:

(TRO-Top): maximize H_ν^sub
subject to: Σ_{i∈I} ν_i ≤ Q_max,
variables: ν_i ≥ 0, ∀i ∈ I,

where we set Q_max = Σ_{k∈K} C_{k,tot}.

Distributed Algorithm to Solve Subproblem (TRO-Sub)
The reason for us to adopt the above decomposition is that we can propose a distributed algorithm to solve Subproblem (TRO-Sub). Specifically, given {ν_i}_{i∈I}, Subproblem (TRO-Sub) is a convex optimization problem. Thus, we again introduce the dual variable λ_k to relax constraint (22) with respect to each ES k, and obtain the corresponding Lagrangian function as:

L = Σ_{i∈I} ln( (R + r t_i) e^{−λ t_i} (x_i^loc + ν_i) / M − Σ_{k∈K} p_k x_i^k ) + Σ_{k∈K} λ_k ( C_{k,tot} − Σ_{i∈I} x_i^k ), (25)

where the fixed parameter M (under the given {ν_i}_{i∈I}) is:

M = Σ_{j∈I} ( x_j^loc + ν_j ). (26)

An observation on (25) shows that it can be separated across the MTs as follows:

L = Σ_{i∈I} L_i + Σ_{k∈K} λ_k C_{k,tot}, (27)

where, for each MT i, its associated Lagrangian function is:

L_i = ln( (R + r t_i) e^{−λ t_i} (x_i^loc + ν_i) / M − Σ_{k∈K} p_k x_i^k ) − Σ_{k∈K} λ_k x_i^k. (28)

Based on (28), we formulate each MT i's local optimization problem as follows:

(TRO-Sub-MTi): maximize L_i
subject to: Σ_{k∈K} x_i^k = ν_i,
variables: x_i^k ≥ 0, ∀k ∈ K.
To further determine the optimal values of {λ_k}_{k∈K} (i.e., the optimal solution of the dual problem), we use the following subgradient method:

λ_k(l+1) = max( λ_k(l) + ε ( Σ_{i∈I} x_i^k(l) − C_{k,tot} ), 0 ), ∀k ∈ K, (31)

where ε is the step-size of the dual update and l is the iteration index. Notice that (31) means that each ES k can individually update its λ_k based on all MTs' reported {x_i^k}_{i∈I}. Based on each MT i's local optimization problem (TRO-Sub-MTi) and each ES k's dual update in (31), we propose the following distributed algorithm to solve Subproblem (TRO-Sub).
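The dual update (31) and the diminishing step-size rule used below in Algorithm 5 can be sketched as follows (the numeric parameter values are illustrative assumptions):

```python
def update_duals(lams, demands, capacities, step):
    # Subgradient step (31): ES k raises lambda_k when the MTs' aggregate
    # demand sum_i x_i^k exceeds its capacity C_k,tot, lowers it otherwise,
    # and projects the result onto lambda_k >= 0.
    return [max(lam + step * (d - c), 0.0)
            for lam, d, c in zip(lams, demands, capacities)]

def diminishing_step(l, a=1.0, b=1.0):
    # Step-size rule eps(l) = a / (b + l); a diminishing step of this form
    # satisfies the standard conditions for subgradient convergence [45].
    return a / (b + l)
```

Each ES needs only its own aggregate demand to perform this update, which is what makes the overall scheme distributed.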
In particular, according to [45], using the diminishing step-size (in Step 7) enables us to reach the dual optimum. Thus, Algorithm 5 is guaranteed to converge to the optimal solution of Subproblem (TRO-Sub) and determine H_ν^sub.

Algorithm 5: To solve Subproblem (TRO-Sub) and determine H_ν^sub
1: Initialization: set the iteration index l = 1 and initialize {λ_k(1)}_{k∈K}.
2: while not converged do
3: Each ES k broadcasts λ_k(l) to all MTs.
4: Given {λ_k(l)}_{k∈K}, each MT i solves its local Problem (TRO-Sub-MTi) and obtains its {x̄_i^k}_{k∈K}.
5: Each MT i reports its x̄_i^k to each ES k.
6: After collecting {x̄_i^k}_{i∈I} from all MTs, each ES k updates λ_k(l+1) according to (31) with the diminishing step-size ε(l) = a/(b + l), where a and b are given parameters.
7: Set l = l + 1.
8: end while
9: Each MT i computes ln( (R + r t_i) e^{−λ t_i} (x_i^loc + ν_i) / M − Σ_{k∈K} p_k x̄_i^k ) and reports it to ES 1. ES 1 sets H_ν^sub as the sum of the reported values.

Proposed Algorithm to Solve Top-Problem (TRO-Top)
We then continue to solve Problem (TRO-Top). The difficulty in solving Problem (TRO-Top) lies in the fact that we cannot express H_ν^sub as an analytical function of {ν_i}_{i∈I}. To tackle this difficulty, we adopt the idea of simulated annealing (SA) [47] to determine the optimal values of {ν_i}_{i∈I} (denoted by {ν_i*}_{i∈I}). The details are shown in the following Algorithm 6. Based on the idea of SA, our Algorithm 6 executes a randomized search for finding {ν_i*}_{i∈I}:

• If the newly generated {ν_i}_{i∈I} (which is randomly generated within a neighborhood of the previously located {ν_i}_{i∈I}) improves the current best value (CBV), we accept the newly generated {ν_i}_{i∈I}.

• If the newly generated {ν_i}_{i∈I} fails to improve the CBV, we accept it with a certain probability, so as to avoid being trapped in a local optimum. In particular, the probability of accepting a non-improving {ν_i}_{i∈I} gradually decreases according to an annealing process in which the annealing temperature decreases over time. As a result, as the search proceeds, it becomes increasingly likely that a non-improving {ν_i}_{i∈I} is rejected.
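The acceptance rule and annealing loop described above can be sketched as follows, reusing Algorithm 6's parameter values (T_1 = 9.9, d = 0.9, T_final = 0.001). The Metropolis-style acceptance probability, the `neighbor` function, and the 1-D test objective are illustrative stand-ins for perturbing {ν_i}_{i∈I} and evaluating H_ν^sub via Algorithm 5:

```python
import math
import random

def sa_accept(delta, temperature, rand=random.random):
    # Metropolis rule: always accept an improvement (delta > 0); accept a
    # non-improvement with probability exp(delta / temperature), which
    # shrinks as the temperature anneals toward zero.
    if delta > 0:
        return True
    return rand() < math.exp(delta / temperature)

def anneal(objective, x0, neighbor, T=9.9, d=0.9, T_final=0.001, seed=0):
    # Skeleton of Algorithm 6's randomized search: propose a neighbor of the
    # current solution, accept via sa_accept, track the current best value
    # (CBV), and geometrically decay the temperature until T_final.
    rng = random.Random(seed)
    cur, f_cur = x0, objective(x0)
    best, f_best = cur, f_cur
    while T > T_final:
        cand = neighbor(cur, rng)
        f_cand = objective(cand)
        if sa_accept(f_cand - f_cur, T, rng.random):
            cur, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = cur, f_cur
    # best-so-far is only ever replaced by an improvement
        T *= d
    return best, f_best
```

With these parameters the loop runs for roughly ln(T_final/T_1)/ln(d) ≈ 90 temperature stages, each invoking the inner solver once, which matches the structure of Algorithm 6.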
Notice that, in Step 7, given the newly generated {ν_i}_{i∈I}, we use Algorithm 5 to compute the value of H_ν^sub. Regarding our proposed Algorithm 6, however, we notice that it is very challenging to derive its computational complexity analytically. The key reason is the difficulty of characterizing the complexity of the subroutine, i.e., Algorithm 5, which relies on the subgradient method to reach convergence.

Algorithm 6: To solve Top-problem (TRO-Top)
1: Initialization: initialize the temperature T_1 = 9.9, the decaying-rate d = 0.9, the lowest temperature T_final = 0.001, N_count = 0, and the time index t = 1. Randomly generate {ν_i}_{i∈I}, set the current solution CS = {ν_i}_{i∈I}, and use Algorithm 5 to compute the corresponding H_ν^sub, which initializes the CBV.
2: while T_t > T_final do
3: Randomly generate a new candidate {ν_i}_{i∈I} within a neighborhood of CS, and use Algorithm 5 to compute the corresponding H_ν^sub.
4: If the candidate improves the CBV, accept it and update CS and the CBV.
5: Otherwise, generate a number according to the uniform distribution within [0, 1], and accept the candidate if this number is smaller than the acceptance probability determined by the current temperature T_t.
6: Update T_{t+1} = d · T_t and set t = t + 1.
7: end while, and break once the temperature reaches T_final.

Numerical Results for Multi-ES Scenario
In this subsection, we show the performance of our proposed algorithms for the multi-ES scenario. Figure 4 shows the convergence of Algorithm 5 (under the given {ν_i}_{i∈I}). We use a 5-MT and 3-ES case, with the three ESs having {p_1, p_2, p_3} = {10, 20, 30} $/GHash. In Figure 4a, we set {ν_i} = {1, 2, 3, 4, 5} for the five MTs. The top subplot in Figure 4a shows the convergence of the dual variables {λ_1, λ_2, λ_3}, and the bottom subplot shows the convergence of H_ν^sub to the dual optimum (denoted by the red dashed line). Figure 4b, which uses another setting of {ν_i} for the five MTs, shows the same convergence property as Figure 4a. Figure 5 shows the convergence of our Algorithm 6 for solving Top-problem (TRO-Top). The left subplot shows the 5-MT and 3-ES case (used in Figure 4 before), and the right subplot shows a 10-MT and 3-ES case. The results show that our algorithm quickly converges to the optimal solution (i.e., {ν_i*}_{i∈I}) and reaches the global optimum of the total net-reward of all MTs. Table 3 evaluates the accuracy and efficiency of our proposed Algorithm 6 for solving Top-problem (TRO-Top), in comparison with an exhaustive-search method that enumerates all possible {ν_i}_{i∈I}. Since the exhaustive-search method incurs significant computational complexity, we consider two cases, namely, a 5-MT and 2-ES case and a 3-MT and 2-ES case. We set {p_1, p_2} = {10, 20} $/GHash for the two ESs, and vary each MT's block-size t_i from 0.2 Mbit to 1 Mbit. For each tested case, we show the total net-reward (the top value in each cell) and the corresponding computation-time (the bottom value in each cell). The results in Table 3 show that our Algorithm 6 achieves exactly the same optimal solution as the exhaustive-search method, while consuming significantly less computation-time.
To show the advantage of our proposed algorithms, we further compare their performance with that of a heuristic equal-allocation scheme in which each ES equally divides its total computational power among all MTs. Figure 6a below shows the results under the scenario of 10 MTs and one ES, and Figure 6b shows the results under the scenario of five MTs and two ESs. Both figures validate that our proposed algorithms outperform the equal-allocation scheme. This advantage essentially comes from the fact that our algorithms properly allocate the computational powers at the different ESs among the MTs.
Finally, in Figure 7, we evaluate the impact of the ESs' prices for providing the computational power to the MTs. We use the same parameter-settings as in Table 3, but fix p_1 = 10 $/GHash and vary p_2 from 6 $/GHash to 14 $/GHash. Both subplots show that all MTs' total computational power acquired from ES 2 gradually decreases due to the increasing price. As a result, the MTs are encouraged to acquire more computational power from ES 1.

Table 3. Accuracy and efficiency of our Algorithm 6.

Conclusions
In this work, we have investigated the optimal computational power allocation for multi-access MEC enabled Blockchain. In particular, we focused on the scenario in which a group of MTs acquire computational power from the ESs, with the objective of maximizing all MTs' total net-reward in the mining process while keeping the fairness among the MTs. By exploiting the layered structure of the formulated optimization problem, we have proposed two distributed algorithms, one for the single-ES scenario and another for the multi-ES scenario, to efficiently compute the MTs' optimal computational power allocations. Extensive numerical results have been provided to validate the effectiveness of our proposed algorithms. In this work, we mainly focused on the reward optimization from the MTs' perspective. For our future work, we will further consider the revenue of the ESs in providing the multi-access MEC service and investigate how different ESs adjust their prices to optimize their revenues.