#### 3.2. Problem Analysis

Let $q^*$ be the optimal value of problem P1. Then, we have:

where $\mathcal{F}$ denotes the feasible solution set of problem P1, which satisfies Constraints (9)–(13). According to [29], problem P1 can be transformed into an equivalent linear form in $q$ as follows:

where $q$ corresponds to a feasible solution of problem P1, and problem P1 is equivalently transformed into problem P2 if the following condition holds:

Therefore, solving problem P1 can be replaced by solving problem P2 instead. Here, the Dinkelbach method [30] is applied to find the optimal $q^*$. In particular, our designed algorithm consists of a two-layer framework: for a given $q$, problem P2 is solved in the inner layer of our presented method, and then $q$ is updated in the outer layer in terms of (13) until the obtained solution satisfies condition (15).
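The two-layer Dinkelbach framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: `solve_P2` is a hypothetical inner-layer solver that, for a given $q$, returns a maximizer of the parameterized objective (numerator minus $q$ times denominator) over the feasible set.

```python
def dinkelbach(numerator, denominator, solve_P2, q0=0.0, eps=1e-6, max_iter=100):
    """Dinkelbach's method for maximizing numerator(x)/denominator(x).

    solve_P2(q) is assumed to return x maximizing
    numerator(x) - q * denominator(x) over the feasible set
    (the inner-layer problem P2)."""
    q = q0
    for _ in range(max_iter):
        x = solve_P2(q)                        # inner layer: solve P2 for given q
        t = numerator(x) - q * denominator(x)  # T(q): residual of the optimality condition
        if abs(t) <= eps:                      # stopping rule analogous to condition (15)
            return q, x
        q = numerator(x) / denominator(x)      # outer layer: update q
    return q, x
```

On a toy linear-fractional problem, e.g. maximizing $(2x+1)/(x+2)$ over $x\in[0,1]$, the inner solver is a one-line thresholding rule and the loop converges in two iterations.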

To effectively deal with problem P2, we first introduce a set of slack variables $e=\left\{e_{n,j}\right\}$. Then, problem P2 can be transformed as:

Once the optimal variables $(e^*,\tau^*)$ are obtained by solving problem P3, one can derive the optimal $p_{n,j}^{*}$ by:

Moreover, problem P3 is convex, so the Lagrangian dual method is applied to solve it. For a given $q$, the Lagrange function of problem P3 is:

where $\mu ,\nu ,\lambda ,\delta$ are the Lagrangian multipliers related to Constraints (9)–(13), respectively. As a result, the Lagrange dual function $g(\mu ,\nu ,\lambda ,\delta )$ can be given by:

and its dual problem is:

The optimal Lagrangian variables $(\mu ,\nu ,\lambda ,\delta )$ can be found by solving the problem in (20) with the BCD method [17], in which one of $\tau$ and $e$ is fixed while the other is optimized. For a given $e$, there are two cases for $\tau_j^*$. First, when $\sum_{n=1}^{N}e_{n,j}=0$, i.e., $e_{n,j}=0$ for all $n$, no energy is transmitted to user $j$, so $\tau_j^*=0$. Second, when $\sum_{n=1}^{N}e_{n,j}>0$, we have $\partial^2 L/\partial \tau_j^2<0$, which indicates that the Lagrange function is concave in $\tau_j$. Setting $\partial L/\partial \tau_j=0$, $\tau_j^*$ can be obtained by:

where $A_j=\sum_{n=1}^{N}P_n\nu_{n,j}-qp_j^c-\lambda$ and $0\le \tau_j\le 1$.
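The two-case rule for $\tau_j^*$ can be illustrated as follows. Since the closed-form expression for the stationary point is not reproduced here, this sketch takes a hypothetical callable `dL_dtau_j` for $\partial L/\partial \tau_j$ and exploits only the property stated above, namely that $L$ is concave in $\tau_j$ (so the derivative is decreasing), finding the root by bisection on $[0,1]$.

```python
def optimal_tau_j(e_col, dL_dtau_j, tol=1e-9):
    """Two-case rule for tau_j* at fixed e.

    e_col: the energies e_{n,j} for user j across all n.
    dL_dtau_j: derivative of the Lagrangian w.r.t. tau_j, assumed
    decreasing on [0, 1] (L concave in tau_j); a stand-in for the
    stationarity condition dL/dtau_j = 0."""
    if sum(e_col) == 0:            # case 1: no energy sent to user j
        return 0.0
    lo, hi = 0.0, 1.0
    if dL_dtau_j(lo) <= 0:         # derivative nonpositive everywhere: lower bound
        return 0.0
    if dL_dtau_j(hi) >= 0:         # derivative nonnegative everywhere: upper bound
        return 1.0
    while hi - lo > tol:           # case 2: bisect for dL/dtau_j = 0
        mid = 0.5 * (lo + hi)
        if dL_dtau_j(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```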

With the obtained $\tau^*$, the BCD method is used to optimize $e$; that is, with $\tau^*$ fixed, the optimized $e_{n,j}$ is found. Since the value of $\partial L/\partial e_{n,j}$ decreases as $e_{n,j}$ increases, $e$ can be uniquely determined from $\partial L/\partial e_{n,j}=0$ (with $e_{n,j}\ge 0$), i.e.,

where $B_{n,j}=-q+\zeta \sum_{j^{\prime}\ne j}^{J}\mu_{j^{\prime}}h_{n,j^{\prime}}-\upsilon_{n,j}$. Since the Lagrangian function $L$ is concave w.r.t. the variables $(e,\tau)$, the convergence of the BCD method [31] can be ensured.

With the above operations, the optimal solution to Problem (20) is obtained through two-layer BCD loops: in the outer layer, $e$ and $\tau$ are optimized alternately, while in the inner layer, each $e_{n,j}$ is optimized with the other $e_{i,j}$ fixed, for a given $\tau$. The iteration stops when $L$ no longer grows.
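The alternating structure can be sketched as below. This is a generic illustration, assuming hypothetical callables `update_tau` and `update_e` that stand in for the closed-form block updates derived above, and `L_value` for evaluating the Lagrangian; the loop stops once $L$ stops improving.

```python
def bcd(update_tau, update_e, L_value, e0, tau0, tol=1e-8, max_iter=500):
    """Two-layer BCD: alternately optimize tau and e until L stalls.

    update_tau(e) and update_e(tau, e) are assumed to return the
    block maximizers with the other block fixed."""
    e, tau = e0, tau0
    best = L_value(e, tau)
    for _ in range(max_iter):
        tau = update_tau(e)        # block 1: tau with e fixed
        e = update_e(tau, e)       # block 2: e with tau fixed (inner layer
                                   # would cycle over each e_{n,j} in turn)
        cur = L_value(e, tau)
        if cur - best <= tol:      # L no longer grows: stop
            return e, tau
        best = cur
    return e, tau
```

On a strongly concave quadratic test function, the alternating updates converge geometrically to the joint maximizer, which is what the concavity of $L$ in $(e,\tau)$ guarantees here.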

Next, the ellipsoid method [32] is used to obtain the optimal values of the Lagrangian multipliers, and the sub-gradients [30] are used to update them as follows:
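Since the exact ellipsoid update is not reproduced here, a simpler projected sub-gradient step can illustrate how the dual variables move: each multiplier of an inequality constraint steps against its sub-gradient and is projected back onto the nonnegative orthant. The dict-of-lists layout and the fixed step size are illustrative choices, not the paper's.

```python
def update_multipliers(mults, subgrads, step):
    """One projected sub-gradient step on the dual variables.

    mults / subgrads: dicts keyed by multiplier group (e.g. 'mu', 'nu',
    'lambda', 'delta'), each holding a list of values.  The dual problem
    is minimized, so we step against the sub-gradient and clip at zero."""
    return {k: [max(0.0, m - step * g) for m, g in zip(mults[k], subgrads[k])]
            for k in mults}
```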

Finally, after completing the above steps to solve the dual function, $q$ is updated until it converges to the required accuracy, at which point the optimal solution to problem P1 is output. The overall procedure is summarized in Algorithm 1.

The complexity of the BCD method is $O\left(NJ^2\right)$, and the complexity of the ellipsoid method is $O\left((JN+J+1)^2\right)$. Thus, the total complexity of Algorithm 1 is $O\left(NJ^2(JN+J+1)^2\mathcal{I}\right)$, where $\mathcal{I}$ is the number of iterations for updating $q$. In Section 4, our simulation results will show that the proposed method achieves much higher system EE compared with the benchmark methods.

**Algorithm 1:** The presented solution approach for solving problem P1.

1. **Initialize** $q$.
2. **While** $|T(q)| > \epsilon$ **do**
3. Initialize $\{\mu ,\nu ,\lambda ,\delta \}$.
4. **While** $\{\mu ,\nu ,\lambda ,\delta \}$ have not converged to accuracy $\epsilon$ **do**
5. Initialize $e$ and $\tau$.
6. **While** $L$ still improves **do**
7. Compute the $\tau_j$ that maximizes $L$ with $e$ fixed.
8. Compute $e$ with fixed $\tau$.
9. Update the Lagrangian multipliers $\{\mu ,\nu ,\lambda ,\delta \}$ with the ellipsoid method via subgradients.
10. Update $q$.
11. **Obtain** $p^*$.