Distributed Cell Clustering Based on Multi-Layer Message Passing for Downlink Joint Processing Coordinated Multipoint Transmission

Abstract: Joint processing coordinated multipoint transmission (JP-CoMP) has attracted considerable attention as part of the effort to cope with the increasing traffic demands of next-generation wireless communication systems. By clustering neighboring cells and transmitting cooperatively within each cluster, JP-CoMP efficiently mitigates inter-cell interference and improves the overall system throughput. However, finding the optimal clustering is a nonlinear mathematical problem, making a practical solution very challenging. In this paper, we propose a distributed cell clustering algorithm that maximizes the overall throughput of the JP-CoMP scheme. The proposed algorithm renders the nonlinear clustering problem into an approximated linear formulation and introduces a multi-layer message-passing framework to find an efficient solution with a very low computational load. The main advantages of the proposed algorithm are that (i) it enables distributed control among neighboring cells without the need for any central coordinator; (ii) the computational load imposed on each cell is kept to a minimum; and (iii) the required message exchanges via backhaul impose only a small overhead on the network. The simulation results verify that the proposed algorithm finds an efficient JP-CoMP clustering that outperforms previous algorithms in terms of both sum throughput and edge user throughput. Moreover, the convergence properties and computational complexity of the proposed algorithm are compared with those of previous algorithms, confirming its usefulness in practical implementations.


Introduction
The growing demand for data transmission is a major issue facing cellular networks [1]. In 2016, mobile data demand grew by 63 percent, an 18-fold increase over the preceding five years [2]. This growth must be met through improved cellular network performance. Increasing the number of base stations (BSs) and exploiting spatial reuse are examples of solutions to this problem [3]. However, such solutions must be accompanied by strategies that prevent data transmission collisions, especially in cell-edge regions [4], where coordination among base stations is required. To prevent data transmission collisions, base station coordination was introduced by the Third Generation Partnership Project (3GPP) in Release 11 [5] for Long Term Evolution Advanced (LTE-A). This method is known as Coordinated Multipoint (CoMP).
CoMP has become one of the key methods in fifth-generation (5G) wireless communications. CoMP is a network cooperation method that mitigates inter-cell interference (ICI) from neighboring cells to provide higher spectral efficiency. CoMP offers benefits in several respects, such as extending cell coverage and improving cell-edge throughput [3,4,6,7].
Downlink CoMP systems can be classified into two types: joint processing CoMP (JP-CoMP) and coordinated beamforming CoMP (CB-CoMP) [8]. The main difference between JP-CoMP and CB-CoMP lies in their implementation, i.e., whether user data is shared across the cooperating cells via backhaul [6]. JP-CoMP exchanges both data and channel state information (CSI) among the cooperating base stations in a cluster [9], while CB-CoMP shares only CSI, without exchanging data among the cooperating base stations [10].
In CoMP schemes, the formation of efficient clusters is a critical factor affecting overall performance. However, finding the optimal clustering typically requires combinatorial optimization owing to the nonlinearity of the objective. Suboptimal clustering results in increased computational complexity, improper data exchange, and degraded CoMP throughput [8]. Limited backhaul capacity is another crucial factor that degrades JP-CoMP performance in real-world networks.
Cell clustering in CoMP has been widely studied in recent years as part of the effort to improve inter-cell interference management [8]. Clustering algorithms are classified into two types: static and dynamic. Static clustering methods aim to optimize cell-edge throughput by relying on a predetermined, fixed base station cluster [8]. Each static clustering algorithm uses a different strategy to determine an efficient cluster formation; examples include overlapping [11], cooperative formation strategies [12], and sectoring [13]. These methods have simple configurations, but the aforementioned works did not consider suitable means of adapting the clustering to changes in the network.
To this end, dynamic clustering has been introduced to realize additional performance improvements. These methods use different approaches to achieve optimal performance, i.e., dynamic network-centric clustering [14], the blossom tree algorithm [15], graph-based clustering [16], the use of sub-clusters [17], novel re-clustering [18], coalitional game theory [19], density-based spatial clustering [20], channel state prediction [21], a weighted traffic model [22], the exchange-matching algorithm [23], mixed-integer nonlinear programming [24], and the successive convex algorithm [25]. Dynamic clustering adapts to network changes, but these methods rely on centralized control of the network, which requires extensive information sharing and incurs high computational complexity.
CoMP clustering has also recently been explored in conjunction with affinity propagation (AP) [26] and capacitated affinity propagation (CAP) [27]. Both methods use message passing [28][29][30][31] to obtain the base station clusters. Compared to AP, CAP limits the maximum number of clusters in order to enable finer control over them. These methods provide decentralized clustering with low computational complexity. However, they solve the cell clustering problem by minimizing the sum distance rather than maximizing the sum capacity; this mismatch of objectives results in suboptimal performance.
In this paper, we propose a distributed downlink JP-CoMP clustering algorithm based on multi-layer message passing. Using a graphical model of the clustering problem, all base stations perform distributed optimization by exchanging small pieces of information known as messages. Messages are exchanged across multiple layers to find the best solution. In addition, the distributed nature of the proposed algorithm yields lower computational complexity and incurs less backhaul overhead.
The main contributions of this paper are summarized as follows. We propose a distributed cell clustering algorithm utilizing message passing in downlink JP-CoMP. The paper addresses sum capacity maximization in cell clustering by approximating it with a linear objective function. Based on this approximation, the dynamic cell clustering problem is rendered into a multi-layer message passing problem. In the first layer, each base station exchanges messages with its neighboring base stations and chooses an appropriate partner based on the optimal sum capacity. In the second layer, the partners selected in the first layer again exchange messages with the other chosen partners and select an appropriate partner based on the closest distance. Finally, the partner BSs in all layers form a cooperating cluster in which they share CSI and user data for joint transmission. By enabling distributed control of the network, the proposed algorithm reduces both the computational complexity and the backhaul overhead.
The rest of this paper is organized as follows. Section 2 describes the JP-CoMP system model. Section 3 explains message passing for JP-CoMP clustering. Section 4 presents the simulation results. Section 5 concludes this paper.

System Model
This section provides the assumptions, constraints, and system model of joint processing CoMP. It consists of three subsections: downlink JP-CoMP, the channel model, and the JP-CoMP sum capacity.

Downlink JP-CoMP
JP-CoMP exchanges both data and channel state information (CSI) among the cooperating base stations, which are connected by backhaul links. Downlink JP-CoMP increases the transmission throughput for each user: the cooperating base stations transmit the downlink signal to each user in the cluster, so each user receives signals from several base stations. The JP-CoMP scheme typically provides a higher sum capacity than other schemes by mitigating interference from neighboring cells, and its coordination scheme has a simple configuration. However, JP-CoMP requires high backhaul bandwidth and low latency because of the data exchanges among cooperating base stations.

Channel Model
Consider a downlink cellular system that consists of C cells. Each cell contains one base station and M randomly distributed users. Each base station has N_t transmit antennas, and each user has N_r receive antennas. Figure 1 shows an example of a C = 7 cellular network for a downlink JP-CoMP system, illustrating how a base station cluster transmits data to the selected users in the cluster. Each base station exchanges data and CSI with the cooperating base stations in order to form a cell cluster, and the cluster members transmit data to all users in the cluster. The transmission medium from a base station to the users is represented by a channel. Suppose that the base station in the ith cell transmits data to users in the gth cell. The relationship between them is represented by the channel H_ig, defined as

H_ig = [h_ig,1^T  h_ig,2^T  ...  h_ig,M^T]^T,    (1)

where h_ig,m ∈ C^(1×N_t) is the channel vector between the base station in the ith cell and the mth user in the gth cell, so that H_ig ∈ C^(M×N_t) represents the channel matrix between the base station in the ith cell and all users in the gth cell. This channel is a key parameter in JP-CoMP.
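As an illustration of this channel model, the sketch below draws an i.i.d. Rayleigh-fading H_ig with unit average power per entry. The function name and the Rayleigh assumption are ours for illustration; the paper leaves the fading distribution to the simulation parameters.

```python
import numpy as np

def channel_matrix(num_users, num_tx, rng=None):
    """Draw an M x N_t Rayleigh-fading channel matrix H_ig whose rows
    are the per-user channel vectors h_ig,m in C^(1 x N_t)."""
    rng = np.random.default_rng() if rng is None else rng
    real = rng.standard_normal((num_users, num_tx))
    imag = rng.standard_normal((num_users, num_tx))
    # Dividing by sqrt(2) gives unit average power per complex entry
    return (real + 1j * imag) / np.sqrt(2.0)

H_ig = channel_matrix(num_users=4, num_tx=2)
print(H_ig.shape)  # (4, 2): M users by N_t transmit antennas
```

Each row of the returned matrix is one user's channel vector, matching the stacking in (1).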

JP-CoMP Sum Capacity
The cooperating base stations in JP-CoMP transmit data to all users in the cluster, which requires a channel aggregation scheme in JP-CoMP clustering. Suppose that cluster Y consists of K (≤ C) base stations; cluster Y serves the transmissions for the users in the cluster. The cooperating base stations in cluster Y have X = KN_t transmit antennas in total, and each user has V = N_r receive antennas. Under the assumption of ideal transmitter and receiver beamforming, the aggregated channel can be defined as

H_Y = [h_1  h_2  ...  h_K],    (2)

where h_k ∈ C^(N_r×N_t) is the channel from the kth cooperating base station to the selected user, so that H_Y ∈ C^(N_r×KN_t) represents the aggregated channel between the transmit antennas of the cooperating base stations in cluster Y and the selected user in the cluster. Using Shannon's capacity equation, the sum capacity over the cooperating base stations can be written as

R = Σ_{Y=1}^{N} log2 det( I + ρ H_Y H_Y^H ),    (3)

where I represents the identity matrix, ρ denotes the transmit signal-to-noise ratio, and N denotes the total number of clusters.
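The sum capacity in (3) can be evaluated numerically as a sanity check. The sketch below assumes a nominal transmit SNR and random Rayleigh channels; the function names and the SNR value are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def cluster_capacity(H_Y, snr=1.0):
    """Shannon capacity (bits/s/Hz) of one cluster's aggregated
    N_r x (K*N_t) channel: log2 det(I + snr * H_Y H_Y^H)."""
    n_r = H_Y.shape[0]
    gram = H_Y @ H_Y.conj().T                      # Hermitian, positive semidefinite
    det = np.linalg.det(np.eye(n_r) + snr * gram)  # real and >= 1 up to rounding
    return float(np.log2(det.real))

def sum_capacity(clusters, snr=1.0):
    """Sum the per-cluster capacities over all N clusters, as in (3)."""
    return sum(cluster_capacity(H, snr) for H in clusters)

rng = np.random.default_rng(0)
clusters = [(rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
            for _ in range(3)]
print(sum_capacity(clusters, snr=10.0) > 0)  # True
```

Because the Gram matrix is positive semidefinite, each per-cluster term is nonnegative, so the sum capacity can only grow as clusters are added.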

Message Passing for JP-CoMP Clustering
This section explains the main purpose and details of the algorithm for multi-layer message passing. The section covers the main idea, problem formulation, and message derivation.

Main Idea
Determining an appropriate cell partner by maximizing the sum capacity (3) is a challenging problem. The sum capacity is a nonlinear objective for which no efficient solver is available; combinatorial optimization is required to obtain the optimal clustering. Solving the combinatorial problem requires centralized processing, which in turn incurs a high computational burden and demands high backhaul capacity. Therefore, we propose a distributed algorithm based on message passing that reduces both the computational complexity and the backhaul overhead.
The nonlinear optimization in (3) cannot easily be solved directly; instead, the objective function must be approximated. Hence, we approximate the original nonlinear objective function with a linear form. Using this transformation, multi-layer message passing is introduced as the proposed algorithm for cell clustering in JP-CoMP. The proposed algorithm provides lower computational complexity and requires less backhaul capacity.
The main idea of multi-layer message passing is to perform distributed optimization by exchanging messages among all base stations in a multi-layer assessment. The proposed algorithm assesses the cell clusters based on a specific objective function for each layer. Multi-layer message passing divides the clustering process into two layers: the cluster (first layer) and the super-cluster (second layer). A cluster is represented by an exemplar, while a super-cluster is represented by a super-exemplar. Although our approach is suboptimal, our simulation results confirm that the proposed algorithm outperforms previous schemes.

Problem Formulation
As described above, we approximate the original nonlinear objective with a linear-form objective function. With this transformation, multi-layer message passing employs a mixed-form objective function to provide efficient cell clustering in JP-CoMP. This approach trades some of the performance improvement for tractability, owing to the relaxed cluster-size limit and the mismatch of the approximated function. The mixed-form objective function consists of two parameters: the optimal sum capacity and the closest distance.
Multi-layer message passing operates as follows. In the cluster layer, each base station exchanges messages with neighboring cells and chooses its exemplar based on the sum capacity. To characterize the throughput improvement of cooperation, the CoMP gain s_ij is defined in our model. Maximizing the CoMP gain increases the efficiency of the JP-CoMP throughput improvement by determining the tendency of one base station to coordinate with another based on the sum capacity. This term is defined as

s_ij = R_comp(i, j),    (4)

where R_comp(i, j) represents the sum capacity of a specific user when the serving base stations, in this case the ith and jth base stations, operate cooperatively.
In the super-cluster layer, each exemplar exchanges messages with the other exemplars and chooses its super-exemplar based on the closest distance. The distance provides a simple quantitative parameter for JP-CoMP: although it is not directly related to throughput, minimizing the distances between the members of a cluster increases the efficiency of the JP-CoMP throughput improvement. The inverse distance is used in our model and is defined as

l_ik = 1 / T(D_i, D_k),    (5)

where D_k is the position of the kth base station as an exemplar, D_i is the position of the ith base station as an exemplar, and T(·, ·) denotes the distance between two positions. Finally, all base stations in the same super-cluster cooperate, ignoring the cluster limitation. The objective function, in this case the sum capacity, can be represented using s_ij, l_i,C_ii, and the assignment matrix as

S = Σ_i Σ_{j∈φ(i), j≠i} s_ij C_ij + Σ_i l_i,C_ii,    (6)

where the assignment variables [C_ij]_{N×N} consist of the non-diagonal variables C_ij ∈ {0, 1} and the diagonal variables C_ii ∈ {0, 1, ..., N}. For a non-diagonal variable, C_ij = 1 implies that the jth base station is the exemplar of the ith base station. For a diagonal variable, C_ii = k ∈ {1, ..., N} implies that the kth base station is the super-exemplar of the ith base station; otherwise (C_ii = 0), the ith base station is not an exemplar.
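To make the second ingredient of the mixed objective concrete, the sketch below computes the inverse-distance similarity l_ik from BS positions. The function name and the example positions are illustrative assumptions, not the paper's layout.

```python
import numpy as np

def inverse_distance(positions):
    """l_ik = 1 / T(D_i, D_k): inverse Euclidean distance between BS
    positions, used when exemplars choose a super-exemplar (2nd layer)."""
    pos = np.asarray(positions, dtype=float)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):   # diagonal distances are zero
        sim = 1.0 / dist
    np.fill_diagonal(sim, 0.0)  # self-similarity is handled by preferences, not l_ik
    return sim

# Three BSs at the corners of a 300-400-500 right triangle (illustrative)
l = inverse_distance([[0.0, 0.0], [300.0, 0.0], [0.0, 400.0]])
print(l[1, 2])  # 1/500 = 0.002 (BSs 1 and 2 are 500 m apart)
```

Larger l_ik values (shorter distances) make a candidate super-exemplar more attractive, which matches the closest-distance criterion of the second layer.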
The sum capacity maximization problem in (3) is thus approximated by the linear optimization problem

maximize Σ_i Σ_j s_ij C_ij + Σ_i l_i,C_ii    (7)
subject to Σ_{j∈φ(i)} C_ij = 1, for all i,    (8)
Σ_{i∈φ(j)} C_ij ≤ 1, for all j,    (9)
C_ii = k ⟹ C_kk = k, for all k ∈ θ(i).    (10)

Each constraint has its own meaning in forming the multi-layer clustering. Constraint (8) states that the ith BS should choose exactly one BS as its exemplar, where φ(i) denotes the set of neighboring BSs adjacent to the ith BS. Constraint (9) states that each BS cooperates with at most one BS in the first-layer clustering, where φ(j) denotes the set of neighboring BSs adjacent to the jth BS. This constraint enforces only two possible cases: when equality holds, exactly one neighboring BS around the jth BS selects the jth BS as an exemplar; when strict inequality holds, no neighboring BS selects the jth BS as an exemplar. Constraint (10) states that if the ith BS, acting as an exemplar, has selected the kth BS as its super-exemplar, i.e., C_ii = k, then the kth BS must be the super-exemplar of itself. The notation θ(i) denotes the set of neighboring exemplars adjacent to the ith BS as an exemplar.
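The constraints (8)-(10) can be checked mechanically. In the hypothetical sketch below, the off-diagonal assignments are held in a binary matrix A and the diagonal assignments in a 1-indexed vector e (e[i] = k means BS i is an exemplar whose super-exemplar is BS k; e[i] = 0 means BS i is not an exemplar). The helper name and this encoding are ours, not the paper's.

```python
import numpy as np

def is_feasible(A, e, nbr):
    """Check an assignment against constraints (8)-(10).
    A[i][j] == 1  : BS j is the exemplar of BS i (off-diagonal C_ij).
    e[i] == k > 0 : BS i is an exemplar whose super-exemplar is BS k (C_ii).
    nbr[i]        : neighbor set phi(i) of BS i."""
    A = np.asarray(A)
    n = A.shape[0]
    for i in range(n):
        # (8): BS i selects exactly one exemplar (possibly itself, via e[i] > 0)
        if sum(A[i][j] for j in nbr[i]) + (1 if e[i] > 0 else 0) != 1:
            return False
    for j in range(n):
        # (9): at most one neighbor selects BS j as its exemplar
        if sum(A[i][j] for i in nbr[j]) > 1:
            return False
    for i in range(n):
        # (10): super-exemplar consistency, C_kk = k whenever C_ii = k
        k = e[i]
        if k > 0 and e[k - 1] != k:
            return False
    return True

nbr = [{1, 2}, {0, 2}, {0, 1}]
A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]  # BS 0 picks BS 1 as its exemplar
e = [0, 3, 3]                          # BSs 1, 2 are exemplars; BS 2 is super-exemplar
print(is_feasible(A, e, nbr))  # True
```

Such a check is useful for validating decoded assignments after the message passing of Section 3 has converged.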
To apply a message passing algorithm to this multi-layer cell clustering scheme, problem (7) is reformulated as the unconstrained problem

maximize Σ_i Σ_j S_ij C_ij + Σ_i F_i(X_i) + Σ_j H_j(X_j) + Σ_k G_k(X_k),    (11)

where S_ij C_ij denotes the mixed-form objective function and the three functions encoding the constraints (8)-(10) are defined over the variable sets X_i = {C_ij : j ∈ φ(i)}, X_j = {C_ij : i ∈ φ(j)}, and X_k = {C_kk : k ∈ θ(i)}. F_i(X_i) is the association function for constraint (8), representing that each base station must be assigned to exactly one base station. If a base station is assigned to more than one base station, F_i(X_i) takes the value minus infinity and the objective function cannot be maximized; otherwise, F_i(X_i) is zero and does not contribute to the objective function. H_j(X_j) represents constraint (9), which implies cluster consistency: each base station selects at most one base station as its partner in the cluster. A violation of this constraint yields minus infinity, so the objective function can never be maximized; otherwise, H_j(X_j) does not contribute to the objective function. G_k(X_k) represents constraint (10), which implies super-cluster consistency. If the super-cluster is inconsistent, i.e., the ith base station has selected the kth base station as its super-exemplar but the kth base station does not select itself, G_k(X_k) contributes minus infinity to the objective function, which accordingly cannot be maximized; otherwise, G_k(X_k) is zero and does not contribute. The unconstrained formulation (11) enables the problem to be drawn as a factor graph.
A factor graph is a useful graphical representation that shows the relationship between the variables and the constraints that bound them in the distributed optimization problem. Figure 2 shows the factor graph of the proposed algorithm. The factor graph contains two types of variables: the non-diagonal variables C_ij and the diagonal variables C_ii. Each non-diagonal variable is constrained by the two functions F_i and H_j, while each diagonal variable is constrained by the three functions F_i, G_k, and H_j. A detailed explanation is provided in the next subsection.

Message Passing Derivation
The messages are derived separately for the non-diagonal and diagonal variables. The messages associated with each variable are shown in Figure 3. The non-diagonal element C_ij is constrained by two function nodes, F_i and H_j. The relationship between the variable node and the function nodes for a non-diagonal variable is represented by the messages λ_ij, µ_ij, β_ij, and η_ij. Accordingly, based on the message passing principle [28], each message of the non-diagonal variable is given by the message values λ_ij(m), µ_ij(m), β_ij(m), and η_ij(m), which are determined by setting the hidden variable C_ij to m ∈ {0, 1}.
The final messages are defined as the difference between the messages for m = 1 and m = 0. The final message from a variable node to a function node is the sum of all incoming messages to C_ij, except the message from the destination function itself; this yields the messages λ_ij and β_ij. The message from H_j to C_ij is defined as the difference between message µ_ij(1), which corresponds to the ith BS selecting the jth BS, and message µ_ij(0), which corresponds to none of the neighboring BSs around the jth BS selecting the jth BS as an exemplar. The message from F_i to C_ij is defined as the difference between messages η_ij(1) and η_ij(0). Message η_ij(1) corresponds to the ith BS selecting the jth BS, in which case the ith BS cannot select any BS other than the jth BS. Message η_ij(0) corresponds to the ith BS not selecting the jth BS; there are then two possible cases: the ith BS selects another BS besides the jth BS, or the ith BS becomes an exemplar itself (C_ij = 0 in both cases). The diagonal element C_ii is constrained by the three function nodes F_i, H_j, and G_k. The relationship between the variable node and the function nodes for a diagonal variable is represented by six messages: λ_ii, µ_ii, β_ii, η_ii, υ_ik, and ξ_ik. Based on the message passing principle, each message of the diagonal variable is given by the message values λ_ii(m), µ_ii(m), β_ii(m), η_ii(m), υ_ik(m), and ξ_ik(m), obtained by setting the hidden variable C_ii to m ∈ {0, 1, ..., N}.
The final messages for the diagonal variable are derived according to the values of m, and are defined as the difference between the messages for m ≠ 0 and m = 0. The final message from the variable node to a function node is the sum of all incoming messages to C_ii, except the message from the destination function itself; this yields the final messages λ_i^m and β_i^m. The message from H_j to C_ii is defined as the difference between messages µ_ii(m) and µ_ii(0). Message µ_ii(m) corresponds to the ith BS being an exemplar, in which case there are two possible cases: the neighboring BSs around the ith BS either select or do not select the ith BS as their exemplar. Message µ_ii(0) corresponds to the ith BS not being an exemplar, so that none of the neighboring BSs around the ith BS select it (C_ii = 0). The message from F_i to C_ii is defined as the difference between messages η_ii(m) and η_ii(0). Message η_ii(m) indicates that when the ith BS is an exemplar, it must select itself as an exemplar and cannot select any other BS. Message η_ii(0) indicates that when the ith BS is not an exemplar, it must select another BS as its exemplar (C_ii = 0). The message from C_ii to G_k is defined as the difference between the preference of the ith BS selecting the kth BS as its super-exemplar and the maximum preference value when the ith BS is not an exemplar. The messages from G_k to C_ii depend on the values of i and k. If i = k, the message is defined as the difference between two cases: the first, in which the kth BS has been chosen as a super-exemplar and the BSs other than the ith BS are unconstrained, and the second, in which the kth BS is not a super-exemplar.
If i ≠ k, the message is defined as the difference between two cases: the first, in which the ith BS has chosen the kth BS as its super-exemplar, the kth BS is the super-exemplar of itself, and the BSs other than the ith BS are unconstrained; and the second, in which the kth BS is not constrained to be a super-exemplar. Multi-layer clustering message passing yields the final messages in (19)-(22) and (29)-(35). Each final message contributes to the assignment step: all incoming messages to each of the corresponding variables, i.e., C_ij and C_ii, are summed to obtain the assignments (36) and (37). The assignment messages determine the appropriate cell partners for JP-CoMP. Based on the assignment rule, the proposed algorithm is summarized in Algorithm 1.

Algorithm 1. Multi-layer message passing for JP-CoMP clustering.
Repeat:
  Update µ_ij^(t) and send to the neighboring BSs. Update λ_ij^(t+1) and send to the neighboring BSs.
  Update µ_ii^(t) and send to the neighboring BSs. Update λ_i^m(t+1) and send to the neighboring BSs.
Until all messages have converged or the maximum number of iterations is reached.
Compute C_ij^(t) and C_ii^(t) to determine the cooperating base stations:
If C_ij^(t) = 1 and C_ii^(t) = k, the ith, jth, and kth BSs are cooperating base stations. If C_ij^(t) = 0 and C_ii^(t) = k, the ith and kth BSs are cooperating base stations. If C_ij^(t) = 1 and C_ii^(t) = i, the ith and jth BSs are cooperating base stations. If C_ij^(t) = 0 and C_ii^(t) = i, the ith BS operates alone.
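The four decision rules above can be folded into one small helper. In the sketch below, C_offdiag holds the binary C_ij values and C_diag the 1-indexed C_ii values after convergence; the helper name and this encoding are illustrative assumptions.

```python
def cooperating_set(i, C_offdiag, C_diag):
    """Hypothetical decoder for the converged assignments: combine BS i's
    first-layer partner j (C_ij == 1) and its second-layer partner k
    (C_ii == k, 1-indexed) into one cooperating set."""
    members = {i}
    for j, c in enumerate(C_offdiag[i]):
        if c == 1 and j != i:
            members.add(j)          # first-layer partner (exemplar)
    k = C_diag[i]
    if k > 0 and (k - 1) != i:
        members.add(k - 1)          # second-layer partner (super-exemplar)
    return sorted(members)

# BS 0 selected BS 1 as exemplar; its diagonal points at BS 2 (k = 3)
C_offdiag = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
C_diag = [3, 0, 3]
print(cooperating_set(0, C_offdiag, C_diag))  # [0, 1, 2]: the first rule
```

A BS with an all-zero row and a zero (or self-pointing) diagonal decodes to a singleton set, which corresponds to the fourth rule (the BS operates alone).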

Simulation Results
The simulation results compare the performance of the proposed algorithm with those of existing schemes. For this purpose, the proposed algorithm is compared with existing JP-CoMP clustering methods, namely novel static clustering [13], coalitional game theory [19], affinity propagation (AP) [26], and capacitated affinity propagation (CAP) [27]. The user throughput, network scalability, and complexity are evaluated.

Simulation Parameters
The simulations were performed on an Intel Core i7-7700 CPU system operating at 3.60 GHz (8 CPUs) using MATLAB R2018b. To obtain reliable results, the simulation outputs were averaged over a large number of random realizations of wireless channels and user drops. The simulation parameters are shown in Table 1.

Throughput Evaluation
We compare the throughput performance of the proposed algorithm with that of the other existing methods. Figures 4 and 5 show the throughput evaluation and its cumulative distribution function, respectively. Figure 4 shows that the throughput gradually decreases as the distance increases, and that the proposed algorithm outperforms the other methods. When the UE distance is 104 m, the proposed algorithm shows 2%, 13%, 13%, and 21% average throughput improvements compared to coalitional game theory, capacitated affinity propagation, affinity propagation, and novel static clustering, respectively. When the UE distance is 496 m, the corresponding improvements are 30%, 20%, 207%, and 304%. In addition, Figure 5 indicates that the proposed algorithm provides higher average throughput at all percentiles. The percentages of UEs with average throughput below 1.5 Mbps for the proposed algorithm, coalitional game theory, capacitated affinity propagation, affinity propagation, and novel static clustering are 16%, 26%, 33%, 43%, and 56%, respectively; below 2.5 Mbps, 64%, 72%, 82%, 83%, and 90%; and below 3.5 Mbps, 95%, 96%, 99%, 98%, and 100%. These results show that the proposed algorithm achieves higher throughput at every UE distance and maintains high performance in vulnerable areas, such as for edge users.
This is possible because the proposed algorithm reconsiders all possible cooperating base stations by maximizing the sum capacity, determining the appropriate formation by means of the multi-layer assessment. This multi-layer message passing scheme increases the possibility of reaching the best solution and prevents greedy choices when forming the cell clusters. The proposed algorithm also steers the cell clustering toward the global optimum of the solution; accordingly, the fixed point reached at convergence yields improved performance.

Network Scalability
The purpose of this evaluation is to determine the performance of the proposed algorithm under different network parameters, since the JP-CoMP clustering scheme is expected to handle such network changes. Two parameters are used in this evaluation: the cell size and the number of users. Figure 6 shows the cell size evaluation, which reports the average edge user throughput for different cell sizes. The results show that the average edge user throughput gradually decreases as the cell size increases. When the cell radius is 100 m, the proposed algorithm shows 2%, 39%, 39%, and 41% average edge user throughput improvements compared to coalitional game theory, capacitated affinity propagation, affinity propagation, and novel static clustering, respectively. When the cell radius is 500 m, the corresponding improvements are 29%, 37%, 208%, and 341%. These results demonstrate that the proposed algorithm consistently outperforms the others over the whole range of cell sizes. Figure 7 presents the evaluation with respect to the number of users, reporting the throughput performance for different numbers of users. The results indicate that increasing the number of UEs does not have a significant effect on the average edge user throughput. When the number of UEs is 50, the proposed algorithm shows 9%, 35%, 208%, and 240% average edge user throughput improvements compared to coalitional game theory, capacitated affinity propagation, affinity propagation, and novel static clustering, respectively. Accordingly, the proposed algorithm consistently outperforms the others over the whole range of UE numbers.
These results verify that the proposed algorithm optimizes the objective function properly in different networks because it utilizes distributed control of the network. This scheme allows different networks to determine the appropriate cooperating formation rather than making a greedy choice. Table 2 compares the computational complexity of each method. The computational complexities of the proposed algorithm, capacitated affinity propagation, and affinity propagation are identical at O(n^2); the differences in message passing complexity appear only in constant factors and can therefore be ignored. In contrast, the computational complexity of coalitional game theory is O(2^n): its iterated split-and-merge procedure results in higher complexity than the message passing algorithms. The message update process in the proposed algorithm mostly consists of exchanges of small-sized messages among neighbors, which is the main reason for its low computational complexity.

Table 2. Computational complexity comparison.

Method: Proposed algorithm | Coalitional game theory [19] | AP [26] | Capacitated AP [27]
Complexity: O(n^2) | O(2^n) | O(n^2) | O(n^2)

Figure 8 compares the convergence properties of the proposed algorithm with those of the other existing methods. The convergence iteration count shows how rapidly an algorithm reaches a stable point. The simulation results show that the proposed algorithm converges in fewer than five iterations; notably, it requires fewer iterations than the other iterative algorithms. This implies that the overall complexity of the proposed algorithm is kept minimal, making it appropriate for practical implementation. The proposed algorithm exchanges linear, scalar-valued messages (36) and (37) at each iteration; these messages consume only a small amount of transmission bandwidth, so the scheme requires little backhaul capacity. The iterative message updates could potentially incur latency; however, because JP-CoMP clustering operates over a long-term period, this latency is not a critical issue.

Conclusions
This paper proposes a distributed cell clustering algorithm for downlink joint processing coordinated multipoint. The proposed algorithm tackles the nonlinear sum capacity optimization problem by approximating it with a distributed linear-form message passing formulation. This linear approximation is realized as multi-layer message passing, which serves as the foundation of the dynamic cell clustering algorithm.
Extensive simulation results confirm that the proposed algorithm provides considerable performance improvements in JP-CoMP clustering. The proposed algorithm consistently outperforms the conventional methods in terms of both average UE throughput and average edge user throughput. It enables distributed control of the network, which provides adaptivity to dynamically changing networks; accordingly, the proposed algorithm maintains high throughput for different cell sizes and different numbers of users. In addition, the message updates among neighboring base stations incur only low computational complexity and consume a small amount of transmission bandwidth. Despite these advantages, the message update procedure may cause latency issues; however, for most typical applications of JP-CoMP, such latency is not critical.
Future research can address more practical technical issues to further improve JP-CoMP performance. Imbalance conditions, i.e., UE imbalance across cells and transmit power imbalance across base stations, are interesting topics in this field. Another aspect that should be explored is the application of the algorithm to coordinated beamforming CoMP.

Conflicts of Interest:
The authors declare no conflict of interest.