A Trade-Off Algorithm for Solving p-Center Problems with a Graph Convolutional Network

The spatial optimization methods at the intersection of combinatorial optimization and GIS have many geographical applications. The p-center problem is a classic NP-hard location modeling problem with essential applications in many real-world scenarios, such as urban facility location (ambulances, fire stations, pipeline maintenance centers, police stations, etc.). This study implements two classic methods to solve this problem: an exact algorithm and an approximate algorithm. Exact algorithms can obtain the optimal solution to the problem, but they are inefficient and time-consuming. Approximate algorithms can give a sub-optimal solution in polynomial time with high efficiency, but the accuracy of the solution is closely related to the initialization of the center points. We propose a new paradigm that combines a graph convolutional network and a greedy algorithm to solve the p-center problem through direct training, achieving efficiency faster than the exact algorithm and accuracy superior to the heuristic algorithm. We generate a large number of p-center instances with the Erdos–Renyi graph model, which can represent instances of many real problems. Experiments show that our method achieves a compromise between time and accuracy and is effective for solving p-center problems.

There are three main types of classic methods to solve this problem.
• Exact algorithm. An exact algorithm refers to a method that finds the optimal solution to the problem [12]. When the size of the problem is small, an exact algorithm can obtain the optimal solution in an acceptable time. However, the scale of the problem is often large in industrial application scenarios. The amount of calculation and storage space required to obtain the optimal solution then increases rapidly, and a "combinatorial explosion" easily occurs, making the optimal solution hard to find. The essence of an exact algorithm is to search the solution space. Therefore, as the scale of the problem increases, the time complexity becomes exponential or even factorial.

• Approximate algorithm. An approximate algorithm uses approximation methods to solve optimization problems [13]. For an NP-hard problem, since an exact solution cannot be obtained in polynomial time, an approximate algorithm is used to obtain a usable sub-optimal solution in polynomial time. One of the simplest approximation schemes is to search for an approximate solution of the original problem with an exact algorithm within a given time budget and then check whether the resulting solution is feasible.

• Heuristic algorithm. A heuristic algorithm is based on intuition or experience and is widely used in various optimization problems [14,15]. It can give a relatively good solution in an acceptable time. However, there is no theoretical guarantee, and the gap to the optimal solution cannot be measured.
The rise of neural networks provides new ideas for solving COPs. There are two main directions. One is reinforcement learning (RL) combined with combinatorial optimization, in which many problems can be modeled as sequential decision-making processes. RL is an effective tool for handling Markov decision processes, and numerous researchers consider using RL to deal with sequential decision-making in COPs [16,17]. Many challenges remain, such as the feasibility of the solution, the difficulty of modeling, the difficulty of migrating to large-scale problems, and the trouble of data generation. The other direction is deep learning combined with combinatorial optimization, which combines learning and optimization to improve performance on real problems. Previous literature on using deep learning to solve COPs contains three types. First, pure end-to-end methods [18,19] predict decisions directly from the input, but optimization is hard to encode in a neural network. Second, two-stage training predicts and then optimizes [20]. Third, given that the accuracy and consistency of decision results cannot currently be guaranteed, decision-focused learning [21] establishes a differentiable optimization objective in the training process. However, building a differentiable optimization system remains a challenge.
We mainly study the p-center (PC) problem to establish a more effective model for practical problems. PC is a classic NP-hard problem and has extremely important guiding significance for urban facility location, social network analytics, and other issues. PC can be described as selecting p points as centers in a point dataset and assigning the other points to the p centers so that the maximum distance from any point to its corresponding center is minimized. Usually, the objective function and constraints of an optimization problem are determined by the practical problem. There are many ways to solve PC problems. In our study, we give an exact algorithm based on minimum dominating sets (MDS) [22] and a greedy approximation algorithm [23]. MDS can give the optimal solution to the problem, but it cannot be solved in polynomial time as the problem size increases. The greedy algorithm can give a sub-optimal solution in a short time, but the quality of the solution is closely related to the setting of the initial value. We focus on combining graph learning with optimization problems and propose a general framework that does so: a graph convolutional network (GCN) produces the clustering result, achieving efficiency superior to the exact algorithm and accuracy better than the greedy algorithm.
Our contributions are as follows:
• A new approach combining a greedy algorithm with a directly trained GCN is proposed to solve p-center problems.
• Our method achieves solution accuracy superior to the greedy algorithm and efficiency better than the exact algorithm.
• The method is transferable and can be combined with various existing approximation or heuristic algorithms.
This study is divided into six sections. The rest of the article is organized as follows. Section 2 introduces related work on COPs combined with deep learning. Section 3 presents the preliminary knowledge for this study. Section 4 describes our approach to solving PC and introduces a clustering algorithm to implement the GCN training strategy. Section 5 shows the experiments: we use the different algorithms to solve PC problems on graphs of many different scales and achieve the desired results. Section 6 concludes.

Related Work
There is much research on solving combinatorial optimization problems by training neural networks [24,25]. The pointer network (PN) is one of the representative networks for processing COPs and was first proposed by Vinyals et al. [26]. PN is a sequence-to-sequence learning paradigm that mainly addresses the problem of a fixed-size output vocabulary, so it can handle the variable number of nodes in COPs [27]. Traditional seq2seq models cannot handle an output sequence that changes with the input sequence length, especially when the output depends heavily on the input. In essence, PN removes the forced constraints on input and output by simplifying and adjusting the attention mechanism. Because predefined heuristic algorithms are complicated, Zhang et al. used a graph neural network (GNN) to solve the link prediction problem and proposed learning a heuristic from a given network instead of predefining one [28]. By extracting the local sub-graphs around each target link, their model learns a mapping from the sub-graph pattern to the existence of the link, yielding a heuristic suitable for the current network; this highlights a method of learning heuristics from local sub-graphs using GNNs. Khalil et al. combined RL and graph embedding to solve COPs [17]. They mainly used the struc2vec graph embedding method and proposed a meta-algorithm for this problem.
Through meta-algorithms, the same type of graph-based COPs can be solved directly, without solving each instance one by one. Bello et al. combined RL with PN to solve COPs [16]. They used the policy gradient method to learn the parameters of a PN model for TSP and then optimized the solution to the TSP problem. Kool et al. proposed a model based on the transformer [29]; they built a model architecture that unifies COPs with RL and solved TSP, VRP, OP, and PCTSP.
Tian et al. first proposed learning deep representations for graph clustering [30]. The idea of the article is simple. First, they apply an autoencoder to the graph structure for feature extraction; then they achieve clustering directly with K-means, an idea which came from spectral clustering. Based on this, Yang [18] replaced the Laplacian matrix with a modularity matrix; optimizing the modularity matrix is equivalent to spectral clustering. Xie et al. proposed deep embedded clustering (DEC) for cluster analysis [31], which defines a parameterized nonlinear mapping from the data space X to a low-dimensional feature space Z and optimizes the clustering objective in the low-dimensional space. Guo et al. considered manipulating the feature space to disperse data points while preserving the data structure, using the clustering loss as a guide [32]. They put forward the improved deep embedded clustering (IDEC) algorithm. IDEC jointly performs cluster label assignment and learns features suitable for clustering while retaining the data structure, by fusing the clustering loss with the autoencoder loss.
Wilder et al. observed that backpropagation in a neural network operates in a continuous space, while K-means, as an algorithm, deals with a discrete space [33]. They present a differentiable K-means algorithm that combines GNN and K-means clustering to deal with modularity and PC problems. Their paper treats the CoreData set as a PC problem to solve. Although the GNN can classify the CoreData sets well, it cannot effectively solve p-center. CoreData is a kind of unweighted graph in which isolated points (connected to no other point, directly or indirectly) exist. In their paper, the distance between an isolated point and the center points is directly set to the current maximum distance, which is problematic and unreliable. We mainly explore the PC problem and propose a new method to solve it. Meanwhile, we use ER graphs to randomly generate a large number of PC instances to prove the feasibility of our method.

Preliminary
This section introduces some basic concepts and algorithms about PC problems for our work. We solve PC problems with two classic algorithms: an exact algorithm based on minimum dominating sets and a fast approximation method based on a greedy algorithm.

The Definition of the p-Center Problem
The description of PC is as follows [34]: Given an undirected graph G = (V, E) and a positive integer p, the aim is to find a subset C ⊆ V of centers with |C| ≤ p, such that the distance from the farthest vertex v to its closest center in C is minimized. We give a strict mathematical definition of PC [35] for a clear understanding of the algorithm.

In a metric space X, d : X × X → R is the metric function on X, which indicates the similarity or proximity between two elements of X. Given a set S in X, the goal is to find a set C ⊆ S of at most p centers and generate an assignment ψ : S → C. The operator ψ maps each element of S to one of the points in C. According to this definition, for each i ∈ S, d(i, ψ(i)) is the distance between i and its cluster center ψ(i). The distance can express not only the true distance between two points but also the similarity or proximity of any two elements. In many problems, the goal is to make d(i, ψ(i)) as small as possible. The optimization objective of PC is to minimize max i∈S d(i, ψ(i)). PC is an NP-hard problem; under P ≠ NP, it cannot be solved optimally in polynomial time [36]. Therefore, we measure the quality of solutions to p-center problems by the approximation ratio and the relative error.

Definition 1. Let c* be the optimal solution value of an optimization problem and c the best value obtained by an approximate algorithm. The approximation ratio γ of the approximate algorithm is defined as γ = c/c*, and it satisfies γ ≤ r(n), where r(n) is a function related only to the scale n of the problem.
Relative error λ is defined as λ = (c − c*)/c*, and it satisfies λ ≤ η(n), where η(n) is a relative error bound related only to the scale n of the problem. From the definitions above, obviously λ = γ − 1. The smaller the approximation ratio or relative error, the better the performance of the approximation algorithm.
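The two quality measures above can be computed directly. A minimal sketch, with illustrative values (optimal radius 10, approximate radius 12) that are not from the paper:

```python
# Approximation ratio and relative error for a minimization problem,
# as defined above. `c_opt` is the optimal value c*, `c` the value
# returned by an approximate algorithm (both hypothetical numbers here).

def approximation_ratio(c, c_opt):
    """gamma = c / c*  (>= 1 for a minimization problem)."""
    return c / c_opt

def relative_error(c, c_opt):
    """lambda = (c - c*) / c* = gamma - 1."""
    return (c - c_opt) / c_opt

# Example: optimal radius 10, greedy radius 12.
gamma = approximation_ratio(12, 10)   # 1.2
lam = relative_error(12, 10)          # 0.2  (= gamma - 1)
```

Note that λ = γ − 1 holds by construction, matching the relation stated above.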

Two Algorithms for Solving p-Center Problems
There are three classes of algorithms for solving PC problems: exact algorithms (EA), approximate algorithms, and heuristic algorithms. Exact algorithms are usually based on linear programming or mixed-integer programming [37][38][39], and the optimal solution can be obtained for most problems. We use an exact algorithm based on minimum dominating sets (MDS) to solve PC instances in our work. EA can obtain the optimal solution to the problem, which serves as a reference for measuring the quality of the other algorithms. Approximation algorithms include the SH algorithm [36], the Gon algorithm [39], and the HS algorithm [40]. These algorithms are proven to have the best approximation factor in theory, but they perform poorly on many benchmark datasets. Although heuristic algorithms have no strict theoretical proof and cannot guarantee rapid convergence, they perform well on many benchmark datasets [41].
In order to quickly analyze the effectiveness of our method, we choose a greedy approximation algorithm, which is proven to have a 2-approximation ratio in theory [34]. It is a fast heuristic algorithm and the easiest to implement. Next, we mainly show two algorithms for solving PC problems.
Exact Algorithm. Our exact algorithm is based on the minimum dominating set in graph theory. This section gives the mixed linear programming form of MDS and some related definitions. We first give the definition of the minimum dominating set [42].
Definition 2. Given an input graph G = (V, E), a dominating set is a subset D ⊆ V such that every vertex v ∈ V is either in D or joined by an edge (v, u) ∈ E to some vertex u ∈ D.
We show a dominating set in Figure 2 to illustrate the definition. {v0, v2, v4} is one of the dominating sets of the graph; according to the definition, {v1, v3, v5} is also a dominating set of the graph.
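The dominating-set condition can be checked mechanically. A minimal sketch on a 6-cycle with the two dominating sets mentioned above (the cycle adjacency is our assumption about the graph in Figure 2):

```python
# Check the dominating-set condition from Definition 2: every vertex is
# either in D or adjacent to some vertex in D.

def is_dominating_set(adj, D):
    return all(v in D or any(u in D for u in adj[v]) for v in adj)

# 6-cycle v0-v1-...-v5-v0
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

print(is_dominating_set(adj, {0, 2, 4}))  # True
print(is_dominating_set(adj, {1, 3, 5}))  # True
print(is_dominating_set(adj, {0, 1}))     # False: v3 has no neighbor in D
```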


The minimum dominating set is a sub-problem of the p-center problem, so the solution of the p-center problem can be converted into a series of MDS problems. However, MDS requires the graph data to be a complete graph. This procedure obtains the exact solution but is time-consuming and inefficient. To solve MDS, the problem is converted into a mixed linear programming problem [43] and solved with an open LP solver. The mixed linear programming formulation of MDS is:

min Σ_{v∈V} x_v, s.t. x_v + Σ_{u:(u,v)∈E} x_u ≥ 1 for every v ∈ V,

where x_v ∈ {0, 1} is a binary variable indicating whether vertex v belongs to the dominating set. Based on the MDS algorithm, the basic exact algorithm for solving PC is shown in Algorithm 1.
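The conversion above can be sketched end to end: the optimal p-center radius is one of the pairwise shortest-path distances, and a radius r is feasible iff p centers dominate all vertices within distance r. In this sketch the MDS step is replaced by exhaustive search (fine for small graphs; the paper solves it as a mixed linear program with an LP solver instead):

```python
# Brute-force exact p-center: try candidate radii in increasing order and,
# for each, search for p centers that cover every vertex within that radius.
from itertools import combinations

def p_center_exact(n, edges, p):
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:                 # undirected weighted edges
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):                    # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    for r in sorted({d[i][j] for i in range(n) for j in range(n)}):
        for C in combinations(range(n), p):
            if all(min(d[v][c] for c in C) <= r for v in range(n)):
                return r, set(C)          # smallest feasible radius

# 5-cycle with unit weights: two centers suffice at radius 1
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 0, 1)]
r, centers = p_center_exact(5, edges, 2)
print(r)   # 1.0
```

The exhaustive inner loop is exponential in p, which mirrors why the exact approach becomes impractical as the problem grows.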
Greedy Algorithm. The idea of the greedy strategy is to select the current optimal choice at each step. This greedy algorithm is guaranteed to have no more than a 2-approximation ratio. The details are shown in Algorithm 2.
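A sketch of a farthest-first greedy strategy of this kind, under the assumption that `dist` gives pairwise distances (Gonzalez-style; the exact steps of Algorithm 2 may differ): repeatedly add the vertex farthest from the current centers, which runs in O(pn) time and carries the 2-approximation guarantee.

```python
def greedy_p_center(dist, p, first=0):
    n = len(dist)
    centers = [first]                     # initial center (this choice matters)
    d_to_c = dist[first][:]               # distance of each node to nearest center
    for _ in range(p - 1):
        far = max(range(n), key=lambda v: d_to_c[v])   # farthest vertex
        centers.append(far)
        for v in range(n):                # update nearest-center distances
            if dist[far][v] < d_to_c[v]:
                d_to_c[v] = dist[far][v]
    return centers, max(d_to_c)           # centers and covering radius

# Four points on a line at positions 0, 1, 10, 11 (distance = |x - y|)
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
centers, radius = greedy_p_center(dist, p=2)
print(centers, radius)   # [0, 3] 1  -> centers at positions 0 and 11
```

The dependence on the initial center `first` illustrates why the quality of the greedy solution is tied to its initialization.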

Lemma 2 [34].The time complexity of the greedy algorithm is O(pn), where p is the number of centers, n is the number of nodes.

Graph Neural Network
A convolutional neural network has made a great difference in deep learning, but it cannot handle graph-structured data. The graph convolutional network (GCN) was first proposed by Kipf and Welling [44] for semi-supervised classification and can effectively process graph-structured data. GCN is a spectral-based graph convolution model. We usually use G = (V, E) to represent a graph, where V is the set of nodes, |V| is the number of nodes, and E is the set of edges.
The convolution of a pixel in the image is to sum the weight of the pixel and adjacent pixels, so the convolution of a node in the graph structure can be shown as the weighted sum of the node and adjacent nodes.
We first consider the simplest convolution operation: X′ = AXΘ. In this formula, Θ is the parameter of the convolution transformation, which needs training and optimization, and A is the adjacency matrix of the graph, where A_ij = 1 indicates that node i and node j are adjacent. AX adds up the vectors of all neighbor nodes. However, this formula only gathers the information of the neighbor nodes and ignores the node's own information. Therefore, A is improved by adding an identity matrix: X′ = (A + I)XΘ. (5) In the process of calculation, Formula (5) adds all the neighbor vectors of a node, so after multi-layer convolution the values of the vectors become extraordinarily large. Therefore, the matrix needs to be normalized. Let D̃ denote the degree matrix of A + I, where the degree indicates the number of neighbors of each node. Normalization can be achieved by multiplying by the inverse degree matrix D̃^−1. To ensure that the normalized matrix is still symmetric, the symmetric normalization is D̃^−1/2 (A + I) D̃^−1/2. After the above derivation, the core formula of GCN is H = σ(D̃^−1/2 (A + I) D̃^−1/2 XΘ), where X ∈ R^(N×C), Θ ∈ R^(C×F), and N, C, F represent the number of nodes, the number of channels, and the number of convolution kernels, respectively. The computational complexity of GCN is O(|E|): it is linear in the number of edges E. When the graph is sparse, this is much lower than O(n²). The layer only considers first-order neighborhood information; stacking multiple layers effectively increases the receptive field. In our experiments, using more GCN layers did not improve training, so we only use two layers of GCN to solve our problems.
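One symmetric-normalized propagation step D̃^−1/2 (A + I) D̃^−1/2 X can be sketched on a toy graph in plain Python (the learned weights Θ and the nonlinearity σ are omitted here):

```python
# One GCN propagation step on a small graph, mirroring the
# symmetric-normalization formula derived above.

def gcn_propagate(A, X):
    n = len(A)
    # A_hat = A + I: add self-loops so each node keeps its own information
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]          # degrees of A + I
    # (D^-1/2 A_hat D^-1/2) X, computed entrywise
    return [
        [sum(A_hat[i][j] * X[j][f] / (deg[i] ** 0.5 * deg[j] ** 0.5)
             for j in range(n))
         for f in range(len(X[0]))]
        for i in range(n)
    ]

# Path graph 0-1-2 with one feature per node
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[1.0], [0.0], [0.0]]
out = gcn_propagate(A, X)
# node 0 keeps 1/2 of its feature; node 1 receives 1/sqrt(6); node 2 gets 0
```

In practice this matrix product is computed sparsely, which is where the O(|E|) complexity comes from.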
The superiority of GCN:
• GCN can extract features from graph data. It can perform node classification, graph classification, edge prediction, and graph embedding on graph data, which a traditional CNN cannot process.
• The computational complexity of GCN is low. In our problem, the training speed is fast. Compared with graph attention networks, GraphSAGE, and other graph neural networks, the solution is faster and the model is more effective.

Methodology
As deep learning performs well on many tasks, many researchers try to use it to solve combinatorial optimization problems. In previous studies, the main idea must be based on a sufficiently large training set to obtain a robust model that can effectively deal with the problems in the test sets. Therefore, even if the training time is very long, the trained model can effectively solve many problems as long as it is good enough. Our work presents a new idea that combines traditional algorithms with a GCN to solve PC problems. The main workflow of our approach is shown in Figure 4.
Our method does not need to rely on large datasets.We train the model directly.The idea is straightforward, but it can effectively improve the solution of PC problems and can be extended to analogous COPs.

Solution Method
For the graph G = (V, E) corresponding to any PC problem, we use two basic algorithms to seek the solution; the details are in Section 3. In our approach, we first use a greedy algorithm to work out the center points in the graph. Then a clustering algorithm is proposed to calculate the category of each node according to the center points. Next, G, the features, and the clustering results are input into the GCN to obtain new clusters. Finally, guided by the new clusters, each category's center point and maximum distance are computed, and the solution to the original problem is obtained.
We most notably train the GCN directly rather than the GCN model in advance.Therefore, this part of the training time needs to be calculated into the solution of the whole problem.
Cluster Algorithm. We need a clustering algorithm to calculate the category of each node according to the center points. We only need to cycle through all vertices once, calculate the distance from each node to each center point, and take the nearest center point as the label of the current vertex. The algorithm is shown in Algorithm 3. Obviously, the time complexity of our cluster algorithm is O(pn), where p is the number of centers and n is the number of nodes.
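The single pass described above can be sketched as follows, under the assumption that pairwise distances have already been computed; one loop over n vertices and p centers gives the stated O(pn) complexity:

```python
# Nearest-center assignment (the clustering step of our approach):
# label each vertex with its closest center.

def assign_clusters(dist, centers):
    labels = []
    for v in range(len(dist)):
        # label = nearest center (ties broken by center order)
        labels.append(min(centers, key=lambda c: dist[v][c]))
    return labels

# Distances for points on a line at positions 0, 1, 10, 11; centers are nodes 0 and 3
pts = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pts] for a in pts]
print(assign_clusters(dist, [0, 3]))   # [0, 0, 3, 3]
```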

Feasibility Analysis of Algorithm
The exact algorithm gives the optimal solution but is inefficient. The greedy algorithm is highly efficient, but the accuracy of its solution is insufficient. We propose a simple and effective method to solve the p-center problem: its efficiency is better than the exact algorithm's, its accuracy is better than the greedy algorithm's, and a balance is achieved between efficiency and accuracy.

Lemma 3. The time complexity of our approach is O(pn) + O(|E|), where p is the number of centers, n is the number of nodes, and |E| is the number of edges.
Proof of Lemma 3. Our algorithm is based on a greedy algorithm and a clustering algorithm. By Lemma 2, the time complexity of the greedy algorithm is O(pn). According to Algorithm 3, the time complexity of the clustering algorithm is also O(pn). As stated above, the computational complexity of the GCN is O(|E|), so the total time complexity is O(pn) + O(|E|). □

Experiments
The GCN can effectively process graph-structured data since it is backed by solid theoretical results. However, a neural network in training is a black-box model, and many phenomena in training have no sufficient theoretical guarantee. Following Lemmas 1 and 2, it is evident that the time complexity of our algorithm is superior to EA's. In this section, we use many experiments to verify that the solution accuracy of our approach is better than GA's.

Implementation
Data Generation. We generated many Erdos-Renyi (ER) graphs as instances of the p-center problem. ER graphs can model many practical problems. Since PC is NP-hard and the solution time grows rapidly with problem size, we first compared the running time of the two algorithms from Section 3 on problems of different scales. Figure 5 plots the running time of the two algorithms against the number of nodes. Considering the time cost, we mainly conducted experiments on instances with 100 to 200 nodes. For each problem type, one hundred instances were generated with the ER random graph and solved by the three methods. The solution time and accuracy of the three algorithms were compared, and the final results are reported as the mean and standard deviation over the 100 runs.
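Instance generation of this kind might look like the following stdlib-only sketch. The edge probability, the use of unweighted BFS distances, and the resampling loop are our assumptions for illustration; the paper only states that connected ER graphs were used.

```python
import random
from collections import deque

def make_er_instance(n, p_edge, seed=None):
    """Sample a connected Erdos-Renyi graph G(n, p_edge) as an adjacency list.

    Each unordered pair (u, v) is an edge with probability p_edge; we
    resample until the graph is connected, since p-center distances
    must be finite.
    """
    rng = random.Random(seed)
    while True:
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p_edge:
                    adj[u].append(v)
                    adj[v].append(u)
        # BFS connectivity check from node 0
        seen = {0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        if len(seen) == n:
            return adj

def all_pairs_dist(adj):
    """Unweighted all-pairs shortest paths: one BFS per node, O(n*(n+|E|))."""
    n = len(adj)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return dist
```

The precomputed distance matrix is what the exact, greedy, and clustering routines all consume downstream.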

Algorithm 4 GCN with Greedy Algorithm
Input: An undirected graph G = (V, E), an integer p, node features f
Output: Centers set C, distance r
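The GCN component of Algorithm 4 can be illustrated with a dense two-layer forward pass using the standard renormalized adjacency. This is a sketch, not the paper's implementation: the layer widths, random-style weights, and function names are ours. Note that the dense products here cost O(n^2) per layer; with a sparse adjacency each layer touches every edge once, matching the O(|E|)-per-layer cost used in Lemma 3.

```python
import numpy as np

def normalize_adj(a):
    """A_hat = D^(-1/2) (A + I) D^(-1/2), the renormalized adjacency."""
    a = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(a_hat, x, w0, w1):
    """Two-layer GCN giving row-wise class probabilities per node."""
    h = np.maximum(a_hat @ x @ w0, 0.0)          # layer 1 + ReLU
    logits = a_hat @ h @ w1                      # layer 2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # row-wise softmax
```

Each node's output row is interpreted as its cluster membership, which the downstream clustering/greedy steps then decode into centers.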

Result Analysis. Our method combines a heuristic algorithm and a GCN: on instances of 100-200 nodes, its solution efficiency is better than the exact algorithm and its accuracy is superior to the greedy algorithm. Following previous related work, we expected that a general framework suitable for a specific form of the problem could be obtained by training. However, learning a general model is a great challenge because of the particularity of graph data: different graph instances of the same problem share no consistent node-level characteristics, so a single model cannot be trained well across them. For example, consider two randomly generated ER graphs with 100 nodes, recorded as graphs A and B. Since the nodes are generated randomly, a node c may exist in both A and B yet belong to different categories, so classifying node c with a single GCN is not feasible.
Therefore, we proposed a new method: training directly. We do not need to fit a unified model; instead, we train a separate GCN model for each graph. We use the GCN to learn the classification result directly and then compute the center points and the minimum distance from that classification. Experiments showed that the training time of the GCN grows very slowly with the number of nodes: 10,000 iterations take only 20-30 s, which is small compared with the solution time of the exact algorithm on large-scale problems. This indicates that training directly is feasible and can effectively guide the solution of PC problems.
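One way to turn the per-instance GCN's node labels into a PC solution, as described above, is to compute a 1-center for each predicted cluster and then evaluate the covering radius. The function name and tie-breaking below are our assumptions; the paper does not spell out this decoding step in code.

```python
def decode_solution(dist, labels):
    """Turn per-node cluster labels into centers and the p-center objective.

    dist   -- precomputed all-pairs distance matrix
    labels -- labels[v] is the cluster id predicted for node v
    For each cluster, pick the node minimizing the cluster's eccentricity
    (a 1-center per cluster); the objective is the largest distance from
    any node to its nearest chosen center.
    """
    clusters = {}
    for v, lab in enumerate(labels):
        clusters.setdefault(lab, []).append(v)
    centers = []
    for nodes in clusters.values():
        # best 1-center of this cluster
        center = min(nodes, key=lambda c: max(dist[c][v] for v in nodes))
        centers.append(center)
    objective = max(min(dist[v][c] for c in centers) for v in range(len(dist)))
    return centers, objective
```

This decoding runs in time polynomial in n, so it does not affect the overall complexity bound of Lemma 3.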

Conclusions
In summary, we first implement two basic algorithms: an exact algorithm based on the minimum dominating set and a greedy approximate algorithm. Then, we propose a new method that solves p-center problems with a graph convolutional network. Our method achieves solution accuracy better than the greedy algorithm and solution efficiency superior to the exact algorithm. We analyze the time complexity of our algorithm theoretically and, on a large number of PC instances generated from ER graphs, obtain better solutions than the greedy algorithm. Unlike previous works, the GCN is used as a classifier for specific instances and its output is used directly to solve the original problems. Experiments show that our approach is practical and handles PC problems well.
Our study proposes a trade-off approach to PC problems. Although the experiments are restricted by solution time, the results show that our approach is effective. Building on an exact algorithm and a greedy algorithm, we achieve a trade-off between solution time and solution quality.

Figure 1
is a diagram of the p-center problem. The objective of the p-center problem is to minimize the maximal distance over all demand points.
(ISPRS Int. J. Geo-Inf. 2022, 11)

Figure 1 .
Figure 1.The diagram of the p-center problem.


Definition 3 .
A minimum dominating set is a set of minimum cardinality among all the dominating sets.
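For the small illustrative graphs in this section, a minimum dominating set can be found by brute force. The following sketch is our own (exponential-time, suitable only for tiny graphs, which is consistent with MDS being NP-hard); it checks candidate subsets in increasing size, so the first success is minimum.

```python
from itertools import combinations

def minimum_dominating_set(adj):
    """Smallest set S such that every vertex is in S or adjacent to S.

    adj -- adjacency list; brute force over subsets by increasing size.
    """
    n = len(adj)
    for k in range(1, n + 1):
        for cand in combinations(range(n), k):
            dominated = set(cand)
            for u in cand:
                dominated.update(adj[u])
            if len(dominated) == n:
                return set(cand)
```

On a star graph the center alone dominates everything; on a path of four vertices two vertices are needed, matching Definition 3.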

Figure 3
Figure 3 shows a minimum dominating set. Apart from the one shown in Figure 3, {v2, v5} is also a minimum dominating set of the graph. It is obvious that the MDS is not unique and is itself one of the dominating sets.


Figure 3.
Figure 3. {v0, v3} is a minimum dominating set of the graph.

Algorithm 1
An Exact Algorithm for the PC problem (EA)

Lemma 1 [22]. The time complexity of the exact algorithm is O(n^2 log n).
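A common way to realize such an exact algorithm, consistent with the minimum-dominating-set formulation used in this paper (though the paper's exact implementation may differ), is to binary-search the optimal radius over the sorted pairwise distances and test each candidate radius with a dominating-set-style covering check. The brute-force feasibility test below is our own illustration and is exponential in the worst case, as expected for an exact PC solver.

```python
from itertools import combinations

def exact_p_center(dist, p):
    """Exact p-center radius via binary search over candidate radii.

    Feasibility of radius r = existence of a size-<=p set of nodes that
    covers every node within distance r (a dominating-set test on the
    bottleneck graph at threshold r).
    """
    n = len(dist)
    radii = sorted({dist[u][v] for u in range(n) for v in range(n)})

    def feasible(r):
        for cand in combinations(range(n), p):
            if all(any(dist[v][c] <= r for c in cand) for v in range(n)):
                return True
        return False

    lo, hi = 0, len(radii) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(radii[mid]):
            hi = mid
        else:
            lo = mid + 1
    return radii[lo]
```

Only O(n^2) distinct distances can be optimal, which is why searching this sorted list suffices.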

Algorithm 2
Greedy Algorithm for the PC problem (GA)
Input: An undirected graph G = (V, E), an integer p
Output: Centers set C, distance r
1: Get the n vertices of G; randomly generate a starting index from the n vertices and put it in C;
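The greedy procedure sketched above is essentially a farthest-first traversal: start from a random center, then repeatedly add the node farthest from the current centers. The following sketch assumes a precomputed distance matrix; the function name and tie-breaking are ours.

```python
import random

def greedy_p_center(dist, p, seed=None):
    """Farthest-first greedy for the p-center problem, O(p*n).

    dist -- precomputed all-pairs distance matrix
    Returns (centers, radius), where radius is the maximum distance
    from any node to its nearest chosen center.
    """
    rng = random.Random(seed)
    n = len(dist)
    centers = [rng.randrange(n)]
    # nearest[v] = distance from v to its closest chosen center so far
    nearest = [dist[v][centers[0]] for v in range(n)]
    while len(centers) < p:
        far = max(range(n), key=lambda v: nearest[v])  # farthest node
        centers.append(far)
        for v in range(n):
            nearest[v] = min(nearest[v], dist[v][far])
    return centers, max(nearest)
```

As the text notes, the quality of the result depends on the randomly chosen starting center, which is exactly the weakness the GCN-guided approach targets.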

Figure 4 .
Figure 4. (a) The process of our approach to solving PC problems.(b) The greedy algorithm and exact algorithm.


Algorithm 2
Greedy Algorithm for the PC problem (GA)

A GCN has O(|E|) computational complexity in one layer, and our model has two layers of GCN (Algorithm 4). Combined with the O(pn) greedy and clustering steps (Lemma 2 and Algorithm 3), the time complexity of our algorithm is O(pn) + O(|E|), where p is the number of centers, n is the number of nodes, and |E| is the number of edges.