Abstract
The Minimum Vertex Weighted Coloring (MinVWC) problem is an important generalization of the classic Minimum Vertex Coloring (MinVC) problem, which is NP-hard. Given a simple undirected graph G = (V, E), the MinVC problem is to find a coloring s.t. any pair of adjacent vertices are assigned different colors and the number of colors used is minimized. The MinVWC problem associates each vertex with a positive weight and defines the weight of a color to be the weight of its heaviest vertex; the goal is then to find a coloring that minimizes the sum of weights over all colors. Among various approaches, reduction is an effective one: it tries to obtain a subgraph whose optimal solutions can conveniently be extended into optimal ones for the whole graph, without costly branching. In this paper, we propose a reduction algorithm based on maximal clique enumeration. More specifically, our algorithm utilizes a certain proportion of maximal cliques and obtains lower bounds in order to perform reductions. It alternates between clique sampling and graph reductions and consists of three successive procedures: promising clique reductions, better bound reductions and post reductions. Experimental results show that our algorithm returns considerably smaller subgraphs for numerous large benchmark graphs, compared to the most recent method named RedLS. Also, we evaluate individual impacts and some practical properties of our algorithm. Furthermore, we prove a theorem which indicates that, given sufficiently long run time, the reduction effects of our algorithm are equivalent to those of a counterpart which enumerates all maximal cliques in the whole graph.
1. Introduction
Below we will introduce the MinVWC problem, current reduction approaches, and our proposed approach, together with some high-level motivation and comparisons.
1.1. The Problem
Given a simple undirected graph G = (V, E), a feasible coloring for G is an assignment of colors to V s.t. any pair of adjacent vertices are assigned different colors. Formally, a feasible coloring S for G is defined as a partition {V_1, …, V_k} of V s.t. for any i, V_i ≠ ∅, for any i ≠ j, V_i ∩ V_j = ∅, and for any edge (u, v) ∈ E, u and v are not in the same vertex subset V_i where 1 ≤ i ≤ k. Notice that k is unknown until we find a feasible coloring. In the Minimum Vertex Weighted Coloring (MinVWC) problem, each vertex is associated with a positive weight, i.e., there is an additional weighting function w : V → R+, and the goal is to find a feasible coloring that minimizes cost(S) = Σ_{i=1..k} max_{v ∈ V_i} w(v). Obviously, an instance of the NP-hard MinVC problem can conveniently be reduced to an instance of the MinVWC problem by associating a weight of 1 with each vertex. As a result, the MinVWC problem is also NP-hard [,]. This problem arises in several applications like traffic assignment [,], manufacturing [], scheduling [], etc. Up to now, there are two types of algorithms for this problem: complete algorithms [,,] and incomplete ones [,,].
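The definitions above can be made concrete with a short sketch. The graph, weights, and coloring below are illustrative assumptions rather than an instance from the paper; the cost function follows the definition of the weight of a color as the weight of its heaviest vertex.

```python
def is_feasible(adj, coloring):
    """A coloring is feasible iff no edge joins two vertices of one color."""
    return all(coloring[u] != coloring[v] for u in adj for v in adj[u])

def cost(weights, coloring):
    """cost(S): sum over colors of the weight of that color's heaviest vertex."""
    heaviest = {}
    for v, c in coloring.items():
        heaviest[c] = max(heaviest.get(c, 0), weights[v])
    return sum(heaviest.values())

# Hypothetical instance: a triangle {1, 2, 3} plus a pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 5, 2: 3, 3: 2, 4: 4}
S = {1: 'a', 2: 'b', 3: 'c', 4: 'a'}   # vertex -> color class
assert is_feasible(adj, S)
assert cost(w, S) == 5 + 3 + 2         # heaviest of 'a' is 5, of 'b' is 3, of 'c' is 2
```

Note how vertex 4 reuses color 'a' for free, since vertex 1 is already the heaviest vertex of that class.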
1.2. Current Reduction Approaches
In MinVC solving, a clique provides a lower bound for reductions because any two vertices in a clique cannot have the same color. In MinVWC solving, a clique is also able to do so, as can be found in the most recent reduction method RedLS published in []. Roughly speaking, it is desirable to have cliques in hand that are large and whose vertices all have big weights. So one may think that we can call an incomplete maximum vertex weight clique solver like [,] to obtain a list of optimal or near-optimal cliques. Such examples can be found in the state-of-the-art method RedLS. In detail, RedLS first performs reduction to obtain a reduced subgraph and then does a local search on that subgraph. In this paper, we will abuse the name RedLS to refer to its reduction component as well. As to its reduction component, RedLS first samples a proportion of vertices, and for each such vertex v, it tries to find one maximum or near-maximum vertex weight clique that contains v. Second, it combines such cliques to obtain a ‘relaxed’ partition set and applies this set for reductions. In a nutshell, the reduction method of RedLS performs clique sampling and graph reduction successively without interleaving, which we believe is not so flexible and may miss a few promising cliques and bounds.
1.3. Our Approach
We do not believe that sampling maximum or near-maximum vertex weight cliques is a perfect approach for clique reductions. In fact, there are two types of cliques that may not have great total vertex weights but are still useful: those only with big size and those only with high-weight vertices, because they also contribute to a bound. Actually, solving MinVWC requires diversification; to be specific, a list of cliques that vary in both sizes and vertex weight distributions is preferred. If we call a maximum vertex weight clique solving procedure, we may finally obtain a list of cliques that lack such diversification, which results in relatively ineffective reductions. Therefore, in this paper, we abandon such an approach and instead enumerate diverse cliques. In this sense, enumerating all maximal cliques in the input graph seems to be a good choice. However, doing so may be costly and thus infeasible even in sparse graphs, so we develop an algorithm that only enumerates a certain proportion of them but, if it completes, leads to reductions as effective as those of the counterpart which enumerates all of them.
Recently, complex networks have found a number of applications such as cloud computing [,], so research on vertex-weighted coloring in large complex networks is attracting great interest. In this paper, we will present a reduction algorithm that processes large sparse graphs in order to speed up current MinVWC solving. Roughly speaking, it alternates between clique sampling and graph reductions. In a graph reduction procedure, it obtains a subgraph whose optimal solutions can be extended into optimal ones for the whole graph, and we call this subgraph a VWC-reduced subgraph (Vertex Weighted Coloring-reduced subgraph). Since most large sparse graphs obey a power-law degree distribution [,], they can be reduced considerably by cliques of a certain quality. On the other hand, a smaller graph presents a smaller search space, and the algorithm may find better cliques more easily, which can then be used for further reductions.
Our algorithm consists of three successive procedures. Firstly, we collect vertices that have maximum degrees or weights and enumerate all maximal cliques containing them. Each time we find a maximal clique, we check whether it leads to further reductions and do so immediately if possible. Secondly, we systematically look for cliques that can trigger more effective reductions. As in the previous procedure, we will perform reductions immediately once we have found such a clique. Thirdly, we perform clique reductions which were ignored in the first two procedures. We evaluated our algorithm on a list of large sparse graphs that were accessed via http://networkrepository.com/ on 1 January 2018, and compared its performance with RedLS. Experimental results show that our reduction algorithm often returns subgraphs that are considerably smaller than those obtained by RedLS. Also, we evaluated the individual impacts of the three procedures above, and found that they all had significant contributions. Furthermore, our algorithm was able to confirm that it had found the best bound on a list of benchmark graphs. Lastly, we have a theorem which indicates that although our algorithm only samples a certain proportion of maximal cliques in the whole graph, its reduction effects are equivalent to those of a counterpart that enumerates all of them in the whole graph, given sufficient run time.
2. Preliminaries
In what follows, we suppose a vertex weighted graph G = (V, E, w) with w : V → R+ being a weighting function. If (u, v) is an edge of G, we say that u and v are adjacent/connected and thus neighbors. Given a vertex v, we define the set of its neighbors, denoted by N(v), as {u | (u, v) ∈ E}, and we use N[v] to denote N(v) ∪ {v}. The degree of a vertex v, denoted by deg(v), is defined as |N(v)|. A clique C is a subset of V s.t. any two vertices in C are mutually connected. A clique is said to be maximal if it is not a subset of any other clique. By convention, we define the size of a clique C, denoted by |C|, to be the number of vertices in it. Given a graph G and a vertex subset V′ ⊆ V, we use G[V′] to denote the subgraph of G which is induced by V′, i.e., G[V′] = (V′, E′) where E′ = {(u, v) ∈ E | u, v ∈ V′}. Given a graph G, we use V(G) and E(G) to denote the set of vertices and edges of G, respectively.
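The notions of neighborhood, clique, and maximality can be sketched directly; the adjacency structure below is a made-up example used only to exercise the definitions.

```python
def is_clique(adj, C):
    """C is a clique iff any two vertices in C are mutually connected."""
    Cs = set(C)
    return all(v in adj[u] for u in Cs for v in Cs if u != v)

def is_maximal_clique(adj, C):
    """Maximal: no outside vertex is adjacent to every vertex of C."""
    Cs = set(C)
    if not is_clique(adj, Cs):
        return False
    return not any(Cs <= adj[v] for v in adj if v not in Cs)

# Hypothetical graph: triangle {1, 2, 3} plus a pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
assert is_maximal_clique(adj, {1, 2, 3})
assert not is_maximal_clique(adj, {1, 2})   # can still be extended by vertex 3
assert is_maximal_clique(adj, {3, 4})
```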
In the following, for the ease of discussion, we generalize the notion of a coloring and allow it to color vertices not in V, so a coloring has now been redefined as a collection of pairwise disjoint nonempty vertex subsets whose union contains V, and cost is redefined accordingly. Obviously, according to the new definitions, one coloring can have several representations.
Given a graph G, we use S to denote a certain coloring for it. Then Proposition 1 below shows that, given any feasible coloring, its cost on any induced subgraph does not exceed that on the whole graph.
Proposition 1.
Suppose V′ ⊆ V and G′ = G[V′]. If S is a feasible coloring for G, then
- S is also a feasible coloring for G′;
- the cost of S on G′ does not exceed the cost of S on G.
Proof.
See Appendix A. □
Throughout this paper, when we say an optimal coloring/solution, we mean a feasible coloring/solution with the minimum cost. Given a vertex u, we use color(u) to denote u’s color. In addition, we use u ← j to denote the operation which assigns u the color j, so u ← color(v) assigns u a color equal to that of v, i.e., it puts u in the same vertex subset as v.
Given a tuple t, we use |t| to denote the number of components in t. For ease of expression, if t is an empty tuple, we define |t| to be 0. Given a map f and an element x, if f(x) = y, then we say that y is x’s image under f, or simply say f(x) is x’s image under f. Such notions will be useful when we discuss the removal of vertices in clique reductions. Finally, given vertices u and v, we say that u is heavier (resp. lighter) than v if w(u) > w(v) (resp. w(u) < w(v)).
2.1. A Reduction Framework
Below we will present notions that are related to graph reductions for the MinVWC problem. The first is an extension to a coloring which relates solutions for a subgraph to that for the whole graph.
Definition 1.
Given a coloring S = {V_1, …, V_k} and a vertex x s.t. x ∉ V_1 ∪ … ∪ V_k, we define an extension to S with respect to ⟨x, j⟩ (1 ≤ j ≤ k + 1), denoted by S ⊕ ⟨x, j⟩, as the coloring that adds x to V_j if j ≤ k, and as S ∪ {{x}} if j = k + 1.
We also define S as an extension of itself. So an extension to S will not change the color of any vertices that have already been colored before. Instead, it will put a new vertex into one of the k existing vertex partitions if j ≤ k, or a new one if j = k + 1. Obviously, given two extension operations, applying them in either order yields the same coloring, so the order of the operations does not matter.
Given a set of such extension operations, we use a single extension to S to denote applying all of them, and we also say the result is an extension to S. Last, if such an extension is a feasible coloring for G, then we say that it is a feasible extension to S for G. Below we have a proposition that will be useful in proving other later propositions.
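A minimal sketch of the extension operator may help; the list-of-sets representation and the function name `extend` are assumptions, and `j` is the 1-based color index from Definition 1.

```python
def extend(S, x, j):
    """Extend coloring S with uncolored vertex x (Definition 1 style):
    previously colored vertices keep their colors; x joins the j-th class
    if j <= k, otherwise a new class is opened. S is a list of sets."""
    assert all(x not in cls for cls in S), "x must be uncolored"
    T = [set(cls) for cls in S]
    if j <= len(T):
        T[j - 1].add(x)
    else:
        T.append({x})
    return T

S = [{1, 4}, {2}]
assert extend(S, 5, 2) == [{1, 4}, {2, 5}]    # join an existing class
assert extend(S, 5, 3) == [{1, 4}, {2}, {5}]  # open a new class
assert S == [{1, 4}, {2}]                     # the original coloring is untouched
```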
The proposition below illustrates that extending a coloring will not decrease its cost.
Proposition 2.
Given a vertex x ∉ V and a coloring S for G, then
cost(S) ≤ cost(S′) for any extension S′ of S with respect to x.
Proof.
See Appendix B. □
Next, we define a type of subgraphs whose feasible solutions can be extended into feasible ones for the whole graph with the same cost.
Definition 2.
Suppose V′ ⊆ V and G′ = G[V′]. If, given any feasible coloring S′ for G′, there exists an extension to S′, denoted by S, such that S is feasible for G and cost(S) = cost(S′), then we say that G′ is a VWC-reduced subgraph for G.
This notion of VWC-reduced subgraph has two nice properties which are shown in Propositions 3 and 4 below. In detail, Proposition 3 shows that the relation of the VWC-reduced subgraph is transitive and we can compute a VWC-reduced subgraph in an iterative way.
Proposition 3.
Suppose V″ ⊆ V′ ⊆ V, G′ = G[V′] and G″ = G′[V″]; if G″ is a VWC-reduced subgraph for G′ and G′ is a VWC-reduced subgraph for G, then G″ is a VWC-reduced subgraph for G.
Proof.
See Appendix C. □
Proposition 4 shows that in order to find an optimal solution for G, we can first find an optimal solution for its VWC-reduced subgraphs.
Proposition 4.
Suppose V′ ⊆ V and G′ = G[V′] is a VWC-reduced subgraph of G, then
- 1.
- given any optimal feasible solution for G′, there exists an extension to it which is an optimal solution for G;
- 2.
- given any non-optimal feasible solution for G′, there exists no extension to it that is an optimal solution for G.
Proof.
See Appendix D. □
These propositions allow our algorithms to interleave between clique sampling and graph reduction, which is different from the approach in RedLS [] yet similar to that in FastWClq []. This is why we titled this paper ‘iterative clique reductions’.
In what follows we will introduce a general principle for computing VWC-reduced subgraphs.
2.2. Clique Reductions
Below we will utilize the notion of VWC-reduced subgraph to introduce clique reductions, which were initially proposed in []. First, we introduce the notion of absorb, which illustrates that a vertex’s close neighborhood is a weak sub-structure of a clique.
Definition 3.
Given a vertex u and a clique C = {c_1, …, c_k} in G s.t. w(c_1) ≥ … ≥ w(c_k), |N(u)| < |C|, and w(u) ≤ w(c_{|N(u)|+1}), then we say that u is absorbed by C.
Note that the condition |N(u)| < |C| guarantees that c_{|N(u)|+1} always exists. Also notice that [] did not allow the equality in w(u) ≤ w(c_{|N(u)|+1}) to hold, but we extend their statements slightly.
Example 1.
Consider in which denotes Vertex with a weight ω. Let and , then , and , thus and . So we say that is absorbed by C.
To make our descriptions more intuitive, we show C and separately below and moreover, in C a heavier vertex is shown in a darker color. If we left-shift , then we will find that there is a one-to-one map namely , s.t.
- 1.
- , that is, u is no heavier than its image under ξ;
- 2.
- and for any , , that is, images of u’s neighbors are no lighter than that of u, or we may roughly say that u’s image is the lightest compared to those of its neighbors.
Since , there exist at least 4 colors in any feasible solution for . For coloring vertices in , we only need colors, so there exists at least one color among that of which is not in use for , and we can use it to color u namely without causing any conflicts. Because , even though we assign the same color as that of , the lightest vertex in C, the cost of that coloring will not increase. So we can now simply ignore and later assign it an existing color after all its neighbors have been colored, depending on its weight as well as its neighbors’ colors. Obviously, this is a feasible extension that does not increase the cost of a coloring. Therefore is a VWC-reduced subgraph of .
In general, we have a proposition below [].
Proposition 5.
Given a graph G and a vertex u, if there exists a clique C s.t. u is absorbed by C, then G[V \ {u}] is a VWC-reduced subgraph of G.
Proof.
See Appendix E. □
So if a vertex is absorbed by a clique, it can be removed in order to obtain a VWC-reduced subgraph.
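The absorption test of Definition 3 can be sketched as below. The exact inequality is partially reconstructed here: we assume deg(u) < |C| and w(u) ≤ weight of the (deg(u)+1)-th heaviest clique vertex, matching the guarantee that this vertex exists; the instance is hypothetical.

```python
def is_absorbed(adj, w, u, C):
    """Sketch of Definition 3 (assumed form): u is absorbed by clique C if
    deg(u) < |C| and w(u) <= w(c_{deg(u)+1}), where C's vertices are ordered
    by non-increasing weight c_1, ..., c_k."""
    d = len(adj[u])
    if d >= len(C):
        return False
    ws = sorted((w[c] for c in C), reverse=True)  # c_1 heaviest ... c_k lightest
    return w[u] <= ws[d]                          # ws[d] is w(c_{deg(u)+1})

# Hypothetical instance: triangle clique {1, 2, 3} and pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 5, 2: 3, 3: 2, 4: 2}
assert is_absorbed(adj, w, 4, {1, 2, 3})  # deg(4)=1 < 3 and w(4)=2 <= w(c_2)=3
```

By Proposition 5, such a vertex can be deleted to obtain a VWC-reduced subgraph.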
Example 2. 
Now we continue with Example 1.
- 1.
- In , we find that is absorbed by C, so we have is a VWC-reduced subgraph of . Similarly we have is that of and is that of .
- 2.
- By Proposition 3, we have is that of . Also we have an optimal coloring for isand .
- 3.
- Considering Proposition 4, there exists a feasible extension to , denoted by , s.t. is an optimal solution for . In detail, for coloring the removed vertices in , we can follow the reversed order of the reductions before.
- 4.
- So an optimal coloring for isand .

Furthermore, we only need to focus on maximal cliques, as shown by the proposition below.
Proposition 6.
If u is absorbed by a clique in G, then it must be absorbed by a maximal clique in G.
From the propositions above, we can see that whether a vertex can be removed to obtain a VWC-reduced subgraph or not depends on the quality of the cliques in hand. Below we define a partial order ⊑ between cliques which indicates whether vertices absorbed by one clique are a subset of those absorbed by the other.
Definition 4.
Given a graph G and its two cliques C1 = {u_1, …, u_m} and C2 = {v_1, …, v_n} where w(u_1) ≥ … ≥ w(u_m) and w(v_1) ≥ … ≥ w(v_n), we define a partial order ⊑ s.t. C1 ⊑ C2 iff
- 1.
- m ≤ n;
- 2.
- w(u_i) ≤ w(v_i) for 1 ≤ i ≤ m.
So if C1 ⊑ C2, then C2 leads to reductions that are at least as effective as those resulting from C1. In what follows, if C1 ⊑ C2, we say that C1 is subsumed by C2. Obviously, we have a proposition below which shows that the ⊑ relation is transitive.
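The subsumption check of Definition 4 reduces to a position-wise comparison of non-increasing weight lists; the weights below are illustrative assumptions.

```python
def subsumed(w, C1, C2):
    """C1 is subsumed by C2 (Definition 4): |C1| <= |C2| and, comparing the
    non-increasing weight lists position by position, C1's i-th weight never
    exceeds C2's i-th weight."""
    w1 = sorted((w[v] for v in C1), reverse=True)
    w2 = sorted((w[v] for v in C2), reverse=True)
    return len(w1) <= len(w2) and all(a <= b for a, b in zip(w1, w2))

w = {1: 5, 2: 3, 3: 2, 4: 4, 5: 4}
assert subsumed(w, {2, 3}, {1, 4, 5})    # (3, 2) fits under (5, 4, 4)
assert not subsumed(w, {1, 2}, {4, 5})   # 5 > 4 at the first position
```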
Proposition 7.
Given a graph G and its three cliques C1, C2, C3, if C1 ⊑ C2 and C2 ⊑ C3, then C1 ⊑ C3.
Then we have two propositions which show that if C1 ⊑ C2, then we can keep C2 and ignore C1.
Proposition 8.
Suppose u is a vertex and C1, C2 are cliques s.t. C1 ⊑ C2; then if u is absorbed by C1, it is also absorbed by C2.
The proposition below states that if there occur reductions among C1, C2, and their vertices, where C1 ⊑ C2, then keeping C2 is at least as good as keeping C1.
Proposition 9.
Suppose are cliques s.t. and where and , if and , then we have for any , if is absorbed by , then is absorbed by .
So if we utilize C1 and C2 to perform clique reductions where C1 ⊑ C2, we can simply ignore C1 and keep C2.
2.3. A State-of-the-Art Reduction Method
To date, as we know, the only work on reductions for vertex weighted coloring is RedLS [], which constructs promising cliques like FastWClq [] and combines these cliques in an appropriate way to obtain a ‘relaxed’ partition set. Then it utilizes this set to perform reductions and compute lower bounds. So RedLS consists of clique sampling and graph reductions as successive procedures without interleaving.
Notice that FastWClq alternates between clique sampling and graph reduction and benefits much from this approach. Hence, it is interesting to test whether such an alternating approach would lead to better reductions in vertex weighted coloring. Fortunately, the reduction framework introduced above allows us to do so.
For simplicity, we will put the details of RedLS in Section 4, where we will be able to reuse our notations and algorithms for succinct presentation.
3. Our Algorithm
Our reduction algorithm consists of three successive procedures: Algorithms 1 and 2 and post reductions in Section 3.3. As to Algorithm 1, we will first run it with maximum-weight vertices assigned to in Line 1 and then run it again with maximum-degree vertices in the same way.
3.1. Sampling Promising Cliques
Algorithm 1 samples promising cliques that may lead to considerable reductions with three components as below.
- contains maximum degree/weight vertices and helps find promising cliques.
- contains cliques that may probably lead to effective reductions and will be utilized in post reductions in Section 3.3.
- is a list of weights in non-increasing order and will be used for reductions.
In Line 7, we adopt depth-first search to enumerate all maximal cliques which contain vertices only in . This operation can be costly, so in Section 3.4, we will set a cutoff for it. To be specific, before each enumeration, we will first put all related vertices into a list and shuffle this list randomly; then we will pick decision vertices one after another from this list to construct maximal cliques. By decision vertices, we mean those vertices that can be either included or excluded to form different maximal cliques.
Furthermore, Lines 8, 9, 10, and 16 will be introduced in Definition 8. Lines 21 and 22 are based on Proposition 16 and will be introduced in detail there.
Algorithm 1: PromisingCliqueReductions
3.1.1. Geometric Representations
First, we introduce a notation for representing weight distributions within given cliques.
Definition 5.
Given a clique C = {v_1, …, v_k} s.t. w(v_1) ≥ … ≥ w(v_k), we define its weight list, denoted by wl(C), to be (w(v_1), …, w(v_k)).
Second, we introduce an operator for appending items to the end of a weight list, and it is somewhat like counterparts for vector in C++, ArrayList in Java, or list in Python.
Definition 6.
Given a list of weights L = (w_1, …, w_t) and a weight ω, we define the append operation L ∥ ω as (w_1, …, w_t, ω) if t > 0 and as (ω) if t = 0.
In order to describe properties of our algorithms intuitively, we introduce Euclidean geometric representations of a list of weights in a rectangular coordinate system as below.
Algorithm 2: BetterBoundReductions
Algorithm 3: updateTopLevelWeights
Definition 7.
Given a list of positive numbers L = (w_1, …, w_t), we draw a curve on the Rectangular Coordinate Plane through the list of coordinates (1, w_1), …, (t, w_t) by connecting adjacent points, and we call this curve the derived curve of L.
Example 3.
Notice . There are three maximal cliques, , and with , and .
We draw the derived curves of , and as (blue), (green) and (red) in (I) in Figure 1.
Figure 1.
(I) Derived curves of , and represented by blue, green and red curves, respectively; (II) after being updated by and successively, which is the tightest envelope of them.
On the other hand, we draw the derived curve of which has just been updated with respect to and successively in Algorithm 3 as (black) in (II) in Figure 1.
- In detail, when has just been updated with respect to , its derived curve exactly overlaps that of .
- Next when has just been updated with respect to , a part of its derived curve, namely , has moved to its top-right, namely , so the derived curve of has turned into . Notice that having been updated with respect to and , the derived curve of is the bottom-left most curve that is not exceeded by that of and . In other words, has become the tightest envelope of that of and .
Actually, if we switch the order of and in the procedure above, we will obtain the same sequence in . In general, from the second time on, each time Algorithm 3 ends with being updated, parts of the derived curve of move to their top-right.
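The envelope behavior described above can be sketched as follows, assuming (as Propositions 12–14 suggest) that Algorithm 3 amounts to taking the position-wise maximum of the maintained list and the clique's weight list; the name `W_top` and the instance are illustrative.

```python
def update_top_level_weights(W_top, w, C):
    """Fold clique C into W_top so that W_top stays the position-wise maximum
    (the tightest envelope) of all weight lists seen so far."""
    wl = sorted((w[v] for v in C), reverse=True)
    for l, omega in enumerate(wl):
        if l < len(W_top):
            if omega > W_top[l]:      # C deviates above W_top at position l+1
                W_top[l] = omega
        else:                         # C is longer than W_top: append
            W_top.append(omega)
    return W_top

w = {1: 5, 2: 3, 3: 2, 4: 4, 5: 4, 6: 1}
W = []
update_top_level_weights(W, w, {2, 3})      # W becomes [3, 2]
update_top_level_weights(W, w, {1, 6})      # W becomes [5, 2]
update_top_level_weights(W, w, {4, 5, 6})   # W becomes [5, 4, 1]
assert W == [5, 4, 1]
```

Since a position-wise maximum is order-independent, feeding the cliques in any order yields the same final list, matching the observation above.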
Now we consider the derived curves of and and define several notions below which describe the relationship between a vertex weighted clique and a list of non-increasing weights.
Definition 8.
Given a list of weights L = (w_1, …, w_t) s.t. w_1 ≥ … ≥ w_t and a clique C with wl(C) = (v_1, …, v_k),
- 1.
- we say that C is covered by L iff k ≤ t and v_l ≤ w_l for any 1 ≤ l ≤ k;
- 2.
- we say that C intersects with L at l iff l ≤ min(k, t) and v_l = w_l;
- 3.
- we say that C deviates above L at l iff l ≤ k, and l > t or v_l > w_l.
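The three predicates can be sketched over plain weight lists; the boundary cases follow our reading of Definition 8 (a hedged reconstruction), and the lists are illustrative.

```python
def covered(cw, L):
    """Clique weight list cw is covered by L: no longer, and never above."""
    return len(cw) <= len(L) and all(v <= w for v, w in zip(cw, L))

def intersects_at(cw, L, l):
    """cw intersects L at 1-based position l: the two values coincide."""
    return l <= min(len(cw), len(L)) and cw[l - 1] == L[l - 1]

def deviates_above_at(cw, L, l):
    """cw deviates above L at l: l exists in cw and exceeds L (or L ends)."""
    return l <= len(cw) and (l > len(L) or cw[l - 1] > L[l - 1])

L = [5, 4, 1]                                # a non-increasing weight list
assert covered([4, 3], L)
assert intersects_at([5, 2], L, 1)
assert not deviates_above_at([4, 3], L, 3)   # position 3 does not exist in (4, 3)
assert deviates_above_at([4, 4, 2], L, 3)    # 2 > 1 at position 3
```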
Example 4.
Consider Example 3 with having been updated with respect to and . By referring to (II) in Figure 1, we can find the following.
- 1.
- and are covered by .
- 2.
- intersects with at 2, 3, 4 and 5 (see , , and ). intersects with at 1 (see ).
- 3.
- deviates above at 2 (see ).
Obviously, we have a proposition below which helps determine whether a clique is effective in reductions.
Proposition 10.
- 1.
- C1 ⊑ C2 iff C1 is covered by wl(C2).
- 2.
- If C1 is covered by wl(C2) and C2 is covered by L, then C1 is covered by L.
- 3.
- If C1 deviates above L at certain l and C2 is covered by L, then C1 is not subsumed by C2.
3.1.2. Algorithm Execution
As to the execution of Algorithm 3, the next proposition presents a sufficient and necessary condition in which will be updated.
Proposition 11.
The top-level weight list in Algorithm 3 will be updated if and only if it is empty or the input clique C deviates above it at certain l.
Also, we have propositions below which illustrate how will be updated.
Proposition 12 (First Top-level Insertion).
Suppose that the top-level weight list is empty and wl(C) = (v_1, …, v_k); then v_1, …, v_k will successively be appended to the end of the list in Line 7 in Algorithm 3.
Proposition 13 (Successor Top-level Updates).
Suppose where ,
- 1.
- for any , will be replaced with in Line 5 in Algorithm 3 iff C deviates above at l;
- 2.
- for any , a weight will be inserted in Line 7 in Algorithm 3 iff C deviates above at l.
The following proposition shows the relation between and C if it has been updated in Algorithm 3.
Proposition 14.
If the top-level weight list has been updated in Algorithm 3, then it covers the clique C at the end of this algorithm.
Such a covering relation will still hold after Algorithm 3 returns program control back to Algorithm 1. Then we have a proposition about in Algorithm 1.
Proposition 15.
- 1.
- Right before the execution of Line 22, for any , is covered by .
- 2.
- In Line 19, if C deviates above , then for any , we have .
Intuitively right before the execution of Line 21, can do whatever any clique in can, with exceptions being dealt with in Section 3.3. In Line 19, if C updates , then it will be allowed an entry into .
Example 5.
After Algorithm 1 is run on in Example 3, has become and has been updated to be , as is shown as in Figure 2. The details are as follows.
Figure 2.
after being updated by , and . Derived curves of , , and represented by blue, green, red and black curves respectively.
- 1.
- because but . So either was refused to enter or it was removed from , depending on whether the algorithm found earlier than it found .
- 2.
- (blue) and (red) are both covered by .
- 3.
- As to the two cliques above, neither subsumes the other.
So in Line 19, implies . In other words, if holds, then C is not covered by any clique in , i.e., C is not subsumed by any clique in . In this sense, we add it to and this will not cause obvious redundancy.
Based on the discussion above, we have
- right before the execution of Line 21 in Algorithm 1, contains best-found bounds formed by all previous enumerated cliques;
- and if any clique improves this bound, then no previously enumerated clique subsumes it. Unlike [], we will apply the top-level weight list instead of a ‘relaxed’ partition set to perform reductions in Algorithms 1 and 2.
Furthermore, for the sake of efficiency, we should keep the maintained clique set as small as possible and as powerful as possible. So in Algorithm 1, if a stored clique is subsumed by C, then we will simply remove it in Line 14, and this will do no harm to the power of the set. In addition, if a stored clique does not intersect with the derived curve of the top-level weight list, its reduction power is overwhelmed, so we remove it in Line 18 as well.
3.1.3. Reductions Based on Top Level Weights
Next we have a proposition below which states that can be utilized for clique reductions.
Proposition 16.
Given , then
- 1.
- for any , there exists a clique and ;
- 2.
- given any feasible coloring for G, ;
- 3.
- given any vertex u s.t. and ,
- (a)
- u is absorbed by some certain clique in G;
- (b)
- and is a VWC-reduced subgraph of G.
Notice that Item 1 states that the top-level weight list is the tightest envelope of all cliques that have been enumerated (see Figure 2 above for details and intuition).
- In this sense, if t was decreased or any of the top-level weights was decreased, the derived curve of the list would shift left or down, which in turn would make at least one clique deviate above it at some l. Therefore there must exist a color whose weight was smaller than its lower bound.
- Considering that the weights of other colors are all underestimated, the sum of all components in the new variant of the list could never be achieved by any feasible coloring.
So in order to obtain a feasible coloring that avoids lower-bound conflicts in any enumerated cliques, we have to accept the cost revealed by the top-level weight list, or even more. In a word, any feasible coloring for G costs at least the sum of the top-level weights, which will be shown and proved formally in Proposition 17 and has also been proved by [] via another approach.
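This lower-bound argument can be checked on a tiny instance. The graph, its coloring, and the envelope `W_top` (derived from the triangle clique) are illustrative assumptions; the bound itself is simply the sum of the top-level weights.

```python
def cost(weights, coloring):
    """Sum over colors of the weight of that color's heaviest vertex."""
    heaviest = {}
    for v, c in coloring.items():
        heaviest[c] = max(heaviest.get(c, 0), weights[v])
    return sum(heaviest.values())

# Hypothetical instance: triangle {1, 2, 3} plus a pendant vertex 4.
w = {1: 5, 2: 3, 3: 2, 4: 4}
W_top = [5, 3, 2]                       # envelope obtained from clique {1, 2, 3}
S = {1: 'a', 2: 'b', 3: 'c', 4: 'a'}    # a feasible coloring of the instance
assert cost(w, S) >= sum(W_top)         # every feasible coloring pays >= 10
```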
Given a vertex u, we represent it as a point on the Rectangular Coordinate Plane for intuition (see Figure 3). Then the point of u is strictly below the derived curve of the top-level weight list iff the conditions of Item 3 in Proposition 16 hold, and such a location relation implies Items 3a and 3b above. Moreover, each time one neighbor of u is removed, deg(u) is decreased by 1 and the point shifts left by 1. Meanwhile, as we enumerate cliques, the derived curve tends to move to its top-right. These opposite trends will gradually help reduce the input graph.
Figure 3.
Iterated Removals. represents while represents respectively. (I) Right before reductions; (II) been removed; (III) been removed; (IV) been removed.
Example 6.
Consider in Example 1 in which there exists a maximal clique with . As to the four other vertices with degrees , we represent them by , respectively, on a rectangular coordinate plane in (I) in Figure 3 below. For instance, the coordinate of is namely . Notice that and have the same degree and weight, so their corresponding points overlap on the coordinate plane, to be specific, and are represented by and , respectively, which overlap.
On the other hand, we can utilize the top-level weight list instead of specific cliques to perform clique reductions. For example, Line 21 in Algorithm 1 exploits it to perform reductions based on Proposition 16 above. In detail, applyCliqueReductions(G, , S) performs clique reductions and obtains a VWC-reduced subgraph of G, but keeps all vertices in S in the returned subgraph. We do this for the following reason: in Line 21, since we are still enumerating cliques, we should keep all related vertices; otherwise, the procedure may crash. However, in Line 22, since we have completed the enumeration procedures, we do not have to keep any vertices in the VWC-reduced subgraph. We also remind readers that in the applyCliqueReductions procedure, each time one vertex is removed, all its neighbors will be taken into account for further reductions because their degrees have all been decreased by 1.
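The worklist behavior of such a procedure can be sketched as below. The removal test is a hedged reconstruction: we assume a vertex is removable when its point (deg(u)+1, w(u)) lies strictly below the derived curve of the envelope, and the names `W_top`, `keep`, and `apply_clique_reductions` are illustrative.

```python
from collections import deque

def apply_clique_reductions(adj, w, W_top, keep=frozenset()):
    """Remove vertices whose point (deg(u)+1, w(u)) lies strictly below the
    derived curve of W_top, re-examining neighbours after each removal.
    Vertices in `keep` are never removed (the role of S in Line 21)."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    t = len(W_top)

    def removable(u):
        d = len(adj[u])
        # assumed test: deg(u)+1 <= t and w(u) strictly below W_top's
        # (deg(u)+1)-th weight, i.e. W_top[d] with 0-based indexing
        return u not in keep and d + 1 <= t and w[u] < W_top[d]

    queue, removed = deque(adj), []
    while queue:
        u = queue.popleft()
        if u in adj and removable(u):
            for v in adj.pop(u):
                adj[v].discard(u)
                queue.append(v)        # v's degree dropped: re-check it
            removed.append(u)
    return adj, removed

# Hypothetical instance: triangle {1, 2, 3} plus pendant vertex 4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 5, 2: 3, 3: 2, 4: 2}
reduced, removed = apply_clique_reductions(adj, w, [5, 3, 2])
assert removed == [4] and 4 not in reduced
```

Re-enqueueing the neighbours of each removed vertex implements the cascading effect noted above: one removal lowers degrees and can unlock further removals.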
Notice that in Proposition 16 we require rather than , because we have to ensure that u is absorbed by a clique that does not contain u, which is coincident with the approach in []. However, this method may fail to perform some reductions which can be performed by Proposition 5. Yet this is not a problem, because, at the end of our reductions, we will deal with that case. See Section 3.3 for more details.
Example 7.
Now we call applyCliqueReductions
which is based on Proposition 16 as below. See Figure 3 for visualization.
- 1.
- In (I) in Figure 3, we find that the derived curve of is and F is strictly below it, so Item 3 in Proposition 16 is applicable and the corresponding vertex is removed.
- 2.
- Because of the removal of , the degrees of , and are all decreased by 1, so their corresponding points on the coordinate plane all shift left by 1 (see (II) in Figure 3). Notice that , , and overlap at this time.
- 3.
- Notice that is strictly below the derived curve of now, so we remove it like before, and this causes the left movement of (see (III) in Figure 3).
- 4.
- Analogously we remove because is strictly below now (see (IV) in Figure 3).
- 5.
- Note that removing is not allowed by Proposition 16, but it is permitted by Proposition 5. This shows the weakness of our applyCliqueReductions procedure, and we will address this issue in Section 3.3.
Obviously, in order to perform effective reductions, we want to be as big as possible. Hence, in Algorithm 2, we will try to increase their values. Furthermore, Proposition 16 is helpful in proving Proposition 17 below which computes a lower-bound of the cost of a feasible coloring.
Proposition 17.
Given any feasible coloring S for G and the top-level weight list (w_1, …, w_t), we have cost(S) ≥ w_1 + … + w_t.
Proof.
See Appendix F. □
Also, we have a proposition below which will be helpful in Section 3.3.
Proposition 18.
Right before the execution of Line 22 in Algorithm 1, there do not exist any two distinct cliques in the maintained clique set s.t. one is subsumed by the other.
Proof.
See Appendix G. □
3.2. Searching for Better Cliques
Given , Algorithm 2 attempts to increase the values of and it even tries to find a clique whose size is bigger than t. So if Algorithm 2 completes, it will be able to confirm the following.
- Each component in has achieved its maximum possible value.
- There exists no clique whose size is greater than .
In Algorithm 2, we use to denote whether is increased in the iteration for i. In Line 2, means that we fail to update . In our algorithm, there are two tricks that refer to as below.
- If and we have confirmed that there are no cliques that improve , then there will be no cliques which improve .
- If and we fail to update , then it will be hard for us to update as well, so we adopt a continue statement here to avoid probably hopeless efforts.
We also call the procedure applyCliqueReductions which was explained in the previous subsection. Notice that in Line 4, we enumerate maximal cliques which contain vertices in only. To be specific, when , we will do so by considering vertices with weights greater than only, because we are now focusing on increasing . Like the counterpart in Algorithm 1, we will shuffle related vertices randomly before each enumeration.
3.2.1. Increasing Top Level Weights
Like Algorithm 1, we also exploit depth-first search to enumerate maximal cliques. Yet different from it, we will rarely enumerate all such maximal cliques. Instead, once we have found a clique that increases any value among , we will immediately perform reductions and break the enumeration procedure (see Line 14). Below we have a proposition that illustrates a sufficient and necessary condition in which will be increased.
Proposition 19.
As to the outermost loop in Algorithm 2, for any , will be increased if and only if there exists a clique s.t. .
Example 8.
Suppose we have , and we are now focusing on increasing , whose current value is 3. Suppose that among the vertices with weights greater than , we have found a clique C whose derived curve deviates above that of at 3 (see Figure 4). So can now increase to 4, and we will start another iteration to check whether can increase further.
Figure 4.
A clique is found to improve . represents while represents the derived curve of .
Notice that Line 14 breaks the clique enumeration loop, and control eventually returns to Line 3 with an increased . We do this for the following reason: since we have increased , any vertex whose weight is bigger than the previous value but not bigger than the current one will not help further increase . Hence, we eliminate these vertices from and enumerate maximal cliques again with respect to the same i (see Line 14). With a smaller , we can increase to its maximum possible value more efficiently. In short, we increase gradually until it reaches its maximum. Notice that Line 7 might also increase t, so long as the algorithm has found a clique bigger than any found before. Lastly, we remind readers that although we are focusing on increasing , there could be side effects that increase as well, where , so long as we have found a clique that contains sufficiently many vertices with big weights.
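The raise-and-shrink iteration just described can be sketched as follows. This is an illustrative Python sketch with invented names, and a brute-force combination search stands in for the paper's depth-first enumeration; indices are 0-based, so `W[i]` denotes the (i+1)-th top-level weight.

```python
from itertools import combinations

def is_clique(adj, vs):
    """Check whether all vertices in vs are pairwise adjacent."""
    return all(v in adj[u] for u, v in combinations(vs, 2))

def raise_component(adj, weight, W, i):
    """Keep raising W[i]: only vertices heavier than the current W[i]
    can help, so we search for a clique of i+1 such vertices; each
    success raises W[i] and shrinks the candidate set, and we repeat
    until no improving clique remains."""
    improved = True
    while improved:
        improved = False
        cand = [v for v in adj if weight[v] > W[i]]  # shrinks as W[i] grows
        for vs in combinations(cand, i + 1):
            if is_clique(adj, vs):
                W[i] = min(weight[v] for v in vs)  # the clique's (i+1)-th heaviest weight
                improved = True
                break   # like Line 14: restart with a smaller candidate set
    return W[i]

# Toy graph: triangle {a, b, c} plus an isolated heavy vertex d.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": set()}
weight = {"a": 5, "b": 4, "c": 3, "d": 7}
W = [7, 0, 0]
raise_component(adj, weight, W, 1)
print(W[1])   # 4: the best second-level weight over all 2-cliques
```

Note that the isolated vertex d, despite being heaviest, cannot contribute to W[1] because it belongs to no 2-clique.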
3.2.2. Effects of Better Cliques
There is a chance that the clique C obtained in Line 4 is not a maximal clique for the whole graph G, thus there may exist another clique in G that is a superset of C and has more reduction power. Alternatively, C may expand to a bigger clique by including vertices with a weight not greater than and lead to more reductions. Yet this is not a problem. If such a case exists, the full reduction power will be exploited in later iterations.
In the first few iterations of the outermost loop, i is relatively small and thus is relatively big, which is likely to result in a relatively small , so enumerating cliques in probably costs relatively little time. Moreover, these cliques may lead to effective reductions which significantly decrease the time cost of later enumerations. When , we have ; thus in the worst case, we will have to enumerate all maximal cliques in G, which could be prohibitively time-consuming. Yet this is not so serious, because
- we are dealing with large sparse graphs, which often obey a power-law degree distribution,
- and we have performed considerable reductions before, so at this time, G is likely to be small enough to allow maximal clique enumerations. In Section 3.4, we will also set a cutoff for enumerating cliques.
Lastly, we remind readers that as i increases and decreases, grows larger and larger, and thus enumerating cliques becomes increasingly time-consuming, so we need to set a cutoff for enumerations (see Section 3.4). Due to this cutoff, once we fail to confirm that has achieved its maximum, we will make no further effort to confirm whether has arrived at its best possible value for any .
Moreover, we have a proposition below which shows that, given sufficient run time, Algorithm 2 will be able to increase to its maximum possible value for any , where is the maximum size of a clique in G.
Proposition 20.
As to the outermost loop in Algorithm 2, we have
- 1.
- for any , right before i is increased by 1, there exist no cliques which deviate above at i.
- 2.
- for , when the iteration ends, there exist no cliques which deviate above at i.
Then, by this proposition, we have a theorem below which shows that our clique reduction algorithm is as effective as a counterpart which enumerates all maximal cliques in G, if time permits. To describe this theorem, we first define an equality relation between two lists in Definition 9.
Definition 9.
Given two lists of weights and , we say that iff and for any .
Theorem 1.
Let be the returned after Algorithms 1 and 2 are executed successively, and be the returned after Algorithm 4 is executed, then .
Note that Algorithm 4 can be time-consuming even for sparse graphs.
| Algorithm 4: computeTopLevelWeightsWithBF(G) |
3.3. Post Reductions
Section 3.1 mentions that we have not fully exploited Proposition 5 to perform reductions, so in this subsection, we deal with the remaining case. At this stage, for each vertex, we will examine whether it is absorbed by some clique in and perform reductions if so.
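Our reading of the absorption test can be sketched as follows (an illustrative Python sketch; the predicate name and the exact formulation in terms of top-level weights are our own paraphrase of Proposition 5, not the paper's code): a vertex u of current degree deg_u is absorbed when some recorded clique has more than deg_u vertices and its (deg_u+1)-th heaviest weight is at least w(u), so that u can take one of the first deg_u+1 colors without raising any color's weight.

```python
def absorbed(deg_u, w_u, W):
    """Illustrative absorption test (cf. Proposition 5).

    W[j] is the (j+1)-th top-level weight derived from recorded
    cliques. u's neighbors occupy at most deg_u colors, so one of the
    first deg_u + 1 colors is free for u; if w_u <= W[deg_u], coloring
    u with it does not increase that color's weight, and u can be
    removed and re-colored afterwards."""
    return deg_u < len(W) and w_u <= W[deg_u]

W = [6, 4, 2]                 # illustrative top-level weights
print(absorbed(1, 3, W))      # True:  degree-1 vertex of weight 3 <= W[1] = 4
print(absorbed(2, 3, W))      # False: 3 > W[2] = 2
```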
3.4. Implementation Issues
Although we apply various tricks to enumerate diverse cliques for effective reductions, our algorithm may still become stuck in dense subgraphs, so we have to set a certain cutoff for our algorithm.
We believe that a good reduction algorithm should not focus too much on a local subgraph, so our cutoff will prevent each clique enumeration from spending too much time. The price of this compromise is that we have to sacrifice some of the good properties above; to be specific, we can no longer expect all values in to increase to their maximum. Yet under our parameter setting, quite a few values are still confirmed to achieve their optimum.
Furthermore, in some large graphs, we may need to consider a great many vertices and enumerate cliques that contain them, so there could be numerous enumerations. Hence, even though each enumeration needs a small amount of time, the total time cost of so many enumerations might not be affordable, so we also need to limit the total amount of time spent on enumerations.
3.4.1. Limiting the Number of Decisions Made in Each Enumeration
Notice that we adopt a depth-first search to enumerate maximal cliques in Algorithms 1 and 2. During each depth-first enumeration, decisions of whether a vertex should be included in the current clique have to be made, and the search has to traverse both branches recursively, so there may be an exponential number of decisions in a single depth-first enumeration. Hence, in any enumeration, if has not been improved within consecutive decisions, we will simply stop this enumeration and move on to the next one.
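The decision-budget cutoff can be sketched as below (an illustrative Python sketch with invented names; here "improvement" simply means finding a new largest clique, whereas in the paper it means improving the top-level weights).

```python
def budgeted_enumeration(adj, vertices, budget):
    """Include/exclude depth-first clique enumeration that gives up
    once `budget` consecutive branching decisions pass without an
    improvement, as in the paper's per-enumeration cutoff."""
    state = {"since_improve": 0, "best": []}

    def dfs(i, clique):
        if state["since_improve"] > budget:
            return                              # cutoff reached: give up
        if i == len(vertices):
            if len(clique) > len(state["best"]):
                state["best"] = clique[:]       # improvement found
                state["since_improve"] = 0      # reset the decision counter
            return
        v = vertices[i]
        state["since_improve"] += 1             # one branching decision
        if all(u in adj[v] for u in clique):
            dfs(i + 1, clique + [v])            # branch 1: include v
        dfs(i + 1, clique)                      # branch 2: exclude v

    dfs(0, [])
    return state["best"]

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: set()}
print(budgeted_enumeration(adj, [0, 1, 2, 3], budget=1000))  # [0, 1, 2]
```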
3.4.2. Limiting Running Time
Some benchmark graphs contain a large number of vertices that are of the greatest weights or degrees, so there can be a great number of enumerations in Algorithm 1. Moreover, as to Algorithm 2, there can be many candidate vertices that may form a clique to improve a particular component in , hence numerous enumerations may be performed as well.
Even though we limit the number of decisions and thus the time spent in each enumeration, too many enumerations may still cost our algorithm too much time. Hence, in practice, we employ another parameter T to limit the running time of our algorithm. More specifically, in Algorithm 2, we will check whether the total time spent from the very beginning of our whole algorithm is greater than T. If so, we will simply stop Algorithm 2 and turn to post reductions.
In fact, if Algorithm 2 is stopped because of this parameter, the cases below can arise. For ease of presentation, we let K be the number of components in , which is equal to the size of the largest clique that has been found.
- Algorithm 2 is unable to tell whether there exists a clique C s.t. and C is able to improve a particular component in .
- Algorithm 2 has confirmed that any clique containing at most K vertices will not improve . Yet it is unable to confirm whether there exists a clique whose size is bigger than K.
3.4.3. Programming Tricks
In graph algorithms, there is a common operation as follows: given a graph and two of its vertices u and v, determine whether u and v are neighbors. In our program, this operation is called frequently, so we have to implement it efficiently. However, storing a large sparse graph as an adjacency matrix is impractical, so we adopted the hash-based data structure proposed in [] instead.
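The idea (not the exact structure of the cited work) can be illustrated as follows: store each vertex's neighborhood in a hash set, so the neighbor test costs O(1) expected time while memory stays linear in the graph size.

```python
def build_adj(n, edges):
    """Adjacency stored as hash sets: O(1) expected neighbor test with
    O(|V| + |E|) memory, unlike an adjacency matrix's O(|V|^2), which
    does not fit in memory for large sparse graphs."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

adj = build_adj(4, [(0, 1), (1, 2)])
print(1 in adj[0], 0 in adj[2])   # True False
```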
In our algorithm, we often have to obtain vertices of certain weights or degrees. Moreover, as vertices are removed, the degrees of their neighbors decrease. Furthermore, our algorithm interleaves clique sampling and graph reductions, which requires us to maintain these relations on the fly. So we need efficient data structures to maintain the vertices of each degree and/or weight in the reduced graph. Hence, we adapted the so-called Score-based Partition in [] to do so.
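A minimal sketch in the spirit of such a partition (the details of the cited structure may differ; this is our own simplified illustration): vertices are grouped into buckets by their current degree, giving constant-time access to all vertices of a given degree and constant-time updates when a removal decrements a neighbor's degree.

```python
class DegreeBuckets:
    """Vertices partitioned by current degree (illustrative sketch)."""

    def __init__(self, degrees):
        self.deg = list(degrees)
        self.buckets = {}
        for v, d in enumerate(self.deg):
            self.buckets.setdefault(d, set()).add(v)

    def decrement(self, v):
        """Move v down one bucket after one of its neighbors is removed."""
        d = self.deg[v]
        self.buckets[d].discard(v)
        self.deg[v] = d - 1
        self.buckets.setdefault(d - 1, set()).add(v)

    def of_degree(self, d):
        """All vertices whose current degree is exactly d."""
        return self.buckets.get(d, set())

db = DegreeBuckets([2, 2, 1])      # degrees of vertices 0, 1, 2
db.decrement(0)                    # a neighbor of vertex 0 was removed
print(sorted(db.of_degree(1)))     # [0, 2]
```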
4. Related Works
To the best of our knowledge, the only reduction algorithm for vertex weighted coloring is RedLS [], whose details are shown in Algorithm 5. In this algorithm, C is a candidate clique under construction, and each vertex in is connected to every vertex in C. Hence, any single vertex in can be added to C to form a larger clique.
| Algorithm 5: RedLS |
In Line 2, 1% of the vertices in V are randomly collected to obtain . As to the outer loop starting from Line 4, each vertex v in is picked, and a maximal clique containing v is constructed from Lines 6 to 11, based on a heuristic inspired by FastWClq []. In the inner loop starting from Line 8, Line 9 picks a vertex u in , Line 10 places u into C, and Line 11 eliminates the vertices which are not connected to every vertex in C, i.e., which cannot be added to C to form larger cliques.
Notice that Line 9 selects the next vertex to put into C with a look-ahead technique. To be specific, rather than choosing the heaviest vertex to maximize the immediate benefit, it tries to maximize the total weight of the remaining candidate vertices, i.e., , hoping for greater future benefits. So given a vertex v, Algorithm 5 always aims to find maximum or near-maximum weight cliques that contain it.
Each time a maximal clique C is constructed, Algorithm 5 will compare with C and update if needed (see Line 12). Actually, RedLS adopts the so-called 'relaxed' vertex partition, yet the effects are equivalent to our descriptions with in Algorithm 5. After enumerating cliques with respect to vertices in , Algorithm 5 will call the applyCliqueReductions procedure and perform reductions based on Proposition 16. However, when determining whether a vertex u can be removed, it always takes u's degree in the whole graph, i.e., no degree decreases are taken into account. In a nutshell, RedLS performs clique sampling and graph reductions as successive procedures, which is different from our interleaving approach.
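The clique construction of Lines 6 to 11, as we understand it, can be sketched in Python as below (function and variable names are our own; a greedy heaviest-first pick would behave differently, as the usage example shows).

```python
def grow_clique(adj, weight, v):
    """Look-ahead clique construction in the spirit of RedLS/FastWClq
    (illustrative sketch): grow a maximal clique from v, each time
    picking the candidate u that maximizes the total weight of the
    candidates that would remain afterwards, instead of greedily
    picking the heaviest candidate."""
    C = [v]
    cand = set(adj[v])
    while cand:
        u = max(cand, key=lambda u: sum(weight[x] for x in cand & adj[u]))
        C.append(u)
        cand &= adj[u]    # keep only vertices adjacent to every vertex in C
    return C

# Triangle {0, 1, 2} plus a heavy vertex 3 attached only to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
weight = {0: 1, 1: 1, 2: 1, 3: 5}
print(sorted(grow_clique(adj, weight, 0)))   # [0, 1, 2]
```

A greedy heaviest-first choice would pick vertex 3 first and end with the clique {0, 3}; the look-ahead rule skips 3 because adding it would leave no candidates, and finds the triangle instead.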
5. Experiments
We will present solvers and benchmarks, parameter settings, presentation protocols, results, and discussions in this section.
5.1. Solvers and Benchmarks
We consider a collection of networks accessed online via http://networkrepository.com/ on 1 January 2018. They were originally unweighted, and to obtain the corresponding MinVWC instances, we used the same method as in [,]. For the i-th vertex , . For the sake of space, we do not report results on graphs with fewer than 100,000 vertices or fewer than 1,000,000 edges. The instance named soc-sinaweibo contains 58,655,849 vertices and 261,321,033 edges; it is too large for our program, which ran out of memory, so we do not report its result. In the following experiments, we simply disable the local search component in RedLS [] and compare its reduction method to our algorithm.
Our algorithm was coded in Java and open-sourced at https://github.com/Fan-Yi/iterated-clique-reductions-in-vertex-weighted-coloring-for-large-sparse-graphs (accessed on 1 June 2023). It was compiled by OpenJDK 20.0.1 and run in an OpenJDK 64-bit Server VM (build 20.0.1+9-29, mixed mode, sharing). The experiments were conducted on a workstation with an Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40 GHz and 266 GB RAM under CentOS 7.9. Since we shuffle vertices in Algorithms 1 and 2, there is randomness in the reduction effectiveness. Yet we test only one arbitrary seed, since the benchmark graphs are diverse and each of them contains a large number of maximal cliques.
5.2. Parameter Settings
As to the parameter that limits the number of branching decisions in each depth-first enumeration procedure, we set it as where is the maximum degree in the input graph. On the other hand, RedLS was run with the default parameter setting in the machine environment reported by []. In fact, RedLS usually completes reductions in a significantly shorter time than our algorithm, yet this is not a big problem, because this paper focuses only on the potential effectiveness of a reduction algorithm, instead of its efficiency. Since the MinVWC problem is NP-hard, even a small number of additionally removed vertices may save a great amount of later search time, so our idea is meaningful.
As to the parameter T that limits the total running time of enumerations, we set it as 1200 s.
5.3. Presentation Protocols
For each instance, we report the number of vertices and edges in the original graph (denoted by ‘Original’ in Table 1) as well as that obtained by RedLS and our algorithm (denoted by ‘RedLS-reduced’ and ‘ours’, respectively, in the same table). In Table 1, we mainly compare the number of remaining vertices obtained by RedLS and that by our algorithm (Columns 4 and 6), and better results (smaller numbers) are shown in bold.
Table 1.
Reductions on instances. Those numbers of remaining vertices that are confirmed to be optimal are marked with ‘*’.
To show the effectiveness of our algorithm more clearly, we also report the percentage of remaining vertices, |V′|/|V|, where V is the set of original vertices and V′ is the set of remaining vertices after reductions. So the closer this percentage is to 0, the more effective our algorithm is. Furthermore, the time column reports the number of seconds needed by our algorithm to perform reductions.
5.4. Main Results and Discussions
From Table 1, we observe the following.
- Our algorithm obtains significantly better results than RedLS on most of these instances. On every graph, RedLS leaves at least 10,000 remaining vertices, whereas on nearly 20% of the instances, our algorithm leaves fewer than 10,000. Moreover, on more than 10% of the instances, it leaves fewer than 1000.
- On more than 40% of the instances, our percentage of remaining vertices is smaller than 10%, while on nearly 20%, the respective results are smaller than 1%.
- The most attractive results lie in the road-net category, in which our algorithm returned subgraphs that contained 156, 86, 14, and 54 vertices, respectively, with slightly more than . In contrast, RedLS returned subgraphs that contain at least 800,000 vertices. Thanks to our algorithm, it seems that optimal solutions for these graphs can now be found easily by state-of-the-art complete solvers.
5.5. Individual Impacts
We will show individual impacts of our three successive procedures as well as the optimality of top-level weights returned.
5.5.1. Individual Impacts of Our Successive Procedures
To show that each of our three successive procedures is necessary, we count the vertices removed in each procedure during the execution of our algorithm. In Table 2, we use , and to represent the numbers of vertices removed by Algorithm 1, Algorithm 2 and the post reductions, respectively. We select representative instances from most categories in order to reflect the individual impacts comprehensively.
Table 2.
Individual Impacts of Three Successive Procedures.
From this table, we find that Algorithm 2 sometimes removes no vertices, while the post reductions usually contribute greatly, which is why we allow the equalities to hold and extend the statements in [] when presenting Definition 3.
5.5.2. Optimality of Top Level Weights
Finally, we discuss the optimality of returned by our algorithm, which will play an essential role in future works. Notice that Algorithm 2 tries to enumerate all possible cliques that may increase any component of and even attempts to find a clique whose size is bigger than . In practice, Algorithm 2 was able to confirm that some particular values had achieved their maximum. To be specific, we take the instances web-it-2004, sc-pwtk and delaunay_n24 as examples and show our experimental results in this aspect below.
- As to web-it-2004, our experiment guaranteed that each value had achieved its maximum and that there existed no clique whose size was bigger than . This is the best possible outcome: it ensures that no better top-level weights can be found, which also implies that we have found the smallest number of remaining vertices achievable by clique reductions. In Table 1, all such instances are marked with ∗ in our column.
- As to sc-pwtk, our experiment guaranteed that each value had achieved its maximum, but it was unable to tell whether a clique of size greater than existed. In this sense, future work on this instance can focus on finding a clique of greater size.
- As to delaunay_n24, our experiment could only confirm that the first two values of returned had achieved their maximum; two components remained unconfirmed. Hence, more effort is needed on this instance.
6. Conclusions
In this paper, we have proposed an iterated reduction algorithm for the MinVWC problem based on maximal clique enumeration. It alternates between clique sampling and graph reductions and consists of three successive procedures: promising clique reductions, better-bound reductions and post reductions. Experimental results on several large sparse graphs show that our algorithm significantly outperforms RedLS in reduction effectiveness on most of the instances. Moreover, it makes a big improvement on about 10% to 20% of them, especially on the road-net instances. Also, we have shown and discussed the individual impacts as well as practical properties of our algorithm. Lastly, we have presented a theorem indicating that our algorithm's reduction effects are equivalent to those of a counterpart which enumerates all maximal cliques in the input graph, if time permits.
However, our clique enumeration procedures are somewhat brute-force, which may waste a great amount of time checking useless cliques. Furthermore, given a vertex, clique reductions assume that each of its neighbors has a distinct color; this is not always the case and thus may limit the power of the reductions.
For future work, we will develop various heuristics to sample promising cliques that are both effective and efficient for reductions. Also, we plan to develop reductions that allow neighbors of a vertex to share colors.
Author Contributions
Conceptualization, Y.F. and K.S.; methodology, Y.F., Z.Z., and Y.L.; software, Y.F.; validation, Q.Y., K.S., and Y.W.; formal analysis, Y.F., and Q.Y.; investigation, K.S., and Q.Y.; resources, K.S. and Y.W.; data curation, Y.W. and S.P.; writing—original draft preparation, Y.F.; writing—review and editing, Y.F., Q.Y. and L.J.L.; visualization, Y.F.; supervision, K.S. and L.J.L.; project administration, L.J.L.; funding acquisition, Y.F., Z.Z, Q.Y., Y.L., Y.W. and L.J.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded in part by the National Natural Science Foundation of China (62241206), in part by the Science and Technology Plan Project of Guizhou Province (No. Qiankehe Foundation-ZK[2022] General 550), in part by the Project for Growing Youth Talents of the Educational Department of Guizhou (No. KY[2019]201 and No. KY[2021]282), in part by the Foundation Project for Talents of Qiannan Science and Technology Cooperation Platform Supported by the Department of Science and Technology, Guizhou ([2019]QNSYXM-05), in part by the Educational Department of Guizhou under Grant (KY[2019]067), in part by the Foundation Project for Professors of Qiannan Normal University for Nationalities (QNSY2018JS010), in part by the Natural Science Foundation of Fujian under Grant 2023J01351, in part by the Special Foundation for Talents in Qiannan Normal University for Nationalities (qnsy2019rc10,qnsyrc202203,qnsyrc202204), in part by the Foundation Project of Science and Technology Plans of Qiannan under Grant 2019XK01ST and 2020XK05ST, in part by the Nature Science Foundation of Qiannan under Grant No. 2019XK04ST, in part by the Education Quality Improvement Project of QNUN under Grant No. 2021xjg029, and in part by the National College Students’ Innovation and Entrepreneurship Training Program under Grant No. S202210670024. This work was also in part supported by NSFC under Grant No. 61806050 and in part supported by the NSF Grant IIS-2107213.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
We would like to thank the anonymous referees for their helpful comments and suggestions.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Proof of Proposition 1
Proof.
- (By contradiction) Assume that is not a feasible coloring for , then there exists an edge in s.t. . Since and is an induced subgraph of G, we have is in G. This in turn implies that there exists an edge in G s.t. , which contradicts the precondition that is a feasible coloring for G.
- Suppose that , then considering that , we have .
□
Appendix B. Proof of Proposition 2
Proof.
- Case 1: , so .
- Case 2: .
□
Appendix C. Proof of Proposition 3
Proof.
Given any solution for , there exists an extension to , denoted by , such that is feasible for and
Also, for the same reason, given any solution for , there exists an extension to , denoted by , such that is feasible for G and
Combining the statements above, we have that, given any solution for , there exists an extension to , denoted by , such that is feasible for G and
□
Appendix D. Proof of Proposition 4
Proof.
(1) Since is a VWC-reduced subgraph of G, there exists an extension to , denoted by , such that is feasible for G and
Now we prove by contradiction that must be an optimal solution for G. Assume that is not an optimal solution for G, then there exists a feasible coloring for G s.t. and thus
i.e.,
Since is a feasible coloring for G, by Proposition 1, we have is also a feasible coloring for and
This in turn implies that there exists a feasible coloring for and
which contradicts that is an optimal solution for . Hence, is an optimal solution for G.
(2) Based on the statements above, we have the following. Given any optimal solution for , there exists an extension to , denoted by , which is an optimal solution for G and
Suppose is an arbitrary extension to which is a coloring for G. Then by Proposition 2, we have
Because is an arbitrary extension, the inequality cannot be replaced by an equality. Also, because is a non-optimal solution for , we have
Hence,
This in turn implies that is not an optimal solution for G. □
Appendix E. Proof of Proposition 5
Proof.
Let s.t. . Suppose is any feasible coloring for . Now we are to prove that there exists an extension to , denoted by , such that is feasible for the whole graph G and
Since u is absorbed by C, we have and , thus we have and . Now we construct
where
So is an extension to , obtained by coloring u with an existing color in .
- Since u is distinct from , assigning u one of the colors of is possible. (We cannot assign u its own color, which would be meaningless.)
- Since C is a clique, we have for any . Moreover, because , there must be at least colors in any feasible coloring for . For coloring u's neighbors, the number of colors in use is at most ; hence, at least one color among those of is not in use. So we can use it to color u without causing any conflicts, and thus make a feasible coloring for G.
- Considering that we have
□
Appendix F. Proof of Proposition 17
Proof.
(By contradiction) Assume that . We now show that S is not a feasible coloring, which will contradict the preconditions.
Without loss of generality, suppose is a coloring for G s.t. ≥ ; then by Proposition 16, we have and also
Since for any and for any , there exists at least one s.t. , and thus .
By Proposition 16, there exists a clique s.t. and
Therefore,
and thus
Considering that
by the Pigeonhole Principle, there exists at least one s.t. contains two or more vertices among ; that is, S is not a feasible coloring for G, which contradicts the preconditions. Hence, we have proved that . □
Appendix G. Proof of Proposition 18
Proof.
The proof includes two cases.
- enters first. If , then will be removed from at Line 14 before enters , i.e., they will not be in simultaneously.
- enters first. If , i.e., enters later, then by Proposition 15, we have .
□
References
- Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; W. H. Freeman & Co.: New York, NY, USA, 1979.
- Malaguti, E. The Vertex Coloring Problem and its generalizations. 4OR 2009, 7, 101–104.
- Ribeiro, C.C.; Minoux, M.; Penna, M.C. An optimal column-generation-with-ranking algorithm for very large scale set partitioning problems in traffic assignment. Eur. J. Oper. Res. 1989, 41, 232–239.
- Prais, M.; Ribeiro, C.C. Reactive GRASP: An Application to a Matrix Decomposition Problem in TDMA Traffic Assignment. INFORMS J. Comput. 1998, 12, 164–176.
- Gavranovic, H.; Finke, G. Graph Partitioning and Set Covering for the Optimal Design of a Production System in the Metal Industry. IFAC Proc. Vol. 2000, 33, 603–608.
- Hochbaum, D.S.; Landy, D. Scheduling Semiconductor Burn-In Operations to Minimize Total Flowtime. Oper. Res. 1997, 45, 874–885.
- Furini, F.; Malaguti, E. Exact weighted vertex coloring via branch-and-price. Discret. Optim. 2012, 9, 130–136.
- Cornaz, D.; Furini, F.; Malaguti, E. Solving vertex coloring problems as maximum weight stable set problems. Discret. Appl. Math. 2017, 217, 151–162.
- Malaguti, E.; Monaci, M.; Toth, P. Models and heuristic algorithms for a weighted vertex coloring problem. J. Heuristics 2009, 15, 503–526.
- Sun, W.; Hao, J.; Lai, X.; Wu, Q. Adaptive feasible and infeasible tabu search for weighted vertex coloring. Inf. Sci. 2018, 466, 203–219.
- Wang, Y.; Cai, S.; Pan, S.; Li, X.; Yin, M. Reduction and Local Search for Weighted Graph Coloring Problem. Proc. AAAI Conf. Artif. Intell. 2020, 34, 2433–2441.
- Cai, S.; Lin, J. Fast Solving Maximum Weight Clique Problem in Massive Graphs. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9–15 July 2016; pp. 568–574.
- Fan, Y.; Li, N.; Li, C.; Ma, Z.; Latecki, L.J.; Su, K. Restart and Random Walk in Local Search for Maximum Vertex Weight Cliques with Evaluations in Clustering Aggregation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, 19–25 August 2017; pp. 622–630.
- Wang, T.; Sun, B.; Wang, L.; Zheng, X.; Jia, W. EIDLS: An Edge-Intelligence-Based Distributed Learning System Over Internet of Things. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 3966–3978.
- Wang, T.; Liang, Y.; Shen, X.; Zheng, X.; Mahmood, A.; Sheng, Q.Z. Edge Computing and Sensor-Cloud: Overview, Solutions, and Directions. ACM Comput. Surv. 2023, 55, 1–37.
- Eubank, S.; Kumar, V.S.A.; Marathe, M.; Srinivasan, A.; Wang, N. Structural and algorithmic aspects of large social networks. In Proceedings of the Fifteenth ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 11–14 January 2004.
- Fan, C.; Lu, L. Complex Graphs and Networks; American Mathematical Society: Providence, RI, USA, 2006.
- Fan, Y.; Li, C.; Ma, Z.; Wen, L.; Sattar, A.; Su, K. Local Search for Maximum Vertex Weight Clique on Large Sparse Graphs with Efficient Data Structures. In Proceedings of the Twenty-Ninth Australasian Joint Conference, AI 2016, Hobart, TAS, Australia, 5–8 December 2016; pp. 255–267.
- Fan, Y.; Lai, Y.; Li, C.; Li, N.; Ma, Z.; Zhou, J.; Latecki, L.J.; Su, K. Efficient Local Search for Minimum Dominating Sets in Large Graphs. In Proceedings of the Twenty-Fourth International Conference on Database Systems for Advanced Applications, DASFAA 2019, Chiang Mai, Thailand, 22–25 April 2019; pp. 211–228.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).