1. Introduction
The SAT (satisfiability) problem involves determining whether a given Boolean formula has a satisfying assignment. As one of the classical problems in computer science and among the most notable NP-complete problems in computational complexity theory [1], SAT has been extensively studied for nearly a century. Due to its critical role in formal verification [2,3], artificial intelligence [4], and combinatorial optimization [5], efficiently solving SAT problems remains a key focus of research.
From the most straightforward perspective, solving a SAT problem entails traversing the search space. As the problem size increases, the search space expands exponentially, and solving times grow accordingly, making direct solving infeasible for large-scale problems. To improve solving efficiency, heuristic strategies are widely employed in SAT solving [6]. The core algorithm of modern SAT solvers, conflict-driven clause learning (CDCL), builds upon the foundational DPLL algorithm and significantly accelerates solving by utilizing heuristics such as clause learning, conflict-driven backjumping, and decision heuristics [7].
When employing DPLL or CDCL algorithms, the problem is solved sequentially [8], iteratively verifying potential assignments via backjumping. To further enhance solving efficiency, leveraging computational resources through parallel computation is another key approach. Inspired by parallelism, parallel SAT solvers have seen significant development. The two primary categories of mainstream parallel SAT solving algorithms are as follows [9,10]:
- (1) Divide-and-conquer algorithms, which partition the search space and compute different portions in parallel.
- (2) Portfolio algorithms, which run multiple solvers with diverse restart, decision, and learning heuristics on parallel instances of the problem, competing to find the fastest solving path.
In both sequential and parallel solving, the essence of SAT solving lies in assigning values (1 or 0) to one or more variables and analyzing their impact on formula satisfiability through operations like constraint propagation and conflict analysis. This paper defines the search space from both topological and logical perspectives, analyzing the nature of assignments and revealing their essence as partitions of the search space.
Based on this insight, a novel search space partitioning method is proposed that differs from traditional assignment-based approaches. This method partitions the search space by merging variables; it can not only independently yield sequential or parallel solving algorithms but can also be combined with assignment strategies. Experimental data indicate that, depending on the algorithm strategy, solving time is improved in the majority of cases. Finally, the paper presents a more unified search space partitioning methodology, rigorously defining the properties of different partitions and providing a more precise analysis of various existing solving algorithms.
Organization of the Paper:
Section 2: Reviews the most common SAT solving algorithms as well as the various solvers that implement these algorithms;
Section 3: Defines the search space, analyzes the commonalities of the algorithms in Section 2 from a topological perspective, and proposes a new solving approach distinct from the existing algorithms;
Section 4: Explains how this new approach can be applied in solving, offering ideas for both sequential and parallel solving while discussing its interaction with the existing methods;
Section 5: Summarizes a unified search space partitioning method, defines the properties of partitions, and analyzes the partition characteristics satisfied by the existing solving algorithms.
3. Search Space and Partitioning
In the previous section, we briefly introduced some SAT solving algorithms, both parallel and sequential. Regardless of the approach, these algorithms ultimately assign values to variables in a step-by-step process, to either find a satisfying assignment for the formula or prove that the original formula is unsatisfiable. We can refer to this solving process as assignment-based solving. In this section, we propose a new solving approach that differs from the traditional assignment-based methods. This approach involves merging variables, which we refer to as abstraction.
To better explain the distinction between abstraction and assignment, we must introduce the concept of search space and intuitively analyze the differences and connections between the two.
3.1. Definition of Search Space
In this study, we use CNF formulas to represent SAT problems. For clarity, we define the following terminology for SAT problems:
Let F represent a CNF formula and C represent a disjunction. Thus, F = C1 ∧ C2 ∧ … ∧ Cn represents a CNF formula consisting of disjunctions C1, …, Cn, where each Ci is a clause of the formula F;
The set of variables appearing in the formula F is denoted as var(F), and the set of variables appearing in a clause C is denoted as var(C);
To maintain symmetry in the search space, in this paper we assign the value x = 1 for true and x = −1 for false (instead of x = 0). A satisfying assignment α is called a solution to the formula, denoted by α ⊨ F, and the set of all solutions is denoted by A(F).
For example, if C1 = x1 ∨ x2 and C2 = ¬x1 ∨ x3, then F = C1 ∧ C2 represents the formula (x1 ∨ x2) ∧ (¬x1 ∨ x3). α1 = {x1 = 1, x2 = −1, x3 = 1} and α2 = {x1 = −1, x2 = 1, x3 = 1} are two sets of assignments for the formula F, and α1, α2 ∈ A(F).
Definition 1. Given formula F, the search space of F is defined as {−1, 1}^|var(F)|, denoted as S(F).
According to Definition 1, S(F) is the set of vertices of a hyperrectangle centered at the origin, with edges parallel to the coordinate axes. Suppose we have two formulas:
For formula F1, with variable set var(F1), the search space is S(F1) = {−1, 1}^|var(F1)|. For formula F2, with variable set var(F2), the search space is S(F2) = {−1, 1}^|var(F2)|.
The solutions of these formulas in the search space are shown in Figure 3.
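To make the ±1 encoding concrete, the following Python sketch (ours, not from the paper; the example formula is illustrative) represents a clause as a set of DIMACS-style literals and enumerates the search space {−1, 1}^n by brute force to collect the solution set A(F):

```python
from itertools import product

def solutions(formula, n):
    """Enumerate the points of the search space {-1, 1}^n that satisfy F.

    A formula is a list of clauses; a clause is a set of DIMACS-style
    literals: v means x_v = 1, -v means x_v = -1."""
    sols = []
    for point in product((-1, 1), repeat=n):
        # literal l is true at `point` iff the sign of x_|l| matches l
        if all(any(point[abs(l) - 1] * l > 0 for l in c) for c in formula):
            sols.append(point)
    return sols

# illustrative formula (x1 v x2) ^ (~x1 v x3) over three variables
F = [{1, 2}, {-1, 3}]
print(len(solutions(F, 3)))   # 4 of the 8 vertices are solutions
```

Enumerating all 2^n vertices is of course only feasible for tiny n; the point is solely to make the geometry of S(F) and A(F) tangible.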
3.2. Assignment-Based Solving and Its Partitioning
3.2.1. Logical Expression of Assignment-Based Solving
As mentioned earlier, in commonly used SAT solving algorithms, variables are assigned values step by step. Assigning a value of 1 to a variable x in formula F can be viewed as the formula F ∧ x. Similarly, assigning a value of −1 corresponds to the formula F ∧ ¬x [9].
Taking the first three formulas in Example 1 as an example, the result after assigning a value to one variable is shown in Table 2.
We can simplify the formula and obtain the reduced form. Next, we observe the formula under the complementary assignment, and the results are shown in Table 3, where each clause can also be simplified.
The resulting formula is the simplified form. Assigning −1 produces the same results as conjoining the negated unit clause. Furthermore, in order for the resulting formula to be satisfiable, its remaining unit literal must also take the value 1.
In other words, logically, assigning a value in the formula is equivalent to conjoining the corresponding unit clause, and the simplification after assignment is equivalent to applying the absorption law and resolution to the formula F ∧ x.
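Operationally, conjoining a unit clause and simplifying by absorption and resolution can be sketched as follows (our illustration, with clauses encoded as sets of DIMACS-style literals):

```python
def assign(formula, lit):
    """Simplify F ^ lit: clauses containing lit are absorbed (already
    satisfied), and occurrences of -lit are removed, which is exactly
    resolution of the clause with the unit clause {lit}."""
    simplified = []
    for clause in formula:
        if lit in clause:
            continue                  # absorption law: clause is satisfied
        reduced = clause - {-lit}     # resolution with the unit clause
        if not reduced:
            return None               # empty clause: F ^ lit is unsatisfiable
        simplified.append(reduced)
    return simplified

# illustrative formula (x1 v x2) ^ (~x1 v x3) ^ (~x2 v ~x3), assigning x1 = 1
F = [{1, 2}, {-1, 3}, {-2, -3}]
print(assign(F, 1))   # [{3}, {-2, -3}]
```

Returning `None` marks a conflict, i.e., the branch F ∧ x contains the empty clause after simplification.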
At the same time, if the simplified formula obtained after assigning x = 1 is satisfiable, it indicates that the original formula is also satisfiable. However, if F ∧ x is unsatisfiable, it does not necessarily imply that the original formula is unsatisfiable. This can also be explained logically.
Proposition 1. If the formula F ∧ x is satisfiable, then the formula F is satisfiable.
Proof. If F ∧ x is satisfiable, then there exists an assignment α such that α ⊨ F ∧ x. α ⊨ F ∧ x is equivalent to α ⊨ F AND α ⊨ x. Therefore, if F ∧ x is satisfiable, it implies that F is satisfiable. □
Proposition 2. If the formula F ∧ x is unsatisfiable, then the formula F is not necessarily unsatisfiable.
Proof. If F ∧ x is unsatisfiable, then for all assignments α, α ⊭ F ∧ x. α ⊭ F ∧ x is equivalent to α ⊭ F OR α ⊭ x. This means that it is possible for F to be satisfiable while F ∧ x is unsatisfiable, namely when every solution of F falsifies x. Therefore, the unsatisfiability of F ∧ x does not necessarily imply the unsatisfiability of F. □
Therefore, determining the satisfiability of the original formula by checking whether the simplified formulas for x = 1 and x = −1 are satisfiable is equivalent to determining the satisfiability of the formula (F ∧ x) ∨ (F ∧ ¬x), which, after simplification, becomes F.
From a logical perspective, the assignment-based backtracking algorithm essentially repeats the following steps:
If F ∧ x is satisfiable, then F must be satisfiable. To check whether F ∧ x is satisfiable, one can further check whether F ∧ x ∧ y is satisfiable, and so on.
Similarly, if F ∧ x is unsatisfiable, then it is necessary to check whether F ∧ ¬x is satisfiable. To check whether F ∧ ¬x is satisfiable, one can continue checking whether F ∧ ¬x ∧ y is satisfiable, progressing step by step.
The search tree of the assignment-based SAT solving algorithm is logically equivalent to the search tree shown in Figure 4.
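This backtracking scheme can be sketched as a small recursive procedure (ours, not the paper's implementation; no unit propagation or clause learning, just the F ∧ x / F ∧ ¬x branching):

```python
def satisfiable(formula):
    """Assignment-based backtracking: F is satisfiable iff F ^ x is
    satisfiable or F ^ ~x is satisfiable, for any variable x of F."""
    if not formula:
        return True                      # no clauses left: satisfied
    if any(not c for c in formula):
        return False                     # empty clause: conflict
    x = abs(next(iter(formula[0])))      # naive choice of branching variable
    for lit in (x, -x):                  # try x = 1, then x = -1
        branch, conflict = [], False
        for clause in formula:
            if lit in clause:
                continue                 # absorbed
            reduced = clause - {-lit}    # resolve with the unit clause
            if not reduced:
                conflict = True
                break
            branch.append(reduced)
        if not conflict and satisfiable(branch):
            return True
    return False                         # both branches failed: backtrack

print(satisfiable([{1, 2}, {-1, 3}]))        # satisfiable
print(satisfiable([{1, 2}, {-1, 2}, {-2}]))  # unsatisfiable
```

Each recursive call explores one subtree of the search tree of Figure 4; returning from both branches corresponds to a backtrack step.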
3.2.2. Partitioning of Assignment-Based Solving
As we learned in the previous section, assignment essentially adds new constraints to the formula. Next, we will explore how these constraints affect the search space.
Definition 2. Given formulas F and P with var(P) ⊆ var(F), P is called a logical partition of F. The formula P is referred to as a partition of the search space of F, and F ∧ P is called the partitioned formula.
Example 2. Solving a Formula Using Assignments.
If we simulate the assignment process, we first try one assignment and use unit propagation, which simplifies the formula. Next, we make a second assignment, which leaves us with the final clause. Thus, we obtain a satisfying assignment.
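Unit propagation, as used in the simulation above, can be sketched like this (our illustration with set-of-literals clauses; conflict detection is left to the caller):

```python
def unit_propagate(formula):
    """Repeatedly apply unit clauses: a length-1 clause forces its literal,
    absorbing satisfied clauses and resolving away the complement."""
    formula = [set(c) for c in formula]
    forced = []                              # literals forced so far
    while True:
        unit = next((c for c in formula if len(c) == 1), None)
        if unit is None:
            return formula, forced
        lit = next(iter(unit))
        forced.append(lit)
        reduced = []
        for clause in formula:
            if lit in clause:
                continue                     # satisfied, drop it
            reduced.append(clause - {-lit})  # may become empty: a conflict
        formula = reduced

# illustrative formula (~x1) ^ (x1 v x2) ^ (~x2 v x3)
print(unit_propagate([{-1}, {1, 2}, {-2, 3}]))   # ([], [-1, 2, 3])
```

Here a single unit clause cascades through the whole formula, which is exactly why unit propagation matters so much for solving performance.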
If we plot the solution and search spaces of the three formulas produced along the way, as shown in Figure 5, we can observe the following:
The first assignment corresponds to using a plane to cut the search space, resulting in a lower-dimensional space partition. The second assignment corresponds to using a line to cut that partition, resulting in a further partition.
The entire process involves progressively reducing the dimensions. Each assignment lowers the dimensionality, and when the dimension reaches one, we can check whether the formula has a solution at that point.
As shown in Figure 6, the two complementary assignments both partition the formula. They represent two parallel planes. If no solution is found on the first plane (the green plane), as seen in the previous search tree, backtracking is used to check for an assignment on the second plane (the blue plane). If no solution is found on either plane, no solution exists within this three-dimensional search space. Similarly, this three-dimensional search space may itself be derived from a four-dimensional space by partitioning with an earlier assignment; the next step would then involve backtracking to the complementary partition of that space.
Therefore, the backtracking algorithm continuously partitions the search space and ultimately verifies whether a point in the search space is a solution to the formula. Parallel algorithms based on search space partitioning also stem from this backtracking process, but they calculate multiple backtracking paths simultaneously. In addition to this assignment-based search space partitioning, there are other partitioning methods. In this paper, we propose a new search space partitioning method, referred to as abstraction-based search space partitioning.
3.3. Abstraction-Based Solving and Its Partitioning
As mentioned in the previous section, assignment-based solving is essentially solving the two partitioned formulas, F ∧ x and F ∧ ¬x. As long as one of them is satisfiable, the original formula F is satisfiable. This is expressed as follows: F ≡ (F ∧ x) ∨ (F ∧ ¬x). This conclusion holds for any formula and any variable x ∈ var(F).
As discussed in the previous section, adding constraints corresponds to partitioning the search space. In other words, F ∧ x and F ∧ ¬x are actually methods of partitioning the search space. The constraints added by the assignment, x and ¬x, partition S(F) using the hyperplanes x = 1 and x = −1, respectively. Each partition reduces the dimensionality by eliminating x, and through continuous dimensional reduction, a solution is verified.
If we consider dimensional reduction, there are also other methods for reducing dimensions.
Observing the search space partitioning method shown in Figure 7, the search space is partitioned using the planes in (a) and (b). In both planes, one dimension is preserved, while the other two dimensions are merged into a new dimension. The two planes together form the original search space in its entirety.
We can extend this merging method to higher-dimensional search spaces by partitioning the space with the hyperplanes x_i = x_j and x_i = −x_j, effectively merging dimensions x_i and x_j to achieve dimensional reduction. To do this, we need to provide a clear definition of such an operation.
In Answer Set Programming (ASP), researchers have developed a debugging method that maps multiple propositional atoms to a single set, which is called Omission-based Abstraction [34]. The merging and dimensional reduction process here is similar, but instead of omitting atoms, it merges multiple propositional variables and substitutes them with a new variable. Therefore, we can also refer to this operation as abstraction.
Definition 3. Let F be a formula with x_i, x_j ∈ var(F). If the mapping x_i ↦ y and x_j ↦ y holds, where y ∉ var(F), the new formula obtained is called the abstraction of formula F, denoted as F[x_i = x_j].
Abstraction defines the merging of dimensions x_i and x_j, with the new dimension denoted as y. From a logical perspective, the mapping is equivalent to solving F ∧ (x_i = x_j). In Boolean formulas, x_i = x_j is equivalent to (x_i ∨ ¬x_j) ∧ (¬x_i ∨ x_j).
As we learned in Section 3.2.1, the simplification of formulas in assignment-based solving is based on resolution and the absorption law. Just as assignments can simplify the original formula, abstraction can also simplify a formula. As shown in Table 4, the simplification of the abstracted formula is also based on resolution.
That is, the clauses of F[x_i = x_j] are the resolvents obtained under this mapping.
Each clause containing x_i or x_j resolves with the constraint clauses (x_i ∨ ¬x_j) and (¬x_i ∨ x_j), and both x_i and x_j in the resolvents are then mapped to y, yielding the clauses of the abstracted formula.
Lemma 1. F[x_i = x_j] is satisfiable if and only if F ∧ (x_i ∨ ¬x_j) ∧ (¬x_i ∨ x_j) is satisfiable.
Now, we have the logical representation of the hyperplane x_i = x_j. Since we define false as −1, the symmetry of the search space allows us to represent the plane x_i = −x_j by a formula as well. To represent it, we can extend Definition 3.
In Definition 3, the mapping, i.e., x_i = x_j = y, is essentially a mapping of variables. It includes both x_i ↦ y and x_j ↦ y. Similarly, if we perform the mapping x_i ↦ y and x_j ↦ ¬y, we can denote it as x_i = −x_j.
Definition 3 (continued). Let F be a formula with x_i, x_j ∈ var(F). If the mapping x_i ↦ y and x_j ↦ ¬y holds, where y ∉ var(F), the new formula obtained is called the abstraction of formula F, denoted as F[x_i = −x_j].
Similarly, from a logical perspective, this abstraction corresponds to solving F ∧ (x_i = −x_j), which is equivalent to F ∧ (x_i ∨ x_j) ∧ (¬x_i ∨ ¬x_j). As shown in Table 5, the clauses in the abstracted formula also come from the resolution of the original formula’s clauses with (x_i ∨ x_j) or (¬x_i ∨ ¬x_j), followed by the variable mapping.
That is, the clauses of F[x_i = −x_j] are the resolvents obtained under this mapping.
Thus, we can draw similar conclusions.
Lemma 2. F[x_i = −x_j] is satisfiable if and only if F ∧ (x_i ∨ x_j) ∧ (¬x_i ∨ ¬x_j) is satisfiable.
The mapping process represented by abstraction is the merging of dimensions x_i and x_j, with the new dimension denoted as y. This gives us a partitioning method that is different from assignment-based partitioning, namely, abstraction-based partitioning.
The abstraction-based partitioning formulas are x_i = x_j and x_i = −x_j, which correspond to the partitioned formulas F[x_i = x_j] and F[x_i = −x_j], respectively.
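The two abstraction operations of Definition 3 can be sketched as a single substitution routine (ours, not the paper's implementation; the tautology deletion mirrors the resolution-based simplification, since clauses containing both y and ¬y after the mapping are always satisfied):

```python
def abstract(formula, i, j, sign=1, y=None):
    """Merge variables x_i and x_j under x_i = sign * x_j, renaming both to
    a fresh variable y (sign=+1 gives F[x_i = x_j], sign=-1 gives
    F[x_i = -x_j])."""
    if y is None:
        y = 1 + max(abs(l) for c in formula for l in c)  # fresh variable
    out = []
    for clause in formula:
        new = set()
        for l in clause:
            if abs(l) == i:
                new.add(y if l > 0 else -y)           # x_i -> y
            elif abs(l) == j:
                new.add(sign * (y if l > 0 else -y))  # x_j -> sign * y
            else:
                new.add(l)
        if any(-l in new for l in new):
            continue                  # tautology y v ~y: always satisfied
        if new not in out:
            out.append(new)           # drop duplicate resolvents
    return out, y

# (x1 v x2) ^ (~x1 v x2 v x3) ^ (x1 v ~x2) under x1 = x2
F = [{1, 2}, {-1, 2, 3}, {1, -2}]
print(abstract(F, 1, 2))   # ([{4}], 4)
```

In the example, two of the three clauses become tautologies under x1 = x2 and vanish, while the first collapses to a unit clause in the new variable.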
4. Abstraction-Based Solving Method
In this section, we will propose the methodology for abstraction-based solving, drawing on the existing assignment-based solving methods. We will also introduce a hybrid solving method that combines abstraction and assignment.
4.1. Conventional Solving Methods
In Section 3.2.1, we used a search tree to describe the logical form of assignment-based solving. Similarly, abstraction-based solving can be represented in this manner, as shown in Figure 8.
If the assignment-based solving algorithm is described by the following steps: Assignment → Simplification → Backtracking → Assignment → … then, similarly, the abstraction-based solving algorithm should be divided into these steps: Abstraction → Simplification → Backtracking → Abstraction → …
For the example formula, abstraction-based solving can proceed according to the process shown in Table 6.
Only in the final path, where the formula simplifies to a single satisfiable unit, does a solution exist. Then, via the recorded equivalences, we obtain the assignment of every variable.
In assignment-based solving, each time a variable is assigned a value, its assignment is recorded. In contrast, abstraction-based solving is more like a classifier. Each time an abstraction is performed, variables are divided into two sets based on their equivalence relationships.
For example, if x1 = x2, then the classification is {x1, x2}, {}.
Then, if x3 = −x1, we obtain {x1, x2}, {x3}.
If x4 = x3, we obtain {x1, x2}, {x3, x4}.
This process continues, gradually dividing all variables into two sets, V1 and V2, where any variable x_i ∈ V1 and x_j ∈ V2 satisfy x_i = −x_j. The abstraction process eventually leaves a variable that represents the assignment of one set, and the assignment for the other set is thereby confirmed, as with the two sets obtained in Table 6.
We can more intuitively observe the solving process in the search space.
The solving steps involve first using the plane given by the first abstraction to partition S(F), resulting in a two-dimensional space, as shown in Figure 9b. Then, using the line given by the second abstraction, we partition that space, obtaining a one-dimensional space, as shown in Figure 9c.
Ultimately, the solution is found at the remaining point.
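A purely abstraction-based solver in the spirit of this section might look as follows (our sketch, not the paper's implementation; it branches on x_i = x_j versus x_i = −x_j by renaming x_i to ±x_j, and checks the two remaining points once one dimension is left):

```python
def abs_satisfiable(formula):
    """Abstraction-based backtracking: merge a pair of variables under
    x_i = x_j or x_i = -x_j and recurse until one dimension remains."""
    # drop tautological clauses produced by earlier merges
    formula = [c for c in formula if not any(-l in c for l in c)]
    if any(not c for c in formula):
        return False                 # empty clause: conflict
    variables = sorted({abs(l) for c in formula for l in c})
    if not variables:
        return True                  # no constraints left
    if len(variables) == 1:          # one dimension: check the two points
        v = variables[0]
        return not ({v} in formula and {-v} in formula)
    i, j = variables[0], variables[1]
    for sign in (1, -1):             # x_i = x_j, then x_i = -x_j
        merged = []
        for clause in formula:
            new = {sign * (j if l > 0 else -j) if abs(l) == i else l
                   for l in clause}
            merged.append(new)
        if abs_satisfiable(merged):
            return True
    return False

print(abs_satisfiable([{1, 2}, {-1, 3}]))        # satisfiable
print(abs_satisfiable([{1, 2}, {-1, 2}, {-2}]))  # unsatisfiable
```

Each merge removes one dimension, so the recursion depth is at most the number of variables, mirroring the dimension-reduction view of Figure 9.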
For assignment-based algorithms, unit propagation is a particularly important strategy: it significantly reduces the time spent compared with the most straightforward backtracking methods [11]. Therefore, abstraction also needs a corresponding strategy to reduce the solving time.
The core of unit propagation involves handling the literals after assignment, ensuring that these literals are true in the formula [10,35]. Abstraction has a similar process for handling these literals: it ensures that all positive literals are distinct from all negative literals, while making positive literals equal to each other and negative literals equal to each other. That is to say, if a formula contains two unit literals, their variables are merged as equal when the literals share a sign and as negations of each other otherwise.
Since abstraction-based sequential solving is also structured as a tree, abstraction-based parallel solving follows the same implementation approach as assignment-based solving: simultaneously checking multiple subtrees. This will not be further elaborated upon here.
4.2. Hybrid Solving
As seen in the previous section, the abstraction-based solving algorithm ultimately divides the variables into two categories. Therefore, without appropriate strategy optimization, its worst-case time complexity is exponential in the number of variables. Its search tree, like that of assignment-based solving, is also a full binary tree. However, the assignment-based approach has already developed many pruning strategies. Given the current maturity of assignment-based solving methods, redeveloping pruning strategies for abstraction would be a cumbersome task. Therefore, a more promising approach is to combine the two solving methods.
Although the two methods partition the search space differently, the content within the partitioned search space is the same. The distribution of solutions within the search space changes, but the solutions themselves are preserved. Therefore, after partitioning the search space using one method, it is still possible to apply the other method to partition it further.
It is worth noting that in assignment-based solving algorithms, the strategy for selecting decision variables, such as VSIDS, plays an important role in the solving time, as some researchers have proven [36]. Therefore, different strategies for variable selection are also expected to have different impacts on hybrid solving methods.
Thus, we conducted the following experiment:
Generate a random CNF formula with 350 variables and a clause count 4.2 times the number of variables;
Use abstraction on the variables of a randomly selected length-2 clause to partition the search space into two parts, referred to as Space R1 and Space R2;
Use the most frequently co-occurring pair of variables for abstraction to partition the search space into two parts, referred to as Space F1 and Space F2;
Solve the four search spaces, as well as the original formula, using MiniSat;
The resulting solving times are organized in Table 7.
If the two search spaces partitioned through abstraction are solved using a simple divide-and-conquer (DAC) parallel approach, then when the formula is satisfiable, the solving time is the minimum of the two search spaces’ times; when the formula is unsatisfiable, it is the maximum. The impact of the different strategies on the solving time is shown in Figure 10.
It can be observed that, when the formula is satisfiable, Strategy R tends to accelerate solving, whereas when the formula is unsatisfiable, Strategy F is more effective in speeding up the solving process. Beyond the samples above, the experiment covered a total of 100 CNF formulas, 38 of which are unsatisfiable. In 81% of those unsatisfiable cases, Strategy F accelerated solving; for the remaining 62 formulas, Strategy R accelerated solving in 72% of cases. Considering that this is a simple parallel method, the probability of improving computational efficiency should be higher in more advanced parallel algorithms.
To examine the impact of the number of variables or clauses on abstraction-based hybrid solving, we controlled one of the two factors while varying the other. The results are shown in Appendix A.1. The solver used in the above experiments is MiniSat. To investigate the impact of abstraction on different solvers, we also tested other solvers, with the test data sourced from the SAT Competition 2024. The data obtained from these tests are shown in Appendix A.2.
All these experiments led to a similar conclusion: abstract hybrid solving can optimize solving time in most cases.
Analyzing the reason behind this, we believe that the main factor is that abstraction alters the length of certain clauses. This affects the unit propagation process, which, as emphasized earlier, is crucial for solving performance [10].
If Strategy R is used, where the variables of length-2 clauses are abstracted, unit clauses are produced, allowing unit propagation to occur. If Strategy F is used, where the most frequently occurring variable pairs are selected for abstraction, clauses that were originally of length 3 become length 2, enabling unit propagation in situations where it would otherwise be impossible.
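Under the same set-of-literals clause encoding, the two selection heuristics can be sketched as follows (our code, with deterministic stand-ins for the randomized choices in the experiment; reading Strategy F as choosing the most frequently co-occurring variable pair is our interpretation of the text):

```python
from collections import Counter
from itertools import combinations

def pick_pair_R(formula):
    """Strategy R: take the two variables of some length-2 clause."""
    for clause in formula:
        if len(clause) == 2:
            return tuple(sorted(abs(l) for l in clause))
    return None

def pick_pair_F(formula):
    """Strategy F: take the pair of variables that co-occurs in the most
    clauses (one reading of 'most frequently occurring variables in pairs')."""
    counts = Counter()
    for clause in formula:
        for pair in combinations(sorted({abs(l) for l in clause}), 2):
            counts[pair] += 1
    return max(counts, key=counts.get) if counts else None

F = [{1, 2, 3}, {1, 2}, {-1, -2, 4}, {3, 4}]
print(pick_pair_R(F), pick_pair_F(F))   # (1, 2) (1, 2)
```

The returned pair would then be fed to the abstraction operation to split the search space into the two corresponding subspaces.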
Take the example formula with abstraction applied under both strategies. From Figure 11, it can be observed that in Strategy R, the new unit clause produced by abstraction participates in the unit propagation process. In Strategy F, one of the original clauses shrinks from length 3 to length 2, allowing it to determine further variable assignments through unit propagation and thereby extending the reach of unit propagation.
Similarly, both strategies can lead to the removal of clauses directly from the formula, and these removed clauses might have participated in unit propagation in the original assignment-based solving algorithm.
One of the other effects of reducing clause length is its impact on backtracking. For example, consider the following formula:
When solving this formula using the assignment-based method, four backtrack operations are performed, and the paths are shown in Figure 12.
If we first perform abstraction on the formula, we obtain the following:
As shown in Figure 13, when solving the abstracted formula using the assignment-based method, only two backtrack operations are performed, and neither backtrack is deep.
As mentioned in Section 2.1, backtracking determines how to retreat during the search, while unit propagation primarily determines how to advance it. Abstraction impacts both processes, causing some calculations to speed up and others to slow down in algorithms that combine abstraction and assignment. Therefore, compared to DAC, it may be more worthwhile to incorporate abstraction as a new strategy into portfolio-based parallel solving methods.
Additionally, abstraction can also be useful in SMT (Satisfiability Modulo Theories) solving [37]. SMT problems are more complex constraint problems that build on traditional SAT problems by introducing additional theory constraints, such as arithmetic, strings, and matrices [37,38,39]. As a result, solving an SMT problem is divided into two parts, a SAT solver and a theory solver, an architecture often referred to as DPLL(T) [40].
The SAT solver is used to obtain feasible atomic assignments, while the theory solver checks these atomic assignments. If they are not feasible, the theory solver feeds back to the SAT solver for further adjustments.
For example, consider the expression
In SMT solving, these theory details are initially ignored, and the formula is treated as a propositional formula. The SAT solver first solves it, for instance by assigning values to the two atoms, and then passes these assignments to the theory solver to check whether they satisfy the theory constraints. If the assignments do not satisfy the constraints, the theory solver returns a conflict, and the SAT solver continues the search. This iterative process continues until a solution is found.
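The loop just described can be sketched as a toy DPLL(T) (ours, purely illustrative; a real solver would propagate theory conflicts back as learned clauses rather than enumerate models):

```python
from itertools import product

def dpll_t(skeleton, n, theory_ok):
    """Toy DPLL(T): enumerate models of the Boolean skeleton and ask the
    theory solver (theory_ok) to accept or reject each one."""
    for point in product((-1, 1), repeat=n):
        boolean_ok = all(any(point[abs(l) - 1] * l > 0 for l in c)
                         for c in skeleton)
        if boolean_ok and theory_ok(point):
            return point             # first model consistent with the theory
    return None                      # no theory-consistent Boolean model

# skeleton of (x > 0) v (x < 0): atom 1 is "x > 0", atom 2 is "x < 0";
# the theory solver rejects models where both atoms hold simultaneously
theory = lambda p: not (p[0] == 1 and p[1] == 1)
print(dpll_t([{1, 2}], 2, theory))   # a model where only one atom is true
```

The `theory_ok` callback stands in for an arbitrary theory solver; the SAT layer only ever sees the Boolean skeleton.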
The abstraction proposed in this paper is actually based on atomic equivalence relations, such as equality or negation between atoms. From the perspective of abstraction, knowing whether certain variables are equal can simplify the formula. If two atomic constraints clearly cannot both hold simultaneously, we can conclude that their propositional atoms are negations of each other, which allows for the further simplification of the formula. This approach can also be used to assess the satisfiability of SMT problems using abstraction.
In other words, through the theory solver, we can pre-determine the equivalence of certain atoms. By integrating abstraction into the SMT solver, we can simplify the formula based on these equivalences before continuing with the solving process.
4.3. Summary of Abstraction-Based Solving
In the previous section, we explained step by step how the abstraction-based SAT solving algorithm is implemented. In this subsection, we will briefly review the entire solving process and highlight the differences between it and the assignment-based solving approach.
Compared to traditional assignment-based solving, the main difference of abstraction-based solving is that it does not assign values to propositional atoms; instead, it classifies them. If abstraction is carried to its extreme, where all atoms are classified into two categories, it is a purely abstraction-based algorithm. However, if abstraction is carried out to a certain extent and an assignment-based algorithm is then used, it becomes a hybrid solving method. As shown in Figure 14, the relationship between the two approaches is clear.
As shown in Figure 14a, when only the assignment-based solving method is used, the result is the variable assignments. If only the abstraction-based solving method is used, the result is two sets of complementary variables, as depicted in Figure 14c. When the hybrid solving method is used, the result includes both the two variable sets and the assignments of certain elements within those sets, as shown in Figure 14b.
So far, we have outlined the specific methods and implementation steps for abstraction, as well as its potential benefits. However, in our actual solving experiments, we still encountered some issues.
Although the experimental data suggest that abstraction-based hybrid solving can accelerate the solving process in most cases, they also indicate that in certain situations its efficiency is worse than directly using assignment-based solving; in fact, the time consumed may even be significantly higher.
As mentioned in the experiments, the selection of abstract variables plays a crucial role in the solving process. However, to date, there is no optimal strategy for choosing variables for abstraction. The two strategies used in the experiments are among the simplest and most straightforward.
Additionally, the experiments were conducted by performing abstraction only once before solving. How to integrate abstraction into existing solvers, and how to establish communication between the two methods, are areas that require further research.
Finally, the method still lacks the corresponding strategies when dealing with large-scale problems, as the cost of certain abstraction strategies, such as identifying the most frequent pairs of variables, increases with problem size.
5. Unified Method for Search Space Partitioning
Below, we list the logical expressions for solving based on assignment and abstraction:
Assignment: F ≡ (F ∧ x) ∨ (F ∧ ¬x).
Abstraction: F ≡ (F ∧ (x_i = x_j)) ∨ (F ∧ (x_i = −x_j)).
The logical formulas x and x_i = x_j each represent a partition of the search space, while their counterparts ¬x and x_i = −x_j are the complementary partitions. As we mentioned in Section 3.3 (Formula (9)), the two partitioned formulas are combined disjunctively. If we simplify the assignment case, we obtain F ∧ (x ∨ ¬x), which simplifies to F, thus demonstrating that satisfiability depends entirely on the formula F.
In fact, we can give a unified definition for such a partition of the search space.
Definition 4. Given a formula F, if P1, P2, …, Pn are logical partitions of F, then P1, P2, …, Pn are called a set of logical partitions of F. (F ∧ P1) ∨ (F ∧ P2) ∨ … ∨ (F ∧ Pn) is called the partition formula, and each F ∧ Pi is called a subpartition formula. If A((F ∧ P1) ∨ … ∨ (F ∧ Pn)) = A(F), then P1, P2, …, Pn are called a set of complete logical partitions of F.
From Proposition 1, it is known that, regardless of whether a set of partitions is complete, as long as there is a solution in any partition, the original formula must have a solution.
Theorem 1. If P1, P2, …, Pn are a set of logical partitions of F, and the partition formula is satisfiable, then F is satisfiable.
Theorem 1 only guarantees sufficiency: if the partition formula is satisfiable, then the original formula is satisfiable. The advantage of a complete partition is that it also provides the converse, revealing when the original formula has no solution.
Theorem 2. If P1, P2, …, Pn are a set of complete logical partitions of F, then F is unsatisfiable if and only if the partition formula is unsatisfiable, and F is satisfiable if and only if the partition formula is satisfiable.
Proof. Let the set of partitions be P1, P2, …, Pn, and let G denote the partition formula.
The partition formula is as follows: G = (F ∧ P1) ∨ (F ∧ P2) ∨ … ∨ (F ∧ Pn).
- 1.
Consistency of Unsatisfiability:
Sufficiency: If F is unsatisfiable, then A(F) = ∅. Since this is a set of complete partitions, A(G) = A(F) = ∅. Thus, G is unsatisfiable; i.e., for any i, F ∧ Pi is unsatisfiable.
Necessity: If for any i, F ∧ Pi is unsatisfiable, then the partition formula G is unsatisfiable. Hence, A(F) = A(G) = ∅ by completeness. Therefore, F is unsatisfiable.
- 2.
Consistency of Satisfiability:
Sufficiency: If F is satisfiable, then A(F) ≠ ∅. Since A(G) = A(F), we have A(G) ≠ ∅, so G is satisfiable. Hence, there exists some i such that F ∧ Pi is satisfiable, which implies that if F is satisfiable, the partition formula is satisfiable.
Necessity: If the partition formula is satisfiable, this is equivalent to Theorem 1. □
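Theorem 2 can be checked by brute force on small formulas (our sketch; clauses are sets of DIMACS-style literals, and each partition is itself given in CNF):

```python
from itertools import product

def models(formula, n):
    """All points of {-1, 1}^n satisfying a CNF list of literal-sets."""
    return {p for p in product((-1, 1), repeat=n)
            if all(any(p[abs(l) - 1] * l > 0 for l in c) for c in formula)}

def is_complete(F, partitions, n):
    """Complete partitions: the models of the partition formula
    V_i (F ^ P_i) coincide with the models of F."""
    union = set()
    for P in partitions:
        union |= models(F + P, n)        # F ^ P_i as concatenated CNF
    return union == models(F, n)

F = [{1, 2, 3}]
by_assignment = [[{1}], [{-1}]]           # x1 = 1 vs x1 = -1
by_abstraction = [[{1, -2}, {-1, 2}],     # x1 = x2
                  [{1, 2}, {-1, -2}]]     # x1 = -x2
print(is_complete(F, by_assignment, 3),
      is_complete(F, by_abstraction, 3))  # True True
```

Both the assignment partition and the abstraction partition pass the completeness check, as the theory predicts; dropping one side of either pair makes the check fail.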
At the same time, when describing the partitions based on assignment and abstraction, we mentioned that the two partitions are composed of complementary formulas. If we make a strict definition, we can further formalize the partitions.
Definition 5. Given a formula F, if P1, P2, …, Pn are logical partitions of F, then P1, P2, …, Pn are called a set of logical partitions of F, (F ∧ P1) ∨ … ∨ (F ∧ Pn) is called the partition formula, and each F ∧ Pi is called a subpartition formula. If for any i and j with i ≠ j, A(F ∧ Pi) ∩ A(F ∧ Pj) = ∅, then P1, P2, …, Pn are called a set of independent logical partitions of F.
The completeness of the partitions ensures that this set of partitions can reveal the satisfiability of the formula, while the independence of the partitions helps identify which partition the solution falls into when the formula is satisfiable.
Theorem 3. If P1, P2, …, Pn are a set of independent complete logical partitions of F, and F is satisfiable, then for every solution α ∈ A(F), there exists exactly one subpartition F ∧ Pi such that α ⊨ F ∧ Pi.
Proof. Let the set of partitions be P1, P2, …, Pn, let G denote the partition formula, and let α ∈ A(F) be a solution.
The partition formula is as follows: G = (F ∧ P1) ∨ (F ∧ P2) ∨ … ∨ (F ∧ Pn).
From completeness, we know that A(G) = A(F).
Thus, α ∈ A(G). Therefore, G is satisfied by α, and there exists some i such that α ⊨ F ∧ Pi.
Now, assume that another subpartition F ∧ Pj (j ≠ i) exists such that α ⊨ F ∧ Pj.
Then α ∈ A(F ∧ Pi) ∩ A(F ∧ Pj), so A(F ∧ Pi) ∩ A(F ∧ Pj) ≠ ∅.
From the independence of the partitions, we know that A(F ∧ Pi) ∩ A(F ∧ Pj) = ∅, which leads to a contradiction. Therefore, there can exist only one subpartition F ∧ Pi such that α ⊨ F ∧ Pi. □
According to the pigeonhole principle, we know that for any three Boolean variables, there must be two variables with the same assignment. That is, (x1 = x2) ∨ (x2 = x3) ∨ (x1 = x3) always holds.
If we consider these three logical formulas as a set of logical partitions, then this set of partitions is complete, and its partition formula is as follows: (F ∧ (x1 = x2)) ∨ (F ∧ (x2 = x3)) ∨ (F ∧ (x1 = x3)).
However, the conjunction of any two of these logical partitions is satisfiable, meaning that they are not independent logical partitions. When solving the first search space, searches in the other spaces will repeat previously verified assignments. For example, when searching the space of x1 = x2, some solutions satisfying x2 = x3 will also be searched.
From the search space perspective, in three dimensions the planes x1 = x2, x2 = x3, and x1 = x3 all pass through the line x1 = x2 = x3. This line may contain some solutions, causing the search on each plane to repeatedly verify these assignments.
In contrast, in the assignment-based partition, the two planes do not intersect, so the partition is independent. In the abstraction-based partition, the two planes intersect only on the coordinate axes, and since no points of the search space lie on the axes, this partition is also independent.
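Independence can likewise be checked directly in the search space (our sketch, using the empty formula so that every vertex is a solution): the three pigeonhole equalities overlap, while the pair x1 = x2 / x1 = −x2 does not.

```python
from itertools import product, combinations

def models(formula, n):
    """All points of {-1, 1}^n satisfying a CNF list of literal-sets."""
    return {p for p in product((-1, 1), repeat=n)
            if all(any(p[abs(l) - 1] * l > 0 for l in c) for c in formula)}

def is_independent(F, partitions, n):
    """Independent partitions: no solution falls into two partitions."""
    return all(not (models(F + A, n) & models(F + B, n))
               for A, B in combinations(partitions, 2))

F = []                                   # empty formula: every vertex solves it
pigeonhole = [[{1, -2}, {-1, 2}],        # x1 = x2
              [{2, -3}, {-2, 3}],        # x2 = x3
              [{1, -3}, {-1, 3}]]        # x1 = x3
abstraction = [[{1, -2}, {-1, 2}],       # x1 = x2
               [{1, 2}, {-1, -2}]]       # x1 = -x2
print(is_independent(F, pigeonhole, 3),
      is_independent(F, abstraction, 3))  # False True
```

The failing pigeonhole case corresponds exactly to the shared line x1 = x2 = x3 described above.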
In some local search algorithms, the algorithm assigns values to multiple variables at once [41,42]. If we use partitions, this form can be represented by the following logical formula:
where the conjunction of the assigned literals is one term of the Principal Disjunctive Normal Form (PDNF) constructed from those variables, and it, along with the other terms of this PDNF, forms a set of independent complete logical partitions of F. Independence ensures that the assignments already checked do not need to be checked again, while completeness guarantees that the satisfiability of the formula can eventually be determined.
Assuming that k variables are assigned at once, this partition divides the search space into 2^k equally sized search spaces. The algorithm checks only one of these, while the remaining 2^k − 1 search spaces are left unexplored. By flipping a literal in the checked term, we obtain another term of the PDNF. From Theorem 2, it can be deduced that to determine satisfiability, each remaining search space must be checked individually, which requires exponential time. Therefore, local search algorithms often flip certain variables based on the satisfiability of clauses [36,43], meaning that they selectively solve within the remaining search space.
In addition, in parallel SAT algorithms, there is another type of search space partition called the XOR partition [10,43].
This partition is also independent and complete, and it is equivalent to the abstraction-based partition when there are only two variables.
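For two variables, this equivalence is immediate under the ±1 encoding, where the XOR of x1 and x2 corresponds to the sign of their product (a small illustrative check, not from the paper):

```python
from itertools import product

# Partition {-1, 1}^2 by the XOR of the two coordinates: under the +-1
# encoding, x1 and x2 agree exactly when their product is +1.
points = set(product((-1, 1), repeat=2))
xor_false = {p for p in points if p[0] * p[1] == 1}    # x1 = x2
xor_true = {p for p in points if p[0] * p[1] == -1}    # x1 = -x2
print(sorted(xor_false), sorted(xor_true))
```

The two halves are disjoint and cover all four vertices, i.e., the partition is independent and complete, and it coincides with the abstraction partition x1 = x2 versus x1 = −x2.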
Through analysis of the partitions, we can conclude that, in addition to mixing abstraction-based and assignment-based solving, different solving algorithms can also be combined, as long as the partitions they correspond to are independent and complete. This ensures both consistency between the satisfiability of all search spaces and the satisfiability of the original formula, while avoiding redundant calculations.