Article

T-Way Combinatorial Testing Strategy Using a Refined Evolutionary Heuristic

1 China Satellite Network System Research Institute, Beijing 100071, China
2 School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510275, China
* Author to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2025, 14(5), 95; https://doi.org/10.3390/jsan14050095
Submission received: 30 July 2025 / Revised: 17 September 2025 / Accepted: 22 September 2025 / Published: 25 September 2025
(This article belongs to the Section Communications and Networking)

Abstract

In complex testing scenarios of large-scale information systems, communication networks, and the Internet of Things, exhaustive testing is prohibitively expensive and time-consuming. T-way combinatorial testing has emerged as a cost-effective solution. To address the problem of generating test suites for t-way combinatorial testing, a Logical Combination Index Table (LCIT) is proposed. Utilizing the LCIT, the t-way combinatorial coverage model (t-wCCM) is constructed to guide the test case generation process. A Multi-start Construction Procedure (MsCP) algorithm is employed to generate an initial solution set, and local optimization is then performed using a low-complexity Balanced Local Search (BLS) algorithm. Further, an Evolutionary Path Relinking algorithm combined with BLS (EvPR + BLS) is proposed to accelerate convergence. Experiments show that the proposed Refined Evolutionary Heuristic (REH) algorithm performs best on 50% of the classic test instances and outperforms the average on 66% of the test instances, with a relative improvement in the maximum computation time of approximately 33.33%.

1. Introduction

With the rapid development of large-scale information systems, communication networks, and the Internet of Things, the demand for combinatorial testing is also growing rapidly. A report by the National Institute of Standards and Technology (NIST) indicates that software testing accounts for 50% to 80% of development budgets [1]. Given that testing-related costs constitute more than half of the total budget, improving testing efficiency can significantly reduce development costs. Exhaustive testing requires traversing all possible combinations of inputs and execution conditions, consuming substantial human resources, time, and money, which makes it impractical in real-world projects. Furthermore, time and budget constraints often lead to insufficient testing during development, significantly increasing the risk of defect leakage [2,3], which can subsequently lead to failures and security vulnerabilities.

1.1. Combinatorial Testing

Due to the high cost and impracticality of exhaustive testing, combinatorial testing methods have gradually developed into a practical and general approach. Their core is the automated generation of test cases, applicable to various combinatorial testing scenarios. Research [4] reveals the approximate distribution of fault-triggering conditions: most faults are triggered by a single parameter, a smaller number of faults are triggered by interactions between two parameters, and faults involving combinations of three or more parameters are progressively fewer. A NASA case study [5] also confirms this distribution: 67% of faults were caused by a single parameter, 93% by combinations of two parameters, 98% by combinations of three parameters, and nearly all faults were triggered by combinations of at most four to six parameters. Other studies [6,7,8] have reached consistent conclusions, reinforcing the universality of this approximate distribution.
Relevant research concludes that most faults can be covered by testing individual parameters or small combinations of parameters. Higher-strength testing (typically involving 4 or more parameters) is a necessary condition for high assurance [9,10], but it leads to a significant increase in the computational cost of generating the test suite, the number of test cases, and the execution overhead. The problem of generating test suites for combinatorial testing scenarios is classified as a Knapsack-like Problem (KLP) [11], i.e., an NP-hard problem [12]. As the number of parameters n, the number of parameter values m, and the combination strength t increase, the computational complexity of finding the optimal combinatorial test suite grows dramatically, approximately as $O(m^{C_n^t})$.

1.2. Two-Way Combinatorial Testing Strategy

Faced with complex combinatorial testing scenarios [13], the two-way combinatorial testing strategy is generally adopted, which ensures that every combination of two values from any two parameters is covered by at least one test case. Compared to exhaustive testing, the two-way combinatorial testing strategy can significantly reduce the number of test cases, thereby lowering costs. Two-way combinatorial testing is the most commonly used strategy in combinatorial testing scenarios, offering high practicality and efficiency, and it has been widely applied in actual testing activities.
To date, nearly 20 combinatorial test case generation tools have emerged [14], including AETG, PICT, TConfig, AllPairs, GATG, etc. [15,16]. Existing tools primarily have four limitations:
  • Some tools only support generating test suites for the two-way combinatorial testing strategy, potentially missing 10% to 40% or more of potential defects, which is unacceptable for critical, safety-related, and high-reliability scenarios.
  • There is a lack of efficient algorithms for higher-strength (e.g., 4-, 5-, 6-way) combinatorial testing strategies. Only a few tools support constructing test suites beyond two-way combination strength, and the generation time for these suites is often excessively long [1,4,7].
  • A small number of tools can construct combinatorial test suites within a reasonable computation time, but the results may not be optimal. More efficient algorithms are needed to improve the test suites’ quality and reduce the computation time.
  • Existing tools lack interactive features and combinatorial statistical features [17]. During actual test execution, it may not be necessary to run all test cases completely, but combinatorial coverage information is crucial for fault analysis and problem localization.

1.3. Deterministic and Non-Deterministic Testing Strategy

A combinatorial testing strategy can be divided into deterministic and non-deterministic testing strategies. The deterministic testing strategy typically constructs test cases based on mathematical functions, i.e., it generates test cases according to predetermined rules. Under general conditions, it can generate an optimal combinatorial test suite, but the suite is fixed each time, making it difficult to adapt to specific testing needs or optimization objectives [8,18]. The non-deterministic testing strategy relies on random or guided selection algorithms to construct test cases and can be divided into random algorithms [19], heuristic algorithms [20], and artificial intelligence algorithms [21]. Random algorithms construct test suites by randomly sampling from the exhaustive combination space, usually requiring a random distribution as input for sampling [22]. Heuristic algorithms use randomized techniques to predict and prioritize the construction of valuable test cases; e.g., AETG [23] and its variant mAETG [24] aim to generate test cases that maximize coverage gain. Artificial intelligence algorithms employ evolutionary mechanisms [7,17,25] to construct near-optimal test cases, such as Genetic Algorithms (GAs) and Ant Colony Algorithms (ACAs) [21]. However, when the number of parameters and values is too large, the convergence efficiency of the evolutionary process decreases and the computation time increases. Heuristic algorithms also include simulated annealing [26], adaptive random testing [27], tabu search [28], harmony search [29], the gravitational search algorithm [13], swarm optimization algorithms [30], metaheuristic algorithms [9], and the fuzzy adaptive sine cosine algorithm [31]. These methods typically add test cases iteratively until the coverage criterion is satisfied. Although the non-deterministic testing strategy does not always produce optimal test suites, it possesses randomness, meaning that the generated test suite can vary each time.
This paper focuses on the modeling and solving of combinatorial testing problems. The main contributions are as follows.
  • Constructing a Logical Combination Index Table (LCIT) to support the mathematical characterization of the t-way combinatorial testing strategy [29,32,33], capable of counting t-way combinations and guiding the test case generation process.
  • Establishing a t-way combinatorial coverage model (t-wCCM), defining the t-way combinations coverage function and coverage criterion, and quantitatively analyzing the approximate size of the t-way combinatorial test suite.
  • Proposing a refined evolutionary heuristic (REH) algorithm that optimizes the generation of the combinatorial test suite step by step, utilizing an adaptive evolutionary mechanism to accelerate computational convergence efficiency.
This paper is organized as follows. Section 2 builds the t-wCCM mathematical model based on the LCIT. Section 3 proposes the REH algorithm. Section 4 analyzes the algorithm's performance using classic test instances. Section 5 concludes this paper.

2. Mathematical Modeling

2.1. t-Way Combinatorial Testing Strategy

Assume a t-way combinatorial testing scenario contains n parameters, represented as
$P = \{P_i, i = 1, \dots, n\}$,
where parameter $P_i$ has $n_i$ discrete values, and its value set is
$P_i = \{v_{i,l}, l = 1, \dots, n_i\}$,
where $v_{i,l}$ represents the l-th value of the i-th parameter. Selecting t parameters from the set P results in $C_n^t$ combinations. These combinations can be represented as
$T^t = \{T_k^t, k = 1, \dots, C_n^t\}$,
where $T_k^t = \{P_j^k, j = 1, \dots, t\}$ represents the k-th combination of t parameters.
For each $T_k^t$, assume there exists a feasible test case set
$S^{T_k^t} = \{S_i^{T_k^t}, i = 1, \dots, m_k\}$,
Let $|S^{T_k^t}| = m_k$ be the number of test cases required to cover all value combinations of the selected t parameters. Any test case $S_i^{T_k^t}$ consists of n values, one from each of the n parameters, and can be represented as
$S_i^{T_k^t} = \{v_{i,l}^{T_k^t}, l = 1, \dots, n\}$,
where $v_{i,l}^{T_k^t}$ represents the value selected from parameter $P_l$. If any combination of t values from t parameters each is covered by the test case set $S^{T^t}$ at least once, then
$S^{T^t} = \{S^{T_k^t}, k = 1, \dots, C_n^t\}$,
where $S^{T^t}$ is called the t-way combinatorial test suite. When $t = 2$, it is called the 2-way combinatorial testing strategy, and $S^{T^2}$ must cover any combination of two values from two parameters each at least once. Similarly, when $t = n$, $S^{T^n}$ covers all combinations of n values from the n parameters at least once, i.e., it is the exhaustive test suite.
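To make the notation concrete, the following Python sketch (an illustration, not code from the paper) enumerates the $C_n^t$ parameter combinations $T^t$ and the t-way value combinations that a t-way suite must cover at least once; the parameter dictionary mirrors the scenario later given in Table 1, and the function name t_way_targets is an assumption made for this sketch.

from itertools import combinations, product

parameters = {
    "I_duplex": ["TDD"],
    "II_bandwidth": ["100MHz", "200MHz"],
    "III_coding": ["LDPC", "Polar"],
    "IV_modulation": ["BPSK", "QPSK", "16QAM"],
}

def t_way_targets(params, t):
    """Enumerate every t-way combination of values, one tuple per target."""
    names = list(params)
    targets = []
    for combo in combinations(names, t):            # the C_n^t parameter subsets T_k^t
        for values in product(*(params[p] for p in combo)):
            targets.append(tuple(zip(combo, values)))
    return targets

pairs = t_way_targets(parameters, t=2)
print(len(pairs))   # number of 2-way value combinations that the suite must cover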

2.2. Logical Combination Index Table

The Logical Combination Index Table (LCIT) is used to calculate t-way combinations coverage and guide the construction process of the combinatorial test suite [34]. Table 1 shows a combinatorial testing scenario with four parameters, and each parameter has a different number of values.
Taking a four-parameter combinatorial scenario as an example, its LCIT is shown in Table 2. The numbers with −/+ signs in the cells represent indices of the specific values given in parentheses. An empty cell indicates that the value of that parameter can be arbitrary and does not need to be specified. A negative index in a row represents a value $v_p$, which forms a two-way combination with each positively indexed value $v_q$ in the same row. The LCIT ensures that any combination of two values from any two parameters is covered by at least one row.
The LCIT can be extended to general t-way ($t \geq 2$) combinations. Generating a t-way LCIT first requires calculating the exhaustive combinations of any $t-1$ parameters. There are $C_n^{t-1}$ ways to select $t-1$ parameters from the n parameters. Furthermore, the number of exhaustive value combinations of $t-1$ parameters is $v^{t-1}$, where v is the number of values of each parameter. As shown in Table 3, the k-th combination of any $t-1$ parameters can be represented as
$T_k^{t-1} = \{P_i^k, k = 1, \dots, C_n^{t-1}, i = 1, \dots, t-1\}$,
where $P_i^k$ is the i-th parameter in $T_k^{t-1}$. Let $\complement(T_k^{t-1})$ be the complement set of $T_k^{t-1}$. In the LCIT, the values of these $t-1$ parameters are labeled with a negative sign and placed in the left columns. The values of the remaining parameters are labeled with a positive sign and placed in the right columns. In any row of the LCIT, the $t-1$ negative values and any one positive value form a t-way combination of t different parameters, representing one t-way combination covered by that row.
For any t-way combination $X = \{v_i^X, i = 1, \dots, t\}$, the coverage function is defined as
$f(v_i^X) = \begin{cases} 1, & \text{if } X \in S^{T^t} \\ 0, & \text{otherwise} \end{cases}$
If $X \in S^{T^t}$, then $f(v_i^X) = 1$ indicates that the value $v_i^X$, as an element of combination X, is covered by $S^{T^t}$ once.
Algorithm 1 provides the pseudocode for generating the t-way LCIT.
Algorithm 1 Pseudocode for LCIT construction algorithm.
Require: Combination strength t, parameter set $P = \{P_i, i = 1, \dots, n\}$, value sets $P_i = \{v_{i,l}, l = 1, \dots, n_i\}$
Ensure: LCIT for t-way combinations
1: Generate the set $T^{t-1} = \{T_k^{t-1}, k = 1, \dots, C_n^{t-1}\}$ from all combinations of $t-1$ parameters of set P
2: for $k = 1$ to $C_n^{t-1}$ do
3:   Enumerate the values of combination $T_k^{t-1} = \{P_i^k, i = 1, \dots, t-1\}$
4:   Mark these values with a negative sign and place them in the left $t-1$ columns of the LCIT
5:   Place the values of the complement set $\complement(T_k^{t-1})$ in the remaining $n-t+1$ columns
6:   In each row, the $t-1$ negative values and any one positive value form a t-way combination
7: end for
8: Output the LCIT for t-way combinations
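A simplified Python sketch of Algorithm 1 follows. It is an interpretation rather than the authors' implementation: it enumerates only the actual values of each negative parameter, whereas Table 2 additionally pads value indices up to the maximum v and therefore contains a few more rows; the function name build_lcit and the dictionary layout are assumptions made for illustration.

from itertools import combinations, product

def build_lcit(params, t):
    """Build LCIT rows: a negative (t-1)-parameter value combination plus one
    positive value index for the complement parameters (blank if out of range)."""
    names = list(params)
    v_max = max(len(vals) for vals in params.values())
    rows = []
    for neg_set in combinations(names, t - 1):          # the C_n^{t-1} subsets
        pos_set = [p for p in names if p not in neg_set]
        for neg_values in product(*(params[p] for p in neg_set)):
            for idx in range(v_max):                    # one row per positive index
                positive = {p: params[p][idx] for p in pos_set if idx < len(params[p])}
                rows.append({"negative": dict(zip(neg_set, neg_values)),
                             "positive": positive})
    return rows

# Usage with the Table 1 scenario:
scenario = {"I": ["TDD"], "II": ["100MHz", "200MHz"],
            "III": ["LDPC", "Polar"], "IV": ["BPSK", "QPSK", "16QAM"]}
print(len(build_lcit(scenario, t=2)))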
The number of rows in the LCIT can be approximated as
$m = |S^{T^t}| = C_n^t \times v^t$,
where v is the maximum number of values among the n parameters.
When constructing the t-way combinatorial test suite based on the LCIT, each row of the LCIT corresponds to one test case. All rows of the LCIT constitute a t-way combinatorial test suite. Figure 1 shows an estimation of the number of t-way combinatorial test cases.
As shown in Figure 1, the number of t-way combinatorial test cases is most significantly affected by the combination strength t, growing approximately exponentially. The higher the combination strength, the faster the growth in the number of combinatorial test cases. Therefore, when constructing a t-way combinatorial test suite, it is necessary to improve the quality of test case generation to reduce redundant coverage, and to reduce the computational complexity to improve the algorithm's convergence efficiency.
In Table 3, each row of the LCIT covers $n-t+1$ combinations. Therefore, the number of t-way combinations covered by the LCIT can be calculated as
$C = \dfrac{m \times (n-t+1)}{t} = \dfrac{C_n^t \times v^t \times (n-t+1)}{t}$.

2.3. Mathematical Modeling

Based on the logical combination coverage characteristics of the LCIT, the t-way combinatorial coverage model (t-wCCM) is constructed.
$\max \sum_{k}\sum_{i}\sum_{j} f(v_{i,j}^{T_k^t})$  (11)
s.t. $f(v_{i,j}^{T_k^t}) \geq 1, \; \forall v_{i,j}^{T_k^t} \in P_j^k$  (12)
$T_k^t = \{P_j^k, j = 1, \dots, t\}, \; P_j^k \in P$  (13)
$S_i^{T_k^t} = \{v_{i,l}^{T_k^t}, l = 1, \dots, n\}, \; v_{i,l}^{T_k^t} \in S_i^{T_k^t}$  (14)
$S^{T_k^t} = \{S_i^{T_k^t}, i = 1, \dots, m_k\}$  (15)
$S^{T^t} = \{S^{T_k^t}, k = 1, \dots, C_n^t\}$.  (16)
The objective function (11) maximizes the coverage of all t-way combinations constructed by the LCIT. Constraints (12) and (13) ensure that any combination of t values is covered by the set $S^{T^t}$ at least once, while minimizing the size of the t-way combination set $S^{T^t}$. Constraint (14) ensures that every combination $S_i^{T_k^t}$ consists of n values, one from each of the n parameters, and that any subset of t values $\{v_{i,l}^{T_k^t}, l = 1, \dots, t\}$ is covered by the combination $S_i^{T_k^t}$. Constraints (15) and (16) ensure that the set $S^{T^t}$ covers all combinations of t values from t parameters each.
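The objective (11) can be evaluated directly by counting covered combinations. The short sketch below (an assumption made for illustration, not the paper's code) computes how many t-way value combinations a candidate suite covers at least once, which is the quantity the generation process tries to maximize with each added test case.

from itertools import combinations

def covered_targets(suite, t):
    """suite: list of test cases, each a dict mapping parameter name -> value."""
    covered = set()
    for case in suite:
        names = sorted(case)
        for combo in combinations(names, t):
            covered.add(tuple((p, case[p]) for p in combo))  # one t-way combination
    return covered

# Example: two test cases over the Table 1 parameters.
suite = [
    {"I": "TDD", "II": "100MHz", "III": "LDPC", "IV": "BPSK"},
    {"I": "TDD", "II": "200MHz", "III": "Polar", "IV": "QPSK"},
]
print(len(covered_targets(suite, t=2)))   # 2-way combinations covered so far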

3. Refined Evolutionary Heuristic

Aiming to solve the t-wCCM problem, a refined evolutionary heuristic (REH) algorithm is proposed, as shown in Figure 2. Firstly, construct the t-way LCIT and use the Multi-start Construction Procedure (MsCP) algorithm to generate an initial solution set. Secondly, use the low-complexity Balanced Local Search (BLS) algorithm to optimize the initial solution set, generating a guiding solution set. Thirdly, introduce Path Relinking (PR) combined with BLS (PR + BLS) algorithm to optimize the guiding solution set, generating an elite solution set. Fourthly, use the Evolutionary Path Relinking combined with the BLS (EvPR + BLS) algorithm to iteratively optimize the elite solution set, converging to the optimal solution set.
Once an optimal solution is generated, the corresponding t-way combinations in the LCIT are marked as covered. The weights of covered values are set to zero, while the weights of uncovered values are increased. The algorithm iteration terminates when all t-way combinations in all rows of the LCIT are covered by the optimal solution set.
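The outer loop described above can be summarized as the following skeleton (a high-level interpretation, not the released REH implementation); search_best_case stands for the MsCP/BLS/EvPR + BLS inner search and covered_by for the LCIT coverage lookup, both of which are assumed helpers here.

def generate_suite(all_targets, search_best_case, covered_by):
    """Add the best test case found by the inner search until every t-way
    combination from the LCIT is covered."""
    uncovered = set(all_targets)
    suite = []
    while uncovered:                            # termination: all LCIT rows covered
        case = search_best_case(uncovered)      # weights favour uncovered combinations
        newly = covered_by(case) & uncovered
        if not newly:                           # safety guard against a stalled search
            break
        suite.append(case)
        uncovered -= newly                      # covered weights drop to zero
    return suite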

3.1. Multi-Start Construction Procedure Algorithm

The Multi-start Construction Procedure (MsCP) algorithm [35,36] independently and randomly constructs multiple feasible solutions, so that the feasible solutions are uncorrelated. By repeating the above procedure multiple times, an initial solution set can be obtained [37].
As shown in Algorithm 2, the MsCP algorithm constructs the initial solution set $S^I$, consisting of m initial solutions,
$S^I = \{S_k^I, k = 1, \dots, m\}$,
where every feasible solution $S_k^I$ consists of n values, one from each of the n parameters,
$S_k^I = \{v_{i,s_k}^I, i = 1, \dots, n\}$,
where the value $v_{i,s_k}^I$ is randomly selected from parameter $P_i$.
Define the t-way combinations coverage weight $g_k^I$ of initial solution $S_k^I$ as
$g_k^I = \sum_{1 \leq i \leq n} f(v_{i,s_k}^I)$,
which is the number of t-way combinations covered by the initial solution $S_k^I$ in the LCIT. Then, the t-way combinations coverage weights of the initial solution set $S^I$ are
$g^I = \{g_k^I, k = 1, \dots, m\}$.
Algorithm 2 Pseudocode for multi-start construction procedure (MsCP) algorithm.
Require: Parameter set $P = \{P_i, i = 1, \dots, n\}$, value sets $P_i = \{v_{i,l}, l = 1, \dots, n_i\}$, number of initial solutions m
Ensure: Initial solution set $S^I = \{S_k^I, k = 1, \dots, m\}$, t-way combinations coverage weights $g^I = \{g_k^I, k = 1, \dots, m\}$
1: Generate a time-dependent random seed
2: for $k = 1$ to $m$ do
3:   for $i = 1$ to $n$ do
4:     Randomly select a value $v_{i,s_k}^I$ from $P_i$
5:   end for
6:   Construct the initial solution $S_k^I = \{v_{i,s_k}^I, i = 1, \dots, n\}$
7:   Calculate the t-way combinations coverage weight $g_k^I$ of $S_k^I$ via the LCIT
8: end for
9: Output $S^I$ and $g^I$
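The following Python sketch mirrors Algorithm 2; the helper coverage_weight, which counts the t-way combinations a candidate covers in the LCIT, is assumed to exist and is passed in as a callback.

import random
import time

def mscp(params, m, coverage_weight):
    """Multi-start construction: m independent random test cases plus weights."""
    random.seed(time.time_ns())                 # time-dependent random seed (line 1)
    solutions, weights = [], []
    for _ in range(m):
        case = {p: random.choice(vals) for p, vals in params.items()}
        solutions.append(case)                  # initial solution S_k^I
        weights.append(coverage_weight(case))   # coverage weight g_k^I via the LCIT
    return solutions, weights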

3.2. Balanced Local Search Algorithm

To minimize the size of the t-way combinatorial test suite and balance the coverage frequency of all values from the parameter sets, a Balanced Local Search (BLS) algorithm based on LCIT is proposed. With low computational complexity, BLS optimizes the initial solutions one by one to maximize the coverage of t-way combinations in the LCIT, as can be seen in Algorithm 3.
Given the initial solution set $S^I$, the values of parameter $P_i$ are sorted in ascending order of their coverage frequency in $S^I$:
$P_i = \{v_{i,l}, 1 \leq l \leq n_i\}$.
Values with lower coverage frequency (e.g., $\{v_{i,l}, l = 1, 2, 3\}$) are prioritized as neighborhood search candidates. In the neighborhood search, replace the i-th value in any solution $S_k^I$ with value $v_{i,l}$ to obtain a new candidate solution $S_{i,l}$, and calculate its t-way combinations coverage weight $g_{i,l}$. If $g_{i,l} > g_k^I$, then replace $S_k^I$ with $S_{i,l}$. After performing the above neighborhood search process for each solution in $S^I$ one by one, the optimized guiding solution set is obtained:
$S^G = \{S_k^G, k = 1, \dots, m\}$,
with its t-way combinations coverage weights
$g^G = \{g_k^G, k = 1, \dots, m\}$.
Algorithm 3 Pseudocode for Balanced Local Search (BLS) algorithm.
Require: Parameter set P, value sets $P_i$, number of initial solutions m, initial solution set $S^I$, t-way combinations coverage weights $g^I$
Ensure: Guiding solution set $S^G$, t-way combinations coverage weights $g^G$
1: for $i = 1$ to $n$ do
2:   Sort the values of $P_i$ in ascending order of their coverage frequency in $S^I$
3: end for
4: for $k = 1$ to $m$ do
5:   for $i = 1$ to $n$ do
6:     Replace the i-th value in $S_k^I$ with $\{v_{i,l}, l = 1, 2, 3\}$ to obtain $S_{i,l}$
7:     Calculate the t-way combinations coverage weight $g_{i,l}$ of $S_{i,l}$
8:     if $g_{i,l} > g_k^I$ then
9:       $S_k^I \leftarrow S_{i,l}$, $g_k^I \leftarrow g_{i,l}$
10:     end if
11:   end for
12: end for
13: $S^G \leftarrow S^I$, $g^G \leftarrow g^I$
14: Output $S^G$ and $g^G$
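A compact sketch of Algorithm 3 is given below; coverage_weight and frequency (how often a value already appears in the current solution set) are assumed callbacks, and candidates=3 reflects the three low-frequency values used as neighborhood candidates above.

def bls(params, solutions, weights, coverage_weight, frequency, candidates=3):
    """Balanced local search: replace values with low-frequency candidates
    and keep a replacement only if the coverage weight improves."""
    # Sort each parameter's values by how often they already appear (ascending).
    order = {p: sorted(vals, key=lambda v: frequency(p, v)) for p, vals in params.items()}
    for k, case in enumerate(solutions):
        for p in params:
            for v in order[p][:candidates]:     # low-frequency values first
                trial = dict(case, **{p: v})
                g = coverage_weight(trial)
                if g > weights[k]:              # accept only improving moves
                    solutions[k], weights[k] = trial, g
                    case = trial
    return solutions, weights                   # guiding set S^G and weights g^G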

3.3. Path Relinking Algorithm

The Path Relinking (PR) algorithm [38] is used to construct a transformation path from one solution to another. Each path generates a neighborhood solution, which may be better or worse [39]. Therefore, each neighborhood solution needs further optimization by searching its neighborhood to overcome local optima.
The PR algorithm operations include forward and backward directions. Given an initial solution $S_k^I$ and a guiding solution $S_k^G$, bidirectional paths can be constructed as
$S_k^I \to S_k^G: S_k^I = S_1^f, S_2^f, \dots, S_n^f = S_k^G$,
$S_k^G \to S_k^I: S_k^G = S_1^b, S_2^b, \dots, S_n^b = S_k^I$.
The solution sets generated by the PR algorithm are defined as the neighborhood solution sets
$NS^f = \{NS_l^f, l = 1, \dots, n\}$,
$NS^b = \{NS_l^b, l = 1, \dots, n\}$,
where the forward neighborhood solution set $NS^f$ is generated during the traversal of the path $S_k^I \to S_k^G$, and the backward neighborhood solution set $NS^b$ is generated during the traversal of the path $S_k^G \to S_k^I$, as shown in Figure 3.
Combining the PR algorithm with the BLS algorithm (PR + BLS), each neighborhood solution is optimized separately, further updating the guiding solution set to the elite solution set $S^E$:
$S^E = \{S_k^E, k = 1, \dots, m\}$,
with its t-way combinations coverage weights
$g^E = \{g_k^E, k = 1, \dots, m\}$.
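The path construction itself reduces to stepwise attribute copying between two solutions. The sketch below is an illustrative assumption (solutions represented as parameter-to-value dicts) that generates the forward and backward neighborhood sets $NS^f$ and $NS^b$; in PR + BLS, each intermediate solution would then be passed to the BLS routine.

def relink(start, guide):
    """Forward path: copy one differing parameter value per step from guide."""
    path, current = [], dict(start)
    for p in start:
        if current[p] != guide[p]:
            current = dict(current, **{p: guide[p]})
            path.append(current)                # an intermediate neighborhood solution
    return path

def bidirectional_relink(s_i, s_g):
    return relink(s_i, s_g), relink(s_g, s_i)   # NS^f and NS^b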

3.4. Evolutionary Path Relinking Algorithm

To accelerate the neighborhood optimization, the Evolutionary Path Relinking combined with BLS (EvPR + BLS) algorithm is proposed by combining the PR + BLS algorithm with the genetic algorithm [40,41]. The elite solution set is used as the initial population of the evolutionary mechanism, and multiple groups of two elite solutions are then randomly selected to generate the next-generation solutions until the stopping criterion is met. The stopping criterion can be a constraint on the maximum number of evolutionary generations or a computation time constraint. To reduce the computation time of the evolutionary iterations, the process halts if the current best solution shows no improvement after three generations of evolution.
Assume that, after q iterations of evolution, the current optimal solution set $S^B$ is obtained as
$S^B = \{S_j^B, j = 1, \dots, q\}$,
with its t-way combinations coverage weights
$g^B = \{g_j^B, j = 1, \dots, q\}$.
In Algorithm 4, lines 1 to 3 sort the values of $P_i$ in ascending order of their coverage frequency in $S^B$. Line 4 selects the elite solution with the maximum coverage weight from $S^E$ as the current optimal solution $S_{q+1}^B$. Line 5 defines the evolution stopping condition: if the current optimal solution shows no improvement for three consecutive generations, the evolution process terminates. Lines 6 to 21 execute the PR and BLS algorithms. If the t-way combinations coverage weight of a newly generated solution is greater than $g_{q+1}^B$, it replaces $S_{q+1}^B$. Lines 24 and 25 add the $(q+1)$-th current optimal solution, obtained by evolving the elite solution set, to the current optimal solution set $S^B$.
Algorithm 4 Pseudocode for Evolutionary Path Relinking Algorithm (EvPR + BLS).
Require: Parameter set P, value sets $P_i$, elite solution set $S^E$, t-way combinations coverage weights $g^E$, current optimal solution set $S^B$, t-way combinations coverage weights $g^B$
Ensure: New optimal solution set $S^B$, t-way combinations coverage weights $g^B$
1: for $i = 1$ to $n$ do
2:   Sort the values of $P_i$ in ascending order of their coverage frequency in $S^B$
3: end for
4: Select the solution with the maximum coverage weight from $S^E$ as $S_{q+1}^B$ with weight $g_{q+1}^B$; set $k = 0$
5: while $k < 3$ do
6:   for $1$ to $m/2$ do
7:     Randomly select two solutions $(S_i^E, S_j^E)$ from $S^E$
8:     Forward PR: $S_i^E \to S_j^E$ yields $NS^f$; backward PR: $S_j^E \to S_i^E$ yields $NS^b$
9:     for $l = 1$ to $n$ do
10:      for $i = 1$ to $n$ do
11:        Replace the i-th value in $NS_l^f$ (or $NS_l^b$) with $\{v_{i,j}, j = 1, 2, 3\}$ to obtain $S_{i,j}$
12:        Calculate the t-way combinations coverage weight $g_{i,j}$ of $S_{i,j}$
13:        if $g_{i,j} > g_{q+1}^B$ then
14:          $S_{q+1}^B \leftarrow S_{i,j}$, $g_{q+1}^B \leftarrow g_{i,j}$
15:          $k = 0$
16:        else
17:          $k = k + 1$
18:        end if
19:      end for
20:    end for
21:    Update $S_{q+1}^B$, $g_{q+1}^B$
22:  end for
23: end while
24: Add $S_{q+1}^B$ to $S^B = \{S_j^B, j = 1, \dots, q+1\}$
25: Add $g_{q+1}^B$ to $g^B = \{g_j^B, j = 1, \dots, q+1\}$
26: Output $S^B$ and $g^B$
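A condensed sketch of the EvPR + BLS loop is given below, reusing the bls and bidirectional_relink sketches above together with the assumed coverage_weight and frequency callbacks. For simplicity it applies the three-generation no-improvement rule per generation rather than per replacement, so it is an approximation of Algorithm 4 rather than a line-by-line transcription.

import random

def evpr_bls(params, elite, elite_weights, coverage_weight, frequency):
    """Evolve the elite set with PR + BLS until no improvement for 3 generations."""
    best_idx = max(range(len(elite)), key=lambda k: elite_weights[k])
    best, best_w = dict(elite[best_idx]), elite_weights[best_idx]
    stale = 0
    while stale < 3:                                   # stopping criterion
        improved = False
        for _ in range(len(elite) // 2):               # m/2 random pairs per generation
            s_i, s_j = random.sample(elite, 2)
            neighbours = []
            for path in bidirectional_relink(s_i, s_j):
                neighbours.extend(path)                # NS^f and NS^b members
            if not neighbours:
                continue
            cand, cand_w = bls(params, neighbours,
                               [coverage_weight(c) for c in neighbours],
                               coverage_weight, frequency)
            for c, w in zip(cand, cand_w):
                if w > best_w:                         # replace the incumbent best
                    best, best_w, improved = c, w, True
        stale = 0 if improved else stale + 1
    return best, best_w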

4. Performance Evaluation

Two numerical analysis methods are used to evaluate the performance of the proposed REH algorithm (see Figure 2). Firstly, the sizes of the combinatorial test suites generated by well-known tools are compared on multiple classic test instances. Secondly, the computation time for generating combinatorial test suites is analyzed to compare the algorithms' convergence efficiency [17].

4.1. Test Suite Size Analysis

Six groups of classic test instances are used to compare the performance of different tools in constructing t-way combinatorial test suites. Each instance can be characterized as $v^n$, where n is the number of parameters and v is the number of values per parameter. The widely used two-way combinatorial testing strategy is selected. The proposed REH algorithm and ten other tools [15] are used to generate two-way combinatorial test suites separately. The sizes of the two-way combinatorial test suites are summarized in Table 4.
Compared to the ten tools, the proposed REH algorithm generates the smallest 2-way combinatorial test suites for test instances 1, 3, and 4, performing better than other tools, i.e., it achieves the best performance on 50% of the test instances. Secondly, the test suite sizes generated by the REH algorithm for instances 1, 3, 4, and 5 are smaller than the arithmetic mean of the other ten tools, meaning its performance is superior to the average on 66.67% of the test instances.
Utilizing the t-way combinations coverage feature of the LCIT, the number of 2-way combinations for each test instance can be quantified, annotated in Figure 4. The six growth curves of 2-way combinations coverage during the test suite generation process by the proposed REH algorithm are statistically analyzed.
As shown in Figure 4, the slopes of the two-way combinations coverage growth curves for test instances 1, 2, 4, and 5 are approximately constant, indicating that the 2-way combinations coverage weight per test case tends to be stable. For test instances 3 and 6, as the number of test cases increases, the slope of the 2-way combinations coverage growth curve decreases significantly, indicating that, when the number of parameters and values is too large, the performance of the REH algorithm declines, and it becomes more difficult to search for the theoretical optimal solution in the later stages of test suite generation. The reason is that both the MsCP algorithm (see Algorithm 2) and the EvPR + BLS algorithm (see Algorithm 4) introduce randomness. While the proposed REH algorithm avoids generating the same fixed test suite each time, it also introduces stochasticity into the solving process, meaning that the generated test suite may not be theoretically optimal each time.

4.2. Algorithm Convergence Analysis

The time-to-target method [42,43] is used to analyze the convergence efficiency of the proposed REH algorithm. The computation times consumed by multiple independent runs are used to fit the probability distribution curves of the computation time. The stopping criterion is reaching the specified coverage target for the t-way combinatorial test suite, i.e., the algorithm terminates when the test suite covers all t-way combinations defined by the LCIT.
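A minimal sketch of how such time-to-target curves are produced is shown below (an illustration using numpy and matplotlib, not the authors' tooling): the measured run times are sorted and plotted as an empirical cumulative probability of reaching the coverage target within a given time budget.

import numpy as np
import matplotlib.pyplot as plt

def ttt_plot(times_ms, label):
    """Plot the empirical probability of reaching the target within a time budget."""
    t = np.sort(np.asarray(times_ms, dtype=float))
    p = (np.arange(1, len(t) + 1) - 0.5) / len(t)   # empirical CDF positions
    plt.plot(t, p, marker=".", linestyle="none", label=label)

# run_times_reh and run_times_gatg would hold the 300 measured times per method.
# ttt_plot(run_times_reh, "REH"); ttt_plot(run_times_gatg, "GATG")
# plt.xlabel("time to target (ms)"); plt.ylabel("cumulative probability")
# plt.legend(); plt.show()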
The experiments were conducted on a basic personal computer featuring a 2.1 GHz processor and 4 GB of memory.
In the experiments, the GATG tool was selected for comparison. The REH algorithm and the GATG algorithm were each independently executed 300 times on test instance 2 ($3^{13}$), and each run was required to cover 702 two-way combinations.
The experimental and theoretical probability distribution curves obtained with the time-to-target method are shown in Figure 5. The results show that the proposed REH algorithm has a significant advantage in convergence performance. The probability of the REH algorithm finding a two-way combinatorial test suite in at most 2750 ms is about 40%, and in at most 3100 ms it is about 90%. The average computation time for the REH algorithm is 2840 ms, while for GATG it is 3845 ms, meaning that the REH algorithm saves about 1000 ms on average, and its maximum computation time is about 66.67% of GATG's.

5. Conclusions

Addressing combinatorial testing scenarios in large-scale information systems, communication networks, and the Internet of Things, this paper constructs a mathematically characterized t-way LCIT and establishes the t-wCCM mathematical model for the combinatorial testing problem. Furthermore, the REH algorithm is proposed to solve the t-wCCM. Comparative analysis using six groups of classic test instances shows that the proposed REH algorithm performs best on 50% of the test instances and outperforms the average on 66.67% of the test instances, with a relative improvement in the maximum computation time of approximately 33.33%. In conclusion, the proposed REH algorithm offers high practicality and efficiency, and it can be widely applied in actual combinatorial testing activities.

Author Contributions

P.L. conceptualized the study, developed the mathematical model, designed the software, and wrote the original draft. J.S. conducted the investigation, performed the formal analysis, and created visualizations. X.C. supervised the project, acquired funding, and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported in part by the NSFC (No. 61501527), State’s Key Project of Research and Development Plan (No. 2016YFE0122900-3), the Fundamental Research Funds for the Central Universities, Guangdong Science and Technology Project (No. 2016A010106002), and 2016 Major Project of Collaborative Innovation in Guangzhou (No. 201604046008).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kuhn, D.R.; Kacker, R.N.; Lei, Y. Practical Combinatorial Testing; National Institute of Standards and Technology: Gaithersburg, MD, USA; U.S. Department of Commerce: Washington, DC, USA, 2010. Available online: https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-142.pdf (accessed on 16 September 2025).
  2. Almering, V.; Von Genuchten, M.; Cloudt, G.; Sonnemans, P.J.M. Using Software Reliability Growth Models in Practice. IEEE Softw. 2007, 24, 82–88. [Google Scholar] [CrossRef]
  3. Balera, J.; Santiago, V., Jr. An algorithm for combinatorial interaction testing: Definitions and rigorous evaluations. J. Softw. Eng. Res. Dev. 2017, 5, 10. [Google Scholar] [CrossRef]
  4. Kuhn, D.; Kacker, R.; Lei, Y. Advanced Combinatorial Test Methods for System Reliability. In Reliability Society 2010 Annual Technical Report; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2015; pp. 1–6. [Google Scholar]
  5. Kuhn, D.; Kacker, R.; Lei, Y. Combinatorial Coverage Measurement. NIST Interagency/Internal Report. Available online: https://www.nist.gov/publications/combinatorial-coverage-measurement (accessed on 16 September 2025).
  6. Hagar, J.; Kuhn, R.; Kacker, R.; Wissink, T. Introducing Combinatorial Testing in a Large Organization: Pilot Project Experience Report. In Proceedings of the International Conference on Software Testing Verification and Validation, Cleveland, OH, USA, 31 March–4 April 2014; p. 153. [Google Scholar]
  7. Muazu, A.A.; Hashim, A.S.; Sarlan, A. Review of Nature Inspired Metaheuristic Algorithm Selection for Combinatorial t-Way Testing. IEEE Access 2022, 10, 27404–27431. [Google Scholar] [CrossRef]
  8. Bohm, S.; Schmidt, T.J.; Krieter, S.; Pett, T.; Thum, T.; Lochau, M. Coverage Metrics for T-Wise Feature Interactions. In Proceedings of the 2025 IEEE Conference on Software Testing, Verification and Validation (ICST), Napoli, Italy, 31 March–4 April 2025; pp. 198–209. [Google Scholar]
  9. Qin, G.; Zheng, J.; Tsuchiya, T. Meta-Heuristic Algorithm for Constructing Higher-Index Covering Arrays for Combinatorial Interaction Testing. In Proceedings of the 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops, Dublin, Ireland, 16–20 April 2023; pp. 190–196. [Google Scholar]
  10. Guo, X.; Song, X.; Zhou, J.T.; Wang, F.; Tang, K. An Effective Approach to High Strength Covering Array Generation in Combinatorial Testing. IEEE Trans. Softw. Eng. 2023, 49, 4566–4593. [Google Scholar] [CrossRef]
  11. Rojanasoonthon, S.; Bard, J. A GRASP for Parallel Machine Scheduling with Time Windows. INFORMS J. Comput. 2005, 17, 32–51. [Google Scholar] [CrossRef]
  12. Garey, M.; Johnson, D. Computers and Intractability: A Guide to the Theory of NP-Completeness; W.H. Freeman and Company: New York, NY, USA, 1979. [Google Scholar]
  13. Htay, K.M.; Othman, R.R.; Amir, A.; Zakaria, H.L.; Ramli, N. A Pairwise t-Way Test Suite Generation Strategy Using Gravitational Search Algorithm. In Proceedings of the 2021 International Conference on Artificial Intelligence and Computer Science Technology (ICAICST), Yogyakarta, Indonesia, 29–30 June 2021; pp. 7–12. [Google Scholar]
  14. Available online: http://msdn.microsoft.com/en-us/library/cc150619.aspx (accessed on 16 September 2025).
  15. Available online: https://www.pairwise.org/tools.html (accessed on 16 September 2025).
  16. Lin, J.; Cai, S.; He, B.; Fu, Y.; Luo, C.; Lin, Q. FastCA: An Effective and Efficient Tool for Combinatorial Covering Array Generation. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), Madrid, Spain, 25–28 May 2021; pp. 77–80. [Google Scholar]
  17. Lin, P.; Bao, X.; Shu, Z. Test case generation based on adaptive genetic algorithm. In Proceedings of the 2012 IEEE International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), Chengdu, China, 15–18 June 2012; pp. 863–866. [Google Scholar]
  18. Grindal, M.; Offutt, J.; Andler, S.F. Combination testing strategies: A survey. Softw. Test. Verif. Reliab. 2005, 15, 167–199. [Google Scholar] [CrossRef]
  19. Younis, M.; Zamli, K.; Isa, N. IRPS: An efficient test data generation strategy for pairwise testing. In Proceedings of the International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Zagreb, Croatia, 3–5 September 2008; pp. 493–500. [Google Scholar]
  20. Klaib, M.; Muthuraman, S.; Noraziah, A. A Parallel Tree Based Strategy for t-Way Combinatorial Interaction Testing. In Proceedings of the International Conference on Software Engineering and Computer Systems, Honolulu, HI, USA, 21–22 May 2011; pp. 91–98. [Google Scholar]
  21. Shiba, T.; Tsuchiya, T.; Kikuno, T. Using artificial life techniques to generate test cases for combinatorial testing. In Proceedings of the 28th Annual International Computer Software and Applications Conference, Hong Kong, China, 28–30 September 2004; pp. 72–77. [Google Scholar]
  22. Mandl, R. Orthogonal Latin Squares: An application of experiment design to compiler testing. Commun. ACM 1985, 28, 1054–1058. [Google Scholar] [CrossRef]
  23. Cohen, D.; Dalal, S.; Fredman, M.; Patton, G. The AETG system: An approach to testing based on combinatorial design. IEEE Trans. Softw. Eng. 1997, 7, 437–444. [Google Scholar] [CrossRef]
  24. Cohen, M. Designing Test Suites for Software Interactions Testing. Ph.D. Thesis, The University of Auckland, Auckland, New Zealand, 2004. [Google Scholar]
  25. Flores, P.; Cheon, Y. Pwisegen: Generating test cases for pairwise testing using genetic algorithms. In Proceedings of the IEEE International Conference on Computer Science and Automation Engineering, Shanghai, China, 10–12 June 2011; pp. 747–752. [Google Scholar]
  26. Avila, H.; Torres, J.; Hernández, V.; Gonzalez, L. Simulated annealing for constructing mixed covering arrays. Distrib. Comput. Artif. Intell. 2012, 15, 657–664. [Google Scholar]
  27. Chen, J.; Chen, J.; Cai, S.; Chen, H.; Zhang, C.; Huang, C. A Test Case Generation Method of Combinatorial Testing based on t-Way Testing with Adaptive Random Testing. In Proceedings of the 2021 IEEE International Symposium on Software Reliability Engineering Workshops, Wuhan, China, 25–28 October 2021; pp. 83–90. [Google Scholar]
  28. Gonzalez, L.; Rangel, N.; Torres, J. Construction of Mixed Covering Arrays of Variable Strength Using a Tabu Search Approach. In Combinatorial Optimization and Applications (COCOA 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 51–64. [Google Scholar]
  29. Muazu, A.A.; Hashim, A.S.; Audi, U.I.; Maiwada, U.D. Refining a One-Parameter-at-a-Time Approach Using Harmony Search for Optimizing Test Suite Size in Combinatorial t-Way Testing. IEEE Access 2024, 12, 137373–137398. [Google Scholar] [CrossRef]
  30. Jia, S.; Shu, W. Generation of Pairwise Test Sets using Novel DPSO algorithm. Green Commun. Netw. 2012, 11, 479–487. [Google Scholar]
  31. Zabidi, N.S.; Ibrahim, N.; Rejab, M.M.; Mamat, M.; Nazir, S.; Tuselim, N.H.M. Optimizing Variable-Strength Combinatorial Test Suite Generation via Fuzzy Adaptive Sine Cosine Algorithm (FASCA). In Proceedings of the 2024 1st International Conference on Cyber Security and Computing, Melaka, Malaysia, 6–7 November 2024; pp. 62–67. [Google Scholar]
  32. Muazu, A.A.; Hashim, A.S.; Sarlan, A.; Maiwada, U.D. Proposed Method of Seeding and Constraint in One-Parameter-At-a-Time Approach for t-Way Testing. In Proceedings of the 2022 International Conference on Digital Transformation and Intelligence (ICDI), Kuching, Malaysia, 1–2 December 2022; pp. 39–45. [Google Scholar]
  33. Prasad, M.L.; Sastry, J.K.R.; Mallikarjuna, B.; Sitaramulu, V.; Srinivasulu, C.; Naib, B.B. Development of a Programmed Generation of t-Way Test cases Using an Improved Particle Swarm Optimization Strategy. In Proceedings of the 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 28–29 April 2022; pp. 1394–1399. [Google Scholar]
  34. Jin, H.; Shi, C.; Tsuchiya, T. Summary of Constrained Detecting Arrays: Mathematical Structures for Fault Identification in Combinatorial Interaction Testing. In Proceedings of the 2024 IEEE International Conference on Software Testing, Verification and Validation Workshops, Toronto, ON, Canada, 27–31 May 2024; pp. 215–216. [Google Scholar]
  35. Resende, M. Greedy Randomized Adaptive Search Procedures: GRASP. In Encyclopedia of Optimization; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1460–1469. [Google Scholar]
  36. Jin, H.; Tsuchiya, T. A Two-Step Heuristic Algorithm for Generating Constrained Detecting Arrays for Combinatorial Interaction Testing. In Proceedings of the 2020 IEEE 29th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, Bayonne, France, 10–13 September 2020; pp. 219–224. [Google Scholar]
  37. Resende, M.; Ribeiro, C. Greedy randomized adaptive search procedures: Advances, hybridizations, and applications. In Handbook of Metaheuristics; Springer: New York, NY, USA, 2010; pp. 283–319. [Google Scholar]
  38. Glover, F.; Laguna, M.; Martí, R. Fundamentals of scatter search and path relinking. Control Cybern. 2010, 29, 653–684. [Google Scholar]
  39. Resende, M.; Ribeiro, C. GRASP with path-relinking: Recent advances and applications. In Metaheuristics: Progress as Real Problem Solvers; Springer: New York, NY, USA, 2005; pp. 29–63. [Google Scholar]
  40. Villegas, J.; Prins, C.; Prodhon, C. A GRASP with evolutionary path relinking for the truck and trailer routing problem. Comput. Oper. Res. 2011, 38, 1319–1334. [Google Scholar] [CrossRef]
  41. Lin, P.; Kuang, L.; Chen, X.; Yan, J.; Lu, J.; Wang, X. Adaptive subsequence adjustment with evolutionary asymmetric path-relinking for TDRSS scheduling. J. Syst. Eng. Electron. 2014, 25, 800–810. [Google Scholar] [CrossRef]
  42. Aiex, M.; Resende, M.; Ribeiro, C. Probability distribution of solution time in GRASP: An experimental investigation. J. Heuristics 2002, 8, 343–373. [Google Scholar] [CrossRef]
  43. Aiex, R.; Resende, M.; Ribeiro, C. TTT plots: A perl program to create time-to-target plots. Optim. Lett. 2007, 1, 355–366. [Google Scholar] [CrossRef]
Figure 1. Estimation of t-way combinatorial test cases.
Figure 2. Framework of the refined evolutionary heuristic algorithm.
Figure 3. Path relinking operation between initial and guiding solutions.
Figure 4. 2-way combinations coverage growth curve.
Figure 5. Probability distribution of computation time.
Table 1. Four-parameter combinatorial scenario.
Parameters | Sets of Values
I (Duplex Mode) | TDD
II (Carrier Bandwidth) | 100 MHz, 200 MHz
III (Coding Scheme) | LDPC, Polar
IV (Modulation Order) | BPSK, QPSK, 16QAM
Table 2. LCIT for 2-way combinations.
Row No. | I | II | III | IV
1 | −1(TDD) | +1(100 MHz) | +1(LDPC) | +1(BPSK)
2 | −1(TDD) | +2(200 MHz) | +2(Polar) | +2(QPSK)
3 | −1(TDD) | | | +3(16QAM)
4 | −2() | +1(100 MHz) | +1(LDPC) | +1(BPSK)
5 | −2() | +2(200 MHz) | +2(Polar) | +2(QPSK)
6 | −2() | | | +3(16QAM)
7 | −3() | +1(100 MHz) | +1(LDPC) | +1(BPSK)
8 | −3() | +2(200 MHz) | +2(Polar) | +2(QPSK)
9 | −3() | | | +3(16QAM)
10 | | −1(100 MHz) | +1(LDPC) | +1(BPSK)
11 | | −1(100 MHz) | +2(Polar) | +2(QPSK)
12 | | −1(100 MHz) | | +3(16QAM)
13 | | −2(200 MHz) | +1(LDPC) | +1(BPSK)
14 | | −2(200 MHz) | +2(Polar) | +2(QPSK)
15 | | −2(200 MHz) | | +3(16QAM)
16 | | −3() | +1(LDPC) | +1(BPSK)
17 | | −3() | +2(Polar) | +2(QPSK)
18 | | −3() | | +3(16QAM)
19 | | | −1(LDPC) | +1(BPSK)
20 | | | −1(LDPC) | +2(QPSK)
21 | | | −1(LDPC) | +3(16QAM)
22 | | | −2(Polar) | +1(BPSK)
23 | | | −2(Polar) | +2(QPSK)
24 | | | −2(Polar) | +3(16QAM)
25 | | | −3() | +1(BPSK)
26 | | | −3() | +2(QPSK)
27 | | | −3() | +3(16QAM)
Table 3. LCIT for t-way combinations.
Row No. | $T_k^{t-1} = \{P_1^k, \dots, P_{t-1}^k\}$ | $+\complement(T_k^{t-1})$
$1, \dots, v^{t-1}$ | $T_1^{t-1}$ | $+\complement(T_1^{t-1})$
$v^{t-1}+1, \dots, 2v^{t-1}$ | $T_2^{t-1}$ | $+\complement(T_2^{t-1})$
… | … | …
$(C_n^{t-1}-1)\times v^{t-1}+1, \dots, C_n^{t-1}\times v^{t-1}$ | $T_{C_n^{t-1}}^{t-1}$ | $+\complement(T_{C_n^{t-1}}^{t-1})$
Table 4. Sizes of the 2-way combinatorial test suites.
ID | Instance | AETG | IPO | TConfig | CTS | Jenny | ecfeed | AllPairs | PICT | IPO-s | GATG | Mean | REH
1 | $3^4$ | 9 | 9 | 9 | 9 | 11 | 10 | 9 | 9 | 9 | 9 | 9.3 | 9
2 | $3^{13}$ | 15 | 17 | 15 | 15 | 18 | 19 | 17 | 18 | 17 | 19 | 17.0 | 18
3 | $4^{15} 3^{17} 2^{29}$ | 41 | 34 | 40 | 39 | 38 | 37 | 34 | 37 | 32 | 38 | 37.0 | 31
4 | $4^{1} 3^{39} 2^{35}$ | 28 | 26 | 30 | 29 | 28 | 28 | 26 | 27 | 23 | 31 | 27.6 | 22
5 | $2^{100}$ | 10 | 15 | 14 | 10 | 16 | 16 | 14 | 15 | 10 | 14 | 13.4 | 11
6 | $10^{20}$ | 180 | 212 | 231 | 210 | 193 | 203 | 197 | 210 | 220 | 245 | 210.1 | 239
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Lin, P.; She, J.; Chen, X. T-Way Combinatorial Testing Strategy Using a Refined Evolutionary Heuristic. J. Sens. Actuator Netw. 2025, 14, 95. https://doi.org/10.3390/jsan14050095
