Article

Test Case Prioritization Using Dragon Boat Optimization for Software Quality Testing

Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Building No 3963, Al-Kharj 16273, Saudi Arabia
Electronics 2025, 14(8), 1524; https://doi.org/10.3390/electronics14081524
Submission received: 14 February 2025 / Revised: 5 April 2025 / Accepted: 8 April 2025 / Published: 9 April 2025
(This article belongs to the Special Issue Software Analysis, Quality, and Security)

Abstract

Test Case Prioritization (TCP) is critical in software quality testing, aiming to identify high-priority test cases early in the testing process. This study proposes a novel TCP approach using the Dragon Boat Optimization Algorithm (DBOA), inspired by the synchronized teamwork seen in dragon boat racing. The proposed TCP-DBOA model strategically reorders test cases to improve fault detection efficiency while minimizing execution time. By using the Average Percentage of Faults Detected (APFD) as the optimization objective, the model enhances both coverage speed and testing effectiveness. DBOA offers advantages in handling large search spaces, balancing exploration and exploitation, and adapting to complex testing scenarios. The performance of TCP-DBOA is evaluated using four benchmark datasets—GZIP, GREP, TCAS, and CS-TCAS—demonstrating superior APFD values compared to existing methods. Results confirm the model’s robustness in reducing test execution time and improving fault detection early in the test cycle. This approach contributes to faster, more efficient regression testing, especially in continuous integration environments.

1. Introduction

Generally, software engineering is the organized application of engineering processes to software development and programming [1]. Software testing takes a long time to carry out and is the costliest stage of software development. With the adoption of the agile model in most software enterprises, interest in continuous integration environments is growing [2]. The advantages of such environments include incorporating regular software changes and making software evolution quicker and less expensive. As a result, they efficiently handle challenges like test execution, test result reporting, and build procedures [3]. Software testing has been practiced since the field of software engineering arose; it was introduced to assess the quality of software. Testing comprises activities that identify possible faults in software so they can be resolved before the software product is delivered to end-users [4]. Software testing is frequently carried out under time restrictions and with fixed resources. Software engineering teams are commonly forced to cut their testing activities short due to economic and schedule pressures, which causes problems such as reduced software quality and breached client contracts. However, TCP helps improve the feasibility of testing within the software testing activity [5]. TCP's primary goal is to order test cases so that the most valuable ones, according to chosen criteria, run first. It provides the capability to execute highly vital test cases as soon as possible, producing preferred outcomes such as revealing faults earlier and delivering feedback to developers sooner. In addition, it determines an effective ordering of a sequence of test cases, which is then executed accordingly. Software testing and debugging costs exceed 50% of the development cost [6].
The cost of regression testing (RT) depends on the complexity of the application and the size of the test suite. An important task in software testing is to order the test cases for execution so that the most faults in the given test suite are detected as early as possible. To address this task, three common solutions are studied in the literature, namely test suite prioritization (TSP), test suite minimization (TSM), and test suite selection (TSS). TSP reorders the test cases so that more faults are detected within the first few test cases. Machine learning (ML) and artificial intelligence (AI) approaches have demonstrated their suitability in interdisciplinary applications [7]. Finding the best order for the test cases, and the best way to minimize or select them, is an NP-hard problem. Optimization approaches are employed to solve these problems successfully. Nature-inspired algorithms have effectively resolved hard optimization problems in numerous areas; moreover, they can enhance the cost-effectiveness of RT. Nature-inspired methods appeal to researchers because of their simple structure and ease of use, and they are typically constructed by modeling natural behaviors [8]. They are generally categorized into biology-inspired, social phenomena-inspired, and physics/chemistry-inspired algorithms, and they are also applicable to RT. The most frequently employed algorithms are the evolutionary and swarm intelligence-based methods from the biology-inspired family. The growing complexity of modern software systems demands more effective testing methods to ensure reliability and quality [9]. Given the time and cost constraints in software testing, optimizing the process has become significant for improving productivity and mitigating expenses. With the rise of continuous development cycles, there is a growing requirement for techniques that efficiently prioritize test cases and detect faults early. 
This assists in minimizing the overall testing time while ensuring that critical issues are identified promptly. The proposed method aims to address these challenges by introducing a novel approach for TCP, which enhances the overall efficiency and effectiveness of the testing process [10].
This study presents a Test Case Prioritization using the Dragon Boat Optimization Algorithm (TCP-DBOA) model for software quality testing. The chief purpose of the TCP-DBOA model is to minimize the total execution time and maximize the APFD. The DBOA is utilized for TCP. In addition, the TCP-DBOA technique uses the average percentage of test point coverage (APTC) as an optimization objective to represent the coverage velocity of the test points. The TCP-DBOA technique explores a huge search space to find an optimal ordering of test cases. The performance analysis of the TCP-DBOA method is performed, and the results are examined in terms of different measures on diverse datasets.
  • The TCP-DBOA model strategically prioritizes test cases to optimize the testing process, effectively mitigating the total execution time. Focusing on an efficient test case order improves overall testing efficiency. This approach ensures faster fault detection and enhanced resource utilization during testing.
  • The TCP-DBOA approach aims to optimize the APFD, increasing the chances of identifying faults early in the testing cycle. Prioritizing test cases based on APFD improves fault detection efficiency. This approach results in quicker defect identification, improving the overall efficiency of the testing process.
  • The TCP-DBOA methodology implements the DBOA model to improve TCP, allowing the model to navigate large search spaces effectively. By employing DBOA, optimal test case orders are identified, enhancing testing efficiency. This method ensures a more streamlined process, prioritizing test cases that maximize fault detection.
  • The TCP-DBOA technique uniquely utilizes the APFD as an objective function to capture coverage velocity, giving a novel approach to TCP. This method effectively balances fault detection and test case selection (TCS). By optimizing APFD, it ensures a faster and more effective testing process. The novelty is its ability to handle large search spaces while prioritizing fault detection and coverage speed.
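Since APFD is the optimization objective throughout this work, a minimal computation of it may be helpful. The sketch below uses the standard definition, APFD = 1 − (ΣTF_i)/(n·m) + 1/(2n), where TF_i is the position of the first test revealing fault i; the test suite and fault matrix are hypothetical toy data, not taken from the paper's benchmarks.

```python
def apfd(order, fault_matrix):
    """APFD for a given test-case order.

    order: list of test-case indices in execution order.
    fault_matrix: fault_matrix[t][f] is True when test t detects fault f.
    """
    n = len(order)                # number of test cases
    m = len(fault_matrix[0])      # number of faults
    total = 0
    for f in range(m):
        # 1-based position of the first test in the order revealing fault f
        tf = next(pos for pos, t in enumerate(order, start=1)
                  if fault_matrix[t][f])
        total += tf
    return 1.0 - total / (n * m) + 1.0 / (2 * n)

# Hypothetical toy data: 3 tests, 2 faults.
faults = [
    [False, True],   # test 0 detects fault 1
    [True,  False],  # test 1 detects fault 0
    [False, False],  # test 2 detects nothing
]
print(apfd([1, 0, 2], faults))   # ≈ 0.667: both faults found early
```

Orders that reveal faults earlier score closer to 1, which is exactly what the TCP-DBOA objective rewards.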

2. Related Works

In [11], a model termed TestReduce was developed that uses a combination of genetic procedures to discover an enhanced and minimal set of test cases. The vital objective of this research is to offer a method that solves the test-suite minimization problem of RT with respect to related requirements. The 100-dollar prioritization technique was employed to rank the significance of new requirements. Hamza et al. [12] proposed a Modified Harris Hawks Optimization-based TCP (MHHO-TCP) model for software testing. The main intention of the developed MHHO-TCP method is to maximize APFD and minimize the total execution time. Furthermore, the MHHO model is designed to increase the exploitation and exploration capacities of the conventional HHO process. The model was applied to different benchmark programs to confirm its improved efficacy. In [13], an optimization algorithm, namely the Bee Algorithm (BA), was proposed, based on the intelligent foraging behavior of honey bee colonies. The technique, developed to improve the fault detection rate in the least time, is inspired by the behavior of two kinds of worker bees, namely foragers and scout bees. The method was evaluated on two projects, and the prioritization outcome was measured using APFD. Priya and Prasanna [14] developed an effective Multi-objective Test Case Generation and Prioritization using an Improved GA (MTCGP-IGA) technique. A random search-based model for generating and prioritizing multi-objective tests was used. Specifically, the multi-objective optimization includes maximizing the prioritized range of test cases (PR), pairwise coverage of characteristics (PCC), and fault-finding capability (FFC), while decreasing total implementation cost (TIC). A dedicated fitness function was built using cost-effectiveness metrics for the test prioritization problem.
Iqbal and Al-Azzoni [15] developed an enhanced quantum-behaved particle swarm optimization (PSO) approach. The model is augmented with a fix-up mechanism to perform perturbation for the combinatorial TCP problem. Next, a dynamic contraction-expansion coefficient is employed to accelerate convergence. This is followed by an adaptive test case selection strategy to pick the modification-revealing test cases. Lastly, redundant test cases are removed. In [16], the authors developed a hybrid method for change/regression testing via test case prioritization. The recommended model first generates the test cases and then groups them into significant and insignificant clusters by employing a kernel-based fuzzy c-means (FCM) clustering model. Then, the suitable test cases are scored and prioritized by executing the grey wolf optimization (GWO) model. Chandra et al. [17] proposed a nature-inspired smell detection agent (SDA) technique. This method is an optimization algorithm suited to identifying optimal paths with priority. The SDA procedure is based on the diffusion of small particles in a gas and the ability of a sensing agent to track them. The number of linearly independent paths through a program unit is measured by generating a control flow graph (CFG), which yields the cyclomatic complexity. In [18], a test case reduction (TCR) and support-based whale optimization algorithm (SWOA) optimizer for distributed agile software development employing RT, comprising two phases, is proposed. Selection and prioritization are performed once the test cases are retrieved and collected. The test groups are organized and ranked to confirm that the most critical cases are selected first. Furthermore, the SWOA is employed to pick test cases with superior coverage or failure occurrence. 
Table 1 highlights the existing studies on TCP techniques, comparing various approaches in performance metrics such as fault detection, execution time, and efficiency.
Several existing models for optimizing TCP concentrate on improving APFD and mitigating implementation time, but they often face difficulty with scalability for massive datasets. While techniques like BA and MTCGP-IGA aim to improve error detection and coverage, they lack robust mechanisms for adapting to dynamic changes in software environments. Furthermore, approaches such as SDA and SWOA optimizers encounter challenges in generalizing across diverse software applications. Despite improvements, many of these models do not account for the complexities of growing software systems, emphasizing a research gap in developing more flexible, scalable, and adaptive prioritization methods for dynamic testing environments.

3. The Proposed Method

This article proposes the TCP-DBOA methodology for software quality testing. The methodology aims to minimize total execution time and maximize the APFD. In addition, it uses the average percentage of test point coverage (APTC) as an optimization objective to represent the coverage velocity of the test points. Figure 1 illustrates the overall process of the TCP-DBOA approach.

3.1. Design of DBOA

Motivated by the dragon boat (DB) race, the DBOA model examines the conditions of the drummers and paddlers on different DBs and gathers the racing conditions of all of them [19]. Furthermore, the algorithm integrates a social psychology mechanism to refine the whole procedure of a DB race. The DBOA model is chosen over other techniques due to its unique capability to combine individual and collective performance through the analogy of a DB race. By considering the roles of drummers and paddlers, the model efficiently captures the dynamics of cooperation and competition within a group, making it highly appropriate for optimization problems. Integrating a social psychology mechanism improves the capability of the model to adapt and refine the racing process, promoting better synchronization and coordination among elements. This aspect allows DBOA to optimize complex tasks more effectively than conventional techniques that may not account for the components’ social interaction and collaborative behavior. Additionally, the flexibility and adaptability of the model to various scenarios make it a robust contender for TCP, presenting a more holistic and dynamic approach compared to conventional algorithms. Algorithm 1 demonstrates the DBOA model.
Algorithm 1: DBOA Technique
1. Initialization:
  • A population of agents (DBs) is initialized. Every agent has a position and velocity representing a possible solution to the optimization problem.
  • The number of agents depicts the size of the DB team.
  • The optimization problem defines the objective function the algorithm will aim to minimize or maximize.
2. DB Representation:
  • The DBs are illustrated by agents with their positions in the search space.
  • Each agent can move across the search space, and their movement depends on the team’s synchronization.
3. Fitness Evaluation:
  • The fitness function computes how well each DB performs concerning the given objective.
  • The fitness can be based on diverse metrics depending on the problem domain (e.g., APFD, fault detection rate in TCP, etc.).
4. Synchronization:
  • Like a DB team where all paddlers synchronize their movements for optimal performance, the agents update their positions by synchronizing with the best-performing members (leaders).
  • The leaders, usually those with the best fitness values, guide the direction of the boat’s movement.
5. Exploration and Exploitation:
  • Exploration: DBs (agents) explore the search space by slightly adjusting their positions and attempting to discover new regions of the search space.
  • Exploitation: Once a suitable region of the solution space is detected, the agents exploit this by refining their positions to converge towards the optimal solution.
6. Velocity and Position Update:
  • The position and velocity of every agent are updated using a formula based on their current position, the best solution found so far, and the team’s synchronization. The updates are typically:

    New Position = Current Position + Velocity
    New Velocity = Current Velocity + c1 × (Best Position − Current Position) + c2 × (Global Best Position − Current Position)

    where c1 and c2 are constants that balance the exploration and exploitation efforts.
7. Leader Selection:
  • The agent with the best fitness value becomes the leader, directing the entire team towards better solutions. Other agents adjust their movements based on the leader’s position.
8. Termination Criteria:
  • The algorithm continues iterating until a stopping condition is met, such as a predefined number of iterations, or if the solution reaches a desired fitness value (convergence).
9. Final Solution:
  • After the termination, the best solution (usually the leader’s position) is considered the optimal solution to the problem.
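Steps 1–9 above can be sketched for a continuous minimization problem as follows. The update in step 6 mirrors a PSO-style rule, so this is a generic sketch rather than the authors' implementation; the inertia weight `w` is an assumption added here to keep velocities stable, and the objective and bounds are illustrative.

```python
import random

def dboa_sketch(objective, dim, n_agents=10, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` following steps 1-9; `w` is an assumed inertia weight."""
    # Step 1: initialize agent positions and velocities in an assumed range.
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best = [p[:] for p in pos]            # each agent's best position so far
    gbest = min(best, key=objective)[:]   # step 7: leader = best fitness
    for _ in range(iters):                # step 8: fixed iteration budget
        for i in range(n_agents):
            for d in range(dim):          # step 6: velocity/position update
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (best[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Steps 3-5: evaluate fitness, keep improvements (exploitation).
            if objective(pos[i]) < objective(best[i]):
                best[i] = pos[i][:]
                if objective(best[i]) < objective(gbest):
                    gbest = best[i][:]    # a new leader emerges
    return gbest, objective(gbest)        # step 9: leader is the final solution

# Toy objective: the 2-D sphere function, minimized at the origin.
sol, val = dboa_sketch(lambda x: sum(v * v for v in x), dim=2)
```

For TCP, the continuous positions would additionally be decoded into test-case orderings before fitness evaluation, as discussed in Section 3.2.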

3.1.1. Social Behavior Patterns

The DB race is a team sport in which human social forces play a vital part. As a result, during the race, the teams are subject to the effects of social psychology mechanisms such as social incentives and social loafing. Social loafing denotes the tendency of individuals in a group activity to exert less effort than when working alone, often owing to shared responsibility and diluted individual effort. Social incentive refers to the application of external factors to inspire creativity, effectiveness, and enthusiasm in groups or individuals engaged in work or activities, with the primary goal of enhancing overall efficiency.
Considering these social psychology mechanisms, it is expected that when using the DBOA to solve an unconstrained problem, the DB team is inclined to display a social loafing behavior pattern. In the case of constrained problems, however, the constraints help enhance the team’s motivation, which results in social incentive behavior.
So, the social behavior factor is presented to describe the team’s behavior patterns. Its formulation is as follows.
ψ = DBN / Rd,  if Rd < DBN or in an unconstrained situation
ψ = 1,         if Rd > DBN or in a constrained situation
Here, ψ signifies the social behavior factor, DBN represents the number of DBs participating in a DB race, and Rd refers to a random number.
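A minimal sketch of the social behavior factor follows, under one reading of the piecewise definition: ψ = DBN/Rd when Rd < DBN or the problem is unconstrained, and ψ = 1 otherwise. The sampling range assumed for Rd when it is not supplied is an assumption of this sketch, not stated in the paper.

```python
import random

def social_behavior_factor(dbn, constrained, rd=None):
    """Social behavior factor ψ (one hedged reading of the piecewise form).

    dbn: number of dragon boats in the race.
    constrained: whether the optimization problem is constrained.
    rd: random number; drawn from an assumed range when omitted.
    """
    if rd is None:
        rd = random.uniform(1e-6, dbn)   # assumed range, avoids division by zero
    if rd < dbn or not constrained:
        return dbn / rd                   # social loafing regime
    return 1.0                            # social incentive regime

# Example: constrained race of 8 boats with a large random draw.
assert social_behavior_factor(8, constrained=True, rd=16) == 1.0
```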

3.1.2. Acceleration Factor

The rate at which the drum beat changes is under the drummer’s control. During the DB race, the drummer decides the rate at which the drum should be beaten. Calculating this rate takes into consideration several key factors, including the difference in distance covered between their own DB and the competing boats, and the state of the paddlers. The following expression estimates the acceleration factor.
λ = (ψ × I − 1) / (ψ × I)
Here, λ signifies the acceleration factor. I represents the iteration number.

3.1.3. Attenuation Factor

In a perfect scenario, all paddlers would coordinate their paddling with the drumbeat. However, in actual races, it is difficult for every crew member, particularly the paddlers, to escape the problem of strength attenuation while sustaining a high-intensity workout. As a result, the paddlers’ performance worsens as the number of paddle strokes rises. The attenuation factor is introduced to describe this condition, as shown in the following expression.
μ = (1 + ψ × I − l) / (ψ × I)
Here, μ represents the attenuation factor. l designates the number of iterations.

3.1.4. Imbalance Rate of Paddlers

The forward propulsion of a DB comes from the paddlers’ paddling, which produces a reaction force on the paddle from the water, pushing the boat forward. To attain the strongest forward propulsion, the paddler must consider numerous factors before each paddle stroke. These factors include the course, distance, and angle of the stroke, the support created by the stroke, and the degree of drag as the boat moves forward. By following hydrodynamic principles, paddlers can use the water’s properties to produce the strongest driving force.
Well-executed paddling must maintain an optimal point of entry for the paddle to maximize the propulsion of the DB, even in the presence of variations in the water surface. However, when multiple DBs are racing simultaneously, each boat must contend with its own water surface variations as well as those caused by the paddles of nearby boats. This gives rise to an imbalance rate, which makes it more challenging for paddlers to maintain steady paddling conditions. An optimal angle at which the paddle enters the water can significantly diminish this problem and improve overall performance.
The best paddle entry angle is θ_B. Here, sin θ is used to compute the paddle’s depth in the water, while cos θ helps measure the paddle’s water resistance during paddling. The formulation for computing the imbalance rate is as follows.
H = (|cos(θ)| × I) / ψ + H_b
Here, H signifies the imbalance rate, portraying the effect on the paddler of the superposition of the wave from the water surface and the waves produced by the advance of the other DBs. H_b represents the base imbalance rate created by the fundamental wave of the water surface; its value is set to 0.01 in this paper. θ signifies the entry angle of a paddle.

3.1.5. Strategies for Updating Crew State

The DB’s rapid forward movement depends upon the cooperation of the team. The paddlers’ states are represented as a matrix. When updating these states, the reference object is the state of the paddler in the equivalent position on the fastest DB. However, a different update strategy is used for the paddlers on the fastest DB itself. The update strategies for paddler states are therefore divided into two cases: one for the paddlers on the fastest DB and another for those on the other DBs. The state update strategies are formulated as follows:
Paddlers = [ g_1^1      g_2^1      …  g_(k−1)^1      g_k^1
             g_1^2      g_2^2      …  g_(k−1)^2      g_k^2
             ⋮          ⋮              ⋮             ⋮
             g_1^(j−1)  g_2^(j−1)  …  g_(k−1)^(j−1)  g_k^(j−1)
             g_1^j      g_2^j      …  g_(k−1)^j      g_k^j ]
G_f = Paddlers(1, k) = g_k^1,  f = 1
G_e = Paddlers(j, k) = g_k^j,  e ≠ 1
R_f = G_f × λ
R_e = ((c_f + c_e) / 2) × μ × λ
Here, Paddlers signifies the state matrix of all paddlers. G_f refers to the paddler’s state on the fastest DB, and G_e designates the paddler’s state on the other DBs. R_f represents the state update strategy for paddlers on the fastest DB, and R_e indicates the state update strategy for paddlers on the other DBs.

3.1.6. Comparative Analysis of DBOA vs. Other Models

The comparative analysis between DBOA and other optimization algorithms, comprising GA, HHO, PSO, and differential evolution (DE), shows crucial differences in their performance. DBOA stands out in terms of search space diversity, convergence rate, computational complexity, and success rate. DBOA gives a very high search space diversity compared to other techniques, which assists in preventing premature convergence and ensures that the algorithm explores a broader range of solutions. This feature makes DBOA more effective at escaping local optima. In terms of convergence rate, DBOA is more efficient, reaching optimal solutions faster than GA, which tends to converge more slowly, particularly on larger problem sets. This faster convergence rate is a key advantage in real-world applications where time is critical. Furthermore, DBOA offers low computational complexity, making it more resource-efficient than GA, which usually requires large populations and many generations. As a result, DBOA has a clear advantage in computational cost. Finally, DBOA achieves a very high success rate, outperforming GA and HHO in consistently reaching optimal or near-optimal solutions across various test cases.
The integration of social psychology aspects into DBOA, inspired by the dynamics of a DB race, further improves its performance. This teamwork-based approach promotes synchronization and cooperation among agents, improving exploration and exploitation of the solution space. By addressing social behavior patterns such as social loafing and social incentives, DBOA increases the overall team performance, which is substantial for complex optimization tasks. Conventional algorithms, which concentrate primarily on individual optimization, often fall short in handling dynamic and multi-agent problems effectively. Moreover, the capability of the DBOA model to adaptively adjust individual agent efforts based on group dynamics allows for more effective global exploration, mitigating the risk of stagnation in local optima.
In conclusion, DBOA outperforms GA, HHO, PSO, and DE across diverse critical optimization criteria, making it a more effective and adaptable choice for solving complex, dynamic problems. Table 2 illustrates the performance comparison study of the DBOA with existing methods.

3.2. Process Involved in TCP-DBOA Technique

The proposed TCP-DBOA technique aims to minimize the overall execution time and maximize APFD [3]. The TCP-DBOA technique is chosen for its dual focus on reducing overall execution time while maximizing APFD, which ensures both efficiency and effectiveness in TCP. By mitigating execution time, the model improves testing speed, making it suitable for large-scale projects with tight deadlines. Simultaneously, maximizing APFD ensures higher fault detection rates (FDRs) early in the testing process, improving software reliability. The TCP-DBOA model strikes a better balance between speed and fault coverage than other techniques, addressing the common trade-off in optimization tasks. Moreover, its capability to handle complex test scenarios and adapt to diverse testing environments gives it a distinct advantage over more conventional models that may prioritize one factor over the other. This makes the TCP-DBOA technique ideal for achieving optimal testing performance in dynamic conditions. Additionally, its flexibility in balancing exploration and exploitation confirms that it can effectively address both simple and highly complex optimization tasks without compromising efficiency.
This study presents a specific example to illustrate the TCP problem. Assume that there are five test cases and nine components to be covered. If T1 and T3 are selected first, all the components are covered as quickly as possible. Compared to the original sequence T1, T2, T3, T4, T5, the execution sequence T1, T3, T2, T5, T4 is more effective, and the tester can address problems in the program more quickly. TCP is regularly employed in software testing and dramatically increases the efficiency of RT. Formally, the TCP problem is defined via a mapping f from PT to the real numbers, where T denotes the test suite and PT is the set of all possible prioritized orderings of the test cases in T. The prioritization problem is to find T′ ∈ PT such that, for all T″ ∈ PT with T″ ≠ T′, f(T′) ≥ f(T″).
Here, f quantifies an objective that measures the performance of a prioritization; the better the effect, the larger f will be. In a real application, the tester sets various test objectives, e.g., the coverage velocity of test points and the fault detection rate. The swarm intelligence (SI) method must determine TCP’s fitness function, i.e., the f values. Based on the requirement, the problem is classified into single- and multi-objective optimization.
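The formal definition above can be made concrete with a brute-force search over PT. The coverage data and the `early_coverage` objective below are hypothetical stand-ins for f; exhaustive enumeration is feasible only for tiny suites, and the factorial growth of PT is precisely the NP-hardness that motivates using DBOA instead.

```python
from itertools import permutations

def best_order(tests, f):
    """Exhaustively find the order T' in PT maximizing f.

    Feasible only for tiny suites: |PT| = n! for n test cases.
    """
    return max(permutations(tests), key=f)

# Hypothetical objective: reward orders that gain new coverage early.
coverage = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3, 4}}

def early_coverage(order):
    seen, score = set(), 0.0
    for rank, t in enumerate(order, start=1):
        new = coverage[t] - seen
        score += len(new) / rank      # earlier new coverage weighs more
        seen |= coverage[t]
    return score

print(best_order(list(coverage), early_coverage))  # ('t3', 't1', 't2')
```

Running the broad-coverage test t3 first maximizes this f, matching the intuition of the five-test example above.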
The effective execution time (EET) of a test case sequence is the time spent by the test cases up to the point where maximal statement coverage is obtained for the first time. Moreover, some optimization objectives based on code coverage were introduced for single-objective optimization. For black-box testing, faults are unlikely in components that have been covered for a significant amount of time. Thus, the study chooses the APTC as an optimization objective to represent the coverage speed of the test points. The EET and APTC are mathematically expressed as follows:
EET = ∑_{i=1}^{N′} ET_i
APTC = 1 − (TT_1 + TT_2 + … + TT_M) / (M × N) + 1 / (2N)
where N signifies the number of test cases, ET_i indicates the time used to execute the i-th test case, M denotes the number of program statements, N′ refers to the number of test cases executed by the time maximal statement coverage is obtained for the first time, and TT_i indicates the position in the execution sequence at which test point i is covered for the first time.
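A minimal sketch of the two metrics follows, assuming the APTC form given above (TT_i read as the 1-based position of the first test covering point i) and, for EET, summing execution times until maximal coverage is first reached. All function names, coverage sets, and timing data are hypothetical illustrations.

```python
def aptc(order, cover):
    """APTC = 1 - (sum of TT_i)/(M*N) + 1/(2N).

    order: test names in execution order; cover[t]: statements test t covers.
    """
    n = len(order)
    stmts = set().union(*cover.values())
    m = len(stmts)
    total = 0
    for s in stmts:
        # TT_s: 1-based position of the first test covering statement s
        total += next(pos for pos, t in enumerate(order, start=1)
                      if s in cover[t])
    return 1.0 - total / (m * n) + 1.0 / (2 * n)

def eet(order, cover, times):
    """Time spent until the order first reaches its maximal statement coverage."""
    target = set().union(*(cover[t] for t in order))
    seen, spent = set(), 0.0
    for t in order:
        spent += times[t]
        seen |= cover[t]
        if seen == target:
            return spent
    return spent

cover = {"t1": {1, 2}, "t2": {3}, "t3": {1, 2, 3}}
times = {"t1": 2.0, "t2": 1.0, "t3": 3.0}
print(aptc(["t3", "t1", "t2"], cover), eet(["t3", "t1", "t2"], cover, times))
```

Orders that cover all statements sooner yield a higher APTC and a smaller EET, which is the trade-off the TCP-DBOA fitness balances.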

4. Result Analysis and Discussion

This section investigates the performance analysis of the TCP-DBOA method. The proposed technique is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, 250 GB SSD, GeForce GTX 1050 Ti 4 GB GPU, 16 GB RAM, and 1 TB HDD. The parameter settings are as follows: learning rate 0.01, ReLU activation, 50 epochs, dropout 0.5, and batch size 5.
In Table 3 and Figure 2, the experimental outcomes of the TCP-DBOA method in terms of APFD under the GZIP dataset are given [12]. The results state that the TCP-DBOA technique reaches enhanced values of APFD. With five iterations, the TCP-DBOA technique gains an increased APFD of 97.17%. At the same time, the MHHO-TCP, Fault Analysis (FA), Percentage of Faults Detected (PSD), Location-Based Services (LBS), and greedy approaches obtained decreased APFD of 95.51%, 95.17%, 94.28%, 94.74%, and 92.38%, respectively. Additionally, based on 20 iterations, the TCP-DBOA technique achieves a boosted APFD of 97.13%, but the MHHO-TCP, FA, PSD, LBS, and greedy methods get reduced APFD of 95.59%, 95.01%, 94.56%, 95.13%, and 94.39%, respectively. Meanwhile, with 30 iterations, the TCP-DBOA technique gains an improved APFD of 97.24%, although the MHHO-TCP, FA, PSD, LBS, and greedy techniques acquire diminished APFD of 95.56%, 94.98%, 94.60%, 94.79%, and 93.09%, respectively.
Table 4 and Figure 3 describe the experimental outcomes of the TCP-DBOA method under the GREP dataset with respect to APFD. These acquired outcomes indicated that the TCP-DBOA method obtains improved values of APFD. According to the five iterations, the TCP-DBOA method attains a raised APFD of 97.41%, whereas the MHHO-TCP, FA, PSD, LBS, and greedy methods obtain a diminished APFD of 95.89%, 95.64%, 94.08%, 95.21%, and 93.46%, respectively. In addition, based on 20 iterations, the TCP-DBOA technique provides a higher APFD of 97.33%; however, the MHHO-TCP, FA, PSD, LBS, and greedy methods obtain a lessened APFD of 95.67%, 95.48%, 93.23%, 93.89%, and 92.58%. Likewise, based on 30 iterations, the TCP-DBOA technique achieves a higher APFD of 97.36%, but the MHHO-TCP, FA, PSD, LBS, and greedy methods acquire a reduced APFD of 95.72%, 95.58%, 94.27%, 94.78%, and 92.46%, respectively.
Table 5 and Figure 4 describe the experimental analysis of the TCP-DBOA technique under the TCAS dataset with respect to APFD. These accomplished findings indicate that the TCP-DBOA technique obtains superior values of APFD. According to 5 iterations, the TCP-DBOA technique provides boosted APFD of 95.67%, although the MHHO-TCP, FA, PSD, LBS, and greedy techniques attain reduced APFD of 94.05%, 94.02%, 94.52%, 92.63%, and 89.81%, respectively. Moreover, based on 20 iterations, the TCP-DBOA method achieves an improved APFD of 95.67%, but the MHHO-TCP, FA, PSD, LBS, and greedy techniques acquire minimized APFD of 94.10%, 94.10%, 92.75%, 94.12%, and 91.62%, respectively. Also, with 30 iterations, the TCP-DBOA method achieves an improved APFD of 95.68%. However, the MHHO-TCP, FA, PSD, LBS, and greedy techniques obtain a reduced APFD of 94.05%, 94.09%, 93.79%, 90.58%, and 92.87%.
A wide-ranging experimental analysis of the TCP-DBOA technique in terms of APFD on the CS-TCAS dataset is given in Table 6 and Figure 5. The outcomes show that the TCP-DBOA technique obtains increased APFD values. With 5 iterations, the TCP-DBOA technique acquires an enhanced APFD of 95.97%, whereas the MHHO-TCP, FA, PSD, LBS, and greedy approaches obtain diminished APFD values of 94.4%, 94.4%, 94.75%, 91.47%, and 91.18%, respectively. With 20 iterations, the TCP-DBOA method provides a raised APFD of 94.43%, while the MHHO-TCP, FA, PSD, LBS, and greedy methodologies obtain APFD values of 92.73%, 92.74%, 92.96%, 92.6%, and 93.93%, respectively. With 30 iterations, the TCP-DBOA method achieves a higher APFD of 95.56%, whereas the MHHO-TCP, FA, PSD, LBS, and greedy methodologies obtain lessened APFD values of 93.99%, 93.99%, 93.49%, 90.69%, and 92.89%, respectively.
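The greedy baseline appearing in all four comparisons is commonly implemented as "additional fault coverage" ordering: repeatedly pick the test that detects the most faults not yet covered by earlier picks. A minimal sketch, assuming the same boolean test-by-fault matrix layout as before (an assumption, not the paper's implementation):

```python
def greedy_prioritize(fault_matrix):
    """Additional-fault-coverage greedy ordering.

    At each step, select the test detecting the most not-yet-covered faults;
    ties go to the lowest test index.
    """
    # Precompute, for each test, the set of faults it detects.
    faults = [{f for f, hit in enumerate(row) if hit} for row in fault_matrix]
    covered = set()
    remaining = list(range(len(fault_matrix)))
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(faults[t] - covered))
        order.append(best)
        covered |= faults[best]
        remaining.remove(best)
    return order


# Test 2 covers all three faults, so it is scheduled first.
M = [[1, 0, 0], [0, 1, 1], [1, 1, 1]]
print(greedy_prioritize(M))  # [2, 0, 1]
```

Such greedy orderings are fast but myopic, which is consistent with their lower APFD values in Tables 3–6 relative to the search-based methods.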
Table 7 and Figure 6 compare the average execution time (ATE) of the TCP-DBOA technique with recent models. The results show that the PSD model performs worst, with the maximum ATE values, whereas the FA and greedy approaches show slightly reduced ATE values. The MHHO-TCP and LBS techniques obtain reasonable ATE values. The TCP-DBOA technique performs best, with minimal ATE values of 1.50 min, 1.95 min, 4.69 min, and 7.53 min on the GZIP, GREP, TCAS, and CS-TCAS datasets, respectively.
Table 8 and Figure 7 compare the mean APFD of the TCP-DBOA technique with recent models. The results show that the TCP-DBOA technique gains enhanced mean APFD values. On the GZIP dataset, the TCP-DBOA technique reports an increased mean APFD of 96.92%, while the MHHO-TCP, FA, PSD, LBS, and greedy models obtain decreased mean APFD values of 95.56%, 95.16%, 94.05%, 94.57%, and 93.22%, respectively. On the GREP dataset, the TCP-DBOA method reports an improved mean APFD of 96.90%, whereas the MHHO-TCP, FA, PSD, LBS, and greedy techniques achieve lessened mean APFD values of 95.72%, 95.32%, 94.16%, 94.76%, and 93.33%, respectively. On the TCAS dataset, the TCP-DBOA method yields a boosted mean APFD of 94.80%, while the MHHO-TCP, FA, PSD, LBS, and greedy models obtain reduced mean APFD values of 93.65%, 93.12%, 92.40%, 92.07%, and 90.60%, respectively. Finally, on the CS-TCAS dataset, the TCP-DBOA method exhibits an improved mean APFD of 94.80%, whereas the MHHO-TCP, FA, PSD, LBS, and greedy models obtain diminished mean APFD values of 93.57%, 93.13%, 92.74%, 92.07%, and 91.74%, respectively.
Table 9 and Figure 8 present the computational time (CT) analysis of the TCP-DBOA approach against existing models. The TCP-DBOA approach demonstrates the most efficient performance across all datasets, with the lowest CT of 7.95 s for GZIP, 6.34 s for GREP, 8.23 s for TCAS, and 6.40 s for CS-TCAS. In comparison, the MHHO-TCP method shows significantly higher CT, ranging from 10.97 s for GZIP to 22.83 s for TCAS. The FA, PSD, LBS, and greedy methods also exhibit higher CTs, with the PSD technique showing particularly long CTs of 23.86 s on GZIP and 17.85 s on GREP. Overall, the TCP-DBOA method provides superior efficiency, completing all tasks faster than the other techniques on all four datasets.
These results show that the TCP-DBOA technique performs better than recent approaches.
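To make the optimization loop behind these results concrete, the sketch below maximizes APFD over test orderings with a simple swap-based random search. This is an illustrative stand-in under assumed data structures only; it does not reproduce the dragon-boat-specific operators of the actual DBOA described in the paper.

```python
import random


def apfd(order, fault_matrix):
    # APFD = 1 - sum(TF_i)/(n*m) + 1/(2n), where TF_i is the 1-based rank of
    # the first test in `order` revealing fault i. Assumes every fault is
    # detected by at least one test.
    n, m = len(order), len(fault_matrix[0])
    total = 0
    for f in range(m):
        total += next(r for r, t in enumerate(order, 1) if fault_matrix[t][f])
    return 1 - total / (n * m) + 1 / (2 * n)


def prioritize(fault_matrix, iterations=300, seed=0):
    """Swap-based random search over permutations, keeping improvements."""
    rng = random.Random(seed)
    order = list(range(len(fault_matrix)))
    rng.shuffle(order)                          # random initial ordering
    score = apfd(order, fault_matrix)
    for _ in range(iterations):
        cand = order[:]
        i, j = rng.sample(range(len(cand)), 2)  # propose one pairwise swap
        cand[i], cand[j] = cand[j], cand[i]
        s = apfd(cand, fault_matrix)
        if s > score:                           # greedy acceptance
            order, score = cand, s
    return order, score


# Test 3 detects both faults; a good search pushes it toward the front.
M = [[1, 0], [0, 1], [0, 0], [1, 1]]
print(prioritize(M))
```

Population-based metaheuristics such as DBOA follow the same evaluate-and-improve structure but replace the single swap neighborhood with richer, cooperative move operators, which is what gives them an edge on large search spaces.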

5. Conclusions

In this article, the TCP-DBOA method for software quality testing is proposed. The main purpose of the TCP-DBOA method is to minimize the total execution time and maximize the APFD. To this end, the TCP-DBOA technique adopts APFD as the optimization objective, representing how quickly the test order covers the faults. The TCP-DBOA technique searches a vast space of candidate orderings to find an optimal organization of test cases. The performance analysis of the TCP-DBOA approach is carried out, and the results are investigated using different measures. The experimental values highlight that the TCP-DBOA approach attains better performance than recent approaches. The TCP-DBOA approach's limitations include its focus on a specific set of benchmarks, which may not fully represent the diversity of real-world testing environments. Moreover, the scalability of the approach to massive datasets with more complex test cases remains unaddressed. The model also does not incorporate dynamic changes in software or real-time adaptability, which limits its application in agile or continuously evolving development processes. Furthermore, while the study emphasizes fault detection, it does not explore the impact of environmental factors, such as hardware discrepancies or network conditions, on TCP. Future work may improve the model's scalability, integrate real-time adaptability, and extend its applicability to a wider range of testing scenarios. Further research into hybrid models that combine diverse optimization strategies could enhance performance across various software environments.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2025/R/1446).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Nazir, M.; Mehmood, A.; Aslam, W.; Park, Y.; Choi, G.S.; Ashraf, I. A Multi-Goal Particle Swarm Optimizer for Test Case Prioritization. IEEE Access 2023, 11, 90683–90697. [Google Scholar] [CrossRef]
  2. Silega, N.; Aguilar, G.F.; Alcívar, I.A.; Colombo, K.M. Applying Neutrosophic Iadov Technique for assessing an MDD-based approach to support software design. Int. J. Neutrosophic Sci. (IJNS) 2022, 19, 80–86. [Google Scholar] [CrossRef]
  3. Yang, B.; Li, H.; Xing, Y.; Zeng, F.; Qian, C.; Shen, Y.; Wang, J. Directed Search Based on Improved Whale Optimization Algorithm for Test Case Prioritization. Int. J. Comput. Commun. Control 2023, 18, 5049. [Google Scholar] [CrossRef]
  4. Rao, K.K.; Rao, M.B.; Kavitha, C.; Kumari, G.L.; Surekha, Y. Prioritization of Test Cases in Software Testing Using M2 H2 Optimization. Int. J. Mod. Educ. Comput. Sci. 2022, 14, 56. [Google Scholar]
  5. Li, X.; Yang, Q.; Hong, M.; Pan, C.; Liu, R. Test case prioritization approach based on historical data and multi-objective optimization. J. Comput. Appl. 2023, 43, 221. [Google Scholar]
  6. Gupta, P.K. K-Step Crossover Method based on Genetic Algorithm for Test Suite Prioritization in Regression Testing. J. Univers. Comput. Sci. 2021, 27, 170–189. [Google Scholar] [CrossRef]
  7. Juneja, K. Design of a Novel Weighted-Multicriteria Analysis Model for Effective Test Case Prioritization for Network and Robotic Projects. Wirel. Pers. Commun. 2022, 123, 2505–2532. [Google Scholar] [CrossRef]
  8. Singhal, S.; Jatana, N.; Subahi, A.F.; Gupta, C.; Khalaf, O.I.; Alotaibi, Y. Fault Coverage-Based Test Case Prioritization and Selection Using African Buffalo Optimization. Comput. Mater. Contin. 2023, 74, 6755–6774. [Google Scholar] [CrossRef]
  9. Raamesh, L.; Jothi, S.; Radhika, S. Test case minimization and prioritization for regression testing using SBLA-based adaboost convolutional neural network. J. Supercomput. 2022, 78, 18379–18403. [Google Scholar] [CrossRef]
  10. Rajagopal, M.; Sivasakthivel, R.; Loganathan, K.; Sarris, L.E. An Automated Path-Focused Test Case Generation with Dynamic Parameterization Using Adaptive Genetic Algorithm (AGA) for Structural Program Testing. Information 2023, 14, 166. [Google Scholar] [CrossRef]
  11. Sheikh, R.; Babar, M.I.; Butt, R.; Abdelmaboud, A.; Eisa, T.A.E. An Optimized Test Case Minimization Technique Using Genetic Algorithm for Regression Testing. Comput. Mater. Contin. 2023, 74, 6789–6806. [Google Scholar] [CrossRef]
  12. Hamza, M.A.; Abdelmaboud, A.; Larabi-Marie-Sainte, S.; Alshahrani, H.M.; Al Duhayyim, M.; Ibrahim, H.A.; Rizwanullah, M.; Yaseen, I. Modified Harris hawks optimization based Test Case Prioritization for software testing. Comput. Mater. Contin. 2022, 72, 1951–1965. [Google Scholar]
  13. Nayak, S.; Kumar, C.; Tripathi, S.; Mohanty, N.; Baral, V. Regression test optimization and prioritization using Honey Bee optimization algorithm with fuzzy rule base. Soft Comput. 2021, 25, 9925–9942. [Google Scholar] [CrossRef]
  14. Priya, T.; Prasanna, M. Component-Based Test Case Generation and Prioritization Using an Improved Genetic Algorithm. Int. J. Coop. Inf. Syst. 2023, 34, 2350017. [Google Scholar] [CrossRef]
  15. Iqbal, S.; Al-Azzoni, I. Test Case Prioritization for model transformations. J. King Saud Univ. -Comput. Inf. Sci. 2022, 34, 6324–6338. [Google Scholar] [CrossRef]
  16. Pathik, B.; Pathik, N.; Sharma, M. Test Case Prioritization for changed code using nature inspired optimizer. J. Intell. Fuzzy Syst. 2023, 44, 5711–5718. [Google Scholar] [CrossRef]
  17. Chandra, S.V.; Sankar, S.S.; Anand, H.S. Smell Detection Agent Optimization Approach to Path Generation in Automated Software Testing. J. Electron. Test. 2022, 38, 623–636. [Google Scholar] [CrossRef]
  18. Singh, M.; Chauhan, N.; Popli, R. Test Case Reduction and SWOA Optimization for Distributed Agile Software Development Using Regression Testing. Multimed. Tools Appl. 2023, 84, 7065–7090. [Google Scholar] [CrossRef]
  19. Li, X.; Lan, L.; Lahza, H.; Yang, S.; Wang, S.; Yang, W.; Liu, H.; Zhang, Y. A Novel Human-Based Meta-Heuristic Algorithm: Dragon Boat Optimization. arXiv 2023, arXiv:2311.15539. [Google Scholar]
Figure 1. Overall procedure of TCP-DBOA approach.
Figure 2. APFD outcome of TCP-DBOA technique on GZIP dataset.
Figure 3. APFD analysis of TCP-DBOA model on GREP dataset.
Figure 4. APFD analysis of TCP-DBOA technique under TCAS dataset.
Figure 5. APFD analysis of TCP-DBOA technique with CS-TCAS dataset.
Figure 6. ATE analysis of TCP-DBOA technique under four datasets.
Figure 7. Mean APFD analysis of TCP-DBOA technique under four datasets.
Figure 8. CT evaluation of TCP-DBOA technique with existing models under four datasets.
Table 1. Existing studies on TCP using optimization algorithms for software quality testing.

| Ref. | Objective | Methods | Dataset | Measures |
|---|---|---|---|---|
| Sheikh et al. [11] | To propose the TestReduce technique for minimizing and prioritizing RT cases. | GA | Web application requirements | Test case minimization; prioritization using the 100-Dollar approach; quality criteria conformance evaluation |
| Hamza et al. [12] | To propose the MHHO-TCP technique for maximizing APFD and minimizing execution time in software testing. | MHHO-based TCP technique | GZIP, GREP, TCAS, and CS-TCAS | APFD, ET, FDR |
| Nayak et al. [13] | To propose a BA-based technique for enhancing fault detection. | BA with fuzzy rule base; scout and forager bee behavior | Standard dataset | APFD, FDR, TCP performance |
| Priya and Prasanna [14] | To propose an efficient MTCGP-IGA for component-based software development. | Improved GA; Nondominated Sorting GA-II | Component-based software development test scenarios | TCP, PCC, fault-finding capability (FFC), TIC |
| Iqbal and Al-Azzoni [15] | To propose a test prioritization approach for RT of model transformations using rule coverage information. | Rule coverage-based TCP; empirical study and tool implementation | Model transformation test cases | FDR, TCP efficiency, test case orderings |
| Pathik, Pathik, and Sharma [16] | To propose a hybrid technique for RT through TCP using clustering and optimization. | Kernel-based FCM clustering; GWO for prioritization | RT cases for software modifications | FDR, TCP efficiency |
| Chandra, Sankar, and Anand [17] | To propose an SDA approach for selecting and prioritizing paths in software testing. | SDA, CFG, cyclomatic complexity | Ten benchmarked applications | Path coverage increase, time complexity reduction |
| Singh, Chauhan, and Popli [18] | To propose TCR and SWOA for RT in distributed agile software development. | TCP and selection; SWOA; clustering and sorting of test cases | Distributed agile software projects | TCS performance, coverage and failure rate |
Table 2. Performance comparison of DBOA with GA, HHO, and other approaches.

| Algorithm | Search Space Diversity | Convergence Rate | Computational Complexity | Success Rate | Computational Cost |
|---|---|---|---|---|---|
| GA | Moderate | Slow | High | Medium | High |
| HHO | Low | Moderate | Moderate | High | Moderate |
| PSO | High | Fast | Moderate | High | Moderate |
| DE | Moderate | Moderate | Moderate | High | Moderate |
| DBOA | Very High | Fast | Low | Very High | Low |
Table 3. APFD (%) of TCP-DBOA technique with various iterations on the GZIP dataset.

| Iterations | TCP-DBOA | MHHO-TCP | FA | PSD | LBS | Greedy |
|---|---|---|---|---|---|---|
| 1 | 96.88 | 95.36 | 95.15 | 94.05 | 94.05 | 92.39 |
| 2 | 96.59 | 95.21 | 94.80 | 94.19 | 94.82 | 92.49 |
| 3 | 96.91 | 95.31 | 94.80 | 94.03 | 93.87 | 93.22 |
| 4 | 97.18 | 95.61 | 95.37 | 93.62 | 95.39 | 93.10 |
| 5 | 97.17 | 95.51 | 95.17 | 94.28 | 94.74 | 92.38 |
| 6 | 97.20 | 95.56 | 95.36 | 93.06 | 93.78 | 92.59 |
| 7 | 96.66 | 95.33 | 95.15 | 94.98 | 95.06 | 93.56 |
| 8 | 96.49 | 95.17 | 94.82 | 94.35 | 94.52 | 93.38 |
| 9 | 97.23 | 95.59 | 95.37 | 94.14 | 94.62 | 92.37 |
| 10 | 97.02 | 95.56 | 95.33 | 94.91 | 95.30 | 93.29 |
| 11 | 96.69 | 95.48 | 94.55 | 93.89 | 93.81 | 93.57 |
| 12 | 96.97 | 95.64 | 95.51 | 93.83 | 95.06 | 93.38 |
| 13 | 96.94 | 95.37 | 94.83 | 93.41 | 93.76 | 94.24 |
| 14 | 96.76 | 95.44 | 95.00 | 94.55 | 94.79 | 93.12 |
| 15 | 96.79 | 95.56 | 95.20 | 93.15 | 94.59 | 93.56 |
| 16 | 97.16 | 95.58 | 95.28 | 93.72 | 93.83 | 92.29 |
| 17 | 97.07 | 95.57 | 95.05 | 94.03 | 94.80 | 93.22 |
| 18 | 97.24 | 95.86 | 95.72 | 94.78 | 94.43 | 92.56 |
| 19 | 97.29 | 95.85 | 95.71 | 94.29 | 94.93 | 93.53 |
| 20 | 97.13 | 95.59 | 95.01 | 94.56 | 95.13 | 94.39 |
| 21 | 97.02 | 95.69 | 94.78 | 94.06 | 93.93 | 93.28 |
| 22 | 96.85 | 95.59 | 95.40 | 93.64 | 95.42 | 93.11 |
| 23 | 96.85 | 95.48 | 95.21 | 94.36 | 94.77 | 92.41 |
| 24 | 96.97 | 95.57 | 95.27 | 93.12 | 93.81 | 92.56 |
| 25 | 97.19 | 95.49 | 95.08 | 95.01 | 95.02 | 93.52 |
| 26 | 97.10 | 95.65 | 95.52 | 93.83 | 95.09 | 93.34 |
| 27 | 97.27 | 95.57 | 94.80 | 93.40 | 93.78 | 94.23 |
| 28 | 97.22 | 95.74 | 95.52 | 93.76 | 95.11 | 93.39 |
| 29 | 97.21 | 95.55 | 94.82 | 93.34 | 93.74 | 94.26 |
| 30 | 97.24 | 95.56 | 94.98 | 94.60 | 94.79 | 93.09 |
Table 4. APFD (%) of TCP-DBOA technique with diverse iterations on the GREP dataset.

| Iterations | TCP-DBOA | MHHO-TCP | FA | PSD | LBS | Greedy |
|---|---|---|---|---|---|---|
| 1 | 96.93 | 95.63 | 95.19 | 94.73 | 95.31 | 94.59 |
| 2 | 97.32 | 96.02 | 95.90 | 95.01 | 94.57 | 92.60 |
| 3 | 97.40 | 95.88 | 95.47 | 93.93 | 94.00 | 92.44 |
| 4 | 97.22 | 95.66 | 95.13 | 94.64 | 94.97 | 93.19 |
| 5 | 97.41 | 95.89 | 95.64 | 94.08 | 95.21 | 93.46 |
| 6 | 97.12 | 95.78 | 95.42 | 95.05 | 95.41 | 93.44 |
| 7 | 97.05 | 95.54 | 94.88 | 93.26 | 94.72 | 93.45 |
| 8 | 97.20 | 95.70 | 95.44 | 94.51 | 93.93 | 92.68 |
| 9 | 97.31 | 95.78 | 95.58 | 93.74 | 95.62 | 93.18 |
| 10 | 96.95 | 95.64 | 94.96 | 94.25 | 94.98 | 92.67 |
| 11 | 97.26 | 95.95 | 95.82 | 94.38 | 95.12 | 93.66 |
| 12 | 97.11 | 95.64 | 95.22 | 94.17 | 94.95 | 93.38 |
| 13 | 97.04 | 95.61 | 95.32 | 93.25 | 94.78 | 93.63 |
| 14 | 97.00 | 95.41 | 94.93 | 93.53 | 93.94 | 94.33 |
| 15 | 96.92 | 95.26 | 94.70 | 94.04 | 93.93 | 93.68 |
| 16 | 97.28 | 95.70 | 95.51 | 94.20 | 94.74 | 92.44 |
| 17 | 97.11 | 95.78 | 95.20 | 95.11 | 95.16 | 93.65 |
| 18 | 97.28 | 95.62 | 95.33 | 94.47 | 94.90 | 92.48 |
| 19 | 97.16 | 95.54 | 94.94 | 94.17 | 94.01 | 93.30 |
| 20 | 97.33 | 95.67 | 95.48 | 93.23 | 93.89 | 92.58 |
| 21 | 97.24 | 95.77 | 95.55 | 93.74 | 95.54 | 93.21 |
| 22 | 96.91 | 95.53 | 94.91 | 94.29 | 94.98 | 92.65 |
| 23 | 97.40 | 95.96 | 95.86 | 94.38 | 95.13 | 93.66 |
| 24 | 97.38 | 95.81 | 95.19 | 94.19 | 94.96 | 93.36 |
| 25 | 97.27 | 95.73 | 95.15 | 94.69 | 94.96 | 93.17 |
| 26 | 97.24 | 95.83 | 95.64 | 94.02 | 95.21 | 93.49 |
| 27 | 97.15 | 95.70 | 95.31 | 93.25 | 94.76 | 93.66 |
| 28 | 97.13 | 95.51 | 94.92 | 93.51 | 93.87 | 94.38 |
| 29 | 97.05 | 95.58 | 94.68 | 94.07 | 94.04 | 93.71 |
| 30 | 97.36 | 95.72 | 95.58 | 94.27 | 94.78 | 92.46 |
Table 5. APFD (%) of TCP-DBOA technique with various iterations on the TCAS dataset.

| Iterations | TCP-DBOA | MHHO-TCP | FA | PSD | LBS | Greedy |
|---|---|---|---|---|---|---|
| 1 | 96.35 | 94.70 | 94.66 | 92.43 | 93.21 | 91.19 |
| 2 | 95.37 | 93.77 | 93.80 | 93.03 | 92.42 | 89.92 |
| 3 | 96.16 | 94.37 | 94.35 | 93.32 | 93.11 | 91.70 |
| 4 | 96.69 | 94.95 | 94.94 | 92.81 | 91.63 | 91.14 |
| 5 | 95.67 | 94.05 | 94.02 | 94.52 | 92.63 | 89.81 |
| 6 | 95.32 | 93.79 | 93.80 | 94.76 | 90.89 | 89.77 |
| 7 | 94.35 | 92.79 | 92.81 | 94.64 | 94.18 | 90.17 |
| 8 | 96.40 | 94.76 | 94.80 | 91.55 | 91.34 | 90.15 |
| 9 | 95.14 | 93.58 | 93.61 | 93.93 | 93.23 | 88.52 |
| 10 | 94.97 | 93.18 | 93.22 | 92.19 | 93.29 | 91.54 |
| 11 | 96.10 | 94.42 | 94.41 | 93.02 | 93.33 | 89.84 |
| 12 | 96.97 | 95.17 | 95.21 | 93.38 | 91.38 | 89.55 |
| 13 | 93.87 | 92.28 | 92.30 | 91.41 | 91.02 | 92.46 |
| 14 | 95.51 | 93.97 | 93.96 | 93.35 | 91.49 | 89.76 |
| 15 | 94.44 | 92.87 | 92.87 | 93.28 | 89.87 | 91.50 |
| 16 | 96.79 | 95.24 | 95.26 | 93.57 | 91.69 | 90.54 |
| 17 | 96.23 | 94.46 | 94.48 | 94.10 | 93.60 | 90.66 |
| 18 | 95.03 | 93.48 | 93.45 | 91.55 | 92.68 | 89.42 |
| 19 | 96.07 | 94.44 | 94.41 | 93.78 | 93.39 | 92.54 |
| 20 | 95.67 | 94.10 | 94.10 | 92.75 | 94.12 | 91.62 |
| 21 | 95.13 | 93.34 | 93.34 | 94.22 | 92.47 | 90.59 |
| 22 | 94.89 | 93.14 | 93.15 | 92.83 | 91.08 | 90.16 |
| 23 | 94.90 | 93.17 | 93.19 | 94.31 | 92.24 | 91.55 |
| 24 | 95.51 | 93.99 | 93.99 | 92.83 | 91.63 | 90.02 |
| 25 | 94.74 | 92.98 | 93.02 | 91.62 | 92.56 | 90.64 |
| 26 | 96.84 | 95.25 | 95.25 | 94.49 | 92.45 | 91.59 |
| 27 | 94.80 | 93.02 | 93.00 | 94.48 | 93.71 | 89.48 |
| 28 | 96.89 | 95.23 | 95.24 | 92.13 | 92.99 | 90.29 |
| 29 | 95.57 | 93.79 | 93.82 | 93.36 | 93.82 | 89.62 |
| 30 | 95.68 | 94.05 | 94.09 | 93.79 | 90.58 | 92.87 |
Table 6. APFD (%) of TCP-DBOA technique with varying iterations on the CS-TCAS dataset.

| Iterations | TCP-DBOA | MHHO-TCP | FA | PSD | LBS | Greedy |
|---|---|---|---|---|---|---|
| 1 | 96.51 | 94.79 | 94.77 | 92.27 | 93.33 | 93.03 |
| 2 | 95.83 | 94.3 | 94.29 | 92.17 | 93.98 | 92.26 |
| 3 | 94.88 | 93.22 | 93.19 | 94.27 | 93.25 | 93.42 |
| 4 | 95.72 | 94.06 | 94.03 | 94.74 | 91.75 | 90.08 |
| 5 | 95.97 | 94.4 | 94.4 | 94.75 | 91.47 | 91.18 |
| 6 | 95.46 | 93.72 | 93.73 | 92.15 | 94.33 | 90.64 |
| 7 | 94.98 | 93.18 | 93.16 | 92.16 | 93.52 | 92.64 |
| 8 | 94.43 | 92.85 | 92.86 | 92.38 | 93.06 | 92.03 |
| 9 | 94.99 | 93.33 | 93.34 | 94.39 | 92.09 | 90.82 |
| 10 | 95.14 | 93.57 | 93.58 | 92.09 | 92.23 | 92.69 |
| 11 | 95.02 | 93.26 | 93.25 | 92.06 | 91.14 | 91.13 |
| 12 | 94.62 | 93.04 | 93.01 | 91.97 | 93.61 | 92.92 |
| 13 | 94.89 | 92.23 | 92.22 | 93.83 | 91.93 | 90.72 |
| 14 | 96.74 | 95.07 | 95.07 | 92.41 | 94.24 | 90.29 |
| 15 | 96.08 | 94.47 | 94.46 | 92.94 | 91.98 | 89.68 |
| 16 | 96.44 | 94.73 | 94.76 | 93.93 | 91.38 | 92.48 |
| 17 | 94.86 | 93.16 | 93.15 | 92.69 | 91.89 | 90.9 |
| 18 | 96.04 | 94.42 | 94.41 | 92.69 | 92.77 | 91.74 |
| 19 | 96.52 | 94.76 | 94.79 | 94.42 | 92.37 | 89.95 |
| 20 | 94.43 | 92.73 | 92.74 | 92.96 | 92.6 | 93.93 |
| 21 | 95.97 | 94.17 | 94.15 | 93.15 | 91.67 | 92.75 |
| 22 | 95.04 | 93.33 | 93.3 | 92.35 | 93.56 | 92.82 |
| 23 | 95.22 | 93.5 | 93.47 | 92.71 | 94.06 | 93.55 |
| 24 | 95.27 | 93.69 | 93.72 | 94.04 | 91.85 | 90.82 |
| 25 | 96.9 | 95.1 | 95.06 | 94.08 | 93.27 | 93.32 |
| 26 | 95.84 | 94.16 | 94.15 | 94.66 | 94.01 | 92.55 |
| 27 | 95.15 | 93.63 | 93.62 | 92.76 | 94.43 | 91.91 |
| 28 | 95.19 | 93.56 | 93.55 | 92.55 | 93.77 | 91.25 |
| 29 | 95.98 | 94.2 | 94.2 | 93.92 | 91.07 | 90.51 |
| 30 | 95.56 | 93.99 | 93.99 | 93.49 | 90.69 | 92.89 |
Table 7. ATE analysis of TCP-DBOA technique with other methods under four datasets.

| Methods | GZIP (min) | GREP (min) | TCAS (min) | CS-TCAS (min) |
|---|---|---|---|---|
| TCP-DBOA | 1.50 | 1.95 | 4.69 | 7.53 |
| MHHO-TCP | 3.12 | 3.75 | 6.37 | 9.29 |
| FA | 4.05 | 4.76 | 7.67 | 10.63 |
| PSD | 5.92 | 6.88 | 14.38 | 21.09 |
| LBS | 3.96 | 4.89 | 7.61 | 10.95 |
| Greedy | 4.57 | 4.96 | 8.73 | 11.76 |
Table 8. Mean APFD analysis of TCP-DBOA technique with other methods under four datasets.

| Methods | GZIP (%) | GREP (%) | TCAS (%) | CS-TCAS (%) |
|---|---|---|---|---|
| TCP-DBOA | 96.92 | 96.90 | 94.80 | 94.80 |
| MHHO-TCP | 95.56 | 95.72 | 93.65 | 93.57 |
| FA | 95.16 | 95.32 | 93.12 | 93.13 |
| PSD | 94.05 | 94.16 | 92.40 | 92.74 |
| LBS | 94.57 | 94.76 | 92.07 | 92.07 |
| Greedy | 93.22 | 93.33 | 90.60 | 91.74 |
Table 9. CT evaluation of TCP-DBOA technique with existing models under four datasets.

| Methods | GZIP (s) | GREP (s) | TCAS (s) | CS-TCAS (s) |
|---|---|---|---|---|
| TCP-DBOA | 7.95 | 6.34 | 8.23 | 6.40 |
| MHHO-TCP | 10.97 | 14.42 | 22.83 | 19.88 |
| FA | 13.43 | 15.73 | 10.51 | 22.06 |
| PSD | 23.86 | 17.85 | 12.35 | 11.72 |
| LBS | 19.34 | 27.21 | 12.36 | 23.68 |
| Greedy | 11.84 | 11.61 | 22.29 | 11.35 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation: Assiri, M. Test Case Prioritization Using Dragon Boat Optimization for Software Quality Testing. Electronics 2025, 14, 1524. https://doi.org/10.3390/electronics14081524
