5.1. Experimental Results of BPO1 for 0-1 KPs
Table 3 shows the BPO1 parameter values used in the experiments. The OR Library [33] provides a comprehensive 0-1 KPs benchmark comprising 25 distinct instances, detailed in Table 4. In the table, “ID” refers to the instance number assigned in this study, while “Dataset” denotes the name of the benchmark instance (e.g., 8a, 12b, 20c). “Cap.” indicates the knapsack capacity associated with each dataset, and “Dim.” represents the problem dimension. Finally, “Opt.” corresponds to the known optimal objective value of each dataset.
Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 present the statistical results of BPO1. In the tables, “Dataset” denotes the problem dataset, “Cap.” refers to the knapsack capacity, and “Opt.” represents the known optimal value. “Best” indicates the maximum profit achieved, “Mean” is the average profit across multiple runs, and “Worst” denotes the minimum profit obtained. “Std.” corresponds to the standard deviation of the results. “Weight” refers to the total weight of the items included in the best solution, and “WR” (weight ratio) expresses the percentage of the knapsack capacity utilized. “Time” is reported as the average computational time per run. Finally, “SR” represents the success rate, defined as the percentage of independent runs in which at least one feasible solution satisfying all problem constraints is generated. Bold font denotes the optimum value.
Table 3.
Parameter settings.
Parameters | Values |
---|---|
Population size (N) | 20 |
Maximum iteration (MaxIter) | 5000 |
Number of runs | 20 |
Crossover probability (pCR) | 0.20 |
Greedy noise amplitude parameter | 0.10 |
Table 4.
Datasets of instances for the 0-1 KPs [31].
ID | Dataset | Cap. | Dim. | Opt. |
---|---|---|---|---|
KP(1) | 8a | 1,863,633 | 8 | 3,924,400 |
KP(2) | 8b | 1,822,718 | 8 | 3,813,669 |
KP(3) | 8c | 1,609,419 | 8 | 3,347,452 |
KP(4) | 8d | 2,112,292 | 8 | 4,187,707 |
KP(5) | 8e | 2,493,250 | 8 | 4,955,555 |
KP(6) | 12a | 2,805,213 | 12 | 5,688,887 |
KP(7) | 12b | 3,259,036 | 12 | 6,498,597 |
KP(8) | 12c | 3,489,815 | 12 | 5,170,626 |
KP(9) | 12d | 3,453,702 | 12 | 6,992,404 |
KP(10) | 12e | 2,520,392 | 12 | 5,337,472 |
KP(11) | 16a | 3,780,355 | 16 | 7,850,983 |
KP(12) | 16b | 4,426,945 | 16 | 9,352,998 |
KP(13) | 16c | 4,323,280 | 16 | 9,151,147 |
KP(14) | 16d | 4,550,938 | 16 | 9,348,889 |
KP(15) | 16e | 3,760,429 | 16 | 7,769,117 |
KP(16) | 20a | 5,169,647 | 20 | 10,727,049 |
KP(17) | 20b | 4,681,373 | 20 | 9,818,261 |
KP(18) | 20c | 5,063,791 | 20 | 10,714,023 |
KP(19) | 20d | 4,286,641 | 20 | 8,929,156 |
KP(20) | 20e | 4,476,000 | 20 | 9,357,969 |
KP(21) | 24a | 6,404,180 | 24 | 13,549,094 |
KP(22) | 24b | 5,971,071 | 24 | 12,233,713 |
KP(23) | 24c | 5,870,470 | 24 | 12,448,780 |
KP(24) | 24d | 5,762,284 | 24 | 11,815,315 |
KP(25) | 24e | 6,654,569 | 24 | 13,940,099 |
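Each instance in Table 4 defines a 0-1 KP: select a subset of items that maximizes total profit without exceeding the capacity. A minimal evaluation sketch (a hypothetical helper, not the paper's implementation; how BPO1 handles infeasible vectors, e.g., by penalty or repair, is not reproduced here):

```python
def evaluate(solution, profits, weights, capacity):
    """Return (profit, weight, feasible) for a 0-1 knapsack bit vector.

    Sketch of the objective and the capacity constraint only; the
    paper's infeasibility handling (penalty or repair) may differ.
    """
    total_w = sum(w for w, x in zip(weights, solution) if x)
    total_p = sum(p for p, x in zip(profits, solution) if x)
    return total_p, total_w, total_w <= capacity
```

For instance, with profits [10, 20, 30], weights [5, 4, 6], and capacity 12, the bit vector [1, 0, 1] is feasible with profit 40 and weight 11, while [1, 1, 1] violates the capacity.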
Table 5.
The results of the BPO1 for TF1.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.8722 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2811 | 1.8669 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3460 | 1.8644 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.8647 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8629 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9166 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9364 | 1.9003 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.8899 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.8822 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.879 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,842,093 | 7,823,318 | 11,500 | 3,771,406 | 99.7633 | 1.9549 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,347,804 | 9,259,634 | 20,816 | 4,426,267 | 99.9847 | 1.9440 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,147,364 | 9,100,116 | 12,406 | 4,297,175 | 99.3962 | 1.9526 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,199 | 9,305,859 | 9316 | 4,444,721 | 99.8603 | 1.9536 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,764,839 | 7,750,491 | 7661 | 3,752,854 | 99.7986 | 1.9544 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,302 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0298 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,366 | 9,754,368 | 16,503 | 4,671,869 | 99.7970 | 2.0368 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,710,007 | 10,700,635 | 6295 | 5,053,832 | 99.8033 | 2.0303 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,921,022 | 8,873,716 | 17,171 | 4,282,619 | 99.9062 | 2.0276 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,219 | 9,345,847 | 4762 | 4,470,060 | 99.8673 | 2.0312 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,511,191 | 13,459,475 | 26,876 | 6,402,560 | 99.9747 | 2.0987 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,824 | 12,159,261 | 24,149 | 5,966,008 | 99.9152 | 2.1076 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,421 | 12,367,653 | 20,381 | 5,861,707 | 99.8507 | 2.0966 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,805,417 | 11,754,633 | 19,258 | 5,756,602 | 99.9014 | 2.1036 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,446 | 13,909,292 | 9135 | 6,637,749 | 99.7472 | 2.0966 | 100 |
In Table 5, the performance of BPO1 with TF1 across the 0-1 KPs is reported. Across all datasets, the Best values are identical to the known optima, indicating that BPO1 reached the optimal solution in at least one run for every dataset. For the datasets ranging from 8a to 12e, the mean values consistently match the optimal results, except for instance 12b, where a slight deviation is observed. Within the same range (8a–12e), the standard deviation is zero for every instance other than 12b, confirming the complete stability of the algorithm. For larger-scale instances, small variations in the mean values and standard deviation are observed; however, these deviations are negligible and do not compromise the algorithm’s overall robustness. Capacity utilization rates exceed 97%, with most above 99%, demonstrating efficient resource use. The success rate is 100% for all datasets. These findings confirm that the proposed algorithm delivers high accuracy, stability, and efficiency across problems of varying scales, including large-scale instances.
Table 6.
The results of the BPO1 for TF2.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8589 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8556 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8708 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.8779 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8776 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9298 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9364 | 1.9295 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.9353 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9472 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9338 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,840,710 | 7,823,318 | 12,026 | 3,771,406 | 99.7634 | 1.9753 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,208 | 9,347,736 | 1927 | 4,426,267 | 99.9846 | 1.9689 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,146,133 | 9,100,116 | 13,206 | 4,297,175 | 99.3962 | 1.9691 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,333,429 | 9,296,536 | 14,043 | 4,444,721 | 99.8603 | 1.9784 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,761,303 | 7,750,491 | 8948 | 3,752,854 | 99.7986 | 1.9716 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,302 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0300 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,732 | 9,753,772 | 17,625 | 4,671,869 | 99.7969 | 2.0332 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,706,515 | 10,700,635 | 6682 | 5,053,832 | 99.8033 | 2.0237 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,919,512 | 8,873,716 | 17,677 | 4,282,619 | 99.9062 | 2.0184 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,354,550 | 9,345,847 | 5165 | 4,470,060 | 99.8673 | 2.0234 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,515,169 | 13,460,329 | 26,859 | 6,402,560 | 99.9747 | 2.0913 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,220,570 | 12,159,261 | 21,267 | 5,966,008 | 99.9152 | 2.0926 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,429,256 | 12,385,452 | 20,738 | 5,861,707 | 99.8507 | 2.0866 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,806,293 | 11,772,086 | 13,984 | 5,756,602 | 99.9014 | 2.0940 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,005 | 13,902,534 | 12,358 | 6,637,749 | 99.7472 | 2.0888 | 100 |
Table 6 presents the performance of the proposed algorithm using TF2 across all 0-1 KPs. For the 8a–12e datasets, the algorithm consistently achieved the optimal solution in all runs, with best, mean, and worst values identical and a standard deviation of zero, indicating perfect stability. In the larger datasets (16a–24e), the best results always matched the optimum, and the mean and worst results remained extremely close to it, with very small standard deviations, demonstrating strong robustness. Capacity utilization ratios ranged from approximately 97% to nearly 100%, confirming efficient use of available resources. Execution times remained low across all datasets, and the success rate is 100% in every case. These results indicate that, under TF2, the proposed algorithm maintains high accuracy, stability, and efficiency across different problem sizes, including large-scale instances. This advantage is attributed to the probability curves of TF1 and TF2 being more balanced in the middle region, which enables a more effective trade-off between the exploration and exploitation phases, yielding stable results on small- and medium-scale datasets and slightly more consistent performance on large-scale datasets.
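A transfer function maps each continuous position component to a probability of setting the corresponding bit. The paper's exact TF1–TF8 formulas are not reproduced here; the sketch below uses the classic S-shaped sigmoid purely as an illustration of the binarization mechanism:

```python
import math
import random

def s_shaped(x):
    """Classic S-shaped (sigmoid) transfer function.

    Illustrative only -- it stands in for one of the paper's TFs,
    whose exact definitions are given elsewhere in the paper.
    """
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, tf=s_shaped, rng=random.random):
    """Map a continuous position vector to a bit vector:
    bit j is set with probability tf(position[j])."""
    return [1 if rng() < tf(x) else 0 for x in position]
```

The shape of the curve controls the exploration/exploitation balance: near x = 0 the probability is close to 0.5 (bits flip often, favoring exploration), while large |x| pushes the probability toward 0 or 1 (bits freeze, favoring exploitation).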
Table 7.
The results of the BPO1 for TF3.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8669 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8685 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8606 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8631 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8658 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9573 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,494,760 | 6,473,019 | 9370 | 3,256,963 | 99.9363 | 1.9496 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9552 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9491 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9449 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,845,372 | 7,823,318 | 10,207 | 3,771,406 | 99.7632 | 2.0082 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,998 | 9,352,998 | 0 | 4,426,267 | 99.9847 | 2.0053 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,143,922 | 9,082,307 | 19,064 | 4,297,175 | 99.3965 | 2.0108 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,342 | 9,296,536 | 11,357 | 4,444,721 | 99.8603 | 2.0046 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,217 | 7,750,491 | 6973 | 3,752,854 | 99.7985 | 2.0055 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,723,554 | 10,692,101 | 10,756 | 5,166,676 | 99.9425 | 2.0566 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,805,114 | 9,744,513 | 23,273 | 4,671,869 | 99.7969 | 2.0480 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,702,726 | 10,611,486 | 22,453 | 5,053,832 | 99.8033 | 2.0245 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,926,768 | 8,895,152 | 8050 | 4,282,619 | 99.9061 | 2.0224 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,356,430 | 9,345,847 | 3560 | 4,470,060 | 99.8673 | 2.0238 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,516,478 | 13,455,545 | 28,966 | 6,402,560 | 99.9747 | 2.0851 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,221,969 | 12,193,732 | 15,937 | 5,966,008 | 99.9152 | 2.0853 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,439,973 | 12,401,619 | 13,640 | 5,861,707 | 99.8507 | 2.0861 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,801,226 | 11,754,633 | 19,256 | 5,756,602 | 99.9013 | 2.0881 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,925,645 | 13,902,534 | 12,720 | 6,637,749 | 99.7472 | 2.0800 | 100 |
As shown in Table 7, TF3 achieves high accuracy across most datasets, although slight deviations in the mean and worst values are observed for datasets 12b and 16a. These deviations are attributed to TF3’s structure being less capable than TF2 of sustaining strong local search intensity for certain problem sizes. While TF3 is more stable than TF1, it does not reach the low variance levels of TF2 and TF4. Nevertheless, capacity utilization ratios remain high (97–100%), indicating that overall solution quality is maintained; TF2, by contrast, delivers a more stable and consistent performance, with near-zero variance across both small-to-medium and large-scale problems.
Table 8.
The results of the BPO1 for TF4.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.8785 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8785 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8731 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8768 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8751 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9466 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9363 | 1.9293 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9393 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9327 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9302 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,850,983 | 7,850,983 | 0 | 3,771,406 | 99.7632 | 1.9863 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,998 | 9,352,998 | 0 | 4,426,267 | 99.9846 | 1.9859 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,151,147 | 9,151,147 | 0 | 4,297,175 | 99.3962 | 1.9878 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,343 | 9,296,536 | 11,356 | 4,444,721 | 99.8603 | 1.9933 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,763,908 | 7,750,491 | 8225 | 3,752,854 | 99.7985 | 1.9893 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,807 | 10,692,101 | 12,803 | 5,166,676 | 99.9424 | 2.0375 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,814,023 | 9,754,368 | 14,793 | 4,671,869 | 99.7969 | 2.0388 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,709,264 | 10,700,635 | 6504 | 5,053,832 | 99.8033 | 2.0366 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,925,636 | 8,872,522 | 12,873 | 4,282,619 | 99.9061 | 2.0310 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,354,386 | 9,323,214 | 8276 | 4,470,060 | 99.8672 | 2.0362 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,514,928 | 13,470,217 | 27,897 | 6,402,560 | 99.9747 | 2.1012 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,211,240 | 12,157,691 | 22,582 | 5,966,008 | 99.9152 | 2.0948 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,784 | 12,373,645 | 19,337 | 5,861,707 | 99.8507 | 2.1066 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,801,052 | 11,772,086 | 16,697 | 5,756,602 | 99.9013 | 2.0962 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,924,841 | 13,886,063 | 16,898 | 6,637,749 | 99.7472 | 2.0984 | 100 |
The results presented in Table 8 indicate that TF4 performs strongly across all datasets. The best values always correspond to the global optimum, and the mean and worst values match this optimum in most cases. TF4 exhibits notably faster and more stable convergence on large-scale datasets than TF1, TF3, TF5, TF6, and TF7. This advantage is attributed to TF4’s strong ability to escape local minima through sharper, step-like transitions in the solution space, which enables rapid convergence to the optimum without compromising solution quality.
In addition, TF4 is distinguished by having the highest number of instances with deviations in the mean, worst, and standard deviation values compared to the other TFs. This indicates that, although TF4 generally ensures convergence and high accuracy, its aggressive search dynamics may lead to greater variability across certain problem sizes.
Table 9.
The results of the BPO1 for TF5.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.9196 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9143 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.9092 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.9174 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9171 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9600 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9363 | 1.9496 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9549 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9511 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9485 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,843,989 | 7,823,318 | 11,230 | 3,771,406 | 99.7632 | 2.0100 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,471 | 9,347,736 | 1619 | 4,426,267 | 99.9846 | 2.0017 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,148,595 | 9,100,116 | 11,410 | 4,297,175 | 99.3961 | 2.0092 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,340,350 | 9,336,691 | 5735 | 4,444,721 | 99.8603 | 2.0099 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,028 | 7,750,491 | 7327 | 3,752,854 | 99.7985 | 2.0227 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,806 | 10,692,101 | 12,803 | 5,166,676 | 99.9425 | 2.0837 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,811,066 | 9,757,816 | 15,147 | 4,671,869 | 99.7969 | 2.0883 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,708,880 | 10,695,858 | 6816 | 5,053,832 | 99.8033 | 2.0871 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,925,696 | 8,873,716 | 12,614 | 4,282,619 | 99.9061 | 2.0789 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,275 | 9,323,214 | 7997 | 4,470,060 | 99.8672 | 2.0811 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,512,231 | 13,470,217 | 26,368 | 6,402,560 | 99.9747 | 2.1397 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,221,200 | 12,157,691 | 20,892 | 5,966,008 | 99.9152 | 2.1410 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,439,555 | 12,401,619 | 16,955 | 5,861,707 | 99.8507 | 2.1388 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,800,233 | 11,765,854 | 19,131 | 5,756,602 | 99.9013 | 2.1430 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,934,355 | 13,909,292 | 1039 | 6,637,749 | 99.7472 | 2.1396 | 100 |
The results in Table 9 show that TF5 reaches the global optimum for every dataset. In large-scale datasets, the mean and worst values remain very close to the optimum; however, TF5 does not match the convergence speed of TF4, which achieves high accuracy and low variance across both small- and large-scale datasets.
Table 10.
The results of the BPO1 for TF6.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8859 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8851 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8864 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8871 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9014 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,563 | 5,682,404 | 1449 | 2,798,038 | 99.7442 | 1.9675 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9469 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.9489 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9458 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9380 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,844,502 | 7,823,318 | 11,654 | 3,771,406 | 99.7632 | 2.0002 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,735 | 9,347,736 | 1176 | 4,426,267 | 99.9846 | 1.9969 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,149,916 | 9,126,526 | 5505 | 4,297,175 | 99.3961 | 2.0035 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,340,029 | 9,305,859 | 10,047 | 4,444,721 | 99.8603 | 2.0084 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,028 | 7,750,491 | 7327 | 3,752,854 | 99.7985 | 2.0054 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,727,049 | 10,727,049 | 0 | 5,166,676 | 99.9425 | 2.0538 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,812,109 | 9,757,816 | 14,872 | 4,671,869 | 99.7969 | 2.0660 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,708,668 | 10,700,635 | 6729 | 5,053,832 | 99.8033 | 2.0532 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,923,367 | 8,895,152 | 12,540 | 4,282,619 | 99.9061 | 2.0501 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,353 | 9,323,214 | 8020 | 4,470,060 | 99.8672 | 2.0434 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,502,593 | 13,457,089 | 27,758 | 6,402,560 | 99.9747 | 2.1198 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,226,947 | 12,188,747 | 14,995 | 5,966,008 | 99.9152 | 2.1297 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,430,039 | 12,368,592 | 25,802 | 5,861,707 | 99.8507 | 2.1094 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,802,389 | 11,765,854 | 16,099 | 5,756,602 | 99.9013 | 2.1198 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,384 | 13,902,534 | 9987 | 6,637,749 | 99.7472 | 2.1327 | 100 |
The results presented in Table 10 indicate that TF6 attained the optimum among the best values across all datasets and generally demonstrated high accuracy. Compared to the other TFs, TF6 tends to explore a broader search space; while this enhances its exploration capability, it also produces minor standard deviations in certain cases. Nevertheless, TF6 performed strongly on specific datasets, such as 20a, where it consistently achieved the optimum in every run. This comprehensive evaluation confirms that the proposed BPO1 algorithm delivers consistent performance not only across different problem sizes but also on instances of varying complexity, and that the diversity of TFs allows the algorithm to maintain solution quality while adapting its strategy to the characteristics of each problem.
Table 11.
The results of the BPO1 for TF7.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.9181 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9904 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8791 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.9281 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9740 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9938 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9700 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9297 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9867 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9986 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,842,093 | 7,823,318 | 11,500 | 3,771,406 | 99.7632 | 2.0634 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,347,803 | 9,259,634 | 20,815 | 4,426,267 | 99.9846 | 1.9915 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,147,364 | 9,100,116 | 12,405 | 4,297,175 | 99.3961 | 2.0029 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,199 | 9,305,859 | 9316 | 4,444,721 | 99.8603 | 2.0399 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,764,839 | 7,750,491 | 7661 | 3,752,854 | 99.7985 | 2.0655 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,301 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0960 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,365 | 9,754,368 | 16,503 | 4,671,869 | 99.7969 | 2.0688 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,710,006 | 10,700,635 | 6294 | 5,053,832 | 99.8033 | 2.1125 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,921,021 | 8,873,716 | 17,171 | 4,282,619 | 99.9061 | 2.1390 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,218 | 9,345,847 | 4761 | 4,470,060 | 99.8672 | 2.1812 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,511,190 | 13,459,475 | 26,876 | 6,402,560 | 99.9747 | 2.1718 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,824 | 12,159,261 | 24,149 | 5,966,008 | 99.9152 | 2.1384 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,420 | 12,367,653 | 20,380 | 5,861,707 | 99.8507 | 2.1138 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,805,417 | 11,754,633 | 19,258 | 5,756,602 | 99.9013 | 2.0999 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,446 | 13,909,292 | 9134 | 6,637,749 | 99.7472 | 2.1023 | 100 |
The results in Table 11 show that TF7 offers a balanced profile: high stability on small- and medium-scale problems and high solution quality and diversity on large-scale problems. On small- and medium-scale datasets, TF7 achieves stability and accuracy very close to TF4; however, while TF4 attains absolute stability with zero variance at these scales, TF7 yields higher standard deviations on large-scale datasets by increasing diversity. This is attributed to TF7’s adoption of a broader exploration strategy.
Table 12.
The results of the BPO1 for TF8.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
---|---|---|---|---|---|---|---|---|---|---|
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.9217 | 100 |
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9303 | 100 |
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8989 | 100 |
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.9249 | 100 |
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9321 | 100 |
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9907 | 100 |
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9961 | 100 |
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9704 | 100 |
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 2.0149 | 100 |
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 2.0397 | 100 |
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,844,859 | 7,823,318 | 9768 | 3,771,406 | 99.7632 | 2.0368 | 100 |
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,471 | 9,347,736 | 1619 | 4,426,267 | 99.9846 | 2.0493 | 100 |
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,144,812 | 9,100,116 | 16,241 | 4,297,175 | 99.3961 | 2.0900 | 100 |
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,336,979 | 9,305,859 | 8569 | 4,444,721 | 99.8603 | 2.0789 | 100 |
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,767,443 | 7,750,491 | 5186 | 3,752,854 | 99.7985 | 2.0839 | 100 |
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,157 | 10,665,994 | 15,780 | 5,166,676 | 99.9425 | 2.1058 | 100 |
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,807,517 | 9,754,368 | 19,886 | 4,671,869 | 99.7969 | 2.1619 | 100 |
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,707,708 | 10,700,635 | 6585 | 5,053,832 | 99.8033 | 2.0533 | 100 |
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,924,422 | 8,873,716 | 14,805 | 4,282,619 | 99.906 | 2.1256 | 100 |
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,356,430 | 9,345,847 | 3560 | 4,470,060 | 99.8672 | 2.0807 | 100 |
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,521,169 | 13,482,886 | 20,276 | 6,402,560 | 99.9747 | 2.1603 | 100 |
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,039 | 12,157,691 | 29,279 | 5,966,008 | 99.9152 | 2.1305 | 100 |
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,432,966 | 12,389,124 | 18,086 | 5,861,707 | 99.8507 | 2.1211 | 100 |
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,802,070 | 11,765,854 | 17,168 | 5,756,602 | 99.9013 | 2.1585 | 100 |
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,922,962 | 13,866,034 | 20,101 | 6,637,749 | 99.7472 | 2.1723 | 100 |
In
Table 12, TF8 delivers balanced and successful performance: high stability on small- and medium-scale instances and strong solution quality on larger problems. The observed deviations are minimal and do not compromise solution quality. While TF8 is not as stable as TF4, its balance between diversity and quality is comparable to that of TF6 and TF7.
Table 13 presents the standard deviation (Std.) rank results of BPO1 with eight different TFs (TF1–TF8). For each dataset, the rank of the standard deviation obtained by each TF is reported, where lower ranks indicate more stable performance. In
Table 13, the row “Total Best” indicates the number of times each TF achieves the best rank (rank = 1) across all datasets, highlighting its overall consistency. The row “Total Min. Rank” ranks the TFs by the sum of their per-dataset ranks, so that a value of 1 identifies the TF with the most stable overall performance.
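Under the convention just described (lower standard deviation is better, rank 1 is best), the two summary rows reduce to simple counts over a rank table. The sketch below illustrates this; the tie-handling (tied values sharing the same rank) is an assumption, since Table 13's exact ranking rule is not restated here.

```python
def ranks(values):
    """Competition-style ranks: the lowest value gets rank 1;
    tied values share the same rank (assumed convention)."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]

def total_best(std_table):
    """'Total Best': for each TF (column), count the datasets (rows)
    on which it attains rank 1 for the standard deviation."""
    n_tfs = len(std_table[0])
    counts = [0] * n_tfs
    for row in std_table:
        row_ranks = ranks(row)
        for t in range(n_tfs):
            if row_ranks[t] == 1:
                counts[t] += 1
    return counts

# Two toy datasets, three TFs: TF1 is rank 1 twice, TF2 once (tie on row 2).
print(total_best([[0.0, 5.0, 3.0], [0.0, 0.0, 2.0]]))  # -> [2, 1, 0]
```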
In
Table 13, the highest value in the “Total Best” row is obtained by TF4 (14 times), indicating that TF4 achieves the best performance most frequently compared to the others. Additionally, in the “Total Min. Rank” row, the value for TF4 is 1 (rank = 1), indicating that TF4 has the lowest total rank and is the most prominent in terms of stability. The results therefore demonstrate that TF4 is the most reliable and stable choice for BPO1, as it generates more balanced solutions in terms of variance.
Additionally, compared to the standard S-shaped and V-shaped functions, TF4 applies a softer exponential decay by dividing the exponent by three. This modification yields a smoother slope, producing more gradual adjustments in bit-flip probabilities. Consequently, TF4 reduces the risk of premature convergence, preserves diversity within the population, and limits transfer-function-induced misclassifications. This translates into a more effective balance between exploration and exploitation, enabling TF4 to maintain solution stability while retaining sufficient flexibility to escape local optima.
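To illustrate the effect of dividing the exponent by three, the sketch below contrasts a standard sigmoid transfer function with a softened TF4-style variant; the exact TF4 formula is an assumption here and should be checked against the paper's transfer-function definitions.

```python
import math

def s_shape(x: float) -> float:
    """Standard S-shaped (sigmoid) transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def s_shape_soft(x: float) -> float:
    """TF4-style variant (assumed form): the exponent is divided by 3,
    flattening the slope so bit-flip probabilities change more gradually."""
    return 1.0 / (1.0 + math.exp(-x / 3.0))

# For moderate inputs the softened curve stays closer to 0.5,
# which preserves population diversity longer.
for x in (0.5, 1.0, 2.0):
    print(f"x={x}: standard={s_shape(x):.3f} softened={s_shape_soft(x):.3f}")
```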
Table 14 presents a comparison of BPO1-TF4 with two existing binary optimization algorithms, the Binary Evolutionary Mating Algorithm (BinEMA) and the Binary Fire Hawk Optimizer (BinFHO). The evaluation considers best values, standard deviations, and time results. The parameter settings used in the experiments are listed in
Table 3, while the results of the BinEMA and BinFHO are taken from [
60].
Table 14 shows that BPO1 achieves the best values on all datasets, demonstrating its effectiveness across different problem sizes: it maintains competitive performance on the smaller datasets and shows clear superiority as the problem size increases. BinEMA falls behind BPO1 on the datasets beyond size 12, with the single exception of dataset 20c, where the two algorithms produce the same result; in dataset 12c, BinEMA reports a lower value than BPO1. Similarly, BinFHO is consistently outperformed by BPO1 on datasets larger than size 8. Overall, these findings confirm that BPO1 is a more powerful and stable solver, particularly on large-scale instances, where it consistently produces higher-quality solutions than BinEMA and BinFHO.
In general, the results of
Table 14 show that the standard deviation values of BPO1 are zero or close to zero in almost all datasets. This indicates that BPO1 produces highly consistent and repeatable results in each independent run, confirming its stability and reliability. In contrast, BinEMA and especially BinFHO produce considerably higher standard deviations in many datasets, showing that their results fluctuate and their stability is weaker. In the overall evaluation, therefore, BPO1 is superior in terms of both reliability and solution stability, while the other algorithms produce more variable results. This finding confirms BPO1 as a more consistent and reliable algorithm for solving binary optimization problems.
In
Table 14, the time results indicate that BPO1 executes with consistently low and stable computational times across all datasets, whereas BinEMA and BinFHO require significantly higher and more fluctuating times. This demonstrates that BPO1 is far more efficient and practical than the competing algorithms.
Table 15 presents a comparison of the Success Rate (SR) values achieved by BPO1-TF4 and the S-shape Binary Whale Optimization Algorithm (BWOS). The SR (%) results of BWOS are taken from [
33], and both BPO1-TF4 and BWOS are executed under the same experimental configuration specified in
Table 3.
In
Table 15, the SR values of BPO1 and BWOS are compared across the datasets. On datasets 8a–8e, BPO1 consistently achieves a perfect SR of 100%, reliably producing feasible solutions without failure, whereas BWOS reaches 100% on most datasets but drops to 75% on 8b, indicating instability. On datasets 12a–12e, BPO1 again achieves an SR of 100% on every instance, while BWOS exhibits significant variability: it reaches 100% on 12c, 12d, and 12e, but drops sharply to 85% on 12b and to 40% on 12a. On datasets 16a–16e, BPO1 once more attains a perfect success rate, confirming its ability to generate feasible solutions consistently, whereas BWOS performs very poorly and inconsistently, dropping to 5% on 16a, failing completely with 0% on 16d, reaching only 35% on 16e, and managing a comparatively better 80% on 16b. These results show that BWOS struggles with feasibility on the more challenging instances, while BPO1 maintains flawless and dependable results in every case, underlining its robustness and superior reliability.
In this table, the SR values for datasets 20a–20e show the same pattern: BPO1 maintains a perfect 100% success rate on every instance, while BWOS behaves erratically, performing strongly with 95% on 20a and 100% on 20b but collapsing to 0% on 20c, 15% on 20d, and 30% on 20e. On datasets 24a–24e, BPO1 again achieves a flawless 100% success rate, demonstrating its reliability on the largest and most challenging instances, whereas BWOS remains unstable, dropping to 25% on 24a, failing completely with 0% on 24d, and managing only 60% on 24e. These sharp fluctuations indicate that BWOS cannot be relied upon across conditions, while BPO1 consistently delivers perfect performance, underlining its stability and effectiveness on complex binary optimization problems.
As a result, BPO1 achieves a 100% success rate across all datasets, clearly demonstrating its robustness, stability, and reliability. Unlike BWOS, which shows inconsistent and often low performance with significant drops in success rate, BPO1 consistently produces feasible and high-quality solutions without exception. This consistency across different problem sizes and complexities highlights BPO1 as a superior and dependable algorithm for solving binary optimization problems.
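Given the SR definition used throughout these tables (percentage of independent runs that yield a feasible solution), the computation is a simple count. In the sketch below, the capacity check is the natural feasibility test for the 0-1 KP and is an assumption about how feasibility is verified.

```python
def success_rate(best_weights, capacity):
    """SR (%): share of independent runs whose best solution's total
    weight stays within the knapsack capacity (assumed feasibility test)."""
    feasible = sum(1 for w in best_weights if w <= capacity)
    return 100.0 * feasible / len(best_weights)

# 20 runs on dataset 8a, every run within the capacity 1,863,633:
print(success_rate([1_826_529] * 20, 1_863_633))  # -> 100.0
```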
Figure 10 shows the comparative computational time results of the BPO1, BinEMA, and BinFHO algorithms for datasets of varying sizes (8a–8e, 12a–12e, 16a–16e, 20a–20e, and 24a–24e).
Figure 10 shows that the computational times of BPO1 remain very low across all datasets, appearing as an almost flat line; this demonstrates that BPO1 is highly efficient and consistent in terms of computational time. BinEMA and BinFHO record significantly higher times: BinEMA produces high and fluctuating values on most datasets, while BinFHO, although cheaper than BinEMA on some small datasets, generally incurs high and unstable computational costs. On the datasets with 16, 20, and 24 items, the times of BinEMA and BinFHO exhibit sharp increases and decreases, reflecting unstable and costly behavior, whereas BPO1 follows a consistently low and stable line. The graphs therefore demonstrate that BPO1 is significantly faster than the other algorithms and maintains its efficiency as the problem scale increases, confirming that it is superior not only in solution quality but also in computational cost.
5.2. Experimental Results of BPO2 for UFLP
The UFLP instances are categorized by problem size and difficulty level, ranging from small to large. This ensures a diverse and comprehensive evaluation of the optimization algorithm's performance across varying problem complexities.
Table 16 shows the BPO2 parameter values used in the experiments. A comprehensive dataset of 12 different UFLPs is available from the OR Library, as illustrated in
Table 17.
Table 18 reports the best values,
Table 19 provides the standard deviation values indicating stability,
Table 20 summarizes the mean value,
Table 21 shows the GAP values reflecting proximity to the optimum,
Table 22 presents the worst results, and
Table 23 reports the computational time results, highlighting efficiency. These tables show the performance of the BPO2 algorithm under the TF1–TF8 variants. In the tables, bold values indicate the optimum value.
Table 16.
BPO2's parameter settings.
Parameters | Values |
---|---|
Population size (N) | 40 |
Maximum iteration (MaxIter) | 2000 |
MaxFEs | 80,000 |
Number of runs | 30 |
Table 17.
Characteristics of the UFLP from the OR library.
Problem Name | Difficulty Level | Size of the Problem | Optimum |
---|---|---|---|
Cap71 | Small | 16 × 50 | 9.3262 × 105 |
Cap72 | Small | 16 × 50 | 9.7780 × 105 |
Cap73 | Small | 16 × 50 | 1.01064 × 106 |
Cap74 | Small | 16 × 50 | 1.0350 × 106 |
Cap101 | Medium | 25 × 50 | 7.9664 × 105 |
Cap102 | Medium | 25 × 50 | 8.5470 × 105 |
Cap103 | Medium | 25 × 50 | 8.9378 × 105 |
Cap104 | Medium | 25 × 50 | 9.2894 × 105 |
Cap131 | Large | 50 × 50 | 7.9344 × 105 |
Cap132 | Large | 50 × 50 | 8.5150 × 105 |
Cap133 | Large | 50 × 50 | 8.9308 × 105 |
Cap134 | Large | 50 × 50 | 9.2894 × 105 |
Table 18.
The statistical best values of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.4958 × 105 | 9.3262 × 105 | 9.3262 × 105 |
Cap72 | 9.7780 × 105 | 9.8694 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 |
Cap73 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 |
Cap74 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 |
Cap101 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 |
Cap102 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 |
Cap103 | 8.9378 × 105 | 8.9457 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9401 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9378 × 105 |
Cap104 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 |
Cap131 | 7.9742 × 105 | 8.0031 × 105 | 7.9793 × 105 | 8.0124 × 105 | 7.9840 × 105 | 8.0058 × 105 | 7.9774 × 105 | 8.0351 × 105 |
Cap132 | 8.5761 × 105 | 8.5752 × 105 | 8.5453 × 105 | 8.5755 × 105 | 8.6033 × 105 | 8.5351 × 105 | 8.5522 × 105 | 8.5385 × 105 |
Cap133 | 8.9687 × 105 | 8.9920 × 105 | 8.9927 × 105 | 8.9480 × 105 | 8.9559 × 105 | 8.9723 × 105 | 8.9568 × 105 | 8.9475 × 105 |
Cap134 | 9.3151 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 |
Total Rank | 1.0914 × 107 | 1.0926 × 107 | 1.0911 × 107 | 1.0913 × 107 | 1.0914 × 107 | 1.0928 × 107 | 1.0908 × 107 | 1.0912 × 107 |
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3 |
In
Table 18, the Total Rank row represents the overall performance of each TF across all datasets. The lowest total value, 1.0908 × 107, is obtained by TF7, demonstrating that TF7 provides the best overall performance. The Finally Rank row summarizes the ranking of these totals: TF7 is placed first (rank = 1), followed by TF3 in second (rank = 2), TF8 in third (rank = 3), and TF4 in fourth (rank = 4).
Table 19.
The statistical standard deviation results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap73 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Total Rank | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Finally Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Table 19 presents the standard deviation results of BPO2 across different TFs (TF1-TF8) for the UFLP. It is observed that the standard deviation values are consistently recorded as zero for all datasets and all TFs, indicating that the algorithm has produced identical results in every independent run. Consequently, the Total Rank values are also zero across all TFs, and the Finally Rank row assigns all TFs the first rank (rank = 1). These outcomes demonstrate that BPO2 exhibits perfect stability and robustness on the tested datasets, ensuring that no variability is observed across runs regardless of the TFs employed.
Table 20.
The mean results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.4958 × 105 | 9.3262 × 105 | 9.3262 × 105 |
Cap72 | 9.7780 × 105 | 9.8694 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 |
Cap73 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 |
Cap74 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 |
Cap101 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 |
Cap102 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 |
Cap103 | 8.9378 × 105 | 8.9457 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9401 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9378 × 105 |
Cap104 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 |
Cap131 | 7.9742 × 105 | 8.0031 × 105 | 7.9793 × 105 | 8.0124 × 105 | 7.9840 × 105 | 8.0058 × 105 | 7.9774 × 105 | 8.0351 × 105 |
Cap132 | 8.5761 × 105 | 8.5752 × 105 | 8.5453 × 105 | 8.5755 × 105 | 8.6033 × 105 | 8.5351 × 105 | 8.5522 × 105 | 8.5385 × 105 |
Cap133 | 8.9687 × 105 | 8.9920 × 105 | 8.9927 × 105 | 8.9480 × 105 | 8.9559 × 105 | 8.9723 × 105 | 8.9568 × 105 | 8.9475 × 105 |
Cap134 | 9.3151 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 |
Total Rank | 1.0914 × 107 | 1.0926 × 107 | 1.0911 × 107 | 1.0913 × 107 | 1.0914 × 107 | 1.0928 × 107 | 1.0908 × 107 | 1.0912 × 107 |
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3 |
Table 20 shows that the mean results of all TFs are competitive, although certain TFs yield slight advantages on specific datasets. The Total Rank values reinforce this, with TF7 obtaining the lowest cumulative score (rank = 1), followed by TF3 (rank = 2), TF8 (rank = 3), and TF4 (rank = 4). The Finally Rank row therefore highlights TF7 as the most effective TF overall, combining consistency with competitiveness, while the remaining TFs also deliver reliable performance with dataset-specific strengths.
Table 21.
The statistical GAP values of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 0 | 0 | 0 | 0 | 0 | 1.8185 × 100 | 0 | 0 |
Cap72 | 0 | 9.3491 × 10−1 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap73 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Cap101 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 | 1.4613 × 10−14 |
Cap102 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 | 1.3621 × 10−14 |
Cap103 | 0 | 8.8567 × 10−2 | 0 | 0 | 2.5289 × 10−2 | 0 | 0 | 0 |
Cap104 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 | 1.2532 × 10−14 |
Cap131 | 5.0167 × 10−1 | 8.6586 × 10−1 | 5.6634 × 10−1 | 9.8266 × 10−1 | 6.2559 × 10−1 | 8.9979 × 10−1 | 5.4144 × 10−1 | 1.2694 × 100 |
Cap132 | 7.1793 × 10−1 | 7.0697 × 10−1 | 3.5643 × 10−1 | 7.1090 × 10−1 | 1.0374 × 100 | 2.3665 × 10−1 | 4.3702 × 10−1 | 2.7668 × 10−1 |
Cap133 | 4.2463 × 10−1 | 6.8536 × 10−1 | 6.9381 × 10−1 | 1.9309 × 10−1 | 2.8173 × 10−1 | 4.6505 × 10−1 | 2.9188 × 10−1 | 1.8759 × 10−1 |
Cap134 | 2.7621 × 10−1 | 1.2532 × 10−14 | 5.7680 × 10−2 | 1.2532 × 10−14 | 1.2532 × 10−14 | 5.7680 × 10−2 | 5.7680 × 10−2 | 5.7680 × 10−2 |
Total Rank | 1.9184 × 100 | 3.2817 × 100 | 1.6739 × 100 | 1.8867 × 100 | 1.9700 × 100 | 3.4777 × 100 | 1.3280 × 100 | 1.7914 × 100 |
Finally Rank | 5 | 7 | 2 | 4 | 6 | 8 | 1 | 3 |
Table 21 shows the GAP values, where some TFs exhibit slight advantages on particular datasets. The Total Rank values highlight these differences: TF7 achieves the lowest cumulative GAP (rank = 1), followed by TF3 (rank = 2), TF8 (rank = 3), and TF4 (rank = 4). Consequently, the Finally Rank row identifies TF7 as the most effective TF overall, as it combines robustness with accuracy.
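GAP is taken here as the standard percentage deviation of an obtained value from the known optimum. This formula is an assumption, since the paper's exact definition lies outside this excerpt, but it approximately reproduces Table 21's entry for TF1-BPO2 on Cap131.

```python
def gap(value: float, optimum: float) -> float:
    """Percentage deviation from the known optimum (assumed GAP convention)."""
    return 100.0 * abs(value - optimum) / optimum

# TF1-BPO2 on Cap131: best 7.9742e5 vs. optimum 7.9344e5
print(f"{gap(7.9742e5, 7.9344e5):.4f}")  # close to Table 21's 5.0167e-1
```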
Table 22.
The statistical worst results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.3262 × 105 | 9.4958 × 105 | 9.3262 × 105 | 9.3262 × 105 |
Cap72 | 9.7780 × 105 | 9.8694 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 | 9.7780 × 105 |
Cap73 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 | 1.0106 × 106 |
Cap74 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 | 1.0350 × 106 |
Cap101 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 | 7.9665 × 105 |
Cap102 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 | 8.5470 × 105 |
Cap103 | 8.9378 × 105 | 8.9457 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9401 × 105 | 8.9378 × 105 | 8.9378 × 105 | 8.9378 × 105 |
Cap104 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 |
Cap131 | 7.9742 × 105 | 8.0031 × 105 | 7.9793 × 105 | 8.0124 × 105 | 7.9840 × 105 | 8.0058 × 105 | 7.9774 × 105 | 8.0351 × 105 |
Cap132 | 8.5761 × 105 | 8.5752 × 105 | 8.5453 × 105 | 8.5755 × 105 | 8.6033 × 105 | 8.5351 × 105 | 8.5522 × 105 | 8.5385 × 105 |
Cap133 | 8.9687 × 105 | 8.9920 × 105 | 8.9927 × 105 | 8.9480 × 105 | 8.9559 × 105 | 8.9723 × 105 | 8.9568 × 105 | 8.9475 × 105 |
Cap134 | 9.3151 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2894 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 | 9.2948 × 105 |
Total Rank | 1.0914 × 107 | 1.0926 × 107 | 1.0911 × 107 | 1.0913 × 107 | 1.0914 × 107 | 1.0928 × 107 | 1.0908 × 107 | 1.0912 × 107 |
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3 |
Table 22 presents the worst results of BPO2 across the UFLP using eight different TFs (TF1-TF8). The Finally Rank row highlights TF7 as the best-performing TF in terms of worst results (rank = 1), showcasing its robustness and reliability, with TF3 (rank = 2) and TF8 (rank = 3) emerging as competitive alternatives.
Table 23.
The statistical time results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2 |
---|---|---|---|---|---|---|---|---|
Cap71 | 1.7468 × 101 | 1.7116 × 101 | 1.6772 × 101 | 1.6913 × 101 | 1.7519 × 101 | 1.7613 × 101 | 1.6903 × 101 | 1.7287 × 101 |
Cap72 | 1.6622 × 101 | 1.6908 × 101 | 1.6567 × 101 | 1.6847 × 101 | 1.7206 × 101 | 1.6635 × 101 | 1.6571 × 101 | 1.6660 × 101 |
Cap73 | 1.6441 × 101 | 1.6557 × 101 | 1.6392 × 101 | 1.6328 × 101 | 1.6988 × 101 | 1.6403 × 101 | 1.5571 × 101 | 1.5577 × 101 |
Cap74 | 1.5355 × 101 | 1.5582 × 101 | 1.5426 × 101 | 1.5424 × 101 | 1.5769 × 101 | 1.5379 × 101 | 1.5271 × 101 | 1.5326 × 101 |
Cap101 | 1.7894 × 101 | 1.7813 × 101 | 1.7897 × 101 | 1.7471 × 101 | 1.8207 × 101 | 1.7757 × 101 | 1.7825 × 101 | 1.7624 × 101 |
Cap102 | 1.7309 × 101 | 1.7142 × 101 | 1.7241 × 101 | 1.7193 × 101 | 1.7928 × 101 | 1.7429 × 101 | 1.7258 × 101 | 1.7339 × 101 |
Cap103 | 1.7007 × 101 | 1.7084 × 101 | 1.7291 × 101 | 1.7021 × 101 | 1.7876 × 101 | 1.6273 × 101 | 1.5989 × 101 | 1.6092 × 101 |
Cap104 | 1.6057 × 101 | 1.5991 × 101 | 1.6095 × 101 | 1.6019 × 101 | 1.6472 × 101 | 1.5888 × 101 | 1.5921 × 101 | 1.5910 × 101 |
Cap131 | 1.7221 × 101 | 1.7612 × 101 | 1.6910 × 101 | 1.6905 × 101 | 1.7882 × 101 | 1.7015 × 101 | 1.6892 × 101 | 1.6919 × 101 |
Cap132 | 1.6728 × 101 | 1.6731 × 101 | 1.6739 × 101 | 1.6743 × 101 | 1.7667 × 101 | 1.6773 × 101 | 1.6730 × 101 | 1.6761 × 101 |
Cap133 | 1.6660 × 101 | 1.6666 × 101 | 1.6665 × 101 | 1.6665 × 101 | 1.7601 × 101 | 1.6692 × 101 | 1.6654 × 101 | 1.6695 × 101 |
Cap134 | 1.6627 × 101 | 1.6621 × 101 | 1.6633 × 101 | 1.6647 × 101 | 1.7576 × 101 | 1.6656 × 101 | 1.6626 × 101 | 1.6655 × 101 |
Total Rank | 2.0139 × 102 | 2.0182 × 102 | 2.0063 × 102 | 2.0018 × 102 | 2.0869 × 102 | 2.0051 × 102 | 1.9821 × 102 | 1.9884 × 102 |
Finally Rank | 6 | 7 | 5 | 3 | 8 | 4 | 1 | 2 |
Table 23 shows the time performance of BPO2 with different transfer functions. All TFs achieve similar execution times, confirming the algorithm’s efficiency. TF7 records the lowest cumulative time (rank = 1), followed by TF8 (rank = 2) and TF4 (rank = 3), while TF5 has the highest (rank = 8), making TF7 the most efficient option overall.
The Wilcoxon signed-rank test is employed to determine whether there is a statistically significant difference between the performance distributions of the two algorithms. In the table, the symbol (+) indicates that a significant difference exists in favor of the proposed algorithm (
p < 0.05). In contrast, the symbol (-) indicates that no statistically significant difference is observed (
p ≥ 0.05). This analysis provides a clear statistical validation of the similarities and differences among the algorithms under comparison [
61].
Table 24 presents the results of the Wilcoxon signed-rank test conducted between the BPO2 variants using different TFs (i.e., TF1-BPO2 vs. TF2-BPO2, TF2-BPO2 vs. TF3-BPO2, …, TF6-BPO2 vs. TF7-BPO2, TF7-BPO2 vs. TF8-BPO2). The results of
Table 24 show that in the majority of cases the p-values are extremely small (close to zero) and h = +, confirming significant performance differences between TFs. Overall, these outcomes demonstrate that the choice of TF significantly affects BPO2's performance, with most TFs exhibiting statistically distinguishable results across datasets.
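The pairwise test can be reproduced with a small standard-library sketch of the Wilcoxon signed-rank statistic and its normal-approximation p-value, which drives the h = +/− decision; the per-run samples below are illustrative, not the paper's data.

```python
import math

def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank test for paired samples.
    Zero differences are dropped; tied |differences| get averaged ranks;
    the two-sided p-value uses the normal approximation.
    Returns (W, p_value)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks over runs of tied |diff|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma  # z <= 0 because w is the smaller rank sum
    p = 2.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return w, min(p, 1.0)

# One algorithm dominating the other on every run -> significant, h = "+"
w, p = wilcoxon_signed_rank([5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
                            [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
h = "+" if p < 0.05 else "-"
print(w, round(p, 4), h)
```

For production use a library routine (e.g., SciPy's `scipy.stats.wilcoxon`) is preferable, as it also offers exact p-values for small samples.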
Table 25 presents a comparison of BPO2-TF7 with three competing binary optimization algorithms: Binary Honey Badger Algorithm (BinHBA), Binary Aquila Optimizer (BinAO), and Binary Fire Hawk Optimizer (BinFHO) in terms of standard deviation on the UFLP datasets. The results of BinHBA, BinAO, and BinFHO are taken from the literature [
60]. All compared algorithms are executed under the same parameter settings as in
Table 16.
Table 25 shows that BPO2 consistently records a standard deviation of 0 in all datasets, indicating that BPO2 has produced identical results across all independent runs. These findings demonstrate that BPO2 is the most stable and reliable algorithm in terms of robustness. In contrast, the results of BinHBA, BinAO, and BinFHO exhibit considerable fluctuations and inconsistencies.
Table 26 reports the GAP values for BPO2 and the competing algorithms on the UFLP.
In
Table 26, TF7-BPO2 consistently achieves values equal to zero or very close to zero across nearly all instances, indicating that the obtained solutions lie very close to the known optima. By contrast, BinHBA, BinAO, and BinFHO generate considerably higher GAP values, particularly on larger and more challenging datasets such as Cap131–Cap133. These findings clearly demonstrate that TF7-BPO2 provides superior accuracy and reliability compared to the competing algorithms.
Table 27 reports the execution times of BPO2 and the competing algorithms on the UFLP datasets.
In
Table 27, TF7-BPO2 consistently achieves execution times in the narrow range of approximately 15–18 seconds across all datasets, demonstrating remarkable efficiency and stability. In contrast, the competing algorithms require substantially higher times, often exceeding 100 seconds, with BinAO in particular reaching over 300 seconds in larger instances such as Cap131-Cap133. These sharp differences highlight that TF7-BPO2 is not only the fastest algorithm but also the most consistent in terms of runtime performance. While BinHBA, BinAO, and BinFHO exhibit significant variability and scalability issues as the dataset size increases, TF7-BPO2 maintains nearly constant computational costs regardless of problem complexity. This stability ensures that the algorithm remains efficient and scalable, making it highly suitable for large-scale UFLP instances where both solution quality and computational efficiency are critical.
Figure 11 shows the computation times of TF7-BPO2 compared to BinHBA, BinAO, and BinFHO. While BPO2 consistently maintains low and stable execution costs even as the problem scale increases, the other algorithms exhibit substantially higher times accompanied by considerable fluctuations. This result highlights the superior efficiency and robustness of BPO2 in handling larger and more complex instances.