Article

Binary Puma Optimizer: A Novel Approach for Solving 0-1 Knapsack Problems and the Uncapacitated Facility Location Problem

1
Department of Information Technologies Engineering, Selcuk University, 42075 Konya, Turkey
2
Department of Computer Engineering, Selcuk University, 42075 Konya, Turkey
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 9955; https://doi.org/10.3390/app15189955
Submission received: 17 August 2025 / Revised: 3 September 2025 / Accepted: 6 September 2025 / Published: 11 September 2025
(This article belongs to the Special Issue Novel Research and Applications on Optimization Algorithms)

Abstract

In this study, the Binary Puma Optimizer (BPO) is introduced as a novel binary metaheuristic. The BPO employs eight Transfer Functions (TFs), consisting of four S-shaped and four V-shaped mappings, to convert the continuous search space of the original Puma Optimizer into binary form. To evaluate its effectiveness, BPO is applied to two well-known combinatorial optimization problems: the 0-1 Knapsack Problems (KPs) and the Uncapacitated Facility Location Problem (UFLP). The solver tailored for KPs is referred to as BPO1, while the solver for the UFLP is denoted as BPO2. In the UFLP experiments, only TFs are integrated into the solutions. Conversely, in the 0-1 KPs experiments, additional mechanisms are incorporated: (i) greedy-based population strategies; (ii) a crossover operator; (iii) a penalty algorithm; (iv) a repair algorithm; and (v) an improvement algorithm. Unlike KPs, the UFLP has no capacity constraints, as facilities are assumed to be uncapacitated; thus, infeasible solutions and constraint violations cannot occur, making corrective strategies unnecessary, and BPO2 depends solely on TFs for binary adaptation. The proposed algorithms are compared with binary optimization algorithms from the literature. The experimental framework demonstrates the versatility and effectiveness of BPO1 and BPO2 in addressing different classes of binary optimization problems.

1. Introduction

Evolutionary algorithms have long driven innovation in computational problem-solving, drawing inspiration from natural processes and the behavior of simple agents. Driven by the inherent need to adapt and evolve, these algorithms have proven to be effective instruments for addressing intricate optimization challenges [1]. In recent years, metaheuristic algorithms have been proposed as solutions to challenges across multiple disciplines, as the complexity of real-world applications keeps increasing; operations research, robotics, machine learning, bioinformatics, and decision-making are among the primary areas of emphasis. Within the broader context of combinatorial optimization, binary optimization occupies a significant position. Consequently, researchers have initiated a series of scholarly investigations to transition algorithms from continuous optimization to binary optimization. Logic gates, Transfer Functions (TFs), and similarity measurement techniques are frequently employed in these adaptations to produce candidate solutions [2]. Adaptation methods such as TFs, angle modulation, or mod-based functions are required to convert algorithms designed for continuous values into binary-compatible forms [3]. Binary optimization frameworks are further enhanced by integrating mechanisms such as Hill Climbing, crossover operations, greedy strategies, and XOR-based transformations to balance exploration and exploitation. Together, these methods enhance diversity, solution quality, and convergence, resulting in more robust and accurate performance across various problem domains [4].
Among recent metaheuristic algorithms, the Puma Optimizer (PO) was introduced as a novel approach inspired by the life patterns and intelligent behaviors of pumas, with dedicated mechanisms for exploration and exploitation [5]. This study introduces the Binary Puma Optimizer (BPO), a novel binary metaheuristic algorithm designed to enhance the efficiency and effectiveness of binary optimization. The main contributions of this work can be summarized as follows:
  • A novel binary metaheuristic, the BPO, is proposed as a binary adaptation of the original PO. Eight TFs, including four S-shaped and four V-shaped mappings, are systematically integrated into BPO to enable the transformation of continuous search dynamics into binary space.
  • The two proposed variants, BPO1 and BPO2, are specifically designed to address different binary optimization problems. In particular, BPO1 incorporates several distinct mechanisms designed to enhance its performance. The key features of these variants are outlined as follows:
    BPO1, designed to solve 0-1 KPs, incorporates additional mechanisms that ensure the total weight does not exceed the knapsack capacity, correct infeasible solutions, and further improve feasible solutions to achieve higher profits. These additional mechanisms are:
    (i)
    Greedy-based population strategies to generate diverse and high-quality initial solutions;
    (ii)
    Crossover operator to exchange information between parent and candidate solutions and preserve population diversity;
    (iii)
    Penalty Algorithm (PA) to address infeasible solutions;
    (iv)
    Repair Algorithm (RA) to restore feasibility by adjusting infeasible solutions;
    (v)
    Improvement Algorithm (IA) to ensure more effective constraint handling and enhance overall solution quality.
    BPO2 is designed to address the UFLP solely through TFs, without additional auxiliary mechanisms. Unlike KPs, the UFLP does not produce capacity-infeasible solutions; therefore, all generated solutions are inherently feasible, removing the need for corrective strategies or algorithms.
  • A comprehensive benchmarking of this study is conducted, where BPO1 and BPO2 are compared against well-established binary optimization algorithms from the literature, using both performance and solution quality metrics.
  • The experimental findings demonstrated the robustness, versatility, and superior performance of BPO1 and BPO2 in addressing two distinct and challenging classes of binary optimization problems.

2. Literature Review

2.1. Binary Optimization Algorithms

Recent years have witnessed the rapid development of binary variants of nature-inspired metaheuristic algorithms, tailored to address complex combinatorial and other problems. For instance, the Binary Black Widow Optimization Algorithm (BBWO) was explicitly designed for binary optimization tasks [6]. Similarly, the Chimp Optimization Algorithm (COA), inspired by the intelligent problem-solving abilities of chimpanzees [7], offers a versatile approach to a wide range of optimization challenges, and its binary variant, the Binary Chimp Optimization Algorithm (BCOA), was later introduced [8]. Additionally, the Slime Mould Algorithm (SMA), inspired by the decentralized movement patterns of slime molds [9], was extended with the Binary Slime Mould Algorithm (BSMA), which has emerged as a powerful tool for solving various optimization problems [10].
The Dwarf Mongoose Optimization (DMO) simulates the foraging behavior of dwarf mongooses, taking into account their social dynamics and ecological adaptations, thereby addressing classical and benchmark functions as well as a variety of engineering optimization problems [11]. Additionally, the Binary Dwarf Mongoose Optimization (BDMO) was specifically designed to address high-dimensional feature selection problems [12]. The Ebola Optimization Search Algorithm (EOSA) was developed as an optimization algorithm inspired by the Ebola virus's propagation strategy; EOSA endeavored to resolve intricate optimization issues by integrating principles derived from the propagation of natural diseases [13]. The Arithmetic Optimization Algorithm (AOA) employed the distribution of arithmetic operators and developed a mathematical model for optimization objectives [14]. The Binary Arithmetic Optimization Algorithm (BAOA) was introduced in a separate study for feature selection in classification tasks. To better align with the character of feature selection, BAOA converted the search space from continuous to binary using TFs. The classifier implemented a wrapper-based methodology, specifically the K-Nearest Neighbors (KNN) classifier, and its efficacy was reported [15]. Additionally, Artificial Jellyfish Search (AJS) modeled the feeding behavior of jellyfish in the ocean, and a Binary Artificial Jellyfish Search (BinAJS) was proposed to solve KPs. The effects of eight different TFs and five different mutation rates were studied in developing BinAJS, and the optimal mutation rate and TF were determined for each dataset [16].

2.2. Binary Metaheuristic Algorithms and Used TFs

S-shaped functions are generally associated with gradual probability transitions, thus supporting exploration, while V-shaped functions emphasize decisive bit-flips that strengthen exploitation. Adopting both categories together (four S-shaped + four V-shaped) therefore ensures robustness and adaptability while avoiding the limitations of relying solely on one type. In addition, the Binary Ebola Search Optimization Algorithm (BEOSA) was developed to investigate mutations in infected populations during the exploitation and exploration phases; this algorithm utilizes specially designed S-shaped and V-shaped TFs [17]. In another study, a novel population-based optimization algorithm known as Hunger Games Search (HGS) was proposed [18]. Two binary versions of the Hunger Games Search Optimization (HGSO), denoted as BHGSO-V and BHGSO-S, were presented in another study; these versions employed wrapper feature selection models with S-shaped and V-shaped TFs [19]. Subsequently, a novel algorithm named the Binary Aquila Optimizer (BAO) was proposed using S-shaped and V-shaped TFs [20]. Additionally, an Improved Grey Wolf Optimizer (IGWO) employing S-shaped and V-shaped TFs was suggested as a solution to the workflow scheduling problem in cloud computing [21]. Table 1 lists the binary variants of metaheuristic algorithms proposed in recent years, along with the types of TFs (S-shaped and V-shaped) employed in their operation.

2.3. Binary Algorithms for 0-1 KPs

The Quantum-Inspired Wolf Pack Algorithm (QWPA), based on quantum coding, was developed to solve 0-1 KPs and tested on classical and high-dimensional problems [29]. In another study, the Quantum-Inspired Firefly Algorithm with Particle Swarm Optimization (QIFAPSO) was applied to a more complex extension of the classical KPs involving multiple resource constraints. The proposed algorithm integrated the classical Firefly Algorithm, originally designed for continuous problems, with principles from quantum computing and Particle Swarm Optimization (PSO) [30]. Furthermore, the Cohort Intelligence (CI) algorithm was employed to solve 0-1 KPs in an additional study; candidates enhanced their solutions by exchanging knowledge, thereby collectively obtaining superior outcomes, and different instances of 0-1 KPs were subjected to tests [31]. In another study, Binary Dynamic Gray Wolf Optimization (BDGWO), a new variant of Gray Wolf Optimization (GWO), was proposed for solving binary optimization problems. The key advantages of BDGWO compared to other binary GWO variants are the use of a bitwise XOR operation for binarization and a dynamic coefficient method to determine the influence of the three dominant coefficients (alpha, beta, and delta). The proposed BDGWO was tested on 0-1 KPs to determine its success and accuracy [32].

2.4. Development of Binary Algorithms for 0-1 KPs

The Binary Evolutionary Optimizer (BEO) employed eight TFs, including S-shaped and V-shaped types, with the V3-shape proving the most effective; a sigmoid S3-shape curve also showed potential advantages. A PA and an RA were applied to handle infeasible solutions, enabling the proposed method to solve 0-1 KPs, and experimental results demonstrated that BEO-V3 outperformed the other variants [33]. An evolutionary algorithm based on a novel greedy repair strategy was developed to solve Multi-Objective Knapsack Problems (MOKPs); the proposed algorithm first transformed all infeasible solutions into feasible ones and then improved the feasible solutions under the knapsack capacity constraints to achieve higher quality [34]. A Modified Binary Particle Swarm Optimization (MBPSO) algorithm was developed for solving the 0-1 KPs and the Multidimensional Knapsack Problem (MKP). The MBPSO introduced a new probability function that preserved swarm diversity, thereby reducing the risk of premature convergence and making the algorithm more explorative and efficient than the standard Binary Particle Swarm Optimization (BPSO) [35]. The Binary Flower Pollination Algorithm (BFPA) was developed for the 0-1 KPs; in BFPA, a PA was incorporated to penalize infeasible solutions by assigning them negative fitness values [36]. Additionally, a two-stage repair algorithm named Flower Repair (FR) was proposed: FR first applied a repair phase by removing items with the lowest profit-to-weight ratio to ensure feasibility, and then employed an improvement phase to enhance solution quality [37]. The Binary Monarch Butterfly Optimization (BMBO) algorithm was developed to solve the 0-1 KPs. In the BMBO, a hybrid encoding scheme was employed to represent individuals using both real-valued and binary vectors, and a greedy strategy-based repair operator was applied as the Repair Algorithm (RA) to correct capacity-infeasible solutions and improve solution quality.
In addition, different population strategies (BMBO-1, BMBO-2, BMBO-3) were designed and compared [38]. In another study, a hybrid algorithm combining Particle Swarm Optimization (PSO) with genetic operators (mutation and crossover) was proposed, developed specifically for the Multidimensional Knapsack Problem (MKP). In the solution process, particles were updated using the classical PSO mechanism, and random mutation and crossover operations were then applied to the best individuals; penalty functions were also employed to prevent infeasible solutions. The experimental results showed that the proposed method achieved more promising performance than the probability-based binary PSO [39]. Another study proposed a Simplified Binary Harmony Search (BSBHS) algorithm to solve 0-1 KPs; the BSBHS generated new solutions by applying a dynamic Harmony Memory Considering Rate (HMCR) together with a heuristic-based local search to improve solution quality [40].

2.5. Binary Metaheuristic for UFLP

The Binary Galactic Swarm Optimization (BinGSO) was proposed by incorporating the Binary Artificial Algae Algorithm as the search mechanism within GSO, and its performance was subsequently evaluated on the UFLP [41]. Four novel binary metaheuristic algorithms, namely the Binary Coati Optimization Algorithm (BCOA), the Binary Mexican Axolotl Optimization Algorithm (BMAO), the Binary Dynamic Hunting Leadership Optimization (BDHL), and the Binary Aquila Optimizer (BAO), were developed and extensively tested on the UFLP; to evaluate their performance, 15 problem instances from the OR-Lib dataset were employed, and 17 different TFs (S-shaped, V-shaped, and other shapes) were examined [42]. Another study proposed the Binary Grasshopper Optimization Algorithm (BGOA) with a probability-based binarization procedure for solving the UFLP; an α parameter was introduced to enhance diversity and improve the quality of candidate solutions. The algorithm was tested on the CAP and M* datasets, and experimental results showed that the proposed method outperformed state-of-the-art binary algorithms and proved effective for UFLPs [43]. In another study, the Binary Pied Kingfisher Optimizer (BinPKO) was adapted to solve the UFLP; the binary version was tested with 14 TFs on 15 Cap problems, and an enhanced variant incorporating Lévy flight was also proposed. Results showed that TF1 and TF2 provided the best performance [44]. Finally, binary versions of the Arithmetic Optimization Algorithm, namely BinAOA and BinAOAX, were proposed for solving the UFLP; these variants incorporated an XOR-based mechanism for binarization, and their performance was evaluated on the UFLP [45].

2.6. Development of PO

In recent years, various versions of the PO have been proposed to enhance the balance between exploration and exploitation. The Chaotic Puma Optimization Algorithm (CPOA) incorporates chaotic maps into both the exploration and exploitation phases to increase diversity, prevent premature convergence, and achieve more stable convergence dynamics [46]. Furthermore, the integration of the Rao algorithm and the introduction of new position update rules enhanced the exploration capability of PO, which resulted in faster convergence and higher solution quality across continuous, multimodal, and discontinuous functions [47]. Additionally, several variants of PO were developed for real-world applications. The Improved Binary Quantum-Based Puma Optimizer (IBQP) was employed for the optimal placement and sizing of electric vehicle charging stations (EVCS) in microgrids, with experiments performed on the IEEE 33-bus distribution system [48]. These studies clearly demonstrate the flexibility and extensibility of PO in continuous domains, as well as its potential to strengthen the exploration and exploitation balance and serve as a powerful optimization tool when combined with suitable enhancements [46,47,48].

3. Materials and Methods

3.1. Puma Optimizer

PO is a biologically inspired metaheuristic algorithm that models puma hunting behavior as search operators. The search space is treated as a puma's territory; the best solution within the territory represents a male puma, while the remaining solutions represent female pumas. Exploration and exploitation are guided by stalking, concealment, and roaming strategies. The search process consists of two experience-based stages: the Unexperienced phase and the Experienced phase. In the first three iterations (the Unexperienced phase), pumas lack knowledge of their environment and perform exploration and exploitation simultaneously using the functions f1 and f2. Figure 1 illustrates the hunting strategy of the PO. Equations (1) and (2) formulate these behaviors [5].
$$f_1^{Explore} = PF_1 \cdot \frac{Seq_{CostExplore_1}}{Seq_{Time}} \tag{1}$$
$$f_1^{Exploit} = PF_1 \cdot \frac{Seq_{CostExploit_1}}{Seq_{Time}} \tag{2}$$
The second scoring function uses three consecutive improvements, as given in Equations (3) and (4).
$$f_2^{Explore} = PF_2 \cdot \frac{Seq_{CostExplore_1} + Seq_{CostExplore_2} + Seq_{CostExplore_3}}{Seq_{Time_1} + Seq_{Time_2} + Seq_{Time_3}} \tag{3}$$
$$f_2^{Exploit} = PF_2 \cdot \frac{Seq_{CostExploit_1} + Seq_{CostExploit_2} + Seq_{CostExploit_3}}{Seq_{Time_1} + Seq_{Time_2} + Seq_{Time_3}} \tag{4}$$
where $Seq_{Time} = 1$ and $PF_1, PF_2 \in \{0,1\}$ determine the contribution of each function. The sequential cost terms are defined in Equations (5)–(10), where $Cost_{Best}^{Initial}$ denotes the cost of the best solution found during the initialization stage.
$$Seq_{CostExplore_1} = Cost_{Best}^{Initial} - Cost_{Explore_1} \tag{5}$$
$$Seq_{CostExplore_2} = Cost_{Explore_2} - Cost_{Explore_1} \tag{6}$$
$$Seq_{CostExplore_3} = Cost_{Explore_3} - Cost_{Explore_2} \tag{7}$$
$$Seq_{CostExploit_1} = Cost_{Best}^{Initial} - Cost_{Exploit_1} \tag{8}$$
$$Seq_{CostExploit_2} = Cost_{Exploit_2} - Cost_{Exploit_1} \tag{9}$$
$$Seq_{CostExploit_3} = Cost_{Exploit_3} - Cost_{Exploit_2} \tag{10}$$
where $Seq_{CostExplore_1}$, $Seq_{CostExplore_2}$, and $Seq_{CostExplore_3}$ denote the sequential improvements observed during exploration, and $Seq_{CostExploit_1}$, $Seq_{CostExploit_2}$, and $Seq_{CostExploit_3}$ those observed during exploitation. $Cost_{Explore}$ and $Cost_{Exploit}$ denote the costs associated with exploration and exploitation, and $Cost_{Best}^{Initial}$ is the cost of the best initial solution. The costs of the best solutions are recorded step by step as $Cost_{Explore_1}$, $Cost_{Explore_2}$, $Cost_{Explore_3}$, $Cost_{Exploit_1}$, $Cost_{Exploit_2}$, and $Cost_{Exploit_3}$. After the initial three generations, the algorithm enters the Experienced phase, during which pumas have gained sufficient knowledge to determine the most suitable phase (exploration or exploitation) for each iteration. In this stage, three scoring functions (f1, f2, and f3) are applied. Function f1, calculated using Equations (11) and (12), prioritizes the phase that has shown superior performance in previous iterations, with a stronger emphasis on exploration. Functions f2 and f3 further refine phase selection by incorporating performance stability and improvement rates, ensuring a balanced and adaptive search process.
$$Score_{Explore} = PF_1 \cdot f_1^{Explore} + PF_2 \cdot f_2^{Explore} \tag{11}$$
$$Score_{Exploit} = PF_1 \cdot f_1^{Exploit} + PF_2 \cdot f_2^{Exploit} \tag{12}$$
In the Experienced phase, the first function values $f_1^{t,\,exploit}$ and $f_1^{t,\,explore}$ are computed per iteration using Equations (13) and (14).
$$f_1^{t,\,exploit} = PF_1 \cdot \frac{Cost_{old}^{exploit} - Cost_{new}^{exploit}}{T_t^{exploit}} \tag{13}$$
$$f_1^{t,\,explore} = PF_1 \cdot \frac{Cost_{old}^{explore} - Cost_{new}^{explore}}{T_t^{explore}} \tag{14}$$
where $Cost_{old}^{explore}$ and $Cost_{new}^{explore}$ indicate the best solution costs before and after the enhancement of the selection. Furthermore, $T_t^{explore}$ indicates the number of unselected iterations between the prior and current alternatives. Before optimization, the user configures $PF_1$ to either "0" or "1", which reflects the algorithm's dependence on the principal function. Equations (15) and (16) implement the second function.
$$f_2^{t,\,exploit} = PF_2 \cdot \frac{(Cost_{Old,1}^{exploit} - Cost_{New,1}^{exploit}) + (Cost_{Old,2}^{exploit} - Cost_{New,2}^{exploit}) + (Cost_{Old,3}^{exploit} - Cost_{New,3}^{exploit})}{T_{t,1}^{exploit} + T_{t,2}^{exploit} + T_{t,3}^{exploit}} \tag{15}$$
$$f_2^{t,\,explore} = PF_2 \cdot \frac{(Cost_{Old,1}^{explore} - Cost_{New,1}^{explore}) + (Cost_{Old,2}^{explore} - Cost_{New,2}^{explore}) + (Cost_{Old,3}^{explore} - Cost_{New,3}^{explore})}{T_{t,1}^{explore} + T_{t,2}^{explore} + T_{t,3}^{explore}} \tag{16}$$
where $f_2^{t,\,exploit}$ and $f_2^{t,\,explore}$ represent the second function values for the exploitation and exploration phases at iteration t. The significance of the second function depends on $PF_2$, which may be either "0" or "1". Equations (17) and (18) reward the underrepresented phase in order to delay premature convergence.
$$f_3^{t,\,exploit} = \begin{cases} 0, & \text{if exploitation is selected} \\ f_3^{t,\,exploit} + PF_3, & \text{otherwise} \end{cases} \tag{17}$$
$$f_3^{t,\,explore} = \begin{cases} 0, & \text{if exploration is selected} \\ f_3^{t,\,explore} + PF_3, & \text{otherwise} \end{cases} \tag{18}$$
where $f_3^{t,\,explore}$ and $f_3^{t,\,exploit}$ denote the third function for the exploration and exploitation phases, and t represents the current iteration number. In each iteration, the third function of the unselected phase is augmented by the parameter $PF_3$, while the value of the selected phase remains at "0". Equations (19)–(23) assess the efficacy of each optimization phase when deciding whether to transition between exploitation and exploration. After population classification, pumas improve the solution by implementing Equation (24). The parameters α and δ are altered dynamically in both phases, contingent on the performance of each phase during the search: the α of the winning phase takes its maximum value, while the α of the losing phase is penalized linearly by 0.01 per iteration. The term lc is derived from the cost improvements achieved by exploration and exploitation and is non-negative. Equations (25) and (26) are applied stochastically, depending on the situation.
$$F_t^{exploit} = \alpha_t^{exploit} \cdot f_1^{t,\,exploit} + \alpha_t^{exploit} \cdot f_2^{t,\,exploit} + \delta_t^{exploit} \cdot lc \cdot f_3^{t,\,exploit} \tag{19}$$
$$F_t^{explore} = \alpha_t^{explore} \cdot f_1^{t,\,explore} + \alpha_t^{explore} \cdot f_2^{t,\,explore} + \delta_t^{explore} \cdot lc \cdot f_3^{t,\,explore} \tag{20}$$
$$lc = \max\!\big( (Cost_{old} - Cost_{New})_{exploitation},\ (Cost_{old} - Cost_{New})_{exploration} \big), \qquad 0 \le lc \tag{21}$$
$$\alpha_t^{explore,\,exploit} = \begin{cases} \alpha^{exploit} = 0.99,\ \ \alpha^{explore} = \max(\alpha^{explore} - 0.01,\ 0.01), & \text{if } F^{exploit} > F^{explore} \\ \alpha^{explore} = 0.99,\ \ \alpha^{exploit} = \max(\alpha^{exploit} - 0.01,\ 0.01), & \text{otherwise} \end{cases} \tag{22}$$
$$\delta_t^{exploit} = 1 - \alpha_t^{exploit}, \qquad \delta_t^{explore} = 1 - \alpha_t^{explore} \tag{23}$$
$$\text{If } rand_1 > 0.5:\quad Z_{i,G} = R_{Dim} \cdot (U_b - L_b) + L_b \tag{25}$$
$$\text{Otherwise:}\quad Z_{i,G} = X_{a,G} + G \cdot (X_{a,G} - X_{b,G}) + G \cdot \big[((X_{a,G} - X_{b,G}) - (X_{c,G} - X_{d,G})) + ((X_{c,G} - X_{d,G}) - (X_{e,G} - X_{f,G}))\big], \quad G = 2 \cdot rand_2 - 1 \tag{26}$$
Equations (25) and (26) outline the process of generating new solutions during exploration. If a randomly generated number, $rand_1$, exceeds 0.5, the new solution $Z_{i,G}$ is generated from random dimensions within the problem's bounds (Equation (25)). Otherwise, the solution is derived from a combination of the existing solutions $X_{a,G}$, $X_{b,G}$, $X_{c,G}$, $X_{d,G}$, $X_{e,G}$, and $X_{f,G}$ from the population, adjusted by a random factor (Equation (26)). Dimension replacement during iterations is determined by Equations (27)–(30), which promotes diversity.
$$X_{new}^{(j)} = \begin{cases} Z_{i,G}^{(j)}, & \text{if } j = j_{rand} \text{ or } rand_3 \le U \\ X_{a,G}^{(j)}, & \text{otherwise} \end{cases} \tag{27}$$
$$NC = 1 - U \tag{28}$$
$$p = \frac{NC}{N_{pop}} \tag{29}$$
$$\text{if } Cost_{new} < Cost_i:\quad U = U + p \tag{30}$$
where $j_{rand}$ is a randomly generated integer index and $rand_3$ is a randomly generated number. $U$, a parameter ranging between "0" and "1", guides solution updates, and the value of $G$ is calculated using another randomly generated number. Finally, newly generated solutions replace current ones using Equation (31).
$$X_{a,G} = X_{new}, \quad \text{if } Cost(X_{new}) < Cost(X_{a,G}) \tag{31}$$
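As an illustration, the exploration step sketched by Equations (25)–(31) can be written in a few lines of NumPy. This is a sketch under assumptions, not the exact PO implementation: the choice of the six distinct indices and the fall-back to the current solution during dimension-wise replacement are simplifications for readability.

```python
import numpy as np

def explore_step(pop, i, lb, ub, U, rng):
    """Sketch of the PO exploration step for solution i.

    pop : (npop, dim) array of candidate solutions (npop >= 6)
    lb, ub : per-dimension lower/upper bounds
    U : dimension-replacement probability, in (0, 1)
    rng : np.random.Generator
    """
    npop, dim = pop.shape
    if rng.random() > 0.5:
        # Fully random re-initialization inside the bounds (Eq. (25))
        z = lb + rng.random(dim) * (ub - lb)
    else:
        # Differential combination of six distinct solutions (Eq. (26))
        a, b, c, d, e, f = rng.choice(npop, size=6, replace=False)
        G = 2 * rng.random() - 1
        z = (pop[a] + G * (pop[a] - pop[b])
             + G * (((pop[a] - pop[b]) - (pop[c] - pop[d]))
                    + ((pop[c] - pop[d]) - (pop[e] - pop[f]))))
    # Dimension-wise replacement (Eq. (27)): keep z[j] for j_rand or with prob. U;
    # the fallback to pop[i] (rather than X_a) is a simplifying assumption.
    j_rand = rng.integers(dim)
    mask = rng.random(dim) <= U
    mask[j_rand] = True
    x_new = np.where(mask, z, pop[i])
    return np.clip(x_new, lb, ub)
```

In a full solver, the returned vector would replace its parent only when its cost improves, as in Equation (31), and $U$ would be adapted per Equations (28)–(30).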
To optimize solutions, the PO algorithm employs two distinct operators during the exploitation phase, inspired by the hunting strategies of pumas: pursuit (dashing) and ambush. These attacks are formulated in Equation (32).
$$X_{new} = \begin{cases} \dfrac{\operatorname{mean}(Sol_{total})}{N_{pop}} \cdot X_r^1 - (-1)^{\beta} \cdot \dfrac{X_i}{1 + \alpha \cdot rand_5}, & \text{if } rand_4 \le 0.5 \\[6pt] Puma_{male} + 2 \cdot rand_7 \cdot \exp(randn_1) \cdot (X_r^2 - X_i), & \text{else if } rand_6 \le L \\[6pt] \dfrac{2 \cdot rand_8 \cdot F_1 \cdot R \cdot X_i + F_2 \cdot (1 - R) \cdot Puma_{male}}{2 \cdot rand_9 \cdot (1 + randn_2)} - Puma_{male}, & \text{otherwise} \end{cases} \tag{32}$$
The dash (rapid running) and ambush attacks are selected according to the random numbers $rand_4$ and $rand_6$ in Equation (32). $X_r^1$ is a random solution, $X_i$ is the current solution, and $Puma_{male}$ is the best solution. The randomly produced $randn_1$ and $randn_2$ follow a normal distribution. $X_r^2$ is another random solution from the population, selected via Equations (33) and (34). In Equation (35), $randn_3$ is a normally distributed random number. In Equations (36)–(38), $randn_4$ and $randn_5$ represent randomly generated numbers following a normal distribution.
$$X_r^2 = X_{\operatorname{round}(1 + (N_{pop} - 1) \cdot rand_{10})} \tag{33}$$
$$R = 2 \cdot rand_{11} - 1 \tag{34}$$
$$F_1 = randn_3 \cdot \exp\!\left(2 - Iter \cdot \frac{2}{MaxIter}\right) \tag{35}$$
$$F_2 = w \cdot v^2 \cdot \cos(2 \cdot rand_{12} \cdot w) \tag{36}$$
$$w = randn_4 \tag{37}$$
$$v = randn_5 \tag{38}$$
where $N_{pop}$ is the total number of pumas and $rand_{10}$ is a randomly generated number between "0" and "1" (the index in Equation (33) is rounded to the nearest integer); $\cos$ denotes the cosine function and $rand_{12}$ is a randomly generated number between "0" and "1" [5].

3.2. Transfer Functions

Binary optimization algorithms are derived from continuous algorithms by converting continuous variables into binary values. Transfer Functions (TFs) facilitate this process by mapping continuous inputs to binary outputs. TFs include sigmoid, step, threshold, linear, and piecewise-linear functions. The sigmoid function projects inputs onto the interval [0, 1]. In general, TFs discretize the search space by assigning a value of 0 or 1 based on a specified probability or threshold [42]. The mathematical formulations of TFs are presented in Table 2. Figure 2 illustrates two representative types, namely S-shaped and V-shaped TFs.
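To make the mapping concrete, the following sketch shows one S-shaped TF (the standard sigmoid) and one V-shaped TF (based on |tanh|), together with the binarization rules commonly paired with each shape. These particular formulas and rules are conventional choices for illustration and are not claimed to be the exact eight TFs used in BPO.

```python
import math
import random

def s_shaped(x):
    """S1 sigmoid transfer function: maps a continuous value into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """V-shaped transfer function based on |tanh(x)|."""
    return abs(math.tanh(x))

def binarize_s(x, rng=random):
    """S-shaped rule: the bit becomes 1 with probability s_shaped(x)."""
    return 1 if rng.random() < s_shaped(x) else 0

def binarize_v(x, current_bit, rng=random):
    """V-shaped rule: flip the current bit with probability v_shaped(x)."""
    return 1 - current_bit if rng.random() < v_shaped(x) else current_bit
```

Note the behavioral difference the section describes: the S-shaped rule assigns the bit directly from a probability (gradual transitions), whereas the V-shaped rule decides whether to flip the existing bit (decisive bit-flips).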

3.3. Knapsack Problems

The 0-1 Knapsack Problem (KP) is a classical combinatorial optimization problem that involves selecting a subset of items from a larger set to maximize overall profit within a specific weight limit [32]. Each item is associated with a specific weight and profit, and a binary variable determines whether it is included in the knapsack. The objective is to maximize total profit while ensuring that the total weight of the selected items does not exceed the knapsack capacity [49]. In this representation, a value of “1” indicates that an item is selected, whereas a value of “0” indicates that it is not. This binary encoding efficiently represents potential solutions, enabling the optimization process to balance capacity utilization and profit maximization [16]. Finally, 0-1 KPs can mathematically be formulated as:
$$\text{Maximize} \sum_{i=1}^{n} p_i x_i \quad \text{subject to} \quad \sum_{i=1}^{n} w_i x_i \le c, \qquad x_i \in \{0, 1\},\ i = 1, \ldots, n, \qquad p_i > 0,\ w_i > 0,\ c > 0 \tag{39}$$
where $n$ is the total number of items, $w_i$ is the weight of item $i$, $p_i$ is its profit, and $c$ is the maximum capacity of the knapsack. A solution is identified by the items with $x_i = 1$, which are placed in the knapsack, while items with $x_i = 0$ are not selected [33].
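A feasibility-aware evaluation of the objective in Equation (39) can be sketched as follows (the function name is illustrative):

```python
def knapsack_value(bits, profits, weights, capacity):
    """Total profit of a 0-1 knapsack solution, or None if it is infeasible."""
    weight = sum(w for w, b in zip(weights, bits) if b)
    if weight > capacity:
        return None  # violates the capacity constraint of Eq. (39)
    return sum(p for p, b in zip(profits, bits) if b)

# Small example: 3 items, capacity 50.
profits, weights, capacity = [60, 100, 120], [10, 20, 30], 50
```

For instance, the solution [0, 1, 1] has weight 50 and profit 220, while [1, 1, 1] weighs 60 and is infeasible; the next subsections describe how such infeasible solutions are handled.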

3.4. Penalty Algorithm

Infeasible solutions occur when their total weight exceeds the capacity constraint. In such cases, a Penalty Algorithm (PA) is applied to prevent the solution from being selected, even though it provides a high profit. In this approach, the fitness value of each infeasible solution is converted to a negative value so that the algorithm does not prefer this solution as the best candidate [33]. Figure 3 depicts the pseudo-code of the PA.
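The idea behind the PA can be sketched as below. The exact penalty magnitude in the paper's pseudo-code (Figure 3) is not reproduced here; making the negative value proportional to the capacity violation is an assumption for illustration, chosen so that infeasible solutions are never preferred.

```python
def penalized_fitness(bits, profits, weights, capacity):
    """PA-style fitness sketch: feasible solutions return their profit,
    infeasible ones a negative value proportional to the violation
    (the proportional form is an assumption, not the paper's exact rule)."""
    weight = sum(w for w, b in zip(weights, bits) if b)
    profit = sum(p for p, b in zip(profits, bits) if b)
    if weight <= capacity:
        return profit
    return -(weight - capacity)  # negative, so never chosen as best
```

Because every feasible solution has a non-negative profit, any negative fitness marks the solution as infeasible and excludes it from becoming the best candidate.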

3.5. Fixing Infeasible Solution

The Repair Algorithm (RA) and the Improvement Algorithm (IA) constitute two essential procedures for refining candidate solutions in binary optimization. The RA is primarily responsible for restoring feasibility by correcting solutions that violate problem-specific constraints, such as exceeding the knapsack capacity. It systematically adjusts the solution structure to ensure compliance with the feasibility requirements of the problem. Once feasibility is achieved, the IA operates to enhance the overall solution quality. This is accomplished by strategically modifying feasible solutions such as replacing or reordering selected items in order to increase total profit while maintaining constraint satisfaction. Collectively, these two procedures form a complementary mechanism: the RA ensures the validity of solutions, while the IA improves their effectiveness within the feasible search space [33].
In RA, items are prioritized for removal based on their profit-to-weight ratio ( p i / w i ), with the item having the lowest ratio eliminated first. In the case of ties, the item with the smallest absolute profit is removed. This process is repeated until the total weight satisfies the knapsack capacity. The pseudo-code of RA is presented in Figure 4.
In the IA, when multiple items have equal ratios, the item with the highest profit is selected. After each addition, the objective function is recalculated. To reduce computational cost, IA evaluates only the top-ranked subset of the remaining items. If the updated solution becomes infeasible, the last added item is removed and the process is terminated. The pseudo-code of IA is provided in Figure 5 [33].
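Based on the descriptions above, the RA and IA can be sketched as follows. This is a simplified illustration: the IA below scans all remaining items in ratio order, whereas the paper's IA evaluates only a top-ranked subset to reduce computational cost.

```python
def repair(bits, profits, weights, capacity):
    """RA sketch: while over capacity, drop the selected item with the lowest
    profit-to-weight ratio (ties broken by the smallest profit)."""
    bits = list(bits)
    weight = sum(w for w, b in zip(weights, bits) if b)
    while weight > capacity:
        selected = [i for i, b in enumerate(bits) if b]
        worst = min(selected, key=lambda i: (profits[i] / weights[i], profits[i]))
        bits[worst] = 0
        weight -= weights[worst]
    return bits

def improve(bits, profits, weights, capacity):
    """IA sketch: greedily add unselected items in decreasing ratio order
    while the knapsack capacity still holds."""
    bits = list(bits)
    weight = sum(w for w, b in zip(weights, bits) if b)
    order = sorted((i for i, b in enumerate(bits) if not b),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    for i in order:
        if weight + weights[i] <= capacity:
            bits[i] = 1
            weight += weights[i]
    return bits
```

Running `repair` on an overweight solution restores feasibility, and `improve` then fills any remaining capacity, mirroring the complementary roles described above.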

3.6. Uncapacitated Facility Location Problem

The Uncapacitated Facility Location Problem (UFLP) consists of selecting, from a set of candidate sites, the facilities that will serve a set of customers, under the assumptions that facilities have no capacity limits and that each customer is served by exactly one facility. Locating factories, warehouses, or power transmission lines is a critical location decision problem [50,51]. The UFLP evaluates both the location of facilities and the associated cost of serving customers; the primary objective is to minimize the overall cost of business operations by carefully organizing the layout of facilities [42]. UFLP solutions are represented by a binary vector indicating whether each facility is open or closed, and various arrangements of the facilities are examined. The larger the dimensions of a problem, the harder it becomes; the UFLP is an NP-hard problem [52]. The goal is thus to determine the most appropriate locations for the facilities and the most effective way of serving the clients, thereby reducing the company's overall operating expenses [53]. The objective function is the total cost of providing services to customers and opening the facilities, and the aim is to find the most cost-effective assignment of facilities to customers [54]. A mathematical definition of the UFLP is provided in Equation (40).
\[
\begin{aligned}
\text{Minimize} \quad & f(W, X) = \sum_{i=1}^{C} \sum_{j=1}^{F} \mathrm{serviceCost}_{ij} \cdot w_{ij} + \sum_{j=1}^{F} \mathrm{openingCost}_{j} \cdot x_{j} \\
\text{Subject to} \quad & \sum_{j=1}^{F} w_{ij} = 1, \quad i = 1, 2, \ldots, C \\
& w_{ij} =
\begin{cases}
1 & \text{if customer } i \text{ gets service from facility } j \\
0 & \text{otherwise}
\end{cases} \\
& x_{j} =
\begin{cases}
1 & \text{if facility } j \text{ is open} \\
0 & \text{otherwise}
\end{cases} \\
& w_{ij} \le x_{j}, \quad i = 1, \ldots, C \text{ and } j = 1, 2, \ldots, F
\end{aligned}
\]
where W = [w_ij]_(C×F) indicates whether the i-th customer receives service from the j-th facility, and X = [x_j]_(1×F) indicates whether the j-th facility is open. C is the number of customers, and F is the number of facilities. serviceCost_ij represents the cost of serving the i-th customer from the j-th facility, and openingCost_j represents the cost incurred by opening the j-th facility [42].
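A minimal sketch of evaluating the Equation (40) objective, under the standard assumption that each customer is assigned to the cheapest open facility; the function name is illustrative.

```python
def uflp_cost(open_mask, service_cost, opening_cost):
    """open_mask[j] in {0, 1}; service_cost[i][j] = cost of serving
    customer i from facility j; opening_cost[j] = cost of opening j."""
    open_facilities = [j for j, x in enumerate(open_mask) if x]
    if not open_facilities:
        return float("inf")  # at least one facility must be open
    total = sum(opening_cost[j] for j in open_facilities)
    # each customer is served by its cheapest open facility
    for row in service_cost:
        total += min(row[j] for j in open_facilities)
    return total
```

For instance, with service costs [[1, 4], [3, 2]] and opening costs [5, 6], opening only the first facility yields a total cost of 5 + 1 + 3 = 9.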

3.7. Crossover Operation

Crossover is a genetic operator widely used in evolutionary and metaheuristic algorithms. It produces new offspring solutions by combining genetic information from two parent solutions according to predefined rules. This process aims to increase diversity in the search space, facilitate information exchange from high-quality solutions, and improve the overall performance of the algorithm [55,56]. Crossover operators can be implemented through different strategies:
  • One-point crossover: A random cut-point is selected, and all genes beyond this point are exchanged between the two parent solutions.
  • Two-point crossover: Two crossover points are selected randomly, and the segment between them is swapped between the parent solutions.
  • Uniform crossover: Each gene is independently inherited from one of the parent solutions with a predefined probability.
In general, crossover enhances information sharing among solutions, preserves diversity, and strengthens the exploitation capability of the algorithm [57]. In this study, a hybrid crossover approach governed by the crossover probability (pCR) is adopted. In this mechanism, a random crossover point is selected to ensure that at least one gene from the candidate solution is transferred to the offspring, thereby guaranteeing structural diversity and preventing the production of identical offspring. For the remaining genes, the uniform crossover principle is applied, where each gene is independently inherited from either the candidate or the current solution, according to the probability pCR. For example, when pCR = 0.2, approximately 20% of the genes are expected to be inherited from the candidate solution and 80% from the parent solution. Moreover, to prevent invalid offspring (e.g., all-zero vectors), at least one element is enforced to remain active. This additional mechanism guarantees feasibility and eliminates the generation of meaningless solutions.
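The hybrid crossover above can be sketched as follows; this is a minimal illustration consistent with the description, with illustrative names.

```python
import random

def hybrid_crossover(current, candidate, p_cr=0.2, rng=random):
    """One randomly chosen gene is always inherited from the candidate;
    the rest follow uniform crossover with probability p_cr; an all-zero
    offspring is repaired by activating one random position."""
    n = len(current)
    j0 = rng.randrange(n)  # gene guaranteed to come from the candidate
    child = [candidate[j] if j == j0 or rng.random() < p_cr else current[j]
             for j in range(n)]
    if not any(child):  # forbid meaningless all-zero offspring
        child[rng.randrange(n)] = 1
    return child
```

With pCR = 0.2, roughly one in five of the remaining genes comes from the candidate, while the enforced position j0 guarantees the offspring is never identical to the current solution by construction of inheritance.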

3.8. Greedy-Based Population Strategies

The greedy-based population strategy is widely employed to generate high-quality initial solutions in combinatorial optimization problems such as the 0-1 KPs. In this strategy, items are ranked according to their profit-to-weight ratio, and those with higher efficiency are selected first to construct candidate solutions. This ensures that the initial population is composed of solutions with relatively good objective values, which can accelerate convergence in the subsequent search process. To prevent premature convergence and preserve diversity, random perturbations (noise) are incorporated into the greedy ordering. By combining deterministic efficiency-based selection with controlled randomness, this approach strikes a balance between solution quality and population diversity, thereby enhancing the overall effectiveness of the optimization process [58,59].
The greedy-based population strategy is designed to generate high-quality yet diverse solutions in the initial population. For each item, the profit-to-weight ratio ( p i / w i ) is first calculated to measure its efficiency. To avoid deterministic ordering that could lead to a homogeneous population, a small random noise is added to these efficiency values. The items are then sorted in descending order based on the perturbed efficiency scores. Following this ranking, items are sequentially inserted into the knapsack as long as the total weight does not exceed the capacity. In this way, the most efficient items are selected first, and the process continues until the knapsack is filled. This strategy ensures that the generated solutions are feasible with respect to the capacity constraint while maintaining the high-profit tendency of the greedy principle. Furthermore, the introduction of random noise preserves diversity within the population, prevents premature convergence, and contributes to a more balanced algorithm between exploration and exploitation throughout the optimization process [59].
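The construction just described can be sketched as follows. The additive uniform noise is one plausible realization of the perturbation; the noise amplitude (0.10 in Table 3) and all names are illustrative.

```python
import random

def greedy_solution(profits, weights, capacity, noise=0.10, rng=random):
    """Greedy construction with noisy profit/weight efficiency scores."""
    # perturbed efficiency: p_i / w_i plus small uniform noise
    scores = [p / w + rng.uniform(-noise, noise)
              for p, w in zip(profits, weights)]
    order = sorted(range(len(profits)), key=lambda i: scores[i], reverse=True)
    solution, load = [0] * len(profits), 0
    for i in order:  # insert items while the capacity allows
        if load + weights[i] <= capacity:
            solution[i] = 1
            load += weights[i]
    return solution
```

Every solution produced this way is feasible by construction, while repeated calls yield slightly different orderings and hence a diverse population.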
In conclusion, the greedy-based population strategy ensures the generation of high-quality solutions, while the incorporation of random noise preserves diversity within the population. This combination provides a robust starting point that supports both exploration and exploitation capabilities in subsequent iterations of the algorithm.

4. The Binary Puma Optimizer

In this study, two distinct variants of the proposed Binary Puma Optimizer (BPO) have been developed. These variants are designed using TFs (four S-shaped and four V-shaped) and are tailored to the 0-1 Knapsack Problem (KP) and the Uncapacitated Facility Location Problem (UFLP), respectively. The following subsections describe the structural characteristics and design details of the proposed BPO and its variants.

4.1. TFs-Based Binary Transformation

In the standard BPO framework, each continuous candidate solution is mapped into the binary domain. Specifically, the selected TF transforms each continuous value into a probability within the interval [0, 1]. This probability is then compared with a uniformly distributed random number rand ∈ (0, 1) to decide the binary state of each dimension. In this study, eight TFs comprising four S-shaped and four V-shaped functions are investigated to assess their effectiveness in guiding the search process within the binary domain. Accordingly, BPO1 converts continuous values into binary representations to solve the 0-1 KPs. The binarization process is expressed in Equation (41).
\[
CS_{i}^{dim} =
\begin{cases}
1, & \text{if } rand < TF\left(CS_{i}^{dim}\right) \\
0, & \text{if } rand \ge TF\left(CS_{i}^{dim}\right)
\end{cases}
\]
where CS_i^dim denotes the value of the dim-th component of the i-th candidate solution, TF(CS_i^dim) is the TF that maps a continuous value into a probability in the range [0, 1], and rand is a uniformly distributed random variable within (0, 1).
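The binarization rule in Equation (41) can be sketched with one S-shaped and one V-shaped mapping as examples. The study uses four functions of each family; the specific sigmoid and |tanh| forms below are common choices in the binary-metaheuristic literature, not necessarily the exact ones used here.

```python
import math
import random

def s_shaped(x):
    return 1.0 / (1.0 + math.exp(-x))  # classic sigmoid, S-shaped

def v_shaped(x):
    return abs(math.tanh(x))           # |tanh|, V-shaped

def binarize(continuous, tf=s_shaped, rng=random):
    """Equation (41): bit = 1 iff rand < TF(value), per dimension."""
    return [1 if rng.random() < tf(x) else 0 for x in continuous]
```

Large positive continuous values map to probabilities near 1 under the S-shaped function, so the corresponding bits are almost always set; a zero value under the V-shaped function maps to probability 0 and the bit stays cleared.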

4.2. Binary Puma Optimizer-1 for 0-1 KP

A novel algorithm, termed the Binary Puma Optimizer-1 (BPO1), is proposed in this study as a binary adaptation of the BPO, which is initially designed for continuous domains. Since the direct application of BPO to binary optimization problems, such as the 0-1 KP, is not feasible, both the initialization strategies and the iterative update mechanisms of the algorithm have been redesigned and augmented with TFs specifically tailored for the binary search space.
Given the characteristics of the 0-1 KP, infeasible solutions frequently violate the capacity constraint. When such violations are addressed solely through penalty functions, solution quality tends to deteriorate, leading to inefficiencies in the search process. To overcome this limitation, BPO1 employs different mechanisms. These mechanisms not only ensure feasibility but also improve the overall quality of solutions, thereby enhancing the algorithm’s effectiveness in solving constrained binary optimization problems.

4.2.1. Greedy-Based Population Strategies and TFs-Based Binary Transformation

The quality of the initial population strongly influences the efficiency of the search process. To address this, BPO1 employs a hybrid initialization strategy that ensures both diversity and solution quality.
  • Random populations: Half of the population is generated as random solutions that strictly satisfy the capacity constraint. This approach enhances diversity by introducing a wide range of candidate solutions.
  • Greedy-based population strategy: The remaining half is constructed using a greedy strategy in which items are ranked according to their profit-to-weight ratio. This ensures that solutions are biased toward higher quality in terms of profitability.
By combining random and greedy-based strategies, the hybrid population initialization provides high-quality candidate solutions in the early stages while maintaining sufficient exploratory capacity to guide the search toward the global optimum. Furthermore, once the candidate solutions are generated, they are transformed into the binary domain using the TFs-based binarization procedure expressed in Equation (41). This process ensures that continuous values are effectively converted into binary representations, thereby enabling BPO1 to operate efficiently within the 0-1 search space.
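The hybrid initialization can be sketched as follows; helper names are illustrative, and the 50/50 split follows the description above.

```python
import random

def random_feasible(profits, weights, capacity, rng=random):
    """Random solution that strictly satisfies the capacity constraint."""
    sol, load = [0] * len(weights), 0
    for i in rng.sample(range(len(weights)), len(weights)):
        if rng.random() < 0.5 and load + weights[i] <= capacity:
            sol[i], load = 1, load + weights[i]
    return sol

def greedy_feasible(profits, weights, capacity):
    """Deterministic greedy solution ordered by profit/weight ratio."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    sol, load = [0] * len(profits), 0
    for i in order:
        if load + weights[i] <= capacity:
            sol[i], load = 1, load + weights[i]
    return sol

def init_population(n, profits, weights, capacity):
    """Half random feasible solutions, half greedy-based solutions."""
    half = n // 2
    pop = [random_feasible(profits, weights, capacity) for _ in range(half)]
    pop += [greedy_feasible(profits, weights, capacity) for _ in range(n - half)]
    return pop
```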

4.2.2. Crossover Operator

The newly generated candidate solution, y is recombined with the current solution, x, through a crossover operator. This procedure is executed as follows:
  • A random position j0 is selected, and the corresponding component is always inherited from y, ensuring that the offspring differs from the parent.
  • For all other dimensions, values are inherited from y with a predefined crossover probability pCR otherwise, the values of x are retained.
  • As a result, the offspring solution z combines the features of both the parent solution x and the candidate solution y.
  • To prevent the generation of empty solutions (i.e., solutions with no selected items), at least one item is enforced by randomly assigning a value of 1 to one position if necessary.
This crossover operator maintains population diversity while simultaneously enabling information exchange between parent and candidate solutions, thereby contributing to the effective guidance of the evolutionary search process.

4.2.3. Penalty, Repair, and Improvement Algorithm

Due to the inherent nature of the 0-1 KPs, violations of the capacity constraint frequently result in infeasible solutions. To address this problem, BPO1 employs a three-stage constraint-handling mechanism composed of the following procedures:
  • Penalty Algorithm (PA): Infeasible solutions that exceed the capacity constraint are penalized according to the degree of violation. This reduces their fitness values and decreases their likelihood of being selected in subsequent iterations.
  • Repair Algorithm (RA): Infeasible solutions are iteratively corrected by removing items with the lowest profit-to-weight ratio until the total weight satisfies the knapsack capacity, thereby restoring feasibility.
  • Improvement Algorithm (IA): Once feasibility is ensured, solutions are further refined by incorporating items with the highest profit-to-weight ratio, provided the capacity constraint is not violated. This process increases the overall profit and enhances solution quality.
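The PA step can be sketched as below; penalizing in proportion to the capacity violation is one common realization, and the penalty coefficient and linear form are assumptions rather than values from the paper.

```python
def penalized_fitness(solution, profits, weights, capacity, coeff=10.0):
    """PA sketch: subtract a penalty proportional to the capacity violation.
    Feasible solutions are returned unchanged (violation = 0)."""
    profit = sum(p for s, p in zip(solution, profits) if s)
    weight = sum(w for s, w in zip(solution, weights) if s)
    violation = max(0, weight - capacity)
    return profit - coeff * violation
```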
BPO1 is proposed as a binary adaptation of the original PO, redesigned to address the specific requirements of 0-1 KPs. To generate a strong starting point, BPO1 employs a hybrid initialization strategy that combines random feasible solutions with greedy-based solutions guided by profit-to-weight ratios, ensuring both diversity and quality. Continuous candidate solutions are transformed into binary form using eight TFs, including four S-shaped and four V-shaped mappings. A tailored crossover operator enables effective information exchange between parent and candidate solutions while preserving diversity. To handle infeasible solutions, BPO1 integrates a three-stage mechanism: infeasible solutions are first penalized, then repaired by eliminating low-efficiency items, and finally improved through greedy insertion of high-efficiency items. Collectively, these mechanisms ensure feasibility, enhance solution quality, and improve the algorithm’s overall effectiveness in solving complex constrained binary optimization problems. The flowchart of the proposed BPO1 algorithm applied to 0-1 KPs is presented in Figure 6. Figure 7 presents the step-by-step pseudo-code of the proposed BPO1 algorithm designed for solving 0-1 KPs.

4.3. Binary Puma Optimizer-2 for UFLP

Secondly, this study introduces the Binary Puma Optimizer-2 (BPO2), a binary adaptation of the BPO specifically designed to solve the UFLP. Similarly to other binary variants of metaheuristic algorithms, BPO2 utilizes TFs as the primary mechanism for mapping continuous update values into binary decision variables. In this study, eight TFs are examined, including four S-shaped and four V-shaped mappings.
For the UFLP, the adaptation procedure follows the same principle applied in other binary algorithms. The initial population is generated directly as binary {0, 1} vectors to represent facility-opening and assignment decisions, thereby eliminating unnecessary transformation steps. During subsequent iterations, candidate solutions are updated using the exploration and exploitation operators of the original PO. The resulting continuous values are then passed through the selected TFs, which transform them into probabilities within the interval [0, 1]. Each probability is compared against a uniformly distributed random number rand ∈ (0, 1), and the binary state of each dimension is then determined using the binarization rule in Equation (41).
Once binary solutions are generated, they are evaluated using the UFLP objective function, which minimizes total facility-opening and assignment costs. The best-performing solutions are retained through selection mechanisms, guiding the population toward more cost-efficient configurations.
The cycle of continuous updating, TFs based binarization, and solution evaluation is repeated across generations until the termination criterion is satisfied, either by reaching a predefined number of iterations or by achieving convergence.
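This cycle can be sketched in miniature as follows. A Gaussian perturbation stands in for the PO exploration and exploitation operators, which are considerably more elaborate; all names and the perturbation itself are assumptions for illustration only.

```python
import math
import random

def uflp_cost(mask, service_cost, opening_cost):
    """UFLP objective: opening costs plus cheapest-open-facility service."""
    open_j = [j for j, x in enumerate(mask) if x]
    if not open_j:
        return float("inf")
    return (sum(opening_cost[j] for j in open_j)
            + sum(min(row[j] for j in open_j) for row in service_cost))

def bpo2_sketch(service_cost, opening_cost, iters=500, rng=random):
    n = len(opening_cost)
    best = [rng.randint(0, 1) for _ in range(n)]
    best_cost = uflp_cost(best, service_cost, opening_cost)
    for _ in range(iters):
        # stand-in for the PO continuous update
        step = [b + rng.gauss(0, 1) for b in best]
        # TF-based binarization per Equation (41), S-shaped sigmoid
        cand = [1 if rng.random() < 1 / (1 + math.exp(-x)) else 0 for x in step]
        cost = uflp_cost(cand, service_cost, opening_cost)
        if cost < best_cost:  # keep the cheaper configuration
            best, best_cost = cand, cost
    return best, best_cost
```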
In the UFLP, facility-opening (0-1) and assignment decisions are not subject to capacity restrictions, since each facility is assumed to have unlimited capacity. As a result, no infeasible solutions arise from exceeding resource limits, and corrective mechanisms such as the PA, RA, and IA are not required. In contrast, in the 0–1 KP, each item selection directly affects the total weight, and exceeding the knapsack capacity produces infeasible solutions. Under such circumstances, PAs are needed to discourage constraint violations, RAs are required to restore feasibility by removing low-efficiency items, and IAs are applied to further enhance feasible solutions by inserting high-efficiency items. Collectively, these algorithms are indispensable in KPs to ensure feasibility and maintain solution quality, whereas in the UFLP, they become redundant due to the absence of capacity constraints. Figure 8 illustrates the flowchart of the proposed BPO applied to the UFLP, while Figure 9 presents the step-by-step pseudo-code of the proposed BPO2 for solving the UFLP. The gray colors in Figure 9 represent the binary adaptation of BPO2.

5. Results

The benchmark instances are categorized based on problem size and difficulty level, ranging from small to huge. This categorization ensures a diverse and comprehensive evaluation of the optimization algorithm’s performance across varying problem complexities. The GAP value is formulated mathematically by Equation (42) [42]. The Success Rate (SR) is calculated by Equation (43) [33]:
\[
GAP(\%) = \frac{fitness_{mean} - optimum}{optimum} \times 100
\]
\[
SR(\%) = \frac{\text{Number of runs yielding at least one feasible solution}}{\text{Total number of runs}} \times 100
\]
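Equations (42) and (43) translate directly into code; function names are illustrative.

```python
def gap_percent(fitness_mean, optimum):
    """Equation (42): relative deviation of the mean fitness from the optimum."""
    return (fitness_mean - optimum) / optimum * 100.0

def success_rate(feasible_runs, total_runs):
    """Equation (43): share of runs yielding at least one feasible solution."""
    return feasible_runs / total_runs * 100.0
```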

5.1. Experimental Results of BPO1 for 0-1 KPs

Table 3 shows the BPO1 parameter values used in the experiments. The OR Library [33] provides a comprehensive 0-1 KPs dataset comprising 25 distinct instances, detailed in Table 4. In the table, “ID” refers to the instance number assigned in this study, while “Dataset” denotes the name of the benchmark instance (e.g., 8a, 12b, 20c). “Cap.” indicates the knapsack capacity associated with each instance, and “Dim.” represents the problem dimension. Finally, “Opt.” corresponds to the known optimal objective value of each instance. Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 present the statistical results of BPO1. In the tables, “Dataset” denotes the problem dataset, “Cap.” refers to the knapsack capacity, and “Opt.” represents the known optimal value. “Best” indicates the maximum profit achieved, “Mean” is the average profit across multiple runs, and “Worst” denotes the minimum profit obtained. “Std.” corresponds to the standard deviation of the results. “Weight” refers to the total weight of the items included in the best solution, and “WR” (weight ratio) expresses the percentage of the knapsack capacity utilized. “Time” is reported as the average computational time per run. Finally, “SR” represents the success rate, defined as the percentage of independent runs in which at least one feasible solution satisfying all problem constraints is generated. In the tables, the use of bold font is intended to denote the optimum value.
Table 3. Parameter settings.

| Parameters | Values |
|---|---|
| Population size (N) | 20 |
| Maximum iteration (MaxIter) | 5000 |
| Number of runs | 20 |
| Crossover probability (pCR) | 0.20 |
| Greedy noise amplitude parameter | 0.10 |
Table 4. Datasets of instances for the 0-1 KPs [31].

| ID | Dataset | Cap. | Dim. | Opt. |
|---|---|---|---|---|
| KP(1) | 8a | 1,863,633 | 8 | 3,924,400 |
| KP(2) | 8b | 1,822,718 | 8 | 3,813,669 |
| KP(3) | 8c | 1,609,419 | 8 | 3,347,452 |
| KP(4) | 8d | 2,112,292 | 8 | 4,187,707 |
| KP(5) | 8e | 2,493,250 | 8 | 4,955,555 |
| KP(6) | 12a | 2,805,213 | 12 | 5,688,887 |
| KP(7) | 12b | 3,259,036 | 12 | 6,498,597 |
| KP(8) | 12c | 3,489,815 | 12 | 5,170,626 |
| KP(9) | 12d | 3,453,702 | 12 | 6,992,404 |
| KP(10) | 12e | 2,520,392 | 12 | 5,337,472 |
| KP(11) | 16a | 3,780,355 | 16 | 7,850,983 |
| KP(12) | 16b | 4,426,945 | 16 | 9,352,998 |
| KP(13) | 16c | 4,323,280 | 16 | 9,151,147 |
| KP(14) | 16d | 4,550,938 | 16 | 9,348,889 |
| KP(15) | 16e | 3,760,429 | 16 | 7,769,117 |
| KP(16) | 20a | 5,169,647 | 20 | 10,727,049 |
| KP(17) | 20b | 4,681,373 | 20 | 9,818,261 |
| KP(18) | 20c | 5,063,791 | 20 | 10,714,023 |
| KP(19) | 20d | 4,286,641 | 20 | 8,929,156 |
| KP(20) | 20e | 4,476,000 | 20 | 9,357,969 |
| KP(21) | 24a | 6,404,180 | 24 | 13,549,094 |
| KP(22) | 24b | 5,971,071 | 24 | 12,233,713 |
| KP(23) | 24c | 5,870,470 | 24 | 12,448,780 |
| KP(24) | 24d | 5,762,284 | 24 | 11,815,315 |
| KP(25) | 24e | 6,654,569 | 24 | 13,940,099 |
Table 5. The results of the BPO1 for TF1.

| Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.8722 | 100 |
| 8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2811 | 1.8669 | 100 |
| 8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3460 | 1.8644 | 100 |
| 8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.8647 | 100 |
| 8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8629 | 100 |
| 12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9166 | 100 |
| 12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9364 | 1.9003 | 100 |
| 12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.8899 | 100 |
| 12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.8822 | 100 |
| 12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.879 | 100 |
| 16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,842,093 | 7,823,318 | 11,500 | 3,771,406 | 99.7633 | 1.9549 | 100 |
| 16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,347,804 | 9,259,634 | 20,816 | 4,426,267 | 99.9847 | 1.9440 | 100 |
| 16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,147,364 | 9,100,116 | 12,406 | 4,297,175 | 99.3962 | 1.9526 | 100 |
| 16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,199 | 9,305,859 | 9316 | 4,444,721 | 99.8603 | 1.9536 | 100 |
| 16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,764,839 | 7,750,491 | 7661 | 3,752,854 | 99.7986 | 1.9544 | 100 |
| 20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,302 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0298 | 100 |
| 20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,366 | 9,754,368 | 16,503 | 4,671,869 | 99.7970 | 2.0368 | 100 |
| 20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,710,007 | 10,700,635 | 6295 | 5,053,832 | 99.8033 | 2.0303 | 100 |
| 20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,921,022 | 8,873,716 | 17,171 | 4,282,619 | 99.9062 | 2.0276 | 100 |
| 20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,219 | 9,345,847 | 4762 | 4,470,060 | 99.8673 | 2.0312 | 100 |
| 24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,511,191 | 13,459,475 | 26,876 | 6,402,560 | 99.9747 | 2.0987 | 100 |
| 24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,824 | 12,159,261 | 24,149 | 5,966,008 | 99.9152 | 2.1076 | 100 |
| 24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,421 | 12,367,653 | 20,381 | 5,861,707 | 99.8507 | 2.0966 | 100 |
| 24d | 5,762,284 | 11,815,315 | 11,815,315 | 1,180,541 | 11,754,633 | 19,258 | 5,756,602 | 99.9014 | 2.1036 | 100 |
| 24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,446 | 13,909,292 | 9135 | 6,637,749 | 99.7472 | 2.0966 | 100 |
The bolded values indicate the optimum value.
In Table 5, the performance of BPO1 with TF1 across the 0-1 KPs is reported. Across all datasets, the Best values are found to be identical to the known optima, indicating that BPO1 successfully identified the optimal solution in every run. For the datasets ranging from 8a to 12e, the mean values are consistently matched with the optimal results, except for instance 12b, where a slight deviation is observed. Within the same range (8a–12e), the standard deviation is recorded as zero, confirming the complete stability of the algorithm except for instance 12b. For larger-scale instances, small variations in the mean values and standard deviation are observed; however, these deviations are considered negligible and do not compromise the algorithm’s overall robustness. Capacity utilization rates exceeded 97%, with most rates above 99%, demonstrating efficient resource use. The success rate is 100% for all datasets. These findings confirm that the proposed algorithm delivers high accuracy, stability, and efficiency across problems of varying scales, including large-scale instances.
Table 6. The results of the BPO1 for TF2.

| Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8589 | 100 |
| 8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8556 | 100 |
| 8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8708 | 100 |
| 8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.8779 | 100 |
| 8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8776 | 100 |
| 12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9298 | 100 |
| 12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9364 | 1.9295 | 100 |
| 12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.9353 | 100 |
| 12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9472 | 100 |
| 12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9338 | 100 |
| 16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,840,710 | 7,823,318 | 12,026 | 3,771,406 | 99.7634 | 1.9753 | 100 |
| 16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,208 | 9,347,736 | 1927 | 4,426,267 | 99.9846 | 1.9689 | 100 |
| 16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,146,133 | 9,100,116 | 13,206 | 4,297,175 | 99.3962 | 1.9691 | 100 |
| 16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,333,429 | 9,296,536 | 14,043 | 4,444,721 | 99.8603 | 1.9784 | 100 |
| 16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,761,303 | 7,750,491 | 8948 | 3,752,854 | 99.7986 | 1.9716 | 100 |
| 20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,302 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0300 | 100 |
| 20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,732 | 9,753,772 | 17,625 | 4,671,869 | 99.7969 | 2.0332 | 100 |
| 20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,706,515 | 10,700,635 | 6682 | 5,053,832 | 99.8033 | 2.0237 | 100 |
| 20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,919,512 | 8,873,716 | 17,677 | 4,282,619 | 99.9062 | 2.0184 | 100 |
| 20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,354,550 | 9,345,847 | 5165 | 4,470,060 | 99.8673 | 2.0234 | 100 |
| 24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,515,169 | 13,460,329 | 26,859 | 6,402,560 | 99.9747 | 2.0913 | 100 |
| 24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,220,570 | 12,159,261 | 21,267 | 5,966,008 | 99.9152 | 2.0926 | 100 |
| 24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,429,256 | 12,385,452 | 20,738 | 5,861,707 | 99.8507 | 2.0866 | 100 |
| 24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,806,293 | 11,772,086 | 13,984 | 5,756,602 | 99.9014 | 2.0940 | 100 |
| 24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,005 | 13,902,534 | 12,358 | 6,637,749 | 99.7472 | 2.0888 | 100 |
Table 6 presents the performance of the proposed algorithm using TF2 across all 0-1 KPs. For the 8a–12e datasets, the algorithm consistently achieved the optimal solution in all runs, with best, mean, and worst values identical and a standard deviation of zero, indicating perfect stability. In the larger datasets (16a–24e), the best results always matched the optimum, and the mean and worst results remained extremely close to it, with very small standard deviations, demonstrating strong robustness. Capacity utilization ratios ranged from approximately 97% to nearly 100%, confirming efficient use of available resources. Execution times remained low across all datasets, and the success rate is 100% in every case. These results indicate that, under TF2, the proposed algorithm maintains high accuracy, stability, and efficiency across different problem sizes, including large-scale instances. This advantage is attributed to the probability curves of TF1 and TF2 being more balanced in the middle region, which enables a more effective balance between the exploration and exploitation phases, yielding stable results in small- and medium-scale datasets and slightly more consistent performance in large-scale datasets.
Table 7. The results of the BPO1 for TF3.

| Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8669 | 100 |
| 8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8685 | 100 |
| 8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8606 | 100 |
| 8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8631 | 100 |
| 8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8658 | 100 |
| 12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9573 | 100 |
| 12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,494,760 | 6,473,019 | 9370 | 3,256,963 | 99.9363 | 1.9496 | 100 |
| 12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9552 | 100 |
| 12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9491 | 100 |
| 12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9449 | 100 |
| 16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,845,372 | 7,823,318 | 10,207 | 3,771,406 | 99.7632 | 2.0082 | 100 |
| 16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,998 | 9,352,998 | 0 | 4,426,267 | 99.9847 | 2.0053 | 100 |
| 16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,143,922 | 9,082,307 | 19,064 | 4,297,175 | 99.3965 | 2.0108 | 100 |
| 16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,342 | 9,296,536 | 11,357 | 4,444,721 | 99.8603 | 2.0046 | 100 |
| 16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,217 | 7,750,491 | 6973 | 3,752,854 | 99.7985 | 2.0055 | 100 |
| 20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,723,554 | 10,692,101 | 10,756 | 5,166,676 | 99.9425 | 2.0566 | 100 |
| 20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,805,114 | 9,744,513 | 23,273 | 4,671,869 | 99.7969 | 2.0480 | 100 |
| 20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,702,726 | 10,611,486 | 22,453 | 5,053,832 | 99.8033 | 2.0245 | 100 |
| 20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,926,767.8 | 8,895,152 | 8050 | 4,282,619 | 99.9061 | 2.0224 | 100 |
| 20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,356,430.4 | 9,345,847 | 3560 | 4,470,060 | 99.8673 | 2.0238 | 100 |
| 24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,516,478 | 13,455,545 | 28,966 | 6,402,560 | 99.9747 | 2.0851 | 100 |
| 24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,221,969 | 12,193,732 | 15,937 | 5,966,008 | 99.9152 | 2.0853 | 100 |
| 24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,439,973 | 12,401,619 | 13,640 | 5,861,707 | 99.8507 | 2.0861 | 100 |
| 24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,801,226 | 11,754,633 | 19,256 | 5,756,602 | 99.9013 | 2.0881 | 100 |
| 24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,925,645 | 13,902,534 | 12,720 | 6,637,749 | 99.7472 | 2.0800 | 100 |
As shown in Table 7, TF3 achieves high accuracy across most datasets, although slight deviations in the mean and worst values are observed for datasets 12b and 16a. These deviations are attributed to TF3's structure being less capable than TF2 of sustaining strong local search intensity for certain problem sizes. While TF3 is more stable than TF1, it does not reach the low variance levels of TF2 and TF4. Both TF2 and TF3 exhibit high capacity utilization rates (97–100%) while maintaining overall solution quality; however, TF2 delivers a more stable and consistent performance than TF3, with near-zero variance and perfect stability across both small-to-medium and large-scale problems.
Table 8. The results of the BPO1 for TF4.

| Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.8785 | 100 |
| 8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8785 | 100 |
| 8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8731 | 100 |
| 8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8768 | 100 |
| 8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.8751 | 100 |
| 12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9466 | 100 |
| 12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9363 | 1.9293 | 100 |
| 12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9393 | 100 |
| 12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9327 | 100 |
| 12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9302 | 100 |
| 16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,850,983 | 7,850,983 | 0 | 3,771,406 | 99.7632 | 1.9863 | 100 |
| 16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,998 | 9,352,998 | 0 | 4,426,267 | 99.9846 | 1.9859 | 100 |
| 16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,151,147 | 9,151,147 | 0 | 4,297,175 | 99.3962 | 1.9878 | 100 |
| 16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,343 | 9,296,536 | 11,356 | 4,444,721 | 99.8603 | 1.9933 | 100 |
| 16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,763,908 | 7,750,491 | 8225 | 3,752,854 | 99.7985 | 1.9893 | 100 |
| 20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,807 | 10,692,101 | 12,803 | 5,166,676 | 99.9424 | 2.0375 | 100 |
| 20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,814,023 | 9,754,368 | 14,793 | 4,671,869 | 99.7969 | 2.0388 | 100 |
| 20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,709,264 | 10,700,635 | 6504 | 5,053,832 | 99.8033 | 2.0366 | 100 |
| 20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,925,636 | 8,872,522 | 12,873 | 4,282,619 | 99.9061 | 2.0310 | 100 |
| 20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,354,386 | 9,323,214 | 8276 | 4,470,060 | 99.8672 | 2.0362 | 100 |
| 24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,514,928 | 13,470,217 | 27,897 | 6,402,560 | 99.9747 | 2.1012 | 100 |
| 24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,211,240 | 12,157,691 | 22,582 | 5,966,008 | 99.9152 | 2.0948 | 100 |
| 24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,784 | 12,373,645 | 19,337 | 5,861,707 | 99.8507 | 2.1066 | 100 |
| 24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,801,052 | 11,772,086 | 16,697 | 5,756,602 | 99.9013 | 2.0962 | 100 |
| 24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,924,841 | 13,886,063 | 16,898 | 6,637,749 | 99.7472 | 2.0984 | 100 |
The results presented in Table 8 indicate that TF4 demonstrates high performance across all datasets. The best values always correspond to the global optimum, while the mean and worst values match this optimum in most cases. TF4 exhibits notably faster and more stable convergence in large-scale datasets compared to TF1, TF3, TF5, TF6, and TF7. This advantage is attributed to TF4's strong ability to escape local minima through sharper, step-like transitions in the solution space. Such behavior enables the algorithm to converge rapidly to the optimum without compromising solution quality.
In addition, TF4 is distinguished by having the highest number of instances with deviations in the mean, worst, and standard deviation values compared to the other TFs. This indicates that, although TF4 generally ensures convergence and high accuracy, its aggressive search dynamics may lead to greater variability across certain problem sizes.
Table 9. The results of the BPO1 for TF5.

| Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0091 | 1.9196 | 100 |
| 8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9143 | 100 |
| 8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.9092 | 100 |
| 8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.9174 | 100 |
| 8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9171 | 100 |
| 12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9600 | 100 |
| 12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,498,597 | 6,498,597 | 0 | 3,256,963 | 99.9363 | 1.9496 | 100 |
| 12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9549 | 100 |
| 12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9511 | 100 |
| 12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9485 | 100 |
| 16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,843,989 | 7,823,318 | 11,230 | 3,771,406 | 99.7632 | 2.0100 | 100 |
| 16b | 4,426,945 | 9,352,998 | 9,352,998 | 935,247 | 9,347,736 | 1619 | 4,426,267 | 99.9846 | 2.0017 | 100 |
| 16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,148,595 | 9,100,116 | 11,410 | 4,297,175 | 99.3961 | 2.0092 | 100 |
| 16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,340,350 | 9,336,691 | 5735 | 4,444,721 | 99.8603 | 2.0099 | 100 |
| 16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,028 | 7,750,491 | 7327 | 3,752,854 | 99.7985 | 2.0227 | 100 |
| 20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,806 | 10,692,101 | 12,803 | 5,166,676 | 99.9425 | 2.0837 | 100 |
| 20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,811,066 | 9,757,816 | 15,147 | 4,671,869 | 99.7969 | 2.0883 | 100 |
| 20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,708,880 | 10,695,858 | 6816 | 5,053,832 | 99.8033 | 2.0871 | 100 |
| 20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,925,696 | 8,873,716 | 12,614 | 4,282,619 | 99.9061 | 2.0789 | 100 |
| 20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,275 | 9,323,214 | 7997 | 4,470,060 | 99.8672 | 2.0811 | 100 |
| 24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,512,231 | 13,470,217 | 26,368 | 6,402,560 | 99.9747 | 2.1397 | 100 |
| 24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,221,200 | 12,157,691 | 20,892 | 5,966,008 | 99.9152 | 2.1410 | 100 |
| 24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,439,555 | 12,401,619 | 16,955 | 5,861,707 | 99.8507 | 2.1388 | 100 |
| 24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,800,233 | 11,765,854 | 19,131 | 5,756,602 | 99.9013 | 2.1430 | 100 |
| 24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,934,355 | 13,909,292 | 1039 | 6,637,749 | 99.7472 | 2.1396 | 100 |
The results in Table 9 show that TF5 achieves the global optimum in all runs for all datasets, and for large-scale datasets the mean and worst values remain very close to the optimum. However, TF5 does not match the convergence speed of TF4, which achieves high accuracy and low variance across both small- and large-scale datasets.
Table 10. The results of the BPO1 for TF6.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.8859 | 100
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.8851 | 100
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8864 | 100
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.8871 | 100
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9014 | 100
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,563 | 5,682,404 | 1449 | 2,798,038 | 99.7442 | 1.9675 | 100
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9469 | 100
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2367 | 1.9489 | 100
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9458 | 100
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9380 | 100
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,844,502 | 7,823,318 | 11,654 | 3,771,406 | 99.7632 | 2.0002 | 100
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,735 | 9,347,736 | 1176 | 4,426,267 | 99.9846 | 1.9969 | 100
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,149,916 | 9,126,526 | 5505 | 4,297,175 | 99.3961 | 2.0035 | 100
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,340,029 | 9,305,859 | 10,047 | 4,444,721 | 99.8603 | 2.0084 | 100
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,765,028 | 7,750,491 | 7327 | 3,752,854 | 99.7985 | 2.0054 | 100
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,727,049 | 10,727,049 | 0 | 5,166,676 | 99.9425 | 2.0538 | 100
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,812,109 | 9,757,816 | 14,872 | 4,671,869 | 99.7969 | 2.0660 | 100
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,708,668 | 10,700,635 | 6729 | 5,053,832 | 99.8033 | 2.0532 | 100
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,923,367 | 8,895,152 | 12,540 | 4,282,619 | 99.9061 | 2.0501 | 100
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,353 | 9,323,214 | 8020 | 4,470,060 | 99.8672 | 2.0434 | 100
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,502,593 | 13,457,089 | 27,758 | 6,402,560 | 99.9747 | 2.1198 | 100
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,226,947 | 12,188,747 | 14,995 | 5,966,008 | 99.9152 | 2.1297 | 100
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,430,039 | 12,368,592 | 25,802 | 5,861,707 | 99.8507 | 2.1094 | 100
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,802,389 | 11,765,854 | 16,099 | 5,756,602 | 99.9013 | 2.1198 | 100
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,384 | 13,902,534 | 9987 | 6,637,749 | 99.7472 | 2.1327 | 100
The results presented in Table 10 indicate that TF6 reaches the optimum value among the best values across all datasets and generally demonstrates high accuracy. Compared to the other TFs, TF6 tends to explore a broader search space; while this enhances its exploration capability, it also produces minor standard deviations in certain cases. Nevertheless, TF6 performs strongly on specific datasets, such as 20a, where it consistently achieves perfect success. This evaluation confirms that the proposed BPO1 algorithm delivers reliable performance not only across different problem sizes but also on instances of varying complexity, as the diversity of TFs allows the algorithm to maintain solution quality while adapting its strategies to the characteristics of each problem.
Table 11. The results of the BPO1 for TF7.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.9181 | 100
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9904 | 100
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8791 | 100
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0015 | 1.9281 | 100
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9740 | 100
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9938 | 100
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9700 | 100
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9297 | 100
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 1.9867 | 100
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 1.9986 | 100
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,842,093 | 7,823,318 | 11,500 | 3,771,406 | 99.7632 | 2.0634 | 100
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,347,803 | 9,259,634 | 20,815 | 4,426,267 | 99.9846 | 1.9915 | 100
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,147,364 | 9,100,116 | 12,405 | 4,297,175 | 99.3961 | 2.0029 | 100
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,338,199 | 9,305,859 | 9316 | 4,444,721 | 99.8603 | 2.0399 | 100
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,764,839 | 7,750,491 | 7661 | 3,752,854 | 99.7985 | 2.0655 | 100
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,725,301 | 10,692,101 | 7815 | 5,166,676 | 99.9425 | 2.0960 | 100
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,809,365 | 9,754,368 | 16,503 | 4,671,869 | 99.7969 | 2.0688 | 100
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,710,006 | 10,700,635 | 6294 | 5,053,832 | 99.8033 | 2.1125 | 100
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,921,021 | 8,873,716 | 17,171 | 4,282,619 | 99.9061 | 2.1390 | 100
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,355,218 | 9,345,847 | 4761 | 4,470,060 | 99.8672 | 2.1812 | 100
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,511,190 | 13,459,475 | 26,876 | 6,402,560 | 99.9747 | 2.1718 | 100
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,824 | 12,159,261 | 24,149 | 5,966,008 | 99.9152 | 2.1384 | 100
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,435,420 | 12,367,653 | 20,380 | 5,861,707 | 99.8507 | 2.1138 | 100
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,805,417 | 11,754,633 | 19,258 | 5,756,602 | 99.9013 | 2.0999 | 100
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,933,446 | 13,909,292 | 9134 | 6,637,749 | 99.7472 | 2.1023 | 100
The results in Table 11 show that TF7 balances high stability on small- and medium-scale problems with high solution quality and diversity on large-scale problems. On small- and medium-scale datasets, TF7 achieves stability and accuracy very close to TF4; however, while TF4 attains absolute stability with zero variance at these scales, TF7 yields a higher standard deviation on large-scale datasets by increasing diversity. This is attributed to TF7's broader exploration strategy.
Table 12. The results of the BPO1 for TF8.
Dataset | Cap. | Opt. | Best | Mean | Worst | Std. | Weight | WR | Time | SR
8a | 1,863,633 | 3,924,400 | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 1,826,529 | 98.0090 | 1.9217 | 100
8b | 1,822,718 | 3,813,669 | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 1,809,614 | 99.2810 | 1.9303 | 100
8c | 1,609,419 | 3,347,452 | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 1,598,893 | 99.3459 | 1.8989 | 100
8d | 2,112,292 | 4,187,707 | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 2,048,957 | 97.0016 | 1.9249 | 100
8e | 2,493,250 | 4,955,555 | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 2,442,114 | 97.9490 | 1.9321 | 100
12a | 2,805,213 | 5,688,887 | 5,688,887 | 5,688,887 | 5,688,887 | 0 | 2,798,038 | 99.7442 | 1.9907 | 100
12b | 3,259,036 | 6,498,597 | 6,498,597 | 6,497,318 | 6,473,019 | 5719 | 3,256,963 | 99.9363 | 1.9961 | 100
12c | 2,489,815 | 5,170,626 | 5,170,626 | 5,170,626 | 5,170,626 | 0 | 2,470,810 | 99.2366 | 1.9704 | 100
12d | 3,453,702 | 6,992,404 | 6,992,404 | 6,992,404 | 6,992,404 | 0 | 3,433,409 | 99.4124 | 2.0149 | 100
12e | 2,520,392 | 5,337,472 | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 2,514,881 | 99.7813 | 2.0397 | 100
16a | 3,780,355 | 7,850,983 | 7,850,983 | 7,844,859 | 7,823,318 | 9768 | 3,771,406 | 99.7632 | 2.0368 | 100
16b | 4,426,945 | 9,352,998 | 9,352,998 | 9,352,471 | 9,347,736 | 1619 | 4,426,267 | 99.9846 | 2.0493 | 100
16c | 4,323,280 | 9,151,147 | 9,151,147 | 9,144,812 | 9,100,116 | 16,241 | 4,297,175 | 99.3961 | 2.0900 | 100
16d | 4,450,938 | 9,348,889 | 9,348,889 | 9,336,979 | 9,305,859 | 8569 | 4,444,721 | 99.8603 | 2.0789 | 100
16e | 3,760,429 | 7,769,117 | 7,769,117 | 7,767,443 | 7,750,491 | 5186 | 3,752,854 | 99.7985 | 2.0839 | 100
20a | 5,169,647 | 10,727,049 | 10,727,049 | 10,721,157 | 10,665,994 | 15,780 | 5,166,676 | 99.9425 | 2.1058 | 100
20b | 4,681,373 | 9,818,261 | 9,818,261 | 9,807,517 | 9,754,368 | 19,886 | 4,671,869 | 99.7969 | 2.1619 | 100
20c | 5,063,791 | 10,714,023 | 10,714,023 | 10,707,708 | 10,700,635 | 6585 | 5,053,832 | 99.8033 | 2.0533 | 100
20d | 4,286,641 | 8,929,156 | 8,929,156 | 8,924,422 | 8,873,716 | 14,805 | 4,282,619 | 99.906 | 2.1256 | 100
20e | 4,476,000 | 9,357,969 | 9,357,969 | 9,356,430 | 9,345,847 | 3560 | 4,470,060 | 99.8672 | 2.0807 | 100
24a | 6,404,180 | 13,549,094 | 13,549,094 | 13,521,169 | 13,482,886 | 20,276 | 6,402,560 | 99.9747 | 2.1603 | 100
24b | 5,971,071 | 12,233,713 | 12,233,713 | 12,215,039 | 12,157,691 | 29,279 | 5,966,008 | 99.9152 | 2.1305 | 100
24c | 5,870,470 | 12,448,780 | 12,448,780 | 12,432,966 | 12,389,124 | 18,086 | 5,861,707 | 99.8507 | 2.1211 | 100
24d | 5,762,284 | 11,815,315 | 11,815,315 | 11,802,070 | 11,765,854 | 17,168 | 5,756,602 | 99.9013 | 2.1585 | 100
24e | 6,654,569 | 13,940,099 | 13,940,099 | 13,922,962 | 13,866,034 | 20,101 | 6,637,749 | 99.7472 | 2.1723 | 100
In Table 12, TF8 delivers balanced and successful performance: high stability on small- and medium-scale instances and strong solution quality on larger problems. The observed deviations are minimal and do not compromise solution quality. While TF8 is not as stable as TF4, its balance between diversity and quality is comparable to TF6 and TF7.
Table 13 presents the standard deviation (Std.) rank results of BPO1 with the eight TFs (TF1–TF8). For each dataset, the rank of the standard deviation obtained by each TF is reported, where lower ranks indicate more stable performance. The row “Total Best” gives the number of times each TF achieves the best rank (rank = 1) across all datasets, highlighting its overall consistency, while the row “Total Min. Rank” ranks the TFs by their summed ranks over all datasets, so that a value of 1 marks the most stable TF overall.
In Table 13, the highest value in the “Total Best” row is obtained by TF4 (14 times), indicating that TF4 achieves the best performance most frequently. Additionally, in the “Total Min. Rank” row, TF4 is placed first (rank = 1), indicating that it has the lowest total rank and is the most prominent in terms of stability. The results therefore show that TF4 is the most reliable and stable choice for BPO1, as it generates more balanced solutions in terms of variance.
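The bookkeeping behind Table 13 can be sketched as follows. This is our own illustrative code, not the paper's implementation: each dataset's standard deviations are ranked across the TFs (ties averaged), rank-1 finishes are counted per TF (“Total Best”), and the ranks are summed per TF so the TFs can then be ordered by stability.

```python
def average_ranks(values):
    """Rank a list ascending (1 = best), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def stability_summary(std_table):
    """std_table: one row of standard deviations per dataset,
    one column per TF. Returns (best_counts, total_ranks): how often
    each TF attains the lowest std, and its summed rank over datasets."""
    n_tf = len(std_table[0])
    best_counts = [0] * n_tf
    total_ranks = [0.0] * n_tf
    for row in std_table:
        ranks = average_ranks(row)
        lo = min(ranks)
        for t, r in enumerate(ranks):
            total_ranks[t] += r
            if r == lo:
                best_counts[t] += 1
    return best_counts, total_ranks
```

Ordering the TFs by `total_ranks` then reproduces a “Total Min. Rank”-style summary, where the TF with the smallest summed rank is the most stable overall.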
Additionally, compared to the other S-shaped and V-shaped mappings, TF4 applies a softer exponential decay by dividing the exponent by three. This modification yields a smoother slope, producing more gradual adjustments in bit-flip probabilities. Consequently, TF4 reduces the risk of premature convergence, preserves diversity within the population, and limits transfer-function-induced misclassifications. This translates into a more effective balance between exploration and exploitation, enabling TF4 to maintain solution stability while still providing sufficient flexibility to escape local optima.
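This softening effect can be sketched with illustrative transfer functions. The exact TF1–TF8 definitions are given earlier in the paper; the functions below are assumptions for illustration: a standard S-shaped sigmoid, a softened S-shape with the exponent divided by three (as described for TF4), and a representative V-shaped mapping, together with the usual stochastic binarization step.

```python
import math
import random

def s_shape(x):
    """Standard S-shaped transfer: sigmoid of the continuous position."""
    return 1.0 / (1.0 + math.exp(-x))

def s_shape_soft(x):
    """Softened S-shape (exponent divided by three, as described for
    TF4): a gentler slope gives more gradual bit-flip probabilities."""
    return 1.0 / (1.0 + math.exp(-x / 3.0))

def v_shape(x):
    """Representative V-shaped transfer: |tanh(x)|."""
    return abs(math.tanh(x))

def binarize(position, tf, rng=random):
    """Map a continuous position vector to a binary vector by comparing
    each dimension's transfer probability against a uniform draw."""
    return [1 if rng.random() < tf(x) else 0 for x in position]
```

At x = 0 both S-shapes give a flip probability of 0.5, but for larger |x| the softened variant stays further from saturation (e.g., s_shape_soft(3) equals s_shape(1)), which is the mechanism behind the slower, diversity-preserving probability adjustments described above.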
Table 14 presents a comparison of BPO1-TF4 with two existing binary optimization algorithms, the Binary Evolutionary Mating Algorithm (BinEMA) and the Binary Fire Hawk Optimizer (BinFHO). The evaluation considers best values, standard deviations, and time results. The parameter settings used in the experiments are listed in Table 3, while the results of BinEMA and BinFHO are taken from [60].
Table 14 shows that BPO1 achieves the best values on all datasets, demonstrating its effectiveness in obtaining optimal results for different problem sizes: it maintains competitive performance on smaller datasets and shows clear superiority as the problem size increases. BinEMA falls behind BPO1 on all datasets beyond size 12; equality is observed only on dataset 20c, where BPO1 and BinEMA produce the same result, while on dataset 12c BinEMA reports a result lower than that of BPO1. Similarly, BinFHO is consistently outperformed by BPO1 on datasets larger than size 8. Overall, these findings confirm that BPO1 is a more powerful and stable solver, particularly on large-scale problem instances, where it consistently produces higher-quality solutions than BinEMA and BinFHO.
In general, the results of Table 14 show that the standard deviation values of BPO1 are obtained as zero or close to zero in almost all datasets. This indicates that highly consistent and repeatable results are consistently produced by BPO1 in each independent run, ensuring its stability and reliability. In contrast, BinEMA and especially BinFHO are observed to have produced considerably high standard deviation values in many datasets, which shows that their results are fluctuating and their stability is weakened. Therefore, in the overall evaluation, BPO1 achieves superiority in terms of both reliability and solution stability, while the other algorithms produce more variable results. This finding confirms that BPO1 is established as a more consistent and reliable algorithm for solving binary optimization problems.
In Table 14, the time results indicate that BPO1 is executed with consistently low and stable computational times across all datasets. At the same time, BinEMA and BinFHO require significantly higher and more fluctuating times. This demonstrates that BPO1 is far more efficient and practical compared to the other algorithms.
Table 15 presents a comparison of the Success Rate (SR) values achieved by BPO1-TF4 and the S-shape Binary Whale Optimization Algorithm (BWOS). The SR (%) results of BWOS are taken from [33], and both BPO1-TF4 and BWOS are executed under the same experimental configuration specified in Table 3.
In Table 15, the SR values of BPO1 and BWOS are first compared across datasets 8a–8e. BPO1 consistently achieves a perfect SR of 100% in all cases, demonstrating its ability to reliably produce feasible solutions without failure. BWOS, on the other hand, reaches 100% success in most datasets but drops to 75% in dataset 8b, indicating instability. These results emphasize that BPO1 is not only more stable but also more dependable, as it guarantees feasibility across all problem instances.
For datasets 12a–12e, BPO1 again achieves an SR of 100% across all instances, proving its robustness and reliability in always producing feasible solutions. In contrast, BWOS exhibits significant variability: while it achieves 100% in datasets 12c, 12d, and 12e, its performance drops sharply to 85% in 12b and even further to 40% in 12a. These results show that BWOS can struggle with feasibility in certain cases, whereas BPO1 maintains flawless performance regardless of the dataset's complexity.
For datasets 16a–16e, BPO1 once again achieves a perfect 100% success rate, confirming its stability and ability to generate feasible solutions consistently. In contrast, BWOS exhibits very low and highly inconsistent performance, dropping as low as 5% in dataset 16a, completely failing with 0% in 16d, and reaching only 35% in 16e, while achieving a relatively better 80% in 16b. These findings clearly demonstrate that BWOS is highly unreliable on more challenging instances, whereas BPO1 maintains flawless and dependable results in every case.
For datasets 20a–20e, BPO1 maintains a perfect 100% success rate across all instances, further confirming its robustness and reliability in producing feasible solutions without exception. BWOS, by contrast, displays highly unstable behavior: while it performs strongly with 95% in dataset 20a and reaches 100% in 20b, its performance drops drastically to 0% in 20c, 15% in 20d, and only 30% in 20e. These sharp fluctuations highlight BWOS's lack of consistency in more complex cases, whereas BPO1 proves stable and dependable under all conditions.
For datasets 24a–24e, BPO1 once again achieves a flawless 100% success rate in all cases, demonstrating its reliability and robustness on larger and more challenging problem instances. In contrast, BWOS exhibits unstable and inconsistent results: it drops drastically to 25% in dataset 24a, completely fails with 0% in 24d, and manages only 60% in 24e. These variations suggest that BWOS is not dependable under varying conditions, frequently struggling to maintain feasibility, while BPO1 consistently delivers perfect performance, underlining its stability and effectiveness as a solver for complex binary optimization problems.
As a result, BPO1 achieves a 100% success rate across all datasets, clearly demonstrating its robustness, stability, and reliability. Unlike BWOS, which shows inconsistent and often low performance with significant drops in success rate, BPO1 consistently produces feasible and high-quality solutions without exception. This consistency across different problem sizes and complexities highlights BPO1 as a superior and dependable algorithm for solving binary optimization problems.
Figure 10 shows the comparative computational time results of the BPO1, BinEMA, and BinFHO algorithms for datasets of varying sizes (8a–8e, 12a–12e, 16a–16e, 20a–20e, and 24a–24e).
Figure 10 shows that the computational time results for BPO1 remain at very low levels across all datasets and are presented as an almost flat line. This demonstrates that BPO1 is highly efficient and consistent in terms of computational time. The algorithms BinEMA and BinFHO are reported to have achieved significantly higher time values. In the graphs, BinEMA is seen to have produced high and fluctuating time results in most datasets. BinFHO, although it produces lower values than BinEMA in some small datasets, is generally associated with high and unstable computational costs. In the datasets with 16, 20, and 24 items, the time values of BinEMA and BinFHO exhibit sharp increases and decreases, which are interpreted as unstable and costly computational behavior. In contrast, BPO1 is illustrated to follow a consistently low and stable line. As a result, the graphs clearly demonstrate that BPO1 is significantly faster than the other algorithms in terms of time and is proven to maintain its efficiency even as the problem scale increases. This confirms that BPO1 is not only superior in terms of solution quality but is also established as advantageous in terms of computational cost.

5.2. Experimental Results of BPO2 for UFLP

The UFLP instances are categorized by problem size and difficulty level, ranging from small to large. This ensures a diverse and comprehensive evaluation of the optimization algorithm's performance across varying problem complexities. Table 16 shows the BPO2 parameter values used in the experiments. A comprehensive set of 12 different UFLP instances is taken from the OR Library, as shown in Table 17. Table 18 reports the best values, Table 19 provides the standard deviation values indicating stability, Table 20 summarizes the mean values, Table 21 shows the GAP values reflecting proximity to the optimum, Table 22 presents the worst results, and Table 23 reports the computational time results, highlighting efficiency. These tables show the performance of the BPO2 algorithm under the TF1–TF8 variants. In the tables, the bolded values indicate the optimum value.
Table 16. BPO2's parameter settings.
Parameters | Values
Population size (N) | 40
Maximum iteration (MaxIter) | 2000
MaxFEs | 80,000
Number of runs | 30
Table 17. Characteristics of the UFLP from the OR library.
Problem Name | Difficulty Level | Size of the Problem | Optimum
Cap71 | Small | 16 × 50 | 9.3262 × 10^5
Cap72 | Small | 16 × 50 | 9.7780 × 10^5
Cap73 | Small | 16 × 50 | 1.01064 × 10^6
Cap74 | Small | 16 × 50 | 1.0350 × 10^6
Cap101 | Medium | 25 × 50 | 7.9664 × 10^5
Cap102 | Medium | 25 × 50 | 8.5470 × 10^5
Cap103 | Medium | 25 × 50 | 8.9378 × 10^5
Cap104 | Medium | 25 × 50 | 9.2894 × 10^5
Cap131 | Large | 50 × 50 | 7.9344 × 10^5
Cap132 | Large | 50 × 50 | 8.5150 × 10^5
Cap133 | Large | 50 × 50 | 8.9308 × 10^5
Cap134 | Large | 50 × 50 | 9.2894 × 10^5
Table 18. The statistical best values of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.4958 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5
Cap72 | 9.7780 × 10^5 | 9.8694 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5
Cap73 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6
Cap74 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6
Cap101 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5
Cap102 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5
Cap103 | 8.9378 × 10^5 | 8.9457 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9401 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5
Cap104 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5
Cap131 | 7.9742 × 10^5 | 8.0031 × 10^5 | 7.9793 × 10^5 | 8.0124 × 10^5 | 7.9840 × 10^5 | 8.0058 × 10^5 | 7.9774 × 10^5 | 8.0351 × 10^5
Cap132 | 8.5761 × 10^5 | 8.5752 × 10^5 | 8.5453 × 10^5 | 8.5755 × 10^5 | 8.6033 × 10^5 | 8.5351 × 10^5 | 8.5522 × 10^5 | 8.5385 × 10^5
Cap133 | 8.9687 × 10^5 | 8.9920 × 10^5 | 8.9927 × 10^5 | 8.9480 × 10^5 | 8.9559 × 10^5 | 8.9723 × 10^5 | 8.9568 × 10^5 | 8.9475 × 10^5
Cap134 | 9.3151 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5
Total Rank | 1.0914 × 10^7 | 1.0926 × 10^7 | 1.0911 × 10^7 | 1.0913 × 10^7 | 1.0914 × 10^7 | 1.0928 × 10^7 | 1.0908 × 10^7 | 1.0912 × 10^7
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3
In Table 18, the Total Rank row represents the overall performance of each TF across all datasets. The lowest total value, 1.0908 × 10^7, is obtained by TF7, demonstrating that TF7 provides the best overall performance. The Finally Rank row summarizes the ranking of these total values: TF7 is placed first (rank = 1), followed by TF3 in second (rank = 2), TF8 in third (rank = 3), and TF4 in fourth (rank = 4).
Table 19. The statistical standard deviation results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap72 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap73 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap102 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap103 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap104 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap131 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap132 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap133 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap134 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Total Rank | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Finally Rank | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Table 19 presents the standard deviation results of BPO2 across different TFs (TF1-TF8) for the UFLP. It is observed that the standard deviation values are consistently recorded as zero for all datasets and all TFs, indicating that the algorithm has produced identical results in every independent run. Consequently, the Total Rank values are also zero across all TFs, and the Finally Rank row assigns all TFs the first rank (rank = 1). These outcomes demonstrate that BPO2 exhibits perfect stability and robustness on the tested datasets, ensuring that no variability is observed across runs regardless of the TFs employed.
Table 20. The mean results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.4958 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5
Cap72 | 9.7780 × 10^5 | 9.8694 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5
Cap73 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6
Cap74 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6
Cap101 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5
Cap102 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5
Cap103 | 8.9378 × 10^5 | 8.9457 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9401 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5
Cap104 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5
Cap131 | 7.9742 × 10^5 | 8.0031 × 10^5 | 7.9793 × 10^5 | 8.0124 × 10^5 | 7.9840 × 10^5 | 8.0058 × 10^5 | 7.9774 × 10^5 | 8.0351 × 10^5
Cap132 | 8.5761 × 10^5 | 8.5752 × 10^5 | 8.5453 × 10^5 | 8.5755 × 10^5 | 8.6033 × 10^5 | 8.5351 × 10^5 | 8.5522 × 10^5 | 8.5385 × 10^5
Cap133 | 8.9687 × 10^5 | 8.9920 × 10^5 | 8.9927 × 10^5 | 8.9480 × 10^5 | 8.9559 × 10^5 | 8.9723 × 10^5 | 8.9568 × 10^5 | 8.9475 × 10^5
Cap134 | 9.3151 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5
Total Rank | 1.0914 × 10^7 | 1.0926 × 10^7 | 1.0911 × 10^7 | 1.0913 × 10^7 | 1.0914 × 10^7 | 1.0928 × 10^7 | 1.0908 × 10^7 | 1.0912 × 10^7
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3
The mean results in Table 20 indicate that although all TFs provide competitive outcomes, certain TFs yield slight advantages on specific datasets. The Total Rank values reinforce this, with TF7 obtaining the lowest cumulative score (rank = 1), followed by TF3 (rank = 2), TF8 (rank = 3), and TF4 (rank = 4). The Finally Rank row therefore highlights TF7 as the most effective TF overall, combining consistency with competitiveness, while the remaining TFs also deliver reliable performance with dataset-specific strengths.
Table 21. The statistical GAP values of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 0 | 0 | 0 | 0 | 0 | 1.8185 × 10^0 | 0 | 0
Cap72 | 0 | 9.3491 × 10^-1 | 0 | 0 | 0 | 0 | 0 | 0
Cap73 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap74 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Cap101 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14 | 1.4613 × 10^-14
Cap102 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14 | 1.3621 × 10^-14
Cap103 | 0 | 8.8567 × 10^-2 | 0 | 0 | 2.5289 × 10^-2 | 0 | 0 | 0
Cap104 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 1.2532 × 10^-14
Cap131 | 5.0167 × 10^-1 | 8.6586 × 10^-1 | 5.6634 × 10^-1 | 9.8266 × 10^-1 | 6.2559 × 10^-1 | 8.9979 × 10^-1 | 5.4144 × 10^-1 | 1.2694 × 10^0
Cap132 | 7.1793 × 10^-1 | 7.0697 × 10^-1 | 3.5643 × 10^-1 | 7.1090 × 10^-1 | 1.0374 × 10^0 | 2.3665 × 10^-1 | 4.3702 × 10^-1 | 2.7668 × 10^-1
Cap133 | 4.2463 × 10^-1 | 6.8536 × 10^-1 | 6.9381 × 10^-1 | 1.9309 × 10^-1 | 2.8173 × 10^-1 | 4.6505 × 10^-1 | 2.9188 × 10^-1 | 1.8759 × 10^-1
Cap134 | 2.7621 × 10^-1 | 1.2532 × 10^-14 | 5.7680 × 10^-2 | 1.2532 × 10^-14 | 1.2532 × 10^-14 | 5.7680 × 10^-2 | 5.7680 × 10^-2 | 5.7680 × 10^-2
Total Rank | 1.9184 × 10^0 | 3.2817 × 10^0 | 1.6739 × 10^0 | 1.8867 × 10^0 | 1.9700 × 10^0 | 3.4777 × 10^0 | 1.3280 × 10^0 | 1.7914 × 10^0
Finally Rank | 5 | 7 | 2 | 4 | 6 | 8 | 1 | 3
Table 21 shows the GAP values, where certain TFs exhibit slight advantages on certain datasets. The Total Rank values highlight these differences: TF7 achieves the lowest cumulative GAP score (rank = 1), followed by TF3 (rank = 2), TF8 (rank = 3), and TF4 (rank = 4). Consequently, the Finally Rank row identifies TF7 as the most effective TF overall, because it combines robustness with accuracy.
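The GAP values in Table 21 measure relative deviation from the known optimum; assuming the common percentage definition GAP(%) = 100 × (value − optimum) / optimum (the formula is not restated in this section), they can be computed as follows.

```python
def gap_percent(value, optimum):
    """Relative deviation of an objective value from the known optimum,
    expressed in percent (0 means the optimum is reached)."""
    return 100.0 * (value - optimum) / optimum
```

As a sanity check, applying this to the displayed TF4 value for Cap131 (8.0124 × 10^5) against the Cap131 optimum (7.9344 × 10^5) gives about 0.98%, consistent up to display rounding with the 9.8266 × 10^-1 entry in Table 21; the near-zero entries of order 10^-14 in the table are floating-point noise for runs that reached the optimum.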
Table 22. The statistical worst results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5 | 9.4958 × 10^5 | 9.3262 × 10^5 | 9.3262 × 10^5
Cap72 | 9.7780 × 10^5 | 9.8694 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5 | 9.7780 × 10^5
Cap73 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6 | 1.0106 × 10^6
Cap74 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6 | 1.0350 × 10^6
Cap101 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5 | 7.9665 × 10^5
Cap102 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5 | 8.5470 × 10^5
Cap103 | 8.9378 × 10^5 | 8.9457 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9401 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5 | 8.9378 × 10^5
Cap104 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5
Cap131 | 7.9742 × 10^5 | 8.0031 × 10^5 | 7.9793 × 10^5 | 8.0124 × 10^5 | 7.9840 × 10^5 | 8.0058 × 10^5 | 7.9774 × 10^5 | 8.0351 × 10^5
Cap132 | 8.5761 × 10^5 | 8.5752 × 10^5 | 8.5453 × 10^5 | 8.5755 × 10^5 | 8.6033 × 10^5 | 8.5351 × 10^5 | 8.5522 × 10^5 | 8.5385 × 10^5
Cap133 | 8.9687 × 10^5 | 8.9920 × 10^5 | 8.9927 × 10^5 | 8.9480 × 10^5 | 8.9559 × 10^5 | 8.9723 × 10^5 | 8.9568 × 10^5 | 8.9475 × 10^5
Cap134 | 9.3151 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2894 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5 | 9.2948 × 10^5
Total Rank | 1.0914 × 10^7 | 1.0926 × 10^7 | 1.0911 × 10^7 | 1.0913 × 10^7 | 1.0914 × 10^7 | 1.0928 × 10^7 | 1.0908 × 10^7 | 1.0912 × 10^7
Finally Rank | 5 | 7 | 2 | 4 | 5 | 8 | 1 | 3
Table 22 presents the worst results of BPO2 across the UFLP using eight different TFs (TF1-TF8). The Finally Rank row highlights TF7 as the best-performing TF in terms of worst results (rank = 1), showcasing its robustness and reliability, with TF3 (rank = 2) and TF8 (rank = 3) emerging as competitive alternatives.
Table 23. The statistical time results of the BPO2.
Dataset | TF1-BPO2 | TF2-BPO2 | TF3-BPO2 | TF4-BPO2 | TF5-BPO2 | TF6-BPO2 | TF7-BPO2 | TF8-BPO2
Cap71 | 1.7468 × 10^1 | 1.7116 × 10^1 | 1.6772 × 10^1 | 1.6913 × 10^1 | 1.7519 × 10^1 | 1.7613 × 10^1 | 1.6903 × 10^1 | 1.7287 × 10^1
Cap72 | 1.6622 × 10^1 | 1.6908 × 10^1 | 1.6567 × 10^1 | 1.6847 × 10^1 | 1.7206 × 10^1 | 1.6635 × 10^1 | 1.6571 × 10^1 | 1.6660 × 10^1
Cap73 | 1.6441 × 10^1 | 1.6557 × 10^1 | 1.6392 × 10^1 | 1.6328 × 10^1 | 1.6988 × 10^1 | 1.6403 × 10^1 | 1.5571 × 10^1 | 1.5577 × 10^1
Cap74 | 1.5355 × 10^1 | 1.5582 × 10^1 | 1.5426 × 10^1 | 1.5424 × 10^1 | 1.5769 × 10^1 | 1.5379 × 10^1 | 1.5271 × 10^1 | 1.5326 × 10^1
Cap101 | 1.7894 × 10^1 | 1.7813 × 10^1 | 1.7897 × 10^1 | 1.7471 × 10^1 | 1.8207 × 10^1 | 1.7757 × 10^1 | 1.7825 × 10^1 | 1.7624 × 10^1
Cap102 | 1.7309 × 10^1 | 1.7142 × 10^1 | 1.7241 × 10^1 | 1.7193 × 10^1 | 1.7928 × 10^1 | 1.7429 × 10^1 | 1.7258 × 10^1 | 1.7339 × 10^1
Cap103 | 1.7007 × 10^1 | 1.7084 × 10^1 | 1.7291 × 10^1 | 1.7021 × 10^1 | 1.7876 × 10^1 | 1.6273 × 10^1 | 1.5989 × 10^1 | 1.6092 × 10^1
Cap104 | 1.6057 × 10^1 | 1.5991 × 10^1 | 1.6095 × 10^1 | 1.6019 × 10^1 | 1.6472 × 10^1 | 1.5888 × 10^1 | 1.5921 × 10^1 | 1.5910 × 10^1
Cap131 | 1.7221 × 10^1 | 1.7612 × 10^1 | 1.6910 × 10^1 | 1.6905 × 10^1 | 1.7882 × 10^1 | 1.7015 × 10^1 | 1.6892 × 10^1 | 1.6919 × 10^1
Cap132 | 1.6728 × 10^1 | 1.6731 × 10^1 | 1.6739 × 10^1 | 1.6743 × 10^1 | 1.7667 × 10^1 | 1.6773 × 10^1 | 1.6730 × 10^1 | 1.6761 × 10^1
Cap133 | 1.6660 × 10^1 | 1.6666 × 10^1 | 1.6665 × 10^1 | 1.6665 × 10^1 | 1.7601 × 10^1 | 1.6692 × 10^1 | 1.6654 × 10^1 | 1.6695 × 10^1
Cap134 | 1.6627 × 10^1 | 1.6621 × 10^1 | 1.6633 × 10^1 | 1.6647 × 10^1 | 1.7576 × 10^1 | 1.6656 × 10^1 | 1.6626 × 10^1 | 1.6655 × 10^1
Total Rank | 2.0139 × 10^2 | 2.0182 × 10^2 | 2.0063 × 10^2 | 2.0018 × 10^2 | 2.0869 × 10^2 | 2.0051 × 10^2 | 1.9821 × 10^2 | 1.9884 × 10^2
Finally Rank | 6 | 7 | 5 | 3 | 8 | 4 | 1 | 2
Table 23 shows the time performance of BPO2 with different transfer functions. All TFs achieve similar execution times, confirming the algorithm’s efficiency. TF7 records the lowest cumulative time (rank = 1), followed by TF8 (rank = 2) and TF4 (rank = 3), while TF5 has the highest (rank = 8), making TF7 the most efficient option overall.
The Wilcoxon signed-rank test is employed to determine whether there is a statistically significant difference between the performance distributions of the two algorithms. In the table, the symbol (+) indicates that a significant difference exists in favor of the proposed algorithm (p < 0.05). In contrast, the symbol (-) indicates that no statistically significant difference is observed (p ≥ 0.05). This analysis provides a clear statistical validation of the similarities and differences among the algorithms under comparison [61].
Table 24 presents the results of the Wilcoxon signed-rank test conducted between consecutive BPO2 variants using different TFs (i.e., TF1-BPO2 vs. TF2-BPO2, TF2-BPO2 vs. TF3-BPO2, …, TF6-BPO2 vs. TF7-BPO2, TF7-BPO2 vs. TF8-BPO2). The results in Table 24 show that in the majority of cases the p-values are extremely small (close to zero) and h = +, confirming significant performance differences between TFs. Overall, these outcomes demonstrate that the choice of TF significantly affects BPO2's performance, with most TFs exhibiting statistically distinguishable results across datasets.
Table 25 presents a comparison of BPO2-TF7 with three competing binary optimization algorithms: Binary Honey Badger Algorithm (BinHBA), Binary Aquila Optimizer (BinAO), and Binary Fire Hawk Optimizer (BinFHO) in terms of standard deviation on the UFLP datasets. The results of BinHBA, BinAO, and BinFHO are taken from the literature [60]. All compared algorithms are executed under the same parameter settings as in Table 16.
Table 25 shows that BPO2 consistently records a standard deviation of 0 on all datasets, indicating that BPO2 produced identical results across all independent runs. These findings demonstrate that BPO2 is the most stable and reliable algorithm in terms of robustness. In contrast, the results of BinHBA, BinAO, and BinFHO exhibit considerable fluctuations and inconsistencies. Table 26 reports the GAP values for BPO2 and the other competing algorithms on the UFLP.
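The robustness argument rests on a simple fact: a standard deviation of zero over independent runs means every run returned exactly the same objective value. A minimal illustration with hypothetical run results:

```python
import statistics

# Hypothetical objective values from five independent runs on one dataset;
# identical values across runs yield a standard deviation of exactly 0.
stable_runs = [932615.75] * 5
spread = statistics.pstdev(stable_runs)       # 0.0 -> fully reproducible

# Runs that fluctuate from execution to execution give a positive spread.
noisy_runs = [935000.0, 938120.5, 933742.1, 939005.8, 934217.3]
noisy_spread = statistics.pstdev(noisy_runs)  # > 0 -> run-to-run variation
```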
In Table 26, TF7-BPO2 consistently achieves values equal to zero or very close to zero across nearly all instances, indicating that the obtained solutions are very close to the known optima. By contrast, BinHBA, BinAO, and BinFHO generate considerably higher GAP values, particularly in larger and more challenging datasets such as Cap131-Cap133. These findings clearly demonstrate that TF7-BPO2 provides superior accuracy and reliability compared to the competing algorithms. Table 27 reports the execution times of BPO2 and the competing algorithms on the UFLP datasets.
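GAP values such as those in Table 26 are commonly computed as the percentage deviation of the best found cost from the known optimum; treating that as the definition used here is an assumption of this sketch.

```python
def gap(found_cost, optimum_cost):
    """Percentage deviation of a found solution from the known optimum.

    Assumed definition: GAP = 100 * (found - optimum) / optimum.
    A GAP of 0 means the known optimal cost was reached exactly.
    """
    return 100.0 * (found_cost - optimum_cost) / optimum_cost
```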
In Table 27, TF7-BPO2 consistently achieves execution times in the narrow range of approximately 15–18 seconds across all datasets, demonstrating remarkable efficiency and stability. In contrast, the competing algorithms require substantially higher times, often exceeding 100 seconds, with BinAO in particular reaching over 300 seconds in larger instances such as Cap131-Cap133. These sharp differences highlight that TF7-BPO2 is not only the fastest algorithm but also the most consistent in terms of runtime performance. While BinHBA, BinAO, and BinFHO exhibit significant variability and scalability issues as the dataset size increases, TF7-BPO2 maintains nearly constant computational costs regardless of problem complexity. This stability ensures that the algorithm remains efficient and scalable, making it highly suitable for large-scale UFLP instances where both solution quality and computational efficiency are critical.
Figure 11 shows the computation times of TF7-BPO2 compared to BinHBA, BinAO, and BinFHO. While BPO2 consistently maintains low and stable execution costs even as the problem scale increases, the other algorithms exhibit substantially higher times accompanied by considerable fluctuations. This result highlights the superior efficiency and robustness of BPO2 in handling larger and more complex instances.

6. Conclusions

A novel binary metaheuristic, the Binary Puma Optimizer (BPO), is proposed in this study. In BPO, the continuous search space of the original Puma Optimizer is transformed into the binary domain through eight transfer functions (four S-shaped and four V-shaped). Two problem-specific variants are proposed: BPO1, designed to address the 0-1 Knapsack Problems (KPs), and BPO2, developed for the Uncapacitated Facility Location Problem (UFLP). In the 0-1 KP experiments, TF4 is identified as the most effective transfer function for BPO1. In contrast, the UFLP experiments demonstrate complete stability (zero standard deviation) across all transfer functions, with TF7 achieving the best overall performance.
Through comparative analyses, the competitiveness of the proposed approach is further demonstrated. Under identical experimental settings for KPs, BPO1-TF4 is consistently shown to outperform the Binary Evolutionary Mating Algorithm (BinEMA), the Binary Fire Hawk Optimizer (BinFHO), and the S-shaped Binary Whale Optimization Algorithm (BWOS) in terms of solution quality, robustness, and runtime. For the UFLP, significant success is achieved by BPO2-TF7, particularly over the Binary Honey Badger Algorithm (BinHBA), the Binary Aquila Optimizer (BinAO), and the Binary Fire Hawk Optimizer (BinFHO) in terms of GAP values and computational time rankings.
Overall, the BPO can be regarded as a robust and versatile framework that not only ensures high accuracy and stability but also establishes a promising benchmark for future research in binary optimization algorithms.
Future work will focus on several promising directions: (i) designing adaptive and ensemble strategies for transfer function selection or switching; (ii) extending BPO to broader application domains, including feature selection and multi-objective optimization; (iii) conducting comprehensive parameter tuning and ablation studies; and (iv) incorporating advanced local search procedures.

Author Contributions

A.I.: conceptualization, formal analysis, investigation, methodology, software, visualization, and writing—original draft preparation. T.S.: methodology, formal analysis, investigation, software, review and editing, and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed at the corresponding author.

Acknowledgments

This study is derived from a part of Aysegul Ihsan’s doctoral dissertation at Selcuk University. The authors would like to thank the “Data Intensive and Computational Vision Research Laboratory Infrastructure Project” (BAP project number 20301027) for providing the laboratory facilities used in this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Probert, M. Engineering Optimisation: An Introduction with Metaheuristic Applications, by Xin-She Yang. Contemp. Phys. 2012, 53, 271–272. [Google Scholar] [CrossRef]
  2. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  3. Franken, N.; Engelbrecht, A.P. Particle swarm optimization approaches to coevolve strategies for the iterated prisoner’s dilemma. IEEE Trans. Evol. Comput. 2005, 9, 562–579. [Google Scholar] [CrossRef]
  4. Alawad, N.A.; Abed-alguni, B.H.; Al-Betar, M.A.; Jaradat, A. Binary improved white shark algorithm for intrusion detection systems. Neural Comput. Appl. 2023, 35, 19427–19451. [Google Scholar] [CrossRef]
  5. Abdollahzadeh, B.; Khodadadi, N.; Barshandeh, S.; Trojovsky, P.; Soleimanian Gharehchopogh, F.; El-kenawy, E.-S.; Abualigah, L.; Mirjalili, S. Puma optimizer (PO): A novel metaheuristic optimization algorithm and its application in machine learning. Clust. Comput. 2024, 27, 5235–5283. [Google Scholar] [CrossRef]
  6. Keleş, M.K.; Kiliç, Ü. Binary Black Widow Optimization Approach for Feature Selection. IEEE Access 2022, 10, 95936–95948. [Google Scholar] [CrossRef]
  7. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338. [Google Scholar] [CrossRef]
  8. Wang, J.; Khishe, M.; Kaveh, M.; Mohammadi, H. Binary Chimp Optimization Algorithm (BChOA): A New Binary Meta-heuristic for Solving Optimization Problems. Cogn. Comput. 2021, 13, 1297–1316. [Google Scholar] [CrossRef]
  9. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  10. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.J.; Mirjalili, S. An efficient binary slime mould algorithm integrated with a novel attacking-feeding strategy for feature selection. Comput. Ind. Eng. 2021, 153, 107078. [Google Scholar] [CrossRef]
  11. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  12. Akinola, O.A.; Agushaka, J.O.; Ezugwu, A.E. Binary dwarf mongoose optimizer for solving high-dimensional feature selection problems. PLoS ONE 2022, 17, e0274850. [Google Scholar] [CrossRef]
  13. Oyelade, O.N.; Ezugwu, A.E.S.; Mohamed, T.I.A.; Abualigah, L. Ebola Optimization Search Algorithm: A New Nature-Inspired Metaheuristic Optimization Algorithm. IEEE Access 2022, 10, 16150–16177. [Google Scholar] [CrossRef]
  14. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  15. Xu, M.; Song, Q.; Xi, M.; Zhou, Z. Binary arithmetic optimization algorithm for feature selection. Soft Comput. 2023, 27, 11395–11429. [Google Scholar] [CrossRef] [PubMed]
  16. Yildizdan, G.; Baş, E. A Novel Binary Artificial Jellyfish Search Algorithm for Solving 0–1 Knapsack Problems. Neural Process. Lett. 2023, 55, 8605–8671. [Google Scholar] [CrossRef]
  17. Akinola, O.; Oyelade, O.N.; Ezugwu, A.E. Binary Ebola optimization search algorithm for feature selection and classification problems. Appl. Sci. 2022, 12, 11787. [Google Scholar] [CrossRef]
  18. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  19. Devi, R.; Manoharan, P.; Jangir, P.; Kumar, D.; Alrowaili, D.; Nisar, K. BHGSO: Binary Hunger Games Search Optimization Algorithm for Feature Selection Problem. CMC Tech Sci. Press 2021, 70, 557–579. [Google Scholar] [CrossRef]
  20. Baş, E. Binary Aquila Optimizer for 0–1 knapsack problems. Eng. Appl. Artif. Intell. 2023, 118, 105592. [Google Scholar] [CrossRef]
  21. Mohammadzadeh, A.; Masdari, M.; Gharehchopogh, F.S.; Jafarian, A. Improved chaotic binary grey wolf optimization algorithm for workflow scheduling in green cloud computing. Evol. Intell. 2021, 14, 1997–2025. [Google Scholar] [CrossRef]
  22. Seyyedabbasi, A.; Hu, G.; Shehadeh, H.; Wang, X.; Canatalay, P. V-shaped and S-shaped binary artificial protozoa optimizer (APO) algorithm for wrapper feature selection on biological data. Clust. Comput. 2025, 28, 163. [Google Scholar] [CrossRef]
  23. Li, M.; Luo, Q.; Zhou, Y. BGOA-TVG: Binary Grasshopper Optimization Algorithm with Time-Varying Gaussian Transfer Functions for Feature Selection. Biomimetics 2024, 9, 187. [Google Scholar] [CrossRef]
  24. Sharma, R.; Mahanti, G.K.; Panda, G.; Rath, A.; Dash, S.; Mallik, S.; Zhao, Z. Comparative performance analysis of binary variants of FOX optimization algorithm with half-quadratic ensemble ranking method for thyroid cancer detection. Sci. Rep. 2023, 13, 19598. [Google Scholar] [CrossRef]
  25. Hassan, I.H.; Abdullahi, M.; Aliyu, M.M.; Yusuf, S.A.; Abdulrahim, A. An improved binary manta ray foraging optimization algorithm based feature selection and random forest classifier for network intrusion detection. Intell. Syst. Appl. 2022, 16, 200114. [Google Scholar] [CrossRef]
  26. Balakrishnan, K.; Dhanalakshmi, R.; Seetharaman, G. S-shaped and V-shaped binary African vulture optimization algorithm for feature selection. Expert Syst. 2022, 39, e13079. [Google Scholar] [CrossRef]
  27. Sharafi, Y.; Teshnehlab, M. Opposition-based binary competitive optimization algorithm using time-varying V-shape transfer function for feature selection. Neural Comput. Appl. 2021, 33, 17497–17533. [Google Scholar] [CrossRef]
  28. Hussien, A.G.; Hassanien, A.E.; Houssein, E.H.; Bhattacharyya, S.; Amin, M. S-shaped Binary Whale Optimization Algorithm for Feature Selection. In Recent Trends in Signal and Image Processing; Bhattacharyya, S., Mukherjee, A., Bhaumik, H., Das, S., Yoshida, K., Eds.; Springer: Singapore, 2019; pp. 79–87. [Google Scholar]
  29. Gao, Y.; Zhang, F.; Zhao, Y.; Li, C. Quantum-Inspired Wolf Pack Algorithm to Solve the 0–1 Knapsack Problem. Math. Probl. Eng. 2018, 2018, 1–10. [Google Scholar] [CrossRef]
  30. Zouache, D.; Nouioua, F.; Moussaoui, A. Quantum-inspired firefly algorithm with particle swarm optimization for discrete optimization problems. Soft Comput. 2016, 20, 2781–2799. [Google Scholar] [CrossRef]
  31. Kulkarni, A.J.; Krishnasamy, G.; Abraham, A. Solution to 0–1 Knapsack Problem Using Cohort Intelligence Algorithm. In Cohort Intelligence: A Socio-Inspired Optimization Method; Kulkarni, A.J., Krishnasamy, G., Abraham, A., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 55–74. [Google Scholar]
  32. Erdoğan, F.; Karakoyun, M.; Gülcü, Ş. An effective binary dynamic grey wolf optimization algorithm for the 0–1 knapsack problem. Multimed. Tools Appl. 2024, 84, 23279–23311. [Google Scholar] [CrossRef]
  33. Abdel-Basset, M.; Mohamed, R.; Mirjalili, S. A binary equilibrium optimization algorithm for 0–1 knapsack problems. Comput. Ind. Eng. 2021, 151, 106946. [Google Scholar] [CrossRef]
  34. Yuan, J.; Li, Y. Solving binary multi-objective knapsack problems with novel greedy strategy. Memetic Comput. 2021, 13, 447–458. [Google Scholar] [CrossRef]
  35. Bansal, J.C.; Deep, K. A modified binary particle swarm optimization for knapsack problems. Appl. Math. Comput. 2012, 218, 11042–11061. [Google Scholar] [CrossRef]
  36. Rodrigues, D.; Yang, X.-S.; De Souza, A.N.; Papa, J.P. Binary flower pollination algorithm and its application to feature selection. In Recent Advances in Swarm Intelligence and Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2014; pp. 85–100. [Google Scholar]
  37. Abdel-Basset, M.; El-Shahat, D.; El-Henawy, I. Solving 0–1 knapsack problem by binary flower pollination algorithm. Neural Comput. Appl. 2019, 31, 5477–5495. [Google Scholar] [CrossRef]
  38. Feng, Y.; Wang, G.-G.; Deb, S.; Lu, M.; Zhao, X.-J. Solving 0–1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2017, 28, 1619–1634. [Google Scholar] [CrossRef]
  39. Mingo López, L.F.; Gómez Blas, N.; Arteta Albert, A. Multidimensional knapsack problem optimization using a binary particle swarm model with genetic operations. Soft Comput. 2018, 22, 2567–2582. [Google Scholar] [CrossRef]
  40. Kong, X.; Gao, L.; Ouyang, H.; Li, S. A simplified binary harmony search algorithm for large scale 0–1 knapsack problems. Expert Syst. Appl. 2015, 42, 5337–5355. [Google Scholar] [CrossRef]
  41. Kaya, E. BinGSO: Galactic swarm optimization powered by binary artificial algae algorithm for solving uncapacitated facility location problems. Neural Comput. Appl. 2022, 34, 11063–11082. [Google Scholar] [CrossRef]
  42. Sag, T.; Ihsan, A. Efficiency analysis of binary metaheuristic optimization algorithms for uncapacitated facility location problems. Appl. Soft Comput. 2025, 174, 112968. [Google Scholar] [CrossRef]
  43. Babalik, A.; Babadag, A. A binary grasshopper optimization algorithm for solving uncapacitated facility location problem. Eng. Sci. Technol. Int. J. 2025, 65, 102031. [Google Scholar] [CrossRef]
  44. Beşkirli, A. A Transfer Function-Based Binary Version of Improved Pied Kingfisher Optimizer for Solving the Uncapacitated Facility Location Problem. Biomimetics 2025, 10, 526. [Google Scholar] [CrossRef]
  45. Baş, E.; Yildizdan, G. A new binary arithmetic optimization algorithm for uncapacitated facility location problem. Neural Comput. Appl. 2023, 36, 4151–4177. [Google Scholar] [CrossRef]
  46. Kmich, M.; El Ghouate, N.; Bencharqui, A.; Karmouni, H.; Sayyouri, M.; Askar, S.S.; Abouhawwash, M. Chaotic Puma Optimizer Algorithm for controlling wheeled mobile robots. Eng. Sci. Technol. Int. J. 2025, 63, 101982. [Google Scholar] [CrossRef]
  47. Pravesjit, S.; Kantawong, K. Enhancing the Puma Optimizer Algorithm for Optimization Problems. In Proceedings of the 2025 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON), Nan, Thailand, 29 January–1 February 2025. [Google Scholar] [CrossRef]
  48. Esakkiappan, K.; Kandasamy, P.; Raja, R.; Rajendran, S.K. Improved Binary Quantum-Based Puma Optimizer for Optimal Location and Sizing of Micro Grid with Electric Vehicle Charging Station. Iran. J. Sci. Technol. Trans. Electr. Eng. 2025. [Google Scholar] [CrossRef]
  49. Bas, E.; Guner, L.B. The binary crayfish optimization algorithm with bitwise operator and repair method for 0–1 knapsack problems: An improved model. Neural Comput. Appl. 2025, 37, 4733–4767. [Google Scholar] [CrossRef]
  50. Cornuéjols, G.; Nemhauser, G.; Wolsey, L. The Uncapacitated Facility Location Problem; Cornell University Operations Research and Industrial Engineering: Ithaca, NY, USA, 1983. [Google Scholar]
  51. Jakob, K.; Pruzan, P.M. The simple plant location problem: Survey and synthesis. Eur. J. Oper. Res. 1983, 12, 41. [Google Scholar] [CrossRef]
  52. Monabbati, E.; Kakhki, H.T. On a class of subadditive duals for the uncapacitated facility location problem. Appl. Math. Comput. 2015, 251, 118–131. [Google Scholar] [CrossRef]
  53. Aslan, M.; Gunduz, M.; Kiran, M.S. JayaX: Jaya algorithm with xor operator for binary optimization. Appl. Soft Comput. 2019, 82, 105576. [Google Scholar] [CrossRef]
  54. Kole, A.; Chakrabarti, P.; Bhattacharyya, S. An ant colony optimization algorithm for uncapacitated facility location problem. In Proceedings of the 38th International Conference on Computers and Industrial Engineering, Hong Kong, 16–18 October 2013. [Google Scholar]
  55. Siew Mooi, S.L.; Md Sultan, A.B.; Sulaiman, M.; Mustapha, A.; Leong, K.Y. Crossover and Mutation Operators of Genetic Algorithms. Int. J. Mach. Learn. Comput. 2017, 7, 9–12. [Google Scholar] [CrossRef]
  56. Kora, P.; Yadlapalli, P. Crossover Operators in Genetic Algorithms: A Review. Int. J. Comput. Appl. 2017, 162, 34–36. [Google Scholar] [CrossRef]
  57. Truong, T.K.; Li, K.; Xu, Y. Chemical reaction optimization with greedy strategy for the 0–1 knapsack problem. Appl. Soft Comput. 2013, 13, 1774–1780. [Google Scholar] [CrossRef]
  58. Lv, J.; Wang, X.; Huang, M.; Cheng, H.; Li, F. Solving 0–1 knapsack problem by greedy degree and expectation efficiency. Appl. Soft Comput. 2015, 41, 94–103. [Google Scholar] [CrossRef]
  59. Zhao, J.; Huang, T.; Pang, F.; Liu, Y. Genetic Algorithm Based on Greedy Strategy in the 0-1 Knapsack Problem. In Proceedings of the 2009 Third International Conference on Genetic and Evolutionary Computing, Guilin, China, 14–17 October 2009; pp. 105–107. [Google Scholar]
  60. Yildizdan, G.; Bas, E. A new binary coati optimization algorithm for binary optimization problems. Neural Comput. Appl. 2024, 36, 2797–2834. [Google Scholar] [CrossRef]
  61. Bas, E. BinDMO: A new Binary Dwarf Mongoose Optimization algorithm on based Z-shaped, U-shaped, and taper-shaped transfer functions for CEC-2017 benchmarks. Neural Comput. Appl. 2024, 36, 6903–6935. [Google Scholar] [CrossRef]
Figure 1. (a) Exploration strategy; (b) Exploitation strategy.
Figure 2. (a) S-shaped TFs; (b) V-shaped TFs.
Figure 3. Pseudo-code of PA [33].
Figure 4. Pseudo-code of RA [33].
Figure 5. Pseudo-code of IA [33].
Figure 6. Flowchart of BPO1 for 0-1 KPs.
Figure 7. Pseudo-code of BPO1 for 0-1 KPs.
Figure 8. Flowchart of the BPO2 for the UFLP.
Figure 9. Pseudo-code of BPO2 for UFLP.
Figure 10. The time results of BPO1 compared to other algorithms.
Figure 11. The time results of BPO2 compared to other algorithms.
Table 1. Binary variants of metaheuristic algorithms.
Year | Binary Version of the Algorithm | Transfer Function Shape
2025 | Binary Artificial Protozoa Optimizer [22] | S-shape and V-shape
2024 | Binary Grasshopper Optimization [23] | S-shape and V-shape
2023 | FOX Optimization Algorithms [24] | S-shape and V-shape
2022 | Binary Manta Ray Foraging [25] | S-shape
2022 | Binary African Vulture Optimization [26] | S-shape and V-shape
2021 | Binary Competitive Optimization [27] | V-shape
2019 | Binary Whale Optimization [28] | S-shape and V-shape
Table 2. TFs mathematical formula.
TF1 (S1): TF1(x) = 1/(1 + e^(−2x))
TF2 (S2): TF2(x) = 1/(1 + e^(−x))
TF3 (S3): TF3(x) = 1/(1 + e^(−x/2))
TF4 (S4): TF4(x) = 1/(1 + e^(−x/3))
TF5 (V1): TF5(x) = |erf((√π/2)·x)|
TF6 (V2): TF6(x) = |tanh(x)|
TF7 (V3): TF7(x) = |x/√(1 + x²)|
TF8 (V4): TF8(x) = |(2/π)·arctan((π/2)·x)|
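The eight TFs of Table 2 can be implemented directly. The binarization rules sketched below (S-shaped: set the bit to 1 with probability TF(x); V-shaped: flip the current bit with probability TF(x)) follow the common practice introduced for binary swarm optimizers [2]; whether BPO applies them in exactly this form is an assumption of this sketch.

```python
import math
import random

# The eight transfer functions of Table 2 (S1-S4 and V1-V4).
S_SHAPED = [
    lambda x: 1.0 / (1.0 + math.exp(-2.0 * x)),       # TF1 (S1)
    lambda x: 1.0 / (1.0 + math.exp(-x)),             # TF2 (S2)
    lambda x: 1.0 / (1.0 + math.exp(-x / 2.0)),       # TF3 (S3)
    lambda x: 1.0 / (1.0 + math.exp(-x / 3.0)),       # TF4 (S4)
]
V_SHAPED = [
    lambda x: abs(math.erf(math.sqrt(math.pi) / 2.0 * x)),       # TF5 (V1)
    lambda x: abs(math.tanh(x)),                                 # TF6 (V2)
    lambda x: abs(x / math.sqrt(1.0 + x * x)),                   # TF7 (V3)
    lambda x: abs(2.0 / math.pi * math.atan(math.pi / 2.0 * x)), # TF8 (V4)
]

def binarize_s(x, tf):
    """S-shaped rule: set the bit to 1 with probability TF(x)."""
    return 1 if random.random() < tf(x) else 0

def binarize_v(x, current_bit, tf):
    """V-shaped rule: flip the current bit with probability TF(x)."""
    return 1 - current_bit if random.random() < tf(x) else current_bit
```

Note the qualitative difference: S-shaped functions map a continuous position to a fixed probability of the bit being 1, while V-shaped functions make large position changes more likely to flip the bit, regardless of its current value.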
Table 13. BPO1′s results based on standard deviation ranks.
Dataset | TF1 | TF2 | TF3 | TF4 | TF5 | TF6 | TF7 | TF8
8a | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
8b | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
8c | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
8d | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
8e | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
12a | 1 | 1 | 1 | 1 | 1 | 8 | 1 | 1
12b | 4 | 1 | 8 | 1 | 1 | 4 | 4 | 4
12c | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
12d | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
12e | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
16a | 7 | 8 | 2 | 1 | 3 | 6 | 5 | 1
16b | 8 | 7 | 1 | 1 | 4 | 3 | 6 | 4
16c | 5 | 6 | 8 | 1 | 3 | 1 | 4 | 7
16d | 3 | 8 | 7 | 6 | 1 | 5 | 3 | 2
16e | 5 | 8 | 2 | 7 | 3 | 3 | 5 | 1
20a | 2 | 2 | 5 | 6 | 6 | 1 | 2 | 8
20b | 4 | 6 | 8 | 1 | 3 | 2 | 4 | 7
20c | 2 | 5 | 8 | 3 | 7 | 6 | 1 | 4
20d | 6 | 8 | 1 | 4 | 3 | 2 | 6 | 5
20e | 4 | 5 | 1 | 8 | 6 | 7 | 3 | 1
24a | 3 | 4 | 8 | 7 | 2 | 6 | 5 | 1
24b | 6 | 4 | 2 | 5 | 3 | 1 | 6 | 8
24c | 6 | 7 | 1 | 4 | 2 | 8 | 5 | 3
24d | 7 | 1 | 6 | 3 | 5 | 2 | 7 | 4
24e | 3 | 5 | 6 | 7 | 1 | 4 | 2 | 8
Total best | 11 | 12 | 13 | 14 | 12 | 12 | 11 | 13
Total Min. Rank | 8 | 4 | 2 | 1 | 4 | 4 | 7 | 2
Bolded rank 1 values denote zero standard deviation.
Table 14. Comparison of BPO1 and existing binary optimization algorithms.
Dataset | Best (BPO1) | Best (BinEMA) | Best (BinFHO) | Std. (BPO1) | Std. (BinEMA) | Std. (BinFHO) | Time (BPO1) | Time (BinEMA) | Time (BinFHO)
8a | 3,924,400 | 3,924,400 | 3,924,400 | 0 | 0 | 105,556 | 18,785 | 4,731,864 | 1,154,939
8b | 3,813,669 | 3,813,669 | 3,813,669 | 0 | 0 | 58,112 | 18,785 | 4,870,528 | 8,936,598
8c | 3,347,452 | 3,347,452 | 3,347,452 | 0 | 0 | 55,864 | 18,731 | 5,586,372 | 6,213,332
8d | 4,187,707 | 4,187,707 | 4,187,707 | 0 | 0 | 76,109 | 18,768 | 5,211,077 | 1,176,619
8e | 4,955,555 | 4,955,555 | 4,955,555 | 0 | 0 | 85,564 | 18,751 | 5,009,006 | 1,160,177
12a | 5,688,887 | 5,688,887 | 5,681,360 | 0 | 0 | 79,987 | 19,466 | 6,428,576 | 4,717,419
12b | 6,498,597 | 6,498,597 | 6,473,019 | 0 | 0 | 118,043 | 19,293 | 6,798,889 | 656,393
12c | 5,170,626 | 5,169,676 | 5,169,676 | 0 | 0 | 143,769 | 19,393 | 6,446,404 | 4,771,043
12d | 6,992,404 | 6,992,404 | 6,988,075 | 0 | 0 | 87,414 | 19,327 | 4,332,973 | 4,827,808
12e | 5,337,472 | 5,337,472 | 5,337,472 | 0 | 0 | 87,537 | 19,302 | 642,813 | 4,762,691
16a | 7,850,983 | 7,850,983 | 7,794,207 | 0 | 9637 | 105,724 | 19,863 | 7,939,084 | 511,062
16b | 9,352,998 | 9,352,998 | 9,136,207 | 0 | 2159 | 122,428 | 19,859 | 1,227,767 | 5,034,261
16c | 9,151,147 | 9,151,147 | 9,151,147 | 0 | 15,181 | 125,594 | 19,878 | 1,222,006 | 5,081,491
16d | 9,348,889 | 9,348,889 | 9,211,224 | 11,356 | 15,404 | 98,423 | 19,933 | 1,316,862 | 5,079,434
16e | 7,769,117 | 7,769,117 | 7,750,491 | 8225 | 8393 | 149,802 | 19,893 | 6,335,024 | 5,034,843
20a | 10,727,049 | 10,705,222 | 10,530,652 | 12,803 | 12,728 | 129,164 | 20,375 | 1,525,984 | 5,218,366
20b | 9,818,261 | 9,790,311 | 9,658,454 | 14,793 | 17,262 | 128,170 | 20,388 | 6,152,399 | 5,262,672
20c | 10,714,023 | 10,714,023 | 10,523,410 | 6504 | 10,266 | 143,460 | 20,366 | 9,028,196 | 5,180,001
20d | 8,929,156 | 8,915,396 | 8,756,792 | 12,873 | 14,595 | 86,195 | 20,310 | 8,654,616 | 5,314,107
20e | 9,357,969 | 9,357,192 | 9,208,072 | 8276 | 17,332 | 90,055 | 20,362 | 8,741,231 | 5,207,472
24a | 13,549,094 | 13,504,878 | 13,347,960 | 27,897 | 28,355 | 177,177 | 21,012 | 9,782,546 | 5,472,737
24b | 12,233,713 | 12,166,914 | 11,967,394 | 22,582 | 17,330 | 122,985 | 20,948 | 1,055,364 | 619,041
24c | 12,448,780 | 12,424,942 | 12,265,766 | 19,337 | 19,062 | 128,542 | 21,066 | 1,110,962 | 6,775,468
24d | 11,815,315 | 11,779,141 | 11,456,795 | 16,697 | 14,836 | 106,299 | 20,962 | 1,155,513 | 7,484,923
24e | 13,940,099 | 13,897,782 | 13,834,383 | 16,898 | 18,648 | 165,973 | 20,984 | 1,040,965 | 7,426,182
Table 15. Comparison of BPO1 and BWOS’s SR (%) value.
Dataset | BPO1 SR | BWOS SR
8a | 100 | 100
8b | 100 | 75
8c | 100 | 100
8d | 100 | 100
8e | 100 | 100
12a | 100 | 40
12b | 100 | 85
12c | 100 | 100
12d | 100 | 100
12e | 100 | 100
16a | 100 | 5
16b | 100 | 80
16c | 100 | 100
16d | 100 | 0
16e | 100 | 35
20a | 100 | 95
20b | 100 | 100
20c | 100 | 0
20d | 100 | 15
20e | 100 | 30
24a | 100 | 25
24b | 100 | 100
24c | 100 | 100
24d | 100 | 0
24e | 100 | 60
Table 24. Wilcoxon signed-rank test results for BPO2 with different TFs.
Dataset | TF1 vs. TF2 | TF2 vs. TF3 | TF3 vs. TF4 | TF4 vs. TF5 | TF5 vs. TF6 | TF6 vs. TF7 | TF7 vs. TF8
Cap71 | 1.4433 × 10⁻² (+) | 6.5533 × 10⁻⁵ (+) | 4.3736 × 10⁻¹¹ (+) | 5.5122 × 10⁻² (-) | 1.6855 × 10⁻²⁶ (+) | 3.8758 × 10⁻³⁵ (+) | 3.0525 × 10⁻⁶ (+)
Cap72 | 0 (+) | 0 (+) | 3.4143 × 10⁻⁵ (+) | 1.3764 × 10⁻⁸ (+) | 3.4213 × 10⁻¹⁰ (+) | 4.3425 × 10⁻² (+) | 4.9529 × 10⁻³ (+)
Cap73 | 4.5496 × 10⁻² (+) | 5.5567 × 10⁻⁶ (+) | 4.6935 × 10⁻¹⁰ (+) | 2.3250 × 10⁻⁶ (+) | 5.7591 × 10⁻¹ (-) | 1.3423 × 10⁻⁶ (+) | 1.3566 × 10⁻²⁴ (+)
Cap74 | 1.3881 × 10⁻⁹ (+) | 1.5618 × 10⁻³ (+) | 4.9959 × 10⁻¹ (-) | 3.4853 × 10⁻¹ (-) | 4.6170 × 10⁻¹⁵ (+) | 2.2828 × 10⁻¹³ (+) | 1.0656 × 10⁻² (+)
Cap101 | 2.84065 × 10⁻⁵ (+) | 1.1888 × 10⁻⁶⁹ (+) | 1.5173 × 10⁻³⁴ (+) | 1.0174 × 10⁻⁸¹ (+) | 1.5662 × 10⁻¹⁹⁴ (+) | 3.5661 × 10⁻²⁸ (+) | 2.5649 × 10⁻³⁷ (+)
Cap102 | 6.1757 × 10⁻¹⁴¹ (+) | 2.1287 × 10⁻²⁶² (+) | 2.1845 × 10⁻³ (+) | 6.3118 × 10⁻²¹² (+) | 2.8400 × 10⁻¹² (+) | 3.6962 × 10⁻¹⁷¹ (+) | 5.8703 × 10⁻²⁶ (+)
Cap103 | 0 (+) | 0 (+) | 2.65185 × 10⁻¹¹⁵ (+) | 1.91237 × 10⁻⁶ (+) | 2.53748 × 10⁻¹⁰ (+) | 6.19404 × 10⁻¹⁴ (+) | 6.05068 × 10⁻¹⁵² (+)
Cap104 | 1.0757 × 10⁻⁴ (+) | 5.3125 × 10⁻¹ (+) | 1.6223 × 10⁻⁴⁰ (+) | 2.9629 × 10⁻³² (+) | 1.0756 × 10⁻¹¹ (+) | 1.3073 × 10⁻³⁴ (+) | 4.4884 × 10⁻⁷ (+)
Cap131 | 0 (+) | 0 (+) | 1.0982 × 10⁻¹⁵² (+) | 1.7751 × 10⁻¹⁹⁴ (+) | 3.7907 × 10⁻³ (+) | 6.2461 × 10⁻⁵ (+) | 8.8752 × 10⁻¹⁹² (+)
Cap132 | 2.6110 × 10⁻⁹ (+) | 2.4921 × 10⁻²²⁶ (+) | 1.8649 × 10⁻⁷⁰ (+) | 1.1413 × 10⁻⁶⁹ (+) | 7.4979 × 10⁻²¹⁰ (+) | 7.5880 × 10⁻² (+) | 3.5134 × 10⁻²¹³ (+)
Cap133 | 7.3156 × 10⁻²⁰⁵ (+) | 4.3519 × 10⁻⁶⁴ (+) | 2.7829 × 10⁻³⁰ (+) | 8.3696 × 10⁻¹⁶⁵ (+) | 5.9503 × 10⁻²⁸⁹ (+) | 1.2564 × 10⁻³¹² (+) | 1.0410 × 10⁻⁴⁴ (+)
Cap134 | 8.8687 × 10⁻⁶⁴ (+) | 1.0087 × 10⁻²⁷ (+) | 1.4696 × 10⁻³² (+) | 3.8097 × 10⁻¹⁵¹ (+) | 3.3251 × 10⁻⁸ (+) | 6.5280 × 10⁻¹³⁰ (+) | 2.1655 × 10⁻⁸⁸ (+)
Table 25. Standard deviations results for BPO2 and other algorithms.
Dataset | BPO2 | BinHBA | BinAO | BinFHO
Cap71 | 0 | 3003.498 | 0.000 | 0.000
Cap72 | 0 | 1897.379 | 323.070 | 513.649
Cap73 | 0 | 1159.660 | 50.009 | 59.522
Cap74 | 0 | 1227.927 | 842.226 | 1096.042
Cap101 | 0 | 3243.481 | 1544.294 | 1296.063
Cap102 | 0 | 3576.354 | 3572.109 | 2612.916
Cap103 | 0 | 2378.517 | 2884.247 | 3002.349
Cap104 | 0 | 2888.836 | 2406.360 | 113.161
Cap131 | 0 | 5560.183 | 7234.821 | 3955.994
Cap132 | 0 | 6099.738 | 6917.681 | 3548.456
Cap133 | 0 | 7068.964 | 7091.403 | 2395.778
Cap134 | 0 | 5913.909 | 6742.872 | 4315.387
Table 26. GAP values for BPO2 and other algorithms.
Dataset | TF7-BPO2 | BinHBA | BinAO | BinFHO
Cap71 | 0 | 2.3100 × 10⁻¹ | 0 | 0
Cap72 | 0 | 1.4700 × 10⁻¹ | 1.1000 × 10⁻² | 3.9000 × 10⁻²
Cap73 | 0 | 5.4000 × 10⁻² | 2.0000 × 10⁻³ | 2.0000 × 10⁻³
Cap74 | 0 | 5.8000 × 10⁻² | 1.9000 × 10⁻² | 5.3000 × 10⁻²
Cap101 | 1.4613 × 10⁻¹⁴ | 5.3800 × 10⁻¹ | 5.0100 × 10⁻¹ | 5.2900 × 10⁻¹
Cap102 | 1.3621 × 10⁻¹⁴ | 5.0900 × 10⁻¹ | 6.9400 × 10⁻¹ | 7.8600 × 10⁻¹
Cap103 | 0 | 2.1700 × 10⁻¹ | 5.8000 × 10⁻¹ | 4.4800 × 10⁻¹
Cap104 | 1.2532 × 10⁻¹⁴ | 1.4700 × 10⁻¹ | 1.8000 × 10⁻¹ | 3.0000 × 10⁻³
Cap131 | 5.4144 × 10⁻¹ | 2.2620 × 10⁰ | 3.3800 × 10⁰ | 3.3820 × 10⁰
Cap132 | 4.3702 × 10⁻¹ | 1.4600 × 10⁰ | 2.7870 × 10⁰ | 1.8750 × 10⁰
Cap133 | 2.9188 × 10⁻¹ | 8.7300 × 10⁻¹ | 1.7290 × 10⁰ | 8.5800 × 10⁻¹
Cap134 | 5.7680 × 10⁻² | 4.6600 × 10⁻¹ | 1.2510 × 10⁰ | 3.6700 × 10⁻¹
Table 27. Running times of BPO2 and other algorithms.
Dataset | BPO2 | BinHBA | BinAO | BinFHO
Cap71 | 1.6903 × 10¹ | 1.1019 × 10² | 2.5438 × 10² | 1.4205 × 10²
Cap72 | 1.6571 × 10¹ | 5.2108 × 10¹ | 3.2636 × 10² | 1.2549 × 10²
Cap73 | 1.5571 × 10¹ | 3.7402 × 10¹ | 1.2273 × 10² | 1.1101 × 10²
Cap74 | 1.5271 × 10¹ | 3.6606 × 10¹ | 1.3500 × 10² | 9.4923 × 10¹
Cap101 | 1.7825 × 10¹ | 1.7647 × 10² | 2.8252 × 10² | 1.9780 × 10²
Cap102 | 1.7258 × 10¹ | 7.7935 × 10¹ | 2.9084 × 10² | 1.8116 × 10²
Cap103 | 1.5989 × 10¹ | 4.7754 × 10¹ | 1.7569 × 10² | 1.4776 × 10²
Cap104 | 1.5921 × 10¹ | 3.7847 × 10¹ | 1.5390 × 10² | 1.3143 × 10²
Cap131 | 1.6892 × 10¹ | 1.0444 × 10² | 3.3792 × 10² | 2.9275 × 10²
Cap132 | 1.6730 × 10¹ | 7.7320 × 10¹ | 2.8343 × 10² | 2.1781 × 10²
Cap133 | 1.6654 × 10¹ | 5.7447 × 10¹ | 2.6054 × 10² | 1.7899 × 10²
Cap134 | 1.6626 × 10¹ | 4.8901 × 10¹ | 2.4912 × 10² | 1.4819 × 10²
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
