Article

A Hybrid Symmetry Strategy Improved Binary Planet Optimization Algorithm with Theoretical Interpretability for the 0-1 Knapsack Problem

by
Yang Yang
School of Mathematical Sciences, Xiamen University, Xiamen 361005, China
Symmetry 2025, 17(9), 1538; https://doi.org/10.3390/sym17091538
Submission received: 15 May 2025 / Revised: 10 July 2025 / Accepted: 15 July 2025 / Published: 15 September 2025
(This article belongs to the Special Issue Symmetry in Intelligent Algorithms)

Abstract

The Planet Optimization Algorithm (POA) is a meta-heuristic inspired by celestial mechanics, drawing on Newtonian gravitational principles to simulate planetary dynamics in optimization search spaces. While the POA demonstrates a strong performance in continuous domains, it is not directly applicable to discrete problems; we therefore propose an Improved Binary Planet Optimization Algorithm (IBPOA) tailored to the classical 0-1 knapsack problem (0-1 KP). Building upon the POA, the IBPOA introduces a novel improved transfer function (ITF) and a greedy repair operator (GRO). Unlike general binarization methods, the ITF integrates theoretical foundations from branch-and-bound (B&B) and reduction algorithms, reducing the search space while guaranteeing optimal solutions. This improvement is strengthened further through the incorporation of the GRO, which significantly improves the searching capability. Extensive computational experiments on large-scale instances demonstrate the IBPOA’s effectiveness for the 0-1 KP, showing a superior performance in its convergence rate, population diversity, and exploration–exploitation balance. The results from 30 independent runs confirm that the IBPOA consistently obtains the optimal solutions across all 15 benchmark instances, spanning three categories. Wilcoxon’s rank-sum tests against seven state-of-the-art algorithms reveal that the IBPOA significantly outperforms all competitors (p < 0.05), though it is occasionally matched in solution quality by the binary reptile search algorithm (BinRSA). Crucially, the IBPOA achieves solutions 4.16 times faster than the BinRSA on average, establishing an optimal balance between solution quality and computational efficiency.

1. Introduction

The 0-1 knapsack problem (0-1 KP) [1], a canonical NP-hard problem [2,3,4], and its various variants [5] remain widely studied despite the absence of polynomial-time exact algorithms. The simplicity of its formulation and the flexibility of its variants for modeling diverse real-world constraints have ensured its relevance across applications such as portfolio selection [6,7], rectilinear building frames [8], complex planning and scheduling [9], and picking–packing and delivery problems [10]. High-quality solutions to these problems can yield significant economic benefits and profits, driving sustained research interest in NP-hard problem variants, particularly the 0-1 KP. The mathematical formulation of the 0-1 KP is as follows:
Definition 1
([1]). Given a set N = {1, …, n} of n items, where each item j has a profit p_j > 0 and a weight w_j > 0, the 0-1 KP aims to select a subset of N that maximizes the total profit while ensuring the total weight does not exceed the knapsack’s capacity C. The problem can be formulated as follows.
max f(X) = Σ_{j=1}^{n} x_j p_j,  (1)
s.t.
g(X) = Σ_{j=1}^{n} x_j w_j ≤ C,  (2)
x_j ∈ {0, 1}, j ∈ N,  (3)
where x_j = 1 indicates the selection of the j-th item, and x_j = 0 indicates non-selection. The objective function (1) of the model aims to maximize the total profit of the selected items. Constraint (2) enforces the total weight of selected items not exceeding the knapsack’s capacity C, while (3) specifies binary decision variables. To exclude trivial solutions, we assume Σ_{j=1}^{n} w_j > C and w_j ≤ C for all j ∈ N. Additionally, for clarity in the subsequent analysis, items are sorted such that p_1/w_1 ≥ p_2/w_2 ≥ ⋯ ≥ p_n/w_n.
Since only exact algorithms with non-polynomial time complexity exist for the large-scale 0-1 KP, such as dynamic programming (DP) [11] and branch-and-bound (B&B) [12,13], meta-heuristic algorithms (MAs), particularly genetic algorithms (GAs) [14], have become the dominant approach.
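As a concrete point of reference for the exact approaches just mentioned, the textbook DP recursion can be sketched in a few lines of Python (a minimal illustration, not the implementation used in this paper’s experiments):

```python
def knapsack_dp(profits, weights, capacity):
    """Exact 0-1 KP via the classical O(n*C) DP: dp[c] holds the best
    profit achievable with knapsack capacity c."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        # Traverse capacities downward so each item is packed at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]
```

On the small instance of Example 1 in Section 3.3 (C = 115), the sketch returns the optimal profit 163; the pseudo-polynomial O(nC) table is exactly what makes DP impractical for large C.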
The increasing diversity of MAs has demonstrated that the framework of multiple sub-swarms effectively enhances the search performance. This framework has been widely adopted by prominent algorithms such as Gray Wolf Optimization (GWO) [15], Dynamic Mentoring and Self-Regulation-Based Particle Swarm Optimization (DMeSR-PSO) [16], and Bee Colony Optimization (BCO) [17]. While sub-swarm mechanisms improve the algorithmic performance, their effectiveness is highly sensitive to the parameter settings during partitioning. Therefore, partitioning parameter-free MAs with a robust performance and well-designed mechanisms represents a valuable research direction.
To address parameter tuning in sub-swarm partitioning, Thanh et al. [18] proposed the Planet Optimization Algorithm (POA), inspired by celestial mechanics. The POA designates the global best solution g_best as the Sun, with the other solutions (planets) orbiting it, while distant solutions experience weaker attraction, maintaining proximity to their current positions. This gravitational mechanism achieves parameter-free synergy between local and global search, demonstrating an excellent algorithmic performance within continuous search spaces, including scenarios such as 23 well-known test functions, 38 IEEE CEC benchmark test functions (CEC2017, CEC2019), and 3 real engineering problems (tension/compression spring design, welded beam design, and pressure vessel design).
The POA partitions individuals into multiple sub-swarms through a simple mechanism that effectively maintains the population diversity without empirical parameter dependence. This approach provides a valuable framework for solving NP-hard problems. However, extending this framework to discrete spaces requires structural modifications via discretization strategies. To address this, we propose an Improved Binary Planet Optimization Algorithm (IBPOA) for the 0-1 KP. Preserving the POA’s sub-swarm mechanism, the IBPOA introduces a distinct discretization approach. We propose an improved transfer function (ITF) that incorporates Yang’s reduction algorithm [19], as an alternative to the standard sigmoid-based binarization strategies. The ITF modifies the sigmoid transfer function by leveraging B&B theory. This modification symmetrically adjusts the selection probabilities relative to the hyperplane formed by the break item (also called the split item) and the origin. The magnitude of the adjustment scales proportionally to an item’s distance from this hyperplane. This enhancement simultaneously prunes the search space, accelerates the convergence, and preserves global optimality guarantees. The main contributions of this paper are
  • Practically, the IBPOA outperforms other methods from the literature [20] for the 0-1 KP while providing an extensible POA-based framework for related NP-hard problems. Its sub-swarm mechanism further offers transferable population structures for other MAs.
  • Enhancement of the ITF operator embeds B&B theory with rigorous interpretability. Its parameters self-adapt to problem instances, boosting the search capabilities while ensuring the existence of an optimal solution.
  • Computational evaluations on 15 literature benchmarks [20] confirm the IBPOA’s superiority in its convergence acceleration, diversity preservation, and exploration–exploitation equilibrium, with seamless adaptability to 0-1 KP variants.
The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 details the IBPOA framework and its enhancements over the original POA. Section 4 evaluates the computational performance against that of comparison algorithms from the literature. Concluding remarks and future research directions are presented in the final section.

2. The Literature Review

With the growing diversity of MAs, numerous algorithms now exist for solving the 0-1 KP. Based on encoding schemes, these approaches fall into two primary categories: binary encoding and real encoding [21], such as Hybrid Rice Optimization (HRO) [22], the Binary Artificial Jellyfish Search Algorithm (Bin-AJS) [23], the Binary Marine Predators Algorithm (BMPA) [24], and the Monogamous Pairs Genetic Algorithm (MPGA) [25]. Binary-encoded algorithms operate on bit-string representations directly, whereas real-encoded algorithms obtain 0-1 KP solutions by converting continuous solutions via transfer functions (TFs). Concurrently, researchers are attempting to explore novel encoding schemes to enhance the algorithmic performance. Complex-valued encoding represents decision variables using complex numbers, leveraging both real and imaginary components. This approach increases the information capacity and improves the population diversity. During decoding, these algorithms first convert complex values into real numbers via the L2-norm, assigning the sign bit based on the phase angle. The resulting real values are then mapped to binary solutions employing TFs (e.g., the sigmoid function). A few complex-valued encoding algorithms have been developed for the 0-1 KP, such as the Complex-Valued Bat Algorithm (CPBA) [26] and Complex-valued Wind Driven Optimization (CWDO) [27]. Additionally, some MAs utilize less scalable encoding schemes for the 0-1 KP, including Quantum-Inspired Social Evolution (QSE) [28], Quantum-Inspired Differential Evolution with Gray Wolf Optimizer (QDGWO) [29], and the Modified Discrete Shuffled Frog Leaping Algorithm (MDSFL) [30].
As a representative of MAs, Particle Swarm Optimization (PSO) [31] was first proposed by Kennedy and Eberhart for continuous optimization. Two years later, they introduced Binary Particle Swarm Optimization (BPSO) [32] to solve discrete optimization problems such as the 0-1 KP. As of 2023, incomplete statistics indicated that over 540 MAs exist [33], among which more than 57 are inspired by or exhibit iterative mechanisms similar to PSO [34]. Examples include widely applied algorithms like the Bee Colony Optimization [17], the Sine Cosine Algorithm [35,36], and the Moth-Flame Optimization Algorithm [37]. Thus, high-performance improvement operators not only enhance PSO’s problem-solving efficacy but also significantly benefit its variants. Research on such operators is therefore critically important.
During the discretization of PSO and its variants, these algorithms typically employ sigmoid functions to map continuous-space solutions into discrete/binary spaces. This approach causes significant position fluctuations when particle velocities approach zero, resulting in compromised local search capabilities. To address this issue, researchers have developed improvements primarily through two approaches:
On the one hand, one research direction involves modifying TFs by replacing sigmoid functions with alternative formulations [20,38,39]. Mirjalili and Lewis developed the S-shaped transfer function by scaling the original sigmoid function with a coefficient α. To address the position oscillation issue near zero velocity inherent in S-shaped functions, they subsequently introduced a V-shaped transfer function based on tanh(x) and its variants [38]. Following similar principles, Mirjalili et al. proposed a U-shaped transfer function using the exponents |v|^k (k > 1), while He et al. created a taper-shaped transfer function with |v|^k (0 < k < 1) [39,40]. Separately, Guo et al. derived a Z-shaped transfer function from 1 − α^β, where α ∈ R⁺ and β denotes the particle velocity [41]. Empirical studies confirm that the TF’s performance varies significantly across problem domains, making its careful selection essential for algorithmic optimization.
On the other hand, besides sharing common drawbacks like a lack of a theoretical foundation with other MAs, PSO inherently exhibits relatively weak local search capabilities due to the insufficient synergy among particles caused by its iterative update rules [42,43,44]. To address this, researchers have proposed various improvement strategies, with multi-sub-swarm structures representing a significant direction. Kennedy and Mendes incorporated topological structures inspired by simulated annealing algorithms, modifying particle movement toward the global best and personal best solutions to target neighboring local best solutions instead. This approach substantially enhanced the population diversity [45]. Similarly, Ye et al. partitioned particles into sub-swarms and redirected particle updates toward the personal best solutions and the arithmetic mean of all local best solutions [46]. Notably, while well-designed topologies can significantly improve PSO’s performance and avoid local optima, selecting the appropriate structures remains challenging [43]. As an alternative to topology selection, Tanweer et al. classified particles into mentor, mentee, and independent learner groups based on the fitness differences and the Euclidean distance from the global best particle, proposing DMeSR-PSO [16]. Experimental results demonstrate that DMeSR-PSO not only eliminates topology selection but also outperforms six PSO variants and five MAs on the CEC2005, CEC2011, and CEC2013 benchmark functions.
Clearly, the classification rules for different groups in DMeSR-PSO significantly influence the solution quality while being empirically parameter-dependent. Similarly, parameter tuning for sub-swarm partitioning persists in PSO variants. In this work, we preserve the parameter-free sub-swarm partitioning mechanism of the POA while introducing the IBPOA, an enhanced heuristic for the 0-1 KP. Table 1, Table 2 and Table 3 compare the IBPOA with the algorithms from [20]. To demonstrate the IBPOA’s superiority, Table 4 and Table 5 present the results of Wilcoxon’s rank-sum tests against these benchmarks, confirming that the IBPOA satisfies both the computational efficiency and accuracy requirements for the problem. Additionally, the ablation studies in Table 6 compare the proposed ITF operator against 20 transfer functions (including S-shaped [38], V-shaped [38], U-shaped [40], taper-shaped [39], and Z-shaped [41] functions), showing the ITF’s significant advantage in solving the 0-1 KP.

3. The Improved Binary Planet Optimization Algorithm (IBPOA)

This section details the IBPOA, beginning with related operators of the POA in Section 3.1. We then develop an ITF leveraging B&B theory [19] for real-to-binary encoding conversion, integrated with a greedy repair operator (GRO) for constraint handling. Section 3.4 establishes the theoretical foundations for the ITF and analyzes the implementation properties, which represent advances beyond the conventional methodologies. An analysis of the computational complexity of the IBPOA is presented in Section 3.6.

3.1. A Review of the Planet Optimization Algorithm (POA)

3.1.1. Initial Solution Generation

For an n-dimensional continuous optimization problem, let X^t represent the population of the POA in the t-th generation, where 0 ≤ t ≤ T. Here, t denotes the current iteration, and T represents the maximum number of iterations. Clearly, X^0 is the initial solution of the algorithm. Let x_{i,j}^t be the j-th variable of the i-th individual in the t-th generation. The indices satisfy the constraints 1 ≤ j ≤ n and 1 ≤ i ≤ pop, where pop is the size of the population. The initial solution of the POA can be expressed as follows.
x_{i,j}^0 = rand × (UB_j − LB_j) + LB_j,  (4)
where UB_j and LB_j are the maximum and minimum values of the j-th decision variable in the problem, respectively, and rand is a uniformly distributed random number in the range [0, 1]. In particular, each rand value is drawn independently.
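Formula (4) can be sketched as follows (a minimal illustration; the bound vectors `LB` and `UB` are assumed inputs):

```python
import random

def init_population(pop, LB, UB):
    """Formula (4): each variable is sampled uniformly from [LB_j, UB_j],
    with an independent rand draw per variable and per individual."""
    n = len(LB)
    return [[random.random() * (UB[j] - LB[j]) + LB[j] for j in range(n)]
            for _ in range(pop)]
```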

3.1.2. The Moment of the Planet

The POA introduces gravitation to guide the iterative direction of individuals. Experimental results demonstrate that the moment of gravitation is more computationally effective than gravitation itself. Similar to gravitation, the moment’s value depends on interplanetary distances and masses. In the POA, a planet’s mass is determined by its objective function value: higher values correspond to a greater mass, exerting a stronger gravitational pull. The mass m_i is
m_i = aα / (α + min f(X^t) − f(X_i^t) + 1),  (5)
where a = 2, min f(X^t) is the minimal objective value in generation t, and
α = |min f(X^t) − f(X_Sun^t)|,  (6)
where f ( X Sun t ) is the Sun’s objective value, i.e., the best in generation t.
The Euclidean distance between planet i and the Sun is
R_{i,Sun} = ‖X_i^t − X_Sun^t‖ = √( Σ_{j=1}^{n} (x_{ij}^t − x_{Sun,j}^t)² ).  (7)
In the solar system, planets are primarily attracted by stellar gravity. Thus, we focus on Sun–planet interactions rather than the inter-planet gravitation. The gravitational moment between planet i and the Sun in generation t is
M_i^t = F · R_{i,Sun} = G m_i m_Sun / R_{i,Sun},  (8)
where G = 1 is the gravitational constant, and m Sun is the Sun’s mass.
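Formulas (7) and (8) can be sketched as follows (a minimal illustration; the masses m_i and m_Sun are taken as given inputs from Formulas (5) and (6)):

```python
import math

def gravitational_moment(x_i, x_sun, m_i, m_sun, G=1.0):
    """Formulas (7)-(8): Euclidean distance to the Sun, then the moment
    M = G * m_i * m_sun / R (gravitation times distance)."""
    R = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_sun)))
    return G * m_i * m_sun / R
```

Note that closer planets receive larger moments, so the displacement factor β_i in Formula (10) pulls them more strongly toward the Sun, consistent with distant solutions experiencing weaker attraction.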

3.1.3. Global Search

During the global search, individual X i t is updated as follows:
X_i^{t+1} = X_i^t + d × β_i × rand × (X_Sun^t − X_i^t),  (9)
where X_i^{t+1} is the updated individual of X_i^t, d = 1 is a constant, rand ∼ U[0, 1], and X_Sun^t represents the Sun individual in generation t.
In (9), β i controls the maximum displacement toward the Sun during global search and is calculated as follows.
β_i = M_i^t / M_max^t,  (10)
where M_max^t = max_i M_i^t is the maximum moment in generation t. Thus, β_i ∈ (0, 1].

3.1.4. Local Search

During the local search, individual X i t is updated as follows:
X_i^{t+1} = X_i^t + c × rand × (γ × X_Sun^t − X_i^t),  (11)
where c = c_0 − t/T is an adaptive factor, rand ∼ U[0, 1], and γ ∼ N(μ, σ²), where the empirical parameters are set to μ = 0.5 and σ = 0.2. We adopt these settings from the literature [18].

3.1.5. The Control Search Factor

The POA employs two distinct search methods. In the population of the t-th generation, the iteration strategy for an individual X_i^t is determined by the distance between the planet and the Sun. The POA utilizes a parameter R_min to choose between local and global search for each individual. Specifically, during the t-th iteration, if the distance R_{i,Sun} between the i-th planet and the Sun satisfies R_{i,Sun} ≤ R_min, the algorithm applies local search to the i-th individual; otherwise, global search is employed.
The value of R min is crucial. As reported by Thanh et al. [18], it is defined by the following formula:
R_min = √( Σ_{j=1}^{n} (UB_j − LB_j)² ) / R_0,  (12)
where R 0 is a preset experimental value, typically set to 1000.
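Sections 3.1.3–3.1.5 combine into a single per-planet update rule, sketched below (a sketch only: the adaptive-factor form c = c_0 − t/T is our reading of Formula (11)’s factor, and the N(0.5, 0.2²) draw is the empirical setting quoted above):

```python
import math
import random

def update_planet(x_i, x_sun, beta_i, t, T, R_min, c0=1.0, d=1.0):
    """Choose local search (11) when the planet is within R_min of the Sun,
    otherwise global search (9)."""
    R = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_sun)))
    if R <= R_min:                        # local search, Formula (11)
        c = c0 - t / T
        gamma = random.gauss(0.5, 0.2)
        return [x + c * random.random() * (gamma * s - x)
                for x, s in zip(x_i, x_sun)]
    # global search, Formula (9)
    return [x + d * beta_i * random.random() * (s - x)
            for x, s in zip(x_i, x_sun)]
```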

3.2. An Improved Transfer Function (ITF)

Section 3.1 outlined the core mechanisms of the POA. Due to its real encoding scheme, the POA is not directly applicable to discrete optimization problems such as the 0-1 KP. The sigmoid function serves as the classical transfer function for mapping continuous spaces to binary spaces and is widely adopted in PSO and its variants for discretization. Let Y i t and y i j t denote the binary counterparts of X i t and x i j t after applying the transfer function:
y_{ij}^t = 1, if rand < 1/(1 + e^{−x_{ij}^t}); 0, otherwise,  (13)
where rand U ( 0 , 1 ) is a uniformly distributed random number.
The sigmoid function maps the continuous-space solution x_{ij}^t into the interval (0, 1). A random number rand is then used to determine the binary assignment: if rand < 1/(1 + e^{−x_{ij}^t}), y_{ij}^t is assigned 1; otherwise, it is assigned 0. To enhance the search capability of the algorithm further, we propose an improved transfer function tailored to the characteristics of the 0-1 KP.
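The baseline mapping (13) can be sketched as follows (rand is passed in as a scalar to keep the sketch deterministic; in practice one independent draw per dimension is used):

```python
import math

def sigmoid_tf(x_row, rand):
    """Formula (13): binarize a real-encoded individual via the sigmoid."""
    return [1 if rand < 1.0 / (1.0 + math.exp(-x)) else 0 for x in x_row]
```

With the real-encoded individual of Example 1 in Section 3.3 and rand = 0.5, the sketch returns (0, 0, 1, 1), the infeasible solution discussed there.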
Very recently, Yang [19] adapted the improved B&B method proposed by Dey et al. [47] to solving the 0-1 KP. For a given instance P, the method first applies a greedy strategy: items are packed in non-increasing order of their profit density (i.e., p_j/w_j) until the first item exceeds the knapsack’s capacity. This yields the break item b and the break solution X′ = (x′_1, …, x′_n), where b = min{ k : Σ_{j=1}^{k} w_j > C }, x′_1 = ⋯ = x′_{b−1} = 1, and x′_b = ⋯ = x′_n = 0. Next, the Dantzig upper bound U(P) [48] is computed via linear relaxation, U(P) = Σ_{k=1}^{b−1} p_k + ⌊ r p_b / w_b ⌋, where r = C − Σ_{k=1}^{b−1} w_k denotes the residual capacity. Finally, the B&B reduction phase searches for feasible solutions with objective values no less than that of the break solution, f(X′).
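This greedy preprocessing can be sketched as follows (items are assumed pre-sorted by non-increasing profit density; integer arithmetic keeps the floor in the Dantzig bound exact):

```python
def break_item(profits, weights, capacity):
    """Return (b, r, U): the 1-based break item, the residual capacity,
    and the Dantzig upper bound from the linear relaxation."""
    total_w = 0
    for k, w in enumerate(weights, start=1):
        if total_w + w > capacity:
            r = capacity - total_w
            U = sum(profits[:k - 1]) + r * profits[k - 1] // weights[k - 1]
            return k, r, U
        total_w += w
    return None  # every item fits; no break item exists
```

On Example 1 of Section 3.3 this yields b = 3 and r = 69.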
From the literature [19], we have the following definition and theorem.
Definition 2
([19]). The item set N_1 = {1, 2, …, b−1}, consisting of items with a higher profit density than the break item b, can be divided into m disjoint subsets, denoted as N_1 = ⋃_{ω=1}^{m} N_{1,ω}. Each subset N_{1,ω} has n_{1,ω} items and can be defined as follows:
N_{1,ω} = { j : ⌊ r p_b / (p_j w_b − p_b w_j) ⌋ + 1 = ω, p_j/w_j > p_b/w_b },  (14)
where r is the residual capacity. If no item satisfies Formula (14) for a given ω (1 ≤ ω ≤ m), then N_{1,ω} = ∅.
Definition 2 first constructs a hyperplane based on the coordinates of the origin and the break item (i.e., the numerical values of the item’s profit p_j and weight w_j). Subsequently, the definition assigns a value ω to each item according to its distance from the hyperplane, with ω decreasing as the distance increases. Correspondingly, the set N_2 = {b, b+1, …, n}, composed of items whose profit density is no greater than that of the break item, satisfies the following expression.
N_{2,ω} = { j : ⌊ r p_b / (p_b w_j − p_j w_b) ⌋ + 1 = ω, p_j/w_j ≤ p_b/w_b }.  (15)
Depending on the value of ω , we have the following theorem.
Theorem 1
([19]). For each ω (1 ≤ ω ≤ m), let D_{1,ω} denote the set of items not packed into the optimal solution from N_{1,ω}, and let s_{1,ω} = |D_{1,ω}| = Σ_{j ∈ N_{1,ω}} |y*_j − 1|, where y*_j denotes the j-th component of the optimal solution. These values s_{1,ω} constitute the vector S = (s_{1,ω})_{ω=1}^{m}. Therefore, we can conclude that
Σ_{ω=1}^{m} s_{1,ω} / ω < 1.
Theorem 1 reveals that at most ω − 1 items in N_{1,ω} are excluded from the optimal solution. In other words, all of the items in N_{1,1} are selected in the optimal solution. At most one item in N_{1,2} is excluded, and so on. The smaller an item’s ω-value in N_{1,ω}, the higher the item’s likelihood of optimal inclusion.
Conversely, at most ω − 1 items in N_{2,ω} are included in the optimal solution. Specifically, no items in N_{2,1} are selected. Up to one item in N_{2,2} may be chosen, and so on. Smaller ω values for items in N_{2,ω} correlate with reduced selection probabilities. To streamline the subsequent discussion, we denote the ω-value associated with each decision variable as ϖ, which is computed as follows.
ϖ_j = ⌊ r p_b / (p_j w_b − p_b w_j) ⌋ + 1, if e_j > e_b; ⌊ r p_b / (p_b w_j − p_j w_b) ⌋ + 1, if e_j < e_b; ∞, otherwise,  (16)
where e_j = p_j/w_j denotes the profit density of item j.
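Formula (16) can be sketched directly (integer arithmetic keeps the floor exact; `b` is the 1-based break item and `r` the residual capacity):

```python
def varpi(profits, weights, b, r):
    """Formula (16): distance rank of each item w.r.t. the break hyperplane."""
    pb, wb = profits[b - 1], weights[b - 1]
    out = []
    for p, w in zip(profits, weights):
        num = p * wb - pb * w      # same sign as e_j - e_b, since w, wb > 0
        if num > 0:
            out.append(r * pb // num + 1)
        elif num < 0:
            out.append(r * pb // (-num) + 1)
        else:
            out.append(float("inf"))   # e_j = e_b (the break item itself)
    return out
```

On Example 1 in Section 3.3 this yields ϖ = (1, 46, ∞, 1).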
Building on the formula above, we propose an improved transfer function (ITF). The ITF partitions the set of items into two subsets based on the break item b’s profit density. For each item, the ITF assigns a distance-weighted value 1/ϖ_j using the item’s distance to the hyperplane formed by the break item and the origin. For items with a profit density exceeding the break item’s, the ITF increases the probability of setting y_{ij}^t = 1; symmetrically, for items with a profit density below the break item’s, it increases the probability of setting y_{ij}^t = 0. The ITF is formally defined as follows:
y_{ij}^t = 1, if rand < 1/(1 + e^{−x_{ij}^t}) + 1/ϖ_j and e_j ≥ e_b; 1, if rand < 1/(1 + e^{−x_{ij}^t}) − 1/ϖ_j and e_j < e_b; 0, otherwise,  (17)
where rand ∼ U(0, 1).
Building on Formula (13), the ITF in Formula (17) partitions the item set N into two subsets N_1 and N_2 based on the break item b’s profit density. The value ϖ_j for the j-th item is determined via Formula (14) for N_1 and Formula (15) for N_2, affecting y_{ij}^t. To illustrate the refined transfer function, consider the items in N_{1,1}. For any j ∈ N_{1,1}, the item is guaranteed inclusion in the optimal solution. Observe that 1/(1 + e^{−x_{ij}^t}) + 1/ϖ_j > 1, ensuring y_{ij}^t = 1 for all x_{ij}^t. As ϖ_j increases, 1/ϖ_j decreases monotonically, reflecting the item’s hyperplane proximity. Consequently, the items in N_{1,ω} exhibit a diminishing selection certainty, leading the algorithm to prioritize searching for these variables. Conversely, for the items in N_{2,1}, 1/(1 + e^{−x_{ij}^t}) − 1/ϖ_j < 0, forcing y_{ij}^t = 0 for all j ∈ N_{2,1}. Here, smaller ϖ_j values correspond to stronger guarantees of exclusion from the optimal solution.
Therefore, the ITF prioritizes exploring decision variables near the hyperplane defined by the origin and the break item b. By reducing the number of variables definitively included in or excluded from the optimal solution, the algorithm maintains its search capability while accelerating the convergence.
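Formula (17) can be sketched as follows (the ranks follow Formula (16); a scalar `rand` keeps the sketch deterministic, whereas in practice one independent draw per dimension is used):

```python
import math

def itf(x_row, profits, weights, b, r, rand):
    """Formula (17): sigmoid shifted by +/- 1/varpi_j around the break item."""
    pb, wb = profits[b - 1], weights[b - 1]
    y = []
    for x, p, w in zip(x_row, profits, weights):
        num = p * wb - pb * w                  # same sign as e_j - e_b
        sig = 1.0 / (1.0 + math.exp(-x))
        if num == 0:                           # e_j = e_b: varpi = inf, plain sigmoid
            y.append(1 if rand < sig else 0)
        elif num > 0:                          # e_j > e_b: push toward 1
            vp = r * pb // num + 1
            y.append(1 if rand < sig + 1.0 / vp else 0)
        else:                                  # e_j < e_b: push toward 0
            vp = r * pb // (-num) + 1
            y.append(1 if rand < sig - 1.0 / vp else 0)
    return y
```

On the real-encoded individual of Example 1 in Section 3.3 with rand = 0.5, the sketch returns (1, 0, 1, 0), the optimal solution of that instance.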

3.3. An Example for the ITF

To facilitate discussion, we introduce the following example to illustrate the working principle of the ITF.
Example 1.
Consider a 0-1 KP instance with C = 115 and n = 4 items. The profit and weight of each item are given in the following table, where the break item is the third item (p_3 = 73, w_3 = 70). Parameter ϖ is computed utilizing Formula (16).
j      1    2    3    4
p_j    90   36   73   16
w_j    13   33   70   86
ϖ_j    1    46   ∞    1
To demonstrate the ITF workflow, consider a real encoding individual X_i^t = (−2.9, −3.1, 1.1, 0.9) and the random variable rand. For illustration, we fix rand = 0.5. We denote the standard sigmoid (13) and the improved transfer function (17) as Sig(X_i^t) and ITF(X_i^t), respectively; the transformation proceeds as follows.
As illustrated in Figure 1, the binary solution Y i t obtained by processing the real-encoded individual X i t through the traditional sigmoid transfer function yields a total profit of 89 and a total weight of 156 (exceeding the knapsack’s capacity of 115), constituting an infeasible solution. In contrast, the binary solution Y i t generated via the ITF achieves a total profit of 163 and a weight of 83, matching the optimal solution for Example 1. This demonstrates that even for real encoding individuals producing significantly underperforming solutions under sigmoid-based transformation, the ITF mechanism robustly maintains the solution quality while generating superior solutions.

3.4. The Theoretical Analysis of the ITF

In previous studies, researchers evaluating novel meta-heuristic algorithms typically conducted simulations on benchmark instances, employing computational results to demonstrate the effectiveness of the improvements. However, since meta-heuristic algorithms are stochastic, the results of individual simulation runs are not unique, rendering computational comparisons alone insufficient to robustly justify algorithmic enhancements. Unlike conventional approaches, the ITF is derived from the theoretical framework of B&B, providing stronger analytical guarantees.
Given an instance of the 0-1 KP composed of a set N of n items, the discretization strategies in previous research have first mapped the individual encoding X_i^t from the continuous search space R^n to the probability space P = (0, 1)^n via the sigmoid function and acquired the corresponding probability vector Z_i^t. Then, a binary encoding Y_i^t ∈ {0, 1}^n is generated by comparing each value in the probability vector Z_i^t with a randomly sampled number. The space mapping relationship is
R^n → (0, 1)^n → {0, 1}^n.  (18)
From Formula (17), the ITF narrows the probability mapping space for the j-th decision variable to either (1/ϖ_j, 1) or (0, 1 − 1/ϖ_j), depending on the value of ϖ_j. This contrasts with traditional TFs, which use the full interval (0, 1). We denote the reduced probability mapping space of the ITF as P′. To quantify this reduction, we employ the Lebesgue measure for theoretical comparison. The measure of the probability mapping space for the traditional transfer function is μ(P) = 1, whereas the measure per dimension for the ITF is 1 − 1/ϖ_j. Thus, the total measure of P′ is
μ(P′) = Π_{j=1}^{n} (1 − 1/ϖ_j).  (19)
Clearly, μ(P′) is less than μ(P). A reduced probability mapping space implies that the improved algorithm can eliminate unproductive search regions, thereby enhancing the search capability. Notably, the design of the ITF based on Theorem 1 guarantees that the optimal solutions remain within the search space. Furthermore, unlike parameter-dependent reduction methods, the ITF requires no additional parameter tuning, improving its generalization capability.
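As an illustration of Formula (19) (a sketch; the ϖ values used in the test are those of Example 1 in Section 3.3), note that any dimension with ϖ_j = 1 contributes a factor of 0: that variable is fixed outright, so the effective search collapses onto the remaining coordinates:

```python
def search_measure(varpi_values):
    """Formula (19): Lebesgue measure of the reduced probability space P'."""
    mu = 1.0
    for vp in varpi_values:
        # 1.0 / float('inf') == 0.0, so the break item contributes a factor of 1.
        mu *= 1.0 - 1.0 / vp
    return mu
```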

3.5. The Greedy Repair Operator (GRO)

Although the ITF significantly improves the search efficiency, solutions generated during iterations for constrained optimization problems often suffer from two common issues: constraint violations or suboptimal solutions that fail to saturate the constraint bounds. To address this challenge, meta-heuristic algorithms typically incorporate constraint-handling mechanisms. The current mainstream approaches include penalty function methods and repair strategies [20,49,50]. For the 0-1 KP, this study adopts a widely used greedy repair operator (GRO) to refine the solutions generated during iterations.
Given an individual Y_i^t processed by the ITF, the GRO operates in two phases: (1) repairing the infeasibility by eliminating items in ascending profit density order and (2) optimizing the feasibility by tentatively adding items in descending profit density order. We assume the item order reflects a non-increasing profit density (i.e., p_j/w_j ≥ p_{j+1}/w_{j+1}). The GRO procedure is formalized in Algorithm 1.
Algorithm 1: Greedy repair operator (GRO)
(Pseudocode reproduced as an image in the published version.)
Algorithm 1 first repairs infeasible solutions through Steps 2–7. Subsequently, Steps 8–12 refine the solution via greedy optimization. The operations maintain a time complexity of O ( n ) . Compared to the original POA, the IBPOA introduces only two additions: the ITF and the GRO. Crucially, both components exhibit a lower time complexity than that of the core POA framework, preserving the algorithm’s computational efficiency.
Notably, since the GRO operates directly on Y i t , it may induce encoding–decoding disruption. This phenomenon stems from the algorithm’s dual-space operation (e.g., continuous and binary encodings): if the GRO modifies Y i t without synchronously updating X i t , the search direction diverges from the actual optimization trajectory in the solution space. To mitigate this, the GRO adjusts X i t for variables altered in Y i t (via Lines 6 and 12). Specifically, when y i j t changes from 1 to 0, the corresponding x i j t is repositioned to 20% of its original distance from LB j , where the 0.2 scaling factor is an empirical value determined through experimentation. Conversely, when y i j t changes from 0 to 1, x i j t is similarly repositioned to 20% of its original distance from UB j .
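Since Algorithm 1 appears only as an image in the published version, the two phases plus the continuous-encoding synchronization just described can be sketched as follows (items are assumed sorted by non-increasing profit density; the 0.2 repositioning factor is the empirical value quoted above):

```python
def greedy_repair(y, x, weights, capacity, LB, UB, shrink=0.2):
    """Two-phase GRO sketch: drop low-density items until feasible, then
    greedily add high-density items; x is resynchronized for every flipped bit."""
    n = len(y)
    total_w = sum(w for bit, w in zip(y, weights) if bit)
    # Phase 1: remove items in ascending profit-density order (back to front).
    for j in range(n - 1, -1, -1):
        if total_w <= capacity:
            break
        if y[j]:
            y[j] = 0
            total_w -= weights[j]
            x[j] = LB[j] + shrink * (x[j] - LB[j])   # resync continuous code
    # Phase 2: add items in descending profit-density order if they fit.
    for j in range(n):
        if not y[j] and total_w + weights[j] <= capacity:
            y[j] = 1
            total_w += weights[j]
            x[j] = UB[j] - shrink * (UB[j] - x[j])   # resync continuous code
    return y, x
```

Applied to the infeasible sigmoid output of Example 1, (0, 0, 1, 1), the sketch removes item 4 and adds item 1, recovering (1, 0, 1, 0).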

3.6. A Complexity Analysis of the IBPOA

Building on the POA framework, the IBPOA introduces a greedy repair operator (GRO) to boost the search performance further. The workflow is illustrated in Figure 2, where black boxes denote the original POA operations and blue boxes highlight IBPOA-specific additions.
The computational complexity of the IBPOA is analyzed in this subsection. Pseudo-code appears in Algorithm 2, with the population size pop and maximum iterations MaxIter.
In the IBPOA, the time complexity of generating the initial solution is O ( pop × n ) (Algorithm 2, Line 1). The computational complexity of computing ϖ is O ( n ) (Algorithm 2, Line 2). Within the loop, the time complexity of the ITF is O ( pop × n ) (Algorithm 2, Line 4), and that of the GRO is O ( pop × n ) (Algorithm 2, Line 5). For the planetary moment computations, the time complexities of Formulas (5) to (7) are all O ( pop × n ) , while that of Formula (8) is O ( pop ) (Algorithm 2, Line 6). The time complexities of the global search operator and the local search operator are both O ( pop × n ) . Thus, the time complexity of the IBPOA is O ( MaxIter × pop × n ) .
To facilitate reproducibility and future research, the implementation of the IBPOA is publicly available at https://github.com/zhugemutian/Improved-Binary-Planet-Optimization-Algorithm, accessed on 14 May 2025.
Algorithm 2: Improved Binary Planet Optimization Algorithm (IBPOA)


4. The Experimental Results and Comparisons

To efficiently demonstrate the improved performance of the proposed algorithm, this section employs 15 benchmark 0-1 KP instances from [20], with the decision variables ranging from 100 to 1000. We compare the computational results of the IBPOA against those of seven algorithms reported in the literature [20], including both general and state-of-the-art methods. Additionally, since these benchmark instances lack known optimal solutions, we invoke CPLEX to compute their exact solutions.

4.1. Benchmark Instances

The 0-1 KP instances provided in [20] are categorized into three benchmark datasets: Benchmark Dataset-1, -2, and -3. Benchmark Dataset-1 comprises small-scale instances with 8 to 24 decision variables; although their optimal objective values run to seven or eight digits, these instances can be solved relatively easily using B&B. Benchmark Dataset-3 was excluded from our analysis because its data could not be accessed via the provided website on our network. Given these considerations, we selected Benchmark Dataset-2 as the primary benchmark for this study. Furthermore, as is evident from the ITF's design, it performs strongly on large-scale instances with widely distributed profit densities (i.e., p_j/w_j). To demonstrate this capability fully, we additionally incorporate the final subset of summed problem instances from Benchmark Dataset-2.
The first set includes three datasets, i.e., uncorrelated, weakly correlated, and strongly correlated instances. These are standard benchmark settings for the 0-1 KP [1]. Each dataset includes instances with sizes n = 100 , 200 , 300 , 500 , and 1000. The parameter configurations are as follows.
1. Uncorrelated instances (labeled 'kp_uc_n') have profit and weight coefficients defined as p_j ∈ Z[1, R] and w_j ∈ Z[1, R], where x ∈ Z[A, B] denotes that x is a random integer within the interval [A, B];
2. Weakly correlated instances (labeled 'kp_wc_n') have profit and weight coefficients defined as w_j ∈ Z[1 + R/10, R + R/10] and p_j ∈ Z[w_j − R/10, w_j + R/10];
3. Strongly correlated instances (labeled 'kp_sc_n') have profit and weight coefficients defined as w_j ∈ Z[1, R] and p_j = w_j + R/10.
For the above datasets, R = 100 . The profit/weight values and knapsack capacities for these instances are publicly available at https://github.com/whuph/KP_data, accessed on 14 May 2025.
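For readers who wish to generate comparable data, the three settings above can be sketched as follows (`generate_instance` is a hypothetical helper of ours; the capacity rule of half the total weight is our assumption, since the benchmark files fix their own capacities):

```python
import random

def generate_instance(n, kind, R=100, seed=0):
    """Generate a 0-1 KP instance in one of the three standard settings.

    kind: 'uc' (uncorrelated), 'wc' (weakly correlated),
          'sc' (strongly correlated).
    """
    rng = random.Random(seed)
    if kind == 'uc':
        w = [rng.randint(1, R) for _ in range(n)]
        p = [rng.randint(1, R) for _ in range(n)]
    elif kind == 'wc':
        w = [rng.randint(1 + R // 10, R + R // 10) for _ in range(n)]
        p = [rng.randint(wj - R // 10, wj + R // 10) for wj in w]
    elif kind == 'sc':
        w = [rng.randint(1, R) for _ in range(n)]
        p = [wj + R // 10 for wj in w]
    else:
        raise ValueError(kind)
    C = sum(w) // 2   # assumed capacity rule, not from the benchmark files
    return p, w, C
```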

4.2. The Computing Platform and Reference Algorithms

We perform all of the experimental computation on a Windows 11 machine with an Intel® Core™ i7-12700K CPU @ 3.60 GHz (20 logical processors) and 32 GB of DDR5 RAM (24 GB available). All algorithms are implemented in MATLAB R2017a.
Compared to the POA, the IBPOA primarily improves the transfer function and incorporates the GRO. Consequently, most of the empirical parameters from the POA remain applicable. Notably, since the decision variables x_j ∈ {0, 1}, the bounds UB_j and LB_j are set to 100 and −100, respectively. Additionally, the control search factor R_min is set to n/10. Following the benchmark algorithm configurations, the population size pop and the maximum iterations MaxIter are fixed at 30 and 200, respectively. For the differential evolution algorithms, the maximum number of fitness evaluations is 6000.
The literature [20] provides seven algorithms for solving these instances. The seven comparison algorithms can be divided into three categories: two discrete versions of PSO, namely Binary Particle Swarm Optimization (BPSO) [31] and Modified Binary Particle Swarm Optimization (MBPSO) [51]; binary versions of four different differential evolution algorithms, binary learning differential evolution (BLDE) [52], adaptive quantum-inspired differential evolution (AQDE) [53], Dichotomous Binary Differential Evolution (DBDE) [54], and novel binary Differential Evolution (NBin-DE) [55]; and the binary reptile search algorithm (BinRSA) [20]. The relevant parameter settings for these algorithms can be found in [20,54]. Notably, following the computational results of [54], Peng et al. incorporated the greedy repair operator from DBDE into BPSO, MBPSO, and BLDE, while replacing the random repair operator in AQDE with DBDE's greedy repair operator. Since this modification enhanced the performance of the comparison algorithms, we adopted the same approach. To facilitate a comparative analysis by researchers, implementations of all of the aforementioned algorithms are publicly available at https://github.com/zhugemutian/Reproduce_algorithms, accessed on 14 May 2025.
To mitigate stochastic fluctuations, the IBPOA was executed 30 times per instance, with a subsequent statistical analysis of the best solutions obtained across all runs. Additionally, neither [54] nor [20] provides the optimal solutions for the benchmark instances. To bridge this gap, we utilize YALMIP [56] to invoke CPLEX (version 12.8) to derive exact solutions, thereby establishing and reporting provably optimal solutions for these instances.
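Where a MILP solver such as CPLEX is unavailable, the optima of such integer-weight instances can also be cross-checked with textbook dynamic programming (a sketch of the standard O(n·C) recursion over capacities, not the YALMIP/CPLEX workflow used here):

```python
def knapsack_dp(p, w, C):
    """Exact 0-1 knapsack optimum via dynamic programming.

    best[c] holds the maximum profit achievable with capacity c using
    the items processed so far.
    """
    best = [0] * (C + 1)
    for pj, wj in zip(p, w):
        # iterate capacities downward so each item is used at most once
        for c in range(C, wj - 1, -1):
            cand = best[c - wj] + pj
            if cand > best[c]:
                best[c] = cand
    return best[C]
```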

4.3. Computational Results and Comparisons

Table 1, Table 2 and Table 3 present the computational results for the IBPOA and the comparison algorithms. The first row lists the test instance names, and the second row gives their optimal values obtained via CPLEX. Based on 30 independent runs, the subsequent rows report, for each algorithm, the best solution value ( f best ), the average value ( f avg ), the standard deviation ( std ), and the average runtime ( t avg ). Values matching the CPLEX-derived optimal solution are boldfaced.
Table 1. Computational results of the IBPOA against those of seven comparison algorithms on kp_uc_n instances.
| Name | Results | kp_uc_100 | kp_uc_200 | kp_uc_300 | kp_uc_500 | kp_uc_1000 |
|---|---|---|---|---|---|---|
| Opt | | 1807 | 3403 | 5444 | 9495 | 18,844 |
| BPSO | f best | **1807** | 3401 | 5443 | 9490 | 18,838 |
| | f avg | 1797.77 | 3355.40 | 5394.70 | 9390.90 | 18,694.97 |
| | std | 18.12 | 44.80 | 41.08 | 92.08 | 101.24 |
| | t avg | 0.28 | 0.56 | 0.86 | 1.58 | 4.36 |
| MBPSO | f best | **1807** | 3363 | 5322 | 9197 | 17,992 |
| | f avg | 1789.27 | 3310.13 | 5250.93 | 9038.43 | 17,769.93 |
| | std | 22.57 | 29.35 | 40.35 | 69.39 | 117.55 |
| | t avg | 0.28 | 0.60 | 0.94 | 1.74 | 4.87 |
| BLDE | f best | **1807** | **3403** | **5444** | 9463 | 18,041 |
| | f avg | 1806.20 | 3393.03 | 5426.50 | 9414 | 17,830.50 |
| | std | 3.04 | 7.69 | 11.63 | 29.35 | 105.91 |
| | t avg | 0.15 | 0.29 | 0.44 | 0.76 | 1.64 |
| AQDE | f best | **1807** | 3390 | 5365 | 9128 | 17,394 |
| | f avg | 1806.07 | 3353.03 | 5270 | 8957.73 | 17,071.27 |
| | std | 2.91 | 20.01 | 40.03 | 94.07 | 127.28 |
| | t avg | 0.31 | 0.61 | 0.93 | 1.70 | 4.61 |
| DBDE | f best | **1807** | 3359 | 5282 | 9193 | 18,139 |
| | f avg | 1783.23 | 3289.03 | 5144.17 | 9069.63 | 17,722.43 |
| | std | 21.80 | 33.81 | 67.32 | 73.91 | 127.66 |
| | t avg | 0.15 | 0.21 | 0.26 | 0.31 | 0.46 |
| NBin-DE | f best | **1807** | **3403** | 5435 | 9455 | 18,667 |
| | f avg | **1807** | 3386.87 | 5388.17 | 9357.73 | 18,547.90 |
| | std | 0 | 11.92 | 32.12 | 46.50 | 67.03 |
| | t avg | 0.59 | 1.16 | 1.76 | 3.23 | 8.36 |
| BinRSA | f best | **1807** | **3403** | **5444** | **9495** | **18,844** |
| | f avg | **1807** | 3402.53 | 5440.57 | 9491.97 | **18,844** |
| | std | 0 | 1.31 | 2.01 | 2.37 | 0 |
| | t avg | 1.78 | 3.74 | 6.60 | 13.24 | 40.40 |
| IBPOA | f best | **1807** | **3403** | **5444** | **9495** | **18,844** |
| | f avg | **1807** | 3402.03 | 5443.40 | 9492.77 | **18,844** |
| | std | 0 | 1.97 | 0.97 | 2.73 | 0 |
| | t avg | 0.41 | 0.93 | 1.52 | 2.88 | 7.96 |
Table 2. Computational results of the IBPOA against those of seven comparison algorithms on kp_wc_n instances.
| Name | Results | kp_wc_100 | kp_wc_200 | kp_wc_300 | kp_wc_500 | kp_wc_1000 |
|---|---|---|---|---|---|---|
| Opt | | 659 | 1332 | 1963 | 3250 | 6482 |
| BPSO | f best | **659** | **1332** | 1962 | 3249 | 6480 |
| | f avg | 655.50 | 1326.30 | 1954.80 | 3233.16 | 6468.70 |
| | std | 2.36 | 6.28 | 7.79 | 14.79 | 7.34 |
| | t avg | 0.32 | 0.62 | 0.98 | 1.84 | 5.05 |
| MBPSO | f best | **659** | **1332** | 1957 | 3237 | 6425 |
| | f avg | 656.73 | 1324.53 | 1948.37 | 3221.27 | 6402.13 |
| | std | 1.74 | 3.97 | 4.55 | 6.53 | 7.82 |
| | t avg | 0.32 | 0.65 | 1.04 | 2.02 | 5.46 |
| BLDE | f best | **659** | **1332** | 1962 | 3244 | 6404 |
| | f avg | 657.60 | 1326.90 | 1954.37 | 3233.03 | 6385.27 |
| | std | 1.40 | 4.14 | 5.05 | 6.85 | 9.85 |
| | t avg | 0.18 | 0.32 | 0.52 | 0.92 | 1.95 |
| AQDE | f best | 658 | 1326 | 1957 | 3222 | 6368 |
| | f avg | 657.83 | 1320.57 | 1943.60 | 3204.40 | 6344.43 |
| | std | 0.53 | 3.25 | 6.51 | 7.50 | 13.71 |
| | t avg | 0.36 | 0.68 | 1.07 | 2.00 | 5.35 |
| DBDE | f best | **659** | **1332** | 1961 | 3241 | 6421 |
| | f avg | 658.43 | 1329.17 | 1954.27 | 3225.03 | 6401.83 |
| | std | 0.68 | 2.65 | 3.93 | 6.77 | 10.70 |
| | t avg | 0.15 | 0.20 | 0.25 | 0.35 | 0.53 |
| NBin-DE | f best | **659** | 1329 | 1962 | 3249 | 6473 |
| | f avg | 658.07 | 1327.73 | 1958.37 | 3240.27 | 6461.53 |
| | std | 0.25 | 1.48 | 2.63 | 4.43 | 6.83 |
| | t avg | 0.63 | 1.21 | 1.86 | 3.40 | 8.64 |
| BinRSA | f best | **659** | **1332** | **1963** | **3250** | **6482** |
| | f avg | 658.40 | **1332** | **1963** | **3250** | 6481.87 |
| | std | 0.50 | 0 | 0 | 0 | 0.35 |
| | t avg | 1.56 | 3.47 | 5.65 | 10.87 | 33.42 |
| IBPOA | f best | **659** | **1332** | **1963** | **3250** | **6482** |
| | f avg | 658.83 | **1332** | **1963** | **3250** | **6482** |
| | std | 0.37 | 0 | 0 | 0 | 0 |
| | t avg | 0.45 | 0.87 | 1.49 | 2.99 | 8.63 |
Table 3. Computational results of the IBPOA against those of seven comparison algorithms on kp_sc_n instances.
| Name | Results | kp_sc_100 | kp_sc_200 | kp_sc_300 | kp_sc_500 | kp_sc_1000 |
|---|---|---|---|---|---|---|
| Opt | | 813 | 1631 | 2433 | 4078 | 8228 |
| BPSO | f best | **813** | **1631** | **2433** | **4078** | **8228** |
| | f avg | 804.33 | 1622.70 | 2428.97 | 4063.57 | 8201.93 |
| | std | 7.91 | 5.86 | 4.75 | 7.53 | 14.73 |
| | t avg | 0.26 | 0.51 | 0.81 | 1.48 | 4.01 |
| MBPSO | f best | **813** | 1621 | 2418 | 4008 | 8075 |
| | f avg | 806.43 | 1606.47 | 2393.77 | 3988.53 | 8030.87 |
| | std | 4.92 | 7.30 | 9.38 | 9.27 | 18.12 |
| | t avg | 0.26 | 0.55 | 0.89 | 1.67 | 4.54 |
| BLDE | f best | **813** | **1631** | 2423 | 4028 | 7968 |
| | f avg | 809.83 | 1620.87 | 2415.40 | 4000.93 | 7918.10 |
| | std | 4.24 | 5.90 | 5.81 | 14.49 | 26.74 |
| | t avg | 0.15 | 0.27 | 0.42 | 0.72 | 1.50 |
| AQDE | f best | 812 | 1628 | 2401 | 3985 | 7914 |
| | f avg | 810.83 | 1606.87 | 2388.10 | 3950.50 | 7849.73 |
| | std | 2.25 | 8.49 | 8.89 | 17.44 | 28.68 |
| | t avg | 0.29 | 0.59 | 0.93 | 1.58 | 4.24 |
| DBDE | f best | **813** | 1629 | 2421 | 4017 | 8060 |
| | f avg | 812.13 | 1615.70 | 2401.63 | 3992.13 | 8008.20 |
| | std | 1.01 | 6.57 | 8.84 | 14.14 | 19.63 |
| | t avg | 0.13 | 0.18 | 0.23 | 0.28 | 0.40 |
| NBin-DE | f best | 812 | 1630 | 2422 | 4012 | 8016 |
| | f avg | 810.50 | 1612.47 | 2397.80 | 3974.97 | 7975.63 |
| | std | 3.20 | 9.09 | 11.17 | 18.21 | 27.88 |
| | t avg | 0.59 | 1.11 | 1.72 | 3.15 | 8.08 |
| BinRSA | f best | **813** | **1631** | **2433** | **4078** | **8228** |
| | f avg | 812.57 | **1631** | **2433** | **4078** | **8228** |
| | std | 1.85 | 0 | 0 | 0 | 0 |
| | t avg | 1.71 | 3.58 | 6.11 | 13.56 | 52.13 |
| IBPOA | f best | **813** | **1631** | **2433** | **4078** | **8228** |
| | f avg | **813** | **1631** | **2433** | **4078** | 8226.60 |
| | std | 0 | 0 | 0 | 0 | 2.24 |
| | t avg | 0.46 | 1.02 | 1.66 | 3.19 | 8.28 |
As shown in Table 1, Table 2 and Table 3, the IBPOA is capable of obtaining the optimal solution to the problem and demonstrates strong solving and searching abilities. The introduction of the GRO ensures that the objective function value of the solutions is typically very close to the optimal solution.
As the IBPOA is a stochastic algorithm, we employ the non-parametric Wilcoxon’s rank-sum test to assess the statistical significance of the differences in the f best values between the IBPOA and each compared algorithm. Table 4 reports the p-values for each algorithm relative to that of the IBPOA across problem instances. When p 0.05 , the results from 30 independent runs indicate that the IBPOA and the compared algorithm do not share the same distribution. In such cases, if the f avg value of the compared algorithm is superior, then that algorithm is superior; in the opposite case, the compared algorithm is considered inferior. For p > 0.05 , the performance of the compared algorithm and the IBPOA is considered comparable. Cases where the Wilcoxon’s rank-sum test yields NaN indicate identical results. These symbolic representations (+, −, ≈, =) facilitate the comparative analysis. The resulting statistical comparison between the IBPOA and the compared algorithms is presented in Table 5.
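The classification logic described above can be reproduced as follows (a sketch of ours; the p-value uses the normal approximation without tie correction, a simplification of the exact rank-sum test, and '~' stands in for the ≈ symbol):

```python
import math

def ranksum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation
    (no tie correction -- a simplification of the exact test)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    # assign each value the average rank of its tie group
    rank, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2.0
        i = j
    W = sum(rank[v] for v in x)                 # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))   # = 2 * (1 - Phi(|z|))

def classify(ibpoa_runs, other_runs, alpha=0.05):
    """Map two 30-run samples to the symbols used in Tables 4 and 5."""
    if sorted(ibpoa_runs) == sorted(other_runs):
        return '='                              # identical results (NaN case)
    if ranksum_p(ibpoa_runs, other_runs) > alpha:
        return '~'                              # statistically comparable
    mean = lambda s: sum(s) / len(s)
    return '+' if mean(other_runs) > mean(ibpoa_runs) else '-'
```

In practice a library routine such as SciPy's `ranksums` would replace the hand-rolled p-value.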
Table 4. The results of the Wilcoxon’s rank-sum test for the IBPOA against those for seven comparison algorithms over all benchmark instances.
| Algorithms vs. IBPOA | BPSO | MBPSO | BLDE | AQDE | DBDE | NBin-DE | BinRSA |
|---|---|---|---|---|---|---|---|
| kp_uc_100 | 5.58 × 10^−3 | 1.27 × 10^−5 | 1.61 × 10^−1 | 8.15 × 10^−2 | 1.30 × 10^−7 | NaN | NaN |
| kp_uc_200 | 2.64 × 10^−11 | 1.09 × 10^−11 | 1.33 × 10^−8 | 1.09 × 10^−11 | 1.09 × 10^−11 | 4.10 × 10^−10 | 0.2086 |
| kp_uc_300 | 3.12 × 10^−11 | 1.49 × 10^−11 | 1.04 × 10^−9 | 1.49 × 10^−11 | 1.49 × 10^−11 | 1.49 × 10^−11 | 5.80 × 10^−8 |
| kp_uc_500 | 2.62 × 10^−11 | 9.37 × 10^−12 | 9.32 × 10^−12 | 9.37 × 10^−12 | 9.36 × 10^−12 | 9.36 × 10^−12 | 0.6047 |
| kp_uc_1000 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | 1.21 × 10^−12 | NaN |
| kp_wc_100 | 6.10 × 10^−10 | 5.11 × 10^−8 | 2.70 × 10^−5 | 1.45 × 10^−10 | 6.12 × 10^−3 | 3.42 × 10^−9 | 6.40 × 10^−4 |
| kp_wc_200 | 1.26 × 10^−5 | 1.61 × 10^−11 | 1.91 × 10^−9 | 1.17 × 10^−12 | 3.38 × 10^−7 | 8.71 × 10^−13 | NaN |
| kp_wc_300 | 1.16 × 10^−12 | 1.15 × 10^−12 | 1.18 × 10^−12 | 1.20 × 10^−12 | 1.14 × 10^−12 | 1.09 × 10^−12 | NaN |
| kp_wc_500 | 1.20 × 10^−12 | 1.20 × 10^−12 | 1.19 × 10^−12 | 1.19 × 10^−12 | 1.19 × 10^−12 | 1.16 × 10^−12 | NaN |
| kp_wc_1000 | 1.18 × 10^−12 | 1.20 × 10^−12 | 1.20 × 10^−12 | 1.20 × 10^−12 | 1.20 × 10^−12 | 1.19 × 10^−12 | 4.18 × 10^−2 |
| kp_sc_100 | 4.09 × 10^−12 | 5.52 × 10^−11 | 3.18 × 10^−7 | 5.81 × 10^−13 | 5.01 × 10^−6 | 3.19 × 10^−13 | 0.0815 |
| kp_sc_200 | 4.40 × 10^−12 | 1.18 × 10^−12 | 3.12 × 10^−10 | 1.16 × 10^−12 | 1.13 × 10^−12 | 1.11 × 10^−12 | NaN |
| kp_sc_300 | 1.64 × 10^−8 | 1.19 × 10^−12 | 7.71 × 10^−13 | 1.20 × 10^−12 | 1.16 × 10^−12 | 1.13 × 10^−12 | NaN |
| kp_sc_500 | 4.47 × 10^−12 | 1.20 × 10^−12 | 1.16 × 10^−12 | 1.21 × 10^−12 | 1.20 × 10^−12 | 1.18 × 10^−12 | NaN |
| kp_sc_1000 | 5.11 × 10^−10 | 1.76 × 10^−11 | 1.76 × 10^−11 | 1.77 × 10^−11 | 1.76 × 10^−11 | 1.76 × 10^−11 | 6.57 × 10^−5 |
Table 5. A summary of the Wilcoxon’s rank-sum test results for the IBPOA against those of seven comparison algorithms over all benchmark instances.
| Algorithms vs. IBPOA | kp_uc_n (+ / ≈ / − / =) | kp_wc_n (+ / ≈ / − / =) | kp_sc_n (+ / ≈ / − / =) |
|---|---|---|---|
| BPSO | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| MBPSO | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BLDE | 0 / 1 / 4 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| AQDE | 0 / 1 / 4 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| DBDE | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| NBin-DE | 0 / 0 / 4 / 1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BinRSA | 0 / 2 / 1 / 2 | 0 / 0 / 2 / 3 | 1 / 1 / 0 / 3 |
Table 4 and Table 5 demonstrate through the Wilcoxon’s rank-sum analysis that the IBPOA exhibits statistically significant advantages when solving the three categories of instances (kp_uc_n, kp_wc_n, and kp_sc_n). Compared to the seven other algorithms (BPSO, MBPSO, BLDE, etc.), the IBPOA shows an enhanced performance for most test instances, with the binary reptile search algorithm (BinRSA) being the sole exception. The BinRSA emerges as the only competitive alternative, outperforming the IBPOA on the kp_sc_1000 instance and achieving equivalent results for eight problem instances. However, its average computation time is 4.16 times longer than that of the IBPOA, substantially limiting its practical applicability.
Collectively, the IBPOA achieves an optimal balance between computational efficiency and solution quality, making it the preferred choice for such optimization problems. The BinRSA remains viable only in time-insensitive scenarios where the IBPOA underperforms. The remaining algorithms require further improvements to enhance their competitiveness.
To evaluate the convergence performance of the algorithm, we introduce an evaluation metric, denoted as Evaluate^t, which assesses solution quality by measuring the difference between the best individual of the t-th generation and the optimal solution. When items are ranked by their profit density in descending order, the disagreements between the best solution and the optimal solution tend to cluster around the break item b. Consequently, each position j is weighted by its distance to the break item. Evaluate^t is calculated as follows.
Evaluate^t = Σ_{j=1}^{n} ( |y_{ij}^t − y_j^*| × |b − j| ), where i = argmax_{1 ≤ k ≤ pop} f(Y_k^t)
where y j * is the j-th variable value of the optimal solution. Additionally, since the optimal solution to the problem is not necessarily unique, we set Evaluate t = 0 if the objective function values of the best solution and the optimal solution coincide.
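The metric can be computed as follows (a sketch; the position weighting by |b − j| and the interface reflect our reading of the definition):

```python
def evaluate_metric(y_best, y_opt, b, f_best=None, f_opt=None):
    """Evaluate^t: position-weighted disagreement between the
    generation-best solution y_best and a known optimum y_opt, with
    items assumed pre-sorted by profit density and b the break item's
    1-based index.  Returns 0 when the objective values coincide,
    since the optimum need not be unique."""
    if f_best is not None and f_best == f_opt:
        return 0
    return sum(abs(yb - yo) * abs(b - j)
               for j, (yb, yo) in enumerate(zip(y_best, y_opt), start=1))
```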
Based on the computational results of the IBPOA, the algorithm convergence and the evaluation curves are presented in Figure 3. The three columns correspond to kp_uc_n, kp_wc_n, and kp_sc_n, where n = 100 , 200 , 300 , 500 , and 1000, respectively. The horizontal axis in all of the figures represents the iteration number. As shown in the figures, the objective function values obtained by the greedy algorithm demonstrate minimal deviation from the optimal solutions due to the inherent characteristics of the instances, resulting in relatively stable Best_Fitness values. However, the evaluation curves exhibit rapid convergence to zero with increasing iterations across most test cases. This pattern substantiates that the IBPOA can efficiently identify the optimal solutions for 0-1 knapsack problems, demonstrating both a robust global search capability and an effective local search performance.

4.4. An Analysis of the Population Diversity

In meta-heuristic algorithm iterations, the population diversity serves as a critical metric for evaluating the search capabilities, providing quantitative evidence of the ability to escape local optima. Typically, algorithms with strong population diversity exhibit high initial diversity levels that gradually decrease during optimization. To rigorously evaluate the performance of the IBPOA, we maintain a fixed population size of 30 and introduce the population diversity metric pop _ Div , defined as follows:
pop_Div = Σ_{i=1}^{pop} Σ_{j=1}^{n} ( y_{ij}^t − ȳ_j^t )²
ȳ_j^t = (1/pop) Σ_{i=1}^{pop} y_{ij}^t
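This diversity measure amounts to a few lines of code (a sketch over a 0/1 population stored as a list of lists):

```python
def pop_diversity(Y):
    """pop_Div: summed squared deviation of each bit from its column mean."""
    pop, n = len(Y), len(Y[0])
    ybar = [sum(Y[i][j] for i in range(pop)) / pop for j in range(n)]
    return sum((Y[i][j] - ybar[j]) ** 2
               for i in range(pop) for j in range(n))
```

A fully homogeneous population scores zero, and the score grows as individuals disagree on more bits.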
The computational results are illustrated in Figure 4. As shown, the population diversity remains high during the initial iterations of the IBPOA. Notably, for the kp_wc_200 instance, where the greedy solution coincides with the global optimum, the population diversity metric remains zero throughout the optimization process. In kp_uc_n instances, due to significant disparities between the near-optimal solutions and the true optimum, the algorithm rapidly converges to the optimal solution. The local search operator subsequently exerts a minimal influence on individuals. Furthermore, under the moment of gravitation mechanism, individuals approaching the optimal solution are swiftly captured, leading to homogenization of the encodings and asymptotic convergence of the population diversity to zero. For the other instance categories, the population diversity progressively approaches zero with increasing iterations. Distinct from conventional meta-heuristic algorithms, the dual-operator framework of the IBPOA induces multi-modal diversity patterns contingent on the operator selection dynamics during the search process. These results align with theoretical expectations and validate the robust search capabilities of the algorithm.

4.5. The Exploration and Exploitation Analysis

In addition to population diversity metrics, we introduce exploration and exploitation metrics to evaluate the algorithmic performance. Typically, algorithms demonstrating balanced exploration and exploitation capabilities exhibit behavioral trajectories where exploration dominates the initial phases, transitions through dynamic equilibrium in the intermediate stages, and culminates in the predominance of exploitation during the final iterations. Notably, algorithms employing multiple search operators may exhibit multi-modal patterns in these trajectories. The quantification methodology for exploration and exploitation is defined as follows:
Exploration(%) = ( Div^t / Div_max ) × 100
Exploitation(%) = ( |Div^t − Div_max| / Div_max ) × 100
Div^t = (1/n) Σ_{j=1}^{n} ( (1/pop) Σ_{i=1}^{pop} | ȳ_j^t − y_{ij}^t | )
where Div_max denotes the maximum value of Div^t observed over all iterations.
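Both percentage curves follow directly from the per-iteration Div^t values (a sketch; taking Div_max as the maximum diversity observed over the run is the usual convention for this metric):

```python
def div_t(Y):
    """Div^t: mean absolute deviation of bits from their column means."""
    pop, n = len(Y), len(Y[0])
    total = 0.0
    for j in range(n):
        ybar = sum(Y[i][j] for i in range(pop)) / pop
        total += sum(abs(ybar - Y[i][j]) for i in range(pop)) / pop
    return total / n

def exploration_exploitation(div_history):
    """Percent curves from the per-iteration Div^t values of one run."""
    div_max = max(div_history)
    explo = [100.0 * d / div_max for d in div_history]
    expt = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return explo, expt
```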
The computational results are presented in Figure 5. Due to the inherent complexity of the instances, exploration and exploitation exhibit rapid dynamic interplay, followed by swift convergence during the initial iterations. Owing to the dual-operator framework, the algorithm maintains effective mechanisms for escaping from local optima, even as exploitation becomes predominant, as evidenced by the recurrent intersections between the exploration and exploitation curves. This behavior mitigates premature convergence and demonstrates robust adaptability.

4.6. An Ablation Study on the Transfer Functions

In Section 3.4, we analyzed the probability mapping spaces of both TFs by introducing the Lebesgue Measure. The theoretical results demonstrate that the ITF exhibits a smaller probability mapping space while ensuring the optimal solutions are preserved. To validate the superior performance of the ITF further, we not only provide theoretical guarantees but also conduct simulation experiments on 15 datasets (e.g., ‘kp_wc_n’). These empirical results confirm the efficacy of the improvements in the ITF.
To demonstrate the better performance of the ITF, we extend the POA framework by hybridizing the following transfer functions for algorithmic comparison. These include S-shaped [38], V-shaped [38], U-shaped [40], and taper-shaped [39] functions, proposed by Mirjalili and collaborators, along with the Z-shaped function introduced by Guo et al. [41]. Mathematical expressions of these TFs appear in Table 6, where column 1 lists the names of various transfer functions, columns 2 and 4 provide function symbols, and columns 3 and 5 present the corresponding mathematical expressions.
Table 6. Summary of compared transfer functions.
| Name | Label | Expression | Label | Expression |
|---|---|---|---|---|
| S-shaped [38] | S1 | TF1(x) = 1/(1 + e^(−2x)) | S3 | TF3(x) = 1/(1 + e^(−x/2)) |
| | S2 | TF2(x) = 1/(1 + e^(−x)) | S4 | TF4(x) = 1/(1 + e^(−x/3)) |
| V-shaped [38] | V1 | TF5(x) = \|erf((√π/2)x)\| | V3 | TF7(x) = \|x/√(1 + x²)\| |
| | V2 | TF6(x) = \|tanh(x)\| | V4 | TF8(x) = \|(2/π)arctan((π/2)x)\| |
| U-shaped [40] | U1 | TF9(x) = \|x\|^1.5 | U3 | TF11(x) = \|x\|³ |
| | U2 | TF10(x) = \|x\|² | U4 | TF12(x) = \|x\|⁴ |
| Taper-shaped [39] | T1 | TF13(x) = √\|x\|/√A | T2 | TF15(x) = ∛\|x\|/∛A |
| | T3 | TF14(x) = \|x\|/A | T4 | TF16(x) = ⁴√\|x\|/⁴√A |
| Z-shaped [41] | Z1 | TF17(x) = √(1 − 2^x) | Z3 | TF19(x) = √(1 − 8^x) |
| | Z2 | TF18(x) = √(1 − 5^x) | Z4 | TF20(x) = √(1 − 20^x) |
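To illustrate how such a TF binarizes a continuous encoding, the following sketch applies the standard stochastic thresholding rule to the S2 and V2 functions (V-shaped TFs are often paired with a bit-flip rule in the literature; the simple set rule below is for illustration only):

```python
import math
import random

def s2(x):
    """S-shaped TF2: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def v2(x):
    """V-shaped TF6: |tanh(x)|."""
    return abs(math.tanh(x))

def binarize(x_row, tf, rng):
    """Set bit j to 1 with probability TF(x_j) (stochastic thresholding)."""
    return [1 if rng.random() < tf(xj) else 0 for xj in x_row]
```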
To standardize the discussion, we adopt a consistent naming convention for the POA hybridized with different TFs. For example, the POA integrated with transfer function S1 is denoted as BPOA-S1, with the other hybrid variants following the same naming scheme. Furthermore, to minimize the impact of the initial solutions, we fix the initial solution configuration as follows:
X^0 = (ζ)_{pop × n}, where ζ = 5.
During execution, if a solution Y i t (corresponding to an individual encoding X i t via TFs) violates the constraint g ( Y i t ) C , Y i t is set to the zero vector 0 n and X i t is reset to its initial state. The maximum iteration count is fixed at MaxIter = 200 .
To validate the efficacy of the ITF, we conduct 30 independent runs comparing the BPOA-ITF against 20 BPOA variants hybridized with different TFs across three categories of instances. Detailed computational results are available at https://github.com/zhugemutian/Improved-Binary-Planet-Optimization-Algorithm, accessed on 14 May 2025.
Table 7 categorizes the performance outcomes relative to those of BPOA-ITF as superior, equivalent, inferior, or identical. Consistent with the prior methodology, these outcomes are symbolically represented as (+, −, ≈, =) to facilitate the comparative analysis. The statistical analysis reveals that the ITF demonstrates significant advantages ( p < 0.05 ) over all 20 transfer functions across three categories of instances based on 30 independent runs.
Table 7. Summary of Wilcoxon’s rank-sum test results: BPOA-ITF versus BPOA-TFs.
| BPOA-TFs vs. BPOA-ITF | kp_uc_n (+ / ≈ / − / =) | kp_wc_n (+ / ≈ / − / =) | kp_sc_n (+ / ≈ / − / =) |
|---|---|---|---|
| BPOA-S1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-S2 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-S3 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-S4 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-V1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-V2 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-V3 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-V4 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-U1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-U2 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-U3 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-U4 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-T1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-T2 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-T3 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-T4 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-Z1 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-Z2 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-Z3 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |
| BPOA-Z4 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 | 0 / 0 / 5 / 0 |

5. Conclusions

The Planet Optimization Algorithm (POA), equipped with dual search operators, demonstrates a superior performance in escaping from local optima across 23 well-known test functions, 38 IEEE CEC benchmark test functions (CEC 2017, CEC 2019), and 3 real engineering problems in continuous optimization. Given the absence of binary POA variants, we propose the Improved Binary Planet Optimization Algorithm (IBPOA) by integrating theoretical insights from branch-and-bound (B&B) and reduction algorithms with a hybrid greedy repair operator (GRO). The IBPOA leverages B&B theory to enhance the sigmoid transfer function. By ordering the items by profit density and establishing a symmetric hyperplane defined by the break item (denoted b) and the origin, it adjusts the selection probabilities symmetrically based on the items' distances to this hyperplane, yielding the improved transfer function (ITF). Computational experiments on three categories of instances confirm that the IBPOA exhibits an exceptional performance in its convergence speed, population diversity, and exploration–exploitation balance.
Notably, the ITF demonstrates a superior performance on 15 benchmark instances compared to that of the other TFs. However, its reliance on linear relaxation and B&B theory introduces generalization limitations. When hybridized with algorithms solving other NP-hard problems, the ITF’s effectiveness varies due to problem-specific characteristics and linear relaxation methodologies. By leveraging linear relaxation and the theoretical results from B&B, a widely applicable transfer function for discrete optimization problems represents a promising research direction.
Future research directions are threefold. First, regarding the algorithmic mechanisms, it should be investigated why the moment of gravitation outperforms direct gravitation in operator selection. Second, regarding theoretical bounds, it should be determined whether empirically tuned parameters (e.g., R_min) possess theoretical upper bounds. Third, regarding the application scope, the IBPOA could be extended to combinatorial variants (e.g., set-union knapsack problems, discounted knapsack problems, and multi-dimensional knapsack problems).

Funding

This research received no external funding.

Data Availability Statement

The data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
POA: planet optimization algorithm
IBPOA: improved binary planet optimization algorithm
GRO: greedy repair operator
0-1 KP: 0-1 knapsack problem
B&B: branch-and-bound
GA: genetic algorithm
PSO: particle swarm optimization

References

  1. Kellerer, H.; Pferschy, U.; Pisinger, D. Knapsack Problems; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  2. Cook, S.A. The complexity of theorem-proving procedures. In Proceedings of the 3rd Annual ACM Symposium on Theory of Computing (STOC 1971), Shaker Heights, OH, USA, 3–5 May 1971; pp. 151–158. [Google Scholar]
  3. Karp, R.M. Reducibility among Combinatorial Problems. In Complexity of Computer Computations; Miller, R.E., Thatcher, J.W., Bohlinger, J.D., Eds.; Springer: Boston, MA, USA, 1972; pp. 85–103. [Google Scholar]
  4. Garey, M.R.; Johnson, D.S. Computers and Intractability: A Guide to the Theory of NP-Completeness; Freeman: San Francisco, CA, USA, 1979. [Google Scholar]
  5. Sulaiman, A.; Sadiq, M.; Mehmood, Y.; Akram, M.; Ali, G.A. Fitness-Based Acceleration Coefficients Binary Particle Swarm Optimization (FACBPSO) to Solve the Discounted Knapsack Problem. Symmetry 2022, 14, 1208. [Google Scholar] [CrossRef]
  6. Beliakov, G. Knapsack problems with dependencies through non-additive measures and Choquet integral. Eur. J. Oper. Res. 2022, 301, 277–286. [Google Scholar] [CrossRef]
  7. Tavana, M.; Keramatpour, M.; Santos-Arteaga, F.J.; Ghorbaniane, E. A fuzzy hybrid project portfolio selection method using Data Envelopment Analysis, TOPSIS and Integer Programming. Expert Syst. Appl. 2015, 42, 8432–8444. [Google Scholar] [CrossRef]
  8. Sharafi, P.; Teh, L.H.; Hadi, M.N.S. Conceptual design optimization of rectilinear building frames: A Knapsack problem approach. Eng. Optimiz. 2014, 47, 1303–1323. [Google Scholar] [CrossRef]
  9. Maher, S.J. A novel passenger recovery approach for the integrated airline recovery problem. Comput. Oper. Res. 2015, 57, 123–137. [Google Scholar] [CrossRef]
  10. Wang, S.; Cui, W.; Chu, F.; Yu, J. The interval min–max regret knapsack packing-delivery problem. Int. J. Prod. Res. 2020, 59, 5661–5677. [Google Scholar] [CrossRef]
  11. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  12. Land, A.H.; Doig, A.G. An automatic method of solving discrete programming problem. Econometrica 1960, 28, 497–520. [Google Scholar] [CrossRef]
  13. Toth, P. Dynamic programming algorithms for the Zero-One Knapsack Problem. Computing 1980, 25, 29–45. [Google Scholar] [CrossRef]
  14. Holland, J.H. Adaptation in Natural and Artificial Systems; MIT Press: Cambridge, MA, USA, 1975. [Google Scholar]
  15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  16. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inf. Sci. 2016, 326, 1–24. [Google Scholar] [CrossRef]
  17. Nikolić, M.; Teodorović, D. Transit network design by bee colony optimization. Expert Syst. Appl. 2013, 40, 5945–5955. [Google Scholar] [CrossRef]
  18. Thanh, S.-T.; Minh, H.-L.; Magd, A.W.; Thanh, C.-L. An efficient Planet Optimization Algorithm for solving engineering problems. Sci. Rep. 2022, 12, 8362. [Google Scholar] [CrossRef] [PubMed]
  19. Yang, Y. An upper bound of the mutation probability in the genetic algorithm for general 0-1 knapsack problem. arXiv 2024, arXiv:2403.11307. [Google Scholar]
  20. Ervural, B.; Hakli, H. A binary reptile search algorithm based on transfer functions with a new stochastic repair method for 0-1 knapsack problems. Comput. Ind. Eng. 2023, 178, 109080. [Google Scholar] [CrossRef]
  21. Zhou, Y.Q.; Shi, Y.; Wei, Y.F.; Luo, Q.F.; Tang, Z.H. Nature-inspired algorithms for 0-1 knapsack problem: A survey. Neurocomputing 2023, 554, 126630. [Google Scholar] [CrossRef]
  22. Shu, Z.; Ye, Z.W.; Zong, X.L.; Liu, S.Q.; Zhang, D.D.; Wang, C.Z.; Wang, M.W. A modified hybrid rice optimization algorithm for solving 0-1 knapsack problem. Appl. Intell. 2022, 52, 5751–5769. [Google Scholar] [CrossRef]
  23. Yildizdan, G.; Baş, E. A Novel Binary Artificial Jellyfish Search Algorithm for Solving 0-1 Knapsack Problems. Neural Process Lett. 2023, 55, 8605–8671. [Google Scholar] [CrossRef]
  24. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.; Mirjalili, S. New binary marine predators optimization algorithms for 0-1 knapsack problems. Comput. Ind. Eng. 2021, 151, 106949. [Google Scholar] [CrossRef]
  25. Lim, T.Y.; Al-Betar, M.A.; Khader, A.T. Taming the 0/1 knapsack problem with monogamous pairs genetic algorithm. Expert Syst. Appl. 2016, 54, 241–250. [Google Scholar] [CrossRef]
  26. Zhou, Y.Q.; Li, L.L.; Ma, M.Z. A Complex-valued Encoding Bat Algorithm for Solving 0-1 Knapsack Problem. Neural Process Lett. 2016, 44, 407–430. [Google Scholar] [CrossRef]
  27. Zhou, Y.Q.; Bao, Z.F.; Luo, Q.F.; Zhang, S. A complex-valued encoding wind driven optimization for the 0-1 knapsack problem. Appl. Intell. 2017, 46, 684–702. [Google Scholar] [CrossRef]
  28. Pavithr, R.S.; Gursaran. Quantum Inspired Social Evolution (QSE) algorithm for 0-1 knapsack problem. Swarm Evol. Comput. 2016, 29, 33–46. [Google Scholar] [CrossRef]
  29. Wang, Y.; Wang, W. Quantum-Inspired Differential Evolution with Grey Wolf Optimizer for 0-1 Knapsack Problem. Mathematics 2021, 9, 1233. [Google Scholar] [CrossRef]
  30. Bhattacharjee, K.K.; Sarmah, S.P. Shuffled frog leaping algorithm and its application to 0-1 knapsack problem. Appl. Soft Comput. 2014, 19, 252–263. [Google Scholar] [CrossRef]
  31. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks (ICNN), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  32. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108. [Google Scholar]
  33. Rajwar, K.; Deep, K.; Das, S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef] [PubMed]
  34. Molina, D.; Poyatos, J.; Del Ser, J.; García, S.; Hussain, A.; Herrera, F. Comprehensive taxonomies of nature- and bio-inspired optimization: Inspiration versus algorithmic behavior, critical analysis recommendations. Cogn. Comput. 2020, 12, 897–939. [Google Scholar] [CrossRef]
  35. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  36. Suid, M.H.; Ahmad, M.A.; Nasir, A.N.K.; Ghazali, M.R.; Jui, J.J. Continuous-time Hammerstein model identification utilizing hybridization of Augmented Sine Cosine Algorithm and Game-Theoretic approach. Results Eng. 2024, 23, 102506. [Google Scholar] [CrossRef]
  37. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  38. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  39. He, Y.; Zhang, F.; Mirjalili, S.; Zhang, T. Novel binary differential evolution algorithm based on Taper-shaped transfer functions for binary optimization problems. Swarm Evol. Comput. 2022, 69, 101022. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Zhang, H.; Mirjalili, S.; Chalup, S.; Noman, N. A Novel U-shaped transfer function for binary particle swarm optimisation. Adv. Intell. Syst. Comput. 2020, 1138, 241–259. [Google Scholar]
  41. Guo, S.S.; Wang, J.S.; Guo, M.W. Z-Shaped Transfer Functions for Binary Particle Swarm Optimization Algorithm. Comput. Intell. Neurosci. 2020, 2020, 6502807. [Google Scholar] [CrossRef] [PubMed]
  42. Clerc, M. Particle Swarm Optimization; ISTE: Washington, DC, USA, 2006. [Google Scholar]
  43. Wang, W.B. Research on Particle Swarm Optimization Algorithm and Its Application. Ph.D. Thesis, Southwest Jiaotong University, Chengdu, China, 2012. [Google Scholar]
  44. Wang, D.S.; Tan, D.P.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  45. Kennedy, J.; Mendes, R. Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans. Syst. Man Cybern. Part C 2006, 36, 515–519. [Google Scholar] [CrossRef]
  46. Ye, W.; Feng, W.; Fan, S. A novel multi-swarm particle swarm optimization with dynamic learning strategy. Appl. Soft Comput. 2017, 61, 832–843. [Google Scholar] [CrossRef]
  47. Dey, S.S.; Dubey, Y.; Molinaro, M. Branch-and-bound solves random binary IPs in poly(n)-time. Math. Program. 2023, 200, 569–587. [Google Scholar] [CrossRef]
  48. Dantzig, G.B. Discrete-Variable Extremum Problems. Oper. Res. 1957, 5, 266–288. [Google Scholar] [CrossRef]
  49. Mezura-Montes, E.; Coello Coello, C.A. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  50. Samanipour, F.; Jelovica, J. Adaptive repair method for constraint handling in multi-objective genetic algorithm based on relationship between constraints and variables. Appl. Soft Comput. 2020, 90, 106143. [Google Scholar] [CrossRef]
  51. Bansal, J.C.; Deep, K. A modified binary particle swarm optimization for knapsack problems. Appl. Math. Comput. 2012, 218, 11042–11061. [Google Scholar] [CrossRef]
  52. Chen, Y.; Xie, W.C.; Zou, X.F. A binary differential evolution algorithm learning from explored solutions. Neurocomputing 2015, 149, 1038–1047. [Google Scholar] [CrossRef]
  53. Hota, A.R.; Pat, A. An adaptive quantum-inspired differential evolution algorithm for 0-1 knapsack problem. In Proceedings of the IEEE 2nd World Congress on Nature and Biologically Inspired Computing (NaBIC 2010), Kitakyushu, Japan, 15–17 December 2010; pp. 703–708. [Google Scholar]
  54. Peng, H.; Wu, Z.J.; Shao, P.; Deng, C.S. Dichotomous binary differential evolution for Knapsack problems. Math. Probl. Eng. 2016, 2016, 5732489. [Google Scholar] [CrossRef]
  55. Ali, I.M.; Essam, D.; Kasmarik, K. Novel binary differential evolution algorithm for knapsack problems. Inf. Sci. 2021, 542, 177–194. [Google Scholar] [CrossRef]
  56. Löfberg, J. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, China, 2–4 September 2004. [Google Scholar]
Figure 1. A comparison of two decoding methods.
Figure 2. A flowchart of a basic genetic algorithm.
Figure 3. The convergence curve and the evaluation curve.
Figure 4. The population diversity.
Figure 5. The exploration and exploitation of the IBPOA.