Article

An Improved Marriage in Honey-Bee Optimization Algorithm for Minimizing Earliness/Tardiness Penalties in Single-Machine Scheduling with a Restrictive Common Due Date

1
Industrial Engineering Department, University of Santiago de Chile, Avenida Victor Jara 3769, Santiago 9170124, Chile
2
Facultad de Ingeniería, Ciencia y Tecnología, Universidad Bernardo O’Higgins, Avenida Viel 1497, Ruta 5 Sur, Santiago 8370993, Chile
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 418; https://doi.org/10.3390/math13030418
Submission received: 1 October 2024 / Revised: 13 November 2024 / Accepted: 21 November 2024 / Published: 27 January 2025

Abstract:
This study evaluates the efficiency of a swarm intelligence algorithm called marriage in honey-bee optimization (MBO) in solving the single-machine weighted earliness/tardiness problem, a type of NP-hard combinatorial optimization problem. The goal is to find the optimal sequence for completing a set of tasks on a single machine, minimizing the total penalty incurred for tasks being completed too early or too late compared to their deadlines. To achieve this goal, the study adapts the MBO metaheuristic by introducing modifications to optimize the objective function and produce high-quality solutions within reasonable execution times. The novelty of this work lies in the application of MBO to the single-machine weighted earliness/tardiness problem, an approach previously unexplored in this context. MBO was evaluated using the test problem set from Biskup and Feldmann. It achieved an average improvement of 1.03% across 280 problems, improving upon the known upper bounds in 141 cases (50.35%) and matching or improving upon them in 193 cases (68.93%). In the most constrained problems (h = 0.2 and h = 0.4), the method achieved an average improvement of 3.77%, while for h = 0.6 and h = 0.8, the average error was 1.72%. Compared to other metaheuristics, MBO demonstrated competitiveness, with a maximum error of 1.12%. Overall, MBO exhibited strong competitiveness, delivering significant improvements and high efficiency in the problems studied.

1. Introduction

In today’s dynamic global marketplace, optimizing production, administrative, and logistics processes has become essential for organizations seeking not only survival but also growth and prosperity. Competitiveness is continually intensifying, becoming a crucial factor in the viability of any company [1]. In the realm of production and operations management, effectively scheduling tasks on machines is a critical challenge that directly impacts organizational productivity and competitiveness. A particularly complex issue within this domain is single-machine scheduling with a common due date, where both early and late completions incur significant penalties [2]. This challenge is known as the single-machine weighted earliness/tardiness (SMWET) problem. It is a fundamental area of study in production scheduling, which provides a framework for optimizing job sequencing, seeking a balance between delivery times and associated costs. Its relevance extends across various industries (manufacturing, logistics, and services), and advancements in optimization algorithms and techniques continue to expand its applications and utility. As modern production environments become increasingly complex, research in this field remains crucial for enhancing efficiency and competitiveness. The use of SMWET to optimize job sequencing on a production line offers several advantages and disadvantages, which depend on the specific characteristics of the environment and the desired objectives.
The main advantages include the following:
  • Simplicity: Scheduling on a single machine is simpler to model and solve compared to multi-machine environments, making implementation and analysis easier. The model incorporates job durations, enabling more accurate planning.
  • Flexibility: It allows for the adjustment of penalty weights to prioritize certain tasks, adapting to the specific needs of production or service.
  • Resource optimization: Proper scheduling can maximize machine capacity usage, minimizing idle time and improving operational efficiency.
  • Improved customer satisfaction: By minimizing penalties for tardiness and earliness, customer relationships can be enhanced by better meeting delivery deadlines.
  • Suitability for a critical machine: It optimizes the work sequence in environments where a single machine is the key resource.
The main disadvantages include the following:
  • Computational complexity: It is NP-hard, making it difficult to find optimal solutions for large job sets.
  • Does not consider multiple machines: It is limited to a single-machine environment, unsuitable for systems with multiple stations.
  • Effect of penalties: Penalties for early/late completions must be well-defined for the model to function correctly.
  • Complexity in implementation: Although the problem is conceptually simple, implementing a system that considers all constraints and optimizes performance can be complicated.
  • Limitation in flexibility: It may not adapt well to dynamic production environments or those with frequent changes.
Scheduling problems are classic discrete combinatorial optimization problems. Most of these belong to the well-known class of just-in-time scheduling, which aims to optimize the supply chain by delivering materials and components precisely when required, minimizing inventory and maximizing operational efficiency [3]. Common due date assignment and just-in-time scheduling have been extensively researched in recent years [4,5].
Due dates are key parameters in scheduling problems. When all jobs share a single due date, it is referred to as a common due date scheduling problem. These dates are determined within the context of the business relationship, considering both client objectives and company goals to ensure customer satisfaction, while minimizing production costs [6]. In a common due date scheduling problem, some jobs may be completed ahead of schedule, while others may be delayed. Early job completion necessitates storage, incurring additional expenses. Late job delivery results in contractual penalties, profit loss, and damage to the manufacturer’s reputation. Both scenarios are unfavorable due to their associated costs and negative impact on the company [7].
Generally, problems involving common due dates can be categorized into two types: restrictive and non-restrictive. Specifically, if the optimal common due date is either a decision variable or does not impact the optimal task sequence, the problem is classified as non-restrictive. Conversely, if the common due date is given and can influence the optimal task sequence, the problem is considered restrictive. In this case, the search for an optimal task schedule must be conducted with respect to the due date [8].
The approaches used in this area are generally categorized into mathematical methods and metaheuristic algorithms [9]. While mathematical methods are effective in situations where gradients can be calculated, they often encounter significant challenges when solving global optimization problems [10]. This limitation has motivated the development of nature-inspired algorithms [11], which have demonstrated remarkable success in addressing optimization complexities on benchmark datasets, such as those published at the IEEE congress on evolutionary computation (CEC).
Widely used across various domains, metaheuristic algorithms have been studied for several decades. These optimization techniques employ nature-inspired simulations and provide high-quality approximate solutions to complex problems that are difficult to solve [12]. To address scheduling problems, there are primarily two types of methods: exact algorithms and approximate algorithms. Exact algorithms mainly encompass mathematical programming, dynamic programming, and branch-and-bound methods. Approximate algorithms primarily include heuristic and intelligent algorithms [13]. Specifically, this study proposes the use of the MBO algorithm, categorized as an approximate metaheuristic algorithm. MBO has emerged as a promising approach for tackling complex problems. Its effectiveness lies in its ability to discover optimal solutions by drawing inspiration from the natural mating process of queen honey bees and drones [14]. There are several algorithms derived from MBO, such as fast marriage in honey-bee optimization (FMHBO), honey-bee mating optimization (HBMO), honey-bee optimization (HBO) and next-generation hybrid algorithms such as hybrid honey-bee simulated annealing (HBSA) [15], the honey-bee mating-based firefly algorithm (HBMFA) [16], and others focused on nectar collection, such as the hierarchical learning-based artificial bee colony (HLABC) [17], the artificial bee colony + particle swarm optimization (ABCPSO) [18], the one-dimensional binary artificial bee colony (oBABC) [19], and the artificial bee colony based on a two-dimensional queue structure (KLABC) [20], among others.
MBO simulates the mating process of honey bees, where queen bees select the most suitable drones to ensure offspring quality. It employs selection and adaptation principles that can be highly effective in optimizing task scheduling [21]. Benatchba et al. [22] propose the use of an artificial bee colony-inspired optimization algorithm to tackle a complex data mining problem framed as a Max-SAT problem. The algorithm proved effective in generating high-quality solutions within reasonable computational limits, outperforming several conventional optimization techniques. Palominos et al. [23] propose an algorithm inspired by honey bee behavior, which incorporates chaotic elements to enhance solution exploration. Empirical results validate the algorithm’s superiority, yielding higher-quality and more efficient solutions than conventional approaches across multiple traveling salesman problem instances. Alotaibi [24] proposes a hybrid algorithm that combines the behavior of artificial bee colonies with that of spiders to optimize resource allocation in the cloud. This problem has grown increasingly complex due to the escalating demands of cloud services. The findings reveal that, in terms of cost, the proposed method outperforms various bio-inspired optimization algorithms.
Recently, Yogeshwar and Kamalakkannan [25] proposed a hybrid MBO-rock hyraxes swarm optimization algorithm for efficient key generation in data sanitization and restoration. This approach significantly improved privacy and security in IoT-based systems by effectively concealing sensitive data during transmission. The model achieved 97% effectiveness on the analyzed dataset, outperforming other compared methods. Xiao et al. [26] introduced an MBO approach for efficient area coverage using swarms of robots, especially in challenging environments. Inspired by bee and ant colonies, the methodology incorporates specific behavioral models for each robot type. The algorithm demonstrated adaptability to dynamic conditions, including unexpected threats, across various experimental scenarios.
The scheduling problem to be solved involves sequencing tasks on a machine to minimize the total penalties associated with early and late deliveries relative to a deadline. The purpose of this study is to develop a schedule that minimizes the objective function value, which represents the cost associated with these penalties. This study adopts the methodology of Abbass [27,28,29], adapted for discrete problems, introducing a novel use of the MBO algorithm that has not been previously explored in this context. The main objective is to evaluate the efficiency of this metaheuristic method in solving NP-hard combinatorial optimization problems, specifically in scheduling.

Related Literature

The general problem of minimizing total penalties for early and late tasks with a common due date has been investigated by different researchers, including Lv et al. [30], who studied a single-machine scheduling problem with proportional job deterioration, focusing on minimizing the total weighted completion time while considering job release times. Their key contributions include identifying dominance properties and deriving both upper and lower bounds for the problem. Furthermore, they propose a branch-and-bound algorithm and several metaheuristic algorithms, such as tabu search, simulated annealing, and the NEH heuristic. Experimental results show that the branch-and-bound algorithm can solve instances with up to 40 jobs within reasonable timeframes. Additionally, the NEH and simulated annealing algorithms were found to be more accurate than tabu search, providing a comparative analysis of the efficiency and accuracy of the different proposed methods. In addition, Shabtay et al. [6] investigated the single-machine scheduling problem with assignable due dates or windows. Their findings suggest that if the due window’s location is unrestricted, the problem can be solved efficiently. Moreover, when the due window’s length is unlimited, the computational time is reduced further. Nevertheless, imposing bounds on either parameter dramatically increases problem complexity, necessitating a pseudopolynomial-time algorithm to address this NP-hard challenge.
Zhang and Tang [31] propose a method for solving the single-machine scheduling problem with flexible delivery dates. They introduced a dynamic scheduling algorithm to address two specific problems. The objective is to minimize a function that considers early work, late work, and flexible delivery dates. For the first issue, they determine the range of delivery dates based on cost coefficients and develop an algorithm that guarantees an optimal solution. For the second issue, they address the allocation of time windows and propose another algorithm that defines the range of delivery dates. Finally, they provide a numerical example to illustrate the feasibility of their proposed algorithms. Li and Chen [32] investigate the optimization of assigning a common due date to multiple tasks. Their primary contribution lies in formulating and solving a scheduling problem that minimizes penalties for weighted early and late completions. The findings suggest that, under specific conditions, this problem can be resolved using a polynomial-time algorithm.
Atsmony and Mosheiov [33] propose a pseudo-polynomial dynamic programming algorithm to address the NP-hard problem of common deadline assignment on a machine. The goal is to minimize the maximum lead/lag costs while maintaining a specified bound on the total job rejection cost. The results demonstrate an efficient implementation that reduces computational effort, allowing problems with up to 200 jobs to be solved in reasonable times. Additionally, they introduce an intuitive heuristic that, after numerical testing, shows small error margins compared to the optimal solution. Qian et al. [34] developed a single-machine scheduling model incorporating important factors like the learning effect, the lead time, and convex resource allocation. Their findings reveal that three distinct objective functions can be formulated and solved, each targeting the minimization of different scheduling costs, including the lead time, delays, and resource expenses. Furthermore, they demonstrate that all the relevant problems can be solved in polynomial time, indicating that efficient algorithms have been developed to address them.
Arik [2] developed a polynomial-time algorithm to address single-machine scheduling with distinct weights for job preemption and tardiness. The objective is to minimize the total cost resulting from the sum of the weighted anticipation and tardiness, along with the assigned common due date. Leveraging the V-shaped property and machine start time, the proposed heuristic consistently delivers superior solutions within reasonable computational bounds, even as problem complexity increases. Tan and Fu [3] address a single-machine scheduling problem aiming to minimize the total costs incurred by job preemption, tardiness, and idle time. They propose a hybrid approach that integrates a custom dynamic programming algorithm, designed to handle the nonconvex idle cost function, with a genetic algorithm (GA) enhanced by restarts and early discarding for sequencing. The results underscore the method’s superior performance and adaptability compared to existing solutions.
Furthermore, Lee and Kim [35] tackled the SMWET problem using a parallel GA, followed by James [36], who employed a tabu search (TS) algorithm. Notably, neither study incorporated the third property identified by Biskup and Feldmann [37], which states that an optimal sequence of tasks does not necessarily start at time zero. Biskup and Feldmann generated a test problem set for the SMWET problem, solving the problems with two specific heuristics while considering the three properties outlined in Section 2. The resulting solution values were established as upper bounds for the problem set, providing a benchmark for subsequent research.
Later, Feldmann and Biskup [8] revisited the SMWET problem, employing five distinct metaheuristic approaches: (i) the evolutionary strategy, (ii) simulated annealing, (iii) threshold-accepting algorithms, (iv) the evolutionary strategy with a destabilizing phase, and (v) threshold-accepting algorithms with stabilized search. They benchmarked these metaheuristics against a subset of test problems from their prior work [37]. Subsequently, [38,39,40,41,42,43] also explored the SMWET problem, utilizing the Biskup and Feldmann [37] test suite for a performance comparison.
A literature review reveals a growing diversity of scheduling approaches for single machines, leading to substantial advancements in efficiency optimization and cost reduction. These advancements primarily focus on the lead time (earliness/tardiness) and the assignment of common delivery dates. The reviewed studies address complex problems such as minimizing weighted completion times, assigning flexible delivery windows, and considering factors like task deterioration and learning.
Notable methodological contributions include branch-and-bound algorithms, pseudo-polynomial dynamic programming, and metaheuristics, which have enabled more efficient solutions to NP-hard problems. However, each approach has its limitations and challenges. While some algorithms achieve optimal solutions in reasonable times for small instances, complexity increases significantly with additional constraints or a larger number of tasks, posing an ongoing challenge.
Moreover, despite significant theoretical progress, scalability and the impact of external factors remain areas requiring further attention. The practical application of these methods in real industrial settings has not yet been fully explored. While valuable contributions have been made, it is crucial to focus on testing these methods in more dynamic and complex environments and to explore hybrid approaches that can address some of the current limitations.
The remainder of this document is organized as follows. Section 2 defines the SMWET problem. Section 3 details the MBO metaheuristics. Section 4 outlines the adaptation of the MBO approach to solve the restrictive SMWET problem. Section 5 presents the computational experiments and discusses the obtained results, evaluating the MBO metaheuristic’s performance on the studied problem set. Finally, Section 6 provides conclusions of the study and suggests some future work.

2. Problem Formulation

Punctuality is a fundamental requirement for successful just-in-time implementation. By scheduling tasks with minimal tolerance and defining strict time windows, deviations from the production plan are minimized. This strategy helps reduce in-process inventory, cycle times, and material management costs. In this context, the objective is to find an optimal sequence of tasks, σ, that minimizes the weighted sum of earliness and tardiness penalties for all tasks. In the problem formulation, a set of tasks j = {1, 2, …, n} is given, available to be processed at time zero. Each task j has a processing time p_j, and a common due date d applies to the entire set of tasks. Preemption of tasks is not allowed. A task is considered early if its completion time C_j is less than the common due date and tardy if its processing finishes after the due date. Therefore, if the completion time of task j deviates from the due date, either earlier or later, a penalty is incurred. The rest of the nomenclature is defined as follows: α_j, the penalty per unit of time for the early completion of job j; β_j, the penalty per unit of time for the tardiness of job j; E_j, the earliness of job j; and T_j, the tardiness of job j.
Earliness and tardiness are calculated as E_j = max{0, −L_j} = max{0, d − C_j} and T_j = max{0, L_j} = max{0, C_j − d}, respectively, for each job j, j = {1, 2, …, n}, where L_j = C_j − d is the lateness of job j. The unit penalties per unit of time of job j for being early or tardy are α_j and β_j, respectively.
The objective is to identify a feasible schedule, S, that minimizes the aggregate earliness and tardiness penalties given a common due date, d.
f(S) = Σ_{j=1}^{n} α_j·max{0, d − C_j} + Σ_{j=1}^{n} β_j·max{0, C_j − d} = Σ_{j=1}^{n} (α_j·E_j + β_j·T_j)    (1)
The objective function (Equation (1)) refers to the restrictive case, where the common due date is not a decision variable but is given and can influence optimum task scheduling. It is subject to the following conditions:
T_j ≥ S_j + p_j − d,  j = 1, 2, …, n
E_j ≥ d − S_j − p_j,  j = 1, 2, …, n
S_j + p_j ≤ S_k + R·(1 − x_jk),  j = 1, 2, …, n − 1; k = j + 1, …, n
S_k + p_k ≤ S_j + R·x_jk,  j = 1, 2, …, n − 1; k = j + 1, …, n
T_j, E_j, S_j ≥ 0,  j = 1, 2, …, n
x_jk ∈ {0, 1},  j = 1, 2, …, n − 1; k = j + 1, …, n
where S_j denotes the start time of job j, R is a sufficiently large constant, and x_jk equals 1 if job j precedes job k in the sequence and 0 otherwise.
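As a concrete illustration of the formulation above, the following Python sketch evaluates the objective f(S) for a given task sequence, assuming no idle time between consecutive jobs (Property 1) and a given start time for the first job. All function and variable names are illustrative and not taken from the paper.

```python
# Illustrative sketch: evaluating the SMWET objective f(S) for a sequence,
# assuming no idle time between jobs and a chosen start time.

def smwet_cost(sequence, p, alpha, beta, d, start=0):
    """Total weighted earliness/tardiness penalty of a schedule."""
    t = start
    cost = 0
    for j in sequence:
        t += p[j]                          # completion time C_j
        cost += alpha[j] * max(0, d - t)   # earliness penalty
        cost += beta[j] * max(0, t - d)    # tardiness penalty
    return cost

# Three jobs with a common due date d = 10
p, alpha, beta = [4, 3, 5], [2, 1, 3], [5, 4, 2]
print(smwet_cost([0, 1, 2], p, alpha, beta, d=10))            # prints 19
print(smwet_cost([0, 1, 2], p, alpha, beta, d=10, start=2))   # prints 17
```

Note that delaying the start of the first job lowers the cost in this toy instance, illustrating why the first processed task need not begin at time zero (Property 3).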
In the field of operations research and task scheduling theory, it is essential to identify and apply properties that facilitate the optimization of schedules for various types of problems. This study describes key properties of the SMWET problem, which enable the construction of optimal solutions by organizing sequences of tasks under specific conditions and efficiency criteria. Several authors have examined three properties of the non-restrictive version of the SMWET problem [44,45,46,47,48,49,50]. For the problem with a restrictive due date, on the other hand, properties are identified by the methods proposed in [48,50,51], and they can be formulated similarly to the approach of the authors of [8].
  • Property 1: In an optimal schedule, there is no idle time between the processing of two consecutive tasks [50].
  • Property 2: An optimal schedule exhibits a V-shaped property in the sense that, within the schedule, the early tasks are ordered in decreasing order of the p_j/α_j ratio, and the tardy tasks are sequenced in increasing order of the p_j/β_j ratio. The possibility of a straddling job existing in the set of tasks, i.e., a job whose processing starts early and ends tardily, remains open [37].
  • Property 3: There exists an optimal schedule in which the following holds true: (i) the first job in the set starts processing at time zero or (ii) a job is completed exactly on its due date [37].
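Property 2 can be sketched as follows. The partition of the jobs into an early set and a tardy set is assumed to be supplied by the search procedure; all names are illustrative rather than taken from the paper.

```python
# Hypothetical sketch of Property 2: order the early jobs by decreasing
# p_j/alpha_j and the tardy jobs by increasing p_j/beta_j, yielding a
# V-shaped sequence. The early/tardy partition is assumed to be given.

def v_shaped_sequence(early, tardy, p, alpha, beta):
    early_part = sorted(early, key=lambda j: p[j] / alpha[j], reverse=True)
    tardy_part = sorted(tardy, key=lambda j: p[j] / beta[j])
    return early_part + tardy_part

p, alpha, beta = [4, 3, 5, 2], [2, 1, 3, 2], [5, 4, 2, 1]
print(v_shaped_sequence([0, 1], [2, 3], p, alpha, beta))   # prints [1, 0, 3, 2]
```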
This research addresses the SMWET problem in its restricted form, considering properties 1 and 3 through the application of the MBO metaheuristic using the Abbass procedure [27,28,29]. The objective is to evaluate the effectiveness of this approach for solving NP-hard combinatorial optimization problems. Specifically, the main contribution of this work is the adaptation of the MBO metaheuristic to address the restrictive SMWET problem, demonstrating its suitability for task scheduling. According to the literature review, and as far as we know, there are no existing studies that apply the MBO algorithm to this area.
To evaluate the performance of the proposed MBO metaheuristic approach for the SMWET problem, a set of standard test problems from the literature, as proposed by Biskup and Feldmann [37] and accessible through the OR-Library [52], was solved. In this context, new upper bounds were established for a significant portion of the test instances. Specifically, the adapted MBO algorithm achieved solutions that matched or improved upon the known upper bounds of the test problem set in 67.14% of cases.

3. MBO Metaheuristics

The MBO metaheuristic was introduced by Abbass in his 2001 paper [27]. Later, the authors of [28,29,53,54] presented modifications and improvements to the initially proposed algorithm.
The MBO metaheuristic is inspired by the social behavior of mating in bee colonies. It is an evolutionary algorithm designed to solve complex combinatorial problems. MBO employs strategies based on searching for neighboring solutions, placing it in the same category as GAs and ant colony optimization.
Abbass [27,28,29] pioneered the modeling of bee mating for optimization problem-solving. Bee colonies consist of queens, drones, workers, and offspring. The process begins with the queen’s nuptial flight, where she mates with multiple drones. The queen stores the drones’ genetic material in her spermatheca. Each egg is fertilized with a random combination from this stored material, creating the colony’s offspring. Workers tend to the young.
The MBO metaheuristic simulates the mating process of bees as a series of state transitions. Initially, a queen bee’s genotype is randomly generated. Worker bees, acting as local search heuristics, improve this initial solution. Subsequently, the queen embarks on multiple mating flights. Each flight involves the random initialization of the queen’s energy and speed. As she transitions through different states (potential solutions), she has opportunities to mate with randomly generated drones. Successful matings, determined by factors like the queen’s energy and the drone’s fitness, result in the drone’s genetic material being stored in the queen’s spermatheca. After each flight, the queen generates new offspring by combining her genetic material with randomly selected material from the spermatheca. These offspring are refined by worker bees. The fittest offspring replaces the queen, and the process iterates.
The MBO algorithm comprises three core processes: (i) the queen’s mating flight for parent selection, (ii) the creation of offspring through genetic recombination, and (iii) the refinement of offspring by worker bees using local search heuristics [53]. Each process is crucial for the algorithm’s performance in finding optimal solutions.
The MBO algorithm in [27] employs a pure exploration strategy, accepting as many queen transitions as possible. Each queen is initially assigned random energy E_0 and speed S_0 values between 0.5 and 1.0 to facilitate an average of 7–17 matings per flight [27,28,29]. During the flight, the queen's energy and speed diminish according to Equations (2) and (3).
E(t + 1) = E(t) − g    (2)
S(t + 1) = α·S(t)    (3)
where parameters g and α correspond to the reduction factors of the queen's energy and speed at instant t, respectively.
The queen’s energy and speed parameters mimic natural bee behavior. Energy correlates with flight duration, limiting the queen’s search for drones in the solution space. Speed influences mating success, with higher speeds increasing the likelihood of successful mating, especially at the flight’s onset [27,28,29].
The success of a queen’s mating flight depends on the proximity of drones. Drones closer to the queen have a higher mating probability, as defined by Equation (4). These drones are preferentially selected for genetic recombination.
p(q, d) = min{1, e^(−l(q,d)/s(t))}    (4)
where l(q, d) = dist(f(q), f(d)) is the distance between the evaluations of the fitness functions of the queen and of the drone, and s(t) is the queen's speed at instant t.
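The interplay of Equations (2)–(4) can be sketched as follows. The distance l(q, d) is assumed here to be the absolute difference of the fitness values, and the parameter values are illustrative only.

```python
import math

# Sketch (with assumed shapes) of the annealing-style mating probability of
# Equation (4), combined with the energy/speed decay of Equations (2)-(3).
# l(q, d) is taken to be the absolute fitness difference (an assumption).

def mating_probability(fit_queen, fit_drone, speed):
    l = abs(fit_queen - fit_drone)           # l(q, d) = dist(f(q), f(d))
    return min(1.0, math.exp(-l / speed))

e, s = 0.9, 0.9                              # initial energy and speed
g, a = 0.09, 0.95                            # illustrative reduction factors
p_start = mating_probability(100.0, 100.5, s)
e, s = e - g, a * s                          # Equations (2) and (3)
p_later = mating_probability(100.0, 100.5, s)
print(p_start > p_later)                     # slower queen -> lower acceptance
```

As the flight progresses and the queen slows down, the same drone becomes less likely to be accepted, which gradually narrows the search.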
In Abbass’s MBO algorithm [27], queen bees randomly select drones for mating. A drone is accepted as a parent if its mating probability exceeds a random value between 0 and 1. Half of the drone’s genetic material is then incorporated into the queen’s spermatheca, signifying a successful mating.
A mating flight concludes when the queen’s energy depletes, i.e., E(t) = 0, or the spermatheca is full. Upon returning to the hive, the queen initiates reproduction by randomly selecting a drone’s genetic material from her spermatheca for crossover. The resulting offspring undergo mutation by worker bees, finalizing the brood-raising process.
The fittest brood replaces the weakest queen. The remaining broods are discarded. The process iterates until the termination criteria are met.
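The flight, crossover, refinement, and replacement steps above can be combined into a single-iteration sketch. The helper functions (fitness, crossover, mutate, local_search, p_mate) are caller-supplied placeholders, not routines from the paper.

```python
import random

# High-level sketch of one MBO mating-flight iteration as described above.
# All helper names are placeholders standing in for the paper's operators.

def mbo_iteration(queen, drones, spermatheca_size, energy, g,
                  fitness, crossover, mutate, local_search, p_mate):
    spermatheca = []
    while energy > 0 and len(spermatheca) < spermatheca_size:
        drone = random.choice(drones)             # queen meets a random drone
        if random.random() < p_mate(queen, drone):
            spermatheca.append(drone)             # successful mating
        energy -= g                               # Equation (2)
    broods = [local_search(mutate(crossover(queen, random.choice(spermatheca))))
              for _ in range(len(spermatheca))] if spermatheca else []
    # the fittest brood replaces the queen if it is better (minimization)
    return min(broods + [queen], key=fitness)
```

Because the returned solution is the minimum over the broods and the current queen, an iteration never degrades the incumbent solution.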

4. MBO Algorithm for Solving the SMWET Problem

There are two commonly used methods to represent solutions to the SMWET problem [43]. One approach utilizes permutation chains to depict the physical task sequences. Consequently, an SMWET problem with n tasks yields a solution space comprising n! permutations. Alternatively, solutions can be sought within a smaller solution space exclusively containing those that inherently satisfy property 2 (the V-shaped property).
This work employs permutation chains for solution representation due to their adaptability and guaranteed feasibility. Accordingly, each genotype (queen, drone, and brood) is characterized by a permutation sequence of equal length (n) representing the task order. Furthermore, a starting time is assigned to each genotype’s initial task, addressing property 3 of the SMWET problem, which stipulates that the first processed task may not necessarily commence at time zero.
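The genotype encoding described above can be sketched as a permutation of the n tasks plus a start time for the first task (Property 3). The class and attribute names are illustrative, not taken from the paper.

```python
import random

# Sketch of the genotype encoding: a random permutation of the n tasks
# together with a start time for the first task. Names are illustrative.

class Genotype:
    def __init__(self, n, max_start):
        self.sequence = random.sample(range(n), n)   # task processing order
        self.start = random.randint(0, max_start)    # start time of first task

g = Genotype(n=5, max_start=8)
print(sorted(g.sequence))   # always a valid permutation: [0, 1, 2, 3, 4]
```

Every genotype built this way is feasible by construction, which is the adaptability and guaranteed-feasibility advantage mentioned above.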

4.1. Parameters Used for the MBO Algorithm

The proposed MBO algorithm uses fixed parameters for all instances in the set of test problems proposed by the authors of [37]. This choice serves two purposes: to evaluate the effectiveness of the algorithm in the search for good solutions, and to validate a general solving algorithm that requires no per-problem parameter tuning.
The general parameters employed in the proposed algorithm are as follows: (i) The number of queen bees (Q)—this parameter represents the number of queen bees initiating mating flights. The algorithm utilized four queens, each proposing a distinct solution at every iteration. (ii) The number of drones (D)—indicating the maximum number of drones a queen can mate with per flight, D was experimentally fixed at 10. (iii) The spermatheca capacity (M)—this parameter defines the queen’s genetic material storage capacity post-flight. Based on Abbass’s recommendations [27,28,29] suggesting a range of 7 to 17, M was set to 10.
Table 1 shows the values of the specific parameters used in the MBO algorithm proposed in this study.
Equation (5) was used to determine the value of parameter g.
g = 0.5·e_0/M    (5)
where e_0 = E(t = 0) is the initial energy of the queen bee, and M is the size of the spermatheca. This specification of g ensures that each queen's flight lasts long enough to fill her spermatheca with drone-derived genetic material.
The values s_0 = 0.9 and e_0 = 0.9 were chosen for the queen bee's initial speed and energy, respectively, in accordance with the studies [27,28,29,53,54].
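Reading Equation (5) as g = 0.5·e_0/M (an assumption based on the surrounding text), the energy step allows roughly 2M state transitions per flight, which gives the queen enough opportunities to fill her spermatheca:

```python
# Sketch of Equation (5), read as g = 0.5 * e0 / M (an assumption): the
# resulting energy step permits about 2M transitions before energy depletes.

def energy_step(e0, M):
    return 0.5 * e0 / M

e0, M = 0.9, 10
g = energy_step(e0, M)
print(round(e0 / g))   # transitions before the energy depletes: 20
```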

4.1.1. Obtaining an Initial Population

This paper employs four queen bees, two initialized with random solutions and two derived from the longest processing time (LPT) and shortest processing time (SPT) scheduling rules. Randomizing half of the queen bees fosters greater solution diversity, following the guidelines of [27,28,29,53,54,55]. Each generated queen bee is assigned initial energy and speed parameters. To ensure quality solutions, the initial queen bees are refined using worker bees, which are local search algorithms or simple heuristics.
The initial drone population is also generated randomly, adhering to the original MBO algorithm framework by the authors of [27,28,29].

4.1.2. Mutation of Drones

As in [53], each drone should mutate at the time of coupling with the queen. In the mutation process, each drone possesses a genotype, and an operation called a genotype marker (GM). This marker randomly selects half of the drone’s genes for mutation, ensuring that only these genes participate in the creation of brood due to the haploid nature of drones [27,28,29,53,54]. The mutation process involves applying the GM to the drone genotype and then randomly rearranging the marked genes, which will contribute to the offspring.
Figure 1 visually depicts the drone genotype, the GM operator, and the mutation process. The symbol m indicates a marked gene, n/m an unmarked gene, and an asterisk * a non-existent gene.
In this way, at each transition, starting from the original drone genotype denoted by S_D, a mutated genotype S_D′ is obtained. If the fitness of the mutated drone is better than that of the original drone (i.e., f(S_D′) < f(S_D), since the objective is minimized), then the mutated drone replaces the original one, S_D = S_D′.
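The genotype-marker mutation described above can be sketched in Python (the implementation language reported later in the paper); the function name and interface below are illustrative, not the authors' actual code:

```python
import random

def gm_mutate(drone, rng=random):
    """Genotype-marker (GM) mutation sketch: mark half of the genes at
    random and randomly rearrange only the marked positions."""
    n = len(drone)
    marked = rng.sample(range(n), n // 2)   # positions selected by the GM
    genes = [drone[i] for i in marked]
    rng.shuffle(genes)                       # rearrange the marked genes
    mutated = list(drone)
    for pos, g in zip(marked, genes):
        mutated[pos] = g
    return mutated
```

Because only the marked genes are rearranged among themselves, the result remains a valid permutation of the same set of tasks.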

4.1.3. Selection of the Drone’s Genetic Material

In the implementation of the MBO algorithm for the SMWET problem, it was defined, following [27,28,29,53,54], that the queen probabilistically adds drone material to her spermatheca according to the annealing function presented in Equation (4). A random number between 0 and 1 is generated for each queen and then compared to the highest mating probability of any drone with respect to that queen. If the mating probability is greater than the generated number, the queen bee accepts the drone as a parent, meaning that half of the drone’s genetic material (given its haploid donor status) is added to the queen’s spermatheca, resulting in a successful mating.
Furthermore, in this work, the substitution of a queen by a drone was considered if the fitness of any drone was superior to that of a queen.
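The probabilistic acceptance of a drone can be sketched as follows, assuming Equation (4) takes the usual MBO annealing form p(Q, D) = exp(−|f(D) − f(Q)|/S(t)); this is a hypothetical rendering, since Equation (4) itself appears earlier in the paper:

```python
import math
import random

def accept_drone(f_queen, f_drone, speed, rng=random):
    """Mating-acceptance sketch: assumes Eq. (4) is the standard MBO
    annealing probability p = exp(-|f(D) - f(Q)| / S(t))."""
    prob = math.exp(-abs(f_drone - f_queen) / speed)
    return rng.random() < prob   # accept the drone if the draw falls below p
```

With a high queen speed S(t) early in the flight, almost any drone is accepted; as the speed decays, only drones with fitness close to the queen's tend to be accepted.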

4.1.4. Crossing of Genetic Material and Offspring Production

During the mating process, the queen bee stores the genetic material of a drone in her spermatheca. This genetic material is then randomly selected to fertilize eggs, resulting in the creation of broods.
A genetic crossover operator is employed to facilitate the exchange of genetic material between the queen and the drone. This operator selects two individuals, referred to as Parent 1 and Parent 2, and performs a genetic crossover between their genotypes.
The crossover process involves combining the genetic information from both parents to generate a new offspring. The head of the offspring’s genotype sequence is inherited from Parent 2, while the tail is inherited from Parent 1.
The genetic crossover operator can be formally defined by the following three steps: (i) A segment of the string carrying the genotype of Parent 1 is randomly selected. This segment is defined by its starting position, i, and ending position, j, where 1 < i < j < n. The genes within this segment (positions i through j) are stored as a list. (ii) Genes from Parent 2 are selected, ensuring they are not present in the previously chosen segment of Parent 1. These selected genes are stored in a list of length less than n, where n is the total number of tasks. (iii) The genes from Parent 2 stored in the previously generated list, up to position i (the starting point of the Parent 1 segment), are combined with the Parent 1 segment and the remaining genes from Parent 2, thus creating a new offspring.
The queen’s and drone’s genotypes served as Parent 1 and Parent 2. The crossover operator was applied to create broods with (i) queen-dominant genotypes—the majority of genetic material at the beginning of the sequence was from the queen (with the drone’s genotype as Parent 1 and the queen’s as Parent 2)—and (ii) drone-dominant genotypes—the majority of genetic material at the beginning of the sequence was from the drone (with the queen’s genotype as Parent 1 and the drone’s as Parent 2).
Experiments were conducted using the crossover operator with the drone as Parent 1 in 70% of cases and the queen as Parent 1 in 30%. This approach was based on the general superiority of the queen’s genotype (due to prior optimization) and its potential alignment with the solution structure.
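The three-step crossover above resembles an order-crossover-style operator and can be sketched as follows (a minimal illustration; the function name and random index choices are assumptions, not the authors' code):

```python
import random

def crossover(parent1, parent2, rng=random):
    """Sketch of the three-step crossover: segment from Parent 1,
    head and remaining genes from Parent 2."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    segment = parent1[i:j + 1]                       # step (i): Parent 1 segment
    rest = [g for g in parent2 if g not in segment]  # step (ii): Parent 2 genes not in segment
    # step (iii): head from Parent 2 (up to position i), then the
    # Parent 1 segment, then the remaining Parent 2 genes
    return rest[:i] + segment + rest[i:]
```

The offspring is always a valid permutation: every task appears exactly once, coming either from the Parent 1 segment or from the filtered Parent 2 list.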

4.2. Improvement of Offspring Using Workers

To address the SMWET scheduling problem, the MBO algorithm was used, employing three heuristics as workers. These heuristics were used to enhance the initial queen bee population and the offspring generated through the genetic recombination of the drone and queen. The specific heuristics utilized were hill climbing, simulated annealing, and a local search operator.

4.2.1. Hill-Climbing Heuristic

Beginning with an initial state or solution, a local search algorithm explores the search space by moving to neighboring states, which enhances the value of a specified objective function [56]. At each iteration, the algorithm modifies one or more elements of the current solution, represented by a vector x, which can contain continuous and/or discrete data. It then assesses how these changes impact the value of the objective function f(x) [57]. This optimization process continues until no neighboring states yield a better solution, leading to a local maximum or minimum, or until a predetermined termination criterion is fulfilled. Algorithm 1 shows the pseudo-code of the hill-climbing heuristic [58].
Algorithm 1. Pseudo-code for hill-climbing algorithm
i = initial solution
While there exists s ∈ Neighbors(i) with fitness(s) > fitness(i) do
Generate an s ∈ Neighbors(i);
If fitness(s) > fitness(i) then
Replace i with s;
End If
End While
This search process was selected as a worker based on the suggestion in [27,28,29] to use local search algorithms as workers. The choice of this heuristic and its representation was influenced by the study of M’Hallah [59]. This technique, along with simulated annealing, was chosen due to its ease of application to various combinatorial optimization problems. Despite the risk of converging to local optima, it can provide strong initial solutions.
The hill-climbing heuristic consists of the following three steps: (i) Selecting a starting point or current state S i , which corresponds to the current solution. (ii) Perturbing the current solution ( S i S i ) and accepting any transition that improves the solution (either upward or downward, depending on the problem’s objective). (iii) Choosing the best successor S j in the neighborhood of S i , where S j = best { S i }; if there are multiple equally good successors, one is chosen at random.
Transitions are generated by creating neighboring solutions using a task-swapping operator. This operator randomly exchanges tasks within the solution delivered by the brood. If the resulting solution is better than the current one, it is accepted.
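For the minimization objective of the SMWET problem, the hill-climbing worker with the task-swapping neighborhood can be sketched as follows (illustrative code; the iteration budget of 50 matches the value reported later in the text):

```python
import random

def hill_climb(solution, fitness, iters=50, rng=random):
    """Hill-climbing worker sketch: random pairwise task swaps,
    accepting only improving moves (minimization assumed)."""
    best = list(solution)
    best_f = fitness(best)
    for _ in range(iters):
        a, b = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[a], cand[b] = cand[b], cand[a]   # swap two tasks
        f = fitness(cand)
        if f < best_f:                        # keep the swap only if it improves
            best, best_f = cand, f
    return best, best_f
```

Since only improving swaps are accepted, the returned fitness is never worse than that of the starting solution, at the known risk of stopping at a local optimum.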

4.2.2. Simulated Annealing Heuristic

Simulated annealing is a stochastic search method inspired by the annealing process in metallurgy, suggested by the authors of [60] and proposed by the authors of [61]. While simulated annealing does not guarantee finding an optimal solution, it can effectively escape local minima by exploring the search space through random movements, particularly in complex environments [62]. This enables suboptimal solutions to improve by employing a decreasing parameter known as the “temperature” [63]. Furthermore, simulated annealing is simple to implement and requires minimal parameter tuning, making it well-suited for a wide range of optimization problems, both combinatorial and continuous.
This method was chosen as the second worker heuristic due to its versatility and applicability to combinatorial optimization problems. Its probabilistic acceptance of solutions, even if they are not always superior, allows for the exploration of the solution space and avoids the local optima trap, a significant disadvantage of hill climbing. To adapt simulated annealing to our problem, we followed a similar approach as described in [8]. The simulated annealing metaheuristic used in this study, both to refine brood solutions and improve initial queens, is outlined below.
Starting from an arbitrary (initial) solution S_i, a solution S_j is chosen at random in the neighborhood of S_i. Let Δ_ij = f(S_j) − f(S_i) be the difference between the objective values of S_j and S_i. If the objective value of the new solution S_j is less than or equal to that of the previous one (Δ_ij ≤ 0), then the new solution S_j becomes the current one and the search process continues from S_j. On the other hand, if the objective value of the neighboring solution S_j is greater than that of the previous one (Δ_ij > 0), then S_j is accepted as the current solution with probability e^(−Δ_ij/C_k), where C_k represents the current value of the control parameter of the simulated annealing algorithm, i.e., the temperature. The algorithm begins with a relatively high temperature, so at the beginning most inferior neighboring solutions are accepted. During the execution of the algorithm, C_k remains constant over several iterations and then drops, so the probability of accepting inferior neighbors decreases in the terminal stage of the search process. The present work considered an initial temperature C_0 = 0.9, held constant over L_k = 8 iterations, with a geometric temperature reduction rule and a temperature reduction (cooling) factor c = 0.8, so that the temperature after each stage decreased according to the rule C_k = c·C_{k−1}. The simulated annealing algorithm (Algorithm 2) can be outlined as follows.
Algorithm 2. Pseudo-code for simulated annealing algorithm
Step 0: k := 1, S_i := S_start, S_best := S_i, c_1 := c_start
Step 1: Select at random S_j ∈ N(S_i)
If Δ_ij ≤ 0 then: S_i := S_j, and if f(S_i) < f(S_best) then S_best := S_i
else: if exp(−Δ_ij/c_k) > random[0, 1) then S_i := S_j
Step 2: After L_k repetitions of Step 1: c_{k+1} := g(c_k) and k := k + 1
Step 3: If the stopping criterion is not met, go to Step 1
Like hill climbing, transitions are generated by creating neighboring solutions using a swap operator. This operator randomly exchanges tasks within the solution provided by the brood. Solutions are accepted if they improve the current solution or meet the simulated annealing acceptance criteria, based on a predetermined number of iterations. The best solution found throughout the process is retained and presented as the final output.
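Algorithm 2 with the swap neighborhood and the parameter values quoted in the text (C_0 = 0.9, L_k = 8 moves per temperature stage, c = 0.8, 200 iterations) can be sketched in Python; this is an illustrative rendering, not the authors' implementation:

```python
import math
import random

def simulated_annealing(solution, fitness, t0=0.9, cool=0.8,
                        plateau=8, iters=200, rng=random):
    """SA worker sketch with the parameters quoted in the text."""
    cur = list(solution)
    cur_f = fitness(cur)
    best, best_f = list(cur), cur_f
    temp = t0
    for k in range(iters):
        a, b = rng.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[a], cand[b] = cand[b], cand[a]   # swap-neighbor move
        delta = fitness(cand) - cur_f
        # accept improvements always, worse moves with prob exp(-delta/temp)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur, cur_f = cand, cur_f + delta
        if cur_f < best_f:                    # retain the best solution found
            best, best_f = list(cur), cur_f
        if (k + 1) % plateau == 0:            # C_k = c * C_{k-1} after L_k moves
            temp *= cool
    return best, best_f
```

Tracking the incumbent separately from the current solution mirrors the text: the best solution found throughout the process is retained and presented as the final output.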

4.2.3. Local Search Operator

In addition to the two heuristic workers, it was decided to use a swap operator that works in parallel to these workers. A local search operator makes minimal changes to the current solution with the goal of improving its quality [64]. This operator is based on the findings of Pan et al. [39], who also investigated the SMWET problem. To further enhance solution quality, these authors applied an iterative local search to the global best solution G t . This search utilizes a simple binary exchange neighborhood. The local search procedure involves swapping two randomly selected tasks, subject to the condition that the first of these tasks is early with respect to the due date and the second is tardy with respect to the due date. This task-swapping operator was also introduced by the authors of [8]. The iterative local search in Algorithm 3 was based on the simple binary swap neighborhood.
Algorithm 3. Pseudo-code for Iterated local search algorithm
s_0 = G_t
s = Local Search ( s 0 )
Do{
s 1 = Perturbation (s)
s 2 = LocalSearch ( s 1 )
s = AcceptanceCriterion (s, s2)
}While (Not Termination)
If f(s) < f( G t ) then  G t = s
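One move of the binary-exchange neighborhood described above (swapping a randomly chosen early task with a tardy one, relative to the common due date) can be sketched as follows; the function name and data layout are illustrative:

```python
import random

def early_tardy_swap(seq, p, d, rng=random):
    """One binary-exchange move: swap a random early task with a
    random tardy task, classified by completion time against the
    common due date d; p maps task -> processing time."""
    early, tardy, t = [], [], 0
    for idx, job in enumerate(seq):
        t += p[job]
        (early if t <= d else tardy).append(idx)   # classify positions
    if not early or not tardy:
        return list(seq)                           # no valid swap exists
    i, j = rng.choice(early), rng.choice(tardy)
    out = list(seq)
    out[i], out[j] = out[j], out[i]
    return out
```

The move is then kept or discarded by the acceptance criterion of the iterated local search in Algorithm 3.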
In this work, both the heuristic and task-swapping operators are initially used to refine the initial queens and later to enhance the offspring produced through the genetic recombination of the queen and drone. The selection of the candidate heuristic is performed probabilistically in three steps: (i) Each candidate heuristic is assigned a selection probability. The heuristic with the highest probability after each iteration is chosen as the worker. Initially, all heuristics have a probability of 1/w, where w is the total number of workers. (ii) The average improvement imp̄_h achieved by each heuristic h, i.e., the average improvement of the brood solution, is calculated and stored. (iii) The selection probability is then updated to reflect the impact of the most effective heuristic. The new probability, p′, is calculated as the previous probability p plus the ratio of the average improvement contributed by heuristic h to the product of the sum of all average improvements and the total number of workers w. This calculation ensures that the new probability remains between 0 and 1. Formally, the new choice probability is given by Equation (6).
p′ = p + imp̄_h / (Σ_h imp̄_h · w)
The choice rule defined in Equation (6) is applied after each iteration of the heuristics, selecting the one with the highest choice probability. Once selected, each solution improvement method is executed a specific number of times: 200 iterations for simulated annealing, 50 iterations for hill climbing, and 40 iterations for local search. These iteration values were determined experimentally.
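The probability update of Equation (6) can be sketched as follows; the dictionary-based interface is an assumption for illustration:

```python
def update_probs(probs, avg_imp):
    """Sketch of the Eq. (6) update: each worker's selection
    probability grows with its share of the total average
    improvement, scaled by the number of workers w."""
    w = len(probs)
    total = sum(avg_imp.values())
    new = {}
    for h, p in probs.items():
        if total > 0:
            p = p + avg_imp[h] / (total * w)   # p' = p + imp_h / (sum(imp) * w)
        new[h] = min(p, 1.0)                   # keep the value within [0, 1]
    return new
```

After each iteration, the worker with the highest updated probability is selected, so heuristics that contribute larger average improvements are chosen more often.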

4.3. Population Updating

In each flight, after the broods obtained from crossing operations have been improved through the application of worker heuristics, the fitness of the newly improved broods is evaluated. If the fitness function of an improved brood is better than the fitness function of some queen, f( S B ) < f( S Q ), then the queen population is updated by removing the least fit queen and replacing her with the brood that has the best fitness, S Q = S B .
The evaluation function determines whether the newly generated individual (the improved brood) is suitable to be considered a new solution-generating individual (queen bee).
Therefore, the proposed MBO algorithm evaluates both queens and broods using Equation (1), applied to the partial solutions of each: f( S Q ) for queens and f( S B ) for broods. Figure 2 provides a schematic representation of the fitness evaluation process.
At the end of each flight, the queen population is updated by replacing the queens with the worst fitness with broods that have the best fitness, if any exist. The lists of generated broods and drones are then deleted. A new population of drones is generated for the subsequent flight, resulting in new broods from crossing the genotypes of the queens and the new drones.
This process is repeated until the algorithm’s stopping condition is met. Since queens are updated after each flight, the algorithm iterates through all the previously mentioned stages until the stopping criterion is fulfilled.

4.4. Parameter Updating

During each mating flight, a queen bee experiences a loss of energy, a decrease in flight speed, and a reduction in available space in her spermatheca. To accurately simulate these changes, parameters related to the queen’s energy, speed, and spermatheca capacity must be updated after each flight. The updating procedure follows the methods outlined by the authors of [27,28,29,53,54]. In this way, the relevant parameters updated during each flight are as follows.
  • Updating the queen’s energy: This is completed using Equation (2): E(t + 1) = E(t) − g . At each transition t, the energy is reduced by g units, where g corresponds to the energy reduction factor of the bees, as previously described in Equation (5).
  • Updating the queen’s speed: This is completed using Equation (3): S(t + 1) = α∙S(t). At each transition t, the speed is progressively reduced by a factor α of the previous speed.
  • Updating the spermatheca’s capacity: The capacity of the spermatheca is updated based on the number of drones with which the queen mates during each flight. This depends on the acceptance or rejection of the mating event between the queen and the drone, which is represented by the annealing probability described in Equation (4).
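The per-transition updates of Equations (2), (3), and (5) can be combined into a short sketch (illustrative; the closure-based interface is an assumption):

```python
def mating_flight_params(e0=0.9, s0=0.9, M=10, alpha=0.98):
    """Sketch of the flight parameter updates: g from Eq. (5),
    then E(t+1) = E(t) - g (Eq. (2)) and S(t+1) = alpha*S(t) (Eq. (3))."""
    g = 0.5 * e0 / M                 # Eq. (5): energy reduction factor

    def step(energy, speed):
        # one transition t -> t + 1 of the mating flight
        return energy - g, alpha * speed

    return g, step
```

With e0 = 0.9 and M = 10, g = 0.045, so the queen's energy supports roughly as many transitions as her spermatheca can hold drones, which is the intent of Equation (5).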

4.5. Stopping Conditions

In the implementation of the MBO algorithm, a stopping criterion was established based on reaching a predetermined number of iterations, equivalent to the number of flights. For the scheduling problem examined, it was determined that 20 flights within the MBO algorithm yielded satisfactory solutions for a substantial portion of the 280 problems investigated. Consequently, this number of flights was considered suitable for addressing the SMWET problem. The MBO metaheuristic proposed for the SMWET problem is described in Algorithm 4.
Algorithm 4. Pseudo-code for the proposed MBO algorithm
Define M (spermatheca size); Q = 4 (queen quantity); α = 0.98 (queen velocity reduction step)
Define E(t) and S(t) to be the queen’s energy and speed at time t, respectively
Define maxiter_hc, maxiter_sa, maxiter_ls to be the maximum numbers of iterations for the workers
Initialize workers to be simulated annealing, hill climbing and local search
Initialize queen genotypes according to the SPT and LPT scheduling rules for the first two queens and randomly generate the genotypes of the remaining two queens
Randomly use workers to improve the initial queens’ genotypes
while flight quantity < 20:
    t = 0
    Initialize e_0 = 0.9; s_0 = 0.9 and g = 0.5·E(0)/M
    Generate D = 10 drones at random
    while E(t) > 0 and the spermatheca is not full: (the queen still has energy to fly)
        Mutate the drone genotype S_D → S_D′
        if f(S_D′) < f(S_D): (if the mutated drone fitness is better than that of the original drone, accept the mutated genotype as the candidate drone)
            S_D = S_D′
        else if f(S_D′) < f(S_Q): (if the mutated drone fitness is better than a queen’s fitness, replace the queen’s genotype with the mutated drone’s genotype)
            S_Q = S_D′
        end if
        Select the candidate drone genotype to cross with the queen’s genotype according to the annealing probability (Equation (4))
        if rand(0, 1) < p(q, d): add the drone’s genotype to the spermatheca
        t = t + 1; E(t + 1) = E(t) − g; S(t + 1) = αS(t)
    end while
    for brood = 1 to the total number of broods:
        select drone material from the queen’s spermatheca at random
        if rand(0, 1) ≤ 0.3: apply the “drone–queen” crossover operator and generate a brood
        else: apply the “queen–drone” crossover operator and generate a brood
        end if
        use workers to improve the brood’s genotype according to the probabilistic rule (Equation (6))
        for workers [hill climbing, simulated annealing and local search]:
            determine p_best, the best election probability associated with a worker
            select the worker
            apply the worker to all broods to obtain mutated broods S_B → S_B′
            update p_best
        end for
        if f(S_B′) < f(S_B): (if the mutated brood fitness is better than the original brood fitness, keep the mutated brood as a partial solution)
            S_B = S_B′
        end if
    end for
    if f(S_B) < f(S_Q): (if a brood has better fitness than some queen, replace that queen by the brood)
        S_Q = S_B
    end if
    kill all broods
end while
The flowchart of the MBO algorithm, adapted for this study, is provided in Figure 3.

5. Computational Experiments and Results

5.1. Benchmarks Used for Experimentation

The MBO algorithm for addressing the SMWET problem was implemented in Python. The algorithm was evaluated using a suite of test problems developed by the authors of [37] and is accessible in the OR-Library [52]. This collection comprises a total of 280 benchmark problems, spanning seven categories with n = 10, 20, 50, 100, 200, 500, and 1000 tasks, with 10 problems in each category. For each problem, a due date restrictiveness factor h = 0.2, 0.4, 0.6, and 0.8 was considered.
The common due date value is generated by d = [h∙∑ p j ], meaning that, for each problem, the common due date is estimated by multiplying the restrictive factor h by the sum of the processing times ( p j ) of the n tasks. Thus, the smaller the value of h, the more restrictive the problem becomes.
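The due date construction and the weighted earliness/tardiness objective can be sketched as follows, assuming per-task earliness weights a_j and tardiness weights b_j as in the Biskup and Feldmann instances, and a schedule starting at time zero (for loose due dates, inserting idle time before the first task may further reduce the penalty):

```python
import math

def common_due_date(p, h):
    """d = floor(h * sum of processing times), per the benchmark definition."""
    return math.floor(h * sum(p))

def weighted_et(seq, p, a, b, d):
    """Total weighted earliness/tardiness of a sequence started at
    time zero; a and b are per-task earliness/tardiness weights."""
    t, total = 0, 0
    for j in seq:
        t += p[j]                                           # completion time of task j
        total += a[j] * max(d - t, 0) + b[j] * max(t - d, 0)
    return total
```

Because d shrinks with h, a smaller h forces more tasks past the due date regardless of the sequence, which is why the h = 0.2 instances are the most restrictive.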
The results are analyzed by comparing the values obtained by the proposed MBO metaheuristic with the upper bound (UB) values provided by the authors of [37], who used two specific heuristics for the problem. Additionally, the effectiveness of the MBO metaheuristic is compared with other metaheuristic techniques found in the literature that have been applied to the SMWET problem using the same set of test problems [8,38,39,40,41,42,43].
The performance of the MBO algorithm is assessed using the percent error index, which compares the solutions obtained by the MBO algorithm (referred to as the obtained MBO value) with the UB provided by the authors of [37] (denoted as UB_B&F). This index is calculated using Equation (7).
percent error = ((obtained MBO value − UB_B&F) / UB_B&F) × 100%

5.2. Results Obtained on the Benchmarks from [37]

Table 2 and Table 3 provide a detailed illustration of the results obtained using the MBO metaheuristic for the 280 instances proposed by the authors of [37]. Table 2 displays the results for problems with n = 10, 20, and 50 tasks, while Table 3 presents the results for larger problems with n = 100, 200, 500, and 1000 tasks. Each table includes the problem size (n), the instance number corresponding to each problem (k), the value of the due date restrictive factor (h), the UB reported in [37], the value obtained by the MBO metaheuristic, and the percent error of the MBO value relative to the UB value for each problem, as determined by Equation (7).
As shown in Table 2 and Table 3, the performance of the MBO metaheuristic adopted in this study is of high quality. This is evident from the consistent average percentage improvements observed across a wide range of problems compared to the UB solutions. For problems with n = 10, 20, and 50 tasks, and due date restrictiveness factors h = 0.2, 0.4, 0.6, and 0.8 (a total of 120 problems), an average percentage improvement of −1.40% was achieved relative to the UBs.
Similarly, for problems with n ≥ 100 tasks, the adapted MBO algorithm consistently outperformed the UB solutions for all problems with restrictive factors h = 0.2 and h = 0.4, as shown in Table 3. Specifically, for the more restrictive cases (with h = 0.2), better average percent improvements are obtained: −3.84% for n = 20, −5.69% for n = 50, −6.11% for n = 100, and −5.46% for n = 200. For the set of problems with n = 10, optimal solutions are achieved using a complete scheduling formulation with the LINGO software version 20.0.1.12 [8].
On the other hand, for problems with due date restrictiveness factors of h = 0.6 and h = 0.8, the adapted MBO metaheuristic exhibited strong performance for n ≤ 50 tasks. For n = 20 it achieved average improvements of −0.72% (h = 0.6) and −0.41% (h = 0.8), and for n = 50 it achieved an average improvement of −0.20% for h = 0.6 and an average error of 0.28% for h = 0.8 (Table 2). However, for problems with due date restrictiveness factors h = 0.6 and h = 0.8 and a number of scheduling tasks n ≥ 100, the adapted MBO metaheuristic yielded solutions with a positive average deviation relative to the UBs. For instance, for n = 100 tasks and h = 0.6, an average error of 0.8% relative to the UBs was observed, and for n = 1000 and h = 0.8, the average percentage error was 4.76% relative to the UBs.
Regarding the performance of the adapted MBO algorithm, it is evident that it consistently outperformed the published UBs [37] (Table 2 and Table 3) for all problems with due date restrictiveness factors of h = 0.2 and h = 0.4.
However, when addressing problems with restrictive factors of h = 0.6 and h = 0.8, it was observed that for n ≥ 100 tasks, the algorithm generally required more time to find a solution, often resulting in a positive error relative to the UBs [37].
This behavior can be attributed to the less restrictive nature of due dates with h = 0.6 or h = 0.8. These less stringent due dates allow for a broader range of feasible schedules, as they permit a wider range of possible starting times beyond zero. Liao and Cheng [41] emphasize this condition in their paper on the SMWET problem, categorizing due dates with restrictive factors of h = 0.6 and h = 0.8 as loose and those with factors of h = 0.2 and h = 0.4 as tight.

5.3. Comparison of Results

In the existing literature on the SMWET problem, at least six studies have been identified that utilize the set of test problems proposed by the authors of [37]. These studies employ a diverse range of specific metaheuristics to address the SMWET problem, thereby validating these approaches as effective methodologies for solving scheduling problems, particularly those of the SMWET type.
Authors such as Feldmann and Biskup [8] extended the study of the test problems presented in [37], comparing five different metaheuristics: evolutionary strategy, simulated annealing, threshold accepting, an evolutionary variant with destabilization, and another version of threshold accepting with stabilization. The authors conducted ten runs of each algorithm for each problem and selected the best objective value obtained among the five methods. This analysis focused solely on due date restrictions with parameters h = 0.2 and h = 0.4.
Table 4 compares the average performance of the metaheuristics proposed in [8] with the proposed MBO for the SMWET problem. The percentage error relative to the UBs [37] is calculated using Equation (7). The results show that MBO generally performs similarly to [8] for most instances. For problems with n ≥ 100, MBO deviates slightly from the results of [8], with an average positive deviation of 0.32% across the 140 problems with h = 0.2 and h = 0.4. This finding suggests that MBO is a competitive alternative for solving SMWET, including large-scale instances.

Comparison with Metaheuristics from the Literature

In this section, various metaheuristic approaches from the literature used to address the SMWET problem are compared, utilizing the set of test problems provided by the authors of [37] as a benchmark for performance evaluation. The following studies were identified: Hino et al. [38] used a GA, TS, and two hybrid methods combining both, tabu search + genetic algorithm (HTG) and genetic algorithm + tabu search (HGT). They started from an initial solution produced by the heuristic method of the authors of [40], a sequential exchange approach (SEA). This method, similar to the two heuristics of [37], divides the tasks into two sets, one of early tasks and another of late tasks, and evaluates exchange operations between the sets. Liao and Cheng [41] proposed a hybrid metaheuristic that uses TS within a variable neighborhood search (VNS/TS). Pan et al. [39] used a discrete particle swarm optimization (DPSO) algorithm. Nearchou [43] used differential evolution (DE). Pham et al. [42] utilized an algorithm based on the foraging behavior of bees.
It is worth noting that, in the studies by the authors of [38] and [8], the authors ran each problem ten times, saving the best objective function value as the result.
The results in each study are presented as a function of the average percentage error (the improvement) of the objective function relative to the UB values provided by the authors of [37]. These results are summarized in Table 5, which compares the average errors yielded by different metaheuristics for the SMWET problem. Overall, the results are very close, with less than a 0.3% error difference between the best- and worst-performing metaheuristic methods (excluding the MBO method).
Table 5 shows that the best-performing metaheuristics are foraging bees [42], DPSO [39], and VNS/TS [41], delivering an average improvement of 2.15% across the 280 problems. The best percentage improvement values relative to the UBs of [37] are highlighted in bold.
Regarding the metaheuristic approaches used in the literature, using the test problems from [37] as a basis, the MBO metaheuristic method adapted in this work remained competitive across a wide range of problems. Specifically, MBO performed similarly to other approaches for more constrained problems, i.e., those involving due date restrictive factors of h = 0.2 and h = 0.4 (Table 5). The MBO method achieved an average improvement of −4.69% for the 70 problems with h = 0.2, compared to an average improvement of −4.96% by the authors of [39,42] for the same set, representing a 0.27% error difference from the best metaheuristic in the literature (Table 6). For the 70 problems with h = 0.4, MBO achieved an average improvement of −2.86%, compared to −3.27% by the authors of [39,41,42], resulting in a 0.41% error difference.
For problems with wider due date restrictions (h = 0.6 and h = 0.8), the MBO algorithm achieved average percentage errors of 1.40% and 2.03%, respectively, compared to the UB values from [37].
Unlike the metaheuristic methods from the literature, the MBO algorithm in this study used the same parameters across all 280 problems, without any parameter adjustments for specific problems. This consistency highlights the MBO algorithm’s competitiveness across a wide range of problems, achieving an average percentage improvement of −1.03% for the entire test set and a 1.12% error difference compared to the best-performing metaheuristic methods [39,41,42].

5.4. Computing Time Analysis

Table 7 shows a comparison of execution times for the set of problems. However, it is crucial to acknowledge that these computation times are not directly comparable across different algorithms, mainly due to two reasons. First, the programming languages used for these metaheuristics vary in how they translate code into machine language, complicating objective comparisons. Compiled languages generally execute faster than interpreted ones, but interpreted languages offer more flexibility, leading to potential discrepancies in execution speed. Second, differences in the processing power and speed of the computers used further impact these comparisons.
As a result, direct comparisons with other metaheuristic approaches were not conducted. In some cases, reported execution times were either unavailable [42] or not comparable due to the use of compiled languages, which can be up to 100 times faster than Python, the language used for the MBO metaheuristic in this study.
Table 7 presents the computational times of the MBO method under two scenarios: (i) considering only h = 0.2 and h = 0.4 and (ii) considering all due date restrictive factors: h = 0.2, 0.4, 0.6, and 0.8. This comparison allows us to evaluate the computational times of the MBO method against those achieved by the authors of [8] and the DE algorithm proposed by the authors of [43].
The results show that, in comparison to [8], the MBO algorithm performs similarly on most problems, with larger deviations observed in problems involving a high number of tasks (n ≥ 500). However, when compared to the DE algorithm, the MBO computational times differ significantly for most problems.
Nonetheless, it is important to note that the computational times are not directly comparable due to differences in algorithm specifications, the programming languages used, and the characteristics of the computers on which they were run.

5.5. Statistical Analysis

A comparison is conducted to determine whether the errors generated by the adapted MBO metaheuristic are significantly different from the UB values established by the authors of [37]. These findings were supported by two statistical analyses: (i) An analysis of mean differences by the problem size: mean differences were analyzed for problems with 20, 50, 100, 200, 500, and 1000 tasks, considering each of the four delivery date constraint factors. (ii) An analysis of mean differences by the restrictive factor: mean differences were analyzed for problems with each delivery date restrictive factor value (h = 0.2, 0.4, 0.6, and 0.8), regardless of the number of tasks.
The model comprises the following elements. Response variable: the error relative to the best solutions reported by the authors of [37]. Treatment level 1: the upper-bound (UB) solutions reported by the authors of [37]. Treatment level 2: the solutions generated by the MBO metaheuristic adapted in this study. µ: the overall mean response. µ1: the mean response for treatment level 1. µ2: the mean response for treatment level 2. T_i: the differential effect on the response of level i of the treatment variable. ε_ij: the error term.
Traditional ANOVA model: Y_ij = µ + T_i + ε_ij. The working hypotheses of the model are H0: µ1 = µ2 and H1: the means are not equal.
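The comparison above is a one-way ANOVA with two treatment levels. A minimal sketch of the computation, using the standard library only and illustrative placeholder error values (not the paper's data):

```python
def one_way_anova(groups):
    """One-way ANOVA: return (SS_between, SS_within, F) for a list of groups."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between, df_within = len(groups) - 1, len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, f_stat

# Hypothetical percentage errors for the two treatment levels (UB vs. MBO).
ub_errors = [0.00, -0.84, -1.60, -1.91, -3.06]
mbo_errors = [-3.84, -1.63, -5.69, -4.65, -6.11]
_, _, f_stat = one_way_anova([ub_errors, mbo_errors])
# H0 (equal means) is rejected at alpha = 0.05 when f_stat exceeds the
# critical value F(0.05; 1, n1 + n2 - 2).
```

With scipy available, `scipy.stats.f_oneway` returns the same F statistic together with its p-value.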

5.5.1. Statistical Analysis of Mean Differences Based on the Number of Tasks

The observations correspond to 40 problems with two treatment levels.
Table 8, Table 9 and Table 10 show that the p-value of the F-test is less than 0.05, providing statistical evidence to reject the null hypothesis (H0). This indicates that the difference in mean error between the solutions obtained using the adapted MBO metaheuristic and the UBs reported by the authors of [37] is statistically significant at a 95% confidence level. Specifically, the solutions generated for problem sets with n = 20, 50, and 100, considering the four values of the due date restrictive factor (h = 0.2, 0.4, 0.6, and 0.8), differ significantly from the UB values obtained by the authors of [37] for these problem sizes.
Table 11, Table 12 and Table 13 show that the p-value of the F-test exceeds 0.05. This provides statistical evidence to not reject the null hypothesis (H0). Consequently, the difference in means between the treatment levels, corresponding to the errors obtained using the adapted MBO metaheuristic and the UB values reported by the authors of [37], is not statistically significant at a 95% confidence level. Therefore, the solutions generated for problem sets with n = 200, 500, and 1000 tasks, considering the four values of the due date restrictive factor (h = 0.2, 0.4, 0.6, and 0.8), do not differ significantly from the UB values obtained by the authors of [37] for these same problem sizes.

5.5.2. Statistical Analysis of Mean Differences According to the Restrictive Factor of the Delivery Date

The observations correspond to 120 problems, with two treatment levels.
Table 14, Table 15, Table 16 and Table 17 show that the p-value associated with the F-test exceeds 0.05. This finding indicates that the null hypothesis (H0) cannot be rejected, i.e., there is no statistically significant difference in means between the treatment levels, which represent the errors derived from the adapted MBO metaheuristic and the UBs documented by the authors of [37], at a 95% confidence level. Therefore, when the instances are grouped only by the restrictive factor (h = 0.2, 0.4, 0.6, and 0.8), the MBO outcomes do not differ significantly from the UBs established by the authors of [37].

6. Conclusions

This paper explores the potential of the MBO algorithm to tackle complex NP-hard optimization problems. The study proposes an enhanced MBO heuristic, introducing modifications to the original algorithm to optimize the objective function and generate high-quality solutions within reasonable computational time; to our knowledge, this approach had not previously been applied in this context. Specifically, we consider the discrete problem of minimizing the total penalties for early and late completion of tasks on a single machine with a common due date, 1 | d_j = d_res | Σ(α_j E_j + β_j T_j) [49,51,65], known as SMWET.
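As a concrete illustration of this objective, the sketch below evaluates a job sequence under a common due date d, with E_j = max(0, d − C_j) and T_j = max(0, C_j − d); the job data are illustrative, not taken from the benchmark set.

```python
def smwet_penalty(sequence, p, alpha, beta, d):
    """Total weighted earliness/tardiness of `sequence` (job indices) for
    processing times p, earliness weights alpha, tardiness weights beta,
    and common due date d. Jobs run back-to-back from time 0."""
    t, total = 0, 0
    for j in sequence:
        t += p[j]  # completion time C_j of job j
        total += alpha[j] * max(0, d - t) + beta[j] * max(0, t - d)
    return total

# Three illustrative jobs: processing times, earliness and tardiness weights.
p, alpha, beta = [3, 2, 4], [1, 2, 1], [3, 1, 2]
print(smwet_penalty([1, 0, 2], p, alpha, beta, d=5))  # → 14
```

The search problem the MBO metaheuristic addresses is finding the sequence that minimizes this penalty.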
Compared to the UB values from [37], our proposed MBO algorithm achieved an average percentage improvement of 0.91% across 280 benchmark problems. It yielded better solutions for 160 problems, i.e., 57% of the total number of problems addressed.
The metaheuristic demonstrated efficient performance on problems with restrictive due date factors of h = 0.2 and h = 0.4, achieving an average percentage improvement of 3.71% across 140 evaluated cases. On the other hand, in problems with restrictive due date factors h = 0.6 and h = 0.8, also in 140 cases, the metaheuristic achieved an average percentage error of 1.72% compared to the UBs published by the authors of [37].
Compared to other metaheuristics in the literature for the same problem set, the MBO metaheuristic demonstrated strong competitiveness. Notably, MBO was applied uniformly to all 280 problems without parameter tuning, unlike the other metaheuristic approaches. MBO achieved a maximum error of 1.12% against a best improvement percentage of 2.15%, corresponding to an average efficiency of 98.88% relative to solutions from well-known metaheuristics such as the bees algorithm [42], DPSO [39], and VNS/TS [41] over the complete set of 280 problems. The proposed MBO algorithm also proved competitive with recent studies, such as those by the authors of [30,33,66,67], achieving comparable optimal values and computation times when applied to a different dataset for the same problem.
Moreover, when comparing average computation times for finding optimal solutions, the adapted MBO metaheuristic exhibited similar behavior to the times reported by the authors of [8] for 140 problems with restrictive due date factors of h = 0.2 and h = 0.4. Similarly, comparable average computation times were observed relative to the DE metaheuristic for all 280 problems. Differences were mainly noted in problems involving a number of tasks n ≥ 100. However, the computation times are not directly comparable due to inherent differences in programming languages and the capabilities of the systems used.
A statistical analysis of the mean difference relative to the UB values reported by the authors of [37] confirms that the adapted MBO algorithm is significantly more efficient for all problems with restrictive due date factors (h = 0.2 and h = 0.4). This holds true regardless of the number of tasks to be sequenced and at a 5% significance level (α = 0.05). However, for problems with factors h = 0.6 and h = 0.8, the solutions obtained by the MBO metaheuristic do not exhibit statistically significant differences compared to the UB values.
Direct comparisons with other algorithms are hindered because execution times were often unavailable, or were obtained with compiled languages that run substantially faster than Python. The processing power of the hardware also significantly affects the reported results. Moreover, scalability deteriorates notably when constraints are added or the number of tasks increases, which remains a limitation of the method in its current state. While the methodology employed in this study has shown promising results, further comparative studies are necessary to evaluate its effectiveness in industrial settings and to determine its suitability for specific problems.
Future work could focus on advancing the MBO metaheuristic through hybrid approaches with other optimization techniques, such as ant colony optimization and particle swarm optimization, aiming to develop hybrid algorithms that combine the strengths of each method to tackle more complex optimization problems.
Finally, it is essential to conduct rigorous studies on the selection and tuning of the parameters of MBO. By identifying optimal configurations, we can significantly improve the algorithm’s performance and stability across diverse applications. A promising approach involves leveraging machine learning techniques to automate this process, reducing manual effort and enhancing the algorithm’s adaptability to various problem types.

Author Contributions

Conceptualization, P.P.; Methodology, M.M. and M.A.; Software, M.M.; Validation, G.F.; Formal analysis, P.P.; Investigation, P.P.; Resources, M.A.; Data curation, M.M.; Writing – original draft, G.F.; Writing – review & editing, M.M.; Visualization, M.A.; Supervision, G.F.; Project administration, G.F.; Funding acquisition, P.P. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the DICYT (the Scientific and Technological Research Bureau) of the University of Santiago of Chile (USACH) and the Department of Industrial Engineering.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

TS: Tabu search
GM: Genotype marker
UB: Upper bound
GA: Genetic algorithm
DE: Differential evolution
SPT: Shortest processing time
LPT: Largest processing time
MBO: Marriage in honey-bee optimization
HTG: Hybrid tabu search + genetic algorithm
HGT: Hybrid genetic algorithm + tabu search
SEA: Sequential exchange approach
DPSO: Discrete particle swarm optimization
SMWET: Single-machine weighted earliness/tardiness problem
VNS/TS: Variable neighborhood search/tabu search

References

  1. Arık, O.A. Optimal Policies for Minimizing Total Job Completion Times and Deviations from Common Due Dates in Unrelated Parallel Machine Scheduling. OPSEARCH 2024, 61, 1654–1680. [Google Scholar] [CrossRef]
  2. Arik, O.A. A Heuristic for Single Machine Common Due Date Assignment Problem with Different Earliness/Tardiness Weights. OPSEARCH 2023, 60, 1561–1574. [Google Scholar] [CrossRef]
  3. Tan, Z.; Fu, G. Just-in-Time Scheduling Problem with Affine Idleness Cost. Eur. J. Oper. Res. 2024, 313, 954–976. [Google Scholar] [CrossRef]
  4. Babu, S.; Girish, B.S. Pareto-Optimal Front Generation for the Bi-Objective JIT Scheduling Problems with a Piecewise Linear Trade-off between Objectives. Oper. Res. Perspect. 2024, 12, 100299. [Google Scholar] [CrossRef]
  5. Baals, J.; Emde, S.; Turkensteen, M. Minimizing Earliness-Tardiness Costs in Supplier Networks—A Just-in-Time Truck Routing Problem. Eur. J. Oper. Res. 2023, 306, 707–741. [Google Scholar] [CrossRef]
  6. Shabtay, D.; Mosheiov, G.; Oron, D. Single Machine Scheduling with Common Assignable Due Date/Due Window to Minimize Total Weighted Early and Late Work. Eur. J. Oper. Res. 2022, 303, 66–77. [Google Scholar] [CrossRef]
  7. Arık, O.A.; Schutten, M.; Topan, E. Weighted Earliness/Tardiness Parallel Machine Scheduling Problem with a Common Due Date. Expert Syst. Appl 2022, 187, 115916. [Google Scholar] [CrossRef]
  8. Feldmann, M.; Biskup, D. Single-Machine Scheduling for Minimizing Earliness and Tardiness Penalties by Meta-Heuristic Approaches. Comput. Ind. Eng. 2003, 44, 307–323. [Google Scholar] [CrossRef]
  9. Lian, J.; Hui, G. Human Evolutionary Optimization Algorithm. Expert Syst. Appl. 2024, 241, 122638. [Google Scholar] [CrossRef]
  10. Ezugwu, A.E.; Shukla, A.K.; Nath, R.; Akinyelu, A.A.; Agushaka, J.O.; Chiroma, H.; Muhuri, P.K. Metaheuristics: A Comprehensive Overview and Classification along with Bibliometric Analysis. Artif. Intell. Rev. 2021, 54, 4237–4316. [Google Scholar] [CrossRef]
  11. Sabattin, J.; Fuertes, G.; Alfaro, M.; Quezada, L.; Vargas, M. Optimization of Large Electric Power Distribution Using a Parallel Genetic Algorithm with Dandelion Strategy. Turk. J. Electr. Eng. Comput. Sci. 2018, 26, 2648–2660. [Google Scholar] [CrossRef]
  12. Wang, G.G. Moth Search Algorithm: A Bio-Inspired Metaheuristic Algorithm for Global Optimization Problems. Memet. Comput. 2018, 10, 151–164. [Google Scholar] [CrossRef]
  13. Qian, J.; Zhan, Y. The Due Date Assignment Scheduling Problem with Delivery Times and Truncated Sum-of-Processing-Times-Based Learning Effect. Mathematics 2021, 9, 3085. [Google Scholar] [CrossRef]
  14. Solgi, R.; Loáiciga, H.A. Bee-Inspired Metaheuristics for Global Optimization: A Performance Comparison. Artif. Intell. Rev. 2021, 54, 4967–4996. [Google Scholar] [CrossRef]
  15. Chandrashekhar, M.; Dhal, P.K. Multi-Objective Economic and Emission Dispatch Problems Using Hybrid Honey Bee Simulated Annealing. Meas. Sens. 2024, 32, 101065. [Google Scholar] [CrossRef]
  16. Younis, A.; Belabbes, F.; Cotfas, P.A.; Cotfas, D.T. Utilizing the Honeybees Mating-Inspired Firefly Algorithm to Extract Parameters of the Wind Speed Weibull Model. Forecasting 2024, 6, 357–377. [Google Scholar] [CrossRef]
  17. Zhang, Q.; Bu, X.; Gao, H.; Li, T.; Zhang, H. A Hierarchical Learning Based Artificial Bee Colony Algorithm for Numerical Global Optimization and Its Applications. Appl. Intell. 2024, 54, 169–200. [Google Scholar] [CrossRef]
  18. Kiliçarslan, S.; Dönmez, E. Improved Multi-Layer Hybrid Adaptive Particle Swarm Optimization Based Artificial Bee Colony for Optimizing Feature Selection and Classification of Microarray Data. Multimed. Tools Appl. 2024, 83, 67259–67281. [Google Scholar] [CrossRef]
  19. Zhu, F.; Shuai, Z.; Lu, Y.; Su, H.; Yu, R.; Li, X.; Zhao, Q.; Shuai, J. OBABC: A One-Dimensional Binary Artificial Bee Colony Algorithm for Binary Optimization. Swarm Evol. Comput. 2024, 87, 101567. [Google Scholar] [CrossRef]
  20. Pan, X.; Wang, Y.; Lu, Y.; Sun, N. Improved Artificial Bee Colony Algorithm Based on Two-Dimensional Queue Structure for Complex Optimization Problems. Alex. Eng. J. 2024, 86, 669–679. [Google Scholar] [CrossRef]
  21. Lobato, F.S.; Steffen, V.; da Silva Neto, A.J. Artificial Bee Colony Algorithm. In Computational Intelligence Applied to Inverse Problems in Radiative Transfer; Springer: Berlin/Heidelberg, Germany, 2023; pp. 85–93. ISBN 978-3-031-43544-7. [Google Scholar]
  22. Benatchba, K.; Admane, L.; Koudil, M. Using Bees to Solve a Data-Mining Problem Expressed as a Max-Sat One. In Proceedings of the Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, IWINAC, Canary Islands, Spain, 15–18 June 2005; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2005; Volume 3562, pp. 212–220. [Google Scholar]
  23. Palominos, P.; Ortega, C.; Alfaro, M.; Fuertes, G.; Vargas, M.; Camargo, M.; Parada, V.; Gatica, G. Chaotic Honeybees Optimization Algorithms Approach for Traveling Salesperson Problem. Complexity 2022, 2022, 8903005. [Google Scholar] [CrossRef]
  24. Alotaibi, M. Hybrid Metaheuristic Technique for Optimal Container Resource Allocation in Cloud. Comput. Commun. 2022, 191, 477–485. [Google Scholar] [CrossRef]
  25. Yogeshwar, A.; Kamalakkannan, S. Proposed Association Rule Hiding Based Privacy Preservation Model with Block Chain Technology for IoT Healthcare Sector. Comput. Methods Biomech. Biomed. Eng. 2023, 26, 1898–1915. [Google Scholar] [CrossRef]
  26. Xiao, R.; Wu, H.; Hu, L.; Hu, J. A Swarm Intelligence Labour Division Approach to Solving Complex Area Coverage Problems of Swarm Robots. Int. J. Bio-Inspired Comput. 2020, 15, 224–238. [Google Scholar] [CrossRef]
  27. Abbass, H.A. MBO: Marriage in Honey Bees Optimization—A Haplometrosis Polygynous Swarming Approach. In Proceedings of the International Congress on Evolutionary Computation, Seoul, Republic of Korea, 27–30 May 2001; pp. 207–214. [Google Scholar]
  28. Abbass, H.A. A Monogenous MBO Approach to Satisfiability. In Proceedings of the International Conference on Computational Inteligence for Modelling Control and Automation, CIMCA, Las Vegas, NV, USA, 9–11 July 2001. [Google Scholar]
  29. Abbass, H.A. A Single Queen Single Worker Honey-Bees Approach to 3-SAT. In Proceedings of the The Genetic and Evolutionary Computation Conference, GECCO, San Francisco, CA, USA, 7–11 July 2001; pp. 807–814. [Google Scholar]
  30. Lv, Z.-G.; Zhang, L.-H.; Wang, X.-Y.; Wang, J.-B. Single Machine Scheduling Proportionally Deteriorating Jobs with Ready Times Subject to the Total Weighted Completion Time Minimization. Mathematics 2024, 12, 610. [Google Scholar] [CrossRef]
  31. Zhang, X.G.; Tang, X.M. The Two Single-Machine Scheduling Problems with Slack Due Date to Minimize Total Early Work and Late Work. J. Oper. Res. Soc. China 2024, 43, 2284–2291. [Google Scholar] [CrossRef]
  32. Li, S.S.; Chen, R.X. Scheduling with Common Due Date Assignment to Minimize Generalized Weighted Earliness–Tardiness Penalties. Optim. Lett. 2020, 14, 1681–1699. [Google Scholar] [CrossRef]
  33. Atsmony, M.; Mosheiov, G. Single Machine Scheduling to Minimize Maximum Earliness/Tardiness Cost with Job Rejection. Optim. Lett 2024, 18, 751–766. [Google Scholar] [CrossRef]
  34. Qian, J.; Chang, G.; Zhang, X. Single-Machine Common Due-Window Assignment and Scheduling with Position-Dependent Weights, Delivery Time, Learning Effect and Resource Allocations. J. Appl. Math. Comput. 2024, 70, 1965–1994. [Google Scholar] [CrossRef]
  35. Lee, C.Y.; Kim, S.J. Parallel Genetic Algorithms for the Earliness-Tardiness Job Scheduling Problem with General Penalty Weights. Comput. Ind. Eng. 1995, 28, 231–243. [Google Scholar] [CrossRef]
  36. James, R.J.W. Using Tabu Search to Solve the Common Due Date Early/Tardy Machine Scheduling Problem. Comput. Oper. Res. 1997, 24, 199–208. [Google Scholar] [CrossRef]
  37. Biskup, D.; Feldmann, M. Benchmarks for Scheduling on a Single Machine against Restrictive and Unrestrictive Common Due Dates. Comput. Oper. Res. 2001, 28, 787–801. [Google Scholar] [CrossRef]
  38. Hino, C.M.; Ronconi, D.P.; Mendes, A.B. Minimizing Earliness and Tardiness Penalties in a Single-Machine Problem with a Common Due Date. Eur. J. Oper. Res. 2005, 160, 190–201. [Google Scholar] [CrossRef]
  39. Pan, Q.K.; Tasgetiren, M.F.; Liang, Y.C. Minimizing Total Earliness and Tardiness Penalties with a Common Due Date on a Single-Machine Using a Discrete Particle Swarm Optimization Algorithm. In Proceedings of the International Conference on Swarm Intelligence, Brussels, Belgium, 4–7 September 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 460–467. [Google Scholar]
  40. Lin, S.W.; Chou, S.Y.; Ying, K.C. A Sequential Exchange Approach for Minimizing Earliness–Tardiness Penalties of Single-Machine Scheduling with a Common Due Date. Eur. J. Oper. Res. 2007, 177, 1294–1301. [Google Scholar] [CrossRef]
  41. Liao, C.J.; Cheng, C.C. A Variable Neighborhood Search for Minimizing Single Machine Weighted Earliness and Tardiness with Common Due Date. Comput. Ind. Eng. 2007, 52, 404–413. [Google Scholar] [CrossRef]
  42. Pham, D.T.; Koc, E.; Lee, J.Y.; Phrueksanant, J. Using the Bees Algorithm to Schedule Jobs for a Machine. In Proceedings of the International Conference on Laser Metrology, CMM and Machine Tool Performance, Euspen, Cardiff, UK, 2 July 2007; pp. 430–439. [Google Scholar]
  43. Nearchou, A.C. A Differential Evolution Approach for the Common Due Date Early/Tardy Job Scheduling Problem. Comput. Oper. Res. 2008, 35, 1329–1343. [Google Scholar] [CrossRef]
  44. Cheng, T.C.E. A Duality Approach to Optimal Due-Date Determination. Eng. Optim. 1985, 9, 127–130. [Google Scholar] [CrossRef]
  45. Cheng, T.C.E. An Algorithm for the Con Due-Date Determination and Sequencing Problem. Comput. Oper. Res. 1987, 14, 537–542. [Google Scholar] [CrossRef]
  46. Quaddus, M.A. A Generalized Model of Optimal Due-Date Assignment by Linear Programming. J. Oper. Res. Soc. 1987, 38, 353–359. [Google Scholar] [CrossRef]
  47. Bector, C.R.; Gupta, Y.P.; Gupta, M.C. Determination of an Optimal Common Due Date and Optimal Sequence in a Single Machine Job Shop. Int. J. Prod. Res. 1988, 26, 613–628. [Google Scholar] [CrossRef]
  48. Baker, K.R.; Scudder, G.D. On the Assignment of Optimal Due Dates. J. Oper. Res. Soc. 1989, 40, 93–95. [Google Scholar] [CrossRef]
  49. Hall, N.G.; Posner, M.E. Earliness-Tardiness Scheduling Problems, i: Weighted Deviation of Completion Times about a Common Due Date. Oper. Res. 1991, 39, 836–846. [Google Scholar] [CrossRef]
  50. Cheng, T.C.E.; Kahlbacher, H.G. A Proof for the Longest-Job First Policy in One Machine Scheduling. Nav. Res. Logist. 1991, 38, 715–720. [Google Scholar] [CrossRef]
  51. Hoogeveen, J.A.; van de Velde, S.L. Scheduling around a Small Common Due Date. Eur. J. Oper. Res. 1991, 55, 237–242. [Google Scholar] [CrossRef]
  52. Beasley, J.E. Common Due Date Scheduling. Available online: http://people.brunel.ac.uk/~mastjjb/jeb/orlib/schinfo.html (accessed on 26 June 2024).
  53. Teo, J.; Abbass, H.A. An Annealing Approach to the Mating-Flight Trajectories in the Marriage in Honey Bees Optimization Algorithm. Int. J. Comput. Intell. Appl. 2001, 3, 199–211. [Google Scholar] [CrossRef]
  54. Teo, J.; Abbass, H.A. A True Annealing Approach to the Marriage in Honey-Bees Optimization Algorithm. Int. J. Comput. Intell. Appl. 2003, 3, 199–211. [Google Scholar] [CrossRef]
  55. Afshar, A.; Bozorg Haddad, O.; Mariño, M.A.; Adams, B.J. Honey-Bee Mating Optimization (HBMO) Algorithm for Optimal Reservoir Operation. J. Franklin Inst. 2007, 344, 452–462. [Google Scholar] [CrossRef]
  56. Freyre-Echevarría, A.; Alanezi, A.; Martínez-Díaz, I.; Ahmad, M.; El-Latif, A.A.A.; Kolivand, H.; Razaq, A. An External Parameter Independent Novel Cost Function for Evolving Bijective Substitution-Boxes. Symmetry 2020, 12, 1896. [Google Scholar] [CrossRef]
  57. Hernando, L.; Mendiburu, A.; Lozano, J.A. Hill-Climbing Algorithm: Let’s Go for a Walk before Finding the Optimum. In Proceedings of the Congress on Evolutionary Computation, Rio de Janeiro, Brazil, 8 July 2018. [Google Scholar]
  58. Shehab, M.; Khader, A.T.; Al-Betar, M.A.; Abualigah, L.M. Hybridizing Cuckoo Search Algorithm with Hill Climbing for Numerical Optimization Problems. In Proceedings of the International Conference on Information Technology, Amman, Jordan, 17 May 2017; pp. 36–43. [Google Scholar]
  59. M’Hallah, R. Minimizing Total Earliness and Tardiness on a Single Machine Using a Hybrid Heuristic. Comput. Oper. Res. 2007, 34, 3126–3142. [Google Scholar] [CrossRef]
  60. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  61. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science (1979) 1983, 220, 671–680. [Google Scholar] [CrossRef]
  62. Alkhateeb, F.; Abed-Alguni, B.H. A Hybrid Cuckoo Search and Simulated Annealing Algorithm. J. Intell. Syst. 2019, 28, 683–698. [Google Scholar] [CrossRef]
  63. Alkhateeb, F.; Abed-alguni, B.H.; Al-rousan, M.H. Discrete Hybrid Cuckoo Search and Simulated Annealing Algorithm for Solving the Job Shop Scheduling Problem. J. Supercomput. 2022, 78, 4799–4826. [Google Scholar] [CrossRef]
  64. Sengupta, L.; Mariescu-Istodor, R.; Fränti, P. Which Local Search Operator Works Best for the Open-Loop TSP? Appl. Sci. 2019, 9, 3985. [Google Scholar] [CrossRef]
  65. Hoogeveen, J.A.; Oosterhout, H.; van de Velde, S.L. New Lower and Upper Bounds for Scheduling around a Small Common Due Date. Oper. Res. 1994, 42, 102–110. [Google Scholar] [CrossRef]
  66. Sharma, N.; Sharma, H.; Sharma, A. An Effective Solution for Large Scale Single Machine Total Weighted Tardiness Problem Using Lunar Cycle Inspired Artificial Bee Colony Algorithm. IEEE/ACM Trans. Comput. Biol. Bioinform. 2020, 17, 1573–1581. [Google Scholar] [CrossRef]
  67. Sabri, A.; Allaoui, H.; Souissi, O. Reinforcement Learning and Stochastic Dynamic Programming for Jointly Scheduling Jobs and Preventive Maintenance on a Single Machine to Minimise Earliness-Tardiness. Int. J. Prod. Res. 2024, 62, 705–719. [Google Scholar] [CrossRef]
Figure 1. The GM operator.
Figure 2. Queen–brood fitness evaluation.
Figure 3. MBO algorithm.
Table 1. Parameters of MBO algorithm.

| No. | Parameter | Source | Value |
|---|---|---|---|
| 1 | Number of queen bees | Experimental | 4 |
| 2 | Number of drones | Experimental | 10 |
| 3 | Initial speed of the queen bees | [27,28,29,53,54] | 0.9 |
| 4 | Initial energy of the queen bees | | 0.9 |
| 5 | Energy reduction factor of the queen bees | [53] | g |
| 6 | Speed reduction factor of the queen bees | Experimental | 0.98 |
| 7 | Spermatheca capacity | [27,28,29] | 10 |
| 8 | Number of flights | Experimental | 20 |
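For readers implementing the method, the Table 1 settings can be collected into a single configuration mapping. The field names below are our own, and the energy reduction factor is omitted because Table 1 specifies it symbolically rather than numerically:

```python
# Table 1 parameters of the MBO algorithm as a configuration mapping
# (hypothetical field names; values taken from Table 1).
MBO_PARAMS = {
    "num_queens": 4,              # number of queen bees (experimental)
    "num_drones": 10,             # number of drones (experimental)
    "queen_initial_speed": 0.9,   # initial speed of the queen bees
    "queen_initial_energy": 0.9,  # initial energy of the queen bees
    "speed_reduction_factor": 0.98,
    "spermatheca_capacity": 10,
    "num_flights": 20,
}
```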
Table 2. Results for n = 10, 20, and 50.

| n | k | UB h = 0.2 | MBO | % Offset | UB h = 0.4 | MBO | % Offset | UB h = 0.6 | MBO | % Offset | UB h = 0.8 | MBO | % Offset |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 1 | 1936 | 1936 | 0.00 | 1025 | 1025 | 0.00 | 841 | 841 | 0.00 | 818 | 818 | 0.00 |
| | 2 | 1042 | 1042 | 0.00 | 615 | 615 | 0.00 | 615 | 615 | 0.00 | 615 | 615 | 0.00 |
| | 3 | 1586 | 1586 | 0.00 | 917 | 917 | 0.00 | 793 | 793 | 0.00 | 793 | 793 | 0.00 |
| | 4 | 2139 | 2139 | 0.00 | 1230 | 1230 | 0.00 | 815 | 815 | 0.00 | 803 | 803 | 0.00 |
| | 5 | 1187 | 1187 | 0.00 | 630 | 630 | 0.00 | 521 | 521 | 0.00 | 521 | 521 | 0.00 |
| | 6 | 1521 | 1521 | 0.00 | 908 | 908 | 0.00 | 755 | 755 | 0.00 | 755 | 755 | 0.00 |
| | 7 | 2170 | 2170 | 0.00 | 1374 | 1374 | 0.00 | 1101 | 1101 | 0.00 | 1083 | 1083 | 0.00 |
| | 8 | 1720 | 1720 | 0.00 | 1020 | 1020 | 0.00 | 610 | 610 | 0.00 | 540 | 540 | 0.00 |
| | 9 | 1574 | 1574 | 0.00 | 876 | 876 | 0.00 | 582 | 582 | 0.00 | 554 | 554 | 0.00 |
| | 10 | 1869 | 1869 | 0.00 | 1136 | 1136 | 0.00 | 710 | 710 | 0.00 | 671 | 671 | 0.00 |
| | Average | | | 0.00 | | | 0.00 | | | 0.00 | | | 0.00 |
| 20 | 1 | 4431 | 4394 | −0.84 | 3066 | 3066 | 0.00 | 2986 | 2986 | 0.00 | 2986 | 2986 | 0.00 |
| | 2 | 8567 | 8430 | −1.60 | 4897 | 4847 | −1.02 | 3260 | 3206 | −1.66 | 2980 | 2980 | 0.00 |
| | 3 | 6331 | 6210 | −1.91 | 3883 | 3838 | −1.16 | 3600 | 3583 | −0.47 | 3600 | 3583 | −0.47 |
| | 4 | 9478 | 9188 | −3.06 | 5122 | 5118 | −0.08 | 3336 | 3317 | −0.57 | 3040 | 3040 | 0.00 |
| | 5 | 4340 | 4215 | −2.88 | 2571 | 2495 | −2.96 | 2206 | 2173 | −1.50 | 2206 | 2173 | −1.50 |
| | 6 | 6766 | 6527 | −3.53 | 3601 | 3582 | −0.53 | 3016 | 3010 | −0.20 | 3016 | 3010 | −0.20 |
| | 7 | 11,101 | 10,455 | −5.82 | 6357 | 6238 | −1.87 | 4175 | 4126 | −1.17 | 3900 | 3878 | −0.56 |
| | 8 | 4203 | 3920 | −6.73 | 2151 | 2145 | −0.28 | 1638 | 1638 | 0.00 | 1638 | 1638 | 0.00 |
| | 9 | 3530 | 3465 | −1.84 | 2097 | 2096 | −0.05 | 1992 | 1965 | −1.36 | 1992 | 1965 | −1.36 |
| | 10 | 5545 | 4979 | −10.21 | 3192 | 2925 | −8.36 | 2116 | 2110 | −0.28 | 1995 | 1995 | 0.00 |
| | Average | | | −3.84 | | | −1.63 | | | −0.72 | | | −0.41 |
| 50 | 1 | 42,363 | 40,697 | −3.93 | 24,868 | 23,792 | −4.33 | 17,990 | 17,969 | −0.12 | 17,990 | 17,937 | −0.29 |
| | 2 | 33,637 | 30,624 | −8.96 | 19,279 | 17,920 | −7.05 | 14,231 | 14,054 | −1.24 | 14,132 | 14,200 | 0.48 |
| | 3 | 37,641 | 34,425 | −8.54 | 21,353 | 20,502 | −3.99 | 16,497 | 16,509 | 0.07 | 16,497 | 16,591 | 0.57 |
| | 4 | 30,166 | 27,755 | −7.99 | 17,495 | 16,657 | −4.79 | 14,105 | 14,121 | 0.11 | 14,105 | 14,215 | 0.78 |
| | 5 | 32,604 | 32,307 | −0.91 | 18,441 | 18,007 | −2.35 | 14,650 | 14,612 | −0.26 | 14,650 | 14,618 | −0.22 |
| | 6 | 36,920 | 34,969 | −5.28 | 21,497 | 20,385 | −5.17 | 14,251 | 14,274 | 0.16 | 14,075 | 14,116 | 0.29 |
| | 7 | 44,277 | 43,134 | −2.58 | 23,883 | 23,038 | −3.54 | 17,715 | 17,637 | −0.44 | 17,715 | 17,682 | −0.19 |
| | 8 | 46,065 | 43,859 | −4.79 | 25,402 | 24,892 | −2.01 | 21,367 | 21,403 | 0.17 | 21,367 | 21,435 | 0.32 |
| | 9 | 36,397 | 34,234 | −5.94 | 21,929 | 19,986 | −8.86 | 14,298 | 14,202 | −0.67 | 13,952 | 14,056 | 0.75 |
| | 10 | 35,797 | 32,960 | −7.93 | 20,048 | 19,167 | −4.39 | 14,377 | 14,409 | 0.22 | 14,377 | 14,416 | 0.27 |
| | Average | | | −5.69 | | | −4.65 | | | −0.20 | | | 0.28 |
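The % Offset columns in Tables 2 and 3 report the relative deviation of the MBO objective value from the upper bound, 100 × (MBO − UB)/UB, so negative values are improvements over the UB. A minimal check against the first n = 20, h = 0.2 entry of Table 2:

```python
def offset(ub, mbo):
    """Percentage deviation of the MBO objective value from the upper bound."""
    return 100.0 * (mbo - ub) / ub

# First n = 20 instance at h = 0.2 in Table 2: UB = 4431, MBO = 4394.
print(round(offset(4431, 4394), 2))  # → -0.84
```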
Table 3. Results for n = 100, 200, 500, and 1000.

| n | k | UB h = 0.2 | MBO | % Offset | UB h = 0.4 | MBO | % Offset | UB h = 0.6 | MBO | % Offset | UB h = 0.8 | MBO | % Offset |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | 1 | 156,103 | 145,623 | −6.71 | 89,588 | 86,380 | −3.58 | 72,019 | 72,544 | 0.73 | 72,019 | 73,183 | 1.62 |
| | 2 | 132,605 | 124,972 | −5.76 | 74,854 | 73,486 | −1.83 | 59,351 | 59,540 | 0.32 | 59,351 | 60,362 | 1.70 |
| | 3 | 137,463 | 129,911 | −5.49 | 85,363 | 79,763 | −6.56 | 68,537 | 69,297 | 1.11 | 68,537 | 69,697 | 1.69 |
| | 4 | 137,265 | 129,749 | −5.48 | 87,730 | 79,589 | −9.28 | 69,231 | 69,638 | 0.59 | 69,231 | 70,262 | 1.49 |
| | 5 | 136,761 | 124,436 | −9.01 | 76,424 | 71,627 | −6.28 | 55,291 | 55,612 | 0.58 | 55,277 | 55,973 | 1.26 |
| | 6 | 151,930 | 139,321 | −8.30 | 86,724 | 78,067 | −9.98 | 62,519 | 63,297 | 1.24 | 62,519 | 63,146 | 1.00 |
| | 7 | 141,613 | 135,181 | −4.54 | 79,854 | 78,597 | −1.57 | 62,213 | 63,008 | 1.28 | 62,213 | 63,453 | 1.99 |
| | 8 | 168,086 | 160,158 | −4.72 | 95,361 | 94,629 | −0.77 | 80,844 | 81,383 | 0.67 | 80,844 | 81,370 | 0.65 |
| | 9 | 125,153 | 116,653 | −6.79 | 73,605 | 69,916 | −5.01 | 58,771 | 59,318 | 0.93 | 58,771 | 58,996 | 0.38 |
| | 10 | 124,446 | 119,120 | −4.28 | 72,399 | 72,133 | −0.37 | 61,419 | 61,783 | 0.59 | 61,419 | 62,285 | 1.41 |
| | Average | | | −6.11 | | | −4.52 | | | 0.80 | | | 1.32 |
| 200 | 1 | 526,666 | 499,408 | −5.18 | 301,449 | 298,952 | −0.83 | 254,268 | 260,400 | 2.41 | 254,268 | 264,365 | 3.97 |
| | 2 | 566,643 | 543,008 | −4.17 | 335,714 | 320,699 | −4.47 | 266,028 | 272,420 | 2.40 | 266,028 | 274,201 | 3.07 |
| | 3 | 529,919 | 490,462 | −7.45 | 308,278 | 298,587 | −3.14 | 254,647 | 259,354 | 1.85 | 254,647 | 268,894 | 5.59 |
| | 4 | 603,709 | 587,741 | −2.64 | 360,852 | 353,113 | −2.14 | 297,269 | 305,138 | 2.65 | 297,269 | 307,182 | 3.33 |
| | 5 | 547,953 | 515,285 | −5.96 | 322,268 | 305,632 | −5.16 | 260,455 | 268,428 | 3.06 | 260,455 | 269,743 | 3.57 |
| | 6 | 502,276 | 479,853 | −4.46 | 292,453 | 280,955 | −3.93 | 236,160 | 241,407 | 2.22 | 236,160 | 247,641 | 4.86 |
| | 7 | 479,651 | 456,839 | −4.76 | 279,576 | 278,022 | −0.56 | 247,555 | 253,589 | 2.44 | 247,555 | 256,231 | 3.50 |
| | 8 | 530,896 | 496,257 | −6.52 | 288,746 | 280,299 | −2.93 | 225,572 | 231,587 | 2.67 | 225,572 | 233,697 | 3.60 |
| | 9 | 575,353 | 530,693 | −7.76 | 331,107 | 316,412 | −4.44 | 255,029 | 261,940 | 2.71 | 255,029 | 265,234 | 4.00 |
| | 10 | 572,866 | 540,116 | −5.72 | 332,808 | 326,195 | −1.99 | 269,236 | 275,593 | 2.36 | 269,236 | 279,286 | 3.73 |
| | Average | | | −5.46 | | | −2.96 | | | 2.48 | | | 3.92 |
| 500 | 1 | 3,113,088 | 2,975,034 | −4.43 | 1,839,902 | 1,789,511 | −2.74 | 1,581,233 | 1,629,234 | 3.04 | 1,581,233 | 1,640,983 | 3.78 |
| | 2 | 3,569,058 | 3,393,446 | −4.92 | 2,064,998 | 1,999,487 | −3.17 | 1,715,332 | 1,787,239 | 4.19 | 1,715,332 | 1,810,535 | 5.55 |
| | 3 | 3,300,744 | 3,121,429 | −5.43 | 1,909,304 | 1,900,634 | −0.45 | 1,644,947 | 1,695,237 | 3.06 | 1,644,947 | 1,698,452 | 3.25 |
| | 4 | 3,408,867 | 3,255,505 | −4.50 | 1,930,829 | 1,888,910 | −2.17 | 1,640,942 | 1,701,356 | 3.68 | 1,640,942 | 1,703,427 | 3.81 |
| | 5 | 3,377,547 | 3,132,862 | −7.24 | 1,881,221 | 1,809,312 | −3.82 | 1,468,325 | 1,518,298 | 3.40 | 1,468,325 | 1,519,876 | 3.51 |
| | 6 | 3,024,082 | 2,811,235 | −7.04 | 1,658,411 | 1,632,663 | −1.55 | 1,413,345 | 1,455,657 | 2.99 | 1,413,345 | 1,489,321 | 5.38 |
| | 7 | 3,381,166 | 3,199,398 | −5.38 | 1,971,176 | 1,904,756 | −3.37 | 1,634,912 | 1,692,443 | 3.52 | 1,634,912 | 1,698,743 | 3.90 |
| | 8 | 3,376,678 | 3,154,755 | −6.57 | 1,924,191 | 1,843,914 | −4.17 | 1,542,090 | 1,594,304 | 3.39 | 1,542,090 | 1,602,894 | 3.94 |
| | 9 | 3,617,807 | 3,391,379 | −6.26 | 2,065,647 | 1,979,537 | −4.17 | 1,684,055 | 1,754,532 | 4.18 | 1,684,055 | 1,772,641 | 5.26 |
| | 10 | 3,315,019 | 3,163,870 | −4.56 | 1,928,579 | 1,840,626 | −4.56 | 1,520,515 | 1,589,366 | 4.53 | 1,520,515 | 1,601,720 | 5.34 |
| | Average | | | −5.63 | | | −3.02 | | | 3.60 | | | 4.37 |
| 1000 | 1 | 15,190,371 | 14,099,474 | −7.18 | 8,570,154 | 8,149,624 | −4.91 | 6,411,581 | 6,599,605 | 2.93 | 6,411,581 | 6,685,430 | 4.27 |
| | 2 | 13,356,727 | 12,454,682 | −6.75 | 7,592,040 | 7,327,215 | −3.49 | 6,112,598 | 6,324,837 | 3.47 | 6,112,598 | 6,365,740 | 4.14 |
| | 3 | 12,919,259 | 12,061,237 | −6.64 | 7,313,736 | 7,116,546 | −2.70 | 5,985,538 | 6,221,471 | 3.94 | 5,985,538 | 6,250,910 | 4.43 |
| | 4 | 12,705,259 | 11,903,382 | −6.31 | 7,300,217 | 7,105,489 | −2.67 | 6,096,729 | 6,324,634 | 3.74 | 6,096,729 | 6,354,634 | 4.23 |
| | 5 | 13,276,868 | 12,556,618 | −5.42 | 7,738,367 | 7,398,453 | −4.39 | 6,348,242 | 6,618,452 | 4.26 | 6,348,242 | 6,672,924 | 5.11 |
| | 6 | 12,236,080 | 11,749,705 | −3.97 | 7,144,491 | 6,987,842 | −2.19 | 6,082,142 | 6,394,322 | 5.13 | 6,082,142 | 6,415,273 | 5.48 |
| | 7 | 14,160,773 | 13,354,947 | −5.69 | 8,426,024 | 7,982,541 | −5.26 | 6,575,879 | 6,810,295 | 3.56 | 6,575,879 | 6,903,158 | 4.98 |
| | 8 | 13,314,723 | 12,324,002 | −7.44 | 7,508,507 | 7,289,547 | −2.92 | 6,069,658 | 6,308,438 | 3.93 | 6,069,658 | 6,360,704 | 4.80 |
| | 9 | 12,433,821 | 11,865,221 | −4.57 | 7,299,271 | 7,284,531 | −0.20 | 6,188,416 | 6,407,502 | 3.54 | 6,188,416 | 6,468,270 | 4.52 |
| | 10 | 13,395,234 | 12,485,997 | −6.79 | 7,617,658 | 7,322,548 | −3.87 | 6,147,295 | 6,405,295 | 4.20 | 6,147,295 | 6,493,815 | 5.64 |
| | Average | | | −6.08 | | | −3.26 | | | 3.87 | | | 4.76 |
Table 4. The average improvement of the set of test problems.

| n | h | Hybrid Heuristic [8] % | MBO % |
|---|---|---|---|
| 10 | 0.2 | 0.00 | 0.00 |
| | 0.4 | 0.00 | 0.00 |
| 20 | 0.2 | −3.84 | −3.84 |
| | 0.4 | −1.63 | −1.63 |
| 50 | 0.2 | −5.65 | −5.69 |
| | 0.4 | −4.64 | −4.65 |
| 100 | 0.2 | −6.18 | −6.11 |
| | 0.4 | −4.94 | −4.52 |
| Mean | | −4.10 | −3.77 |
Table 5. Benchmarking of metaheuristics in the literature. All values are percentages.

| n | h | Hybrid Heuristic [8] | TS [38] | GA [38] | HTG [38] | HGT [38] | DPSO [39] | SEA [40] | VNS/TS [41] | Bees [42] | DE [43] | MBO |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.2 | 0.00 | 0.25 | 0.12 | 0.12 | 0.12 | 0.00 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 |
| | 0.4 | 0.00 | 0.24 | 0.19 | 0.19 | 0.19 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 |
| | 0.6 | | 0.10 | 0.03 | 0.03 | 0.01 | 0.00 | 0.02 | 0.00 | 0.00 | 0.00 | 0.00 |
| | 0.8 | | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| 20 | 0.2 | −3.84 | −3.84 | −3.84 | −3.84 | −3.84 | −3.84 | −3.79 | −3.84 | −3.84 | −3.84 | −3.84 |
| | 0.4 | −1.63 | −1.62 | −1.62 | −1.62 | −1.62 | −1.63 | −1.58 | −1.63 | −1.63 | −1.63 | −1.63 |
| | 0.6 | | −0.71 | −0.68 | −0.71 | −0.71 | −0.72 | −0.64 | −0.72 | −0.72 | −0.72 | −0.72 |
| | 0.8 | | −0.41 | −0.28 | −0.41 | −0.41 | −0.41 | −0.39 | −0.41 | −0.41 | −0.41 | −0.41 |
| 50 | 0.2 | −5.65 | −5.70 | −5.68 | −5.70 | −5.70 | −5.70 | −5.58 | −5.70 | −5.70 | −5.69 | −5.69 |
| | 0.4 | −4.64 | −4.66 | −4.60 | −4.66 | −4.66 | −4.66 | −4.42 | −4.66 | −4.66 | −4.66 | −4.65 |
| | 0.6 | | −0.32 | −0.31 | −0.27 | −0.31 | −0.34 | −0.31 | −0.34 | −0.34 | −0.32 | −0.20 |
| | 0.8 | | −0.24 | −0.19 | −0.23 | −0.23 | −0.24 | −0.24 | −0.24 | −0.24 | −0.24 | 0.28 |
| 100 | 0.2 | −6.18 | −6.19 | −6.17 | −6.19 | −6.19 | −6.19 | −6.12 | −6.19 | −6.19 | −6.17 | −6.11 |
| | 0.4 | −4.94 | −4.93 | −4.91 | −4.93 | −4.93 | −4.94 | −4.85 | −4.94 | −4.94 | −4.89 | −4.52 |
| | 0.6 | | −0.01 | −0.12 | 0.08 | 0.04 | −0.15 | −0.15 | −0.15 | −0.15 | −0.13 | 0.80 |
| | 0.8 | | −0.15 | −0.12 | −0.08 | −0.11 | −0.18 | −0.18 | −0.18 | −0.18 | −0.17 | 1.32 |
| 200 | 0.2 | −5.73 | −5.76 | −5.74 | −5.76 | −5.76 | −5.78 | −5.76 | −5.78 | −5.78 | −5.77 | −5.46 |
| | 0.4 | −3.79 | −3.74 | −3.75 | −3.75 | −3.75 | −3.75 | −3.73 | −3.75 | −3.75 | −3.72 | −2.96 |
| | 0.6 | | −0.01 | −0.13 | 0.37 | 0.00 | −0.15 | −0.15 | −0.15 | −0.15 | 0.23 | 2.48 |
| | 0.8 | | −0.04 | −0.14 | 0.26 | 0.07 | −0.15 | −0.15 | −0.15 | −0.15 | 0.20 | 3.92 |
| 500 | 0.2 | −6.40 | −6.41 | −6.41 | −6.41 | −6.41 | −6.42 | −6.43 | −6.42 | −6.43 | −6.43 | −5.63 |
| | 0.4 | −3.52 | −3.57 | −3.58 | −3.58 | −3.58 | −3.56 | −3.57 | −3.56 | −3.57 | −3.57 | −3.02 |
| | 0.6 | | 0.25 | −0.11 | 0.73 | 0.15 | −0.11 | −0.11 | −0.11 | −0.11 | 1.72 | 3.60 |
| | 0.8 | | 0.21 | −0.11 | 0.73 | 0.13 | −0.11 | −0.11 | −0.11 | −0.11 | 1.01 | 4.37 |
| 1000 | 0.2 | −6.72 | −6.73 | −6.75 | −6.74 | −6.74 | −6.76 | −6.77 | −6.75 | −6.76 | −6.72 | −6.08 |
| | 0.4 | −4.30 | −4.39 | −4.40 | −4.39 | −4.39 | −4.37 | −4.40 | −4.37 | −4.35 | −4.38 | −3.26 |
| | 0.6 | | 1.01 | −0.05 | 1.28 | 0.42 | −0.06 | −0.06 | −0.05 | −0.05 | 1.29 | 3.87 |
| | 0.8 | | 1.13 | −0.05 | 1.28 | 0.40 | −0.06 | −0.06 | −0.05 | −0.05 | 2.79 | 4.76 |
| Total average improvement | | | −2.01 | −2.12 | −1.94 | −2.06 | −2.15 | −2.12 | −2.15 | −2.15 | −1.87 | −1.03 |
Table 6. A comparison of the results from metaheuristics in the literature and a summary of average errors according to the value of the restrictive factor h. All values in %; — indicates a value not reported for that instance class.

| Restrictive Factor | Hybrid Heuristic [8] | TS [38] | GA [38] | HTG [38] | HGT [38] | DPSO [39] | SEA [40] | VNS/TS [41] | Bees [42] | DE [43] | MBO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| h = 0.2 | −4.93 | −4.91 | −4.92 | −4.93 | −4.93 | −4.96 | −4.91 | −4.95 | −4.96 | −4.95 | −4.69 |
| h = 0.4 | −3.26 | −3.24 | −3.24 | −3.25 | −3.25 | −3.27 | −3.22 | −3.27 | −3.27 | −3.26 | −2.86 |
| h = 0.6 | — | 0.04 | −0.20 | 0.22 | −0.05 | −0.22 | −0.20 | −0.22 | −0.22 | 0.30 | 1.40 |
| h = 0.8 | — | 0.07 | −0.13 | 0.22 | −0.02 | −0.16 | −0.16 | −0.16 | −0.16 | 0.45 | 2.03 |
Table 7. A comparison of computing times (seconds).

| Problem Size | 10 Tasks | 20 Tasks | 50 Tasks | 100 Tasks | 200 Tasks | 500 Tasks | 1000 Tasks |
|---|---|---|---|---|---|---|---|
| Average MBO computing time (h = 0.2; 0.4) | 0.16 | 10.85 | 51.00 | 386.58 | 872.90 | 4001.90 | 12,487.70 |
| Average MBO computing time (h = 0.6; 0.8) | 0.62 | 30.48 | 158.60 | 777.08 | 1746.45 | 6671.33 | 15,946.50 |
| Average computing time [8] (h = 0.2; 0.4) | 0.90 | 47.80 | 87.30 | 284.90 | 955.20 | 3647.20 | 10,962.50 |
| Average computing time [43] | 0.21 | 1.08 | 2.49 | 20.77 | 232.37 | 3932.46 | 8519.33 |
Table 8. Problems with 20 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.005447685 | 1 | 0.005447685 | 19.91645784 | 2.6882 × 10⁻⁵ |
| Within levels | 0.021335089 | 78 | 0.000273527 | | |
| Total | 0.026782773 | 79 | | | |
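The entries of the ANOVA tables are linked by F = MS_between/MS_within, and for one numerator degree of freedom an F(1, ν) variable is the square of a Student's t(ν) variable. The sketch below re-derives the Table 8 values using only the standard library, approximating the t tail probability by Simpson's rule; this is an illustrative check, not the software used in the study.

```python
import math

def f_statistic(ss_between, df_between, ss_within, df_within):
    """One-way ANOVA F statistic from sums of squares and degrees of freedom."""
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within
    return ms_between / ms_within

def t_tail(t0, nu, steps=20000, upper=60.0):
    """P(T > t0) for Student's t with nu degrees of freedom (composite Simpson's rule)."""
    coeff = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    pdf = lambda x: coeff * (1.0 + x * x / nu) ** (-(nu + 1) / 2)
    h = (upper - t0) / steps
    acc = pdf(t0) + pdf(upper)
    for i in range(1, steps):
        acc += (4 if i % 2 else 2) * pdf(t0 + i * h)
    return acc * h / 3.0

# Values from Table 8 (20-task problems): SS_between with df = 1, SS_within with df = 78.
F = f_statistic(0.005447685, 1, 0.021335089, 78)
p = 2.0 * t_tail(math.sqrt(F), 78)  # two-sided, since F(1, 78) = t(78)^2
print(round(F, 4), p)               # F close to 19.9165, p close to 2.7e-5
```

The same two lines reproduce Tables 9–17 by substituting each table's sums of squares and degrees of freedom.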
Table 9. Problems with 50 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.01342 | 1 | 0.01342 | 27.59309 | 1.2636 × 10⁻⁶ |
| Within levels | 0.03792 | 78 | 0.00049 | | |
| Total | 0.05134 | 79 | | | |
Table 10. Problems with 100 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.00905 | 1 | 0.00905 | 12.72232 | 0.00062 |
| Within levels | 0.05546 | 78 | 0.00071 | | |
| Total | 0.06451 | 79 | | | |
Table 11. Problems with 200 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.00051 | 1 | 0.00051 | 0.61927 | 0.43370 |
| Within levels | 0.06414 | 78 | 0.00082 | | |
| Total | 0.06465 | 79 | | | |
Table 12. Problems with 500 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.00006 | 1 | 0.00006 | 0.05926 | 0.80830 |
| Within levels | 0.07635 | 78 | 0.00098 | | |
| Total | 0.07064 | 79 | | | |
Table 13. Problems with 1000 tasks.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.00006 | 1 | 0.00006 | 0.054888 | 0.815379 |
| Within levels | 0.088788 | 78 | 0.001138 | | |
| Total | 0.088850 | 79 | | | |
Table 14. Problems with h = 0.2.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.08971 | 1 | 0.08971 | 434.65526 | 2.252 × 10⁻⁴¹ |
| Within levels | 0.02435 | 118 | 0.00021 | | |
| Total | 0.11406 | 119 | | | |
Table 15. Problems with h = 0.4.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.03346 | 1 | 0.03346 | 121.66134 | 7.1387 × 10⁻²⁰ |
| Within levels | 0.03245 | 118 | 0.00027 | | |
| Total | 0.06591 | 119 | | | |
Table 16. Problems with h = 0.6.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.00806 | 1 | 0.00806 | 46.36958 | 4.35183 × 10⁻¹⁰ |
| Within levels | 0.02050 | 118 | 0.00017 | | |
| Total | 0.02855 | 119 | | | |
Table 17. Problems with h = 0.8.

| Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F Statistic | p-Value |
|---|---|---|---|---|---|
| Between levels | 0.01666 | 1 | 0.01666 | 70.23799 | 1.27812 × 10⁻¹³ |
| Within levels | 0.02800 | 118 | 0.00024 | | |
| Total | 0.04466 | 119 | | | |
Share and Cite

Palominos, P.; Mazo, M.; Fuertes, G.; Alfaro, M. An Improved Marriage in Honey-Bee Optimization Algorithm for Minimizing Earliness/Tardiness Penalties in Single-Machine Scheduling with a Restrictive Common Due Date. Mathematics 2025, 13, 418. https://doi.org/10.3390/math13030418