Article

An Evolutionary Procedure for a Bi-Objective Assembly Line Balancing Problem

1 Innovation and Sustainability Data Lab (ISDaLab), UPF—Barcelona School of Management, 08008 Barcelona, Spain
2 EAE Business School, 08015 Barcelona, Spain
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(20), 3336; https://doi.org/10.3390/math13203336
Submission received: 18 September 2025 / Revised: 10 October 2025 / Accepted: 14 October 2025 / Published: 20 October 2025

Abstract

An assembly line is a manufacturing process commonly used in the production of commodity goods. The assembly process is divided into elementary tasks that are sequentially performed at serially arranged workstations. Among the various challenges that must be addressed during the design and operation of an assembly line, the assembly line balancing problem involves the assignment of tasks to different workstations. In its simplest form, this problem aims to distribute assembly operations among the workstations efficiently. An efficient line is one that optimizes a specific objective function, usually associated with maximizing throughput or minimizing resource requirements. In this study, we adopt a bi-objective approach to find a Pareto set of efficient solutions balancing throughput and resource requirements. To address this problem, we propose a multi-objective evolutionary method, complemented by single- and multi-objective local search procedures that leverage a polynomially solvable case of the problem. We then compare the results of these methods, including their hybridizations, through a computational experiment demonstrating their ability to achieve high-quality solutions.

1. Introduction

The division of work in assembly lines leads to specialization, error reduction, and other advantages that make them the preferred production system for mass-consumption commodity goods. Assembly lines are divided into workstations, also referred to as stations, that are arranged in a fixed pattern, often a straight line, and connected by a mechanism, such as a conveyor belt, which moves work in progress between stations. Each station performs a subset of the elementary (i.e., indivisible) operations, or tasks, required to assemble a unit, so the production of each unit is distributed among the stations. Among the various challenges related to the design and operation of assembly lines, the assembly line balancing problem (ALBP) focuses on the assignment of elementary tasks to the stations to achieve some efficiency goal.
Production units move through the stations in the sequence of their arrangement. In paced assembly lines, which are the focus of this paper, the product moves between stations at a fixed rate. This rate is determined by the maximum amount of time required by any station to complete its tasks, known as the cycle time, which defines the line’s throughput. Furthermore, for any task to be performed at a station, technological constraints in the form of precedence relations must be satisfied. Specifically, all predecessors of a given task must be assigned to either the same station or a preceding one.
Due to different industrial settings, the literature has considered a wide variety of ALBPs, but a large majority of them share common characteristics: (1) all elementary tasks need to be performed and have an operation time that corresponds to the time required to perform them; (2) each station performs a disjoint subset of tasks and has a workload equal to the sum of the operation times of the tasks performed by the station; (3) the throughput of the line is determined by the station with the largest workload, which defines the cycle time of the assembly line; (4) tasks have precedence relations that force some tasks to be performed before others; and (5) an efficiency metric related to idle time minimization is optimized. An ALBP featuring only these characteristics is known as a Simple Assembly Line Balancing Problem (SALBP)—see [1] for a more detailed discussion of the hypotheses of the SALBP—while problems that contain alternative considerations are referred to as General Assembly Line Balancing Problems (GALBP).
SALBPs can be categorized into several types: type 1 problems, where the goal is to minimize idle time by reducing the number of workstations for a fixed cycle time; type 2 problems, where the objective is to minimize the cycle time given a fixed number of stations; and type E problems, which aim to minimize idle time by optimizing over a specified range of possible cycle times and numbers of stations. Additionally, there is a feasibility version of the problem, type F, where the aim is to determine a feasible solution when both the cycle time and the number of stations are fixed.
While the SALBP represents a simplified version of real-life assembly line balancing problems (ALBPs), it has garnered significant attention from the scientific community due to its simplicity paired with inherent complexity [see [2,3,4,5] for state-of-the-art procedures for the SALBP]. Furthermore, the SALBP serves as the foundational model for many generalized assembly line balancing problems (GALBPs), is frequently used as a test bed for various algorithmic ideas, and extends the classical Bin Packing Problem [6] by considering precedence constraints among items.
The SALBP is a single-objective optimization problem, even though most real-life line balancing problems involve multiple, often conflicting, objectives. While the GALBP literature has extensively addressed this complexity, few studies on SALBP have focused on it. This gap was addressed in [7], where the authors proposed a bi-objective version of the SALBP, referred to as the bi-objective SALBP (BO-SALBP), which simultaneously optimizes both the cycle time and the number of stations as separate objectives.
This paper builds on the problem proposed in [7] and focuses on the bi-objective SALBP (BO-SALBP). We identify a polynomially solvable special case of the problem and use it as the foundation for developing evolutionary and local search heuristic procedures. These heuristic methods are capable of addressing larger problem instances compared to the implicit enumeration-based approach presented in [7], which was limited to instances with up to 20 tasks. An analysis of previous work on multi-objective assembly line balancing problems (MO-ALBPs) is provided thereafter.

1.1. Literature Review

The scientific literature on assembly line balancing is extensive and well-established. For a comprehensive overview, we refer interested readers to several state-of-the-art reviews [8,9,10,11,12]. In this work, we narrow our focus to prior studies that are directly relevant to our research and select publications that offer a summary of research on multi-objective assembly line balancing problems (MO-ALBPs).
MO-ALBPs primarily differ in three key aspects: (1) the methodology used to handle multiple objectives, (2) the specific characteristics and goals of the assembly line, and (3) the solution approaches applied. This review addresses each of these aspects separately.
The most common method for tackling MO-ALBPs involves converting multiple objectives into a single-objective problem using a weighted function, where relative importance is assigned to different objectives [see [13,14,15,16,17,18,19] for some recent examples]. Although this approach is straightforward, it requires the selection of appropriate weights, which can often be difficult to determine objectively. A related method is the hierarchical (lexicographic) approach, in which objectives are ranked by their importance [see [20,21,22,23] for examples]. In this case, each objective serves as a tie-breaker for more significant objectives. Note that hierarchical methods can also be represented as weighted problems by assigning appropriate weights, effectively treating them as single-objective problems with complex objective functions.
Other multi-criteria decision-making methods, such as TOPSIS [24] and goal programming methodologies [25,26,27], have also been explored. These methods aim to achieve predefined target values, with the objective of minimizing deviations from these targets.
Finally, Pareto-based approaches, like the one adopted in this work, are also widely used. Unlike methods that identify a single optimal solution, Pareto-based techniques aim to generate multiple efficient solutions. A solution is considered efficient, or non-dominated, if no other solution is superior in at least one objective without being inferior in any other [see [7,28,29,30,31] for examples]. These approaches provide a set of solutions, leaving the final selection to decision-makers, who can evaluate trade-offs between objectives using information not accessible to the optimization process.
Regardless of the multi-objective methodology used, the numerous ALBP variants discussed in the single-objective ALBP literature can be seamlessly extended to multi-objective ALBPs, thereby introducing a wide array of environments and objectives. The literature addresses conditions such as uncertain demand [31,32]; preventive maintenance requirements [33]; U-shaped [34], two-sided [35], and multi-manned [36,37] assembly line configurations; disassembly lines [38]; heterogeneous operators [29,30,38]; resource constraints [36]; zoning constraints [39]; setup times between tasks [35]; and space requirements within stations [40], among many others.
Focusing on the objectives, previous work typically includes an efficiency metric, such as minimizing cycle time [22,41] or the number of workstations [42], alongside one or more additional metrics relevant to the specific conditions under study. These additional metrics include workload smoothing [43], ergonomic risks [44,45,46], disruption risks [47], energy consumption [38,48,49,50], operational and installation costs [45,51,52], learning costs [53], regularity of worker assignments [54], and carbon emission reductions [55].
When it comes to solution methods, the literature features representative works across the most common approaches to combinatorial optimization problems, including exact and heuristic methods. Exact methods utilize off-the-shelf integer programming solvers [21,29] and custom-designed implicit enumeration schemes [7,56]. On the heuristic side, approaches encompass priority rule-based methods [43], constructive metaheuristics like ant colony optimization [57], and local-search-based techniques such as tabu search [58] and variable neighborhood search [35]. Additionally, population-based evolutionary algorithms, including genetic algorithms [40] and particle swarm optimization [59], are widely employed. Given the relevance to this work, we now concentrate on heuristic approaches, with a focus on the different methods used to represent solutions.
A direct representation assigns each task to a station identifier, a natural number that specifies where the task is performed. This straightforward encoding is often used in constructive and local search-based methods. However, it poses challenges in population-based evolutionary algorithms, either due to difficulties in maintaining feasibility during recombination operations or because the evolutionary method is designed to optimize solutions represented with real values.
Consequently, the literature explores several alternative solution representations. One such alternative encodes the solution as a sequence of tasks combined with separators that identify station assignments [40,60]. The tasks in the sequence must adhere to precedence constraints (i.e., if task i precedes task j, then task i appears before task j in the sequence). The separators indicate how tasks are assigned to stations: the first station performs all tasks before the first separator, the second station handles tasks between the first and second separators, and so on.
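As an illustration, a minimal decoder for this sequence-with-separators encoding might look as follows (function name, task indices, times, and separator positions are hypothetical, chosen only for this sketch):

```python
def decode(sequence, separators, times):
    """Split a precedence-feasible task sequence at the separator
    positions into station loads and compute the resulting cycle time.
    Illustrative sketch; names and inputs are hypothetical."""
    stations, start = [], 0
    for sep in list(separators) + [len(sequence)]:
        stations.append(sequence[start:sep])
        start = sep
    # The cycle time is the largest station workload
    cycle_time = max(sum(times[t] for t in load) for load in stations)
    return stations, cycle_time

# Five tasks (0-indexed) split after positions 2 and 4
stations, c = decode([0, 1, 2, 3, 4], [2, 4], [3, 2, 4, 1, 5])
```

Here the first station performs tasks 0 and 1, the second tasks 2 and 3, and the third task 4, giving a cycle time of 5.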
Another alternative involves using an indirect encoding approach. In this method, the resolution technique does not encode the solution directly; instead, it encodes sets of rules that a decoding procedure uses to construct a feasible solution. Indirect methods can thus be categorized based on their representation technique and the decoding procedure employed.
While alternative methods exist, such as using dynamic programming-based sequence splitting techniques to decode a sequence of tasks into a feasible solution [61,62], the most commonly used approach involves variations of a greedy, priority-based constructive method [see [63] for a detailed description of these methods for the SALBP]. In priority-based methods, tasks are assigned to stations sequentially while maintaining solution feasibility. Task assignment is guided by the indirect representation and can be determined by selecting the task with the highest priority [64], the first unassigned task in a sequence [28,31,33,65,66], or based on a specific priority rule [50]. It is important to note that these indirect representation methods are primarily designed for type 1 objectives, which aim to minimize the number of stations. When applied to type 2 objectives, which focus on minimizing the cycle time, they must be combined with a bisection method [67].
To conclude this overview of the state of the art, we focus on prior MO-ALBPs that are most similar to the problem studied in this work, specifically the BO-SALBP. The BO-SALBP was formally introduced in [7], whose authors proposed an implicit enumeration approach and tested their method on small instances with up to 20 tasks, optimally solving 91.81% of the instances within a 300 s time limit. Other studies addressing the simultaneous optimization of cycle time and the number of workstations include [68], which incorporated workload smoothing as an additional objective and tackled a set of instances with 15 tasks, and [28], which considered the BO-SALBP with fuzzy task times, employing the non-dominated sorting genetic algorithm (NSGA-II), a multi-objective evolutionary algorithm, to solve 12 instances with between 7 and 83 tasks. Finally, ref. [36] addressed an enhanced BO-SALBP in a mixed-model, multi-manned assembly line with resource constraints using a swarm-based algorithm. As demonstrated, the BO-SALBP has received limited attention in the literature, and existing solution methods are restricted to small instances. This highlights the need for new approaches capable of efficiently handling larger instances within reasonable time limits.

1.2. Contributions and Outline of This Work

In this work, we address the BO-SALBP and aim to find a set of efficient solutions that simultaneously optimize cycle time and the number of stations using an evolutionary-based approach. To tackle this problem, we first examine a special case of the SALBP-2 in which the precedence constraints form a chain. In this scenario, the problem loses the sequencing complexity inherent in ALBPs and becomes polynomially solvable through a dynamic programming (DP) approach that generates each efficient task-to-workstation partition for any number of stations. Moreover, we show that this dynamic programming approach not only solves the SALBP-2 with a chain precedence graph but also its BO-SALBP counterpart.
Building on this result, we adapt a previously proposed multi-objective local search approach—the Pareto Local Search (PLS) [69]—to incorporate the dynamic programming recurrence and to accommodate a coding scheme in which each element represents not a single feasible solution but a set of solutions. As the PLS requires an initial set of solutions, we propose a multi-objective evolutionary algorithm (MOEA) for both initialization and comparison purposes. The Non-dominated Sorting Genetic Algorithm II (NSGA-II) [70] is selected for this purpose, as it is widely used as a reference MOEA. The NSGA-II likewise employs the DP recurrence to evaluate individuals, thereby avoiding the non-polynomial-time methods commonly used in the literature, and integrates single-objective local search procedures to further enhance its performance.
Our results demonstrate that the proposed methods can achieve near-optimal efficiency sets even under a limited computation time of 1 s per task in the instance, while showing further improvements when a larger time limit of 600 s is allowed, not only for the small instances previously considered in the literature but also for large problem instances an order of magnitude greater than those analyzed in [7]. When our efficient sets are compared with the optimal sets for instances where these can be computed, our best-performing method identifies 99.79% of the efficient solutions, showing minimal differences from a lower approximation of these sets in different metrics.
The remainder of the paper is structured as follows. Section 2 presents a mixed-integer programming and a dynamic programming (DP) formulation for the BO-SALBP. The DP formulation is then used to derive a special dynamic programming recursion for the BO-SALBP under chain precedence relations. Section 3 details our evolutionary approach for the BO-SALBP, and Section 4 describes the proposed local search methods. The performance of these methods is compared in Section 5. Finally, Section 6 offers conclusions and outlines potential avenues for future research. Table 1 provides a summary of the notation used in the paper.

2. The Bi-Objective Simple Assembly Line Balancing Problem (BO-SALBP)

2.1. Mathematical Formulation

Simple assembly line balancing problems (SALBPs) consider a set of tasks V = {1, …, n}, each with a deterministic and known processing time t_i, which is the time required by a station to perform the task. Tasks are subject to precedence relations, meaning that some tasks must be completed before others can begin. These precedence relations are represented as a directed acyclic graph G(V, A), where each vertex corresponds to a task, and an arc from vertex i to vertex j indicates a precedence relationship between tasks i and j, meaning task i must be completed before task j.
Each task must be performed in one of the K = {1, …, m} stations that make up the assembly line (with m being an upper bound on the number of stations, e.g., m = n, the number of tasks). The subset of tasks assigned to station k is referred to as its station load, S_k, and the sum of the processing times of the station load is called the station workload, i.e., w_k = Σ_{i ∈ S_k} t_i. Under steady-state conditions, the cycle time c of the assembly line is defined as the maximum workload of any station, i.e., c = max_{k ∈ K} w_k. A solution is an assignment of tasks to stations (i.e., each task is assigned to a single station) such that both the cycle time and the precedence constraints are satisfied. The most common objective functions are station minimization for a given cycle time, known as the SALBP-1, and cycle time minimization for a given number of stations, known as the SALBP-2. The bi-objective simple assembly line balancing problem (BO-SALBP) simultaneously considers both objectives within a Pareto efficiency framework, unlike the SALBP-E, which optimizes both objectives by minimizing total idle time.
A bi-objective integer programming formulation for the BO-SALBP uses binary variables x i k to indicate whether task i is assigned to station k, binary variables y k to indicate whether station k is used, and a continuous variable c to denote the cycle time. The formulation is as follows:
min Σ_{k ∈ K} y_k,  (1)
min c,  (2)
s.t. Σ_{k ∈ K} x_{ik} = 1,  ∀ i ∈ V,  (3)
Σ_{i ∈ V} t_i x_{ik} ≤ c y_k,  ∀ k ∈ K,  (4)
Σ_{k ∈ K} k x_{ik} ≤ Σ_{k ∈ K} k x_{jk},  ∀ (i, j) ∈ A,  (5)
x_{ik} ∈ {0, 1},  ∀ i ∈ V, k ∈ K,  (6)
y_k ∈ {0, 1},  ∀ k ∈ K,  (7)
c ≥ 0.  (8)
Objectives (1) and (2) aim to minimize the number of stations and the cycle time, respectively. Constraints (3) ensure that each task is assigned to a station, constraints (4) guarantee that the cycle time is not exceeded, and that the auxiliary variables for the stations are activated if any task is assigned to the corresponding station. Constraints (5) ensure precedence constraints among tasks. Finally, constraints (6) to (8) define the domain of the decision variables.
The multi-objective nature of the problem, together with the limitations of off-the-shelf integer programming solvers in handling single-objective formulations, led us to disregard the use of commercial solvers for the BO-SALBP. For a more detailed exposition of the issues with the proposed model, see [71], and for a discussion on the application of integer programming solvers to single-objective formulations of the SALBP, see [72].
By relaxing the integrality constraints (6) and (7), we can derive a lower bound on the cycle time for any given number of stations k. This is achieved by summing all task times and evenly dividing them among the k stations. This bound represents the optimal relaxation of the linear model with objective (2). Moreover, if the task times have integer values, the relaxation can be rounded up to the nearest integer. The non-divisibility of tasks allows for an improved bound, as the cycle time cannot be smaller than the largest task time, t_max = max_{i ∈ V} t_i. Consequently, the lower bound becomes (9).
lb(k) = max{ ⌈ Σ_{i ∈ V} t_i / k ⌉, t_max }  (9)
This bound on the cycle time for any number of stations will be used to approximate the optimal set of efficient solutions.
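For integer task times, bound (9) can be computed directly; a minimal sketch (the function name lb mirrors the notation above but is otherwise our own):

```python
import math

def lb(k, times):
    """Lower bound (9): even split of the total work over k stations,
    rounded up for integer task times, never below the largest task."""
    return max(math.ceil(sum(times) / k), max(times))
```

For example, five tasks with times (7, 5, 4, 3, 1) give lb(3) = max(⌈20/3⌉, 7) = 7, while lb(2) = max(10, 7) = 10.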
Alternatively, the BO-SALBP can be formulated as a multi-stage decision process using dynamic programming. In this formulation, states are defined as partial assignments of tasks to stations, represented as (S, k), where S (S ⊆ V) is the subset of tasks assigned to the first k stations. The subset S must internally satisfy the precedence constraints (i.e., if j ∈ S and (i, j) ∈ A, then i ∈ S). The recurrence relation then computes the optimal assignment of these tasks to the specified number of stations, as well as the corresponding cycle time.
Once the recurrence is computed, an optimal solution using k stations is derived from the sequence of decisions made from the initial state (∅, 0) to state (V, k). Additionally, the optimal Pareto set for the BO-SALBP is composed of the set of optimal solutions for all k ∈ {1, …, m}, with the optimal values for each state determined according to the recurrence
f(S, k) = min_{S′ ⊆ S : (11) holds} max{ f(S′, k − 1), Σ_{i ∈ S∖S′} t_i },  S ⊆ V : (11) holds; k = 1, …, m  (10)
j ∈ S, (i, j) ∈ A ⟹ i ∈ S  (11)
with base case f(∅, 0) = 0. Recurrence (10) evaluates the optimal cycle time required to perform a subset of tasks S in k stations by determining the best possible cycle time among all feasible subsets of tasks S′ assigned to the first k − 1 stations, with the remaining S∖S′ tasks assigned to station k. These subsets must comply with the precedence constraints as specified in (11).

2.2. Polynomially Solvable Cases

While the SALBP is NP-hard [73], there are some polynomially solvable special cases. For instance, SALBP-1 instances in which all tasks have exactly one predecessor and one successor, apart from the initial and final tasks, are polynomially solvable. We will refer to this problem as an SALBP with a chain precedence graph. For such cases, an optimal assignment of tasks can be achieved using a single-pass procedure without any look-ahead. The process starts with an empty open station and sequentially assigns tasks to the open station according to their order in the precedence graph. If assigning a task to the current station is feasible within the cycle time constraint, the task is assigned; otherwise, the station is closed, a new station is opened, and the task is assigned to the new station. This procedure has a time complexity of O(n) assuming tasks are pre-ordered according to precedence relations. The method leverages the fact that, for any SALBP instance, there exists an optimal solution that complies with the full-load property, as there are no potential gains from delaying the assignment of a task to a station if it satisfies the precedence and cycle time constraints.
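A sketch of this single-pass procedure, assuming tasks are indexed in chain order and every task fits within the cycle time:

```python
def chain_salbp1(times, cycle_time):
    """Single-pass assignment for a chain precedence graph: fill the
    open station while the next task fits, otherwise open a new one.
    Runs in O(n); assumes max(times) <= cycle_time."""
    stations, load = [[]], 0
    for i, t in enumerate(times):
        if load + t <= cycle_time:
            stations[-1].append(i)
            load += t
        else:
            stations.append([i])  # close the station, open a new one
            load = t
    return stations
```

With times (3, 2, 4, 1, 5) and a cycle time of 5, the procedure yields three stations, [[0, 1], [2, 3], [4]] (0-indexed tasks).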
This single-pass approach is similar to the priority-rule greedy construction heuristics used for the SALBP-1 and other line balancing problems, differing only in task selection. In the special polynomially solvable case, there is always a single candidate task for assignment at each step. In contrast, for general instances, multiple candidate tasks may exist (a task is said to be a candidate if all its predecessors have already been assigned). Here, the available task (defined as a task whose predecessors are already included in the partial solution and whose assignment complies with the cycle time constraint) with the highest priority is assigned to the open station. If no such task is available, the station is closed.
For SALBP-2, these single-pass methods are combined with a bisection search to determine the smallest cycle time value for which the SALBP-1 procedure can provide a solution with the desired number of workstations.
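The bisection wrapper over the single-pass procedure can be sketched as follows, assuming integer task times (the station-counting helper is inlined; names are ours):

```python
def chain_salbp2(times, m):
    """Bisection on the cycle time: smallest c for which the single-pass
    chain procedure needs at most m stations (integer task times)."""
    def stations_needed(c):
        count, load = 1, 0
        for t in times:
            if load + t <= c:
                load += t
            else:
                count, load = count + 1, t
        return count

    lo, hi = max(times), sum(times)   # trivial bounds on the cycle time
    while lo < hi:
        mid = (lo + hi) // 2
        if stations_needed(mid) <= m:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For times (3, 2, 4, 1, 5), three stations allow a cycle time of 5, while a single station needs the full 15.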
An alternative approach for the SALBP-2 with a chain precedence graph is to adapt the dynamic programming recursion outlined in Section 2.1 to account for the fixed sequence of tasks implied by the chain structure. In an SALBP with a chain precedence graph, the only feasible order to assign tasks is exactly this sequence, and precedence feasibility forces every feasible subset T to be of the form {1, …, i}. That is, any feasible subset is a prefix of the sequence, and we can replace T = {1, …, i} with the integer i indicating the prefix length.
Let f(i, k) denote the minimum cycle time needed to perform tasks {1, …, i} using k stations. Given the special structure of the graph, the optimal cycle time for any given number of tasks i and stations k is determined by the best assignment of a prefix {1, …, i′} of tasks to the first k − 1 stations and the subset {i′ + 1, …, i} to station k. This relationship can be computed using the following dynamic programming recursion:
f(i, k) = min_{i′ = 0, …, i−1} max{ f(i′, k − 1), Σ_{ℓ = i′+1}^{i} t_ℓ },  (12)
with base case f(0, 0) = 0. The optimal cycle time with m stations corresponds to the value of f(n, m), and the partition of tasks to stations is derived from the sequence of decisions that lead from f(0, 0) to f(n, m). The recurrence is computed in increasing station order. Since the calculations associated with a state corresponding to a given station depend only on the preceding station, this ordering ensures that all values required to compute a state are available when needed.
This adaptation yields a polynomial-time method for the SALBP-2 with a chain precedence graph. Note that the bisection method mentioned earlier is not polynomial, as its running time depends on the task times, which are not polynomially bounded by the number of tasks. Furthermore, the recurrence not only yields the optimal solution for the SALBP-2 instance with m stations but also provides solutions for any SALBP-2 instance with fewer stations. Therefore, by setting m = n, recursion (12) solves the BO-SALBP for the special case of a chain precedence graph. The recursion (12) can be solved in O(n³) time, as shown below, confirming that the BO-SALBP with a chain precedence graph is polynomially solvable.
Theorem 1. 
The BO-SALBP with a chain precedence graph is solvable in O(n³) time.
Proof. 
Recurrence (12) solves the BO-SALBP with a chain precedence graph. We need to compute at most n × n values, each requiring O(n) operations (the summations in (12) can be precomputed in O(n²)). Consequently, the algorithm has a time complexity of O(n³) and a space complexity of O(n²). □
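A direct implementation of recurrence (12) with prefix sums follows the complexity argument above (a sketch; the backtracking needed to recover the actual task-to-station partitions is omitted):

```python
def chain_bo_salbp(times):
    """DP recurrence (12): f[i][k] is the minimum cycle time to perform
    tasks 1..i on k (non-empty) stations. Solving for every k yields
    the Pareto front of the chain-precedence BO-SALBP."""
    n = len(times)
    pre = [0] * (n + 1)                       # pre[i] = t_1 + ... + t_i
    for i, t in enumerate(times, 1):
        pre[i] = pre[i - 1] + t

    INF = float("inf")
    f = [[INF] * (n + 1) for _ in range(n + 1)]
    f[0][0] = 0
    for k in range(1, n + 1):                 # increasing station order
        for i in range(1, n + 1):
            f[i][k] = min(max(f[j][k - 1], pre[i] - pre[j])
                          for j in range(i))
    # Pareto front as (number of stations k, optimal cycle time f(n, k))
    return [(k, f[n][k]) for k in range(1, n + 1)]
```

With times (3, 2, 4, 1, 5), the front is (1, 15), (2, 9), (3, 5), (4, 5), (5, 5); only the first three points are efficient once dominated points are filtered out.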
This result yields two significant insights that are leveraged in the development of the methods presented in this work. First, we can separate the scheduling and packing components of the BO-SALBP into two distinct problems, allowing us to design effective heuristics for the scheduling aspect. Since the packing part can be optimally solved once the order of the tasks is determined, our heuristic components can focus on task sequencing, encoding solutions as a sequence of operations. Second, every task sequence provides a set of potentially efficient solutions for the BO-SALBP.
Moreover, since any solution to the SALBP-2 can be represented by the order (sequence) in which operations are performed, and the recurrence assigns tasks optimally to the different stations according to their operation order, an algorithm that searches for a solution encoded as a sequence and decodes it using the indicated recurrence will include an optimal solution to the SALBP-2 within its search space.
Furthermore, as all Pareto-efficient solutions for the BO-SALBP must be optimal for a given k—and therefore optimal for a specific SALBP-2 instance—a multi-objective procedure that uses the proposed encoding and decodes it with the recurrence can represent and generate all solutions in an efficient set of the problem by combining the efficiency sets of one or more task orderings.
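Combining the efficiency sets of several orderings reduces to a non-dominated filter over the union of their (stations, cycle time) points; a minimal sketch for two minimization objectives:

```python
def pareto_merge(*fronts):
    """Keep the non-dominated points from the union of several
    efficiency sets (both objectives minimized)."""
    pts = sorted(set().union(*fronts))        # ascending first objective
    merged, best_f2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:                      # strictly improves objective 2
            merged.append((f1, f2))
            best_f2 = f2
    return merged
```

For example, merging {(2, 9), (3, 6)} with {(3, 5), (4, 7)} keeps only (2, 9) and (3, 5).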
However, this approach comes with computational costs. Although bisection methods may be non-polynomial, they are often very efficient in practice, as task times are usually represented as integers and bounds on the cycle time for a given number of stations are readily available [63]. Thus, we restrict the recurrence by excluding states whose number of stations can be identified as inefficient in a preliminary analysis. Specifically, the minimum cycle time for any BO-SALBP solution is the maximum task time, t_max = max_{i ∈ V} t_i. Suppose we have a feasible solution with a cycle time of t_max using k̄ stations. In that case, any solution with more than k̄ stations is guaranteed to be inefficient, so the computation of (12) can be limited to states with k ≤ k̄. Such an initial solution can be obtained using any SALBP-1 exact or heuristic method. In this work, we use a priority-based constructive heuristic that prioritizes tasks by their task times, selecting the task with the largest time among the candidates at each step. The number of stations in this solution will be denoted as m_max.
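Such a priority-based construction (largest-task-time rule) for general precedence graphs can be sketched as follows, assuming every task time is at most the cycle time; the function name and predecessor-set representation are illustrative:

```python
def salbp1_priority(times, preds, cycle_time):
    """Priority-rule SALBP-1 construction: among unassigned tasks whose
    predecessors are all done and which fit in the open station, assign
    the one with the largest task time; open a new station when none fit.
    Assumes max(times) <= cycle_time so the loop always progresses."""
    done, stations, load = set(), [[]], 0
    unassigned = set(range(len(times)))
    while unassigned:
        fits = [i for i in unassigned
                if preds[i] <= done and load + times[i] <= cycle_time]
        if fits:
            i = max(fits, key=lambda j: times[j])
            stations[-1].append(i)
            load += times[i]
            done.add(i)
            unassigned.remove(i)
        else:
            stations.append([])  # close the current station
            load = 0
    return stations
```

Running it with cycle_time = t_max, the number of stations in the returned solution gives the bound denoted m_max above.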

3. Evolutionary Algorithms for the BO-SALBP

We now introduce the proposed evolutionary algorithm for the BO-SALBP. The method is based on the NSGA-II framework, a widely used multi-objective evolutionary algorithm that has been successfully applied in numerous MO-ALBP studies. In this approach, each individual in the population represents a potential solution that could contribute to the Pareto efficiency set. Before detailing the algorithm, we first present several key concepts related to multi-objective optimization.

3.1. Multi-Objective Optimization

The concept of optimality used in this work is based on Pareto efficiency and the notion of dominance between solutions. A solution p is said to dominate another solution q, denoted as p ≻ q, if the following conditions are simultaneously satisfied: (1) p is no worse than q in all objectives, and (2) p is strictly better than q in at least one objective. If only the first condition is met, then p is said to weakly dominate q, denoted as p ⪰ q.
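For two minimization objectives, these definitions translate into a few lines (a minimal sketch over objective-value tuples):

```python
def dominates(p, q):
    """p dominates q: p is no worse than q in every objective and
    strictly better in at least one (all objectives minimized)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def weakly_dominates(p, q):
    """p weakly dominates q: p is no worse than q in every objective."""
    return all(a <= b for a, b in zip(p, q))
```

For instance, (3 stations, cycle time 5) dominates (4, 5), while (3, 7) and (4, 5) are mutually non-dominated trade-offs.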
Using this definition of dominance, a multi-objective optimization problem has an optimal set of solutions known as the Pareto or efficiency set, which comprises all non-dominated solutions in the solution space. The corresponding set of objective vectors is referred to as the Pareto front. Each of these solutions represents a trade-off between objectives, meaning that an improvement in one objective requires a sacrifice in another.
The goal of a multi-objective optimization problem is to find a set of solutions that approximate the Pareto front as closely as possible. This goal implies two key objectives for a heuristic multi-objective optimization approach: (1) to identify solutions that lie on the optimal Pareto set and (2) to ensure that the set of solutions is diverse enough to cover the entire range of the Pareto optimal set.
Based on these goals, we introduce four performance metrics commonly used in the existing literature.

3.1.1. Hypervolume Ratio

The first metric is the hypervolume ratio (HVR), a unary performance metric that evaluates the quality of the non-dominated solution set produced by an algorithm. This metric first identifies a reference point, typically defined by the worst possible values for each objective. It then determines the hypercubes formed by this reference point and each of the non-dominated solutions, and calculates the hypervolume of the union of these hypercubes. In a bi-objective problem, such as the BO-SALBP, the hypercubes are effectively rectangles, and the hypervolume corresponds to the area of the union of these rectangles.
Since the hypervolume metric is influenced by the units of the objectives, it is normalized using the hypervolume of the optimal Pareto set. The HVR is computed as follows:
HVR = HV(P) / HV(P*),
where HV(P) and HV(P*) denote the hypervolumes of the approximate Pareto set P and the optimal Pareto set P*, respectively. An HVR value of 1 indicates that the approximation of the Pareto front perfectly matches the true Pareto front; HVR values lower than 1 indicate that the generated Pareto front is inferior to the true Pareto front.
Calculations depend on the reference point and the optimal set of Pareto solutions. The reference point is typically set using trivial upper bounds on the objectives. For the BO-SALBP, this means the sum of task times for the cycle time and the total number of tasks for the stations. However, initial experiments showed that using these values resulted in HVR values close to 1, even for poor approximations of the Pareto front. This outcome arises from the gap between the cycle time reference value (the sum of all task times) and the cycle time values attainable by efficient solutions, which range from roughly half that sum (when there are only two stations) down to the maximum task time (as the number of stations approaches the number of tasks).
To improve the metric, we apply two adjustments specific to the problem at hand:
  • Tighter Bound: We use a tighter reference point that produces smaller areas. Our reference point is based on heuristic solutions for the SALBP-1 instance with cycle time c = t_max (yielding m_max stations) and for the SALBP-2 instance with two stations, m = 2, both obtained with the greedy method described in Section 2.2 under the largest-task-time priority rule. These solutions provide upper bounds on the number of stations and on the cycle time of any efficient solution.
  • Logarithmic Transformation: We take the natural logarithm of both the objective value and the reference value of the cycle time objective. Applying the natural logarithm reduces the magnitude of the values, making the area calculations less sensitive to large differences in cycle time.
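With these adjustments, computing HV(P) in the bi-objective case reduces to the area of a union of axis-aligned rectangles, which a simple sweep handles. A sketch under the assumption that the front contains mutually non-dominated points and that the cycle-time coordinate has already been log-transformed (the function name is ours):

```cpp
#include <vector>
#include <utility>
#include <algorithm>
#include <cassert>

// Sketch of the bi-objective hypervolume: the area, between a front and a
// reference point, of the union of rectangles spanned by each front point.
// Both objectives are minimized; the first coordinate is assumed to be
// log(cycle time) and the reference point uses the heuristic upper bounds.
double hypervolume2d(std::vector<std::pair<double, double>> front,
                     std::pair<double, double> ref) {
    // Sort by the first objective; for mutually non-dominated points the
    // second objective then decreases, so the union splits into disjoint
    // vertical strips swept from right to left.
    std::sort(front.begin(), front.end());
    double area = 0.0, right = ref.first;
    for (auto it = front.rbegin(); it != front.rend(); ++it) {
        area += (right - it->first) * (ref.second - it->second);
        right = it->first;
    }
    return area;
}
```

The HVR is then the ratio of two such areas, one for the approximate front and one for the reference front.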
To obtain an optimal Pareto set, one can rely on [7] or solve multiple SALBP-1 or SALBP-2 instances, each corresponding to a feasible cycle time or number of stations, and then identify the non-dominated solutions to construct the optimal set. While these methods are viable for instances with a small number of tasks, they become impractical for larger instances. In such cases, the Pareto optimal set is commonly approximated by aggregating all Pareto sets generated by the proposed methods. However, this approach (approximating the optimal Pareto set from above using feasible solutions) has a significant drawback: it relies on the quality of the heuristic methods used, which means it is possible to achieve an optimal hypervolume ratio (HVR equal to 1) even if the actual Pareto front is poorly approximated.
To address this issue, we instead opt to approximate the optimal Pareto set from below. This is carried out by computing a lower bound on the cycle time for each possible number of stations, as described in Section 2.1. We believe that this approach is preferable if the bounding method provides a good approximation of the optimal values, as demonstrated in [71], and it avoids misleading indicators of good performance that may result from deficiencies in approximating the optimal set.
While the proposed changes result in the HVR values reported in Section 5 being lower than those that could be obtained with traditional approaches, our results still provide HVR values near 1.

3.1.2. Coverage Metric

The second indicator we use is the set coverage metric (C). This metric is used to compare the performance of two multi-objective algorithms based on their reported efficient sets without requiring any intermediary reference for the comparison.
Let P and Q be two efficient sets. The coverage from P to Q is computed as
C(P, Q) = |{ q ∈ Q : ∃ p ∈ P, p ⪰ q }| / |Q|.
Based on (14), if C(P, Q) = 1, then all solutions in Q are dominated by or equal to solutions in P (i.e., P is better than Q). Since C(P, Q) and C(Q, P) do not necessarily sum to 1, both must be evaluated to fully compare the performance of the two sets.
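A direct sketch of the coverage computation (illustrative names; both objectives minimized):

```cpp
#include <vector>
#include <utility>
#include <cassert>

// Sketch of the set coverage metric C(P, Q): the fraction of points in Q
// weakly dominated by some point in P (both objectives minimized).
using Obj = std::pair<int, int>;

bool weakly_dominates(const Obj& p, const Obj& q) {
    return p.first <= q.first && p.second <= q.second;
}

double coverage(const std::vector<Obj>& P, const std::vector<Obj>& Q) {
    int covered = 0;
    for (const Obj& q : Q)
        for (const Obj& p : P)
            if (weakly_dominates(p, q)) { ++covered; break; }
    return double(covered) / double(Q.size());
}
```

Note that C(P, Q) and C(Q, P) are computed with the roles of the sets exchanged, which is why both values are reported.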

3.1.3. Inverted Generational Distance

The third indicator we use is the inverted generational distance (IGD). The IGD evaluates both efficiency and coverage, much like the HVR, but it measures differences between fronts rather than computing a ratio of their hypervolumes. Specifically, let P be an approximate Pareto front obtained by a solution method, let P* be the reference Pareto front—here represented by a lower approximation (a lower bound) of the optimal Pareto front—and let d(p, P) = min_{p′ ∈ P} ‖p − p′‖ be the minimum Euclidean distance from a reference point p to any point in P. Then,
IGD(P) = (1 / |P*|) · Σ_{p ∈ P*} d(p, P),
which reports the average distance between the reference points and the approximate front. The range of IGD values depends on the magnitude of the objective values; therefore, the IGD should be used to compare different methods within the same problem, with smaller values being preferable.
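The IGD computation can be sketched as a nearest-neighbor average (illustrative names):

```cpp
#include <vector>
#include <utility>
#include <cmath>
#include <algorithm>
#include <limits>
#include <cassert>

// Sketch of the inverted generational distance: the average Euclidean
// distance from each reference-front point to its nearest point in the
// approximate front P.
using Pt = std::pair<double, double>;

double igd(const std::vector<Pt>& P, const std::vector<Pt>& ref) {
    double total = 0.0;
    for (const Pt& r : ref) {
        double best = std::numeric_limits<double>::infinity();
        for (const Pt& p : P)
            best = std::min(best, std::hypot(p.first - r.first,
                                             p.second - r.second));
        total += best;
    }
    return total / double(ref.size());
}
```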

3.1.4. Multiplicative Unary Epsilon

The final indicator we use is the multiplicative unary epsilon (I_ε). Like the IGD metric, it is a unary measure that compares an approximate Pareto front P with a reference Pareto front P* and computes the minimum factor by which the approximate front must be scaled so that it weakly dominates the reference front. The unary epsilon indicator is defined as
I_ε(P) = inf { ε ∈ ℝ : ∀ p* ∈ P*, ∃ p ∈ P such that f_i(p) ≤ (1 + ε) · f_i(p*) for all i },
where f_i(·) denotes the i-th objective value (in this case, cycle time or number of stations). As with the IGD metric, smaller values are preferable.
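A sketch of the indicator, returning the scaling factor itself (i.e., 1 + ε in the notation above); objective values are assumed positive, and the names are ours:

```cpp
#include <vector>
#include <utility>
#include <algorithm>
#include <limits>
#include <cassert>

// Sketch of the multiplicative epsilon indicator: the smallest factor by
// which every reference-front point must be inflated so that the
// approximate front P weakly dominates the inflated front. Objectives are
// minimized and assumed strictly positive.
using Pt = std::pair<double, double>;

double unary_epsilon(const std::vector<Pt>& P, const std::vector<Pt>& ref) {
    double eps = 0.0;
    for (const Pt& r : ref) {
        double best = std::numeric_limits<double>::infinity();
        for (const Pt& p : P)   // factor needed for p to weakly dominate r
            best = std::min(best, std::max(p.first / r.first,
                                           p.second / r.second));
        eps = std::max(eps, best);
    }
    return eps;  // this is the factor 1 + ε; subtract 1 to recover ε
}
```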

3.2. An NSGA-II Algorithm for the BO-SALBP

Among the various multi-objective evolutionary algorithms available in the literature, the non-dominated sorting genetic algorithm, NSGA-II [70], is one of the most widely used for solving a wide range of MO-ALBPs (see [28,40,52] for some examples). An NSGA-II algorithm is built on (1) single-objective genetic algorithm concepts; (2) additional components for Pareto dominance, crowding, and elitism; and (3) custom encoding, evaluation, and mutation methods tailored to the problem at hand.
Our proposed NSGA-II implementation employs an indirect encoding scheme, where individuals represent permutations (or sequences) of tasks, along with an integer that indicates the number of stations in the solution. The population is initialized in lines 1–4 from Algorithm 1 with a specified number of randomly generated individuals, which is a parameter of the algorithm, as follows:
  • The sequence is constructed by appending tasks one at a time, selecting each task from those whose predecessors have already been included in the sequence under construction (i.e., a topological order of the precedence graph).
  • The number of stations is randomly selected from the range between 2 and m_max.
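The initialization above can be sketched as a random topological sort plus a random station count (a sketch with illustrative names; the successor-list and in-degree representation is our assumption):

```cpp
#include <vector>
#include <random>
#include <cassert>

// Sketch of the individual initialization: a random topological order of
// the precedence graph plus a station count drawn from [2, m_max].
// succ[i] lists the direct successors of task i; indeg[i] is its in-degree.
struct Individual {
    std::vector<int> sequence;  // permutation of tasks (topological order)
    int stations;               // number of stations, in [2, m_max]
};

Individual random_individual(const std::vector<std::vector<int>>& succ,
                             std::vector<int> indeg, int m_max,
                             std::mt19937& rng) {
    const int n = (int)succ.size();
    std::vector<int> ready;     // tasks whose predecessors are all placed
    for (int i = 0; i < n; ++i)
        if (indeg[i] == 0) ready.push_back(i);
    Individual ind;
    while (!ready.empty()) {
        std::uniform_int_distribution<int> pick(0, (int)ready.size() - 1);
        int k = pick(rng);
        int task = ready[k];
        ready.erase(ready.begin() + k);
        ind.sequence.push_back(task);
        for (int j : succ[task])          // release successors
            if (--indeg[j] == 0) ready.push_back(j);
    }
    ind.stations = std::uniform_int_distribution<int>(2, m_max)(rng);
    return ind;
}
```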
After initializing the population, each individual is decoded and evaluated into a solution using recurrence (12) (line 5). The evaluation function also divides the population into multiple ordered sets (Σ_1, Σ_2, …), each containing the subset of individuals that are dominated only by individuals from preceding sets. Specifically, the first Pareto set consists of non-dominated individuals, the second set includes individuals dominated only by those in the first set, the third set includes individuals dominated only by those in the first and second sets, and so on.
The population is then sorted in non-decreasing order based on the set to which each individual belongs, with ties broken using a crowding distance metric. The crowding distance is calculated as follows: the individuals in each set are sorted according to one objective, the crowding distances of the first and last individuals in the set are set to infinity, and the crowding distance of each remaining individual is computed as the average difference in objective values between its immediate neighbors in the ordering.
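The crowding distance described above can be sketched as follows (a sketch following this description, with boundary individuals receiving infinite distance; names are ours):

```cpp
#include <vector>
#include <algorithm>
#include <limits>
#include <cmath>
#include <cassert>

// Sketch of the crowding distance for one Pareto rank: for each objective,
// individuals are sorted by that objective, the two boundary individuals
// get an infinite distance, and each interior individual accumulates the
// average objective difference between its two immediate neighbors.
std::vector<double> crowding_distance(
        const std::vector<std::vector<double>>& objs) {
    const int n = (int)objs.size();
    std::vector<double> dist(n, 0.0);
    if (n == 0) return dist;
    const int m = (int)objs[0].size();
    std::vector<int> idx(n);
    for (int o = 0; o < m; ++o) {
        for (int i = 0; i < n; ++i) idx[i] = i;
        std::sort(idx.begin(), idx.end(), [&](int a, int b) {
            return objs[a][o] < objs[b][o];
        });
        dist[idx.front()] = dist[idx.back()] =
            std::numeric_limits<double>::infinity();
        for (int i = 1; i + 1 < n; ++i)
            dist[idx[i]] += (objs[idx[i + 1]][o] - objs[idx[i - 1]][o]) / 2.0;
    }
    return dist;
}
```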
Once the initial population is sorted, the algorithm repeats the following steps until a time limit is reached:
Algorithm 1: Outline of the NSGA-II algorithm for the BO-SALBP
  • As many new individuals as the size of the current population are generated using a crossover operator and then added to the population. The crossover operator creates two offspring from two parent individuals. For the sequence part, it uses an order-preserving crossover, and for the number of stations, it inherits the value from one of the parents. The operator selects two different positions in the sequence: the first offspring inherits the number of stations from the first parent and copies the sequence from the beginning to the first position and from the second position to the end. The segment between these two positions is filled using the elements from the second parent, preserving their relative order. The second offspring is generated similarly, but with the roles of the parents reversed.
    Parents are selected using a tournament selection procedure. For each parent, a specified number of individuals from the population, determined by a parameter of the algorithm, are randomly chosen. The highest-ranked individual among the selected ones is then chosen as a parent.
  • After generation, each new individual is subjected to mutation, which is controlled by a mutation rate (a probability parameter of the algorithm). Both the station part and the sequence part of the solution are subject to mutation. The mutation operator for the number of stations randomly selects a value k between 2 and m_max and assigns it to the individual. The sequence mutation operator randomly selects a task i from the sequence and moves it to a feasible position within the sequence.
    Feasible positions are determined by the earliest and latest allowable positions for task i according to the current ordering and precedence relations. Let π_j be the position of task j in the sequence. The earliest position for task i is e_i = max_{j:(j,i)∈A} π_j + 1 if task i has any predecessors, or position 1 if it has none. The latest position is l_i = min_{j:(i,j)∈A} π_j − 1 if task i has any successors, or the last position, n, otherwise. Consequently, task i is moved to a random position within the range [e_i, l_i], while maintaining the relative order of the other tasks in the sequence.
  • After the set of new individuals has been generated, each new individual is evaluated using the decoding procedure. The population is then reordered following the same method used for the initial population, and reduced to its original size by keeping the top-ranked individuals.
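The order-preserving crossover for the sequence part can be sketched as follows (illustrative names; because both parents are topological orders of the same precedence graph, the offspring is one as well):

```cpp
#include <vector>
#include <cassert>

// Sketch of the order-preserving crossover: the child copies parent A
// outside the cut points [a, b) and fills the middle positions with the
// missing tasks in the order they appear in parent B. The second offspring
// is obtained by swapping the roles of A and B. Tasks are 0-indexed.
std::vector<int> order_crossover(const std::vector<int>& A,
                                 const std::vector<int>& B,
                                 int a, int b) {  // 0 <= a <= b <= n
    const int n = (int)A.size();
    std::vector<int> child(n, -1);
    std::vector<bool> fixed(n, false);   // indexed by task id
    for (int i = 0; i < a; ++i) { child[i] = A[i]; fixed[A[i]] = true; }
    for (int i = b; i < n; ++i) { child[i] = A[i]; fixed[A[i]] = true; }
    int pos = a;                         // fill [a, b) in B's relative order
    for (int task : B)
        if (!fixed[task]) child[pos++] = task;
    return child;
}
```

Feasibility is preserved because any precedence arc either stays within a copied segment (ordered as in A), within the middle (ordered as in B), or crosses segments in the same direction as in A.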
The method outputs the set of non-dominated individuals from the final population as its solution.

4. Local Search Approaches for the BO-SALBP

Local search approaches are common in single-objective combinatorial optimization problems. A local search procedure begins with an incumbent solution and makes small modifications to explore neighboring solutions, using what is known as a neighborhood operator. If an improving neighbor is found, the incumbent is updated, and the procedure continues until no further improvements can be made, resulting in a local optimum. This framework must be adapted for different optimization problems by defining problem-specific neighborhood operators. Furthermore, for multi-objective problems, additional adjustments are needed, as the incumbent corresponds not to a single solution but to a set of Pareto efficient solutions.
We now introduce classical local search operators for the SALBP, which will be used in combination with the proposed NSGA-II algorithm. We then discuss their integration within the Pareto Local Search framework [69], which extends local search methods to multi-objective problems.

4.1. The Move and Swap Neighborhoods

Common neighborhood structures for single-objective SALBPs include move and swap operators. The move operator reassigns a task from one station to another, while the swap operator exchanges the assignments of two tasks. In both cases, the station assignments must remain feasible, respecting the precedence constraints among tasks.
Moves and swaps seldom improve solutions for the SALBP-1, as reducing the number of stations generally requires transferring a solitary task from one station to another, which is rarely beneficial. However, these operators are significantly more effective for the SALBP-2. In SALBP-2, a solution improves when the load of the critical station (defined as the station whose load equals the cycle time) decreases, while ensuring that the loads of all other stations remain below the cycle time.
It is important to note that the description of these operators assumes a solution representation where tasks are assigned to specific stations. In contrast, when a task sequence representation is used, as in this work, an alternative definition of the move and swap operators is required.
First, note that encoding a solution as a sequence requires the use of a decoding procedure. An important side effect of the decoding operation is that modifying the execution order of tasks not assigned to a critical station can result in solutions that improve the objective function value. This behavior occurs because altering the task order can cause a cascading effect on the assignments to other stations, potentially affecting the critical station as well.
Second, the feasibility of a move or swap in a sequence-based encoding differs from that in a station-assignment encoding. As described for the mutation operators, the earliest and latest positions in the sequence for any task depend on its predecessors and successors. Thus, for a given task i, the set of feasible moves corresponds to changing its position within the range [e_i, l_i], that is, a position no earlier than its earliest and no later than its latest feasible position, while preserving the relative order of the other tasks in the sequence. A swap between tasks i and j is feasible if their current positions in the sequence, π_i and π_j, satisfy e_i ≤ π_j ≤ l_i and e_j ≤ π_i ≤ l_j. Algorithm 2 provides the pseudocode of the method. The solution is represented as a sequence σ, and the position of any task i within the sequence is denoted π_i. The method evaluates all possible moves in the for loop in line 2 and all swaps in the for loop in line 7, repeating both loops until no further improvement is found.
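The position windows [e_i, l_i] used by both the mutation operator and these feasibility checks can be sketched as follows (illustrative names; positions are 0-based here, whereas the text uses 1-based positions):

```cpp
#include <vector>
#include <algorithm>
#include <cassert>

// Sketch of the feasible-window computation: given pi (pi[task] = 0-based
// position of the task in the sequence), the earliest position of task i is
// one past its latest-placed predecessor, and the latest position is one
// before its earliest-placed successor.
struct Window { int earliest, latest; };

Window feasible_window(int i, const std::vector<int>& pi,
                       const std::vector<std::vector<int>>& pred,
                       const std::vector<std::vector<int>>& succ) {
    int e = 0, l = (int)pi.size() - 1;
    for (int j : pred[i]) e = std::max(e, pi[j] + 1);
    for (int j : succ[i]) l = std::min(l, pi[j] - 1);
    return {e, l};
}

// Move task i to position p (assumed inside its window), shifting the
// other tasks while keeping their relative order.
std::vector<int> move_task(std::vector<int> seq, int i, int p) {
    std::vector<int> out;
    for (int task : seq)
        if (task != i) out.push_back(task);
    out.insert(out.begin() + p, i);
    return out;
}
```

A swap of tasks i and j is then feasible exactly when each task's current position lies inside the other task's window.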
Algorithm 2: Local search for the SALBP-2
Finally, note that it is possible to avoid evaluating recurrence in Equation (12) to check whether a specific move or swap leads to an improved solution. This can be achieved by using the single-pass, no-look-ahead procedure described in Section 2.2 with a cycle time reduced by one time unit compared to the incumbent’s cycle time. The computation of the optimal cycle time using recurrence in Equation (12) is only necessary when the new sequence is known to improve upon the incumbent.
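The single-pass, no-look-ahead screen can be sketched as a greedy packing of the sequence into stations (illustrative names; each task time is assumed to be at most c):

```cpp
#include <vector>
#include <cassert>

// Sketch of the single-pass, no-look-ahead procedure used to screen moves
// and swaps: pack a (topologically ordered) sequence into stations with
// cycle time c and report how many stations are needed. A candidate move
// improves the SALBP-2 incumbent if the new sequence still fits in the
// same number of stations when c is one unit below the incumbent's cycle
// time; only then is the full recurrence (12) evaluated.
int stations_needed(const std::vector<int>& seq,
                    const std::vector<int>& t, int c) {
    int stations = 1, load = 0;
    for (int task : seq) {
        if (load + t[task] > c) {   // open a new station for this task
            ++stations;
            load = 0;
        }
        load += t[task];
    }
    return stations;
}
```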
Notice that a local search utilizing these move and swap operators can be seamlessly integrated into the proposed NSGA-II procedure. This is carried out by extending the individual decoding process to include the local search, returning both the optimized cycle time and its corresponding sequence. The improved individual then replaces the one generated by crossover and mutation within the population.

4.2. A Pareto Local Search for the BO-SALBP

Pareto Local Search (PLS) [69] extends traditional local search methods to handle multi-objective optimization problems. The method systematically explores the neighbors of each solution within the efficient set, continuing this process until none of the neighbors of the incumbent solutions yield any new efficient solutions.
An outline of the original method is as follows:
1.
Mark all solutions in the efficient set as unexplored.
2.
For each solution in the efficient set marked as unexplored, mark it as explored and enumerate all of its neighbors. Any neighbor that is not weakly dominated by a solution in the efficient set is added to a secondary set.
3.
Update the efficient set by combining the original set with the solutions from the secondary set, removing dominated solutions. All solutions from the secondary set are marked as unexplored.
4.
If all solutions in the efficient set are marked as explored, the locally optimal efficient set has been found, and the process stops. Otherwise, repeat the last two steps.
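The four steps above can be sketched as a generic loop over an archive of objective vectors (a toy skeleton, not the sequence-based variant of Algorithm 3; the names and the neighbor abstraction are ours):

```cpp
#include <vector>
#include <utility>
#include <algorithm>
#include <functional>
#include <cassert>

// Toy skeleton of the original PLS loop on objective vectors; `neighbors`
// abstracts the problem-specific neighborhood operators. Both objectives
// are minimized.
using Obj = std::pair<int, int>;

bool dominates(const Obj& p, const Obj& q) {
    return p.first <= q.first && p.second <= q.second && p != q;
}

std::vector<Obj> pareto_local_search(
        std::vector<Obj> archive,
        const std::function<std::vector<Obj>(const Obj&)>& neighbors) {
    std::vector<Obj> open = archive;       // solutions still unexplored
    while (!open.empty()) {
        Obj s = open.back();
        open.pop_back();                   // mark s as explored
        for (const Obj& nb : neighbors(s)) {
            bool weak_dom = false;         // weakly dominated by archive?
            for (const Obj& a : archive)
                if (dominates(a, nb) || a == nb) { weak_dom = true; break; }
            if (weak_dom) continue;
            // remove newly dominated entries, then add nb as unexplored
            auto drop = [&](std::vector<Obj>& v) {
                v.erase(std::remove_if(v.begin(), v.end(),
                        [&](const Obj& a) { return dominates(nb, a); }),
                        v.end());
            };
            drop(archive);
            drop(open);
            archive.push_back(nb);
            open.push_back(nb);
        }
    }
    return archive;
}
```

The loop terminates once every archived solution has been explored without producing a new non-dominated point, yielding a locally optimal efficient set.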
The PLS procedure requires an initial efficient set to begin the search, as well as one or more neighborhood operators. In [69], the authors suggest obtaining this set with a weighted version of a competitive algorithm designed for the single-objective case. In our implementation, the PLS method is initialized with the set produced by any of the evolutionary approaches described in Section 3.
Regarding the neighborhood structure, we utilize both move and swap operators applied to a sequence of tasks and employ the BO-SALBP dynamic programming recurrence to decode the sequence into multiple solutions. As a result, the PLS procedure must be adapted to account for the distinction between a single solution and a sequence that generates multiple solutions upon decoding. Algorithm 3 provides pseudocode of the modified PLS method. The adaptations include the following.
Algorithm 3: Pareto Local Search for the BO-SALBP
  • The PLS method maintains a set of efficient solutions P, as well as two sets of sequences, S and S′. The set S contains the sequences to be explored at the start of each iteration, whereas S′ stores any improved sequences found during the current iteration of the algorithm.
  • The set S is initialized with the sequence associated with each solution in the initial efficient set.
  • When evaluating neighbor sequences, each new sequence is decoded using the recurrence in Equation (12). If any decoded solution is efficient with respect to the Pareto set P (see lines 7 or 13), the efficient set is updated, and the new sequence is stored in S′ for subsequent exploration.
  • Instead of deriving a new efficient set as in the original PLS method, the efficient set is updated immediately whenever a non-dominated solution is found. Moreover, as the decoding can generate solutions with different numbers of stations, the neighbors of any efficient solution may provide not only an improved solution with the same number of stations but also multiple new efficient solutions.
  • After exploring all neighbor sequences, a filtering step removes from the set of sequences to explore those that became non-efficient (dominated) due to sequences found later within the same iteration.

5. Computational Experiments

5.1. Methodology

To validate the proposed methods, we used the instance set originally introduced in [74]. This set includes synthetic instances characterized by various attributes commonly emphasized by practitioners in industrial settings. The attributes are as follows.
  • Distribution of task times: Instances are categorized into three groups based on the distribution used to generate task times. The bottom group features task times drawn from a truncated normal distribution with a mean of 100 time units. The middle group uses a truncated normal distribution with a mean of 500 time units. The bimodal group combines task times from both distributions.
  • Number of precedence constraints (order strength): The number of precedence relations is represented as a ratio of direct and indirect precedence relations to the maximum possible number of relations. Instances are classified into low (0.2), medium (0.6), and high (0.9) ratios.
  • Structure of the precedence graph: The precedence graph structure is specified in three ways. Unstructured instances have randomly generated graphs, constrained by the number of precedence relations. Instances with a block structure have groups, or blocks, of tasks that must be completed before tasks in subsequent blocks, with bottleneck tasks positioned between blocks. The chain structure describes instances where tasks within blocks have a direct predecessor and successor, forming a chain-like sequence.
  • Instance size: The instance set includes small (n = 20), medium (n = 50), large (n = 100), and very large (n = 1000) instances. Our experiments focus on small, medium, and large instances. Previous research [73] indicates that very large instances are not more challenging than large ones, except when the average number of tasks per workstation is small (between 2 and 3).
These characteristics are combined into 21 groups, covering every possible combination, except for the high number of constraints group, where only unstructured instances are generated. For each combination of characteristics and instance size, 25 instances were generated, resulting in a total of 2100 instances. More details about these instances can be found in [74].
These instances are used to evaluate the performance of the NSGA-II algorithm and its combination with local search methods. Specifically, we compare the results of four different methods:
  • The basic NSGA-II algorithm, following the description provided in Section 3.2. This method is denoted as NSGA.
  • The NSGA-II algorithm with move and swap neighborhoods integrated into the decoding of individuals, as detailed in Section 4.1. This method is denoted as NSGA+LS.
  • The basic NSGA-II algorithm, followed by the PLS local search—see Section 4.2—applied to its final efficient set. This method is denoted as NSGA+PLS.
  • The NSGA-II algorithm with move and swap neighborhoods integrated into the decoding of individuals, followed by the PLS local search. This method is denoted as NSGA+.
Moreover, for small instances (n = 20), we compute the optimal Pareto set by solving SALBP-2 instances for each number of stations ranging from 2 to n for each BO-SALBP instance, using the code described in [3] without a time limit. The optimal cycle time values obtained can then be used to derive the optimal efficient set for the instance. However, this approach is limited to small instances due to the computational challenge of optimally solving multiple NP-hard problems per instance.
The proposed methods were implemented in C++ and compiled using the GNU Compiler Collection (GCC) version 12.2.0. All experiments were conducted on a computer equipped with an AMD Ryzen 7 8845HS processor (3.8 GHz) and 32 GB of RAM, running the Linux operating system. The computer has eight CPU cores, which were used to run eight simultaneous executions of the code on different problem instances.

5.2. Parameter Settings

Since the performance of the proposed evolutionary method is influenced by the population size (number of solutions held in the population between iterations), the tournament size (used for crossover), and the mutation rate (used to control random mutation), we use the Irace [75] automatic parameter tuning package for their selection. Irace employs a Friedman-test-based approach to identify the best parameters according to their performance across a set of training instances. To prevent overfitting, we use a different set of instances for parameter selection. These training instances were generated using the instance generator documented in [73], which follows the methodology described in [74].
Based on the findings in [3,73], the main factors influencing the performance of solution procedures for the SALBP are the instance size and the distribution of task times. Regarding the characteristics of the precedence graph, the most challenging instances were those that were unstructured with low or medium order strengths. Therefore, for each instance size ( n = { 20 , 50 , 100 } ), we generated two instances per combination of task time distribution (bottom, middle, and bimodal), low or medium order strength, and an unstructured precedence graph, resulting in 12 training instances per instance size.
Irace is configured to optimize the HVR metric for each instance size separately, with a budget of 1000 independent algorithm runs allocated for parameter optimization. We opted to optimize the HVR metric, as it provides a single-value measure that simultaneously assesses the quality and diversity of the sets produced by the method; however, other unary metrics, such as I_ε or IGD, could be used. The method's termination condition is a time limit proportional to the number of tasks in the instance (the low time limit, set to 20 · n seconds, a common choice in MOEAs [31]) or 600 s (the high time limit), a common time limit in previous ALBP studies (see [2,3]). This time limit applies only to the evolutionary part of the algorithm (either NSGA or NSGA+LS) and not to the PLS optimization. This decision is justified by the fact that the running time of the PLS optimization is negligible compared with that of the evolutionary method. For instance, in our experiments, the average time to perform the PLS step on large instances was only 0.45 s.
Table 2 presents the range of values allowed for each parameter, as well as the best parameters found for each time limit (either low or high) and instance size (20, 50, or 100 tasks).
The values indicate a preference for large population sizes and a high degree of selective pressure, as indicated by the tournament size, along with generally low mutation rates. However, in some cases, Irace reports a preference for a small population size on large instances, likely because the running time of the local search and fitness evaluation grows more than linearly with the number of tasks.

5.3. Results for Small Instances (n = 20)

Table 3 presents the average HVR metric for each of the proposed methods (columns NSGA, NSGA-LS, NSGA-PLS, and NSGA+) and time limits (low and high) on small instances, grouped by their characteristics (the rows). The table also includes the average HVR metric for the optimal Pareto set, as all these instances can be solved to optimality using a branch, bound, and remember algorithm [3]. Note that the HVR metric for the optimal Pareto set is not equal to 1 because it compares the efficient set of each method to the lower-bound approximation introduced in Section 3.1.1. Moreover, the HVR reported in the table scales the cycle time objective to the logarithm of its value, bringing its range to a scale similar to that of the other objective.
As shown in Table 3, the proposed methods achieve HVR values close to 1, with the best-performing methods being those that incorporate either or both local search procedures. Additionally, when comparing the HVR values of the exact method with those of the other approaches, the results are very close to those of the optimal Pareto set, indicating that the proposed methods are capable of producing (near-)optimal efficient sets.
This finding is further illustrated in Figure 1, and the differences between methods can be more effectively compared using the coverage metric reported in Table 4. Figure 1 displays the combined Pareto fronts for all small instances, with the y-axis (cycle time objective) on a logarithmic scale. Four fronts are reported: the front from the NSGA method (dotted gray line), from the NSGA+ method (solid gray line), from the exact method (dotted black line), and the lower-bound approximation of the efficient set used for HV(P*) (solid black line). The combined fronts were constructed as follows: for each feasible number of stations, the cycle times of the efficient solutions from each of the 525 instances were averaged (if a front does not contain a solution with a specific number of stations, the efficient solution is taken as the closest one with a smaller number of stations). After calculating the average cycle times for each possible number of stations, any points that do not meet the necessary efficiency conditions (i.e., the same average cycle time with a higher number of stations) are removed.
The fronts in Figure 1 illustrate that the Pareto sets from both the worst (NSGA) and best (NSGA+) heuristic methods largely overlap, with the NSGA+ method being virtually indistinguishable from the exact Pareto front. This is further highlighted by the coverage metric reported in Table 4, where the coverage value from the NSGA+ heuristic to the exact method is 0.9979; in other words, 99.79% of the Pareto efficient solutions from the exact method are shared with the NSGA+ solutions. The coverage metrics in Table 4 also show minimal differences among the various heuristic approaches, with NSGA-LS and NSGA+ emerging as the top-performing methods. Additionally, there is no observed improvement between the 20-s and 600-s time limits for small instances, suggesting that no additional time is necessary to achieve better solutions at this instance size.
Beyond the comparison of heuristic methods—which will be further analyzed for medium and large instances—Figure 1 enables us to compare our approximation from below of the optimal Pareto set with the optimal Pareto set obtained by solving multiple SALBP-2 instances. This comparison provides insights into the validity of our HV(P*) calculations.
To make this comparison, we examine the combined Pareto sets from the exact solution method (dotted black line) and our approximation (solid black line). The difference between the two is negligible for a small number of stations, reaches a maximum when the number of stations corresponds to roughly two to three tasks per station (i.e., approximately 7 to 11 stations), and then decreases. This behavior is expected, as SALBP instances with a task-to-station ratio of two to three are the most challenging to solve, due to the weaker quality of the lower bounds used both in exact enumeration and in our approximation from below. Nevertheless, the plot reveals only small differences (less than a decimal unit on the logarithmic scale) between the exact set and the approximation, indicating that our approximation serves as a reliable substitute when the exact set is unavailable, such as for larger instances.

5.4. Results for Medium and Large Instances

Table 5, Table 6, Table 7 and Table 8, along with Figure 2 and Figure 3, present the results of the proposed methods on instances with a medium and large number of tasks grouped according to instance characteristics.
Table 5 reports the HVR metric for each algorithm and time limit, with instances grouped by their characteristics, for medium instances, while Table 6 provides the same results for large instances; in both, the cycle time objective is measured as the logarithm of its value, as in Table 3. Both tables show HVR values close to one, with minor improvements for methods that incorporate a local search step, either within the evolutionary algorithm (the NSGA-LS and NSGA-PLS methods) or as a combined approach (the NSGA+ method). For both medium and large instances, there are slight performance gains when larger time limits are allocated to the methods. However, larger gaps to the ideal HVR value are observed for instances with the middle task time distribution and medium to large order strength. This observation is consistent with previous findings on single-objective SALBP instances [3,73], where instances with these characteristics posed greater challenges due to lower bound quality issues (which directly affect the gap between our HV(P*) approximation and its optimal value) and a more complex interplay of cycle time and precedence constraints.
Despite these challenges, the reported HVR values remain close to one across all algorithms. We therefore use the coverage metric to better compare the performance of the different methods and time limits, as it provides a clearer comparison without relying on an external reference such as our approximation of the optimal Pareto set. The coverage metric for medium instances is shown in Table 7, while Table 8 presents the results for large instances. Both tables reveal similar patterns, underscoring the limitations of the NSGA-II method without the support of a local search method (NSGA row), whose efficient sets are significantly dominated by those produced by the other methods. Among the methods using only one local search strategy, the approach that integrates local search into the evolutionary algorithm (NSGA-LS) outperforms the one that applies local search only to the final efficient set (NSGA-PLS). This finding, along with the observed differences between methods with and without local search, emphasizes the crucial role of local search in enhancing the efficient sets produced by the evolutionary algorithm. Although the improvements from local search are relatively minor in terms of the cycle time objective for a given number of stations (as reflected by the HVR metric), they are impactful, as the local search method consistently improves a substantial portion of the solutions in the efficient sets.
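The coverage metric C(P, Q) admits a compact definition: the fraction of solutions in Q that are weakly dominated by at least one solution in P. A minimal sketch for bi-objective minimization (an illustration of the standard binary coverage indicator, not the exact code used here):

```python
def weakly_dominates(p, q):
    """p ⪯ q for minimization: p is no worse than q in every objective."""
    return all(pi <= qi for pi, qi in zip(p, q))

def coverage(P, Q):
    """C(P, Q): fraction of solutions in Q weakly dominated by some solution in P."""
    return sum(any(weakly_dominates(p, q) for p in P) for q in Q) / len(Q)
```

Note that C is not symmetric: C(P, Q) = 1 means every solution in Q is matched or improved by P, while C(Q, P) may still be strictly positive, which is why both directions are reported when comparing two methods.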
When focusing on the two best-performing methods, NSGA-LS and NSGA+, we observe that applying the Pareto Local Search (PLS) after the evolutionary algorithm, in conjunction with local search, results in further improvements in the coverage metric. This improvement is more pronounced in large instances: with a 600 s time limit, the coverage of NSGA+ over NSGA-LS reaches 0.9987, while the coverage of NSGA-LS over NSGA+ is 0.9812. This result demonstrates that the PLS method, which evaluates sequences across all possible numbers of stations rather than a fixed number, leads to minor but meaningful improvements. Given the short execution time of the PLS method, less than half a second on average for large instances, the combined method emerges as the most effective of all proposed approaches. It is also worth noting that there is no strict dominance between these methods; neither method has a coverage value of 1 over the other, mainly because the results come from independent runs, which introduces variability due to the inherent randomness of the methods.
Figure 2 and Figure 3 provide visual representations of the average combined Pareto fronts for the NSGA and NSGA+ methods with a 600 s time limit on medium and large instances, respectively. Both figures also show the average combined Pareto front for the approximation from below of the optimal Pareto set, HV(P*). The differences between the methods, shown in gray, are particularly noticeable for mid-range values of the number of stations and are more substantial in large instances. This occurs because the NSGA-II method, which emphasizes exploration over exploitation, often fails to identify locally optimal solutions.
When comparing the Pareto sets obtained by the NSGA+ method to the approximation of the optimal set, we observe a similar trend to that seen in small instances in Figure 1. The NSGA+ method successfully approximates the optimal set, with differences being smaller and less visible when the number of stations is low. These discrepancies can be attributed to both the gap between the approximation and the actual exact set, as well as potential gaps between the set found by the NSGA+ method and the true efficient set. Nonetheless, both figures show only minor divergences, underscoring the quality of the approximation from above (the heuristic approach) and from below (the bounds) relative to the optimal set.
To further compare the methods, Table 9 reports the average HVR, IGD, and Iϵ metrics for each method and time limit on instances with 50 and 100 tasks. In this case, HVR is calculated without the logarithmic transformation, which avoids duplicating the results shown in previous tables and further emphasizes the extent to which the hypervolume of our efficient sets coincides with that of the reference set. The best value for each metric, instance size, and time limit across methods is shown in boldface.
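The IGD and multiplicative unary epsilon indicators also have compact definitions: IGD averages, over the reference set, the distance from each reference point to its nearest point in the approximation, while Iϵ is the smallest factor by which the approximation must be scaled to weakly dominate every reference point. A minimal sketch for minimization, assuming strictly positive objective values for the epsilon indicator (illustrative only, not the evaluation code used in the experiments):

```python
import math

def igd(P, ref_set):
    """Inverted generational distance: mean Euclidean distance from each
    reference point to its nearest point in the approximation set P."""
    return sum(min(math.dist(z, p) for p in P) for z in ref_set) / len(ref_set)

def unary_epsilon(P, ref_set):
    """Multiplicative unary epsilon: smallest factor eps such that, for every
    reference point z, some p in P satisfies p_i <= eps * z_i in all objectives."""
    return max(min(max(p_i / z_i for p_i, z_i in zip(p, z)) for p in P)
               for z in ref_set)
```

With these conventions, an IGD of 0 and an Iϵ of 1 both indicate that the approximation matches the reference set.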
While the best-performing algorithms remain consistent across different metrics, the differences are small, particularly between the NSGA-LS and NSGA+ methods. To assess whether these differences are statistically significant, we conducted non-parametric tests [76] comparing the eight algorithm–time-limit treatments. We chose non-parametric tests rather than classical parametric ones to avoid potential violations of normality and homoscedasticity, noting that when parametric assumptions hold, non-parametric procedures are typically less efficient (i.e., have lower statistical power), which increases the risk of false negatives. The Friedman test, a non-parametric alternative to repeated-measures ANOVA, was applied first. For all instance sizes and metrics, the Friedman test indicates statistically significant differences (the smallest chi-square value among the six tests is 2550, with a p-value effectively 0 within machine precision).
Each Friedman test was followed by pairwise Wilcoxon signed-rank tests (Holm-adjusted to account for multiple comparisons). These tests yielded identical conclusions regardless of the performance metric used (HVR, IGD, or Iϵ) and differed only with respect to instance size. For instances with n = 50 tasks, the tests grouped the methods into six performance categories: the best group comprises the NSGA-LS and NSGA+ methods with the high time limit, followed by the same two methods with the low time limit, and then the remaining methods in the order indicated by their respective average values. For instances with n = 100 tasks, the tests revealed significant differences among all methods, with the largest Holm-adjusted pairwise p-value equal to 0.0048 (less than 0.5%). These results highlight the importance of the local-search components in obtaining high-quality efficient sets of solutions and confirm that the improvements introduced by the PLS approach are statistically significant.
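To illustrate how the Friedman statistic ranks the treatments within each instance (block), the following sketch computes mid-ranks and the chi-square statistic χ² = 12/(nk(k+1)) · Σ Rj² − 3n(k+1), where Rj is the total rank of treatment j over n blocks. In practice a statistical package (e.g., scipy.stats.friedmanchisquare) would also supply the p-value, which requires the chi-square distribution with k − 1 degrees of freedom; this pure-Python sketch is for exposition only and is not the test suite used in the paper.

```python
def mid_ranks(values):
    """Rank values ascending (lower = better for minimization metrics),
    assigning mid-ranks to ties; ranks are 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mid-rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def friedman_statistic(blocks):
    """Friedman chi-square statistic for a list of blocks (instances),
    each holding one score per treatment (algorithm/time-limit pair)."""
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        for j, r in enumerate(mid_ranks(block)):
            rank_sums[j] += r
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3.0 * n * (k + 1)
```

When every block ranks the treatments identically, the statistic attains its maximum n(k − 1), which is the pattern behind the very large chi-square values reported above.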
Finally, we report additional performance results in terms of the number of iterations, the percentage of time consumed by the local-search phase, and the time required for the PLS phase. For each instance size, Table 10 presents the average number of iterations (column # iterations), the average percentage of the allotted time devoted to local search (column % in LS), and the average time in seconds required for the PLS post-processing step (column PLS), for both the low and high time limits.
Note that Table 10 shows that the algorithm performs fewer iterations for n = 20 than for n = 50. This is explained by the larger time budget available for n = 50 (the low time limit is set to 20 · n seconds), not by algorithmic slowdowns. Even so, the number of iterations is high in both cases. The large number of iterations, together with the improvements observed when moving from low to high time limits, underscores the method’s ability to keep improving solutions beyond the initial phases. The time required by the local search also accounts for a large share of the total runtime and is likely to become a computational burden for very large instances. Nevertheless, its contribution to finding high-quality efficient sets is consistently highlighted by all metrics, so its use is recommended even for large instances.
To conclude, local search consumes a substantial fraction of the computation time—especially for larger instances—but its effect on overall performance is significant. Moreover, the number of iterations is large, indicating that, for the instance sizes considered in this paper, these time requirements do not hinder performance, although they may become an issue for larger instances. Finally, while the PLS contributes modest improvements in terms of the reported metrics, its effect is statistically significant and comes at a very low computational cost.

6. Conclusions and Future Research

In this work, we have studied a version of the simple assembly line balancing problem (SALBP) that aims to simultaneously optimize line efficiency by minimizing the cycle time and the number of stations, a problem known in the literature as BO-SALBP. This problem had previously been addressed in the literature using an exact method, whose applicability was limited to solving small-sized instances (with 20 tasks). In contrast, the present study explores the application of multi-objective heuristic methods for solving larger problems.
To this end, an adapted version of the NSGA-II method has been proposed, and the method has been hybridized with local search procedures derived from SALBP-2, a problem in which the cycle time is minimized given a fixed number of stations. Additionally, a multi-objective local search approach from [69] has been extended to the studied problem. Apart from the proposed algorithms, we have also identified a special case of the BO-SALBP that can be solved in polynomial time using a dynamic program. This dynamic program is used by the proposed methods and offers two advantages over previously available methods: (1) it allows for efficient, i.e., polynomial-time, evaluation of individuals, thereby avoiding the need for a bisection procedure to obtain the optimal cycle time, and (2) it provides a tool for evaluating all BO-SALBP solutions associated with a task sequence, a feature used in our multi-objective local search.
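The polynomially solvable case mentioned above, in which the precedence graph forms a chain, can be illustrated with the textbook linear-partition recurrence f(i, k) = min_{j<i} max(f(j, k − 1), t_{j+1} + … + t_i): the last station receives tasks j + 1 through i, and the cycle time is the larger of that station's load and the best value for the first j tasks on k − 1 stations. The following O(n²·m) sketch evaluates a fixed task sequence for every number of stations at once, which is the feature exploited in advantage (2); it is an illustrative implementation, and the actual code used by the proposed methods may differ.

```python
def chain_dp(times, m_max=None):
    """For a fixed task sequence (chain precedence), compute the minimum
    cycle time for every number of stations k = 1..m_max via
    f(i, k) = min_{j<i} max(f(j, k-1), sum of times[j..i-1])."""
    n = len(times)
    if m_max is None:
        m_max = n
    prefix = [0] * (n + 1)                      # prefix sums of task times
    for i, t in enumerate(times, 1):
        prefix[i] = prefix[i - 1] + t
    INF = float("inf")
    f = [[INF] * (m_max + 1) for _ in range(n + 1)]
    f[0][0] = 0                                 # zero tasks need zero stations
    for k in range(1, m_max + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):           # last station holds tasks j+1..i
                load = prefix[i] - prefix[j]
                f[i][k] = min(f[i][k], max(f[j][k - 1], load))
    # one (number of stations, minimum cycle time) point per station count
    return [(k, f[n][k]) for k in range(1, m_max + 1)]
```

For instance, the sequence with times (4, 2, 3, 1) yields the points (1, 10), (2, 6), (3, 4), (4, 4); filtering the dominated point (4, 4) gives the efficient set associated with that sequence.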
Our experiments demonstrate the contribution of local search methods to solving multi-objective balancing problems, particularly for larger instances, where incorporating local search into our evolutionary procedure significantly improves the sets of efficient solutions found. We also highlight the importance of comparing the developed procedures not only among themselves but also with a theoretical efficient set obtained through lower bounds. Using an efficient set derived from lower bounds emphasizes the quality of the solutions found, even those obtained by a pure evolutionary method like the NSGA-II algorithm, without basing our conclusions solely on comparisons among our proposed methods. Finally, we emphasize the high quality achieved by the developed methods, even though they are heuristic procedures. For optimally solvable problems, i.e., those with a small number of tasks, our heuristic procedure yields efficient sets that are practically indistinguishable from those obtained by an optimal procedure (we find 99.79% of all efficient solutions). For larger instances, we observe minimal differences compared to the sets derived from our theoretical optimal solution set (our best-performing method reports an IGD value of 6.638 distance units, which hints at a low average increase in the cycle time of our solutions relative to those of the reference set).
We should also highlight the practical conclusions that can be drawn when balancing assembly lines. On the one hand, we observe that heuristic methods combining evolutionary procedures with local search can efficiently and effectively obtain sets of efficient solutions. This opens the door to analyzing the trade-offs involved in increasing or decreasing the line’s throughput relative to the costs associated with a line having more or fewer workstations, especially the trend of diminishing returns from adding new workstations: as more workstations are added, the cycle time decreases, but the improvements become smaller, eventually becoming negligible. Moreover, our results show that the difference between the theoretical efficient set (derived from a lower bound with minimal idle time) and the set obtained by the method is smallest when fewer stations are used. This highlights the need for the decision maker to assess the practical relevance of the additional specialization achieved by increasing the number of stations, thereby reducing the number of tasks per workstation, as this may lead to greater idle times at each workstation. Finally, we detect a need to study in more detail balancing problems with more precedence constraints, as well as problems in which the distribution of task times shows larger variability.
Finally, we conclude this work by outlining some possible lines of future research. First, the results demonstrate the quality of these types of procedures for solving the simple problem, and, as with single-objective cases, we believe that the conclusions and advances made for the BO-SALBP can also be extended to other multi-objective problems. This parallels the simple problem, where the best heuristic and exact procedures for the SALBP are also competitive for problems with additional considerations. Second, we think it is important to generalize the study of cases with specific properties in the ALBP literature. These special cases provide problem-specific insights that allow the development of heuristics and metaheuristics exploiting their features. This research avenue has already been explored in previous works such as [61,62], but we believe it holds great potential for the development of efficient solution methods for other line-balancing problems. Third, methods such as those proposed in this work, namely the evaluation of individuals through a recurrence derived from dynamic programming and the generalization of local search methods to encodings that represent not just a single solution but a set of them, can be applied to other multi-objective optimization procedures, such as NSGA-III [77]. Since these procedures offer improvements over classical approaches like NSGA-II, one could expect better results from hybridizing the ideas presented in this work with them.

Author Contributions

Conceptualization, methodology, investigation, resources, data curation, writing—review and editing, and visualization, J.P. and M.V.; software, formal analysis, and writing—original draft preparation, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original data and source code presented in this study are openly available at http://github.com/jordipereiragude/BO-SALBP, accessed on 13 October 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Baybars, I. Survey of exact algorithms for the simple assembly line balancing problem. Manag. Sci. 1986, 32, 909–932. [Google Scholar] [CrossRef]
  2. Álvarez-Miranda, E.; Pereira, J.; Vargas, C.; Vilà, M. Variable-depth local search heuristic for assembly line balancing problems. Int. J. Prod. Res. 2022, 61, 3103–3121. [Google Scholar] [CrossRef]
  3. Álvarez-Miranda, E.; Pereira, J.; Vilà, M. A branch, bound and remember algorithm for maximizing the production rate in the simple assembly line balancing problem. Comput. Oper. Res. 2024, 166, 106597. [Google Scholar] [CrossRef]
  4. Morrison, D.R.; Sewell, E.C.; Jacobson, S.H. An application of the branch, bound, and remember algorithm to a new simple assembly line balancing dataset. Eur. J. Oper. Res. 2014, 236, 403–409. [Google Scholar] [CrossRef]
  5. Sternatz, J. Enhanced multi-Hoffmann heuristic for efficiently solving real-world assembly line balancing problems in automotive industry. Eur. J. Oper. Res. 2014, 235, 740–754. [Google Scholar] [CrossRef]
  6. Delorme, M.; Iori, M.; Martello, S. Bin packing and cutting stock problems: Mathematical models and exact algorithms. Eur. J. Oper. Res. 2016, 255, 1–20. [Google Scholar] [CrossRef]
  7. Cerqueus, A.; Delorme, X. A branch-and-bound method for the bi-objective simple line assembly balancing problem. Int. J. Prod. Res. 2019, 57, 5640–5659. [Google Scholar] [CrossRef]
  8. Becker, C.; Scholl, A. A survey on problems and methods in generalized assembly line balancing. Eur. J. Oper. Res. 2006, 168, 694–715. [Google Scholar] [CrossRef]
  9. Scholl, A.; Becker, C. State-of-the-art exact and heuristic solution procedures for simple assembly line balancing. Eur. J. Oper. Res. 2006, 168, 666–693. [Google Scholar] [CrossRef]
  10. Battaïa, O.; Dolgui, A. A taxonomy of line balancing problems and their solution approaches. Int. J. Prod. Econ. 2013, 142, 259–277. [Google Scholar] [CrossRef]
  11. Boysen, N.; Schulze, P.; Scholl, A. Assembly line balancing: What happened in the last fifteen years? Eur. J. Oper. Res. 2022, 301, 797–814. [Google Scholar] [CrossRef]
  12. Battaïa, O.; Dolgui, A. Hybridizations in line balancing problems: A comprehensive review on new trends and formulations. Int. J. Prod. Econ. 2022, 250, 108673. [Google Scholar] [CrossRef]
  13. Chen, Y.Y.; Cheng, C.Y.; Li, J.Y. Resource-constrained assembly line balancing problems with multi-manned workstations. J. Manuf. Syst. 2018, 48, 107–119. [Google Scholar] [CrossRef]
  14. Defersha, F.; Mohebalizadehgashti, F. Simultaneous balancing, sequencing, and workstation planning for a mixed model manual assembly line using hybrid genetic algorithm. Comput. Ind. Eng. 2018, 119, 370–387. [Google Scholar] [CrossRef]
  15. Pereira, J. Modelling and solving a cost-oriented resource-constrained multi-model assembly line balancing problem. Int. J. Prod. Res. 2018, 56, 3994–4016. [Google Scholar] [CrossRef]
  16. Sadeghi, P.; Rebelo, R.; Ferreira, J. Balancing mixed-model assembly systems in the footwear industry with a variable neighbourhood descent method. Comput. Ind. Eng. 2018, 121, 161–176. [Google Scholar] [CrossRef]
  17. Salehi, M.; Maleki, H.; Niroomand, S. A multi-objective assembly line balancing problem with worker’s skill and qualification considerations in fuzzy environment. Appl. Intell. 2018, 48, 2137–2156. [Google Scholar] [CrossRef]
  18. Tiacci, L.; Mimmi, M. Integrating ergonomic risks evaluation through OCRA index and balancing/sequencing decisions for mixed model stochastic asynchronous assembly lines. Omega 2018, 78, 112–138. [Google Scholar] [CrossRef]
  19. Yuan, M.; Yu, H.; Huang, J.; Ji, A. Reconfigurable assembly line balancing for cloud manufacturing. J. Intell. Manuf. 2019, 30, 2391–2405. [Google Scholar] [CrossRef]
  20. Abidin Çil, Z.; Kizilay, D. Constraint programming model for multi-manned assembly line balancing problem. Comput. Oper. Res. 2020, 124, 105069. [Google Scholar] [CrossRef]
  21. Koltai, T.; Dimény, I.; Gallina, V.; Gaal, A.; Sepe, C. An analysis of task assignment and cycle times when robots are added to human-operated assembly lines, using mathematical programming models. Int. J. Prod. Econ. 2021, 242, 108292. [Google Scholar] [CrossRef]
  22. Moreira, M.; Miralles, C.; Costa, A. Model and heuristics for the Assembly Line Worker Integration and Balancing Problem. Comput. Oper. Res. 2015, 54, 64–73. [Google Scholar] [CrossRef]
  23. Pastor, R.; García-Villoria, A.; Laguna, M.; Martí, R. Metaheuristic procedures for the lexicographic bottleneck assembly line balancing problem. J. Oper. Res. Soc. 2015, 66, 1815–1825. [Google Scholar] [CrossRef]
  24. Nourmohammadi, A.; Zandieh, M. Assembly line balancing by a new multi-objective differential evolution algorithm based on TOPSIS. Int. J. Prod. Res. 2011, 49, 2833–2855. [Google Scholar] [CrossRef]
  25. Akyol, S.; Baykasoğlu, A. ErgoALWABP: A multiple-rule based constructive randomized search algorithm for solving assembly line worker assignment and balancing problem under ergonomic risk factors. J. Intell. Manuf. 2019, 30, 291–302. [Google Scholar] [CrossRef]
  26. Choi, G. A goal programming mixed-model line balancing for processing time and physical workload. Comput. Ind. Eng. 2009, 57, 395–400. [Google Scholar] [CrossRef]
  27. Ramezanian, R.; Ezzatpanah, A. Modeling and solving multi-objective mixed-model assembly line balancing and worker assignment problem. Comput. Ind. Eng. 2015, 87, 74–80. [Google Scholar] [CrossRef]
  28. Babazadeh, H.; Javadian, N. A novel meta-heuristic approach to solve fuzzy multi-objective straight and U-shaped assembly line balancing problems. Soft Comput. 2019, 23, 8217–8245. [Google Scholar] [CrossRef]
  29. Katiraee, N.; Calzavara, M.; Finco, S.; Battaïa, O.; Battini, D. Assembly line balancing and worker assignment considering workers’ expertise and perceived physical effort. Int. J. Prod. Res. 2023, 61, 6939–6959. [Google Scholar] [CrossRef]
  30. Yin, T.; Zhang, Z.; Zhang, Y.; Wu, T.; Liang, W. Mixed-integer programming model and hybrid driving algorithm for multi-product partial disassembly line balancing problem with multi-robot workstations. Robot. Comput.-Integr. Manuf. 2022, 73, 102251. [Google Scholar] [CrossRef]
  31. Zhang, Z.; Tang, Q.; Chica, M.; Li, Z. Reinforcement Learning-Based Multiobjective Evolutionary Algorithm for Mixed-Model Multimanned Assembly Line Balancing Under Uncertain Demand. IEEE Trans. Cybern. 2024, 54, 2914–2927. [Google Scholar] [CrossRef]
  32. Liu, X.; Yang, X.; Lei, M. Optimisation of mixed-model assembly line balancing problem under uncertain demand. J. Manuf. Syst. 2021, 59, 214–227. [Google Scholar] [CrossRef]
  33. Meng, K.; Tang, Q.; Zhang, Z.; Li, Z. Robust mixed-model assembly line balancing and sequencing problem considering preventive maintenance scenarios with interval processing times. Swarm Evol. Comput. 2023, 77, 101255. [Google Scholar] [CrossRef]
  34. Zhang, Z.; Tang, Q.; Ruiz, R.; Zhang, L. Ergonomic risk and cycle time minimization for the U-shaped worker assignment assembly line balancing problem: A multi-objective approach. Comput. Oper. Res. 2020, 118, 104905. [Google Scholar] [CrossRef]
  35. Roshani, A.; Paolucci, M.; Giglio, D.; Tonelli, F. A hybrid adaptive variable neighbourhood search approach for multi-sided assembly line balancing problem to minimise the cycle time. Int. J. Prod. Res. 2021, 59, 3696–3721. [Google Scholar] [CrossRef]
  36. Zhou, B.; Bian, J. A bi-objective salp swarm algorithm with sine cosine operator for resource constrained multi-manned disassembly line balancing problem. Appl. Soft Comput. 2022, 131, 109759. [Google Scholar] [CrossRef]
  37. Zacharia, P.T.; Nearchou, A.C. A new multi-objective genetic algorithm for solving the fuzzy stochastic multi-manned assembly line balancing problem. Int. J. Prod. Res. 2025, 1–23. [Google Scholar] [CrossRef]
  38. Zeng, Y.; Zhang, Z.; Yin, T.; Zheng, H. Robotic disassembly line balancing and sequencing problem considering energy-saving and high-profit for waste household appliances. J. Clean. Prod. 2022, 381, 135209. [Google Scholar] [CrossRef]
  39. Mellouli, A.; Mellouli, R.; Triki, H.; Masmoudi, F. An efficient hybridization of ant colony optimization and genetic algorithm for an assembly line balancing problem of type II under zoning constraints. Ann. Oper. Res. 2025, 351, 903–935. [Google Scholar] [CrossRef]
  40. Chica, M.; Bautista, J.; Cordón, T.; Damas, S. A multiobjective model and evolutionary algorithms for robust time and space assembly line balancing under uncertain demand. Omega 2016, 58, 55–68. [Google Scholar] [CrossRef]
  41. Bortolini, M.; Faccio, M.; Gamberi, M.; Pilati, F. Multi-objective assembly line balancing considering component picking and ergonomic risk. Comput. Ind. Eng. 2017, 112, 348–367. [Google Scholar] [CrossRef]
  42. Yang, C.; Gao, J.; Sun, L. A multi-objective genetic algorithm for mixed-model assembly line rebalancing. Comput. Ind. Eng. 2013, 65, 109–116. [Google Scholar] [CrossRef]
  43. Zacharia, P.; Nearchou, A. A population-based algorithm for the bi-objective assembly line worker assignment and balancing problem. Eng. Appl. Artif. Intell. 2016, 49, 1–9. [Google Scholar] [CrossRef]
  44. Otto, A.; Scholl, A. Incorporating ergonomic risks into assembly line balancing. Eur. J. Oper. Res. 2011, 212, 3465–3487. [Google Scholar] [CrossRef]
  45. Weckenborg, C.; Thies, C.; Spengler, T. Harmonizing ergonomics and economics of assembly lines using collaborative robots and exoskeletons. J. Manuf. Syst. 2022, 62, 681–702. [Google Scholar] [CrossRef]
  46. Yetkin, B.; Kahya, E. A bi-objective ergonomic assembly line balancing model with conic scalarization method. Hum. Factors Ergon. Manuf. 2022, 32, 494–507. [Google Scholar] [CrossRef]
  47. Karas, A.; Ozcelik, F. Assembly line worker assignment and rebalancing problem: A mathematical model and an artificial bee colony algorithm. Comput. Ind. Eng. 2021, 156, 107195. [Google Scholar] [CrossRef]
  48. Lahrichi, Y.; Gamoura, S.; Damand, D.; Barth, M. Bi-objective minimization of energy consumption and cycle time for the robotic assembly line balancing problem: Pseudo-polynomial case and reduced search space metaheuristic. Int. Trans. Oper. Res. 2025, 1–28. [Google Scholar] [CrossRef]
  49. Yilmaz, O.; Aydin, N.; Kucukkoc, I. Integrated model assignment and multi-line balancing in human–robot collaborative mixed-model assembly lines. Flex. Serv. Manuf. J. 2025, 1–40. [Google Scholar] [CrossRef]
  50. Zhang, Z.; Tang, Q.; Li, Z.; Zhang, L. Modelling and optimisation of energy-efficient U-shaped robotic assembly line balancing problems. Int. J. Prod. Res. 2019, 57, 5520–5537. [Google Scholar] [CrossRef]
  51. Fisel, J.; Exner, Y.; Stricker, N.; Lanza, G. Changeability and flexibility of assembly line balancing as a multi-objective optimization problem. J. Manuf. Syst. 2019, 53, 150–158. [Google Scholar] [CrossRef]
  52. Li, Z.; Janardhanan, M.; Ponnambalam, S. Cost-oriented robotic assembly line balancing problem with setup times: Multi-objective algorithms. J. Intell. Manuf. 2021, 32, 989–1007. [Google Scholar] [CrossRef]
  53. Chao, Y.; Chen, X.; Chen, S. An improved multi-objective antlion optimization algorithm for assembly line balancing problem considering learning cost and workstation area. Int. J. Interact. Des. Manuf. 2025, 19, 6691–6705. [Google Scholar] [CrossRef]
  54. Moreira, M.; Pastor, R.; Costa, A.; Miralles, C. The multi-objective assembly line worker integration and balancing problem of type-2. Comput. Oper. Res. 2017, 82, 114–125. [Google Scholar] [CrossRef]
  55. Nilakantan, J.; Li, Z.; Tang, Q.; Nielsen, P. Multi-objective co-operative co-evolutionary algorithm for minimizing carbon footprint and maximizing line efficiency in robotic assembly line systems. J. Clean. Prod. 2017, 156, 124–136. [Google Scholar] [CrossRef]
  56. Sancı, E.; Azizoğlu, M. Rebalancing the assembly lines: Exact solution approaches. Int. J. Prod. Res. 2017, 55, 5991–6010. [Google Scholar] [CrossRef]
  57. Rada-Vilela, J.; Chica, M.; Cordón, O.; Damas, S. A comparative study of multi-objective ant colony optimization algorithms for the time and space assembly line balancing problem. Appl. Soft Comput. J. 2013, 13, 4370–4382. [Google Scholar] [CrossRef]
  58. Girit, U.; Azizoğlu, M. Rebalancing the assembly lines with total squared workload and total replacement distance objectives. Int. J. Prod. Res. 2021, 59, 6702–6720. [Google Scholar] [CrossRef]
  59. Aydoğan, E.; Delice, Y.; Özcan, U.; Gencer, C.; Bali, O. Balancing stochastic U-lines using particle swarm optimization. J. Intell. Manuf. 2019, 30, 97–111. [Google Scholar] [CrossRef]
  60. Zhou, B.; Wu, Q. Decomposition-based bi-objective optimization for sustainable robotic assembly line balancing problems. J. Manuf. Syst. 2020, 55, 30–43. [Google Scholar] [CrossRef]
  61. Lahrichi, Y.; Damand, D.; Deroussi, L.; Grangeon, N.; Norre, S. Investigating two variants of the sequence-dependent robotic assembly line balancing problem by means of a split-based approach. Int. J. Prod. Res. 2023, 61, 2322–2338. [Google Scholar] [CrossRef]
  62. Pereira, J.; Ritt, M.; Vásquez, O. A memetic algorithm for the cost-oriented robotic assembly line balancing problem. Comput. Oper. Res. 2018, 99, 249–261. [Google Scholar] [CrossRef]
  63. Scholl, A. Balancing and Sequencing of Assembly Lines; Physica-Verlag HD: Heidelberg, Germany, 1999. [Google Scholar]
  64. Zhang, Z.; Tang, Q.; Zhang, L. Mathematical model and grey wolf optimization for low-carbon and low-noise U-shaped robotic assembly line balancing problem. J. Clean. Prod. 2019, 215, 744–756. [Google Scholar] [CrossRef]
  65. Li, D.; Zhang, C.; Tian, G.; Shao, X.; Li, Z. Multiobjective program and hybrid imperialist competitive algorithm for the mixed-model two-sided assembly lines subject to multiple constraints. IEEE Trans. Syst. Man, Cybern. Syst. 2018, 48, 119–129. [Google Scholar] [CrossRef]
  66. Wu, T.; Zhang, Z.; Yin, T.; Zhang, Y. Multi-objective optimisation for cell-level disassembly of waste power battery modules in human-machine hybrid mode. Waste Manag. 2022, 144, 513–526. [Google Scholar] [CrossRef] [PubMed]
  67. Zhang, W.; Gen, M. An efficient multiobjective genetic algorithm for mixed-model assembly line balancing problem considering demand ratio-based cycle time. J. Intell. Manuf. 2011, 22, 367–378. [Google Scholar] [CrossRef]
  68. Manavizadeh, N.; Rabbani, M.; Moshtaghi, D.; Jolai, F. Mixed-model assembly line balancing in the make-to-order and stochastic environment using multi-objective evolutionary algorithms. Expert Syst. Appl. 2012, 39, 12026–12031. [Google Scholar] [CrossRef]
  69. Dubois-Lacoste, J.; López-Ibáñez, M.; Stützle, T. A hybrid TP+PLS algorithm for bi-objective flow-shop scheduling problems. Comput. Oper. Res. 2011, 38, 1219–1236. [Google Scholar] [CrossRef]
  70. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; Wiley: Chichester, UK, 2001. [Google Scholar]
  71. Pereira, J. Empirical evaluation of lower bounding methods for the simple assembly line balancing problem. Int. J. Prod. Res. 2015, 53, 3327–3340. [Google Scholar] [CrossRef]
  72. Ritt, M.; Costa, A.M. Improved integer programming models for simple assembly line balancing and related problems. Int. Trans. Oper. Res. 2018, 25, 1345–1359. [Google Scholar] [CrossRef]
  73. Álvarez-Miranda, E.; Pereira, J.; Vilà, M. Analysis of the Simple Assembly Line Balancing Problem Complexity. Comput. Oper. Res. 2023, 159, 106323. [Google Scholar] [CrossRef]
  74. Otto, A.; Otto, C.; Scholl, A. Systematic data generation and test design for solution algorithms on the example of SALBPGen for assembly line balancing. Eur. J. Oper. Res. 2013, 228, 33–45. [Google Scholar] [CrossRef]
  75. López-Ibáñez, M.; Dubois-Lacoste, J.; Pérez-Cáceres, L.; Birattari, M.; Stützle, T. The irace package: Iterated racing for automatic algorithm configuration. Oper. Res. Perspect. 2016, 3, 43–58. [Google Scholar] [CrossRef]
  76. Hollander, M.; Wolfe, D.; Chicken, E. Nonparametric Statistical Methods; Wiley: Hoboken, NJ, USA, 2015. [Google Scholar]
  77. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
Figure 1. Combined Pareto fronts for small instances ( n = 20 ). The cycle time objective (y-axis) is reported in a logarithmic scale.
Figure 2. Combined Pareto front for medium instances ( n = 50 ). The cycle time objective (y-axis) is shown in a logarithmic scale.
Figure 3. Combined Pareto front for large instances ( n = 100 ). The cycle time objective (y-axis) is shown in a logarithmic scale.
Table 1. Notation used in this paper.
V = { 1 , , n } Set of tasks.
t i Task time of task i V .
t m a x Largest task time ( t m a x = max i V t i ).
G ( V , A ) Precedence graph.
K = { 1 , , n } Set of stations.
m m a x Upper bound on the number of stations on any efficient solution.
SSubset of tasks S V .
S k Station load of station k (subset of tasks assigned to station k).
w k Station workload of station k ( i S k t k ).
cCycle time.
l b ( k ) Lower bound on c for a solution with k stations; see Equation (9).
f ( S , k ) Minimum c to perform subset S within k stations, see Equation (10).
f ( i , k ) Minimum c to perform subset { 1 , , i } of tasks within k stations when
G ( V , A ) forms a chain, see Equation (12).
≺ (⪯)(Weak) Dominance relation of a solution over another.
PEfficiency (Pareto) set of solutions.
P * Reference set of solutions.
H V R ( P ) Hypervolume ratio of set P over P * .
C ( P , Q ) Coverage metric of set P over Q.
I G D ( P ) Inverted generational distance of the reference set to P.
I ϵ ( P ) Multiplicative unary Epsilon metric of set P.
σ Permutation (sequence) of tasks V.
Σ Set of individuals ( σ ,m).
S Set of sequences.
π i Position of task i in any given sequence.
e i ( l i )Earliest (latest) position of task i in a given sequence that complies
with precedence relations.
n P o p Population size for the NSGA-II procedure.
t S i z e Tournament size for the NSGA-II procedure.
p M u t Mutation probability for the NSGA-II procedure.
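Two of the set-quality notions defined above, the weak dominance relation ⪯ and the coverage metric C(P, Q), can be made concrete with a short sketch. The following Python snippet is a minimal illustration for a bi-objective minimization setting such as (number of stations, cycle time); the function and variable names are our own, not the authors' implementation.

```python
# Hedged sketch of weak dominance and the coverage metric C(P, Q) for
# bi-objective minimization. Solutions are tuples of objective values.

def weakly_dominates(a, b):
    """a ⪯ b: a is no worse than b in every (minimized) objective."""
    return all(x <= y for x, y in zip(a, b))

def coverage(P, Q):
    """C(P, Q): fraction of points of Q weakly dominated by some point of P."""
    covered = sum(1 for q in Q if any(weakly_dominates(p, q) for p in P))
    return covered / len(Q)

# Hypothetical (stations, cycle time) fronts for illustration only.
P = [(3, 10), (4, 8), (6, 5)]
Q = [(3, 11), (5, 9), (6, 4)]
print(coverage(P, Q))  # (3, 11) and (5, 9) are covered, (6, 4) is not → 2/3
```

Note that C(P, Q) is not symmetric, which is why the coverage tables below report both orderings of every pair of methods.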
Table 2. Parameters of the NSGA-II method. For each parameter, range and best value are provided.

                                Low Time Limit             High Time Limit
Parameter        Range          n = 20  n = 50  n = 100    n = 20  n = 50  n = 100
Population size  [20, 1000]     783     359     598        615     905     152
Tournament size  [2, 15]        5       14      8          5       15      2
Mutation rate    [0.05, 0.95]   0.3574  0.1038  0.1013     0.8949  0.1742  0.1112
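As a rough illustration of how the tuned parameters above enter the evolutionary loop, the sketch below shows tournament selection of size tSize over a ranked population and a probabilistic application of mutation with rate pMut. This is a generic NSGA-II fragment with invented names, not the authors' code.

```python
import random

# Hypothetical sketch: how tSize and pMut from Table 2 would typically be
# used inside an NSGA-II generation (illustration only).

def tournament_select(population, rank, t_size, rng):
    """Sample t_size individuals uniformly at random (with replacement) and
    return the one on the best (lowest) non-domination rank."""
    contenders = [rng.randrange(len(population)) for _ in range(t_size)]
    winner = min(contenders, key=lambda i: rank[i])
    return population[winner]

def maybe_mutate(individual, p_mut, mutate, rng):
    """Apply the mutation operator with probability p_mut."""
    return mutate(individual) if rng.random() < p_mut else individual

rng = random.Random(42)
population = ["seq_a", "seq_b", "seq_c", "seq_d"]   # placeholder individuals
rank = {0: 2, 1: 0, 2: 1, 3: 3}                     # lower rank = better front
parent = tournament_select(population, rank, t_size=2, rng=rng)
```

Larger tournament sizes increase selection pressure toward the best fronts, which is consistent with tuning them jointly with the mutation rate.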
Table 3. HVR metric of each proposed method for small instances (n = 20). The best results among heuristic methods are shown in boldface.

                                    Low Time Limit                        High Time Limit
Distribution  Structure  OS       NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+    Exact
bottom        block      low      0.9837  0.9882  0.9882   0.9882   0.9870  0.9882  0.9882   0.9882   0.9882
                         medium   0.9740  0.9761  0.9761   0.9761   0.9755  0.9761  0.9761   0.9761   0.9761
              chain      low      0.9841  0.9890  0.9890   0.9890   0.9874  0.9890  0.9890   0.9890   0.9890
                         medium   0.9709  0.9732  0.9732   0.9732   0.9727  0.9732  0.9732   0.9732   0.9732
              mixed      low      0.9852  0.9899  0.9899   0.9899   0.9881  0.9899  0.9899   0.9899   0.9899
                         medium   0.9768  0.9796  0.9796   0.9796   0.9793  0.9796  0.9796   0.9796   0.9796
                         high     0.9495  0.9495  0.9495   0.9495   0.9495  0.9495  0.9495   0.9495   0.9495
middle        block      low      0.9486  0.9500  0.9500   0.9500   0.9497  0.9500  0.9500   0.9500   0.9500
                         medium   0.9388  0.9394  0.9394   0.9394   0.9394  0.9394  0.9394   0.9394   0.9402
              chain      low      0.9437  0.9458  0.9458   0.9458   0.9452  0.9458  0.9458   0.9458   0.9458
                         medium   0.9390  0.9400  0.9400   0.9400   0.9399  0.9400  0.9400   0.9400   0.9401
              mixed      low      0.9507  0.9529  0.9529   0.9529   0.9524  0.9529  0.9529   0.9529   0.9529
                         medium   0.9429  0.9438  0.9438   0.9438   0.9437  0.9438  0.9438   0.9438   0.9438
                         high     0.9236  0.9236  0.9236   0.9236   0.9236  0.9236  0.9236   0.9236   0.9246
bimodal       block      low      0.9854  0.9902  0.9902   0.9902   0.9888  0.9903  0.9903   0.9903   0.9903
                         medium   0.9784  0.9806  0.9806   0.9806   0.9803  0.9806  0.9806   0.9806   0.9806
              chain      low      0.9855  0.9906  0.9906   0.9906   0.9889  0.9906  0.9906   0.9906   0.9906
                         medium   0.9740  0.9762  0.9762   0.9762   0.9758  0.9762  0.9762   0.9762   0.9762
              mixed      low      0.9870  0.9918  0.9918   0.9918   0.9903  0.9918  0.9918   0.9918   0.9918
                         medium   0.9763  0.9786  0.9786   0.9786   0.9782  0.9786  0.9786   0.9786   0.9786
                         high     0.9436  0.9436  0.9436   0.9436   0.9436  0.9436  0.9436   0.9436   0.9436
Total                             0.9639  0.9663  0.9663   0.9663   0.9657  0.9663  0.9663   0.9663   0.9664
Table 4. Average coverage metric, C(P, Q), for small (n = 20) instances.

                           Low Time Limit                       High Time Limit
P \ Q              NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+    Exact
Low   NSGA         -       0.7036  0.9807   0.7036   0.7628  0.7032  0.7616   0.7032   0.6920
      NSGA-LS      1.0000  -       1.0000   1.0000   0.9998  0.9964  0.9998   0.9964   0.9942
      NSGA-PLS     1.0000  0.7121  -        0.7121   0.7722  0.7117  0.7715   0.7117   0.7006
      NSGA+        1.0000  1.0000  1.0000   -        0.9998  0.9964  0.9998   0.9964   0.9942
High  NSGA         1.0000  0.8843  0.9985   0.8843   -       0.8820  0.9972   0.8820   0.8761
      NSGA-LS      1.0000  1.0000  1.0000   1.0000   1.0000  -       1.0000   1.0000   0.9979
      NSGA-PLS     1.0000  0.8865  1.0000   0.8865   1.0000  0.8842  -        0.8842   0.8784
      NSGA+        1.0000  1.0000  1.0000   1.0000   1.0000  1.0000  1.0000   -        0.9979
Table 5. Average HVR for medium instances (n = 50). The best results among heuristic methods per time limit are shown in boldface.

                                    Low Time Limit                        High Time Limit
Distribution  Structure  OS       NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+
bottom        block      low      0.9894  0.9963  0.9963   0.9963   0.9924  0.9973  0.9973   0.9973
                         medium   0.9880  0.9947  0.9947   0.9947   0.9906  0.9953  0.9954   0.9954
              chain      low      0.9899  0.9962  0.9962   0.9962   0.9925  0.9971  0.9971   0.9971
                         medium   0.9870  0.9928  0.9928   0.9928   0.9894  0.9933  0.9933   0.9933
              mixed      low      0.9895  0.9964  0.9964   0.9964   0.9924  0.9973  0.9973   0.9973
                         medium   0.9875  0.9942  0.9942   0.9942   0.9902  0.9947  0.9947   0.9947
                         high     0.9723  0.9738  0.9738   0.9738   0.9732  0.9738  0.9738   0.9738
middle        block      low      0.9698  0.9758  0.9758   0.9758   0.9728  0.9763  0.9764   0.9764
                         medium   0.9678  0.9730  0.9730   0.9730   0.9704  0.9733  0.9733   0.9733
              chain      low      0.9697  0.9758  0.9758   0.9758   0.9725  0.9763  0.9763   0.9763
                         medium   0.9684  0.9732  0.9732   0.9732   0.9708  0.9737  0.9737   0.9737
              mixed      low      0.9718  0.9781  0.9781   0.9781   0.9749  0.9787  0.9787   0.9787
                         medium   0.9677  0.9727  0.9727   0.9727   0.9703  0.9732  0.9732   0.9732
                         high     0.9551  0.9558  0.9558   0.9558   0.9557  0.9558  0.9558   0.9558
bimodal       block      low      0.9915  0.9973  0.9973   0.9973   0.9938  0.9979  0.9979   0.9979
                         medium   0.9873  0.9947  0.9947   0.9947   0.9901  0.9952  0.9952   0.9952
              chain      low      0.9910  0.9970  0.9970   0.9970   0.9934  0.9976  0.9976   0.9976
                         medium   0.9860  0.9926  0.9926   0.9926   0.9887  0.9931  0.9931   0.9931
              mixed      low      0.9915  0.9974  0.9974   0.9974   0.9937  0.9980  0.9980   0.9980
                         medium   0.9882  0.9946  0.9947   0.9947   0.9909  0.9952  0.9952   0.9952
                         high     0.9721  0.9730  0.9730   0.9730   0.9727  0.9730  0.9730   0.9730
Total                             0.9801  0.9855  0.9855   0.9855   0.9825  0.9860  0.9860   0.9860
Table 6. Average HVR for large instances (n = 100). The best results among heuristic methods per time limit are shown in boldface.

                                    Low Time Limit                        High Time Limit
Distribution  Structure  OS       NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+
bottom        block      low      0.9866  0.9946  0.9947   0.9947   0.9908  0.9966  0.9967   0.9967
                         medium   0.9863  0.9945  0.9945   0.9945   0.9900  0.9965  0.9965   0.9965
              chain      low      0.9874  0.9950  0.9950   0.9950   0.9913  0.9971  0.9971   0.9971
                         medium   0.9858  0.9935  0.9935   0.9935   0.9892  0.9956  0.9957   0.9957
              mixed      low      0.9872  0.9949  0.9949   0.9949   0.9911  0.9969  0.9969   0.9969
                         medium   0.9861  0.9940  0.9941   0.9941   0.9895  0.9962  0.9962   0.9962
                         high     0.9810  0.9864  0.9864   0.9864   0.9835  0.9874  0.9874   0.9874
middle        block      low      0.9734  0.9822  0.9824   0.9824   0.9774  0.9842  0.9843   0.9843
                         medium   0.9744  0.9828  0.9830   0.9830   0.9778  0.9847  0.9847   0.9847
              chain      low      0.9741  0.9825  0.9828   0.9828   0.9780  0.9847  0.9848   0.9848
                         medium   0.9723  0.9791  0.9792   0.9792   0.9752  0.9810  0.9810   0.9810
              mixed      low      0.9734  0.9821  0.9823   0.9823   0.9773  0.9842  0.9842   0.9842
                         medium   0.9732  0.9808  0.9809   0.9809   0.9765  0.9828  0.9828   0.9828
                         high     0.9681  0.9719  0.9719   0.9719   0.9700  0.9726  0.9726   0.9726
bimodal       block      low      0.9873  0.9958  0.9958   0.9958   0.9914  0.9975  0.9975   0.9975
                         medium   0.9865  0.9953  0.9953   0.9953   0.9904  0.9970  0.9970   0.9970
              chain      low      0.9871  0.9954  0.9954   0.9954   0.9912  0.9972  0.9972   0.9972
                         medium   0.9856  0.9941  0.9941   0.9941   0.9892  0.9960  0.9960   0.9960
              mixed      low      0.9874  0.9957  0.9957   0.9957   0.9914  0.9974  0.9974   0.9974
                         medium   0.9862  0.9947  0.9947   0.9947   0.9899  0.9964  0.9965   0.9965
                         high     0.9799  0.9854  0.9854   0.9854   0.9824  0.9863  0.9863   0.9863
Total                             0.9814  0.9891  0.9892   0.9892   0.9849  0.9909  0.9909   0.9909
Table 7. Average coverage metric, C(P, Q), for medium (n = 50) instances.

                           Low Time Limit                       High Time Limit
P \ Q              NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+
Low   NSGA         -       0.2564  0.9130   0.2562   0.4876  0.2465  0.4685   0.2465
      NSGA-LS      1.0000  -       0.9998   0.9960   0.9968  0.7275  0.9968   0.7257
      NSGA-PLS     1.0000  0.2638  -        0.2636   0.5105  0.2531  0.5054   0.2531
      NSGA+        1.0000  0.9998  1.0000   -        0.9969  0.7289  0.9969   0.7276
High  NSGA         1.0000  0.3674  0.9788   0.3671   -       0.3388  0.9498   0.3388
      NSGA-LS      1.0000  1.0000  1.0000   0.9996   1.0000  -       1.0000   0.9979
      NSGA-PLS     1.0000  0.3736  1.0000   0.3734   1.0000  0.3433  -        0.3433
      NSGA+        1.0000  1.0000  1.0000   1.0000   1.0000  0.9998  1.0000   -
Table 8. Average coverage metric, C(P, Q), for large (n = 100) instances.

                           Low Time Limit                       High Time Limit
P \ Q              NSGA    NSGA-LS NSGA-PLS NSGA+    NSGA    NSGA-LS NSGA-PLS NSGA+
Low   NSGA         -       0.0914  0.6592   0.0877   0.2848  0.0767  0.2265   0.0764
      NSGA-LS      1.0000  -       0.9921   0.9487   0.9808  0.3492  0.9686   0.3444
      NSGA-PLS     1.0000  0.1113  -        0.1058   0.3762  0.0854  0.3417   0.0848
      NSGA+        0.9999  0.9980  0.9994   -        0.9848  0.3557  0.9784   0.3523
High  NSGA         1.0000  0.1685  0.8941   0.1599   -       0.1172  0.7498   0.1165
      NSGA-LS      1.0000  1.0000  0.9999   0.9971   1.0000  -       0.9996   0.9812
      NSGA-PLS     1.0000  0.1935  1.0000   0.1846   1.0000  0.1259  -        0.1250
      NSGA+        1.0000  0.9999  1.0000   0.9999   1.0000  0.9987  1.0000   -
Table 9. Average HVR, IGD, and I_ε metric of each method and time limit on instances with 50 and 100 tasks.

                          n = 50                          n = 100
Time  Method      HVR      IGD       I_ε        HVR      IGD       I_ε
low   NSGA        0.99773  11.72793  0.08382    0.99854  12.18902  0.08284
      NSGA+LS     0.99842  8.22858   0.06995    0.99922  7.87399   0.06465
      NSGA+PLS    0.99775  11.63716  0.08348    0.99862  11.86562  0.07936
      NSGA+       0.99842  8.22624   0.06993    0.99923  7.84712   0.06416
high  NSGA        0.99799  10.49378  0.07953    0.99883  10.64057  0.07347
      NSGA+LS     0.99848  7.81917   0.06827    0.99936  6.64330   0.05865
      NSGA+PLS    0.99800  10.46950  0.07945    0.99886  10.50204  0.07254
      NSGA+       0.99848  7.81914   0.06827    0.99936  6.63840   0.05856
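The two distance-based indicators reported above can also be sketched in a few lines. The snippet below is a hedged illustration for a minimization problem, not the authors' implementation; P_star denotes the reference (Pareto) set, and conventions for the epsilon indicator vary across the literature (this version returns the scaling factor eps itself, so eps = 1 means P matches the reference set).

```python
import math

# Hedged sketch of IGD(P) and the multiplicative unary epsilon indicator
# for bi-objective minimization (illustration only).

def igd(P, P_star):
    """Inverted generational distance: average Euclidean distance from each
    reference point to its nearest point in the approximation set P."""
    return sum(min(math.dist(r, p) for p in P) for r in P_star) / len(P_star)

def multiplicative_epsilon(P, P_star):
    """Smallest factor eps such that every reference point r is weakly
    dominated by some p in P after scaling, i.e. p_i <= eps * r_i for all i."""
    return max(min(max(p_i / r_i for p_i, r_i in zip(p, r)) for p in P)
               for r in P_star)

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]   # hypothetical front
print(igd(front, front), multiplicative_epsilon(front, front))  # 0.0 1.0
```

Both indicators reward approximation sets that sit close to the reference set along its whole extent, which complements the pairwise coverage comparisons of Tables 4, 7 and 8.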
Table 10. Average number of iterations and running times for the proposed methods.

                        Low Time Limit            High Time Limit          PLS
                 n      # Iterations  % in LS     # Iterations  % in LS    av. Time
No local search  20     5381.0                    226,267.9
                 50     11,597.2                  40,521.3
                 100    2000.2                    50,788.5
Local search     20     1845.6        67.8        54,601.7     79.3        0.0
                 50     1124.4        90.4        5047.6       87.9        0.02
                 100    158.5         91.1        3605.5       92.7        0.45

Share and Cite

Pereira, J.; Vilà, M. An Evolutionary Procedure for a Bi-Objective Assembly Line Balancing Problem. Mathematics 2025, 13, 3336. https://doi.org/10.3390/math13203336
