1. Introduction
The nurse rostering problem (NRP), also referred to as the nurse scheduling problem, seeks to create an efficient and fair roster for a group of nurses over a specified time period. A roster in this context consists of personalized schedules for each nurse, detailing their sequence of shifts and designated days off [1]. The NRP has been demonstrated to be NP-hard [2,3], encompassing multiple complex constraints and objectives. Given its practical significance and computational complexity, the NRP has attracted substantial research attention over recent decades, and an extensive body of literature has emerged addressing both modeling approaches and solution methodologies for NRPs. For comprehensive reviews of these developments, readers are directed to [4,5,6].
Numerous heuristic approaches, particularly metaheuristics such as simulated annealing and variable neighborhood search, have been developed for various NRP variants over the past several decades [7,8,9]. Real-world NRPs must satisfy numerous constraints to balance patient care requirements with nurse satisfaction. Key constraints typically include (i) coverage requirements to meet patient demand, (ii) minimum and maximum limits on (consecutive) working days, and (iii) various staff preference considerations. The complexity of these constraints makes NRPs particularly challenging to solve optimally, especially for large-scale instances. Consequently, during the early stages of algorithmic research, most studies adopted heuristic approaches as practical solutions. In the first and second international nurse rostering competitions (INRC-I in 2010 and INRC-II in 2015) [10,11], heuristic solutions demonstrated remarkable performance. Although heuristic approaches frequently generate high-quality solutions, they lack optimality guarantees. This limitation has motivated continued research into exact algorithms for NRPs.
Branch-and-price (B&P) is an exact algorithm combining branch-and-bound with column generation, where the latter consists of a master problem and a pricing subproblem. It has proven effective for various combinatorial optimization problems, including the vehicle routing problem (VRP) [12] and the cutting stock problem [13]. If we treat an individual nurse's schedule as a vehicle's path, the NRP exhibits structural similarities to the VRP. It is therefore unsurprising that several researchers have developed B&P approaches for NRPs [14,15]. One of the most challenging aspects of implementing such approaches is designing an efficient algorithm for the pricing subproblem. Initial approaches typically employed integer programming [16,17], constraint programming [18], or heuristic methods [19,20]. However, these methods often trade computational efficiency against solution optimality. Recent advances [21,22] have produced a dynamic programming (DP) algorithm that solves NRP pricing subproblems efficiently and optimally, yielding faster column generation convergence and better B&P performance. As demonstrated in [21], such a DP-based B&P approach has achieved optimal solutions for a number of instances from the publicly available INRC-I and INRC-II benchmark sets. Nevertheless, some instances remain computationally intractable.
One primary reason for this intractability is that the lower bound obtained at the root node through column generation is often not tight. When this bound remains far from the optimal objective value, it becomes ineffective for branch pruning or optimality verification. As with the NRP, column generation can yield weak lower bounds for other problems, such as the VRP and the cutting stock problem. A common method for tightening the lower bound is to add valid inequalities to the master problem. Valid inequalities can be classified into two types based on their impact on the pricing subproblem's complexity [23]: robust cuts and non-robust cuts. Robust cuts do not increase the complexity of the pricing subproblem, whereas non-robust cuts do, leading to a more computationally challenging subproblem. However, robust cuts can be less effective, particularly on hard instances. Taking the VRP as an example, Fukasawa et al. [24] developed a branch-cut-and-price algorithm for the capacitated VRP, incorporating multiple robust cuts, such as rounded capacity cuts, strengthened comb inequalities, and multistar inequalities, to improve the lower bound. Their computational experiments demonstrated that only the rounded capacity cuts were highly effective, partly because many of the other cuts were already implicitly satisfied by the master problem's formulation. Among non-robust cuts for the VRP, subset row cuts (SRCs) have been widely adopted since their introduction by Jepsen et al. [25]; indeed, SRCs have become a fundamental component of modern exact solvers for various VRPs [26]. For more details on valid inequalities in column generation, we refer readers to Desaulniers, Desrosiers, and Spoorendonk [27].
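To make the SRC idea concrete: in its classic form, an SRC takes a small subset of set-partitioning rows (typically three, with multiplier ½) and rounds down the combined column coefficients. The following Python sketch, on toy data with a brute-force enumeration (not the separation routine used in this paper), checks 3-row subsets for violation:

```python
from itertools import combinations
from math import floor

def src_violation(subset, columns, x):
    """Violation of the SRC for `subset` (three rows, multiplier 1/2):
    sum_l floor(0.5 * rows_hit_by_column_l) * x_l <= floor(|subset|/2) = 1."""
    lhs = sum(floor(0.5 * sum(1 for i in subset if i in col)) * xl
              for col, xl in zip(columns, x))
    return lhs - 1  # positive value => the cut is violated

def separate_srcs(num_rows, columns, x, eps=1e-6):
    """Brute-force toy separator: enumerate all 3-row subsets."""
    cuts = []
    for subset in combinations(range(num_rows), 3):
        v = src_violation(set(subset), columns, x)
        if v > eps:
            cuts.append((subset, v))
    return sorted(cuts, key=lambda c: -c[1])  # most violated first
```

The classic violating pattern: three columns, each covering two of three rows, each at fractional value 0.5, give a left-hand side of 1.5 > 1, while any integer solution satisfies the cut.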
To the best of our knowledge, few studies have reported the performance of cutting-plane methods for NRPs in the context of column generation. Santos et al. [28] proposed a compact formulation for the INRC-I problem and tested several types of cuts to strengthen it. However, computational experiments on INRC-I instances demonstrate that the lower bounds produced by this formulation and its associated cuts are significantly weaker than those obtained through column generation alone. In this work, we aim to further strengthen the lower bounds produced by column generation. Building on results established for the VRP, we focus on three classes of non-robust cuts: SRCs, Chvátal–Gomory (CG) rank-1 cuts, and {0, ½}-cuts. The main contributions of this paper are twofold. First, this work represents one of the first investigations of cutting-plane methods for the NRP in the context of column generation. We introduce three classes of non-robust cutting planes to strengthen lower bounds; for each cut type, we provide its formal definition, detailed separation methods, and approaches for handling it in the labeling algorithm. Second, we present computational experiments on the INRC-I and INRC-II benchmark instances. The results demonstrate the efficacy of these cutting planes across different problem classes and provide valuable insights for future research in this domain.
The remainder of this paper is organized as follows: Section 2 presents the problem formulation from which we derive our cutting planes. Section 3 introduces three classes of cutting planes for NRPs and develops corresponding separation methods for each type. As these cuts are non-robust, Section 4 describes approaches for incorporating them into the labeling algorithm for the pricing subproblem. Computational results on benchmark instances from INRC-I and INRC-II are presented in Section 5, followed by concluding remarks in Section 6.
2. Problem Formulation
The cutting plane method is an iterative algorithm for solving integer programming problems. At each step, it solves a linear relaxation of the problem, identifies inequalities (cuts) violated by the non-integer solution, and adds these cuts to exclude that solution. To describe the cuts developed for the NRP, we first present its problem formulation. While numerous NRP variants exist, we focus on the NRPs introduced in INRC-I and INRC-II. Given that INRC-II contains more instances where the lower bound obtained at the root node is not tight, we use it as the primary example to concisely illustrate the proposed cutting planes; although we demonstrate these cuts on INRC-II, they can be directly adapted to INRC-I's formulation. Note that the NRP formulation in INRC-II represents a dynamic variant, requiring multi-stage optimization where each stage corresponds to a planning week. In this work, we focus on its static version, where all problem information is known a priori, enabling single-stage optimization. For conciseness, we do not reproduce the full problem here and refer readers to the complete description in [11].
The master problem (MP) refers to the problem formulation that includes all possible columns (variables). However, due to the typically enormous number of columns in many practical applications, solving the MP directly is computationally intractable. Column generation solves a linear relaxation of the MP through an iterative process involving two key components: the restricted master problem (RMP) solves the MP's linear relaxation using only a subset of columns, while the pricing subproblem (PSP) generates new columns with negative reduced costs to progressively improve the solution. This alternation continues until no further improving columns can be found. For the NRP, the PSP generates individual schedules (columns) in which the individual-related constraints (e.g., consecutive work/rest limits) are addressed. Given all individual schedules, the MP selects one schedule per nurse so as to meet all staffing requirements. Let N, D, S, and K denote the sets of nurses, days, shifts, and skills, respectively. As nurses have heterogeneous scheduling constraints, each feasible individual schedule for nurse n is indexed by l ∈ L_n, where L_n is the nurse-specific feasible schedule set. The binary decision variable x_{n,l} indicates whether schedule l is assigned to nurse n. The MP is formulated as the following integer linear programming model.
Constraint (2) enforces the minimum staffing requirements by ensuring sufficient nurse coverage for each shift, while Constraint (3) tracks the deficit variables that record shortfalls with respect to the optimal staffing levels. The binary parameter a_{n,l}^{d,s,k} indicates whether nurse n's individual schedule l includes an assignment to work shift s on day d using skill k. Constraint (4) enforces that each nurse is assigned exactly one individual schedule. Constraints (5) and (6) formally define the variables. The objective minimizes the sum of individual schedule penalties (for violating individual-related constraints) and staffing deficit penalties (for deviations from optimal demand levels). The parameter c_{n,l} represents the penalty cost of individual schedule l of nurse n, which is calculated and returned by the PSP, and w denotes the unit penalty incurred per nurse deficit.
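As a concrete illustration of the two coverage-related constraints, the following Python sketch (hypothetical data layout) checks hard feasibility against the minimum requirements and prices deficits against the optimal staffing levels:

```python
def coverage_costs(assigned, min_req, opt_req, unit_penalty):
    """assigned: dict mapping (day, shift, skill) -> nurses working that triple.
    Returns (hard feasibility wrt minimum coverage, soft deficit penalty wrt
    optimal coverage)."""
    # Constraint (2): every minimum requirement must be met exactly (hard).
    feasible = all(assigned.get(key, 0) >= need for key, need in min_req.items())
    # Constraint (3): shortfalls against optimal levels are penalized (soft).
    deficit = sum(max(0, need - assigned.get(key, 0))
                  for key, need in opt_req.items())
    return feasible, deficit * unit_penalty
```

For example, two nurses on an early shift that ideally needs three yields a one-nurse deficit, penalized at the unit rate, while the roster remains feasible as long as minimum coverage holds.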
As mentioned previously, the PSP in our column generation approach for the NRP generates new feasible individual schedules for nurses. More precisely, the PSP seeks individual schedules with negative reduced cost, computed using the current dual values from the RMP. Let μ_{d,s,k} and π_{d,s,k} be the dual variables of constraints (2) and (3) in the RMP, respectively, and let σ_n denote the dual variables of constraints (4). We can compute the reduced cost of an individual schedule l not currently present in the RMP for nurse n as:

c̄_{n,l} = c_{n,l} − Σ_{d,s,k} a_{n,l}^{d,s,k} (μ_{d,s,k} + π_{d,s,k}) − σ_n.
The column generation algorithm iteratively adds individual schedules with negative reduced costs to the RMP and reoptimizes. If no such schedules exist, the current RMP solution is provably optimal.
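The pricing decision can be sketched as follows (hypothetical names; the duals of the coverage constraints are assumed to be aggregated into one value per (day, shift, skill) triple):

```python
def reduced_cost(schedule_cost, assignments, cover_duals, sigma_n):
    """Reduced cost of a candidate schedule: its penalty cost, minus the duals
    of the coverage rows it hits, minus the nurse's convexity-row dual (4)."""
    return schedule_cost - sum(cover_duals.get(a, 0.0) for a in assignments) - sigma_n

def price_out(candidates, cover_duals, sigma_n, eps=1e-9):
    """Return the candidate schedules that should enter the RMP, i.e., those
    with strictly negative reduced cost."""
    return [s for s in candidates
            if reduced_cost(s["cost"], s["assignments"], cover_duals, sigma_n) < -eps]
```

A schedule with cost 10 hitting coverage rows whose duals sum to 7 enters the RMP only when the nurse's convexity dual exceeds 3, at which point its reduced cost turns negative.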
4. Dominance Rules
The PSP for the NRP can be formulated as a shortest path problem with resource constraints (SPPRC), where a path corresponds to an individual schedule for a nurse. The SPPRC is defined on a directed acyclic graph G = (V, A). The vertex set V comprises a virtual source node, a virtual sink node, and intermediate nodes representing either specific shift assignments or rest days. The arc set A consists of all feasible transitions between consecutive days, each associated with a cost and a set of resource consumptions. Each complete source-to-sink path in the graph corresponds to a feasible individual schedule for a single nurse, encoding the sequence of shift assignments and rest days across the planning horizon.
While alternative approaches exist, DP remains the method of choice for the SPPRC. The DP algorithm operates by progressively extending partial paths, beginning with the trivial initial path containing only the source node, and systematically generating feasible complete paths through node-by-node expansion. Implementing such a basic path-extension procedure would require enumerating all feasible paths, leading to computational complexity that grows exponentially with problem size. Dominance rules are therefore introduced to eliminate non-useful partial paths, reducing the number of paths extended and improving the algorithm's efficiency. Generally, a label is introduced to represent a partial path, recording both its cost and its resource consumption, and the labels generated at each node are stored in a node-specific label set; consequently, the DP algorithm for the SPPRC is commonly referred to as the labeling algorithm. Algorithm 3 presents the pseudocode for our implementation of the labeling algorithm, where the extension function generates a label with updated cost and resource consumption whenever extending a label to a node is feasible. The algorithm's central challenges involve resource definition and updating, and dominance rule design.
To formally present the dominance rules, we introduce the following notation. Let P denote a partial path originating at the source vertex and terminating at some vertex v. A partial path E is called a feasible extension of P if concatenating P and E yields a feasible complete path. Given two partial paths P and Q terminating at the same vertex, the basic idea of the dominance rule is that if, for every feasible extension E of Q, extending P by E yields a solution at least as good as extending Q by E, then path Q can be safely discarded from further consideration. In such cases, we say that P dominates Q.
Algorithm 3 The labeling algorithm for the PSP.
1: Initialization:
2: create the trivial initial label at the source node
3: for each node v ∈ V do
4:   initialize the label set of v as empty
5: end for
6: Extending labels:
7: for each node v ∈ V in topological order do
8:   remove dominated labels from the label set of v
9:   for each remaining label in the label set of v do
10:     for each successor node v′ of v do
11:       if extending the label to v′ is feasible, add the extended label to the label set of v′
12:     end for
13:   end for
14: end for
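Stripped of dominance pruning, the forward extension in Algorithm 3 reduces to layer-by-layer enumeration over the day-structured DAG. The following Python toy (hypothetical cost function and a single soft max-consecutive-work resource; exhaustive, so exponential, unlike the pruned algorithm) illustrates the mechanics:

```python
def cheapest_schedule(num_days, shifts, arc_cost, max_consec, over_penalty):
    """Exhaustive forward labeling over a day-layered DAG. A label is a pair
    (cost, consecutive working days); 'R' denotes a rest day, and each day
    beyond the soft max-consecutive-work bound costs `over_penalty`."""
    labels = [(0.0, 0)]  # trivial initial label at the source
    for day in range(num_days):
        extended = []
        for cost, consec in labels:
            for node in list(shifts) + ["R"]:  # extend to every node of the next layer
                if node == "R":
                    extended.append((cost + arc_cost(day, node), 0))
                else:
                    nc = consec + 1
                    pen = over_penalty if nc > max_consec else 0.0
                    extended.append((cost + arc_cost(day, node) + pen, nc))
        labels = extended
    return min(cost for cost, _ in labels)
```

With one shift, two days, working cheaper than resting, and a max of one consecutive working day, the optimum alternates work and rest rather than paying the over-work penalty.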
The development of effective DP algorithms for the NRP has long been challenging due to its bounded resource constraints. Recent advances have yielded efficient DP algorithms, including tailored resources and dominance rules. Building on these developments, our column generation algorithm adapts such DP algorithms to account for the additional cuts. We begin with the tailored dominance rule before presenting our adaptations. The NRP formulation is characterized by soft bounded constraints on resource consumption, meaning that violating the prescribed bounds of any resource is permitted but incurs penalty costs. Therefore, the total cost of a complete path comprises two components: the arc costs along the path and the penalty costs incurred from resource constraint violations. For bounded resource constraints, the relationship between resource consumption and penalties is non-monotonic. Let q_r(P) denote the consumption of resource r accumulated along a partial path P, and let f_r(P, E) denote the penalty cost resulting from violations of the bounds on r when extending P with a feasible extension E. Given two partial paths P and Q, q_r(P) ≤ q_r(Q) does not necessarily imply f_r(P, E) ≤ f_r(Q, E) for every feasible extension E.
This constitutes the fundamental challenge in developing effective dominance rules for the NRP. However, recent investigations reveal that the penalty-cost difference f_r(P, E) − f_r(Q, E) is a monotonic function (either non-increasing or non-decreasing) of the extension's resource consumption across all feasible extensions E. Let minDif_r and maxDif_r be the minimum and maximum values of this difference, respectively. For each partial path P, we define its current cost c(P) as the sum of all arc costs along the path plus any already-identified penalty costs resulting from resource constraint violations. The tailored dominance rules for the NRP are presented in Definition 1.
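Both the non-monotonicity and the monotone penalty-cost difference that survives it can be checked numerically. The sketch below uses a toy soft-bounded resource with bounds [2, 4] and unit penalty weight (illustrative values, not taken from the benchmark):

```python
def soft_bound_penalty(x, lb=2, ub=4, w=1.0):
    """Penalty of a soft bounded resource: pay w per unit below lb or above ub."""
    return w * max(0, lb - x) + w * max(0, x - ub)

# Partial path P has consumed 1 unit of the resource, Q has consumed 3 units.
# Neither dominates on consumption alone: the better path depends on the extension.
assert soft_bound_penalty(1 + 0) > soft_bound_penalty(3 + 0)  # empty extension favors Q
assert soft_bound_penalty(1 + 3) < soft_bound_penalty(3 + 3)  # long extension favors P

# Yet the penalty-cost DIFFERENCE is monotone (here non-increasing) in the
# extension's consumption, so its min and max over extensions are well defined.
diffs = [soft_bound_penalty(1 + e) - soft_bound_penalty(3 + e) for e in range(8)]
assert all(a >= b for a, b in zip(diffs, diffs[1:]))
```

Here the difference ranges from 1 down to −2, which is exactly the pair of bounds the dominance rule consumes.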
Definition 1. For two partial paths P and Q terminating at the same vertex, the following dominance relations hold: (1) P dominates Q if c(P) − c(Q) + Σ_r maxDif_r ≤ 0; (2) Q dominates P if c(P) − c(Q) + Σ_r minDif_r ≥ 0.
As these concepts have been thoroughly examined in [21,22], we omit the proof and the methods for defining and updating resources here. A primary implementation challenge of this dominance rule lies in the computation of minDif_r and maxDif_r; methods for calculating these values for the NRP-specific constraints have been presented in prior work. In what follows, we focus on our methodology for computing these values for the cuts developed in Section 3.
When violated cuts are identified, they are incorporated into the RMP as additional inequality constraints. Let α_j denote the dual variable associated with the j-th such constraint. Using CG rank-1 cuts as an example, the modified reduced cost additionally subtracts, for each cut j, the dual α_j multiplied by the cut's coefficient for the schedule, where this coefficient is defined by (7). In the DP algorithm for solving the PSP, a new resource is introduced for each additional cut j. Each label representing a partial path P records, alongside its cost, the consumption q_j of every cut resource; the source node initializes all cut-resource consumptions to zero. When extending a label along an arc, Algorithm 4 presents the procedure to update the cut resources. To incorporate these additional resources within the dominance rules of Definition 1, we establish the following calculation method for the corresponding bounds. Given two labels with cut-resource consumptions q_j^1 and q_j^2 for cut j: if q_j^1 = q_j^2, cut j contributes to neither bound; if q_j^1 > q_j^2, the maximum difference is increased by |α_j| while the minimum difference is unchanged; otherwise, the minimum difference is decreased by |α_j| while the maximum difference is unchanged. Algorithm 5 implements the dominance rules for our labeling algorithm, with a particular focus on cut resources. Note that the set R contains the resources corresponding to the NRP-specific soft constraints, and the methods for computing their minDif and maxDif values are described in [22]. As established previously, both SRCs and {0, ½}-cuts are special cases of CG rank-1 cuts; consequently, the proposed methodology can be directly adapted to compute the relevant values for these cuts.
Algorithm 4 The procedure to update cut resources when extending a label.
1: for each cut j do
2:   if the new node is a day off then
3:     keep the consumption of cut resource j unchanged
4:   else
5:     increase the consumption of cut resource j by the cut's multiplier for the rows covered by the new assignment
6:     if the consumption reaches 1, subtract the dual α_j from the label's cost
7:     and reduce the consumption by 1
8:   end if
9: end for
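In the same spirit as Algorithm 4, a rank-1 cut resource can be maintained as an accumulated multiplier that triggers a dual adjustment whenever it reaches 1. The Python sketch below uses a hypothetical data layout and passes each α_j as a nonnegative penalty magnitude:

```python
def extend_cut_resources(cost, cut_states, hit_rows, cuts):
    """Update cut resources when extending a label by one assignment.
    cuts: list of (rows, multiplier, dual); cut_states[j] accumulates the
    multipliers of cut j's rows hit so far (modulo 1)."""
    new_states = list(cut_states)
    for j, (rows, mult, dual) in enumerate(cuts):
        if hit_rows & rows:                 # the new assignment hits a row of cut j
            new_states[j] += mult
            if new_states[j] >= 1.0:        # cut coefficient increases by one...
                new_states[j] -= 1.0
                cost -= dual                # ...so its dual enters the reduced cost
    return cost, new_states
```

For an SRC (multiplier ½), the state alternates between 0 and ½: every second visit to the cut's rows charges the dual.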
Algorithm 5 Dominance rules for two labels ℓ1 and ℓ2 at the same vertex v.
1: costDif ← c(ℓ1) − c(ℓ2)
2: maxDif ← 0, minDif ← 0, maxCutsDif ← 0, minCutsDif ← 0
3: add the bounds of every resource r ∈ R to maxDif and minDif ▹ For each resource, compute the maximum and minimum penalty cost differences
4: for each cut j do
5:   if vertex v represents an assignment occurring on the terminal day of the planning horizon then
6:     if q_j^1 > q_j^2 and cut j can still be triggered by ℓ1 then
7:       maxCutsDif ← maxCutsDif + |α_j|
8:     else if q_j^1 < q_j^2 and cut j can still be triggered by ℓ2 then
9:       minCutsDif ← minCutsDif − |α_j|
10:     end if
11:   else
12:     if q_j^1 > q_j^2 then
13:       maxCutsDif ← maxCutsDif + |α_j|
14:     else if q_j^1 < q_j^2 then
15:       minCutsDif ← minCutsDif − |α_j|
16:     end if
17:   end if
18: end for
19: if costDif + maxDif + maxCutsDif ≤ 0 then
20:   return label ℓ1 dominates label ℓ2
21: else if costDif + minDif + minCutsDif ≥ 0 then
22:   return label ℓ2 dominates label ℓ1
23: else
24:   return neither label dominates the other
25: end if
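Putting Definition 1 and the cut-resource bounds together, the final three-way comparison of Algorithm 5 can be sketched as follows (a simplified view that ignores the terminal-day special case; duals are passed as nonnegative magnitudes, and maxDif/minDif are assumed precomputed for the NRP resources):

```python
def dominance(cost1, cost2, max_dif, min_dif, q1, q2, cut_duals):
    """Three-way dominance test between two labels at the same vertex.
    q1, q2: cut-resource consumptions. A label with higher consumption may
    trigger a cut's dual adjustment earlier, so each differing cut shifts the
    worst-case (or best-case) cost difference by its dual magnitude."""
    max_cuts_dif = sum(d for a, b, d in zip(q1, q2, cut_duals) if a > b)
    min_cuts_dif = -sum(d for a, b, d in zip(q1, q2, cut_duals) if a < b)
    cost_dif = cost1 - cost2
    if cost_dif + max_dif + max_cuts_dif <= 0:
        return "label1"   # label 1 dominates label 2
    if cost_dif + min_dif + min_cuts_dif >= 0:
        return "label2"   # label 2 dominates label 1
    return "neither"
```

Note how a cut on which the labels differ widens the uncertainty band: it can tip a near-tie into "neither", which is precisely why numerous active cuts weaken dominance and slow the labeling algorithm.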
5. Numerical Experiments
Numerical experiments were conducted to (1) verify whether the cuts developed above improve the lower bound, and (2) quantify the degree of improvement achieved through their incorporation. The benchmark instances employed in our study are those originally introduced in INRC-I and INRC-II. Based on the permitted computation time, INRC-I instances are classified into three subsets: sprint, middle, and long. Additionally, they are divided into early, late, and hidden subsets according to their competition release stage. This two-dimensional classification organizes all 60 INRC-I instances into nine distinct subsets. However, previous investigations have revealed that for most INRC-I instances, the objective value (or its ceiling) obtained by solving the RMP using column generation alone already matches that of the best-known integer solution; in other words, the lower bound produced by column generation for these instances is remarkably tight. Only six instances exist where this lower bound is not sufficiently tight, so we use only these INRC-I instances to test our cuts. Detailed information about these instances is provided in Table 1.
As described in Section 2, INRC-II instances were designed for a dynamic variant of the problem. The competition provided a publicly available testbed consisting of multiple datasets. Each dataset contains three types of files: scenario files (detailing nurse information), history files (documenting previous work states), and week-data files (specifying daily requirements). An instance comprises a specific scenario file, an initial history file, and a sequence of week-data files, and each instance is named to encode this information. For example, the instance name n040w4_2_6-1-0-6 encodes the following components: the scenario file for 40 nurses, the second history file, and four week-data files with indices 6, 1, 0, and 6. We focus on the static formulation, testing instances consistent with those used in previous studies of the static version. Furthermore, to reflect practical operational conditions, we limit our analysis to instances containing at most 80 nurses and four-week scheduling periods. The INRC-II instances selected for our tests are described in Table 2.
We implemented all algorithms in Java (JDK 21.0.2). The computational experiments were conducted on a Windows 10 system equipped with an Intel Core i7-13700 processor (2.10 GHz), 32 GB of RAM, and an NVIDIA GeForce RTX 3060 GPU. Given the numerous constraints inherent in the NRP, we set the SRC parameters to small fixed values. IBM ILOG CPLEX is used as the linear programming solver for the RMP and as the MIP solver for the separation problems of CG rank-1 cuts and {0, ½}-cuts. To add multiple cuts per iteration, we employ the solution pool feature of CPLEX; at each iteration, we add up to five of the most violated cuts of each cut type. Recognizing that the time limit imposed on the MIP solver for the separation problem probably affects cut quality, and consequently lower bound improvement, we established two time limit configurations: {15, 8} and {30, 16}. Here, the first value denotes the minutes allocated to the separation problem (15 or 30 min), while the second specifies the maximum column generation runtime in hours (8 or 16 h).
Table 3, Table 4 and Table 5 present the computational results for the INRC-I and INRC-II instances, where SRCs, CG rank-1 cuts, and {0, ½}-cuts are employed to strengthen the lower bounds. To assess the efficiency of the different cuts in enhancing the lower bound, these tables report the instance name, the initial lower bound from the column generation process (LB), the lifted lower bound using each type of cut (LB*), the improvement in the lower bound (ILB), the number of cuts added to the RMP (cut_num), and the actual total solving time of the column generation process (time). Note that, for effectiveness evaluation, we developed a control version of the SRC separation algorithm that excludes the pre-computation rules.
Table 3 shows the comparative performance on INRC-II instances. For both CG rank-1 cuts and {0, ½}-cuts, we test the time limit configurations {15, 8} and {30, 16}. In each iteration, after achieving RMP optimality, we check whether the overall solving time has exceeded 8 (or 16) h; upon reaching this threshold, the optimization process terminates immediately.
As shown in Table 3, separating SRCs with pre-computation rules offers substantial advantages without compromising solution quality. Across all instances, the LB* and ILB values remain identical to those obtained without pre-computation rules, confirming that the pre-computation step does not sacrifice accuracy. The key gains lie in efficiency: both the number of cuts added to the RMP and the computation time are dramatically reduced on average. The consistent reduction in computational steps and runtime confirms the efficiency of these pre-computation rules.
As presented in Table 4, SRCs consistently outperform both CG rank-1 cuts and {0, ½}-cuts on INRC-I instances across all tested time limit configurations. Specifically, SRCs yield tighter bounds than CG rank-1 cuts in 50% of instances and than {0, ½}-cuts in 33.3% of instances. Figure 1 presents two challenging instances to show how the lower bounds improve differently when these cuts are applied independently. A particularly significant finding from our experiments is that for the "medium_hidden01" instance, SRCs yield a new lower bound of 103.8, surpassing the previously reported lower bound of 103 in [21]. This advancement underscores the effectiveness of SRCs in refining solution bounds for challenging problem instances.
For CG rank-1 cuts and {0, ½}-cuts, increasing the time limits for both the separation and overall solving processes yields minimal benefits. Under the {30, 16} configuration, CG rank-1 cuts show no performance improvement, while {0, ½}-cuts exhibit gains on "medium_hidden01" (LB* increasing from 98.8 to 99) and "medium_hidden02" (LB* increasing from 214.9 to 215). Under both time limit configurations, {0, ½}-cuts consistently produce tighter lower bounds than CG rank-1 cuts across all lifted instances. The most pronounced performance is observed on "long_hidden01", where they achieve the tightest lower bound among all methods. In the case of CG rank-1 cuts, a critical observation was made on the instance "medium_hidden01". Under the {15, 8} configuration, the solving process exceeded 24 h without converging to optimality after incorporating 18 cuts into the RMP; we therefore interrupted the computation and documented the lifted lower bound obtained up to that point. Under the {30, 16} configuration, we observed identical convergence issues. A key contributing factor to these difficulties is the numerical instability often encountered with CG rank-1 cuts, which stems from their multipliers taking any value in the interval (0, 1). This characteristic critically impacts the solving of the PSP by weakening the effectiveness of the dominance rules, ultimately leading to the observed computational inefficiency and failure to converge.
For INRC-II instances (Table 3 and Table 5), {0, ½}-cuts demonstrate superior performance to both SRCs and CG rank-1 cuts. SRCs improve lower bounds in 57.1% of instances, and the total solving time remains within 90 s despite the addition of over 350 cuts. Under the {15, 8} configuration, while CG rank-1 cuts cannot improve the lower bounds of instances where SRCs are ineffective, they achieve higher ILB values than SRCs in 87.5% of improvable cases. {0, ½}-cuts produce the strongest overall results, improving lower bounds for 71.4% of instances (14.3% more than the alternatives) and achieving higher ILB values than SRCs and CG rank-1 cuts in 100% and 87.5% of improvable cases, respectively. Figure 2 visually demonstrates how SRCs, CG rank-1 cuts, and {0, ½}-cuts each strengthen the lower bounds differently in two example instances.
Under the {30, 16} configuration, CG rank-1 cuts produce better lower bounds in 14.3% of instances (specifically "n030w4_1_6-7-5-3" and "n050w4_0_0-4-8-7"), but demonstrate worse performance in 42.9% of cases. In contrast, {0, ½}-cuts achieve improved lower bounds in 42.9% of instances while showing minor degradations in only 14.3% of cases. Notably, {0, ½}-cuts obtain an absolute improvement of 1.4 in the lower bound for instance "n070w4_0_4-9-6-7", where no enhancement had been observed under the {15, 8} configuration. Three key factors probably explain the superior performance of {0, ½}-cuts: (1) their constrained multiplier space (limited to 0 or ½) enables more computationally efficient separation than CG rank-1 cuts; (2) their unrestricted formulation enables the generation of both more numerous and stronger violated cuts compared with SRCs; and (3) their inherent numerical stability circumvents the convergence issues prevalent in CG rank-1 cuts.
These computational results demonstrate that the effectiveness of different cut types in improving lower bounds is closely tied to the problem formulation. Based on experimental analysis, we identify two key characteristics of INRC-II that explain the comparatively weaker performance of SRCs on INRC-II versus INRC-I. First, INRC-II incorporates optimal nurse requirements rather than just minimum requirements, requiring the introduction of additional integer decision variables to track deviations from optimal staffing levels. Second, INRC-II introduces skill assignment constraints, which necessitate coverage to be satisfied for each combination of day, shift, and skill, thereby substantially increasing the number of coverage constraints. These formulation differences make identifying violated SRCs more challenging, particularly for large instances. This motivated our introduction of CG rank-1 cuts and {0,½}-cuts. As presented above, {0,½}-cuts show better performance.
6. Conclusions
Lower bounds are fundamental to efficient B&P algorithms. First, they guide the branch-and-bound process by providing a global lower bound for optimality verification and local lower bounds for node pruning when these bounds exceed the best-known solution. Second, they form the foundation for critical algorithmic components in modern B&P solvers: primal heuristics for feasible solutions, strong branching for tree navigation, and variable fixing for problem reduction. Crucially, the effectiveness of these techniques depends on bound tightness—tighter bounds yield faster convergence, smaller search trees, and improved overall performance in solving large-scale optimization problems. In this study, we propose three classes of non-robust cutting planes for NRPs—specifically, SRCs, CG rank-1 cuts, and {0, ½}-cuts—to enhance the lower bounds generated through column generation. For each cut type, we describe separation approaches augmented with acceleration strategies. As these cuts are inherently non-robust, each addition to the RMP necessitates introducing new resources in the PSP’s labeling algorithm. Consequently, we developed specialized methods to: (1) efficiently update these resources, and (2) seamlessly integrate them within the NRP dominance rules. Computational experiments on selected INRC-I and INRC-II instances show that the effectiveness of different cut types in improving lower bounds is related to the problem formulation. SRCs significantly improve bounds for challenging INRC-I cases, while {0, ½}-cuts outperform other cuts on INRC-II instances. These results demonstrate cutting planes’ considerable potential to enhance solution efficiency in B&P algorithms for NRPs.
There are two major limitations of this study that require further research attention. First, more efficient separation procedures are needed. Our implementation generates SRCs only for the smallest subset size, since cut generation for larger subsets would incur prohibitively long runtimes. For CG rank-1 cuts and {0, ½}-cuts, the MIP solver for the separation problem rarely terminates with an optimal solution within the time limit. Integrating cuts whose separation procedures take tens of minutes or longer into a B&P framework is clearly impractical. One acceleration approach is to develop separation heuristics and reduction rules; several such strategies have been reported in the literature [30,37]. However, developing such methods requires a deep understanding of the specific structure of the NRP. Machine learning presents a promising data-driven alternative for identifying key cut characteristics and improving computational efficiency [38]. Second, our analysis focuses solely on column generation contexts. Prior research suggests that greater potential could be realized by integrating these cuts into branch-and-bound with proper management strategies, particularly for mixed 0-1 programs [39]. Implementing them in B&P algorithms appears especially promising. Similarly, machine learning could optimize both which cuts to apply and when to apply them within branch-and-bound frameworks [40].