Robust Optimization for the Two-Dimensional Strip-Packing Problem with Variable-Sized Bins

Abstract: The two-dimensional strip-packing problem (2D-SPP) emerges as a notable variant of the cutting and packing (C&P) problem, aiming to optimize the arrangement of small rectangular items within strips of fixed width and infinite height so as to minimize the height used. Despite extensive academic exploration, applying 2D-SPP solutions in industrial settings remains challenging. Two significant issues, often overlooked in academia yet frequently encountered in industrial contexts, are the uncertain demand for items, exacerbated by the bullwhip effect, and the need for diverse types of strips to cater to varying customer needs. Our paper addresses this academia–industry gap by proposing a robust optimization model for the uncertain 2D-SPP with variable-sized bins, aiming to manage the demand fluctuations within a box uncertainty set framework. Additionally, we employ the contiguous one-dimensional relaxation technique in conjunction with column generation to tighten the lower bound of the problem, thereby augmenting solution accuracy. Furthermore, we leverage the Karush–Kuhn–Tucker (KKT) conditions to transform the model into a more tractable form, subsequently leading to an exact solution. Based on datasets from a real-life plastic-cutting company, comprehensive experiments validate the effectiveness and efficiency of our proposed relaxation method and algorithm, showcasing the potential for an improved industrial application of 2D-SPP solutions.


Introduction
The two-dimensional strip-packing problem (2D-SPP) has emerged as a pivotal variant within the broader domain of cutting and packing (C&P) problems. Since the 1990s, it has garnered substantial academic interest. In the context of 2D-SPP, a strip is defined as a rectangular master item with a fixed width and infinite height. The primary objective of 2D-SPP is to efficiently pack a collection of smaller rectangles into one or multiple such strips, aiming to minimize the cumulative height consumed.
The practical implications of 2D-SPP span a diverse range of applications. In the textile industry, for instance, there is a need to segment large plastic rolls into smaller, varied-sized pieces to cater to various product specifications. Here, the judicious utilization of the rolls becomes paramount. By leveraging the principles of 2D-SPP, the cutting process can be optimized to reduce wastage substantially. Analogously, manufacturing sectors that deal with materials like paper, textiles, and fabrics resonate with the challenges and solutions presented by 2D-SPP.
Beyond the confines of manufacturing, the methodologies of 2D-SPP find relevance in areas such as dynamic memory allocation, multiprocessor scheduling, and layout planning. Within these diverse industrial applications, various operational constraints need to be considered. These constraints predominantly revolve around the placement of the smaller rectangular items. Two quintessential placement strategies include decisions on whether to allow guillotine cuts (permitting horizontal or vertical edge-to-edge cuts) and whether to permit rotation (typically by 90 degrees).
However, beyond these operational intricacies, a paramount challenge in the industrial application of 2D-SPP is implementing it under uncertainty. For instance, the demand for items slated for packing is frequently accompanied by inherent unpredictabilities. Factors such as the bullwhip effect, which exacerbates demand variability throughout the supply chain, and the hurdles in forecasting, which stem from fluctuating market conditions or internal operational nuances, further compound this uncertainty. While the deterministic 2D-SPP model is a foundational framework, its applicability often diminishes in real-world strip-cutting scenarios, particularly when waste minimization is the focal point. An over-reliance on deterministic models can yield solutions that, though theoretically optimal, may culminate in heightened waste and operational inefficiencies when juxtaposed with models that incorporate uncertainty.
Traditional deterministic models might lead to suboptimal or even infeasible solutions when faced with unexpected variations in demand. Therefore, there is a pressing need for methods that can provide robust solutions, ensuring efficient packing, even in the face of uncertainties. By adopting effective strategies, decision-makers can be better equipped to handle the unpredictable nature of real-world demands, leading to more resilient and cost-effective operations.
In light of the challenges posed by uncertainties in the 2D-SPP, this paper introduces a novel approach to tackle the variable-sized 2D-SPP under uncertain demands. Our method stands out in several ways:

•
One-Dimensional Contiguous Relaxation: To enhance tractability and reduce computational time, we introduce the one-dimensional contiguous relaxation model, the one-dimensional contiguous bin-packing problem (1D-CBPP). This strategy transforms the two-dimensional model into a one-dimensional variant. Through a column generation algorithm, we determine the exact solution for the 1D-CBPP, providing a tighter lower bound for the 2D-SPP and eliminating the necessity for manual pattern designs.

•
Robust Optimization: Our model addresses the intricacies of the multi-strip 2D-SPP amidst demand uncertainty. By adopting a robust optimization stance, we integrate a carefully constructed box uncertainty set to adeptly manage the complexities of the multi-strip 2D-SPP in the face of fluctuating demands. Moving beyond traditional probabilistic approaches, we embrace the robust box-type uncertainty set, capturing the extant uncertainties. This deliberate methodology ensures that our solutions remain optimal and demonstrate resilience to varying demand scenarios. For enhanced tractability of the robust model, we employ the Karush-Kuhn-Tucker (KKT) reformulation as a solution strategy.

•
Empirical Validation: We undertake an in-depth case study centered on an established plastic-cutting enterprise based on real-life data. This analysis serves to underscore the effectiveness of our proposed methodology. Beyond validation, this empirical exploration furnishes valuable insights into the real-world implications of our approach.
The structure of this paper unfolds as follows: Section 2 provides a comprehensive survey of the relevant literature. In Section 3, we unveil the robust model for the 1D-CBPP, underpinned by the box uncertainty set. Section 4 elucidates the integration of column generation with the KKT reformulation, which is employed to ascertain the exact solution of our proposed 1D-CBPP model and provides a superior lower bound for the 2D-SPP. Section 5 is dedicated to numerical experiments to validate our methodology. Finally, we culminate our discussion and present our conclusions in Section 6.

Literature Review
The 2D-SPP is well-recognized as an NP-hard problem [1], epitomizing a classical, extensively researched problem in both academic and practical realms. Numerous variants of the 2D-SPP have been developed, with many focusing on the diverse facets of the items involved, whether it be the modes of inputting the demands, the types of strips, or processing methodologies, which, in turn, amplify the computational complexity of the problem.
The manner in which demand is inputted into the system primarily bifurcates into online and offline input methods. Online input entails packing items one by one according to the sequential order of demand; at time t, the information pertaining to the demands at subsequent times t + 1, t + 2, . . ., T remains undisclosed. The competitive ratio is often employed to gauge the performance of an online algorithm in comparison to an optimal offline algorithm. Yu et al. [2] established a new bound of (3 + √5)/2 for the competitive ratio of online strip packing. In a subsequent endeavor, Yu et al. [3] improved the upper bound to 5, advancing the theoretical understanding of online strip packing. Nonetheless, online input methodologies do not constitute the focal point of our research. Predominantly, raw-material-cutting factories possess a substantial forecasted demand over the planning horizon, as opposed to sequential demand input. The rationale for favoring offline over online input entails (1) significant cost reductions, as online solutions incur higher costs and, despite further tightening of the bounds, generate a considerable amount of waste, and (2) avoidance of excessive setup costs due to dispersed demand acceptance. Sequential demand acceptance disrupts operational continuity, resulting in additional setup costs upon the arrival of new demands following periods of inactivity. Releasing demands sequentially not only defies practicality but also propels suboptimal solutions and waste, thereby impeding the realization of a globally optimal solution. On the contrary, offline input operates under the presumption of comprehensive information availability regarding the demand for all smaller items within a planned packing cycle. This provision facilitates preprocessing and resequencing prior to the initiation of packing operations, aiming to attain a globally optimal solution. A majority of the variations in problem formulation and solution methodologies are anchored around the premise of offline input, thereby signifying its practical relevance and applicability in addressing the 2D-SPP.
Beyond the modes of input, unique cutting patterns are tailored to distinct cutting industries. In certain instances, guillotine cuts are incorporated into the 2D-SPP framework. Guillotine cuts stand as a fundamental cutting strategy within the scope of our study, characterized by their linear edge-to-edge incisions that run parallel to the width of the primary roll, extending throughout its length. It is crucial to note that such a methodology often necessitates an additional trimming cut to separate the redundant material from the usable segment. Belov and Scheithauer [4] examined the one-dimensional cutting stock problem (1D-CSP) and the two-dimensional two-stage guillotine-constrained cutting problem. They harnessed a branch-and-price-and-cut method, grounded in linear programming relaxation and column generation, to address these problems and augmented the Gomory cutting plane method by incorporating mixed-integer Gomory cuts. These cuts furnish a finite description of the convex hull of integer solutions, thereby enhancing the upper bound of the model's solution. Puchinger and Raidl [5] restructured the two-dimensional bin-packing problem with guillotine cuts into a three-stage problem, utilizing a branch-and-price-and-cut method enriched with Dantzig-Wolfe (D-W) decomposition and dual-optimal cuts. Nonetheless, some authors permit the orthogonal rotation of small items. Delorme et al. [6] introduced a nuanced variant of the 2D-SPP where items are authorized to undergo a 90-degree rotation, and the resolution is expedited through the employment of logic-based Benders decomposition. Additionally, for larger items, or in this case, the strip, there can be variations in widths. Pisinger and Sigurd [7] approached different widths of bins as different costs, transforming the solution of a mixed-integer programming (MIP) problem into a high-quality lower bound provided by the D-W decomposition and delayed column generation technique. In their subsequent work, Pisinger and Sigurd [8] incorporated more operational and tactical constraints, such as guillotine-cutting requirements and fixed positions. They employed the branch-and-price-and-cut algorithm to address these complexities.
Moreover, aside from the operational constraints, exact approaches have emerged as critical elements in addressing the 2D-SPP, showcasing a rich diversity over time. Particularly, recent advancements have seen the application of machine learning techniques, such as reinforcement learning, in supplementing or even substituting the traditional mathematical model-solving process [9,10]. However, the primary advantage of mathematical models lies in their transparency, allowing for a clear understanding of the decision-making process. This is especially crucial in industries where the rationale behind the packing strategy is as important as the strategy itself. Among the various exact algorithms, the branch-and-bound method stands out as a commonly used technique for solving the 2D-SPP [11][12][13]. Notably, Arahori et al. [11] introduced a specific partitioning approach to obtain a better lower bound of the problem. In contrast, Boschetti et al. [13] developed constructive heuristics based on different item placement strategies to improve the upper bound. Furthermore, the branch-and-price method, as proposed by Bettinelli et al. [14], represents another significant advancement in dealing with the 2D-SPP. This method showcases the evolving landscape of exact algorithms and their increasing sophistication in addressing the complexities of strip packing problems.
The prevailing strategy for devising an exact algorithm for the 2D-SPP involves employing a technique known as the sets of points [15][16][17]. This method utilizes coordinates (x, y) to precisely define the position of each small item, with point p = (x, y) denoting the position of the bottom-left corner of the item. Given that both item and bin dimensions are enumerated in integers, the focus is narrowed solely to integer coordinates, ensuring the preservation of solution optimality. The construction of coordinate sets for both the x-axis and the y-axis, using a range derived from the dimensions of bin i and the dimensions of item j, delineates a clear search domain. It is evident that a reduction in the coordinate sets corresponds to a constriction of the search domain. However, exact algorithms are often hard to compute in a short time due to complex constraints; therefore, approximate algorithms with relaxation are widely applied [1,18–27]. The most classical one is the column-generation linear-relaxation method proposed by Gilmore and Gomory, especially on the 1D-CSP. This method decomposes a MIP problem, based on reduced costs, into two parts: a master problem and a sub-problem. The sub-problem generates patterns that are iteratively used as columns in the master problem [28]. The other crucial relaxation technique is contiguous one-dimensional relaxation. By employing this contiguous relaxation, the 2D-SPP can be reformulated as the one-dimensional contiguous bin-packing problem (1D-CBPP) [26]. The foundational idea behind this relaxation is to execute horizontal slices for each item j, converting it into h_j unit-height slices with a width of w_j. This step converts the problem into a 1D-BPP with a specified bin capacity of W.
Consequently, the number of one-dimensional bins derived from this process presents a lower bound on the height of any possible 2D-SPP solution. To further refine this lower bound, the 1D-CBPP incorporates a contiguity condition, mandating that all slices obtained from a single item j must be accommodated within h_j consecutive one-dimensional bins. Despite the 1D-CBPP acting as a relaxation of the 2D-SPP, it retains strong NP-hard complexity, implying that optimal solutions can only be procured through exhaustive enumeration techniques. A detailed branch-and-bound algorithm was introduced in Martello et al. [26] for achieving an exact solution, where the derived lower bound was applied within an enumerative algorithm tailored for the 2D-SPP. Following this, Alvarez-Valdes et al. [27] put forth an ILP model for the 1D-CBPP to solve it within a branch-and-bound framework for the 2D-SPP. The exploration of these relaxation techniques and subsequent algorithmic approaches significantly contributes to developing more efficient and effective strategies for tackling the 2D-SPP, illuminating pathways for potential advancements in solving related multi-dimensional packing and optimization problems. Aside from the approximation approaches presented before, many heuristic algorithms further optimize the solution space and provide better computational time within an acceptable gap. For example, Herz [15] proposed a recursive algorithm to optimize memory usage. Christofides and Whitlock [16] proposed a tree-search algorithm to narrow down the search space. Additionally, heuristic algorithms are often rule-based methods, such as the bottom-left (BL) and bottom-left-fill (BLF) [29] and the fast layer-based heuristic (FH) [30] methods for packing rectangles. Strategy-based methods, in contrast, are tailored to the design of the problem.
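To make the relaxation concrete, here is a minimal Python sketch of the slicing step and the resulting area-based lower bound. The strip width W and the item list are hypothetical; note that solving the 1D-CBPP exactly, with the contiguity condition, yields a bound at least as tight as this simple area bound.

```python
# Sketch of the contiguous one-dimensional relaxation on hypothetical items:
# each rectangle (w_j, h_j) becomes h_j unit-height slices of width w_j, and
# packing the slices into width-W bins bounds the strip height from below.
import math

W = 10                              # strip width (assumed)
items = [(4, 3), (6, 2), (5, 4)]    # (w_j, h_j) of the demanded rectangles

slices = [w for w, h in items for _ in range(h)]   # 3 + 2 + 4 = 9 unit slices
area_bound = math.ceil(sum(slices) / W)            # classic area lower bound
print(len(slices), area_bound)  # 9 slices; any packing needs height >= 5
```

Each slice must eventually be assigned to one unit-height bin, so the number of bins used by any feasible 1D packing lower-bounds the 2D strip height.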
Beyond the deterministic models, uncertainty significantly pervades industrial practices, predominantly manifested through demand fluctuations. To aptly model this uncertainty, stochastic programming and robust optimization emerge as two efficacious methodologies. Stochastic optimization operates on the premise of possessing precise knowledge about the uncertainty distribution, facilitating a probabilistic approach towards optimization under uncertainty; for instance, Curcio et al. [31] leveraged stochastic optimization to model the CSP amidst demand uncertainty, employing Monte Carlo simulation for enhanced accuracy. Conversely, Cherri et al. [32] articulated a two-stage stochastic program with recourse, aiming to efficiently manage usable leftovers under the uncertainty of future demand. However, the applicability of stochastic programming presents certain limitations, predominantly tethered to the high level of mechanization in the production environment. For smaller to medium-sized manufacturing entities, acquiring precise production data poses a formidable challenge. Even with data acquisition, severe data inadequacy impedes deriving an accurate distribution, thereby compromising the exactitude of the solutions obtained through the model. On the other hand, robust optimization provides a pragmatic workaround to these challenges. Unlike stochastic programming, robust optimization does not necessitate exhaustive knowledge of the uncertainty distribution. Instead, it seeks to furnish solutions that remain viable under a variety of adverse scenarios, thereby ensuring a level of immunity against the possible worst-case scenarios. This methodology inherently alleviates the problem of data scarcity and inaccuracies, making it a more attractive option for industries with limited data availability or pronounced uncertainty. For illustration, Alem and Morabito [33] integrated lot-sizing into the planning phase and employed a robust methodology to address cost and demand uncertainties simultaneously. Their initiative in utilizing robust optimization manifests the potential of this approach in handling multiple uncertainties in a unified framework, thus showcasing its applicability in real-world industrial setups. Furthermore, Ide et al. [34] introduced an innovative uncertainty facet within the veneer CSP industry, delineating the heterogeneous quality of wood as a significant variable. Their research underscores a novel application of deterministic and robust optimization in the wood-cutting industry, thereby expanding the scope of robust optimization techniques in addressing industry-specific uncertainties. In addition to these developments, Arbib et al. [35] proposed a robust collaborative stock assortment and cutting strategy in glass production, addressing the challenges posed by defects. Meanwhile, Coutinho et al. [36] focused on the robust optimization of packing problems, particularly for tissue paper containers, further illustrating the versatility and applicability of robust optimization methods in diverse industrial contexts.

Problem Formulation
This section describes the problem and presents the robust optimization model for the variable-sized 1D-CBPP, along with its corresponding reformulation.

Problem Description
In this paper, we explore a variant of the 2D-SPP, focusing on scenarios involving guillotine cuts, varying strip widths, and a diverse set of small rectangular items with different quantities and dimensions. Our deterministic model for the 2D-SPP assumes exact knowledge of item demand quantities within specified planning horizons.
Due to computational complexity considerations and operational constraints, such as the guillotine cuts, we opted for a one-dimensional contiguous relaxation approximation of the 2D-SPP.
Our deterministic 1D-CBPP model decides the used height for each strip width and the specific cutting patterns within each strip width, so that each demanded small item is cut in the correct "quantity". The designed quantity is equivalent to the actual quantity of each small item multiplied by the actual length of the required small items. In our 1D-CBPP, the width of these smaller rectangular items remains unchanged, while their heights are disregarded and relaxed to one unit. As such, a bin created by each guillotine cut has a height of one unit, facilitating the contiguous relaxation process and rendering the overall problem approximately equivalent to multiple 1D-CSPs.
This relaxation step incurs no cost, as it merely yields fragmented pieces of the demanded small items. In many cutting industries, the 1D-CBPP is often an initial step. In the subsequent phase, items with similar widths and materials are grouped into a larger bin. This aggregation process, known as "joining", involves combining small items with identical widths to form a new demanded item that satisfies customer requirements, with the aim of minimizing waste.
Figure 1 illustrates the reduction of the two-dimensional strip-packing problem (2D-SPP) to the one-dimensional contiguous bin-packing problem (1D-CBPP). Displayed are four strips of varying widths, designated as Strip A, Strip B, Strip C, and Strip D, each possessing an infinite length. Within each strip, the left side features white rectangular segments, each annotated with a number representing the dimensions of the rectangular items that customers require. The right side depicts the transformation of the strips into the 1D-CBPP model by executing guillotine cuts according to the unit length of the small rectangles, all the while preserving the original width of each rectangle.

However, the deterministic model, while effective under certain conditions, often falters in the face of fluctuating demand quantities. Such uncertainties can severely disrupt routine production schedules, incurring substantial adjustment costs. For example, a singular modification to a production schedule might necessitate additional setups, potentially inducing extended overtime or backlog production. This can escalate labor costs or, in extreme cases, risk deliveries overshooting the planning horizon or even the stipulated deadline. This unpredictability can frequently be attributed to the "bullwhip effect" observed in supply chains, where customers frequently alter their demand quantities, often at the eleventh hour. In light of these challenges, we introduce a robust optimization model for the one-dimensional contiguous bin-packing problem (RO-1D-CBPP), incorporating a box uncertainty set. This model aims to regulate the volatility in demand quantity. The deterministic model we have discussed serves as a foundational base for our proposed RO-1D-CBPP.

The RO Model for 1D-CBPP
This subsection presents a robust optimization (RO) model for the one-dimensional contiguous bin-packing problem (1D-CBPP). The assumptions made for the problem are listed as follows:
(1) The strips vary in width, while their heights are considered infinite.
(2) Each demand must be met in full; stockouts or replenishments in subsequent cycles are not permissible.
(3) While there exists uncertainty in the quantity of customer demand, the product type and specifications remain consistent.
(4) Uncertainties in the quantity supplied by vendors are not considered.
(5) Given that the research focus is on the cutting process, it is assumed that the number of machines and their production capacities are infinite.
(6) The width of each order varies.
(7) The width of each parent strip varies.
(8) Strips are operated on and cut longitudinally based on the width of the parent strip.
(9) The small items' quantity exhibits uncertainty, yet the type and dimensions of the small items remain consistent. All other uncertainties are not taken into account in this study.

Sets and indices:
I Represents the set of strip types, indexed by i.
J Represents the set of order types, indexed by j.
K_i Represents the set of cutting schemes for the i-th parent roll, indexed by k.
Ω Denotes the box uncertainty set, defined as Ω = {ξ : |ξ_j| ≤ Γ, ∀j ∈ J}, where the non-negative number Γ characterizes the range of variation of the random variable ξ_j.
K̄_i Represents the subset of cutting patterns generated for the i-th strip, where K̄_i ⊆ K_i, indexed by k.

Input parameters:
M Represents a very large number.
α Characterizes the range of variation in the uncertain part of the order demand.
u_i Denotes the width corresponding to the i-th strip, ∀i ∈ I.
v_j Denotes the width corresponding to the j-th order, ∀j ∈ J.
f_i Represents the total supply length of the i-th strip, ∀i ∈ I.
g_j Represents the total demand length of the j-th order, ∀j ∈ J.
a_ikj Specifies the cut quantity for the j-th order under the k-th cutting scheme for the i-th strip, ∀i ∈ I, ∀k ∈ K_i, ∀j ∈ J.
ξ_j Denotes the random variable corresponding to the j-th order, ∀j ∈ J.
Decision variables:
X_ik Denotes the length of the i-th strip cut under the k-th cutting scheme; a non-negative integer, ∀i ∈ I, ∀k ∈ K_i.
λ_1j Represents the dual variable corresponding to the demand constraints in the model; a non-negative number, ∀j ∈ J.
λ_2i Represents the dual variable corresponding to the supply constraints in the model; a non-negative number, ∀i ∈ I.
R_j Denotes a binary variable, ∀j ∈ J.
S_i Denotes a binary variable, ∀i ∈ I.
T_ik Denotes a binary variable, ∀i ∈ I, ∀k ∈ K_i.
Figure 2 depicts two strips, labeled Strip 1 and Strip 2, each featuring guillotine cut intervals, indicated by blue blocks that represent a unit of length. Guillotine cuts are a specific type of cut used in cutting stock problems, characterized by their ability to divide a material entirely along its width or height, creating separate sections without leaving any connected remnants. Within these demarcated intervals, a cutting stock problem is addressed using column generation techniques to ascertain optimal cutting patterns. The numbers displayed on the white rectangular items correspond to identifiers for the demanded small rectangular items within the scope of a two-dimensional strip-packing problem (2D-SPP). These items retain the width of the original 2D-SPP's demanded sizes, while their quantity mirrors the original two-dimensional length. Thus, in this scenario, the 2D-SPP is effectively transformed into a one-dimensional contiguous bin-packing problem (1D-CBPP), concentrating on the efficient distribution of these items along the length of the strip.

O.P.:

min Σ_{i∈I} Σ_{k∈K_i} u_i X_ik (1)

s.t. Σ_{i∈I} Σ_{k∈K_i} a_ikj X_ik ≥ g_j, ∀j ∈ J (2)

Σ_{k∈K_i} X_ik ≤ f_i, ∀i ∈ I (3)

X_ik ∈ Z≥0, ∀i ∈ I, ∀k ∈ K_i (4)
The objective function (1) denotes the minimization of the total cutting area. Inequality constraint (2)-demand constraint: For each type of demand, the total length generated for this demand should be no less than the total length of this demand.
Inequality constraint (3)-supply constraint: For each type of parent roll, the length used for cutting cannot exceed the total supply length of the corresponding parent roll.
Variable constraint (4) indicates the cutting quantity for the j-th order under the k-th cutting scheme for the i-th parent roll. Additionally, the number of cuts should always be non-negative integers.
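As a small worked illustration of the deterministic model, the following Python sketch brute-forces a hypothetical single-strip instance (the strip width, order widths, and demands are all assumed values, not data from the paper); realistic instances require an ILP solver.

```python
# Minimal brute-force illustration of the deterministic model on a
# hypothetical single-strip instance.
from itertools import product

u = 10          # strip width (assumed)
v = [4, 3]      # order widths (assumed)
g = [2, 2]      # demanded lengths, in unit-height bins (assumed)

# Enumerate feasible patterns, then keep only maximal ones
# (no further order width fits into the leftover width).
feasible = [a for a in product(range(u // v[0] + 1), range(u // v[1] + 1))
            if a[0] * v[0] + a[1] * v[1] <= u]
patterns = [a for a in feasible
            if all(a[0] * v[0] + a[1] * v[1] + w > u for w in v)]

best = None
for X in product(range(4), repeat=len(patterns)):       # pattern run lengths
    produced = [sum(a[j] * x for a, x in zip(patterns, X))
                for j in range(len(v))]
    if all(p >= d for p, d in zip(produced, g)):        # demand constraint
        area = u * sum(X)                               # objective: cut area
        best = area if best is None or area < best else best
print(patterns, best)  # three maximal patterns; minimum cutting area 20
```

Restricting attention to maximal patterns is a standard reduction for cutting stock models: any non-maximal pattern is dominated by a maximal one.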
The robust optimization model is as follows:

R-O.P.:

max_{ξ∈Ω} min_X Σ_{i∈I} Σ_{k∈K_i} u_i X_ik (5)

s.t. Σ_{i∈I} Σ_{k∈K_i} a_ikj X_ik ≥ g_j (1 + α ξ_j), ∀j ∈ J (6)

Σ_{k∈K_i} X_ik ≤ f_i, ∀i ∈ I (7)

|ξ_j| ≤ Γ, ∀j ∈ J (8)

X_ik ∈ Z≥0, ∀i ∈ I, ∀k ∈ K_i (9)

The objective function (5) seeks to minimize the total cutting area under the worst (most robust) circumstances.
Inequality constraint (6)-demand constraint: For each type of demand, the total length produced for this demand, considering the variations within the box uncertainty set, should be no less than the total length required for this demand. The values of a_ikj are provided by the output of Section 3.
Inequality constraint (7)-supply constraint: For each type of parent roll, the length used for cutting should not exceed the total supply length of that respective parent roll.
Inequality constraint (8)-random variable constraint: The absolute value of the random variable associated with each demand should not exceed the non-negative value Γ.
Variable constraint (9) specifies the cutting quantity for the j-th order under the k-th cutting scheme for the i-th parent roll. Furthermore, the cutting counts should always be non-negative integers.
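Under a box uncertainty set of this kind, the binding worst case for the demand constraints is straightforward: since larger demands only tighten a "≥" constraint, the adversary sets ξ_j to its upper bound Γ. A short sketch, assuming the uncertainty enters the demand multiplicatively as g_j(1 + αξ_j) (the paper's exact functional form may differ, and the parameter values here are hypothetical):

```python
# Worst-case demand under the box uncertainty set |xi_j| <= Gamma:
# the binding case for a ">=" demand constraint is xi_j = +Gamma,
# inflating each nominal demand g_j to g_j * (1 + alpha * Gamma).
alpha, Gamma = 0.1, 2.0        # assumed variation scale and box radius
g = [100.0, 250.0, 40.0]       # assumed nominal demanded lengths

robust_g = [gj * (1 + alpha * Gamma) for gj in g]
print(robust_g)  # each demand inflated by 20%
```

This is why the box set keeps the model tractable: the inner maximization over ξ collapses to a single deterministic worst-case demand vector.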

Column Generation
In this study, we present a novel deterministic approach to the one-dimensional contiguous bin-packing problem (1D-CBPP) with variable-sized bins, inspired by the principles of column generation. Our methodology offers a unique perspective on handling different strip types. We utilize a specialized column generation algorithm to methodically discern and assess all feasible cutting patterns for each specific supply material. We now proceed to elucidate the original deterministic problem formulation.

Master Problem (MP)
min Σ_{i∈I} Σ_{k∈K̄_i} X_ik (10)

s.t. Σ_{i∈I} Σ_{k∈K̄_i} A*_ikj X_ik ≥ g_j, ∀j ∈ J (11)

Σ_{k∈K̄_i} X_ik ≤ f_i, ∀i ∈ I (12)

X_ik ∈ Z≥0, ∀i ∈ I, ∀k ∈ K̄_i (13)

λ_1j ≥ 0, ∀j ∈ J (14)

The objective function (10) aims to minimize the total number of cuts, where k belongs only to K̄_i.
Inequality constraint (11)-demand constraint: For each type of demand, the total length generated should be at least equal to the demand's total length. Here, A*_ikj is the result derived from the sub-problem and is a known parameter.
Inequality constraint (12)-supply constraint: For each parent roll type, the length used in cutting should not surpass the parent roll's overall supply length.
Variable constraint (13) stipulates that the number of cuts must be non-negative integers. Variable constraint (14) indicates that the shadow price for each demand should be no less than 0.

Sub-Problem (SP):

min 1 − Σ_{j∈J} λ_1j A_ikj (15)

s.t. Σ_{j∈J} v_j A_ikj ≤ u_i, ∀i ∈ I (16)

A_ikj ∈ Z≥0, ∀j ∈ J (17)
The objective function (15) aims to find the pattern A_ikj with the most negative reduced cost. The shadow prices for each demand, derived from the master problem, are known parameters that are incorporated into this objective function.
Inequality constraint (16)-cutting scheme generation constraint: For each type of parent roll, the total cutting width under any cutting scheme should not exceed the width of that parent roll.
Variable constraint (17) specifies the cutting quantity for the j-th demand under the k-th cutting scheme for the i-th parent roll. Additionally, the cutting quantities must be non-negative integers.
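Since the sub-problem is an unbounded integer knapsack over the strip width, it can be solved by dynamic programming. A minimal sketch with hypothetical widths and dual prices (the function name and data are illustrative, not the paper's implementation):

```python
# Pricing sub-problem sketch: for a strip of width u_i, find the pattern
# minimizing 1 - sum_j lambda_1j * A_ikj via an unbounded knapsack DP.
def price_pattern(u_i, widths, duals):
    """Return (reduced cost, pattern) for a strip of width u_i."""
    best = [0.0] * (u_i + 1)        # best[c]: max dual value within capacity c
    take = [None] * (u_i + 1)
    for c in range(1, u_i + 1):
        for j, (w, lam) in enumerate(zip(widths, duals)):
            if w <= c and best[c - w] + lam > best[c]:
                best[c], take[c] = best[c - w] + lam, j
    pattern, c = [0] * len(widths), u_i   # recover A_ikj from the DP choices
    while c > 0 and take[c] is not None:
        pattern[take[c]] += 1
        c -= widths[take[c]]
    return 1.0 - best[u_i], pattern       # negative => improving column

rc, col = price_pattern(10, [4, 3], [0.4, 0.35])
print(col, rc < 0)  # pattern [1, 2] fills the width; reduced cost is negative
```

A returned reduced cost below zero means the generated pattern should enter the restricted master problem as a new column.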
Figure 3 describes the column generation procedure for solving an optimization problem, as mentioned in the article. The process begins at the start, where the initial step involves defining the sets and parameters that will be used in the model. These sets and parameters include all the necessary components, such as the decision variables, constraints, and the objective function.
Following this setup, the master problem is formulated. This master problem is the main part of the full optimization problem, typically including a subset of all possible variables or columns. It provides an initial solution that may not yet be optimal but is computationally easier to solve. After solving the master problem, the algorithm proceeds to evaluate a sub-problem, which is designed to improve the current solution by generating new columns. This sub-problem is related to the master problem but focuses on a specific aspect or a smaller set of variables. The objective value of the sub-problem is checked: if it is less than zero, it implies that there is potential to improve the master problem's solution by adding a new column generated by the sub-problem. If the sub-problem's objective is greater than or equal to zero, no further improvement is possible with the current set of columns, and the solution of the master problem is considered to be optimal.
At this point, the optimal solution is outputted, and the process ends. The optimal solution here refers to the best solution found for the master problem, given the columns generated during the process. This solution is optimal in the context of the column generation approach, meaning that within the columns considered, no better solution can be found. The algorithm stops here because it has reached a point where adding new columns does not lead to an improvement in the objective value of the master problem.
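The loop just described can be sketched end to end. This is a hedged illustration, not the paper's implementation: it assumes SciPy's HiGHS-based `linprog` for the restricted master LP (whose `ineqlin.marginals` field supplies the shadow prices) on a hypothetical single-strip instance, with the pricing step as an unbounded knapsack.

```python
# Compact column-generation loop matching the flowchart: solve the restricted
# master LP, price a new pattern, repeat until no negative reduced cost.
import numpy as np
from scipy.optimize import linprog

u, v, g = 10, [4, 3], [30.0, 30.0]     # strip width, order widths, demands
cols = [[2, 0], [0, 3]]                # initial homogeneous patterns

while True:
    A = np.array(cols, dtype=float).T  # length of order j produced per pattern
    res = linprog(c=np.ones(len(cols)),            # master: min total length
                  A_ub=-A, b_ub=-np.array(g),      # demand: A x >= g
                  bounds=(0, None), method="highs")
    duals = -res.ineqlin.marginals                 # shadow prices lambda_1j
    best = [0.0] * (u + 1)                         # pricing: unbounded knapsack
    take = [None] * (u + 1)
    for cap in range(1, u + 1):
        for j, w in enumerate(v):
            if w <= cap and best[cap - w] + duals[j] > best[cap]:
                best[cap], take[cap] = best[cap - w] + duals[j], j
    if best[u] <= 1 + 1e-9:                        # no negative reduced cost
        break
    new_col, cap = [0] * len(v), u                 # recover and add the column
    while cap > 0 and take[cap] is not None:
        new_col[take[cap]] += 1
        cap -= v[take[cap]]
    cols.append(new_col)

print(len(cols), res.fun)  # 3 columns; LP bound 22.5 on total strip length
```

On this toy instance, one pricing round adds the mixed pattern [1, 2] and the next LP solve certifies optimality, giving the LP lower bound that the paper's exact method then strengthens to an integer solution.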

KKT-Box Uncertainty Set
This section considers the uncertainty in demand during the cutting process and uses the box uncertainty set Ω to investigate the most robust scenarios. The KKT conditions are utilized to handle the constraints of the original problem.
Subsequently, we introduce the robust cutting scheduling problem under demand uncertainty (R-O.P.), the reformulation obtained via the Lagrangian KKT conditions (R-RP1), and the problem obtained from R-RP1 by linearizing the complementary slackness constraints (R-RP2). Only the R-RP2 problem needs to be solved to obtain the exact solution for the most robust demand in the cutting schedule.
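To make the role of the box uncertainty set concrete, the sketch below computes the worst-case demand realization under one common parameterization. This parameterization (demand d_j inflated as d_j(1 + α ξ_j) with |ξ_j| ≤ Γ) is an assumption for illustration, not necessarily the paper's exact form:

```python
def worst_case_demands(demands, alpha, gamma):
    """Worst-case demands under a box uncertainty set.

    Assumed (hypothetical) parameterization: each demand d_j becomes
    d_j * (1 + alpha * xi_j) with |xi_j| <= gamma. Since the demand
    constraint requires production to cover demand, the adversarial
    scenario pushes every xi_j to its upper bound +gamma.
    """
    return [d * (1.0 + alpha * gamma) for d in demands]

# With alpha = 0.1 and gamma = 1, every demand is inflated by 10%.
```

Under this reading, α = 0.1 and Γ = 1 inflate each demand by a factor of 1.1, which is consistent with the roughly 10% gap between the robust and deterministic objective values reported later in the comparison of solution quality.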
Proof. Given a decision y for the master problem, we first present a simplified robust optimization problem and derive its KKT conditions. After eliminating the dual variables θ, the original problem can be equivalently transformed, so the robust problem with a max-min structure becomes a pure maximization problem.
Following the proof introduced by Zeng and Zhao [37], the robust optimization problem is first expressed in Lagrangian form, combining the original objective function with the constraints via Lagrange multipliers. The KKT conditions stipulate that the gradient of the Lagrangian must vanish at the optimal point: the first-order derivatives with respect to all decision variables and Lagrange multipliers equal zero, so the solution is a stationary point. Primal feasibility ensures that the solution satisfies all original constraints, while dual feasibility requires the Lagrange multipliers associated with inequality constraints to be non-negative. Complementary slackness states that, for each inequality constraint, either the constraint is active (holds with equality) or the corresponding multiplier is zero, which removes inactive constraints from the optimization and simplifies the solution process. Applying the KKT conditions transforms the robust optimization problem into a system of equations and inequalities amenable to exact solution methods, typically a mixed-integer programming (MIP) formulation that can be tackled by solvers such as Gurobi. The column generation technique used in conjunction with the KKT conditions dynamically adds variables (columns) that improve the objective function, a standard device for reducing the computational burden of large-scale linear programs.
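As a generic illustration of this reduction (a sketch with schematic data c, A, b(ξ), not the paper's exact R-O.P. formulation), the inner minimization of a max-min robust problem can be replaced by its KKT system:

```latex
% Inner problem, for a fixed outer decision:
\min_{x \ge 0} \; c^{\top} x \quad \text{s.t.} \quad A x \ge b(\xi)

% KKT system: dual feasibility, primal feasibility, complementary slackness
A^{\top} \theta \le c, \qquad \theta \ge 0, \qquad
A x \ge b(\xi), \qquad x \ge 0,

\theta_i \big( A x - b(\xi) \big)_i = 0 \;\; \forall i, \qquad
x_k \big( c - A^{\top} \theta \big)_k = 0 \;\; \forall k.
```

Substituting this system for the inner minimization turns the max-min problem into a single maximization over (x, θ, ξ), which mirrors the dual constraint (21) and the complementary slackness constraints (22)-(24) of R-RP1.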
It is important to note that the parameter design here slightly differs from that in Section 3.

R-RP1
The objective function (18) aims to maximize the total cutting area. Inequality constraint (19), the demand constraint: for each type of demand, the total length produced should be no less than its total demand length. The parameters a_ikj are provided by the output of Section 3, and the demands fluctuate within the box uncertainty set.
Inequality constraint (20), the supply constraint: for every parent roll type, the length utilized for cutting should not exceed the total supply length of the corresponding parent roll.
Inequality constraint (21) is the dual constraint of R-O.P. Equality constraint (22) is the supply complementary slackness constraint, (23) the demand complementary slackness constraint, and (24) the complementary slackness constraint for the decision variable X_ik. Inequality constraint (25), the random-variable constraint: for each type of demand, the absolute value of the associated random variable must not exceed the non-negative number Γ.
Variable constraint (26) indicates the non-negativity of the dual variable corresponding to the demand constraint in the model.
Variable constraint (27) indicates the non-negativity of the dual variable corresponding to the supply constraint in the model.
Variable constraint (28) specifies the cutting quantity for the j-th demand under the k-th cutting scheme for the i-th parent roll; the number of cuts must always be a non-negative integer.
Upon transforming the equality constraints into equivalent linearized inequality constraints, using the binary variables S_j ∈ {0, 1}, ∀j ∈ J, introduced in (44), we obtain the following reformulation, denoted as R-RP2. The objective function (29) aims to maximize the total cutting area. Inequality constraint (30), the demand constraint: for each type of demand, the total length produced should be no less than the total demand length. The values of a_ikj are provided by the outputs of Section 3, and the demands vary within the box uncertainty set.
Inequality constraint (31), the supply constraint: for each type of parent roll, the length used for cutting cannot exceed the total supply length of the respective parent roll.
Inequality constraint (39), the random-variable constraint: the absolute value of the random variable associated with each type of demand must not exceed the non-negative value Γ.
Variable constraint (40) indicates the non-negativity of the dual variable corresponding to the demand constraint in the model.
Variable constraint (41) indicates the non-negativity of the dual variable corresponding to the supply constraint in the model.
Variable constraint (42) specifies the cutting quantity for the j-th order under the k-th cutting scheme for the i-th parent roll; the number of cuts must always be a non-negative integer.
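The linearization step behind R-RP2 follows the standard big-M pattern; a sketch for one complementary slackness condition is shown below, where s_j denotes the slack of the associated inequality, S_j is the binary variable of constraint (44), and M is a sufficiently large constant (the exact coefficients in the paper may differ):

```latex
\theta_j \, s_j = 0, \;\; \theta_j, s_j \ge 0
\quad \Longleftrightarrow \quad
\theta_j \le M \, S_j, \qquad s_j \le M (1 - S_j), \qquad S_j \in \{0, 1\}.
```

Here S_j = 1 forces the slack to zero (the constraint is active) while allowing θ_j > 0, whereas S_j = 0 forces θ_j = 0 while allowing positive slack, reproducing the complementarity disjunction with linear constraints only.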

Numerical Experiments
In this section, we develop numerical test cases spanning small, medium, and large scales and use them to assess the performance of the proposed models in terms of solution quality and computational time. Additionally, we conduct a sensitivity analysis to investigate the range of variation associated with the uncertain component of order demand, represented by α, and the parameter Γ in the uncertainty set.
All procedures are coded in Python 3.7 and run on a computer with a 2.70 GHz CPU and 8.0 GB of RAM. Gurobi 9.1 is used as the LP/IP solver.

Generation of Problem Instances
For our numerical experiments, we used data from a real-world scenario: the plastic-cutting operations of a prominent company, hereafter referred to as "Company X" to maintain confidentiality. In this context, a "strip" denotes one of the parent plastic rolls supplied, while "small items" represent the diverse demands in various dimensions. We present small-, medium-, and large-scale numerical test cases based on the sales and inventory orders of Company X. The small-scale instances encompass four types of strips and four types of orders, detailed in Table 1; the medium-scale instances feature six types of strips and six types of orders, detailed in Table 2; and the large-scale instances incorporate nine types of strips and nine types of orders, as illustrated in Table 3.

Comparative Analysis
In this section, we compare the O.P. (the deterministic MIP model) and the R-O.P. (the robust model) in terms of solving time and objective value.

Comparison of Solution Time
Table 4 compares the execution times of the deterministic 1D-CBPP model and the robust 1D-CBPP model (parameterized by Γ = 1 and α = 0.1), solved using Gurobi, across the instance sizes. The robust model consistently requires longer solving times than the deterministic model: 0.0124 versus 0.0174 time units for the small instance, 0.0256 versus 0.037 for the medium instance, and 0.0716 versus 0.1198 for the large instance. This pattern underscores that while the robust model offers advantages such as potentially superior objective values, it incurs a higher computational cost across all instance sizes.
Figure 4 provides a comparative visualization of these execution times. As the instance size grows from small to large, execution time increases for both models, and the robust model consistently takes longer, as expected given the added complexity of accommodating uncertainty. The red double-headed arrows mark the percentage gap between the two models, which widens considerably for larger instances: from a 40.32% increase for the small instance to a 67.32% increase for the large instance. The additional computational overhead of the robust model therefore becomes more pronounced as the instance size grows, illustrating the trade-off between the robustness of the solution and computational efficiency.
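The percentage gaps quoted above follow directly from the Table 4 times; the short sketch below recomputes them (the intermediate 44.53% figure for the medium instance is our computation and is not stated in the text):

```python
def pct_gap(base, other):
    """Relative increase of `other` over `base`, in percent."""
    return 100.0 * (other - base) / base

# Execution times from Table 4: (deterministic, robust), same time units.
times = {"small": (0.0124, 0.0174),
         "medium": (0.0256, 0.037),
         "large": (0.0716, 0.1198)}
gaps = {size: round(pct_gap(det, rob), 2) for size, (det, rob) in times.items()}
# gaps -> {'small': 40.32, 'medium': 44.53, 'large': 67.32}
```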

Comparison of Solution Quality
In Table 5, we define the gap as (obj_Robust − obj_Deterministic) / obj_Deterministic to measure the relative difference in performance between the deterministic and robust cutting models across the instance sizes; the robust model is parameterized by Γ = 1 and α = 0.1. For the small instance, the deterministic model attains an objective value of 26,515,200, while the robust model posts a higher value of 29,166,720. The trend persists for the medium instance (59,522,875 versus 65,549,562.5) and the large instance (323,297,900 versus 357,242,090). The robust model thus exceeds the deterministic model by approximately 10% across all instance sizes; the consistent margins of 10.00%, 10.12%, and 10.50% for the small, medium, and large instances, respectively, highlight the robust model's stable advantage irrespective of the problem's scale. Nonetheless, these objective values should be weighed against factors not covered in Table 5, such as computational time and algorithmic complexity.
Figure 5 visualizes how the objective values increase with rising α values across the three instance sizes. Each line, differentiated by color, tracks the objective value as α escalates, revealing a consistent upward trend for all sizes: larger α values lead to higher objective values, with the large instances showing markedly greater values throughout. The annotations on the graph report the growth rate in objective value when moving from one α value to the next, and these growth rates generally diminish as α increases; for example, the growth rate for the large instance decreases from 9.50% to 8.39% as α grows from 0.1 to 0.5. This diminishing trend suggests that while α positively impacts the objective value, its marginal effect lessens at higher levels, underscoring the model's sensitivity to α and showing that this influence varies with the instance size and the specific α range.
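The gap figures in Table 5 can be reproduced directly from its objective values; a short check:

```python
def relative_gap(obj_det, obj_rob):
    """Gap = (obj_robust - obj_deterministic) / obj_deterministic, in percent."""
    return 100.0 * (obj_rob - obj_det) / obj_det

# Objective values from Table 5: (deterministic, robust), Gamma = 1, alpha = 0.1.
objectives = {"small": (26_515_200, 29_166_720),
              "medium": (59_522_875, 65_549_562.5),
              "large": (323_297_900, 357_242_090)}
gaps = {size: round(relative_gap(d, r), 2) for size, (d, r) in objectives.items()}
# gaps -> {'small': 10.0, 'medium': 10.12, 'large': 10.5}
```

These recomputed values match the 10.00%, 10.12%, and 10.50% margins reported above.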

Parameter Sensitivity Analysis
In this study, we numerically analyze the objective function and computation time across 15 combinations of the parameter α and the instance scale (small, medium, and large). Each combination was evaluated in five separate experiments.
With Γ = 1 fixed, we probed the effect of varying α from 0.1 to 0.5 on the objective values, average computation time, and objective function growth rate for all instance sizes, as delineated in Table 6. From the data presented in Table 7, it is evident that both the objective values and the computation times surge with increasing instance size. Interestingly, within each instance size, the chosen value of α has minimal bearing on the computation time, while the growth rate of the objective function consistently diminishes as α escalates.
Figure 6 showcases the objective values of the small, medium, and large instances across the α values. For all three instance sizes, there is an observable upward trend: higher α values correspond to increased objective values, and this trend is consistent across scales, suggesting that the effect of α on the objective value is proportional across different problem sizes. Each line carries annotations showing the percentage increase in the objective value when moving from one α value to the next, placed at the transition points marked by purple arrows. These annotated growth rates decrease as α increases; for example, the large instance starts with a growth rate of 9.50% as α increases from 0.1 to 0.2 and ends with a growth rate of 8.39% as α increases from 0.4 to 0.5. The relationship between α and the objective value is therefore positive but exhibits diminishing returns, implying that the sensitivity of the objective value to changes in α decreases as α becomes larger. This behavior is an important factor in calibrating the model for different instance sizes and can be particularly useful for decision-makers.

Conclusions
The intricacies of 2D-SPP have long posed challenges for its application in industrial environments, primarily due to unforeseen demands and the necessity for diverse strip types to satisfy varying customer requirements. Addressing these challenges, our paper bridges the academia-industry gap by presenting a robust optimization approach for the uncertain 2D-SPP with variable-sized bins. Central to our methodology is a robust optimization model built on a box uncertainty set, which provides resilience to fluctuating demands, a quintessential requirement in real-world industrial settings where such fluctuations are often exacerbated.
A significant methodological advancement in our work is the transition from the conventional two-dimensional perspective to the one-dimensional contiguous bin-packing problem (1D-CBPP). This seemingly simple shift has profound implications: it enables a relaxation technique that markedly reduces computational time, removes the need for manually designed patterns, and yields a tighter lower bound for the 2D-SPP, which in turn improves solution precision. The integration of the Karush-Kuhn-Tucker (KKT) conditions, complemented by the column generation technique, further refines the model, rendering it more tractable and amenable to exact solution methodologies.
Validating these constructs with real-world data is essential. Our numerical experiments, conducted on small-, medium-, and large-scale instances grounded in authentic datasets from a leading plastic-cutting firm, demonstrate the model's robustness and scalability. The robust model consistently outperforms the deterministic model in objective value, with an approximate 10% improvement across all instance sizes, underscoring its capacity to deliver more efficient material usage and reduced waste even under uncertain demand, and showing how the methodology translates into tangible benefits in industrial settings.
By thoroughly analyzing the model's performance and comparing it with existing approaches, our research provides compelling evidence of its practicality and effectiveness. The sensitivity analysis of the uncertain component of order demand, represented by α, and of the parameter Γ in the uncertainty set sheds light on the model's performance under varying levels of uncertainty, further demonstrating its stability and reliability and providing valuable insights for decision-makers in the industry.
In summary, this research is not merely an academic exercise but an effort to bridge theoretical constructs with tangible industrial applications. Through robust optimization techniques, innovative model transformations, and rigorous empirical validation, we chart a path toward a more nuanced and practical application of 2D-SPP solutions in industry.
This paper makes notable contributions to its field. However, it encounters certain limitations in (1) effectively managing uncertainty, particularly concerning optimal data fitting, and (2) strategizing the input of small rectangular items into strips to minimize overall waste. In light of these challenges, and considering the findings and models developed in this study, several promising avenues for future research emerge. These directions aim to address the identified limitations and further enhance the robustness and practical applicability of the proposed methodologies.

1. Alternative uncertainty sets: While this study focuses on the box uncertainty set, future investigations could examine a wider array of uncertainty sets, including polyhedral and ellipsoidal sets. This expansion is vital to tailoring the model more accurately to diverse datasets. The selection of an uncertainty set can greatly influence model performance; for instance, employing an ellipsoidal uncertainty set may necessitate second-order cone programming (SOCP).

2. Integration with machine learning for forecasting: The variability in demand, a common source of uncertainty, can be addressed by incorporating machine learning algorithms for demand forecasting. Especially when extensive data are available, machine learning can offer precise demand predictions which, when integrated into deterministic models, enhance the method's capacity to handle demand uncertainty. This integration underscores the synergy between the predictive capabilities of machine learning and the robustness of traditional optimization techniques.

3. Diverse uncertainty optimization models: Given the conservative nature of robust optimization, other models may offer more suitable solutions depending on the data's characteristics. Alternatives such as stochastic programming, fuzzy programming, or distributionally robust optimization (DRO) should be explored for their potential to yield better results. The choice of model should correspond with the attributes of the real data to balance robustness and performance, with each model offering unique advantages depending on the nature of the uncertainty.

4. Rotation-allowed model: One possible extension is to incorporate a 90-degree rotation mechanism, allowing the demanded small rectangular items to be packed with their width and height interchanged. On this basis, a better lower bound for the solutions could potentially be derived.

5. Two-phase problem: The first phase, elaborated in this paper, treats the uncertain 2D-SPP with multiple strips as a 1D-contiguous bin-packing problem. Building upon this, the second phase could delve into the intricacies of layout nesting by considering the thickness dimension for cutting, thus expanding the problem into three-dimensional space.
These proposed avenues not only deepen the understanding of the current model but also aim to enhance its application scope and efficiency in real-world scenarios.

Figure 2 .
Figure 2. Illustration of our model.

Figure 3 .
Figure 3. Process map of column generation.

Figure 4 .
Figure 4. Comparison of algorithm execution times with gaps.

Figure 5 .
Figure 5. Comparison of algorithms' objective values with gaps.

Figure 6 .
Figure 6. Objective values for different instance sizes with various α values and growth rates.

Table 1 .
Small-scale example parameter table.

Table 2 .
Parameters for medium-scale example.

Table 3 .
Parameters for large-scale example.

Table 4 .
Comparison of algorithm execution times.

Table 5 .
Comparison of objective values among algorithms.

Table 7 .
Impact of α values on the objective function and computation time when Γ = 1.