1. Introduction
Protecting throughput from variance is the key to achieving lean [1]. If order release is applied, jobs (or orders) are not immediately released onto the shop floor upon arrival at the production system; instead, release is controlled to meet certain performance targets, which creates a so-called backlog or pre-shop pool that effectively buffers the shop floor. Well-known approaches to order release include Constant Work-in-Process (ConWIP; e.g., [1,2,3,4]), Drum-Buffer-Rope (DBR; e.g., [5,6,7]), and Workload Control (e.g., [8,9,10,11]), among others. Meanwhile, a similar logic is also implemented in more recent manufacturing paradigms, such as cloud manufacturing (e.g., [12,13,14,15,16,17]).
The decision concerning which jobs to release is typically subdivided into a backlog sequencing decision, which determines the sequence in which jobs are considered for release, and a selection decision, which decides for each job, in sequence, whether it should be released given certain release criteria, e.g., a limit on the workload released to a station. Most studies on order release have focused on the selection decision, implicitly assuming that the sequencing decision should use some measure of urgency. Only recently has it been shown that incorporating load considerations into the sequencing decision can improve performance in the context of Workload Control (e.g., [18]), ConWIP (e.g., [19]), and DBR (e.g., [20]). However, including load considerations requires feedback from the shop floor and significantly increases the complexity of the backlog sequencing decision. The challenge is whether simple time-based rules can be developed that mimic the behavior of load-based rules without requiring information to be regularly fed back from the shop floor.
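The interplay between the sequencing and selection decisions can be sketched as follows. This is a generic, minimal illustration with assumed function and field names, not the authors' implementation: jobs are first ordered by a sequencing rule, then each is released only if the workload it contributes keeps every station in its routing within the norm.

```python
# Generic pool-based release: sequence the backlog, then select subject to
# a per-station workload norm (names and data layout are illustrative).

def try_release(pool, released_load, norm, sort_key):
    released = []
    for job in sorted(pool, key=sort_key):
        load = job["load"]  # mapping: station -> workload contribution
        # Selection decision: release only if no station would exceed the norm.
        if all(released_load[s] + w <= norm for s, w in load.items()):
            for s, w in load.items():
                released_load[s] += w
            released.append(job)
    for job in released:
        pool.remove(job)
    return released

# Example: with a norm of 4 time units at station "A", the more urgent job
# (due 3, load 3) fits, but releasing the second job would exceed the norm.
pool = [{"id": 1, "due": 5, "load": {"A": 2.0}},
        {"id": 2, "due": 3, "load": {"A": 3.0}}]
released_load = {"A": 0.0}
released = try_release(pool, released_load, norm=4.0, sort_key=lambda j: j["due"])
```

Here the sequencing rule is simply earliest due date; the backlog sequencing rules discussed in this paper differ only in the `sort_key` they supply.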
Taking a closer look at load-based backlog sequencing rules reveals that they gain an advantage by delaying large jobs during periods of high load. In other words, during periods when the incoming workload greatly exceeds the available capacity, load-based rules focus on producing as many small jobs on time as possible, to the detriment of some large jobs. It has long been shown that this improves overall performance [21]. In this study, we mimic this behavior by proposing a new urgency-based rule that creates three classes of jobs: early (non-urgent), urgent, and very urgent. Using a simulation model of a re-entrant flow shop [22], we conjecture that prioritizing urgent over very urgent jobs leads to results similar to those obtained for load-based sequencing, but in a simpler way.
Re-entrant flow shops are of high practical relevance. They can be found, for example, in semiconductor wafer fab environments (see, e.g., [23]) and in flexible manufacturing systems (see, e.g., [24]). Moreover, the advent of lean management and Industry 4.0 has fostered a product-based view instead of a resource-based view [25]. This enables companies to streamline their production systems so that products can be manufactured using a fixed sequence of flexible resources [26]. However, re-entrant flow shops are also very challenging environments for order release methods [27] since jobs repeatedly pass through the same station (or stations) at different stages of processing. Our new rule not only provides a simpler means of improving order release performance in such contexts, but by using dedicated classes of jobs based on urgency it also facilitates the selection of those jobs that should be delayed in practice. The new rule mainly differs from load-based rules in that it does not require feedback information from the shop floor.
The remainder of this paper is structured as follows. The literature is reviewed next, in Section 2, to identify the order release method and the alternative backlog sequencing rules to be considered in our study. Section 3 then outlines the simulation model used to evaluate performance before the results are presented and discussed in Section 4. Finally, Section 5 puts forward a conclusion and outlines the managerial implications and limitations of the study.
3. Methods
3.1. Simulation
A simulation model of a re-entrant flow shop was implemented using ARENA® software. In the model, job inter-arrival times, processing times, and due dates are stochastic variables. The re-entrant flow shop considered here is balanced and consists of three stations, each with two machines preceded by a single input buffer. Job routings are based on the six-step Mini-Fab model of Kempf [37], as depicted in Figure 2. That is, the production of each job is completed following a sequence of six processing steps.
In this study it is assumed that all materials for job processing are available and that all the required information regarding job routings, job processing times, etc., is known at order arrival. This is in line with previous simulation studies on order release control (e.g., [10,29,38,39]). Processing times at machines follow a lognormal distribution with a mean of one time unit. The processing time variability of jobs is an experimental factor: we consider three levels for the coefficient of variation (CV) of the processing times, namely 0.1, 0.2, and 0.4. We further assume that set-up times are sequence independent and part of the operation processing times. To maintain focus on the research question and avoid confounding factors, batch processing and unreliable machines were not considered. The inter-arrival time of jobs to the shop follows an exponential distribution, with its mean set to result in a steady-state utilization rate of 90%.
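A lognormal distribution can be parameterized directly from a target mean and CV, which is how scenarios like these are typically configured. A minimal sketch (our own helper, not the ARENA model) using the standard identities E[X] = exp(μ + σ²/2) and CV² = exp(σ²) − 1:

```python
import math
import random

def lognormal_params(mean, cv):
    # Invert E[X] = exp(mu + sigma^2/2) and CV^2 = exp(sigma^2) - 1
    # to recover the (mu, sigma) of the underlying normal distribution.
    sigma2 = math.log(1.0 + cv ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# Processing times with mean 1 time unit and CV = 0.4 (the highest level tested):
mu, sigma = lognormal_params(mean=1.0, cv=0.4)
rng = random.Random(42)
sample_mean = sum(rng.lognormvariate(mu, sigma) for _ in range(200_000)) / 200_000
```

With these parameters the sample mean converges to one time unit, matching the balanced-shop assumption regardless of the CV level chosen.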
Due dates are assumed to be set exogenously, i.e., by the customer. A random allowance, set between 15 and 30 time units, was added to the job entry time. Values were chosen so that releasing jobs onto the shop floor immediately upon arrival yields a percentage of tardy jobs close to 10% for an intermediate level of CV of the processing time (i.e., 0.2).
Table 1 summarizes the simulated shop and job characteristics.
3.2. Order Release and Backlog Sequencing
Six settings of the load norms are considered, namely 4, 5, 6, 7, 8, and 9 time units. As a baseline, immediate release (IMR) of jobs to the shop floor, i.e., without controlled order release, is also considered. Since EDD and PRD backlog sequencing are equivalent in shops with a single routing for all jobs, only PRD is considered. We consequently consider three backlog sequencing rules from the literature: FCFS, PRD, and MODCS. Four versions of our new rule are considered, where we vary the bound a (see Figure 1) that distinguishes early (non-urgent) from urgent jobs at four levels: PRD minus an allowance of 1, 2, 5, and 10 time units, respectively. These rules are referred to as NEW (−1), NEW (−2), NEW (−5), and NEW (−10), respectively. The bound b (see Figure 1) that distinguishes between urgent and very urgent jobs is set to the PRD plus an allowance of 10 time units for all four rules, based on preliminary simulation experiments; i.e., very urgent jobs are those for which the PRD is more than 10 time units in the past. Finally, the allowance for the operation throughput time at each station is set based on the cumulative moving average realized during the simulation experiments.
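One plausible reading of these class bounds can be sketched as follows. The class names follow the text; the exact position of early jobs in the release order, and ordering within a class by PRD, are our assumptions for illustration, not the authors' specification.

```python
def urgency_class(prd, now, a, b=10.0):
    """Classify a job by its planned release date (PRD) at time `now`.
    `a` is the lower allowance (1, 2, 5 or 10 time units in the experiments);
    `b` is the fixed upper allowance of 10 time units."""
    if now < prd - a:
        return "early"        # not yet urgent
    if now > prd + b:
        return "very urgent"  # PRD more than b time units in the past
    return "urgent"

# Urgent jobs are considered for release before very urgent ones
# (the ranking of early jobs here is an assumption).
CLASS_RANK = {"urgent": 0, "early": 1, "very urgent": 2}

def backlog_key(job, now, a):
    return (CLASS_RANK[urgency_class(job["prd"], now, a)], job["prd"])
```

A widening allowance `a` pulls still-early jobs into the urgent class, which, as discussed in Section 4.1, can inflate the very urgent class and degrade performance.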
3.3. Dispatching
Jobs waiting in the input buffer of a station are prioritized according to operation due dates. The operation due date for the last operation in the routing of a job is equal to the due date of the job, as given by Equation (1), while the operation due date of each preceding operation is determined by successively subtracting the estimated station waiting time and processing time from the operation due date of the next operation. In this study, estimated station waiting times are given by the cumulative moving average, i.e., the average of all station waiting times realized until the current simulation time.
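The backward computation of operation due dates described above can be sketched as follows (a minimal illustration with assumed list-based inputs):

```python
def operation_due_dates(job_due, proc_times, est_waits):
    """Backward-schedule operation due dates. The last operation's due date
    equals the job due date; each preceding operation's due date is obtained
    by subtracting the next operation's estimated station waiting time and
    processing time."""
    n = len(proc_times)
    odd = [0.0] * n
    odd[-1] = job_due
    for i in range(n - 2, -1, -1):
        odd[i] = odd[i + 1] - est_waits[i + 1] - proc_times[i + 1]
    return odd

# Six operations (the Mini-Fab routing length), unit processing times, and an
# estimated wait of 2 time units per station (illustrative values):
odds = operation_due_dates(30.0, [1.0] * 6, [2.0] * 6)
```

In the simulation the estimated waits would be the cumulative moving averages of realized station waiting times rather than fixed constants.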
3.4. Experimental Design and Performance Measures
Our study considers three experimental factors tested at different levels: (i) three levels of processing time variability (CV = 0.1, 0.2, and 0.4); (ii) six levels of the workload norm (4, 5, 6, 7, 8, and 9 time units); and (iii) seven backlog sequencing rules. A full factorial design was used with 126 (3 × 6 × 7) experimental scenarios, where each scenario was run for 13,000 time units following a warm-up period of 3000 time units. Each experimental scenario was replicated 100 times. These simulation conditions allow for obtaining stable results with small (i.e., precise) confidence intervals for the performance measures.
The key performance measures considered are the total throughput time, the percentage of tardy jobs, and the mean tardiness. The total throughput time refers to the time that elapses between job entry to the system and job completion. The percentage of tardy jobs refers to the percentage of jobs completed after their due date. Tardiness is defined to be zero if the job is on time or early and it is equal to the completion date minus the due date if the job is late. In addition, we also measure the average shop floor throughput time, which is used as an instrumental performance variable. While the total throughput time includes the time that jobs wait before being released, i.e., the backlog waiting time, the shop floor throughput time refers to the time that elapses between job release to the shop floor and job completion.
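The measures defined above can be computed from per-job records as follows (field names are assumed for illustration):

```python
def performance_measures(jobs):
    """Compute total throughput time, shop floor throughput time,
    percentage tardy, and mean tardiness from per-job time stamps."""
    n = len(jobs)
    total_tt = sum(j["completion"] - j["entry"] for j in jobs) / n
    # Shop floor throughput time excludes the backlog (pool) waiting time.
    shop_tt = sum(j["completion"] - j["release"] for j in jobs) / n
    tardiness = [max(0.0, j["completion"] - j["due"]) for j in jobs]
    pct_tardy = 100.0 * sum(t > 0 for t in tardiness) / n
    mean_tardiness = sum(tardiness) / n
    return total_tt, shop_tt, pct_tardy, mean_tardiness

jobs = [
    {"entry": 0.0, "release": 2.0, "completion": 10.0, "due": 8.0},  # 2 units tardy
    {"entry": 1.0, "release": 1.0, "completion": 6.0, "due": 9.0},   # on time
]
total_tt, shop_tt, pct_tardy, mean_tardiness = performance_measures(jobs)
```

The gap between `total_tt` and `shop_tt` is exactly the mean backlog waiting time, which is why the shop floor throughput time serves as an instrumental variable.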
4. Results and Discussion
An initial statistical analysis of the simulation results was conducted using an ANOVA (Analysis of Variance). ANOVA results are presented in Table 2. All main effects and most of the two-way interactions were found to be statistically significant. There were no significant three-way interactions.
As somewhat expected from the choice and the design of our backlog sequencing rules, the backlog sequencing rules have the strongest impact on the percentage tardy and mean tardiness. This can also be observed from the results for the Scheffé multiple comparison procedure, which was applied to obtain a first indication of the direction and size of the performance differences.
Table 3 gives the 95% confidence intervals. When an interval includes zero, the performance difference is not considered statistically significant. We observe significant performance differences for all pairs on at least one performance measure, except between FCFS and PRD, for which performance is statistically equivalent. This is explored further in Section 4.1 and Section 4.2, where detailed performance results are presented and their robustness is evaluated.
4.1. Performance Assessment
Results for a coefficient of variation (CV) of the processing times of 0.1, together with the 95% confidence intervals, are given in Table 4. Results for the other levels of the CV are presented as part of our robustness analysis in Section 4.2.
For IMR, the percentage of tardy jobs is about 8%. When jobs are not retained in a pool before release, the shop and the total throughput times are identical (13.35 time units). However, when controlled job release is applied and workload norms are tightened, the workload on the shop floor is restricted, and the shop throughput time is reduced. For example, shop throughput time is reduced to 9.30 time units when the load norm is restricted to 4 time units under FCFS, i.e., a reduction of about 30%. This has a positive impact not only on total throughput times, but also on the percentage of tardy jobs if the workload norm is set appropriately. However, when norms are set too tightly, there may be an increase in the percentage of tardy jobs because the time in the pool offsets the reduction in shop floor throughput times, and there is an increase in sequencing deviations. That is, when norms are set too tightly some jobs may be delayed for long periods in the pre-shop pool (or backlog) before being released to the shop floor, which increases the mean tardiness. Results also confirm previous literature in the sense that PRD outperforms FCFS at tighter norms, and MODCS outperforms PRD. Since the superior performance of PRD over FCFS only occurs at tighter norms, it was found not to be statistically significant in our ANOVA.
Most importantly, our new backlog sequencing rule has the potential to outperform existing backlog sequencing rules if the lower bound that distinguishes between early and urgent jobs is set appropriately. The results further highlight that there is no best rule for all performance measures. NEW (−5) allows the lowest percentage of tardy jobs to be obtained, while NEW (−1) approaches the tardiness values of MODCS and PRD. Meanwhile, if the lower bound is set too loosely (i.e., PRD minus 10 time units), the set of urgent jobs is enlarged to include some jobs that are still early. As a result, the number of very urgent jobs is likely to increase, resulting in worse performance. This can be observed from Table 5, which provides more detailed information on the tardiness of jobs for a workload norm level of 4 time units.
The results in Table 5 highlight that: (i) both MODCS and our new rule improve overall performance by delaying some jobs; and (ii) it is important for our new rule to capture, as part of the urgent class of jobs that are released first, only those jobs that are at risk of becoming tardy but still have a chance of being delivered on time.
4.2. Robustness Analysis
The relative performance of the different backlog sequencing rules is also not affected by the CV of the processing times. This can be observed from Table 6 and Table 7, which give the results for a CV of 0.2 and 0.4, respectively. The main effect of the CV is on the best-performing norm level. The best-performing norm level in terms of mean tardiness remains at five time units, but for the percentage of tardy jobs, the best-performing norm level increases with the coefficient of variation, since we also observe less of an impact on the total throughput time and percentage of tardy jobs as the CV increases. Higher variability in processing times increases the load-balancing opportunities for the release method, which in turn leaves 'less room' for the sequencing rules.
5. Conclusions
Load-based backlog sequencing rules were recently highlighted as an important means of improving order release performance. However, they rely on feedback information from the shop floor, and they delay jobs with long processing times during periods of high load. In answer to our research question ("Can a new backlog sequencing rule be designed that matches the high-performing MODCS rule whilst only considering job information and enabling a more controlled decision to be taken on which jobs to delay?"), we have shown that similar performance can be achieved in our simulations by simply subdividing orders in the backlog into early, urgent, and very urgent orders, and then releasing urgent before very urgent orders.
Our new, purely time-oriented rule not only provides a simpler means of improving order release performance through dedicated classes, but it also facilitates the selection of jobs that are desirable to delay. This allows managers in practice to delay specific jobs for which customer due dates can be adjusted, putting the control of which jobs to delay in the hands of managers. In fact, it may offer an alternative explanation for the Workload Control paradox, which recognizes that order release methods often perform better in practice than expected given their simulation results [40]. Managers in practice are likely to show exactly this behavior: agreeing on new due date allowances for jobs that are otherwise 'hopelessly' delayed.
A main limitation of our study is that we have only focused on one order release method. Although this is justified by Workload Control arguably being the best order release method for high-variety contexts, future research could assess the impact of our new rule on the performance of alternative release methods, such as ConWIP and DBR. Another limitation is that the study only considered the specific case of a re-entrant flow shop. Future research could consider other shop configurations, e.g., flow shops and job shops. Meanwhile, a main advantage of our new method is that it uses dedicated job classes to determine which jobs should be delayed. This allows for delaying jobs with flexible customer due dates as part of the due date negotiation process.