A Dynamic Adjusted Aggregate Load Method to Support Workload Control Policies

Abstract: Workload control mechanisms are widely studied in the literature for the control of job-shop systems. The control of these systems involves order acceptance, order release and priority dispatching. At the release level, the workload norm controls the entry of jobs into the shop, so how the aggregate workload is computed is relevant. Few works have studied new computation methods for the aggregate workload; most use the adjusted aggregate workload proposed in the literature. This paper proposes a dynamically adjusted aggregate workload to improve the performance of the workload control mechanism in job-shop systems. The adjusted aggregate workload is updated whenever a part exits a workstation; this means that the workload used to release the orders reflects the state of the job shop in real time. Simulation is used to evaluate and compare the proposed model with the classical models proposed in the literature. The simulation experiments demonstrate the performance improvement and show that the proposed model is robust under different manufacturing system conditions.


Introduction
Workload control (WLC) approaches are widely used in small and medium enterprises with make-to-order (MTO) production systems. Some examples in the MTO context are companies that provide subcontracting for several customers, such as the aerospace, commercial, textile and food industries. In the production of aluminum rails, enterprises operate with lots of rails under the MTO approach. Other studies have investigated the potential application in semiconductor industries [1][2][3][4].
WLC is an order release mechanism for controlling work in process (WIP) and flow time using a pre-shop pool where jobs wait until the conditions for entry are verified. A new order is allowed to enter only if the workload limits of the workstations (norm limits) are respected. This maintains an optimal level of WIP, reducing the shop-floor flow time and the queue lengths in front of the work centers. The main benefit of the WLC approach is to reduce the impact of variations in incoming orders on shop-floor performance [5].
Breithaupt et al. [6] considered three main decisions related to workload control (the order acceptance/rejection decision is not considered in this paper).
The first decision is the release of the orders that wait in the pre-shop pool of the manufacturing system [7]: the approaches proposed in the literature are periodic and continuous. The continuous release method is more complex to introduce, but it can lead to better results [8]. The second decision concerns the priority dispatching rule for the orders in the pre-shop pool. Several rules can be used, such as earliest due date, shortest processing time, etc. [9]. The order with the highest priority can then be released if the workload of the workstations (or of the bottleneck) is below a defined limit. In the literature, the norm of all workstations or the norm of only the bottleneck workstation [10] has been considered. The third decision concerns how the aggregate workload is computed. The most widely used approach is the adjusted aggregate load method [11], which considers the order in which a job visits the workstations: the processing time contribution to the workload is reduced in proportion to the position of the workstation in the routing of the job.
The research proposed in this paper concerns the study of a job shop under a continuous workload control method. The main focus of the research concerns the computation of the aggregate workload of the manufacturing system used to define the norm level for the release of jobs.
The aggregate workload computation proposed is a dynamic computation that is updated after each operation performed by a workstation. The order release from the pre-shop pool is therefore based on continuously updated workload information. The development of the digital manufacturing concept enables the potential real industrial application of the proposed method. The Internet of Things (IoT) enables the integration of manufacturing resources into the information network [12], making real-time information about the resources easy to obtain. Therefore, the continuous update of the adjusted aggregate workload can be introduced in real industrial applications with digital tools.
This research uses simulation to assess the performance of the proposed model compared with the classical adjusted workload computation proposed in the literature. Moreover, robustness is evaluated considering different dispatching rules for the pre-shop queue, workload norm evaluations and job release rules. The simulation model developed allows the study of a wider range of performance measures than the works proposed in the literature.
The remainder of this paper is organized as follows. Section 2 provides an overview of the literature focused on workload computation in continuous workload control models. Section 3 describes the reference context of the study proposed and the proposed workload computation method. Section 4 presents the simulation models and experimental setup, while Section 5 presents analyses and discusses the results of the simulation study. Finally, in Section 6, concluding remarks are made and directions for future research work are presented.

Literature Review
This section does not present an extensive review of WLC (for a complete review of the literature, see Stevenson et al. [13]; Thurer et al. [14]; Missbauer and Uzsoy [15]), but the most important and recent studies of the aggregate workload computation are discussed.
Thurer et al. [16] tested two methods to compute the aggregate workload: the classical aggregate load approach [17,18] and the corrected aggregate load approach [11,19]. The classical aggregate workload includes indirect load and direct load without distinguishing between the two. The corrected aggregate load considers the routing of the job: the load is corrected (using a reduction factor) according to the position of the work center in the routing. They studied the effect of the shop-floor characteristics on the workload norm, considering the two aggregate workload methods. The corrected aggregate load was the better approach in all experiments conducted, as also shown in Thurer et al. [20]. The main drawback of the corrected workload is that the computation is static and therefore does not represent the real-time state of the manufacturing system.
Renna [21] proposed a corrected workload computation derived from the approach proposed by Land and Gaalman [19] and Oosterman et al. [11]. The proposed approach includes a correction of the workload that depends on the average utilization of the work centers. This approach works better for specific performance measures such as the percentage of tardy parts and the average time in the manufacturing system.
The calculation of the workload can be complex for small shops with limited resources [22]. Hence, Land [23] proposed an alternative called COntrol of BAlance by CArd-BAsed Navigation (COBACABANA). This approach uses cards to limit the number of jobs on the shop floor.
Thurer et al. [24] revised the COBACABANA approach to better adapt it to a workload control method. Their simulations highlighted improvements in throughput time, percentage tardiness and mean tardiness. The main limit is the restricted set of characteristics analyzed.
Thurer et al. [25] proposed a model based on the CONWIP (Constant Work In Process) concept to address the workload problem. Their model considers a CONWIP card that represents a measure of workload rather than a job. The results have shown that the workload computations proposed in the literature are not directly adequate for their model. The main limit is the restricted set of characteristics of the manufacturing system analyzed.
Fredendall et al. [26] studied the effect of bottleneck shiftiness on the performance of a manufacturing system under WLC control. They highlighted how a higher bottleneck utilization improves performance.
Martins et al. [27] discussed a review of the literature on autonomous production control methods. They argued that the methods suggested in the literature are not exploiting their full potential for decision-making in real-time.
The main limit of the literature is the use of the adjusted workload computation in all the studies proposed; this computation is static and does not take into account the real-time conditions of the shop floor. The orders are therefore released considering dated information rather than the current workload of the work centers. Moreover, the studies in the literature consider manufacturing systems under restricted conditions, such as directional routing of the jobs, a single bottleneck and a limited number of performance measures.
In response, this research proposes a dynamic model to compute the aggregate workload of the work centers, allowing the release of the orders using updated information. The proposed model updates the workload information every time a job exits a workstation, considering non-directional routing of the jobs and a wider range of performance measures. The first research question (RQ) is then: RQ1: Can the proposed dynamic adjusted workload computation improve some performance measures of the manufacturing system?
The robustness of the proposed approach is important; the second research question therefore asks: RQ2: Is the performance improvement affected by the dispatching rules, the norm approach and the number of bottlenecks?
A simulation model is developed to answer the first research question, evaluating the potential improvement over several performance measures. The second research question concerns the robustness of the proposed model with respect to other decision issues such as the release mechanism, the dispatching rules and the norm approach proposed in the literature.

Reference Context
The job-shop model has been studied in many previous works [11,21,28] and in many other works on workload control, which allows the performance measures investigated to be compared and evaluated. As in the works proposed in the literature, all jobs are accepted and there are no restrictions on the tools and raw materials needed. The model studied is characterized by processing times, interarrival times and due dates as random variables. Six workstations with a single resource each constitute the job shop; each job performs a set of operations drawn from a discrete uniform distribution between one and six operations. The sequence of stations to visit is completely random, without any preferred order (most of the works proposed in the literature consider a preferred order). The processing times follow a 2-Erlang distribution and the interarrival times follow an exponential distribution. The due date is assigned by adding a random time to the job entry time.
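As an illustration of the job characteristics above, the sketch below generates a job under the stated distributions. The Erlang rate and the due-date allowance range are illustrative assumptions, since the calibrated parameters are those reported in Table 2.

```python
import random

STATIONS = list(range(6))  # six workstations, each with a single resource

def generate_job(entry_time, rng=random):
    """Sketch of job generation: routing length uniform on 1..6, fully
    random (non-directional) visiting sequence, 2-Erlang processing
    times, and a due date equal to the entry time plus a random
    allowance. Numeric parameters are assumptions, not the paper's."""
    n_ops = rng.randint(1, 6)             # discrete uniform, 1 to 6 operations
    routing = rng.sample(STATIONS, n_ops) # random sequence, no preferred order
    # a 2-Erlang variate is the sum of two i.i.d. exponential variates
    proc_times = [sum(rng.expovariate(2.0) for _ in range(2))
                  for _ in routing]
    due_date = entry_time + rng.uniform(20.0, 40.0)  # assumed allowance range
    return {"routing": routing, "proc_times": proc_times, "due_date": due_date}
```

A job produced this way carries exactly the attributes the release and dispatching steps below need: its routing, the processing time of each operation and its due date.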

Order Release
A job can be released onto the shop floor if the workload norm, with the workload added by the candidate job, does not exceed an upper limit. In this step, all jobs that can be released under the workload norm limit are determined.
The norm can be evaluated following two strategies. The first checks that the workload added by the job does not exceed the norm for all stations; the second checks the norm only for the bottleneck stations.
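A minimal sketch of this release test, with hypothetical function and parameter names; `check_stations` selects the bottleneck-only strategy, and `None` checks every station:

```python
def can_release(job_load, current_load, norms, check_stations=None):
    """Return True if adding job_load keeps the workload within the norm.

    job_load, current_load and norms map station -> workload. With
    check_stations=None all stations are tested (first strategy);
    passing the bottleneck station(s) gives the second strategy.
    """
    stations = check_stations if check_stations is not None else norms.keys()
    return all(current_load.get(m, 0.0) + job_load.get(m, 0.0) <= norms[m]
               for m in stations)
```

Iterating this test over the ranked pre-shop pool yields the set of releasable jobs described above.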

Dispatching Rules
When the jobs that can be released onto the shop floor have been determined, the first job to release needs to be selected. Two widely used dispatching rules are considered: earliest due date (EDD) and shortest processing time (SPT).
Moreover, a dispatching rule to balance the load among the stations was proposed in Renna (2015). The ranking Rank_i(t) of the jobs is evaluated considering the average workload of the work centers (see Equation (1)); the priority (lowest value first) goes to the job that most improves the workload balance among the work centers (see Equation (2)).
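The three rules can be sketched as one ranking function. EDD and SPT are standard; since Equations (1) and (2) are not reproduced here, the balance rule below is an assumed stand-in that scores each job by the workload imbalance (deviation from the mean station load) its release would create, lowest first:

```python
def rank_jobs(pool, rule, current_load=None):
    """Order the pre-shop pool under EDD, SPT or the balance rule.

    The BALANCE branch is a hypothetical reading of the balancing
    rule, not the paper's Equations (1)-(2): it prefers the job whose
    release leaves the station workloads closest to their mean.
    """
    if rule == "EDD":
        return sorted(pool, key=lambda j: j["due_date"])
    if rule == "SPT":
        return sorted(pool, key=lambda j: sum(j["proc_times"]))
    if rule == "BALANCE":
        def imbalance(job):
            load = dict(current_load)
            for m, p in zip(job["routing"], job["proc_times"]):
                load[m] = load.get(m, 0.0) + p
            mean = sum(load.values()) / len(load)
            return sum(abs(v - mean) for v in load.values())
        return sorted(pool, key=imbalance)
    raise ValueError(rule)
```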

Workload Computations
The workload computation following the adjusted workload was proposed by Oosterman et al. [11]: when a job enters the shop, the workload of each machine m is updated following Equation (3), where the weight W_mi is the position of work center m in the routing of the job. When a job leaves a workstation, this method updates the workload only of that workstation, while the workload of the other stations is not updated; therefore, the workload used for the order release does not reflect the real-time state of the job shop.
The proposed model updates the workload of all workstations when a job leaves a workstation. When a job leaves a workstation, the workload of the first workstation of the routing, m1, is reduced according to Equation (4). Then, the weights of the routing for each remaining workstation are reduced by one, and the workload of the workstations is recalculated with the new weights using Equation (3). The order release into the job shop is thus based on real-time information about the workload of the workstations.
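The two computations can be sketched as follows, under the assumption (consistent with the weights W_mi above) that Equation (3) divides each processing time by the station's position in the routing; function names are hypothetical:

```python
def corrected_contribution(job):
    """Adjusted aggregate load in the spirit of Oosterman et al.: the
    processing time of the station in position i of the routing is
    divided by i (positions counted from 1)."""
    return {m: p / (i + 1)
            for i, (m, p) in enumerate(zip(job["routing"], job["proc_times"]))}

def release(job, workload):
    """Add the corrected contributions of a newly released job."""
    for m, w in corrected_contribution(job).items():
        workload[m] = workload.get(m, 0.0) + w

def dynamic_update(job, workload):
    """Proposed dynamic adjustment when the job completes its current
    operation: remove the finished station's load, shift every
    remaining weight down by one, and recompute the contributions."""
    # remove the current corrected contributions of every station still
    # in the routing, including the one just completed
    for i, (m, p) in enumerate(zip(job["routing"], job["proc_times"])):
        workload[m] -= p / (i + 1)
    # drop the completed operation; remaining weights decrease by one
    job["routing"] = job["routing"][1:]
    job["proc_times"] = job["proc_times"][1:]
    # re-add the remaining contributions with the decremented weights
    for i, (m, p) in enumerate(zip(job["routing"], job["proc_times"])):
        workload[m] = workload.get(m, 0.0) + p / (i + 1)
```

Under this reading, a downstream station's indirect load grows as the job approaches it, so the release decision always sees workloads consistent with the current shop state.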

Simulation Model
The simulation models use the same parameters proposed in several works in the literature [10,24]. For each model, the workload norm that leads to the best performance measures is evaluated. The benchmark model is the model with the classical adjusted workload computation. The models tested are shown in Table 1 with the following characteristics: - The order release, which includes the evaluation of the norm on the bottleneck or on all workstations; - The three pre-shop ranking methods: EDD, SPT and workload balance; - The classical corrected and the proposed (dynamic) workload computation. Table 2 reports the characteristics, which are the same as proposed in Renna et al. [21]. The models reported in Table 1 are replicated for the one-, two- and three-bottleneck cases. The exponential interarrival parameter leads to an average utilization of about 80%, and the due date is assigned taking into account the total processing time of the jobs. There are 36 experimental classes, considering 12 models for three bottleneck cases. Each class is tested for different values of the workload norm to establish the value that leads to the best results. The best value of the workload norm is 14, and it is the same for all models tested. The results discussed below are the best obtained, with 14 as the workload norm.
For each experiment class, the number of replications required to ensure a 5% confidence-interval half-width at a 95% confidence level for each performance measure was determined. Each experiment class required over 2000 replications and about 8 h of computation time (4 GHz Intel Core i7, 8 GB RAM).
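A planning sketch of the replication count, using the normal-approximation formula for a relative confidence-interval half-width; this is an assumed stand-in for illustration, not the paper's exact sequential procedure:

```python
import math
import statistics

def replications_needed(pilot, rel_halfwidth=0.05, z=1.96):
    """Estimate the replications needed so that the confidence-interval
    half-width is within rel_halfwidth of the mean at ~95% confidence
    (z = 1.96, normal approximation), from a pilot set of replications."""
    mean = statistics.fmean(pilot)
    sd = statistics.stdev(pilot)
    if sd == 0.0:
        return len(pilot)
    n = (z * sd / (rel_halfwidth * mean)) ** 2
    return max(len(pilot), math.ceil(n))
```

In practice this estimate is recomputed per performance measure, and the largest value across measures drives the replication count of the class.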
The performance measures evaluated are the following. A set of measures concerns the due dates of the jobs: percentage of tardy jobs, standard deviation of lateness, average lateness (time units) and total time of lateness. The total time of lateness is computed by multiplying the total number of products by the percentage of tardy jobs and by the average lateness of a single product. The other performance measures are as follows:


- Total throughput time (time units): the total time from the entry to the exit of an order from the manufacturing system.
- Standard deviation of the queues, measured by the coefficient of variation, to evaluate the distribution of the direct workload among the machines.
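The two derived measures described above can be computed directly; a minimal sketch with hypothetical names:

```python
import statistics

def total_lateness_time(n_products, pct_tardy, avg_lateness):
    """Total time of lateness: total number of products times the
    fraction of tardy jobs times the average lateness per tardy product."""
    return n_products * pct_tardy * avg_lateness

def direct_load_cv(queue_lengths):
    """Coefficient of variation of the queue lengths across machines;
    lower values indicate a more even distribution of the direct
    workload."""
    return statistics.pstdev(queue_lengths) / statistics.fmean(queue_lengths)
```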

Numerical Results
The simulation results are reported as percentage differences with respect to the benchmark. The figures report the results according to the cases reported in Table 3.

Figure 1 reports the average lateness of the models tested. Better results were obtained for cases 3, 8 and 9 with three bottlenecks; these cases work with the adjusted workload. More generally, all cases with the adjusted workload perform better on this measure. The number of bottlenecks has a different effect on the methods studied: the results highlight that the job shop with three bottlenecks is characterized by a higher fluctuation of this performance. The proposed method does not improve the average lateness compared to the classical approach: the benchmark works better on this performance, while all the other methods were worse (see Figure 2). Moreover, the fluctuation of this performance is very high across the methods studied. The number of bottlenecks is relevant for the cases with the dynamic workload and pre-shop queue management by FIFO (First In First Out) and SPT (Shortest Processing Time). The proposed method improves this performance compared to the classical method.

Figure 3 shows the average time of the jobs in the entire manufacturing system. The difference among the methods is low; the range of the percentage difference is about 5% (from −2% to 3%). When the number of bottlenecks increases, the variation of this performance increases. The computation of the workload does not significantly affect this performance.

Figure 5 reports the shop time of the jobs; for this performance as well, the proposed approach leads to better results (cases 4, 5, 6, 7, 10 and 11), with a reduction of about 10%. In this case, the number of bottlenecks changes the benefit obtained; in particular, when the number of bottlenecks increases, the benefit is higher.
Then, the deviation of the average queue lengths of the machines is studied (see Figure 6). The proposed method drastically reduces the deviation of the jobs in queue among the machines: the reduction ranges from 20% with three bottlenecks to 30% with one bottleneck in the manufacturing system. The bottleneck shiftiness (see Figure 7) evaluates the changes of the bottleneck during the manufacturing operations. The proposed approach increases this value; this means that the time during which the bottleneck is the real bottleneck of the system is lower. This leads to a more uniform workload among the machines thanks to the proposed approach. Figure 8 shows the total time of lateness accumulated by each model. Cases 6, 7 and 11 lead to the best results, obtaining an improvement of about 30%; these cases all use the proposed computation approach for the workload. The number of bottlenecks changes the improvement obtained for these cases, but a defined trend due to the number of bottlenecks cannot be observed.
From the above results, the following conclusion can be drawn: the number of bottlenecks affects the results, but a direct relation between the increase in the number of bottlenecks and the variations of the performance measures cannot be defined.

Conclusion and Future Development Paths
The workload control model is a control approach for production planning in job-shop systems. WLC includes how to release the orders, the dispatching rules for the pre-shop queue and how to compute the aggregate workload of the machines. Most of the research has used the adjusted aggregate workload computation by Oosterman et al. [11], and other studies on this issue are limited. This study evaluates a modified computation of the workload, considering a dynamic adjustment of the workload that can exploit the development of IoT technologies.
In response, our first research question asked: Can the proposed dynamic adjusted workload computation improve some performance measures of the manufacturing system?
The simulation results demonstrated how the proposed approach can significantly improve performance as it relates to the ability to deliver jobs on time, achieving a more uniform distribution of the direct workload and a reduction of the shop time of the jobs. The uniform distribution of the direct workload can have a relevant impact on the maintenance policy of the manufacturing system.
The second research question asked: Is the performance improvement affected by the dispatching rules, norm approach and the number of bottlenecks?
The proposed model has demonstrated robustness to changes in the dispatching rules, the norm evaluation and the number of bottlenecks. Moreover, the best value for the norm limit is the same as with the classical approach.

Managerial Implication
The proposed model can be supported by recent developments in IoT that enable the data management of the machines to update the workload in real time, as proposed in this research. Combined with simulation, the proposed model supports the decision-maker by highlighting which performance measures are significantly improved. The proposed model reduces the percentage of tardy jobs, while it increases the standard deviation of lateness and the average lateness of a single tardy job.
The above results are confirmed for different numbers of bottlenecks, norm approaches and pre-shop queue management rules; this allows the proposed approach to be adapted to different job-shop systems with adequate robustness. Moreover, the computational complexity is limited to an update of the workload for each machine.

Limitations and Future Research
A limitation of the study is that no disturbances, such as machine failures, degradation, etc., are considered for the job shop. Therefore, future research will study the robustness of the proposed model considering machine failures, processing times related to machine degradation, and maintenance policies.