No-Idle Flowshop Scheduling for Energy-Efficient Production: An Improved Optimization Framework

Production environments in modern industries, such as integrated circuit manufacturing, fiberglass processing, steelmaking, and ceramic frit production, are characterized by zero idle time between inbound and outbound jobs on every machine; this technical requirement improves energy efficiency and hence has implications for cleaner production in other production situations. An exhaustive review of the literature is first conducted to shed light on the development of no-idle flowshops. Considering the intractable nature of the problem, this research also develops an extended solution method for optimizing the Bi-objective No-Idle Permutation Flowshop Scheduling Problem (BNIPFSP). Extensive numerical tests and statistical analysis are conducted to evaluate the developed method against the best-performing algorithm previously developed for the BNIPFSP. Overall, the proposed extension delivers better solution quality at the expense of longer computational time. The research concludes with suggestions for the future development of this understudied scheduling extension.


Introduction
Ecological restoration and reduced carbon emission have become major global priorities [1]. Local governments have put forward regulatory measures and policies to enforce energy-saving initiatives. These measures are predominantly formed around emission taxation and trading of emission credits, which help bring the overall emissions below the target baseline [2]. The Australian carbon reduction policy, the so-called safeguard mechanism, and the EU Emission Trading System are prime examples of reducing the negative impacts of business activities from electricity generation and mining to transportation, construction, and manufacturing.
The manufacturing sector is one of the primary energy consumers and the largest polluter, with its share being more than 31 percent of overall energy consumption and 36 percent of carbon dioxide emissions [3]. To address this issue, supply chain sustainability, in particular green process design practices, has mainly focused on reducing energy consumption in the logistics [4], production, and consumption phases, as well as on the use of renewable energies [5]. Providing on-site energy production, like solar panels and biogas fuel cells, reducing facilities' carbon footprint by replacing lighting and energy control systems, applying energy efficiency standards in the construction of new buildings, and installing modern supplements for the use of sustainable resources are the primary green practices reported in the literature [6].
Minimizing the costs associated with machines' energy consumption and the resulting pollutants has been at the center of green manufacturing studies. Considering non-processing and processing energy consumption in production facilities [7], energy efficiency has been explored from the operational management perspective, i.e., how to cluster jobs to minimize non-value-adding operations [8], when to turn machines on/off to reduce idle time, at which speed level to operate, and how to plan peak and off-peak production to save energy [9]. Production scheduling as an operational strategic tool is complex and requires additional measures to account for less tangible operational aspects.
The operations-related performance measures, particularly those pertinent to processing energy consumption, have been the subject of many scheduling studies that account for sustainability in the production management context. Piroozfard et al. [10] introduced a multi-objective flexible job-shop scheduling problem, minimizing the carbon footprint and the total late work criterion. Minimizing the makespan and total carbon emission in production environments with unrelated parallel machines was examined by Zheng and Wang [11]. Safarzadeh and Niaki [12] addressed the total green cost and the makespan, finding Pareto-optimal solutions in uniform parallel machine environments. The trade-off between makespan and energy consumption in two-machine flowshop [13], hybrid flowshop [14], unrelated parallel machine [15], and job-shop scheduling environments [16] is among the other notable contributions at the intersection of energy efficiency and production scheduling. These studies aimed to improve energy efficiency through a soft optimization approach focused on minimizing processing costs and energy consumption. That is, a trade-off enables decision-makers to choose between cost-effectiveness or responsiveness and energy efficiency. Although such a flexible approach suits the current regulatory situation, plausibly stricter regulations in the future will demand optimization approaches that minimize non-processing energy consumption while considering operational strategic measures and energy cost strategies [17]. Li et al. [18] suggested defining a limit on the energy consumption of each machine while minimizing the makespan and the total completion time. Scheduling with no idle time between incoming and outgoing jobs on the machines is an alternative approach to effectively reduce energy wastage in the production sector.
On the other hand, the technical characteristics of modern industries, like steelmaking [19], integrated circuits manufacturing, fiberglass processing, and ceramic frit [20] require a no-idle situation. Given flowshop production as the most common process model in the manufacturing sector [21] and the significance of energy costs in the flowshops, no-idle flowshop scheduling has received recent recognition among production management scholars.
The successful implementation of policy-driven mechanisms for mandating carbon emissions depends on the effective consideration of corporate priorities, like cost-effectiveness and responsiveness, to ensure firms' competitiveness [22]. This situation is highly significant for addressing conflicting operational objectives within a no-idle production scheduling agenda that enforces maximal energy efficiency. To the best of the authors' knowledge, no published journal papers have addressed the bi-objective optimization of no-idle flowshops. This study extends the energy-efficient production scheduling literature with a two-fold contribution. First, an exhaustive review of the no-idle flowshop scheduling literature is conducted to explore the developments and gaps in modern industry scheduling. Second, a Hybrid Iterated Greedy (HIG) algorithm is developed to effectively solve the bi-objective variant of no-idle flowshops while ensuring the robustness of the outcomes. The three-field α | β | γ notation of Graham et al. [23] is used to refer to the Bi-objective No-Idle Permutation Flowshop Scheduling Problem (BNIPFSP) as Fm | prmu, no-idle | α·C_max + β·∑F_j in the remainder of this article. In this notation, Fm denotes the flowshop production environment, in which a set of given jobs is processed by a set of available machines in the same order. In the second field, prmu indicates the permutation setting, in which the sequence of jobs is the same on all machines, and no-idle specifies that there is no idle time between inbound and outbound jobs on any machine. Finally, α·C_max + β·∑F_j denotes the weighted sum of the makespan and total flowtime criteria.
The rest of this manuscript is organized into four sections. A comprehensive review of the literature is provided in Section 2. The methodology, including the extended mathematical formulation and the solution algorithm, is elaborated in Section 3. The numerical analysis comes next, in Section 4, to analyze the effectiveness of the developed solution approach. Finally, concluding remarks and directions for future research on no-idle scheduling close this research work in Section 5.

Literature Review
Considering the recent surge in the number of articles, a comprehensive review of no-idle flowshop scheduling and its solution methods is timely. This section reviews published works indexed in Google Scholar. Searching the keywords "no-idle" and "flowshop" returned a total of 33 articles, of which 25 were deemed relevant; among the relevant items were five conference papers [24][25][26][27][28] and two theses [29,30]. The journal articles are then analyzed considering the number of machines, the studied performance indicators, and the proposed solution approaches, following the review framework suggested by Ribas et al. [31] and Neufeld et al. [32].
Computers and Operations Research and Expert Systems with Applications contributed the most to this extension of scheduling problems with two published works. With five contributions, Tasgetiren is the most prominent author, followed by Rossi with three published works. Notably, half of the contributions in no-idle flowshop scheduling are published in or after 2019, all of which are explored in the production context. A summary of the published works is provided in Appendix A with the detailed review elaborated below.
No-idle scheduling was first introduced by Cepek et al. [33,34] to minimize total completion time in a two-machine flowshop production environment. This seminal scheduling problem has inspired more than 20 research contributions thus far, contributing solution algorithms and/or new mathematical extensions to the No-Idle Permutation Flowshop Scheduling Problem (NIPFSP). Narain and Bagga [35] developed a Branch and Bound method to minimize the average flowtime in a two-machine flowshop environment. Wang et al. [36] incorporated no-wait job-related constraints into no-idle flexible flowshops. Later studies focused on flowshop settings with m machines. Tasgetiren et al. [37] and [38] developed Differential Evolution and Discrete Artificial Bee Colony algorithms, respectively, to minimize total tardiness in the NIPFSP. A Tabu Search algorithm was later adopted by Ren et al. [39] to minimize the maximum completion time (makespan). Tasgetiren et al. [40] proposed a hybrid of Differential Evolution and variable local search, which improved the makespan values obtained in earlier studies.
More recent studies have focused on proposing novel methods and variants of the scheduling procedure. Lu [41] explored the time-dependent learning effect and deteriorating jobs in the NIPFSP, minimizing the makespan criterion. Pagnozzi and Stützle [42] developed an automatic algorithm configuration approach for solving single-objective permutation flowshops. The mixed-no-idle flowshop variant was introduced by Pan and Ruiz [43] to minimize makespan using a basic Iterated Greedy (IG) algorithm. Rossi and Nagano [44,45] explored the mixed-no-idle and sequence-dependent setup time settings and minimized total flowtime using Beam Search algorithms. The same authors developed a constructive heuristic for the mixed-NIPFSP with sequence-dependent setup times [46]. In a similar contribution, Nagano et al. [47] developed a constructive heuristic to solve the basic NIPFSP considering total flowtime. Zhao et al. [48] and Riahi et al. [49] developed Discrete Water Wave Optimization (DWWO) and IG algorithms, respectively, for minimizing total tardiness in the NIPFSP. Benders decomposition was also tested to solve mixed-no-idle flowshops considering the makespan criterion [20]. Most recently, Zhao et al. [50] proposed a new variant of the DWWO algorithm to solve distributed assembly no-idle flowshop scheduling problems considering maximum assembly completion time. Despite its usefulness, no published journal paper addresses the bi-objective variant of the NIPFSP. Motivated by this gap, we propose a new formulation and solution algorithm to contribute to energy-efficient production scheduling using bi-objective no-idle flowshops.

Mathematical Formulation
This study extends the Mixed-Integer Programming (MIP) formulation developed by Ruiz and Stützle [51] to account for two conflicting optimization objectives, i.e., maximum completion time (makespan) and total flowtime. The former is a measure to enhance resource utilization, while the latter minimizes work-in-process inventory. The indices, parameters, and decision variables listed in Table 1 are used to model the Fm | prmu, no-idle | α·C_max + β·∑F_j scheduling problem.

Table 1. Parameters and decision variables.
P_{j,i}: processing time of job j on machine i
X_{j,k}: binary decision variable; = 1 if job j is positioned at index k of the sequence vector, = 0 otherwise
C_{k,i}: integer decision variable; the completion time of the job assigned to position k on machine i
F_j: the total flowtime of job j

We now elaborate on the MIP formulation of the Fm | prmu, no-idle | α·C_max + β·∑F_j problem. The objective function minimizes the weighted sum of the makespan and total flowtime values, which are commensurable. The former part of the objective, C_max, is calculated using the no-idle calculation mechanism presented in the following sub-section, and the latter part, ∑_{j=1}^{n} F_j, is determined through the constraint calculations. The objective function is subject to the constraints below. Binary decision variables are used in the assignment constraints, where index k represents π[j] for the sake of readability: the first constraint restricts each position of the sequence to holding exactly one job, and the second ensures that each job occupies one and only one position in the job sequence. The completion time of the job in position k on machine i must be greater than or equal to its completion time on the previous machine, i − 1, plus its processing time on machine i; of these constraints, the former refers to the first machine, and the latter is defined for the remaining machines.
Similarly, the completion time of a job must correspond to that of the preceding job on the same machine, as modeled in Constraint (6), where the completion time of the job placed at position k of the job sequence vector on machine i follows from that of its immediate predecessor at position k − 1 on the same machine. On this basis, the completion time of the job processed on the last machine is linked to its flowtime, defining the flowtime value used in the objective function. Finally, the variable-type constraints state that the completion time and total flowtime variables cannot take negative values, and that the job-position variables are binary.
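Since the typeset equations did not survive in our copy, the constraint set described above can be sketched in LaTeX as follows. This sketch reflects our reading of the prose (including the big-M linking of F_j to the last-machine completion time); it is not the paper's original typeset model.

```latex
\begin{align}
\min \quad & \alpha\, C_{\max} + \beta \sum_{j=1}^{n} F_j \\
\text{s.t.} \quad
& \sum_{j=1}^{n} X_{j,k} = 1 && \forall k          & \text{(one job per position)} \\
& \sum_{k=1}^{n} X_{j,k} = 1 && \forall j          & \text{(one position per job)} \\
& C_{k,1} \ge \sum_{j=1}^{n} X_{j,k}\, P_{j,1} && \forall k & \text{(first machine)} \\
& C_{k,i} \ge C_{k,i-1} + \sum_{j=1}^{n} X_{j,k}\, P_{j,i} && \forall k,\ i = 2,\dots,m \\
& C_{k,i} \ge C_{k-1,i} + \sum_{j=1}^{n} X_{j,k}\, P_{j,i} && k = 2,\dots,n,\ \forall i \\
& F_j \ge C_{k,m} - M\,(1 - X_{j,k}) && \forall j, k & \text{(big-}M\text{ linking)} \\
& C_{k,i} \ge 0, \quad F_j \ge 0, \quad X_{j,k} \in \{0,1\}
\end{align}
```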

No-Idle Calculation Mechanism
To ensure zero idle time throughout the production process, one must regulate the start of the first job on each machine. For this purpose, each machine's start time, S_i, is defined so that the machine can process its jobs back-to-back without waiting for any of them, where P_{π[j],i} represents the processing time of the job assigned to position j of the sequence vector π on machine i, and S_{i−1} is the start time of the previous machine. Once the start time of every machine is known, the completion time of the first job on machine i equals the sum of the machine's start time and the processing time of the first job in the sequence, P_{π[1],i}. Next, the completion time of the job assigned to position j of the sequence vector π on machine i, C_{π[j],i}, equals the completion time of its immediate predecessor at position j − 1, C_{π[j−1],i}, plus its own processing time, P_{π[j],i}. Finally, the makespan is obtained from the completion time of the last job in sequence vector π on the last machine, C_{π[n],m}. Therefore, C_max = S_m + ∑_{j=1}^{n} P_{π[j],m}. An illustrative example is provided in Figure 1 to clarify the computational steps of calculating the completion time in the no-idle flowshop.
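The no-idle calculation mechanism above can be sketched in code. In this sketch (variable names are ours, not the paper's), each machine's start time is shifted just far enough that every job has already finished on the previous machine when its gap-free slot begins:

```python
def no_idle_schedule(P):
    """Compute machine start times, makespan, and total flowtime for a
    no-idle permutation flowshop.  P[i][j] is the processing time on
    machine i (0-based) of the job in position j of the sequence."""
    m, n = len(P), len(P[0])
    S = [0.0] * m  # S[i]: start time of machine i
    for i in range(1, m):
        # Machine i must start late enough that the job in each position j
        # is already done on machine i-1 when its back-to-back slot opens.
        shift = 0.0
        done_prev = 0.0   # sum of the first j processing times on machine i-1
        done_here = 0.0   # sum of the first j-1 processing times on machine i
        for j in range(n):
            done_prev += P[i - 1][j]
            shift = max(shift, done_prev - done_here)
            done_here += P[i][j]
        S[i] = S[i - 1] + shift
    # Completion times on the last machine give makespan and total flowtime.
    C_last, t = [], S[m - 1]
    for j in range(n):
        t += P[m - 1][j]
        C_last.append(t)
    makespan = C_last[-1]          # equals S_m plus the last machine's total load
    total_flowtime = sum(C_last)   # assumes all jobs are released at time 0
    return S, makespan, total_flowtime
```

For two machines and the position-ordered times P = [[3, 2], [2, 4]], machine 2 starts at time 3 and the schedule gives a makespan of 9 and a total flowtime of 14, with no idle time on either machine.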

Solution Algorithm
The IG algorithm was introduced by Ruiz and Stützle [52] to solve permutation flowshops. The computational procedure of IG alternates between greedily destroying part of an incumbent solution and greedily reconstructing it. The successful track record of IG algorithms in solving flowshop problems inspired us to extend the method for solving the Fm | prmu, no-idle | α·C_max + β·∑F_j problem. The pseudocode of the HIG algorithm is provided in Figure 2, followed by details of the major computational elements. It is worth mentioning that the proposed modifications are adjustable and can be effectively adapted to other application areas.

Solution Initialization and Decoding
Solutions are encoded as a permutation of n numbers, each of which represents a job, with the processing sequence being identical on all m machines. Taking the job sequence 3−6−2−4−5−1 as an example, the solution is symbolized by the vector (3 6 2 4 5 1), where six jobs are processed in the specified order on every machine. To generate the initial solution, the well-known constructive heuristic of Nawaz, Enscore, and Ham (NEH; [53]), regarded as one of the best constructive heuristics for initializing flowshop problems, is preferred over random solution generation to ensure a better initial approximation. NEH uses the average processing time as a priority rule for arranging the jobs. The destruction and construction module presented in the next sub-section uses the NEH outcome as its starting point to improve solution quality.
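A minimal sketch of NEH-style initialization follows. The function name and the `fitness` callback are our assumptions; jobs are ordered by descending total processing time (equivalent to the average-processing-time priority rule when all jobs visit the same machines), then inserted one by one at the best position:

```python
def neh(P, fitness):
    """NEH-style construction.  P[i][j]: processing time of job j on
    machine i; `fitness` scores a job sequence (e.g., the weighted
    makespan/flowtime objective) and smaller is better."""
    m, n = len(P), len(P[0])
    # Priority rule: jobs with larger total (hence average) processing time first.
    order = sorted(range(n), key=lambda j: -sum(P[i][j] for i in range(m)))
    seq = [order[0]]
    for job in order[1:]:
        best = None
        # Try every insertion position and keep the best-scoring partial sequence.
        for pos in range(len(seq) + 1):
            cand = seq[:pos] + [job] + seq[pos:]
            score = fitness(cand)
            if best is None or score < best[0]:
                best = (score, cand)
        seq = best[1]
    return seq
```

In HIG the resulting sequence would then be handed to the destruction and construction module rather than used as-is.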

Destruction and Construction Methods
This study applies a random destruction method with no limits to facilitate a greater level of disturbance in the search procedure. A number of randomly extracted jobs equal to the destruction count (d) is saved in a separate array to be considered in the construction procedure. A customized construction method for sorting and inserting the removed jobs is developed to improve the effectiveness of the search procedure while ensuring the feasibility of the resulting new solution. This approach is explained below, with an illustrative example of the procedure provided in Figure 3.
Step 1. Remove the last job from Π and name it a.
Step 2. Insert a into Π before the last job. Name the jobs before and after a as a − k and a + k, respectively.
Step 3. Remove job a − k and rename it to b.
Step 4. Insert b next to the first job in a and name the jobs before and after b as b − k and b + k, respectively.
Step 5. Insert b − k right before a.
Step 6. Select b + k and move it to the position before a − k.
Step 7. Select a + k and move it to the position after b.
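The customized Steps 1–7 above implement a specific reordering illustrated in Figure 3; the surrounding destruction/construction loop, shown here in its standard IG form (random removal of d jobs followed by best-position greedy reinsertion), can be sketched as follows. The function name and the greedy reinsertion are our simplification, not the customized method itself:

```python
import random

def destruct_construct(seq, d, fitness, rng=random):
    """Remove d randomly chosen jobs, then reinsert each at the position
    minimizing `fitness` (standard IG construction; the paper's customized
    Steps 1-7 additionally reorder the jobs neighbouring each insertion)."""
    seq = list(seq)
    # Destruction: extract d random jobs into a separate array.
    removed = [seq.pop(rng.randrange(len(seq))) for _ in range(d)]
    # Construction: greedy best-position reinsertion keeps the solution feasible.
    for job in removed:
        best_pos, best_val = 0, None
        for pos in range(len(seq) + 1):
            val = fitness(seq[:pos] + [job] + seq[pos:])
            if best_val is None or val < best_val:
                best_pos, best_val = pos, val
        seq.insert(best_pos, job)
    return seq
```

Because every removed job is reinserted exactly once, the output is always a feasible permutation of the input sequence.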

Local Search Method
After a new solution results from the iterative greedy construction procedure, a local search mechanism is applied to seek further improvements. For this purpose, a pre-determined number of non-repetitive random job extractions and insertions, called the local search count (γ), is used to find fitter solutions. If applying the local search procedure yields an improvement, the procedure continues; otherwise, it terminates. The pseudocode of the local search procedure is provided in Figure 4.
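A sketch of the extract-and-reinsert local search follows (the paper's exact loop is given in Figure 4; the restart-on-improvement policy below is our reading of the text, and the names are illustrative):

```python
import random

def local_search(seq, gamma, fitness, rng=random):
    """Try up to `gamma` random single-job extract-and-reinsert moves;
    whenever one improves the fitness, adopt it and restart the counter.
    Terminates once `gamma` consecutive moves fail to improve."""
    best, best_val = list(seq), fitness(seq)
    improved = True
    while improved:
        improved = False
        for _ in range(gamma):
            cand = list(best)
            job = cand.pop(rng.randrange(len(cand)))       # extract a random job
            cand.insert(rng.randrange(len(cand) + 1), job)  # reinsert at random slot
            val = fitness(cand)
            if val < best_val:  # strict improvement: keep and restart
                best, best_val = cand, val
                improved = True
                break
    return best
```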

Acceptance and Stopping Conditions
Once the current best solution (Π_best) and the new solution (Π_new) are known, the search algorithm determines whether the fitness value has improved. If the new solution is of better quality than the current best solution, i.e., it has a smaller weighted sum of the total flowtime and makespan values, fitness(Π_new) < fitness(Π_best), the new solution becomes the current best solution, Π_best = Π_new. Otherwise, a mechanism is required to decide whether or not to accept a new solution that is worse than or equal to the current best solution.
Inspired by the Simulated Annealing algorithm [54], a cooling mechanism is used to regulate the acceptance condition. In the approach suggested by Ruiz and Stützle [52], the fitness values of the new and best solutions determine the change in solution quality, ∆ = fitness(Π_new) − fitness(Π_best). Given ∆ and the initial temperature, T_0, as an algorithm parameter, the current temperature T decreases proportionately to the cooling coefficient, i.e., T ← δ × T, where 0 < δ < 1 is the cooling rate. The acceptance probability, calculated as P = exp(−∆/T), is then compared with a random number to determine whether to accept a poorer-performing solution. This mechanism is particularly useful for avoiding premature convergence and getting trapped in local optima. The algorithm terminates when the current best solution remains unchanged, i.e., ∆ = 0, for a certain number of consecutive iterations.
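The acceptance rule can be sketched as follows. The function name and its arguments are illustrative; the cooling step T ← δ·T would be applied once per iteration outside this function:

```python
import math
import random

def accept(fit_new, fit_best, T, rng=random):
    """Simulated-Annealing-style acceptance: always keep improvements;
    accept a worse (or equal) solution with probability exp(-delta / T)."""
    delta = fit_new - fit_best
    if delta < 0:
        return True  # strict improvement is always accepted
    return rng.random() < math.exp(-delta / T)
```

At high temperatures almost any solution is accepted, keeping the search diverse; as T cools, acceptance of poorer solutions becomes increasingly unlikely.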

Results
This section begins with an elaboration on the configuration of the test bank and the algorithm calibration experiment. Numerical results and statistical analysis are then provided to compare the HIG performance with the Hybrid Tabu Search (HTS; [25]). HTS consists of short- and long-term phases, with the short-term phase performing a local search and the long-term phase balancing intensification and diversification to help escape local optima. HTS applies the NEH [53] for solution initialization, and the 'swap' and 'insert' moves as its disturbance mechanisms. Besides, three other IG variants, denoted IG1, IG2, and IG3, are included to enrich the numerical experiments and provide insight into the impact of the various computational elements on solving the problem. This helps identify which element of the proposed extension contributes most to the observed improvement. IG1 and IG2 apply the basic construction method, while IG3 uses the customized construction method developed in this study. On the other hand, IG1 and IG3 have no local search mechanism, while IG2 applies a perturbation mechanism similar to that of HIG. All algorithms are coded and run on a personal computer with the following specs: Intel(R) Core(TM) i7 CPU at 3.4 GHz, 8 GB RAM, and the Windows 7 operating system.
The widely-used scheduling dataset developed by Taillard [55] is used to benchmark HIG against the best-performing algorithms in the literature developed to solve the Fm | prmu, no-idle | α·C_max + β·∑F_j problem. This dataset consists of 12 job/machine combinations across three configuration groups: (1) n ∈ {20, 50, 100} jobs and m ∈ {5, 10, 20} machines; (2) n = 200 jobs and m ∈ {10, 20} machines; (3) n = 500 jobs and m = 20 machines. Ten distinct instances for each combination make a total of 120 instances for the final experiments.
The calibration experiment is conducted in two phases using random test instances. First, the best configuration is determined considering a limited set of alternatives. Next, the set of parameters adjacent to the selected configuration in the first phase will be explored to check if a better combination of parameters can be found. For this purpose, the Relative Percentage Deviation (RPD) shown in Equation (13) is considered to compare the resulting fitness values where smaller values are preferred, and RPD = 0 demonstrates the best solution. In this equation, Fitness * refers to the fitness value obtained by each of the solution algorithms and Fitness best is the best result.

RPD = (Fitness* − Fitness_best) / Fitness_best × 100 (13)

Random instances are used to determine the parameters of the IG1, IG2, IG3, and HIG algorithms. The calibration test results for these algorithms are summarized in Table 2. On this basis, the algorithm parameters are set to d = 2, γ = 20, and δ = 0.9 for the final experiments. To ensure a fair comparison, a termination condition similar to that of the HTS algorithm applied by Ren et al. [25] is adopted. That is, the algorithm terminates when the current best solution remains unchanged for 100 consecutive iterations. Considering the calibrated parameter values and equal importance weights (α = β = 0.5) within an a priori preference articulation scheme, the best-found solutions obtained by HIG are compared with those of the HTS [25], IG1, IG2, and IG3 algorithms. The results are summarized in Tables 3 and 4. Except for the instance with 20 jobs and 10 machines (20 × 10), where HTS performs slightly better than HIG, all remaining best solutions are yielded by HIG.
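The RPD measure of Equation (13) can be computed as follows. This sketch follows the standard convention in the flowshop benchmarking literature, consistent with the text's statement that smaller values are preferred and RPD = 0 marks the best-performing algorithm:

```python
def rpd(fitness_alg, fitness_best):
    """Relative Percentage Deviation: 0 for the algorithm that found the
    best solution; larger values indicate worse relative performance."""
    return (fitness_alg - fitness_best) / fitness_best * 100.0
```

Averaging these values over a group of instances gives the ARPD figures reported in Tables 5 and 6.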
We first analyze the results considering various workloads and operating scales. Considering different numbers of jobs in the first set of instances, Table 3 shows that HIG performs better than HTS, and the performance gap widens as the workload increases. The IG2 algorithm is also superior to HTS on the first set of test instances, showing that integrating the local search mechanism contributes significantly to the success of the developed algorithm. The solutions obtained by the IG1, IG2, IG3, and HIG algorithms are then compared separately in Table 4, where HIG yields the best results for all test instances. In an overall analysis, Tables 5 and 6 provide the Average Relative Percentage Deviation (ARPD) values for different workloads and machine counts, respectively. The RPD analysis shows the overall impact of the number of machines and the workload on algorithm performance. It is evident that HIG obtains meaningfully better solutions than the HTS, IG1, IG2, and IG3 algorithms when solving the Fm | prmu, no-idle | α·C_max + β·∑F_j problem across different operational situations. Given the RPD analysis, HIG's superiority over the current best-performing algorithm, HTS, is expected to be even more pronounced in industry-scale applications.
A statistical test is conducted to check whether the resulting improvement in the best-found solutions is significant. The null hypothesis is that the HIG algorithm does not outperform the HTS algorithm when solving the Fm | prmu, no-idle | α·C_max + β·∑F_j problem. The t-test results are summarized in Table 7. Across the 120 test instances, the p-value supports rejecting the null hypothesis. That is, with 95 percent confidence, we can claim that HIG is superior to the current best-performing algorithm in the BNIPFSP literature, i.e., the HTS algorithm. The proposed extension also shows a significant improvement over all three IG variants. As a final step in the numerical analysis, the best-found solutions to all 120 test instances are recorded in Appendix B, with updated values highlighted in bold font. Notably, 119 out of the 120 best-found solutions are yielded by the HIG algorithm. These values can be used in future studies to benchmark prospective solution algorithms for the Fm | prmu, no-idle | α·C_max + β·∑F_j problem.

Conclusions
Energy efficiency in the production sector requires well-informed operations management decisions in addition to the use of modern equipment, smart lighting and control systems, and the standard-compliant construction of facilities. Production scheduling is a prime example of the planning tools that facilitate the successful implementation of green initiatives for reducing the carbon footprint. This study contributes to the energy-efficient production scheduling literature by developing a mathematical model and a solution algorithm that address the gap identified in the comprehensive literature review. Extensive numerical analysis using a well-known dataset showed that almost all of the best-found solutions are yielded by the HIG algorithm. The statistical test of significance confirmed that HIG performs significantly better than the benchmark algorithm when solving the Fm | prmu, no-idle | α·C_max + β·∑F_j problem.
Despite its effectiveness in solving the BNIPFSP, the proposed solution algorithm is limited in that it applies an a priori preference articulation approach to reconcile the makespan and total flowtime objectives. To address this limitation, the following directions can be pursued. First, the Iterated Greedy algorithm can be extended to work with a Pareto front approach that provides a comprehensive set of optimal solutions and trade-offs. Second, other multi-objective optimization algorithms can be adapted to solve this intractable scheduling extension. Third, the concept of stratification and incremental enlargement can be adopted to solve the problem's dynamic variant; in doing so, one can also account for operational parameter uncertainties and the possibility of rejecting a job or partially accepting a batch of jobs. Finally, the no-idle setting needs more attention in other production settings to contribute to the energy efficiency literature.

Conflicts of Interest:
The authors declare no conflict of interest.