Axioms
  • Article
  • Open Access

7 November 2025

Resource Allocation and Minmax Scheduling Under Group Technology and Different Due-Window Assignments

1 School of Mechatronics Engineering, Shenyang Aerospace University, Shenyang 110136, China
2 Key Laboratory of Rapid Development & Manufacturing Technology for Aircraft, Shenyang Aerospace University, Ministry of Education, Shenyang 110136, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications

Abstract

This article investigates single-machine group scheduling integrated with resource allocation under different due-window (DIFDW) assignment. Three distinct scenarios are examined: one with constant processing times, one with a linear resource consumption function, and one with a convex resource consumption function. The objective is to minimize the total cost comprising the maximum earliness/tardiness penalties, the due-window starting time cost, the due-window size cost, and the resource consumption cost. For each problem variant, we analyze the structural properties of optimal solutions and develop corresponding solution algorithms: a polynomial-time optimal algorithm for the case with constant processing times, heuristic algorithms for the problems involving linear and convex resource allocation, and a branch-and-bound algorithm for obtaining exact solutions. Numerical experiments are conducted to evaluate the performance of the proposed algorithms.

1. Introduction

Production scheduling is fundamental to industrial management and manufacturing, serving as a critical driver for enhancing operational efficiency and reducing resource consumption. By determining the optimal sequence of jobs on machines, effective scheduling strategies streamline production flows, minimize idle times, and directly contribute to achieving key enterprise objectives such as on-time delivery and cost reduction. The importance of scheduling research is demonstrated by its wide-ranging applications, from traditional shop floors to modern cloud computing and logistics systems, making it a perennial focus in operations research and industrial engineering (see Sun et al. [] and Lv and Wang []).
Within the broad domain of production scheduling, group technology (GT) scheduling offers a powerful strategy for environments characterized by product variety. Its significance lies in grouping jobs with similar characteristics—such as process routes or equipment requirements—to be processed consecutively. This grouping drastically reduces non-value-added activities like machine setup times and tool changes, leading to substantial gains in productivity and resource utilization. By clustering similar tasks, GT scheduling effectively reduces the complexity inherent in scheduling diverse jobs, resulting in smoother production flows, lower costs, and shorter lead times (see Huang [] and He et al. []).
Building upon the foundation of GT scheduling, integrating resource allocation into the model introduces a higher degree of operational control and optimization potential. The significance of this integration is that it acknowledges that processing times are often not fixed but can be compressed by allocating additional resources (e.g., labor, energy, or machinery; see Shabtay and Steiner [] and Zhang et al. []). This creates a trade-off between shorter processing times and higher resource costs. Consequently, the core challenge expands from merely sequencing groups and jobs to jointly optimizing the schedule and determining the optimal amount of resources allocated to each job. This approach allows for a more nuanced and cost-effective management of production systems, aligning schedule performance with resource expenditure to achieve superior overall efficiency, which is the central focus of this study.
Furthermore, in industrial production, products often need to be delivered to customers within a specified time frame, known as the "due-window". If a product is completed before or after this due-window, additional penalty costs may be incurred due to inventory holding or contractual violations. Therefore, the setting of the due-window must be optimized based on resource allocation to achieve high production efficiency and minimize waste. In general, due-windows fall into three types: the common due-window (CONDW), the slack due-window (SLKDW), and the different due-window (DIFDW); see Janiak et al. [].
In this study, we investigate the integrated optimization of single-machine scheduling and resource allocation under GT and DIFDW assignment. In practical manufacturing systems, job processing times can often be compressed through resource investment, where the relationship between resource consumption and processing times may be characterized by linear or convex functions. Simultaneously, due to strict customer requirements on delivery times, the rational setting of due-windows is crucial to reducing inventory and penalty costs. Therefore, optimizing job scheduling sequences and resource investment strategies under the combined influence of resource allocation and window assignment holds significant practical importance.
The rest of this paper is organized as follows: In Section 2, we provide a review of the literature related to group technology, resource allocation, and the due-window. In Section 3, a definition of the model is given. Section 4 analyzes the problem with constant processing times. Section 5 examines the problem with a linear resource consumption function. Section 6 studies the problem under a convex resource consumption function. Section 7 introduces the branch-and-bound and simulated annealing algorithms for obtaining exact and near-optimal solutions. Section 8 presents numerical experiments to evaluate the performance of the proposed algorithms. Finally, Section 9 summarizes the findings and suggests directions for future research.

2. Literature Review

In a just-in-time setting, earliness–tardiness cost minimization is very important, for example under due-date assignments (see Chen et al. [], Wang and Liu [], and Zhang and Tang []) and due-window assignments. The due-window assignment strategies commonly studied in the literature can be categorized into three types: CONDW, where a unified due-window is set for all jobs (Hoffmann et al. [] and Bai et al. []); SLKDW, in which each job is assigned a fixed-length window whose start time is dynamically determined according to the job's start time or its position in the sequence (Zhao []); and DIFDW, where each job may be assigned an independent and distinct time window, with both the start time and length determined by job-specific characteristics (Lu et al. []). Recently, Lu et al. [] considered a single-machine problem with delivery times and deteriorating jobs. Under three due-window assignments (CONDW, SLKDW, and DIFDW), they proved that general earliness/tardiness penalties are polynomially solvable. Qiu and Wang [] considered scheduling jobs with deterioration effects on mixed due-windows (i.e., a mix of CONDW and SLKDW). Sun et al. [] and Wang et al. [] considered proportionate flow shop problems with due-window assignments. Under CONDW and SLKDW (resp. DIFDW), Sun et al. [] (resp. Wang et al. []) proved that earliness–tardiness cost minimization is polynomially solvable.
With respect to resource allocation problems, Zhao [] considered no-wait flow shop problems with SLKDW; accounting for a learning effect and resource allocation, they showed that some problems are polynomially solvable. Tian [] and Wang et al. [] studied single-machine due-window assignment problems that simultaneously consider resource allocation and generalized earliness/tardiness penalties. Sun et al. [] investigated single-machine resource allocation scheduling with SLKDW. Qian et al. [] investigated a single-machine CONDW assignment and scheduling problem that incorporates position-dependent weights, delivery times, a learning effect, and resource allocation, and developed a polynomial-time solution for it. Wang et al. [] considered single-machine resource allocation scheduling with deterioration effects, where the scheduling cost is the total weighted completion time. Weng et al. [] studied resource allocation scheduling with multi-resource operations and proposed several algorithms for makespan minimization. Sarpatwar et al. [] considered preemptive resource allocation scheduling and presented algorithms for two objectives, i.e., throughput maximization and resource minimization. Sun et al. [] studied the CONDW and SLKDW problems in a no-wait flow shop setting with resource allocation, and they proved that earliness–tardiness cost minimization is polynomially solvable.
Scheduling with GT has also received attention from many scholars (see Shabtay et al. [], Liu and Wang [], and Chen et al. []). Chen et al. [] studied a scheduling problem featuring a controllable learning effect and a GT-dependent due-window assignment. Wang and Liu [] considered a single-machine GT problem where job processing times are affected by resource allocation and are subject to unrestricted due-date assignments. Yin and Gao [] and Yin et al. [] analyzed single-machine GT problems with general linear deterioration and learning effects. Zhang et al. [] studied no-wait flow shop problems with GT and, for makespan minimization, proposed an estimation-of-distribution algorithm and some heuristic algorithms. Li and Goossens [] considered the multi-league GT and scheduling problem. Ren and Yang [] considered minmax single-machine scheduling with resource allocation and CONDW assignment under group technology, where resource consumption is a linear/convex function of the processing time; they showed that the special case with group-position-dependent weights can be solved in polynomial time. Lv and Wang [] investigated the problems of resource allocation and SLKDW assignment in single-machine group technology scheduling, proposing polynomial-time algorithms and heuristic methods to minimize the maximum SLKDW cost plus the total resource allocation cost. In this paper, building upon the aforementioned two articles (i.e., Ren and Yang [] and Lv and Wang []), we extend the window conditions on the production line to another common scenario: DIFDW. We determine the optimal job sequences both within groups and between groups to form a complete product. For constant processing times, a polynomial-time solution algorithm is given, while for linear or convex resource allocation, heuristic algorithms and branch-and-bound algorithms are given.
In actual production, the requirements for delivery times across different product groups or production stages are complex and diverse. CONDW is insufficient to fully accommodate this need for flexibility. While SLKDW accounts for production fluctuations, it still has limitations in scope and cannot cover all scenarios for due-date setting. Therefore, it is necessary to expand research on due-window configuration methods. DIFDW can adapt to this diversity—for instance, high-value customized products require precise due-dates. By introducing DIFDW, production requirements can be precisely matched, improving scheduling rationality and system efficiency and ultimately enhancing customer satisfaction.
The main contributions of this article are as follows: (i) The integrated optimization problem of single-machine scheduling and resource allocation under GT and DIFDW is proposed. (ii) For the fixed processing time scenario, a polynomial-time optimal algorithm is proposed. (iii) For the linear and convex resource function scenarios, we conjecture that both problems are NP-hard; accordingly, some heuristics and a branch-and-bound algorithm are given.

3. Problem Statement

This section defines the parameters of the problem under consideration. There are $n$ jobs in total, and the jobs are divided into $g$ groups based on their similarity characteristics. Specifically, there are $n_l$ jobs in group $G_l$ ($l = 1, 2, \ldots, g$). During the machining process, there is no idle time for the machine. Furthermore, the machine can only process one job at a time, and it continues to do so until the job is completed. Before each group $G_l$ of jobs is processed, there is a setup time $s_l$.
As resources are invested, the processing time decreases; this relationship is called the resource consumption function. Let $O_{l,h}$ ($h = 1, 2, \ldots, n_l$) be job $h$ in group $G_l$. This study is centered on two types of resource functions:
  • Linear resource allocation function:
    $$\tilde{p}_{l,h} = p_{l,h} - \delta_{l,h} u_{l,h}, \quad 0 \le u_{l,h} \le \bar{u}_{l,h} \le \frac{p_{l,h}}{\delta_{l,h}}, \quad l = 1, 2, \ldots, g;\ h = 1, 2, \ldots, n_l, \tag{1}$$
    where $p_{l,h}$ indicates the normal processing time, $\delta_{l,h}$ ($\delta_{l,h} > 0$) indicates the compression factor, $u_{l,h}$ is the resource allocation amount, and $\bar{u}_{l,h}$ is the upper bound of the resource allocation amount.
  • Convex resource allocation function:
    $$\tilde{p}_{l,h} = \left( \frac{w_{l,h}}{u_{l,h}} \right)^{k}, \quad l = 1, 2, \ldots, g;\ h = 1, 2, \ldots, n_l, \tag{2}$$
    where $w_{l,h}$ is the workload of job $O_{l,h}$, and $k$ is a positive constant.
Under DIFDW, let $[d^1_{l,h}, d^2_{l,h}]$ be the due-window of job $O_{l,h}$, where $d^1_{l,h}$ and $d^2_{l,h}$ are the starting and finishing times of the due-window, $D_{l,h} = d^2_{l,h} - d^1_{l,h}$ denotes the size of the due-window, and $d^1_{l,h}$ and $d^2_{l,h}$ are decision variables for each job $O_{l,h}$. It is evident that penalties are imposed when job $O_{l,h}$ completes its processing either before $d^1_{l,h}$ or after $d^2_{l,h}$. The specifics are defined according to the measure of earliness or tardiness; that is, $E_{l,h} = \max\{d^1_{l,h} - C_{l,h}, 0\}$ indicates the earliness of job $O_{l,h}$, where $C_{l,h}$ is the completion time of $O_{l,h}$, whereas the tardiness is $T_{l,h} = \max\{C_{l,h} - d^2_{l,h}, 0\}$.
The objective is to find the inter-group sequence $\Phi$, the intra-job sequence $\phi_l$ within $G_l$, the due-windows $[d^1_{l,h}, d^2_{l,h}]$, and the resource allocation amount of each job $O_{l,h}$ to minimize the following objective function:
$$F(\hat{\Phi}, u) = \sum_{l=1}^{g} \max_{1 \le h \le n_l} \max\left\{ \kappa_l E_{l,h} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h},\ \rho_l T_{l,h} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h} \right\} + \sum_{l=1}^{g} \sum_{h=1}^{n_l} \xi_{l,h} u_{l,h} \tag{3}$$
where $\hat{\Phi} = (\Phi, \phi_1, \phi_2, \ldots, \phi_g, d^1_{l,h}, d^2_{l,h})$ collects the inter-group sequence $\Phi$, the intra-job sequences $\phi_l$ ($l = 1, 2, \ldots, g$), and the due-windows, and $u = (u_{1,1}, u_{1,2}, \ldots, u_{1,n_1}; \ldots; u_{g,1}, u_{g,2}, \ldots, u_{g,n_g})$ denotes the resource allocation amounts of the jobs $O_{l,h}$. The terms $\kappa_l$, $\rho_l$, $\mu_l$, and $\upsilon_l$ indicate the unit costs of earliness, tardiness, due-window starting time, and due-window size, respectively; they are non-negative constants common to every job within group $G_l$. The term $\xi_{l,h}$ denotes the unit cost of the resource consumed by $O_{l,h}$.
Adopting the three-field notation, which is widely used in scheduling problems, the aforementioned problems are denoted as follows:
$$1\,|\,GT, DIFDW, lin\,|\,F(\hat{\Phi}, u)$$
$$1\,|\,GT, DIFDW, cov\,|\,F(\hat{\Phi}, u)$$
where $lin$ and $cov$ indicate the linear and convex resource functions, respectively.

4. Constant Processing Times

Before introducing linear or convex resource allocation, the problem with constant processing times ($cons$) is first considered. In this section, the problem simplifies to
$$1\,|\,GT, DIFDW, cons\,|\,F(\hat{\Phi}, u)$$
For a given inter-group sequence $\Phi$, the cost of group $G_l$ is
$$F_l = \max_{1 \le h \le n_l} \max\left\{ \kappa_l E_{l,h} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h},\ \rho_l T_{l,h} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h} \right\} = \max_{1 \le h \le n_l} \left\{ \kappa_l \max\{d^1_{l,h} - C_{l,h}, 0\} + \rho_l \max\{C_{l,h} - d^2_{l,h}, 0\} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h} \right\} \tag{4}$$
Lemma 1. 
For a given intra-job sequence $\phi_l$ in group $G_l$, there exists an optimal solution such that $d^1_{l,[h]} \le d^2_{l,[h]} \le C_{l,[h]}$, where the subscript $[h]$ ($h = 1, 2, \ldots, n_l$) indicates the job at the $h$-th position in the sequence.
Proof. 
See Appendix A.    □
Lemma 2. 
For group $G_l$ ($l = 1, 2, \ldots, g$), the objective function
$$F_l = \max_{1 \le h \le n_l} \left\{ \kappa_l \max\{d^1_{l,h} - C_{l,h}, 0\} + \rho_l \max\{C_{l,h} - d^2_{l,h}, 0\} + \mu_l d^1_{l,h} + \upsilon_l D_{l,h} \right\}$$
can be minimized as shown in the following cases (see Table 1).
Table 1. The optimal strategy.
Proof. 
See Appendix B.    □
Leveraging the optimal due-window assignment strategy derived in Lemma 2, $F_l$ admits the following equivalent representation:
$$F_l = \Omega_l C_{l,[n_l]}$$
where
$$\Omega_l = \begin{cases} \rho_l, & \text{Case 1} \\ \upsilon_l, & \text{Case 2} \\ \mu_l, & \text{Case 3} \end{cases} \tag{5}$$
Based on the above analysis, it is evident that the objective values summarized for the various scenarios mentioned above are independent of the sequence in which the jobs are arranged. In other words, to minimize the objective function value for each group G l , the jobs can be scheduled in any feasible order.
By applying Lemma 2, the optimal intra-job sequence can now be determined, upon which the discussion on the optimal inter-group sequence is founded.
Lemma 3. 
For the problem
$$1\,|\,GT, DIFDW, cons\,|\,F(\hat{\Phi}, u),$$
the optimal inter-group sequence can be obtained by sequencing the groups in ascending order of $(s_l + \hat{P}_l)/\Omega_l$, where $\hat{P}_l = \sum_{h=1}^{n_l} \tilde{p}_{l,h}$.
Proof. 
See Appendix C.    □
Based on the above analysis, it can be concluded that the optimal algorithm for the problem
$$1\,|\,GT, DIFDW, cons\,|\,F(\hat{\Phi}, u)$$
can be stated as follows:
Theorem 1. 
For
$$1\,|\,GT, DIFDW, cons\,|\,F(\hat{\Phi}, u),$$
an optimal solution can be obtained in polynomial time by Algorithm 1, i.e., in $O(g \log g)$ time.
Algorithm 1: Constant processing times
Step 1. Intra-job Sequence.
    For each group $G_l$ ($l = 1, 2, \ldots, g$) ← any feasible order
Step 2. Comprehensive Parameter.
    Determine $\Omega_l$ for each $G_l$ ← Equation (5)
Step 3. Inter-group Sequence.
    Sequence the groups in ascending order of $(s_l + \hat{P}_l)/\Omega_l$ ← Lemma 3
Step 4. Sub-problem Solutions.
    Calculate $[d^1_{l,h}, d^2_{l,h}]$ for each group $G_l$ ← Lemma 2
    Calculate $F_l$ for each group $G_l$ ← Equation (4)
Step 5. Global Solution.
    Calculate the objective function ← $F = \sum_{l=1}^{g} F_l$
Proof. 
See Appendix D.    □
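Algorithm 1 essentially reduces to a single sort of the groups. The following Python sketch illustrates Steps 2–5 on hypothetical group data; here `Omega` is the comprehensive parameter of Lemma 2 (taken as given), `s` the setup time, and `P` the total processing time $\hat{P}_l$ of a group:

```python
def schedule_groups(groups):
    """Algorithm 1 sketch: each group is a dict with setup time `s`, total
    processing time `P`, and comprehensive parameter `Omega` (Lemma 2).
    Returns the inter-group order and the objective F = sum_l Omega_l * C_l."""
    # Lemma 3: sequence groups in ascending order of (s_l + P_l) / Omega_l
    order = sorted(groups, key=lambda g: (g["s"] + g["P"]) / g["Omega"])
    F, t = 0.0, 0.0
    for g in order:
        t += g["s"] + g["P"]   # completion time of the last job in the group
        F += g["Omega"] * t    # group cost F_l = Omega_l * C_{l,[n_l]}
    return [g["name"] for g in order], F
```

For two hypothetical groups with ratios $4/2 = 2$ and $4/4 = 1$, the second group is sequenced first, consistent with Lemma 3.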

5. Linear Resource Consumption Function

5.1. Basic Properties

By integrating a linear resource consumption function, the scheduling problem is formally expressed as follows:
$$1\,|\,GT, DIFDW, lin\,|\,F(\hat{\Phi}, u)$$
The objective function can be formulated as follows:
$$F = \sum_{l=1}^{g} F_l + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \xi_{l,h} u_{l,h} = \sum_{l=1}^{g} \Omega_{[l]} C_{[l],[n_l]} + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \xi_{l,h} u_{l,h} = \sum_{l=1}^{g} \Omega_{[l]} \left( \sum_{r=1}^{l} s_{[r]} + \sum_{r=1}^{l} \hat{P}_{[r]} \right) + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \xi_{l,h} u_{l,h} = \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) \hat{P}_{[l]} + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \xi_{l,h} u_{l,h} \tag{6}$$
By substituting Equation (1) into Equation (6), we have
$$F = \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \frac{\xi_{[l],[h]}\, p_{[l],[h]}}{\delta_{[l],[h]}} + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \chi_{[l],[h]}\, \tilde{p}_{[l],[h]} \tag{7}$$
where $\chi_{[l],[h]} = \sum_{r=l}^{g} \Omega_{[r]} - \xi_{[l],[h]}/\delta_{[l],[h]}$, $l = 1, 2, \ldots, g$; $h = 1, 2, \ldots, n_l$.
The next lemma shows that the optimal resource allocation $u^*(\Phi, \phi_1, \phi_2, \ldots, \phi_g)$ is a function of $\Phi, \phi_1, \phi_2, \ldots, \phi_g$, and it characterizes the resource allocation under the optimal due-window assignment strategy.
Lemma 4. 
For $u^*(\Phi, \phi_1, \phi_2, \ldots, \phi_g)$, the precise formulation is as follows:
$$u^*_{[l],[h]} = \begin{cases} 0, & \chi_{[l],[h]} < 0 \\ u_{[l],[h]} \in [0, \bar{u}_{[l],[h]}], & \chi_{[l],[h]} = 0 \\ \bar{u}_{[l],[h]}, & \chi_{[l],[h]} > 0 \end{cases} \quad \text{for } l = 1, 2, \ldots, g \text{ and } h = 1, 2, \ldots, n_l. \tag{8}$$
Proof. 
See Appendix E.    □
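Intuitively, Equation (7) is linear in each $\tilde{p}_{[l],[h]}$, so the optimal allocation is extremal ("bang-bang") in the sign of $\chi_{[l],[h]}$. A minimal Python sketch of this rule (returning 0 in the indifferent case $\chi = 0$, where any amount in $[0, \bar{u}]$ is optimal):

```python
def optimal_linear_allocation(chi, u_bar):
    """Lemma 4 rule: chi < 0 -> allocate no resource; chi > 0 -> allocate
    the full upper bound u_bar; chi == 0 -> any amount in [0, u_bar] is
    optimal (0 is returned here for definiteness)."""
    return u_bar if chi > 0 else 0.0
```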
Lemma 5. 
For
$$1\,|\,GT, DIFDW, lin\,|\,F(\hat{\Phi}, u),$$
the optimal intra-job sequence can be obtained by matching the smallest $\chi_{l,h}$ to the job with the largest $\tilde{p}_{l,h}$, the second smallest $\chi_{l,h}$ to the job with the second largest $\tilde{p}_{l,h}$, and so on.
Proof. 
See Appendix F.    □
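This matching is an instance of the rearrangement inequality: a sum of products $\sum_h \chi_h \tilde{p}_h$ is minimized by pairing the coefficients in opposite orders. A short sketch with illustrative numbers:

```python
def min_pairing_cost(chi, p_tilde):
    """Minimize sum(chi_i * p_i) over all pairings: sort chi ascending
    and p_tilde descending, then pair them position by position."""
    return sum(c * p for c, p in zip(sorted(chi), sorted(p_tilde, reverse=True)))
```

For example, with $\chi = (1, 3)$ and $\tilde{p} = (2, 5)$, the minimal pairing gives $1 \cdot 5 + 3 \cdot 2 = 11$, whereas the opposite pairing gives $1 \cdot 2 + 3 \cdot 5 = 17$.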
Lemma 6. 
The term
$$\sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]}$$
can be minimized by sequencing the groups in descending order of $\Omega_{[l]}/s_{[l]}$.
Proof. 
See Appendix G.    □
Based on Lemmas 5 and 6, the optimal intra-job sequence $\phi_l$ within $G_l$ can be determined. However, no polynomial-time algorithm is known for the optimal inter-group sequence (its complexity remains an open problem). Hence, we propose the following heuristic algorithm ($HA_{lin}$, Algorithm 2).
Algorithm 2: Heuristic algorithm ($HA_{lin}$)
Step 1. Critical Parameter
    Compute $\Omega_l$ for each $G_l$ ← Equation (5)
Step 2. Intra-job Sequence
    For each group $G_l$ ($l = 1, 2, \ldots, g$) ← Lemma 5
Step 3. Inter-group Sequence
    Strategy A: Schedule groups in descending order of $\Omega_l / s_l$
      Calculate the objective value $F_A$ ← Equation (7)
    Strategy B: Schedule groups in ascending order of $s_l$
      Calculate the objective value $F_B$ ← Equation (7)
    Strategy C: Schedule groups in descending order of $\Omega_l$
      Calculate the objective value $F_C$ ← Equation (7)
Step 4. Feasible Solution Selection
    Compute $F^* = \min\{F_A, F_B, F_C\}$
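Steps 3–4 amount to evaluating three candidate orderings and keeping the cheapest. A minimal sketch, where the `objective` callable stands in for Equation (7) and is supplied by the caller (the group data in the example are hypothetical):

```python
def ha_lin_order(groups, objective):
    """HA_lin inter-group step: build the three candidate sequences of
    Algorithm 2 and return the one with the smallest objective value."""
    cand_a = sorted(groups, key=lambda g: -g["Omega"] / g["s"])  # descending Omega/s
    cand_b = sorted(groups, key=lambda g: g["s"])                # ascending s
    cand_c = sorted(groups, key=lambda g: -g["Omega"])           # descending Omega
    best = min((cand_a, cand_b, cand_c), key=objective)
    return best, objective(best)
```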

5.2. Lower Bounds

Given an inter-group sequence $\Phi = (\Phi^{GS}, \Phi^{GU})$, let $\Phi^{GS}$ (resp. $\Phi^{GU}$) be the scheduled (resp. unscheduled) part, and suppose there are $f$ groups in $\Phi^{GS}$. Equation (7) can then be divided into scheduled and unscheduled parts:
$$F = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{[r]} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \frac{\xi_{l,h}\, p_{l,h}}{\delta_{l,h}} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \frac{\xi_{[l],h}\, p_{[l],h}}{\delta_{[l],h}} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \chi_{l,h}\, \tilde{p}_{l,h} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \chi_{[l],h}\, \tilde{p}_{[l],h} \tag{9}$$
From Equation (9), it can be seen that the terms $\sum_{l=1}^{f} (\sum_{r=l}^{f} \Omega_r) s_l$, $\sum_{l=1}^{f}\sum_{h=1}^{n_l} \xi_{l,h} p_{l,h}/\delta_{l,h}$, $\sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \xi_{[l],h} p_{[l],h}/\delta_{[l],h}$, and $\sum_{l=1}^{f}\sum_{h=1}^{n_l} \chi_{l,h} \tilde{p}_{l,h}$ are constants. The remaining part can be minimized by Lemmas 5 and 6. Then, we have the following lower bound:
$$LB_1^{lin} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{\langle r \rangle} \right) s_{\min} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \frac{\xi_{l,h}\, p_{l,h}}{\delta_{l,h}} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \frac{\xi_{[l],h}\, p_{[l],h}}{\delta_{[l],h}} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \chi_{l,h}\, \tilde{p}_{l,h} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \chi_{[l],h}\, \tilde{p}_{[l],h} \tag{10}$$
where $s_{\min} = \min\{s_{(f+1)}, s_{(f+2)}, \ldots, s_{(g)}\}$ and $\Omega_{\langle f+1 \rangle} \ge \Omega_{\langle f+2 \rangle} \ge \cdots \ge \Omega_{\langle g \rangle}$.
Let $\Omega_{\min} = \min\{\Omega_{\langle f+1 \rangle}, \Omega_{\langle f+2 \rangle}, \ldots, \Omega_{\langle g \rangle}\}$; then the second and third lower bounds can be written as
$$LB_2^{lin} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\min} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{\min} \right) s_{(l)} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \frac{\xi_{l,h}\, p_{l,h}}{\delta_{l,h}} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \frac{\xi_{[l],h}\, p_{[l],h}}{\delta_{[l],h}} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \chi_{l,h}\, \tilde{p}_{l,h} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \chi_{[l],h}\, \tilde{p}_{[l],h} \tag{11}$$
and
$$LB_3^{lin} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{\langle r \rangle} \right) s_{(l)} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \frac{\xi_{l,h}\, p_{l,h}}{\delta_{l,h}} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \frac{\xi_{[l],h}\, p_{[l],h}}{\delta_{[l],h}} + \sum_{l=1}^{f}\sum_{h=1}^{n_l} \chi_{l,h}\, \tilde{p}_{l,h} + \sum_{l=f+1}^{g}\sum_{h=1}^{n_{[l]}} \chi_{[l],h}\, \tilde{p}_{[l],h} \tag{12}$$
where $\Omega_{\langle f+1 \rangle} \ge \Omega_{\langle f+2 \rangle} \ge \cdots \ge \Omega_{\langle g \rangle}$ and $s_{(f+1)} \le s_{(f+2)} \le \cdots \le s_{(g)}$. Note that $\Omega_{\langle l \rangle}$ and $s_{(l)}$ ($l = f+1, f+2, \ldots, g$) do not necessarily correspond to the same group. From Equations (10)–(12), we select the largest of the three as the lower bound; that is,
$$LB^{lin} = \max\left\{ LB_1^{lin}, LB_2^{lin}, LB_3^{lin} \right\} \tag{13}$$

6. Convex Resource Consumption Function

6.1. Basic Properties

In this section, the problem is formally expressed as follows:
$$1\,|\,GT, DIFDW, cov\,|\,F(\hat{\Phi}, u)$$
Resource consumption is characterized by the convex function delineated in Equation (2). By incorporating Equation (2) into Equation (6), we have
$$F = \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) \sum_{h=1}^{n_{[l]}} \left( \frac{w_{[l],[h]}}{u_{[l],[h]}} \right)^{k} + \sum_{l=1}^{g}\sum_{h=1}^{n_l} \xi_{[l],[h]}\, u_{[l],[h]} \tag{14}$$
The specific computation of $u^*$ for this problem is given by Lemma 7 below.
Lemma 7. 
The optimal resource allocation $u^*(\Phi, \phi_1, \phi_2, \ldots, \phi_g)$ is represented as follows:
$$u^*_{[l],[h]}(\Phi, \phi_1, \phi_2, \ldots, \phi_g) = \left( \frac{k \left( \sum_{r=l}^{g} \Omega_{[r]} \right) w_{[l],[h]}^{k}}{\xi_{[l],[h]}} \right)^{\frac{1}{k+1}} \tag{15}$$
Proof. 
See Appendix H.    □
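Equation (15) follows from first-order optimality: for each job, the cost $(\sum_{r=l}^{g}\Omega_{[r]})(w/u)^k + \xi u$ is convex in $u$, and setting its derivative to zero yields the closed form. A small sketch with illustrative values:

```python
def optimal_convex_allocation(omega_tail, w, xi, k):
    """Lemma 7 closed form: minimizer of omega_tail*(w/u)**k + xi*u over u > 0,
    where omega_tail = sum of Omega_[r] for r = l, ..., g."""
    return (k * omega_tail * w ** k / xi) ** (1.0 / (k + 1))
```

For instance, with $k = 1$, $\sum_{r=l}^{g}\Omega_{[r]} = 2$, $w = 3$, and $\xi = 6$, the formula gives $u^* = (2 \cdot 3 / 6)^{1/2} = 1$, and the per-job cost $2 \cdot 3/u + 6u$ attains its minimum there.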
The optimization model incorporating both the due-window assignment and the resource allocation strategy is obtained by substituting Equation (15) into Equation (14):
$$F = \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right)^{\frac{1}{k+1}} \sum_{h=1}^{n_{[l]}} \left( \xi_{[l],[h]}\, w_{[l],[h]} \right)^{\frac{k}{k+1}} \tag{16}$$
Let $\varrho_{[l]} = \sum_{h=1}^{n_{[l]}} \varrho_{[l],[h]} = \sum_{h=1}^{n_{[l]}} \left( \xi_{[l],[h]}\, w_{[l],[h]} \right)^{\frac{k}{k+1}}$ ($l = 1, 2, \ldots, g$), and note that the factors $\left( \sum_{r=l}^{g} \Omega_{[r]} \right)^{\frac{1}{k+1}}$ and $\sum_{r=l}^{g} \Omega_{[r]}$ depend only on the group positions and do not influence the intra-job sequence. Then, the objective function can be expressed in the following form:
$$F = \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \sum_{l=1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right)^{\frac{1}{k+1}} \varrho_{[l]} \tag{17}$$
For the problem
1 G T , D I F D W , c o v F Φ ^ , u ,
it can be concluded that the objective function value F is independent of intra-job sequence, and the jobs of each group G l ( l = 1 , 2 , , g ) can be scheduled in any order. Regarding inter-group sequence, the term l = 1 g r = l g Ω [ r ] s [ l ] can be minimized by Lemma 6, and the term l = 1 g r = l g Ω [ r ] 1 k + 1 ϱ [ l ] k k + 1 can also be minimized likewise. After the above analysis, we present the following heuristic algorithm (i.e., Algorithm 3, H A c o v ) to determine the feasible solution:
Algorithm 3: Heuristic algorithm ($HA_{cov}$)
Step 1. Critical Parameter
    Compute $\Omega_l$ for each $G_l$ ← Equation (5)
    Compute $\varrho_l$ ← $\varrho_{[l]} = \sum_{h=1}^{n_{[l]}} (\xi_{[l],[h]}\, w_{[l],[h]})^{\frac{k}{k+1}}$
Step 2. Intra-job Sequence
    For each group $G_l$ ($l = 1, 2, \ldots, g$) ← any feasible order
Step 3. Inter-group Sequence
    Strategy A: Schedule groups in descending order of $\Omega_l / s_l$
      Calculate the objective value $F_A$ ← Equation (17)
    Strategy B: Schedule groups in descending order of $\Omega_l / \varrho_l$
      Calculate the objective value $F_B$ ← Equation (17)
    Strategy C: Schedule groups in descending order of $\Omega_l$
      Calculate the objective value $F_C$ ← Equation (17)
    Strategy D: Schedule groups in ascending order of $s_l$
      Calculate the objective value $F_D$ ← Equation (17)
    Strategy E: Schedule groups in ascending order of $\varrho_l$
      Calculate the objective value $F_E$ ← Equation (17)
Step 4. Feasible Solution Selection
    Compute $F^* = \min\{F_A, F_B, F_C, F_D, F_E\}$

6.2. Lower Bounds

Let $\Phi = (\Phi^{GS}, \Phi^{GU})$ be an inter-group sequence, where $\Phi^{GS}$ ($\Phi^{GU}$) is the scheduled (unscheduled) part, and there are $f$ groups in $\Phi^{GS}$. We have the following:
$$F = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{[r]} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right) s_{[l]} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \left[ \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{[r]} \right)^{\frac{1}{k+1}} \varrho_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{[r]} \right)^{\frac{1}{k+1}} \varrho_{[l]} \right] \tag{18}$$
From Equation (18), it can be seen that the terms $\sum_{l=1}^{f} (\sum_{r=l}^{f} \Omega_r) s_l$ and $\sum_{l=1}^{f} (\sum_{r=l}^{f} \Omega_r)^{\frac{1}{k+1}} \varrho_l$ are constants, and the remaining terms can be minimized by Lemma 6. Then, we have the following lower bounds:
$$LB_1^{cov} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{\langle r \rangle} \right) s_{\min} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \left[ \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right)^{\frac{1}{k+1}} \varrho_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{(r)} \right)^{\frac{1}{k+1}} \varrho_{(l)} \right] \tag{19}$$
where $s_{\min} = \min\{s_{\langle f+1 \rangle}, s_{\langle f+2 \rangle}, \ldots, s_{\langle g \rangle}\}$, $\Omega_{\langle f+1 \rangle} \ge \Omega_{\langle f+2 \rangle} \ge \cdots \ge \Omega_{\langle g \rangle}$, and $\Omega_{(f+1)}/\varrho_{(f+1)} \ge \Omega_{(f+2)}/\varrho_{(f+2)} \ge \cdots \ge \Omega_{(g)}/\varrho_{(g)}$.
Let $\varrho_{\min} = \min\{\varrho_{\langle f+1 \rangle}, \varrho_{\langle f+2 \rangle}, \ldots, \varrho_{\langle g \rangle}\}$; the second and third lower bounds can be written as follows:
$$LB_2^{cov} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{(r)} \right) s_{(l)} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \left[ \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right)^{\frac{1}{k+1}} \varrho_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{\langle r \rangle} \right)^{\frac{1}{k+1}} \varrho_{\min} \right] \tag{20}$$
and
$$LB_3^{cov} = \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right) s_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{(r)} \right) s_{(l)} + \left( k^{-\frac{k}{k+1}} + k^{\frac{1}{k+1}} \right) \left[ \sum_{l=1}^{f} \left( \sum_{r=l}^{f} \Omega_r + \sum_{r=f+1}^{g} \Omega_{\langle r \rangle} \right)^{\frac{1}{k+1}} \varrho_l + \sum_{l=f+1}^{g} \left( \sum_{r=l}^{g} \Omega_{(r)} \right)^{\frac{1}{k+1}} \varrho_{(l)} \right] \tag{21}$$
where $\Omega_{\langle f+1 \rangle} \ge \Omega_{\langle f+2 \rangle} \ge \cdots \ge \Omega_{\langle g \rangle}$, $\Omega_{(f+1)}/s_{(f+1)} \ge \Omega_{(f+2)}/s_{(f+2)} \ge \cdots \ge \Omega_{(g)}/s_{(g)}$, and $\Omega_{(f+1)}/\varrho_{(f+1)} \ge \Omega_{(f+2)}/\varrho_{(f+2)} \ge \cdots \ge \Omega_{(g)}/\varrho_{(g)}$. Note that $\Omega_{\langle l \rangle}$, $\Omega_{(l)}$, $s_{(l)}$, and $\varrho_{(l)}$ ($l = f+1, f+2, \ldots, g$) do not necessarily correspond to the same group. Similarly to the linear case, we select the largest of the three lower bounds as the lower bound; that is,
$$LB^{cov} = \max\left\{ LB_1^{cov}, LB_2^{cov}, LB_3^{cov} \right\} \tag{22}$$

7. Computational Algorithms

The branch-and-bound (B&B) method ensures global optimality by systematically enumerating the feasible solution space and applying bound-based pruning. Meanwhile, simulated annealing (SA) exhibits strong robustness through the probabilistic acceptance of inferior solutions, enabling escape from local optima, which is particularly suitable for large-scale NP-hard scheduling scenarios. These two methods complement each other in precision and efficiency, aligning well with the dual requirements of accuracy and timeliness in practical production scheduling.

7.1. Branch-and-Bound ( B & B ) Algorithm

The B&B search follows a depth-first strategy: groups are assigned in a forward manner starting from the first group position (each node corresponds to the assignment of a group to a position).
Algorithm 4 (B&B) first generates an initial feasible solution $\Phi^0$ using Algorithm 2 or Algorithm 3 and takes its objective function value $F(\Phi^*)$ as the current best. Then, in the node evaluation and pruning phase, for each node $N_i$ and each unfathomed schedule $US_j$, it computes a lower bound $LB^{lin(cov)}$. If the lower bound is not less than the current best value, the node or schedule is pruned; otherwise, the schedule is completed to obtain a new solution $\Phi$, its objective function value is calculated, and the best solution is updated if improved. This process is repeated until all nodes are explored or pruned, and the optimal sequence $\Phi^*$ and its objective function value $F(\Phi^*)$ are output.
Algorithm 4: B&B
Step 1. (Initial Solution Generation)
     Initial feasible solution $\Phi^0$ ← Algorithm 2/Algorithm 3
     Compute the objective function value $F(\Phi^*) = F(\Phi^0)$ ← Equation (9)/Equation (14)
Step 2. (Node Evaluation and Pruning)
     For each node $N_i$:
        Compute the lower bound $LB^{lin(cov)}(N_i)$ ← Equation (13)/Equation (22)
        If $LB^{lin(cov)}(N_i) \ge F(\Phi^*)$:
          Prune node $N_i$ and its subtree
      For each unfathomed schedule $US_j$:
        Compute the lower bound $LB^{lin(cov)}(US_j)$ ← Equation (13)/Equation (22)
        If $LB^{lin(cov)}(US_j) \ge F(\Phi^*)$:
          Prune $US_j$ and its subsequent branches
        Else:
          Complete the schedule to obtain $\Phi$
          Calculate $F(\Phi)$
          If $F(\Phi) < F(\Phi^*)$:
            Update: $\Phi^* \leftarrow \Phi$
          Else:
            Discard $\Phi$
Step 3. (Termination)
      While there are nodes left to explore
        Perform the exploration process as outlined in Step 2
      If all nodes have been explored
        Terminate the algorithm
        Output the optimal inter-group sequence $\Phi^*$ and $F(\Phi^*)$

7.2. Simulated Annealing (SA) Algorithm

On the other hand, SA is a good choice for solving the general problem. The details of the SA algorithm are summarized as follows:
Algorithm 5 (SA) begins by generating an initial solution and setting parameters. Its core iterative process involves randomly generating new solutions in the neighborhood of the current solution and updating the current solution based on the Metropolis criterion—accepting better solutions immediately and worse ones with a probability (see Kirkpatrick et al. []). This mechanism helps escape local optima. The probability of accepting inferior solutions decreases as the temperature drops, shifting the algorithm's focus from global exploration to local search. Finally, the algorithm terminates when the stopping criteria are met and outputs the best-found inter-group sequence and its objective value.
Algorithm 5:  S A
Step 1. (Initial Solution Generation)
     Initial feasible solution Φ 0 Algorithm 2/Algorithm 3
     Current solution Φ Φ 0
     Compute the objective function value F Φ * = F Φ 0 Equation (9)/Equation (14)
      T ^ 1.0 (starting temperature)
      e ˇ 1 × 10 16 (lower temperature limit)
      a t ` 0.99999999 (cooling rate)
      L ˙ 1000 × n (iterations)
      J ˜ U ( 0 , 1 ) (random decision factor)
Step 2. (Iterative Optimization)
     While L ˙ > 0 and | T ^ | > e ˇ do:
        Randomly select positions C 1 ^ , C 2 ^
        If C 1 ^ = C 2 ^ :
           L ˙ L ˙ + 1 , continue
        Else:
          Generate neighbor Φ Swap C 1 ^ and C 2 ^ in Φ
         Δ E F Φ F Φ
        If Δ E < 0 or exp ( − Δ E / T ^ ) > J ˜ :
          Accept Φ Φ
          If F Φ < F Φ * :
             F Φ * F Φ
             Φ * Φ
        Update temperature: T ^ T ^ × a t `
         L ˙ L ˙ 1
Step 3. (Termination)
     Output best inter-group sequence Φ * and F Φ *
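The loop above translates into a short sketch. The parameter defaults and helper names below are illustrative rather than the paper's exact settings, and the objective is supplied as a callback.

```python
import math
import random

def simulated_annealing(seq0, objective, t0=1.0, t_min=1e-16,
                        alpha=0.99, iters=1000, rng=None):
    """SA over inter-group sequences with a swap neighbourhood.

    `objective(seq)` is the cost F to be minimised (problem-specific).
    """
    rng = rng or random.Random(0)
    cur = list(seq0)
    cur_val = objective(cur)
    best, best_val = list(cur), cur_val
    t = t0
    while iters > 0 and t > t_min:
        i, j = rng.randrange(len(cur)), rng.randrange(len(cur))
        if i == j:
            continue                         # resample, as in Step 2
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]  # swap neighbour
        delta = objective(cand) - cur_val
        # Metropolis criterion: always accept improvements, accept
        # worse moves with probability exp(-delta / t)
        if delta < 0 or math.exp(-delta / t) > rng.random():
            cur, cur_val = cand, cur_val + delta
            if cur_val < best_val:
                best, best_val = list(cur), cur_val
        t *= alpha                           # geometric cooling
        iters -= 1
    return best, best_val
```

As the temperature decays geometrically, the acceptance probability for worse moves shrinks, shifting the search from exploration to local refinement.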

8. Numerical Experiments

To comprehensively evaluate the performance and effectiveness of the proposed algorithms, a comparative analysis is conducted covering H A l i n , H A c o v , S A l i n , S A c o v , and B & B . In the simulation experiments, parameter values are set in accordance with Table 2, while retaining the flexibility to randomly generate parameters for adaptive testing. Table 3 and Table 4 report the CPU running times for the different due-window assignments under the linear and convex resource models, respectively.
Table 2. Numerical parameters.
Table 3. CPU time (ms) of different due-window (linear resource model).
Table 4. CPU time (ms) of different due-window (convex resource model).
The relative error of each algorithm is calculated as follows:
$$\frac{Z(X) - Z^*}{Z^*} \times 100\%,$$
where Z ( X ) is the objective function value obtained by algorithm X , X ∈ { H A l i n , H A c o v , S A l i n , S A c o v } , and Z * is the optimal value obtained by B & B .
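As a minimal illustration, this error measure can be computed as:

```python
def relative_error(z_alg, z_opt):
    """Relative error (Z(X) - Z*) / Z* of an algorithm's objective
    value against the B&B optimum Z*."""
    return (z_alg - z_opt) / z_opt
```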
A comparative analysis of the error performance shows that H A l i n attains a maximum relative error of 0.402509, while H A c o v reduces this to 0.081978. The S A variants perform best: S A l i n and S A c o v attain maximum errors of 0.065452 and 0.054697, respectively, highlighting the marked advantage of the S A models in precision control (see Table 5).
Table 5. Error of different due-window.

9. Summary and Future Research

This paper systematically investigates single-machine scheduling integrated with resource allocation under G T and D I F D W assignments, focusing on constant, linear, and convex resource consumption functions. The objective is to minimize the sum of the maximum earliness/tardiness penalties, the due-window costs, and the resource consumption cost. Polynomial-time optimal, heuristic, and branch-and-bound algorithms were developed, and numerical experiments demonstrated a good balance between computational efficiency and solution quality. Future research may explore further optimization and improvement of the algorithms, as well as extensions to more complex production settings (such as flow shops; see Sun et al. [], Lv and Wang [], Fasihi et al. []). Group scheduling problems with variable processing times (such as deterioration effects, see Sun et al. [], Cheng et al. []; learning effects, see Jiang et al. [], Liu and Wang []) are also worth studying. Real-world applications of the problem are likewise desirable, particularly validating the theoretical models through practical implementations in industries such as manufacturing, logistics, or healthcare, which would enhance both academic rigor and practical impact.

Author Contributions

Methodology, L.-H.Z. and J.-B.W.; Investigation, J.-B.W.; Writing—original draft, L.-H.Z.; Writing—review & editing, L.-H.Z. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Universities of Liaoning Province (Project No. LJ222510143003).

Data Availability Statement

The data used to support the findings of this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Appendix A

Proof of Lemma 1. 
Note that $d_1^{l,[h]} \le d_2^{l,[h]}$; the argument splits into the following cases.
  • Case 1. $d_1^{l,[h]} \le C_{l,[h]} \le d_2^{l,[h]}$. For job $O_{l,h}$, we have
$$F_{l,[h]} = \mu_l d_1^{l,[h]} + \upsilon_l \left( d_2^{l,[h]} - d_1^{l,[h]} \right).$$
If we move $d_2^{l,[h]}$ to the left so that $d_2^{l,[h]} = C_{l,[h]}$, we obtain
$$F_{l,[h]}' = \mu_l d_1^{l,[h]} + \upsilon_l \left( C_{l,[h]} - d_1^{l,[h]} \right) < F_{l,[h]}.$$
Hence, Case 1 is not optimal.
  • Case 2. $C_{l,[h]} \le d_1^{l,[h]} \le d_2^{l,[h]}$. For job $O_{l,h}$, we have
$$F_{l,[h]} = \kappa_l \left( d_1^{l,[h]} - C_{l,[h]} \right) + \mu_l d_1^{l,[h]} + \upsilon_l \left( d_2^{l,[h]} - d_1^{l,[h]} \right).$$
Moving $d_1^{l,[h]}$ and $d_2^{l,[h]}$ to the left so that $d_1^{l,[h]} = d_2^{l,[h]} = C_{l,[h]}$ yields
$$F_{l,[h]}' = \mu_l C_{l,[h]} < F_{l,[h]}.$$
Hence, Case 2 is not optimal.
From the above, $d_1^{l,[h]} \le d_2^{l,[h]} \le C_{l,[h]}$ is optimal. □

Appendix B

Proof of Lemma 2. 
For a given intra-group job sequence $\phi_l$ in group $G_l$, Lemma 1 gives $d_1^{l,[h]} \le d_2^{l,[h]} \le C_{l,[h]}$; then for job $O_{l,h}$,
$$F_l = \max_{1 \le h \le n_l} \left\{ \rho_l \left( C_{l,[h]} - d_2^{l,[h]} \right) + \mu_l d_1^{l,[h]} + \upsilon_l \left( d_2^{l,[h]} - d_1^{l,[h]} \right) \right\} = \max_{1 \le h \le n_l} \left\{ \rho_l C_{l,[h]} + \left( \mu_l - \upsilon_l \right) d_1^{l,[h]} + \left( \upsilon_l - \rho_l \right) d_2^{l,[h]} \right\}.$$
Obviously, if $\upsilon_l \ge \rho_l$, then $d_2^{l,[h]}$ should be 0, so $d_1^{l,[h]} = d_2^{l,[h]} = 0$ and $F_l = \rho_l C_{l,[n_l]}$. If $\upsilon_l < \rho_l$ and $\mu_l \ge \upsilon_l$, then $d_1^{l,[h]}$ should be 0 and $d_2^{l,[h]}$ should be $C_{l,[h]}$, giving $F_l = \upsilon_l C_{l,[n_l]}$. If $\upsilon_l < \rho_l$ and $\mu_l < \upsilon_l$, then both $d_1^{l,[h]}$ and $d_2^{l,[h]}$ should be $C_{l,[h]}$, giving $F_l = \mu_l C_{l,[n_l]}$. □
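The case analysis in Lemma 2 amounts to a three-way selection of the per-group cost. A direct transcription (argument names are illustrative) is:

```python
def optimal_window_cost(rho, mu, upsilon, c_last):
    """Optimal per-group cost from Lemma 2, where `rho`, `mu`,
    `upsilon` are the tardiness, window-start, and window-size cost
    coefficients and `c_last` is the last completion time in the group."""
    if upsilon >= rho:
        return rho * c_last        # d1 = d2 = 0
    if mu >= upsilon:
        return upsilon * c_last    # d1 = 0, d2 = C
    return mu * c_last             # d1 = d2 = C
```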

Appendix C

Proof of Lemma 3. 
Assume that the sequence Φ is an optimal inter-group sequence in which two adjacent groups $G_l$ and $G_m$ satisfy $\frac{s_l + \hat{P}_l}{\Omega_l} > \frac{s_m + \hat{P}_m}{\Omega_m}$. Exchanging the two groups yields a new sequence Φ′; let the start time of $G_l$ in Φ be $s$. Then the total cost of the two groups in Φ is
$$F_l(\Phi) + F_m(\Phi) = \Omega_l C_{l,[n_l]} + \Omega_m C_{m,[n_m]} = \Omega_l \left( s + s_l + \hat{P}_l \right) + \Omega_m \left( s + s_l + \hat{P}_l + s_m + \hat{P}_m \right).$$
For the exchanged sequence Φ′, the total cost is
$$F_l(\Phi') + F_m(\Phi') = \Omega_l \left( s + s_m + \hat{P}_m + s_l + \hat{P}_l \right) + \Omega_m \left( s + s_m + \hat{P}_m \right).$$
Subtracting the two yields
$$F_l(\Phi') + F_m(\Phi') - \left[ F_l(\Phi) + F_m(\Phi) \right] = \Omega_l \left( s_m + \hat{P}_m \right) - \Omega_m \left( s_l + \hat{P}_l \right) = \Omega_m \Omega_l \left( \frac{s_m + \hat{P}_m}{\Omega_m} - \frac{s_l + \hat{P}_l}{\Omega_l} \right) < 0.$$
Thus the exchanged sequence Φ′ has a strictly smaller total cost, contradicting the optimality of Φ. Hence, an optimal sequence orders the groups by non-decreasing $(s_l + \hat{P}_l)/\Omega_l$. □
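The exchange argument of Lemma 3 yields a simple sorting rule: order the groups by non-decreasing $(s_l + \hat{P}_l)/\Omega_l$. A minimal sketch, with illustrative field names `s`, `P`, and `Omega`:

```python
def order_groups(groups):
    """Order groups by non-decreasing (s_l + P_hat_l) / Omega_l,
    the ratio rule of Lemma 3.  Each group is a dict with setup
    time `s`, total processing time `P`, and weight `Omega`."""
    return sorted(groups, key=lambda g: (g["s"] + g["P"]) / g["Omega"])
```

The test below confirms, on a two-group instance, that the ratio order minimizes the weighted completion cost $\sum_l \Omega_l C_{l,[n_l]}$.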

Appendix D

Proof of Theorem 1. 
For each group $G_l$, initialize a feasible ordering. Step 1 involves only assignment operations, so the time complexity for each group is $O(1)$, and the total is $O(g)$. Step 2 requires $O(g)$ time. Steps 3 and 4 require $O(g \log g)$ and $O(g)$ time, respectively. Step 5 is a constant-time operation. Therefore, the overall time complexity of Algorithm 1 is $O(g \log g)$. □

Appendix E

Proof of Lemma 4. 
In Equation (7), the term $\sum_{l=1}^{\hat{g}} \sum_{r=l}^{\hat{g}} \Omega_{[r]} s_{[l]}$ is fixed, and $\sum_{l=1}^{\hat{g}} \sum_{h=1}^{\tilde{n}_l} \tilde{\xi}_{[l],[h]} p_{[l],[h]} \delta_{[l],[h]}$ is constant for a given inter-group sequence Φ. In $\tilde{F}$ of Equation (7), only the term $\chi_{[l],[h]} \tilde{p}_{[l],[h]}$ captures the individual contribution of $O_{[l],[h]}$. If $\chi_{[l],[h]} > 0$, then $\tilde{p}_{[l],[h]}$ should be made as small as possible to minimize its impact on $\tilde{F}$; hence, set $u^*_{[l],[h]} = \bar{u}_{[l],[h]}$, so that $\tilde{p}_{[l],[h]}$ attains its smallest value $p_{[l],[h]} - \delta_{[l],[h]} \bar{u}_{[l],[h]}$. If $\chi_{[l],[h]} < 0$, then $\tilde{p}_{[l],[h]}$ should be made as large as possible to reduce $\tilde{F}$, i.e., $\tilde{p}_{[l],[h]} = p_{[l],[h]}$ and $u^*_{[l],[h]} = 0$. If $\chi_{[l],[h]} = 0$, then $\tilde{p}_{[l],[h]}$ is irrelevant to $\tilde{F}$; hence, $u^*_{[l],[h]}$ can take any value in the interval $[0, \bar{u}_{[l],[h]}]$ in this case. □

Appendix F

Proof of Lemma 5. 
From Equation (7), the terms l = 1 g r = l g Ω [ r ] s [ l ] and l = 1 g h = 1 n l ξ [ l ] , [ h ] p [ l ] , [ h ] δ [ l ] , [ h ] are constants in a given inter-group sequence. Obviously, the minimization of this objective function is equivalent to the minimization of l = 1 g h = 1 n l χ [ l ] , [ h ] p ˜ [ l ] , [ h ] . From Hardy et al. [], the term can be minimized by matching the smallest χ l , h to the job with the largest p ˜ l , h , the second smallest χ l , h to the job with the second largest p ˜ l , h , and so on. □

Appendix G

Proof of Lemma 6. 
This can be proved by a neighbor-exchange argument. Assume that the optimal inter-group sequence is $S = \{S_1, x, y, S_2\}$, where $x$ occupies the $j$-th position of $S$ and $y$ immediately follows, and that the pair satisfies $\Omega_{[x]}/s_{[x]} > \Omega_{[y]}/s_{[y]}$. Exchanging $x$ and $y$ yields the sequence $S'$, with the orders within $S_1$ and $S_2$ unchanged. Subtracting the objective values of the two sequences gives
$$\sum_{l=1}^{g} \sum_{r=l}^{g} \Omega_{[r]} s_{[l]}(S) - \sum_{l=1}^{g} \sum_{r=l}^{g} \Omega_{[r]} s_{[l]}(S') = \left( \Omega_{[x]} + \Omega_{[y]} + \cdots + \Omega_{[g]} \right) s_{[x]} + \left( \Omega_{[y]} + \cdots + \Omega_{[g]} \right) s_{[y]} - \left( \Omega_{[y]} + \Omega_{[x]} + \cdots + \Omega_{[g]} \right) s_{[y]} - \left( \Omega_{[x]} + \cdots + \Omega_{[g]} \right) s_{[x]} = \Omega_{[y]} s_{[x]} - \Omega_{[x]} s_{[y]} = s_{[x]} s_{[y]} \left( \frac{\Omega_{[y]}}{s_{[y]}} - \frac{\Omega_{[x]}}{s_{[x]}} \right) < 0.$$
Hence $S$ attains a smaller objective value than $S'$, which validates the lemma. □

Appendix H

Proof of Lemma 7. 
We differentiate the objective function in Equation (14) with respect to $u_{[l],[h]}$:
$$\frac{\partial F}{\partial u_{[l],[h]}} = -\frac{k \sum_{r=l}^{g} \Omega_{[r]} w_{[l],[h]}^{k}}{u_{[l],[h]}^{k+1}} + \xi_{[l],[h]}, \quad l = 1, 2, \ldots, g; \; h = 1, 2, \ldots, n_{[l]}.$$
Setting $\frac{\partial F}{\partial u_{[l],[h]}} = 0$, the optimal value of $u_{[l],[h]}$ follows as
$$u^*_{[l],[h]} \left( \Phi, \phi_1, \phi_2, \ldots, \phi_g \right) = \left( \frac{k \sum_{r=l}^{g} \Omega_{[r]} w_{[l],[h]}^{k}}{\xi_{[l],[h]}} \right)^{1/(k+1)}. \qquad \Box$$
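The closed form of Lemma 7 evaluates directly. The helper below (an illustrative name, with `omega_tail` standing for $\sum_{r=l}^{g} \Omega_{[r]}$) also lets one verify the stationarity condition numerically:

```python
def optimal_resource(omega_tail, w, xi, k):
    """Closed-form optimal resource amount from Lemma 7:
    u* = (k * omega_tail * w**k / xi) ** (1 / (k + 1))."""
    return (k * omega_tail * w ** k / xi) ** (1.0 / (k + 1))
```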

References

  1. Sun, X.; Liu, T.; Geng, X.-N.; Hu, Y.; Xu, J.-X. Optimization of scheduling problems with deterioration effects and an optional maintenance activity. J. Sched. 2023, 26, 251–266. [Google Scholar] [CrossRef]
  2. Lv, D.-Y.; Wang, J.-B. No-idle flow shop scheduling with deteriorating jobs and common due date under dominating machines. Asia-Pac. J. Oper. Res. 2024, 41, 2450003. [Google Scholar] [CrossRef]
  3. Huang, X. Bicriterion scheduling with group technology and deterioration effect. J. Appl. Math. Comput. 2019, 60, 455–464. [Google Scholar] [CrossRef]
  4. He, X.; Pan, Q.K.; Gao, L.; Neufeld, J.S.; Gupta, J.N.D. Historical information based iterated greedy algorithm for distributed flowshop group scheduling problem with sequence-dependent setup times. Omega 2024, 123, 102997. [Google Scholar] [CrossRef]
  5. Shabtay, D.; Steiner, G. Optimal due date assignment and resource allocation to minimize the weighted number of tardy jobs on a single machine. Manuf. Serv. Oper. Manag. 2007, 9, 332–350. [Google Scholar] [CrossRef]
  6. Zhang, Y.; Sun, X.; Liu, T.; Wang, J.; Geng, X.-N. Single-machine scheduling simultaneous consideration of resource allocations and exponential time-dependent learning effects. J. Oper. Res. Soc. 2025, 76, 528–540. [Google Scholar] [CrossRef]
  7. Janiak, A.; Janiak, W.A.; Krysiak, T.; Kwiatkowski, T. A survey on scheduling problems with due windows. Eur. J. Oper. Res. 2015, 242, 347–357. [Google Scholar] [CrossRef]
  8. Chen, X.; Liang, Y.; Sterna, M.; Wang, W.; Blazewicz, J. Fully polynomial time approximation scheme to maximize early work on parallel machines with common due date. Eur. J. Oper. Res. 2020, 284, 67–74. [Google Scholar] [CrossRef]
  9. Wang, X.; Liu, W. Delivery scheduling with variable processing times and due date assignments. Bull. Malays. Math. Sci. Soc. 2025, 48, 76. [Google Scholar] [CrossRef]
  10. Zhang, X.-G.; Tang, X.-M. The two single-machine scheduling problems with slack due date to minimize total early work and late work. J. Oper. Res. Soc. China 2024. [Google Scholar] [CrossRef]
  11. Hoffmann, J.; Neufeld, J.S.; Buscher, U. Minimizing the earliness–tardiness for the customer order scheduling problem in a dedicated machine environment. J. Sched. 2024, 27, 525–543. [Google Scholar] [CrossRef]
  12. Bai, B.; Wei, C.-M.; He, H.-Y.; Wang, J.-B. Study on single-machine common/slack due-window assignment scheduling with delivery times, variable processing times and outsourcing. Mathematics 2024, 12, 2833. [Google Scholar] [CrossRef]
  13. Zhao, S. Resource allocation flowshop scheduling with learning effect and slack due window assignment. J. Ind. Manag. Optim. 2021, 17, 2817. [Google Scholar] [CrossRef]
  14. Lu, Y.-Y.; Zhang, S.; Tao, J.-Y. Earliness-tardiness scheduling with delivery times and deteriorating jobs. Asia-Pac. J. Oper. Res. 2025, 42, 2450009. [Google Scholar] [CrossRef]
  15. Qiu, X.-Y.; Wang, J.-B. Single-machine scheduling with mixed due-windows and deterioration effects. J. Appl. Math. Comput. 2025, 71, 2527–2542. [Google Scholar] [CrossRef]
  16. Sun, X.; Geng, X.-N.; Liu, T. Due-window assignment scheduling in the proportionate flow shop setting. Ann. Oper. Res. 2020, 292, 113–131. [Google Scholar] [CrossRef]
  17. Wang, J.-B.; Lv, D.-Y.; Wan, C. Proportionate flow shop scheduling with job-dependent due windows and position-dependent weights. Asia-Pac. J. Oper. Res. 2025, 42, 2450011. [Google Scholar] [CrossRef]
  18. Tian, Y. Single-machine due-window assignment scheduling with resource allocation and generalized earliness/tardiness penalties. Asia-Pac. J. Oper. Res. 2022, 39, 2150041. [Google Scholar] [CrossRef]
  19. Wang, J.-B.; Sun, Z.-W.; Gao, M. Research on single-machine scheduling with due-window assignment and resource allocation under total resource consumption cost is bounded. J. Appl. Math. Comput. 2025, 71, 7905–7927. [Google Scholar] [CrossRef]
  20. Sun, X.; Geng, X.-N.; Wang, J.; Pan, L. Slack due window assignment scheduling in the single-machine with controllable processing times. J. Ind. Manag. Optim. 2024, 20, 15–35. [Google Scholar] [CrossRef]
  21. Qian, J.; Chang, G.; Zhang, X. Single-machine common due-window assignment and scheduling with position-dependent weights, delivery time, learning effect and resource allocations. J. Appl. Math. Comput. 2024, 70, 1965–1994. [Google Scholar] [CrossRef]
  22. Wang, J.-B.; Wang, Y.-C.; Wan, C.; Lv, D.-Y.; Zhang, L. Controllable processing time scheduling with total weighted completion time objective and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024, 41, 2350026. [Google Scholar] [CrossRef]
  23. Weng, W.; Chu, C.; Wu, P. Resource allocation to minimize the makespan with multi-resource operations. J. Syst. Sci. Complex. 2024, 37, 2054–2070. [Google Scholar] [CrossRef]
  24. Sarpatwar, K.; Schieber, B.; Shachnai, H. The preemptive resource allocation problem. J. Sched. 2024, 27, 103–118. [Google Scholar] [CrossRef]
  25. Sun, Y.; Lv, D.-Y.; Huang, X. Properties for due window assignment scheduling on a two-machine no-wait proportionate flow shop with learning effects and resource allocation. J. Oper. Res. Soc. 2025, 1–17. [Google Scholar] [CrossRef]
  26. Shabtay, D.; Itskovich, Y.; Yedidsion, L.; Oron, D. Optimal due date assignment and resource allocation in a group technology scheduling environment. Comput. Oper. Res. 2010, 37, 2218–2228. [Google Scholar] [CrossRef]
  27. Liu, W.; Wang, X. Group technology scheduling with due-date assignment and controllable processing times. Processes 2023, 11, 1271. [Google Scholar] [CrossRef]
  28. Chen, Y.; Ma, X.; Zhang, G.; Cheng, Y. On optimal due date assignment without restriction and resource allocation in group technology scheduling. J. Comb. Optim. 2023, 45, 64. [Google Scholar] [CrossRef]
  29. Chen, K.; Han, S.; Huang, H.; Ji, M. A group-dependent due window assignment scheduling problem with controllable learning effect. Asia-Pac. J. Oper. Res. 2023, 40, 2250025. [Google Scholar] [CrossRef]
  30. Wang, X.; Liu, W. Single machine group scheduling jobs with resource allocations subject to unrestricted due date assignments. J. Appl. Math. Comput. 2024, 70, 6283–6308. [Google Scholar] [CrossRef]
  31. Yin, N.; Gao, M. Single-machine group scheduling with general linear deterioration and truncated learning effects. Comput. Appl. Math. 2024, 43, 386. [Google Scholar] [CrossRef]
  32. Yin, N.; He, H.; Zhao, Y.; Chang, Y.; Wang, N. Integrating group setup time deterioration effects and job processing time learning effects with group technology in single-machine green scheduling. Axioms 2025, 14, 480. [Google Scholar] [CrossRef]
  33. Zhang, Z.Q.; Xu, Y.X.; Qian, B.; Hu, R.; Wu, F.C.; Wang, L. An enhanced estimation of distribution algorithm with problem-specific knowledge for distributed no-wait flowshop group scheduling problems. Swarm Evol. Comput. 2024, 87, 101559. [Google Scholar] [CrossRef]
  34. Li, M.; Goossens, D. Grouping and scheduling multiple sports leagues: An integrated approach. J. Oper. Res. Soc. 2025, 76, 739–757. [Google Scholar] [CrossRef]
  35. Ren, J.F.; Yang, Y. Common due-window assignment and minmax scheduling with resource allocation and group technology on a single machine. Eng. Optim. 2022, 54, 1819–1834. [Google Scholar] [CrossRef]
  36. Lv, D.-Y.; Wang, J.-B. Single-machine group technology scheduling with resource allocation and slack due window assignment including minmax criterion. J. Oper. Res. Soc. 2025, 76, 1696–1712. [Google Scholar] [CrossRef]
  37. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  38. Sun, X.; Geng, X.-N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689. [Google Scholar] [CrossRef]
  39. Lv, D.-Y.; Wang, J.-B. Research on two-machine flow shop scheduling problem with release dates and truncated learning effects. Eng. Optim. 2025, 57, 1828–1848. [Google Scholar] [CrossRef]
  40. Fasihi, M.; Tavakkoli-Moghaddam, R.; Jolai, F. A bi-objective re-entrant permutation flow shop scheduling problem: Minimizing the makespan and maximum tardiness. Oper. Res. 2023, 23, 29. [Google Scholar] [CrossRef]
  41. Sun, Y.; He, H.; Zhao, Y.; Wang, J.-B. Minimizing makespan scheduling on a single machine with general positional deterioration effects. Axioms 2025, 14, 290. [Google Scholar] [CrossRef]
  42. Cheng, T.C.E.; Kravchenko, S.A.; Lin, B.M.T. On scheduling of step-improving jobs to minimize the total weighted completion time. J. Oper. Res. Soc. 2024, 75, 720–730. [Google Scholar] [CrossRef]
  43. Jiang, Z.-Y.; Chen, F.-F.; Zhang, X.-D. Single-machine scheduling with times-based and job-dependent learning effect. J. Oper. Res. Soc. 2017, 68, 809–815. [Google Scholar] [CrossRef]
  44. Liu, Z.; Wang, J.-B. Single-machine scheduling with simultaneous learning effects and delivery times. Mathematics 2024, 12, 2522. [Google Scholar] [CrossRef]
  45. Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1967. [Google Scholar]