Article

Integrating Group Setup Time Deterioration Effects and Job Processing Time Learning Effects with Group Technology in Single-Machine Green Scheduling

1 School of Science, Shenyang Aerospace University, Shenyang 110136, China
2 School of Economics, Shenyang University, Shenyang 110096, China
3 Institute of Carbon Neutrality Technology and Policy, Shenyang University, Shenyang 110044, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2025, 14(7), 480; https://doi.org/10.3390/axioms14070480
Submission received: 27 April 2025 / Revised: 5 June 2025 / Accepted: 17 June 2025 / Published: 20 June 2025
(This article belongs to the Special Issue Advances in Mathematical Optimization Algorithms and Its Applications)

Abstract

We study single-machine group green scheduling with group setup time deterioration effects and job-processing time learning effects, where the setup time of a group is a general deteriorating function of its starting setup time and the actual processing time of a job is a non-increasing function of its position. We focus on determining the job schedule within each group and the group schedule so as to minimize the total weighted completion time. This problem is proved to be NP-hard. Given its NP-hardness, we establish some optimality properties (including lower and upper bounds) and then propose a branch-and-bound algorithm and two heuristic algorithms (a modified Nawaz–Enscore–Ham algorithm and a simulated annealing algorithm). Finally, numerical simulations indicate the effectiveness of these algorithms: the branch-and-bound algorithm can solve random instances with 100 jobs and 14 groups within a reasonable time, and simulated annealing is more accurate than the modified Nawaz–Enscore–Ham algorithm.

1. Introduction

In the modern manufacturing industry, efficient and green scheduling can improve efficiency and reduce production costs by shortening production-completion times, and it is widely used. In the context of green manufacturing, green scheduling therefore has both theoretical and practical value (see Xue et al. [1], Foumani and Smith-Miles [2], Li and Wang [3], Kong et al. [4]). In real scheduling problems, we often find settings in which job-processing times and/or group setup times may change due to deterioration effects (DeterE) and/or learning effects (LearnE). Under DeterE (resp. LearnE), job processing times and/or group setup times are non-decreasing (resp. non-increasing) functions of their starting times (see Yin et al. [5]; Huang and Wang [6]; Pei et al. [7]; Gawiejnowicz [8] (resp. Lu et al. [9]; Jiang et al. [10]; Azzouz et al. [11]; Sun et al. [12]; Wang et al. [13]; Pei et al. [14])).
In addition, production efficiency can be increased by grouping parts and products with similar designs and/or production processes; this practice is known as group technology. Group technology (GroupT), which groups similar products into families, helps increase the efficiency of operations and reduce facility requirements (Keshavarz et al. [15]; Ji et al. [16]; Neufeld et al. [17]; Wang and Liang [18]).
Recently, Ning and Sun [19] studied single-machine GroupT scheduling with LearnE and DeterE. They showed that the makespan minimization is polynomially solvable. For the total weighted completion time minimization, they demonstrated that a special case of the problem is polynomially solvable. This model, i.e., GroupT scheduling integrated with LearnE and DeterE, has applications in the forging process of steel plants and in the preheating of parts in plastic production; its benefit is a reduction in production costs through shorter production-completion times. Ning and Sun [19] only considered a special case of the total weighted completion time minimization; in this article, we continue their work, but consider the general case. The main contributions of this article are as follows:
(1) We demonstrate that the general problem of minimizing the total weighted completion time is NP-hard;
(2) We propose a heuristic to generate an initial upper-bound solution;
(3) We solve the general problem using a branch-and-bound algorithm and two heuristic-based approaches;
(4) We conduct a series of numerical simulations to validate the effectiveness of the proposed algorithms.
The paper is organized as follows. Section 2 presents the literature review. Section 3 gives the formulation. In Section 4, we introduce the branch-and-bound algorithm and two heuristic algorithms. The numerical tests are given in Section 5. Section 6 presents conclusions.

2. Literature Review

Scheduling problems involving DeterE and LearnE have been extensively studied. Zhao [20] examined scheduling with truncated LearnE and setup times, while Wu et al. [21] investigated a bicriterion problem with step-improving processing times, for which they developed a branch-and-bound algorithm and heuristic approaches. Ma et al. [22] analyzed online LearnE problems and introduced an optimal algorithm for total completion time minimization. Between 2023 and 2025, Miao et al. [23] and Lu et al. [24] explored DeterE scheduling with delivery times. Sun et al. [25] addressed single-machine DeterE scheduling incorporating maintenance activities. Zhang et al. [26] studied two-agent scheduling with resource allocation and deterioration effects, followed by Zhang et al. [27], who integrated due-window assignment. Liu and Wang [28] focused on LearnE scheduling with delivery times, while Wang et al. [29] analyzed DeterE scheduling with controllable processing times. Lv et al. [30] proposed algorithms for total weighted completion time minimization in DeterE scheduling with ready times. Qian et al. [31] and Qian and Guo [32] extended this work to include due-window assignment and resource allocation. Mao et al. [33] examined single-machine DeterE scheduling with delivery times. Lv et al. [34] and Qiu and Wang [35] studied single-machine DeterE scheduling with due-windows. Lv and Wang [36] considered no-idle flow shop scheduling with DeterE and a common due date. Paredes-Astudillo et al. [37] considered the makespan-minimization flowshop problem with LearnE. Parichehreh et al. [38] studied unrelated parallel machine scheduling with LearnE of operators and DeterE of jobs. Bai et al. [39] considered delivery scheduling with resource allocation, outsourcing and LearnE; they proved that some due-window assignment problems are polynomially solvable. Lv and Wang [40] conducted flow shop scheduling with LearnE and DeterE; under peak power constraints for minimizing the makespan, they proposed a hybrid genetic algorithm. Lv and Wang [41] studied flow shop scheduling with LearnE and release dates; to solve the total completion time minimization, they proposed some solution algorithms. Wang et al. [42] investigated scheduling problems with truncated LearnE and delivery times. Zhang et al. [43] and Zhang et al. [44] considered single-machine scheduling problems with LearnE and resource allocation. Song et al. [45] considered step-LearnE scheduling with job rejection. Sun et al. [46] considered the flow shop problem with time-dependent LearnE; for two machines and makespan minimization, they proposed some heuristics and a branch-and-bound algorithm. Wang and Liu [47] studied delivery scheduling with LearnE and DeterE; for three due date assignments, they proved that some earliness–tardiness minimizations are polynomially solvable. Sun et al. [48] conducted research on the problem with positional DeterE; they proved that makespan minimization is polynomially solvable. Sun et al. [49] considered no-wait flow shop scheduling with LearnE and resource allocation; under two machines and due-window assignments, they showed that some earliness–tardiness minimizations are polynomially solvable.
In addition, for GroupT scheduling, some early studies in the literature are as follows: Kuo and Yang [50]; Wu et al. [51]; Lee and Wu [52]; Yang and Yang [53]; Kuo [54]; He and Sun [55]; Fan et al. [56]; Huang [57]; the specific research results of these papers are given in Table 1. In 2019, Liu et al. [58] considered single-machine GroupT scheduling with DeterE and ready times; for the makespan minimization, they proposed some solution algorithms. Miao [59] studied parallel-batch scheduling with GroupT and DeterE. In 2020, Sun et al. [60] considered GroupT scheduling problems with DeJong's LearnE. In 2021, Xu et al. [61] studied scheduling with a maintenance activity and DeterE. Liu [62] considered single-machine GroupT scheduling with due-date and due-window assignment. In 2023, Yan et al. [63] considered GroupT scheduling with resource allocation and LearnE. Liu and Wang [64] addressed single-machine GroupT scheduling with resource allocation and due date assignments. Chen et al. [65] studied due-window assignment scheduling with GroupT and controllable LearnE. In 2024, Li et al. [66] considered resource-allocation scheduling of jobs with GroupT and LearnE. Li et al. [67] addressed two-agent scheduling with resource allocation, GroupT and DeterE. Lv and Wang [68] studied single-machine GroupT scheduling with resource allocation; under the slack due-window assignment of minimizing the maximum window cost, they proposed solution algorithms. Wang and Liu [69,70] considered single-machine GroupT scheduling with resource allocation and due date assignments. Yin and Gao [71] addressed single-machine GroupT scheduling with general DeterE and truncated LearnE, where the objective functions included the makespan and total completion time. Zhang et al. [72] considered no-wait flowshop problems with GroupT; for the makespan minimization, they proposed some solution algorithms. Wang et al. [73] studied the distributed flowshop problem with GroupT. Han et al. [74] considered distributed heterogeneous flowshop scheduling with GroupT. Li and Goossens [75] studied multiple sports league scheduling with GroupT. Miao et al. [76] considered GroupT scheduling with general logarithmic DeterE; they proved that the maximal completion time minimization is polynomially solvable.

3. Problem Formulation

We study a single-machine group scheduling problem in which $n$ jobs are grouped into $g$ groups $G_1, G_2, \ldots, G_{g-1}, G_g$; each group $G_h$ has $n_h$ jobs, i.e., $n_1 + n_2 + \cdots + n_{g-1} + n_g = n$. A machine setup time $\tilde{S}_h(t)$ is incurred when the machine changes from one group to another, and the starting setup time of the first group is $s_0 \geq 0$. Let $J_{h,l}$ be the $l$th job in $G_h$, $h = 1, 2, \ldots, g-1, g$; $l = 1, 2, \ldots, n_h-1, n_h$. As in Browne and Yechiali [77] and Ning and Sun [19], the actual setup time of $G_h$ is
$$\tilde{S}_h(t) = \mu_h + \nu_h s, \quad h = 1, 2, \ldots, g-1, g, \tag{1}$$
where $\mu_h$ (resp. $\nu_h$, $s$) is the basic setup time (resp. setup deterioration rate, starting setup time) of $G_h$. Let $J_{h,l}$ be job $J_l$ in group $G_h$, and let $[f]$ denote the job (group) scheduled in the $f$th position, as in Ning and Sun [19]; if job $J_{h,l}$ is scheduled in the $r$th position, its actual processing time is
$$\tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right), \quad h = 1, 2, \ldots, g; \ r, l = 1, 2, \ldots, n_h, \tag{2}$$
where $p_{h,l}$ is the basic processing time of $J_{h,l}$, $0 \leq A \leq 1$, $0 \leq B \leq 1$, $A + B = 1$, $\alpha_h: [0, +\infty) \to [0, 1]$ (with $\alpha_h(0) = 1$) is a differentiable non-increasing function, $\beta_h: [1, +\infty) \to [0, 1]$ (with $\beta_h(1) = 1$) is a non-increasing function, and $\sum_{f=1}^{0} p_{h,[f]} := 0$.
For a given schedule, $C_{h,l}$ is the completion time of $J_{h,l}$ in $G_h$; our aim is to determine a schedule that minimizes the total weighted completion time $\sum_h \sum_l w_{h,l} C_{h,l} = \sum_{h=1}^{g} \sum_{l=1}^{n_h} w_{h,l} C_{h,l}$, where $w_{h,l}$ denotes the weight of $J_{h,l}$. If $w_{h,l} = 1$, $\sum_h \sum_l C_{h,l} = \sum_{h=1}^{g} \sum_{l=1}^{n_h} C_{h,l}$ is the total completion time. The problem is denoted by
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}. \tag{3}$$
Ning and Sun [19] proved that
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ C_{\max}$$
is polynomially solvable. Ning and Sun [19] also showed that some special cases of
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l C_{h,l}$$
and
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}$$
are polynomially solvable. The results of some GroupT problems are summarized in Table 1.

4. $\sum_h \sum_l w_{h,l} C_{h,l}$ Minimization

Theorem 1
(Ning and Sun [19]). For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l},$$
if $p_{h,l} \leq p_{h,j} \Rightarrow w_{h,l} \geq w_{h,j}$ (i.e., the disagreeable condition), the optimal job schedule in group $G_h$ is arranged in non-decreasing order of $p_{h,l}/w_{h,l}$, i.e., for $G_h$,
$$\frac{p_{h,(1)}}{w_{h,(1)}} \leq \frac{p_{h,(2)}}{w_{h,(2)}} \leq \cdots \leq \frac{p_{h,(n_h-1)}}{w_{h,(n_h-1)}} \leq \frac{p_{h,(n_h)}}{w_{h,(n_h)}}, \quad h = 1, 2, \ldots, g-1, g.$$
Corollary 1
(Ning and Sun [19]). For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l C_{h,l}$$
(i.e., $w_{h,l} = 1$), the optimal job schedule in group $G_h$ is arranged in non-decreasing order of $p_{h,l}$, i.e., for $G_h$,
$$p_{h,(1)} \leq p_{h,(2)} \leq \cdots \leq p_{h,(n_h-1)} \leq p_{h,(n_h)}, \quad h = 1, 2, \ldots, g-1, g.$$
Theorem 2
(Ning and Sun [19]). For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l},$$
if all jobs in each group satisfy the disagreeable condition (i.e., $p_{h,l} \leq p_{h,j} \Rightarrow w_{h,l} \geq w_{h,j}$), and if $\rho_h \leq \rho_i \Leftrightarrow \varrho_h \leq \varrho_i$, the optimal group schedule is arranged in non-decreasing order of $\rho_h$ (or $\varrho_h$), $h = 1, 2, \ldots, g$, where $\rho_h = \frac{\nu_h}{(1+\nu_h)\sum_{l=1}^{n_h} w_{h,l}}$, $\rho_i = \frac{\nu_i}{(1+\nu_i)\sum_{l=1}^{n_i} w_{i,l}}$, $\varrho_h = \frac{\mu_h + \Theta_h}{(1+\nu_h)\sum_{l=1}^{n_h} w_{h,l}}$, $\varrho_i = \frac{\mu_i + \Theta_i}{(1+\nu_i)\sum_{l=1}^{n_i} w_{i,l}}$,
$$\Theta_h = \sum_{l=1}^{n_h} p_{h,[l]}\left(A\alpha_h\left(\sum_{f=1}^{l-1} p_{h,[f]}\right) + B\beta_h(l)\right)$$
and
$$\Theta_i = \sum_{l=1}^{n_i} p_{i,[l]}\left(A\alpha_i\left(\sum_{f=1}^{l-1} p_{i,[f]}\right) + B\beta_i(l)\right).$$
Corollary 2
(Ning and Sun [19]). For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l C_{h,l}$$
(i.e., $w_{h,l} = 1$), if $\psi_h \leq \psi_i \Leftrightarrow \phi_h \leq \phi_i$ (i.e., the agreeable condition), the optimal group schedule is arranged in non-decreasing order of $\psi_h$ (or $\phi_h$), $h = 1, 2, \ldots, g$, where $\psi_h = \frac{\nu_h}{n_h(1+\nu_h)}$, $\psi_i = \frac{\nu_i}{n_i(1+\nu_i)}$, $\phi_h = \frac{\mu_h + \widetilde{\widetilde{M}}_h}{n_h(1+\nu_h)}$, $\phi_i = \frac{\mu_i + \widetilde{\widetilde{M}}_i}{n_i(1+\nu_i)}$.
Theorem 3.
Problem
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}$$
is NP-hard.
Proof. 
From Bachman et al. [78], the total weighted completion time minimization with deteriorating jobs is NP-hard; hence,
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}$$
is also NP-hard.    ☐
For a given job schedule and group schedule
$$S = (J_{[1],[1]}, J_{[1],[2]}, \ldots, J_{[1],[n_1]};\ J_{[2],[1]}, J_{[2],[2]}, \ldots, J_{[2],[n_2]};\ \ldots;\ J_{[g],[1]}, J_{[g],[2]}, \ldots, J_{[g],[n_g]}),$$
by mathematical induction, it follows that
$$C_{[1],[l]} = s_0 + \mu_{[1]} + \nu_{[1]} s_0 + \sum_{j=1}^{l} p_{[1],[j]}\left(A\alpha_{[1]}\left(\sum_{f=1}^{j-1} p_{[1],[f]}\right) + B\beta_{[1]}(j)\right) = \mu_{[1]} + s_0(1+\nu_{[1]}) + \sum_{j=1}^{l} p_{[1],[j]}\left(A\alpha_{[1]}\left(\sum_{f=1}^{j-1} p_{[1],[f]}\right) + B\beta_{[1]}(j)\right), \quad l = 1, 2, \ldots, n_{[1]}-1, n_{[1]}, \tag{4}$$
$$C_{[2],[l]} = \mu_{[2]} + (1+\nu_{[2]}) C_{[1],[n_{[1]}]} + \sum_{j=1}^{l} p_{[2],[j]}\left(A\alpha_{[2]}\left(\sum_{f=1}^{j-1} p_{[2],[f]}\right) + B\beta_{[2]}(j)\right) = \mu_{[2]} + \mu_{[1]}(1+\nu_{[2]}) + s_0(1+\nu_{[1]})(1+\nu_{[2]}) + (1+\nu_{[2]})\sum_{j=1}^{n_{[1]}} p_{[1],[j]}\left(A\alpha_{[1]}\left(\sum_{f=1}^{j-1} p_{[1],[f]}\right) + B\beta_{[1]}(j)\right) + \sum_{j=1}^{l} p_{[2],[j]}\left(A\alpha_{[2]}\left(\sum_{f=1}^{j-1} p_{[2],[f]}\right) + B\beta_{[2]}(j)\right), \quad l = 1, 2, \ldots, n_{[2]}-1, n_{[2]}, \tag{5}$$
$$\vdots$$
$$C_{[g],[l]} = \sum_{k=1}^{g} \mu_{[k]} \prod_{o=k+1}^{g}(1+\nu_{[o]}) + s_0 \prod_{o=1}^{g}(1+\nu_{[o]}) + \sum_{k=1}^{g-1}\left[\sum_{j=1}^{n_{[k]}} p_{[k],[j]}\left(A\alpha_{[k]}\left(\sum_{f=1}^{j-1} p_{[k],[f]}\right) + B\beta_{[k]}(j)\right)\right]\prod_{o=k+1}^{g}(1+\nu_{[o]}) + \sum_{j=1}^{l} p_{[g],[j]}\left(A\alpha_{[g]}\left(\sum_{f=1}^{j-1} p_{[g],[f]}\right) + B\beta_{[g]}(j)\right), \quad l = 1, 2, \ldots, n_{[g]}-1, n_{[g]}, \tag{6}$$
where the empty product $\prod_{o=o'}^{o'-1}(1+\nu_{[o]}) := 1$.
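To make Equations (4)–(6) concrete, the following is a minimal evaluation sketch (not the authors' code) that computes the completion times and the total weighted completion time for a given group and job order. It assumes the power-function learning effects $\alpha_h(x) = (1+x)^{a_h}$ and $\beta_h(r) = r^{b_h}$ used in the numerical experiments of Section 5 (with $a_h, b_h \leq 0$); all identifiers are illustrative.

```python
# Evaluation sketch for the model of Section 3, assuming alpha_h(x) = (1+x)**a_h
# and beta_h(r) = r**b_h as in Section 5; all names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Group:
    mu: float        # basic setup time mu_h
    nu: float        # setup deterioration rate nu_h
    a: float         # exponent of alpha_h (non-positive)
    b: float         # exponent of beta_h (non-positive)
    p: List[float]   # basic processing times, in scheduled order
    w: List[float]   # weights, in the same order

def total_weighted_completion_time(groups: List[Group], s0: float = 0.0,
                                   A: float = 0.5, B: float = 0.5) -> float:
    """Evaluate sum_h sum_l w_{h,l} C_{h,l} for a given group and job schedule."""
    twc, t = 0.0, s0
    for g in groups:
        t = g.mu + (1.0 + g.nu) * t          # setup finishes at mu_h + (1 + nu_h) * s
        cum_p = 0.0                          # sum of basic times of earlier jobs in the group
        for r, (p, w) in enumerate(zip(g.p, g.w), start=1):
            actual = p * (A * (1.0 + cum_p) ** g.a + B * r ** g.b)
            t += actual                      # completion time C_{h,l}
            twc += w * t
            cum_p += p
    return twc

# toy instance with two groups
G1 = Group(mu=3, nu=0.2, a=-0.3, b=-0.2, p=[4, 2, 5], w=[1, 3, 2])
G2 = Group(mu=2, nu=0.5, a=-0.4, b=-0.1, p=[1, 6], w=[2, 1])
print(total_weighted_completion_time([G1, G2]))
```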

4.1. Job Schedule for Each Group $G_h$

To solve
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l},$$
the optimal job schedule $\pi_h^{*}$ for $G_h$ must be obtained. If the group schedule is given, then from Equation (6), for $G_h$, it is only necessary to minimize
$$\widetilde{WZ}_{G_h} = \sum_{l=1}^{n_h} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right). \tag{7}$$
Theorem 4.
For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l},$$
if the group schedule is given, a lower bound on the job schedule cost for group $G_h$ is
$$LB(\widetilde{WZ}_{G_h}) = \sum_{l=1}^{\vartheta} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right) + \sum_{l=\vartheta+1}^{n_h} w_{h,[l]} \sum_{j=1}^{\vartheta} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right) + \sum_{l=\vartheta+1}^{n_h} w_{h,\langle l\rangle} \sum_{j=\vartheta+1}^{l} p_{h,\langle\langle j\rangle\rangle}\left(A\alpha_h\left(\sum_{f=1}^{\vartheta} p_{h,[f]} + \sum_{f=\vartheta+1}^{j-1} p_{h,\langle\langle f\rangle\rangle}\right) + B\beta_h(j)\right), \tag{8}$$
where $w_{h,\langle\vartheta+1\rangle} \geq w_{h,\langle\vartheta+2\rangle} \geq \cdots \geq w_{h,\langle n_h-1\rangle} \geq w_{h,\langle n_h\rangle}$ and $p_{h,\langle\langle\vartheta+1\rangle\rangle} \leq p_{h,\langle\langle\vartheta+2\rangle\rangle} \leq \cdots \leq p_{h,\langle\langle n_h-1\rangle\rangle} \leq p_{h,\langle\langle n_h\rangle\rangle}$ (note that $w_{h,\langle l\rangle}$ and $p_{h,\langle\langle l\rangle\rangle}$, $l = \vartheta+1, \vartheta+2, \ldots, n_h-1, n_h$, do not necessarily correspond to the same job).
Proof. 
Let $XS_h = (X_h^{sc}, X_h^{un})$ be a job schedule for group $G_h$, where $X_h^{sc}$ (resp. $X_h^{un}$) is the scheduled (resp. unscheduled) part, and there are $\vartheta$ jobs in $X_h^{sc}$; it follows that
$$\widetilde{WZ}_{G_h}(X_h^{sc}, X_h^{un}) = \sum_{l=1}^{\vartheta} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right) + \sum_{l=\vartheta+1}^{n_h} w_{h,[l]} \sum_{j=1}^{\vartheta} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right) + \sum_{l=\vartheta+1}^{n_h} w_{h,[l]} \sum_{j=\vartheta+1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{\vartheta} p_{h,[f]} + \sum_{f=\vartheta+1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right). \tag{9}$$
From Equation (9), the terms $\sum_{f=1}^{\vartheta} p_{h,[f]}$, $\sum_{l=1}^{\vartheta} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right)$, and $\sum_{l=\vartheta+1}^{n_h} w_{h,[l]} \sum_{j=1}^{\vartheta} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right)$ are known. From Theorem 1, the lower bound in Equation (8) can be obtained.    ☐
For $G_h$, based on Theorem 1, the following algorithm (Algorithm 1) is proposed to provide an upper bound (denoted by UP) for the branch-and-bound algorithm (denoted by $\widetilde{BandB}$).
The program flowchart of Algorithm 1 is shown in Figure 1.
Algorithm 1 UP for $G_h$ when minimizing $\sum_h \sum_l w_{h,l} C_{h,l}$
Step (1). Arrange the jobs in $G_h$ in non-decreasing order of $p_{h,l}$.
Step (2). Arrange the jobs in $G_h$ in non-increasing order of $w_{h,l}$.
Step (3). Arrange the jobs in $G_h$ in non-decreasing order of $p_{h,l}/w_{h,l}$.
Step (4). Choose the best solution from Steps (1)–(3).
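The following sketch illustrates Algorithm 1: the three candidate orderings are built by sorting and the cheapest is kept under the within-group cost $\widetilde{WZ}_{G_h}$ of Equation (7). The power-function forms of $\alpha_h$ and $\beta_h$ from Section 5 are assumed, and all identifiers are illustrative rather than the authors' implementation.

```python
# Sketch of Algorithm 1 (UP for group G_h); alpha_h and beta_h are assumed to be
# the power functions of Section 5, and all names are illustrative.
def wz_group(order, A=0.5, B=0.5, a=-0.3, b=-0.2):
    """order: list of (p, w) pairs; returns the within-group cost of Equation (7)."""
    cost, t_in_group, cum_p = 0.0, 0.0, 0.0
    for r, (p, w) in enumerate(order, start=1):
        t_in_group += p * (A * (1.0 + cum_p) ** a + B * r ** b)
        cost += w * t_in_group
        cum_p += p
    return cost

def algorithm1_upper_bound(jobs):
    """jobs: list of (p_{h,l}, w_{h,l}) pairs of group G_h; returns the best of three orderings."""
    candidates = [
        sorted(jobs, key=lambda x: x[0]),          # Step (1): non-decreasing p
        sorted(jobs, key=lambda x: -x[1]),         # Step (2): non-increasing w
        sorted(jobs, key=lambda x: x[0] / x[1]),   # Step (3): non-decreasing p/w
    ]
    return min(candidates, key=wz_group)           # Step (4): keep the cheapest schedule

best = algorithm1_upper_bound([(4, 1), (2, 3), (5, 2)])
print(best, wz_group(best))
```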
From $LB(\widetilde{WZ}_{G_h})$ (Equation (8)) and UP (Algorithm 1), the standard $\widetilde{BandB}$ algorithm can be applied to obtain the optimal job schedule of $G_h$.
Remark 1.
If $w_{h,l} = 1$, then from Corollary 1, an optimal job schedule of $G_h$ is arranged in non-decreasing order of $p_{h,l}$, i.e., $p_{h,(1)} \leq p_{h,(2)} \leq \cdots \leq p_{h,(n_h-1)} \leq p_{h,(n_h)}$, $h = 1, 2, \ldots, g-1, g$.

4.2. Group Schedule

From Section 4.1, it is assumed that the optimal job schedule within each group has been obtained. Let $X^G = (X^{sc}, X^{un})$ be a group schedule, where $X^{sc}$ (resp. $X^{un}$) is the scheduled (resp. unscheduled) part, and there are $\theta$ groups in $X^{sc}$. Let $s = C_{\theta,[n_\theta]}(X^{sc})$, $W_{[h]} = \sum_{d=1}^{n_{[h]}} w_{[h],[d]}$ ($W_h = \sum_{d=1}^{n_h} w_{h,[d]}$), and $\Theta_{[h]} = \sum_{j=1}^{n_{[h]}} p_{[h],(j)}\left(A\alpha_{[h]}\left(\sum_{f=1}^{j-1} p_{[h],(f)}\right) + B\beta_{[h]}(j)\right)$ ($\Theta_h = \sum_{j=1}^{n_h} p_{h,(j)}\left[A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,(f)}\right) + B\beta_h(j)\right]$); from Equations (5)–(7), it follows that
$$\sum_h \sum_l w_{h,l} C_{h,l} = \sum_{h=1}^{\theta} \sum_{l=1}^{n_h} w_{h,[l]} C_{h,[l]}(X^{sc}) + \sum_{h=\theta+1}^{g} \sum_{l=1}^{n_h} w_{h,[l]} C_{[h],[l]}(X^{un}) = \sum_{h=1}^{\theta} \sum_{l=1}^{n_h} w_{h,[l]} C_{h,[l]}(X^{sc}) + \sum_{h=\theta+1}^{g} \sum_{d=1}^{n_{[h]}} w_{[h],[d]} \sum_{k=\theta+1}^{h} \mu_{[k]} \prod_{o=k+1}^{h}(1+\nu_{[o]}) + s \sum_{k=\theta+1}^{g} \sum_{d=1}^{n_{[k]}} w_{[k],[d]} \prod_{o=\theta+1}^{k}(1+\nu_{[o]}) + \sum_{h=\theta+1}^{g-1} \sum_{d=1}^{n_{[h+1]}} w_{[h+1],[d]} \sum_{k=\theta+1}^{h} \Theta_{[k]} \prod_{o=k+1}^{h+1}(1+\nu_{[o]}) + \sum_{h=\theta+1}^{g} \sum_{l=1}^{n_h} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right). \tag{10}$$
Theorem 5.
For
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l},$$
if the job schedule of each group is given, a lower bound on the group schedule cost is
$$LB\left(\sum_h \sum_l w_{h,l} C_{h,l}\right) = \sum_{h=1}^{\theta} \sum_{l=1}^{n_h} w_{h,[l]} C_{h,[l]}(X^{sc}) + TW_{\min} \sum_{h=\theta+1}^{g} \sum_{k=\theta+1}^{h} \mu_{\langle k\rangle} \prod_{o=k+1}^{h}(1+\nu_{\langle o\rangle}) + s \sum_{k=\theta+1}^{g} W_{\langle\langle k\rangle\rangle} \prod_{o=\theta+1}^{k}(1+\nu_{\langle\langle o\rangle\rangle}) + TW_{\min} \sum_{h=\theta+1}^{g-1} \sum_{k=\theta+1}^{h} \Theta_{\langle\langle\langle k\rangle\rangle\rangle} \prod_{o=k+1}^{h+1}(1+\nu_{\langle\langle\langle o\rangle\rangle\rangle}) + \sum_{h=\theta+1}^{g} \sum_{l=1}^{n_{[h]}} w_{[h],[l]}\, p_{[h],(l)}\left(A\alpha_{[h]}\left(\sum_{f=1}^{l-1} p_{[h],(f)}\right) + B\beta_{[h]}(l)\right), \tag{11}$$
where $\frac{\mu_{\langle\theta+1\rangle}}{\nu_{\langle\theta+1\rangle}} \leq \frac{\mu_{\langle\theta+2\rangle}}{\nu_{\langle\theta+2\rangle}} \leq \cdots \leq \frac{\mu_{\langle g-1\rangle}}{\nu_{\langle g-1\rangle}} \leq \frac{\mu_{\langle g\rangle}}{\nu_{\langle g\rangle}}$, $\frac{\nu_{\langle\langle\theta+1\rangle\rangle}}{W_{\langle\langle\theta+1\rangle\rangle}(1+\nu_{\langle\langle\theta+1\rangle\rangle})} \leq \frac{\nu_{\langle\langle\theta+2\rangle\rangle}}{W_{\langle\langle\theta+2\rangle\rangle}(1+\nu_{\langle\langle\theta+2\rangle\rangle})} \leq \cdots \leq \frac{\nu_{\langle\langle g\rangle\rangle}}{W_{\langle\langle g\rangle\rangle}(1+\nu_{\langle\langle g\rangle\rangle})}$, and $\frac{\Theta_{\langle\langle\langle\theta+1\rangle\rangle\rangle}}{\nu_{\langle\langle\langle\theta+1\rangle\rangle\rangle}} \leq \frac{\Theta_{\langle\langle\langle\theta+2\rangle\rangle\rangle}}{\nu_{\langle\langle\langle\theta+2\rangle\rangle\rangle}} \leq \cdots \leq \frac{\Theta_{\langle\langle\langle g\rangle\rangle\rangle}}{\nu_{\langle\langle\langle g\rangle\rangle\rangle}}$ (where $\frac{\mu_{\langle h\rangle}}{\nu_{\langle h\rangle}}$, $\frac{\nu_{\langle\langle h\rangle\rangle}}{W_{\langle\langle h\rangle\rangle}(1+\nu_{\langle\langle h\rangle\rangle})}$ and $\frac{\Theta_{\langle\langle\langle h\rangle\rangle\rangle}}{\nu_{\langle\langle\langle h\rangle\rangle\rangle}}$ do not necessarily correspond to the same group, $h = \theta+1, \theta+2, \ldots, g-1, g$), i.e., $\frac{\mu_{\langle h\rangle}}{\nu_{\langle h\rangle}}$, $\frac{\nu_{\langle\langle h\rangle\rangle}}{W_{\langle\langle h\rangle\rangle}(1+\nu_{\langle\langle h\rangle\rangle})}$ and $\frac{\Theta_{\langle\langle\langle h\rangle\rangle\rangle}}{\nu_{\langle\langle\langle h\rangle\rangle\rangle}}$ are three different non-decreasing orders of the unscheduled groups.
Proof. 
From Equation (10), $s$, $\sum_{h=\theta+1}^{g} \sum_{l=1}^{n_h} w_{h,[l]} \sum_{j=1}^{l} p_{h,[j]}\left(A\alpha_h\left(\sum_{f=1}^{j-1} p_{h,[f]}\right) + B\beta_h(j)\right)$ (the job order within each group is given) and $\sum_{h=1}^{\theta} \sum_{l=1}^{n_h} w_{h,[l]} C_{h,[l]}(X^{sc})$ are constants. The term $\sum_{k=\theta+1}^{h} \mu_{[k]} \prod_{o=k+1}^{h}(1+\nu_{[o]})$ is minimized by the non-decreasing order of $\mu_h/\nu_h$ (see Gawiejnowicz [8]), $\sum_{k=\theta+1}^{h} \Theta_{[k]} \prod_{o=k+1}^{h+1}(1+\nu_{[o]})$ is minimized by the non-decreasing order of $\Theta_h/\nu_h$, and $\sum_{k=\theta+1}^{g} W_{[k]} \prod_{o=\theta+1}^{k}(1+\nu_{[o]})$ is minimized by the non-decreasing order of $\frac{\nu_h}{W_h(1+\nu_h)}$. Let $TW_{\min} = \min\{W_h : h \in X^{un}\}$; hence, the lower bound (11) of
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}$$
can be obtained.    ☐
Remark 2.
If $w_{h,l} = 1$, let $n_{\min} = \min\{n_h : h \in X^{un}\}$; the lower bound of
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l C_{h,l}$$
is:
$$LB\left(\sum_h \sum_l C_{h,l}\right) = \sum_{h=1}^{\theta} \sum_{l=1}^{n_h} C_{h,[l]}(X^{sc}) + n_{\min} \sum_{h=\theta+1}^{g} \sum_{k=\theta+1}^{h} \mu_{\langle k\rangle} \prod_{o=k+1}^{h}(1+\nu_{\langle o\rangle}) + s \sum_{k=\theta+1}^{g} n_{\langle\langle k\rangle\rangle} \prod_{o=\theta+1}^{k}(1+\nu_{\langle\langle o\rangle\rangle}) + n_{\min} \sum_{h=\theta+1}^{g-1} \sum_{k=\theta+1}^{h} \Theta_{\langle\langle\langle k\rangle\rangle\rangle} \prod_{o=k+1}^{h+1}(1+\nu_{\langle\langle\langle o\rangle\rangle\rangle}) + \sum_{h=\theta+1}^{g} \sum_{l=1}^{n_{[h]}} (n_{[h]} - l + 1)\, p_{[h],(l)}\left(A\alpha_{[h]}\left(\sum_{f=1}^{l-1} p_{[h],(f)}\right) + B\beta_{[h]}(l)\right), \tag{12}$$
where $\frac{\mu_{\langle\theta+1\rangle}}{\nu_{\langle\theta+1\rangle}} \leq \frac{\mu_{\langle\theta+2\rangle}}{\nu_{\langle\theta+2\rangle}} \leq \cdots \leq \frac{\mu_{\langle g\rangle}}{\nu_{\langle g\rangle}}$, $\frac{\nu_{\langle\langle\theta+1\rangle\rangle}}{n_{\langle\langle\theta+1\rangle\rangle}(1+\nu_{\langle\langle\theta+1\rangle\rangle})} \leq \frac{\nu_{\langle\langle\theta+2\rangle\rangle}}{n_{\langle\langle\theta+2\rangle\rangle}(1+\nu_{\langle\langle\theta+2\rangle\rangle})} \leq \cdots \leq \frac{\nu_{\langle\langle g\rangle\rangle}}{n_{\langle\langle g\rangle\rangle}(1+\nu_{\langle\langle g\rangle\rangle})}$, and $\frac{\Theta_{\langle\langle\langle\theta+1\rangle\rangle\rangle}}{\nu_{\langle\langle\langle\theta+1\rangle\rangle\rangle}} \leq \frac{\Theta_{\langle\langle\langle\theta+2\rangle\rangle\rangle}}{\nu_{\langle\langle\langle\theta+2\rangle\rangle\rangle}} \leq \cdots \leq \frac{\Theta_{\langle\langle\langle g\rangle\rangle\rangle}}{\nu_{\langle\langle\langle g\rangle\rangle\rangle}}$ (where $\frac{\mu_{\langle h\rangle}}{\nu_{\langle h\rangle}}$, $\frac{\nu_{\langle\langle h\rangle\rangle}}{n_{\langle\langle h\rangle\rangle}(1+\nu_{\langle\langle h\rangle\rangle})}$ and $\frac{\Theta_{\langle\langle\langle h\rangle\rangle\rangle}}{\nu_{\langle\langle\langle h\rangle\rangle\rangle}}$ do not necessarily correspond to the same group, $h = \theta+1, \theta+2, \ldots, g-1, g$), i.e., these are three different non-decreasing orders of the unscheduled groups.
Similar to the UP for the job schedule, from Theorem 2 and the above analysis, the following UP algorithm (Algorithm 2) for the group schedule can be proposed.
The program flowchart of Algorithm 2 is shown in Figure 2 (note that the non-increasing-order case is similar).
From $LB\left(\sum_h \sum_l w_{h,l} C_{h,l}\right)$ (Equation (11)) and UP (Algorithm 2), the standard $\widetilde{BandB}$ algorithm can be applied to obtain the optimal group schedule.
Algorithm 2 UP for the group schedule
Step (1). Arrange the groups in non-decreasing (non-increasing) order of $\mu_h/\nu_h$, $h = 1, 2, \ldots, g-1, g$.
Step (2). Arrange the groups in non-decreasing (non-increasing) order of $\frac{\nu_h}{W_h(1+\nu_h)}$, $h = 1, 2, \ldots, g-1, g$.
Step (3). Arrange the groups in non-decreasing (non-increasing) order of $\frac{\mu_h+\Theta_h}{W_h(1+\nu_h)}$, $h = 1, 2, \ldots, g-1, g$.
Step (4). Arrange the groups in non-decreasing (non-increasing) order of $\Theta_h/\nu_h$, $h = 1, 2, \ldots, g-1, g$.
Step (5). Arrange the groups in non-decreasing (non-increasing) order of $\mu_h$, $h = 1, 2, \ldots, g-1, g$.
Step (6). Arrange the groups in non-decreasing (non-increasing) order of $\nu_h$, $h = 1, 2, \ldots, g-1, g$.
Step (7). Arrange the groups in non-decreasing (non-increasing) order of $\mu_h + \Theta_h$, $h = 1, 2, \ldots, g-1, g$.
Step (8). Arrange the groups in non-decreasing (non-increasing) order of $\nu_h + \Theta_h$, $h = 1, 2, \ldots, g-1, g$.
Step (9). Arrange the groups in non-decreasing (non-increasing) order of $\Theta_h$, $h = 1, 2, \ldots, g-1, g$.
Step (10). Choose the schedule with the smallest value of $\sum_h \sum_l w_{h,l} C_{h,l}$ among Steps (1)–(9).
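The following sketch mirrors Algorithm 2: each of the nine sorting keys is tried in both non-decreasing and non-increasing order, and the cheapest resulting group sequence is kept. Here `evaluate` stands for any routine returning $\sum_h\sum_l w_{h,l}C_{h,l}$ for a group order (e.g., the evaluation sketch after Equation (6)); $\Theta_h$ and $W_h$ are assumed precomputed per group, and all names are illustrative.

```python
# Sketch of Algorithm 2 (UP for the group schedule); all names are illustrative.
from typing import Callable, Dict, List

def algorithm2_upper_bound(groups: List[Dict], evaluate: Callable[[List[Dict]], float]) -> List[Dict]:
    keys = [
        lambda g: g["mu"] / g["nu"],                                    # Step (1)
        lambda g: g["nu"] / (g["W"] * (1.0 + g["nu"])),                 # Step (2)
        lambda g: (g["mu"] + g["Theta"]) / (g["W"] * (1.0 + g["nu"])),  # Step (3)
        lambda g: g["Theta"] / g["nu"],                                 # Step (4)
        lambda g: g["mu"],                                              # Step (5)
        lambda g: g["nu"],                                              # Step (6)
        lambda g: g["mu"] + g["Theta"],                                 # Step (7)
        lambda g: g["nu"] + g["Theta"],                                 # Step (8)
        lambda g: g["Theta"],                                           # Step (9)
    ]
    best, best_cost = None, float("inf")
    for key in keys:
        for reverse in (False, True):                  # non-decreasing and non-increasing orders
            order = sorted(groups, key=key, reverse=reverse)
            cost = evaluate(order)
            if cost < best_cost:
                best, best_cost = order, cost          # Step (10): keep the cheapest sequence
    return best

# toy usage with a placeholder evaluator (replace with the real objective of Equation (10))
toy = [{"mu": 3, "nu": 0.2, "W": 6, "Theta": 10}, {"mu": 2, "nu": 0.5, "W": 3, "Theta": 7}]
print(algorithm2_upper_bound(toy, evaluate=lambda order: sum(i * g["Theta"] for i, g in enumerate(order, 1))))
```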

4.3. Solution Algorithms

From Section 4.1 and Section 4.2, combining the LB and UP techniques,
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}$$
can be solved optimally by the following $\widetilde{BandB}$ algorithm (Algorithm 3):
Algorithm 3 $\widetilde{BandB}$ algorithm
Step (1). For each group $G_h$, the optimal job schedule is obtained by the $\widetilde{BandB}$ algorithm, where the lower bound is Equation (8) (see Theorem 4) and the upper bound is calculated by Algorithm 1, $h = 1, 2, \ldots, g-1, g$.
Step (2). The optimal group schedule is obtained by the $\widetilde{BandB}$ algorithm, where the lower bound is Equation (11) (see Theorem 5) and the upper bound is calculated by Algorithm 2.
The complete flowchart of the $\widetilde{BandB}$ algorithm, based on a depth-first search, is shown in Figure 3.
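As an illustration of the depth-first search behind Algorithm 3, the following generic skeleton (not the authors' code) assumes two user-supplied routines: `lower_bound(partial, remaining)` (Equation (8) at the job level, Equation (11) at the group level) and `cost(sequence)`; the incumbent is initialized by the UP heuristics of Algorithms 1 and 2.

```python
# Generic depth-first branch-and-bound skeleton in the spirit of Algorithm 3;
# `cost` and `lower_bound` are assumptions supplied by the caller.
from typing import Callable, List, Sequence, Tuple

def branch_and_bound(items: Sequence, cost: Callable[[List], float],
                     lower_bound: Callable[[List, List], float],
                     incumbent: List) -> Tuple[List, float]:
    best_seq, best_cost = list(incumbent), cost(incumbent)    # upper bound from Algorithm 1/2

    def dfs(partial: List, remaining: List) -> None:
        nonlocal best_seq, best_cost
        if not remaining:
            c = cost(partial)
            if c < best_cost:
                best_seq, best_cost = list(partial), c
            return
        if lower_bound(partial, remaining) >= best_cost:      # prune this node
            return
        for i, item in enumerate(remaining):                  # branch on the next position
            dfs(partial + [item], remaining[:i] + remaining[i + 1:])

    dfs([], list(items))
    return best_seq, best_cost

# toy usage: sequence numbers to minimize the sum of prefix sums, with a weak but valid bound
items = [3, 1, 2]
cost = lambda seq: sum(sum(seq[:i + 1]) for i in range(len(seq)))
lb = lambda partial, remaining: cost(partial)
print(branch_and_bound(items, cost, lb, incumbent=sorted(items)))
```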
The well-known Nawaz–Enscore–Ham heuristic of Nawaz et al. [79] is recognized as highly effective for the flowshop makespan minimization problem; hence, the following modified Nawaz–Enscore–Ham ($\widetilde{MNEH}$) algorithm (Algorithm 4) is proposed to solve
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}.$$
The program flowchart of the $\widetilde{MNEH}$ algorithm (see Algorithm 4) is shown in Figure 4.
Algorithm 4 $\widetilde{MNEH}$ algorithm
Step (1-1). For group $G_h$, let $\xi_{G_h}$ be the job schedule obtained from Algorithm 1, $h = 1, 2, \ldots, g-1, g$.
Step (1-2). Set $\xi_h = 2$. Select the first two jobs from the sorted list $\xi_{G_h}$ and select the better of the two possible job schedules. Do not change the relative positions of these two jobs with respect to each other in the remaining steps of the algorithm. Set $\xi_h = 3$.
Step (1-3). Pick the job in the $\xi_h$th position of the list generated in Step (1-1) and find the best job schedule by placing it at all possible $\xi_h$ positions in the partial schedule found in the previous step, without changing the relative positions of the already assigned jobs.
Step (1-4). If $\xi_h = n_h$, STOP; otherwise, set $\xi_h = \xi_h + 1$ and go to Step (1-3).
Step (2-1). Let $S_\xi$ be the group schedule obtained from Algorithm 2.
Step (2-2). Set $\xi = 2$. Select the first two groups from the sorted list $S_\xi$ and select the better of the two possible group schedules. Do not change the relative positions of these two groups with respect to each other in the remaining steps of the algorithm. Set $\xi = 3$.
Step (2-3). Pick the group in the $\xi$th position of the list generated in Step (2-1) and find the best group schedule by placing it at all possible $\xi$ positions in the partial group schedule found in the previous step, without changing the relative positions of the already assigned groups.
Step (2-4). If $\xi = g$, STOP; otherwise, set $\xi = \xi + 1$ and go to Step (2-3).
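A compact sketch of the NEH-style insertion in Algorithm 4 follows; it is applied identically to jobs within a group and to groups. Here `cost` stands for $\widetilde{WZ}_{G_h}$ at the job level or $\sum_h\sum_l w_{h,l}C_{h,l}$ at the group level, the seed order comes from Algorithm 1 or 2, and the toy cost at the end is only a placeholder.

```python
# Sketch of the NEH-style insertion of Algorithm 4; `cost` is an assumption
# supplied by the caller (Equation (7) or Equation (10)).
from typing import Callable, List, Sequence

def mneh(seed_order: Sequence, cost: Callable[[List], float]) -> List:
    partial: List = []
    for item in seed_order:                               # Steps (1-2)-(1-4) / (2-2)-(2-4)
        best_pos, best_cost = 0, float("inf")
        for pos in range(len(partial) + 1):               # try every insertion position
            candidate = partial[:pos] + [item] + partial[pos:]
            c = cost(candidate)
            if c < best_cost:
                best_pos, best_cost = pos, c
        partial.insert(best_pos, item)                    # keep relative order of placed items
    return partial

# toy usage: minimize a weighted sum of completion times for (p, w) items
toy_cost = lambda seq: sum(w * sum(p for p, _ in seq[:i + 1]) for i, (p, w) in enumerate(seq))
print(mneh([(4, 1), (2, 3), (5, 2)], toy_cost))
```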
In addition, a simulated annealing ($\widetilde{SA}$, see Li et al. [80]) algorithm (Algorithm 5) is presented for
$$1\ \Big|\ \tilde{S}_h(t) = \mu_h + \nu_h s,\ \tilde{P}_{h,l}(r) = p_{h,l}\left(A\alpha_h\left(\sum_{f=1}^{r-1} p_{h,[f]}\right) + B\beta_h(r)\right),\ GroupT\ \Big|\ \sum_h \sum_l w_{h,l} C_{h,l}.$$
The program flowchart of the $\widetilde{SA}$ algorithm (see Algorithm 5) is shown in Figure 5.
Algorithm 5 $\widetilde{SA}$ algorithm
Step (1-1). For group $G_h$, let $\varsigma_{G_h}$ be the job schedule obtained from Algorithm 1, $h = 1, 2, \ldots, g-1, g$.
Step (1-2). Calculate the cost value $\widetilde{WZ}_{G_h}$ of the original job schedule $\varsigma_{G_h}$ for $G_h$ (see Equation (7)).
Step (1-3). Calculate the cost value of a new job schedule $\varsigma_{G_h}^{*}$ generated by the pairwise interchange (PI) neighborhood method. If the cost of $\varsigma_{G_h}^{*}$ is less than that of $\varsigma_{G_h}$, it is accepted. Nevertheless, if the cost of $\varsigma_{G_h}^{*}$ is higher, it might still be accepted, with a probability that decreases as the process proceeds. This acceptance probability is given by the exponential function $P(accept) = \exp(-\alpha \times \Delta\widetilde{WZ}_{G_h})$, where $\alpha$ is a parameter and $\Delta\widetilde{WZ}_{G_h}$ is the change in the objective cost $\widetilde{WZ}_{G_h}$. In addition, $\alpha$ is updated in the $k$th iteration as $\alpha = k\delta$, where $\delta$ is a constant. After preliminary trials, $\delta = 1$ is used.
Step (1-4). If the cost of $\varsigma_{G_h}^{*}$ increases, the new job schedule is accepted when $P(accept) > \beta$, where $\beta$ is randomly sampled from the uniform distribution on $(0, 1)$.
Step (1-5). The job schedule is regarded as stable after $1000 n_h$ iterations.
Step (2-1). Let $S_\varsigma$ be the group schedule obtained from Algorithm 2.
Step (2-2). Calculate the cost value $\sum_h \sum_l w_{h,l} C_{h,l}$ of the original group schedule $S_\varsigma$ (see Equation (10)).
Step (2-3). Calculate the cost value of a new group schedule $S_\varsigma^{*}$ generated by the PI neighborhood method. If the cost of $S_\varsigma^{*}$ is less than that of $S_\varsigma$, it is accepted. Nevertheless, if the cost of $S_\varsigma^{*}$ is higher, it might still be accepted, with a probability that decreases as the process proceeds. This acceptance probability is given by the exponential function $P(accept) = \exp\left(-\alpha \times \Delta\sum_h \sum_l w_{h,l} C_{h,l}\right)$, where $\alpha$ is a parameter and $\Delta\sum_h \sum_l w_{h,l} C_{h,l}$ is the change in the objective cost $\sum_h \sum_l w_{h,l} C_{h,l}$. In addition, $\alpha$ is updated in the $k$th iteration as $\alpha = k\delta$, where $\delta$ is a constant. After preliminary trials, $\delta = 1$ is used.
Step (2-4). If the cost of $S_\varsigma^{*}$ increases, the new group schedule is accepted when $P(accept) > \beta$, where $\beta$ is randomly sampled from the uniform distribution on $(0, 1)$.
Step (2-5). The group schedule is regarded as stable after $1000 g$ iterations.
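The following sketch illustrates the pairwise-interchange simulated annealing of Algorithm 5 under the acceptance rule $P(accept) = \exp(-\alpha\,\Delta)$ with $\alpha = k\delta$ and $\delta = 1$ as described above; `cost` and the $1000\,n$ iteration budget follow the paper, while the remaining names and the toy cost are illustrative.

```python
# Sketch of the pairwise-interchange SA of Algorithm 5; `cost` is supplied by the caller.
import math
import random
from typing import Callable, List, Sequence

def simulated_annealing(seed_order: Sequence, cost: Callable[[List], float],
                        delta_const: float = 1.0, seed: int = 0) -> List:
    rng = random.Random(seed)
    current = list(seed_order)
    current_cost = cost(current)
    best, best_cost = list(current), current_cost
    for k in range(1, 1000 * len(current) + 1):
        i, j = rng.sample(range(len(current)), 2)     # pairwise interchange (PI) neighbor
        neighbor = list(current)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        neighbor_cost = cost(neighbor)
        delta = neighbor_cost - current_cost
        alpha = k * delta_const                       # acceptance gets stricter over time
        if delta < 0 or rng.random() < math.exp(-alpha * delta):
            current, current_cost = neighbor, neighbor_cost
            if current_cost < best_cost:
                best, best_cost = list(current), current_cost
    return best

toy_cost = lambda seq: sum(w * sum(p for p, _ in seq[:i + 1]) for i, (p, w) in enumerate(seq))
print(simulated_annealing([(4, 1), (2, 3), (5, 2), (1, 5)], toy_cost))
```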

5. Experimental Study

To validate the performance of the $\widetilde{MNEH}$, $\widetilde{SA}$, and $\widetilde{BandB}$ algorithms, we implemented them in C++ using Visual Studio 2022 v17.1.0 and executed them on a HUAWEI desktop (manufactured by Huawei Technologies Co., Ltd., Shenzhen, China) equipped with an Intel Core i5-12400 CPU (2.50 GHz) (manufactured by Intel Corporation, Santa Clara, CA, USA) and 16 GB of RAM, running Windows 11 (developed by Microsoft Corporation, Redmond, WA, USA). In the numerical experiments, $n = 60, 80, 100$; $g = 11, 12, 13, 14$; $\tilde{P}_{h,l}(r) = p_{h,l}\left(0.5\left(1 + \sum_{f=1}^{r-1} p_{h,[f]}\right)^{a_h} + 0.5\, r^{b_h}\right)$; and $n_i \geq 1$. As in Lv and Wang [68], Wang and Liu [70], and Yin and Gao [71], the other parameters are given as follows:
(1) Real numbers: $a_h$ ($b_h$) $\sim U[-0.5, -0.1]$; $\nu_h \sim U[0.1, 1]$; $\nu_h \sim U[1, 2]$; $\nu_h \sim U[2, 5]$.
(2) Integer numbers: $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[1, 50]$; $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[51, 100]$; $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[1, 100]$.
For simulation accuracy, each random instance was run 30 times. The error of algorithm $ErH$ is calculated as
$$e_\rho(ErH) = \frac{\sum_h \sum_l w_{h,l} C_{h,l}(ErH)}{\sum_h \sum_l w_{h,l} C_{h,l}(ErH^{*})},$$
where $ErH \in \{\widetilde{MNEH}, \widetilde{SA}\}$, and $\sum_h \sum_l w_{h,l} C_{h,l}(ErH)$ (resp. $\sum_h \sum_l w_{h,l} C_{h,l}(ErH^{*})$) is the objective cost (see Equation (10)) obtained by algorithm $ErH$ (resp. $\widetilde{BandB}$). In addition, the running time (in milliseconds, ms) of the $\widetilde{MNEH}$, $\widetilde{SA}$ and $\widetilde{BandB}$ algorithms is recorded.
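As a small illustration of how $e_\rho(ErH)$ is aggregated over the 30 runs of an instance, the snippet below computes the minimum, average and maximum error ratios; the numerical values are made-up placeholders, not results from the paper.

```python
# Illustration of the error metric e_rho(ErH); the numbers are placeholders only.
heuristic_costs = [1532.0, 1518.0, 1547.5]   # objective values from MNEH or SA (placeholders)
optimal_cost = 1518.0                        # objective value from the BandB algorithm (placeholder)

errors = [c / optimal_cost for c in heuristic_costs]
print(f"min={min(errors):.4f}, mean={sum(errors)/len(errors):.4f}, max={max(errors):.4f}")
```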
The CPU times and errors of these algorithms are shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. From Table 2, Table 4, and Table 6, the maximum CPU time of $\widetilde{BandB}$ is 9,996,035.7 ms (i.e., in Table 2 with $n \times g = 100 \times 14$ and $\nu_h \sim U[0.1, 1]$). In terms of the CPU time consumption of the $\widetilde{BandB}$ algorithm, a clear trend emerges. Specifically, when the parameter $\nu_h$ lies in the interval $[0.1, 1]$, the algorithm requires significantly more CPU time than when $\nu_h$ lies in the intervals $[1, 2]$ and $[2, 5]$. Further analysis reveals that the CPU time required when $\nu_h$ is in the interval $[1, 2]$ is still longer than that when $\nu_h$ is in the interval $[2, 5]$. Moreover, as the value of $\mu_h$ ($p_{h,l}$, $w_{h,l}$) increases, the $\widetilde{BandB}$ algorithm exhibits an increased demand for CPU time. When $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[51, 100]$, the average CPU time consumption of the $\widetilde{BandB}$ algorithm is significantly greater than that when $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[1, 50]$ and $\mu_h$ ($p_{h,l}$, $w_{h,l}$) $\sim U[1, 100]$.
From Table 2, Table 4 and Table 6, the CPU time consumption of the $\widetilde{MNEH}$ and $\widetilde{SA}$ algorithms is relatively low and stable. In this experiment, as the problem scale increases, their CPU times grow slowly, and the impact of the parameter values (i.e., $\mu_h$ ($p_{h,l}$, $w_{h,l}$) and $\nu_h$) on the CPU times of these two algorithms is not significant. Overall, the $\widetilde{SA}$ algorithm requires slightly more CPU time than the $\widetilde{MNEH}$ algorithm. However, the CPU times of both algorithms are within an acceptable range: their average CPU times are less than 1138.7 ms and 1020.0 ms, respectively, and their maximum CPU times do not exceed 1356.3 ms and 1260.7 ms, respectively.
Regarding the errors of the $\widetilde{MNEH}$ and $\widetilde{SA}$ algorithms, during the 30 random runs for each parameter combination, both algorithms always find the optimal solution at least once. As a result, the minimum error of both algorithms is 1.0000 under every parameter combination. However, regarding the average and maximum errors, Table 3, Table 5 and Table 7 indicate that when $n \times g \leq 100 \times 14$, both the average error and the maximum error of the $\widetilde{SA}$ algorithm are smaller than those of the $\widetilde{MNEH}$ algorithm. The average errors of the two algorithms do not exceed 1.0284 and 1.0172, respectively, and the maximum errors do not exceed 1.4675 and 1.2121, respectively. In addition, Figure 6 visually demonstrates that the $\widetilde{SA}$ algorithm exhibits excellent performance, with smaller errors and stronger stability in terms of both average and maximum error.
Considering that the stability of the heuristic algorithms may be affected by the scale of the problem, experiments with 95% confidence intervals were conducted. In these experiments, $n = 60, 80, 100$ and $g = 12$ were selected. Each parameter combination was randomly executed 90 times (30 times each for $\nu_h \sim U[0.1, 1]$, $\nu_h \sim U[1, 2]$ and $\nu_h \sim U[2, 5]$). These experiments recorded the average errors of the $\widetilde{MNEH}$ and $\widetilde{SA}$ algorithms, the upper and lower bounds of the 95% confidence intervals, and the spans of these intervals. The results are presented in Table 8, Table 9 and Table 10. They clearly demonstrate that, for all parameter combinations, the average error of the $\widetilde{SA}$ algorithm is lower than that of the $\widetilde{MNEH}$ algorithm. Moreover, both the confidence interval bounds and the interval spans of the $\widetilde{SA}$ algorithm are smaller than those of the $\widetilde{MNEH}$ algorithm. It is thus evident that the $\widetilde{SA}$ algorithm outperforms the $\widetilde{MNEH}$ algorithm in terms of both accuracy and stability when solving this problem.
Remark 3.
Metaheuristic procedures (e.g., the $\widetilde{SA}$ algorithm of this paper) may be randomized, while mathematically substantiated constructive algorithms (e.g., the $\widetilde{MNEH}$ algorithm of this paper) may be deterministic.

6. Conclusions

In this article, we investigate single-machine GroupT scheduling in which group setup times are subject to DeterE and the job-processing times within each group are subject to LearnE. The total (weighted) completion time minimization is NP-hard; hence, two heuristic algorithms and a branch-and-bound algorithm are proposed.
Numerical experiments have demonstrated that the CPU time of the $\widetilde{BandB}$ algorithm increases sharply with the problem scale; nevertheless, it can still solve instances of size $100 \times 14$ within a reasonable time frame. The CPU times of the $\widetilde{MNEH}$ and $\widetilde{SA}$ algorithms are comparable, and both obtain near-optimal solutions in a relatively short time. Nevertheless, the error of the $\widetilde{MNEH}$ algorithm is larger than that of the $\widetilde{SA}$ algorithm in most cases; specifically, the $\widetilde{SA}$ algorithm exhibits improvements of 0.6% and 3.2% in terms of average and maximum error bound, respectively, relative to the $\widetilde{MNEH}$ algorithm. Moreover, according to the experiments with 95% confidence intervals, the $\widetilde{MNEH}$ algorithm is inferior to the $\widetilde{SA}$ algorithm in terms of stability; specifically, the $\widetilde{SA}$ algorithm demonstrates a 48.3% improvement in stability compared to the $\widetilde{MNEH}$ algorithm (measured by the 95% CI span metric). Therefore, when solving this problem, the $\widetilde{SA}$ algorithm exhibits better performance than the $\widetilde{MNEH}$ algorithm. The LB and UP techniques and the $\widetilde{SA}$ algorithm can also be applied to other group scheduling problems. Our analyses and the $\widetilde{SA}$ algorithm may be used by production firms in establishing production plans and group schedules; especially in the forging process of steel plants, the $\widetilde{SA}$ algorithm's superior stability and performance could be particularly beneficial.
Future research can delve into GroupT models with job rejection (see Mor and Shapira [81]; Geng et al. [82]; Chen and Li [83]), i.e., the combination of GroupT and job rejection. Challenging options are also GroupT models with position-dependent weights (e.g., earliness and tardiness costs; see Wang et al. [84]; Wang et al. [85]). In addition, GroupT models in multi-machine settings (e.g., flow shops or parallel machines; see Wang and Wang [86]; Arani et al. [87]) are possible topics.

Author Contributions

Methodology, N.Y. and H.H.; Writing—original draft, N.Y.; Writing—review & editing, N.Y., H.H., Y.Z., Y.C. and N.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Liaoning Social Science Fund (Project No. L23BGL013).

Data Availability Statement

The data used to support this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xue, Y.; Rui, Z.J.; Yu, X.Y.; Sang, X.Z.; Liu, W.J. Estimation of distribution evolution memetic algorithm for the unrelated parallel-machine green scheduling problem. Memetic Comput. 2019, 11, 423–437. [Google Scholar] [CrossRef]
  2. Foumani, M.; Smith-Miles, K. The impact of various carbon reduction policies on green flowshop scheduling. Appl. Energy 2019, 249, 300–315. [Google Scholar] [CrossRef]
  3. Li, M.; Wang, G.G. A review of green shop scheduling problem. Inf. Sci. 2022, 589, 478–496. [Google Scholar] [CrossRef]
  4. Kong, F.Y.; Song, J.X.; Miao, C.X.; Zhang, Y.Z. Scheduling problems with rejection in green manufacturing industry. J. Comb. Optim. 2025, 49, 63. [Google Scholar] [CrossRef]
  5. Yin, Y.; Wu, W.-H.; Cheng, T.C.E.; Wu, C.-C. Single-machine scheduling with time-dependent and position-dependent deteriorating jobs. Int. J. Comput. Integr. Manuf. 2015, 28, 781–790. [Google Scholar] [CrossRef]
  6. Huang, X.; Wang, J.-J. Machine scheduling problems with a position-dependent deterioration. Appl. Math. Model. 2015, 39, 2897–2908. [Google Scholar] [CrossRef]
  7. Pei, J.; Wang, X.; Fan, W.; Pardalos, P.M.; Liu, X. Scheduling step-deteriorating jobs on bounded parallel-batching machines to maximise the total net revenue. J. Oper. Res. Soc. 2019, 70, 1830–1847. [Google Scholar] [CrossRef]
  8. Gawiejnowicz, S. Models and Algorithms of Time-Dependent Scheduling; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  9. Lu, Y.-Y.; Teng, F.; Feng, Z.-X. Scheduling jobs with truncated exponential sum-of-logarithm-processing-times based and position-based learning effects. Asia-Pac. J. Oper. Res. 2015, 32, 1550026. [Google Scholar] [CrossRef]
  10. Jiang, Z.; Chen, F.; Zhang, X. Single-machine scheduling with times-based and job-dependent learning effect. J. Oper. Res. Soc. 2017, 68, 809–815. [Google Scholar] [CrossRef]
  11. Azzouz, A.; Ennigrou, M.; Ben Said, L. Scheduling problems under learning effects: Classification and cartography. Int. J. Prod. Res. 2018, 56, 1642–1661. [Google Scholar] [CrossRef]
  12. Sun, X.Y.; Geng, X.-N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689. [Google Scholar] [CrossRef]
  13. Wang, J.-B.; Lv, D.-Y.; Xu, J.; Ji, P.; Li, F. Bicriterion scheduling with truncated learning effects and convex controllable processing times. Int. Trans. Oper. Res. 2021, 28, 1573–1593. [Google Scholar] [CrossRef]
  14. Pei, J.; Zhou, Y.; Yan, P.; Pardalos, P.M. A concise guide to scheduling with learning and deteriorating effects. Int. J. Prod. Res. 2021, 61, 2010–2031. [Google Scholar] [CrossRef]
  15. Keshavarz, T.; Savelsbergh, M.; Salmasi, N. A branch-and-bound algorithm for the single machine sequence-dependent group scheduling problem with earliness and tardiness penalties. Appl. Math. Model. 2015, 39, 6410–6424. [Google Scholar] [CrossRef]
  16. Ji, M.; Zhang, X.; Tang, X.Y.; Cheng, T.C.E.; Wei, G.Y.; Tan, Y.Y. Group scheduling with group-dependent multiple due-windows assignment. Int. J. Prod. Res. 2016, 54, 1244–1256. [Google Scholar] [CrossRef]
  17. Neufeld, J.S.; Gupta, J.N.D.; Buscher, U. A comprehensive review of flowshop group scheduling literature. Comput. Oper. Res. 2016, 70, 56–74. [Google Scholar] [CrossRef]
  18. Wang, J.-B.; Liang, X.-X. Group scheduling with deteriorating jobs and allotted resource under limited resource availability constraint. Eng. Optim. 2019, 51, 231–246. [Google Scholar] [CrossRef]
  19. Ning, L.; Sun, L. Single-machine group scheduling problems with general deterioration and linear learning effects. Math. Probl. Eng. 2023, 2023, 1455274. [Google Scholar] [CrossRef]
  20. Zhao, S. Scheduling jobs with general truncated learning effects including proportional setup times. Comput. Appl. Math. 2022, 41, 146. [Google Scholar] [CrossRef]
  21. Wu, C.-C.; Lin, W.-C.; Azzouz, A.; Xu, J.Y.; Chiu, Y.-L.; Tsai, Y.-W.; Shen, P.Y. A bicriterion single-machine scheduling problem with step-improving processing times. Comput. Ind. Eng. 2022, 171, 108469. [Google Scholar] [CrossRef]
  22. Ma, R.; Guo, S.A.; Zhang, X.Y. An optimal online algorithm for single-processor scheduling problem with learning effect. Theor. Comput. Sci. 2023, 928, 1–12. [Google Scholar] [CrossRef]
  23. Miao, C.; Song, J.; Zhang, Y. Single-machine time-dependent scheduling with proportional and delivery times. Asia-Pac. J. Oper. Res. 2023, 40, 2240015. [Google Scholar] [CrossRef]
  24. Lu, Y.-Y.; Zhang, S.; Tao, J.-Y. Earliness–tardiness scheduling with delivery times and deteriorating jobs. Asia-Pac. J. Oper. Res. 2025, 42, 2450009. [Google Scholar] [CrossRef]
  25. Sun, X.Y.; Liu, T.; Geng, X.-N.; Hu, Y.; Xu, J.-X. Optimization of scheduling problems with deterioration effects and an optional maintenance activity. J. Sched. 2023, 26, 251–266. [Google Scholar] [CrossRef]
  26. Zhang, L.-H.; Lv, D.-Y.; Wang, J.-B. Two-agent slack due-date assignment scheduling with resource allocations and deteriorating jobs. Mathematics 2023, 11, 2737. [Google Scholar] [CrossRef]
  27. Zhang, L.-H.; Geng, X.-N.; Xue, J.; Wang, J.-B. Single machine slack due-window assignment and deteriorating jobs. J. Ind. Manag. 2024, 20, 1593–1614. [Google Scholar] [CrossRef]
  28. Liu, Z.; Wang, J.-B. Single-machine scheduling with simultaneous learning effects and delivery times. Mathematics 2024, 12, 2522. [Google Scholar] [CrossRef]
  29. Wang, J.-B.; Wang, Y.-C.; Wan, C.; Lv, D.-Y.; Zhang, L. Controllable processing time scheduling with total weighted completion time objective and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024, 41, 2350026. [Google Scholar] [CrossRef]
  30. Lv, Z.-G.; Zhang, L.-H.; Wang, X.-Y.; Wang, J.-B. Single machine scheduling proportionally deteriorating jobs with ready times subject to the total weighted completion time minimization. Mathematics 2024, 12, 610. [Google Scholar] [CrossRef]
  31. Qian, J.; Chang, G.; Zhang, X. Single-machine common due-window assignment and scheduling with position-dependent weights, delivery time, learning effect and resource allocations. J. Appl. Math. Comput. 2024, 70, 1965–1994. [Google Scholar] [CrossRef]
  32. Qian, J.; Guo, Z.-Y. Common due-window assignment and single machine scheduling with delivery time, resource allocation, and job-dependent learning effect. J. Appl. Math. Comput. 2024, 70, 4441–4471. [Google Scholar] [CrossRef]
  33. Mao, R.-R.; Lv, D.-Y.; Ren, N.; Wang, J.-B. Supply chain scheduling with deteriorating jobs and delivery times. J. Appl. Math. Comput. 2024, 70, 2285–2312. [Google Scholar] [CrossRef]
  34. Lv, D.-Y.; Xue, J.; Wang, J.-B. Minmax common due-window assignment scheduling with deteriorating jobs. J. Oper. Res. Soc. China 2024, 12, 681–693. [Google Scholar] [CrossRef]
  35. Qiu, X.-Y.; Wang, J.-B. Single-machine scheduling with mixed due-windows and deterioration effects. J. Appl. Math. Comput. 2025, 71, 2527–2542. [Google Scholar] [CrossRef]
  36. Lv, D.-Y.; Wang, J.-B. No-idle flow shop scheduling with deteriorating jobs and common due date under dominating machines. Asia-Pac. J. Oper. Res. 2024, 41, 2450003. [Google Scholar] [CrossRef]
  37. Paredes-Astudillo, Y.A.; Botta-Genoulaz, V.; Montoya-Torres, J.R. Impact of learning effect modelling in flowshop scheduling with makespan minimisation based on the Nawaz–Enscore–Ham algorithm. Int. J. Prod. Res. 2024, 62, 1999–2014. [Google Scholar] [CrossRef]
  38. Parichehreh, M.; Gholizadeh, H.; Fathollahi-Fard, A.M.; Wong, K.Y. An energy-efficient unrelated parallel machine scheduling problem with learning effect of operators and deterioration of jobs. Int. J. Environ. Sci. Technol. 2024, 21, 9651–9676. [Google Scholar] [CrossRef]
  39. Bai, B.; Wei, C.-M.; He, H.-Y.; Wang, J.-B. Study on single-machine common/slack due-window assignment scheduling with delivery times, variable processing times and outsourcing. Mathematics 2024, 12, 2833. [Google Scholar] [CrossRef]
  40. Lv, D.-Y.; Wang, J.-B. Considering the peak power consumption problem with learning and deterioration effect in flow shop scheduling. Comput. Ind. Eng. 2024, 197, 110599. [Google Scholar] [CrossRef]
  41. Lv, D.-Y.; Wang, J.-B. Research on two-machine flow shop scheduling problem with release dates and truncated learning effects. Eng. Optim. 2024. [Google Scholar] [CrossRef]
  42. Wang, X.-Y.; Lv, D.-Y.; Ji, P.; Yin, N.; Wang, J.-B.; Jin, Q. Single machine scheduling problems with truncated learning effects and exponential past-sequence-dependent delivery times. Comput. Appl. Math. 2024, 43, 194. [Google Scholar] [CrossRef]
  43. Zhang, Y.Y.; Sun, X.-Y.; Liu, T.; Wang, J.Y.; Geng, X.-N. Single-machine scheduling simultaneous consideration of resource allocations and exponential time-dependent learning effects. J. Oper. Res. Soc. 2025, 76, 528–540. [Google Scholar] [CrossRef]
  44. Zhang, L.-H.; Yang, S.-H.; Lv, D.-Y.; Wang, J.-B. Research on convex resource allocation scheduling with exponential time-dependent learning effects. Comput. J. 2025, 68, 97–108. [Google Scholar] [CrossRef]
  45. Song, J.X.; Miao, C.X.; Kong, F.Y. Scheduling with step learning and job rejection. Oper. Res. 2025, 25, 6. [Google Scholar] [CrossRef]
  46. Sun, Z.-W.; Lv, D.-Y.; Wei, C.-M.; Wang, J.-B. Flow shop scheduling with shortening jobs for makespan minimization. Mathematics 2025, 13, 363. [Google Scholar] [CrossRef]
  47. Wang, X.Y.; Liu, W.G. Delivery scheduling with variable processing times and due date assignments. Bull. Malays. Math. Sci. Soc. 2025, 48, 76. [Google Scholar] [CrossRef]
  48. Sun, Y.; He, H.; Zhao, Y.; Wang, J.-B. Minimizing makespan scheduling on a single machine with general positional deterioration effects. Axioms 2025, 14, 290. [Google Scholar] [CrossRef]
  49. Sun, Y.; Lv, D.-Y.; Huang, X. Properties for due-window assignment scheduling on a two-machine no-wait proportionate flow shop with learning effects and resource allocation. J. Oper. Res. Soc. 2025. [Google Scholar] [CrossRef]
  50. Kuo, W.-H.; Yang, D.-L. Single-machine group scheduling with a time-dependent learning effect. Comput. Oper. Res. 2006, 33, 2099–2112. [Google Scholar] [CrossRef]
  51. Wu, C.-C.; Lee, W.-C. Single-machine group-scheduling problems with deteriorating setup times and job-processing times. Int. Prod. Econ. 2008, 115, 128–133. [Google Scholar] [CrossRef]
  52. Lee, W.-C.; Wu, C.-C. A note on single-machine group scheduling problems with position-based learning effect. Appl. Math. Model. 2009, 33, 2159–2163. [Google Scholar] [CrossRef]
  53. Yang, S.-J.; Yang, D.-L. Single-machine group scheduling problems under the effects of deterioration and learning. Comput. Ind. Eng. 2010, 58, 754–758. [Google Scholar] [CrossRef]
  54. Kuo, W.-H. Single-machine group scheduling with time-dependent learning effect and position-based setup time learning effect. Ann. Oper. Res. 2012, 196, 349–359. [Google Scholar] [CrossRef]
  55. He, Y.; Sun, L. One-machine scheduling problems with deteriorating jobs and position-dependent learning effects under group technology considerations. Int. J. Syst. Sci. 2015, 46, 1319–1326. [Google Scholar] [CrossRef]
  56. Fan, W.; Pei, J.; Liu, X.; Pardalos, P.M.; Kong, M. Serial-batching group scheduling with release times and the combined effects of deterioration and truncated job-dependent learning. J. Glob. Optim. 2018, 71, 147–163. [Google Scholar] [CrossRef]
  57. Huang, X. Bicriterion scheduling with group technology and deterioration effect. J. Appl. Math. Comput. 2019, 60, 455–464. [Google Scholar] [CrossRef]
  58. Liu, F.; Yang, J.; Lu, Y.-Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874. [Google Scholar] [CrossRef]
  59. Miao, C.X. Parallel-batch scheduling with deterioration and group technology. IEEE Access 2019, 7, 119082–119086. [Google Scholar] [CrossRef]
  60. Sun, L.; Ning, L.; Huo, J.-Z. Group scheduling problems with time-dependent and position-dependent DeJong’s learning effect. Math. Probl. Eng. 2020, 2020, 5161872. [Google Scholar] [CrossRef]
  61. Xu, H.; Li, X.; Ruiz, R.; Zhu, H. Group scheduling with nonperiodical maintenance and deteriorating effects. IEEE Trans. Syst. Man-Cybern.-Syst. 2021, 51, 2860–2872. [Google Scholar] [CrossRef]
  62. Liu, S.-C. Common due-window assignment and group scheduling with position-dependent processing times. Asia-Pac. J. Oper. Res. 2023, 32, 1550045. [Google Scholar] [CrossRef]
  63. Yan, J.-X.; Ren, N.; Bei, H.-B.; Bao, H.; Wang, J.-B. Study on resource allocation scheduling problem with learning factors and group technology. J. Ind. Manag. Optim. 2023, 19, 3419–3435. [Google Scholar] [CrossRef]
  64. Liu, W.G.; Wang, X.Y. Group technology scheduling with due-date assignment and controllable processing times. Processes 2023, 11, 1271. [Google Scholar] [CrossRef]
  65. Chen, K.; Han, S.; Huang, H.; Ji, M. A group-dependent due-window assignment scheduling problem with controllable learning effect. Asia-Pac. J. Oper. Res. 2023, 40, 2250025. [Google Scholar] [CrossRef]
  66. Li, M.-H.; Lv, D.-Y.; Lu, Y.-Y.; Wang, J.-B. Scheduling with group technology, resource allocation, and learning effect simultaneously. Mathematics 2024, 12, 1029. [Google Scholar] [CrossRef]
  67. Li, M.-H.; Lv, D.-Y.; Lv, Z.-G.; Zhang, L.-H.; Wang, J.-B. A two-agent resource allocation scheduling problem with slack due-date assignment and general deterioration function. Comput. Appl. Math. 2024, 43, 229. [Google Scholar] [CrossRef]
  68. Lv, D.-Y.; Wang, J.-B. Single-machine group technology scheduling with resource allocation and slack due-window assignment including minmax criterion. J. Oper. Res. Soc. 2024. [Google Scholar] [CrossRef]
  69. Wang, X.Y.; Liu, W.G. Optimal different due-dates assignment scheduling with group technology and resource allocation. Mathematics 2024, 12, 436. [Google Scholar] [CrossRef]
  70. Wang, X.Y.; Liu, W.G. Single machine group scheduling jobs with resource allocations subject to unrestricted due date assignments. J. Appl. Math. Comput. 2024, 70, 6283–6308. [Google Scholar] [CrossRef]
  71. Yin, N.; Gao, M. Single-machine group scheduling with general linear deterioration and truncated learning effects. Comput. Appl. Math. 2024, 43, 386. [Google Scholar] [CrossRef]
  72. Zhang, Z.Q.; Xu, Y.X.; Qian, B.; Hu, R.; Wu, F.C.; Wang, L. An enhanced estimation of distribution algorithm with problem-specific knowledge for distributed no-wait flowshop group scheduling problems. Swarm Evol. Comput. 2024, 87, 101559. [Google Scholar] [CrossRef]
  73. Wang, B.T.; Pan, Q.K.; Gao, L.; Li, W.M. The paradoxes, accelerations and heuristics for a constrained distributed flowshop group scheduling problem. Comput. Ind. Eng. 2024, 196, 110465. [Google Scholar] [CrossRef]
  74. Han, Z.D.; Zhang, B.; Sang, H.Y.; Lu, C.; Meng, L.L.; Zou, W.Q. Optimising distributed heterogeneous flowshop group scheduling arising from PCB mounting: Integrating construction and improvement heuristics. Int. J. Prod. Res. 2025, 63, 1753–1778. [Google Scholar] [CrossRef]
  75. Li, M.; Goossens, D. Grouping and scheduling multiple sports leagues: An integrated approach. J. Oper. Res. Soc. 2025, 76, 739–757. [Google Scholar] [CrossRef]
  76. Miao, J.-D.; Lv, D.-Y.; Wei, C.-M.; Wang, J.-B. Research on group scheduling with general logarithmic deterioration subject to maximal completion time cost. Axioms 2025, 14, 153. [Google Scholar] [CrossRef]
  77. Browne, S.; Yechiali, U. Scheduling deteriorating jobs on a single processor. Oper. Res. 1990, 38, 495–498. [Google Scholar] [CrossRef]
  78. Bachman, A.; Janiak, A.; Kovalyov, M.Y. Minimizing the total weighted completion time of deteriorating jobs. Inf. Process. Lett. 2002, 81, 81–84. [Google Scholar] [CrossRef]
  79. Nawaz, M.; Enscore, E.E.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar] [CrossRef]
  80. Li, M.-H.; Lv, D.-Y.; Zhang, L.-H.; Wang, J.-B. Permutation flow shop scheduling with makespan objective and truncated learning effects. J. Appl. Math. Comput. 2024, 70, 2907–2939. [Google Scholar] [CrossRef]
  81. Mor, B.; Shapira, D. Single machine scheduling with non-availability interval and optional job rejection. J. Comb. Optim. 2022, 44, 480–497. [Google Scholar] [CrossRef]
  82. Geng, X.-N.; Sun, X.; Wang, J.-Y.; Pan, L. Scheduling on proportionate flow shop with job rejection and common due date assignment. Comput. Ind. Eng. 2023, 181, 109317. [Google Scholar] [CrossRef]
  83. Chen, R.-X.; Li, S.-S. Two-machine job shop scheduling with optional job rejection. Optim. Lett. 2024, 18, 1593–1618. [Google Scholar] [CrossRef]
  84. Wang, J.-B.; Bao, H.; Wan, C. Research on multiple slack due-date assignments scheduling with position-dependent weights. Asia-Pac. J. Oper. Res. 2024, 41, 2350039. [Google Scholar] [CrossRef]
  85. Wang, J.-B.; Lv, D.-Y.; Wan, C. Proportionate flow shop scheduling with job-dependent due-windows and position-dependent weights. Asia-Pac. J. Oper. Res. 2025, 42, 2450011. [Google Scholar] [CrossRef]
  86. Wang, J.-J.; Wang, L. Decoding methods for the flow shop scheduling with peak power consumption constraints. Int. J. Prod. Res. 2019, 57, 3200–3218. [Google Scholar] [CrossRef]
  87. Arani, M.; Momenitabar, M.; Priyanka, T.J. Unrelated parallel machine scheduling problem considering job splitting, inventories, shortage, and resource: A meta-heuristic approach. Systems 2024, 12, 37. [Google Scholar] [CrossRef]
Figure 1. Flowchart of Algorithm 1.
Figure 2. Flowchart of Algorithm 2.
Figure 3. Flowchart of the $\widetilde{BandB}$ algorithm.
Figure 4. Flowchart of the $\widetilde{MNEH}$ algorithm.
Figure 5. Flowchart of the $\widetilde{SA}$ algorithm.
Figure 6. The 95% confidence intervals for the overall error bounds of the heuristic algorithms. (a) Mean error. (b) Max error.
Table 1. Results of GroupT.
| Reference | Objective Function | Setup Time | Job-Processing Time | Solution Method |
| Kuo and Yang [50] | Makespan | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Polynomial time algorithm |
| | Total completion time | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Heuristic algorithm |
| Wu et al. [51] | Makespan | DeterE, $\tilde{S}_{h}=\mu_h+\nu s$ | DeterE, $\tilde{P}_{h,l}=p_{h,l}+bs$ | Polynomial time algorithm |
| | Total completion time | DeterE, $\tilde{S}_{h}=\mu_h+\nu s$ | DeterE, $\tilde{P}_{h,l}=p_{h,l}+bs$ | Heuristic algorithm |
| Lee and Wu [52] | Makespan | LearnE | LearnE | Polynomial time algorithm |
| | Total completion time | LearnE | LearnE | Polynomial time algorithm under a special condition |
| Yang and Yang [53] | Makespan | DeterE, $\tilde{S}_{h}=\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}r^{a_h}$ | Polynomial time algorithm |
| | Makespan | DeterE, $\tilde{S}_{h}=\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Polynomial time algorithm |
| | Total completion time | DeterE, $\tilde{S}_{h}=\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}r^{a_h}$ | Polynomial time algorithm under a special condition |
| | Total completion time | DeterE, $\tilde{S}_{h}=\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Polynomial time algorithm under a special condition |
| Kuo [54] | Makespan | LearnE, $\tilde{S}_{h}^{A}=\mu_h q^{b_h}$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Polynomial time algorithm |
| | Total completion time | LearnE, $\tilde{S}_{h}^{A}=\mu_h q^{b_h}$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(1+\sum_{f=1}^{r-1}p_{h,[f]}\big)^{a_h}$ | Heuristic algorithm |
| He and Sun [55] | Makespan | DeterE, $\tilde{S}_{h}=\mu_h+\nu_h s$ | LearnE-DeterE, $\tilde{P}_{h,l}=(p_{h,l}+bs)r^{a}$ | Polynomial time algorithm |
| | Total completion time | DeterE, $\tilde{S}_{h}=\mu_h+\nu_h s$ | LearnE-DeterE, $\tilde{P}_{h,l}=(p_{h,l}+bs)r^{a}$ | Polynomial time algorithm under a special condition |
| Fan et al. [56] | Makespan | s-batch, DeterE, $\tilde{S}_{h}=\nu_h s$ | LearnE-DeterE, $\tilde{P}_{h,l}=p_{h,l}\max\{r^{a},\xi\}+bs$ | Heuristic algorithm |
| Huang [57] | Maximum cost on the set of schedules minimizing total weighted completion time | DeterE, $\tilde{S}_{h}=\nu_h(e+bs)$ | DeterE, $\tilde{P}_{h,l}=p_{h,l}(e+bs)$ | Polynomial time algorithm |
| Liu et al. [58] | Makespan | constant number | DeterE, $\tilde{P}_{h,l}=p_{h,l}(e+bs)$ | Heuristic algorithm, branch-and-bound algorithm |
| Sun et al. [60] | Makespan | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A+Bz^{\sum_{f=1}^{r-1}p_{h,[f]}}\big)r^{a}$ | Polynomial time algorithm |
| | Total completion time | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A+Bz^{\sum_{f=1}^{r-1}p_{h,[f]}}\big)r^{a}$ | Polynomial time algorithm |
| | Total weighted completion time | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A+Bz^{\sum_{f=1}^{r-1}p_{h,[f]}}\big)r^{a}$ | Polynomial time algorithm under a special condition |
| Liu [62] | Earliness–tardiness cost | constant number | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\beta_h(r)$ | Polynomial time algorithm |
| Miao et al. [76] | Makespan | DeterE | DeterE | Polynomial time algorithm |
| Ning and Sun [19] | Makespan | DeterE, $\tilde{S}_{h}=\mu_h+\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A\alpha_h^{\sum_{f=1}^{r-1}p_{h,[f]}}+B\beta_h(r)\big)$ | Polynomial time algorithm |
| | Total (weighted) completion time | DeterE, $\tilde{S}_{h}=\mu_h+\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A\alpha_h^{\sum_{f=1}^{r-1}p_{h,[f]}}+B\beta_h(r)\big)$ | Polynomial time algorithm under a special condition |
| This paper | Total weighted completion time | DeterE, $\tilde{S}_{h}=\mu_h+\nu_h s$ | LearnE, $\tilde{P}_{h,l}^{r}=p_{h,l}\big(A\alpha_h^{\sum_{f=1}^{r-1}p_{h,[f]}}+B\beta_h(r)\big)$ | Heuristic algorithm, simulated annealing, branch-and-bound algorithm |
$a\le 0$, $a_h\le 0$, $b_h\le 0$, and $0<z\le 1$ are learning factors; $0<\xi<1$ is a truncation parameter; $\nu\ge 0$ is a deterioration factor; $e\ge 0$ is a constant; $q$ is the position in which a group is scheduled; s-batch denotes a serial-batching problem; $\tilde{P}_{h,l}$ is the actual processing time of $J_{h,l}$.
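To make the last row of Table 1 concrete, the following minimal Python sketch evaluates one fixed schedule (a group order and a job order within each group) under the setup-deterioration and job-learning forms reconstructed above. The specific choice $\beta_h(r)=r^{a_h}$ and all numeric parameter values are illustrative assumptions for this sketch, not data or code from the paper.

```python
# Minimal sketch (not the authors' code): total weighted completion time of one
# schedule under S~_h = mu_h + nu_h * s (s = setup starting time) and
# P~_{h,l}(r) = p_{h,l} * (A * alpha_h**(sum of prior normal times) + B * beta_h(r)),
# with beta_h(r) = r**a_h assumed as an illustrative position-based factor.

def total_weighted_completion_time(groups, A=1.0, B=1.0):
    """groups: list of dicts in the chosen group order. Each dict holds
    mu (basic setup time), nu (setup deterioration rate),
    alpha (0 < alpha <= 1), a (a <= 0), and jobs: list of (p, w) pairs
    in the chosen job order."""
    t = 0.0          # current time on the single machine
    objective = 0.0  # running total weighted completion time
    for g in groups:
        # the group setup deteriorates with its starting time s = t
        t += g["mu"] + g["nu"] * t
        cum_p = 0.0  # sum of normal processing times already completed in this group
        for r, (p, w) in enumerate(g["jobs"], start=1):
            beta = r ** g["a"]                                 # assumed beta_h(r)
            actual_p = p * (A * g["alpha"] ** cum_p + B * beta)
            t += actual_p                                      # job completion time
            objective += w * t
            cum_p += p
    return objective

# toy instance with hypothetical data: two groups, three jobs in total
example = [
    {"mu": 4.0, "nu": 0.5, "alpha": 0.9, "a": -0.2, "jobs": [(3.0, 2.0), (5.0, 1.0)]},
    {"mu": 2.0, "nu": 1.0, "alpha": 0.8, "a": -0.3, "jobs": [(4.0, 3.0)]},
]
print(total_weighted_completion_time(example))
```

Such an evaluation routine is the building block shared by the branch-and-bound, modified NEH, and simulated annealing procedures, which differ only in how they search over group and job orders.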
Table 2. CPU time (ms) for $\mu_h,p_{h,l},w_{h,l}\sim U[1,50]$. For each $n\times g$ and each $\nu_h$ range, the nine values are the min/avg/max CPU time of $\widetilde{MNEH}$, of $\widetilde{SA}$, and of $\widetilde{BandB}$, in that order.
ν h [ 0.1 , 1 ] 263.1354.5467.9402.5432.6476.03808.719,792.075,060.7
60 × 11 ν h [ 1 , 2 ] 283.8392.7473.5400.9431.7461.4375.01530.73866.2
ν h [ 2 , 5 ] 277.4388.3471.6404.7436.1472.0175.5610.51434.9
ν h [ 0.1 , 1 ] 312.4390.5483.2413.9450.2480.97215.569,538.0279,511.2
60 × 12 ν h [ 1 , 2 ] 233.5393.9502.6400.1444.4484.7509.72433.86264.7
ν h [ 2 , 5 ] 265.5408.1506.4418.1448.6499.6163.01037.43886.2
ν h [ 0.1 , 1 ] 215.5401.8485.5438.4467.9507.715,793.4179,221.5697,682.1
60 × 13 ν h [ 1 , 2 ] 279.1399.7502.1436.4469.4521.61574.34878.014410.0
ν h [ 2 , 5 ] 303.4405.0529.6444.1472.4514.9414.21878.35713.7
ν h [ 0.1 , 1 ] 267.4390.9464.0424.6450.7516.054,801.6549,690.82,392,813.2
60 × 14 ν h [ 1 , 2 ] 277.2376.7459.7423.7430.8438.11524.37593.319,410.4
ν h [ 2 , 5 ] 269.0364.8469.3425.8441.7492.6647.24013.915,599.9
ν h [ 0.1 , 1 ] 406.7558.0716.6626.4669.3716.13921.631,745.468,998.1
80 × 11 ν h [ 1 , 2 ] 467.0612.6726.7639.5669.0720.11073.11869.43650.2
ν h [ 2 , 5 ] 417.7593.1713.9628.3676.3757.4750.21316.52027.9
ν h [ 0.1 , 1 ] 367.8564.5730.9621.5664.7726.818,034.9121,560.8546,209.1
80 × 12 ν h [ 1 , 2 ] 384.6567.4722.2622.2666.6736.51106.93602.78693.4
ν h [ 2 , 5 ] 379.5573.5727.8627.1658.5722.7412.51606.44862.8
ν h [ 0.1 , 1 ] 467.6614.7750.3655.3708.6773.147,222.5399,686.61,426,509.1
80 × 13 ν h [ 1 , 2 ] 486.0605.3744.7670.7719.3770.01244.34917.513,859.8
ν h [ 2 , 5 ] 461.7618.4765.9656.8700.0747.4863.63547.110708.3
ν h [ 0.1 , 1 ] 430.5573.1732.7636.0657.3740.847,602.91,014,303.52,944,199.4
80 × 14 ν h [ 1 , 2 ] 411.6603.9810.5673.3696.6759.31741.812,841.946,554.5
ν h [ 2 , 5 ] 395.4603.2788.1691.0737.9876.21002.05147.716,667.3
ν h [ 0.1 , 1 ] 643.5962.31251.21066.41123.91187.124,913.560,882.7186,142.1
100 × 11 ν h [ 1 , 2 ] 817.81020.01254.91074.31138.71189.46061.010,114.116,456.4
ν h [ 2 , 5 ] 697.3943.41260.71067.71137.01225.04190.29763.617,136.9
ν h [ 0.1 , 1 ] 669.6924.21193.11042.31109.91154.643,013.2161,464.9459,263.5
100 × 12 ν h [ 1 , 2 ] 575.1917.41219.41047.81115.11164.53508.08094.914,471.8
ν h [ 2 , 5 ] 667.5942.51150.91030.91112.71167.62581.25016.48090.7
ν h [ 0.1 , 1 ] 720.5895.31165.7956.21055.41208.192,587.4442,220.31,246,050.5
100 × 13 ν h [ 1 , 2 ] 641.7889.01090.0976.21027.11085.22657.010,081.429,144.3
ν h [ 2 , 5 ] 538.0828.21100.51001.61035.31086.12207.54644.57919.2
ν h [ 0.1 , 1 ] 473.3854.71117.6942.31035.71142.3144,583.61,498,611.19,996,035.7
100 × 14 ν h [ 1 , 2 ] 598.7837.71113.2975.31047.71094.63755.316,910.943,546.8
ν h [ 2 , 5 ] 658.5925.21188.11010.91067.81158.72499.88020.026,848.7
Table 3. Error for $\mu_h,p_{h,l},w_{h,l}\sim U[1,50]$. For each $n\times g$ and each $\nu_h$ range, the six values are the min/avg/max of $e_\rho(\widetilde{MNEH})$ and the min/avg/max of $e_\rho(\widetilde{SA})$, in that order.
ν h [ 0.1 , 1 ] 1.00001.01271.07211.00001.00361.0316
60 × 11 ν h [ 1 , 2 ] 1.00001.01601.19501.00001.00471.0551
ν h [ 2 , 5 ] 1.00001.00741.08291.00001.00681.0829
ν h [ 0.1 , 1 ] 1.00001.01701.10451.00001.00291.0287
60 × 12 ν h [ 1 , 2 ] 1.00001.01261.09841.00001.00631.0936
ν h [ 2 , 5 ] 1.00001.00931.11421.00001.00181.0261
ν h [ 0.1 , 1 ] 1.00001.00361.02371.00001.00201.0252
60 × 13 ν h [ 1 , 2 ] 1.00001.01671.16181.00001.00211.0249
ν h [ 2 , 5 ] 1.00001.00671.05931.00001.00091.0128
ν h [ 0.1 , 1 ] 1.00001.00931.07231.00001.00421.0464
60 × 14 ν h [ 1 , 2 ] 1.00001.00261.02211.00001.00241.0678
ν h [ 2 , 5 ] 1.00001.00851.21221.00001.00801.2121
ν h [ 0.1 , 1 ] 1.00001.01381.07651.00001.00311.0563
80 × 11 ν h [ 1 , 2 ] 1.00001.00311.03051.00001.00661.0817
ν h [ 2 , 5 ] 1.00001.00741.08181.00001.00241.0578
ν h [ 0.1 , 1 ] 1.00001.00971.07721.00001.00241.0299
80 × 12 ν h [ 1 , 2 ] 1.00001.01101.12401.00001.00351.0359
ν h [ 2 , 5 ] 1.00001.01141.14271.00001.00061.0124
ν h [ 0.1 , 1 ] 1.00001.01331.07661.00001.00211.0223
80 × 13 ν h [ 1 , 2 ] 1.00001.00261.05811.00001.00141.0109
ν h [ 2 , 5 ] 1.00001.02611.18261.00001.01721.1826
ν h [ 0.1 , 1 ] 1.00001.01271.08321.00001.00671.1386
80 × 14 ν h [ 1 , 2 ] 1.00001.00501.02811.00001.00281.0247
ν h [ 2 , 5 ] 1.00001.01851.10011.00001.01121.1001
ν h [ 0.1 , 1 ] 1.00001.02001.07571.00001.00551.0491
100 × 11 ν h [ 1 , 2 ] 1.00001.00621.14591.00001.00481.0456
ν h [ 2 , 5 ] 1.00001.00271.03871.00001.00201.0230
ν h [ 0.1 , 1 ] 1.00001.01601.15111.00001.00811.1511
100 × 12 ν h [ 1 , 2 ] 1.00001.00861.20211.00001.00281.0509
ν h [ 2 , 5 ] 1.00001.02431.24191.00001.01691.1877
ν h [ 0.1 , 1 ] 1.00001.00911.04981.00001.00471.0588
100 × 13 ν h [ 1 , 2 ] 1.00001.00291.03781.00001.00221.0192
ν h [ 2 , 5 ] 1.00001.00401.04201.00001.00371.0648
ν h [ 0.1 , 1 ] 1.00001.00961.05241.00001.00301.0210
100 × 14 ν h [ 1 , 2 ] 1.00001.00531.11381.00001.00121.0100
ν h [ 2 , 5 ] 1.00001.00671.14801.00001.00031.0044
Table 4. CPU time (ms) for $\mu_h,p_{h,l},w_{h,l}\sim U[51,100]$. For each $n\times g$ and each $\nu_h$ range, the nine values are the min/avg/max CPU time of $\widetilde{MNEH}$, of $\widetilde{SA}$, and of $\widetilde{BandB}$, in that order.
ν h [ 0.1 , 1 ] 269.6397.9488.5416.2445.3499.58885.633,664.296,546.1
60 × 11 ν h [ 1 , 2 ] 282.9396.4484.0423.6446.2478.6759.42835.26168.7
ν h [ 2 , 5 ] 312.6394.5490.7424.4449.2485.7337.51024.92329.0
ν h [ 0.1 , 1 ] 267.9381.9468.1419.7445.9485.557,453.6203,469.6843,915.7
60 × 12 ν h [ 1 , 2 ] 275.3376.0456.9414.7444.4471.22961.55430.910,209.0
ν h [ 2 , 5 ] 206.3374.9478.1412.7435.9481.1613.71856.55276.0
ν h [ 0.1 , 1 ] 276.0378.2494.4415.4443.2492.9109,707.8458,420.71,258,305.9
60 × 13 ν h [ 1 , 2 ] 286.6368.8480.4408.7436.6487.62424.68797.826,448.4
ν h [ 2 , 5 ] 269.0375.4481.2409.2438.2498.9380.23259.48612.4
ν h [ 0.1 , 1 ] 277.8379.8479.8416.9460.4541.5146,566.91,546,453.46,929,877.7
60 × 14 ν h [ 1 , 2 ] 280.1385.4493.5433.8455.9506.48227.725,103.347,670.7
ν h [ 2 , 5 ] 244.3370.9496.7429.5454.2513.71507.87019.619,387.9
ν h [ 0.1 , 1 ] 414.4575.3722.1625.8667.5720.821,585.356,004.2126,105.0
80 × 11 ν h [ 1 , 2 ] 442.2613.5735.3658.8686.3723.42837.44985.49074.5
ν h [ 2 , 5 ] 455.5605.5724.7655.9677.7716.11973.92976.74133.9
ν h [ 0.1 , 1 ] 434.3567.0688.6634.3662.2712.340,277.0200,779.11,028,145.8
80 × 12 ν h [ 1 , 2 ] 433.6592.0731.1631.0663.1700.42935.76589.716,168.7
ν h [ 2 , 5 ] 426.9596.7714.9627.2664.9707.71176.22805.97057.2
ν h [ 0.1 , 1 ] 380.5524.0806.0610.7653.3750.8137,932.8765,014.43,816,658.8
80 × 13 ν h [ 1 , 2 ] 395.9556.0668.7612.2624.6697.26312.514,194.826,548.5
ν h [ 2 , 5 ] 405.1528.1685.9611.0626.0713.51187.85114.316,269.6
ν h [ 0.1 , 1 ] 395.1610.0749.4695.4733.1787.0325,392.61,975,618.84,202,318.0
80 × 14 ν h [ 1 , 2 ] 441.4600.4735.2685.2715.3755.18800.927,008.772,611.3
ν h [ 2 , 5 ] 422.5583.6778.3672.0721.8762.92539.310,557.426,999.0
ν h [ 0.1 , 1 ] 717.9923.11152.61039.11090.61167.879,418.1138,441.1327,825.4
100 × 11 ν h [ 1 , 2 ] 644.3953.41151.91050.01093.01185.644,460.064,729.291,622.7
ν h [ 2 , 5 ] 624.8933.71197.81043.61093.31206.348,520.767,937.8148,427.0
ν h [ 0.1 , 1 ] 690.9909.91163.5993.91068.61160.287,321.5326,825.5646,343.8
100 × 12 ν h [ 1 , 2 ] 559.7873.01171.9959.21062.81154.721,499.430,558.745,799.9
ν h [ 2 , 5 ] 585.4876.71068.1934.21080.91232.516,890.524,026.633,192.4
ν h [ 0.1 , 1 ] 682.1905.01206.11009.11093.11236.2285,790.3942,216.13,447,781.9
100 × 13 ν h [ 1 , 2 ] 548.4911.71158.31018.21070.01132.59340.022,170.365,328.4
ν h [ 2 , 5 ] 674.0908.21121.11020.71062.91136.77253.311,856.335,877.5
ν h [ 0.1 , 1 ] 544.6884.31192.21005.41056.11143.0570,380.03,204,244.27,290,899.2
100 × 14 ν h [ 1 , 2 ] 671.3917.01192.7978.01089.41150.415,004.437,006.154,935.1
ν h [ 2 , 5 ] 683.9899.31204.61065.71134.71356.34293.913,242.124,182.8
Table 5. Error for $\mu_h,p_{h,l},w_{h,l}\sim U[51,100]$. For each $n\times g$ and each $\nu_h$ range, the six values are the min/avg/max of $e_\rho(\widetilde{MNEH})$ and the min/avg/max of $e_\rho(\widetilde{SA})$, in that order.
ν h [ 0.1 , 1 ] 1.00001.00851.04991.00001.00321.0333
60 × 11 ν h [ 1 , 2 ] 1.00001.00761.04611.00001.00071.0037
ν h [ 2 , 5 ] 1.00001.00561.04471.00001.00151.0233
ν h [ 0.1 , 1 ] 1.00001.00731.03701.00001.00181.0173
60 × 12 ν h [ 1 , 2 ] 1.00001.00651.05251.00001.00231.0258
ν h [ 2 , 5 ] 1.00001.01141.08781.00001.00611.0675
ν h [ 0.1 , 1 ] 1.00001.01301.07451.00001.00581.0367
60 × 13 ν h [ 1 , 2 ] 1.00001.00931.06891.00001.00581.0588
ν h [ 2 , 5 ] 1.00001.01451.21611.00001.00421.0493
ν h [ 0.1 , 1 ] 1.00001.02081.11881.00001.00171.0319
60 × 14 ν h [ 1 , 2 ] 1.00001.00911.08201.00001.00581.0830
ν h [ 2 , 5 ] 1.00001.00321.07151.00001.00111.0212
ν h [ 0.1 , 1 ] 1.00001.01401.07591.00001.00131.0210
80 × 11 ν h [ 1 , 2 ] 1.00001.00721.05701.00001.00271.0252
ν h [ 2 , 5 ] 1.00001.00721.07731.00001.00231.0258
ν h [ 0.1 , 1 ] 1.00001.01341.07091.00001.00401.0365
80 × 12 ν h [ 1 , 2 ] 1.00001.00821.06721.00001.00391.0392
ν h [ 2 , 5 ] 1.00001.00881.10011.00001.00741.1183
ν h [ 0.1 , 1 ] 1.00001.01441.06061.00001.00431.0606
80 × 13 ν h [ 1 , 2 ] 1.00001.00621.03461.00001.00571.0624
ν h [ 2 , 5 ] 1.00001.00461.05431.00001.00251.0312
ν h [ 0.1 , 1 ] 1.00001.01351.05471.00001.00491.0465
80 × 14 ν h [ 1 , 2 ] 1.00001.00861.06451.00001.00171.0177
ν h [ 2 , 5 ] 1.00001.00371.03251.00001.00201.0258
ν h [ 0.1 , 1 ] 1.00001.00591.04241.00001.00091.0102
100 × 11 ν h [ 1 , 2 ] 1.00001.00471.04951.00001.00041.0112
ν h [ 2 , 5 ] 1.00001.00631.07681.00001.00421.0770
ν h [ 0.1 , 1 ] 1.00001.00791.05311.00001.00101.0107
100 × 12 ν h [ 1 , 2 ] 1.00001.01051.08021.00001.00261.0382
ν h [ 2 , 5 ] 1.00001.00221.01981.00001.00081.0106
ν h [ 0.1 , 1 ] 1.00001.00811.04091.00001.00181.0180
100 × 13 ν h [ 1 , 2 ] 1.00001.00531.03761.00001.00471.0869
ν h [ 2 , 5 ] 1.00001.00831.09191.00001.00191.0380
ν h [ 0.1 , 1 ] 1.00001.00511.02361.00001.00071.0192
100 × 14 ν h [ 1 , 2 ] 1.00001.01031.07151.00001.00461.0332
ν h [ 2 , 5 ] 1.00001.00591.04501.00001.00301.0348
Table 6. CPU time (ms) for $\mu_h,p_{h,l},w_{h,l}\sim U[1,100]$. For each $n\times g$ and each $\nu_h$ range, the nine values are the min/avg/max CPU time of $\widetilde{MNEH}$, of $\widetilde{SA}$, and of $\widetilde{BandB}$, in that order.
ν h [ 0.1 , 1 ] 289.8404.5513.5412.0454.8503.06800.026,169.495,899.8
60 × 11 ν h [ 1 , 2 ] 325.4397.4487.3408.5455.4481.6349.31767.36501.5
ν h [ 2 , 5 ] 294.7401.4501.9413.8444.7484.0208.2566.62013.7
ν h [ 0.1 , 1 ] 271.2389.9476.2404.7445.2485.05813.582,707.1337,071.4
60 × 12 ν h [ 1 , 2 ] 314.1390.5472.6405.8437.8512.7815.22678.36255.1
ν h [ 2 , 5 ] 284.2364.2479.7407.9437.0464.9234.81117.93235.0
ν h [ 0.1 , 1 ] 292.6405.5489.6415.1448.2485.912,909.2192,160.6668,581.9
60 × 13 ν h [ 1 , 2 ] 294.0394.6468.1421.5443.8474.31167.04782.013,883.9
ν h [ 2 , 5 ] 266.8396.9506.4419.6448.5518.1241.42219.27843.8
ν h [ 0.1 , 1 ] 253.0406.9546.4436.2476.6535.392,349.2702,519.72,408,043.7
60 × 14 ν h [ 1 , 2 ] 231.2413.8541.6438.4479.1522.83475.78926.531,120.8
ν h [ 2 , 5 ] 285.0403.9546.7441.5477.9513.11298.15205.916,778.7
ν h [ 0.1 , 1 ] 471.1596.3707.4651.1688.1733.67864.230,476.586,714.9
80 × 11 ν h [ 1 , 2 ] 426.0601.2741.6639.3683.9745.6941.92290.24192.7
ν h [ 2 , 5 ] 373.7633.2765.0656.1698.4757.7581.21211.22264.9
ν h [ 0.1 , 1 ] 443.4586.5714.6642.6679.3726.311,887.093,614.4279,356.5
80 × 12 ν h [ 1 , 2 ] 388.9589.4749.3646.9683.7760.31206.43490.810,679.2
ν h [ 2 , 5 ] 385.6582.6787.5642.0683.4779.9419.81653.34856.7
ν h [ 0.1 , 1 ] 381.0525.6670.2607.0627.9712.151,369.0299,910.72,290,449.0
80 × 13 ν h [ 1 , 2 ] 383.9523.6663.4612.5622.4709.81513.74996.910,659.8
ν h [ 2 , 5 ] 397.8517.3664.4612.1620.5632.1429.03226.89250.4
ν h [ 0.1 , 1 ] 448.3633.9839.9688.4754.3827.481,601.7868,200.74,336,156.5
80 × 14 ν h [ 1 , 2 ] 404.4608.4777.0685.4723.0803.62927.810,756.025,702.0
ν h [ 2 , 5 ] 390.4606.9769.5690.2717.9748.11631.24337.712,804.6
ν h [ 0.1 , 1 ] 592.7804.41028.4950.9962.6980.515,560.453,766.8138,945.5
100 × 11 ν h [ 1 , 2 ] 599.8783.31003.5948.5963.5993.95175.48503.715,397.8
ν h [ 2 , 5 ] 624.8822.71025.7954.9964.5999.04130.36751.715,606.9
ν h [ 0.1 , 1 ] 598.0798.3986.6934.0943.0966.715,778.4117,011.7369,020.5
100 × 12 ν h [ 1 , 2 ] 555.8813.81013.8932.5949.11021.93837.26704.212,474.9
ν h [ 2 , 5 ] 581.2802.11042.8930.7947.21024.62308.74190.07094.4
ν h [ 0.1 , 1 ] 581.7778.01046.0933.6943.6983.842,711.4349,700.0994,156.8
100 × 13 ν h [ 1 , 2 ] 480.9780.41012.5933.1946.1996.93702.410,104.324,190.5
ν h [ 2 , 5 ] 678.9812.61023.6933.0952.61010.91318.95107.313,438.8
ν h [ 0.1 , 1 ] 576.7894.31228.91019.01065.21153.9117415.41,413,749.37,564,972.4
100 × 14 ν h [ 1 , 2 ] 646.3910.51236.61068.31116.11175.05011.313,764.545,470.0
ν h [ 2 , 5 ] 556.8916.11144.01072.01125.71223.32487.18272.423,057.8
Table 7. Error for $\mu_h,p_{h,l},w_{h,l}\sim U[1,100]$. For each $n\times g$ and each $\nu_h$ range, the six values are the min/avg/max of $e_\rho(\widetilde{MNEH})$ and the min/avg/max of $e_\rho(\widetilde{SA})$, in that order.
ν h [ 0.1 , 1 ] 1.00001.01141.09721.00001.00321.0731
60 × 11 ν h [ 1 , 2 ] 1.00001.00531.04641.00001.00191.0312
ν h [ 2 , 5 ] 1.00001.00351.05821.00001.00191.0582
ν h [ 0.1 , 1 ] 1.00001.01541.12651.00001.00511.1192
60 × 12 ν h [ 1 , 2 ] 1.00001.00821.06581.00001.00431.0658
ν h [ 2 , 5 ] 1.00001.00411.07101.00001.00531.1344
ν h [ 0.1 , 1 ] 1.00001.02841.15791.00001.00411.0502
60 × 13 ν h [ 1 , 2 ] 1.00001.02241.46751.00001.00311.0479
ν h [ 2 , 5 ] 1.00001.01451.16041.00001.00021.0046
ν h [ 0.1 , 1 ] 1.00001.01691.09831.00001.00771.0983
60 × 14 ν h [ 1 , 2 ] 1.00001.01611.13331.00001.00921.1330
ν h [ 2 , 5 ] 1.00001.00781.17441.00001.00741.1680
ν h [ 0.1 , 1 ] 1.00001.01671.10221.00001.00341.0427
80 × 11 ν h [ 1 , 2 ] 1.00001.00461.03991.00001.00201.0551
ν h [ 2 , 5 ] 1.00001.01111.07541.00001.00741.0754
ν h [ 0.1 , 1 ] 1.00001.01071.06181.00001.00431.0477
80 × 12 ν h [ 1 , 2 ] 1.00001.00881.08211.00001.00541.0686
ν h [ 2 , 5 ] 1.00001.00271.03621.00001.00831.1918
ν h [ 0.1 , 1 ] 1.00001.02151.10331.00001.00341.0573
80 × 13 ν h [ 1 , 2 ] 1.00001.00401.03221.00001.00541.1336
ν h [ 2 , 5 ] 1.00001.02371.19721.00001.00331.0767
ν h [ 0.1 , 1 ] 1.00001.01091.11311.00001.00611.0571
80 × 14 ν h [ 1 , 2 ] 1.00001.00671.11831.00001.00501.1183
ν h [ 2 , 5 ] 1.00001.02151.19721.00001.00381.0845
ν h [ 0.1 , 1 ] 1.00001.01071.08461.00001.00201.0116
100 × 11 ν h [ 1 , 2 ] 1.00001.00521.03911.00001.00471.0391
ν h [ 2 , 5 ] 1.00001.00841.07901.00001.00071.0166
ν h [ 0.1 , 1 ] 1.00001.00501.05661.00001.00131.0179
100 × 12 ν h [ 1 , 2 ] 1.00001.00611.04531.00001.00281.0266
ν h [ 2 , 5 ] 1.00001.01011.21991.00001.00741.1955
ν h [ 0.1 , 1 ] 1.00001.01531.16591.00001.00261.0202
100 × 13 ν h [ 1 , 2 ] 1.00001.00741.05581.00001.00641.0860
ν h [ 2 , 5 ] 1.00001.00641.10391.00001.00041.0097
ν h [ 0.1 , 1 ] 1.00001.01731.11561.00001.00421.0651
100 × 14 ν h [ 1 , 2 ] 1.00001.00851.08901.00001.00851.0728
ν h [ 2 , 5 ] 1.00001.00391.06351.00001.00301.0632
Table 8. The 95% confidence intervals for heuristic algorithms ($\mu_h,p_{h,l},w_{h,l}\sim U[1,50]$).
| Algorithm | n | Mean Error | 95% CI Lower Bound | 95% CI Upper Bound | 95% CI Span |
| $\widetilde{MNEH}$ | 60 | 1.01295 | 1.00736 | 1.01853 | 0.011 |
| | 80 | 1.01070 | 1.00539 | 1.01601 | 0.011 |
| | 100 | 1.01629 | 1.00718 | 1.02539 | 0.018 |
| $\widetilde{SA}$ | 60 | 1.00366 | 1.00107 | 1.00626 | 0.005 |
| | 80 | 1.00220 | 1.00074 | 1.00366 | 0.003 |
| | 100 | 1.00925 | 1.00273 | 1.01577 | 0.013 |
Table 9. The 95% confidence intervals for heuristic algorithms ($\mu_h,p_{h,l},w_{h,l}\sim U[51,100]$).
| Algorithm | n | Mean Error | 95% CI Lower Bound | 95% CI Upper Bound | 95% CI Span |
| $\widetilde{MNEH}$ | 60 | 1.00841 | 1.00508 | 1.01174 | 0.007 |
| | 80 | 1.01013 | 1.00620 | 1.01407 | 0.008 |
| | 100 | 1.00690 | 1.00416 | 1.00964 | 0.005 |
| $\widetilde{SA}$ | 60 | 1.00341 | 1.00127 | 1.00554 | 0.004 |
| | 80 | 1.00510 | 1.00187 | 1.00832 | 0.006 |
| | 100 | 1.00143 | 1.00042 | 1.00042 | 0.002 |
Table 10. The 95% confidence intervals for heuristic algorithms ($\mu_h,p_{h,l},w_{h,l}\sim U[1,100]$).
| Algorithm | n | Mean Error | 95% CI Lower Bound | 95% CI Upper Bound | 95% CI Span |
| $\widetilde{MNEH}$ | 60 | 1.00920 | 1.00475 | 1.01365 | 0.009 |
| | 80 | 1.00741 | 1.00416 | 1.01067 | 0.007 |
| | 100 | 1.00707 | 1.00189 | 1.01225 | 0.010 |
| $\widetilde{SA}$ | 60 | 1.00270 | 1.00086 | 1.00454 | 0.004 |
| | 80 | 1.00406 | 1.00163 | 1.00648 | 0.005 |
| | 100 | 1.00166 | 1.00060 | 1.00271 | 0.002 |
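For reference, the intervals in Tables 8–10 can be reproduced from per-instance error ratios along the lines sketched below. The definition of $e_\rho(\cdot)$ as the ratio of a heuristic's objective value to the $\widetilde{BandB}$ value, and the normal-approximation interval, are assumptions about the tabulation; the sample values in the usage example are hypothetical.

```python
# Minimal sketch (an assumed tabulation procedure, not the authors' code):
# relative error of a heuristic and a 95% confidence interval for its mean.

from statistics import NormalDist, mean, stdev

def relative_error(heuristic_value, exact_value):
    # e_rho >= 1, with 1.0000 meaning the heuristic found the optimal schedule
    return heuristic_value / exact_value

def confidence_interval(errors, level=0.95):
    m = mean(errors)
    z = NormalDist().inv_cdf(0.5 + level / 2)            # ~1.96 for a 95% level
    half_width = z * stdev(errors) / len(errors) ** 0.5  # normal approximation
    return m, m - half_width, m + half_width, 2 * half_width

# hypothetical error ratios collected over several random instances of one size
errors = [1.0000, 1.0036, 1.0127, 1.0000, 1.0047]
print(confidence_interval(errors))
```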