Article

Research on Group Scheduling with General Logarithmic Deterioration Subject to Maximal Completion Time Cost

1 School of Economics and Management, Shenyang Aerospace University, Shenyang 110136, China
2 School of Mechatronics Engineering, Shenyang Aerospace University, Shenyang 110136, China
3 Key Laboratory of Rapid Development & Manufacturing Technology for Aircraft (Shenyang Aerospace University), Ministry of Education, Shenyang 110136, China
4 School of Mathematics and Computer, Shantou University, Shantou 515063, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2025, 14(3), 153; https://doi.org/10.3390/axioms14030153
Submission received: 13 January 2025 / Revised: 14 February 2025 / Accepted: 19 February 2025 / Published: 20 February 2025
(This article belongs to the Special Issue Mathematical Optimizations and Operations Research)

Abstract

Single-machine group scheduling with general logarithmic deterioration is investigated, where the actual processing time of a job (resp. setup time of a group) is a non-decreasing function of the sum of the logarithms of the normal processing times of the jobs (resp. normal setup times of the groups) already processed. Using some optimal properties, it is shown that the maximal completion time (i.e., makespan) cost can be minimized in polynomial time, and the optimal algorithm is presented. In addition, an extension to a general weighted deterioration model is given.

1. Introduction

In many real production processes, group technology (denoted by grotec) is very important for reducing production costs and improving efficiency (Kuo and Yang [1], Lee and Wu [2], Kuo [3], He and Sun [4], Zhang et al. [5], Liu et al. [6], and Huang [7]). Under a single-machine setting, in 2021, Wang and Ye [8] considered stochastic grotec scheduling with learning effects. For expected total completion time minimization, they proposed heuristic algorithms. In 2022, Qian and Zhan [9] studied grotec scheduling with a learning (aging) effect. For the total completion time (makespan) minimization, they presented a polynomial time algorithm. In 2023, Liu and Wang [10] and Chen et al. [11] considered resource allocation scheduling with grotec. In 2024, Li et al. [12] addressed grotec scheduling with resource allocation and a learning effect. For the non-regular cost of a common due date assignment, they proposed some heuristics. Wang and Liu [13,14] investigated grotec scheduling with resource allocation. For the non-regular cost of different due dates, Wang and Liu [13] proved that a special case is polynomially solvable. For the general case, Wang and Liu [14] proposed some solution algorithms. Yin and Gao [15] studied grotec scheduling with deterioration effects (learning effects) of group setup times (job processing times). Lv and Wang [16] considered grotec scheduling with resource allocation. For a minmax criterion of slack due window assignment, they proposed some solution algorithms.
In addition, a common assumption in scheduling is that the job processing (group setup) times are subject to deterioration effects (Gawiejnowicz [17] and Strusevich and Rustogi [18]). Generally, there are two deterioration models: one is the time-dependent deterioration effect (denoted by TDDE; Shabtay and Mor [19], Sun et al. [20], Wang et al. [21], Zhang et al. [22], Lu et al. [23], and Qiu and Wang [24]) and the other is the position-dependent deterioration (resp. learning) effect (denoted by DDDE (resp. DDLE)). Under the DDLE, the processing times of jobs are a non-increasing function of their positions in a sequence (see Koulamas and Kyparisis [25], Zhao [26], Azzouz et al. [27], Jiang et al. [28], Liu and Wang [29], Gerstl and Mosheiov [30], Cohen and Shapira [31], and Zhang et al. [32]). Under the DDDE, the processing times of jobs are a non-decreasing function of their positions in a sequence (see Gerstl and Mosheiov [30], Cohen and Shapira [31], Vitaly and Strusevich [33], Montoya-Torres et al. [34], Saavedra-Nieves et al. [35], and Hu et al. [36]). Real-world examples of the DDDE can be found in steel production (see Liu et al. [37]), semiconductor production (see Sloan and Shanthikumar [38]), scheduling derusting operations (see Gawiejnowicz et al. [39]), and production systems that use cooling systems or cutting tools (see Bajestani et al. [40]). Mosheiov [41] considered the following model: if job $J_j$ is placed in the $r$th position of a sequence, the actual processing time of $J_j$ is
$$p_j^A = p_j r^{\alpha_{Mosh}},$$
where $\alpha_{Mosh} \ge 0$ (resp. $\alpha_{Mosh} \le 0$) is a deterioration (resp. learning, see Biskup [42], Mosheiov [43], and Biskup [44]) index. Gordon et al. [45] considered the following model:
$$p_j^A = p_j \alpha_{Gord}^{\,r-1},$$
where $\alpha_{Gord} \ge 1$ (resp. $0 < \alpha_{Gord} \le 1$) is a deterioration (resp. learning) index and $p_j$ is the normal processing time of job $J_j$. Wang et al. [46] considered the following model: if job $J_j$ is placed in the $l$th position, the actual processing time of $J_j$ is
$$p_j^A = p_j \left(1 + \sum_{k=1}^{l-1} p_{[k]}\right)^{\alpha_{Wang}},$$
where $\alpha_{Wang} \ge 0$ (resp. $\alpha_{Wang} \le 0$) is a deterioration (resp. learning, see Kuo and Yang [47]) index, and $[k]$ denotes the job scheduled in the $k$th position. Lee et al. [48] considered the following model:
$$p_j^A = p_j \left(1 + \frac{\sum_{k=1}^{l-1} p_{[k]}}{\sum_{k=1}^{n} p_k}\right)^{\alpha_{Lee}},$$
where $0 \le \alpha_{Lee} \le 1$ is a deterioration index. Huang and Wang [49] considered the following model:
$$p_j^A = p_j \left(1 + \sum_{k=1}^{l-1} \zeta_k p_{[k]}\right)^{\alpha_{Huang}},$$
where $\alpha_{Huang} \ge 1$ (resp. $\alpha_{Huang} \le 0$) is a deterioration (resp. learning, see Yang et al. [50]) index, $\zeta_k$ is the weight of the $k$th position, and $0 < \zeta_n \le \zeta_{n-1} \le \cdots \le \zeta_2 \le \zeta_1$.
In view of the importance of group technology, in this paper, we focus on another group scheduling model with logarithmic/weighted DDDE, that is, one in which a job processing (group setup) time cannot deteriorate quickly. The main contributions of this paper are as follows: (i) single-machine grotec scheduling with logarithmic/weighted DDDE is modeled and studied; (ii) for maximal completion time (i.e., makespan) minimization, some optimal properties of group and job sequences are given; (iii) we show that the problem is polynomially solvable, and verify the performance through some examples. The rest of this paper is organized as follows. In Section 2, we present the model. In Section 3, we show that the problem with logarithmic DDDE is polynomially solvable. In Section 4, an extension with weighted DDDE is given. In Section 5, we report the results of numerical tests. In Section 6, we conclude the paper.

2. Problem Formulation

Assume that $n$ jobs are divided into $m$ groups to be processed on one machine, and all jobs are available at time 0. Let the number of jobs in the $i$th group (i.e., group $G_i$) be $n_i$, where $n_1 + n_2 + \cdots + n_m = n$. Let $J_{ij}$ denote the $j$th job in $G_i$, $p_{ij}$ denote the normal processing time of $J_{ij}$, and $s_i$ denote the normal setup time of $G_i$. As in Lee et al. [48] and Cheng et al. [51], we define a general logarithmic deterioration model: if job $J_{ij}$ is placed in the $l$th position of $G_i$, the actual processing time of $J_{ij}$ is as follows:
$$p_{ij}^A = p_{ij}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{l-1} \ln p_{i[k]}}{\sum_{k=1}^{n_i} p_{ik}}\right)^{a_i}\right], \quad (1)$$
where $0 < M < 1$ is a given constant, $0 \le a_i \le 1$ is the job-deterioration index for $G_i$, and $p_{ij} \ge e$ (i.e., "ln" is the natural logarithm and $\ln p_{ij} \ge 1$). Similarly, if group $G_i$ is placed in the $r$th position, the actual setup time is as follows:
$$s_i^A = s_i\left[N + (1-N)\left(1 + \frac{\sum_{l=1}^{r-1} \ln s_{[l]}}{\sum_{l=1}^{m} s_l}\right)^{b}\right], \quad (2)$$
where $0 < N < 1$ is a given constant, $0 \le b \le 1$ is the group-deterioration index for the setup times, and $s_i \ge e$ (i.e., $\ln s_i \ge 1$). Let $C_{ij}$ be the completion time of $J_{ij}$. The goal of this paper is to minimize the maximal completion time (i.e., makespan, $C_{\max} = \max\{C_{ij} \mid i = 1, \ldots, m;\ j = 1, \ldots, n_i\}$); this problem can be expressed as
$$1\,\big|\,grotec,\ p_{ij}^A,\ s_i^A,\ DDDE_{logarithmic}\,\big|\,C_{\max},$$
where $DDDE_{logarithmic}$ denotes the logarithmic DDDE model. The literature review related to scheduling with DDDE (DDLE) is given in Table 1.
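For a fixed group and job sequence, the actual times (1) and (2) and the resulting makespan can be computed by a direct transcription of the model. The following is a minimal Python sketch; the function name and the instance layout (a list of (setup time, job list) pairs) are our own, not from the paper:

```python
import math

def actual_times_makespan(groups, M, N, a, b):
    """C_max for a fixed sequence under the logarithmic DDDE model.
    `groups` lists (s_i, [p_i1, ...]) in processing order; `a` lists the
    job-deterioration indices a_i in the same order; `b` is the group index.
    The model assumes p, s >= e and 0 < M, N < 1."""
    S = sum(s for s, _ in groups)        # sum of normal setup times
    log_s_done = 0.0                     # sum of ln s over finished groups
    t = 0.0
    for (s, jobs), a_i in zip(groups, a):
        # setup time (2): s_i [N + (1-N)(1 + sum ln s_[l] / sum s_l)^b]
        t += s * (N + (1 - N) * (1 + log_s_done / S) ** b)
        log_s_done += math.log(s)
        P = sum(jobs)                    # sum of normal times in this group
        log_p_done = 0.0
        for p in jobs:
            # processing time (1): p_ij [M + (1-M)(1 + sum ln p_i[k] / sum p_ik)^{a_i}]
            t += p * (M + (1 - M) * (1 + log_p_done / P) ** a_i)
            log_p_done += math.log(p)
    return t
```

As a sanity check, setting all deterioration indices to zero makes every bracket equal to 1, so the makespan reduces to the plain sum of all normal setup and processing times.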

3. Basic Result

Let $J_{[i][j]}$ denote the job in the $j$th position of the $i$th group and $C_{[i][j]}$ denote the completion time of $J_{[i][j]}$; then, for a given sequence
$$\varrho = \{J_{[1][1]}, \ldots, J_{[1][n_1]}, J_{[2][1]}, \ldots, J_{[2][n_2]}, \ldots, J_{[m][1]}, \ldots, J_{[m][n_m]}\},$$
by mathematical induction
$$C_{[1][n_1]} = s_{[1]} + \sum_{j=1}^{n_{[1]}} p_{[1][j]}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[1][k]}}{\sum_{k=1}^{n_{[1]}} p_{[1][k]}}\right)^{a_{[1]}}\right],$$
$$C_{[2][n_2]} = s_{[1]} + \sum_{j=1}^{n_{[1]}} p_{[1][j]}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[1][k]}}{\sum_{k=1}^{n_{[1]}} p_{[1][k]}}\right)^{a_{[1]}}\right] + s_{[2]}\left[N + (1-N)\left(1 + \frac{\ln s_{[1]}}{\sum_{k=1}^{m} s_{[k]}}\right)^{b}\right] + \sum_{j=1}^{n_{[2]}} p_{[2][j]}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[2][k]}}{\sum_{k=1}^{n_{[2]}} p_{[2][k]}}\right)^{a_{[2]}}\right],$$
$$C_{[m][n_m]} = \sum_{i=1}^{m} s_{[i]}\left[N + (1-N)\left(1 + \frac{\sum_{k=1}^{i-1} \ln s_{[k]}}{\sum_{k=1}^{m} s_{[k]}}\right)^{b}\right] + \sum_{i=1}^{m} \sum_{j=1}^{n_{[i]}} p_{[i][j]}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[i][k]}}{\sum_{k=1}^{n_{[i]}} p_{[i][k]}}\right)^{a_{[i]}}\right].$$
Therefore,
$$C_{\max} = \sum_{i=1}^{m} s_{[i]}\left[N + (1-N)\left(1 + \frac{\sum_{k=1}^{i-1} \ln s_{[k]}}{\sum_{k=1}^{m} s_{[k]}}\right)^{b}\right] + \sum_{i=1}^{m} \sum_{j=1}^{n_{[i]}} p_{[i][j]}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[i][k]}}{\sum_{k=1}^{n_{[i]}} p_{[i][k]}}\right)^{a_{[i]}}\right]. \quad (4)$$
Lemma 1.
$F(\eta) = (1+\eta)^a - 1 - a\eta(1+\eta)^{a-1} \ge 0$ if $0 \le a \le 1$ and $0 \le \eta \le 1$.
Proof. 
Let $F(\eta) = (1+\eta)^a - 1 - a\eta(1+\eta)^{a-1}$. Taking the first derivative of $F(\eta)$ with respect to $\eta$, we have
$$F'(\eta) = -a(a-1)\eta(1+\eta)^{a-2} \ge 0$$
for $0 \le a \le 1$, $0 \le \eta \le 1$. Thus, $F(\eta)$ is a non-decreasing function of $\eta$, and $F(\eta) \ge F(0) = 0$.    □
Lemma 2.
$G(x) = (1+\eta x)^a - 1 - a\eta(1+\eta x)^{a-1} \ge 0$ if $x \ge 1$, $0 \le a \le 1$, and $0 \le \eta \le 1$.
Proof. 
Let $G(x) = (1+\eta x)^a - 1 - a\eta(1+\eta x)^{a-1}$; similarly, we have
$$G'(x) = a\eta(1+\eta x)^{a-1} - a(a-1)\eta^2(1+\eta x)^{a-2} \ge 0$$
for $0 \le a \le 1$, $0 \le \eta \le 1$. Thus, from Lemma 1,
$$G(x) \ge G(1) = (1+\eta)^a - 1 - a\eta(1+\eta)^{a-1} \ge 0.$$
   □
Lemma 3.
$H(\varpi) = [1 - (1+\eta\ln\varpi + \eta x)^a] - \varpi[1 - (1+\eta x)^a] \ge 0$ if $x \ge 1$, $0 \le a \le 1$, $0 \le \eta \le 1$, and $\varpi \ge 1$.
Proof. 
Let $H(\varpi) = [1 - (1+\eta\ln\varpi + \eta x)^a] - \varpi[1 - (1+\eta x)^a]$; similarly, we have
$$H'(\varpi) = (1+\eta x)^a - 1 - a\eta(1+\eta\ln\varpi + \eta x)^{a-1}/\varpi,$$
$$H''(\varpi) = \frac{a\eta(1+\eta\ln\varpi + \eta x)^{a-1} - a(a-1)\eta^2(1+\eta\ln\varpi + \eta x)^{a-2}}{\varpi^2} \ge 0,$$
for $0 \le a \le 1$. Thus, from Lemma 2, $H'(\varpi) \ge H'(1) = (1+\eta x)^a - 1 - a\eta(1+\eta x)^{a-1} \ge 0$. Hence, $H(\varpi) \ge H(1) = 0$.    □
Lemma 4.
If $0 \le a_i \le 1$ and $0 \le b \le 1$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$, the jobs within each group are arranged in non-increasing order of normal processing time, i.e., for $G_i$, the jobs are arranged in non-increasing order of $p_{ij}$ (the Largest Processing Time (LPT) rule).
Proof. 
Let $\pi_i = \{S_1, J_{ij} \to J_{ik} \to J_{iz}, S_2\}$ and $\pi_i' = \{S_1, J_{ik} \to J_{ij} \to J_{iz}, S_2\}$, where $\to$ denotes the order relation, i.e., $x \to y$ denotes that $x$ is scheduled in front of $y$ in a sequence, $S_1$ and $S_2$ are partial job sequences, and $p_{ij} \le p_{ik}$. Let $A$ be the completion time of the last job in $S_1$ and suppose there are $l-1$ jobs in $S_1$; then, we have
$$C_{ij}(\pi_i) = A + p_{ij}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right],$$
$$C_{ik}(\pi_i) = A + p_{ij}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right] + p_{ik}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ij}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right], \quad (5)$$
$$C_{ik}(\pi_i') = A + p_{ik}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right],$$
$$C_{ij}(\pi_i') = A + p_{ik}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right] + p_{ij}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ik}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right]. \quad (6)$$
From (5) and (6), we have
$$C_{ik}(\pi_i) - C_{ij}(\pi_i') = p_{ij}(1-M)\left[\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i} - \left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ik}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right] - p_{ik}(1-M)\left[\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i} - \left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ij}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right].$$
Let $\tilde{P}_i = \sum_{\delta=1}^{n_i} p_{i\delta}$, $\tilde{P}_{li} = \sum_{\delta=1}^{l-1} \ln p_{i[\delta]}$, $\varpi = \frac{p_{ik}}{p_{ij}}$ ($\varpi \ge 1$), $\eta = \frac{1}{\tilde{P}_i + \tilde{P}_{li}}$ ($0 \le \eta \le 1$), and $x = \ln p_{ij}$ ($x \ge 1$); then
$$\begin{aligned} C_{ik}(\pi_i) - C_{ij}(\pi_i') &= (p_{ij} - p_{ik})(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i}\right)^{a_i} + p_{ik}(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i} + \frac{\ln p_{ij}}{\tilde{P}_i}\right)^{a_i} - p_{ij}(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i} + \frac{\ln p_{ik}}{\tilde{P}_i}\right)^{a_i} \\ &= p_{ij}(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i}\right)^{a_i}\left[1 - \varpi + \varpi(1+\eta x)^{a_i} - (1+\eta\ln\varpi + \eta x)^{a_i}\right] \\ &= p_{ij}(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i}\right)^{a_i}\left\{[1 - (1+\eta\ln\varpi + \eta x)^{a_i}] - \varpi[1 - (1+\eta x)^{a_i}]\right\}. \end{aligned}$$
Noting that $0 < M < 1$, $x \ge 1$, $0 \le a_i \le 1$, $0 \le \eta \le 1$, and $\varpi \ge 1$, from Lemma 3, it follows that
$$C_{ik}(\pi_i) - C_{ij}(\pi_i') = p_{ij}(1-M)\left(1 + \frac{\tilde{P}_{li}}{\tilde{P}_i}\right)^{a_i}\left\{[1 - (1+\eta\ln\varpi + \eta x)^{a_i}] - \varpi[1 - (1+\eta x)^{a_i}]\right\} \ge 0.$$
In addition,
$$C_{iz}(\pi_i) = C_{ik}(\pi_i) + p_{iz}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ij} + \ln p_{ik}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right],$$
and
$$C_{iz}(\pi_i') = C_{ij}(\pi_i') + p_{iz}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]} + \ln p_{ik} + \ln p_{ij}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right].$$
Obviously, $C_{iz}(\pi_i) \ge C_{iz}(\pi_i')$; this implies $C_{i\max}(\pi_i) \ge C_{i\max}(\pi_i')$, where $C_{i\max}$ is the maximal completion time of group $G_i$.    □
Lemma 5.
If $0 \le a_i \le 1$ and $0 \le b \le 1$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$, the groups are listed in non-increasing order of normal setup time, i.e., the groups are arranged in non-increasing order of $s_i$ (the LPT rule).
Proof. 
Let $\varrho = \{\pi, G_i \to G_j \to G_h, \pi'\}$ and $\varrho' = \{\pi, G_j \to G_i \to G_h, \pi'\}$, where $\pi$ and $\pi'$ are partial group sequences and $s_i \le s_j$. Let $B$ be the starting time of $G_i$ (resp. $G_j$) in $\varrho$ (resp. $\varrho'$) and suppose there are $r-1$ groups in $\pi$; we have
$$C_i(\varrho) = B + s_i\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]}}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right] + \sum_{l=1}^{n_i} p_{i[l]}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right],$$
$$C_j(\varrho) = C_i(\varrho) + s_j\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_i}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right] + \sum_{l=1}^{n_j} p_{j[l]}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{j[\delta]}}{\sum_{\delta=1}^{n_j} p_{j\delta}}\right)^{a_j}\right], \quad (7)$$
$$C_j(\varrho') = B + s_j\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]}}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right] + \sum_{l=1}^{n_j} p_{j[l]}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{j[\delta]}}{\sum_{\delta=1}^{n_j} p_{j\delta}}\right)^{a_j}\right],$$
$$C_i(\varrho') = C_j(\varrho') + s_i\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_j}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right] + \sum_{l=1}^{n_i} p_{i[l]}\left[M + (1-M)\left(1 + \frac{\sum_{\delta=1}^{l-1} \ln p_{i[\delta]}}{\sum_{\delta=1}^{n_i} p_{i\delta}}\right)^{a_i}\right]. \quad (8)$$
From (7) and (8), we have
$$C_j(\varrho) - C_i(\varrho') = s_i(1-N)\left[\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]}}{\sum_{\theta=1}^{m} s_\theta}\right)^{b} - \left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_j}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right] - s_j(1-N)\left[\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]}}{\sum_{\theta=1}^{m} s_\theta}\right)^{b} - \left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_i}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right].$$
Let $S = \sum_{\theta=1}^{m} s_\theta$, $S_r = \sum_{\theta=1}^{r-1} \ln s_{[\theta]}$, $\varpi = \frac{s_j}{s_i}$ ($\varpi \ge 1$), $\eta = \frac{1}{S + S_r}$ ($0 \le \eta \le 1$), and $x = \ln s_i$ ($x \ge 1$); then, from Lemma 3, we have
$$\begin{aligned} C_j(\varrho) - C_i(\varrho') &= (s_i - s_j)(1-N)\left(1 + \frac{S_r}{S}\right)^{b} + s_j(1-N)\left(1 + \frac{S_r}{S} + \frac{\ln s_i}{S}\right)^{b} - s_i(1-N)\left(1 + \frac{S_r}{S} + \frac{\ln s_j}{S}\right)^{b} \\ &= s_i(1-N)\left(1 + \frac{S_r}{S}\right)^{b}\left[1 - \varpi + \varpi(1+\eta x)^{b} - (1+\eta\ln\varpi + \eta x)^{b}\right] \\ &= s_i(1-N)\left(1 + \frac{S_r}{S}\right)^{b}\left\{[1 - (1+\eta\ln\varpi + \eta x)^{b}] - \varpi[1 - (1+\eta x)^{b}]\right\} \ge 0. \end{aligned}$$
In addition,
$$C_h(\varrho) = C_j(\varrho) + s_h\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_i + \ln s_j}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right],$$
and
$$C_h(\varrho') = C_i(\varrho') + s_h\left[N + (1-N)\left(1 + \frac{\sum_{\theta=1}^{r-1} \ln s_{[\theta]} + \ln s_j + \ln s_i}{\sum_{\theta=1}^{m} s_\theta}\right)^{b}\right].$$
Obviously, $C_h(\varrho) \ge C_h(\varrho')$; this implies $C_{\max}(\varrho) \ge C_{\max}(\varrho')$.    □
Based on Lemmas 4 and 5, the following algorithm is proposed to solve $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$.
Theorem 1.
If $0 \le a_i \le 1$ and $0 \le b \le 1$, then $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$ can be optimally solved by Algorithm 1 in $O(n \log n)$ time.
Algorithm 1: $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$.
Step 1. For each $G_i$, the jobs are arranged in non-increasing order of $p_{ij}$ (Lemma 4).
Step 2. All groups are arranged in non-increasing order of $s_i$ (Lemma 5).
Step 3. Calculate the value $C_{\max}$ by (4).
Proof. 
The correctness of Algorithm 1 follows directly from Lemmas 4 and 5. For each group $G_i$, a simple sorting algorithm needs $O(n_i \log n_i)$ time; thus, Step 1 needs $\sum_{i=1}^{m} O(n_i \log n_i) \le O(n \log n)$ time. Similarly, Step 2 needs $O(m \log m) \le O(n \log n)$ time. Step 3 needs $O(n)$ time. Thus, the total time of Algorithm 1 is $O(n \log n)$.    □
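Algorithm 1 amounts to two sorts followed by one evaluation of (4). The following Python sketch illustrates it; the function names and the (s_i, a_i, job list) instance layout are our own. On a tiny instance, its output can be checked against brute-force enumeration of all group orders and within-group job orders:

```python
import math
from itertools import permutations

def logdet_cmax(order, M, N, b):
    """Evaluate the makespan (4) for groups processed in the given order;
    each group is a triple (s_i, a_i, jobs)."""
    S = sum(g[0] for g in order)                 # sum of normal setup times
    t, log_s = 0.0, 0.0                          # elapsed time; sum of ln s over done groups
    for s, a_i, jobs in order:
        t += s * (N + (1 - N) * (1 + log_s / S) ** b)
        log_s += math.log(s)
        P, log_p = sum(jobs), 0.0
        for p in jobs:
            t += p * (M + (1 - M) * (1 + log_p / P) ** a_i)
            log_p += math.log(p)
    return t

def algorithm1(groups, M, N, b):
    """Algorithm 1: LPT within each group, then LPT over the setup times."""
    lpt = [(s, a_i, sorted(jobs, reverse=True)) for s, a_i, jobs in groups]
    lpt.sort(key=lambda g: g[0], reverse=True)   # non-increasing s_i
    return logdet_cmax(lpt, M, N, b)
```

Since Theorem 1 assumes $0 \le a_i \le 1$, $0 \le b \le 1$, and $p_{ij}, s_i \ge e$, any instance fed to `algorithm1` should respect those bounds.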
Example 1.
Consider $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{logarithmic}\,|\,C_{\max}$, where $n = 24$, $m = 4$, $n_1 = n_2 = n_3 = n_4 = 6$, $M = 0.5$, $N = 0.5$, the deterioration index of the group setup times is $b = 0.25$, the deterioration indices of the jobs in each group are $a_1 = 0.11$, $a_2 = 0.63$, $a_3 = 0.48$, $a_4 = 0.79$, and the group- and job-related parameters are given in Table 2.
By Algorithm 1, Example 1 can be solved as follows:
Step 1: Jobs within each group are arranged in non-increasing order of p i j , i.e.,
  • $G_1$: $\pi_1 = \{J_{12} \to J_{11} \to J_{16} \to J_{14} \to J_{15} \to J_{13}\}$,
    $G_2$: $\pi_2 = \{J_{21} \to J_{26} \to J_{23} \to J_{25} \to J_{22} \to J_{24}\}$,
    $G_3$: $\pi_3 = \{J_{31} \to J_{33} \to J_{36} \to J_{34} \to J_{35} \to J_{32}\}$,
    $G_4$: $\pi_4 = \{J_{42} \to J_{43} \to J_{41} \to J_{46} \to J_{45} \to J_{44}\}$.
Step 2: Arrange all groups in non-increasing order of s i , i.e.,
  • $\varrho = \{G_3 \to G_4 \to G_2 \to G_1\}$.
Step 3: By (4), the value of C max for the optimal sequence is as follows:
$$C_{\max}(\varrho) = \sum_{i=1}^{4} s_{[i]}\left[0.5 + 0.5\left(1 + \frac{\sum_{k=1}^{i-1} \ln s_{[k]}}{86+82+72+38}\right)^{0.25}\right] + \sum_{i=1}^{4} \sum_{j=1}^{6} p_{[i][j]}\left[0.5 + 0.5\left(1 + \frac{\sum_{k=1}^{j-1} \ln p_{[i][k]}}{\sum_{k=1}^{6} p_{[i][k]}}\right)^{a_{[i]}}\right] = 1884.01556,$$
where the scheduled setup times are $(s_{[1]}, s_{[2]}, s_{[3]}, s_{[4]}) = (86, 82, 72, 38)$ and the LPT-ordered processing times are $G_3$: $(91, 85, 74, 73, 53, 49)$ with $a_3 = 0.48$; $G_4$: $(90, 79, 78, 64, 34, 21)$ with $a_4 = 0.79$; $G_2$: $(92, 88, 67, 59, 29, 27)$ with $a_2 = 0.63$; and $G_1$: $(91, 88, 82, 75, 73, 35)$ with $a_1 = 0.11$.
Remark 1.
For Example 1, if jobs within each group are arranged in non-decreasing order of $p_{ij}$ (i.e., the Smallest Processing Time (SPT) rule), we have $G_1$: $\pi_1 = \{J_{13} \to J_{15} \to J_{14} \to J_{16} \to J_{11} \to J_{12}\}$, $G_2$: $\pi_2 = \{J_{24} \to J_{22} \to J_{25} \to J_{23} \to J_{26} \to J_{21}\}$, $G_3$: $\pi_3 = \{J_{32} \to J_{35} \to J_{34} \to J_{36} \to J_{33} \to J_{31}\}$, $G_4$: $\pi_4 = \{J_{44} \to J_{45} \to J_{46} \to J_{41} \to J_{43} \to J_{42}\}$. And when all groups are arranged by the SPT rule of $s_i$, i.e., $\varrho' = \{G_1 \to G_2 \to G_4 \to G_3\}$, we have $C_{\max}(\varrho') = 1887.64453$.
Remark 2.
If $a_i > 1$ or $b > 1$, the optimal job sequence within each group cannot in general be obtained by the SPT or LPT rule of $p_{ij}$, and the optimal group sequence cannot be obtained by the SPT or LPT rule of $s_i$. For example, if $n = 2$, $m = 1$, $a_1 = 2$, $b = 2$, $M = N = 0$, $s_1 = 5$, $p_{11} = 10$, $p_{12} = 8$, we have $C_{\max}(SPT) = 5 + 8 + 10\left(1 + \frac{\ln 8}{10+8}\right)^{2} = 25.44395 > C_{\max}(LPT) = 5 + 10 + 8\left(1 + \frac{\ln 10}{10+8}\right)^{2} = 25.17765$. If $n = 2$, $m = 1$, $a_1 = 21$, $b = 2$, $s_1 = 5$, $p_{11} = 10$, $p_{12} = 8$, then $C_{\max}(SPT) = 5 + 8 + 10\left(1 + \frac{\ln 8}{10+8}\right)^{21} = 112.32570 < C_{\max}(LPT) = 5 + 10 + 8\left(1 + \frac{\ln 10}{10+8}\right)^{21} = 115.21795$.
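The counterexample in Remark 2 is small enough to reproduce numerically. Below is a minimal Python check (the helper name is ours), using the fact that with $m = 1$ the single setup time enters unchanged:

```python
import math

def cmax_one_group(s, jobs, a, M=0.0):
    """C_max for a single group (m = 1) under the logarithmic model:
    the lone setup contributes s unchanged, since s(N + (1-N)(1+0)^b) = s."""
    P = sum(jobs)
    t = float(s)
    log_p = 0.0
    for p in jobs:
        t += p * (M + (1 - M) * (1 + log_p / P) ** a)
        log_p += math.log(p)
    return t
```

With `a = 2`, the SPT order (8 before 10) yields 25.44395 while the LPT order yields 25.17765, and with `a = 21` the comparison reverses, matching Remark 2.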
Remark 3.
If $a_1 = a_2 = a_3 = a_4 = b = 0$, then $C_{\max} = 86 + 82 + 72 + 38 + 91 + 85 + 74 + 73 + 53 + 49 + 90 + 79 + 78 + 64 + 34 + 21 + 92 + 88 + 67 + 59 + 29 + 27 + 91 + 88 + 82 + 75 + 73 + 35 = 1875$. Obviously, if $0 \le a_i \le 1$, $0 \le b \le 1$,
$$p_{ij}^A = p_{ij}\left[M + (1-M)\left(1 + \frac{\sum_{k=1}^{l-1} p_{i[k]}}{\sum_{k=1}^{n_i} p_{ik}}\right)^{a_i}\right],$$
and
$$s_i^A = s_i\left[N + (1-N)\left(1 + \frac{\sum_{l=1}^{r-1} s_{[l]}}{\sum_{l=1}^{m} s_l}\right)^{b}\right],$$
then the optimal solution of $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE\,|\,C_{\max}$ can also be obtained by Algorithm 1, and
$$C_{\max} = \sum_{i=1}^{4} s_{[i]}\left[0.5 + 0.5\left(1 + \frac{\sum_{k=1}^{i-1} s_{[k]}}{86+82+72+38}\right)^{0.25}\right] + \sum_{i=1}^{4} \sum_{j=1}^{6} p_{[i][j]}\left[0.5 + 0.5\left(1 + \frac{\sum_{k=1}^{j-1} p_{[i][k]}}{\sum_{k=1}^{6} p_{[i][k]}}\right)^{a_{[i]}}\right] = 2027.24376,$$
with the same data and the same sequence $\varrho = \{G_3 \to G_4 \to G_2 \to G_1\}$ as in Example 1.

4. Extension

Similar to Huang and Wang [49] and Section 3, the following general weighted deterioration model can be presented: If job J i j is placed in the lth position, the actual processing time is as follows:
$$p_{ij}^A = p_{ij}\left[M + (1-M)\left(1 + \sum_{k=1}^{l-1} \zeta_{ik} p_{i[k]}\right)^{\alpha_i}\right], \quad (9)$$
where $\alpha_i \ge 0$ is the deterioration index for $G_i$ and $\zeta_{ik}$ is the position-dependent weight of the $k$th position in $G_i$. If $G_i$ is placed in the $r$th position, the actual setup time is as follows:
$$s_i^A = s_i\left[N + (1-N)\left(1 + \sum_{l=1}^{r-1} \tilde{\zeta}_l s_{[l]}\right)^{\beta}\right], \quad (10)$$
where $\beta \ge 0$ is the group-deterioration index and $\tilde{\zeta}_l$ is the position-dependent weight of the $l$th group position. The problem can be denoted by
$$1\,\big|\,grotec,\ p_{ij}^A,\ s_i^A,\ DDDE_{weight}\,\big|\,C_{\max},$$
where $DDDE_{weight}$ denotes the weighted DDDE model given by (9) and (10).
Similar to Huang and Wang [49] and Section 3, we have the following:
Lemma 6.
$I(\varpi) = [1 - (1+\varpi t)^a] - \varpi[1 - (1+t)^a] \ge 0$ if $t \ge 0$, $0 \le a \le 1$, and $\varpi \ge 1$.
Lemma 7.
$I(\varpi) = [1 - (1+\varpi t)^a] - \varpi[1 - (1+t)^a] \le 0$ if $t \ge 0$, $a \ge 1$, and $\varpi \ge 1$.
Lemma 8.
If $0 \le \alpha_i \le 1$ and $\zeta_{i n_i} \ge \zeta_{i, n_i - 1} \ge \cdots \ge \zeta_{i2} \ge \zeta_{i1} > 0$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$, the jobs of each group $G_i$ are arranged in non-increasing order of $p_{ij}$.
Lemma 9.
If $0 \le \beta \le 1$ and $\tilde{\zeta}_m \ge \tilde{\zeta}_{m-1} \ge \cdots \ge \tilde{\zeta}_2 \ge \tilde{\zeta}_1 > 0$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$, the groups are arranged in non-increasing order of $s_i$.
Lemma 10.
If $\alpha_i \ge 1$ and $0 < \zeta_{i n_i} \le \zeta_{i, n_i - 1} \le \cdots \le \zeta_{i2} \le \zeta_{i1}$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$, the jobs of each group $G_i$ are arranged in non-decreasing order of $p_{ij}$.
Lemma 11.
If $\beta \ge 1$ and $0 < \tilde{\zeta}_m \le \tilde{\zeta}_{m-1} \le \cdots \le \tilde{\zeta}_2 \le \tilde{\zeta}_1$, then for $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$, the groups are arranged in non-decreasing order of $s_i$.
Based on Lemmas 8–11, the following algorithm is proposed to solve $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$.
Theorem 2.
$1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$ can be optimally solved by Algorithm 2 in $O(n \log n)$ time.
Algorithm 2: $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$.
Step 1. If $0 \le \alpha_i \le 1$ and $\zeta_{i n_i} \ge \zeta_{i, n_i-1} \ge \cdots \ge \zeta_{i2} \ge \zeta_{i1} > 0$, then for each $G_i$ the jobs are arranged in non-increasing order of $p_{ij}$ (Lemma 8). If $\alpha_i \ge 1$ and $0 < \zeta_{i n_i} \le \zeta_{i, n_i-1} \le \cdots \le \zeta_{i2} \le \zeta_{i1}$, then for each $G_i$ the jobs are arranged in non-decreasing order of $p_{ij}$ (Lemma 10).
Step 2. If $0 \le \beta \le 1$ and $\tilde{\zeta}_m \ge \tilde{\zeta}_{m-1} \ge \cdots \ge \tilde{\zeta}_2 \ge \tilde{\zeta}_1 > 0$, the groups are arranged in non-increasing order of $s_i$ (Lemma 9). If $\beta \ge 1$ and $0 < \tilde{\zeta}_m \le \tilde{\zeta}_{m-1} \le \cdots \le \tilde{\zeta}_2 \le \tilde{\zeta}_1$, the groups are arranged in non-decreasing order of $s_i$ (Lemma 11).
Step 3. Calculate $C_{\max}$.
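Under the conditions of Lemmas 8 and 9 (indices in $[0, 1]$, position weights non-decreasing), Algorithm 2 is again two sorts plus one makespan evaluation. The Python sketch below illustrates that case; the names and the (s_i, alpha_i, weight list, job list) layout are our own, and the result can be checked against brute force on a tiny instance:

```python
from itertools import permutations

def weighted_cmax(order, M, N, beta, zeta_tilde):
    """Makespan under the weighted DDDE model (9)-(10); each group is a
    tuple (s_i, alpha_i, zeta_i, jobs), where zeta_i lists the per-position
    weights of that group and zeta_tilde the group-position weights."""
    t, wsum_s = 0.0, 0.0                  # elapsed time; weighted sum of done setups
    for (s, alpha, zeta, jobs), zt in zip(order, zeta_tilde):
        t += s * (N + (1 - N) * (1 + wsum_s) ** beta)
        wsum_s += zt * s                  # weight of the group position just filled
        wsum_p = 0.0
        for p, z in zip(jobs, zeta):
            t += p * (M + (1 - M) * (1 + wsum_p) ** alpha)
            wsum_p += z * p
    return t

def algorithm2(groups, M, N, beta, zeta_tilde):
    """Algorithm 2, case 0 <= alpha_i, beta <= 1 with non-decreasing
    position weights: LPT within each group, then LPT over setup times."""
    lpt = [(s, al, z, sorted(jobs, reverse=True)) for s, al, z, jobs in groups]
    lpt.sort(key=lambda g: g[0], reverse=True)
    return weighted_cmax(lpt, M, N, beta, zeta_tilde)
```

Note that permuting the jobs of a group permutes only the jobs: the weight list stays attached to the positions of that group, exactly as in model (9).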
Remark 4.
From Lemma 8, if $0 \le \alpha_{Huang} \le 1$ and $\zeta_n \ge \zeta_{n-1} \ge \cdots \ge \zeta_2 \ge \zeta_1 > 0$ (i.e., the weights $\zeta_k$ are non-decreasing in $k$), the problem $1\,\big|\,p_j^A = p_j\left(1 + \sum_{k=1}^{l-1} \zeta_k p_{[k]}\right)^{\alpha_{Huang}},\ 0 \le \alpha_{Huang} \le 1,\ \zeta_k \text{ non-decreasing}\,\big|\,C_{\max}$ can be solved by sequencing the jobs in non-increasing order of $p_j$.
Example 2.
Consider $1\,|\,grotec, p_{ij}^A, s_i^A, DDDE_{weight}\,|\,C_{\max}$, where $n = 15$, $m = 3$, $n_1 = n_2 = n_3 = 5$, $M = 0.5$, $N = 0.5$, $\alpha_1 = 0.15$, $\alpha_2 = 0.65$, $\alpha_3 = 0.45$, $\beta = 0.25$, $\zeta_{11} = 0.1$, $\zeta_{12} = 0.2$, $\zeta_{13} = 0.3$, $\zeta_{14} = 0.4$, $\zeta_{15} = 0.5$, $\zeta_{21} = 0.15$, $\zeta_{22} = 0.25$, $\zeta_{23} = 0.35$, $\zeta_{24} = 0.45$, $\zeta_{25} = 0.55$, $\zeta_{31} = 0.11$, $\zeta_{32} = 0.21$, $\zeta_{33} = 0.36$, $\zeta_{34} = 0.47$, $\zeta_{35} = 0.51$, $\tilde{\zeta}_1 = 0.2$, $\tilde{\zeta}_2 = 0.3$, $\tilde{\zeta}_3 = 0.4$; the group- and job-related parameters are given in Table 3.
Using Algorithm 2, Example 2 can be solved as follows:
Step 1: Jobs within each group are arranged in non-increasing order of p i j , i.e.,
  • $G_1$: $\pi_1 = \{J_{14} \to J_{15} \to J_{13} \to J_{12} \to J_{11}\}$,
  • $G_2$: $\pi_2 = \{J_{23} \to J_{24} \to J_{25} \to J_{22} \to J_{21}\}$,
  • $G_3$: $\pi_3 = \{J_{32} \to J_{31} \to J_{33} \to J_{34} \to J_{35}\}$.
Step 2: Arrange all groups in non-increasing order of s i , i.e.,
  • $\varrho = \{G_3 \to G_1 \to G_2\}$.
Step 3: By (4), (9), and (10), we have
$$C_{\max}(\varrho) = \sum_{i=1}^{3} s_{[i]}\left[0.5 + 0.5\left(1 + \sum_{l=1}^{i-1} \tilde{\zeta}_l s_{[l]}\right)^{0.25}\right] + \sum_{i=1}^{3} \sum_{j=1}^{5} p_{[i][j]}\left[0.5 + 0.5\left(1 + \sum_{k=1}^{j-1} \zeta_{[i]k} p_{[i][k]}\right)^{\alpha_{[i]}}\right] = 415.24034,$$
where the scheduled setup times are $(s_{[1]}, s_{[2]}, s_{[3]}) = (10, 8, 7)$ and the LPT-ordered processing times are $G_3$: $(19, 11, 8, 7, 5)$ with $\alpha_3 = 0.45$; $G_1$: $(25, 17, 15, 13, 12)$ with $\alpha_1 = 0.15$; and $G_2$: $(27, 26, 21, 12, 9)$ with $\alpha_2 = 0.65$.
Remark 5.
Scheduling combined with grotec and general logarithmic/weighted deterioration can affect the group and job sequences, and thus production processes and decisions. Both our theoretical results and the numerical analysis indicate that the proposed methods are efficient. Hence, grotec and general logarithmic/weighted deterioration should be taken into consideration to reduce costs and improve production efficiency.

5. Numerical Study

To evaluate the running times of Algorithms 1 and 2, random instances were generated. The algorithms were implemented in C++ in Visual Studio 2022 v17.1.0, and the testing computer was a desktop machine with a 12th Gen Intel(R) Core(TM) i5-12400 2.50 GHz CPU, 16.00 GB RAM, and a Windows 11 operating system. The features of these instances were as follows:
(1)
Jobs: n = 200 , 400 , 600 , 800 , 1000 , 1200 , 1400 , 1600 , 1800 ;
(2)
Groups: m = 7 , 19 , 23 , 35 , 46 ;
(3)
$p_{ij} \sim U[5, 100]$, $s_i \sim U[5, 100]$ (so that $p_{ij} \ge e$ and $s_i \ge e$, i.e., $\ln p_{ij} \ge 1$ and $\ln s_i \ge 1$, $i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n_i$);
(4)
$a_i \sim U[0.1, 0.9]$, $b \sim U[0.1, 0.9]$, $\alpha_i \sim U[0.1, 0.9]$, $\beta \sim U[0.1, 0.9]$ ($i = 1, 2, \ldots, m$);
(5)
$M = N \sim U[0.1, 0.5]$, $M = N \sim U[0.5, 0.9]$, $M = N \sim U[0.1, 0.9]$;
(6)
$\tilde{\zeta}_l = \zeta_{ik} = 1$ ($l = 1, 2, \ldots, m$; $i = 1, 2, \ldots, m$; $k = 1, 2, \ldots, n_i$).
For each numerical study, 20 random replications were generated, and the minimum, average, and maximum (denoted by min, ave, and max) CPU times (in milliseconds, denoted by ms) are given in Table 4, Table 5 and Table 6. As seen from Table 4, Table 5 and Table 6, Algorithms 1 and 2 are effective, and their CPU times increase moderately as $n$ and $m$ increase; the maximum CPU times of Algorithms 1 and 2 in this experiment are only 48.8139 ms and 51.7198 ms, respectively, when the problem size is $n = 1800$, $m = 7$, and $M = N \sim U[0.5, 0.9]$.

6. Conclusions

In this article, single-machine grotec scheduling to minimize the maximal completion time cost with general logarithmic/weighted deterioration has been investigated. It was shown that $1\,|\,grotec, p_{ij}^A, s_i^A, Z\,|\,C_{\max}$ ($Z \in \{DDDE_{logarithmic}, DDDE_{weight}\}$) is polynomially solvable. Further research may consider grotec scheduling in a flow shop setting (see Sun et al. [52], Lv and Wang [53], Geng et al. [54]), DDDE scheduling with position-dependent weights (see Sun et al. [55], Wang et al. [56]), DDDE scheduling with resource allocation (see Qian et al. [57], Bai et al. [58], Zhang et al. [59]), or grotec scheduling with step-improving jobs (see Kim and Oron [60], Lim et al. [61], Wu et al. [62], Cheng et al. [63]).

Author Contributions

Methodology, J.-B.W.; Writing—original draft, J.-D.M.; Writing—review & editing, J.-D.M., D.-Y.L., C.-M.W. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Fundamental Research Funds for the Universities of Liaoning Province (JYTMS20230278).

Data Availability Statement

The data used to support this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Kuo, W.-H.; Yang, D.-L. Single-machine group scheduling with a time-dependent learning effect. Comput. Oper. Res. 2006, 33, 2099–2112.
2. Lee, W.-C.; Wu, C.-C. A note on single-machine group scheduling problems with position-based learning effect. Appl. Math. Model. 2009, 33, 2159–2163.
3. Kuo, W.-H. Single-machine group scheduling with time-dependent learning effect and position-based setup time learning effect. Ann. Oper. Res. 2012, 196, 349–359.
4. He, Y.; Sun, L. One-machine scheduling problems with deteriorating jobs and position-dependent learning effects under group technology considerations. Int. J. Syst. Sci. 2015, 46, 1319–1326.
5. Zhang, X.; Liao, L.-J.; Zhang, W.-Y. Single-machine group scheduling with new models of position-dependent processing times. Comput. Ind. Eng. 2018, 117, 1–5.
6. Liu, F.; Yang, J.; Lu, Y.-Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874.
7. Huang, X. Bicriterion scheduling with group technology and deterioration effect. J. Appl. Math. Comput. 2019, 60, 455–464.
8. Wang, D.-Y.; Ye, C.-M. Group scheduling with learning effect and random processing time. J. Math. 2021, 2021, 6685149.
9. Qian, J.; Zhan, Y. Single-machine group scheduling model with position-dependent and job-dependent DeJong’s learning effect. Mathematics 2022, 10, 2454.
10. Liu, W.; Wang, X. Group technology scheduling with due-date assignment and controllable processing times. Processes 2023, 11, 1271.
11. Chen, Y.; Ma, X.; Zhang, F.; Cheng, Y. On optimal due date assignment without restriction and resource allocation in group technology scheduling. J. Comb. Optim. 2023, 45, 64.
12. Li, M.-H.; Lv, D.-Y.; Lu, Y.-Y.; Wang, J.-B. Scheduling with group technology, resource allocation, and learning effect simultaneously. Mathematics 2024, 12, 1029.
13. Wang, X.; Liu, W. Optimal different due-dates assignment scheduling with group technology and resource allocation. Mathematics 2024, 12, 436.
14. Wang, X.; Liu, W. Single machine group scheduling jobs with resource allocations subject to unrestricted due date assignments. J. Appl. Math. Comput. 2024, 70, 6283–6308.
15. Yin, N.; Gao, M. Single-machine group scheduling with general linear deterioration and truncated learning effects. Comput. Appl. Math. 2024, 43, 386.
16. Lv, D.-Y.; Wang, J.-B. Single-machine group technology scheduling with resource allocation and slack due window assignment including minmax criterion. J. Oper. Res. Soc. 2024.
17. Gawiejnowicz, S. Models and Algorithms of Time-Dependent Scheduling; Springer: Berlin/Heidelberg, Germany, 2020.
18. Strusevich, V.A.; Rustogi, K. Scheduling with Time-Changing Effects and Rate-Modifying Activities; Springer: Cham, Switzerland, 2017.
19. Shabtay, D.; Mor, B. Exact algorithms and approximation schemes for proportionate flow shop scheduling with step-deteriorating processing times. J. Sched. 2024, 27, 239–256.
20. Sun, Z.-W.; Lv, D.-Y.; Wei, C.-M.; Wang, J.-B. Flow shop scheduling with shortening jobs for makespan minimization. Mathematics 2025, 13, 363.
21. Wang, J.-B.; Wang, Y.-C.; Wan, C.; Lv, D.-Y.; Zhang, L. Controllable processing time scheduling with total weighted completion time objective and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024, 41, 2350026.
22. Zhang, L.-H.; Geng, X.-N.; Xue, J.; Wang, J.-B. Single machine slack due window assignment and deteriorating jobs. J. Ind. Manag. Optim. 2024, 20, 1593–1614.
23. Lu, Y.-Y.; Zhang, S.; Tao, J.-Y. Earliness-tardiness scheduling with delivery times and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024.
24. Qiu, X.-Y.; Wang, J.-B. Single-machine scheduling with mixed due-windows and deterioration effects. J. Appl. Math. Comput. 2024.
25. Koulamas, C.; Kyparisis, G.J. Single-machine and two-machine flowshop scheduling with general learning functions. Eur. J. Oper. Res. 2007, 178, 402–407.
26. Zhao, S. Scheduling jobs with general truncated learning effects including proportional setup times. Comput. Appl. Math. 2022, 41, 146.
27. Azzouz, A.; Ennigrou, M.; Said, L.B. Scheduling problems under learning effects: Classification and cartography. Int. J. Prod. Res. 2018, 56, 1642–1661.
28. Jiang, Z.-Y.; Chen, F.-F.; Zhang, X.-D. Single-machine scheduling with time-based and job-dependent learning effect. J. Oper. Res. Soc. 2017, 68, 809–815.
29. Liu, Z.; Wang, J.-B. Single-machine scheduling with simultaneous learning effects and delivery times. Mathematics 2024, 12, 2522.
30. Gerstl, E.; Mosheiov, G. Minimizing the number of tardy jobs with generalized due-dates and position-dependent processing times. Optim. Lett. 2024.
31. Cohen, E.; Shapira, D. Minimising the makespan on parallel identical machines with log-linear position-dependent processing times. J. Oper. Res. Soc. 2024.
32. Zhang, L.-H.; Yang, S.-H.; Lv, D.-Y.; Wang, J.-B. Research on convex resource allocation scheduling with exponential time-dependent learning effects. Comput. J. 2025, 68, 97–108.
33. Rustogi, K.; Strusevich, V.A. Simple matching vs linear assignment in scheduling models with positional effects: A critical review. Eur. J. Oper. Res. 2012, 222, 393–407.
34. Montoya-Torres, J.R.; Moreno-Camacho, C.A.; Velez-Gallego, M.C. Variable neighbourhood search for job scheduling with position-dependent deteriorating processing times. J. Oper. Res. Soc. 2023, 74, 873–887.
35. Saavedra-Nieves, A.; Mosquera, M.A.; Fiestras-Janeiro, M.G. Sequencing situations with position-dependent effects under cooperation. Int. Trans. Oper. Res. 2025, 32, 1620–1640.
36. Hu, C.M.; Zheng, R.; Lu, S.J.; Liu, X.B. Parallel machine scheduling with position-dependent processing times and deteriorating maintenance activities. J. Glob. Optim. 2024.
37. Liu, P.; Tang, L.; Zhou, X. Two-agent group scheduling with deteriorating jobs on a single machine. Int. J. Adv. Manuf. Technol. 2010, 47, 657–664.
38. Sloan, T.W.; Shanthikumar, J.G. Combined production and maintenance scheduling for a multiple-product, single-machine production system. Prod. Oper. Manag. 2000, 9, 379–399.
39. Gawiejnowicz, S.; Kurc, W.; Pankowska, L. Pareto and scalar bicriterion scheduling of deteriorating jobs. Comput. Oper. Res. 2006, 33, 746–767.
40. Bajestani, M.A.; Banjevic, D.; Beck, J.C. Integrated maintenance planning and production scheduling with Markovian deteriorating machine conditions. Int. J. Prod. Res. 2014, 52, 7377–7400.
41. Mosheiov, G. A note on scheduling deteriorating jobs. Math. Comput. Model. 2005, 41, 883–886.
42. Biskup, D. Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 1999, 115, 173–178.
43. Mosheiov, G. Scheduling problems with a learning effect. Eur. J. Oper. Res. 2001, 132, 687–693.
44. Biskup, D. A state-of-the-art review on scheduling with learning effects. Eur. J. Oper. Res. 2008, 188, 315–329.
45. Gordon, V.S.; Potts, C.N.; Strusevich, V.A.; Whitehead, J.D. Single machine scheduling models with deterioration and learning: Handling precedence constraints via priority generation. J. Sched. 2008, 11, 357–370.
46. Wang, J.-B.; Wang, L.-Y.; Wang, D.; Wang, X.-Y. Single machine scheduling with a time-dependent deterioration. Int. J. Adv. Manuf. Technol. 2009, 43, 805–809.
47. Kuo, W.-H.; Yang, D.-L. Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. Eur. J. Oper. Res. 2006, 174, 1184–1190.
48. Lee, W.-C.; Wu, C.-C.; Liu, H.-C. A note on single-machine makespan problem with general deteriorating function. Int. J. Adv. Manuf. Technol. 2009, 40, 1053–1056.
49. Huang, X.; Wang, J.-J. Machine scheduling problems with a position-dependent deterioration. Appl. Math. Model. 2015, 39, 2897–2908.
50. Yang, D.-L.; Cheng, T.C.E.; Kuo, W.-H. Scheduling with a general learning effect. Int. J. Adv. Manuf. Technol. 2013, 67, 217–229.
51. Cheng, T.C.E.; Lai, P.-J.; Wu, C.-C.; Lee, W.-C. Single-machine scheduling with sum-of-logarithm-processing-times-based learning considerations. Inf. Sci. 2009, 179, 3127–3135.
52. Sun, X.; Geng, X.-N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689.
53. Lv, D.-Y.; Wang, J.-B. Research on two-machine flow shop scheduling problem with release dates and truncated learning effects. Eng. Optim. 2024.
54. Geng, X.-N.; Sun, X.Y.; Wang, J.Y.; Pan, L. Scheduling on proportionate flow shop with job rejection and common due date assignment. Comput. Ind. Eng. 2023, 181, 109317.
55. Sun, X.; Geng, X.-N.; Wang, J.Y.; Liu, T. A bicriterion approach to due date assignment scheduling in single-machine with position-dependent weights. Asia-Pac. J. Oper. Res. 2023, 40, 2250018.
56. Wang, J.-B.; Lv, D.-Y.; Wan, C. Proportionate flow shop scheduling with job-dependent due windows and position-dependent weights. Asia-Pac. J. Oper. Res. 2024.
57. Qian, J.; Guo, Z.Y. Common due window assignment and single machine scheduling with delivery time, resource allocation, and job-dependent learning effect. J. Appl. Math. Comput. 2024, 70, 4441–4471.
58. Bai, B.; Wei, C.-M.; He, H.-Y.; Wang, J.-B. Study on single-machine common/slack due-window assignment scheduling with delivery times, variable processing times and outsourcing. Mathematics 2024, 12, 2833.
59. Zhang, Y.; Sun, X.; Liu, T.; Wang, J.Y.; Geng, X.-N. Single-machine scheduling simultaneous consideration of resource allocations and exponential time-dependent learning effects. J. Oper. Res. Soc. 2024.
60. Kim, E.S.; Oron, D. Minimizing total completion time on a single machine with step improving jobs. J. Oper. Res. Soc. 2015, 66, 1481–1490.
61. Kim, H.J.; Kim, E.S.; Lee, J.H. Scheduling of step-improving jobs with an identical improving rate. J. Oper. Res. Soc. 2022, 73, 1127–1136.
62. Wu, C.-C.; Lin, W.-C.; Azzouz, A.; Xu, J.Y.; Chiu, Y.-L.; Tsai, Y.-W.; Shen, P.Y. A bicriterion single-machine scheduling problem with step-improving processing times. Comput. Ind. Eng. 2022, 171, 108469.
63. Cheng, T.C.E.; Kravchenko, S.A.; Lin, B.M.T. On scheduling of step-improving jobs to minimize the total weighted completion time. J. Oper. Res. Soc. 2024, 75, 720–730.
Table 1. Results of scheduling with DDDE (DDLE).

Problem ; Complexity ; Paper
1 | p_j^A = p_j r^{α_Mosh}, α_Mosh ≥ 0 | C_max ; O(n log n) ; Mosheiov [41]
1 | p_j^A = p_j r^{α_Mosh}, α_Mosh ≤ 0 | C_max ; O(n log n) ; Biskup [42]
1 | p_j^A = p_j r^{α_Mosh}, α_Mosh ≤ 0 | ΣC_j ; O(n log n) ; Biskup [42]
1 | p_j^A = p_j α_Gord^{r-1}, 0 < α_Gord < 1 | C_max ; O(n log n) ; Gordon et al. [45]
1 | p_j^A = p_j α_Gord^{r-1}, α_Gord ≥ 1 | C_max ; O(n log n) ; Gordon et al. [45]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} p_[k])^{α_Wang}, α_Wang ≥ 1 | C_max ; O(n log n) ; Wang et al. [46]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} p_[k])^{α_Wang}, 0 ≤ α_Wang ≤ 1 | C_max ; O(n log n) ; Wang et al. [46]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} p_[k])^{α_Wang}, α_Wang ≤ 0 | C_max ; O(n log n) ; Kuo and Yang [47]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} p_[k])^{α_Wang}, α_Wang ≤ 0 | ΣC_j ; O(n log n) ; Kuo and Yang [47]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} p_[k] / Σ_{k=1}^{n} p_k)^{α_Lee}, 0 ≤ α_Lee ≤ 1 | C_max ; O(n log n) ; Lee et al. [48]
1 | p_j^A = p_j (1 - Σ_{k=1}^{l-1} p_[k] / Σ_{k=1}^{n} p_k)^{α_Koul}, α_Koul ≥ 1 | C_max ; O(n log n) ; Koulamas and Kyparisis [25]
1 | p_j^A = p_j (1 - Σ_{k=1}^{l-1} p_[k] / Σ_{k=1}^{n} p_k)^{α_Koul}, α_Koul ≥ 1 | ΣC_j ; O(n log n) ; Koulamas and Kyparisis [25]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ζ_k p_[k])^{α_Huang}, α_Huang ≥ 1, ζ_k↑ | C_max ; O(n log n) ; Huang and Wang [49]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ζ_k p_[k])^{α_Huang}, α_Huang ≥ 1, ζ_k↑ | ΣC_j^ϑ ; O(n log n) ; Huang and Wang [49]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ζ_k p_[k])^{α_Huang} r^{α_Mosh}, α_Huang ≤ 0, α_Mosh ≤ 0 | C_max ; O(n log n) ; Yang et al. [50]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ζ_k p_[k])^{α_Huang} r^{α_Mosh}, α_Huang ≤ 0, α_Mosh ≤ 0 | ΣC_j^ϑ ; O(n log n) ; Yang et al. [50]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ln p_[k])^{α_Cheng}, α_Cheng ≤ 0 | C_max ; O(n log n) ; Cheng et al. [51]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ln p_[k])^{α_Cheng}, α_Cheng ≤ 0 | ΣC_j ; O(n log n) ; Cheng et al. [51]
1 | grotec, p_ij^A = p_ij (1 + Σ_{k=1}^{l-1} p_i[k])^{α_i,Kuo}, α_i,Kuo ≤ 0, s_i^A = s_i | C_max ; O(n log n) ; Kuo and Yang [1]
1 | grotec, p_ij^A = p_ij (1 + Σ_{k=1}^{l-1} p_i[k])^{α_i,Kuo}, α_i,Kuo ≤ 0, s_i^A = s_i | ΣC_ij ; O(n log n) ; Kuo and Yang [1]
1 | grotec, p_ij^A = p_ij l^{α_1Lee} r^{α_2Lee}, α_1Lee ≤ 0, α_2Lee ≤ 0, s_i^A = s_i r^{α_2Lee} | C_max ; O(n log n) ; Lee and Wu [2]
1 | grotec, p_ij^A = p_ij (1 + Σ_{k=1}^{l-1} p_i[k])^{α_i,Kuo}, α_i,Kuo ≤ 0, s_i^A = s_i r^{α_Kuo}, α_Kuo ≤ 0 | C_max ; O(n log n) ; Kuo [3]
1 | p_j^A = p_j (1 + Σ_{k=1}^{l-1} ζ_k p_[k])^{α_Huang}, 0 ≤ α_Huang ≤ 1, ζ_k↑ | C_max ; O(n log n) ; this paper
1 | grotec, p_ij^A, s_i^A, DDDE_logarithmic | C_max ; O(n log n) ; this paper
1 | grotec, p_ij^A, s_i^A, DDDE_weight | C_max ; O(n log n) ; this paper

ϑ > 0 is a given constant; ζ_k↑ (resp. ζ_k↓) denotes the non-decreasing (resp. non-increasing) order of ζ_k; α_i,Kuo (α_Kuo, α_1Lee, and α_2Lee) denotes the learning index.
Table 2. Parameters for Example 1.

| G_i | s_i | p_i1 | p_i2 | p_i3 | p_i4 | p_i5 | p_i6 |
| G_1 | 38 | 88 | 91 | 35 | 75 | 73 | 82 |
| G_2 | 72 | 92 | 29 | 67 | 27 | 59 | 88 |
| G_3 | 86 | 91 | 49 | 85 | 73 | 53 | 74 |
| G_4 | 82 | 78 | 90 | 79 | 21 | 34 | 64 |
Table 3. Parameters for Example 2.

| G_i | s_i | p_i1 | p_i2 | p_i3 | p_i4 | p_i5 |
| G_1 | 8 | 12 | 13 | 15 | 25 | 17 |
| G_2 | 7 | 9 | 12 | 27 | 26 | 21 |
| G_3 | 10 | 11 | 19 | 8 | 7 | 5 |
Table 4. M = N U[0.1, 0.5]: CPU time (ms) of Algorithms 1 and 2.

| n | m | Alg. 1 min | Alg. 1 ave | Alg. 1 max | Alg. 2 min | Alg. 2 ave | Alg. 2 max |
| 200 | 7 | 0.1774 | 0.2997 | 0.7437 | 0.2200 | 0.3867 | 1.0640 |
| 200 | 19 | 0.2357 | 0.2976 | 0.4157 | 0.3529 | 0.4372 | 0.6122 |
| 200 | 23 | 0.3163 | 0.3416 | 0.3691 | 0.5212 | 0.5437 | 0.5634 |
| 200 | 35 | 0.4258 | 0.4617 | 0.4898 | 0.7274 | 0.7665 | 0.7931 |
| 200 | 46 | 0.5438 | 0.7598 | 1.6726 | 0.9814 | 1.2339 | 1.5569 |
| 400 | 7 | 0.3684 | 0.4951 | 0.7386 | 0.6117 | 0.7653 | 0.9145 |
| 400 | 19 | 0.4843 | 0.6571 | 0.8787 | 0.8341 | 1.0164 | 1.2412 |
| 400 | 23 | 0.6028 | 0.7053 | 1.0639 | 1.1206 | 1.2842 | 1.7052 |
| 400 | 35 | 0.7610 | 0.9271 | 1.4268 | 1.4281 | 1.6534 | 2.1336 |
| 400 | 46 | 0.9153 | 1.1397 | 1.5983 | 1.7811 | 2.0814 | 3.0258 |
| 600 | 7 | 0.6473 | 0.7422 | 1.0708 | 1.2052 | 1.2830 | 1.5264 |
| 600 | 19 | 0.7788 | 0.9706 | 1.5706 | 1.5413 | 1.7291 | 2.4133 |
| 600 | 23 | 1.0146 | 1.1945 | 1.5703 | 1.9720 | 2.2973 | 4.1754 |
| 600 | 35 | 1.1692 | 1.6870 | 2.6565 | 2.4915 | 2.9876 | 4.5148 |
| 600 | 46 | 1.3560 | 1.6802 | 2.6189 | 2.7533 | 3.1205 | 4.0006 |
| 800 | 7 | 1.3822 | 1.7596 | 2.9700 | 2.9313 | 3.3962 | 4.2770 |
| 800 | 19 | 1.8140 | 2.6647 | 4.7241 | 3.7607 | 4.6007 | 5.5952 |
| 800 | 23 | 2.0440 | 2.7970 | 3.8305 | 4.3696 | 5.1033 | 6.4268 |
| 800 | 35 | 2.2852 | 2.5880 | 3.9130 | 4.9293 | 5.2313 | 5.6840 |
| 800 | 46 | 2.3976 | 2.7193 | 4.1999 | 5.2605 | 5.9354 | 7.6332 |
| 1000 | 7 | 3.9664 | 4.6102 | 5.9352 | 9.3543 | 9.9175 | 12.7181 |
| 1000 | 19 | 4.2255 | 4.6904 | 5.2505 | 9.8872 | 10.5733 | 11.4234 |
| 1000 | 23 | 4.5886 | 4.9357 | 6.0811 | 10.8033 | 11.4962 | 15.7895 |
| 1000 | 35 | 5.0376 | 5.5177 | 7.3171 | 11.6566 | 12.3367 | 14.8367 |
| 1000 | 46 | 5.3083 | 5.7599 | 6.4271 | 12.4811 | 13.0953 | 14.4685 |
| 1200 | 7 | 9.4020 | 11.6916 | 16.4927 | 14.0080 | 16.8438 | 21.0809 |
| 1200 | 19 | 9.6212 | 11.4964 | 14.7645 | 14.8975 | 16.2413 | 18.4022 |
| 1200 | 23 | 9.8770 | 11.4983 | 15.3408 | 15.6348 | 17.3383 | 21.5357 |
| 1200 | 35 | 10.8521 | 11.8056 | 14.7564 | 16.5247 | 17.6264 | 19.2839 |
| 1200 | 46 | 10.8900 | 11.7488 | 12.8379 | 17.8773 | 18.9050 | 21.0399 |
| 1400 | 7 | 18.3438 | 22.3562 | 33.2245 | 22.9130 | 27.1067 | 33.6057 |
| 1400 | 19 | 18.0093 | 21.1386 | 28.2760 | 24.3032 | 26.4216 | 32.3352 |
| 1400 | 23 | 19.0598 | 20.1206 | 24.6603 | 24.2044 | 26.1036 | 30.6468 |
| 1400 | 35 | 19.6248 | 20.5885 | 26.4030 | 25.5290 | 26.4424 | 29.5598 |
| 1400 | 46 | 20.2367 | 21.2529 | 25.1075 | 26.8124 | 27.5080 | 29.2252 |
| 1600 | 7 | 24.3825 | 28.5933 | 35.2687 | 27.8955 | 31.7924 | 35.7928 |
| 1600 | 19 | 25.3655 | 26.5179 | 28.1867 | 29.7942 | 31.0178 | 32.6665 |
| 1600 | 23 | 26.0762 | 27.5465 | 30.2032 | 31.5580 | 32.6388 | 35.8208 |
| 1600 | 35 | 26.9154 | 28.2394 | 31.8867 | 32.0396 | 33.4994 | 35.7076 |
| 1600 | 46 | 27.6596 | 29.1840 | 35.7480 | 33.6663 | 35.2880 | 37.8210 |
| 1800 | 7 | 32.1265 | 36.4415 | 47.7114 | 35.8050 | 40.1072 | 50.2326 |
| 1800 | 19 | 32.9307 | 34.9354 | 40.2402 | 37.1589 | 39.1894 | 43.9375 |
| 1800 | 23 | 32.9574 | 35.9799 | 42.5340 | 38.4970 | 40.7730 | 47.6961 |
| 1800 | 35 | 34.8087 | 36.5571 | 39.8731 | 40.6130 | 42.1835 | 45.2329 |
| 1800 | 46 | 35.7117 | 37.8375 | 41.6455 | 41.4760 | 43.6270 | 45.9044 |
Table 5. M = N U[0.5, 0.9]: CPU time (ms) of Algorithms 1 and 2.

| n | m | Alg. 1 min | Alg. 1 ave | Alg. 1 max | Alg. 2 min | Alg. 2 ave | Alg. 2 max |
| 200 | 7 | 0.2560 | 0.2938 | 0.3864 | 0.2082 | 0.2694 | 0.3556 |
| 200 | 19 | 0.3681 | 0.4525 | 0.6267 | 0.3567 | 0.4219 | 0.5697 |
| 200 | 23 | 0.5035 | 0.5471 | 0.8365 | 0.5155 | 0.5585 | 0.7156 |
| 200 | 35 | 0.6510 | 0.6993 | 0.8260 | 0.7385 | 0.7973 | 0.9962 |
| 200 | 46 | 0.7851 | 0.8552 | 1.1812 | 0.9350 | 1.0005 | 1.1266 |
| 400 | 7 | 1.2162 | 1.3856 | 1.8078 | 1.2118 | 1.2991 | 1.6526 |
| 400 | 19 | 1.4383 | 1.6922 | 2.4148 | 1.5410 | 1.8255 | 2.4314 |
| 400 | 23 | 1.7991 | 2.2955 | 4.8562 | 1.9864 | 2.2877 | 2.7729 |
| 400 | 35 | 2.0593 | 2.5262 | 3.4860 | 2.4157 | 2.8369 | 4.6875 |
| 400 | 46 | 2.2577 | 2.7155 | 3.9557 | 2.7797 | 3.2972 | 4.0864 |
| 600 | 7 | 1.8330 | 2.0841 | 2.8406 | 2.0888 | 2.4710 | 4.3851 |
| 600 | 19 | 2.1347 | 2.5783 | 4.3220 | 2.5958 | 3.0260 | 4.4506 |
| 600 | 23 | 2.4840 | 3.0206 | 3.7853 | 3.0453 | 3.7215 | 4.7296 |
| 600 | 35 | 2.7277 | 3.0337 | 4.2711 | 3.4734 | 3.8319 | 4.6743 |
| 600 | 46 | 3.0436 | 3.4667 | 5.2598 | 4.0126 | 4.5317 | 6.1291 |
| 800 | 7 | 3.4612 | 3.8030 | 4.8420 | 4.3228 | 5.0811 | 9.1788 |
| 800 | 19 | 4.0267 | 4.6951 | 5.8460 | 5.2678 | 6.0208 | 7.7749 |
| 800 | 23 | 4.1744 | 5.0696 | 7.1361 | 5.6216 | 6.4516 | 8.2103 |
| 800 | 35 | 4.3251 | 5.0799 | 7.4577 | 5.9949 | 6.7402 | 7.7306 |
| 800 | 46 | 4.7400 | 5.2664 | 6.7092 | 6.6353 | 7.1462 | 8.3423 |
| 1000 | 7 | 7.1940 | 8.3822 | 10.8126 | 7.7254 | 9.3606 | 13.4320 |
| 1000 | 19 | 7.9808 | 9.1992 | 11.7655 | 8.7158 | 10.3472 | 12.5973 |
| 1000 | 23 | 7.7566 | 8.2500 | 9.5571 | 8.7284 | 9.6805 | 11.4939 |
| 1000 | 35 | 8.2859 | 9.4391 | 12.4033 | 9.8956 | 11.2440 | 14.9506 |
| 1000 | 46 | 8.7764 | 9.4443 | 10.9769 | 10.7814 | 11.6629 | 14.1304 |
| 1200 | 7 | 19.7160 | 21.4685 | 24.9446 | 17.2613 | 19.8019 | 25.0173 |
| 1200 | 19 | 19.3804 | 21.0536 | 25.3733 | 17.1724 | 19.3152 | 22.8843 |
| 1200 | 23 | 19.6164 | 20.8600 | 22.4987 | 18.1426 | 19.2179 | 21.0724 |
| 1200 | 35 | 20.6578 | 22.1495 | 26.7464 | 19.3835 | 20.3367 | 22.5411 |
| 1200 | 46 | 21.3446 | 22.2309 | 23.2110 | 20.6118 | 21.6691 | 23.4645 |
| 1400 | 7 | 25.7501 | 28.5240 | 33.6090 | 22.1144 | 24.9513 | 30.0138 |
| 1400 | 19 | 26.1913 | 27.2424 | 28.3921 | 23.0511 | 24.6753 | 27.1149 |
| 1400 | 23 | 27.4405 | 28.6227 | 30.1684 | 24.7625 | 25.9299 | 29.5416 |
| 1400 | 35 | 27.8198 | 29.4367 | 31.5362 | 25.3246 | 26.8058 | 30.7956 |
| 1400 | 46 | 29.1443 | 30.2465 | 33.2877 | 26.7806 | 28.5316 | 31.1388 |
| 1600 | 7 | 27.2853 | 29.6356 | 33.5509 | 28.8794 | 32.4197 | 39.5174 |
| 1600 | 19 | 27.5957 | 29.4912 | 31.6098 | 29.6950 | 31.4061 | 35.1885 |
| 1600 | 23 | 28.9769 | 30.1650 | 31.6810 | 31.2172 | 32.6932 | 38.5413 |
| 1600 | 35 | 29.5302 | 30.8947 | 33.0866 | 33.2494 | 34.1973 | 35.4664 |
| 1600 | 46 | 30.2623 | 32.5103 | 39.9005 | 34.0117 | 35.7329 | 40.2877 |
| 1800 | 7 | 31.4170 | 35.6590 | 48.8139 | 36.2494 | 39.3941 | 51.7198 |
| 1800 | 19 | 32.5265 | 34.3566 | 39.0429 | 36.8608 | 38.7472 | 40.8807 |
| 1800 | 23 | 33.4835 | 35.2512 | 39.2264 | 39.2339 | 40.6845 | 42.2809 |
| 1800 | 35 | 35.0957 | 36.9180 | 40.9085 | 41.6383 | 43.3374 | 47.0185 |
| 1800 | 46 | 36.1107 | 37.2882 | 39.0944 | 42.7363 | 44.9120 | 49.8094 |
Table 6. M = N U[0.1, 0.9]: CPU time (ms) of Algorithms 1 and 2.

| n | m | Alg. 1 min | Alg. 1 ave | Alg. 1 max | Alg. 2 min | Alg. 2 ave | Alg. 2 max |
| 200 | 7 | 0.1644 | 0.2295 | 0.3673 | 0.2017 | 0.2988 | 0.6115 |
| 200 | 19 | 0.2583 | 0.2810 | 0.3323 | 0.3488 | 0.3776 | 0.4798 |
| 200 | 23 | 0.3501 | 0.3984 | 0.5243 | 0.5198 | 0.5879 | 0.8461 |
| 200 | 35 | 0.4715 | 0.5201 | 0.6056 | 0.7477 | 0.8059 | 1.0859 |
| 200 | 46 | 0.6150 | 0.9036 | 1.3041 | 1.0083 | 1.3354 | 1.6907 |
| 400 | 7 | 0.4787 | 0.5776 | 0.8025 | 0.6520 | 0.8205 | 1.1169 |
| 400 | 19 | 0.5781 | 0.6488 | 0.9687 | 0.8273 | 0.9017 | 1.4398 |
| 400 | 23 | 0.7159 | 0.7606 | 0.9290 | 1.0689 | 1.1596 | 1.5978 |
| 400 | 35 | 0.8800 | 0.9908 | 1.3700 | 1.3657 | 1.5304 | 2.3361 |
| 400 | 46 | 1.0122 | 1.1861 | 1.6928 | 1.6289 | 1.7787 | 2.2620 |
| 600 | 7 | 1.2740 | 1.4686 | 2.1558 | 2.0385 | 2.3889 | 3.7875 |
| 600 | 19 | 1.5067 | 1.7785 | 2.6780 | 2.5963 | 2.9535 | 4.0530 |
| 600 | 23 | 1.7398 | 2.1118 | 3.0119 | 2.9240 | 3.4155 | 4.4226 |
| 600 | 35 | 2.0562 | 2.7242 | 3.7269 | 3.5862 | 4.5978 | 5.7118 |
| 600 | 46 | 2.3481 | 3.1100 | 4.7084 | 4.2361 | 5.2430 | 6.1953 |
| 800 | 7 | 2.6546 | 3.4675 | 5.1269 | 4.4362 | 5.3880 | 7.2490 |
| 800 | 19 | 3.0512 | 3.9569 | 4.9959 | 5.1926 | 6.0761 | 7.0845 |
| 800 | 23 | 3.1419 | 3.6941 | 5.2662 | 5.6052 | 6.3130 | 8.4620 |
| 800 | 35 | 3.4184 | 3.7762 | 4.7588 | 6.1878 | 6.6297 | 7.9533 |
| 800 | 46 | 3.6231 | 3.9221 | 5.1038 | 6.6183 | 7.2700 | 8.9901 |
| 1000 | 7 | 6.0690 | 8.2522 | 12.4782 | 9.5774 | 11.8295 | 15.0412 |
| 1000 | 19 | 6.3727 | 7.2305 | 8.4022 | 10.2436 | 11.4977 | 14.2660 |
| 1000 | 23 | 6.4809 | 7.0419 | 7.9854 | 10.6934 | 11.5947 | 13.4705 |
| 1000 | 35 | 6.9904 | 7.4870 | 8.9789 | 11.7009 | 12.6167 | 14.1152 |
| 1000 | 46 | 7.3904 | 8.0823 | 11.6773 | 12.6301 | 13.7278 | 16.0588 |
| 1200 | 7 | 15.4206 | 17.5968 | 22.9041 | 17.6240 | 19.7965 | 27.7313 |
| 1200 | 19 | 15.6379 | 16.1995 | 17.0367 | 17.2107 | 18.3928 | 20.9206 |
| 1200 | 23 | 16.1442 | 16.9836 | 18.6367 | 18.0681 | 19.3553 | 20.5493 |
| 1200 | 35 | 17.3559 | 18.0029 | 19.1652 | 19.3204 | 20.3263 | 21.4951 |
| 1200 | 46 | 17.7533 | 18.5459 | 19.8889 | 21.1646 | 21.9092 | 23.8463 |
| 1400 | 7 | 22.6661 | 25.0604 | 32.0837 | 23.2173 | 26.1056 | 32.0133 |
| 1400 | 19 | 22.2896 | 23.4033 | 26.1901 | 23.4637 | 24.7351 | 26.6533 |
| 1400 | 23 | 22.9604 | 23.4402 | 23.8200 | 24.1449 | 25.9237 | 28.4372 |
| 1400 | 35 | 23.9459 | 25.4409 | 35.2886 | 26.0647 | 27.1852 | 30.2367 |
| 1400 | 46 | 24.9069 | 25.6918 | 26.8693 | 27.6540 | 28.3296 | 31.1208 |
| 1600 | 7 | 27.7084 | 29.5620 | 34.5762 | 29.0357 | 31.8625 | 38.9354 |
| 1600 | 19 | 27.4759 | 29.6487 | 40.7330 | 29.7080 | 30.9598 | 33.3532 |
| 1600 | 23 | 28.6234 | 30.0695 | 31.8289 | 31.3310 | 32.9337 | 34.8635 |
| 1600 | 35 | 29.2710 | 30.9799 | 33.2976 | 32.3556 | 34.2822 | 35.5770 |
| 1600 | 46 | 31.0965 | 32.1412 | 34.0411 | 34.2652 | 36.1895 | 38.8007 |
| 1800 | 7 | 35.8786 | 38.7141 | 45.0549 | 35.9549 | 39.3609 | 44.4885 |
| 1800 | 19 | 36.0342 | 37.8711 | 40.8724 | 37.4281 | 39.1624 | 41.6080 |
| 1800 | 23 | 36.8003 | 38.8389 | 41.5131 | 38.7891 | 40.4470 | 43.2618 |
| 1800 | 35 | 37.7501 | 39.9583 | 42.2122 | 40.6191 | 42.5458 | 44.7960 |
| 1800 | 46 | 39.4025 | 40.9845 | 43.6278 | 42.4824 | 43.9993 | 45.4539 |

Share and Cite

MDPI and ACS Style

Miao, J.-D.; Lv, D.-Y.; Wei, C.-M.; Wang, J.-B. Research on Group Scheduling with General Logarithmic Deterioration Subject to Maximal Completion Time Cost. Axioms 2025, 14, 153. https://doi.org/10.3390/axioms14030153

