Mathematics
  • Article
  • Open Access

18 August 2022

Scheduling with Resource Allocation, Deteriorating Effect and Group Technology to Minimize Total Completion Time

School of Science, Shenyang Aerospace University, Shenyang 110136, China
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Mathematical Methods and Operation Research in Logistics, Project Planning, and Scheduling

Abstract

This paper studies a single-machine problem with resource allocation (RA) and deteriorating effect (DE). Under group technology (GT) and limited resource availability, our goal is to determine the schedule of the groups and of the jobs within each group such that the total completion time is minimized. For three special cases, polynomial-time algorithms are given. For the general case, a heuristic, a tabu search algorithm, and an exact (i.e., branch-and-bound) algorithm are proposed to solve this problem.

1. Introduction

With the development of the economy, research on group technology (denoted by GT) scheduling has spread across a variety of fields, notably supply chain management, information processing, and computer systems (see Ham et al. [1], Wang et al. [2]). Yang [3] and Bai et al. [4] investigated single-machine GT scheduling with learning and deterioration effects. Lu et al. [5] studied the single-machine problem with GT and time-dependent processing times (i.e., time-dependent scheduling), in which both the processing times of jobs and the setup times of groups are time-dependent; for makespan minimization subject to release dates, they presented a polynomial-time algorithm. Wang et al. [6] examined the single-machine problem with GT and shortening job processing times; for makespan minimization with ready times, they demonstrated that some special cases can be solved optimally in polynomial time. Liu et al. [7] studied the single-machine problem with GT and deterioration effects (denoted by DE), in which the processing times of jobs are time-dependent while the setup times of groups are constants; for makespan minimization with ready times, they proposed a branch-and-bound algorithm. Zhu et al. [8] discussed the single-machine problem with GT, resource allocation (denoted by RA), and learning effects; for minimizing the weighted sum of makespan and total resource consumption, they proved that the problem remains polynomially solvable. In 2018, Zhang et al. [9] discussed the single-machine problem with GT and position-dependent processing times. In 2020, Liao et al. [10] considered a scheduling problem with two competing groups and different learning effects. In 2021, Lv et al. [11] addressed single-machine slack due date assignment problems with GT, RA, and learning effects. In 2021, Xu et al. [12] investigated the single-machine problem with GT, nonperiodical maintenance, and DE; for makespan minimization, they proposed several heuristic algorithms.
Recently, Oron [13] and Li and Wang [14] considered single-machine scheduling models combining RA and DE. Later, Wang et al. [15] discussed a scheduling model combining GT, RA, and DE; under the single-machine setting, the objective is to minimize the weighted sum of the makespan and the total resource consumption, and they showed that some special cases remain polynomially solvable. In 2020, Liang et al. [16] considered the same model as Wang et al. [15] for the general case and provided heuristic and branch-and-bound algorithms. In 2019, Wang and Liang [17] studied the single-machine problem with GT, RA, and DE simultaneously; for makespan minimization under the constraint that the total resource consumption cannot exceed a given upper bound, they proved that some special cases remain polynomially solvable, and for the general case they provided heuristic and branch-and-bound algorithms.
This paper conducts a further study of the problem with GT, RA, and DE, but the objective is to minimize the total completion time under the constraint that the total resource consumption cannot exceed a given upper bound. For three special cases, polynomial-time algorithms are given. For the general case, upper and lower bounds on the optimal value are derived, and a branch-and-bound algorithm is then proposed. In addition, a tabu search algorithm and a numerical simulation analysis are presented.
The rest of this paper is organized as follows: Section 2 presents a formulation of the problem. Section 3 gives some basic properties. Section 4 studies some special cases. Section 5 considers the general case, and we propose some algorithms to solve this problem. Section 6 presents the numerical simulations. The conclusions are given in Section 7.

2. Problem Statement

The following notation (see Table 1) will be used throughout this paper. There are $\tilde n$ independent jobs. In order to exploit GT in production (see Ji et al. [18]), all the jobs are classified in advance into $\tilde m$ ($\tilde m\ge 2$) groups (i.e., $\Omega_1,\Omega_2,\ldots,\Omega_{\tilde m}$) according to their processing similarities. All the jobs in the same group must be processed in succession on a single machine. Assume that the single machine and all jobs are available at time zero. Let $\breve J_{hj}$ be job $j$ in group $\Omega_h$, and let $\tilde n_h$ be the number of jobs in group $\Omega_h$, i.e., $\sum_{h=1}^{\tilde m}\tilde n_h=\tilde n$. The actual processing time of $\breve J_{hj}$ is:
$$p_{hj}^{A}(t)=\left(\frac{\varsigma_{hj}}{\tilde r_{hj}}\right)^{\eta}+\theta t,$$
where $\varsigma_{hj}$ (resp. $\tilde r_{hj}>0$) is the workload (resp. the amount of resource allocated) of $\breve J_{hj}$, $\eta>0$ is a constant, $\theta\ge 0$ is a common deterioration rate, and $t\ge 0$ is its starting time. The actual setup time of $\Omega_h$ is:
$$s_{h}^{A}(t)=\left(\frac{o_{h}}{\tilde r_{h}}\right)^{\eta}+\mu t,$$
where $o_h$ (resp. $\tilde r_h$) is the workload (resp. the amount of resource allocated) of $\Omega_h$, and $\mu\ge 0$ is a common deterioration rate. Obviously, the parameters $\tilde n$, $\tilde m$, $\varsigma_{hj}$, $\tilde n_h$, $o_h$, $\eta$, $\theta$, and $\mu$ are given in advance, whereas the resource allocations $\tilde r_{hj}$ and $\tilde r_h$ are decision variables. Our goal is to find the optimal group schedule $\bar\pi_\Omega^{*}$, the optimal job schedule $\bar\pi_h^{*}$ ($h=1,\ldots,\tilde m$) within $\Omega_h$, and the optimal resource allocation $R^{*}$ (i.e., $\tilde r_{hj}$ and $\tilde r_h$) such that the total completion time,
$$\widehat{tct}\left(\bar\pi_\Omega,\bar\pi_h\,|\,h=1,\ldots,\tilde m,R\right)=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\bar C_{hj}$$
is minimized subject to $\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\tilde r_{hj}\le\tilde V$ and $\sum_{h=1}^{\tilde m}\tilde r_h\le\tilde U$, where $\tilde V$ and $\tilde U$ are given constants (there is no constraint linking the $\tilde r_{hj}$ variables and the $\tilde r_h$ variables; the two resource budgets are independent of each other). By using the three-field notation (see Gawiejnowicz [19]), the problem can be denoted by
$$1\,\Big|\,p_{hj}^{A}(t)=\left(\tfrac{\varsigma_{hj}}{\tilde r_{hj}}\right)^{\eta}+\theta t,\ s_{h}^{A}(t)=\left(\tfrac{o_{h}}{\tilde r_{h}}\right)^{\eta}+\mu t,\ \sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\tilde r_{hj}\le\tilde V,\ \sum_{h=1}^{\tilde m}\tilde r_h\le\tilde U,\ GT\,\Big|\,\widehat{tct},$$
where 1 denotes a single machine, the middle field gives the job and group characteristics, and $\widehat{tct}$ is the objective function (this problem is abbreviated as $P_{\widehat{tct}}$). Wang et al. [15] and Liang et al. [16] considered the problem
$$1\,\Big|\,p_{hj}^{A}(t)=\left(\tfrac{\varsigma_{hj}}{\tilde r_{hj}}\right)^{\eta}+\theta t,\ s_{h}^{A}(t)=\left(\tfrac{o_{h}}{\tilde r_{h}}\right)^{\eta}+\mu t,\ GT\,\Big|\,\alpha_1\times C_{\max}+\alpha_2\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\tilde r_{hj}+\alpha_3\sum_{h=1}^{\tilde m}\tilde r_h,$$
where $\alpha_l\ge 0$ ($l=1,2,3$) is a given constant and $C_{\max}=\max\{\bar C_{hj}\,|\,h=1,\ldots,\tilde m;\ j=1,\ldots,\tilde n_h\}$. Wang and Liang [17] studied the problem
$$1\,\Big|\,p_{hj}^{A}(t)=\left(\tfrac{\varsigma_{hj}}{\tilde r_{hj}}\right)^{\eta}+\theta t,\ s_{h}^{A}(t)=\left(\tfrac{o_{h}}{\tilde r_{h}}\right)^{\eta}+\mu t,\ \sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\tilde r_{hj}\le\tilde V,\ \sum_{h=1}^{\tilde m}\tilde r_h\le\tilde U,\ GT\,\Big|\,C_{\max}.$$
Table 1. Symbols.

3. Basic Results

For a given schedule $\Pi$, following Wang et al. [15] and Liang et al. [16] and using mathematical induction, we have
$$\begin{aligned}
\bar C_{[1][1]}&=\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta}+\left(\frac{\varsigma_{[1][1]}}{\tilde r_{[1][1]}}\right)^{\eta}+\theta\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta}=\left(\frac{\varsigma_{[1][1]}}{\tilde r_{[1][1]}}\right)^{\eta}+(1+\theta)\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta},\\
\bar C_{[1][2]}&=\bar C_{[1][1]}+\left(\frac{\varsigma_{[1][2]}}{\tilde r_{[1][2]}}\right)^{\eta}+\theta\bar C_{[1][1]}=\left(\frac{\varsigma_{[1][2]}}{\tilde r_{[1][2]}}\right)^{\eta}+(1+\theta)\left(\frac{\varsigma_{[1][1]}}{\tilde r_{[1][1]}}\right)^{\eta}+(1+\theta)^{2}\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta},\\
&\ \ \vdots\\
\bar C_{[1][\tilde n_{[1]}]}&=\sum_{j=1}^{\tilde n_{[1]}}(1+\theta)^{\tilde n_{[1]}-j}\left(\frac{\varsigma_{[1][j]}}{\tilde r_{[1][j]}}\right)^{\eta}+(1+\theta)^{\tilde n_{[1]}}\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta},\\
\bar C_{[2][1]}&=\bar C_{[1][\tilde n_{[1]}]}+\left(\frac{o_{[2]}}{\tilde r_{[2]}}\right)^{\eta}+\mu\bar C_{[1][\tilde n_{[1]}]}+\left(\frac{\varsigma_{[2][1]}}{\tilde r_{[2][1]}}\right)^{\eta}+\theta\left[\bar C_{[1][\tilde n_{[1]}]}+\left(\frac{o_{[2]}}{\tilde r_{[2]}}\right)^{\eta}+\mu\bar C_{[1][\tilde n_{[1]}]}\right]\\
&=\sum_{j=1}^{\tilde n_{[1]}}(1+\mu)(1+\theta)^{\tilde n_{[1]}-j+1}\left(\frac{\varsigma_{[1][j]}}{\tilde r_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde n_{[1]}+1}\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta}+(1+\theta)\left(\frac{o_{[2]}}{\tilde r_{[2]}}\right)^{\eta}+\left(\frac{\varsigma_{[2][1]}}{\tilde r_{[2][1]}}\right)^{\eta},\\
\bar C_{[2][2]}&=\bar C_{[2][1]}+\left(\frac{\varsigma_{[2][2]}}{\tilde r_{[2][2]}}\right)^{\eta}+\theta\bar C_{[2][1]}\\
&=\sum_{j=1}^{\tilde n_{[1]}}(1+\mu)(1+\theta)^{\tilde n_{[1]}-j+2}\left(\frac{\varsigma_{[1][j]}}{\tilde r_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde n_{[1]}+2}\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta}+(1+\theta)^{2}\left(\frac{o_{[2]}}{\tilde r_{[2]}}\right)^{\eta}+(1+\theta)\left(\frac{\varsigma_{[2][1]}}{\tilde r_{[2][1]}}\right)^{\eta}+\left(\frac{\varsigma_{[2][2]}}{\tilde r_{[2][2]}}\right)^{\eta},\\
&\ \ \vdots\\
\bar C_{[2][\tilde n_{[2]}]}&=\sum_{j=1}^{\tilde n_{[1]}}(1+\mu)(1+\theta)^{\tilde n_{[1]}+\tilde n_{[2]}-j}\left(\frac{\varsigma_{[1][j]}}{\tilde r_{[1][j]}}\right)^{\eta}+(1+\mu)(1+\theta)^{\tilde n_{[1]}+\tilde n_{[2]}}\left(\frac{o_{[1]}}{\tilde r_{[1]}}\right)^{\eta}+\sum_{j=1}^{\tilde n_{[2]}}(1+\theta)^{\tilde n_{[2]}-j}\left(\frac{\varsigma_{[2][j]}}{\tilde r_{[2][j]}}\right)^{\eta}+(1+\theta)^{\tilde n_{[2]}}\left(\frac{o_{[2]}}{\tilde r_{[2]}}\right)^{\eta},\\
&\ \ \vdots\\
\bar C_{[\tilde m][\tilde n_{[\tilde m]}]}&=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}(1+\mu)^{\tilde m-h}(1+\theta)^{\sum_{l=h}^{\tilde m}\tilde n_{[l]}-j}\left(\frac{\varsigma_{[h][j]}}{\tilde r_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde m}(1+\mu)^{\tilde m-h}(1+\theta)^{\sum_{l=h}^{\tilde m}\tilde n_{[l]}}\left(\frac{o_{[h]}}{\tilde r_{[h]}}\right)^{\eta}.
\end{aligned}$$
According to the above equations, we have
$$\widehat{tct}=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\bar C_{[h][j]}=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde r_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde m}\left[\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde r_{[h]}}\right)^{\eta}.\tag{4}$$
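To make the recursion above and Equation (4) concrete, the following minimal Python sketch (not the authors' code; the function names and numerical values are our own illustrations) simulates a schedule with $p_{hj}^{A}(t)=(\varsigma_{hj}/\tilde r_{hj})^{\eta}+\theta t$ and $s_{h}^{A}(t)=(o_h/\tilde r_h)^{\eta}+\mu t$ and checks that the resulting total completion time coincides with the coefficient form of Equation (4). The helper `job_and_setup_weights` is reused in later sketches.

```python
# Minimal sketch (not the authors' code): simulate the completion-time
# recursion of Section 3 and check it against the coefficient form of
# Equation (4). All function names and numerical values are illustrative.

def job_and_setup_weights(n, theta, mu):
    """n[h] = size of the group in position h (0-based positions).
    Returns w[h][j], the coefficient of (ς_{[h][j]}/r̃_{[h][j]})^η, and
    g[h], the coefficient of (o_{[h]}/r̃_{[h]})^η, in Equation (4)."""
    m = len(n)
    w = [[0.0] * n[h] for h in range(m)]
    g = [0.0] * m
    for h in range(m):
        for j in range(n[h]):
            acc = sum((1 + theta) ** (l - j) for l in range(j, n[h]))
            after = n[h] - 1 - j              # jobs between job j and group k
            for k in range(h + 1, m):
                acc += (1 + mu) ** (k - h) * sum(
                    (1 + theta) ** (after + l + 1) for l in range(n[k]))
                after += n[k]
            w[h][j] = acc
        acc, after = 0.0, 0
        for k in range(h, m):
            acc += (1 + mu) ** (k - h) * sum(
                (1 + theta) ** (after + l + 1) for l in range(n[k]))
            after += n[k]
        g[h] = acc
    return w, g

def completion_times(work, setup, r_job, r_grp, eta, theta, mu):
    """Simulate p^A_{hj}(t) = (ς/r̃)^η + θt and s^A_h(t) = (o/r̃)^η + μt."""
    t, C = 0.0, []
    for h in range(len(work)):
        t += (setup[h] / r_grp[h]) ** eta + mu * t         # group setup
        for j in range(len(work[h])):
            t += (work[h][j] / r_job[h][j]) ** eta + theta * t
            C.append(t)                                    # completion of job [h][j]
    return C

if __name__ == "__main__":
    work = [[4.0, 9.0, 2.0], [7.0, 3.0]]      # ς_{hj} in schedule order
    setup = [5.0, 6.0]                        # o_h
    r_job = [[1.5, 2.0, 1.0], [2.5, 1.2]]     # r̃_{hj}
    r_grp = [2.0, 3.0]                        # r̃_h
    eta, theta, mu = 2.0, 0.1, 0.05
    w, g = job_and_setup_weights([3, 2], theta, mu)
    tct_eq4 = sum(w[h][j] * (work[h][j] / r_job[h][j]) ** eta
                  for h in range(2) for j in range(len(work[h])))
    tct_eq4 += sum(g[h] * (setup[h] / r_grp[h]) ** eta for h in range(2))
    assert abs(tct_eq4 - sum(completion_times(work, setup, r_job, r_grp,
                                              eta, theta, mu))) < 1e-9
```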
Lemma 1.
For a given schedule $\Pi$ of $P_{\widehat{tct}}$, the optimal resource allocation $R^{*}(\bar\pi_\Omega,\bar\pi_h\,|\,h=1,\ldots,\tilde m)$ is
$$\tilde r^{*}_{[h][j]}=\frac{\left\{\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](\varsigma_{[h][j]})^{\eta}\right\}^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left\{\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](\varsigma_{[h][j]})^{\eta}\right\}^{\frac{1}{\eta+1}}}\times\tilde V\tag{5}$$
for $h=1,\ldots,\tilde m$; $j=1,\ldots,\tilde n_{[h]}$, and
$$\tilde r^{*}_{[h]}=\frac{\left\{\left[\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](o_{[h]})^{\eta}\right\}^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde m}\left\{\left[\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](o_{[h]})^{\eta}\right\}^{\frac{1}{\eta+1}}}\times\tilde U\tag{6}$$
for $h=1,\ldots,\tilde m$.
Proof. 
Obviously, Equation (4) is a convex function with respect to $\tilde r_{[h][j]}$ and $\tilde r_{[h]}$, and in an optimal solution all the available resources are consumed, i.e., $\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\tilde r_{[h][j]}-\tilde V=0$ and $\sum_{h=1}^{\tilde m}\tilde r_{[h]}-\tilde U=0$. As in Wang and Liang [17], Shabtay and Kaspi [20], and Wang and Wang [21], for a given schedule the optimal resource allocation of the problem $P_{\widehat{tct}}$ can be obtained by the Lagrange multiplier method. The Lagrangian function is
$$\begin{aligned}
Q(\kappa,\upsilon,R)={}&\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\bar C_{[h][j]}+\kappa\left(\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\tilde r_{[h][j]}-\tilde V\right)+\upsilon\left(\sum_{h=1}^{\tilde m}\tilde r_{[h]}-\tilde U\right)\\
={}&\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde r_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde m}\left[\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde r_{[h]}}\right)^{\eta}\\
&+\kappa\left(\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\tilde r_{[h][j]}-\tilde V\right)+\upsilon\left(\sum_{h=1}^{\tilde m}\tilde r_{[h]}-\tilde U\right),
\end{aligned}\tag{7}$$
where $\kappa\ge 0$ and $\upsilon\ge 0$ are the Lagrangian multipliers. Differentiating Equation (7) with respect to $\tilde r_{[h][j]}$ and $\kappa$ yields
$$\frac{\partial Q(\kappa,\upsilon,R)}{\partial\tilde r_{[h][j]}}=-\eta\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\frac{(\varsigma_{[h][j]})^{\eta}}{(\tilde r_{[h][j]})^{\eta+1}}+\kappa=0\tag{8}$$
and
$$\frac{\partial Q(\kappa,\upsilon,R)}{\partial\kappa}=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\tilde r_{[h][j]}-\tilde V=0.\tag{9}$$
By using Equations (8) and (9), it follows that
$$\tilde r_{[h][j]}=\left(\frac{\eta\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]}{\kappa}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\tag{10}$$
and
$$\kappa^{\frac{1}{\eta+1}}=\frac{\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left\{\eta\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](\varsigma_{[h][j]})^{\eta}\right\}^{\frac{1}{\eta+1}}}{\tilde V}.\tag{11}$$
From Equations (10) and (11), it follows that
$$\tilde r^{*}_{[h][j]}=\frac{\left\{\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](\varsigma_{[h][j]})^{\eta}\right\}^{\frac{1}{\eta+1}}}{\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left\{\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right](\varsigma_{[h][j]})^{\eta}\right\}^{\frac{1}{\eta+1}}}\times\tilde V.$$
Similarly, Equation (6) can be obtained.    □
By Lemma 1, substituting Equations (5) and (6) into $\widehat{tct}=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\bar C_{hj}$, we have
$$\begin{aligned}
\widehat{tct}&=\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{\varsigma_{[h][j]}}{\tilde r_{[h][j]}}\right)^{\eta}+\sum_{h=1}^{\tilde m}\left[\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]\left(\frac{o_{[h]}}{\tilde r_{[h]}}\right)^{\eta}\\
&=\frac{\left[\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left(\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde V^{\eta}}+\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}.
\end{aligned}\tag{12}$$
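The closed-form allocation of Lemma 1 and the schedule-only expression of Equation (12) can be sketched as follows. This is illustrative code, assuming the helpers `job_and_setup_weights` and `completion_times` from the sketch after Equation (4) are in scope; the name `optimal_allocation` and the data values are our own.

```python
# Sketch of Lemma 1 and Equation (12), assuming job_and_setup_weights and
# completion_times from the sketch after Equation (4) are in scope.

def optimal_allocation(work, setup, V, U, eta, theta, mu):
    """Closed-form resource split of Equations (5)-(6) for a fixed schedule;
    returns (r_job, r_grp, tct) with tct evaluated via Equation (12)."""
    n = [len(grp) for grp in work]
    m = len(n)
    w, g = job_and_setup_weights(n, theta, mu)
    a = [[(w[h][j] * work[h][j] ** eta) ** (1.0 / (eta + 1))
          for j in range(n[h])] for h in range(m)]
    b = [(g[h] * setup[h] ** eta) ** (1.0 / (eta + 1)) for h in range(m)]
    sa, sb = sum(sum(row) for row in a), sum(b)
    r_job = [[a[h][j] / sa * V for j in range(n[h])] for h in range(m)]   # Eq. (5)
    r_grp = [b[h] / sb * U for h in range(m)]                             # Eq. (6)
    tct = sa ** (eta + 1) / V ** eta + sb ** (eta + 1) / U ** eta         # Eq. (12)
    return r_job, r_grp, tct

work, setup = [[4.0, 9.0, 2.0], [7.0, 3.0]], [5.0, 6.0]
r_job, r_grp, tct = optimal_allocation(work, setup, V=500.0, U=500.0,
                                       eta=2.0, theta=0.1, mu=0.05)
# The value from Equation (12) matches a direct simulation under r_job, r_grp.
assert abs(tct - sum(completion_times(work, setup, r_job, r_grp,
                                      2.0, 0.1, 0.05))) < 1e-9
```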
Lemma 2.
For $P_{\widehat{tct}}$, the optimal job schedule $\bar\pi_h^{*}$ within group $\Omega_h$ ($h=1,\ldots,\tilde m$) sequences the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$.
Proof. 
From Equation (12), for group $\Omega_h$, the objective cost is:
$$\sum_{j=1}^{\tilde n_{[h]}}\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}=\sum_{j=1}^{\tilde n_{[h]}}x_{[h][j]}\,y_{[h][j]},$$
where $x_{[h][j]}=\left[\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right]^{\frac{1}{\eta+1}}$ and $y_{[h][j]}=(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}$. The term $x_{[h][j]}$ is a monotonically decreasing function of $j$. By the HLP rule (Hardy et al. [22]), the sum $\sum_{j=1}^{\tilde n_h}x_{[h][j]}y_{[h][j]}$ is minimized if the sequence $x_{[h][1]},x_{[h][2]},\ldots,x_{[h][\tilde n_h]}$ is ordered non-decreasingly and the sequence $y_{[h][1]},y_{[h][2]},\ldots,y_{[h][\tilde n_h]}$ is ordered non-increasingly, or vice versa. Hence, for group $\Omega_h$ ($h=1,\ldots,\tilde m$), sequencing the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$, minimizes the objective cost, and the result follows.    □
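A small brute-force check of Lemma 2 on toy data is given below, assuming `optimal_allocation` from the previous sketch is in scope; the workloads are arbitrary. Among all orders of one group's jobs, the non-decreasing-$\varsigma$ order attains the smallest $\widehat{tct}$.

```python
# Brute-force illustration of Lemma 2 (assumes optimal_allocation above).
from itertools import permutations

setup, group2 = [5.0, 6.0], [7.0, 3.0]
best = min(permutations([4.0, 9.0, 2.0]),
           key=lambda p: optimal_allocation([list(p), group2], setup,
                                            500.0, 500.0, 2.0, 0.1, 0.05)[2])
assert list(best) == sorted([4.0, 9.0, 2.0])   # non-decreasing ς is optimal
```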

4. Special Cases

By Lemma 2, for each group $\Omega_h$, the optimal job schedule $\bar\pi_h^{*}$ sequences the jobs in non-decreasing order of $\varsigma_{hj}$, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$. From Equation (12), let
$$X=\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}$$
and
$$Y=\frac{\left[\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left(\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde V^{\eta}}.$$
In this section, we study several special cases that can be solved in polynomial time; in each of them the parameters $\varsigma_{hj}$, $o_h$, and $\tilde n_h$ satisfy a relationship under which $X$ (or $Y$) is either minimized or reduces to a constant.

4.1. Case 1

If $o_h=o$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$), then from Equation (12) it follows that
$$\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}=\frac{o^{\eta}}{\tilde U^{\eta}}\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot n}(1+\theta)^{l+(k-h)\ddot n}\right)^{\frac{1}{\eta+1}}\right]^{\eta+1}$$
is a constant (since $\tilde U$, $o$, $\eta$, $\mu$, $\theta$, $\ddot n$, and $\tilde m$ are given in advance, this term does not depend on the group schedule). Let
$$Y_{h\rho}=\begin{cases}1,&\text{if }\Omega_h\text{ is assigned to the }\rho\text{th position},\\ 0,&\text{otherwise},\end{cases}$$
and
$$\Theta_{h\rho}=\sum_{j=1}^{\ddot n}\left[\sum_{l=j}^{\ddot n}(1+\theta)^{l-j}+\sum_{k=\rho+1}^{\tilde m}(1+\mu)^{k-\rho}\sum_{l=1}^{\ddot n}(1+\theta)^{l-j+(k-\rho)\ddot n}\right]^{\frac{1}{\eta+1}}(\varsigma_{hj})^{\frac{\eta}{\eta+1}}.$$
The optimal group schedule can be translated into the following assignment problem:
$$\text{Min}\ \sum_{h=1}^{\tilde m}\sum_{\rho=1}^{\tilde m}\Theta_{h\rho}Y_{h\rho}\tag{15}$$
$$\text{s.t.}\ \sum_{\rho=1}^{\tilde m}Y_{h\rho}=1,\quad h=1,\ldots,\tilde m,\tag{16}$$
$$\sum_{h=1}^{\tilde m}Y_{h\rho}=1,\quad \rho=1,\ldots,\tilde m,\tag{17}$$
$$Y_{h\rho}=0\ \text{or}\ 1,\quad h,\rho=1,\ldots,\tilde m.\tag{18}$$
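As an illustration of how (15)–(18) can be solved in practice (this corresponds to Step 2 of Algorithm 1 below), the following sketch builds the cost matrix $\Theta$ and calls SciPy's `linear_sum_assignment`; the function name `case1_group_positions` and the example data are our own, not from the paper.

```python
# Illustrative sketch (not the authors' code) of Step 2 of Algorithm 1:
# build the cost matrix Θ of (15)-(18) and solve the assignment problem
# with SciPy's Hungarian-method routine.
import numpy as np
from scipy.optimize import linear_sum_assignment

def case1_group_positions(groups, eta, theta, mu):
    """groups[h] = workloads of Ω_{h+1}, sorted non-decreasingly (Lemma 2);
    Case 1 assumes a common setup workload o and equal group sizes."""
    m, n = len(groups), len(groups[0])
    cost = np.zeros((m, m))
    for h in range(m):
        for rho in range(m):                  # candidate position (0-based)
            for j in range(n):
                w = sum((1 + theta) ** (l - j) for l in range(j, n))
                w += sum((1 + mu) ** d *
                         sum((1 + theta) ** (l - j + d * n) for l in range(n))
                         for d in range(1, m - rho))
                cost[h, rho] += w ** (1 / (eta + 1)) * groups[h][j] ** (eta / (eta + 1))
    _, positions = linear_sum_assignment(cost)   # minimise Σ_h Σ_ρ Θ_{hρ} Y_{hρ}
    return positions                             # positions[h] = position of Ω_{h+1}

positions = case1_group_positions([sorted([9.0, 2.0, 4.0]), sorted([3.0, 7.0, 1.0])],
                                  eta=2.0, theta=0.1, mu=0.05)
```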
Thus, for the special case $o_h=o$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$), the problem $P_{\widehat{tct}}$ can be solved by:
Theorem 1.
If $o_h=o$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$), $P_{\widehat{tct}}$ is solvable by Algorithm 1 in $O(\tilde n^{3})$ time.
Algorithm 1: Case 1
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde m$), the optimal job schedule $\bar\pi_h^{*}$ is determined by Lemma 2, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$.
  • Step 2. Calculate $\Theta_{h\rho}$ ($h,\rho=1,\ldots,\tilde m$), and determine the optimal group schedule $\bar\pi_\Omega^{*}$ by solving the assignment problem (15)–(18).
  • Step 3. The optimal resource allocations $\tilde r^{*}_{hj}$ and $\tilde r^{*}_{h}$ are calculated by Equations (5) and (6) (see Lemma 1).
Proof. 
Step 1 requires $O\!\left(\sum_{h=1}^{\tilde m}\tilde n_h\log\tilde n_h\right)\subseteq O(\tilde n\log\tilde n)$ time and Step 3 requires $O(\tilde n)$ time. Step 2 solves an assignment problem, which needs $O(\tilde m^{3})\subseteq O(\tilde n^{3})$ time. Thus, the total time is $O(\tilde n^{3})$.    □

4.2. Case 2

If $\varsigma_{hj}=\varsigma$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$; $j=1,2,\ldots,\tilde n_h$), we have:
Lemma 3.
For $P_{\widehat{tct}}$, if $\varsigma_{hj}=\varsigma$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$; $j=1,\ldots,\tilde n_h$), then the optimal group schedule $\bar\pi_\Omega^{*}$ sequences the groups in non-decreasing order of $o_h$, i.e., $o_{(1)}\le o_{(2)}\le\cdots\le o_{(\tilde m)}$.
Proof. 
From Equation (12), if $\varsigma_{hj}=\varsigma$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$,
$$\frac{\left[\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_{[h]}}\left(\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde V^{\eta}}=\frac{\varsigma^{\eta}}{\tilde V^{\eta}}\left[\sum_{h=1}^{\tilde m}\sum_{j=1}^{\ddot n}\left(\sum_{l=j}^{\ddot n}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot n}(1+\theta)^{l-j+(k-h)\ddot n}\right)^{\frac{1}{\eta+1}}\right]^{\eta+1}$$
is a constant (since $\tilde V$, $\varsigma$, $\eta$, $\mu$, $\theta$, $\ddot n$, and $\tilde m$ are given in advance, this term does not depend on the group schedule).
From Equation (12) and the above analysis, minimizing $\widehat{tct}$ is equivalent to minimizing the following expression:
$$\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}=\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot n}(1+\theta)^{l-\ddot n+\sum_{\xi=h}^{k}\ddot n}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}=\frac{\left[\sum_{h=1}^{\tilde m}\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot n}(1+\theta)^{l+(k-h)\ddot n}\right)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\right]^{\eta+1}}{\tilde U^{\eta}}.\tag{19}$$
Similar to Lemma 2, $\left(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\ddot n}(1+\theta)^{l+(k-h)\ddot n}\right)^{\frac{1}{\eta+1}}$ is a monotonically decreasing function of $h$, and by the HLP rule (Hardy et al. [22]), Equation (19) is minimized by arranging the groups in non-decreasing order of $o_h$; this completes the proof.    □
Thus, for the special case $\varsigma_{hj}=\varsigma$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$; $j=1,\ldots,\tilde n_h$), the problem $P_{\widehat{tct}}$ can be solved by:
Theorem 2.
If $\varsigma_{hj}=\varsigma$ and $\tilde n_h=\tilde n/\tilde m=\ddot n$ ($h=1,\ldots,\tilde m$), $P_{\widehat{tct}}$ is solvable by Algorithm 2 in $O(\tilde n\log\tilde n)$ time.
Algorithm 2: Case 2
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde m$), the jobs may be sequenced in any order.
  • Step 2. The optimal group schedule $\bar\pi_\Omega^{*}$ sequences the groups in non-decreasing order of $o_h$.
  • Step 3. The optimal resource allocations $\tilde r^{*}_{hj}$ and $\tilde r^{*}_{h}$ are calculated by Equations (5) and (6) (see Lemma 1).

4.3. Case 3

For any two groups $\Omega_x$ and $\Omega_y$, if $o_x\le o_y$ implies $\tilde n_x\ge\tilde n_y$, we have:
Lemma 4.
For any two groups $\Omega_x$ and $\Omega_y$ of $P_{\widehat{tct}}$, if $o_x\le o_y$ implies $\tilde n_x\ge\tilde n_y$, the optimal group schedule $\bar\pi_\Omega^{*}$ sequences the groups in non-decreasing order of $o_h$.
Proof. 
Similar to the proof in Liang et al. [16] (see Equation (12)).    □
For this special case, i.e., for any two groups $\Omega_x$ and $\Omega_y$, $o_x\le o_y$ implies $\tilde n_x\ge\tilde n_y$, $P_{\widehat{tct}}$ can be solved by:
Theorem 3.
For any two groups $\Omega_x$ and $\Omega_y$, if $o_x\le o_y$ implies $\tilde n_x\ge\tilde n_y$, $P_{\widehat{tct}}$ is solvable by Algorithm 3 in $O(\tilde n\log\tilde n)$ time.
Algorithm 3: Case 3
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde m$), the optimal job schedule $\bar\pi_h^{*}$ is determined by Lemma 2, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$.
  • Step 2. The optimal group schedule $\bar\pi_\Omega^{*}$ sequences the groups in non-decreasing order of $o_h$.
  • Step 3. The optimal resource allocations $\tilde r^{*}_{hj}$ and $\tilde r^{*}_{h}$ are calculated by Equations (5) and (6) (see Lemma 1).

5. A General Case

For the general case of $P_{\widehat{tct}}$, we have not found a polynomial-time optimal algorithm, and the complexity of determining the optimal group schedule remains an open problem; we conjecture that the problem is NP-hard. Thus, a branch-and-bound (B&B) algorithm, which requires a lower bound and an upper bound, together with heuristic algorithms, is a natural way to solve $P_{\widehat{tct}}$.

5.1. Upper Bound

For the $\widehat{tct}$ minimization, any feasible solution yields an upper bound (denoted by UB). Following Section 3, a group-sorting rule is used as the initial heuristic, and the resulting solution is then improved by the pairwise interchange method (Algorithm 4).
As an alternative (or complement) to Algorithm 4, a tabu search (denoted by ts) algorithm (i.e., Algorithm 5) can also be used to solve $P_{\widehat{tct}}$; illustrative sketches of both procedures are given after the corresponding listings.
Algorithm 4: Upper Bound
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde m$), an internal optimal job schedule $\bar\pi_h^{*}$ (Lemma 2) is: $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$.
  • Step 2. Schedule the groups in non-decreasing order of $o_h$, i.e., $o_{(1)}\le o_{(2)}\le\cdots\le o_{(\tilde m)}$.
  • Step 3. Schedule the groups in non-increasing order of $\tilde n_h$, i.e., $\tilde n_{\langle\langle 1\rangle\rangle}\ge\tilde n_{\langle\langle 2\rangle\rangle}\ge\cdots\ge\tilde n_{\langle\langle\tilde m\rangle\rangle}$.
  • Step 4. Of the two schedules from Steps 2 and 3, select the one with the smaller value of $\widehat{tct}$ (see Equation (12)) as the initial group schedule $\bar\pi_\Omega$.
  • Step 5. Set $k=1$.
  • Step 6. Set $s=k+1$.
  • Step 7. Obtain a new group schedule (denoted by $\bar\pi_\Omega^{*}$) by exchanging the $k$th and $s$th groups; if the $\widehat{tct}$ value of $\bar\pi_\Omega^{*}$ is smaller than that of $\bar\pi_\Omega$, update $\bar\pi_\Omega$ to $\bar\pi_\Omega^{*}$.
  • Step 8. If $s<\tilde m$, set $s=s+1$ and go to Step 7.
  • Step 9. If $k<\tilde m-1$, set $k=k+1$ and go to Step 6; otherwise, STOP. Output the best group schedule $\bar\pi_\Omega^{*}$ found by the heuristic and its objective value $\widehat{tct}$.
  • Step 10. According to Lemma 1, calculate the resource allocation by Equations (5) and (6).
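A compact sketch of Algorithm 4 follows, assuming `optimal_allocation` from the sketch after Equation (12) is available; the function name `upper_bound` and the evaluation-by-re-allocation design are our own reading of the steps above.

```python
# Sketch of Algorithm 4 (group-sorting seed plus pairwise interchange),
# assuming optimal_allocation from the earlier sketch is in scope.

def upper_bound(work, setup, V, U, eta, theta, mu):
    m = len(work)
    work = [sorted(grp) for grp in work]                    # Step 1 (Lemma 2)
    def tct(order):                                         # Equation (12) value
        return optimal_allocation([work[h] for h in order],
                                  [setup[h] for h in order],
                                  V, U, eta, theta, mu)[2]
    by_o = sorted(range(m), key=lambda h: setup[h])         # Step 2
    by_n = sorted(range(m), key=lambda h: -len(work[h]))    # Step 3
    order = min(by_o, by_n, key=tct)                        # Step 4
    for k in range(m - 1):                                  # Steps 5-9
        for s in range(k + 1, m):
            cand = order[:]
            cand[k], cand[s] = cand[s], cand[k]
            if tct(cand) < tct(order):
                order = cand
    return order, tct(order)    # the resource split then follows from Lemma 1
```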
Algorithm 5: ts
  • Step 1. For each group $\Omega_h$ ($h=1,\ldots,\tilde m$), an internal optimal job schedule $\bar\pi_h^{*}$ is obtained by Lemma 2, i.e., $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$.
  • Step 2. Let the tabu list be empty and set the iteration counter to zero.
  • Step 3. Choose an initial group schedule by Steps 2–4 of Algorithm 4, calculate its value $\widehat{tct}$ (see Equation (12)), and set the current group schedule as the best solution $\bar\pi_\Omega^{*}$.
  • Step 4. Search the neighborhood of the current group schedule, generated by randomly exchanging two groups, and determine whether it contains a group schedule $\bar\pi_\Omega^{**}$ that has the smallest objective value in the neighborhood and is not in the tabu list.
  • Step 5. If $\widehat{tct}(\bar\pi_\Omega^{**})<\widehat{tct}(\bar\pi_\Omega^{*})$, then let $\bar\pi_\Omega^{*}=\bar\pi_\Omega^{**}$. Update the tabu list and the iteration counter.
  • Step 6. If the neighborhood contains no group schedule that is not in the tabu list, or the maximum number of iterations is reached, output the local optimal group schedule $\bar\pi_\Omega$ and $\widehat{tct}(\bar\pi_\Omega)$. Otherwise, update the tabu list and go to Step 4.
  • Step 7. According to Lemma 1, calculate the resource allocation by Equations (5) and (6).
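A possible realization of Algorithm 5 is sketched below; the neighborhood size, tabu tenure, and seeding rule are our own simplifications (the paper only fixes the iteration limit, 2000 in Section 6), and `optimal_allocation` from the earlier sketch is again assumed to be in scope.

```python
# A possible realization of Algorithm 5 (tabu search); tenure and
# neighbourhood size are our own choices.
import random

def tabu_search(work, setup, V, U, eta, theta, mu,
                max_iter=2000, tenure=7, neighbours=20):
    m = len(work)
    work = [sorted(grp) for grp in work]                    # Step 1 (Lemma 2)
    def tct(order):
        return optimal_allocation([work[h] for h in order],
                                  [setup[h] for h in order],
                                  V, U, eta, theta, mu)[2]
    current = sorted(range(m), key=lambda h: setup[h])      # Steps 2-3 seed (simplified)
    best, best_val = current[:], tct(current)
    tabu = []
    for _ in range(max_iter):
        candidates = []
        for _ in range(neighbours):                         # random pair swaps
            k, s = random.sample(range(m), 2)
            cand = current[:]
            cand[k], cand[s] = cand[s], cand[k]
            if tuple(cand) not in tabu:
                candidates.append((tct(cand), cand))
        if not candidates:                                  # Step 6 stopping test
            break
        val, current = min(candidates, key=lambda c: c[0])  # Step 4
        if val < best_val:                                  # Step 5
            best, best_val = current[:], val
        tabu.append(tuple(current))
        if len(tabu) > tenure:
            tabu.pop(0)
    return best, best_val
```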

5.2. Lower Bound

Let $\bar\pi_\Omega=(\bar\pi_\Omega^{p},\bar\pi_\Omega^{u})$ be a group schedule, where $\bar\pi_\Omega^{p}$ (resp. $\bar\pi_\Omega^{u}$) is the scheduled (resp. unscheduled) part, and let there be $r$ groups in $\bar\pi_\Omega^{p}$. From Equation (12) and Lemma 4, the lower bound (denoted by LB) of $P_{\widehat{tct}}$ is
$$\begin{aligned}
LB={}&\frac{1}{\tilde V^{\eta}}\Bigg[\sum_{h=1}^{r}\sum_{j=1}^{\tilde n_{[h]}}\Bigg(\sum_{l=j}^{\tilde n_{[h]}}(1+\theta)^{l-j}+\sum_{k=h+1}^{r}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-j-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}\\
&\qquad+\sum_{k=r+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-j-\tilde n_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{r}\tilde n_{[\xi]}+\sum_{\xi=r+1}^{k}\tilde n_{\langle\langle\xi\rangle\rangle}}\Bigg)^{\frac{1}{\eta+1}}(\varsigma_{[h][j]})^{\frac{\eta}{\eta+1}}\\
&\quad+\sum_{h=r+1}^{\tilde m}\sum_{j=1}^{\tilde n_{(h)}}\Bigg(\sum_{l=j}^{\tilde n_{(h)}}(1+\theta)^{l-j}+\sum_{k=h+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-j-\tilde n_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{k}\tilde n_{\langle\langle\xi\rangle\rangle}}\Bigg)^{\frac{1}{\eta+1}}(\varsigma_{(h)j})^{\frac{\eta}{\eta+1}}\Bigg]^{\eta+1}\\
&+\frac{1}{\tilde U^{\eta}}\Bigg[\sum_{h=1}^{r}\Bigg(\sum_{k=h}^{r}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{[k]}}(1+\theta)^{l-\tilde n_{[k]}+\sum_{\xi=h}^{k}\tilde n_{[\xi]}}+\sum_{k=r+1}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-\tilde n_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{r}\tilde n_{[\xi]}+\sum_{\xi=r+1}^{k}\tilde n_{\langle\langle\xi\rangle\rangle}}\Bigg)^{\frac{1}{\eta+1}}(o_{[h]})^{\frac{\eta}{\eta+1}}\\
&\quad+\sum_{h=r+1}^{\tilde m}\Bigg(\sum_{k=h}^{\tilde m}(1+\mu)^{k-h}\sum_{l=1}^{\tilde n_{\langle\langle k\rangle\rangle}}(1+\theta)^{l-\tilde n_{\langle\langle k\rangle\rangle}+\sum_{\xi=h}^{k}\tilde n_{\langle\langle\xi\rangle\rangle}}\Bigg)^{\frac{1}{\eta+1}}(o_{(h)})^{\frac{\eta}{\eta+1}}\Bigg]^{\eta+1},
\end{aligned}\tag{20}$$
where $\varsigma_{h1}\le\varsigma_{h2}\le\cdots\le\varsigma_{h\tilde n_h}$, $o_{(r+1)}\le o_{(r+2)}\le\cdots\le o_{(\tilde m)}$, and $\tilde n_{\langle\langle r+1\rangle\rangle}\ge\tilde n_{\langle\langle r+2\rangle\rangle}\ge\cdots\ge\tilde n_{\langle\langle\tilde m\rangle\rangle}$ (remark: $o_{(h)}$ and $\tilde n_{\langle\langle h\rangle\rangle}$ ($h=r+1,\ldots,\tilde m$) do not necessarily correspond to the same group).
From the UB (see Algorithm 4) and the LB (see Equation (20)), a standard B&B algorithm can be constructed.
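A structural sketch of such a B&B procedure is given below; it branches over the group positions (our own design choice, since the paper does not detail the branching scheme), prunes with the UB of Algorithm 4, and expects `lower_bound` to implement Equation (20) and `evaluate` to return the $\widehat{tct}$ of a complete group order (e.g., via Lemma 1 and Equation (12)). It is not the authors' implementation.

```python
# Structural sketch of a B&B for the general case: nodes fix a prefix of
# the group sequence; lower_bound(prefix, unscheduled) should implement
# Equation (20), evaluate(order) the tct of a complete order.

def branch_and_bound(m, evaluate, lower_bound, ub_order, ub_value):
    best_order, best_val = list(ub_order), ub_value     # seed with Algorithm 4
    def dfs(prefix, remaining):
        nonlocal best_order, best_val
        if not remaining:
            val = evaluate(prefix)
            if val < best_val:
                best_order, best_val = prefix, val
            return
        for h in sorted(remaining):                     # branch on the next position
            child = prefix + [h]
            if lower_bound(child, remaining - {h}) < best_val:   # prune by LB vs. UB
                dfs(child, remaining - {h})
    dfs([], set(range(m)))
    return best_order, best_val
```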

6. Computational Results

A series of computational experiments was performed to evaluate the effectiveness of the UB, B&B, and ts algorithms; the ts algorithm was terminated after 2000 iterations. The proposed algorithms were coded in C++ and run on a desktop computer with an Intel® Core™ i5-10500 CPU at 3.10 GHz and 8 GB RAM under the Windows® 10 operating system. The parameters were randomly generated as follows: $\varsigma_{hj}$ is uniformly distributed in $[1,100]$; $o_h$ is uniformly distributed in $[1,50]$; $\theta$ and $\mu$ are uniformly distributed in $(0,0.5]$ (Table 2) and in $(0.5,1)$ (Table 3); $\tilde U=\tilde V=500$; $\tilde n=100,150,200,250,300$; $\tilde m=12,13,14,15,16$ (with at least one job per group); and $\eta=2$. For each combination of $\tilde n$, $\tilde m$, and $\theta$ ($\mu$), 10 replicas were randomly generated, and the maximum CPU time for each instance was set to 3600 s. For the B&B algorithm, the average and maximum CPU times (in seconds) and the average and maximum numbers of nodes are reported. The error bound of the UB and ts algorithms is given by:
$$\frac{\widehat{tct}(Y)-\widehat{tct}(Opt)}{\widehat{tct}(Opt)},$$
where $Y\in\{UB,ts\}$, $\widehat{tct}(Y)$ is the $\widehat{tct}$ value obtained by $Y$, and $\widehat{tct}(Opt)$ is the optimal value obtained by the B&B algorithm. The computational results are given in Table 2 and Table 3. From Table 2 and Table 3, it is easy to see that the B&B algorithm can solve instances with up to 300 jobs in a reasonable amount of time, and that UB performs very well compared with ts in terms of the error bound. When $\tilde n\le 300$, the maximum error bound is less than 0.001559 (i.e., a relative error of 0.1559%).
Table 2. Results of the algorithms for $\theta,\mu\in(0,0.5]$.
Table 3. Results of the algorithms for $\theta,\mu\in(0.5,1)$.

7. Conclusions

This paper investigated the group scheduling problem with deterioration effects and resource allocation. The goal was to determine $\bar\pi_\Omega^{*}$, $\bar\pi_h^{*}$ ($h=1,\ldots,\tilde m$) within $\Omega_h$, and $R^{*}$ such that $\widehat{tct}$ is minimized subject to $\sum_{h=1}^{\tilde m}\sum_{j=1}^{\tilde n_h}\tilde r_{hj}\le\tilde V$ and $\sum_{h=1}^{\tilde m}\tilde r_h\le\tilde U$. For some special cases, we demonstrated that the problem remains polynomially solvable. For the general case, we proposed several algorithms to solve the problem. As a future extension, it would be interesting to deal with group scheduling under two scenarios based on processing times (see Wu et al. [23]) or with delivery times (see Qian and Zhan [24]).

Author Contributions

Conceptualization, J.-X.Y. and H.-B.B.; methodology, J.-X.Y. and N.R.; software, J.-X.Y., H.-B.B. and H.B.; formal analysis, J.-X.Y. and J.-B.W.; investigation, J.-B.W.; writing—original draft preparation, J.-X.Y. and J.-B.W.; writing—review and editing, J.-X.Y. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the LiaoNing Revitalization Talents Program (Grant No. XLYC2002017) and the Natural Science Foundation of Liaoning Province, China (Grant No. 2020-MS-233).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used to support this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ham, I.; Hitomi, K.; Yoshida, T. Group Technology: Applications to Production Management. Available online: https://link.springer.com/book/10.1007/978-94-009-4976-8 (accessed on 17 July 2022).
  2. Wang, J.B.; Guo, A.X.; Shan, F.; Jiang, B.; Wang, L.Y. Single machine group scheduling under decreasing linear deterioration. J. Appl. Math. Comput. 2007, 24, 283–293. [Google Scholar] [CrossRef]
  3. Yang, S.J. Group scheduling problems with simultaneous considerations of learning and deterioration effects on a single-machine. Appl. Math. Model. 2011, 35, 4008–4016. [Google Scholar] [CrossRef]
  4. Bai, J.; Li, Z.R.; Huang, X. Single-machine group scheduling with general deterioration and learning effects. Appl. Math. Model. 2012, 36, 1267–1274. [Google Scholar] [CrossRef]
  5. Lu, Y.Y.; Wang, J.J.; Wang, J.B. Single machine group scheduling with decreasing time-dependent processing times subject to release dates. Appl. Math. Comput. 2014, 234, 286–292. [Google Scholar] [CrossRef]
  6. Wang, J.B.; Liu, L.; Wang, J.J.; Li, L. Makespan minimization scheduling with ready times, group technology and shortening job processing times. Comput. J. 2018, 61, 1422–1428. [Google Scholar] [CrossRef]
  7. Liu, F.; Yang, J.; Lu, Y.Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874. [Google Scholar] [CrossRef]
  8. Zhu, Z.G.; Sun, L.Y.; Chu, F.; Liu, M. Single-machine group scheduling with resource allocation and learning effect. Comput. Ind. Eng. 2011, 60, 148–157. [Google Scholar] [CrossRef]
  9. Zhang, X.; Liao, L.; Zhang, W.; Cheng, T.C.E.; Tan, Y.; Ji, M. Single-machine group scheduling with new models of position-dependent processing times. Comput. Ind. Eng. 2018, 117, 1–5. [Google Scholar] [CrossRef]
  10. Liao, B.; Wang, X.; Zhu, X.; Yang, S.; Pardalos, P.M. Less is more approach for competing groups scheduling with different learning effects. J. Comb. Optim. 2020, 39, 33–54. [Google Scholar] [CrossRef]
  11. Lv, D.Y.; Luo, S.W.; Xue, J.; Xu, J.X.; Wang, J.B. A note on single machine common flow allowance group scheduling with learning effect and resource allocation. Comput. Ind. Eng. 2021, 151, 106941. [Google Scholar] [CrossRef]
  12. Xu, H.Y.; Li, X.P.; Ruiz, R.B.; Zhu, H.H. Group scheduling with nonperiodical maintenance and deteriorating effects. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 2860–2872. [Google Scholar] [CrossRef]
  13. Oron, D. Scheduling controllable processing time jobs in a deteriorating environment. J. Oper. Res. Soc. 2014, 64, 49–56. [Google Scholar] [CrossRef]
  14. Li, L.; Wang, J.-J. Scheduling jobs with deterioration effect and controllable processing time. Neural Comput. Appl. 2018, 29, 1163–1170. [Google Scholar] [CrossRef]
  15. Wang, D.; Huo, Y.; Ji, P. Single-machine group scheduling with deteriorating jobs and allotted resource. Optim. Lett. 2014, 8, 591–605. [Google Scholar] [CrossRef]
  16. Liang, X.X.; Liu, M.Q.; Feng, Y.B.; Wang, J.B.; Wen, L.S. Solution algorithms for single-machine resource allocation scheduling with deteriorating jobs and group technology. Eng. Optim. 2020, 52, 1184–1197. [Google Scholar] [CrossRef]
  17. Wang, J.B.; Liang, X.X. Group scheduling with deteriorating jobs and allotted resource under limited resource availability constraint. Eng. Optim. 2019, 51, 231–246. [Google Scholar] [CrossRef]
  18. Ji, M.; Chen, K.; Ge, J.; Cheng, T.C.E. Group scheduling and job-dependent due window assignment based on a common flow allowance. Comput. Ind. Eng. 2014, 68, 35–41. [Google Scholar] [CrossRef]
  19. Gawiejnowicz, S. Models and Algorithms of Time-Dependent Scheduling; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  20. Shabtay, D.; Kaspi, M. Minimizing the total weighted flow time in a single machine with controllable processing times. Comput. Oper. Res. 2004, 31, 2279–2289. [Google Scholar] [CrossRef]
  21. Wang, J.B.; Wang, M.Z. Single-machine scheduling to minimize total convex resource consumption with a constraint on total weighted flow time. Comput. Oper. Res. 2012, 39, 492–497. [Google Scholar] [CrossRef]
  22. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1967. [Google Scholar]
  23. Wu, C.C.; Bai, D.; Chen, J.H.; Lin, W.C.; Xing, L.; Lin, J.C.; Cheng, S.-R. Several variants of simulated annealing hyper-heuristic for a single-machine scheduling with two-scenario-based dependent processing times. Swarm Evol. Comput. 2021, 60, 100765. [Google Scholar] [CrossRef]
  24. Qian, J.; Zhan, Y. The due date assignment scheduling problem with delivery times and truncated sum-of-processing-times-based learning effect. Mathematics 2021, 9, 3085. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
