Article

Two-Agent Pareto-Scheduling of Minimizing Total Weighted Completion Time and Total Weighted Late Work

School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2020, 8(11), 2070; https://doi.org/10.3390/math8112070
Submission received: 26 October 2020 / Revised: 11 November 2020 / Accepted: 16 November 2020 / Published: 20 November 2020
(This article belongs to the Special Issue Theoretical and Computational Research in Various Scheduling Models)

Abstract

We investigate the Pareto-scheduling problem with two competing agents on a single machine, minimizing the total weighted completion time of agent A's jobs and the total weighted late work of agent B's jobs, where the B-jobs share a common due date. Since this problem is known to be NP-hard, we present two pseudo-polynomial-time exact algorithms to generate the Pareto frontier and an approximation algorithm to generate a $(1+\epsilon)$-approximate Pareto frontier. In addition, numerical tests are undertaken to evaluate the effectiveness of our algorithms.

1. Introduction

Problem description and motivation: Multi-agent scheduling has attracted an ever-increasing research interest due to its extensive applications (see the book of Agnetis et al. [1]). Among the common four problem-versions (including lexical-, positive-combination-, constrained-, and Pareto-scheduling, as shown in Li and Yuan [2]) for a given group of criteria for multiple agents, Pareto-scheduling has the most important practical value, since it reflects the effective tradeoff between the actual and (usually) conflicting requirements of different agents.
Our considered problem is formally stated as follows. Two agents ($A$ and $B$) compete to process their own sets of independent and non-preemptive jobs on a single machine. The set of the $n_X$ jobs from agent $X \in \{A, B\}$ is $\mathcal{J}^X = \{J^X_1, J^X_2, \ldots, J^X_{n_X}\}$, with $\mathcal{J}^A \cap \mathcal{J}^B = \emptyset$. For convenience, we call a job from agent $X$ an $X$-job. All jobs are available at time zero and are scheduled consecutively without idle time, owing to the regularity of the objective functions introduced below. Each job $J^X_j$ has a processing time $p^X_j$ and a weight $w^X_j$. In addition, each $B$-job $J^B_j$ also has a common due date $d$. We assume that all parameters $p^X_j$, $w^X_j$, and $d$ are known integers.
Let $\sigma$ be a schedule. We use $C^X_j(\sigma)$ to denote the completion time of job $J^X_j$ in $\sigma$. The objective function of agent $A$ is the total weighted completion time, denoted by $\sum w^A_j C^A_j(\sigma)$, while the objective function of agent $B$ is the total weighted late work, denoted by $\sum w^B_j Y^B_j(\sigma)$. Here, the late work $Y^B_j(\sigma)$ of job $J^B_j$ is the amount of its processing performed after the due date $d$; specifically,
$Y^B_j(\sigma) = \begin{cases} 0, & \text{if } C^B_j(\sigma) \le d, \\ C^B_j(\sigma) - d, & \text{if } d < C^B_j(\sigma) \le d + p^B_j, \\ p^B_j, & \text{if } C^B_j(\sigma) > d + p^B_j. \end{cases}$
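The three cases above translate directly into code; the following is a minimal sketch for illustration only (the function name and argument order are ours, not the paper's):

```python
def late_work(C, p, d):
    """Late work of a B-job with completion time C, processing time p,
    and common due date d: the part of the job processed after d."""
    if C <= d:          # early job: nothing is processed after d
        return 0
    if C <= d + p:      # partially early job
        return C - d
    return p            # late job: processed entirely after d
```

For example, with $d = 7$, a job of length 3 completing at time 9 has late work 2.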
Following Hariri et al. (1995) [3], job $J^B_j$ is said to be early, partially early, or late in $\sigma$ if $Y^B_j(\sigma) = 0$, $0 < Y^B_j(\sigma) < p^B_j$, or $Y^B_j(\sigma) = p^B_j$, respectively.
Falling into the category of Pareto-scheduling, the problem studied in this paper aims at generating all Pareto-optimal points (PoPs) and the corresponding Pareto-optimal schedules (PoSs) (the definitions of PoPs and PoSs are given in Section 2) with regard to $\sum w^A_j C^A_j$ and $\sum w^B_j Y^B_j$. Using the notation of Agnetis et al. [1], our scheduling problem can be denoted by $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$. For this problem, we devise efficient exact and approximation algorithms.
Our scheduling model arises in many practical scenarios. For example, in a factory, two concurrent projects ($A$ and $B$), each containing a number of activities of distinct importance, have to share a limited resource. The former focuses on the mean completion time of its activities. In contrast, the latter requires its activities to be completed before the due date as much as possible, since otherwise the parts of key activities left unfinished after the due date may result in irretrievable losses. It is therefore natural to model the goal of project $B$ as minimizing the weighted late work, that is, the weighted amount of processing left undone before the due date. In addition, the two projects naturally have to negotiate a trade-off in utilizing the common resource.
For another example, in a distribution center, two categories of goods ($A$ and $B$) are stored in a warehouse, where the former comprises common goods and the latter comprises fresh goods with a shelf life. It is hoped that the shipping preparations for the $A$-goods will be completed as soon as possible. Due to their limited shelf life, however, if the $B$-goods are transported after a certain time, they will not be fresh enough when they reach the customers. Therefore, it is reasonable to model the goals for the $A$-goods and the $B$-goods by minimizing the total weighted completion time and the total weighted late work, respectively, and to seek an efficient transportation plan.
Related works and our contribution: Numerous works have addressed multi-agent scheduling problems in the literature. In line with the aim of this paper, we only briefly summarize some related results. Wan et al. [4] provided a strongly polynomial-time algorithm for the two-agent Pareto-scheduling problem on a single machine to minimize the number of tardy $A$-jobs and the maximum cost of the $B$-jobs. Later, Wan et al. [5] investigated two Pareto-scheduling problems on a single machine with two competing agents and linear-deterioration processing times: $1\,||\,\#(E^A_{\max}, E^B_{\max})$ and $1\,||\,\#(\sum E^A_j, E^B_{\max})$, where $\sum E^A_j$ is the total earliness of the $A$-jobs and $E^X_{\max}$ is the maximum earliness of the $X$-jobs; for each of the two problems, they proposed a polynomial-time algorithm. Gao and Yuan [6] showed that the following two Pareto-scheduling problems with positional due indices and precedence constraints are both polynomially solvable: $1\,||\,\#(\sum C^A_j, f^B_{\max})$ and $1\,||\,\#(f^A_{\max}, f^B_{\max})$, where $f^X_{\max}$ denotes the maximum cost of the $X$-jobs. He et al. [7] considered the versions of the problems in Gao and Yuan [6] with deteriorating or shortening processing times and without positional due indices and precedence constraints, and devised polynomial-time algorithms. Yuan et al. [8] showed that the single-machine preemptive problem $1\,|\,r_j, pmtn\,|\,\#(L^A_{\max}, L^B_{\max})$ can be solved in polynomial time, where $L^X_{\max}$ denotes the maximum lateness of the $X$-jobs. Wan [9] investigated the single-machine two-agent scheduling problem of minimizing the maximum costs with position-dependent jobs, and developed a polynomial-time algorithm.
While most results on Pareto-scheduling concentrate on devising exact algorithms to obtain the Pareto frontier, there are also some works (such as [10,11,12,13,14]) that develop approximation algorithms to generate an approximate Pareto frontier. Dabia et al. [10] adopted the trimming technique to derive approximate Pareto frontiers for some multi-objective scheduling problems. Yin et al. [15] considered two just-in-time (JIT) scheduling problems with two competing agents on unrelated parallel machines, in which one agent's criterion is to maximize the weighted number of its JIT jobs, and the other agent's criterion is either to maximize its maximum gain from its JIT jobs or to maximize the weighted number of its JIT jobs. They showed that both problems are unary NP-hard when the number of machines is not fixed, and proposed either a polynomial-time algorithm or a fully polynomial-time approximation scheme (FPTAS) when the number of machines is a constant. Yin et al. [16] considered similar problems in the setting of a two-machine flow shop, and provided two pseudo-polynomial-time exact algorithms to find the Pareto frontier. Chen et al. [17] studied a multi-agent Pareto-scheduling problem in a no-wait flow shop setting, in which each agent's criterion is to maximize its own weighted number of JIT jobs. They showed that the problem is unary NP-hard when the number of agents is arbitrary, and presented pseudo-polynomial-time algorithms and a $(1, 1-\epsilon, \ldots, 1-\epsilon)$-approximation algorithm when the number of agents is fixed.
From the perspective of methodology, the solution algorithms for multi-agent scheduling problems, as a type of optimization problem, may also benefit from techniques developed in related optimization settings, such as optimal robot path planning by a gravitational search algorithm (Purcaru et al. [18]) and optimization based on phylogram analysis (Soares et al. [19]).
In earlier work (Zhang and Yuan [20]), we proved that the constrained scheduling problem of minimizing the total late work of agent $A$'s jobs with equal due dates, subject to the makespan of agent $B$'s jobs not exceeding a given upper bound, is NP-hard even if agent $B$ has only one job. This implies the NP-hardness of the problem considered in this paper. Thus, we limit our investigation to devising pseudo-polynomial-time exact algorithms and an approximation algorithm that generates an approximate Pareto frontier.
In addition, in our recent work (Zhang et al. [21]), we considered several three-agent scheduling problems under different constraints on a single machine, in which the three agents' criteria are to minimize the total weighted completion time, the weighted number of tardy jobs, and the total weighted late work. Among those problems, two are related to this paper: the version in which the $A$-jobs have inversely agreeable processing times and weights (i.e., the smaller the processing time of a job, the greater its weight), solved in $O(n_A n_B^2 U_A U_B)$ time, and the version that additionally requires the $B$-jobs to have inversely agreeable due dates and weights, solved in $O(n_A n_B U_A U_B)$ time; in both, the objective pair is $\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$. Here $U_A$ and $U_B$ are upper bounds on the criteria $\sum w^A_j C^A_j$ and $\sum w^B_j Y^B_j$, respectively. In contrast to Zhang et al. [21], in this article we remove the agreeability constraint on the $A$-jobs and instead study the problem in which the $B$-jobs share a common due date.
The remainder of the paper is organized as follows. In Section 2, some preliminaries are provided. In Section 3 and Section 4, we present two dynamic programming algorithms and an FPTAS. In Section 5, numerical tests are undertaken to show the efficiency of the algorithms. Section 6 concludes the paper and suggests directions for future research.

2. Preliminaries

To keep the paper self-contained, in this section we describe some notions and properties related to Pareto-scheduling, and we introduce further notations used in the description of the algorithms in the following sections.
Definition 1.
Consider two $m$-vectors $\mathbf{u} = (u_1, u_2, \ldots, u_m)$ and $\mathbf{v} = (v_1, v_2, \ldots, v_m)$.
(i) We say that $\mathbf{u}$ dominates $\mathbf{v}$, denoted by $\mathbf{u} \preceq \mathbf{v}$, if $u_i \le v_i$ for $i = 1, 2, \ldots, m$.
(ii) We say that $\mathbf{u}$ strictly dominates $\mathbf{v}$, denoted by $\mathbf{u} \prec \mathbf{v}$, if $\mathbf{u} \preceq \mathbf{v}$ and $\mathbf{u} \ne \mathbf{v}$.
(iii) Given a constant $\epsilon > 0$, we say that $\mathbf{u}$ $\epsilon$-dominates $\mathbf{v}$, denoted by $\mathbf{u} \preceq_\epsilon \mathbf{v}$, if and only if $u_i \le (1 + \epsilon) v_i$ for $i = 1, 2, \ldots, m$.
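The three relations of Definition 1 can be written down directly; the following is a small illustrative sketch (the function names are ours):

```python
def dominates(u, v):
    """Definition 1(i): u dominates v if u_i <= v_i componentwise."""
    return all(ui <= vi for ui, vi in zip(u, v))

def strictly_dominates(u, v):
    """Definition 1(ii): u strictly dominates v if it dominates v
    and the two vectors differ."""
    return dominates(u, v) and tuple(u) != tuple(v)

def eps_dominates(u, v, eps):
    """Definition 1(iii): u eps-dominates v if u_i <= (1+eps)*v_i
    for every coordinate i."""
    return all(ui <= (1 + eps) * vi for ui, vi in zip(u, v))
```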
Definition 2.
Given the two agents' criteria $\gamma^A(\sigma)$ and $\gamma^B(\sigma)$, a feasible schedule $\sigma$ is called Pareto-optimal, and the corresponding objective vector $(\gamma^A(\sigma), \gamma^B(\sigma))$ is called a Pareto-optimal point, if no other feasible schedule $\pi$ satisfies $(\gamma^A(\pi), \gamma^B(\pi)) \prec (\gamma^A(\sigma), \gamma^B(\sigma))$. All the Pareto-optimal points form the Pareto frontier, denoted by $\mathcal{P}$.
Let $\mathcal{R}$ be the set of the objective vectors of all feasible schedules, and let $\mathcal{Q}$ be a subset of $\mathcal{R}$.
Definition 3.
A vector $\mathbf{u} \in \mathcal{Q}$ is called non-dominated in $\mathcal{Q}$ if there exists no other vector $\mathbf{v} \in \mathcal{Q}$ such that $\mathbf{v} \prec \mathbf{u}$.
It is not difficult to see that each of the above definitions extends the previous one; in particular, when $\mathcal{Q}$ is exactly equal to $\mathcal{R}$, the non-dominated vectors in $\mathcal{Q}$ compose exactly the Pareto frontier. The following lemma establishes the relationship between the set $\mathcal{P}$ and a subset $\mathcal{Q} \subseteq \mathcal{R}$.
Lemma 1.
For any set $\mathcal{Q}$ with $\mathcal{P} \subseteq \mathcal{Q} \subseteq \mathcal{R}$, if $\mathcal{O}$ is the set of all the non-dominated vectors in $\mathcal{Q}$, then $\mathcal{O} = \mathcal{P}$.
Proof. 
By Definition 2, for each Pareto-optimal point $\mathbf{u} \in \mathcal{P}$, there is no other vector $\mathbf{v} \in \mathcal{R}$ such that $\mathbf{v} \prec \mathbf{u}$; naturally, the same holds within $\mathcal{Q}$, since $\mathcal{Q} \subseteq \mathcal{R}$. It then follows that $\mathcal{P} \subseteq \mathcal{O}$ by the definition of the set $\mathcal{O}$. Next we show that $\mathcal{O} \subseteq \mathcal{P}$. If not, pick a vector $\mathbf{w} \in \mathcal{O} \setminus \mathcal{P}$. Again by Definition 2, there is some vector $\mathbf{w}' \in \mathcal{P}$ such that $\mathbf{w}' \prec \mathbf{w}$. Nevertheless, this is impossible: since $\mathbf{w}' \in \mathcal{P} \subseteq \mathcal{Q}$, Definition 3 forbids such a vector $\mathbf{w}$ from being non-dominated in $\mathcal{Q}$. Thus $\mathcal{O} = \mathcal{P}$. □
By Lemma 1, to generate the Pareto frontier $\mathcal{P}$, an alternative is to first determine a set $\mathcal{Q}$ with $\mathcal{P} \subseteq \mathcal{Q} \subseteq \mathcal{R}$, and then delete the dominated vectors in $\mathcal{Q}$. Throughout the remainder of this paper, such a subset $\mathcal{Q}$ is called an intermediate set. Obviously, $\mathcal{R}$ itself is an intermediate set.
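Lemma 1 underlies the two-phase strategy that the later algorithms follow: build some intermediate set and then strip the dominated vectors. A minimal quadratic-time sketch of the second phase, for illustration only (the function name is ours):

```python
def non_dominated(Q):
    """Return the non-dominated vectors of Q (Definition 3).
    By Lemma 1, whenever P is a subset of Q and Q is a subset of R,
    the result is exactly the Pareto frontier P."""
    def is_dominated(u):
        # u is deleted if some other vector of Q strictly dominates it
        return any(v != u and all(vi <= ui for vi, ui in zip(v, u))
                   for v in Q)
    return [u for u in Q if not is_dominated(u)]
```

For instance, `non_dominated([(1, 5), (2, 3), (4, 4), (3, 1)])` drops only `(4, 4)`, which `(2, 3)` strictly dominates.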
Definition 4.
For a given constant $\epsilon > 0$, a $(1+\epsilon)$-approximate Pareto frontier, denoted by $\mathcal{P}_\epsilon$, is a set of objective vectors satisfying: for any $(\gamma^A(\sigma), \gamma^B(\sigma)) \in \mathcal{P}$, there exists at least one objective vector $(\gamma^A(\sigma'), \gamma^B(\sigma')) \in \mathcal{P}_\epsilon$ such that $(\gamma^A(\sigma'), \gamma^B(\sigma')) \preceq_\epsilon (\gamma^A(\sigma), \gamma^B(\sigma))$.
Definition 5.
A family of algorithms $\{A_\epsilon : \epsilon > 0\}$ is called a fully polynomial-time approximation scheme (FPTAS) if, for each $\epsilon > 0$, $A_\epsilon$ generates a $(1+\epsilon)$-approximate Pareto frontier in time polynomial in the instance size and $1/\epsilon$.
Besides those already mentioned in Section 1, the following notations will also be used later:
  • $\mathcal{J}^X_j$: the set of the first $j$ jobs in $\mathcal{J}^X$, namely, $\mathcal{J}^X_j = \{J^X_1, J^X_2, \ldots, J^X_j\}$.
  • $J^X_i \to_\sigma J^{X'}_j$: job $J^X_i$ immediately precedes job $J^{X'}_j$ in schedule $\sigma$, where $X, X' \in \{A, B\}$.
  • $s^X_j(\sigma)$: the starting time of job $J^X_j$ in $\sigma$.
  • $P^X_{\mathrm{sum}} = \sum_{j=1}^{n_X} p^X_j$: the total processing time of all $X$-jobs.
  • $P_{\mathrm{sum}}$: the total processing time of all jobs, i.e., $P_{\mathrm{sum}} = P^A_{\mathrm{sum}} + P^B_{\mathrm{sum}}$.
  • $W^X_{\mathrm{sum}} = \sum_{j=1}^{n_X} w^X_j$: the total weight of all $X$-jobs.
  • $W_{\mathrm{sum}}$: the total weight of all jobs, i.e., $W_{\mathrm{sum}} = W^A_{\mathrm{sum}} + W^B_{\mathrm{sum}}$.
  • $p^X_{\max} = \max\{p^X_j : 1 \le j \le n_X\}$: the maximum processing time of the $X$-jobs.
  • $w^X_{\max} = \max\{w^X_j : 1 \le j \le n_X\}$: the maximum weight of the $X$-jobs.

3. An Exact Algorithm

In this section, a dynamic programming algorithm for problem $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$ is presented. For convenience of description, for a given schedule $\sigma$, the job set $\mathcal{J}$ is partitioned into the following four subsets: $\mathcal{J}^{A_1}(\sigma) = \{J^A_j : C^A_j(\sigma) \le d\}$, $\mathcal{J}^{A_2}(\sigma) = \{J^A_j : C^A_j(\sigma) > d\}$, $\mathcal{J}^{B_1}(\sigma) = \{J^B_j : s^B_j(\sigma) < d\}$, and $\mathcal{J}^{B_2}(\sigma) = \{J^B_j : s^B_j(\sigma) \ge d\}$. Obviously, this partition of the job set is well defined for a given schedule.
The following lemma establishes the structural properties of the Pareto-optimal schedule.
Lemma 2.
For each Pareto-optimal point $(C, Y)$ of problem $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$, there is a Pareto-optimal schedule $\sigma$ such that
(i) the jobs appear in the block order $\mathcal{J}^{A_1}(\sigma) \to_\sigma \mathcal{J}^{B_1}(\sigma) \to_\sigma \mathcal{J}^{A_2}(\sigma) \to_\sigma \mathcal{J}^{B_2}(\sigma)$;
(ii) the jobs in $\mathcal{J}^{B_1}(\sigma)$ are sequenced in non-increasing order of their weights, and the jobs in $\mathcal{J}^{B_2}(\sigma)$ are sequenced arbitrarily;
(iii) the jobs in $\mathcal{J}^{A_1}(\sigma)$ and $\mathcal{J}^{A_2}(\sigma)$ are sequenced according to the weighted shortest processing time (WSPT) rule.
Proof. 
Statement (i) of Lemma 2 is easily observed: the jobs in $\mathcal{J}^{B_2}(\sigma)$ are late, so moving them to the end of the schedule does not increase their total late work, and as many $A$-jobs as possible can be positioned before the $B$-jobs in $\mathcal{J}^{B_1}(\sigma)$, provided that the last job in $\mathcal{J}^{B_1}(\sigma)$ does not become late. The remaining two statements can easily be proved by an interchange argument, and the details are omitted here. □
Lemma 2 allows us to consider only the feasible schedules that simultaneously satisfy conditions (i)–(iii). To this end, we re-number the $n_A$ jobs in $\mathcal{J}^A$ in the WSPT order and the $n_B$ $B$-jobs in the maximum-weight-first (MW) order, so that
$\frac{p^A_1}{w^A_1} \le \frac{p^A_2}{w^A_2} \le \cdots \le \frac{p^A_{n_A}}{w^A_{n_A}},$
$w^B_1 \ge w^B_2 \ge \cdots \ge w^B_{n_B}.$
Such a sorting takes $O(n \log n)$ time.
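The renumbering amounts to two standard sorts; a brief sketch with hypothetical job data (exact fractions avoid floating-point issues in the WSPT ratios):

```python
from fractions import Fraction

# Hypothetical (p_j, w_j) pairs for the A-jobs and the B-jobs.
A_jobs = [(3, 2), (1, 4), (2, 1)]
B_jobs = [(2, 5), (4, 9), (1, 7)]

# WSPT order for the A-jobs: non-decreasing ratio p_j / w_j.
A_jobs.sort(key=lambda pw: Fraction(pw[0], pw[1]))
# MW order for the B-jobs: non-increasing weight w_j.
B_jobs.sort(key=lambda pw: -pw[1])
```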
According to Lemma 1, the algorithm described below adopts the strategy of first finding an intermediate set dynamically and then deleting the dominated points in it. It is worth mentioning that, in the proposed algorithm, we appropriately relax the conditions so as to find a modestly larger intermediate set. To describe the dynamic programming algorithm briefly, we introduce the following terminology and notations.
  • An $ABAB$-schedule is a schedule $\pi$ for $\mathcal{J}$ satisfying: (i) $\pi = \pi_1 \pi_2 \pi_3 \pi_4$, where, among the four mutually disjoint subschedules $\pi_1$, $\pi_2$, $\pi_3$, and $\pi_4$, the $A$-jobs are included in $\pi_1$ and $\pi_3$, and the $B$-jobs are included in $\pi_2$ and $\pi_4$; (ii) no idle time exists between the jobs within each subschedule, but idle time may exist between two subschedules, and the idle time between $\pi_3$ and $\pi_4$ is supposed to be long enough; (iii) the jobs in each subschedule are sequenced in increasing order of their indices.
  • An $(x, y)$-schedule is an $ABAB$-schedule $\pi$ for $\mathcal{J}^A_x \cup (\mathcal{J}^B \setminus \mathcal{J}^B_{y-1})$ with no idle time between subschedules $\pi_2$ and $\pi_3$, where $x \in \{0, 1, \ldots, n_A\}$ and $y \in \{1, 2, \ldots, n_B + 1\}$.
  • A vector $(t_1, t_2, t_3, C, Y)$ denotes a state of $(x, y)$, in which $t_1$, $t_2$, $t_3$, $C$, and $Y$, respectively, stand for the end point of $\pi_1$, the start point of $\pi_2$, the end point of $\pi_3$, the total weighted completion time of the $A$-jobs in $\mathcal{J}^A_x$, and the total weighted late work of the $B$-jobs in $\mathcal{J}^B \setminus \mathcal{J}^B_{y-1}$. Note that each state of $(x, y)$ corresponds to at least one $(x, y)$-schedule.
  • $\Gamma(x, y)$ denotes the set of all the states of $(x, y)$.
  • $\tilde{\Gamma}(x, y)$ denotes the set obtained from $\Gamma(x, y)$ by deleting every vector $(t_1, t_2, t_3, C, Y)$ for which there is another vector $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y})$ with $\bar{t}_1 \le t_1$, $\bar{t}_2 \ge t_2$, $\bar{t}_3 \le t_3$, $\bar{C} \le C$, and $\bar{Y} \le Y$.
  • Let $\mathcal{Q}_1 = \{(C, Y) : (t_1, t_2, t_3, C, Y) \in \tilde{\Gamma}(n_A, 1)\}$, and let $\tilde{\mathcal{Q}}_1$ be the set of the non-dominated vectors in $\mathcal{Q}_1$.
To solve problem $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$, we first compute $\tilde{\Gamma}(n_A, 1)$ and then obtain the Pareto frontier $\tilde{\mathcal{Q}}_1$. This can be realized by dynamically computing the sets $\Gamma(x, y)$ for all possible choices of the tuple $(x, y)$. Note that each $(x, y)$-schedule can be obtained either by adding job $J^A_x$ to some $(x-1, y)$-schedule, or by adding job $J^B_y$ to some $(x, y+1)$-schedule. Therefore, we can informally describe our dynamic programming algorithm as follows.
Initially, set $\Gamma(0, n_B + 1) = \{(0, t_0, t_0, 0, 0) : d - p^A_{\max} + 1 \le t_0 \le d + p^B_{\max} - 1\}$ and $\Gamma(x, y) = \emptyset$ if $(x, y) \ne (0, n_B + 1)$. Then we recursively generate all the state sets $\Gamma(x, y)$ from the previously generated sets $\Gamma(x-1, y)$ and $\Gamma(x, y+1)$. Specifically,
  • For each state $(t_1, t_2, t_3, C, Y) \in \Gamma(x-1, y)$ with $\Gamma(x-1, y) \ne \emptyset$, add the two states $(t_1', t_2', t_3', C', Y')$ and $(t_1'', t_2'', t_3'', C'', Y'')$ to the set $\Gamma(x, y)$, with
    $(t_1', t_2', t_3', C', Y') = (t_1 + p^A_x,\ t_2,\ t_3,\ C + w^A_x(t_1 + p^A_x),\ Y)$,
    and
    $(t_1'', t_2'', t_3'', C'', Y'') = (t_1,\ t_2,\ t_3 + p^A_x,\ C + w^A_x(t_3 + p^A_x),\ Y)$.
These two states respectively correspond to the $(x, y)$-schedules newly obtained by scheduling job $J^A_x$ immediately after subschedule $\pi_1$ and immediately after subschedule $\pi_3$ in some $(x-1, y)$-schedule $\pi$ that corresponds to the state $(t_1, t_2, t_3, C, Y)$. Note that the first case occurs only when $t_1 + p^A_x \le t_2$.
  • For each state $(t_1, t_2, t_3, C, Y) \in \Gamma(x, y+1)$, also add the two states $(t_1', t_2', t_3', C', Y')$ and $(t_1'', t_2'', t_3'', C'', Y'')$ to the set $\Gamma(x, y)$, with
    $(t_1', t_2', t_3', C', Y') = (t_1,\ t_2 - p^B_y,\ t_3,\ C,\ Y + w^B_y \max\{t_2 - d, 0\})$,
    and
    $(t_1'', t_2'', t_3'', C'', Y'') = (t_1,\ t_2,\ t_3,\ C,\ Y + w^B_y p^B_y)$.
These two states respectively correspond to the $(x, y)$-schedules newly obtained by scheduling job $J^B_y$ immediately before subschedule $\pi_2$ and immediately after subschedule $\pi_4$ in some $(x, y+1)$-schedule $\pi$ that corresponds to the state $(t_1, t_2, t_3, C, Y)$. Note that the first case occurs only when $t_1 \le t_2 - p^B_y$ and $t_2 - p^B_y < d$.
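The recursion just described can be sketched in a few lines. This is an illustration only: the function name is ours, and the dominance pruning that yields the stated running time is omitted, so full state sets are enumerated; only at the end are the non-dominated $(C, Y)$ pairs kept, as permitted by Lemma 1.

```python
def algorithm1_frontier(A, B, d):
    """Sketch of the state generation behind Algorithm 1.
    A = [(p_j, w_j)] in WSPT order, B = [(p_j, w_j)] in MW order.
    A state (t1, t2, t3, C, Y) records the end of pi_1, the start of
    pi_2, the end of pi_3, sum w^A C^A, and sum w^B Y^B."""
    nA, nB = len(A), len(B)
    pmaxA = max(p for p, _ in A)
    pmaxB = max(p for p, _ in B)
    # base case: guess the start point t0 of pi_2
    gamma = {(0, nB + 1): {(0, t0, t0, 0, 0)
                           for t0 in range(d - pmaxA + 1, d + pmaxB)}}
    for x in range(nA + 1):
        for y in range(nB + 1, 0, -1):
            if (x, y) == (0, nB + 1):
                continue
            S = set()
            if x >= 1:
                pA, wA = A[x - 1]
                for (t1, t2, t3, C, Y) in gamma.get((x - 1, y), ()):
                    if t1 + pA <= t2:     # J_x^A at the end of pi_1
                        S.add((t1 + pA, t2, t3, C + wA * (t1 + pA), Y))
                    # J_x^A at the end of pi_3
                    S.add((t1, t2, t3 + pA, C + wA * (t3 + pA), Y))
            if y <= nB:
                pB, wB = B[y - 1]
                for (t1, t2, t3, C, Y) in gamma.get((x, y + 1), ()):
                    if t1 <= t2 - pB and t2 - pB < d:
                        # J_y^B at the front of pi_2
                        S.add((t1, t2 - pB, t3, C, Y + wB * max(t2 - d, 0)))
                    # J_y^B late, at the end of pi_4
                    S.add((t1, t2, t3, C, Y + wB * pB))
            gamma[(x, y)] = S
    Q1 = {(C, Y) for (_, _, _, C, Y) in gamma[(nA, 1)]}
    # keep the non-dominated (C, Y) pairs (Lemma 1)
    return sorted(u for u in Q1
                  if not any(v != u and v[0] <= u[0] and v[1] <= u[1]
                             for v in Q1))
```

On the one-job-per-agent instance $p^A_1 = w^A_1 = p^B_1 = w^B_1 = 1$, $d = 1$, the sketch returns the two Pareto-optimal points $(1, 1)$ (schedule $A$ first) and $(2, 0)$ (schedule $B$ first).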
Note that, if in the above state-generation procedure we replace the sets $\Gamma(x-1, y)$ and $\Gamma(x, y+1)$ with the sets $\tilde{\Gamma}(x-1, y)$ and $\tilde{\Gamma}(x, y+1)$, then the resulting set of new states, denoted by $\Gamma'(x, y)$, may differ from $\Gamma(x, y)$. When the dominated vectors are deleted from the sets $\Gamma(x, y)$ and $\Gamma'(x, y)$, the resulting sets are denoted by $\tilde{\Gamma}(x, y)$ and $\tilde{\Gamma}'(x, y)$, respectively; these are shown to be identical in the following lemma.
Lemma 3.
$\tilde{\Gamma}'(x, y) = \tilde{\Gamma}(x, y)$.
Proof. 
Since $\tilde{\Gamma}(x-1, y) \subseteq \Gamma(x-1, y)$ and $\tilde{\Gamma}(x, y+1) \subseteq \Gamma(x, y+1)$, it follows from the state-generation procedure described above that $\Gamma'(x, y) \subseteq \Gamma(x, y)$. If $\Gamma'(x, y) = \Gamma(x, y)$, then naturally $\tilde{\Gamma}'(x, y) = \tilde{\Gamma}(x, y)$. In the following, suppose that $\Gamma(x, y) \setminus \Gamma'(x, y) \ne \emptyset$. We show that each state $(t_1, t_2, t_3, C, Y) \in \Gamma(x, y) \setminus \Gamma'(x, y)$ is dominated by a state $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y}) \in \Gamma'(x, y)$, namely, $\bar{t}_1 \le t_1$, $\bar{t}_2 \ge t_2$, $\bar{t}_3 \le t_3$, $\bar{C} \le C$, and $\bar{Y} \le Y$.
Let $\pi$ be an $(x, y)$-schedule corresponding to $(t_1, t_2, t_3, C, Y)$. According to the above discussion, there are four possibilities for deriving $\pi$ from some schedule $\pi'$, which is assumed to correspond to a state $(t_1', t_2', t_3', C', Y')$ in $\Gamma(x-1, y)$ or $\Gamma(x, y+1)$.
Case 1. $\pi$ is obtained from $\pi'$ by scheduling job $J^A_x$ directly after subschedule $\pi_1'$. Then $(t_1', t_2', t_3', C', Y') \in \Gamma(x-1, y)$ with $t_1' + p^A_x \le t_2'$, and there is a state $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y}) \in \tilde{\Gamma}(x-1, y)$ such that $\tilde{t}_1 \le t_1'$, $\tilde{t}_2 \ge t_2'$, $\tilde{t}_3 \le t_3'$, $\tilde{C} \le C'$, and $\tilde{Y} \le Y'$. Let $\tilde{\pi}$ be an $(x-1, y)$-schedule corresponding to $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y})$, let $\bar{\pi}$ be the $(x, y)$-schedule obtained from $\tilde{\pi}$ by scheduling $J^A_x$ directly after subschedule $\tilde{\pi}_1$, and let $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y})$ be the state corresponding to $\bar{\pi}$. The operation producing $\bar{\pi}$ is feasible, since $\tilde{t}_1 + p^A_x \le t_1' + p^A_x \le t_2' \le \tilde{t}_2$. Then $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y}) = (\tilde{t}_1 + p^A_x, \tilde{t}_2, \tilde{t}_3, \tilde{C} + w^A_x(\tilde{t}_1 + p^A_x), \tilde{Y})$. Combining this with the fact that $(t_1, t_2, t_3, C, Y) = (t_1' + p^A_x, t_2', t_3', C' + w^A_x(t_1' + p^A_x), Y')$, we have
$\bar{t}_1 = \tilde{t}_1 + p^A_x \le t_1' + p^A_x = t_1$, $\bar{t}_2 = \tilde{t}_2 \ge t_2' = t_2$, $\bar{t}_3 = \tilde{t}_3 \le t_3' = t_3$, $\bar{C} = \tilde{C} + w^A_x(\tilde{t}_1 + p^A_x) \le C' + w^A_x(t_1' + p^A_x) = C$, $\bar{Y} = \tilde{Y} \le Y' = Y$.
Case 2. $\pi$ is obtained from $\pi'$ by scheduling $J^A_x$ directly after subschedule $\pi_3'$. Then $(t_1', t_2', t_3', C', Y') \in \Gamma(x-1, y)$, and there is a state $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y}) \in \tilde{\Gamma}(x-1, y)$ such that $\tilde{t}_1 \le t_1'$, $\tilde{t}_2 \ge t_2'$, $\tilde{t}_3 \le t_3'$, $\tilde{C} \le C'$, and $\tilde{Y} \le Y'$. Let $\tilde{\pi}$ be an $(x-1, y)$-schedule corresponding to $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y})$, let $\bar{\pi}$ be the $(x, y)$-schedule obtained from $\tilde{\pi}$ by scheduling $J^A_x$ directly after subschedule $\tilde{\pi}_3$, and let $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y})$ be the state corresponding to $\bar{\pi}$. Then $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y}) = (\tilde{t}_1, \tilde{t}_2, \tilde{t}_3 + p^A_x, \tilde{C} + w^A_x(\tilde{t}_3 + p^A_x), \tilde{Y})$. Combining this with the fact that $(t_1, t_2, t_3, C, Y) = (t_1', t_2', t_3' + p^A_x, C' + w^A_x(t_3' + p^A_x), Y')$, we have
$\bar{t}_1 = \tilde{t}_1 \le t_1' = t_1$, $\bar{t}_2 = \tilde{t}_2 \ge t_2' = t_2$, $\bar{t}_3 = \tilde{t}_3 + p^A_x \le t_3' + p^A_x = t_3$, $\bar{C} = \tilde{C} + w^A_x(\tilde{t}_3 + p^A_x) \le C' + w^A_x(t_3' + p^A_x) = C$, $\bar{Y} = \tilde{Y} \le Y' = Y$.
Case 3. $\pi$ is obtained from $\pi'$ by scheduling $J^B_y$ directly before subschedule $\pi_2'$. Note that in this case the conditions $t_1' \le t_2' - p^B_y$ and $t_2' - p^B_y < d$ must be satisfied. Then $(t_1', t_2', t_3', C', Y') \in \Gamma(x, y+1)$, and there is a state $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y}) \in \tilde{\Gamma}(x, y+1)$ such that $\tilde{t}_1 \le t_1'$, $\tilde{t}_2 \ge t_2'$, $\tilde{t}_3 \le t_3'$, $\tilde{C} \le C'$, and $\tilde{Y} \le Y'$. Let $\tilde{\pi}$ be an $(x, y+1)$-schedule corresponding to $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y})$, let $\bar{\pi}$ be the $(x, y)$-schedule obtained from $\tilde{\pi}$ by scheduling $J^B_y$ directly before subschedule $\tilde{\pi}_2$, and let $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y})$ be the state corresponding to $\bar{\pi}$. The operation producing $\bar{\pi}$ is feasible: indeed, $\tilde{t}_1 \le t_1' \le t_2' - p^B_y \le \tilde{t}_2 - p^B_y$, which means there is enough space for $J^B_y$ to be scheduled in. In the following, we show that the condition $\tilde{t}_2 - p^B_y < d$ is also satisfied.
Claim 1. If $\tilde{t}_2 \ne t_2'$, then $\tilde{t}_2 \le d$.
Suppose to the contrary that $\tilde{t}_2 > d$. Then $J^B_y$ would be partially early or late in $\bar{\pi}$, implying that $J^B_{y+1}, J^B_{y+2}, \ldots, J^B_{n_B}$ are all late in $\tilde{\pi}$, i.e., there is no job in $\tilde{\pi}_2$, which further implies $\sum_{j=1}^{x} p^A_j = \tilde{t}_1 + \tilde{t}_3 - \tilde{t}_2$. Moreover, since $\tilde{Y} \le Y'$, the jobs $J^B_{y+1}, J^B_{y+2}, \ldots, J^B_{n_B}$ are also late in $\pi'$, which indicates that $\tilde{Y} = Y'$ and $\sum_{j=1}^{x} p^A_j = t_1' + t_3' - t_2'$. From $\tilde{t}_1 + \tilde{t}_3 - \tilde{t}_2 = t_1' + t_3' - t_2'$, $\tilde{t}_1 \le t_1'$, $\tilde{t}_2 \ge t_2'$, and $\tilde{t}_3 \le t_3'$, we obtain $\tilde{t}_2 = t_2'$, contradicting $\tilde{t}_2 \ne t_2'$. Thus $\tilde{t}_2 \le d$, and Claim 1 follows.
Now suppose $\tilde{t}_2 - p^B_y \ge d$. Since $t_2' - p^B_y < d$, this gives $\tilde{t}_2 \ne t_2'$, and then by Claim 1 we have $\tilde{t}_2 \le d < d + p^B_y \le \tilde{t}_2$, which is a contradiction. Thus the condition $\tilde{t}_2 - p^B_y < d$ is satisfied, and the operation producing $\bar{\pi}$ is feasible. Then $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y}) = (\tilde{t}_1, \tilde{t}_2 - p^B_y, \tilde{t}_3, \tilde{C}, \tilde{Y} + w^B_y \max\{\tilde{t}_2 - d, 0\})$. Combining this with the fact that $(t_1, t_2, t_3, C, Y) = (t_1', t_2' - p^B_y, t_3', C', Y' + w^B_y \max\{t_2' - d, 0\})$, we have
$\bar{t}_1 = \tilde{t}_1 \le t_1' = t_1$, $\bar{t}_2 = \tilde{t}_2 - p^B_y \ge t_2' - p^B_y = t_2$, $\bar{t}_3 = \tilde{t}_3 \le t_3' = t_3$, $\bar{C} = \tilde{C} \le C' = C$.
Next we prove that $\bar{Y} \le Y$. If $\tilde{t}_2 = t_2'$, then $\bar{Y} = \tilde{Y} + w^B_y \max\{\tilde{t}_2 - d, 0\} \le Y' + w^B_y \max\{t_2' - d, 0\} = Y$. If $\tilde{t}_2 \ne t_2'$, then from Claim 1 we know that $\tilde{t}_2 \le d$, and hence $t_2' \le \tilde{t}_2 \le d$, so both maximum terms vanish and $\bar{Y} = \tilde{Y} \le Y' = Y$.
Case 4. $\pi$ is obtained from $\pi'$ by scheduling $J^B_y$ directly after subschedule $\pi_4'$. Then $(t_1', t_2', t_3', C', Y') \in \Gamma(x, y+1)$, and there is a state $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y}) \in \tilde{\Gamma}(x, y+1)$ such that $\tilde{t}_1 \le t_1'$, $\tilde{t}_2 \ge t_2'$, $\tilde{t}_3 \le t_3'$, $\tilde{C} \le C'$, and $\tilde{Y} \le Y'$. Let $\tilde{\pi}$ be an $(x, y+1)$-schedule corresponding to $(\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y})$, let $\bar{\pi}$ be the $(x, y)$-schedule obtained from $\tilde{\pi}$ by scheduling $J^B_y$ directly after subschedule $\tilde{\pi}_4$, and let $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y})$ be the state corresponding to $\bar{\pi}$. Then $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{C}, \bar{Y}) = (\tilde{t}_1, \tilde{t}_2, \tilde{t}_3, \tilde{C}, \tilde{Y} + w^B_y p^B_y)$. Combining this with the fact that $(t_1, t_2, t_3, C, Y) = (t_1', t_2', t_3', C', Y' + w^B_y p^B_y)$, we have
$\bar{t}_1 = \tilde{t}_1 \le t_1' = t_1$, $\bar{t}_2 = \tilde{t}_2 \ge t_2' = t_2$, $\bar{t}_3 = \tilde{t}_3 \le t_3' = t_3$, $\bar{C} = \tilde{C} \le C' = C$, $\bar{Y} = \tilde{Y} + w^B_y p^B_y \le Y' + w^B_y p^B_y = Y$.
The result follows. □
Theorem 1.
Algorithm 1 solves the Pareto-scheduling problem $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$ in $O(n_A n_B d P_{\mathrm{sum}} U_A U_B)$ time.
Algorithm 1: For problem 1 | d j B = d | # ( w j A C j A , w j B Y j B )
[Pseudocode of Algorithm 1 appears as a figure in the original article.]
Proof. 
The correctness of Algorithm 1 is guaranteed by Lemmas 1–3. Here we only analyze its time complexity. The initialization step takes $O(P_{\mathrm{sum}} + n_A n_B)$ time, which is dominated by the overall time complexity of Algorithm 1. In the implementation of Algorithm 1, we maintain $\Gamma(x, y) = \tilde{\Gamma}(x, y)$. Since $0 \le t_1 \le d$ and $d - p^A_{\max} + 1 \le t_3 \le P_{\mathrm{sum}}$, each state set $\Gamma(x, y)$ contains $O(d P_{\mathrm{sum}} U_A U_B)$ states. Moreover, $\Gamma(x, y)$ is obtained by performing at most two (constant-time) operations on each of the states in $\Gamma(x-1, y) \cup \Gamma(x, y+1)$, for $x = 0, 1, \ldots, n_A$ and $y = n_B + 1, n_B, \ldots, 1$. Note that the upper bounds of $\sum w^A_j C^A_j$ and $\sum w^B_j Y^B_j$ are given by $U_A = \sum_{j=1}^{n_A} w^A_j \cdot P_{\mathrm{sum}}$ and $U_B = \sum_{j=1}^{n_B} w^B_j p^B_j$, respectively. Thus, the overall running time of Algorithm 1 is $O(n_A n_B d P_{\mathrm{sum}} U_A U_B)$. □

4. An FPTAS

In this section, for problem $1\,|\,d^B_j = d\,|\,\#(\sum w^A_j C^A_j, \sum w^B_j Y^B_j)$, we first give another dynamic programming algorithm and then turn it into an FPTAS by the trimming technique. As with Algorithm 1, we first introduce the following terminology and notations.
  • An $(x, y)$-schedule is an $ABAB$-schedule $\pi$ for $\mathcal{J}^A_x \cup \mathcal{J}^B_y$ with no idle time between subschedules $\pi_1$, $\pi_2$, and $\pi_3$, where $x \in \{0, 1, \ldots, n_A\}$ and $y \in \{0, 1, \ldots, n_B\}$.
  • A vector $(t_1, t_2, t_3, W, k, C, Y)$ denotes a state of $(x, y)$, in which $t_1$, $t_2$, $t_3$, $W$, $k$, $C$, and $Y$, respectively, stand for the end point of $\pi_1$, the end point of $\pi_2$, the end point of $\pi_3$, the total weight of the jobs in $\pi_3$, the index of the last $B$-job in $\pi_2$, the total weighted completion time of the $A$-jobs in $\mathcal{J}^A_x$, and the total weighted late work of the $B$-jobs in $\mathcal{J}^B_y$. Note that each state of $(x, y)$ corresponds to at least one $(x, y)$-schedule.
  • $\Gamma(x, y)$ denotes the set of all the states of $(x, y)$.
  • $\tilde{\Gamma}(x, y)$ denotes the set obtained from $\Gamma(x, y)$ by deleting every vector $(\bar{t}_1, \bar{t}_2, \bar{t}_3, \bar{W}, k, \bar{C}, \bar{Y})$ for which there is another vector $(t_1, t_2, t_3, W, k, C, Y)$ with $t_1 \le \bar{t}_1$, $t_2 \le \bar{t}_2$, $t_3 \le \bar{t}_3$, $W \le \bar{W}$, $C \le \bar{C}$, and $Y \le \bar{Y}$.
  • Let $\mathcal{Q}_2 = \{(C, Y) : (t_1, t_2, t_3, W, k, C, Y) \in \tilde{\Gamma}(n_A, n_B)\}$, and let $\tilde{\mathcal{Q}}_2$ be the set of the non-dominated vectors in $\mathcal{Q}_2$.
Clearly, $\mathcal{Q}_2$ is an intermediate set. Similarly to the discussion for Algorithm 1, we can dynamically generate the sets $\Gamma(x, y)$ for all possible choices of the tuple $(x, y)$ in the following way.
Initially, set Γ ( 0 , 0 ) = { ( 0 , 0 , 0 , 0 , 0 , 0 , 0 ) } and Γ ( x , y ) = if ( x , y ) ( 0 , 0 ) . Then we recursively generate all the state sets Γ ( x , y ) from the previously generated sets Γ ( x 1 , y ) and Γ ( x , y 1 ) . Specifically,
  • For each state ( t 1 , t 2 , t 3 , W , k , C , Y ) Γ ( x 1 , y ) with Γ ( x 1 , y ) , add two states ( t 1 , t 2 , t 3 , W , k , C , Y ) and ( t 1 , t 2 , t 3 , W , k , C , Y ) to the set Γ ( x , y ) , with
    ( t 1 , t 2 , t 3 , W , k , C , Y ) = ( t 1 + p x A , t 2 + p x A , t 3 + p x A , W , k , C + w x A ( t 1 + p x A ) + W p x A , Y + w k B max { min { t 2 + p x A d B , p x A } , 0 } ) , a n d ( t 1 , t 2 , t 3 , W , k , C , Y ) = ( t 1 , t 2 , t 3 + p x A , W + w x A , k , C + w x A ( t 3 + p x A ) , Y ) .
These two states respectively correspond to the newly obtained ( x , y ) -schedules by scheduling job J x A immediately following the subschedule π 1 and immediately following the subschedule π 3 , in some ( x 1 , y ) schedule π that corresponds to the state ( t 1 , t 2 , t 3 , W , k , C , Y ) . Note that the first case occurs only when t 1 + p x A t 2 is satisfied.
  • For each state $(t_1,t_2,t_3,W,k,C,Y)\in\Gamma(x,y-1)$, also add the two states $(t_1',t_2',t_3',W',k',C',Y')$ and $(t_1'',t_2'',t_3'',W'',k'',C'',Y'')$ to the set $\Gamma(x,y)$, with
$$(t_1',t_2',t_3',W',k',C',Y')=\big(t_1,\; t_2+p_y^B,\; t_3+p_y^B,\; W,\; y,\; C+Wp_y^B,\; Y+w_y^B\max\{t_2+p_y^B-d^B,0\}\big),$$
and
$$(t_1'',t_2'',t_3'',W'',k'',C'',Y'')=\big(t_1,\; t_2,\; t_3,\; W,\; k,\; C,\; Y+w_y^Bp_y^B\big).$$
These two states respectively correspond to the new $(x,y)$-schedules obtained by scheduling job $J_y^B$ immediately after the subschedule $\pi_2$ and immediately after the subschedule $\pi_4$, in some $(x,y-1)$-schedule $\pi$ that corresponds to the state $(t_1,t_2,t_3,W,k,C,Y)$. Note that the first case occurs only when $t_2<d^B$ is satisfied.
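On tiny instances, the Pareto frontier produced by the state-generation steps above can be validated by brute force over all job sequences. The following Python sketch (our own illustrative code, not the paper's dynamic program; it follows the problem definition, with the late work of a B-job completing at time $t$ equal to $\min\{\max\{t-d,0\},p\}$) enumerates all permutations and keeps the non-dominated pairs $(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$:

```python
from itertools import permutations

def frontier(a_jobs, b_jobs, d):
    """Brute-force Pareto frontier for tiny instances.
    a_jobs, b_jobs: lists of (processing time, weight); d: common due date of B-jobs."""
    jobs = [(p, w, 'A') for p, w in a_jobs] + [(p, w, 'B') for p, w in b_jobs]
    points = set()
    for order in permutations(range(len(jobs))):
        t = C = Y = 0
        for i in order:
            p, w, agent = jobs[i]
            t += p
            if agent == 'A':
                C += w * t                       # weighted completion time
            else:
                Y += w * min(max(t - d, 0), p)   # weighted late work
        points.add((C, Y))
    # keep only the non-dominated (C, Y) pairs
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))
```

For example, with one A-job $(p,w)=(1,1)$, one B-job $(2,1)$ and $d=2$, this returns the two Pareto points $(1,1)$ and $(3,0)$. The factorial running time restricts this check to very small $n$, which is exactly the regime used in Section 5.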
Note that, if in the above state-generation procedure we replace the sets $\Gamma(x-1,y)$ and $\Gamma(x,y-1)$ with the sets $\tilde{\Gamma}(x-1,y)$ and $\tilde{\Gamma}(x,y-1)$, then the resulting set of new states, denoted by $\Gamma'(x,y)$, may differ from $\Gamma(x,y)$. When the dominated vectors are deleted from $\Gamma(x,y)$ and $\Gamma'(x,y)$, the resulting sets are denoted by $\tilde{\Gamma}(x,y)$ and $\tilde{\Gamma}'(x,y)$, respectively. The following lemma, whose proof is similar to that of Lemma 3, shows that these two sets are identical.
Lemma 4.
$\tilde{\Gamma}(x,y)=\tilde{\Gamma}'(x,y)$.
Theorem 2.
Algorithm 2 solves $1\,|\,d_j^B=d\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ in $O(n_An_B^2dP_{\mathrm{sum}}W^A_{\mathrm{sum}}U_AU_B)$ time.
Algorithm 2: For solving 1 | d j B = d | # ( w j A C j A , w j B Y j B )
(The pseudocode of Algorithm 2 is given as a figure in the original publication.)
Proof. 
The correctness of Algorithm 2 is guaranteed by the discussion above, so we only analyze its time complexity. The initialization step takes $O(n_An_B)$ time, which is dominated by the overall time complexity of Algorithm 2. In the implementation of Algorithm 2, we guarantee that $\Gamma(x,y)=\tilde{\Gamma}(x,y)$. Note that $0\le t_1\le d$, $d-p^A_{\max}+1\le t_3\le P_{\mathrm{sum}}$, $0\le k\le n_B$ and $0\le W\le W^A_{\mathrm{sum}}$; hence each state set $\Gamma(x,y)$ contains $O(n_BdP_{\mathrm{sum}}W^A_{\mathrm{sum}}U_AU_B)$ states. Moreover, $\Gamma(x,y)$ is obtained by performing at most two constant-time operations on each state in $\Gamma(x-1,y)\cup\Gamma(x,y-1)$, for $x=0,1,\dots,n_A$ and $y=0,1,\dots,n_B$. Thus, the overall running time of Algorithm 2 is $O(n_An_B^2dP_{\mathrm{sum}}W^A_{\mathrm{sum}}U_AU_B)$. □
Next we turn Algorithm 2 into an FPTAS in the following way. Set $\Delta=1+\frac{\epsilon}{2n}$, $L_1=\lceil\log_\Delta d\rceil$, $L_3=\lceil\log_\Delta P_{\mathrm{sum}}\rceil$, $L_W=\lceil\log_\Delta W^A_{\mathrm{sum}}\rceil$, $L_A=\lceil\log_\Delta U_A\rceil$ and $L_B=\lceil\log_\Delta U_B\rceil$. Set $I_i^1=[\Delta^{i-1},\Delta^i]$ for $i=1,2,\dots,L_1$, $I_i^3=[\Delta^{i-1},\Delta^i]$ for $i=1,2,\dots,L_3$, $I_i^W=[\Delta^{i-1},\Delta^i]$ for $i=1,2,\dots,L_W$, $I_i^A=[\Delta^{i-1},\Delta^i]$ for $i=1,2,\dots,L_A$ and $I_i^B=[\Delta^{i-1},\Delta^i]$ for $i=1,2,\dots,L_B$. For $x=0,1,\dots,n_A$ and $y=0,1,\dots,n_B$, $\hat{\Gamma}(x,y)$ is obtained from $\Gamma(x,y)$ by the following operation: whenever two states $(t_1,t_2,t_3,W,k,C,Y)$ and $(\bar{t}_1,\bar{t}_2,\bar{t}_3,\bar{W},k,\bar{C},\bar{Y})$ in $\Gamma(x,y)$ are such that $(t_1,t_3,W,C,Y)$ and $(\bar{t}_1,\bar{t}_3,\bar{W},\bar{C},\bar{Y})$ fall into the same box $I_u^1\times I_v^3\times I_w^W\times I_p^A\times I_q^B$ (for some $u\in\{1,\dots,L_1\}$, $v\in\{1,\dots,L_3\}$, $w\in\{1,\dots,L_W\}$, $p\in\{1,\dots,L_A\}$ and $q\in\{1,\dots,L_B\}$) with $t_2\le\bar{t}_2$, retain only the first state. Note that it takes $O(L_1L_3L_WL_AL_B)$ time to partition the boxes. Moreover, we define
$$Q_3=\{(C,Y):(t_1,t_2,t_3,W,k,C,Y)\in\hat{\Gamma}(n_A,n_B)\}$$
and let $\tilde{Q_3}$ be the set of non-dominated vectors in $Q_3$.
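The box partition amounts to mapping each coordinate value $v\ge 1$ to the index $i$ of the interval $I_i=[\Delta^{i-1},\Delta^i]$ containing it and keeping one representative per index tuple. A small Python sketch of the interval-index computation (the function name `box_index` is ours; the value 0 is treated as its own box, since the initial states are all-zero):

```python
import math

def box_index(v, delta):
    """Index i of the interval [delta**(i-1), delta**i] containing v (for v >= 1);
    the value 0 is assigned its own box 0."""
    if v == 0:
        return 0
    return max(1, math.ceil(math.log(v) / math.log(delta)))
```

Two retained coordinates in the same box differ by a factor of at most $\Delta$, and over the $n$ trimming rounds this compounds to at most $\Delta^n=(1+\frac{\epsilon}{2n})^n\le e^{\epsilon/2}\le 1+\epsilon$ for $0<\epsilon\le 1$, which is the loss bound used in the proof of Theorem 3.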
Theorem 3.
Algorithm 3 is an FPTAS for solving $1\,|\,d_j^B=d\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$.
Algorithm 3: For solving 1 | d j B = d | # ( w j A C j A , w j B Y j B )
(The pseudocode of Algorithm 3 is given as a figure in the original publication.)
Proof. 
By induction on $z=x+y$, we prove that, for any state $(t_1,t_2,t_3,W,k,C,Y)\in\Gamma(x,y)$, there is a state $(\bar{t}_1,\bar{t}_2,\bar{t}_3,\bar{W},k,\bar{C},\bar{Y})\in\hat{\Gamma}(x,y)$ such that $\bar{t}_1\le\Delta^zt_1$, $\bar{t}_2\le t_2$, $\bar{t}_3\le\Delta^zt_3$, $\bar{W}\le\Delta^zW$, $\bar{C}\le\Delta^zC$ and $\bar{Y}\le\Delta^zY$.
This is obviously true for $z=0$. Inductively, suppose that it holds up to $z-1$; we show that it also holds for $z$. Recall that each state $(t_1,t_2,t_3,W,k,C,Y)\in\Gamma(x,y)$ is derived from some state $(t_1',t_2',t_3',W',k',C',Y')$ in $\Gamma(x-1,y)$ or $\Gamma(x,y-1)$. Let $\pi$ be an $(x,y)$-schedule corresponding to $(t_1,t_2,t_3,W,k,C,Y)$, and let $\pi'$ be a schedule corresponding to the state $(t_1',t_2',t_3',W',k',C',Y')$. By the induction hypothesis, there is a state $(\hat{t}_1,\hat{t}_2,\hat{t}_3,\hat{W},k',\hat{C},\hat{Y})$ in $\hat{\Gamma}(x-1,y)$ or $\hat{\Gamma}(x,y-1)$ such that $\hat{t}_1\le\Delta^{z-1}t_1'$, $\hat{t}_2\le t_2'$, $\hat{t}_3\le\Delta^{z-1}t_3'$, $\hat{W}\le\Delta^{z-1}W'$, $\hat{C}\le\Delta^{z-1}C'$, and $\hat{Y}\le\Delta^{z-1}Y'$. Let $\hat{\pi}$ be an $(x-1,y)$-schedule or $(x,y-1)$-schedule corresponding to $(\hat{t}_1,\hat{t}_2,\hat{t}_3,\hat{W},k',\hat{C},\hat{Y})$, and, if it is feasible, let $\tilde{\pi}$ be the $(x,y)$-schedule obtained from $\hat{\pi}$ by performing the same operation that we perform on $\pi'$ to get $\pi$. Let $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})$ be the state corresponding to $\tilde{\pi}$. Furthermore, there is a state $(\bar{t}_1,\bar{t}_2,\bar{t}_3,\bar{W},\tilde{k},\bar{C},\bar{Y})\in\hat{\Gamma}(x,y)$ in the same box as $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})$ such that $\bar{t}_1\le\Delta\tilde{t}_1$, $\bar{t}_2\le\tilde{t}_2$, $\bar{t}_3\le\Delta\tilde{t}_3$, $\bar{W}\le\Delta\tilde{W}$, $\bar{C}\le\Delta\tilde{C}$ and $\bar{Y}\le\Delta\tilde{Y}$. There are four possible ways to get $\pi$ from $\pi'$.
Case 1. $\pi$ is obtained from $\pi'$ by scheduling $J_x^A$ directly after the subschedule $\pi_1'$. Note that in this case, the condition $t_2'+p_x^A-d^B<p_{k'}^B$ must be satisfied. Then $(t_1',t_2',t_3',W',k',C',Y')\in\Gamma(x-1,y)$, and the operation to get $\tilde{\pi}$ is feasible since $\hat{t}_2+p_x^A-d^B\le t_2'+p_x^A-d^B<p_{k'}^B$. Then we have $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})=(\hat{t}_1+p_x^A,\,\hat{t}_2+p_x^A,\,\hat{t}_3+p_x^A,\,\hat{W},\,k',\,\hat{C}+w_x^A(\hat{t}_1+p_x^A)+\hat{W}p_x^A,\,\hat{Y}+w_{k'}^B\max\{\min\{\hat{t}_2+p_x^A-d^B,p_x^A\},0\})$. Combining this with the fact that $(t_1,t_2,t_3,W,k,C,Y)=(t_1'+p_x^A,\,t_2'+p_x^A,\,t_3'+p_x^A,\,W',\,k',\,C'+w_x^A(t_1'+p_x^A)+W'p_x^A,\,Y'+w_{k'}^B\max\{\min\{t_2'+p_x^A-d^B,p_x^A\},0\})$, we have
$$\begin{aligned}
\bar{t}_1&\le\Delta\tilde{t}_1=\Delta(\hat{t}_1+p_x^A)\le\Delta^z(t_1'+p_x^A)=\Delta^zt_1,\\
\bar{t}_2&\le\tilde{t}_2=\hat{t}_2+p_x^A\le t_2'+p_x^A=t_2,\\
\bar{t}_3&\le\Delta\tilde{t}_3=\Delta(\hat{t}_3+p_x^A)\le\Delta^z(t_3'+p_x^A)=\Delta^zt_3,\\
\bar{W}&\le\Delta\tilde{W}=\Delta\hat{W}\le\Delta^zW'=\Delta^zW,\qquad \tilde{k}=k'=k,\\
\bar{C}&\le\Delta\tilde{C}=\Delta(\hat{C}+w_x^A(\hat{t}_1+p_x^A)+\hat{W}p_x^A)\le\Delta^z(C'+w_x^A(t_1'+p_x^A)+W'p_x^A)=\Delta^zC,\\
\bar{Y}&\le\Delta\tilde{Y}=\Delta(\hat{Y}+w_{k'}^B\max\{\min\{\hat{t}_2+p_x^A-d^B,p_x^A\},0\})\le\Delta^z(Y'+w_{k'}^B\max\{\min\{t_2'+p_x^A-d^B,p_x^A\},0\})=\Delta^zY.
\end{aligned}$$
Case 2. $\pi$ is obtained from $\pi'$ by scheduling $J_x^A$ directly after the subschedule $\pi_3'$. Then $(t_1',t_2',t_3',W',k',C',Y')\in\Gamma(x-1,y)$, and $\tilde{\pi}$ is clearly a feasible schedule. Then we have $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})=(\hat{t}_1,\,\hat{t}_2,\,\hat{t}_3+p_x^A,\,\hat{W}+w_x^A,\,k',\,\hat{C}+w_x^A(\hat{t}_3+p_x^A),\,\hat{Y})$. Combining this with the fact that $(t_1,t_2,t_3,W,k,C,Y)=(t_1',\,t_2',\,t_3'+p_x^A,\,W'+w_x^A,\,k',\,C'+w_x^A(t_3'+p_x^A),\,Y')$, we have
$$\begin{aligned}
\bar{t}_1&\le\Delta\tilde{t}_1=\Delta\hat{t}_1\le\Delta^zt_1'=\Delta^zt_1,\qquad
\bar{t}_2\le\tilde{t}_2=\hat{t}_2\le t_2'=t_2,\\
\bar{t}_3&\le\Delta\tilde{t}_3=\Delta(\hat{t}_3+p_x^A)\le\Delta^z(t_3'+p_x^A)=\Delta^zt_3,\\
\bar{W}&\le\Delta\tilde{W}=\Delta(\hat{W}+w_x^A)\le\Delta^z(W'+w_x^A)=\Delta^zW,\qquad \tilde{k}=k'=k,\\
\bar{C}&\le\Delta\tilde{C}=\Delta(\hat{C}+w_x^A(\hat{t}_3+p_x^A))\le\Delta^z(C'+w_x^A(t_3'+p_x^A))=\Delta^zC,\\
\bar{Y}&\le\Delta\tilde{Y}=\Delta\hat{Y}\le\Delta^zY'=\Delta^zY.
\end{aligned}$$
Case 3. $\pi$ is obtained from $\pi'$ by scheduling $J_y^B$ directly after the subschedule $\pi_2'$. Note that in this case, the condition $t_2'<d^B$ must be satisfied. Then $(t_1',t_2',t_3',W',k',C',Y')\in\Gamma(x,y-1)$, and the operation to get $\tilde{\pi}$ is feasible since $\hat{t}_2\le t_2'<d^B$. Then we have $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})=(\hat{t}_1,\,\hat{t}_2+p_y^B,\,\hat{t}_3+p_y^B,\,\hat{W},\,y,\,\hat{C}+\hat{W}p_y^B,\,\hat{Y}+w_y^B\max\{\hat{t}_2+p_y^B-d^B,0\})$. Combining this with the fact that $(t_1,t_2,t_3,W,k,C,Y)=(t_1',\,t_2'+p_y^B,\,t_3'+p_y^B,\,W',\,y,\,C'+W'p_y^B,\,Y'+w_y^B\max\{t_2'+p_y^B-d^B,0\})$, we have
$$\begin{aligned}
\bar{t}_1&\le\Delta\tilde{t}_1=\Delta\hat{t}_1\le\Delta^zt_1'=\Delta^zt_1,\qquad
\bar{t}_2\le\tilde{t}_2=\hat{t}_2+p_y^B\le t_2'+p_y^B=t_2,\\
\bar{t}_3&\le\Delta\tilde{t}_3=\Delta(\hat{t}_3+p_y^B)\le\Delta^z(t_3'+p_y^B)=\Delta^zt_3,\\
\bar{W}&\le\Delta\tilde{W}=\Delta\hat{W}\le\Delta^zW'=\Delta^zW,\qquad \tilde{k}=y=k,\\
\bar{C}&\le\Delta\tilde{C}=\Delta(\hat{C}+\hat{W}p_y^B)\le\Delta^z(C'+W'p_y^B)=\Delta^zC,\\
\bar{Y}&\le\Delta\tilde{Y}=\Delta(\hat{Y}+w_y^B\max\{\hat{t}_2+p_y^B-d^B,0\})\le\Delta^z(Y'+w_y^B\max\{t_2'+p_y^B-d^B,0\})=\Delta^zY.
\end{aligned}$$
Case 4. $\pi$ is obtained from $\pi'$ by scheduling $J_y^B$ directly after the subschedule $\pi_4'$. Then $(t_1',t_2',t_3',W',k',C',Y')\in\Gamma(x,y-1)$, and $\tilde{\pi}$ is clearly a feasible schedule. Then we have $(\tilde{t}_1,\tilde{t}_2,\tilde{t}_3,\tilde{W},\tilde{k},\tilde{C},\tilde{Y})=(\hat{t}_1,\,\hat{t}_2,\,\hat{t}_3,\,\hat{W},\,k',\,\hat{C},\,\hat{Y}+w_y^Bp_y^B)$. Combining this with the fact that $(t_1,t_2,t_3,W,k,C,Y)=(t_1',\,t_2',\,t_3',\,W',\,k',\,C',\,Y'+w_y^Bp_y^B)$, we have
$$\begin{aligned}
\bar{t}_1&\le\Delta\tilde{t}_1=\Delta\hat{t}_1\le\Delta^zt_1'=\Delta^zt_1,\qquad
\bar{t}_2\le\tilde{t}_2=\hat{t}_2\le t_2'=t_2,\\
\bar{t}_3&\le\Delta\tilde{t}_3=\Delta\hat{t}_3\le\Delta^zt_3'=\Delta^zt_3,\qquad
\bar{W}\le\Delta\tilde{W}=\Delta\hat{W}\le\Delta^zW'=\Delta^zW,\qquad \tilde{k}=k'=k,\\
\bar{C}&\le\Delta\tilde{C}=\Delta\hat{C}\le\Delta^zC'=\Delta^zC,\\
\bar{Y}&\le\Delta\tilde{Y}=\Delta(\hat{Y}+w_y^Bp_y^B)\le\Delta^z(Y'+w_y^Bp_y^B)=\Delta^zY.
\end{aligned}$$
Thus, for each vector $(C,Y)\in\tilde{Q_2}$, there is a vector $(\bar{C},\bar{Y})\in Q_3$ such that $\bar{C}\le\Delta^nC\le(1+\epsilon)C$ and $\bar{Y}\le\Delta^nY\le(1+\epsilon)Y$.
Next we analyze the time complexity. The initialization step takes $O(n_An_B)$ time, which is dominated by the overall time complexity of Algorithm 3. In the implementation of Algorithm 3, we guarantee that $\Gamma(x,y)=\hat{\Gamma}(x,y)$. Note that there are $O(L_1L_3L_WL_AL_B)$ distinct boxes and $0\le k\le n_B$; hence there are at most $O(n_BL_1L_3L_WL_AL_B)$ different states $(t_1,t_2,t_3,W,k,C,Y)$ in $\Gamma(x,y)$. Moreover, $\Gamma(x,y)$ is obtained by performing at most two constant-time operations on each state in $\Gamma(x-1,y)\cup\Gamma(x,y-1)$, for $x=0,1,\dots,n_A$ and $y=0,1,\dots,n_B$. Thus, the overall running time of Algorithm 3 is $O(n_An_B^2L_1L_3L_WL_AL_B)$. □

5. Numerical Results

In this section, some numerical results are provided to show the efficiency of our proposed algorithms. Running the optimization algorithms requires the following input parameters describing a job instance: the numbers of A-jobs and B-jobs, the processing times and weights of all the jobs, and the common due date of the B-jobs. Running Algorithm 1 or Algorithm 2 yields the Pareto frontier; to use Algorithm 3, we must also choose a value of $\epsilon>0$ to obtain a $(1+\epsilon)$-approximate Pareto frontier. Note that, for the same instance, Algorithms 1 and 2 return the same Pareto frontier, although Algorithm 1 has a better theoretical running time than Algorithm 2. The closer the $(1+\epsilon)$-approximate Pareto frontier obtained by Algorithm 3 lies to the frontier obtained by Algorithms 1 and 2, the better the approximation.
We randomly generated job instances with $n=4$ ($n_A=n_B=2$), $n=6$ ($n_A=n_B=3$), and $n=10$ ($n_A=n_B=5$) jobs. The processing times and weights of the jobs were generated at random between 1 and 2, the common due date of the B-jobs was set to 5, and we set $\epsilon=1$. We ran our algorithms on these instances in a Matlab R2016b environment on a computer with an Intel(R) Core(TM) CPU at 2.50 GHz and 4 GB of RAM. When the number of jobs is small, the Pareto frontier or the approximate Pareto frontier can be found relatively quickly, but the running time grows rapidly as the number of jobs increases. Figure 1, Figure 2 and Figure 3 present the Pareto frontiers and the $(1+\epsilon)$-approximate Pareto frontiers generated by Algorithms 1–3. As can be seen from the three figures, the results obtained by Algorithms 1 and 2 are exactly the same. The results of Algorithm 3 coincide with those of Algorithms 1 and 2, which may be a coincidence caused by the small instance sizes and the narrow range of job sizes. Since the problem we study is NP-hard, our exact algorithms can only run in pseudo-polynomial time; they are therefore best suited to small-scale instances with relatively uniform job sizes, which matches practical settings such as logistics distribution centers that use boxes of fixed sizes.
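The instance generation described above can be reproduced with a few lines of Python (a sketch under our reading that "between 1 and 2" means integer values in $\{1,2\}$; the function name is ours):

```python
import random

def random_instance(n_a, n_b, d=5, seed=None):
    """Random instance as in Section 5: processing times and weights in {1, 2},
    and a common due date d for the B-jobs. Returns (a_jobs, b_jobs, d),
    where each job is a (processing time, weight) pair."""
    rng = random.Random(seed)  # seeded generator for reproducible instances
    a_jobs = [(rng.randint(1, 2), rng.randint(1, 2)) for _ in range(n_a)]
    b_jobs = [(rng.randint(1, 2), rng.randint(1, 2)) for _ in range(n_b)]
    return a_jobs, b_jobs, d
```

Fixing the seed makes the reported experiments repeatable, which is useful when comparing the frontiers returned by the exact algorithms against the approximate frontier of the FPTAS on the same instance.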

6. Conclusions

In this paper, we investigated the Pareto-scheduling problem of two agents' jobs on a single machine to minimize one agent's total weighted completion time and the other agent's total weighted late work. For this problem, we devised two dynamic programming algorithms to generate the Pareto frontier, and an FPTAS to generate an approximate Pareto frontier; some numerical results were also provided. Compared with the two problems $1\,|\,p_j^Aw_j^A\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ and $1\,|\,p_j^Aw_j^A,d_j^Bw_j^B\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ studied in Zhang and Yuan [20], the constraint on $p_j^A$ and $w_j^A$ was removed from the problem considered in this paper, and we instead studied the problem under the condition that the B-jobs have a common due date. Table 1 lists the computational complexity of the above three problems. As can be seen from Table 1, the condition relating $p_j^A$ and $w_j^A$ appears to have a large impact on the complexity of the problem. In future research, one can try to devise more efficient approximation algorithms with a constant performance ratio for the considered problem, and one can also study two-agent problems with other combinations of objective functions.

Author Contributions

Supervision, J.Y.; writing–original draft, Y.Z.; writing–review and editing, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the NSFC under grant numbers 12071442, 11671368 and 11771406.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Agnetis, A.; Billaut, J.C.; Gawiejnowicz, S.; Pacciarelli, D.; Soukhal, A. Multiagent Scheduling: Models and Algorithms; Springer: Berlin/Heidelberg, Germany, 2014.
2. Li, S.S.; Yuan, J.J. Single-machine scheduling with multi-agents to minimize total weighted late work. J. Sched. 2020, 23, 497–512.
3. Hariri, A.M.A.; Potts, C.N.; Van Wassenhove, L.N. Single machine scheduling to minimize total weighted late work. ORSA J. Comput. 1995, 7, 232–242.
4. Wan, L.; Yuan, J.J.; Wei, L.J. Pareto optimization scheduling with two competing agents to minimize the number of tardy jobs and the maximum cost. Appl. Math. Comput. 2016, 273, 912–923.
5. Wan, L.; Wei, L.J.; Xiong, N.X.; Yuan, J.J.; Xiong, J.C. Pareto optimization for the two-agent scheduling problems with linear non-increasing deterioration based on Internet of Things. Future Gener. Comput. Syst. 2017, 76, 293–300.
6. Gao, Y.; Yuan, J.J. Bi-criteria Pareto-scheduling on a single machine with due indices and precedence constraints. Discret. Optim. 2017, 25, 105–119.
7. He, C.; Leung, J. Two-agent scheduling of time-dependent jobs. J. Comb. Optim. 2017, 34, 362–377.
8. Yuan, J.J.; Ng, C.T.; Cheng, T.C.E. Two-agent single-machine scheduling with release dates and preemption to minimize the maximum lateness. J. Sched. 2015, 18, 147–153.
9. Wan, L. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs. Discrete Dyn. Nat. Soc. 2015.
10. Dabia, S.; Talbi, E.G.; Van Woensel, T.; De Kok, T. Approximating multi-objective scheduling problems. Comput. Oper. Res. 2013, 40, 1165–1175.
11. Lee, K.; Choi, B.C.; Leung, J.Y.T.; Pinedo, M.L. Approximation algorithms for multi-agent scheduling to minimize total weighted completion time. Inf. Process. Lett. 2009, 109, 913–917.
12. Legriel, J.; Guernic, C.L.; Cotton, S.; Maler, O. Approximating the Pareto front of multi-criteria optimization problems. In Proceedings of the 16th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Paphos, Cyprus, 20–28 March 2010.
13. Marinescu, R. Efficient approximation algorithms for multi-objective constraint optimization. In Proceedings of the 2nd International Conference on Algorithmic Decision Theory, Piscataway, NJ, USA, 26–28 October 2011.
14. Vassilvitskii, S.; Yannakakis, M. Efficiently computing succinct trade-off curves. Theor. Comput. Sci. 2005, 348, 334–356.
15. Yin, Y.Q.; Cheng, S.R.; Cheng, T.C.E.; Wang, D.J.; Wu, C.C. Just-in-time scheduling with two competing agents on unrelated parallel machines. Omega-Int. J. Manag. Sci. 2016, 63, 41–47.
16. Yin, Y.Q.; Cheng, T.C.E.; Wang, D.J.; Wu, C.C. Two-agent flowshop scheduling to maximize the weighted number of just-in-time jobs. J. Sched. 2017, 20, 313–335.
17. Chen, R.X.; Li, S.S.; Li, W.J. Multi-agent scheduling in a no-wait flow shop system to maximize the weighted number of just-in-time jobs. Eng. Optim. 2019, 51, 217–230.
18. Purcaru, C.; Precup, R.E.; Iercan, D.; Fedorovici, L.O.; David, R.C.; Dragan, F. Optimal robot path planning using gravitational search algorithm. Int. J. Artif. Intell. 2013, 10, 1–20.
19. Soares, A.; Râbelo, R.; Delbem, A. Optimization based on phylogram analysis. Expert Syst. Appl. 2017, 78, 32–50.
20. Zhang, Y.; Yuan, J.J. A note on a two-agent scheduling problem related to the total weighted late work. J. Comb. Optim. 2019, 37, 989–999.
21. Zhang, Y.; Yuan, J.J.; Ng, C.T.; Cheng, T.C.E. Pareto-optimization of three-agent scheduling to minimize the total weighted completion time, weighted number of tardy jobs, and total weighted late work. Nav. Res. Logist. 2020, in press.
Figure 1. The black stars are the points generated by Algorithms 1 and 2; the red circles are the points generated by Algorithm 3.
Figure 2. The black stars are the points generated by Algorithms 1 and 2; the red circles are the points generated by Algorithm 3.
Figure 3. The black stars are the points generated by Algorithms 1 and 2; the red circles are the points generated by Algorithm 3.
Table 1. Complexity of three problems.

Problem | Complexity | Reference
$1\,|\,p_j^Aw_j^A\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ | $O(n_An_B^2U_AU_B)$ | Zhang and Yuan [20]
$1\,|\,p_j^Aw_j^A,d_j^Bw_j^B\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ | $O(n_An_BU_AU_B)$ | Zhang and Yuan [20]
$1\,|\,d_j^B=d\,|\,\#(\sum w_j^AC_j^A,\sum w_j^BY_j^B)$ | $O(n_An_BdP_{\mathrm{sum}}U_AU_B)$ | Theorem 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
