Mathematics
  • Article
  • Open Access

6 April 2022

A Branch-and-Bound Algorithm for Minimizing the Total Tardiness of Multiple Developers

1 Department of Animation and Game Design, Shu-Te University, Kaohsiung 824, Taiwan
2 Department of Multimedia Game Development and Application, Hungkuang University, Taichung 433, Taiwan
* Author to whom correspondence should be addressed.
This article belongs to the Section E1: Mathematics and Computer Science

Abstract

In the game industry, tardiness is an important issue. Unlike a unifunctional machine, a developer may excel in programming but be mediocre in scene modeling. His/her processing speed varies with job type. To minimize tardiness, we need to schedule these developers carefully. Clearly, traditional scheduling algorithms for unifunctional machines are not suitable for such versatile developers. On the other hand, in an unrelated machine scheduling problem, n jobs can be processed by m machines at n × m different speeds, i.e., its solution space is too wide to be simplified. Therefore, a tardiness minimization problem considering three job types and versatile developers is presented. In this study, a branch-and-bound algorithm and a lower bound based on harmonic mean are proposed for minimizing the total tardiness. Theoretical analyses ensure the correctness of the proposed method. Computational experiments also show that the proposed method can ensure the optimality and efficiency for n ≤ 18. With the exact algorithm, we can fairly evaluate other approximate algorithms in the future.
MSC:
90B35; 68M20; 68Q17; 90C27; 90C57

1. Introduction

Game development is a complicated professional domain in which three limited resources, i.e., money, manpower, and time, have to be carefully managed. First, we should note that the cost of developing a multimedia game, e.g., an online game, is rising. The budget for developing a commercial multi-player game is at least $1,000,000. For some large-scale games, e.g., Grand Theft Auto V, developing a single version may even cost a company around $10,000,000 [,,,]. But once a successful game is released, it might earn a billion dollars in profit, e.g., [,]. In light of the above observations, game development involves considerable expertise, such as product planning, graphic design, sound design, programming, and testing. To avoid endless budget amendments, it is essential to carefully schedule all the jobs at the beginning.
Such a large game cannot be implemented by a single developer, making good teamwork another essential factor. A small-sized game may be implemented by a single designer. However, for some large-scale multimedia games, the team size may range from 3 to 100 professionals [,]. Each game draws upon various areas of expertise. For instance, a single piece of music requires various professional skills, e.g., composing, songwriting, dubbing, and sound effects. The professionals with these skills are sourced from different kinds of personnel pools. Some may be official company employees, while others may be temporarily recruited freelancers. Clearly, semi-finished products made by the former must be passed to the latter on schedule. If a critical job is delayed, it may leave dozens of professionals idle. Since their wages, hotel expenses, and dining fees need to be paid even during such an idle period, such costly human resources need to be carefully scheduled in advance.
The third major resource is time. These multimedia games must eventually be released onto the market, so game developers have to race against time to finish them as early as possible. Since these developers have various areas of expertise, their time costs are different. Consider two developers, Mary and Tom. Mary may be highly proficient at figure design, while Tom may excel in scene design. There are 100 figure design jobs, and both developers are qualified for these jobs. Any daily delay of a job will result in a $100 penalty. Consider further that Mary requires 30 person-days and charges $50,000; Tom takes 50 person-days and charges $30,000. To whom should we assign these jobs? How does a delayed job affect the following jobs? These jobs must be carefully scheduled in the beginning. Any delay in a critical job may cause serious damage. For scheduling such a project, the cost, time, and expertise should be considered as a whole. Clearly, it is not easy to solve such a scheduling problem by labor-intensive means. That is, such a scheduling problem in the game industry is no less challenging than those in the aviation, semiconductor, and construction industries.
In light of the above observations, it is clear that tardiness minimization is important in the game industry. In general, the jobs in a large multimedia game have various properties and different tolerance degrees to delay. For example, the job of leading figure design should be completed as early as possible—such an urgent job had better not be delayed. Conversely, late poster design or user manual translation may not cause a huge loss. If possible, all the jobs would best be completed on time. However, as in other industries, it is difficult to schedule more than 20 jobs manually. Therefore, tailored scheduling algorithms for reducing tardiness in the game industry are called for.
Assigning similar jobs to a developer with corresponding expertise helps to reduce tardiness. With the continued refinement of the game industry, developers specialize in different areas of expertise. Let us consider the above example again. Mary should be assigned figure design jobs, and Tom, scene design jobs. However, there are still some other constraints. Suppose that Mary is overloaded with a lot of figure design jobs. Although Mary is highly proficient at figure design, we had better assign some figure design jobs to Tom. Clearly, the computation of such trade-offs is very complicated. This is because multi-specialty developers are not taken into account in traditional scheduling models. Again, some new algorithms for scheduling such jobs and developers in the game industry are needed.
The following three properties distinguish the presented problem from traditional ones. First, for traditional heterogeneous machine scheduling problems, e.g., [,], a capable machine always outperforms others in terms of speed. However, for the presented problem, a developer may excel in figure design and programming but be mediocre in script design. That is, a single developer (or machine) simultaneously has both merits and shortcomings. It depends on what jobs are assigned to him/her. The considerations of the presented problem are more complicated than those of traditional heterogeneous machine scheduling problems.
Second, compared with identical machine scheduling problems, e.g., [,], the amount of computation of the presented problem is large. For m identical machines, we do not need to consider their permutations. Therefore, the solution space of the presented problem is about m! times larger than that of an identical machine scheduling problem. To the best of our knowledge, few researchers have focused their efforts on this emerging industry, and its limited resources (i.e., money, time, and manpower) are seldom discussed.
Third, it is difficult to develop efficient lower bounds in a traditional unrelated machine scheduling problem, e.g., [,]. For n jobs, the processing speeds of m machines are all different; there are m × n combinations, i.e., a large solution space. However, in most situations, a game developer usually processes his/her own preferred jobs, i.e., one or two types. Such unrelated machine models are therefore unnecessarily general for scheduling these developers in the game industry.
In this study, an optimization problem is presented. It is obvious that traditional scheduling algorithms cannot be directly applied to the problem. First, in unifunctional machine scheduling problems, a machine usually processes jobs at a fixed speed, e.g., a welding robot. In the presented problem, the processing speed is determined by the fit between developers and job types. That is, the number of combinations that need to be considered grows. Second, jobs with agreeable processing times and due dates, e.g., [], are commonly employed to develop lower bounds and minimize tardiness. However, this technique leads to an anomaly here (see Section 4.2). Consequently, we propose an exact algorithm to schedule these various jobs and versatile developers in the game industry. Two main contributions are made in this study. First, a branch-and-bound algorithm is proposed for ensuring optimality for n ≤ 18. Second, a lower bound based on a harmonic mean is developed to improve the execution efficiency.
The rest of this study is organized as follows. In Section 2, past research is introduced. In Section 3, the scheduling problem considering versatile developers is formulated. In Section 4, a lower bound and a branch-and-bound algorithm are developed. In Section 5, experiments are conducted to show the execution efficiency of the proposed algorithms. Conclusions are drawn in Section 6.

3. Problem Formulation

The optimization problem is formulated as follows. There are n non-preemptive jobs and m developers. Each job j has a default processing time p_j, a due date d_j, and a job type e_j ∈ {1, 2, 3} for j = 1, 2, …, n. For each job type x, developer a has a processing difficulty ratio r_{ax} for a ∈ {1, 2, …, m} and x ∈ {1, 2, 3}. That is, if job j of type x (i.e., e_j = x) is assigned to developer a, the actual processing time is p_j · r_{ax}. Each job must be assigned to exactly one developer, and each developer can process only one job at a time. On the other hand, if job j is assigned to developer a according to a schedule π, the actual completion time is denoted by C_{j@a}(π) and the tardiness is defined by T_{j@a}(π) = max{0, C_{j@a}(π) − d_j}. Under the above assumptions and constraints, we aim to determine an optimal schedule π* which minimizes the total tardiness; i.e., the minimization problem is defined by
Minimize f(π) = Σ_{a=1}^{m} Σ_{j=1}^{n} T_{j@a}(π),
where f(π) denotes the objective function.
A problem instance is shown in Figure 1a. Let n = 5, m = 2, p_j = 20, 10, 30, 6, 10, d_j = 20, 20, 50, 70, 10, and e_j = 1, 1, 2, 3, 3, for j = 1, 2, …, 5. The processing difficulty ratios are listed in Figure 1b. Let π = (1, 2, 4, 0, 5, 3) be a schedule, where 0 is a separator used to divide the jobs between the developers. Since developer 1 is highly proficient at dealing with jobs of type 1, jobs 1 and 2 are assigned to developer 1, and their actual processing times are 20 and 10, respectively. Similarly, since developer 2 excels at processing jobs of type 2, job 3 is processed by developer 2, and its actual processing time is 30. Note that neither developer is skilled in processing jobs of type 3 (i.e., jobs 4 and 5). Since job 5 has an early due date, let developer 2 process it first; its actual processing time is 30 (= 10 × r_{23} = 10 × 3). Similarly, developer 1 requires a processing time of 30 (= 6 × r_{13} = 6 × 5) to process job 4. Eventually, the total tardiness is f(π) = 40 (i.e., 0 + 10 + 10 + 0 + 20).
Figure 1. A problem instance.
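To make the computation above concrete, the following sketch (in Python) recomputes f(π) = 40 for the schedule π = (1, 2, 4, 0, 5, 3). Only the ratios r_11 = 1, r_22 = 1, r_13 = 5, and r_23 = 3 are implied by the worked example; since Figure 1b is not reproduced here, the remaining ratio entries are placeholders we assume for completeness.

# A minimal sketch that recomputes the total tardiness of the schedule above.
p = {1: 20, 2: 10, 3: 30, 4: 6, 5: 10}      # default processing times p_j
d = {1: 20, 2: 20, 3: 50, 4: 70, 5: 10}     # due dates d_j
e = {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}          # job types e_j
# processing difficulty ratios r[(developer, job type)];
# the entries marked "assumed" are not stated in the text and are never used here
r = {(1, 1): 1, (1, 2): 4, (1, 3): 5,       # (1, 2) assumed
     (2, 1): 4, (2, 2): 1, (2, 3): 3}       # (2, 1) assumed

def total_tardiness(schedule, p, d, e, r):
    # schedule is a job sequence in which 0 separates consecutive developers
    developer, t, cost = 1, 0, 0
    for job in schedule:
        if job == 0:                          # separator: switch to the next developer
            developer, t = developer + 1, 0
            continue
        t += p[job] * r[(developer, e[job])]  # actual processing time p_j * r_{a,e_j}
        cost += max(0, t - d[job])            # tardiness of this job
    return cost

print(total_tardiness((1, 2, 4, 0, 5, 3), p, d, e, r))  # prints 40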
It is clear that the above scheduling problem is different from traditional ones. The following features differentiate the presented problem from traditional ones:
  • Compared with traditional unrelated machine scheduling problems, the concept of job type can reduce the amount of computation. For example, all the relationships between machines and jobs must be taken into account, e.g., the probability of machine i processing job j (p_{ij}) in [] and the processing time of machine i processing job j (p_{ij}) in []. All the m × n combinations must be considered. If a job set is not given, blindly estimating each machine’s average processing speed cannot determine a good lower bound. However, in the presented problem, jobs can be categorized into three types, so only m × 3 processing speeds are considered, and their average processing speeds can be employed to develop a lower bound.
  • In past heterogeneous machine scheduling models [,,], a capable developer (or machine) always outperforms others in terms of processing speed. That is, each heterogeneous machine has its own fixed speed. However, in this presented problem, a developer might be mediocre in processing jobs of type 1 but excel in dealing with jobs of other types. Clearly, these developers cannot be modeled by such unifunctional machines.
  • Compared with traditional identical machine scheduling problems, e.g., [,], the presented problem is more difficult. An example is given in Table 2. Consider that we allocate the three jobs to three identical machines. It is obvious that there is only one schedule, and it is just the optimal schedule; i.e., each machine takes one job. However, in the presented problem, a capable developer might take all three jobs to achieve optimality. Consequently, we need to check all the possible situations listed in Table 2 to determine the optimal schedule.
    Table 2. The number of all possible schedules.
  • Traditional tardiness minimization techniques cannot be directly applied to this problem. Jobs with larger processing difficulty ratios may take precedence over jobs with earlier due dates. In the presented problem, the processing difficulty ratio, processing time, and due date should be considered as a whole.
In light of the above observations, traditional scheduling algorithms cannot be directly applied to this problem, and a new optimization algorithm is thus required. That is, if a project meets the following two criteria, manpower can be arranged by such algorithms. First, the project is interdisciplinary and recruits cross-domain workers, whether employees or freelancers. A developer may possess several competencies and can perform several kinds of jobs in the project. Second, the performance pattern of each developer is known. That is, we have sufficient historical data on all developers and can estimate each one’s processing time for a given kind of job [,,].

4. Branch-and-Bound Algorithm

In this section, we develop a branch-and-bound algorithm (named BB). To obtain the optimal schedules, BB will explore each search tree in the depth-first-search (DFS) order. Moreover, to deter us from exploring useless partial schedules, we also propose some dominance rules and develop a lower bound.

4.1. Dominance Rules

For convenience, we introduce some notation at the beginning of developing the dominance rules. Let π = (α, β) be an undetermined schedule, where α is a determined partial sequence and β is the undetermined part. We wonder whether there exists a better schedule π′ that outperforms π. Consequently, some dominance rules are developed to settle this question. Since these rules are similar, we provide only the first proof.
Case I: Consider that jobs i and j are the last two jobs of α and both jobs are assigned to the same developer a. Let π′ be the schedule obtained by interchanging only the last two jobs i and j in α. For simplicity, let C_{i@a}(π) = t_i, C_{j@a}(π) = t_j, C_{j@a}(π′) = t′_j, and C_{i@a}(π′) = t′_i. In the following two rules, both jobs i and j are tardy in π. However, if we interchange them, their tardiness can be alleviated a little. In Rule 1, though both jobs are still tardy in π′, their resulting tardiness is lower than in π. In Rule 2, the interchange makes job j not tardy, i.e., the resulting tardiness is reduced.
Rule 1.
If t_i > d_i, t_j > d_j, t′_j > d_j, t′_i > d_i, and t_i + t_j − t′_i − t′_j > 0, then π′ dominates π.
Proof. 
We prove this property by showing T_{i@a}(π) + T_{j@a}(π) > T_{i@a}(π′) + T_{j@a}(π′). That is,
T_{i@a}(π) + T_{j@a}(π) = (t_i − d_i) + (t_j − d_j) = (t_i + t_j) − (d_i + d_j) > (t′_i + t′_j) − (d_i + d_j)   (since t_i + t_j − t′_i − t′_j > 0)   = (t′_i − d_i) + (t′_j − d_j) = T_{i@a}(π′) + T_{j@a}(π′).
The proof is complete. □
Rule 2.
If t_i > d_i, t_j > d_j, t′_j ≤ d_j, and t_i + t_j − t′_i > d_j, then π′ dominates π.
In the following four rules, job j is tardy in π and job i is not. Rules 3 and 4 show that job j can be completed on time in π′ and the resulting tardiness can be improved. In Rules 5 and 6, job j is still tardy in π′, but the accumulated tardiness can be alleviated.
Rule 3.
If t_i ≤ d_i, t_j > d_j, t′_j ≤ d_j, and t′_i ≤ d_i, then π′ dominates π.
Rule 4.
If t_i ≤ d_i, t_j > d_j, t′_j ≤ d_j, t′_i > d_i, and t_j − t′_i > d_j − d_i, then π′ dominates π.
Rule 5.
If t_i ≤ d_i, t_j > d_j, t′_j > d_j, t′_i ≤ d_i, and t′_i − t_i > 0, then π′ dominates π.
Rule 6.
If t_i ≤ d_i, t_j > d_j, t′_j > d_j, t′_i > d_i, and t_j − t′_j − t′_i > −d_i, then π′ dominates π.
Rule 7 lets the job with the earlier due date be processed first if both jobs, i.e., i and j, are not tardy in π. That is, the objective costs of π and π′ are the same, and we can stop searching for one of them.
Rule 7.
If t_i ≤ d_i, t_j ≤ d_j, and d_j < d_i, let π′ dominate π.
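Rules 1–7 all compare the combined tardiness of the last two jobs on developer a before and after the interchange. As a compact illustration (this check is not part of the original rules, and the variable names are ours), the test below accepts π′ whenever the interchange does not increase that local tardiness:

def case1_swap_dominates(t_i, t_j, tp_i, tp_j, d_i, d_j):
    # t_i, t_j: completion times of jobs i and j in pi (i precedes j);
    # tp_i, tp_j: their completion times in pi' after the interchange
    before = max(0, t_i - d_i) + max(0, t_j - d_j)   # tardiness of i and j in pi
    after = max(0, tp_i - d_i) + max(0, tp_j - d_j)  # tardiness of i and j in pi'
    return after <= before                           # pi' dominates (or ties with) pi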
Case II: Consider that job i is the last job of α, which is assigned to developer a, and job j can be any undetermined job in β. Moreover, job i is also the last job assigned to developer a. Let π′ be the schedule obtained by moving job j from β to the position immediately after job i, i.e., assigning it to developer a. For simplicity, let C_{i@a}(π) = t_i and e_j = y ∈ {1, 2, 3}. The idea of Rule 8 is that it would be wasteful to assign very few jobs to developer a; he/she can accept an extra job from β if no tardiness occurs.
Rule 8.
If t_i + p_j · r_{ay} ≤ d_j, then π′ dominates π.
Case III: Consider that job i is the last job of α, which is assigned to developer a, and job j can be any undetermined job in β. Let e_i = x ∈ {1, 2, 3}, e_j = y ∈ {1, 2, 3}, and let π′ be the schedule obtained by interchanging job i in α and job j in β. For simplicity, let C_{i@a}(π) = t_i and C_{j@a}(π′) = t′_j. In Rule 9, we interchange job i in α and job j in β if job j is more urgent and the total tardiness will not deteriorate. In Rule 10, developer a is mediocre at processing jobs of type x but highly proficient at processing jobs of type y. On the other hand, all the remaining developers excel at dealing with jobs of type x and are mediocre at processing jobs of type y. Therefore, we interchange job i in α and job j in β. Note that the total tardiness will not become worse in this case.
Rule 9.
If e_i = e_j, p_i = p_j, d_j < d_i, and t′_j ≤ d_j, then π′ dominates π.
Rule 10.
If x ≠ y, p_i · r_{ax} > p_j · r_{ay}, d_j ≤ d_i, t_i − d_i ≥ t′_j − d_j, and max{p_i · r_{a+1,x}, p_i · r_{a+2,x}, …, p_i · r_{mx}} ≤ min{p_j · r_{a+1,y}, p_j · r_{a+2,y}, …, p_j · r_{my}}, then π′ dominates π.
The following lemma shows that each developer’s workload has a squeeze effect. That is, if there exists a developer whose workload is unreasonably heavy, then there must be another developer who has a relatively light workload. Due to space limitations, the following proofs can be found in Appendix A.
Lemma 1.
For a schedule π, if there exists a developer a whose maximum completion time is larger than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}}, then there exists another developer b whose maximum completion time is less than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m.
The following rule can help us to avoid some unnecessary searches if any developer is overloaded. If there exists a developer whose maximum completion time is unreasonably long, then we can remove a job from the overloaded developer and assign it to a half-loaded developer. That is, the previous schedule is dominated.
Rule 11.
For an optimal schedule π*, each developer’s maximum completion time is less than or equal to (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}}.
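For reference, the threshold used in Lemma 1 and Rule 11 is easy to evaluate once the worst-case actual processing time of each job is known. The sketch below computes it under our own indexing assumptions (r is given as an m × 3 table of ratios and job types are numbered 1–3):

def rule11_bound(p, e, r):
    # worst-case actual processing time of each job over all m developers
    worst = [p_j * max(row[e_j - 1] for row in r) for p_j, e_j in zip(p, e)]
    # (sum_j max_i p_j*r_{i,e_j}) / m + max_i max_j p_j*r_{i,e_j}
    return sum(worst) / len(r) + max(worst)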

4.2. Lower Bound

A lower bound is needed to avoid unnecessary searching if we are in the middle of a schedule that is dominated or outperformed by others. That is, after adding up the determined cost and the estimated cost of the remaining part, if the sum is still larger than the current minimal objective cost, we can abandon further searches for the remaining part. Consequently, the earlier we can stop useless searches, the more execution time we can save.
Arranging jobs with agreeable processing times and due dates is a useful way to obtain a lower bound for traditional tardiness minimization problems []. Here, agreeableness is a kind of job correlation. Just as precedence between two jobs means that a successor (e.g., testing) cannot start until a predecessor (e.g., programming) has finished, agreeableness between any two jobs implies that the smaller job (i.e., the one with less processing time) always has an earlier due date, i.e., another kind of job correlation. However, this technique may lead to an anomaly in our problem. Consider that two identical developers (or machines) deal with the two jobs shown in Figure 2a. To obtain a lower bound, in Figure 2b, traditional algorithms may adjust the processing times and due dates and make them agreeable, i.e., d_(i) ≤ d_(j) if and only if p_(i) ≤ p_(j). Hence, the lower bound for these jobs is 0, i.e., max{0, 4 − 6} + max{0, 6 − 8}. However, in our problem, an anomaly occurs. Let e_1 = 1, e_2 = 2, and let the processing difficulty ratios be as shown in Figure 2c. For the original jobs shown in Figure 2a, the optimal objective cost is 4, i.e., max{0, 3 × 4 − 8} + max{0, 1 × 6 − 6}. For the virtual jobs shown in Figure 2b, the estimated bound is 6, i.e., max{0, 3 × 4 − 6} + max{0, 1 × 6 − 8}. Since a valid lower bound must never exceed the minimal cost, this estimate is not a valid lower bound, and we cannot directly apply this technique here. In our problem, a job with a larger processing difficulty ratio may be more urgent than another with an earlier due date. That is, the processing times, due dates, and processing difficulty ratios should be considered as a whole in this problem.
Figure 2. An anomaly.
Since these developers differ in their abilities (i.e., various processing difficulty ratios), we aim to fabricate an equivalent substitute to replace these heterogeneous developers in the real world. Consider that there are k different developers in the real world and assume that there are k virtually identical developers whose integrated ability is just equal to the sum of all the real ones’ abilities. The following definition gives the correct magnitude of processing difficulty ratio for each virtual developer. It is interesting that the magnitude is the harmonic mean of all the real developers’ processing difficulty ratios.
Definition 1.
There are k available developers, numbered from m − k + 1 to m, in the real world. Let there be k virtually identical developers. For each job type x, the equivalent processing difficulty ratio of each virtual developer is k/(1/r_{m−k+1,x} + 1/r_{m−k+2,x} + … + 1/r_{mx}) and is denoted by r̃_x^k.
The following lemma shows that the throughput (i.e., the amount of work per unit time) of these virtual developers is the same as the sum of all the real ones’ throughputs. For more information about harmonic mean, readers can refer to [,].
Lemma 2.
For a given job type x, the sum of the last k real developers’ throughputs (i.e., Σ_{a=m−k+1}^{m} 1/r_{ax}) is equal to the sum of the k virtual developers’ throughputs (i.e., k/r̃_x^k).
Now we can merge these virtual developers into a virtual substitute. The following definition gives the correct magnitude of the processing difficulty ratio of the single substitute. Moreover, the following lemma shows that the throughput of the k virtual developers is exactly equal to that of the virtual single substitute.
Definition 2.
There are k available developers, numbered from m − k + 1 to m, in the real world. Let there be only one virtually equivalent developer, called the substitute. For each job type x, the processing difficulty ratio of the substitute is 1/(1/r_{m−k+1,x} + 1/r_{m−k+2,x} + … + 1/r_{mx}) and is denoted by r̄_x^k, i.e., r̄_x^k = r̃_x^k/k.
Lemma 3.
For a given job type x, the sum of the last k real developers’ throughputs (i.e., Σ_{a=m−k+1}^{m} 1/r_{ax}) is equal to the substitute’s throughput (i.e., 1/r̄_x^k).
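As a small illustration of Definitions 1 and 2, the two substitute ratios for one job type can be computed directly from the real developers’ ratios; the numeric values in the comment are our own example, not taken from the paper:

def substitute_ratios(ratios):
    # ratios: processing difficulty ratios of the k available developers for one job type
    k = len(ratios)
    total_throughput = sum(1.0 / r for r in ratios)  # sum of 1/r_{a,x}
    r_tilde = k / total_throughput                   # Definition 1: harmonic mean
    r_bar = 1.0 / total_throughput                   # Definition 2: r_tilde / k
    return r_tilde, r_bar

# e.g. two developers with ratios 3 and 5 for some job type:
# substitute_ratios([3, 5]) returns (3.75, 1.875)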
For each different job type, the virtual substitute still has different processing difficulty ratios. The following definition provides the upper and lower limits of the processing difficulty ratios for the substitute. The following lemma shows the boundary of the k real developers’ throughputs. This lemma guarantees that the throughput of the virtual substitute is larger than or at least equal to that of the k real developers.
Definition 3.
For the virtual substitute formed from k available real developers, let r̄_min^k = min{r̄_1^k, r̄_2^k, r̄_3^k} and r̄_max^k = max{r̄_1^k, r̄_2^k, r̄_3^k} denote his/her minimal and maximal processing difficulty ratios, respectively.
Lemma 4.
The throughput of the last k real developers is less than or equal to 1/r̄_min^k.
Algorithm 1 shows the proposed lower bound (named LB). Note that BB explores a schedule π = (α, β) in the DFS order. Since the jobs in α are determined, let job k be the last job of α, i.e., lj(α) = k, and let developer a process it, i.e., ld(α) = a. Hence, there are m − a fully available developers before C_k(π). By Lemma 4, we can regard these m − a developers as a virtual developer with throughput 1/r̄_min^{m−a}. Similarly, after C_k(π), when developer a also becomes free, the m − a + 1 available developers can be viewed as another virtual developer with throughput 1/r̄_min^{m−a+1}. In Step 1, we determine which job is the last job in α and which developer completes it. In Steps 2–3, the jobs in β are transformed into n − l new jobs whose processing times and due dates are agreeable, i.e., p_(i) ≤ p_(j) if and only if d_(i) ≤ d_(j). This modification ensures that such a lower bound will not be larger than the actual optimal cost [,]. Then we allocate these jobs to the two virtual developers, one small increment of work at a time, in Steps 5–14. We preemptively allocate the transformed jobs, starting from time 0. If LB proceeds before time C_k(π), we allocate the workload to the first virtual developer (Steps 9–10); otherwise, we let the second virtual developer process the remaining part (Steps 12–13). In Step 15, the estimated tardiness, if any, is accumulated. Finally, the estimated lower bound is returned.
Algorithm 1. The proposed lower bound (LB(π, l)).
INPUT
   π = (α, β), where α is a determined partial sequence and β is the undetermined part
   l is the number of jobs in α
OUTPUT
   cost_lb is the lower bound for the current schedule π = (α, β)
(1) Set a = ld(α) and k = lj(α);
(2) Sort the processing times of the jobs in β in ascending order, i.e., p_(j) for j = 1 to n − l;
(3) Sort the due dates of the jobs in β in ascending order, i.e., d_(j) for j = 1 to n − l;
(4) Set cost_lb = f(α) and currentTime = 0;  //start to allocate the workload of β
(5) For j = 1 to n − l do Steps 6–15;
(6)    Set currentAmount = p_(j);   //the amount of work of job (j)
(7)    Repeat Steps 8–14 until currentAmount = 0;
(8)      If currentTime < C_k(π) then do Steps 9–10;
(9)        Set Δ = min{currentAmount, 1/r̄_min^{m−a}};
(10)       Set currentTime = currentTime + Δ × r̄_min^{m−a};
(11)     Else do Steps 12–13;
(12)       Set Δ = min{currentAmount, 1/r̄_min^{m−a+1}};
(13)       Set currentTime = currentTime + Δ × r̄_min^{m−a+1};
(14)     Set currentAmount = currentAmount − Δ;
(15)   Set cost_lb = cost_lb + max{0, currentTime − d_(j)};
(16) Output cost_lb.
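A direct Python transcription of Algorithm 1 is sketched below. It assumes the prefix cost f(α), the completion time C_k(π), and the two substitute ratios r̄_min^{m−a} and r̄_min^{m−a+1} have already been computed; the parameter names are ours, not the paper’s.

def lower_bound(cost_alpha, completion_last, remaining_p, remaining_d,
                r_bar_min_before, r_bar_min_after):
    # cost_alpha = f(alpha); completion_last = C_k(pi);
    # remaining_p / remaining_d: processing times and due dates of the jobs in beta;
    # r_bar_min_before / r_bar_min_after: minimal substitute ratios of the
    # m-a and m-a+1 available developers, respectively
    p_sorted = sorted(remaining_p)            # Step 2: agreeable processing times
    d_sorted = sorted(remaining_d)            # Step 3: agreeable due dates
    cost_lb, current_time = cost_alpha, 0.0   # Step 4
    for p_j, d_j in zip(p_sorted, d_sorted):  # Steps 5-15
        amount = p_j                          # Step 6
        while amount > 0:                     # Steps 7-14
            if current_time < completion_last:   # Step 8: first virtual developer
                rate = r_bar_min_before
            else:                                # Steps 11-13: second virtual developer
                rate = r_bar_min_after
            delta = min(amount, 1.0 / rate)      # Steps 9/12: at most one time unit of work
            current_time += delta * rate         # Steps 10/13
            amount -= delta                      # Step 14
        cost_lb += max(0.0, current_time - d_j)  # Step 15
    return cost_lb                               # Step 16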
Theorem 1 shows the correctness of the proposed lower bound. By Theorem 1, the objective cost of the substitute (i.e., the lower bound) will not be larger than the actual optimal cost in the real world.
Theorem 1.
Let f(π*) be the optimal objective cost and LB(π) be the lower bound. Then f(π*) ≥ LB(π), where π* denotes the optimal schedule and π denotes a schedule.

4.3. Branch-and-Bound Algorithm

Given the above dominance rules and lower bound, a branch-and-bound algorithm (named BB) is developed and shown in Algorithm 2. The exact algorithm recursively explores the solution space in a DFS manner. Every time we enter the recursive algorithm, we check whether the current partial sequence α of length l is dominated or the current lower bound is larger than the up-to-the-minute lowest cost (Step 1). If not, BB recursively calls itself (Steps 5–8). Since there are still n − l + 1 undetermined jobs in β, we make n − l + 1 new subsequences, each starting with a different leading job. Then, we repeatedly replace the original β and obtain n − l + 1 new schedules (Steps 5–7). In the end, BB is recursively called n − l + 1 times (Step 8). Note that both cost* and π* are global variables. When the recursive algorithm ends, the globally minimal cost and the optimal schedule are stored in them.
Algorithm 2. The proposed branch-and-bound algorithm (BB(π, l)).
INPUT
   π = (α, β), where α is a determined partial sequence and β is the undetermined part
   l is the number of jobs in α
OUTPUT
   cost* is the minimal cost //a global variable
   π* is the optimal schedule //a global variable
(1)  If (π is not dominated and LB(π, l) ≤ cost*) then do Steps 2–8;
(2)     If l = n then do Step 3;
(3)       If f(π) < cost* then set cost* = f(π) and π* = π;
(4)     Else do Steps 5–8;
(5)       For j = l + 1 to n do Steps 6–8;
(6)         Set π0 = π;
(7)         Swap the (l+1)st and jth jobs of π0;
(8)         Call BB(π0, l+1) recursively.
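The recursion of Algorithm 2 can be sketched in Python as follows. The dominance test, the lower bound (Algorithm 1), and the objective f are passed in as functions because their details depend on the data structures used; the default stubs reduce the sketch to exhaustive search and are our own placeholders, not part of BB.

def bb(pi, l, state, total_cost,
       lower_bound=lambda pi, l: 0,
       is_dominated=lambda pi, l: False):
    # pi is a list (jobs plus the 0 separators) whose first l entries form alpha;
    # state holds the incumbent, e.g. {"cost_star": float("inf"), "pi_star": None}
    if is_dominated(pi, l) or lower_bound(pi, l) > state["cost_star"]:
        return                                   # Step 1: prune this branch
    if l == len(pi):                             # Step 2: alpha is a complete schedule
        cost = total_cost(pi)
        if cost < state["cost_star"]:            # Step 3: update the incumbent
            state["cost_star"], state["pi_star"] = cost, list(pi)
        return
    for j in range(l, len(pi)):                  # Steps 5-8: branch
        pi0 = list(pi)                           # Step 6
        pi0[l], pi0[j] = pi0[j], pi0[l]          # Step 7 (0-indexed positions)
        bb(pi0, l + 1, state, total_cost, lower_bound, is_dominated)  # Step 8

For instance, calling bb(list(initial_sequence), 0, {"cost_star": float("inf"), "pi_star": None}, f) with the total-tardiness function of Section 3 as f enumerates the permutations of the jobs and separators and keeps the best schedule found.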
So far, we have proposed an exact algorithm named BB for locating the optimal solutions. With the aid of the dominance rules and lower bound, BB does not need to search for the entire solution space. Some dominated branches can be omitted, and the execution time is thereby reduced.

5. Experimental Results

In this section, we will observe the performance of the proposed branch-and-bound algorithm for n ≤ 18 and the efficiency of the proposed lower bound for n ≤ 12. Moreover, sensitivity tests are performed to show the influence of each control parameter. All the proposed algorithms are implemented in Pascal and executed on an Intel Core i7 @ 3.40 GHz with 8 GB RAM in a Windows 7 SP1 environment. For each setting, 50 random trials are conducted, and their execution times are measured in seconds. Finally, experimental results are discussed and compared.

5.1. Computational Results

We conduct experiments to observe the performance of BB and LB, and we show how the parameters (e.g., n) affect the objective costs of this problem. Table 3 lists all the parameters used in this section. Parameters m, n, p_j, and d_j have already been defined in Section 3. To model different job types, we let n_x be the number of jobs of type x for x = 1, 2, 3, where n_1 + n_2 + n_3 = n. To realize different processing difficulties, we let there be three kinds of developers in the following experiments. The first kind are average developers, i.e., r_{ax} ∈ {4, 5, 6, 7}. The second kind are uni-specialty experts who excel in only one arbitrary job type, i.e., r_{ax} ∈ {1, 2, 3}; for the other two job types, their processing difficulty ratios are in {4, 5, …, 10}. The third kind are bi-specialty experts who are highly proficient at two arbitrary job types, with processing difficulty ratios less than or equal to 3; for the remaining job type, their processing difficulty ratios are in {4, 5, …, 10}. Now we let m_i be the number of developers of the ith kind for i = 1, 2, 3, where m_1 + m_2 + m_3 = m. Moreover, we let T be the total default processing time and use τ and R to control p_j and d_j such that they follow two discrete uniform distributions, i.e., p_j ~ DU(1, 100) and d_j ~ DU(T(1 − τ − R/2)/m, T(1 − τ + R/2)/m).
Table 3. The parameters used in the experiments.
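For reproducibility, the following sketch generates a random instance in the spirit of the description above; the default parameter values are our own assumptions, since the default settings of Table 3 are not repeated here.

import random

def generate_instance(n_per_type=(5, 5, 5), m_per_kind=(1, 1, 1), tau=0.5, R=0.5):
    # n_per_type = (n1, n2, n3); m_per_kind = (m1, m2, m3)
    e = [x + 1 for x, nx in enumerate(n_per_type) for _ in range(nx)]   # job types
    p = [random.randint(1, 100) for _ in e]                 # p_j ~ DU(1, 100)
    m = sum(m_per_kind)
    T = sum(p)                                              # total default processing time
    lo = round(T * (1 - tau - R / 2) / m)
    hi = round(T * (1 - tau + R / 2) / m)
    d = [random.randint(lo, hi) for _ in e]                 # d_j ~ DU(lo, hi)
    r = []                                                  # m rows of 3 difficulty ratios
    for kind, count in enumerate(m_per_kind, start=1):
        for _ in range(count):
            if kind == 1:                                   # average developer
                row = [random.randint(4, 7) for _ in range(3)]
            else:                                           # uni-/bi-specialty expert
                expert_types = random.sample(range(3), kind - 1)
                row = [random.randint(1, 3) if x in expert_types
                       else random.randint(4, 10) for x in range(3)]
            r.append(row)
    return p, d, e, r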
For clarity, the experiments are divided into three parts. In the first part, we observe the performance of BB. Table 4 shows the performance of BB when the problem size is small, i.e., n = 12. Note that all the other parameters are set to their default values. Clearly, the execution time increases if we add an extra developer, no matter what kind of developer he/she is. It implies that m also affects the execution time. On the other hand, τ influences the execution time more greatly than R. When all the jobs have earlier due dates, i.e., a large τ , each urgent job competes for limited resources, i.e., the m developers, more intensively. Consequently, BB will consume more execution time.
Table 4. The effects of different developers on the performance of BB for n = 12.
Table 5 shows the performance of BB when we have a fixed number of developers, i.e., m = 3. For fixed numbers of developers and jobs, the job-type mix does not affect BB’s performance. Even if there is only one job type and one developer type, BB spends the same execution time to solve the problem. Unless all m developers degenerate into the same developer type with the same processing difficulties, or the n jobs degenerate into identical jobs with the same processing time and due date, the problem does not become easier.
Table 5. The effects of different job types on the performance of BB for n = 12 and m = 3.
Table 6 shows the performance of BB when the problem size is medium, i.e., n = 15. At the beginning, we let the setting m 1 = m 2 = m 3 = 1 be a benchmark for later observations. The column of NA means the number of problem instances unsolved within a hundred million nodes. Again, the more developers we have, the more difficult the problem becomes. However, for all the settings with m 1 + m 2 + m 3 = 4 , the results reveal that the problem instances having later due dates ( τ = 0.25 ) are easier to solve. Most of them can be solved within a hundred million nodes.
Table 6. The effects of different developers on the performance of BB for n = 15.
Table 7 shows the performance of BB when the problem size is large, i.e., n = 18. Again, for all the settings with m_1 + m_2 + m_3 = 4, the problem instances having later due dates are still easier to solve than the others. Compared with similar total tardiness minimization problems on identical machines, e.g., [], the proposed branch-and-bound algorithm performs well for versatile developers. In [], the maximum problem size that a branch-and-bound algorithm can solve is n = 20. Note that their machines are identical and unifunctional, processing the same kind of jobs. As discussed earlier, a permutation problem with m identical machines and n jobs is much easier than ours, because its solution space is only 1/(m!) of that of an m-heterogeneous-machine scheduling problem. On the other hand, BB is also compared with a metaheuristic algorithm, i.e., GA []. The relative error percentage (REP) is defined as (f_GA − f_BB)/f_BB × 100%, where f denotes an objective cost. In some situations, although GA takes only 0.02 s, it usually converges prematurely at local minima, and its objective costs might be 512 times larger than the optimal ones. This implies that an approximate algorithm cannot ensure solution quality even for n = 18. In light of the above comparisons, n = 18 is a proper problem size for observing the performance of a branch-and-bound algorithm for solving such a total tardiness minimization problem with versatile developers.
Table 7. The effects of different developers on the performance of BB for n = 18.
In the second part, we analyze the efficiency of the proposed lower bound. To show the performance of LB, we add an extra branch-and-bound algorithm without the aid of LB and compare it with the original BB. Table 8 shows the performances of two branch-and-bound algorithms for n = 12. In general, the original BB only takes 13.86% of the execution time of the modified BB. It is clear that the proposed LB based on the harmonic mean can effectively prune unnecessary nodes and reduce execution time.
Table 8. The performance of the proposed lower bound for n = 12.
In the third part, three control parameters are adjusted to observe their influences on objective cost and execution time. A sensitivity test of p j and d j is shown in Figure 3. In this experiment, we set m 1 = m 2 = m 3 = 1 and n 1 = n 2 = n 3 = 5 to simulate an average case. Other parameters are set to their default values. Intuitively, objective cost decreases if each job’s processing time is reduced. For example, a 15% decrease in each job’s processing time can achieve a 50% decrease in objective cost, where −50% means cost reduction. On the other hand, objective cost decreases if we can postpone each job’s due date. For example, a 15% increase in each job’s due date leads to a 35% decrease in objective cost. In the real world, it is not easy to compress the processing time of each job. However, we can negotiate with our customers to postpone a job’s due date. It is worthwhile to postpone the due date by 15% and achieve a 35% cost reduction.
Figure 3. The influence of p j and d j on objective cost.
Figure 4 shows how p j and d j affect execution time. If we advance each job’s due date (e.g., −15%), we are going to be working to tight schedules, and BB requires more execution time (e.g., 35.4%) to obtain the optimal solutions. Or if each job has a larger due date (e.g., 15%), BB requires less execution time (e.g., −36.74%). This is because most jobs can be completed within their due dates. That is, BB can easily achieve zero or little tardiness, and less computing is needed. On the other hand, if the processing time of each job is lengthened, it means that the durations of jobs are very likely to overlap with each other and BB needs more execution time to schedule them. In general, the default processing time of a job is determined and fixed; however, its due date may be negotiable. It implies that bargaining for a later due date can simultaneously benefit the objective cost and the execution time.
Figure 4. The influences of p j and d j on run time.
In Table 9, we perform another sensitivity test on a limited resource, i.e., developers. Let m_1 = m_2 = m_3 = 1 be a benchmark setting. Though an additional developer can be regarded as a valuable resource, it increases the execution time considerably. From the viewpoint of run time, the number of developers (m) is also a kind of problem size and directly affects the performance of BB adversely. However, from the viewpoint of objective cost, a bi-specialty developer can perform more kinds of jobs than a uni-specialty developer and reduce tardiness more. Clearly, such a versatile developer cannot be replaced by a traditional unifunctional machine. That is, these versatile developers make this model closer to the real world. Such findings distinguish our scheduling problem from traditional scheduling problems.
Table 9. The sensitivity test of BB for n = 15.

5.2. Discussion

For traditional industries, a welder does not in general perform a spray job. Today, arranging for a single worker to perform different kinds of jobs has become fairly common among modern industries, e.g., games or movies. Developing a multimedia game heavily involves job scheduling, personnel management, time control, and cost reduction. Therefore, we present an interesting scheduling problem to deal with human resource management in the game industry. For example, tardiness is an important issue mainly caused by human factors. In general, for parallel machine scheduling, an acceptable problem size is about 25, e.g., [,]. Since machines are identical, no permutation of machines is needed. For versatile developers, such an optimization problem will become more difficult, and the problem size that can be optimally solved will be smaller. This is because we must take all the permutations of developers into account.
This study can be distinguished by the following five features. First, unifunctional machines are replaced by cross-domain developers, and this change makes the model more realistic. Second, such scheduling algorithms are cost-effective. Compared with enhancing computer hardware, job scheduling is a less expensive way to control budgets. Third, we propose a lower bound based on the harmonic mean that prevents the anomaly from happening. Fourth, for some total tardiness minimization problems over heterogeneous machines, e.g., [,], the maximal problem sizes solvable by branch-and-bound algorithms are about 25. Note that such problems are easier, i.e., each machine always processes jobs at a fixed speed. On the other hand, the experiments show that the proposed branch-and-bound algorithm can optimally solve this problem for n different kinds of jobs and m heterogeneous developers, i.e., n = 18 and m = 4. In this study, we need to consider each developer’s processing difficulties for different kinds of jobs. This implies that the presented problem is more difficult, and hence problem size 18 is a considerable achievement. Fifth, the optimal solutions obtained by the proposed algorithm can be used as fair benchmarks for evaluating other metaheuristic algorithms. Moreover, the algorithm can be applied to other industries that have similar human resource management needs.

6. Conclusions

Today, a modern game is completed by multiple versatile developers, and its tardiness should be reduced as much as possible. Clearly, unifunctional machine scheduling is not suitable for this problem since developers can process jobs of different types. On the other hand, unrelated machine scheduling considers m × n processing speeds, which is too complicated a model for the presented problem. Consequently, we present an efficient branch-and-bound algorithm to optimally solve this problem.
In this study, to develop a branch-and-bound algorithm, we first analyze the properties of the problem and establish some mathematical theories for the branch-and-bound algorithm. Two main contributions are made in this study. First, this exact algorithm achieves optimality by coordinating each developer’s multiple abilities. Second, a lower bound based on a harmonic mean is developed to avoid the anomaly. The experiments show that the proposed algorithm performs well for 18 jobs and 4 developers. That is, it can be employed as benchmarks for evaluating metaheuristic algorithms when problem sizes are less than or equal to 18.
The proposed exact algorithm is relatively efficient, but it still has limitations. Some future research directions are suggested as follows.
  • A lower bound based on non-preemptive techniques is worth exploring in greater detail. This is because preemption might lead to underestimation of a lower bound.
  • A high-quality metaheuristic algorithm is still needed. For a real-world instance, BB might take several hours to generate the optimal schedules. In the near future, we can develop some approximate algorithms to solve large problem instances near optimally, e.g., n = 100.
  • Hybridization might improve efficiency. If a high-quality metaheuristic algorithm is developed, an exact algorithm can start searching from a near-optimal solution obtained by the metaheuristic algorithm. That would be helpful to improve execution speed. With such a hybrid exact algorithm, we can evaluate other approximate algorithms objectively and precisely.

Author Contributions

C.-H.S.—Conceptualization, Resources, Data Curation; J.-Y.W.—Funding Acquisition, Methodology, Software, Validation, Formal Analysis, Investigation, Visualization, Supervision, Project Administration, Writing—original draft preparation, review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the Ministry of Science and Technology of Taiwan, R.O.C. under Projects MOST-109-2410-H-241-002 and MOST-110-2410-H-241-001.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The authors thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1. For a schedule π, if there exists a developer a whose maximum completion time is larger than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}}, then there exists another developer b whose maximum completion time is less than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m.
Proof. 
We prove this property by contradiction. Suppose the remaining m − 1 developers have maximum completion times that are all larger than or equal to (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m. Then, we sum up the maximum completion times of all m developers. That is,
Σ_{i=1}^{m} max_j{C_{j@i}(π)} > (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + (m − 1)(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m.
On the other hand, consider the worst situation, in which each job j is assigned to its worst-matched developer; i.e., each job j consumes the maximum processing time max_{i=1,…,m}{p_j · r_{i,e_j}}. Consequently, the sum of the maximum completion times of all m developers in schedule π is less than or equal to the total worst-case processing time of all n jobs. That is,
Σ_{i=1}^{m} max_j{C_{j@i}(π)} ≤ Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}}.
Now we have
(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + (m − 1)(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m < Σ_{i=1}^{m} max_j{C_{j@i}(π)} ≤ Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}}.
That is,
(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + (m − 1)(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m < Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}}.
It implies that
Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}} + m · max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + (m − 1)(Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}}) < m · Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}}, i.e., m · max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} < 0.
It is a contradiction. The proof is complete. □
Rule A1. For an optimal schedule π*, each developer’s maximum completion time is less than or equal to (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}}.
Proof. 
We prove it by contradiction. Let developer a be a developer in an optimal schedule π* whose maximum completion time C_{j′@a}(π*) is larger than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}}, where job j′ is the last job assigned to developer a in this optimal schedule π*. By Lemma 1, there exists another developer b whose maximum completion time is less than (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m. Now we check whether the gap between the maximum completion times of developers a and b can accommodate job j′ and yield an earlier completion time (i.e., less tardiness). Let C_{j′@a}(π*) = (Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + ε, where ε > 0. Then, we have ((Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + ε) − ((Σ_{j=1}^{n} max_{i=1,…,m}{p_j · r_{i,e_j}})/m + p_{j′} · r_{b,e_{j′}}) = max_{i=1,…,m}{max_{j=1,…,n}{p_j · r_{i,e_j}}} + ε − p_{j′} · r_{b,e_{j′}} ≥ ε > 0. That is, we can move the last job j′ from developer a to developer b and achieve an earlier completion time, i.e., less tardiness for job j′. This contradicts the assumption that π* is an optimal schedule. The proof is complete. □
Lemma A2. For a given job type x, the sum of the last k real developers’ throughputs (i.e., Σ_{a=m−k+1}^{m} 1/r_{ax}) is equal to the sum of the k virtual developers’ throughputs (i.e., k/r̃_x^k).
Proof. 
Since the processing difficulty ratio of developer m − k + 1 is r_{m−k+1,x}, he/she takes r_{m−k+1,x} days to process a unit job (e.g., p_j = 1) of type x. That is, for job type x, his/her daily amount of work is 1/r_{m−k+1,x}. Similarly, for each developer a, his/her daily amount of work is 1/r_{ax} for a = m − k + 2, m − k + 3, …, m. Then, for the k real developers, their daily amount of work is Σ_{a=m−k+1}^{m} 1/r_{ax}. On the other hand, since the processing difficulty ratio of each virtual developer is r̃_x^k, his/her daily amount of work is 1/r̃_x^k. Thus, their total daily amount of work is k/r̃_x^k. We prove the property by showing k/r̃_x^k = Σ_{a=m−k+1}^{m} 1/r_{ax}. We have
k/r̃_x^k = k · (k/(1/r_{m−k+1,x} + 1/r_{m−k+2,x} + … + 1/r_{mx}))^{−1} = 1/r_{m−k+1,x} + 1/r_{m−k+2,x} + … + 1/r_{mx} = Σ_{a=m−k+1}^{m} 1/r_{ax}.
The proof is complete. □
Lemma A3. For a given job type x, the sum of the last k real developers’ throughputs (i.e., Σ_{a=m−k+1}^{m} 1/r_{ax}) is equal to the substitute’s throughput (i.e., 1/r̄_x^k).
Proof. 
Since the processing difficulty ratio of the substitute is r̄_x^k, his/her daily amount of work is 1/r̄_x^k. Then, we have
1/r̄_x^k = 1/(r̃_x^k/k) = k/r̃_x^k.
By Lemma 2, k/r̃_x^k = Σ_{a=m−k+1}^{m} 1/r_{ax}. Hence, 1/r̄_x^k = Σ_{a=m−k+1}^{m} 1/r_{ax} holds. The proof is complete. □
Lemma A4. The throughput of the last k real developers is less than or equal to 1/r̄_min^k.
Proof. 
For jobs of the same type, by Lemma 3, the throughputs of the k real developers and the substitute are the same. That is, if all the remaining jobs belong to job type 1, the throughput of the substitute is 1/r̄_1^k. Similarly, the throughput of the substitute is 1/r̄_2^k if all the jobs belong to job type 2, and 1/r̄_3^k if all the jobs belong to job type 3. Clearly, the maximal throughput is 1/r̄_min^k and the minimal throughput is 1/r̄_max^k. In the real world, however, it is rare that all the jobs belong to the same job type. Consequently, the throughput of the substitute lies in [1/r̄_max^k, 1/r̄_min^k] when the jobs are of different types. Namely, the throughput of the last k developers in the real world can be at most 1/r̄_min^k. The proof is complete. □
Theorem A1. Let f(π*) be the optimal objective cost and LB(π) be the lower bound. Then f(π*) ≥ LB(π), where π* denotes the optimal schedule and π denotes a schedule.
Proof. 
We prove it by contradiction and suppose f(π*) < LB(π). Let the average processing difficulty ratio for the optimal schedule π* be r*, where r̄_min^m ≤ r* ≤ r̄_max^m. Since the optimal objective cost is assumed to be lower than the proposed lower bound, the actual optimal throughput must be larger than the throughput of the substitute. That is, we have 1/r* > 1/r̄_min^m. On the other hand, note that r̄_min^m = min{r̄_1^m, r̄_2^m, r̄_3^m}. Then, by Lemma 4, we have
1/r̄_min^m = 1/min{r̄_1^m, r̄_2^m, r̄_3^m} ≥ 1/r*   (since r̄_min^m ≤ r* ≤ r̄_max^m).
This contradicts 1/r* > 1/r̄_min^m. The proof is complete. □

References

  1. Wikipedia. List of Most Expensive Video Games to Develop. Wikipedia. 2022. Available online: https://en.wikipedia.org/wiki/List_of_most_expensive_video_games_to_develop (accessed on 1 March 2022).
  2. Fritz, B. Video game borrows page from Hollywood playbook. Los Angeles Times, 18 November 2009.
  3. Androvich, M. GTA IV: Most expensive game ever developed? Games Industry International, 30 April 2008.
  4. Fritz, B.; Pham, A. Star Wars: The old republic—The story behind a galactic gamble. Los Angeles Times, 20 January 2012.
  5. Ultimatepopculture. List of Highest-Grossing Video Game Franchises. Ultimate Pop Culture Wiki. 2022. Available online: https://ultimatepopculture.fandom.com/wiki/List_of_highest-grossing_video_game_franchises#cite_note-6 (accessed on 1 March 2022).
  6. Vogel, H.L. Entertainment Industry Economics: A Guide for Financial Analysis; Cambridge University Press: Cambridge, UK, 2014.
  7. Wikipedia. Video Game Development. Wikipedia. 2022. Available online: https://en.wikipedia.org/wiki/Video_game_development (accessed on 1 March 2022).
  8. Moore, M.E.; Novak, J. Game Development Essentials: Game Industry Career Guide; Cengage Learning: Boston, MA, USA, 2010.
  9. Wang, J.Y. A branch-and-bound algorithm for minimizing the total tardiness of a three-agent scheduling problem considering the overlap effect and environment protection. IEEE Access 2019, 7, 5106–5123.
  10. Zhao, Y.P.; Xu, X.Y.; Xu, E.D.; Niu, B. Stochastic customer order scheduling on heterogeneous parallel machines with resource allocation consideration. Comput. Ind. Eng. 2021, 160, 107539.
  11. Wang, J.Y. Minimizing the total weighted tardiness of overlapping jobs on parallel machines with a learning effect. J. Oper. Res. Soc. 2020, 71, 910–927.
  12. Bianchessi, N.; Tresoldi, E. A stand-alone branch-and-price algorithm for identical parallel machine scheduling with conflicts. Comput. Oper. Res. 2021, 136, 105464.
  13. Wang, X.M.; Li, Z.T.; Chen, Q.X.; Mao, N. Meta-heuristics for unrelated parallel machines scheduling with random rework to minimize expected total weighted tardiness. Comput. Ind. Eng. 2020, 145, 106505.
  14. Diana, R.O.M.; de Souza, S.R. Analysis of variable neighborhood descent as a local search operator for total weighted tardiness problem on unrelated parallel machines. Comput. Oper. Res. 2020, 117, 104886.
  15. Liu, L.L.; Ng, C.T.; Cheng, T.C.E. Scheduling jobs with agreeable processing times and due dates on a single batch processing machine. Theor. Comput. Sci. 2007, 374, 159–169.
  16. Ahmed, S. Naughty Dog Discusses Being Acquired by Sony. GameSpot. 2006. Available online: https://www.gamespot.com/articles/naughty-dog-discusses-being-acquired-by-sony/1100-2677654/ (accessed on 1 March 2022).
  17. Srinivasan, A.; Venkatraman, N. Indirect network effects and platform dominance in the video game industry: A network perspective. IEEE Trans. Eng. Manag. 2010, 57, 661–673.
  18. Gretz, R.T. Hardware quality vs. network size in the home video game industry. J. Econ. Behav. Organ. 2010, 76, 168–183.
  19. Anderson, E.G.; Parker, G.G.; Tan, B. Platform performance investment in the presence of network externalities. Inf. Syst. Res. 2014, 25, 152–172.
  20. Duffy, J. The Game Industry Salary Survey 2007. GameCareerGuide. 2007. Available online: https://www.gamecareerguide.com/features/416/the_game_industry_salary_survey_.php (accessed on 1 March 2022).
  21. Kolakowski, N. Game Developer Salary: Maximum, Minimum, and Career Downsides. Dice. 2018. Available online: https://insights.dice.com/2018/05/11/game-developer-salary-maximum-minimum-career-downsides/ (accessed on 1 March 2022).
  22. Quora. Why Have Video Game Budgets Skyrocketed in Recent Years? Quora. 2016. Available online: https://www.forbes.com/sites/quora/2016/10/31/why-have-video-game-budgets-skyrocketed-in-recent-years/#6db6aaf93ea5 (accessed on 1 March 2022).
  23. East, R. So, You Want to Be a Game Developer? Medium. 2019. Available online: https://medium.com/swlh/so-you-want-to-be-a-game-developer-e3b7f9f4ac70 (accessed on 1 March 2022).
  24. PMI. A Guide to the Project Management Body of Knowledge, 6th ed.; Project Management Institute, Inc.: Newtown Square, PA, USA, 2017.
  25. Smith, C. The Most Expensive Video Games Ever Created. KnowTechie. 2018. Available online: https://knowtechie.com/the-most-expensive-video-games-ever-created/ (accessed on 1 March 2022).
  26. Bethke, E. Game Development and Production; Wordware Publishing, Inc.: Plano, TX, USA, 2003.
  27. Irwin, M.J. Indie game developers rise up. Forbes, 20 November 2008.
  28. Bailey, E.; Miyata, K. Improving video game project scope decisions with data: An analysis of achievements and game completion rates. Entertain. Comput. 2019, 31, 100299.
  29. Ahmad, N.B.; Barakji, S.A.R.; Abou Shahada, T.M.; Anabtawi, Z.A. How to launch a successful video game: A framework. Entertain. Comput. 2017, 23, 1–11.
  30. Harada, N. Video game demand in Japan: A household data analysis. Appl. Econ. 2007, 39, 1705–1710.
  31. Hodge, V.J.; Sephton, N.; Devlin, S.; Cowling, P.I.; Goumagias, N.; Shao, J.H.; Purvis, K.; Cabras, I.; Fernandes, K.J.; Li, F. How the business model of customizable card games influences player engagement. IEEE Trans. Games 2019, 11, 374–385.
  32. Lin, F.P.C.; Phoa, F.K.H. Runtime estimation and scheduling on parallel processing supercomputers via instance-based learning and swarm intelligence. Int. J. Mach. Learn. Comput. 2019, 9, 592–598.
  33. Liang, T.K.; Zeng, B.; Liu, J.Q.; Ye, L.F.; Zou, C.F. An unsupervised user behavior prediction algorithm based on machine learning and neural network for smart home. IEEE Access 2018, 6, 49237–49247.
  34. Kozlowski, E.; Mazurkiewicz, D.; Zabinski, T.; Prucnal, S.; Sep, J. Machining sensor data management for operation-level predictive model. Expert Syst. Appl. 2020, 159, 113600.
  35. Mensendiek, A.; Gupta, J.N.D.; Herrmann, J. Scheduling identical parallel machines with fixed delivery dates to minimize total tardiness. Eur. J. Oper. Res. 2015, 243, 514–522.
  36. Arik, O.A.; Schutten, M.; Topan, E. Weighted earliness/tardiness parallel machine scheduling problem with a common due date. Expert Syst. Appl. 2022, 187, 115916.
  37. Cheng, C.Y.; Huang, L.W. Minimizing total earliness and tardiness through unrelated parallel machine scheduling using distributed release time control. J. Manuf. Syst. 2017, 42, 1–10.
  38. Schaller, J.; Valente, J. Branch-and-bound algorithms for minimizing total earliness and tardiness in a two-machine permutation flow shop with unforced idle allowed. Comput. Oper. Res. 2019, 109, 1–11.
  39. Yu, F.; Wen, P.H.; Yi, S.P. A multi-agent scheduling problem for two identical parallel machines to minimize total tardiness time and makespan. Adv. Mech. Eng. 2018, 10, 1687814018756103.
  40. Lee, C.H. A dispatching rule and a random iterated greedy metaheuristic for identical parallel machine scheduling to minimize total tardiness. Int. J. Prod. Res. 2018, 56, 2292–2308.
  41. Hulett, M.; Damodaran, P.; Amouie, M. Scheduling non-identical parallel batch processing machines to minimize total weighted tardiness using particle swarm optimization. Comput. Ind. Eng. 2017, 113, 425–436.
  42. Thenarasu, M.; Rameshkumar, K.; Rousseau, J.; Anbuudayasankar, S.P. Development and analysis of priority decision rules using MCDM approach for a flexible job shop scheduling: A simulation study. Simul. Model. Pract. Theory 2022, 114, 102416.
  43. Wang, J.Y. Algorithms for minimizing resource consumption over multiple machines with a common due window. IEEE Access 2019, 7, 172136–172151.
  44. Wang, S.J.; Liu, M. A branch and bound algorithm for single-machine production scheduling integrated with preventive maintenance planning. Int. J. Prod. Res. 2013, 51, 847–868.
  45. Khoudi, A.; Berrichi, A. Minimize total tardiness and machine unavailability on single machine scheduling problem: Bi-objective branch and bound algorithm. Oper. Res. 2020, 20, 1763–1789.
  46. Yin, Y.Q.; Wu, W.H.; Wu, W.H.; Wu, C.C. A branch-and-bound algorithm for a single machine sequencing to minimize the total tardiness with arbitrary release dates and position-dependent learning effects. Inf. Sci. 2014, 256, 91–108.
  47. Tanaka, S.; Araki, M. A branch-and-bound algorithm with Lagrangian relaxation to minimize total tardiness on identical parallel machines. Int. J. Prod. Econ. 2008, 113, 446–458. [Google Scholar] [CrossRef]
  48. Shim, S.O.; Kim, Y.D. Minimizing total tardiness in an unrelated parallel-machine scheduling problem. J. Oper. Res. Soc. 2007, 58, 346–354. [Google Scholar] [CrossRef]
  49. Costa, L.H.M.; Prata, B.D.; Ramos, H.M.; de Castro, M.A.H. A branch-and-bound algorithm for optimal pump scheduling in water distribution networks. Water Resour. Manag. 2016, 30, 1037–1052. [Google Scholar] [CrossRef]
  50. Lee, J.Y.; Kim, Y.D. A branch and bound algorithm to minimize total tardiness of jobs in a two identical-parallel-machine scheduling problem with a machine availability constraint. J. Oper. Res. Soc. 2015, 66, 1542–1554. [Google Scholar] [CrossRef]
  51. Motair, H.M. Exact and hybrid metaheuristic algorithms to solve bi-objective permutation flow shop scheduling problem. J. Phys. Conf. Ser. 2021, 1818, 012042. [Google Scholar] [CrossRef]
  52. Bajestani, M.A.; Moghaddam, R.T. A new branch-and-bound algorithm for the unrelated parallel machine scheduling problem with sequence-dependent setup times. IFAC Proc. Vol. 2009, 42, 792–797. [Google Scholar] [CrossRef]
  53. Yao, S.Q.; Jiang, Z.B.; Li, N. A branch and bound algorithm for minimizing total completion time on a single batch machine with incompatible job families and dynamic arrivals. Comput. Oper. Res. 2012, 39, 939–951. [Google Scholar] [CrossRef]
  54. Nessah, R.; Kacem, I. Branch-and-bound method for minimizing the weighted completion time scheduling problem on a single machine with release dates. Comput. Oper. Res. 2012, 39, 471–478. [Google Scholar] [CrossRef]
  55. Kacem, I.; Chu, C.B. Efficient branch-and-bound algorithm for minimizing the weighted sum of completion times on a single machine with one availability constraint. Int. J. Prod. Econ. 2008, 112, 138–150. [Google Scholar] [CrossRef]
  56. Danaci, T.; Toksari, D. A branch-and-bound algorithm for two-competing-agent single-machine scheduling problem with jobs under simultaneous effects of learning and deterioration to minimize total weighted completion time with no-tardy jobs. Int. J. Ind. Eng. 2021, 28, 577–593. [Google Scholar]
  57. Lee, W.C.; Wang, J.Y.; Lin, M.C. A branch-and-bound algorithm for minimizing the total weighted completion time on parallel identical machines with two competing agents. Knowl.-Based Syst. 2016, 105, 68–82. [Google Scholar] [CrossRef]
  58. Nessah, R.; Yalaoui, F.; Chu, C.B. A branch-and-bound algorithm to minimize total weighted completion time on identical parallel machines with job release dates. Comput. Oper. Res. 2008, 35, 1176–1190. [Google Scholar] [CrossRef]
  59. Gao, J.S.; Zhu, X.M.; Zhang, R.T. A branch-and-price approach to the multitasking scheduling with batch control on parallel machines. Int. Trans. Oper. Res. 2022. [Google Scholar] [CrossRef]
  60. Pei, Z.; Wan, M.Z.; Wang, Z.T. A new approximation algorithm for unrelated parallel machine scheduling with release dates. Ann. Oper. Res. 2020, 285, 397–425. [Google Scholar] [CrossRef]
  61. Toksari, M.D. A branch and bound algorithm for minimizing makespan on a single machine with unequal release times under learning effect and deteriorating jobs. Comput. Oper. Res. 2011, 38, 1361–1365. [Google Scholar] [CrossRef]
  62. Wang, X.; Ren, T.; Bai, D.; Ezeh, C.; Zhang, H.; Dong, Z. Minimizing the sum of makespan on multi-agent single-machine scheduling with release dates. Swarm Evol. Comput. 2022, 69, 100996. [Google Scholar] [CrossRef]
  63. Ozturk, O.; Begen, M.A.; Zaric, G.S. A branch and bound algorithm for scheduling unit size jobs on parallel batching machines to minimize makespan. Int. J. Prod. Res. 2017, 55, 1815–1831. [Google Scholar] [CrossRef]
  64. Senapati, D.; Sarkar, A.; Karfa, C. Performance-effective DAG scheduling for heterogeneous distributed systems. In Proceedings of the ICDCN 2022: 23rd International Conference on Distributed Computing and Networking, Delhi, India, 4–7 January 2022. [Google Scholar]
  65. Ghirardi, M.; Potts, C.N. Makespan minimization for scheduling unrelated parallel machines: A recovering beam search approach. Eur. J. Oper. Res. 2005, 165, 457–467. [Google Scholar] [CrossRef]
  66. Voutsinas, T.G.; Pappis, C.P. A branch and bound algorithm for single machine scheduling with deteriorating values of jobs. Math. Comput. Model. 2010, 52, 55–61. [Google Scholar] [CrossRef]
  67. Khoudi, A.; Berrichi, A. Branch and bound algorithm for identical parallel machine scheduling problem to maximise system availability. Int. J. Manuf. Res. 2020, 15, 199–217. [Google Scholar] [CrossRef]
  68. Shobaki, G.; Bassett, J.; Heffernan, M.; Kerbow, A. Graph transformations for register-pressure-aware instruction scheduling. In Proceedings of the 31st ACM SIGPLAN International Conference on Compiler Construction, Seoul, Korea, 2–3 April 2022. [Google Scholar]
  69. Wang, J.Y.; Jea, K.F. A near-optimal database allocation for reducing the average waiting time in the grid computing environment. Inf. Sci. 2009, 179, 3772–3790. [Google Scholar] [CrossRef]
  70. Kumar, H.; Tyagi, I. Hybrid model for tasks scheduling in distributed real time system. J. Ambient Intell. Humaniz. Comput. 2021, 12, 2881–2903. [Google Scholar] [CrossRef]
  71. Chou, F.D. Minimising the total weighted tardiness for non-identical parallel batch processing machines with job release times and non-identical job sizes. Eur. J. Ind. Eng. 2013, 7, 529–557. [Google Scholar] [CrossRef]
  72. Lu, G.X. New lower bounds for arithmetic, geometric, harmonic mean inequalities and entropy upper bound. Math. Inequalities Appl. 2017, 20, 1041–1050. [Google Scholar]
  73. Ahmad, S.; Khan, Z.A.; Ali, M.; Asjad, M. Geometric and Harmonic means based priority dispatching rules for single machine scheduling problems. Int. J. Prod. Manag. Eng. 2021, 9, 93–102. [Google Scholar] [CrossRef]
  74. Della Croce, F.; Tadei, R.; Baracco, P.; Grosso, A. A new decomposition approach for the single machine total tardiness scheduling problem. J. Oper. Res. Soc. 1998, 49, 1101–1106. [Google Scholar] [CrossRef]
  75. Chu, C.B. A branch-and-bound algorithm to minimize total tardiness with different release dates. Nav. Res. Logist. 1992, 39, 265–283. [Google Scholar] [CrossRef]
  76. Lee, W.C.; Wang, J.Y.; Lee, L.Y. A hybrid genetic algorithm for an identical parallel-machine problem with maintenance activity. J. Oper. Res. Soc. 2015, 66, 1906–1918. [Google Scholar] [CrossRef]
  77. Schaller, J.E. Minimizing total tardiness for scheduling identical parallel machines with family setups. Comput. Ind. Eng. 2014, 72, 274–281. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
