Article

Elimination Properties for a Probabilistic Scheduling Problem

by
Wojciech Bożejko
1,*,
Paweł Rajba
2,
Mariusz Uchroński
1 and
Mieczysław Wodecki
3
1
Department of Control Systems and Mechatronics, Faculty of Electronics, Wroclaw University of Science and Technology, Wyb. Wyspianskiego 27, 50-370 Wrocław, Poland
2
Institute of Computer Science, University of Wroclaw, Joliot-Curie 15, 50-383 Wroclaw, Poland
3
Department of Telecommunications and Teleinformatics, Faculty of Electronics, Wroclaw University of Science and Technology, Wyb. Wyspianskiego 27, 50-370 Wrocław, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5304; https://doi.org/10.3390/app13095304
Submission received: 28 January 2023 / Revised: 16 March 2023 / Accepted: 20 March 2023 / Published: 24 April 2023
(This article belongs to the Section Applied Industrial Technologies)

Abstract:
In many areas of the economy, such as transport, agriculture, trade, and construction, we deal with random processes. Effective management of such processes often leads to optimization models with random parameters. These problems are already very difficult to solve in the deterministic case, because they usually belong to the NP-hard class, and including uncertain parameters in the model causes additional complications. Hence, such problems are studied much less frequently. We propose a new, customized approach to searching the solution space of problems with random parameters. We prove new, strong properties of solutions, the so-called block elimination properties, which accelerate the neighborhood search. They make it possible to eliminate certain subsets of the solution space containing worse solutions without the need to calculate the value of the criterion function. Blocks can be used in the construction of exact and approximate algorithms, e.g., metaheuristics such as tabu search, significantly improving their efficiency.

1. Introduction

Studies of discrete optimization problems conducted over many years relate mostly to deterministic models, in which the basic assumption is that the parameters are uniquely determined. To solve these problems, which mostly belong to the class of strongly NP-hard problems, a number of very effective approximate algorithms have been developed; the solutions they determine differ only slightly from the optimal ones. In practice, however, during process realization (according to the adopted schedule), it often turns out that some parameters (e.g., operation times) differ from the initially adopted values. If a solution is not stable, it may then lose not only its optimality but even its feasibility. Moreover, in many applications it is very difficult to define process parameters unambiguously, or the data come from imprecise measuring devices; such data carry a certain error and are therefore uncertain.
The choice of the modeling and analysis approach depends on issues such as the features of the system, the ability to perform data measurements, data reliability, the power of the available theoretical tools, etc. Knowledge of all these elements is necessary for the efficient solving of practical problems (Shang, You [1], Zhang [2]). For example, the duration of an activity can be treated as deterministic (e.g., normative); its random characteristics can be measured (by taking a series of measurements and verifying a hypothesis about the type of distribution and its parameters); a single measurement can approximate a deterministic value (if the variance is small enough); or a membership function for a fuzzy representation can be determined, e.g., on the basis of expert opinion. Because these problems are computationally hard already in the deterministic case, problems with uncertain data are formulated and analyzed much less frequently.
Many problems related to the decision-making process come down to solving machine scheduling problems. Scheduling problems with uncertain data can be solved using methods based on probability theory (Shaked and Shanthikumar [3], Zhu and Cai [4], Van den Akker and Hoogeveen [5,6], Vondrák [7], Dean [8], Soroush [9], He [10]) or fuzzy set theory (Prade [11], Ishii [12], Ishibuchi et al. [13], Itoh and Ishii [14], Bocewicz et al. [15]). The accuracy (quality) of such an algorithm is determined not on individual instances of the problem (as with deterministic data), but on a family of examples generated randomly according to certain probability distributions. This kind of accuracy will be called the stability of an algorithm. Research carried out in recent years shows that taking uncertainty into account is promising not only at the stage of model construction but also during algorithm design (Rajba and Wodecki [16], Bożejko et al. [17]).
In this paper, we consider the strongly NP-hard problem of scheduling tasks with due dates on a single machine so as to minimize the total weighted tardiness cost. Task execution times are random variables. We present methods for an indirect review of solutions, the so-called 'block properties' of the problem, which we use in a tabu search (TS) algorithm. This algorithm is one of the best for solving the deterministic version of the problem under consideration (Bożejko et al. [18]). It is deterministic and guarantees the repeatability of calculations. Computational experiments were conducted mainly to:
  • check the effectiveness of using blocks, i.e., their impact on the time and quality of the determined solutions; and
  • test the stability of the algorithms, i.e., their resistance to data disturbances.
The conducted computational experiments show that the probabilistic model with blocks allows us to generate better solutions in less time, and that the solutions obtained are stable, i.e., not very sensitive to random data disturbances.

2. Single-Machine Total Weighted Tardiness Tasks Scheduling Problem

Task scheduling problems on a single machine with cost goal functions have a very long (over 50 years) history; the first work, by Rinnooy Kan et al. [19], appeared in 1975. Despite the simplicity of their formulation, they mostly belong to the class of strongly NP-hard problems. They are important both from the point of view of theory and of practice, since such problems often constitute a significant part of more extensive production systems. Different variants of scheduling tasks on a single machine are still intensively studied, and the results obtained are an inspiration for research into much more complex multi-machine problems. While describing the problem considered in this section, we will use some definitions, notations and properties presented in the following works: Bożejko et al. [18,20], Rajba and Wodecki [16].
A single-machine Total Weighted Tardiness Problem, abbreviated as TWT, will be defined as follows.
TWT Problem: Let
J = { 1, 2, …, n },
be a set of tasks to be performed, without interruption, on a machine that executes at most one task at a time. For task i ∈ J (i = 1, …, n) we define:
  • p i —execution time;
  • w i —penalty for tardiness;
  • d i —demanded completion time (due date).
Let π = ( π(1), π(2), …, π(n) ) be any permutation of elements from J , and Φ the set of all such permutations. Then,
C_{π(i)} = ∑_{j=1}^{i} p_{π(j)}
is the completion time of task π(i) ∈ J, executed as i-th in the sequence. If C_{π(i)} ≤ d_{π(i)}, then the task is called an early one, otherwise a tardy one.
Tardiness of task
T_{π(i)} = max { 0, C_{π(i)} − d_{π(i)} },
and cost of its execution f ( π ( i ) ) = w π ( i ) T π ( i ) . By
F(π) = ∑_{i=1}^{n} w_{π(i)} T_{π(i)}
we designate cost of execution of all tasks (weight of permutation π ).
The problem under consideration (which will also be called the deterministic problem) involves determining the permutation π* ∈ Φ with the smallest weight, such that
F(π*) = min { F(π) : π ∈ Φ }.
In the literature, this problem is denoted by 1 | | ∑ w_i T_i and belongs to the class of strongly NP-hard problems. Optimal algorithms for solving it based on the dynamic programming method were presented by Lawler and Moore [21] and Sahni [22]; those based on the branch and bound method were presented by Potts and Van Wassenhove [23], Villareal and Bulfin [24], Urgo [25] and Wodecki [26]. They enable the effective determination of optimal solutions for examples in which the number of tasks does not exceed 80. Due to the time-consuming character of exact algorithms, in practice approximate algorithms (mainly metaheuristics) are usually applied.
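To make the above definitions concrete, the criterion function F(π) can be evaluated in a few lines. The sketch below is our illustration, not the authors' code; the 4-task instance data are arbitrary.

```python
# Illustrative sketch (not the authors' implementation): evaluating the
# total weighted tardiness F(pi) of a small deterministic TWT instance.

def total_weighted_tardiness(p, w, d, pi):
    """Cost F(pi) of executing tasks in the order pi (0-based task indices)."""
    t, cost = 0, 0
    for i in pi:
        t += p[i]                        # completion time C_pi(i)
        cost += w[i] * max(0, t - d[i])  # w_i * T_i
    return cost

# arbitrary 4-task example
p = [3, 2, 4, 1]   # execution times
w = [2, 1, 3, 2]   # tardiness weights
d = [4, 3, 8, 5]   # due dates
print(total_weighted_tardiness(p, w, d, [0, 1, 2, 3]))  # prints 15
```

Enumerating all n! permutations with such a routine is, of course, only feasible for toy instances, which is why the block properties discussed below matter.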
In the best metaheuristic algorithms for solving multi-machine task scheduling problems, so-called 'block elimination properties' are used (e.g., for flow shop problems such algorithms are presented in the works of Nowicki and Smutnicki [27] and Grabowski and Wodecki [28]). Blocks enable both a reduction of the calculation time and an improvement of the solution values determined by the algorithm. For the single-machine Total Weighted Tardiness Problem TWT presented in this section, two types of blocks, the so-called early and tardy blocks, were presented in the works of Uchroński [29] and Wodecki [26,30] and successfully used in tabu search algorithms.

3. Random Task Execution Times

In the literature we can find many ways in which uncertainty is modeled. First, we can distinguish proactive and reactive approaches: the former assumes that all preparations for handling uncertainty are completed before the algorithm starts its execution, while the latter assumes the opposite, i.e., uncertainty needs to be handled during the actual algorithm execution. Of course, many combinations of these two also exist. Moreover, we can distinguish an online approach, where the problem instance is not known in advance. Next, uncertainty itself can be modeled in different ways; even though several descriptions appear in the field, we can distinguish three main categories: a probabilistic (or stochastic) description, where random variables are involved; a fuzzy description; and a bound form, where fixed ranges are involved. Sometimes by 'stochastic' authors refer to certain types of online algorithms, and solutions in the bound form are sometimes referred to as robust: with fixed ranges we can guarantee an upper or lower bound on the provided solutions (although, in general, the robustness description is not limited to the bound form). Given the above, in this paper we investigate the proactive version of the problem with uncertainty modeled by a probabilistic approach with random variables; that is, in this section we consider the probabilistic version of the aforementioned single-machine Total Weighted Tardiness Problem, and we assume that the task completion times are independent random variables.
Such problems are most often described in the literature as scheduling problems with uncertain parameters. In practice, there are great difficulties in establishing the probability distribution of random parameters, especially when we are dealing with unique processes for which no representative statistical data exist.
An extensive review of methods and algorithms for solving the problems of combinatorial optimization with random parameters was presented by Vondrák in monograph [7] and in the more recent work by Xiaoqiang et al. [31]. Some practical problems are also considered in the works of Bożejko et al. [20,32,33,34]. The last of these works concerns the implementation of construction projects under uncertainty. We will now introduce the necessary definitions and designations.
If X is a continuous random variable, we will use the following designations later in this work:
  • F X —cumulative distribution function of random variable X;
  • f X —density function of random variable X;
  • E ( X ) —expected value of random variable X.
We are considering, as described in Section 2, a probabilistic version of the TWT problem, in which the task execution times p ˜ i are independent random variables, and the remaining task parameters w i and d i ( i = 1 , 2 , , n ) are deterministic. This problem will be denoted in short by PTWT.
If the task execution times p̃_i are random variables, then for any order of execution of tasks π ∈ Φ, the completion time of the task π(k)
C̃_{π(k)} = p̃_{π(1)} + p̃_{π(2)} + … + p̃_{π(k)},
tardiness T̃_{π(k)} = max { 0, C̃_{π(k)} − d_{π(k)} } (cf. Equation (2)) and the criterion function (cf. Equation (3))
F̃(π) = ∑_{i=1}^{n} w_{π(i)} T̃_{π(i)}.
are also random variables.
In algorithms for solving optimization problems, it is necessary to compare the values of the criterion function for various feasible solutions (e.g., permutations). In the case when this function is a random variable (5), we use its expected value. Therefore, the following function will be used as the comparative criterion of solutions:
L(π) = E(F̃(π)) = ∑_{i=1}^{n} w_{π(i)} E(T̃_{π(i)}).
By
f ( π ( k ) ) = w π ( k ) E ( T ˜ π ( k ) )
we designate the cost of execution of task π ( k ) .
Later in this work we present the methods for calculating the value of criterion function (6).
A sequence of consecutive elements of a permutation π ∈ Φ,
β = ( π(a), π(a+1), …, π(b) ),
where 1 ≤ a ≤ b ≤ n, will be called a subpermutation of the permutation π. The cost of performing the tasks from subpermutation β is
L(β) = ∑_{i=a}^{b} w_{π(i)} E(T̃_{π(i)}).
By Y(β) we denote the set of elements of the subpermutation β, i.e.,
Y(β) = { π(a), π(a+1), …, π(b) }.

4. Blocks of Tasks

We consider a permutation π ∈ Φ, i.e., a solution to the PTWT problem. If the expected value of the completion time of task π(i) satisfies
E(C̃_{π(i)}) ≤ d_{π(i)},
then task π(i) is called an early one; otherwise, i.e., when
E ( C ˜ π ( i ) ) > d π ( i ) ,
a tardy one.
Later in this section, we present a method of partitioning π into subpermutations (called blocks) containing only early or only tardy tasks. Next, we determine the optimal order of tasks in each of these subpermutations. A permutation generated in this way has the following property: no change in the order of elements within any block generates a solution with a smaller value of the criterion function (6). This is the so-called block elimination property. Blocks for problems with deterministic demanded completion times (due dates) are considered by Uchroński [29] and Bożejko et al. [35], and with probabilistic ones by Bożejko et al. [36].

4.1. Blocks of Early Tasks

Definition 1.
A subpermutation of tasks π T in a permutation π ∈ Φ is called a block of early tasks (in short, a T -block) if:
(a) 
every task j ∈ π T is early and d_j ≥ E(C̃_last), where C̃_last is a random variable: the completion time of the last task from π T ,
(b) 
π T is the maximum subpermutation meeting the constraint (a).
Corollary 1.
If π T = ( π(a), π(a+1), …, π(b) ) is a T -block in a permutation π, then
1.
the inequality min { d_j : j ∈ π T } ≥ E(C̃_last) is fulfilled,
2.
the cost of execution of tasks from π T ,
L(π T) = ∑_{i=a}^{b} w_{π(i)} E(T̃_{π(i)}) = 0.
Using this property, we determine the first T -block in the permutation π. After minor modifications (expected values of random variables E(C̃_i) instead of deterministic values C_i), one can apply the AT -block algorithm presented in the work of Uchroński [29]. The computational complexity of this algorithm is O(n). Considering the elements of the permutation that follow the last element of the first T -block, we can determine the next block of early tasks.
Lemma 1.
If permutation β was generated from π ∈ Φ by changing the order of elements in some block of early tasks of permutation π, then
L ( β ) = L ( π ) .
Proof. 
The proof results directly from the definition of the early task block and equality (10).    □

4.2. Blocks of Tardy Tasks

Definition 2.
A subpermutation of tasks π D in a permutation π ∈ Φ is called a D -block of tardy tasks if:
(a′)
every task j ∈ π D is tardy and d_j < E(S̃_first + p̃_j), where S̃_first is a random variable: the start time of the first task from π D ,
(b′)
π D is a maximal subpermutation meeting the constraint (a′).
The following properties result directly from the tardy task block definition.
Corollary 2.
If π D = ( π(a), π(a+1), …, π(b) ) is a D -block in permutation π, then
1.
the following inequality is met
max { d_j : j ∈ π D } < E(S̃_first) + min { E(p̃_i) : i ∈ π D }.
2.
any task belonging to π D, when moved to the first position of the block, is tardy in permutation π.
Similarly as for T -blocks, we determine the first D -block in the permutation π. After modifications (expected values of random variables E(C̃_i) instead of deterministic values C_i, and E(S̃_first) instead of S_first), one can apply the AD -block algorithm presented in the work of Uchroński [29]. The computational complexity of this algorithm is O(n). Considering the elements of the permutation that follow the last element of the first D -block, we can designate the next block of tardy tasks.
Theorem 1
([26]). For any permutation π ∈ Φ there exists a division of π into subpermutations, each of which is:
(i)
T -block, or
(ii)
D -block.
The algorithm for division of an n-elemental permutation into blocks, based on the proof of Theorem 1, has a computational complexity O ( n ) .
It follows from the block definitions and Theorem 1 that, after the division of a permutation into blocks:
1.
Every task belongs to some T or D block.
2.
Different blocks have different elements.
3.
Two T or D blocks can appear directly next to each other.
4.
A block can contain only one task.
5.
The division of a permutation into blocks is not unique.
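One possible reading of Definitions 1 and 2 can be sketched as a simple partitioning procedure. The code below is only our illustration (it is not the O(n) AT -/AD -block algorithms of Uchroński [29]); it greedily grows runs of early or tardy tasks that satisfy the defining conditions, using the expected execution times Ep.

```python
# Illustrative sketch (our reading of Definitions 1 and 2, not the authors'
# AT-/AD-block procedures): partition a permutation into T-blocks (early
# tasks) and D-blocks (tardy tasks), using expected completion times.

def partition_into_blocks(Ep, d, pi):
    """Return a list of (kind, [tasks]) with kind 'T' (early) or 'D' (tardy).

    Ep[i] -- expected execution time E(p~_i); d[i] -- due date.
    A task is early if its expected completion time does not exceed d[i].
    """
    EC, ES, t = {}, {}, 0.0           # expected completion / start times
    for i in pi:
        ES[i] = t
        t += Ep[i]
        EC[i] = t

    blocks, a = [], 0
    while a < len(pi):
        early = EC[pi[a]] <= d[pi[a]]
        b = a
        # grow the block while the defining condition still holds for all members
        while b + 1 < len(pi):
            k = pi[b + 1]
            if early:
                ok = EC[k] <= d[k] and all(d[x] >= EC[pi[b + 1]]
                                           for x in pi[a:b + 2])
            else:
                ok = EC[k] > d[k] and all(d[x] < ES[pi[a]] + Ep[x]
                                          for x in pi[a:b + 2])
            if not ok:
                break
            b += 1
        blocks.append(('T' if early else 'D', pi[a:b + 1]))
        a = b + 1
    return blocks
```

For instance, with Ep = [2, 2, 2], d = [10, 10, 1] and the natural permutation, tasks 0 and 1 form a T -block and task 2 a single-element D -block.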
It is easy to show that the order of tasks within a D -block need not be optimal with respect to criterion (6). Later in this section we present a method for determining the optimal order of elements in a D -block. First, we prove an auxiliary lemma.
Lemma 2.
Let us assume that permutation δ was generated from π ∈ Φ by swapping the positions of two adjacent elements of the permutation. If for every i (i = 2, 3, …, n) the following holds:
f_{π(i−1)}(E(C̃_{π(i−1)})) + f_{π(i)}(E(C̃_{π(i)})) ≥ f_{π(i−1)}(E(C̃_{π(i)})) + f_{π(i)}(E(C̃_{π(i)} − p̃_{π(i−1)})),
then
L(δ) ≤ L(π).
Proof. 
For definiteness, assume that δ was created from permutation π by swapping the positions of tasks π(i) and π(i−1) (2 ≤ i ≤ n); then
δ ( j ) = π ( j ) , therefore E ( C ˜ δ ( j ) ) = E ( C ˜ π ( j ) ) ,
for j = 1, 2, …, i−2, i+1, …, n,
and
δ(i−1) = π(i), δ(i) = π(i−1),
E(C̃_{δ(i−1)}) = E(C̃_{π(i)} − p̃_{π(i−1)}), E(C̃_{δ(i)}) = E(C̃_{π(i)}).
The difference in the criterion value is:
L(δ) − L(π) = ∑_{j=1}^{n} f_{δ(j)}(E(C̃_{δ(j)})) − ∑_{j=1}^{n} f_{π(j)}(E(C̃_{π(j)}))
= [ ∑_{j=1}^{i−2} f_{δ(j)}(E(C̃_{δ(j)})) + f_{δ(i−1)}(E(C̃_{δ(i−1)})) + f_{δ(i)}(E(C̃_{δ(i)})) + ∑_{j=i+1}^{n} f_{δ(j)}(E(C̃_{δ(j)})) ]
− [ ∑_{j=1}^{i−2} f_{π(j)}(E(C̃_{π(j)})) + f_{π(i−1)}(E(C̃_{π(i−1)})) + f_{π(i)}(E(C̃_{π(i)})) + ∑_{j=i+1}^{n} f_{π(j)}(E(C̃_{π(j)})) ]
= f_{δ(i−1)}(E(C̃_{δ(i−1)})) + f_{δ(i)}(E(C̃_{δ(i)})) − f_{π(i−1)}(E(C̃_{π(i−1)})) − f_{π(i)}(E(C̃_{π(i)})).
Using the definition of the permutation δ, we obtain:
L(δ) − L(π) = f_{δ(i−1)}(E(C̃_{δ(i−1)})) + f_{δ(i)}(E(C̃_{δ(i)})) − f_{π(i−1)}(E(C̃_{π(i−1)})) − f_{π(i)}(E(C̃_{π(i)}))
= f_{π(i)}(E(C̃_{π(i)} − p̃_{π(i−1)})) + f_{π(i−1)}(E(C̃_{π(i)})) − f_{π(i−1)}(E(C̃_{π(i−1)})) − f_{π(i)}(E(C̃_{π(i)})) ≤ 0.
The last inequality follows from the assumption (11).    □
Corollary 3.
If tasks π ( i 1 ) , π ( i ) belong to some D -block in permutation π, then inequality (11) takes the form:
w_{π(i−1)} / E(p̃_{π(i−1)}) ≥ w_{π(i)} / E(p̃_{π(i)}).
Lemma 3.
Let B = ( π(a), π(a+1), …, π(b) ), 1 ≤ a < b ≤ n, be a D -block in the permutation π. If each pair of adjacent elements of B satisfies relation (12), then the order in B is optimal for the tasks of the set Y(B), i.e.,
L(B) = min { L(γ) : γ is a permutation of the elements of the set Y(B) }.
Proof. 
It suffices to use Lemma 2 and property (12), which by assumption is satisfied by every pair of adjacent elements of subpermutation B .    □
Definition 3.
Let B be a partition of the permutation π into blocks. The permutation π is called ordered (in short, D -OPT) if every D -block of tasks satisfies relation (12), i.e., its tasks appear in the optimal order.
Corollary 4.
Changing the order of tasks in any block of ordered permutation π does not generate permutations of a lower value of criterion function.
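Under our reading of relation (12) as a weighted-shortest-expected-processing-time rule, ordering the tasks of a D -block by nonincreasing w_i / E(p̃_i) makes every adjacent pair satisfy it. The sketch below is our illustration only; the function names are ours.

```python
# Illustrative sketch (our reading of Corollary 3, not the authors' code):
# order the tasks of a D-block by nonincreasing w_i / E(p~_i), so that
# every adjacent pair satisfies relation (12).

def order_d_block(tasks, w, Ep):
    """Sort D-block task indices by nonincreasing ratio w[i] / Ep[i]."""
    return sorted(tasks, key=lambda i: w[i] / Ep[i], reverse=True)

def satisfies_12(order, w, Ep):
    """Check w[a]/Ep[a] >= w[b]/Ep[b] for every adjacent pair (a, b)."""
    return all(w[a] / Ep[a] >= w[b] / Ep[b]
               for a, b in zip(order, order[1:]))
```

For example, with w = [2, 1, 3] and Ep = [4, 1, 2], the ratios are 0.5, 1.0, 1.5, so `order_d_block([0, 1, 2], w, Ep)` yields the order [2, 1, 0].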
We will now prove a theorem containing necessary conditions that must be met in order to generate a solution with a smaller criterion function value.
Theorem 2.
Let π Φ be the D -OPT permutation. If β Φ and
L ( β ) < L ( π ) ,
 then in the permutation β at least one task of some block of the division of π was moved before the first or after the last task of this block.
Proof. 
Let B = [ B 1 , B 2 , , B k ] be a division of the ordered permutation π Φ into blocks. Every block is a sequence of tasks
B_i = ( π(a_i), π(a_i+1), …, π(b_i) ), for i = 1, 2, …, k, where
1 ≤ a_1 ≤ b_1 < a_2 ≤ b_2 < … < a_k ≤ b_k.
By
Y_i(π) = { π(a_i), π(a_i+1), …, π(b_i) },
we denote a set of tasks from block B i .
Let permutation β ∈ Φ and L(β) < L(π). Assume, to the contrary, that in permutation β no task from any of the blocks B_1, B_2, …, B_k was moved before the first or after the last task of its block. Thus
Y i ( π ) = Y i ( β ) , i = 1 , 2 , , k .
Therefore the sequences of tasks ( π(a_i), π(a_i+1), …, π(b_i) ) in permutation π and ( β(a_i), β(a_i+1), …, β(b_i) ) in β are permutations of the same subset of tasks. It follows from Lemmas 1 and 3 that in this case L(β) ≥ L(π), which contradicts the assumption (13) of Theorem 2.    □
The above theorem is the basis for the construction of a subneighborhood in algorithms based on the local improvement method.

5. Tasks Execution Times with Normal Distribution

We now consider the probabilistic Total Weighted Tardiness Problem PTWT. To simplify the notation, we assume that the order of execution of tasks is the natural permutation, i.e., π = (1, 2, …, n). Let (p̃_i, w_i, d_i) (i ∈ J) be an example of the problem data, where the p̃_i ∼ N(m_i, s_i) are random variables with normal distributions, and w_i and d_i are fixed numbers. Using Graham's notation, this problem can be symbolically presented as 1 | p̃_i ∼ N(m_i, s_i) | ∑ w_i T_i. When determining the value of the criterion function (6) of the PTWT problem, one must compute the expected value of the tardiness T̃_i, i.e., E(T̃_i). First, we present some facts concerning the distributions of the random variables involved.
Fact 1.
If the task execution times are independent random variables with normal distributions p̃_i ∼ N(m_i, s_i), i ∈ J, then the task completion times are also random variables with normal distributions C̃_i ∼ N(μ_i, σ_i), where
μ_i = ∑_{j=1}^{i} m_j and σ_i = ( ∑_{j=1}^{i} s_j² )^{1/2}.
In this case, tardiness is a random variable defined as follows:
T̃_i = C̃_i − d_i, if C̃_i > d_i; and T̃_i = 0, if C̃_i ≤ d_i.
Lemma 4.
Cumulative distribution F T ˜ i of distribution of random variable T ˜ i is expressed by the formula:
F_{T̃_i}(x) = 0 for x ≤ 0, and F_{T̃_i}(x) = F_{C̃_i}(d_i + x) − F_{C̃_i}(d_i + x) F_{C̃_i}(d_i) + F_{C̃_i}(d_i) for x > 0.
Proof. 
Using the definition of the random variable T ˜ i we consider two cases.
1. x > 0 . Then
F_{T̃_i}(x) = P(T̃_i < x) = P(T̃_i < x | C̃_i − d_i > 0) P(C̃_i − d_i > 0) + P(T̃_i < x | C̃_i − d_i ≤ 0) P(C̃_i − d_i ≤ 0).
We can see two facts:
P(T̃_i < x | C̃_i − d_i ≤ 0) = 1
and
P(T̃_i < x | C̃_i − d_i > 0) = P(C̃_i − d_i < x).
Next we have:
F_{T̃_i}(x) = P(C̃_i − d_i < x) P(C̃_i − d_i > 0) + P(C̃_i − d_i ≤ 0)
= P(C̃_i < d_i + x) P(C̃_i > d_i) + P(C̃_i ≤ d_i)
= F_{C̃_i}(d_i + x) (1 − F_{C̃_i}(d_i)) + F_{C̃_i}(d_i)
= F_{C̃_i}(d_i + x) − F_{C̃_i}(d_i + x) F_{C̃_i}(d_i) + F_{C̃_i}(d_i).
2. x ≤ 0. It is obvious that F_{T̃_i}(x) = 0.    □
In order to calculate the density function f_{T̃_i}(x) of the random variable T̃_i, it is enough to differentiate the cumulative distribution function F_{T̃_i}(x). Thus
f_{T̃_i}(x) = 0 for x ≤ 0, and f_{T̃_i}(x) = f_{C̃_i}(d_i + x) − F_{C̃_i}(d_i) f_{C̃_i}(d_i + x) for x > 0.
Using the above lemma we will prove the theorem enabling the calculation of the expected value of the random variable T ˜ i .
Theorem 3.
The expected value of the random variable T̃_i is
E(T̃_i) = (1 − F_{C̃_i}(d_i)) [ σ_i/√(2π) · e^{−(d_i − μ_i)²/(2σ_i²)} + (μ_i − d_i) (1 − F_{N(0,1)}((d_i − μ_i)/σ_i)) ].
Proof. 
By definition of the expected value of a random variable
E(T̃_i) = ∫_{−∞}^{∞} x f_{T̃_i}(x) dx = ∫_0^{∞} x [ f_{C̃_i}(d_i + x) − F_{C̃_i}(d_i) f_{C̃_i}(d_i + x) ] dx
= ∫_0^{∞} x f_{C̃_i}(d_i + x) dx − ∫_0^{∞} x F_{C̃_i}(d_i) f_{C̃_i}(d_i + x) dx
= (1 − F_{C̃_i}(d_i)) ∫_0^{∞} x f_{C̃_i}(d_i + x) dx.
By introducing the appropriate substitutions, we obtain:
E(T̃_i) = (1 − F_{C̃_i}(d_i)) ∫_{d_i}^{∞} (y − d_i) f_{C̃_i}(y) dy
= (1 − F_{C̃_i}(d_i)) [ ∫_{d_i}^{∞} y f_{C̃_i}(y) dy − ∫_{d_i}^{∞} d_i f_{C̃_i}(y) dy ].
The following equations are easy to prove:
∫_{d_i}^{∞} y f_{C̃_i}(y) dy = σ_i/√(2π) · e^{−(d_i − μ_i)²/(2σ_i²)} + μ_i (1 − F_{N(0,1)}((d_i − μ_i)/σ_i))
and
∫_{d_i}^{∞} d_i f_{C̃_i}(y) dy = d_i (1 − F_{N(0,1)}((d_i − μ_i)/σ_i)).
By inserting the above equations into the Expression (15) we obtain the theorem thesis:
E(T̃_i) = (1 − F_{C̃_i}(d_i)) [ σ_i/√(2π) · e^{−(d_i − μ_i)²/(2σ_i²)} + (μ_i − d_i) (1 − F_{N(0,1)}((d_i − μ_i)/σ_i)) ],
which ends the proof of the theorem.   □
The theorem proved above enables a quick calculation of the expected tardiness for random task completion times. Thanks to this, the cost of a permutation π ∈ Φ (the value of criterion function (6)) is:
L(π) = E(F̃(π)) = ∑_{i=1}^{n} w_{π(i)} E(T̃_{π(i)})
= ∑_{i=1}^{n} w_{π(i)} (1 − F_{N(0,1)}((d_i − μ_i)/σ_i)) [ σ_i/√(2π) · e^{−(d_i − μ_i)²/(2σ_i²)} + (μ_i − d_i) (1 − F_{N(0,1)}((d_i − μ_i)/σ_i)) ].
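Theorem 3 can be implemented directly with the standard normal CDF built from the error function. The sketch below is our transcription of the formula (the function names are ours, not the authors' code).

```python
# Illustrative sketch (our transcription of Theorem 3): expected tardiness
# E(T~_i) for a completion time C~_i ~ N(mu, sigma) and due date d,
# using the standard normal CDF built from math.erf.
import math

def std_normal_cdf(z):
    """Cumulative distribution function of N(0, 1)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_tardiness(mu, sigma, d):
    """E(T~_i) according to the closed form of Theorem 3."""
    z = (d - mu) / sigma
    tail = 1.0 - std_normal_cdf(z)   # 1 - F_C(d)
    return tail * (sigma / math.sqrt(2.0 * math.pi) * math.exp(-z * z / 2.0)
                   + (mu - d) * tail)
```

The limiting behavior is a useful sanity check: for d far below mu the value approaches mu − d (the task is almost surely tardy), and for d far above mu it approaches 0.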

6. Tabu Search Algorithm

For solving NP-hard discrete optimization problems, approximate algorithms are used almost exclusively. The solutions determined by these algorithms are, from an application point of view, fully satisfactory (they often differ from the best known solutions by less than 1%). Most of them belong to the local search methods, whose operation boils down to iteratively browsing a certain subset of feasible solutions (a neighborhood) and determining the best neighbor. One of the best implementations of this approach is the tabu search algorithm. The work [18] presents such an algorithm for the TWT problem. The tabu search algorithm is deterministic, so it guarantees the repeatability of results. The application of block properties in the construction of the algorithm significantly improved its efficiency. For solving examples of the single-machine problem with random task execution times, PTWT, we used a simplified version of this algorithm. Below, we briefly describe its basic elements.

6.1. Moves and Neighborhoods

In each iteration of an algorithm based on a local search method, a subset of the set of solutions, the neighborhood, is determined using a neighborhood (move) generator. If the solutions are permutations, swap-type moves (s-moves) and insert moves (i-moves) are used most often. The first swaps the positions of two elements of the permutation, and the second moves an element to a different position.
Let
B = [ B_1, B_2, …, B_ν ],
be a division of the ordered ( D -OPT) permutation π into blocks. A move m made on the permutation π is called improving if it generates a permutation m(π) with a lower value of the criterion function. It follows from Theorem 2 that moves consisting of swapping the order of elements within any block of a D -OPT permutation are not improving moves. We now consider a task π(j) belonging to a certain block of the division B of the permutation π. Moves that can bring improvement consist of moving the task π(j) before the first or after the last task of this block. Let M_bf^j(π) and M_af^j(π) be the sets of these moves. By
M(π) = ⋃_{j=1}^{n} M_bf^j(π) ∪ ⋃_{j=1}^{n} M_af^j(π),
we designate the set of all moves that can bring improvement, i.e., moves before or after the blocks of a permutation π. Since the procedure for splitting a permutation into blocks is not unambiguous, the set of moves M(π) is not uniquely defined and depends on the considered split. Formally, the definition (17) should contain a symbol identifying a specific division; to simplify the description, it is omitted.
For a fixed split D -OPT of permutation π Φ into blocks, the set of solutions
N ( π ) = { m ( π ) : m M ( π ) }
is a subneighborhood, generated with the use of the "block elimination properties".
The procedure for determining the subneighborhood (including the elimination of certain moves generating permutations that do not improve the value of the objective function) has a complexity of O(n²). Its use significantly improved the efficiency of the algorithm solving the PTWT problem.

6.2. Tabu List

In order to prevent cycling (returning to the same permutation after a certain number of algorithm iterations), some attributes of each move are stored on the list of prohibited moves L T S , which operates on the principle of a FIFO queue. When performing a move that inserts the element π(r) at position j, generating from π ∈ Φ a permutation β, we save the attributes of this move on the tabu list as the triple (π(r), j, L(β)).
Let us assume we are considering a move m ∈ M(β) that inserts task β(k) at position l, generating from β ∈ Φ a permutation γ. If the list contains a triple (r, j, Ψ) such that β(k) = r, l = j and L(γ) ≥ Ψ, such a move is prohibited and removed from the set M(β). The only parameter of the list is its length, i.e., the number of remembered elements. Many implementations of the tabu list are described in the literature (Algorithm 1).
Algorithm 1 Tabu Search (TS)
Input:
         π —start permutation;
         L T S = —tabu list;
Output:
         π * —best solution;
repeat
        Determine the subneighborhood N ( π ) of permutation π due to Equation (18);
        Delete from N ( π ) permutations forbidden by the list L T S ;
        Determine permutation β N ( π ) , such that
           L ( β ) = min { L ( δ ) : δ N ( π ) } ;
        if L ( β ) < L ( π * ) then
           π * : = β ;
        Place attributes β on the list L T S ;
         π : = β
until (the completion condition is fulfilled).
        
Termination condition:
1.     calculation time,
2.     maximum number of iterations.
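Algorithm 1 can be sketched in a few lines of code. The following is a deliberately simplified illustration, not the authors' implementation: it browses the full swap neighborhood instead of the block-based subneighborhood N(π) of Section 6.1, and it stores swapped position pairs on a FIFO tabu list instead of the attribute triples described above.

```python
# Minimal sketch of Algorithm 1 (our simplification): tabu search with a
# full swap neighborhood and a FIFO tabu list of move attributes.
from collections import deque

def tabu_search(cost, pi, n_iter=50, tabu_len=7):
    """cost: callable evaluating a permutation (tuple); pi: start tuple."""
    best, best_val = pi, cost(pi)
    tabu = deque(maxlen=tabu_len)        # FIFO list of forbidden moves
    cur = pi
    for _ in range(n_iter):
        cand, cand_val, cand_move = None, None, None
        # determine the best non-forbidden neighbor of cur
        for i in range(len(cur)):
            for j in range(i + 1, len(cur)):
                if (i, j) in tabu:
                    continue             # move forbidden by the tabu list
                nb = list(cur)
                nb[i], nb[j] = nb[j], nb[i]
                nb = tuple(nb)
                v = cost(nb)
                if cand_val is None or v < cand_val:
                    cand, cand_val, cand_move = nb, v, (i, j)
        if cand is None:                 # every move is forbidden
            break
        tabu.append(cand_move)           # place the move attributes on the list
        cur = cand
        if cand_val < best_val:
            best, best_val = cand, cand_val
    return best, best_val
```

Note that, as in Algorithm 1, the best neighbor is accepted even when it does not improve the current solution; only the best permutation found so far is reported.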

7. Computational Experiments

This section introduces a method of random generation of data, a measure of algorithm stability and the calculation results of two algorithms:
  • T S P —a probabilistic tabu search algorithm with the entire neighborhood generated by insert moves;
  • T S P + B —a modified T S P algorithm with the additional application of block elimination properties in the neighborhood generation procedure.
The algorithms have been implemented in C++ and run on the “Bem” cluster in the Wrocław Network Supercomputer Center working under 64-bit operating system Scientific Linux 6.10 (Final) equipped with Intel Xenon processor a E5-2670 (2.3 GHz). Calculations were made on suitably modified examples of reference data for the problem 1 | | w i T i included with the best currently known solutions on the OR-Library website [37]. The data was randomly generated for the number of tasks n = 40, 50 i 100. For each n there were 125 examples with varying degrees of difficulty designated. When running each of the three algorithms, the following assumptions were made:
  • starting permutation: π = ( 1 , 2 , , n ) ,
  • length of the tabu list: n.
A set of these 375 examples, called deterministic data, will be denoted by  D .

7.1. Generating of Test Data

Computational experiments were conducted on examples generated according to the standard method for generating popular benchmarks, proposed by Potts and Van Wassenhove [23] and available on the website [37].
For a fixed number of tasks n, an example of deterministic data
η = ((p_1, w_1, d_1), (p_2, w_2, d_2), …, (p_n, w_n, d_n))
is a sequence of n triples (task execution time, tardiness cost factor, requested completion date). Based on it, we construct examples of probabilistic data
η̃ = ((p̃_1, w_1, d_1), (p̃_2, w_2, d_2), …, (p̃_n, w_n, d_n)),
where the task execution times p̃_i (i = 1, 2, …, n) are independent random variables with normal distribution, p̃_i ∼ N(p_i, c · p_i), and the coefficient c = 0.05, 0.1, 0.2 is the random-data counterpart of the RDD parameter in the method used to generate data for the single-machine total weighted tardiness problem available on the OR-Library website [37]. From each deterministic example in set D we generate three examples of probabilistic data, so in total the probabilistic data set P̃ has 1125 examples.
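The construction of probabilistic examples from a deterministic one can be sketched as below. We read the second parameter of N(p_i, c·p_i) as the standard deviation (the notation does not disambiguate between standard deviation and variance), and the function name is ours:

```python
def make_probabilistic(example, coefficients=(0.05, 0.1, 0.2)):
    """Build one probabilistic example per coefficient c from a
    deterministic example [(p_i, w_i, d_i), ...]: each execution time
    p_i is replaced by the (mean, std) parameter pair of N(p_i, c * p_i);
    the weights w_i and due dates d_i stay deterministic."""
    return {c: [((p, c * p), w, d) for (p, w, d) in example]
            for c in coefficients}
```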
For probabilistic data, the task execution times are random variables. For a fixed example of probabilistic data, the probabilistic algorithm (i.e., the algorithm for the PTWT problem) determines a solution, that is, an order of task execution (a permutation of tasks). We therefore fix the order of the tasks, but we do not know the specific values of the execution times, because these are realizations of normally distributed random variables and may take different values. We introduce a certain measure of the resistance of a solution (more generally, of the algorithm with which the solution was determined); stability is covered in the next section. For this purpose, the so-called sets of disturbed data were generated.
Let
η̃ = ((p̃_1, w_1, d_1), (p̃_2, w_2, d_2), …, (p̃_n, w_n, d_n)),
η̃ ∈ P̃, be an example of probabilistic data. For this example, 100 examples of disturbed data were generated by randomly drawing the task execution times. The set of these examples is denoted by Z(η̃). An example of disturbed data θ ∈ Z(η̃) takes the form
θ = ((p_1, w_1, d_1), …, (p_n, w_n, d_n)),
where the task execution time p_i (i = 1, …, n) is a realization of the random variable p̃_i with normal distribution N(p_i, c · p_i), c = 0.05, 0.1, 0.2, c being, as before, the random-data counterpart of the RDD parameter in the standard benchmark generation method [37]. The set of all disturbed data is denoted by Z, |Z| = 112,500.
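Drawing a set of disturbed examples from a probabilistic one can be sketched as follows. Clipping negative draws at zero is our assumption (the paper does not specify how non-positive realizations are handled), and the function name is illustrative:

```python
import random

def disturbed_examples(prob_example, n_examples=100, seed=0):
    """Draw disturbed examples from a probabilistic one: each task's
    execution time is a realization of N(mean, std); weights and due
    dates are copied unchanged."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    out = []
    for _ in range(n_examples):
        out.append([(max(0.0, rng.gauss(mean, std)), w, d)
                    for ((mean, std), w, d) in prob_example])
    return out
```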

7.2. Algorithm Stability

Let F be the value of the solution determined by the tested algorithm and F* the value of the reference solution. The relative error of the solution,
δ = ((F − F*) / F*) · 100%,
indicates by how many percent the solution determined by the algorithm is worse (or better) than the reference one. Considering a whole set of examples, we can assess the quality of the solutions designated by the algorithm and thus, indirectly, the quality of the algorithm itself.
Let η̃ be an example of probabilistic data and Z(η̃) the data set generated from η̃ by disturbing the task execution times according to the assumed distribution. Further, let:
  • A_ref – the algorithm designating reference solutions,
  • A – the algorithm whose resistance we are testing (in our case TS^P or TS^{P+B}),
  • π(A, x) – the solution designated by algorithm A for the data x; F(π(A, x), y) – the value of the criterion function of the solution π(A, x) on the example y.
Then
Δ(A, η̃, Z(η̃)) = ( Σ_{φ ∈ Z(η̃)} F(π(A, η̃), φ) − Σ_{φ ∈ Z(η̃)} F(π(A_ref, φ), φ) ) / Σ_{φ ∈ Z(η̃)} F(π(A_ref, φ), φ)
is called the resistance of the solution π(A, η̃) (designated by algorithm A for the example η̃) on the set of disturbed data Z(η̃).
If P̃ is the set of examples of probabilistic data for the examined problem, then the expression
S(A, P̃) = (1 / |P̃|) Σ_{η̃ ∈ P̃} Δ(A, η̃, Z(η̃))
is called the resistance coefficient of algorithm A on the set P̃. The smaller its value, the more stable the solutions determined by the algorithm, i.e., random disturbances of the data cause smaller changes in the cost of the solution.
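The two measures can be sketched directly from their definitions. The helper names and the way the reference algorithm is passed in (as a function returning a reference permutation per disturbed example) are our own choices:

```python
def resistance(F, pi_A, pi_ref, disturbed):
    """Delta(A, eta, Z(eta)): relative gap between the fixed permutation
    pi_A (found once for the probabilistic example) and the per-example
    reference permutations, both evaluated on every disturbed example.
    F(pi, theta) is the criterion value of permutation pi on data theta;
    pi_ref(theta) returns the reference permutation for theta."""
    tested = sum(F(pi_A, theta) for theta in disturbed)
    ref = sum(F(pi_ref(theta), theta) for theta in disturbed)
    return (tested - ref) / ref

def resistance_coefficient(deltas):
    """S(A, P): mean resistance over a set of probabilistic examples."""
    return sum(deltas) / len(deltas)
```

A negative resistance value means the tested permutation beat the reference solutions on the disturbed data, matching the remark about negative coefficients in Section 7.2.2.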

7.2.1. Algorithms’ Efficiency

Computations of the probabilistic algorithm TS^P and its version with blocks, TS^{P+B}, were carried out on the examples from set D (see Section 7). The received results were compared with the best currently known ones. For each example, the relative error δ and the calculation time were recorded. Average relative error values for the individual groups of examples are shown in Table 1. The computation time was fixed at 60 s per instance.
As expected for all groups of examples, the average relative error is smaller for the algorithm with blocks. The difference is particularly noticeable for large-scale examples. For n = 100 tasks, the relative error of the algorithm without blocks is 59.84%, and with blocks only 6.65%, for the same calculation time of one minute for a single test instance.

7.2.2. Algorithms’ Stability

The main purpose of the conducted computational experiments was to examine the stability of the individual algorithms, i.e., the resistance of the solutions determined by these algorithms to random changes (disturbances) of the parameters.
Let D be the set of deterministic data, P̃ the corresponding set of probabilistic data, and Z the set of disturbed data. Based on these data, the stability coefficients of the individual algorithms were determined. Comparative results are shown in Table 2, Table 3 and Table 4 (the best values in a group are marked in bold).
In general, the tabu search algorithm with blocks is much more resistant to data disturbances. The comparative results generated for the 3 groups of examples of size n = 40 (Table 2), n = 50 (Table 3), and n = 100 (Table 4) clearly indicate that for almost every group of examples the value of the resistance coefficient is lower for the TS^{P+B} algorithm than for the TS^P algorithm. The only exceptions are the groups of examples 026–050 with c = 0.05 and c = 0.1 and 101–125 with c = 0.05 and c = 0.1 for n = 40; 026–050 with c = 0.05 and c = 0.1 and 101–125 with c = 0.05 for n = 50; and 101–125 with c = 0.1 and 076–100 with c = 0.2 for n = 100; however, they do not significantly affect the overall value of the resistance coefficient. A negative value of the coefficient means that the tested algorithm obtained solutions better than the reference ones.
A summary of the obtained results is given in Table 5 (the best values are marked in bold). The stability coefficients of both algorithms increase with the parameter c, which directly controls the variance of the distribution from which the data are drawn. The TS^{P+B} algorithm has a lower stability coefficient, so the solutions it determines are more resistant to data disturbances. In addition, within the same calculation time, this algorithm determines much better solutions than the TS^P algorithm (see Table 1); for n = 100 tasks, the relative error is almost 10 times smaller. Summing up, the use of blocks in the tabu search algorithm, compared with the algorithm without blocks, gives a significant improvement both in the value of the determined solutions and in their stability.

8. Summary

The paper presents a method of representing uncertain data in discrete optimization problems using independent, normally distributed random variables. In practice, this distribution is widely used for solving many non-deterministic decision problems. Taking uncertain data into account makes practical optimization problems considerably harder to compute; however, such models describe reality significantly better than deterministic ones.
We also presented the construction of an algorithm based on the tabu search method for the problem of scheduling tasks on a single machine so as to minimize the sum of penalties for tasks not completed on time. Random task execution times are represented by independent, normally distributed random variables. Since the objective function is a random variable, its expected value was adopted as the comparison criterion when choosing an element from the neighborhood. To accelerate the calculations, blocks are used, i.e., certain problem-specific methods of an indirect review of solutions. Computational experiments were conducted in order to test the stability of the algorithms, i.e., the impact of changes in task parameters on changes in the value of the criterion function. The obtained results clearly indicate that probabilistic algorithms, i.e., algorithms in which the randomness of parameters is taken into account, are much more stable. The use of blocks significantly accelerated the calculations and greatly improved the stability of the algorithm. The described methods and algorithms can be directly applied to other probability distributions and also to other task parameters (not only execution times). They can also be adopted in methods solving problems with uncertain parameters represented by fuzzy numbers, especially the so-called triangular fuzzy numbers.
Further research will focus on adapting the developed properties to cyclic problems with cycle time minimization, the most frequent case of large-scale manufacturing in practice.

Author Contributions

Conceptualization, W.B., P.R. and M.W.; methodology, M.W., and P.R.; software, M.U.; validation, W.B.; formal analysis, M.W.; investigation, M.W., and M.U.; resources, P.R.; data curation, M.U.; writing—original draft preparation, M.W.; writing—review and editing, W.B.; supervision, P.R.; project administration, M.U.; funding acquisition, W.B., and M.U. All authors have read and agreed to the published version of the manuscript.

Funding

The calculations were carried out using resources provided by the Wroclaw Centre for Networking and Supercomputing (http://wcss.pl, accessed on 28 January 2023), grant No. 96.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following notations are used in this manuscript:
n – number of tasks
J – set of tasks
p_i – execution time of task i
w_i – tardiness cost ratio of task i
d_i – demanded completion date of task i
π – permutation of tasks
Φ – set of all permutations of the elements of J
N(π) – neighborhood of the solution π
S_i – start date of task i ∈ J
C_i – end date of task i ∈ J
p̃_i – random variable of the execution time of task i
T̃_i – random variable of the tardiness of task i
π^T – semi-block of early tasks
π^D – semi-block of tardy tasks
L(π) – sum of tardiness costs (the criterion)

References

  1. Shang, C.; You, F. Distributionally robust optimization for planning and scheduling under uncertainty. Comput. Chem. Eng. 2018, 110, 53–68. [Google Scholar] [CrossRef]
  2. Zhang, L.; Lin, Y.; Xiao, Y.; Zhang, X. Stochastic single-machine scheduling with random resource arrival times. Int. J. Mach. Learn. Cybern. 2018, 9, 1101–1107. [Google Scholar] [CrossRef]
  3. Shaked, M.; Shanthikumar, J.G. (Eds.) Stochastic Orders and Their Applications; Academic Press: San Diego, CA, USA, 1994. [Google Scholar]
  4. Zhu, X.; Cai, X. General Stochastic Single-Machine Scheduling with Regular Cost Functions. Math. Comput. Model. 1997, 26, 95–108. [Google Scholar] [CrossRef]
  5. Van den Akker, M.; Hoogeveen, R. Minimizing the Number of Late Jobs in Case of Stochastic Processing Times with Minimum Success Probabilities; Technical Report; Institute of Information and Computation Science, Utrecht University: Utrecht, The Netherlands, 2004. [Google Scholar]
  6. Van den Akker, M.; Hoogeveen, R. Minimizing the number of late jobs in a stochastic setting using a chance constraint. J. Sched. 2008, 11, 59–69. [Google Scholar] [CrossRef]
  7. Vondrák, J. Probabilistic Methods in Combinatorial and Stochastic Optimization. Ph.D. Thesis, MIT, Cambridge, MA, USA, 2005. [Google Scholar]
  8. Dean, B.C. Approximation Algorithms for Stochastic Scheduling Problems. Ph.D. Thesis, MIT, Cambridge, MA, USA, 2005. [Google Scholar]
  9. Soroush, H.M. Scheduling stochastic jobs on a single machine to minimize the weighted number of tardy jobs. Kuwait J. Sci. 2013, 40, 123–147. [Google Scholar]
  10. He, X.X.; Yao, C.; Tang, Q.H. Robust Single Machine Scheduling with Stochastic Processing Times Based on Event Point. Appl. Mech. Mater. 2014, 668–669, 1641–1645. [Google Scholar] [CrossRef]
  11. Prade, H. Using fuzzy set theory in a scheduling problem. Fuzzy Sets Syst. 1979, 2, 153–165. [Google Scholar] [CrossRef]
  12. Ishii, H. Fuzzy combinatorial optimization. Jpn. J. Fuzzy Theory Syst. 1992, 4, 31–40. [Google Scholar] [CrossRef]
  13. Ishibuchi, H.; Yamamoto, N.; Misaki, S.; Tanaka, H. Local Search Algorithms for Flow Shop Scheduling with Fuzzy Due-Dates. Int. J. Prod. Econ. 1994, 33, 53–66. [Google Scholar] [CrossRef]
  14. Itoh, T.; Ishii, H. Fuzzy due-date scheduling problem with fuzzy processing times. Int. Trans. Oper. Res. 1999, 6, 639–647. [Google Scholar] [CrossRef]
  15. Bocewicz, G.; Nielsen, I.E.; Banaszak, Z.A. Production flows scheduling subject to fuzzy processing time constraints. Int. J. Comput. Integr. Manuf. 2016, 29, 1105–1127. [Google Scholar] [CrossRef]
  16. Rajba, P.; Wodecki, M. Stability of scheduling with random processing times on one machine. Applicationes Mathematicae 2012, 39, 169–183. [Google Scholar] [CrossRef]
  17. Bożejko, W.; Hejducki, Z.; Wodecki, M. Flowshop scheduling of construction processes with uncertain parameters. Arch. Civ. Mech. Eng. 2019, 19, 194–204. [Google Scholar] [CrossRef]
  18. Bożejko, W.; Grabowski, J.; Wodecki, M. Block approach-tabu search algorithm for single machine total weighted tardiness problem. Comput. Ind. Eng. 2006, 50, 1–14. [Google Scholar] [CrossRef]
  19. Rinnooy Kan, A.H.G.; Lageweg, B.J.; Lenstra, J.K. Minimizing total costs in one-machine scheduling. Oper. Res. 1975, 25, 908–927. [Google Scholar] [CrossRef]
  20. Bożejko, W.; Rajba, P.; Wodecki, M. Stable scheduling of single machine with probabilistic parameters. Bull. Polish Acad. Sci. Tech. Sci. 2017, 65, 219–231. [Google Scholar] [CrossRef]
  21. Lawler, E.L.; Moore, J.M. A Functional Equation and its Applications to Resource Allocation and Sequencing Problems. Manag. Sci. 1969, 16, 77–84. [Google Scholar] [CrossRef]
  22. Sahni, S.K. Algorithms for Scheduling Independent Jobs. J. Assoc. Comput. Mach. 1976, 23, 116–127. [Google Scholar] [CrossRef]
  23. Potts, C.N.; Van Wassenhove, L.N. Single Machine Tardiness Sequencing Heuristics. IIE Trans. 1991, 23, 346–354. [Google Scholar] [CrossRef]
  24. Villarreal, F.J.; Bulfin, R.L. Scheduling a Single Machine to Minimize the Weighted Number of Tardy Jobs. IIE Trans. 1983, 15, 337–343. [Google Scholar] [CrossRef]
  25. Urgo, M.; Váncza, J. A branch-and-bound approach for the single machine maximum lateness stochastic scheduling problem to minimize the value-at-risk. Flex. Serv. Manuf. J. 2018, 31, 472–496. [Google Scholar] [CrossRef]
  26. Wodecki, M. A Branch-and-Bound Parallel Algorithm for Single-Machine Total Weighted Tardiness Problem. Adv. Manuf. Technol. 2008, 37, 996–1004. [Google Scholar] [CrossRef]
  27. Nowicki, E.; Smutnicki, C. A fast tabu search algorithm for the permutation flow shop problem. Eur. J. Oper. Res. 1996, 91, 160–175. [Google Scholar] [CrossRef]
  28. Grabowski, J.; Wodecki, M. A very fast tabu search algorithm for the permutation flow shop problem with makespan criterion. Comput. Oper. Res. 2004, 31, 1891–1909. [Google Scholar] [CrossRef]
  29. Uchroński, M. Parallel Algorithm with Blocks for a Single-Machine Total Weighted Tardiness Scheduling Problem. Appl. Sci. 2021, 11, 2069. [Google Scholar] [CrossRef]
  30. Wodecki, M. A block approach to earliness-tardiness scheduling problems. Adv. Manuf. Technol. 2009, 40, 797–807. [Google Scholar] [CrossRef]
  31. Cai, X.; Wu, X.; Zhou, X. Optimal Stochastic Scheduling; Springer-Verlag New York Inc.: New York, NY, USA, 2014. [Google Scholar]
  32. Bożejko, W.; Hejducki, Z.; Rajba, P.; Wodecki, M. Project management in building process with uncertain tasks times. Manag. Prod. Eng. Rev. 2011, 2, 3–9. [Google Scholar]
  33. Bożejko, W.; Uchroński, M.; Wodecki, M. Block approach to the cyclic flow shop scheduling. Comput. Ind. Eng. 2015, 81, 158–166. [Google Scholar] [CrossRef]
  34. Bożejko, W.; Wodecki, M. Solving Permutational Routing Problems by Population-Based Metaheuristics. Comput. Ind. Eng. 2009, 57, 269–276. [Google Scholar] [CrossRef]
  35. Bożejko, W.; Pempera, J.; Uchroński, M.; Wodecki, M. Parallel Block-Based Simulated Annealing for the Single Machine Total Weighted Tardiness Scheduling Problem. In Proceedings of the 16th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2021), Bilbao, Spain, 22–24 September 2021; Advances in Intelligent Systems and Computing Book Series (AISC). Springer: Berlin/Heidelberg, Germany, 2022; Volume 1401, pp. 758–765. [Google Scholar]
  36. Bożejko, W.; Rajba, P.; Wodecki, M. Blocks of jobs for solving two-machine flow shop problem with normal distributed processing times. In Proceedings of the 15th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2020), Burgos, Spain, 16–18 September 2020; Advances in Intelligent Systems and Computing Book Series (AISC). Springer: Berlin/Heidelberg, Germany, 2021; pp. 289–298. [Google Scholar]
  37. OR Library. Available online: https://www.brunel.ac.uk/~mastjjb/jeb/info.html (accessed on 22 March 2023).
Table 1. Computational results for deterministic data [37] (t = 60 s).

Instance Group      wt40                wt50                wt100
                TS^P    TS^{P+B}    TS^P    TS^{P+B}    TS^P     TS^{P+B}
000–025          6.66     0.12       1.43     0.04        1.77     0.22
026–050          5.27     0.75      17.07     0.09      230.38    23.62
050–075          1.93     0.80       2.28     1.29        2.16     1.23
076–100         42.77     5.92       9.25     3.70       64.54     7.81
101–125          0.45     1.23       1.27     2.85        0.36     0.39
Average         11.42     1.76       6.26     1.60       59.84     6.65
Table 2. Computational results S(A, P̃) for the wt40 instance group.

Instance Group      c = 0.05              c = 0.1               c = 0.2
                TS^P     TS^{P+B}    TS^P      TS^{P+B}    TS^P      TS^{P+B}
000–025         0.0175   0.0170      0.0183    0.0044      0.0912    0.0850
026–050         0.0087   0.0221      3.7938    3.1483      0.2869    0.2023
050–075         0.0009   0.0036      0.0247    0.0177      2.4646    2.2704
076–100         0.0009   0.0288     20.6039   18.0087      0.2451    0.1578
101–125         0.0013   0.0033      0.4440    0.7487     13.2248   12.6221
Average         0.0059   0.0048      4.9769    4.3856      3.2625    3.0675
Table 3. Computational results S(A, P̃) for the wt50 instance group.

Instance Group      c = 0.05              c = 0.1               c = 0.2
                TS^P     TS^{P+B}    TS^P      TS^{P+B}    TS^P      TS^{P+B}
000–025         0.0049   0.0070      0.0263    0.0169      0.0826    0.0862
026–050         0.0148   0.2466      0.1426    0.1878      0.3270    0.1983
050–075         0.0146   0.0064      0.0595    0.0078      0.2674    0.1840
076–100         0.0054   0.0256      0.0077   -0.0037      0.1847    0.1720
101–125         0.0138   0.0217      0.0277    0.0265      0.6870    0.6292
Average         0.0107   0.0459      0.0528    0.0471      0.3097    0.2540
Table 4. Computational results S(A, P̃) for the wt100 instance group.

Instance Group      c = 0.05              c = 0.1               c = 0.2
                TS^P     TS^{P+B}    TS^P      TS^{P+B}    TS^P      TS^{P+B}
000–025         0.0028   0.0040      0.0199    0.0157      0.0855    0.0761
026–050        15.0490   5.3410      0.0482    0.0478      0.3703    0.3258
050–075         0.0131   0.0017      0.0282    0.0122     19.6031   17.7367
076–100         0.0642   0.0294      0.8347    0.7095      0.3933    0.4549
101–125         0.0596   0.0345      0.3063    0.3850      1.1980    1.0438
Average         3.0377   1.0681      0.2475    0.2341      4.3300    3.9275
Table 5. Comparative computational results of S(A, P̃) — summary.

                    c = 0.05              c = 0.1               c = 0.2
                TS^P     TS^{P+B}    TS^P      TS^{P+B}    TS^P      TS^{P+B}
wt40            0.0059   0.0048      4.9769    4.3856      3.2625    3.0675
wt50            0.0107   0.0459      0.0528    0.0471      0.3097    0.2540
wt100           3.0377   1.0681      0.2475    0.2341      4.3300    3.9275
Average         1.0181   0.3697      1.7591    1.5556      2.6341    2.4163