Article

On the Recursive Representation of the Permutation Flow and Job Shop Scheduling Problems and Some Extensions

1 V. A. Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, 117997 Moscow, Russia
2 Fakultät für Mathematik, Otto-von-Guericke-Universität Magdeburg, 39016 Magdeburg, Germany
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(19), 3185; https://doi.org/10.3390/math13193185
Submission received: 18 July 2025 / Revised: 28 September 2025 / Accepted: 29 September 2025 / Published: 4 October 2025
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)

Abstract

In this paper, we propose a formulation of the permutation flow and job shop scheduling problems using special recursive functions and show its equivalence to the existing classical formulation. Equivalence is understood in the sense that both ways of defining the problem describe the same set of feasible schedules for each pair of job and machine numbers. The apparatus of recursive functions is used to describe and solve three problems: the permutation flow shop; the permutation flow shop with the addition of the ‘and’ predicate, which extends the machine chain to an acyclic graph; and the permutation job shop. The ‘and’ predicate allows the description of flow shop problems with assembly operations. The recursive functions have a common domain and range. To calculate an optimal schedule for each of these three problems, a branch and bound method is considered based on a recursive function that implements a job swapping algorithm. The complexity of the optimization algorithm does not increase compared to the non-recursive description of the PFSP. This article presents some results for the calculation of optimal schedules on several test instances. It is expected that the new method, based on the description of recursive functions and their superposition, will be productive for formulating and solving some extensions of scheduling problems that have practical significance.

1. Introduction

A huge number of publications in the scheduling literature are devoted to the permutation flow shop problem (PFSP) or to the more general flow shop problem (FSP). In both the PFSP and the FSP, all jobs must be processed on a set of m machines according to a fixed technological order. In the PFSP, all jobs are performed by each machine in the same order, while in the FSP, jobs may be performed in different orders on different machines, selected to satisfy a specific optimization criterion. Often, the makespan has to be minimized, i.e., the maximum completion time of the complete set of jobs should be minimized. The PFSP is denoted as F | prmu | C_max. In [1], an overview of the PFSP with the makespan criterion is presented. Three categories of methods used to solve the PFSP are considered: traditional algorithms, heuristic algorithms, and meta-heuristic algorithms. That study focused on meta-heuristic algorithms due to their effectiveness in solving many application problems and demonstrated that meta-heuristic algorithms play a vital role in solving the PFSP. Most works deal with various heuristics and metaheuristics; see, e.g., [2,3,4,5,6]. Much of the current work in the field of the PFSP is related to Industry 4.0 [7,8]. References [9,10] provide an overview of the FSP with assembly operations (AFSP). The basic assembly process in a workshop is described, consisting of two stages: the manufacturing or machining stage and the assembly stage. The machining and assembly stages use one or more machines operating in parallel. The final products have a hierarchical assembly structure with multiple components and assembly operations. A review of assembly shop models with a methodology for solving the corresponding problems of scheduling theory is presented. No less intensive work has been done on the job shop problem (JSP). We briefly discuss some of the latest publications.
A review of more than 120 papers has been presented in [11]. It examines the problem in relation to the new demands of Industry 4.0; this line of research has split into several new independent ones. Another review [12] briefly described the research developments and current state of the art of job shop problems, classifying existing research methods and discussing future directions for development. Reference [13] described the development of an innovative high-performance genetic algorithm for the JSP. In [14], a genetic algorithm and a simulated annealing algorithm have been proposed for application in the manufacturing industry. Finally, reference [15] provided an overview of a generalization of the classical job shop problem, namely the flexible job shop problem (FJSP).
This paper attempts to use the apparatus of recursive functions to initiate systematic formulations of extensions of problems of scheduling theory. A recursive function, by definition, is calculated using some algorithm. This, in some cases, allows the specifics of the problem extension to be incorporated into the algorithm of the recursive function. It is proposed to formulate and solve the extended problems of scheduling theory using a set of functions and their superposition. At this stage, the paper presents a description of the PFSP using recursive functions, as well as a transition to problems with the ‘and’ predicate and to a permutation variant of the job shop problem (PJSP). The PFSP formulation with the ‘and’ predicate allows for solving problems involving assembly operations. All functions have a common domain and range, which greatly simplifies their application. The authors demonstrate the applicability of the branch and bound method to all three problems. The flexibility requirements of Industry 4.0 manufacturing necessitate the use of PFSP, PFSP with assembly operations, and PJSP models. This can be accomplished by combining the three models in one solver.
It can be noted that the general PFSP with at least three machines and minimizing the makespan is already NP-hard in the strong sense, so the existence of polynomial solution algorithms is unlikely. Therefore, all exact enumerative algorithms for the PFSP have an exponential complexity. Since the PFSP with assembly operations and also the PJSP contain the PFSP as a special case, these problems are also NP-hard in the strong sense.
This article discusses a new approach through the definition of recursive functions to describe the timing characteristics of the PFSP. The concept of interval calculations is used to describe time domains. The equivalence of the PFSP representation in the existing and the recursive forms is proved. Here, equivalence is understood as the equality of the sets of feasible schedules for each pair of job number and machine number. Then, the transition from interval recursive functions to scalar ones is carried out. It is shown that with a fixed order of operations, scalar functions calculate an optimal schedule on the set of feasible schedules.
Moreover, the recursive representation of the PFSP is extended by including the predicate ‘and’, which allows one to move from chains of machines to trees and acyclic graphs. The branch and bound method (B&B) [16] using the introduced recursive function is considered as an exact optimization method. A description of one of the variants of a lower bound on the makespan and the branching method are given. The branching method is based on an original permutation generator with a given property. The permutation algorithm and the recursive implementation of the B&B method are described. It is shown that the complexity of the algorithm does not increase compared to the known B&B algorithms for the PFSP [16,17].
The rest of this paper is organized as follows. Section 2 reviews some basics of interval calculations relevant for the PFSP. Section 3 proves the equivalence of the existing and the recursive PFSP representations. Section 4 provides a method for implementing the B&B algorithm for the recursive PFSP. Section 5 gives the definition of the ‘and’ function and the extended formulation of the problem. Then, Section 6 discusses the B&B method for the introduced extended formulation of the problem. Section 7 presents some initial results with the suggested algorithm. In Section 8, a recursive function is formulated to describe a variant of the job shop problem.

2. Interval Values and Operations

Interval calculations in mathematics have been known for a long time; see, for example, [18]. In our case, this theory is used at the level of definitions and properties of some interval operations. We will consider the set of values of the time interval as a segment of integers on the number line with starting and end points included.
Definition 1.
Let N = {0, 1, 2, 3, …, N̂} be the finite set of natural numbers from one to N̂ with zero added. Everywhere below, by an interval or domain [a, b], a ≤ b, unless otherwise stated, we denote the closed bounded subset of N of the form
[a, b] = {x | x ∈ N, a ≤ x ≤ b}.
The set of all intervals is denoted by I. We will write the elements of I in capital letters. If A ∈ I, then we denote its left and right end points as a̲ and a̅: A = [a̲, a̅]. We will also denote A̲ = a̲ and A̅ = a̅. We call the elements of I interval numbers. Two intervals A and B are equal if and only if a̲ = b̲ and a̅ = b̅. According to the total interval approach [18], we can describe operations as follows. Let ∘ be some operation and A, B ∈ I. In this case, we have
A ∘ B = {a ∘ b | a ∈ A, b ∈ B}.
If A = [a̲, a̅] and B = [b̲, b̅], then A ∘ B = [a̲ ∘ b̲, a̅ ∘ b̅]. We will use only two interval operations: “+” and “max”. They are defined as follows:
A + B = [a̲ + b̲, a̅ + b̅]; max{A, B} = [max{a̲, b̲}, max{a̅, b̅}].
To illustrate, consider the following example.
Example 1.
Let A = [12, 17] and B = [3, 5]. Then,
A + B = [12, 17] + [3, 5] = [12 + 3, 17 + 5] = [15, 22]; max{A, B} = max{[12, 17], [3, 5]} = [max{12, 3}, max{17, 5}] = [12, 17].
If A = [6, 9] and B = [3, 12], then
max{A, B} = max{[6, 9], [3, 12]} = [max{6, 3}, max{9, 12}] = [6, 12].
While in the first case, the interval A is obtained by the operation max, in the second case, a subinterval of B results.
To illustrate, below are some further examples of various options for the max operation:
  • max { [ 1 , 4 ] , [ 2 , 3 ] } = [ 2 , 4 ] ;
  • max { [ 2 , 5 ] , [ 3 , 6 ] } = [ 3 , 6 ] ;
  • max { [ 5 , 15 ] , [ 0 , 0 ] } = [ 5 , 15 ] .
A confluent interval, i.e., an interval with coinciding end points a = a̲ = a̅, is identical to the integer a. Thus, N ⊂ I. If A and B are confluent intervals, then the results of the operations on intervals coincide with ordinary arithmetic operations for non-negative integers. An interval integer is a generalization of an integer, and interval integer arithmetic is a generalization of integer arithmetic.
Note also that for B = [b, b], we also write for simplicity A + B = A + b. In the theory of intervals, the role of zero is played by the usual 0, which is identified with the confluent interval [0, 0]. In other words,
A + 0 = 0 + A = A, max{A, 0} = max{0, A} = A for all A ∈ I.
The calculations can lead to an interval ∅ = [a, b] with a > b, which, according to Definition 1, represents the empty set.
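The interval operations of this section are easy to express in code. Below is a minimal sketch (not part of the paper), representing an interval [a, b] as a pair of non-negative integers; the helper names `iadd`, `imax`, and `is_empty` are ours:

```python
# Interval arithmetic sketch for the two operations used in the text:
# "+" and "max" act componentwise on the end points of [a, b].

def iadd(A, B):
    """A + B = [a_lo + b_lo, a_hi + b_hi]."""
    return (A[0] + B[0], A[1] + B[1])

def imax(A, B):
    """max{A, B} = [max{a_lo, b_lo}, max{a_hi, b_hi}]."""
    return (max(A[0], B[0]), max(A[1], B[1]))

def is_empty(A):
    """An interval [a, b] with a > b denotes the empty set."""
    return A[0] > A[1]

# Example 1 from the text:
assert iadd((12, 17), (3, 5)) == (15, 22)
assert imax((12, 17), (3, 5)) == (12, 17)
assert imax((6, 9), (3, 12)) == (6, 12)

# The confluent interval [0, 0] is neutral for both operations:
assert iadd((5, 15), (0, 0)) == (5, 15)
assert imax((5, 15), (0, 0)) == (5, 15)
```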

3. Solving the Permutation Flow Shop Equivalence Problem

Consider the description of the PFSP [19]. Let J = {1, 2, …, n} be the set of jobs and M = {1, 2, …, m} be the set of machines arranged sequentially. Moreover, let p_{j,k} be the processing time of job j on machine k.
The precedence graph of such a problem (see Figure 1a) is a simple chain in which the vertices are machines, and the edges define precedence relations between the machines. This graph can be “expanded” if we fix some order of the jobs and define precedence relations between the jobs J × J and between the machines M × M . Any vertex of such a graph is identified with the pair (job number, machine number). An expanded graph describing precedence relations between the jobs is presented in Figure 1b.
The vertical edges define precedence relations between the jobs for a particular machine. The horizontal edges define precedence relations between the machines for a fixed job. This graph is acyclic and has one start and one end vertex. Consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., for simplicity of the description, consider the schedule ( 1 , 2 , . . . , n ) with the new numbering. The following assumptions are valid for the PFSP:
  • Job assumptions: A job cannot be processed by more than one machine at a time; each job must be processed in accordance with the machine precedence relationship; the job is processed as early as possible, depending on the order of the machines; all jobs are equally important, meaning there are no priorities, deadlines, or urgent orders.
  • Machine assumptions: No machine can process more than one job at a time; after starting a job, it must be completed without interruption; there is only one machine of each type; no job is processed more than once by any machine.
  • Assumptions about the processing times: The processing time of each job on each machine does not depend on the sequence in which the jobs are processed; the processing time of each job on each machine is specific and integer; the transport time between machines and a setup time, if any, are included in the processing time.
Additional time limits can be set in scheduling problems. Denote by T(j,k) = [t̲_{j,k}, t̄_{j,k}] the interval completion time of job j on the kth machine, where t̲_{j,k} is the minimal time for the completion of job j on the kth machine and t̄_{j,k} is the corresponding maximal time. The completion time t_{j,k} must satisfy the condition t̲_{j,k} ≤ t_{j,k} ≤ t̄_{j,k}; otherwise, the schedule is infeasible. In fact, this is the interval function of the completion time of job j on machine k. We will assume that in the initial formulation of the problem, a matrix T0(j,k) = [t̲⁰_{j,k}, t̄⁰_{j,k}] of initial domains of the feasible values for the job completion times has been defined. We will assume, unless otherwise specified, that the initial domains are the interval [0, Tmax], where Tmax is a sufficiently large finite value, which will be discussed below. We describe two additional restrictions [19]:
  • r_j is the moment when job j arrives for processing, i.e., its release date. This parameter defines the time point from which job j can be scheduled for execution, but its processing does not necessarily begin at this moment. In this case, in the matrix of initial domains, it is necessary to put T0(j,k) = [r_j + p_{j,k}, Tmax] for j ∈ J and 1 ≤ k ≤ m.
  • D_j is the deadline for processing job j. A deadline cannot be violated, and any schedule that contains a job that finishes after its deadline is infeasible. In this case, in the matrix of initial domains, it is necessary to put T0(j,k) = [0, D_j] for j ∈ J and 1 ≤ k ≤ m.
The combination of the constraints r_j and D_j generates initial domains of the form T0(j,k) = [r_j + p_{j,k}, D_j] for j ∈ J and 1 ≤ k ≤ m. If these restrictions relate to specific operations, they will look like r_{j,k} and D_{j,k}. In this case, the following time relations will be satisfied:
p_{1,1} ≤ t_{1,1}, t_{1,1} ≤ t_{1,2} − p_{1,2}, …, t_{1,m−1} ≤ t_{1,m} − p_{1,m}. (1)
t_{1,1} ≤ t_{2,1} − p_{2,1}, max(t_{1,2}, t_{2,1}) ≤ t_{2,2} − p_{2,2}, …, max(t_{1,m}, t_{2,m−1}) ≤ t_{2,m} − p_{2,m}. (2)
⋮
t_{n−2,1} ≤ t_{n−1,1} − p_{n−1,1}, max(t_{n−2,2}, t_{n−1,1}) ≤ t_{n−1,2} − p_{n−1,2}, …, max(t_{n−2,m}, t_{n−1,m−1}) ≤ t_{n−1,m} − p_{n−1,m}. (3)
t_{n−1,1} ≤ t_{n,1} − p_{n,1}, max(t_{n−1,2}, t_{n,1}) ≤ t_{n,2} − p_{n,2}, …, max(t_{n−1,m}, t_{n,m−1}) ≤ t_{n,m} − p_{n,m}. (4)
Here, we take into account the fact that the interval time B ( j , k ) for the start of job j on the kth machine is calculated by the following formula:
B(j,k) = [t̲_{j,k}, t̄_{j,k}] − p_{j,k} = [t̲_{j,k} − p_{j,k}, t̄_{j,k} − p_{j,k}], 1 ≤ j ≤ n, 1 ≤ k ≤ m.
The above inequalities are supplemented by inequalities of the form t̲_{j,k} ≤ t̄_{j,k}.
The system of inequalities (1)–(4) defines an infinite set of feasible schedules. To reduce this set to a finite one, we will use a positive integer constant T m a x and supplement the system (1)–(4) with the inequalities
t̄_{j,k} ≤ Tmax, 1 ≤ j ≤ n, 1 ≤ k ≤ m. (5)
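The initial-domain matrix described above can be sketched in a few lines of Python. The helper name `build_T0` and its argument conventions (0-based indices, `None` for an absent constraint vector) are ours, not from the paper:

```python
# Build the matrix of initial domains T0(j, k) as pairs (lo, hi).
# Default domain is [0, Tmax]; a release date r_j lifts the lower end
# to r_j + p[j][k], and a deadline D_j lowers the upper end to D_j.

def build_T0(p, r=None, D=None, Tmax=10**6):
    n, m = len(p), len(p[0])
    T0 = []
    for j in range(n):
        row = []
        for k in range(m):
            lo = r[j] + p[j][k] if r is not None else 0      # release-date constraint
            hi = min(D[j], Tmax) if D is not None else Tmax  # deadline constraint
            row.append((lo, hi))
        T0.append(row)
    return T0

# Without extra constraints, every domain is [0, Tmax]:
assert build_T0([[2, 3]], Tmax=16) == [[(0, 16), (0, 16)]]
# With r_1 = 5 and D_1 = 12: T0(1, k) = [5 + p_{1,k}, 12]:
assert build_T0([[2, 3]], r=[5], D=[12], Tmax=16) == [[(7, 12), (8, 12)]]
```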
It is possible to assign to T m a x a value so small that the set of feasible schedules is empty.
Remark 1.
In what follows, we always assume that there is a constant T m a x such that there is at least one feasible schedule.
Example 2.
Consider a PFSP with three machines and four jobs.
Table 1 shows the processing times p j , k of the jobs. The rows correspond to the job numbers, and the columns represent the machine numbers.
Let T m a x = 16 . In this case, the intervals for each pair ( j , k ) are given in Table 2.
Figure 2 shows the Gantt charts of three variants from the set of feasible schedules. ”Schedule 1” and ”Schedule 2” are optimal schedules (for this sequence), while ”Schedule 2” and ”Schedule 3” have different inserted idle times. It is easy to verify that increasing any interval in Table 2 will lead to the appearance of infeasible schedules, and decreasing any interval will result in the loss of feasible schedules.
Let T′(j,k) be the interval calculated by some algorithm for completing the jth job on the kth machine, and let T0(j,k) be the corresponding initial domain. According to Figure 3, we define the interval noncommutative intersection operation (∩) as T(j,k) = T′(j,k) ∩ T0(j,k). The figure shows all significant cases of interval intersections:
(a)
the interval T′(j,k) is to the left of T0(j,k). The completion time of the jth job on the kth machine should be artificially increased to values from the interval T0(j,k). Only in this case will the schedule be feasible;
(b–e)
are clear without additional explanation;
(f)
with this arrangement of intervals, the value of the interval T ( j , k ) is equal to an empty set, which corresponds to the absence of a feasible schedule.
Thus, we have:
T(j,k) = ∅, if T̄0(j,k) < T̲′(j,k);
T(j,k) = [max{T̲′(j,k), T̲0(j,k)}, T̄0(j,k)], otherwise.
It can be shown that all feasible sets of time values are uniquely determinable and maximal. Therefore, when searching for an optimal solution, no solution will be lost.
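The noncommutative intersection can be written directly from this case analysis. A sketch, with intervals as `(lo, hi)` pairs and `None` standing for the empty interval; the name `iintersect` is illustrative:

```python
# T(j,k) = T'(j,k) ∩ T0(j,k): keep the upper end of the initial domain,
# lift the lower end to the larger of the two lower ends; the result is
# empty when T0 ends before T' begins (case (f) of Figure 3).

def iintersect(T, T0):
    if T0[1] < T[0]:
        return None                          # empty set: no feasible schedule
    return (max(T[0], T0[0]), T0[1])

# Case (a): T entirely to the left of T0 -> completion time shifted right.
assert iintersect((2, 4), (6, 10)) == (6, 10)
# Case (f): T entirely to the right of T0 -> empty set.
assert iintersect((12, 14), (6, 10)) is None
```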
All precedence relations define intervals (including the scalar c = [c, c]). Again, for simplicity of the description, consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule (1, 2, …, n) with the new numbering. The calculations of these intervals can be divided into four groups. Next, we write them using interval variables, assuming that the initial interval is T0(j,k) and T̄0(j,k) ≤ Tmax:
  • For the completion of the first job on the first machine (j = 1, k = 1), we have:
    T′(1,1) = [p_{1,1}, p_{1,1}], T(1,1) = T′(1,1) ∩ T0(1,1).
  • For the completion of the first job on the kth machine (j = 1, 1 < k ≤ m), we have:
    T′(1,k) = T(1,k−1) + [p_{1,k}, p_{1,k}], T(1,k) = T′(1,k) ∩ T0(1,k).
  • For the completion of job j on the first machine (1 < j ≤ n, k = 1), we have:
    T′(j,1) = T(j−1,1) + [p_{j,1}, p_{j,1}], T(j,1) = T′(j,1) ∩ T0(j,1).
  • The interval of the completion time of job j on the kth machine (1 < j ≤ n, 1 < k ≤ m) is obtained as follows:
    T′(j,k) = max{T(j−1,k), T(j,k−1)} + [p_{j,k}, p_{j,k}], T(j,k) = T′(j,k) ∩ T0(j,k).
The PFSP can be formulated using a finite set of special recursive functions. This is due to the existence of precedence relationships between the machines and jobs in the PFSP, where the current characteristics of the problem are determined by the preceding ones. Recursive functions have arguments and parameters as data. The arguments of the functions are pairs (job number, machine number). They are passed as function arguments in the normal mathematical sense. Parameters are defined outside the recursive function, and they are essentially global variables in imperative programming terms. Examples of parameters are n, m, Tmax, p_{j,k}, T0(j,k), and so on. For any recursive function R whose set of arguments is determined by the Cartesian product A_1 × A_2 × ⋯ × A_l, with the parameters P_1, P_2, …, P_h and the set of values I, we will write R{P_1, P_2, …, P_h}: A_1 × A_2 × ⋯ × A_l → I. If a recursive function has no parameters, they are skipped together with the curly braces in the definition of the recursive function.
We will denote the interval recursive function as
T{n, m, p_{j,k}, T0(j,k)}: J × M → I.
Specific types of recursive functions will be provided subsequently as the PFSP is considered. Let us consider the problem of equivalence of the existing PFSP formulation in the literature and its recursive representation. The equivalence of both formulations is established through the equivalence of the domains of acceptable values of the completion times for each pair (job number, machine number). An optimal solution is within the allowed processing times. This fact allows us to move to the functional representation of the problem and to apply various appropriate methods. We can describe each group by the corresponding recursive functions:
  • T(1,1) = [p_{1,1}, p_{1,1}] ∩ T0(1,1); j = 1, k = 1.
  • T(1,k) = (T(1,k−1) + [p_{1,k}, p_{1,k}]) ∩ T0(1,k); j = 1, 1 < k ≤ m.
  • T(j,1) = (T(j−1,1) + [p_{j,1}, p_{j,1}]) ∩ T0(j,1); 1 < j ≤ n, k = 1.
  • T(j,k) = (max{T(j−1,k), T(j,k−1)} + [p_{j,k}, p_{j,k}]) ∩ T0(j,k); 1 < j ≤ n, 1 < k ≤ m.
This set of functions can be combined into one function:
T(j,k) =
  [p_{1,1}, p_{1,1}] ∩ T0(1,1), if j = 1, k = 1;
  (T(1,k−1) + [p_{1,k}, p_{1,k}]) ∩ T0(1,k), if j = 1, 1 < k ≤ m;
  (T(j−1,1) + [p_{j,1}, p_{j,1}]) ∩ T0(j,1), if 1 < j ≤ n, k = 1;
  (max{T(j−1,k), T(j,k−1)} + [p_{j,k}, p_{j,k}]) ∩ T0(j,k), if 1 < j ≤ n, 1 < k ≤ m. (6)
The function T(j,k) is defined for all values of j and k. If T(j,k) = ∅, then there is no feasible schedule. It can be seen from the formula that it calculates a unique interval value for each pair (j, k). We show that this recursive function defines the same domain for each (j, k) as the set of values defined by the system of inequalities (1)–(5).
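The combined recursive interval function can be transcribed almost literally into code. A Python sketch with 0-based indices (the paper counts jobs and machines from 1); `make_T` is an illustrative name, intervals are `(lo, hi)` pairs, and `None` stands for the empty interval:

```python
from functools import lru_cache

def make_T(p, T0):
    """Recursive interval function over n x m matrices p (processing
    times) and T0 (initial domains as (lo, hi) pairs)."""
    def inter(T, D):
        if T is None or D[1] < T[0]:
            return None                      # empty interval: no feasible schedule
        return (max(T[0], D[0]), D[1])

    @lru_cache(maxsize=None)
    def T(j, k):
        if j == 0 and k == 0:
            cur = (p[0][0], p[0][0])
        elif j == 0:                         # first job, machine k
            prev = T(0, k - 1)
            cur = None if prev is None else (prev[0] + p[0][k], prev[1] + p[0][k])
        elif k == 0:                         # job j, first machine
            prev = T(j - 1, 0)
            cur = None if prev is None else (prev[0] + p[j][0], prev[1] + p[j][0])
        else:                                # general case: interval max, then add
            a, b = T(j - 1, k), T(j, k - 1)
            if a is None or b is None:
                return None
            cur = (max(a[0], b[0]) + p[j][k], max(a[1], b[1]) + p[j][k])
        return inter(cur, T0[j][k])

    return T

# Two jobs on two machines, all initial domains [0, 100]:
T = make_T([[2, 3], [1, 2]], [[(0, 100)] * 2, [(0, 100)] * 2])
assert T(1, 1) == (7, 100)   # lower end 7 is the makespan; upper end stays at Tmax
```

Note that the upper end of every computed interval equals Tmax, which is exactly the observation that later justifies the transition to a scalar function.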
Theorem 1.
Let the following conditions be satisfied for both the PFSP given by the system of inequalities (1)–(5) and the recursive PFSP:
  • one graph is set for both problems (see Figure 1a);
  • consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule ( 1 , 2 , . . . , n ) with the new numbering;
  • the processing time of job j on the kth machine is p j , k ;
  • the same matrix of initial time intervals is specified:
    T 0 ( j , k ) = [ t ̲ j , k 0 , t ¯ j , k 0 ] .
Taking into account Remark 1, this means that the set of feasible schedules is defined on intervals determined by T 0 ( j , k ) from below and T m a x from above.
Then, the following statement is true: the set of domains D(j,k) of feasible times for the non-recursive PFSP coincides with the set of domains D′(j,k) for the recursive PFSP, i.e., D(j,k) = D′(j,k) for 1 ≤ j ≤ n, 1 ≤ k ≤ m.
In other words, the sets of feasible schedules for each pair (job number, machine number) are the same, or in both cases, there is no feasible schedule, i.e., there are j and k such that D(j,k) = D′(j,k) = ∅.
Proof. 
Let the conditions of the theorem be satisfied. The proof is obtained via complete induction. Let us recall the formulation of the principle of complete induction. Let there be a sequence of statements P_1, P_2, P_3, …. If for any natural i the fact that all P_1, P_2, P_3, …, P_{i−1} are true also implies that P_i is true, then all statements in this sequence are true. Therefore, for i ∈ N, i ≥ 2, we have
((∀j ∈ {1, …, i−1}) P_j ⟹ P_i) ⟹ (∀i ∈ N) P_i.
Here, x ⟹ y denotes a logical implication. The statement P_i is the equality of the intervals D(j,k) = D′(j,k). Let us apply this method to the graph in Figure 1b. From graph theory, it is known that the vertices of an acyclic graph are strictly partially ordered. There exist algorithms for constructing some linear order for a strictly partially ordered graph. Informally, a linear order is when all the vertices of a graph are arranged in a line, and all edges are directed from left to right. Any such algorithm is suitable because:
-
the first vertex is always ( 1 , 1 ) of the original graph,
-
the last vertex is always ( n , m ) ,
-
when evaluating the recursive function according to a linear order, its arguments have already been evaluated.
Let us choose a row-by-row traversal. In this case, for a pair (j, k), all pairs (j′, k′) with j′ < j, or with j′ = j and k′ < k, have already been considered. We will traverse the elements of the matrix of interval variables T in the rows from left to right and from the top row to the bottom one.
  • Let us prove the statement P_1. From the system of inequalities (1)–(4), it follows that
    T(1,1) = [p_{1,1}, p_{1,1}] ∩ T0(1,1).
    From the definition of the recursive function, it follows that
    T′(1,1) = [p_{1,1}, p_{1,1}] ∩ T0(1,1).
    Therefore, we have T(1,1) = T′(1,1).
  • Let us assume that the statements P 1 , , P i 1 are true. We now prove that the statement P i is also true. Among the four groups of relationship types described above, the first type is considered for j = 1 , k = 1 .
  • Let us analyze the remaining three types.
    (a)
    Processing of the first job on the kth machine (j = 1, 1 < k ≤ m): In this case, the problem satisfies the equality
    T(1,k) = (T(1,k−1) + [p_{1,k}, p_{1,k}]) ∩ T0(1,k).
    In turn,
    T′(1,k) = (T′(1,k−1) + [p_{1,k}, p_{1,k}]) ∩ T0(1,k),
    and due to the condition T(1,k−1) = T′(1,k−1), we get
    T′(1,k) = (T(1,k−1) + [p_{1,k}, p_{1,k}]) ∩ T0(1,k) = T(1,k).
    Therefore, we have T(1,k) = T′(1,k).
    (b)
    Processing of job j on the first machine (1 < j ≤ n, k = 1): In this case, the problem satisfies the equality
    T(j,1) = (T(j−1,1) + [p_{j,1}, p_{j,1}]) ∩ T0(j,1).
    In the case of recursion,
    T′(j,1) = (T′(j−1,1) + [p_{j,1}, p_{j,1}]) ∩ T0(j,1).
    Due to the condition T(j−1,1) = T′(j−1,1), finally, we get
    T′(j,1) = (T(j−1,1) + [p_{j,1}, p_{j,1}]) ∩ T0(j,1) = T(j,1).
    Therefore, we have T(j,1) = T′(j,1).
    (c)
    Processing of job j on the kth machine (1 < j ≤ n, 1 < k ≤ m) assumes that the problem satisfies the equalities
    T(j−1,k) = T′(j−1,k), T(j,k−1) = T′(j,k−1).
    In this case,
    T′(j,k) = (max{T′(j−1,k), T′(j,k−1)} + [p_{j,k}, p_{j,k}]) ∩ T0(j,k) = (max{T(j−1,k), T(j,k−1)} + [p_{j,k}, p_{j,k}]) ∩ T0(j,k) = T(j,k).
    Therefore, we have T(j,k) = T′(j,k).
All possible situations have been analyzed.    □
The main optimization problem in the PFSP is to find the order of job processing so that for some selected criterion, the function value is optimal. The most common criterion is the makespan (i.e., the total time it takes to complete the whole set of jobs). We remember that the vast majority of flow shop problems are treated as a PFSP; see, for example, [17,20,21].
To effectively solve the problem of calculating an optimal schedule, it is necessary to switch from the interval recursive function (6) to a scalar recursive function C{n, m, p_{j,k}}: J × M → N. Consider the case when all elements of the matrix of initial time intervals are equal to [0, Tmax]. In this case, all the cases of interval intersections shown in Figure 3 will be reduced to case (d) and will look similar to Figure 4. The value of T̄(j,k) will always be equal to Tmax, and the interval value can be replaced by a scalar equal to the lower end point of the interval, T̲(j,k). Then, the recursive function (6) will look as follows:
C(j,k) =
  p_{1,1}, if j = 1, k = 1;
  T̲(1,k−1) + p_{1,k}, if j = 1, k > 1;
  T̲(j−1,1) + p_{j,1}, if j > 1, k = 1;
  max(T̲(j−1,k), T̲(j,k−1)) + p_{j,k}, if j > 1, k > 1; (7)
or finally
C(j,k) =
  p_{1,1}, if j = 1, k = 1;
  C(1,k−1) + p_{1,k}, if j = 1, 1 < k ≤ m;
  C(j−1,1) + p_{j,1}, if 1 < j ≤ n, k = 1;
  max{C(j−1,k), C(j,k−1)} + p_{j,k}, if 1 < j ≤ n, 1 < k ≤ m. (8)
The function C(j,k) calculates the completion time of job j on the kth machine. The scalar function (8) is obtained from the interval function (6) by replacing each interval with its lower end point. These arguments are given for [0, Tmax], but it can be proven that they are also valid for the intervals [r_j, Tmax] in accordance with Remark 1.
In Example 2 above, C ( j , k ) computes only one schedule with minimum completion time, which corresponds to the ”Schedule 1” diagram in Figure 2.
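The scalar recursion maps directly onto a memoized function. A Python sketch with 0-based indices (the paper counts from 1); the wrapper name `makespan` is ours:

```python
from functools import lru_cache

def makespan(p):
    """C(n, m): completion time of the last job on the last machine
    for the fixed job order given by the rows of p."""
    n, m = len(p), len(p[0])

    @lru_cache(maxsize=None)
    def C(j, k):
        if j == 0 and k == 0:
            return p[0][0]
        if j == 0:                           # first job, machine k
            return C(0, k - 1) + p[0][k]
        if k == 0:                           # job j, first machine
            return C(j - 1, 0) + p[j][0]
        return max(C(j - 1, k), C(j, k - 1)) + p[j][k]

    return C(n - 1, m - 1)

# Job 1 finishes machine 2 at 2 + 3 = 5; job 2 at max(5, 3) + 2 = 7:
assert makespan([[2, 3], [1, 2]]) == 7
```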
The analysis of Formula (8) shows that when calculating the function C ( n , m ) , the traversal of pairs ( j , k ) is carried out in the order indicated in Figure 5a.
However, if Function (8) is replaced by an equivalent Function (9) (the difference is in the last expression), then the pairs will be traversed according to the scheme in Figure 5b:
C(j,k) =
  p_{1,1}, if j = 1, k = 1;
  C(1,k−1) + p_{1,k}, if j = 1, 1 < k ≤ m;
  C(j−1,1) + p_{j,1}, if 1 < j ≤ n, k = 1;
  max{C(j,k−1), C(j−1,k)} + p_{j,k}, if 1 < j ≤ n, 1 < k ≤ m. (9)
Theorem 2.
Let the PFSP include m machines and n jobs. Consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule (1, 2, …, n) with the new numbering. In this case, the following equality is true for any j (1 ≤ j ≤ n) and any k (1 ≤ k ≤ m):
T̲(j,k) = C(j,k).
The point of this theorem is that the scalar function calculates the minimum completion time for any given sequence of jobs. This follows from the principle of the function construction.
Proof. 
If in Formula (7), C ( j , k ) is replaced by T ̲ ( j , k ) , then a formula for calculating T ̲ ( j , k ) is obtained. The direct comparison of the resulting formula with Formula (8) for calculating C ( j , k ) proves the validity of the theorem.    □
The following important theorem follows from Theorem 2.
Theorem 3.
Let the PFSP be defined for m machines and n jobs. In addition, let Π be the set of all permutations of n jobs and π* = (α_1, α_2, …, α_n) ∈ Π be the permutation corresponding to the minimum completion time of all jobs, computed as T̲(α_n, m) for π*. For the PFSP, such a permutation satisfies the following equality:
T̲(α_n, m) = C(α_n, m).
The meaning of the theorem is that the minimum completion time of all jobs, computed using interval calculations, is equal to the value of the scalar recursive function for the same permutation.
Proof. 
The set of all permutations Π is finite and has the cardinality n ! . As proven in Theorem 2, for any permutation π ∈ Π , the equality T ̲ ( α n , m ) = C ( α n , m ) holds. Therefore, it also holds for the permutation π * .    □
Let us look at Example 2 again. If, after calculating the intervals in Table 2, we introduce a constraint of the form r 3 = 7 (the moment the job is received for processing), this will lead to a change in the intervals for the third job to [ 10 , 10 ] , [ 12 , 12 ] , [ 14 , 14 ] and for the fourth job to [ 13 , 13 ] , [ 14 , 14 ] , [ 16 , 16 ] . Among the schedules given in the example in Figure 2, only Schedule 3 will remain feasible. The other schedules will become infeasible. Evaluating the recursive function without this constraint will not result in a feasible schedule. This example demonstrates the importance of including the constraints in the initial value matrix when evaluating the recursive function.
Theorem 3 states that when computing an optimal schedule for the PFSP, one can move from interval calculations to the scalar recursive function C ( j , k ) (8), although the cardinality of the set of values of the function C ( j , k ) is less than or equal to the cardinality of the set of values of the function T ( j , k ) . If the function T ̲ ( n , m ) has several minima, then the function C ( n , m ) will have the same number of minima.
The permutation π = ( α 1 , α 2 , … , α n ) denotes the order of the job numbers, where α i , i = 1 , 2 , … , n , is the number of the job at position i in the schedule (permutation) π . The recursive functions were defined above for a fixed processing order of the jobs. To be able to change this order, we include the permutation π in the definitions of the recursive functions (6) and (8) as a global variable and define the new functions T ^ { n , m , p j , k , π } : J ^ × M → I and C ^ { n , m , p j , k , π } : J ^ × M → N , where J ^ is the set of job position numbers and α ∈ J ^ :
T ^ ( α , k ) = [ p π ( 1 ) , 1 , T m a x ] , α = 1 , k = 1 ; T ^ ( 1 , k − 1 ) + [ p π ( 1 ) , k , p π ( 1 ) , k ] , α = 1 , 1 < k ≤ m ; T ^ ( α − 1 , 1 ) + [ p π ( α ) , 1 , p π ( α ) , 1 ] , 1 < α ≤ n , k = 1 ; max { T ^ ( α − 1 , k ) , T ^ ( α , k − 1 ) } + [ p π ( α ) , k , p π ( α ) , k ] , 1 < α ≤ n , 1 < k ≤ m .
C ^ ( α , k ) = p π ( 1 ) , 1 , α = 1 , k = 1 ; C ^ ( 1 , k − 1 ) + p π ( 1 ) , k , α = 1 , 1 < k ≤ m ; C ^ ( α − 1 , 1 ) + p π ( α ) , 1 , 1 < α ≤ n , k = 1 ; max { C ^ ( α − 1 , k ) , C ^ ( α , k − 1 ) } + p π ( α ) , k , 1 < α ≤ n , 1 < k ≤ m .
In the function C ^ ( α , k ) , the position number α ( α = 1 , 2 , … , n ) in the schedule (permutation) π and the machine number k ( k ∈ M ) are the arguments of the recursive function, while the processing times p j , k and the permutation π are its parameters.
Example 3.
Here, we calculate the completion times of jobs 1 and 2 on machines 1 and 2. It should be noted that in this case, the first argument of the function C ^ is the number of the position in the schedule π:
  • C ^ ( 1 , 1 ) = p π ( 1 ) , 1 ;
  • C ^ ( 1 , 2 ) = C ^ ( 1 , 1 ) + p π ( 1 ) , 2 = p π ( 1 ) , 1 + p π ( 1 ) , 2 ;
  • C ^ ( 2 , 1 ) = C ^ ( 1 , 1 ) + p π ( 2 ) , 1 = p π ( 1 ) , 1 + p π ( 2 ) , 1 ;
  • C ^ ( 2 , 2 ) = max { C ^ ( 1 , 2 ) , C ^ ( 2 , 1 ) } + p π ( 2 ) , 2   = max { p π ( 1 ) , 1 + p π ( 1 ) , 2 , p π ( 1 ) , 1 + p π ( 2 ) , 1 } + p π ( 2 ) , 2 .
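Example 3 can be checked mechanically; the following minimal sketch implements the permutation-aware recursion for C ^ ( α , k ) , whose first argument is a schedule position rather than a job number (the function name and toy data are ours):

```python
from functools import lru_cache

def makespan_pi(p, pi):
    """Completion time for the job order given by permutation pi.

    pi is a tuple of 1-based job numbers; pi[alpha - 1] = pi(alpha).
    The recursion mirrors the four cases of the definition above.
    """
    n, m = len(p), len(p[0])

    @lru_cache(maxsize=None)
    def C_hat(a, k):
        t = p[pi[a - 1] - 1][k - 1]          # p_{pi(alpha), k}
        if a == 1 and k == 1:
            return t
        if a == 1:
            return C_hat(1, k - 1) + t
        if k == 1:
            return C_hat(a - 1, 1) + t
        return max(C_hat(a - 1, k), C_hat(a, k - 1)) + t

    return C_hat(n, m)

p = [[3, 2], [2, 4]]
print(makespan_pi(p, (1, 2)))  # 9, as in the identity order
print(makespan_pi(p, (2, 1)))  # 8, the swapped order is better here
```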
For simplicity of the description, we have discussed the approach for the PFSP. It can be noted that one can extend this procedure to the FSP with possibly different job sequences on the machines.

4. Implementation of the Branch and Bound Method for the PFSP

It should be noted that the PFSP was one of the first combinatorial optimization problems for which the branch and bound (B&B) method was applied [16] soon after its development [22]. A large number of works were devoted to the application of the B&B method for solving the PFSP. References [17,23] review the most important works on B&B algorithms for the PFSP. Since the B&B method follows very organically from the recursive representation of the PFSP, we consider it subsequently.
Evaluation of the Lower Bound LB1. Taking into account the permutation π of jobs, for a certain pair ( α , k ) , the lower bound LB can be determined as follows:
L B ( α , k ) = C ^ ( α , k ) + ∑ i = k + 1 m p π ( α ) , i + ∑ i = α + 1 n p π ( i ) , m .
Remark 2.
Calculating the value of the function C ^ ( α , k ) from the value of the function C ^ ( α 1 , k ) is simpler than calculating L B ( α , k ) . Therefore, it makes sense to calculate the lower bound only for the pair ( α , m ) . In [24], the effectiveness of the comparison with the lower bound on the last machine was also justified.
In this case, it is possible to move from formula (10) to the formula for calculating the bound on the last machine:
L B ( α , m ) = C ^ ( α , m ) + ∑ i = α + 1 n p π ( i ) , m .
Next, let us formulate the exact meaning of the bound L B ( α , k ) . In the described case, it means that for a fixed order of the jobs 1 , 2 , … , α , the completion time of all jobs cannot be less than L B ( α , k ) for any order of the remaining jobs α + 1 , α + 2 , … , n .
Let Π be the set of permutations of n jobs for which the value C ^ ( α n , m ) has already been calculated. Then, the lower bound L B 1 for the considered job sequences will be
L B 1 = min π ∈ Π C ^ ( α n , m ) .
Thus, if the inequality
L B ( α , k ) ≥ L B 1
is satisfied for the current pair ( α , k ) , then this sequence of jobs 1 , 2 , , α 1 , α is excluded from consideration as not promising, and the corresponding branch of permutations is cut off. The next one will be one of the sequences 1 , 2 , , α 1 , π ( 1 ) with
π ∈ P ( α , α + 1 , … , n ) and π ( 1 ) ≠ π ( α ) ,
where P ( α , α + 1 , , n ) is the set of all permutations of the n α + 1 remaining jobs.
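To make Formula (11) concrete, here is a minimal Python sketch that tabulates C ^ ( α , k ) for a given permutation and evaluates the bound on the last machine (the helper name c_hat_table and the toy data are ours, not the paper's):

```python
def c_hat_table(p, pi):
    """Table of completion times for the job order pi (1-based jobs)."""
    n, m = len(p), len(p[0])
    C = [[0] * m for _ in range(n)]
    for a in range(n):
        for k in range(m):
            up = C[a - 1][k] if a > 0 else 0      # previous position
            left = C[a][k - 1] if k > 0 else 0    # previous machine
            C[a][k] = max(up, left) + p[pi[a] - 1][k]
    return C

def lb_last_machine(p, pi, alpha):
    """LB(alpha, m) of Formula (11): the completion time at position
    alpha on the last machine plus the last-machine processing times
    of the jobs at positions alpha+1..n; alpha is 1-based."""
    m = len(p[0])
    C = c_hat_table(p, pi)
    tail = sum(p[pi[i] - 1][m - 1] for i in range(alpha, len(pi)))
    return C[alpha - 1][m - 1] + tail

p = [[3, 2], [2, 4]]
print(lb_last_machine(p, (1, 2), 1))  # C^(1,2)=5 plus p_{2,2}=4 gives 9
```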
Branching. First of all, the branching must be consistent with the traversal defined by the recursive function. If
L B ( α , k ) ≥ L B 1 ,
it is necessary to change the order of the jobs. However, the order must be changed in such a way as to minimize the loss of information when calculating the functions C ^ ( 1 , 1 ) , C ^ ( 1 , 2 ) , … , C ^ ( α , m ) . The literature describes many permutation algorithms with different characteristics. Let P ( 1 , … , s ) = { π 1 , π 2 , … , π s ! } , and let x P denote prepending the element x to each permutation of the set, i.e., x P = { x π 1 , x π 2 , … , x π s ! } . The algorithm for generating permutations should then produce them in a sequence satisfying the following recursive property:
P ( j 1 , j 2 , … , j s ) = ⋃ i = 1 s j i P ( j 1 , j 2 , … , j i − 1 , j i + 1 , … , j s ) .
This property, after calculating the function C ( j , k ) , allows one to examine all permutations P ( j + 1 , j + 2 , … , n ) of the “tail” of the queue of jobs being scheduled and only then backtrack.
Example 4.
Figure 6 shows an example of a complete permutation tree that satisfies Property (12) for n = 4 . The root of the tree is an auxiliary vertex, and the permutation corresponds to some path from the root. The vertical edge shows the next level of recursion. It connects the initial sequence to its tail. The tree is traversed from top to bottom and from left to right. In this example, the permutations will be generated in the following order: (1,2,3,4), (1,2,4,3), (1,3,2,4), (1,3,4,2), …, (4,3,1,2), (4,3,2,1). There are 24 permutations in total.
This algorithm can be implemented through recursive copying of the tail of permutations. This is a costly operation, but the algorithm can be modified so that the permutation π exists in a single instance as a global variable. This is done using forward and reverse permutations of two jobs (when returning along the search tree). The description of the B&B algorithm is given in Appendix A.
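A sketch of such an in-place generation using forward and reverse swaps of two jobs, with π held as a single vector: the exact visiting order may differ from Figure 6, but every fixed prefix is followed by all permutations of its tail before backtracking, as Property (12) requires.

```python
def permutations_with_prefix_property(items):
    """Generate all permutations in place via pairwise swaps.

    At each recursion level, each remaining job is swapped into the
    current position (forward swap), the tail is fully permuted, and
    the swap is undone when returning along the search tree (reverse
    swap), so pi exists in a single instance."""
    pi = list(items)
    n = len(pi)
    out = []

    def rec(level):
        if level == n:
            out.append(tuple(pi))
            return
        for i in range(level, n):
            pi[level], pi[i] = pi[i], pi[level]   # forward swap
            rec(level + 1)
            pi[level], pi[i] = pi[i], pi[level]   # reverse swap on return

    rec(0)
    return out

perms = permutations_with_prefix_property([1, 2, 3])
print(len(perms))  # 3! = 6 distinct permutations
```

In a B&B implementation, the cut-off test against L B 1 would be placed before the recursive call, pruning the whole tail of the current prefix at once.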

5. ‘And’ Function

The flow shop problem discussed above is mainly a subject of theoretical research and has limited application in practice. In order to bring it closer to practice, various additional elements can be introduced, first of all, various types of constraints. In real production, the main machine chain is served by various preparatory operations, for example, those associated with delivery from the warehouse to the production line, preparatory steps, technological treatments, testing, etc.
Next, we expand the PFSP by introducing the ‘and’ function. A well-known construction, shown in Figure 7a, denotes a precedence relation in which the execution of a job on machine 3 can only begin after the completion of the jobs on machines 1 and 2. Using the ‘and’ function, it will look similar to Figure 7b. In what follows, we will interpret the ‘and’ function as a machine with zero processing time for any job. Thus, the chained precedence graph for the PFSP is expanded into an acyclic precedence graph when using the ‘and’ function. Therefore, the PFSP is transformed towards an assembly line (AL) problem, but without the presence of a conveyor belt and workstations. This problem can be called the Assembly Permutation Flow Shop Problem (APFSP). The absence of a conveyor belt frees it from the “cycle time” limitation, and the absence of workstations frees it from solving the balancing problem, i.e., an optimal distribution of the jobs among the stations, taking into account the cycle time.
Despite the absence of assembly line attributes, this model can have wide applications.
The ‘and’ function in the interval representation T a n d { n , m , p j , k , T 0 ( j , k ) } : J × M → I is as follows:
T a n d ( j , k ) = max { T ( j , p r e d 1 ( k ) ) , T ( j , p r e d 2 ( k ) ) } ,
where p r e d 1 and p r e d 2 are precedence functions that will be defined later. Therefore,
T a n d ( j , k ) = [ max { T ̲ ( j , p r e d 1 ( k ) ) , T ̲ ( j , p r e d 2 ( k ) ) } , max { T ¯ ( j , p r e d 1 ( k ) ) , T ¯ ( j , p r e d 2 ( k ) ) } ] ,
or in a scalar expression C a n d { n , m , p j , k } : J × M → N :
C a n d ( j , k ) = max { C ( j , p r e d 1 ( k ) ) , C ( j , p r e d 2 ( k ) ) } .
Accordingly, we get the function C ^ a n d { n , m , p j , k , π } : J ^ × M → N , where J ^ is the set of job position numbers and α ∈ J ^ :
C ^ a n d ( α , k ) = max { C ^ ( α , p r e d 1 ( k ) ) , C ^ ( α , p r e d 2 ( k ) ) } .
Including the ‘and’ function in the PFSP causes the machines to process the job according to the precedence tree. To set a specific order of the precedence graph traversal, we assume that the function p r e d 1 is always the first one in Formula (13) and determines the upper arc in Figure 7b. Adding the ‘and’ function causes the assumption in Section 3 to be invalid: “a job cannot be processed by more than one machine at a time”, since machines 1 and 2 in Figure 7b can execute the same job simultaneously.
We prove that when adding the ‘and’ function to the PFSP (i.e., C ( j , k ) = C a n d ( j , k ) and C ^ ( α , k ) = C ^ a n d ( α , k ) ), Theorem 3 remains valid, and the function C ( j , k ) supplemented by C a n d ( j , k ) also implements a greedy algorithm for a fixed sequence of jobs. Figure 8a shows an example of a precedence graph between the machines. Figure 8b shows the precedence graph between the machines and jobs (with a fixed order of jobs). Figure 8c shows the computation (superposition) graph of the recursive function for a certain sequence of four jobs. In the latter case, the arc determines the transfer of the value C ( j , k ) .
The graph in Figure 8b is acyclic. In this case, it is convenient to traverse the graph according to the jobs, i.e., the first job tree, the second job tree, etc., in accordance with the job permutation algorithm. The tree of each job is traversed as indicated in the figure by the dotted line.

6. Implementation of the Branch and Bound Method for the APFSP

When implementing any method for the APFSP, it is necessary to switch to a precedence graph representation with typed vertices. Let us define the set { s o p , o p , a n d } as the following types of vertices:
  • s o p —the starting vertex of the graph (machine);
  • o p —the intermediate or final vertex of a graph (machine);
  • and—the vertex corresponding to the ‘and’ function.
Let us define the function T y p e : M → { s o p , o p , a n d } that returns the type of a vertex. It is also necessary to define explicitly the precedence functions p r e d , p r e d 1 , p r e d 2 : M → M for the vertices.
  • p r e d is defined for the vertices of type o p , k 1 = p r e d ( k 2 ) , meaning that the vertex k 1 immediately precedes the vertex k 2 .
  • p r e d 1 and p r e d 2 are defined for the vertices of type ‘and’; k 1 = p r e d 1 ( k 3 ) means that the vertex k 1 immediately precedes the vertex k 3 along the first incoming edge; k 2 = p r e d 2 ( k 3 ) means that the vertex k 2 precedes the vertex k 3 along the second incoming edge.
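With the vertex types and precedence functions defined, the scalar recursion of Formula (8) extended with the ‘and’ case of Formula (15) can be sketched as follows; the data structures, names, and toy graph are ours, a minimal illustration rather than the paper's implementation:

```python
from functools import lru_cache

def makespan_with_and(p, pi, vtype, pred, pred1, pred2, last):
    """Completion time on a machine graph with an 'and' vertex.

    vtype maps a vertex to 'sop' (start), 'op' (chain) or 'and';
    pred, pred1, pred2 are the precedence maps defined above. The
    'and' vertex is a machine with zero processing time, so it adds
    no term and only merges its two predecessors (Formula (15)).
    Indices are 1-based as in the paper."""
    @lru_cache(maxsize=None)
    def C(a, k):
        if vtype[k] == 'and':                 # zero-time vertex
            return max(C(a, pred1[k]), C(a, pred2[k]))
        t = p[pi[a - 1] - 1][k - 1]
        if vtype[k] == 'sop':                 # start of a chain
            return t if a == 1 else C(a - 1, k) + t
        before = C(a, pred[k])                # same job, preceding vertex
        return (before if a == 1 else max(before, C(a - 1, k))) + t

    return C(len(pi), last)

# Figure 7b-like graph: machines 1 and 2 feed the 'and' vertex 4,
# which precedes machine 3; two jobs processed in the order (1, 2).
vtype = {1: 'sop', 2: 'sop', 3: 'op', 4: 'and'}
result = makespan_with_and([[3, 5, 2], [1, 1, 4]], (1, 2),
                           vtype, {3: 4}, {4: 1}, {4: 2}, last=3)
print(result)  # 11
```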
The main implementation issues are the choice of the technique for calculating the lower bound (LB) and the branching algorithm. Consider the calculation of the lower bound LB1 by traversing the jobs, that is, similarly to Figure 5, but with a tree traversal. This means that Formulas (8) and (13) are used.
Evaluation of the Lower Bound LB1. Analysis of Formula (8) shows that when calculating the function C ( n , m ) , the operations ( j , k ) are traversed in the order indicated in Figure 5a. Consider the example in Figure 8a. If the ‘and’ function is present, the job processing tree will be traversed as shown in the figure. Naturally, the value of L B ( α , k ) is calculated after calculating C ^ ( α , k ) . The values of L B for the job α ( 1 < α < n ) considering that p α , 4 = 0 can be calculated as follows:
  • L B ( α , 1 ) = C ^ ( α , 1 ) + p α , 2 + p α , 4 + p α , 5 + p α + 1 , 5 + ⋯ + p 4 , 5 ;
  • L B ( α , 2 ) = C ^ ( α , 2 ) + p α , 4 + p α , 5 + p α + 1 , 5 + ⋯ + p 4 , 5 ;
  • L B ( α , 3 ) = C ^ ( α , 3 ) + p α , 4 + p α , 5 + p α + 1 , 5 + ⋯ + p 4 , 5 ;
  • L B ( α , 4 ) = C ^ ( α , 4 ) + p α , 5 + p α + 1 , 5 + ⋯ + p 4 , 5 ;
  • L B ( α , 5 ) = C ^ ( α , 5 ) + p α + 1 , 5 + ⋯ + p 4 , 5 .
From the example considered, it is clear that the calculation of L B ( α , k ) is carried out using the same tree traversal as when calculating C ^ ( α , k ) . In this case, for L B ( α , k ) , Equality (10) is transformed into the following equality:
L B ( α , k ) = C ^ ( α , k ) + ∑ i ∈ U α , m p π ( α ) , i + ∑ i = α + 1 n p π ( i ) , m ,
where U α , m is the set of vertices belonging to the path from α to m in the machine precedence graph. The uniqueness of this path follows from the fact that the machine precedence graph is a tree and m is its root node (see, for example, Figure 8a or Figure 8b). Taking into account Remark 2, Expression (11) can also be used.
Appendix A presents three algorithms used for the solution of the APFSP.
The described algorithms work both for the PFSP and the APFSP. The only difference is in the definition of the p r e d function. In the case of the PFSP, p r e d ( k ) = k − 1 . For the APFSP, the precedence functions are determined from the precedence graph or a matrix. In this case, it is obvious that for the APFSP, the complexity of the B&B algorithm does not increase. An important feature should be noted. In the case when the last vertex in the precedence graph is the ‘and’ function, the B&B algorithm executes correctly, but its “predictive power” is reduced to zero.
Recursive functions also allow one to calculate the values of objective functions, different from makespan. This can be done by replacing the function C ^ ( α , k ) by
C * ( α , k ) = C ^ ( α , k ) ; ⟨ calculating the value of the objective function ⟩ .
In this case, the calculation of the optimization criterion is concentrated in the body of the function C * ( α , k ) and is invariant to the details of the optimization method.

7. Evaluating the Effectiveness of the Algorithm

Existing publications on the B&B method in scheduling theory report test results as computing times on specific hardware. To assess the effectiveness of the B&B method without using raw computing time indicators, it is advisable to determine the reduction in the number of elementary calculations performed due to the fact that some sets of job permutations were deliberately discarded and not checked. There are two possible approaches here:
  • consider it elementary to calculate the completion time of the job’s last technological operation (approach L);
  • consider it elementary to calculate the completion time of each technological operation of each job (approach A).
In both cases, it is necessary to determine the maximum possible number of such calculations when solving a problem for n jobs and m machines, then to compare it with the number of calculations actually performed.
In the L approach, the maximum number of elementary computations is determined by the permutation generation algorithm (see Appendix A). At each step, this algorithm either adds a new job to the already fixed initial part of the vector π or replaces the last job in this fixed part by the next one. In both cases, the completion time of the last technological operation of the last job in the fixed part of π is calculated. The completion times of all technological operations of all previous jobs are already known and do not need to be calculated again. Thus, an “elementary computation” is the calculation of m elements of the job completion times matrix ( T o p ) row, and the total number of such computations is equal to the number of different variants of the initial part of the vector π generated by the algorithm.
Consider, for example, the generation of permutations and the computation of the completion times for three jobs ( n = 3). If the rejection of branches of the search along the lower bound L B is never performed, then the sequence of actions performed by the algorithm is described by Table 3.
As a result of the algorithm’s operation, 15 initial parts of the vector π were generated (including 6 complete permutations of 3 jobs), and 15 elementary computations of the T o p rows were performed. For a size h of the initial part of the vector π ( 1 ≤ h ≤ n ) , the total number of such generated initial parts is equal to the number of arrangements of h elements out of n:
A n h = n ! / ( n − h ) ! .
The total number of initial parts of the vector π of all possible sizes from 1 to n is equal to
S m a x ( n ) = ∑ h = 1 n A n h = ∑ h = 1 n n ! / ( n − h ) ! .
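For illustration, S m a x ( n ) can be computed directly (a small sketch; the function name is ours):

```python
from math import factorial

def s_max(n):
    """Maximum number of elementary computations in approach L:
    the number of initial parts of pi of every size h = 1..n,
    i.e. the sum of the arrangements A(n, h) = n! / (n - h)!."""
    return sum(factorial(n) // factorial(n - h) for h in range(1, n + 1))

print(s_max(3))  # 3 + 6 + 6 = 15, matching the three-job example above
```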
When using the B&B method, the algorithm will reach all S m a x elementary calculations extremely rarely. However, this estimate is achievable. Let us consider the degenerate case in which all rows of the matrix p j , k are identical. In this case, the L B value calculated using Formula (10) or (11) will be the same for any values of the arguments, and none of the branches will be discarded. The number of elementary calculations performed by the algorithm will be exactly S m a x . In approach A, the maximum possible number of elementary calculations is m times S m a x ( n ) since every row of the job completion times matrix contains m elements. In the optimized algorithm, the number of calculations actually performed will be less than these maximum values. The effectiveness of solving a problem can be assessed by the proportion of calculations not performed. If S L ( i ) and S A ( i ) are the numbers of calculations when solving problem number i for n jobs and m machines using approaches L and A, respectively, then the efficiency of the solutions can be estimated using the following formulas:
E L ( n , i ) = ( S m a x ( n ) − S L ( i ) ) / S m a x ( n ) ,
E A ( n , m , i ) = ( m S m a x ( n ) − S A ( i ) ) / ( m S m a x ( n ) ) .
A zero value of these indicators means that the use of the method did not provide any gain in the number of calculations; a theoretically unattainable value of 1 (or 100%) means that the use of the method completely eliminated the need for calculations.
The authors performed 10,000 tests for problems with 5–11 jobs on random acyclic graphs with random job processing times. The problem generation parameters in these experiments were as follows:
  • total number of nodes in the acyclic graph (technological operations): 15;
  • number of initial operations: random value in [1,3];
  • number of ‘and’ operations: random value in [1,4];
  • job processing times ( p j , k ): random value in [1,10].
All random values are uniformly distributed integers. For each number of jobs, a histogram of the distribution of the number of experiments in the ranges of the E L efficiency obtained in the experiment was constructed. Thus, seven histograms were constructed, presented in Figure 9.
The histograms in the figure show how many times a particular E L efficiency was achieved during the random experiments. The horizontal axis shows the ten-percent efficiency ranges (from “0–10” to “91–100”), and the vertical axis shows the number of experiments out of 10,000 performed in which the corresponding efficiency value was achieved. It can be seen from the figure that as the number of jobs increases, the efficiency of the algorithm also increases. The histogram for five jobs has a maximum around 50%. This means that most of the experiments with five jobs showed a fifty percent efficiency; that is, approximately half of the maximum possible number of elementary calculations for a given task were performed. However, a significant number (about 800) of experiments showed an efficiency of less than 10%. As the number of jobs increases, the histogram maximum moves into the high efficiency region. For 11 jobs, more than 7000 out of 10,000 experiments performed showed an efficiency in the range of 91–100%; i.e., less than 10% of the number of elementary calculations that would be required to solve the problem when analyzing all possible permutations were performed.

8. Job Shop Problem

The job shop problem (JSP) is a generalization of the FSP. For each job, the JSP allows its own set of machines to be used from the set M = { 1 , 2 , … , m } and its own order of execution of the job by the machines. The order of execution is determined by the precedence relationships between the machines, and it is possible that a job does not use the entire set of machines. In this section, we consider a special variant of the JSP, namely the permutation job shop problem (PJSP), where the job sequence is identical for all machines. Figure 10a shows an example of a job shop precedence graph for some fixed order of jobs. Just as in Figure 1, each job corresponds to a row of machines. If a job uses a machine, the corresponding graph vertex is shaded. The horizontal arcs represent the precedence relationships between the machines, and the vertical arcs represent the precedence relationships between the jobs for some fixed order of the jobs. We call the pair (job number, machine number) relevant if machine k is used to perform job j; in this case, machine k is called relevant for job j, and irrelevant otherwise. All machines that are irrelevant for job j are characterized by a zero processing time, i.e., p j , k = 0 . The example in Figure 10a shows that for each job, the precedence relationships are defined only between the machines that are relevant for the given job. Figure 10b shows the graph that results from swapping the third and fifth jobs of the graph in Figure 10a. Obviously, when the jobs are rearranged, the precedence relationships between the machines for each job do not change, but they do change between the jobs. The precedence relations between the machines are defined by the precedence function p r e d k : J × M → M as follows:
p r e d k ( j , k ) = k ′ , if in job j machine k ′ immediately precedes machine k ; 0 , if there is no machine in job j that precedes machine k .
For a fixed sequence of jobs, the function p r e d k ( j , k ) is implemented by a precedence matrix predefined by the precedence graph. For example, the precedence matrix W for jobs j with respect to machines k and matrix P of the given operation processing times, corresponding to the graph in Figure 10a, will have the following form:   
W = ( 0 1 2 3 ; 0 3 0 0 ; 0 4 0 3 ; 2 0 1 0 ; 0 0 0 0 ) , P = ( 3 2 2 4 ; 0 5 3 0 ; 0 3 1 1 ; 3 1 1 0 ; 0 0 4 0 ) , where semicolons separate the matrix rows: each of the five rows corresponds to a job and each of the four columns to a machine.
When rearranging the jobs to maintain correspondence, it is enough to carry out the appropriate rearrangement of the precedence matrix rows.
Let us define a recursive function C ^ J S { n , m , π , p i , j } : J ^ × M → N . Here, J ^ is the set of job position numbers and α ∈ J ^ .
C ^ J S ( α , k ) = p π ( α ) , k , α = 1 , p r e d k ( π ( α ) , k ) = 0 ; C ^ J S ( α , p r e d k ( π ( α ) , k ) ) + p π ( α ) , k , α = 1 , p r e d k ( π ( α ) , k ) ≠ 0 ; C ^ J S ( α − 1 , k ) + p π ( α ) , k , 1 < α ≤ n , p r e d k ( π ( α ) , k ) = 0 ; max { C ^ J S ( α − 1 , k ) , C ^ J S ( α , p r e d k ( π ( α ) , k ) ) } + p π ( α ) , k , 1 < α ≤ n , p r e d k ( π ( α ) , k ) ≠ 0 .
The function is defined for all argument values.
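A minimal sketch of this recursion, with the precedence function supplied as a matrix W (rows are jobs, columns are machines, 0 meaning no predecessor); the function name and the toy data are ours:

```python
from functools import lru_cache

def makespan_js(P, W, pi, last):
    """PJSP completion time via the four-case recursion above.

    P[j-1][k-1] is the processing time of job j on machine k (zero
    for irrelevant machines); W[j-1][k-1] is pred_k(j, k). 'last' is
    the machine at which the makespan is read off, e.g. a virtual
    machine m+1 with zero processing times. Memoization ensures each
    pair (alpha, k) is evaluated only once."""
    @lru_cache(maxsize=None)
    def C(a, k):
        j = pi[a - 1]
        pk = W[j - 1][k - 1]
        t = P[j - 1][k - 1]
        if a == 1:
            return t if pk == 0 else C(1, pk) + t
        if pk == 0:
            return C(a - 1, k) + t
        return max(C(a - 1, k), C(a, pk)) + t

    return C(len(pi), last)

# Degenerate check: two jobs whose chain is 1 -> 2, plus a virtual
# machine 3 (zero time, predecessor 2); this reduces to the PFSP value.
P = [[3, 2, 0], [2, 4, 0]]
W = [[0, 1, 2], [0, 1, 2]]
print(makespan_js(P, W, (1, 2), last=3))  # 9
```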
The set of machines relevant to some job, connected by the arcs of the precedence relationships, forms one or more simple chains of machines. Thus, in Figure 11a, job 2 includes two simple chains of relevant machines { ( 2 , 1 ) , ( 2 , 4 ) } and { ( 2 , 2 ) , ( 2 , 3 ) } . Let us assume that for each job, one of the following two conditions is satisfied:
  • a job can contain only one chain of relevant machines;
  • if a job contains two or more chains of relevant machines, then these chains cannot have common machines.
In this case, such a job can be split into two (or more) jobs so that each new job contains one chain of machines. An example of such a split is shown in Figure 11. Here, job 2, consisting of two chains (see Figure 11a), is split into two jobs, job 2 and job 2’ (see Figure 11b), such that each of them contains one chain. This division increases the total number of jobs and hence the total number of combinatorial options when searching for an optimal job order. It can be proven that with such a split, there exists an optimal schedule for the new set of jobs whose value of C m a x is the same as for the original set of jobs.
Theorem 4.
Let there be a set of n jobs. If some job is a set of w simple chains of machines, then this job can be broken down into w jobs such that each job contains only one chain. If the original set of jobs for some processing order has a completion time of C m a x , then for the new set of n + w − 1 jobs, there exists an order with a completion time of C m a x .
Proof. 
Let some order of n jobs be fixed, and let the job under consideration have the number j ( 1 < j ≤ n ) . Let us make two assumptions. First, suppose that the job contains only two simple chains of machines. Second, for simplicity and clarity, assume that the two chains consist of sequences of adjacent machines, as shown in Figure 12.
Here, the first chain consists of q machines, and the second consists of z machines. The remaining m − q − z machines for job j are irrelevant and are not displayed. These assumptions do not reduce the generality of the problem statement. If the statement can be proved for two chains, then a successive application of the job splitting will allow it to be applied to any other set of chains. The second assumption makes the statement of the theorem more clear, but it does not change the essence of the problem. The completion time c k i x of the k i -th machine ( k 1 < k i ≤ k q ) depends on the completion time x k i of job i ( i < j ) on the k i -th machine and the completion time c k i − 1 x of job j on machine k i − 1 . If a machine is the first one in the chain, then its completion time depends only on x k 1 . The same reasoning applies to the second chain with the times c k i y , y k i ( q + 1 ≤ i ≤ q + z ) . Let us divide job j into two jobs so that each of the jobs j ′ and j ″ will contain one chain, as shown in Figure 13. The irrelevant machines of job j will also be irrelevant for the new jobs and are not shown in the figure.
Since the irrelevant machines do not change the completion time when it is transmitted transitively vertically and they have no horizontal connections, at the output of the two jobs j ′ and j ″ , the values of the completion times c k i x and c k i y do not change. For the machines that are irrelevant in both jobs, the values also do not change. Therefore, if the fixed order corresponded to an optimal schedule, then the new order of the split jobs will have the same completion time value. Moreover, with the new set of jobs, it is possible to have a schedule with a shorter completion time. These arguments apply to any job j > 1 . For the first job, similar arguments apply in the absence of the values x k i and y k i .    □
In what follows, we will assume that each job contains one simple chain of relevant machines. To be able to calculate the function C ^ J S ( α , k ) for all jobs and relevant machines, we will do the following:
  • Let us supplement the set M with the machine m + 1 . This will be a virtual machine, and the time it takes to perform any job will be zero ( p j , m + 1 = 0 for 1 ≤ j ≤ n );
  • For each job, we define an additional precedence relation between machine m + 1 and the last actual machine in the simple chain that performs the given job. The last machine in the chain always precedes machine m + 1 .
The result of the additions for the graph in Figure 10a is shown in Figure 14a. If we call the function C ^ J S ( n , m + 1 ) , then, in the process of calculation, it will traverse the machines and jobs in the order shown in Figure 14b by the numbered arrows. If an arrow reaches a vertex where C ^ J S has already been evaluated, the call returns immediately (the function is not re-evaluated). This corresponds to a memoized computation, which significantly reduces the overall amount of computation.
It may seem that there is no need to add machine m + 1 , and we can complete the precedence relationships with machine m. However, if machine m for some job j is inside a chain of machines, as in Figure 11a for job 3, then the calculations will be incorrect.
The function C ^ J S ( α , k ) is completely defined. For the relevant pairs ( α , k ) , it calculates the time it takes for the k-th machine to complete job α . For the irrelevant pairs ( α , k ) , it computes the completion time of job α 1 on machine k. From the example, it is clear that the function C ^ J S ( α , m + 1 ) for each job traverses only the relevant machines.

9. Conclusions

This article was devoted to a new methodology of using the apparatus of recursive functions for formulating and solving the PFSP, APFSP, and PJSP classes of scheduling problems. The fact that recursive functions, unlike analytical ones, are calculated by some algorithms significantly expands their capabilities. A recursive function, in addition to its main purpose of calculating the completion time of an operation, can include accompanying algorithms: checking time constraints, calculating objective functions for various purposes, checking the availability of resources, etc. This frees the optimizer from these problems and makes it invariant to different tasks.
When using the B&B method for the PFSP and APFSP, the simplest form of a lower bound has been used. A detailed review of the B&B method and lower bounds for the PFSP are given in [20]. We think that the main result is an organic transition using recursive functions from the PFSP to the more general and relevant APFSP.
For the APFSP, this paper considers tree graphs, but it is obvious that the method is applicable to acyclic graphs. A recursive function, being by definition computed by an algorithm, allows one to integrate into this algorithm the features of various problems of scheduling theory. With the domain of definition of the function and the set of its values unchanged (which was achieved for the three problems under consideration), this allows one to use a common solver for these problems. Approximate scheduling optimization algorithms, such as ant colony (AC) or simulated annealing (SA), only require the function to be computable. Accordingly, specialists who use, e.g., the AC or SA algorithms can use these recursive functions.
This article does not address the issue of computational efficiency in depth. Recursive functions are not efficient when evaluated naively. However, there are standard transformations of the calculations, such as reusing previously computed values (memoization), converting recursion to iteration, and avoiding the system memory stack, that significantly increase the efficiency.
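As an illustration of the memoized evaluation mentioned above, the classical PFSP completion-time recursion can be cached with a few lines of Python. This is our own sketch, not code from the paper; the data are the processing times of Table 1, and all identifiers are ours.

```python
from functools import lru_cache

# Processing times P[k][j]: machine k, job j (the data of Table 1).
P = [
    [3, 2, 2],  # machine 1: processing times of jobs 1, 2, 3
    [3, 1, 3],  # machine 2
    [3, 2, 2],  # machine 3
    [3, 1, 2],  # machine 4
]

def makespan(pi, p=P):
    """Makespan of the permutation pi under the standard PFSP recursion
    C(a, k) = max{C(a-1, k), C(a, k-1)} + p[k][pi[a]], memoized."""
    @lru_cache(maxsize=None)
    def c(a, k):
        if a < 0 or k < 0:
            return 0  # boundary: no earlier job / no earlier machine
        return max(c(a - 1, k), c(a, k - 1)) + p[k][pi[a]]
    return c(len(pi) - 1, len(p) - 1)

print(makespan((0, 1, 2)))  # jobs 1, 2, 3 in natural order -> 15
```

Thanks to the cache, each pair (a, k) is evaluated exactly once, so one makespan evaluation costs O(nm) instead of the exponential cost of the naive recursion.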
The following results have been obtained:
  • recursive functions with a common domain of definition and a common range have been constructed for calculating the makespan when modeling the PFSP, APFSP, and PJSP;
  • a B&B algorithm unified for all three problems has been developed;
  • experimental calculations have been performed with a random generator of test instances, and the quality of the B&B optimizer has been assessed as the ratio of the volume of calculations with the elimination of variants to the volume of calculations under exhaustive search.
The obtained results allow for the following developments in the future:
  • development of new recursive functions that allow the PFSP to be extended to solve practical problems;
  • improvement of the efficiency of calculating recursive functions and evaluating them on test instances;
  • use of heuristic optimization methods, such as ant colony optimization, simulated annealing, etc., as a general approach to solving a variety of problems described by recursive functions;
  • investigation of the applicability of a method based on recursive functions for describing and solving non-permutation flow shop problems.
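As noted above, such heuristic optimizers only need the objective to be computable. A minimal simulated annealing loop over permutations, treating a (possibly recursively computed) makespan as a black-box cost, might look as follows; this is our own illustrative sketch, and all names and parameters are our assumptions, not part of the paper.

```python
import math
import random

def simulated_annealing(pi, cost, iters=20_000, t0=10.0, seed=0):
    """Simulated annealing over job permutations.

    `cost` is any computable function of a permutation, e.g. a makespan
    returned by a recursive completion-time function.
    """
    rng = random.Random(seed)
    pi = list(pi)
    best, best_cost = list(pi), cost(pi)
    cur_cost = best_cost
    for step in range(iters):
        t = t0 * (1.0 - step / iters) + 1e-9   # simple linear cooling
        i, j = rng.sample(range(len(pi)), 2)   # random transposition
        pi[i], pi[j] = pi[j], pi[i]
        new_cost = cost(pi)
        if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost                # accept the move
            if new_cost < best_cost:
                best, best_cost = list(pi), new_cost
        else:
            pi[i], pi[j] = pi[j], pi[i]        # reject: undo the swap
    return best, best_cost
```

The optimizer never inspects the structure of `cost`; replacing the PFSP makespan by the APFSP or PJSP recursive function changes nothing in this loop, which is exactly the invariance argued for above.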

Author Contributions

Conceptualization, B.K. and A.L.; methodology, B.K. and A.L.; software, A.R.; validation, B.K., A.L., A.R. and F.W.; formal analysis, F.W.; investigation, B.K.; writing—original draft preparation, B.K.; writing—review and editing, F.W.; visualization, B.K. and A.R.; supervision, B.K. and A.L.; project administration, A.L.; funding acquisition, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by DAAD grant 91696586.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

This appendix contains the algorithms for finding an optimal permutation.
Algorithm A1: Finding an optimal permutation (with definitions of the variables).
Begin Algorithm A1 (initializing and calling PermutationB&B):
 1. π is the vector of job permutations, a global variable.
 2. P is the matrix of the job processing times on each machine.
 3. Top ← 0      ’ the matrix of the job completion times
 4. π ← (1, 2, …, n)      ’ the initial order of the jobs
 5. LB1 ← the maximum value
 6. PermutationB&B(1)      ’ calling the permutation set generator
End Algorithm A1.
Algorithm A2: Calculating the function C^(α, k) for the three types of graph vertices.
Begin Algorithm A2 (calculation of C^(α, k)):
 1. if Type(k) = op then
 2.     if α > 0 then      ’ not the first job
 3.         C ← max{C^(α − 1, k), C^(α, pred(k))} + P(π(α), k)
 4.     else      ’ the first job
 5.         C ← C^(α, pred(k)) + P(π(α), k)
 6.     end if
 7. else
 8.     if Type(k) = sop then
 9.         if α > 0 then
 10.            C ← C^(α − 1, k) + P(π(α), k)
 11.        else
 12.            C ← P(π(α), k)
 13.        end if
 14.    else
 15.        if Type(k) = and then
 16.            C ← max{C^(α, pred1(k)), C^(α, pred2(k))}
 17.        end if
 18.    end if
 19. end if
 20. Top(α, k) ← C
End Algorithm A2.
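For readers who prefer executable code, Algorithm A2 can be transliterated into Python as follows. This is an illustrative sketch of ours: the vertex types and predecessor lists are modelled with plain dictionaries, and all identifiers are our assumptions rather than notation from the paper.

```python
def completion(alpha, k, pi, P, vtype, pred):
    """C^(alpha, k) for the three vertex types 'op', 'sop' and 'and'.

    pi    -- job permutation (pi[alpha] is the job in position alpha)
    P     -- P[job][machine] processing times
    vtype -- vertex type of each machine: 'op', 'sop' or 'and'
    pred  -- list of predecessor machines for each non-source vertex
    """
    if vtype[k] == "op":          # ordinary machine with one predecessor
        t = completion(alpha, pred[k][0], pi, P, vtype, pred)
        if alpha > 0:             # not the first job: also wait for machine k
            t = max(completion(alpha - 1, k, pi, P, vtype, pred), t)
        return t + P[pi[alpha]][k]
    if vtype[k] == "sop":         # source machine: no predecessor
        if alpha > 0:
            return completion(alpha - 1, k, pi, P, vtype, pred) + P[pi[alpha]][k]
        return P[pi[alpha]][k]
    if vtype[k] == "and":         # 'and' vertex: synchronizes its branches
        return max(completion(alpha, q, pi, P, vtype, pred) for q in pred[k])
    raise ValueError(f"unknown vertex type for machine {k}")
```

For a simple two-machine chain (machine 0 of type sop, machine 1 of type op), the function reduces to the classical flow shop recursion; adding an 'and' vertex merges two such chains without changing the domain or range of the function.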
Algorithm A3: Generating the set of permutations satisfying Property (12), including the iteration of the B&B method.
Begin Algorithm A3 (recursive B&B algorithm for the APFSP: PermutationB&B(α)):
 1. α is the sequential number of the job in the vector π.
 2. if α > 1 then
 3.     LB ← C^(α − 1, m) + Σ_{i=α}^{n} p(π(i), m)
 4.     if LB > LB1 then exit      ’ prune: the bound exceeds the best value found
 5. end if
 6. if α = n + 1 then
 7.     LB1 ← LB
 8.     exit
 9. end if
 10. PermutationB&B(α + 1)
 11. for i = α + 1 to n
 12.     π(α) ↔ π(i)      ’ swap the elements α and i of the vector π
 13.     PermutationB&B(α + 1)
 14.     π(α) ↔ π(i)      ’ return the elements α and i of the vector π to their previous positions
 15. next i
End Algorithm A3.
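A compact executable rendering of this branch-and-bound scheme for the plain PFSP (simple machine chain) case might look as follows in Python. This is our own sketch: the lower bound is the simple one used above (prefix completion on the last machine plus the remaining processing times on that machine), and all identifiers are ours.

```python
def branch_and_bound(P):
    """B&B over job permutations in the spirit of Algorithm A3 (PFSP case).

    P[j][k] is the processing time of job j on machine k.
    Returns (best makespan, best permutation).  Illustrative sketch only.
    """
    n, m = len(P), len(P[0])
    pi = list(range(n))
    best = [float("inf")]
    best_pi = [tuple(pi)]

    def completion_row(prefix_len):
        # completion times C(alpha, k) for the fixed prefix pi[:prefix_len]
        row = [0] * m
        for a in range(prefix_len):
            t = 0
            for k in range(m):
                t = max(t, row[k]) + P[pi[a]][k]
                row[k] = t
        return row

    def permute(alpha):
        row = completion_row(alpha)
        # lower bound: prefix completion on the last machine plus the
        # remaining processing times on that machine
        lb = row[m - 1] + sum(P[pi[i]][m - 1] for i in range(alpha, n))
        if lb >= best[0]:
            return                       # prune this branch
        if alpha == n:
            best[0] = lb                 # the bound equals the makespan here
            best_pi[0] = tuple(pi)
            return
        permute(alpha + 1)
        for i in range(alpha + 1, n):
            pi[alpha], pi[i] = pi[i], pi[alpha]   # fix job i at position alpha
            permute(alpha + 1)
            pi[alpha], pi[i] = pi[i], pi[alpha]   # undo the swap
    permute(0)
    return best[0], best_pi[0]
```

Replacing `completion_row` by a call to one of the recursive functions of the paper would yield the APFSP or PJSP variant with the same enumeration skeleton.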

References

  1. Zaied, A.N.H.; Ismail, M.M.; Mohamed, S.S. Permutation Flow Shop Scheduling Problem with Makespan Criterion: Literature Review. J. Theor. Appl. Inf. Technol. 2021, 99, 4. [Google Scholar]
  2. Henneberg, M.; Neufeld, J. A Constructive Algorithm and a Simulated Annealing Approach for Solving Flow Shop Problems with Missing Operations. Int. J. Prod. Res. 2016, 54, 3534–3550. [Google Scholar] [CrossRef]
  3. Ruiz, R.; Maroto, C.; Alcaraz, J. Two New Robust Genetic Algorithms for the Flow Shop Scheduling Problem. Omega Int. J. Manag. Sci. 2006, 34, 461–476. [Google Scholar] [CrossRef]
  4. Bargaoui, H.; Driss, O. Multi-Agent Model Based on Tabu Search for the Permutation Flow Shop Scheduling Problem. In Distributed Computing and Artificial Intelligence, 11th International Conference; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  5. Alekseeva, E.; Mezmaz, M.; Tuyttens, D.; Melab, N. Parallel Multi-Core Hyper-Heuristic Grasp to solve Permutation Flow Shop Problem. Concurr. Comput. Pract. Exp. 2017, 29, e3835. [Google Scholar] [CrossRef]
  6. Kaushik, P.; Gupta, D.; Goel, S. Comparative Study of B&B with Heuristics NEH and CDS for Bi-stage Flow Shop Scheduling Problem under Fuzzy Environment. J. Mech. Contin. Math. Sci. 2025, 20, 73–89. [Google Scholar]
  7. Rossit, D.; Toncovich, A.; Rossit, D.; Nesmachnow, S. Solving a flow shop scheduling problem with missing operations in an Industry 4.0 production environment. J. Project Manag. 2021, 6, 33–44. [Google Scholar] [CrossRef]
  8. Fu, Y.; Hou, Y.; Guo, X.; Qi, L.H. Scheduling a Permutation Flow Shop in the Context of Industry 4.0. In Proceedings of the IEEE International Conference on Networking, Sensing and Control, Shanghai, China, 15–18 December 2022. [Google Scholar] [CrossRef]
  9. Komaki, G.M.; Sheikh, S.; Malakooti, B. Flow shop scheduling problems with assembly operations: A review and new trends. Int. J. Prod. Res. 2018, 57, 2926–2955. [Google Scholar] [CrossRef]
  10. Zhao, W.-B.; Hu, J.-H.; Tang, Z.-Q. Virtual Simulation-Based Optimization for Assembly Flow Shop Scheduling Using Migratory Bird Algorithm. Biomimetics 2024, 9, 571. [Google Scholar] [CrossRef] [PubMed]
  11. Zhang, J.; Ding, G.; Zou, Y.; Qin, S.; Fu, J. Review of Job Shop Scheduling Research and its New Perspectives Under Industry 4.0. J. Intell. Manuf. 2019, 30, 1809–1830. [Google Scholar] [CrossRef]
  12. Yu, Y. A Research Review on Job Shop Scheduling Problem. E3S Web Conf. 2021, 253, 02024. [Google Scholar] [CrossRef]
  13. Nguyen, T.P.Q.; Le, T.H.T. An Innovative Genetic Algorithm-Based Master Schedule to Optimize Job Shop Scheduling Problem. J. Sci. Technol. 2024, 22, 12. [Google Scholar] [CrossRef]
  14. Gupta, S.; Phanden, R.K.; Wolde, B.; Kumar, R.; Chakraborty, A. Optimization of Job Shop Scheduling Problem Using Genetic Algorithm and Simulated Annealing: A Case Study of Manufacturing Industry. Int. J. Syst. Assur. Eng. Manag. 2024. preprint. [Google Scholar] [CrossRef]
  15. Dauzère-Pérès, S.; Ding, J.; Shen, L.; Tamssaouet, K. The Flexible Job Shop Scheduling Problem: A Review. Eur. J. Oper. Res. 2024, 314, 409–432. [Google Scholar] [CrossRef]
  16. Ignall, E.; Schrage, L.E. Application of the Branch and Bound Technique to Some Flow Shop Problems. Oper. Res. 1965, 13, 400–412. [Google Scholar] [CrossRef]
  17. Gmys, J.; Mezmaz, M.; Melab, N.; Tuyttens, D. A Computationally Efficient Branch-and-Bound Algorithm for the Permutation Flow-Shop Scheduling Problem. Eur. J. Oper. Res. 2020, 284, 814–833. [Google Scholar] [CrossRef]
  18. Kalmykov, S.A.; Shokin, U.I.; Uldashev, Z.X. Interval Analysis Methods; Science: Novosibirsk, Russia, 1986. (In Russian) [Google Scholar]
  19. Lazarev, A.A.; Gafarov, E.R. Scheduling Theory. Problems and Algorithms; MSU: Moscow, Russia, 2011. (In Russian) [Google Scholar]
  20. Ladhari, T.; Haouari, M. A Computational Study of the Permutation Flow Shop Problem Based on a Tight Lower Bound. Comput. Oper. Res. 2005, 32, 1831–1847. [Google Scholar] [CrossRef]
  21. Belabid, J.; Aqil, S.; Allali, K. Solving Permutation Flow Shop Scheduling Problem with Sequence-Independent Setup Time. J. Appl. Math. 2020. [Google Scholar] [CrossRef]
  22. Land, A.H.; Doig, A.G. An Automatic Method for Solving Discrete Programming Problems. Econometrica 1960, 28, 497–520. [Google Scholar] [CrossRef]
  23. Potts, C.N.; Strusevich, V.A. Fifty Years of Scheduling: A Survey of Milestones. J. Oper. Res. Soc. 2009, 60, 41–68. [Google Scholar] [CrossRef]
  24. Brooks, G.H.; White, C.R. An Algorithm for Finding Optimal or Near Optimal Solutions to the Production Scheduling Problem. J. Ind. Eng. 1965, 16, 34–40. [Google Scholar]
Figure 1. Example of a permutation flow shop problem graph: (a) a simple chain of the machines; (b) the expanded graph.
Figure 2. Examples of diagrams of feasible schedules.
Figure 3. Cases of interval intersections: (a) T is completely to the left of T0; (b) T is to the left of T0 and intersects it; (c) T0 is inside T; (d) T is inside T0; (e) T is to the right of T0 and intersects it; (f) T is completely to the right of T0.
Figure 4. The case of interval intersection when T0(j, k) = [0, Tmax].
Figure 5. Two traversal options defined by two types of a recursive function: (a) row traversal; (b) column traversal.
Figure 6. A complete permutation tree satisfying Property (12).
Figure 7. The ‘and’ function: (a) traditional representation; (b) predicate representation.
Figure 8. An example of a PFSP graph with the ‘and’ function: (a) precedence graph between the machines; (b) precedence graph between the machines and jobs; (c) superposition graph of the recursive function.
Figure 9. Plots of the numerical experiments.
Figure 10. Examples of job shop precedence graphs: (a) graph of the original problem; (b) graph with rearranged jobs 3 and 5.
Figure 11. An example of splitting one job into two: (a) graph of the original problem; (b) graph with a split job.
Figure 12. Two simple chains of machines that are part of job j.
Figure 13. Splitting job j into two jobs, each containing one simple chain of machines.
Figure 14. Graph with additional attributes and traversal order: (a) graph with added machine m + 1; (b) traversal order for the modified graph.
Table 1. Table of job processing times.

Machine\Job   1   2   3
     1        3   2   2
     2        3   1   3
     3        3   2   2
     4        3   1   2
Table 2. Intervals of job completion times for the machines with feasible schedules.

Machine\Job      1          2          3
     1        [3, 4]     [5, 7]     [7, 9]
     2        [6, 7]     [7, 9]     [10, 12]
     3        [9, 10]    [11, 12]   [13, 14]
     4        [12, 13]   [13, 14]   [15, 16]
Table 3. Sequence of actions of the algorithm for n = 3 without the rejection of branches.

Recursion   Algorithm      π       π Fixed     π        Computed
  Level     Argument                 Part    "Tail"     Top Row
    0           1       (1,2,3)     ()       (1,2,3)       –
    1           2       (1,2,3)     (1)      (2,3)         1
    2           3       (1,2,3)     (1,2)    (3)           2
    3           4       (1,2,3)     (1,2,3)  ()            3
    2           3       (1,3,2)     (1,3)    (2)           2
    3           4       (1,3,2)     (1,3,2)  ()            3
    1           2       (2,1,3)     (2)      (1,3)         1
    2           3       (2,1,3)     (2,1)    (3)           2
    3           4       (2,1,3)     (2,1,3)  ()            3
    2           3       (2,3,1)     (2,3)    (1)           2
    3           4       (2,3,1)     (2,3,1)  ()            3
    1           2       (3,2,1)     (3)      (2,1)         1
    2           3       (3,2,1)     (3,2)    (1)           2
    3           4       (3,2,1)     (3,2,1)  ()            3
    2           3       (3,1,2)     (3,1)    (2)           2
    3           4       (3,1,2)     (3,1,2)  ()            3
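The traversal order recorded in Table 3 can be reproduced by directly transliterating the swap scheme of Algorithm A3 into Python, with the bounding step omitted. This is an illustrative sketch of ours, not code from the paper.

```python
def permutation_order(n):
    """Order in which Algorithm A3 visits complete permutations of n jobs."""
    pi = list(range(1, n + 1))
    out = []
    def rec(alpha):
        if alpha == n:                 # the whole vector is fixed
            out.append(tuple(pi))
            return
        rec(alpha + 1)                 # keep the current job at position alpha
        for i in range(alpha + 1, n):  # try every remaining job at position alpha
            pi[alpha], pi[i] = pi[i], pi[alpha]
            rec(alpha + 1)
            pi[alpha], pi[i] = pi[i], pi[alpha]
    rec(0)
    return out

print(permutation_order(3))
# -> [(1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,2,1), (3,1,2)]
```

The printed sequence matches the complete permutations listed in Table 3 line by line, which confirms that the swap/recurse/swap-back scheme enumerates each permutation exactly once.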