Preprint
Article

This version is not peer-reviewed.

On the Recursive Representation of the Permutation Flow Shop Scheduling Problem and Its Extension

Submitted:

26 November 2024

Posted:

27 November 2024


Abstract

In this paper, we propose a formulation of the permutation flow shop scheduling problem (PFSP) by means of special recursive functions and show its equivalence to the existing classical formulation. Equivalence is understood in the sense that both ways of defining the problem describe the same set of feasible schedules for each pair of job and machine numbers. This paper thus attempts to apply the apparatus of recursive functions to scheduling problems. Moreover, the predicate ”and” is introduced into the problem, and the PFSP is extended from a chain of machines to an acyclic graph. To construct an optimal schedule, a branch and bound method based on the recursive function is considered, which is combined with a corresponding job permutation algorithm. The complexity of the optimization algorithm does not increase compared to the non-recursive PFSP. Finally, we present some initial computational results.


1. Introduction

There exist a huge number of publications devoted to the permutation flow shop problem (PFSP) or to the more general case of the flow shop problem (FSP) in the scheduling literature. In both the PFSP and the FSP, all jobs must be processed on a set of $m$ machines according to a fixed technological order. In the PFSP, each of the $n$ jobs must be processed in the same sequence, while in the FSP, different job sequences can be selected to satisfy a specific optimization objective function. Often the makespan has to be minimized, i.e., the maximum completion time of a full set of jobs should be minimized. The PFSP is denoted as $F \mid prmu \mid C_{max}$. A corresponding review can be found, e.g., in [1]. Most works deal with various heuristics and metaheuristics, see e.g. [2,3,4,5].
This article discusses a new approach through the definition of recursive functions to describe the timing characteristics of the PFSP. The concept of interval calculations is used to describe time domains. The equivalence of the PFSP representation in the existing and the recursive form is proved. Here, equivalence is understood as the equality of the sets of feasible schedules for each pair of job number and machine number. Then the transition from interval recursive functions to scalar ones is carried out. It is shown that with a fixed order of operations, the scalar functions calculate an optimal schedule on the set of feasible schedules.
Moreover, the recursive representation of the PFSP is extended by including the predicate ”and”, which allows one to move from chains of machines to trees and acyclic graphs. The branch and bound method (B&B) [6] using the introduced recursive function is considered as an exact optimization method. A description of one variant of a lower bound on the makespan and of the branching method is given. The branching method is based on an original permutation generator with a given property. The permutation algorithm and the recursive implementation of the B&B method are described. It is shown that the complexity of the algorithm does not increase compared to the known B&B algorithms for the PFSP [6,10].
The rest of this paper is organized as follows. Section 2 reviews some basics of interval calculations relevant for the PFSP. Section 3 proves the equivalence of the existing and the recursive PFSP representations. Section 4 provides a method for implementing the B&B algorithm for the recursive PFSP. Section 5 gives the definition of the ”and” function and the extended formulation of the problem. Then Section 6 discusses the B&B method for the extended formulation of the problem. Section 7 presents some initial results with the suggested algorithm.

2. Interval Values and Operations

Interval calculations in mathematics have been known for a long time, see, for example, [7]. In our case, this theory is used at the level of definitions and properties of some interval operations. We will consider the set of values of a time interval as a segment of integers on the number line with start and end points included.
Definition 1. 
Let $N = \{0, 1, 2, 3, \dots, \hat{N}\}$ be a finite set of natural numbers from one to $\hat{N}$ with zero added. Everywhere below, by an interval or domain $[a, b]$, $a \le b$, unless otherwise stated, we denote the closed bounded subset of $N$ of the form
$$[a, b] = \{x \mid x \in N,\ a \le x \le b\}.$$
The set of all intervals is denoted by $I$. We will write the elements of $I$ in capital letters. If $A \in I$, then we will denote its left and right end points as $\underline{a}$ and $\overline{a}$: $A = [\underline{a}, \overline{a}]$. We will call the elements of $I$ interval numbers. Two intervals $A$ and $B$ are equal if and only if $\underline{a} = \underline{b}$ and $\overline{a} = \overline{b}$. According to the total interval approach [7], we can describe operations as follows. Let $\circ$ be some operation and $A, B \in I$. In this case, we have
$$A \circ B = \{a \circ b \mid a \in A,\ b \in B\}.$$
If $A = [\underline{a}, \overline{a}]$ and $B = [\underline{b}, \overline{b}]$, then $A \circ B = [\underline{a} \circ \underline{b}, \overline{a} \circ \overline{b}]$. We will use only two interval operations: ”+” and ”max”. They are defined as follows:
$$A + B = [\underline{a} + \underline{b}, \overline{a} + \overline{b}]; \quad \max\{A, B\} = [\max\{\underline{a}, \underline{b}\}, \max\{\overline{a}, \overline{b}\}].$$
To illustrate, consider the following example.
Example 1. 
Let $A = [12, 17]$ and $B = [3, 5]$. Then
$$A + B = [12, 17] + [3, 5] = [12 + 3, 17 + 5] = [15, 22];$$
$$\max\{A, B\} = \max\{[12, 17], [3, 5]\} = [\max\{12, 3\}, \max\{17, 5\}] = [12, 17].$$
If $A = [6, 9]$ and $B = [3, 12]$, then
$$\max\{A, B\} = \max\{[6, 9], [3, 12]\} = [\max\{6, 3\}, \max\{9, 12\}] = [6, 12].$$
While in the first case, the interval A is obtained by the operation max, in the second case a subinterval of B results.
To illustrate, below are some further examples of the max operation for various options of interval overlap:
$$\max\{[1, 4], [2, 3]\} = [2, 4];$$
$$\max\{[2, 5], [3, 6]\} = [3, 6];$$
$$\max\{[5, 15], [0, 0]\} = [5, 15].$$
A confluent interval, i.e., an interval with coinciding end points $a = \underline{a} = \overline{a}$, is identical to the integer $a$. Thus, $N \subset I$. If $A$ and $B$ are confluent intervals, then the results of the operations on intervals coincide with the ordinary arithmetic operations for positive integers. An interval integer is a generalization of an integer, and interval integer arithmetic is a generalization of integer arithmetic.
Note also that for $B = [b, b]$, we also write for simplicity $A + B = A + b$. In the theory of intervals, the role of zero is played by the usual 0, which is identified with the confluent interval $[0, 0]$. In other words,
$$A + 0 = 0 + A = A, \quad \max\{A, 0\} = \max\{0, A\} = A, \quad A \in I.$$
Calculations can lead to an interval $[a, b]$ with $a > b$, which, according to Definition 1, represents the empty set $\emptyset$.
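The interval arithmetic above can be made concrete in a few lines of code. The following is a minimal sketch (the names `Interval` and `imax` are ours, not from the paper):

```python
from typing import NamedTuple

class Interval(NamedTuple):
    lo: int  # left end point (underlined a)
    hi: int  # right end point (overlined a)

    def __add__(self, other: "Interval") -> "Interval":
        # A + B = [a_lo + b_lo, a_hi + b_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def is_empty(self) -> bool:
        # lo > hi encodes the empty set (Definition 1)
        return self.lo > self.hi

def imax(a: Interval, b: Interval) -> Interval:
    # max{A, B} = [max{a_lo, b_lo}, max{a_hi, b_hi}]
    return Interval(max(a.lo, b.lo), max(a.hi, b.hi))
```

Run against Example 1, this reproduces $[12,17] + [3,5] = [15,22]$ and $\max\{[6,9], [3,12]\} = [6,12]$.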

3. Solving the Permutation Flow Shop Equivalence Problem

Consider the description of the PFSP [8]. Let J = { 1 , 2 , , n } be the set of jobs and M = { 1 , 2 , , m } be the set of machines arranged sequentially. Moreover, let p j , k be the processing time of job j on machine k.
The precedence graph of such a problem (see Figure 1a) is a simple chain in which the vertices are machines, and the edges define precedence relations between the machines. This graph can be ”expanded” if we fix some order of the jobs and define precedence relations between the jobs J × J and between the machines M × M . Any vertex of such a graph is identified with the pair (job number, machine number). An expanded graph describing precedence relations between the jobs is presented in Figure 1b.
The vertical edges define precedence relations between the jobs for a particular machine. The horizontal edges define precedence relations between the machines for a fixed job. This graph is acyclic and has one start and one end vertex. Consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., for simplicity of the description consider the schedule ( 1 , 2 , . . . , n ) with the new numbering. The following assumptions are valid for problem PFSP:
  • Job assumptions: A job cannot be processed by more than one machine at a time; each job must be processed in accordance with the machine precedence relationship; the job is processed as early as possible, depending on the order of the machines; all jobs are equally important, meaning there are no priorities, deadlines or urgent orders.
  • Machine assumptions: no machine can process more than one job at a time; after starting a job, it must be completed without interruption; there is only one machine of each type; no job is processed more than once by any machine.
  • Assumptions about the processing times: the processing time of each job on each machine does not depend on the sequence in which the jobs are processed; the processing time of each job on each machine is specific and integer; the transport time between machines and a setup time, if any, are included in the processing time.
Additional time limits can be set in scheduling problems. Denote by $T(j,k) = [\underline{t}_{j,k}, \overline{t}_{j,k}]$ the interval completion time of job $j$ on the $k$th machine: $\underline{t}_{j,k}$ is the minimal time for the completion of job $j$ on the $k$th machine, and $\overline{t}_{j,k}$ is the corresponding maximal time. The completion time $t_{j,k}$ must satisfy the condition $\underline{t}_{j,k} \le t_{j,k} \le \overline{t}_{j,k}$, otherwise the schedule is infeasible. In fact, this is the interval function of the completion times of job $j$ on machine $k$. We will assume that in the initial formulation of the problem, a matrix $T^0(j,k) = [\underline{t}^0_{j,k}, \overline{t}^0_{j,k}]$ of initial domains of the feasible values for the job completion times has been defined. We will assume, unless otherwise specified, that the initial domains are the interval $[0, T_{max}]$, where $T_{max}$ is a sufficiently large finite value which will be discussed below. We describe two additional restrictions [8]:
  • $r_j$ is the moment when job $j$ arrives for processing, i.e., its release date. This parameter defines the point in time from which job $j$ can be scheduled for execution, but its processing does not necessarily begin at this moment. In this case, in the matrix of initial domains, it is necessary to put $T^0(j,k) = [r_j + p_{j,k}, T_{max}]$ for $j \in J$ and $1 \le k \le m$.
  • $D_j$ is the deadline for processing job $j$. A deadline cannot be violated, and any schedule that contains a job that finishes after its deadline is infeasible. In this case, in the matrix of initial domains, it is necessary to put $T^0(j,k) = [0, D_j]$ for $j \in J$ and $1 \le k \le m$.
The combination of the constraints $r_j$ and $D_j$ generates initial domains of the form $T^0(j,k) = [r_j + p_{j,k}, D_j]$ for $j \in J$ and $1 \le k \le m$. If these restrictions relate to specific operations, they are written as $r_{j,k}$ and $D_{j,k}$. In this case, the following time relations will be satisfied:
$$t_{1,1} \ge p_{1,1}, \quad t_{1,2} - t_{1,1} \ge p_{1,2}, \quad \dots, \quad t_{1,m} - t_{1,m-1} \ge p_{1,m}; \qquad (2)$$
$$t_{2,1} - t_{1,1} \ge p_{2,1}, \quad t_{2,2} - \max(t_{1,2}, t_{2,1}) \ge p_{2,2}, \quad \dots, \quad t_{2,m} - \max(t_{1,m}, t_{2,m-1}) \ge p_{2,m}; \qquad (3)$$
$$\vdots$$
$$t_{n-1,1} - t_{n-2,1} \ge p_{n-1,1}, \quad t_{n-1,2} - \max(t_{n-2,2}, t_{n-1,1}) \ge p_{n-1,2}, \quad \dots, \quad t_{n-1,m} - \max(t_{n-2,m}, t_{n-1,m-1}) \ge p_{n-1,m}; \qquad (4)$$
$$t_{n,1} - t_{n-1,1} \ge p_{n,1}, \quad t_{n,2} - \max(t_{n-1,2}, t_{n,1}) \ge p_{n,2}, \quad \dots, \quad t_{n,m} - \max(t_{n-1,m}, t_{n,m-1}) \ge p_{n,m}. \qquad (5)$$
Here we take into account the fact that the interval time $B(j,k)$ for the start of job $j$ on the $k$th machine is calculated by the formula
$$B(j,k) = [\underline{t}_{j,k}, \overline{t}_{j,k}] - p_{j,k} = [\underline{t}_{j,k} - p_{j,k}, \overline{t}_{j,k} - p_{j,k}], \quad 1 \le j \le n,\ 1 \le k \le m.$$
The above inequalities are supplemented by inequalities of the form $\underline{t}_{j,k} \le \overline{t}_{j,k}$.
The system of inequalities (2)-(5) defines an infinite set of feasible schedules. To reduce this set to a finite one, we use a positive integer constant $T_{max}$ and supplement the system (2)-(5) with the inequalities
$$\overline{t}_{j,k} \le T_{max}, \quad 1 \le j \le n,\ 1 \le k \le m. \qquad (6)$$
It is possible to assign to T m a x a value so small that the set of feasible schedules is empty. We assume that the value of T m a x is such that there is at least one feasible schedule.
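The construction of the matrix of initial domains from the release dates $r_j$, the deadlines $D_j$, and $T_{max}$ can be sketched as follows (an illustration only; the function name and calling convention are our own):

```python
def initial_domains(p, r=None, D=None, T_max=10**6):
    """Build the matrix T0(j, k) of initial domains as (lo, hi) pairs.

    Without extra constraints the domain is [0, T_max]; a release date
    r_j raises the lower end point to r_j + p_{j,k}, and a deadline D_j
    lowers the upper end point to D_j, as described in the text.
    """
    n, m = len(p), len(p[0])
    T0 = []
    for j in range(n):
        hi = D[j] if D is not None else T_max        # deadline caps the upper end
        T0.append([((r[j] + p[j][k]) if r is not None else 0, hi)
                   for k in range(m)])               # release date lifts the lower end
    return T0
```

For instance, with a release date $r_2 = 7$ the lower end points of the second job's domains become $7 + p_{2,k}$, while the upper end points stay at the deadline (or $T_{max}$).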
Example 2. 
Consider a PFSP with 3 machines and 4 jobs.
Table 1 shows the processing times p j , k of the jobs. The rows correspond to the job numbers, and the columns represent the machine numbers.
Let T m a x = 16 . In this case, the intervals for each pair ( j , k ) are given in Table 2.
Figure 2 shows the Gantt charts of three variants from the set of feasible schedules. ”Schedule 1” and ”Schedule 2” are optimal schedules (for this sequence), while ”Schedule 2” and ”Schedule 3” have different inserted idle times. It is easy to verify that increasing any interval in Table 2 will lead to the appearance of infeasible schedules, and decreasing any interval will result in the loss of feasible schedules.
Let $T(j,k)$ be the interval calculated by some algorithm for completing the $j$th job on the $k$th machine and $T^0(j,k)$ be the corresponding initial domain. According to Figure 3, we define the noncommutative interval intersection operation ($\cap$) as $T'(j,k) = T(j,k) \cap T^0(j,k)$. The figure shows all significant cases of interval intersections:
a)
the interval $T(j,k)$ is to the left of $T^0(j,k)$. The completion time of the $j$th job on the $k$th machine should be artificially increased to values from the interval $T^0(j,k)$. Only in this case will the schedule be feasible;
b)-e)
are clear without additional explanation;
f)
with this arrangement of intervals, the interval $T'(j,k)$ is equal to the empty set, which corresponds to the absence of a feasible schedule.
Thus, we have:
$$T'(j,k) = \begin{cases} \emptyset, & \overline{T}^0(j,k) < \underline{T}(j,k); \\ \left[\max\{\underline{T}(j,k), \underline{T}^0(j,k)\},\ \overline{T}^0(j,k)\right], & \text{otherwise}. \end{cases} \qquad (7)$$
It can be shown that all feasible sets of time values are uniquely determinable and maximal. Therefore, when searching for an optimal solution, no solution will be lost.
All precedence relations define intervals (including the scalar $c = [c, c]$). Again, for simplicity of the description, consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule $(1, 2, \dots, n)$ with the new numbering. The calculations of these intervals can be divided into four groups. Next, we write them using interval variables, assuming that the initial interval is $T^0(j,k)$ and $\overline{T}^0(j,k) \le T_{max}$:
  • For the completion of the 1st job on the 1st machine ($j = 1$, $k = 1$), we have:
    $T(1,1) = [p_{1,1}, p_{1,1}]$, $T'(1,1) = T(1,1) \cap T^0(1,1)$.
  • For the completion of the 1st job on the $k$th machine ($j = 1$, $1 < k \le m$), we have:
    $T(1,k) = T'(1,k-1) + [p_{1,k}, p_{1,k}]$, $T'(1,k) = T(1,k) \cap T^0(1,k)$.
  • For the completion of job $j$ on the 1st machine ($1 < j \le n$, $k = 1$), we have:
    $T(j,1) = T'(j-1,1) + [p_{j,1}, p_{j,1}]$, $T'(j,1) = T(j,1) \cap T^0(j,1)$.
  • The interval of the completion time of job $j$ on the $k$th machine ($1 < j \le n$, $1 < k \le m$) is obtained as follows:
    $T(j,k) = \max\{T'(j-1,k), T'(j,k-1)\} + [p_{j,k}, p_{j,k}]$, $T'(j,k) = T(j,k) \cap T^0(j,k)$.
The PFSP can be formulated using a finite set of special recursive functions. This is due to the existence of precedence relationships between the machines and jobs in the PFSP when the current characteristics of the problem are determined by the preceding ones. Recursive functions have arguments and parameters as data. The arguments of the functions are pairs (job number, machine number). They are passed as function arguments in the normal mathematical sense. Parameters are defined outside the recursive function, and they are essentially global variables in imperative programming terms. Examples of parameters can be: n , m , T m a x , p j , k , T 0 ( j , k ) , and so on. The calculated value (or set of values) of the function is the completion time of job j on the kth machine.
We will denote the interval recursive function as $T: J \times M \to I$. Specific types of recursive functions will be provided subsequently as the PFSP is considered. Let us consider the problem of equivalence of the existing PFSP formulation in the literature and its recursive representation. The equivalence of both formulations is established through the equivalence of the domains of feasible values of the completion times for each pair (job number, machine number). An optimal solution lies within the allowed processing times. This fact allows us to move to the functional representation of the problem and to apply various appropriate methods. We can describe each group by the corresponding recursive functions:
  • $T(1,1) = [p_{1,1}, p_{1,1}] \cap T^0(1,1)$; $j = 1$, $k = 1$.
  • $T(1,k) = (T(1,k-1) + [p_{1,k}, p_{1,k}]) \cap T^0(1,k)$; $j = 1$, $1 < k \le m$.
  • $T(j,1) = (T(j-1,1) + [p_{j,1}, p_{j,1}]) \cap T^0(j,1)$; $1 < j \le n$, $k = 1$.
  • $T(j,k) = (\max\{T(j-1,k), T(j,k-1)\} + [p_{j,k}, p_{j,k}]) \cap T^0(j,k)$; $1 < j \le n$, $1 < k \le m$.
This set of functions can be combined into one function:
$$T(j,k) = \begin{cases} [p_{1,1}, p_{1,1}] \cap T^0(1,1), & j = 1,\ k = 1; \\ (T(1,k-1) + [p_{1,k}, p_{1,k}]) \cap T^0(1,k), & j = 1,\ 1 < k \le m; \\ (T(j-1,1) + [p_{j,1}, p_{j,1}]) \cap T^0(j,1), & 1 < j \le n,\ k = 1; \\ (\max\{T(j-1,k), T(j,k-1)\} + [p_{j,k}, p_{j,k}]) \cap T^0(j,k), & 1 < j \le n,\ 1 < k \le m. \end{cases} \qquad (8)$$
The function $T(j,k)$ is defined for all values of $j$ and $k$. If $T(j,k) = \emptyset$, then there is no feasible schedule. It can be seen from the formula that it calculates a unique interval for each pair. We show that this recursive function defines the same domain for each $(j,k)$ as the set of values defined by the system of inequalities (2)-(6).
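For illustration, the combined recursive function (8), together with the noncommutative intersection defined above, admits a direct memoized implementation. This is a sketch under the stated assumptions (all names are ours), with intervals encoded as `(lo, hi)` pairs and any pair with `lo > hi` treated as the empty set:

```python
from functools import lru_cache

EMPTY = (1, 0)  # any pair with lo > hi encodes the empty interval

def intersect(T, T0):
    """Noncommutative intersection T' = T ∩ T0 described above."""
    (lo, hi), (lo0, hi0) = T, T0
    if hi0 < lo:          # T lies entirely to the right of T0: no feasible time
        return EMPTY
    return (max(lo, lo0), hi0)

def interval_completion_times(p, T0):
    """Interval completion times T'(j, k) per the combined recursive
    function; p is the n x m processing-time matrix, T0 the matrix of
    initial domains, and j, k are 1-based as in the paper."""
    n, m = len(p), len(p[0])

    @lru_cache(maxsize=None)
    def T(j, k):
        if j == 1 and k == 1:
            cur = (p[0][0], p[0][0])
        elif j == 1:
            lo, hi = T(1, k - 1)
            cur = (lo + p[0][k - 1], hi + p[0][k - 1])
        elif k == 1:
            lo, hi = T(j - 1, 1)
            cur = (lo + p[j - 1][0], hi + p[j - 1][0])
        else:
            (alo, ahi), (blo, bhi) = T(j - 1, k), T(j, k - 1)
            cur = (max(alo, blo) + p[j - 1][k - 1],
                   max(ahi, bhi) + p[j - 1][k - 1])
        return intersect(cur, T0[j - 1][k - 1])

    return [[T(j, k) for k in range(1, m + 1)] for j in range(1, n + 1)]
```

Note that when all initial domains equal $[0, T_{max}]$, every computed interval keeps the upper end point $T_{max}$, since the noncommutative intersection always inherits the upper end of the initial domain.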
Theorem 1. 
Let the following conditions be satisfied for both the PFSP given by the system of inequalities (2)-(6) and the recursive PFSP:
  • one graph is set for both problems (see Figure 1a);
  • consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule ( 1 , 2 , . . . , n ) with the new numbering;
  • the processing time of job j on the kth machine is p j , k ;
  • the same matrix of initial time intervals is specified:
    $$T^0(j,k) = [\underline{t}^0_{j,k}, \overline{t}^0_{j,k}].$$
Then the following statement is true: the set of domains (domains of definition) $D(j,k)$ of feasible times for the non-recursive PFSP coincides with the set of domains $D'(j,k)$ for the recursive PFSP, i.e., $D(j,k) = D'(j,k)$ for $1 \le j \le n$, $1 \le k \le m$.
In other words, the sets of feasible schedules for each pair (job number, machine number) are the same, or in both cases there is no feasible schedule, i.e., there exist $j$ and $k$ such that $D(j,k) = D'(j,k) = \emptyset$.
Proof. 
Let the conditions of the theorem be satisfied. The proof is done by complete induction. Let us recall the formulation of the principle of complete induction. Let there be a sequence of statements $P_1, P_2, P_3, \dots$. If for any natural $i$ the fact that all $P_1, P_2, P_3, \dots, P_{i-1}$ are true also implies that $P_i$ is true, then all statements in this sequence are true. So for $i \in \mathbb{N}$, $i \ge 2$, we have
$$\big( (\forall j \in \{1, \dots, i-1\})\, P_j \Rightarrow P_i \big) \Rightarrow (\forall i \in \mathbb{N})\, P_i.$$
Here $x \Rightarrow y$ denotes a logical implication. The statement $P_i$ is the equality of the intervals $D(j,k) = D'(j,k)$. Let us apply this method to the graph in Figure 1b. From graph theory, it is known that the vertices of an acyclic graph are strictly partially ordered. There exist algorithms for constructing a linear order for such a strictly partially ordered set of vertices. Informally, a linear order is obtained when all the vertices of a graph are arranged in a line and all edges are directed from left to right. Any such algorithm is suitable because:
- the first vertex is always $(1,1)$ of the original graph,
- the last vertex is always ( n , m ) ,
- when evaluating the recursive function according to a linear order, its arguments have already been evaluated.
Let us choose a row-by-row traversal. In this case, for a pair $(j,k)$, all pairs $(j', k')$ with $j' < j$, or with $j' = j$ and $k' < k$, have already been considered. We will traverse the elements of the matrix of interval variables $T$ in the rows from left to right and from the top row to the bottom one.
  • Let us prove the statement $P_1$. From the system of inequalities (2)-(5), it follows that
    $$T(1,1) = [p_{1,1}, p_{1,1}] \cap T^0(1,1).$$
    From the definition of the recursive function, it follows that
    $$T'(1,1) = [p_{1,1}, p_{1,1}] \cap T^0(1,1).$$
    Therefore, we have $T(1,1) = T'(1,1)$.
  • Let us assume that the statements $P_1, \dots, P_{i-1}$ are true. We now prove that the statement $P_i$ is also true. Among the 4 groups of relationship types described above, the first type has been considered for $j = 1$, $k = 1$.
  • Let us analyze the remaining 3 types.
    3a)
    Processing of the 1st job on the $k$th machine ($j = 1$, $1 < k \le m$):
    In this case, the problem satisfies the equality
    $$T(1,k) = (T(1,k-1) + [p_{1,k}, p_{1,k}]) \cap T^0(1,k).$$
    In turn,
    $$T'(1,k) = (T'(1,k-1) + [p_{1,k}, p_{1,k}]) \cap T^0(1,k),$$
    and due to the condition $T(1,k-1) = T'(1,k-1)$, we get
    $$T'(1,k) = (T(1,k-1) + [p_{1,k}, p_{1,k}]) \cap T^0(1,k) = T(1,k).$$
    Therefore, we have $T(1,k) = T'(1,k)$.
    3b)
    Processing of job $j$ on the 1st machine ($1 < j \le n$, $k = 1$): In this case, the problem satisfies the equality
    $$T(j,1) = (T(j-1,1) + [p_{j,1}, p_{j,1}]) \cap T^0(j,1).$$
    In case of recursion,
    $$T'(j,1) = (T'(j-1,1) + [p_{j,1}, p_{j,1}]) \cap T^0(j,1).$$
    Due to the condition $T(j-1,1) = T'(j-1,1)$, we finally get
    $$T'(j,1) = (T(j-1,1) + [p_{j,1}, p_{j,1}]) \cap T^0(j,1) = T(j,1).$$
    Therefore, we have $T(j,1) = T'(j,1)$.
    3c)
    Processing of job $j$ on the $k$th machine ($1 < j \le n$, $1 < k \le m$): here the induction hypothesis provides the equalities
    $$T(j-1,k) = T'(j-1,k), \quad T(j,k-1) = T'(j,k-1).$$
    In this case,
    $$T'(j,k) = (\max\{T'(j-1,k), T'(j,k-1)\} + [p_{j,k}, p_{j,k}]) \cap T^0(j,k) = (\max\{T(j-1,k), T(j,k-1)\} + [p_{j,k}, p_{j,k}]) \cap T^0(j,k) = T(j,k).$$
    Therefore, we have $T(j,k) = T'(j,k)$.
All possible situations have been analyzed. □
The main optimization problem in the PFSP is to find the order of job processing so that for some selected criterion, the function value is optimal. The most common criterion is the makespan (i.e., the total time it takes to complete the whole set of jobs). We remind that the vast majority of flow shop problems are treated as a PFSP, see for example [9,10,11].
To effectively solve the problem of calculating an optimal schedule, it is necessary to switch from the interval recursive function (8) to a scalar recursive function $C: J \times M \to N$. Consider the case when all elements of the matrix of initial time intervals are equal to $[0, T_{max}]$. In this case, all the cases of interval intersections shown in Figure 3 reduce to case d) and look like in Figure 4. The value of $\overline{T}(j,k)$ will always be equal to $T_{max}$, and the interval value can be replaced by a scalar equal to the lower end point of the interval, $\underline{T}(j,k)$. Then the recursive function (8) will look as follows:
$$C(j,k) = \begin{cases} p_{1,1}, & j = 1,\ k = 1; \\ \underline{T}(1,k-1) + p_{1,k}, & j = 1,\ k > 1; \\ \underline{T}(j-1,1) + p_{j,1}, & j > 1,\ k = 1; \\ \max(\underline{T}(j-1,k), \underline{T}(j,k-1)) + p_{j,k}, & j > 1,\ k > 1; \end{cases} \qquad (9)$$
or finally
$$C(j,k) = \begin{cases} p_{1,1}, & j = 1,\ k = 1; \quad \text{a)} \\ C(1,k-1) + p_{1,k}, & j = 1,\ 1 < k \le m; \quad \text{b)} \\ C(j-1,1) + p_{j,1}, & 1 < j \le n,\ k = 1; \quad \text{c)} \\ \max\{C(j-1,k), C(j,k-1)\} + p_{j,k}, & 1 < j \le n,\ 1 < k \le m. \quad \text{d)} \end{cases} \qquad (10)$$
The function C ( j , k ) calculates the completion time of job j on the kth machine. The scalar function (10) is obtained from the interval function (8) by replacing the interval with the lowest value of the interval.
In Example 2 above, C ( j , k ) computes only one schedule with minimum completion time, which corresponds to the ”Schedule 1” diagram in Figure 2.
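For a fixed job order, the scalar function (10) is a simple dynamic program over the $n \times m$ matrix of processing times; a minimal sketch (names are ours):

```python
def completion_times(p):
    """C(j, k) of formula (10) for the fixed job order 1..n;
    p is the n x m matrix of processing times (0-based inside)."""
    n, m = len(p), len(p[0])
    C = [[0] * m for _ in range(n)]
    for j in range(n):
        for k in range(m):
            if j == 0 and k == 0:
                C[j][k] = p[0][0]                       # case a)
            elif j == 0:
                C[j][k] = C[0][k - 1] + p[0][k]         # case b)
            elif k == 0:
                C[j][k] = C[j - 1][0] + p[j][0]         # case c)
            else:                                       # case d)
                C[j][k] = max(C[j - 1][k], C[j][k - 1]) + p[j][k]
    return C
```

The makespan of the fixed order is then `C[n-1][m-1]`, i.e., $C(n, m)$.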
The analysis of formula (10) shows that when calculating the function C ( n , m ) , the traversal of pairs ( j , k ) is carried out in the order indicated in Figure 5a.
However, if the function (10) is replaced by an equivalent function (11) (the difference is in the expression 11d), then the pairs will be traversed according to the scheme in Figure 5b.
$$C(j,k) = \begin{cases} p_{1,1}, & j = 1,\ k = 1; \quad \text{a)} \\ C(1,k-1) + p_{1,k}, & j = 1,\ 1 < k \le m; \quad \text{b)} \\ C(j-1,1) + p_{j,1}, & 1 < j \le n,\ k = 1; \quad \text{c)} \\ \max\{C(j,k-1), C(j-1,k)\} + p_{j,k}, & 1 < j \le n,\ 1 < k \le m. \quad \text{d)} \end{cases} \qquad (11)$$
Theorem 2. 
Let the PFSP include $m$ machines and $n$ jobs. Consider some schedule and then renumber the jobs in the order in which they are located in the schedule, i.e., consider the schedule $(1, 2, \dots, n)$ with the new numbering. In this case, the following equality is true for any $j$ ($1 \le j \le n$) and any $k$ ($1 \le k \le m$):
$$\underline{T}(j,k) = C(j,k).$$
The point of this theorem is that the scalar function calculates the minimum completion time for any given sequence of jobs. This follows from the principle of the function construction.
Proof. 
If in the formula (9) C ( j , k ) is replaced by T ̲ ( j , k ) , then a formula for calculating T ̲ ( j , k ) is obtained. The direct comparison of the resulting formula with the formula (10) for calculating C ( j , k ) proves the validity of the theorem. □
The following important theorem follows from Theorem 2.
Theorem 3. 
Let the PFSP be defined for $m$ machines and $n$ jobs. In addition, let $\Pi$ be the set of all permutations of the $n$ jobs and $\pi^* = (\alpha_1, \alpha_2, \dots, \alpha_n) \in \Pi$ be the permutation corresponding to the minimum completion time of all jobs, computed as $\underline{T}(\alpha_n, m)$ for $\pi^*$. For the PFSP, such a permutation satisfies the equality
$$\underline{T}(\alpha_n, m) = C(\alpha_n, m).$$
The meaning of the theorem is that the minimum completion time of all jobs, computed using interval calculations, is equal to the value of the scalar recursive function for the same permutation.
Proof. 
The set of all permutations Π is finite and has the cardinality n ! . In this case, as it has been proven by Theorem 2, for any permutation π Π , the equality T ̲ ( α n , m ) = C ( α n , m ) holds. Therefore, it also holds for the permutation π * . □
Let us look at Example 2 again. If, after calculating the intervals in Table 2, we introduce a constraint of the form r 3 = 7 (the moment the job is received for processing), this will lead to a change in the intervals for the 3rd job as follows: [ 10 , 10 ] , [ 12 , 12 ] , [ 14 , 14 ] , and for the 4th job to [ 13 , 13 ] , [ 14 , 14 ] , [ 16 , 16 ] . Among the schedules given in the example in Figure 2, only ”Schedule 3” will remain feasible. The other schedules will become infeasible. Evaluating the recursive function will not result in a feasible schedule. This example demonstrates the importance of including constraints in the initial value matrix when evaluating the recursive function.
Theorem 3 states that when computing an optimal schedule for the PFSP, one can move from interval calculations to the scalar recursive function C ( j , k ) (10), although the cardinality of the set of values of the function C ( j , k ) is less than or equal to the cardinality of the set of values of the function T ( j , k ) . If the function T ̲ ( n , m ) has several minima, then the function C ( n , m ) will have the same number of minima.
Denote by the permutation $\pi = (\alpha_1, \alpha_2, \dots, \alpha_n)$ the order of job numbers, where $\alpha_i$, $i = 1, 2, \dots, n$, is the number of the job at position $i$ in the schedule (permutation) $\pi$. The recursive function was defined above for a fixed order of processing the jobs. We define new functions $\hat{T}: \hat{J} \times M \to I$ and $\hat{C}: \hat{J} \times M \to N$, where $\hat{J}$ is the set of job position numbers and $\alpha \in \hat{J}$. We include the permutation $\pi$ in the definitions of the recursive functions (8) and (10) as a global parameter so that we can change the order of processing the jobs:
$$\hat{T}(\alpha, k) = \begin{cases} [p_{\pi(1),1}, T_{max}], & \alpha = 1,\ k = 1; \\ \hat{T}(1,k-1) + [p_{\pi(1),k}, p_{\pi(1),k}], & \alpha = 1,\ 1 < k \le m; \\ \hat{T}(\alpha-1,1) + [p_{\pi(\alpha),1}, p_{\pi(\alpha),1}], & 1 < \alpha \le n,\ k = 1; \\ \max\{\hat{T}(\alpha-1,k), \hat{T}(\alpha,k-1)\} + [p_{\pi(\alpha),k}, p_{\pi(\alpha),k}], & 1 < \alpha \le n,\ 1 < k \le m. \end{cases}$$
$$\hat{C}(\alpha, k) = \begin{cases} p_{\pi(1),1}, & \alpha = 1,\ k = 1; \\ \hat{C}(1,k-1) + p_{\pi(1),k}, & \alpha = 1,\ 1 < k \le m; \\ \hat{C}(\alpha-1,1) + p_{\pi(\alpha),1}, & 1 < \alpha \le n,\ k = 1; \\ \max\{\hat{C}(\alpha-1,k), \hat{C}(\alpha,k-1)\} + p_{\pi(\alpha),k}, & 1 < \alpha \le n,\ 1 < k \le m. \end{cases}$$
In the function $\hat{C}(\alpha, k)$, the arguments are $\alpha$, $\alpha = 1, 2, \dots, n$, the number of the position in the schedule (permutation) $\pi$, and $k \in M$, the number of the machine; the processing times $p_{j,k}$ and the permutation $\pi$ are the parameters of the function.
Example 3. 
Here we calculate the completion times of the jobs at positions 1 and 2 on the machines 1 and 2. It should be noted that in this case the first argument of the function $\hat{C}$ is the number of the position in the schedule $\pi$.
$$\hat{C}(1,1) = p_{\pi(1),1};$$
$$\hat{C}(1,2) = \hat{C}(1,1) + p_{\pi(1),2} = p_{\pi(1),1} + p_{\pi(1),2};$$
$$\hat{C}(2,1) = \hat{C}(1,1) + p_{\pi(2),1} = p_{\pi(1),1} + p_{\pi(2),1};$$
$$\hat{C}(2,2) = \max\{\hat{C}(1,2), \hat{C}(2,1)\} + p_{\pi(2),2} = \max\{p_{\pi(1),1} + p_{\pi(1),2},\ p_{\pi(1),1} + p_{\pi(2),1}\} + p_{\pi(2),2}.$$
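The permutation-aware scalar function $\hat{C}$ only re-indexes the processing times through $\pi$; computing the makespan $\hat{C}(n, m)$ for a given permutation can therefore be sketched as follows (an illustration; names are ours):

```python
def makespan(p, perm):
    """Evaluate the completion time of the last scheduled job on the
    last machine for the job order `perm` (1-based job numbers),
    following the permutation-aware recursion for C-hat."""
    m = len(p[0])
    prev = [0] * m                  # completion times at the previous position
    for job in perm:
        t = 0
        row = []
        for k in range(m):
            # max of "previous job on this machine" and "this job on
            # the previous machine", plus the processing time
            t = max(prev[k], t) + p[job - 1][k]
            row.append(t)
        prev = row
    return prev[-1]
```

Since only the previous position's row is needed, the full $n \times m$ matrix never has to be stored.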
For simplicity of the description, we have discussed the approach for the PFSP. It can be noted that one can extend this procedure to the FSP with possibly different job sequences on the machines.

4. Implementation of the Branch and Bound Method for the PFSP

It should be noted that the PFSP was one of the first combinatorial optimization problems for which the branch and bound (B&B) method was applied [6] soon after its development [12]. A large number of works were devoted to the application of the B&B method for solving the PFSP. The papers [10,13] review the most important works on B&B algorithms for the PFSP. Since the branch and bound method follows very organically from the recursive representation of the PFSP, we consider it subsequently.
Evaluation of Lower Bound LB1. Taking into account the permutation $\pi$ of jobs, for a certain pair $(\alpha, k)$, the lower bound $LB$ can be determined as follows:
$$LB(\alpha, k) = \hat{C}(\alpha, k) + \sum_{i=k+1}^{m} p_{\pi(\alpha), i} + \sum_{i=\alpha+1}^{n} p_{\pi(i), m}. \qquad (16)$$
Remark 1. 
Calculating the value of the function C ^ ( α , k ) from the value of the function C ^ ( α 1 , k ) is simpler than calculating L B ( α , k ) . Therefore, it makes sense to calculate the lower bound only for the pair ( α , m ) . In [14], the effectiveness of the comparison with the lower bound on the last machine was also justified.
In this case, it is possible to move from formula (16) to the formula for calculating the bound on the last machine:
$$LB(\alpha, m) = \hat{C}(\alpha, m) + \sum_{i=\alpha+1}^{n} p_{\pi(i), m}. \qquad (17)$$
Next, let us formulate the exact meaning of the bound $LB(\alpha, k)$. In the described case, it means that with a fixed order of the first jobs $1, 2, \dots, \alpha$, the total completion time of all jobs cannot be less than $LB(\alpha, k)$ for any order of the remaining jobs $\alpha+1, \alpha+2, \dots, n$.
Let $\Pi'$ be the set of permutations of $n$ jobs for which the value $\hat{C}(n, m)$ has already been calculated. Then the lower bound $LB1$ for the considered job sequences is
$$LB1 = \min_{\pi \in \Pi'} \hat{C}(n, m).$$
Thus, if the inequality
$$LB(\alpha, k) \ge LB1$$
is satisfied for the current pair $(\alpha, k)$, then the sequence of jobs $1, 2, \dots, \alpha-1, \alpha$ is excluded from consideration as not promising, and the corresponding branch of permutations is cut off. The next one will be one of the sequences $1, 2, \dots, \alpha-1, \pi'(1)$ with
$$\pi' \in P(\alpha, \alpha+1, \dots, n) \quad \text{and} \quad \pi'(1) \ne \pi(\alpha),$$
where $P(\alpha, \alpha+1, \dots, n)$ is the set of all permutations of the $n - \alpha + 1$ remaining jobs.
Branching. First of all, the branching must be consistent with the traversal defined by the recursive function. If
$$LB(\alpha,k) \ge LB1,$$
it is necessary to change the order of the jobs. However, the order should be changed in such a way as to minimize the loss of information when calculating the functions $\hat{C}(1,1), \hat{C}(1,2), \dots, \hat{C}(\alpha,m)$. The literature describes many permutation algorithms with different characteristics. Let $P(1,\dots,s) = \{\pi_1, \pi_2, \dots, \pi_{s!}\}$, and let $x P(\pi)$ denote prepending the element $x$ to each permutation of the set, i.e., $x P(\pi) = \{x\pi_1, x\pi_2, \dots, x\pi_{s!}\}$. The algorithm for generating permutations should then produce them in a sequence that satisfies the following recursive property:
$$P(j_1, j_2, \dots, j_s) = \bigcup_{i=1}^{s} j_i \, P(j_1, j_2, \dots, j_{i-1}, j_{i+1}, \dots, j_s). \qquad (18)$$
This property, after calculating the function $C(j,k)$, allows one to examine all permutations of the "tail" of the queue of jobs being scheduled, $P(j+1, j+2, \dots, n)$, and only then backtrack.
Example 4.
Figure 6 shows an example of a complete permutation tree that satisfies Property (18) for $n = 4$. The root of the tree is an auxiliary vertex, and each permutation corresponds to a path from the root. A vertical edge indicates the next level of recursion; it connects an initial sequence to its tail. The tree is traversed from top to bottom and from left to right. In this example, the permutations are generated in the following order: (1,2,3,4), (1,2,4,3), (1,3,2,4), (1,3,4,2), ..., (4,3,1,2), (4,3,2,1). There are 24 permutations in total.
This algorithm can be implemented through recursive copying of the tail of the permutation. Copying is a costly operation, but the algorithm can be modified so that the permutation $\pi$ exists in a single instance as a global variable. This is achieved by swapping two jobs forward and swapping them back when returning along the search tree. A description of the B&B algorithm is given in the Appendix.
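A minimal Python sketch of this swap-based scheme (the in-place list plays the role of the global vector $\pi$; the name is illustrative):

```python
def permutations_18(pi, a=0):
    """Yield the permutations of pi in the order required by Property (18).

    The prefix pi[:a] is fixed; the tail is permuted recursively by swapping
    pi[a] with each later element and swapping back on return (backtracking),
    so a single list instance is reused throughout.
    """
    n = len(pi)
    if a == n:
        yield tuple(pi)
        return
    yield from permutations_18(pi, a + 1)        # tail in its current order
    for i in range(a + 1, n):
        pi[a], pi[i] = pi[i], pi[a]              # forward swap
        yield from permutations_18(pi, a + 1)
        pi[a], pi[i] = pi[i], pi[a]              # restore on backtrack
```

For $n = 3$ this yields (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,2,1), (3,1,2), the order of Table 3; cutting a branch of the B&B tree then amounts to returning early instead of descending.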

5. "And" Function

The flow shop problem discussed above is more a subject of theoretical research and has limited applications in practice. To bring it closer to practice, various additional elements can be introduced, first of all various types of constraints. In real production, the main machine chain is served by various preparatory operations, for example those associated with delivery from the warehouse to the production line, preparatory steps, technological issues, testing, etc.
Next, we expand the PFSP by introducing the $and$ function. The well-known construction shown in Figure 7a denotes a precedence relation in which the execution of a job on machine 3 can only begin after the completion of the jobs on machines 1 and 2. Using the $and$ function, it looks as shown in Figure 7b. In what follows, we interpret the $and$ function as a machine with zero processing time for any job. Thus, the chain precedence graph of the PFSP is expanded into a tree precedence graph. The PFSP is thereby transformed towards an assembly line (AL) job, but without a conveyor belt and workstations. This problem can be called the Assembly Permutation Flow Shop Problem (APFSP). The absence of a conveyor belt removes the "cycle time" limitation, and the absence of workstations removes the balancing problem, i.e., the optimal distribution of the jobs among the stations subject to the cycle time.
Despite the absence of assembly line attributes, this model can have a wide application.
The function $and$ in the interval representation $T_{and}: J \times M \to I$ looks as follows:
$$T_{and}(j,k) = \max\{T(j, pred_1(k)),\, T(j, pred_2(k))\},$$
or
$$T_{and}(j,k) = \big[\max\{\underline{T}(j, pred_1(k)),\, \underline{T}(j, pred_2(k))\},\ \max\{\overline{T}(j, pred_1(k)),\, \overline{T}(j, pred_2(k))\}\big],$$
or, in a scalar expression $C_{and}: J \times M \to \mathbb{N}$,
$$C_{and}(j,k) = \max\{C(j, pred_1(k)),\, C(j, pred_2(k))\}. \qquad (19)$$
Accordingly, we get the function $\hat{C}_{and}: \hat{J} \times M \to \mathbb{N}$, where $\hat{J}$ is the set of job position numbers and $\alpha \in \hat{J}$:
$$\hat{C}_{and}(\alpha,k) = \max\{\hat{C}(\alpha, pred_1(k)),\, \hat{C}(\alpha, pred_2(k))\}.$$
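In code, the interval and scalar forms of the $and$ function are just componentwise maxima of the two predecessor values (a sketch; intervals are represented as plain (low, high) pairs):

```python
def and_interval(t1, t2):
    """T_and for two predecessor completion intervals [low, high]:
    the componentwise maximum of the interval endpoints."""
    return (max(t1[0], t2[0]), max(t1[1], t2[1]))

def and_scalar(c1, c2):
    """C_and for two scalar predecessor completion times."""
    return max(c1, c2)
```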
Including the $and$ function in the PFSP causes the machines to process the jobs according to the precedence tree. To fix a specific order of traversal of the precedence graph, we assume that the function $pred_1$ is always the first one in formula (19) and determines the upper arc in Figure 7b. Adding the $and$ function invalidates the assumption of Section 3 that "a job cannot be processed by more than one machine at a time", since machines 1 and 2 in Figure 7b can execute the same job simultaneously.
It can be proven that when the function $and$ is added to the PFSP (i.e., $C(j,k) = C_{and}(j,k)$ and $\hat{C}(\alpha,k) = \hat{C}_{and}(\alpha,k)$), Theorem 3 remains valid, and the function $C(j,k)$ supplemented by $C_{and}(j,k)$ still implements a greedy algorithm for a fixed sequence of jobs. Figure 8a shows an example of a precedence graph between the machines, Figure 8b the corresponding precedence graph between machines and jobs (with a fixed order of jobs), and Figure 8c the computation (superposition) graph of the recursive function for a certain sequence of 4 jobs; in the latter, an arc denotes the transfer of the value $C(j,k)$. The graph in Figure 8b is acyclic. In this case, it is convenient to traverse the graph job by job, i.e., the tree of the 1st job, the tree of the 2nd job, etc., in accordance with the job permutation algorithm. The tree of each job is traversed as indicated in the figure by the dotted line.

6. Implementation of the Branch and Bound Method for the APFSP

When implementing any method for the APFSP, it is necessary to switch to a precedence graph representation with typed vertices. Let us define the set $\{sop, op, and\}$ of vertex types:
  • $sop$ – a starting vertex of the graph (machine);
  • $op$ – an intermediate or final vertex of the graph (machine);
  • $and$ – a vertex corresponding to the $and$ function.
Let us define the function $Type: M \to \{sop, op, and\}$ that gives the type of a vertex. It is also necessary to define explicitly the precedence functions $pred, pred_1, pred_2: M \to M$ for the vertices:
  • $pred$ is defined for vertices of type $op$: $k_1 = pred(k_2)$ means that the vertex $k_1$ precedes the vertex $k_2$;
  • $pred_1$ and $pred_2$ are defined for vertices of type $and$: $k_1 = pred_1(k_3)$ means that the vertex $k_1$ precedes the vertex $k_3$ along the first incoming edge, and $k_2 = pred_2(k_3)$ means that the vertex $k_2$ precedes the vertex $k_3$ along the second incoming edge.
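A possible Python encoding of such a typed graph (the dictionary names are illustrative, not from the paper; the graph is consistent with the five-machine example of Section 6, with chains 1→2→4 and 3→4, vertex 4 of type $and$, and final machine 5):

```python
# Vertex types: 'sop' (start), 'op' (intermediate/final), 'and'.
TYPE  = {1: 'sop', 2: 'op', 3: 'sop', 4: 'and', 5: 'op'}
PRED  = {2: 1, 5: 4}   # pred, defined for 'op' vertices
PRED1 = {4: 2}         # pred1, first incoming edge of an 'and' vertex
PRED2 = {4: 3}         # pred2, second incoming edge of an 'and' vertex

def completion(k, prev_row, p_row):
    """C-hat of the current job at vertex k, by recursion over the graph
    (the essence of Algorithm 2 in the Appendix).

    prev_row -- completion times of the previous job at every vertex
                (all zeros for the first job)
    p_row    -- processing times of the current job (0 on 'and' vertices)
    """
    t = TYPE[k]
    if t == 'and':
        return max(completion(PRED1[k], prev_row, p_row),
                   completion(PRED2[k], prev_row, p_row))
    # 'op' waits for its predecessor; 'sop' has none
    ready = completion(PRED[k], prev_row, p_row) if t == 'op' else 0
    return max(ready, prev_row[k]) + p_row[k]
```

For a first job with times {1: 3, 2: 2, 3: 4, 4: 0, 5: 2}, the two branches meet at the $and$ vertex at time max(5, 4) = 5 and the job leaves machine 5 at time 7.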
The main implementation issues are the choice of the techniques for calculating the lower bounds (LB) and of the branching algorithms. Consider the calculation of the lower bound LB1 by traversing the jobs, that is, similarly to Figure 5 but with a tree traversal. This means that formulas (10) and (19) are used.
Evaluation of the lower bound LB1. The analysis of formula (10) shows that when calculating the function $C(n,m)$, the operations $(j,k)$ are traversed in the order indicated in Figure 5a. Consider the example in Figure 8a. If the $and$ function is present, the job processing tree is traversed as shown in the figure. Naturally, the value of $LB(\alpha,k)$ is calculated after $\hat{C}(\alpha,k)$. The values of LB for the job $\alpha$ ($1 < \alpha < n$), taking into account that $p_{\alpha,4} = 0$, can be calculated as follows:
$$LB(\alpha,1) = \hat{C}(\alpha,1) + p_{\alpha,2} + p_{\alpha,4} + p_{\alpha,5} + p_{\alpha+1,5} + \dots + p_{4,5};$$
$$LB(\alpha,2) = \hat{C}(\alpha,2) + p_{\alpha,4} + p_{\alpha,5} + p_{\alpha+1,5} + \dots + p_{4,5};$$
$$LB(\alpha,3) = \hat{C}(\alpha,3) + p_{\alpha,4} + p_{\alpha,5} + p_{\alpha+1,5} + \dots + p_{4,5};$$
$$LB(\alpha,4) = \hat{C}(\alpha,4) + p_{\alpha,5} + p_{\alpha+1,5} + \dots + p_{4,5};$$
$$LB(\alpha,5) = \hat{C}(\alpha,5) + p_{\alpha+1,5} + \dots + p_{4,5}.$$
From this example, it is clear that $LB(\alpha,k)$ is calculated using the same tree traversal as $\hat{C}(\alpha,k)$. In this case, Equality (16) for $LB(\alpha,k)$ is transformed into
$$LB(\alpha,k) = \hat{C}(\alpha,k) + \sum_{i \in U_{k,m}} p_{\pi(\alpha),i} + \sum_{i=\alpha+1}^{n} p_{\pi(i),m},$$
where $U_{k,m}$ is the set of vertices belonging to the path from $k$ to $m$ in the machine precedence graph. The uniqueness of this path follows from the fact that the machine precedence graph is a tree with root $m$ (see, for example, Figure 8a,b). Taking into account Remark 1, expression (17) can also be used.
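The sum over $U_{k,m}$ can be computed by following successor links toward the root $m$ (a sketch; the successor table encodes the same five-machine example, where 1→2→4→5 and 3→4→5, and is illustrative):

```python
SUCC = {1: 2, 2: 4, 3: 4, 4: 5}   # each non-root machine has one successor

def path_tail_sum(k, m, p_job):
    """Sum of the job's processing times over U_{k,m}: the vertices on the
    unique path from k to the root m, excluding k itself.
    'and' vertices carry zero processing time."""
    total = 0
    while k != m:
        k = SUCC[k]
        total += p_job[k]
    return total
```

For the job times {1: 3, 2: 2, 3: 4, 4: 0, 5: 2}, starting from machine 1 the sum collects $p_2 + p_4 + p_5 = 4$, matching the terms in the expression for $LB(\alpha,1)$ above.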
The Appendix presents the three algorithms used for the solution of the APFSP.
The described algorithms work both for the PFSP and for the APFSP; the only difference is in the definition of the $pred$ function. For the PFSP, $pred(k) = k-1$; for the APFSP, the precedence functions are determined from the precedence graph or matrix. It is therefore obvious that for the APFSP, the complexity of the B&B algorithm does not increase. An important feature should be noted: when the last vertex of the precedence graph is the $and$ function, the B&B algorithm still executes correctly, but its "predictive power" is reduced to zero, since the zero processing times on that vertex contribute nothing to the last-machine bound.
Recursive functions also allow one to calculate objective functions different from the makespan. This can be done by replacing the function $\hat{C}(\alpha,k)$ by
$$C^{*}(\alpha,k) = \hat{C}(\alpha,k);\ \langle\text{calculate the value of the objective function}\rangle.$$
In this case, the calculation of the optimization criterion is concentrated in the body of the function $C^{*}(\alpha,k)$ and is invariant to the details of the optimization method.

7. Evaluating the Effectiveness of the Algorithm

The existing publications on the B&B method in scheduling theory use tests that are characterized by the processing times on specific machines. To assess the effectiveness of the B&B method without relying on raw running-time indicators, it is advisable to determine how many elementary calculations were avoided because some sets of job permutations were discarded without being checked. There are two possible approaches here:
  • consider it elementary to calculate the completion time of the job’s last technological operation (approach L);
  • consider it elementary to calculate the completion time of each technological operation of each job (approach A).
In both cases, it is necessary to determine the maximum possible number of such calculations when solving a problem for n jobs and m machines, and then to compare it with the number of calculations actually performed.
In approach L, the maximum number of elementary computations is determined by the permutation generation algorithm (see the Appendix). At each step, this algorithm either adds a new job to the already fixed initial part of the vector $\pi$, or replaces the last job of this fixed part by the next one. In both cases, the completion time of the last technological operation of the last job in the fixed part of $\pi$ is calculated. The completion times of all technological operations of all previous jobs are already known and need not be recalculated. Thus, an "elementary computation" is the calculation of one row ($m$ elements) of the job completion times matrix $T_{op}$, and the total number of such computations is equal to the number of distinct initial parts of the vector $\pi$ generated by the algorithm.
Consider, for example, the generation of the permutations and the computation of the completion times for three jobs ($n = 3$). If branches of the search are never rejected by the lower bound LB, the sequence of actions performed by the algorithm is described in Table 3.
As a result of the algorithm's operation, 15 initial parts of the vector $\pi$ were generated (including 6 complete permutations of the three jobs), and 15 elementary computations of $T_{op}$ rows were performed. For a size $h$ of the initial part of the vector $\pi$ ($1 \le h \le n$), the total number of generated initial parts equals the number of arrangements of $h$ out of $n$ elements:
$$A_n^h = \frac{n!}{(n-h)!}.$$
The total number of initial parts of the vector $\pi$ of all possible sizes from 1 to $n$ is
$$S_{max}(n) = \sum_{h=1}^{n} A_n^h = \sum_{h=1}^{n} \frac{n!}{(n-h)!}.$$
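The count $S_{max}(n)$ and the efficiency $E_L$ are easy to tabulate (a sketch; names are illustrative):

```python
from math import factorial

def s_max(n):
    """Maximum number of elementary computations in approach L:
    the number of generated prefixes of sizes 1..n."""
    return sum(factorial(n) // factorial(n - h) for h in range(1, n + 1))

def efficiency_l(n, s_l):
    """E_L: the fraction of elementary computations avoided by the B&B."""
    return (s_max(n) - s_l) / s_max(n)
```

For $n = 3$, s_max(3) = 3 + 6 + 6 = 15, matching the count of Table 3; in approach A the maximum is simply $m \cdot S_{max}(n)$.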
When the B&B method is used, the algorithm will perform all $S_{max}$ elementary calculations only in rare cases; nevertheless, this estimate is attainable. Consider the degenerate case in which all rows of the matrix $p_{j,*}$ are identical. Then the LB value calculated by formulas (16) or (17) is the same for any values of the arguments, none of the branches is discarded, and the number of elementary calculations performed by the algorithm is exactly $S_{max}$. In approach A, the maximum possible number of elementary calculations is $m \cdot S_{max}(n)$, since every row of the job completion times matrix contains $m$ elements. In the optimized algorithm, the number of actual calculations will be smaller, so the effectiveness of solving a problem can be assessed by the proportion of calculations not performed. If $S_L(i)$ and $S_A(i)$ are the numbers of calculations when solving problem number $i$ for $n$ jobs and $m$ machines using approaches L and A, respectively, then the efficiency of the solutions can be estimated by the following formulas:
$$E_L(n,i) = \frac{S_{max}(n) - S_L(i)}{S_{max}(n)},$$
$$E_A(n,m,i) = \frac{m\,S_{max}(n) - S_A(i)}{m\,S_{max}(n)}.$$
A zero value of these indicators means that the method did not reduce the number of calculations at all; the theoretically unattainable value of 1 (or 100%) would mean that the method eliminated the calculations entirely.
The authors performed 10,000 tests for problems with 5–11 jobs on random acyclic graphs with random job processing times. The problem generation parameters in these experiments were as follows:
  • total number of nodes in the acyclic graph (technological operations): 15;
  • number of initial operations: a random value in [1,3];
  • number of "and" operations: a random value in [1,4];
  • job processing times $p_{j,k}$: random values in [1,10].
All random values are uniformly distributed integers. For each number of jobs, a histogram of the distribution of the experiments over the ranges of the obtained efficiency $E_L$ was constructed. Thus, 7 histograms were obtained, presented in Figure 9.
The histograms show how often a particular $E_L$ efficiency was achieved during the random experiments. The horizontal axis shows the ten-percent efficiency ranges (from "0–10" to "91–100"), and the vertical axis shows the number of experiments out of the 10,000 performed in which the corresponding efficiency value was achieved. The figure shows that as the number of jobs increases, the efficiency of the algorithm also increases. The histogram for five jobs has its maximum around 50%: most of the experiments with five jobs performed approximately half of the maximum possible number of elementary calculations, although a significant number (about 800) of experiments showed an efficiency below 10%. As the number of jobs increases, the histogram maximum moves into the high-efficiency region. For eleven jobs, more than 7,000 out of the 10,000 experiments showed an efficiency in the range 91–100%, i.e., less than 10% of the elementary calculations that a full enumeration of all permutations would require were actually performed.

8. Conclusion

This article used the simplest form of a lower bound; a detailed overview of lower bounds for the B&B method for the PFSP is given in [9]. The main result of this study is the organic transition from the PFSP to the more general and practically relevant APFSP by means of recursive functions.
The article dealt with tree precedence graphs, but it is obvious that the described method is applicable also to other acyclic graphs.
The article did not address the issue of computational efficiency. Recursive functions are not efficient when computed directly; however, standard transformations, for example reusing previously calculated values (memoization), converting recursion to iteration, or avoiding the call stack, can increase the efficiency significantly.
Further research will be focused on the development of:
  • heuristic and metaheuristic algorithms that can be embedded in the given scheme, for example, genetic algorithms or the NEH heuristic, etc.;
  • new recursive functions that will bring the model closer to reality; there is such a prospect, but its presentation does not fit into the framework of this article.

Appendix

This appendix contains the algorithms for finding an optimal permutation.
Algorithm 1 for finding an optimal permutation (with definitions of the variables).
Begin Algorithm 1. (Initializing and calling PermutationB&B)
  π – the job permutation vector, a global variable
  P – the matrix of job processing times on each machine
  T_op ← 0                'matrix of the job completion times
  π ← (1, 2, ..., n)      'the initial order of the jobs
  LB1 ← the maximum value
  PermutationB&B(1)       'call the permutation set generator
End Algorithm 1.
Algorithm 2 for calculating the function Ĉ(α,k) for the three types of graph vertices.
Begin Algorithm 2. (Calculation of Ĉ(α,k))
  if Type(k) = op then
    if α > 1 then                                   'not the first job
      C ← max{Ĉ(α−1,k), Ĉ(α,pred(k))} + P(π(α),k)
    else                                            'the first job
      C ← Ĉ(α,pred(k)) + P(π(α),k)
    end if
  else if Type(k) = sop then
    if α > 1 then
      C ← Ĉ(α−1,k) + P(π(α),k)
    else
      C ← P(π(α),k)
    end if
  else if Type(k) = and then
    C ← max{Ĉ(α,pred1(k)), Ĉ(α,pred2(k))}
  end if
  T_op(α,k) ← C
End Algorithm 2.
Algorithm 3 generates the set of permutations satisfying Property (18) and includes the bounding step of the B&B method.
Begin Algorithm 3. (Recursive B&B algorithm for the APFSP: PermutationB&B(α))
  'α – the sequential number of the job position in the vector π
  if α > 1 then
    LB ← Ĉ(α−1,m) + Σ(i=α..n) p(π(i),m)
    if LB > LB1 then exit
  end if
  if α = n+1 then
    LB1 ← LB
    exit
  end if
  PermutationB&B(α+1)
  for i = α+1 to n
    swap π(α) and π(i)
    PermutationB&B(α+1)
    swap π(α) and π(i)        'return the elements to their previous positions
  next i
End Algorithm 3.
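For the plain PFSP (where pred(k) = k − 1), Algorithms 1–3 can be sketched together in Python. This is a compact illustration, not the paper's implementation: the names mirror the pseudocode, the bound is the last-machine bound (17), and Ĉ is recomputed from scratch for brevity instead of reusing rows of T_op as the pseudocode does.

```python
def pfsp_branch_and_bound(p):
    """Minimal B&B for the permutation flow shop on a chain of machines.

    p[j][k] is the processing time of job j on machine k.
    Returns (best makespan, a permutation achieving it).
    """
    n, m = len(p), len(p[0])
    pi = list(range(n))                 # single shared permutation vector
    best = [float('inf'), None]         # LB1 and the incumbent permutation

    def completions(prefix):
        C = [0] * m
        for j in prefix:
            row = [C[0] + p[j][0]]
            for k in range(1, m):
                row.append(max(row[-1], C[k]) + p[j][k])
            C = row
        return C

    def permutation_bb(a):              # positions 0..a-1 of pi are fixed
        if a > 0:
            # LB = C-hat on the last machine + remaining work there
            lb = completions(pi[:a])[-1] + sum(p[j][-1] for j in pi[a:])
            if lb > best[0]:            # strict '>' as in Algorithm 3
                return                  # cut the branch
        if a == n:
            best[0], best[1] = lb, tuple(pi)
            return
        permutation_bb(a + 1)
        for i in range(a + 1, n):
            pi[a], pi[i] = pi[i], pi[a]
            permutation_bb(a + 1)
            pi[a], pi[i] = pi[i], pi[a]

    permutation_bb(0)
    return best[0], best[1]
```

For the processing times of Table 1, the method returns the optimal makespan 15: machine 1 is busy until time 12 for any order, and the cheapest last job adds at least 1 + 2 units, while the order (1, 2, 3, 4) attains 15.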

References

  1. Lai, T. A note on the heuristics flowshop scheduling. Journal of the Operational Research Society 1994, 44, 648–652.
  2. Henneberg, M.; Neufeld, J. A constructive algorithm and a simulated annealing approach for solving flow shop problems with missing operations. International Journal of Production Research 2016, 54(12), 3534–3550.
  3. Ruiz, R.; Maroto, C.; Alcaraz, J. Two new robust genetic algorithms for the flowshop scheduling problem. Omega (The International Journal of Management Science) 2006, 34(5), 461–476.
  4. Bargaoui, H.; Driss, O. Multi-agent model based on tabu search for the permutation flow shop scheduling problem. Distributed Computing and Artificial Intelligence 2014, 3-8, 27.
  5. Alekseeva, E.; Mezmaz, M.; Tuyttens, D.; Melab, N. Parallel multi-core hyper-heuristic GRASP to solve permutation flow shop problem. Concurrency and Computation: Practice and Experience 2017, 29(9), e3835.
  6. Ignall, E.; Schrage, L.E. Application of the branch and bound technique to some flow shop problems. Operations Research 1965, 13, 400–412.
  7. Kalmykov, S.A.; Shokin, Yu.I.; Yuldashev, Z.Kh. Interval Analysis Methods; Nauka: Novosibirsk, USSR, 1986. (in Russian)
  8. Lazarev, A.A.; Gafarov, E.R. Scheduling Theory. Problems and Algorithms; MSU: Moscow, Russian Federation, 2011. (in Russian)
  9. Ladhari, T.; Haouari, M. A computational study of the permutation flow shop problem based on a tight lower bound. Computers & Operations Research 2005, 32, 1831–1847.
  10. Gmys, J.; Mezmaz, M.; Melab, N.; Tuyttens, D. A computationally efficient Branch-and-Bound algorithm for the permutation flow-shop scheduling problem. European Journal of Operational Research 2020, 284(3), 814–833.
  11. Belabid, J.; Aqil, S.; Allali, K. Solving permutation flow shop scheduling problem with sequence-independent setup time. Journal of Applied Mathematics 2020, 1–11.
  12. Land, A.H.; Doig, A.G. An automatic method of solving discrete programming problems. Econometrica 1960, 28, 497–520.
  13. Potts, C.N.; Strusevich, V.A. Fifty years of scheduling: a survey of milestones. Journal of the Operational Research Society 2009, 60, 41–68.
  14. Brooks, G.H.; White, C.R. An algorithm for finding optimal or near optimal solutions to the production scheduling problem. Journal of Industrial Engineering 1965, 16(1), 34–40.
Figure 1. Example of a permutation flow shop problem graph
Figure 2. Examples of diagrams of feasible schedules
Figure 3. Cases of interval intersections
Figure 4. The case of interval intersection when T₀(j,k) = [0, T_max]
Figure 5. Two traversal options defined by two types of a recursive function
Figure 6. A complete permutation tree satisfying Property (18)
Figure 7. Function and
Figure 8. An example of a PFSP graph with the and function
Figure 9. Plots of the numerical experiments
Table 1. Table of the job processing times.

Job \ Machine   1   2   3
1               3   2   2
2               3   1   3
3               3   2   2
4               3   1   2

Table 2. Table of the job completion time intervals.

Job \ Machine   1        2        3
1               [3,4]    [5,7]    [7,9]
2               [6,7]    [7,9]    [10,12]
3               [9,10]   [11,12]  [13,14]
4               [12,13]  [13,14]  [15,16]
Table 3. Sequence of actions of the algorithm for n = 3 without the rejection of branches.

Recursion  Algorithm  π        π fixed  π        Computed
level      argument            part     "tail"   T_op row
0          1          [1,2,3]  []       [1,2,3]  –
1          2          [1,2,3]  [1]      [2,3]    1
2          3          [1,2,3]  [1,2]    [3]      2
3          4          [1,2,3]  [1,2,3]  []       3
2          3          [1,3,2]  [1,3]    [2]      2
3          4          [1,3,2]  [1,3,2]  []       3
1          2          [2,1,3]  [2]      [1,3]    1
2          3          [2,1,3]  [2,1]    [3]      2
3          4          [2,1,3]  [2,1,3]  []       3
2          3          [2,3,1]  [2,3]    [1]      2
3          4          [2,3,1]  [2,3,1]  []       3
1          2          [3,2,1]  [3]      [2,1]    1
2          3          [3,2,1]  [3,2]    [1]      2
3          4          [3,2,1]  [3,2,1]  []       3
2          3          [3,1,2]  [3,1]    [2]      2
3          4          [3,1,2]  [3,1,2]  []       3